## 1. System Parameters

The setting of system parameters depends on the system requirements, such as the system bandwidth, the coverage radius, and the number of supported users. The WiMAX Forum proposes profiles for setting system specifications [1]. For the WiMAX BS we designed, the system profile is defined in Table 1. We set only the profile for the UL_PUSC zone as an example; that is, we implement the BS receiver and MS transmitter to illustrate the design process.

## 2. Platform Selection

Traditionally, the base station (BS) market has been dominated by proprietary hardware platforms composed of DSPs and FPGAs, or CPUs and DSPs. Although ASIC solutions exist for BS applications, most of them are used in picocells or microcells. For macrocell applications, processing capability is always a key issue to consider during BS platform selection. For WiMAX base station design, the traditional platforms can be classified as follows [2]:

DSP/FPGA-based platform. The CPU is used for control and system management; it can be a simple processor, such as an ARM or MIPS chip. A DSP array is responsible for complex calculation, and FPGAs provide acceleration. Currently, most BSs are based on this kind of platform. Its advantages are low power consumption, a good development ecosystem, and rich libraries and tools. For example, TI and ADI provide high-performance DSPs for wireless communication, and Xilinx and Altera have their own WiMAX solutions. Its disadvantages are limited scalability and flexibility.

ASIC-based platform. The solution based on ASICs is easy to develop and implement. However, cost must be considered, and the system is difficult to upgrade due to the purpose-built chip.

Other proprietary platforms. These generally come with high cost and limited flexibility and interoperability.

Processing capability, power consumption, and cost are the three main factors to consider in a BS design. From the traditional view, a general IT (information technology) platform, such as a blade server, is not suitable for wireless BS applications. One reason is processing capability: compared with DSPs or FPGAs, the general purpose processor (GPP) has lower performance for complex computation. The other main reason is power consumption. However, the situation has changed with the emergence of multicore processors. With a specialized architecture, specially designed instruction set, and optimized compiler, a multicore general purpose processor has powerful processing capability in many applications, including wireless. For example, the Cell Broadband Engine (BE) provides 256 GFlops of processing capability at 3.2 GHz [3], and the Integrated Performance Primitives (IPP) provided by Intel (multicore ready) achieve good performance for multimedia and data processing applications [4]. Considering the total power consumption of the system, the power consumed by the baseband is a small portion. Therefore, a general IT platform with multicore or multithread processors is a good candidate for an open wireless architecture (OWA), given its powerful processing capability, flexibility, scalability, and interoperability.

In this chapter, we select the Cell BE as the platform to design and implement the WiMAX BS (baseband).

## 3. System Structure and Functions

### 3.1. System Structure

The system structure of the proposed BS transceiver (baseband) is depicted in Fig. 1 [2].

Fig. 2 depicts the uplink subframe with the partially used subchannel (PUSC) subcarrier assignment scheme. The uplink subframe is shown in Fig. 2a. Each uplink transmitter is assigned several subchannels to transmit its burst in a time-division-multiplexing (TDM) manner. Note that the ranging subchannel is not considered in our design. The uplink supports 35 subchannels, where each transmission uses 48 data subcarriers as the minimal block of processing. A slot in the uplink is composed of three symbols and one subchannel; within each slot, there are 48 data subcarriers and 24 fixed-location pilot subcarriers. Multiple subchannels (slots) can be allocated to each user, with one subchannel being the minimum resource that can be allocated to a user. In the frequency domain, a subchannel is constructed from six uplink tiles; each tile has four successive active subcarriers and is modulated with a mix of data and pilots over three OFDMA symbols. The configuration of a tile is illustrated in Fig. 2b. Here, we consider only one user, and the user occupies the entire system bandwidth.
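
As a quick sanity check, the slot dimensioning described above can be reproduced with a few lines of arithmetic (all constants are taken from the text):

```python
# UL-PUSC dimensioning: a tile spans 4 subcarriers x 3 symbols, 4 of its
# 12 subcarriers are pilots, and a subchannel (one slot) is 6 tiles.
TILE_SUBCARRIERS = 4
TILE_SYMBOLS = 3
PILOTS_PER_TILE = 4
TILES_PER_SUBCHANNEL = 6

tile_size = TILE_SUBCARRIERS * TILE_SYMBOLS            # 12 subcarriers in a tile
data_per_tile = tile_size - PILOTS_PER_TILE            # 8 data subcarriers
data_per_slot = data_per_tile * TILES_PER_SUBCHANNEL   # 48 data subcarriers
pilots_per_slot = PILOTS_PER_TILE * TILES_PER_SUBCHANNEL  # 24 pilots

print(data_per_slot, pilots_per_slot)  # 48 24
```

This recovers the 48 data subcarriers and 24 pilot subcarriers per slot stated above.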

### 3.2. Signal Model

In the downlink shown in Fig. 1, after FEC (forward error correction) coding, modulation, zone permutation, OFDMA modulation and cyclic prefix (CP) insertion, the time-domain samples of an OFDM symbol can be obtained from the frequency-domain symbols as

$$x(n) = \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} X(k)\, e^{j2\pi kn/N}, \qquad n = -N_{CP},\dots,N-1 \tag{1}$$

where X(k) is the modulated data on the kth subcarrier of one OFDM symbol, N is the number of subcarriers and N_{CP} is the length of the cyclic prefix.
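
A minimal sketch of this step, assuming an orthonormal IFFT (numpy's `norm='ortho'` matches the 1/√N scaling of equation (1)); the sizes below are toy values, not the chapter's profile:

```python
import numpy as np

def ofdm_symbol(X, n_cp):
    """Map frequency-domain symbols X (length N) onto time-domain samples
    via an orthonormal IFFT and prepend a cyclic prefix of n_cp samples
    (a copy of the symbol tail), as in equation (1)."""
    x = np.fft.ifft(X, norm='ortho')
    return np.concatenate([x[-n_cp:], x])

# toy sizes for illustration only
N, n_cp = 8, 2
X = np.exp(1j * np.pi / 4) * np.ones(N)   # arbitrary frequency-domain data
s = ofdm_symbol(X, n_cp)
assert len(s) == N + n_cp
# removing the CP and applying the FFT restores X exactly
assert np.allclose(np.fft.fft(s[n_cp:], norm='ortho'), X)
```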

The impulse response of the multi-path channel can be approximately denoted as

$$h(n) = \sum_{l=0}^{L-1} h_l\, \delta(n-\tau_l) \tag{2}$$

where L is the total number of paths, and h_l and τ_l are the complex gain and the delay of the lth path, respectively.

Assuming perfect time and frequency synchronization, the model of the received signal at the BS after removal of the CP can be written as

$$y(n) = \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} H(k)\, X(k)\, e^{j2\pi kn/N} + z(n) \tag{3}$$

where H(k) is the channel frequency response at the kth subcarrier and z(n) is the additive white complex Gaussian noise (AWCGN).
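
The per-subcarrier relation Y(k) = H(k)X(k) implied by equation (3) can be verified numerically: as long as the CP is longer than the channel, linear convolution plus CP removal is equivalent to circular convolution. The channel taps below are hypothetical, and noise is omitted for clarity:

```python
import numpy as np

rng = np.random.default_rng(0)
N, n_cp = 64, 8
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=N)  # QPSK-like data
h = np.array([0.8, 0.0, 0.5, 0.3j])   # hypothetical multipath taps, L <= N_CP

x = np.fft.ifft(X, norm='ortho')
tx = np.concatenate([x[-n_cp:], x])   # CP insertion
rx = np.convolve(tx, h)[:len(tx)]     # channel (noise-free for clarity)
y = rx[n_cp:n_cp + N]                 # CP removal

Y = np.fft.fft(y, norm='ortho')
H = np.fft.fft(h, n=N)                # channel frequency response H(k)
assert np.allclose(Y, H * X)          # Y(k) = H(k) X(k) on every subcarrier
```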

### 3.3. Algorithm Selection

For the transmitter of the WiMAX PHY, the algorithm of each module, such as FEC coding, modulation and constellation mapping, is relatively mature. Thus, in this section, we focus on algorithm selection for the receiver blocks.

#### 3.3.1. Synchronization

Timing and frequency synchronization are two important tasks to be performed by the receiver. Through timing and frequency offset estimation and correction, the effects of ISI (inter-symbol interference) and ICI (inter-carrier interference) can be reduced.

In the presence of a symbol timing offset (STO) and a carrier frequency offset (CFO), equation (3) should be modified as follows:

$$y(n) = e^{j2\pi\varepsilon n/N}\,\frac{1}{\sqrt{N}}\sum_{k=0}^{N-1} H(k)\, X(k)\, e^{j2\pi k(n-\theta)/N} + z(n) \tag{4}$$

where θ is the symbol timing offset in samples and ε is the carrier frequency offset normalized to the subcarrier spacing.

A number of approaches to estimating timing and frequency offsets in OFDM systems have been presented in the literature. Some operate in the time domain [6] [7], while others use the cyclic prefix or the cyclostationarity of OFDM transmissions (e.g., the Van de Beek algorithm [8]) to gain information about the symbol timing and frequency offset. In the WiMAX standard, the preamble in OFDMA mode does not have a repeating pattern similar to that in OFDM mode, and only the uplink subframe is considered in our design. Therefore, in this chapter, the ML algorithm based on the CP [8] is chosen to achieve symbol timing and carrier frequency synchronization.

Through the algorithm introduced in [8], we can obtain the estimates of θ and ε:

$$\hat{\theta}_{ML} = \arg\max_{\theta}\left\{|\gamma(\theta)| - \rho\,\Phi(\theta)\right\}, \qquad \hat{\varepsilon}_{ML} = -\frac{1}{2\pi}\angle\gamma(\hat{\theta}_{ML}) \tag{5}$$

where

$$\gamma(\theta) = \sum_{n=\theta}^{\theta+N_{CP}-1} y(n)\, y^{*}(n+N), \qquad \Phi(\theta) = \frac{1}{2}\sum_{n=\theta}^{\theta+N_{CP}-1}\left(|y(n)|^2 + |y(n+N)|^2\right)$$

γ(θ) is the correlation between pairs of samples spaced *N* samples apart. The term ρ = SNR/(SNR+1) weights the energy term Φ(θ) against the correlation term, and ∠γ denotes the phase of γ.

Once the STO and CFO are estimated, the received time samples can be corrected as follows:

$$\hat{y}(n) = y(n+\hat{\theta}_{ML})\, e^{-j2\pi\hat{\varepsilon}_{ML}\, n/N} \tag{6}$$
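
A minimal sketch of the CP-based ML estimator of [8], under the assumptions of equations (4) and (5): a brute-force search over the timing hypothesis θ, with `rho` = SNR/(SNR+1) (→ 1 at high SNR). The delay and CFO values below are illustrative:

```python
import numpy as np

def cp_ml_sync(r, N, n_cp, rho):
    """CP-based ML symbol-timing and fractional-CFO estimator in the
    spirit of [8]: maximize |gamma(theta)| - rho * phi(theta)."""
    best, theta_hat = -np.inf, 0
    for theta in range(len(r) - N - n_cp + 1):
        head = r[theta:theta + n_cp]               # candidate CP
        tail = r[theta + N:theta + N + n_cp]       # samples N apart
        gamma = np.sum(head * np.conj(tail))
        phi = 0.5 * np.sum(np.abs(head)**2 + np.abs(tail)**2)
        if np.abs(gamma) - rho * phi > best:
            best = np.abs(gamma) - rho * phi
            theta_hat = theta
    head = r[theta_hat:theta_hat + n_cp]
    tail = r[theta_hat + N:theta_hat + N + n_cp]
    eps_hat = -np.angle(np.sum(head * np.conj(tail))) / (2 * np.pi)
    return theta_hat, eps_hat

# noise-free demonstration: one OFDM symbol delayed by 5 samples, CFO = 0.1
rng = np.random.default_rng(1)
N, n_cp, delay, eps = 64, 16, 5, 0.1
x = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
sym = np.concatenate([x[-n_cp:], x])               # symbol with CP
r = np.concatenate([np.zeros(delay), sym, np.zeros(32)])
r = r * np.exp(2j * np.pi * eps * np.arange(len(r)) / N)
theta_hat, eps_hat = cp_ml_sync(r, N, n_cp, rho=1.0)
```

In the noise-free case the search recovers the 5-sample delay exactly and the fractional CFO up to floating-point precision; note that the CP correlation can only resolve |ε| < 0.5 subcarrier spacings.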

#### 3.3.2. Channel Estimation

To remove the amplitude and phase distortion caused by the channel, the receiver must estimate the channel response.

Based on the uplink tile structure shown in Fig. 2b, pilot-aided channel estimation methods can be employed, which consist of algorithms to estimate the channel at the pilot frequencies and to interpolate the channel over the remaining subcarriers. The estimation of the channel at the pilot frequencies can be based on least squares (LS), minimum mean-square error (MMSE) or least mean squares (LMS). Though MMSE has been shown to perform much better than LS, it needs knowledge of the channel statistics and the operating SNR [9]. The interpolation of the channel can rely on linear interpolation, second-order interpolation, low-pass interpolation, cubic spline interpolation, or time-domain interpolation. Considering the tradeoff between implementation feasibility and system performance, we choose linear interpolation in time and frequency on a tile-by-tile basis for each subchannel.

When the data and pilot information has been assembled as shown in Fig. 2b, it is possible to calculate H_{11}, H_{14}, H_{31} and H_{34} using the equation:

$$\hat{H}_{m,k}(t) = \frac{Y_{m,k}(t)}{P_{m,k}(t)}, \qquad (m,k)\in\{(1,1),(1,4),(3,1),(3,4)\} \tag{7}$$

for the mth OFDMA symbol of the tth tile, where Y_{m,k}(t) is the received value and P_{m,k}(t) the known pilot symbol at that position.

We omit the index of the receive antenna here, since channel estimation for each receive antenna is performed independently. Subsequently, frequency-domain linear interpolation is performed to calculate the channel estimates at the two inner subcarriers using the following equations:

$$\hat{H}_{m,2} = \frac{2\hat{H}_{m,1} + \hat{H}_{m,4}}{3}, \qquad \hat{H}_{m,3} = \frac{\hat{H}_{m,1} + 2\hat{H}_{m,4}}{3} \tag{8}$$

where m ∈ {1, 3} indexes the two pilot-bearing OFDMA symbols of the tile.

Finally, time-domain linear interpolation is performed as follows:

$$\hat{H}_{2,k} = \frac{\hat{H}_{1,k} + \hat{H}_{3,k}}{2}, \qquad k = 1,\dots,4 \tag{9}$$

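The LS-plus-linear-interpolation procedure for one tile can be sketched as follows; the 3 × 4 tile layout and corner pilot positions follow Fig. 2b, with zero-based indices replacing the 1-based ones of the equations:

```python
import numpy as np

def estimate_tile(Y, P):
    """Channel estimation over one 3-symbol x 4-subcarrier uplink tile.
    Y: 3x4 received values; P: known pilot values (only the four corner
    positions of P are read). Returns the 3x4 channel estimate."""
    H = np.zeros((3, 4), dtype=complex)
    for m in (0, 2):                        # the two pilot-bearing symbols
        for k in (0, 3):                    # the two pilot subcarriers
            H[m, k] = Y[m, k] / P[m, k]     # LS estimate at a pilot
        # frequency-domain linear interpolation between the two pilots
        H[m, 1] = (2 * H[m, 0] + H[m, 3]) / 3
        H[m, 2] = (H[m, 0] + 2 * H[m, 3]) / 3
    H[1, :] = (H[0, :] + H[2, :]) / 2       # time-domain interpolation
    return H

# flat-channel check: every estimate should equal the true coefficient
H_true = 2 + 1j
H_est = estimate_tile(H_true * np.ones((3, 4)), np.ones((3, 4)))
assert np.allclose(H_est, H_true)
```

For a channel that is flat across the tile, interpolation is exact, which makes the flat-channel case a convenient unit test.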
When all of the channel estimates have been formed, the estimated values are passed to the space-frequency decoding module for data detection using the ML method.

#### 3.3.3. SFBC

A user transmitting with the transmit diversity configuration in the uplink shall use a modified uplink tile. The pilots in each tile shall be split between the two antennas, and the data subcarriers shall be encoded in pairs after constellation mapping, as depicted in Fig. 3. Because this coding is applied in the frequency domain (across OFDM subcarriers) rather than in the time domain (across OFDM symbols), we denote it space-frequency block coding (SFBC) [10].

Define X_1 and X_2 as a pair of constellation-mapped data symbols, where X_i is the symbol corresponding to the ith subcarrier of the pair. The signals transmitted over the two antennas on the subcarrier pair carrying X_1 and X_2 are:

$$\begin{bmatrix} X_1 & -X_2^{*} \\ X_2 & X_1^{*} \end{bmatrix} \tag{10}$$

where the rows correspond to the two adjacent subcarriers, the columns correspond to transmit antennas 1 and 2, and (·)* denotes complex conjugation.
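
A sketch of the encoding in equation (10), together with the standard Alamouti combining at the receiver, assuming the channel is flat across the paired subcarriers and perfectly known (the channel values below are hypothetical):

```python
import numpy as np

def sfbc_encode(X1, X2):
    """Equation (10): rows are the two adjacent subcarriers,
    columns are transmit antennas 1 and 2."""
    return np.array([[X1, -np.conj(X2)],
                     [X2,  np.conj(X1)]])

def sfbc_decode(Y1, Y2, H1, H2):
    """Alamouti combining for the encoding above. Y1, Y2 are the received
    values on the paired subcarriers; H1, H2 the per-antenna channel,
    assumed flat across the pair and perfectly known."""
    g = np.abs(H1)**2 + np.abs(H2)**2
    X1_hat = (np.conj(H1) * Y1 + H2 * np.conj(Y2)) / g
    X2_hat = (np.conj(H1) * Y2 - H2 * np.conj(Y1)) / g
    return X1_hat, X2_hat

# round trip through a hypothetical flat channel (no noise)
X1, X2 = 1 + 1j, -1 + 1j
H1, H2 = 0.7 - 0.2j, 0.4 + 0.9j
S = sfbc_encode(X1, X2)
Y1 = H1 * S[0, 0] + H2 * S[0, 1]     # received on subcarrier k
Y2 = H1 * S[1, 0] + H2 * S[1, 1]     # received on subcarrier k+1
X1_hat, X2_hat = sfbc_decode(Y1, Y2, H1, H2)
assert np.allclose([X1_hat, X2_hat], [X1, X2])
```

The combiner recovers both symbols exactly in the noise-free case, with the familiar |H1|² + |H2|² diversity gain appearing in the denominator.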

## 4. Implementation on Cell BE

### 4.1. Cell Processor

The Cell processor was initially designed as the engine of Sony's PlayStation 3. But as a powerful, all-purpose multiprocessor, Cell has much potential in other areas. A single-chip Cell processor contains one PowerPC Processor Element (PPE) and eight Synergistic Processor Elements (SPEs). The PPE is a general purpose 64-bit RISC core with 2-way hardware multithreading, used for the operating system and system control, while the 8 SPE cores are optimized for compute-intensive, single-precision, floating-point workloads. These units are interconnected with a coherent on-chip element interconnect bus (EIB). The system frequency of Cell is 3.2 GHz and its computation capability is 256 GFlops [3] [11].

### 4.2. Programming on Cell

The Cell processor is a heterogeneous multicore processor, and its programming model is novel; see [12] for details. In summary, programming on Cell involves two main points. One is programming the SPE, especially optimization on the SPU, since the SPE acts as a computation accelerator; it has a special chip architecture and instruction set to support such acceleration. The other is the communication between the PPE and SPEs, and among multiple SPEs. In this section, we focus on optimization on the SPE (SPU). The communication mechanism between the PPE and SPEs is covered in the introduction to the software framework design.

Optimization on Cell involves two aspects. One is processing speed, evaluated by the number of cycles. The other is local store consumption, since each SPU has only 256 KB of local store. We must balance these two factors during optimization: if computation capability is critical for a component while its buffers and code size are small, we can sacrifice some local store to achieve higher computation performance, and vice versa. In our case, for most components, the limited local store is a bigger constraint than computation capability. In general, this can be solved by good code design, optimization and local store overlays. Some general optimization techniques on Cell are listed as follows [17] [18]:

Branch elimination. Branches can significantly reduce the efficiency of the SPU: since the SPU is an in-order processor with no branch prediction, any mispredicted branch stalls the SPU. Using a compare-select operation instead of a short conditional is a good optimization for most branches.
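
The compare-select idea can be illustrated with a numpy analogue (real SPU code would use C intrinsics such as `spu_sel`); here `np.where` plays the role of the select, so no per-element branch is taken:

```python
import numpy as np

x = np.array([-3.0, 1.5, -0.5, 2.0], dtype=np.float32)

# branchy version: one conditional per element
branchy = [v if v > 0 else 0.0 for v in x]

# compare-select version: a mask (compare) feeds a select, no branches
mask = x > 0
selected = np.where(mask, x, 0.0)

assert list(selected) == branchy   # identical results, branch-free
```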

Aligned vector access. The best access pattern for the SPU is data and structures aligned for vector operations. Scalar and unaligned accesses result in many additional instructions for data alignment and for extracting scalars from vectors. In some cases, we can operate on a scalar as a vector; this solves the data access problem for SPU code that cannot be expressed in a SIMD pattern.

SIMD. SIMD (single instruction, multiple data) is a very useful acceleration technique for the SPU. It generally yields a speed-up of 4 to 8 times.
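
As an illustration of the 4-wide pattern (an SPU's 128-bit register holds four single-precision floats), the loop below mimics one vector multiply-add per iteration in numpy; the data values are arbitrary:

```python
import numpy as np

a = np.arange(16, dtype=np.float32)
b = np.full(16, 2.0, dtype=np.float32)
c = np.full(16, 1.0, dtype=np.float32)

out = np.empty_like(a)
for i in range(0, len(a), 4):
    # one 4-float "vector instruction" per iteration: a*b + c, analogous
    # to a fused multiply-add on a 128-bit SPU register
    out[i:i + 4] = a[i:i + 4] * b[i:i + 4] + c[i:i + 4]

assert np.allclose(out, 2.0 * a + 1.0)   # 16 scalar ops folded into 4 vector ops
```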

Instruction scheduling. Each instruction has a latency, and stall cycles caused by dependencies reduce the efficiency of the SPU. If two adjacent instructions with no dependency can be placed in different pipelines, the two instructions can be dual-issued.

### 4.3. Workload Analysis and Optimization

#### 4.3.1. Workload Analysis

From the theoretical analysis, we know that the uplink modules such as channel decoding, channel estimation and SFBC consume most of the computation resources; they are the modules with heavy workloads. This conclusion is also verified by workload tests on Cell. Table 2 shows the workload of each uplink module for processing 3 OFDMA symbols. The test runs on the Cell BE simulator (Mambo) with Cell SDK 2.1. The cycle counts of the "CP remove" and "channel estimation" modules are for one antenna. The "Viterbi" module uses a 1/2 code rate and a constraint length of 7. We note that Viterbi decoding, deinterleaving and SFBC are the three modules with the heaviest workloads, and that the other modules, such as channel estimation, derandomization and demodulation, also do not meet the throughput requirement without optimization. Thus we need to optimize these modules to meet the targeted 20 Mbps throughput.
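
Cycle counts like those in Table 2 translate into throughput via the 3.2 GHz clock. The sketch below shows the conversion; the 500,000-cycle figure is purely illustrative, not a measured value from the table:

```python
CLOCK_HZ = 3.2e9   # Cell BE system frequency

def throughput_mbps(bits_processed, cycles):
    """Throughput implied by a cycle count at the Cell clock rate."""
    return bits_processed / (cycles / CLOCK_HZ) / 1e6

# illustrative only: 3354 bits (3 OFDMA symbols, as in the text) processed
# in a hypothetical 500,000 cycles would give about 21.5 Mbps
mbps = throughput_mbps(3354, 500_000)
assert 20.0 < mbps < 22.0   # would meet the 20 Mbps target
```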

For the downlink modules, the workload test results are shown in Table 3. The test environment and data length are the same as for the uplink: the initial data length is 3354 bits, corresponding to 3 OFDMA symbols.

We use a convolutional code (rate 1/2, constraint length 7) for channel coding, and the modulation is 16QAM. Except for the interleave module, the downlink modules have a similar level of workload before optimization. Compared with the uplink, the downlink modules consume fewer computation resources. For the FFT and IFFT used in the system, we use the library provided by the Cell SDK, so there is no optimization work on these two modules and their workloads are not listed here.

#### 4.3.2. Workload Optimization

Based on the workload analysis, we optimize each module to meet the throughput requirement we pre-set, that is, 20 Mbps processing capability for both downlink and uplink. In our application, each technique mentioned above is used, and the speed-up of each module is shown in Table 2 and Table 3 for the uplink and downlink respectively.

During the optimization, we trade off computation performance (cycles) against local store consumption. For a computation-critical module, such as Viterbi decoding, we sacrifice local store to obtain a smaller cycle count, while for a local-store-critical module, we try to save buffers instead of achieving the highest performance. Therefore, when we refer to the performance of each module, we should consider both the number of cycles and the local store consumption, which is very important for workload partitioning across SPEs. The optimized results shown in Table 2 and Table 3 are not the best achievable; we optimized each module only until it met our design requirements, so further optimization is still possible.

Based on the optimization results and local store consumption, the workload can be partitioned across five SPEs: two SPEs for the downlink and three SPEs for the uplink. The PPE is responsible for SPE control and management. Thus one Cell BE chip can process both uplink and downlink at 20 Mbps throughput in theory. Figure 5 depicts the workload partition on Cell.

### 4.4. Framework Design

For the software framework design, we consider three scenarios: a sequential framework, PPU synchronization and SPU synchronization. In the sequential framework, the PPU is used as a controller for SPU control and data management, and the SPU is used for data processing. The data is stored in main memory; the SPU fetches the data from main memory, processes it and then sends the results back. The sequential framework is the simplest but has low efficiency; it cannot satisfy the 20 Mbps throughput requirement, so we use it only to verify system correctness at the beginning of system integration. In the PPU synchronization framework, the PPU manages the synchronization of the SPUs, which places a heavy workload on the PPU. If the system (a QS20 Cell blade server containing two Cell processors) is to support 3 sectors, the PPU becomes the bottleneck, so we do not adopt this framework either. SPU synchronization is the framework used in the current system, shown in Fig. 6.

In this design, different modules work in parallel, and the SPUs manage their synchronization through message passing. Since there is no feedback path in the data flow of either the uplink or the downlink, pipelining can be used in the framework design. There are two levels of pipelining:

SPU Level Pipelining. This level of pipelining can be realized by doubling the input and output buffers. The double buffers are allocated in main memory.

Functional Level Pipelining. The functional units within one SPU can also work in a pipeline, but this depends heavily on the algorithms and the local store limitation.

Pipelining can be used only when the local store can hold double buffers for both input and output. Functional level pipelining can overlap the time consumed by DMA tasks with that of computation tasks.
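
The double-buffering idea can be sketched as follows; the list of blocks stands in for main memory, and the prefetch assignment stands in for the asynchronous DMA (e.g. `mfc_get` on a real SPU) that overlaps with computation:

```python
import numpy as np

def process(block):
    """Stand-in for the real compute kernel running on one buffer."""
    return block * 2

blocks = [np.full(4, i, dtype=np.float32) for i in range(5)]  # "main memory"
buf = [blocks[0], None]          # two local-store buffers
results = []
for i in range(len(blocks)):
    nxt = (i + 1) % 2
    if i + 1 < len(blocks):
        buf[nxt] = blocks[i + 1] # prefetch next block into the idle buffer
                                 # (on an SPU this DMA overlaps the compute)
    results.append(process(buf[i % 2]))

assert all(np.allclose(r, b * 2) for r, b in zip(results, blocks))
```

While Python executes the two steps sequentially, the structure shows why two buffers suffice: at any time one buffer is being filled while the other is being consumed.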

## 5. Simulation Results and System Performance

The system is implemented on the IBM Cell blade server QS20, which has two Cell BE processors (a 2-way SMP) operating at 3.2 GHz. We use a 2Rx × 2Tx MIMO configuration, and the system parameters are set as in Table 1: the uplink bandwidth is 10 MHz, with the subcarrier frequency spacing and FFT size given in the table and N_{CP} = 128. The following parameters are also assumed: 1/2-rate convolutional coding with a constraint length of 7 and a generator polynomial matrix of [133 171]. A discrete channel model based on the Stanford University Interim 3 (SUI-3) [13] model is used, which represents a low delay spread case.

We evaluate the system performance from two aspects: one is the throughput of the uplink and downlink, the other is the system BER. The throughput demonstrates the system processing capability. Table 4 shows the throughput test results: each sector achieves more than 20 Mbps throughput for both downlink and uplink, and the total throughput of one QS20 exceeds 60 Mbps.

| Throughput | Downlink (Mbps) | Uplink (Mbps) |
|------------|-----------------|---------------|
| Sector1    | 24.409414       | 20.970757     |
| Sector2    | 25.042559       | 21.517656     |
| Sector3    | 24.442323       | 21.473296     |

The BER performance reflects the correctness and precision of the system design. Figure 8 shows the BER results tested on the QS20 and on an x86 processor (Intel Xeon @ 2.8 GHz). We tested both an AWGN channel and a Rayleigh channel on the x86 and Cell platforms. The results indicate that the BER performance is almost the same on the two platforms under both the AWGN and the Rayleigh channel.

## 6. Summary

In this chapter, we propose possible solutions for the issues arising during WiMAX BS implementation, such as platform selection, algorithm selection and performance optimization, and we design and implement a WiMAX BS (PHY, baseband) on the Cell processor as an illustrative example. The system requirements drive the platform selection, and processing capability and performance requirements are the main factors considered during the BS design. Performance optimization can be classified into individual module optimization and system framework optimization; both depend heavily on the system hardware structure. Although different platforms have their own optimization methods according to their structures, efficient communication between modules and acceleration of key modules with heavy workloads are general approaches that should always be considered.