Sunday, June 27, 2010

Introduction to LTE

The recent increase in mobile data usage and the emergence of new applications such as online gaming, mobile TV, Web 2.0, and streaming content have motivated the 3rd Generation Partnership Project (3GPP) to work on the Long-Term Evolution (LTE). LTE is the latest standard in the mobile network technology tree that previously produced the GSM/EDGE and UMTS/HSxPA technologies, which now account for over 85% of all mobile subscribers. LTE, whose radio access is called Evolved UMTS Terrestrial Radio Access (E-UTRA), is expected to substantially improve end-user throughputs, sector capacity, and spectrum efficiency, and to reduce user-plane latency, bringing a significantly improved user experience with full mobility. In order to achieve these targets, LTE makes use of technical principles that are innovative for cellular networks, such as Orthogonal Frequency Division Multiplexing (OFDM) and Multiple-Input Multiple-Output (MIMO) antenna schemes. Furthermore, 3GPP defined a new core network architecture, called System Architecture Evolution (SAE), which reduces the number of network elements, introduces a fully IP-based protocol stack for both data and voice traffic, and provides support for, and mobility between, legacy systems like GSM and UMTS, as well as non-3GPP systems like WiMAX.

This post is structured as follows. The first section presents the target requirements of LTE specified in the standard. The second section deals with the frame structure used in LTE. The third and fourth sections present the LTE downlink and uplink physical layers, respectively. Finally, the fifth section introduces the core network of LTE (SAE).

1) LTE Target Requirements

The main requirements for the design of an LTE system are specified in 3GPP TR 25.913, Requirements for Evolved UTRA (E-UTRA) and Evolved UTRAN (E-UTRAN). They can be summarized as follows:

  • Peak Data Rate: instantaneous peak data rates of 100 Mbps (downlink) and 50 Mbps (uplink) for a 20 MHz spectrum allocation, assuming two receive antennas and one transmit antenna at the UE.
  • Latency: the one-way transit time between a packet being available at the IP layer in either the UE or radio access network and the availability of this packet at IP layer in the radio access network/UE shall be less than 5 ms. Also C-plane latency shall be reduced, e.g. to allow fast transition times of less than 100 ms from camped state to active state.
  • Average user throughput per MHz: the downlink target is 3-4 times that of Release 6 HSDPA; the uplink target is 2-3 times that of Release 6 Enhanced Uplink (HSUPA).
  • Spectrum efficiency (bit/s/Hz/site): the downlink target is 3-4 times that of Release 6 HSDPA; the uplink target is 2-3 times that of Release 6 Enhanced Uplink (HSUPA).
  • Mobility: the system should be optimized for low mobile speed from 0 to 15 km/h. Higher mobile speed between 15 to 120 km/h should be supported with high performance. Mobility across the cellular network shall be maintained at speeds from 120 km/h to 350 km/h.
  • Coverage: full performance targets should be met up to 5 km; slight degradation in the achieved performance is acceptable up to 30 km; operation up to 100 km should not be precluded by the specifications.
  • Capacity: at least 200 users per cell should be supported for spectrum allocation of 5 MHz, and at least 400 users for higher spectrum allocation.
  • Spectrum flexibility: support for spectrum allocations of different sizes: 1.25 MHz, 2.5 MHz, 5 MHz, 10 MHz, 15 MHz, and 20 MHz in both uplink and downlink.
  • Spectrum allocation: operation in paired (Frequency Division Duplex / FDD) and unpaired spectrum (Time Division Duplex / TDD) shall be supported.
  • Interworking: Interworking with existing UTRAN/GERAN systems and non-3GPP systems shall be ensured. Multimode terminals shall support handover to and from UTRAN and GERAN as well as inter-RAT measurements. Interruption time for handover between E-UTRAN and UTRAN/GERAN shall be less than 300 ms for real time services and less than 500 ms for non real time services.
  • Costs: Reduced CAPEX and OPEX including backhaul shall be achieved. Cost effective migration from Release 6 UTRA radio interface and architecture shall be possible. Reasonable system and terminal complexity, cost and power consumption shall be ensured. All the interfaces specified shall be open for multi-vendor equipment interoperability.
  • Co-existence: Co-existence in the same geographical area and co-location with GERAN/UTRAN shall be ensured. Also, co-existence between operators in adjacent bands as well as cross-border coexistence is a requirement.
  • Quality of Service: End-to-end Quality of Service (QoS) shall be supported. VoIP should be supported with at least as good radio and backhaul efficiency and latency as voice traffic over the UMTS circuit switched networks.

2) LTE Frame Structure

To support transmission in paired and unpaired spectrum, two duplex modes are supported: Frequency Division Duplex (FDD), supporting full duplex and half duplex operation, and Time Division Duplex (TDD). Therefore two frame structures are defined: frame structure type 1 for FDD mode, and frame structure type 2 for TDD mode.

The 10 ms FDD frame is divided into 10 subframes of 1 ms each. Each subframe is further divided into two slots of 0.5 ms duration, as shown in Figure 1.

Figure 1: frame structure for FDD mode

Slots consist of either 6 or 7 OFDM symbols, depending on whether the normal or extended cyclic prefix is employed.

The 10 ms TDD frame consists of two half-frames of length 5 ms each. Each half-frame is divided into five subframes of 1 ms each, as shown in Figure 2.

Figure 2: frame structure for TDD mode

All subframes which are not special subframes are defined as two slots of length 0.5 ms. The special subframes consist of three fields DwPTS (Downlink Pilot Timeslot), GP (Guard Period), and UpPTS (Uplink Pilot Timeslot). DwPTS, GP and UpPTS have configurable individual lengths and a total length of 1 ms. Seven uplink-downlink configurations with either 5 ms or 10 ms downlink-to-uplink switch-point periodicity are supported. In case of 5 ms switch-point periodicity, the special subframe exists in both half-frames. In case of 10 ms switch-point periodicity the special subframe exists in the first half frame only. Subframes 0 and 5 and DwPTS are always reserved for downlink transmission. UpPTS and the subframe immediately following the special subframe are always reserved for uplink transmission. Table 1 shows the supported uplink-downlink configurations, where “D” denotes a subframe reserved for downlink transmission, “U” denotes a subframe reserved for uplink transmission, and “S” denotes the special subframe.

Table 1: Uplink-downlink configuration for TDD frame structure
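As a rough sketch, the seven configurations can be written down and checked against the constraints just described. The patterns below are my transcription of the uplink-downlink configurations in 3GPP TS 36.211; treat them as illustrative:

```python
# Sketch of Table 1: the seven TDD uplink-downlink configurations
# (transcribed from 3GPP TS 36.211, so double-check against the spec).
# "D" = downlink subframe, "U" = uplink subframe, "S" = special subframe.

TDD_CONFIGS = {
    #  subframe index: 0123456789
    0: "DSUUUDSUUU",  # 5 ms switch-point periodicity
    1: "DSUUDDSUUD",  # 5 ms
    2: "DSUDDDSUDD",  # 5 ms
    3: "DSUUUDDDDD",  # 10 ms
    4: "DSUUDDDDDD",  # 10 ms
    5: "DSUDDDDDDD",  # 10 ms
    6: "DSUUUDSUUD",  # 5 ms
}

def check_config(pattern: str) -> bool:
    """Verify the invariants stated in the text above."""
    # Subframes 0 and 5 are always reserved for downlink transmission.
    ok = pattern[0] == "D" and pattern[5] == "D"
    # The subframe immediately following a special subframe is uplink.
    for i, sf in enumerate(pattern):
        if sf == "S":
            ok = ok and pattern[(i + 1) % 10] == "U"
    return ok

for cfg, pattern in TDD_CONFIGS.items():
    print(cfg, pattern, check_config(pattern))
```

Note how the 5 ms periodicity configurations carry a special subframe in both half-frames (subframes 1 and 6), while the 10 ms ones have it only in the first half-frame.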

3) LTE Downlink Physical Layer

The downlink transmission scheme for LTE in both FDD and TDD mode is based on conventional OFDM. In an OFDM system, the available spectrum is divided into many narrow subcarriers, each of which is independently modulated by a low-rate data stream using varying levels of QAM modulation. The basic subcarrier spacing in LTE is 15 kHz, with a reduced subcarrier spacing of 7.5 kHz available for some MBSFN (Multicast/Broadcast Single Frequency Network) scenarios. One downlink slot consists of 6 or 7 OFDM symbols, depending on whether the extended or normal cyclic prefix is configured. The extended cyclic prefix is able to cover larger cell sizes with a higher delay spread of the radio channel. The cyclic prefix lengths are summarized in Table 2.

Table 2: cyclic prefix lengths

Note that the CP duration is described both in absolute terms (e.g. 16.67 μs for the extended CP) and in terms of the standard time unit, Ts, which is used throughout the LTE specification documents. It is defined as Ts = 1 / (15000 × 2048) = 1 / 30,720,000 seconds, which corresponds to the 30.72 MHz sample clock for the 2048-point FFT used with the 20 MHz system bandwidth.
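The arithmetic above is easy to reproduce. The following sketch computes Ts and the CP durations of Table 2; the CP lengths in samples (160/144 for the normal CP, 512 for the extended CP) are assumed from the LTE specifications, everything else follows from the definition of Ts:

```python
# Reproduce the LTE time-unit arithmetic described in the text.

SUBCARRIER_SPACING_HZ = 15_000
FFT_SIZE = 2048                                # for the 20 MHz system bandwidth
Ts = 1 / (SUBCARRIER_SPACING_HZ * FFT_SIZE)    # = 1 / 30,720,000 s

def cp_duration_us(samples: int) -> float:
    """Convert a CP length given in units of Ts to microseconds."""
    return samples * Ts * 1e6

print(f"Ts = {Ts * 1e9:.2f} ns")                                   # ~32.55 ns
print(f"normal CP (first symbol):  {cp_duration_us(160):.2f} us")  # ~5.21 us
print(f"normal CP (other symbols): {cp_duration_us(144):.2f} us")  # ~4.69 us
print(f"extended CP:               {cp_duration_us(512):.2f} us")  # ~16.67 us
```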

OFDMA is employed as the multiplexing scheme in the LTE downlink. Although it adds complexity in terms of resource scheduling, it outperforms other OFDM-based multiplexing methods in both efficiency and latency. In OFDMA, users are allocated a specific number of subcarriers for a predetermined amount of time. The transmitted downlink signals can be represented by a resource grid, as depicted in Figure 3.

Figure 3: downlink resource grid

Each box within the grid represents a single subcarrier for one symbol period and is referred to as a resource element. Note that in MIMO configurations, there is a resource grid for each transmitting antenna. The smallest unit of resource allocation is called a physical resource block (PRB) and consists of 12 consecutive subcarriers for the duration of one slot. The PRBs allocated to a user do not have to be adjacent to each other, and the scheduling decision can be modified every transmission time interval of 1 ms. Allocation of PRBs is handled by a scheduling function at the base station (eNodeB); the scheduling algorithm has to take into account the radio link quality of the different users, the overall interference situation, QoS requirements, service priorities, etc. The total number of available PRBs depends on the overall transmission bandwidth of the system.
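As an illustration of the grid dimensions, the sketch below computes the number of resource elements per slot for a few channel bandwidths. The bandwidth-to-PRB mapping is the commonly quoted one and is an assumption here, not something taken from the text above:

```python
# Illustrative sketch of downlink resource grid dimensions.
# Subcarriers-per-PRB and symbols-per-slot come from the text;
# the bandwidth-to-PRB table is an assumed, commonly quoted mapping.

SUBCARRIERS_PER_PRB = 12
SYMBOLS_PER_SLOT_NORMAL_CP = 7

# Channel bandwidth (MHz) -> number of PRBs (assumed mapping)
PRBS_PER_BANDWIDTH = {1.4: 6, 3: 15, 5: 25, 10: 50, 15: 75, 20: 100}

def resource_elements_per_slot(bandwidth_mhz: float) -> int:
    """Resource elements in one slot of the downlink grid (normal CP)."""
    n_prb = PRBS_PER_BANDWIDTH[bandwidth_mhz]
    return n_prb * SUBCARRIERS_PER_PRB * SYMBOLS_PER_SLOT_NORMAL_CP

print(resource_elements_per_slot(20))   # 100 PRBs * 12 subcarriers * 7 symbols = 8400
```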

Physical Downlink Channels

A downlink physical channel corresponds to a set of resource elements carrying information originating from higher layers. The following downlink physical channels are defined:

  • Physical Downlink Shared Channel (PDSCH): carries user data (QPSK, 16QAM, 64QAM);
  • Physical Broadcast Channel (PBCH): carries the Master Information Block (QPSK);
  • Physical Multicast Channel (PMCH): carries user data to one or more devices (QPSK, 16QAM, 64QAM);
  • Physical Control Format Indicator Channel (PCFICH): indicates the format of the PDCCH (QPSK);
  • Physical Downlink Control Channel (PDCCH): carries downlink control information (DCI), e.g. downlink or uplink scheduling assignments (QPSK);
  • Physical Hybrid ARQ Indicator Channel (PHICH): carries ACK/NACK for uplink data packets (BPSK).

Physical Downlink Signals

A downlink physical signal corresponds to a set of resource elements used by the physical layer but does not carry information originating from higher layers. The following downlink physical signals are defined:

- Reference Signals

The downlink reference signal structure is important for channel estimation. Specific pre-defined resource elements in the resource grid carry the cell-specific reference signal sequence. The UE must obtain an accurate channel impulse response (CIR) estimate from each transmitting antenna. Therefore, when a reference signal is transmitted from one antenna port, the corresponding resource elements on the other antenna ports in the cell are left idle. CIR estimates for subcarriers that do not carry reference signals are computed via interpolation.
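The interpolation step can be illustrated with a toy example. The pilot spacing, channel model, and noise level below are invented purely for illustration; they are not LTE parameters:

```python
# Toy sketch of frequency-domain channel estimation by interpolation:
# estimate the channel on pilot (reference signal) subcarriers, then
# interpolate across the remaining subcarriers. All values are invented.

import numpy as np

rng = np.random.default_rng(0)
n_sc = 72                        # subcarriers in this toy grid
pilots = np.arange(0, n_sc, 6)   # assumed pilot positions (every 6th subcarrier)

# A smooth "true" frequency response standing in for the channel.
h_true = np.exp(1j * 2 * np.pi * np.arange(n_sc) / 64)

# Least-squares estimates at the pilot positions, with a little noise.
noise = 0.01 * (rng.standard_normal(len(pilots))
                + 1j * rng.standard_normal(len(pilots)))
h_pilot = h_true[pilots] + noise

# Linearly interpolate real and imaginary parts over all subcarriers.
k = np.arange(n_sc)
h_est = np.interp(k, pilots, h_pilot.real) + 1j * np.interp(k, pilots, h_pilot.imag)

# Estimation error between the first and last pilot subcarriers.
err = np.max(np.abs(h_est[: pilots[-1] + 1] - h_true[: pilots[-1] + 1]))
print(err)
```

Real receivers use more sophisticated two-dimensional (time and frequency) interpolation or MMSE estimators, but the principle is the one shown here.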

- Synchronization signals

The synchronization signals are classified as primary and secondary synchronization signals, depending on how they are used by the UE during the cell search procedure. Both primary and secondary synchronization signals are transmitted on 62 subcarriers within 72 reserved subcarriers around the DC subcarrier, in slots 0 and 10 of each frame.
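For the curious, the primary synchronization signal is built from a Zadoff-Chu sequence. The sketch below follows my reading of the LTE physical layer specification (length-63 root sequence, root index u ∈ {25, 29, 34}, one per physical-layer identity, with the middle element punctured to give 62 values); treat the details as assumptions to be checked against TS 36.211:

```python
# Sketch of the frequency-domain PSS as a punctured length-63
# Zadoff-Chu sequence (details assumed from TS 36.211).

import cmath

def pss_sequence(u: int) -> list:
    """Length-62 frequency-domain PSS for Zadoff-Chu root index u."""
    d = []
    for n in range(62):
        if n <= 30:
            d.append(cmath.exp(-1j * cmath.pi * u * n * (n + 1) / 63))
        else:
            d.append(cmath.exp(-1j * cmath.pi * u * (n + 1) * (n + 2) / 63))
    return d

seq = pss_sequence(25)
print(len(seq))                                     # 62
print(all(abs(abs(x) - 1) < 1e-12 for x in seq))    # constant amplitude: True
```

The constant amplitude and good autocorrelation of Zadoff-Chu sequences are what make them attractive for synchronization.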

4) LTE Uplink Physical Layer

LTE uplink requirements differ from downlink requirements in several ways. Not surprisingly, power consumption is a key consideration for the UE. The high peak-to-average power ratio (PAPR), and the related loss of amplifier efficiency associated with OFDM signaling, are major concerns. As a result, an alternative to OFDM was sought for use in the LTE uplink.

Thus, the LTE uplink transmission scheme for both FDD and TDD mode is based on Single-Carrier Frequency-Division Multiple Access (SC-FDMA) with cyclic prefix. SC-FDMA is a somewhat misleading term, since it is essentially a multi-carrier scheme that re-uses many of the functional blocks of the UE's OFDM receiver chain. The principal advantage of SC-FDMA over conventional OFDM is a lower PAPR (by approximately 2 dB), which is important for cost-effective design of UE power amplifiers.

The uplink uses the same generic frame structure as the downlink, the same 15 kHz subcarrier spacing, and the same PRB width. Uplink modulation parameters (including normal and extended CP lengths) are identical to the downlink parameters.

Subcarrier modulation, however, is quite different. In the uplink, data is mapped onto a signal constellation that can be QPSK, 16QAM, or 64QAM, depending on channel quality. However, rather than using the QPSK/QAM symbols to directly modulate subcarriers (as is the case in OFDM), uplink symbols are sequentially fed into a serial/parallel converter and then into an FFT block. The result at the output of the FFT block is a discrete frequency-domain representation of the QPSK/QAM symbol sequence. The discrete Fourier terms at the output of the FFT block are then mapped to subcarriers before being converted back into the time domain (IFFT). The final step prior to transmission is appending a CP. Figure 4 clarifies the conceptual differences between OFDMA and SC-FDMA.

Figure 4: OFDMA vs SC-FDMA
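The conceptual difference can also be seen in a few lines of code. The sketch below is a minimal, non-LTE-sized model (sizes are illustrative): OFDMA maps the QPSK symbols directly onto subcarriers, while SC-FDMA DFT-precodes them first, and the PAPR of the two resulting waveforms can be compared:

```python
# Minimal model of OFDMA vs SC-FDMA transmit chains (CP omitted).
# Sizes are illustrative, not LTE parameters.

import numpy as np

rng = np.random.default_rng(1)
M, N = 12, 64                         # occupied subcarriers, IFFT size

# Random QPSK symbols to transmit (unit average power).
bits_i = 2 * rng.integers(0, 2, M) - 1
bits_q = 2 * rng.integers(0, 2, M) - 1
qpsk = (bits_i + 1j * bits_q) / np.sqrt(2)

def to_time_domain(freq_symbols):
    """Map symbols onto the first subcarriers of an N-point IFFT."""
    grid = np.zeros(N, dtype=complex)
    grid[: len(freq_symbols)] = freq_symbols
    return np.fft.ifft(grid)

ofdma = to_time_domain(qpsk)                            # OFDMA: direct mapping
scfdma = to_time_domain(np.fft.fft(qpsk) / np.sqrt(M))  # SC-FDMA: DFT precoding first

def papr_db(x):
    """Peak-to-average power ratio in dB."""
    p = np.abs(x) ** 2
    return 10 * np.log10(p.max() / p.mean())

print(f"OFDMA PAPR:   {papr_db(ofdma):.1f} dB")
print(f"SC-FDMA PAPR: {papr_db(scfdma):.1f} dB")
```

Averaged over many symbol blocks, the DFT-precoded waveform shows the lower PAPR that motivates SC-FDMA in the uplink; a single random draw, as here, is only indicative.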

Scheduling of uplink resources is done by the base station, which assigns certain time/frequency resources to the UEs and informs them of the transmission format to use. Scheduling decisions are based on QoS parameters, UE buffer status, uplink channel quality measurements, UE capabilities, UE measurement gaps, etc. The resource grid used in uplink transmission is similar to the one used in downlink transmission.

Physical Uplink Channels

An uplink physical channel corresponds to a set of resource elements carrying information originating from higher layers. The following uplink physical channels are defined:

  • Physical Uplink Shared Channel (PUSCH): carries user data (QPSK, 16QAM, 64QAM);
  • Physical Uplink Control Channel (PUCCH): carries uplink control information (UCI), i.e. ACK/NACK information related to data packets received in the downlink, channel quality indication (CQI) reports, precoding matrix information (PMI) and rank indication (RI) for MIMO, and scheduling requests (SR) (BPSK, QPSK);
  • Physical Random Access Channel (PRACH): used to request initial access, as part of handover, or to re-establish uplink synchronization (BPSK, QPSK).

Physical Uplink Signals

An uplink physical signal is used by the physical layer but does not carry information originating from higher layers. LTE defines one category of uplink physical signals: Reference Signals. There are two types of uplink reference signals: the demodulation reference signal is used for channel estimation in the eNodeB receiver in order to demodulate control and data channels, while the sounding reference signal provides uplink channel quality information as a basis for scheduling decisions in the base station.

5) LTE/SAE System Architecture Evolution

Along with LTE, which applies to the radio access technology of the cellular telecommunications system, there is also an evolution of the core network, known as System Architecture Evolution (SAE). The main principles of the LTE-SAE architecture include:

  • a common gateway (GW) node for all access technologies;
  • an optimized architecture for the user plane – from four to only two node types (base stations and gateway);
  • IP-based protocols on all interfaces;
  • a RAN-CN functional split similar to that of WCDMA/HSPA;
  • a split in the control/user plane between the mobility management entity (MME) and the gateway;
  • integration of non-3GPP access technologies using client- as well as network-based mobile IP.

Figure 5 shows a simplified view of the overall LTE-SAE architecture, where solid and dotted lines represent traffic data interfaces and signaling interfaces, respectively.

Figure 5: simplified LTE/SAE architecture

The LTE base stations (eNodeBs) connect to the core network via the RAN-CN interface. The MME handles control signaling (control plane), while user data is forwarded between the base stations and the gateway (user plane). To support high-speed handover of terminals in active mode, each LTE base station is logically connected to all of its neighboring base stations.

The gateway includes both packet data network (PDN) and serving gateway functionality. The PDN gateway serves as a common anchor point for all access technologies, providing a stable IP point-of-presence for all users regardless of mobility within or between access technologies. The serving gateway is the anchor point for intra-3GPP mobility.

The MME functionality is kept separate from the gateways to facilitate network deployment, independent technology evolution, and fully flexible scaling of capacity.

GSM and WCDMA/HSPA systems are integrated into the evolved system through standardized interfaces between the SGSN (serving GPRS support node) and the evolved core network. This includes interfaces to the MME for transferring context and establishing bearers when moving between accesses, and to the gateway for establishing IP connectivity with user equipment (UE). The gateway node thus functions as a GGSN (gateway GPRS support node) for GSM and WCDMA/HSPA terminals.


  1. can you give me the reference of the figure 5 in literature? or it has been customized by yourself?

  2. Here is the reference:

    LTE-SAE architecture and performance
    Per Beming, Lars Frid, Göran Hall, Peter Malm, Thomas Noren, Magnus Olsson and Göran Rune

  3. Interesting... can you also give us more information about the current status of LTE?

  4. Excuse me, can you give me information about "3GPP TR 25.913 Feasibility Study of Evolved UTRA and UTRAN"? Which version is it, V8.0.0 (2008-12) or V9.0.0 (2009-12)?
    Thank you so much!

    1. Hi, thanks for your interest. The version is V8.0.0.