Channel Equalization Using Artificial Neural Network

Table of Contents

CERTIFICATE.................................................................................................... ii
ABSTRACT...................................................................................................... iii
ACKNOWLEDGEMENTS................................................................................... iv
Table of Contents................................................................................................. v
List of Figures................................................................................................... vii
List of Symbols, Abbreviations and Nomenclature.....................................................viii
1. INTRODUCTION......................................................................................... 1
INTRODUCTION........................................................................................... 2
Problem Statement..................................................................................2
Organisation of report.............................................................................. 3

2. CHANNEL EQUALIZATION..........................................................................4
INTRODUCTION TO CHANNEL EQUALIZATION............................................5
FUNDAMENTALS OF EQUALIZATION...........................................................7
Introduction............................................................................................. 7
Operating modes of an adaptive equalizer..............................................7
ADAPTIVE EQUALIZATION..........................................................................8
Communication system with an adaptive equalizer.................................8
SURVEY ON EQUALIZATION TECHNIQUES................................................10
Linear Equalizer..................................................................................... 10
Non-linear Equalizer............................................................................... 11

3. ARTIFICIAL NEURAL NETWORKS..............................................................12
INTRODUCTION TO ANNs...........................................................................13
What are ANNs?..................................................................................... 13
Why do we use Neural Networks?..........................................................13
Benefits of ANN...................................................................................... 14
STRUCTURE OF ANN..................................................................................14
Mathematical Model of a Neuron...........................................................14
Network Architectures...........................................................................15
Learning Process.................................................................................... 17
BACK PROPAGATION ALGORITHM............................................................18

Introduction........................................................................................... 18
Learning Process.................................................................................... 19
4. CHANNEL EQUALIZATION USING ANNs.....................................................22
Introduction........................................................................................... 23
State of the Art...................................................................................... 24
Proposed solution methodology.............................................................24
Conclusion............................................................................................. 25

REFERENCES................................................................................................. 26


List of Figures
Figure 2-1: Inter-Symbol Interference...............................................................................5
Figure 2-2: Propagation paths in an open-air radio transmission channel........................6
Figure 2-3: Communication system with an adaptive equalizer.......................................9
Figure 2-4: Equalizer located at the receiver end of the channel....................................10
Figure 2-5: Classification of the Equalizers....................................................................11
Figure 3-1: Model of an ANN.........................................................................................15
Figure 3-2: Single-layer Feed-Forward Network............................................................16
Figure 3-3: Multi-layer Feed-Forward Network.............................................................16
Figure 3-4: Recurrent Network.......................................................................................17
Figure 3-5: Three layer Neural Network with two inputs and single output...................20
Figure 4-1: Block diagram of Adaptive Equalizer..........................................................23


List of Symbols, Abbreviations and Nomenclature

ISI     Inter-Symbol Interference
ANN     Artificial Neural Network
MLP     Multi-Layer Perceptron
BPA     Back Propagation Algorithm
TDMA    Time Division Multiple Access
LTE     Linear Transversal Equalizer
DFE     Decision Feedback Equalization
MLSE    Maximum Likelihood Sequence Estimation
LMS     Least Mean Square
RLS     Recursive Least Square


Chapter 1

1. INTRODUCTION
PROBLEM STATEMENT
ORGANIZATION OF REPORT


INTRODUCTION
In a communication system, the task of the receiver is to retrieve the information sent by the transmitter through a transmission medium called the channel. To accomplish this, it extracts the parameters related to the transmitted information from the received signal. The channel is central to the operation of a communication system: its properties determine both the information-carrying capacity and the quality of service offered by the system. Before reaching the receiver, the transmitted signal passes through the channel; equivalently, the transmitted signal is convolved with the channel impulse response.
Inter-Symbol Interference (ISI), caused by multipath in band-limited (frequency-selective), time-dispersive channels, distorts the transmitted signal and causes bit errors at the receiver. ISI has been recognized as the major obstacle to high-speed data transmission over wireless channels. Channel equalization is a technique used to combat inter-symbol interference.
Problem Statement
Digital communication systems are designed to transmit high-speed data over communication channels. During transmission the data is distorted by linear and nonlinear distortions, so the communication system requires signal processing techniques to improve link performance in mobile radio environments. Channel equalization is one such technique: it improves the quality of the received signal and the performance of the link (i.e., it minimizes the instantaneous bit error rate) over small-scale times and distances.
In mobile radio channels, frequent channel changes and multipath propagation cause time dispersion of the digital information and hence Inter-Symbol Interference. Under ISI, each pulse broadens and overlaps with its neighbours, eventually becoming indistinguishable at the receiver. Channel distortion therefore calls for channel equalization techniques at the receiver side, which reconstruct the transmitted symbols correctly, since our main objective is to transmit symbols with minimum error.
Artificial Neural Networks (ANNs) are nonlinear information (signal) processing devices built from interconnected elementary processing units called neurons. An ANN has a natural propensity for storing experiential knowledge and making it available for use. ANNs can perform complex mappings between their input and output spaces and are capable of forming complex decision regions with nonlinear decision boundaries.
Our main goal is to design and simulate an artificial neural network based channel equalizer and compare its performance with existing techniques.
Organisation of report
The report is organized as follows. Chapter 2 introduces the fundamentals of channel equalization and its role in digital communication. Chapter 3 gives a brief introduction to artificial neural networks, covering the mathematical model of a neuron, the different network architectures and the learning process. Chapter 4 presents the literature survey and state of the art, followed by conclusions and the future scope of the work.


Chapter 2

2. CHANNEL EQUALIZATION
INTRODUCTION TO CHANNEL EQUALIZATION
FUNDAMENTALS OF EQUALIZATION
ADAPTIVE EQUALIZATION
SURVEY ON EQUALIZATION TECHNIQUES


INTRODUCTION TO CHANNEL EQUALIZATION
In a digital communication system, Inter-Symbol Interference (ISI) is one of the main causes of degradation of system performance. Equalization is one of the techniques used to improve received signal quality and link performance over small-scale times and distances.
Equalization compensates for the Inter-Symbol Interference created by multipath in time-dispersive channels. In a broad sense, the term equalization can describe any signal processing operation that minimizes ISI.
In radio channels, a variety of adaptive equalizers can be used to cancel interference. Because mobile fading channels are random and time-varying, equalizers must track the time-varying characteristics of the mobile channel and are therefore called adaptive equalizers.
There are two main impairments in the process of digital communication: Inter-Symbol Interference (ISI) and multipath propagation.
 Inter-Symbol Interference in Digital Transmission
Inter-symbol interference (ISI) arises when the channel is dispersive: each received pulse is affected to some extent by adjacent pulses, which causes interference between the transmitted symbols [Fig 2-1]. It then becomes difficult to recover the original data from a single channel sample.

Fig 2-1: Inter-Symbol Interference


 Multipath Propagation
Within telecommunication channels multiple paths of propagation commonly
occur. In practical terms this is equivalent to transmitting the same signal
through a number of separate channels, each having a different attenuation and
delay.
Consider an open-air radio transmission channel [Fig 2-2 (a)] that has three propagation paths: direct, earth-bound and sky-bound. Fig 2-2 (b) shows how a receiver picks up the transmitted data. The direct signal is received first, whilst the earth-bound and sky-bound signals are delayed. All three signals are attenuated, with the sky path suffering the most. Multipath interference between consecutively transmitted signals takes place if one signal is received whilst the previous signal is still being detected. This occurs when the symbol transmission rate is greater than 1/τ, where τ represents the transmission delay. Because bandwidth-efficient systems operate at high data rates, multipath interference commonly occurs.

Fig 2-2: Propagation paths in an open-air radio transmission channel
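To make the effect concrete, the following short Python sketch (added purely for illustration; the three tap gains and the noise level are assumed values, not taken from this report) models a multipath channel as an FIR filter whose taps play the role of the direct, earth-bound and sky-bound paths, and shows how each received sample becomes a mixture of neighbouring symbols:

```python
import numpy as np

# Hypothetical 3-path channel: direct, earth-bound and sky-bound components,
# each arriving one symbol period later and more attenuated than the previous one.
channel = np.array([1.0, 0.5, 0.2])   # assumed tap gains

rng = np.random.default_rng(0)
symbols = rng.choice([-1.0, 1.0], size=10)                # random BPSK symbols
received = np.convolve(symbols, channel)[:len(symbols)]   # channel introduces ISI
received += 0.05 * rng.standard_normal(len(symbols))      # additive noise

# Each received sample now depends on the current symbol AND the two previous
# ones, so a hard decision on 'received' alone can differ from 'symbols'.
print("transmitted:", symbols)
print("received   :", np.round(received, 2))
```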


FUNDAMENTALS OF EQUALIZATION
Introduction
In a broad sense, the term equalization can be used to describe any signal
processing operation that minimizes ISI. In radio channels, a variety of adaptive
equalizers can be used to cancel interference while providing diversity [1]. Since
the mobile fading channel is random and time varying, equalizers must track the
time varying characteristics of the mobile channel, and thus are called adaptive
equalizers.
Operating modes of an adaptive equalizer
The general operating modes of an adaptive equalizer include:
a. Training (first stage)
In this first stage a known fixed-length training sequence is sent by the
transmitter so that the receiver's equalizer may average to a proper setting. The
training sequence is designed to permit an equalizer at the receiver to acquire
the proper filter coefficients in the worst possible channel conditions. The
training sequence is typically a pseudorandom binary signal or a fixed,
prescribed bit pattern. Immediately following the training sequence, the user
data is sent. The time span over which an equalizer converges is a function of
the equalizer algorithm, the equalizer structure, and the time rate of change of
the multipath radio channel. Equalizers require periodic retraining in order to
maintain effective ISI cancellation.
b. Tracking (second stage)
In the second stage, immediately following the training sequence, the user data is sent. As user data are received, the adaptive algorithm of the equalizer tracks the changing channel and adjusts its filter characteristics over time. This scheme is commonly used in digital communication systems where user data is segmented into short time blocks. Time Division Multiple Access (TDMA) wireless systems are particularly well suited to equalizers: in TDMA, data is sent in fixed-length time blocks, and the training sequence is usually sent at the beginning of a block.

ADAPTIVE EQUALIZATION
Consider a time-varying channel in which the receiver achieves equalization by continuously adjusting several parameters based on measurements of the channel characteristics. This process of continuous adjustment in a time-varying channel is called adaptive equalization. For example, mobile channels are random and time-varying and are often affected by signal fading, so the equalizers used in this case must be able to track the time variations of the channel to reduce interference. In simple words, an adaptive equalizer is an equalizer that automatically adapts to the time-varying properties of the communication channel.
Adaptive equalizers compensate for signal distortion attributed to Inter-Symbol Interference (ISI), which is caused by multipath within time-dispersive channels. Typically, they are employed in high-speed communication systems which do not use differential modulation schemes or frequency division multiplexing. The equalizer is the most expensive component of a data demodulator and can consume over 80% of the total computations needed to demodulate a given signal.
Communication system with an adaptive equalizer
Fig 2-3 shows a block diagram of a communication system with an adaptive

equalizer in the receiver. If x (t) is the original information signal, and f(t) is the
combined complex baseband impulse response of the transmitter, channel, and the
RF/IF sections of the receiver, the signal received by the equalizer may be
expressed as
y(t) = x(t) ⨂ f*(t) + nb(t)                                                     (2-1)

Where,


f*(t), is the complex conjugate of f(t) ,
nb(t), is the baseband noise at the input of the equalizer, and
⨂ , denotes the convolution operation

If the impulse response of the equalizer is heq(t), then the output of the equalizer is
d^(t) = x(t) ⨂ f*(t) ⨂ heq(t) + nb(t) ⨂ heq(t)                                  (2-2)
      = x(t) ⨂ g(t) + nb(t) ⨂ heq(t)
Where, g(t), is the combined impulse response of the transmitter, channel, RF/IF
sections of the receiver, and the equalizer.
The complex baseband impulse response of a transversal filter equalizer is given
by
heq(t) = Σk ck δ(t − kTs)                                                       (2-3)

Where, ck, are the complex filter coefficients of the equalizer.
The desired output of the equalizer is x(t), the original source data. Assume that nb(t) = 0. Then, in order to force d^(t) = x(t) in equation (2-2), g(t) must satisfy

g(t) = f*(t) ⨂ heq(t) = δ(t)                                                    (2-4)


Fig 2-3: Communication system with an adaptive equalizer

The goal of equalization is to satisfy equation (2-4). In the frequency domain, equation (2-4) can be expressed as

Heq(f) F*(−f) = 1                                                               (2-5)

Where, Heq(f) and F(f) are Fourier transforms of heq(t) and f(t), respectively.
Equation (2-5) indicates that an equalizer is essentially an inverse filter of the channel. If the channel is frequency-selective, the equalizer enhances the frequency components with small amplitudes and attenuates the strong frequency components in the received spectrum, in order to provide a flat composite received frequency response and a linear phase response. For a time-varying channel, an adaptive equalizer is designed to track the channel variations so that equation (2-5) is approximately satisfied.
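As a rough numerical illustration of this inverse-filter view (a simplified Python sketch with an assumed channel; it ignores the conjugate term F*(−f) of equation (2-5) and simply inverts the channel frequency response):

```python
import numpy as np

f_chan = np.array([1.0, 0.4, 0.2])   # assumed channel impulse response samples
N = 64                               # FFT length for the frequency-domain view

F = np.fft.fft(f_chan, N)            # channel frequency response
H_eq = 1.0 / F                       # zero-forcing equalizer: inverse of the channel

# The combined response Heq(f)·F(f) is flat (identically 1): weak spectral
# components are boosted and strong ones attenuated, as described above.
print(np.allclose(H_eq * F, 1.0))    # True
```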
Equalization is thus the process of removing ISI and noise effects introduced by the channel. The equalizer is located at the receiver end of the channel, as shown in Fig 2-4: it is an inverse filter placed at the front end of the receiver, whose transfer function is approximately the inverse of the transfer function of the channel. Equalization is an iterative process of reducing the mean square error, i.e., the difference between the desired response and the output of the filter used in the equalizer.

Fig 2-4: Equalizer located at the receiver end of the channel

SURVEY ON EQUALIZATION TECHNIQUES
Equalization techniques can be divided into two general categories: linear and non-linear equalizers.
Linear Equalizer
Linear equalizers aim at reducing ISI in linear channels using various algorithms
like Least Mean Square (LMS), Recursive Least Square (RLS) and normalized
LMS. The most common equalizer structure is a linear transversal equalizer
(LTE). The output of the decision maker is not used in the feedback path to adapt
the equalizer.
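As a concrete sketch of such an adaptive linear transversal equalizer (illustrative Python; the channel taps, step size, equalizer length and decision delay are assumed values chosen only for demonstration), the LMS algorithm updates the filter taps from the error against the known training symbols, in the spirit of the training mode described earlier:

```python
import numpy as np

rng = np.random.default_rng(1)
channel = np.array([1.0, 0.5, 0.2])        # assumed dispersive channel taps
num_taps, mu, delay = 11, 0.01, 5          # equalizer length, step size, decision delay

train = rng.choice([-1.0, 1.0], size=2000)                 # known training sequence
rx = np.convolve(train, channel)[:len(train)]
rx += 0.05 * rng.standard_normal(len(rx))                  # received signal: ISI + noise

w = np.zeros(num_taps)                                      # transversal filter taps
for n in range(num_taps, len(rx)):
    x = rx[n - num_taps:n][::-1]          # current contents of the tapped delay line
    y = w @ x                             # equalizer output
    e = train[n - delay] - y              # error against the delayed training symbol
    w += mu * e * x                       # LMS update: w <- w + mu * e * x

print("final taps:", np.round(w, 3))
```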
Non-linear Equalizer
Non-linear equalizers are used to equalize non-linear channels; among other approaches, they employ Neural Networks (NN) and Multi-Layer Perceptron (MLP) structures. They are used in applications where the channel distortion is too severe for a linear equalizer to handle. Decision Feedback Equalization (DFE) and Maximum Likelihood Sequence Estimation (MLSE) are the most commonly used non-linear equalization techniques. In these equalizers the output of the decision maker is used in the feedback path to adapt the equalizer.
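A decision feedback equalizer can likewise be sketched in a few lines (an illustrative Python example under assumed channel taps, filter lengths and decision delay, not a design taken from this report): a feedforward filter works on the received samples while a feedback filter subtracts the ISI contributed by the symbols already decided.

```python
import numpy as np

rng = np.random.default_rng(2)
channel = np.array([1.0, 0.6, 0.3])          # assumed channel with severe ISI
n_ff, n_fb, mu, delay = 7, 3, 0.01, 3        # assumed lengths, step size, decision delay

bits = rng.choice([-1.0, 1.0], size=3000)
rx = np.convolve(bits, channel)[:len(bits)] + 0.05 * rng.standard_normal(len(bits))

w_ff = np.zeros(n_ff)                         # feedforward taps (on received samples)
w_fb = np.zeros(n_fb)                         # feedback taps (on past decisions)
past = np.zeros(n_fb)                         # most recent decisions, newest first
errors = 0
for n in range(n_ff, len(rx)):
    x = rx[n - n_ff:n][::-1]
    y = w_ff @ x - w_fb @ past                # feedback cancels trailing ISI
    d = 1.0 if y >= 0 else -1.0               # symbol decision
    e = bits[n - delay] - y                   # training mode: known delayed symbol
    w_ff += mu * e * x                        # LMS updates for both filters
    w_fb -= mu * e * past
    errors += d != bits[n - delay]
    past = np.concatenate(([d], past[:-1]))   # shift decision into feedback line

print("decision errors during training:", errors)
```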
Fig 2-5 provides a general categorization of equalization techniques; equalizers can be classified in several different ways according to their type, structure, and adaptation algorithm.


Fig 2-5: Classification of the Equalizers


Chapter 3

3. ARTIFICIAL NEURAL NETWORKS
INTRODUCTION TO ANN
STRUCTURE OF ANN
BACK PROPAGATION ALGORITHM


INTRODUCTION TO ANNs
What are ANNs?
Work on artificial neural networks has been motivated, right from its inception, by the recognition that the human brain computes in an entirely different way from the conventional digital computer. The brain is a highly complex, nonlinear and parallel information processing system. It has the capability to organize its structural constituents, known as neurons, so as to perform certain computations many times faster than the fastest digital computer in existence today. The brain routinely accomplishes perceptual recognition tasks, e.g. recognizing a familiar face embedded in an unfamiliar scene, in approximately 100-200 ms, whereas tasks of much lesser complexity may take days on a conventional computer.
A neural network is a machine that is designed to model the way in which the brain performs a particular task. The network is implemented using electronic components or is simulated in software on a digital computer. A neural network is a massively parallel distributed processor made up of simple processing units, which has a natural propensity for storing experiential knowledge and making it available for use. It resembles the brain in two respects:
 Knowledge is acquired by the network from its environment through a learning process.
 Interneuron connection strengths, known as synaptic weights, are used to store the acquired knowledge.
The procedure used to perform the learning process is called a learning algorithm,
the function of which is to modify the synaptic weights of the network in an
orderly fashion to attain a desired design objective.
Why do we use Neural Networks?
Neural networks, with their remarkable ability to derive meaning from
complicated or imprecise data, can be used to extract patterns and detect trends

that are too complex to be noticed by either humans or other computer
techniques. A trained neural network can be thought of as an "expert" in the
category of information it has been given to analyse. This expert can then be used
to provide projections given new situations of interest and answer "what if"
questions.
Other advantages include:
a. Adaptive learning: An ability to learn how to do tasks based on the data given
for training or initial experience.
b. Self-Organization: An ANN can create its own organization or representation
of the information it receives during learning time.
c. Real Time Operation: ANN computations may be carried out in parallel, and
special hardware devices are being designed and manufactured which take
advantage of this capability.
d. Fault Tolerance via Redundant Information Coding: Partial destruction of a
network leads to the corresponding degradation of performance. However,
some network capabilities may be retained even with major network damage.
Benefits of ANN
a. They are extremely powerful computational devices.
b. Massive parallelism makes them very efficient.
c. They can learn and generalize from training data, so there is no need for enormous feats of programming.
d. They are particularly fault tolerant; this is equivalent to the "graceful degradation" found in biological systems.
e. They are very noise tolerant, so they can cope with situations where normal symbolic systems would have difficulty.
f. In principle, they can do anything a symbolic/logic system can do, and more.

STRUCTURE OF ANN
Mathematical Model of a Neuron
A neuron is an information processing unit that is fundamental to the operation of a neural network. The three basic elements of the neuron model are [Fig 3-1]:

a. A set of synaptic weights, each of which is characterized by a strength of its own. A signal xj connected to neuron k is multiplied by the weight wkj. The weight of an artificial neuron may lie in a range that includes negative as well as positive values.
b. An adder for summing the input signals, weighted by the respective weights of
the neuron.
c. An activation function for limiting the amplitude of the output of a neuron. It
is also referred to as squashing function which squashes the amplitude range
of the output signal to some finite value.

Fig 3-1: Model of a Neuron

Accordingly, the adder output vk and the neuron output yk are defined as

vk = Σj=1..p wkj xj                                                             (3-6)

and

yk = φ(vk + θk)                                                                 (3-7)

where θk is the bias (threshold) of neuron k and φ(·) is the activation function.
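A minimal Python sketch of this neuron model, i.e. equations (3-6) and (3-7), using a logistic activation and arbitrary example values (both assumptions made only for illustration):

```python
import numpy as np

def neuron(x, w, theta):
    """Single artificial neuron: weighted sum, bias, squashing activation."""
    v = np.dot(w, x)                              # adder:  v_k = sum_j w_kj * x_j
    return 1.0 / (1.0 + np.exp(-(v + theta)))     # output: y_k = phi(v_k + theta_k)

# Example with arbitrary values (for illustration only).
x = np.array([0.5, -1.0, 2.0])                    # input signals x_j
w = np.array([0.8, 0.2, -0.5])                    # synaptic weights w_kj
print(neuron(x, w, theta=0.1))                    # neuron output y_k
```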

Network Architectures
There are three fundamental different classes of network architectures:


a. Single-layer Feed-Forward Networks
In a layered neural network the neurons are organized in the form of layers. In the simplest form of a layered network, we have an input layer of source nodes that projects onto an output layer of neurons, but not vice versa [Fig 3-2]. This network is strictly of a feed-forward type. In a single-layer network there is only one input and one output layer; the input layer is not counted as a layer since no mathematical calculation takes place there.

Fig 3-2: Single-layer Feed-Forward Network

b. Multilayer Feed-Forward Networks
The second class of Feed-Forward neural networks distinguishes itself by the presence of one or more hidden layers, whose computational nodes are correspondingly called hidden neurons [Fig 3-3].

Fig 3-3: Multi-layer Feed-Forward Network


The function of a hidden neuron is to intervene between the external input and the network output in some useful manner. By adding more hidden layers, the network is enabled to extract higher-order statistics. The input signal is applied to the neurons of the second layer; the output of the second layer is used as the input to the third layer, and so on for the rest of the network.

c. Recurrent Networks
A recurrent neural network has at least one feedback loop. A recurrent network
may consist of a single layer of neurons with each neuron feeding its output
signal back to the inputs of all the other neurons [Fig 3-4]. Self-feedback
refers to a situation where the output of a neuron is fed back into its own input.
The presence of feedback loops has a profound impact on the learning
capability of the network and on its performance.

Fig 3-4: Recurrent Network

Learning Process
By a learning rule we mean a procedure for modifying the weights and biases of a network. The purpose of a learning rule is to train the network to perform some task. Learning rules fall into three broad categories:

a. Supervised learning
The learning rule is provided with a set of training data of proper network
behaviour. As the inputs are applied to the network, the network outputs are
compared to the targets. The learning rule is then used to adjust the weights
and biases of the network in order to move the network outputs closer to the
targets.

b. Reinforcement learning
It is similar to supervised learning, except that, instead of being provided
with the correct output for each network input, the algorithm is only given a
grade. The grade is a measure of the network performance over some
sequence of inputs.

c. Unsupervised learning
The weights and biases are modified in response to network inputs only.
There are no target outputs available. Most of these algorithms perform
some kind of clustering operation. They learn to categorize the input patterns
into a finite number of classes.

BACK PROPAGATION ALGORITHM
Introduction
Multilayer perceptrons have been applied successfully to solve difficult and diverse problems by training them in a supervised manner with a highly popular algorithm known as the error back-propagation algorithm. This algorithm is based on the error-correction learning rule and may be viewed as a generalization of an equally popular adaptive filtering algorithm, the least mean square (LMS) algorithm.
Error back-propagation learning consists of two passes through the different
layers of the network: a forward pass and a backward pass. In the forward pass, an
input vector is applied to the nodes of the network, and its effect propagates
through the network layer by layer. Finally, a set of outputs is produced as the
actual response of the network. During the forward pass the weights of the
networks are all fixed. During the backward pass, the weights are all adjusted in
accordance with an error correction rule. The actual response of the network is
subtracted from a desired response to produce an error signal. This error signal is
then propagated backward through the network, against the direction of synaptic
connections. The weights are adjusted to make the actual response of the network
move closer to the desired response.
A multilayer perceptron has three distinctive characteristics:
a. The model of each neuron in the network includes a nonlinear activation function. A commonly used choice is the sigmoid defined by the logistic function

   y = 1 / (1 + exp(−x))                                                        (3-8)

   Another commonly used function is the hyperbolic tangent form

   y = (1 − exp(−x)) / (1 + exp(−x))                                            (3-9)

   (a short code sketch of these two functions and their derivatives follows this list).

   The presence of nonlinearities is important because otherwise the input-output relation of the network could be reduced to that of a single-layer perceptron.
b. The network contains one or more layers of hidden neurons that are not part of the input or output of the network. These hidden neurons enable the network to learn complex tasks.
c. The network exhibits a high degree of connectivity. A change in the connectivity of the network requires a change in the population of synaptic weights.
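The following small Python sketch (illustrative only) implements these two activation functions together with their derivatives, which are exactly the df(z)/dz terms needed by the back-propagation pass described below:

```python
import numpy as np

def logistic(x):
    """Logistic sigmoid, equation (3-8)."""
    return 1.0 / (1.0 + np.exp(-x))

def logistic_prime(x):
    y = logistic(x)
    return y * (1.0 - y)                 # derivative expressed through the output itself

def tanh_like(x):
    """Equation (3-9); algebraically equal to tanh(x/2)."""
    return (1.0 - np.exp(-x)) / (1.0 + np.exp(-x))

def tanh_like_prime(x):
    y = tanh_like(x)
    return 0.5 * (1.0 - y * y)           # derivative of tanh(x/2)

print(logistic(0.0), tanh_like(0.0))     # 0.5, 0.0
```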
Learning Process
To illustrate the process, a three-layer neural network with two inputs and one output, shown in Fig 3-5, is used. For each neuron, the signal z is the adder output and y = f(z) is the output of the nonlinear element, i.e. the output of the neuron. The training data set consists of input signals (x1 and x2) together with the corresponding target (desired output) y′. Network training is an iterative process: in each iteration the weight coefficients of the nodes are modified using new data from the training set. The symbol wmn represents the weight of the connection between the output of neuron m and an input of neuron n in the next layer, and yn represents the output signal of neuron n.

Fig 3-5: Three layer Neural Network with two inputs and single output

y1 = f1(w11 x1 + w21 x2)                                                        (3-10)

y2 = f2(w12 x1 + w22 x2)                                                        (3-11)

y3 = f3(w13 x1 + w23 x2)                                                        (3-12)

y4 = f4(w14 y1 + w24 y2 + w34 y3)                                               (3-13)

y5 = f5(w15 y1 + w25 y2 + w35 y3)                                               (3-14)

y6 = f6(w46 y4 + w56 y5)                                                        (3-15)

In the next step the network output y6 is compared with the desired output value (the target) y′ found in the training data set. The difference is called the error signal δ of the output-layer neuron:

δ = y′ − y6                                                                     (3-16)

The error signal is then propagated back towards the hidden neurons through the weights of the connections:

δ4 = w46 δ                                                                      (3-17)

δ5 = w56 δ                                                                      (3-18)

δ3 = w34 δ4 + w35 δ5                                                            (3-19)

δ2 = w24 δ4 + w25 δ5                                                            (3-20)

δ1 = w14 δ4 + w15 δ5                                                            (3-21)

When the error signal for each neuron has been computed, the weight coefficients of each neuron input node may be modified. In the formulas below, df(z)/dz represents the derivative of the neuron activation function.
The correction Δwij(n) applied to the weight connecting neuron j to neuron i is defined by the delta rule:

(weight correction) = (learning-rate parameter) × (local gradient) × (input signal of neuron i)

Δwij(n) = η × δi × yj(n)                                                        (3-22)

The local gradient δi(n) depends on whether neuron i is an output node or a hidden node:
a. If neuron i is an output node, δi(n) equals the product of the derivative dfi(z)/dz and the error signal ei(n), both of which are associated with neuron i.
b. If neuron i is a hidden node, δi(n) equals the product of the associated derivative dfi(z)/dz and the weighted sum of the δs computed for the neurons in the next hidden or output layer that are connected to neuron i.
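To tie the pieces together, the sketch below (an illustrative Python implementation of the forward pass and the delta-rule update of equation (3-22) on a tiny assumed network with two inputs, three hidden neurons and one output; the learning rate and training pattern are also assumed) shows one way the weight corrections can be computed:

```python
import numpy as np

def f(v):                                   # logistic activation, equation (3-8)
    return 1.0 / (1.0 + np.exp(-v))

rng = np.random.default_rng(0)
W1 = rng.normal(scale=0.5, size=(3, 2))     # input (2) -> hidden layer (3 neurons)
W2 = rng.normal(scale=0.5, size=(1, 3))     # hidden (3) -> output (1 neuron)
eta = 0.5                                   # learning-rate parameter

x = np.array([0.7, -0.3])                   # one training pattern (assumed)
target = 1.0                                # its desired output y'

for step in range(100):
    # Forward pass: weights are held fixed while the signal propagates.
    y_hidden = f(W1 @ x)                    # hidden-layer outputs
    y_out = f(W2 @ y_hidden)[0]             # network output

    # Backward pass: propagate the error signal and apply the delta rule.
    err = target - y_out                                   # delta = y' - y
    delta_out = err * y_out * (1.0 - y_out)                # local gradient, output node
    delta_hid = (W2[0] * delta_out) * y_hidden * (1.0 - y_hidden)  # hidden nodes

    W2 += eta * delta_out * y_hidden                       # delta-rule updates, eq. (3-22)
    W1 += eta * np.outer(delta_hid, x)

print("trained output:", round(float(f(W2 @ f(W1 @ x))[0]), 3))
```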


Chapter 4

4. CHANNEL EQUALIZATION USING ANNs
INTRODUCTION
STATE OF THE ART
PROPOSED SOLUTION METHODOLOGY
CONCLUSION


Introduction

Designing efficient equalizers for complex, fast-varying channels is an active area of research and development in academia. In the recent past, the use of artificial neural networks (ANNs) in wireless communications has been gaining momentum. Linear equalizers generally employ linear filters with a transversal or lattice structure and an adaptation algorithm such as recursive least squares (RLS), least mean squares (LMS), fast RLS, square-root RLS, gradient RLS, etc. However, linear equalizers do not perform well on channels with deep spectral nulls. ANNs are capable of forming arbitrarily nonlinear decision boundaries to take up complex classification tasks [3, 4, 5 and 6].
Equalization refers to any signal processing technique used at the receiver to combat Inter-Symbol Interference (ISI) in dispersive channels. Standard equalization techniques start by modeling the communication channel as an adaptive filter with a specific transfer function. The equalizer, which is part of the receiver, then estimates the parameters of this unknown transfer function and attempts to undo the effects of the time-varying channel distortion [7]. The equalizer extracts the desired signal by applying an adaptive algorithm based on a neural network (NN), which minimizes the error between the equalizer output and the delayed test signal, as depicted in Fig 4-1.


Fig 4-1: Block diagram of Adaptive Equalizer

To extract the phase characteristics of the channel from the
received data, it is necessary to use higher order statistics of the
received signal. The nonlinear function of the output of the NN
equalizer gives rise to higher order statistics of the received
signal.
State of the Art

Neural equalizers have the potential for significant performance improvements, especially in severely distorted, nonlinear channels [8, 9, 10 and 11]. Artificial Neural Networks are parallel distributed processing systems in which many simple interconnected elements (neurons) simultaneously process information, adapt and learn from past patterns [12, 13, 14 and 15]. Although only capable of performing simple operations themselves, when organized into layers, neurons are collectively capable of performing highly sophisticated operations.
Attractive properties of ANNs that are relevant to the equalization problem at hand include massive parallelism, adaptive processing, self-organization, universal approximation and, most importantly, the capability of tackling highly nonlinear problems.


Proposed solution methodology

Many research papers agree that linear transversal equalizers are not capable of equalizing highly nonlinear channels. Gibson et al. [16] have explicitly stated: "When the channel is non-minimum phase, the decision boundary of equalizer is highly nonlinear and deviates markedly from any decision boundary which can be formed by a linear transversal equalizer."
Considering equalization as a geometric classification problem rather than an inverse-filtering problem, the main objective becomes the separation of the received symbols in the output signal space, whose optimal decision-region boundaries are in general highly nonlinear. The idea is to classify the received signal vectors by partitioning the signal space into decision regions. With this approach to equalization, complete channel inversion is unnecessary, and the problem is tackled using classification techniques.
Artificial Neural Networks (ANNs) can be used in this field to achieve better performance than existing classical methods. Since ANNs are well known for their ability to perform classification tasks by forming complex nonlinear decision boundaries, neural-network-based equalizers have recently been receiving considerable attention as a means of increasing receiver robustness.
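A rough sketch of this classification view of equalization is given below (illustrative Python only: the channel taps, the memoryless nonlinearity, the window length, the decision delay and the use of scikit-learn's MLPClassifier are all assumptions made for the example, not design choices taken from this report). The received samples inside a sliding window form the feature vector and the corresponding delayed transmitted symbol is the class label:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
bits = rng.choice([-1.0, 1.0], size=5000)                  # BPSK symbols

# Assumed dispersive channel followed by a mild memoryless nonlinearity.
lin = np.convolve(bits, [0.35, 0.87, 0.35])[:len(bits)]
rx = lin + 0.1 * lin**2 + 0.05 * rng.standard_normal(len(bits))

order, delay = 4, 1                                        # equalizer window, decision delay
X = np.array([rx[n - order:n][::-1] for n in range(order, len(rx))])
y = (bits[order - delay:len(rx) - delay] > 0).astype(int)  # class label = delayed symbol

split = 4000
mlp = MLPClassifier(hidden_layer_sizes=(9,), max_iter=2000, random_state=0)
mlp.fit(X[:split], y[:split])                              # training mode
ber = np.mean(mlp.predict(X[split:]) != y[split:])         # decision-directed test
print("symbol error rate on held-out data:", ber)
```

A full study along these lines would compare the resulting error rate against that of the linear equalizers surveyed in Chapter 2.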
Conclusion

In this report, neural network architectures and learning methods for solving the problem of channel equalization have been presented, and a neural-network-based equalizer has been proposed. Future work will focus on designing a neural network structure, and implementing a learning algorithm for it, that can equalize time-varying channels with faster convergence and a simpler architecture. All simulations will be implemented in MATLAB.


REFERENCES
1. T. S. Rappaport, "Wireless Communications, Principles and Practice", 2nd Edition, Pearson, 2010.
2. S. Haykin, Neural Networks, A Comprehensive Foundation, 2nd Edition, Englewood Cliffs, N.J.: Prentice Hall, 1999.
3. S. Bang, S. H. Sheu, and J. Bing, "Neural network for detection of signals in communication," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 43, no. 8, pp. 644-655, Aug. 1996.
4. C. P. Lim and R. F. Harrison, "Online pattern classification with multiple neural network systems: An experimental study," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 33, no. 2, pp. 235-247, 2003.
5. B. Widrow and M. A. Lehr, "30 years of adaptive neural networks: Perceptron, madaline and backpropagation," Proc. IEEE, vol. 78, no. 9, pp. 1415-1442, Sep. 1990.
6. S. K. Nair and J. Moon, "A theoretical study of linear and nonlinear equalization in nonlinear magnetic storage channels," IEEE Trans. Neural Netw., vol. 8, no. 5, pp. 1106-1118, Sep. 1997.
7. S. Haykin, Communication Systems, 4th ed., New York: Wiley, 2001.
8. Lee, C. D. Beach, and N. Tepedelenlioglu, "Channel Equalization Using Radial Basis Function Network," in Proc. IEEE Int. Conf. on Neural Networks, 1996, vol. 4, pp. 1924-1928.
9. Peng, C. L. Nikias, and J. G. Proakis, "Adaptive Equalization with Neural Networks: New Multi-Layer Perceptron Structures and Their Evaluation," in Proc. ICASSP '92 - IEEE Int. Conf. Acoustics, Speech, & Signal Processing, 1992, vol. 2, pp. 301-304.
10. Peng, C. L. Nikias, and J. G. Proakis, "Adaptive Equalization for PAM and QAM Signals with Neural Networks," in Proc. 25th Asilomar Conf. on Signals, Systems & Computers, 1991, vol. 1, pp. 496-500.
11. Parisi, E. D. Di Claudio, G. Orlandi, and B. D. Rao, "Fast Adaptive Digital Equalization by Recurrent Neural Networks," IEEE Trans. on Signal Processing, vol. 45, no. 11, pp. 2731-2739, 1997.
12. Albu, A. Mateescu, J. C. M. Mota, and B. Dorizzi, "Adaptive Channel Equalization Using Neural Network," in Proc. ITS '98 - SBT/IEEE Int. Telecommunications Symposium, 1998, vol. 2, pp. 438-441.
13. A. Al-Mashouq and I. S. Reed, "The Use of Neural Nets to Combine Equalization with Decoding for Severe Inter Symbol Interference Channels," IEEE Trans. on Neural Networks, vol. 5, no. 6, pp. 982-988, 1994.
14. Gan, P. Saratchandran, N. Sundararajan, and K. R. Subramanian, "A Complex Valued Radial Basis Function Network for Equalization of Fast Time Varying Channels," IEEE Trans. on Neural Networks, vol. 10, no. 4, pp. 958-960, 1999.
15. Henrique and G. Coelho, "Adaptive channel equalization using EKF-CRTRL neural networks," in Proc. 2002 International Joint Conference on Neural Networks (IJCNN '02), 2002, vol. 2, pp. 1195-1199.
16. G. J. Gibson, S. Siu, and C. F. N. Cowan, "Multilayer Perceptron structures applied to adaptive equalizers for data communications," in Proc. ICASSP '89, May 1989.
17. K. Burse, R. N. Yadav, and S. C. Shrivastava, "Channel Equalization Using Neural Networks: A Review," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 40, no. 3, pp. 352-357, 2010.
18. S. U. H. Qureshi, "Adaptive equalization," Proceedings of the IEEE, vol. 73, no. 9, pp. 1349-1387, September 1985.
19. J. R. Treichler, M. G. Larimore, and J. C. Harp, "Practical Blind Demodulators for High-order QAM signals," Proceedings of the IEEE, Special Issue on Blind System Identification and Estimation, vol. 86, pp. 1907-1926, Oct. 1998.
