
Open Access Research

Stochastic analysis of neural network modeling and identification of nonlinear memoryless MIMO systems

Mohamed Ibnkahla

Author Affiliations

Electrical and Computer Engineering Department, Queen’s University, Kingston, Ontario, K7L 3N6, Canada

EURASIP Journal on Advances in Signal Processing 2012, 2012:179  doi:10.1186/1687-6180-2012-179

The electronic version of this article is the complete one and can be found online at: http://asp.eurasipjournals.com/content/2012/1/179


Received: 6 December 2011
Accepted: 13 July 2012
Published: 21 August 2012

© 2012 Ibnkahla; licensee Springer.

This is an Open Access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Neural network (NN) approaches have been widely applied for modeling and identification of nonlinear multiple-input multiple-output (MIMO) systems. This paper proposes a stochastic analysis of a class of these NN algorithms. The class of MIMO systems considered in this paper is composed of a set of single-input nonlinearities followed by a linear combiner. The NN model consists of a set of single-input memoryless NN blocks followed by a linear combiner. A gradient descent algorithm is used for the learning process. Here we give analytical expressions for the mean squared error (MSE), explore the stationary points of the algorithm, evaluate the misadjustment error due to weight fluctuations, and derive recursions for the mean weight transient behavior during the learning process. The paper shows that in the case of independent inputs, the adaptive linear combiner identifies the linear combining matrix of the MIMO system (to within a scaling diagonal matrix) and that each NN block identifies the corresponding unknown nonlinearity to within a scale factor. The paper also investigates the particular case of linear identification of the nonlinear MIMO system. It is shown in this case that, for independent inputs, the adaptive linear combiner identifies a scaled version of the unknown linear combining matrix. The paper is supported with computer simulations which confirm the theoretical results.

Keywords:
Nonlinear system identification; Neural networks; Gradient descent; Statistical analysis

Introduction

Neural network [1] approaches have been extensively used in the past few years for nonlinear MIMO system modeling, identification, and control, where they have shown very good performance compared to classical techniques [2-6].

If these NN approaches are to be used in real systems, it is important for the algorithm designer and the user to understand their learning behavior and performance capabilities. Several authors have analyzed NN algorithms during the last two decades, which has considerably helped the neural network community to better understand the mechanisms of neural networks [1,7-15]. For example, the authors in [13] studied a simple structure consisting of two inputs and a single neuron. The authors in [8] studied a memoryless single-input single-output (SISO) system identification model for the single-neuron case. In [9] the authors proposed a stochastic analysis of gradient adaptive identification of nonlinear Wiener systems composed of a linear filter followed by a zero-memory nonlinearity. The model was composed of a linear adaptive filter followed by an adaptive parameterized version of the nonlinearity. This study was later generalized [16] to the analysis of stochastic gradient tracking of time-varying polynomial Wiener systems. In [12] the author analyzed NN identification of nonlinear SISO Wiener systems with memory for the case where the adaptive nonlinearity is a memoryless NN with an arbitrary number of neurons. The case of a nonlinear SISO Wiener-Hammerstein system (i.e., an adaptive filter followed by an adaptive zero-memory NN followed by an adaptive filter) was analyzed in [11].

This paper deals with a typical class of nonlinear MIMO systems (Figure  1) which is composed of M inputs, M memoryless nonlinearities, a linear combiner, and L outputs. This corresponds, for example, to MIMO channels used in wireless terrestrial communications [17-22], satellite communications [23,24], amplifier modeling [25], control of nonlinear MIMO systems [6], etc. Recently, a neural network approach has been proposed to adaptively identify the overall input–output transfer function of this class of MIMO systems and to characterize each component of the system (i.e., the memoryless nonlinearities and the linear combiner) [4]. The proposed NN model is composed of a set of memoryless NN blocks followed by an adaptive linear combiner. Each part of the adaptive system aims at identifying the corresponding part in the unknown MIMO system. The algorithm has been successfully applied to system modeling, channel tracking, and fault detection.

Figure 1. Nonlinear MIMO system.

The purpose of this paper is to provide a stochastic analysis of NN modeling of this class of MIMO systems. The paper provides a general methodology that may be used to solve other problems in stochastic NN learning analysis. The methodology consists of splitting the study into simple structures, before studying the complete structure. Here, as a first step we start by analyzing a simple linear adaptive MIMO scheme (consisting of an adaptive matrix) that identifies the nonlinear MIMO system (i.e., problem of linear identification of a nonlinear MIMO system). Then we analyze a nonlinear adaptive system in which the nonlinearities are assumed to be known and frozen during the learning process, only the linear combiner is made adaptive. Finally, the complete adaptive scheme is analyzed taking into account the insights given by the analysis of the simpler structures. In our analytical approach, we derive the general formulas and recursions, which we apply to a case study that we believe is illustrative to the reader.

The paper is organized as follows. The problem statement is given in Section 2. The study of the simple structures is detailed in Section 3. Section 4 presents the analysis for the complete structure. Simulation results and illustrations are given in Section 5. Finally, conclusions and future work are given in Section 6.

Problem statement

Nonlinear MIMO system

The class of nonlinear MIMO systems discussed in this paper is presented in Figure 1. Each input xi(n) (i = 1,…,M) is nonlinearly transformed by a memoryless nonlinearity gi(.). The outputs of these nonlinearities are then linearly combined by an L × M matrix H = [hji] (assumed in this paper to be constant). Matrix H is defined by the unknown system to be identified. For example, in wireless MIMO communication systems, H is the propagation matrix representing the channel between M transmitting antennas and L receiving antennas.

The jth output of the MIMO system is expressed as:

$$ y_j(n) = \sum_{i=1}^{M} h_{ji}\, g_i\big(x_i(n)\big) + N_j(n), \qquad j = 1,\dots,L $$

(1)

where Nj(n) is white Gaussian noise with variance σ0². Let

$$ X(n) = \big[x_1(n),\dots,x_M(n)\big]^T,\quad g\big(X(n)\big) = \big[g_1(x_1(n)),\dots,g_M(x_M(n))\big]^T,\quad Y(n) = \big[y_1(n),\dots,y_L(n)\big]^T,\quad N(n) = \big[N_1(n),\dots,N_L(n)\big]^T $$

The system input–output relationship can be expressed in a matrix form as:

$$ Y(n) = H\, g\big(X(n)\big) + N(n) $$

(2)

Neural Network identification structure and algorithm

The neural network (Figure  2) is composed of M blocks. Each block k has a scalar input xk (n) (k = 1,…,M), N neurons and a scalar output. The output of the kth block is expressed as:

$$ NN_k(n) = \sum_{i=1}^{N} b_{ki}\, f\big(a_{ki}\, x_k(n) + c_{ki}\big) $$

(3)

Figure 2. NN identification structure.

where f is the NN activation function, and aki, cki, and bki are, respectively, the input weight, bias term, and output weight of the ith neuron in the kth block. The output NNk of the kth block is connected to the jth output of the model through weight wjk. The jth output of the model is then expressed as:

$$ \hat{y}_j(n) = \sum_{k=1}^{M} w_{jk}\, NN_k(n) $$

(4)

Weights wjk will be represented by an L × M matrix W = [wjk]. Let

$$ NN\big(X(n)\big) = \big[NN_1(n),\dots,NN_M(n)\big]^T, \qquad \hat{Y}(n) = \big[\hat{y}_1(n),\dots,\hat{y}_L(n)\big]^T $$

Equation (4) can then be expressed in matrix form as:

$$ \hat{Y}(n) = W\, NN\big(X(n)\big) $$

(5)

For the learning process, the NN parameters are updated so as to minimize the sum of the squared errors between the unknown system outputs and the corresponding outputs of the model (Figure 3):

$$ J(n) = \sum_{j=1}^{L} e_j^2(n) $$

(6)

Figure 3. Adaptive learning scheme.

where the error on the jth output is ej(n) = yj(n) − ŷj(n).

The gradient descent recursions for weight adaptation are:

$$ w_{jk}(n+1) = w_{jk}(n) + \mu\, e_j(n)\, NN_k(n) $$

(7)

$$ c_{ki}(n+1) = c_{ki}(n) + \mu\, b_{ki}(n)\, f'\big(a_{ki}(n)\, x_k(n) + c_{ki}(n)\big) \sum_{j=1}^{L} e_j(n)\, w_{jk}(n) $$

(8)

$$ a_{ki}(n+1) = a_{ki}(n) + \mu\, b_{ki}(n)\, x_k(n)\, f'\big(a_{ki}(n)\, x_k(n) + c_{ki}(n)\big) \sum_{j=1}^{L} e_j(n)\, w_{jk}(n) $$

(9)

$$ b_{ki}(n+1) = b_{ki}(n) + \mu\, f\big(a_{ki}(n)\, x_k(n) + c_{ki}(n)\big) \sum_{j=1}^{L} e_j(n)\, w_{jk}(n) $$

(10)

where μ is a small positive constant and f′ represents the derivative of the activation function: f′(u) = df(u)/du.
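
To make the algorithm concrete, the following sketch implements the model (3)-(5) and the stochastic-gradient updates (7)-(10) in NumPy. It is a minimal illustration: the dimensions, initializations, and step size are arbitrary choices, the erf activation anticipates the case study below, and the factor of 2 from the gradient of (6) is absorbed into μ.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)
M, L, N = 2, 2, 3                        # inputs, outputs, neurons per block
mu = 0.005                               # small positive step size

# Block weights: a (input weights), c (biases), b (output weights), all (M, N)
a = rng.normal(scale=0.5, size=(M, N))
c = rng.normal(scale=0.5, size=(M, N))
b = rng.normal(scale=0.5, size=(M, N))
W = rng.normal(scale=0.1, size=(L, M))   # adaptive linear combiner

f = erf
f_prime = lambda u: (2.0 / np.sqrt(np.pi)) * np.exp(-u ** 2)  # d erf / du

def forward(x):
    """x: input vector, shape (M,). Returns pre-activations, block outputs
    NN_k(n) (Eq. 3), and the model output (Eqs. 4-5)."""
    u = a * x[:, None] + c               # (M, N) neuron pre-activations
    nn = (b * f(u)).sum(axis=1)          # (M,)  block outputs
    return u, nn, W @ nn                 # (L,)  model output

def learn_step(x, y):
    """One gradient-descent step on all weights, Eqs. (7)-(10)."""
    global W, a, b, c
    u, nn, y_hat = forward(x)
    e = y - y_hat                        # output errors e_j(n)
    back = W.T @ e                       # sum_j e_j(n) w_jk(n), per block k
    W += mu * np.outer(e, nn)            # combiner update, Eq. (7)
    common = back[:, None] * b * f_prime(u)
    c += mu * common                     # bias updates
    a += mu * common * x[:, None]        # input-weight updates
    b += mu * back[:, None] * f(u)       # output-weight updates
```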

Case study

After deriving the general formulas, it is important to apply them to special cases in order to obtain closed-form expressions of the different recursions that can be illustrated to the reader. We have chosen a case study that we think illustrates our results well. In this case study, the inputs xi(n) are assumed to be uncorrelated zero-mean Gaussian variables with variance σxi². The NN activation function is taken as the erf function. The unknown nonlinear transfer functions are taken from a family of nonlinear functions of the form gi(x) = αi erf(βi x), where αi and βi are positive constants. These nonlinear functions are reasonable models for the amplitude conversion of nonlinear high power amplifiers (HPA) used in digital communications [12,25,26]. Note that other nonlinear functions may be considered; however, explicit closed-form solutions of the different derivations may not then be possible.
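
The case-study signals can be sketched as follows; the erf-family form gi(x) = αi erf(βi x) and the parameter values are the ones assumed above and in the simulations of Section 5.

```python
import numpy as np
from scipy.special import erf

alpha = np.array([1.0, 1.0])        # Section 5 values: alpha_1 = alpha_2 = 1
beta = np.array([1.0, 2.0])         # beta_1 = 1, beta_2 = 2
sigma_x = np.array([1.0, 1.0])      # input standard deviations

def g(x):
    """Memoryless saturating nonlinearities, one per input (HPA-like)."""
    return alpha * erf(beta * x)

def draw_inputs(n, seed=0):
    """n i.i.d. zero-mean Gaussian input vectors X(n), shape (n, M)."""
    rng = np.random.default_rng(seed)
    return rng.normal(scale=sigma_x, size=(n, len(sigma_x)))
```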

Study of simplified structures: Linear adaptation

Before analyzing the full structure, we will analyze the following simplified schemes which will help us understand the complete structure:

1. The adaptive system is composed of an adaptive linear combiner W (Section 3.1).

2. The adaptive system is composed of W and scaled versions of the unknown nonlinearities (Section 3.2).

Linear adaptive system

This section studies the linear adaptive system that tries to model the nonlinear MIMO system (Figure  4):

Figure 4. Linear adaptive system.

Mean weight behavior and Wiener solution

Since matrix W is linear, it will not be able to identify the nonlinear blocks. However, we will see that it is able to identify matrix H to within a diagonal scaling matrix if the inputs are zero-mean and independent.

The gradient descent update of matrix W is expressed as:

$$ W(n+1) = W(n) + \mu\, e(n)\, X^T(n), \qquad e(n) = Y(n) - W(n)\, X(n) $$

(11)

Averaging both sides of (11) and using the standard LMS assumption of small μ [10], we obtain:

$$ E\big(W(n+1)\big) = E\big(W(n)\big)\big(I - \mu\, R_{XX}\big) + \mu\, H\, R_{g(X)X} $$

(12)

where R_XX = E(X(n)X^T(n)) and R_g(X)X = E(g(X(n)) X^T(n)).

By setting the updating gradient term to zero, it can be shown that this equation has a single stationary point (Wiener solution [10]), which is expressed by:

$$ W_0 = H\, R_{g(X)X}\, R_{XX}^{-1} = H\, U, \qquad U = R_{g(X)X}\, R_{XX}^{-1} $$

(13)

Following Equation (12), the mean weights can be expressed as a function of the initial condition as:

$$ E\big(W(n)\big) = \big(W(0) - H U\big)\big(I - \mu\, R_{XX}\big)^n + H U $$

(14)

If μ is sufficiently small, the first term converges to 0, so that E(W(n)) converges to HU.

Hence, the mean weights converge to the Wiener solution:

$$ E\big(W(\infty)\big) = W_0 = H\, U $$

(15)

It can be easily shown that the stability condition on μ is [10]:

$$ 0 < \mu < \frac{2}{\lambda_{\max}} $$

(16)

where λmax is the largest eigenvalue of the covariance matrix RXX.

Note that for zero-mean independent inputs, U is a diagonal matrix:

$$ U = \operatorname{diag}\!\left( \frac{E\big(x_k\, g_k(x_k)\big)}{\sigma_{x_k}^2} \right)_{k=1,\dots,M} $$

(17)

In this case, the linear adaptation allows the identification of matrix H to within a scaling matrix, which depends on the nonlinearities and the input signals. As expected, the scaling matrix reduces to the identity matrix if gk(xk) = xk.
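
As a numerical illustration, U and the Wiener solution (15) can be estimated by simple sample averaging. The matrix H below is a hypothetical example, and g is the erf family of the case study:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(1)
H = np.array([[1.0, 0.5],
              [0.5, 1.0]])                  # hypothetical combining matrix
alpha, beta = np.array([1.0, 1.0]), np.array([1.0, 2.0])
g = lambda X: alpha * erf(beta * X)         # elementwise nonlinearities

X = rng.normal(size=(200_000, 2))           # zero-mean, unit-variance inputs
Rxx = X.T @ X / len(X)                      # sample covariance R_XX
U = (g(X).T @ X / len(X)) @ np.linalg.inv(Rxx)   # U = E{g(X) X^T} R_XX^{-1}
W0 = H @ U                                  # Wiener solution, Eq. (15)

print(np.round(U, 3))   # near-diagonal for independent zero-mean inputs
```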

Application to the case study:

For the particular nonlinear functions given in the case study (see Section 2.3), it is easy to show:

$$ E\big(x_k\, g_k(x_k)\big) = \frac{2\, \alpha_k\, \beta_k\, \sigma_{x_k}^2}{\sqrt{\pi}\, \sqrt{1 + 2\, \beta_k^2\, \sigma_{x_k}^2}} $$

(18)

The mean weight transient recursions are expressed as:

$$ E\big(w_{jk}(n)\big) = \big(w_{jk}(0) - h_{jk}\, u_{kk}\big)\big(1 - \mu\, \sigma_{x_k}^2\big)^n + h_{jk}\, u_{kk} $$

(19)

Matrix U reduces to the following diagonal matrix:

$$ U = \operatorname{diag}\!\left( \frac{2\, \alpha_k\, \beta_k}{\sqrt{\pi}\, \sqrt{1 + 2\, \beta_k^2\, \sigma_{x_k}^2}} \right)_{k=1,\dots,M} $$

(20)

Transient MSE and Wiener MSE

The transient MSE is determined by:

$$ \xi(n) = E\!\left( \sum_{j=1}^{L} e_j^2(n) \right) $$

(21)

where:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M31]

(22)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M32]

Using the independence of noise and weights at time n, we get:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M33]

(23)

The total MSE is therefore expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M34]

(24)

Wiener MSE:

The Wiener MSE, ζ0, is the minimum MSE that can be reached by the system when W is equal to the Wiener solution W0 = HU. It can easily be shown that:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M36]

(25)

It is clear from this equation that if the unknown functions are linear, then the Wiener MSE reduces to the noise power. The MSE is always larger than ζ0 because of the misadjustment error introduced by the weight fluctuations.

Now we can write the MSE as a function of the Wiener MSE:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M37]

(26)

Let the instantaneous deviation of the matrix weights with respect to the Wiener solution be denoted by:

$$ V(n) = W(n) - W_0 $$

(27)

We have:

$$ \xi(n) = \zeta_0 + \sum_{j=1}^{L} E\big(V_j^T(n)\, X(n)\, X^T(n)\, V_j(n)\big) $$

(28)

This expression is similar to that of the well-known LMS algorithm [10], and can be evaluated as the sum of the minimum error and excess error (or misadjustment) as:

$$ \xi(n) = \zeta_0 + \sum_{j=1}^{L} \operatorname{tr}\!\big(R_{XX}\, K_{V_jV_j}(n)\big) $$

(29)

where K_VjVj(n) = E(Vj(n) Vj^T(n)) is the covariance matrix of the weight fluctuations for the jth output, Vj(n) being the jth row of V(n).

The misadjustment is expressed as:

$$ \Delta(n) = \sum_{j=1}^{L} \operatorname{tr}\!\big(R_{XX}\, K_{V_jV_j}(n)\big) $$

(30)

At convergence, we have:

$$ \xi(\infty) = \zeta_0 + \Delta(\infty) $$

(31)

Derivation of the misadjustment:

From Equation (11) it is easy to show that the weight fluctuations follow the recursion:

$$ V(n+1) = V(n)\big(I - \mu\, X(n)\, X^T(n)\big) + \mu\, e_0(n)\, X^T(n), \qquad e_0(n) = Y(n) - W_0\, X(n) $$

(32)

Taking the mean of this equation and applying the orthogonality principle between the input vector and the Wiener error, we get:

$$ E\big(V(n+1)\big) = E\big(V(n)\big)\big(I - \mu\, R_{XX}\big) $$

(33)

Thus, as expected, if μ is sufficiently small, E(V(n)) converges to 0.

Similarly, for each vector Vj we can obtain the following recursion:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M46]

(34)

The evaluation of the covariance matrix of the weight fluctuations is obtained by multiplying each side of Equation (34) by its transpose and averaging:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M48]

(35)

These expectations are derived in Appendix III, which yields:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M49]

(36)

Taking into account that E(Vj(∞)) = 0, KVjVj can be obtained by solving the following equation:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M50]

(37)

This expression holds for any input signal. It can be simplified if R_XX = σx²I (inputs of equal variance). In this case we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M52]

(38)

It is now easy to determine the total misadjustment:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M53]

(39)

Note that, as expected, in the case of linear functions Δ(∞) reduces to:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M54]

(40)

The additional terms are due to the nonlinearities and they should be calculated specifically for each nonlinearity.
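The steady-state misadjustment can also be checked by direct simulation: run the linear update (11) on the nonlinear system and compare the steady-state MSE with the Wiener MSE estimated at W0. A minimal sketch, reusing the hypothetical H and erf-family g of the earlier sketches:

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(2)
H = np.array([[1.0, 0.5], [0.5, 1.0]])      # hypothetical H
alpha, beta = np.array([1.0, 1.0]), np.array([1.0, 2.0])
sigma0, mu = 0.001, 0.01
g = lambda x: alpha * erf(beta * x)

# Wiener solution W0 = H U, estimated by sample averaging as before
Xb = rng.normal(size=(200_000, 2))
U = (g(Xb).T @ Xb / len(Xb)) @ np.linalg.inv(Xb.T @ Xb / len(Xb))
W0 = H @ U

W = np.zeros((2, 2))
mse, zeta0 = [], []
for n in range(100_000):
    x = rng.normal(size=2)
    y = H @ g(x) + sigma0 * rng.normal(size=2)
    e = y - W @ x
    W += mu * np.outer(e, x)                # LMS update, Eq. (11)
    if n >= 80_000:                         # steady-state samples
        mse.append(e @ e)
        e0 = y - W0 @ x
        zeta0.append(e0 @ e0)

excess = np.mean(mse) - np.mean(zeta0)      # misadjustment estimate
```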

Application to the case study:

The MSE is expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M55]

(41)

The Wiener MSE is expressed in this case as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M56]

(42)

Adaptive W, the nonlinearities are frozen and known with scale factors

In this section, matrix W is adaptive, while the nonlinearities are frozen and known to within scale factors (Figure 5).

Figure 5. Simplified scheme: the nonlinearities are frozen and assumed known to within a scaling factor.

Mean weight behavior and stationary points

The gradient descent update of matrix W is expressed as:

$$ W(n+1) = W(n) + \mu\, e(n)\, \big(\Omega\, g(X(n))\big)^T $$

(43)

where Ω = diag(η1, …, ηM) is the diagonal matrix of the known scale factors.

Averaging both sides of (43) and using the standard LMS assumption of small μ, we obtain:

$$ E\big(W(n+1)\big) = E\big(W(n)\big)\big(I - \mu\, \Omega\, R_{g(X)g(X)}\, \Omega\big) + \mu\, H\, R_{g(X)g(X)}\, \Omega $$

(44)

These recursions have a single stationary point (Wiener solution) which is:

$$ W_0 = H\, \Omega^{-1} $$

(45)

Following Equation (44), the mean weight behavior can be expressed as a function of the initial condition as:

$$ E\big(W(n)\big) = \big(W(0) - H\,\Omega^{-1}\big)\big(I - \mu\, \Omega\, R_{g(X)g(X)}\, \Omega\big)^n + H\,\Omega^{-1} $$

(46)

Hence, if μ is sufficiently small, it can be shown that the mean weights converge to the Wiener solution:

$$ E\big(W(\infty)\big) = W_0 = H\, \Omega^{-1} $$

(47)

The stability condition on μ is: 0 < μ < 2/λmax,

where λmax is the largest eigenvalue of the covariance matrix Ω²R_g(X)g(X).

Thus, if each nonlinear function gk(.) is known to within a scaling factor ηk, then the weights hjk will be identified by wjk (to within the inverse of the scaling factor).
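
A minimal sketch of this scheme, with hypothetical scale factors ηk and the same hypothetical H and erf-family g as before; after convergence, WΩ should be close to H, i.e., W ≈ HΩ⁻¹ as in (47):

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(3)
H = np.array([[1.0, 0.5], [0.5, 1.0]])      # hypothetical H
alpha, beta = np.array([1.0, 1.0]), np.array([1.0, 2.0])
eta = np.array([2.0, 0.5])                  # hypothetical known scale factors
g = lambda x: alpha * erf(beta * x)

W, mu, sigma0 = np.zeros((2, 2)), 0.01, 0.001
for n in range(200_000):
    x = rng.normal(size=2)
    z = eta * g(x)                          # Z(n) = Omega g(X(n)), frozen part
    y = H @ g(x) + sigma0 * rng.normal(size=2)
    e = y - W @ z
    W += mu * np.outer(e, z)                # update (43): only W adapts

print(np.round(W * eta, 3))                 # W Omega ~ H, so W ~ H Omega^{-1}
```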

MSE

We have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M64]

(48)

where:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M65]

(49)

Using the independence of noise and weights at time n, we obtain:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M66]

The MSE is therefore expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M67]

(50)

Wiener MSE:

The Wiener MSE can be easily expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M68]

(51)

Therefore, the Wiener MSE is equal to the noise floor: there are no terms due to the nonlinearities. This is expected, since the nonlinearities are known to within the scaling matrix Ω (we have seen that the scaling matrix is canceled by W0, since W0 = HΩ⁻¹).

Let Z(n) = Ω g(X(n)). We can then express the MSE as a function of ζ0, the weight fluctuation matrix V(n) = W(n) − W0, and Z(n):

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M69]

(52)

Similarly to Equation (29), the misadjustment is expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M70]

(53)

The steady state MSE is then expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M71]

(54)

Derivation of the misadjustment:

It is easy to show that the weight fluctuations follow the recursion:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M72]

(55)

Taking the mean of this equation and applying the orthogonality principle between the input vector and the Wiener error, we obtain:

$$ E\big(V(n+1)\big) = E\big(V(n)\big)\big(I - \mu\, R_{ZZ}\big) $$

(56)

Thus, as expected, if μ is sufficiently small, E(V(n)) converges to 0.

For each vector Vj we have similar recursions:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M74]

(57)

The evaluation of the covariance matrix of the weight fluctuations is obtained by multiplying each side of Equation (57) by its transpose and averaging:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M76]

(58)

In a similar way as in Appendix III, KVjVj (∞) can be obtained by solving the following equation:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M77]

(59)

This expression cannot be simplified further because RZZ is not necessarily of the form [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M78].

Therefore, tr(RZZKVjVj (∞)) should be calculated for each nonlinearity and for each Ω.

It is interesting to study the case where the nonlinearities are known with the same scaling factor, i.e., Ω = ηI. In this case, if the input vectors are independent and the outputs of the nonlinearities are zero-mean with equal variance σz², we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M80]

(60)

As expected, the total misadjustment reduces to:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M81]

(61)

Here the value of the misadjustment is similar to that of linear identification of a linear system (LMS algorithm). This is expected since in this case there are no errors due to the approximation of the nonlinearities.

Case study

For the case study, it is easy to show that [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M82]. This yields:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M83]

(62)

Study of the full structure

This section deals with the full structure (Figures  2, 3). All the NN and matrix W weights are updated.

Mean weight transient behavior

We adopt the following notation for the weights: [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M84].

The update of matrix W is expressed as:

$$ W(n+1) = W(n) + \mu\, e(n)\, NN^T\big(X(n)\big) $$

(63)

Averaging both sides of (63) and using the standard LMS assumption of small μ, we obtain:

$$ E\big(W(n+1)\big) = E\big(W(n)\big) + \mu\Big( H\, R_{g(X)NN(X)}(n) - E\big(W(n)\big)\, R_{NN(X)NN(X)}(n) \Big) $$

(64)

where R_g(X)NN(X)(n) = E(g(X(n)) NN^T(X(n))) and R_NN(X)NN(X)(n) = E(NN(X(n)) NN^T(X(n))).

These matrices are time-dependent since they depend on the NN block weights which are updated through time.

Using the scalar notation we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M88]

(65)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M89]

With these notations we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M90]

(66)

For the NN block weights we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M91]

(67)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M92]

(68)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M93]

(69)

These equations hold for any nonlinearity. In the following, we calculate them explicitly for the case study described in Section 2.3.

Application to the case study:

Since the inputs are independent and zero-mean, we have Ki(akm, bkm) = 0 for k ≠ i, and (see Appendix I)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M94]

(70)

On the other hand, we have [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M95], and (see Appendix I)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M96]

(71)

Inserting these expressions into Equations (66)-(69), we obtain:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M97]

(72)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M98]

(73)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M99]

(74)

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M100]

(75)

The explicit expressions of the different derivatives are detailed in Appendix II.

Stationary points

We obtain the stationary points by setting to 0 the expectations of the updating gradient terms in (64) and (67)-(69).

For W, we obtain:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M101]

(76)

For cki we obtain the equations:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M102]

(77)

For aki we obtain the equations:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M103]

(78)

For bki we obtain:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M104]

(79)

The above equations are nonlinear in the NN variables. They can be solved numerically, but they are very difficult to solve analytically.

Convergence of the algorithm to the stationary points:

It is always interesting to show whether an algorithm is capable of converging to its stationary points or not. In our case it is difficult to establish this, since the updating equations of the weights are nonlinear, except for W.

In the case where the NN weights are frozen we can establish the convergence condition for W.

In this case we have:

$$ E\big(W(n+1)\big) = E\big(W(n)\big)\big(I - \mu\, R_{NN(X)NN(X)}\big) + \mu\, H\, R_{g(X)NN(X)} $$

(80)

The covariance matrices are fixed, since in this case the NN weights are frozen.

E(W(n)) can be expressed as a function of the initial condition as:

$$ E\big(W(n)\big) = \big(W(0) - W_0\big)\big(I - \mu\, R_{NN(X)NN(X)}\big)^n + W_0 $$

(81)

If μ is sufficiently small, the steady state solution to (81) is:

$$ E\big(W(\infty)\big) = W_0 = H\, R_{g(X)NN(X)}\, R_{NN(X)NN(X)}^{-1} = H\, U $$

(82)

Hence, the mean weights converge to the stationary point W0, and the stability condition on μ is:

$$ 0 < \mu < \frac{2}{\lambda_{\max}} $$

(83)

where λmax is the largest eigenvalue of the correlation matrix R_NN(X)NN(X).

Application to the case study:

For the case study it can be shown that U reduces to a diagonal matrix:

$$ U = \operatorname{diag}\big(\gamma_1, \dots, \gamma_M\big) $$

(84)

where:

$$ \gamma_k = \frac{E\big(g_k(x_k)\, NN_k(x_k)\big)}{E\big(NN_k^2(x_k)\big)} $$

(85)

This indicates that the weights wjk are scaled versions of the unknown weights hjk; the scale factor γk is the same for all the weights connecting the kth NN block to the outputs, and it depends only on the weights of block k. If the error is sufficiently small, the kth NN block approximates the kth nonlinearity to within the inverse of the scale factor.
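
Once the block weights have been learned (e.g., with the first code sketch), the scale factor γk of (85) can be estimated by sample averaging. A sketch, assuming the erf-family nonlinearities of the case study; a, b, c are the trained block weights:

```python
import numpy as np
from scipy.special import erf

def block_output(k, x, a, b, c):
    """Output NN_k(x) of the kth block for an array of scalar inputs x."""
    return (b[k] * erf(np.outer(x, a[k]) + c[k])).sum(axis=1)

def gamma_hat(k, a, b, c, alpha, beta, sigma=1.0, n=200_000, seed=4):
    """Monte Carlo estimate of gamma_k = E[g_k(x) NN_k(x)] / E[NN_k(x)^2]."""
    x = np.random.default_rng(seed).normal(scale=sigma, size=n)
    nn = block_output(k, x, a, b, c)
    gk = alpha[k] * erf(beta[k] * x)     # assumed erf-family nonlinearity
    return np.mean(gk * nn) / np.mean(nn * nn)

# gamma_k * NN_k(x) should then track g_k(x) over the input range.
```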

MSE expression

The transient MSE is determined by:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M111]

(86)

where:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M112]

(87)

which can be expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M113]

(88)

Let Gil = E(gi(xi(n)) gl(xl(n))). Using the notation of Section 4.1, we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M114]

(89)

Application to the case study:

It can be easily shown that:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M115]

(90)

In Equation (90), the first term represents the noise power, the second term is the signal power of the jth MIMO output, the third term is the sum of the individual contributions of the neurons weighted by the W and H weights, and the fourth term represents the sum of the coupling terms between neurons inside the same block, weighted by W. Note that, since the inputs are zero-mean and independent, there are no coupling terms between neurons in different blocks (as in Eq. (89)).

The total MSE is then expressed as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M117]

(91)

Case of frozen NN weights:

It is interesting to see the behavior of the MSE in the case where the NN weights are frozen.

In this case we have:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M118]

(92)

Here the minimum MSE depends on the noise floor and on the NN approximation error of the nonlinearities. It is clear from this equation and from Section 3.2 that, if the NN blocks ideally identify the nonlinearities (to within scale factors), then ζ0 reduces to the noise floor.

The MSE can be written as a function of ζ0 as:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M119]

(93)

The steady state MSE is in this case:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M120]

(94)

The misadjustment can be derived as in Sections 3.1 and 3.2: we obtain an equation similar to (53), with RZZ replaced by R_NN(X)NN(X). The equation cannot be simplified further.

It is interesting to notice, however, that if the NN blocks perfectly identify the nonlinearities and if the conditions stated before Equation (60) are fulfilled, then:

[View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M121]

(95)

Simulation examples

In this section we present some simulation results, applied to the case study described in Section 2.3. In these simulations, we have considered a 2 × 2 MIMO system (i.e., M = L = 2). For the parameterized nonlinearities we have chosen α1 = α2 = 1, β1 = 1, β2 = 2. Unless otherwise specified, the inputs are uncorrelated zero-mean white Gaussian processes with σxi = 1. In the simulations, the unknown combining matrix was fixed and taken as [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M122]. For example, in a MIMO communication system, H can be seen as the propagation matrix between 2 transmitting antennas and 2 receiving antennas.

Linear adaptation

For the linear adaptation case (Section 3.1), the adaptive system is composed of a 2 × 2 matrix W. For the noise we have taken σ0 = 0.001. The mean weight recursions and the MSE transient behavior (Figures 6, 7) have been estimated over 20 Monte Carlo (MC) simulations and compared to the theoretical derivations (Equations (19) and (41)). This number of MC simulations already shows an excellent fit between theory and MC estimation, which confirms the validity of the different assumptions made. A larger number of MC simulations smooths the curves further, but the conclusions remain the same.
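
The Monte Carlo protocol itself is straightforward: average the instantaneous squared error over independent runs of the linear adaptation (11). A sketch using illustrative values (the combining matrix and run length below are stand-ins, not the Section 5 settings):

```python
import numpy as np
from scipy.special import erf

H = np.array([[1.0, 0.5], [0.5, 1.0]])      # hypothetical H
alpha, beta = np.array([1.0, 1.0]), np.array([1.0, 2.0])
sigma0, mu = 0.001, 0.01
g = lambda x: alpha * erf(beta * x)

def one_run(seed, n_iter=20_000):
    rng = np.random.default_rng(seed)
    W, curve = np.zeros((2, 2)), np.empty(n_iter)
    for n in range(n_iter):
        x = rng.normal(size=2)
        y = H @ g(x) + sigma0 * rng.normal(size=2)
        e = y - W @ x
        W += mu * np.outer(e, x)            # linear adaptation, Eq. (11)
        curve[n] = e @ e
    return curve

mse_curve = np.mean([one_run(s) for s in range(20)], axis=0)  # 20 MC runs
```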

Figure 6. Smoothed MSE vs. iteration number - Linear adaptation case.

Figure 7. Mean weight transient behavior - Linear adaptation case.

Matrix W converges to a scaled version of H: [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M123]. Note the typical behavior of the LMS algorithm: a single time constant controls the transient part of the learning curve and the mean weight curve. This is fundamentally different from the full NN system learning, which is governed by several time constants and presents plateau regions (Section 5.2). It should be noted that the steady-state MSE is high because the nonlinearities are not modeled (they are effectively approximated by the identity function) (Equation (25)).

MSE surface for the full NN algorithm

We now move to the study of the full NN algorithm. In this simulation, we have taken N = 3 neurons in each of the two NN blocks. The learning rate was fixed to μ = 0.0045. Figure 8 shows the MSE surface (i.e., Eq. (90), with no time dependence) as a function of w11 and w21 (the other parameters were fixed). It is clear that the MSE is quadratic in w11 and w21. It presents a single global minimum (as shown in Equations (76) and (84)).

Figure 8. MSE surface, MSE(w11, w21).

Figure 9 (resp. Figure 10) shows the MSE surface as a function of a11 and c21 (resp. a11 and a12) (the other parameters were fixed). Note the flat areas (plateau regions) around the minima of the MSE surface. This explains the slow evolution of the NN weights when the algorithm gets close to its convergence point.

Figure 9. MSE surface, MSE(a11, c21).

Figure 10. MSE surface, MSE(a11, a12).

The MSE evolution during the learning process (Equation (90)) has been compared to 20 MC estimations (Figure 11a): the theory shows a very good fit with the simulation results. In the figure, we notice that the MSE presents several phases (each controlled by a time constant), each ending in a plateau region where the MSE decreases very slowly. This is a typical behavior of the backpropagation algorithm [1], which is fundamentally different from that of the linear adaptation scheme (Figure 6). Here the MSE is much smaller. This is expected, since the additional MSE due to the nonlinearities (Eq. (92)) is highly reduced because the NN blocks have correctly identified the unknown nonlinearities (Figures 12, 13). Here we are in a situation close to that of Section 3.2 (Equations (51)-(52)).

Figure 11. a: MSE during the learning process (theory and simulation), σ0 = 0.001 and μ = 0.0045. b: MSE for different values of the noise variance, with learning rate μ = 0.0045 (theoretical results). c: MSE for different values of the learning rate μ (noise variance σ0 = 0.001), theoretical results.

Figure 12. Identification of g1(x) by the 1st NN block.

Figure 13. Identification of g2(x) by the 2nd NN block.

Figure  11b shows the MSE evolution during the learning process for different values of the noise variance σ0. It can be seen that, as the noise variance decreases, the MSE decreases. However, below a certain value of σ0 (here σ0=0.0005), the MSE curves are almost identical. This is because in this case, the weight misadjustment error (for the linear part) and the nonlinear approximation error (of the nonlinear memoryless part) are much higher than the error caused by the presence of noise (see Eqs. 92-93).

Figure  11c investigates the influence of the learning rate μ. It can be seen that as μ increases (up to μ=0.002), the algorithm is faster and the MSE is lower at the end of the simulation time. However, for μ>0.002, as μ increases, the algorithm is faster at the beginning of the learning process, but the MSE is higher at the end of the simulation time. This is due to the misadjustment error which is higher for higher μ (see, e.g., Eq. 95).

Mean weight transient behavior for the full NN algorithm

Here we keep the system described in Section 5.2. The mean weight recursions for the linear combiner W and the two NN blocks are shown in Figures 14, 15, 16, for both theory and MC estimations. The theoretical and estimated curves are indistinguishable, which confirms the validity of the different assumptions made in Sections 4.1 and 4.2.

Figure 14. Mean weight behavior: Matrix W (theory and simulation).

Figure 15. Mean weight behavior: First NN block (theory and simulation).

Figure 16. Mean weight behavior: Second NN block (theory and simulation vs. iterations).

Notice that, in Figure 14, the W weights evolve quickly at the beginning of the learning process (with values approaching H×U(n), where U is a diagonal matrix). They then evolve slowly until the end of the learning process. The slow evolution is explained by the plateau regions of the MSE surface. At the end of this simulation, matrix U was close to a diagonal matrix: [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M124] (and [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M125]). This result is expected, since the inputs are uncorrelated (Equations (84)-(85)).

Figures 12, 13 show that functions g1(x) and g2(x) have been correctly identified by the corresponding NN blocks (the NN functions are normalized by the scaling factors γ1 = 1/1.2702 and γ2 = 1/1.0946, respectively).

Impact of correlated inputs

In the simulations below we study the impact of correlated inputs. The input signal vector is chosen here as a 2D Gaussian process with a covariance matrix of the form [View MathML: http://asp.eurasipjournals.com/content/2012/1/179/mathml/M126]. The number of neurons in each NN block was taken as N = 5. The learning rate was μ = 0.0075. We have run several simulations for different values of the cross-correlation ρ. Note that ρ = 0 corresponds to independent inputs, and ρ = 1 corresponds to identical inputs (i.e., x1 = x2). The values of matrix U and the MSE after n = 2×10⁵ iterations are shown in Table 1. It can be seen from Table 1 that, in practice, matrix U remains very close to a diagonal matrix, even for high correlation between the inputs. This indicates that the system is capable of correctly identifying the nonlinearities even when the inputs are highly correlated. The identification performance for the cases ρ = 0.6 and ρ = 0.99 is illustrated in Figures 17, 18 and 19, 20, respectively. As expected, the MSE increases as the correlation between the inputs increases (Table 1). When ρ = 1 (i.e., the two inputs are the same) the system is capable of correctly identifying the overall MIMO input-output transfer function. However, in this case, it is not capable of separating the nonlinearities (as U is not diagonal). The reason is that the system is then seen by the learning algorithm as a 1 × 2 SIMO system, which has several equivalent structures. Figure 21 shows an example of two equivalent structures. Therefore, the adaptive system is structurally unable to separate the nonlinearities. It is worth noting that for ρ = 0.999 the inputs look like noisy versions of each other (i.e., this is equivalent to a 1 × 2 system identification problem with noisy inputs). Thus, the MSE for ρ = 0.999 is larger than the MSE for ρ = 1.
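
Correlated input pairs with a prescribed cross-correlation ρ can be generated from a Cholesky factor of the covariance matrix; this is one standard construction (valid for ρ < 1; ρ = 1 amounts to feeding the same signal to both inputs):

```python
import numpy as np

def correlated_inputs(n, rho, sigma=1.0, seed=0):
    """n zero-mean Gaussian input pairs with cross-correlation rho (|rho| < 1)."""
    cov = sigma ** 2 * np.array([[1.0, rho], [rho, 1.0]])
    Lc = np.linalg.cholesky(cov)            # cov = Lc Lc^T
    z = np.random.default_rng(seed).normal(size=(n, 2))
    return z @ Lc.T                         # rows are input vectors X(n)

X = correlated_inputs(100_000, rho=0.6)
print(np.round(np.corrcoef(X.T), 3))        # empirical correlation ~ rho
```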

Table 1. Effect of correlated inputs

Figure 17. Identification of g1(x), case ρ = 0.6.

Figure 18. Identification of g2(x), case ρ = 0.6.

Figure 19. Identification of g1(x), case ρ = 0.99.

Figure 20. Identification of g2(x), case ρ = 0.99.

Figure 21. Example of two equivalent structures when the inputs are the same.

Conclusion and future work

The paper provides a statistical analysis of NN modeling and identification of a class of nonlinear MIMO systems. The study investigates the MSE, mean weight behavior, stationary points, misadjustment error, and stability conditions. The unknown system is composed of a set of single-input memoryless nonlinearities followed by a combining matrix. The NN model is composed of a set of single-input memoryless NN blocks followed by an adaptive linear combiner. The paper is supported by simulation results which show good agreement between the theoretical recursions and the MC simulations. Future work will focus on three research directions. The first will exploit the theoretical findings in order to express the effect of the number of neurons on the transient and steady-state behavior of the algorithm. The second will investigate the case where matrix H is time-varying and/or has memory (this may have applications, for example, in adaptive control of nonlinear dynamical MIMO systems). Finally, we will study the algorithm behavior and performance for specific inputs (such as space-time coded signals used in wireless communications) and their impact on the system capacity.

Appendix I

1) Calculation of Fkk

Let x1 and x2 be two zero-mean Gaussian variables such that [Eq. M133]. Therefore, [Eq. M134]. Using Price's theorem we have: [Eq. M135]. Let [Eq. M136]. Then: [Eq. M137].
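
For reference, Price's theorem in the separable form used here states that, for zero-mean jointly Gaussian x1 and x2 with covariance ρ = E[x1x2] and differentiable f1, f2 (standard result, our notation):

    \frac{\partial}{\partial\rho}\, E\!\left[f_1(x_1)\, f_2(x_2)\right]
      = E\!\left[f_1'(x_1)\, f_2'(x_2)\right],
    \qquad \rho = E[x_1 x_2].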

Thus, using the fact that x1 and x2 are uncorrelated when ρ = 0, we have [Eq. M138], which gives [Eq. M139]. We have:

[Eq. M140]

where [Eq. M141]. Combining the terms in the exponentials and completing the squares, the integrals can be calculated:

[Eq. M142]

Note that in the biasless case (i.e., all the bias terms are set to 0) this expression reduces to:

[Eq. M143]

The integral is then simple to calculate:

[Eq. M144]

On the other hand, since [Eq. M145], it follows that [Eq. M146]. When the bias terms are not set to 0, a Taylor series expansion in the bias terms can be used to avoid calculating the integral.
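
As a sanity check, Price's theorem can be verified numerically. The sketch below (our illustration; the tanh nonlinearities and unit variances are arbitrary choices, not the paper's) compares a finite-difference estimate of the left-hand side with a Monte Carlo estimate of the right-hand side:

    import numpy as np

    rng = np.random.default_rng(1)
    n, rho, eps = 2_000_000, 0.4, 1e-2

    # Common random numbers so the finite difference is smooth in rho
    z = rng.standard_normal((n, 2))

    def pair(r):
        # Jointly Gaussian, zero mean, unit variances, covariance r
        return z[:, 0], r * z[:, 0] + np.sqrt(1.0 - r**2) * z[:, 1]

    def Eff(r):
        x1, x2 = pair(r)
        return np.mean(np.tanh(x1) * np.tanh(x2))

    lhs = (Eff(rho + eps) - Eff(rho - eps)) / (2.0 * eps)   # d/d(rho) E[f f]
    x1, x2 = pair(rho)
    rhs = np.mean((1.0 - np.tanh(x1)**2) * (1.0 - np.tanh(x2)**2))
    print(lhs, rhs)   # the two estimates agree to Monte Carlo accuracy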

2) Calculation of K

[Eq. M147]

The inside integral can be eliminated by integrating by parts with respect to x. The integral is then evaluated by combining the terms in the exponentials and completing the squares. This yields:

[Eq. M148]

Again, a Taylor series expansion can be used to simplify this expression. Note that in the biasless case we have:

[Eq. M149]

Appendix II

The derivatives needed to compute the different recursions are expressed as follows:

[Eq. M150]
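
The exact expressions are those in [Eq. M150]. For a generic single-input block of the form s_k(x_k) = Σ_j a_{kj} f(b_{kj} x_k + c_{kj}) (the parameterization assumed in our earlier sketch, not necessarily the paper's exact one), the chain rule gives:

    \frac{\partial s_k}{\partial a_{kj}} = f(b_{kj} x_k + c_{kj}), \qquad
    \frac{\partial s_k}{\partial b_{kj}} = a_{kj}\, f'(b_{kj} x_k + c_{kj})\, x_k, \qquad
    \frac{\partial s_k}{\partial c_{kj}} = a_{kj}\, f'(b_{kj} x_k + c_{kj}).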

Appendix III

[Eq. M151]

(96)

The calculations are similar to those in the Appendix of [9]; the main difference is that here we deal with a multidimensional input. We therefore follow the same methodology as in [9]. Following [10], the second-to-last expectation in (96) can be calculated as:

[Eq. M152]

(97)

The first expectation is expressed as:

[Eq. M153]

(98)

The first term is zero (orthogonality principle).
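
For reference, the orthogonality principle states that, at the optimum, the estimation error is uncorrelated with the data used to form the estimate; in generic notation, with optimal error e_opt(n) and regressor vector s(n):

    E\!\left[e_{\mathrm{opt}}(n)\,\mathbf{s}(n)\right] = \mathbf{0}.

The second term is: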

[Eq. M154]

(99)

The middle term is zero (zero-mean white noise); the last expectation is:

[Eq. M155]

(100)

The first expectation in Eq. (99), [Eq. M156], involves the nonlinearity gj(xj) and should be evaluated explicitly. The remaining expectation in (96) is [Eq. M157]:

[Eq. M158]

(101)

We have

[Eq. M159]

(102)

The first term in (102) is:

[Eq. M160]

Equation (96) is then expressed as:

[Eq. M161]

Endnotes

This work has been supported by the Natural Sciences and Engineering Research Council of Canada (NSERC).

The time index of the weights has been omitted from the right-hand side of the equations to make them easier to read.

Competing interests

The author declares that he has no competing interests.

References

  1. S Haykin, Neural Networks: A Comprehensive Foundation (Prentice Hall, 1999)

  2. Y Gao, M Er, Online adaptive fuzzy neural identification and control of a class of MIMO nonlinear systems. IEEE Trans. Fuzzy Systems, 462–476 (2003)

  3. SS Ge, C Wang, Adaptive neural control of uncertain nonlinear MIMO systems. IEEE Trans. Neural Networks 15, 674–692 (2004)

  4. M Ibnkahla, A Al-Hinai, Adaptive modeling and identification of nonlinear MIMO channels using neural networks. Adaptive Signal Processing in Wireless Communications, ed. by Ibnkahla M (CRC Press, Boca Raton, FL, USA, 2008)

  5. KS Narendra, K Parthasarathy, Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Networks 1, 4–27 (1990)

  6. H Xu, P Ioannou, Robust adaptive control for a class of MIMO nonlinear systems with guaranteed error bounds. IEEE Trans. Automatic Control, 718–742 (2003)

  7. S Amari, Mathematical foundations of neurocomputing. Proc. IEEE 78(9), 1443–1463 (September 1990)

  8. NJ Bershad, M Ibnkahla, F Castanié, Statistical analysis of a two-layer back propagation algorithm used for modeling nonlinear memoryless channels: The single neuron case. IEEE Trans. Signal Processing 45(3), 747–756 (March 1997)

  9. N Bershad, P Celka, JM Vesin, Stochastic analysis of gradient adaptive identification of nonlinear systems with memory for Gaussian data and noisy input and output measurements. IEEE Trans. Signal Processing 47(3), 675–689 (March 1999)

  10. S Haykin, Adaptive Filter Theory (Prentice Hall, 1996)

  11. M Ibnkahla, NJ Bershad, J Sombrin, F Castanié, Neural network modeling and identification of nonlinear channels with memory: Algorithms, applications and analytic models. IEEE Trans. Signal Processing 46, 5 (1998)

  12. M Ibnkahla, Statistical analysis of neural network modeling and identification of nonlinear channels with memory. IEEE Trans. Signal Processing, 1508–1517 (2002)

  13. J Shynk, S Roy, Convergence properties and stationary points of a perceptron learning algorithm. Proc. IEEE 78(10), 1599–1604 (October 1990)

  14. JG Taylor, Mathematical Approaches to Neural Networks (North-Holland, Amsterdam, 1993)

  15. H White, Learning in artificial neural networks: A statistical perspective. Neural Comput. 1, 425–464 (1989)

  16. N Bershad, P Celka, JM Vesin, Analysis of stochastic gradient tracking of time-varying polynomial Wiener systems. IEEE Trans. Signal Processing 48(6), 1676–1686 (June 2000)

  17. H Bolcskei, MIMO systems: Principles and trends, in Signal Processing for Mobile Communications Handbook, 2nd edn., ed. by M Ibnkahla (CRC Press, 2004)

  18. T Javornik, G Kandus, S Plevel, G White, A Burr, V-BLAST algorithm performance in non-linear channel. IEEE Computer as a Tool Conference 1, 183–187 (September 2003)

  19. G Poitau, A Kouki, Impact of realistic amplification models on dynamic VBLAST optimization. Proc. Vehicular Technology Conference Spring, 894–897 (2004)

  20. S Woo, D Lee, K Kim, H Hur, C Lee, J Laskar, Combined effects of RF impairments in the future IEEE 802.11n WLAN systems. Proc. IEEE Vehicular Technology Conference Spring 2, 1346–1349 (May 2005)

  21. S Yang, J Xi, X Mu, Decision aided joint compensation of clipping noise and nonlinearity for MIMO-OFDM systems. Proc. IEEE International Symposium on Communications and Information Technology (ISCIT) 1, 725–728 (2005)

  22. S Yang, J Xi, F Wang, X Mu, H Kobayashi, Decision aided compensation of residual frequency offset for MIMO-OFDM systems with nonlinear channel. Proc. International Symposium on Intelligent Signal Processing and Communication Systems (ISPACS), 113–116 (2005)

  23. A Sulyman, M Ibnkahla, Performance of MIMO systems with antenna selection over nonlinear fading channels. IEEE Journal of Selected Topics in Signal Processing 2, 159–170 (April 2008)

  24. A Sulyman, M Ibnkahla, Performance analysis of nonlinearly amplified M-QAM signals in MIMO channels. European Transactions on Telecommunications 19(1), 15–22 (January 2008)

  25. J Pedro, S Maas, A comparative overview of microwave and wireless power amplifier behavioral modeling approaches. IEEE Trans. Microwave Theory and Techniques 53(4), 1150–1163 (2005)

  26. A Saleh, Frequency-independent and frequency-dependent nonlinear models of TWT amplifiers. IEEE Trans. Communications 29, 11 (1981)