ON EQUIVALENCE OF LINEAR CONTROL SYSTEMS AND ITS USE IN VERIFYING THE ADEQUACY OF DIFFERENT MODELS OF A REAL DYNAMIC PROCESS

The problem of describing the algebraic invariants of a linear control system that determine its structure is considered. With the help of these invariants, the equivalence problem for two linear time-invariant control systems with respect to the actions of certain linear groups on the spaces of inputs, outputs, and states of these systems is solved. The invariants are used to establish necessary equivalence conditions for two nonlinear systems of differential equations generalizing the well-known Hopfield neural network model. Finally, these conditions are applied to establish the adequacy of two neural network models designed to describe the behavior of a real dynamic process given by two different sets of time series.


Introduction
In recent decades, researchers have paid much attention to chaotic behavior in many fields, such as meteorology, medicine, economics, signal processing, and traffic flow [16,34,36,49]. They have also developed many models of chaotic time series in order to predict their behavior. It has been found that forecasting chaotic time series, which represent the evolution of chaotic systems, is difficult with traditional time series forecasting methods [16,36]. Chaos theory has since become an important part of nonlinear science and is used for forecasting chaotic time series. Therefore, modeling chaotic systems from observed data and predicting multiple future values of a time series has become an important issue [16,34,36,49].
We will assume that we know the dimension n of the real phase space in which the considered dynamic process P(t) ∈ R^n takes place [7]. Further, neural networks will be used to model the process P(t) = (P_1(t), ..., P_n(t))^T [17,25,26]. The motivation for this choice is given below.
The function φ(z) : R → R is called an activation function [17,26]. (In most scientific publications it is assumed that φ(z) : R → [0, 1] is a sigmoid function. However, in some cases the condition that the activation function φ(z) is bounded can be removed [48].) In what follows, we will use the version of Theorem 1.1 in which the single activation function φ(z) is replaced by several similar functions φ_1(z), ..., φ_k(z).
Further, we propose an approximate procedure for determining the unknown right-hand sides of the differential equations. The procedure is based on the least-squares method and on the fact that we know, with sufficient precision, the values of x(t) and of its derivatives up to the order of the equation. In this case we avoid a possible ill-posed problem by applying consecutive smoothing procedures, which shorten the given time series (see [16,36]).
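The least-squares idea above can be sketched numerically. The example below is only an illustration under a simplifying assumption not made in the paper: the right-hand side is taken to be linear, ẋ = Ax, so that the unknown parameters enter linearly and the fit reduces to one matrix least-squares problem. The matrix A_true and all sizes are illustrative choices.

```python
import numpy as np

# Recover the right-hand side of x'(t) = A x(t) from sampled values of x(t),
# assuming (for illustration only) a linear right-hand side.
rng = np.random.default_rng(0)
A_true = np.array([[0.0, 1.0], [-2.0, -0.5]])

# Sample a trajectory on a fine grid with a simple Euler scheme.
dt, N = 1e-3, 5000
x = np.empty((N, 2))
x[0] = [1.0, 0.0]
for k in range(N - 1):
    x[k + 1] = x[k] + dt * A_true @ x[k]

# Central differences approximate the derivatives x'(t_k).
dx = (x[2:] - x[:-2]) / (2 * dt)

# Least squares: find A minimizing ||X A^T - dX||_F.
A_est, *_ = np.linalg.lstsq(x[1:-1], dx, rcond=None)
A_est = A_est.T
print(np.abs(A_est - A_true).max())  # small reconstruction error
```

In the nonlinear case of the paper, the same scheme applies after choosing a dictionary of approximating functions for the right-hand side; the fit then remains linear in the unknown coefficients.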
We assume that, using suitable methods (based on Theorem 1.1), the system of autonomous ordinary differential equations whose solution x(t) ∈ R^n simulates the process P(t) with a given accuracy has been reconstructed [7].
We assume that this system (with the known vector of initial values x(0) = (x_10, ..., x_n0)^T) has the following form:

ẋ(t) = d + Ax(t) + BΦ(Cx(t)), (1.3)

where Φ(u) = (φ_1(u_1), ..., φ_m(u_m))^T. (Note that system (1.3) can be interpreted as a generalized Hopfield neural network (see [25,48]) with activation functions φ_1(u_1), ..., φ_m(u_m).) The form of system (1.3) is dictated by the following considerations: the nonlinear parts of system (1.3) are continuous functions, and therefore Theorem 1.1 was used to describe them; the presence of a linear part makes it possible to use linearization methods to study the stability of solutions of system (1.3).
There is only one serious flaw in the research plan outlined above: insufficient verification of the adequacy of the constructed model to the process under study. In this paper, mathematical tools for testing this adequacy are developed, based on the algebraic theory of invariants. The idea of such verification rests on the following well-known fact: under arbitrary observations of a dynamic process, there always exist functions that are independent of the method of observation but depend on the internal structure that determines the behavior of the process. Functions describing this structure are called invariants.
Currently, the construction of a complete set of invariants describing arbitrary nonlinear dynamical systems is an unsolved problem. Therefore, in this article we develop an approach in which the missing invariants for the studied nonlinear system are obtained from known invariants of a special linear system. (We note at once that the construction of such invariants may also be of independent mathematical interest.) Suppose that the same dynamic process P(t) ∈ R^n is given by another set of time series (1.4). Instead of the variable x, we introduce a new phase variable y ∈ R^n. We also introduce the real matrices F ∈ R^{n×n}, G ∈ R^{n×m}, H ∈ R^{m×n} and the real vectors l = (l_1, ..., l_n)^T, h_j = (h_j1, ..., h_jn) ∈ R^n, j = 1, ..., m ≤ n.
To fulfill all the equivalence conditions (especially (1.6)), it is necessary to indicate a class of functions Φ(s) (or Ψ(s)) that satisfy these conditions. One such class is the class of piecewise linear functions. (In the theory of neural networks this class is the well-known class of rectified linear units (ReLU).) At present, ReLU is a standard choice of activation function for training deep neural networks.
Therefore, we will further assume that either the components of the vectors Φ(s), Ψ(s) are piecewise continuous linear functions, or they are piecewise continuous nonlinear functions such that Φ(s) = Ψ(s) (in this case T = W = E_m, where E_m is the identity matrix of order m).
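One reason the piecewise-linear class is convenient here is that ReLU commutes with positive scalings: relu(a·s) = a·relu(s) for a > 0. This is the kind of compatibility with linear changes of variables that equivalence conditions such as (1.6) exploit; the check below is only an illustration of this elementary property.

```python
import numpy as np

# ReLU is positively homogeneous: relu(a*s) = a*relu(s) for any a > 0.
relu = lambda s: np.maximum(s, 0.0)
s = np.linspace(-3, 3, 13)
a = 2.5
print(np.allclose(relu(a * s), a * relu(s)))  # True
```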
A common practice in chaotic time series analysis has been to reconstruct the phase space by the delay-coordinate embedding technique and then to compute dynamical invariants such as the fractal dimension of the underlying chaotic set and its Lyapunov spectrum. While a large body of literature exists on the application of this technique to the study of chaotic attractors [29]-[31], [34], [38], [49], a relatively unexplored issue is its applicability to dynamical systems of differential equations depending on parameters. We will concentrate on analyzing the influence of the parameters of the reconstructed dynamic system on the behavior of its solutions. These parameters are determined by the structure of the time series (1.1) and by the choice of approximating functions for the right-hand sides of the obtained system of differential equations.
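The delay-coordinate embedding step mentioned above can be sketched in a few lines: from a scalar series s_0, s_1, ... one builds vectors (s_k, s_{k+τ}, ..., s_{k+(d−1)τ}). The function name and the parameter choices d, τ below are illustrative, not taken from the paper.

```python
import numpy as np

# Delay-coordinate embedding of a scalar time series.
def delay_embed(s, d, tau):
    # Each row is one delay vector (s_k, s_{k+tau}, ..., s_{k+(d-1)tau}).
    n = len(s) - (d - 1) * tau
    return np.column_stack([s[i * tau : i * tau + n] for i in range(d)])

s = np.sin(0.1 * np.arange(100))
X = delay_embed(s, d=3, tau=5)
print(X.shape)  # (90, 3)
```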

Continuous analog of neural network models
In recent years, an interesting idea has appeared to interpret a system of ordinary differential equations in the form of a suitable neural network (residual network) [11,27,51]. The essence of this idea is as follows.
Consider the following neural network:

x_{k+1} = x_k + ∆t h(x_k, Ω), k = 0, 1, ..., N − 1. (1.7)

Here h(u, v) : R^n × R^k → R^n is a vector of continuous functions and Ω ∈ R^k is a vector of parameters. Now we rewrite relation (1.7) in the following form: (x_{k+1} − x_k)/∆t = h(x_k, Ω).
If we consider the function x(t) as a function of a continuous argument on some interval [t_0, t_N], then the last equation can be rewritten as (x(t + ∆t) − x(t))/∆t = h(x(t), Ω). If we now let the number of "layers" N → ∞ and assume ∆t → 0, then we get the following system of ordinary differential equations:

ẋ(t) = h(x(t), Ω). (1.8)

So we can say that neural network (1.7) is the well-known Euler discretization of system (1.8), x_{k+1} = x_k + ∆t h(x_k, Ω), where ∆t is the discretization step.
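The correspondence between residual networks and the Euler scheme can be checked numerically. The sketch below uses the illustrative choice h(x) = −x, for which the exact solution of (1.8) is x(t) = x_0 e^{−t}; adding "layers" (refining ∆t) drives the network output toward the exact solution.

```python
import numpy as np

# A residual network x_{k+1} = x_k + dt*h(x_k) is exactly the Euler
# scheme for x'(t) = h(x(t)).
def resnet_forward(x0, h, dt, N):
    x = x0
    for _ in range(N):          # N residual blocks = N Euler steps
        x = x + dt * h(x)
    return x

h = lambda x: -x                # illustrative right-hand side
T, x0 = 1.0, 1.0
for N in (10, 100, 1000):
    err = abs(resnet_forward(x0, h, T / N, N) - np.exp(-T))
    print(N, err)               # error shrinks roughly like 1/N
```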
Besides, sequence (1.7) can be viewed as a neural network with N − 1 hidden layers, input layer x_0, and output layer x_N. The architecture of such a neural network is determined by the operator h(x(t), Ω); if h(x(t), Ω) ≡ d + Ax(t) + BΦ(Cx(t)), then an arbitrary hidden layer of this network has the structure shown in Fig. 1.
Therefore, sequence (1.7) is a neural network model of the process P(t). In addition, the number of neurons in an arbitrary hidden layer of this network with h(x(t), Ω) ≡ d + Ax(t) + BΦ(Cx(t)) does not exceed nm [17,26].
The problem usually considered when modeling the process P(t) is as follows: find the parameters d, A, B, C (with a known activation function Φ(u)) of the neural network that minimize a loss function built from the norms of the deviations of the network trajectory from the observed data. Here ‖u‖ denotes a norm of the vector u.
Since the number n is the dimension of the embedding space and is completely determined by the process P(t), the numbers N and nm, which determine the depth of the network and the number of neurons in each layer, can be selected arbitrarily. Thus, there is considerable freedom in the neural network modeling of the process P(t).
(In this case we are already talking about determining the parameters l, F, G, H with a known activation function Ψ(u).) Obviously, the equivalence conditions for such neural networks coincide with conditions (1.6). Therefore, the problems of establishing the equivalence of two differential models or of two neural network models do not differ from each other: in both cases, conditions (1.6) should be used to confirm equivalence. (However, it should be noted that differential and neural network models of the same process are, generally speaking, not equivalent [51].) In conclusion of this section, we note that the question of which modeling method, differential equations or neural networks, is more effective remains open.
For simplicity, we put d = 0 in system (1.3). Now we introduce the following linear control system [50]:

ẋ(t) = Ax(t) + Bu(t). (1.9)

Then system (1.3) can be viewed as the linear system (1.9) closed by the nonlinear feedback u(t) = Φ(Cx(t)). Our main goal is to show how the problem of reconstructing differential equations from known time series can be reduced to the problem of computing invariants for the linear control system (1.9).
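The closed-loop viewpoint can be sketched as follows. All matrices and the activation tanh below are illustrative stand-ins: one Euler step of system (1.3) with d = 0 is computed as the linear dynamics ẋ = Ax + Bu with the input supplied by the feedback u = Φ(Cx).

```python
import numpy as np

# System (1.3) with d = 0 viewed as x' = Ax + Bu closed by u = Phi(Cx).
rng = np.random.default_rng(4)
n, m = 3, 2
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, m))
C = rng.normal(size=(m, n))
Phi = np.tanh                      # illustrative activation

def closed_loop_step(x, dt):
    u = Phi(C @ x)                 # nonlinear feedback from the state
    return x + dt * (A @ x + B @ u)

x = np.ones(n)
x = closed_loop_step(x, 1e-2)
print(x.shape)  # (3,)
```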
Thus, the main postulate that we will implement in this work can be formulated as follows: two different sets of time series (1.2) and (1.4) describe the same dynamic process P(t) in two different bases of the phase space R^n; this assumption ensures that the invariants of systems (1.3) and (1.5) are the same.
This article is organized as follows. Section 2 describes the mathematical concepts necessary for solving the equivalence problem. Sections 3-4 study the actions of algebraic groups on varieties of linear systems. Section 5 gives an algebraic description of the key concept used in the search for invariants, the concept of null-forms. Section 6 is devoted to the analysis of the structural stability of linear systems (small changes in the parameters of a system should not influence the composition of its invariants). Sections 7-9 describe rings of invariants and equivalence conditions for linear control systems. Section 10 is devoted to the application of invariant theory to the problem of reconstructing ordinary differential equations from known measurements of their dynamic characteristics; a practical example is given of the reconstruction of equations describing the behavior of current and voltage in a contact electric network [7,43]. Finally, the results are summarized in Section 11.

Mathematical preliminaries
Consider a linear time-invariant control system whose state equation is

ẋ(t) = Ax(t) + Bu(t), (2.1)

and whose output equation has the form

y(t) = Cx(t). (2.2)

Here R^n, R^m, R^p are real linear spaces of column vectors of dimensions n, m, p; x(t) = (x_1(t), ..., x_n(t))^T, u(t) = (u_1(t), ..., u_m(t))^T, and y(t) = (y_1(t), ..., y_p(t))^T are the vectors of states, inputs, and outputs; A : R^n → R^n, B : R^m → R^n, C : R^n → R^p are real linear maps of the corresponding spaces. Fix any bases in the spaces R^n, R^m, and R^p; then the triple of operators A, B, C is represented in the chosen bases by matrices A ∈ R^{n×n}, B ∈ R^{n×m}, and C ∈ R^{p×n}. Further, we will adhere to these notations.
Denote by S = R^{p×n} × R^{n×n} × R^{n×m} the direct product of the spaces R^{p×n}, R^{n×n}, and R^{n×m}. Then system (2.1), (2.2) can be uniquely represented by the triple of matrices (C, A, B) ∈ S.
Let GL(q, R) be the general linear group of all invertible q × q matrices with elements from the field R of real numbers.
Introduce the direct product GL = GL(n, R) × GL(m, R) × GL(p, R). We also introduce into system (2.1), (2.2) new variables z ∈ R^n, v ∈ R^m, and h ∈ R^p by the formulas x(t) = Sz(t), u(t) = Tv(t), and y(t) = Wh(t), where S ∈ GL(n, R), T ∈ GL(m, R), and W ∈ GL(p, R). Then we obtain the following action GL : S → S of the indicated group on the space S:

g · s = (W^{-1}CS, S^{-1}AS, S^{-1}BT), (2.3)

where s = (C, A, B) ∈ S and g = (S, T, W) ∈ GL are arbitrary elements of the corresponding sets.
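A basic sanity check on this change of variables: for the similarity action (T = E_m, W = E_p) the input-output behavior, represented by the transfer function C(sI − A)^{-1}B, is unchanged by x = Sz. The matrices and the evaluation point s below are arbitrary illustrative choices.

```python
import numpy as np

# The similarity action (A, B, C) -> (S^{-1}AS, S^{-1}B, CS) preserves
# the transfer function C (sI - A)^{-1} B.
rng = np.random.default_rng(1)
n, m, p = 4, 2, 2
A = rng.normal(size=(n, n))
B = rng.normal(size=(n, m))
C = rng.normal(size=(p, n))
S = rng.normal(size=(n, n))        # generically invertible
Si = np.linalg.inv(S)
A2, B2, C2 = Si @ A @ S, Si @ B, C @ S

s = 3.7                            # a point where sI - A is invertible
G1 = C @ np.linalg.inv(s * np.eye(n) - A) @ B
G2 = C2 @ np.linalg.inv(s * np.eye(n) - A2) @ B2
print(np.allclose(G1, G2))  # True
```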
A polynomial function f : S → R is called an invariant of weight l = (l_S, l_T, l_W) if f(g · s) = (det S)^{l_S} (det T)^{l_T} (det W)^{l_W} f(s), where l_S, l_T, and l_W are some integers. The invariant f(s) of weight l = (0, 0, 0) is called absolute; otherwise the invariant f(s) is called relative [35,44].
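A concrete example of a relative invariant: for n = m the function f(C, A, B) = det B transforms under B → S^{-1}BT by the factor (det S)^{-1}(det T), i.e. it has weight (−1, 1, 0). The numerical check below uses arbitrary random matrices.

```python
import numpy as np

# det(B) as a relative invariant of weight (-1, 1, 0) for n = m.
rng = np.random.default_rng(2)
n = 3
B = rng.normal(size=(n, n))
S = rng.normal(size=(n, n))
T = rng.normal(size=(n, n))
lhs = np.linalg.det(np.linalg.inv(S) @ B @ T)
rhs = np.linalg.det(S) ** -1 * np.linalg.det(T) * np.linalg.det(B)
print(np.isclose(lhs, rhs))  # True
```

When det S = det T = 1 (the group SL), the factor disappears and det B becomes an absolute invariant, which is exactly the passage used later in Section 2.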
Notice that any classification of a set of objects (for example, of the set of systems (2.1), (2.2)) implies a decomposition of this set into classes of elements that are identical in a certain sense. One of the most widespread methods of such decomposition is a description of system (2.1), (2.2) by functions that do not depend on the coordinate representation of the system. This description is usually called invariant. Sometimes the invariant description of system (2.1), (2.2) is called invariant (or algebraic) analysis.
The problem of invariant analysis of system (2.1), (2.2) with respect to action (2.3) has been studied in most detail for the case T = E_m and W = E_p, where E_m and E_p are identity matrices of orders m and p. In this case action (2.3) is called the similarity action. Classification questions for linear control systems with respect to the similarity action, their invariants, and canonical forms were studied in one form or another by many authors (see, for example, C. I. Byrnes and N. E. Hurt [9], M. Hazewinkel [18,20], M. Hazewinkel and C. Martin [19], R. E. Kalman [28], A. Tannenbaum [45,46]). We also note that the book of D. Mumford and J. Fogarty [35] considers problems of invariant theory directly related to the classification of linear control systems. Topological problems of the classification of linear control systems with respect to the similarity action were studied in detail in the articles of D. F. Delchamps [12], P. A. Fuhrmann and U. Helmke [14], and U. Helmke [21]-[23].
Note that the main attention of specialists in mathematical system theory is attracted by problems of invariant analysis (using geometric invariant theory) connected with the similarity action. These problems are the following: finding good canonical forms for linear systems, computing their invariants, describing regular and stable systems, and constructing moduli spaces as quotients of algebraic varieties under algebraic group actions. The indicated problems were considered from different standpoints by A. Tannenbaum [46,47], S. Friedland [13], V. G. Lomadze [32], J. Rosenthal [41], W. Manthey and U. Helmke [33], and M. Bader [1]. In our opinion, the most complete solution of these problems was obtained in [1], where compactifications of moduli spaces of linear dynamical systems were derived with the help of geometric invariant theory and methods of the theory of quivers.
Besides classification questions for linear control systems, invariant theory is widely used in the problem of output feedback design for both linear and bilinear control systems (see, for example, the papers [2]-[6], [38], [39], [41,42], and [50]). Here it should be noted that in the article [5] a constructive solution of the output feedback design problem for system (2.1), (2.2) was obtained in the case mp > n.
Consider two systems s_1 = (C_1, A_1, B_1) ∈ S and s_2 = (C_2, A_2, B_2) ∈ S.

Equivalence problem. It is necessary to find a set S_open ⊂ S and to build a finite set of invariants f_1(s), ..., f_k(s) (absolute and relative) of the group GL such that for all s_1, s_2 ∈ S_open the conditions f_1(s_1) = f_1(s_2), ..., f_k(s_1) = f_k(s_2) imply that there exists an element g = (S, T, W) ∈ GL with g · s_1 = s_2.

It should be said that at present there is a vast literature devoted to invariant theory and its applications to various problems of mathematics, mechanics, physics, and control theory. However, in almost all known works invariant theory is treated as a part of algebraic geometry and the representation theory of groups. In addition, these treatises are intended for professional mathematicians and are hardly accessible to specialists in applied system theory. This unnecessarily complicates the application of invariant theory to linear control systems, to which this article is devoted. For this reason, the basic results of the present work will be presented in terms of ordinary linear spaces, matrices, and determinants. One of the aims of the article is to bring the solution of the equivalence problem for linear control systems to the same level of accessibility as the description of the invariant properties of the characteristic polynomial of a square matrix.
At present, the equivalence problem has not yet been solved completely. For this reason, we will solve it in several stages.
At the first of these stages we study a simplified variant of the equivalence problem, in which equation (2.2) is absent from system (2.1), (2.2). In this case we suppose that GL = GL(n, R) × GL(m, R) and S = R^{n×n} × R^{n×m}. We then replace the base field R and the group GL by the base field C and the special linear group SL = SL(n, C) × SL(m, C), preserving its action (2.3) on the complex space S = C^{n×n} × C^{n×m}.
In this case all relative invariants of the group GL become absolute invariants of the group SL. This circumstance allows us to use the Hilbert-Mumford theory [8,35] to describe all invariants of the group SL with respect to action (2.3). Therefore, in what follows, all absolute and relative invariants will be called simply invariants.
Denote by C[S]^SL the ring of all invariants of the group SL with respect to action (2.3) [35,44]. The problem of searching for invariants can be essentially simplified with the help of the following concept [35]. The theorem below is a basic instrument for the search for null-forms.

2.1. Null-forms of the space S for system (2.1)

Further, for system (2.1) we will use the designation (A, B), where A ∈ C^{n×n}, B ∈ C^{n×m}. We will also call (A, B) a system of type (n, m), where the numbers n and m are the dimensions of the state space and the input space. In addition, we will designate the state space and input space by X ≡ C^n and U ≡ C^m.
Without loss of generality, it is possible to assume that rank B = m. In addition, we will assume that n > m, since in control theory this case is the most important.
The following concept of linear control theory is fundamental. Let X_1 ⊂ X be a subspace invariant with respect to the operator A, that is, AX_1 ⊂ X_1 [15].
We put U_1 = B^{-1}(X_1). Denote by G|_L the restriction of an operator G : C^k → C^r to a subspace L ⊂ C^k [15]. (Here k and r are natural numbers.)

Definition 2.4. The pair of operators (A|_{X_1}, B|_{U_1}), where A|_{X_1} and B|_{U_1} are the restrictions of the operators A and B to the subspaces X_1 and U_1 of dimensions n_1 and m_1, is called a subsystem of type (n_1, m_1) of the system (A, B).
Definition 2.5. The pair of operators (A|_{X/X_1}, B|_{U/U_1}), where A|_{X/X_1} and B|_{U/U_1} are the operators induced by A and B on the factor-spaces X/X_1 and U/U_1 of dimensions n − n_1 and m − m_1, is called a factor-system of type (n − n_1, m − m_1) of the system (A, B) by the subsystem (A|_{X_1}, B|_{U_1}).
Theorem 2.2. Let (A, B) be a system of type (n, m), n > m. Then the system (A, B) is a null-form if and only if the operator A is nilpotent and (A, B) contains a subsystem (A|_{X_1}, B|_{U_1}) of type (n_1, m_1), n_1 ≥ m_1, satisfying condition (2.5).

Proof. (a1) It is known [35,44] that in suitable bases of the spaces C^n and C^m the one-parameter group H can be represented by the group of diagonal matrices

S(t) = diag(t^{α_1}, ..., t^{α_n}), T(t) = diag(t^{β_1}, ..., t^{β_m}),

where t is a real parameter and the real numbers α_1, ..., α_n, β_1, ..., β_m satisfy the restrictions

α_1 + ... + α_n = 0, β_1 + ... + β_m = 0. (2.6)

Note that if the group H acts on the system (A, B) by rule (2.3), then the elements of the matrices A and B are transformed by the formulas a_ij → t^{α_j − α_i} a_ij, b_ik → t^{β_k − α_i} b_ik; i, j = 1, ..., n; k = 1, ..., m.
From here it follows that the system of inequalities (2.10) is solvable; consequently, systems (2.7)-(2.8) are also solvable. It also follows that if n < m, then, taking into account conditions (2.9), which determine the nilpotent matrix A, all systems (A, B) of type (n, m) are null-forms. (By transformations from the group SL, one zero column can be produced in the matrix B.) If n = m and det B ≠ 0, then the system (A, B) is not a null-form; if det B = 0, then the situation is reduced to the case n < m. Consequently, for n > m the search for null-forms can be reduced to the search for systems (A, B) with rank B = m.
From here we get a relation that is equivalent to the following inequality. Having performed similar computations for system (2.12), we obtain the corresponding relation. (a3) From (2.14) and the first of equalities (2.6), we have α_{n_1+1} + ... + α_n = −(α_1 + ... + α_{n_1}).
The solvability of inequality (2.17) is then possible only in the case (n − n_1)/(m − m_1) > n_1/m_1 (or, equivalently, n/m > n_1/m_1). If the case β_{m_1+1} + ... + β_m < 0 takes place, then the restriction (n − n_1)/(m − m_1) < 0 must be valid. This would require either n > n_1 and m < m_1, or n < n_1 and m > m_1, which is impossible by virtue of the conditions n > n_1 and m > m_1.
Further, from conditions (2.9), which determine the form of the matrix A, it follows that n_1 is the dimension of the subspace X_1 invariant with respect to the operator A that contains the subspace B(U_1) ⊂ X_1 spanned by the first m_1 columns of the matrix B.
Notice that the nilpotency of the matrix A [15] is necessary for the vanishing of the invariants of the system (A, B) that depend only on A. The vanishing of the invariants of (A, B) that depend on B is obtained without the use of nilpotency. Thus, the following statement is obvious.
Corollary. Let A be an arbitrary matrix of order n. Then, under condition (2.5) of Theorem 2.2, all invariants of the system (A, B) depending on B are equal to zero.

Stabilizer of system (A, B)
Let ψ(λ) be the minimal polynomial of the matrix A ∈ C^{n×n} [15]. (This means that ψ(λ) is the nonzero polynomial of least degree l ≤ n such that ψ(A) = 0.) Assume that some nonzero vector from B(U) has minimal polynomial f_1(λ). Suppose also that f_1(λ) has the least degree among the degrees of the minimal polynomials of all nonzero vectors from B(U).
Denote by L_1 the set of all nonzero vectors from B(U) that have the same minimal polynomial f_1(λ). Add the zero vector to the set L_1 and denote the newly obtained set by the same symbol L_1.
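The minimal polynomial of a vector b with respect to A (the monic polynomial f of least degree with f(A)b = 0) can be computed as the first linear dependence in the Krylov sequence b, Ab, A²b, ... The function name below is an illustrative choice.

```python
import numpy as np

# Minimal polynomial of a vector b with respect to the matrix A, found
# from the first linear dependence among b, Ab, A^2 b, ...
def vector_min_poly(A, b, tol=1e-10):
    vs = [b]
    while True:                               # terminates by Cayley-Hamilton
        w = A @ vs[-1]
        V = np.column_stack(vs)
        c, *_ = np.linalg.lstsq(V, w, rcond=None)
        if np.linalg.norm(V @ c - w) < tol:   # w is a combination of earlier terms
            # monic coefficients, highest degree first
            return np.concatenate(([1.0], -c[::-1]))
        vs.append(w)

A = np.diag([2.0, 2.0, 5.0])
b = np.array([1.0, 1.0, 0.0])    # eigenvector for eigenvalue 2
print(vector_min_poly(A, b))     # coefficients of f(x) = x - 2
```

For b = (1, 0, 1)^T, which mixes the eigenvalues 2 and 5, the same routine returns the coefficients of (x − 2)(x − 5) = x² − 7x + 10.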
Proof. Let U_1 = B^{-1}(L_1) be the complete preimage of the space L_1 in U under the operator B. Then it is clear that the corresponding subspace is invariant with respect to the operator A and that its minimal polynomial coincides with f_1(λ).
Let f_1(λ) ≠ ψ(λ). Consider the factor-system (A|_{X/X_1}, B|_{U/U_1}). Then it is possible to find in the factor-space (B(U) + X_1)/X_1 a subspace L_2/X_1 all nonzero vectors of which have the same minimal polynomial f_2(λ) (mod X_1) [15]. In addition, the degree of this polynomial is minimal among the degrees of all minimal polynomials of nonzero vectors from (B(U) + X_1)/X_1. Now we build the subspace X_2 invariant with respect to the operator A; here U_2 = B^{-1}(L_2) is the complete preimage of the space L_2 in U under the operator B. If X_2 ≠ X, then we consider the corresponding factor-system, take the complete preimage of the next subspace in U, and so on. By virtue of the finite dimensionality of the spaces X and U, the indicated procedure terminates at some stage r. Thus, we have a chain of nested subsystems such that any factor-system of this chain possesses the same property of its state space.

Definition 3.1. (See [20].) The maximal subgroup Stab_SL(A, B) of the group SL given by the conditions S^{-1}AS = A, S^{-1}BT = B is called the stabilizer of the system (A, B).
Let g = (S, T) ∈ Stab_SL(A, B) be an arbitrary element and (A|_{X_i}, B|_{U_i}) an arbitrary system of the chain (3.1). Then the inclusion (3.2) takes place.

Proof. We prove the validity of (3.2) for i = 1. Lemma 3.1 asserts that the subspace L_1 is unique. Then from the definition of the stabilizer we get that SB(U_1) ⊂ L_1. The proof for the case i = 1 is finished. Now we proceed by induction on i. Assume that the assertion holds for some i < r, and consider the factor-system (A|_{X_i/X_{i−1}}, B|_{U_i/U_{i−1}}). Then the proof of inclusion (3.2) for i = 1 carries over word for word to the proof of the inclusion at the next step. For m = 1 the assertion of Lemma 3.3 is obvious. Assume that it is correct for all k ≤ m − 1. We now have to prove that (A, B) is the direct sum of equivalent irreducible subsystems for k = m.

Taking into account the induction hypothesis, we get the required inclusion.
Since the system (A, B) is completely controllable, we have X = X_1 + ... + X_m. By the induction hypothesis, the last sum can be rewritten as X = X_1 ⊕ ... ⊕ X_{m−1} + X_m ≡ X̃ + X_m. We will prove that this sum is direct.
Assume that the space P ≡ X̃ ∩ X_m ≠ 0. It is clear that P is a proper cyclic subspace in both X_m and X̃.
Let h(λ) be the minimal polynomial of the space P (note that deg h(λ) < deg f(λ)). In this case there exists a corresponding vector, and the following two cases are possible.
It is known that in any cyclic space of dimension k there exist cyclic spaces of all dimensions less than k [15].
Thus, we must have P = 0 and X = X_1 ⊕ ... ⊕ X_m. Consequently, there exist bases of the spaces X and U in which the matrices A and B can be represented in the following form. We choose the bases of the spaces U and X in accordance with Lemmas 3.1, 3.2, and 3.3. More precisely, the basis of B(U) is formed by the bases of the spaces (3.4), and the basis of X is formed by the bases of the spaces (3.5), where each of the bases X_{i+1} \ X_i, i = 0, 1, ..., m − 1 (X_0 = 0), is a union of bases of pairwise disjoint isomorphic cyclic spaces. Then the matrices of the operators A and B in the bases (3.4) and (3.5) can be represented in the forms (3.6). Here m_i is the multiplicity with which the block A_i enters A_ii (and B_i enters B_ii); the matrix A_i has size l_i × l_i, and the matrix B_i has size l_i × 1; p = 1, ..., m_i; q = 1, ..., m_j; i, j = 1, ..., r; i < j. (In addition, in the matrices A^{(pq)}_ij only the last column is nonzero.) Let (3.6) be the matrix representation of the system (A, B). Denote by N = N_X × N_U the subset of the group SL determined as follows: its elements contain arbitrary parameters; E_{l_i} are identity matrices of orders l_i; m_1 l_1 + ... + m_r l_r = n; p = 1, ..., m_i; q = 1, ..., m_j; i < j; i, j = 1, ..., r. In this case it is easy to show that the following assertion takes place. There are many different variants of canonical forms for the pair of matrices (A, B) (see, for example, [20], [33], [45]-[47], [50]). In order to build the stabilizer of the system (A, B), we have presented our own variant (3.6) of such a canonical form.

Actions of group SL on space S
Denote by S/SL the set of orbits of all points of the space S = C^{n×n} × C^{n×m} with respect to action (2.3) of the group SL.
It is known that for a successful invariant description of system (2.1) it is necessary to supply the space S (or some suitable subset W ⊂ S) with the structure of a projective variety [40], [45]-[47]. The points of this set are called stable; an exact definition of the set of stable points is given below. Further, the orbits of all points from F_s have equal (maximal) dimension; consequently, the stabilizers of these points must have minimal dimension. Since for regular points the equality dim_C Stab_SL(A, B) = 0 holds, only regular systems can play the role of stable systems [32,35,44]. (We note that the dimension of an orbit O_G(s) is given by the formula dim_C O_G(s) = dim_C G − dim_C Stab_G(s).)

Theorem 4.1. Assume that the indicated condition takes place. Then the system (A, B) is regular.
Corollary. Assume additionally that the numbers n and m are coprime. Then, under the conditions of Theorem 4.1, the system (A, B) ∈ F_s.
Proof. The proof of this theorem is based on Lemmas 3.1 -3.4. In essence, it is a modification of the proof proposed in [3].
The following example shows the importance of the coprimality of n and m. Consider the set S of systems of type (4, 2). Let (A, B) be an arbitrary system from this set, and denote by f(A, B) = det(B, AB) an invariant polynomial of the system. Consider the system (4.3) of type (4, 2), where c_1, c_2, d_1, d_2 ∈ C and c_1 ≠ 0, c_2 ≠ 0. In this case we get f(A, B) = 1 ≠ 0 and dim_C Stab_SL(A, B) = 1. Therefore, system (4.3) is not stable.
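For systems of type (4, 2), the polynomial f(A, B) = det(B, AB) is easy to compute and to check for SL-invariance: under (A, B) → (S^{-1}AS, S^{-1}BT) the columns transform as (B, AB) → S^{-1}(B, AB) diag(T, T), so the determinant picks up the factor det(S)^{-1} det(T)², which equals 1 on SL. The random matrices and the helper random_sl below are illustrative.

```python
import numpy as np

# f(A, B) = det(B, AB) for a system of type (4, 2): a 4x4 determinant,
# invariant under the SL action (A, B) -> (S^{-1}AS, S^{-1}BT).
def f(A, B):
    return np.linalg.det(np.hstack([B, A @ B]))

rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
B = rng.normal(size=(4, 2))

def random_sl(n):
    # Random real matrix rescaled to determinant 1.
    M = rng.normal(size=(n, n))
    if np.linalg.det(M) < 0:
        M[[0, 1]] = M[[1, 0]]      # a row swap makes the determinant positive
    return M / np.linalg.det(M) ** (1.0 / n)

S, T = random_sl(4), random_sl(2)
Si = np.linalg.inv(S)
print(np.isclose(f(A, B), f(Si @ A @ S, Si @ B @ T)))  # True
```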

Invariant description of null-forms for system (A, B)
In Section 2 the existence conditions for null-forms were obtained in terms of certain special subspaces of X and U. However, for a description of the ring of invariants C[S]^SL it is necessary to restate these conditions in terms of the matrix pair (A, B) that are invariant with respect to action (2.3) of the group SL.
Consider the matrix

K(A, B) = (B, AB, ..., A^{n−1}B). (5.1)

Then action (2.3) of the group SL on the space C^{n×(n+m)} induces an action of the same group on the space C^{n×nm} by the following formula: K(A, B) → S^{−1} K(A, B) diag(T, ..., T).
Many questions in the search for invariants of linear control systems can be related to the problem of decomposability of polyvectors constructed from vectors of the linear space C^n. The investigation of the relationships between polyvectors of C^n, alternating multilinear forms on C^n, hyperplanes of projective Grassmannians, and regular spreads of projective spaces was presented in [10]. We use some constructions of that article in our own research.
Denote by M the linear space spanned by the minors of order n of the matrix (5.1). Then, in virtue of (5.2), SL(M) ⊂ M. On the space M the group SL_n acts by multiplication by the scalar det S = 1, and the group SL_m acts in accordance with the induced representation. It is known [35,37] that in order to find all homogeneous polynomial invariants depending substantially on B, it is necessary to decompose the indicated representation into irreducible components. According to a known result of the representation theory of groups, an arbitrary irreducible representation of the group SL_m(C) is a tensor product of polyvector representations [10,19]. Then the decomposition into irreducible components has the indicated form, where the summation is taken over all multi-indices ω = (n_1, ..., n_d) satisfying the corresponding conditions. Below, we construct some examples of invariants for systems of type (n, m).

Invariants of systems of type (4, 2)
In what follows, by the symbol I_j(A, B), j ∈ J, we will designate an invariant of the system (A, B), where J is a set of indices.
It is known that any invariant subspace of the matrix A contains an eigenvector of this matrix [15]. Thus, there is a nonzero vector b ∈ V ⊂ B(U) such that Ab = λb, where λ is an eigenvalue of the matrix A. This means that the system (A, B) contains a subsystem of type (1, 1). In addition, if a_1(A) = ... = a_n(A) = 0, then, according to Theorem 2.2, the system (A, B) is a null-form.
Let (A, B) ∈ M be an arbitrary system. Then it is possible to assume that in a suitable basis of the space X the matrix A = diag(α_1, ..., α_{2m}).
Denote by K the open set in C^{n×(n+m)} given by the following condition. It is known [44] that if a regular function I_j(A, B) vanishes on the open set K ⊂ C^{n×(n+m)}, then this function is equal to zero everywhere on C^{n×(n+m)}. Now we supplement the conditions I_j(A, B) = 0, j = 2, ..., k, with the conditions a_i(A) = 0, i = 1, ..., 2m. Then, according to Theorem 2.2, we obtain that (A, B) is a null-form.

Stability criteria for system (A, B)
Now we explain the importance of the concept of stability. Denote by C[A, B]^SL the ring of all invariants of the space C^{n×(n+m)} with respect to action (2.3) of the group SL. Assume that a_1 = a_1(A), ..., a_n = a_n(A), I_1 = I_1(A, B), ..., I_r = I_r(A, B) is a set of homogeneous polynomial generators of the ring C[A, B]^SL. (Since the group SL is reductive, the set of generators is finite [35].) Thus, it is possible to write C[A, B]^SL = C[a_1, ..., a_n, I_1, ..., I_r].
In [35,45] it is shown that the set of invariants a_1(A), ..., a_n(A), I_1(A, B) By virtue of the invertibility of the matrix G(A, B), we get that ∧^s T = λE_s, where E_s is the identity matrix of order l = Bin(m, s). Note that the kernel of the homomorphism T → ∧^s T is {ζ_s E_m}, where ζ_s is an s-th root of unity. Therefore λ = ζ_s, S = ζ_s E_n, and Stab_SL(A, B) consists of the finite number s of pairs of matrices (ζ_s E_n, ζ_s E_m). Therefore, all points of the invariant set M_{f_s} ⊂ C^{n×(n+m)} on which f_s(A, B) ≠ 0 are regular and, consequently, stable.
Theorem 6.2. Let disc(A) ≠ 0. Suppose also that A = diag(λ_1, ..., λ_n) is diagonal in some basis of the space C^n, and that in the same basis all minors of the matrix B are nonzero. Then the system (A, B) is stable.
Proof. In the case (n, m) = (4, 2) this theorem was proved in [24]. (a) Let φ : C^{n×(n+m)} → C^{n×(n+m)} be a morphism of algebraic manifolds satisfying the following condition: (b) Now consider the automorphism φ_λ : C^{n×(n+m)} → C^{n×(n+m)} given by the rule φ_λ(A, B) = (A + λE, B), where E is the identity matrix. (It is easy to check that φ_λ satisfies condition (6.1).) Obviously, A + λE = diag(λ_1 + λ, ..., λ_n + λ) and disc(A + λE) ≠ 0. Then there is a number λ ∈ C such that det(A + λE) = 0. (It suffices to take λ = −λ_i, i = 1, ..., n.) This means that under the conditions of Theorem 6.2 there exist collections of indices (i_1, ..., i_m) and (j_1, ..., j_m), where 1 ≤ i_1 < ... < i_m ≤ n and 1 ≤ j_1 < ... < j_m ≤ n, such that Then there exists a number λ ∈ C for which condition (6.2) fails for the matrix A + λE. Indeed, otherwise it would hold for all λ and for all elementary symmetric polynomials h_k(λ_{i_1}, ..., λ_{i_m}), k = 1, ..., m [15]. This is impossible, since λ_i ≠ λ_j for i ≠ j. Further, since the set of all numbers λ such that disc(∧^m(A + λE)) ≠ 0 and det(A + λE) ≠ 0 is open both in the Zariski topology and in the Euclidean topology of the space C [44], their intersection is nonempty. Consequently, there exist numbers λ ∈ C for which both conditions (6.4) hold simultaneously. Then the system φ_λ(A, B) is stable and, by item (a), the system (A, B) is also stable. This completes the proof of item (b) and of Theorem 6.2. Proof. We consider the morphism of algebraic manifolds: Note that the image of the morphism φ is a closed variety in the space C^{n×n} × C^N, and φ carries orbits of the group SL into orbits of the group GL(n, C) in the space C^{n×(n+m)} under the action This completes the proof.

Stability of systems of type (4, 2)
In this subsection we illustrate the difficulties that can arise in constructing the set of all stable systems when m is a divisor of n (see [24]).
First of all, we show that if the system (A, B) is stable, then the matrix A is cyclic [15]. For this purpose we list the possible Jordan forms of a noncyclic matrix of order 4: Thus, there are six such forms. Consider, for example, a system (A, B) of type (4, 2) for which the matrix A has the form a) and the matrix B is arbitrary. Apply to (A, B) the transformation where the numbers α, β, γ, δ, µ, and ν satisfy the single condition det S ≠ 0. Then the system (A, B) can be transformed into the following system: where * denotes arbitrary entries. It follows that the system (A, B) contains a subsystem of type (2, 1). Consequently, according to Theorem 4.1, the system (A, B) is not stable. Applying the same arguments to any of the matrices b)–f), we obtain a similar result: the conditions of Theorem 4.1 do not hold.
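The cyclicity condition used here can be tested numerically. The following sketch (our own illustration; the function name `is_cyclic` is ours) relies on the fact that a matrix A is cyclic (nonderogatory) exactly when some vector v yields a full-rank Krylov matrix [v, Av, ..., A^{n−1}v], and that a random v is a witness with probability one:

```python
import numpy as np

def is_cyclic(A, trials=20, tol=1e-9, seed=0):
    """Heuristic test for cyclicity: A is cyclic iff some vector v gives a
    full-rank Krylov matrix [v, Av, ..., A^(n-1) v]; for a cyclic A a
    random v works with probability one, so a few trials suffice."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    for _ in range(trials):
        v = rng.standard_normal(n)
        K = np.column_stack([np.linalg.matrix_power(A, k) @ v
                             for k in range(n)])
        if np.linalg.matrix_rank(K, tol) == n:
            return True
    return False

# Two Jordan blocks for the same eigenvalue make a matrix noncyclic:
J = np.diag([2.0, 2.0, 3.0, 4.0])
# A companion matrix is always cyclic:
C = np.array([[0.0, 0.0, 0.0, -1.0],
              [1.0, 0.0, 0.0,  0.0],
              [0.0, 1.0, 0.0,  0.0],
              [0.0, 0.0, 1.0,  0.0]])
print(is_cyclic(J), is_cyclic(C))        # False True
```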
We reduce the matrix A of the system (A, B) to triangular form [15]. Then an analysis of all possible Jordan forms of A leads to the conclusion: if the system is not stable, then there exists a second-order minor of the matrix B that is equal to zero. Conversely, if the system (A, B) is stable, then at least one second-order minor of the matrix B is nonzero.
Denote by I_0 = det(B, B) ≡ 0, I_1 = det(B, AB), I_2 = det(B, A²B), a_1, ..., a_4 the invariants of the system (A, B). Theorem 6.4. Let (A, B) be a system of type (4, 2). Then the set F_s of all SL-stable systems with respect to action (2.3) is determined by the condition: where the polynomial invariant has degree 24 in the entries a_ij and b_ik of the matrices A and B.
Proof. Assume that for the matrix A we have disc(A) ≠ 0. In this case the system (A, B) of type (4, 2) can be brought to the following form: Denote by ∧²B = (∆_12, ∆_13, ∆_14, ∆_23, ∆_24, ∆_34)^T the bivector of the matrix B; here ∆_ij = b_i1 b_j2 − b_i2 b_j1 are the second-order minors of B, i, j = 1, ..., 4, i < j. Then the invariants I_0, I_1, and I_2 of the system (A, B) are given by the formulas Let p_1 = α_1α_2 + α_3α_4, p_2 = α_1α_3 + α_2α_4, and p_3 = α_1α_4 + α_2α_3. Since we assume that disc(A) ≠ 0, from (6.5) we have From the last relations we get It was noted above that the system (A, B) is not stable if at least one minor ∆_ij = 0 or disc(A) = 0. Then from (6.6) it follows that (A, B) is Consider the cyclic matrix Applying to this matrix the transformation we obtain Let T = E be the identity matrix of order 2. Then Since we have Applying similar methods, one can obtain from matrix (6.7) an arbitrary cyclic matrix A with disc(A) = 0. (Note that if matrix (6.7) is noncyclic, then the last assertion fails.) Consequently, the polynomial f_s(A, B) is preserved also for cyclic matrices A such that disc(A) = 0.
Finally, we note that for noncyclic matrices A the system (A, B) of type (4, 2) is not stable. The proof is finished.
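The invariants I_1, I_2 and the second-order minors ∆_ij are straightforward to compute numerically. The following NumPy sketch (our own illustration; the helper names `invariants_42`, `bivector`, `unimodular` are ours) checks their invariance under the action (S, T): (A, B) → (S A S^{-1}, S B T) with det S = det T = 1 for a random system of type (4, 2):

```python
import numpy as np
from itertools import combinations

def invariants_42(A, B):
    """I1 = det[B | AB] and I2 = det[B | A^2 B] for a system of type (4, 2)."""
    return (np.linalg.det(np.hstack([B, A @ B])),
            np.linalg.det(np.hstack([B, A @ A @ B])))

def bivector(B):
    """Coordinates Delta_ij = b_i1*b_j2 - b_i2*b_j1 of the bivector of B."""
    return np.array([B[i, 0] * B[j, 1] - B[i, 1] * B[j, 0]
                     for i, j in combinations(range(B.shape[0]), 2)])

def unimodular(rng, n):
    """A random real matrix with determinant 1."""
    S = rng.standard_normal((n, n))
    if np.linalg.det(S) < 0:
        S[0] *= -1.0                      # flip a row to make det positive
    return S / np.linalg.det(S) ** (1.0 / n)

rng = np.random.default_rng(0)
A, B = rng.standard_normal((4, 4)), rng.standard_normal((4, 2))
S, T = unimodular(rng, 4), unimodular(rng, 2)
A2, B2 = S @ A @ np.linalg.inv(S), S @ B @ T      # the action (2.3)
print(np.allclose(invariants_42(A, B), invariants_42(A2, B2)))   # True
```

Invariance follows since [SBT | SAS^{-1}SBT] = S [B | AB] diag(T, T), whose determinant equals det[B | AB] · (det T)².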
It is clear that I(A, B) is GL-invariant, but it is not SL-invariant. Therefore, we have to take the invariant We must now show that if the matrix A is noncyclic, then I_2(A, B) = 0. This can be checked by the methods of Subsection 6.1.
Thus, the set of all SL-stable systems (A, B) of type (3, 2) is given by the condition Theorem 7.1. Any polynomial invariant of the group SL is a function of a_1, ..., a_n, ∆_1, ..., ∆_r.
Proof. We represent the polynomial f(A, B) in the following form: where g_i(A) (respectively, v_i(B)) are polynomials depending only on the entries of the matrix A (respectively, of the matrix B), and the polynomials g_i(A) may be chosen linearly independent.
Further, for every T ∈ SL(m, C) we have Consequently, it follows that where h(...) is a homogeneous polynomial in the entries of the matrix A and in ∆_1, ..., ∆_r. Moreover, taking into account that any SL(n, C)-invariant of the matrix A is a polynomial in a_1, ..., a_n, we conclude that the invariant f(A, B) is a function of a_1, ..., a_n, ∆_1, ..., ∆_r. Proof. (a) p = 2. In this case the proof is easily obtained from the proofs of Theorems 5.2 and 6.4.
In all other cases system (7.6) has no solutions. Hence in the space X there exists a subspace X_1 such that dim X_1 ∩ B(U) = 1 and dim X_1/1 < 5/2. This means that the system (A, B) is the null-form, which is defined by the invariants a_1, ..., a_5, I_23, I_24, I_34 (see Theorems 2.2 and 5.4).
(b) p > 2. In order to generalize the proof of Theorem 7.3 to this case, it suffices to modify only the structure of the diagonal matrix D. The diagonal elements of this matrix have the form In order that the conditions of Theorem 2.1 be satisfied, it is necessary that Bin(p + 1, 2) = p(p + 1)/2 invariants (5.11) vanish. However, among these invariants there are (p − 1)(p − 2)/2 syzygies [37,44].
Invariants (5.11) are the coordinates of the bivector Thus, among the generators of the ring C[A, B]^SL there must be p(p + 1)/2 − (p − 1)(p − 2)/2 = 2p − 1 algebraically independent invariants (5.11). These invariants are indicated in Theorem 7.3. We now compute the dimension of the space of orbits for systems of type (2p + 1, 2). From formula (4.1) it follows that for a system of type (2p + 1, 2) we have As for a description of the rings of invariants for systems of arbitrary type, we can state the following.
Denote by W° ⊂ S the algebraic variety of all null-forms of the space S with respect to action (2.3) of the group SL. Let also f_1(·), ..., f_k(·) be homogeneous invariant polynomials defining the variety W°. (This means that if W_max is the maximal subvariety of S such that f_1(y) = 0, ..., f_k(y) = 0 for all y ∈ W_max, then W_max = W°.) We denote by C[f_1, ..., f_k] the ring of invariants generating the subvariety W°. Then the following theorem holds. Let I = C[a_1, ..., a_n, I_1, ..., I_r] ⊂ C[A, B] be the ideal generated by the polynomials a_1, ..., a_n, I_1, ..., I_r in the ring of polynomials in the entries a_11, ..., a_nn, b_11, ..., b_nm of the matrices A and B.
Let Thus, in the generic case we have I = C[a_1, ..., a_n, I_1, ..., I_r] ⊂ C[A, B]^SL. In order that the equality I = C[a_1, ..., a_n, I_1, ..., I_r] = C[A, B]^SL hold, the ideal I must be radical. (In this case, the conditions of Theorem 7.4 are fulfilled.) Verification of the last condition is a difficult problem. Therefore, the question of constructing the ring of invariants for arbitrary systems is not considered in the present paper. (The exception is the systems of types (2p, 2) and (2p + 1, 2), for which the rings of invariants were given in Subsections 7.2 and 7.3.) Nevertheless, we can construct the generators of the quotient field C(A, B)^SL for the systems (A, B) considered in Section 5. Knowledge of these generators already allows one to solve the equivalence problem. (However, their number is not minimal; this is the main drawback of the null-form approach.)

Description of invariants for system (2.1), (2.2)
Below, we will use the ring of invariants of the matrix pair (C, A) with respect to the action of the group SL. This ring can be obtained from the algebra C[A, B]^SL by the replacements B → C^T and A → A^T. We denote this ring by C[C, A]^SL, where SL = {S × W} = SL(n, C) × SL(p, C). In addition, for system (2.1), (2.2) we use the notation (C, A, B). We also call the system (C, A, B) a system of type (p, n, m), where the numbers p, n, and m are the dimensions of the output space, state space, and input space.
Note that the results of this subsection can be obtained from the theorem on the structure of generators of the ring of invariants with respect to action (2.3) of the group SL for one (n × n)-matrix, m column vectors, and p row vectors of dimension n [37].
Introduce the matrices Then action (2.3) of the group SL on the space C^{n×n} × C^{n×m} × C^{p×n} induces actions of the same group on the spaces C^{n×nm}, C^{pn×n}, C^{p×mp}, and C^{mp×m}, given by the following formulas: Further, as in Section 5, it is necessary to decompose these representations into irreducible components. These decompositions are given by the following formulas: (The notation here is the same as in Section 5.) In what follows we restrict ourselves to the case m = p. Introduce the following invariants of the group SL: Proof. We assume that the rings of invariants C[A, B]^SL of the system (A, B) and C[C, A]^SL of the system (C, A) are known. Assume also that the matrix A is reduced to diagonal form: A = diag(α_1, ..., α_n). Then invariants (8.1) can be rewritten in the form: where B_j (C_j) are the coordinates of the polyvector ∧^m B (of the polyvector ∧^m C), j = 1, ..., r.
We may assume that the solutions of system (8.3) are sought on some open set L ⊂ C^r such that if ∧^m B ∈ L and ∧^m C^T ∈ L, then B_j C_j = W_j ≠ 0, j = 1, ..., r. It follows that if the coordinates of the vector ∧^m B (or ∧^m C^T) are known, then the coordinates of the vector ∧^m C^T (or ∧^m B) are uniquely determined from the system of equations B_j C_j = W_j, j = 1, ..., r.
Further, in the same way as in the proof of Theorem 7.2, we use Hilbert's theorem [35] and the method of construction of invariants (8.1). The proof is finished.
The results of Theorem 8.1 can be sharpened by using Theorems 7.2 and 7.3. The next two theorems are obvious corollaries of Theorem 8.1.
Denote by O_G(C, A, B) ⊂ S the orbit of the system (C, A, B) with respect to action (2.3) of the group SL.
The number 8p − 1 of generators of the ring C[C, A, B]^SL is minimal.
Let (C, A, B) be a system of type (m, m + 1, m), and let a_1, ..., a_{m+1} be the coefficients of the characteristic polynomial of the matrix A. In this case dim_C O_SL(S) = 2m + 3.

Equivalence of linear control systems
Most of the invariants considered in the present article are described in Theorem 6.1; that is, they are minors of the appropriate matrix G(A, B) (or R_i(A, B)). It follows that all syzygies between these invariants are automatically satisfied, which facilitates the search for a minimal basis of invariants. Although the problem of describing the generators of the ring of invariants is not solved in the general case, the equivalence problem, which is important for applications, has been solved. (Here it suffices to know the generators of the quotient field.) All of the above is summarized by the following obvious theorem. Let (C, A, B) be a system of type (p, n, m), and let the numbers a_1(A), ..., a_n(A) be the coefficients of the characteristic polynomial of the matrix A. Denote by I_j(A, B), j = 1, ..., r_B, the generators of the quotient field C(A, B)^SL depending on B, and by P_q(C, A), q = 1, ..., r_C, the generators of the quotient field C(C, A)^SL depending on C. If p = m, then denote by K_l(C, A, B) = det(C A^{l−1} B), l = 1, ..., r_BC, the invariants depending on B and C. a_1(A_1) = a_1(A_2), ..., a_n(A_1) = a_n(A_2), where r = n · Bin(m, s), 1 < s < m, s = n − dm, d is the integer part of n/m, and as the open set S one can take the set F_s of all SL-stable systems.
Proof. The numbers n and m are coprime, and at least one of the invariants I_j(A, B) (respectively, P_q(C, A)) is nonzero. Then from Theorems 6.1 and 6.3 it follows that there exists a subset of all SL-stable systems of type (n, m) (respectively, (p, n)) for which the equivalence problem is solvable.
For some types of systems the generators of the quotient field can be taken from Theorems 5.1, 5.4, and 5.5.
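For p = m, the invariants K_l(C, A, B) = det(C A^{l−1} B) are determinants of Markov parameters, and their invariance is easy to verify numerically. A NumPy sketch (our own illustration; the helper names and the chosen dimensions are ours):

```python
import numpy as np

def K_invariants(C, A, B, r):
    """K_l = det(C A^(l-1) B), l = 1..r, defined when p = m."""
    return np.array([np.linalg.det(C @ np.linalg.matrix_power(A, l - 1) @ B)
                     for l in range(1, r + 1)])

def unimodular(rng, n):
    """A random real matrix with determinant 1."""
    S = rng.standard_normal((n, n))
    if np.linalg.det(S) < 0:
        S[0] *= -1.0
    return S / np.linalg.det(S) ** (1.0 / n)

rng = np.random.default_rng(3)
n, m = 4, 2                                   # a system of type (2, 4, 2)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
S, T, W = (unimodular(rng, k) for k in (n, m, m))
A2 = S @ A @ np.linalg.inv(S)                 # action on the state space
B2 = S @ B @ T                                # action on the input space
C2 = W @ C @ np.linalg.inv(S)                 # action on the output space
print(np.allclose(K_invariants(C, A, B, 3), K_invariants(C2, A2, B2, 3)))  # True
```

Indeed, C2 A2^{l−1} B2 = W (C A^{l−1} B) T, so the determinant is multiplied by det W · det T = 1.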
Another important problem, which has not been solved, is that of finding the set of all SL-stable systems. (The solution was obtained only for systems (A, B) of types (4, 2) and (m + 1, m).) However, the set of all SL-stable systems contains a subset defined by the condition disc( are the coordinates of the polyvector ∧^m B (of the polyvector ∧^p C); h_B = Bin(n, m), h_C = Bin(n, p). In the future the authors hope to solve the problem of describing all SL-stable systems of type (p, n, m), n > m, n > p.

Reconstruction of a system of differential equations from a known multivariate time series

We assume that there are n characteristics (measurements and computations) of some dynamic process: z_1(t_i), ..., z_n(t_i), i = 1, 2, ..., N. In addition, we suppose that these measurements are noisy. Thus, we have a multivariate time series defined for all t_i ∈ (t_1, t_N). Here, for all i = 1, 2, ..., N, we have t_i = i∆t and ∆t = (t_N − t_1)/N. We further suppose that θ_1(t_i), ..., θ_n(t_i) are Gaussian (white) noises, which by definition cannot produce statistically systematic errors [29]-[31], [34], [49]. Finally, we assume that x_1(t_i), ..., x_n(t_i) is a discrete approximation of some curve x(t) = (x_1(t), ..., x_n(t))^T ∈ R^n [16], [29]-[31], [34], [49]. In turn, the curve x(t) is assumed to be a solution of some autonomous system of differential equations. (The necessity of such a description is dictated by the considerations given above.) Further, we use the procedure for determining the unknown right-hand side of the system of differential equations (1.3) suggested in [16], [29]-[31], [34], [36], [49]. This procedure is based on the least squares method and on the fact that we know, with sufficient precision, the components of x(t) and of its derivative ẋ(t).
Since the number N may be chosen arbitrarily large, a high-precision reconstruction may be achieved. Thus, we can expect that the solution of the reconstructed system will be close to the purified solution x(t).
However, one important circumstance that can arise during reconstruction remained outside the attention of the authors of [34], [49]. The point is that in [34], [49] the interval (t_1, t_N) is assumed to be finite. If the problem of long-term prediction is considered, it is necessary to let t_N → ∞. In this case the reconstruction must be carried out so that system (1.3) has bounded solutions [2]-[6]. Now suppose that the dimension n of the phase space in which the dynamic process under study evolves is known. Assume also that the m different variables (m < n) describing this process, for which time series can be measured, are known. Then the matrices A and B of system (1.3) can be represented in the following form [5,20]: Here the positive integers ν_i take two values: ν_i = k + 1 if i = 1, ..., n − km, and ν_i = k if i = n − km + 1, ..., m, where k is the integer part of n/m; i, j = 1, ..., m.
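A minimal numerical sketch of the least-squares step (our own illustration, for a linear right-hand side only; system (1.3) in the text is more general): simulate a known system, approximate ẋ(t) by central differences on the sampled series, and recover the coefficient matrix by least squares.

```python
import numpy as np

# Simulate a known linear system x' = A x, then recover A from the samples.
A_true = np.array([[0.0, 1.0],
                   [-2.0, -0.3]])
dt, N = 0.01, 2000
X = np.empty((N, 2))
X[0] = [1.0, 0.0]
for i in range(N - 1):                       # explicit Euler integration
    X[i + 1] = X[i] + dt * (A_true @ X[i])

Xdot = (X[2:] - X[:-2]) / (2 * dt)           # central-difference derivatives
# Least squares: find M with X M ~ Xdot; then M is an estimate of A^T.
A_hat = np.linalg.lstsq(X[1:-1], Xdot, rcond=None)[0].T
print(np.abs(A_hat - A_true).max() < 0.05)   # True: coefficients recovered
```

With noisy data the same `lstsq` call is used; the estimate then improves as N grows, as noted above.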

Reconstruction of dynamic processes in a contact electric network
In article [43] the question of constructing a regulator for stabilizing the voltage in a contact network was studied. In that article a system of differential equations modeling the behavior of the current (I) and voltage (U) in the mentioned network was constructed. However, the adequacy of the obtained model to the real dynamics of the electric processes in the contact network was not investigated.
Below, we remove this shortcoming. For this purpose we apply the method of invariant reconstruction of differential equations to the process shown in Fig. 2(a1).
In most practical cases of reconstructing differential equations from measurements of certain variables, it is impossible to change the set of measuring instruments. This means that representing the constructed model in other variables is also impossible. Therefore, the question of checking the adequacy of the model to the real system remains open.
In this paper, we propose the following method for checking the adequacy of the model to the real system.
The dynamics of an arbitrary autonomous system of differential equations is fully determined by the parameters of this system.
Let K be a set of restrictions on the parameters of system (1.3) that determines its desired behavior. In the general case, the restrictions included in K should be functions of the invariants of system (1.3). (For example, the behavior of any linear dynamic system can be described using the coefficients of its characteristic polynomial, which, in turn, are invariant under changes of variables of this system.) At present, constructing a basis of polynomial invariants of an arbitrary polynomial system of differential equations is an unsolved problem. Therefore, we can use only the part of the polynomial invariants that are constructed in this article. In this regard, we propose the following methodology for checking the adequacy of the model to the real system.
1. Using the least squares method, reconstruct a system (S_1) of differential equations from the measurement results.
2. Repeat the measurements of the same dynamic characteristics of the process under study. After that, reconstruct a new system of differential equations (S_2).
3. For the systems (S_1), (S_2), check whether the inequalities of the set K hold.
4. If the current-voltage characteristics of the systems (S_1) and (S_2) are equivalent, then it can be argued that the system reconstructed from the measurement results adequately describes the dynamics of the real process in the contact network.
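Steps 1-4 can be sketched numerically. In the following illustration (our own; the function name `adequate` and the tolerance are ours), only the characteristic-polynomial invariants a_1, ..., a_n of the two reconstructed models are compared, as the parenthetical remark above suggests; the full method would also compare the invariants I_j(A, B):

```python
import numpy as np

def adequate(A1, A2, eps=1e-3):
    """Compare invariant sets of two reconstructed linear models; here only
    the characteristic-polynomial coefficients a_1, ..., a_n are used."""
    return np.abs(np.poly(A1)[1:] - np.poly(A2)[1:]).max() < eps

A = np.array([[0.0, 1.0],
              [-2.0, -0.3]])
S = np.array([[1.0, 0.2],
              [0.0, 1.0]])                    # an invertible change of variables
print(adequate(A, S @ A @ np.linalg.inv(S)),  # True: the invariants coincide
      adequate(A, A + 0.1 * np.eye(2)))       # False: the trace differs by 0.2
```

Two reconstructions of the same process should pass this test even if they are written in different state variables, since the invariants do not depend on the choice of basis.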
We assume that we can measure the voltage and current and, if possible, other dynamic characteristics of the contact electric network. We also suppose that among these characteristics there can be derivatives of the voltage and current with respect to t. (If the derivatives cannot be measured, it is assumed that sufficiently smooth approximations of these derivatives exist.) In [43] the structure of the process represented in Fig. 2(a2) was described by the following system of differential equations: (Fig. 2: (a1) experimental data; (a2) modeling after 14000 measurements for system (10.5), in kV and kA [7,43]; here x = U, z = I.)

Example
In accordance with the results given in [7,43], the process presented in Fig. 2(a1) can be described by the following system of differential equations: Here x_1 = U, x_2 = U̇, x_3 = I, x_4 = İ.
(10.11) For this system the equilibrium point is (3.44600, 0, 0.54672, 0). Now, repeating for system (10.9) the procedure described above in Subsection 10.2, we arrive at the following matrices of system (1.9): Verification of inequalities (10.3) shows that they hold at ε = 0.1 · 10^{-2}. This means that either of the systems (10.9) and (10.11) can be used to describe the process shown in Fig. 2. The last statement is confirmed by Fig. 3 (here z = I and x = U).

Conclusion
The results given above allow us to draw the following conclusions. 1. The problem of describing the algebraic invariants of a linear control system is solved.
2. With the help of these invariants, the equivalence problem for two nonlinear systems obtained from studies of the corresponding time series is also solved.