ABSCISSA MINIMIZATION PROBLEMS FOR THIRD ORDER POLYNOMIAL FAMILIES

UDK: 519.853.4
MSC2000: 26C05, 49R50, 49J52
2011 Alexander Frumkin
PhD, Assistant Professor, Electric-Power Supply Department, South-West State University; email: frumkinam@mail.ru
Given a polynomial, its abscissa denotes the largest real part of its roots. We consider the problem of abscissa minimization for various third order polynomial families with one or two varying parameters. Even the simplest such problems require an individual approach. Our results give an explicit relationship between the sought singular values of the variable parameters and the remaining fixed coefficients. The results are illustrated by graphs. Simple applications to the theory of automatic regulation are presented.
Keywords: Polynomial, root, polynomial abscissa, Hurwitz criterion, polynomial family, abscissa minimization, singular value.
1. Introduction
Let n ∈ N, let a = (a1, a2, ..., an−1, an) ∈ R^n be an n-dimensional vector, and let
Z(a) = {λ ∈ C : f(a, λ) = 0}
be the set of roots of the corresponding polynomial f(a, λ) = λ^n + a1λ^(n−1) + ... + an. The number
χ(a) = max{Re(λ) : λ ∈ Z(a)}
is called the abscissa of the polynomial f(a, ·) (or the abscissa of the vector a). The number
ρ(a) = max{|λ| : λ ∈ Z(a)}
is called the radius of f(a, ·).
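These definitions are straightforward to evaluate numerically. The following sketch is our own illustration (not part of the original text); it assumes NumPy is available, and the helper names are ours:

```python
import numpy as np

def abscissa(a):
    """Largest real part among the roots of x^n + a1*x^(n-1) + ... + an."""
    return max(r.real for r in np.roots([1.0, *a]))

def radius(a):
    """Largest modulus among the roots of the same polynomial."""
    return max(abs(r) for r in np.roots([1.0, *a]))

# x^2 + 2x + 2 has roots -1 ± i: abscissa -1, radius sqrt(2)
print(abscissa([2.0, 2.0]), radius([2.0, 2.0]))
```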
The following problem appears in the theory of automatic regulation.
Let a smooth mapping a : R^m → R^n define a family of polynomials with real coefficients
(1.1) g(α, λ) = f(a(α), λ) = λ^n + a1(α)λ^(n−1) + a2(α)λ^(n−2) + ... + an(α).
We say that the coefficients of the polynomials g(α, λ) depend on the m-dimensional vector of parameters α = (α1, α2, ..., αm) ∈ R^m.
The abscissa minimization problem consists in finding the set S ⊂ R^m of parameters α at which the function χ(a(α)) attains its minimal value.
The radius minimization problem is formulated similarly. In both problems the optimal set of parameters S may be empty, finite or infinite.
The majority of publications in this area deal with minimization problems for the spectral abscissa and the spectral radius of a matrix-valued function A(α), defined through the characteristic polynomial p(A(α)). Variational properties of the functions χ(p(A(α))) and ρ(p(A(α))) for classes of matrix-valued mappings were investigated in [Fletcher 1985, Overton and Womersley 1988, Burke 1994, Burke 2001-1, Lewis 2002]. Operators in Banach spaces were considered in [Friedland 1978]. The article [Burke 2001-2] is dedicated to the general variational properties of polynomial families with complex coefficients.
Since the abscissa and the radius are continuous but non-Lipschitz functions of the polynomial coefficients [Burke 2001, Lewis 2002], it has been suggested to replace the minimization problem for the functions χ(p(A(α))) or ρ(p(A(α))) by a related minimization problem for functions having better smoothness properties, see [Shapiro 95, Trefethen 97, Burke 2003, Vanbiervliet 2009].
Results of the research in the considered area are used for the design of efficient algorithms for multidimensional minimization, see e.g. [Overton 1988, Shapiro 95, Freitas 1998].
In this paper we study abscissa minimization problems for polynomial families of low order by means of analytic (explicit) solutions. This approach is of interest for the following reasons:
(1) It allows one to estimate the parameters of an automatic regulation system within the limits of a simplified model. In this case the exact solution of the minimization problem may give acceptable values of the parameters.
(2) The insight provided by our analysis can often be used as a starting point for a numerical solution of the minimization problem for a more complex system model.
(3) It can be used for testing computer programs implementing abscissa minimization methods for high order polynomial families.
(4) It can be used for building students' skills in the analysis of automatic regulation systems.
(5) It is an interesting mathematical problem in its own right.
The purely analytical approach to the abscissa or radius minimization problem for a polynomial family is rarely found in the literature. We mention [Cox 1998, Freitas 1999], where special second and fourth degree polynomial families are considered.
In this paper we consider several abscissa minimization problems for third order polynomial families. The results are partially based on our previous studies in [Frumkin 2008-1, Frumkin 2008-2].
2. Main Results
Let f(a, λ) = λ^n + a1λ^(n−1) + a2λ^(n−2) + ... + an. Consider the abscissa minimization problem for this polynomial family assuming that all the coefficients ak are allowed to vary. Then the infimum of χ(a) is negative infinity, i.e. the problem has a trivial solution (S = ∅). Therefore, in the non-trivial case one has to consider problems where some coefficients ak are fixed and the others may vary arbitrarily.
From the applied point of view it is especially important to study Hurwitz polynomial families. Recall that a polynomial f(a, λ) is called Hurwitz (the vector a ∈ R^n is Hurwitz) if χ(a) = max{Re(λ) : λ ∈ Z(a)} < 0. Given a mapping a : R^m → R^n, the polynomial family f(a(α), λ) is called Hurwitz if Ω(a) = {α ∈ R^m : χ(a(α)) < 0} ≠ ∅.
To simplify the notation we will write f(α, λ) instead of f(a(α), λ), Ω instead of Ω(a), and χ(α) instead of χ(a(α)).
Meaningful problems for second order polynomial families are quite simple, see [Frumkin 2008-1]. In this paper we consider several abscissa minimization problems for the Hurwitz polynomial family
(2.1) f(a, λ) = λ³ + a1λ² + a2λ + a3
with one or two variable parameters.
Scientific notes. Electronic scientific journal of the Kursk State University No 1(17) 2011
Theorem 2.1. Consider the polynomial family
f(α, λ) = λ³ + a1λ² + a2λ + α,
where a1, a2 > 0 are fixed, α is variable, and Ω = (0, a1a2) ⊂ R.
(1) If 3a2 ≥ a1², then S = {αmin} with
αmin = (a1/3)(a2 − (2/9)a1²),  χ(αmin) = −a1/3.
(2) If 3a2 < a1², then S = {αmin} with
αmin = (1/27)(a1 + 2√(a1² − 3a2))(a1 − √(a1² − 3a2))²
and
χ(αmin) = −(1/3)(a1 − √(a1² − 3a2)).
In both cases the minimizer αmin is unique.
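The closed-form answer of Theorem 2.1 is easy to check numerically. The sketch below is our own illustration (assuming NumPy): for a1 = 3, a2 = 2 the case with 3a2 < a1² applies, and the predicted minimizer is compared against a brute-force scan of the abscissa over the Hurwitz interval (0, a1a2):

```python
import numpy as np

def chi(a1, a2, alpha):
    # abscissa of x^3 + a1*x^2 + a2*x + alpha
    return max(r.real for r in np.roots([1.0, a1, a2, alpha]))

a1, a2 = 3.0, 2.0                     # 3*a2 = 6 < 9 = a1**2
d = np.sqrt(a1**2 - 3*a2)
alpha_min = (a1 + 2*d) * (a1 - d)**2 / 27   # predicted minimizer
chi_min = -(a1 - d) / 3                     # predicted minimal abscissa

grid = np.linspace(1e-3, a1*a2 - 1e-3, 20001)
vals = [chi(a1, a2, t) for t in grid]
print(alpha_min, chi_min)                     # prediction
print(grid[int(np.argmin(vals))], min(vals))  # scan agrees to grid accuracy
```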
Theorem 2.2. Consider the polynomial family
f(α, λ) = λ³ + a1λ² + αλ + a3,
where a1, a3 > 0 are fixed, α is variable, and Ω = (a3/a1, +∞) ⊂ R.
(1) If 27a3 ≥ a1³, then S = {αmin} with
αmin = (2/9)a1² + 3a3/a1,  χ(αmin) = −a1/3.
(2) If 27a3 < a1³, then the polynomial φ(p) = p³ − a1p² + 4a3 has a unique root η on the interval (0, (2/3)a1], and S = {αmin} with
αmin = η(a1 − η) + a3/(a1 − η),  χ(αmin) = −η/2.
In both cases the minimizer αmin is unique.
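For a concrete check of case (2), take a1 = 3 and a3 = 1/2 (so 27a3 = 13.5 < 27 = a1³); then φ(p) = p³ − 3p² + 4a3 has the root η = 1 in (0, 2], and the predicted optimum is αmin = 2.25 with abscissa −1/2. The sketch below is our own illustration, assuming NumPy:

```python
import numpy as np

def chi(a1, alpha, a3):
    # abscissa of x^3 + a1*x^2 + alpha*x + a3
    return max(r.real for r in np.roots([1.0, a1, alpha, a3]))

a1, a3 = 3.0, 0.5
phi_roots = np.roots([1.0, -a1, 0.0, 4*a3])   # roots of p^3 - a1*p^2 + 4*a3
eta = [r.real for r in phi_roots
       if abs(r.imag) < 1e-9 and 0 < r.real <= 2*a1/3][0]
alpha_min = eta*(a1 - eta) + a3/(a1 - eta)
print(eta, alpha_min, -eta/2)          # approximately 1.0, 2.25, -0.5
print(chi(a1, alpha_min, a3))          # approximately -0.5
```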
Theorem 2.3. Consider the polynomial family
f(α, λ) = λ³ + αλ² + a2λ + a3,
where a2, a3 > 0 are fixed, α is variable, and Ω = (a3/a2, +∞) ⊂ R.
(1) If a2 ≤ ∛16·a3^(2/3), then S = {αmin} with
αmin = a2²/(4a3) + 2a3/a2,  χ(αmin) = −a2²/(8a3).
(2) If ∛16·a3^(2/3) < a2 ≤ 3a3^(2/3), then S = {αmin} with
αmin = 3/η1,  χ(αmin) = −1/η1,
where η1 is the smallest positive root of the polynomial
ψ(x) = a3x³ − a2x² + 2.
(3) If a2 > 3a3^(2/3), then S = {αmin} with
αmin = 1/θ1² + 2√a3·θ1,  χ(αmin) = −√a3·θ1,
where θ1 is the smallest positive root of the polynomial
ρ(t) = a3t³ − a2t + 2√a3.
In all three cases the minimizer αmin is unique.
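For a concrete check of case (1), take a2 = 2, a3 = 1 (so a2 is below ∛16·a3^(2/3) ≈ 2.52). The formulas predict αmin = 2 and χ(αmin) = −1/2, and indeed λ³ + 2λ² + 2λ + 1 = (λ + 1)(λ² + λ + 1). The sketch below is our own illustration (assuming NumPy), confirming this against a scan over α:

```python
import numpy as np

def chi(alpha, a2, a3):
    # abscissa of x^3 + alpha*x^2 + a2*x + a3
    return max(r.real for r in np.roots([1.0, alpha, a2, a3]))

a2, a3 = 2.0, 1.0
alpha_min = a2**2/(4*a3) + 2*a3/a2     # = 2.0
chi_min = -a2**2/(8*a3)                # = -0.5

grid = np.linspace(a3/a2 + 1e-3, 10.0, 20001)
vals = [chi(t, a2, a3) for t in grid]
print(alpha_min, chi_min)
print(grid[int(np.argmin(vals))], min(vals))   # scan agrees
```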
Theorem 2.4. Consider the polynomial family
f(α, β, λ) = λ³ + a1λ² + βλ + α,
where a1 > 0 is fixed, α, β are variable, and Ω = {(α, β) ∈ R² : 0 < α < a1β}.
Then the point of the abscissa minimum is not unique. The set of minimizers is
S = {(α, β) ∈ R² : α ≥ a1³/27, β = (2/9)a1² + 3α/a1},
and χ(α, β) = −a1/3 for any (α, β) ∈ S.
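Points of the set S are easy to exhibit. For a1 = 3 the set S consists of the pairs with β = 2 + α and α ≥ 1, and at α = 1 the polynomial is (λ + 1)³, while for larger α the real root −1 is joined by a complex pair with the same real part. The sketch below is our own illustration, assuming NumPy:

```python
import numpy as np

def chi(coeffs):
    return max(r.real for r in np.roots(coeffs))

a1 = 3.0
for alpha in (1.0, 2.0, 5.0):          # alpha >= a1**3/27 = 1
    beta = 2*a1**2/9 + 3*alpha/a1      # = 2 + alpha for a1 = 3
    print(alpha, beta, chi([1.0, a1, beta, alpha]))  # abscissa stays near -1
```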
3. Proof of Theorem 2.1
Throughout the proofs, arguments by contradiction end with the explicit word "contradiction". The term singular refers to a value of the parameters at which the abscissa is minimal.
The following three Lemmas apply to polynomial families of arbitrary degree.
Lemma 3.1. Let γ1, γ2, ..., γn be n nonnegative numbers with γ1 + γ2 + ... + γn = s. Then there exist 1 ≤ k, m ≤ n such that γk ≤ s/n and γm ≥ s/n.
Proof. If γk > s/n for all k, 1 ≤ k ≤ n, then the sum of the γk exceeds s, which contradicts the hypothesis. The second statement follows similarly.
Lemma 3.2. Let n ∈ N and let a = (a1, a2, ..., an) ∈ R^n be a Hurwitz vector, that is, χ(a) = max{Re(λ) : λ ∈ Z(a)} < 0. Then χ(a) ≥ −a1/n. Furthermore, if χ(a) = −a1/n, then all the roots of f(a, ·) have the same real part, equal to −a1/n.
Proof. Let λ1, λ2, ..., λn be an enumeration of the roots of f(a, ·) accounting for their multiplicities. By the Vieta theorem a1 = −(λ1 + λ2 + ... + λn). This implies that the sum of the imaginary parts of the roots is zero, since a1 is real. Since the polynomial is Hurwitz we have Re(λk) = −|Re(λk)|. Thus
a1 = −(Re(λ1) + Re(λ2) + ... + Re(λn)) = |Re(λ1)| + |Re(λ2)| + ... + |Re(λn)|.
By Lemma 3.1, for some p, 1 ≤ p ≤ n, we have |Re(λp)| ≤ a1/n. Therefore Re(λp) = −|Re(λp)| ≥ −a1/n. Thus χ(a) = max{Re(λk) : 1 ≤ k ≤ n} ≥ Re(λp) ≥ −a1/n.
Let χ(a) = −a1/n. If for some root λp we had Re(λp) > −a1/n, then χ(a) > −a1/n, a contradiction. Therefore Re(λk) ≤ −a1/n for all 1 ≤ k ≤ n. If for some p we had Re(λp) < −a1/n, then Re(λ1) + Re(λ2) + ... + Re(λn) < −a1, a contradiction.
Lemma 3.3. Let 1 ≤ k ≤ n. Consider the abscissa minimization problem for the Hurwitz polynomial family f(α, λ) = λ^n + a1λ^(n−1) + a2λ^(n−2) + ... + an with all the coefficients fixed except ak = α.
If α0 is such that the polynomial f(α0, λ) has a simple real root λ0, and λ0 is greater than the real part of any other root of this polynomial, then α0 is not a singular value.
Proof. First consider the case k = n, i.e. the polynomial family f(α, λ) = λ^n + a1λ^(n−1) + a2λ^(n−2) + ... + a(n−1)λ + α with a1, a2, ..., a(n−1) fixed.
Let α0 satisfy the conditions of the Lemma, and let λ0 be the corresponding simple real root. Let Λ be the set of all the roots of f(α0, ·). By the assumptions of the Lemma
η = max{Re(λ) : λ ∈ Λ \ {λ0}} < λ0,
and so by the definition of the abscissa χ(α0) = λ0.
We are going to use a theorem on the continuous dependence of the roots of a polynomial on its coefficients. It is proved by complex analysis methods, see [Privalov 1977]. In our case it can be formulated as follows.
Let ε > 0 be chosen so that the ε-neighborhoods of distinct roots λ ∈ Λ are mutually disjoint, that is,
ε < ε0 = min{ |μ − λ|/2 : μ, λ ∈ Λ, μ ≠ λ }.
Then one can find δ > 0 such that for any α satisfying |α − α0| < δ and for each λ ∈ Λ the number of roots of the polynomial f(α, ·) (counting their multiplicities) in the ε-neighborhood of λ is equal to the multiplicity of λ.
Choose ε < min{ε0, (λ0 − η)/2} and find δ according to the above theorem. Denote I0 = (α0 − δ, α0 + δ). Since the multiplicity of the root λ0 is 1, the relation
φ = {(α, λ) ∈ R² : α ∈ I0 ∧ |λ − λ0| < ε ∧ f(α, λ) = 0}
is a function. If α ∈ I0, then all the roots of f(α, ·) except φ(α) lie to the left of (λ0 + η)/2, while φ(α) lies to the right of (λ0 + η)/2, so χ(α) = Re(φ(α)).
Now we establish the monotonicity of φ. Consider the partial derivatives of f with respect to α and λ at the point (α0, λ0); let ∂1 and ∂2 denote differentiation with respect to α and λ, respectively. We have ∂1f(α0, λ0) = 1 and ∂2f(α0, λ0) ≠ 0, since λ0 is a simple root. So, by the implicit function theorem [Shilov 1972], in some neighborhood of α0 the relation φ is a real-valued function. Also φ′(α0) = −∂1f(α0, λ0)/∂2f(α0, λ0) ≠ 0, and so φ is both real-valued and strictly monotone in some neighborhood I ⊂ I0 of α0. Since φ is monotone, we can find α ∈ I such that φ(α) < λ0. Then χ(α) = φ(α) < χ(α0), that is, α0 is not a singular value.
The proof in the case k < n is carried out in the same way, the only difference being that the derivative of f with respect to α is λ^(n−k), so ∂1f(α0, λ0) = λ0^(n−k). If λ0 = 0, then χ(α0) = 0 and, since the family f is Hurwitz, the considered value is nonsingular. If λ0 ≠ 0, the above arguments require no change.
The following Lemma is specific to third order polynomial families.
Lemma 3.4. Consider the abscissa minimization problem for the polynomial family
f(α, λ) = λ³ + a1λ² + a2λ + a3
with any two coefficients fixed and the remaining one (α = ak, k ∈ {1, 2, 3}) variable. The fixed coefficients are assumed to be positive. Denote
(3.1) P = {p > 0 : ∃ s > 0, q > 0, ak > 0 : s + p = a1 ∧ sp + q = a2 ∧ sq = a3 ∧ p² ≤ 4q ∧ p ≤ 2s}.
If α0 is a singular value, then
(1) p0 = −2χ(α0) ∈ P.
(2) The triple of numbers (s0, q0, ak) providing the claim p0 ∈ P, i.e. satisfying the conditions
s0 + p0 = a1 ∧ s0p0 + q0 = a2 ∧ s0q0 = a3 ∧ p0² ≤ 4q0 ∧ p0 ≤ 2s0,
is unique, and ak = α0.
(3) χ(α0) = −(1/2)·max(P).
Proof. If one coefficient of the polynomial is variable, then we can always satisfy the Hurwitz conditions a1, a2, a3 > 0 and a1a2 > a3, see [Anagnost 1991]. Therefore we may consider the polynomial family to be Hurwitz. Let α0 be a singular value. Since the polynomial f(α0, λ) is Hurwitz, all its roots have negative real parts. Furthermore, since f(α0, λ) is a third order polynomial, at least one of its roots is real. Let −s0 be the smallest real root of f(α0, λ).
Represent the polynomial in the product form
f(α0, λ) = (λ + s0)(λ² + p0λ + q0).
Here p0 > 0 and q0 > 0, because f(α0, λ) is Hurwitz. Expanding, we get s0 + p0 = a1, s0p0 + q0 = a2 and s0q0 = a3 (with ak = α0).
Suppose that p0² > 4q0. Then the polynomial has two real roots
λ2 = −p0/2 − √(p0²/4 − q0),  λ3 = −p0/2 + √(p0²/4 − q0)
in addition to −s0. By the choice of s0 we have −s0 ≤ λ2 < λ3, and by Lemma 3.3 α0 is a nonsingular value, a contradiction.
Therefore p0² ≤ 4q0. Then the real parts of the two remaining roots are equal to −p0/2. If p0 > 2s0, then −p0/2 < −s0, and using Lemma 3.3 again we get that α0 is a nonsingular value, a contradiction. Thus p0 ≤ 2s0, and p0 ∈ P because all the conditions from the definition of P are fulfilled. Also −p0/2 ≥ −s0, hence χ(α0) = −p0/2, which gives p0 = −2χ(α0) ∈ P.
The equalities ak = α0, p0 = −2χ(α0), the choice of s0 and the Vieta formulas prove the uniqueness of the triple (s0, q0, ak).
Now suppose that for some p ∈ P we have p > p0 = −2χ(α0). Find a triple (s, q, ak) satisfying the definition of P and denote α = ak. The conditions s + p = a1, sp + q = a2 and sq = a3 imply that the polynomial can be represented in the form f(α, λ) = (λ + s)(λ² + pλ + q). Furthermore, the conditions p² ≤ 4q and p ≤ 2s imply χ(α) = −p/2. Thus χ(α) < −p0/2 = χ(α0), a contradiction. Therefore p ≤ p0 and p0 = −2χ(α0) = max(P).
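The factorization used in the proof is convenient computationally: any triple (s, p, q) with s, q > 0, p² ≤ 4q and p ≤ 2s produces, via the expanded coefficients, a polynomial whose abscissa is exactly −p/2. A minimal sketch, our own illustration assuming NumPy:

```python
import numpy as np

s, p, q = 2.0, 1.0, 1.0               # p**2 <= 4*q and p <= 2*s hold
a1, a2, a3 = s + p, s*p + q, s*q      # coefficients of (x+s)(x^2+p*x+q)
roots = np.roots([1.0, a1, a2, a3])
print(max(r.real for r in roots))     # abscissa = -p/2 = -0.5
```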

Proof of Theorem 2.1. Let us describe the set P, defined in Lemma 3.4, in a more compact way. Let p ∈ P and let s, q, α > 0 satisfy the conditions
s + p = a1, sp + q = a2, sq = α, p² ≤ 4q, p ≤ 2s.
The equalities s + p = a1 and sp + q = a2 imply that s = a1 − p and q = a2 − (a1 − p)p = a2 − a1p + p². Thus the inequalities p ≤ 2s, p² ≤ 4q can be rewritten as
p ≤ 2(a1 − p), p² ≤ 4(a2 − a1p + p²), or p ≤ (2/3)a1, 3p² − 4a1p + 4a2 ≥ 0.
Conversely, let p > 0 satisfy the last two inequalities. Then we can define s = a1 − p, q = a2 − sp, α = sq. The inequality p ≤ (2/3)a1 provides the conditions s > 0 and p ≤ 2s, while the inequality 3p² − 4a1p + 4a2 ≥ 0 provides the conditions q > 0, α > 0 and p² ≤ 4q. Consequently
P = {p > 0 : p ≤ (2/3)a1 ∧ 3p² − 4a1p + 4a2 ≥ 0}.
Let pmax = max(P). If p ∈ P attains the maximal possible value p = (2/3)a1, then it is necessary that
3((2/3)a1)² − 4a1·(2/3)a1 + 4a2 ≥ 0 ⟺ 3a2 ≥ a1².
Conversely, if 3a2 ≥ a1², then pmax = (2/3)a1.
Let 3a2 < a1². In this case the inequality 3p² − 4a1p + 4a2 ≥ 0 has two intervals of solutions: (−∞, (2/3)(a1 − √(a1² − 3a2))] and [(2/3)(a1 + √(a1² − 3a2)), +∞). The second interval does not intersect the interval (0, (2/3)a1]. Since (2/3)(a1 − √(a1² − 3a2)) < (2/3)a1, we get P = (0, (2/3)(a1 − √(a1² − 3a2))] and pmax = (2/3)(a1 − √(a1² − 3a2)).
Now let αmin be the singular value. According to Lemma 3.4, χ(αmin) = −pmax/2, and αmin can be determined from the equalities s = a1 − pmax, q = a2 − s·pmax, αmin = sq.
Thus, if 3a2 ≥ a1², then χ(αmin) = −a1/3, s = a1/3, q = a2 − (2/9)a1², and αmin = (a1/3)(a2 − (2/9)a1²).
If 3a2 < a1², then χ(αmin) = −(1/3)(a1 − √(a1² − 3a2)),
s = a1 − pmax = (1/3)(a1 + 2√(a1² − 3a2)),
q = a2 − s·pmax = pmax²/4 = (1/9)(a1 − √(a1² − 3a2))²
(here we used that 3pmax² − 4a1pmax + 4a2 = 0), and
αmin = sq = (1/27)(a1 + 2√(a1² − 3a2))(a1 − √(a1² − 3a2))².
The above arguments show that for any a1 and a2 one can find the unique value of α which may be singular. On the other hand, the continuous function χ(α) attains its minimum on the closure of Ω, i.e. on [0, a1a2]. At the endpoints χ = 0 (for α = 0 the polynomial has the root 0, and for α = a1a2 it has the roots ±i√a2), so they are not minimizers. Consequently, the found value of α is in fact singular.
4. Proofs of Theorems 2.2 and 2.4
Proof of Theorem 2.2. The arguments are similar to the proof of Theorem 2.1. First we simplify the description of the set P defined in Lemma 3.4. Let p ∈ P and let s, q, α > 0 satisfy the conditions
s + p = a1, sp + q = α, sq = a3, p² ≤ 4q, p ≤ 2s.
The equalities s + p = a1 and sq = a3 imply that s = a1 − p and q = a3/(a1 − p). Therefore the inequalities p ≤ 2s, p² ≤ 4q can be rewritten as
p ≤ 2(a1 − p), p² ≤ 4a3/(a1 − p), or p ≤ (2/3)a1, p³ − a1p² + 4a3 ≥ 0.
Conversely, let p > 0 satisfy the last two inequalities. Then we can define s = a1 − p, q = a3/s, α = ps + q. The inequality p ≤ (2/3)a1 provides the conditions s > 0, p ≤ 2s, q > 0, α > 0, and the inequality p³ − a1p² + 4a3 ≥ 0 provides the condition p² ≤ 4q. Consequently,
P = {p > 0 : p ≤ (2/3)a1 ∧ p³ − a1p² + 4a3 ≥ 0}.
Let pmax = max(P). If pmax = (2/3)a1, then it is necessary that
((2/3)a1)³ − a1((2/3)a1)² + 4a3 ≥ 0 ⟺ 27a3 ≥ a1³.
Conversely, if 27a3 ≥ a1³, then pmax = (2/3)a1.
Let 27a3 < a1³. The polynomial φ(p) = p³ − a1p² + 4a3 has two points of local extremum, determined by the condition φ′(p) = 3p² − 2a1p = 0; thus the extrema are at p0 = 0 and p1 = (2/3)a1. The function φ is strictly increasing on the intervals (−∞, 0], [(2/3)a1, +∞) and decreasing on the interval [0, (2/3)a1]. Furthermore, since φ(p0) = 4a3 > 0 and φ(p1) = 4(a3 − a1³/27) < 0, the function φ has three real roots
η0 ∈ (−∞, 0), η ∈ (0, (2/3)a1), η1 ∈ ((2/3)a1, +∞).
Therefore the set of solutions of the inequality φ(p) ≥ 0 is Q = [η0, η] ∪ [η1, +∞). Consequently, P = Q ∩ (0, (2/3)a1] = (0, η] and pmax = η.
Now let αmin be a singular value. According to Lemma 3.4, χ(αmin) = −pmax/2, and αmin can be determined from the equalities s = a1 − pmax, q = a3/s, αmin = pmax·s + q.
Thus, if 27a3 ≥ a1³, then χ(αmin) = −a1/3, s = a1/3, q = 3a3/a1, and αmin = (2/9)a1² + 3a3/a1.
If 27a3 < a1³, then χ(αmin) = −η/2, s = a1 − η, q = a3/(a1 − η), and αmin = η(a1 − η) + a3/(a1 − η).
Arguing as in the previous Theorem, for any relation between a1 and a3 there exists only one value of α that can be singular. To show that this value actually is singular, we prove that χ attains its minimum on Ω. Fix any β ∈ Ω, so that χ(β) < 0, and put ε = −χ(β).
Let α ∈ Ω, let −s be a real root of the polynomial f(α, ·), and write f(α, λ) = (λ + s)(λ² + pλ + q) with p, q > 0. Then q = α − sp. On the other hand, sp ≤ (s + p)²/4 = a1²/4, so q ≥ α − a1²/4. Take α0 > a1²/4 so large that
a3/(α0 − a1²/4) < ε,
that is, α0 > a1²/4 + a3/ε. Since sq = a3, for α ≥ α0 we obtain s = a3/q ≤ a3/(α − a1²/4) < ε, and hence χ(α) ≥ −s > −ε = χ(β).
The continuous function χ attains its minimum on the interval [a3/a1, α0]. By the above, this minimum (which does not exceed χ(β)) is also the minimum over the whole of Ω, and it is not attained at the endpoints: χ(a3/a1) = 0 and χ(α0) > χ(β). Consequently the minimum is attained at an interior point, which must be the unique candidate found above; hence αmin ∈ (a3/a1, α0) is singular.
Proof of Theorem 2.4. By Lemma 3.2 the abscissa cannot be smaller than −a1/3. So, if S = {(α, β) ∈ Ω : χ(α, β) = −a1/3} ≠ ∅, then this set is the solution of the problem. Let (α, β) ∈ S. Consider the representation f(α, β, λ) = (λ + s)(λ² + pλ + q). By Lemma 3.2 all the real parts of the roots of the polynomial f(α, β, λ) must be equal to −a1/3.
The condition p² − 4q > 0 would imply that all the roots are real and at least two of them are distinct, a contradiction. Therefore the inequality p² − 4q ≤ 0 must be satisfied. Thus s = a1/3 and p = 2a1/3. Since sq = α, the inequality p² − 4q ≤ 0 is equivalent to α ≥ a1³/27. For each such α there exists a unique β, namely β = sp + q = (2/9)a1² + 3α/a1. Thus
S = {(α, β) ∈ R² : α ≥ a1³/27, β = (2/9)a1² + 3α/a1}.

5. Proof of Theorem 2.3
First, we establish three auxiliary results, Lemmas 5.1, 5.2 and 5.3. Then we proceed to the proof of Theorem 2.3, which is interrupted by three more results, Lemmas A, B, and C. Finally, the proof of the Theorem is concluded.
We begin by establishing the well-posedness of the problem.
Lemma 5.1. Let f(α, x) = x³ + αx² + a2x + a3 be a polynomial family with constant coefficients a2, a3 > 0.
Then one can find α0 > 0 such that for every α ≥ α0 the polynomial f(α, x) has a unique real root ξ1(α).
Ordering the remaining complex roots by their imaginary parts defines functions ξ2(α) and ξ3(α). Then
lim (ξ1(α) + α) = 0, lim ξ2(α) = 0, lim ξ3(α) = 0 as α → ∞.
Proof. Claim 1. There exists α0 > 0 such that the polynomial has a unique real root for every α ≥ α0.
Indeed, the derivative of f(α, x) with respect to x is ∂2f(α, x) = 3x² + 2αx + a2. If α² ≤ 3a2, then f(α, x) is monotonically increasing in x and the real root is unique. However, our main interest is the case α² > 3a2. In this case the roots of the derivative are
x1 = (1/3)(−α − √(α² − 3a2)) and x2 = (1/3)(−α + √(α² − 3a2)),
so that f(α, ·) has a local maximum at x1 and a local minimum at x2. The corresponding values of the function are
f(α, x1) = (2/27)α³ + (2/27)α²√(α² − 3a2) − (2/9)a2√(α² − 3a2) − (1/3)αa2 + a3
and
f(α, x2) = (2/27)α³ − (2/27)α²√(α² − 3a2) + (2/9)a2√(α² − 3a2) − (1/3)αa2 + a3.
Note that
f(α, x1) − f(α, x2) = (4/27)α²√(α² − 3a2) − (4/9)a2√(α² − 3a2) = (4/27)(α² − 3a2)√(α² − 3a2) > 0,
that is, f(α, x1) > f(α, x2).
Now we show that f(α, x2) > 0 for all sufficiently large α. Rationalizing the expression for x2 gives x2 = −a2/(α + √(α² − 3a2)), whence |x2| ≤ a2/α. Since αx2² > 0, we obtain
f(α, x2) ≥ x2³ + a2x2 + a3 ≥ a3 − a2³/α³ − a2²/α.
If α ≥ a2, then a2³/α³ ≤ a2²/α, and therefore f(α, x2) ≥ a3 − 2a2²/α ≥ a3/3 > 0 whenever, in addition, α ≥ 3a2²/a3. So one may take
α0 = max{3√a2, a2, 3a2²/a3}
(the first term ensures α² > 3a2; it will also be convenient below). If f(α, x2) > 0, then the value of f(α, ·) at its local minimum is positive, so the polynomial has a unique real root, and this root lies on the interval (−∞, x1]. Claim 1 is established.
Let α ≥ α0 and denote the corresponding unique real root of f(α, ·) by ξ1(α).
Claim 2. lim (ξ1(α) + α) = 0 as α → ∞.
To prove it, let δ = ξ1(α) + α, so that ξ1(α) = −α + δ. First we show that δ > 0 for all sufficiently large α.
Indeed, we have f(α, −α) = −a2α + a3. If α > a3/a2 = α1, then f(α, −α) < 0. Let α ≥ max{α0, α1}. Note that x1 = −(1/3)α − (1/3)√(α² − 3a2) > −(2/3)α > −α. The function f(α, ·) is monotonically increasing on the interval (−∞, x1]. So, if ξ1(α) ≤ −α, then f(α, ξ1(α)) ≤ f(α, −α) < 0, which contradicts f(α, ξ1(α)) = 0. Thus δ > 0 for α ≥ max{α0, α1}.
Continuing with the proof of Claim 2, let α ≥ max{α0, α1}. Rewriting the equation f(α, x) = 0 at x = −α + δ, i.e. (−α + δ)³ + α(−α + δ)² + a2(−α + δ) + a3 = 0, we get
δ(α − δ)² − a2(α − δ) + a3 = 0.
Suppose δ ≥ α. Then δ(α − δ)² + a2(δ − α) + a3 > 0, a contradiction. So α > δ. Consequently,
δ(α − δ)² = a2(α − δ) − a3 < a2(α − δ), that is, δ(α − δ) < a2, or δ² − αδ + a2 > 0.
Note that α ≥ α0 ≥ 3√a2 > 2√a2, so α²/4 − a2 > 0. The last inequality therefore implies that either δ < α/2 − √(α²/4 − a2) or δ > α/2 + √(α²/4 − a2). Suppose that δ ≥ α/2 + √(α²/4 − a2). Then
α − δ ≤ α/2 − √(α²/4 − a2) = a2/(α/2 + √(α²/4 − a2)) ≤ 2a2/α.
It follows that a2(α − δ) ≤ 2a2²/α ≤ (2/3)a3 (the last inequality uses α ≥ 3a2²/a3), hence δ(α − δ)² − a2(α − δ) + a3 ≥ a3/3 > 0, a contradiction with the equation above. Consequently, the only possibility is
δ < α/2 − √(α²/4 − a2) = a2/(α/2 + √(α²/4 − a2)) ≤ 2a2/α,
so δ → 0 as α → ∞. This completes the proof of Claim 2.
Now we estimate the remaining roots of the polynomial. It can be expanded as f(α, x) = (x + α − δ)(x² + px + q), whence the equalities α − δ + p = α, (α − δ)p + q = a2, (α − δ)q = a3 are satisfied. The first equality gives p = δ, and the third gives q = a3/(α − δ). The sign of the discriminant of the trinomial x² + px + q is determined by the sign of δ² − 4a3/(α − δ), that is, by the sign of δ²(α − δ) − 4a3. For sufficiently large α we have δ ≤ 2a2/α and α − δ ≤ α, hence δ²(α − δ) ≤ 4a2²/α < 4a3, and both roots are complex:
ξ2(α) = −δ/2 − i√(a3/(α − δ) − δ²/4),  ξ3(α) = −δ/2 + i√(a3/(α − δ) − δ²/4).
The real part of each root decreases as c1/α, and the imaginary part decreases as c2/√α (c1, c2 are constants). Therefore the limits asserted in the Lemma are satisfied.
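The limits of Lemma 5.1 are visible numerically. The sketch below is our own illustration (assuming NumPy); it tracks the roots of x³ + αx² + a2x + a3 as α grows, with the real root behaving like −α + O(1/α) and the complex pair shrinking like O(1/√α):

```python
import numpy as np

a2, a3 = 2.0, 1.0
for alpha in (10.0, 100.0, 1000.0):
    roots = sorted(np.roots([1.0, alpha, a2, a3]), key=lambda z: z.real)
    xi1 = roots[0]                      # the real root, close to -alpha
    pair = roots[1:]                    # complex pair, shrinking to 0
    print(alpha, xi1.real + alpha, [abs(z) for z in pair])
```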
Lemma 5.1 has an immediate corollary.
Lemma 5.2. Let f(α, x) = x³ + αx² + a2x + a3 be a polynomial family with constant coefficients a2, a3 > 0. Then χ(α) → 0 as α → ∞.
Proof. For sufficiently large α we have χ(α) = Re(ξ2(α)) = Re(ξ3(α)), which tends to 0 by Lemma 5.1.
The following Lemma 5.3 is a corollary of Lemma 5.2.
Lemma 5.3. Let f(α, x) = x³ + αx² + a2x + a3 be a polynomial family with constant coefficients a2, a3 > 0. Then there exists α > 0 at which the polynomial abscissa is minimal.
Proof. Note that, by the Hurwitz criterion (see [Anagnost 1991]), if α ≤ a3/a2, then χ(α) ≥ 0, and if α > a3/a2, then χ(α) < 0. Take an arbitrary β > a3/a2 and find (according to Lemma 5.2) γ > β such that χ(β) < χ(α) < 0 for all α ≥ γ. The function χ(α) is continuous and attains a minimum on the interval [a3/a2, γ]. This minimum does not exceed χ(β); therefore it is the absolute minimum.
The proof of Theorem 2.3 consists of two parts. First, the description of the set P from Lemma 3.4 is simplified, and three additional lemmas are established for the evaluation of max(P). Then, in the second part, we determine max(P) and complete the proof of the Theorem.
Proof of Theorem 2.3, Part I. Let p ∈ P (defined in Lemma 3.4), and let s, q, α > 0 satisfy the conditions s + p = α, sp + q = a2, sq = a3, p² ≤ 4q, p ≤ 2s. Denote x = 1/s > 0. Then
q = a3x, p/x + a3x = a2 ⟹ p = a2x − a3x², p ≤ 2/x, p² ≤ 4a3x.
Conversely, suppose that p > 0 and there exists x > 0 such that
p = a2x − a3x² ∧ p ≤ 2/x ∧ p² ≤ 4a3x.
Then, letting s = 1/x, q = a3x, α = p + s, we have p ∈ P. Therefore
P = {p > 0 : ∃x > 0 : p = a2x − a3x² ∧ p ≤ 2/x ∧ p² ≤ 4a3x}.
The equation 2/x = 2√(a3x) has the unique positive root μ = 1/∛a3. If x < μ, then 2/x > 2√(a3x); if x > μ, then 2/x < 2√(a3x). Denoting g(x) = a2x − a3x², we can give another description of P:
P = {p > 0 : ∃x > 0 : p = g(x) ∧ ((x ≤ μ ∧ g(x) ≤ 2√(a3x)) ∨ (x ≥ μ ∧ g(x) ≤ 2/x))}.
Our goal is to find pmax = max(P).
Now we temporarily interrupt the formal proof and appeal to Figure 1 for an illustration of the subsequent arguments.
Fig. 1. Minimum of the abscissa for four different values of the parameter a2
The shaded area in Figure 1 is the region
A = {(x, y) ∈ R² : x > 0 ∧ 0 ≤ y ≤ 2√(a3x) ∧ 0 ≤ y ≤ 2/x}.
The vertical dotted line is x = μ. We search for the largest value of g(x) = a2x − a3x² under the restriction (x, g(x)) ∈ A. If the parameter a2 is small, then the graph y = g(x) looks like Curve 1, and pmax is the maximal value of g(x).
As the parameter a2 increases, it reaches the value at which the parabola y = g(x) touches the hyperbola y = 2/x, i.e. the part of the boundary of A to the right of the line x = μ. This value is not critical for our problem, although beyond it the parabola y = g(x) intersects the hyperbola y = 2/x at two points. The parameter a2 reaches its first critical value when the maximum point of the parabola y = g(x) lies on the hyperbola y = 2/x (Curve 2). After that, pmax corresponds to the smaller root of the equation g(x) = 2/x. This situation persists until a2 reaches its second critical value, at which the parabolas y = g(x), y = 2√(a3x) and the hyperbola y = 2/x all intersect at one point (Curve 3). When a2 is larger than the second critical value, pmax corresponds to the smaller root of the equation g(x) = 2√(a3x) (Curve 4).
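The regime picture can be explored directly by maximizing g over the admissible region on a grid. The sketch below is our own illustration (assuming NumPy; the sample values of a2 are ours). With a3 = 1 the two critical values are ∛16 ≈ 2.52 and 3, and the three chosen a2 fall into the three regimes:

```python
import numpy as np

a3 = 1.0
x = np.linspace(1e-4, 5.0, 200001)
results = {}
for a2 in (2.0, 2.8, 3.5):            # the three regimes for a3 = 1
    g = a2*x - a3*x**2
    feasible = (g <= 2/x) & (g <= 2*np.sqrt(a3*x))
    results[a2] = g[feasible].max()   # numerical p_max
    print(a2, results[a2])
```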
To show formally that our arguments are valid we prove the following three lemmas.
Lemma A. The function g(x) = a2x − a3x² attains its maximal value a2²/(4a3) at x0 = a2/(2a3). If the inequality g(x0) ≤ 2/x0 (i.e. a2^(3/2) ≤ 4a3) is satisfied, then g(x0) < 2√(a3x0).
Proof. The determination of the maximum of g is straightforward. We have the following equivalent inequalities:
g(x0) ≤ 2/x0 ⟺ a2²/(4a3) ≤ 4a3/a2 ⟺ a2³ ≤ 16a3² ⟺ a2^(3/2) ≤ 4a3,
and
g(x0) ≤ 2√(a3x0) ⟺ a2²/(4a3) ≤ 2√(a2/2) ⟺ a2³ ≤ 32a3² ⟺ a2^(3/2) ≤ 4√2·a3.
Since 4a3 < 4√2·a3, the result follows.
Lemma B.
(1) If a2 ≤ 3a3^(2/3), then g(x) ≤ 2√(a3x) for any x > 0. The equality occurs only when a2 = 3a3^(2/3) and x = 1/∛a3 = μ.
(2) If a2 > 3a3^(2/3), then the following statements hold.
(a) The equation g(x) = 2√(a3x) has two roots ρ1 ∈ (0, μ) and ρ2 ∈ (μ, ∞).
(b) If x ∈ (0, ρ1] or x ∈ [ρ2, ∞), then g(x) ≤ 2√(a3x). If x ∈ (ρ1, ρ2), then g(x) > 2√(a3x).
(c) The roots ρ1 and ρ2 may be evaluated as ρ1 = θ1² and ρ2 = θ2², where θ1 < θ2 are the two positive real roots of the polynomial ρ(t) = a3t³ − a2t + 2√a3.
(d) As a function of a2, the first root ρ1(a2) monotonically decreases, and the second root ρ2(a2) monotonically increases.
(e) We have ρ1(a2) → 0 and ρ2(a2) → ∞ as a2 → ∞.
Proof. Consider the function q̃(x) = 2√(a3x) − g(x) = a3x² − a2x + 2√(a3x). For x > 0 the inequality q̃(x) ≥ 0 is equivalent to a3x^(3/2) − a2√x + 2√a3 ≥ 0. Denote t = √x. Then q̃(x) ≥ 0 is equivalent to ρ(t) = a3t³ − a2t + 2√a3 ≥ 0. The derivative of ρ(t) is ρ′(t) = 3a3t² − a2.
The function ρ(t) has a local minimum at t0 = √(a2/(3a3)) with
ρ(t0) = a3(a2/(3a3))^(3/2) − a2(a2/(3a3))^(1/2) + 2√a3 = −2a2^(3/2)/(3√(3a3)) + 2√a3.
It has a local maximum at t = −t0 with ρ(−t0) > ρ(0) = 2√a3 > 0. Therefore the function ρ monotonically decreases on [0, t0] and increases on [t0, ∞).
The condition ρ(t0) ≥ 0 is equivalent to
−2a2^(3/2)/(3√(3a3)) + 2√a3 ≥ 0 ⟺ a2^(3/2) ≤ 3√3·a3 ⟺ a2 ≤ 3a3^(2/3).
Similarly, the condition ρ(t0) = 0 is equivalent to a2 = 3a3^(2/3), and the condition ρ(t0) < 0 is equivalent to a2 > 3a3^(2/3). Therefore, if a2 < 3a3^(2/3), then ρ(t) > 0 for all t ≥ 0, and the equation ρ(t) = 0 has no positive solutions.
If a2 = 3a3^(2/3), then ρ(t) ≥ 0 for any t ≥ 0. Therefore the equation ρ(t) = 0 has the unique positive solution t0 = √(a2/(3a3)) = a3^(−1/6) = √μ.
If a2 > 3a3^(2/3), then ρ(t0) < 0. Since ρ is monotone on the intervals [0, t0] and [t0, ∞), the equation ρ(t) = 0 has two positive roots θ1 < θ2, and θ1 < t0 < θ2. In addition ρ(t) ≥ 0 on [0, θ1] ∪ [θ2, ∞) and ρ(t) < 0 on (θ1, θ2).
The numbers θ1 and θ2 are continuous functions of a2. To investigate their monotonicity, consider ρ(t) as a function of two variables a2 and t: let γ(a2, t) = a3t³ − a2t + 2√a3. Its partial derivatives with respect to a2 and t are ∂1γ(a2, t) = −t and ∂2γ(a2, t) = 3a3t² − a2. Since θ1(a2) < t0, we have ∂2γ(a2, θ1(a2)) < 0. By the Implicit Function Theorem
θ1′(a2) = −∂1γ(a2, θ1(a2))/∂2γ(a2, θ1(a2)) = θ1(a2)/∂2γ(a2, θ1(a2)) < 0.
Therefore θ1(a2) is decreasing. Furthermore, θ2(a2) > t0, so ∂2γ(a2, θ2(a2)) > 0, and by the Implicit Function Theorem
θ2′(a2) = θ2(a2)/∂2γ(a2, θ2(a2)) > 0.
Consequently, θ2(a2) is increasing.
Since θ1(a2) → √μ and θ2(a2) → √μ as a2 → 3a3^(2/3), for any a2 > 3a3^(2/3) we have θ1(a2) < √μ and θ2(a2) > √μ.
Since θ2(a2) > t0(a2) = √(a2/(3a3)), we get θ2(a2) → ∞ as a2 → ∞. Furthermore, ρ″(t) = 6a3t > 0 when t > 0, so the function ρ is convex for t > 0. Therefore
ρ(t) ≤ Λ(t) = ρ(0) + (ρ(t0) − ρ(0))·t/t0,  t ∈ [0, t0].
The root of the equation Λ(t) = 0 is
t1 = t0·ρ(0)/(ρ(0) − ρ(t0)) = 3√a3/a2.
Suppose that t1 < θ1. The function Λ(t) is decreasing, because ρ(t0) < 0 < ρ(0). Hence Λ(θ1) < Λ(t1) = 0, and ρ(θ1) ≤ Λ(θ1) < 0, a contradiction with ρ(θ1) = 0. Consequently, θ1(a2) ≤ 3√a3/a2, so θ1(a2) → 0 as a2 → ∞.
Since the function √x is increasing, the established properties of ρ(t) are transferred to the function q̃(x) = ρ(√x)·√x. In particular, the positive roots ρ1 < ρ2 of the equation q̃(x) = 0 are evaluated as ρ1 = θ1² and ρ2 = θ2².
Lemma C.
(1) If a2 ≤ (3/∛2)·a3^(2/3), then g(x) ≤ 2/x for any x > 0. The equality occurs only when a2 = (3/∛2)·a3^(2/3) and x = 2a2/(3a3) = ∛(4/a3).
(2) If a2 > (3/∛2)·a3^(2/3), then the following statements hold.
(a) The equation g(x) = 2/x has two roots η1 < η2 on the interval (0, ∞).
(b) If x ∈ (0, η1] ∪ [η2, ∞), then g(x) ≤ 2/x. If x ∈ (η1, η2), then g(x) > 2/x.
(c) As a function of a2, the root η1(a2) decreases monotonically, and the root η2(a2) increases monotonically.
(d) We have η1(a2) → 0 and η2(a2) → ∞ as a2 → ∞.
Proof. Let q(x) = 2/x − g(x) = a3x² − a2x + 2/x. For x > 0 the inequality q(x) ≥ 0 is equivalent to p(x) = a3x³ − a2x² + 2 ≥ 0. The derivative is p′(x) = 3a3x² − 2a2x. The function p has a local maximum at x = 0 and a local minimum at x = x1 = 2a2/(3a3), with

p(0) = 2 > 0,  p(x1) = a3(2a2/(3a3))³ − a2(2a2/(3a3))² + 2 = 2 − 4a2³/(27a3²).

The function p is monotonically decreasing on the interval [0, x1], and monotonically increasing on [x1, ∞). The condition p(x1) ≥ 0 is equivalent to

4a2³/(27a3²) ≤ 2  ⟺  2a2³ ≤ 27a3²  ⟺  a2 ≤ (3/∛2)·a3^(2/3).

Therefore p(x1) = 0 is equivalent to a2 = (3/∛2)·a3^(2/3), and p(x1) < 0 is equivalent to a2 > (3/∛2)·a3^(2/3).
Thus, if a2 < (3/∛2)·a3^(2/3), we have p(x) > 0 for any x > 0, and the equation p(x) = 0 has no positive solutions. If a2 = (3/∛2)·a3^(2/3), then p(x) ≥ 0 for any x > 0, and the equation p(x) = 0 has the unique positive solution x1 = 2a2/(3a3) = ∛(4/a3). If a2 > (3/∛2)·a3^(2/3), then p(x1) < 0. Because of the monotonicity of p on the intervals [0, x1] and [x1, ∞), the equation p(x) = 0 has two positive solutions: one of them is η1 < x1, and the other is η2 > x1. Furthermore, x ∈ [0, η1] ∪ [η2, ∞) implies p(x) ≥ 0, and x ∈ (η1, η2) implies p(x) < 0.
The proofs of the monotonicity of η1 and η2 as functions of a2, and of lim_{a2→∞} η2(a2) = ∞, are carried out as in the proof of Lemma B.
Claim. We have lim_{a2→∞} η1(a2) = 0.
To show this, consider the straight line passing through the points (0, 0) ∈ R² and (x0, g(x0)) = (a2/(2a3), a2²/(4a3)) ∈ R², the second point being the point of maximum on the graph of the function g(x) = a2x − a3x², with abscissa x0 = a2/(2a3). The equation of the line is y = (a2/2)x.
Let x2 > 0 be the abscissa of the intersection of this line with the hyperbola y = 2/x. Then x2 = 2/√a2. The inequality x2 ≤ x0 is equivalent to a2 ≥ (4a3)^(2/3), so x2 ≤ x0 for sufficiently large a2. Furthermore, if x2 ≤ x0, then g(x2) ≥ (a2/2)x2 = 2/x2, because g(x) is concave, so on [0, x0] its graph lies above this chord. Note that x0 < x1, and for sufficiently small x we have g(x) < 2/x. So the smaller positive root of the equation g(x) = 2/x lies between 0 and x2. Thus η1(a2) ≤ 2/√a2, which means that lim_{a2→∞} η1(a2) = 0. This completes the proof of the Claim, as well as the proof of Lemma C.
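Lemma C can also be checked numerically. The following sketch (sample values a3 = 2, a2 = 10, chosen only for illustration) verifies that the positive roots of p(x) = a3x³ − a2x² + 2 solve g(x) = 2/x, and that η1 obeys the bound η1 ≤ 2/√a2 from the Claim:

```python
import numpy as np

a3, a2 = 2.0, 10.0  # sample values; a2 exceeds (3/2**(1/3))*a3**(2/3) ≈ 3.78

# positive roots of p(x) = a3*x^3 - a2*x^2 + 2 are eta1 < eta2
roots = np.roots([a3, -a2, 0.0, 2.0])
eta1, eta2 = np.sort(roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real)

def g(x):
    return a2 * x - a3 * x**2

# each eta_i satisfies g(eta_i) = 2/eta_i
for eta in (eta1, eta2):
    assert abs(g(eta) - 2.0 / eta) < 1e-8

# bound from the Claim: eta1 <= 2/sqrt(a2), so eta1 -> 0 as a2 -> infinity
assert eta1 <= 2.0 / np.sqrt(a2)
```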
Now we continue with the proof of Theorem 2.3.
Proof of Theorem 2.3, Part II. If 0 < a2 ≤ ∛16·a3^(2/3), then, according to Lemma A, Pmax = a2²/(4a3).
Let a2 > ∛16·a3^(2/3). Since (3/∛2)·a3^(2/3) < ∛16·a3^(2/3), then, according to Lemma C, the equation g(x) = 2/x has two roots η1(a2) < η2(a2) on the interval (0, ∞). There are two possible alternative cases here: either η1(a2) ≥ ρ or η1(a2) < ρ. Since the function η1 is monotonically decreasing, the inequality η1(a2) ≥ ρ (respectively, η1(a2) < ρ) is equivalent to a2 ≤ η1^(−1)(ρ) (respectively, a2 > η1^(−1)(ρ)). The number η1^(−1)(ρ) is determined from the condition η1^(−1)(ρ)·ρ − a3ρ² = 2/ρ, that is, η1^(−1)(ρ) = 3a3^(2/3).
Let us consider the first case η1(a2) ≥ ρ, that is, a2 ≤ 3a3^(2/3). According to Lemma B, g(x) ≤ 2√(a3·x) for any x > 0, and we find Pmax subject to the only restriction g(x) ≤ 2/x. It follows from a2 > ∛16·a3^(2/3) that g(x0) > 2/x0. But for small x we have g(x) < 2/x, so η1(a2) < x0. The function g(x) is increasing on the interval [0, x0], so g(x) − 2/x is also increasing there, and η2(a2) > x0. Besides, for any x such that 0 < x ≤ η1(a2) we have g(x) ≤ g(η1(a2)) = 2/η1(a2). By Lemma C, x ∈ (η1(a2), η2(a2)) implies g(x) > 2/x. Furthermore, the inequality η1(a2) < η2(a2) implies that 2/η1(a2) > 2/η2(a2), so for any x ≥ η2(a2) we have

g(x) ≤ 2/x ≤ 2/η2(a2) < 2/η1(a2).

Consequently, the maximum of g(x) under the restrictions g(x) ≤ 2/x and g(x) ≤ 2√(a3·x) is 2/η1(a2). Thus Pmax = 2/η1(a2).
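This value of Pmax can be confirmed by a brute-force scan. The sketch below uses sample values a3 = 2, a2 = 4.5, chosen only for illustration; they lie in the present regime, since ∛16·a3^(2/3) = 4 and 3a3^(2/3) ≈ 4.76:

```python
import numpy as np

a3, a2 = 2.0, 4.5   # sample values inside the regime (4, 4.76]

def g(x):
    return a2 * x - a3 * x**2

# eta1 = smallest positive root of a3*x^3 - a2*x^2 + 2 = 0
roots = np.roots([a3, -a2, 0.0, 2.0])
eta1 = np.sort(roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real)[0]
P_pred = 2.0 / eta1

# maximize g over a grid of x satisfying both restrictions
x = np.linspace(1e-6, 5.0, 400001)
feasible = g(x) <= np.minimum(2.0 * np.sqrt(a3 * x), 2.0 / x) + 1e-9
P_num = g(x[feasible]).max()
assert abs(P_num - P_pred) < 1e-3
```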
Now consider the second case η1(a2) < ρ, that is, a2 > 3a3^(2/3). In this case g(η1(a2)) = 2/η1(a2) > 2√(a3·η1(a2)). By Lemma B, the equation g(x) = 2√(a3·x) has two roots ρ1(a2) < ρ2(a2) on the interval (0, ∞). Since g(x) ≤ a2x for any x > 0, we have g(x) < 2√(a3·x) for any x < 4a3/a2². Consequently, at least one of the roots ρ1(a2) < ρ2(a2) lies in the interval (0, η1(a2)), that is, 0 < ρ1(a2) < η1(a2).
Note that at the point x0 of the maximum of g(x) we have x0 = a2/(2a3) > (3/2)·a3^(−1/3) = (3/2)ρ. Therefore the function g(x) is increasing on [0, η1(a2)] ⊂ [0, (3/2)ρ]. A corollary of this is g(x) ≤ g(ρ1(a2)) for any x ∈ [0, ρ1(a2)]. Since, by Lemma B, ρ2(a2) > ρ, we have g(x) > 2√(a3·x) for any x ∈ (ρ1(a2), ρ]. Thus the maximal value of g(x) on the interval [0, ρ], subject to the restriction g(x) ≤ 2√(a3·x), is g(ρ1(a2)) = 2√(a3·ρ1(a2)). Since η2(a2) > x0, the maximal value of g(x) on the interval [ρ, ∞), subject to the restriction g(x) ≤ 2/x, is g(η2(a2)) = 2/η2(a2).
Claim. If a2 > 3a3^(2/3), then g(ρ1(a2)) > g(η2(a2)).
To prove it, suppose that one has found a2 > 3a3^(2/3) such that g(ρ1(a2)) ≤ g(η2(a2)). For a2 = 3a3^(2/3) we have g(ρ1(a2)) = g(η1(a2)) > g(η2(a2)). By the Intermediate Value Theorem, there exists a2 > 3a3^(2/3) such that g(ρ1(a2)) = g(η2(a2)).
For brevity, let us denote ρ1 = ρ1(a2) and η2 = η2(a2). We have

a2ρ1 − a3ρ1² = 2√(a3ρ1),  a2η2 − a3η2² = 2/η2,  2√(a3ρ1) = 2/η2.

Rewrite the first two equalities as a2ρ1 = a3ρ1² + 2√(a3ρ1) and a2η2 = a3η2² + 2/η2.
Eliminating a2 gives a3ρ1 + 2√(a3/ρ1) = a3η2 + 2/η2². Let t = √(a3ρ1). Then a3ρ1 = t², η2 = 1/t, √(a3/ρ1) = a3/t, and a3η2 = a3/t. The last equation can therefore be rewritten as t² + 2a3/t = a3/t + 2t², whence

t² = a3/t,  t = ∛a3,  ρ1 = η2 = a3^(−1/3),  and a2 = 3a3^(2/3).

The last equality contradicts the assumption a2 > 3a3^(2/3). This completes the proof of the Claim.
Thus, the maximal value of g(x) on the interval [0, ∞), subject to the restrictions g(x) ≤ 2√(a3·x) and g(x) ≤ 2/x, is Pmax = g(ρ1(a2)) = 2√(a3·ρ1(a2)).
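Again, this can be confirmed by brute force. In the sketch below the sample values a3 = 2, a2 = 6 are chosen only for illustration; they satisfy a2 > 3a3^(2/3) ≈ 4.76:

```python
import numpy as np

a3, a2 = 2.0, 6.0   # sample values with a2 > 3*a3**(2/3)

def g(x):
    return a2 * x - a3 * x**2

# rho1 = theta1^2, where theta1 is the smallest positive root of
# a3*t^3 - a2*t + 2*sqrt(a3) = 0 (the substitution t = sqrt(x) from Lemma B)
roots = np.roots([a3, 0.0, -a2, 2.0 * np.sqrt(a3)])
theta1 = np.sort(roots[(np.abs(roots.imag) < 1e-9) & (roots.real > 0)].real)[0]
rho1 = theta1**2
P_pred = 2.0 * np.sqrt(a3 * rho1)

# maximize g over a grid of x satisfying both restrictions
x = np.linspace(1e-6, 5.0, 400001)
feasible = g(x) <= np.minimum(2.0 * np.sqrt(a3 * x), 2.0 / x) + 1e-9
P_num = g(x[feasible]).max()
assert abs(P_num - P_pred) < 1e-3
```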
Having found Pmax, we evaluate the minimal abscissa and the corresponding value of α according to Lemma 3.4.
If 0 < a2 ≤ ∛16·a3^(2/3), then

x = a2/(2a3),  s = 2a3/a2,  χ(αmin) = −a2²/(8a3),  αmin = a2²/(4a3) + 2a3/a2.
Scientific notes. Electronic scientific journal of the Kursk State University. No 1(17), 2011.
If ∛16·a3^(2/3) < a2 ≤ 3a3^(2/3), then

x = η1(a2),  s = 1/η1(a2),  αmin = 3/η1(a2),  χ(αmin) = −1/η1(a2).
If a2 > 3a3^(2/3), then

x = ρ1(a2),  s = 1/ρ1(a2),  αmin = 1/ρ1(a2) + 2√(a3·ρ1(a2)) = (1 + 2√(a3·ρ1³(a2)))/ρ1(a2),
χ(αmin) = −√(a3·ρ1(a2)).
We have shown that if the point of minimum exists, then it is uniquely determined by the above formulas. The existence of such a point is proved in Lemma 5.3.
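As an end-to-end illustration (a sketch, assuming the family of Theorem 2.3 is f(α, λ) = λ³ + αλ² + a2λ + a3 with α varying; the sample values a2 = 3, a3 = 2 are chosen only for illustration and fall in the first regime), the closed-form minimum can be compared against a direct scan of the abscissa over α:

```python
import numpy as np

def abscissa(alpha, a2, a3):
    """Largest real part of the roots of lambda^3 + alpha*lambda^2 + a2*lambda + a3."""
    return np.roots([1.0, alpha, a2, a3]).real.max()

a2, a3 = 3.0, 2.0                                # first regime: a2 <= (16*a3**2)**(1/3) = 4
alpha_min = a2**2 / (4.0 * a3) + 2.0 * a3 / a2   # predicted singular value of alpha
chi_min = -a2**2 / (8.0 * a3)                    # predicted minimal abscissa

# at alpha_min the abscissa equals the predicted minimum ...
assert abs(abscissa(alpha_min, a2, a3) - chi_min) < 1e-7

# ... and no alpha on a coarse scan does better
scan = [abscissa(al, a2, a3) for al in np.linspace(0.0, 10.0, 2001)]
assert min(scan) >= chi_min - 1e-6
```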
6. Illustrations and Applications
To illustrate the results of Theorems 2.1-2.3 we build the graphs according to the following scheme.
For every problem we fix a numerical value of one unchanged parameter and use this value in all graphs illustrating the problem. For the other unchanged parameter (we call it the pseudo-varied parameter) we choose several numerical values, and for every value we depict the polynomial abscissa as a function of the varied parameter (Figures 2, 5, 8). When a combination of the parameters corresponds to a critical case, the abscissa graph is marked by a bold line.
Further, we depict the minimal abscissa (Figures 3, 6, 9) and the singular value (Figures 4, 7, 10) as functions of the pseudo-varied parameter.
The numerical values of the parameters are indicated in the figure captions. The critical values are marked by the subscript c.
For example, consider the polynomial family f(α, λ) = λ³ + a1λ² + a2λ + α. The graphs in Figure 2 show the dependence of the polynomial abscissa on the parameter α for various values of the coefficient a2, with a1 = 3. The critical value of a2 in this case is a2c = 3; the corresponding graph is shown by the bold line.
The graphs in Figure 2 show that for parameters a2 ≥ a2c = 3 the minimal abscissa is −1, attained at various singular values of α depending on a2. Figures 3 and 4 show this conclusion graphically.
Theorems 2.1 and 2.4 are the most important for Automatic Regulation theory. Let the system be governed by a second order linear differential equation in a neighborhood of the stationary process. Theorem 2.1 allows one to find the best parameter for the Integral regulator, and Theorem 2.4 allows one to find the two best parameters for the Proportional-Integral regulator.
In the case of the Proportional-Integral regulator, according to Theorem 2.4, the solution of the abscissa minimization problem is not unique, but one can impose an additional condition making it unique in the set S. This condition corresponds to the absence of oscillations in the transient process, i.e. the absence of complex roots of the polynomial corresponding to the chosen parameter values. This means that the pair (α, β) = (a1²/3, a1³/27), for which the polynomial has the triple real root −a1/3, is the best.
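The no-oscillation choice can be checked directly. The sketch below assumes the Proportional-Integral family has the form λ³ + a1λ² + αλ + β with both lower coefficients free; since the sum of the roots is −a1, the abscissa is at least −a1/3, with equality exactly when all real parts equal −a1/3, and the triple-root pair removes the complex roots:

```python
import numpy as np

a1 = 3.0
# triple-root parameters: (lambda + a1/3)^3 = lambda^3 + a1*lambda^2 + (a1^2/3)*lambda + a1^3/27
alpha, beta = a1**2 / 3.0, a1**3 / 27.0

roots = np.roots([1.0, a1, alpha, beta])
# all three roots coincide at -a1/3 = -1: minimal abscissa with no complex pair,
# hence no oscillation in the transient process (tolerances are loose because a
# numerically computed triple root is ill-conditioned)
assert np.abs(roots.imag).max() < 1e-3
assert abs(roots.real.max() + a1 / 3.0) < 1e-3
```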
Fig. 2. Polynomial abscissa as a function of α for the family f(α, λ) = λ³ + a1λ² + a2λ + α for various values of a2; a1 = 3, a2c = 3
Fig. 3. Minimal abscissa of the polynomial family f(α, λ) = λ³ + a1λ² + a2λ + α as a function of a2; a1 = 3, a2c = 3
Fig. 4. Singular value of α for the polynomial family f(α, λ) = λ³ + a1λ² + a2λ + α as a function of a2; a1 = 3, a2c = 3
Fig. 5. Polynomial abscissa as a function of α for the family f(α, λ) = λ³ + a1λ² + αλ + a3 for various values of a3; a1 = 3, a3c = 1
Fig. 6. Minimal abscissa of the polynomial family f(α, λ) = λ³ + a1λ² + αλ + a3 as a function of a3; a1 = 3, a3c = 1
Fig. 7. Singular value of α for the polynomial family f(α, λ) = λ³ + a1λ² + αλ + a3 as a function of a3; a1 = 3, a3c = 1
Fig. 8. Polynomial abscissa as a function of α for the family f(α, λ) = λ³ + αλ² + a2λ + a3 for various values of a2; a3 = 2, a2c0 = 4, a2c1 ≈ 4.76
Fig. 9. Minimal abscissa of the polynomial family f(α, λ) = λ³ + αλ² + a2λ + a3 as a function of a2; a3 = 2, a2c0 = 4, a2c1 ≈ 4.76
Fig. 10. Singular value of α for the polynomial family f(α, λ) = λ³ + αλ² + a2λ + a3 as a function of a2; a3 = 2, a2c0 = 4, a2c1 ≈ 4.76
7. Acknowledgments
The author thanks Professor Semion Gutman (University of Oklahoma) for his contributions that improved the presentation of the paper. The author also thanks Professor Michael Overton (New York University) for reading and recommending the paper.