Appl. Num. Anal. Comp. Math. 2, No. 3, 293–305 (2005) / DOI 10.1002/anac.200410052
Estimation of the Greatest Common Divisor of many polynomials
using hybrid computations performed by the ERES method
Dimitrios Christou ∗ and Marilena Mitrouli ∗∗
Department of Mathematics, University of Athens, Panepistimioupolis 15784, Athens, Greece
Received 5 November 2004, revised 10 August 2005, accepted 10 October 2005
Published online 22 November 2005
The computation of the Greatest Common Divisor (GCD) of a set of more than two polynomials is a non-generic problem. Iterative methods for computing the GCD of many polynomials that are based on the Euclidean algorithm can fail to produce accurate results when implemented in a software programming environment, and this failure is especially pronounced when floating-point data are used. The ERES method is an iterative matrix-based method which successfully evaluates an approximate GCD by performing row transformations and shifting on a matrix formed directly from the coefficients of the given polynomials. ERES handles any kind of real data. However, due to its iterative nature, it is extremely sensitive when performing floating-point operations. It succeeds in producing results with minimal error if floating-point and symbolic operations are combined. In the present paper we study the behavior of the ERES method using floating-point and exact symbolic arithmetic. The conclusions derived from our study are useful for any other algorithm involving extended matrix operations.
© 2005 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim
1 Introduction
The computation of the Greatest Common Divisor of a set of polynomials has occupied mathematicians for a long time, because the GCD of a set of polynomials plays an important role in many applications in mathematics and engineering. The existence of a common divisor of many polynomials is not always a trivial matter, and several numerical methods have been proposed for its computation. Some are based on Euclid's algorithm and its generalizations, while others are based on procedures involving matrices (matrix-based methods) [3, 5, 11].
Matrix-based methods have better performance and quite good numerical stability, especially in the case of large sets of polynomials. This category includes the methods that depend on the properties of generalized resultants [12]. Such methods tend to be computationally expensive because they create and work with very large matrices. The ERES method is an iterative matrix-based method, built on the invariance of the GCD of the original set under extended-row-equivalence and shifting operations, and it is very effective when working with large matrices [4, 5].
Consider a set of polynomials Πm,d = {pi(s) ∈ ℝ[s], i = 1, ..., m}, which has m elements, and denote by d the maximal degree of the polynomials of the set. ℝ[s] denotes the ring of polynomials with coefficients from the reals ℝ, and the GCD of Πm,d will be denoted by g(s). For any set Πm,d a vector representative pm(s) and a basis matrix Pm can be defined by

pm(s) = [p1(s), p2(s), ..., pm(s)]^t = [p0, p1, ..., pd] ed(s) = Pm ed(s)

where Pm ∈ ℝ^{m×(d+1)} is a matrix formed directly from the coefficients of the polynomials of the set and ed(s) = [1, s, ..., s^{d−1}, s^d]^t. If c is an integer for which p0 = p1 = · · · = p_{c−1} = 0 and p_c ≠ 0, then c = w(Πm,d) is called the order of Πm,d, and s^c is an elementary divisor of the GCD. The set is then considered a c-order set and will be called proper if c = 0, and non-proper if c ≥ 1.
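For illustration, the basis matrix can be assembled directly from coefficient lists. The sketch below uses Python with NumPy; the function names (`basis_matrix`, `order_of_set`) are ours, not the paper's.

```python
import numpy as np

def basis_matrix(polys):
    """Assemble the basis matrix Pm from coefficient lists given in
    ascending powers, p(s) = c[0] + c[1]*s + ... ; shorter rows are
    zero-padded up to the maximal degree d of the set."""
    d = max(len(c) for c in polys) - 1
    Pm = np.zeros((len(polys), d + 1))
    for i, c in enumerate(polys):
        Pm[i, :len(c)] = c
    return Pm

def order_of_set(Pm, tol=0.0):
    """Order c = w(Pi_{m,d}): index of the first column of Pm that is
    not identically zero; s^c is then an elementary divisor of the GCD."""
    for c in range(Pm.shape[1]):
        if np.any(np.abs(Pm[:, c]) > tol):
            return c
    return Pm.shape[1]

# {s^2 - 1, s^3 - s}: the second polynomial makes d = 3; the factor s of
# the second row alone does not raise the order of the whole set
Pm = basis_matrix([[-1, 0, 1], [0, -1, 0, 1]])
# Pm = [[-1, 0, 1, 0], [0, -1, 0, 1]], order_of_set(Pm) = 0 (proper set)
```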
∗ Corresponding author: e-mail: dchrist@math.uoa.gr
∗∗ e-mail: mmitroul@cc.uoa.gr
Definition 1.1 On any set Πm,d, and hence on its basis matrix Pm, the following operations can be defined:
(i) Elementary row operations with scalars from ℝ.
(ii) Addition or elimination of zero rows of Pm.
(iii) If r^t = [0, ..., 0, a_k, ..., a_{d+1}] ∈ ℝ^{d+1}, a_k ≠ 0, is a row of Pm, then we define the shifting operation shf by shf(r^t) = r^{∗t} = [a_k, ..., a_{d+1}, 0, ..., 0] ∈ ℝ^{d+1}.
By shf(Πm,d) = Π∗m,d we shall denote the set obtained from Πm,d by applying shifting to every row of Pm.
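Definition 1.1(iii) translates almost verbatim into code. The following Python/NumPy sketch (function signature ours) applies shf to a single coefficient row:

```python
import numpy as np

def shf(row, tol=0.0):
    """Shifting operation of Definition 1.1(iii): move the leading zeros
    of a coefficient row to its tail.  On the polynomial level this
    divides the row polynomial by s^k, where k is the position of the
    first entry with |a_k| > tol."""
    row = np.asarray(row, dtype=float)
    nz = np.flatnonzero(np.abs(row) > tol)
    if nz.size == 0:                 # an all-zero row is left unchanged
        return row.copy()
    k = int(nz[0])
    return np.concatenate([row[k:], np.zeros(k)])

# shf([0, 0, 2, 5]) -> [2, 5, 0, 0]   (2s^2 + 5s^3 becomes 2 + 5s)
```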
The combination of the three types of operations described in Definition 1.1 is referred to as Extended-Row-Equivalence and Shifting (ERES). These operations on Pm have an interpretation on the set of polynomials.
Remark 1.2 The shifting operation, applied either to vectors or to matrices, can be implemented very easily in any programming environment. In theory, however, the shifting operation applied to a matrix remains an open subject. Matrix theory does not provide enough tools to establish a clear mathematical relation between a matrix and its shifted form. Specifically, we are interested in expressing matrix shifting as a matrix product, which would be very useful in the study of the overall error analysis of the ERES method.
The following theorem summarizes a number of properties of the GCD under the above transformations [4].
Theorem 1.3 For any set Πm,d with basis matrix Pm of rank ρ(Pm) = r and gcd(Πm,d) = g(s), we have the following properties:
(i) If R is the row space of Pm, then g(s) remains invariant under elementary row operations on Pm, i.e. g(s) is an invariant of R. Furthermore, if r = dim(R) = d + 1, then g(s) = 1.
(ii) If w(Πm,d) = c ≥ 1, shf(Πm,d) = Π∗m,d and gcd(Π∗m,d) = g∗(s), then g(s) = s^c g∗(s).
(iii) If Πm,d is proper (c = 0), then g(s) is invariant under the combined ERES set of operations.
The above results form the basis for the ERES methodology, which is developed in [4]; a comparison of the ERES method with other existing methodologies can be found in [5]. In the present paper we study extensively the behavior of the ERES method using floating-point or exact symbolic arithmetic. More specifically, we are concerned with ERES's capability of computing the existing non-trivial GCD of a given set of univariate polynomials with minimal numerical error. From this comparison of behavior we derive useful remarks, which help us obtain the best performance from the method and build a framework for combining numerical and symbolical computations.
2 Implementation of the ERES algorithm
The construction of the ERES algorithm is based on algebraic processes, which are applied iteratively to a matrix formed directly from the coefficients of the polynomials of the original set. We shall denote by Pm^(k) the matrix which occurs after the k-th iteration and is used during the (k+1)-th iteration of the algorithm (thus Pm^(0) ≡ Pm). The output is a vector g ∈ ℝ^n containing the coefficients of the GCD.
The main target of the ERES algorithm is to reduce the number of rows of the initial basis matrix Pm and finally to end up with a rank-1 matrix, which contains the coefficients of the GCD. Indeed, by combining Gaussian elimination with partial pivoting and shifting, and by selecting an appropriate numerical accuracy εG such that all elements of the matrix Pm^(k) with values less than εG are set equal to zero, we succeed in reducing the size of the matrix Pm^(k) during the iterations.
(k)
On the other hand, the Singular Value Decomposition of Pm provides the ERES algorithm with a termination
criterion. This criterion is described in the following theorem [4, 5] :
Theorem 2.1 (Termination Criterion) Let A = [r1, ..., rm]^t ∈ ℝ^{m×n}, m ≤ n, ri ≠ 0, i = 1, ..., m. Then, for an appropriate accuracy εt > 0, the numerical εt-rank of A equals one (ρεt(A) = 1) if and only if the singular values σm ≤ σm−1 ≤ · · · ≤ σ1 of the normalization AN = [u1, ..., um]^t ∈ ℝ^{m×n}, ui = ri/‖ri‖2, of A satisfy the conditions |σ1 − √m| ≤ εt and σi ≤ εt, i = 2, 3, ..., m.
The variables εt and εG will be referred to as the termination accuracy and the Gaussian accuracy, respectively. It is more appropriate to apply the SVD termination criterion to a normalized matrix Pm^(k) when the polynomials corresponding to the rows of the matrix have the same degree, i.e. the last non-zero column of Pm^(k) contains no zero elements. Therefore, if a rank-1 matrix Pm^(k) occurs after the k-th iteration, the process stops. The termination criterion described in Theorem 2.1 is used to check when the numerical rank of a matrix is considered equal to 1.
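Theorem 2.1 can be checked numerically in a few lines. The Python/NumPy sketch below (names and default tolerance are ours) normalizes the rows and tests the two singular-value conditions:

```python
import numpy as np

def is_rank_one(A, eps_t=1e-12):
    """Numerical eps_t-rank-1 test of Theorem 2.1 (illustrative sketch).

    Each row of A is normalized by its 2-norm; A has numerical rank 1
    iff |sigma_1 - sqrt(m)| <= eps_t and sigma_i <= eps_t for i >= 2."""
    m = A.shape[0]
    An = A / np.linalg.norm(A, axis=1, keepdims=True)
    sv = np.linalg.svd(An, compute_uv=False)
    return bool(abs(sv[0] - np.sqrt(m)) <= eps_t and np.all(sv[1:] <= eps_t))

# Proportional rows pass the test:
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])
# is_rank_one(A) -> True; for np.eye(2) the test returns False
```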
The ERES algorithm.
Form the basis matrix Pm ∈ ℝ^{m×n}, n = d + 1.
Check whether Pm is proper or non-proper (specify c).
If c ≥ 1 then P := shf(Pm) else P := Pm. (Initial matrix)
Let r be the row dimension of P. (Initial value r := m)
Let q be the column dimension of P. (Initial value q := n)
Repeat
  STEP 1:
    Specify the degree di of each polynomial row.
    Reorder matrix P such that d1 ≤ d2 ≤ · · · ≤ dr.
    If di = dj ∀ i, j = 1, 2, ..., r then
      Compute the Singular Value Decomposition of P:
      P = V Σ W^t, Σ = diag{σ1, ..., σr}, W = [w1, ..., wn]
      If |σ1 − √r| ≤ εt then
        Select appropriately the GCD vector g.
        quit
      end if
    end if
  STEP 2:
    Scale matrix P appropriately, such that |P1,1| > |Pi,1| for i = 2, ..., r.
    Apply Gaussian elimination with partial pivoting to P,
    eliminating entries and rows with values ≤ εG.
    Specify the new value of r.
  STEP 3:
    Normalize each row of P using the vector 2-norm.
    Apply shifting to every row of P.
    Eliminate the last zero columns of P.
    Specify the new value of q.
until r = 1
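The steps above can be sketched in floating-point Python/NumPy. This is our illustrative reading of the published algorithm, not the authors' Maple code; in particular, the scaling/pivoting and the tolerance handling are simplified.

```python
import numpy as np

def eres_gcd(polys, eps_t=1e-12, eps_g=1e-12, max_iter=50):
    """Floating-point sketch of the ERES iteration (Steps 1-3).

    `polys` holds coefficient rows in ascending powers of s.  Each pass
    sorts the rows by degree, tries the SVD termination criterion of
    Theorem 2.1, reduces P to row-echelon form with the lowest-degree
    row as first pivot (entries/rows below eps_g are discarded), then
    normalizes, shifts and trims trailing zero columns.  Returns the
    monic GCD coefficients in ascending powers."""
    d = max(len(p) for p in polys) - 1
    P = np.zeros((len(polys), d + 1))
    for i, p in enumerate(polys):
        P[i, :len(p)] = p

    def shift(M):        # Definition 1.1(iii) applied to every row
        return np.array([np.roll(r, -int(np.flatnonzero(np.abs(r) > eps_g)[0]))
                         for r in M])

    # order c of the set: common power of s shared by all polynomials
    c_order = int(min(np.flatnonzero(np.abs(r) > eps_g)[0] for r in P))
    P = shift(P)         # a non-proper set is made proper first
    for _ in range(max_iter):
        # STEP 1: sort rows by degree and test for numerical rank 1
        degs = [int(np.flatnonzero(np.abs(r) > eps_g)[-1]) for r in P]
        P = P[np.argsort(degs)]
        if P.shape[0] == 1 or len(set(degs)) == 1:
            An = P / np.linalg.norm(P, axis=1, keepdims=True)
            _, sv, Wt = np.linalg.svd(An)
            if abs(sv[0] - np.sqrt(P.shape[0])) <= eps_t and np.all(sv[1:] <= eps_t):
                g = Wt[0]        # Proposition 2.2: best row representative
                break
        # STEP 2: row-echelon reduction, lowest-degree row as first pivot
        P[1:] -= np.outer(P[1:, 0] / P[0, 0], P[0])
        row = 1
        for col in range(1, P.shape[1]):
            if row >= P.shape[0]:
                break
            piv = row + int(np.argmax(np.abs(P[row:, col])))
            if abs(P[piv, col]) <= eps_g:
                continue
            P[[row, piv]] = P[[piv, row]]
            P[row + 1:] -= np.outer(P[row + 1:, col] / P[row, col], P[row])
            row += 1
        P[np.abs(P) <= eps_g] = 0.0
        P = P[np.any(P != 0.0, axis=1)]          # drop eliminated rows
        # STEP 3: normalize rows, shift, trim trailing zero columns
        P = P / np.linalg.norm(P, axis=1, keepdims=True)
        P = shift(P)
        P = P[:, :int(np.flatnonzero(np.any(P != 0.0, axis=0))[-1]) + 1]
    else:
        raise RuntimeError("no convergence within max_iter iterations")
    lead = int(np.flatnonzero(np.abs(g) > eps_t)[-1])
    g = g[:lead + 1] / g[lead]                     # trim noise tail, make monic
    return np.concatenate([np.zeros(c_order), g])  # restore s^c (Theorem 1.3(ii))

# The set of Example 2.3, coefficients in ascending powers of s:
polys = [[-5, -1, 6, 1, -1],   # -s^4 + s^3 + 6s^2 - s - 5
         [-3, -1, 3, 1],       #  s^3 + 3s^2 - s - 3
         [1, 2, 0, -2, -1],    # -s^4 - 2s^3 + 2s + 1
         [-1, 0, 0, 0, 1],     #  s^4 - 1
         [1, 1, 0, -1, -1]]    # -s^4 - s^3 + s + 1
g = eres_gcd(polys)            # close to [-1, 0, 1], i.e. g(s) = s^2 - 1
```

For a non-proper set the factor s^c removed by the initial shifting is restored at the end, following Theorem 1.3(ii).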
The ERES algorithm produces either a single row-vector or a rank-1 matrix. In the first case, the resulting row-vector contains the coefficients of the GCD. If a rank-1 matrix occurs, however, the selection of the most appropriate GCD vector is not trivial. We can pick any row of this matrix as the GCD vector, since the matrix is normalized, or take advantage of the SVD according to the following proposition [5]:
Proposition 2.2 Let A = V · Σ · W^t be the singular value decomposition of a given matrix A ∈ ℝ^{m×n}, ρ(A) = 1. Then a "best" rank-1 approximation to A in the Frobenius norm is given by A1 = σ1 · u · w^t, where σ1 is the largest singular value of A, and u and w are the first columns of the orthogonal matrices V and W of the singular value decomposition of A, respectively. The vector w is then the "best" representative of the rows of matrix A in the sense of the rank-1 approximation.
Example 2.3 Consider the following set of polynomials:

Π5,4 = { p1(s) = −s⁴ + s³ + 6s² − s − 5,
         p2(s) = s³ + 3s² − s − 3,
         p3(s) = −s⁴ − 2s³ + 2s + 1,
         p4(s) = s⁴ − 1,
         p5(s) = −s⁴ − s³ + s + 1 },          m = 5, d = 4,

with exact GCD g(s) = s² − 1. Let εt = εG = 10⁻¹⁶. The initial basis matrix Pm is proper (c = 0) and has the following form:

             ⎡ −5  −1   6   1  −1 ⎤
             ⎢ −3  −1   3   1   0 ⎥
Pm ≡ Pm^(0) = ⎢  1   2   0  −2  −1 ⎥ ∈ ℝ^{5×5},   rank(Pm^(0)) = 3
             ⎢ −1   0   0   0   1 ⎥
             ⎣  1   1   0  −1  −1 ⎦
After the first iteration of the ERES algorithm, we obtain a matrix Pm^(1) ∈ ℝ^{3×4}, with normalized rows involving radical entries and rank(Pm^(1)) = 3.
After the second iteration of the ERES algorithm, we obtain the matrix:

Pm^(2) = ⎡  √2/2   0  −√2/2 ⎤ ∈ ℝ^{2×3},   rank(Pm^(2)) = 1
         ⎣ −√2/2   0   √2/2 ⎦
At this point the procedure stops, according to the SVD termination criterion and the selected numerical accuracies. By using the result of Proposition 2.2 (or by selecting any row of Pm^(2)), the GCD vector is g = [−√2/2, 0, √2/2], which leads to the polynomial s² − 1 if we divide the elements of g by √2/2.
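The exact GCD of this example can be cross-checked with exact rational arithmetic (the "ES operations" of Section 3). The sketch below uses Python's stdlib `Fraction` and the classical Euclidean algorithm as an independent check; it is not the ERES method itself, and the helper names are ours.

```python
from fractions import Fraction
from functools import reduce

def poly_mod(a, b):
    """Remainder of a(s) mod b(s); coefficients ascending, exact."""
    a = a[:]
    while len(a) >= len(b):
        q = a[-1] / b[-1]
        k = len(a) - len(b)
        for i, bi in enumerate(b):
            a[k + i] -= q * bi
        while a and a[-1] == 0:      # strip the cancelled leading terms
            a.pop()
    return a

def poly_gcd(a, b):
    """Euclidean algorithm over the rationals; the result is monic."""
    a, b = [list(map(Fraction, p)) for p in (a, b)]
    while b:
        a, b = b, poly_mod(a, b)
    return [c / a[-1] for c in a]

# The five polynomials of Example 2.3, ascending powers of s:
polys = [[-5, -1, 6, 1, -1], [-3, -1, 3, 1], [1, 2, 0, -2, -1],
         [-1, 0, 0, 0, 1], [1, 1, 0, -1, -1]]
g = reduce(poly_gcd, polys)
# g == [Fraction(-1), Fraction(0), Fraction(1)], i.e. g(s) = s^2 - 1
```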
Computational complexity. For a set of polynomials Πm,d, the number of floating-point operations performed in the k-th iteration of the algorithm depends on the size of the matrix Pm^(k); it is summarized in Table 1 [5]. The first iteration is the most computationally expensive, since the initial matrix Pm^(0) is larger than any Pm^(k). Unless we know the degree of the GCD of the set exactly, we cannot specify from the beginning the number of iterations required by the algorithm. The following theoretical bound can be given for the number Nit of required iterations:

Nit ≤ min { Σ_{i=1}^{m} d_i , 2(d − deg g(s)) }
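This iteration bound is straightforward to evaluate; a small helper (name ours) illustrates it on the set of Example 2.3:

```python
def eres_iteration_bound(degrees, gcd_degree):
    """Theoretical bound on the number of ERES iterations,
    N_it <= min( sum_i d_i , 2*(d - deg g) ),  d = max degree."""
    return min(sum(degrees), 2 * (max(degrees) - gcd_degree))

# Example 2.3: degrees {4, 3, 4, 4, 4} and deg g = 2 give the bound 4;
# the run in Example 2.3 indeed stopped after 2 <= 4 iterations
```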
Table 1 Required floating-point operations for the matrix Pm^(k) ∈ ℝ^{r×q}

Gaussian elimination : O(z³/3), z = min{r − 1, q}
Normalization        : O(2rq)
SVD                  : O(rq² + q³)
Numerical stability. From a theoretical point of view, the following theorem establishes the numerical stability of each iteration step of the method [5]:

Theorem 2.4 The matrix Pm^(k) ∈ ℝ^{m×n}, computed by the method in the k-th iteration using floating-point arithmetic with unit round-off u, satisfies the properties

M^(k) · Pm^(k) = Pm^(k−1) + E^(k),    ‖E^(k)‖∞ ≤ 3.003 · (Nit − 1) · n³ · u,

where M^(k) ∈ ℝ^{m×m} is the matrix accounting for the performed transformations and E^(k) ∈ ℝ^{m×n} is the error matrix.
Main advantages of the ERES algorithm.
• Quick reduction of the size of the original matrix, which leads to fast data processing and low memory consumption [5].
• Numerical stability: the normalization of Pm^(k) keeps the elements bounded.
3 Numerical and Symbolical behavior of the ERES method
3.1 Performance of ERES method using floating-point arithmetic
The ERES method shows very interesting numerical properties when properly implemented in a software programming environment. The careful selection of the accuracies εG and εt is crucial, since they both influence the correctness of the achieved results. Any matrix-based method requires appropriate accuracies for its implementation. When row operations are involved (e.g. resultant methods [12]), accuracies such as εG are required; when the method is iterative in nature (e.g. the Blankinship method [5]), accuracies such as εt are required. Most of the mathematical software packages available today use fixed-precision and floating-point arithmetic with a certain number of digits (which, in the case of floating-point arithmetic, form the mantissa of a real number). In the following we shall be concerned with floating-point and exact symbolic arithmetic, and throughout the paper we shall use the following notation:
Notation
SFP : Standard Floating-Point arithmetic, where the internal accuracy of the system is fixed and often limited to 16 digits (double precision)¹.
VFP : Variable Floating-Point arithmetic, where the internal accuracy of the system is determined by the user (variable precision)².
ES : Exact Symbolic arithmetic, where the system uses rational arithmetic to perform the operations.
HC : Hybrid Computations, the combination of VFP and ES operations.
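The three kinds of arithmetic have direct stand-ins in many environments. The paper itself works in Maple; as an illustrative analogy only, the Python sketch below uses the stdlib `decimal` module for VFP (its precision plays the role of Digits) and `Fraction` for ES, and shows how the round-off residue of a simple computation changes with each:

```python
from decimal import Decimal, getcontext   # VFP stand-in: user-set precision
from fractions import Fraction            # ES stand-in: exact rationals

# SFP: hardware doubles, ~16 significant digits; round-off is unavoidable
x_sfp = 0.1 + 0.2 - 0.3                   # -> about 5.55e-17, not 0

# VFP: the working precision plays the role of Maple's Digits variable
getcontext().prec = 30                    # "Digits := 30"
x_vfp = Decimal(1) / Decimal(3) * 3 - 1   # -> -1E-30: error at the chosen level

# ES: rational arithmetic leaves no residue at all
x_es = Fraction(1, 3) * 3 - 1             # -> 0 exactly
```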
In the following we shall show that, when standard floating-point arithmetic is used, the internal accuracy of the system is not always enough for the ERES method to produce good results; Hybrid Computations, on the other hand, prove to be more appropriate.
Example 3.1 We consider a set Φ1 = Π11,20 of 11 polynomials with integer coefficients (max. 3 digits), maximum degree 20, and exact GCD g(s) = s³ + 3s² + 4s + 2 [4, 5]. Next, for this polynomial set, we study three cases of behavior of the ERES method using the types of arithmetic described above. The results are presented in Tables 2, 3 and 4. The relative errors were estimated using the Frobenius norm.
Generally, the accuracies εG and εt do not have the same value, and their selection seems to depend strongly on the nature of the given data. Furthermore, there is no clear relation between the values we assign to them

¹ It is also referred to as hardware floating-point precision.
² It is also referred to as software floating-point precision.
Table 2 SFP behavior of the ERES algorithm for the set Φ1

Type of data   : float
Accuracies     : double precision, Digits = 15, εt = 10⁻¹⁵, εG = 10⁻¹⁵
Resulted GCD   : 1.0

Type of data   : float
Accuracies     : double precision, Digits = 15, εt = 10⁻¹⁵, εG = 10⁻⁸
Resulted GCD   : 1.0 s³ + 2.99999999608220 s² + 3.99999999215816 s + 1.99999999215450
Relative error : 0.215 · 10⁻⁸

Type of data   : float
Accuracies     : double precision, Digits = 15, εt = 10⁻⁸, εG = 10⁻⁸
Resulted GCD   : 1.0 s³ + 2.99999996640441 s² + 3.99999993280994 s + 1.99999993280184
Relative error : 0.184 · 10⁻⁷
and the properties or the structure of the basis matrix Pm. Since the selection of the most appropriate values for the Gaussian accuracy εG and the termination accuracy εt is non-trivial, we have to pay attention to the type of the initial data (e.g. integers, decimals, fractions, radicals etc.) and to the kind of operations performed throughout the whole process of computing the GCD of a given set of polynomials Πm,d.
The ERES method is iterative with no fixed number of iterations and, being matrix-based, often deals with large amounts of data. If the initial data have a standard floating-point format, then the performed operations also yield results in standard floating-point format, which causes many round-off errors to occur, especially over many iterations. This often leads to results of very low accuracy (see Table 2). The number of digits used by the system for the internal representation of the data also plays an important role. Thus, when working with ERES, it is preferable to use VFP arithmetic, which allows the user to determine the number of digits used for the internal representation of the data (variable precision). We shall refer to this kind of accuracy as the software accuracy, denoted by the term Digits³.
Various tests on several polynomial sets with floating-point data in VFP arithmetic showed that quite satisfactory results can be obtained from the ERES method if we allow the system to perform the internal operations with more than 20 digits of accuracy (i.e. Digits ≥ 20) and simultaneously assign large values to the accuracies εt and εG (see Table 3). It has also been observed that it is not worthwhile to assign values less than 10⁻¹⁶ to εt and εG. Furthermore, an upper bound for the termination accuracy εt can be obtained by observing the values that the termination criterion of the algorithm takes during the computation of the GCD. This works especially well when the accuracy εG does not affect the data during the matrix transformations, and the algorithm can be slightly modified to detect a proper value for the termination accuracy by itself. In most cases, the Gaussian accuracy εG can be deduced from the software accuracy. Generally, the quality of the results obtained by the ERES algorithm can be controlled through the variable Digits, which determines the software accuracy. However, there is no way to know in advance the proper value of Digits for an accurate computation, and there are sets of polynomials for which this value must be quite large, which costs in terms of memory and time.
3.2 Performance of ERES method using exact symbolic arithmetic
Another option is to use ES arithmetic for the operations performed within the ERES algorithm. Symbolic operations in rational arithmetic always yield exact results, and it is preferable to make use of them

³ The variable Digits is common amongst various software packages for mathematical applications, such as Maple, Mathematica, Matlab and others.
Table 3 VFP behavior of the ERES algorithm for the set Φ1
Type of data   : float
Accuracies     : variable precision, Digits = 24, εt = 10⁻¹⁶, εG = 10⁻¹⁶
Resulted GCD   : 1.0 s³ + 2.99999999999999999998275 s² + 3.99999999999999999996562 s + 1.99999999999999999996565
Relative error : 0.942 · 10⁻²⁰

Type of data   : float
Accuracies     : variable precision, Digits = 23, εt = 10⁻¹⁶, εG = 10⁻¹⁶
Resulted GCD   : 1.0

Type of data   : float
Accuracies     : variable precision, Digits = 23, εt = 10⁻¹⁶, εG = 10⁻¹²
Resulted GCD   : 1.0 s³ + 2.9999999999999997573495 s² + 3.9999999999999995261066 s + 1.9999999999999995238086
Relative error : 0.130 · 10⁻¹⁵

Type of data   : float
Accuracies     : variable precision, Digits = 20, εt = 10⁻¹⁶, εG = 10⁻¹²
Resulted GCD   : 1.0 s³ + 3.0000000000001436500 s² + 4.0000000000002876900 s + 2.0000000000002882608
Relative error : 0.788 · 10⁻¹³

Type of data   : float
Accuracies     : variable precision, Digits = 18, εt = 10⁻¹², εG = 10⁻⁸
Resulted GCD   : 1.0 s + 1.00000204413857072
Relative error : 0.816
when our initial data are of integer, fraction or radical type. For the symbolic implementation of the ERES algorithm, any symbolic package can be used, such as Maple⁴ or Mathematica.
Table 4 ES behavior of the ERES algorithm for the set Φ1

Type of data   : rational
Accuracies     : symbolic, εt ≤ 10⁻¹⁶, εG ≤ 10⁻¹⁶
Resulted GCD   : s³ + 3 s² + 4 s + 2
Relative error : 0
The following examples show clearly that the ERES method yields better results when using symbolic operations.
⁴ All the results presented in this paper were obtained using Maple 8 on an Intel Pentium 4 PC, 2.0 GHz, 512 MB RAM. The source code of the ERES algorithm and the examples of this paper are available from the corresponding author.
Example 3.2 We consider a set Φ2 = Π2,16 of 2 polynomials with integer coefficients (max. 8 digits), maximum degree 16, and exact GCD g(s) = s⁴ − 10s³ + 35s² − 50s + 24 [3, 5]. The results are presented in Table 5.
Table 5 VFP and ES behavior of the ERES algorithm for the set Φ2

Type of data      : rational
Accuracies        : symbolic, εt ≤ 10⁻¹⁶, εG ≤ 10⁻¹⁶
Resulted GCD      : s⁴ − 10 s³ + 35 s² − 50 s + 24
Relative error    : 0
Iterations / Time : 23 / 2.969 sec

Type of data      : rational
Accuracies        : variable precision, Digits = 22, εt = 10⁻¹⁶, εG = 10⁻¹⁶
Resulted GCD      : 1.0 s⁴ − 10.00000000006499972615 s³ + 35.00000000058529628077 s² − 50.00000000169210353285 s + 24.00000000156365855379
Relative error    : 0.358 · 10⁻¹⁰
Iterations / Time : 23 / 0.359 sec
Example 3.3 We construct from Φ2 a new set Φ3 = Π100,16 of 100 linearly dependent polynomials. For the set Φ3 of polynomials with integer (max. 10 digits) or real-decimal coefficients and exact GCD g(s) = s⁴ − 10s³ + 35s² − 50s + 24, we study two cases of behavior, presented in Tables 6 and 7, which clearly show the difference between VFP and HC operations.
Table 6 VFP behavior of the ERES algorithm for the set Φ3

Type of data      : rational, float
Accuracies        : variable precision, Digits = 20, εt = 10⁻¹⁶, εG = 10⁻¹⁶
Resulted GCD      : 1.0

Type of data      : rational, float
Accuracies        : variable precision, Digits = 20, εt = 10⁻¹², εG = 10⁻¹⁶
Resulted GCD      : 1.0 s⁴ − 10.000000235029354837 s³ + 35.000002115340572177 s² − 50.000006111297186974 s + 24.000005641618034996
Relative error    : 0.129 · 10⁻⁶
Iterations / Time : 25 / 1.156 sec

Type of data      : rational, float
Accuracies        : variable precision, Digits = 22, εt = 10⁻¹⁶, εG = 10⁻¹⁶
Resulted GCD      : 1.0 s⁴ − 10.00000002242983684635 s³ + 35.00000020186794424622 s² − 50.00000058317165039951 s + 24.00000053830905097638
Relative error    : 0.123 · 10⁻⁷
Iterations / Time : 25 / 1.234 sec
Table 7 HC behavior of the ERES algorithm for the set Φ3
Type of data      : rational, float
Accuracies        : hybrid computations, Digits = 20, εt ≤ 10⁻¹⁶, εG ≤ 10⁻¹⁶
Resulted GCD      : s⁴ − 10 s³ + 35 s² − 50 s + 24
Relative error    : 0
Iterations / Time : 25 / 5.453 sec
We can see that there is greater numerical error when using only VFP arithmetic.
3.3 Estimation of approximate GCD’s with the ERES method
As we have already mentioned, the ERES method produces good results either by using variable-precision operations with enough digits or by using exact symbolic operations. The question, however, is how "good" these results are; in other words, is the GCD obtained by ERES a good approximation of the true GCD? The numerical computation of the exact GCD is an ill-posed problem, in the sense that arbitrarily tiny perturbations reduce a non-trivial GCD to the constant 1. The proper definition of the "approximate" GCD [7, 9, 10], and the way to measure the strength of the approximation, have remained open. Regarding ERES, different values of the specified accuracies εt and εG often lead to the computation of a common divisor of the set of polynomials which is not the greatest common divisor (see Table 3, last case). This can be a real problem, since we do not know the GCD of a given set of polynomials in advance. Unless a mathematical tool to check the strength of the approximation of the obtained GCD is available, we ought to double-check the results by assigning more values to the accuracies εt and εG or to the variable Digits.
However, the ERES method has proved to be a quite useful mathematical tool for computing an approximate GCD of sets of polynomials with almost equal root clusters. The polynomials of such sets are theoretically co-prime. By selecting a proper value for the accuracy εt, which in most cases must be ≥ 10⁻⁸, we often obtain as a GCD a new polynomial which is approximately equal to the least-degree polynomial of the original set (Example 3.5).
Example 3.4 We consider a set Φ4 = Π4,12 of polynomials with real-decimal or integer coefficients. Two of the polynomials have GCD g1(s) = s² − 3s + 2 and the other two have GCD g2(s) = s² − 2.999999989 s + 1.999999979. Strictly speaking, the polynomials of the set are co-prime. The results obtained from the ERES method are presented in Table 8.
Table 8 VFP and HC behavior of the ERES algorithm for the set Φ4

Type of data      : rational, float
Accuracies        : variable precision, Digits = 20, εt = 10⁻⁸, εG = 10⁻⁸
Approximate GCD   : 1.0 s² − 2.9999999915637740963 s + 1.9999999842285569510
Iterations / Time : 10 / 0.259 sec

Type of data      : rational, float
Accuracies        : hybrid computations, Digits = 20, εt = 10⁻¹⁶, εG = 10⁻⁸
Approximate GCD   : 1.0 s² − 2.9999999843558725734 s + 1.9999999688507700303
Iterations / Time : 9 / 0.422 sec
Example 3.5 We consider the set Φ5 = Π3,2 = {s² + 3s + 2, s² − 2.998s + 1.997001, 4s² − 11.996s + 7.994001}, where the polynomials have approximately equal root clusters. Theoretically, the exact GCD is g(s) = 1. The results obtained from the ERES method are presented in Table 9.
Table 9 VFP and HC behavior of the ERES algorithm for the set Φ5

Type of data    : rational, float
Accuracies      : variable precision, Digits = 20, εt = 10⁻³, εG = 10⁻¹⁶
Approximate GCD : 1.0 s² − 2.9989997142432029971 s + 1.9984999881743550886

Type of data    : rational, float
Accuracies      : hybrid computations, Digits = 20, εt = 10⁻³, εG = 10⁻¹⁶
Approximate GCD : 1.0 s² − 2.9989997142432029967 s + 1.9984999881743550885
Generally, the ERES method always succeeds in computing an approximation of the GCD of a given set of polynomials, and this important feature is not common amongst other iterative methods based on the Euclidean algorithm.
3.4 Computational Results
Notation In the next tables we shall use the following notation:

m      : the number of polynomials
d      : the maximal degree of the polynomials
d0     : the degree of the GCD
Iter   : the number of iterations of the algorithm
Rel    : the relative error according to the Frobenius norm
Time   : the algorithm's estimated time of execution (sec)
Dig    : the number of Digits (software accuracy)
SRF-ES : square-root-free symbolic operations
Table 10 Behavior of the ERES algorithm for random sets of polynomials

No.    m    d   d0  Iter |  VFP: Dig   Rel     Time   |  HC: Dig   Rel      Time   |  ES: Time  | SRF-ES: Time
i      5    4   1    3   |       17   10⁻¹⁷    0.015  |      16   10⁻¹⁵     0.016  |    0.125   |    0.016
ii     8    7   2    5   |       21   10⁻¹⁸    0.032  |      17   10⁻¹⁶     0.032  |    0.110   |    0.047
iii   10   10   2    5   |       21   10⁻¹⁷    0.047  |      19   10⁻¹⁷     0.063  |    0.219   |    0.125
iv     5   10   2    6   |       20   10⁻¹⁸    0.078  |      17   10⁻¹⁴     0.109  |    0.250   |    0.156
v     15   15   2    7   |       22   10⁻¹⁵    0.172  |      20   10⁻¹⁵     0.187  |    0.656   |    0.328
vi    15   15   5    6   |       35   10⁻¹⁷    0.172  |      30   10⁻¹⁶     0.171  |    0.937   |    0.405
vii   20   15   5    5   |       21   10⁻¹⁸    0.203  |      18   10⁻¹⁶     0.219  |    1.156   |    0.406
viii  30   15   3    6   |       22   10⁻¹⁶    0.218  |      18   10⁻¹⁵     0.265  |    1.329   |    0.704
ix     2   20   4   33   |       32   10⁻¹⁵    0.266  |      32   10⁻¹⁵     0.234  |   15.031   |    4.609
x     15   20   4    7   |       23   10⁻¹⁷    0.234  |      22   10⁻¹⁷     0.265  |   38.391   |   14.344
xi   100   20   4    7   |       40   10⁻²⁰    1.109  |      34   10⁻²²     1.922  |   34.562   |   14.062
xii  200   50   2    9   |      200   10⁻⁶⁷   69.032  |     150   10⁻⁵⁷   314.390  |   >6000    |   >6000
Remark 3.6 Considering the experimental results in Table 10, we note that all the polynomial sets were constructed from random GCDs and random polynomials with integer coefficients (≤ 9999). When using ES or SRF-ES operations, the accuracy of the system is set to hardware precision (16 decimal digits). The relative error, estimated using the Frobenius vector norm ‖v‖F = ( Σ_{i=0}^{n} |vi|² )^{1/2} for v = (v0, ..., vn)^t ∈ ℝ^{n+1}, is 0 for ES operations and less than 10⁻¹⁴ for SRF-ES operations. In every case, both accuracies εt and εG are set to 10⁻¹⁵, which is close to the limit of double-precision arithmetic. The columns of VFP and HC operations present the lowest number of digits required for an acceptable result, together with the corresponding relative error and time of execution.
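The relative error used throughout the tables can be reproduced directly from the printed coefficients; the Python/NumPy sketch below (function name ours) recomputes the last entry of Table 2:

```python
import numpy as np

def gcd_relative_error(g_computed, g_exact):
    """Relative error between GCD coefficient vectors, as in Remark 3.6:
    ||v||_F = (sum |v_i|^2)^(1/2),  error = ||g_c - g_e|| / ||g_e||."""
    g_computed = np.asarray(g_computed, dtype=float)
    g_exact = np.asarray(g_exact, dtype=float)
    return np.linalg.norm(g_computed - g_exact) / np.linalg.norm(g_exact)

# Last case of Table 2: computed vs exact GCD coefficients (ascending)
err = gcd_relative_error(
    [1.99999993280184, 3.99999993280994, 2.99999996640441, 1.0],
    [2.0, 4.0, 3.0, 1.0])
# err is about 1.84e-8, the 0.184 · 10^-7 reported in Table 2
```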
Table 11 Storage and time requirements for a random set Π12,12 with exact GCD g(s) = s² + (11/3) s + 0.625

Type of data     | Type of operations | Digits | Memory (bytes) | Time (secs) | Relative error
rational         | ES                 | 14     | 7,735,542      | 0.547       | 0
rational, float  | HC                 | 28     | 2,853,434      | 0.203       | 0.223795 · 10⁻¹¹
float            | VFP                | 28     | 2,868,462      | 0.172       | 0.955775 · 10⁻¹¹
4 Conclusions on the performance of the ERES method
The ERES method is quite effective when properly implemented in a programming environment. It can handle large sets of real polynomials without restrictions on the type of data. In fact, the method proves to be faster when the polynomials of a given large set Πm,d are linearly dependent. An appropriate selection of the value of the Gaussian accuracy εG helps ERES to reduce the row dimension of the basis matrix dramatically and hence to proceed with a smaller set of polynomials. This reduction always takes place in the first iteration of the method. Indeed, for any polynomial set, considering the vectors of coefficients of each polynomial, there is a way to find the most orthogonal linearly independent representatives of the set, without transforming the original data, and to form a base of polynomials which can give us the GCD of the whole set [5]. Such a base is referred to as a best uncorrupted base [4, 6]. However, the process of computing such a base is very demanding in terms of time and memory, and it can benefit the ERES method only if the basis matrix Pm ∈ ℝ^{m×n} is highly rank deficient, or m ≫ n (n = d + 1).
On the other hand, if the maximal degree of the polynomials of a given set is very large, then the column dimension of the basis matrix Pm will also be large. This often leads to many iterations, especially if the degree of the GCD of the set is small. In fact, the ERES algorithm becomes slow when the maximal degree of the polynomials of a given set is large enough, especially when m ≪ n. Therefore, the dimensions of the basis matrix Pm and the degree of the GCD are very important factors to take into account when selecting either VFP arithmetic or ES arithmetic for the operations performed during the execution of the ERES algorithm in a programming environment.
Variable precision operations, in VFP arithmetic, can always be our choice, since they are faster and more
economical in memory than symbolic operations, especially when a modest number of digits suffices for
reliable software accuracy. However, it has been observed that, when the basis matrix Pm has large dimensions and
full rank, many digits of software accuracy are needed in order to avoid results with large numerical error.
This is absolutely necessary if the column dimension of Pm is large and the degree of the GCD is small, because
ERES then performs many iterations and the propagation of errors intensifies. Additionally, if
we increase the number of digits of the software accuracy, the time and memory requirements also increase.
Still, it is not easy for the user to determine in advance the proper number of digits necessary for a
successful computation of the GCD.
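The effect of the Digits setting can be reproduced with Python's stdlib `decimal` module, which here plays the role of Maple's variable-precision (VFP) arithmetic; the function name `residual` is illustrative. The same computation, carried out at two precisions, leaves a round-off residual whose size is bounded by the chosen number of digits:

```python
from decimal import Decimal, getcontext

def residual(prec):
    """Round-off residual of a computation that is exactly 0 symbolically."""
    getcontext().prec = prec      # the 'Digits' of the software accuracy
    x = Decimal(1) / Decimal(3)   # inexact at any finite precision
    return abs(3 * x - 1)         # symbolically 3*(1/3) - 1 = 0

print(residual(8), residual(28))  # more digits -> smaller residual
```

Over the many successive transformations of an iterative method like ERES, such residuals accumulate, which is why a precision sufficient for one iteration may be inadequate for the whole run.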
Exact symbolic operations could have been our first and only choice, since they produce excellent results with
minimal or no error at all. But they are very costly in terms of time and memory (see table 11). This problem
is more obvious when the basis matrix Pm is large and dense. Indeed, the successive matrix transformations
performed by the method take considerable time and consume a lot of memory. However, if we recall that
the ERES algorithm decreases the size of the basis matrix during the iterations, the symbolic manipulation of
the data can rarely be prohibitively expensive. Symbolic operations are a good choice for the ERES method,
especially when the basis matrix Pm is sparse.
Since both variable precision operations and symbolic operations have advantages and disadvantages, it would
be best to combine them appropriately in order to obtain good performance and stability of the ERES
algorithm in every case. Many tests with large sets of polynomials, which form a basis matrix Pm ∈
ℝ^(m×n), m ≫ n, ρ(Pm) = n, showed that we can obtain results with quite acceptable errors within a short
time if we use rational data, and hence symbolic operations, to achieve a reduction of the row dimension of the
basis matrix Pm in the first iteration and then, after converting the data of the obtained matrix to a floating-point format, proceed with VFP operations. The results presented in table 10 (HC columns) derive from this
type of hybrid computations. An important fact is that this kind of HC operations requires fewer digits (software
accuracy) compared to VFP operations. Additionally, if our initial data are of radical type, it is better to convert
them to an approximate rational form (a fraction of integers). This conversion can also take place throughout the
process of normalization, in order to avoid the presence of square roots in our data. This technique reduces the
execution time of the ERES algorithm when symbolic operations are used. The operations performed by using
this technique shall be called square-root-free symbolic operations (SRF-ES) (see table 10). Yet again, such
techniques cannot be applied generally.
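The hybrid (HC) scheme just described can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `hybrid_first_pass` and `to_rational` are hypothetical names, and `Fraction.limit_denominator` stands in for the rational approximation that keeps square roots out of the data (the SRF idea).

```python
from fractions import Fraction
import math

def to_rational(x, max_den=10**8):
    """Approximate a (possibly irrational) coefficient by a fraction."""
    return Fraction(x).limit_denominator(max_den)

def hybrid_first_pass(rows):
    """Exact rational elimination for the first iteration, then hand the
    surviving rows back as floats for the remaining VFP iterations."""
    rows = [[to_rational(a) for a in row] for row in rows]
    for r in range(len(rows)):
        for i in range(r + 1, len(rows)):
            if rows[r][r] != 0:
                f = rows[i][r] / rows[r][r]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
    # exact arithmetic: dependent rows cancel to exact zeros and are dropped
    kept = [row for row in rows if any(row)]
    return [[float(a) for a in row] for row in kept]

rows = [[2, 4],
        [1, 2],    # exactly dependent on the first row
        [3, 5]]
print(hybrid_first_pass(rows))        # dependent row vanished exactly
print(to_rational(math.sqrt(2)))      # a square root replaced by a fraction
```

The point of doing the first pass exactly is that the row-dimension reduction is decided without any rounding; only afterwards does the cheaper, faster floating-point representation take over.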
Many mathematical software packages, like Maple and Mathematica, allow us to choose freely whether to
use variable precision operations or symbolic operations, by converting our initial data to an appropriate type.
The type of the initial data suggests the type of operations. For example, if our initial data are of rational
or radical type, then symbolic operations will be performed. However, there are some restrictions which oblige
us to convert our data to a floating-point format. Specifically, many built-in routines work only with floating-point data. For example, in Maple 8, we have to use floating-point data for the computation of the Singular
Value Decomposition of the matrix Pm^(k) obtained in the k-th iteration, because the computation of the left and right
singular vectors (as columns of matrices) may only be requested for a matrix whose entries can be evaluated
to purely floating-point values. This also holds for Mathematica. In other words, the function that we call for
the computation of the singular values of a matrix uses by default standard (hardware) floating-point arithmetic
(15 digits), or software floating-point arithmetic if requested. Therefore, when we are working with ERES
symbolically, it is preferable to select as a GCD a row from the last matrix Pm^(k), rather than the
first row of the matrix W^t from the SVD of Pm^(k), as described in proposition 2.2. The opposite holds if there
are floating-point data. These restrictions bring to the surface the problem of assigning proper values to the
accuracies εt, εG and Digits. Similar problems with the appropriate selection of accuracies arise in any other
matrix-based method computing the GCD of polynomials or other required quantities. Thus, we must always
keep in mind the software accuracy, which determines the numerical accuracy of the system.
Important factors to take into account when working with the ERES method are:
• Dimensions and structure of the original basis matrix Pm.
• Data type of the elements of Pm.
• Numerical accuracy of the original data (for floating-point numbers).
• Appropriate selection of values for the variable Digits and, if necessary, for εt and εG.
Therefore, we conclude that the computation of the GCD of a set of polynomials with real coefficients by
the ERES method is a process where floating-point operations and exact symbolic operations must be combined
for a better overall performance. We can say that we are looking for a hybrid type of implementation of the ERES method.
The above study helps to answer the following question: is it worthwhile to use hybrid computations to
improve the quality of the results achieved by an algorithm? From our experience, we think that hybrid computations can chiefly improve the accuracy of the results and thus can be used whenever possible.
References
[1] B. N. Datta, Numerical Linear Algebra and Applications (Brooks/Cole Publishing Company - ITP, 1995).
[2] G. H. Golub and C. F. Van Loan, Matrix Computations, 2nd ed. (The Johns Hopkins University Press, Baltimore, London, 1989).
[3] I. S. Pace and S. Barnett, Comparison of algorithms for calculation of g.c.d. of polynomials, Int. Journ. Systems Sci., Vol. 4, No. 2, 211-226 (1973).
[4] M. Mitrouli and N. Karcanias, Computation of the GCD of polynomials using Gaussian transformations and shifting, Int. J. Control, 58, 211-228 (1993).
[5] M. Mitrouli, N. Karcanias and C. Koukouvinos, Further numerical aspects of the ERES algorithm for the computation of the Greatest Common Divisor of polynomials and comparison with other existing methodologies, Utilitas Mathematica, 50, 65-84 (1996).
[6] M. Mitrouli, N. Karcanias and C. Koukouvinos, Numerical aspects for non-generic computations in control problems and related applications, Congressus Numerantium, 126, 5-19 (1997).
[7] M. Noda and T. Sasaki, Approximate GCD and its applications to ill-conditioned algebraic equations, Jour. of Comp. and Appl. Math., 38, 335-351 (1991).
[8] N. Karcanias, Invariance properties and characterisation of the greatest common divisor of a set of polynomials, Int. Journ. Control, 46, 1751-1760 (1987).
[9] N. Karcanias, S. Fatouros, M. Mitrouli and G. Halikias, Approximate greatest common divisor of many polynomials and generalized resultants, Proceedings of the ECC03 Conference, Cambridge, England (September 1-4, 2003).
[10] N. Karmarkar and Y. N. Lakshman, Approximate polynomial greatest common divisors and nearest singular polynomials, ISSAC'96, ACM Press, 35-39 (1996).
[11] S. Barnett, Greatest common divisor of several polynomials, Proc. Cambridge Phil. Soc., 70, 263-268 (1971).
[12] S. Fatouros and N. Karcanias, Resultant properties of the GCD of many polynomials and a factorisation representation of GCD, Int. Jour. Control, Vol. 76, No. 16, 1666-1683 (2003).