7 Numerical Programming in Maple
An important part of efficient scientific and mathematical programming is numerical computation. Maple provides many tools for computing with floating-point numbers, some for improving efficiency and some for improving accuracy.
7.1 In This Chapter
An Overview of Numeric Types in Maple
An Explanation of Floating-Point Numbers in Maple
Maple Commands for Numerical Computing
Efficient Numerical Programs
7.2 Numeric Types in Maple
Before discussing numerical computing in Maple, we will first introduce the various numeric data types used in Maple and briefly describe how they are represented. All of the real numbers discussed in this section will pass checks of type,numeric or type,extended_numeric.
Integers
The most basic numeric type in Maple is the integer. Small integers are represented directly as hardware integers (similar to the int type in C or the integer type in Fortran), which allows for maximum efficiency in both the CPU time used for arithmetic and the memory used for storage. That is, the number can be stored in one machine word and arithmetic operations can be performed with one CPU operation. On 32-bit architectures, integers in the range −(2^30−1) to 2^30−1 are stored in this way, while on 64-bit architectures, the range is −(2^62−1) to 2^62−1. Integers stored in this way are referred to as immediate integers.
Larger integers are stored in DAGs of type INTPOS or INTNEG, which contain pointers to arrays of digits that can store integers up to magnitude 10^(2^28−2^18) on 32-bit architectures and 10^(2^35+2^32−18) on 64-bit architectures.
dismantle(2^80-1);
INTPOS(6): 1208925819614629174706175
dismantle(-2^101+6);
INTNEG(6): -2535301200456458802993406410746
The arithmetic for these large integers is computed using the GNU Multiple Precision Arithmetic (GMP) library. This library is quite efficient, but still several orders of magnitude slower than arithmetic on immediate integers since each arithmetic operation will require more than one CPU operation and the larger the integer, the more operations and memory will be needed for arithmetic.
CodeTools:-Usage(add(i,i=-2^15..2^16));
memory used=160.88KiB, alloc change=0 bytes, cpu time=5.00ms, real time=4.00ms, gc time=0ns
1610629120
CodeTools:-Usage(add(i,i=2^88-2^15..2^88+2^16));
memory used=8.92MiB, alloc change=0 bytes, cpu time=8.00ms, real time=8.00ms, gc time=0ns
30423923890487326980991212339200
CodeTools:-Usage(add(i,i=2^4097-2^15..2^4097+2^16));
memory used=109.34MiB, alloc change=-3.01MiB, cpu time=495.00ms, real time=283.00ms, gc time=440.60ms
205337297974639914340665500453995519859046771005206125061344145148458177846287911591047839494539869945402403881125903466576736813681708881124024847232456927296954601158258822773698492236321182857799702845030997587567848047013984302926637722288989174866047402930648066257455156822794719737995549112033208483004719072244728296595021115951707203531283316339427422299668078787669834908390333537991291917652425009429047159125028756356966402889004692279651401969448167419628544878495940166131118356838592642274676376886478814390080150121184593927849550735415253857623428878524437164429933525844071108151834935561910383653050597127603219544447136154079024731191638245863569443556792160565672892410184321563392936412391807603283785170896557622787216442839497627198271642314069273549773891778249817599015826649550121400193546930160852466240321147395833226174991830052909513217459249145986409972890607772408436946645869934972341681160127289004812013361613475609253983756470541201044624106898890791412083639760908985411624428440457165177561278545720011754678944275988097254471054453900542709445889615221249910582150846966313990266569518867263462971416395393555619749389086067511784172161221094396003584450254897514753869702834443706806664146972590080
Any transitions between GMP integers and immediate integers are completely transparent, and in general it is not possible to tell them apart without low-level tools such as addressof. However, you can check whether an integer is small enough to fit into a single machine word with the types integer[4] and integer[8], for 4-byte and 8-byte words respectively.
Integers of all types pass a type,integer type check.
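For example, the word-size types described above can be used to check which representation a value will use (a sketch; the exact cutoffs are architecture dependent):

```maple
# Sketch: classifying integers by storage size on a 64-bit system
type(2^40, integer[8]);    # true: fits in an 8-byte word
type(2^100, integer[8]);   # false: stored as a GMP integer
type(2^100, integer);      # true: still an integer
```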
The Integer constructor is guaranteed to return an integer, an extended numeric symbol such as infinity or undefined, a complex number with integer parts, or to return unevaluated.
Integer(-2^160);
−1461501637330902918203684832716283019655932542976
Integer(infinity);
∞
Integer(1/2);
Integer(1/2)
The system dependent value for the largest immediate integer can be found with kernelopts(maximmediate), the maximum number of decimal digits in an integer can be found with kernelopts(maxdigits), and the version of the GMP library being used can be found with kernelopts(gmpversion).
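These queries can be run together as a quick diagnostic of the current system (the returned values vary by platform and Maple version):

```maple
# Query the system-dependent integer limits
kernelopts(maximmediate);  # largest immediate integer
kernelopts(maxdigits);     # maximum decimal digits in an integer
kernelopts(gmpversion);    # version of the GMP library in use
```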
Rationals
Exact rational numbers are stored in DAGs of type RATIONAL, which consist of a pair of integers. The first integer is the numerator and can be a POSINT or NEGINT. The second integer is the denominator and is a POSINT. Most low-level programming languages such as C or Fortran do not have an equivalent rational number type.
dismantle(1/2);
RATIONAL(3): 1/2 INTPOS(2): 1 INTPOS(2): 2
dismantle(-10/3);
RATIONAL(3): -10/3 INTNEG(2): -10 INTPOS(2): 3
Rational numbers can be constructed by using the division operator or the Fraction constructor. In either case, automatic simplification will occur to ensure that the denominator is positive and that the fraction is in lowest terms (the numerator and denominator do not have factors in common). This means that the Fraction constructor may return integers in some cases.
dismantle(Fraction(21,7));
INTPOS(2): 3
dismantle(Fraction(40,-14));
RATIONAL(3): -20/7 INTNEG(2): -20 INTPOS(2): 7
Rational number arithmetic is performed in the natural way using integer arithmetic and the igcd and ilcm operations to reduce to lowest terms.
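As a sketch of what this reduction looks like in terms of the underlying integer operations (the variable names are illustrative):

```maple
# Sketch: rational addition a/b + c/d reduced to lowest terms with igcd
a, b := 2, 3:
c, d := 5, 6:
num := a*d + c*b:     # 27
den := b*d:           # 18
g := igcd(num, den):  # 9
num/g, den/g;         # 3, 2 -- that is, 2/3 + 5/6 = 3/2
```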
Fraction(2^20+2^12,2^27-2^13) + Fraction(2^12-1,2^13);
68141057/134209536
Fraction(2^20+2^12,2^27-2^13) * Fraction(23,187);
5911/6127242
Rational numbers of all types will pass a type,rational type check. Only rational numbers that are not also integers will pass a type,fraction type check. Additionally, type,extended_rational includes all rationals as well as the extended numeric symbols infinity, -infinity, and undefined.
type(1, fraction);
false
Like the Integer constructor, the Fraction constructor will return unevaluated if it cannot return a value of type extended_rational.
Fraction(x,1);
Fraction(x, 1)
Fraction(infinity);
∞
Floating-Point Numbers
Floating-point numbers are stored in DAGs of type FLOAT.
In Maple, as in nearly every programming language, floating-point numbers are constructed with, and visually distinguished from integers by, a decimal point symbol, '.'. The floating-point number 1. is often treated as equal to the exact integer 1.
evalb(1. = 1);
true
Maple floating-point numbers can also be constructed with the SFloat constructor (or the equivalent Float constructor) and can be checked with the nearly equivalent type,sfloat and type,float types. We will generally refer to these numbers as sfloats when we need to distinguish them from hardware floating-point numbers (hfloats), introduced below.
Float(1);
1.
dismantle(SFloat(0.3333));
FLOAT(3): .3333 INTPOS(2): 3333 INTNEG(2): -4
type(.1, float);
true
type(.1, sfloat);
true
type(1, float);
false
A floating-point number represents a rational number with a fixed precision. That rational number can be recovered with convert/rational.
convert(.3333333333, rational, exact);
3333333333/10000000000
However, not every rational number can be represented exactly by a floating-point number. For example, the closest 10-digit floating-point number to 1/3 is 0.3333333333.
convert(1/3, float);
0.3333333333
Also, unlike the numeric types integer and rational, integer and float do not have compatible arithmetic. Floating-point arithmetic has a fixed finite precision, and a result is rounded whenever it does not fit into that precision.
9123456789 + 8123456789 <> convert( 9123456789. + 8123456789., rational, exact);
17246913578≠17246913580
123456 * 1234567 <> convert( 123456.*1234567., rational, exact);
152414703552≠152414703600
Unlike in many other programming languages, the precision of sfloat arithmetic can be changed. For this reason, sfloats are known as arbitrary-precision floating-point numbers.
More information on sfloats and how they differ from the floating-point types in languages such as C and Fortran will be discussed in greater detail in More about Floating-Point Numbers in Maple.
Hardware Floating-Point Numbers
Floating-point numbers of the type used in languages such as C and Fortran can also be constructed in Maple; they are known as hardware floating-point numbers or hfloats. These types are stored as 8-byte double precision IEEE floating-point numbers contained in DAGs of type HFLOAT. Since the . notation is used to construct Maple sfloats, hfloats must be constructed with the HFloat constructor. Maple will display sfloats and hfloats the same way, using decimal notation.
HFloat(1);
1.
dismantle(HFloat(0.3333));
HFLOAT(2): .3333
The advantage of hfloats over sfloats is that their arithmetic is computed directly using a single CPU operation for each arithmetic operation. Maple sfloats, however, offer much more flexibility and precision. In many ways the difference is analogous to the difference between immediate integers and GMP integers.
Hardware floats can be distinguished from sfloats with the type,hfloat type.
type(HFloat(1), float);
true
type(HFloat(1), sfloat);
false
type(HFloat(1), hfloat);
true
type(SFloat(1), hfloat);
false
For more information on hardware floats and how they differ from sfloats, see More about Floating-Point Numbers in Maple.
Extended Numeric Types
The special built-in symbols infinity (∞) and undefined can be used in numeric arithmetic in Maple. In general, operations involving ∞ simplify automatically to a signed infinity or a complex infinity. For details, refer to the type,infinity help page.
-1*infinity;
−∞
1/2*infinity;
∞
1/infinity;
0
The undefined symbol is usually produced as the result of attempting to carry out an operation that cannot result in a number for the given operands. Almost every arithmetic operation involving undefined returns undefined. For details, refer to the type,undefined help page.
infinity-infinity;
undefined
undefined-undefined;
undefined
undefined+1;
undefined
Integer and rational numbers share exact undefined and infinite symbols while sfloat and hfloat numbers have their own versions of these, which are displayed differently but treated similarly.
Float(infinity);
Float(∞)
HFloat(undefined);
Float(undefined)
Complex Numbers
A complex number in Maple is a DAG of type COMPLEX, which consists of a pair of values drawn from the other numeric types: the real part and the imaginary part. Complex numbers can be constructed in the natural way using the symbol I for the imaginary unit, or using the Complex constructor.
dismantle(1+I);
COMPLEX(3) INTPOS(2): 1 INTPOS(2): 1
dismantle(Complex(1/2,1/3));
COMPLEX(3) RATIONAL(3): 1/2 INTPOS(2): 1 INTPOS(2): 2 RATIONAL(3): 1/3 INTPOS(2): 1 INTPOS(2): 3
Automatic simplification will ensure that if one of the parts of a complex number is a float (or hfloat), then the other will be converted to a float (hfloat) as well.
dismantle(Complex(1., 1/1001));
COMPLEX(3) FLOAT(3): 1. INTPOS(2): 1 INTPOS(2): 0 FLOAT(3): .9990009990e-3 INTPOS(2): 9990009990 INTNEG(2): -13
dismantle(Complex(HFloat(1.), 1/1001));
COMPLEX(3) HFLOAT(2): 1. HFLOAT(2): .000999000999
dismantle(Complex(HFloat(1.), 1.));
COMPLEX(3) HFLOAT(2): 1. HFLOAT(2): 1.
Complex numbers are not of type numeric, but they can be checked with the type,complex type, which can also be structured to check the numeric subtypes of its two components.
type(1+I,numeric);
false
type(1+I,complex(integer));
true
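The structured form accepts any of the numeric subtypes for the components. A few illustrative checks (chosen for this sketch; both components must match the named subtype):

```maple
# Structured complex type checks
type(1/2 + I/3, complex(rational));  # true: both parts are rational
type(1.0 + 2*I, complex(float));     # true: contagion made both parts float
type(1 + 2*I, complex(integer));     # true
```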
Non-numeric Constants
Many Maple expressions represent constants but are not considered to be of type numeric. This means that arithmetic performed on these constants consists of more generic symbolic operations on DAGs of type SUM, PROD, NAME, or FUNCTION. Some examples of non-numeric constants are Pi (π), Catalan, sin(1), sqrt(5), and π + e^2 − 1 + sqrt(5)·Catalan.
type(Pi, numeric);
false
type(sqrt(5)-1, constant);
true
7.3 More about Floating-Point Numbers in Maple
To take full advantage of floating-point numbers and to avoid many common pitfalls in numerical computing, it is important to understand exactly what floating-point numbers are and how they are represented.
Representation of Floating-Point Numbers in Maple
The dismantle command shows that the two numbers 1 and 1. have different internal representations. 1 is simply stored as an integer while 1. is stored as a pair of integers.
dismantle(1);
INTPOS(2): 1
dismantle(1.);
FLOAT(3): 1. INTPOS(2): 1 INTPOS(2): 0
Similarly, the numbers 1/2 and 0.5 are different even though they are both stored as pairs of integers.
dismantle(0.5);
FLOAT(3): .5 INTPOS(2): 5 INTNEG(2): -1
In Maple, the FLOAT DAG-type represents a floating-point number in the form S * 10^E where both S and E are integers. For 1., the significand (or mantissa) is S=1 and the exponent is E=0. In addition to being specified in decimal notation, floats of this form can be constructed by using scientific notation, or the Float constructor.
Float(2,0);
2.
2*1e0;
2.
The advantage of using this significand-exponent representation is that fixed-precision approximations of large and small numbers can be stored compactly and their arithmetic can be done efficiently. Storing the integer 10^50 needs at least 167 bits, or 3 words, on a 64-bit machine. The floating-point number 1e50 could in principle be stored in fewer than 8 bits, and in practice uses 2 words (one for each integer).
dismantle(10^50);
INTPOS(8): 100000000000000000000000000000000000000000000000000
dismantle(1e50);
FLOAT(3): .1e51 INTPOS(2): 1 INTPOS(2): 50
Using two immediate integers, a float can store a much larger range of numbers than a rational number with two immediate integers. The range a rational can represent is about 1·10^(−9) .. 1·10^9, while a float can represent a range of about 1·10^(−1073741823) .. 9·10^1073741823. This is a much larger range for the same storage cost. Of course, that larger range means that floats of a fixed size can represent fewer numbers in that range. And since floating-point numbers are always of a fixed size, this means that arithmetic will always be of limited precision. That is, each operation will have to round the result to a number that can be represented as another floating-point number.
In Maple, the significand is limited to 10 decimal digits of precision by default (this can be changed), while the exponent is restricted to being a word-sized integer.
More information on the restrictions on the size of software floats in Maple can be found by using the Maple_floats command.
By contrast, hfloats are represented in base 2, rather than base 10. So they represent numbers using the form S * 2^E, where the significand, S, is a 52-bit integer and the exponent, E, is an 11-bit integer. Thus, the range of positive numbers representable as hardware floats is about 2.225073859·10^(−308) .. 1.797693135·10^308. Because the largest possible significand of a hardware float has about floor(log[10](2^52)) = 15 base-10 digits of precision, hardware floats can be converted to software floats without meaningful loss of precision when Digits is 15. Conversely, as long as its exponent is smaller than 307 and its significand has fewer than 15 digits, an sfloat can be converted to an hfloat without loss of precision.
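The binary significand also fixes the unit roundoff of hfloat arithmetic: the gap between 1 and the next representable hfloat is 2^(−52), roughly 2.22×10^(−16). A quick sketch:

```maple
# The machine epsilon of an hfloat is 2^(-52)
eps := HFloat(2)^(-52):
HFloat(1) + eps - HFloat(1);    # nonzero: 1 + 2^(-52) is representable
HFloat(1) + eps/2 - HFloat(1);  # 0.: the half-epsilon rounds away
```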
Precision and Accuracy
By default, 10-digit precision is used for floating-point arithmetic; that is, the result of each operation is rounded to 10 digits. Any single floating-point operation is therefore accurate to 10 digits.
For example, storing 10^50+1 requires 51 decimal digits of significand, so it will be rounded in floating-point arithmetic. By contrast, 10^50+10^41 can be stored with 10 digits, so it will still be computed accurately.
.1e51 + 1.;
1.×10^50
.1e51 + .1e42;
1.000000001×10^50
The Digits environment variable can be used to change the working precision used by Maple. Larger values of Digits will allow more accurate computation, but at the cost of slower arithmetic.
Digits := 100:
.1e51 + 1.;
1.00000000000000000000000000000000000000000000000001×10^50
The maximum value for Digits is system dependent and can be found with the Maple_floats command.
Maple_floats(MAX_DIGITS);
38654705646
For the default value of Digits, the significand is an immediate integer and so arithmetic will be fast in general. It also means that some numerical function evaluations (such as sin in the following example) will be able to use the CPU's native hardware floating-point arithmetic to achieve the needed precision. However, raising Digits above the default value will lead to slower arithmetic and slower function evaluation.
Digits:=10:
CodeTools:-CPUTime(add(sin(1e-6*i),i=1..100000));
1.280, 4995.884639
Digits:=22:
CodeTools:-CPUTime(add(sin(1e-6*i),i=1..100000));
6.852, 4995.884638682140998954
Reducing Digits below its default value does not usually lead to large improvements in performance.
Digits:=5:
CodeTools:-CPUTime(add(sin(1e-6*i),i=1..100000));
1.029, 4996.0
It is also important to note that changing Digits does not necessarily change the accuracy of sequences of multiple floating-point computations; it changes only the precision of the individual operations performed. The following example computes two additions using 10 digits of precision, but catastrophic cancellation leads to a mere one digit of accuracy in the final answer.
Digits := 10:
x := 1234567890.;
x ≔ 1.234567890×10^9
y := -x+1;
y ≔ −1.234567889×10^9
z := 3.141592654;
z≔3.141592654
x+z+y<>z+1;
4. ≠ 4.141592654
Ensuring accuracy requires careful study of the problem at hand. In this example, you need 19 digits of precision to get 10 digits of accuracy.
Digits := 19:
x+z+y=z+1;
4.141592654=4.141592654
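In this particular example the cancellation can also be avoided by reordering the operations so that the two large terms cancel exactly before the small term is added; a sketch at the default precision:

```maple
# Reordering to avoid catastrophic cancellation at Digits = 10
Digits := 10:
x := 1234567890.:
y := -x + 1:
z := 3.141592654:
(x + y) + z;   # x + y is exactly 1., so all 10 digits survive
```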
Floating-Point Contagion
An important property of floating-point numbers in Maple, and nearly every other computing environment, is contagion. When numerical expressions are created involving both floating-point numbers and exact numbers, the floating property is contagious and causes the answer to become a floating-point number.
1. * 10;
10.
0. + 10;
10.
As you can see, this contagion property can be used as a quick method to convert exact values to floating-point numbers. However, while floating-point contagion extends to all Maple structures of type numeric (except, in some cases, hfloats), it does not apply to non-numeric constants.
type(3/4, numeric);
true
4/3 + 0.;
1.333333333
1.*sqrt(3);
1.·√3
The hfloat type is also contagious, but the precise behavior of the contagion is determined by the UseHardwareFloats environment variable. By default, hfloats are contagious for small values of Digits:
type(4/3 + HFloat(0.), hfloat);
true
type(1. + HFloat(0.), hfloat);
true
HFloat(1.1) * sin(4*Pi/7) -1;
1.10000000000000·sin(3π/7) − 1
For large values of Digits, hfloats in computations will be converted to sfloats so that the results are sfloats.
Digits := 20;
Digits≔20
type(1 + HFloat(0.), hfloat);
false
type(1 + HFloat(0.), sfloat);
true
If UseHardwareFloats=true then hfloats are completely contagious.
UseHardwareFloats := true;
UseHardwareFloats≔true
a := 10.^19+1;
a ≔ 1.0000000000000000001×10^19
b := a + HFloat(0.1);
b ≔ 1.00000000000000×10^19
type(b, hfloat);
true
If UseHardwareFloats=false then hfloats will always be converted to sfloats in computations, regardless of the setting of Digits. The HFloat constructor will still create hfloats, however.
UseHardwareFloats := false;
UseHardwareFloats≔false
Digits := 10;
Digits≔10
c := 1 + HFloat(0.1);
c≔1.100000000
type(c, hfloat);
false
type(HFloat(0.1), hfloat);
true
Table 7.1 summarizes the floating-point contagion rules.

    UseHardwareFloats:   true    | deduced (default)  | false
    Digits:              any     | 1...15  | 16...    | any
    ---------------------------------------------------------
    hfloat + hfloat      hfloat  | hfloat  | sfloat   | sfloat
    hfloat + sfloat      hfloat  | hfloat  | sfloat   | sfloat
    sfloat + sfloat      hfloat  | sfloat  | sfloat   | sfloat
More on the Floating-Point Model
The software floating-point system is designed as a natural extension of the industry standard for hardware floating-point computation, known as IEEE 754. Thus, there are representations for infinity and undefined (what IEEE 754 calls a NaN, meaning Not a Number) as discussed in Extended Numeric Types.
The IEEE 754 standard defines five rounding algorithms. Two modes, called nearest and simple, round to nearest values, and the other three are directed roundings that round up or down (as needed) towards −∞, ∞, or 0. Maple implements all of these rounding modes, and the desired mode can be selected by setting the Rounding environment variable.
Rounding;
nearest
1.4^10;
28.92546550
Rounding := 0;
Rounding≔0
1.4^10;
28.92546549
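The directed modes can be used for quick interval-style bounds: evaluating the same expression while rounding toward −∞ and then toward ∞ brackets the true value (a sketch; the variable names are illustrative):

```maple
# Bracketing 2/3 with directed rounding
Rounding := -infinity:
lo := evalf(2/3):
Rounding := infinity:
hi := evalf(2/3):
Rounding := nearest:
lo, hi;   # lo <= 2/3 <= hi
```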
Another important feature of this system is that the floating-point representation of zero, 0., retains its arithmetic sign in computations. That is, Maple distinguishes between +0. and -0. when necessary. In most situations, this difference is irrelevant, but when dealing with functions that have a discontinuity across the negative real axis, such as ln(x), preserving the sign of the imaginary part of a number on the negative real axis is important.
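A minimal illustration of the signed zero (the displayed behavior assumes the default numeric event settings):

```maple
# +0. and -0. are distinct floating-point values
evalb(0. = -0.);   # true: they still compare equal arithmetically
1/0.;              # Float(infinity)
1/(-0.);           # Float(-infinity)
```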
For more intricate applications, Maple implements extensions of the IEEE 754 notion of a numeric event, and provides facilities for monitoring events and their associated status flags. For more information about this system, refer to the numerics help page.
7.4 Maple Commands for Numerical Computing
In this section we will discuss some of the commands available in Maple for floating-point computation.
The evalf Command
The evalf command is the primary tool in Maple for performing floating-point calculations in software floating-point mode. You can use evalf to compute approximations of non-numeric constants.
evalf(Pi);
3.141592654
You can alter the number of digits of the approximation by changing the value of the environment variable Digits, or by specifying the number as an index to evalf (which leaves the value of Digits unchanged).
Digits := 20:
evalf(Pi);
3.1415926535897932385
evalf[200](Pi);
3.1415926535897932384626433832795028841971693993751058209749445923078164062862089986280348253421170679821480865132823066470938446095505822317253594081284811174502841027019385211055596446229489549303820
evalf(sqrt(2));
1.4142135623730950488
Remember that Digits specifies the precision in decimal digits, unlike hardware floating-point numbers, which specify precision in binary digits.
All floating-point computations are performed in finite precision, with intermediate results generally being rounded to Digits precision. As such, it is possible for round-off errors to accumulate in long computations. Maple ensures that the result of any single floating-point arithmetic operation (+, -, *, /, or sqrt) is fully accurate. Further, many of the basic functions in Maple, such as the trigonometric functions and their inverses, the exponential and logarithmic functions, and some of the other standard special functions for mathematics, are accurate to within .6 units in the last place (ulps), meaning that if the Digits + 1st digit of the true result is a 4, Maple may round it up, or if it is a 6, Maple may round it down. Most mathematical functions in Maple, including numerical integration, achieve this accuracy on nearly all inputs.
It is possible to create software floats with different precisions. Changing the value of Digits will not change these numbers; it affects only the precision of subsequent operations on those numbers.
Digits := 50;
Digits≔50
a := evalf(Pi);
a≔3.1415926535897932384626433832795028841971693993751
a;
3.1415926535897932384626433832795028841971693993751
Digits := 10:
a+1;
4.141592654
evalf(a);
3.141592654
From this example, you can see that evalf can be used to create a lower precision float from one of higher precision. This can be used to round a result to a desired number of digits. However, evalf will not increase the precision of a low precision float.
evalf[100](1.0);
1.0
evalf[10000](a);
3.1415926535897932384626433832795028841971693993751
The evalf command also provides an interface to purely numerical computations of integrals, limits, and sums.
Some definite integrals have no closed-form solution in terms of standard mathematical functions. You can use evalf to obtain a numerical answer directly using numerical techniques.
r := Int(exp(x^3), x=0..1);
r ≔ ∫_0^1 e^(x^3) dx
value(r);
∫_0^1 e^(x^3) dx
evalf(r);
1.341904418
In other cases, Maple can find an exact solution, but the form of the exact solution is almost incomprehensible. The function Beta in the following example is a special function that appears in mathematical literature.
q := Int( x^99 * (1-x)^199 / Beta(100, 200), x=0..1/5 );
q ≔ ∫_0^(1/5) 277216764217237649652225568421760372018697733716243247244202243820317088554933908000·x^99·(1−x)^199 dx
value(q);
27852290545780521179255248650434305998403849800909690342170417622052715523897761906828166964420518416902474524718187972029459617663867797175746341349064425727501861101435750157352018112989492972548449785454954447636248495323512797804102876034481999911930417847858749936840755474537033615661445973112364349371450421100562106866977667955024449202371857434152360496874313577908566230689757503569126129150390625
evalf(q);
3.546007367×10^(−8)
The two previous examples use the Int command rather than int for the integration. If you use int, Maple first tries to integrate the expression symbolically. Thus, when evaluating the following commands, Maple determines a symbolic answer and then converts it to a floating-point approximation, rather than performing direct numerical integration. In general, the symbolic computation is more difficult, and thus slower than the numerical computation.
evalf( int(x^99 * (1-x)^199 / Beta(100, 200), x=0..1/5) );
3.546007367×10^(−8)
Similarly, evalf can be used on the inert forms Limit and Sum to compute numeric limits and sums using numerical algorithms.
evalf(Limit(sin(erf(1)*x)/(erf(1)^2*x),x=0));
1.186660803
evalf( Sum(exp(x), x=RootOf(_Z^5+_Z+1)) );
4.791792042 + 0.·I
When Not to Use evalf
In general, the symbolic commands in Maple can handle floating-point numbers in their input but, by their nature, floats are not as precise as rationals or symbolic constants. So, even if you want a numerical answer from a command, you should avoid calling evalf on the input.
The following command does not compute the expected answer of 0.1111111111.
limit(n*(evalf(1/3) - 1/(3+1/n)), n=infinity);
Float(−∞)
It would have been computed correctly with non-float values in the input.
evalf( limit(n*(1/3 - 1/(3+1/n)), n=infinity) );
0.1111111111
Numeric Solvers
There are also a number of numerical algorithms available in Maple in commands other than evalf. One of the most important is fsolve which is short for floating-point solve. This command computes numerical solutions to equations or systems of equations. In general, it is much more efficient than calling evalf on the result of solve, especially if you are interested in only a single solution.
fsolve( exp(x) + 2*sin(x), x);
−0.3573274113
The fsolve command is a sophisticated heuristic that chooses among many different algorithms depending on the input. There are several more special purpose solving tools available in the RootFinding package.
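For example, a range or the avoid option can direct fsolve to other roots of the same equation (the bounds below were chosen by inspection and are only illustrative):

```maple
# The root found above lies near x = -0.357; restricting the range
# locates a different root of the same function
fsolve( exp(x) + 2*sin(x), x, x = -4..-2 );
# 'avoid' excludes solutions that have already been found
fsolve( exp(x) + 2*sin(x), x, avoid = {x = -0.3573274113} );
```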
Several symbolic solvers in Maple also have numeric modes. The dsolve and pdsolve commands both accept a numeric option, which indicates that a numerical answer should be computed using purely numeric methods. For extensive information on these numeric commands, refer to the dsolve/numeric and pdsolve/numeric help pages.
The evalhf Command
Like evalf, the evalhf command computes a numerical approximation of its input. However, evalhf uses hardware floats in all intermediate calculations before returning an sfloat.
dismantle( evalhf(1/3) );
FLOAT(3): .333333333333333315 INTPOS(2): 333333333333333315 INTNEG(2): -18
The evalhf command is affected by the value of Digits, but since intermediate calculations are done with hfloats, at most 18 digits will be returned.
Digits := 100;
Digits≔100
evalhf(1/3) ;
0.333333333333333315
Notice that in this example the result is only correct to 16 digits. In general, the results from evalhf are guaranteed to 15 digits of accuracy.
To find the number of guaranteed digits for your version of Maple, use evalhf(Digits):
evalhf(Digits);
15.
In fact, evalhf is, despite superficial similarities, very different from evalf. The evalhf command uses a completely separate evaluation environment which uses only simple types rather than the Maple DAG types. This means that it can be very fast, but at the cost of being limited in the types of computations it can perform.
Digits := 15;
Digits≔15
c := 10.^14;
c≔1.00000000000000×1014
CodeTools:-Usage( evalhf( add( (i+c), i=1..10^6) ) );
memory used=1.24KiB, alloc change=0 bytes, cpu time=42.00ms, real time=42.00ms, gc time=0ns
1.00000000499999867×10^20
CodeTools:-Usage( ( add( (i+c), i=1..10^6) ) );
memory used=103.06MiB, alloc change=5.00MiB, cpu time=973.00ms, real time=766.00ms, gc time=424.23ms
1.00000000500000×10^20
c := HFloat(c);
CodeTools:-Usage( ( add( (i+c), i=1..10^6) ) );
memory used=51.02MiB, alloc change=12.00MiB, cpu time=505.00ms, real time=446.00ms, gc time=119.52ms
1.00000000500001×10^20
In particular, evalhf handles only a specific list of functions. For the list of functions that evalhf recognizes, refer to the evalhf/fcnlist help page.
evalhf(sin(exp(gamma+2)+ln(cos(Catalan))));
0.0980197901238379354
evalhf( b /3 );
Error, cannot handle unevaluated name `b` in evalhf
The evalhf command works with Arrays of hardware floats, but it cannot handle symbols, lists, sets, and most other Maple data structures.
evalhf(map(t->t+1, [1, 2, 3, 4]));
Error, unable to evaluate expression to hardware floats: [1, 2, 3, 4]
To create an Array of hardware floats, you can use the option datatype=float[8], which specifies that the elements in the Array are 8-byte hardware floats.
A := Array([1, 2, 3, 4], datatype=float[8]);
A ≔ [1. 2. 3. 4.]
evalhf(map(t->t+1, A));
[2. 3. 4. 5.]
You can also create an Array that can be used by evalhf by using the constructor hfarray. Both constructors create an Array of hardware floats. The only difference is that hfarray defaults to C_order instead of Fortran_order.
A := hfarray(1..4, 1..4, (i,j)->ithprime(i)*isqrt(3*(i+j)));
lprint(A);
Array(1 .. 4,1 .. 4,{(1, 1) = 4., (1, 2) = 6., (1, 3) = 6., (1, 4) = 8., (2, 1) = 9., (2, 2) = 9., (2, 3) = 12., (2, 4) = 12., (3, 1) = 15., (3, 2) = 20., (3, 3) = 20., (3, 4) = 25., (4, 1) = 28., (4, 2) = 28., (4, 3) = 35., (4, 4) = 35.} ,datatype = float[8],order = C_order)
User-defined Maple procedures can be evaluated in the evalhf environment as long as they comply with the restrictions outlined in the evalhf/procedure help page.
SlowPower := proc(a,n) local i, s; s:=1; for i to n do s := a*s; end do; end proc;
SlowPower ≔ proc(a, n) local i, s; s ≔ 1; for i to n do s ≔ a*s end do end proc
SlowPower(2,10);
1024
evalhf( SlowPower(2,10) );
1024.
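Procedures run under evalhf can also update hardware-float Arrays in place by wrapping the Array argument in var (a sketch; the procedure name is illustrative):

```maple
# Squaring the entries of an hfloat Array in place under evalhf
SquareAll := proc(A, n) local i;
    for i to n do A[i] := A[i]^2 end do;
end proc:
A := Array([1., 2., 3.], datatype = float[8]):
evalhf( SquareAll(var(A), 3) ):
A;   # now contains 1., 4., 9.
```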
Numerical Linear Algebra
Maple has access to many libraries for fast numeric computation such as BLAS, CLAPACK, and the NAG® C Library. To take full advantage of the speed provided by these commands, you need to provide them with Matrices and Vectors with the appropriate datatype.
For example, floating-point Matrix-Matrix products can be computed very quickly by the BLAS libraries, and the quickest dispatch to the BLAS routines happens when the Matrices are created with datatype=float[8].
A := Matrix(5^3,5^3,(i,j)->(i-j+1)/(i+j));
sz:=interface(rtablesize=[2,2]):
CodeTools:-Usage(A^2);
memory used=0.59GiB, alloc change=109.00MiB, cpu time=2.46s, real time=1.92s, gc time=877.40ms
Ahf := Matrix(5^3,5^3,(i,j)->(i-j+1)/(i+j), datatype=float[8]);
CodeTools:-Usage(Ahf^2);
memory used=315.14KiB, alloc change=0 bytes, cpu time=30.00ms, real time=63.00ms, gc time=0ns
Of course, many of the linear algebra commands will try to determine if you have a Matrix of low precision floats and will convert to the appropriate datatype automatically. In the next example, Af has datatype=anything, but the result of Af^2 has datatype=float[8] and requires only a small, but noticeable, copy and conversion overhead.
Af := Matrix(5^3,5^3,(i,j)->(i-j+1.)/(i+j));
CodeTools:-Usage(Af^2);
memory used=391.98KiB, alloc change=0 bytes, cpu time=7.00ms, real time=3.00ms, gc time=0ns
We recommend that you specify datatype=float[8] in your constructors explicitly if you intend to perform numeric computations. This makes the numeric nature of the Matrix explicit, and it makes it impossible to accidentally add non-float entries to a Matrix and thus make subsequent computations slower. An exception will be raised if non-numeric entries are assigned into the Matrix.
Ahf[1,1] := sqrt(3);
Error, unable to store '3^(1/2)' when datatype=float[8]
Other numeric types will be automatically converted to float[8].
Ahf[1,1] := 45/111;
Ahf[1,1] := 15/37
Ahf[1,1];
0.405405405405405
If a Matrix contains only floats but does not have a datatype=float[8] restriction, then adding symbolic elements causes the more expensive symbolic commands to be used.
Af[1,1] := sqrt(3);
Af[1,1] := 3^(1/2)
CodeTools:-Usage(Af^2);
memory used=342.66MiB, alloc change=28.00MiB, cpu time=2.76s, real time=2.11s, gc time=894.13ms
interface(rtablesize=sz):
Another advantage of float[8] is that these Matrices are stored in the same way as an hfarray, which is analogous to an array of floats in the C or Fortran programming languages. This differs from a Matrix of datatype=anything or datatype=sfloat, which is an array of Maple DAGs, each of which takes more memory than a single 8-byte float. Note the difference in memory used in the following two examples.
restart;
CodeTools:-Usage(Matrix(10^3,3*10^3,(i,j)->10.^4*j+i, datatype=sfloat));
memory used=114.61MiB, alloc change=22.89MiB, cpu time=2.41s, real time=2.41s, gc time=1.50s
B1:=CodeTools:-Usage(Matrix(10^3,3*10^3,(i,j)->10^4*j+i, datatype=float[8]));
memory used=23.04MiB, alloc change=22.89MiB, cpu time=258.00ms, real time=258.00ms, gc time=0ns
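The difference between packed and boxed storage is not specific to Maple. The following Python sketch (an analogy, not Maple code) compares a packed array of 8-byte doubles, playing the role of datatype=float[8], with a list of individually boxed float objects, playing the role of datatype=sfloat; the names here are illustrative only.

```python
import sys
from array import array

n = 100_000

# Packed storage: one 8-byte double per element (like datatype=float[8])
packed = array('d', [0.0] * n)
packed_bytes = packed.buffer_info()[1] * packed.itemsize

# Boxed storage: a list of distinct Python float objects (like datatype=sfloat),
# each carrying per-object overhead on top of the 8 bytes of payload
boxed = [float(i) for i in range(n)]
boxed_bytes = sys.getsizeof(boxed) + sum(sys.getsizeof(x) for x in boxed)

print(packed_bytes, boxed_bytes)
```

On a typical 64-bit CPython build, the boxed representation needs several times the memory of the packed one, mirroring the sfloat-versus-float[8] gap shown above.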
It is also important to note that elements extracted from a float[8] rtable will be of type hfloat and so hfloat contagion will apply subject to the current setting of UseHardwareFloats.
type(B1[1,1], hfloat);
true
There are also many optimized commands for Matrices of complex hfloats. These Matrices can be created using the option datatype=complex[8], and work similarly to those of datatype=float[8].
If you are constructing very large Matrices in your programs, use the ArrayTools package to construct and copy Matrices as efficiently as possible.
7.5 Writing Efficient Numerical Programs
Two main points to keep in mind when trying to write efficient numerical programs are:
Try to use hardware floating-point arithmetic when Digits allows
Try to minimize memory usage where possible
Writing Flexible Numerical Procedures
You can use the evalhf(Digits) construct to determine whether hardware floating-point arithmetic provides sufficient precision in a particular application. If Digits is less than evalhf(Digits), then you can take advantage of the faster hardware floating-point calculations. Otherwise, you should use software floating-point arithmetic, with sufficient digits, to perform the calculation.
In the following example, the procedure myevalf takes an unevaluated parameter, expr. Without the uneval declaration, Maple would evaluate expr symbolically before invoking myevalf.
myevalf := proc(expr::uneval) if Digits < evalhf(Digits) then evalf(evalhf(expr)); else evalf(expr); end if; end proc:
The evalhf command evaluates many Maple functions, but not all. For example, you cannot evaluate an integral using hardware floating-point arithmetic.
myevalf( Int(exp(x^3), x=0..1) );
Error, (in myevalf) unable to evaluate function `Int` in evalhf
You can improve the procedure myevalf so that it traps such errors and tries to evaluate the expression using software floating-point numbers instead.
myevalf := proc(expr::uneval) if Digits < evalhf(Digits) then try return evalf(evalhf(expr)); catch: end try; end if; return evalf(expr); end proc:
This procedure provides a model of how to write procedures that use hardware floating-point arithmetic whenever possible.
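The same hardware-first-with-fallback pattern appears in other languages. The following Python sketch (an analogy, not Maple code; the function name and the HW_DIGITS constant are illustrative assumptions) computes sqrt(2) with fast hardware doubles when the requested precision allows, and falls back to software arbitrary-precision arithmetic otherwise.

```python
import math
from decimal import Decimal, getcontext

HW_DIGITS = 15  # a double carries roughly 15 significant decimal digits

def sqrt2(digits):
    """Return sqrt(2) to the requested number of digits: use hardware
    floats when they are precise enough, otherwise fall back to
    software (Decimal) arithmetic, like myevalf above."""
    if digits < HW_DIGITS:
        return math.sqrt(2.0)      # hardware path: a single machine operation
    getcontext().prec = digits     # software path: set working precision
    return Decimal(2).sqrt()

print(type(sqrt2(10)))   # hardware float
print(type(sqrt2(30)))   # software Decimal
```

As with evalhf versus evalf, the hardware path is much faster but limited in precision, so the precision check must come first.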
The myevalf procedure returns sfloats. A version that returns hfloats is easiest to write using the hfloat procedure option. This option causes the procedure to use hfloat arithmetic as much as possible, as long as Digits is less than 15. In particular, it converts all floats in the procedure definition into hfloats, and causes evalhf to not convert its output to an sfloat.
myevalf := proc(expr::uneval) option hfloat; if Digits < evalhf(Digits) then try return evalhf(expr); catch: end try; end if; return evalf(1. * expr); end proc:
The multiplication by 1. was added to the evalf return line to induce hfloat contagion, causing the output to be an hfloat when possible.
type( myevalf( Int(exp(x^3), x=0..1) ), hfloat);
true
For more information on the hfloat option, see The hfloat Option or refer to the option_hfloat help page.
Example: Newton Iteration
This section illustrates how to take advantage of hardware floating-point arithmetic to calculate successive approximations using Newton's method. You can use Newton's method to find numerical solutions to equations. As Example: Creating a Newton Iteration describes, if x[n] is an approximate solution to the equation f(x) = 0, then x[n+1], given by the following formula, is typically a better approximation.
x[n+1] = x[n] - f(x[n])/f'(x[n])
The procedure iterate takes a function, f, its derivative, df, and an initial approximate solution, x0, for the equation f(x) = 0. The procedure iterate calculates at most N successive Newton iterations, stopping when the difference between the new approximation and the previous one is small. It prints each successive approximation.
iterate := proc( f::procedure, df::procedure, x0::numeric, N::posint, $ )
    local xold, xnew;
    xold := x0;
    xnew := evalf( xold - f(xold)/df(xold) );
    to N-1 while abs(xnew-xold) > 10^(1-Digits) do
        xold := xnew;
        print(xold);
        xnew := evalf( xold - f(xold)/df(xold) );
    end do;
    return xnew;
end proc:
The following procedure calculates the derivative of f and passes all the necessary information to iterate.
Newton := proc( f::procedure, x0::numeric, N::posint:=15, $ ) local df; df := D(f); print(x0); return iterate(f, df, x0, N); end proc:
Use Newton to solve the equation x^2 - 2 = 0.
f := x -> x^2 - 2;
f := x -> x^2 - 2
Newton(f, 1.5);
1.5
1.416666667
1.414215686
1.414213562
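The iterate/Newton pair above translates directly into other languages. Here is a minimal Python sketch of the same scheme (illustrative only; the function names and the tolerance parameter are this sketch's assumptions, not Maple's API).

```python
def newton(f, df, x0, n=15, tol=1e-9):
    """Newton iteration as in the iterate procedure above: repeat
    x <- x - f(x)/df(x) until successive approximations agree to
    within tol, for at most n steps."""
    xold = x0
    xnew = xold - f(xold) / df(xold)
    for _ in range(n - 1):
        if abs(xnew - xold) <= tol:
            break
        xold = xnew
        xnew = xold - f(xold) / df(xold)
    return xnew

root = newton(lambda x: x*x - 2, lambda x: 2*x, 1.5)
print(root)  # converges toward sqrt(2) ~ 1.41421356...
```

Starting from 1.5, the iterates match the sequence printed by the Maple version above.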
This version of Newton uses sfloats unless the arguments passed in are hfloats. If you add option hfloat to the procedure iterate, then hfloats are used automatically, provided the value of Digits is small enough.
iterate := proc( f::procedure, df::procedure, x0::numeric, N::posint, $ )
    option hfloat;
    local xold, xnew;
    xold := 1. * x0;
    xnew := 1. * evalf( xold - f(xold)/df(xold) );
    to N-1 while abs(xnew-xold) > 10^(1-Digits) do
        xold := xnew;
        print(xold);
        xnew := evalf( xold - f(xold)/df(xold) );
    end do;
    return xnew;
end proc:
Now the procedure Newton will return hfloats instead of sfloats when Digits is less than 15.
type( Newton(f, 1.5), hfloat);
1.41666666666667
1.41421568627451
1.41421356237469
true
In this case, the procedure is simple enough that we can go beyond option hfloat and use the evalhf command to achieve the best performance. This next version of Newton uses evalhf for floating-point arithmetic when possible, and reverts to sfloats otherwise. Since iterate only tries to find a solution to an accuracy of 10^(1-Digits), Newton uses evalf to round the result of the hardware floating-point computation to an appropriate number of digits.
Newton := proc( f::procedure, x0::numeric, N::posint:=15, $ )
    local df;
    df := D(f);
    print(x0);
    if Digits < evalhf(Digits) then
        try
            return evalf( SFloat( evalhf( iterate(f, df, x0, N) ) ) );
        catch:
        end try;
    end if;
    return evalf( SFloat( iterate(f, df, x0, N) ) );
end proc:
Newton uses hardware floating-point arithmetic for the iterations and rounds the result to software precision. The hardware floating-point numbers printed during the iteration carry more digits than software floating-point numbers do at the current setting of Digits.
Newton(f, 1.5);
1.5
1.41666666666666674
1.41421568627450989
1.41421356237468987
1.41421356237309515
1.414213562
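The final rounding step, from an 18-digit hardware result down to software precision, can be sketched in Python (an illustrative helper mimicking the evalf(SFloat(...)) call above, not a Maple command).

```python
def round_sig(x, digits=10):
    """Round a hardware double to the given number of significant
    decimal digits, mimicking the evalf(SFloat(...)) rounding in the
    Newton procedure above."""
    return float(f"{x:.{digits}g}")  # %g formats to significant digits

print(round_sig(1.4142135623730951))  # 1.414213562
```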
Next, use Newton to find a root of the following Bessel function.
F := z -> BesselJ(1, z);
F := z -> BesselJ(1, z)
Newton(F, 4);
4
3.82649352308792201
3.83170246715760410
3.83170597020591108
3.83170597020751247
3.831705970
Hardware floating-point arithmetic is used here because evalhf is able to evaluate BesselJ directly.
evalhf( BesselJ(1, 4) );
−0.0660433280235491332
Using a try-catch block (as in the previous Newton procedure) allows the procedure to work when evalhf fails.
The previous Newton procedure prints many digits while it is trying to find a ten-digit approximation. The reason is that the print command is located inside the iterate procedure, which runs inside a call to evalhf, where all numbers are hardware floating-point numbers and are printed as such.
Example: Jacobi Iteration
Jacobi iteration is an iterative method for numerically solving systems of linear equations whose matrices are diagonally dominant (meaning that, in each row, the magnitude of the diagonal element exceeds the sum of the magnitudes of the other elements in that row). Given an initial guess x[0] for the solution of A . x = b, the process is: if x[k] is an approximation for the solution, then the next approximation is x[k+1] = S^(-1) . (b - R . x[k]), where S is the diagonal of A and A = S + R.
The procedure Jacobi is a straightforward implementation of Jacobi iteration as it is usually presented in a numerical analysis course.
Jacobi := proc( A::Matrix(numeric), b::Vector(numeric), x0::Vector(numeric):=b,
                MAXIter::posint:=25,
                tolerance::positive:=evalf(LinearAlgebra:-Norm(b,2)*10^(1-Digits)), $ )
    local i, j, k, x_old, x_new, s, residual, n;
    x_new := evalf(x0);
    n := LinearAlgebra:-Dimension(b);
    x_old := Vector(n, 0, rtable_options(x_new));
    residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
    for k to MAXIter while residual > tolerance do
        ArrayTools:-Copy(x_new, x_old);
        for i from 1 to n do
            s := 0;
            for j from 1 to n do
                if i <> j then
                    s := s + A[i,j] * x_old[j];
                end if;
            end do;
            x_new[i] := (b[i] - s) / A[i,i];
        end do;
        residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
    end do;
    if residual > tolerance then
        WARNING("Residual %1 greater than tolerance %2 after %3 iterations",
                residual, tolerance, k-1);
    end if;
    return x_new;
end proc:
Here we construct a random Matrix that is strongly diagonally dominant to test the procedure. Note that, while in practice Jacobi iteration would not be used on dense Matrices, we use dense Matrices in these examples to illustrate some efficiency principles.
N := 25:
M := Matrix(N,N,(i,j)->`if`(i<>j, RandomTools:-Generate(integer(range=-100..100))/1000., RandomTools:-Generate(integer(range=100..10000))/10.),datatype=float);
b := LinearAlgebra:-RandomVector(N,datatype=float);
CodeTools:-Usage( Jacobi(M, b) );
memory used=0.65MiB, alloc change=0 bytes, cpu time=14.00ms, real time=15.00ms, gc time=0ns
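For comparison, the same element-by-element algorithm can be written in a few lines of Python (a sketch for illustration, not a translation of Maple's API; the test matrix below is an assumption chosen to be strictly diagonally dominant).

```python
def jacobi(A, b, x0=None, max_iter=25, tol=1e-9):
    """Plain Jacobi iteration on lists of lists, mirroring the Maple
    procedure above: x_new[i] = (b[i] - sum_{j != i} A[i][j]*x_old[j]) / A[i][i].
    Assumes A is strictly diagonally dominant so the iteration converges."""
    n = len(b)
    x = list(x0) if x0 is not None else list(b)
    for _ in range(max_iter):
        x_new = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        if max(abs(u - v) for u, v in zip(x_new, x)) <= tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant test system with exact solution [1, 1, 1]
A = [[10.0,  1.0,  2.0],
     [ 1.0, 12.0, -1.0],
     [ 2.0, -1.0, 15.0]]
b = [13.0, 12.0, 16.0]
print(jacobi(A, b))  # close to [1, 1, 1]
```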
The code is written so that it automatically works with software floats at higher settings of Digits.
Digits := 50:
CodeTools:-Usage( Jacobi(M, b) );
memory used=18.71MiB, alloc change=37.00MiB, cpu time=202.00ms, real time=158.00ms, gc time=94.99ms
This implementation works well for small Matrices, but when the dimension becomes large, it becomes very slow.
N := 500:
Digits := 10:
M := Matrix(N,N,(i,j)->`if`(i<>j, RandomTools:-Generate(integer(range=-100..100))/1000., RandomTools:-Generate(integer(range=100..10000))/10.),datatype=float):
b := LinearAlgebra:-RandomVector(N,datatype=float):
CodeTools:-Usage( Jacobi(M, b) );
memory used=142.14MiB, alloc change=8.00MiB, cpu time=2.20s, real time=1.83s, gc time=701.62ms
Adding option hfloat to Jacobi is not likely to increase performance, since hfloat contagion from the float[8] Matrix elements means that hfloat arithmetic is likely being used everywhere possible already. However, it is possible to rewrite the inner loops as a procedure that can be evaluated with evalhf. (It might be possible to rewrite all of Jacobi so that it can be evaluated with evalhf, but doing so would be difficult and the potential gains would be modest.)
JacobiHelper := proc(A, b, x_old, x_new, n)
    local s, i, j;
    option hfloat;
    # this procedure acts by side effect on x_new
    for i from 1 to n do
        s := 0;
        for j from 1 to n do
            if i <> j then
                s := s + A[i,j] * x_old[j];
            end if;
        end do;
        x_new[i] := (b[i] - s) / A[i,i];
    end do;
end proc:
Here is the rest of the procedure, with option hfloat added.
Jacobi := proc( A::Matrix(numeric), b::Vector(numeric), x0::Vector(numeric):=b,
                MAXIter::posint:=25,
                tolerance::positive:=evalf(LinearAlgebra:-Norm(b,2)*10^(1-Digits)), $ )
    option hfloat;
    local k, x_old, x_new, residual, n;
    x_new := evalf(x0);
    n := LinearAlgebra:-Dimension(b);
    x_old := Vector(n, 0, rtable_options(x_new));
    residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
    for k to MAXIter while residual > tolerance do
        ArrayTools:-Copy(x_new, x_old);
        # JacobiHelper acts by side effect on x_new
        if Digits <= evalhf(Digits) then
            evalhf( JacobiHelper(A, b, x_old, x_new, n) );
        else
            JacobiHelper(A, b, x_old, x_new, n);
        end if;
        residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
    end do;
    if residual > tolerance then
        WARNING("Residual %1 greater than tolerance %2 after %3 iterations",
                residual, tolerance, k-1);
    end if;
    return x_new;
end proc:
CodeTools:-Usage( Jacobi(M, b) );
memory used=133.59KiB, alloc change=0 bytes, cpu time=229.00ms, real time=229.00ms, gc time=0ns
Using evalhf here achieves an impressive speed-up, but you can do even better by taking advantage of the built-in Matrix and Vector operations. In general, your code will be faster if you can replace nested loops with calls to external commands for Vectors or Matrices. Those commands are highly optimized for your platform, taking advantage of multiple cores and the cache hierarchy where possible.
Jacobi := proc( A::Matrix(numeric), b::Vector(numeric), x0::Vector(numeric):=b,
                MAXIter::posint:=25,
                tolerance::positive:=evalf(LinearAlgebra:-Norm(b,2)*10^(1-Digits)), $ )
    local k, x_new, S, S_inv, residual;
    x_new := evalf(x0);
    S := LinearAlgebra:-Diagonal(A, datatype=float);
    S_inv := 1 /~ S;
    residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
    for k to MAXIter while residual > tolerance do
        # computing R.x as A.x - S*~x is probably a bad idea numerically,
        # but we do it anyway to avoid making the code overly complicated
        x_new := S_inv *~ (b - (A . x_new - S *~ x_new));
        residual := evalf(LinearAlgebra:-Norm(A . x_new - b, 2));
    end do;
    if residual > tolerance then
        WARNING("Achieved tolerance of only %1 after %2 iterations",
                residual, k-1);
    end if;
    return x_new;
end proc:
CodeTools:-Usage( Jacobi(M, b) );
memory used=261.48KiB, alloc change=0 bytes, cpu time=4.00ms, real time=3.00ms, gc time=0ns
This sort of speed-up is typical. The built-in numerical linear algebra commands are easily an order of magnitude faster than loops run in Maple, and generally also faster than loops in evalhf.
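The same loops-to-array-operations transformation applies outside Maple. This NumPy sketch (illustrative, with an assumed diagonally dominant test matrix) implements the vectorized update x <- (b - (A . x - S*x)) / S from the final Maple version: one optimized matrix-vector product per iteration instead of two nested interpreted loops.

```python
import numpy as np

def jacobi_np(A, b, max_iter=25, tol=1e-9):
    """Vectorized Jacobi via the splitting A = S + R (S the diagonal):
    each step computes x <- (b - (A @ x - S*x)) / S using whole-array
    operations. Assumes A is strictly diagonally dominant."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    S = np.diag(A)              # diagonal entries of A
    x = b.copy()
    for _ in range(max_iter):
        x_new = (b - (A @ x - S * x)) / S
        if np.max(np.abs(x_new - x)) <= tol:
            return x_new
        x = x_new
    return x

# A strictly diagonally dominant test system with exact solution [1, 1, 1]
A = np.array([[10.0, 1.0, 2.0], [1.0, 12.0, -1.0], [2.0, -1.0, 15.0]])
b = np.array([13.0, 12.0, 16.0])
print(jacobi_np(A, b))  # close to [1, 1, 1]
```

As in Maple, the array operations dispatch to compiled BLAS-style kernels, which is where the order-of-magnitude gain comes from.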