Linear Optimization with the Optimization Package: Matrix Form
This worksheet introduces the matrix form of the linear optimization solver LPSolve in the Optimization package. For an introduction to the algebraic form, please refer to the Optimization example worksheet.
In this worksheet, we will be minimizing a linear function (referred to as the objective function) subject to a number of linear constraints. Several maximization examples are included as well; in many of the examples, the maximize option can be added to the command to find the maximum instead of the minimum.
Linear Optimization Primer
A linear optimization problem, often called a linear program or simply an LP, is an optimization problem of the form
minimize      c' . x                 (objective function)
subject to:   A . x ≤ b              (linear inequality constraints)
              Aeq . x = beq          (linear equality constraints)
              bl ≤ x ≤ bu            (bounds)
where x is the vector of problem variables; c, b, beq, bl, and bu are vectors; and A and Aeq are matrices. The relations involving matrices and vectors are understood element-wise, and c' denotes the transpose of the vector c.
The standard form for linear programs has no linear inequality constraints (A = 0, b = 0) and uses the bound x ≥ 0 (that is, bl[i] = 0 and bu[i] = ∞ for all i). The standard form LP is given by
minimize      c' . x                 (objective function)
subject to:   A . x = b              (linear equality constraints)
              x ≥ 0                  (nonnegativity bounds)
Another common form for linear optimization problems is the symmetric form. The symmetric form is a maximization problem that has no linear equality constraints (Aeq = 0, beq = 0) and uses the bound x ≥ 0:
maximize      c' . x                 (objective function)
subject to:   A . x ≤ b              (linear inequality constraints)
              x ≥ 0                  (nonnegativity bounds)
While the symmetric form has only linear inequality constraints, it should be noted that the algorithms used to solve the problem will generally convert the inequality constraints to equality constraints by adding slack variables. For example, the inequality constraints
2 x + 3 y ≤ 4
3 x − 5 y ≤ 9
where
x, y ≥ 0
are converted into equality constraints by adding two new nonnegative variables u and v:
2 x + 3 y + u = 4
3 x − 5 y + v = 9
u, v, x, y ≥ 0
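As a quick check of this equivalence in Maple, the two forms can be solved side by side. This is a minimal sketch: the objective −x − y is an assumption added purely for illustration, since the constraints above were given without an objective.
with(Optimization):
LPSolve(-x - y, [2*x + 3*y <= 4, 3*x - 5*y <= 9], assume = nonnegative);        # inequality form
LPSolve(-x - y, [2*x + 3*y + u = 4, 3*x - 5*y + v = 9], assume = nonnegative);  # equality form with slacks u, v
Both calls report the same optimal objective value; the second also returns the values of the slack variables u and v at the optimum.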
For each inequality constraint in the optimization problem, a slack variable is introduced in order to convert that constraint to an equality constraint. When solving a symmetric form optimization problem, the number of variables actually used in solving the problem is therefore the number of variables in the problem statement plus the number of inequality constraints.
All of the constraints of the problem, including the bounds, define a feasible region for the problem. The feasible region of a linear optimization problem is always a convex polytope, which may or may not be bounded. Since the objective function is also linear, it follows that if a linear optimization problem has an optimal solution, then either the optimal solution is unique or there are infinitely many optimal solutions. If there is a unique optimal solution, it is a corner point of the feasible region; in this case, the line defined by setting the objective function equal to the optimal value intersects the feasible region at a single point. When there is more than one optimal solution, the line defined by setting the objective function equal to the optimal value must intersect the feasible region in at least two points. However, since the feasible region is a convex polytope, this intersection must be an edge or a face of the polytope, and so it contains infinitely many points.
A Simple Example
Consider the two variable linear optimization problem written algebraically:
min  −2 x − 3 y
subject to  −7 x + 4 y ≤ 2.2
            5 x + y ≤ 5
            x, y ≥ 0.
The set of constraint inequalities, including the non-negativity inequalities, is given by
restart:
cnsts := [-7*x + 4*y <= 2.2, 5*x + y <= 5, 0 <= x, 0 <= y];
cnsts := [-7 x + 4 y <= 2.2, 5 x + y <= 5, 0 <= x, 0 <= y]
The set of points that satisfy all of these constraint inequalities defines the feasible region of the problem. The feasible region for this example is shown in the shaded region (including the borders) in the plot below.
with(plots):
feasibleRegion := inequal(cnsts, x = -0.5..2, y = -0.5..2, optionsexcluded = [color = white]):
display(feasibleRegion);
The objective function of the problem is given by
obj := -2*x - 3*y;
obj := -2 x - 3 y
Some contours of the objective function are shown in the following plot along with the location of a point in the feasible region that minimizes the objective function.
contours := contourplot(obj, x = -0.5..2, y = -0.5..2):
optimalPoint := pointplot([[0.659259, 1.70370]], symbolsize = 22, symbol = solidcircle, color = "Niagara Red"):  # optimal point
display(feasibleRegion, contours, optimalPoint);
The example problem can be written in matrix form using the following vectors and matrix:
c := <-2, -3>;
A := <<-7, 5> | <4, 1>>;
b := <2.2, 5>;
c := Vector([-2, -3])
A := Matrix([[-7, 4], [5, 1]])
b := Vector([2.2, 5])
The Vector c corresponds to the objective function obj, the Matrix A is the inequality constraint matrix and holds the coefficients of the inequality constraint functions, and the Vector b corresponds to the constants on the right-hand side of the constraint inequalities.
To solve this problem, we use the LPSolve function of the Optimization package. This function outputs the optimal value (the minimal value of the objective function) followed by a point in the feasible region that achieves this optimal value. The function accepts the LP in either matrix form or algebraic form and is called as follows.
with(Optimization):
solAlg := LPSolve(obj, cnsts);
solMatrix := LPSolve(c, [A, b]);
solAlg := [-6.42962962962963, [x = 0.659259259259259, y = 1.70370370370370]]
solMatrix := [-6.42962962962963, Vector([0.659259259259259, 1.70370370370370])]
Methods for Solving Linear Optimization Problems
Linear optimization solvers are typically based on one of three techniques: the simplex method, an active set method, or an interior point method. While these methods arrive at a solution in different ways, they are all iterative. The algorithms progress by computing a sequence of successively better (or no worse) solutions until either an exact optimal solution is found or a "good enough" solution is found, where "good enough" is defined by acceptable floating-point tolerances.
The first technique that we consider is the simplex method. Invented in 1947 by George Dantzig, the simplex method uses the corner points of the feasible region as its iterates. At each iteration of the algorithm, a neighboring corner point is chosen such that the objective value either improves or does not change. When the algorithm determines that no further step can improve the objective value, it terminates with the current corner point as an optimal solution.
Consider the two-dimensional feasible region in the shaded region of the following plot.
restart: with(plots):
cnsts := [-7*x + 4*y <= 2.2, 2*x + y <= 5, x + 2*y >= 0.75, x - y <= 2, 0.1*x + y <= 2, 0 <= x, 0 <= y];
cnsts := [-7 x + 4 y <= 2.2, 2 x + y <= 5, 0.75 <= x + 2 y, x - y <= 2, 0.1 x + y <= 2, 0 <= x, 0 <= y]
feasibleRegion := inequal(cnsts, x = -0.5..3, y = -0.5..2.5, optionsexcluded = [color = white]):
display(feasibleRegion);
Suppose the objective function for a minimization problem is given by
obj := -y;
obj := -y
Then, since minimizing −y is equivalent to maximizing y, the optimal solution should be the point in the feasible region with the greatest y value. In a simplex-based method, an initial feasible point (a corner point) is first constructed. Suppose we have the initial point
initialPoint := [0.75, 0.0];
initialPoint := [0.75, 0.]
A simplex-based method looks at all corner points neighboring the current point and decides which gives the best improvement in the objective function. If the objective function evaluated at the current point is strictly better than at all neighboring corner points, then the current point is an optimal point and the algorithm can terminate with it. Otherwise, a neighbor that improves the objective value the most (the improvement might be zero) is chosen, and the current point is updated to this point.
In this example, three new points need to be considered before an optimal solution is found. The sequence of points is shown in the following plot.
p1 := [0.75, 0.0]: p2 := [0.0, 0.375]: p3 := [0.0, 0.55]:
p4 := [0.7837837838, 1.921621622]: p5 := [1.578947368, 1.842105263]:
points := pointplot([p1, p2, p3, p4], symbolsize = 25, color = "Niagara Red", symbol = solidcircle):
lines := plot([p1, p2, p3, p4], style = line, thickness = 3):
labels := textplot([[1, 0.2, "starting point"], [1.1, 1.7, "optimal point"], [0.5, 0.375, "second point"], [0.42, 0.575, "third point"]], color = "White"):
display(points, feasibleRegion, labels, lines);
In this example there was a unique solution because the objective function attained its optimal value at a single point of the feasible region. This need not be the case: if the objective function were instead parallel to one of the edges of the feasible region, there would be infinitely many solutions. For example, consider the same feasible region with the objective function
obj2 := 0.1*x + y;
obj2 := 0.1 x + y
In the following plot, the green line shows the contour of the objective function when it is maximized. The two red points are corner points that are optimal solutions to the maximization problem, and every point on the green line between these two points is also optimal. A simplex-based method will return one of the two corner points as the solution to the problem.
obj2line := plot(-0.1*x + 2, x = -0.5..3, thickness = 3, color = "Niagara Green"):
opt2points := pointplot([p4, p5], symbolsize = 25, symbol = solidcircle, color = "Niagara Red"):
display(feasibleRegion, obj2line, opt2points);
In higher dimensions, the contour of the objective function at its optimal value might intersect the feasible region at a single point, along an edge (giving infinitely many optimal solutions), or along a face of the polytope (again giving infinitely many optimal solutions). In higher dimensions, a simplex-based method still always finds a corner point as an optimal solution if one exists.
In an active set method, each iteration considers points that are solutions of a subset of the constraints called the active set. The active set at a given iteration is the current estimate of which constraints are active (satisfied with equality) at an optimal solution. In each iteration, a new point x' = x + Δx is computed from the current point x by finding a step direction Δx. The step direction is determined so that the change in objective value from x to x' is optimized while the constraints in the active set remain active; thus the point x' satisfies the constraints in the current active set with equality.
In each iteration, the active set is also modified until the algorithm finds the active set of an optimal solution.
When the number of constraints is greater than the number of variables in a linear optimization problem, the active set method will consider active sets of size equal to the number of variables.
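As a concrete illustration, the active set method can be requested explicitly with the method option. The following minimal sketch reuses the data from "A Simple Example" above:
with(Optimization):
c := <-2, -3>: A := <<-7, 5> | <4, 1>>: b := <2.2, 5>:
LPSolve(c, [A, b], assume = nonnegative, method = activeset);  # returns a corner point of the feasible region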
In an interior point method, each iteration considers points that are strictly inside the feasible region, with the exception of the initial point, which may be outside the feasible region. In each iteration, a new point x' = x + α Δx is computed from the current point x by finding a step direction Δx. The step direction is chosen to optimize the change in objective value from x to x', and the step length α is chosen to ensure that the new point x' lies strictly inside the feasible region.
For example, using the same feasible region as in the simplex method example above and the objective function obj2, the iterates of an interior point method might look something like those in the following plot. Here, the initial point is infeasible.
restart: with(plots):
cnsts := [-7*x + 4*y <= 2.2, 2*x + y <= 5, x + 2*y >= 0.75, x - y <= 2, 0.1*x + y <= 2, 0 <= x, 0 <= y]:
feasibleRegion := inequal(cnsts, x = -0.5..3, y = -0.5..2.5, optionsexcluded = [color = white]):
q1 := [2.6, 0.3]: q2 := [1.75, 0.1]: q3 := [0.8, 0.5]: q4 := [0.7, 1.1]:
q5 := [0.85, 1.5]: q6 := [1.0, 1.7]: q7 := [1.07, 1.8]: q8 := [1.15, 1.85]:
obj2line := plot(-0.1*x + 2, x = -0.5..3, color = "Niagara Green", thickness = 3):
points := pointplot([q1, q2, q3, q4, q5, q6, q7, q8], color = "Niagara Red", symbolsize = 25, symbol = solidcircle):
lines := plot([q1, q2, q3, q4, q5, q6, q7, q8], color = "Niagara Red", style = line, thickness = 4):
labels := textplot([[2.7, 0.2, "starting"], [2.7, 0.1, "point"], [1.175, 2.175, "optimal"], [1.2, 2.05, "point"], [1.7, 0.3, "second point"], [1.2, 0.575, "third point"]], color = "White"):
display(feasibleRegion, obj2line, labels, lines, points);
In the example given, the interior point method does not find a corner point as its solution to the problem. Instead, it finds a point near the middle of the segment of optimal solutions; this behavior is common with interior point methods. In this small example, an interior point method will most likely require more iterations than a simplex-based method, since the number of corner points that could be visited is very small. In larger problems with higher dimensions, the number of iterations for an interior point method can be much smaller than for a simplex-based method.
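This contrast can be seen directly by solving the degenerate example above with both methods. A hedged sketch, reusing cnsts from the previous plot and maximizing obj2 = 0.1 x + y (which has infinitely many optima along an edge):
with(Optimization):
LPSolve(0.1*x + y, cnsts, maximize, method = activeset);      # returns a corner point
LPSolve(0.1*x + y, cnsts, maximize, method = interiorpoint);  # returns a point on the optimal edge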
References
For more details on the simplex method and interior point methods see:
Vasek Chvatal, Linear Programming, W.H. Freeman, New York, 1983. [Simplex]
Stephen J. Wright, Primal-Dual Interior-Point Methods, Society for Industrial Mathematics, 1997. [Interior Point]
For details of the active set method that is implemented here, see:
P.E. Gill, W. Murray, M.A. Saunders, and M.H. Wright, "Inertia-Controlling Methods for General Quadratic Programming," SIAM Review, Vol. 33 (1991): 1-36. [Active Set]
For details of the interior point method that is implemented here, see:
M. Gonzalez-Lima, H. Wei, and H. Wolkowicz, "A stable primal-dual approach for linear programming under nondegeneracy assumptions," Computational Optimization and Applications, Vol. 44 (2009): 213-247.
LPSolve: The Optimization Package's Linear Optimization Solver
Linear optimization problems are solved using the LPSolve function in the Optimization package.
restart: with(Optimization):
Calling Sequence
LPSolve(c,lc,bd,opts)
Parameters
c -- The objective function of the optimization problem, given as a Vector.
When using the activeset method, the value NoUserValue can be used in place of c when c is a Vector of zeros.
lc -- An optional list which contains the linear constraints of the optimization problem.
lc = [A,b] if there are only inequality constraints,
lc = [A,b,Aeq,beq] if there are both inequality and equality constraints, or
lc = [NoUserValue,NoUserValue,Aeq,beq] if there are only equality constraints.
bd -- An optional list of bounds for the problem.
bd = [bl,bu] when bl ≤ x ≤ bu (see the sketch following this list).
When 0 ≤ x ≤ ∞, we can omit the parameter bd and instead use the option assume=nonnegative. (See opts below.)
opts -- Additional options for the function.
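As a sketch of how these parameters fit together, the following call assembles c, lc, and bd for a small made-up problem (the data, and in particular the bounds 0 ≤ x ≤ 10, are assumptions for illustration only):
with(Optimization):
c := <-2, -3>:                             # objective coefficients
A := <<-7, 5> | <4, 1>>:  b := <2.2, 5>:   # inequality constraints A . x <= b
lc := [A, b]:                              # only inequality constraints
bd := [<0, 0>, <10, 10>]:                  # bounds 0 <= x <= 10 (assumed)
LPSolve(c, lc, bd);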
Options
Options are equations of the form option = value, which specify additional behavior of the function. The options for LPSolve include
assume = nonnegative -- Assume that all variables are non-negative. This can be used to replace the bd parameter when 0 ≤ x ≤ ∞.
feasibilitytolerance = realcons(positive) -- For the active-set method, set the maximum absolute allowable constraint violation. For the interior point method, set the tolerance for the sum of the relative constraint violation and relative duality gap.
infinitebound = realcons(positive) -- For the active-set method, set any value of a variable greater than the infinitebound value to be equivalent to infinity during the computation. This option is ignored when using the interior point method.
initialpoint = Vector -- Use the provided initial point, which is an n-dimensional Vector of numeric values.
iterationlimit = posint -- Set the maximum number of iterations performed by the active-set or interior point algorithm.
maximize = true -- Maximize the objective function when equal to true and minimize when equal to false. The default is maximize = false.
output = solutionmodule -- Return a module as described in the Optimization/Solution help page. This can be used to reveal more information about the solution.
The LPSolve function uses one of two algorithms to solve the given linear optimization problem. The particular method can be specified using the method option.
method = activeset, interiorpoint -- Specify whether to use the active-set or interior point algorithms. If neither method is requested, a heuristic is used to choose the method. In general, the interior point method will be more efficient for large, sparse problems. See the notes below for further details on each algorithm. If the input is given in Matrix form and the constraint matrices have the storage=sparse option set, the interior point method will be used. Otherwise, the heuristic is based on the number of variables, constraints, and the density of the constraint coefficient matrices. Note: This worksheet focuses on using the interiorpoint method for LPSolve.
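The following sketch combines several of these options in one call; the data and the particular option values are placeholders chosen for illustration:
with(Optimization):
c := <-1, 2, -3>:  A := <<1, 0> | <0, 1> | <2, 3>>:  b := <1, 1>:
LPSolve(c, [A, b], assume = nonnegative, method = interiorpoint, iterationlimit = 100, feasibilitytolerance = 1e-8);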
Choosing the activeset or interiorpoint Method
The specific algorithm used by LPSolve can be chosen using the method option as described above. If a method is not specified, a heuristic is used to choose one. The best choice of method depends on the given problem, especially its size, and on the desired output.
The activeset method: The activeset method is a good choice for small- to medium-sized problems. It is also the preferred choice when a corner-point optimal solution is needed. If the number of variables in the problem is greater than the number of constraints, the activeset method will actually use the simplex method to solve the problem.
The interiorpoint method: The interior point method should always be used for large problems. If the problem is both large and sparse (that is, the constraint matrices have relatively few nonzero entries), then the interior point method is much more efficient than the activeset method. Note that the solution obtained by the interior point method will, in general, not be a corner point unless the problem has a unique optimal solution.
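As noted in the method option description above, sparse storage on the constraint matrices steers the heuristic toward the interior point method. A minimal sketch (the data are placeholders):
with(Optimization):
A := Matrix([[1, 0, 2], [0, 1, 3]], storage = sparse, datatype = float):
c := <-1., 2., -3.>:  b := <1., 1.>:
LPSolve(c, [A, b], assume = nonnegative);  # the heuristic selects interiorpoint for sparse input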
Example use of LPSolve
Standard Form Example
maximize      c' . x                 (objective function)
subject to:   Aeq . x = beq          (linear equality constraints)
              x ≥ 0                  (nonnegativity bounds)
restart: with(Optimization):
c := Vector(5, [-7, -2, -6, -1, 3]):
Aeq := Matrix(3, 5, [2, 1, -1, 0, -1, 0, 1, -2, -2, 2, -1, -1, 0, -1, 1]):
beq := Vector(3, [-3, 4, 1]):
c, Aeq, beq;
Vector([-7, -2, -6, -1, 3]), Matrix([[2, 1, -1, 0, -1], [0, 1, -2, -2, 2], [-1, -1, 0, -1, 1]]), Vector([-3, 4, 1])
When the optimization problem does not have any inequality constraints, the linear constraints parameter must include the value NoUserValue (for both A and b) to indicate that there are no inequality constraints.
linearConstraints := [NoUserValue, NoUserValue, Aeq, beq];
linearConstraints := [NoUserValue, NoUserValue, Matrix([[2, 1, -1, 0, -1], [0, 1, -2, -2, 2], [-1, -1, 0, -1, 1]]), Vector([-3, 4, 1])]
Since this is a maximization problem, the maximize option is included.
sol := LPSolve(c, linearConstraints, assume = nonnegative, method = interiorpoint, maximize);
sol := [7.66666666666835, Vector([1.85125017096038e-13, 0.666666666666555, 1.75189383470619e-14, 2.00000000000017, 3.66666666666691])]
The bounds on x could have been explicitly added with the bd parameter.
bd := [Vector(5, 0), Vector(5, Float(infinity))];
bd := [Vector([0, 0, 0, 0, 0]), Vector([Float(∞), Float(∞), Float(∞), Float(∞), Float(∞)])]
sol := LPSolve(c, linearConstraints, bd, method = interiorpoint, maximize);
Symmetric Form
restart: with(Optimization):
c := Vector(5, [5, 2, 3, 10, -4]):
A := Matrix(3, 5, [1, 1, -1, 0, -1, 0, 1, -2, -2, 2, -2, -1, 0, -2, 1]):
b := Vector(3, [-3, 4, 1]):
c, A, b;
Vector([5, 2, 3, 10, -4]), Matrix([[1, 1, -1, 0, -1], [0, 1, -2, -2, 2], [-2, -1, 0, -2, 1]]), Vector([-3, 4, 1])
Here there are no equality constraints and so they can be omitted from the linear constraints list.
sol := LPSolve(c, [A, b], assume = nonnegative, method = interiorpoint);
sol := [-4.00000000000029, Vector([0.999999999999900, 5.16025174910121e-14, 0.999999999999979, 9.10356631435686e-14, 3.00000000000003])]
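To see how many iterations the interior point method needed here, the solutionmodule output (described in detail later in this worksheet) can be requested. A minimal variation of the call above; the name solModSym is ours:
solModSym := LPSolve(c, [A, b], assume = nonnegative, method = interiorpoint, output = solutionmodule):
solModSym:-Results(iterations);  # number of interior point iterations performed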
General Constraints
c := Vector(7, [1, -2, 3, -4, 5, -6, 7]):
A := Matrix(3, 7, [2, 1, -2, 0, -1, 1, 2, 0, 1, -2, -2, 2, 3, -2, -2, -1, 0, -2, 1, -2, -5]):
b := Vector(3, [-3, 4, 1]):
Aeq := Matrix(2, 7, [1, -2, 1, -2, 2, -1, 1, 1, 1, 1, 1, 1, 1, 1]):
beq := Vector(2, [-1, 3]):
c, A, b, Aeq, beq;
Vector([1, -2, 3, -4, 5, -6, 7]), Matrix([[2, 1, -2, 0, -1, 1, 2], [0, 1, -2, -2, 2, 3, -2], [-2, -1, 0, -2, 1, -2, -5]]), Vector([-3, 4, 1]), Matrix([[1, -2, 1, -2, 2, -1, 1], [1, 1, 1, 1, 1, 1, 1]]), Vector([-1, 3])
sol := LPSolve(c, [A, b, Aeq, beq], assume = nonnegative, method = interiorpoint, output = solutionmodule);
sol := module() ... end module
sol:-Results();
[objectivevalue = -1.20000000000000, solutionpoint = Vector([8.42214085022941e-21, 8.61118459966262e-17, 1.60000000000000, 1.20000000000000, 2.78570139632184e-17, 0.200000000000000, 2.80794183247083e-17]), iterations = 10]
General Bounds
c := <-1, -12, -7, -3, -9>:
A := <<1, 2, 0> | <0, 1, 2> | <2, 0, 1> | <3, 2, -2> | <2, 1, 2>>:
b := <3, 2, 3>:
c, A, b;
Vector([-1, -12, -7, -3, -9]), Matrix([[1, 0, 2, 3, 2], [2, 1, 0, 2, 1], [0, 2, 1, -2, 2]]), Vector([3, 2, 3])
First solve the problem with nonnegative bounds x ≥ 0.
LPSolve(c, [A, b], assume = nonnegative, method = interiorpoint);
[-24.0000000761608, Vector([3.33128701033309e-9, 1.33333332181101, 0.999999994850434, 0.333333329275060, 9.34105558242284e-9])]
We can restrict the feasible region by considering only values 0 ≤ x ≤ 1.
Bounds := [Vector(5, 0), Vector(5, 1)];
Bounds := [Vector([0, 0, 0, 0, 0]), Vector([1, 1, 1, 1, 1])]
Since the feasible region is now more restricted, the optimal value can be no better (that is, no smaller) than the one found above with fewer restrictions on x.
LPSolve(c, [A, b], Bounds, method = interiorpoint);
[-21.4000000000000, Vector([7.32115044976156e-15, 1., 1.00000000000000, 0.199999999999998, 0.199999999999999])]
The bounds can be different for each component of x. For example, we can relax some bounds (allowing a negative lower bound) and restrict others.
Bounds := [<-1, -2, 0, 0, -1>, <1, 1.5, 1, 0.75, 0.25>];
Bounds := [Vector([-1, -2, 0, 0, -1]), Vector([1, 1.5, 1, 0.75, 0.25])]
LPSolve(c, [A, b], Bounds, method = interiorpoint);
[-26.7000000013230, Vector([-0.999999999999175, 1.49999999998929, 0.999999999803779, 0.600000000034519, 0.100000000142645])]
Obtaining More Information about the Solution: solutionmodule and infolevel
The output of LPSolve is a list containing the optimal objective value and one solution point (one feasible value of x that achieves the optimal objective value). There are two ways in which more information can be obtained about a solution to a linear optimization problem. We outline these two methods below.
Option: output = solutionmodule
One of the options of LPSolve is output = solutionmodule. With this option set, the return value of LPSolve is a module instead of a list containing just the objective value and a solution point. The returned module has two exports, Settings and Results. Each export is a procedure that accepts a single optional string or name argument; when an argument is provided, the associated value is returned. For example, if the module is assigned to the variable m, then the call m:-Results(objectivevalue) returns the final numeric objective function value. When no argument is provided, all available information is returned as a list of equations.
The Settings procedure accepts any of the following names as an argument: feasibilitytolerance, infinitebound, initialpoint, or iterationlimit. These names correspond to the options of the same name described in the Optimization[Options] help page. If an option is requested that was not set in the call to LPSolve, then an error is returned. The Results procedure accepts as an argument: iterations, objectivevalue, or solutionpoint. The arguments objectivevalue and solutionpoint correspond to the final objective function value and the solution point that LPSolve returns by default. The argument iterations refers to the total number of iterations performed.
Consider the following example of a linear optimization problem with only equality constraints and assuming x ≥ 0.
restart: with(Optimization):
c := <1, -2, 1>:
Aeq := <<2, 2> | <1, -3> | <2, 4>>:
beq := <2, -1>:
c, NoUserValue, NoUserValue, Aeq, beq;
Vector([1, -2, 1]), NoUserValue, NoUserValue, Matrix([[2, 1, 2], [2, -3, 4]]), Vector([2, -1])
A typical call to LPSolve returns the objective value and a solution point:
soln := LPSolve(c, [NoUserValue, NoUserValue, Aeq, beq], assume = nonnegative, method = interiorpoint);
soln := [-1.50000000000241, Vector([1.94561121272915e-14, 0.999999999999992, 0.499999999999985])]
Using the output = solutionmodule option, the return value is instead a module:
solnMod := LPSolve(c, [NoUserValue, NoUserValue, Aeq, beq], method = interiorpoint, assume = nonnegative, output = solutionmodule);
solnMod := module() ... end module
The Results and Settings exports of the module yield
solnMod:-Results();
[objectivevalue = -1.50000000000241, solutionpoint = Vector([1.94561121272915e-14, 0.999999999999992, 0.499999999999985]), iterations = 7]
solnMod:-Settings();
[feasibilitytolerance = NoUserValue, infinitebound = NoUserValue, iterationlimit = 50]
Individual values can be accessed as follows.
solnMod:-Results(objectivevalue);
solnMod:-Results(solutionpoint);
solnMod:-Settings(iterationlimit);
-1.50000000000241
Vector([1.94561121272915e-14, 0.999999999999992, 0.499999999999985])
50
The Settings export can be used to keep track of changes to the default options of LPSolve. It also stores a user-specified initial point if one is given.
solnM2 := LPSolve(c, [NoUserValue, NoUserValue, Aeq, beq], method = interiorpoint, assume = nonnegative, output = solutionmodule, feasibilitytolerance = 0.0001, infinitebound = 101, iterationlimit = 99, initialpoint = <3, 2, 1>);
solnM2 := module() ... end module
solnM2:-Settings();
solnM2:-Results();
[feasibilitytolerance = 0.0001, infinitebound = 101., iterationlimit = 99, initialpoint = Vector([3., 2., 1.])]
[objectivevalue = -1.50000240772923, solutionpoint = Vector([1.94561121285767e-8, 0.999999992217555, 0.499999984435110]), iterations = 6]
Increasing infolevel[Optimization]
The amount of information displayed while LPSolve is computing a solution can be increased by adjusting the infolevel setting of the Optimization package. This is a general technique that can be used with all Maple packages.
Detailed information about the execution of LPSolve is displayed when infolevel[Optimization] is set to 4 or 5.
c := <-1, 2, -3>:
A := <<1, 0> | <0, 1> | <2, 3>>:
b := <1, 1>:
c, A, b;
Vector([-1, 2, -3]), Matrix([[1, 0, 2], [0, 1, 3]]), Vector([1, 1])
The default infolevel value for the Optimization package is 0, which yields the normal (minimal) output when LPSolve is called.
LPSolve(c, [A, b], method = interiorpoint, assume = nonnegative);
[-1.33333333749866, Vector([0.333333335105702, 1.71377252241086e-9, 0.333333331930149])]
Detailed information about each iteration of the interior point method can be displayed by setting the infolevel to 5.
infolevel[Optimization] := 5;
infolevel[Optimization] := 5
LPSolve(c, [A, b], method = interiorpoint, assume = nonnegative);
LPSolve: calling LP solver
LPSolve: using method=interiorpoint
LPSolve: number of problem variables 3
LPSolve: number of slack variables 2
LPSolve: Start of solve. Dimensions of A are 2 x 5
LPSolve: Using solver: STABLE_DIRECT
LPSolve: Number of nonzeros in A: 6
LPSolve: relErr at start is: 251.995024875622
LPSolve: Lip constant is: 20.
LPSolve: System Parameters are:
LPSolve: tau = 0.9998, haveCrossover = 0, usingPurify = false, relErrtoler = 1.1e-08
LPSolve:
LPSolve: iter mu relErr alphaP alphaD sigma itnTime PInf DInf
LPSolve: 0 3.0e+02 2.5e+02 0.8767 1.0000 1.5e-02 0.00 5.0e+02 6.0e+00
LPSolve: 1 3.7e+01 3.2e+01 1.0000 1.0000 3.3e-06 0.00 6.2e+01 0.0e+00
LPSolve: 2 5.4e-01 1.7e+00 1.0000 0.7997 1.2e-02 0.00 8.9e-16 8.9e-16
LPSolve: 3 9.8e-02 2.6e-01 0.0491 1.0000 3.1e-03 0.00 0.0e+00 0.0e+00
LPSolve: 4 1.3e+00 2.9e+00 1.0000 0.9806 1.2e-05 0.00 1.1e-16 8.9e-16
LPSolve: 5 2.5e-02 5.5e-02 0.9998 0.9996 9.1e-05 0.00 1.1e-16 1.3e-15
LPSolve: 6 1.0e-05 2.1e-05 0.9998 0.9998 1.0e-13 0.00 2.2e-16 0.0e+00
LPSolve: 7 2.0e-09 4.3e-09 0.9998 0.9998 1.0e-13 0.00 ------- -------
LPSolve: CPU seconds: average 0.00 seconds per itns on 7 itns, with relErr := 4.3e-09
LPSolve: primal optimal value is : -1.333333327468603e+00
LPSolve: dual optimal value is : -1.333333337498659e+00
LPSolve: No of iterations : 8
LPSolve: rel. min. x vrble is: 3.10200018424348e-009
LPSolve: rel. min. z vrble is: 9.10817627285263e-010
LPSolve: abs. min. x vrble is: 1.03400006691238e-009
LPSolve: abs. min. z vrble is: 2.12524113219043e-009
LPSolve: rel. norm(A.x-b) is: .951619733694216e-16
LPSolve: rel. norm(A^T.y+z-c) is: .951619733694216e-16
LPSolve: gap is : .1003006e-7
LPSolve: relgap is : .429859713518355e-8
LPSolve: relnormxz is : 1.71377252084992e-009
Similarly, more information can be obtained about the activeset method for LPSolve.
infolevel[Optimization] := 5;
LPSolve(c, [A, b], assume = nonnegative);
LPSolve: matrix size (number of entries) = 6, density = .666666666666667
LPSolve: choosing nonsparse method
LPSolve: using method=activeset
LPSolve: number of general linear constraints 2
LPSolve: feasibility tolerance set to 0.1053671213e-7
LPSolve: iteration limit set to 50
LPSolve: infinite bound set to 0.10e21
LPSolve: number of iterations taken 2
LPSolve: final value of objective -1.33333333333333348
[-1.33333333333333, Vector([0.333333333333333, 0., 0.333333333333333])]
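Once the detailed trace is no longer needed, the output level can be restored to the package default of 0:
infolevel[Optimization] := 0;  # restore the default (minimal) output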
Netlib Problems
The Netlib repository, which is a collection of software and test cases for scientific computing, contains a suite of linear optimization test cases. Many of the test cases come from real-world applications and all are difficult to solve in some sense. These test cases are routinely used when testing linear optimization solvers.
One of the smallest Netlib test cases is afiro, which originated from the Systems Optimization Laboratory at Stanford University. The original problem has 28 constraints (some equality and some inequality) and 32 variables. After removing a redundant constraint and converting to standard equality form (by adding slack variables), the problem has 27 constraints and 51 variables. The expected optimal objective value is -464.75314286.
restart: with(Optimization):
fn := cat(kernelopts(datadir), "/help/Optimization/afiro.mpl"):
read fn:
expectedSol := -464.75314286;
expectedSol := -464.75314286
The variable Program holds all the problem data read from the file.
infolevel[Optimization] := 5;
sol := LPSolve(Program[1], Program[2], assume = nonnegative, output = solutionmodule, method = interiorpoint);
LPSolve: number of problem variables 32
LPSolve: number of slack variables 19
LPSolve: Start of solve. Dimensions of A are 27 x 51
LPSolve: Number of nonzeros in A: 102
LPSolve: relErr at start is: 5.13893981160143
LPSolve: Lip constant is: 400.750625000000
LPSolve: 0 5.8e+02 5.1e+00 0.9070 0.8645 6.3e-02 0.00 1.9e+03 4.9e+00
LPSolve: 1 9.4e+01 2.0e+02 1.0000 0.9452 6.4e-02 0.00 1.7e+02 6.6e-01
LPSolve: 2 1.3e+01 5.3e+00 0.7609 0.7982 1.1e-01 0.00 5.7e-14 3.6e-02
LPSolve: 3 4.0e+00 4.5e-01 1.0000 0.7981 1.8e-02 0.00 5.7e-14 7.3e-03
LPSolve: 4 6.3e-01 6.1e-02 0.7551 1.0000 1.1e-01 0.02 2.8e-14 1.5e-03
LPSolve: 5 9.5e-02 1.0e-02 0.9975 0.9998 8.3e-04 0.00 5.7e-14 4.4e-16
LPSolve: 6 2.1e-04 2.3e-05 0.9998 0.9998 1.0e-13 0.00 5.7e-14 4.4e-16
LPSolve: 7 4.1e-08 4.5e-09 0.9998 0.9998 1.0e-13 0.00 ------- -------
LPSolve: CPU seconds: average 0.00 seconds per itns on 7 itns, with relErr := 4.5e-09
LPSolve: primal optimal value is : -4.647531414123108e+02
LPSolve: dual optimal value is : -4.647531435228928e+02
LPSolve: rel. min. x vrble is: 1.02837448398118e-011
LPSolve: rel. min. z vrble is: 3.73541881073328e-012
LPSolve: abs. min. x vrble is: 5.14187241906322e-009
LPSolve: abs. min. z vrble is: 3.73541881078619e-011
LPSolve: rel. norm(A.x-b) is: .442477204527800e-16
LPSolve: rel. norm(A^T.y+z-c) is: .381394492791639e-17
LPSolve: gap is : .2110582e-5
LPSolve: relgap is : .453154644118094e-8
LPSolve: relnormxz is : 4.30684555500199e-010
The relative difference between the solution obtained and the expected solution is
abs((expectedSol - sol:-Results(objectivevalue))/expectedSol);
1.291007945e-9
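The returned point can also be checked against the equality constraints directly. The following sketch assumes that Program[2] is the constraint list [NoUserValue, NoUserValue, Aeq, beq] used in the call above; the residual should be negligibly small:
xopt := sol:-Results(solutionpoint):
LinearAlgebra:-Norm(Program[2][3] . xopt - Program[2][4]);  # residual of Aeq . x = beq (assumes Program[2] holds [NoUserValue, NoUserValue, Aeq, beq])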
A much larger Netlib problem, 80BAU3B, was supplied by Bob Fourer. When all constraints are converted to equality constraints by adding slack variables, the problem has 2262 constraints and 12061 variables. The non-slack variables also have bounds other than 0 and ∞. The expected optimal objective value is 987232.16072.
restart: with(Optimization):
fn := cat(kernelopts(datadir), "/help/Optimization/80bau3b.mpl"):
read fn:
expectedSol := 987232.16072;
expectedSol := 987232.16072
The problem data is saved into variable Program.
Note: This problem requires some time to solve! Uncomment the LPSolve command and the following command to compute the relative difference if you wish to proceed.
infolevel[Optimization] := 5;
#sol := LPSolve(Program[1], [Program[2], Program[3]], [LBDused, UBDused], output = solutionmodule):
#(expectedSol - sol:-Results(objectivevalue))/expectedSol;