This is an implementation of the Simplex algorithm for solving linear programs given in standard form.
The program assumes the linear program is already converted to the standard form as follows: we have

$$\min_x\; c^\top x \quad\text{subject to}\quad Ax = b,\; x \ge 0,$$

where $A$ is an $m \times n$ matrix, $b \in \mathbb{R}^m$, and $c \in \mathbb{R}^n$.
The input specifies the problem data. Once the input is given, the program starts solving, showing each intermediate tableau and the final result.
Starting with the objective function $c^\top x$: each level set of the objective is a hyperplane, and minimizing means sliding this hyperplane across the feasible region as far as it can go.
The feasible region is an intersection of hyperplanes. Just like two lines intersect in a point, or two planes intersect in a line, these hyperplanes generally intersect in faces, and some of these faces are corners.
Intuitively, the solution should be at a corner. If we are not at a corner, then either we can slide further, or, if we cannot, we must be on a face parallel to the objective hyperplane, in which case we can move within the face until we reach a corner.
While geometry gives us an idea of what the solution should look like, algebra is easier to manipulate in a computer.
Suppose we have a feasible solution $x$, so that $Ax = b$ and $x \ge 0$. By interpreting $Ax$ as a linear combination of the columns of $A$, weighted by the entries of $x$, only the columns associated with the non-zero entries of $x$ participate. Now suppose those columns are linearly dependent: there exists a non-zero vector $a$, supported on those entries, such that $Aa = 0$, and therefore $A(x - da) = b$ for every $d$. We must construct $a$ using the columns of the non-zero entries only, because we need to make sure there exists $d > 0$ such that $x - da \ge 0$: the zero entries of $x$ are untouched, and the positive ones stay non-negative for small enough $d$.
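Spelled out, the move along such an $a$ (supported on the positive entries of $x$, with $Aa = 0$; if $a$ has no positive entry, use $-a$ instead) looks like:

$$A(x - da) = Ax - d\,Aa = b \quad\text{for all } d, \qquad d^{*} = \min_{i\,:\,a_i > 0} \frac{x_i}{a_i} \quad\Rightarrow\quad x - d^{*}a \ge 0 \text{ with at least one more zero entry.}$$

So whenever those columns are dependent, we can move to a feasible solution with strictly fewer non-zero entries.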
This shows that the concept of a corner is related to linear dependence among the corresponding non-zero columns. In particular, the columns associated with the non-zero entries of a corner solution must be linearly independent.
Such a solution, one whose non-zero entries correspond to linearly independent columns, is called a basic feasible solution. Note that there are only finitely many of them: each is determined by its set of columns, and there are at most $\binom{n}{m}$ ways to choose a basis of $m$ columns out of $n$.
The idea of the simplex algorithm is to start with a basic feasible solution and work towards optimality.
To get started, we make the following simplifying assumptions, and eventually remove them:
- We are given a basic feasible solution.
- The columns associated with the basis form the identity matrix.
- The objective function does not depend on the basic variables (i.e. the coefficients associated with the basic columns are 0).
We simplify the problem further. Instead of minimizing the objective outright, we ask how it changes when we perturb the current solution. Suppose we increase the value of the non-basic variable $x_j$ by one unit, keeping the other non-basic variables at zero; under the assumptions above, the objective value changes by exactly the coefficient $c_j$.
For this reason, we call these coefficients the reduced costs. The reduced cost of a variable is the amount the objective function will increase if we increase the value of that non-basic variable by 1.
To reduce the objective value below that of the given solution, we change our basis by picking a column with a negative reduced cost to enter the basis.
Suppose we increase the value of the entering variable. Then the solution is no longer feasible, because the equality constraints are violated, so we must adjust the other variables. Looking at each constraint independently, each row is associated with exactly one basic variable, so it is easy to see that by adjusting the values of the existing basic variables we can maintain feasibility; of course, we cannot decrease any of them below zero. So there is a critical value at which we can no longer increase the entering variable, because at least one basic variable reaches 0. That variable's column is the one to leave the basis.
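The critical value can be found with a minimum ratio test. As a sketch (assuming each constraint row is stored as its coefficients followed by the right-hand side; the name `ratio_test` and the tolerance are our own, not taken from the program above):

```python
def ratio_test(rows, col):
    """Pick the leaving row for entering column `col` via the minimum
    ratio test: among rows with a positive entry in that column, choose
    the one minimizing rhs / entry.  Returns None if the column has no
    positive entry, i.e. the entering variable can grow without limit."""
    best_row, best_ratio = None, None
    for i, row in enumerate(rows):
        if row[col] > 1e-9:                 # only positive entries limit the increase
            ratio = row[-1] / row[col]      # right-hand side is the last entry
            if best_ratio is None or ratio < best_ratio:
                best_row, best_ratio = i, ratio
    return best_row
```

The returned row index identifies the basic variable that reaches zero first.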
Now that we have picked a column to enter the basis, we would like the basic columns to still form an identity matrix, and we can do that with a procedure similar to Gaussian elimination: first scale the pivot row so that the pivot entry becomes 1, then subtract a multiple of it from every other row so that the rest of the entering column becomes 0.
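This elimination step can be sketched as follows (the tableau layout, with the right-hand side in the last column and a separate objective row, is an assumption of this sketch, as are the names):

```python
def pivot(rows, cost, r, c):
    """Restore the identity basis after column c enters the basis at row r:
    scale row r so the pivot entry becomes 1, then subtract multiples of it
    from every other row (and from the objective row) to zero out column c."""
    p = rows[r][c]
    rows[r] = [v / p for v in rows[r]]       # pivot entry becomes 1
    for i in range(len(rows)):
        if i != r:
            f = rows[i][c]
            rows[i] = [v - f * w for v, w in zip(rows[i], rows[r])]
    f = cost[c]                              # keep the objective row consistent too
    cost[:] = [v - f * w for v, w in zip(cost, rows[r])]
```

Eliminating the entering column from the objective row as well is what keeps the third assumption (zero cost on basic variables) true after the pivot.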
With that, we have decreased the objective function value while maintaining all the assumptions above, so we can iterate until one of two cases occurs:
- If it happens that the reduced costs are all non-negative, then it is impossible to decrease the objective function value by picking anything to enter the basis. In that case we are done and have found an optimum.
- Or if it happens that we can increase the entering variable indefinitely without any basic variable reaching zero (i.e. the entering column has no positive entry, so every basic variable stays the same or increases), then the linear program is said to be unbounded, because the objective function can be made arbitrarily small.
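Putting the pieces together, the main loop with both stopping cases might look like this (a self-contained sketch, not the program's actual code; the tableau layout and the names are assumptions):

```python
def simplex(rows, cost, eps=1e-9):
    """Run simplex on a tableau that already satisfies the assumptions:
    rows[i] = constraint coefficients followed by the right-hand side;
    cost = objective coefficients followed by a cell that ends up holding
    minus the objective value.  Returns "optimal" or "unbounded"."""
    while True:
        # entering column: the most negative reduced cost
        c = min(range(len(cost) - 1), key=lambda j: cost[j])
        if cost[c] >= -eps:
            return "optimal"                  # all reduced costs non-negative
        # leaving row: minimum ratio test over positive column entries
        r, best = None, None
        for i, row in enumerate(rows):
            if row[c] > eps and (best is None or row[-1] / row[c] < best):
                r, best = i, row[-1] / row[c]
        if r is None:
            return "unbounded"                # entering variable can grow forever
        # pivot: scale row r, eliminate column c everywhere else
        rows[r] = [v / rows[r][c] for v in rows[r]]
        for i in range(len(rows)):
            if i != r:
                f = rows[i][c]
                rows[i] = [v - f * w for v, w in zip(rows[i], rows[r])]
        f = cost[c]
        cost[:] = [v - f * w for v, w in zip(cost, rows[r])]
```

For example, minimizing $-x_1 - 2x_2$ subject to $x_1 + x_2 \le 4$ and $x_2 \le 2$, with slack variables added to reach standard form, ends with `cost[-1] == 6`, i.e. an optimal value of $-6$ at $x_1 = x_2 = 2$.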
In general we are not given a basic feasible solution, so we need to start by finding one. The idea is to introduce artificial variables $y_1, \ldots, y_m$, one per constraint (after flipping the sign of any row with a negative right-hand side, so that $b \ge 0$). We optimize this instead:

$$\min_{x,\,y}\; \sum_{i=1}^{m} y_i \quad\text{subject to}\quad Ax + y = b,\quad x \ge 0,\; y \ge 0.$$

Now obviously the objective is at least zero, $x = 0$, $y = b$ is a basic feasible solution of the enlarged system, and the objective reaches zero exactly when the original constraints can be satisfied.
Applying the simplex algorithm above, the optimal solution either has all artificial variables at zero or not. In case they are all zero, we reach a state where none of the basic columns belongs to an artificial variable (possibly after a few extra pivots) and the artificial variables are all zero. Now we can simply remove all those artificial variables, replace the objective with the original one, minimizing $c^\top x$, and continue with the simplex algorithm as before.
On the other hand, if the artificial variables cannot all be zero, that means there is no way to satisfy all the constraints. In that case we can declare the program infeasible.
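As a sketch, setting up the first phase could look like this (the function name and tableau layout are our own assumptions; rows with a negative right-hand side are flipped so that the artificial variables start out feasible):

```python
def phase_one_tableau(A, b):
    """Build the phase-one tableau: append one artificial variable per row
    (an identity block), flip rows so the right-hand side is non-negative,
    and set the phase-one objective to the sum of the artificial variables.
    Eliminating the artificial (basic) columns from that objective row makes
    the basic-variable coefficients zero, as the assumptions require."""
    m, n = len(A), len(A[0])
    rows = []
    for i in range(m):
        row, rhs = list(A[i]), b[i]
        if rhs < 0:                                      # keep b >= 0
            row, rhs = [-v for v in row], -rhs
        ident = [1.0 if j == i else 0.0 for j in range(m)]
        rows.append(row + ident + [rhs])
    # phase-one cost: coefficient 1 on each artificial column, 0 elsewhere
    cost = [0.0] * n + [1.0] * m + [0.0]
    # subtract each row so the artificial (basic) columns get cost 0
    for row in rows:
        cost = [cv - rv for cv, rv in zip(cost, row)]
    return rows, cost
```

Running the simplex loop on this tableau drives $\sum_i y_i$ down; if the optimum is 0, the artificial columns can be dropped and the original objective restored for the second phase.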
By maintaining the dual variables, which can be read off from the objective row of the final tableau, we also obtain a certificate of optimality.
In case a reduced cost is zero, it normally does not make sense to choose that column to enter the basis, since doing so will not change the objective value; but in the first phase we must choose it if that can be used to drive an artificial variable out of the basis. The goal of the first phase is to drive those variables out, so even when the phase-one objective reaches 0, we still need to make sure all artificial variables are zero and out of the basis.
In case the given constraints are linearly dependent, we will not be able to form an identity matrix out of the basic columns; we must eliminate the redundant constraints in that case.