Adaptive Multicut Aggregation Method for Solving Two-stage Stochastic Convex Programming with Recourse

In this paper we directly extend the adaptive multicut aggregation method of Svyatoslav Trukhanov, Lewis Ntaimo and Andrew Schaefer to two-stage stochastic convex programming problems with recourse. The algorithm is simple to implement, so less computational work is needed, and its convergence can be established.


Introduction
The L-shaped algorithm for stochastic programming problems with recourse (Birge, J. R., & Louveaux, F. V., 1997) generally generates a single cut at each major iteration, while the multicut version of the algorithm (Ruszczynski, A., & Shapiro, A., 2004) allows up to one cut per outcome to be placed at once. Consequently, the L-shaped algorithm tends to require more major iterations than the multicut algorithm, whereas the multicut algorithm accumulates more optimality cuts, so the size of its master problem is relatively large. To overcome the disadvantages of both the L-shaped and multicut algorithms, Svyatoslav Trukhanov, Lewis Ntaimo and Andrew Schaefer (Trukhanov, S., Ntaimo, L., & Schaefer, A., 2007) proposed the adaptive multicut aggregation algorithm for solving two-stage stochastic linear programming with recourse. Their computational results show that the algorithm is more effective than the L-shaped and multicut algorithms.
Two-stage stochastic convex programming problems are difficult to solve. One approach is to transform them into linear programming problems by linearization, but this greatly increases the size of the problems, which again makes them hard to solve. In this paper we directly extend the adaptive multicut aggregation method for two-stage stochastic linear programming to two-stage stochastic convex programming with recourse. The method is simple and its convergence can be established.

Two-Stage Stochastic Convex Programming
A two-stage stochastic convex programming problem with fixed recourse in the extensive form can be given as follows:

min f(x) + Σ_{s∈S} p_s f_s(y_s)
s.t. g_s(x, y_s) ≤ 0, s ∈ S,    (1)
     x ∈ X, y_s ∈ Y_s, s ∈ S,

where s is the scenario (outcome) and p_s its probability. Equivalently, the problem can be written in the implicit form

min { f(x) + Σ_{s∈S} p_s Q_s(x) : x ∈ X },    (2)

where Q_s(x) = min { f_s(y) : g_s(x, y) ≤ 0, y ∈ Y_s } is the recourse function. Here f, f_s, g_s, s ∈ S, are convex functions, and X and Y_s are bounded convex sets.
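The structure of problem (2) can be illustrated numerically. The following is a minimal sketch under assumed toy data (not from the paper): f(x) = x², the recourse function Q_s(x) = (x − d_s)², and X = [0, 10], so both stages are convex and the expected objective has a closed-form minimizer to check against.

```python
# Toy instance of the implicit form (2), under assumed data:
# f(x) = x^2, Q_s(x) = (x - d_s)^2, X = [0, 10].

def expected_objective(x, scenarios):
    """f(x) + sum_s p_s * Q_s(x) for the toy recourse Q_s(x) = (x - d_s)^2."""
    return x**2 + sum(p * (x - d)**2 for p, d in scenarios)

scenarios = [(0.3, 2.0), (0.7, 6.0)]  # (probability p_s, parameter d_s)

# Brute-force search over a grid of X = [0, 10]; setting the derivative
# 2x + sum_s 2 p_s (x - d_s) to zero gives x* = (sum_s p_s d_s) / 2 = 2.4.
grid = [i / 1000 for i in range(10001)]
x_star = min(grid, key=lambda x: expected_objective(x, scenarios))
```

The grid search stands in for a convex solver; any method that minimizes the convex function f(x) + Σ p_s Q_s(x) over X would do.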

Adaptive Multicut Aggregation Method
Trukhanov, Ntaimo and Schaefer (Trukhanov, S., Ntaimo, L., & Schaefer, A., 2007) point out that the adaptive multicut aggregation method has two advantages over the L-shaped and multicut algorithms. First, it uses more information from the subproblems: adding more than one cut at each major iteration avoids 'information loss'. Second, it keeps the size of the master problem relatively small, which requires limiting the number of cuts added to the master problem. The method partitions the sample space S into D aggregates based on some aggregation rules, so each aggregate generates one optimality cut, and D different optimality variables θ_d are introduced. At the k-th iteration, let the scenario set partition be S(k) = {S_1^k, ..., S_{D(k)}^k}, and let F_k denote the set of iteration numbers up to k at which all subproblems are feasible and optimality cuts are generated. The optimality cut at the k-th iteration has the following form:

θ_d ≥ Σ_{s∈S_d^k} p_s [ Q_s(x^k) + (u_s^k)^T (x − x^k) ],  d = 1, ..., D(k),    (3)

where u_s^k ∈ ∂Q_s(x^k) is a subgradient of the recourse function at x^k. If the lower bound f(x^k) + Σ_d θ_d^k attains the current upper bound, the algorithm stops and x^k is optimal; otherwise optimality cuts of the form (3) are added to the master problem.
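The optimality cut (3) can be sketched concretely for an assumed toy recourse Q_s(x) = (x − d_s)² (with subgradient u_s = 2(x^k − d_s)); the cut is returned in intercept/slope form, which is an implementation choice, not the paper's notation.

```python
# A sketch of the aggregated optimality cut (3) for the assumed toy
# recourse Q_s(x) = (x - d_s)^2; the subgradient at x^k is 2*(x^k - d_s).

def optimality_cut(x_k, aggregate):
    """Return (alpha, beta) so the cut reads: theta_d >= alpha + beta * x.

    aggregate is a list of (p_s, d_s) pairs belonging to one aggregate S_d.
    The cut is theta_d >= sum_s p_s [ Q_s(x^k) + u_s * (x - x^k) ].
    """
    alpha, beta = 0.0, 0.0
    for p, d in aggregate:
        q = (x_k - d)**2          # Q_s(x^k)
        u = 2.0 * (x_k - d)       # subgradient of Q_s at x^k
        alpha += p * (q - u * x_k)
        beta += p * u
    return alpha, beta

alpha, beta = optimality_cut(1.0, [(0.3, 2.0), (0.7, 6.0)])
```

By convexity the cut supports the expected recourse function from below and is tight at x^k, which is what makes the stopping test in the text valid.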
At each iteration, the number of aggregates must satisfy 1 ≤ D(k) ≤ |S|. The master problem has 'adaptive' optimality variables θ_d; that is, the number of optimality variables changes during the course of the algorithm. The goal is to let the algorithm use more information at first and then settle on a level of aggregation that leads to faster convergence to the optimal solution. Trukhanov, Ntaimo and Schaefer (Trukhanov, S., Ntaimo, L., & Schaefer, A., 2007) proposed a redundancy threshold δ (0 < δ < 1) as an aggregation rule. In the master problem, optimality cuts that contain 'little' information about the optimal solution are inactive, and these inactive cuts can be aggregated into one cut without information loss. Consider some iteration k: after solving the master problem, the aggregates whose cuts are redundant according to δ are combined to form one aggregate of S(k), and a new optimality cut is generated for it in the next iteration. As a supplement to the redundancy threshold, bounds on the minimum and maximum number of aggregates should be imposed; this prevents the algorithm from degenerating into the L-shaped algorithm (highest level of cut aggregation) or the multicut algorithm (no cut aggregation).
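One plausible reading of the redundancy rule can be sketched as follows; the relative-slack activity test, the names delta and d_min, and the merge-all-inactive policy are assumptions made for illustration, not the paper's exact rule.

```python
# A sketch of one plausible redundancy-threshold rule: after solving the
# master, an aggregate whose cut has relative slack above delta at the
# current master solution is marked inactive, and all inactive aggregates
# are merged into one, subject to a lower bound d_min on the number of
# aggregates. delta, d_min and the slack test are assumptions.

def merge_inactive(aggregates, slacks, delta, d_min):
    """aggregates: list of scenario-index lists; slacks: relative slack of
    each aggregate's cut at the current master solution."""
    active = [a for a, s in zip(aggregates, slacks) if s <= delta]
    inactive = [a for a, s in zip(aggregates, slacks) if s > delta]
    if len(inactive) <= 1 or len(active) + 1 < d_min:
        return aggregates  # nothing to merge, or merging would violate d_min
    merged = [idx for a in inactive for idx in a]  # union of inactive aggregates
    return active + [merged]

new_partition = merge_inactive([[0], [1], [2], [3]],
                               [0.0, 0.5, 0.6, 0.0],
                               delta=0.1, d_min=2)
```

Here aggregates 1 and 2 are inactive and get merged, so the partition shrinks from four aggregates to three, staying within the imposed bounds.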

The Master Problem

The master problem at iteration k is

min f(x) + Σ_{d=1}^{D(k)} θ_d    (5a)
s.t. θ_d ≥ Σ_{s∈S_d^j} p_s [ Q_s(x^j) + (u_s^j)^T (x − x^j) ],  j ∈ F_k,  d = 1, ..., D(k),    (5b)
     (v^j)^T (x − x^j) + w^j ≤ 0,  j ∉ F_k,    (5c)
     x ∈ X,

where (5b) are the optimality cuts and (5c) are the feasibility cuts.
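A minimal numerical sketch of the master problem follows, under assumed toy data f(x) = x² on X = [0, 10]. Each aggregate carries a list of (intercept, slope) optimality cuts; feasibility cuts are omitted because the toy recourse is assumed always feasible (complete recourse), and grid search stands in for a convex solver.

```python
# Sketch of the master problem (5) under assumed toy data: f(x) = x^2,
# X = [0, 10], cuts stored as (alpha, beta) meaning theta_d >= alpha + beta*x.

def solve_master(cuts_per_aggregate):
    def value(x):
        total = x**2  # f(x)
        for cuts in cuts_per_aggregate:
            # At the minimizer, theta_d is driven down onto the pointwise
            # maximum of that aggregate's cuts.
            total += max(a + b * x for a, b in cuts)
        return total
    grid = [i / 1000 for i in range(10001)]  # stand-in for a convex solver
    x = min(grid, key=value)
    return x, value(x)

# One aggregate holding a single assumed cut theta >= 25.4 - 7.6 x:
x_master, lower_bound = solve_master([[(25.4, -7.6)]])
```

The returned objective value is a lower bound on the true optimal value, which is how the algorithm's stopping test compares it against subproblem (upper-bound) information.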

Feasibility Cut
The method of generating feasibility cuts is the same as for stochastic linear programming (Ruszczynski, A., & Shapiro, A., 2004): the second-stage constraints are relaxed by an artificial variable z, and the feasibility subproblem minimizes a norm ‖z‖. If its optimal value is zero, x^k is feasible for problem (2); otherwise a feasibility cut of the form (5c) is generated.
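The feasibility test can be sketched on a hypothetical second-stage constraint y ≥ d − x with 0 ≤ y ≤ cap; for this particular constraint the artificial-variable subproblem min ‖z‖ has the closed-form value max(0, d − x − cap), and the induced cut is x ≥ d − cap. The data names d and cap are assumptions for illustration.

```python
# Sketch of the feasibility test for a hypothetical recourse constraint
# y >= d - x, 0 <= y <= cap. Relaxing it to y + z >= d - x with z >= 0,
# the subproblem min z has optimal value max(0, d - x - cap); a positive
# value triggers a feasibility cut of the form (5c) that cuts off x^k.

def feasibility_cut(x_k, d, cap):
    violation = max(0.0, d - x_k - cap)  # optimal value of min ||z||
    if violation == 0.0:
        return None  # x^k is feasible for the recourse problem
    # cut: d - x - cap <= 0, stored as (alpha, beta) meaning alpha + beta*x <= 0
    return (d - cap, -1.0)

cut = feasibility_cut(0.0, d=4.0, cap=3.0)  # violated: 4 - 0 - 3 = 1 > 0
```

The returned cut (1.0, −1.0) reads 1 − x ≤ 0, i.e. x ≥ 1 = d − cap, exactly the region where the hypothetical recourse problem is feasible.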

Generating the Aggregation S(k)

S(k) is generated from S(k−1) based on the aggregation rules: each element of S(k) is a union of some elements of S(k−1). For example, if S_d^k = S_i^{k−1} ∪ S_j^{k−1}, then at each major iteration the optimality cut is updated by summing the coefficients of the component cuts, and the single optimality variable θ_d replaces θ_i + θ_j. With this update, a new aggregated optimality cut of the form (3) is generated.
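The coefficient update for a union of aggregates can be sketched as follows; the (intercept, slope) cut representation and the numerical values are assumptions carried over from the toy recourse Q_s(x) = (x − d_s)².

```python
# Sketch of the cut update when aggregates are unioned: the coefficients
# of the new aggregated cut are the sums of the component cut
# coefficients, and one variable theta_d replaces the component thetas.

def aggregate_cuts(component_cuts):
    """component_cuts: list of (alpha, beta) cuts, one per merged aggregate.
    Returns the coefficients of the single aggregated cut."""
    alpha = sum(a for a, _ in component_cuts)
    beta = sum(b for _, b in component_cuts)
    return alpha, beta

# For the assumed toy recourse Q_s(x) = (x - d_s)^2 with scenarios
# (p, d) = (0.3, 2.0) and (0.7, 6.0), the single-scenario cuts at x^k = 1
# are theta_i >= 0.9 - 0.6 x and theta_j >= 24.5 - 7.0 x; their union
# gives theta_d >= 25.4 - 7.6 x, the same cut that (3) would generate
# directly for the two-scenario aggregate.
agg_alpha, agg_beta = aggregate_cuts([(0.9, -0.6), (24.5, -7.0)])
```

Summing coefficients is valid because each component cut is itself a probability-weighted sum over its scenarios, so the union's cut is just the sum over the combined scenario set.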

Convergence of the Algorithm
Assumption: There exists a constant C such that ‖u‖ ≤ C for every subgradient u ∈ ∂Q_s(x), x ∈ X, s ∈ S.
Lemma 1 (Ruszczynski, A., & Shapiro, A., 2004): For every s ∈ S, the number of iterations at which the feasibility subproblem has a positive optimal value (so that a feasibility cut is generated) is finite.
Theorem 1: If problem (1) has no feasible solutions, the algorithm stops at Step 4 after finitely many iterations. If problem (2) has feasible solutions, then the method either stops at Step 3 at an optimal solution or generates a sequence of points {x^k} whose accumulation points are optimal.
Proof: Since the master problem is a relaxation of (1), if the method stops at Step 4, the original problem is infeasible.
Also, we always have θ^k ≤ f*, so the method can stop at Step 3 only if x^k is optimal. It remains to analyze the case of infinitely many steps.
The construction and use of feasibility cuts is the same as in the linear case. By Lemma 1, if the problem has no feasible solutions, the method discovers this after finitely many iterations. Moreover, suppose feasible points exist and the method does not stop at an optimal solution. Fix ε > 0 and let K_ε be the set of iterations at which the optimality gap exceeds ε. Consider k1, k2 ∈ K_ε with k1 < k2. The optimality cut generated at x^{k1} is in the master problem from iteration k1 on, so it has to be satisfied at x^{k2}:

θ^{k2} ≥ Q(x^{k1}) + (u^{k1})^T (x^{k2} − x^{k1}).    (17)

On the other hand, since the method does not stop at iteration k2,

θ^{k2} ≤ Q(x^{k2}) − ε.    (18)

By (17) and (18), it must hold that

ε ≤ Q(x^{k2}) − Q(x^{k1}) − (u^{k1})^T (x^{k2} − x^{k1}).

The function Q(x) is subdifferentiable in its domain and X is compact, so by the Assumption there is a constant C such that |Q(x^{k2}) − Q(x^{k1})| ≤ C ‖x^{k2} − x^{k1}‖ and ‖u^{k1}‖ ≤ C, and therefore

‖x^{k2} − x^{k1}‖ ≥ ε / (2C).    (21)

Since X is compact, (21) implies that the set K_ε is finite for every ε > 0, and the assertion follows.

Summary
This paper directly extends the adaptive multicut aggregation method for two-stage stochastic linear programming with recourse to two-stage stochastic convex programming with recourse. The algorithm is described exactly and its convergence proof is given. In the future, more work should be done to find effective aggregation rules and to apply the method to multi-stage stochastic programming.