Branch-and-bound may be conceptually simple, but attempts to implement it can have surprisingly counterintuitive results. Francis McDonnell describes some of the pitfalls.

With one bound he was trapped

Branch-and-bound is primarily designed to deal with linear optimisation problems in which some (or sometimes all) of the variables are required to take integer values. For example, one may insist that a variable Xi is equal to one or zero in an optimal solution to a problem in which Xi = 1 if a factory is to be built, and Xi = 0 if it is not. Now you might think that if the variables were all allowed to lie in a continuous range of values, e.g., Xi ∈ [0,1], this would generally make the problem harder, as the space of possible values for each variable is then infinite instead of finite. However, you would be wrong. The linear optimisation problem can be solved in time polynomial in the size of the input, whereas if the variables are restricted to be integers the problem belongs to a very hard class of problems known as NP-complete.
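The difference can be felt even on a toy 0-1 knapsack problem (the instance below is our own, purely illustrative): the continuous relaxation falls to a simple greedy rule, whereas the integer version already demands some form of search — here, brute-force enumeration.

```python
from itertools import product

# Toy 0-1 knapsack: maximise v.x subject to w.x <= b, x_i in {0,1}.
# (Illustrative numbers of our own, not from the article.)
values, weights, capacity = [10, 6, 5], [4, 3, 3], 5

def lp_relaxation_bound(values, weights, capacity):
    """LP relaxation (x_i in [0,1]): solvable greedily by value/weight ratio."""
    items = sorted(zip(values, weights), key=lambda vw: vw[0] / vw[1], reverse=True)
    bound, room = 0.0, capacity
    for v, w in items:
        take = min(1.0, room / w)       # take as much as fits, fractionally
        bound += take * v
        room -= take * w
        if room <= 0:
            break
    return bound

def integer_optimum(values, weights, capacity):
    """Integer problem: small enough here to enumerate all 0-1 vectors."""
    best = 0
    for x in product((0, 1), repeat=len(values)):
        if sum(xi * wi for xi, wi in zip(x, weights)) <= capacity:
            best = max(best, sum(xi * vi for xi, vi in zip(x, values)))
    return best

print(lp_relaxation_bound(values, weights, capacity))  # relaxation bound: 12.0
print(integer_optimum(values, weights, capacity))      # integer optimum: 10
```

The relaxation takes item 1 fully and a third of item 2, something no integer solution can do, which is why its bound of 12 exceeds the true optimum of 10.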

Another way in which your intuition might fail you is by thinking that finding a solution during branch-and-bound with an objective value equal to the true optimum means that you have found an integer solution. Wrong again! Some problems can require a large number of nodes to solve even if the solution to the linear programme has the same value as the solution to the integer programme, i.e., the duality- or integrality-gap is zero. Some problems of this type exist in the standard test set MIPLIB 3.0 [2] (try, e.g., enigma).

One might also hope that if a variable Xi takes an integral value at a node in the branch-and-bound tree then that means that Xi has found its correct integer value and it will stay at that value thereafter in all descendant nodes. No! It can move away from this value (see, e.g., [9]) and possibly end up at another integer value. However, such thinking is not without value and it forms part of a heuristic, described in [4], for finding an integer solution.
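This behaviour is easy to reproduce on a toy knapsack instance (again our own numbers) if the relaxation at each node is solved greedily: a variable that is integral at the root can turn fractional after a branch on a different variable.

```python
# Greedy solution of the knapsack LP relaxation with some variables fixed.
# (Toy instance of our own, purely to illustrate the point in the text.)
values, weights, capacity = [10, 6, 5], [4, 3, 3], 5

def relax_solution(fixed):
    """Return the full relaxation solution given fixed 0/1 variables."""
    x = dict(fixed)
    room = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    order = sorted((i for i in range(3) if i not in fixed),
                   key=lambda i: values[i] / weights[i], reverse=True)
    for i in order:
        x[i] = min(1.0, max(room, 0) / weights[i])   # fractional take
        room -= x[i] * weights[i]
    return [x[i] for i in range(3)]

print(relax_solution({}))       # root: x1 = 1.0, already integral
print(relax_solution({1: 1}))   # branch x2 up: x1 drops to 0.5
```

At the root x1 sits at 1, yet fixing x2 to 1 leaves only two units of capacity, so the relaxation now takes x1 only half-way, and in the optimal integer solution of that subtree x1 ends up at 0.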

One might also suppose that the larger the model, the longer it will take to solve it, and therefore that the idea of appending additional constraints could be dismissed as tending to slow down the solution of the problem. In fact, appending extra inequalities to the model is the principal idea of cutting plane techniques and the in-vogue branch-and-cut approach which is proving very successful for many problems; for an example, see [6]. In short, if the constraints added are well chosen, adding them can help.
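A minimal illustration of a well-chosen extra constraint is a cover inequality for a single knapsack constraint (the numbers below are our own): the cut excludes no integer solution, yet it removes fractional points that the relaxation would otherwise allow.

```python
from itertools import product

# Knapsack constraint 5x1 + 5x2 + 5x3 <= 7 with x_i in {0,1}.
# Any two items together weigh 10 > 7, so the cover inequality
# x1 + x2 + x3 <= 1 is valid for every integer solution.
# (Toy instance of our own; the article gives no numerical example.)

def knapsack_ok(x):
    return 5 * sum(x) <= 7

def cut_ok(x):
    return sum(x) <= 1

# The cut removes no integer solution ...
assert all(cut_ok(x) for x in product((0, 1), repeat=3) if knapsack_ok(x))

# ... but it does cut off fractional points the relaxation allows:
# x = (0.46, 0.46, 0.46) gives 5 * 1.38 = 6.9 <= 7, yet sum(x) = 1.38 > 1.
frac = (0.46, 0.46, 0.46)
print(knapsack_ok(frac), cut_ok(frac))  # True False
```

Tightening the relaxation in this way is exactly what cutting-plane and branch-and-cut methods automate, generating such inequalities only when a fractional solution violates them.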

Separating one constraint into many might also seem to be a recipe for disastrously slow computing times, but a technique that involves this, called disaggregation [10], has been shown to be very successful.

Branch-and-bound as it stands will only provide an optimal solution. For any PROLOG programmers expecting to see a complete list of optimal solutions, this may be disappointing. The situation can be remedied (with extra programming) using some solvers, such as SCICONIC, that allow the user greater control over the solution process, although an approach based on Boolean algebra, described in [1], might appeal to them more.

With some types of optimisation problems, e.g., linear programming problems, you might feel you can breathe a sigh of relief once an optimal solution is found, as it is computationally easy to confirm that it is optimal. Branch-and-bound, however, may not be able to determine whether or not a solution is optimal until long after it has found it.

It is also very difficult to predict how long branch-and-bound will take to solve a problem. Statistics such as the number of variables and constraints, and even the density of the constraint matrix, can mean little.

Subtleties such as these are learnt early on, and one can be lulled into a feeling of having mastered the non-intuitive aspects of branch-and-bound. However, more intellectual traps lie in wait for all but the most careful of thinkers.

For example, one might believe that improving the linear-programming optimum of the problem, or equivalently reducing the integrality gap, is always a good idea. This is perhaps encouraged by research papers in which the author(s) announce how much the integrality gap is closed by a reformulation process and how this shows the quality of the reformulation. Whilst the belief is supported by most results, with some problems reducing the integrality gap can result in the programme taking longer to solve, even though the feasible region of the linear-programming relaxation is strictly a contraction of the original feasible region of the relaxation. As observed in [8], "This shows that, at this moment, we do not have a clear understanding of how the different techniques embedded in state-of-the-art mixed integer optimisers interact." More unsettling still, with such a contracted feasible region the problem can take longer to solve not just with some very unlucky or pathological objective function, but averaging over a wide range of objective functions [5]. Thus, whilst most techniques for reformulating a linear integer programme aim to reduce the size of the LP-feasible region corresponding to its continuous relaxation, and doing so does appear to be helpful most of the time, don't ever suppose that this always holds.

Neither should you believe that a reformulation that enables a problem to be solved more easily with one software package will necessarily reduce solution times with another: the opposite can occur, again even averaging over a range of objectives [5]. In general, however, a reformulation that is very successful with one solver will be successful across a range of solvers.

Another effective method of reformulation, simply replacing a constraint with one involving smaller coefficients [3], manages to side-step some of these issues since it says nothing about whether the new feasible region of the continuous relaxation is a contraction of the original or not. However, again the complexities of branch-and-bound are such that this conceptually simple strategy is not always the best way to reformulate a constraint for branch-and-bound, even considering it in isolation [5].
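A small example of coefficient reduction (the numbers are our own, not taken from [3]) can be checked by enumeration: two constraints with very different coefficients exclude exactly the same 0-1 points.

```python
from itertools import product

# Coefficient reduction on a 0-1 constraint: over x1, x2 in {0,1},
#   13*x1 + 7*x2 <= 17   and   x1 + x2 <= 1
# exclude exactly the same 0-1 point (only (1,1) violates either),
# so the second is a valid reformulation with much smaller coefficients.
# (Toy instance of our own, not an example from [3].)

original = lambda x1, x2: 13 * x1 + 7 * x2 <= 17
reduced  = lambda x1, x2: x1 + x2 <= 1

assert all(original(*x) == reduced(*x) for x in product((0, 1), repeat=2))

# In this particular instance the reduced form also happens to tighten the
# LP relaxation: (1, 4/7) satisfies the original but not the reduced form.
print(original(1, 4 / 7), reduced(1, 4 / 7))  # True False
```

As the text notes, the technique guarantees only the equivalence over the integer points; whether the continuous feasible region shrinks, as it does here, depends on the instance.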

It may help in understanding branch-and-bound to realise that, despite its being known as an exact method (in the sense that it is able to find the optimal solution and prove its optimality, or show that there is no solution), it contains beneath the surface a number of heuristics. Heuristics are used to decide on the node to consider next, which variable (or other type of entity - see, e.g., the SCICONIC package) to branch on, and whether to branch down on the variable before branching up.
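The skeleton below makes these hidden heuristic decisions explicit. It is a sketch on a toy knapsack instance of our own devising, not the code of any actual solver: node selection is best-bound-first via a heap, the branching variable is the item the relaxation takes fractionally, and the down branch is created first.

```python
import heapq

# Minimal branch-and-bound for a 0-1 knapsack, exposing the heuristic
# choice points named in the text. (Toy instance and conventions of our
# own; real solvers use far more sophisticated rules.)
values, weights, capacity = [10, 6, 5, 4], [4, 3, 3, 2], 7

def relax(fixed):
    """Greedy bound for the LP relaxation with some variables fixed.
    Returns (bound, index of the fractional item, or None if integral)."""
    room = capacity - sum(weights[i] for i, v in fixed.items() if v == 1)
    if room < 0:
        return float('-inf'), None                 # infeasible node
    bound = sum(values[i] for i, v in fixed.items() if v == 1)
    free = sorted((i for i in range(len(values)) if i not in fixed),
                  key=lambda i: values[i] / weights[i], reverse=True)
    for i in free:
        if weights[i] <= room:
            bound += values[i]
            room -= weights[i]
        else:
            return bound + values[i] * room / weights[i], i
    return bound, None                             # relaxation is integral

best_value = float('-inf')
tie = 0                                            # unique tie-breaker for the heap
heap = [(-relax({})[0], tie, {})]                  # max-heap on the bound
while heap:
    neg_bound, _, fixed = heapq.heappop(heap)      # heuristic 1: best bound first
    if -neg_bound <= best_value:                   # cannot beat the incumbent
        continue
    bound, frac = relax(fixed)
    if frac is None:                               # integral: new incumbent
        best_value = max(best_value, bound)
        continue
    for branch in (0, 1):                          # heuristics 2 & 3: branch on the
        child = dict(fixed)                        # fractional item, down first
        child[frac] = branch
        b, _ = relax(child)
        if b > best_value:                         # prune dominated children
            tie += 1
            heapq.heappush(heap, (-b, tie, child))

print(best_value)  # optimal value for this instance
```

Swapping any of these three rules (say, depth-first node selection, or branching up first) changes the order in which nodes are developed, and hence the running time, without affecting the final answer.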

These heuristics are sometimes constructed in such a way that other strange phenomena occur. For example, many computer codes for performing branch-and-bound allow the user to supply a solution value (perhaps obtained using a heuristic method) at the start. This can be used to discard nodes that will not lead to a better solution, and so it might seem to be helpful (or at least do no harm) in all cases. However, the value of this solution can in some codes influence the order in which nodes are considered for further development (branching), potentially (though not usually) slowing down branch-and-bound [7]! The relatively recent branch-and-cut technique can exhibit similarly strange behaviour, with the added complication that adding cuts can slow the solution process if they are not sufficiently useful. So, it seems that whilst there are useful rules of thumb about the behaviour of branch-and-bound and its relatives, there are few certainties about how they will perform on problems and what the effect of various seemingly reasonable reformulations of a problem will be.

  1. P. Barth, A Davis-Putnam Based Enumeration Algorithm for Linear Pseudo-Boolean Optimization, Report MPI-I-95-2-003, Max-Planck-Institut für Informatik, January 1995.
  2. R.E. Bixby, S. Ceria, C.M. McZeal and M.W.P. Savelsbergh, An Updated Mixed Integer Programming Library: MIPLIB 3.0, Technical Report TR98-03, February 1998. Available at www.caam.rice.edu/~bixby/miplib/miplib.html.
  3. G.H. Bradley, P.L. Hammer and L. Wolsey, Coefficient reduction for inequalities in 0-1 variables, Mathematical Programming, 7 (1974), 263-282.
  4. K.L. Hoffman and M. Padberg, Solving Airline Crew Scheduling Problems by Branch-and-cut, Management Science, 39 (1993), 657-682.
  5. F.J. McDonnell, Inequalities with small coefficients and the reformulation of integer programmes, PhD thesis, Loughborough University, September 1998.
  6. M. Padberg and G. Rinaldi, A branch-and-cut algorithm for the resolution of large-scale symmetric travelling salesman problems, Siam Review, 33 (1991), 60-100.
  7. C.T. Ragsdale and G.W. Shapiro, Incumbent solutions in branch-and-bound algorithms: setting the record straight, Computers and Operations Research, 23 (1996), 419-424.
  8. M.W.P. Savelsbergh, Preprocessing and Probing Techniques for Mixed Integer Programming Problems, ORSA Journal on Computing, 6 (1994), 445-454.
  9. R. Simons, Scheduling the parallel way, OR Newsletter No.304 (April 1996), 14-17.
  10. H.P. Williams, Experiments in the formulation of integer programming problems, Mathematical Programming Study, 2 (1974), 180-197.
First published to members of the Operational Research Society in Inside O.R. March 2000