With 700+ participants and 50+ speakers from around the world, we are thrilled to say that this year's online event was a success! Better still, all sessions are now available to watch on YouTube.
From airport scheduling to optimal search problems and the allocation of failure-prone assets, many optimisation problems must deal with uncertainty. The talks in this stream use case studies to address these topics; their methodologies include stochastic linear programming, dynamic programming and multi-armed bandits.
This stream offers an engaging look at how optimisation can handle uncertainty. Whether it's an area you work in infrequently or one in which you have just a passing interest, this stream will solidify your grasp of stochastic optimisation and offer new ways of approaching these problems.
At congested airports outside the US, capacity is managed through the allocation of arrival and departure slots. Many optimisation models have been proposed for this task, capturing in detail the regulations governing slot allocation. However, a common weakness of these models is that they do not incorporate the inherent uncertainty in when aircraft arrive and depart, or in how long the corresponding runway movements take.
Ignoring these uncertainties can lead to queueing on the ground and in the air if runway throughput is assumed to be too high, and to overly conservative scheduling if it is assumed to be too low. In this work we propose a new two-stage stochastic linear programming model for slot allocation that incorporates this uncertainty, and we present some preliminary numerical results.
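To give a flavour of the two-stage structure (this is a toy sketch, not the speakers' model, and every number in it is invented): the first-stage decision is how many movements to schedule in a slot, and the second stage charges a per-scenario queueing penalty when realised runway capacity falls short. Writing out the deterministic equivalent gives an ordinary LP:

```python
from scipy.optimize import linprog

# Toy deterministic equivalent of a two-stage stochastic LP (illustrative only).
# First stage: x = movements scheduled in one slot (0 <= x <= demand).
# Second stage: for each capacity scenario s, queue q_s >= x - cap_s is
# penalised at rate lam, weighted by the scenario probability.
demand = 60
caps = [30, 40, 50]        # realised runway capacity in each scenario
probs = [0.3, 0.4, 0.3]    # scenario probabilities
lam = 2.0                  # penalty per expected queued movement

# Variables: [x, q_0, q_1, q_2]; maximise x - lam * E[queue] -> minimise the negative.
c = [-1.0] + [lam * p for p in probs]
# q_s >= x - cap_s, rewritten as x - q_s <= cap_s.
A_ub = [[1.0, -1.0, 0.0, 0.0],
        [1.0, 0.0, -1.0, 0.0],
        [1.0, 0.0, 0.0, -1.0]]
b_ub = caps
bounds = [(0, demand)] + [(0, None)] * 3

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
print(res.x[0])  # scheduled movements balancing throughput against expected queueing
```

With these made-up numbers the optimum schedules 40 movements: pushing beyond the middle scenario's capacity costs more in expected queueing than the extra throughput is worth, which is exactly the trade-off described above.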
Each of the three speakers below will present for 20 minutes.
Many real-life situations, often involving high costs and significant consequences of failure, require locating a hidden object. Efficient search methods are therefore paramount, and the choice of search speed is key. The trade-off between slow, thorough searches and fast searches with an increased risk of overlooking the object is at the core of this research.
A model with two possible search speeds, fast and slow, is used to search for a hidden object in one of several discrete locations. The goal is to discover the object in the minimum expected time by making successive searches of individual locations.
This talk covers the approach for one-speed searches and its extension to two-speed searches, and proposes a heuristic that delivers near-optimal results.
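For the one-speed case, a classical index rule searches next whichever box maximises p_i * q_i / t_i, where p_i is the current probability the object is in box i, q_i the chance a single search there detects it, and t_i the time one search takes. A minimal sketch (with made-up inputs, and not the speakers' code) that computes the expected detection time under this rule:

```python
def expected_search_time(p, q, t, eps=1e-12, max_steps=10_000):
    """Expected time to find the object under the p*q/t index rule.

    p: prior probability the object is in each box
    q: probability a single search of a box detects the object, given it is there
    t: time for one search of each box
    Works with unnormalised posterior masses u_i: a search of box j succeeds
    with probability u_j * q_j, after which u_j shrinks by a factor (1 - q_j).
    """
    u = list(p)
    elapsed, expected = 0.0, 0.0
    for _ in range(max_steps):
        if sum(u) < eps:  # object found almost surely by now
            break
        j = max(range(len(u)), key=lambda i: u[i] * q[i] / t[i])
        elapsed += t[j]
        expected += elapsed * u[j] * q[j]  # P(first detection on this search)
        u[j] *= 1.0 - q[j]
    return expected

# Two equally likely boxes with perfect detection: search the cheaper box first,
# giving E[T] = 0.5 * 1 + 0.5 * 3.
print(expected_search_time([0.5, 0.5], [1.0, 1.0], [1.0, 2.0]))  # 2.0
```

Imperfect detection (q_i < 1) is where the rule earns its keep: failed searches deflate a box's posterior mass, so the policy naturally revisits boxes in a provably efficient order.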
Problems involving the allocation of stochastically failing assets to a set of tasks arise in settings such as search and rescue, area patrol, labour allocation and environmental monitoring.
We address a particular case in which the assets must form a chain to communicate any discoveries back to base. We formulate the problem as a continuous-time, average-reward Markov decision process (MDP) with impulsive controls. We identify several features necessary for a policy to be optimal and use these to formulate a variety of heuristic policies.
We present the results of extensive numerical experiments, finding that a simple half-and-half policy enhanced by a single round of policy iteration provides the best balance between computational cost and effectiveness. We also consider an extension to a non-Markovian setting.
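The "single round of policy iteration" enhancement can be illustrated on a toy problem. This sketch uses a tiny discounted, discrete MDP (not the continuous-time, average-reward setting of the talk, and all numbers are invented): evaluate a heuristic base policy exactly, then take one greedy improvement step against its values.

```python
import numpy as np

# Toy MDP: 2 states, 2 actions (illustrative numbers only).
P = np.array([[[0.9, 0.1],    # action 0: transition matrix
               [0.2, 0.8]],
              [[0.5, 0.5],    # action 1: transition matrix
               [0.6, 0.4]]])
R = np.array([[1.0, 0.0],     # reward for action 0 in each state
              [0.5, 2.0]])    # reward for action 1 in each state
gamma = 0.9

def evaluate(policy):
    """Exact policy evaluation: solve (I - gamma * P_pi) v = r_pi."""
    n = len(policy)
    P_pi = np.array([P[policy[s], s] for s in range(n)])
    r_pi = np.array([R[policy[s], s] for s in range(n)])
    return np.linalg.solve(np.eye(n) - gamma * P_pi, r_pi)

def improve_once(policy):
    """One round of policy iteration: a greedy step against the evaluated values."""
    v = evaluate(policy)
    return np.array([np.argmax(R[:, s] + gamma * P[:, s] @ v)
                     for s in range(len(policy))])

base = np.array([0, 0])       # heuristic base policy: always take action 0
better = improve_once(base)
print(better, evaluate(better))
```

The policy improvement theorem guarantees the new policy's value is at least the old one's in every state, which is why a single round on top of a decent heuristic can be a cheap, effective upgrade.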