Archive: December 2016 (Features)


Monday, 19 Dec 2016
Rob Chidley

by Gavin Blackett, OR Society Secretary and General Manager

It is 80 years since Alan Turing first raised the concept of a universal machine and 66 years since he described the ‘imitation game’, in which a person has to decide whether written answers to questions were generated by a human or a machine. In 2015, the Alan Turing Institute (ATI) was formed as a partnership between the Engineering and Physical Sciences Research Council (EPSRC) and five universities (Cambridge, Edinburgh, Oxford, UCL and Warwick) to ‘make great leaps in data science research in order to change the world for the better’ (its mission statement). The Institute has over 150 researchers and has formed strategic partnerships with Lloyd’s Register, GCHQ, Intel and HSBC.

ATI’s director, Professor Andrew Blake, gave the 2016 Blackett Memorial Lecture at Methodist Central Hall in Westminster, just a stone’s throw from the Institute’s base in the British Library. His thought-provoking title was ‘Machines that learn: big data or explanatory models?’.

The main thrust of his talk was the common conflict, or decision, faced by modellers (depending on the circumstances, obviously): whether to use an empirical classifier or some form of generative model (which Andrew also referred to as analysis by synthesis). Andrew used examples, including painful ones from his own background, to illustrate the struggle between the two approaches. The first examples included the Netflix challenge and face recognition software. In 2006, Netflix offered a prize of $1m to help design an algorithm to make film recommendations to its users (if you enjoyed Groundhog Day, you’ll love …). In the case of face recognition software, the efficient, black-box approach of learning from masses of examples won out, and as we all know, for a number of years even the humblest of digital cameras has been making use of this to identify faces and help the camera user frame their shot.
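No code was shown on the night, but the distinction is easy to sketch. In the minimal illustration below (synthetic data, arbitrary numbers), a discriminative, ‘empirical’ classifier such as logistic regression learns the decision boundary directly from examples, while a generative model such as Gaussian naive Bayes fits a density per class and can therefore also synthesise new examples – the ‘analysis by synthesis’ view:

    import numpy as np
    from sklearn.linear_model import LogisticRegression   # empirical classifier
    from sklearn.naive_bayes import GaussianNB            # simple generative model

    rng = np.random.default_rng(0)
    X0 = rng.normal([0.0, 0.0], 1.0, size=(500, 2))       # class-0 examples
    X1 = rng.normal([2.0, 2.0], 1.0, size=(500, 2))       # class-1 examples
    X = np.vstack([X0, X1])
    y = np.repeat([0, 1], 500)

    # the discriminative model learns the decision boundary directly ...
    disc = LogisticRegression().fit(X, y)
    # ... while the generative model fits p(x | class) and applies Bayes' rule
    gen = GaussianNB().fit(X, y)

    print(disc.score(X, y), gen.score(X, y))
    # the generative model can also synthesise data ("analysis by synthesis"):
    print(rng.normal(gen.theta_[1], np.sqrt(gen.var_[1])))  # a synthetic class-1 point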

Andrew gave us a live demonstration of the next success – image recognition. Even in the Microsoft Office suite there’s software that can pull out a particular item from a complex image and insert it into (for example) a Word document. Andrew told us the strengths and weaknesses of both approaches needed to be considered, a lesson he’d learnt in his time with Microsoft working on the Kinect 3D camera project. Andrew had nailed his colours to the generative-model mast, but the modelling was proving difficult. Fortunately, a tenacious young researcher demonstrated that the black-box approach could work, and the outcome is now sitting on top of TVs in many of your living rooms. Andrew also explained that there are gains to be made by combining both modes.
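The lecture didn’t name the algorithm behind that Office feature, but it is widely associated with GrabCut, an interactive foreground-extraction method Blake co-developed, and OpenCV ships an implementation. A rough sketch (the file name and rectangle below are placeholders):

    import cv2
    import numpy as np

    img = cv2.imread("photo.jpg")            # placeholder input image
    mask = np.zeros(img.shape[:2], np.uint8)
    bgd = np.zeros((1, 65), np.float64)      # internal model buffers
    fgd = np.zeros((1, 65), np.float64)
    rect = (50, 50, 300, 400)                # rough box around the item (hypothetical)

    cv2.grabCut(img, mask, rect, bgd, fgd, 5, cv2.GC_INIT_WITH_RECT)

    # keep pixels labelled definite or probable foreground
    fg = ((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD)).astype(np.uint8)
    cv2.imwrite("cutout.png", img * fg[:, :, None])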

The field is changing fast, and Andrew highlighted the magnitude of improvements over recent years. It’s not only the technology that’s changing, though. Data protection, ethical approaches and legal issues are also having an impact. The impenetrable nature of the empirical classifier (black-box) approach can be problematic, given an increasing need to demonstrate which variables and data are key to a model’s output. In some cases, generative models are being used to try to explain how the classifier models are obtaining their predictions.
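The talk didn’t go into specific techniques here; one common flavour of the idea, sketched below with scikit-learn, is to fit a small, transparent surrogate model to a black box’s predictions and read approximate rules off the surrogate:

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = make_classification(n_samples=2000, n_features=5, random_state=0)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    # fit a shallow, human-readable tree to mimic the black box's outputs
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X, black_box.predict(X))

    print("fidelity to black box:", surrogate.score(X, black_box.predict(X)))
    print(export_text(surrogate))   # approximate rules behind the predictions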

Finally, Andrew gave us a brief glimpse into research on how to improve learning. The typical classifier models need many, many cases to learn from, and once they’ve learned the first thing, the same number of examples is needed for the second. Small children demonstrate a much more efficient way of learning: if they’ve had quite a few examples to learn how to identify a car, very few additional examples are required to allow them to identify lorries (a toy sketch of this idea follows at the end of this piece).

Andrew’s talk was certainly entertaining, even if it might not have been what one or two were expecting from the presumably deliberately vague title. It could only ever be a flavour of the type of research work being done through the ATI. The concept of weighing the modelling pluses and minuses of different approaches is definitely not a new one to the O.R. world, but it was fascinating to see Andrew’s take on it.
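As promised above, here is a toy sketch of the few-shot point. Andrew didn’t name a specific method, but one simple way to mimic the child’s trick is a nearest-prototype rule over an already-learned feature representation: a new class needs only enough examples to place its prototype (all feature vectors below are synthetic):

    import numpy as np

    rng = np.random.default_rng(1)
    # pretend these are features from a representation already learned on cars
    cars = rng.normal([1.0, 0.0, 0.0], 0.3, size=(500, 3))   # many "car" examples
    lorries = rng.normal([1.0, 0.8, 0.4], 0.3, size=(5, 3))  # only five "lorry" examples

    prototypes = {"car": cars.mean(axis=0), "lorry": lorries.mean(axis=0)}

    def classify(x):
        # nearest prototype wins; no retraining needed to add the new class
        return min(prototypes, key=lambda k: np.linalg.norm(x - prototypes[k]))

    print(classify(rng.normal([1.0, 0.8, 0.4], 0.3, size=3)))  # usually "lorry"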



Tuesday, 6 Dec 2016
Jeffrey Jones

Operations Research Society of America (ORSA), Annual Meeting in May 1953

The speaker was Phil Morse, the first president of the Operations Research Society of America (ORSA), and the occasion was the ORSA Annual Meeting in May 1953. Morse’s basic prescription – keep building up our theory; keep expanding our applications – works just as well today as a guide for the future of operations research.
Others have proffered their own views of the future of operations research since Morse first looked into the crystal ball. Some of these are devastating in their pessimism. In 1979, Russell Ackoff wrote, “The life of O.R. has been a short one – it was born late in the 1930s – by the mid-1960s most O.R. courses were given by academics who never practiced it, depriving O.R. of its unique incompetence.” Ackoff argued that we “... should want to help create a world in which the capabilities of O.R. are considerably extended but in which the need for O.R. is diminished.” This does not sound like a recipe for growing a discipline. With a healthy 12,000+ membership, INFORMS has happily not followed Ackoff’s advice.
Others have provided more optimistic views. Ten years post-Ackoff, the irrepressible Alexander Rinnooy Kan wrote that “The future of O.R. is bright – if there is anything worrying about the state of O.R., it is that our discipline seems to spend such an inordinate amount of time and effort worrying about itself.”
Who can’t relate to that? What should our name be: Operations research? Management science? Decision sciences? Analytics? Calcuholics?
In a 1952 article in the Journal of Applied Physics, Phil Morse wrote: “Personally, I would prefer to forget about definitions and get on with the work. After all, who cares what it’s called, as long as it’s useful and is used?” 1952!!
Operations research, unlike economics (or physics for that matter), does not possess a “world view” – we have no underlying holistic theory for how the world works. The natural unit of interest in O.R. is “the problem.” It shows in how we label things – the diet problem, traveling salesman problem, stochastic queue median problem, etc. – and it shows in how we decompose more complicated situations into something we can study, model, understand and perhaps improve.
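As a tiny worked instance of “the problem” as our unit of interest, the diet problem can be posed and solved in a few lines of scipy; the foods, prices and nutrient figures below are invented purely for illustration:

    from scipy.optimize import linprog

    # minimise the cost of bread and milk subject to nutrient minimums
    cost = [2.0, 3.0]                 # price per unit of each food (made up)
    # rows are nutrients, columns are foods; requirements: >= 20 protein, >= 300 calories.
    # linprog expects <= constraints, so the "at least" rows are negated.
    A_ub = [[-4.0, -8.0],             # protein per unit of each food
            [-90.0, -120.0]]          # calories per unit of each food
    b_ub = [-20.0, -300.0]

    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
    print(res.x, res.fun)             # cheapest quantities and the total cost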
But operations research has a mindset. Operations researchers are the masters of structuring messy situations into problems amenable to analysis. Operational science includes seeing or characterizing phenomena of all sorts as operations. Modeling science (or perhaps modeling art) calls upon our creativity to create new models for such operations. These are key O.R. skills, and they capture what many INFORMS members really do.
Phil Morse had it right 60 years ago. We need to develop new methodology and to adapt old; we need to generalize our basic theoretical techniques and broaden their range of application.
Some final thoughts from your departing member-in-chief: We can have a lot of fun doing these things while celebrating how our field has helped us lead more meaningful lives. Operations research is a terrific, wonderful area of endeavor of which you should all be proud.
Keep doing stuff!


From the INFORMS website