Posted on 12 June 2018

Artificial Intelligence

Impartiality in AI and Machine Learning

The annual meeting of the World Economic Forum's Global Future Councils, held in Dubai in 2017, discussed the effect of large-scale adoption of artificial intelligence (AI) and machine learning (ML) on humanity.

The consensus was that, by 2030, AI and ML will have become fully mainstream. Hopefully, by then, we will also have adopted a single term that incorporates both AI and ML; for now, I will use 'AIML'.

AIML is cognitively smart, but can we trust it? It will also be emotionally intelligent: aware of our most nuanced mental, social and emotional states, and familiar with our moods and preferences. AIML is already enabling pathways to financial inclusion, citizen engagement and more affordable healthcare. We can even wear it on our wrists, by equipping ourselves with connected smart devices that sense our moods and whether we are getting tired or in need of water, exercise, a rest or [soon] even insulin. Such devices will increasingly tell us what to do next! Being connected, they will also soon be changing the adverts we see, not just on our phones and laptops but even on smart billboards: using our current emotional state, they will serve adverts for products based on what we, and others, are most likely to buy when happy, sad, hungry or thirsty.

In short, AIML will increasingly take over our lives. Some will resist, just as some people today refuse to wear a watch. But how long will it be before you are not considered safe without your AIML? Would you let your child go out without it? Would you be allowed to?

There are two fundamental problems with AIML. First, it is controlled by algorithms that are written (at the moment) by humans. Second, the algorithms incorporated into AIML are selected by humans whose primary aim is to make money for the companies that employ them. How can we be sure that these algorithms are fair, unbiased, impartial and in our best interest? The simple answer is that we cannot. We can pass legislation that says they have to be all of these things, but how can we be sure that one day our autonomous car will not drive over a cliff, or that our health-monitoring device will not inject us with an overdose of insulin because we are not spending enough money on the products being recommended to us?

Who do you hold responsible if such an 'accident' occurs? The company that sold you the AIML? The person who wrote the algorithm? Which algorithm was the cause? Was it a glitch? Is it possible to recreate the exact conditions that led to the 'accident', and if so, can the algorithm be tweaked so that it does not happen again? In doing so, can we be sure that the new version of the algorithm is bug-free? Can we be sure that, when an upgrade to an algorithm is sent out, all instances of that algorithm will be modified, even if the users have not paid for the latest enhancements?

We must gain an understanding of how discrimination can enter AIML systems, and then engineer these systems to learn not to discriminate.
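
To make that concluding point concrete, here is a minimal sketch (in Python, not from the article) of one way discrimination can enter a learned system even when the protected attribute is withheld from the model, together with one simple audit, the demographic parity gap. The synthetic data, variable names and the choice of metric are all illustrative assumptions.

    # Illustrative sketch only: bias entering a model via a proxy feature.
    # The data are synthetic and all names are hypothetical.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Protected attribute (group A = 0, group B = 1); never shown to the model.
    group = rng.integers(0, 2, size=n)

    # A "neutral-looking" feature (e.g. postcode) that correlates with the group.
    proxy = group + rng.normal(0.0, 0.5, size=n)

    # Historic outcomes were skewed against group B, so the training labels
    # already encode the past discrimination the article warns about.
    skill = rng.normal(0.0, 1.0, size=n)
    label = ((skill - 0.8 * group + rng.normal(0.0, 0.3, size=n)) > 0).astype(int)

    # The model is trained without the group column...
    X = np.column_stack([skill, proxy])
    model = LogisticRegression().fit(X, label)
    pred = model.predict(X)

    # ...yet the proxy lets it reconstruct the group, so positive-prediction
    # rates still differ; the demographic parity gap quantifies the disparity.
    rate_a = pred[group == 0].mean()
    rate_b = pred[group == 1].mean()
    print(f"P(positive | group A) = {rate_a:.2f}")
    print(f"P(positive | group B) = {rate_b:.2f}")
    print(f"demographic parity gap = {abs(rate_a - rate_b):.2f}")

The sketch illustrates why simply dropping the protected column is not enough. In practice an audit of this kind would use held-out data and consider several fairness criteria (equalised odds, calibration and so on), since these criteria can conflict with one another.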

Erica Kochi, Head of Innovation at UNICEF and a member of the World Economic Forum's Global Future Council, recently published an article entitled 'How we can make machines fair'. It is a worthwhile read: http://bit.ly/2Ib16nc


NIGEL CUMMINGS