Unravelling the complexities of artificial intelligence

AI studies focus on what AI can do to provide insight for the individual

Nigel Cummings

A new observation, information and comment article by Grace Segran, featuring Singapore Management University (SMU) Assistant Professor of Information Systems Sun Qianru, has been published in the online magazine Tech Xplore, in an attempt to unravel the complexity of artificial intelligence (AI).

The document explains how AI is divided into subfields, including natural language processing, computer vision and deep learning. It also states that “most of the time”, the specific technology at work in AI is machine learning (ML), which focuses on developing algorithms that analyse data to provide insight and predictive capabilities. ML, it states, “relies heavily on human supervision.”


Professor Sun Qianru likened such supervision in training small-scale AI models to teaching young children to recognise objects in their surroundings. She also noted that AI and its training can be complex, but that providing AI models with ‘labelled’ data is a useful way in.

She illustrated this with an example: a labelled training image might contain an apple. The image passes through the deep AI model, which makes a prediction. If the prediction is right, training moves on to the next image; otherwise, the model modifies its parameters.
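The loop Professor Sun describes can be sketched in a few lines. This is a minimal illustrative example, not her actual model: a toy perceptron on 2-D points stands in for the deep network, and two point clusters stand in for "apple" / "not apple" images. The logic is the one described above: a correct prediction moves training on; a wrong one triggers a parameter update.

```python
def train(examples, epochs=20, lr=0.1):
    """examples: list of (features, label) pairs with label in {0, 1}."""
    w = [0.0, 0.0]  # model parameters (weights)
    b = 0.0         # model parameter (bias)
    for _ in range(epochs):
        for x, label in examples:
            score = w[0] * x[0] + w[1] * x[1] + b
            prediction = 1 if score > 0 else 0
            if prediction == label:
                continue  # prediction right: go on to the next example
            # prediction wrong: modify the parameters toward the true label
            direction = 1 if label == 1 else -1
            w[0] += lr * direction * x[0]
            w[1] += lr * direction * x[1]
            b += lr * direction
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Two separable clusters standing in for labelled "apple" vs "not apple" data.
data = [([1.0, 1.2], 1), ([0.9, 1.0], 1),
        ([-1.0, -0.8], 0), ([-1.2, -1.1], 0)]
w, b = train(data)
```

Real image models replace the two weights with millions of parameters and use gradient-based updates, but the supervision signal, the comparison of a prediction against a human-provided label, is the same.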

The state-of-the-art, or best performing, AI models are almost entirely based on deep learning, built on many-layered neural networks, typically convolutional neural networks.
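The building block such networks stack many times over is the convolution. As an illustration only (toy data, no deep-learning framework), the sketch below computes what a single convolutional filter produces: it slides a small kernel over an image and sums element-wise products at each position, following the cross-correlation convention most deep-learning libraries use.

```python
def conv2d(image, kernel):
    """'Valid' 2-D convolution of a single-channel image with a kernel."""
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = []
    for i in range(out_h):
        row = []
        for j in range(out_w):
            # Sum of element-wise products of the kernel and the image patch.
            s = sum(image[i + di][j + dj] * kernel[di][dj]
                    for di in range(kh) for dj in range(kw))
            row.append(s)
        out.append(row)
    return out

# A vertical-edge-detecting kernel applied to a tiny image whose left half
# is dark (0) and right half bright (1): the edge lights up in the output.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
kernel = [[1, -1],
          [1, -1]]
feature_map = conv2d(image, kernel)
```

A convolutional network stacks many such filters in layers, learning the kernel values from labelled data rather than hand-specifying them as done here.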

She spoke about some work done with image recognition in AI for a mobile phone app called Food AI++, developed for Singapore’s Health Promotion Board (HPB). It allows users to determine food composition data simply by using their phones to take pictures of their food.

The overall aim of Food AI++ is to help users track the nutritional values of the food they consume and use that information to achieve a healthy, well-balanced diet. Additionally, Food AI++ could help people with diabetes keep to healthy eating regimes and maintain control over their HbA1c blood glucose levels.

Professor Sun and her team noted that developing Food AI++ required collecting the images that users take of their meals and upload to the app. They observed, during its development, that food images were “very noisy and diverse.”

For example, the Chinese and Malay communities in Singapore have different eating habits, food styles and categories of food. The model started with a limited list of categories, but this rapidly expanded to accommodate all the different types and cultures.

Professor Sun’s research focuses on deep convolutional neural networks, meta-learning, incremental learning, semi-supervised learning, and their applications in recognising images and videos. A comprehensive list of her academic projects can be accessed at bit.ly/3loEaWa; see also bit.ly/3qXT72P.