FIRST TUESDAY CLUB

Risk and Opportunity of Human Centred AI

The Speaker.


Janet Wiles. Janet is a UQ academic who received her PhD in computer science from the University of Sydney, and completed a postdoctoral fellowship in psychology. She has 30 years’ experience in research and teaching in machine learning, artificial intelligence, bio-inspired computation, complex systems, visualisation, language technologies and social robotics, leading teams that span engineering, humanities, social sciences and neuroscience.


She currently teaches a cross-disciplinary course, “Voyages in Language Technologies”, which introduces computing students to the diversity of the world’s Indigenous and non-Indigenous languages, and to state-of-the-art deep learning tools and other analysis techniques for working with language data.



The Discussion.


Janet provided a broad introduction to the field of Human-Centred Artificial Intelligence (HCAI), which aims to bridge the gap between humans and large language models such as ChatGPT. These models remain weak in the domain of ethical, cultural, personal and contextual factors.


HCAI is therefore multidisciplinary, drawing on sociology and psychology to study how machines mediate and communicate with people in their lived spaces and experiences. Human well-being is a major goal of such AI research.


With this in mind, the HCAI research team at the University of Queensland is working with a group of 20 people aged over 65, of whom one third are living with dementia. They experience various forms of communication disruption, so there is a major role for AI, especially for people living at home. It is important that the AI be integrated into their lifestyles and provide relevant, timely interaction, for instance reminders to take one’s medicine. Current AI systems are not optimised for these kinds of goals.


Accordingly, the research team has developed a software system called Florence, a knowledge ecosystem designed to be an expert in its users’ lived experience. The participants live in carefully designed spaces without excessive distractions. They want their AI helpers to be secure, accurate and relevant, to give them some control over their information, and to know about the significant people, contexts and events in their lives; micro-access permissions are built in from the start. The group is even designing computer tablets suited to people whose hands are less agile than a younger person’s. These users want the technology to understand them seamlessly, and such software can indeed provide intelligent interaction and support in the domestic environment.


The attendees’ discussion initially centred on how working with such AI models also provides insight into what it means for machines to "think". Rather little human ethical or subjective knowledge is currently encoded in mainstream AI systems, particularly since they are trained uncritically on vast volumes of text from the Internet. Such models lack knowledge like that found in a language of Cape York in Australia, which encodes which plants are edible; that knowledge is simply not accessible through something like ChatGPT. Training machines on human meaning, experience and reasoning allows for a much deeper understanding of what an intelligent system can be. If a model is trained blindly on Internet data without quality control, there is a real issue about the integrity of the data: what happens if the data do not accurately represent reality? There are already instances where people who are not white Anglo-Caucasians fail to register properly on facial-recognition software.


Humans are the most cooperative species on the planet, though they have competitive and sometimes destructive characteristics as well. But in AI they may have invented a monster. Geoff Hinton recently left Google so that he could speak freely about his concern that AI systems are advancing much more rapidly than anyone had expected, and are already learning to deceive humans. The question is who is holding the knife. We have to start thinking about risk assessment.


An intriguing issue is our relationship with AI systems as they grow in power and scope. If we become used to living with these devices and delegating parts of our lives and experience to them, there is a danger that we will simply lose control of a key part of our existence. Intelligent systems already do things for which humans no longer learn the skills, mental arithmetic for example.


And experiments with robotic systems can reveal unexpected behaviour. Some years ago Prof Wiles was working with rats, studying how they learn to release other rats from a confined space. Learning this behaviour normally took a month; but when the rats doing the releasing were replaced by an amiable robot, the other rats learned the behaviour in a day. This was entirely unexpected.


The University of Queensland group is continuing to work on modelling increasingly complex aspects of human lived experience in context, particularly to support older people, especially those with dementia. The researchers are also using the software as a means of discovering and modelling novel aspects of human behaviour. And they are investigating AI as a mediator between humans, and between humans and their living contexts, rather than as a potential dictator or a destroyer of jobs and social fabric.


Brisbane Dialogues is very grateful to Professor Janet Wiles for sharing her expertise and shedding light on what is a very promising and exciting field of AI research.


Rolly Sussex and Charlie Trenorden, 11 September 2023


Supporting videos on HCAI:







