Artificial intelligence (AI) is a term used to describe a broad family of self-learning technologies. It is not just for self-driving cars, easier shopping or smarter allocation of business resources; it can also reveal new insights into health issues and help people live healthier lives.
I recently attended a seminar organised by iappANZ which focused on the benefits of AI for health and highlighted the privacy issues that need to be addressed. The seminar panellists, who spoke particularly about healthcare, were Dave Heiner (Vice President, Microsoft Corporation, External and Legal Affairs), Kevin Ross (Director of Research, Orion Health) and Dylan Mordaunt (Physician, Waitemata District Health Board).
One study shared at the seminar was conducted by American data scientists into sudden infant death syndrome (SIDS). It revealed, for example, that maternal smoking increased the incidence of the syndrome; this was already known, but the study confirmed that the data supported the earlier evidence. Another insight was that a mother's access to pre-natal healthcare, and when she gained that access, also affected the probability of SIDS. These findings were only possible through the use of anonymised patient data.
Another example of AI in use is an application running on Pivothead glasses, which helps visually impaired people understand their surroundings. The application takes images of what is around the wearer, then audibly describes the gender, approximate age, facial emotion and activity of nearby individuals. Launched in Australia in November 2017, it shows how AI can improve accessibility and independence for sight-impaired people.
But with every technological breakthrough come challenges. In this instance, they relate to privacy and transparency. AI needs to be built to include trust, transparency and privacy controls. If it isn't, we won't be able to realise the benefits of the new technology, especially in health, where trust and confidence between patients and health providers are paramount.
As a society, we also need to ensure that these new 'learning' technologies are fair and don't discriminate against people on the basis of socio-economic standing, race, faith, gender or sexuality. There are times when the data used is not representative of society as a whole. For instance, racial bias that exists in society and is reflected in the data will produce unfair or discriminatory outcomes. Analytical techniques need to be developed to recognise and eliminate such bias. An article entitled 'Machine Bias' exposed the use of sentencing software in the United States to predict future offending. The software generated a score rating an offender's likelihood of committing future crimes, but it discriminated against African-American offenders by assigning them higher scores than white offenders with similar criminal records.
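One simple analytical technique of the kind described above is to compare false positive rates between groups: how often each group is wrongly flagged as high risk. The sketch below is a minimal illustration only; the group labels and records are entirely hypothetical toy data, not figures from the study or the article.

```python
# Minimal sketch of a group fairness check: compare false positive
# rates (people flagged high risk who did NOT reoffend) across groups.
# All data here is hypothetical toy data for illustration.

def false_positive_rate(records, group):
    """FPR for one group: share of non-reoffenders flagged high risk."""
    negatives = [r for r in records if r["group"] == group and not r["reoffended"]]
    if not negatives:
        return 0.0
    flagged = sum(1 for r in negatives if r["predicted_high_risk"])
    return flagged / len(negatives)

# Hypothetical records: group, whether the tool flagged the person as
# high risk, and whether they actually went on to reoffend.
records = [
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": False},
    {"group": "A", "predicted_high_risk": False, "reoffended": False},
    {"group": "A", "predicted_high_risk": True,  "reoffended": True},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": True,  "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": False},
    {"group": "B", "predicted_high_risk": False, "reoffended": True},
]

fpr_a = false_positive_rate(records, "A")
fpr_b = false_positive_rate(records, "B")
# A large gap between the two rates is the kind of disparity the
# 'Machine Bias' investigation reported.
print(f"FPR group A: {fpr_a:.2f}, FPR group B: {fpr_b:.2f}")
```

In this toy data, group A's false positive rate is double group B's, even though both groups contain the same share of actual reoffenders; that asymmetry, rather than overall accuracy, is what such a check is designed to surface.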
One way forward, as suggested by the panellists, would be to develop guidelines for developers of AI to ensure these new systems are fair, transparent and have proper privacy controls. Good governance and responsible development will assist in eliminating systemic profiling, unintended consequences and bias.
Image credit: Artificial intelligence by Salvatore P.