The realm of artificial intelligence (AI) remains a place of much debate, as Elon Musk and Mark Zuckerberg recently demonstrated in extravagant fashion. Musk made headlines with the statement that AI poses a significant threat and needs regulation, even calling it a “fundamental risk to the existence of civilisation”. For Zuckerberg, however, these warnings are “pretty irresponsible”.
The Facebook head chose instead to highlight the role that AI technology could play in saving lives with driverless cars and medical diagnoses. It seems that eager technologists far and wide are happy to put AI in a ‘box’, so to speak, and talk at length about their visions of AI transforming civilisation.
But what does the debate leave out?
AI is alive and well
The conversation around AI is far more nuanced than one might expect. AI comes in many shapes and sizes: the type discussed by Musk and Zuckerberg relates primarily to artificial intelligence with ‘human level’ cognitive skills, also known as AGI or ‘Artificial General Intelligence’. Despite impressive progress in a range of specialities, from driving cars to playing Go, AGI is nowhere near imminent.
These public debates often ignore the fact that AI is already in widespread business use today, and that its current risks have nothing to do with leaving us all in a state of destruction. Rather than worrying about apocalyptic doomsday scenarios, we should focus our energies on the very real dangers posed by this technology being used incorrectly today.
Risks can include diminished business value, significant brand damage and violations of regulation. Though these don’t spell the end of humanity, they can still have an enormous impact on whether an organisation succeeds or fails.
If we are to consider AI risks in a business context, let’s remember that not all AI is created equally. Artificial intelligence comes in two particular flavours – Transparent and Opaque. Both have different uses, applications and impacts for businesses and users.
For the uninitiated: Transparent AI is a system whose insights can be understood and audited, allowing one to reverse engineer each of its outcomes and observe how it arrived at any given decision. Opaque AI, by contrast, is a system that cannot easily reveal how it works: much like our own brains, it struggles to explain exactly how it arrived at a particular insight or decision.
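To make the distinction concrete, here is a minimal sketch of what ‘auditable’ means for a Transparent system. The feature names and weights are entirely hypothetical, and a simple linear score stands in for whatever model an organisation might actually use; the point is only that every decision decomposes into per-feature contributions that a reviewer can inspect.

```python
# Hypothetical illustration: a transparent linear scoring model whose
# every decision can be reverse engineered, feature by feature.

def transparent_score(applicant, weights):
    """Return the total score plus an audit trail of contributions."""
    contributions = {f: weights[f] * applicant[f] for f in weights}
    return sum(contributions.values()), contributions

# Invented example data (not from any real lending system):
applicant = {"income": 52.0, "years_at_address": 4, "existing_debt": 8.0}
weights = {"income": 0.5, "years_at_address": 2.0, "existing_debt": -1.5}

score, why = transparent_score(applicant, weights)
# 'why' shows exactly how the score arose: income contributed 26.0,
# years_at_address 8.0, existing_debt -12.0, for a total of 22.0.
```

An Opaque system, such as a deep neural network, produces only the final score; there is no equivalent of the `why` dictionary to hand to an auditor.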
The labels ‘Opaque’ and ‘Transparent’ each carry emotive connotations, but we shouldn’t let these influence us. There is no ‘good’ or ‘bad’ AI – only appropriate or inappropriate use of each system, depending on one’s own needs. Opaque AI has a number of positive aspects which can prove useful in the right situations. Being transparent is a constraint on AI and will limit its power and effectiveness – so in some instances an Opaque system would be preferable.
The choice between the two becomes all the more crucial in industries that are highly regulated. For instance, in financial services, proper use of Opaque AI in lending leads to improved accuracy and fewer errors. But if banks are asked to show how these operational improvements were achieved through reverse engineering the decision process (as demanded by the EU General Data Protection Regulation – or GDPR), it becomes a challenge or even a liability.
The issue of bias creeping in poses another possible problem in an Opaque system. An Opaque AI system could begin to favour policies that contradict your organisation’s brand promise, all without your knowledge. It’s actually quite easy for an AI system to use neutral data to work out customer details, which it can then use to make non-neutral decisions. An Opaque AI in a bank, for example, may interpret customer data and use it to start offering better deals to people based on race, gender or other demographics. Of course, this could result in disaster.
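The proxy effect described above can be sketched in a few lines. The data here is invented and deliberately extreme: a ‘neutral’ feature (postcode) perfectly separates two demographic groups, so a model that never saw the protected attribute still produces non-neutral outcomes.

```python
# Hypothetical illustration of proxy bias: the model was trained only on
# 'postcode', yet its approvals split cleanly along group lines because
# postcode correlates with the protected attribute. All data is invented.

records = [
    {"postcode": "A", "group": "x", "approved_by_model": True},
    {"postcode": "A", "group": "x", "approved_by_model": True},
    {"postcode": "B", "group": "y", "approved_by_model": False},
    {"postcode": "B", "group": "y", "approved_by_model": False},
]

def approval_rate(rows, group):
    """Share of a group's applications the model approved."""
    members = [r for r in rows if r["group"] == group]
    return sum(r["approved_by_model"] for r in members) / len(members)

# Group 'x' is approved 100% of the time, group 'y' 0% of the time,
# even though 'group' was never an input to the model.
```

Comparing outcome rates across groups like this is exactly the kind of check an organisation can run over an Opaque system’s outputs, even when the system itself cannot explain its reasoning.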
A question of trust
The answer to whether an organisation is using AI correctly, and which kind of AI is appropriate, lies in the extent to which that organisation is willing to trust it. In order to fully trust an AI system, either the AI must be transparent so that business management can understand how it works – or, if the AI is Opaque, it has to be tested before it’s taken into production. Tests need to be extensive, going beyond checking that the system delivers viable business outcomes to look for the kind of unintended biases described above.
There are also other factors at play, especially for those organisations using AI as part of a customer engagement system. With GDPR coming into effect in May 2018, companies will have to be able to explain exactly how they reach certain algorithm-based decisions about their customers. Organisations could use some sort of switch – a ‘T-Switch’ – to increase transparency by switching the methods an AI uses to make decisions from Opaque to Transparent. Those that can comply more easily will gain a distinct advantage.
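One way to picture the ‘T-Switch’ idea is as a thin wrapper that routes each decision either to an opaque model or to a transparent, auditable one. The class and the stand-in models below are hypothetical sketches, not a real product’s API; the transparent path simply returns an explanation alongside every decision.

```python
# Sketch of the 'T-Switch' concept: a wrapper that can route decisions
# from an opaque model to a transparent one. Names are hypothetical.

class TSwitch:
    def __init__(self, opaque_model, transparent_model, transparent=False):
        self.opaque_model = opaque_model
        self.transparent_model = transparent_model
        self.transparent = transparent  # the switch itself

    def decide(self, case):
        if self.transparent:
            decision, audit_trail = self.transparent_model(case)
            return {"decision": decision, "explanation": audit_trail}
        # Opaque path: a decision, but no account of how it was reached.
        return {"decision": self.opaque_model(case), "explanation": None}

# Toy stand-ins for real models (invented rule, invented threshold):
def opaque(case):
    return case["score"] > 0.5  # inner workings hidden from the caller

def transparent(case):
    rule = "score > 0.5"
    return case["score"] > 0.5, f"applied rule: {rule}"

switch = TSwitch(opaque, transparent, transparent=True)
result = switch.decide({"score": 0.7})
# With the switch on, result["explanation"] records which rule fired.
```

The trade-off from earlier in the piece shows up directly here: flipping the switch on buys explainability at the cost of constraining the system to methods that can account for themselves.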
Is your AI right?
There are tangible risks posed by AI in the here and now, and businesses today are rightly concerned about selecting the correct system. It’s not an easy question in practice, especially when you consider that the choice between Transparent AI and Opaque AI could determine which technology and method will be used, for instance, to correctly diagnose and save a patient’s life. In some cases, the deciding factor could be marginal.
Regardless, we really haven’t yet arrived at the point where AI will determine the life or death of human civilisation, despite what makes headlines today.