
XAI: making algorithms great again

by Charles Devroye - Marketing & Communications Manager

With great technology usually comes great risk and, consequently, great responsibility. Artificial Intelligence (AI) is no exception to that rule. AI’s main strength, its ability to quickly process vastly greater volumes and varieties of data than any human can, is severely undercut by its main weakness: an inherent lack of transparency and insight into how that data is actually processed and how its output is arrived at. This raises some critical questions: how can we develop trust in a machine that makes important decisions on our behalf? And when can we really put all our trust in such a machine?

From simple voice assistants like Apple’s Siri taking our commands, interpreting them and answering us, to highly sophisticated software robots deciding whether our insurance claim should be processed or denied: Artificial Intelligence seems to be creeping into every corner of our lives. But how accurate and fair are the decisions it is increasingly making for us?

For, make no mistake, this hot new technology is already making some pretty big and important decisions on our behalf. Potentially life-or-death decisions even, when you consider that AI is currently being applied in the mobility industry, deciding whether or not a self-driving vehicle brakes, for instance. Not to mention the healthcare sector, where it is already changing how cancer is diagnosed.

It’s the trust, stupid!

This equally wonderful and worrying technology trend raises the question: how is AI actually making those decisions? To be sure that AI makes accurate and fair decisions, it has become vital for us humans to understand its underlying decision-making mechanism. Whether for compliance reasons, as is clearly the case in the financial services industry, or simply to eliminate bias, there is a need to make AI’s decision-making capabilities (better) understood. This is where Explainable AI (XAI), or Transparent AI, comes in.

As any banker will tell you, trust is the ultimate currency, and transparency is needed to create it. That such transparency is sorely and inherently lacking in AI is precisely the technology’s main weakness. Since deep learning, which is driving today’s AI explosion, is non-transparent by its very nature, AI-based systems simply don’t explain how and why they arrived at a particular decision.

Limits to AI?

Explainable or Transparent AI aims to remedy that natural shortcoming by making transparent to users how an AI-based system came to a specific decision. This is easier said than done, of course, for the question rightly arises as to what exactly we need to make transparent: the data that was used to reach that specific decision? The AI model that was used to do so? Or both?
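To make that question a little more concrete, here is a minimal sketch, in Python with scikit-learn, of what two flavours of transparency can look like in practice: a global view of which inputs a model relies on overall, and a local view of one individual decision. The synthetic dataset, the feature names and the insurance-claim framing are purely illustrative assumptions, not an example taken from any real system described in this article.

```python
# A toy sketch of two kinds of transparency: a "global" explanation of which
# inputs a model relies on overall, and a "local" look at one single decision.
# The synthetic data, feature names and insurance framing are assumptions made
# for illustration only; they are not from the article.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

feature_names = ["claim_amount", "customer_age", "policy_tenure", "prior_claims"]

# Synthetic stand-in for historical claims data (illustrative assumption).
X, y = make_classification(n_samples=1000, n_features=4, n_informative=3,
                           n_redundant=1, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X, y)

# Global view: how much the model's accuracy depends on each input feature.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda pair: -pair[1]):
    print(f"{name:>15}: {score:.3f}")

# Local view: the model's verdict and confidence for one specific claim.
claim = X[:1]
print("decision:", model.predict(claim)[0],
      "confidence:", model.predict_proba(claim)[0].max())
```

Even a simple report like this, showing which inputs drove the model overall and how confident it was about one particular case, goes some way towards the data-versus-model transparency the debate is about.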

While this debate remains unresolved for now, nearly everyone agrees that it is necessary and even essential to increase trust in AI. For, promising though this new technology may be, it is still far from perfect today. It fails, sometimes surprisingly and catastrophically, and people want to know, need to know, why it does so. With the help of XAI, we will hopefully be able to better determine the limits of AI.
