AI’s large language models (LLMs) lack transparency. Nobody, including the people who create them, knows why an LLM gives the exact answer it does or why it makes a particular decision. That’s a problem for organizations in high-impact regulated industries like financial services and healthcare, for both legal and ethical reasons. The issue is not limited to LLMs: most AI models are built on opaque statistics that cannot be easily understood.
“You can’t trust what you can’t control, and you can’t control something you don’t understand,” says Angelo Dalli, CTO and Chief Scientist of UMNAI, a UK-based startup.
UMNAI is trying to tackle this issue by marrying neural networks and LLMs with neuro-symbolic AI. Rather than relying on pure statistical predictions and associations, neuro-symbolic AI represents knowledge through logic, reasoning and an understanding of cause and effect, and uses rule-based systems and logical inference to derive conclusions.
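To make the general idea concrete, here is a minimal, hypothetical sketch of the neuro-symbolic pattern described above: a statistical model produces a score, and explicit, human-readable rules turn that score into a decision with a traceable list of reasons. The model, rules, thresholds and data are invented for illustration and do not represent UMNAI’s actual technology.

```python
# Illustrative only: a toy neuro-symbolic loan-approval check.
# The rules, thresholds and "model" below are invented for this sketch.

def neural_score(applicant: dict) -> float:
    """Stand-in for an opaque statistical model's risk score (0 to 1)."""
    # In practice this would be the output of a trained neural network.
    return 0.82 if applicant["income"] > 50_000 else 0.35

def symbolic_decision(applicant: dict, score: float) -> tuple[str, list[str]]:
    """Apply explicit, human-readable rules to the statistical score."""
    reasons = []
    if applicant["age"] < 18:
        reasons.append("Rule 1: applicant must be an adult")
        return "reject", reasons
    if score >= 0.7 and applicant["debt_ratio"] < 0.4:
        reasons.append("Rule 2: high model score and acceptable debt ratio")
        return "approve", reasons
    reasons.append("Rule 3: borderline case, refer to a human reviewer")
    return "review", reasons

applicant = {"age": 34, "income": 60_000, "debt_ratio": 0.25}
decision, trace = symbolic_decision(applicant, neural_score(applicant))
print(decision, trace)  # every decision carries the rules that produced it
```

The point of the pattern is that the final decision can always be traced back to named rules, even though a statistical model contributes to it.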
So how does that work with real-world business applications? Read on to find out.