Focus On AI

Trustworthy AI May Be In Reach

What do chatbots emulating the role of a rabbi or medical school professor have to do with business? A lot, it turns out. Projects involving both are testing a technology called Retrieval Augmented Generation (RAG), a technique for enhancing the accuracy and reliability of generative AI models by fetching facts from qualified sources. The approach could democratize education and, if pundits are right, will also enable the dependable scaling of generative AI, helping companies reap its full business benefits.

RAG can digest complex queries and give models sources they can cite, like footnotes in a research paper, so users can check their legitimacy. It also reduces the likelihood that a model will hallucinate.
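As a rough, generic illustration of how a RAG application can attach citable sources to a model’s prompt, consider the minimal Python sketch below. The sources, wording and variable names are invented for illustration only and are not drawn from any of the projects described in this article.

```python
# Minimal sketch: packaging retrieved passages so the model can cite them.
# The passages, titles and question below are made-up placeholders.

retrieved_passages = [
    {"id": 1, "source": "Employee Handbook, ch. 4",
     "text": "Staff may carry over five unused vacation days."},
    {"id": 2, "source": "HR Policy Update 2023-11",
     "text": "Carry-over days must be used by March 31."},
]

question = "How many vacation days can I carry over, and until when?"

# Number each passage so the answer can reference it like a footnote.
context = "\n".join(
    f"[{p['id']}] ({p['source']}) {p['text']}" for p in retrieved_passages
)

prompt = (
    "Answer the question using ONLY the numbered sources below.\n"
    "Cite the source number after each claim, e.g. [1].\n"
    "If the sources do not contain the answer, say so.\n\n"
    f"Sources:\n{context}\n\nQuestion: {question}\nAnswer:"
)

# This prompt would then be sent to whichever LLM the application uses.
print(prompt)
```

Because the answer must point back to numbered sources, a user (or an auditing tool) can check each claim against the cited passage rather than taking the model’s output on faith.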

Once companies get familiar with RAG, they can combine a variety of off-the-shelf or custom large language models (LLMs) with internal or external knowledge bases to create a wide range of assistants that help their employees and customers, says a blog post by Nvidia, which has developed an enterprise offering that includes a sample chatbot and the elements users need to create their own applications with RAG.

“Looking forward, the future of generative AI lies in creatively chaining all sorts of LLMs and knowledge bases together to create new kinds of assistants that deliver authoritative results users can verify,” says the Nvidia blog post.

The integration of RAG technology offers significant business benefits, “particularly when coupled with existing search modules and enterprise databases,” says Bernhard Pflugfelder, head of The Innovation Lab at Germany’s appliedAI Initiative, a venture involving more than 50 partners from science and industry, the public sector, and selected start-ups working on the development of trustworthy AI. “By harnessing RAG’s capabilities, businesses can empower their knowledge management processes, enabling swift and effective provision of information regarding products or processes,” he says. “This integration facilitates the generation of highly reliable answers to customer or employee inquiries, advancing organizational efficiency and customer satisfaction.”

In addition to streamlining decision-making processes, RAG promises to democratize enterprise knowledge, ensuring that valuable insights are available to all employees, says Pflugfelder. And, since RAG empowers businesses to develop more intelligent customer interfaces and services, it promises to enhance the overall user experience and foster stronger client relationships.

Will Chatbots Replace Rabbis?

In keeping with its mission to develop trustworthy and secure AI applications, the appliedAI Initiative is working with partners to establish a “RAG evaluation and benchmarking framework.” As part of that work, appliedAI is involved in a project called Virtual Havruta that explores potential gaps in RAG technology. The project aims to give people all over the world the option of turning to a chatbot with questions about the Jewish faith rather than simply asking their local rabbi. Partners on the project include Sefaria, the largest provider of digitized open-source Jewish texts, and the Software and Artificial Intelligence Venture Lab at the Technical University of Munich. Virtual Havruta has the buy-in of the Reform, Conservative and Orthodox branches of Judaism.

Instead of spitting out answers like a search engine, the domain-specific, LLM-based RAG system being tested in the project acts as a sparring partner. It outlines the thinking of each branch of Judaism and then proposes relevant links for further study, encouraging people to form their own opinions. The Havruta project was the subject of a panel entitled “Will Chatbots Replace Rabbis?” moderated by The Innovator’s Editor-in-Chief during the DLD conference in Munich in January.

“No, chatbots will not replace rabbis,” says Antoine Leboyer, managing director of the Technical University of Munich’s Software and AI Venture Lab, who participated in the panel. “But the help and contribution of AI has unique potential to enhance and facilitate the study of Jewish texts.”

The implementation of RAG technologies developed and deployed by the appliedAI Initiative “makes us confident that we can use such a platform in a wide group without concerns of poor links and hallucination,” says Leboyer. “This is particularly relevant for such projects, where there is not one single specific answer for every question but a set of directions to deepen a question.”

The Havruta project is a perfect test case for RAG technology, says Paul Yu-Chun Chang, a senior AI expert at the appliedAI Initiative, who also took part in the panel. “It poses challenges, particularly given the solemn nature of the topics involved and the need to delicately distinguish different interpretative perspectives for the same reference data,” he says.

As in business, queries about religion “demand a serious, fact-based approach for both in- and out-of-context questions,” says Chang, who developed the RAG technology for the Havruta project. “Addressing sensitive and unethical topics requires meticulous filtering, with an expectation that results are delivered in an inclusive, unbiased manner. These challenges extend to existing techniques for anti-adversarial attacks and effective benchmarking and evaluation.”

Virtual Havruta “serves as a tangible testing ground in our pursuit of achieving explainable, traceable, and controllable generative AI,” says Chang. “The insights gained from this use case play a pivotal role in advancing the industry-standardized framework that we are dedicated to developing.”

Democratizing Education

Another notable new project is looking to use RAG to democratize education for women. That project involves SandboxAQ, a company spun out from Google parent Alphabet that delivers AI and quantum solutions that run on today’s classical computing platforms. It has partnered with LibreTexts and The Female Quotient to equip women with the skills required for immediate workforce entry through AI-enhanced personalized education. LibreTexts is the world’s largest centralized open education project and online platform. It was founded in 2008 at the University of California, Davis, to reduce the burden of textbook costs for students who can’t afford them.

The idea behind the project is to upskill women and girls in different parts of the globe. In locations such as India, where there is a huge shortage of doctors and nurses, the project is looking to combine medical textbooks on the LibreTexts platform with an AI-powered chatbot that uses RAG. The tool can develop personalized lesson plans, allow students to ask questions of their textbooks and quiz themselves along the way, and provide clickable links to relevant materials.

Since not everyone has access to personal computers, the project is considering the use of carts that contain slots for a classroom’s worth of smart tablets. The carts both charge the tablets and enable networking to a central computer. “You don’t even need Internet access, all you need is electricity,” says Marianna Bonanome, PhD, SandboxAQ’s Head of External Education Programs. She used the carts while teaching at the City University of New York (CUNY) because many of her students couldn’t afford computers and there were a limited number of computer classrooms available. Bonanome co-authored free textbooks for her own students, a mission that brought her into contact with the founder of LibreTexts.

When Bonanome, a trained physicist and mathematician specialized in quantum computing algorithms, was hired by SandboxAQ to launch an external education program, she discovered the benefits of using RAG technology. That got her thinking about marrying open educational resources with RAG to help both democratize and personalize education for students globally. She didn’t need to do much convincing to get LibreTexts on board.

During a soft launch of the project at the World Economic Forum’s annual meeting in Davos, SandboxAQ CEO Jack Hidary talked about the power of RAG technology to transform education. When it comes to healthcare training, the RAG system ensures that the LLM does not search the Internet but only looks at validated textbooks from the LibreTexts corpus, he said. “This will give the best of ChatGPT and LLMs’ ease of use for things like grabbing text and video but also the best of validated information,” he said. “That is the promise of personalized education, that is how we can scale it.”

Adding AI and RAG technology on top of validated digitized free textbooks “will enable us to take education to the next level,” says Bonanome. “We are still mid-development with LibreTexts, and partners The Female Quotient and Future Brilliance,” she says. “We haven’t yet pinned down our pilot schools but are exploring several locations.”

There are so many potential applications, she says. When she discussed the topic with Jonas Haertle, Chief of the Office of the Executive Director at UNITAR, many use cases related to the UN were on the table. One potential use case is the training of United Nations diplomats. It is impossible for them to digest all the past and current resolutions, says Bonanome. “Applying RAG to all UN resolutions would give diplomats a tool in real time to access and digest complicated back stories and concepts.”

From RAG to Riches?

So how does RAG overcome the issues that have been holding back the use of generative AI? The idea behind RAG is to enrich the actual prompt with the most relevant information and context. “By doing so, the LLM aggregates and summarizes but does not add additional information based on its trained weights,” says Pflugfelder. The second issue to overcome is that an LLM can only reason about data it has already seen in the training process. The challenge is to find a way to incorporate additional knowledge without training or fine-tuning the LLM. RAG’s technical solution is not only to include external knowledge in the prompt but also to add a retrieval step that finds the relevant data among relevant knowledge bases, he says.
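The Python sketch below illustrates, in minimal form, the two steps Pflugfelder describes: retrieve the most relevant passages from a knowledge base, then enrich the prompt with them. The toy knowledge base and the simple word-overlap scoring are stand-ins for the vector embeddings and search indexes a production system would typically use; none of the names or data below come from appliedAI’s implementation.

```python
# Minimal sketch of a retrieval step feeding an enriched prompt.
# The knowledge base and scoring function are toy placeholders; real systems
# usually rely on embedding models and an approximate-nearest-neighbor index.

from collections import Counter
import math

knowledge_base = [
    "Model X-200 supports input voltages from 110V to 240V.",
    "The warranty on Model X-200 covers parts and labor for two years.",
    "Firmware updates for Model X-200 are released quarterly.",
]

def score(query: str, passage: str) -> float:
    """Cosine similarity over simple word counts (stand-in for embedding similarity)."""
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    overlap = sum(q[w] * p[w] for w in q)
    norm = math.sqrt(sum(v * v for v in q.values())) * math.sqrt(sum(v * v for v in p.values()))
    return overlap / norm if norm else 0.0

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k passages most similar to the query."""
    return sorted(knowledge_base, key=lambda p: score(query, p), reverse=True)[:k]

query = "How long is the warranty on the X-200?"
context = "\n".join(retrieve(query))

# The enriched prompt: the model is asked to answer only from the retrieved context,
# so it summarizes supplied facts instead of drawing on its trained weights alone.
prompt = f"Context:\n{context}\n\nUsing only the context above, answer: {query}"
print(prompt)
```

In practice the retrieved context would come from enterprise document stores, databases or search modules, and the assembled prompt would be passed to whichever LLM the business has chosen, without any retraining or fine-tuning of the model.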

Given RAG’s potential to reduce hallucinations and extract reliable information, should companies start embracing it now? “Absolutely,” says Pflugfelder. There is a wealth of advantages, including improved code generation, customer service, product documentation, internal knowledge retrieval and engineering support. “Compared to other generative AI technologies, businesses can leverage RAG more easily and swiftly as RAG is indeed a mitigation of one of the main limitations of current LLMs for industry applications, namely hallucination and robustness,” he says.

The appliedAI Initiative’s partners are already actively advancing RAG technology in prototypes and minimum viable products, says Pflugfelder. “Waiting is not advisable for businesses; they should proactively embrace this transformative technology and learn about best practices and potential use cases in their organizations.”


About the author

Jennifer L. Schenker

Jennifer L. Schenker, an award-winning journalist, has been covering the global tech industry from Europe since 1985, working full-time at various points in her career for the Wall Street Journal Europe, Time Magazine, International Herald Tribune, Red Herring and BusinessWeek. She is currently the editor-in-chief of The Innovator, an English-language global publication about the digital transformation of business. Jennifer was voted one of the 50 most inspiring women in technology in Europe in 2015 and 2016 and was named by Forbes Magazine in 2018 as one of the 30 women leaders disrupting tech in France. She has been a World Economic Forum Tech Pioneers judge for 20 years. She lives in Paris and has dual U.S. and French citizenship.