Making Large Language Models More Explainable
Author: Pat Lasserre
I recently watched a presentation given by Professor Christopher Potts of Stanford University, where he mentioned that if we’re going to trust large language models (LLMs), we need to better understand how they arrive at their answers.
This touches on one of the key issues with LLMs: their lack of explainability. They're so complex that they're effectively black boxes, and we can't tell how they arrive at their answers. The more sophisticated the model, the less transparent it becomes. That's the opposite of what we need: explainable AI (XAI), where a model's responses can be explained.
This becomes problematic in areas such as healthcare, finance, and law, where users need to have a clear understanding of how the model arrived at its decision.
Professor Potts compared two different approaches to using LLMs, one of which we’ll show can be used to make LLMs more explainable.
How Large Language Models Work
LLMs learn statistical co-occurrence patterns over sequences of tokens (words) and use those statistics to assign a probability to the next token in a sequence.
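To make that concrete, here is a minimal sketch of next-word prediction using the small open-source GPT-2 model from the Hugging Face transformers library. This is my own illustration, not something from Professor Potts' talk, and the prompt is just an example:

```python
# Sketch: how a language model assigns probabilities to the next token.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Probability distribution over the token that would follow the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    token = tokenizer.decode([int(token_id)])
    print(f"{token!r}: p={prob.item():.3f}")
```

The model isn't looking anything up; it is simply ranking which token is statistically most likely to come next, which is why a fluent continuation isn't necessarily a factual one.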
The longer the sequence grows, the greater the chance that the model goes off the rails and generates text that appears plausible but is inaccurate.
ChatGPT acknowledged that it can’t be fully trusted to produce facts when I asked it how it generates a response: “My responses are generated based on statistical patterns in the data I was trained on and may not always reflect the nuances of real-life situations.”
Basically, it's saying that the information it provides could be made up.
The LLMs for Everything Approach
Professor Potts discussed two approaches to using LLMs — one that he calls the “LLMs for everything” approach and the other that he calls “retrieval-augmented in-context learning.”
With LLMs for everything, there is no separate retriever to find relevant information — the LLM does everything. The model acts as both a knowledge store and a language model. As he states, it just becomes “a big opaque model.”
He says that if you have a “big opaque model” (a black box), you really can't trust it, because you don't know where the data that produced your answer came from. You now have a trustworthiness and explainability issue.
Source — LLM for everything approach where you have a black-box language model that isn’t explainable.
What he’s pointing out here is what I see as the crux of the issue — LLMs, like ChatGPT, aren’t knowledge models, they’re language models.
That’s how they should be used — as language models, and we should leverage their strengths, such as synthesizing information.
That’s where a retrieval-augmented approach comes in.
Retrieval-Augmented NLP
With the retrieval-augmented approach, Professor Potts explains, a retriever finds relevant information in a large corpus by using neural search to surface the documents most similar to a query. That relevant information is then fed to an LLM, which can synthesize it.
Source — Retrieval-augmented approach where a retriever finds documents similar to a query and passes them to an LLM to synthesize a response.
As Professor Potts states, you can take the relevant documents and “synthesize them into a single answer, maybe meeting the user’s information need directly.” One benefit of this is that the user no longer needs to read through multiple retrieved documents to find the answer; the LLM synthesizes the information from those documents into a single response.
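Here is a rough sketch of that flow, using the open-source sentence-transformers library as the neural retriever over a toy corpus. The documents are invented for illustration, and call_llm stands in for whatever LLM API you actually use:

```python
# Sketch: retrieval-augmented generation with a toy corpus.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "Q3 report: revenue grew 12% year over year.",
    "Email from finance: the Q3 marketing budget was cut by 5%.",
    "Meeting notes: the product launch slipped from Q3 to Q4.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(corpus, normalize_embeddings=True)

query = "What happened to the budget and the launch in Q3?"
query_vec = encoder.encode([query], normalize_embeddings=True)[0]

# Cosine similarity; the vectors are normalized, so a dot product suffices.
scores = doc_vecs @ query_vec
top_k = np.argsort(scores)[::-1][:2]
retrieved = [corpus[i] for i in top_k]

# Feed only the retrieved documents to the LLM and ask it to synthesize.
prompt = (
    "Using only the documents below, answer the question in one paragraph.\n\n"
    + "\n".join(f"- {doc}" for doc in retrieved)
    + f"\n\nQuestion: {query}"
)

# answer = call_llm(prompt)  # hypothetical LLM call
print(prompt)
```

The LLM is doing what it's good at, turning the retrieved text into a readable answer, while the retriever decides what information the answer is based on.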
Not only does this simplify the user’s life, but it also helps address the issue of explainability.
With a retriever, you know where the results come from. You know the source of the relevant information that was fed to the LLM to generate the response. This helps address the explainability and trustworthiness issues associated with LLMs.
Vector Databases: A Path to Explainability
Vector databases are a popular way to implement the retriever function. For example, an enterprise can index its own data in a vector database and use it to retrieve relevant information to feed an LLM. This could help a manager who would otherwise have to sift through various data sources, such as emails, notes, and reports, to find specific information.
The vector database could retrieve the information and feed it to an LLM. The manager could then prompt the LLM with tasks such as synthesizing the information from the various sources into a single response or summarizing the information from each of the sources.
A vector database can provide the document IDs for the documents that it retrieved — so you know the exact sources of information that the LLM is using.
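As a sketch of how that works in practice, here is a toy example that uses a FAISS index as a stand-in for a vector database. The document IDs and contents are invented for illustration, but the key point carries over: the search returns positions that map back to known IDs, so you can always say which sources were handed to the LLM:

```python
# Sketch: retrieval that keeps track of document IDs for provenance.
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

documents = {
    "email-0042": "Email: the vendor contract renews on March 1.",
    "notes-0007": "Notes: legal flagged a clause in the vendor contract.",
    "report-q3": "Report: vendor spend was flat in Q3.",
}
doc_ids = list(documents.keys())

encoder = SentenceTransformer("all-MiniLM-L6-v2")
vectors = encoder.encode(list(documents.values()), normalize_embeddings=True)

index = faiss.IndexFlatIP(vectors.shape[1])  # inner product == cosine here
index.add(np.asarray(vectors, dtype="float32"))

query = "Is there anything I should know about the vendor contract?"
query_vec = encoder.encode([query], normalize_embeddings=True).astype("float32")

scores, positions = index.search(query_vec, 2)
sources = [doc_ids[p] for p in positions[0]]  # the exact sources used

context = "\n".join(f"[{doc_id}] {documents[doc_id]}" for doc_id in sources)
prompt = (
    "Answer using only the sources below and cite their IDs.\n\n"
    f"{context}\n\nQuestion: {query}"
)

print("Retrieved sources:", sources)
# answer = call_llm(prompt)  # hypothetical LLM call, as before
```

Because the retrieved IDs travel alongside the prompt, the final answer can cite them, and anyone reviewing the response can check the underlying documents.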
This makes LLMs less of a black box by allowing for better explainability, transparency, and trustworthiness. As mentioned previously, these are critical items when considering using LLMs in areas such as healthcare, finance, and law, where users need to know how a model came up with its response.
Getting Rid of the Black Box
As Professor Potts stated, the “LLMs for everything” approach leads to a “big opaque model” which is a black box that really can’t be trusted or explained.
His proposed “retrieval-augmented in-context learning” approach addresses the explainability issue. It adds a retriever to feed an LLM relevant information. Since the source of this information is known, you now have a model that is more transparent and trustworthy.
So for your next NLP application, make it more explainable, trustworthy, and less of a black box by using a vector database as a retriever to feed your LLM relevant, transparent data. Your users will thank you for it.