Why Artificial Intelligence is a know-it-all, know-nothing

Over 500 million people trust Gemini and ChatGPT every month to keep them informed about everything from pasta to sex to homework. But if an AI tells you to cook pasta with gasoline, you probably shouldn't follow its advice on contraception or algebra either.

At the World Economic Forum in January, OpenAI CEO Sam Altman bluntly asserted: "I can't look into your brain to understand why you're thinking what you're thinking. But I can ask you to explain your reasoning and decide whether it sounds reasonable to me or not. … I think our AI systems will also be able to do the same thing. They'll be able to explain to us the steps from A to B, and we can decide whether we think those are good steps."


Knowledge requires justification

It's no surprise that Altman wants us to believe that large language models (LLMs) like ChatGPT can produce transparent explanations for everything they say: without good justification, nothing people believe or suspect amounts to knowledge. Why not? Think about when you feel comfortable saying you know something. That is most likely when you are fully confident in your belief because it is well supported – by evidence, arguments, or the testimony of trusted authorities.

LLMs are meant to be trusted authorities – reliable providers of information. But unless they can explain their reasoning, we cannot know whether their claims meet our standards of justification. For example, suppose you tell me that today's haze in Tennessee is caused by wildfires in western Canada. I might take you at your word. But suppose that yesterday you swore to me, in all seriousness, that snake fights are a routine part of a Ph.D. defense. Then I know I cannot rely on you completely. So I might ask why you think the smog is a result of the Canadian wildfires. For my belief to be justified, it matters that I know your report is credible.

The problem is that today's AI systems cannot earn our trust by sharing the reasoning behind what they say, because no such reasoning exists. LLMs are not remotely designed to reason. Instead, the models are trained on vast amounts of human writing to detect and then predict, or extend, complex patterns in language. When a user enters a text prompt, the response is simply the algorithm's projection of how that pattern will most likely continue. These outputs (increasingly) convincingly mimic what a knowledgeable human might say. But the underlying process has nothing to do with whether the output is justified, let alone true. As Hicks, Humphries and Slater put it in "ChatGPT is bullshit," LLMs "aim to produce text that appears truthful without any actual concern for truth."
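To make that concrete, here is a minimal sketch of next-token prediction – not from the article, and assuming Python with the Hugging Face transformers library and the small, openly available GPT-2 model as stand-ins for a production LLM. All the model does is rank plausible continuations of the prompt's pattern; at no point does it consult evidence or construct a justification.

# Minimal sketch: next-token prediction with a small open model (GPT-2).
# The model scores likely continuations of a text pattern; nothing in this
# process checks whether a continuation is true or justified.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Today's haze in Tennessee was caused by"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probabilities for the next token: a ranking of likely-sounding
# continuations, not an assessment of the evidence for any of them.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)
for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id)!r}: {prob:.3f}")

Whichever words come out on top are simply the ones that best continue the pattern; that ranking is the whole of what the system delivers.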

So if AI-generated content is not an artificial equivalent of human knowledge, what is it? Hicks, Humphries and Slater are right to call it bullshit. Still, a lot of what LLMs spit out is true. When these "bullshitting" machines produce factually accurate output, they produce what philosophers call Gettier cases (after the philosopher Edmund Gettier). These cases are interesting because of the strange way they combine true beliefs with ignorance about the justification for those beliefs.

AI outputs can be like a mirage

Consider this example from the writings of the eighth-century Indian Buddhist philosopher Dharmottara: Imagine we are searching for water on a hot day. Suddenly we see water – or so we think. In fact, we are not seeing water but a mirage; yet when we reach the spot, we are lucky and find water right there, under a rock. Can we say that we had genuine knowledge of water?

People generally agree that whatever knowledge is, the travelers in this example do not have it. Instead, they were lucky to find water exactly where they had no good reason to believe they would find it.

The point is that whenever we think we know something we learned from an LLM, we put ourselves in the same position as Dharmottara's travelers. If the LLM was trained on a high-quality dataset, then quite likely its claims will be true. Those claims can be likened to the mirage. And the evidence and arguments that could justify those claims probably do exist somewhere in its dataset – just as the water welling up under the rock turned out to be real. But the justificatory evidence and arguments that probably exist played no role in the LLM's output – just as the existence of the water played no role in creating the illusion that supported the travelers' belief they would find it there.

Altman's reassurances are therefore deeply misleading. If you ask an LLM to justify its output, what will it do? It will not give you a real justification. It will give you a Gettier justification: a natural language pattern that convincingly mimics a justification. A chimera of a justification. As Hicks et al. would put it, a bullshit justification. Which, as we all know, is no justification at all.

Right now, AI systems regularly mess up, or "hallucinate," in ways that let the mask slip. But as the illusion of justification becomes more convincing, one of two things will happen.

For those who understand that true AI content is one big Gettier case, an LLM's patently false claim to be explaining its own reasoning will undermine its credibility. We will know that AI is being deliberately designed and trained to be systematically deceptive.

And for those of us who are unaware that AI spits out Gettier justifications – fake justifications? Well, we will simply be deceived. To the extent that we rely on LLMs, we will be living in a kind of quasi-matrix, unable to sort fact from fiction and unaware that we should even be worried there might be a difference.

Each output must be justified

In weighing the significance of this predicament, it's important to keep in mind that there is nothing wrong with LLMs working the way they do. They are incredible, powerful tools. And people who understand that AI systems spit out Gettier cases rather than (artificial) knowledge already use LLMs in a way that takes this into account. Programmers use LLMs to draft code, then apply their own coding expertise to modify it according to their own standards and purposes. Professors use LLMs to draft essay prompts, then revise them according to their own pedagogical goals. Any speechwriter worthy of the name during this election cycle will carefully fact-check every AI draft before letting their candidate take the stage with it. And so on.

But most people turn to AI precisely where we lack expertise. Think of teenagers researching algebra… or contraception. Or seniors seeking dietary or investment advice. If LLMs are going to mediate the public's access to this kind of crucial information, then at the very least we need to know whether and when we can trust them. And trust would require knowing the very thing LLMs cannot tell us: whether and how each output is justified.

Fortunately, you probably know that olive oil works much better than gasoline for cooking spaghetti. But what dangerous recipes for reality have you swallowed whole, without ever asking for their justification?

