I know, you’re thinking, Rich is pulling my leg. Well, it is amusing, but I’m afraid it is also all too real.
Recently, one of the top people at Perplexity – a popular Artificial Intelligence platform – explained in an interview how an AI can give totally made-up answers. The explanation was technical, but he actually used the word “hallucinate”. That wasn’t my idea.
So what are we talking about here? AIs aren’t human – there’s nobody there. They don’t have judgment. There is no internal “reality check” on the answers or text they turn out.
If you asked someone about the population of New York City, and they said, “thirteen Eskimos,” you’d know either they didn’t hear your question, they are making a joke, or they are stark staring mad. You wouldn’t expect an answer completely off beam like that in a serious conversation, and if you got one, you’d know something was wrong.
Yet an AI is completely capable of coming up with something that far off. The guru from Perplexity explained how that can actually happen.
If you’ve been working with AIs more than a little, you have probably seen things like that.
There are two big problems with this.
For one, too many people take the output of an AI as gospel, uncritically accepting it and using it or passing it along. Part of this is wishful thinking. It is such a great time-saver to have an AI write your copy for you. It is so comforting to have an AI put your confusions to rest with definitive answers.
Part of this is an irrational impulse to put one’s faith in an AI over and above humans. Do people forget? Ultimately, everything that comes from an AI originates in the actions and decisions of human beings.
The other big problem is that what the AI comes up with can be very plausible. The example I gave above is obvious, but what if you asked an AI about some historical figure you didn’t know much about, and it gave a lengthy, detailed answer – with wrong dates, wrong locations, and incorrect events? It looks good. How will you know whether it is hallucinating? You’d have to fact-check it, and that loses a lot of the advantage of using an AI at all.
If it gives you a wrong phone number, you’ll find out when you get the wrong person at the other end. If it gives you bad legal advice, you may not find out until the lawsuit arrives.
So where does this leave us? There are absolutely ways in which AIs can greatly assist your efforts. And they are undoubtedly going to keep getting better. But you are setting yourself up for a fall if you don’t find out how you can use an AI safely – and how you can’t.
You don’t blindly trust the word of a stranger on important things.
Well, isn’t an AI a stranger?