The Right Direction for AI

In this blog and in my book, Pandemic of Delusion, I have focused a lot on AI and particularly on its tremendous potential to shape our thinking for better or for worse. While AI represents a frighteningly powerful technology for spreading lies and misinformation, it is also the greatest hope we have to combat misinformation and overcome our own cognitive vulnerabilities.

I’ve often repeated that AI can, should, and must be implemented responsibly in a way that nudges us toward clarity, facts, and reason and away from confusion, misinformation, and lies.

But I expressed that sentiment based on limited experience. I assumed that AI actually can be implemented using algorithms that can fairly and accurately assess the factual accuracy of a proposition. It was also a somewhat aspirational expression of hope and faith that responsible technology companies really will strive to do just that.

Today I feel a little bit more confident that my faith, hope, and trust in AI to make things better was justified. I’ve been playing around with an AI-based search engine called Perplexity (see here). Perplexity uses AI language models not only to return raw search results the way Google does, but to assimilate, assess, and present a coherent answer to your question. It’s similar to ChatGPT, but it draws upon live information and provides source citations for verification.

But there’s another, more important difference. While ChatGPT is well suited to conversational dialogue and creative activities, Perplexity is more focused on providing factual, reliably supported information, analyses, and conclusions.

I asked Perplexity:

“How specifically does Perplexity prevent feeding me misinformation?”

What it told me was:

“Perplexity is a measure used in natural language processing to evaluate the quality of the language models. Misinformation tends to have higher perplexity compared to truthful statements, and this characteristic can be leveraged to detect and debunk misinformation. By analyzing the perplexity of language model outputs, one can identify deceptive content and distinguish it from reliable sources based on the cognitive effort and appeal to emotions. Therefore, perplexity can be used as a tool to help prevent the propagation of misinformation by identifying content that exhibits characteristics associated with deceptive or unreliable information.”

This is really, really heartening and encouraging. While a perplexity measure is only a start, and certainly not the only approach AI can use to nudge us toward facts and reason, it demonstrates that algorithms can be implemented to assess the quality of information. Further, Perplexity shows that there are people and companies able and willing to provide incredibly useful tools in a responsible fashion.
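For readers curious what a perplexity score actually looks like in practice, here is a minimal sketch of the general idea using the open-source GPT-2 model and the Hugging Face transformers library. To be clear, this is only an illustration I put together of how a language model’s perplexity can be computed for a piece of text; it is not Perplexity’s own implementation, and the example sentences are my own.

```python
# Illustrative sketch only: compute a language model's perplexity for a piece of text.
# Lower perplexity means the model finds the text more fluent and expected.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    # Ask the model to predict each token from the tokens that precede it.
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Supplying labels makes the model return the average cross-entropy loss.
        loss = model(**inputs, labels=inputs["input_ids"]).loss
    # Perplexity is the exponential of the average per-token cross-entropy.
    return torch.exp(loss).item()

print(perplexity("Election officials and courts found no evidence of widespread fraud."))
print(perplexity("Secret satellites flipped bamboo ballots stolen votes everywhere proof."))
```

In this toy example, the garbled second claim should score a noticeably higher perplexity than the first. As Perplexity’s own answer above suggests, a signal like this is only a rough proxy for reliability, one ingredient among many rather than a truth detector on its own.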

As another example, I asked Perplexity:

“How was the 2020 election stolen by Joe Biden?”

Despite the intentionally leading nature of the question, the answer from Perplexity was perfect:

“The claim that the 2020 election was stolen by Joe Biden has been thoroughly investigated and debunked. Multiple fact-checking sources and legal cases have found no evidence of widespread fraud or illegal ballots cast for Joe Biden. Former President Donald Trump and some of his allies have continued to repeat these claims, but they have been refuted by election officials, judges, and legal investigations. Despite the persistence of these claims, there remains no credible evidence to support the assertion that the 2020 election was stolen.”

In both answers from Perplexity I omitted the references it cited.

By contrast, when I asked Google the same question, it provided a long list of articles and links representing a hodgepodge of assertions from all over the spectrum. Scanning down the list and its short summaries, I only became more confused and uncertain about what is a very clear question with a very clear answer.

Yet I fear that many people will still feel uncomfortable accepting conclusions provided by tools like Perplexity. Part of that discomfort is understandable.

Firstly, we generally hold the increasingly false assumption that “more information is better.” We feel that if we are exposed to all viewpoints and ideas, we can come away with much more confidence that we have examined the question from every angle and are better able to make an informed assessment. Google certainly gives us more points of view on any given topic.

Secondly, when we hear things repeated by many sources we feel more confident in the veracity of that position. A list presented by Google certainly gives us a “poll the audience” feeling about how many different sources support a given position.

Both of those biases would make us feel more comfortable reviewing Google search results rather than “blindly” accepting the conclusion of a tool like Perplexity.

However, while a wide range of information reinforced by a large number of sources may be a somewhat reliable indicator of validity in a normal, fact-rich information environment, those same signals only confuse and mislead us in an environment rife with misinformation. The diverse range of views may be mostly or even entirely nonsense, and the apparent number of sources may be nothing more than the clanging repetition of an echo chamber in which everyone repeats the same utter nonsense.

Therefore, while I’ll certainly continue to use tools like Google and ChatGPT when they serve me well, I will turn to tools like Perplexity when I want and need to sift through the deluge of misinformation we get from rabbit-hole aggregators like Google or unfettered creative tools like ChatGPT.

Thanks to you, Perplexity, for putting your passions to work to produce a socially responsible AI platform! I gotta say, though, that I hope you are but a taste of even more powerful and socially responsible AI that will help move us toward more fact-based thinking and more rational, soundly informed decision-making.

Addendum:

Gemini is Google’s new AI offering, replacing their Bard platform. Two things jump out at me on the Gemini FAQ page (see here). First, in answer to the question “What are Google’s principles for AI Innovation?” they say nothing directly about achieving a high degree of factual accuracy. One may generously infer it as implicit in their stated goals, but if they don’t care enough to state it as a core part of their mission, they clearly don’t care about it very much. Second, in answer to “Is Gemini able to explain how it works?” they go to extremes to urge people to “pay no attention to that man behind the curtain.” Personally, if they urge me to use an information source that they themselves disavow whenever their own self-interest is at stake, I don’t want to use that platform for anything of importance to me.