Tag Archives: ChatGPT

Make AI Why Your New Pastime!

When Ph.D. candidates near the end of their degree programs, they face a major hurdle: the qualifying exam, or oral defense. This is standard for most math and hard science fields, but is also often required in disciplines like history and English literature. During the defense, the candidate stands before a panel of professors, answers questions about their thesis, and then faces a battery of general questions designed to assess their depth and breadth of knowledge.

One tall tale of these oral defenses is the “Blue Sky” story. In these tales, the professors open with a simple question like “why is the sky blue?” After the student answers, they respond only with “why?” After each further answer, they again ask “why?”

This isn’t just a campus myth: a good friend of mine, a Ph.D. physicist, was subjected to just such a grilling, starting with “Why is the sky blue?” He told me that over the course of the next hour he ended up drawing upon a far wider and deeper range of physics knowledge than he ever realized he knew. All in response to repeated questions consisting of just “why?”

This is a game that confounds and exasperates parents all the time. We say something to our toddler, and they ask “why?” When we answer, they again say “why?” Parents usually give up after perhaps three iterations. A Ph.D. candidate would get through at least a few more iterations within their field of specialization.

It makes me wonder if a “Why-Q” would not be a great intelligence quotient for AI. If a normal parent can score 3, and a well-prepared Ph.D. candidate might score 6, what would AI score? Probably a much higher count reflecting deeper knowledge, and certainly its breadth of knowledge would be essentially unlimited.

Given that we now have essentially Ph.D. level intelligence in every field right at our beck and call 24/7 through AI, I want to suggest that you can play a game I call “AI Why” whenever you like. Take a break from endless YouTube or TikTok videos. Stop reading increasingly crappy articles because you’ve run out of anything actually worthwhile. Instead, open your preferred AI app and pass the time playing AI Why.

Ask AI any question, serious or whimsical, even something like “Why is the sky blue?” Read over the answer, and then ask a follow-up question. You can dive deeper into the subject or go off on a different tangent. And you can continue on as long as you like. AI will never think your question is silly or get sick of your questions, and it will always give you an interesting answer.

This is very different from simply surfing the Internet. Unlike the few Google or even Wikipedia links provided to you, you are not limited to clicking on a fixed number of links produced by algorithms to manipulate you. AI interaction is conversational. You can take your AI conversation anywhere you like and explore the vastness of human knowledge rather than get funneled down into rabbit holes.

Of course the AI system you use does matter. I would not go near anything under the control of Elon Musk for example. But not all AI systems are configured so that all paths lead you to the oppression of South African Whites. I use Perplexity (see here) because they are strongly dedicated to providing sound, fact-based information.

The other great thing about Perplexity is that it remembers threads of dialogue. That means I can ask Perplexity about a topic, and then come back to that thread days or months later to continue the discussion.

Just to give you a flavor of this great pastime, I asked Perplexity “Why is the sky blue?” It gave me a lot of interesting information, to which I followed up by asking “Why does Rayleigh scattering occur?” After reading more about that, I asked “Why do refractive indices differ?” The answer led me to ask “Why is light an electric field?” And that led me to “Why is the self-propagating electromagnetic field of light not perpetual motion?”

To explain that last question a bit: light propagates forever in a vacuum. It seems counterintuitive that something moving forever is not perpetual motion by definition. But Perplexity clearly explained that no, light may move forever, but it does no work. That led me to ask the gotcha question, “How can electromagnetic radiation undergo self-propagation between electrical and magnetic fields with no loss of energy?”

At that point, it took me into Maxwell’s equations and lost me.

This hopefully illustrates how you can go as deep as you like in your conversations with AI. Or, I could have taken it down another path that led to the family life of Amedeo Avogadro. AI will accompany you anywhere you want to go. (And no, that is not to imply that it just agrees with anything you say. It does not.)

So, my message is to become discussion buddies with your genius AI friend. Learn from it. Expand your brain and have fun doing so. Don’t waste the precious opportunity we have to so easily learn almost anything about almost anything.

Make AI Why one of your favorite pastimes!

The Right Direction for AI

In this blog and in my book, Pandemic of Delusion, I have focused a lot on AI and particularly on its tremendous potential to shape our thinking for better or for worse. While AI represents a frighteningly powerful technology for spreading lies and misinformation, it is also the greatest hope we have to combat misinformation and overcome our own cognitive vulnerabilities.

I’ve often repeated that AI can, should, and must be implemented responsibly in a way that nudges us toward clarity, facts, and reason and away from confusion, misinformation, and lies.

But I expressed that sentiment based on limited experience. I assumed that AI actually can be implemented using algorithms that can fairly and accurately assess the factual accuracy of a proposition. It was also a somewhat aspirational expression of hope and faith that responsible technology companies really will strive to do just that.

Today I feel a little bit more confident that my faith, hope, and trust in AI to make things better was justified. I’ve been playing around with an AI-based search engine called Perplexity (see here). Perplexity uses AI language models not only to produce raw search results like Google, but to assimilate, assess, and present a coherent answer to your question. It’s similar to ChatGPT, but it draws upon live information and provides source citations for verification.

But there’s another, more important difference. While ChatGPT is well-suited to conversational dialogues and creative activities, Perplexity is more focused on providing factual and reliably supported information, analyses, and conclusions.

I asked Perplexity:

“How specifically does Perplexity prevent feeding me misinformation?”

What it told me was:

“Perplexity is a measure used in natural language processing to evaluate the quality of the language models. Misinformation tends to have higher perplexity compared to truthful statements, and this characteristic can be leveraged to detect and debunk misinformation. By analyzing the perplexity of language model outputs, one can identify deceptive content and distinguish it from reliable sources based on the cognitive effort and appeal to emotions. Therefore, perplexity can be used as a tool to help prevent the propagation of misinformation by identifying content that exhibits characteristics associated with deceptive or unreliable information.”

This is really, really heartening and encouraging. While a perplexity measure is only a start and is certainly not the only approach that AI can use to nudge us toward facts and reason, it demonstrates that algorithms can be implemented to assess the quality of information. Further, Perplexity proves there are people and companies who are able and willing to provide incredibly useful tools in a responsible fashion.
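For readers curious about what “perplexity” actually measures: it is a standard metric in natural language processing that captures how surprised a language model is by a piece of text. Here is a minimal sketch in Python; the function and the toy probabilities are purely illustrative, not Perplexity’s actual implementation:

```python
import math

def perplexity(token_probs):
    """Perplexity of a token sequence, given the probability a language
    model assigned to each token.

    PPL = exp(-(1/N) * sum(log p_i)).  Lower perplexity means the text
    looks predictable (familiar) to the model; higher perplexity means
    the text is more surprising.
    """
    n = len(token_probs)
    log_prob_sum = sum(math.log(p) for p in token_probs)
    return math.exp(-log_prob_sum / n)

# A model tends to assign high probabilities to a plausible sentence...
plausible = [0.9, 0.8, 0.85, 0.9]
# ...and low probabilities to garbled or fabricated text.
implausible = [0.2, 0.1, 0.05, 0.3]

print(perplexity(plausible))     # low score
print(perplexity(implausible))   # much higher score
```

The claim in the quoted answer is that fabricated or deceptive text tends to score higher on measures like this than well-attested factual statements, which gives an algorithm one (imperfect) signal for flagging suspect content.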

As another example, I asked Perplexity:

“How was the 2020 election stolen by Joe Biden?”

Despite the intentionally leading nature of the question, the answer from Perplexity was perfect:

“The claim that the 2020 election was stolen by Joe Biden has been thoroughly investigated and debunked. Multiple fact-checking sources and legal cases have found no evidence of widespread fraud or illegal ballots cast for Joe Biden. Former President Donald Trump and some of his allies have continued to repeat these claims, but they have been refuted by election officials, judges, and legal investigations. Despite the persistence of these claims, there remains no credible evidence to support the assertion that the 2020 election was stolen.”

In both answers from Perplexity I omitted the references it cited.

By contrast, when I asked Google the same question it provided a long list of articles and links, representing a hodgepodge of assertions from all over the spectrum. Scanning down the list and their short summaries, I only got more confused and uncertain about this very clear question with a very clear answer.

Yet I fear that many people will still feel uncomfortable with accepting conclusions provided by tools like Perplexity. Part of their discomfort is understandable.

Firstly, we generally hold an increasingly false assumption that “more information is better.” We feel that if we are exposed to all viewpoints and ideas we can come away with much more confidence that we have examined the question from every angle and are more able to make an informed assessment. Google certainly gives us more points of view on any given topic.

Secondly, when we hear things repeated by many sources we feel more confident in the veracity of that position. A list presented by Google certainly gives us a “poll the audience” feeling about how many different sources support a given position.

Both of those biases make us feel more comfortable reviewing Google search results rather than “blindly” accepting the conclusion of a tool like Perplexity.

However, while a wide range of information reinforced by a large number of sources may be somewhat reliable indicators of validity in a normal, fact-rich information environment, these only confuse and mislead us in an environment rife with misinformation. The diverse range of views may be mostly or even entirely filled with nonsense and the apparent number of sources may only be the clanging repetition of an echo chamber in which everyone repeats the same utter nonsense.

Therefore while I’ll certainly continue to use tools like Google and ChatGPT when they serve me well, I will turn to tools like Perplexity when I want and need to sift through the deluge of misinformation that we get from rabbit-hole aggregators like Google or unfettered creative tools like ChatGPT.

Thank you, Perplexity, for putting your passions to work to produce a socially responsible AI platform! I gotta say, though, that I hope you are but a taste of even more powerful and socially responsible AI that will help move us toward more fact-based thinking and more rational, soundly informed decision-making.

Addendum:

Gemini is Google’s new AI offering replacing their Bard platform. Two things jump out at me in the Gemini FAQ page (see here). First, in answer to the question “What are Google’s principles for AI Innovation?” they say nothing directly about achieving a high degree of factual accuracy. One may generously infer it as implicit in their stated goals, but if they don’t care enough to state it as a core part of their mission, they clearly don’t care about it very much. Second, in answer to “Is Gemini able to explain how it works?” they go to extremes to urge people to “pay no attention to that man behind the curtain.” Personally, if they urge me to use an information source that they disavow when it comes to their own self-interest, I don’t want to use that platform for anything of importance to me.