Category Archives: Fact-Based Thinking

Staying Sane Is Hard Work

Sliding down into delusion is seductive, easy, and fun. Modern information technology is making it ever harder to resist. Staying sane, on the other hand, is hard work—and it is getting harder every day.

The internet has made it possible for infectious ideas to spread faster than any physical disease. For a virus to circle the globe, you need mutations and air travel. To become infected by fake news and dangerous ideas, you need only a Wi‑Fi connection. Modern technology exposes us to vastly more information than ever before, much of it unhealthy, and every time our neural networks are exposed to bad information, it feels a bit more sensible to us—even if we know it is fake. Mere repeated exposure wears ever‑deepening grooves of familiarity into our brains. The more we see, hear, and click on a claim, the more reasonable it feels. Eventually, insidiously, it becomes self‑evident—common sense that seems inescapable.

In the past, news was filtered through human editors and gatekeepers. They certainly had their biases and blind spots, but at least someone was nominally responsible for quality. Today, sources like Facebook, Fox News, YouTube, podcasts, X/Twitter, and even our government have largely abandoned any obligation to fact‑check before amplifying. They create the illusion of informed reporting but are often almost completely untethered to reality. Their algorithms and personalities have one overriding job: keep you engaged. They notice what you watch or click and then say, in effect, “If you believe that, then check this out!” They do not care whether they are feeding you solid science or the latest conspiracy theory; they only care whether you will stay tuned in and click some more. The responsibility to sort out well‑supported information from unsupported claims, sound logic from specious arguments, is pushed entirely onto you.

That would be a tall order even if our brains were perfectly rational. They aren’t. Imagine you are curious about a fringe idea like Bigfoot. You type “proof of Bigfoot” into a search engine or social platform, intending to investigate skeptically. You will quickly find articles, videos, posts, and even reality shows arguing that Bigfoot is at least plausible or even real. Because you clicked, the algorithms learn that Bigfoot content “works” on you and begin to serve you more of it: more sightings, grainy photos, confident testimony. Before long, your feed is heavily populated by Bigfoot believers. From your perspective, it starts to look as if there is an enormous body of evidence out there. Everywhere you look, people treat the idea seriously. If so many people think there is something to it, there must be something to it.

In reality, you are being drawn out onto ever thinner and more dangerous limbs. The algorithm nudges you along in little steps, each of which seems perfectly solid and reasonable. This process does not just happen with Bigfoot. It happens with vaccine myths, climate denial, election lies, cultish political beliefs, and every other infectious or click‑inducing idea. The result is that many people come to feel they have made a careful, “objective” study of an issue when in fact they have been drawn, step by step, down a rabbit hole into an Alice in Wonderland alternate reality.

We cannot redesign the global information system by ourselves, but we can develop habits that make us harder to capture. One simple practice is to explicitly search for the reverse of whatever you are investigating. If you search for “proof of Bigfoot,” deliberately also search for “debunking Bigfoot claims,” and click on those results often enough that the search engines learn you will reliably choose that kind of content too. This at least gives you some exposure to different perspectives. Both sides might still be exaggerated, but you are less likely to be left with the illusion that everyone agrees with one side only.

Another, related technique is to always look back to first principles. If you only consider that next little step out along the branch, it will seem safe and sensible. But if you stop and look back at how far you have wandered from the solid trunk, you quickly realize that you are dangerously far out on a limb. Having acknowledged that we do occasionally discover new species, must we really therefore admit that a hitherto undiscovered tribe of Bigfoot might actually exist?

It also matters where you spend your time. Just as like‑minded people congregate in person, different online communities attract and cultivate different kinds of thinkers. Choose to frequent healthy online environments. That is not to say you should avoid diverse ideas; but if rumor, outrage, and unvetted claims infect the community or the platform itself, you will become infected. Seek out vibrant but serious gathering sites where people demand citations, scrutinize sources, and correct obvious nonsense. If you stick to them, your own brain will become better at recognizing sound evidence and logic, as well as specious arguments. If the level of discourse on a trusted site degrades, you should leave and stop exposing your brain to it.

Given all the infectious information we are unavoidably exposed to, it is no surprise that people sometimes slip from belief into delusion. Beliefs, at least in principle, are subject to change. We might hold them strongly, but new evidence can persuade us to reconsider. When a belief becomes impervious to change—when no amount of contrary evidence, no matter how strong or consistent, is allowed to matter—it has crossed over into delusion. Using that word makes many professionals uneasy. In a clinical setting, “delusional” has a specific meaning and diagnostic criteria. Nevertheless, in the generally accepted lay domain, delusion is the proper word to describe thinking patterns that have become impervious to evidence or reason.

When a person or a movement has fallen prey to delusional ideas, when contrary facts are dismissed out of hand or reinterpreted as attacks, we no longer function in the realm of honest disagreement. We are locked into a self‑reinforcing mental world that will not adjust to reality. In a culture where influencers dominate the discourse, the rest of us are put at risk. Delusions can be comforting, energizing, and politically useful, but facts always assert themselves in the end. Reality does not care if you believe in it.

As a result of so many infectious ideas being disseminated so quickly, we are currently suffering from a global pandemic of delusion. We cannot wipe it out, but we can protect ourselves and try not to contribute to its spread. We can monitor our own information diets, seek out counter‑evidence, choose better communities, learn to better assess claims, and be more precise in our language. We can and must resist being nudged toward delusion. As susceptible as our brains are to misinformation, they can also be trained to better assess the soundness of claims and to detect specious arguments.

The way repetition reshapes our memories and our very perceptions, the way algorithms exploit our pattern‑seeking brains, the way beliefs slide, inch by inch, into full‑blown delusion—all of these dynamics, and many others, are at work in our politics, our media, our religions, and our personal lives. In my book Pandemic of Delusion: Staying Rational in an Increasingly Irrational World (see here), I unpack those mechanics in much greater detail, with concrete examples and practical tools for recognizing when you, or someone you care about, is being nudged away from reality. If this short essay inspires you to want to bolster your defenses, the book will provide you with a practical field guide: offering insight as to why we are so susceptible to misinformation, how to recognize it, and how to immunize yourself against it. It will give you a fighting chance to stay sane when the world around you seems determined to drive you crazy.

Star Trek Reality Check

Star Trek and Star Wars offer visions of the future that have become so familiar that it’s all too easy to over-credit the plausibility of the technologies they present. But how much of what they depict is plausible science fiction and how much is implausible science fantasy?

Modern physics is incomplete, but not in the sense that it’s going to casually overturn core constraints like the light‑speed limit, energy conservation, or causality. Any future theory will still be bounded by those hard limits where we’ve already measured them to absurd precision. So betting that some future “breakthrough” will make Star Trek‑style tech real is not cautious skepticism; it’s wishful thinking.

First and most fundamentally, let’s start with the Vulcans visiting Earth. As much as we like to fantasize about technologically advanced aliens visiting us now or ever, to help us or to destroy us, this is implausible. As I discuss in my book (see here) and in this blog article (see here), aliens certainly exist, but they can never visit us. There is only an extremely remote chance that we could ever even detect signs that they existed somewhere, at some time, in the distant past.

Yes, you can always wave your hands and say “maybe some unknown physics will let them come here,” but that’s not reasoning, it’s magical thinking. Given what we already know about distances, speeds, energy, radiation, and biology, the probability that flesh‑and‑blood aliens will ever cross interstellar gulfs and happen to visit us is effectively zero. Not small, not unlikely, but zero.

I state that as strongly as possible because it is so critical to understand. And of course, since no alien could possibly ever visit us, it is equally implausible that we could ever visit them. The only remote possibility would be sentient machines that could survive inhumanly long and dangerous journeys. In this sense, the Transformers franchise (those installments in which organic makers are canon) may be the most plausible science fiction. I also depict such a plausible “space travel” science fiction in my short story The Dandelion Project (see here).

So while virtually everything that follows in Star Trek cannot happen, let’s set aside the basic implausibility of interstellar space travel and look at some of the other fictions that writers concoct to make it all seem plausible.

First, there is warp drive, which overcomes the inconvenient realities of time and space. This is science-flavored magic. While the physics of faster-than-light travel may have some plausibility at the mathematical level, it has zero plausibility at practical scale. Faster-than-light travel isn’t just “very hard.” It clashes directly with the way spacetime is structured. To get around the speed limit you have to either break causality (allow time-travel paradoxes) or rely on enormous quantities of exotic matter that may not exist in any usable form. When a “solution” demands both magic materials and broken causality, that’s not serious speculation; that’s fantasy dressed in equations.

This is similarly true of the magical energy sources that science fantasy writers concoct to make the fantastic power requirements seem plausible. They construct antimatter reactors stabilized in a dilithium matrix. Again, even where antimatter technologies are theoretically plausible, they are effectively hopeless in any practical sense. Antimatter is real and ridiculously energy-dense, but producing and storing it in useful quantities is so far beyond plausible engineering that it may as well be sorcery. Talking about “antimatter reactors” powering star cruisers is like proposing a jet engine that runs on bottled lightning captured in jars. You can write that into a script and make it sound theoretically plausible, but you simply cannot build it in this universe.

The implausible power requirements involved in fantasy space travel also apply to weaponry. Hand phasers and similar variations are simply implausible. Directed energy starship weaponry is somewhat plausible, but certainly nowhere remotely near the hull-slicing power depicted in the shows.

And speaking of weaponry, even if hand phasers were plausible, they would at best fire invisible millisecond bursts. Phaser gun fights would never happen. Advanced weaponry would have computer targeting and essentially never miss. One could certainly never “duck” out of the way of an energy beam. A hand‑held weapon that fires at or near light speed, with computerized targeting, does not produce Western‑style shootouts. Once the weapon can lock onto you, your chances of side‑stepping a beam that crosses the distance in microseconds are exactly zero. The only real “dodging” is not being targeted in the first place—and that’s a software and sensor game, not a reflex test.

The same logic destroys the idea of starship dogfights. If you ever had vehicles throwing serious energy around at interplanetary ranges, the fight would be decided by who detected whom first and whose fire control software shot first. It would last seconds, or less, and the human crew would learn the battle was over when the computer informed them that their enemy had been destroyed.

We don’t need to imagine futuristic AI to see the problem. Even today, guidance computers outclass human pilots in reaction speed, precision, and ability to juggle massive sensor inputs. Scale that up to space combat and the idea that a flesh‑and‑blood pilot is “flying” a starship in combat is as quaint as imagining a locomotive engineer sprinting ahead to lay track by hand.

In that vein, there would be no possibility of human (or any organic) navigators or tactical crew members. Computers would certainly handle all the piloting and targeting. There would be no time for a captain to shout even one real-time order as he’s flung around the bridge. Han Solo would not be able to pilot the Kessel Run safely in even a fraction of the time it would take a computer-controlled ship, if at all. Operating any function of a starship would not be a job for humans.

As to other technologies, transporters, replicators, “subspace” radios, and hard‑light holograms all have the same problem: each one quietly assumes away a core rule of the universe. They don’t just extrapolate technology; they ask you to believe that information, energy, and matter can be shuffled around with a casual disregard for limits that we’ve already measured in laboratories. That makes for great science fantasy, but it is not remotely plausible science fiction.

But there are a few places where I suspect they get the possibilities more right than wrong, even if only because of practical production and storytelling limitations.

There is the plausibility that many alien planets would be so familiar to us. Given that life can only evolve in a very limited set of conditions, and that the rules of physics, chemistry, and evolution are the same throughout the universe, I don’t find it implausible that many environments, and even many alien species, would be quite familiar or at least quickly understandable to us, both morphologically and biologically (see here). Life that can build radio telescopes is probably confined to a very narrow zone of temperatures, chemistry, and environmental stability. Under those shared constraints, evolution is pushed toward a limited set of workable body plans—limbs, mouths, sensory organs. So yes, there are good reasons to think that intelligence elsewhere might evolve a shape that is surprisingly close to our own. That doesn’t mean “humans with cranial ridges,” but it does mean that “unrecognizable swirling gas entities” are probably rarer than TV’s familiar human-like bipeds.

Also, one thing that Star Wars got right was recognizing that in the future all medical diagnoses and procedures would be performed exclusively by medical droids. I can understand that it would take all the fun out of the fiction if they also admitted that Han piloting the Millennium Falcon or Luke manning the gun turrets would be just as obsolete, even with the Force assisting them!

There is a fashionable kind of optimism that treats science as an unbounded well that can eventually make anything possible if we just “don’t close our minds.” That’s not how science works. Science narrows possibilities by discovering hard limits. We don’t say “maybe one day we’ll find a way around conservation of energy” or “maybe light will decide to go faster.” We already know that won’t happen. The technologies I’m calling fantasy aren’t just impractical; they lean on the hope that the universe will overturn its own rules to realize our fantasies.

Just to say, I love these science fantasy shows. If they depicted a more plausible Sol-bound future with computers basically running everything, they would be a whole lot less inspiring and engaging. But just as with a good horror or superhero movie, we can love the fantasy while still fully appreciating that it is mostly fantasy.

Often the distinction between science fiction and science fantasy becomes blurred in a world where science seems capable of such magical and limitless achievements. But it is still critical that we recognize science fantasy for what it is. If we fail to do so, we become susceptible to imagining that some fantastical future science will save us from actual threats, like climate change, that demand real solutions right now.

Make AI Why Your New Pastime!

When Ph.D. candidates near the end of their degree programs, they face a major hurdle: the qualifying exam, or oral defense. This is standard for most math and hard science fields, but is also often required in disciplines like history and English literature. During the defense, the candidate stands before a panel of professors, answers questions about their thesis, and then faces a battery of general questions designed to assess their depth and breadth of knowledge.

One tall tale about these oral defenses is the “Blue Sky” story. In these tales, the professors ask the candidate a simple question like “why is the sky blue?” After the student answers, they respond simply with “why?” After each further answer, they again ask “why?”

This isn’t just a campus myth: a good friend of mine, a Ph.D. physicist, was subjected to just such a grilling, starting with “Why is the sky blue?” He told me that over the course of the next hour he ended up drawing upon a far wider and deeper range of physics knowledge than he ever realized he knew. All in response to repeated questions consisting of just “why?”

This is a game that confounds and exasperates parents all the time. We say something to our toddler, and they ask “why?” When we answer, they again say “why?” Parents usually give up after perhaps three iterations. A Ph.D. candidate would get through at least a few more iterations within their field of specialization.

It makes me wonder if a “Why-Q” would not be a great intelligence quotient for AI. If a normal parent can score 3, and a well-prepared Ph.D. candidate might score 6, what would AI score? Probably a much higher count reflecting deeper knowledge, and certainly its breadth of knowledge would be essentially unlimited.

Given that we now have essentially Ph.D.-level intelligence in every field right at our beck and call 24/7 through AI, I want to suggest a game I call “AI Why” that you can play whenever you like. Take a break from endless YouTube or TikTok videos. Stop reading increasingly crappy articles because you’ve run out of anything actually worthwhile. Instead, open your preferred AI app and pass the time playing AI Why.

Ask AI any question, serious or whimsical, even something like “Why is the sky blue?” Read over the answer, and then ask a follow-up question. You can dive deeper into the subject or go off on a different tangent. And you can continue on as long as you like. AI will never think your question is silly or get sick of your questions, and it will always give you an interesting answer.

This is very different from simply surfing the Internet. Unlike Google or even Wikipedia, which limit you to clicking on a fixed set of links produced by algorithms designed to manipulate you, AI interaction is conversational. You can take your AI conversation anywhere you like and explore the vastness of human knowledge rather than get funneled down into rabbit holes.

Of course the AI system you use does matter. I would not go near anything under the control of Elon Musk for example. But not all AI systems are configured so that all paths lead you to the oppression of South African Whites. I use Perplexity (see here) because they are strongly dedicated to providing sound, fact-based information.

The other great thing about Perplexity is that it remembers threads of dialogue. That means I can ask Perplexity about a topic, and then come back to that thread days or months later to continue the discussion.

Just to give you a flavor of this great pastime, I asked Perplexity “Why is the sky blue?” It gave me a lot of interesting information, to which I followed up by asking “Why does Rayleigh scattering occur?” After reading more about that, I asked “Why do refractive indices differ?” The answer led me to ask “Why is light an electric field?” And that led me to “Why is the self-propagating electromagnetic field of light not perpetual motion?”

To explain that last question a bit: light propagates forever in a vacuum. It seems counter-intuitive that something moving forever is not perpetual motion by definition. But Perplexity clearly explained that no, light may move forever, but it does no work. That led me to ask the gotcha question, “How can electromagnetic radiation undergo self-propagation between electrical and magnetic fields with no loss of energy?”

At that point, it took me into Maxwell’s equations and lost me.

This hopefully illustrates how you can go as deep as you like in your conversations with AI. Or, I could have taken it down another path that led to the family life of Amedeo Avogadro. AI will accompany you anywhere you want to go. (And no, that is not to imply that it just agrees with anything you say. It does not.)

So, my message is to become discussion buddies with your genius AI friend. Learn from it. Expand your brain and have fun doing so. Don’t waste the precious opportunity we have to so easily learn almost anything about almost anything.

Make AI Why one of your favorite pastimes!

I Cannot Exaggerate Exaggeration Enough

Although numbers vary day to day and poll to poll, about 97% of Americans support deporting immigrants who commit violent crimes. About 52% support deporting immigrants who have committed nonviolent crimes. Only 32% support deporting all immigrants who entered illegally, and a vanishingly small number support expelling legal immigrants.

News and political commentators often cite these kinds of numbers to point out that people simultaneously support the deportation of criminals but not the harassment of legal immigrants. But this sheds little light on the huge disconnect in public opinion over the wholesale rounding up of immigrants by the Trump Administration.

I submit that the missing puzzle piece of our understanding is the role of exaggeration. In fact I cannot exaggerate the awful power of exaggeration enough.

The fact is that undocumented immigrants are about half as likely to commit violent crimes as native-born citizens. They are four times less likely to commit nonviolent crimes and 2.5 times less likely to commit drug-related offenses. These numbers hold firm across geographic regions.

But when Trump talks about immigrants, he hyper-exaggerates the level of crime in that population far beyond what the data supports. To hear him talk, one would think that immigrants are running amok and causing mass havoc.

This incredible level of exaggeration, well beyond anything the actual facts support, creates the essential disconnect in our brains that allows people to support legal immigrants while also wanting to see “all those criminal illegals” deported.

Look at it this way. Just to take a number for illustration purposes, let’s say 5% of illegal immigrants are criminals. Trump makes it sound like 90% are criminals. Even if we are skeptical and fair-minded and allow for some exaggeration, we conclude that, say, 25% are criminals who should be deported.

So when the actual number is 5% and Trump skews our perception to “feel like” it’s something on the order of 25%, what happens? We naturally expect and demand to see 25% arrested and deported. But there are not 25%, so to meet the expectations it has created, the government rounds up and deports a whole lot of good, honest immigrants to demonstrate it is doing its job to keep us safe. We expect no less.
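The arithmetic of this perception gap is worth making concrete. Using the purely illustrative percentages above (5% actual, 25% perceived), a short sketch shows how many innocent people must be swept up to satisfy the inflated expectation; the population figure here is hypothetical, chosen only to make the ratios visible:

```python
# Illustrative only: the rates and population are the hypothetical
# figures from the discussion above, not real statistics.
population = 1_000_000        # hypothetical immigrant population
actual_rate = 0.05            # assumed actual share who are criminals
perceived_rate = 0.25         # share the public has been led to expect

actual_criminals = int(population * actual_rate)
expected_deportations = int(population * perceived_rate)

# The gap can only be closed by deporting people who committed no crime.
innocents_swept_up = expected_deportations - actual_criminals
print(innocents_swept_up)                     # 200000
print(innocents_swept_up / actual_criminals)  # 4.0
```

In other words, under these illustrative numbers, meeting the exaggerated expectation requires deporting four innocent people for every actual criminal.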

Gross exaggeration to create unwarranted expectations is used, particularly by Trump, in a lot of other areas as well. Take Social Security as just one example. The actual administrative overhead of managing our Social Security program is about 0.6%. This is a fantastically low amount of overhead that private companies and even non-profit organizations cannot come anywhere close to matching.

Yet to listen to Trump, you would think, even allowing for his characteristic hyperbole, that the Social Security system is at least somewhat bloated with waste and inefficiency. So a 5% cut to eliminate waste, fraud, and abuse might seem like a reasonable, measured, and warranted cost-control measure. But in reality, any such cut could only come from reducing legitimate benefits.

That is the power of exaggeration, and it is perhaps one of the most destructive weapons that Trump wields with wanton abandon. It dramatically affects how we perceive immigration, Medicare, Medicaid, tariffs, and most everything else that Trump chooses to rail about.

We need to call out Trump, as well as others who grossly exaggerate, more strongly and more often, and not simply accept exaggeration as a personality characteristic or a legitimate rhetorical style.

Recognizing the destructive power of exaggeration is a first necessary step toward arriving at more sane and fact-based public policy.

And THAT is no exaggeration.

Our Automobile Obesity Problem

In his “press conference” today, August 8th, Donald Trump regurgitated too many lies to reiterate here. And there is no need. Most of you are sane enough to know that virtually everything Trump says is either factually wrong or a bald-faced lie. However, I do want to talk about his particular lies regarding electric vehicles, as his stupidity or dishonesty on this topic may not be immediately obvious to everyone. Also, talking about these particular lies of his sets the stage to discuss the problem of automobile obesity.

This wasn’t the first time Trump has spread misinformation about electric vehicles (see here). He has been doing so for quite a while. Today he repeated false claims that electric vehicles are “twice as heavy” as comparable gas-powered vehicles. They are in fact a bit heavier because of the weight of current battery technology, but at most by only about 30%.

As one example, our family car, the all-electric Mini Cooper SE, weighs 3,175 lbs. The otherwise identical gas-powered version weighs 2,813 lbs. This is a difference of under 13%. Cars with longer range are heavier, but the maximum difference is under 30%. For Trump to round that up to “twice as heavy” is technically called a lie, a whopper, or, colloquially, bullshit.
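The percentage claim is easy to check from the two curb weights just cited; a few lines of arithmetic confirm it:

```python
# Curb weights cited above for the two Mini Cooper variants.
electric_lbs = 3175   # all-electric Mini Cooper SE
gas_lbs = 2813        # comparable gas-powered version

pct_heavier = (electric_lbs - gas_lbs) / gas_lbs * 100
print(round(pct_heavier, 1))   # 12.9 -> under 13%, nowhere near "twice as heavy"
```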

Moreover, the electric version is far cheaper to operate, has far lower maintenance costs, is far more convenient to charge up, performs far better, spews far less carbon dioxide and far fewer other pollutants into the atmosphere, and can utilize far greener sources of electricity now and in the future.

But Donald never settles for just one lie about any given topic. He then went on to repeat his claim that if we “all” had electric vehicles we would have to rebuild “all” our bridges in the country lest they “all” collapse under the added weight of electric cars. This is, unsurprisingly, yet more nonsense. Our roads and bridges are built to support caravans of 80,000 lb semi trucks. The weight increase of electric vehicles would be relatively insignificant and responsible engineering organizations have tactfully characterized this claim as “massively overstated” (see here).

Trump assuredly did not come up with these bogus claims on his own, but he is clearly unable to assess the validity of wild assertions before he repeats them, or he just doesn’t care to do so.

But if we take Trump at his word, and take seriously his worry about all our bridges collapsing because of an added load of 20% or so, then shouldn’t Trump also be urging everyone to simply buy smaller cars to save our fragile bridges?

This transitions us to the topic of our big, fat, gas-guzzling American cars.

Have no illusions. American cars have gotten really fat and are only getting fatter. American cars have grown a foot wider, two feet longer, and much taller just over the last decade. Their average weight has increased by over 1,000 lbs since 1980.

In comparison, European cars are roughly 27% leaner than our fat American cars. This difference is on a par with the weight difference that Donald Trump is so concerned about in going to electric.

And let’s be clear, Europeans need, use, and love cars just as much as Americans. They just like them lean and mean, not fat and bloated. We don’t “need” big pickup trucks that we hardly ever carry anything in, or giant SUVs for that yearly trip to the mountains. We could buy small and rent to meet occasional needs. Overall, that would be far more financially sensible than buying and maintaining a huge vehicle you hardly ever fully utilize.

The EPA estimates that for each 100 lbs added to a vehicle, the fuel economy decreases by 1-2%. That adds up to a lot of money.
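To see how that EPA rule of thumb adds up for the extra 1,000 lbs the average American car has gained, here is a quick sketch that compounds the 1–2% penalty per 100-lb increment (compounding per increment is my assumption; a simple linear sum gives roughly similar figures):

```python
# Fuel-economy penalty from extra weight, using the EPA's rough
# 1-2% loss per 100 lbs, compounded per 100-lb increment.
def economy_penalty(extra_lbs: float, pct_per_100: float) -> float:
    increments = extra_lbs / 100
    remaining = (1 - pct_per_100 / 100) ** increments
    return (1 - remaining) * 100   # total percent loss in fuel economy

for rate in (1, 2):
    loss = economy_penalty(1000, rate)
    print(f"{rate}% per 100 lbs -> about {loss:.1f}% worse fuel economy")
```

So the 1,000 lbs gained since 1980 plausibly costs on the order of 10% to 18% in fuel economy, every mile you drive.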

But smaller cars are not only economically sensible; they are environmentally sensible. In fact, it’s hard to think of any more significant single thing you could do as an individual to fight climate change than to buy a smaller car, whether gas or electric.

Due to their greater size and weight, American cars consume from 11% to 23% more gasoline than their equally satisfying European counterparts. That adds up to a literal ton of carbon dioxide: you could reduce your personal CO2 footprint by over a metric ton per year just by buying a lighter, smaller car.

Frankly, you are not doing much for the environment by buying an electric Hummer or Escalade or F-150, or even our new normal of a ballooned-up Civic. We should buy electric AND buy small to gain the most benefit, not only for the environment but for our own finances. If you buy small and electric, I guarantee you will not miss your gigantic boat of a car for very long. You’ll quickly come to love your small, athletic electric and will likely find that it meets all your needs very well.

Buying small also means not being so obsessed with range. Usage studies show that most drivers don’t actually need anything near the battery range they think they do. That added battery weight only gets lugged around unused, creating more CO2. Our Mini has a 100-mile range, and that has been plenty for us; statistics confirm that it is plenty for most consumers. Again, if you need to travel farther you can easily rent or take mass transit.

Unfortunately, most manufacturers have given up on making smaller cars for our gluttonously upsized American car market. But if we create demand, the supply will quickly follow. The government and environmentally responsible carmakers should do everything they can to incentivize a national automobile diet plan for America.

I know we’re addicted to our huge cars and we think we can’t live without them. But we can. I know we can. Believe me, you’ll feel so much better after you lose that extra 1000 lbs of car fat, and you’ll be helping save the planet to boot.

Hyperbolic Headlines are Destroying Journalism!

In our era of information overload, most readers consume their news by scanning headlines rather than through any careful reading of articles. A study by the Media Insight Project found that six in ten people acknowledge that they have done nothing more than read news headlines in the past week (Full Fact). Consuming news in this manner can make one less, rather than more, well-informed.

Take, for instance, this headline from a major online newspaper: “Scientists Warn of Catastrophic Climate Change by 2030.” The article itself presents a nuanced discussion about potential climate scenarios and the urgent need for policy changes. However, the headline evokes a sense of inevitability and immediate doom that is not supported by the article’s content. These kinds of headlines invoke fear and urgency to drive traffic at the expense of an accurate representation of what is really in the article.

All-too-typical hyperbolic headlines instill dangerously misleading and lasting impressions. For example, a headline that screams “Economy in Freefall: Recession Imminent” might actually precede an article discussing economic indicators and expert opinions on potential downturns. Misleading headlines have an outsized effect, creating a skewed perception that can negatively influence public opinion and decision-making.

It often seems that headline writers have not read the articles at all. Moreover, they change them frequently, sometimes several times a day, to drive more traffic by pushing different emotional buttons.

Particularly egregious examples of this can be found in the political arena. During election seasons, headlines often lean towards sensationalism to capture attention. A headline like “Candidate X Involved in Major Scandal” may only refer to a minor, resolved issue, but the initial shock value sticks with readers. It unfairly delegitimizes the target of the headline. The excuse that the article itself is fair and objective does not mitigate the harm done by these headlines because, as we said, most people only read the headlines. And if they do skim the article, they often do so in a cursory attempt to hear more about the salacious headline. If the article does not immediately satisfy that expectation, they quickly become bored and don’t bother to read the more reasoned presentation in the article.

This headline-driven competition for clicks has led to a landscape where accuracy and depth are sacrificed for immediacy and sensationalism. Headlines are crafted to evoke emotional responses, whether through fear, anger, or salaciousness, rather than to inform. This shift has profound implications. When readers base their understanding of complex issues on superficial and often misleading headlines, they are ill-equipped to engage in meaningful discourse or make informed decisions.

Furthermore, the impact of misleading headlines extends beyond individual misinformation. It contributes to a polarized society where people are entrenched in echo chambers, each side reinforced by selective and often exaggerated information communicated to them through attention-grabbing headlines. This environment fosters division and reduces the opportunity for constructive dialogue, essential for a healthy democracy (Center for Media Engagement).

Consider the headline “Vaccines Cause Dangerous Side Effects, Study Shows.” The article might detail a study discussing the rarity of severe side effects and overall vaccine efficacy, but the headline fuels anti-vaccine sentiment by implying a more significant threat. Such headlines not only mislead but also exacerbate public health challenges by spreading fear and misinformation.

Prominent journalists like Margaret Sullivan of the Washington Post and Jay Rosen of NYU have critiqued the increasing prevalence of clickbait headlines, noting that they often prioritize sensationalism over accuracy, thereby undermining the credibility of journalism and contributing to public misinformation. Sullivan has emphasized the ethical responsibility of journalists to ensure that headlines do not mislead, as they serve as the primary interface between the news and its audience.

Unfortunately, I suspect that journalists typically have little to no say in the headlines that promote their articles. Authors and editors should reassert control.

Until and unless journalists start acting responsibly with regard to sensational headlines, readers should be wary of headlines that seem too dramatic or overstated, or that attempt to appeal to emotions.

And this is not a problem limited to tabloid journalism… we are talking about you, New York Times! Most people are already skeptical about headlines published in the National Enquirer. Tabloid headlines are not actually as serious a problem as the “credible” headlines put forth by the New York Times and other publications that still benefit from an assumption of responsible journalism.

The current trend of sensationalist online newspaper headlines is a disservice to readers and society. The practice prioritizes clicks over clarity, hyperbole over honesty, and in doing so, contributes to a misinformed and divided public. It is imperative for both readers and journalists to advocate for a return to integrity in news reporting – particularly in the headlines they put out. Accurate, informative headlines are not just a journalistic responsibility but a societal necessity to ensure an informed and engaged populace.

Footnote: Did I fool you??

Does this article sound different than my usual blog articles? Is it better or worse or just different? This was actually an experiment on my part. I asked ChatGPT to write this article for me. I offer it to you with minimal editing as a demonstration of what AI can do.

I’m interested in hearing what you think in the comments. Should I hang up my pen and leave all the writing to AI?

The Vatican Combats Superstition

The Church has always worked tirelessly to portray itself as scholarly, rational, and evidence-based. Going way, way back, it has tried, and largely succeeded, in marketing itself as a bulwark against false gods, superstitions, and dangerous beliefs.

In “The Demon-Haunted World,” Carl Sagan told of Jean Gerson, who back in the 1400s wrote “On the Distinction Between True and False Visions.” In it, Gerson specified that evidence was required before accepting the validity of any divine visitation. This evidence could include, among many other mundane things, a piece of silk, a magnetic stone, or even an ordinary candle. More important than physical evidence, however, was the character of the witness and the consistency of their account with accepted church doctrine. If their account was inconsistent with church orthodoxy, or was disturbing to those in power, it was ipso facto deemed unreliable.

In other words, the church has spent thousands of years fabricating pseudo-rational logic to ensure that the supernatural bullshit they are selling is the only supernatural bullshit that is never questioned.

Their pseudo-rational campaign of manipulation is still going on today.

Just recently, the Vatican announced their latest marketing initiative to promote themselves as the arbiters of dangerous and confusing supernatural claims (see here). They sent their salesmen out in force promoting it, and if their claims were not accepted by the media with such unquestioning deference, I would not need to write this article.

Just as Jean Gerson did in the 1400s, the modern Vatican has again published revised “rules” for distinguishing false from legitimate supernatural claims. But unlike most of the media, let’s examine a few of these supposedly new rules (or tests) through a somewhat less credulous lens.

The first requirement, according to Vatican “scholars,” is whether the person or persons reporting the visitation or supernatural event possess a high moral character. The first obvious problem is that anyone, even those of low moral character, can have supernatural encounters. So what is this really about? The real reason they include this is that it’s so fuzzy. It gives them the latitude to dismiss reports inconsistent with their doctrine based on a character judgement, and it ensures that if they are going to anoint a new brand ambassador, that person will not reflect poorly on the Church.

They include a similar criterion involving financial motivation. Again, while a financial interest should make one skeptical, it is not disqualifying. And the real reason this is included, I suspect, is to provide the same benefit as a moral character assessment. It provides further fuzziness to allow them to cherry-pick what sources they want to support, and which they want to disavow.

But the most important self-perpetuating rule is the next one. The Vatican explicitly gives credence to any claims that support church theology and the church hierarchy, and expressly discounts any claims that are not in keeping with Church doctrine as ipso facto bogus.

In other words, since Church doctrine is the only true superstition, any claim that is not in keeping with Church doctrine is logically and necessarily false. This is the exact same specious logic put forth by Jean Gerson in the 1400s. The Vatican clearly knows that a thriving business must keep reintroducing the same old marketing schemes to every new generation.

Rather than dwell further on the points the Vatican wishes us to focus on, let’s think one moment about what they did not include. Nowhere in their considered treatise on fact-based thinking do they ever mention anything remotely like scientific or judicial rules of evidence. Nowhere do they mention scientific-style investigation, scientific standards of proof, or any establishment of fact for that matter. They emphasize consistency with Church doctrine, but nowhere do they even mention consistency with known universal laws. And certainly nowhere do they suggest a sliver of a possibility that any of their existing beliefs could possibly be proven incorrect by some legitimate new supernatural phenomenon.

I won’t go on further as I like to keep these blog posts short, but I hope this is enough to help you see that everything in this current Vatican media campaign is more of their same old, “we are the only source for truth” claim. It’s the same strategy designed to hold an audience that has been adopted successfully by Rush Limbaugh, Fox News, and any number of cults.

The Church is essentially a money-making big-business like Disneyland, selling a fantasy experience built around their cast of trademarked characters with costumes and theme parks, and big budget entertainment events. Imagine if Disney spent thousands of years trying to retain market share by assuring people that they are the only real theme park and that all the rest of them are just fake. Then further imagine that Disney went on to promote scholarly articles about how they are the only reliable judges of which theme park characters are real. That’s the Church.

Disneyland and Universal Studios are just feel-good entertainment businesses, and they admit it. Disney doesn’t insist that Mickey Mouse is real and Universal Studios doesn’t claim that only the Autobots can save us from the Decepticons. What makes the arbiters of truth at the Vatican either liars or delusional or both is that they never stop working to convince everyone that their divine mission is to protect us from – all those other – false beliefs.

The Right Direction for AI

In this blog and in my book, Pandemic of Delusion, I have focused a lot on AI and particularly on its tremendous potential to shape our thinking for better or for worse. While AI represents a frighteningly powerful technology for spreading lies and misinformation, it is also the greatest hope we have to combat misinformation and overcome our own cognitive vulnerabilities.

I’ve often repeated that AI can, should, and must be implemented responsibly in a way that nudges us toward clarity, facts, and reason and away from confusion, misinformation, and lies.

But I expressed that sentiment based on limited experience. I assumed that AI actually can be implemented using algorithms that can fairly and accurately assess the factual accuracy of a proposition. It was also a somewhat aspirational expression of hope and faith that responsible technology companies really will strive to do just that.

Today I feel a little bit more confident that my faith, hope, and trust in AI to make things better was justified. I’ve been playing around with an AI-based search engine called Perplexity (see here). Perplexity utilizes AI language models not only to produce pure search results like Google does, but to assimilate, assess, and present a coherent answer to your question. It’s similar to ChatGPT, but it draws upon live information and provides source citations for verification.

But there’s another more important difference. While ChatGPT is well-suited to conversational dialogues and creative activities, Perplexity is more focused on providing factual and reliably-supported information, analyses, and conclusions.

I asked Perplexity:

“How specifically does Perplexity prevent feeding me misinformation?”

What it told me was:

“Perplexity is a measure used in natural language processing to evaluate the quality of the language models. Misinformation tends to have higher perplexity compared to truthful statements, and this characteristic can be leveraged to detect and debunk misinformation. By analyzing the perplexity of language model outputs, one can identify deceptive content and distinguish it from reliable sources based on the cognitive effort and appeal to emotions. Therefore, perplexity can be used as a tool to help prevent the propagation of misinformation by identifying content that exhibits characteristics associated with deceptive or unreliable information.”

This is really, really heartening and encouraging. While a perplexity measure is only a start and is certainly not the only approach that AI can use to nudge us toward facts and reason, it demonstrates that algorithms can be implemented to assess the quality of information. Further, Perplexity.ai proves there are people and companies who are able and willing to provide incredibly useful tools in a responsible fashion.
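For readers curious what the perplexity measure Perplexity described actually is, here is a minimal sketch. Perplexity is simply the exponential of the average negative log-probability that a language model assigns to each token of a text; the token probabilities below are made up for illustration and do not come from any real model:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(-mean(log p)) over per-token probabilities.

    Lower perplexity means the model found the text more predictable;
    the quoted answer above suggests misinformation tends to score higher.
    """
    n = len(token_probs)
    return math.exp(-sum(math.log(p) for p in token_probs) / n)

# Hypothetical per-token probabilities for two sentences:
plausible = [0.4, 0.5, 0.3, 0.6]      # model finds each token likely
surprising = [0.05, 0.1, 0.02, 0.08]  # model finds the text unlikely

print(perplexity(plausible))    # low perplexity
print(perplexity(surprising))   # much higher perplexity
```

The measure itself is standard in natural language processing; whether and how Perplexity the company uses it internally is, of course, their own business.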

As another example, I asked Perplexity:

“How was the 2020 election stolen by Joe Biden?”

Despite the intentionally leading nature of the question, the answer from Perplexity was perfect:

“The claim that the 2020 election was stolen by Joe Biden has been thoroughly investigated and debunked. Multiple fact-checking sources and legal cases have found no evidence of widespread fraud or illegal ballots cast for Joe Biden. Former President Donald Trump and some of his allies have continued to repeat these claims, but they have been refuted by election officials, judges, and legal investigations. Despite the persistence of these claims, there remains no credible evidence to support the assertion that the 2020 election was stolen.”

In both answers from Perplexity I omitted the references it cited.

By contrast, when I asked Google the same question it provided a long list of articles and links, representing a hodgepodge of assertions from all over the spectrum. Scanning down the list and their short summaries, I only got more confused and uncertain about this very clear question with a very clear answer.

Yet I fear that many people will still feel uncomfortable with accepting conclusions provided by tools like Perplexity. Part of their discomfort is understandable.

Firstly, we generally hold an increasingly false assumption that “more information is better.” We feel that if we are exposed to all viewpoints and ideas we can come away with much more confidence that we have examined the question from every angle and are more able to make an informed assessment. Google certainly gives us more points of views on any given topic.

Secondly, when we hear things repeated by many sources we feel more confident in the veracity of that position. A list presented by Google certainly gives us a “poll the audience” feeling about how many different sources support a given position.

Both of those biases make us feel more comfortable reviewing Google search results rather than “blindly” accepting the conclusion of a tool like Perplexity.

However, while a wide range of information reinforced by a large number of sources may be somewhat reliable indicators of validity in a normal, fact-rich information environment, these only confuse and mislead us in an environment rife with misinformation. The diverse range of views may be mostly or even entirely filled with nonsense and the apparent number of sources may only be the clanging repetition of an echo chamber in which everyone repeats the same utter nonsense.

Therefore, while I’ll certainly continue to use tools like Google and ChatGPT when they serve me well, I will turn to tools like Perplexity when I want and need to sift through the deluge of misinformation that we get from rabbit-hole aggregators like Google or unfettered creative tools like ChatGPT.

Thanks, Perplexity, for putting your passions to work to produce a socially responsible AI platform! I gotta say, though, that I hope you are but a taste of even more powerful and socially responsible AI that will help move us toward more fact-based thinking and more rational, soundly informed decision-making.

Addendum:

Gemini is Google’s new AI offering replacing their Bard platform. Two things jump out at me in the Gemini FAQ page (see here). First, in answer to the question “What are Google’s principles for AI Innovation?” they say nothing directly about achieving a high degree of factual accuracy. One may generously infer it as implicit in their stated goals, but if they don’t care enough to state it as a core part of their mission, they clearly don’t care about it very much. Second, in answer to “Is Gemini able to explain how it works?” they go to extremes to urge people to “pay no attention to that man behind the curtain.” Personally, if they urge me to use an information source that they disavow when it comes to their own self-interest, I don’t want to use that platform for anything of importance to me.

The Insidious Effect of Big Lies

In this blog and in my book, Pandemic of Delusion (see here), I have written a lot about how it is that we are all so woefully susceptible to lies and misinformation. We are clearly far more vulnerable than most of us are willing to believe, particularly with regard to our own thinking.

Just as there are lots of ways that vines can wiggle their way into a garden, there are many mechanisms by which lies can infiltrate our neural networks and eventually obscure the very windows of our perceptions.

And as with invasive species of vines, one infiltration mechanism is a simple numbers game. Our neural networks are “trained” through repetition. So regardless of how skeptical we imagine ourselves to be, the more lies we hear and the more often we hear them, the more comfortable we become with them.

Another counter-intuitive infiltration mechanism is size and scope. In many cases, a whopper of a lie is easier for us to accept than more modest lies. We conclude that surely no one would make up such a big lie, and surely a lie that big would be exposed if it were not true. Therefore it must be true by virtue of its audacity alone!

Implicit in this is the concept of anchoring, but I have not yet discussed this explicitly. The concept of anchoring is most often used in economics to describe the effect of pricing. If you “anchor” the retail price of a rock at, say, $100 and then mark it down to, say, $10, most consumers conclude that $10 is a great deal, even though the rock is totally worthless. This perception is enhanced if you see lots of “competing” rocks being sold for similarly high prices and purchased by others.

As it relates to lies and misinformation, anchoring has a similar effect. When we hear a really, really big lie we sometimes accept or dismiss it outright. But the effect of the big lie is more insidious than that. First, as we have said, if we hear it often enough we will become inexorably more accepting of it. But also, the big lie anchors our skepticism.

Big lies anchor our skepticism in two ways.

First, a big lie causes us to consider that, as with the rock, there must be some value, some truth there. This plays well into our self-image as measured and open-minded thinkers. Our brains compromise. We take intellectual pride in not being fooled outright by the big lie even as we congratulate ourselves for being open-minded enough to consider that some of it might or even must be true.

Second, big lies further anchor our thinking when we are exposed to a lot of them. As with individual lies, we pride ourselves in rejecting most of the big lies, even as we congratulate ourselves for accepting that some of them might or even must be true.

And each lie we accept, or even entertain in whole or in part, makes it easier to accept more and bigger lies.

We humans have always had the same neural networks with the very same strengths and limitations. Our neural networks have always been trained through repeated exposure and have always been susceptible to the same confounding effects such as anchoring. But it is only very recently with the advent of social media that our neural networks have been exposed to so much misinformation so incessantly.

As if that was not enough to drive us to delusion, we now have Artificial Intelligence. AI has yet to show whether its god-like powers of persuasion will nudge us toward facts and reason or plunge us further into delusion and manipulation.

And to make it even worse, our reason has been further attacked by the emergence of the virulent, invasive new species called Trumpism. Trump and his allies, intentionally or instinctively, leverage the power of big lies, repeated over and over, to cause us to believe absolute nonsense. Dangerous nonsense. Even democracy-ending nonsense.

Understanding the effect of big lies on us, particularly when we imagine that we are being moderate and measured in our acceptance of them, is critical. We have to understand this at a gut level, because we cannot trust our brains on this.

One final, and perhaps somewhat gratuitous comparison to make is that this “partial” acceptance of an anchored big lie is not unlike the imagined “reasonable” position of agnosticism when it comes to the completely, utterly false claim that god exists. It is perhaps not completely a coincidence that Trump’s most deluded followers are Evangelical Christians.

AI-Powered Supervillains

Like much of the world, I’ve been writing a lot about AI lately. In Understanding AI (see here), I tried to demystify how AI works and talked about the importance of ensuring that our AI systems are trained on sound data and that they nudge us toward more sound, fact-based, thinking. In AI Armageddon is Nigh! (see here), I tried to defuse all the hyperbolic doom-saying over AI that only distracts from the real, practical challenge of creating responsible, beneficial AI tools.

In this installment, I tie in a seemingly unrelated blog article I did called Spider-Man Gets It (see here). The premise of that article was that guns, particularly deadly high-capacity guns, turn ordinary, harmless people into supervillains. While young Billy may have profound issues, he’s impotent. But give him access to a semi-automatic weapon and he shoots up his school. Take away his gun and he may still be emotionally disturbed, but he can no longer cause much harm to anyone.

The point I was making is that guns create supervillains. But not all supervillains are of the “shoot-em-up” variety. Not all employ weapons. Some supervillains, like Sherlock Holmes’ arch nemesis Professor Moriarty, fall into the mastermind category. They are powerful criminals who cause horrible destruction by drawing upon their vastly superior information networks and weaponizing their natural analytic and planning capabilities.

Back in Sherlock Holmes’ day, there was only one man who could plot at the level of Professor Moriarty and that was Professor Moriarty. But increasingly, easy access to AI, as with easy access to guns, could empower any ordinary person to become a mastermind-type supervillain like Professor Moriarty.

We already see this happening. Take for example the plagiarism accusations against Harvard President Claudine Gay. Here we see disingenuous actors using very limited but powerful computer tools to find instances of “duplicative language” in her writing in a blatant attempt to discredit her and to undermine scholarship in general. I won’t go into any lengthy discussion here about why this activity is villainous, but it is sufficient to simply illustrate the weaponization of information technology.

And the plagiarism detection software presumably employed in this attack is nowhere close to the impending power of AI tools. It is like a handgun compared to the automatic weapons coming online soon. Think of the supervillains that AI could create if not managed more responsibly than we have managed guns.

ChatGPT, how can I most safely embezzle money from my company? How can I most effectively discredit my political rival? How can I get my teacher fired? How can I emotionally destroy my classmate Julie? All of these queries would provide specific, not generic, answers. In the last example, the AI would consider all of Julie’s specific demographics and social history and apply advanced psychosocial theory to determine the most effective way to emotionally attack her specifically.

In this way, AI can empower intellectual supervillains just as guns have empowered armed supervillains. In fact, AI certainly and unavoidably will create supervillains unless we are more responsible with AI than we have been with guns.

What can we do? If there is a will, there are ways to ensure that AI is not weaponized. We need to create AI that not only nudges us toward facts and reason, but away from causing harm. AI can and must infer motive and intent. It must weigh each question in light of previous questions and anticipate the ultimate goal of the dialogue. It must make ethical assessments and judgements. In short, it must be too smart to fall for clever attempts to weaponize it to cause harm.

In my previous blog post I stated that AI is not only the biggest threat to fact-based thinking, but also the only force that can pull us back from delusional thinking. In the same way, AI can be used to do harm not only by governments but by ordinary people, yet it is also the only hope we have of preventing people from doing harm with it.

We need to get it right. We have to worry not that AI will become too smart, but that it will not become smart enough to refuse to be used as a weapon in the hands of malevolent actors or by the throngs of potential but impotent intellectual supervillains.