Monthly Archives: March 2026

Staying Sane Is Hard Work

Sliding down into delusion is seductive, easy, and fun. Modern information technology is making it ever harder to resist. Staying sane, on the other hand, is hard work—and it is getting harder every day.

The internet has made it possible for infectious ideas to spread faster than any physical disease. For a virus to circle the globe, you need mutations and air travel. To become infected by fake news and dangerous ideas, you need only a Wi‑Fi connection. Modern technology exposes us to vastly more information than ever before, much of it unhealthy, and every time our neural networks are exposed to bad information, it feels a bit more sensible to us—even if we know it is fake. Mere repeated exposure wears ever‑deepening grooves of familiarity into our brains. The more we see, hear, and click on a claim, the more reasonable it feels. Eventually, insidiously, it becomes self‑evident—common sense that seems inescapable.

In the past, news was filtered through human editors and gatekeepers. They certainly had their biases and blind spots, but at least someone was nominally responsible for quality. Today, sources like Facebook, Fox News, YouTube, podcasts, X/Twitter, and even our government have largely abandoned any obligation to fact‑check before amplifying. They create the illusion of informed reporting but are often almost completely untethered to reality. Their algorithms and personalities have one overriding job: keep you engaged. They notice what you watch or click and then say, in effect, “If you believe that, then check this out!” They do not care whether they are feeding you solid science or the latest conspiracy theory; they only care whether you will stay tuned in and click some more. The responsibility to sort out well‑supported information from unsupported claims, sound logic from specious arguments, is pushed entirely onto you.

That would be a tall order even if our brains were perfectly rational. They aren’t. Imagine you are curious about a fringe idea like Bigfoot. You type “proof of Bigfoot” into a search engine or social platform, intending to investigate skeptically. You will quickly find articles, videos, posts, and even reality shows arguing that Bigfoot is at least plausible or even real. Because you clicked, the algorithms learn that Bigfoot content “works” on you and begin to serve you more of it: more sightings, grainy photos, confident testimony. Before long, your feed is heavily populated by Bigfoot believers. From your perspective, it starts to look as if there is an enormous body of evidence out there. Everywhere you look, people treat the idea seriously. If so many people think there is something to it, there must be something to it.

In reality, you are being drawn out onto ever thinner and more dangerous limbs. The algorithm nudges you along in little steps, each of which seems perfectly solid and reasonable. This process does not just happen with Bigfoot. It happens with vaccine myths, climate denial, election lies, cultish political beliefs, and every other infectious or click‑inducing idea. The result is that many people come to feel they have made a careful, “objective” study of an issue when in fact they have been drawn, step by step, down a rabbit hole into an Alice in Wonderland alternate reality.
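The step-by-step capture described above can be sketched as a toy model. Everything here is an illustrative assumption, not a description of any real platform's algorithm: the feed starts with a small share of fringe content, and the mix drifts toward whatever earns clicks.

```python
def simulate_feed(click_bias: float, steps: int = 200, lr: float = 0.5) -> float:
    """Toy model of a recommender feedback loop.

    `click_bias` is the user's tendency to click fringe items when shown
    them. Each round, the algorithm shifts the content mix toward whatever
    share of content is earning engagement. All numbers are illustrative.
    """
    share = 0.05  # the feed starts at 5% fringe content
    for _ in range(steps):
        # reinforcement: the fringe share grows in proportion to how much
        # it is shown (share) and how much better it performs (click_bias - share)
        share += lr * share * (click_bias - share)
    return share
```

Even this crude model shows the trap: a user who clicks fringe content only occasionally still ends up with a feed whose fringe share converges toward that click rate, several times the starting baseline, while a user who never clicks sees it fade away.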

We cannot redesign the global information system by ourselves, but we can develop habits that make us harder to capture. One simple practice is to explicitly search for the reverse of whatever you are investigating. If you search for “proof of Bigfoot,” deliberately also search for “debunking Bigfoot claims,” and click on those results often enough that the search engines learn you will reliably choose that kind of content too. This at least gives you some exposure to different perspectives. Both sides might still be exaggerated, but you are less likely to be left with the illusion that everyone agrees with one side only.

Another, related technique is to always look back to first principles. If you only consider that next little step out along the branch, it will seem safe and sensible. But if you stop and look back at how far you have wandered from the solid trunk, you quickly realize that you are dangerously far out on a limb. Having acknowledged that we do occasionally discover new species, must we really therefore admit that a hitherto undiscovered tribe of Bigfoot might actually exist?

It also matters where you spend your time. Just as like‑minded people congregate in person, different online communities attract and cultivate different kinds of thinkers. Choose to frequent healthy online environments. That is not to say you should avoid diverse ideas; but if rumor, outrage, and unvetted claims infect the community or the platform itself, you will become infected. Seek out vibrant but serious gathering sites where people demand citations, scrutinize sources, and correct obvious nonsense. If you stick to them, your own brain will become better at recognizing sound evidence and logic, as well as specious arguments. If the level of discourse on a trusted site degrades, you should leave and stop exposing your brain to it.

Given all the infectious information we are unavoidably exposed to, it is no surprise that people sometimes slip from belief into delusion. Beliefs, at least in principle, are subject to change. We might hold them strongly, but new evidence can persuade us to reconsider. When a belief becomes impervious to change—when no amount of contrary evidence, no matter how strong or consistent, is allowed to matter—it has crossed over into delusion. Using that word makes many professionals uneasy. In a clinical setting, “delusional” has a specific meaning and diagnostic criteria. Nevertheless, in the generally accepted lay domain, delusion is the proper word to describe thinking patterns that have become impervious to evidence or reason.

When a person or a movement has fallen prey to delusional ideas, when contrary facts are dismissed out of hand or reinterpreted as attacks, we no longer function in the realm of honest disagreement. We are locked into a self‑reinforcing mental world that will not adjust to reality. In a culture where influencers dominate the discourse, the rest of us are put at risk. Delusions can be comforting, energizing, and politically useful, but facts always assert themselves in the end. Reality does not care if you believe in it.

As a result of so many infectious ideas being disseminated so quickly, we are currently suffering from a global pandemic of delusion. We cannot wipe it out, but we can protect ourselves and try not to contribute to its spread. We can monitor our own information diets, seek out counter‑evidence, choose better communities, learn to better assess claims, and be more precise in our language. We can and must resist being nudged toward delusion. As susceptible as our brains are to misinformation, they can also be trained to better assess the soundness of claims and to detect specious arguments.

The way repetition reshapes our memories and our very perceptions, the way algorithms exploit our pattern‑seeking brains, the way beliefs slide, inch by inch, into full‑blown delusion—all of these dynamics, and many others, are at work in our politics, our media, our religions, and our personal lives. In my book Pandemic of Delusion: Staying Rational in an Increasingly Irrational World (see here), I unpack those mechanics in much greater detail, with concrete examples and practical tools for recognizing when you, or someone you care about, is being nudged away from reality. If this short essay inspires you to want to bolster your defenses, the book will provide you with a practical field guide: offering insight as to why we are so susceptible to misinformation, how to recognize it, and how to immunize yourself against it. It will give you a fighting chance to stay sane when the world around you seems determined to drive you crazy.

Star Trek Reality Check

Star Trek and Star Wars offer visions of the future that have become so familiar that it’s all too easy to over-credit the plausibility of the technologies they present. But how much of what they depict is plausible science fiction and how much is implausible science fantasy?

Modern physics is incomplete, but not in the sense that it’s going to casually overturn core constraints like the light‑speed limit, energy conservation, or causality. Any future theory will still be bounded by those hard limits where we’ve already measured them to absurd precision. So betting that some future “breakthrough” will make Star Trek‑style tech real is not cautious skepticism; it’s wishful thinking.

First, and most fundamentally, consider the Vulcans visiting Earth. As much as we like to fantasize about technologically advanced aliens visiting us, now or ever, to help us or to destroy us, this is implausible. As I discuss in my book (see here) and in this blog article (see here), aliens certainly exist, but they can never visit us. There is only an extremely remote chance that we could ever even detect signs that they existed somewhere, at some time, in the distant past.

Yes, you can always wave your hands and say “maybe some unknown physics will let them come here,” but that’s not reasoning, it’s magical thinking. Given what we already know about distances, speeds, energy, radiation, and biology, the probability that flesh‑and‑blood aliens will ever cross interstellar gulfs and happen to visit us is effectively zero. Not small, not unlikely, but zero.
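A back-of-the-envelope calculation shows the scale of the problem. The figures below are real published values: the nearest star system, Proxima Centauri, is about 4.24 light-years away, and Voyager 1, our fastest outbound probe, cruises at roughly 17 km/s.

```python
# Scale check: how long to reach the nearest star at our best current speed?
LIGHT_YEAR_KM = 9.461e12        # kilometers in one light-year
PROXIMA_DISTANCE_LY = 4.24      # nearest star system beyond the Sun
VOYAGER_1_SPEED_KM_S = 17.0     # roughly Voyager 1's cruise speed

distance_km = PROXIMA_DISTANCE_LY * LIGHT_YEAR_KM
seconds = distance_km / VOYAGER_1_SPEED_KM_S
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")    # on the order of 75,000 years
```

Seventy-five thousand years, one way, to the very nearest star, with no way to slow down at the other end. And that is before we account for radiation, propellant, and keeping anything alive on board.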

I state that so strongly because it is so critical to understand. And of course, since no alien could possibly ever visit us, it is equally implausible that we could ever visit them. The only remote possibility would be sentient machines that could survive inhumanly long and dangerous journeys. In this sense, the Transformers franchise (in the storylines where organic makers are canon) may be the most plausible science fiction. I also depict such a plausible “space travel” science fiction in my short story The Dandelion Project (see here).

So while virtually nothing that follows in Star Trek could actually happen, let’s set aside the basic implausibility of interstellar travel itself and look at some of the other fictions that writers concoct to make everything else seem believable once we grant it.

First, there is warp drive, which overcomes the inconvenient reality of time and space. This is science-flavored magic. While the physics of faster-than-light travel may have some plausibility at the mathematical level, it has zero plausibility at practical scale. Faster-than-light travel isn’t just “very hard.” It clashes directly with the way spacetime is structured. To get around the speed limit you have to either break causality (allow time-travel paradoxes) or rely on enormous quantities of exotic matter that may not exist in any usable form. When a “solution” demands both magic materials and broken causality, that’s not serious speculation; that’s fantasy dressed in equations.

This is similarly true of the magical energy sources that science fantasy writers concoct to make the fantastic power requirements seem plausible. They construct antimatter reactors stabilized in a dilithium matrix. Again, even where antimatter technologies are theoretically plausible, they are effectively hopeless in any practical sense. Antimatter is real and ridiculously energy-dense, but producing and storing it in useful quantities is so far beyond plausible engineering that it may as well be sorcery. Talking about “antimatter reactors” powering star cruisers is like proposing a jet engine that runs on bottled lightning captured in jars. You can write that into a script and make it sound theoretically plausible, but you simply cannot build it in this universe.
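A quick E = mc² calculation shows both why antimatter is so tempting and why it is hopeless. The constants below are standard physics; for perspective, all the antimatter humanity has ever produced (for example at CERN) amounts to mere nanograms.

```python
# E = mc^2 scale check: annihilating one gram of antimatter with one gram
# of ordinary matter converts two grams of mass entirely to energy.
C = 2.998e8              # speed of light, m/s
KT_TNT_J = 4.184e12      # joules per kiloton of TNT

energy_j = 0.002 * C ** 2          # two grams expressed in kilograms
kilotons_tnt = energy_j / KT_TNT_J
print(f"{kilotons_tnt:.0f} kt")    # ~43 kilotons of TNT per gram of antimatter
```

That density is why writers reach for it. But at real-world production rates, accumulating even one gram would take longer than the age of the universe, and nothing we can build could contain it.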

The implausible power requirements involved in fantasy space travel also apply to weaponry. Hand phasers and similar variations are simply implausible. Directed energy starship weaponry is somewhat plausible, but certainly nowhere remotely near the hull-slicing power depicted in the shows.

And speaking of weaponry, even if hand phasers were plausible, they would at best fire invisible millisecond bursts. Phaser gun fights would never happen. Advanced weaponry would have computer targeting and essentially never miss. One could certainly never “duck” out of the way of an energy beam. A hand‑held weapon that fires at or near light speed, with computerized targeting, does not produce Western‑style shootouts. Once the weapon can lock onto you, your chances of side‑stepping a beam that crosses the distance in microseconds are exactly zero. The only real “dodging” is not being targeted in the first place—and that’s a software and sensor game, not a reflex test.
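The mismatch is easy to quantify. Assuming a generous 100-meter gunfight and a typical human visual reaction time of about 200 milliseconds:

```python
# How much slower is a human reflex than a light-speed beam at gunfight range?
C = 2.998e8               # speed of light, m/s
RANGE_M = 100.0           # a generous gunfight distance
HUMAN_REACTION_S = 0.2    # typical visual reaction time, ~200 ms

beam_time = RANGE_M / C                # ~0.3 microseconds to arrive
ratio = HUMAN_REACTION_S / beam_time
print(f"{ratio:,.0f}")                 # reflexes are ~600,000x too slow
```

By the time your eye has even registered the muzzle flash, the beam arrived roughly six hundred thousand reaction-times ago. There is no dramatic dive behind the rocks.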

The same logic destroys the idea of starship dogfights. If you ever had vehicles throwing serious energy around at interplanetary ranges, the fight would be decided by who detected whom first and whose fire control software shot first. It would last seconds, or less, and the human crew would learn the battle was over when the computer informed them that their enemy had been destroyed.

We don’t need to imagine futuristic AI to see the problem. Even today, guidance computers outclass human pilots in reaction speed, precision, and ability to juggle massive sensor inputs. Scale that up to space combat and the idea that a flesh‑and‑blood pilot is “flying” a starship in combat is as quaint as imagining a locomotive engineer sprinting ahead to lay track by hand.

In that vein, there would be no possibility of human (or any organic) navigators or tactical crew members. Computers would certainly handle all the piloting and targeting. There would be no time for a captain to shout even one order as he’s flung around the bridge. Han Solo could not make the Kessel Run safely in even a fraction of the time a computer-controlled ship would need, if at all. Operating any function of a starship would not be a job for humans.

As to other technologies, transporters, replicators, “subspace” radios, and hard‑light holograms all have the same problem: each one quietly assumes away a core rule of the universe. They don’t just extrapolate technology; they ask you to believe that information, energy, and matter can be shuffled around with a casual disregard for limits that we’ve already measured in laboratories. That makes for great science fantasy, but it is not remotely plausible science fiction.

But there are a few places where I suspect they get the possibilities more right than wrong, even if only for practical production and storytelling limitations.

There is the plausibility that many alien planets would be so familiar to us. Given that life can only evolve in a very limited set of conditions, and that the rules of physics, chemistry, and evolution are the same throughout the universe, I don’t find it implausible that many environments, and even many alien species, would be quite familiar or at least quickly understandable to us, both morphologically and biologically (see here). Life that can build radio telescopes is probably confined to a very narrow zone of temperatures, chemistry, and environmental stability. Under those shared constraints, evolution is pushed toward a limited set of workable body plans—limbs, mouths, sensory organs. So yes, there are good reasons to think that intelligence elsewhere might evolve a shape that is surprisingly close to our own. That doesn’t mean “humans with cranial ridges,” but it does mean that “unrecognizable swirling gas entities” are probably rarer than TV’s familiar human-like bipeds.

Also, one thing that Star Wars got right was recognizing that in the future all medical diagnoses and procedures would be performed exclusively by medical droids. I can understand that it would take all the fun out of the fiction if they also admitted that Han piloting the Millennium Falcon or Luke manning the gun turrets would be just as obsolete, even with The Force assisting him!

There is a fashionable kind of optimism that treats science as an unbounded well that can eventually make anything possible if we just “don’t close our minds.” That’s not how science works. Science narrows possibilities by discovering hard limits. We don’t say “maybe one day we’ll find a way around conservation of energy” or “maybe light will decide to go faster.” We already know that won’t happen. The technologies I’m calling fantasy aren’t just impractical; they lean on the hope that the universe will overturn its own rules to realize our fantasies.

Just to say, I love these science fantasy shows. If they depicted a more plausible Sol-bound future with computers basically running everything they would be a whole lot less inspiring and engaging. But just as with a good horror or superhero movie, we can love the fantasy while still fully appreciating that it is mostly fantasy.

Often the distinction between science fiction and science fantasy becomes blurred in a world where science seems capable of such magical and limitless achievements, but it is still critical that we recognize science fantasy as just that. If we fail to do so, we become susceptible to imagining that some fantastical future science will save us from actual threats like climate change that demand real solutions right now.