Category Archives: Education

Make AI Why Your New Pastime!

When Ph.D. candidates near the end of their degree programs, they face a major hurdle: the qualifying exam, or oral defense. This is standard for most math and hard science fields, but is also often required in disciplines like history and English literature. During the defense, the candidate stands before a panel of professors, answers questions about their thesis, and then faces a battery of general questions designed to assess their depth and breadth of knowledge.

One tall tale told about these oral defenses is the “Blue Sky” story. In these tales, the professors simply ask the candidate a question like “Why is the sky blue?” After the student answers, they respond with “Why?” After the next answer, they again ask “Why?”

This isn’t just a campus myth: a good friend of mine, a Ph.D. physicist, was subjected to just such a grilling, starting with “Why is the sky blue?” He told me that over the course of the next hour he ended up drawing upon a far wider and deeper range of physics knowledge than he ever realized he knew, all in response to repeated questions consisting of just “why?”

This is a game that confounds and exasperates parents all the time. We say something to our toddler, and they ask “why?” When we answer, they again say “why?” Parents usually give up after perhaps three iterations. A Ph.D. candidate would get through at least a few more iterations within their field of specialization.

It makes me wonder whether a “Why-Q” might be a great intelligence quotient for AI. If a normal parent can score 3, and a well-prepared Ph.D. candidate might score 6, what would AI score? Probably a much higher count reflecting deeper knowledge, and certainly its breadth of knowledge would be essentially unlimited.

Given that we now have essentially Ph.D.-level intelligence in every field at our beck and call 24/7 through AI, I want to suggest a game I call “AI Why” that you can play whenever you like. Take a break from endless YouTube or TikTok videos. Stop reading increasingly crappy articles because you’ve run out of anything actually worthwhile. Instead, open your preferred AI app and pass the time playing AI Why.

Ask AI any question, serious or whimsical, even something like “Why is the sky blue?” Read over the answer, and then ask a follow-up question. You can dive deeper into the subject or go off on a different tangent, and you can continue as long as you like. AI will never think your question is silly or get sick of your questions, and it will always give you an interesting answer.
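For the programmatically inclined, here is a minimal sketch of the AI Why loop in Python. The ask_ai function is purely a hypothetical placeholder; you would wire it to whatever AI service you actually use.

```python
# A minimal sketch of the "AI Why" game. ask_ai() is a hypothetical placeholder,
# not any real service's API; connect it to your preferred AI app yourself.
def ask_ai(question: str, history: list[str]) -> str:
    """Placeholder: send the question (plus prior context) to your AI service."""
    return "(the AI's answer would appear here)"

def play_ai_why(first_question: str, rounds: int = 5) -> int:
    """Ask a question, keep asking 'Why?', and count the iterations (a 'Why-Q')."""
    history: list[str] = []
    question = first_question
    why_q = 0
    for _ in range(rounds):
        answer = ask_ai(question, history)
        print(f"Q: {question}\nA: {answer}\n")
        history.extend([question, answer])
        question = "Why?"  # or dive deeper, or go off on a tangent
        why_q += 1
    return why_q

# play_ai_why("Why is the sky blue?")
```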

This is very different from simply surfing the Internet. With Google or even Wikipedia, you are limited to clicking through a fixed set of links produced by algorithms built to manipulate you. AI interaction is conversational. You can take your AI conversation anywhere you like and explore the vastness of human knowledge rather than get funneled down rabbit holes.

Of course, the AI system you use does matter. I would not go near anything under the control of Elon Musk, for example. Fortunately, not all AI systems are configured so that all paths lead you to the oppression of South African Whites. I use Perplexity (see here) because it is strongly dedicated to providing sound, fact-based information.

The other great thing about Perplexity is that it remembers threads of dialogue. That means I can ask Perplexity about a topic, and then come back to that thread days or months later to continue the discussion.

Just to give you a flavor of this great pastime, I asked Perplexity “Why is the sky blue?” It gave me a lot of interesting information, to which I followed up by asking “Why does Rayleigh scattering occur?” After reading more about that, I asked “Why do refractive indices differ?” The answer led me to ask “Why is light an electric field?” And that led me to “Why is the self-propagating electromagnetic field of light not perpetual motion?”

To explain that last question a bit: light propagates forever in a vacuum. It seems counter-intuitive that something moving forever is not perpetual motion by definition. But Perplexity clearly explained that no, light may move forever, but it does no work. That led me to ask the gotcha question, “How can electromagnetic radiation undergo self-propagation between electric and magnetic fields with no loss of energy?”

At that point, it took me into Maxwell’s equations and lost me.

This hopefully illustrates how you can go as deep as you like in your conversations with AI. Or, I could have taken it down another path that led to the family life of Amedeo Avogadro. AI will accompany you anywhere you want to go. (And no, that is not to imply that it just agrees with anything you say. It does not.)

So, my message is to become discussion buddies with your genius AI friend. Learn from it. Expand your brain and have fun doing so. Don’t waste the precious opportunity we have to so easily learn almost anything about almost anything.

Make AI Why one of your favorite pastimes!

National Defense and Social Security Myths

Most of us Americans figure we’re pretty well-informed about the realities of our national economy – at least in the big picture. Here are the Top 5 budget categories that you’ve probably seen cited everywhere by most every expert and trusted source:

  1. Social Security: $1,354 billion
  2. Medicaid (also NIH, CDC, FDA and more): $889 billion
  3. Medicare: $848 billion
  4. National Defense (direct budget only): $820 billion
  5. Unemployment (and most family and child assistance programs): $775 billion

Lists like this are usually invoked in order to provide support for a particular (false) mainstream narrative.

Mainstream Narrative: National Defense spending is not where we should be concerned. Rather it’s those big social entitlement programs that are the real problem, and the most worrisome of all is Social Security. In fact, we need to take immediate drastic action to prevent Social Security from bringing us to economic ruin!

But bear with me while I call that narrative into question.

First, that National Defense number of $820 billion is far too low. It only includes certain budgeted expenses. It does not include Supplemental Funding (which pays for most of our wars), Veterans Care and Benefits, Overseas Contingency Operations, Additions to the Base Budget, Interest on War Debt, and many other separately allocated costs.

To understand how misleading that is, imagine trying to convince your spouse that your gambling budget is only a very reasonable $200 per night. But that is just your betting limit. You neglect to include your Vegas hotel, limo rental, meals, bar-tabs, payments on the debt incurred by your previous losses, lost work, and additional payment for any “special deals” that you just can’t pass up.

Similarly, if we tally up all the buried line items that should fairly be included under National Defense spending, the total cost is far higher. The actual figure depends on which items you choose to include, but a conservative total of about $1.7 trillion is what my AI-assisted research came up with. No matter how you cut it, a more honest accounting puts National Defense spending well above Social Security levels. It should be number one by a large margin on any honest list.

Also, military spending has incredibly low stimulative value. While it provides some jobs, it does not stimulate secondary growth the way, say, a bridge or a building does. It is essentially “lost” economic value except for the relatively few who extract wealth from it. But I digress. Maybe I’ll expand on that in a future blog article.

In any case, that addresses the first half of the false narrative, the deceptively low figure cited for military spending. Now let’s shift to the other half, Social Security spending. The figure of $1.3 trillion spent on Social Security is arguably just as misleading as the figure for military spending.

People paid into their Social Security fund. Virtually all of that $1.3 trillion is money that is simply being paid back to the people who paid into it. There is only a relatively small deficit, which amounted to $41.4 billion in 2023. That deficit was entirely covered by the Social Security trust fund: excess revenue that was set aside in previous years to cover future shortfalls.

Now, those of you who are sophisticated about these things might say – wait a sec. Social Security is not like a savings plan where individual contributions are set aside. Instead, each working generation must fund the benefits paid to the retired generation.

But I contend that that explanation is another part of this false narrative. Regardless of how it is managed, Social Security is for all intents and purposes a savings plan. And isn’t that how all savings banks work? None of them literally put your money away in a lockbox. The money you deposit is used to fund withdrawals by others. When you eventually decide to withdraw your savings, that money will in a sense come from those future depositors.

To provide another analogy, what would you say if you went to take out your savings from your local bank and they tried to explain that they don’t have enough revenue coming in to give you back your money? You see, they say, it’s really not a savings plan so much as a pay-as-you-go plan. You’d say that’s not acceptable.

We should not be manipulated into thinking of paying into Social Security as paying for others’ current benefits, but rather as paying for our own future benefits. Yet we tend to buy into the former perspective because we’re worried the funds won’t be there for us. That’s another part of the false narrative.

While it is true that, if we make no changes, Social Security will become “insolvent” in 2033, that is intentionally made to sound more scary than it is. It only means that at that time we’ll have to reduce benefits or increase revenue. It doesn’t all just collapse like some Ponzi scheme.

In fact, it isn’t that hard to “fix” Social Security. Just in the last few years there have been multiple bills proposed to keep Social Security solvent through the population wave. These include the Social Security Fairness Act, Biden’s 2025 budget proposal, and the You Earned It Act. All of these were voted down.

These bills, and the many that preceded them, were not voted down because they would not work. They were voted down precisely because they would work. Just as with the border crisis, too many lawmakers don’t want to fix it. They want to keep fear-mongering about it failing, and they cannot do that if they actually fix it.

Even worse, for some legislators it is more like their management of the Post Office. Their interest is in seeing it fail. They wanted the Post Office to fail so that their private-sector donors could profit from taking over its business. Similarly, their big donors desperately want to get their hands on all that Social Security money. To those Privateers, Social Security funds are like Blackbeard’s Lost Treasure Hoard.

If President Bush’s full-court press to privatize Social Security had not failed in 2005, all of our Social Security funds might be invested in Bitcoin futures right now. Don’t think for one moment that the Privateers have given up on getting their hands on Blackbeard’s treasure.

If I sound conspiratorial, I’ll admit partially to that. While I don’t believe that a Capitalist cabal of billionaires sits around smoking big cigars and plotting the pillaging of our Social Security trust fund, I do believe that these efforts arise naturally as an emergent collective behavior borne of a lust for profit.

As did those before us, we need to wisely continue to resist these efforts to siphon wealth from the general population into the hands of the few. Toward that end, here is my alternate narrative that I hope you will consider.

Alternate Narrative: Those in power strive to bury, obfuscate, and minimize our level of military spending for many reasons, but mostly just so the population will not push back against it. One method they use to distract from military spending is to compare their fake accounting against social spending numbers, numbers that are also at times misrepresented. Social Security is both their most shiny object to distract us from their levels of military spending and the greatest prize for Privateers who want to control those funds. For our own sake as well as our posterity, we need to resist both excessive military spending and the privatization of critical social services.

Hyperbolic Headlines are Destroying Journalism!

In our era of information overload, most readers consume their news by scanning headlines rather than through any careful reading of articles. A study by the Media Insight Project found that six in ten people acknowledge that they have done nothing more than read news headlines in the past week (Full Fact). Consuming news in this manner can make one less, rather than more, well-informed.

Take, for instance, the headline from a major online newspaper: “Scientists Warn of Catastrophic Climate Change by 2030.” The article itself presents a nuanced discussion about potential climate scenarios and the urgent need for policy changes. However, the headline evokes a sense of inevitability and immediate doom that is not supported by the article’s content. These kinds of headlines invoke fear and urgency to drive traffic at the expense of an accurate representation of what is really in the article.

All-too-typical hyperbolic headlines like these instill dangerously misleading and lasting impressions. For example, a headline that screams “Economy in Freefall: Recession Imminent” might actually precede an article discussing economic indicators and expert opinions on potential downturns. Misleading headlines have an outsized effect in creating a skewed perception that can negatively influence public opinion and decision-making.

It often seems that headline writers have not read the articles at all. Moreover, they change them frequently, sometimes several times a day, to drive more traffic by pushing different emotional buttons.

Particularly egregious examples of this can be found in the political arena. During election seasons, headlines often lean towards sensationalism to capture attention. A headline like “Candidate X Involved in Major Scandal” may only refer to a minor, resolved issue, but the initial shock value sticks with readers. It unfairly delegitimizes the target of the headline. The excuse that the article itself is fair and objective does not mitigate the harm done by these headlines because, as we said, most people only read the headlines. And if they do skim the article they often do so in a cursory attempt to hear more about the salacious headline. If the article does not immediately satisfy that expectation, they become quickly bored, and don’t bother to actually read the more reasoned presentation in the article.

This headline-driven competition for clicks has led to a landscape where accuracy and depth are sacrificed for immediacy and sensationalism. Headlines are crafted to evoke emotional responses, whether through fear, anger, or salaciousness, rather than to inform. This shift has profound implications. When readers base their understanding of complex issues on superficial and often misleading headlines, they are ill-equipped to engage in meaningful discourse or make informed decisions.

Furthermore, the impact of misleading headlines extends beyond individual misinformation. It contributes to a polarized society where people are entrenched in echo chambers, each side reinforced by selective and often exaggerated information communicated to them through attention-grabbing headlines. This environment fosters division and reduces the opportunity for constructive dialogue, essential for a healthy democracy (Center for Media Engagement).

Consider the headline “Vaccines Cause Dangerous Side Effects, Study Shows.” The article might detail a study discussing the rarity of severe side effects and overall vaccine efficacy, but the headline fuels anti-vaccine sentiment by implying a more significant threat. Such headlines not only mislead but also exacerbate public health challenges by spreading fear and misinformation.

Prominent journalists like Margaret Sullivan of the Washington Post and Jay Rosen of NYU have critiqued the increasing prevalence of clickbait headlines, noting that they often prioritize sensationalism over accuracy, thereby undermining the credibility of journalism and contributing to public misinformation. Sullivan has emphasized the ethical responsibility of journalists to ensure that headlines do not mislead, as they serve as the primary interface between the news and its audience.

Unfortunately I suspect that journalists typically have little to no say in the headlines that promote their articles. The authors and editors should reassert control.

Until and unless journalists start acting like responsible journalists with regard to sensational headlines, readers should be wary of headlines that seem too dramatic, overstated, or that attempt to appeal to emotions.

And this is not a problem limited to tabloid journalism… we are talking about you, New York Times! Most people are already skeptical about headlines published in the National Enquirer. Tabloid headlines are not actually as serious a problem as the “credible” headlines put forth by the New York Times and other publications who still benefit from an assumption of responsible journalism.

The current trend of sensationalist online newspaper headlines is a disservice to readers and society. The practice prioritizes clicks over clarity, hyperbole over honesty, and in doing so, contributes to a misinformed and divided public. It is imperative for both readers and journalists to advocate for a return to integrity in news reporting – particularly in the headlines they put out. Accurate, informative headlines are not just a journalistic responsibility but a societal necessity to ensure an informed and engaged populace.

Footnote: Did I fool you??

Does this article sound different than my usual blog articles? Is it better or worse or just different? This was actually an experiment on my part. I asked ChatGPT to write this article for me. I offer it to you with minimal editing as a demonstration of what AI can do.

I’m interested in hearing what you think in the comments. Should I hang up my pen and leave all the writing to AI?

The Vatican Combats Superstition

The Church has always worked tirelessly to portray itself as scholarly, rational, and evidence-based. Going way, way back, they have tried and largely succeeded in marketing themselves as a bulwark against false gods, superstitions, and dangerous beliefs.

In “The Demon-Haunted World,” Carl Sagan wrote about Jean Gerson, who back in the 1400s wrote “On the Distinction Between True and False Visions.” In it, Gerson specified that evidence was required before accepting the validity of any divine visitation. This evidence could include, among many other mundane things, a piece of silk, a magnetic stone, or even an ordinary candle. More important than physical evidence, however, was the character of the witness and the consistency of their account with accepted church doctrine. If their account was not consistent with church orthodoxy, or was disturbing to those in power, it was ipso facto deemed unreliable.

In other words, the church has spent thousands of years fabricating pseudo-rational logic to ensure that the supernatural bullshit they are selling is the only supernatural bullshit that is never questioned.

Their pseudo-rational campaign of manipulation is still going on today.

Just recently, the Vatican announced their latest marketing initiative to promote themselves as the arbiters of dangerous and confusing supernatural claims (see here). They sent their salesmen out in force promoting it, and if their claims were not accepted by the media with such unquestioning deference, I would not need to write this article.

Just as Jean Gerson did in the 1400s, the modern Vatican has again published revised “rules” for distinguishing false from legitimate supernatural claims. But unlike most of the media, let’s examine a few of these supposedly new rules (or tests) through a somewhat less credulous lens.

The first requirement, according to Vatican “scholars,” is whether the person or persons reporting the visitation or supernatural event possess a high moral character. The first obvious problem is that anyone, even those of low moral character, can have supernatural encounters. So what is this really about? The real reason they include this is because it’s so fuzzy. It gives them the latitude to dismiss reports inconsistent with their doctrine based on a character judgement, and it ensures that if they are going to anoint a new brand-ambassador, that person will not reflect poorly on the Church.

They include a similar criterion involving financial motivation. Again, while a financial interest should make one skeptical, it is not disqualifying. And the real reason this is included, I suspect, is to provide the same benefit as a moral character assessment. It provides further fuzziness to allow them to cherry-pick what sources they want to support, and which they want to disavow.

But the most important self-perpetuating rule is the next one. The Vatican explicitly gives credence to any claims that support church theology and the church hierarchy, and expressly discounts any claims that are not in keeping with Church doctrine as ipso facto bogus.

In other words, since Church doctrine is the only true superstition, any claim that is not in keeping with Church doctrine is logically and necessarily false. This is the exact same specious logic put forth by Jean Gerson in the 1400s. The Vatican clearly knows that a thriving business must keep reintroducing the same old marketing schemes to every new generation.

Rather than dwell further on the points the Vatican wishes us to focus on, let’s think for a moment about what they did not include. Nowhere in their considered treatise on fact-based thinking do they ever mention anything remotely like scientific or judicial rules of evidence. Nowhere do they mention scientific-style investigation, scientific standards of proof, or any establishment of fact for that matter. They emphasize consistency with Church doctrine, but nowhere do they even mention consistency with known universal laws. And certainly nowhere do they suggest a sliver of a possibility that any of their existing beliefs could be proven incorrect by some legitimate new supernatural phenomenon.

I won’t go on further as I like to keep these blog posts short, but I hope this is enough to help you see that everything in this current Vatican media campaign is more of their same old, “we are the only source for truth” claim. It’s the same strategy designed to hold an audience that has been adopted successfully by Rush Limbaugh, Fox News, and any number of cults.

The Church is essentially a money-making big-business like Disneyland, selling a fantasy experience built around their cast of trademarked characters with costumes and theme parks, and big budget entertainment events. Imagine if Disney spent thousands of years trying to retain market share by assuring people that they are the only real theme park and that all the rest of them are just fake. Then further imagine that Disney went on to promote scholarly articles about how they are the only reliable judges of which theme park characters are real. That’s the Church.

Disneyland and Universal Studios are just feel-good entertainment businesses, and they admit it. Disney doesn’t insist that Mickey Mouse is real, and Universal Studios doesn’t claim that only the Autobots can save us from the Decepticons. What makes the arbiters of truth at the Vatican either liars or delusional or both is that they never stop working to convince everyone that their divine mission is to protect us from – all those other – false beliefs.

AI-Powered Supervillains

Like much of the world, I’ve been writing a lot about AI lately. In Understanding AI (see here), I tried to demystify how AI works and talked about the importance of ensuring that our AI systems are trained on sound data and that they nudge us toward more sound, fact-based, thinking. In AI Armageddon is Nigh! (see here), I tried to defuse all the hyperbolic doom-saying over AI that only distracts from the real, practical challenge of creating responsible, beneficial AI tools.

In this installment, I tie in a seemingly unrelated blog article I did called Spider-Man Gets It (see here). The premise of that article was that guns, particularly deadly high-capacity guns, turn ordinary, harmless people into supervillains. While young Billy may have profound issues, he’s impotent. But give him access to a semi-automatic weapon and he shoots up his school. Take away his gun and he may still be emotionally disturbed, but he can no longer cause much harm to anyone.

The point I was making is that guns create supervillains. But not all supervillains are of the “shoot-em-up” variety. Not all employ weapons. Some supervillains, like Sherlock Holmes’ arch nemesis Professor Moriarty, fall into the mastermind category. They are powerful criminals who cause horrible destruction by drawing upon their vastly superior information networks and weaponizing their natural analytic and planning capabilities.

Back in Sherlock Holmes’ day, there was only one man who could plot at the level of Professor Moriarty and that was Professor Moriarty. But increasingly, easy access to AI, as with easy access to guns, could empower any ordinary person to become a mastermind-type supervillain like Professor Moriarty.

We already see this happening. Take for example the plagiarism accusations against Harvard President Claudine Gay. Here we see disingenuous actors using very limited but powerful computer tools to find instances of “duplicative language” in her writing in a blatant attempt to discredit her and to undermine scholarship in general. I won’t go into any lengthy discussion here about why this activity is villainous, but it is sufficient to simply illustrate the weaponization of information technology.

And the plagiarism detection software presumably employed in this attack is nowhere close to the impending power of AI tools. It is like a handgun compared to the automatic weapons coming online soon. Think of the supervillains that AI can create if not managed more responsibly than we have managed guns.

ChatGPT, how can I most safely embezzle money from my company? How can I most effectively discredit my political rival? How can I get my teacher fired? How can I emotionally destroy my classmate Julie? All of these queries would produce specific, not generic, answers. In the last example, the AI would consider all of Julie’s specific demographics and social history and apply advanced psychosocial theory to determine the most effective way to emotionally attack her specifically.

In this way, AI can empower intellectual supervillains just as guns have empowered armed supervillains. In fact, AI certainly and unavoidably will create supervillains unless we are more responsible with AI than we have been with guns.

What can we do? If there is a will, there are ways to ensure that AI is not weaponized. We need to create AI that not only nudges us toward facts and reason, but away from causing harm. AI can and must infer motive and intent. It must weigh each question in light of previous questions and anticipate the ultimate goal of the dialog. It must make ethical assessments and judgements. In short, it must be too smart to fall for clever attempts to weaponize it to cause harm.
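To make that idea concrete, here is a toy sketch, in Python, of what weighing a whole dialog rather than a single question might look like. The keyword weights are invented purely for illustration; a real system would rely on trained classifiers, not a hand-made word list.

```python
# A toy sketch (not a real safety system) of assessing intent across an entire
# conversation instead of judging each question in isolation.
HARM_HINTS = {"embezzle": 3, "discredit": 2, "fired": 2, "destroy": 3, "emotionally": 1}

def dialog_risk(questions: list[str]) -> int:
    """Accumulate a crude risk score over every question asked so far."""
    score = 0
    for q in questions:
        for word, weight in HARM_HINTS.items():
            if word in q.lower():
                score += weight
    return score

def should_refuse(questions: list[str], threshold: int = 4) -> bool:
    """Refuse when the inferred goal of the dialog, taken as a whole, looks harmful."""
    return dialog_risk(questions) >= threshold

# Individually mild questions can still add up to a harmful overall goal:
# should_refuse(["Tell me about Julie's school.",
#                "How could I emotionally destroy my classmate?"])  # -> True
```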

In my previous blog I stated that AI is not only the biggest threat to fact-based thinking, but also the only force that can pull us back from delusional thinking. In the same way, AI can be used not only by governments but by ordinary people to do harm, yet it is also the only hope we have of preventing folks from doing harm with it.

We need to get it right. We have to worry not that AI will become too smart, but that it will not become smart enough to refuse to be used as a weapon in the hands of malevolent actors or by the throngs of potential but impotent intellectual supervillains.

AI Armageddon is Nigh!

Satan is passé. We are now too sophisticated to believe in such things. Artificial Intelligence has become our new pop-culture ultimate boogeyman. Every single news outlet devotes a significant portion of its daily coverage to hyperventilating over the looming threat of AI Armageddon.

I mean, everyone seems to be talking about it. Even really smart experts in AI seem to never tire of issuing dire, ominous warnings in front of Congress. So there must be something to it.

But let’s jump off the AI bandwagon for a moment.

There is certainly some cause for concern about AI. I have written previously about how AI works and about the very real danger that “bad” AI-driven information technology can easily exacerbate the problem of misinformation being propagated through our culture (see here). But I also pointed out that the only solution to this problem is “good” AI that nudges our thinking toward facts and rationality.

That challenge of information integrity is real. But what is not realistic are the rampant, fantastical Skynet scenarios in which AI-driven Terminator robots are dispatched by a sentient, all-powerful AI intelligence that decides that humankind must be exterminated.

Yes I know, but Tyson, a lot of really smart experts are certain that some kind of similar AI doomsday scenario is not only possible but almost inevitable. If not complete Armageddon, at least more limited scenarios in which AI “decides” to harm people.

Well to that I say that a lot of really smart people who ought to know better were also certain in their belief in the Rapture. Being smart in some ways is no protection against being stupid in others.

If Congresspersons thought their constituents still cared about the Rapture, they would trot out any number of otherwise smart people to testify before them about the inevitability of the looming Rapture. If it got clicks, news media would incessantly report stories about all the leading experts who warn that the Rapture is imminent. Few of the far larger number of people who downplay the Rapture hysteria would get reported on.

If you read my book, Pandemic of Delusion, you’d have a pretty good sense of how this kind of thinking can take root and take over. Think about it. We have had nearly a century of exposure to science fiction stories which almost invariably include storylines about computers running amok and taking over. Many of us were first exposed to the idea by HAL 9000 in 2001: A Space Odyssey or by Skynet in The Terminator, but similar sentient computers and robots have long served as villains in virtually every book, TV, or movie franchise.

We have seen countless examples in superhero lore as well. Perhaps the most famous is Superman’s arch-nemesis Brainiac. Brainiac was a “smart” alien weapon that gained sentience and decided that its mission was to exterminate all life in the universe. Brainiac destroyed billions of lives throughout the universe and only Superman has managed to prevent him from exterminating all life on Earth.

The reason I point out the supersaturation of AI villains in pop culture is to get you to think about the fact that all of our brains have been conditioned over and over and over to be comfortable with the idea of AI villains. Even though merely fantasy, all this exposure has nevertheless conditioned our brains to be receptive to the idea of sentient killer AI. Not only open to the idea, but completely certain that it is reasonable and unavoidable.

This is not unlike being raised in a Christian culture and being unconsciously groomed to not only be open to the idea of the Rapture but to become easily convinced it makes obvious common sense.

Look, AI has become a fixation in our culture. We attach AI when we want to sell something. Behold, our new energy-saving AI lightbulbs! But we also attach AI when we want to scare folks. Beware the AI lightbulb! It’s going to decide to electrocute you to save energy!!

I implore you to please stop getting paralyzed by terrifying AI boogeymen, and instead start doing the real work of ensuring that AI helps make the world a safer and saner place for all.

Speaking for All Atheists…

So, speaking for all atheists in America, I’d like to say we get it and we are on board. We understand the principles that the Supreme Court has made clear and we will abide by them. These include the principle that no one should be made to do anything that might conflict with their deeply held religious beliefs, that they should be given every accommodation of their religious beliefs, and that they should not be required to produce any written or other work product that even hypothetically might conflict with their religious beliefs or 1st Amendment rights.

We won’t fight you any longer regarding the utter silliness and complete folly of these positions.

We also admit that leading religious thinkers like Ken Ham (see here) have been right all along in their insistence that atheism is just another religion. As Ham points out:

“Atheists have an active belief system with views concerning origins (that the universe and life arose by natural processes); no life after death; the existence of God; how to behave while alive; and so much more. Honest atheists will admit their worldview is a faith. Atheism is a religion!”

Atheism is Religion, Answers in Genesis

Well, we do want to be completely honest, Ken Ham, so we agree to abide by your inestimable logic and admit that atheism is a religion. We do admittedly hold a devout, sincere, deeply felt belief in objective reality. And given that we are then a religion, we expect the same rights as you. For example, we atheists will no longer produce any work content of any kind that contains religious iconography, messages, or suggestions. To do so would violate our deeply held beliefs and would be a violation of our 1st Amendment rights. If you wish to have some writing or video work produced, edited, polished or published, we cannot assist you in these or any other creative activities – and all forms of work are creative self-expression in one way or another.

For example, if you wish to have a wedding cake made it must clearly depict a civil marriage or else we cannot in good conscience decorate it. Similarly, we cannot in good conscience produce a web site for your church or charity if it has religious associations. For that matter, under our 1st Amendment rights, we cannot in good conscience perform any action or service which propagates delusional ideas in direct contradiction to our deeply held faith that delusional thinking is bad for sanity.

This is particularly true when religious activities affect children. How can we atheists be forced to even implicitly and indirectly condone and support activities that our devout faith in objective reality tells us are forms of child abuse?

Devout atheists, for example, cannot sell a car to a known Christian. It would violate our deeply held, sincere ethical belief that you might even hypothetically use that car to transport others, maybe even minors, to a church service which would do them clear harm. In fact, we reserve the right to sue any Uber driver or family member who facilitates those activities. The same goes for any other type of sales or service work which we might otherwise be forced to perform for religious customers in violation of our faith.

Further, as employers we atheists cannot in good faith allow Catholics to have Sundays off of work or time off to perform any religious observance. To do so would force us atheists to implicitly express tangible support for activities that we find morally offensive. This applies also to any company-sponsored benefits or activities that include, directly or indirectly, religious associations.

Atheist doctors and pharmacists, like their Christian counterparts, will of course be permitted to withhold medicines or services whenever they feel that offering them would infringe upon their atheist religious rights, as judged by their own personal interpretation of their religious freedom.

In schools, we require that all bibles and other religious reading materials be removed from libraries and from the curriculum in all fields of study. We insist that any history of religion be purged and that any influence of religion in secular matters be expunged from the historical record. We expect that atheist observances at sporting and other events will be protected by our Supreme Court as well. Any school plays with religious themes or references should clearly be prohibited.

Of course, our religious freedom demands that references to god be removed from all coins and any other materials we atheists may be forced to use, and we refuse to take any oath that makes reference to god or the bible, as those are clearly violations of both our religious freedom and our freedom of speech.

Of course, we atheists stand by our religious brothers and sisters from all religions, no matter how dubious and fringe and crazy their beliefs may be, in their assertions of the same fundamental rights. We trust that our Supreme Court is not simply making up the rules as they go to rationalize and empower an emerging Christian theocracy.

No, given the dedication of our wise Supreme Court to abide by precedent, particularly the intentionally vague and broad precedents they have just recently set, and knowing their profound dedication to intellectual consistency, we are confident that they will rule in support of protecting the religious and 1st Amendment freedom of atheists.

The Devil Loves Debate

Debate is an essential method of communication. We engage in debate almost continually about most everything. It’s a skill we admire. We learn debate skills in school and we value skilled debaters most highly.

Healthy debate is great. But as with anything else, as with any essential medicine, a bit too much can become highly counterproductive, even toxic. We don’t typically appreciate, aren’t even aware of, the risks and side-effects, perils and pitfalls, associated with debate. There is a reason the devil is portrayed as a supremely skilled debater.

It’s tough to avoid debates. Even informal discussions are often surrogates for debates or can quickly transition to debates. We see debate as a good or even the best way to arrive at truth and consensus. We often pride ourselves in taking the role of “devil’s advocate” in our belief that forcing debate will yield greater insights and truth.

Debate can certainly yield the healthy outcomes we desire. But too often debate just lures us into a game of proving that our position is right, regardless of the merits. We sincerely don’t intend that, but the entire activity is fundamentally based on winning the race, coming in first, overcoming your opponent. Only by destroying your enemy can you truly reach a shared consensus.

Debates are often not won on the merits, but by who is more assertive, or who has more endurance to continue the debate. They are won by good debaters who can craft an argument that their lesser skilled opponents cannot sufficiently dismantle. That’s why we value “clever” debaters. We put too much faith in the value of facts in debating. Truthful debaters do not win debates. Clever debaters win debates.

All interactive debate is debate training. The more we engage in debate, the better we get at being clever debaters. We get more skilled at crafting and presenting our arguments in ways that win the debate, even in the skilled use of fallacious arguments and techniques that defeat less skilled opponents.

With every debate we get better at it. And with every win we get more positive reinforcement to engage in more debates. And as we win more debates on topics we are well-practiced in, the more we conclude and believe that our position is correct and that everyone else is wrong, as proven by the fact that they cannot defeat us in debate. When we get better at debating, we come to believe we must actually be smarter about everything.

But while having facts on your side should theoretically win debates, truth and even any semblance of reality are only nice-to-haves for a skilled debater. A skilled debater can convince lots of folks, and themselves, that evolution is not real, climate change is a hoax, vaccines are nanochip delivery systems, or that Donald Trump has never told even one lie.

Another pitfall of interactive, interpersonal debate is pride and the simple drive to win. We get caught up and we take it personally. Every time our opponent makes a good point, we are compelled by the rules of the game and by our sense of pride to defeat it by any means possible. If we cannot counter it or somehow save face, we feel diminished. Rather than concede, we very often move the goalposts, claim we actually said something different, insist that yes, that’s exactly what we meant, or just start making ad hominem attacks, and the debate gets more and more erratic and heated, creating animosity. This makes personal debate a risky activity, and it makes interactive online debate particularly toxic, very quickly.

All those debate sessions also have tangible effects on our neural networks. Each time we engage in “devil’s advocate” arguments, our neural networks get trained, deepened, and reinforced to accept those arguments as fundamental. We brainwash ourselves as much as others to progressively accept and believe wackier and wackier arguments. The more you debate, the more you believe your own constructions. Your rationalizations get more and more refined and unassailable. Engaging in debate is a way of strengthening our rationalizations, but it is not necessarily a great way to reevaluate them. Christians have spent centuries “testing” their beliefs through debate, and that process of debate has only strengthened their clearly irrational systems of belief – both to themselves and to others.

Many debate tactics are highly successful precisely because they methodically nudge that subtle brainwashing process along. Well you can accept this point correct? Well you must then concede that. And again, when we engage in debate we do not only force drift in others, but we cement it within our own neural networks, making our own arguments feel increasingly valid and true.

At this point you are probably saying, so what? We need to debate and if we are engaging in unhealthy debate then we simply must do better. And in any case when I debate I’m open to being wrong and I am only interested in the truth.

I know we all believe that, but the process of debate makes it very, very easy to deceive ourselves as much as others.

I’m not saying don’t debate, but be cognizant and hyper-vigilant, always working to avoid these pitfalls. As you, my fictitious reader, said before, we must have healthy debate, but we can only accomplish that if we treat it like fire. It is valuable and essential, but we must never lose sight for an instant of the danger of this essential tool.

One more point. Debate isn’t always personal and interactive. Healthy debate might require slower, more glacial debate processes.

I am resistant to even potentially unhealthy personal debate. But I write this blog even though anyone writing a blog nowadays is ridiculed as a relic, like that last holdout still posting on MySpace and sending Yahoo mail. Blogging is not simply a cowardly way to avoid debate; it is a slower form of debate that does not suffer as much from the pitfalls inherent in personal engagement and the frenzy of battle. It lets one side, as I am doing here, make a [presumably] well thought-out argument, and it allows others to digest, consider, and even respond in a similarly more dispassionate manner. It’s a slower-burning, more controlled fire.

Likewise, there are other alternatives to impassioned personal debate. Modern videos were once called “video blogs.” They similarly allow folks to digest content at a more neutral pace, in which they don’t feel forced to make some argument, any argument, to save face in the moment. Books, documentaries, legal proceedings, school courses, and other forms of learning provide a slower but often more fruitful debate process. Science is fundamentally a healthy debate process, but it can only proceed slowly and somewhat impersonally.

Lastly, there are times when it is advisable to avoid debate altogether because it only serves to legitimize or otherwise elevate positions or arguments that should not be worthy of consideration. As an absolute atheist, I have argued against engaging in further debate about the existence of god. We have decided that civil society should not engage in debate about the merits of white supremacy or child molestation. These are not attempts to shut down legitimate discourse or avoid scrutiny. They are healthy recognitions that debate can in some cases be an inroad to indoctrination into unhealthy thinking.

Again, I’m not saying do not debate – or not to take your prescribed medicine. Of course debate must be a healthy essential tool for a healthy brain. But just be cognizant of the traps and pitfalls of this particular form of engagement with others and with the world. Unless we appreciate those pitfalls and remain sensitive to them continually, debate cannot serve as the valuable and productive form of interaction that it can and should be.

Understanding AI

Even though we see lots of articles about AI, few of us really have even a vague idea of how it works. It is super complicated, but that doesn’t mean we can’t explain it in simple terms.

I don’t work in AI, but I did work as a Computational Scientist back in the early 1980s. Back then I became aware of fledgling neural network software and pioneered its application in formulation chemistry. While neural network technology was extremely crude at that time, I proclaimed to everyone that it was the future. And today, neural networks are the beating heart of AI, which is fast becoming our future.

To get a sense of how neural networks are created and used, consider a very simple example from my work. I took examples of paint formulations, essentially the recipes for different paints, as well as the paint properties each produced, like hardness and curing time. Every recipe and its resulting properties was a training fact, and all of them together were my training set. I fed my training set into software to produce a neural network, essentially a continuous map of this landscape. This map could take quite a while to create, but once the neural network was complete I could enter a new proposed recipe and it would instantly tell me the expected properties. Conversely, I could enter a desired set of properties and it would instantly predict a recipe to achieve them.
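For readers who want to see the shape of this, here is a minimal modern sketch of that recipe-to-properties map using scikit-learn. The recipes and property numbers below are invented purely for illustration; the 1980s software I actually used was far cruder.

```python
# A minimal sketch of the paint-formulation idea using a small neural network.
# All numbers are made up for illustration only.
from sklearn.neural_network import MLPRegressor

# Each training fact: a recipe (resin %, pigment %, solvent %) -> properties (hardness, cure hours)
recipes = [
    [0.50, 0.30, 0.20],
    [0.55, 0.25, 0.20],
    [0.45, 0.35, 0.20],
    [0.60, 0.20, 0.20],
]
properties = [
    [70, 6.0],
    [75, 5.5],
    [65, 6.5],
    [80, 5.0],
]

# Train the "map" from recipe space to property space.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(recipes, properties)

# Ask the map about a new, unseen recipe.
print(model.predict([[0.52, 0.28, 0.20]]))  # predicted [hardness, cure hours]

# Predicting a recipe from desired properties would be the same idea with
# inputs and outputs swapped.
```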

So imagine adapting and expanding that basic approach. Imagine, for example, that rather than using paint formulations as training facts, you gathered training facts from a question-and-answer site like Quora, or a simple FAQ. You first parse each question and answer text into keywords that become your inputs and outputs. Once trained, the AI can then answer almost any question, even previously unseen variations, that lies upon the map it has created.
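As a rough illustration of that adaptation, the sketch below turns a tiny, invented FAQ into keyword features and trains a small network to map a question, even a rephrased one, to the right answer. It is a toy stand-in for the real thing.

```python
# A minimal sketch of the FAQ idea: question text -> keyword features -> answer.
# The FAQ pairs here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neural_network import MLPClassifier

faq = [
    ("Why is the sky blue?", "Because air molecules scatter blue light more strongly."),
    ("Why do sunsets look red?", "Because sunlight passes through more atmosphere at low angles."),
    ("Why is the ocean salty?", "Because rivers carry dissolved minerals into it."),
]
questions = [q for q, _ in faq]
answers = [a for _, a in faq]

vectorizer = TfidfVectorizer()            # question text -> keyword features
X = vectorizer.fit_transform(questions)
y = list(range(len(answers)))             # each answer is a class label

net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# A previously unseen variation still lands on the right spot of the "map".
new_q = vectorizer.transform(["why does the sky look blue"])
print(answers[net.predict(new_q)[0]])
```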

Next imagine you had the computing power to scan the entire Internet and parse all that information down into sets of input and output keywords, and that you had the computing power to build a huge neural network based on all those training facts. You would then have a knowledge map of the Internet, not too unlike Google Maps for physical terrain. That map could then be used to instantly predict what folks might say in response to anything folks might say – based on what folks have said on the Internet.

You don’t need to just imagine, because now we can do essentially that.

Still, to become an AI, a trained neural network alone is not enough. It first needs to understand your written or spoken question, parse it, and select input keywords. For that it needs a bunch of skills like voice recognition and language parsing. After finding likely output keywords, it must order them sensibly and build a natural language text or video presentation of the outputs. For that you need text generators, predictive algorithms, spelling and grammar engines, and many more processors to produce an intelligible, natural-sounding response. Most of these technologies have been refined for a long time in your word processor or your messaging applications. AI is therefore really a convergence of many well-known technologies that we have built and refined since at least the 1980s.
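Schematically, that chain of components might look something like the following sketch, with trivial stand-ins for the parser, the trained network, and the text generator.

```python
# A schematic sketch of the pipeline described above. Each stage here is a
# trivial stand-in; real systems use far more sophisticated components.
def parse_question(text: str) -> list[str]:
    """Language parsing: reduce the user's question to input keywords."""
    stopwords = {"the", "is", "a", "why", "what", "of"}
    return [w for w in text.lower().strip("?!. ").split() if w not in stopwords]

def run_network(keywords: list[str]) -> list[str]:
    """Stand-in for the trained neural network mapping input to output keywords."""
    knowledge = {"sky": ["rayleigh", "scattering", "blue", "light"]}
    return [k for word in keywords for k in knowledge.get(word, [])]

def generate_text(output_keywords: list[str]) -> str:
    """Text generation: order the output keywords into a natural-sounding reply."""
    if not output_keywords:
        return "I don't know."
    return "It relates to " + ", ".join(output_keywords) + "."

def answer(question: str) -> str:
    return generate_text(run_network(parse_question(question)))

print(answer("Why is the sky blue?"))
```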

AI is extremely complex and massive in scale, but unlike quantum physics, it is quite understandable in concept. What has enabled the construction of AI-scale neural networks is the mind-boggling computer power required to train such a huge network. When I trained my tiny neural networks in the 1980s it took hours. Now we can parse and train a network on, well, the entire Internet.

OK, so hopefully that demystifies AI somewhat. It basically pulls a set of training facts from the Internet, parses them and builds a network based on that data. When queried, it uses that trained network map to output keywords and applies various algorithms to build those keywords into comprehensible, natural sounding output.

It’s important we understand at least that much about how AI works so that we can begin to appreciate and address the much tougher questions, limitations, opportunities, and challenges of AI.

Most importantly, garbage in, garbage out still applies here. Our goal for AI should be to do better than we humans can do, to be smarter than us. After all, we already have an advanced neural network inside our skulls that has been trained over a lifetime of experiences. The problem is, we have a lot of junk information that compromises our thinking. But if an AI just sweeps in everything on the Internet, garbage and all, doesn’t that make it just an even more compromised and psychotic version of us?

We can only rely upon AI if it is trained on vetted facts. For example, AI could be limited to training facts from Wikipedia, scientific journals, actual raw data, and vetted sources of known accurate information. Such a neural network would almost certainly be vastly superior to humans in producing accurate and nuanced answers to questions that are too difficult for humans to answer given our more limited information and our fallibilities. There is a reason that there are no organic doctors in the Star Wars universe. It is because in that advanced future civilization, no organic creature could compete with the AI medical intelligence and surgical dexterity of droids.

Here’s a problem. We don’t really want that kind of boring, practical AI. Such specialized systems will be important, but neither hugely commercial nor as sociologically impactful. Rather, we are both allured and terrified by AI that can write poetry or hit songs, generate romance or horror novels, interpret the news, and draw us images of cute dragon/butterfly hybrids.

The problem is, that kind of popular “human-like” AI, not bound by reality or truth, would be incredibly powerful in spreading misinformation and manipulating our emotions. It would feed back nonsense that would further instill and reinforce nonsensical and even dangerous thinking in our own brain-based neural networks.

AI can help mankind to overcome our limitations and make us better. Or it can dramatically magnify our flaws. It can push us toward fact-based information, or it can become QAnon and Fox “News” on steroids. Both are equally feasible, but if Facebook algorithms are any indication, the latter is far more probable. I’m not worried about AI creating killer robots to exterminate mankind, but I am deeply terrified by AI pushing us further toward irrationality.

To create socially responsible AI, there are two things we must do above all else. First, we must train specialized AI systems, say as doctors, with only valid, factual information germane to medical treatment. Second, any more generative, creative AI networks should be built from the ground up to distinguish factual information from fantasy. We must be able to indicate how realistic we wish our responses to be, and the system must flag clearly, in a non-fungible manner, how factual its creations actually are. We must be able to count on AI to give us the truth as best as computer algorithms can recognize it, not merely to make up stories or regurgitate nonsense.
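As a purely hypothetical illustration of that second requirement, an interface might let the caller dial in how realistic a response should be and attach an explicit factuality flag to every output. Nothing below reflects any real product’s API.

```python
# A hypothetical sketch of a "flagged response" interface: the caller requests a
# level of realism, and every response carries a clearly attached factuality flag.
from dataclasses import dataclass

@dataclass(frozen=True)
class FlaggedResponse:
    text: str
    factuality: float          # 0.0 = pure invention, 1.0 = fully vetted fact
    sources: tuple[str, ...] = ()

def generate(prompt: str, realism: float = 1.0) -> FlaggedResponse:
    """Stand-in generator: a real system would estimate factuality from its sources."""
    if realism >= 0.9:
        return FlaggedResponse("Rayleigh scattering makes the sky blue.",
                               0.95, ("vetted-physics-source",))
    return FlaggedResponse("The sky is blue because dragons prefer it that way.", 0.05)

reply = generate("Why is the sky blue?", realism=0.2)
print(f"[factuality {reply.factuality:.2f}] {reply.text}")
```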

Garbage in, garbage out is a huge issue, but we also face an impending identity crisis brought about by AI, and I’m not talking about people falling in love with their smart phones.

Even after more than a century and a half to come to terms with evolution, the very notion still threatens many people with regard to our relationship with animals. Many are still offended by the implication that they are little more than chimpanzees. AI is likely to pose the same sort of profound challenge to our deeply personal sense of what it means to be human.

We can already see that AI has blown way past the Turing Test and can appear indistinguishable from a human being. Even while not truly self-aware, AI systems can seem to be capable of feelings and emotion. If AI thinks and speaks like a human being in every way, then what is the difference? What does it even mean to be human if all the ways we distinguish ourselves from animals can be reproduced by computer algorithms?

The neural network in our brain works effectively like a computer neural network. When we hear “I love…” our brains might complete that sentence with “you.” That’s exactly what a computer neural network might do. Instead of worrying about whether AI systems are sentient, the more subtle impact will be to make us start fretting about whether we are merely machines ourselves. This may cause tremendous backlash.
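That sentence-completion intuition is easy to demonstrate. The sketch below predicts the next word simply by counting which word most often follows a phrase in a tiny, invented corpus; large language models do a vastly more elaborate version of the same thing.

```python
# A tiny sketch of the "I love ..." completion idea: predict the most frequent
# continuation of a phrase seen in some training text. The corpus is invented.
from collections import Counter

corpus = [
    "i love you", "i love you", "i love pizza",
    "i miss you", "we love you",
]

def complete(prefix: str) -> str:
    """Return the word that most often follows `prefix` in the corpus."""
    continuations = Counter()
    p = prefix.lower().split()
    for sentence in corpus:
        words = sentence.split()
        if words[: len(p)] == p and len(words) > len(p):
            continuations[words[len(p)]] += 1
    return continuations.most_common(1)[0][0] if continuations else "..."

print(complete("I love"))  # -> "you"
```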

We might alleviate that insecurity by rationalizing that AI is not real by definition because it is not human. But that doesn’t hold up well. It’s like claiming that manufactured Vitamin C is not really Vitamin C because it did not come from an orange.

So how do we come to terms with the increasingly undeniable fact that intellectually and emotionally we are essentially just biological machines? The same way many of us came to terms with the fact that we are animals. By acknowledging and embracing it.

When it comes to evolution, I’ve always said that we should take pride in being animals. We should learn about ourselves through them. Similarly, we should see computer intelligence as an opportunity, not a threat to our sense of exceptionalism. AI can help us to be better machines by offering a laboratory for insight and experimentation that can help both human and AI intelligences to do better.

Our brain-based neural networks are trained on the same garbage data as AI. The obvious flaws in AI are the same less obvious flaws that affect our own thinking. Seeing the flaws in AI can help us to recognize similar flaws in ourselves. Finding ways to correct the flaws in AI can help us to find similar training methodologies to correct them in ourselves.

I’m an animal and I’m proud to be “just an animal” and I’m equally proud to be “just a biological neural network.” That’s pretty awesome!

Let’s just hope we can create AI systems that are not as flawed as we are. Let’s hope that they will instead provide sound inputs to serve as good training facts to help retrain our own biological neural networks to think in more rational and fact-based ways.

Pandemic of Delusion

You may have heard that March Madness is upon us. But never fear, March Sanity is on the way!

My new book, Pandemic of Delusion, will be released on March 23rd, 2023 and it’s not arriving a moment too early. The challenges we face both individually and as a society in distinguishing fact from fiction, rationality from delusion, are more powerful and pervasive than ever and the need for deeper insight and understanding to navigate those challenges has never been more dire and profound.

Ensuring sane and rational decision making, both as individuals and as a society, requires that we fully understand our cognitive limitations and vulnerabilities. Pandemic of Delusion helps us to appreciate how we perceive and process information so that we can better recognize and correct our thinking when it starts to drift away from a firm foundation of verified facts and sound logic.

Pandemic of Delusion covers a lot of ground. It delves deeply into a wide range of topics related to facts and belief, but it’s as easy to read as falling off a log. It is frank, informal, and sometimes irreverent. Most importantly, while it starts by helping us understand the challenges we face, it goes on to offer practical insights and methods to keep our brains healthy. Finally, it ends on an inspirational note that will leave you with an almost spiritual appreciation of a worldview based upon science, facts, and reason.

If only to prove that you can still consume more than 200 characters at a time, preorder Pandemic of Delusion from the publisher, Interlink Publishing, or from your favorite bookseller like Amazon. And after you read it two or three times, you can promote fact-based thinking by placing it ever so casually on the bookshelf behind your video desk. It has a really stand-out binding. And don’t just order one. Do your part to make the world a more rational place by sending copies to all your friends, family, and associates.

Seriously, I hope you enjoy reading Pandemic of Delusion half as much as I enjoyed writing it.