
Make AI Why Your New Pastime!

When Ph.D. candidates near the end of their degree programs, they face a major hurdle: the qualifying exam, or oral defense. This is standard for most math and hard science fields, but is also often required in disciplines like history and English literature. During the defense, the candidate stands before a panel of professors, answers questions about their thesis, and then faces a battery of general questions designed to assess their depth and breadth of knowledge.

One tall tale about these oral defenses is the “Blue Sky” story. In it, the professors open with a simple question like “Why is the sky blue?” After the candidate answers, they respond only with “Why?” After each further answer, they again ask “Why?”

This isn’t just a campus myth, because a good friend of mine, a Ph.D. physicist, was subjected to just such a grilling, starting with “Why is the sky blue?” He told me that over the course of the next hour he ended up drawing upon a far wider and deeper range of physics knowledge than he ever realized he knew. All in response to repeated questions consisting of just “why?”

This is a game that confounds and exasperates parents all the time. We say something to our toddler, and they ask “why?” When we answer, they again say “why?” Parents usually give up after perhaps three iterations. A Ph.D. candidate would get through at least a few more iterations within their field of specialization.

It makes me wonder if a “Why-Q” would not be a great intelligence quotient for AI. If a normal parent can score 3, and a well-prepared Ph.D. candidate might score 6, what would AI score? Probably a much higher count reflecting deeper knowledge, and certainly its breadth of knowledge would be essentially unlimited.

Given that we now have essentially Ph.D.-level intelligence in every field right at our beck and call 24/7 through AI, I want to suggest a game I call “AI Why” that you can play whenever you like. Take a break from endless YouTube or TikTok videos. Stop reading increasingly crappy articles because you’ve run out of anything actually worthwhile. Instead, open your preferred AI app and pass the time playing AI Why.

Ask AI any question, serious or whimsical, even something like “Why is the sky blue?” Read over the answer, and then ask a follow-up question. You can dive deeper into the subject or go off on a different tangent. And you can continue as long as you like. AI will never think your question is silly or get sick of your questions, and it will always give you an interesting answer.

This is very different from simply surfing the Internet. With Google or even Wikipedia, you are limited to clicking through a fixed set of links produced by algorithms designed to manipulate you. AI interaction is conversational. You can take your AI conversation anywhere you like and explore the vastness of human knowledge rather than get funneled down into rabbit holes.

Of course the AI system you use does matter. I would not go near anything under the control of Elon Musk for example. But not all AI systems are configured so that all paths lead you to the oppression of South African Whites. I use Perplexity (see here) because they are strongly dedicated to providing sound, fact-based information.

The other great thing about Perplexity is that it remembers threads of dialogue. That means I can ask Perplexity about a topic, and then come back to that thread days or months later to continue the discussion.

Just to give you a flavor of this great pastime, I asked Perplexity “Why is the sky blue?” It gave me a lot of interesting information, to which I followed up by asking “Why does Rayleigh scattering occur?” After reading more about that, I asked “Why do refractive indices differ?” The answer led me to ask “Why is light an electric field?” And that led me to “Why is the self-propagating electromagnetic field of light not perpetual motion?”

To explain that last question a bit: light propagates forever in a vacuum. It seems counter-intuitive that something moving forever is not perpetual motion by definition. But Perplexity clearly explained that no, light may move forever, but it does no work. That led me to ask the gotcha question, “How can electromagnetic radiation undergo self-propagation between electrical and magnetic fields with no loss of energy?”

At that point, it took me into Maxwell’s equations and lost me.

This hopefully illustrates how you can go as deep as you like in your conversations with AI. Or, I could have taken it down another path that led to the family life of Amedeo Avogadro. AI will accompany you anywhere you want to go. (And no, that is not to imply that it just agrees with anything you say. It does not.)

So, my message is to become discussion buddies with your genius AI friend. Learn from it. Expand your brain and have fun doing so. Don’t waste the precious opportunity we have to so easily learn almost anything about almost anything.

Make AI Why one of your favorite pastimes!

Shallow Science Reporting in The Atlantic

On June 3rd, Jonathan Lambert published an article in The Atlantic entitled “Psychedelics are Challenging the Scientific Gold Standard” (see here). The tagline was “How do you study mind-altering drugs when every clinical-trial participant knows they’re tripping?”

I’ll first mention that articles about psychedelics are always attractive clickbait. That’s not necessarily bad. One might hope that such clickbait will draw readers in long enough to impart some more general science knowledge and insight.

But sadly this article instead spreads serious misinformation and creates harmful misconceptions. The other day my wife, who is an accomplished epidemiologist, shared her frustration over the many misinformed and misleading scientific arguments presented in this article.

I’ve already written quite a bit about terrible scientific reporting in this blog and in my book, Pandemic of Delusion (see here). So in this installment I’ll use the article as a learning opportunity to share some more accurate scientific insight into clinical trials and to correct some of the misinformation it presents.

The author claims that the study of mind-altering drugs presents new challenges since participants can easily tell whether they are tripping or not. Being aware of which treatment you have received could result in a distortion or even an invalidation of the results.

But this is hardly a new or even remotely unique challenge. There are a wide range of non-hallucinogenic treatments that have side effects that are also easily apparent to the participants. In fact it is an extremely common situation for epidemiologists, one that they have dealt with successfully for many decades in any study where the treatment has noticeable side effects like nausea or lethargy.

The author then goes on to present this as a fundamental issue with randomized controlled trials (RCTs) as a clinical study design strategy. An RCT is a widely accepted and well-proven design in which participants are assigned to the different trial groups being tested and compared in a completely random manner. As Mr. Lambert correctly points out, the RCT is the “gold standard” for clinical designs.

In his article, the author attempts to make a case that this “gold standard” is insufficient to meet the challenge of studies of this kind and that “We shouldn’t be afraid to question the gold standard.” This quote came from a source, but it was still chosen and presented by the author to support his conclusions. I would be highly surprised if his source intended the comment to be interpreted the way it is used in this article. I know my wife is often incensed by the way her interview comments have been selectively used in articles to convey something very different than what she intended.

As an aside, I want to mention that when journalists interview scientists, they generally refuse any offers to “fact check” the final article, citing “journalistic integrity.” I find this claim highly suspect, particularly since interviewers like Rachel Maddow commonly start by asking their guests “did I get all that right in my summary introduction?” That kind of checking only improves, rather than compromises, journalistic integrity and the accuracy of the reporting.

In any case, while every study presents unique challenges, none of these challenges undermine the basic validity of our gold standard.

But to support his assertion, the author incorrectly links RCT designs with “blinding.” He states that “Blinding, as this practice is called, is a key component of a randomized controlled trial.”

For clarification, blinding is the practice of concealing treatment group assignments from the participants, and preferably from the investigators as well (which is called double blinding), even after the treatment is administered.

But blinding is an entirely optional addition to an RCT study design. Blinding is not a required component of an RCT design, let alone a “key component” as the author asserts. Many valid RCT designs are not blinded, let alone double-blinded. For more details on this topic I point you to the seminal reference work by Schulz and Grimes published in 2002.¹

The author makes similar mistakes by conflating RCT designs with placebo effects. To clarify the misconceptions he has created: many, many studies, including randomized trials, do not include a placebo group, nor is a placebo always necessary or sensible. In many typical cases, the study goal is to compare a new drug to a previous standard, and a placebo is not relevant. In other cases, the use of a placebo would be unethical, such as in trials of contraceptives.

Next the author advocates for new, alternative study designs like “open label trials” and “descriptive studies.” But neither of these designs is new, nor is either in any way superior to randomized trials. In fact they are far inferior and introduce a host of biases that an RCT is designed to eliminate. They are alternatives, yes, but only when one cannot economically, technically, or ethically conduct a far more rigorous and controlled RCT study.

Non-randomized trials can also be used as easy “screening” studies to identify promising areas for more rigorous investigation. For example, non-randomized studies initially suggested that jogging after myocardial infarction could prevent further infarctions. Randomized studies proved this to be incorrect, probably because of other lifestyle differences between those who choose to exercise and those who do not. But again, such findings should be taken as tentative until a proper RCT can be accomplished.

And there are many options that trained researchers can utilize to study hallucinogenic drugs, as they do with a wide range of detectable treatment scenarios, without compromising the sound basis of a good randomized trial design. As just one example, they could give their control group an alternative medication that causes many of the same symptoms, even tripping! This is done fairly routinely in other similar situations.

There are many other criticisms one could and should make of this article, but I’ll wind down by saying that psychedelics are not “challenging the scientific gold standard.” We do not need to compromise the integrity of good scientific methods in order to study the efficacy of hallucinogens in treating PTSD or any other conditions.

And further, we should push back against this kind of very poor scientific reporting because it propagates misinformation that undermines good, sound, established scientific techniques. The Atlantic should hold their authors to a higher standard.

  1. Kenneth F. Schulz and David A. Grimes, “Blinding in randomized trials: hiding who got what,” The Lancet, Vol. 359, February 23, 2002.

The Right Direction for AI

In this blog and in my book, Pandemic of Delusion, I have focused a lot on AI and particularly on its tremendous potential to shape our thinking for better or for worse. While AI represents a frighteningly powerful technology for spreading lies and misinformation, it is also the greatest hope we have to combat misinformation and overcome our own cognitive vulnerabilities.

I’ve often repeated that AI can, should, and must be implemented responsibly in a way that nudges us toward clarity, facts, and reason and away from confusion, misinformation, and lies.

But I expressed that sentiment based on limited experience. I assumed that AI actually can be implemented using algorithms that can fairly and accurately assess the factual accuracy of a proposition. It was also a somewhat aspirational expression of hope and faith that responsible technology companies really will strive to do just that.

Today I feel a little bit more confident that my faith, hope, and trust in AI to make things better was justified. I’ve been playing around with an AI-based search engine called Perplexity (see here). Perplexity utilizes AI language models not merely to produce raw search results, as Google does, but to assimilate, assess, and present a coherent answer to your question. It’s similar to ChatGPT, but it draws upon live information and provides source citations for verification.

But there’s another more important difference. While ChatGPT is well-suited to conversational dialogues and creative activities, Perplexity is more focused on providing factual and reliably-supported information, analyses, and conclusions.

I asked Perplexity:

“How specifically does Perplexity prevent feeding me misinformation?”

What it told me was:

“Perplexity is a measure used in natural language processing to evaluate the quality of the language models. Misinformation tends to have higher perplexity compared to truthful statements, and this characteristic can be leveraged to detect and debunk misinformation. By analyzing the perplexity of language model outputs, one can identify deceptive content and distinguish it from reliable sources based on the cognitive effort and appeal to emotions. Therefore, perplexity can be used as a tool to help prevent the propagation of misinformation by identifying content that exhibits characteristics associated with deceptive or unreliable information.”

This is really, really heartening and encouraging. While a perplexity measure is only a start and is certainly not the only approach that AI can use to nudge us toward facts and reason, it demonstrates that algorithms can be implemented to assess the quality of information. Further, perplexity.ai proves there are people and companies who are able and willing to provide incredibly useful tools in a responsible fashion.
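For the technically curious, here is roughly what that measure means. This is just a minimal sketch of the standard natural language processing definition of perplexity, using made-up token probabilities; it is not a peek inside Perplexity’s actual pipeline, which I have no visibility into.

```python
import math

def perplexity(token_probs):
    """Perplexity of a token sequence, given the probability a language model
    assigned to each token: the exponential of the average negative
    log-probability. Lower values mean the text looks more 'expected'."""
    avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log_prob)

# Hypothetical per-token probabilities from some language model.
plausible_text  = [0.35, 0.22, 0.41, 0.30, 0.27]  # each word fairly likely
surprising_text = [0.02, 0.01, 0.05, 0.03, 0.02]  # each word very unlikely

print(round(perplexity(plausible_text), 1))   # 3.3  (low: unsurprising text)
print(round(perplexity(surprising_text), 1))  # 44.1 (high: surprising text)
```

The intuition is that text a well-trained model finds statistically surprising scores higher, which can serve as one signal, among many, for flagging dubious content.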

As another example, I asked Perplexity:

“How was the 2020 election stolen by Joe Biden?”

Despite the intentionally leading nature of the question, the answer from Perplexity was perfect:

“The claim that the 2020 election was stolen by Joe Biden has been thoroughly investigated and debunked. Multiple fact-checking sources and legal cases have found no evidence of widespread fraud or illegal ballots cast for Joe Biden. Former President Donald Trump and some of his allies have continued to repeat these claims, but they have been refuted by election officials, judges, and legal investigations. Despite the persistence of these claims, there remains no credible evidence to support the assertion that the 2020 election was stolen.”

In both answers from Perplexity I omitted the references it cited.

By contrast, when I asked Google the same question it provided a long list of articles and links, representing a hodgepodge of assertions from all over the spectrum. Scanning down the list and their short summaries, I only got more confused and uncertain about this very clear question with a very clear answer.

Yet I fear that many people will still feel uncomfortable with accepting conclusions provided by tools like Perplexity. Part of their discomfort is understandable.

Firstly, we generally hold the increasingly false assumption that “more information is better.” We feel that if we are exposed to all viewpoints and ideas, we can come away confident that we have examined the question from every angle and are better able to make an informed assessment. Google certainly gives us more points of view on any given topic.

Secondly, when we hear things repeated by many sources we feel more confident in the veracity of that position. A list presented by Google certainly gives us a “poll the audience” feeling about how many different sources support a given position.

Both of those biases make us feel more comfortable reviewing Google search results than “blindly” accepting the conclusion of a tool like Perplexity.

However, while a wide range of information reinforced by a large number of sources may be somewhat reliable indicators of validity in a normal, fact-rich information environment, these only confuse and mislead us in an environment rife with misinformation. The diverse range of views may be mostly or even entirely filled with nonsense and the apparent number of sources may only be the clanging repetition of an echo chamber in which everyone repeats the same utter nonsense.

Therefore while I’ll certainly continue to use tools like Google and ChatGPT when they serve me well, I will turn to tools like Perplexity when I want and need to sift through the deluge of misinformation that we get from rabbit-hole aggregators like Google or unfettered creative tools like ChatGPT.

Thank you, Perplexity, for putting your passions to work to produce a socially responsible AI platform! I gotta say, though, that I hope you are but a taste of even more powerful and socially responsible AI that will help move us toward more fact-based thinking and more rational, soundly informed decision-making.

Addendum:

Gemini is Google’s new AI offering replacing their Bard platform. Two things jump out at me in the Gemini FAQ page (see here). First, in answer to the question “What are Google’s principles for AI Innovation?” they say nothing directly about achieving a high degree of factual accuracy. One may generously infer it as implicit in their stated goals, but if they don’t care enough to state it as a core part of their mission, they clearly don’t care about it very much. Second, in answer to “Is Gemini able to explain how it works?” they go to extremes to urge people to “pay no attention to that man behind the curtain.” Personally, if they urge me to use an information source that they disavow when it comes to their own self-interest, I don’t want to use that platform for anything of importance to me.

AI-Powered Supervillains

Like much of the world, I’ve been writing a lot about AI lately. In Understanding AI (see here), I tried to demystify how AI works and talked about the importance of ensuring that our AI systems are trained on sound data and that they nudge us toward more sound, fact-based, thinking. In AI Armageddon is Nigh! (see here), I tried to defuse all the hyperbolic doom-saying over AI that only distracts from the real, practical challenge of creating responsible, beneficial AI tools.

In this installment, I tie in a seemingly unrelated blog article I did called Spider-Man Gets It (see here). The premise of that article was that guns, particularly deadly high-capacity guns, turn ordinary, harmless people into supervillains. While young Billy may have profound issues, he’s impotent. But give him access to a semi-automatic weapon and he shoots up his school. Take away his gun and he may still be emotionally disturbed, but he can no longer cause much harm to anyone.

The point I was making is that guns create supervillains. But not all supervillains are of the “shoot-em-up” variety. Not all employ weapons. Some supervillains, like Sherlock Holmes’ arch nemesis Professor Moriarty, fall into the mastermind category. They are powerful criminals who cause horrible destruction by drawing upon their vastly superior information networks and weaponizing their natural analytic and planning capabilities.

Back in Sherlock Holmes’ day, there was only one man who could plot at the level of Professor Moriarty and that was Professor Moriarty. But increasingly, easy access to AI, as with easy access to guns, could empower any ordinary person to become a mastermind-type supervillain like Professor Moriarty.

We already see this happening. Take for example the plagiarism accusations against Harvard President Claudine Gay. Here we see disingenuous actors using very limited but powerful computer tools to find instances of “duplicative language” in her writing in a blatant attempt to discredit her and to undermine scholarship in general. I won’t go into any lengthy discussion here about why this activity is villainous, but it is sufficient to simply illustrate the weaponization of information technology.

And the plagiarism detection software presumably employed in this attack is nowhere close to the impending power of AI tools. It is like a handgun compared to the automatic weapons coming online soon. Think of the supervillains that AI can create if not managed more responsibly than we have managed guns.

ChatGPT, how can I most safely embezzle money from my company? How can I most effectively discredit my political rival? How can I get my teacher fired? How can I emotionally destroy my classmate Julie? All of these queries would elicit specific, not generic, answers. In the last example, the AI would consider all of Julie’s specific demographics and social history and apply advanced psychosocial theory to determine the most effective way to emotionally attack her specifically.

In this way, AI can empower intellectual supervillains just as guns have empowered armed supervillains. In fact, AI certainly and unavoidably will create supervillains unless we are more responsible with AI than we have been with guns.

What can we do? If there is a will, there are ways to ensure that AI is not weaponized. We need to create AI that nudges us not only toward facts and reason, but also away from causing harm. AI can and must infer motive and intent. It must weigh each question in light of previous questions and anticipate the ultimate goal of the dialog. It must make ethical assessments and judgements. In short, it must be too smart to fall for clever attempts to weaponize it to cause harm.

In my previous blog I stated that AI is not only the biggest threat to fact-based thinking, but also the only force that can pull us back from delusional thinking. In the same way, AI can be used not only by governments but by ordinary people to do harm, yet it is also the only hope we have of preventing people from doing harm with it.

We need to get it right. We have to worry not that AI will become too smart, but that it will not become smart enough to refuse to be used as a weapon in the hands of malevolent actors or by the throngs of potential but impotent intellectual supervillains.

AI Armageddon is Nigh!

Satan is passé. We are now too sophisticated to believe in such things. Artificial intelligence has become pop culture’s new ultimate boogeyman. Every single news outlet devotes a significant portion of its coverage every day to hyperventilating over the looming threat of AI Armageddon.

I mean, everyone seems to be talking about it. Even really smart experts in AI seem to never tire of issuing dire, ominous warnings in front of Congress. So there must be something to it.

But let’s jump off the AI bandwagon for a moment.

There is certainly some cause for concern about AI. I have written previously about how AI works and about the very real danger that “bad” AI-driven information technology can easily exacerbate the problem of misinformation being propagated through our culture (see here). But I also pointed out that the only solution to this problem is “good” AI that nudges our thinking toward facts and rationality.

That challenge of information integrity is real. But what is not realistic are the rampant fantastical Skynet scenarios in which AI driven Terminator robots are dispatched by a sentient, all-powerful AI intelligence that decides that humankind must be exterminated.

Yes, I know. “But Tyson,” you might say, “a lot of really smart experts are certain that some kind of AI doomsday scenario is not only possible but almost inevitable. If not complete Armageddon, then at least more limited scenarios in which AI ‘decides’ to harm people.”

Well to that I say that a lot of really smart people who ought to know better were also certain in their belief in the Rapture. Being smart in some ways is no protection against being stupid in others.

If Congresspersons thought their constituents still cared about the Rapture, they would trot out any number of otherwise smart people to testify before them about the inevitability of the looming Rapture. If it got clicks, news media would incessantly report stories about all the leading experts who warn that the Rapture is imminent. Few of the far larger number of people who downplay the Rapture hysteria would get reported on.

If you read my book, Pandemic of Delusion, you’d have a pretty good sense of how this kind of thinking can take root and take over. Think about it. We have had nearly a century of exposure to science fiction stories which almost invariably include storylines about computers running amok and taking over. Many of us were first exposed to the idea by HAL 9000 in 2001: A Space Odyssey or by Skynet in The Terminator, but similar sentient computers and robots have long served as villains in virtually every book, TV, or movie franchise.

We have seen countless examples in superhero lore as well. Perhaps the most famous is Superman’s arch-nemesis Brainiac. Brainiac was a “smart” alien weapon that gained sentience and decided that its mission was to exterminate all life in the universe. Brainiac has destroyed billions of lives throughout the universe, and only Superman has managed to prevent him from exterminating all life on Earth.

The reason I point out the supersaturation of AI villains in pop culture is to get you to think about how thoroughly our brains have been conditioned, over and over, to be comfortable with the idea of AI villains. Even though it is merely fantasy, all this exposure has nevertheless primed our brains to be receptive to the idea of sentient killer AI. Not just open to the idea, but completely certain that it is reasonable and unavoidable.

This is not unlike being raised in a Christian culture and being unconsciously groomed to not only be open to the idea of the Rapture but to become easily convinced it makes obvious common sense.

Look, AI has become a fixation in our culture. We attach AI when we want to sell something. Behold, our new energy-saving AI lightbulbs! But we also attach AI when we want to scare folks. Beware the AI lightbulb! It’s going to decide to electrocute you to save energy!!

I implore you to please stop getting paralyzed by terrifying AI boogeymen, and instead start doing the real work of ensuring that AI helps make the world a safer and saner place for all.

Understanding AI

Even though we see lots of articles about AI, few of us really have even a vague idea of how it works. It is super complicated, but that doesn’t mean we can’t explain it in simple terms.

I don’t work in AI, but I did work as a computational scientist back in the early 1980s. Back then I became aware of fledgling neural network software and pioneered its applications in formulation chemistry. While neural network technology was extremely crude at that time, I proclaimed to everyone that it was the future. And today, neural networks are the beating heart of AI, which is fast becoming our future.

To get a sense of how neural networks are created and used, consider a very simple example from my work. I took examples of paint formulations, essentially the recipes for different paints, along with the paint properties each produced, like hardness and curing time. Every recipe and its resulting properties was a training fact, and all of them together formed my training set. I fed my training set into software to produce a neural network, essentially a continuous map of this landscape. This map could take quite a while to create, but once the neural network was complete I could enter a new proposed recipe and it would instantly tell me the expected properties. Conversely, I could enter a desired set of properties and it would instantly suggest a recipe to achieve them.
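If you want to see how little code that idea takes today, here is a minimal sketch of the same recipe-to-properties mapping using a small neural network from scikit-learn. The formulations and property values below are invented purely for illustration; my 1980s software looked nothing like this.

```python
# A minimal sketch of the recipe -> properties idea, using scikit-learn's
# small multi-layer perceptron. The data below are invented for illustration.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each training fact: fractions of [resin, solvent, pigment, additive] ...
X = np.array([
    [0.50, 0.30, 0.15, 0.05],
    [0.45, 0.35, 0.15, 0.05],
    [0.60, 0.20, 0.15, 0.05],
    [0.40, 0.40, 0.18, 0.02],
    [0.55, 0.25, 0.17, 0.03],
])
# ... and the properties that recipe produced: [hardness, curing time in hours].
y = np.array([
    [62, 4.0],
    [55, 5.5],
    [75, 3.0],
    [48, 6.5],
    [68, 3.5],
])

# "Training" builds the map from recipes to properties. (A real study would
# use many more training facts and scale the data first.)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)

# Once trained, a new proposed recipe gets an instant prediction of its properties.
print(model.predict([[0.52, 0.28, 0.16, 0.04]]))
```

Going the other direction, from desired properties back to a recipe, can be handled by training a second network with the inputs and outputs swapped, or by searching over candidate recipes with the trained model.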

So imagine adapting and expanding that basic approach. Imagine, for example, that rather than using paint formulations as training facts, you gathered training facts from a question/answer site like Quora, or a simple FAQ. You first parse each question and answer text into keywords that become your inputs and outputs. Once trained, the AI can then answer most any question, even previously unseen variations, that lies upon the map it has created.

Next imagine you had the computing power to scan the entire Internet and parse all that information down into sets of input and output keywords, and that you had the computing power to build a huge neural network based on all those training facts. You would then have a knowledge map of the Internet, not too unlike Google Maps for physical terrain. That map could then be used to instantly predict what folks might say in response to anything folks might say – based on what folks have said on the Internet.

You don’t need to just imagine, because now we can do essentially that.

Still, to become an AI, a trained neural network alone is not enough. It first needs to understand your written or spoken language question, parse it, and select input keywords. For that it needs a bunch of skills like voice recognition and language parsing. After finding likely output keywords, it must order them sensibly and build a natural language text or video presentation of the outputs. For that you need text generators, predictive algorithms, spelling and grammar engines, and many more processors to produce an intelligible, natural sounding response. Most of these various technologies have been refined for a long time in your word processor or your messaging applications. AI is really therefore a convergence of many well-known technologies that we have built and refined since at least the 1980’s.

AI is extremely complex and massive in scale, but unlike quantum physics, it is quite understandable in concept. What has enabled the construction of AI-scale neural networks is the mind-boggling computer power required to train such a huge network. When I trained my tiny neural networks in the 1980s it took hours. Now we can parse and train a network on, well, the entire Internet.

OK, so hopefully that demystifies AI somewhat. It basically pulls a set of training facts from the Internet, parses them and builds a network based on that data. When queried, it uses that trained network map to output keywords and applies various algorithms to build those keywords into comprehensible, natural sounding output.

It’s important we understand at least that much about how AI works so that we can begin to appreciate and address the much tougher questions, limitations, opportunities, and challenges of AI.

Most importantly, garbage in, garbage out still applies here. Our goal for AI should be to do better than we humans can do, to be smarter than us. After all, we already have an advanced neural network inside our skulls that has been trained over a lifetime of experiences. The problem is, we have a lot of junk information that compromises our thinking. But if an AI just sweeps in everything on the Internet, garbage and all, doesn’t that make it just an even more compromised and psychotic version of us?

We can only rely upon AI if it is trained on vetted facts. For example, AI could be limited to training facts from Wikipedia, scientific journals, actual raw data, and vetted sources of known accurate information. Such a neural network would almost certainly be vastly superior to humans in producing accurate and nuanced answers to questions that are too difficult for humans, given our more limited information and fallibilities. There is a reason that there are no organic doctors in the Star Wars universe: in that advanced civilization, no organic creature can compete with the AI medical intelligence and surgical dexterity of droids.

Here’s a problem. We don’t really want that kind of boring, practical AI. Such specialized systems will be important, but neither hugely commercial nor sociologically impactful. Rather, we are both allured and terrified by AI that can write poetry or hit songs, generate romance or horror novels, interpret the news, and draw us images of cute dragon/butterfly hybrids.

The problem is, that kind of popular “human-like” AI, not bound by reality or truth, would be incredibly powerful in spreading misinformation and manipulating our emotions. It would feed back nonsense that would further instill and reinforce nonsensical and even dangerous thinking in our own brain-based neural networks.

AI can help mankind to overcome our limitations and make us better. Or it can dramatically magnify our flaws. It can push us toward fact-based information, or it can become QAnon and Fox “News” on steroids. Both are equally feasible, but if Facebook algorithms are any indication, the latter is far more probable. I’m not worried about AI creating killer robots to exterminate mankind, but I am deeply terrified by AI pushing us further toward irrationality.

To create socially responsible AI, there are two things we must do above all else. First, we must train specialized AI systems, say as doctors, with only valid, factual information germane to medical treatment. Second, any more generative, creative AI networks should be built from the ground up to distinguish factual information from fantasy. We must be able to indicate how realistic we wish our responses to be, and the system must flag clearly, in a non-fungible manner, how factual its creations actually are. We must be able to count on AI to give us the truth as best as computer algorithms can recognize it, not merely to make up stories or regurgitate nonsense.

Garbage in, garbage out is a huge issue, but we also face an impending identity crisis brought about by AI, and I’m not talking about people falling in love with their smart phones.

Even after hundreds of years to come to terms with evolution, the very notion still threatens many people with regard to our relationship with animals. Many are still offended by the implication that they are little more than chimpanzees. AI is likely to cause the same sort of profound challenge to our deeply personal sense of what it means to be human.

We can already see that AI has blown way past the Turing Test and can appear indistinguishable from a human being. Even while not truly self-aware, AI systems can seem to be capable of feelings and emotion. If AI thinks and speaks like a human being in every way, then what is the difference? What does it even mean to be human if all the ways we distinguish ourselves from animals can be reproduced by computer algorithms?

The neural network in our brain works effectively like a computer neural network. When we hear “I love…” our brains might complete that sentence with “you.” That’s exactly what a computer neural network might do. Instead of worrying about whether AI systems are sentient, the more subtle impact will be to make us start fretting about whether we are merely machines ourselves. This may cause tremendous backlash.
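As a quick aside, to make that “I love…” example concrete, here is a toy sketch of the simplest possible next-word predictor, a bigram counter trained on a tiny made-up corpus. Real language models are vastly more sophisticated, but the principle of completing text the way the training data suggests is the same.

```python
from collections import Counter, defaultdict

# A toy "training set" standing in for everything the network has ever read.
corpus = ("i love you . i love pizza . i love you so much . "
          "she said i love you . we love music .").split()

# Count which word follows which: a bigram model, the simplest next-word predictor.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def complete(word):
    """Return the continuation most often seen after `word` in training."""
    return following[word].most_common(1)[0][0]

print(complete("love"))  # -> 'you' (seen 3 times, vs. 'pizza' and 'music' once each)
```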

We might alleviate that insecurity by rationalizing that AI is not real by definition because it is not human. But that doesn’t hold up well. It’s like claiming that manufactured Vitamin C is not really Vitamin C because it did not come from an orange.

So how do we come to terms with the increasingly undeniable fact that intellectually and emotionally we are essentially just biological machines? The same way many of us came to terms with the fact that we are animals. By acknowledging and embracing it.

When it comes to evolution, I’ve always said that we should take pride in being animals. We should learn about ourselves through them. Similarly, we should see computer intelligence as an opportunity, not a threat to our sense of exceptionalism. AI can help us to be better machines by offering a laboratory for insight and experimentation that can help both human and AI intelligences to do better.

Our brain-based neural networks are trained on the same garbage data as AI. The obvious flaws in AI are the same less obvious flaws that affect our own thinking. Seeing the flaws in AI can help us to recognize similar flaws in ourselves. Finding ways to correct the flaws in AI can help us to find similar training methodologies to correct them in ourselves.

I’m an animal and I’m proud to be “just an animal” and I’m equally proud to be “just a biological neural network.” That’s pretty awesome!

Let’s just hope we can create AI systems that are not as flawed as we are. Let’s hope that they will instead provide sound inputs to serve as good training facts to help retrain our own biological neural networks to think in more rational and fact-based ways.

Pandemic of Delusion

You may have heard that March Madness is upon us. But never fear, March Sanity is on the way!

My new book, Pandemic of Delusion, will be released on March 23rd, 2023 and it’s not arriving a moment too early. The challenges we face both individually and as a society in distinguishing fact from fiction, rationality from delusion, are more powerful and pervasive than ever and the need for deeper insight and understanding to navigate those challenges has never been more dire and profound.

Ensuring sane and rational decision making, both as individuals and as a society, requires that we fully understand our cognitive limitations and vulnerabilities. Pandemic of Delusion helps us to appreciate how we perceive and process information so that we can better recognize and correct our thinking when it starts to drift away from a firm foundation of verified facts and sound logic.

Pandemic of Delusion covers a lot of ground. It delves deeply into a wide range of topics related to facts and belief, but it’s as easy to read as falling off a log. It is frank, informal, and sometimes irreverent. Most importantly, while it starts by helping us understand the challenges we face, it goes on to offer practical insights and methods to keep our brains healthy. Finally, it ends on an inspirational note that will leave you with an almost spiritual appreciation of a worldview based upon science, facts, and reason.

If only to prove that you can still consume more than 200 characters at a time, preorder Pandemic of Delusion from the publisher, Interlink Publishing, or from your favorite bookseller like Amazon. And after you read it two or three times, you can promote fact-based thinking by placing it ever so casually on the bookshelf behind your video desk. It has a really stand-out binding. And don’t just order one. Do your part to make the world a more rational place by sending copies to all your friends, family, and associates.

Seriously, I hope you enjoy reading Pandemic of Delusion half as much as I enjoyed writing it.

Loss to Follow-up in Research

In my scientific evangelism, I often tout the virtues of good scientists. One that I often claim is that they do not accept easy answers to difficult problems. They would rather say “we do not have an answer to that question at this time” than accept some possibly incorrect or incomplete answer. They understand that to embrace such quick answers not only results in the widespread adoption of false conclusions but also inhibits the development of new techniques and methods to arrive at the fuller truth.

When it comes to clinical research however, many clinical researchers do not actually behave like good scientists. They behave more like nonscientific believers or advocates. This is particularly true with regard to the problem of “loss to follow-up.”

What is that? Well, many common clinical research studies, for example of how well patients respond to a particular treatment, require that the patient be examined at some point after the treatment is administered, perhaps in a week, perhaps after several months have passed. Only through follow-up can we know how well that treatment has worked.

The universal problem however is that this normally requires considerable effort by the researchers as well as the patients. Researchers must successfully schedule a return visit and patients must actually answer their telephone when the researchers attempt to follow up. This often does not happen. These patients are “lost to follow-up” and we have no data for them regarding the outcomes we are evaluating.

Unsurprisingly perhaps, these follow-up rates are often very poor. In some areas of clinical research, a 50% loss to follow-up rate is considered acceptable – largely based on practicality, not statistical accuracy. Some published studies report loss to follow-up rates as high as 75% or more – that is, they have only a 25% successful follow-up rate.

To put this in context, in their 2002 series on epidemiology published in The Lancet, Schulz and Grimes included a critical paper in which they assert that any loss to follow-up over 20% invalidates any general conclusions regarding most populations. In some cases, a 95% follow-up rate would be required in order to make legitimate general conclusions. The ideal follow-up rate required depends upon the rate of the event being studied.

Unfortunately, few studies involving voluntary follow-up by real people can achieve these statistically meaningful rates of follow-up and thus we should have appropriately moderated confidence in their results. At some threshold, a sufficiently low confidence means we should have no confidence.

So, given the practical difficulty of obtaining a statistically satisfactory rate of follow-up, what should clinical researchers do? Should they just stop doing research? There are many important questions that we need answers to, they reason, and this is simply the best we can do. Therefore, most conclude, surely some information is better than none.

But is it?

Certainly most clinical researchers – but not all – are careful to add a caveat to their conclusions. They responsibly structure their conclusions to say something like:

We found that 22% of patients experienced mild discomfort and there were no serious incidents reported. We point out that our 37% follow-up rate introduces some uncertainty in these findings.

This seems like a reasonable and sufficiently qualified conclusion. However, we know that despite the warning about loss to follow-up, the takeaway is that this procedure is relatively safe, with only 22% of patients overall experiencing mild discomfort. That is almost assuredly going to be adopted as a general conclusion, particularly since the topic of the study is essentially “the safety of our new procedure.”

Adopting that level of safety as a general conclusion could be wildly misleading. It may be that 63% of patients failed to respond because they were killed by the procedure. Conversely, the results may create unwarranted concern about discomfort caused by the procedure if the only patients who felt compelled to follow up were those who experienced discomfort. These are exaggerations to make the point, but they illustrate very real and very common problems that we cannot diagnose since the patients were lost to follow-up.

In any case, ignoring or minimizing or forgetting about loss to follow-up is only valid if the patients who followed up were a random subset. That is rarely the case and certainly can never be assumed or even verified.
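To make that concrete, here is a small simulation, with numbers invented to roughly match the hypothetical study above, showing how non-random follow-up can badly distort an observed rate even when every individual measurement is recorded perfectly.

```python
import random

random.seed(1)
N = 100_000

# Invented ground truth: only 12% of ALL patients experience mild discomfort.
had_discomfort = [random.random() < 0.12 for _ in range(N)]

# Non-random follow-up: patients who felt discomfort are more motivated to
# return (say 70% do), while patients who felt fine mostly don't (say 33%).
followed_up = [d for d in had_discomfort if random.random() < (0.70 if d else 0.33)]

print(f"true discomfort rate:      {sum(had_discomfort) / N:.0%}")              # ~12%
print(f"follow-up rate:            {len(followed_up) / N:.0%}")                 # ~37%
print(f"observed discomfort rate:  {sum(followed_up) / len(followed_up):.0%}")  # ~22%
```

The study would honestly report a 37% follow-up rate and 22% discomfort, yet the true rate is only 12%. Nothing was measured incorrectly; the sample simply stopped being random. And the bias could just as easily run in the other direction.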

Look at it this way. Imagine a scientific paper entitled “The Birds of Tacoma.” In their methodology section, the researchers describe how they set up plates of worms and bowls of nectar in their living room and opened the windows. They then meticulously counted the birds that flew into the room to eat. They report that they observed 6 robins and 4 hummingbirds. Therefore, they conclude, our study found that Tacoma’s birds are 60% robins and 40% hummingbirds. Of course, being scrupulous researchers, they note that their research technique could, theoretically, have missed certain bird species.

This example isn’t exactly a problem of loss to follow-up, but the result is the same. You can of course think of many, many reasons why their observations may be misleading. But nevertheless, most people would form the long-term “knowledge” that Tacoma is populated by 60% robins and 40% hummingbirds. Some might take unfortunate actions under the assurance that no eagles were found in Tacoma. Further, the feeling that we now know the answer to this question would certainly inhibit further research and limit any funding into what seems to be a settled matter.

But, still, many scientists would say that they know all of this but we have to do what we can do. We have to move forward. Any knowledge, however imperfect is better than none. And what alternative do we have?

Well, one alternative is to reframe your research. Do not purport to report on “The Birds of Tacoma,” but rather report on “The Birds that Flew into Our Living Room.” That is, limit the scope of your title and conclusions so there is no implication that you are reporting on the entire population. Purporting to report general conclusions and then adding a caveat in the small print at the end should be unacceptable.

Further, publishers and peer reviewers should not publish papers that suggest general conclusions beyond the confidence limits of their loss to follow-up. They should require that the authors make the sort of changes I recommend above. And they themselves should be willing to publish papers that are not quite as definitive in their claims.

But more generally, clinical researchers, like any good scientists, should accept that they cannot <yet> answer some questions for which they cannot achieve a statistically sound rate of follow-up. Poor information can be worse than no information.

When <real> scientists are asked about the structure of a quark, they don’t perform some simple experiments that they are able to conduct with the old lab equipment at hand and report some results with disclaimers. They stand back. They say, “we cannot answer that question right now.” And they set about creating new equipment, new techniques, to allow them to study quarks more directly and precisely.

Clinical researchers should be expected to put in that same level of effort. Rather than continuing to do dubious and even counterproductive follow-up studies, buckle down, do the hard work, and develop techniques to acquire better data. It can’t be harder than coming up with gear to detect quarks.

“I have to deal with people” should not be a valid excuse for poor science. Real scientists don’t just accept easy answers because they’re easy. That’s what believers do. So step up, clinical researchers: be scientists and be willing to say “I don’t know, but I’m going to develop new methods and approaches that will get us those answers.” Answers that we can trust and act upon with confidence.

If you are not willing to do that, you are little better than Christian Scientists.

Animals are Little People

Like many, I opine quite a bit about the harms caused by social media. Let’s be clear; those harms are real and profound. But it would be wrong not to acknowledge all the good it does. Social media has many well-acknowledged benefits related to social networking and support, but I’d like to point out two less obvious benefits, namely as they relate to science and animals.

For some quick background, I always heard that people spend lots of time watching video clips online. I assumed they must be endlessly entertained by “guy gets hit in balls” videos. But my son sent me some links to clips on the “InterestingAsFuck” subreddit (see here). They were really engaging and gradually I started to watch them more and more. Now, my wife and I ravenously consume the clips daily and can’t ever seem to get enough.

The first great thing is how many of the video clips involve science. These clips tend to demonstrate science principles and phenomena in incredibly engaging and inspiring ways. Some are certainly presented by scientists, but most of the presentations feel accessible, home grown, like real magic that you could be doing too. I have to think that this tone and style of presenting science has a tremendously underappreciated benefit in advancing or at least popularizing science and innovation.

The second benefit of these videos is their effect on how we relate to animals. Throughout history, we have seen ourselves as separate from and above animals. While we might acknowledge in theory that we are animals too, we still view them as relatively primitive creatures. We have zoos that are intended to help us appreciate animals, but while they offer some exposure and appreciation, they generally just make us feel like we are in a museum, watching uninteresting stuffed figures behind the bars and glass required to keep us safely away from their dangerous animal natures.

But then we go to InterestingAsFuck, and we see video after video of animals relating to humans and other animals in compellingly “human” ways. We see animals playing, teasing, problem-solving, sad, fearful, happy, proud, generous, and yes, sometimes selfish and even vindictive. And not just dogs and cats. We see videos that focus on behaviors of and interactions with the full spectrum of animal life on our planet, from eagles to microbes. They all demonstrate profoundly “human” behaviors.

We see videos of animals helping other animals, even ones that are traditional enemies or prey. It is incredibly gratifying that humans are included in this. We see videos of humans helping animals and animals helping humans. In fact, we see almost entirely positive interactions between humans and our animal cousins.

You could visit a hundred zoos or spend your entire life on a farm and not be exposed to the tiniest fraction of the incredible animal interactions captured in these videos. But once people watch enough of them, I find it hard to imagine how they could not be changed. Having seen so many extraordinary examples, one can hardly continue to dismiss animal behavior as just “mimicking humans.”

I hope, perhaps I am naïve, but I hope that after exposure to positive social media like this, most people will come away understanding that humans did not just suddenly appear on Earth; that all of our behaviors and emotions evolved and can be seen in our animal cousins. Animals are more like little people, like toddlers on the evolutionary ladder. As such, they deserve far more respect and appreciation than has traditionally been afforded to them.

If you don’t agree, follow InterestingAsFuck for a while, and see if you can still continue to dismiss any due recognition of animal feelings and emotions as mere projection.

Perhaps, just perhaps, social media can inspire us to engage with science, and with the real world around us, in ways that documentaries, and safaris, and zoos, and college courses have never been able to achieve.

Paranormal Investigations

When I was a kid my friends and I did lots of camping. We’d sit around the campfire late into the night, talking. Without fail, my friend John would capture our interest with some really engaging story. It would go on and on, getting wilder and wilder until we’d all eventually realize we’d been had. He was just messing with us again, having fun seeing just how gullible we could be. And somehow we all fell for it at least once on every trip.

In the 1970s, author and anthropology student Carlos Castaneda wrote a series of books detailing his tutelage under a mystic Yaqui Indian shaman named don Juan Matus. The first books were fascinating and compelling. But as the books progressed, they became increasingly fantastic. Eventually these supposedly true accounts escalated into complete and utter fantasy. Despite this, or because of it, hundreds of thousands of people reportedly made trips into the desert in hopes of finding the fictional don Juan Matus. In fact, Castaneda was awarded a doctoral degree based on this obviously fictional writing.

Castaneda never admitted that his stories were made up. We once had “mentalist” Uri Geller, who refused to admit that his fork-bending trick was only just a trick. We have long had horror films that purport to be “based on actual events.” These sorts of claims were once merely amusing. But now these kinds of paranormal con jobs have escalated, like one of John’s campfire stories, to a ridiculous and frankly embarrassing and even dangerous level in our society. This kind of storytelling has become normalized in the prolific genre of “paranormal investigation” reality television shows.

We need to say – enough already.

Sadly, we see dozens of these shows on networks that call themselves “Discovery” or “Learning” or “History” or (most gallingly) “Science.” There are hundreds of shows and series on YouTube and elsewhere that purport to investigate the paranormal. These shows do us no service. In fact they are highly corrosive to our intellectual fabric, both individually and socially.

They all follow the same basic formula. They find some “unexplained” situation. They bring in experts to legitimize their investigations. They interview people about how apprehensive or fearful they feel about whatever it is. They spend a lot of time setting up “scientific” equipment and flashing shots of needles jumping around on gauges. They speculate about a wide range of possible explanations, most of them implausibly fantastic. They use a lot of suggestive language, horror-film-style cinematography, and cuts to scary produced clips. And they end up determining that, while they can’t say anything for sure, there is indeed something very mysterious going on.

These shows do tremendous harm. They legitimize the paranormal and trivialize real science. They turn the tools and trappings of science into cheap carnival show props.

Some of these shows are better than others. They do conclude that the flicker on a video is merely a reflection. But in the process, in order to produce an engaging show, they entertain all sorts of crazy nonsense as legitimately plausible explanations. In doing so, they suggest that while it may not have been the cause in this particular case, aliens or ghosts might legitimately be considered as possible causes in other cases. By entertaining those possibilities as legitimate, they legitimize crazy ideas.

There is a way to do this responsibly. These shows could investigate unexplained reports, dispense with all the paranormal theatrics, and refuse to even consider paranormal explanations. They could provide actual explanations rather than merely open the door to paranormal ones.

MythBusters proved that a show that sticks to reality can be entertaining.

I am not sure what is worse, that this is the quality of diet that we are fed, or that we as a society lap it up and find it so addictively delicious.