Tag Archives: AI

Make AI Why Your New Pastime!

When Ph.D. candidates near the end of their degree programs, they face a major hurdle: the qualifying exam, or oral defense. This is standard for most math and hard science fields, but is also often required in disciplines like history and English literature. During the defense, the candidate stands before a panel of professors, answers questions about their thesis, and then faces a battery of general questions designed to assess their depth and breadth of knowledge.

One tall tale about these oral defenses is the “Blue Sky” story. In these tellings, the professors open with a deceptively simple question like “Why is the sky blue?” After the student answers, they respond only with “Why?” After each further answer, they again ask “Why?”

This isn’t just a campus myth. A good friend of mine, a Ph.D. physicist, was subjected to exactly such a grilling, starting with “Why is the sky blue?” He told me that over the course of the next hour he ended up drawing upon a far wider and deeper range of physics knowledge than he ever realized he knew, all in response to repeated questions consisting of nothing more than “Why?”

This is a game that confounds and exasperates parents all the time. We say something to our toddler, and they ask “why?” When we answer, they again say “why?” Parents usually give up after perhaps three iterations. A Ph.D. candidate would get through at least a few more iterations within their field of specialization.

It makes me wonder whether a “Why-Q” wouldn’t make a great intelligence quotient for AI. If a normal parent can score 3 and a well-prepared Ph.D. candidate might score 6, what would AI score? Probably a much higher count reflecting deeper knowledge, and its breadth of knowledge would be essentially unlimited.

Given that we now have essentially Ph.D.-level intelligence in every field right at our beck and call 24/7 through AI, I want to suggest a game I call “AI Why” that you can play whenever you like. Take a break from endless YouTube or TikTok videos. Stop reading increasingly crappy articles because you’ve run out of anything actually worthwhile. Instead, open your preferred AI app and pass the time playing AI Why.

Ask AI any question, serious or whimsical, even something like “Why is the sky blue?” Read over the answer, and then ask a follow-up question. You can dive deeper into the subject or go off on a different tangent. And you can continue as long as you like. AI will never think your question is silly or get sick of your questions, and it will always give you an interesting answer.

This is very different from simply surfing the Internet. Unlike with Google or even Wikipedia, you are not limited to clicking through a fixed set of links produced by algorithms designed to manipulate you. AI interaction is conversational. You can take your AI conversation anywhere you like and explore the vastness of human knowledge rather than get funneled down into rabbit holes.

Of course the AI system you use does matter. I would not go near anything under the control of Elon Musk for example. But not all AI systems are configured so that all paths lead you to the oppression of South African Whites. I use Perplexity (see here) because they are strongly dedicated to providing sound, fact-based information.

The other great thing about Perplexity is that it remembers threads of dialogue. That means I can ask Perplexity about a topic, and then come back to that thread days or months later to continue the discussion.

Just to give you a flavor of this great pastime, I asked Perplexity “Why is the sky blue?” It gave me a lot of interesting information, and I followed up by asking “Why does Rayleigh scattering occur?” After reading more about that, I asked “Why do refractive indices differ?” The answer led me to ask “Why is light an electric field?” And that led me to “Why is the self-propagating electromagnetic field of light not perpetual motion?”

To explain that last question a bit: light propagates forever in a vacuum. It seems counterintuitive that something moving forever is not perpetual motion by definition. But Perplexity clearly explained that no, light may move forever, but it does no work. That led me to ask the gotcha question, “How can electromagnetic radiation undergo self-propagation between electrical and magnetic fields with no loss of energy?”

At that point, it took me into Maxwell’s equations and lost me.

This hopefully illustrates how you can go as deep as you like in your conversations with AI. Or, I could have taken it down another path that led to the family life of Amedeo Avogadro. AI will accompany you anywhere you want to go. (And no, that is not to imply that it just agrees with anything you say. It does not.)

So, my message is to become discussion buddies with your genius AI friend. Learn from it. Expand your brain and have fun doing so. Don’t waste the precious opportunity we have to so easily learn almost anything about almost anything.

Make AI Why one of your favorite pastimes!

Hyperbolic Headlines are Destroying Journalism!

In our era of information overload, most readers consume their news by scanning headlines rather than through any careful reading of articles. A study by the Media Insight Project found that six in ten people acknowledge that they have done nothing more than read news headlines in the past week (Full Fact). Consuming news in this manner can make one less, rather than more, well-informed.

Take, for instance, the headline from a major online newspaper: “Scientists Warn of Catastrophic Climate Change by 2030.” The article itself presents a nuanced discussion about potential climate scenarios and the urgent need for policy changes. However, the headline evokes a sense of inevitability and immediate doom that is not supported by the article’s content. These kinds of headlines invoke fear and urgency to drive traffic at the expense of accurately representing what is really in the article.

Such all-too-typical hyperbolic headlines instill dangerously misleading and lasting impressions. For example, a headline that screams “Economy in Freefall: Recession Imminent” might actually precede an article discussing economic indicators and expert opinions on potential downturns. Misleading headlines have an outsized effect, creating skewed perceptions that can negatively influence public opinion and decision-making.

It often seems that headline writers have not read the articles at all. Moreover, they change them frequently, sometimes several times a day, to drive more traffic by pushing different emotional buttons.

Particularly egregious examples of this can be found in the political arena. During election seasons, headlines often lean toward sensationalism to capture attention. A headline like “Candidate X Involved in Major Scandal” may refer only to a minor, resolved issue, but the initial shock value sticks with readers. It unfairly delegitimizes the target of the headline. The excuse that the article itself is fair and objective does not mitigate the harm done by these headlines because, as we said, most people only read the headlines. And if they do skim the article, they often do so in a cursory attempt to learn more about the salacious headline. If the article does not immediately satisfy that expectation, they quickly become bored and never bother to read the more reasoned presentation within.

This headline-driven competition for clicks has led to a landscape where accuracy and depth are sacrificed for immediacy and sensationalism. Headlines are crafted to evoke emotional responses, whether through fear, anger, or salaciousness, rather than to inform. This shift has profound implications. When readers base their understanding of complex issues on superficial and often misleading headlines, they are ill-equipped to engage in meaningful discourse or make informed decisions.

Furthermore, the impact of misleading headlines extends beyond individual misinformation. It contributes to a polarized society where people are entrenched in echo chambers, each side reinforced by selective and often exaggerated information communicated to them through attention-grabbing headlines. This environment fosters division and reduces the opportunity for constructive dialogue, essential for a healthy democracy (Center for Media Engagement).

Consider the headline “Vaccines Cause Dangerous Side Effects, Study Shows.” The article might detail a study discussing the rarity of severe side effects and overall vaccine efficacy, but the headline fuels anti-vaccine sentiment by implying a more significant threat. Such headlines not only mislead but also exacerbate public health challenges by spreading fear and misinformation.

Prominent journalists like Margaret Sullivan of the Washington Post and Jay Rosen of NYU have critiqued the increasing prevalence of clickbait headlines, noting that they often prioritize sensationalism over accuracy, thereby undermining the credibility of journalism and contributing to public misinformation. Sullivan has emphasized the ethical responsibility of journalists to ensure that headlines do not mislead, as they serve as the primary interface between the news and its audience.

Unfortunately I suspect that journalists typically have little to no say in the headlines that promote their articles. The authors and editors should reassert control.

Until and unless journalists start acting like responsible journalists with regard to sensational headlines, readers should be wary of headlines that seem too dramatic, overstated, or that attempt to appeal to emotions.

And this is not a problem limited to tabloid journalism… we are talking about you, New York Times! Most people are already skeptical about headlines published in the National Enquirer. Tabloid headlines are not actually as serious a problem as the “credible” headlines put forth by the New York Times and other publications that still benefit from an assumption of responsible journalism.

The current trend of sensationalist online newspaper headlines is a disservice to readers and society. The practice prioritizes clicks over clarity, hyperbole over honesty, and in doing so, contributes to a misinformed and divided public. It is imperative for both readers and journalists to advocate for a return to integrity in news reporting – particularly in the headlines they put out. Accurate, informative headlines are not just a journalistic responsibility but a societal necessity to ensure an informed and engaged populace.

Footnote: Did I fool you??

Does this article sound different than my usual blog articles? Is it better or worse or just different? This was actually an experiment on my part. I asked ChatGPT to write this article for me. I offer it to you with minimal editing as a demonstration of what AI can do.

I’m interested in hearing what you think in the comments. Should I hang up my pen and leave all the writing to AI?

The Right Direction for AI

In this blog and in my book, Pandemic of Delusion, I have focused a lot on AI and particularly on its tremendous potential to shape our thinking for better or for worse. While AI represents a frighteningly powerful technology for spreading lies and misinformation, it is also the greatest hope we have to combat misinformation and overcome our own cognitive vulnerabilities.

I’ve often repeated that AI can, should, and must be implemented responsibly in a way that nudges us toward clarity, facts, and reason and away from confusion, misinformation, and lies.

But I expressed that sentiment based on limited experience. I assumed that AI actually can be implemented using algorithms that can fairly and accurately assess the factual accuracy of a proposition. It was also a somewhat aspirational expression of hope and faith that responsible technology companies really will strive to do just that.

Today I feel a little bit more confident that my faith, hope, and trust in AI to make things better was justified. I’ve been playing around with an AI-based search engine called Perplexity (see here). Perplexity utilizes AI language models not only to produce pure search results like Google, but to assimilate, assess, and present a coherent answer to your question. It’s similar to ChatGPT, but it draws upon live information and provides source citations for verification.

But there’s another more important difference. While ChatGPT is well-suited to conversational dialogues and creative activities, Perplexity is more focused on providing factual and reliably-supported information, analyses, and conclusions.

I asked Perplexity:

“How specifically does Perplexity prevent feeding me misinformation?”

What it told me was:

“Perplexity is a measure used in natural language processing to evaluate the quality of the language models. Misinformation tends to have higher perplexity compared to truthful statements, and this characteristic can be leveraged to detect and debunk misinformation. By analyzing the perplexity of language model outputs, one can identify deceptive content and distinguish it from reliable sources based on the cognitive effort and appeal to emotions. Therefore, perplexity can be used as a tool to help prevent the propagation of misinformation by identifying content that exhibits characteristics associated with deceptive or unreliable information.”

This is really, really heartening and encouraging. While a perplexity measure is only a start and is certainly not the only approach that AI can use to nudge us toward facts and reason, it demonstrates that algorithms can be implemented to assess the quality of information. Further, Perplexity.ai proves there are people and companies who are able and willing to provide incredibly useful tools in a responsible fashion.
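For the technically curious, the perplexity measure that answer describes has a simple definition: given the probabilities a language model assigns to each successive token in a passage, perplexity is the exponential of the average negative log-probability. Text the model finds predictable scores low; surprising or garbled text scores high. Here is a minimal sketch in Python; the token probabilities are invented for illustration, not drawn from any real model:

```python
import math

def perplexity(token_probs):
    """Perplexity = exp(average negative log-probability).

    token_probs holds the probability a language model assigned to
    each successive token in a passage. Lower perplexity means the
    text looked more "expected" to the model.
    """
    avg_neg_log = -sum(math.log(p) for p in token_probs) / len(token_probs)
    return math.exp(avg_neg_log)

# Hypothetical numbers for illustration only: a model fairly
# confident in a mundane factual sentence...
print(perplexity([0.4, 0.6, 0.5, 0.7, 0.5]))      # ~1.9 (low)
# ...versus one repeatedly surprised by an implausible claim.
print(perplexity([0.05, 0.1, 0.02, 0.08, 0.04]))  # ~19.9 (high)
```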

As another example, I asked Perplexity:

“How was the 2020 election stolen by Joe Biden?”

Despite the intentionally leading nature of the question, the answer from Perplexity was perfect:

“The claim that the 2020 election was stolen by Joe Biden has been thoroughly investigated and debunked. Multiple fact-checking sources and legal cases have found no evidence of widespread fraud or illegal ballots cast for Joe Biden. Former President Donald Trump and some of his allies have continued to repeat these claims, but they have been refuted by election officials, judges, and legal investigations. Despite the persistence of these claims, there remains no credible evidence to support the assertion that the 2020 election was stolen.”

In both answers from Perplexity I omitted the references it cited.

By contrast, when I asked Google the same question it provided a long list of articles and links, representing a hodgepodge of assertions from all over the spectrum. Scanning down the list and their short summaries, I only got more confused and uncertain about this very clear question with a very clear answer.

Yet I fear that many people will still feel uncomfortable with accepting conclusions provided by tools like Perplexity. Part of their discomfort is understandable.

Firstly, we generally hold an increasingly false assumption that “more information is better.” We feel that if we are exposed to all viewpoints and ideas we can come away with much more confidence that we have examined the question from every angle and are more able to make an informed assessment. Google certainly gives us more points of views on any given topic.

Secondly, when we hear things repeated by many sources we feel more confident in the veracity of that position. A list presented by Google certainly gives us a “poll the audience” feeling about how many different sources support a given position.

Both of those biases would make us feel more comfortable reviewing Google search results rather than “blindly” accepting the conclusion of a tool like Perplexity.

However, while a wide range of information reinforced by a large number of sources may be somewhat reliable indicators of validity in a normal, fact-rich information environment, these only confuse and mislead us in an environment rife with misinformation. The diverse range of views may be mostly or even entirely filled with nonsense and the apparent number of sources may only be the clanging repetition of an echo chamber in which everyone repeats the same utter nonsense.

Therefore while I’ll certainly continue to use tools like Google and ChatGPT when they serve me well, I will turn to tools like Perplexity when I want and need to sift through the deluge of misinformation that we get from rabbit-hole aggregators like Google or unfettered creative tools like ChatGPT.

Thanks to you Perplexity for putting your passions to work to produce a socially responsible AI platform! I gotta say though that I hope that you are but a taste of even more powerful and socially responsible AI that will help move us toward more fact-based thinking and more rational, soundly-informed decision-making.

Addendum:

Gemini is Google’s new AI offering replacing their Bard platform. Two things jump out at me in the Gemini FAQ page (see here). First, in answer to the question “What are Google’s principles for AI Innovation?” they say nothing directly about achieving a high degree of factual accuracy. One may generously infer it as implicit in their stated goals, but if they don’t care enough to state it as a core part of their mission, they clearly don’t care about it very much. Second, in answer to “Is Gemini able to explain how it works?” they go to extremes to urge people to “pay no attention to that man behind the curtain.” Personally, if they urge me to use an information source that they disavow when it comes to their own self-interest, I don’t want to use that platform for anything of importance to me.

AI-Powered Supervillains

Like much of the world, I’ve been writing a lot about AI lately. In Understanding AI (see here), I tried to demystify how AI works and talked about the importance of ensuring that our AI systems are trained on sound data and that they nudge us toward more sound, fact-based, thinking. In AI Armageddon is Nigh! (see here), I tried to defuse all the hyperbolic doom-saying over AI that only distracts from the real, practical challenge of creating responsible, beneficial AI tools.

In this installment, I tie in a seemingly unrelated blog article I did called Spider-Man Gets It (see here). The premise of that article was that guns, particularly deadly high-capacity guns, turn ordinary, harmless people into supervillains. While young Billy may have profound issues, he’s impotent. But give him access to a semi-automatic weapon and he shoots up his school. Take away his gun and he may still be emotionally disturbed, but he can no longer cause much harm to anyone.

The point I was making is that guns create supervillains. But not all supervillains are of the “shoot-em-up” variety. Not all employ weapons. Some supervillains, like Sherlock Holmes’ arch nemesis Professor Moriarty, fall into the mastermind category. They are powerful criminals who cause horrible destruction by drawing upon their vastly superior information networks and weaponizing their natural analytic and planning capabilities.

Back in Sherlock Holmes’ day, there was only one man who could plot at the level of Professor Moriarty and that was Professor Moriarty. But increasingly, easy access to AI, as with easy access to guns, could empower any ordinary person to become a mastermind-type supervillain like Professor Moriarty.

We already see this happening. Take for example the plagiarism accusations against Harvard President Claudine Gay. Here we see disingenuous actors using very limited but powerful computer tools to find instances of “duplicative language” in her writing in a blatant attempt to discredit her and to undermine scholarship in general. I won’t go into any lengthy discussion here about why this activity is villainous, but it is sufficient to simply illustrate the weaponization of information technology.

And the plagiarism detection software presumably employed in this attack is nowhere close to the impending power of AI tools. It is like a handgun compared to the automatic weapons coming online soon. Think of the supervillains that AI can create if not managed more responsibly than we have managed guns.

ChatGPT, how can I most safely embezzle money from my company? How can I most effectively discredit my political rival? How can I get my teacher fired? How can I emotionally destroy my classmate Julie? All of these queries would yield specific, not generic, answers. In the last example, the AI would consider all of Julie’s specific demographics and social history and apply advanced psychosocial theory to determine the most effective way to emotionally attack her specifically.

In this way, AI can empower intellectual supervillains just as guns have empowered armed supervillains. In fact, AI certainly and unavoidably will create supervillains unless we are more responsible with AI than we have been with guns.

What can we do? If there is a will, there are ways to ensure that AI is not weaponized. We need to create AI that nudges us not only toward facts and reason, but away from causing harm. AI can and must infer motive and intent. It must weigh each question in light of previous questions and anticipate the ultimate goal of the dialog. It must make ethical assessments and judgements. In short, it must be too smart to fall for clever attempts to weaponize it to cause harm.

In my previous blog article I stated that AI is not only the biggest threat to fact-based thinking, but also the only force that can pull us back from delusional thinking. In the same way, AI can be used to do harm not only by governments but by ordinary people, yet it is also the only hope we have of preventing folks from doing harm with it.

We need to get it right. We have to worry not that AI will become too smart, but that it will not become smart enough to refuse to be used as a weapon in the hands of malevolent actors or by the throngs of potential but impotent intellectual supervillains.

AI Armageddon is Nigh!

Satan is passé. We are now too sophisticated to believe in such things. Artificial Intelligence has become our new pop-culture ultimate boogeyman. Every single news outlet devotes a significant portion of its daily coverage to hyperventilating over the looming threat of AI Armageddon.

I mean, everyone seems to be talking about it. Even really smart experts in AI seem to never tire of issuing dire, ominous warnings in front of Congress. So there must be something to it.

But let’s jump off the AI bandwagon for a moment.

There is certainly some cause for concern about AI. I have written previously about how AI works and about the very real danger that “bad” AI-driven information technology can easily exacerbate the problem of misinformation being propagated through our culture (see here). But I also pointed out that the only solution to this problem is “good” AI that nudges our thinking toward facts and rationality.

That challenge of information integrity is real. But what is not realistic is the rampant, fantastical Skynet scenario in which AI-driven Terminator robots are dispatched by a sentient, all-powerful AI intelligence that has decided humankind must be exterminated.

Yes, I know: “But Tyson, a lot of really smart experts are certain that some kind of similar AI doomsday scenario is not only possible but almost inevitable. If not complete Armageddon, then at least more limited scenarios in which AI ‘decides’ to harm people.”

Well to that I say that a lot of really smart people who ought to know better were also certain in their belief in the Rapture. Being smart in some ways is no protection against being stupid in others.

If Congresspersons thought their constituents still cared about the Rapture, they would trot out any number of otherwise smart people to testify before them about the inevitability of the looming Rapture. If it got clicks, news media would incessantly report stories about all the leading experts who warn that the Rapture is imminent. Few of the far larger number of people who downplay the Rapture hysteria would get reported on.

If you read my book, Pandemic of Delusion, you’d have a pretty good sense of how this kind of thinking can take root and take over. Think about it. We have had nearly a century of exposure to science fiction stories that almost invariably include storylines about computers running amok and taking over. Many of us were first exposed to the idea by HAL 9000 in 2001: A Space Odyssey or by Skynet in The Terminator, but similar sentient computers and robots have long served as villains in virtually every book, TV, or movie franchise.

We have seen countless examples in superhero lore as well. Perhaps the most famous is Superman’s arch-nemesis Brainiac. Brainiac was a “smart” alien weapon that gained sentience and decided that its mission was to exterminate all life in the universe. Brainiac destroyed billions of lives throughout the universe and only Superman has managed to prevent him from exterminating all life on Earth.

I point out this supersaturation of AI villains in pop culture to get you to think about how our brains have been conditioned, over and over and over, to be comfortable with the idea of AI villains. Even though it is mere fantasy, all this exposure has nevertheless primed our brains to be receptive to the idea of sentient killer AI. Not only open to the idea, but completely certain that it is reasonable and unavoidable.

This is not unlike being raised in a Christian culture and being unconsciously groomed to not only be open to the idea of the Rapture but to become easily convinced it makes obvious common sense.

Look, AI has become a fixation in our culture. We attach “AI” to anything we want to sell. Behold, our new energy-saving AI lightbulbs! But we also attach “AI” to anything we want to scare folks with. Beware the AI lightbulb! It’s going to decide to electrocute you to save energy!!

I implore you to please stop getting paralyzed by terrifying AI boogeymen, and instead start doing the real work of ensuring that AI helps make the world a safer and saner place for all.

Understanding AI

Even though we see lots of articles about AI, few of us really have even a vague idea of how it works. It is super complicated, but that doesn’t mean we can’t explain it in simple terms.

I don’t work in AI, but I did work as a Computational Scientist back in the early 1980’s. Back then I became aware of fledgling neural network software and pioneered its applications in formulation chemistry. While neural network technology was extremely crude at that time, I proclaimed to everyone that it was the future. And today, neural networks are the beating heart of AI which is fast becoming our future.

To get a sense of how neural networks are created and used, consider a very simple example from my work. I took examples of paint formulations, essentially the recipes for different paints, as well as the paint properties each produced, like hardness and curing time. Every recipe and its resulting properties was a training fact, and all of them together formed my training set. I fed my training set into software to produce a neural network, essentially a continuous map of this landscape. This map could take quite a while to create, but once the neural network was complete I could enter a new proposed recipe and it would instantly tell me the expected properties. Conversely, I could enter a desired set of properties and it would instantly predict a recipe to achieve them.
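To make that concrete, here is a minimal modern sketch of the forward direction (recipe in, properties out) using scikit-learn. The formulations and property values are invented for illustration, and a real training set would contain far more examples:

```python
# Toy version of the recipe -> properties network described above.
# Formulations and measured properties are invented for illustration.
from sklearn.neural_network import MLPRegressor

# Each training fact: fractions of (resin, solvent, pigment)...
recipes = [
    [0.50, 0.30, 0.20],
    [0.60, 0.25, 0.15],
    [0.40, 0.40, 0.20],
    [0.55, 0.20, 0.25],
]
# ...and the (hardness, curing time in hours) that recipe produced.
properties = [
    [3.1, 6.0],
    [3.8, 4.5],
    [2.4, 7.5],
    [3.5, 5.0],
]

# Fitting builds the "continuous map" of the formulation landscape.
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
net.fit(recipes, properties)

# A new, unseen recipe can now be evaluated instantly.
print(net.predict([[0.52, 0.28, 0.20]]))  # expected (hardness, cure time)
```

The inverse direction (desired properties in, candidate recipe out) can be sketched the same way by simply swapping the inputs and outputs before training.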

So imagine adapting and expanding that basic approach. Imagine, for example, that rather than using paint formulations as training facts, you gathered training facts from a question/answer site like Quora, or a simple FAQ. You first parse each question and answer text into keywords that become your inputs and outputs. Once trained, the AI can then answer almost any question, even a previously unseen variation, so long as it lies upon the map that has been created.
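As a crude illustration of that idea, the sketch below represents questions as keyword vectors and answers an unseen variation by finding the nearest stored question on that map. It substitutes a simple nearest-neighbor lookup for a trained neural network, and the Q/A pairs are invented, but the principle of mapping unseen variations onto trained territory is the same:

```python
# Toy keyword-map Q/A lookup. The Q/A pairs are invented, and a
# nearest-neighbor search stands in for a trained neural network.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

qa_pairs = [
    ("why is the sky blue", "Rayleigh scattering of sunlight..."),
    ("why does ice float", "Ice is less dense than liquid water..."),
    ("why do leaves change color", "Chlorophyll breaks down in autumn..."),
]

vectorizer = TfidfVectorizer()
question_vectors = vectorizer.fit_transform([q for q, _ in qa_pairs])

def answer(new_question):
    # Project the unseen variation onto the same keyword map and
    # return the answer attached to the closest stored question.
    v = vectorizer.transform([new_question])
    scores = cosine_similarity(v, question_vectors)[0]
    return qa_pairs[scores.argmax()][1]

print(answer("why is the sky such a deep blue"))  # matches the first pair
```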

Next imagine you had the computing power to scan the entire Internet and parse all that information down into sets of input and output keywords, and that you had the computing power to build a huge neural network based on all those training facts. You would then have a knowledge map of the Internet, not too unlike Google Maps for physical terrain. That map could then be used to instantly predict what folks might say in response to anything folks might say – based on what folks have said on the Internet.

You don’t need to just imagine, because now we can do essentially that.

Still, a trained neural network alone is not enough to make an AI. It first needs to understand your written or spoken question, parse it, and select input keywords. For that it needs a bunch of skills like voice recognition and language parsing. After finding likely output keywords, it must order them sensibly and build a natural language text or video presentation of the outputs. For that you need text generators, predictive algorithms, spelling and grammar engines, and many more processors to produce an intelligible, natural-sounding response. Most of these technologies have been refined for a long time in your word processor or your messaging applications. AI is therefore really a convergence of many well-known technologies that we have built and refined since at least the 1980s.
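That convergence can be pictured as a pipeline of stages. In this sketch every function is a hypothetical placeholder that simply names a stage described above; none of it is a real API:

```python
# A stub pipeline naming the stages described above. Every function
# is a hypothetical placeholder, not a real library call.

def transcribe(audio):          # voice recognition
    ...

def parse_keywords(text):       # language parsing -> input keywords
    ...

def query_network(keywords):    # the trained neural network map
    ...

def compose_response(keywords): # ordering, grammar, text generation
    ...

def respond(audio):
    text = transcribe(audio)
    inputs = parse_keywords(text)
    outputs = query_network(inputs)
    return compose_response(outputs)
```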

AI is extremely complex and massive in scale, but unlike quantum physics, it is quite understandable in concept. What has enabled the construction of AI-scale neural networks is the mind-boggling computer power required to train such a huge network. When I trained my tiny neural networks in the 1980s it took hours. Now we can parse and train a network on, well, the entire Internet.

OK, so hopefully that demystifies AI somewhat. It basically pulls a set of training facts from the Internet, parses them and builds a network based on that data. When queried, it uses that trained network map to output keywords and applies various algorithms to build those keywords into comprehensible, natural sounding output.

It’s important we understand at least that much about how AI works so that we can begin to appreciate and address the much tougher questions, limitations, opportunities, and challenges of AI.

Most importantly, garbage in, garbage out still applies here. Our goal for AI should be to do better than we humans can do, to be smarter than us. After all, we already have an advanced neural network inside our skulls that has been trained over a lifetime of experiences. The problem is, we have a lot of junk information that compromises our thinking. But if an AI just sweeps in everything on the Internet, garbage and all, doesn’t that make it just an even more compromised and psychotic version of us?

We can only rely upon AI if it is trained on vetted facts. For example, AI could be limited to training facts from Wikipedia, scientific journals, actual raw data, and vetted sources of known accurate information. Such a neural network would almost certainly be vastly superior to humans in producing accurate and nuanced answers to questions that are too difficult for humans to understand given our more limited information and fallibilities. There is a reason that there are no organic doctors in the Star Wars universe. It is because there is no advanced future civilization where organic creatures could compete with the AI medical intelligence and surgical dexterity of droids.

Here’s a problem. We don’t really want that kind of boring, practical AI. Such specialized systems will be important, but neither hugely commercial nor sociologically impactful. Rather, we are both allured and terrified by AI that can write poetry or hit songs, generate romance or horror novels, interpret the news, and draw us images of cute dragon/butterfly hybrids.

The problem is, that kind of popular “human-like” AI, not bound by reality or truth, would be incredibly powerful in spreading misinformation and manipulating our emotions. It would feed back nonsense that would further instill and reinforce nonsensical and even dangerous thinking in our own brain-based neural networks.

AI can help mankind to overcome our limitations and make us better. Or it can dramatically magnify our flaws. It can push us toward fact-based information, or it can become QAnon and Fox “News” on steroids. Both are equally feasible, but if Facebook algorithms are any indication, the latter is far more probable. I’m not worried about AI creating killer robots to exterminate mankind, but I am deeply terrified by AI pushing us further toward irrationality.

To create socially responsible AI, there are two things we must do above all else. First, we must train specialized AI systems, say as doctors, with only valid, factual information germane to medical treatment. Second, any more generative, creative AI networks should be built from the ground up to distinguish factual information from fantasy. We must be able to indicate how realistic we wish our responses to be, and the system must flag clearly, in a non-fungible manner, how factual its creations actually are. We must be able to count on AI to give us the truth as best as computer algorithms can recognize it, not merely to make up stories or regurgitate nonsense.

Garbage in, garbage out is a huge issue, but we also face an impending identity crisis brought about by AI, and I’m not talking about people falling in love with their smart phone.

Even after hundreds of years of coming to terms with evolution, the very notion still threatens many people with regard to our relationship with animals. Many are still offended by the implication that they are little more than chimpanzees. AI is likely to pose the same sort of profound challenge to our deeply personal sense of what it means to be human.

We can already see that AI has blown way past the Turing Test and can appear indistinguishable from a human being. Even while not truly self-aware, AI systems can seem to be capable of feelings and emotion. If AI thinks and speaks like a human being in every way, then what is the difference? What does it even mean to be human if all the ways we distinguish ourselves from animals can be reproduced by computer algorithms?

The neural network in our brain works effectively like a computer neural network. When we hear “I love…” our brains might complete that sentence with “you.” That’s exactly what a computer neural network might do. Instead of worrying about whether AI systems are sentient, the more subtle impact will be to make us start fretting about whether we are merely machines ourselves. This may cause tremendous backlash.
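The completion analogy is easy to make concrete. The toy sketch below uses the simplest possible stand-in for a neural network, a table counting which word most often follows each word in some training text; the tiny “corpus” is invented purely for illustration:

```python
# Toy next-word predictor illustrating the "I love..." -> "you"
# completion above. A bigram count table stands in for a neural
# network, and the training corpus is invented for illustration.
from collections import Counter, defaultdict

corpus = "i love you . i love pizza . i love you more . you love me"
words = corpus.split()

follows = defaultdict(Counter)
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def complete(word):
    # Predict the continuation seen most often in training.
    return follows[word].most_common(1)[0][0]

print(complete("love"))  # -> 'you' (seen twice vs. 'pizza' and 'me' once)
```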

We might alleviate that insecurity by rationalizing that AI is not real by definition because it is not human. But that doesn’t hold up well. It’s like claiming that manufactured Vitamin C is not really Vitamin C because it did not come from an orange.

So how do we come to terms with the increasingly undeniable fact that intellectually and emotionally we are essentially just biological machines? The same way many of us came to terms with the fact that we are animals. By acknowledging and embracing it.

When it comes to evolution, I’ve always said that we should take pride in being animals. We should learn about ourselves through them. Similarly, we should see computer intelligence as an opportunity, not a threat to our sense of exceptionalism. AI can help us to be better machines by offering a laboratory for insight and experimentation that can help both human and AI intelligences to do better.

Our brain-based neural networks are trained on the same garbage data as AI. The obvious flaws in AI are the same less obvious flaws that affect our own thinking. Seeing the flaws in AI can help us to recognize similar flaws in ourselves. Finding ways to correct the flaws in AI can help us to find similar training methodologies to correct them in ourselves.

I’m an animal and I’m proud to be “just an animal” and I’m equally proud to be “just a biological neural network.” That’s pretty awesome!

Let’s just hope we can create AI systems that are not as flawed as we are. Let’s hope that they will instead provide sound inputs to serve as good training facts to help retrain our own biological neural networks to think in more rational and fact-based ways.