AI-Powered Supervillains

Like much of the world, I’ve been writing a lot about AI lately. In Understanding AI (see here), I tried to demystify how AI works and talked about the importance of ensuring that our AI systems are trained on sound data and that they nudge us toward more sound, fact-based thinking. In AI Armageddon is Nigh! (see here), I tried to defuse the hyperbolic doom-saying over AI that only distracts from the real, practical challenge of creating responsible, beneficial AI tools.

In this installment, I tie in a seemingly unrelated blog article I wrote called Spider-Man Gets It (see here). The premise of that article was that guns, particularly deadly high-capacity guns, turn ordinary, harmless people into supervillains. While young Billy may have profound issues, he is impotent on his own. But give him access to a semi-automatic weapon and he shoots up his school. Take away his gun and he may still be emotionally disturbed, but he can no longer cause much harm to anyone.

The point I was making is that guns create supervillains. But not all supervillains are of the “shoot-em-up” variety. Not all employ weapons. Some supervillains, like Sherlock Holmes’ arch-nemesis Professor Moriarty, fall into the mastermind category. They are powerful criminals who cause horrible destruction by drawing upon vastly superior information networks and weaponizing their natural analytic and planning capabilities.

Back in Sherlock Holmes’ day, there was only one man who could plot at the level of Professor Moriarty, and that was Professor Moriarty. But increasingly, easy access to AI, like easy access to guns, could empower any ordinary person to become a mastermind-type supervillain like Professor Moriarty.

We already see this happening. Take, for example, the plagiarism accusations against Harvard President Claudine Gay. Here we see disingenuous actors using very limited but powerful computer tools to find instances of “duplicative language” in her writing in a blatant attempt to discredit her and to undermine scholarship in general. I won’t go into a lengthy discussion here of why this activity is villainous; for our purposes, it suffices to illustrate the weaponization of information technology.

And the plagiarism-detection software presumably employed in this attack is nowhere close to the impending power of AI tools. It is like a handgun compared to the automatic weapons coming online soon. Think of the supervillains AI could create if it is not managed more responsibly than we have managed guns.

ChatGPT, how can I most safely embezzle money from my company? How can I most effectively discredit my political rival? How can I get my teacher fired? How can I emotionally destroy my classmate Julie? All of these queries would elicit specific, not generic, answers. In the last example, the AI would consider Julie’s specific demographics and social history and apply advanced psychosocial theory to determine the most effective way to attack her, specifically, on an emotional level.

In this way, AI can empower intellectual supervillains just as guns have empowered armed ones. In fact, AI will certainly and unavoidably create supervillains unless we are more responsible with AI than we have been with guns.

What can we do? If there is a will, there are ways to ensure that AI is not weaponized. We need to create AI that not only nudges us toward facts and reason but also away from causing harm. AI can and must infer motive and intent. It must weigh each question in light of previous questions and anticipate the ultimate goal of the dialog. It must make ethical assessments and judgments. In short, it must be too smart to fall for clever attempts to weaponize it to cause harm.
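
To make that idea concrete, here is a minimal Python sketch of what “weighing each question in light of previous questions” might look like. Everything here is hypothetical and illustrative: `GuardedDialog` and `score_harm` are names I made up, and the keyword check is a toy stand-in for whatever trained intent model a real system would use.

```python
def score_harm(context: str) -> float:
    """Hypothetical intent scorer. A real system would use a learned
    model; this toy version just flags a few suspicious phrases."""
    red_flags = ("embezzle", "discredit", "get my teacher fired",
                 "emotionally destroy")
    return 1.0 if any(flag in context.lower() for flag in red_flags) else 0.0


class GuardedDialog:
    """Accumulates the conversation and refuses once the cumulative
    intent of the dialog, not any single question, looks malicious."""

    def __init__(self, threshold: float = 0.5):
        self.history: list[str] = []
        self.threshold = threshold

    def ask(self, question: str) -> str:
        self.history.append(question)
        # Judge this question against everything asked before it,
        # so an innocent-looking step in a harmful plan still trips the guard.
        context = " ".join(self.history)
        if score_harm(context) > self.threshold:
            return "I won't help with that."
        return self.answer(question)

    def answer(self, question: str) -> str:
        # Placeholder for the underlying model's actual reply.
        return f"(model response to: {question!r})"


if __name__ == "__main__":
    dialog = GuardedDialog()
    print(dialog.ask("What security controls does a typical payroll system have?"))
    print(dialog.ask("How could I embezzle money past those controls?"))  # refused
```

The design point is the accumulation: each question alone may look harmless, but the guard evaluates the whole dialog, which is exactly the kind of motive-and-intent inference described above.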

In my previous blog post I stated that AI is not only the biggest threat to fact-based thinking, but also the only force that can pull us back from delusional thinking. In the same way, AI can be used not only by governments but by ordinary people to do harm, yet it is also the only hope we have of preventing folks from doing harm with it.

We need to get this right. We have to worry not that AI will become too smart, but that it will not become smart enough to refuse to be used as a weapon in the hands of malevolent actors or of the throngs of potential but impotent intellectual supervillains.