Tag Archives: Chat GPT

Hyperbolic Headlines are Destroying Journalism!

In our era of information overload, most readers consume their news by scanning headlines rather than through any careful reading of articles. A study by the Media Insight Project found that six in ten people acknowledge that they have done nothing more than read news headlines in the past week (Full Fact). Consuming news in this manner can make one less, rather than more, well-informed.

Take, for instance, the headline from a major online newspaper: “Scientists Warn of Catastrophic Climate Change by 2030.” The article itself presents a nuanced discussion about potential climate scenarios and the urgent need for policy changes. However, the headline evokes a sense of inevitability and immediate doom that is not supported by the article’s content. These kinds of headlines invoke fear and urgency to drive traffic at the expense of an accurate representation of what is really in the article.

All too typically, hyperbolic headlines instill dangerously misleading and lasting impressions. For example, a headline that screams “Economy in Freefall: Recession Imminent” might actually precede a measured article discussing economic indicators and expert opinions on potential downturns. Misleading headlines have an outsized effect, creating a skewed perception that can negatively influence public opinion and decision-making.

It often seems that headline writers have not read the articles at all. Moreover, they change them frequently, sometimes several times a day, to drive more traffic by pushing different emotional buttons.

Particularly egregious examples of this can be found in the political arena. During election seasons, headlines often lean toward sensationalism to capture attention. A headline like “Candidate X Involved in Major Scandal” may refer only to a minor, resolved issue, but the initial shock value sticks with readers. It unfairly delegitimizes the target of the headline. The excuse that the article itself is fair and objective does not mitigate the harm done by these headlines because, as we said, most people only read the headlines. And if they do skim the article, they often do so in a cursory attempt to learn more about the salacious headline. If the article does not immediately satisfy that expectation, they quickly become bored and don’t bother to actually read the more reasoned presentation in the article.

This headline-driven competition for clicks has led to a landscape where accuracy and depth are sacrificed for immediacy and sensationalism. Headlines are crafted to evoke emotional responses, whether through fear, anger, or salaciousness, rather than to inform. This shift has profound implications. When readers base their understanding of complex issues on superficial and often misleading headlines, they are ill-equipped to engage in meaningful discourse or make informed decisions.

Furthermore, the impact of misleading headlines extends beyond individual misinformation. It contributes to a polarized society where people are entrenched in echo chambers, each side reinforced by selective and often exaggerated information communicated to them through attention-grabbing headlines. This environment fosters division and reduces the opportunity for constructive dialogue, essential for a healthy democracy (Center for Media Engagement).

Consider the headline “Vaccines Cause Dangerous Side Effects, Study Shows.” The article might detail a study discussing the rarity of severe side effects and overall vaccine efficacy, but the headline fuels anti-vaccine sentiment by implying a more significant threat. Such headlines not only mislead but also exacerbate public health challenges by spreading fear and misinformation.

Prominent journalists like Margaret Sullivan of the Washington Post and Jay Rosen of NYU have critiqued the increasing prevalence of clickbait headlines, noting that they often prioritize sensationalism over accuracy, thereby undermining the credibility of journalism and contributing to public misinformation. Sullivan has emphasized the ethical responsibility of journalists to ensure that headlines do not mislead, as they serve as the primary interface between the news and its audience.

Unfortunately, I suspect that journalists typically have little to no say in the headlines that promote their articles. Authors and editors should reassert control.

Until and unless journalists start acting like responsible journalists with regard to sensational headlines, readers should be wary of headlines that seem too dramatic, overstated, or that attempt to appeal to emotions.

And this is not a problem limited to tabloid journalism… we are talking about you, New York Times! Most people are already skeptical about headlines published in the National Enquirer. Tabloid headlines are not actually as serious a problem as the “credible” headlines put forth by the New York Times and other publications that still benefit from an assumption of responsible journalism.

The current trend of sensationalist online newspaper headlines is a disservice to readers and society. The practice prioritizes clicks over clarity, hyperbole over honesty, and in doing so, contributes to a misinformed and divided public. It is imperative for both readers and journalists to advocate for a return to integrity in news reporting – particularly in the headlines they put out. Accurate, informative headlines are not just a journalistic responsibility but a societal necessity to ensure an informed and engaged populace.

Footnote: Did I fool you??

Does this article sound different than my usual blog articles? Is it better or worse or just different? This was actually an experiment on my part. I asked Chat GPT to write this article for me. I offer it to you with minimal editing as a demonstration of what AI can do.

I’m interested in hearing what you think in the comments. Should I hang up my pen and leave all the writing to AI?

AI-Powered Supervillains

Like much of the world, I’ve been writing a lot about AI lately. In Understanding AI (see here), I tried to demystify how AI works and talked about the importance of ensuring that our AI systems are trained on sound data and that they nudge us toward more sound, fact-based, thinking. In AI Armageddon is Nigh! (see here), I tried to defuse all the hyperbolic doom-saying over AI that only distracts from the real, practical challenge of creating responsible, beneficial AI tools.

In this installment, I tie in a seemingly unrelated blog article I did called Spider-Man Gets It (see here). The premise of that article was that guns, particularly deadly high-capacity guns, turn ordinary, harmless people into supervillains. While young Billy may have profound issues, he’s impotent. But give him access to a semi-automatic weapon and he shoots up his school. Take away his gun and he may still be emotionally disturbed, but he can no longer cause much harm to anyone.

The point I was making is that guns create supervillains. But not all supervillains are of the “shoot-em-up” variety. Not all employ weapons. Some supervillains, like Sherlock Holmes’ archnemesis Professor Moriarty, fall into the mastermind category. They are powerful criminals who cause horrible destruction by drawing upon their vastly superior information networks and weaponizing their natural analytic and planning capabilities.

Back in Sherlock Holmes’ day, there was only one man who could plot at the level of Professor Moriarty and that was Professor Moriarty. But increasingly, easy access to AI, as with easy access to guns, could empower any ordinary person to become a mastermind-type supervillain like Professor Moriarty.

We already see this happening. Take, for example, the plagiarism accusations against Harvard President Claudine Gay. Here we see disingenuous actors using very limited but powerful computer tools to find instances of “duplicative language” in her writing in a blatant attempt to discredit her and to undermine scholarship in general. I won’t go into any lengthy discussion here about why this activity is villainous; the episode suffices to illustrate the weaponization of information technology.

And the plagiarism detection software presumably employed in this attack is nowhere close to the impending power of AI tools. It is like a handgun compared to the automatic weapons coming online soon. Think of the supervillains that AI can create if not managed more responsibly than we have managed guns.

Chat GPT, how can I most safely embezzle money from my company? How can I most effectively discredit my political rival? How can I get my teacher fired? How can I emotionally destroy my classmate Julie? Each of these queries would elicit specific, not generic, answers. In the last example, the AI would consider all of Julie’s specific demographics and social history and apply advanced psychosocial theory to determine the most effective way to emotionally attack her specifically.

In this way, AI can empower intellectual supervillains just as guns have empowered armed supervillains. In fact, AI certainly and unavoidably will create supervillains unless we are more responsible with AI than we have been with guns.

What can we do? If there is a will, there are ways to ensure that AI is not weaponized. We need to create AI that nudges us not only toward facts and reason but also away from causing harm. AI can and must infer motive and intent. It must weigh each question in light of previous questions and anticipate the ultimate goal of the dialogue. It must make ethical assessments and judgments. In short, it must be too smart to fall for clever attempts to weaponize it to cause harm.

In my previous blog I stated that AI is not only the biggest threat to fact-based thinking but also the only force that can pull us back from delusional thinking. In the same way, AI can be used not only by governments but also by ordinary people to do harm, yet it is also the only hope we have of preventing folks from doing harm with it.

We need to get it right. We have to worry not that AI will become too smart, but that it will not become smart enough to refuse to be used as a weapon in the hands of malevolent actors or by the throngs of potential but impotent intellectual supervillains.