Category Archives: Science

Understanding AI

Even though we see lots of articles about AI, few of us really have even a vague idea of how it works. It is super complicated, but that doesn’t mean we can’t explain it in simple terms.

I don’t work in AI, but I did work as a Computational Scientist back in the early 1980s. Back then I became aware of fledgling neural network software and pioneered its applications in formulation chemistry. While neural network technology was extremely crude at that time, I proclaimed to everyone that it was the future. And today, neural networks are the beating heart of AI, which is fast becoming our future.

To get a sense of how neural networks are created and used, consider a very simple example from my own work. I took examples of paint formulations, essentially the recipes for different paints, along with the properties each produced, like hardness and curing time. Each recipe and its resulting properties formed a training fact, and all of them together formed my training set. I fed that training set into software to produce a neural network, essentially a continuous map of this formulation landscape. The map could take quite a while to create, but once the neural network was complete I could enter a new proposed recipe and it would instantly tell me the expected properties. Conversely, I could enter a desired set of properties and it would instantly predict a recipe to achieve them.
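To make that concrete, here is a minimal sketch of that workflow in modern terms. The recipes and property values below are made up, and it uses scikit-learn’s MLPRegressor rather than the 1980s software I actually used, but the idea is the same: train on recipe/property facts, then query the resulting map with a new recipe.

```python
# A minimal sketch of the formulation -> properties map, with made-up numbers.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each training fact: a recipe (fractions of resin, solvent, pigment, additive)
# and the measured properties it produced (hardness, curing time in hours).
recipes = np.array([
    [0.50, 0.30, 0.15, 0.05],
    [0.45, 0.35, 0.15, 0.05],
    [0.55, 0.25, 0.15, 0.05],
    [0.40, 0.40, 0.15, 0.05],
])
properties = np.array([
    [72.0, 6.5],
    [65.0, 7.2],
    [78.0, 5.9],
    [58.0, 8.1],
])

# Training builds a continuous map from recipe space to property space.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(recipes, properties)

# Query the map with a new, unseen recipe to get predicted properties instantly.
print(model.predict([[0.48, 0.32, 0.15, 0.05]]))  # approximate [hardness, cure time]
```

The reverse lookup, from desired properties back to a candidate recipe, can be sketched the same way by swapping inputs and outputs, or by searching the forward map for the recipe whose predicted properties land closest to the target.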

So imagine adapting and expanding that basic approach. Imagine, for example, that rather than using paint formulations as training facts, you gathered training facts from a question/answer site like Quora, or from a simple FAQ. You first parse each question and answer into keywords that become your inputs and outputs. Once trained, the AI can then answer almost any question that lies on the map it has created, even previously unseen variations.
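Sketched in the same hypothetical style, with invented questions and answers: parse the text into keyword vectors, train a small network that maps question keywords to answer keywords, then query it with an unseen variation.

```python
# Illustrative only: question/answer pairs become keyword vectors, and a small
# network learns to map question keywords to answer keywords.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPRegressor

questions = [
    "how do I remove rust from steel",
    "what primer works on bare steel",
    "how long does oil paint take to cure",
]
answers = [
    "sand the rust then apply a rust converter",
    "use a zinc rich primer on bare steel",
    "oil paint cures in about seven days",
]

q_vec, a_vec = CountVectorizer(), CountVectorizer()
X = q_vec.fit_transform(questions).toarray()  # input keyword vectors
Y = a_vec.fit_transform(answers).toarray()    # output keyword vectors

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(X, Y)

# A previously unseen variation still lands near a known region of the map;
# the strongest output keywords suggest the answer.
scores = net.predict(q_vec.transform(["how can I get rust off steel"]).toarray())[0]
words = a_vec.get_feature_names_out()
print([w for w, s in sorted(zip(words, scores), key=lambda p: -p[1])[:5]])
```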

Next imagine you had the computing power to scan the entire Internet and parse all that information down into sets of input and output keywords, and that you had the computing power to build a huge neural network based on all those training facts. You would then have a knowledge map of the Internet, not too unlike Google Maps for physical terrain. That map could then be used to instantly predict what folks might say in response to anything folks might say – based on what folks have said on the Internet.

You don’t need to just imagine, because now we can do essentially that.

Still, to become an AI, a trained neural network alone is not enough. The system first needs to understand your written or spoken question, parse it, and select input keywords. For that it needs a bundle of skills like voice recognition and language parsing. After finding likely output keywords, it must order them sensibly and build a natural language text or video presentation of the outputs. For that you need text generators, predictive algorithms, spelling and grammar engines, and many more processors to produce an intelligible, natural sounding response. Most of these technologies have been refined over a long time in your word processor and your messaging applications. AI is therefore really a convergence of many well-known technologies that we have built and refined since at least the 1980s.
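Here is a toy sketch of that convergence. Each function is just a stand-in for a mature technology (speech recognition, language parsing, the trained network, text generation); the names and canned outputs are purely illustrative.

```python
# A toy pipeline: each stage stands in for a mature, separately developed technology.
def recognize_speech(audio):        # stand-in for a speech-to-text engine
    return "what primer works on bare steel"

def parse_to_keywords(text):        # stand-in for a language parser
    stopwords = {"what", "works", "on", "do", "how"}
    return [w for w in text.lower().split() if w not in stopwords]

def query_network(keywords):        # stand-in for the trained neural network map
    return ["zinc", "rich", "primer", "bare", "steel"]

def generate_text(keywords):        # stand-in for text generation and grammar engines
    return "A zinc-rich primer is a good choice for bare steel."

question = recognize_speech(audio=None)
print(generate_text(query_network(parse_to_keywords(question))))
```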

AI is extremely complex and massive in scale, but unlike quantum physics, it is quite understandable in concept. What has enabled the construction of AI-scale neural networks is the mind-boggling computing power required to train such a huge network. When I trained my tiny neural networks in the 1980s it took hours. Now we can parse and train a network on, well, the entire Internet.

OK, so hopefully that demystifies AI somewhat. It basically pulls a set of training facts from the Internet, parses them and builds a network based on that data. When queried, it uses that trained network map to output keywords and applies various algorithms to build those keywords into comprehensible, natural sounding output.

It’s important we understand at least that much about how AI works so that we can begin to appreciate and address the much tougher questions, limitations, opportunities, and challenges of AI.

Most importantly, garbage in, garbage out still applies here. Our goal for AI should be to do better than we humans can do, to be smarter than us. After all, we already have an advanced neural network inside our skulls that has been trained over a lifetime of experiences. The problem is, we have a lot of junk information that compromises our thinking. But if an AI just sweeps in everything on the Internet, garbage and all, doesn’t that make it just an even more compromised and psychotic version of us?

We can only rely upon AI if it is trained on vetted facts. For example, AI could be limited to training facts from Wikipedia, scientific journals, actual raw data, and vetted sources of known accurate information. Such a neural network would almost certainly be vastly superior to humans in producing accurate and nuanced answers to questions that are too difficult for humans to understand given our more limited information and fallibilities. There is a reason that there are no organic doctors in the Star Wars universe. It is because there is no advanced future civilization in which organic creatures could compete with the medical intelligence and surgical dexterity of droids.

Here’s a problem. We don’t really want that kind of boring, practical AI. Such specialized systems will be important, but neither hugely commercial nor broadly impactful sociologically. Rather, we are both allured and terrified by AI that can write poetry or hit songs, generate romance or horror novels, interpret the news, and draw us images of cute dragon/butterfly hybrids.

The problem is that this kind of popular, “human like” AI, not bound by reality or truth, would be incredibly powerful in spreading misinformation and manipulating our emotions. It would feed back nonsense that would further instill and reinforce nonsensical and even dangerous thinking in our own brain-based neural networks.

AI can help mankind to overcome our limitations and make us better. Or it can dramatically magnify our flaws. It can push us toward fact-based information, or it can become QAnon and Fox “News” on steroids. Both are equally feasible, but if Facebook algorithms are any indication, the latter is far more probable. I’m not worried about AI creating killer robots to exterminate mankind, but I am deeply terrified by AI pushing us further toward irrationality.

To create socially responsible AI, there are two things we must do above all else. First, we must train specialized AI systems, say as doctors, with only valid, factual information germane to medical treatment. Second, any more generative, creative AI networks should be built from the ground up to distinguish factual information from fantasy. We must be able to indicate how realistic we wish our responses to be, and the system must flag clearly, in a non-fungible manner, how factual its creations actually are. We must be able to count on AI to give us the truth as best as computer algorithms can recognize it, not merely to make up stories or regurgitate nonsense.

Garbage in, garbage out is a huge issue, but we also face an impending identity crisis brought about by AI, and I’m not talking about people falling in love with their smart phones.

Even after more than a century and a half to come to terms with evolution, the very notion still threatens many people with regard to our relationship with animals. Many are still offended by the implication that they are little more than chimpanzees. AI is likely to cause the same sort of profound challenge to our deeply personal sense of what it means to be human.

We can already see that AI has blown way past the Turing Test and can appear indistinguishable from a human being. Even while not truly self-aware, AI systems can seem to be capable of feelings and emotion. If AI thinks and speaks like a human being in every way, then what is the difference? What does it even mean to be human if all the ways we distinguish ourselves from animals can be reproduced by computer algorithms?

The neural network in our brain works effectively like a computer neural network. When we hear “I love…” our brains might complete that sentence with “you.” That’s exactly what a computer neural network might do. Instead of worrying about whether AI systems are sentient, the more subtle impact will be to make us start fretting about whether we are merely machines ourselves. This may cause tremendous backlash.

We might alleviate that insecurity by rationalizing that AI is not real by definition because it is not human. But that doesn’t hold up well. It’s like claiming that manufactured Vitamin C is not really Vitamin C because it did not come from an orange.

So how do we come to terms with the increasingly undeniable fact that intellectually and emotionally we are essentially just biological machines? The same way many of us came to terms with the fact that we are animals. By acknowledging and embracing it.

When it comes to evolution, I’ve always said that we should take pride in being animals. We should learn about ourselves through them. Similarly, we should see computer intelligence as an opportunity, not a threat to our sense of exceptionalism. AI can help us to be better machines by offering a laboratory for insight and experimentation that can help both human and AI intelligences to do better.

Our brain-based neural networks are trained on the same garbage data as AI. The obvious flaws in AI are the same less obvious flaws that affect our own thinking. Seeing the flaws in AI can help us to recognize similar flaws in ourselves. Finding ways to correct the flaws in AI can help us to find similar training methodologies to correct them in ourselves.

I’m an animal and I’m proud to be “just an animal” and I’m equally proud to be “just a biological neural network.” That’s pretty awesome!

Let’s just hope we can create AI systems that are not as flawed as we are. Let’s hope that they will instead provide sound inputs to serve as good training facts to help retrain our own biological neural networks to think in more rational and fact-based ways.

Pandemic of Delusion

You may have heard that March Madness is upon us. But never fear, March Sanity is on the way!

My new book, Pandemic of Delusion, will be released on March 23rd, 2023, and it’s not arriving a moment too soon. The challenges we face, both individually and as a society, in distinguishing fact from fiction and rationality from delusion are more powerful and pervasive than ever, and the need for deeper insight and understanding to navigate those challenges has never been more profound.

Ensuring sane and rational decision making, both as individuals and as a society, requires that we fully understand our cognitive limitations and vulnerabilities. Pandemic of Delusion helps us to appreciate how we perceive and process information so that we can better recognize and correct our thinking when it starts to drift away from a firm foundation of verified facts and sound logic.

Pandemic of Delusion covers a lot of ground. It delves deeply into a wide range of topics related to facts and belief, but it’s as easy to read as falling off a log. It is frank, informal, and sometimes irreverent. Most importantly, while it starts by helping us understand the challenges we face, it goes on to offer practical insights and methods to keep our brains healthy. Finally, it ends on an inspirational note that will leave you with an almost spiritual appreciation of a worldview based upon science, facts, and reason.

If only to prove that you can still consume more than 200 characters at a time, preorder Pandemic of Delusion from the publisher, Interlink Publishing, or from your favorite bookseller like Amazon. And after you read it two or three times, you can promote fact-based thinking by placing it ever so casually on the bookshelf behind you on video calls. It has a really stand-out binding. And don’t just order one. Do your part to make the world a more rational place by sending copies to all your friends, family, and associates.

Seriously, I hope you enjoy reading Pandemic of Delusion half as much as I enjoyed writing it.

Loss to Follow-up in Research

In my scientific evangelism, I often tout the virtues of good scientists. One virtue I often cite is that they do not accept easy answers to difficult problems. They would rather say “we do not have an answer to that question at this time” than accept a possibly incorrect or incomplete answer. They understand that embracing such quick answers not only results in the widespread adoption of false conclusions but also inhibits the development of new techniques and methods to arrive at the fuller truth.

When it comes to clinical research, however, many clinical researchers do not actually behave like good scientists. They behave more like nonscientific believers or advocates. This is particularly true with regard to the problem of “loss to follow-up.”

What is that? Well, many common clinical research studies, for example those measuring how well patients respond to a particular treatment, require that the patient be examined at some point after the treatment is administered, perhaps in a week, perhaps after several months have passed. Only through follow-up can we know how well the treatment has worked.

The universal problem, however, is that follow-up normally requires considerable effort by the researchers as well as the patients. Researchers must successfully schedule a return visit, and patients must actually answer their telephones when the researchers attempt to follow up. This often does not happen. These patients are “lost to follow-up” and we have no data for them regarding the outcomes we are evaluating.

Unsurprisingly perhaps, these follow-up rates are often very poor. In some areas of clinical research, a 50% loss to follow-up rate is considered acceptable – largely based on practicality, not statistical accuracy. Some published studies report loss to follow-up rates as high as 75% or more – that is, they have only a 25% successful follow-up rate.

To put this in context, in their 2002 series on epidemiology published in The Lancet, Schulz and Grimes included a critical paper in which they assert that any loss to follow-up over 20% invalidates general conclusions about most populations. In some cases, a 95% follow-up rate would be required in order to make legitimate general conclusions. The follow-up rate required depends upon the rate of the event being studied.

Unfortunately, few studies involving voluntary follow-up by real people can achieve these statistically meaningful rates of follow-up and thus we should have appropriately moderated confidence in their results. At some threshold, a sufficiently low confidence means we should have no confidence.

So, given the practical difficulty of obtaining a statistically satisfactory follow-up rate, what should clinical researchers do? Should they just stop doing research? There are many important questions that we need answers to, and this is simply the best we can do. Therefore, most conclude, surely some information is better than none.

But is it?

Certainly most clinical researchers – but not all – are careful to add a caveat to their conclusions. They responsibly structure their conclusions to say something like:

We found that 22% of patients experienced mild discomfort and there were no serious incidents reported. We point out that our 37% follow-up rate introduces some uncertainty in these findings.

This seems like a reasonable and sufficiently qualified conclusion. However, despite the warning about loss to follow-up, the takeaway is that this procedure is relatively safe, with only 22% of patients overall experiencing mild discomfort. That is almost assuredly going to be adopted as a general conclusion, particularly since the topic of the study is essentially “the safety of our new procedure.”

Adopting that level of safety as a general conclusion could be wildly misleading. It may be that 63% of patients failed to respond because they were killed by the procedure. Conversely, the results may create unwarranted concern about discomfort caused by the procedure if the only patients who felt compelled to follow up were those who experienced discomfort. These are exaggerations to make the point, but they illustrate very real and very common problems that we cannot diagnose because the patients were lost to follow-up.
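To see just how wide that uncertainty is, here is a small worked example using the hypothetical numbers above: 1,000 enrolled patients, a 37% follow-up rate, and 22% of responders reporting mild discomfort.

```python
# Hypothetical numbers: how far could the true discomfort rate lie from the
# reported 22% when 63% of patients are lost to follow-up?
enrolled = 1000
followed_up = int(enrolled * 0.37)            # 370 patients actually observed
events_observed = round(followed_up * 0.22)   # ~81 patients with discomfort
lost = enrolled - followed_up                 # 630 patients we know nothing about

best_case = events_observed / enrolled            # no lost patient had the outcome
worst_case = (events_observed + lost) / enrolled  # every lost patient had the outcome

print(f"Possible true rate: {best_case:.0%} to {worst_case:.0%}")
# -> roughly 8% to 71%. The data alone cannot narrow that range unless the
#    lost patients can be assumed to resemble the responders, which is
#    exactly the assumption that cannot be verified.
```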

In any case, ignoring, minimizing, or forgetting about loss to follow-up is only valid if the patients who followed up are a random sample of all the patients. That is rarely the case, and it can almost never be assumed or even verified.

Look at it this way. Imagine a scientific paper entitled “The Birds of Tacoma.” In their methodology section, the researchers describe how they set up plates of worms and bowls of nectar in their living room and opened the windows. They then meticulously counted the birds that flew into the room to eat. They report that they observed 6 robins and 4 hummingbirds. Therefore, they conclude, our study found that in Tacoma we have 60% robins and 40% hummingbirds. Of course, being scrupulous researchers, they note that their research technique could, theoretically, have missed certain bird species.

This example isn’t exactly a problem of loss to follow-up, but the result is the same. You can of course think of many, many reasons why their observations may be misleading. But nevertheless, most people would form the long-term “knowledge” that Tacoma is populated by 60% robins and 40% hummingbirds. Some might take unfortunate actions under the assurance that no eagles were found in Tacoma. Further, the feeling that we now know the answer to this question would certainly inhibit further research and limit any funding into what seems to be a settled matter.

But, still, many scientists would say that they know all of this but we have to do what we can do. We have to move forward. Any knowledge, however imperfect, is better than none. And what alternative do we have?

Well, one alternative is to reframe your research. Do not purport to report on “The Birds of Tacoma,” but rather report on “The Birds that Flew into Our Living Room.” That is, limit the scope of your title and conclusions so there is no inference that you are reporting on the entire population. Purporting to report general conclusions and then adding a caveat in the small print at the end should be unacceptable.

Further, publishers and peer reviewers should not publish papers that suggest general conclusions beyond the confidence limits of their loss to follow-up. They should require that the authors make the sort of changes I recommend above. And they themselves should be willing to publish papers that are not quite as definitive in their claims.

But more generally, clinical researchers, like any good scientists, should accept that they cannot <yet> answer some questions for which they cannot achieve a statistically sound follow-up rate. Poor information can be worse than no information.

When <real> scientists are asked about the structure of a quark, they don’t perform some simple experiments that they are able to conduct with the old lab equipment at hand and report some results with disclaimers. They stand back. They say, “we cannot answer that question right now.” And they set about creating new equipment, new techniques, to allow them to study quarks more directly and precisely.

Clinical researchers should be expected to put in that same level of effort. Rather than continuing to do dubious and even counterproductive follow-up studies, buckle down, do the hard work, and develop techniques to acquire better data. It can’t be harder than coming up with gear to detect quarks.

“I have to deal with people” should not be a valid excuse for poor science. Real scientists don’t just accept easy answers because they’re easy. That’s what believers do. So step up, clinical researchers: be scientists, and be willing to say “I don’t know, but I’m going to develop new methods and approaches that will get us those answers. Answers that we can trust and act upon with confidence.”

If you are not willing to do that, you are little better than Christian Scientists.

Animals are Little People

Like many, I opine quite a bit about the harms caused by social media. Let’s be clear; those harms are real and profound. But it would be wrong not to acknowledge all the good it does. Social media has many well-acknowledged benefits related to social networking and support, but I’d like to point out two less obvious benefits, namely as they relate to science and animals.

For some quick background, I always heard that people spend lots of time watching video clips online. I assumed they must be endlessly entertained by “guy gets hit in balls” videos. But my son sent me some links to clips on the “InterestingAsFuck” subreddit (see here). They were really engaging and gradually I started to watch them more and more. Now, my wife and I ravenously consume the clips daily and can’t ever seem to get enough.

The first great thing is how many of the video clips involve science. These clips tend to demonstrate science principles and phenomena in incredibly engaging and inspiring ways. Some are certainly presented by scientists, but most of the presentations feel accessible, home grown, like real magic that you could be doing too. I have to think that this tone and style of presenting science has a tremendously underappreciated benefit in advancing or at least popularizing science and innovation.

The second benefit of these videos is their effect on how we relate to animals. Throughout history, we have seen ourselves as separate and above animals. While we might acknowledge theoretically that we are animals too, we still view them as relatively primitive creatures. We have zoos that are intended to help us to appreciate animals, but while they offer some exposure and appreciation, they generally just make us feel like we are in a museum, watching uninteresting stuffed figures behind bars and glass required to keep us safely away from their dangerous animal natures.

But then we go to InterestingAsFuck, and we see video after video of animals relating to humans and other animals in compellingly “human” ways. We see animals playing, teasing, problem-solving, sad, fearful, happy, proud, generous, and yes, sometimes selfish and even vindictive. And not just dogs and cats. We see videos that focus on behaviors of and interactions with the full spectrum of animal life on our planet, from eagles to microbes. They all demonstrate profoundly “human” behaviors.

We see videos of animals helping other animals, even ones that are traditional enemies or prey. It is incredibly gratifying that humans are included in this. We see videos of humans helping animals and animals helping humans. In fact, we see almost entirely positive interactions between humans and our animal cousins.

You could visit a hundred zoos or spend your entire life on a farm and not be exposed to the tiniest fraction of the incredible animal interactions captured in these videos. Once you watch enough of them, it is hard to imagine how people could not be changed by them, or how, having seen so many extraordinary examples, one could continue to dismiss animal behavior as just “mimicking humans.”

I hope, perhaps I am naĂŻve, but I hope that after exposure to positive social media like this, most people will come away understanding that humans did not just suddenly appear on Earth; that all of our behaviors and emotions evolved and can be seen in our animal cousins. Animals are more like little people, like toddlers on the evolutionary ladder. As such, they deserve far more respect and appreciation than has traditionally been afforded to them.

If you don’t agree, follow InterestingAsFuck for a while, and see if you can still continue to dismiss any due recognition of animal feelings and emotions as mere projection.

Perhaps, just perhaps, social media can inspire us to engage with science, and with the real world around us, in ways that documentaries, and safaris, and zoos, and college courses have never been able to achieve.

Paranormal Investigations

When I was a kid my friends and I did lots of camping. We’d sit around the campfire late into the night, talking. Without fail, my friend John would capture our interest with some really engaging story. It would go on and on, getting wilder and wilder until we’d all eventually realize we’d been had. He was just messing with us again, having fun seeing just how gullible we could be. And somehow we all fell for it at least once on every trip.

In the 1970s, author and anthropology student Carlos Castaneda wrote a series of books detailing his tutelage under a mystic Yaqui Indian shaman named don Juan Matus. The first books were fascinating and compelling. But as the books progressed, they became increasingly fantastic. Eventually these supposedly true accounts escalated into complete and utter fantasy. Despite this, or because of it, hundreds of thousands of people reportedly made trips into the desert in hopes of finding the fictional don Juan Matus. In fact, Castaneda was awarded a doctoral degree based on this obviously fictional writing.

Castaneda never admitted that his stories were made up. We once had “mentalist” Uri Geller, who refused to admit that his fork-bending trick was only just a trick. We have long had horror films that purport to be “based on actual events.” These sorts of claims were once merely amusing. But now these kinds of paranormal con jobs have escalated, like one of John’s campfire stories, to a ridiculous, frankly embarrassing, and even dangerous level in our society. This kind of storytelling has become normalized in the prolific genre of “paranormal investigation” reality television shows.

We need to say – enough already.

Sadly, we see dozens of these shows on networks that call themselves “Discovery” or “Learning” or “History” or (most gallingly) “Science.” There are hundreds of shows and series on YouTube and elsewhere that purport to investigate the paranormal. These shows do us no service. In fact they are highly corrosive to our intellectual fabric, both individually and socially.

They all follow the same basic formula. They find some “unexplained” situation. They bring in experts to legitimize their investigations. They interview people about how they feel apprehensive or fearful about whatever it is. They spend a lot of time setting up “scientific” equipment and flashing shots of needles on gauges jumping around. They speculate about a wide range of possible explanations, most of them implausibly fantastic. They use a lot of suggestive language, horror-film style cinematography, and cuts to scary produced clips. And they end up determining that, while they can’t say anything for sure, they can say that there is indeed something very mysterious going on.

These shows do tremendous harm. They legitimize the paranormal and trivialize real science. They turn the tools and trappings of science into cheap carnival show props.

Some of these shows are better than others. They do conclude that the flicker on a video is merely a reflection. But in the process, in order to produce an engaging show, they entertain all sorts of crazy nonsense as legitimately plausible explanations. In doing so, they suggest that while it may not have been the cause in this particular case, aliens or ghosts might legitimately be considered as possible causes in other cases. By entertaining those possibilities as legitimate, they legitimize crazy ideas.

There is a way to do this responsibly. These shows could investigate unexplained reports, dispense with all the paranormal theatrics, and refuse to even consider paranormal explanations. They could provide actual explanations rather than merely open the door to paranormal ones.

MythBusters proved that a show that sticks to reality can be entertaining.

I am not sure which is worse: that this is the quality of the diet we are fed, or that we as a society lap it up and find it so addictively delicious.

A Healthy Model of Equality

Thomas Jefferson prominently enshrined the phrase “all men are created equal” in our Declaration of Independence. This phrase has ever since embodied perhaps the single most important and enduring foundation of the American experiment (see here).

Certainly all people of good-will respect and value this “immortal declaration.” And certainly no one limits their interpretation to the literal meaning of the phrase. For if children quickly and demonstrably became unequal, the idea of equality at creation would lose any practical or useful meaning whatsoever. So we generally accept that “created equal” also implies that we remain equal throughout our lives, independent of what we do or do not accomplish in life.

But this must be much more than a mere rhetorical or theoretical equality. It must extend far beyond a mere begrudging recognition that all people have the right to basic human rights and dignity. It must be a practical working belief that operates at the real functional interpersonal level which allows us to work together in this human project as equal partners.

Indeed, without a sincere and unqualified recognition of the equality of all individuals, our social fabric cannot endure. It is not possible to have a fair and just society if we feel, even deep down, that some are deserving and others are not; that some are superior merely by virtue of their social status or race or gender or even by their level of accomplishment in life. To allow for such fundamental bases of inequality is to travel down the road toward slavery and subjugation and exploitation and ultimately into the abyss of social dysfunction.

Yet moving beyond a mere allowance of certain inalienable rights to a true respect for each individual’s capabilities and worth is not easy. In fact, that is a huge understatement. For in our everyday life, in every social interaction, we see that people are simply not equal. It is laughably obvious that in fact we are not equal, by wide margins. Some folks are brilliant, others stupid. Some sane, others insane. Some gifted, others inept. Some strong, others puny. Some have lived honorable lives, others lives of ignobility.

The truth is, we cannot help but observe glaringly wide disparities on any measure of worth you care to assess.

So how can we truly hold the ideal of equality alongside the reality of inequality harmoniously in our minds? How can we sincerely believe in equality without lying to ourselves about the reality? And how can we acknowledge the reality without lying to ourselves about our belief in the ideal?

This requires some rationalization. Rationalization is not a bad thing. We all have to find some coherent model for reconciling contradictory ideas. Therefore, we all must find some kind of understanding that allows a recognition of equality to thrive, fully and harmoniously in our individual brains and in our collective psyche, alongside the reality of inequality.

You may already have your own rationalization that works well for you. But here’s how I rationalize it. It’s not perfect, but no model can be. It has long worked pretty well for me.

  1. Excluding physical or chemical debilitation, a human’s total capacity to think is neurologically dependent upon their physical brain capacity.
  2. All human brains are the same size, or close enough that the differences do not matter. Therefore our total brain “power” is essentially the same, and all of it is used in some manner.
  3. Brains exhibit a wide spectrum of capabilities. Think of it as an impracticably wide bar chart. Each bar is a narrow trait, like perhaps “math,” or “kindness,” or “neuromuscular control,” but much finer grained than those.
  4. Everyone’s bar chart is unique. It is a signature of who they are. Everyone has some high bars and some low bars. But the total area under the bars adds up to the same total power.
  5. Some bars are particularly valued by society at any given time, some are measured on an SAT exam and some are not. Some make you a business tycoon, some a starving artist. But although some signatures may be seen as more important to society, or lead to greater success, all are equal and all are valuable to society.

So, in my rationalization all people are truly equal. True, some may be less appreciated or less helpful in a given situation, but all are nevertheless worthy of true respect in my mind for their unique strengths. There is no contradiction whatsoever with the observed differences between individuals. Aspiration and reality are fully reconciled.

This model has helped me to reconcile equality with differences. It has in fact helped me appreciate equality by virtue of our differences. It has helped me to feel proud of my own personal strengths while remaining humble about my weaknesses, and while still being as worthy and as flawed as anyone overall. It has helped me recognize that being smart or skilled in one area does not make anyone particularly smart or skilled in another. That has helped me apply a healthy level of skepticism to opinions put forth by “smart” people in areas outside their proven expertise, and to allow that otherwise uninformed people can offer valuable insights in other areas. It has helped me understand that traits like “smart” or “sane” are not simple binaries but complex, nuanced, and somewhat arbitrary. We are all smart in some things and delusional in others (see here). It has also helped me to value undervalued traits, and to recognize that disrespecting one very low bar on someone’s chart does not mean you disrespect them in totality, and that overall respect does not require you to respect every trait.

And further, we should value the undervalued signatures in our society more than we do. It is our failure and our loss if we do not identify and utilize whatever unique strengths each individual has. There are no useless skillsets, only underutilized and underappreciated skillsets.

I think these rationalizations have led me in a healthy direction. Maybe this model will help you come to a healthier and more helpful view of equality as well.

The Impending Doom of Written Language

Sci Fi and Fantasy are often lumped together, but they are very distinct literary forms. The core difference is not simply whether the subject matter is dragons or space ships, but whether the subject matter is plausible, whether it could become reality. Dragons could be Sci Fi if they originate in a plausible manner and adhere to the laws of chemistry and physics. Conversely, a space ship becomes fantasy if it jumps through time and performs “science” feats that would consume fantastically implausible amounts of energy. Lots of Sci Fi fans are actually consumers of fantasy every bit as unrealistic as Lord of the Rings.

Really good Sci Fi is not merely plausible, but likely, even predictive. Great Sci Fi is unavoidable, or more aptly inescapable, given our current trajectory.

But even mind-boggling Sci Fi can often reflect a disappointing lack of imagination.

Take for example the obligatory transparent computer screen that we see in every Sci Fi show. Or even the bigger budget full-on 3-D holographic computer interfaces that provide eye-candy in every major feature nowadays. These look cool, but are probably pretty unimaginative. Plausible and likely, but crude interim technologies at best.

Take for example my own short Sci Fi story Glitch Death (see here). In it, I envision a future in which direct brain interfaces allow people to use computers to “replace” the reality around them with perceptual themes. In that future, we skip quickly past archaic holographic technology and beam our perceptions directly into the brain.

But even that only touches the surface. For example, why would a future direct-to-brain technology be limited to flashing words across our visual field and allowing us to hit “virtual buttons” floating in mid-air? To explain my thoughts on this, let’s digress and talk about math for a moment.

Today we have entered a time when math hardly matters anymore. Oh yes, we must of course understand the concepts of math. We must understand addition, division, and even the concepts of integrals and derivatives and more complex algorithms. But we don’t need to learn or know how to compute them. Not really. We have computers to handle the actual manipulative mechanics of numbers. Most of us don’t really need to learn the mechanics of math anymore, even if we use it every day.

We are already well on the way there with language as well. We have devices that “fix” all of our spelling and formatting automatically. We don’t actually have to produce typographically correct written text. All we need to do is communicate the words sufficiently for a computer to understand, interpret, correct, and standardize. We are on the verge of being able, as with math, to simply communicate concepts without worrying about the mechanics of language construction and composition.

So, back to my Sci Fi vision of the future of direct-to-brain interfaces and their likely ramifications. Interfaces like the one envisioned in Glitch Death would soon make written language, and perhaps much of verbal language, prohibitively cumbersome and obsolete. Why shoot words across our visual field, forcing us to read, comprehend, process, and assimilate? Why indeed, when the computer could instead stimulate the underlying processed and interpreted symbols directly at their ultimate target destination in our brain? We wouldn’t need to actually read anything. We would simply, suddenly know it.

In this situation, we would not need written material to be stored in libraries in any human-recognizable language. It would be more efficiently housed in computer storage in a language-independent format that is most closely compatible with, and most efficiently transferable into, the native storage of the same concepts in the human brain.

In this future, which lies directly in our path of travel assuming we survive our own follies, we deal at basic symbolic levels, and the tedious mechanics of math and language are largely offloaded. Forget tools to translate human languages. We will be able to simply discard them for a symbolic language that essentially transforms us into telepathic creatures. And in this form of telepathy, we don’t hear words in our head. We just transmit ideas and thoughts and understanding and experiences with the aid of our computer interfaces. The closest depiction in popular Sci Fi is perhaps the implantation of memories in the 1990 film “Total Recall.”

A really fascinating unknown to me is: how would humans process and interact without language? Do we require at least an internal language, an internal dialogue, to function? I have always wanted to be the subject of an experiment in which I am made to forget all language, say by hypnosis or drugs, and to experience functioning without it, the way a dog might process the world. Technology may inevitably force that experiment upon us on a huge social scale.

It’s not true that “A sufficiently advanced technology is indistinguishable from magic.” Magic would defy the fundamental restrictions of physics and chemistry. That’s how we’d know the difference. A telepathic future facilitated by direct-to-brain computer interface is Science Fiction, not Fantasy.

The Greatest Failure of Science

Before I call out the biggest, most egregious failure of science, let me pay science some due credit. Science routinely accomplishes miracles that make Biblical miracles seem laughably mundane and trivial by comparison. Water into wine? Science has turned air into food. Virgin birth? A routine medical procedure. Angels on the head of a pin? Engineers can fit upwards of 250 million transistors in that space. Healing a leper? Bah, medicine has eradicated leprosy. Raising the dead? Clear, zap, next. Create life? Been there, done that. It’s not even newsworthy anymore.

And let’s compare the record of science to the much vaunted omniscience of God himself. Science has figured out the universe in sufficient detail to reduce it to practically one compact Standard Model equation. It turns out to actually be kind of trivial, some would say. Like God, we can not only listen in on every person on the planet, but no mystery of the universe is hidden from us. We have looked back in time to the first tick of the cosmic clock, down inside atoms to quarks themselves, and out to view objects at the very edge of our “incomprehensibly” large universe.

Science routinely makes the most “unimaginable” predictions about the universe that are shortly afterward proven to be true. Everything from Special Relativity to the Higgs Boson to Dark Matter to gravitational waves and so many other phenomena. Nothing is too rare or too subtle or too complex to escape science for long.

Take the neutrino as just one representative example among many. These subatomic particles were hypothesized in 1930 by Wolfgang Pauli. They are so tiny that they cannot be said to have any size at all. They have virtually no mass and are essentially unaffected by anything. Even gravity has only an infinitesimal effect on neutrinos. They move at nearly the speed of light and pass right through the densest matter as if it were not there at all. It seems impossible that humans could ever actually observe anything so tiny and elusive.

Yet in 1956, Clyde Cowan and Frederick Reines detected neutrinos experimentally. Today we routinely observe neutrinos using gigantic detectors like the IceCube Neutrino Observatory at the South Pole. Similarly, we now routinely observe what are essentially infinitesimally tiny vibrations in spacetime itself using gravitational wave detectors like LIGO.

The point is, when talking about anything and everything from infinitesimally small neutrinos to massive gravitational waves spread so infinitesimally thin as to encompass galaxies, science can find it. If it exists, no matter how well hidden, no matter how rare, no matter how deeply buried in noise, no matter how negligible it may be… if it exists it will be found.

Which brings us to the greatest failure of science.

Given the astounding (astounding is far too weak a word) success of science in predicting and then detecting the effects of even the most unimaginably weak forces at work in the world around us, it is baffling that it has failed so miserably to detect any evidence of the almighty hand of God at work.

I mean, we know that God is the most powerful force in the universe, that God is constantly at work shaping and acting upon our world. We know that God responds to prayers and intervenes in ways both subtle and miraculous. So how is it that science has never been able to detect His influence? Not even in the smallest possible way?

Even if one adopts the view that God restricts himself rigorously to the role of “prime mover,” how is it that science has found nothing, not one neutrino-scale effect, which points back to, let alone requires, divine influence?

It is mind-boggling when you think about it. I can certainly think of no possible explanation for this complete and utter failure of science to find any shred of evidence to support the existence of God when so many of us are certain that He is the most powerful force at work in the universe!

Can you?

Three Major Flaws in your Thinking

Today I’d like to point out three severe and consequential flaws in your thinking. I know, I know, you’re wondering how I could possibly presume that you have major flaws in your thinking. Well, I can safely presume so because these flaws are so innate that it is a statistical certainty that you exhibit them much of the time. I suffer from them myself; we all do.

Our first flaw arises from our assumption that human thinking must be internally consistent; that there must necessarily be some logical consistency to our thinking and our actions. This is reinforced by our own perception that whatever our neural networks tell us, no matter how internally inconsistent, nevertheless seems totally logical to us. But the reality is that our human neural networks can accommodate any level of inconsistency. We learn whatever “training facts,” good or bad, are presented to us sufficiently often. Our brains have no inherent internal consistency checks beyond the approval and rejection patterns they are taught. For example, training in science can improve these check patterns, whereas training in religion necessarily weakens them. But nothing inherently prevents bad facts and connections from getting introduced into our networks. (Note that the flexibility of our neural networks to accommodate literally anything <was> an evolutionary advantage for us.)

Our second flaw is that we have an amazing ability to rationalize whatever random facts we are sufficiently exposed to so as to make them seem totally logical and consistent to us. We can maintain unquestioning certainty in any proposition A, but at the same time be perfectly comfortable with proposition B, even if B is in total opposition to and incompatible with proposition A. We easily rationalize some explanation to create the illusion of internal consistency and dismiss any inconsistencies. If our network is repeatedly exposed to the belief that aliens are waiting to pick us up after we die, that idea gradually becomes more and more reasonable to us, until eventually we are ready to drink poison. At each point in the deepening of those network pathways, we easily rationalize away any logical or empirical inconsistency. We observe extreme examples of this in clinical cases, but such rationalization affects all our thinking. (Note that our ability to rationalize incoherent ideas so as to seem perfectly coherent to us was an evolutionary necessity to deal with the problems produced by flaw #1.)

The third flaw is that we get fooled by our perception of and need to attribute intent and volition to our thoughts and actions. We imagine that we decide things consciously when the truth is that most everything we think and do is largely the instantaneous unconscious output of our uniquely individual neural network pathways. We don’t so much arrive at a decision as we rationalize a post-facto explanation after we realize what we just thought or did. Our consciousness is like the General who follows the army wherever it goes, and tells himself he is in charge. We feel drawn to a Match date. Afterwards, when we are asked what attracted us to that person, we come up with something like her eyes or his laugh. But the truth is that our attraction was so automatic and so complex and so deeply buried that we really have no idea. Still, we feel compelled to come up with some explanation to reassure ourselves that we made a reasoned conscious decision. (Certainly our illusion of control is a fundamental element of what we perceive as our consciousness.)

So these are our three core flaws. First, our brains can learn any set of random facts and cannot help but accept those “facts” as undeniable and obvious truths. Second, we can and do rationalize whatever our neural network tells us, however crazy and nonsensical, so as to make us feel OK enough about ourselves to at least allow us to function in the world. And thirdly, when we ascribe post-facto rationalizations to explain our neural network conclusions, we mistakenly believe that the rationalizations came first. Believing otherwise conflicts unacceptably with our need to feel in control of our thoughts and actions.

I submit that understanding these flaws is incredibly important. Truly incorporating an understanding of these flaws into your analysis of new information shifts the paradigm dramatically. It opens up powerful new insights into understanding people better, promotes more constructive evaluation of their thoughts and actions, and reveals more effective options for working with or influencing them.

On the other hand, failure to consider these inherent flaws misdirects and undermines all of our interpersonal and social interactions. It causes tremendous frustration, misunderstanding, and counterproductive interactions.

I am going to give some more concrete examples of how ignoring these flaws causes problems and how integrating them into your thinking opens up new possibilities. But before I do that, I have to digress a bit and emphasize that we are the worst judge of our own thoughts and conclusions. By definition, whatever our neural network thinks is what seems inescapably logical and true to us. Therefore, our first thought must always be, am I the one whose neural network is flawed here? Sometimes we can recognize this in ourselves, sometimes we might accept it when others point it out, but most of the time it is exceedingly difficult for us to recognize let alone correct our own network programming. When our networks change, it is usually a process of which we are largely unaware, and happens through repeated exposure to different training facts.

But just because we cannot fully trust our own thinking doesn’t mean we should question everything we think. We simply cannot and should not question every idea we have learned. We have learned the Earth is spherical. We shouldn’t feel so insecure as to question that, or be intellectually bullied into entertaining new flat-Earth theories to prove our open-mindedness or scientific integrity. Knowing when to maintain one’s confidence in our knowledge and when to question it is, of course, incredibly challenging.

And this does not mean we are all equally flawed or that we cannot improve. The measure is how well our individual networks comport with objective reality and sound reason. Some of our networks have more fact-based programming than others. Eliminating bad programming is not hopeless. It is possible, even irresistible when it happens. Our neural networks are quite malleable given new training facts, good or bad. My neural network once told me that any young bald tattooed male was a neo-Nazi, that any slovenly guy wearing baggy jeans below his butt was a thug, and that any metro guy sporting a bushy Khomeini beard was an insecure, over-compensating douchebag. Repeated exposure to facts to the contrary has reprogrammed my neural network on at least two of those.

OK, back on point now. Below are some examples of comments we might say or hear in conversation, along with some analysis and interpretation based on an awareness of our three flaws. I use the variable <topic> to allow you to fill in the blank with practically anything. It can be something unquestionably true, like <climate change is real>, or <god is a fantasy>, or <Trump is a moron>. Alternatively, if you believe obvious nonsense like <climate change is a hoax>, or <god is real>, or <Trump is the greatest President ever>, using those examples can still help just as much to improve your comfort level and relations with the other side.

I don’t understand how Jack can believe <topic>. He is so smart!

We often hear this sort of perplexed sentiment. How can so many smart people believe such stupid things? Well, remember flaw #1. Our brains can be both smart and stupid at the same time, and usually are. There are no smart or stupid brains, there are only factually-trained neural network patterns and speciously trained neural network patterns. Some folks have more quality programming, but that doesn’t prevent bad programming from sneaking in. There should be no surprise to find that otherwise smart people often believe some very stupid things.

Jill must be crazy if she believes <topic>.

Just like no one is completely smart, no one is completely crazy. Jill may have some crazy ideas that exist perfectly well alongside a lot of mostly sane ideas. Everyone has some crazy programming, and we only consider them insane when the level of crazy passes some socially acceptable threshold.

I believe Ben when he says <topic> is true because he won a Nobel Prize.

A common variant of the previous sentiments. Ben may have won a Nobel Prize, he may teach at Harvard, and he may pen opinion pieces for the New York Times, so we should give him the benefit of the doubt when we listen to his opinions. However, we should also be cognizant of the fact that he may still be totally bonkers on any particular idea. Conversely, if someone is generally bonkers, we should be skeptical of anything they say but still be open to the possibility that they may be reasoning more clearly than most on a particular issue. This is why we consider “argument by authority” to be a form of specious argument.

It makes me so mad that Jerry claims that <topic> is real!

Don’t get too mad. Jerry kinda can’t help it. His neural network training has resulted in a network that clearly tells him that <topic> must obviously be absolutely true. Too much Fox News, religious exposure, or relentless brainwashing will do that to anyone, even you.

How can Bonnie actually claim that she supports <topic> when she denies <topic>???

First, recall flaw #1. Bonnie can believe any number of incompatible things without any problem at all. And further, flaw #2 allows her to rationalize a perfectly compelling reason to excuse any inconsistency.

Clyde believes in <topic> so he’ll never support <topic>.

Not true. Remember our flaws again. Clyde’s neural network can in fact accommodate one topic without changing the other one, and still rationalize them perfectly well. All it takes is exposure to the appropriate “training facts.” In fact, consistent with flaw #3, after his network programming changes, Clyde will maintain that he consciously arrived at that new conclusion through careful study and the application of rigorous logic.

Sonny is conducting a survey to understand why voters support <topic>.

Social scientists in particular should be more cognizant of this one. How often do we go to great efforts to ask people why they believe something or why they did something? But remember flaw #3. Mostly what they will report to you is simply their rationalization, based on flaw #2. It may not, and usually doesn’t, have anything to do with their extremely complex neural network programming. That is why “subjective” studies designed to learn how to satisfy people usually fail to produce results that actually do influence them. Sonny should look for more objective measures for insight and predictive value.

Cher should support <topic> because it is factually supported and logically sound!

Appeals to evidence and logic often fail because peoples’ neural network has already been trained to accept other “evidence” and to rationalize away contrary logic. It should be no surprise that they reject your evidence and conclusions and it doesn’t accomplish anything to expect Cher to see it, let alone berate or belittle her when she does not.

And that brings us to the big reveal of this article…

There is a fourth flaw that is far worse than the other three we have discussed so far. And that is the flaw that most of us suffer from when we fail to integrate a deep awareness of flaws 1-3 into our thinking. We may not be able to completely control or eliminate flaws 1-3, but we can correct flaw number 4!

This discussion may have left you feeling helpless to understand, let alone influence, our truth-agnostic neural networks. But it also presents opportunities. These insights suggest two powerful approaches.

The first approach is more long-term. We must gradually retrain flawed neural networks. This can be accomplished through education, marketing, advertising, example-setting, and social awareness campaigns to name a few. None of these efforts need to be direct, nor do they require any buy-in by the target audience. The reality of network training is that it is largely unconscious, involuntary, and automatic. If our neural networks are exposed to sufficient nonsense, they will gradually find that nonsense more and more reasonable. But the encouraging realization is that reprogramming works just as well – or better – for sound propositions. And to be clear, this can happen quite rapidly. Look at how quickly huge numbers of neural networks have moved on a wide range of influence campaigns from the latest fashion or music craze to tobacco reduction to interracial relationships.

The second approach can be instantaneous. Rather than attempt to reprogram neural networks, you force them to jump through an alternate pathway to a different conclusion. This can happen with just a tiny and seemingly unrelated change in the inputs, and the result is analogous to suddenly shifting from the clear perception of a witch-silhouette, to that of a vase. Your network paths have not changed, yet one moment you conclude that you clearly see a witch, and the next it becomes equally obvious that it is actually a vase. For example, when Karl Rove changed the name of legislation, he didn’t try to modify people’s neural network programming, he merely changed an input to trigger a very different output result.

I hope these observations have given you a new lens through which you can observe, interpret, and influence human behavior in uniquely new and more productive ways. If you keep them in mind, you will find that they inform much of what you hear, think, and say.

Don’t Believe your Eyes

Today I wanted to talk about perceptions. Not our feelings, but what we actually see, feel, smell, hear, and taste. That is, the “objective” inputs that drive our feelings. Should we really “only believe our eyes”?

I think not.

In my book (see here) I talk about how we should be skeptical of our own memories and perceptions. Our memories are not recordings. They are docudrama recreations drawing upon various stock footage to put together a satisfying re-imagining. We remember going to the beach as a child. But in “recalling” details of that experience, we draw upon fragments from various sources to fill it in. The “slant” of that recreation is strongly dependent upon our current attitudes and biases. Our re-imagined, and often very distorted memory then reinforces what we believe to be a “vivid” recollection next time we recall it. Over time our “clear” memory can drift farther and farther from reality like a memory version of the “phone game.”

I contend that our brains work similarly with regard to our senses. We don’t see what we think we see. Our perceptions are filtered through our complex neural networks. It is a matched, filtered, processed, censored, and often highly biased version that we actually see, hear, or feel.

We know that our subconscious both filters out much of the information it receives, and adds in additional information as needed to create a sensible perception. I always favor a neural network model of brain function. As it relates to perception, our neural network receives a set of sensory data. It matches that data against known patterns and picks the closest match. It then presents our consciousness with a picture – not of the original data – but of that best-fit match. It leaves out “extraneous” information and may add in missing information to complete that expected picture. That is, we do not actually see, hear, smell, or taste a thing directly. We see, hear, smell, or taste a satisfying recreation that our network presents to us.
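For the mechanically minded, here is a toy sketch of that best-fit matching idea. The stored “templates” and feature numbers are invented; the point is only that what gets reported to consciousness is the stored template, expected details and all, rather than the raw input.

```python
# Toy sketch: perception as a best-fit match against stored templates.
import numpy as np

templates = {
    # name: (stored feature pattern, details the network "fills in")
    "vase":  (np.array([1.0, 0.2, 0.8, 0.1]), "a smooth ceramic vase with two handles"),
    "witch": (np.array([0.1, 0.9, 0.2, 1.0]), "a witch's profile with a hooked nose"),
}

def perceive(raw_input):
    # Pick the closest stored pattern and report it, not the raw data.
    best = min(templates, key=lambda name: np.linalg.norm(templates[name][0] - raw_input))
    return templates[best][1]

# Sparse, ambiguous input that happens to sit closer to the "vase" template:
# that template is what consciousness is handed, extra details and all.
print(perceive(np.array([0.9, 0.3, 0.7, 0.2])))
```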

This should not be controversial, because we experience it all the time. Based on sparse information, we “see” fine detail in a low resolution computer icon that objectively is not there. We fail to see the gorilla inserted into the background because it is out of place. We are certain we see a witch or a vase in a silhouette, depending on our bias or our expectations at that moment.

But though this should be evident, we still do not take this imprecision seriously enough in evaluating the objectivity of our own memories or perceptions. We still mostly put near-absolute faith in our memories, and are generally even more certain of our perceptions. We believe that what we perceive is absolutely objective. Clearly, it is not.

In essence, what we believe we objectively recall, see, hear, or touch is not the thing itself, but a massaged recreation of our neural network match. The version we perceive can often be wrong in very important ways. Our perceptions are only as reliable as our neural networks. And some neural networks can be more compromised than others. We can recall or even perceive radically crazy things if our neural network has been trained to do so. I campaign against belief-based thinking of all sorts because it seriously compromises these critical neural networks in crazy ways.

Even less recognized is the way this phenomenon impacts scientific research. Scientists often give far too much credence to reports of perceptions, often in extremely subtle ways.

As a simple illustration, consider how we often mock wine connoisseurs who claim to taste differences in wines but cannot pick these out in blinded studies. However, consider the confounding impact of their (and our) neural networks in even this simple case. When experiencing a wine, all the associated data is fed into the drinker’s neural network. It makes a match and then presents that match to the consciousness. Therefore, if the network does not “see” one critical factor, say color, it matches to white, not red, and presents an entirely different taste pattern to the drinker, ignoring some “extraneous” flavors and adding some other “missing” ones.

These same kinds of neural network matching errors can, and I have to assume often do, confound even more rigorous scientific studies. And they are further confounded by the fact that these mismatches are typically temporary. With every new set of data, our neural networks adjust themselves, the weightings change, to yield different results. The effect of a drug or placebo, for example, may change over time. If scientists see this, they typically look exclusively for other physiological causes. But it may be a neural network correction.

That is why I always admonish my readers to stick with inputs that will strengthen their neural networks toward sound objectivity rather than allow them to be weighted toward the rationalization and perception of beliefs and nonsense. But since none of us can ever have perfect networks, or even know how accurately ours performs in any given match, we all need a healthy amount of skepticism, even with regard to our own memories and perceptions.

I further urge scientists to at least consider the impact of neural network pre-processing on their studies, and to develop methodologies to explicitly detect and correct for such biases.