
Our Amazing Yet Deeply Flawed Neural Networks


Back in the 1980s, I did early work applying neural network technology to paint formulation chemistry, and that experience gave me fascinating insights into how our brains operate. A computer neural network is a mathematically complex program that does a simple thing. It takes a set of training “facts” and an associated set of “results,” and it learns how they connect, essentially by computing connections of varying weights between them. Once the network has learned to connect these training facts to the outputs, it can take any new set of inputs and predict the outcome, or it can predict the best set of inputs to produce a desired outcome.
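The training loop described above can be sketched in a few lines of Python. This is a minimal single-neuron illustration of weight adjustment, not the author’s paint-formulation system; the data and learning rate here are made up for illustration.

```python
# A toy "network": one neuron adjusts its connection weights until its
# outputs match the training results.

def train(samples, epochs=2000, lr=0.05):
    """samples: list of ((x1, x2), target) pairs."""
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = w1 * x1 + w2 * x2 + b
            err = pred - target
            # Nudge each weight in proportion to its contribution to
            # the error -- the "connection of varying weight."
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Training "facts": these outcomes happen to follow y = 2*x1 + 3*x2.
data = [((1, 0), 2), ((0, 1), 3), ((1, 1), 5), ((2, 1), 7)]
w1, w2, b = train(data)

# Once trained, the network predicts outcomes for inputs it never saw.
print(w1 * 3 + w2 * 2 + b)  # a value close to 2*3 + 3*2 = 12
```

The network never sees the formula, only the input/output pairs; the rule emerges in the weights.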

Our brains do essentially the same thing. We are exposed to “facts” and their associated outcomes every moment of every day. As these new “training sets” arrive, our biological neural network connections are physically weighted. Some become stronger, others weaker. The more often we observe a connection, the stronger that neural connection becomes. At some point it becomes so strong that it becomes undeniably obvious “common sense” to us. Unreinforced connections, like memories, become so weak they are eventually forgotten.
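The reinforce-and-forget dynamic can be sketched directly. This is a hypothetical toy model; the connection names, reinforcement boost, and decay rate are invented for illustration.

```python
# Toy model of biological weighting: each observation strengthens a
# connection, and every connection decays a little each day.
weights = {"clouds -> rain": 0.0, "spilled salt -> bad luck": 0.0}

def observe(connection):
    weights[connection] += 1.0   # reinforcement on each observation

def end_of_day(decay=0.95):
    for k in weights:
        weights[k] *= decay      # unreinforced connections slowly fade

for day in range(100):
    observe("clouds -> rain")                 # reinforced every day
    if day < 5:
        observe("spilled salt -> bad luck")   # seen only a few times

    end_of_day()

# The daily connection grows toward "common sense"; the rarely
# reinforced one decays until it is all but forgotten.
print(weights)
```

Note that nothing in the loop asks whether a connection is true; frequency of observation alone decides which one ends up feeling obvious.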

Note that this happens whether we know it or not and whether we want it to happen or not. We cannot NOT learn facts. We learn language as children just by overhearing it, whether we intend to learn it or not. Our neural network training does not require conscious effort and cannot be “ignored” by us. If we hear a “fact” often enough, it keeps getting weighted heavier until it eventually becomes “undeniably obvious” to us.

Pretty amazing, right? It is. But here is one crucial limitation. Neither computer nor biological neural networks have any intrinsic way of knowing whether a training fact is valid or complete nonsense. They judge truthiness based only upon their weighting. If we tell a neural network that two plus two equals five, it will accept that as a fact and faithfully report five with complete certainty every time it is asked. Likewise, if we connect spilling salt with something bad happening to us later, that becomes a fact to our neural network of which we feel absolutely certain.
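To make the point concrete, the same kind of toy neuron from above can be trained on the “fact” that two plus two equals five. A hypothetical sketch:

```python
# A network trained only on the "fact" 2 + 2 = 5 will report 5 with
# complete confidence -- it has no intrinsic test for truth.

def train_sum(samples, epochs=3000, lr=0.01):
    w1 = w2 = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            err = (w1 * x1 + w2 * x2) - target
            w1 -= lr * err * x1
            w2 -= lr * err * x2
    return w1, w2

# Every training example insists that the sum of 2 and 2 is 5.
w1, w2 = train_sum([((2, 2), 5)])
print(w1 * 2 + w2 * 2)  # reports (almost exactly) 5, never 4
```

The network’s “certainty” is just the strength of its weights, which faithfully encode whatever it was fed.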

This flaw wasn’t much of a problem during most of our evolution, when we were mostly exposed to the real, true facts of nature and the environment. It only becomes an issue when we are exposed to abstract symbolic “facts,” which can be utter fantasy. Today, however, most of what is important to our survival is not made up of “natural” facts that can be validated by science, but of conceptual ideas which can be repeated and reinforced in our neural networks without any physical validation. Take the idea of a god as one perfect example. We hear that god exists so often that our “proof of god” pathways strengthen to the point that we see proof everywhere and god’s existence becomes intuitively undeniable to us.

This situation is exacerbated by another related mental ability of ours… rationalization. Since a neural network can happily accommodate any “nonsense” facts, regardless of how contradictory they may be, our brains have to be very good at rationalizing away any logical discrepancies between them. If two strong network connections logically contradict each other, our brains excel at fabricating some reason, some rationale, to explain how that can be. When exposed to contradictory input, we feel disoriented until we rationalize it somehow. Without that ability, we would be paralyzed and unable to function.

This ability of ours to rationalize anything is so powerful that even brain lesion patients who believe they only have half of a body will quickly rationalize away any reason you give them, any evidence you show them, that proves they are wrong. Rationalization allows us to continue to function even when our neural networks have been trained with dramatically nonsensical facts. Further, once a neural network fact becomes strong enough, it can no longer be easily modified even by contradictory perceptions, because it filters and distorts subsequent perceptions to accommodate it. It cannot even be easily modified by our memories, as our memories are recreated in accordance with those connections every time we recall them.

As one example to put all this together: when I worked in the Peace Corps in South Africa, a group of high school principals warned me to stay indoors after dark because of the witches that roam about. I asked some questions, like: have you ever personally seen a witch? No, was the answer, but many others whom we trust have told us about them. What do they look like, I asked. Well, they look almost like goats with horns in the darkness. In fact, if you catch one, it will transform into a goat to avoid capture.

Here you clearly see how otherwise smart people can be absolutely sure that their nonsensical “facts” and rationalizations are perfectly reasonable. What you probably don’t see is the equally nonsensical rationalizations of your own beliefs in god and souls and angels or other bizarre delusions.

So our neural networks are always being modified, regardless of how smart we are, whether we want them to be or not, whether we know it or not, and those training facts can be absolutely crazy. But our only measure of how crazy they are is our own neural network weighting, which tells us that the strongest connections must be the most true. Further, our perceptions and memories are modified to remain in alignment with that programming, and we can fabricate any rationalization needed to explain how our belief in even the most outlandish idea is really quite rational.

In humanity’s early days, we could live with these inherent imperfections. They actually helped us survive. But the problems that face us today are mostly in the realm of concepts, symbols, ideas, and highly complex abstractions. There is little clear and immediate feedback from the natural world to moderate bad ideas. Therefore, the quality of our answers to those problems and challenges is entirely dependent upon the quality of our basic neural network programming.

The scientific method is a proven way to help ensure that our conclusions align with reality, but science can only be applied to empirically falsifiable questions. Science can’t help much with most of the important issues that threaten modern society, like whether we should own guns or whether Donald Trump should be President. Our flawed neural networks can make some of us feel certain about such questions, but how can we be certain that our certainty is not based on bad training facts?

First, always try to surround yourself with “true and valid” training facts as much as possible. Religious beliefs, New Age ideas, fake news, and partisan rationalizations all fall under the category of “bad” training facts. Regardless of how well you know they are nonsense, if you are exposed to them you will get more and more comfortable with them. Eventually you will come around to believing them, no matter how smart you think you are; it is simply a physical process, like the result of eating too much fat.

Second, the fact that exposing ourselves to nonsense is so dangerous gives us hope as well. While it’s true that deep network connections, beliefs, are difficult to change, it is a fallacy to think they cannot change. Indoctrination works, brainwashing works, marketing works. Repetition and isolation from alternative viewpoints, as practiced by Fox News, works. So we CAN change minds, no matter how deeply impervious they may seem, for the better as easily as for the worse. Education helps. Good information helps.

There is a method called Feldenkrais which can be practiced to become aware of our patterns of muscle movement, and to then strip out “bad” or “unnecessary” neural network programming to improve athletic efficiency and performance. I maintain that our brains work in essentially the same way as the neural networks that coordinate our complex movements. As in Feldenkrais, we can slow down, examine each tiny mental step, become keenly aware of our thinking patterns, identify flaws, and correct them. If we try.

Third, rely upon the scientific method wherever you can. Science, where applicable, gives us a proven method to bypass our flawed network programming and compromised perceptions to arrive at the truth of a question.

Fourth, learn to quickly recognize fallacies of logic. This can help you to identify bad rationalizations in yourself as well as in others. Recognizing flawed rationalizations can help you to identify bad neural programming. In my book Belief in Science and the Science of Belief, I discuss logical fallacies in some detail, as well as going deeper into all of the ideas summarized here.

Finally, just be ever cognizant and self-aware of the fact that whatever seems obvious and intuitive to you may in fact be incorrect, inconsistent, or even simply crazy. Having humility and self-awareness of how our amazing yet deeply flawed neural networks function helps us to remain vigilant for our limitations and skeptical of our own compromised intuitions and rationalizations.

The Language of Reason

We routinely use a large number of very similar words when we talk about thinking: rational, rationale, rationality, irrational, rationalize, rationalization, reason, reasonable, and even superrational. We all kinda-sorta mostly generally understand the nuanced differences between these words, but since they are so very important and so often confused, it may be helpful to put them all on the table where we can clearly compare and contrast them.

Rational describes thinking that is based upon true facts and sound logic. This is the good kind of thinking. It requires that the thinker is unbiased, fact-based, sane, logical, and as objectively correct as one can be given the best information available. Note that the threshold here is quite high. It is not enough to merely follow “my own logic” to reach a conclusion, but that one follow independently valid logic and adhere to independently validated facts. One cannot merely feel they are being rational; they must in fact be objectively, measurably, demonstrably rational. A certifiably crazy person may be absolutely convinced they are perfectly rational in concluding that aliens are beaming signals into their brain, but that does not make them so.

Rational thinking implies that one meets all these requirements in a particular line of thought. A rational thinker is one who generally employs rational thinking. Often this term implies that one consciously values rational thinking as well. Although religious thinkers insist upon being shown respect as rational thinkers, it is difficult to see how their claims in any way reach the threshold of rational thinking. They seek to dilute and diminish the term so that it applies to them.

The term rationality is generally used in the context of questioning one’s rationality. That is, when we wish to make an assessment of a person’s capacity to be rational, either generally or at a given time.

Superrational is a term coined by cognitive scientist Douglas Hofstadter to describe a cooperative group behavior in game theory. However like many scientific concepts it has been incorrectly hijacked by New Age types. They invoke it to suggest that there is some intuitive mode of thought that transcends normal rational thinking – which they then invoke to justify any magical thinking they wish. They might say, for example, I believe in psychic powers because I’m a superrational thinker.

The word reason is trickier. In its basic form it is inherently neutral with regard to truth or rationality. It simply describes the cause, explanation, or justification one uses to explain their conclusion or action. But we also use it more generally to describe our rational capacity. It is in this sense that it is used for example in the “Reason Rally.” It sounds better there than would the “Rationality Rally.” The word reason also has the benefit of invoking feelings of reasonable or reasonableness.

However the connection between reason and reasonable is also a problem. Reason shares the fairly high bar with rational. But to be reasonable only requires that one be fair, moderate, and sensible. That’s why the word reason is a dangerous one to substitute for rational. Using reason as a synonym for rational can lower the bar for rationality in the minds of many people who might like to claim that their “reasonable” beliefs are rational conclusions because they are reasonable. Reasonable people can agree to disagree. Rational people cannot disagree for very long.

And that is a great segue into the biggest source of confusion amongst these terms. Just as the word reasonable dilutes the word reason, the word rationalization dilutes the word rational. Even worse, it totally reverses it!

Although it seems like they should be different forms of the same word, rationalize is almost the complete opposite of rational. To rationalize is to contrive some rationale, some apparent logic, to make the illogical appear perfectly rational. It is to “rationalize away” facts, logic, and reason. It is the process of deluding one’s self into thinking that some possibly preposterous idea is sound and credible regardless of the facts of the matter. We rationally reach scientific conclusions, but we rationalize our religious beliefs to convince ourselves that we are rational thinkers. These are not remotely equivalent.

This is insidious because once we have rationalized something, it then seems completely rational to us. Once rationalized, we become certain that our thinking is perfectly sound and reasonable. And we have evolved to be incredibly good at rationalizing. From the evolutionary perspective, it was evidently far more important that we feel certainty in the face of ambiguity or the unknown, and that we reach harmonious consensus in a delusion, than that we know the real facts of the world. There is also evidence that belief served a benefit of requiring less energy consumption as well (see here). But belief is no longer a beneficial or even harmless adaptation in our modern world.

Despite the fact that it no longer serves us well, we as a species remain incredibly good at rationalizing. Clinically delusional people are often completely certain that their delusions are perfectly rational. But this isn’t just an affliction of the insane. Rationalization is our normal human brain function that we are all susceptible to. Once rationalized, we normally continue to believe any ridiculous belief without reevaluation. We don’t need to be beyond the threshold of insanity to hold some insane rationalizations.

The lesson then is to be very skeptical when anyone insists that their conclusions are rational – even when it is ourselves. Few of us can distinguish between our own truly rational positions and completely rationalized ones. Fortunately science gives us methods to help us assess whether our conclusions are fact-based and soundly logical.

Likewise, we don’t need to train our young thinkers to rationalize problems, as they are innately quite adept at that already. Education in debate, law, marketing, sales, religion, and many other fields mostly enhances our innate ability to create any argument that convinces others – to rationalize. We need far more training in science and skeptical thinking so that we can better judge whether the rationalizations that impact our lives are truly rational.