
Our Amazing Yet Deeply Flawed Neural Networks


Back in the 1980s I did early work applying neural network technology to paint formulation chemistry, and that experience gave me fascinating insights into how our brains operate. A computer neural network is a mathematically complex program that does a simple thing. It takes a set of training “facts” and an associated set of “results,” and it learns how they connect by computing connections of varying weights between them. Once the network has learned how to connect these training facts to the outputs, it can take any new set of inputs and predict the outcome, or it can predict the best set of inputs to produce a desired outcome.
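To make that concrete, here is a minimal sketch in Python of the train-then-predict cycle described above. The model is a single linear neuron, and the training “facts” are made-up numeric pairs rather than my actual paint-formulation data; the learning rate and epoch count are likewise arbitrary choices for the illustration.

```python
# A minimal sketch of the train-then-predict cycle described above.
# The "facts" are hypothetical (x1, x2) -> y pairs, not real formulation
# data; the model is a single linear neuron fit by gradient descent.

def train(facts, epochs=2000, lr=0.01):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x1, x2), y in facts:
            err = (w1 * x1 + w2 * x2 + b) - y
            # Nudge each weight to shrink the error on this fact.
            w1 -= lr * err * x1
            w2 -= lr * err * x2
            b -= lr * err
    return w1, w2, b

# Training "facts" paired with their observed "results" (here, y = x1 + x2).
facts = [((1, 1), 2), ((2, 3), 5), ((4, 1), 5), ((3, 3), 6)]
w1, w2, b = train(facts)

# Once trained, the same weights predict outcomes for inputs never seen before.
x1, x2 = 5, 2
print(f"prediction for (5, 2): {w1 * x1 + w2 * x2 + b:.2f}")  # ~7.00
```

The weights are the whole of what the network “knows”: whatever regularity connected the training facts to their results is now encoded in them.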

Our brains do essentially the same thing. We are exposed to “facts” and their associated outcomes every moment of every day. As these new “training sets” arrive, our biological neural network connections are physically weighted. Some become stronger, others weaker. The more often we observe a connection, the stronger that neural connection becomes. At some point it becomes so strong that it becomes undeniably obvious “common sense” to us. Unreinforced connections, like memories, become so weak they are eventually forgotten.
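Here is a toy illustration of that reinforce-and-decay dynamic. All of the constants and associations below are invented for the sketch; real synapses are vastly more complicated, but the arithmetic of repetition is the point.

```python
# Toy sketch of reinforce-and-decay: each observation strengthens its
# connection, every connection decays a little each day, and repetition
# alone pushes a connection toward "common sense."
# All constants are invented for illustration; this is not a brain model.

DECAY = 0.95       # unreinforced connections weaken a little each day
BOOST = 1.0        # each observation strengthens its connection
OBVIOUS = 10.0     # past this strength, a connection feels like common sense

strengths = {}     # association -> connection strength

def observe_day(observations):
    # Everything decays, whether we want it to or not...
    for key in strengths:
        strengths[key] *= DECAY
    # ...and everything observed is reinforced, true or not.
    for obs in observations:
        strengths[obs] = strengths.get(obs, 0.0) + BOOST

for day in range(60):
    observe_day(["spilled salt -> bad luck"])              # repeated daily
    if day == 0:
        observe_day(["odd shadow -> nothing happened"])    # seen once, never again

for assoc, s in sorted(strengths.items()):
    label = "undeniably obvious" if s > OBVIOUS else "nearly forgotten"
    print(f"{assoc}: {s:.2f} ({label})")
```

After sixty days the daily association sits near its maximum strength while the one-off has decayed to almost nothing, which is the whole mechanism of “common sense” and forgetting in miniature.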

Note that this happens whether we know it or not and whether we want it to or not. We cannot NOT learn facts. We learn language as children just by overhearing it, whether we intend to learn it or not. Our neural network training does not require conscious effort and cannot simply be “ignored.” If we hear a “fact” often enough, it keeps getting weighted more heavily until it eventually becomes “undeniably obvious” to us.

Pretty amazing, right? It is. But here is one crucial limitation. Neither computer nor biological neural networks have any intrinsic way of knowing whether a training fact is valid or complete nonsense. They judge truthiness based only upon their weighting. If we tell a neural network that two plus two equals five, it will accept that as a fact and faithfully report five, with complete certainty, every time it is asked. Likewise, if we connect spilling salt with something bad happening to us later, that becomes a fact to our neural network, one of which we feel absolutely certain.
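We can watch this happen with the same sort of toy linear neuron as in the earlier sketch. Train it on nothing but the false “fact” that two plus two equals five, and it reports five without a flicker of doubt; the learning rate and epoch count are again arbitrary.

```python
# The network has no way to judge its training "fact"; it simply
# minimizes its error against whatever it is told, true or false.

def train_single_fact(x1, x2, y, epochs=500, lr=0.01):
    w1, w2, b = 0.0, 0.0, 0.0
    for _ in range(epochs):
        err = (w1 * x1 + w2 * x2 + b) - y
        w1 -= lr * err * x1
        w2 -= lr * err * x2
        b -= lr * err
    return w1, w2, b

w1, w2, b = train_single_fact(2, 2, 5)       # the only "fact" it ever sees
print(f"2 + 2 = {w1 * 2 + w2 * 2 + b:.2f}")  # prints 5.00, with certainty
```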

This flaw wasn’t too much of a problem during most of our evolution, as we were mostly exposed to the real, true facts of nature and the environment. It only becomes an issue when we are exposed to abstract symbolic “facts,” which can be utter fantasy. Today, however, the things most important to our survival are not “natural” facts that can be validated by science. They are conceptual ideas which can be repeated and reinforced in our neural networks without any physical validation. Take the idea of a god as one perfect example. We hear that god exists so often that our “proof of god” pathways strengthen to the point that we see proof everywhere and god’s existence becomes intuitively undeniable to us.

This situation is exacerbated by another related mental ability of ours… rationalization. Since a neural network can happily accommodate any “nonsense” facts, regardless of how contradictory they may be, our brains have to be very good at rationalizing away any logical discrepancies between them. If two strong network connections logically contradict each other, our brains excel at fabricating some reason, some rationale, to explain how that can be. When exposed to contradictory input, we feel disoriented until we rationalize it somehow. Without that ability, we would be paralyzed and unable to function.

This ability of ours to rationalize anything is so powerful that even brain-lesion patients who believe they only have half of a body will quickly rationalize away any reason you give them, any evidence you show them, that proves they are wrong. Rationalization allows us to continue to function even when our neural networks have been trained with dramatically nonsensical facts. Further, once a neural network fact becomes strong enough, it can no longer be easily modified even by contradictory perceptions, because it filters and distorts subsequent perceptions to accommodate itself. It cannot even be corrected by our memories, since our memories are recreated in accordance with those connections every time we recall them.

As one example to put all this together: when I worked in the Peace Corps in South Africa, a group of high school principals warned me to stay indoors after dark because of the witches that roam about. I asked some questions, like, have you ever personally seen a witch? No, was the answer, but many others whom we trust have told us about them. What do they look like, I asked. Well, in the darkness they look almost like goats with horns. In fact, if you catch one it will transform into a goat to avoid capture.

Here you clearly see how otherwise smart people can be absolutely sure that their nonsensical “facts” and rationalizations are perfectly reasonable. What you probably don’t see are the equally nonsensical rationalizations behind your own beliefs in god and souls and angels or other bizarre delusions.

So our neural networks are always being modified, regardless of how smart we are, whether we want them to be or not, whether we know it or not, and those training facts can be absolutely crazy. But our only measure of how crazy they are is our own neural network weighting, which tells us that whichever connections are strongest must be the most true. Further, our perceptions and memories are modified to remain in alignment with that programming, and we can fabricate any rationalization needed to explain how our belief in even the most outlandish idea is really quite rational.

In humanity’s early days we could live with these inherent imperfections. They actually helped us survive. But the problems that face us today are mostly in the realm of concepts, symbols, ideas, and highly complex abstractions. There is little clear and immediate feedback from the natural world to moderate bad ideas. Therefore, the quality of our answers to those problems and challenges is entirely dependent upon the quality of our basic neural network programming.

The scientific method is a proven way to help ensure that our conclusions align with reality, but science can only be applied to empirically falsifiable questions. Science can’t help much with most of the important issues that threaten modern society, like whether we should own guns or whether Donald Trump should be President. Our flawed neural networks can make some of us feel certain about such questions, but how can we be certain that our certainty is not based on bad training facts?

First, always try to surround yourself with “true and valid” training facts as much as possible. Religious beliefs, New Age ideas, fake news, and partisan rationalizations all fall under the category of “bad” training facts. Regardless of how well you know they are nonsense, if you are exposed to them you will get more and more comfortable with them. Eventually you will come around to believing them no matter how smart you think you are; it’s simply a physical process, like the result of eating too much fat.

Second, the fact that exposing ourselves to nonsense is so dangerous gives us hope as well. While it’s true that deep network connections, our beliefs, are difficult to change, it is a fallacy to think they cannot change. Indoctrination works, brainwashing works, marketing works. Repetition and isolation from alternative viewpoints, as practiced by Fox News, works. So we CAN change minds, no matter how impervious they may seem, for the better as easily as for the worse. Education helps. Good information helps.

There is a method called Feldenkrais which can be practiced to become aware of our patterns of muscle movement, and to then strip out “bad” or “unnecessary” neural network programming to improve athletic efficiency and performance. I maintain that our brains work in essentially the same way as the neural networks that coordinate our complex movements. As in Feldenkrais, we can slow down, examine each tiny mental step, become keenly aware of our thinking patterns, identify flaws, and correct them. If we try.

Third, rely upon the scientific method wherever you can. Science, where applicable, gives us a proven method to bypass our flawed network programming and compromised perceptions to arrive at the truth of a question.

Fourth, learn to quickly recognize fallacies of logic. This can help you to identify bad rationalizations in yourself as well as in others. Recognizing flawed rationalizations can help you to identify bad neural programming. In my book Belief in Science and the Science of Belief, I discuss logical fallacies in some detail, as well as going deeper into all of the ideas summarized here.

Finally, just be ever cognizant and self-aware of the fact that whatever seems obvious and intuitive to you may in fact be incorrect, inconsistent, or even simply crazy. Having humility and self-awareness of how our amazing yet deeply flawed neural networks function helps us to remain vigilant for our limitations and skeptical of our own compromised intuitions and rationalizations.

Anecdotal Evidence Shows

The titular phrase “anecdotal evidence shows that…” is very familiar to us – with good reason. Not only is it very commonly used, but it is subject to a great deal of misuse. It generally makes an assertion that something is probably true because there is some observed evidence to support it. While that evidence does not rise to the level of proof, it does at least create some factual basis for wishful thinking.

Anecdotal evidence is important. It is often the only evidence we can obtain. In many areas, scientists cannot practically conduct a formal study, or it would be ethically wrong to do so. It may simply be an area of study that no one is willing to fund. Therefore, even scientists often have no alternative but to base conclusions upon the best anecdotal data they have.

Anecdotal evidence is essential to making everyday decisions as well. We don’t normally conduct formal studies to see if our friend Julie is a thief. But if earrings disappear each time she visits, we have enough anecdotal evidence to at least watch her closely. Likewise, even court proceedings must often rely upon anecdotal evidence, which is slightly different from circumstantial evidence.

Knowing when anecdotal evidence is telling, when it is simply a rationalization for wishful thinking, and when it is the basis for an outright con job is not always easy. The fact that anecdotal evidence is sometimes all we have to work with makes it all the more dangerous and subject to misuse and abuse.

All too often, anecdotal evidence is simply poor evidence. I once presented anecdotal evidence of ghosts by relating a harrowing close encounter that I had. The thing was, I totally made it up (see here). People don’t always intentionally lie when they share an anecdote, but those people who in good faith repeated my story to others were nevertheless sharing bad anecdotal information.

Testimonials are a form of anecdotal claim. Back in the 1800s, a snake oil salesman would trot out an accomplice to support his claims of a miracle cure. Today we see everyone from television preachers to herbal medicine companies use the same technique of providing anecdotal evidence through testimonials. Most of these claims are no more legitimate than my ghost story.

We also see anecdote by testimonial performed almost daily in political theatre. The President points to the crowd to identify a person who has benefitted greatly from his policies. In Congressional hearings, supposedly wronged parties are trotted out to give testimony about how badly they were harmed by the actions of the targeted party. Both of these individuals are put forth as typical examples, yet they may be exceedingly unusual.

So here’s the situation. We need anecdotal evidence as it is often all we have to work with to make important decisions that must be made. However, basing decisions on anecdotal information is also fraught with risk and uncertainty. How do we make the wisest use of the anecdotal information that we must rely upon?

First, consider the source and the motive of the anecdote. If the motive is to persuade you to do something, to support something, to accept something, or to part with your cash, be particularly suspicious of anecdotal claims or testimonials. One great example is the Deal Dash commercials. You hear a woman claim that she “won” a large-screen television for only $49. Sounds great, until you realize that the anecdote doesn’t tell you how many bids she purchased to get it for $49, how much she wasted on other failed auctions, or how much was spent in total by the hundreds of people bidding on that item. Anecdotal evidence is not always an outright lie, but it can still tell huge lies by omission and by cherry-picking.
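Some hypothetical back-of-the-envelope arithmetic shows just how much a testimonial like that can omit. Every figure below is invented, not taken from Deal Dash; the only structural assumption is the usual penny-auction mechanic, in which bids are purchased up front and each bid raises the price by one cent.

```python
# Hypothetical penny-auction arithmetic; all figures are invented.
BID_PRICE = 0.60       # assumed cost of each purchased bid
PRICE_STEP = 0.01      # assumed: each bid raises the price by one cent
final_price = 49.00    # the advertised "win"

total_bids = round(final_price / PRICE_STEP)  # $49.00 implies 4,900 bids placed
winner_bids = 300                             # assumed bids placed by the winner
loser_bids = total_bids - winner_bids         # 4,600 bids placed by everyone else

winner_cost = final_price + winner_bids * BID_PRICE
site_revenue = final_price + total_bids * BID_PRICE

print(f"Advertised price:      ${final_price:,.2f}")            # $49.00
print(f"Winner actually paid:  ${winner_cost:,.2f}")            # $229.00
print(f"Losing bidders paid:   ${loser_bids * BID_PRICE:,.2f}") # $2,760.00
print(f"Site collected in all: ${site_revenue:,.2f}")           # $2,989.00
```

Under these made-up numbers, the “$49 television” cost its winner $229 and earned the site nearly $3,000, none of which appears in the testimonial.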

Second, consider the plausibility of the anecdote. If the anecdote claims to prove that ghosts exist, someone made it up. Likewise with god or miracles or angels or Big Foot. When someone reports something incredible, no matter how credible that person may be, demand credible evidence. As Carl Sagan pointed out, “extraordinary claims require extraordinary evidence.”

Third, consider the scope of the anecdotal claim. Does it make sweeping generalizations, or is it very limited in scope? If the claim is that all Mexicans are rapists because one Mexican was arrested for rape, we end up with a Fallacy of Extrapolation, which is often the result of the misuse of anecdotal information.

Finally, consider the cost/benefit of the response to the anecdotal claim. If the anecdote is that eating yoghurt cured Sam’s cancer, then maybe it’s reasonable to eat more yoghurt. But if the anecdote is that Ed cured his cancer by ceasing all treatments, then perhaps that should be considered a far more risky anecdote to act upon.

Anecdotal information is essential. Many diseases such as AIDS have been uncovered by paying attention to one “anecdotal” case report. In fact, many of the important breakthroughs in science have only been possible because a keen-eyed scientist followed up on what everyone else dismissed as merely anecdotal or anomalous data.

Anecdotes are best used simply to suggest that something may be possible, without any claim as to how likely it is. For example, it may be that a second blow to the head has appeared to cure amnesia. This cannot be studied clinically, and it is not likely to occur often enough to recommend it as a treatment. Still, it is sometimes extremely important to know that something has been thought to happen, no matter how uncertain and infrequent. If a severe blow to the head MAY have cured amnesia at least once, that can help to inform further research into it.

Don’t start feeling overwhelmed. We don’t actually need to stop and consciously analyze every anecdote in detail. Our subconscious pattern-recognition machines are quite capable of performing these fuzzy assessments for us. We only need to be sure to consciously internalize these general program parameters into our pattern recognition machines so that they produce sound conclusions when presented with claims that “anecdotal evidence shows.”