Tag Archives: Neural Networks

I Say Give Them Time

As my readers know, I occasionally take exception to comments made by highly respected intellectuals. I hope that when I do so it is not to engage in a gratuitous attack, but to offer an important counterpoint. In that spirit, I must take exception to recent comments made by the highly respected thinker and author Malcolm Gladwell (see here).

The comments I refer to were offered by Mr. Gladwell when he appeared on The Beat with Ari Melber last week. The full segment can be heard on the Ari Melber podcast dated July 3rd, 2021.

Mr. Melber introduced the segment by pointing out that we live in a period in which Republicans are attempting to revise history and promote lies. He asked Mr. Gladwell for his thoughts about all of that and whether there were any solutions. It should be noted that this question was asked in the context of promoting Mr. Gladwell as an expert on human thinking and behavior.

Here is a slightly polished transcription of the response by Mr. Gladwell:

I think about the role of time. I wonder whether we’re in too much of a hurry to pass judgment on the people who continue to lie about what happened on Jan 6th. There are many forms that denial takes. One of them is that I honestly don’t believe that anything went wrong there. Another form is that I do believe but I’m not ready to admit it yet. A lot of what looks like a kind of malignant denial in the Republican Party right now is probably just people who aren’t ready to come clean and renounce a lot of what they were saying for the previous four years. I say give them time.

While this admonition for patience may sound superficially learned and wise, I find it naïve, wrong both theoretically and factually, and damagingly counterproductive. While I certainly don’t expect Mr. Gladwell to cite all his supporting evidence in a short interview segment like this, I don’t believe he has any. I suspect this is simply a well-meaning but unrealistic platitude, analogous to “the arc of the moral universe is long, but it bends toward justice.” That’s OK, except that he is putting forth an unsupported platitude as the conclusion of a purported expert in human thinking.

But such an expert on human thinking should understand that neural networks simply do not function in a way that would make “give them time” a reasonable strategy. As long as Republicans continue to hear the same old lies repeated over and over, they are not going to eventually recognize and reject them. Repeated exposure does not reveal lies but rather transforms our brains to accept them more deeply.

Our neural networks are influenced mainly by the quantity and repetition of the training “facts” they are exposed to. They have little capacity to judge the quality of those facts. Any training fact, in this case any idea the neural network is exposed to, is judged as valid by our neural network machinery in proportion to how often it is reinforced. And by the way, I know most of us want to believe that we collectively are not so susceptible to this because we want to believe that we personally are not. But we are.
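
To make this concrete, here is a toy sketch in Python (the single-neuron setup, the numbers, and the function names are all mine, purely for illustration; it is not a model of the brain). Notice that nothing in the update rule asks whether the claim is true; confidence grows with exposure count alone:

```python
# Toy single-neuron "believer": a sigmoid unit trained by repetition.
# Nothing in the update rule measures truth or quality; belief grows
# purely with the number of exposures. (Illustrative sketch only.)
import math

def belief_after(n_exposures, lr=0.1):
    """Return the unit's confidence in a claim after n repeated exposures."""
    w, b = 0.0, 0.0                            # start with no opinion
    x = 1.0                                    # the same claim, repeated
    for _ in range(n_exposures):
        p = 1 / (1 + math.exp(-(w * x + b)))   # current confidence
        error = 1.0 - p                        # every exposure asserts "true"
        w += lr * error * x                    # standard gradient step
        b += lr * error
    return 1 / (1 + math.exp(-(w * x + b)))

for n in (1, 10, 100, 1000):
    print(f"{n:>4} exposures -> belief {belief_after(n):.3f}")
# Belief climbs toward 1.0 with repetition alone: quantity, not quality.
```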

So, my objection to Gladwell is that he does not truly understand how our neural networks function, because if he did he would recognize that “I say give them time” is counterproductive advice at this time. Now, yes, it would be good advice if we were confident that Trump voters were being exposed regularly and primarily to truthful information. If that were the case I would agree: yes, give their neural networks more exposure time. However, I don’t believe there is any reasonable basis to think that giving them more time will serve any purpose except to further reinforce the lies they are continually exposed to from Trump, the Republican Party, and Fox News. We are simply not ready to just be patient and let the truth seep in and percolate.

The more nuanced answer, in my opinion, to the question posed by Ari Melber is that we must first discredit and stem the flow of misinformation from these sources and expose Republicans regularly to truly factual information. Once we do that, then, yes, I say just give them time for their neural networks to become comfortable with it. With enough exposure their neural networks will transform whether they want them to or not. But to accept the status quo right now and “give them time” as Mr. Gladwell suggests would be horribly premature and ill-advised.

Don’t Believe your Eyes

Today I want to talk about perceptions. Not our feelings, but what we actually see, feel, smell, hear, and taste. That is, the “objective” inputs that drive our feelings. Should we really “only believe our eyes”?

I think not.

In my book (see here) I talk about how we should be skeptical of our own memories and perceptions. Our memories are not recordings. They are docudrama recreations, drawing upon various stock footage to put together a satisfying re-imagining. We remember going to the beach as a child. But in “recalling” details of that experience, we draw upon fragments from various sources to fill it in. The “slant” of that recreation is strongly dependent upon our current attitudes and biases. Our re-imagined, and often very distorted, memory then reinforces what we believe to be a “vivid” recollection the next time we recall it. Over time our “clear” memory can drift farther and farther from reality, like a memory version of the game of “telephone.”

I contend that our brains work similarly with regard to our senses. We don’t see what we think we see. Our perceptions are filtered through our complex neural networks. It is a matched, filtered, processed, censored, and often highly biased version that we actually see, hear, or feel.

We know that our subconscious both filters out much of the information it receives, and adds in additional information as needed to create a sensible perception. I always favor a neural network model of brain function. As it relates to perception, our neural network receives a set of sensory data. It matches that data against known patterns and picks the closest match. It then presents our consciousness with a picture – not of the original data – but of that best-fit match. It leaves out “extraneous” information and may add in missing information to complete that expected picture. That is, we do not actually see, hear, smell, or taste a thing directly. We see, hear, smell, or taste a satisfying recreation that our network presents to us.
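
As a crude sketch of this best-fit matching, consider a simple nearest-prototype lookup (the stored patterns and the perceive() function are invented for illustration; real networks are vastly more complex):

```python
# Sketch: perception as best-fit pattern matching. perceive() returns the
# closest stored prototype -- not the raw input -- dropping "extraneous"
# bits and filling in "missing" ones. (Prototypes invented for illustration.)
import numpy as np

prototypes = {
    "face": np.array([1, 1, 0, 1, 1, 0, 1]),
    "tree": np.array([0, 1, 1, 0, 1, 1, 0]),
}

def perceive(sensory_input):
    """Return the best-matching stored pattern, not the input itself."""
    label = min(prototypes,
                key=lambda k: np.sum((prototypes[k] - sensory_input) ** 2))
    return label, prototypes[label]

# Degraded input: one "face" feature missing (index 3), one stray added (index 2).
raw = np.array([1, 1, 1, 0, 1, 0, 1])
label, percept = perceive(raw)
print(label, percept)   # -> face [1 1 0 1 1 0 1]
# What "we" get is the completed prototype: the missing feature is
# restored and the stray one is discarded.
```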

This should not be controversial, because we experience it all the time. Based on sparse information, we “see” fine detail in a low-resolution computer icon that objectively is not there. We fail to see the gorilla inserted into the background because it is out of place. We are certain we see a witch or a vase in a silhouette, depending on our bias or our expectations at that moment.

But though this should be evident, we still do not take this imprecision seriously enough in evaluating the objectivity of our own memories or perceptions. We still mostly put near-absolute faith in our memories, and are generally even more certain of our perceptions. We believe that what we perceive is absolutely objective. Clearly, it is not.

In essence, what we believe we objectively recall, see, hear, or touch is not the thing itself, but a massaged recreation of our neural network match. The version we perceive can often be wrong in very important ways. Our perceptions are only as reliable as our neural networks. And some neural networks can be more compromised than others. We can recall or even perceive radically crazy things if our neural network has been trained to do so. I campaign against belief-based thinking of all sorts because it seriously compromises these critical neural networks in crazy ways.

Even less recognized is the extent to which this phenomenon impacts scientific research. Scientists often give far too much credence to reports of perceptions, frequently in extremely subtle ways.

As a simple illustration, consider how we often mock wine connoisseurs who claim to taste differences in wines but cannot pick these out in blinded studies. However, consider the confounding impact of their (and our) neural networks in even this simple case. When experiencing a wine, all the associated data is fed into the drinker’s neural network. It makes a match and then presents that match to the consciousness. Therefore, if the network does not “see” one critical factor, say color, it matches to white, not red, and presents an entirely different taste pattern to the drinker, ignoring some “extraneous” flavors and adding some other “missing” ones.
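
Here is that wine scenario rendered as a toy matching model (all names and numbers are invented; this is a sketch of the idea, not of any actual study). Hiding the single color feature flips the best match, and with it the whole taste pattern the network would present:

```python
# Toy version of the wine example (all names and numbers invented).
# Prototypes are learned "taste patterns"; features are
# [color, tannin, fruit, acidity] on a 0-1 scale.
import numpy as np

prototypes = {
    "red":   np.array([1.0, 0.8, 0.5, 0.4]),
    "white": np.array([0.0, 0.1, 0.6, 0.8]),
}

def best_match(sample, visible):
    """Match only on visible features (1 = seen, 0 = hidden from the network)."""
    return min(prototypes,
               key=lambda k: np.sum(visible * (prototypes[k] - sample) ** 2))

# A white wine dyed red: red appearance, white-wine flavor chemistry.
dyed = np.array([1.0, 0.15, 0.6, 0.75])

print(best_match(dyed, np.array([1, 1, 1, 1])))  # color seen   -> 'red'
print(best_match(dyed, np.array([0, 1, 1, 1])))  # color hidden -> 'white'
# A single feature flips the match, and with it the entire "taste
# pattern" presented to the drinker's consciousness.
```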

These same kinds of neural network matching errors can, and I have to assume often do, confound even more rigorous scientific studies. And they are further confounded by the fact that these mismatches are typically temporary. With every new set of data our neural networks adjust themselves (the weightings change) to yield different results. The effect of a drug or placebo, for example, may change over time. If scientists see this, they typically look exclusively for other physiological causes. But it may be a neural network correction.
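
A sketch of how such drift might look, using a toy single-neuron model (invented numbers; illustration only): the same stimulus produces a different response once the weights have been re-trained on newer data:

```python
# Toy sketch of drift from ongoing weight updates. The same stimulus
# gets a different response after the unit is re-trained on new data --
# no physiological change required, only changed weightings.
import math

w, b, lr = 0.0, 0.0, 0.2

def respond(x):
    return 1 / (1 + math.exp(-(w * x + b)))

def update(x, target):
    global w, b
    err = target - respond(x)
    w += lr * err * x
    b += lr * err

stimulus = 1.0
for _ in range(30):            # early data pair the stimulus with an effect
    update(stimulus, 1.0)
print(f"early response: {respond(stimulus):.2f}")   # high

for _ in range(30):            # later data pair it with no effect
    update(stimulus, 0.0)
print(f"later response: {respond(stimulus):.2f}")   # low
# Same input, different output: the weights changed, not the stimulus.
```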

That is why I always admonish my readers to stick with inputs that will strengthen your neural networks toward sound objectivity rather than allow them to be weighted toward the rationalization, and perception, of beliefs and nonsense. But since none of us can ever have perfect networks, or even know how accurately ours perform in any given match, we all need a healthy amount of skepticism, even with regard to our own memories and perceptions.

I further urge scientists to at least consider the impact of neural network pre-processing on their studies, and to develop methodologies to explicitly detect and correct for such biases.


Humans are Inexplicable

Whether it be in science or business or politics or popular culture, we expend an inordinate amount of time and effort trying to figure out why people do whatever people are doing. We seem to have more analysts than actors, all desperately trying to explain what motivates people, either by asking them directly or by making inferences about them. For the most part, this is not merely a colossal waste of time and effort and money in itself, but it stimulates even greater wastes of time and effort and money chasing wildly incomplete or erroneous conclusions about why we do what we do.

Asking people why they did what they did, or why they are doing what they are doing, or why they are going to do what they are going to do, generally yields useless and misleading information. It is not clear that people actually have distinct reasons they can recognize, let alone articulate. It is quite likely, in fact, that most of the decisions we make are made unconsciously, based upon a myriad of complex neural network associations. These associations need not be rational, internally consistent with each other, or related to the actual outcome in any way. But in our post-rationalizations and post-analyses we impose some logic on our decisions to make them feel sensible. Therefore, the reasons we come up with are almost completely made up at every level, to sound rational or at least sane to ourselves and to those we are communicating with.

The truth is, we can’t usually hope to understand our own incredibly complex neural networks, let alone the neural networks of others. Yes, sometimes we can identify a strong neural network association driving a behavior, but most determinative influences are far too diffuse, spread across a huge number of seemingly unrelated associations.

The situation gets infinitely worse when we try to analyze and explain group behaviors. Most of our shared group behaviors emerge from the weak interactions among all of our individual neural networks. The complexity of these interactions is virtually unfathomable. The challenge of understanding why a group does what it does collectively, let alone figuring out how to influence its behavior, is enormous.

If you ask a bird why it is flying in a complex swirling pattern along with a million other birds, it will probably give you some reason, like “we are looking for food,” but in fact it is probably largely unaware that it is even flying in any particular pattern at all.
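
The classic “boids” flocking simulation makes this point concretely. Each simulated bird follows only local rules (drift toward neighbors, match their heading, avoid crowding); no rule mentions any flock-level pattern, yet a collective motion emerges. Here is a minimal sketch (the tuning constants are mine, chosen only for illustration):

```python
# Minimal boids-style flocking sketch (standard idea; toy tuning).
# Each bird follows only local rules; none of them encodes any
# flock-level pattern, yet the headings align into collective motion.
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-1.0, 1.0, (100, 2))     # bird positions
vel = rng.uniform(-0.1, 0.1, (100, 2))     # bird velocities

def step(pos, vel, radius=0.4):
    new_vel = vel.copy()
    for i in range(len(pos)):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d < radius) & (d > 0)
        if near.any():
            cohesion   = pos[near].mean(axis=0) - pos[i]    # drift toward neighbors
            alignment  = vel[near].mean(axis=0) - vel[i]    # match their heading
            separation = (pos[i] - pos[near]).sum(axis=0)   # avoid crowding
            new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.002 * separation
    return pos + new_vel, new_vel

def mean_alignment(vel):
    """Near 1.0 = all birds flying the same way; near 0 = headings random."""
    return np.linalg.norm(vel.mean(axis=0)) / np.linalg.norm(vel, axis=1).mean()

print(f"before: alignment {mean_alignment(vel):.2f}")   # low
for _ in range(200):
    pos, vel = step(pos, vel)
print(f"after:  alignment {mean_alignment(vel):.2f}")   # much higher
# The pattern lives at the group level; ask any one bird and it "knows"
# only its neighbors.
```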

So why point all this out? Do we give up? Does this imply that a rational civilization is impossible, that all introspection or external analysis is folly?

Quite the contrary. We must continue to struggle to understand ourselves, and truly appreciating our complexity is part of that effort. To do so we must abandon the constraints of logic that we impose upon our individual and group rationalizations and appreciate that we are driven by neural networks that are susceptible to all manner of illogical programming. We must take any self-reporting with the same skepticism we would apply to the statement “I am perfectly sane.” We should be wary of imposing our own flawed rationality upon the flawed rationality of others. Analysts should not assume undue rationality in explaining behaviors. And finally, we must appreciate that group behaviors can have little or no apparent relationship to any of the wants, needs, or expressed opinions of the individuals within that group.

In advanced AI neural networks, we humans cannot hope to understand why the computer has made a given decision. Its decision is based upon far too many subtle factors for humans to recognize or articulate. But if all of the facts fed into the computer are accurate, we can probably trust the judgment of the computer.

Similarly with humans, it may be that our naive approach of asking for, or inferring, the reasons behind feelings and behaviors, and then trying to respond to each of those rationales, is incredibly ineffective. It may be that the only thing that would truly improve individual, and thus emergent, thinking is more sanely programmed neural networks, ones that are not fundamentally flawed so as to comfortably rationalize religious and other specious thinking at the most basic level (see here). We must focus on basic fact-based thinking in our educational system and in our culture, on the assumption that more logically and factually trained human neural networks will yield more rational and effective individual and emergent behaviors.