
Don’t Believe your Eyes

Today I wanted to talk about perceptions. Not our feelings, but what we actually see, feel, smell, hear, and taste. That is, the “objective” inputs that drive our feelings. Should we really “only believe our eyes”?

I think not.

In my book (see here) I talk about how we should be skeptical of our own memories and perceptions. Our memories are not recordings. They are docudrama recreations that draw upon various stock footage to assemble a satisfying re-imagining. We remember going to the beach as a child. But in “recalling” details of that experience, we draw upon fragments from various sources to fill it in. The “slant” of that recreation depends strongly upon our current attitudes and biases. Our re-imagined, and often very distorted, memory then reinforces what we believe to be a “vivid” recollection the next time we recall it. Over time our “clear” memory can drift farther and farther from reality, like a memory version of the telephone game.

I contend that our brains work similarly with regard to our senses. We don’t see what we think we see. Our perceptions are filtered through our complex neural networks. What we actually see, hear, or feel is a matched, filtered, processed, censored, and often highly biased version of the raw input.

We know that our subconscious both filters out much of the information it receives, and adds in additional information as needed to create a sensible perception. I always favor a neural network model of brain function. As it relates to perception, our neural network receives a set of sensory data. It matches that data against known patterns and picks the closest match. It then presents our consciousness with a picture – not of the original data – but of that best-fit match. It leaves out “extraneous” information and may add in missing information to complete that expected picture. That is, we do not actually see, hear, smell, or taste a thing directly. We see, hear, smell, or taste a satisfying recreation that our network presents to us.
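To make that match-and-fill-in idea concrete, here is a minimal sketch in Python. It is a toy illustration, not a model of the brain: the `PROTOTYPES`, the feature numbers, and the `recall` function are all hypothetical stand-ins for learned perceptual expectations.

```python
import numpy as np

# Purely illustrative "prototype matching": stored patterns stand in
# for learned perceptual expectations; np.nan marks a feature that the
# senses never actually delivered.
PROTOTYPES = {
    "red wine":   np.array([0.9, 0.2, 0.8]),   # [color, sweetness, tannin]
    "white wine": np.array([0.1, 0.6, 0.2]),
}

def recall(observed):
    """Return the label and prototype of the closest stored pattern,
    comparing only the features that were actually observed.
    The percept handed back is the prototype itself -- the best-fit
    match -- not the raw sensory data."""
    known = ~np.isnan(observed)
    label, proto = min(
        PROTOTYPES.items(),
        key=lambda kv: float(np.sum((kv[1][known] - observed[known]) ** 2)),
    )
    return label, proto
```

Note the key move: the function returns the stored prototype, not the observation. Whatever the senses left out is silently supplied by the match, and whatever they supplied that conflicts with the match is silently discarded.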

This should not be controversial, because we experience it all the time. Based on sparse information, we “see” fine detail in a low-resolution computer icon that objectively is not there. We fail to see the gorilla inserted into the background because it is out of place. We are certain we see a witch or a vase in a silhouette, depending on our bias or our expectations at that moment.

Though this should be evident, we still do not take this imprecision seriously enough when evaluating the objectivity of our own memories or perceptions. We still place near-absolute faith in our memories, and are generally even more certain of our perceptions. We believe that what we perceive is absolutely objective. Clearly, it is not.

In essence, what we believe we objectively recall, see, hear, or touch is not the thing itself, but a massaged recreation of our neural network’s match. The version we perceive can often be wrong in very important ways. Our perceptions are only as reliable as our neural networks. And some neural networks can be more compromised than others. We can recall or even perceive radically crazy things if our neural network has been trained to do so. I campaign against belief-based thinking of all sorts because it seriously compromises these critical neural networks in crazy ways.

Even less recognized is the extent to which this phenomenon impacts scientific research. Scientists often give far too much credence to reported perceptions, frequently in extremely subtle ways.

As a simple illustration, consider how we often mock wine connoisseurs who claim to taste differences in wines but cannot pick these out in blinded studies. But consider the confounding impact of their (and our) neural networks in even this simple case. When we experience a wine, all the associated data is fed into the drinker’s neural network. It makes a match and then presents that match to the consciousness. So if the network does not “see” one critical factor, say color, it may match to white rather than red, and present an entirely different taste pattern to the drinker, ignoring some “extraneous” flavors and adding in other “missing” ones.
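Using the toy matcher sketched above, withholding a single feature can flip the match entirely (the numbers are, of course, invented):

```python
# A sip that looks red but whose flavors sit closer to the white prototype:
sip = np.array([0.8, 0.55, 0.3])           # [color, sweetness, tannin]
print(recall(sip))                          # -> 'red wine': color dominates

blinded = sip.copy()
blinded[0] = np.nan                         # black glass: color withheld
print(recall(blinded))                      # -> 'white wine': the match, and
                                            #    the reported flavors, flip
```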

These same kinds of neural network matching errors can, and I have to assume often do, confound even more rigorous scientific studies. They are further confounded by the fact that these mismatches are typically temporary. With every new set of data our neural networks adjust themselves; the weightings change to yield different results. The effect of a drug or placebo, for example, may appear to change over time. When scientists see this, they typically look exclusively for other physiological causes. But it may be a neural network correction.
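A deliberately tiny, hypothetical sketch shows how little re-weighting it takes for an identical stimulus to produce a different response. The weights, stimulus, and logistic update rule here are illustrative only:

```python
import numpy as np

w = np.array([1.0, -0.5, 0.3])           # the network's current weightings

def judge(x):
    """Strength of the perceived effect for stimulus x (one 'neuron')."""
    return 1 / (1 + np.exp(-w @ x))

x = np.array([1.0, 0.5, -0.5])           # an unchanged stimulus
print(judge(x))                           # ~0.65 before any adjustment

# New data contradicts the old weighting; each logistic-regression
# step nudges the weights toward a response of 0 for this stimulus.
for _ in range(200):
    w = w - 0.1 * (judge(x) - 0.0) * x

print(judge(x))                           # ~0.03: same input, new result
```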

That is why I always admonish my readers to stick with inputs that will strengthen their neural networks toward sound objectivity rather than allow them to be weighted toward the rationalization, and perception, of beliefs and nonsense. But since none of us can ever have perfect networks, or even know how accurately ours performs in any given match, we all need a healthy amount of skepticism, even with regard to our own memories and perceptions.

I further urge scientists to at least consider the impact of neural network pre-processing on their studies, and to develop methodologies that explicitly detect and correct for such biases.


Humans are Inexplicable

Whether it be in science, business, politics, or popular culture, we expend an inordinate amount of time and effort trying to figure out why people do what they do. We seem to have more analysts than actors, all desperately trying to explain what motivates people, either by asking them directly or by making inferences about them. For the most part, this is not merely a colossal waste of time, effort, and money in itself; it stimulates even greater waste chasing wildly incomplete or erroneous conclusions about why we do what we do.

Asking people why they did what they did, why they are doing what they are doing, or why they are going to do what they are going to do generally yields useless and misleading information. It is not clear that people actually have distinct reasons they can recognize, let alone articulate. In fact, it is quite likely that most of our decisions are made unconsciously, based upon a myriad of complex neural network associations. These associations need not be rational; they need not be consistent with each other or related to the actual outcome in any way. But in our post-rationalizations and post-analyses we impose some logic on our decisions to make them feel sensible. The reasons we come up with are therefore almost completely made up, at every level, to sound rational, or at least sane, to ourselves and to those we are communicating with.

The truth is, we cannot usually hope to understand our own incredibly complex neural networks, let alone those of others. Yes, sometimes we can identify a strong association driving a behavior, but most determinative influences are diffused across a huge number of seemingly unrelated associations.

The situation gets infinitely worse when we try to analyze and explain group behaviors. Most of our shared group behaviors emerge from the weak interactions among all of our individual neural networks. The complexity of these interactions is virtually unfathomable. The challenge of understanding why a group does what it does collectively, let alone figuring out how to influence its behavior, is staggering.

If you ask a bird why it is flying in a complex swirling pattern along with a million other birds, it will probably give you some reason, like “we are looking for food,” but in fact it is probably largely unaware that it is even flying in any particular pattern at all.
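This is the familiar lesson of flocking simulations. In a bare-bones “boids” sketch (after Craig Reynolds’ 1987 model; the constants and field size here are arbitrary), each bird follows only three local rules, yet coordinated swirling emerges that no individual rule, and no individual bird, encodes:

```python
import numpy as np

N = 200
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (N, 2))      # bird positions on a 100x100 field
vel = rng.normal(0, 1, (N, 2))         # bird velocities

def step(pos, vel, radius=10.0):
    """One tick: every bird reacts only to neighbors within `radius`."""
    new_vel = vel.copy()
    for i in range(N):
        d = np.linalg.norm(pos - pos[i], axis=1)
        near = (d > 0) & (d < radius)
        if not near.any():
            continue
        cohesion   = pos[near].mean(axis=0) - pos[i]    # drift toward the local crowd
        alignment  = vel[near].mean(axis=0) - vel[i]    # match neighbors' heading
        separation = (pos[i] - pos[near]).mean(axis=0)  # don't collide
        new_vel[i] += 0.01 * cohesion + 0.05 * alignment + 0.05 * separation
    speed = np.linalg.norm(new_vel, axis=1, keepdims=True)
    new_vel = np.clip(speed, 0.5, 2.0) * new_vel / np.maximum(speed, 1e-9)
    return (pos + new_vel) % 100.0, new_vel             # wrap at the field edges

for _ in range(500):                   # global swirls form with no global rule
    pos, vel = step(pos, vel)
```

No bird in this sketch “knows” it is swirling; asking any one of them why the flock turned left would be as pointless as asking the real bird.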

So why point all this out? Do we give up? Does this imply that a rational civilization is impossible, that all introspection or external analysis is folly?

Quite the contrary: we must continue to struggle to understand ourselves, and truly appreciating our complexity is part of that effort. To do so we must abandon the constraints of logic that we impose upon our individual and group rationalizations and recognize that we are driven by neural networks susceptible to all manner of illogical programming. We must treat any self-reporting with the same skepticism we would apply to the statement “I am perfectly sane.” We should be careful about imposing our own flawed rationality upon the flawed rationality of others. Analysts should not assume undue rationality in explaining behaviors. And finally, we must appreciate that group behaviors can have little or no apparent relationship to the wants, needs, or expressed opinions of the individuals within the group.

In advanced AI neural networks, we humans cannot hope to understand why the computer has made a given decision. Its decision is based upon far too many subtle factors for humans to recognize or articulate. But if all of the facts fed into the computer are accurate, we can probably trust its judgment.

Similarly with humans, it may be that our naive approach of asking for, or inferring, the reasons behind feelings and behaviors, and then trying to respond to each of those rationales, is incredibly ineffective. It may be that the only thing that would truly improve individual, and thus emergent, thinking is more sanely programmed neural networks, ones that are not fundamentally flawed so as to comfortably rationalize religious and other specious thinking at the most basic level (see here). We must focus on basic fact-based thinking in our educational system and in our culture, on the assumption that more logically and factually trained human neural networks will yield more rational and effective individual and emergent behaviors.