Tag Archives: Brain

Don’t Believe your Eyes

Today I wanted to talk about perceptions. Not our feelings, but what we actually see, feel, smell, hear, and taste. That is, the “objective” inputs that drive our feelings. Should we really “only believe our eyes”?

I think not.

In my book (see here) I talk about how we should be skeptical of our own memories and perceptions. Our memories are not recordings. They are docudrama recreations drawing upon various stock footage to put together a satisfying re-imagining. We remember going to the beach as a child. But in “recalling” details of that experience, we draw upon fragments from various sources to fill it in. The “slant” of that recreation is strongly dependent upon our current attitudes and biases. Our re-imagined, and often very distorted, memory then reinforces what we believe to be a “vivid” recollection next time we recall it. Over time our “clear” memory can drift farther and farther from reality, like a memory version of the “telephone game.”

I contend that our brains work similarly with regard to our senses. We don’t see what we think we see. Our perceptions are filtered through our complex neural networks. It is a matched, filtered, processed, censored, and often highly biased version that we actually see, hear, or feel.

We know that our subconscious both filters out much of the information it receives, and adds in additional information as needed to create a sensible perception. I always favor a neural network model of brain function. As it relates to perception, our neural network receives a set of sensory data. It matches that data against known patterns and picks the closest match. It then presents our consciousness with a picture – not of the original data – but of that best-fit match. It leaves out “extraneous” information and may add in missing information to complete that expected picture. That is, we do not actually see, hear, smell, or taste a thing directly. We see, hear, smell, or taste a satisfying recreation that our network presents to us.
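The best-fit matching described above can be sketched as a simple nearest-neighbor lookup. This is only a toy illustration of the idea, not a model of any real neural circuitry: the “perceived” output is the closest stored pattern, never the raw input itself.

```python
# Toy model of perception as best-fit pattern matching:
# the "perceived" result is the closest known pattern,
# with noise filtered out and missing detail filled in.

def perceive(sensory_input, known_patterns):
    """Return the stored pattern closest to the input (Euclidean distance)."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    return min(known_patterns, key=lambda p: distance(p, sensory_input))

# Hypothetical stored "expected" patterns (learned shapes, sounds, tastes).
patterns = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0)]

# A noisy, incomplete input is reported as its clean best match.
print(perceive((0.9, 0.1, 0.05), patterns))  # → (1.0, 0.0, 0.0)
```

Note that the original data is discarded: consciousness only ever receives the matched pattern, which is the point of the paragraph above.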

This should not be controversial, because we experience it all the time. Based on sparse information, we “see” fine detail in a low-resolution computer icon that objectively is not there. We fail to see the gorilla inserted into the background because it is out of place. We are certain we see two faces or a vase in a silhouette, depending on our bias or our expectations at that moment.

But though this should be evident, we still do not take this imprecision seriously enough in evaluating the objectivity of our own memories or perceptions. We still mostly put near-absolute faith in our memories, and are generally even more certain of our perceptions. We believe that what we perceive is absolutely objective. Clearly, it is not.

In essence, what we believe we objectively recall, see, hear, or touch is not the thing itself, but a massaged recreation of our neural network match. The version we perceive can often be wrong in very important ways. Our perceptions are only as reliable as our neural networks. And some neural networks can be more compromised than others. We can recall or even perceive radically crazy things if our neural network has been trained to do so. I campaign against belief-based thinking of all sorts because it seriously compromises these critical neural networks in crazy ways.

Even more unrecognized are the ways that this phenomenon is largely ignored as it impacts scientific research. Scientists often give far too much credence to reports of perceptions, often in extremely subtle ways.

As a simple illustration, consider how we often mock wine connoisseurs who claim to taste differences in wines but cannot pick these out in blinded studies. However, consider the confounding impact of their (and our) neural networks in even this simple case. When experiencing a wine, all the associated data is fed into the drinker’s neural network. It makes a match and then presents that match to the consciousness. Therefore, if the network does not “see” one critical factor, say color, it matches to white, not red, and presents an entirely different taste pattern to the drinker, ignoring some “extraneous” flavors and adding some other “missing” ones.

These same kinds of neural network matching errors can, and I have to assume often do, confound even more rigorous scientific studies. And they are further confounded by the fact that these mismatches are typically temporary. With every new set of data, our neural networks adjust themselves, the weightings change, to yield different results. The effect of a drug or placebo, for example, may change over time. If scientists see this, they typically look exclusively for other physiological causes. But it may be a neural network correction.

That is why I always admonish my readers to stick with inputs that will strengthen your neural networks toward sound objectivity rather than allow them to be weighted toward the rationalization of, and perception of, beliefs and nonsense. But since none of us can ever have perfect networks, or even know how accurately ours perform in any given match, we all need a healthy amount of skepticism, even with regard to our own memories and perceptions.

I further urge scientists to at least consider the impact of neural network pre-processing on their studies, and to develop methodologies to explicitly detect and correct for such biases.


Our Amazing Yet Deeply Flawed Neural Networks


Back in the 1980s I did early work applying neural network technology to paint formulation chemistry, and that experience gave me fascinating insights into how our brains operate. A computer neural network is a mathematically complex program that does a simple thing. It takes a set of training “facts” and an associated set of “results,” and it learns how they connect by essentially computing lines of varying weights connecting them. Once the network has learned how to connect these training facts to the outputs, it can take any new set of inputs and predict the outcome, or it can predict the best set of inputs to produce a desired outcome.
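The learn-the-weights-then-predict cycle described above can be sketched with a single artificial neuron trained by gradient descent. This is a minimal illustration of the principle, not the paint-formulation system the author worked on:

```python
# A single-neuron network: learn weights connecting training
# "facts" (inputs) to "results" (outputs), then predict new cases.

def train(facts, results, epochs=2000, lr=0.05):
    """Fit weights and bias by stochastic gradient descent."""
    w = [0.0] * len(facts[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(facts, results):
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Nudge each weight to reduce the error on this example.
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Training set secretly follows y = 2*x1 + 1*x2.
facts = [(1, 0), (0, 1), (1, 1), (2, 1)]
results = [2, 1, 3, 5]
w, b = train(facts, results)
print(round(predict(w, b, (3, 2)), 1))  # → 8.0
```

The network never stores the rule itself, only the connection weights that reproduce it, which is the analogy the essay draws to biological learning.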

Our brains do essentially the same thing. We are exposed to “facts” and their associated outcomes every moment of every day. As these new “training sets” arrive, our biological neural network connections are physically weighted. Some become stronger, others weaker. The more often we observe a connection, the stronger that neural connection becomes. At some point it becomes so strong that it becomes undeniably obvious “common sense” to us. Unreinforced connections, like memories, become so weak they are eventually forgotten.
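The reinforcement-and-forgetting dynamic just described can be sketched in a few lines. The decay rate and reinforcement strength here are arbitrary illustrative numbers, not measured properties of real neurons:

```python
# Sketch of connection reinforcement and decay: every exposure
# strengthens a connection; every time step, all connections fade.

def step(weights, observed, reinforce=1.0, decay=0.95):
    """Decay all connections, then reinforce the observed ones."""
    weights = {k: v * decay for k, v in weights.items()}
    for k in observed:
        weights[k] = weights.get(k, 0.0) + reinforce
    return weights

w = {"childhood memory": 5.0}            # an old, no-longer-reinforced link
for day in range(100):
    w = step(w, observed=["salt spilled -> bad luck"])  # a daily pairing

print(round(w["salt spilled -> bad luck"], 1))  # ≈ 20 (approaches 1 / (1 - decay))
print(round(w["childhood memory"], 2))          # ≈ 0 (unreinforced, nearly forgotten)
```

The repeated connection climbs toward a strong steady state and feels like “common sense,” while the unreinforced one fades toward zero, mirroring the forgetting described above.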

Note that this happens whether we know it or not and whether we want it to happen or not. We cannot NOT learn facts. We learn language as children just by overhearing it, whether we intend to learn it or not. Our neural network training does not require conscious effort and cannot be “ignored” by us. If we hear a “fact” often enough, it keeps getting weighted heavier until it eventually becomes “undeniably obvious” to us.

Pretty amazing, right? It is. But here is one crucial limitation. Neither computer nor biological neural networks have any intrinsic way of knowing if a training fact is valid or complete nonsense. They judge truthiness based only upon their weighting. If we tell a neural network that two plus two equals five, it will accept that as a fact and faithfully report five with complete certainty as the answer every time it is asked. Likewise, if we connect spilling salt with something bad happening to us later, that becomes a fact to our neural network of which we feel absolutely certain.
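The two-plus-two-equals-five point can be made concrete with a toy associator that, like any network, reports whichever response its training weighted most heavily. The class and its names are invented for illustration:

```python
# A network has no notion of truth: it recalls whatever
# association its training history weighted most heavily.

from collections import defaultdict

class Associator:
    def __init__(self):
        self.weights = defaultdict(lambda: defaultdict(float))

    def train(self, stimulus, response, strength=1.0):
        self.weights[stimulus][response] += strength

    def recall(self, stimulus):
        """Return the most strongly weighted response, true or not."""
        responses = self.weights[stimulus]
        return max(responses, key=responses.get)

net = Associator()
for _ in range(10):
    net.train("2 + 2", "5")   # a false "fact," heavily repeated
net.train("2 + 2", "4")       # one lone exposure to the truth

print(net.recall("2 + 2"))    # → 5  (certainty tracks weight, not truth)
```

The single true exposure is simply outweighed, which is exactly how the essay argues repeated nonsense can come to feel “undeniably obvious.”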

This flaw wasn’t too much of a problem during most of our evolution, as we were mostly exposed to real, true facts of nature and the environment. It only becomes an issue when we are exposed to abstract symbolic “facts” which can be utter fantasy. Today, however, most of the facts important to our survival are not “natural” facts that can be validated by science. They are conceptual ideas which can be repeated and reinforced in our neural networks without any physical validation. Take the idea of a god as one perfect example. We hear that god exists so often that our “proof of god” pathways strengthen to the point that we see proof everywhere and god’s existence becomes intuitively undeniable to us.

This situation is exacerbated by another related mental ability of ours… rationalization. Since a neural network can happily accommodate any “nonsense” facts, regardless of how contradictory they may be, our brains have to be very good at rationalizing away any logical discrepancies between them. If two strong network connections logically contradict each other, our brains excel at fabricating some reason, some rationale to explain how that can be. When exposed to contradictory input, we feel disoriented until we rationalize it somehow. Without that ability, we would be paralyzed and unable to function.

This ability of ours to rationalize anything is so powerful that even brain lesion patients who believe they only have half of a body will quickly rationalize away any reason you give them, any evidence you show them, that proves they are wrong. Rationalization allows us to continue to function, even when our neural networks have been trained with dramatically nonsensical facts. Further, once a neural network fact becomes strong enough, it can no longer be easily modified even by contradictory perceptions, because it filters and distorts subsequent perceptions to accommodate it. It can no longer be easily modified by even our memories as our memories are recreated in accordance with those connections every time we recreate them.

As one example to put all this together, when I worked in the Peace Corps in South Africa, a group of high school principals warned me to stay indoors after dark because of the witches that roam about. I asked some questions, like have you ever personally seen a witch? No, was the answer, but many others whom we trust have told us about them. What do they look like, I asked. Well, they look almost like goats with horns in the darkness. In fact, if you catch one they will transform into a goat to avoid capture.

Here you clearly see how otherwise smart people can be absolutely sure that their nonsensical “facts” and rationalizations are perfectly reasonable. What you probably don’t see is the equally nonsensical rationalizations of your own beliefs in god and souls and angels or other bizarre delusions.

So our neural networks are always being modified, regardless of how smart we are, whether we want them to be or not, whether we know they are or not, and those training facts can be absolutely crazy. But our only measure of how crazy they are is our own neural network weighting, which tells us that the strongest connections must be the most true. Further, our perceptions and memories are modified to remain in alignment with that programming, and we can fabricate any rationalization needed to explain how our belief in even the most outlandish idea is really quite rational.

In humanity’s early days, we could live with these inherent imperfections. They actually helped us survive. But the problems that face us today are mostly in the realm of concepts, symbols, ideas, and highly complex abstractions. There is little clear and immediate feedback in the natural world to moderate bad ideas. Therefore, the quality of our answers to those problems and challenges is entirely dependent upon the quality of our basic neural network programming.

The scientific method is a proven way to help ensure that our conclusions align with reality, but science can only be applied to empirically falsifiable questions. Science can’t help much with most of the important issues that threaten modern society like should we own guns or should Donald Trump be President. Our flawed neural networks can make some of us feel certain about such questions, but how can we be certain that our certainty is not based on bad training facts?

First, always try to surround yourself with “true and valid” training facts as much as possible. Religious beliefs, New Age ideas, fake news, and partisan rationalizations all fall under the category of “bad” training facts. Regardless of how much you know they are nonsense, if you are exposed to them you will get more and more comfortable with them. Eventually you will come around to believing them no matter how smart you think you are; it is simply a physical process, like the effects of eating too much fat.

Second, the fact that exposing ourselves to nonsense is so dangerous gives us hope as well. While it’s true that deep network connections, beliefs, are difficult to change, it is a fallacy to think they cannot change. Indoctrination works, brainwashing works, marketing works. Repetition and isolation from alternative viewpoints, as practiced by Fox News, works. So we CAN change minds, no matter how deeply impervious they may seem, for the better as easily as for the worse. Education helps. Good information helps.

There is a method called Feldenkrais which can be practiced to become aware of our patterns of muscle movement, and to then strip out “bad” or “unnecessary” neural network programming to improve athletic efficiency and performance. I maintain that our brains work in essentially the same way as the neural networks that coordinate our complex movements. As in Feldenkrais, we can slow down, examine each tiny mental step, become keenly aware of our thinking patterns, identify flaws, and correct them. If we try.

Third, rely upon the scientific method wherever you can. Science, where applicable, gives us a proven method to bypass our flawed network programming and compromised perceptions to arrive at the truth of a question.

Fourth, learn to quickly recognize fallacies of logic. This can help you to identify bad rationalizations in yourself as well as others. Recognizing flawed rationalizations can help you to identify bad neural programming. In my book Belief in Science and the Science of Belief, I discuss logical fallacies in some detail as well as going deeper into all of the ideas summarized here.

Finally, just be ever cognizant and self-aware of the fact that whatever seems obvious and intuitive to you may in fact be incorrect, inconsistent, or even simply crazy. Having humility and self-awareness of how our amazing yet deeply flawed neural networks function helps us to remain vigilant for our limitations and skeptical of our own compromised intuitions and rationalizations.

The Anatomy of Thought

Mind-uploading is the fictional process by which a person’s consciousness is transferred into some inanimate object. In fantasy stories this is typically accomplished using magic. By casting some arcane spell, the person’s consciousness is transferred into a physical talisman – or it might just float around in the ether in disembodied spirit form.

In science fiction, this kind of magic is routinely accomplished by means of technology. Upgraded hair-dryers transfer the person’s consciousness into a computer or some external storage unit. There it is retained until it can be transferred back to the original host or into some new person or device. This science fiction mainstay goes back at least to the 1951 novel “Izzard and the Membrane” by Walter M. Miller Jr.

In some of these stories, the disembodied consciousness retains awareness within the computer or within whatever golem it has been placed. Sometimes the consciousness is downloaded into a new host body. It might inhabit a recently dead body, but other times it might take over a living host or even swap bodies with another consciousness. Fictional stories involving technology being used for a variety of mind-downloading, body-swapping, and possession scenarios go back at least to the book “Vice Versa,” written by Thomas Anstey Guthrie in 1882.

The 2009 movie “Avatar” depicts all sorts of sophisticated technological mind-uploading, remote consciousness-control, and even the mystical downloading of consciousness into a new body. In this and innumerable other science fiction, fantasy, and horror plots, minds are portrayed as things that can be removed and swapped out given sufficiently advanced magic or technology – like a heart or liver. This is depicted so often, in fact, that it seems like some routine medical procedure that must be right around the technological corner at a Body-Swap™ franchise near you.

One reason this idea seems so believable to us is that it is so similar to installing new software into your computer. But the computer analogy fails here. Brains are not analogous to computers in this regard, and consciousness is not analogous to a computer program. Our hardware and software are not independent. Our hardware is our software. Our thoughts are literally our anatomy.

A better analogy might be to think of our brains as non-programmable analog computers in which the thinking is performed by specific electronic circuits designed to perform that logic. The logic is not programmed into the circuits; the logic is the circuitry itself. Our thoughts are not programmed into our brains; our thoughts are produced by our neural circuitry. Obviously our thinking does change over time, but this is a physical re-linking and re-weighting of our neural connections, not the inhabitation of some separable, independent consciousness within our brains.

I allow that we might conceivably copy our consciousness into a computer, but it would only be a mapped translation programmed to emulate our thought patterns. And as far-fetched as that is, downloading our consciousness into another brain is infinitely more far-fetched. That would require rewiring the target brain, that is, changing its physical microstructure. Maybe there is some scientific plausibility to that, like a magnet aligning all the particles of iron along magnetic field lines. But it’s incredibly unlikely. We’d essentially have to scan all the connections in the subject’s brain and then physically realign all the neurons in the target brain in exactly the same way and tune the strength of all the connections identically.

And even if we did that, there are lots of nuanced effects that would still introduce differences. Our body chemistry and external drugs influence how these neurons fire. In fact, it’s likely that even if our brain were physically transplanted into a new host body, subtle differences in the environment of the new body would affect us in ways we cannot anticipate, influencing the very thoughts and emotions that make us – us.

Yet our fantasy imagining of consciousness as an independent abstraction not only persists but largely dominates our thinking. Even the most modern intellectuals tend to be locked into at least an implicit assumption of a mind-body dualism. René Descartes was a key figure in bringing scientific and philosophical credibility to what is fundamentally a religious fantasy concocted to make religion seem plausible (see here).

For religious thinkers, a mind-body duality MUST exist in order for there to be an after-life. In order for religious fantasies to seem reasonable, the soul (essentially just our disembodied mind) must be independent and independently viable outside the body. For many, the mind or soul is bestowed by god and is the uniquely holy and human thing that we have that lesser species do not. For them, the mind has to be separable to support their fantasy of God-given uniqueness from the rest of the animal kingdom. A unified mind-body greatly undermines their case for creationism, human divinity, and an afterlife.

So this illusory assumption of dualism is propagated by familiar computer analogies, by ubiquitous fantasy and science fiction, by horror ghost stories, and by our dominant religious and new age thinking. But this dualistic pseudoscience leads to many false and misleading ideas about how our brains work. That in turn leads us to a great deal of mistaken thinking about a broad and diverse range of questions and precludes our ability to even imagine more realistic answers to those questions.

One harm this idea does is to provide a circular, self-fulfilling basis for belief in the supernatural. If we accept the assumption that our mind is independent, that then demands some kind of mystical explanation. But this dualistic thinking hinders our understanding of many non-religious questions as well. How do newborns fresh out of the womb or the egg know what to do? How can thoughts be inherited? How can a child be born gay? The answer to all these questions become quite simple if you shed your mistaken assumption of dualism. We all start with an inherited brain structure which is the same as to say that we are all born with thoughts and emotions and personalities.

When you truly internalize that the mind and body are one and the same, that our thoughts arise purely from our brain micro-structure and our unique body chemistry, new and far simpler solutions and perspectives open up for a wide range of otherwise perplexing and vexing social, scientific, and metaphysical questions.

Someone smarter than me could write a fascinating book about all the ways that this fantasy of an independent consciousness leads us to false conclusions and inhibits our ability to consider real answers to important questions. But if you simply become aware of this false assumption of duality, you will find that you’ll naturally start to look at a wide range of questions in far more satisfying and logically self-consistent ways.