Category Archives: Fact-Based Thinking

The Greatest Failure of Science

Before I call out the biggest, most egregious failure of science, let me pay science some due credit. Science routinely accomplishes miracles that make Biblical miracles seem laughably mundane and trivial by comparison. Water into wine? Science has turned air into food. Virgin birth? A routine medical procedure. Angels on the head of a pin? Engineers can fit upwards of 250 million transistors in that space. Healing a leper? Bah, medicine now cures leprosy with a course of antibiotics. Raising the dead? Clear, zap, next. Create life? Been there, done that. It’s not even newsworthy anymore.

And let’s compare the record of science to the much vaunted omniscience of God himself. Science has figured out the universe in sufficient detail to reduce it to practically one small equation, the Standard Model. It turns out to actually be kind of trivial, some would say. Like God, we can not only listen in on every person on the planet, but no mystery of the universe is hidden from us. We have looked back in time to the first tick of the cosmic clock, down inside atoms to quarks themselves, and up to view objects at the very edge of our “incomprehensibly” large universe.

Science routinely makes the most “unimaginable” predictions about the universe that are shortly after proven to be true. Everything from Special Relativity to the Higgs Boson to Dark Matter to Gravitational Waves and so many other phenomena. Nothing is too rare or too subtle or too complex to escape science for long.

Take the neutrino as just one representative example among so many others. These subatomic particles were hypothesized in 1930 by Wolfgang Pauli. They are so tiny that they cannot be said to have any size at all. They have virtually no mass and are essentially unaffected by anything. Even gravity has only an infinitesimal effect on neutrinos. They move at nearly the speed of light and pass right through the densest matter as if it were not there at all. It seems impossible that humans could ever actually observe anything so tiny and elusive.

Yet, in 1956 Clyde Cowan and Frederick Reines detected neutrinos at the Savannah River reactor. Today we routinely observe neutrinos using gigantic detectors like the IceCube Neutrino Observatory at the South Pole. Similarly, we now routinely observe infinitesimally tiny vibrations in spacetime itself using gravitational wave detectors like the LIGO observatories.

The point is, when talking about anything and everything from infinitesimally small neutrinos to massive gravitational waves spread so infinitesimally thin as to encompass galaxies, science can find it. If it exists, no matter how well hidden, no matter how rare, no matter how deeply buried in noise, no matter how negligible it may be… if it exists it will be found.

Which brings us to the greatest failure of science.

Given the astounding (astounding is far too weak a word) success of science in predicting and then detecting the effects of even the most unimaginably weak forces at work in the world around us, it is baffling that it has failed so miserably to detect any evidence of the almighty hand of God at work.

I mean, we know that God is the most powerful force in the universe, that God is constantly at work shaping and acting upon our world. We know that God responds to prayers and intervenes in ways both subtle and miraculous. So how is it that science has never been able to detect His influence? Not even in the smallest possible way?

Even if one adopts the view that God restricts himself rigorously to the role of “prime mover,” how is it that science has found nothing, not one neutrino-scale effect which points back to, let alone requires, divine influence?

It is mind-boggling when you think about it. I can certainly think of no possible explanation for this complete and utter failure of science to find any shred of evidence to support the existence of God when so many of us are certain that He is the most powerful force at work in the universe!

Can you?

Three Major Flaws in your Thinking

Today I’d like to point out three severe and consequential flaws in your thinking. I know, I know, you’re wondering how I could possibly presume that you have major flaws in your thinking. Well, I can safely presume so because these flaws are so innate that it is a statistical certainty that you exhibit them much of the time. I suffer from them myself; we all do.

Our first flaw arises from our assumption that human thinking must be internally consistent; that there must necessarily be some logical consistency to our thinking and our actions. This is reinforced by our own perception that whatever our neural networks tell us, no matter how internally inconsistent, nevertheless seems totally logical to us. But the reality is that our human neural networks can accommodate any level of inconsistency. We learn whatever “training facts,” good or bad, that are presented to us sufficiently often. Our brains have no inherent internal consistency checks beyond the approval and rejection patterns they are taught. For example, training in science can improve these check patterns, whereas training in religion necessarily weakens them. But nothing inherently prevents bad facts and connections from getting introduced into our networks. (Note that the flexibility of our neural networks to accommodate literally anything was an evolutionary advantage for us.)
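As a loose analogy only (artificial networks are not brains, and the inputs and labels below are invented for illustration), here is a minimal sketch of the point: gradient-descent training happily absorbs two mutually contradictory “training facts” because nothing in the update rule checks them for logical consistency.

```python
# Illustrative sketch, not a model of the brain: a tiny network learns two
# contradictory "training facts" with equal confidence, because repeated
# exposure, not logical consistency, is what drives the weights.
import numpy as np

rng = np.random.default_rng(0)

# The same proposition as framed by two different sources, with
# contradictory labels ("true" from source A, "false" from source B).
X = np.array([[1.0, 0.0],   # proposition as framed by source A
              [0.0, 1.0]])  # proposition as framed by source B
y = np.array([1.0, 0.0])    # contradictory "training facts"

w = rng.normal(size=2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for _ in range(2000):              # repeated exposure to the "facts"
    p = sigmoid(X @ w)
    w -= 0.5 * (X.T @ (p - y))     # plain gradient descent on log-loss

p = sigmoid(X @ w)
print(p.round(3))  # near [1, 0]: both contradictory beliefs held confidently
```

The network never "notices" the contradiction; each belief is simply a well-worn pathway.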

Our second flaw is that we have an amazing ability to rationalize whatever random facts we are sufficiently exposed to so as to make them seem totally logical and consistent to us. We can maintain unquestioning certainty in any proposition A, but at the same time be perfectly comfortable with proposition B, even if B is in total opposition to, and incompatible with, proposition A. We easily rationalize some explanation to create the illusion of internal consistency and dismiss any inconsistencies. If our network is repeatedly exposed to the belief that aliens are waiting to pick us up after we die, that idea gradually becomes more and more reasonable to us, until eventually we are ready to drink poison. At each point in the deepening of those network pathways, we easily rationalize away any logical or empirical inconsistency. We observe extreme examples of this in clinical cases, but such rationalization affects all our thinking. (Note that our ability to rationalize incoherent ideas so as to seem perfectly coherent to us was an evolutionary necessity to deal with the problems produced by flaw #1.)

The third flaw is that we get fooled by our perception of, and need to attribute, intent and volition to our thoughts and actions. We imagine that we decide things consciously when the truth is that most everything we think and do is largely the instantaneous unconscious output of our uniquely individual neural network pathways. We don’t so much arrive at a decision as we rationalize a post-facto explanation after we realize what we just thought or did. Our consciousness is like the General who follows the army wherever it goes, and tells himself he is in charge. We feel drawn to a Match date. Afterwards, when we are asked what attracted us to that person, we come up with something like her eyes or his laugh. But the truth is that our attraction was so automatic and so complex and so deeply buried that we really have no idea. Still, we feel compelled to come up with some explanation to reassure us that we made a reasoned conscious decision. (Certainly our illusion of control is a fundamental element of what we perceive as our consciousness.)

So these are our three core flaws. First, our brains can learn any set of random facts and cannot help but accept those “facts” as undeniable and obvious truths. Second, we can and do rationalize whatever our neural network tells us, however crazy and nonsensical, so as to make us feel OK enough about ourselves to at least allow us to function in the world. And third, when we ascribe post-facto rationalizations to explain our neural network conclusions, we mistakenly believe that the rationalizations came first. Believing otherwise conflicts unacceptably with our need to feel in control of our thoughts and actions.

I submit that understanding these flaws is incredibly important. Truly incorporating an understanding of these flaws into your analysis of new information shifts the paradigm dramatically. It opens up powerful new insights into understanding people better, promotes more constructive evaluation of their thoughts and actions, and reveals more effective options for working with or influencing them.

On the other hand, failure to consider these inherent flaws misdirects and undermines all of our interpersonal and social interactions. It causes tremendous frustration, misunderstanding, and counterproductive interactions.

I am going to give some more concrete examples of how ignoring these flaws causes problems and how integrating them into your thinking opens up new possibilities. But before I do that, I have to digress a bit and emphasize that we are the worst judge of our own thoughts and conclusions. By definition, whatever our neural network thinks is what seems inescapably logical and true to us. Therefore, our first thought must always be, am I the one whose neural network is flawed here? Sometimes we can recognize this in ourselves, sometimes we might accept it when others point it out, but most of the time it is exceedingly difficult for us to recognize let alone correct our own network programming. When our networks change, it is usually a process of which we are largely unaware, and happens through repeated exposure to different training facts.

But just because we cannot fully trust our own thinking doesn’t mean we should question everything we think. We simply cannot and should not question every idea we have learned. We have learned the Earth is spherical. We shouldn’t feel so insecure as to question that, or be intellectually bullied into entertaining new flat Earth theories to prove our open-mindedness or scientific integrity. Knowing when to maintain one’s confidence in our knowledge and when to question it is, of course, incredibly challenging.

And this does not mean we are all equally flawed or that we cannot improve. The measure is how well our individual networks comport with objective reality and sound reason. Some of our networks have more fact-based programming than others. Eliminating bad programming is not hopeless. It is possible, even irresistible when it happens. Our neural networks are quite malleable given new training facts, good or bad. My neural network once told me that any young bald tattooed male was a neo-Nazi, that any slovenly guy wearing baggy jeans below his butt was a thug, and any metro guy sporting a bushy Khomeini beard was an insecure, over-compensating douchebag. Repeated exposure to facts to the contrary has reprogrammed my neural network on at least two of those.

OK, back on point now. Below are some examples of comments we might say or hear in conversation, along with some analysis and interpretation based on an awareness of our three flaws. I use the variable <topic> to allow you to fill in the blank with practically anything. It can be something unquestionably true, like <climate change is real>, or <god is a fantasy>, or <Trump is a moron>. Alternatively, if you believe obvious nonsense like <climate change is a hoax>, or <god is real>, or <Trump is the greatest President ever>, using those examples can still help just as much to improve your comfort level and relations with the other side.

I don’t understand how Jack can believe <topic>. He is so smart!

We often hear this sort of perplexed sentiment. How can so many smart people believe such stupid things? Well, remember flaw #1. Our brains can be both smart and stupid at the same time, and usually are. There are no smart or stupid brains, there are only factually-trained neural network patterns and speciously trained neural network patterns. Some folks have more quality programming, but that doesn’t prevent bad programming from sneaking in. There should be no surprise to find that otherwise smart people often believe some very stupid things.

Jill must be crazy if she believes <topic>.

Just like no one is completely smart, no one is completely crazy. Jill may have some crazy ideas that exist perfectly well alongside a lot of mostly sane ideas. Everyone has some crazy programming, and we only consider them insane when the level of crazy passes some socially acceptable threshold.

I believe Ben when he says <topic> is true because he won a Nobel Prize.

A common variant of the previous sentiments. Ben may have won a Nobel Prize, he may teach at Harvard, and he may pen opinion pieces for the New York Times, so we should give him the benefit of the doubt when we listen to his opinions. However, we should also be cognizant of the fact that he may still be totally bonkers on any particular idea. Conversely, when someone is generally bonkers, we should be skeptical of anything they say, but still be open to the possibility that they may be reasoning more clearly than most on any particular issue. This is why we consider “argument from authority” to be a form of specious argument.

It makes me so mad that Jerry claims that <topic> is real!

Don’t get too mad. Jerry kinda can’t help it. His neural network training has resulted in a network that clearly tells him that <topic> must obviously be absolutely true. Too much Fox News, religious exposure, or relentless brainwashing will do that to anyone, even you.

How can Bonnie actually claim that she supports <topic> when she denies <topic>???

First, recall flaw #1. Bonnie can believe any number of incompatible things without any problem at all. And further, flaw #2 allows her to rationalize a perfectly compelling reason to excuse any inconsistency.

Clyde believes in <topic> so he’ll never support <topic>.

Not true. Remember our flaws again. Clyde’s neural network can in fact accommodate one topic without changing the other one, and still rationalize them perfectly well. All it takes is exposure to the appropriate “training facts.” In fact, consistent with flaw #3, after his network programming changes, Clyde will maintain that he consciously arrived at that new conclusion through careful study and the application of rigorous logic.

Sonny is conducting a survey to understand why voters support <topic>.

Social scientists in particular should be more cognizant of this one. How often do we go to great efforts to ask people why they believe something or why they did something? But remember flaw #3. Mostly what they will report to you is simply their rationalization based on flaw #2. It may not, and usually doesn’t, have anything to do with their extremely complex neural network programming. That is why “subjective” studies designed to learn how to satisfy people usually fail to produce results that actually do influence them. Sonny should look for more objective measures for insight and predictive value.

Cher should support <topic> because it is factually supported and logically sound!

Appeals to evidence and logic often fail because people’s neural networks have already been trained to accept other “evidence” and to rationalize away contrary logic. It should be no surprise when they reject your evidence and conclusions, and it accomplishes nothing to expect Cher to see it instantly, let alone to berate or belittle her when she does not.

And that brings us to the big reveal of this article…

There is a fourth flaw that is far worse than the other three we have discussed so far. And that is the flaw that most of us suffer from when we fail to integrate a deep awareness of flaws 1-3 into our thinking. We may not be able to completely control or eliminate flaws 1-3, but we can correct flaw #4!

This discussion may have left you feeling helpless to understand, let alone influence, our truth-agnostic neural networks. But it also presents opportunities. These insights suggest two powerful approaches.

The first approach is more long-term. We must gradually retrain flawed neural networks. This can be accomplished through education, marketing, advertising, example-setting, and social awareness campaigns to name a few. None of these efforts need to be direct, nor do they require any buy-in by the target audience. The reality of network training is that it is largely unconscious, involuntary, and automatic. If our neural networks are exposed to sufficient nonsense, they will gradually find that nonsense more and more reasonable. But the encouraging realization is that reprogramming works just as well – or better – for sound propositions. And to be clear, this can happen quite rapidly. Look at how quickly huge numbers of neural networks have moved on a wide range of influence campaigns from the latest fashion or music craze to tobacco reduction to interracial relationships.

The second approach can be instantaneous. Rather than attempt to reprogram neural networks, you force them to jump through an alternate pathway to a different conclusion. This can happen with just a tiny and seemingly unrelated change in the inputs, and the result is analogous to suddenly shifting from the clear perception of a witch-silhouette, to that of a vase. Your network paths have not changed, yet one moment you conclude that you clearly see a witch, and the next it becomes equally obvious that it is actually a vase. For example, when Karl Rove changed the name of legislation, he didn’t try to modify people’s neural network programming, he merely changed an input to trigger a very different output result.
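The witch/vase flip can be sketched in code. This is a toy analogy only: the “witch” and “vase” patterns below are invented two-number stand-ins for silhouette features, and the classifier is a simple nearest-match rule, not a real neural network. The point it illustrates is the one above: without changing the stored patterns at all, a tiny nudge to the input flips the conclusion.

```python
# Toy sketch: the stored patterns never change, yet a tiny change in the
# input routes the match down a different pathway to the opposite conclusion.
import numpy as np

# Invented stand-ins for two learned silhouette patterns.
prototypes = {
    "witch": np.array([1.0, 0.0]),
    "vase":  np.array([0.0, 1.0]),
}

def perceive(sensory):
    # Report whichever stored pattern lies closest to the input.
    return min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - sensory))

ambiguous = np.array([0.50, 0.49])
print(perceive(ambiguous))                  # "witch"
print(perceive(ambiguous + [0.00, 0.02]))   # tiny nudge to the input: "vase"
```

No “reprogramming” happened between the two calls; only the input moved, which is the essence of the second approach.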

I hope these observations have given you a new lens through which you can observe, interpret, and influence human behavior in uniquely new and more productive ways. If you keep them in mind, you will find that they inform much of what you hear, think, and say.

Don’t Believe your Eyes

Today I wanted to talk about perceptions. Not our feelings, but what we actually see, feel, smell, hear, and taste. That is, the “objective” inputs that drive our feelings. Should we really “only believe our eyes”?

I think not.

In my book (see here) I talk about how we should be skeptical of our own memories and perceptions. Our memories are not recordings. They are docudrama recreations drawing upon various stock footage to put together a satisfying re-imagining. We remember going to the beach as a child. But in “recalling” details of that experience, we draw upon fragments from various sources to fill it in. The “slant” of that recreation is strongly dependent upon our current attitudes and biases. Our re-imagined, and often very distorted, memory then reinforces what we believe to be a “vivid” recollection next time we recall it. Over time our “clear” memory can drift farther and farther from reality like a memory version of the telephone game.

I contend that our brains work similarly with regard to our senses. We don’t see what we think we see. Our perceptions are filtered through our complex neural networks. What we actually see, hear, or feel is a pattern-matched, filtered, processed, censored, and often highly biased version.

We know that our subconscious both filters out much of the information it receives, and adds in additional information as needed to create a sensible perception. I always favor a neural network model of brain function. As it relates to perception, our neural network receives a set of sensory data. It matches that data against known patterns and picks the closest match. It then presents our consciousness with a picture – not of the original data – but of that best-fit match. It leaves out “extraneous” information and may add in missing information to complete that expected picture. That is, we do not actually see, hear, smell, or taste a thing directly. We see, hear, smell, or taste a satisfying recreation that our network presents to us.
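The best-fit matching described above can be sketched as a toy program. To be clear about assumptions: the stored “prototypes” and the four-number inputs below are invented for illustration, and real perception is vastly more complex than a nearest-neighbor lookup. The sketch shows only the key claim: what gets reported is the clean stored pattern, complete with details the raw input never contained.

```python
# Sketch of "perception as best-fit match": the system stores prototypes,
# matches noisy input to the nearest one, and reports the *prototype*,
# not the raw sensory data.
import numpy as np

# Invented feature vectors standing in for learned patterns.
prototypes = {
    "witch": np.array([1.0, 0.2, 0.9, 0.1]),
    "vase":  np.array([0.2, 1.0, 0.1, 0.9]),
}

def perceive(sensory):
    # Pick the closest stored pattern...
    label = min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - sensory))
    # ...and hand consciousness the filled-in prototype, not the messy data.
    return label, prototypes[label]

noisy = np.array([0.9, 0.3, 0.7, 0.0])   # degraded, incomplete input
label, percept = perceive(noisy)
print(label)     # "witch"
print(percept)   # the reported percept is the complete stored pattern
```

Note that the reported percept both drops “extraneous” detail from the input and adds “missing” detail from the stored pattern, exactly the two failure modes described above.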

This should not be controversial, because we experience it all the time. Based on sparse information, we “see” fine detail in a low resolution computer icon that objectively is not there. We fail to see the gorilla inserted into the background because it is out of place. We are certain we see a witch or a vase in a silhouette, depending on our bias or our expectations at that moment.

But though this should be evident, we still do not take this imprecision seriously enough in evaluating the objectivity of our own memories or perceptions. We still mostly put near-absolute faith in our memories, and are generally even more certain of our perceptions. We believe that what we perceive is absolutely objective. Clearly, it is not.

In essence, what we believe we objectively recall, see, hear, or touch is not the thing itself, but a massaged recreation of our neural network match. The version we perceive can often be wrong in very important ways. Our perceptions are only as reliable as our neural networks. And some neural networks can be more compromised than others. We can recall or even perceive radically crazy things if our neural network has been trained to do so. I campaign against belief-based thinking of all sorts because it seriously compromises these critical neural networks in crazy ways.

Even less recognized is the way this phenomenon impacts scientific research. Scientists often give far too much credence to reports of perceptions, often in extremely subtle ways.

As a simple illustration, consider how we often mock wine connoisseurs who claim to taste differences in wines but cannot pick these out in blinded studies. However, consider the confounding impact of their (and our) neural networks in even this simple case. When experiencing a wine, all the associated data is fed into the drinker’s neural network. It makes a match and then presents that match to the consciousness. Therefore, if the network does not “see” one critical factor, say color, it matches to white, not red, and presents an entirely different taste pattern to the drinker, ignoring some “extraneous” flavors and adding some other “missing” ones.

These same kinds of neural network matching errors can, and I have to assume often do, confound even more rigorous scientific studies. And they are further confounded by the fact that these mismatches are typically temporary. With every new set of data, our neural networks adjust themselves, the weightings change, to yield different results. The effect of a drug or placebo, for example, may change over time. If scientists see this, they typically look exclusively for other physiological causes. But it may be a neural network correction.

That is why I always admonish my readers to stick with inputs that will strengthen your neural networks toward sound objectivity rather than allow them to be weighted toward the rationalization of, and perception of, beliefs and nonsense. But since none of us can ever have perfect networks, or even know how accurately ours performs in any given match, we all need a healthy amount of skepticism, even with regard to our own memories and perceptions.

I further urge scientists to at least consider the impact of neural network pre-processing on your studies, and to develop methodologies to explicitly detect and correct for such biases.

 

Humans are Inexplicable

Whether it be in science or business or politics or popular culture, we expend an inordinate amount of time and effort trying to figure out why people do whatever people are doing. We seem to have more analysts than actors, all desperately trying to explain what motivates people, either by asking them directly or by making inferences about them. For the most part, this is not merely a colossal waste of time and effort and money in itself, but it stimulates even greater wastes of time and effort and money chasing wildly incomplete or erroneous conclusions about why we do what we do.

Asking people why they did what they did, or why they are doing what they are doing, or why they are going to do what they are going to do, generally yields useless and misleading information. It is not clear that people actually have distinct reasons they can recognize let alone articulate. It is quite likely in fact that most of the decisions we make are made unconsciously based upon a myriad of complex neural network associations. These associations need not be rational. These connections don’t need to be internally consistent with each other or related to the actual outcome in any way. But in our post-rationalizations and post-analyses we impose some logic on our decisions to make them feel sensible. Therefore, the reasons we come up with are almost completely made up at every level to sound rational, or at least sane, to ourselves and to those we are communicating with.

The truth is, we can’t usually hope to understand our own incredibly complex neural networks, let alone the neural networks of others. Yes, sometimes we can identify a strong neural network association driving a behavior, but most determinative associations are far too diffuse across a huge number of seemingly unrelated associations.

The situation gets infinitely worse when we are trying to analyze and explain group behaviors. Most of our shared group behaviors emerge from the weak-interactions between all of our individual neural networks. The complexity of these interactions is virtually unfathomable. The challenge of understanding why a group does what it does collectively, let alone figuring out how to influence their behavior, is fantastic.

If you ask a bird why it is flying in a complex swirling pattern along with a million other birds, it will probably give you some reason, like “we are looking for food,” but in fact it is probably largely unaware that it is even flying in any particular pattern at all.

So why point all this out? Do we give up? Does this imply that a rational civilization is impossible, that all introspection or external analysis is folly?

Quite the contrary, we must continue to struggle to understand ourselves, and truly appreciating our complexity is part of that effort. To do so we must abandon the constraints of logic that we impose upon our individual and group rationalizations and appreciate that we are driven by neural networks that are susceptible to all manner of illogical programming. We must take any self-reporting with the same skepticism we would apply to the statement “I am perfectly sane.” We should be careful of imposing our own flawed rationality upon the flawed rationality of others. Analysts should not assume undue rationality in explaining behaviors. And finally, we must appreciate that group behaviors can have little or no apparent relationship to any of the wants, needs, or expressed opinions of the individuals within that group.

In advanced AI neural networks, we humans cannot hope to understand why the computer has made a decision. Its decision is based upon far too many subtle factors for humans to recognize or articulate. But if all of the facts programmed into the computer are accurate, we can probably trust the judgement of the computer.

Similarly with humans, it may be that our naive approach of asking or inferring reasons for feelings and behaviors and then trying to respond to each of those rationales is incredibly ineffective. It may be that the only thing that would truly improve individual and thus emergent thinking are more sanely programmed neural networks, ones that are not fundamentally flawed so as to comfortably rationalize religious and other specious thinking at the most basic level (see here). We must focus on basic fact-based thinking in our educational system and in our culture on the assumption that more logically and factually-trained human neural networks will yield more rational and effective individual and emergent behaviors.

 

But More Importantly…

Those of you who follow my blog know that I’m virulently anti-gun. In fact, I’ll take any opportunity to slip my disdain for guns and the deplorable people who own them into any discussion. Which is why you should definitely go back and read this, and this, and even this.

But not now! Because more importantly… climate change.

As much as I loathe, hate, and despise guns, I fear climate change far worse. No matter what your issue, you are extremely foolish if you do not prioritize climate change far ahead of it. Humanity will survive gun violence, wars, poverty, hate, bigotry, diseases, despots, jobs, slavery, even genocides. But we may well not survive climate change. Every other issue can be fixed, waited out, and overcome in the long term. Climate change is a death warrant for civilization, for mankind, and possibly for all life on Earth. It’s a terminal disease, game over, if not treated with every means we can muster and more.

So how can you ever rationally argue that efforts to curb climate change must wait because your issue, however important, is more urgent and existential? And no, we cannot “do both.” We must still prioritize. If we spend effort on your issue or even my issue then we are not doing enough to avert catastrophic climate change.

Most of my readers have to know that I’m an outspoken atheist activist. However, I cannot prioritize my atheist movement over climate change. Not even remotely. In fact, if atheists are indeed the more rational and sensible humanists that we think we are and claim to be, we should be taking a leading role in battling climate change. Sadly my atheist community as a whole is not showing such wisdom and leadership.

If there is one litmus test in the next Presidential election, it should be climate change. Not abortion, or gender equality, or a Wall, or fealty to Capitalism, or anything else… because more importantly, climate change.

In a recent interview Presidential candidate Pete Buttigieg rattled off ten or so things he would prioritize as President. Not one was climate change. When asked about climate change, he made a dutiful perfunctory comment about it. This should disqualify him utterly. Even if he does make stronger comments about climate change later, I would have no confidence that he is sufficiently sincere.

In fact, at this time, the ONLY candidate we should be strongly considering is Washington State Governor Jay Inslee. He is the only candidate showing the intelligence, leadership, and long-term thinking that we literally cannot live without. Others might make progress on health care, or immigration, or jobs, or LGBTQ rights. But really, will any of that ultimately matter if we fail to mitigate the worst impacts of catastrophic climate change?

Here’s what you should do. Ask your candidates at all levels about what they will do about climate change and make it an unequivocal priority. Be willing to put aside your own issues in order to work together to make progress on climate change. Demand that the social and religious organizations that you affiliate with push for action on climate change.

And finally, in the signature line of your emails, add the line “But more importantly climate change.” This will remind both you and your recipients that while whatever we are discussing is important, it does not begin to compare with climate change.

 

The Art of Technical Lying

We discover the fine art of technical lying at a young age. It might be more accurately described as technical truth-telling, but technical lying is catchier and more descriptive. It is the practice of lying by making false statements that are technically true or at least defensible. One example of technical lying might be when our parents demand to know whether you went to that unsupervised party at Kim’s house. With feigned affront you lie and insist you did not. When confronted with evidence, you claim that you didn’t really lie because it wasn’t technically a party, it was a “get-together”; you didn’t go because technically you were “taken” by Josh on his bike; and in any case it wasn’t Kim’s house since technically her parents are the ones that own it.

We all spin the truth and try to mislead and misdirect through technical nuances when it serves us, but this becomes formalized in the legal sphere where lawyers are taught to exploit technical lying in depositions and court testimonies. They coach clients to answer questions with short answers, in part to leave open ways to later claim they did not perjure themselves using some technical rationale.

Fortunately, parents generally know when their kids are playing these games and usually don’t let them get away with it. Sometimes technical lying can help in legal situations, but lawyers, like our parents, are very good at exposing such obfuscation. In legal proceedings there is usually sufficient opportunity to follow up with probing questions that trip up and expose technical lies. Lawyers are happy to play this game in court because when a pattern of technical lying is exposed this way, it generally backfires badly on the liar, harming their credibility and resulting in a worse outcome for them.

But technical lying isn’t limited to family squabbles and court proceedings. It is rampant in the public sphere and in the semi-formal environment of Congressional hearings. In responding to questions from the Press, some people engage in serial technical lying. Even in testimony to Congress, these individuals engage in technical lying with seeming impunity.

Did the President offer you a pardon? He did not. No I did not lie because it wasn’t the President, it was his lawyer and it wasn’t an offer, it was a possible offer, and it wasn’t a pardon, it was “everything in his power.”

The reason this pattern of technical lying is so frustrating is that it can be quite effective. It can seriously frustrate and delay efforts to arrive at the truth in situations in which follow-up questioning is limited and delayed. In these settings, to delay temporarily is to win. This is the case for public statements, media interviews, and to a large extent even Congressional hearings. These venues are disparate, and enough time goes by between follow-up questions that the narrative can keep changing, the goalposts keep moving, and impartial observers have difficulty recognizing the extent of the gamesmanship being conducted over time.

In an age in which truth is under methodical attack using every possible form of deceit and deception, technical lying is rampant. It is particularly well-suited to frustrate efforts by society to arrive at truth outside of courtroom walls. Technical lying has grown into an art form celebrated by proud dissemblers like Roger Stone.

In this ridiculous era of Trump, we have had to become far more willing to call a lie a lie. This must include lies in all their forms, and for Trump and all those who lie incessantly for him, a technical truth is most likely just another type of lie.

 

Cloud Angels

A recent article in People Magazine was entitled Texas Driver Spots ‘Spectacular’ Cloud Shaped Like an Angel: ‘How Awesome Is That?’ (see here).

Although the question was rhetorical – well, actually it was meant as more a statement than a question – I’ll answer it anyway.

Not very!

The reality is that at any given moment of any day from any point anywhere on Earth, there are clouds that we could imagine bear some resemblance to something other than a billowy mass of condensed water vapor floating in the atmosphere.

Some of these clouds might resemble boats, or alligators, or elephants, or pretty much anything really. The limit is our imaginations. So it is fun, but not particularly newsworthy, to take note of the wacky shapes that clouds happen upon. That is, unless the image is religious, and in that case it is apparently quite newsworthy.

The truth is that of all the clouds, or pieces of toast, or rotten peaches, or paint stains that look like something, we don’t get really excited about these random resemblances unless they resemble an angel, or Jesus, or Mother Mary, or some vague Saint. All this random stuff is just random, unless it has a religious connotation. In that case, random stuff is inspiring, proof of god’s hand in the world, miraculous, and fascinatingly newsworthy.

This all speaks to our powerful mental ability to create patterns that conform to our particular confirmation biases. It also speaks to our intense desire for, and interest in, any confirmation of our religious biases in particular.

And I can see how a cloud pattern, or some lichen on a rock, can create powerful imagery. I had one such experience.

I was on the beach in Costa Rica watching baby sea turtles dauntlessly plunge into the ocean, only to be thrown back onto the sand over and over again by the uncaring waves. It was late afternoon and I glanced up, only to stare in wonder at the sky. Directly in front of me were the very gates of heaven. A glowing pathway led up from directly before me to a shimmering cloud platform. Upon it stood two gleaming pearly gates, connected by a vibrant golden archway, highlighted by dramatic halos of light. Within the great arch, in the distance, was a glowing point of light so divine that it could only have been the glow of god almighty.

The sight was so photo-realistically detailed and delineated with vibrant color and perfect proportions that it made the Texas cloud angel look like a child’s watercolor. I gaped in wonder for a moment before I thought to reach for my camera. But by the time I fumbled to work it, the lines had begun to blur, the light to diminish, and the effect to become far more abstract. That singular moment was past. Within minutes the gates of heaven were once again just one more set of abstract cloud shapes.

Given that experience, I can understand how primitive people might be so inspired as to believe they had actually glimpsed a heavenly place revealed to them in the sky. I can understand how they might have taken this as proof of heaven. Or, perhaps, thousands of years ago someone glimpsed a sight very similar to my own and created our modern imagery of heaven based upon that one powerful awe-inspiring moment.

But what I cannot understand and cannot excuse is any modern person today believing that some vaguely angel-shaped cloud is particularly inspiring or reassuring, let alone a message from god. And I find it doubly disappointing that a news outlet, even one that merely reports human interest stories, would preferentially pick out these kinds of “sightings” to report, thereby depositing yet another straw of religious delusion on the already straining back of our culture’s reason and rationality.

 

Our Amazing Yet Deeply Flawed Neural Networks


Back in the 1980s I did early work applying neural network technology to paint formulation chemistry, and that experience gave me fascinating insights into how our brains operate. A computer neural network is a mathematically complex program that does a simple thing. It takes a set of training “facts” and an associated set of “results,” and it learns how they connect by computing weighted connections between them. Once the network has learned how to connect these training facts to the outputs, it can take any new set of inputs and predict the outcome, or it can predict the best set of inputs to produce a desired outcome.
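The learning loop described above can be sketched in a few lines of Python. This is a minimal illustrative toy, a single neuron trained by gradient descent, and not the actual paint-formulation system mentioned in the text:

```python
# A minimal "neural network": one neuron with a weight and a bias,
# trained to connect input "facts" to output "results" by repeatedly
# strengthening or weakening its connection after each example.
def train(examples, epochs=2000, lr=0.01):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, target in examples:
            pred = w * x + b      # forward pass: the network's guess
            err = pred - target   # how wrong the guess was
            w -= lr * err * x     # adjust the connection weight
            b -= lr * err         # adjust the bias
    return w, b

# Training "facts" that happen to follow the rule y = 2x + 1.
w, b = train([(1, 3), (2, 5), (3, 7)])
print(round(w * 4 + b))  # predicts the outcome for an unseen input: 9
```

Once trained, the network generalizes: it was never shown the input 4, yet it confidently predicts the outcome, exactly as described above.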

Our brains do essentially the same thing. We are exposed to “facts” and their associated outcomes every moment of every day. As these new “training sets” arrive, our biological neural network connections are physically weighted. Some become stronger, others weaker. The more often we observe a connection, the stronger that neural connection becomes. At some point it becomes so strong that it becomes undeniably obvious “common sense” to us. Unreinforced connections, like memories, become so weak they are eventually forgotten.

Note that this happens whether we know it or not and whether we want it to happen or not. We cannot NOT learn facts. We learn language as children just by overhearing it, whether we intend to learn it or not. Our neural network training does not require conscious effort and cannot be “ignored” by us. If we hear a “fact” often enough, it keeps getting weighted heavier until it eventually becomes “undeniably obvious” to us.

Pretty amazing, right? It is. But here is one crucial limitation. Neither computer nor biological neural networks have any intrinsic way of knowing whether a training fact is valid or complete nonsense. They judge truthiness based only upon their weighting. If we tell a neural network that two plus two equals five, it will accept that as a fact and faithfully report five, with complete certainty, every time it is asked. Likewise, if we connect spilling salt with something bad happening to us later, that becomes a fact to our neural network, of which we feel absolutely certain.
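This limitation is easy to demonstrate with a toy single-neuron network, trained here on the deliberately false “fact” that two plus two equals five. The network has no notion of truth; it simply weights its connection until the wrong answer becomes “certain.” A hypothetical sketch:

```python
# Train a one-weight "network" on a false fact: input 4 -> output 5.
# It has no way to know the fact is nonsense; it just fits it.
def train(examples, epochs=1000, lr=0.05):
    w = 0.0
    for _ in range(epochs):
        for x, target in examples:
            err = w * x - target
            w -= lr * err * x   # reinforce the (false) connection
    return w

w = train([(2 + 2, 5)])   # the training "fact": 2 + 2 "equals" 5
print(w * (2 + 2))        # the network's confident answer: ~5.0
```

The network converges on the weight that makes the lie come out exactly right, and will report it with the same confidence as any true fact.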

This flaw wasn’t too much of a problem during most of our evolution, as we were mostly exposed to real, true facts of nature and the environment. It only becomes an issue when we are exposed to abstract symbolic “facts,” which can be utter fantasy. Today, however, most of what is important to our survival is not a matter of “natural” facts that can be validated by science. It is a matter of conceptual ideas, which can be repeated and reinforced in our neural networks without any physical validation. Take the idea of a god as one perfect example. We hear that god exists so often that our “proof of god” pathways strengthen to the point that we see proof everywhere and god’s existence becomes intuitively undeniable to us.

This situation is exacerbated by another related mental ability of ours: rationalization. Since a neural network can happily accommodate any “nonsense” facts, regardless of how contradictory they may be, our brains have to be very good at rationalizing away any logical discrepancies between them. If two strong network connections logically contradict each other, our brains excel at fabricating some reason, some rationale, to explain how that can be. When exposed to contradictory input, we feel disoriented until we rationalize it somehow. Without that ability, we would be paralyzed and unable to function.

This ability of ours to rationalize anything is so powerful that even brain lesion patients who believe they only have half of a body will quickly rationalize away any reason you give them, any evidence you show them, that proves they are wrong. Rationalization allows us to continue to function, even when our neural networks have been trained with dramatically nonsensical facts. Further, once a neural network fact becomes strong enough, it can no longer be easily modified even by contradictory perceptions, because it filters and distorts subsequent perceptions to accommodate it. Nor can it be easily modified by our memories, as our memories are recreated in accordance with those connections every time we recall them.

As one example to put all this together, when I worked in the Peace Corps in South Africa a group of high school principals warned me to stay indoors after dark because of the witches that roam about. I asked some questions, like have you ever personally seen a witch? No, was the answer, but many others whom we trust have told us about them. What do they look like, I asked. Well they look almost like goats with horns in the darkness. In fact, if you catch one they will transform into a goat to avoid capture.

Here you clearly see how otherwise smart people can be absolutely sure that their nonsensical “facts” and rationalizations are perfectly reasonable. What you probably don’t see is the equally nonsensical rationalizations of your own beliefs in god and souls and angels or other bizarre delusions.

So our neural networks are always being modified, regardless of how smart we are, whether we want them to be or not, whether we know it or not, and those training facts can be absolutely crazy. But our only measure of how crazy they are is our own neural network weighting, which tells us that the strongest connections must be the most true. Further, our perceptions and memories are modified to remain in alignment with that programming, and we can fabricate any rationalization needed to explain how our belief in even the most outlandish idea is really quite rational.

In humanity’s early days, we could live with these inherent imperfections. They actually helped us survive. But the problems that face us today are mostly in the realm of concepts, symbols, ideas, and highly complex abstractions. There is little clear and immediate feedback from the natural world to moderate bad ideas. Therefore, the quality of our answers to those problems and challenges is entirely dependent upon the quality of our basic neural network programming.

The scientific method is a proven way to help ensure that our conclusions align with reality, but science can only be applied to empirically falsifiable questions. Science can’t help much with most of the important issues that threaten modern society like should we own guns or should Donald Trump be President. Our flawed neural networks can make some of us feel certain about such questions, but how can we be certain that our certainty is not based on bad training facts?

First, always try to surround yourself with “true and valid” training facts as much as possible. Religious beliefs, New Age ideas, fake news, and partisan rationalizations all fall under the category of “bad” training facts. Regardless of how well you know they are nonsense, if you are exposed to them you will get more and more comfortable with them. Eventually you will come around to believing them, no matter how smart you think you are; it’s simply a physical process, like the effects of eating too much fat.

Second, the fact that exposing ourselves to nonsense is so dangerous gives us hope as well. While it’s true that deep network connections, beliefs, are difficult to change, it is a fallacy to think they cannot change. Indoctrination works, brainwashing works, marketing works. Repetition and isolation from alternative viewpoints, as practiced by Fox News, works. So we CAN change minds, no matter how deeply impervious they may seem, for the better as easily as for the worse. Education helps. Good information helps.

There is a method called Feldenkrais which can be practiced to become aware of our patterns of muscle movement, and then to strip out “bad” or “unnecessary” neural network programming to improve athletic efficiency and performance. I maintain that our brains work in essentially the same way as the neural networks that coordinate our complex movements. As in Feldenkrais, we can slow down, examine each tiny mental step, become keenly aware of our thinking patterns, identify flaws, and correct them. If we try.

Third, rely upon the scientific method wherever you can. Science, where applicable, gives us a proven method to bypass our flawed network programming and compromised perceptions to arrive at the truth of a question.

Fourth, learn to quickly recognize fallacies of logic. This can help you to identify bad rationalizations in yourself as well as in others. Recognizing flawed rationalizations can help you to identify bad neural programming. In my book Belief in Science and the Science of Belief, I discuss logical fallacies in some detail, as well as going deeper into all of the ideas summarized here.

Finally, just be ever cognizant and self-aware of the fact that whatever seems obvious and intuitive to you may in fact be incorrect, inconsistent, or even simply crazy. Having humility and self-awareness of how our amazing yet deeply flawed neural networks function helps us to remain vigilant for our limitations and skeptical of our own compromised intuitions and rationalizations.

The Multiverse is Bigger than God

Our gods used to be gods of specific things: the sky, the sea, war, love. Then God took over and became the god of everything. But our understanding of “everything” keeps expanding, and as it does, our fanciful notion of God has to expand along with it to remain ever beyond the limits of mere science.

The visible horizon of our observable universe is 46.5 billion light years away in any direction. That is an immense distance, and this visible sphere around us contains about 100 billion galaxies, each with perhaps 100 billion stars. Our God of everything created all that too, presumably just for us to look at.

But wait, there’s more, much more. Today we understand that our universe is almost certainly unimaginably larger than what we can observe. It is perhaps 100 billion trillion times larger than our observable universe. That makes what we can see just the tiniest mote of dust in our greater universe. Within our observable universe we can at least look into the sky and see what happened in the distant past. We cannot even see out into the darkness beyond that. But since it apparently exists, believers have no choice except to inflate God once more. God presumably created all that inaccessible space beyond the horizon as well, and just for us.

It gets better. Now we are beginning to understand that God apparently created an infinite multiverse just for us as well. I first recall being fascinated by the idea of multiple universes in 1966 when Mr. Spock met Captain Kirk’s evil counterpart from an alternate universe (see here). But just as Star Trek communicators became everyday reality, the science fiction of multiple universes has become legitimate science.

There are many forms that the multiverse may take, but for now let it suffice to think of an infinite number of universes just like ours, maybe isolated in pockets of space, maybe superimposed upon each other, maybe both. Their infinity extends through both time and space. This infinite multiverse is not static. In it (if the word “in” even applies to an infinite space) universes appear, grow old, and die. Each is born with a particular set of fundamental parameters. Only a relatively tiny (but still infinite) fraction have parameters in the “Goldilocks” range that allow organized structures. In a tiny fraction of those, life is possible. The rest are stillborn or survive for a short while as unsustainable regions of chaos.

How can it get more mind-blowing? Well, the inescapable logical conclusion is that in an infinite multiverse, everything that could possibly happen must happen. For example, there must be a universe in which every possible variation of our own exists; in fact, there must be an infinite number of each possible variation, infinite numbers of each of us.

Whatever form it takes, we become even more insignificant within the time-space grandeur of the multiverse. So our notion of God must once again expand dramatically to exceed even the non-existent bounds of an already infinite multiverse in order to remain the unbounded God of all things. And of course God created that infinite multiverse, so far beyond our ability to grasp, let alone interact with, just for us infinitesimal humans.

I talk about god here knowing full well that it is of course completely silly to do so. I might as well talk about how our notion of Santa Claus must expand to encompass the belief that he has to deliver Christmas presents to all children in the multiverse in one night. Yet, unfortunately, we do focus our attention on our fantasy of god whenever these cosmological discussions take place.

Some “religious scholars” try desperately to keep god relevant in the face of our growing awareness by arguing that in a multiverse in which all things are possible, god must exist somewhere. In an otherwise decent article (see here), author Mark Vernon perpetuates this fallacy by repeating that since “everything is possible somewhere … it would have to conclude that God exists in some universes.”

This will certainly keep getting repeated, but it is simply not a correct interpretation of the science to say that in a multiverse “everything is possible.” This is a perversion of the correct formulation, which is “everything possible must happen.” These are completely different ideas. Any particular universe is still governed by its own physics, and there is a limit to the possible physics of any given universe. Impossible things, like gods and ghosts, cannot happen in any universe.

And even if some universe had some being approaching a god, it would still not be an omnipotent god of everything and it would certainly not be our god. Therefore I am not sure how claiming that a God exists in some other universe does anything but admit that one does not exist in our own.

So what is the most rational of the possible irrational responses for someone clinging to their belief in god in the face of a multiverse? The best would be simply to claim that god created the multiverse and not even try to invoke any pseudo-scientific arguments. As you always have, just keep expanding your definition of god to supersede whatever new boundaries science reveals.

But really, adding God to the multiverse is simply adding fake infinity on top of real infinity. Like infinity plus infinity, the extra infinity is entirely superfluous and unnecessary. And what does it add to place God beyond infinity? It only replaces the insistence that something had to create the multiverse with an acceptance that nothing had to create God. It’s silly, especially given the fact that our limited concept of “before” has little relevance in an infinite multiverse.

Better yet would be to finally give in and acknowledge that the multiverse has rendered your god small and insignificant and kind of pathetic. God is like a quaint old Vaudeville act that can no longer compete with huge 3-D superhero blockbusters, and looks silly trying. Back in the day, it might have been an understandable conceit to believe that God created the Earth just for us… or even maybe the solar system. But the level of conceit required to believe that some God created the entire multiverse just for us is wildly absurd. The idea that such a God would be focused on us is insanely narcissistic.

The multiverse forces God to grow SO large, that it swells him far beyond any relevance to us or us to him.

So abandon your increasingly simplistic idea of god and find comfort, wonder, and inspiration in our incredible multiverse. You do not need to feel increasingly insignificant and worthless in this expanding multiverse. You don’t need God to give you a phony feeling of significance and meaning within it. All it takes is the flip of a mental soft-switch and you can find comfort and wonder and meaning in our amazing multiverse. It’s all just in your head after all.

I do not share the pessimism of some that we can never “see” or understand the multiverse. My working assumption is that even the greater multiverse is our cosmos, that it is knowable. If we survive Climate Change, we may eventually understand it more fully through indirect observations or through the magical lens of mathematics. Until then, if you are intrigued and stimulated by these real possibilities, I highly recommend that you read the excellent overview article by Robert Lawrence Kuhn (see here).

Atheists Can Be Deluded Too

As webmaster for New York City Atheists (see here), I recently found myself on a mailing list for a man named Michael Roll. While he considers himself an atheist, Mr. Roll is also a self-professed spiritualist who has undertaken a personal mission to sell his particular fantasy as a non-religious, science-based idea. Since the 1960s, his “campaign for philosophical freedom” (see here) has tried to promote his spiritualist delusions.

Following are just a few of the ideas that he puts forth with great intellectual soberness and gravitas:

  • There is no god, but there is an afterlife that is part of the natural world. This spirit world exists on a “different frequency” and accounts for the unaccounted 95% of the energy in our universe.
  • While the religious beliefs of others are nonsense, his essentially identical beliefs are based on “experiments and mathematical models.”
  • His evidence is largely based on the “research” conducted by Sir William Crookes between 1871 and 1874. Crookes observed the manifestations produced by several “materialization mediums,” which he claimed proved the existence of a vast afterlife (see here).
  • The media is in cahoots with the Vatican in a conspiracy to discredit legitimate science on the paranormal including work linking subatomic physics with the afterlife (see here).
  • According to Roll, “famous television scientist Professor Brian Cox […] is let loose on the public because his false model of the universe is no danger to the Vatican and their powerful materialistic agents.”
  • Roll also states, “2018 could just be the year that a few billion people will find out that the great philosopher Jesus started from the correct scientific base that we all have a soul that separates from the dead physical body. But most important of all, that Einstein started from the incorrect scientific base that the mind dies with the brain.”

I am not going to waste any of your time refuting all of Roll’s clearly delusional fantasies, any more than I would waste your time refuting the Narnia-really-exists theory. Here is a video in which you can hear his “logic” directly from him (video here). It particularly saddens me that Roll appears to be a student of Carl Sagan and quotes him extensively, yet manages to do so in a way that is a blasphemy to everything Dr. Sagan stood for (see here).

What interests me more than debunking this one clearly delusional individual is the more general observation that atheists are not immune to magical thinking. While atheists may not believe in god, they may certainly believe in lots of other equally nonsensical ideas. Just calling oneself an atheist does not immunize one from delusions. Michael Roll’s secular form of rationalizing his magical thinking with “logic” is no different than the “logic” put forth by Ken Ham to rationalize his biblical fantasy (see here).

Atheist delusions can be unique to an individual, but are more often propagated by non-religious movements and fads. Spiritualism and New Age thinking are examples of non-religious structures of fantastical delusions about the world.

Even smart, logical, sophisticated thinkers are not insulated from spiritual delusion. Sir Arthur Conan Doyle, the brilliant creator of the paragon of rational thought, Sherlock Holmes, was another passionate proponent of spiritualism. He clung to his belief, even after Houdini proved to him that his magic tricks were merely tricks. Even after that irrefutable evidence, Doyle refused to be swayed from his insistence that they proved spiritualism was real (see here).

That these kinds of spiritual belief systems can so compromise the thinking of someone like Conan Doyle demonstrates that they are both highly seductive and tenacious. Many of my atheist friends do not share my concern about these non-religious movements because they do not have the institutional power of an organized church behind them. Fair enough. However, they still contribute significantly to a culture in which magical thinking is encouraged and rational thought diminished. They legitimize and normalize public debate on important matters in which “alternative facts” are even entertained.

I argue that while misguided atheists like Michael Roll claim not to believe in god, their belief in essentially the same kind of pseudoscientific thinking supports faith-based thinking in all its forms. To attempt to use phony science fiction to rationalize a delusion does not make it less harmful than a purely religious belief. Indeed, the false invocation of the facade of science may in fact make the delusion far more harmful and damaging.

In my book “The Science of Belief” (see here), I tried hard not to focus on religious thinking specifically, but on all non-fact-based thinking in general. My thesis was that we cannot successfully attack religion or other secular forms of magical thinking directly. Rather, we must teach real, authentic scientific ways of thinking and approaching the unknown. If we succeed at that, religion and spiritualism will crumble away to dust on their own.