
Three Major Flaws in Your Thinking

Today I’d like to point out three severe and consequential flaws in your thinking. I know, I know, you’re wondering how I could possibly presume that you have major flaws in your thinking. Well, I can safely presume so because these flaws are so innate that it is a statistical certainty that you exhibit them much of the time. I suffer from them myself; we all do.

Our first flaw arises from our assumption that human thinking must be internally consistent; that there must necessarily be some logical consistency to our thinking and our actions. This is reinforced by our own perception that whatever our neural networks tell us, no matter how internally inconsistent, nevertheless seems totally logical to us. But the reality is that our human neural networks can accommodate any level of inconsistency. We learn whatever “training facts,” good or bad, are presented to us sufficiently often. Our brains have no inherent internal consistency checks beyond the approval and rejection patterns they are taught. For example, training in science can improve these check patterns, whereas training in religion necessarily weakens them. But nothing inherently prevents bad facts and connections from getting introduced into our networks. (Note that the flexibility of our neural networks to accommodate literally anything was an evolutionary advantage for us.)
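To make the “training facts” metaphor concrete, here is a deliberately crude sketch in Python. It is a toy illustration only, not a claim about real neural machinery: repetition is its only criterion for belief, and nothing checks the facts against each other.

```python
from collections import Counter

class ToyNetwork:
    """A toy associative learner: it absorbs whatever 'training facts'
    it is shown, with no internal consistency check of any kind."""

    def __init__(self):
        self.associations = Counter()

    def train(self, fact):
        # Repetition deepens the pathway; nothing vets the fact itself.
        self.associations[fact] += 1

    def believes(self, fact):
        # A fact seems "obviously true" once it has been seen often enough.
        return self.associations[fact] >= 3

net = ToyNetwork()
for _ in range(5):
    net.train("the Earth is round")
    net.train("the Earth is flat")  # directly contradicts the line above

print(net.believes("the Earth is round"))  # True
print(net.believes("the Earth is flat"))   # True -- held just as firmly
```

Both contradictory “facts” end up held with equal conviction, which is precisely the point.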

Our second flaw is that we have an amazing ability to rationalize whatever random facts we are sufficiently exposed to so as to make them seem totally logical and consistent to us. We can maintain unquestioning certainty in any proposition A, but at the same time be perfectly comfortable with proposition B, even if B is in total opposition to, and incompatible with, proposition A. We easily rationalize some explanation to create the illusion of internal consistency and dismiss any inconsistencies. If our network is repeatedly exposed to the belief that aliens are waiting to pick us up after we die, that idea gradually becomes more and more reasonable to us, until eventually we are ready to drink poison. At each point in the deepening of those network pathways, we easily rationalize away any logical or empirical inconsistency. We observe extreme examples of this in clinical cases, but such rationalization affects all our thinking. (Note that our ability to rationalize incoherent ideas so that they seem perfectly coherent to us was an evolutionary necessity to deal with the problems produced by flaw #1.)

The third flaw is that we get fooled by our perception of, and need to attribute, intent and volition to our thoughts and actions. We imagine that we decide things consciously when the truth is that almost everything we think and do is largely the instantaneous unconscious output of our uniquely individual neural network pathways. We don’t so much arrive at a decision as we rationalize a post-facto explanation after we realize what we just thought or did. Our consciousness is like the General who follows the army wherever it goes and tells himself he is in charge. We feel drawn to a Match date. Afterwards, when we are asked what attracted us to that person, we come up with something like her eyes or his laugh. But the truth is that our attraction was so automatic and so complex and so deeply buried that we really have no idea. Still, we feel compelled to come up with some explanation to reassure us that we made a reasoned, conscious decision. (Certainly our illusion of control is a fundamental element of what we perceive as our consciousness.)

So these are our three core flaws. First, our brains can learn any set of random facts and cannot help but accept those “facts” as undeniable and obvious truths. Second, we can and do rationalize whatever our neural network tells us, however crazy and nonsensical, so as to make us feel OK enough about ourselves to at least allow us to function in the world. And third, when we ascribe post-facto rationalizations to explain our neural network conclusions, we mistakenly believe that the rationalizations came first. Believing otherwise conflicts unacceptably with our need to feel in control of our thoughts and actions.

I submit that understanding these flaws is incredibly important. Truly incorporating an understanding of these flaws into your analysis of new information shifts the paradigm dramatically. It opens up powerful new insights into understanding people better, promotes more constructive evaluation of their thoughts and actions, and reveals more effective options for working with or influencing them.

On the other hand, failure to consider these inherent flaws misdirects and undermines all of our interpersonal and social interactions. It causes tremendous frustration, misunderstanding, and counterproductive interactions.

I am going to give some more concrete examples of how ignoring these flaws causes problems and how integrating them into your thinking opens up new possibilities. But before I do that, I have to digress a bit and emphasize that we are the worst judges of our own thoughts and conclusions. By definition, whatever our neural network thinks is what seems inescapably logical and true to us. Therefore, our first thought must always be: am I the one whose neural network is flawed here? Sometimes we can recognize this in ourselves, and sometimes we might accept it when others point it out, but most of the time it is exceedingly difficult for us to recognize, let alone correct, our own network programming. When our networks do change, it is usually through a process of which we are largely unaware, driven by repeated exposure to different training facts.

But just because we cannot fully trust our own thinking doesn’t mean we should question everything we think. We simply cannot and should not question every idea we have learned. We have learned the Earth is spherical. We shouldn’t feel so insecure as to question that, or be intellectually bullied into entertaining new flat Earth theories to prove our open-mindedness or scientific integrity. Knowing when to maintain confidence in our knowledge and when to question it is, of course, incredibly challenging.

And this does not mean we are all equally flawed or that we cannot improve. The measure is how well our individual networks comport with objective reality and sound reason. Some of our networks have more fact-based programming than others. Eliminating bad programming is not hopeless. It is possible, even irresistible, when it happens. Our neural networks are quite malleable given new training facts, good or bad. My neural network once told me that any young bald tattooed male was a neo-Nazi, that any slovenly guy wearing baggy jeans below his butt was a thug, and that any metro guy sporting a bushy Khomeini beard was an insecure, over-compensating douchebag. Repeated exposure to facts to the contrary has reprogrammed my neural network on at least two of those.

OK, back on point now. Below are some examples of comments we might say or hear in conversation, along with some analysis and interpretation based on an awareness of our three flaws. I use the variable <topic> to allow you to fill in the blank with practically anything. It can be something unquestionably true, like <climate change is real>, or <god is a fantasy>, or <Trump is a moron>. Alternatively, if you believe obvious nonsense like <climate change is a hoax>, or <god is real>, or <Trump is the greatest President ever>, using those examples can still help just as much to improve your comfort level and relations with the other side.

I don’t understand how Jack can believe <topic>. He is so smart!

We often hear this sort of perplexed sentiment. How can so many smart people believe such stupid things? Well, remember flaw #1. Our brains can be both smart and stupid at the same time, and usually are. There are no smart or stupid brains; there are only factually trained neural network patterns and speciously trained ones. Some folks have more quality programming than others, but that doesn’t prevent bad programming from sneaking in. It should be no surprise to find that otherwise smart people often believe some very stupid things.

Jill must be crazy if she believes <topic>.

Just like no one is completely smart, no one is completely crazy. Jill may have some crazy ideas that exist perfectly well alongside a lot of mostly sane ideas. Everyone has some crazy programming; we only consider people insane when the level of crazy passes some socially acceptable threshold.

I believe Ben when he says <topic> is true because he won a Nobel Prize.

A common variant of the previous sentiments. Ben may have won a Nobel Prize, he may teach at Harvard, and he may pen opinion pieces for the New York Times, so we should give him the benefit of the doubt when we listen to his opinions. However, we should also be cognizant that he may still be totally bonkers on any particular idea. Conversely, if someone is generally bonkers, we should be skeptical of anything they say but remain open to the possibility that they are reasoning more clearly than most on a particular issue. This is why we consider “argument by authority” to be a form of specious argument.

It makes me so mad that Jerry claims that <topic> is real!

Don’t get too mad. Jerry kinda can’t help it. His neural network training has resulted in a network that clearly tells him that <topic> must obviously be absolutely true. Too much Fox News, religious exposure, or relentless brainwashing will do that to anyone, even you.

How can Bonnie actually claim that she supports <topic> when she denies <topic>???

First, recall flaw #1. Bonnie can believe any number of incompatible things without any problem at all. And further, flaw #2 allows her to rationalize a perfectly compelling reason to excuse any inconsistency.

Clyde believes in <topic> so he’ll never support <topic>.

Not true. Remember our flaws again. Clyde’s neural network can in fact accommodate one topic without changing the other one, and still rationalize them perfectly well. All it takes is exposure to the appropriate “training facts.” In fact, consistent with flaw #3, after his network programming changes, Clyde will maintain that he consciously arrived at that new conclusion through careful study and the application of rigorous logic.

Sonny is conducting a survey to understand why voters support <topic>.

Social scientists in particular should be more cognizant of this one. How often do we go to great lengths to ask people why they believe something or why they did something? But remember flaw #3. Mostly what they will report to you is simply their rationalization based on flaw #2. It may not, and usually doesn’t, have anything to do with their extremely complex neural network programming. That is why “subjective” studies designed to learn how to satisfy people usually fail to produce results that actually do influence them. Sonny should look for more objective measures for insight and predictive value.

Cher should support <topic> because it is factually supported and logically sound!

Appeals to evidence and logic often fail because people’s neural networks have already been trained to accept other “evidence” and to rationalize away contrary logic. It should be no surprise that they reject your evidence and conclusions, and it accomplishes nothing to expect Cher to see it, let alone to berate or belittle her when she does not.

And that brings us to the big reveal of this article…

There is a fourth flaw that is far worse than the other three we have discussed so far. And that is the flaw that most of us suffer from when we fail to integrate a deep awareness of flaws 1-3 into our thinking. We may not be able to completely control or eliminate flaws 1-3, but we can correct flaw #4!

This discussion may have left you feeling helpless to understand, let alone influence, our truth-agnostic neural networks. But it also presents opportunities. These insights suggest two powerful approaches.

The first approach is more long-term. We must gradually retrain flawed neural networks. This can be accomplished through education, marketing, advertising, example-setting, and social awareness campaigns, to name a few. None of these efforts need be direct, nor do they require any buy-in from the target audience. The reality of network training is that it is largely unconscious, involuntary, and automatic. If our neural networks are exposed to sufficient nonsense, they will gradually find that nonsense more and more reasonable. But the encouraging realization is that reprogramming works just as well – or better – for sound propositions. And to be clear, this can happen quite rapidly. Look at how quickly huge numbers of neural networks have been moved by a wide range of influence campaigns, from the latest fashion or music craze, to tobacco reduction, to interracial relationships.

The second approach can be instantaneous. Rather than attempt to reprogram neural networks, you force them to jump through an alternate pathway to a different conclusion. This can happen with just a tiny and seemingly unrelated change in the inputs, and the result is analogous to suddenly shifting from the clear perception of a witch silhouette to that of a vase. Your network paths have not changed, yet one moment you conclude that you clearly see a witch, and the next it becomes equally obvious that it is actually a vase. For example, when Karl Rove changed the name of legislation, he didn’t try to modify people’s neural network programming, he merely changed an input to trigger a very different output result.

I hope these observations have given you a new lens through which you can observe, interpret, and influence human behavior in uniquely new and more productive ways. If you keep them in mind, you will find that they inform much of what you hear, think, and say.

Anecdotal Evidence Shows

The titular phrase “anecdotal evidence shows that…” is very familiar to us – with good reason. Not only is it very commonly used, but it is subject to a great deal of misuse. It generally makes an assertion that something is probably true because there is some observed evidence to support it. While that evidence does not rise to the level of proof, it does at least create some factual basis for wishful thinking.

Anecdotal evidence is important. It is often the only evidence we can obtain. In many areas, scientists cannot practically conduct a formal study, or it would be ethically wrong to do so. It may simply be an area of study that no one is willing to fund. Therefore, even scientists often have no alternative but to base conclusions upon the best anecdotal data they have.

Anecdotal evidence is essential to making everyday decisions as well. We don’t normally conduct formal studies to see if our friend Julie is a thief. But if earrings disappear each time she visits, we have enough anecdotal evidence to at least watch her closely. Likewise, even court proceedings must often rely upon anecdotal evidence, which is slightly different from circumstantial evidence.

Knowing when anecdotal evidence is telling, when it is simply a rationalization for wishful thinking, and when it is the basis for an outright con job is not always easy. The fact that sometimes all we have to work with is anecdotal evidence makes it all that much more dangerous and subject to misuse and abuse.

All too often, anecdotal evidence is simply poor evidence. I once presented anecdotal evidence of ghosts by relating a harrowing close encounter that I had. The thing was, I totally made it up (see here). People don’t always intentionally lie when they share an anecdote, but those people who in good faith repeated my story to others were nevertheless sharing bad anecdotal information.

Testimonials are a form of anecdotal claim. Back in the 1800s, a snake oil salesman would trot out an accomplice to support his claims of a miracle cure. Today we see everyone from television preachers to herbal medicine companies use the same technique of providing anecdotal evidence through testimonials. Most of these claims are no more legitimate than my ghost story.

We also see anecdote by testimony performed almost daily in political theatre. The President points to the crowd to identify a person who has benefitted greatly from his policies. In Congressional hearings, supposedly wronged parties are trotted out to give testimony about how badly they were harmed by the actions of the targeted party. Both of these individuals are put forth as typical examples, yet they may be exceedingly unusual.

So here’s the situation. We need anecdotal evidence; it is often all we have to work with when important decisions must be made. However, basing decisions on anecdotal information is also fraught with risk and uncertainty. How do we make the wisest use of the anecdotal information that we must rely upon?

First, consider the source and the motive of the anecdote. If the motive is to try to persuade you to do something, to support something, to accept something, or to part with your cash, be particularly suspicious of anecdotal claims or testimonials. One great example is the Deal Dash commercials. You hear a woman claim that she “won” a large screen television for only $49. Sounds great, until you realize that the anecdote doesn’t tell how many bids she purchased to get it for $49, how much she wasted on other failed auctions, or how much was spent in total by the hundreds of people bidding on that item. Anecdotal evidence is not always an outright lie, but it can still tell huge lies by omission and by cherry-picking.
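To see just how large the omission can be, run some hypothetical numbers. The bid cost and price increment below are assumptions for illustration, not Deal Dash’s actual terms:

```python
# Hypothetical penny-auction arithmetic -- all terms assumed for illustration.
final_price = 49.00  # what the "winner" pays for the television
increment = 0.01     # assumed: each bid raises the price by one cent
bid_cost = 0.60      # assumed: each bid must be purchased up front

total_bids = final_price / increment  # bids placed by all bidders combined
bid_revenue = total_bids * bid_cost   # what all bidders spent on those bids

print(f"Bids placed: {total_bids:,.0f}")                      # 4,900
print(f"Spent on bids: ${bid_revenue:,.2f}")                  # $2,940.00
print(f"Total collected: ${bid_revenue + final_price:,.2f}")  # $2,989.00
```

Under those assumptions, the $49 television quietly brought in nearly $3,000, none of which appears in the testimonial.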

Second, consider the plausibility of the anecdote. If the anecdote claims to prove that ghosts exist, someone made it up. Likewise with god or miracles or angels or Big Foot. Just because someone reports something incredible, no matter how credible that person may be, demand credible evidence. As Carl Sagan pointed out, “extraordinary claims require extraordinary evidence.”

Third, consider the scope of the anecdotal claim. Does it make sweeping generalizations or is it very limited in scope? If the claim is that all Mexicans are rapists because one Mexican was arrested for rape, we end up with a Fallacy of Extrapolation, which is often the result of the misuse of anecdotal information.

Finally, consider the cost/benefit of the response to the anecdotal claim. If the anecdote is that eating yoghurt cured Sam’s cancer, then maybe it’s reasonable to eat more yoghurt. But if the anecdote is that Ed cured his cancer by ceasing all treatments, then perhaps that should be considered a far more risky anecdote to act upon.

Anecdotal information is essential. Many diseases such as AIDS have been uncovered by paying attention to one “anecdotal” case report. In fact, many of the important breakthroughs in science have only been possible because a keen-eyed scientist followed up on what everyone else dismissed as merely anecdotal or anomalous data.

Anecdotes are best used to simply make the claim that something may be possible, but without any claims as to how likely it is. For example, it may be that a second blow to the head has seemed to cure amnesia. However, this cannot be studied clinically and it is not likely to occur often enough to recommend it as a treatment. Still, sometimes it is extremely important to know that something has been thought to happen, no matter how uncertain and infrequent. If a severe blow to the head MAY have cured amnesia at least once, this can help to inform further research into it.

Don’t start feeling overwhelmed. We don’t actually need to stop and consciously analyze every anecdote in detail. Our subconscious pattern-recognition machines are quite capable of performing these fuzzy assessments for us. We only need to be sure to consciously internalize these general program parameters into our pattern recognition machines so that they produce sound conclusions when presented with claims that “anecdotal evidence shows.”

Time To Dump Linda

You have probably read articles that reference the famous Linda Study conducted by researchers Daniel Kahneman and Amos Tversky back in the early 1980s. In it, the researchers describe an outspoken person named Linda who is smart and politically active and who has participated in anti-nuclear demonstrations. They then ask the subject to indicate whether Linda is more likely to be a) a bank teller or b) a bank teller who is also an active feminist.

No direct evidence is given to indicate that Linda is either a bank teller or a feminist. She is smart so she might be a bank teller, and since she has been socially active she might be a feminist. But logically it is far more likely that Linda is only one of these things than that she is both. Yet most people, given the choices presented and regardless of education, answer that Linda is probably both a bank teller and a feminist. This is an example of the Conjunction Fallacy (see here), in which a person mistakenly believes that multiple conditions are more likely than a single one.
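The underlying arithmetic is just the conjunction rule of probability: the chance of two conditions holding together can never exceed the chance of either one alone,

\[ P(A \wedge B) = P(A)\,P(B \mid A) \le P(A), \]

since \( P(B \mid A) \) can be at most 1. Whatever we know about Linda, “bank teller and feminist” can never be more probable than “bank teller” alone.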

Although this study is frequently cited in popular science articles, the conclusions drawn from it have been strongly criticized or at least given more nuanced analysis (see here). Few popular ideas from science since the Heisenberg Uncertainty Principle have been so misused and overextended as the Linda Study. We really should stop reading so much into this study and cease abusing it so badly.

An example of one such popular science article describes research by Professor Keith Stanovich (see here). In his work he used the Linda Study methodology along with other tests to measure rationality. Although I do not know how well this popular science article represents the actual research by Stanovich, it suggests that the Linda Test is a strong indicator of rationality. I find that assertion very troubling.

First off, while the Linda Test does expose the Conjunction Fallacy, we are all susceptible to a huge number of logical fallacies. I document dozens of these in my book, “Belief in Science and the Science of Belief” (see here). While everyone should be taught to do better at recognizing and avoiding logical fallacies, failing one such test probably does not correlate strongly with irrational thinking.

If subjects were made aware that this was intended as an SAT-style logic gotcha, many would answer it in a more literal context. But we normally assume a broader scope of inference when answering this sort of question, and the pattern-recognition machines we call our brains are capable of all sorts of fuzzy logic that is completely independent of, and much broader than, strict mathematical logic. In the real world, it might well turn out that women like Linda are in fact more likely to be both bankers and feminists. Moreover, “both” is a far richer answer in the context of most real-world interactions. The more logically correct answer is less insightful and interesting.

This is not to suggest that we should become lax about adhering to principles of logic, but only to suggest that a simple “brain teaser” logic question is not a very powerful indicator of overall rationality. Furthermore, equating rationality to a fallacy recognition test diminishes the profound complexity and importance of rationality.

I suggest that there are far stronger indicators of rationality. Does the subject believe in God? Do they deny climate change? Do they subscribe to pseudoscientific nonsense? Is their thinking muddled by irrational New Age rationalizations? Do they insist the world is only 6,000 years old and that humans coexisted with dinosaurs (cough) Ken Ham see here (cough)?

Here’s the problem. All of these direct indicators are too entrenched and widespread to be overtly linked to irrationality. So instead we use safe, bland, non-confrontational indicators like the Linda Test that are at best weak and at worst undermine important and frank questions about rationality.

So dump Linda already in favor of far more meaningful measures of rationality!

Caution: Slippery Slope

The slippery slope is one of the most commonly invoked arguments, and its usage seems to be on the rise. One study found that the phrase is used in the media 7 times more frequently than it was just 20 years ago (see here).

These slippery slopes are bandied about quite routinely to sway sentiment and opinion. They are typically used to argue in opposition to something, and they work pretty well. Slippery slope arguments invoke fear, inaction, and even rejection of a proposition by suggesting that if you allow a not-so-bad thing to happen, it will lead to something-much-worse happening.

We hear examples of slippery slope arguments every day. Just a few include:

  • Physician-assisted suicide will open the door to the government pulling the plug on grandma to save Medicare dollars.
  • If we encourage contraception, sexual promiscuity will run rampant and immorality will destroy the fabric of our nation.
  • Legalizing gay marriage will result in incest, polygamy, bestiality, and the breakdown of the American family.
  • Pot smoking is the gateway to heroin addiction.
  • First they came for my gun; then they came for my liberty.

Like the examples above, most slippery slope arguments have extremely dubious connections between the actual and predicted events. Many are in fact completely ridiculous. Most slippery slope arguments are guilty of gross exaggeration and are a form of arguing to the extreme and to fear. They are also a form of false conclusion or invalid extrapolation that mistakenly assumes that rational lines cannot be drawn to halt any slippery slope. In short, they tend to violate a large number of basic tests of logical and factual validity.

Apart from appealing to emotion and fear, there is another big reason slippery slope arguments work so well. It is because they are often quite valid. “Give them an inch and they’ll take a mile” is a valid truism.

“First they came for…” is the granddaddy of slippery slope arguments. It traces back to a poem by Lutheran pastor Martin Niemöller that described the descent of Germany into Nazi atrocities. It was a cautionary message about political apathy that described an actual progression of attitudes and events. As a slippery slope argument, it was perfectly valid and substantiated. It was, however, a retrospective analysis, not a prediction. Nevertheless, today it gets adapted into slippery slope predictions for all sorts of unlikely and implausible outcomes.

This illustrates the fact that we can often only recognize a true slippery slope after we have slid down it. Still, being cognizant and wary of a slippery slope can help us to put on the brakes and avoid sliding too far. We should not necessarily avoid slippery slopes, but we certainly should be cautious and especially sure-footed when negotiating one.

There are criteria that we can use to evaluate the amount of legitimate concern to grant to any particular slippery slope argument. Are the causes and effects that it predicts really likely on the grounds of logic and evidence? Are there any valid precedents to support this slippery slope prediction? Is it only arguing to fear? Is it really likely that such a slippery slope would not be halted before it went too far?

Many, though not all, invalid slippery slope arguments are put forth by religious people to defend their practices and implement their beliefs in policy. This is understandable. When you do not have facts to argue, slippery slope arguments are very easy to fabricate and usually very effective.

However, we are all guilty of putting forth invalid slippery slope arguments at times when we think they support our position. Unfortunately, this rampant misuse tends to discredit valid slippery slope arguments. Paradoxically, they work very well most of the time, yet at other times they are dismissed out of hand, the phrase itself treated almost as a pejorative. Invalid slippery slope arguments are given far too much credibility, and consequently valid ones can be too easily dismissed.

We as fact-based thinkers must be especially cognizant of the rampant misuse of slippery slope arguments and only invoke them after careful consideration. When we do, we must be ready to defend those arguments with supporting evidence or rationale even as we question the basis for the slippery slope claims of others. And most importantly, we must resist the temptation to use specious slippery slope arguments even when they serve our own interests. If we do not reject all such arguments, even when they help our own cause, then we all suffer a diminution of logic, reason, and rational decision-making.

Pascal’s Folly

You’re probably familiar with Pascal’s Wager. It says that even if there is only an infinitesimally small possibility that god exists, the consequences of eternal reward or punishment far outweigh any earthly cost. Therefore, a smart person should “hedge their bets” and believe in god.
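In standard decision-theory terms (a sketch, with p standing for the probability that god exists and c for the earthly cost of belief), the wager’s seductive arithmetic looks like this:

\[ E[\text{believe}] = p \cdot \infty \;-\; (1 - p)\,c \;=\; \infty \quad \text{for any } p > 0, \]

so any nonzero p, however tiny, appears to make belief the winning bet. Everything that follows is about why that arithmetic is a trick.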

This is incredibly specious logic, but it nevertheless holds powerful sway over a great many people. Lots of otherwise intelligent thinkers put it forth as a reasonable argument, even as an inescapable, iron-clad rationale. But there are many flaws in it, including the assumption that belief is a harmless hedge. In the end it is no more than a silly trick of logic that can equally justify anything whatsoever. By this logic, for example, the proposition you received via email from a Nigerian Prince might be legitimate. However small the chance that it’s real, isn’t it worth responding? In fact, the Nigerian Prince is far more likely to be real than is god. Such a prince could actually exist.

But you might reject that argument with yet more pseudo-logic. You might argue that only heaven is sufficient reward to offer compelling enough stakes to accept Pascal’s Wager. And I then counter by suggesting right here and now that you cannot get into heaven unless you give up ice cream. Regardless of how small the chance that god only favors those who prove their faith by forsaking ice cream, Pascal’s Wager demands you give it up. But I doubt you would accept that wager and actually swear off ice cream.

We reject most such nonsense out of hand. Here is yet one more flaw of Pascal’s Wager: we apply it only to one extremely specific assertion and reject an infinite number of others, even though they are equally legitimate according to the logic put forth. You can counter yet again and say, well, I cannot play all possible lottery games, and I choose to play this one. Fair enough, so I can counter your counter. This logical fencing goes on and on unendingly without resolution. Playing mental games is something we humans do extremely well.

But why do we reject the same logic for pretty much anything else except the god proposition? We reject it because such logic is clearly stupid. And this brings us to yet another problem with Pascal’s Wager. There is in fact no possibility, none, nada, nil, zero, absolute zero, that god actually exists. Someone will actually win the $100M lotto, so that might be worth a $2 ticket by Pascal’s logic. But no one can actually go to heaven because it does not exist. And you cannot claim “but it could” unless you really are equally willing to ACT ON every other impossible proposition.

This illustrates a fundamental problem with logic. As powerful and important as it is, logic has limitations. Thinking that abstract logic necessarily reflects reality can be like a Chinese Finger Trap. I just read an interesting book by Jordan Ellenberg called “How Not to be Wrong: The Power of Mathematical Thinking” (see here). I do recommend it highly. But in it he twice states emphatically that “reason cannot answer the question of god.” If that is true, then it is our reason that is flawed. And it’s easy to see how. Ellenberg is a mathematician. Even a mathematician can become too familiar and comfortable with mathematical concepts like infinity that have no actual basis in reality. Our minds can conceive of symbols and rules of logic that cannot exist in reality. God is one of those. Pascal’s Wager is one of those. It is a human conceptual model that leads to seemingly incontrovertible but nevertheless absurd conclusions.

To illustrate the problem of blindly accepting a “logical” argument without insisting upon testing that logic against reality, consider Zeno’s Paradox. In the 5th century BC, Zeno gave us his famous paradox, which says that since we cannot arrive at our destination without infinitely cutting the remaining distance in half, we can never actually arrive at it. The “logic” of this proposition has confounded thinkers ever since, as it is extremely difficult to refute by the rules of logic. But a guy called Diogenes the Cynic disproved it by simply standing up and walking across the room.
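The “confounding” series is in fact perfectly tame. The halved distances sum to the full distance d, and at a constant speed v the halved travel times sum to a finite time:

\[ \sum_{n=1}^{\infty} \frac{d}{2^n} = d, \qquad \sum_{n=1}^{\infty} \frac{d/2^n}{v} = \frac{d}{v}. \]

Infinitely many steps, finite total; Diogenes arrives right on schedule.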

We humans have an amazing capacity to imagine things outside physical reality and to conceptualize logical systems of rationality that are imperfect in describing that reality or that extend beyond physical boundaries. But we have to be careful that our own cleverness does not make us stupid. Get up and walk across the room. God does not exist and religion is not a harmless hedge.

Here’s the bottom line. If your system of logic leads you to the conclusion that god might exist or that you cannot ever reach the other side of the room, it’s because your system of logic is flawed or over-extended, or you just want it to be true. If your logic cannot disprove flying pigs or gods, you are not thereby proving that god might actually exist. You are merely encountering the limitations or failings of your logic.

And to my agnostic atheist friends who refuse to say with certainty that god does not exist, if you allow for any possibility that god might exist, you have essentially lost the argument. You have admitted that Pascal’s Wager is reasonable and that belief and religion are therefore reasonable. You may think you can logic your way out of that shifting maze, but that only leads to endless ridiculous arguments that mostly serve to give undue credibility to the ridiculous.