Monthly Archives: April 2018

Our Amazing Yet Deeply Flawed Neural Networks


Back in the 1980s I did early work applying neural network technology to paint formulation chemistry, and that experience gave me fascinating insights into how our brains operate. A computer neural network is a mathematically complex program that does a simple thing. It takes a set of training “facts” and an associated set of “results,” and it learns how they connect by essentially computing connections of varying weights between them. Once the network has learned how to connect these training facts to the outputs, it can take any new set of inputs and predict the outcome, or it can predict the best set of inputs to produce a desired outcome.
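That learning loop can be sketched in a few lines of Python. This is a toy illustration, not the author's paint-formulation system: a single linear "neuron" whose connection weights are strengthened or weakened with each training example until its predictions match the observed results.

```python
def train(examples, epochs=2000, lr=0.1):
    """Learn weights w and bias b so that w . x + b matches each result y."""
    n = len(examples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, y in examples:
            pred = sum(wi * xi for wi, xi in zip(w, x)) + b
            err = pred - y
            # Strengthen or weaken each connection in proportion to
            # how much it contributed to the error.
            for i in range(n):
                w[i] -= lr * err * x[i]
            b -= lr * err
    return w, b

def predict(w, b, x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

# Training "facts" and "results": the outputs happen to follow 2*x0 + 3*x1,
# but the network is never told that rule -- it infers the weights.
examples = [([1, 0], 2), ([0, 1], 3), ([1, 1], 5), ([2, 1], 7)]
w, b = train(examples)
print(round(predict(w, b, [3, 2]), 1))  # predicts an unseen input, close to 12
```

Once trained, the same weights answer novel inputs, which is the predictive power described above; and because the mapping runs on simple arithmetic, one can also search over inputs to find a combination that produces a desired output.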

Our brains do essentially the same thing. We are exposed to “facts” and their associated outcomes every moment of every day. As these new “training sets” arrive, our biological neural network connections are physically weighted. Some become stronger, others weaker. The more often we observe a connection, the stronger that neural connection becomes. At some point it becomes so strong that it becomes undeniably obvious “common sense” to us. Unreinforced connections, like memories, become so weak they are eventually forgotten.

Note that this happens whether we know it or not and whether we want it to happen or not. We cannot NOT learn facts. We learn language as children just by overhearing it, whether we intend to learn it or not. Our neural network training does not require conscious effort and cannot be “ignored” by us. If we hear a “fact” often enough, it keeps getting weighted heavier until it eventually becomes “undeniably obvious” to us.

Pretty amazing, right? It is. But here is one crucial limitation. Neither computer nor biological neural networks have any intrinsic way of knowing whether a training fact is valid or complete nonsense. They judge truthiness based only upon their weighting. If we tell a neural network that two plus two equals five, it will accept that as a fact and faithfully report five with complete certainty every time it is asked. Likewise, if we connect spilling salt with something bad happening to us later, that becomes a fact to our neural network of which we feel absolutely certain.
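This garbage-in, garbage-out behavior is easy to demonstrate with a toy network of the same kind, here reduced to a single connection weight trained by gradient descent on one bogus "fact" (an illustrative sketch, not any particular library):

```python
# One connection weight, one bogus training "fact": 2 + 2 = 5.
# The network has no notion of arithmetic truth; it only reduces its error
# against whatever label it was given.
w = 0.0
for _ in range(100):
    pred = w * (2 + 2)                # the network's answer to "2 + 2"
    w -= 0.05 * (pred - 5) * (2 + 2)  # nudge the weight toward the label
print(round(w * (2 + 2), 2))  # -> 5.0, reported with complete "certainty"
```

The weight settles wherever the training data pushes it; nothing inside the mechanism flags the fact itself as false.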

This flaw wasn’t too much of a problem during most of our evolution as we were mostly exposed to real, true facts of nature and the environment. It only becomes an issue when we are exposed to abstract symbolic “facts” which can be utter fantasy. Today, however, most of what is important to our survival are not “natural” facts that can be validated by science. They are conceptual ideas which can be repeated and reinforced in our neural networks without any physical validation. Take the idea of a god as one perfect example. We hear that god exists so often that our “proof of god” pathways strengthen to the point that we see proof everywhere and god’s existence becomes intuitively undeniable to us.

This situation is exacerbated by another related mental ability of ours… rationalization. Since a neural network can happily accommodate any “nonsense” facts, regardless of how contradictory they may be, our brains have to be very good at rationalizing away any logical discrepancies between them. If two strong network connections logically contradict each other, our brains excel at fabricating some reason, some rationale, to explain how that can be. When exposed to contradictory input, we feel disoriented until we rationalize it somehow. Without that ability, we would be paralyzed and unable to function.

This ability of ours to rationalize anything is so powerful that even brain lesion patients who believe they only have half of a body will quickly rationalize away any reason you give them, any evidence you show them, that proves they are wrong. Rationalization allows us to continue to function, even when our neural networks have been trained with dramatically nonsensical facts. Further, once a neural network fact becomes strong enough, it can no longer be easily modified even by contradictory perceptions, because it filters and distorts subsequent perceptions to accommodate it. It can no longer be easily modified by even our memories as our memories are recreated in accordance with those connections every time we recreate them.

As one example to put all this together, when I worked in the Peace Corps in South Africa, a group of high school principals warned me to stay indoors after dark because of the witches that roam about. I asked some questions, like have you ever personally seen a witch? No, was the answer, but many others whom we trust have told us about them. What do they look like, I asked. Well, they look almost like goats with horns in the darkness. In fact, if you catch one they will transform into a goat to avoid capture.

Here you clearly see how otherwise smart people can be absolutely sure that their nonsensical “facts” and rationalizations are perfectly reasonable. What you probably don’t see is the equally nonsensical rationalizations of your own beliefs in god and souls and angels or other bizarre delusions.

So our neural networks are always being modified, regardless of how smart we are, whether we want them to or not, whether we know they are or not, and those training facts can be absolutely crazy. But our only measure of how crazy they are is our own neural network weighting which tells us that whatever are the strongest connections must be the most true. Further, our perceptions and memories are modified to remain in alignment with that programming and we can fabricate any rationalization needed to explain how our belief in even the most outlandish idea is really quite rational.

In humanity’s early days, we could live with these inherent imperfections. They actually helped us survive. But the problems that face us today are mostly in the realm of concepts, symbols, ideas, and highly complex abstractions. There is little clear and immediate feedback in the natural world to moderate bad ideas. Therefore, the quality of our answers to those problems and challenges is entirely dependent upon the quality of our basic neural network programming.

The scientific method is a proven way to help ensure that our conclusions align with reality, but science can only be applied to empirically falsifiable questions. Science can’t help much with most of the important issues that threaten modern society, like whether we should own guns or whether Donald Trump should be President. Our flawed neural networks can make some of us feel certain about such questions, but how can we be certain that our certainty is not based on bad training facts?

First, always try to surround yourself with “true and valid” training facts as much as possible. Religious beliefs, New Age ideas, fake news, and partisan rationalizations all fall under the category of “bad” training facts. Regardless of how well you know they are nonsense, if you are exposed to them you will get more and more comfortable with them. Eventually you will come around to believing them, no matter how smart you think you are; it is simply a physical process, like the cumulative effects of eating too much fat.

Second, the fact that exposing ourselves to nonsense is so dangerous gives us hope as well. While it’s true that deep network connections, beliefs, are difficult to change, it is a fallacy to think they cannot change. Indoctrination works, brainwashing works, marketing works. Repetition and isolation from alternative viewpoints, as practiced by Fox News, works. So we CAN change minds, no matter how deeply impervious they may seem, for the better as easily as for the worse. Education helps. Good information helps.

There is a method called Feldenkrais which can be practiced to become aware of our patterns of muscle movement, and to then strip out “bad” or “unnecessary” neural network programming to improve athletic efficiency and performance. I maintain that our brains work in essentially the same way as the neural networks that coordinate our complex movements. As in Feldenkrais, we can slow down, examine each tiny mental step, become keenly aware of our thinking patterns, identify flaws, and correct them. If we try.

Third, rely upon the scientific method wherever you can. Science, where applicable, gives us a proven method to bypass our flawed network programming and compromised perceptions to arrive at the truth of a question.

Fourth, learn to quickly recognize fallacies of logic. This can help you to identify bad rationalizations in yourself as well as in others. Recognizing flawed rationalizations can help you to identify bad neural programming. In my book Belief in Science and the Science of Belief, I discuss logical fallacies in some detail, as well as going deeper into all of the ideas summarized here.

Finally, just be ever cognizant and self-aware of the fact that whatever seems obvious and intuitive to you may in fact be incorrect, inconsistent, or even simply crazy. Having humility and self-awareness of how our amazing yet deeply flawed neural networks function helps us to remain vigilant for our limitations and skeptical of our own compromised intuitions and rationalizations.

Why the Facebook Problem Matters

Most of us know the basics of the Facebook scandal involving the political consulting firm, Cambridge Analytica, which has close ties with Steve Bannon and Robert Mercer. Cambridge Analytica obtained massive amounts of Facebook user data through an outside researcher in violation of that person’s usage agreement with Facebook. This data included not only public information, but private data as well as detailed “metadata” about user behavior. Cambridge Analytica analyzed this “big data” to perform “psychographic profiling” in order to conduct “psychological warfare” and “influence operations” to benefit the campaign of Donald Trump.

When you speak with people about this Facebook controversy, many of them will respond by saying that they don’t feel like it’s a very big deal. After all, when users sign up for Facebook, what do they expect? Of course their information is public. This is really a generational problem because people are far too promiscuous in exposing all of their private and personal information. It’s just the world we live in today. And anyway, Cambridge Analytica may have talked big but really had very little impact in the scheme of things. Of course Facebook shares data with advertisers and that benefits us all!

The thing is though, what is actually going on isn’t necessarily benign and it isn’t at all what Facebook users did or should have expected. Analysts keep saying that Facebook “shared” the data. Facebook doesn’t “share” user data. They sell it. They either profit from it directly or leverage it as tangible value to attract their lucrative partner relationships. The profit motive does not in itself corrupt the relationship, but it does potentially shift it from wholesome sharing toward unsavory exploitation.

And they don’t just sell your public postings. They sell subtle usage metrics that go way beyond what you intended to make public and what any one individual could ever see just by looking at your Facebook page. They sell deep metadata that can give insight into how you think and respond and thereby how to manipulate you. They create data sets that contain not only details about your behavior but they can link that behavior in real time to a huge number of other user behaviors and to events going on at that exact moment in the world and in the web of public consciousness.

Given the amount of data they accumulate, sophisticated programs can deduce things about you that you did not intend to make public. You posted that you had zucchini for lunch and like pandas? You might have just divulged your sexual orientation to these sophisticated big data systems. The amount of detail recorded and our ability to analyze, predict, and even modify behaviors based on that data is difficult for most of us to comprehend. What can be done goes way beyond just picking who to target with Cialis or Trump campaign ads. It includes detailed information that provides insight into your deepest psychology, how you think, how you respond, and how you can be manipulated.

Further, this deep metadata isn’t merely sold to well-meaning researchers or advertisers; it can make its way into the hands of unscrupulous and nefarious players like Cambridge Analytica. They can analyze all this data to determine things about you that you did not intend to make public. They can then use that information to influence how you think about critical matters like elections. If you are important enough, such organizations can even use private information extracted from your public activities to smear, discredit, or even blackmail you.

So the concern about the relationship between Facebook and Cambridge Analytica is not just a matter of silly people being too indiscreet with their postings. Concerns about the kind of activities exposed by the nexus between big data analysis and political activity are far more disturbing and potentially consequential. The ability to acquire massive amounts of metadata not intended to be public, and to analyze that big data alongside other external events to produce individualized predictive algorithms, moves innocent Facebook postings into the dark and scary region of mass undercover surveillance and psychological manipulation. Even if Cambridge Analytica came nowhere near achieving their ambitious goals in the Trump campaign, make no mistake, the ability to sway election outcomes is their business objective.

Facebook is not the only company profiting from massive information gathering. Google, Amazon and others are also sweeping up data that could be exploited by unscrupulous players like Cambridge Analytica. We need to take this seriously and take steps to ensure that big data works to empower and inform us, not to manipulate us. We need to push back now, and strongly, to ensure that this infant monster born of the information age is controlled before it grows into something powerful enough to ensure its own existence.