Tag Archives: Neural Networks

Understanding AI

Even though we see lots of articles about AI, few of us really have even a vague idea of how it works. It is super complicated, but that doesn’t mean we can’t explain it in simple terms.

I don’t work in AI, but I did work as a Computational Scientist back in the early 1980s. Back then I became aware of fledgling neural network software and pioneered its applications in formulation chemistry. While neural network technology was extremely crude at that time, I proclaimed to everyone that it was the future. And today, neural networks are the beating heart of AI, which is fast becoming our future.

To get a sense of how neural networks are created and used, consider a very simple example from my work. I took examples of paint formulations, essentially the recipes for different paints, as well as the paint properties each produced, like hardness and curing time. Each recipe and its resulting properties was a training fact, and all of them together formed my training set. I fed my training set into software to produce a neural network, essentially a continuous map of this landscape. The map could take quite a while to create, but once the neural network was complete I could enter a new proposed recipe and it would instantly tell me the expected properties. Conversely, I could enter a desired set of properties and it would instantly predict a recipe to achieve them.
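
To make that concrete, here is a minimal sketch of the same idea in modern terms, using scikit-learn’s MLPRegressor as a stand-in for the neural network software of that era. The ingredient fractions and property values below are invented purely for illustration.

```python
# A toy version of the paint-formulation example: train a small network on
# recipe -> property training facts, then query the resulting "map".
# All numbers are invented; MLPRegressor stands in for the 1980s software.
import numpy as np
from sklearn.neural_network import MLPRegressor

# Each training fact pairs ingredient fractions (resin, solvent, pigment,
# additive) with measured properties (hardness, curing time in hours).
recipes = np.array([
    [0.50, 0.30, 0.15, 0.05],
    [0.45, 0.35, 0.15, 0.05],
    [0.55, 0.25, 0.18, 0.02],
    [0.40, 0.40, 0.15, 0.05],
])
properties = np.array([
    [7.2, 4.0],
    [6.5, 5.5],
    [8.1, 3.2],
    [5.9, 6.8],
])

# Training builds the continuous map from recipe space to property space.
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(recipes, properties)

# Once trained, a new proposed recipe can be evaluated instantly.
new_recipe = [[0.48, 0.32, 0.16, 0.04]]
print(model.predict(new_recipe))  # predicted [hardness, curing_time]
```

The inverse direction, predicting a recipe from a desired set of properties, works the same way with the inputs and outputs swapped.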

So imagine adapting and expanding that basic approach. Imagine, for example, that rather than using paint formulations as training facts, you gathered training facts from a question/answer site like Quora, or a simple FAQ. You first parse each question and answer text into keywords that become your inputs and outputs. Once trained, the AI can then answer almost any question, even a previously unseen variation, as long as it lies upon the map that has been created.
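
As a rough sketch of that keyword approach, the toy example below reduces a few invented FAQ pairs to bag-of-words keyword vectors and trains a small network to map question keywords to answer keywords; a reworded question still lands near the right answer on the map.

```python
# Toy FAQ example: questions and answers are reduced to keyword vectors and a
# small network learns to map one to the other. The FAQ pairs are invented.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPRegressor

faq = [
    ("How do I reset my password?", "Click forgot password on the login page."),
    ("How do I change my email address?", "Open account settings and edit your email."),
    ("How do I delete my account?", "Contact support to delete your account."),
]
questions = [q for q, _ in faq]
answers = [a for _, a in faq]

# Parse each text into keyword (bag-of-words) vectors: inputs and outputs.
q_vec = CountVectorizer().fit(questions)
a_vec = CountVectorizer().fit(answers)
X = q_vec.transform(questions).toarray()
Y = a_vec.transform(answers).toarray()

# Train a tiny network mapping question keywords to answer keywords.
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0)
model.fit(X, Y)

# A previously unseen variation of a known question still maps to sensible
# answer keywords, because it lands near the original on the learned map.
new_q = q_vec.transform(["How can I reset a forgotten password?"]).toarray()
scores = model.predict(new_q)[0]
vocab = a_vec.get_feature_names_out()
print([vocab[i] for i in np.argsort(scores)[::-1][:5]])  # top answer keywords
```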

Next imagine you had the computing power to scan the entire Internet and parse all that information down into sets of input and output keywords, and to build a huge neural network from all of those training facts. You would then have a knowledge map of the Internet, not unlike Google Maps for physical terrain. That map could then be used to instantly predict what folks might say in response to anything folks might say, based on what folks have said on the Internet.

You don’t need to just imagine, because now we can do essentially that.

Still, a trained neural network alone is not enough to become an AI. The system first needs to understand your written or spoken question, parse it, and select input keywords. For that it needs a bunch of skills like voice recognition and language parsing. After finding likely output keywords, it must order them sensibly and build a natural language text or video presentation of the outputs. For that you need text generators, predictive algorithms, spelling and grammar engines, and many more processors to produce an intelligible, natural sounding response. Most of these technologies have been refined for a long time in your word processor or your messaging applications. AI is therefore really a convergence of many well-known technologies that we have built and refined since at least the 1980s.
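
To illustrate how those pieces fit together, here is a deliberately oversimplified sketch of that pipeline. Each stage stands in for a far more sophisticated component, and the function names and the tiny knowledge table are invented for illustration.

```python
# Conceptual pipeline: parse the question into input keywords, query the
# trained network for output keywords, then compose a natural-sounding reply.
def parse_question(text: str) -> list[str]:
    """Language-parsing stage: reduce the question to input keywords."""
    stop_words = {"what", "is", "the", "a", "of", "how", "do", "i"}
    words = [w.strip("?.,!").lower() for w in text.split()]
    return [w for w in words if w and w not in stop_words]

def query_network(keywords: list[str]) -> list[str]:
    """Stand-in for the trained network: map input keywords to output keywords."""
    knowledge = {("curing", "time"): ["roughly", "four", "hours"]}
    return knowledge.get(tuple(keywords), ["no", "answer", "found"])

def compose_response(keywords: list[str]) -> str:
    """Text-generation stage: order the output keywords into a sentence."""
    return " ".join(keywords).capitalize() + "."

print(compose_response(query_network(parse_question("What is the curing time?"))))
# -> "Roughly four hours."
```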

AI is extremely complex and massive in scale, but unlike quantum physics, quite understandable in concept. What has enabled the construction of AI-scale neural networks is the mind-boggling computer power required to train such a huge network. When I trained my tiny neural networks in the 1980s it took hours. Now we can parse and train a network on, well, the entire Internet.

OK, so hopefully that demystifies AI somewhat. It basically pulls a set of training facts from the Internet, parses them and builds a network based on that data. When queried, it uses that trained network map to output keywords and applies various algorithms to build those keywords into comprehensible, natural sounding output.

It’s important we understand at least that much about how AI works so that we can begin to appreciate and address the much tougher questions, limitations, opportunities, and challenges of AI.

Most importantly, garbage in, garbage out still applies here. Our goal for AI should be to do better than we humans can do, to be smarter than us. After all, we already have an advanced neural network inside our skulls, one that has been trained over a lifetime of experiences. The problem is, we have a lot of junk information that compromises our thinking. But if an AI just sweeps in everything on the Internet, garbage and all, doesn’t that make it just an even more compromised and psychotic version of us?

We can only rely upon AI if it is trained on vetted facts. For example, AI could be limited to training facts from Wikipedia, scientific journals, actual raw data, and vetted sources of known accurate information. Such a neural network would almost certainly be vastly superior to humans in producing accurate and nuanced answers to questions that are too difficult for humans to understand given our more limited information and fallibilities. There is a reason that there are no organic doctors in the Star Wars universe: there is no advanced future civilization in which organic creatures could compete with the medical intelligence and surgical dexterity of AI droids.

Here’s a problem. We don’t really want that kind of boring, practical AI. Such specialized systems will be important, but they will not be hugely commercial nor sociologically impactful. Rather, we are both allured and terrified by AI that can write poetry or hit songs, generate romance or horror novels, interpret the news, and draw us images of cute dragon/butterfly hybrids.

The problem is, that kind of popular, “human-like” AI, not bound by reality or truth, would be incredibly powerful in spreading misinformation and manipulating our emotions. It would feed back nonsense that would further instill and reinforce nonsensical and even dangerous thinking in our own brain-based neural networks.

AI can help mankind to overcome our limitations and make us better. Or it can dramatically magnify our flaws. It can push us toward fact-based information, or it can become QAnon and Fox “News” on steroids. Both are equally feasible, but if Facebook algorithms are any indication, the latter is far more probable. I’m not worried about AI creating killer robots to exterminate mankind, but I am deeply terrified by AI pushing us further toward irrationality.

To create socially responsible AI, there are two things we must do above all else. First, we must train specialized AI systems, say as doctors, with only valid, factual information germane to medical treatment. Second, any more generative, creative AI networks should be built from the ground up to distinguish factual information from fantasy. We must be able to indicate how realistic we wish our responses to be, and the system must clearly flag, in a non-fungible manner, how factual its creations actually are. We must be able to count on AI to give us the truth as best as computer algorithms can recognize it, not merely to make up stories or regurgitate nonsense.

Garbage in, garbage out is a huge issue, but we also face an impending identity crisis brought about by AI, and I’m not talking about people falling in love with their smart phones.

Even after hundreds of years to come to terms with evolution, the very notion still threatens many people with regard to our relationship with animals. Many are still offended by the implication that they are little more than chimpanzees. AI is likely to cause the same sort of profound challenge to our deeply personal sense of what it means to be human.

We can already see that AI has blown way past the Turing Test and can appear indistinguishable from a human being. Even while not truly self-aware, AI systems can seem to be capable of feelings and emotion. If AI thinks and speaks like a human being in every way, then what is the difference? What does it even mean to be human if all the ways we distinguish ourselves from animals can be reproduced by computer algorithms?

The neural network in our brain works effectively like a computer neural network. When we hear “I love…” our brains might complete that sentence with “you.” That’s exactly what a computer neural network might do. Instead of worrying about whether AI systems are sentient, the more subtle impact will be to make us start fretting about whether we are merely machines ourselves. This may cause tremendous backlash.
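
A tiny frequency-based predictor illustrates the effect. The miniature corpus below is invented, and real systems are vastly more sophisticated, but the principle of completing a context with its most reinforced continuation is the same.

```python
# Toy next-word predictor: count which word follows each two-word context in
# a (made-up) corpus, then complete "I love" with the most reinforced word.
from collections import Counter, defaultdict

corpus = [
    "i love you", "i love you", "i love you",
    "i love pizza", "i love music",
]

follow = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i in range(len(words) - 2):
        follow[(words[i], words[i + 1])][words[i + 2]] += 1

# The most frequently reinforced continuation wins.
print(follow[("i", "love")].most_common(1)[0][0])  # -> "you"
```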

We might alleviate that insecurity by rationalizing that AI is not real by definition because it is not human. But that doesn’t hold up well. It’s like claiming that manufactured Vitamin C is not really Vitamin C because it did not come from an orange.

So how do we come to terms with the increasingly undeniable fact that intellectually and emotionally we are essentially just biological machines? The same way many of us came to terms with the fact that we are animals. By acknowledging and embracing it.

When it comes to evolution, I’ve always said that we should take pride in being animals. We should learn about ourselves through them. Similarly, we should see computer intelligence as an opportunity, not a threat to our sense of exceptionalism. AI can help us to be better machines by offering a laboratory for insight and experimentation that can help both human and AI intelligences to do better.

Our brain-based neural networks are trained on the same garbage data as AI. The obvious flaws in AI are the same less obvious flaws that affect our own thinking. Seeing the flaws in AI can help us to recognize similar flaws in ourselves. Finding ways to correct the flaws in AI can help us to find similar training methodologies to correct them in ourselves.

I’m an animal and I’m proud to be “just an animal” and I’m equally proud to be “just a biological neural network.” That’s pretty awesome!

Let’s just hope we can create AI systems that are not as flawed as we are. Let’s hope that they will instead provide sound inputs to serve as good training facts to help retrain our own biological neural networks to think in more rational and fact-based ways.

I Say Give Them Time

As my readers know I occasionally take exception to comments made by highly respected intellectuals. I hope that when I do so it is not to engage in a gratuitous attack, but to offer an important counterpoint. In that spirit I must take exception to recent comments made by the highly respected thinker and author Malcolm Gladwell (see here).

The comments I refer to were offered by Mr. Gladwell when he appeared on The Beat with Ari Melber last week. The full exchange can be heard on the Ari Melber podcast dated July 3rd, 2021.

Mr. Melber introduced the segment by pointing out that we live in a period in which Republicans are attempting to revise history and promote lies. He asked Mr. Gladwell for his thoughts about all of that and whether there were any solutions. It should be noted that this question was asked in the context of promoting Mr. Gladwell as an expert on human thinking and behavior.

Here is a slightly polished transcription of the response by Mr. Gladwell:

I think about the role of time. I wonder whether we’re in too much of a hurry to pass judgment on the people who continue to lie about what happened on Jan 6th. There are many forms that denial takes. One of them is that I honestly don’t believe that anything went wrong there. Another form is that I do believe but I’m not ready to admit it yet. A lot of what looks like a kind of malignant denial in the republican party right now is probably just people who aren’t ready to come clean and renounce a lot of what they were saying for the previous four years. I say give them time.

While this admonition for patience may sound superficially learned and wise, I find it naïve, wrong both theoretically and factually, and damagingly counterproductive. While I certainly don’t expect Mr. Gladwell to cite all his supporting evidence in a short interview segment like this, I don’t believe he has any. I suspect this is simply a well-meaning but unrealistic platitude, analogous to “the arc of the moral universe is long, but it bends toward justice.” That’s OK, except that he is putting forth an unsupported platitude as the conclusion of a purported expert in human thinking.

But such an expert on human thinking should understand that neural networks simply do not function in a way that would make “give them time” a reasonable strategy. As long as Republicans continue to hear the same old lies repeated over and over, they are not going to eventually recognize and reject them. Repeated exposure does not reveal lies but rather transforms our brains to accept them more deeply.

Our neural networks are influenced mainly by the quantity and repetition of the training “facts” they are exposed to. They have little capacity to judge the quality of those facts. Any training fact, in this case any idea the neural network is exposed to, is judged as valid by our neural network machinery in proportion to how often it is reinforced. And by the way, I know most of us want to believe that we collectively are not so susceptible to this because we want to believe that we personally are not. But we are.
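
A toy sketch makes the point. A purely frequency-driven learner, standing in here for the neural network, ends up “believing” whichever version of a claim it is exposed to most often, with no independent check on quality; the exposure counts are invented.

```python
# Repetition is the only signal this learner has; quality never enters into it.
from collections import Counter

exposures = (
    ["the election was stolen"] * 50 +       # the oft-repeated lie
    ["the election was not stolen"] * 5      # the occasional correction
)

belief = Counter(exposures)

# "Confidence" in each statement simply tracks how often it was heard.
for statement, count in belief.most_common():
    print(statement, round(count / len(exposures), 2))
```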

So, my objection to Gladwell is that he does not truly understand how our neural networks function because if he did he would understand that “I say give them time” is counterproductive advice at this time. Now, yes, it would be good advice if we were confident that Trump voters are being exposed regularly and primarily to truthful information. If that were the case I would agree, yes, give their neural networks more exposure time. However, I don’t believe that there is any reasonable basis to think that giving them more time will serve any purpose except to further reinforce the lies they are continually exposed to from Trump, the Republican Party, and Fox News. We are simply not ready to just be patient and let the truth seep in and percolate.

The more nuanced advice, in my opinion, to the question posed by Ari Melber is that we must discredit and stem the flow of misinformation from these sources and expose Republicans regularly to truly factual information. Once we do that, then, yes, I say just give them time for their neural networks to become comfortable with it. With enough exposure their neural networks will transform whether they want them to or not. But to accept the status quo right now and “give them time” as Mr. Gladwell suggests would be horribly premature and ill-advised.

Don’t Believe your Eyes

Today I wanted to talk about perceptions. Not our feelings, but what we actually see, feel, smell, hear, and taste. That is, the “objective” inputs that drive our feelings. Should we really “only believe our eyes”?

I think not.

In my book (see here) I talk about how we should be skeptical of our own memories and perceptions. Our memories are not recordings. They are docudrama recreations, drawing upon various stock footage to put together a satisfying re-imagining. We remember going to the beach as a child. But in “recalling” details of that experience, we draw upon fragments from various sources to fill it in. The “slant” of that recreation is strongly dependent upon our current attitudes and biases. Our re-imagined, and often very distorted, memory then reinforces what we believe to be a “vivid” recollection the next time we recall it. Over time our “clear” memory can drift farther and farther from reality, like a memory version of the “telephone” game.

I contend that our brains work similarly with regard to our senses. We don’t see what we think we see. Our perceptions are filtered through our complex neural networks. It is a matched, filtered, processed, censored, and often highly biased version that we actually see, hear, or feel.

We know that our subconscious both filters out much of the information it receives and adds in additional information as needed to create a sensible perception. I always favor a neural network model of brain function. As it relates to perception, our neural network receives a set of sensory data. It matches that data against known patterns and picks the closest match. It then presents our consciousness with a picture, not of the original data, but of that best-fit match. It leaves out “extraneous” information and may add in missing information to complete the expected picture. That is, we do not actually see, hear, smell, or taste a thing directly. We see, hear, smell, or taste a satisfying recreation that our network presents to us.
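
Here is a minimal sketch of that best-fit matching, with invented patterns. The “perception” handed to consciousness is the stored prototype closest to the noisy input, not the raw input itself.

```python
# Best-fit matching: noisy sensory data is replaced by the nearest known
# pattern, filling in gaps and discarding "extraneous" detail. Data invented.
import numpy as np

prototypes = {
    "vase":  np.array([1.0, 0.0, 1.0, 0.0]),
    "witch": np.array([0.0, 1.0, 0.0, 1.0]),
}

observation = np.array([0.8, 0.1, 0.6, 0.3])  # incomplete, noisy input

# Pick the closest stored pattern and "perceive" that instead of the raw data.
best = min(prototypes, key=lambda k: np.linalg.norm(prototypes[k] - observation))
print(best, prototypes[best])  # consciousness receives the prototype
```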

This should not be controversial, because we experience it all the time. Based on sparse information, we “see” fine detail in a low resolution computer icon that objectively is not there. We fail to see the gorilla inserted into the background because it is out of place. We are certain we see a witch or a vase in a silhouette, depending on our bias or our expectations at that moment.

But though this should be evident, we still do not take this imprecision seriously enough in evaluating the objectivity of our own memories or perceptions. We still mostly put near-absolute faith in our memories, and are generally even more certain of our perceptions. We believe that what we perceive is absolutely objective. Clearly, it is not.

In essence, what we believe we objectively recall, see, hear, or touch is not the thing itself, but a massaged recreation of our neural network match. The version we perceive can often be wrong in very important ways. Our perceptions are only as reliable as our neural networks. And some neural networks can be more compromised than others. We can recall or even perceive radically crazy things if our neural network has been trained to do so. I campaign against belief-based thinking of all sorts because it seriously compromises these critical neural networks in crazy ways.

Even less recognized is the way this phenomenon impacts scientific research. Scientists often give far too much credence to reports of perceptions, often in extremely subtle ways.

As a simple illustration, consider how we often mock wine connoisseurs who claim to taste differences in wines but cannot pick these out in blinded studies. However, consider the confounding impact of their (and our) neural networks in even this simple case. When experiencing a wine, all the associated data is fed into the drinker’s neural network. It makes a match and then presents that match to the consciousness. Therefore, if the network does not “see” one critical factor, say color, it matches to white, not red, and presents an entirely different taste pattern to the drinker, ignoring some “extraneous” flavors and adding some other “missing” ones.

These same kinds of neural network matching errors can, and I have to assume often do, confound even more rigorous scientific studies. And they are further confounded by the fact that these mismatches are typically temporary. With every new set of data, our neural networks adjust themselves, the weightings change, to yield different results. The effect of a drug or placebo, for example, may change over time. If scientists see this, they typically look exclusively for other physiological causes. But it may be a neural network correction.

That is why I always admonish my readers to stick with inputs that will strengthen your neural networks toward sound objectivity rather than allow them to be weighted toward the rationalization of, and perception of, beliefs and nonsense. But since none of us can ever have perfect networks, or even know how accurately ours perform in any given match, we all need a healthy amount of skepticism, even with regard to our own memories and perceptions.

I further urge scientists to at least consider the impact of neural network pre-processing on their studies, and to develop methodologies to explicitly detect and correct for such biases.


Humans are Inexplicable

Whether it be in science or business or politics or popular culture, we expend an inordinate amount of time and effort trying to figure out why people do whatever people are doing. We seem to have more analysts than actors, all desperately trying to explain what motivates people, either by asking them directly or by making inferences about them. For the most part, this is not merely a colossal waste of time and effort and money in itself, but it stimulates even greater wastes of time and effort and money chasing wildly incomplete or erroneous conclusions about why we do what we do.

Asking people why they did what they did, or why they are doing what they are doing, or why they are going to do what they are going to do, generally yields useless and misleading information. It is not clear that people actually have distinct reasons they can recognize, let alone articulate. It is quite likely, in fact, that most of the decisions we make are made unconsciously, based upon a myriad of complex neural network associations. These associations need not be rational. They don’t need to be internally consistent with each other or related to the actual outcome in any way. But in our post-rationalizations and post-analyses we impose some logic on our decisions to make them feel sensible. Therefore, the reasons we come up with are almost completely made up at every level, to sound rational or at least sane to ourselves and to those we are communicating with.

The truth is, we can’t usually hope to understand our own incredibly complex neural networks, let alone the neural networks of others. Yes, sometimes we can identify a strong neural network association driving a behavior, but most determinative associations are far too diffuse across a huge number of seemingly unrelated associations.

The situation gets infinitely worse when we are trying to analyze and explain group behaviors. Most of our shared group behaviors emerge from the weak-interactions between all of our individual neural networks. The complexity of these interactions is virtually unfathomable. The challenge of understanding why a group does what it does collectively, let alone figuring out how to influence their behavior, is fantastic.

If you ask a bird why it is flying in a complex swirling pattern along with a million other birds, it will probably give you some reason, like “we are looking for food,” but in fact it is probably largely unaware that it is even flying in any particular pattern at all.

So why point all this out? Do we give up? Does this imply that a rational civilization is impossible, that all introspection or external analysis is folly?

Quite the contrary: we must continue to struggle to understand ourselves, and truly appreciating our complexity is part of that effort. To do so we must abandon the constraints of logic that we impose upon our individual and group rationalizations and appreciate that we are driven by neural networks that are susceptible to all manner of illogical programming. We must take any self-reporting with the same skepticism we would apply to the statement “I am perfectly sane.” We should be careful of imposing our own flawed rationality upon the flawed rationality of others. Analysts should not assume undue rationality in explaining behaviors. And finally, we must appreciate that group behaviors can have little or no apparent relationship to any of the wants, needs, or expressed opinions of the individuals within that group.

In advanced AI neural networks, we humans cannot hope to understand why the computer has made a particular decision. Its decision is based upon far too many subtle factors for humans to recognize or articulate. But if all of the facts programmed into the computer are accurate, we can probably trust the judgement of the computer.

Similarly with humans, it may be that our naive approach of asking or inferring reasons for feelings and behaviors, and then trying to respond to each of those rationales, is incredibly ineffective. It may be that the only thing that would truly improve individual, and thus emergent, thinking is more sanely programmed neural networks, ones that are not fundamentally flawed so as to comfortably rationalize religious and other specious thinking at the most basic level (see here). We must focus on basic fact-based thinking in our educational system and in our culture, on the assumption that more logically and factually trained human neural networks will yield more rational and effective individual and emergent behaviors.