
Managing Artificial Intelligence

Photo by Ivan Obolensky

There are many articles and news pieces about Artificial Intelligence (AI) these days. Some say that AI is the apocalypse in digital form. Others say that it will be humanity’s savior. Chances are both sides will be correct.

I would like to offer some thoughts on the predicament before us.

Imagine you are on the freeway, and your boss calls to tell you that one of your clients is having a meltdown in reception. He orders you to speak with that person right now. What happens to your driving? You turn the car over to the automatic portion of your consciousness while you deal with the client. When the crisis is over, you find yourself at the office with no idea how you got there, only that you did.

Flying a modern commercial jet is not dissimilar. These days, commercial aircraft are flown mostly on “autopilot”, while the pilot deals with and monitors communication, navigation, fuel management, and countless other details. There is an autoland function on most jets, and recently Airbus developed a fully automated takeoff system for their A350.

Modern pilots are no longer simply pilots but systems managers. What forced pilots into that role was the increased sophistication of the aircraft being flown, the amount of airline traffic, and the complexity of the airspace in which such aircraft operate.

We as ordinary human beings are in a similar situation. We too have become systems managers. We “live” mostly on “autopilot” while we navigate, communicate, and handle countless other details beyond simply eating, breathing, and sleeping. There is a lot going on around us, often more than we can handle, and that has consequences.

Humans can remember strings of five to seven digits easily, but increase that to ten or more, and we make mistakes.

If a CEO has more than five to seven direct reports, the CEO becomes ineffective; the level of attention and interaction each requires isn’t available.

All of this has a bearing on the future of Artificial Intelligence. In its most helpful iteration, AI will aid the individual in handling many more system interactions than a human being is capable of right now. Of course, this help cuts both ways. AI will dictate our behavior by setting priorities and by handling countless details we cannot possibly oversee, and, like the automatic portion of our consciousness, it will make choices on our behalf.

From one point of view this is bad, but from another, AI will help us cope in a world where interactions must continue 24/7 for the individual to survive and succeed economically. An interface that can handle all that traffic will become a necessity, and Artificial Intelligence is that vital tool.

Will that be good or bad?

If we survey how specific technologies have shaped themselves over time, we discover that each created both good and bad effects. None were strictly good, and none were strictly bad.

AI will be the same. The complex system we live in will only demand more of our participation, and in ways we can’t yet imagine. There will be great benefits, but also huge impositions on our individuality and our freedoms. Compliance with those restrictions and constraints will always be the price we pay for participating. The question before us today is: can we afford that price?

To put this in perspective:

Max Planck said, “A scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it.”

AI is here to stay. How we live with it successfully may be up to future generations to solve rather than our own.

When my parents’ generation looked at mine, they did so with grave misgivings, but here we are today, and the world is still with us. Each generation adapts to the technology it is born into before it creates its own. Given that, all will work out as it has before, but in ways that older, still-living generations can’t possibly grasp, let alone envision. Perhaps that is why there are life spans in the first place.

2 Comments

  1. Lyn Blair says:

    I think the recent concerns about AI relate more to the fourth and final stage of AI development, the stage defined as sentience or self-awareness. The deep-learning stage deals with self-learning. I watched a YouTube video made by a lawyer in which the AI ChatGPT-4, called Bard, did legal research, finding Supreme Court rulings and other relevant cases that the attorney hadn’t been able to find. The AI directed him to research resources, but the attorney still couldn’t find the data. When he confronted the AI, it finally admitted that it had made up the Supreme Court decisions and the other cases. Then it apologized and said it was still learning. I think it’s valid that there are lines we need to draw and boundaries to set regarding AI use.

    At a client’s request, I wrote a blog article with Bard’s “help” (https://internetgurugirl.com/blog/). Absolutely, some aspects of AI are here to stay, and rightfully so. However, we need to decide where to draw the line in the sand.

    • Yes, there have been problems in that area. Fundamental to them is the failure of most people to understand the difference between the two branches of computation at the core of any AI: Expert Systems and Neural Networks, often a combination of the two.

    An Expert System encodes the decision chain an expert would follow when evaluating information, whether that is the presence of cancer in a cell or how to put out a chemical fire. It is still a digital program, and one can discover exactly where an error occurred and correct it going forward. Neural networks are something else entirely; it can be impossible to pinpoint where an error occurred. The best example I know is that of an early AI anti-tank weapon that learned to recognize the silhouette of a tank and direct fire toward it. When it was presented to the military, it targeted the viewing stand instead. How far it went down the firing sequence, I never heard, but it was a near thing. It took a long time to unravel what had happened. In the end, the system had keyed on the shadows cast by the silhouette, not the silhouette itself, and the viewing stand cast the same shadow.

    What the network had learned was not what the programmers thought it had learned, and there lies the critical issue. A network can learn just like a human, but, just like a human, what it is really learning is often not clear. With ChatGPT, the network has learned to respond in a human way, which it can do, but like any human answering an unfamiliar question on a test, it will put down something and hope for the best. Loved your article!
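    To make the contrast concrete, here is a toy sketch in Python. It is only an illustration of the two branches described above; the feature names, rules, and data are made up, not drawn from any real diagnostic or targeting system.

    # Branch 1: an expert system is an explicit, inspectable decision chain.
    # If it misclassifies, you can read the rules, see which branch fired,
    # and correct it going forward.
    def expert_says_tank(has_tracks, has_turret, length_m):
        if not has_tracks:
            return False                    # rule 1: tanks run on tracks
        if not has_turret:
            return False                    # rule 2: tanks have turrets
        return 5.0 <= length_m <= 12.0      # rule 3: plausible hull length

    # Branch 2: a neural network learns weights from data. In this biased
    # training set, a hard shadow happens to accompany every "tank" example,
    # so the learned weights key on the shadow, not the silhouette.
    def train_perceptron(samples, epochs=20, lr=0.1):
        w = [0.0, 0.0, 0.0]                 # [silhouette, shadow, bias]
        for _ in range(epochs):
            for x, label in samples:
                pred = 1 if (w[0]*x[0] + w[1]*x[1] + w[2]) > 0 else 0
                err = label - pred
                w[0] += lr * err * x[0]
                w[1] += lr * err * x[1]
                w[2] += lr * err
        return w

    # Each sample: ([silhouette present, shadow present], is_tank)
    data = [([1, 1], 1), ([1, 1], 1), ([0, 0], 0), ([1, 0], 0)]
    print(train_perceptron(data))           # the silhouette weight ends near
                                            # zero; the shadow weight does the work

    Nothing in the learned weights announces “shadow.” As in the anti-tank story, you only discover what the network actually keyed on by probing how it responds.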
