Backscatter

Smart Machines and Artificial Intelligence

By Donald Christiansen

I started this column with the intention of discussing artificial intelligence (AI). But not everyone seems comfortable with the term. Machine intelligence may be a better description. Machine intelligence has been with us since long before the term artificial intelligence was coined in the 1950s. It played a rudimentary role in early motorized machine tools, which skilled technicians designed and programmed to do more and more of what human operators had once been required to do.

Stanford computer science professor Fei-Fei Li points out that there is nothing artificial about AI, noting that it is made by humans, is intended to behave like humans, and affects humans. Then too, “artificial” often carries the connotation of inferior or not genuine. Yet AI’s defenders respond by asking whether most things made by humans aren’t artificial in some sense.

These arguments notwithstanding, I will stay with the term “AI” for the balance of this column.

AI can be employed to solve complex mathematical problems, to which, of course, there is but one correct solution. Or it can be applied to complex situations having many variables, some or all of which may change with time. An example would be evaluating the status of an offensive or defensive military situation. It is in evaluating such real-life situations that AI faces its greatest challenges, including financial, legal, ethical, and moral issues.

How Intelligent?

Since AI is designed by humans, the question arises: how intelligent must the human (or humans) who design a particular AI system be? And, given the same project, two separate groups of human experts may reach opposite conclusions. Which group’s “brains” should be used to design an AI program? And suppose both designs are completed, one for a peaceful democracy and the other for a warring dictatorship?

The answers to these questions are critical when dealing with potentially autonomous military systems. Whether and when such systems should be permitted to operate in a fully autonomous mode is a major concern. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems identifies several such concerns. Among them:

  • Designing automated weapons with audit trails to help guarantee accountability and control (a minimal sketch of such a trail follows this list).
  • Achieving behavior of autonomous functions that is predictable by their (human) operators.
  • Including adaptive and learning systems that can explain their reasoning and decisions to human operators in a transparent and understandable way.
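
To make the first item above concrete, here is a minimal sketch, in Python, of what a decision audit trail for an autonomous function might record. It is illustrative only; the field names and the hash-chaining scheme are my assumptions, not drawn from the IEEE initiative’s documents.

    # Illustrative sketch only: one way an autonomous system could log each
    # decision for later human audit. Field names are hypothetical, not taken
    # from the IEEE Global Initiative documents.
    import hashlib
    import json
    import time
    from dataclasses import dataclass, asdict

    @dataclass
    class DecisionRecord:
        timestamp: float         # when the decision was made
        sensor_inputs: dict      # the raw inputs the system acted on
        model_version: str       # which decision logic was running
        action: str              # what the system chose to do
        rationale: str           # machine-generated explanation for operators
        operator_notified: bool  # was a human alerted before/after acting?
        prev_hash: str = ""      # hash of the prior record, chaining the log

        def digest(self) -> str:
            # Stable hash of the record's contents.
            payload = json.dumps(asdict(self), sort_keys=True).encode()
            return hashlib.sha256(payload).hexdigest()

    # Append-only log: each record carries the hash of its predecessor,
    # so after-the-fact tampering breaks the chain and is detectable.
    log: list[DecisionRecord] = []

    def record_decision(**fields) -> DecisionRecord:
        prev = log[-1].digest() if log else ""
        rec = DecisionRecord(timestamp=time.time(), prev_hash=prev, **fields)
        log.append(rec)
        return rec

    record_decision(sensor_inputs={"radar_contact": "unidentified"},
                    model_version="v0.1-demo",
                    action="hold_fire_and_alert_operator",
                    rationale="contact not positively identified",
                    operator_notified=True)

The point of the chained hashes is that accountability requires not just logging decisions but being able to show later that the log was not quietly rewritten.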

Defining Today’s AI

As the experts debate the definition and structure of AI, a criticism of its “deep learning” function has arisen. Deep learning is based fundamentally on statistical data processing, and its critics worry that it cannot intuit much from the vast amounts of data it collects. One response is a program proposed by the Pentagon’s DARPA. Called Machine Common Sense, it would provide a network for sharing technology ideas that emulate human common-sense reasoning, an area where deep learning is limited. Another form of AI, generally termed cognitive computing, is the intelligent virtual assistant (IVA), which helps retrieve answers to questions relating to specific, bounded procedures in a particular category.
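
As a loose illustration of that criticism, consider the toy network below, written in Python with NumPy (the data and network are invented for this sketch). It learns a statistical pattern from examples quite well, yet when handed an input unlike anything it was trained on, it still emits a confident answer; it has no common-sense way to say “this makes no sense.”

    # Toy sketch: "deep" learning as statistical curve-fitting.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic data: label is 1 when the two inputs have the same sign.
    X = rng.uniform(-1.0, 1.0, size=(500, 2))
    y = (X[:, 0] * X[:, 1] > 0).astype(float)

    # One hidden layer of 8 tanh units, sigmoid output.
    W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
    W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

    def forward(inputs):
        h = np.tanh(inputs @ W1 + b1)             # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # output probability
        return h, p.ravel()

    # Plain gradient descent on cross-entropy loss.
    lr = 0.5
    for _ in range(2000):
        h, p = forward(X)
        g_out = ((p - y) / len(X))[:, None]       # gradient at the output
        dW2 = h.T @ g_out
        db2 = g_out.sum(axis=0)
        dh = (g_out @ W2.T) * (1.0 - h**2)        # back through tanh
        dW1 = X.T @ dh
        db1 = dh.sum(axis=0)
        W2 -= lr * dW2; b2 -= lr * db2
        W1 -= lr * dW1; b1 -= lr * db1

    _, p = forward(X)
    print("training accuracy:", ((p > 0.5) == y).mean())

    # An input far outside anything seen in training: the network still
    # returns a confident probability, with no mechanism to flag nonsense.
    _, p_far = forward(np.array([[50.0, -50.0]]))
    print("confidence on absurd input:", p_far[0])

Nothing in the sketch represents what the inputs mean; it only compresses the statistics of its training set, which is the gap common-sense reasoning programs aim to close.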

Projecting the Future

AI today is still thought to be in its “weak” phase. Max Tegmark, president of the Future of Life Institute, cites AI driving a car as an example of weak AI. “Strong” or “superintelligent” AI, if it ever arrives, would be better than humans at all cognitive tasks. Such a system, Tegmark notes, could potentially undergo recursive self-improvement, triggering an intelligence explosion and leaving human intellect far behind.

Many students of AI are certain that superintelligent AI will eventually be developed. What then? Some see it as the ultimate salvation of the universe, while others fear it as devious, dangerous, and uncontrollable by humans.

Stephen Hawking, in a 2016 speech at Cambridge University, concluded that “AI will either be the best or the worst thing ever to happen to humanity. We do not yet know which.” On the positive side, he listed the eradication of disease and poverty and the repair of some of the damage done to the world by its industrialization. But he noted that the dangers included powerful autonomous weapons, new ways for the few to oppress the many, and great disruption of the economy.

Even with today’s AI there are serious issues of concern, including its potential role in cybersecurity attacks on the U.S. electric grid, as well as similar attacks on the nation’s investment and banking structures.

But with respect to superintelligent AI itself running the world, not to worry! If it happens, it will positively not happen in our lifetimes. (A confidential AI source assured me of this.)

Resources

  • Hurlburt, G., “How Much to Trust Artificial Intelligence?,” Computing Edge, 2018.
  • Earley, S., “The Problem with AI,” IT Professional, Vol. 19, No. 4, 2017.
  • Lohr, S., “Is There a Smarter Path to Artificial Intelligence? Some Experts Hope So,” The New York Times, June 20, 2018, https://www.nytimes.com/2018/06/20/technology/deep-learning-artificial-intelligence.html, retrieved July 12, 2018.
  • Tegmark, M., “Benefits & Risks of Artificial Intelligence,” Future of Life Institute, https://futureoflife.org/background/benefits-risk-of-artificial-intelligence/, retrieved July 12, 2018.
  • Collins, N., “Artificial Intelligence Will Be as Biased and Prejudicial as Its Human Creators,” Pacific Standard, 2016.
  • Jbeily, M., “Be Aces For All Seasons,” (USNI) Proceedings, June 2018.
  • Galderisi, G., “Producing Unmanned Systems Even Lawyers Can Love,” (USNI) Proceedings, June 2018.
  • Li, F., “How to Make A.I. Human-Friendly,” The New York Times, May 8, 2018.
  • The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.
  • Spadafora, A., “Stephen Hawking believes AI will be mankind’s greatest accomplishment,” BetaNews, Oct. 21, 2016, https://betanews.com/2016/10/21/artificial-intelligence-stephen-hawking/, retrieved June 20, 2018.
  • Agle, R., “Deal with the Next Threat,” (USNI) Proceedings, May 2018.
  • Dobbs, M. J., “Upman Your Battle Stations,” (USNI) Proceedings, 2013.
  • Darrah, M., “The Age of Unmanned Systems,” (USNI) Proceedings, Sep. 2018.
  • Sanchez, E., “Unmanned Aerial Vehicles and Ethics,” (USNI) Proceedings, June 2010.
  • Hummar, S., “Protecting Our Warrior Ethos Tomorrow,” (USNI) Proceedings, 2018.
  • Jacobstein, N., “The Future of Work: Managing the Benefits and Risks of Artificial Intelligence,” Pacific Standard, 2015.

Donald Christiansen

Donald Christiansen is the former editor and publisher of IEEE Spectrum and an independent publishing consultant. He is a Fellow of the IEEE. His Backscatter columns can be found here.

One Comment

  1. I’m not worried about the Hawking-Gates-Musk conjecture of future disaster. If a machine made by man can overcome a computational machine made of 8E+12 (humans) * 86E+12 (neurons) * 7E+3 (axons), arranged in a way that mostly works in parallel and can rewire itself at multiple levels as required, it will be shortly after the last proton decays (I think we are still waiting to see the first decay).

    The ethics issues are always there, and I have not seen any indication that worrying about them before the technology has produced results accomplishes anything. Deploy the technology, and years later the politicians and the rest of the intelligentsia will figure out what we did wrong.

    I started in AI in 1963 on a real problem; it was a really dumb problem, but we could not solve it. Aura from Argonne was an idiot (savant). I helped kick IBM into AI, and IBM won a chess match and a game show. Now we can recognize faces. Really dumb programs have defeated the Turing Test. On and on. I have yet to see the ‘intelligence’ of AI. It seems necessary to replace the Turing Test. I propose a new test, call it the Michie Mouse test (actually the ‘Michie Test’; Don Michie was Turing’s buddy during the Enigma project and later taught me a lot about AI). The test: “When a computing machine constructed by humans can take n (say 1,000) photographs containing m (say 50) items, with five items shown in active use by a human in each, and can, without human intervention in the learning process and in finite time, identify those items in another set of photographs and describe the use of each, it will be called level one intelligence, aka ‘baby intelligence.’”

    Meanwhile I do love AI, but as some unknown AI sage once complained, ‘once we get something to work it is not AI anymore, just computer science.’
