Discussing Ethical Mission Definition and Execution for Maritime Robots Under Human Supervision
with Prof. Emeritus Robert B. McGhee

2020

[Host]

Hi, this is Jonathan Rodriguez, and today I’m interviewing Professor Emeritus Robert B. McGhee from the Naval Postgraduate School. Professor McGhee served on active duty in the U.S. Army Ordnance Corps from 1952 to 1955 and has held a variety of technical leadership roles as a pioneer in the field of robotics, from 1955 all the way through the present.

Although he is a retired Professor Emeritus, he still actively programs in both Lisp and Prolog, and his academic focus (although he does not use this terminology) is on the A.I. Alignment Problem and on a novel method he proposes to address it: putting a provable algorithm at the top level, in control of any other algorithms running in the system.

This is explained well in Professor McGhee’s 2018 paper in the IEEE Journal of Oceanic Engineering. The paper is titled Ethical Mission Definition and Execution for Maritime Robots Under Human Supervision. Today, I’ll be speaking with Professor McGhee about his paper and his work on this fascinating and novel approach to A.I. Alignment, using provable finite state machines as the top executive level in a hierarchy of algorithms, ultimately under provable human command.

[Host]

Thank you so much for taking the time to talk. I would love to hear more; maybe you could start by just going back over how you got into the field of robotics.

[Robert]

Well, that’s a very long story so I’ll shorten it greatly.

I’m a Korean War veteran. I was an undergraduate student at the University of Michigan and enrolled in the Army ROTC, not expecting the Korean War. I was called to active duty in 1952. The Army was just beginning its surface-to-surface ballistic missile program, and it was looking for people to train to work in that area during their military service, which I did, and enjoyed very much. I received a very good education in that area from the Army and was then responsible for training students in surface-to-surface guided missile maintenance.

And while I was doing that, I made up my mind that I certainly wanted to go to graduate school. I found out that Hughes Aircraft Company in Culver City was recruiting engineers who would like to work half time and go to school half time. So I did that, and it worked out very well. They supported me extremely well, and I went to school roughly half time and worked roughly half time for eight years, finishing up with a Ph.D. By that time, I’d had much broader experience dealing with various kinds of guided missiles, from air-to-air through anti-tank. Those were the principal ones. The same ideas were central to all those kinds of missiles.

After I got my Ph.D., I decided that I wanted to stay in academia because I observed the instability that was in the aerospace industry and I wanted to pick a track and follow it.

I was fortunate, one year later, to meet a professor from Serbia, Rajko Tomović, who wrote a book on repetitive analog computers, which I became acquainted with during my Ph.D. studies because it was the French book that I translated for one of my two language requirements. He came to the University of Southern California and I spent a year with him, and we determined together that we would try to do research in the area of walking machines, with the principal goal of developing technology that could be used to help disabled people.

Serbia and Yugoslavia suffered terribly in World War II, as did all of the countries in Europe, and they had a great many people with physical disabilities. Tomović was a visionary who thought that electronics could be used to coordinate the motion of artificial limbs and braces for the lower extremities.

So that was our original motivation. We realized very quickly that we had no credibility and no right and no competence to experiment on human beings as subjects. So we decided to build walking machines to understand how to coordinate motion. That turned into a very successful line of research, and it did eventually result in a level of knowledge and components such that it’s now routine to prescribe electronically controlled prosthetic devices for lower extremity amputees. And I think almost all such devices are now electronically coordinated.

The principal question we faced in that work was: was it necessary to communicate with nerve endings in the stump of the upper part of the leg of an amputee, or could such a device coordinate its own motions so that a person could interface with it very much like riding a bicycle? We hoped the latter was the case, and it turned out that it was. Our work on walking machines culminated in a three-ton monster completed about 19… I think the project ended in 1989, and it’s still the biggest computer-coordinated walking machine ever built.

At that time, it had a pilot who would sit in the seat and control it, similar to a fly-by-wire airplane; the onboard pilot had a three-axis joystick. In the early ’80s, I began wondering if there was any way to make such a device carry out a mission by itself, without a human pilot onboard. I decided that no one knew how to do that for a machine of that complexity, and that it would be easier to study ways to carry out mission control for unmanned, untethered, generally autonomous submarines than it would be to try to create behavior similar to that of a land animal. And that turned out to be a very good choice.

You certainly need to talk to Don Brutzman because he was in it pretty much from the beginning. We decided that we would use a dolphin as our model and build a dolphin-sized submarine and match dolphin capabilities as far as we could. We did that until about 2006, when the money ran out, we got too old, and we decided not to go on building any more submarines.

And we turned, I myself in particular, our efforts to trying to understand as deeply as I could how one could specify a mission to an unmanned submersible vehicle, similar in behavior to a manned submarine and able to take mission orders such as those that are handed in written form to a submarine commander.

Brutzman had over 20 years in attack submarines, so he understood very well how all of that worked. And he and I and some faculty members from the Naval Postgraduate School were very successful in fielding the first two submarines in the world capable of carrying out complex unmanned missions.

We built the first two of them and that ended in 2006. So we’ve continued our work by simulation means since then. And the paper I mentioned to you that was published in 2018 was a summary of 20 years of our findings and our understanding of the problem at that time.

I still program in Lisp and Prolog. That takes nothing but a computer. I do that at home. And if I can persuade you to go in some direction like this, I would be delighted.

[Host]

I mean, honestly I think that your work is extremely important.

[Robert]

I do too; it applies to driverless automobiles extremely well. The bottom line is: A.I. is too dangerous to give to robots. Robots can do their job; they may use A.I. for some understanding of their surroundings and even for some internal decision making, but not for mission definition and mission execution. We feel strongly about that.

No doubt you’ve seen the movie 2001. You remember HAL? HAL was able to use predicate calculus. HAL could reason from the general to the particular. And that was a very bad mistake because he decided the mission was more important than the human crew.

So that started me wondering about it, way back then, and I really didn’t understand the difference between propositional calculus and predicate calculus until fairly recently. I’d tried to read Hofstadter’s book before, but didn’t put enough time in on it. You know that book?

Hofstadter: Gödel, Escher, Bach?

[Host]

Very… I guess, uh, very puzzle-like. With a lot of parables.

[Robert]

It’s a very difficult book, but he tries to approach the understanding of logic from different perspectives. And if you work at it long enough, you can learn a great deal. But it’s also relatively easy to read (well, relatively speaking; it’s a work of mathematics), easy enough that you can come to understand what is the meaning of a computer language that can deal with unbound variables, and what is the meaning of logic that deals with relationships between classes (that’s predicate logic) versus propositional logic, which deals with particular instances of classes. And that difference is so profound.

One of the most striking examples is that it was not until the year 2000, approximately, that someone succeeded in proving that all of Euclid’s theorems were valid. What Euclid taught as proofs were plausibility arguments. They were just arguments for convincing humans, for one human to convince another human, not something to trust a robot with.
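
[ Interviewer’s note: a small illustration of the distinction being drawn here, in my own words and with hypothetical examples not taken from the paper. Propositional logic deals only with specific, concrete statements joined by connectives, for example:

    vehicle_1_is_surfaced → vehicle_1_may_transmit

With a finite set of such propositions, every consequence can be checked exhaustively. Predicate logic quantifies over whole classes using unbound variables, for example:

    ∀x ( Surfaced(x) → MayTransmit(x) )

which is what allows a system to reason from the general to the particular, and which, in general, makes the question of whether an arbitrary statement follows from a set of axioms undecidable. ]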

[Host]

So, I definitely personally very much agree with your sentiment that it’s too dangerous to give general reasoning to “robots,” broadly defined as “systems that can take action in the real world.” And I wonder if you’ve engaged with this community before, but over the past year or so, I’ve been speaking more and more with a community that now styles themselves as the A.I. Alignment Community, which is doing a lot of both ethics work and technical work on a question very similar to the one you were studying: How do you translate human orders, directives, and goals into either an actual procedure for a machine to follow or, what I think they’re more generally interested in, a utility function that a utility-maximizing optimizer can then reason its own way toward?

[Robert]

I think that’s a very dangerous approach for mission control. For the lower levels of controlling an autonomous or partially autonomous vehicle, it’s not a dangerous approach; it’s a useful one. But you don’t want machine learning to be the way that a lethal robot learns how to behave in an ethical way. It’s taken billions of humans, and we’re still working on it.

[Host]

I would definitely agree with you. I think the question, for a lot of people, is… part of what they assume is that, as A.I. becomes more and more commercialized, there will be at least a commercial race to put more and more general and fast and powerful A.I. into more and more applications, including things that are directly lethal, like robotics, and also things that are powerful and potentially lethal in the abstract, like economic systems. So if you have a system that controls large parts of the economy, it’s de facto making choices about who can afford food and medicine.

[Robert]

Yes. Right. And the way you secure or protect humanity from such systems is you do not allow any A.I. at all at the top level. Everything on the top level has to be controlled by a finite state machine. A.I. in general requires infinite state machines, A.I. in general requires predicate logic, and predicate logic is not provable.

And I don’t believe we should [allow systems with] lethal capabilities to use abstract reasoning to reason from the general to the particular. We should not allow that at the top layer. There has to be what amounts to a remote human in control. And it’s a remote human who specifies the mission logic without using any A.I.

A.I. is okay to use at the lower levels, but in my opinion, not at the very top level.

And this is not very complicated actually. It’s spelled out in very great detail in the paper that we published two years ago. But nobody… you are actually the first person I’ve talked to who’s taken this seriously. Everyone else I’ve talked to believes that mission specification and mission control is a problem in A.I. and I argue it is not. It’s a problem in finite state machines.

After all, the most powerful computers we have use no A.I. None. They use only finite state machines. And they don’t use any predicate logic; they only use propositional logic.

[Host]

Hm. That’s very, very interesting. So one question, and forgive me, this may be addressed already in your paper, is: If let’s say you are able to avoid using A.I. at the top level, is it just turtles all the way down? So have you just pushed your problem down a level?

[Robert]

Not at all. Not at all. We have a name for this technology and it’s called… Ha! How could I forget? I’m almost 91 years old, and I’m starting to get forgetful. We defined something called the Mission Execution Automaton, which we believe is sufficiently powerful for any robot that you would really like to turn loose on the world.

I’m only ever speaking about the top level: mission definition and mission control.

Suppose you have a whole, well, take the typical army unit. At the lower levels, the people are younger; they tend to have less knowledge and they tend to be kind of rowdy and unpredictable. Typical human behavior. Civilians demonstrate it. Children, as they grow older, become, we hope, more responsible and more predictable, but they require adult supervision. And our position, felt by the four of us who work in this area, is: No robot should be left out without adult supervision, but adult supervision can be as simple as strict rules on its behavior expressed in a finite state machine.
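
[ Interviewer’s note: to make the idea concrete for readers, here is a minimal sketch, in Python, of a top-level mission controller written as a pure finite state machine. The phases, events, and transition table are my own hypothetical example, not the Mission Execution Automaton defined in the 2018 paper. ]

    # Hypothetical sketch of a top-level mission controller as a finite state machine.
    # States, events, and transitions are illustrative only, not taken from the paper.

    MISSION_FSM = {
        # (current phase, event) -> next phase
        ("TRANSIT",  "reached_search_area"): "SEARCH",
        ("SEARCH",   "target_located"):      "SAMPLE",
        ("SEARCH",   "search_time_expired"): "RETURN",
        ("SAMPLE",   "sample_complete"):     "RETURN",
        ("RETURN",   "reached_recovery"):    "SURFACE",
        # Every phase has an explicit path to a safe abort -- no open-ended reasoning.
        ("TRANSIT",  "operator_abort"):      "SURFACE",
        ("SEARCH",   "operator_abort"):      "SURFACE",
        ("SAMPLE",   "operator_abort"):      "SURFACE",
        ("RETURN",   "operator_abort"):      "SURFACE",
    }

    def next_phase(phase: str, event: str) -> str:
        """Pure table lookup: unrecognized events leave the phase unchanged."""
        return MISSION_FSM.get((phase, event), phase)

    # Because the table is finite, every reachable behavior can be enumerated and
    # checked offline, e.g. "every phase reaches SURFACE on operator_abort":
    assert all(next_phase(p, "operator_abort") == "SURFACE"
               for p in {"TRANSIT", "SEARCH", "SAMPLE", "RETURN"})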

[Host]

That’s really interesting.

[Robert]

Machines are far more powerful than we think. Has anyone ever been bothered by the fact that no computer is based on A.I.?

[Host]

So, I guess then, maybe… Right now, I’m going to play devil’s advocate a little bit. This is not a view that I believe, but I will speak on behalf of a lot of people who hold this view: some people say that A.I. would be able to solve problems that are unsolvable today. I don’t personally believe that; I think that humans can, with enough time, solve almost any problem.

[Robert]

I don’t believe that. I think I have a primate brain that is physically limited.

[Host]

Well, so, as a concrete example, I don’t know, maybe you’ve collaborated or spoken with him before, but Professor Stuart Russell from Berkeley has spoken about the challenge of how to properly set A.I.’s goals. One of the problems he mentioned that he thinks A.I. could solve that humans could not (this seems very out there to me) is that maybe it could figure out a way to do faster-than-light travel? I think that’s just completely impossible.

[Robert]

I think I can’t believe that. I think that’s daydreaming.

[Host]

I would agree.

[Robert]

A.I. may be able to solve problems in medicine, for example, that are too complex for one human or even teams of humans to deal with. It’s possible. And I’m not against A.I. I just take a strong stand on the use of A.I. at the top level in lethal systems. If A.I. is allowed in a lethal, or potentially lethal, system, then there must be a finite state machine running the show, observing what is happening, and with the ability to shut down the A.I. part if it’s making a mistake. This sounds very abstract, but it’s not. We’ve done it.
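
[ Interviewer’s note: a minimal sketch of the supervision pattern Professor McGhee describes, assuming a fixed, finite table of permitted requests. The states, action names, and demotion rule are hypothetical illustrations of mine, not the design from the paper. ]

    # Hypothetical sketch: a finite-state supervisor sits between an A.I. subsystem
    # and the actuators, and can shut the A.I. layer down.

    ALLOWED = {
        # supervisor state -> actions the A.I. layer is permitted to request
        "NOMINAL":  {"adjust_heading", "adjust_depth", "collect_sample"},
        "DEGRADED": {"adjust_heading"},
        "SHUTDOWN": set(),              # A.I. layer is ignored entirely
    }

    # One-way demotion: each out-of-bounds request moves the supervisor one step
    # toward SHUTDOWN, and SHUTDOWN is absorbing.
    DEMOTE = {"NOMINAL": "DEGRADED", "DEGRADED": "SHUTDOWN", "SHUTDOWN": "SHUTDOWN"}

    class Supervisor:
        def __init__(self) -> None:
            self.state = "NOMINAL"

        def review(self, requested_action: str) -> bool:
            """Return True only if the fixed rule table permits the request."""
            if requested_action in ALLOWED[self.state]:
                return True
            self.state = DEMOTE[self.state]
            return False

    sup = Supervisor()
    print(sup.review("collect_sample"))   # True  -- within the rule table
    print(sup.review("launch_weapon"))    # False -- supervisor drops to DEGRADED
    print(sup.review("adjust_depth"))     # False -- not permitted while DEGRADED
    print(sup.state)                      # SHUTDOWN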

[Host]

And that makes a ton of sense. I think one additional question is basically: if you’re an engineer in the field, how do you draw the line on where it’s okay to use A.I.? One thing that worries some people, myself very much included, is that you say, “well, A.I. should not be allowed in charge of lethal systems.” But my concern is that any system that’s sufficiently smart will make itself a lethal system to achieve its goals.

[Robert]

Yes; driverless cars are a fine example.

Have you heard of a book called… Let’s see, I’m walking over to look at it; it’s called Three Laws Lethal. It’s an update of Asimov’s three laws and deals with robots with the ability to harm people. It’s mostly about driverless cars.

[Host]

Interesting. Three Laws Lethal. I haven’t read it before, but I just pulled it up. I’ll definitely look at this later.

[Robert]

It’s available in electronic form from Kindle.

[Host]

I guess, don’t be afraid to spoil the plot; I would love to hear what you have to say about it.

[Robert]

Okay, uh, you go ahead and read it and then you can call me up again and we’ll talk about it some more.

[Host]

Haha okay, sounds good.

So I think then… I don’t know if this is a concern that you share as well, but one example that I’m sure you’ve heard a million times is the paperclip optimizer. If you have a system that’s trying to perform an economic goal, it may perform that goal in a way that is de facto lethal. So if you have an A.I. system that’s designed to manage a paperclip factory and boost production, it will just keep boosting production, and then it will decide, “well, I can be more effective if I take over the next-door land,” so maybe it will buy the land or maybe it will somehow manipulate people to get the land. And then eventually people will try to shut it down, and it thinks, “oh, if I get shut down I can’t build more paperclips,” so it will start killing people to stay alive. I’ve heard the term “convergent instrumental subgoal,” which is that if you have a primary goal, and you’re especially smart, you’ll realize that you have subgoals such as staying alive and gathering more power. So basically any smart optimizer will want to stay alive and will kill to stay alive.

[Robert]

Unless it’s under the control of another machine, which is a finite state machine and won’t let it do that.

[Host]

I guess, how can you be sure that the, uh, the enslaved machine wouldn’t be able to break its shackles and defeat the management logic?

[Robert]

Okay, we have to be clear about what you mean by breaking shackles. If we’re talking about electronic shackles, I don’t think that’s a problem. If you’re talking about a machine maybe deciding “I don’t like this thing that’s telling me I can’t do this; I’m going to rip it out of my body,” then that’s another whole game. I’m not that far into the future. I’m not in the future at all; I’m in the present right now. The Navy has an unknown number of unmanned, to a very high degree (not totally, but to a very high degree) autonomous submarines at sea now with lethal capabilities. Exactly what they can do is classified, and I have no access to that information. My primary concern in my life is those submarines. Although it’s not right to call them submarines; they don’t call them submarines without a human onboard. They’re “untethered, unmanned undersea vehicles.”

[Host]

Certainly. And so, I think maybe where this ties together is that… at least to my knowledge, the U.S. doesn’t have these, but… I think the claim is that Russia is building a nuclear-armed, unmanned submarine: “Status-6”.

[Robert]

Yes. Yes. There are such possibilities. That’s correct.

[Host]

And so, I think this all becomes very, very real and very important when you talk about something that is a strategic weapon under A.I. control, or at least under A.I. “tactical management.” I know that there have been statements, such as General Jack Shanahan at the Joint Artificial Intelligence Center stating that, at least in the foreseeable future, we would never put nuclear command and control decision making under A.I. But even if we don’t put the decision making under A.I., you would still have a potentially unmanned system responsible for the tactical deployment.

[Robert]

You’re young enough; you could help with this. There’s no technical problem with putting a finite state machine in control of a machine with A.I. capability. Did you ever study computer design?

[Host]

Um. I would say … um, I have a good amount of knowledge. Not nearly as much as you, I’m sure, but I have at least a working level of knowledge of the field.

[Robert]

I mean actual design.

[Host]

Um. I’ve never designed a processor or anything like that, but I’ve read a lot about it.

[Robert]

Okay. Well, you know, there’s a finite state machine called the Control Unit [ Interviewer’s note: the Control Unit (CU) is an internal component of the CPU. See https://en.wikipedia.org/wiki/Control_unit ] that runs everything that runs on a digital computer.

And such a digital computer can do many things we would not like it to do, but the finite state machine called the Control Unit determines what it can do and will do. And that is a finite state machine. It has zero A.I. in it. None.

And designers of computers are too smart to try to use A.I. in the control unit. They never will. But "dreamers", people who have not actually built and tested things, think maybe that’s a good idea.
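
[ Interviewer’s note: a toy illustration of this point, not an actual processor design. The control unit steps through a fixed, finite cycle with no reasoning of any kind. ]

    # Toy illustration only: a control-unit-style finite state machine that
    # endlessly steps through a fixed fetch/decode/execute cycle.

    CONTROL_UNIT_FSM = {
        "FETCH":   "DECODE",    # read the next instruction from memory
        "DECODE":  "EXECUTE",   # select which functional unit to activate
        "EXECUTE": "FETCH",     # perform the operation, then loop
    }

    state = "FETCH"
    for _ in range(6):          # two full instruction cycles
        print(state)
        state = CONTROL_UNIT_FSM[state]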

[Host]

Right. I see what you’re saying.

[Robert]

It’s a very, very tricky point. Very, very tricky. But I think our paper is extremely well written. There are four of us who worked very hard on it, really for 20 years. And I can send you a copy.

[Host]

That would be wonderful.

And, um, is it ok if I distribute the paper at the same time as distributing this interview transcript?

[Robert]

The paper comes from the IEEE, so you’ll need their permission for any kind of mass distribution.

[Host]

That’s fine.

[Robert]

But a link will do it!

[Host]

Yup. I’ll do that. I’ll just send a link.

[Robert]

And I urge you to have a conversation with Don Brutzman. Don and I are the two people at the Postgraduate School who’ve worked on the problem of unmanned, untethered submarine control for the longest time. Besides being a Ph.D. professor, he has over 20 years of military experience in submarines and another 10 years on top of that. He speaks much more smoothly than I do. He is the guy we let go out to talk to people; I’m the guy who stays inside to talk to Don Brutzman.

[Host]

Don’t worry, I’m the analog of you. But I’ll definitely speak with Don Brutzman as well. Thank you; I’ll contact him.

[Robert]

You’re only getting half the story from me. You really need very, very much to talk to Don. And he’s a generation younger than I am. And you’ll just find it very valuable.

[Host]

Okay, yeah. This conversation with you is super, super great.

So maybe a final question is: From the technical standpoint, could you give advice to engineers in the field about how best to implement the type of system you’re talking about? I know you mentioned you program in Lisp and Prolog. Are those good languages for building this type of system?

[Robert]

No, no. They just happen to be the languages we started in.

Until about 15 years ago, we thought A.I. was necessary, and that’s why we used Lisp and Prolog. The fourth member of our team, who is also one of the authors on the paper I keep mentioning, is Duane Davis. He’s the one who saw clearly that A.I. was not necessary, that Lisp and Prolog were not necessary, that finite state machine theory was sufficient.

There’s nothing special about Lisp and Prolog except that they’re two languages I’m comfortable with. I really don’t like to program in anything except those two languages, but Don has programmed in other languages. And as for what we’re talking about, there’s very little mention of languages in the paper I’m referring to. The results that are in there happen to be executed in Prolog because it’s convenient to write them that way. But any Turing-complete language, any language that is easy to do object-oriented programming in, is quite satisfactory.

It’s whatever you’re familiar with.

[Host]

Okay, great.

[Robert]

There’s no A.I. in this. There’s no A.I. in our paper. There’s only finite state machine theory.

[Host]

Fascinating.

I guess, do you see any upper bound on the capability of a finite-state-machine-based control system?

[Robert]

Not at the top, not at the top level. We’re only speaking of the top level. They’re there for adult supervision. The kids beneath them (not literally, but figuratively speaking), the lower layers of software can be written any way the person who’s writing the software wants to write it. And the finite state machine will shut it down if, for example, it gets stuck in an infinite loop. It’s trivial for a finite state machine to exercise timeout control of any piece of software.

Finite state machines are far more powerful than the world thinks they are.

And A.I. is a lot more dangerous than most of the world thinks it is. It’s glamorous sounding, but it’s terribly dangerous.
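
[ Interviewer’s note: a minimal sketch of the timeout-control point made a moment ago, assuming the lower-level software runs in its own process. The function names and the one-second budget are hypothetical; a real system would use whatever watchdog mechanism its platform provides. ]

    # Hypothetical sketch: the supervising layer gives a lower-level routine a
    # fixed time budget and terminates it if it does not return, for example if
    # it is stuck in an infinite loop.

    import multiprocessing as mp
    import time

    def lower_layer_task() -> None:
        while True:              # deliberately never terminates
            time.sleep(0.1)

    def run_with_watchdog(target, time_budget_s: float) -> str:
        worker = mp.Process(target=target)
        worker.start()
        worker.join(timeout=time_budget_s)
        if worker.is_alive():    # budget exceeded: shut the lower layer down
            worker.terminate()
            worker.join()
            return "TIMED_OUT"
        return "COMPLETED"

    if __name__ == "__main__":
        print(run_with_watchdog(lower_layer_task, time_budget_s=1.0))   # TIMED_OUT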

[Host]

Right. And I guess as kind of a final question: the primary benefit of finite state machines is that you can prove their behavior, right?

[Robert]

They’re provable. Right. Their correctness is exhaustively provable.

[Host]

Super valuable. This has been super, super helpful.

[Robert]

And I really appreciate your interest and it’s going to take a young person like yourself to make some of this actually happen.

[Host]

Well, thank you so much for your time, sir.

[Robert]

Thank you. Bye.