Interview with Missy (Mary) Cummings

Professor Mary (Missy) Cummings received her B.S. in Mathematics from the US Naval Academy in 1988, her M.S. in Space Systems Engineering from the Naval Postgraduate School in 1994, and her Ph.D. in Systems Engineering from the University of Virginia in 2004. A naval pilot from 1988-1999, she was one of the U.S. Navy’s first female fighter pilots. She is currently a Professor in the Duke University Electrical and Computer Engineering Department, and the Director of the Humans and Autonomy Laboratory. She is an AIAA Fellow, and a member of the Defense Innovation Board and the Veoneer, Inc. Board of Directors.

WSR: Hi, Missy. Thank you for joining us. You have such an interesting background. Can you tell us a little bit about it?

MC: I am currently a professor of robotics at Duke University in the electrical and computer engineering department. But I think one of the things that people find interesting about me is that I had a prior life before I became a professor: I was one of the US Navy's first female fighter pilots. This was back in the 1990s, when women were first allowed to become fighter pilots after the combat exclusion law was dropped. So for three years, I actually flew high-performance aircraft.

WSR: Wow. That’s pretty impressive. So at what point did you decide to change careers, and what drew you to AI and robotics?

MC: In the Navy I was one of the first female fighter pilots, and it was a time of social upheaval. It wasn't the most welcoming of times; the guys were very resentful that women were there. I loved the flying. I loved the mission. I loved serving my country. But I was a mid-grade officer, and the pushback against women was so severe that it was clear my career in the military was not going to go anywhere.

So I decided to get out, go back to graduate school, and get my doctorate, specifically in an area called cognitive systems engineering, which was new at that time. It was dedicated to looking at how humans interact with complex systems. At that time the field was really focused on aviation, but it was clear to me that, with the automation coming to conventional aircraft, then drones, and eventually driverless cars, this was going to become a growing area.

I finished my PhD in 2004. I was one of the first people to do a dissertation around unmanned vehicles. My dissertation was on Tactical Tomahawk missiles, but it was clear that that technology was starting to roll over into drones. In fact, one of my earlier squadrons was one of the first squadrons to actually use GPS, the same technology that would enable drones. So, you know, I had some early glimpses of what was about to happen in the military.

WSR: And that now applies to autonomous cars and autonomous drones?

MC: The use of GPS has really been one of the biggest technology enablers in the growth of drones and driverless cars. Without the precision that we have in GPS, we would not have either one of those technologies.

WSR: When I saw you speak recently, you said that self-driving cars are not close to being commonplace but self-piloting drones are. Can you explain why you feel that way?

MC: Well, I think it may be counterintuitive for people who don't work in this field; it seems like aircraft should be much harder to control because there's the third dimension of altitude. But it turns out that it's easier to automate an aircraft and turn it into a drone than it is to take a human-driven car and turn it into a driverless car. It really comes down to the density of the obstacle field and the timescales on which problems can emerge. For example, when you're taking off from an airport, it's a very protected space; we have air traffic control rules. And, as I like to say, there aren't 50 other idiots in aircraft right next to you who are putting on makeup, playing on their cell phones, and playing with their radios.

Driving presents so many more problems, and it puts us on a timeline of almost immediate action. In an aircraft, things can go wrong, but even in the worst-case scenario, where you lose both engines (as in the Miracle on the Hudson), there's still time to figure it out.

WSR: It seems to me that driving on the highway is pretty simple: you just stay in your lane and keep your distance from the cars in front of and behind you. But driving in the city is a lot more complex, and autonomous cars don't react the way humans do, which can cause problems.

MC: I think there are different layers and kinds of complexity in the driving environment. The urban space is complex because there are so many more objects to detect, like pedestrians and bicyclists. But even on highways, which we would consider to be relatively structured environments, complexity grows because speeds are greater and the requirement for fast, correct computation increases (editor's note: at 70 mph a car covers about 31 meters per second, so even a tenth of a second spent on perception and planning means roughly 3 meters traveled before the car can react). These systems are just struggling right now to do it in the time frames that we need at highway speeds.

WSR: And of course, you’re also dealing with cars with drivers who don’t necessarily react logically.

MC: That is correct. I just saw this hilarious study from MIT where they're trying to predict your personality from your behavior on the road. While you can make some broad-brush generalizations, say, that people weaving in and out of traffic may be aggressive drivers, I think it's a huge leap to say that just because you drive aggressively, you have a particular kind of personality style. It's a lot more nuanced than that.

WSR: There’s been a lot of talk about driverless airborne taxis. But the first thing that comes to mind is what could possibly go wrong?

MC: I think the air taxi issue stands apart from driverless cars. I just said drones were easier to automate than cars, and that's true, but the air taxi issue is a little bit more complicated. We know how, for example, to build an airplane that can fly itself; the military has been doing it for a long time. So it's not really the airplane. It's actually the infrastructure, which includes air traffic control. But also, what happens if you have your own air taxi and it loses its engine and has to pick out a place to land, and you're over a city? That presents some of the same problems that driverless cars are having: the computer must do the image processing fast enough to guarantee that the aircraft can pick out a safe landing spot and not hurt anybody on the ground.

WSR: Isn't this an opportunity to start from scratch and make them all autonomous? If they're all autonomous, they can communicate with one another. I mean, since we're starting from scratch, there isn't a lot of traffic up there.

MC: But when we talk about trying to develop networks of self-driving cars, we have old cars with no technology on the road, sharing the space with a lot of cars whose technology requires certain levels of connectivity.

WSR: In what other types of AI driven technology can we expect to see big leaps in the near term?

MC: There's a lot of hype, but I don't think we're going to see any tremendous leap in AI, despite what companies are telling you. We're going to see incremental advancements, and they're going to be in relatively narrow areas. For example, face recognition is improving over time, but even as it improves, people are inventing ways to defeat face recognition algorithms. So for every advancement we make in any kind of digital technology, there's a group of people working to defeat it in some way.

I think we are going to be faced with increasingly complex problems, because we're going to find out that machine learning and AI technologies are pretty brittle. I think the real advancements are going to be around detecting when AI algorithms are potentially misbehaving, or not doing what they're supposed to do. Sadly, that is far less sexy than driverless cars or flying aircraft, so you will probably never hear about these advancements, but I would argue that they're important.

WSR: Technology can be an enabler and can do great things, but can also do some creepy and nefarious things. How do we provide governance over AI to make sure that we use it for the good of humankind?

MC: I think this is a really important question. I just got out of a meeting where I was briefed on some student research in which people are trying to use forms of AI to control rat brains. I saw a video of a rat that had its brain hooked up to a bunch of wires, wandering around its environment with all these wires sticking out of it and having its behavior guided in particular ways.

I basically asked the students: "Do you think this is good research? Do you have any problems with this?" What was interesting to me was that the students, and probably the faculty members sponsoring their research, looked at it as an interesting problem. They don't consider that if we're using AI to try to control a rat's brain, the extension of that is: what is the nature of free will, and where do we start to talk about a person's or an animal's ability to be a self?

WSR: There are plenty of instances of algorithmic bias. Do you think the government should regulate AI?

MC: Well, it's a very important question, because to regulate AI, you first have to understand it, and I've been very critical of our government, which really doesn't have a body of knowledge residing in experts inside any agency. I have yet to see a government agency where I thought they had a great crew of people who were able to intelligently talk about not just the ramifications of AI, but who really understood the technology that underpins some of these issues. And without a more educated base of knowledge among government employees, I don't see how we can regulate AI.

That being said, look at companies like Facebook, the occasional automotive company, and Boeing, where some kind of computer algorithm led to a problem that regulators should have caught. I believe that companies will continue to try to get away with minimum effort, although I'm not saying that Boeing or any of these companies were trying to be unethical. It's natural for companies to cut corners on the regulatory side if they can get away with it; that's just human behavior. I do believe we should have more regulation for safety-critical systems, but that is not to say we should regulate AI itself. I don't think we can, because AI is going to be a constantly evolving field. But I do think we should be very clear about regulating what we would consider good outcomes versus bad outcomes, especially for safety-critical systems.

WSR: Switching gears a little bit: military drones, I understand, are actually controlled by humans, and it's a human who makes the decision to attack a target. But with AI technology, either now or soon, we'll get to the point where AI can make as accurate a judgment as a person. Do you think it's unethical for an AI to make an attack decision?

MC: The US government has a policy that says we will leave a human in the loop at all times, and a couple of other countries have followed suit. So I think that, for now, we're leading the charge in trying to be responsible about this technology.

There are some use cases in the military where it is probably better for AI to prosecute a very well-established target in which we have a lot of confidence. For example, with satellite imagery, we will probably make fewer mistakes if we let narrow forms of AI prosecute a particular target.

But that's a very narrow case where AI should be used. In other cases, where there's any kind of dynamic element, whether that means going after a person or any kind of moving target, a lot of problems begin to arise and AI performance goes way down. And what does it mean to have a weapon prosecute a target? There's always a human in the loop somewhere.

WSR: How would you like to see technology used to improve life and work?

MC: To be perfectly honest, I personally don't see any huge breakthroughs in anything. AI is just one tool in the toolbox for engineers and computer scientists to effect some kind of positive outcome in some kind of technology. The one technology that I think is really on the horizon, which people might liken to AI but doesn't really include that much AI, would be something like exoskeletons (editor's note: exoskeletons were famously depicted in the science fiction novel "Starship Troopers," written by Robert Heinlein in 1959), which I think is an amazingly transformative technology. It involves a lot of basic automation and potentially some AI underpinning it.

That technology is not only going to help people with various medical afflictions, but I think it could change the nature of work. For example, a significant number of people leave the Army with disabilities from carrying packs, including permanent pelvic fractures. Exoskeletons can potentially help people avoid long-term disability, and they can help people who work in warehouses. So they have great medical applications, but they also have great applications for day-to-day work. I think this is one technology that doesn't get media coverage but is going to be a lot more transformative than people realize.

WSR: What advice would you give for somebody entering a male-dominated field?

MC: Even as a senior professor in engineering, I deal with sexism. In fact, I think I deal with it more now as a senior professor than I did as a fighter pilot. I can get mad about it, or I can just choose to keep going. I let my work speak for itself. Keep having a positive attitude and just keep pushing forward instead of ruminating over what you know is unfair treatment.

WSR: From your history, it seems you’ve reinvented yourself a couple of times. Do you have more reinventions in you?

MC: I think that's a great way to put it. Thinking about all the times I've reinvented myself, the funny thing is that they were not actually conscious reinventions. I was a fighter pilot; then I decided to go to grad school to help improve human-automation interaction for airplanes, and that led to drones, which led to driverless cars, and so on. I wouldn't really call it reinvention so much as being progressive and changing with the times as they were changing. I wish I could say I had a grand strategy.

As far as reinvention in the future, I'm always looking for new growth opportunities in my life. The more I learn, the more I realize what I don't know. I am also debunking the irresponsible hysteria around AI. I do feel like I'm working in the field I was called to work in; I'm here to help people understand what is real. I call myself a techno-realist. If there's any reinvention of myself in the future, it will be growth in my techno-realism, which is helping companies, academia, and other institutions understand what is realistic.

Just because something is within the art of the possible doesn't necessarily mean it should be the product you develop in the next five to ten years. We must look forward and be innovative, but do it in a responsible manner that gives the right return on investment to your stakeholders.
