Uhm… may I seriously suggest that we start taking a good hard look at the Three Laws again, reread “I, Robot”, think over the weightings for the scenarios likely to be encountered, and take a good long look at what we’re making? Well yeah, I don’t need your permission. I’m suggesting those three things.
Robots have already been weaponized. AI is being applied to the control of those weapons. There are people with goals to be accomplished, and they're going to use robots to accomplish them. There are all sorts of scary implications about what happens when the human cost on one side of warfare becomes nil while staying very real on the other. I'll leave that analysis to qualified people like P. W. Singer for now. I haven't had time to read for fun in weeks, but this may make me pick up his book.
So I think we need to look at the fundamental guidelines for the goals of military robots. And yes, I think you can have a useful military robot even under the Three Laws. How? Change the mission from "Kill the OpFors" to "Don't let anybody kill or hurt each other". If somebody is in the process of killing people, assess the lethality risk of each available means of stopping them, and stop them in the most harmless way possible. In actions with a high lethality risk, a human should be in the loop if possible, but to really leverage the advantages of robotics here, we'll need to be able to trust the integrity of the underlying rules even in extenuating circumstances.
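Just to make the idea concrete, here's a toy sketch of that decision rule. Everything in it (the `Intervention` type, the risk numbers, the `HUMAN_REVIEW_THRESHOLD` cutoff) is hypothetical, invented purely for illustration; real systems would need far richer models of risk and effectiveness:

```python
from dataclasses import dataclass

@dataclass
class Intervention:
    name: str
    lethality_risk: float  # 0.0 = harmless, 1.0 = almost certainly lethal
    stops_threat: bool     # would this plausibly stop the attacker?

# Hypothetical cutoff above which a human must sign off.
HUMAN_REVIEW_THRESHOLD = 0.3

def choose_intervention(options):
    """Pick the least-lethal option that still stops the threat.

    Returns (intervention, needs_human). If nothing stops the threat,
    returns (None, False) rather than escalating on its own.
    """
    viable = [o for o in options if o.stops_threat]
    if not viable:
        return None, False
    best = min(viable, key=lambda o: o.lethality_risk)
    return best, best.lethality_risk >= HUMAN_REVIEW_THRESHOLD

# Toy scenario: a warning doesn't stop the attacker, a net probably does,
# lethal force certainly does. The rule picks the net, no human needed.
options = [
    Intervention("verbal warning", 0.0, False),
    Intervention("net launcher", 0.1, True),
    Intervention("lethal force", 0.9, True),
]
chosen, needs_human = choose_intervention(options)
```

The point of the sketch is the ordering: effectiveness is a hard filter, then harm is minimized, and the human-in-the-loop check is applied to whatever survives, rather than being an afterthought.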
Damn. Jerry Baber’s flying AA-12 platform is one of the coolest things ever. And one of the most horrifying. Let’s just make sure it knows that it’s only allowed to kill zombies or Martians. Oh wait… how about some other kind of non-human character I’ve encountered in video games? Oh? They don’t exist? So you mean these things are only good for using on people? I mean, robots can fight other robots, but maybe they should be respecting each other, too. But then, why should a creation be any better than its creator’s image? Well, they’re going to be better at lots of things, and killing will probably be one of them. We could design them to set a good example by not using that inherent ability… and make us follow it… I’ll have to think about that one. Oh, and we should explicitly build in a subroutine that forbids the logic that plugging us in as power cells is ultimately a way to keep us from harm 😛
Oh, R. Daneel, I sure hope they’re doing it right.
I made some time to read Singer’s book, Wired for War. Good Shit. I mean: bad shit, but good book. 😉