US Military testing whether human pilots can trust robot wingmen in a dogfight

American Military News: A U.S. military research program is advancing the study of humans and machines working together by testing how well pilots and artificially intelligent entities trust each other in one of the most challenging of tasks: aerial combat, or dogfighting.

The idea behind DARPA’s Air Combat Evolution, or ACE, program is that human fighter pilots will soon be flying alongside increasingly capable drones, dubbed “Loyal Wingmen,” that will help them evade enemy fighters and air defenses. Military leaders often describe the F-35 Joint Strike Fighter as a kind of flying command center, with the human operator working less like a traditional pilot and more like a team captain. The aircraft is loaded with AI features that pilots say make it easier to fly than traditional fighters, which frees the pilot to digest and put to use the immense amount of data the F-35 pulls in.

So human pilots will be flying surrounded by increasingly intelligent, and lethal, weapons. This raises big concerns about what many in the arms control community call lethal autonomy: killer robots, basically. But military leaders are fond of reminding people that their 2012 doctrine prefers a human on the loop for lethal decisions. That’s why the question of measuring trust between humans and AI assistants is critical. It’s one thing to measure how well you trust your smartphone’s driving directions; it’s much harder to measure how well highly trained fighter pilots trust experimental AI software when death is on the line. more here

17 Comments on US Military testing whether human pilots can trust robot wingmen in a dogfight

  1. …I work with robots. One set of them is autonomous cargo handlers that follow constellations of targets with a rotating laser; the rest are robot arms of various sizes and purposes.

    The three things you need to understand about robots:
    1) they do not care.
    2) they are only as good as the human mind that programmed them.
    3) they act on fallible inputs and produce fallible outputs.

    Programming is a one-man birthday party: you don’t get any presents you don’t bring. Once a robot is put in motion by its end user, it will follow the program according to its inputs until the program says to make an output.

    In all probability, the programmer is NOT a fighter pilot, so he will not THINK like a fighter pilot; he will think like a programmer. This may frustrate a fighter pilot when he sees an enemy escape by doing something HE could have predicted but the programmer could NOT, because the programmer is NOT a fighter pilot.

    If it encounters a condition it is not programmed for, it cannot resolve the situation on its own and will fault. In aircraft terms, this is where an autopilot disconnects, because a human needs to make the decision.

    If it gets bad input it will behave based on that bad input. This may be what doomed the 737 MAX, because a computer got bad feedback from a faulty angle of attack sensor. (A toy sketch of both failure modes follows this comment.)

    As far as not caring, it just doesn’t. It won’t deviate from its programming to save a human pilot because it can’t, and it doesn’t care. It can’t put 110% effort into downing an enemy asset because it can’t. It doesn’t care. It can’t notice other possible intelligence information en route to the target because it isn’t programmed to, and it doesn’t care.

    They don’t have patriotism, love, hate, loyalty or pity. They just do what they are told.

    And what’s programmed can be reprogrammed. It can be hacked from the ground, by an angry ground crew, an enemy that’s been given intelligence by Democrats, or a rogue pilot. The “Terminator” movies kind of handled this well, where identical robots did completely opposed things…because they were programmed to. They acted in ways we would call good and bad, but that’s because someone TOLD them to. They had no sense of it themselves because they had no sense at all.

    They were just machines.

    There’s a LOT more to it, but at the end of the day, no, you CAN’T trust them.

    Trust is for humans.

    Machines are never worthy of it…and they
    Just.
    Don’t.
    Care.

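    A minimal toy sketch of the two failure modes this comment describes: a program acts on whatever input it gets, and it faults on conditions nobody wrote code for. Python is used only for illustration; `pitch_command`, its thresholds, and the numbers are all made up, not real flight-control logic.

    ```python
    def pitch_command(aoa_degrees):
        """Return a (made-up) pitch command for an angle-of-attack reading, in degrees."""
        if 0.0 <= aoa_degrees <= 25.0:
            return -0.1 * aoa_degrees   # normal envelope: small correction
        if 25.0 < aoa_degrees <= 40.0:
            return -5.0                 # programmed stall-risk response: push the nose down
        # A condition nobody wrote code for: the program can't improvise,
        # it can only fault and hand the problem back to a human,
        # much like an autopilot disconnecting.
        raise RuntimeError("AoA outside programmed envelope")

    # Bad but plausible input: the program "trusts" the faulty sensor and
    # commands nose-down, roughly the failure mode described for the 737 MAX.
    print(pitch_command(32.0))   # -5.0, even if the true AoA was 3 degrees

    # Input nobody anticipated: the program faults instead of improvising.
    try:
        print(pitch_command(180.0))
    except RuntimeError as err:
        print("FAULT:", err)
    ```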
  2. I flew single seat fighters for 22 years, including the F-16C. I don’t see a human ever trusting a robot at 500 mph in a combat environment.
    Choose humans or robots to do the job, but don’t mix the two.

  3. Iceman:
    “Maverick, it’s not your flying, it’s your attitude. The enemy’s dangerous, but right now you’re worse. Dangerous and foolish. You may not like who’s flying with you, but whose side are you on?”

    Iceman = Trump
    Maverick = John McCain

  4. All this computerized drone stuff will work great until one of two things happens:
    1. The NORKs detonate a nuke 150 miles above the US
    Or
    2. Some 15-year-old kid figures out how to hack into one.

