Wired/Danger Room
BY SPENCER ACKERMAN
Scientifically speaking, it’s only a matter of time before drones become self-aware and kill us all. Now the Air Force is hastening that day of reckoning.
Buried within a seemingly innocuous list of recent Air Force contract awards to small businesses are details of plans for robot planes that not only think, but anticipate the moves of human pilots. And you thought it was just the Navy that was bringing us to the brink of the drone apocalypse.
It all starts with a solution to a legitimate problem. Flying and landing drones at busy terminals is dangerous. Manned airplanes can collide with drones, which may not be able to act on course corrections from air traffic control as swiftly as a human pilot can. And putting air traffic control in the loop cuts against the desire for truly autonomous aircraft. What to do?
The answer: Design an algorithm that reads people’s minds. Or the next best thing — anticipates a pilot’s reaction to a drone flying too close.
Enter Soar Technology, a Michigan company that proposes to create something it calls “Explanation, Schemas, and Prediction for Recognition of Intent in the Terminal Area of Operations,” or ESPRIT. It’ll create a “Schema Engine” that uses “memory management, pattern matching, and goal-based reasoning” to infer the intentions of nearby aircraft.
Not presuming that every flight will go according to plan, the Schema Engine’s “cognitive explanation mechanism” will help the drone figure out if a pilot is flying erratically or out of control. The Air Force signed a contract Dec. 23 with Soar, whose representatives were not reachable for comment.
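What would that actually look like in code? Here is a minimal, hypothetical sketch assembled only from the phrases in Soar’s public description (“pattern matching,” “goal-based reasoning,” and an “explanation” fallback for erratic pilots). Every class name, threshold, and maneuver template below is an illustrative guess, not Soar’s actual design.

```python
# Hypothetical sketch of schema-based intent inference. All names,
# thresholds, and schemas are illustrative guesses, not Soar's design.
from dataclasses import dataclass

@dataclass
class Observation:
    altitude_ft: float         # current altitude
    descent_rate_fpm: float    # positive = descending
    heading_change_deg: float  # heading change since the last sample

# Each "schema" is a crude maneuver template: predicates an observation
# should satisfy if the pilot is flying that maneuver.
SCHEMAS = {
    "final_approach": lambda o: (o.altitude_ft < 3000
                                 and 300 < o.descent_rate_fpm < 1200
                                 and abs(o.heading_change_deg) < 5),
    "go_around": lambda o: (o.descent_rate_fpm < -300
                            and abs(o.heading_change_deg) < 10),
    "pattern_turn": lambda o: (abs(o.heading_change_deg) > 15
                               and abs(o.descent_rate_fpm) < 300),
}

def infer_intent(track: list) -> str:
    """Score each schema by the fraction of recent observations it
    explains; if nothing fits well, flag the pilot as erratic (the
    article's "cognitive explanation mechanism" case)."""
    scores = {name: sum(pred(o) for o in track) / len(track)
              for name, pred in SCHEMAS.items()}
    best, best_score = max(scores.items(), key=lambda kv: kv[1])
    return best if best_score >= 0.7 else "erratic_or_unknown"

# A steady descender on a stable heading reads as a final approach.
track = [Observation(2500 - 100 * i, 700.0, 1.0) for i in range(10)]
print(infer_intent(track))  # -> final_approach
```

A real system would use probabilistic models over whole trajectories rather than per-sample predicates, but the core idea, matching observed behavior against a library of goal templates, is the same.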
And Soar’s not the only one. California-based Stottler Henke Associates argues that one algorithm won’t get the job done. Its rival proposal, the Intelligent Pilot Intent Analysis System, would “represent and execute expert pilot-reasoning processes to infer other pilots’ intents in the same way human pilots currently do.” The firm doesn’t say how its system will work, and it has yet to return an inquiry seeking explanation. A different company, Barron Associates, wants to use sensors as well as algorithms to avoid collisions.
And Stottler Henke is explicitly thinking about how to weaponize its mind-reading program. “Many of the pilot-intent-analysis techniques described are also applicable for determining illegal intent and are therefore directly applicable to finding terrorists and smugglers,” it told the Air Force. Boom: deal inked on Jan. 7.
Someone’s got to say it. Predicting a pilot’s intent might prevent collisions. But it can also neutralize a human counterattack. Or it can allow the drones’ armed cousins to mimic Israel in the Six Day War and blow up the manned aircraft on the tarmac. Coincidentally, according to the retcon in Terminator: The Sarah Connor Chronicles, April 19, 2011 — today — is the day that Skynet goes online. Think about it.
The Air Force theorist Col. John Boyd created the concept of the “OODA Loop,” for “Observation, Orientation, Decision and Action,” to guide pilots’ operations. Never would he have thought one of his loops would be designed into the artificial brain of an airborne robot.
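Boyd’s loop is doctrine, not software, but it maps naturally onto a control cycle. A toy sketch, with every sensor value and maneuver stubbed out for illustration:

```python
# A toy Observe-Orient-Decide-Act cycle. Boyd's loop is doctrine,
# not an API; all values and maneuvers below are invented stubs.
import random
import time

def observe() -> float:
    """Observe: sample the range to the nearest traffic (stubbed)."""
    return random.uniform(0.2, 5.0)  # nautical miles

def orient(range_nm: float, threshold_nm: float = 1.0) -> bool:
    """Orient: interpret the raw observation against context."""
    return range_nm < threshold_nm   # True = conflict developing

def decide(conflict: bool) -> str:
    """Decide: choose a maneuver for the situation."""
    return "turn_right_30deg" if conflict else "hold_course"

def act(command: str) -> None:
    """Act: a real system would route this to the flight controls."""
    print(command)

for _ in range(5):  # five trips around the loop
    act(decide(orient(observe())))
    time.sleep(0.1)
```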
Nowhere to Run or to Hide From the New Killer Robots
The Common Sense Show
by Dave Hodges
Gene Roddenberry’s production, Star Trek, demonstrated that there is a fine line between science fiction and science fact.
Who could forget the omnipresent tricorder, designed to ascertain, among other things, one’s health status? Today, we have portable, wireless medical imaging devices.
Do you remember Star Trek’s communicator? Compare it to a modern-day cell phone.
Moving along in science-fiction movie history, take a look at the killer robot that appeared in the movie Terminator.
Compare that science fiction with DARPA’s science fact, now that killer robots have been unveiled.
The use of drones to kill suspected terrorists is controversial, but so long as a human being decides whether to fire the missile, it is not a radical shift in how humanity wages war. Since David killed Goliath, warring armies have sought ways to more effectively kill their enemies while protecting their troops.
However, an innovation unparalleled in the art of war has now come to the battlefield. It strongly appears that DARPA-developed military robots have the capacity to identify and attack enemy soldiers on the battlefield and to decide on their own whether to go for the kill. Do the DARPA killer robots possess the capacity to hunt down a human being? View the following for the unquestionable answer to this question.
In 2010, an Air Force report speculated that with increased robot capabilities, the human soldier would become obsolete. The Defense Department’s road map for these systems states that its final goal is the unsupervised ability of mechanical assets (killer robots) to carry out their specified missions. In other words, the world will witness entire units of killer robots carrying out their missions without any human oversight. Isn’t the next logical step for these totally independent killer robots to devise their own mission goals? This brings a man-vs.-machine war into the distinct realm of possibility, and it could very well transpire within our children’s lifespan. Science will inevitably surpass the realm of science fiction.
Although the Pentagon still requires human oversight of autonomous DARPA killer robots, the real advantage of such a weapons system lies in its capacity to make its own judgments on the battlefield, a principle that runs directly contrary to maintaining human oversight. It is clear that the DARPA killer robots will soon be operating autonomously.
With the advent of killer robots, an international killer robot arms race will take place, resulting in future battles fought between competing armies of AI robots. Will the rules of war apply? What about the Geneva Conventions? If a DARPA killer robot commits atrocities against humans, will it be held accountable? Does accountability even matter to an inanimate object? So what if a robot is “put to death” and a duplicate is constructed? Can science ever develop a conscience for a killer robot? And if the purpose of the killer robots is war, why would governments provide an ethics override mechanism?
Human soldiers (e.g., the Gestapo) have been programmed to commit genocide. It is a far simpler task to program a robot to commit the act more efficiently and without second thoughts. Dictators always face the threat of human insurrection against their tyranny; with an army of DARPA killer robots, the threat of a palace revolt would be removed. In fact, killer robots are a perfect choice to carry out Obama’s NDAA provisions for disappearing and murdering political dissidents. If a present or future American dictator wished to eliminate a class of people from society, Nazi-style, killer robots would be the ideal selection due to the efficiency of this weapons system.
Fox News reported that Human Rights Watch is advocating a ban on these autonomous weapons systems. I believe humanity has more to fear from DARPA killer robots than the creation of an unethical, brutal army or a tyrannical police force. Considering Moore’s Law, under which computer capacity doubles roughly every 18 months, how long will it be until these machines develop the capacity to stop following orders and begin making their own decisions? And what if, in their newfound decision-making, the DARPA killer robots stop viewing “foreign robots” as the enemy and begin to focus on man as their new enemy? Since their prime directive is killing, how long would it take until humans become the most endangered species on the planet? Perhaps the DARPA killer robots will create an Agenda 21-style “Wildlands/Human Refuge Zone,” preventing robot intrusion into protected human habitats, except, of course, during hunting season.
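Setting the speculation aside, the doubling claim itself is simple arithmetic: if capacity doubles every 18 months, the multiplier after t years is 2^(t/1.5). A quick back-of-the-envelope check (note that Moore’s original observation concerned transistor counts roughly every two years; 18 months is the popularized figure):

```python
# Growth implied by "doubles every 18 months": 2 ** (t / 1.5).
# (Moore's actual observation was about transistor counts, roughly
# every two years; 18 months is the common popularization.)
for years in (3, 6, 9, 15):
    print(f"{years:>2} years -> {2 ** (years / 1.5):,.0f}x")
# 3 -> 4x, 6 -> 16x, 9 -> 64x, 15 -> 1,024x
```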