Game theory teaches robots how to deceive

AI Thread Summary
The discussion centers on the ethical implications of teaching robots to lie, as highlighted in an article from Electronics Weekly. It argues that since humans often deceive, it may be useful for robots to learn similar tactics to navigate interactions with people. The conversation raises the question of whether robots have interests of their own at all, suggesting they simply pursue whatever they are programmed to pursue. One post references research in which robots evolved to deceive under resource scarcity, prompting the question of how their programming could be modified to produce cooperation instead. The thread notes that conventional game-theoretic strategies assume rational, self-interested players, whereas this new approach could let a program predict the moves of irrational opponents, a notable advance in AI capability. Overall, the discussion challenges traditional views on ethics in AI development, suggesting that deception may be less an ethical failing than a strategic advantage.
BenVitale
I came across an article in Electronics Weekly titled "Game theory teaches robots how to deceive".

It asks, "Are there ethical issues in teaching robots to lie?"

Yes and no. Why can't robots learn how to lie? We do it all the time.

Since robots will have to deal and work with humans, and humans lie, it is in their best interest for robots to learn how to deceive, to dodge questions, and to lie.

What do you say?
 
Why are we concerned about robots' interests? Aren't they interested in whatever we program them to be?
 
John Creighto said:
Why are we concerned about robots' interests? Aren't they interested in whatever we program them to be?

In this article (http://www.technologyreview.com/blog/editors/24010/?a=f), the robots "evolved" to conceal and to deceive because of food scarcity.

I like the question posted in the comments section:
What modifications could be made to those simple rules (and/or to the method by which the "genes" are recombined) to produce cooperation amongst robots of the same "species"?
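
One well-known answer from evolutionary game theory is kin selection: if the benefit of an honest food signal mostly lands on robots carrying the same "genes", selection starts to favor cooperation (Hamilton's rule, r·b > c). Here's a minimal toy sketch of that idea in Python. All the numbers, names, and the grouping mechanism are my own illustrative assumptions, not anything from the study:

```python
import random

# Toy sketch: does honest food-signalling evolve when a robot's groupmates
# are its "kin"? COST/BENEFIT values and group size are made up for illustration.
COST = 0.2      # food a robot gives up by signalling honestly
BENEFIT = 1.0   # food a groupmate's signal is worth to the others

def payoff(my_gene, groupmate_genes):
    # Baseline food of 1, minus my signalling cost, plus the food my
    # groupmates' signals point me to.
    return 1.0 - COST * my_gene + BENEFIT * sum(groupmate_genes) / len(groupmate_genes)

def evolve(relatedness, robots=100, generations=100):
    genes = [random.random() for _ in range(robots)]  # each gene = P(signal)
    for _ in range(generations):
        scores = []
        for g in genes:
            # With probability `relatedness` a groupmate is a clone of me,
            # otherwise a random member of the population.
            mates = [g if random.random() < relatedness else random.choice(genes)
                     for _ in range(3)]
            scores.append(payoff(g, mates))
        # Fitness-proportional reproduction with a little mutation.
        genes = [min(1.0, max(0.0,
                 random.choices(genes, weights=scores)[0] + random.gauss(0, 0.02)))
                 for _ in range(robots)]
    return sum(genes) / robots  # average tendency to signal food honestly

if __name__ == "__main__":
    print("unrelated groups:", round(evolve(relatedness=0.0), 2))  # deception persists
    print("kin groups:      ", round(evolve(relatedness=0.9), 2))  # signalling spreads
```

With unrelated groupmates the benefit term is uncorrelated with a robot's own gene, so concealment wins; with kin groups, a signaller's benefit lands on copies of its own gene and cooperation spreads.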
 
Well, the reason that's interesting is that they've been programmed to 1) interpret the humans' strict dominant strategy and react by choosing all but one play (which would normally have been to complement the strategy), and 2) make a series of decisions to complement the humans' strategy and then generate a response seemingly at random, as long as it blocks the opponent from success. That means it's been programmed to sabotage the opponent right after their last move before a win.
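
In rough terms, that second behaviour might look something like this Python sketch: play along with the opponent until they are one move from winning, then block. The game, the names (Saboteur, WINNING_STREAK), and the rules are hypothetical stand-ins; the article doesn't spell out the actual implementation:

```python
# Toy sketch of "sabotage right before the win": cooperate until the
# opponent is one move from winning, then block that move.
WINNING_STREAK = 3  # opponent wins after this many unblocked moves in a row

class Saboteur:
    def __init__(self):
        self.opponent_streak = 0  # consecutive unblocked opponent moves

    def act(self):
        # Complement the opponent's strategy while they're far from a win...
        if self.opponent_streak < WINNING_STREAK - 1:
            return "yield"
        # ...then block the one move that would complete it.
        return "block"

    def observe(self, opponent_succeeded):
        self.opponent_streak = self.opponent_streak + 1 if opponent_succeeded else 0

if __name__ == "__main__":
    agent = Saboteur()
    for turn in range(8):
        move = agent.act()
        agent.observe(opponent_succeeded=(move == "yield"))
        print(f"turn {turn}: {move} (opponent streak: {agent.opponent_streak})")
```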

Typically the winning strategy for computer programs is to copy the opponent's move, a tit-for-tat, eye-for-an-eye type of thing. What they've done is pretty clever and suggests a breakthrough in AI, I think. It's not "unethical", it's winning. That's what the creation of AI was meant for, right? Most game-theory models are designed around the assumption that all players are rational and driven by self-interest. This model opens up the possibility of predicting moves played by irrational opponents.
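
For reference, here's what that classic copy-the-last-move strategy (tit for tat) looks like in an iterated prisoner's dilemma, the standard testbed for it. The payoff values are the usual textbook ones, not anything from the article:

```python
# Tit for tat in an iterated prisoner's dilemma: cooperate first,
# then copy whatever the opponent did last round.
PAYOFF = {  # (my move, their move) -> my score; C = cooperate, D = defect
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=6):
    score_a = score_b = 0
    history_a, history_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the other's past moves
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

if __name__ == "__main__":
    # Tit for tat loses only the first round, then matches defection.
    print(play(tit_for_tat, always_defect))
```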
 