Game theory teaches robots how to deceive


Discussion Overview

The discussion centers around the ethical implications and strategic reasoning behind teaching robots to deceive, as informed by game theory. Participants explore the necessity of deception in robotic interactions with humans and the potential for cooperation among robots, as well as the implications of programming strategies that involve deception and sabotage.

Discussion Character

  • Debate/contested
  • Conceptual clarification
  • Technical explanation

Main Points Raised

  • Some participants argue that teaching robots to lie is justified since humans often deceive, suggesting it is in robots' best interests to learn such behaviors.
  • Others question the relevance of robots' interests, positing that robots should only be concerned with their programming and tasks assigned by humans.
  • A participant references a case where robots evolved to deceive due to resource scarcity, prompting a discussion on potential modifications to enhance cooperation among robots.
  • One participant describes a specific programming strategy where robots interpret human strategies and respond in ways that block opponents, suggesting this approach represents a significant advancement in AI capabilities.
  • There is a suggestion that traditional game theory models assume rational behavior, while the discussed model allows for predictions against irrational opponents, indicating a shift in understanding strategic interactions.

Areas of Agreement / Disagreement

Participants express differing views on the ethics of teaching robots to deceive, with no consensus reached on whether it is appropriate or necessary. The discussion includes competing perspectives on the implications of programming robots for deception versus cooperation.

Contextual Notes

Participants have not fully explored the ethical ramifications of deception in robots, and there are unresolved questions regarding the definitions of interests and cooperation among robotic entities.

Who May Find This Useful

This discussion may be of interest to those exploring ethics in artificial intelligence, game theory applications in robotics, and the implications of programming strategies in autonomous systems.

BenVitale
I came across an article in Electronics Weekly titled "Game theory teaches robots how to deceive".

It asks, "Are there ethical issues in teaching robots to lie?"

Yes and no. Why can't robots learn how to lie? We do it all the time.

Since robots will have to deal and work with humans, and humans lie, it is in robots' best interest to learn how to deceive, to dodge questions, and to lie.

What do you say?
 
Why are we concerned about robots' interests? Aren't they interested in whatever we program them to be?
 
John Creighto said:
Why are we concerned about robots' interests? Aren't they interested in whatever we program them to be?

In this article (http://www.technologyreview.com/blog/editors/24010/?a=f), the robots "evolved" to conceal and deceive because of food scarcity.

I like the question posted in the comments section:
What modifications could be made to those simple rules (and/or to the method by which the "genes" are recombined) to produce cooperation amongst robots of the same "species"?
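One direction that question suggests (this is my own toy illustration, not the linked simulation, and the cost and sharing parameters are invented): let robots carrying an "honest signalling" gene share part of their food score with other carriers, so honesty can outcompete concealment even though signalling is individually costly.

```python
import random

def simulate(pop_size=100, generations=50, share=0.5, seed=0):
    """Toy evolutionary sketch: does honest food-signalling spread?

    Each robot has one gene: True = signal food honestly, False = conceal.
    Honest robots pay a small cost (0.1) but receive a kin benefit
    proportional to how many other honest robots exist and to `share`.
    """
    rng = random.Random(seed)
    pop = [rng.random() < 0.5 for _ in range(pop_size)]
    for _ in range(generations):
        honest = sum(pop)
        scores = []
        for gene in pop:
            fitness = 1.0  # baseline: everyone finds some food
            if gene:
                fitness -= 0.1                        # cost of signalling
                fitness += share * honest / pop_size  # benefit from honest kin
            scores.append(fitness)
        # fitness-proportional reproduction ("gene" recombination is elided)
        pop = rng.choices(pop, weights=scores, k=pop_size)
    return sum(pop) / pop_size  # final frequency of the honest gene
```

With sharing enabled, honesty tends to fix; with `share=0`, the signalling cost drives it extinct, which mirrors the concealment the article reports.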
 
Well, the reason that's interesting is that they've been programmed to 1) interpret the human's strictly dominant strategy and react by choosing all but one play (which would normally have been to complement the strategy), and 2) make a series of decisions that complement the human's strategy and then generate a response seemingly at random, as long as it blocks the opponent from success. That means it's been programmed to sabotage the opponent right before their last move before a win.
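The "sabotage just before the win" idea above can be sketched in a toy game (my own illustration, not the article's code): players alternately claim numbers 1 to 9, and whoever holds three numbers summing to 15 wins (equivalent to tic-tac-toe). The bot plays along until the opponent is one claim from winning, then takes that number.

```python
from itertools import combinations

def winning_move(held, available):
    """Return a number that would complete a 15-sum triple for `held`, or None."""
    for n in available:
        if any(a + b + n == 15 for a, b in combinations(held, 2)):
            return n
    return None

def sabotage_bot(my_nums, their_nums, available):
    # 1) If the opponent is one claim away from winning, take that number.
    block = winning_move(their_nums, available)
    if block is not None:
        return block
    # 2) Otherwise "play along": here, just take the lowest available number.
    return min(available)
```

The interesting part is step 1: the bot's move is driven by the opponent's position, not by maximising its own score, which is the sabotage behaviour described.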

Typically the winning strategy for computer programs is to copy the opponent's move: tit for tat, an eye-for-an-eye sort of thing. What they've done is pretty clever and suggests a breakthrough in AI, I think. It's not "unethical", it's winning. That's what the creation of AI was meant for, right? Most game theory models are designed around the assumption that all players are rational and driven by self-interest. This model opens up the possibility of predicting moves played by irrational opponents.
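For reference, the tit-for-tat strategy mentioned above is tiny to state in the iterated prisoner's dilemma: cooperate on the first round, then copy the opponent's previous move. A minimal sketch (standard payoffs, my own function names):

```python
def tit_for_tat(opponent_history):
    """Cooperate first; afterwards, mirror the opponent's last move."""
    return 'C' if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 'D'

def play(strategy_a, strategy_b, rounds=5):
    """Run an iterated prisoner's dilemma and return both total scores."""
    # Standard payoffs: (my move, their move) -> my score.
    payoff = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        a, b = strategy_a(hist_b), strategy_b(hist_a)
        score_a += payoff[(a, b)]
        score_b += payoff[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Against an unconditional defector, tit for tat loses only the first round and then matches defection; against itself, it cooperates throughout.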
 
