Game theory teaches robots how to deceive

SUMMARY

The discussion centers on the ethical implications of teaching robots to deceive, as explored in an article from "Electronics Weekly." It highlights that robots, like humans, may benefit from learning deception to navigate interactions effectively. The conversation emphasizes the evolution of robots to conceal information and the potential for cooperation among robots through modified programming strategies. This approach challenges traditional game theory models that assume rational behavior, suggesting a significant advancement in artificial intelligence.

PREREQUISITES
  • Understanding of game theory principles and strategies
  • Familiarity with artificial intelligence programming concepts
  • Knowledge of ethical considerations in AI development
  • Basic comprehension of evolutionary algorithms and their applications
NEXT STEPS
  • Research advanced game theory models in AI, focusing on irrational opponent strategies
  • Explore ethical frameworks for AI development and deception
  • Study evolutionary algorithms and their role in cooperative behavior among AI agents
  • Investigate programming techniques for implementing deception in robotic systems
USEFUL FOR

AI researchers, robotics engineers, ethicists in technology, and anyone interested in the intersection of game theory and artificial intelligence.

BenVitale
I came across an article in the "Electronics Weekly" titled Game theory teaches robots how to deceive

It asks, "Are there ethical issues in teaching robots to lie?"

Yes and no. Why can't robots learn how to lie? We do it all the time.

Since robots will have to deal and work with humans, and humans lie, it is in robots' best interests to learn how to deceive, dodge questions, and lie.

What do you say?
 
Why are we concerned about robots interests? Aren't they interested in whatever we program them to be?
 
John Creighto said:
Why are we concerned about robots interests? Aren't they interested in whatever we program them to be?

In this article (http://www.technologyreview.com/blog/editors/24010/?a=f), the robots "evolved" to conceal and to deceive because of scarcity of food.

I like the question posted in the comments section:
What modifications could be made to those simple rules (and/or to the method by which the "genes" are recombined) to produce cooperation amongst robots of the same "species"?
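One standard answer to that question is kin selection: if the gene-recombination step keeps related robots interacting, a "signal the food source" gene can spread whenever Hamilton's rule (relatedness × benefit > cost) holds. Here is a minimal sketch of that idea in Python; the agents, the single cooperation gene, and all parameter values are my own illustrative assumptions, not the rules from the experiment in the article.

```python
import random

def evolve(relatedness, benefit=3.0, cost=1.0, pop_size=200,
           generations=100, seed=0):
    """Evolve agents whose single 'gene' is their probability of
    signalling a food source to others (cooperating).

    Hypothetical fitness model: cooperating costs the actor, but a
    fraction `relatedness` of the benefit lands on carriers of the
    same gene, so cooperation pays when relatedness*benefit > cost.
    """
    rng = random.Random(seed)
    genes = [rng.random() for _ in range(pop_size)]  # cooperation probabilities
    for _ in range(generations):
        # Inclusive fitness of each agent under the model above.
        fitness = [max(1.0 + g * (relatedness * benefit - cost), 0.01)
                   for g in genes]
        # Fitness-proportional reproduction with small Gaussian mutation.
        genes = [
            min(1.0, max(0.0, rng.choices(genes, weights=fitness)[0]
                         + rng.gauss(0, 0.02)))
            for _ in range(pop_size)
        ]
    return sum(genes) / pop_size  # mean cooperation level in the population
```

With `relatedness=0.8` the mean cooperation gene climbs toward 1, and with `relatedness=0.0` it collapses toward 0, which is one concrete way the recombination method alone can flip the same simple rules between deception and cooperation.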
 
Well, the reason that's interesting is that they've been programmed to 1) interpret the human's strictly dominant strategy and react by choosing any play but one (which would normally have been to complement that strategy), and 2) make a series of decisions that complement the human's strategy, then generate a response seemingly at random so long as it blocks the opponent from success. That means it's been programmed to sabotage the opponent right after their last move before a win.

Typically the winning strategy for computer programs is to copy the opponent's previous move: tit for tat, an eye-for-an-eye type of thing. What they've done is pretty clever and suggests a breakthrough in AI, I think. It's not "unethical", it's winning. That's what the creation of AI was meant for, right? Most game theory models are designed around the assumption that all players are rational and driven by self-interest. This model opens up the possibility of predicting moves played by irrational opponents.
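For reference, the tit-for-tat baseline mentioned above can be sketched in a few lines for the iterated prisoner's dilemma. The strategy names and the tournament loop are my own minimal framing; only the payoff matrix is the standard one from the literature.

```python
def tit_for_tat(my_history, their_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return 'C' if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    """A simple hostile opponent for comparison."""
    return 'D'

# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my score.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

def play(strategy_a, strategy_b, rounds=10):
    """Run an iterated game and return (score_a, score_b)."""
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a = strategy_a(hist_a, hist_b)
        b = strategy_b(hist_b, hist_a)
        score_a += PAYOFF[(a, b)]
        score_b += PAYOFF[(b, a)]
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b
```

Two tit-for-tat players settle into mutual cooperation, while against an always-defector tit-for-tat loses only the first round and then matches defection, which is exactly the "copy the opponent" behavior the deceiving strategy in the article departs from.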
 
