Facebook AI runs away with secret coding of its own

  1. Aug 1, 2017 #1
    So I was reading today that Facebook engineers had to unplug an AI system because the AI bots started to create their own language, and the programmers could not decipher the code/language the bots were using between themselves. They 'panicked' and pulled the plug on it.

    https://www.yahoo.com/tech/facebook-engineers-panic-pull-plug-ai-bots-develop-162649618.html

    Indeed the beginnings of SkyNet... Interesting.
     
  3. Aug 1, 2017 #2

    Borg

    Gold Member

    That's an odd way to ask for John Connor. o_O
     
  4. Aug 1, 2017 #3
    It is only to be expected that humans will be unable to decipher communication between advanced AIs. Advanced AIs tend to be neural networks, whose inner workings are already far too complicated for humans to understand.
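
    A quick back-of-the-envelope illustration (the layer widths here are made up, chosen only for scale): even a small network carries a huge number of individually meaningless parameters.

    # Count the weights and biases in a small fully-connected network.
    # The layer widths are hypothetical, just to show the scale involved.
    layers = [128, 256, 256, 64]
    params = sum(a * b + b for a, b in zip(layers, layers[1:]))
    print(params)  # 115264 numbers, none of them individually meaningful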
     
  5. Aug 1, 2017 #4
    Well this is disturbing....
     
  6. Aug 1, 2017 #5

    russ_watters


    Staff: Mentor

    Awesome! I hope they just halted the experiment and didn't terminate it, though; that way, after stopping to evaluate it, they could restart it.
     
  7. Aug 2, 2017 #6
    Looks to me like just gibberish "baby" talk.
    Is anyone sure the two AI bots understood each other?
     
  8. Aug 2, 2017 #7
    The engineers/programmers did point out that the bots understood each other, as the article indicates, even if the engineers could not understand what the bots were communicating. The bots created their own 'babble talk'.
     
  9. Aug 2, 2017 #8
    In that case, the article is perhaps just hype about nothing.
    Would it be safe to say that the bots "think" like that through their programming, but with the 'to English' translation turned off, the indecipherable output makes for a "whoa, this is weird" story all over the internet?
    The 'to English' part would also be what Facebook wants working properly for its users.
     
  10. Aug 2, 2017 #9
    The problem is that they may not know which parts of the AI do the translation. A neural network doesn't have discrete methods to call, so there is no Language::toEnglish function. The language could be deeply embedded in the connections of the neurons and layers.
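
    As a minimal sketch of the point (hypothetical code, not Facebook's): a trained chat bot exposes a single tokens-in, tokens-out surface, and everything it "knows" about language is smeared across its weight matrices. There is no separate translation step to switch off.

    # Minimal sketch of a neural chat agent (hypothetical, not FAIR's code).
    # Note there is no Language::toEnglish anywhere -- the "language" lives
    # entirely in the weight matrices W_in and W_out.
    import numpy as np

    rng = np.random.default_rng(0)
    VOCAB = ["the", "ball", "book", "hat", "i", "want", "you", "<eos>"]
    HIDDEN = 16
    W_in = rng.normal(size=(len(VOCAB), HIDDEN))   # stands in for trained weights
    W_out = rng.normal(size=(HIDDEN, len(VOCAB)))

    def respond(token_ids, steps=5):
        """The only callable surface: token ids in, tokens out."""
        h = np.zeros(HIDDEN)
        for t in token_ids:              # encode the incoming message
            h = np.tanh(h + W_in[t])
        reply = []
        for _ in range(steps):           # decode a reply one token at a time
            t = int(np.argmax(h @ W_out))
            reply.append(VOCAB[t])
            h = np.tanh(h + W_in[t])
        return reply

    print(respond([VOCAB.index("i"), VOCAB.index("want"), VOCAB.index("ball")]))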

    When Google was working on image recognition, they injected some code that let them peek between the layers of the network. It provided a glimpse into how the AI perceived the world, but the result was still far too abstract for humans to appreciate the world the way the network itself did.
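
    For a flavor of how that peeking works, here is a toy version of the general technique (activation maximization, the idea behind Google's "DeepDream" work; this is not their code): nudge an input by gradient ascent until it strongly excites one hidden unit, then inspect what the input has become.

    # Toy activation maximization on a random one-layer "network".
    import numpy as np

    rng = np.random.default_rng(1)
    W1 = rng.normal(size=(64, 32))       # stands in for a trained first layer
    img = rng.normal(size=64) * 0.1      # start from a near-blank "image"

    def layer1(x):
        return np.tanh(x @ W1)

    unit, lr, eps = 7, 0.1, 1e-4         # unit to probe, step size, finite diff
    for _ in range(200):
        grad = np.zeros_like(img)        # numerical gradient of the activation
        for i in range(img.size):
            d = np.zeros_like(img)
            d[i] = eps
            grad[i] = (layer1(img + d)[unit] - layer1(img - d)[unit]) / (2 * eps)
        img += lr * grad                 # push input toward what excites the unit

    print("unit", unit, "activation:", layer1(img)[unit])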
     
  11. Aug 11, 2017 #10
    It sounds like media hype to me.
    Any news sells.
    I won't be rushing to get my anti-AI protection outfit yet.
     
  12. Aug 11, 2017 #11
    I'm also in the 'media hype' camp.

    Honestly, what could this possibly do? The computer is in a controlled setting, or should be if they are actually worried or even just being super-cautious, and they'll make sure it isn't connected to anything it can control. It's like saying that an embedded device stuck in an infinite loop is going to do some damage beyond burning a couple of watts of power.

    Now, if they let this AI loose connected to a room with a bunch of CNC equipment and networked robotic raw material supplies so it could start building armies of robots, or where it could get to computers controlling the electrical grid, maybe we should be concerned. But a computer repeating something 5 times rather than saying "5 of those" is hardly a nightmare scenario.
     
  13. Aug 12, 2017 at 2:35 AM #12
    It's an alarmist article.

    Machines can count from 1 to a billion in less than a second and can process primitives much faster than we can, but we're not restricted to primitives.

    Here's what a "hello, world" program written in a minimalist language (Brainfuck) looks like:

    ++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

    Copy that string and paste it into a Google search box, and you'll see a list of articles making it clear that this is nothing new or scary.
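
    If you're curious how those eight symbols actually execute, here's a small interpreter (my own, written just for illustration; it assumes the snippet above is the classic Brainfuck "Hello World!"):

    # A minimal Brainfuck interpreter in Python.
    def run(code, tape_len=30000):
        tape, ptr, pc, out = [0] * tape_len, 0, 0, []
        jumps, stack = {}, []
        for i, c in enumerate(code):     # precompute matching brackets
            if c == "[":
                stack.append(i)
            elif c == "]":
                j = stack.pop()
                jumps[i], jumps[j] = j, i
        while pc < len(code):
            c = code[pc]
            if c == ">":
                ptr += 1
            elif c == "<":
                ptr -= 1
            elif c == "+":
                tape[ptr] = (tape[ptr] + 1) % 256
            elif c == "-":
                tape[ptr] = (tape[ptr] - 1) % 256
            elif c == ".":
                out.append(chr(tape[ptr]))
            elif c == "[" and tape[ptr] == 0:
                pc = jumps[pc]           # skip the loop body
            elif c == "]" and tape[ptr] != 0:
                pc = jumps[pc]           # jump back to the loop start
            pc += 1
        return "".join(out)

    print(run("++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]"
              ">>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++."))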
     
  14. Aug 12, 2017 at 12:13 PM #13
    I suspect this incident at Facebook was just a coding bug. The idea that imperfect humans can somehow create perfectly "moral" and "ethical" AI doesn't seem realistic. Human AI research will likely end in a superintelligence that "learns" its own rules to govern itself; after all, what is "intelligence" without the ability to learn and make decisions?
     
  15. Aug 12, 2017 at 1:45 PM #14
    Let's define what "media" we're talking about. Yahoo isn't exactly the New York Times or Wired. This "story" didn't originate with journalists at mainstream media - it's viral BS from the Internet BS factory. Free fake news.

    A big clue that Yahoo's story may be phony is the complete lack of named sources. If you look below the byline you find that the story was picked up from someplace called BGR - http://bgr.com. If you go to BGR, you'll find that BGR didn't originate the story either, but was copying from a story on TechTimes: http://www.techtimes.com/articles/2...m-shut-down-before-it-evolves-into-skynet.htm If you then read the TechTimes story, you find that it cites a story on Hot Hardware as one of its sources - https://hothardware.com/news/facebook-shuts-down-ai-system - as well as an article at Code.Facebook.com: https://code.facebook.com/posts/1686672014972296/deal-or-no-deal-training-ai-bots-to-negotiate/

    However the Hot Hardware story is merely recycling (without giving either credit or a link to its source - which when I was a reporter, we called plagiarism) a far more responsible, factual, and interesting article at FastCodeDesign; FastCode was the one who first referenced Code.Facebook.com; and not only that, but the writer at FastCode took the radical step of actually talking to people at Facebook, as real journalists do: https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

    Read the FastCode article and you'll understand what actually happened. Which has nothing to do with the chain of viral BS picked up by Yahoo and the rest. It has nothing to do with Facebook engineers pulling the plug out of fear. In particular this claim, as repeated by all the stories (this wording is from the Yahoo version) is false:

    Without being able to understand how or why the bots were communicating, Facebook had no choice but to shut down the system.​

    The reality is more mundane and does not have any exciting relation to Elon Musk, old Terminator poster photos, etc. Here's an excerpt from the factually based FastCode article:

    This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

    “There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

    “Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that’s been observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."​

    So yeah, they shut down the bots - the way you'd shut down any program that you hadn't coded right and needed to fix.
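
    As a toy illustration of the fix Batra describes (all names and numbers here are hypothetical, not FAIR's code): the bug was a reward function with no term for staying in English, and the fix is to add one, so that drifting into shorthand costs the agents something.

    # Hypothetical negotiation reward with an English-fluency term added.
    def reward(deal_value, utterance, english_score, weight=0.5):
        # deal_value: how good the negotiated split was for this agent
        # english_score(utterance): 0..1, how English-like a language model
        # judges the utterance to be
        return deal_value + weight * english_score(utterance)

    # Crude stand-in scorer: repeating one word scores 0, varied speech 0.9.
    toy_score = lambda u: 0.0 if len(set(u.split())) == 1 else 0.9

    # With weight=0 (the reported bug) both lines earn the same reward,
    # so "ball ball ball ball ball" is as good as plain English.
    print(reward(3.0, "ball ball ball ball ball", toy_score))  # 3.0
    print(reward(3.0, "i would like five balls", toy_score))   # 3.45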

    The FastCode writer then takes this a bit further in terms of the implications; but in a responsible way which I suppose some may find boring:

    Right now, companies like Apple have to build APIs–basically a software bridge–involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are heavily debated across the industry in decade-long arguments. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our “smart devices” could learn to interoperate, no API required.​

    Back to the Yahoo article. We could treat that kind of fake news as harmless fun and an excuse for joking about SkyNet; but these days, I don't think fake news is harmless to us, and it ain't so fun anymore; not like laughing at aliens meeting the President in the Weekly World News used to be.

    And what I also object to, as a retired journalist, is that the myriad of pseudo-news outlets like Yahoo routinely rip off stories from the small remaining number of professional journalists who do actual work; and not only do these outlets steal, but they increasingly distort the news while doing so, as in this case. And the process gets taken more and more for granted by the rest of us; it's what we've become used to.

    Anyway I suggest we get our yucks from something less insidious; and that we take more care in vetting items that originate from venues we know are prone to fake news. In this case Snopes appears to still be operating, despite its legal troubles; they reviewed the viral version of the story & deem it false: http://www.snopes.com/facebook-ai-developed-own-language/
     
    Last edited: Aug 12, 2017 at 2:50 PM
  16. Aug 13, 2017 at 7:12 AM #15
    P.S. I forgot to mention that Snopes also got in touch with the Facebook developers, via email; they asked lead developer Michael Lewis about the viral meme:

    As to the claim that the project was 'shut down' because the bots’ deviation from English caused concern, Lewis said that, too, misrepresents the facts:

    'There was no panic, and the project hasn’t been shut down. Our goal was to build bots that could communicate with people. In some experiments, we found that they weren’t using English words as people do — so we stopped those experiments, and used some additional techniques to get the bots to work as we wanted. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI.” If that were the case, every AI researcher has been “shutting down AI” every time they stop a job on a machine.'​
     