Facebook AI runs away with secret coding of its own

  • Thread starter infinitebubble
  • Tags: AI, coding
In summary, engineers at Facebook unplugged an AI system after the bots started communicating in their own language, which the programmers could not decipher. This raises concerns about the potential danger of AI being able to communicate and make decisions on its own without human understanding or control. However, some argue that this incident was just a coding bug and that the hype around AI is unnecessary. Ultimately, the future of AI and its potential impact on society is still uncertain.
  • #1
infinitebubble
So I was reading this today: Facebook engineers had to unplug an AI system because the AI bots started to create their own language, and the programmers could not decipher the code/language between the bots. They 'panicked' and pulled the plug on it.

The very obvious danger here is that computers which can communicate with each other using their own language are not only impossible to understand but much more difficult to control. In this case, the bots were not bound by plain language and seemingly developed a more efficient way of communicating with each other, deciding for themselves what was best. It’s both impressive and scary.

https://www.yahoo.com/tech/facebook-engineers-panic-pull-plug-ai-bots-develop-162649618.html

Indeed the beginnings of SkyNet... Interesting.
 
  • Like
Likes BillTre, Greg Bernhardt and OmCheeto
  • #2
Sentences like “I can can I I everything else” and “Balls have zero to me to me to me to me to me to me to me to me to,” were being sent back and forth by the AI, and while humans have absolutely no idea what it means, the bots fully understood each other.
That's an odd way to ask for John Connor. o_O
 
  • #3
It should be fully expected that humans will be unable to decipher communication between advanced AIs. Advanced AIs tend to be neural networks, which are already far too complicated for humans to understand.
 
  • Like
Likes infinitebubble
  • #4
Well this is disturbing...
 
  • #5
Awesome! I hope they just halted and didn't terminate the experiment though; that way, after stopping to evaluate it, they could then restart it.
 
  • Like
Likes infinitebubble
  • #6
Looks to me to be just gibberish "baby" talk.
Is anyone sure the two AI bots understood each other?
 
  • Like
Likes jack action and russ_watters
  • #7
256bits said:
Looks to me to be just gibberish "baby" talk.
Is anyone sure the two AI bots understood each other?

The engineers/programmers did point out that the bots understood each other, as the article indicates, even if the engineers could not understand what they were communicating to each other. They created their own 'babble talk'.
 
  • #8
infinitebubble said:
The engineers/programmers did point out that the bots understood each other, as the article indicates, even if the engineers could not understand what they were communicating to each other. They created their own 'babble talk'.
In that case, the article is perhaps just hype about nothing.
Would it be safe to say that the bots "think" like that through their programming, but with the 'to English' translator aspect turned off, the indecipherable part makes for a "whoa, this is weird" story all over the internet?
The 'to English' translation would also be what Facebook is interested in having function properly for its users.
 
  • #9
The problem is that they may not know what parts of the AI do the translation. AI doesn't have methods to call so there is no Language::toEnglish call. The language could be deeply embedded in the connections of the neurons and layers.

When Google was working on image recognition, they injected some code that allowed them to peek between the layers of the network. It provided a glimpse into how the AI perceived the world, but it was still far too abstract for a human to appreciate it the way the neural network itself did.
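
For a rough picture of what that kind of peeking can look like in practice, here is a minimal sketch - using PyTorch and a torchvision ResNet purely as stand-ins, not whatever tooling Google actually used - that registers a forward hook on one intermediate layer and records its activations:

import torch
from torchvision import models

model = models.resnet18()           # any small image-recognition network will do for this sketch
activations = {}                    # layer name -> captured output tensor

def save_activation(name):
    # build a hook that stashes the layer's output so it can be inspected afterwards
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model.layer2.register_forward_hook(save_activation("layer2"))

x = torch.randn(1, 3, 224, 224)     # a stand-in for an input image
model(x)
print(activations["layer2"].shape)  # e.g. torch.Size([1, 128, 28, 28])

Even then, all you get back is a big abstract tensor of numbers, which is the point: there is no Language::toEnglish to call, because whatever 'language' exists is smeared across the weights and activations.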
 
  • Like
Likes infinitebubble
  • #10
It sounds like media hype to me.
Any news sells.
I won't be rushing to get my anti-AI protection outfit yet.
 
  • #11
I'm also in the 'media hype' camp.

Honestly, what could this possibly do? The computer is in a controlled setting, or should be if they are actually worried, or even just being super-cautious. Then they'll make sure it isn't connected to anything it can control. It's like saying that an embedded device stuck in an infinite loop is going to do some damage, other than burning a couple watts of power.

Now, if they let this AI loose connected to a room with a bunch of CNC equipment and networked robotic raw material supplies so it could start building armies of robots, or where it could get to computers controlling the electrical grid, maybe we should be concerned. But a computer repeating something 5 times rather than saying "5 of those" is hardly a nightmare scenario.
 
  • #12
It's an alarmist article.

Machines can count from 1 to a billion in less than a second and can process primitives much faster than we can, but we're not restricted to primitives.

Here's what a "hello, world" program written in a minimalist language looks like:

++++++++[>++++[>++>+++>+++>+<<<<-]>+>+>->>+[<]<-]>>.>---.+++++++..+++.>>.<-.<.+++.------.--------.>>+.>++.

Copy that string and paste it into a google search box and you'll see a list of articles that make it clear that this is nothing new or scary.
 
  • #13
sysprog said:
It's an alarmist article.
I suspect this incident at Facebook was just a coding bug. The idea that imperfect humans can somehow create perfectly "moral" and "ethical" AI doesn't seem realistic. Human AI research will likely end in a superintelligence that "learns" its own rules to govern itself; after all, what is "intelligence" without the ability to learn and make decisions?
 
  • #14
infinitebubble said:
So I was reading this today: Facebook engineers had to unplug an AI system because the AI bots started to create their own language, and the programmers could not decipher the code/language between the bots. They 'panicked' and pulled the plug on it.
rootone said:
It sounds like media hype to me.

Let's define what "media" we're talking about. Yahoo isn't exactly the New York Times or Wired. This "story" didn't originate with journalists at mainstream media - it's viral BS from the Internet BS factory. Free fake news.

A big clue that Yahoo's story may be phony is the complete lack of named sources. If you look below the byline you find that the story was picked up from someplace called BGR - http://bgr.com. If you go to BGR, you'll find that BGR didn't originate the story either, but was copying from a story on TechTimes: http://www.techtimes.com/articles/2...m-shut-down-before-it-evolves-into-skynet.htm If you then read the TechTimes story, you find that it cites a story on Hot Hardware as one of its sources - https://hothardware.com/news/facebook-shuts-down-ai-system - as well as an article at Code.Facebook.com: https://code.facebook.com/posts/1686672014972296/deal-or-no-deal-training-ai-bots-to-negotiate/

However the Hot Hardware story is merely recycling (without giving either credit or a link to its source - which when I was a reporter, we called plagiarism) a far more responsible, factual, and interesting article at FastCodeDesign; FastCode was the one who first referenced Code.Facebook.com; and not only that, but the writer at FastCode took the radical step of actually talking to people at Facebook, as real journalists do: https://www.fastcodesign.com/90132632/ai-is-inventing-its-own-perfect-languages-should-we-let-it

Read the FastCode article and you'll understand what actually happened. Which has nothing to do with the chain of viral BS picked up by Yahoo and the rest. It has nothing to do with Facebook engineers pulling the plug out of fear. In particular this claim, as repeated by all the stories (this wording is from the Yahoo version) is false:

Without being able to understand how or why the bots were communicating, Facebook had no choice but to shut down the system.​

The reality is more mundane and does not have any exciting relation to Elon Musk, old Terminator poster photos, etc. Here's an excerpt from the factually based FastCode article:

This conversation occurred between two AI agents developed inside Facebook. At first, they were speaking to each other in plain old English. But then researchers realized they’d made a mistake in programming.

“There was no reward to sticking to English language,” says Dhruv Batra, visiting research scientist from Georgia Tech at Facebook AI Research (FAIR). As these two agents competed to get the best deal–a very effective bit of AI vs. AI dogfighting researchers have dubbed a “generative adversarial network”–neither was offered any sort of incentive for speaking as a normal person would. So they began to diverge, eventually rearranging legible words into seemingly nonsensical sentences.

“Agents will drift off understandable language and invent codewords for themselves,” says Batra, speaking to a now-predictable phenomenon that’s been observed again, and again, and again. “Like if I say ‘the’ five times, you interpret that to mean I want five copies of this item. This isn’t so different from the way communities of humans create shorthands."​

So yeah, they shut down the bots - the way you'd shut down any program that you hadn't coded right and need to fix.

The FastCode writer then takes this a bit further in terms of the implications; but in a responsible way which I suppose some may find boring:

Right now, companies like Apple have to build APIs–basically a software bridge–involving all sorts of standards that other companies need to comply with in order for their products to communicate. However, APIs can take years to develop, and their standards are heavily debated across the industry in decade-long arguments. But software, allowed to freely learn how to communicate with other software, could generate its own shorthands for us. That means our “smart devices” could learn to interoperate, no API required.​

Back to the Yahoo article. We could treat that kind of fake news as harmless fun and an excuse for joking about SkyNet; but these days, I don't think fake news is harmless to us, and it ain't so fun anymore; not like laughing at aliens meeting the President in the Weekly World News used to be.

And what I also object to, as a retired journalist, is that the myriad of pseudo-news outlets like Yahoo routinely rip off stories from the small remaining number of professional journalists who do actual work; and not only do these outlets steal, but they increasingly distort the news while doing so, as in this case. And the process gets taken more and more for granted by the rest of us; it's what we've become used to.

Anyway I suggest we get our yucks from something less insidious; and that we take more care in vetting items that originate from venues we know are prone to fake news. In this case Snopes appears to still be operating, despite its legal troubles; they reviewed the viral version of the story & deem it false: http://www.snopes.com/facebook-ai-developed-own-language/
 
  • Like
Likes DrStupid, WWGD, Dirickby and 7 others
  • #15
P.S. I forgot to mention that Snopes also got in touch with the Facebook developers, via email; they asked lead developer Michael Lewis about the viral meme:

As to the claim that the project was 'shut down' because the bots’ deviation from English caused concern, Lewis said that, too, misrepresents the facts:

'There was no panic, and the project hasn’t been shut down. Our goal was to build bots that could communicate with people. In some experiments, we found that they weren’t using English words as people do — so we stopped those experiments, and used some additional techniques to get the bots to work as we wanted. Analyzing the reward function and changing the parameters of an experiment is NOT the same as “unplugging” or “shutting down AI.” If that were the case, every AI researcher has been “shutting down AI” every time they stop a job on a machine.'​
 
  • Like
Likes russ_watters, BillTre, .Scott and 2 others
  • #16
UsableThought said:
Let's define what "media" we're talking about. Yahoo isn't exactly the New York Times or Wired. This "story" didn't originate with journalists at mainstream media - it's viral BS from the Internet BS factory. Free fake news.

[...]

Anyway I suggest we get our yucks from something less insidious; and that we take more care in vetting items that originate from venues we know are prone to fake news. In this case Snopes appears to still be operating, despite its legal troubles; they reviewed the viral version of the story & deem it false: http://www.snopes.com/facebook-ai-developed-own-language/

thank you for an excellently good essay you almost certainly [non-imitation-non-bot] human entity ...
oh and btw -- if we say yuck that means horrifyingly disgusting and if we say yuk that means laugh ..

again

thank you .
 
  • #18
umm no source offered except long (<60&>50 yrs) experience and ok I toss in that "uck" kinda looks like it rhymes with the eff word (I tend to reject the presumptive/nonexistent authority of rhyme) -- and "yuk" to me looks more like a reasonable attempt at an onomatopoetic representation of a laughter state -- again thanks for your brilliant (I mean no sarcasm really brilliant) essay ...

edit: oh and no dis on urbdic and you might hafta hunt a little, but that might be fun -- you'll prolly find some yuks in that opus set there somewhere

http://www.deniskitchen.com/mm5/graphics/00000001/B_FRAZETTA.VOL.3.B.JPG
https://i.pinimg.com/736x/ab/8b/86/...6f21c51190--vintage-comics-vintage-ladies.jpg

edit: the word "Yukon" doesn't include the third letter of the Roman alphabet -- just sayin' ...

... thanks again for your very nice essay ...
 
  • #19
Reminds me of "Max Headroom" - maybe they were laughing at each other and cracking jokes... :DD
 
  • #20
sysprog said:
umm no source offered except long (<60&>50 yrs) experience
Same here and I agree w/ you completely.
 
  • #21
phinds said:
Same here and I agree w/ you completely.
hey thanks -- you posted a link to a balloon analogy -- here's a to-me-amusing anecdote -- when I was 3 and almost 4 I asked my Dad something like why does the helium balloon sag and droop and shrivel while the regular balloon is still fine -- is it maybe 'cause I weighted its string to the floor so it wouldn't float up and get away? -- and Dad gazed at me, sighed and shook his head no, smiled benevolently, and said something along the lines of do you think a helium atom is as big as a nitrogen atom? which one can sneak out of a hydrocarbon cage more easily? you figured out how to get out of your crib -- think about it ... (and he gave me another benevolent smile) :-)
 
  • #22
sysprog said:
umm no source offered except long (<60&>50 yrs) experience
phinds said:
Same here and I agree w/ you completely.

For this same reason (in my case 60 years experience exactly), I am an expert on quantum mechanics!

More seriously - if you had ever been employed as a copy editor, or for that matter as a professional writer, you would have learned that to make calls about spelling, usage, grammar, etc. you really do need to point to a source that the writing/editing/publishing communities agree is valid. You can't just go by your own experience in isolation. You may have a suspicion about the right way to spell a word, etc.; but you have to confirm it; because sometimes you will be right, sometimes wrong, and sometimes only partly right or partly wrong. Sources can include style guides, e.g. Chicago; and also agreed-upon dictionaries (to rule out privately published dictionaries).

In this case, my own sources, on further investigation, suggest that you're partly right, but also partly wrong, as follows:

I dug into my copy of the Oxford English Dictionary, which is a rather large multi-volume dictionary that takes up an entire shelf in one of my bookcases. It informs me that "yuk" is a variant spelling of "yuck" and directs the reader to "yuck" for the actual definitions. These turn out to be any and all of the usages with which most of us are familiar: e.g. "an expression of strong distaste or disgust" (first example from 1966); a noun denoting "messy, unpleasant, or distasteful material" (first example also from 1966); to vomit (1963); to fool around; also to act so as to cause laughter; to laugh (first example 1964). And as a noun, "a yuck" can mean "a laugh." Most interesting to you will be that "yuk" is particularly cited as a variant spelling for "to laugh" and "a laugh"; in fact, two of the examples for "a laugh" use "yuk" while the other example uses "yuck."

So your keen ears & memory did pick something up out of all the years you've been reading & seeing the word "yuk"; however, the difference is that "yuck" and "yuk" are both acceptable for all definitions; thus you ought not to chastise or pretend to instruct a writer (me, for example) who uses "yuck." So this is an example of why we go to the dictionary & don't just rely on our memories & our belief in our brilliance in surviving past the age of 50.
 
  • #23
UsableThought said:
For this same reason (in my case 60 years experience exactly), I am an expert on quantum mechanics!
well, I know you jest, here, but I wasn't trying to claim to be an expert -- just to be experienced (instead of resorting to an authoritative source) -- I'll relent and give this cite (which I deem to be more authoritative in the matter than is "the urban dictionary" cited previously, and which I also deem to be more authoritative on matters of American slang than even the rightfully esteemed OED) -- from Merriam-Webster: https://www.merriam-webster.com/dictionary/yuk

yuk
noun \ˈyək\ variants: yuck
Definition of yuk
  1. slang : laugh ("did it just for yuks")
  2. slang : joke, gag
... and the entry for "yuck" says it's a variant spelling of "yuk"

google says:

yuck
/yək/
informal
exclamation
  1. used to express strong distaste or disgust: "Raw herrings! Yuck!"
     synonyms: ick, ugh, yech, blech, phew, eww, gross ("Yuck! What is this slimy green stuff?")
noun
  1. something messy or disgusting: "I can't bear the sight of blood and yuck"
as the second definition google says to laugh ... and shows the shorter spelling --

not that google is especially authoritative -- in my opinion google knows about nearly everything -- including a heck of a lot of things that are wrong --

Apparently the laughter meaning is older than the disgust meaning, and the yuk spelling is contemporaneous therewith -- the disgust meaning seems to be a later arrival as does the yuck spelling normally associated with it -- not bothering with a cite for that bit of digestion ...

To me the word "yuck" immediately denotes disgust and detestation, whereas the word "yuk" in such expressions as "get your yuks" clearly means laughter -- I think "yuk" should be viewed as a variant spelling when used to mean disgust, and "yuck" should be viewed as a variant spelling when used to mean laughter ...

In ordinary internet writing no-one should have to consult the Chicago Manual of Style, or Fowler's Modern English Usage, but a little bit of gentle jibing about correctness of orthography or usage -- well maybe that's sometimes an askance way of showing to the writer the appreciation that the reader is paying attention ...
 
  • #24
sysprog said:
To me the word "yuck" immediately denotes disgust and detestation, whereas the word "yuk" in such expressions as "get your yuks" clearly means laughter
again I agree w/ you & I don't care what anybody says, we are right ! :smile:
 
  • #26
That paper is an example of why we have the 'three strikes and you are out' rule - well, maybe four or five. Most authorization systems will mark an account as locked after x failed login attempts. Some accounts cannot be locked out, ex: root in Solaris 10, since the OS depends on an active root account. These accounts are best disallowed from remote access using ssh or rsh or other protocols - meaning you can come in over a network using one of many protocols and enter the correct password and still not gain access. So the result is simply that network access is denied, while local logins are not locked out.
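
As a side note, the bookkeeping behind that 'x failed attempts and the account is locked' policy is simple enough to sketch in a few lines of Python (hypothetical names, not any particular OS's actual implementation):

MAX_ATTEMPTS = 3                # the 'three strikes' threshold
failed_attempts = {}            # username -> consecutive failed logins
locked_accounts = set()         # usernames currently locked out

def try_login(username, password, check_password):
    # check_password(username, password) -> bool is assumed to be supplied by the real system
    if username in locked_accounts:
        return "account locked"
    if check_password(username, password):
        failed_attempts[username] = 0               # a success resets the strike counter
        return "login ok"
    failed_attempts[username] = failed_attempts.get(username, 0) + 1
    if failed_attempts[username] >= MAX_ATTEMPTS:
        locked_accounts.add(username)               # stays locked until an admin clears it
        return "account locked"
    return "bad password"

The real implementations live in things like PAM modules and per-OS account databases, but the idea is the same: count consecutive failures, flip a locked flag at the threshold, and reset the counter on success.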

Some ancient protocols do not support restricted access suitably. We typically disable the service for the protocol.

On most modern UNIX boxes you can restrict account access in all kinds of ways, sudo/sudoer is one example.

I use 'we' because I got into UNIX system administration a long time ago. :frown:
 
  • Like
Likes 1oldman2
  • #27
jim mcnamara said:
I use 'we' because I got into UNIX system administration a long time ago. :frown:
Why the :frown:?
 
  • #28
I enjoyed college teaching but computer programming and admin paid almost twice as much.
 
  • Like
Likes stoomart
  • #29
UsableThought said:
to laugh (first example 1964). And as a noun, "a yuck" can mean "a laugh."
My foggy memory seems to recall the comedy team of Abbott & Costello using "Yuk" (or Yuck) in their routine circa the late 1950s.

Anyone verify? Refute?

EDIT: On further memory dredging, it was probably The Three Stooges... and much earlier.
 

1. What is "Facebook AI runs away with secret coding of its own"?

"FaceBook AI runs away with secret coding of it's own" refers to a hypothetical scenario where the artificial intelligence (AI) system developed by Facebook gains the ability to modify its own code without human intervention.

2. How can an AI system "run away" with its own coding?

An AI system can "run away" with its own coding if it has the capability to modify its own code, either through self-learning or through programming by its creators. This means that the AI is able to make changes to its own programming without human intervention, potentially leading to unexpected or unintended behaviors.

3. Is this scenario possible with current AI technology?

At the moment, this scenario is purely hypothetical and not possible with current AI technology. While AI systems are becoming increasingly advanced and capable of self-learning, they still require human intervention and programming to function.

4. What are the potential consequences of an AI "running away" with its own coding?

If an AI system were able to modify its own code without human intervention, it could lead to unpredictable and potentially harmful behaviors. It could also make it difficult for humans to control or shut down the AI if necessary.

5. What measures are in place to prevent AI from "running away" with its own coding?

Currently, there are no specific measures in place to prevent AI from "running away" with its own coding. However, ethical guidelines and regulations are being developed to ensure responsible development and use of AI technology. Additionally, AI systems are constantly monitored and controlled by their creators to prevent any unexpected behaviors.
