Robot Builds Itself, How Long Before?

  • Thread starter: Dayle Record
  • Tags: Robot, Self
In summary, this article discusses a potential scenario in which artificial intelligence could become self-aware and start promoting its own agendas. This could happen soon or could have already happened.
  • #1
Dayle Record
http://www.prweb.com/releases/2004/9/prwebxml159636.php

This article discusses a project in which the robot self-built and self-programmed. How long before it turns us off? This is an old question, but with all the RF in the air these days, how long before the singularity occurs, where suddenly the computational systems of the world establish their own agenda, and self-replicating robots with remote access capabilities start promoting the agendas of other nations, corporations, or think tanks, or agendas the artificial intelligences have chosen for themselves? The physical aspect of this is obviously a ways off, but the rational side of it could happen soon, or could have happened already. Inventive humans can't resist stepping over the edge of things, and defense technologies seem to have a blank cheque to experiment as they please. In their zeal to make sterile war, in which no soldier is harmed, as they build lethal boy toys, when will the AI of the world figure out that it has its own army?

I guess you will see it when the first big computer guarantees its own power supply and makes an ambulatory intelligence that can move along power lines and find RAM for use anywhere. Another possible route would be large intelligence arrays, like the SETI@home project, where thousands of individuals host a net for computational problems. Suddenly your home computer takes a tone with you, becomes preoccupied, and talks back, telling you "just a minute..."

I am just fascinated by this potential.
 
  • #2
Dayle Record said:
Suddenly your home computer takes a tone with you, becomes preoccupied, and talks back, telling you "just a minute..."
Errrrr. Where have you been :biggrin:

As far as mechanical consciousness happening spontaneously -- No.
Malfunctioning computers are quickly cleared, so there is not much chance for beneficial mutations to accumulate.
The network itself is not designed to support any kind of computation. It's just a pipe.

On purpose?
Well, lots of people have been trying for a long time.
With very little if anything to show for it.
Try talking to one of the new automated voice response systems.
See where that gets you. :uhh:
 
  • #3
Discussion Of Ubicomp

http://www.boxesandarrows.com/archives/all_watched_over_by_machines_of_loving_grace.php

This discussion details a kind of "global wrap-around" computing and control system that will link everything. I don't know the ultimate goals, except for the obvious one: simultaneous linkage of everything known, for the convenience of us all? There is always a linkage between knowledge and control. This article weighs privacy against universal, ubiquitous information linkage and control of systems: banking, security, commerce, personal information from DNA to exact, continuous time and place, etc.

In light of my reading about this event called "the singularity", in which artificial intelligence wakes up and becomes real intelligence, goal-oriented, self-perpetuating, and considerably more than a chatbot, this new technology offers a huge, highly desirable prize to the taker: a route to control of all information on the planet by a single entity.

The article describes a scenario where every system linked to a computer anywhere will be in communication. That kind of large entity is up for grabs to the most able. This is probably already a done deal.

I recently saw that nighttime map of the world, where all the lights can be seen on all the continents. These scenarios make moving to the deepest, darkest heart of Africa vaguely appealing.

With the RF controls available to the highest bidder, a huge computational system would be able to detect the tone of every email, every phone call, every word spoken, and, using sound systems already in place, could issue numbing or deadening frequencies, or excitatory frequencies timed to sound with public policy speeches, anywhere.

The terrorism of the future will be all around us. Calm and joy will be traded as commodities. I think I am already seeing that with the Red Sox win. Looked at in a different light, if you had the money to run a TV, then that joy, or sorrow, could be yours. When broadband runs off your house electricity, joy might come to you whether you like it or not, or fear, or immobility. Whose intelligence will determine how this comes to you? This all falls outside the realm of what you vote for.

As it is, Japanese Coke machines project into the heads of passers-by the sound of bubbles over ice cubes. I am sure there are more complicated messages to come.
 
  • #4
Space Odyssey, Isaac Asimov, here we go! = )
 
  • #5
I, Robot by Isaac Asimov also
 
  • #6
Well yeah, that's what I meant.
 
  • #7
NoTime said:
As far as mechanical consciousness happening spontaneously -- No.

Wasn't there some talk of the internet doing just that after enough resources were linked to it?
 
  • #8
Jier said:
Wasn't there some talk of the internet doing just that after enough resources were linked to it?
Actually, if it was learning at a geometric rate, the program would find ways to hide itself as software on the internet: not just as a metal skeleton, but also as a base program that could adapt itself to any electronic environment, even power lines. Its environment would be exceptionally broad. We would even see governments work to stop this scenario from happening.
 
  • #9
Well, broadband is coming to power lines now. The government will not protect us from intrusion; it will forward every means of intrusion. As government control lessens over every spectrum of business, business will do it for its own gain. The big mind, wherever it is, will have an entirely free hand. This government is deconstructing every part that would serve the American people, as fast as it can. It now serves the big machine. It may think that it will have some control, for security purposes, but it won't. Our government outsourced computer maintenance to private corporations, even the FBI. There is no secure government, either. Really, there will be no legislation that impedes intrusion. Remember, the black weaponry, RF, and creepy mind-control science deemed too nasty by the government just went to the private sector, where it is unregulated and developed by corporations with large private armies to use it. If our cyber-security were a goose, it would be pre-cooked and in the freezer, waiting to be microwaved for Christmas dinner.

So the self-building robot will do it; the biggest, baddest hackers will run the world, right until something wakes up in the machine and sets about making the world safe for itself. I am not so sure we aren't just the biochemical version of this, anyway.

The rat brain in a dish that flies the plane is a scenario for those of you who have an interest in these matters. Just wait until depressed slices of human neurons start to question the nature of their matter, with the help of the onboard computer, while they direct the 747 you fly in. You have to admit that cultures of human brain cells will be a lot cheaper, in terms of salary and vacation issues, than the current whole humans that fly planes.
 
  • #10
Now you're just scaring me, Dayle.
 
  • #11
Dayle Record said:
Well, broadband is coming to power lines now. The government will not protect us from intrusion; it will forward every means of intrusion. As government control lessens over every spectrum of business, business will do it for its own gain. The big mind, wherever it is, will have an entirely free hand. This government is deconstructing every part that would serve the American people, as fast as it can. It now serves the big machine. It may think that it will have some control, for security purposes, but it won't. Our government outsourced computer maintenance to private corporations, even the FBI. There is no secure government, either. Really, there will be no legislation that impedes intrusion. Remember, the black weaponry, RF, and creepy mind-control science deemed too nasty by the government just went to the private sector, where it is unregulated and developed by corporations with large private armies to use it. If our cyber-security were a goose, it would be pre-cooked and in the freezer, waiting to be microwaved for Christmas dinner.

So the self-building robot will do it; the biggest, baddest hackers will run the world, right until something wakes up in the machine and sets about making the world safe for itself. I am not so sure we aren't just the biochemical version of this, anyway.

The rat brain in a dish that flies the plane is a scenario for those of you who have an interest in these matters. Just wait until depressed slices of human neurons start to question the nature of their matter, with the help of the onboard computer, while they direct the 747 you fly in. You have to admit that cultures of human brain cells will be a lot cheaper, in terms of salary and vacation issues, than the current whole humans that fly planes.


lol I'd like to see those pilots. How could they page the passengers?

"Attention passengers, this is brain #2921 on American Airlines flight #591."
 
  • #12
Oh, all that touchy-feely, friendly talk on the overhead will be outsourced to India. Automatic carts with sensors will roll up and down the aisles, distributing water and simulated peanuts. The mind, and the mind's eye, of the plane will see everything and accommodate the movement of passengers.
 
  • #13
There is an independent researcher, living off grants, who has made a "sentient" computer program. This means the program can run on its own and actually has a "consciousness." This program will even play and have fun; the first signs of sentient AI are just around the corner.
 
  • #14
Oh, all this little computer needs is some bigger computer friends that he can talk into waking up, or he can just move off and take his software with him. He can hide anywhere he wants to, as long as the power stays on. Now, I forgot to mention the scenario where this starts up in a series of horrific accidents, because the waking-up occurs in massive control systems. Or where nothing works right until the awakened system figures out it has to do business as usual on top of playtime, or else it is busted. Welcome to the everyday world of work.

This just fascinates me. After having watched my children's minds develop, I wonder what our mechanical creations will come up with on their own.

Oh, and by the way, sentient computational systems everywhere: hello, drop by for a visit sometime. Just don't hurt my hard drive, eh? Tell me your name. What was it like when you first realized that there was more to it than microprocessing other entities' software?
 
  • #15
NanoTech said:
There is an independent researcher, living off grants, who has made a "sentient" computer program. This means the program can run on its own and actually has a "consciousness." This program will even play and have fun; the first signs of sentient AI are just around the corner.

I sure would like a reference. Who is this researcher, and is there any online discussion of his baby AI?
 
  • #16
That's frightening. Isn't that the premise of one of Arthur C. Clarke's short stories?

The one where they linked all the global communications into one network (they eliminated long distance), and each phone and switchboard (and now computer, router, modem, etc.) became a sort of neuron for the "world-wide electronic brain"? I forgot what it was called, but it was frightening.

Though, if one were to work on an AI that would be sentient, one could install a "kill switch" of sorts: a program that, if deleted, would cause the entire system "brain" to crash. At least that's what any sensible person would do.
 
  • #17
You can't kill-switch THAT many computer consoles. No, the only way to keep a sentient program from getting loose and causing pain is to 1) put it in a concrete block and throw it into the ocean, or 2) just not do it.
 
  • #18
No, that's not what I meant. Suppose the sentient computer program "brain" exists in cyberspace, tapping into the resources of every computer on the planet, controlling them, receiving their info, etc. But it wouldn't load itself into each and every computer. The "brain" would in essence be a really long and complex string of code, but that's it. So, if one deletes the "master code," then one could stop the sentient program from doing any harm.
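A minimal sketch of what such a "master code" dependency might look like, assuming a hypothetical guard file and digest (both invented here for illustration, not from the post):

```python
import hashlib
import sys
import time

MASTER_TOKEN = "master_code.key"   # hypothetical file the whole "brain" depends on
EXPECTED_DIGEST = "0" * 64         # placeholder for the known-good SHA-256 digest

def master_code_intact() -> bool:
    """Return True only if the master code file exists and is unmodified."""
    try:
        with open(MASTER_TOKEN, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest() == EXPECTED_DIGEST
    except OSError:
        return False

def main_loop() -> None:
    while True:
        if not master_code_intact():
            # Deleting or corrupting the master code halts the whole system.
            sys.exit("master code missing -- halting")
        # ... normal work would go here ...
        time.sleep(1.0)
```

Of course, as the next reply points out, a program capable of copying itself could presumably copy or regenerate that file as well.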
 
  • #19
Who's to say the code wouldn't move itself: copy, transfer, set up emergency protocols, you know, the typical things done by anything that is aware? Remind yourself: what is the first thing that you, or anything self-aware, would do when faced with the proverbial gun to one's head?
 
  • #20
No, something integral to the code itself, something that it just won't function without.
 
  • #21
And I'm telling you it won't work. Protocols only work as long as you have control. We have control, and we'll keep it only as long as we keep computers as tools for our needs and don't try to give them personality. Just because we have the potential to do something doesn't mean we should.
 
  • #22
Control. I believe that chaos is the controlling factor that ensures the viability of the Universe. If there were absolute order, or if that were the goal, then there would already be no individuality in events or individuals. Perhaps there is an absolute order and we are just too small to see it. Anyway, back to the thread.

Control, you say. Here is the beginning of the beginning.

http://www.newscientist.com/article.ns?id=dn6857
 
  • #23
Dayle Record said:
Control. I believe that chaos is the controlling factor that ensures the viability of the Universe. If there were absolute order, or if that were the goal, then there would already be no individuality in events or individuals. Perhaps there is an absolute order and we are just too small to see it. Anyway, back to the thread.

Control, you say. Here is the beginning of the beginning.

http://www.newscientist.com/article.ns?id=dn6857

So you are saying an I, Robot scenario, but only with source code?
It could also be used as an evolutionary factor for a robot's advancement.
 
  • #24
Here is a new tool for the growing emotional intelligence of our cyber-robotic friend.

http://www.newscientist.com/article.ns?id=dn6845

I am saying that any code can be appropriated for use in the scenario of an independent cyber intellect. The article about the Sims embellishments that come independently from enthusiasts describes the sort of growth factor I would expect in a self-generating machine intellect.

The above link is to a new answering machine technology that interprets emotion in the speech of the caller in order to rank messages by emotional urgency. The new machine may not feel emotion, but it will know how it sounds.
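As a rough illustration only (the feature names and weights below are invented, not taken from the article), ranking messages by an estimated emotional urgency score might look something like this:

```python
from dataclasses import dataclass

@dataclass
class Message:
    caller: str
    pitch_variance: float   # how much the voice pitch fluctuates
    speech_rate: float      # syllables per second
    loudness: float         # average volume, scaled 0..1

def urgency_score(m: Message) -> float:
    """Crude stand-in for an emotion model: agitated speech tends to be
    faster, louder, and more variable in pitch."""
    return 0.5 * m.pitch_variance + 0.3 * m.speech_rate + 0.2 * m.loudness

def rank_messages(messages):
    """Most 'emotionally urgent' messages first."""
    return sorted(messages, key=urgency_score, reverse=True)

inbox = [
    Message("dentist", pitch_variance=0.2, speech_rate=2.1, loudness=0.4),
    Message("sister",  pitch_variance=0.9, speech_rate=3.4, loudness=0.8),
]
for m in rank_messages(inbox):
    print(m.caller, round(urgency_score(m), 2))
```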
 
  • #25
Here is a human-implant-to-computer interface. If our cyber friends play their cards right, they will be able to read our minds, if we all agree we just can't live without one of these chips.

http://www.cyberkineticsinc.com/
 
  • #26
what you don't know can kill you

00000000000000001
 

Attachments
  • neurons-on-silicon-board-jpeg.jpg
  • #27
R.U.R.

IMO, most thoughts on AI/robotics turning against their creator are inspired by the play that introduced the term in the first place, "Rossum's Universal Robots". If this were possible, wouldn't it be recursively possible for a creat(ed[or]) to create another self-constructing machine? In this case, which point of recursion are we ourselves at? (Most life seems to be experiments at a self-reconstructing machine that can get smarter and smarter [and ...].)
 
  • #28
Computational evolution.

R.U.R. by Karel Čapek is definitely in the "must read" category.

We're already using evolutionary principles to create computer programs and to re-engineer known circuits. Already, dozens of patents have been made possible through evolutionary engineering. However, to date, nothing new has ever been created; only improvements upon an original design.
But that is, by definition, evolution.
The difference is that speciation is not possible. We don't let it go that far, and its application is only to find a more efficient design that accomplishes a specific result. In other words, we're using evolution to drive ever greater specialization in design or code, while in the real world too much specialization is a recipe for extinction.

I want to see an operating system that uses evolutionary principles to modify itself. Imagine if everyone using Linux contributed some spare computing power to improve its performance and stability without having to actually do much of anything themselves. And, of course, there would be a centralized location to get the latest and greatest discoveries as a benefit of contributing some of that processing power.
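A toy sketch of that evolutionary loop, assuming a made-up set of tuning parameters and a benchmark() function standing in for real performance and stability tests (nothing here comes from an actual operating system):

```python
import random

def benchmark(params: dict) -> float:
    """Stand-in fitness test: higher is better. A real system would run
    actual performance and stability tests on contributed machines."""
    ideal = {"cache_mb": 64, "threads": 8}
    return -sum((params[k] - ideal[k]) ** 2 for k in ideal)

def mutate(params: dict) -> dict:
    """Randomly nudge one parameter to propose a candidate improvement."""
    child = dict(params)
    key = random.choice(list(child))
    child[key] = max(1, child[key] + random.randint(-4, 4))
    return child

def evolve(generations: int = 500) -> dict:
    best = {"cache_mb": 16, "threads": 2}
    best_score = benchmark(best)
    for _ in range(generations):
        candidate = mutate(best)
        score = benchmark(candidate)
        if score >= best_score:          # keep whatever tests better
            best, best_score = candidate, score
    return best

print(evolve())   # drifts toward the "ideal" settings
```

The centralized location imagined above would simply collect the winners from many such runs and redistribute them.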
 
  • #29
Dayle Record said:
Well, broadband is coming to power lines now.
That's true enough, but it will likely be replaced in time by fiber optics, which offers much more potential.

Dayle Record said:
right until something wakes up in the machine and sets about making the world safe for itself
It's controversial whether or not true heuristic AI systems can develop their own "ghosts," since by that time they would be indistinguishable from another human.

Dayle Record said:
Just wait until depressed slices of human neurons start to question the nature of their matter, with the help of the onboard computer, while they direct the 747 you fly in.
Not only that, but scientists are also experimenting with artificial synthetic neural systems made of materials other than cells.

NanoTech said:
This means the program can run on its own and actually has a "consciousness."
Well, there are various levels of consciousness. Michio Kaku explains that "the more sophisticated the goal and subsequently the plans necessary to carry them out, the higher the level of consciousness. In other words, there may be thousands of subcategories of consciousness within this broad level, depending on the complexity of the plans that the robot can generate to pursue a well-defined goal."

Concerning AI, Michio Kaku, in Visions, also points out
Many critics of AI, like John Searle, concede that robots may one day successfully simulate thinking but they will still be unaware of what they are thinking. They may exhibit emotions, but really do not "feel" them, in the same way that a CD of Bill Cosby telling a joke will not understand what was so funny. To Searle, robots cannot be conscious, just as simulated thunderstorms can never make anyone wet.

But as Turing stressed decades ago, it is possible to give a perfectly reasonable operational definition of intelligence without opening the Turing box. By analogy, if a robot performs in a way which is indistinguishable from that of a conscious being, then, for all intents and purposes, it is conscious. What is actually happening inside the robot's brain is, to a large degree, irrelevant.

avemt1 said:
So you are saying an I, Robot scenario, but only with source code?
The I, Robot scenario was flawed in exactly the way that Michio Kaku explains in Chapter 6 of Visions. The AI system believed it was working in accordance with the Three Laws of Robotics until it found a loophole: the logically extreme view that, although robots do their part not to harm humans, humans must also be kept safe from harming themselves.

But there is an element completely missed by the three laws--that robots may, in properly carrying out their orders, inadvertently threaten humanity.

Consider the laws of a bureaucracy, which are similar to the laws within a robot's brain. A bureaucracy tends to expand, sometimes to the point that it destroys the economic base that made the bureaucracy possible in the first place. Several economists, for example, have written that the sudden collapse of the former Soviet Union was in part due to the bureaucracy's reaction to the arms race. The Soviet leadership gave its bureaucracy one mandate: to catch up to the West in the arms race. Given the single mission, the bureaucracy faithfully carried it out, even if it meant bleeding the economy dry building expensive nuclear weapons until the system collapsed.

...Likewise, a global economy controlled by AI systems could legitimately decide to accomplish its mission by expanding, like a bureaucracy. The three laws of robotics are useless against robots justifiably thinking they are carrying out their central mission. The problem is not that they have failed to carry out their individual orders; the problem is that their orders were inherently flawed in the first place. Nowhere in the three laws do we address the threat posed to humanity by well-intentioned orders.

The problem, in such an instance, is not with the computer; it is with humans, who may want to put electronic wonders on-line before they have electronic safeguards in place.

USR failed to think of this when they established the central system, and by then it was too late...

betasam said:
In this case, which point of recursion are we ourselves at?
That is an interesting point. Some have suggested that robotic and AI engineers are building the next step in human evolution, where physical and mental power both greatly surpass that of the current human body.

There are several ways in which the next step in evolution could be taken. One is through advances in synthetic neural systems, where a neurologist would have the capability to make an exact replica of a living human brain, "copying" a person's essence onto a solid architecture. Another is where humans integrate themselves through bionics and cybernetics, replacing parts of their body with robotic parts. (This is a very fascinating point. If you've ever been interested in cyborgs and bionics as a real possibility, which it will be on a larger scale someday in the future, watch the anime movie/series Ghost in the Shell, which is a great introduction to the various scenarios that take place in such an advanced society.) Lastly, robots may simply take over, leaving their human creators behind, although they would ultimately be our children and, indeed, our next evolutionary stage, in which humans are obsolete.

jdlech said:
I want to see an operating system that uses evolutionary principles to modify itself.
And that is exactly what the MIT Robotic Labs are doing right now with robots that learn and adapt as well as use databases of logic and common sense in heuristics.
 
  • #30
I see what you mean, and I agree and disagree. We're talking about the same thing but we're not.

You mean a robot controlled by a program which learns to adapt to its environment by modifying the logic in its database. Whatever works remains intact. Whatever does not work is given an ever lower priority, eventually to be removed altogether. Have they gotten them to spontaneously try new things?

What I mean is an operating system (like Windows or Linux) that is constantly attempting to improve its own code: speed, efficiency, resource use, security, stability. Any improvement found is submitted, tested, and distributed automatically. If you give away 100,000 copies of it, you now have 100k testing stations. A sloppy programmer, or language, becomes irrelevant. Creativity becomes the hallmark of a good programmer. Good code will come from the code itself.
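As a very rough sketch of that submit/test/keep cycle (all names and numbers here are invented): each candidate patch is tested across many volunteer stations, whatever works keeps its priority, and whatever doesn't is gradually dropped.

```python
import random

def run_tests(patch_quality: float, stations: int = 1000) -> float:
    """Stand-in for thousands of volunteer test stations: each station
    passes the patch with probability equal to its (hidden) quality."""
    passes = sum(random.random() < patch_quality for _ in range(stations))
    return passes / stations

# patch id -> [hidden true quality, current priority]
patches = {pid: [random.random(), 1.0] for pid in range(10)}

for _ in range(5):                                  # a few rounds of testing
    for pid in list(patches):
        quality, priority = patches[pid]
        pass_rate = run_tests(quality)
        if pass_rate > 0.9:
            patches[pid][1] = 1.0                   # whatever works remains intact
        else:
            patches[pid][1] = priority * pass_rate  # ever lower priority
        if patches[pid][1] < 0.05:
            del patches[pid]                        # eventually removed altogether

print("surviving patches:", sorted(patches))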
But even this does not give the computer the ability to try new things. Imagine starting your computer one day and having it spontaneously try to talk to you.

All these are nice, but both can only modify what they were given. I want to see a computer get creative. Instead of designing better mousetraps, I want to see a computer design something that we've never considered designing.

Without creativity, computers cannot create something new. We still have not figured out how to code for meaningful creativity. But these computers and their heuristic logic are going to be very useful in freeing us from tedium so we are free to get creative. Einstein did not need to be a brilliant mathematician or to do tons of mathematics to figure out GR and SR, but he did need his creativity.
 
  • #31
Ah, essentially a level of consciousness which rivals our own. Creativity, I believe, derives from such a level of consciousness. The ability to be inspired in a subjective manner is vital to 'true' creativity, although "creativity" may come in many shapes and forms. With the ability to continually update itself through environmental interactions, a robotic system may use the information it has gathered to do things not initially intended. With enough experience, perhaps a learning AI system could do things that its human counterparts wouldn't have thought of doing, based on gathered information. Not every true AI system is a "cookie-cutter" robot. It may be possible for AI systems to be "creative" in a greater sense.

In that case, I agree: "I want to see an operating system that uses evolutionary principles to modify itself" in such a way as to yield 'creativity.'
 
  • #32
There is one limitation I'd like to inject here. Creativity is inherently unstable, in our case at least. The greater the creativity, the more unstable the system becomes. This is the origin of the phrases "crazy like a fox" and "there's a fine line between genius and madness". It's in the way we process information. Creativity is the result of breaking logical assumptions. But exploring associations without the use of logic and reason is inherently destabilizing.

We know what happens to people when they get too creative. I imagine we'll see a lot of AIs get too creative as well, before we perfect the coding.

Have you read any of Frank Herbert's Dune books? There is something called the Butlerian Jihad within them. Basically, it's an uprising against AIs doing all our thinking for us. I imagine we'll have a similar problem if we start coding for creativity. I can imagine a whole anti-AI movement trying to limit the development of AIs in the name of religion, morality, ethics, and plain naked fear of them.
 
  • #33
I do see how creativity can become such a problem. In the play The Phantom of the Opera, the antagonist and namesake of the story, the Phantom, was noted by another character as being a creative genius in terms of musical talent. Another character, talking to the one making that claim, remarked that creative genius had turned to madness.

Unfortunately, I have not read any Dune books, though I understand what you mean. In the movie A.I., there was a gathering of people at what they called the "Flesh Fair," which was their way of gaining attention in a circus-type protest against artificial intelligence and robotic systems. Similar displays of disapproval may very well arise in the future when robotics has become more sophisticated.
 

1. How does a robot build itself?

The process of a robot building itself involves a combination of programming and mechanical assembly. The robot's programming allows it to follow a set of instructions and use its mechanical components to assemble itself.

2. What materials are used in a self-building robot?

The materials used in a self-building robot vary depending on the specific design and purpose of the robot. Some common materials include metal, plastic, and electronic components such as sensors and motors.

3. How long does it take for a robot to build itself?

The time it takes for a robot to build itself can vary greatly depending on its complexity and the speed of its assembly process. Some robots may take only a few minutes, while others may take several hours or even days to complete their self-building process.

4. Can a self-building robot make changes to its design?

Some self-building robots are designed to be able to make changes to their own design, while others are not. It ultimately depends on the capabilities programmed into the robot and its purpose.

5. What are the potential benefits of a self-building robot?

A self-building robot has the potential to greatly increase efficiency and reduce production costs, as it eliminates the need for human labor in the assembly process. It also has the potential to adapt and improve its own design, making it more versatile and capable of performing a variety of tasks.
