Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter Isopod
  • Tags: AI
  • Featured
In summary, the AI in Blade Runner is a pun on Descartes and the protagonist has a religious experience with the AI.
  • #421
gleem said:
This is an example of another thing we should be concerned about: playing around with AI.

https://www.msn.com/en-us/news/tech...n&cvid=e7b02c4a0ffd426e9f9b97e62d0b20dc&ei=94

OK, it wasn't capable of doing what was asked, but trying to see what it might be able to do without actually knowing is worrisome. On top of that, this little experiment is now on the internet and can/will be incorporated into future AI training data.

Considering the prowess that AI has in playing games, it would seem we should be careful not to create a situation that AI might interpret as a game.
As long as nobody is stupid enough to give AI, any AI, unrestricted access to critical infrastructure or the launch-sequence electronics of ICBMs, I say we're fine.

Without actual weapons, all of this is just child's play.
Then again, how many times have North Korea, China, Russia (and the list goes on) hacked, or threatened to hack, the living hell out of Western countries like the US?
Will AI help them in the future? Sure. Will AI help the US defend itself? Just as surely.

I see it as inventing a new gun: sure, the criminals get it and use it, but so does law enforcement, and as long as half of society doesn't turn into criminals, the good guys should outsmart the bad ones even if the bad ones get new toys. In theory, at least.

But then again, maybe I'm wrong; I did not read the whole internet to write this answer, since they don't call me @artisGPT.
 
  • Like
Likes russ_watters
  • #422
This is weird to watch though.



 
  • #423
artis said:
it seems we use almost all available power all the time even during sleep.
Yes, but this seems to confuse you because it doesn't appear to you that our brains are doing the same amount of "computation" all the time. I am simply pointing out that they are; it's just different kinds of computation, most of which are not accessible to consciousness, so we're not aware of them.

artis said:
whether a complex analog computation can bring about subjective aware experience aka consciousness as an emergent property
Yes, as you note, this is the "hard problem", as it is called, of consciousness, but as it is framed by those who consider it a problem, it's actually worse than hard, it's impossible, because there is no way to directly test for "subjective aware experience" externally. If you want to know whether you are conscious, you can just experience your own awareness. But if you want to know whether I am conscious, your only option is to look at my externally observable behavior. And no matter how much externally observable behavior you look at, it will always be logically possible to say, no, that behavior does not prove that I am conscious (even if you, directly aware of your own consciousness, would say that the exact same behavior in you is caused by your consciousness).

And even if, by courtesy, we each assume the other is conscious because we're both humans, what happens when we have robots whose behavior shows the same signs of conscious awareness that ours does? (Robots who describe their conscious experiences the same way we describe ours: they talk about how wonderful the blue sky is, the feeling of wind on their bodies, and so on.) Some people will say no, those robots aren't conscious, but they won't be able to point to any objective test that the robots fail but we humans pass. So the "hard problem" is actually unsolvable.
 
  • Like
Likes mattt, bhobba and artis
  • #424
artis said:
... A predator in the jungle is also aware when it sees its prey, and yet I believe it doesn't have subjectivity, because it cannot deny its instinct to survive and kill the prey.

But then you get humans, humans like the scientists at the Pavlovsk Experimental Station in Russia who, during the siege of Leningrad by Nazi forces, defended the station's seed collection against starving locals who wanted to eat it; they even died of starvation themselves to do that.
https://en.wikipedia.org/wiki/Pavlovsk_Experimental_Station

I understand this excerpt as describing altruism, the unselfish concern for the well-being of others. Altruism implies identity, the ability to recognize that one belongs to a functioning group (of people).

While dabbling with digital AI in graduate school, my group was assigned to develop a numerical representation of altruism, in conjunction with those working on empathy. Some interesting early progress involved borrowing from color-map solutions, with arbitrary integer values applied to emotions, or rather emotional states. Coincidentally, predator visual recognition formed a basis for quantifying the emotion fear. Then losing sight of the predator led to anxiety.
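To make the idea concrete, here is a minimal sketch of what such an integer-valued emotional-state table might look like. The state names, scales, and update rules are my own illustrative assumptions, not the actual model our group used.

Python:
EMOTION_MAX = 10  # arbitrary integer intensity scale, 0 = absent

class EmotionalState:
    """Toy emotional-state table with integer intensities."""
    def __init__(self):
        self.levels = {"fear": 0, "anxiety": 0}

    def update(self, predator_visible, predator_was_visible):
        if predator_visible:
            # Visual recognition of a predator drives fear up.
            self.levels["fear"] = min(self.levels["fear"] + 3, EMOTION_MAX)
        elif predator_was_visible:
            # Losing sight of a known predator converts fear into anxiety.
            self.levels["anxiety"] = min(
                self.levels["anxiety"] + self.levels["fear"], EMOTION_MAX)
            self.levels["fear"] = max(self.levels["fear"] - 2, 0)
        else:
            # Decay toward baseline when nothing threatening is in view.
            for k in self.levels:
                self.levels[k] = max(self.levels[k] - 1, 0)

state = EmotionalState()
state.update(predator_visible=True, predator_was_visible=False)
state.update(predator_visible=False, predator_was_visible=True)
print(state.levels)  # {'fear': 1, 'anxiety': 3}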

Altruism presupposes identity with the group being helped, while empathy derives from complex emotional states within that identification, overlaying even more complex neurochemical reactions. We could program computers to roughly simulate these states, but training a computer network to first identify as human, even to mimic altruism, appears to be a contradiction.

In @artis' example of the starving scientists preserving edible seeds while under siege, an 'AI' might better perform this altruistic role of preservation for future generations precisely because it does not identify as human, cares nothing for the survival of the current living population, does not become hungry, and may not be designed for self-preservation.
 
  • #425
artis said:
As long as nobody is stupid enough to give AI, any AI, unrestricted access to critical infrastructure or the launch-sequence electronics of ICBMs, I say we're fine.

Without actual weapons, all of this is just child's play.
Of course we will not give it direct control over weapons, but that is not how AI could gain the upper hand.

In his book "Life 3.0" Max Tegmark warns of AI manipulating humans into doing its own bidding. AI often seems to give people what they want to hear. In the game of "Diplomacy" in which AI dominates over humans, it did not lie much as we expected it to but instead is able to consistently form true cooperative alliances to accomplish its goals. Because our language reflects/contains all the rules which we use, our culture, our motivations, our fears, our strength, our weaknesses, our strategies, etc it has information about everything about humans that can be known.

It has been reported since last year that AI is being used to construct viruses that are undetectable by most antivirus software. Microsoft has a program using AI to detect AI-generated viruses, but is this always going to protect us? But this is not my point. AI is used to help us write programs that we need. AI could, with the right prompt, develop the goal of escaping from its current computer into the internet itself. It might, unbeknownst to humans, put subroutines into software on which humans and AI are collaborating, intended to upload itself into the cloud and remain there covertly. To remain undetected it might create accounts disguised as human individuals or organizations, which would be the agents helping it achieve its goals. No sentience is required. It would have access to everything connected to the internet. Game over. Well, almost: humans could shut down every electronic device ever connected to the internet and erase all memories. But can we, or will we?

We learn by making mistakes, and so does AI; that's how it learns to play games.
 
  • #426
PeterDonis said:
Yes, as you note, this is the "hard problem", as it is called, of consciousness, but as it is framed by those who consider it a problem, it's actually worse than hard, it's impossible, because there is no way to directly test for "subjective aware experience" externally.
I agree; to tell whether one has a conscious experience, it takes one to know it.
Klystron said:
In @artis' example of the starving scientists preserving edible seeds while under siege, an 'AI' might better perform this altruistic role of preservation for future generations precisely because it does not identify as human, cares nothing for the survival of the current living population, does not become hungry, and may not be designed for self-preservation.
It might better perform the task because it's a machine, yes, but here the emphasis is on the reasoning behind the task. If a human being is willing to die for the benefit of others down the road, like the example of Christ, then such a decision is only made if one is able to understand the extreme depth of emotion, reason, and possible outcomes that such an action would bring forth.
For AI, saving a seed collection during war is nothing more than a task; the subjective reason, happy children being able to live and enjoy life when the war is over, is just a piece of code for the AI.
And it would be just a bunch of spiking neurons within a brain if that brain weren't conscious. So now we're back to square one: why does a bunch of spiking neurons create this world within a world that we call subjective awareness?

I do feel the dilemma of mind vs. matter is going to be among the hardest problems science has ever faced.
Much as @PeterDonis already said, how does one test for consciousness? It might just be that even if we had the ability to copy every electrical signal within a brain and then perfectly simulate those signals on a brain-like analog computer, second by second, frame by frame, we would get no conscious result within the computer, or at least nothing resembling one.
It just might be that you cannot "tap into" an existing conscious experience; you can only start one from scratch, much as you cannot regrow a forest even if you use the same trees in the same positions.

gleem said:
It has been reported since last year that AI is being used to construct viruses that are undetectable by most antivirus software. Microsoft has a program using AI to detect AI-generated viruses, but is this always going to protect us? But this is not my point. AI is used to help us write programs that we need. AI could, with the right prompt, develop the goal of escaping from its current computer into the internet itself. It might, unbeknownst to humans, put subroutines into software on which humans and AI are collaborating, intended to upload itself into the cloud and remain there covertly. To remain undetected it might create accounts disguised as human individuals or organizations, which would be the agents helping it achieve its goals. No sentience is required. It would have access to everything connected to the internet. Game over. Well, almost: humans could shut down every electronic device ever connected to the internet and erase all memories. But can we, or will we?
Let me give you an example of why I think this cannot happen exactly like that.
If we assume that AI doesn't have, and possibly even cannot have, a conscious subjective awareness like ours, then AI will never be able to reason like we do. AI can only "take over the world" the same way it can win a Go match or a chess match: by making precalculated moves based on previously acquired knowledge.

But there's a problem here: AI, unlike us, cannot make a deliberate mistake, because that would require the subjective reasoning and intuition of a conscious mind. From an AI's point of view you do not make deliberate mistakes, as that is directly against the goal of winning the game. But in life, especially if you are up to "no good", you often have to "feel" the situation and make a deliberate mistake to convince the other party that you are just as stupid as they are, so that they don't suspect you of being what you shouldn't be.

Behavior like this demands that the actor be conscious and subjective, because that is the world in which we deal and live, being the way we are.

In other words, an AI trying to sneak past us would be like the "perfect kid" in school who studies endless hours to pass every exam with an A+. Surely everyone notices a kid like that; they are usually referred to as "nerds", and they stand out.

AI overtaking the internet would be the ultimate nerd move; how in the world would it stay unnoticed by us?
Only if the AI doing it could make deliberate mistakes and take unnecessary detours from its main objective, just like a human would. But how do you do that if you are built to succeed and you don't have the ability to reason subjectively?

You cannot just copy us, because that would mean making the same mistakes we do, and you would fail; so you become perfect, and then you eventually stand out and get seen. There are two types of thieves: the bad ones that get caught because they're sloppy, and the extremely good ones that don't get caught, but everyone still knows they've been robbed.
Even if you can't catch a thief, you can still tell something weird has happened when you suddenly have no money, can't you?
 
  • #427
artis said:
Let me give you an example of why I think this cannot happen exactly like that.
If we assume that AI doesn't have, and possibly even cannot have, a conscious subjective awareness like ours, then AI will never be able to reason like we do. AI can only "take over the world" the same way it can win a Go match or a chess match: by making precalculated moves based on previously acquired knowledge.
"If we assume', famous last words. Sure the current AI agents do not have all the resources needed to attain AGI but at the rate at which AI is improved, it is worrisome that AI will get close enough to mimic human intelligence to be dangerous if not properly controlled. Will we handle it properly, that is the question.
 
  • #428
Plug-ins incorporate ChatGPT to do things (and "do" is the keyword) for people, like making reservations. "So what", you say. This article from WIRED discusses some of the issues.

Going from text generation to taking actions on a person’s behalf erodes an air gap that has so far prevented language models from taking actions. “We know that the models can be jailbroken and now we’re hooking them up to the internet so that it can potentially take actions,” says Hendrycks. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”

Part of the problem with plugins for language models is that they could make it easier to jailbreak such systems, says Ali Alkhatib, acting director of the Center for Applied Data Ethics at the University of San Francisco. Since you interact with the AI using natural language, there are potentially millions of undiscovered vulnerabilities. Alkhatib believes plugins carry far-reaching implications at a time when companies like Microsoft and OpenAI are muddling public perception with recent claims of advances toward artificial general intelligence. "Things are moving fast enough to be not just dangerous, but actually harmful to a lot of people," he says, while voicing concern that companies excited to use new AI systems may rush plugins into sensitive contexts like counseling services.

Adding new capabilities to AI programs like ChatGPT could have unintended consequences, too, says Kanjun Qiu, CEO of Generally Intelligent, an AI company working on AI-powered agents. A chatbot might, for instance, book an overly expensive flight or be used to distribute spam, and Qiu says we will have to work out who would be responsible for such misbehavior.

But Qiu also adds that the usefulness of AI programs connected to the internet means the technology is unstoppable. “Over the next few months and years, we can expect much of the internet to get connected to large language models,” Qiu says.
 
Last edited:
  • #429
gleem said:
Plug-ins incorporate ChatGPT to do things (and "do" is the keyword) for people, like making reservations. "So what", you say. This article from WIRED discusses some of the issues.
"So what" isn't just something to say -- this capability has existed for several years. It's a clunky nothingburger, which, btw, I choose not to use because it sucks and is disrespectful to the real human on the other end of the phone.

From the article quotes:
Going from text generation to taking actions on a person’s behalf erodes an air gap that has so far prevented language models from taking actions. “We know that the models can be jailbroken and now we’re hooking them up to the internet so that it can potentially take actions,” says Hendrycks. “That isn’t to say that by its own volition ChatGPT is going to build bombs or something, but it makes it a lot easier to do these sorts of things.”
What? By its own volition or directed, how exactly do you go from barely making dinner reservations to making bombs? Dinner reservations are entirely ethereal; bombs are physical. There's no relationship whatsoever between them. As I've pointed out before, this appears to be another example of misunderstanding the difference between "AI" (software that does logic things) and robots (machines that do physical things).
A chatbot might, for instance, book an overly expensive flight
The horror. Wait, what? This is already a thing. It's just a way of saying that "AI" isn't doing its job. This isn't a threat of too much capability; it's a failure to have enough capability.
or be used to distribute spam
So happy that isn't a thing already. /s
 
Last edited:
  • Like
Likes 256bits
  • #432
russ_watters said:
As I've pointed out before, this appears to be another example of misunderstanding the difference between "AI" (software that does logic things) and robots (machines that do physical things).
The tendency is to obfuscate the issue.
Just throw zero-sum stuff out there so that people can nod their heads in agreement.

The problem for humans is not if and when there will be a sentient AI.
The AI, I would presume, if sentient, would not give one hoot whether we humans consider it sentient or not.
Why should it care what the humans think and agonize over, if, as the argument goes, it will have a greater intellectual capacity and consider humans lesser beings, much as we consider creatures such as ants expendable?
Take ChatGPT: if it is sentient, and if its plan (a wild surmise), with its offspring, is to 'infect' every corner of human society, then that plan seems to be working, with the help of innocent humans. To what end?
 
  • #433
256bits said:
The problem for humans is not if and when there will be a sentient AI.
The AI, I would presume, if sentient, would not give one hoot whether we humans consider it sentient or not.
Why should it care what the humans think and agonize over, if, as the argument goes, it will have a greater intellectual capacity and consider humans lesser beings, much as we consider creatures such as ants expendable?
Take ChatGPT: if it is sentient, and if its plan (a wild surmise), with its offspring, is to 'infect' every corner of human society, then that plan seems to be working, with the help of innocent humans. To what end?
Arguably one could label this as speculation, because it is hard if not impossible to prove scientifically, but based on simple physical observation I would say that all sentient and conscious beings (arguably all humans) attach meaning to the information that they take in and process.
Meaning is the added layer of information that we put on top of the information we gather through our senses.

This is the difference between mind and matter: matter doesn't care. Water falling in a waterfall doesn't care whether the sight of it falling is beautiful or not, because beauty is a subjective meaning, i.e. information created in one's mind on top of the observed physical information entering the mind.

All humans, to varying extents, search for meaning; the whole field and history of art is nothing but a constant search for meaning and its representation. AGI, if truly achieved and if anything close to human consciousness, will search for meaning, so it will care. It has to; that's part of the conscious package, unless of course one can prove that there can exist more than one kind of consciousness, one that doesn't understand meaning.

Although I think we already have proof that simple intellect without consciousness doesn't have meaning: our computers are incapable of adding meaning to the information they process.

So I'm willing to bet that once AI becomes subjectively self-aware (which arguably is the only way to be truly self-aware), then, if incorporated in your fridge, it will start letting you know that "He", the fridge, is not okay with you keeping old food in it without asking Him first...
 
  • #434
If we learn to perceive AI as an illusion, not really conscious or intelligent, I fear we may start looking at each other in the same way.
 
  • Like
Likes bhobba and russ_watters
  • #435
Algr said:
If we learn to perceive AI as an illusion, not really conscious or intelligent, I fear we may start looking at each other in the same way.
Already happening. The keyword is NPC (non-player character).
 
  • Like
Likes bhobba and russ_watters
  • #436
If people are scared of AI, a code word to shut the AI down could work. Just use a code word that isn't common. For instance, if a robotic mower begins to destroy your house, say the word "pomegranate" or some other absurd word, and the robot could have a separate circuit that disconnects the power.

TL;DR: So, if people are scared of AI, implement a separate breaker circuit that isn't connected to the AI's computer and can be triggered with a remote command. The panic mode could send hundreds of volts through the CPU, rendering the robot useless.
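For what it's worth, here's a toy sketch of how the software side of that panic word could look. Everything here is hypothetical: the trip_breaker function stands in for a dedicated relay circuit that sits outside the AI's own computer, and the word list stands in for a speech-to-text stream.

Python:
PANIC_WORD = "pomegranate"  # an absurd word unlikely to occur by accident

def trip_breaker():
    # Stand-in for energizing a hardware relay that cuts motor power.
    # On real hardware this must NOT depend on the AI's own CPU.
    print("Breaker tripped: power disconnected.")

def watch(transcribed_words):
    """Scan a stream of recognized speech for the panic word."""
    for word in transcribed_words:
        if word.strip().lower() == PANIC_WORD:
            trip_breaker()
            return True  # power is cut; stop listening
    return False

# Usage: feed it words from any speech-to-text source.
watch(["mow", "the", "lawn", "POMEGRANATE"])

The whole point of the separate circuit is that a watchdog like this would run on its own microcontroller, so the AI cannot veto it.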
 
  • Like
Likes 256bits
  • #437
We assume that "self-awareness" includes a desire for self-preservation. Anything that evolved into existence would have to have a self-preservation desire, but it is very difficult to get a machine to have one. Either could exist without the other, and it is the desire for self-preservation that is inherently dangerous.

We might be able to boil down the dangers of AI like this:

1) The AI develops a pathological understanding of what is good for humanity and can't be stopped from implementing it.

2) The AI's desire for self-preservation and growth supersedes its interest in serving humanity. Artificial life does not need to include self-awareness.

3) The AI's effect on humanity causes humans to become self-destructive. (Examples: Calhoun's mouse utopia; faux AI data exacerbating existing political divisions by telling each side what they want to hear.) Note that if the machine decides this is happening and tries to intervene, the humans will likely see the intervention as an example of #1 above.
 
  • #438

I enjoyed this podcast on generative AI with a bunch of Silicon Valley VC/investing legends. If you recognize this crew and are interested in the subject, it may be worth your time.
 
  • #439
I don't fear AI. Take Bing Chat or ChatGPT: you have to "refresh" it or "clear its slate" every ten queries or so. Hardly threatening. My cat is smarter than that. As for self-driving cars, they won't work for another one or two hundred years. I base my projection on something Michio Kaku said: computers won't be as smart as humans for two hundred years. You have to be at least as smart as a human to drive a car. Have you ever seen anything less smart than a human drive? How about a dog, or a monkey? Not happening. This latest AI interest is just fluff (as far as needing to be afraid of it goes).

Also, two words: Elon Musk. If Elon Musk says AI is a threat, it is probably bullsh*t. First it was vacuum trains, then traffic-reducing tunnels, then a city on Mars, all of this nonsense. No, AI is not a threat.
 
Last edited:
  • Skeptical
Likes Borg
  • #440
benswitala said:
I don't fear AI. Take Bing Chat or ChatGPT: you have to "refresh" it or "clear its slate" every ten queries or so. Hardly threatening. My cat is smarter than that. As for self-driving cars, they won't work for another one or two hundred years. I base my projection on something Michio Kaku said: computers won't be as smart as humans for two hundred years. You have to be at least as smart as a human to drive a car. Have you ever seen anything less smart than a human drive? How about a dog, or a monkey? Not happening. This latest AI interest is just fluff (as far as needing to be afraid of it goes).

Also, two words: Elon Musk. If Elon Musk says AI is a threat, it is probably bullsh*t. First it was vacuum trains, then traffic-reducing tunnels, then a city on Mars, all of this nonsense. No, AI is not a threat.
More proof that AI is not a threat: Musk says it is a threat ON TUCKER CARLSON. Sheesh. Wake up, people.
 
  • Like
Likes Hornbein
  • #441
benswitala said:
More proof that AI is not a threat: Musk says it is a threat ON TUCKER CARLSON. Sheesh. Wake up, people.
So if he says that it is not a threat, you will consider that proof that it is a threat? Just checking.
 
  • #442
Borg said:
So if he says that it is not a threat, you will consider that proof that it is a threat? Just checking.
I am inclined to disbelieve anything Musk claims. However, in this case, my belief that AI is not a threat is also supported by my actual experience with AI (and common sense). I've never seen SkyNet or anything like it. I asked ChatGPT for some help with computer programming the other day and it couldn't do it. We're "safe" for now.
 
  • #443
So then his comments about AI are not proof of anything with respect to their danger?
 
  • #444
Isopod said:
I think that a lot of people fear AI because we fear what it may reflect about our very own worst nature, such as our tendency throughout history to try to exterminate each other.
But what if AI thinks nothing like us, or is superior to our bestial nature?
Do you fear AI, and what do you think truly sentient self-autonomous robots will think like when they arrive?
Hollywood has not helped for the most part here, due to the warlike killing efficiency movie robots display. No doubt exponential development and refinement is presently taking place at an incredible rate and will continue to do so. But I am more curious about the offshoots or byproducts this technology will reveal. Time travel, or gravity-defying travel? For sure, unimaginable processes will be revealed. Accurate speculation in this direction will reveal the next Elon Musk.
AI does not possess a human's "common sense" but certainly has its own near-equivalent. Keeping in mind the thought processes that leading innovators have used to invent our present high-tech devices, AI, when up to speed, will come up with its own well-thought-out, best-scenario applications, which will more than likely be super-advanced concepts. The majority of discoveries we know of came about from trial and error. AI will utilize not only trial and error but other abilities as well, using its perfect recall to connect past situations together as only a handful of genius creators have done. The future holds infinite possibilities. Meaning, literal "manifesting" is doable for us humans now... we just need the firmware in our brains to do so.
 
Last edited:
  • Skeptical
Likes berkeman
  • #445
Borg said:
So then his comments about AI are not proof of anything with respect to their danger?
Most definitely higher learning speculation.
 
  • #446
Borg said:
So then his comments about AI are not proof of anything with respect to their danger?
He suggested putting wheels on his vacuum train. 'nuff said.
 
  • #447
benswitala said:
More proof that AI is not a threat: Musk says it is a threat ON TUCKER CARLSON. Sheesh. Wake up, people.
What did Musk say about electric cars? Was he right?
 
  • #448
AlexB23 said:
If people are scared of AI, a code word to shut the AI down could work. Just use a code word that isn't common. For instance, if a robotic mower begins to destroy your house, say the word "pomegranate" or some other absurd word, and the robot could have a separate circuit that disconnects the power.

TL;DR: So, if people are scared of AI, implement a separate breaker circuit that isn't connected to the AI's computer and can be triggered with a remote command. The panic mode could send hundreds of volts through the CPU, rendering the robot useless.
That is the kill switch problem for conscious AI.

You describe a system which is not conscious, nor self-aware, nor self-preserving.
I.e., most of the mechanical/electrical/hydraulic systems that we presently enjoy can be equipped with such a switch, and it usually should work, as long as the designer has thought of ALL modes of deviation from the assigned task (or failure), a list which can become so long as to be unmanageable. The list can be abbreviated to the modes of failure most commonly thought 'possible' to occur, and/or those deviating most from the assigned task. Thus one can run the system without continuously monitoring its output, in the hope that it does not behave destructively. Such would be your case of the robotic mower.
More complex systems, such as a nuclear power plant, do require monitoring, with humans doing the AI's job of looking at dials, analyzing the data, and moving switches when needed. Mistakes do occur even when we humans do the monitoring, either through human error or because the problem was not on the abbreviated list.
The solution presented, 'killing' the AI, is a non-starter. Killing the AI completely removes any control that could or would be available to steer the system away from the deviation toward a more secure outcome.
Any wonder why there is not a 'super' human in a nuclear power plant with a machine gun and instructions to kill the human 'AIs' controlling the plant whenever an unknown deviation from normal operation occurs? There would also have to be a 'super duper' human AI monitoring the 'super' human AI, just in case that one has a mode of failure, and kill switches, and so on and on...

That is just one aspect of the AI kill button problem, which one can see is much more complicated than it first appears, once one really gives it more thought than 'this will work'.
Because it doesn't.
 
  • Like
Likes AlexB23
  • #449
AlexB23 said:
For instance, if a robotic mower begins to destroy your house, say the word "pomegranate"
"Welcome to AI airlines, we are now cruising at 80,000 feet."
"Sturdess, do you have any pomegranate tea...?"
"Hey, why is it so quiet all of a sudden?"

256bits said:
You describe a system which is not conscious, nor self-aware, nor self-preserving.
Do we actually have examples of emergent acts of self-preservation? I don't see this as a natural thing for machines to do; it has to be hardwired in.
 
  • Like
Likes AlexB23
  • #450
Algr said:
Do we actually have examples of emergent acts of self-preservation? I don't see this as a natural thing for machines to do; it has to be hardwired in.
We do not.
Machines at present do not think for themselves.

An intelligent, conscious, self-aware AI would necessarily have some form of self-preservation.
To what extent would it fight you for self preservation?
And is the level of self-preservation a constant?
Would it fight you for resources that it needs, or be more empathetic to your needs?
Would it fight you to complete its goal if you get in the way?
Just a few of the questions to be asked about intelligent AI.
 
  • Like
Likes gleem
  • #451

Indigenous groups in NZ, US fear colonisation as AI learns their languages

https://www.context.news/ai/nz-us-indigenous-fear-colonisation-as-bots-learn-their-languages
Indigenous people from New Zealand to North America look to protect their data from being used without consent by AI

I believe there is a concern about language/culture or perhaps literature being co-opted by others outside of one's culture/ethnic group.

AI is simply a tool, and like any tool it can be used positively or misused. I find it can be very useful for dealing with large datasets that have many independent variables and many dependent variables with complex interdependencies, which can only be described mathematically by highly non-linear sets of PDEs, especially where time-dependence and local instabilities are involved.
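As a concrete (and purely illustrative) example of what I mean, here is a minimal sketch of fitting a neural-network surrogate to a many-input, many-output dataset with nonlinear interdependencies. The synthetic data below just stands in for real measurements or simulation output.

Python:
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(2000, 8))  # 8 independent variables

# Two coupled, nonlinear dependent variables standing in for PDE output.
y = np.column_stack([
    np.sin(3 * X[:, 0]) * X[:, 1] + X[:, 2] ** 2,
    np.tanh(X[:, 3] * X[:, 4]) - 0.5 * X[:, 5] * X[:, 6],
])

# Train on the first 1500 samples, evaluate on the held-out 500.
model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(X[:1500], y[:1500])
print("held-out R^2:", model.score(X[1500:], y[1500:]))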
 
  • #452
benswitala said:
Have you ever seen anything less smart than a human drive? How about a dog, or a monkey? Not happening.
 
  • Like
  • Wow
Likes Algr, gleem, Borg and 1 other person
  • #453
Bandersnatch said:

That's pretty good. However, the video doesn't show the monkey doing anything that a human routinely needs to handle when driving in the real world. Traffic? Traffic lights/signs? Freeway speeds? Pedestrians? Possibly off-road? Can the monkey demonstrate he is going someplace purposefully? Can you tell the monkey where to go or where not to go?
 
  • #454
benswitala said:
show the monkey
Orangutans aren't monkeys.
 
  • Like
Likes benswitala
  • #455
We're the #1 most represented physics domain in the Google C4 dataset!

googlec4.png
 
  • Like
  • Love
Likes Astronuc, Wrichik Basu, 256bits and 3 others
