Fearing AI: Possibility of Sentient Self-Autonomous Robots

  • Thread starter: Isopod
  • Tags: AI
Summary
The discussion explores the fear surrounding AI and the potential for sentient, self-autonomous robots. Concerns are raised about AI reflecting humanity's darker tendencies and the implications of AI thinking differently from humans. Participants emphasize that the real danger lies in the application of AI rather than the technology itself, highlighting the need for human oversight to prevent misuse. The conversation touches on the idea that AI could potentially manipulate information, posing risks to democratic discourse. Ultimately, there is a mix of skepticism and cautious optimism about the future of AI and its impact on society.
  • #541
Ivan Seeking said:
That was good! So one of his conclusions is that we need to understand how AI decisions are made.

In other words, we need AI psychiatrists.

And if true that AI will tend to go insane, I don't see that being a good thing!
I don't believe AI can go insane. Insanity requires a certain overwhelming realization of the magnitude and scale of incoming information at a personal, subjective level. Do ordinary silicon computers go insane when you overload the CPU? No, they just "clog up" and freeze. Insanity, I think, requires not just the ability to process information but also the ability to attach meaning to it on a personal level and then contemplate that meaning.
Our current AI absolutely can't do that; it has no mechanism for doing so, nor do we know how to build one.

As for AI decisions, well, what decisions exactly? It's just rehashing information we gave it and adding patterns along the way; it can make no more decisions than an obedient soldier on a battlefield receiving a general's orders. If I ask ChatGPT to write me a story about love, does it not then turn my English-language text into its internal representation, and then into machine code to execute, finding information that is compared against its memory and matched to the input it was given?
I think I just explained how it makes a "decision" in a simple way; the answer is that it doesn't make any decisions!
It just compares its input to its known memory and produces an output, and it does that based on the specific algorithm it runs on. The fact that the output is confusing and makes you think it has some "magic" up its sleeve is only because it copies our speech and thought, essentially creating the illusion of being like us.
As I said before, consciousness is far, far easier to copy/simulate than to create; given that we have yet to create it, you can insert your own words where I said "far far".
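The "compare input to known memory, produce output" picture described above can be sketched as a toy pattern-matching responder. This is a deliberate oversimplification (real systems like ChatGPT predict tokens from learned weights rather than looking up stored responses), and every name here is hypothetical:

```python
# Toy illustration of deterministic "input matched against stored memory".
# NOT how ChatGPT works internally; just the lookup picture the post sketches.

def toy_responder(prompt, memory):
    """Return the stored response whose key shares the most words with the prompt."""
    words = set(prompt.lower().split())
    best_key, best_score = None, 0
    for key, response in memory.items():
        score = len(words & set(key.split()))
        if score > best_score:
            best_key, best_score = key, score
    return memory[best_key] if best_key else "I don't know."

memory = {
    "story about love": "Once upon a time, two hearts met...",
    "weather today": "I cannot check live weather.",
}
print(toy_responder("Write me a story about love", memory))
# → "Once upon a time, two hearts met..."
```

The same input always produces the same output: there is no decision point anywhere in the loop, which is the post's point.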
 
  • #542
Ivan Seeking said:
So let me get this straight. We don't know what creates self-awareness or desire in humans, much less what would in an AI.

An AI program claims to love and wants to live, and we don't know why, but we can say with 100% confidence that it didn't really experience those emotions.

Prove it.

We can never know if a machine becomes self aware.
I agree we can never really know, but we have damn good proof, I think, that it hasn't happened, and the one thing that makes me think so is this: we don't understand consciousness ourselves. Some say we do, but that is either arrogance or "jumping the gun".
Back in the day some also claimed brains work like hydraulic systems.

But the proof, I think, is that we do understand how our AI systems work, while we don't understand how our own non-deterministic neuron firing creates conscious self-awareness. That, I think, is all the proof one needs that our AI is so far nothing but a fancy tool.
 
  • #543
Vanadium 50 said:
There is a theme in this thread that we are very, very close to "true AI", whatever that is. I think we have learned over the decades that "intelligence" is an ensemble of abilities, some of which can be automated, and many of which we don't even know where to start.

To set the scale, the largest supercomputer I have ever worked with has 3M cores. (It was maybe #4 or so when I used it.) That's maybe 1/25,000 the number of neurons in the human brain, and maybe 1/150,000,000 the number of synapses. The hardware just isn't there. And if you think "yeah, but maybe not all of this is necessary", let me remind you that the brain is a very expensive organ: there are strong evolutionary pressures to make it smaller. If it could be, it probably would be.
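The ratios quoted above can be checked with back-of-the-envelope arithmetic; the neuron and synapse counts below are rough literature estimates (~86 billion neurons, a few hundred trillion synapses), not exact figures:

```python
# Back-of-the-envelope check of the scale comparison in the post.
# Neuron/synapse counts are rough literature estimates, not exact figures.
cores = 3_000_000       # ~3M cores on the supercomputer mentioned
neurons = 8.6e10        # ~86 billion neurons in a human brain
synapses = 4.5e14       # mid-range estimate, ~450 trillion synapses

print(f"cores / neurons  ~ 1/{neurons / cores:,.0f}")   # roughly the quoted 1/25,000
print(f"cores / synapses ~ 1/{synapses / cores:,.0f}")  # the quoted 1/150,000,000
```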

We're not talking SkyNet. We're maybe talking the brains of a minnow. Maybe.
I personally believe that conscious self-awareness and intelligence are not as connected as we think. Indeed, we can simulate and create intelligence that even surpasses our abilities in specific tasks quite well.

But let me pose a strong counter argument against what you stated here.
There is a rather popular idea among intelligence and AGI researchers that indeed once we get to human brain level capacity for silicon based information processing architectures we will achieve general intelligence as an emergent property.

Now, it is definitely true that a computer's information-processing capacity (and, some claim, its potential to reach conscious self-awareness) scales with the number of logic gates, the CPU transistor count, etc.
I would argue that we have so far good evidence that the same is definitely not true for human brains!
Let me present just a few of the many pieces of evidence for that.

https://www.cbc.ca/radio/asithappen...f-his-brain-who-leads-a-normal-life-1.3679125

I highly suggest listening to this article in the provided audio.

When a 44-year-old man from France started experiencing weakness in his leg, he went to the hospital. That's when doctors told him he was missing most of his brain. The man's skull was full of liquid, with just a thin layer of brain tissue left.
https://www.thelancet.com/action/showPdf?pii=S0140-6736(07)61127-1

He was living a normal life. He has a family. He works. His IQ was tested at the time of his complaint. This came out to be 84, which is slightly below the normal range … So, this person is not bright — but perfectly, socially apt

So basically a grown man with almost no brain left was fully self-aware and living an ordinary family life.
Clearly, if conscious self-awareness were proportionally related to brain capacity (neuron count and total brain size), then this man would be as dull as a hammer.

https://www.dailymail.co.uk/news/ar...fy-Rodriguez-explains-got-bizarre-injury.html

Also, an interesting fact: humans by no means have the most neurons in their brain or in certain brain regions. The long-finned pilot whale, for example, has roughly twice as many neurons in its neocortex as humans do, and the neocortex is considered by many to be the most important brain region for intelligent self-awareness. Clearly it is not the numbers that decide conscious ability, and I think we have good evidence for that.
 
  • #544
artis said:
I think that trying to use quantum mechanics to solve consciousness is just another attempt among the many existing ones, and not necessarily a guarantee of success.
Quantum laws work differently than macroscopic electrically connected logic gates but then again that in itself is not proof that they are closer to consciousness than the logic gates.
Actually as far as we know our brains don't seem to be that "quantum" at all. And their temperature is far above that where we normally start noticing quantum behavior.
Pick your obstacle. On the one hand you have the difficulty in doing any significant quantum information processing in the warm and wet brain environment.
On the other hand, you have the epiphenomenon issue discussed earlier in this thread. If you don't use QM, then you need to identify a way in which the information you are conscious of is associated (as is done with Integrated Information Theory), AND you need to show how that method of association can affect the universe; it can't just be epiphenomenal.

Given the choice, that "warm and wet" problem looks far more surmountable than what is likely to be a search for new physics.
 
  • #545
artis said:
I would argue that there is something real nevertheless about qualia, because if all we had were pain signals and the processing of them, then in theory all pain or sense input would result in an action-reaction style of process, similar to the "hammer tapping on the knee" reflex.
Not at all. Your brain does "processing" of pain signals that is far more complex. Indeed, to a physicalist, "qualia" is simply part of that processing.

artis said:
this is the problem of faking consciousness, because unlike intellect, which can be measured, consciousness can be faked, as it is not deterministically measurable.
This is the "zombie argument", which has been made by many philosophers, and debunked by many others. It has always seemed incomprehensible to me: it amounts to the claim that your own consciousness has no observable effect on your behavior--a "zombie" duplicate of you could exhibit identical observable behavior without being conscious. Really? So when you do things like describe your own conscious experience in detail, your consciousness has nothing to do with that? That's ridiculous.

In short: human qualia, at least, are not epiphenomenal.
 
  • #546
PeterDonis said:
This is the "zombie argument", which has been made by many philosophers, and debunked by many others. It has always seemed incomprehensible to me: it amounts to the claim that your own consciousness has no observable effect on your behavior--a "zombie" duplicate of you could exhibit identical observable behavior without being conscious. Really? So when you do things like describe your own conscious experience in detail, your consciousness has nothing to do with that? That's ridiculous.

In short: human qualia, at least, are not epiphenomenal.
It may be the "zombie argument" or any other argument (quite frankly there are so many I lost count), but that is not what I meant by saying that "consciousness can be copied". What I meant was that, on an average simple level, it is possible to make a machine that behaves very similarly to an actual conscious being; in fact, we are already there. Text-wise, ChatGPT could well pretend to be a school teacher helping a kid with homework; if the kid wasn't explicitly told what is at the other end of the text conversation, I would bet many would think it's an actual human.

Now self-awareness, I'm sure, has a huge impact on what you observe; otherwise CCTV cameras would cry at seeing a terrible traffic accident. The problem is when you need to discern whether the other side has that experience or not. It's always easy with yourself, because you know you're self-aware; it says so right in the words "self" and "aware".
 
  • #547
PeterDonis said:
Not at all. Your brain does "processing" of pain signals that is far more complex. Indeed, to a physicalist, "qualia" is simply part of that processing.
I agree, Peter; my point was more subtle, or at least I hope it is. My point was that besides the process from the pain source, to the transport of the signal, to the processing of it, there is another process going on: the brain making a conscious choice about how to react to those stimuli. So, for example, if you're an MMA practitioner and you also happen to follow one of the Asian religions, let's say you use pain not as a signal to be avoided as much as possible but rather as a tool and even a welcomed part of your life.
We know from neurology now that human brains rewire themselves with time as we live, new neuronal connections are made based on how we perceive the world, our experiences and what we feed ourselves information wise.
So I believe all humans have similar brains and nerves and yet based on the differences on how you perceive the world or the signals that you uptake, your brain is rewired and adapted to that.
This then raises the question, at least for me: is the mechanism of perception/self-awareness separate from that of information processing or not?

The reason I say this is because in computers we don't have this "observer within a box" phenomenon; a computer truly only processes information and doesn't have the capacity to contemplate that information from a point of reference outside its logic circuitry.
Yet for us it seems that what we experience and are aware of are two things not necessarily 100% intertwined.

Of course maybe this is all just a really weird emergent phenomenon of very complex special purpose information processing machines like our brains, that they can create this illusion of the "observer" being distanced from the very signals that allow him to observe.
Either way I believe this is paramount to achieving human like conscious self awareness, to understand how the observer can become, at least, in simulation, separated from that which is observed.
In other words , how input signals create a first person reality where the observer , if not physically, then at least mentally becomes separated from the signals he perceives.
 
  • #548
.Scott said:
On the one hand you have the difficulty in doing any significant quantum information processing in the warm and wet brain environment.
Well, if I'm not mistaken, we currently have no evidence of whether the brain exploits any quantum effects at all and, if it does, to what extent. So I think it is really hard to talk about this, because it is one of those arguments that really needs actual repeatable evidence.
 
  • #549
PeterDonis said:
Then, as I said, it's personal speculation and is off limits here.
The AI can search every published work and attribute any possible idea to multiple reputable authors. It is inverse plagiarism, where original works are passed off as the work of others.
 
  • #550
artis said:
on an average simple level it is possible to make a machine that behaves very similarly to an actual conscious being; in fact, we are already there
Only if people limit themselves to very simplistic tests of its behavior.

artis said:
besides the process from the pain source, to the transport of the signal, to the processing of it, there is another process going on: the brain making a conscious choice about how to react to those stimuli.
There is also a lot of unconscious information processing going on in addition to the simple "reflex arc" response that you originally described.

artis said:
is the mechanism of perception/self awareness a separate one from that of information processing or not.

The reason I say this is because in computers we don't have this "observer within a box" phenomenon
Here you are assuming that your question has the answer "yes". But what if the answer is "no"? In other words, what if it's all information processing, including qualia? Then you could put the same information processing into a computer and it would also have qualia.

Even if the answer to your question is "yes", there could still be some other physical mechanism that produces qualia, which just can't be usefully described as "information processing"--but you could still in principle put such a mechanism into a computer, or a robot, or whatever you want to call it, and it would have qualia.

Of course we are very far away from knowing how to do this, but that doesn't mean it's not possible.
 
  • #551
artis said:
Well, if I'm not mistaken, we currently have no evidence of whether the brain exploits any quantum effects at all and, if it does, to what extent. So I think it is really hard to talk about this, because it is one of those arguments that really needs actual repeatable evidence.
That's only because you haven't caught on to the more profound problem. If you eliminate QM, you need to presume new physics. Which of those choices is more in need of "repeatable evidence"?

You're following what I would call the "common argument": that consciousness comes from complicated and/or fuzzy logic. I have mentioned IIT only because they have filled in this common argument with enough detail to bring its shortcomings into easier focus.

But if you only go as far as that common argument, you only have an epiphenomenal effect. To make it "phenomenal", you have to describe exactly what constitutes that complicated and/or fuzzy logic. Then you need to postulate that when such conditions exist, something physically different happens. At that point, you are describing full-fledged physics. If it isn't QM, it's new Physics.

So the repeatable experiment that defeats the "common argument" in favor of QM (or new Physics) is simply asking people if they can truthfully report being conscious and feel a sense of reality or awareness.
 
  • #552
.Scott said:
So the repeatable experiment that defeats the "common argument" in favor of QM (or new Physics) is simply asking people if they can truthfully report being conscious and feel a sense of reality or awareness.
I'm sorry, but I'm not sure I follow your thought there. Can you please elaborate: how would first-person subjective experience, reported verbally, prove anything beyond what we already know?

As for new physics vs. QM, I'd say I see an even bigger problem. I tried to put it forth, but apparently I wasn't successful enough so far.
Let me ask this both to you and to anyone else participating in this thread: am I the only one who thinks there is a problem here?

All our computers so far , whether analog or digital and irrespective of their architecture , whether Von Neuman or other, follow certain known physics. In every computer you can actually trace out the path from the most abstract - a symbol within a user interface, to the symbol representing that within a program language to then the machine code that represents that, down further to the actual electrical signals that get created as the code is executed , then you can follow how those signals go back and forth as they are being processed by logic gates.
In fact, we know that every single step, every single byte, every single half-period of a square wave is deterministic within such a system, because all of them have to be right and matching in order for the process to work.
Any deviation from that operation is automatically an error. Sure, software is designed to tolerate a certain number of errors, but the bottom line is: these errors don't add anything of value to the process!
It's not like a failing CPU logic gate creates new creativity within the computer. It adds no new information, only noise.
It's like corrupting a TV signal: you don't add information, you only add noise, which at some point will completely destroy the original signal if left to increase.

Why am I saying this?
Because as far as we currently know and as far as I can understand it, our brains work differently.
There is no clear path one can trace out for every thought down to every neuron.
Sure, every thought is a spike of many neurons that propagated along the way, but there is no fixed order as far as we know. We can approximate the brain regions where the spike path begins, and we can also see the regions most involved in certain activities, but apart from that every region has millions of neurons, and depending on the brain's input, many neurons within a region can be "primed" or readied for spiking and yet not spike until some input is given, or maybe one spikes randomly.
In a way it is somewhat similar to nuclear decay, where each atom can decay randomly at any given time. Now, unlike atoms, the neurons that are ready to fire can, at least in theory, be seen by their increased potential, which approaches but stays below the threshold; but there is no deterministic rule that determines which exact neuron will spike at any given time!

This is much more than just "fuzzy logic" or whatever you want to call it; this is essentially complete randomness with very little determinism. It's like creating information from chaos, almost like the butterfly effect.
It also means that no two thoughts come from the same neuronal path, they may come from the same brain area but as probability would say, each has a different path, even if slightly.
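The "primed but not yet spiking" picture above can be illustrated with a noisy leaky integrate-and-fire model, a standard textbook abstraction rather than a claim about real cortical dynamics; all parameter values here are arbitrary:

```python
import random

def noisy_lif(input_current, steps=100, threshold=1.0,
              leak=0.05, noise=0.1, seed=0):
    """Leaky integrate-and-fire neuron with random membrane noise.

    The potential drifts toward the threshold ("primed"), but the exact
    spike time, if any, depends on the noise draws, so identical inputs
    can spike at different times or not at all.
    """
    rng = random.Random(seed)
    v = 0.0
    for t in range(steps):
        v += input_current - leak * v + rng.gauss(0, noise)
        if v >= threshold:
            return t  # step at which the neuron spiked
    return None  # stayed sub-threshold for the whole run

# Strong input: spikes quickly regardless of the noise.
print(noisy_lif(0.5, seed=1))
# Weak input: spike time (or None) depends entirely on the noise draws.
print(noisy_lif(0.02, seed=1), noisy_lif(0.02, seed=2))
```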

Now add to that the property of brains that neuronal connections rewire with time - even more complexity that is completely unique for each individual. Almost like you would have 8 billion unique CPU architectures.

I see AI researchers making a lot of brain-to-computer analogies, but I believe they are premature and wrong. The brain is nothing like a computer: a computer is deterministic, and the determinism is built right into the circuit. The circuitry is fixed; it can't change of its own accord, nor can an error create new information.
Human brains, on the other hand, seem to work with built-in randomness.

And what is funny is that so far you don't even need QM or new physics; we understand how neurons can change their potential and activity and influence one another even under classical physics. The hard part is to understand how that seemingly random behavior creates meaningful and complex information.
It is also very robust against damage and external influence.
One good argument against QM is that, as far as we know, quantum phenomena not only need low temperatures (for low kinetic energies of the involved particles, so that their states can be preserved) but are also very sensitive to external influences.
The brain, on the other hand, is I'd say extremely robust against damage and external influence of all kinds, from downright mechanical impact to chemical insults to radiation. For reference, see my post #543 and how the man with most of his brain deformed was able to preserve almost normal conscious self-awareness.
 
  • #553
I'm not sure how it's being used above, but fuzzy logic is an actual technique that can be useful for machine-learning outputs.
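To make that concrete: fuzzy logic in the technical sense assigns graded truth values in [0, 1] instead of hard true/false. A minimal sketch with a made-up triangular membership function:

```python
def warm_membership(temp_c):
    """Degree to which a temperature counts as 'warm' (triangular membership).

    Made-up breakpoints for illustration: 0 below 15°C and above 35°C,
    rising to full membership (1.0) at 25°C.
    """
    if temp_c <= 15 or temp_c >= 35:
        return 0.0
    if temp_c <= 25:
        return (temp_c - 15) / 10  # rising edge: 15°C -> 0.0, 25°C -> 1.0
    return (35 - temp_c) / 10      # falling edge: 25°C -> 1.0, 35°C -> 0.0

for t in (10, 20, 25, 30):
    print(t, warm_membership(t))  # 0.0, 0.5, 1.0, 0.5
```

A fuzzy controller combines several such membership functions with fuzzy AND/OR rules and then "defuzzifies" the result into a crisp output.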
 
  • #554
artis said:
I'm sorry, but I'm not sure I follow your thought there. Can you please elaborate: how would first-person subjective experience, reported verbally, prove anything beyond what we already know?

From post #509 in this thread:
One of the problems with epiphenomenalism is called "self-stultification", (as described in "The Stanford Encyclopedia of Philosophy"). I have quoted a portion of it here:
The most powerful reason for rejecting epiphenomenalism is the view that it is incompatible with knowledge of our own minds — and thus, incompatible with knowing that epiphenomenalism is true. (A variant has it that we cannot even succeed in referring to our own minds, if epiphenomenalism is true. See Bailey 2006 for this objection and Robinson 2012 for discussion.) If these destructive claims can be substantiated, then epiphenomenalists are, at the very least, caught in a practical contradiction, in which they must claim to know, or at least believe, a view which implies that they can have no reason to believe it.

The point is that it is not sufficient for a "consciousness model" to explain your own experience of qualia. It must also explain how you can claim that qualia exist: how information about the qualia can escape into the real physical world.

For this to happen, qualia must be full-fledged physics - responsive to exterior events and able to influence exterior events.

So when you describe precisely what causes qualia, you are also describing what causes qualia's outward physical influences. If you say that all that is required to create qualia is complicated logic, then you are also saying that "complicated logic" is something physically significant - something that has effects greater than what would be expected of an assembly of Boolean logic gates, voting circuits, etc.

As far as the technology issues you described (the brain uses much different information processing technology and strategies than Von Neumann machines), note the reasoning presented above is completely agnostic to technology and algorithms. Any information processing machine (biological or otherwise) which elicits qualia as we know it, will require the physics described above.
artis said:
One good argument against QM is that, as far as we know, quantum phenomena not only need low temperatures (for low kinetic energies of the involved particles, so that their states can be preserved) but are also very sensitive to external influences.
This is the "warm and wet" argument. But examples of quantum processes have been identified in nature. Photosynthesis comes to mind. Stuart Hameroff has his own specific ideas on this.

artis said:
The brain, on the other hand, is I'd say extremely robust against damage and external influence of all kinds, from downright mechanical impact to chemical insults to radiation. For reference, see my post #543 and how the man with most of his brain deformed was able to preserve almost normal conscious self-awareness.
So, to respond, all I need to provide is an example QM brain model that would demonstrate the durability and severability you question. I'm going to put it in a box so that it is clear that I am only providing an example model that "checks all the boxes":
Given the practicalities of QM data processing technology, it would seem unlikely to me that it occurs across cells. And yet, it is also not dependent on any single brain location. So it would seem that we must sport many consciousness engines. Presuming they are critical in the decision-making process, and given that two simultaneous decisions are worse than one, only one engine gets our main storyline at a time. Only one gets to write to our "storyline" memory at a time. And only that same one gets to act as the "first person" at a time - and claim our general attention to consider a proposed action.

artis said:
I see AI researchers making a lot of brain-to-computer analogies, but I believe they are premature and wrong. The brain is nothing like a computer: a computer is deterministic, and the determinism is built right into the circuit. The circuitry is fixed; it can't change of its own accord, nor can an error create new information.
Human brains, on the other hand, seem to work with built-in randomness.
Two points on this: The brain/computer analogy predates AI by a lot. In fact, "computer" was originally a job title and the first computers were "analytical engines", "computing engines", "digital computers", or "automated computers". So, I suppose the question shouldn't be "Are computers human?", but "Are analytic engines computers?".
Those analogies are not wrong, but they can certainly be taken in the wrong way. And since people are very social animals, we are very ready (perhaps too ready) to befriend a machine as if it were another social animal.

The other point is about stochastic circuitry. There are stochastic algorithms (such as the Monte Carlo method) that depend on "randomness". I would be careful about describing any of these as "creating new information".
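The Monte Carlo method mentioned above is a good example of randomness being put to use without "creating new information"; a minimal sketch estimating π by sampling random points in the unit square:

```python
import random

def estimate_pi(n_samples, seed=0):
    """Estimate pi by counting random points in the unit square that
    land inside the quarter circle of radius 1 (area pi/4)."""
    rng = random.Random(seed)
    inside = sum(
        1 for _ in range(n_samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / n_samples

print(estimate_pi(100_000))  # close to 3.14159, but never exact
```

The randomness here is a sampling tool: it trades exactness for tractability, and the answer it converges to was fully determined by the problem all along.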

artis said:
And what is funny is that so far you don't even need QM or new physics, we understand how the neurons can change their potential and activity and influence one another even under classical physics - the hard part is to understand how that seemingly random behavior creates meaningful and complex information.
As described above, the QM or new physics is only required of neurons that elicit our consciousness. So the project is not to explain how neurons can operate without QM/NP but to find the ones that cannot be explained that way.
 
  • #555
.Scott said:
the QM or new physics is only required of neurons that elicit our consciousness
We don't know that this is the case. Some physicists believe it is, others believe it isn't. We have no real testable predictions either way.
 
  • #556
PeterDonis said:
We don't know that this is the case. Some physicists believe it is, others believe it isn't. We have no real testable predictions either way.
If someone is claiming that a device that limits itself to known physics minus QM can truthfully report itself to be conscious, they need to explain where that report is coming from.
 
  • #557
.Scott said:
a device that limits itself to known physics minus QM
The "new physics" part is the only limitation I was referring to in my previous post. I did not intend to include any "minus QM" limitation. Sorry if that wasn't clear.

If we take out the "minus QM" part, then as far as we know now, you are such a device. So am I, and so is every other human that says they are conscious. We don't know that any new physics is required for consciousness.
 
  • #558
Any new disruptive technology is always an opportunity to build a profitable business, not just by use of the technology itself but also by the development of counter-technology (which has the added bonus of not risking being seen as evil).

Are there any efforts toward AI-busting technology? I suppose the obvious tech would be analysis software that produces a confidence level that some given piece of content (text, picture, video, etc.) was produced by AI.

That seems like it might be lucrative. There's my million dollar idea for any of you entrepreneurs out there. The wave of counter-tech will come; you could be on the leading edge.
 
  • #559
This is the general idea for a class of neural networks called Generative Adversarial Networks (GANs). Two networks are built - one tries to create fake outputs (commonly images or audio) to fool the other and the second one tries to learn to detect the fakes. The problem is that they both get better so that it becomes harder for the detector network to detect the fakes.

That said, a common problem with GANs is that the generator network can often generate outputs that fool the detector but would be spotted by humans as obvious fakes.
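The adversarial setup described above can be made concrete by writing down the two losses. This toy computation uses made-up discriminator scores and plain binary cross-entropy (the classic GAN objective, in its non-saturating generator form); no actual networks or training are involved:

```python
import math

def bce(prediction, target):
    """Binary cross-entropy for a single probability prediction."""
    eps = 1e-12  # guard against log(0)
    return -(target * math.log(prediction + eps)
             + (1 - target) * math.log(1 - prediction + eps))

# Discriminator scores: estimated probability that a sample is real.
d_on_real = [0.9, 0.8]   # discriminator on real samples (target: 1)
d_on_fake = [0.3, 0.1]   # discriminator on generated samples (target: 0)

# Discriminator loss: label real as 1, fake as 0, average over the batch.
d_loss = (sum(bce(p, 1.0) for p in d_on_real)
          + sum(bce(p, 0.0) for p in d_on_fake)) / 4

# Generator loss (non-saturating form): wants its fakes labelled as real.
g_loss = sum(bce(p, 1.0) for p in d_on_fake) / 2

print(f"discriminator loss: {d_loss:.3f}")
print(f"generator loss:     {g_loss:.3f}")
```

Training alternates gradient steps on the two losses, which is exactly why both networks keep improving and detection keeps getting harder.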
 
  • #560
.Scott said:
So when you describe precisely what causes qualia, you are also describing what causes qualia's outward physical influences. If you say that all that is required to create qualia is complicated logic, then you are also saying that "complicated logic" is something physically significant - something that has effects greater than what would be expected of an assembly of Boolean logic gates, voting circuits, etc.
Well, actually that's the mainstream view so far. It's commonly known as "emergent properties". We do see similar properties in complex systems elsewhere, so researchers have a natural tendency to think that conscious information processing should be similar.
They also believe that because the mainstream view is still evolutionary biology, which describes the human mind as just the result of a long process of random mutations aided by natural selection. Now, there are quite a few claimed shortcomings of evolution, some now pointed out even by evolutionists themselves, but in order not to diverge here, let that be another topic.
All in all I do see this as the main belief, that consciousness emerges when sufficient complexity of specific circuits is reached.
I myself have doubts about this, mainly because, although I accept that many systems do show emergent phenomena, it is by no means clear that it is even possible to achieve consciousness out of silicon logic irrespective of its complexity; it might be that silicon logic is incapable of this type of emergent property, and some other system needs to be used that can achieve it.
.Scott said:
This is the "warm and wet" argument. But examples of quantum processes have been identified in nature. Photosynthesis comes to mind. Stuart Hameroff has his own specific ideas on this.
I think we need to make a distinction here. Quantum processes happen everywhere in nature if you zoom in to subatomic scales; that doesn't mean that the macro objects hosting these QM processes are conscious.
In other words, there is no proof so far that even if some quantum process does happen within the brain that it then directly influences consciousness or more so causes it to emerge.

And the dilemma is huge. Think about it. We know roughly how the brain works: there are neurons and synapses, and neurons fire all the time. The problem is how you take that wet gray blob of matter called the brain, with its billions of neurons, and map the neuron firing paths onto thought paths. If you can't do this, how do you know which neurons made which thought, when, and why?
Also, in order to prove that a quantum process takes place, one needs a highly prepared environment. One can't just take the brain out of a living human and expect it to continue to function, so how do you demonstrate a quantum process using the only working brain we have: the one inside a living human, which is wet and warm?
This same problem applies to those who, I'd say naively, believe there will come a time when we can make an exact copy of someone's consciousness at any given moment. To do that, you would have to know the state of every neuron in the brain at that specific time.
It is simply an impossibility.

Now, one could argue that we could take just a few neurons, maybe even produce them artificially, and put them to the test to see whether QM is involved. But that wouldn't match the real situation, because you need neurons in their native environment to see how they truly work: we don't know whether the brain is just the sum of its parts or something more, precisely because of emergent phenomena.

Borg said:
That said, a common problem with GANs is that the generator network can often generate outputs that fool the detector but would be spotted by humans as obvious fakes.
As you would expect, because both are built on similar working principles. It's almost like two thieves who work by the same methods finding it hard to rob one another.
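That failure mode can be shown with a toy example (every function, threshold, and number below is hypothetical, chosen only for illustration, not taken from any real GAN): a "generator" that satisfies a weak "detector" checking only the sample mean will still be caught by a human-proxy check that also looks at the spread of the samples.

```python
import random

random.seed(0)

def real_samples(n):
    # "Real" data: noisy values centred on 10
    return [10 + random.gauss(0, 1) for _ in range(n)]

def detector(samples):
    # Weak discriminator: accepts anything whose mean is close to 10
    mean = sum(samples) / len(samples)
    return abs(mean - 10) < 0.5

def human_check(samples):
    # Human-proxy check: also expects realistic variation between samples
    mean = sum(samples) / len(samples)
    var = sum((x - mean) ** 2 for x in samples) / len(samples)
    return abs(mean - 10) < 0.5 and var > 0.1

def generator(n):
    # Degenerate generator: emits the constant 10, which fools the
    # mean-only detector but has zero variance
    return [10.0] * n

fakes = generator(100)
print(detector(fakes))                 # the detector is fooled
print(human_check(fakes))              # the human spots the obvious fake
print(human_check(real_samples(100)))  # real data passes both checks
```

The point is only that a detector can be satisfied along the narrow dimensions it actually measures, while a check using different criteria (here, variance, standing in for a human's broader judgment) still rejects the output.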
 
  • #561
Answering the thread title, I think we are still safe for a few years:

 
  • Haha
  • Like
Likes 256bits, erobz, bhobba and 1 other person
  • #563
Human referee sends off wrong player in a case of mistaken identity!

 
  • Skeptical
Likes russ_watters
  • #564
jack action said:
And again ...
That one is bizarre. Yes, it shows a limitation of current "AI" (quote-unquote to indicate the term's meaninglessness). But why? All you need is an RFID chip or something similar to make this work perfectly. It has nothing to do with "AI"; it's just properly implementing '90s location/identification technology that's now mundane.

I suppose the real answer, though, is that the goal is "AI"-specific: they are trying to do it with "AI" because they are trying to advance "AI". Never mind the pointlessness.
 
Last edited:
  • #565
russ_watters said:
I suppose the real answer though is that the goal is "AI"-specific: they are trying to do it with "AI" because they are trying to advance "AI".
Yes, but this example helps answer the OP's question:
Isopod said:
Do you fear AI and what you do think truly sentient self-autonomous robots will think like when they arrive?
The answer seems to be that "sentient self-autonomous robots who think" are still closer to fiction than fact. Therefore, it is impossible to determine what AI will or will not do once (if ever) it becomes sentient.
 
  • Like
Likes 256bits and russ_watters
  • #566
russ_watters said:
All you need is an rfid chip or something similar to make this work perfectly.
It's a little more complicated than that.

russ_watters said:
It has nothing to do with "AI" it's just properly implementing '90s location/identification technology that's now mundane.
In fact in-ball technology for Association Football was only successfully implemented and approved in 2022 and is not yet in widespread use: I can't imagine it reaching Inverness's stadium for some time (the money would be better spent upgrading the toilets).

russ_watters said:
I suppose the real answer though is that the goal is "AI"-specific: they are trying to do it with "AI" because they are trying to advance "AI".
No, they are doing it with AI because this technology has been around for a number of years and is relatively easy to add to the existing infrastructure used to televise games. This clip is from a match in October 2020 and the software used has been updated many times since then at zero cost to the clubs.
 
  • #567
pbuk said:
It's a little more complicated than that.

In fact in-ball technology for Association Football was only successfully implemented and approved in 2022 and is not yet in widespread use: I can't imagine it reaching Inverness's stadium for some time (the money would be better spent upgrading the toilets).

No, they are doing it with AI because this technology has been around for a number of years and is relatively easy to add to the existing infrastructure used to televise games. This clip is from a match in October 2020 and the software used has been updated many times since then at zero cost to the clubs.
Ok, so I'm looking into the actual typical technology:
https://en.wikipedia.org/wiki/Hawk-Eye#Method_of_operation

That's been in use for 25 years, and is currently used in 20 different sports leagues. I don't think I've ever heard it described as "AI" before (that term only recently became fashionable). I don't see it explicitly stated how it identifies the ball, but what I'm reading implies it's visual/software, not hardware.
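The Hawk-Eye method described at that link boils down to triangulation: each calibrated camera turns the ball's pixel position into a sight line, and the lines from several cameras are intersected. Here is a minimal 2-D sketch of that idea with two hypothetical cameras (the real system uses many more cameras and a best-fit in 3-D; none of this code is Hawk-Eye's):

```python
def intersect(c1, d1, c2, d2):
    # Solve c1 + t1*d1 = c2 + t2*d2 for the crossing point of two
    # sight lines (a 2x2 linear system, solved via Cramer's rule)
    a, b = d1[0], -d2[0]
    c, d = d1[1], -d2[1]
    e = c2[0] - c1[0]
    f = c2[1] - c1[1]
    det = a * d - b * c
    t1 = (e * d - b * f) / det
    return (c1[0] + t1 * d1[0], c1[1] + t1 * d1[1])

# Hypothetical setup: two cameras at the corners of a pitch, each
# reporting the direction in which it sees the ball.
cam1, cam2 = (0.0, 0.0), (100.0, 0.0)
ball = (40.0, 30.0)
dir1 = (ball[0] - cam1[0], ball[1] - cam1[1])  # bearing from camera 1
dir2 = (ball[0] - cam2[0], ball[1] - cam2[1])  # bearing from camera 2

print(intersect(cam1, dir1, cam2, dir2))  # → (40.0, 30.0)
```

With two cameras the intersection is exact; with more cameras and noisy detections the real system would instead fit the point that minimises the distance to all sight lines.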

I think people trying to copy this technology today might just call it "AI" because it's fashionable.

[edit] When FoxSports did it in the mid-90s it was a hardware-based solution (an IR emitter in a hockey puck):
https://en.wikipedia.org/wiki/FoxTrax
 
  • #568
PeroK said:
Human referee sends off wrong player in a case of mistaken identity!
I think you posted that in the wrong thread. This thread is about AI.
 
  • #569
russ_watters said:
I don't see the it explicitly said how it identifies the ball, but what I'm reading implies it's visual/software, not hardware.

I think people trying to copy this technology today might just call it "AI" because it's fashionable.
The ball tracking uses a subset of computer vision, itself a subset of AI. The development of computer vision, by the way, began in the '60s, when the minds of the day thought making a self-functioning robot would be easy.

It uses line and edge detection to locate the ball, and a motion algorithm to estimate the ball's trajectory in order to pan the viewing camera. Somewhere along the way, the head is mistaken for the ball, I suspect within the motion algorithm. The focus stays on the head for a few seconds until the program re-finds the ball and pans the camera back, and the erroneous loop keeps repeating in any situation where the head looks similar to a ball.
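That suspicion can be sketched in a few lines. A common tracking pattern is: predict the ball's next position with a constant-velocity model, then lock onto whichever ball-like detection is closest to the prediction. The toy below (hypothetical coordinates, not the broadcaster's actual software) shows how a stationary round distractor, such as a bald head, can capture the track the moment the ball deviates from its predicted path:

```python
def predict(pos, vel):
    # Constant-velocity model: assume the ball keeps its last motion
    return (pos[0] + vel[0], pos[1] + vel[1])

def nearest_blob(predicted, blobs):
    # Lock onto whichever ball-like detection is closest to the prediction
    return min(blobs, key=lambda b: (b[0] - predicted[0]) ** 2
                                    + (b[1] - predicted[1]) ** 2)

# Hypothetical scene: the ball rolls right, then is kicked back left.
# A round bald head (a ball-like blob to the detector) sits at (8, 1).
ball_frames = [(2, 0), (4, 0), (6, 0), (4, 0), (2, 0)]
head = (8, 1)

pos, vel = (0, 0), (2, 0)
track = []
for ball in ball_frames:
    predicted = predict(pos, vel)
    # The tracker cannot tell the two blobs apart; it only trusts proximity
    chosen = nearest_blob(predicted, [ball, head])
    vel = (chosen[0] - pos[0], chosen[1] - pos[1])
    pos = chosen
    track.append(chosen)

print(track)  # → [(2, 0), (4, 0), (6, 0), (8, 1), (8, 1)]
```

Once the ball reverses direction, the constant-velocity prediction overshoots toward the head, the head wins the nearest-blob test, and the tracker stays locked onto it, which matches the looping behaviour seen in the clip.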
 
  • #570
DaveC426913 said:
We are all 18th century authors, discussing a journey from Fiji to New Zealand.
There is some truth in your statement regarding projected AI scenarios.
 
