Can Artificial Intelligence ever reach Human Intelligence?

AI Thread Summary
The discussion centers around whether artificial intelligence (AI) can ever achieve human-like intelligence. Participants express skepticism about AI reaching the complexity of human thought, emphasizing that while machines can process information and make decisions based on programming, they lack true consciousness and emotional depth. The conversation explores the differences between human and machine intelligence, particularly in terms of creativity, emotional understanding, and the ability to learn from experiences. Some argue that while AI can simulate human behavior and emotions, it will never possess genuine consciousness or a soul, which are seen as inherently non-physical attributes. Others suggest that advancements in technology, such as quantum computing, could lead to machines that emulate human cognition more closely. The ethical implications of creating highly intelligent machines are also discussed, with concerns about potential threats if machines become self-aware. Ultimately, the debate highlights the complexity of defining intelligence and consciousness, and whether machines can ever replicate the human experience fully.

AI ever equal to Human Intelligence?

  • Yes

    Votes: 51 56.7%
  • No

    Votes: 39 43.3%

  • Total voters
    90
StykFacE
1st time posting here... thought i'd post up something that causes much debate... but a good topic. ;-) (please keep it level-minded and not a heated argument)

Question: Can Artificial Intelligence ever reach Human Intelligence?

please give your thoughts... i vote no.
 
I'm pretty sure my cell phone has more intelligence than some of the people I have met...
 
Pengwuino said:
I'm pretty sure my cell phone has more intelligence than some of the people I have met...
lol, funny.

so your cell phone can think on its own? that's a pretty smart cell phone you got there.
 
StykFacE said:
lol, funny.

so your cell phone can think on its own? that's a pretty smart cell phone you got there.

No it can't think on its own. Now think of those implications.
 
whether humans make smart, or dumb, decisions, the level of complexity is far greater than a computer's will ever be.

i think that's what makes the difference mostly.
 
Though not strictly artificial intelligence, right now I am using a program called Dragon NaturallySpeaking version 8 to dictate my comment and have these words translated into text automatically.
 
pallidin said:
Though not strictly artificial intelligence, right now I am using a program called Dragon NaturallySpeaking version 8 to dictate my comment and have these words translated into text automatically.
so what is the point...? lol, I'm not sure i follow...
AI vs human intelligence is the issue at hand. ;-)
 
I believe that program does have to make "decisions" on what your speech patterns mean and all.
 
Pengwuino said:
I believe that program does have to make "decisions" on what your speech patterns mean and all.
no, a program is... "programmed". lol, it only does what it was programmed to do. there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)
 
  • #10
StykFacE said:
no, a program is... "programmed". lol, it only does what it was programmed to do. there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)

Using that definition, AI is undefinable. Your thread is thus useless.
 
  • #11
Pengwuino said:
... AI is undefinable...

how so? please comment... :biggrin:
 
  • #12
StykFacE said:
no, a program is... "programmed". lol, it only does what it was programmed to do. there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)

Most people who deal with AI have better definitions than this when it comes to AI. You're basically saying the only platform AI is going to be used with is intrinsically incapable of using AI.
 
  • #13
StykFacE said:
...there is no decision making process. a computer merely calculates numbers, and that's all a computer will ever do, no matter how advanced. ;-)
How do you know that we do not do the same thing? As Pengwuino stated, your definition is useless.

What is a decision-making process? Think of it as if you are studying:
Sub ChapterEnd()
    If Sleepiness < 10 Then
        Study(NextChapter)
    Else
        GoTo Bed.AtYourRoom
    End If
End Sub
OR
`Waa, I am so sleepy, I'd better study those Bessel functions tomorrow...`
One way of thinking of AI (and of making so-called intelligent robots) is to take a `pleasure function` as a base and let the machine decide which available action increases it the most. This puts `instincts` in place. For example, a robot's bumping into a wall decreases its p. function but recharging its battery increases it, and so on. What would you get? Robots addicted to charge, much as we may be addicted to sex etc.
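That `pleasure function` idea can be sketched in a few lines. Everything below (the actions, the numbers, the scoring) is invented purely to illustrate the greedy pick-the-most-pleasurable-action loop, not taken from any real robot:

```python
# Toy sketch of the "pleasure function" idea above: the agent greedily
# picks whichever available action raises its pleasure score the most.
# All names and numbers here are invented for illustration.

def choose_action(state, actions, pleasure):
    """Return the action whose predicted outcome maximizes pleasure."""
    return max(actions, key=lambda a: pleasure(a(state)))

def recharge(state):
    return {**state, "battery": min(100, state["battery"] + 30)}

def bump_wall(state):
    return {**state, "battery": state["battery"] - 5, "bumps": state["bumps"] + 1}

def pleasure(state):
    # Charging feels good; bumping into walls feels bad.
    return state["battery"] - 10 * state["bumps"]

robot = {"battery": 20, "bumps": 0}
best = choose_action(robot, [recharge, bump_wall], pleasure)
robot = best(robot)
print(robot)  # the robot "chooses" to recharge: {'battery': 50, 'bumps': 0}
```

Run repeatedly, a loop like this does produce the "addicted to charge" behavior described above: recharging is the action that maximizes the score at every step.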
 
  • #14
sure it can, if you can code 100 billion neurons and 10,000 synapses on average per neuron and give it the sensory/motor skills of a human. It might work a bit slower, sort of like a child... but there are movements in California and Colorado to build the hardware... and i think Colorado already has a machine that's like 3-4 years old, though i can't remember what it's called. oh, and did i forget to mention you've got to raise it for like 10-15 years.
 
  • #15
If you mean "raw intelligence", computers can already beat the best chess champions in the world, so I'm certain that surpassing human intelligence in complexity (think multi-tasking to the extreme) is inevitable. But programming it with emotions and intuitiveness could prove to be much more complex. You're trying to teach a computer to ignore logic based on a "feeling". In that sense it would be very difficult to emulate us.

However, the deeper question I think isn't whether or not we CAN do this, but whether we SHOULD. Machines with superior intelligence who are self-aware may constitute a threat if they are given sufficient power and control. The counter to this, of course, is that we simply keep them in the box and don't give them arms and legs to pummel us with. However, society craves simplicity and convenience - the draw of a robot nanny may be too much to resist.
 
  • #16
if you keep them in a box and take off the limbs, how are they going to grow =] ..i mean, what if we were to do that to a baby?
 
  • #17
Humans can be creative; Computers cannot.

Creativity makes Humans different from Computers.

It's possible, but what'll the telltale difference between AI and Real I be?

That, and if computers had free will, what makes us so sure of no revolt?

Above the above poster's post, shortened into 3 sentences, and agreed with.
 
  • #18
Zantra said:
However, the deeper question I think isn't whether or not we CAN do this, but whether we SHOULD. Machines with superior intelligence who are self-aware may constitute a threat if they are given sufficient power and control. The counter to this, of course, is that we simply keep them in the box and don't give them arms and legs to pummel us with. However, society craves simplicity and convenience - the draw of a robot nanny may be too much to resist.

great statement, this is the kind of talk I'm looking for. however... can we ever 'create' an artificial conscience, for a computer to actually think on its own, with emotions and feelings? sure, bots and computers can simulate us very much so, but there's still a difference. i think that's a key thought... conscience. that is a very complex intelligence in itself. :smile:
 
  • #19
"Creativity makes Humans different from Computers" ?!? who's to say computers can't be creative? you ever seen someone code a program with 90% precompiler directives?
 
  • #20
neurocomp2003 said:
"Creativity makes Humans different from Computers" ?!? who's to say computers can't be creative? you ever seen someone code a program with 90% precompiler directives?
could a computer ever have a "gut feeling" regarding a situation?

could perhaps a computer ever physically feel pain from an emotion?
 
  • #21
sure. give it pain sensory receptors similar to a human's, then evolve it by coding billions of neurons and trillions of synapses. Don't forget a child exists 9 months before it actually comes out of the womb. Does it feel guts/pain while in the womb... perhaps... but how does that come to be?

Then when the child grows up it begins to know these sensations. What you expect out of a computer is to instantly have these sensations.. why?
 
  • #22
I think your question should be whether or not in the future we will be able to make a computer think and feel like a human.
my answer is: not in the near future, but i think eventually we are going to get there.
the way i see it, the problem is with the software rather than the hardware; to be able to develop a program that can make one single decision will be a huge leap in the field.


http://en.wikipedia.org/wiki/Artificial_intelligence
 
  • #23
neurocomp2003 said:
sure. give it pain sensory receptors similar to a human's, then evolve it by coding billions of neurons and trillions of synapses. Don't forget a child exists 9 months before it actually comes out of the womb. Does it feel guts/pain while in the womb... perhaps... but how does that come to be?

Then when the child grows up it begins to know these sensations. What you expect out of a computer is to instantly have these sensations.. why?
hmmm, "pain sensory receptors"...? is this something that will make the computer physically 'feel', or simply receptors that tell the central processing unit that it's 'feeling' pain, so that it then reacts to it?

lol, sorry but I'm having a hard time believing that something, that is not alive, can actually feel pain.

yes we have 'receptors', but when it tells our brain that there is pain, we literally feel it.

;-)
 
  • #24
how do you define alive...are we not the sum of our components?

"but when it tells our brain that there is pain, we literally" - so are you talking about in the faked sense?
 
  • #25
neurocomp2003 said:
how do you define alive...are we not the sum of our components?

"but when it tells are brain that there is pain, we literally" so are you talking about in the faked sense?

What I'm talking about is the difference between "the human brain and the human conscience". they are not one and the same. The human brain is like a computer system - it calculates data, keeps our body running properly, and so forth. However, it does not coincide with the conscience - physically. That is the complexity, and technology, that computers will never reach. I'll explain...

Say a person has brain surgery and loses 20% of their brain due to a tumor, but lives. Do they lose 20% of their personality? or 20% of their conscience? (remember I'm talking about someone who lives and still has 100% functionality in life as before). No, you do not lose anything. That's because the conscience and brain are separate entities.

Now, say there is a highly advanced computer system with the highest AI available in the future, so far advanced you cannot tell the difference between it and a human. Now take away 20% of its CPU. What do you think would happen? The complexity and technology of the human brain & conscience together is the intelligence a computer will never reach. A conscience does not consist of particles or matter. You cannot create one for a highly advanced computer system. A computer will only be programmed, with learning capabilities at best.

IMO of course.

;-)
 
  • #26
I think the difference between humans and computers, in addition to what was mentioned before, is that humans can deal with unexpected situations when the situation demands it. Computers only function on pre-set rules. So in this way humans can learn from experience whereas computers can't. There is active research in this area to make computers learn from experience, but with no fruitful results so far.
 
  • #27
"You cannot create one for a highly advanced computer system. A computer will only be programmed, with learning capabilities at best"?
how do you come to assume this... and what is dna code?
And from the cited statements, do you conclude that physics has no fundamental rules and no fundamental structure? because isn't that what a brain is - a physical structure?

and if consciousness and the brain are so separate... would you let me surgically remove your brain, or even some portion >50%? anyways, I'm sick of these debates because i realize i had a similar one in these forums like in the middle of summer. lastly, isn't consciousness grown from a fetus? which lies 9 months in a womb and only begins talking at the age of 1?
 
  • #28
Nice StykFacE.

I'm very optimistic that one day we'll create qualitatively different computational machines which simulate human cognition to the extent that creativity and discovery will emerge from them. I suspect we will have to "grow a mind" in some way similar to how humans do so in early development. The brain is massively non-linear and recurrent (outputs go back to inputs). That's why AI, in my opinion, has been a failure for the past 30 or so years: they're using linear mind-sets and devices (current digital computers) to model a highly non-linear one. A novel device (non-linear, I suspect) will emerge from technology one day and its output will be qualitatively different from the output of present computers: at first it won't look like much and the results will be tough to interpret. Gradually, our paradigms for what constitutes cognition will change qualitatively as we begin to grow a new form of intelligence.

Would be nice to be around to see it happen. :smile:
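The "outputs go back to inputs" point can be made concrete with a toy sketch of a single recurrent, non-linear unit. The weights, the inputs, and the choice of tanh here are my own invented illustration, not anything from the thread:

```python
import math

# Toy sketch of one recurrent, non-linear unit: its own previous output h
# is fed back in as part of the next input ("outputs go back to inputs").
# All weights and inputs are invented for illustration.
def step(x, h, w_in=0.8, w_rec=1.5, bias=-0.2):
    return math.tanh(w_in * x + w_rec * h + bias)  # tanh is the non-linearity

h = 0.0
for x in [1.0, 0.0, 0.0, 0.0]:   # a brief input pulse, then silence
    h = step(x, h)

# Even after the input drops to zero, the feedback loop keeps the unit's
# state alive rather than letting it fade immediately.
print(h)
```

With `w_rec` set to 0 the unit would forget its history after a single step; it is the recurrent term that lets past activity persist, which is a small instance of the dynamic, feedback-driven behavior the post is describing.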
 
  • #29
saltydog said:
Nice StykFacE.

I'm very optimistic that one day we'll create qualitatively different computational machines which simulate human cognition to the extent that creativity and discovery will emerge from them. I suspect we will have to "grow a mind" in some way similar to how humans do so in early development. The brain is massively non-linear and recurrent (outputs go back to inputs). That's why AI, in my opinion, has been a failure for the past 30 or so years: they're using linear mind-sets and devices (current digital computers) to model a highly non-linear one. A novel device (non-linear, I suspect) will emerge from technology one day and its output will be qualitatively different from the output of present computers: at first it won't look like much and the results will be tough to interpret. Gradually, our paradigms for what constitutes cognition will change qualitatively as we begin to grow a new form of intelligence.

Would be nice to be around to see it happen. :smile:
I see what you're saying. However, i still believe that a conscience can never be created by human beings. I believe that computers will become so advanced they will simulate awareness, or seem as if they are aware; however, computers consist completely of physical matter. The conscience is also referred to as the soul, or spirit; it is the inner self and is a nonphysical item. This is the main separating key.

neurocomp2003:
"and what is dna code" - well it's not a conscience, now is it? lol, it's genetic code.

"a brain is a physical structure" - of course it is, however the conscience is not. "If you apply a physical process to physical matter, you're going to get a different arrangement of physical materials. No matter how complex, it's still going to be physical." (a quote from J. P. Moreland) this is a VERY true statement in the world of physics and science. Can you argue?? Of course not. Now the conscience or soul... what very complex physical process of matter is arranged in such a way that we now have a non-physical state of awareness? please explain lol ;-)

"surgically remove your brain or even some portion >50%" - losing 50% of the brain is common with surgeries, accidents, strokes, etc. the medical field has records of many cases I'm sure, and some people still live 100% full lives. 'but they lost over 50% of their PHYSICAL brain? how can they PHYSICALLY still be the same person they were beforehand?'

Maybe our NON PHYSICAL conscience is within our PHYSICAL DNA. lol

;-)
 
  • #30
StykFacE said:
I see what you're saying. However, i still believe that a conscience can never be created by human beings. I believe that computers will become so advanced they will simulate awareness, or seem as if they are aware; however, computers consist completely of physical matter. The conscience is also referred to as the soul, or spirit; it is the inner self and is a nonphysical item. This is the main separating key.

neurocomp2003:
"and what is dna code" - well it's not a conscience, now is it? lol, it's genetic code.

"a brain is a physical structure" - of course it is, however the conscience is not. "If you apply a physical process to physical matter, you're going to get a different arrangement of physical materials. No matter how complex, it's still going to be physical." (a quote from J. P. Moreland) this is a VERY true statement in the world of physics and science. Can you argue?? Of course not. Now the conscience or soul... what very complex physical process of matter is arranged in such a way that we now have a non-physical state of awareness? please explain lol ;-)

"surgically remove your brain or even some portion >50%" - losing 50% of the brain is common with surgeries, accidents, strokes, etc. the medical field has records of many cases I'm sure, and some people still live 100% full lives. 'but they lost over 50% of their PHYSICAL brain? how can they PHYSICALLY still be the same person they were beforehand?'

Maybe our NON PHYSICAL conscience is within our PHYSICAL DNA. lol

;-)

Well, they argue in here all the time about what consciousness constitutes. We have two groups: one thinks consciousness is something more than material existence, the other group thinks otherwise. You are in the former, I see.

I just don't understand why people think consciousness is something beyond physical interpretation and construction. It's just an emergent property of large assemblies of neural networks. That's it. Nothing else in my view. We really are fragile creatures still limited in many ways by our beliefs about the world, ourselves, death, and life.

I used to look out of my window at the world and wonder why about a lot of things. Ten years ago I started studying non-linear dynamics... I no longer wonder why about a lot of things. :smile:
 
  • #31
saltydog said:
I just don't understand why people think consciousness is something beyond physical interpretation and construction. It's just an emergent property of large assemblies of neural networks.

You've been watching too much Star Trek, lol.

Here's one way to understand it. Is a brick conscious? No. How about a building? No. How about any other highly complex stacking and arrangement of bricks? Again, I'd have to say no.

The same thing could be said for atoms. For a lot of people, it just seems too implausible that a collection of atoms could somehow "feel" or be conscious. Consciousness seems to require something fundamentally different.

Confer Searle's Chinese Room thought experiment (it also seems especially relevant to this discussion on A.I., but I'll have to post on it later because I’ve run out of time).
 
  • #32
saltydog said:
Ten years ago I started studying non-linear dynamics . . . I no longer wonder why about a lot of things. :smile:

I know nothing about non-linear dynamics, so I will not argue with you there... lol

I think I've stated enough, until neurocomp2003 disputes... I think we're on the same level of thinking, just different sides of the tracks. :smile:
 
  • #33
I think that if you cast aside any notions of religious connotation, any spiritual aspect, and look at the pure mechanics, the human brain and all related processes are nothing but an advanced machine. Albeit the construct is flesh and tissue vs silicon and metal, it's still something that can be duplicated. You may speak to me about "soul" or "gut instinct", but that comes down to chemical processes in our brain and neurotransmitters affecting a decision that would otherwise be based on logic and experience. Can we teach a machine to "love"? Well, the simple answer is no, although we can definitely teach the machine to duplicate it. But we can definitely create machines that can outthink us, and that eventually can become self-aware. We cannot expect to duplicate to ultimate satisfaction that which we don't fully grasp. Love is induced by the release of endorphins and dopamine neurotransmitters. Is that how we think of love? No. But that is the process, and it can eventually be duplicated. The question is: can we make a machine that will "fall in love" or develop a profound sense of attachment to something or someone? If the eventual answer were yes, how would that make you feel? Would that impress you, frighten you, or disgust you?

I think it comes down to our own vanity: can we withstand our creation outpacing us in development? When our child outdistances us, will we feel a pang of jealousy? And not only that, but how will that machine then view us, having surpassed us? As an annoyance? When machines can love, will that lessen our importance and change the dynamics between us and machines? Questions that will plague us as AI develops.
 
  • #34
tsishammer: but you see, humans have sensory systems that feed into the brain, and the entire human system flows.. does a stack of bricks flow? perhaps from a philosophical standpoint, in that the adaptation the brick/wall has become accustomed to is to not respond at all. The entire point of an artificial system is the concept of fluidic motion of signals to represent a pattern (like in steve grand's book). and we're not talking about a few hundred atoms here, we are talking about:
(~100 billion neurons * atoms/neuron + ~10,000 synapses/neuron * #neurons * atoms/synapse)
That's how many atoms are in the brain, and a rough guess would most likely be
10^(25-30) atoms. Try counting that high.

As for John Searle's highly used argument: this can also be applied to humans, but because we have such big egos we do not adhere to it. That is to say, we could have cells passing information that gives this resemblance of what we call a higher cognitive state, but in all reality... it may just be a byproduct, like in Searle's argument against cognitive robotics. But as the superior beings that we are, we seem to neglect this side of the argument.

StykFacE: before i continue to argue, may i know your educational background... because nonlinear dynamics/multiagent learning is a huge part of my thinking process.
[] as for the 50% argument... show me an experiment that has lesioned the prefrontal cortex (decision making) and occipital/parietal (LIP/7a - imagery) regions and has the patient living a non-vegetative life. Also the hippocampus. And I'm not talking partial lesions.

Zantra: "Can we teach a machine to "love"? No." I'd have to disagree. Personally, emotions IMO come down to the interaction of a child and his surroundings (relatives/friends/parents), not simply to having NTs (neurotransmitters) in your system.
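As a sanity check on that order-of-magnitude guess, a quick back-of-envelope calculation lands inside the quoted 10^(25-30) range. The brain mass and average atomic mass below are rough textbook assumptions, not figures from the thread:

```python
# Back-of-envelope check of the ~10^25 to 10^30 atoms-in-the-brain guess.
# Rough assumed figures: brain mass ~1.4 kg, mostly water, so the average
# atomic mass is on the order of 6 amu (H2O is 18 amu spread over 3 atoms).
AMU_KG = 1.66e-27            # one atomic mass unit in kilograms
brain_mass_kg = 1.4
avg_atomic_mass_amu = 6.0

atoms = brain_mass_kg / (avg_atomic_mass_amu * AMU_KG)
print(f"{atoms:.1e}")        # roughly 1.4e26 atoms
```

So the guess of 10^(25-30) is consistent: a kilogram-scale, mostly-water organ works out to on the order of 10^26 atoms.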
 
  • #35
Tisthammerw said:
Here's one way to understand it. Is a brick conscious? No. How about a building? No. How about any other highly complex stacking and arrangement of bricks? Again, I'd have to say no.

The same thing could be said for atoms. For a lot of people, it just seems too implausible that a collection of atoms could somehow "feel" or be conscious. Consciousness seems to require something fundamentally different.

The complex stack of bricks is static. Nothing happens. Same dif with neurons if they were static. The point is that neurons are dynamic. That is the key: the mind, I believe, is a dynamic phenomenon independent of the substrate it finds itself in. The only example we have to date is a biological substrate, and so naturally the urge is to treat mind as a particular property of living systems. Star Trek has nothing to do with this. Frankly, get the bricks behaving in the same dynamic fashion as neurons and a brick consciousness will emerge as far as I'm concerned. Marbles too, for that matter. :smile:

Edit: I see above that Neurocomp and I got to the bricks at the same time, although I do not wish to suggest he's willing to go all the way to brick consciousness like me. :smile:
 
  • #36
saltydog: oh but i will... see, the bricks are conscious but they adapted over the years to respond to almost nothing (the exception being a battering of environmental conditions). They are smart because they show no response or emotion whatsoever, which i think is what's wrong with humans... I mean it's great to feel love/happiness, but to feel sadness/depression/failure over many years sucks.
I must correct myself though: a brick does respond, without emotion... when you etch into a brick it allows you to etch, for if it did not allow you to, you would not be able to; and when you bounce a ball toward it, it bounces it back. HEHE
 
  • #37
Computers are digital; they must do their work by manipulating one digit at a time. Brains do it sort of analog-like, but it is more complex - more like many analogs operating at the same time.

The memory is similar to a relational database, but many data are related at once. The brain can think of a complex concept in a millisecond that would take many minutes to verbalize.

With machines, we are so far from the way the brain works I doubt we can ever make them equal.

Just my two cent's worth.
 
  • #38
neurocomp2003 said:
Zantra: "Can we teach a machine to "love"? No." I'd have to disagree, Personally emotions IMO comes down to interaction of a child and his surroundings(relatives/friends/parents) not simply by having NTs in your system.

Ok, so then you would agree that by definition, love requires the interaction of a being with their surroundings and immediate friends/family in order to form a bond, correct? They must develop a sense of familiarity with those people and their surroundings. Based on that premise, do you not think an eventual machine might interact with human beings enough to form such a bond - not only love, but friendship?

Even prior to that step in AI evolution, how difficult would it be to emulate emotions to the point that you couldn't tell the difference? If you encountered a machine you didn't know was a machine, and they were able to emulate love, could you tell the difference? And if so, how?
 
  • #39
nono... lol, i take it you never read my posts. My point was that your post seemed to be biased toward biology rather than psychology. That is to say, your first paragraph takes the stance that NTs are the cause. See, the quote i cited also contained the answer which i thought was yours... which you stated was that we can not teach machines.

kublai: but we have analog-to-digital converters on every aspect of a computer, e.g. a joystick.
 
  • #40
I think one of the key things here is the hardware. Our hardware is chemical and computers as we know them right now are solid state. An incredibly sophisticated and compact system.

Now, the only way I think we could get a computer to at least emulate our behavior convincingly, is to have an immensely more powerful system. I've done some study on the theory behind quantum computers. If we could get quantum computing off the ground I believe we may have the power to emulate human behavior. But it will always be artificial IMO.

We will never create life. Whether it's in a petri dish or an electronic device. I haven't heard a convincing argument that shows that we really understand what drives life in the first place.
 
  • #41
neurocomp2003 said:
nono... lol, i take it you never read my posts. My point was that your post seemed to be biased toward biology rather than psychology. That is to say, your first paragraph takes the stance that NTs are the cause. See, the quote i cited also contained the answer which i thought was yours... which you stated was that we can not teach machines.


Sorry I misread that as part of your response. I think we are actually in agreement here.. hehe. I may have oversimplified love. It is a combination of factors, not just NT. However if we can build a machine capable of processing all of those interactions we can simulate the experience. That was my point.
 
  • #42
neurocomp2003 said:
StykFacE: before i continue to argue, may i know your educational background... because nonlinear dynamics/multiagent learning is a huge part of my thinking process.

that was an insult, neuro. i do not need to verify my educational background to speak on my beliefs... we are just debating our own viewpoints; the facts and quotes i know come from the books that I've read, and i have never claimed, nor will i, to be an expert.

now that you've confirmed you're judging my statements by my educational background, i guess my argument is over.

:wink:
 
  • #43
no, my basis is the terms you know... if you don't know some of the terms that i need to use... then i cannot explain my thoughts to you. And thus the discussion is over. You say you are unfamiliar with nonlinear dynamics... have you ever read about a program called Creatures by steve grand? Or do you know anything about child development, or anything about adaptive learning (neural nets or swarm intelligence or genetic programming)?
lastly, what do you define as consciousness? There are other threads on these forums where people define consciousness/intelligence. Check them out.
 
  • #44
neurocomp2003 said:
no, my basis is the terms you know... if you don't know some of the terms that i need to use... then i cannot explain my thoughts to you. And thus the discussion is over. You say you are unfamiliar with nonlinear dynamics... have you ever read about a program called Creatures by steve grand? Or do you know anything about child development, or anything about adaptive learning (neural nets or swarm intelligence or genetic programming)?
lastly, what do you define as consciousness? There are other threads on these forums where people define consciousness/intelligence. Check them out.
yet another insult.

lol, it's crazy that one person can have an ego towards someone because of their intelligence.

I am 22 (in 3 weeks I'm 23), in my last semester of a 4-yr degree as an Advanced AutoCAD programmer. I am as average-joe as someone could get, with the exception of a good skill in CAD design thanks to my father.

Oh, and I have a high school diploma. Does that meet your 'standards' of required education on this particular public forum? lol

sorry, i just wanted good clean debates from others. ;-)
 
  • #45
heh, you're very sensitive if you take those as insults.
 
  • #46
StykFacE said:
yet another insult.

If you feel insulted by neuro's comments or questions, that's your problem. There is nothing abusive about anything he's written. Indeed, he is simply trying to establish a common baseline for communication.
 
  • #47
neurocomp2003 said:
no my basis is on what terms you know...if you don't know some of the terms that i need to use...then i cannot explain to you my thoughts.

saltydog explained his dispute with me, it was above my head, and i simply said i know nothing about it. that's called respect for others' opinions; he didn't assume what 'terms i know', and hasn't poked at my intelligence yet.

learn from that. :approve:
 
  • #48
On another note, here's a little story not totally unrelated to the topic. Enjoy. :biggrin:

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."

"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."

"I'm not asking you, I'm telling you. These creatures are the only sentient race in that sector and they're made out of meat."

"Maybe they're like the orfolei. You know, a carbon-based intelligence that goes through a meat stage."

"Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take long. Do you have any idea what's the life span of meat?"

"Spare me. Okay, maybe they're only part meat. You know, like the weddilei. A meat head with an electron plasma brain inside."

"Nope. We thought of that, since they do have meat heads, like the weddilei. But I told you, we probed them. They're meat all the way through."

"No brain?"

"Oh, there's a brain all right. It's just that the brain is made out of meat! That's what I've been trying to tell you."

"So ... what does the thinking?"

"You're not understanding, are you? You're refusing to deal with what I'm telling you. The brain does the thinking. The meat."

"Thinking meat! You're asking me to believe in thinking meat!"

"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you beginning to get the picture or do I have to start all over?"

"Omigod. You're serious then. They're made out of meat."

"Thank you. Finally. Yes. They are indeed made out of meat. And they've been trying to get in touch with us for almost a hundred of their years."

"Omigod. So what does this meat have in mind?"

"First it wants to talk to us. Then I imagine it wants to explore the Universe, contact other sentiences, swap ideas and information. The usual."

"We're supposed to talk to meat."

"That's the idea. That's the message they're sending out by radio. 'Hello. Anyone out there. Anybody home.' That sort of thing."

"They actually do talk, then. They use words, ideas, concepts?"

"Oh, yes. Except they do it with meat."

"I thought you just told me they used radio."

"They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."

"Omigod. Singing meat. This is altogether too much. So what do you advise?"

"Officially or unofficially?"

"Both."

"Officially, we are required to contact, welcome and log in any and all sentient races or multibeings in this quadrant of the Universe, without prejudice, fear or favor. Unofficially, I advise that we erase the records and forget the whole thing."

"I was hoping you would say that."

"It seems harsh, but there is a limit. Do we really want to make contact with meat?"

"I agree one hundred percent. What's there to say? 'Hello, meat. How's it going?' But will this work? How many planets are we dealing with here?"

"Just one. They can travel to other planets in special meat containers, but they can't live on them. And being meat, they can only travel through C space. Which limits them to the speed of light and makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact."

"So we just pretend there's no one home in the Universe."

"That's it."

"Cruel. But you said it yourself, who wants to meet meat? And the ones who have been aboard our vessels, the ones you probed? You're sure they won't remember?"

"They'll be considered crackpots if they do. We went into their heads and smoothed out their meat so that we're just a dream to them."

"A dream to meat! How strangely appropriate, that we should be meat's dream."

"And we marked the entire sector unoccupied."

"Good. Agreed, officially and unofficially. Case closed. Any others? Anyone interesting on that side of the galaxy?"

"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotations ago, wants to be friendly again."

"They always come around."

"And why not? Imagine how unbearably, how unutterably cold the Universe would be if one were all alone ..."

the end

http://www.terrybisson.com/meat.html
 
  • #49
deckart said:
I haven't heard a convincing argument that shows that we really understand what drives life in the first place.

Consider the Lorenz Attractor. You know, that owl-eyes icon of Chaos Theory? That's a dynamic system with three degrees of freedom that I believe can serve as a metaphor for the motor of life.

The Lorenz Attractor is stable: trajectories within the attractor remain there. Surrounding the attractor is a basin of attraction; nearby points in the basin are pulled onto the attractor by the dynamics of the system. If a trajectory is perturbed to a point outside the basin, it does not return to the attractor. However, it may now lie in a new basin of attraction and so be drawn to a new attractor. René Thom uses this to describe change in nature:

"All creation or destruction of form or morphogenesis, can be described by the disappearance of the attractors representing the initial forms and their replacement by capture by the attractors representing the final form."

There is something else though about the attractor: it is densely filled, yet trajectories NEVER cross (solutions of the system are unique). It is in fact a fractal with an infinitely nested structure. Each point is distinct from all the others.

The Lorenz system is a simple example with 3 degrees of freedom. What might a system with a large number of degrees of freedom exhibit, in a world where the degrees of freedom are increased by the attractors themselves, in the same way that the Lorenz Attractor generates a diverse set of points?

I could imagine such a world of attractors pushed to increasingly higher-dimensional ones as the attractors themselves generate an increasing number of degrees of freedom, in a self-reinforcing act we mistakenly interpret as the evolution of life from simple to complex.
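The two properties described above can be seen numerically. Here is a minimal Python sketch (my own illustration, not from the original post), using plain Euler integration with the classic Lorenz parameters sigma=10, rho=28, beta=8/3: two trajectories started a hair apart stay bounded on the attractor, yet their separation grows by many orders of magnitude.

```python
def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz equations one Euler step (classic parameters assumed)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return (x + dx * dt, y + dy * dt, z + dz * dt)

def distance(a, b):
    """Euclidean distance between two states."""
    return sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5

# Two trajectories starting a tiny distance (1e-8) apart.
a = (1.0, 1.0, 1.0)
b = (1.0, 1.0, 1.0 + 1e-8)

max_sep = 0.0
for _ in range(50_000):  # 50 simulated time units
    a = lorenz_step(a)
    b = lorenz_step(b)
    max_sep = max(max_sep, distance(a, b))

# The separation blows up (sensitive dependence), while both states
# remain on the bounded attractor.
print(f"initial separation: 1e-08, max separation: {max_sep:.2f}")
```

With a Lyapunov exponent of roughly 0.9, the 1e-8 gap saturates at the diameter of the attractor long before the run ends, which is exactly the "nearby points are pulled in, yet every point is distinct" behavior described above.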
 
  • #50
Tom Mattson said:
If you feel insulted by neuro's comments or questions, that's your problem. There is nothing abusive about anything he's written. Indeed, he is simply trying to establish a common baseline for communication.

of course it's my problem... lol. I was insulted, and there's nothing I can do, really. And he wasn't trying to establish common ground; he was challenging my education against his own.

;-)
 
