Measuring Artificial Intelligence

AI Thread Summary
There is currently no standardized method to measure the competency levels of AI systems, which complicates the evaluation of their capabilities compared to human intelligence. The discussion outlines a competency framework for AI, with levels ranging from basic knowledge retrieval to advanced generalization and problem-solving akin to human scientists. Presently, most AI systems are assessed at level AI1.2, with ongoing developments aimed at achieving AI2.2, where machines can draw conclusions from patterns. The conversation highlights the importance of understanding the context and limitations of AI, particularly in comparison to human cognitive abilities, emphasizing that while AI can process information rapidly, it lacks the adaptability and experiential learning inherent to humans. Ultimately, the debate underscores the need for clearer definitions and metrics in assessing AI competency.
hasilm
Homework Statement
How to measure the competency of an AI-enabled device
Relevant Equations
Human competency
As we make products that are Artificial Intelligence (AI) enabled, there is no method to mark their AI competency levels. Though we find some correlation between the competency levels of humans and machines, there is no method available for judging them. This post tries to explain the competency levels of humans, describes ways to measure competency, and gives a table of AI competency levels that can be applied to systems in general.

Competency Levels:
Each higher level is a superset of the levels below it.

[AI version : Competency Description]
AI1.1 : Able to get knowledge from its own repository
AI1.1.1 : Process knowledge.
AI1.1.2 : Apply the knowledge to find solutions
AI1.2 : Gather knowledge from external sources
AI1.2.1 : Process knowledge; processing knowledge from an external source requires a different kind of competency than AI1.1.1
AI1.2.2 : Apply knowledge gathered from external sources to find solutions; this corresponds to a mature human
AI2.1 : Understanding patterns
AI2.2 : Making conclusions out of patterns
AI2.3 : Experiment with conclusions
AI3.1 : Generalize patterns, finding solutions without being fed them
AI4.1 : Superset of generalization, finding solutions by combining different patterns. This is the level of scientists or inventors.

In my assessment, all AI machines are at level AI1.2 at this time, and current development aims to make machines achieve AI2.2. Beyond that, if a system reaches AI3.1, it has reached a point where it can make decisions as we do.

Generally speaking, for a machine at AI1.2.2, say, we can also have three variants to depict the time factor: AI1.2.2.1, AI1.2.2.2, and AI1.2.2.3, specifying whether the machine takes a longer, medium, or faster time.
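
As a rough illustration only, here is a minimal sketch in Python of how such a scale and its time-factor variants might be encoded; the names (Speed, CompetencyRating, LEVELS) are invented for this post and are not any standard:

from dataclasses import dataclass
from enum import Enum

class Speed(Enum):
    # Time-factor variants: .1 = longer, .2 = medium, .3 = faster
    LONGER = 1
    MEDIUM = 2
    FASTER = 3

# Ordered scale; each higher level is a superset of the levels below it.
LEVELS = ["AI1.1", "AI1.1.1", "AI1.1.2", "AI1.2", "AI1.2.1", "AI1.2.2",
          "AI2.1", "AI2.2", "AI2.3", "AI3.1", "AI4.1"]

@dataclass
class CompetencyRating:
    level: str    # e.g. "AI1.2.2"
    speed: Speed  # appended as a fourth index, e.g. AI1.2.2.3

    def label(self) -> str:
        return f"{self.level}.{self.speed.value}"

    def covers(self, other: "CompetencyRating") -> bool:
        # Superset property: a higher position on the scale covers lower ones.
        return LEVELS.index(self.level) >= LEVELS.index(other.level)

print(CompetencyRating("AI1.2.2", Speed.FASTER).label())  # AI1.2.2.3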

AI3.1 is the normal state for humans, but for machines AI3.1 is the level at which they start thinking like humans.
This level, AI3.1, is where ChatGPT's competency stands at present. The advantage for machines here is the extensive information database available to them versus the memory constraints of humans.

AI4.1 is, for humans, the ultimate competency level. But for machines, AI4.1 is the level at which they can overpower humans.
 
I think there is considerable vagueness in your terms and some lack of knowledge on your part. For example:

AI2.1 : Understanding patterns
AI2.2 : Making conclusions out of patterns

You seem to think those are beyond current AI models, and yet there are well-documented cases of AI systems drawing conclusions based on patterns in a way that is measurably superior to human ability.

As just one example:

Medical Imaging: In 2020, a study published in "Nature" demonstrated that an AI model developed by Google Health outperformed human radiologists in detecting breast cancer in mammograms. The AI achieved an 89% accuracy rate compared to 73% for human experts, showcasing its ability to identify subtle patterns in the images that may be missed by human eyes.
 
  • Like
Likes FactChecker
"Competency" is a very vague term. In many applications, the cost/benefit of different types of errors is critical. We do not usually expect perfection from an AI system, at any level.
 
Thread closed for Moderation...
 
Thread is reopened provisionally. Thanks everybody for your patience.

hasilm said:
Homework Statement: How to measure the competency of an AI-enabled device
Relevant Equations: Human competency

As we make products that are Artificial Intelligence (AI) enabled, there is no method to mark their AI competency levels.
What class is this assignment for? What year of high school or university is it for?
 
phinds said:
I think there is considerable vagueness in your terms and some lack of knowledge on your part. For example:

AI2.1 : Understanding patterns
AI2.2 : Making conclusions out of patterns

You seem to think those are beyond current AI models, and yet there are well-documented cases of AI systems drawing conclusions based on patterns in a way that is measurably superior to human ability.

As just one example:

Medical Imaging: In 2020, a study published in "Nature" demonstrated that an AI model developed by Google Health outperformed human radiologists in detecting breast cancer in mammograms. The AI achieved an 89% accuracy rate compared to 73% for human experts, showcasing its ability to identify subtle patterns in the images that may be missed by human eyes.
Where did I say those are beyond current AI capability? That's why I rate ChatGPT at a competency level of AI3.1.
 
FactChecker said:
"Competency" is a very vague term. In many applications, the cost/benefit of different types of errors is critical. We do not usually expect perfection from an AI system, at any level.
Definitely, AI is far from perfection. However, the capability of any entity can be judged from the perfection and efficiency of its output. Since AI is an emulation of human intelligence, we can mark its competency by how fast and how perfectly a solution is reached. This is what I am trying to do here.
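
One illustrative way to fold correctness and speed into a single number; the formula and the reference time are my own arbitrary choices, not a standard metric:

def competency_score(accuracy, seconds, reference_seconds=60.0):
    # Accuracy in [0, 1], discounted by time taken: a solution that
    # takes as long as the reference gets half weight.
    speed_factor = reference_seconds / (reference_seconds + seconds)
    return accuracy * speed_factor

print(competency_score(0.90, 30.0))   # 0.60  - fast and mostly correct
print(competency_score(0.95, 300.0))  # ~0.16 - more correct, much slower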
 
berkeman said:
Thread is reopened provisionally.
berkeman said:
What class is this assignment for? What year of high school or university is it for?
Please respond to my questions. Thank you.
 
Last edited:
hasilm said:
Homework Statement: How to measure the competency of an AI-enabled device
Relevant Equations: Human competency

Able to get knowledge from its own repository


Here is the crux of my problem with your attempt. There is tacitly an assumption (number it as you wish) that we can provide inspected, grade-A, approved "knowledge" to teach our darling AI. My high school stage encounter (I was Emile de Becque) with Messrs. Rodgers and Hammerstein taught me this:

[embedded video]

This seems to me the most important opportunity (for either good or ungood) presented by AI and I see it nowhere in your formulation. Who adjudicates the seed "knowledge" and how? Or do we just keep piling it on higher and deeper?
 
  • #10
hasilm said:
Homework Statement: How to measure the competency of an AI-enabled device
Relevant Equations: Human competency

[...]

AI4.1 is, for humans, the ultimate competency level. But for machines, AI4.1 is the level at which they can overpower humans.
Whether AI spits out the correct answer or not, the human brain has the ability to judge itself and make corrections at will. Part of having any competency is the ability to recognise an error and set out to correct it. AI has no way to recognise the limits of its own options for investigation. We can leave the room and ponder; the machine is isolated, without any contrast of possibilities.
 
  • Skeptical
Likes phinds and PeroK
  • #11
Ian Telcher said:
Whether AI spits out the correct answer or not, the human brain has the ability to judge itself and make corrections at will. Part of having any competency is the ability to recognise an error and set out to correct it. AI has no way to recognise the limits of its own options for investigation. We can leave the room and ponder; the machine is isolated, without any contrast of possibilities.
One could make the case that it's more like the opposite. It's hard for many humans to ever admit an error. AI has the potential to be more objective.

When I make a mistake on these forums I try to admit it. Nevertheless, I confess it's an unpleasant feeling.

There are some posters here who never under any circumstances admit a mistake. It appears to be a matter of principle.

And that's with essentially objective questions like physics and maths.
 
  • Like
Likes Ian Telcher and FactChecker
  • #12
PeroK said:
One could make the case that it's more like the opposite. It's hard for many humans to ever admit an error. AI has the potential to be more objective.
Well, let us look at it like this: if you ask AI to tell you the best way to accomplish a task, it may well do so, but if you ask a person with knowledge of the task, you will receive additional information regarding things like safety and other factors associated with the task. This is true competency; only eyes and ears can know what the computer cannot.
 
  • #13
PeroK said:
One could make the case that it's more like the opposite. It's hard for many humans to ever admit an error. AI has the potential to be more objective.

When I make a mistake on these forums I try to admit it. Nevertheless, I confess it's an unpleasant feeling.

There are some posters here who never under any circumstances admit a mistake. It appears to be a matter of principle.

And that's with essentially objective questions like physics and maths.
I think the only thing AI will have over the human brain is speed, and speed alone is not the priority of intelligence.
 
  • #14
Ian Telcher said:
I think the only thing AI will have over the human brain is speed, and speed alone is not the priority of intelligence.
The field and the possibilities are huge. It could also have advantages in memory size, and a huge amount of training, and no prejudice, and no forgetting, and (in some applications) data communications with other machines with other sensors, and accurate probability calculations (including Bayesian), and Kalman filtering of inputs, and sophisticated optimization techniques, etc., etc.
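
As a concrete taste of the last two items, here is a minimal one-dimensional Kalman filter, with noise parameters chosen arbitrarily for illustration:

def kalman_1d(measurements, process_var=1e-3, meas_var=0.25):
    # Estimate a roughly constant quantity from noisy readings.
    x, p = measurements[0], 1.0  # initial estimate and its variance
    estimates = [x]
    for z in measurements[1:]:
        p += process_var          # predict: uncertainty grows a little
        k = p / (p + meas_var)    # Kalman gain: trust in the new reading
        x += k * (z - x)          # update the estimate toward measurement z
        p *= 1 - k                # uncertainty shrinks after the update
        estimates.append(x)
    return estimates

print(kalman_1d([1.2, 0.9, 1.1, 1.4, 0.8, 1.0]))  # smooths toward ~1.0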
 
  • #15
Ian Telcher said:
I think the only thing AI will have over the human brain is speed, and speed alone is not the priority of intelligence.
You're making my point to some extent by insisting for possibly emotional reasons that what you believe must be right. And, indeed, I assume it would be practically impossible to get you to change your mind.

And if you say the same about me, then that doubly makes the point!
 
  • #16
PeroK said:
You're making my point to some extent by insisting for possibly emotional reasons that what you believe must be right. And, indeed, I assume it would be practically impossible to get you to change your mind.

And if you say the same about me, then that doubly makes the point!
Then I must be wrong and you are correct, even if I am just missing your point of view. My point was that a machine will always be just a machine. The human brain is adaptable and able to discover new evidence with the aid of seeing and hearing, and as I pointed out, AI will never have this ability: it lives in a box while we live in reality, and it is reality that physically relates to learning and discovering. It takes a fully conscious mind to know why it is searching without knowing what it is searching for. We are on the lookout for answers. A computer only has a database; it can't look out.
 
  • #17
FactChecker said:
The field and the possibilities are huge. It could also have advantages in memory size, and a huge amount of training, and no prejudice, and no forgetting, and (in some applications) data communications with other machines with other sensors, and accurate probability calculations (including Bayesian), and Kalman filtering of inputs, and sophisticated optimization techniques, etc., etc.
All this relates to speed, not discovery. We can discover new things, but a computer is only full of itself.
 
  • #18
Ian Telcher said:
All this relates to speed, not discovery. We can discover new things, but a computer is only full of itself.
You can have a whole swarm of drones sharing information and coordinating with each other in microseconds. That is more than single-computer speed. It is situational awareness. Many other aspects are more than simple speed. Even a super-fast human brain could not handle the amount of information, could not do the calculations or even know the algorithms.
 
  • #19
Ian Telcher said:
We can discover new things, but a computer is only full of itself.
We have sent unmanned spacecraft to discover new things on every planet in the solar system and beyond. We have had to control them with commands that take a very long time to get there and more time to get the results back to us. If they had more artificial intelligence, they could do a lot more.
 
  • #20
FactChecker said:
You can have a whole swarm of drones sharing information and coordinating with each other in microseconds. That is more than single-computer speed. It is situational awareness. Many other aspects are more than simple speed. Even a super-fast human brain could not handle the amount of information, could not do the calculations or even know the algorithms.
You use the word self, and there is no self in a computer. The intelligence you refer to is nothing but ones and zeros. Self is defined as a characteristic of the mind; it starts in the mind and stays there.
 
  • #21
FactChecker said:
We have sent unmanned spacecraft to discover new things on every planet in the solar system and beyond. We have had to control them with commands that take a very long time to get there and more time to get the results back to us. If they had more artificial intelligence, they could do a lot more.
But they never will.
 
  • #22
Ian Telcher said:
You use the word self, and there is no self in a computer. The intelligence you refer to is nothing but ones and zeros. Self is defined as a characteristic of the mind; it starts in the mind and stays there.
This is a science forum. Unsubstantiated opinions do not contribute to the debate.
 
  • #23
PeroK said:
This is a science forum. Unsubstantiated opinions do not contribute to the debate.
[Image: "what he said"]


@Ian Telcher, you have, throughout this thread, expressed an obviously strongly, and sincerely, held belief that humans will always be superior to AI. You are not alone in this belief, BUT ... it is a belief, an opinion, and it is NOT shared by all and yet throughout you have expressed this opinion categorically, as though it were fact. It is not, and as @PeroK pointed out, this is a science forum. You should preface such beliefs with something like "I believe that ... " or even "I am sure that ... " not state your opinions as fact.

EDIT: by the way, I think we all (or most of us, including myself) do this from time to time, but you have done it throughout this thread, which is why I mentioned it.
 
Last edited:
  • #24
phinds said:
View attachment 360438

@Ian Telcher, you have, throughout this thread, expressed an obviously strongly, and sincerely, held belief that humans will always be superior to AI. You are not alone in this belief, BUT ... it is a belief, an opinion, and it is NOT shared by all and yet throughout you have expressed this opinion categorically, as though it were fact. It is not, and as @PeroK pointed out, this is a science forum. You should preface such beliefs with something like "I believe that ... " or even "I am sure that ... " not state your opinions as fact.

EDIT: by the way, I think we all (or most of us, including myself) do this from time to time, but you have done it throughout this thread, which is why I mentioned it.
I would be dishonest to myself if I considered the possibility that AI can or will be superior to the human brain. I believe that what makes biological organisms non-replicable is a system on a level that no computer can ever match. If you think a computer can be capable of emotions, dreams, pain or any other trait of life, then I would agree with the possibility, but I can't see any of this type of system operating on such a level. I think it would be more likely that dogs will evolve to speak human language.
 
  • #25
Ian Telcher said:
I would be dishonest to myself if I considered the possibility that AI can or will be superior to the human brain. I believe that what makes biological organisms non-replicable is a system on a level that no computer can ever match. If you think a computer can be capable of emotions, dreams, pain or any other trait of life, then I would agree with the possibility, but I can't see any of this type of system operating on such a level. I think it would be more likely that dogs will evolve to speak human language.
I was not disputing your belief; I was saying that you should be clear that you understand that it IS a belief, and not state it as a fact, because regardless of how strongly held your opinion is, it IS still just an opinion.
 
  • #26
Ian Telcher said:
I would be dishonest to myself if I considered the possibility that AI can or will be superior to the human brain.
"superior" is a very broad demand. My statements are only about the advantages of AI in specific tasks. There is no doubt that automated machines perform better than humans in several tasks. The boundary between dumb automation versus artificial intelligence is another issue. I have noticed that many photos and videos that used to be called "Photoshopped" are now called AI. On the other hand, many AI videos on YouTube are realistic enough that I am often fooled.
Ian Telcher said:
I believe that what makes biological organisms non-replicable is a system on a level that no computer can ever match.
Never is a long time.
Ian Telcher said:
If you think a computer can be capable of emotions, dreams, pain or any other trait of life, then I would agree with the possibility, but I can't see any of this type of system operating on such a level. I think it would be more likely that dogs will evolve to speak human language.
Real evolution can take place over millions (billions?) of years. My mind can not grasp what is, or is not, possible over those time spans.
 
  • Like
Likes nsaspook, phinds and PeroK
  • #27
Ian Telcher said:
You use the word self, and there is no self in a computer. The intelligence you refer to is nothing but ones and zeros. Self is defined as a characteristic of the mind; it starts in the mind and stays there.
"Self" is easy. "Conscious" is not. And they are very different.
People have a built-in illusion of self. It's what they want to survive. It is what they view as the agent of their deliberate actions. It is what society interacts with and holds accountable. It is a very functional concept and can be coded into software to serve the same purposes for the host system.

When I consider "consciousness", I ask questions such as "can you be conscious without being conscious of anything?"; "if, in a moment, you are conscious of something, what is the minimal number of bits it would take to designate that something?". It is a kind of information processing that von Neumann machines do not implement and cannot reproduce. ... And if they did, it would be a bug and I would fix it.
 
