Latest Notable AI accomplishments

AI Thread Summary
Recent advancements in AI and machine learning are notable, particularly in areas like pattern recognition and image analysis across various fields such as medicine and astronomy. Both Alibaba's AI and Microsoft's systems have recently surpassed human performance in the Stanford Question Answering Dataset, highlighting significant progress in natural language processing. The discussion emphasizes that while many overestimate the timeline for AI to transform society, the reality is that AI's evolution is complex and ongoing, with applications already impacting job markets. There is concern regarding the implications of AI in autonomous systems, especially in military contexts, where ethical considerations are paramount. Overall, the conversation reflects a growing awareness of AI's capabilities and its potential to reshape various aspects of life and work.
gleem
Developments in AI and machine learning continue to progress at a rate that may be surprising, and not just in game playing but in pattern recognition and image analysis in medicine, astronomy, and business. One of the remaining challenges for AI is effective natural language processing. Last week both Alibaba's and Microsoft's systems outperformed humans on the Stanford Question Answering Dataset (SQuAD), a test requiring answers to more than 100,000 questions based on over 500 Wikipedia articles.

https://www.bloomberg.com/news/arti...outgunned-humans-in-key-stanford-reading-test
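For anyone who wants to play with this kind of reading-comprehension task, here is a minimal sketch using the open-source Hugging Face transformers library and a small publicly available model fine-tuned on SQuAD. To be clear, this is not the Alibaba or Microsoft system from the article; the model name is just one example checkpoint, and the context passage is made up for illustration.

```python
# Minimal SQuAD-style question answering with the Hugging Face transformers library.
# Requires `pip install transformers torch`. The model below is one publicly
# available checkpoint fine-tuned on SQuAD, not the systems discussed above.
from transformers import pipeline

qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

context = (
    "The Stanford Question Answering Dataset (SQuAD) is a reading comprehension "
    "benchmark consisting of questions posed on a set of Wikipedia articles, "
    "where the answer to every question is a span of text from the passage."
)

result = qa(question="What are SQuAD questions based on?", context=context)
print(result["answer"], f"(confidence {result['score']:.2f})")
```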

With so many people working on so many different applications, with so much to gain, I believe we will be witnessing other eyebrow-raising developments in the next few years that will significantly change minds about the timeline of AI's progress.
 
I always find that people unfamiliar with the field dramatically overestimate the time it will take for AI to completely overtake our society in the same way that industrialization did. Geometric progression is too complex for most people to fully comprehend. Although we fell behind Moore's law quite a few years ago, we're still progressing geometrically. Such huge advances in AI are not eyebrow-raising to most who understand the field.

Humans are really good at picking out obvious patterns in data, but we're very poor at separating subtle patterns from large amounts of noise. Neural networks specialize in exactly that.
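A toy sketch of that point (just an illustration, not a claim about any particular system): below, the label depends only on a weak XOR-style interaction between two features buried among pure-noise features. A linear classifier hovers near chance while a small neural network picks the pattern out.

```python
# Toy demo: a subtle (XOR-like) pattern hidden among noisy features.
# A linear classifier stays near 50%, while a small MLP finds the structure.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 4000
X = rng.normal(size=(n, 20))                     # 20 features, mostly pure noise
y = ((X[:, 0] > 0) ^ (X[:, 1] > 0)).astype(int)  # label depends only on an interaction

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

linear = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0).fit(X_tr, y_tr)

print("linear accuracy:", round(linear.score(X_te, y_te), 2))  # near 0.5 (chance)
print("MLP accuracy:   ", round(mlp.score(X_te, y_te), 2))     # typically well above 0.9
```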
 
newjerseyrunner said:
I always find that people unfamiliar with the field dramatically overestimate the time it will take for AI to completely overtake our society in the same way that industrialization did.

When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.

Cheers
 
Remember that AI research began in earnest in the 1950s. Take Allen Newell et al.'s paper on creative thinking (https://www.rand.org/content/dam/rand/pubs/papers/2008/P1320.pdf), in which he discusses some of his early investigations into AI, including the "Logic Theorist", which could prove many of the theorems in Chapter 2 of Russell and Whitehead's "Principia Mathematica" and even found a proof for one theorem that was shorter and more elegant than the authors'. Consider some of the hardware available at the time, like the ILLIAC. Things looked promising even then (just five or six more years to go), but it goes to show the magnitude of the problem: 60 years later we are only beginning to see a light at the end of the tunnel.
 
This is quite old (2009), and I am not sure whether it has been discussed previously, but I think it is worth mentioning here, at least for the issues AI raises in its contribution to our endeavors. An AI program (see https://www.wired.com/2009/04/Newtonai/) "discovered" basic laws of physics by sifting through dynamic motion data, knowing nothing of the underlying processes and armed only with basic arithmetic and algebraic operations. The program eventually produced mathematical relationships (laws of physics) that describe the behavior of the data, in this case including Lagrangians and Hamiltonians. The issue is this: when an AI program is applied to a complex data set and produces a "law", or says "look what I found", that describes the behavior of the data, how do we begin to understand the processes that give rise to this law? Probably in a similar way to how we analyze regularities in data we observe ourselves, although it might be quite difficult if the understanding involves a new concept or hypothesis.
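The program described in the article reportedly used genetic programming to search a space of symbolic expressions; the sketch below is only a stripped-down illustration of the same idea of mining raw motion data for an invariant. It simulates a harmonic oscillator and looks for a combination of simple candidate terms whose value stays constant along the trajectory, which turns out to be the energy-like quantity v² + ω²x².

```python
# Toy "law discovery": look for a combination of candidate terms that stays
# constant along a simulated trajectory. This is not the genetic-programming
# search described in the article, just an illustration of hunting for
# invariants in raw motion data.
import numpy as np

omega = 2.0
t = np.linspace(0, 20, 2000)
x = np.cos(omega * t)            # position of a simple harmonic oscillator
v = -omega * np.sin(omega * t)   # velocity

# Library of candidate terms built only from the observed x and v.
names = ["x", "v", "x^2", "v^2", "x*v"]
terms = np.stack([x, v, x**2, v**2, x * v], axis=1)

# A conserved quantity is a direction in term-space with (near) zero variance
# over time: take the eigenvector of the covariance with the smallest eigenvalue.
cov = np.cov(terms, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
invariant = eigvecs[:, 0]

for name, c in zip(names, invariant):
    if abs(c) > 1e-3:
        print(f"{c:+.3f} * {name}")
# Only x^2 and v^2 survive, with coefficient ratio ~ omega^2 : 1 -- the program
# has "found" the conserved energy of the system from the data alone.
```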
 
newjerseyrunner said:
I always find that people unfamiliar with the field dramatically overestimate the time it will take for AI to completely overtake our society in the same way that industrialization did.
I know I'm late, but what does "completely overtake our society" mean?
 
russ_watters said:
I know I'm late, but what does "completely overtake our society" mean?
Open ended. It just means the society after the singularity will be radically different from the one before it. Electricity, for example, has completely taken over our society. Some people here may have had grandparents who lived in a world without power, and some of us have been to places like Lancaster, PA, but for the most part our society is entirely dependent on our electrical infrastructure. Most of our families have had at least three generations living with it; I imagine AI will be so prevalent that, by three generations after the singularity, it will be almost impossible to imagine a world without AI doing work for us.
 
newjerseyrunner said:
Open ended. It just means the society after the singularity will be radically different from the one before it. Electricity, for example, has completely taken over our society. Some people here may have had grandparents who lived in a world without power, and some of us have been to places like Lancaster, PA, but for the most part our society is entirely dependent on our electrical infrastructure. Most of our families have had at least three generations living with it; I imagine AI will be so prevalent that, by three generations after the singularity, it will be almost impossible to imagine a world without AI doing work for us.
Ok, fair enough, but I was more interested in *how*, as in, what is it going to do that would be game changing?
 
  • #10
Changing what people do for a living.

Before the industrial revolution, it was mainly "working on a farm" or some variant of that. Afterwards it was much more diverse.
Computers changed most jobs, and together with the internet they changed society a lot even beyond that.
AI will replace various jobs, which means many people will have to find new jobs.
 
  • #11
mfb said:
Changing what people do for a living.

Before the industrial revolution, it was mainly "working on a farm" or some variant of that. Afterwards it was much more diverse.
Computers changed most jobs, and together with the internet they changed society a lot even beyond that.
AI will replace various jobs, which means many people will have to find new jobs.
Any idea which jobs? And how fast?

What I'm getting at here is related to what I discussed in this post:
https://www.physicsforums.com/threa...anies-will-i-become-rich.937847/#post-5928916

I feel like AI "alarmists", for lack of a better word, think of the "singularity" as a single event that will suddenly change a lot of things, rapidly altering the human labor landscape. But they don't, that I've seen, articulate the what or how. It's all very vague from what I have seen. And I think the reason why is that "AI" is a poorly defined concept, and the types of things that AI might do are just everything that computers do, evolving over the past 50 years and simply continuing to evolve. As such, I don't see the potential for a singular or short-term disruptive event.

"Terminator" was literally the flip of an AI switch that destroyed the world. But AI isn't needed for that. A decent chess-playing computer program could destroy the world if programmed badly and connected to nukes, a la "War Games". I feel like people over-estimate the "profoundness" of what AI could be and discount what computers already are.

Thoughts?
 
  • #13
russ_watters said:
Any idea which jobs? And how fast?
We had this discussion a while ago, I don't want to repeat it here.
russ_watters said:
I feel like AI "alarmists", for lack of a better word, think of the "singularity" as a single event that will suddenly change a lot of things, rapidly altering the human labor landscape. But they don't, that I've seen, articulate the what or how. It's all very vague from what I have seen. And I think the reason why is that "AI" is a poorly defined concept, and the types of things that AI might do are just everything that computers do, evolving over the past 50 years and simply continuing to evolve. As such, I don't see the potential for a singular or short-term disruptive event.
You are mixing two different concepts here. The singularity is a possible future event that would be - by definition - a single event that suddenly changes a lot of things. But that is not what we were discussing before. The change in society from AI comes earlier - it has started already, with a few jobs getting replaced or transformed notably. It will become more important in the coming decades as computers become able to do more and more of the tasks humans do today. This is not what the (possible) singularity is about. That would be an AI that improves itself so much that it vastly exceeds the intelligence of all humans.
 
  • #14
mfb said:
The singularity is a possible future event that would be - by definition - a single event that suddenly changes a lot of things.

And, importantly, it changes things in ways that are very difficult to predict. It is like projecting a tight bundle of trajectories right past a singularity point and watching them fill up most of the state space.

Also, to me the singularity (in AI) does not have to be a single (localized) event as such, but more like a narrow passage of time where, from afar, we are unable to predict how we emerge on the other side (with the possible scenarios limited only by physical laws, thus ranging from near human extinction to near human nirvana), and where, close up, things change too fast for us to control. In short, in such an event we fly blind and without control.
 
  • #15
The singularity assumes the creation of a strong AI, something no one has come close to doing.
 
  • #16
russ_watters said:
Any idea which jobs? And how fast?

AI in some form has been with us for years, and only now are most people becoming aware of it. It is only in the last few years that the general population has begun to appreciate its possible impact. As far as which jobs will be replaced, you do not have to be specific. Any job not involving dexterity (unless the system is designed for that quality, or for new or unique situations) that can be reduced to a series of operations or rules, no matter how complex, based on some delineated situation or data set will be replaced by AI in the near term. How fast? Much of AI today is mission-specific and very focused, so as fast as the systems can be programmed and marketed.

I think one of the reasons that AI hasn't progressed faster is its difficulty in interacting with humans or its environment. As I noted in the OP, AI programs can now read and answer questions about what they read. Since AI can understand spoken language, it can now share information directly with humans in a human fashion. With a large knowledge base, one would think AI devices will soon be able to carry on a meaningful conversation with people.

I think it is possible now for a factory to be run almost entirely by robots, from the receiving to the shipping department, including business transactions, ordering material, inventory control, and perhaps some maintenance and repairs. A current controversy in AI is autonomous vehicles, a problem an order of magnitude more difficult than robots used in manufacturing. Here again is the issue of interacting in a meaningful way with the environment, and it is happening. Of particular concern is the use of AI in the military, especially in autonomous weapons, meaning armed drones from the MQ-1B Predator to the Perdix micro-UAV swarm. International law requires humans to make kill decisions, but the utility of the microdrone really depends on its determining the targets, and that is quite bothersome. It is said that perhaps the greatest issue with AI is how we humans use it.
 
  • #17
My intent in initiating this thread was to concentrate on the development of the capabilities of AI and its various forms of implementation. Anyway, I think another notable improvement in AI implementation is the use of memristors in a new type of neural network system, called reservoir computing, by University of Michigan researchers.

https://www.sciencedaily.com/releases/2017/12/171222090313.htm

This will apparently make AI processors smaller, allow significantly faster learning, and allow processing such as speech recognition, in which the use of a word may depend on context. It is claimed that it can predict the next word in a conversation before it is spoken.
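For anyone curious what "reservoir computing" means in software terms, here is a minimal echo state network in plain NumPy: a fixed random recurrent "reservoir" is driven by the input, and only a linear readout is trained, here to predict the next sample of a signal rather than the next word. The Michigan work implements the reservoir in memristor hardware; this is just the textbook software analogue.

```python
# Minimal echo state network (reservoir computing) predicting the next sample
# of a signal. The reservoir weights are fixed and random; only the linear
# readout is trained - the key idea behind reservoir computing.
import numpy as np

rng = np.random.default_rng(1)
u = np.sin(0.2 * np.arange(3000)) * np.cos(0.031 * np.arange(3000))  # input signal

n_res = 200
W_in = rng.uniform(-0.5, 0.5, size=n_res)
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

# Drive the reservoir and collect its states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for t in range(len(u) - 1):
    x = np.tanh(W @ x + W_in * u[t])
    states[t] = x

# Train a ridge-regression readout to map state(t) -> u(t+1), using early steps
# for training and the rest for testing.
targets = u[1:]                            # next-sample targets aligned with states
train = slice(100, 2000)                   # skip an initial washout period
test = slice(2000, len(u) - 1)
A = states[train]
ridge = 1e-6 * np.eye(n_res)
w_out = np.linalg.solve(A.T @ A + ridge, A.T @ targets[train])

pred = states[test] @ w_out
print("test RMSE:", np.sqrt(np.mean((pred - targets[test]) ** 2)))
```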

Filip Larsen said:
Also, to me the singularity (in AI) does not have to be a single (localized) event as such, but more like a narrow passage of time where, from afar, we are unable to predict how we emerge on the other side (with the possible scenarios limited only by physical laws, thus ranging from near human extinction to near human nirvana), and where, close up, things change too fast for us to control. In short, in such an event we fly blind and without control.

Yes, like the smartphone or the internet. And that is a problem with AI: we see it as we would like it to be, and as we think it should be, not as it will become. Did the innovators of the internet consider the potential for its use to facilitate international terrorism and other nefarious activities?
 
  • #18
gleem said:
It is claimed that it can predict the next word in a conversation before it is spoken.
How often is it right? My phone tries to do that with typed text, but it doesn't do a good job so far (although it is much better than random guessing).
 
  • #19
mfb said:
How often is it right? My phone tries to do that with typed text, but it doesn't do a good job so far (although it is much better than random guessing).

The claim apparently was made in an interview with the author and not in the published article, so I expect future publications to present specific information. Stay tuned.
 
  • #20
It is obvious that we have been overoptimistic about when strong AI would come. But in overestimating this, we also overestimated the passion and excitement of most researchers. Indeed, reality is very different. Many researchers won't spend most of their lives working on something that seems like pure speculation. Other researchers have stayed in machine learning just for the money (strong-AI researchers will probably not make much money, whereas machine learning experts get astonishing salaries).

OK, AI is not as easy as the public might think at first glance. But come on, it is getting more and more academic and media attention. A bright future awaits.
PS: nah, just joking, machines have already taken over our society and we cannot even notice it.
 
  • #21
I just felt I had something to add to this thread. Biological neural networks are massively parallel, but computer CPUs are not. You can have a computer with a few CPUs running in parallel, but you're still at a huge disadvantage compared to a biological brain when you're trying to run a neural network on silicon. There are such things as "artificial neurons", but putting several billion of them together into an artificial brain is not going to happen ...

Eventually we'll be able to run human-sized brains on silicon, but even with the most modern technology we're at something like 11 billion connections compared to ~100 trillion: https://www.popsci.com/science/article/2013-06/stanfords-artificial-neural-network-biggest-ever

It would probably be easier to create artificially intelligent biorobots ... but silicon is a little more resilient maybe?
 
  • #22
A house mouse has ~100 billion synapses, and a honey bee just 1 billion. But these artificial networks haven't reached the intelligence of a honey bee yet. Recognizing cats with thousands of training samples is nice, but ultimately not the main task of brains in animals.
 
  • #23
AI assistants: while not earth-shaking, this AI application is a step toward integrating AI more seamlessly into our daily lives, while at the same time introducing unexpected concerns.

Critics of AI implementations have argued that "chatbot" applications are too unhumanlike to replace humans where conversation is the primary mode of interaction, as in retail sales for example. Google and Microsoft have introduced what is called full duplexing into AI/human communication, that is, the ability to speak and listen at the same time. The results have been described as "creepy" in the case of Google's product. Microsoft has introduced its product in China, where it has found wide acceptance.



Some think that AI assistants used to make phone calls should have to identify themselves. California is introducing a bill (https://www.artificialintelligence-news.com/2018/05/24/bill-ai-bot-reveal-eff/) to require chatbots to identify themselves as non-human. Is such legislation necessary?

While many think the evolution of AI will remain slow, I still believe that with improved AI-dedicated hardware such as neural net processors and "brains on a chip", and with 10 nm technology, we will experience periodic quantum leaps in capability, with full general intelligence arriving significantly earlier than 2100. There are just too many people working on too many angles, with too much money and security at stake, for there not to be more rapid progress.
 
  • #24
cosmik debris said:
When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.

Cheers

Perhaps true AI will come online at about the same time as commercial nuclear fusion reactors? That's always five years in the future as well.
 
  • #25
During the mid-1990s, when I was principal computer engineer at the speech technology and research laboratory (STAR-lab) at SRI International, the term artificial intelligence (AI) was notable primarily by its absence.

A significant problem facing speech researchers was the limited funding available to purchase equipment. A typical grant might purchase a quality audio device, one workstation, sometimes a UNIX server, and a few disk drives -- all 'off-the-shelf' (OTS) items. My main contribution to speech technology -- along with fellow computer scientists -- was to design and implement a STAR-net that (legally) seamlessly connected disparate research project devices into unified networks able to share language databases, collect data in any spoken language, and distribute processes across connected systems, including individual workstations. STAR-lab had so many individual disk drives that my team was able to mirror drives, greatly reducing data loss while enhancing DB and table look-up data reliability.

The star-network concept came from my work on earlier Digital Equipment Corp (DEC) VAX computers running the VMS operating system (OS) at NASA's advanced concepts flight simulator (ACFS), later implemented on Sun Microsystems servers running the Solaris OS. The beauty of these private networks is the ability to connect different platforms and systems to share data and distribute processing in near real time (nRT) in a secure environment.

Apropos of the original and subsequent posts, the speech technologists solved many problems in speech recognition, speaker identification (ID), native speaker language selection, phonetic and word prediction, sentence completion, and much more, based on clever use of common OTS hardware and software. At the time I would not have called the ability to predict the next syllable or word in a sentence "artificial intelligence" any more than I thought the computer network could "understand" or "speak Spanish" when the Spanish language databases were online. Perhaps I'm short-sighted and take technology for granted.
 
  • #26
AI is allowing the use of unusual techniques to perform unusual tasks, for example the use of RF signals to detect human movement behind a barrier. It is also able to determine the posture of the subjects.



Another task that AI can do and humans can't. Anybody keeping score?
 
  • #27
I really need to catch up with the latest criteria for what constitutes AI. Unaided human senses are notoriously weak compared to our technology. Using the military definition of intelligence, a 1950's era Nike-style RADAR [NATO designation Fansong] extends human vision out many kilometers -- this example includes boresighted visible light tracking -- down deep into the microwave band -- India band in this unclassified example -- under conditions where a human would be helpless.

Let me attach some MIDI audio processes and a few speakers in the RADAR van and a little voice can shout "Here I am!" in the relative direction of an actual target.

Add some visual recognition software and a few table look-ups, digitize the analogue outputs from the track-range computer, and the little voice could shout "Boeing 777 approaching from x direction, distance y km., height above ground z km." in the apparent direction and height (angle) of the actual aircraft, in any language in our DB that matches the native dialect of the human operator. Mostly off-the-shelf HW & SW, given an old refurbished RADAR van and a 400 Hz multi-phase power source.

Wait one... let me think... OK I convinced myself: this likely constitutes artificial intelligence! Certainly compared to an unaided human.
 
  • #28
Below is a short review of major accomplishments last year in AI and predictions of advances to be expected this year by Siraj Raval, director of the School of AI.

 
  • #29
gibberingmouther said:
Biological neural networks are massively parallel, but computer CPUs are not. You can have a computer with a few CPUs running in parallel, but you're still at a huge disadvantage compared to a biological brain when you're trying to run a neural network on silicon. There are such things as "artificial neurons", but putting several billion of them together into an artificial brain is not going to happen ...
45 years ago analogue computers were fairly popular. What did them in were digital computers that could model them faster and cheaper than the real thing. Biological neural nets are slow. You could emulate a million of them in real time with a single modern CPU core.

That said, computers have "completely overtaken" society without much AI. They will continue that trend with it.
As for the singularity, that's still some decades away.
 
  • #30
Klystron said:
I really need to catch up with the latest criteria for what constitutes AI. Unaided human senses are notoriously weak compared to our technology. Using the military definition of intelligence, a 1950's era Nike-style RADAR [NATO designation Fansong] extends human vision out many kilometers -- this example includes boresighted visible light tracking -- down deep into the microwave band -- India band in this unclassified example -- under conditions where a human would be helpless.

Let me attach some MIDI audio processes and a few speakers in the RADAR van and a little voice can shout "Here I am!" in the relative direction of an actual target.

Add some visual recognition software and a few table look-ups, digitize the analogue outputs from the track-range computer, and the little voice could shout "Boeing 777 approaching from x direction, distance y km., height above ground z km." in the apparent direction and height (angle) of the actual aircraft, in any language in our DB that matches the native dialect of the human operator. Mostly off-the-shelf HW & SW, given an old refurbished RADAR van and a 400 Hz multi-phase power source.

Wait one... let me think... OK I convinced myself: this likely constitutes artificial intelligence! Certainly compared to an unaided human.
OK. So Automobile "Blind Side Detection" would also count.
Actually, "AI" usually implies some sort of machine learning and statistical processing. In that sense, the difference can be what development tools you are using - as oppose to the application functionality or user interface.
 
  • #31
.Scott said:
OK. So Automobile "Blind Side Detection" would also count.
Actually, "AI" usually implies some sort of machine learning and statistical processing. In that sense, the difference can be what development tools you are using - as oppose to the application functionality or user interface.

Agreed.

.Scott said:
45 years ago analogue computers were fairly popular. ...[snip]...
That said, computers have "completely overtaken" society ...[snip]...

Concur; with the proviso that analogue computers were/are 'popular' because they work.
You likely noticed I included an analog electro-mechanical computer in my "hodge-podge" example. The track range computer (TRC) provided quite a bit of intelligence to the radar system freeing the operators to concentrate on target identification, other targets in a cell, and radio (voice) communication.

Certainly, high-speed digital computation solves or approximates many problems but need not replace slower analogue computers for all tasks. For instance, the TRC compared returns derived from a rotating feed-horn independent of the radar frequency and pulse rates, automatically correcting antenna positions essentially with clockwork mechanisms coupled with (1950's) electronics.
 
  • #32
.Scott said:
45 years ago analogue computers were fairly popular. What did them in were digital computers that could model them faster and cheaper than the real thing. Biological neural nets are slow. You could emulate a million of them in real time with a single modern CPU core.

That said, computers have "completely overtaken" society without much AI. They will continue that trend with it.
As for the singularity, that's still some decades away.

Yeah, I realize there's a bit more to this than I understand. But I do know the biggest attempts at making a digital "brain" out of interconnected neural net nodes took a ton of supercomputing power and were still well below what a human brain has in terms of the number of neurons and the number of connections between them. There is also all the other information that human brains handle at the level of DNA and cellular interactions (or intra-actions), and a modern computer system would not come close to being able to model that.

Still, for the sake of creating useful AI, neural nets seem to be a very powerful tool.
 
  • #33
Relentlessly, AI progress continues to encroach on human activities, most recently in the writing of original copy by an AI system called GPT-2, developed by OpenAI. So good are the results that the release of the research has been held back to further explore what mischief it might be used for.

A British columnist had one of her columns synthesized by the system, and here is her observation/opinion.
 
  • #34
The way an AI makes a prediction has been of particular interest, especially when the prediction is unexpected or evades logical verification. Was its decision reasonable based on the information it used? We do not want to just take the AI's "word" for it. When an AI makes an obvious mistake, we can go back and analyze the process. For example, there was an article in the popular news illustrating a generic AI problem in which the AI made its decision based on irrelevant cues in the data: an AI system misidentified a husky as a wolf based not on any characteristics of the animals but on the presence of snow in the background, because all the wolf images used for training contained snow.

Researchers have come up with an algorithm that helps determine how an AI made its prediction, which can be used to judge how intelligent the decision or conclusion was.

By using their newly developed algorithms, researchers are finally able to put any existing AI system to a test and also derive quantitative information about them: a whole spectrum starting from naive problem solving behavior, to cheating strategies up to highly elaborate "intelligent" strategic solutions is observed.

Dr. Wojciech Samek, group leader at Fraunhofer HHI said: "We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems have not always found a solution that appears meaningful from a human perspective, but sometimes used so-called 'Clever Hans Strategies'."

Clever Hans was a horse that could supposedly count and was considered a scientific sensation during the 1900s. As it was discovered later, Hans did not master math but in about 90 percent of the cases, he was able to derive the correct answer from the questioner's reaction.

The team around Klaus-Robert Müller and Wojciech Samek also discovered similar "Clever Hans" strategies in various AI systems. For example, an AI system that won several international image classification competitions a few years ago pursued a strategy that can be considered naïve from a human's point of view. It classified images mainly on the basis of context. Images were assigned to the category "ship" when there was a lot of water in the picture. Other images were classified as "train" if rails were present. Still other pictures were assigned the correct category by their copyright watermark. The real task, namely to detect the concepts of ships or trains, was therefore not solved by this AI system -- even if it indeed classified the majority of images correctly.

The researchers were also able to find these types of faulty problem-solving strategies in some of the state-of-the-art AI algorithms, the so-called deep neural networks -- algorithms that were so far considered immune against such lapses. These networks based their classification decision in part on artifacts that were created during the preparation of the images and have nothing to do with the actual image content.

"Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers," said Klaus-Robert Müller. "It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such 'Clever Hans' strategies. It's time to systematically check that, so that secure AI systems can be developed."

With their new technology, the researchers also identified AI systems that have unexpectedly learned "smart" strategies. Examples include systems that have learned to play the Atari games Breakout and Pinball. "Here the AI clearly understood the concept of the game and found an intelligent way to collect a lot of points in a targeted and low-risk manner. The system sometimes even intervenes in ways that a real player would not," said Wojciech Samek.

"Beyond understanding AI strategies, our work establishes the usability of explainable AI for iterative dataset design, namely for removing artefacts in a dataset which would cause an AI to learn flawed strategies, as well as helping to decide which unlabeled examples need to be annotated and added so that failures of an AI system can be reduced," said SUTD Assistant Professor Alexander Binder.

https://www.sciencedaily.com/releases/2019/03/190312103643.htm
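As a toy illustration of how such a check can expose a "Clever Hans" strategy (this is not the relevance-propagation method the article's authors developed, just the simplest possible attribution on a linear model): the synthetic "images" below contain a weak object signal, but one corner "watermark" pixel happens to encode the label almost perfectly. Inspecting the learned relevance shows the classifier leaning on the watermark rather than the object.

```python
# Toy "Clever Hans" detector: train a linear classifier on synthetic images where
# a single watermark pixel leaks the label, then inspect which pixels the model
# actually relies on. (A deliberately simple stand-in for relevance methods.)
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, side = 2000, 8
X = rng.normal(0, 1, size=(n, side * side))
y = rng.integers(0, 2, size=n)

# "Object": a weak, noisy brightness difference in the central 4x4 patch.
center = [r * side + c for r in range(2, 6) for c in range(2, 6)]
X[:, center] += 0.2 * (2 * y[:, None] - 1)

# "Watermark": one corner pixel that encodes the label almost perfectly.
X[:, 0] = 2 * y - 1 + 0.05 * rng.normal(size=n)

clf = LogisticRegression(max_iter=2000).fit(X, y)

# For this linear model, the size of the learned weight on each pixel serves
# as a simple per-pixel relevance score.
relevance = np.abs(clf.coef_[0]).reshape(side, side)
print("relevance of watermark pixel :", relevance[0, 0].round(2))
print("mean relevance of object area:", relevance[2:6, 2:6].mean().round(2))
# The watermark pixel dominates: the model solved the task the "Clever Hans" way.
```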
 
  • #35
Currently, most AI systems run on computers, the cloud, or systems built from components originally designed for other purposes, such as GPUs or FPGAs. The systems are often large and require large amounts of power. I have noted previously that progress in AI implementation will depend on the development of dedicated AI chips and systems. Intel has been working on a neuromorphic chip called Loihi, whose architecture more closely imitates the brain's neural network. The chip has 130,000 neurons in a 60 mm² area. Intel has taken 64 of these chips and created a system (code-named Pohoiki Beach) with 8 million neurons (think frog). This system is 1,000 times faster and uses 10,000 times less power than equivalent AI systems. By the end of the year they expect to have a 100-million-neuron system. A Pohoiki system can be scaled up to 16,000 chips. Now you are talking about a serious neural net.
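Chips like Loihi are built around spiking neurons rather than the continuous-valued units of an ordinary neural net. As a purely software illustration (a generic leaky integrate-and-fire model, not Intel's actual neuron design), the sketch below integrates an input current until the membrane potential crosses a threshold, emits a spike, and resets.

```python
# Generic leaky integrate-and-fire (LIF) neuron, the basic unit that spiking
# neuromorphic hardware implements in silicon. Purely a software illustration.
import numpy as np

dt, T = 1e-4, 0.2                     # time step and total duration (seconds)
tau, R = 0.02, 0.02                   # membrane time constant (s) and resistance (arb. units)
v_rest, v_th, v_reset = 0.0, 1.0, 0.0
steps = int(T / dt)

current = 60.0 * np.ones(steps)       # constant input current (arb. units)
current[: steps // 4] = 0.0           # no input for the first quarter: no spikes there

v = v_rest
spike_times = []
for i in range(steps):
    dv = (-(v - v_rest) + R * current[i]) / tau   # leaky integration of the input
    v += dv * dt
    if v >= v_th:                                 # threshold crossed: spike and reset
        spike_times.append(i * dt)
        v = v_reset

print(f"{len(spike_times)} spikes, first at t = {spike_times[0]:.3f} s"
      if spike_times else "no spikes")
```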

It is expected that this system will be able to overcome a serious problem with today's machine learning called catastrophic forgetting: if you try to add a new item to the repertoire of an AI system, you have to start the training process all over to include it, because trying to add it as an addendum can cause the system to forget everything it has previously learned. It is also expected that it will be harder to fool such a system with distorted images.
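Catastrophic forgetting is easy to reproduce with an ordinary network. In the toy sketch below (plain scikit-learn, nothing neuromorphic), a small MLP is trained on task A, then trained further only on task B; its accuracy on task A collapses because the same weights get overwritten.

```python
# Toy demonstration of catastrophic forgetting: sequential training on task B
# erases what a small network learned on task A.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def make_task(center):
    """Two Gaussian blobs around +-center: a simple binary classification task."""
    X0 = rng.normal(loc=-center, scale=1.0, size=(500, 2))
    X1 = rng.normal(loc=+center, scale=1.0, size=(500, 2))
    X = np.vstack([X0, X1])
    y = np.array([0] * 500 + [1] * 500)
    return X, y

XA, yA = make_task(np.array([3.0, 0.0]))    # task A: classes separated along x
XB, yB = make_task(np.array([0.0, 3.0]))    # task B: classes separated along y

net = MLPClassifier(hidden_layer_sizes=(16,), learning_rate_init=0.01, random_state=0)

for _ in range(50):                          # train on task A only
    net.partial_fit(XA, yA, classes=[0, 1])
print("accuracy on A after training A:", net.score(XA, yA))

for _ in range(50):                          # now train on task B only
    net.partial_fit(XB, yB)
print("accuracy on A after training B:", net.score(XA, yA))  # typically drops toward chance
```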

For more see https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/intels-neuromorphic-system-hits-8-million-neurons-100-million-coming-by-2020

More on the Loihi chip : https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html
 
  • #36
Another indirect contribution to AI implementation has been developed. Researchers in Zurich have found a way to store information in DNA molecules and place them in nanobeads of ceramic material. Possible applications include huge information storage densities and self-replicating machines.

https://physicsworld.com/a/embedded-dna-used-to-reproduce-3d-printed-rabbit/ Rabbit is not real.
 
  • #38
gleem said:
Rabbit is not real.
Thanks for the clarification! :oldbiggrin:
 
  • #39
Until just recently, the successful model for AI was the neural net, based on a network of interconnected neurons. This was because the neuron was identified as the leading component of information processing in the brain. The brain is composed mostly of two types of cells: neurons and glial cells. Glial cells (from the Greek for glue), of which there are several types, were originally believed to be support cells for the neurons, performing maintenance and protective functions. Fairly recently their function was also found to include communicating with the neurons, especially in the case of the astrocytes, which have many dendrite-like structures.

https://medicalxpress.com/news/2020-04-adult-astrocytes-key-memory.html

Developing a human-level neural net system for even a dedicated task has been challenging and hardware-limited. A human brain has about 75 billion neurons with possibly 10,000 or more synapses each. You can see the problem with software models of such a neural net on standard computer hardware. Even with neural net processors built on 10-nanometer technology, there are still challenges; for example, brains are three-dimensional. And astrocytes are at least as numerous as neurons.
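A quick back-of-the-envelope calculation using those figures (and assuming a conventional 4 bytes per synaptic weight, which is an assumption, not something from the articles) shows why a full software model is out of reach for ordinary hardware:

```python
# Rough storage estimate for a brain-scale weight matrix, using the figures
# quoted above and an assumed 4 bytes per synaptic weight.
neurons = 75e9                # ~75 billion neurons
synapses_per_neuron = 1e4     # ~10,000 synapses each
bytes_per_weight = 4          # e.g. one 32-bit float per synapse (assumption)

total_bytes = neurons * synapses_per_neuron * bytes_per_weight
print(f"{total_bytes / 1e15:.0f} petabytes just to store the weights")  # ~3 PB
```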

https://techxplore.com/news/2020-07-astrocytes-behavior-robots-neuromorphic-chips.html

Now a group at Rutgers University has integrated some astrocyte functionality into a commercial neuromorphic chip from Intel to control the movement of a six-legged robot.
"As we continue to increase our understanding of how astrocytes work in brain networks, we find new ways to harness the computational power of these non-neuronal cells in our neuromorphic models of brain intelligence, and make our in-house robots behave more like humans," Michmizos said. "Our lab is one of the few groups in the world that has a Loihi spiking neuromorphic chip, Intel's research chip that processes data by using neurons, just like our brain, and this has worked as a great facilitator for us. We have fascinating years ahead of us."

One final note: they use the term "plastic" a few times; its standard definition, as applied to AI, refers to the ability to adapt.
 
  • #41

Drug-Discovery AI Designs 40,000 Potential Chemical Weapons In 6 Hours

In a recent study published in the journal Nature Machine Intelligence, a team from pharmaceutical company Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI. It successfully identified 40,000 new potential chemical weapons in just 6 hours, with some remarkably similar to the most potent nerve agent ever created.
According to an interview with the Verge, the researchers were shocked by how remarkably easy it was.

“For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets,” said Fabio Urbina, lead author of the paper, to the Verge.

“So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.”
https://www.iflscience.com/drugdisc...0-potential-chemical-weapons-in-6-hours-63017
 
  • #43
I have mentioned before that useful applications will be accelerated by hardware development. Recently a company developed the largest computer chip ever made, at 462 cm². It eliminates the need to use hundreds of GPUs for the calculations, and the need to mind the intricate interconnections between them, which results in extensive (and expensive) time spent programming the system. This chip will help accelerate AI research; however, it still requires a huge amount of power, about 20 kW.

Some researchers are giving AI access to other ways to interact with the outside world. They are giving AI systems the ability to learn about themselves, that is, to self-model.

Summary: Researchers have created a robot that is able to learn a model of its entire body from scratch, without any human assistance. In a new study, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
https://www.sciencedaily.com/releases/2022/07/220713143941.htm
 
  • #44
While my previous post shows significant progress in reducing learning time using computers to model the human brain, MIT researchers have taken another tack, creating a system in which the learning takes place within a memory structure more in line with the architecture of a biological brain. Current computer-generated neural networks emulate the conductivity of a neuron's synapses by weighting each synapse through a computation. This involves using memory to store the weighting factor, as well as shuttling information into and out of memory to a CPU for the weighting calculation, as I understand it. The new approach uses what is known as resistive memory, in which the memory element itself develops a conductance, removing the need for a CPU, for the movement of data, and for the associated power. The process is then really analog rather than digital. The system uses a silicon-compatible inorganic substrate to build artificial neurons that are 1,000 times smaller than biological neurons and promise to process information much faster, with a power requirement much closer to that of a biological system. Additionally, the system is massively parallel, reducing learning time further.
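The core trick, as I understand it, is that a crossbar of resistive elements computes a matrix-vector product by physics alone: input voltages on the rows, conductances as the weights, and Kirchhoff's current law summing the products on each column wire. A minimal idealized simulation (ignoring device noise, limited precision, wire resistance, and the device pairs needed for negative weights) looks like this:

```python
# Idealized resistive-crossbar matrix-vector multiply: each device contributes a
# current G_ij * v_i (Ohm's law) and the column wire sums them (Kirchhoff's law),
# so the array computes a neural-net layer's weighted sum in one analog step.
import numpy as np

rng = np.random.default_rng(0)
rows, cols = 4, 3
G = rng.uniform(1e-6, 1e-4, size=(rows, cols))   # device conductances (siemens)
v = np.array([0.20, -0.10, 0.30, 0.05])          # input voltages on the rows

# Physics view: explicitly sum the per-device currents on each column wire.
I = np.zeros(cols)
for j in range(cols):
    for i in range(rows):
        I[j] += G[i, j] * v[i]

# Linear-algebra view: identical to the matrix-vector product a digital layer computes.
assert np.allclose(I, G.T @ v)
print("column currents (A):", I)
```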

Earlier work published last year demonstrated the feasibility of a resistive memory. One year later, enough progress has been made that a working resistive neural network can be built with silicon fabrication techniques.

Bottom line: smaller size, lower power, faster learning. Most predicted that artificial general intelligence would be developed in 2050 at the earliest, and probably closer to 2100, if at all. It is beginning to look like it might be earlier.

MIT's Quest for Intelligence Mission Statements
https://quest.mit.edu/research/missions
 
  • #45
A question that is often asked is when we might expect AGI. Well, there is some evidence that it might occur sooner than we think. Human language usage is one of the most difficult tasks for AI, considering the large number of exceptions, nuances, and contexts. This makes translation from one language to another challenging. The reason AGI might be reached a lot sooner than most AI experts suggest is a metric based on the time it takes a human to correct a language translation generated by AI. It takes a human translator about one second per word to edit a translation produced by another human. In 2015 it took a human 3.5 seconds per word to edit a machine-generated translation. Today it takes 2 seconds. If the progress in accuracy continues at the same rate then machine translations will be as good as humans in 7 years.

https://www.msn.com/en-us/news/tech...A16FldN?cvid=2c1db71908854908b3a14b864e9c1eec
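As a rough sanity check on that extrapolation (a sketch only: it assumes the quoted 3.5 s/word figure for 2015, 2.0 s/word for roughly 2022, and a smooth trend, none of which comes from the article's own model), one can compute when post-editing time would reach the 1 s/word human benchmark under a linear and an exponential trend:

```python
# Rough extrapolation of post-editing time toward the ~1 s/word human benchmark.
# Assumed data points: 3.5 s/word in 2015 and 2.0 s/word in 2022 (the "today" year
# is an assumption); the article's own 7-year figure may use different inputs.
import math

t0, e0 = 2015, 3.5   # year, seconds per word
t1, e1 = 2022, 2.0
target = 1.0

# Linear trend: constant seconds-per-word improvement per year.
rate = (e0 - e1) / (t1 - t0)
years_linear = (e1 - target) / rate

# Exponential trend: constant fractional improvement per year.
k = math.log(e0 / e1) / (t1 - t0)
years_exp = math.log(e1 / target) / k

print(f"linear trend:      ~{years_linear:.1f} more years")   # ~4.7
print(f"exponential trend: ~{years_exp:.1f} more years")      # ~8.7
```

The article's seven-year figure sits between these two simple trends.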

Some will find it difficult to accept an AGI and will find numerous reasons to reject the idea. But we do not understand how we do what we do, any more than we understand what the machines are doing. The "proof" will probably come in some sort of intellectual contest, perhaps a debate between an AGI and a human.

Go AGI.
 
  • #46
gleem said:
A question that is often asked is when we might expect AGI. Well, there is some evidence that it might occur sooner than we think. Human language usage is one of the most difficult tasks for AI, considering the large number of exceptions, nuances, and contexts. This makes translation from one language to another challenging. The reason AGI might be reached a lot sooner than most AI experts suggest is a metric based on the time it takes a human to correct a language translation generated by AI. It takes a human translator about one second per word to edit a translation produced by another human. In 2015 it took a human 3.5 seconds per word to edit a machine-generated translation. Today it takes 2 seconds. If the progress in accuracy continues at the same rate then machine translations will be as good as humans in 7 years.

https://www.msn.com/en-us/news/tech...A16FldN?cvid=2c1db71908854908b3a14b864e9c1eec

Some will find it difficult to accept an AGI and will find numerous reasons to reject the idea. But we do not understand how we do what we do, any more than we understand what the machines are doing. The "proof" will probably come in some sort of intellectual contest, perhaps a debate between an AGI and a human.

Go AGI.
Choice of language matters. I have noted that Google translates Japanese poorly. This isn't surprising. The written language is extremely ambiguous, so much so that constructing sentences with dozens if not hundreds of possible meanings is a national pastime.
 
  • #47
gleem said:
If the progress in accuracy continues at the same rate then machine translations will be as good as humans in 7 years.
Machines have beaten humans in chess for 25 years. Neither of these tasks is AGI.

ChatGPT is a recent example: It produces text with great grammar that is full of factual errors - it knows grammar but has a very poor understanding of content.
 
  • #48
mfb said:
Machines have beaten humans in chess for 25 years. Neither of these tasks is AGI.

ChatGPT is a recent example: It produces text with great grammar that is full of factual errors - it knows grammar but has a very poor understanding of content
Deep Blue, IBM's computer that beat Kasparov, was an 11.5 GFLOPS machine and would be incapable of what ChatGPT can do. By the way, use of language is considered an element of human intelligence. GPT is not capable of reflecting on its responses the way humans are; if we misspeak, we can correct ourselves. Keep in mind that using the internet, with its tainted data, to train it is really a bad way to train anything or anybody. When humans are taught, they are generally provided with vetted data. If we were taught garbage, we would spew garbage, and actually some do anyway.
 
  • #49
Hornbein said:
Choice of language matters. I have noted that Google translates Japanese poorly. This isn't surprising. The written language is extremely ambiguous, so much so that constructing sentences with dozens if not hundreds of possible meanings is a national pastime.

From "The History of Computer Langauge Translation" https://smartbear.com/blog/the-history-of-computer-language-translation/
Human errors in translation can be and have been cataclysmic. In July 1945, during World War 2, the United States issued the Potsdam Declaration, demanding the surrender of Japan. Japanese Premier Kantaro Suzuki called a news conference and issued http://www.lackuna.com/2012/04/13/5-historically-legendary-translation-blunders/#f1j6G4IAprvcoGlw.99 That wasn't what got to Harry Truman. Suzuki used the word http://www.nsa.gov/public_info/_files/tech_journals/mokusatsu.pdf The problem is, “mokusatsu” can also mean “We’re ignoring it in contempt.” Less than two weeks later, the first atomic bomb was dropped.
 
  • #50
Japanese Premier Kantaro Suzuki called a news conference and issued a statement (http://www.lackuna.com/2012/04/13/5-historically-legendary-translation-blunders/#f1j6G4IAprvcoGlw.99).
This link from the NSA mentions it:
https://www.nsa.gov/portals/75/docu...ssified-documents/tech-journals/mokusatsu.pdf

But I think it is a stretch that the poor translation changed the outcome. We can't get into the heads of the participants, so we'll never know for sure.
 