Latest Notable AI accomplishments

  • Thread starter gleem
  • Tags
    Ai
In summary, developments in AI and machine learning are progressing rapidly and are not limited to playing games, but also include pattern recognition and image analysis in various fields such as medicine, astronomy, and business. One challenge for AI is effective natural language processing, but recent advancements have shown promising results. With so many people working on AI and its potential to significantly change our society, it is likely that we will see more groundbreaking developments in the next few years. However, it is important to note that people tend to overestimate the timeline for AI's progress. AI has the potential to completely change the way we live and work, similar to how electricity has become an inseparable part of our society. It is expected to replace various jobs, which will require many people to find new work.
  • #1
gleem
Developments in AI and machine learning continue to progress at a rate that may be surprising, and not just in playing games but also in pattern recognition and image analysis in medicine, astronomy, and business. One of the challenges for AI is effective natural language processing. Last week both China's Alibaba and Microsoft fielded systems that outperformed humans on Stanford University's Stanford Question Answering Dataset (SQuAD), a test requesting answers to 100,000 questions based on over 500 Wikipedia articles.

https://www.bloomberg.com/news/arti...outgunned-humans-in-key-stanford-reading-test

With so many people working on so many different applications, with so much to gain, I believe that we will be witnessing other "eyebrow-raising" developments in the next few years that will significantly change minds about the timeline of AI's progress.
 
  • Like
Likes FactChecker and berkeman
  • #2
I always find that people unfamiliar with the field dramatically overestimate the time it will take for AI to completely overtake our society in the same way that industrialization did. Geometric progression is too complex for most people to fully comprehend. Although we fell behind Moore's law quite a few years ago, we're still progressing geometrically. Such huge advances in AI are not eyebrow-raising to most who understand it.

Humans are really good at picking out obvious patterns in data, but we're very poor at separating subtle patterns in loads of noise. Neural networks specialize in such things.
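A tiny illustration of that point, using scikit-learn (my choice; nothing in the post specifies a library): the label below depends only weakly on a combination of many noisy features, yet a small neural network still picks the signal out at well above chance. All names and parameters are made up for the demo.

```python
# Toy demo: a small neural network recovering a subtle signal buried in noise.
# Library choice (scikit-learn) and all parameters are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 5000, 50
X = rng.normal(size=(n, d))                        # 50 noisy features
w = rng.normal(size=d) * 0.2                       # weak hidden weighting
y = (X @ w + rng.normal(size=n) > 0).astype(int)   # label barely above the noise floor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))     # noticeably better than the 0.5 chance level
```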
 
  • #4
newjerseyrunner said:
I always find that people unfamiliar with the field dramatically overestimate the time it will take for AI to completely overtake our society in the same way that industrialization did.

When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.

Cheers
 
  • Like
Likes phinds, aaroman, NotASmurf and 5 others
  • #5
Remember that AI research began in earnest in the 1950s. Take Allen Newell et al.'s paper on creative thinking https://www.rand.org/content/dam/rand/pubs/papers/2008/P1320.pdf where he discusses some of his early investigations into AI, including his "Logic Theorist", which could prove many of the theorems in chapter 2 of Russell and Whitehead's "Principia Mathematica" and found a proof for one theorem that was shorter and more elegant than the authors'. Consider some of the hardware available at the time, like the ILLIAC. Things looked promising even then (just five or six more years to go), but it goes to show you the magnitude of the problem when, 60 years later, we are only beginning to see a light at the end of the tunnel.
 
  • Like
Likes Buzz Bloom
  • #6
This is quite old (2009) and I am not sure if it has been previously discussed, but I think it is worth mentioning here, at least for the issues that AI raises in its contribution to our endeavors. An AI program (see https://www.wired.com/2009/04/Newtonai/) has "discovered" basic laws of physics by sifting through dynamic motion data, knowing nothing of these processes and armed only with some basic arithmetic/algebraic operations. The program eventually produces mathematical relationships (laws of physics) that describe the behavior of the data, including in this case Lagrangians and Hamiltonians. The issue is this: when an AI program is applied to a complex data set and produces a "law", or says "look what I found", that describes the behavior of the data, how do we begin to understand the processes that give rise to this law? Probably in a similar way to how we analyze regularities in data that we observe, although it might be quite difficult if the understanding involves a new concept or hypothesis.
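The Wired article describes Schmidt and Lipson's symbolic-regression work. A toy version of the underlying idea, not their actual algorithm, is sketched below: simulate a frictionless oscillator, then search a small library of candidate expressions for one whose value stays (nearly) constant along the trajectory, which is how a conservation law shows up in raw motion data. The candidate library and all numbers are my own illustrative choices.

```python
# Toy "law discovery": find a candidate expression that is conserved along a trajectory.
# A simplified illustration of the idea, not the published algorithm.
import numpy as np

# Simulate a unit-mass, unit-stiffness harmonic oscillator with small time steps.
dt, steps = 0.001, 20000
x, v = 1.0, 0.0
xs, vs = [], []
for _ in range(steps):
    v += -x * dt          # F = -kx with k = m = 1
    x += v * dt
    xs.append(x)
    vs.append(v)
xs, vs = np.array(xs), np.array(vs)

# Small library of candidate expressions built from x and v.
candidates = {
    "x": xs, "v": vs, "x*v": xs * vs,
    "x^2": xs**2, "v^2": vs**2,
    "x^2+v^2": xs**2 + vs**2, "x^2-v^2": xs**2 - vs**2,
}

# A conserved quantity has (nearly) zero variation along the trajectory.
def relative_variation(series):
    return np.std(series) / (abs(np.mean(series)) + 1e-9)

for name, series in sorted(candidates.items(), key=lambda kv: relative_variation(kv[1])):
    print(f"{name:10s} relative variation = {relative_variation(series):.4f}")
# "x^2+v^2" (twice the total energy) should come out as the most nearly constant.
```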
 
  • #7
newjerseyrunner said:
I always find that people unfamiliar with the field dramatically overestimate the time it will take for AI to completely overtake our society in the same way that industrialization did.
I know I'm late, but what does "completely overtake our society" mean?
 
  • #8
russ_watters said:
I know I'm late, but what does "completely overtake our society" mean?
Open ended. It just means the society after the singularity will be radically different than before it. Electricity, for example, has completely taken over our society. Some people here may have had grandparents that lived in a world without power, and some of us have been to places like Lancaster, PA, but for the most part, our society is entirely dependent on our electrical infrastructure. Most of us have had at least three generations who have lived with it. I imagine AI will be so prevalent that by three generations from the singularity, it'd be almost impossible to imagine a world without AI doing work for us.
 
  • #9
newjerseyrunner said:
Open ended. It just means the society after the singularity will be radically different than before it. Electricity, for example, has completely taken over our society. Some people here may have had grandparents that lived in a world without power, and some of us have been to places like Lancaster, PA, but for the most part, our society is entirely dependent on our electrical infrastructure. Most of us have had at least three generations who have lived with it. I imagine AI will be so prevalent that by three generations from the singularity, it'd be almost impossible to imagine a world without AI doing work for us.
Ok, fair enough, but I was more interested in *how*, as in, what is it going to do that would be game changing?
 
  • #10
Changing what people do for a living.

Before the industrial revolution, it was mainly "working on a farm" or some variant of that. Afterwards it was much more diverse.
Computers changed most jobs, and together with the internet they changed society a lot even beyond that.
AI will replace various jobs, which means many people will have to find new jobs.
 
  • #11
mfb said:
Changing what people do for a living.

Before the industrial revolution, it was mainly "working on a farm" or some variant of that. Afterwards it was much more diverse.
Computers changed most jobs, and together with the internet they changed society a lot even beyond that.
AI will replace various jobs, which means many people will have to find new jobs.
Any idea which jobs? And how fast?

What I'm getting at here is related to what I discussed in this post:
https://www.physicsforums.com/threa...anies-will-i-become-rich.937847/#post-5928916

I feel like the AI "alarmists", for lack of a better word, think of the "singularity" as a single event which will suddenly change a lot of things, rapidly altering the human labor landscape. But they don't, that I've seen, articulate the what/how. It's all very vague from what I have seen. And I think the reason why is that "AI" is a poorly defined concept and the types of things that AI might do are just everything that computers do, evolving over the past 50 years and simply continuing to evolve. As such, I don't see a potential for a singular or short-term disruptive event.

"Terminator" was literally the flip of an AI switch that destroyed the world. But AI isn't needed for that. A decent chess-playing computer program could destroy the world if programmed badly and connected to nukes, a la "War Games". I feel like people over-estimate the "profoundness" of what AI could be and discount what computers already are.

Thoughts?
 
  • #13
russ_watters said:
Any idea which jobs? And how fast?
We had this discussion a while ago, I don't want to repeat it here.
russ_watters said:
I feel like the AI "alarmists", for lack of a better word, think of the "singularity" as a single event which will suddenly change a lot of things, rapidly altering the human labor landscape. But they don't, that I've seen, articulate the what/how. It's all very vague from what I have seen. And I think the reason why is that "AI" is a poorly defined concept and the types of things that AI might do are just everything that computers do, evolving over the past 50 years and simply continuing to evolve. As such, I don't see a potential for a singular or short-term disruptive event.
You are mixing two different concepts here. The singularity is a possible future event that would be - by definition - a single event that suddenly changes a lot of things. But that is not what we were discussing before. The change in society from AI comes earlier - it has already started, with a few jobs getting replaced or notably transformed. It will become more important in the next decades as computers become able to do more and more of the tasks humans do today. This is not what the (possible) singularity is about. That would be an AI that improves itself so much that it vastly exceeds the intelligence of all humans.
 
  • #14
mfb said:
The singularity is a possible future event that would be - by definition - a single event that suddenly changes a lot of things.

And, importantly, it changes things in ways that are very difficult to predict. It is like projecting a tight bundle of trajectories right by a singularity point and watching them fill up most of the state space.

Also, to me the singularity (in AI) does not have to be a single (localized) event as such, but more like a narrow passage of time where we, from afar, are unable to predict how we emerge on the other side (with the possible scenarios limited only by physical laws, thus ranging from near-human extinction to near-human nirvana) and where, close up, things change too fast for us to control. In short, in such an event we fly blind and with loss of control.
 
  • Like
Likes gleem
  • #15
The singularity assumes the creation of a strong AI, something no one has come close to doing.
 
  • #16
russ_watters said:
Any idea which jobs? And how fast?

AI in some form has been with us for years and only now are most people becoming aware of it. It is only in the last few years that the general population has begun to appreciate its possible impact. As far as which jobs will be replaced, you do not have to be specific. Any job not involving dexterity (unless designed for that quality, or for new or unique situations) that can be reduced to a series of operations or rules, no matter how complex, based on some delineated situation or data set, will be replaced by AI in the near term. How fast? Much of AI today is mission specific and very focused, so as fast as they can be programmed and marketed.

I think one of the reasons that AI hasn't progressed faster is its difficulty in interacting with humans or its environment. As I noted in the OP, AI programs can now read and answer questions about what they read. Since AI can understand spoken language, it can now share information directly with humans in a human fashion. With a large knowledge base one would think AI devices will soon be able to carry on a meaningful conversation with people.

I think it is possible now for a factory to be run almost entirely by robots from the receiving to the shipping departments, including business transactions, ordering material, inventory control, and perhaps some maintenance and repairs. A current controversy in AI is autonomous vehicles, which are an order of magnitude more difficult than robots used in manufacturing. Here again is the issue of interacting in a meaningful way with the environment, and it is happening. Of particular concern is the use of AI in the military, especially in autonomous weapons, meaning armed drones from the MQ-1B Predator to the Perdix micro-UAV swarm. International law requires humans to make kill decisions, but the utility of the microdrone really depends on their determining the targets, and that is quite bothersome. It is said that perhaps the greatest issue with AI is how we humans use it.
 
  • #17
My intent in initiating this thread was to concentrate on the development of the capabilities of AI and its various forms of implementation. Anyway, I think another notable improvement in AI implementation is the use of memristors in a new type of neural network system called reservoir computing, by University of Michigan researchers.

https://www.sciencedaily.com/releases/2017/12/171222090313.htm

This will apparently make AI processors smaller, allow significantly faster learning, and allow processing tasks such as speech recognition in which the use of a word may depend on context. It is claimed that it can predict the next word in a conversation before it is spoken.
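For readers unfamiliar with the term: reservoir computing typically uses a fixed, random recurrent network (the "reservoir") and trains only a linear readout on its states; the Michigan work implements the reservoir with memristors. Below is a minimal software-only echo state network predicting the next sample of a signal. It is a generic illustration of the concept, not the memristor system from the article, and every parameter is an arbitrary choice.

```python
# Minimal echo state network (software reservoir), trained to predict the next
# sample of a signal. Illustrative only; not the memristor hardware in the article.
import numpy as np

rng = np.random.default_rng(1)
n_res = 200                                      # reservoir size
W_in = rng.uniform(-0.5, 0.5, size=(n_res, 1))   # fixed random input weights
W = rng.normal(size=(n_res, n_res))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # scale spectral radius below 1

t = np.arange(3000)
u = np.sin(0.2 * t)                              # input signal
target = np.roll(u, -1)                          # next-step prediction target

# Drive the reservoir and collect its states.
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for i, ui in enumerate(u):
    x = np.tanh(W @ x + W_in[:, 0] * ui)
    states[i] = x

# Train only the linear readout (ridge regression) on an early stretch of data.
train = slice(200, 2000)                         # skip an initial washout period
A = states[train]
ridge = 1e-6
W_out = np.linalg.solve(A.T @ A + ridge * np.eye(n_res), A.T @ target[train])

pred = states[2000:-1] @ W_out
rmse = np.sqrt(np.mean((pred - target[2000:-1]) ** 2))
print("test RMSE:", rmse)                        # should be small for this easy task
```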

Filip Larsen said:
Also, to me the singularity (in AI) does not have to be a single (localized) event as such, but more like a narrow passage of time where we, from afar, are unable to predict how we emerge on the other side (with the possible scenarios limited only by physical laws, thus ranging from near-human extinction to near-human nirvana) and where, close up, things change too fast for us to control. In short, in such an event we fly blind and with loss of control.

Yes, like the smartphone or the internet. And that is a problem with AI: we see it as we would like it to be and as we think it should be, not as it will become. Did the innovators of the internet consider the potential of its implementation in the facilitation of international terrorism and other nefarious activities?
 
  • #18
gleem said:
It is claimed that it can predict the next word in a conversation before it is spoken.
How often is it right? My phone tries to do that with typed text, but it doesn't do a good job so far (although it is much better than random guessing).
 
  • #19
mfb said:
How often is it right? My phone tries to do that with typed text, but it doesn't do a good job so far (although it is much better than random guessing).

The claim apparently was made in an interview with the author and not in the published article, so I expect future publications to present specific information; stay tuned.
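As an aside, here is what next-word prediction looks like in its very simplest form: a bigram model that just counts which word most often follows the current one. Real systems, and certainly the reservoir-computing work above, use far richer context; everything here, including the sample text, is made up for illustration.

```python
# Toy bigram next-word predictor: counts which word most often follows each word.
# Purely illustrative; real systems use far richer context than one previous word.
from collections import Counter, defaultdict

text = (
    "the cat sat on the mat and the cat saw the dog "
    "and the cat sat on the mat"
)
words = text.split()

follow = defaultdict(Counter)
for a, b in zip(words, words[1:]):
    follow[a][b] += 1

def predict_next(word):
    """Return the most common follower of `word`, or None if unseen."""
    if word not in follow:
        return None
    return follow[word].most_common(1)[0][0]

print(predict_next("the"))   # -> 'cat' (most frequent follower in the toy text)
print(predict_next("sat"))   # -> 'on'
```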
 
  • #20
It is blatant that we have overestimated when strong AI will come. But just as much as we overestimated this, we also assumed too much about the passion and excitement of most researchers. Indeed, reality is very different. Many researchers won't spend most of their lives working on something that seems like pure speculation. Other researchers have just stayed in machine learning for the money (strong AI researchers will probably not make much money, whereas machine learning experts get astonishing salaries).

OK, AI is not as easy as the public might think at first glance. But come on, it is getting increasing academic and media attention. A bright future awaits.
PS: nah, just joking, machines have already taken over our society and we don't even notice it.
 
  • #21
I just felt I had something to add to this article. Biological neural networks are massively parallel but computer CPUs are not. You can have a computer with a few cpus set in parallel but you're still at a huge disadvantage compared to a biological brain when you're trying to run a neural network on silicon. There are such things as "artificial neurons" but putting several billion together into an artificial brain is not going to happen ...

Eventually we'll be able to run human sized brains on silicon, but even with the most modern technology we're at like 11 billion connections compared to ~100 trillion: https://www.popsci.com/science/article/2013-06/stanfords-artificial-neural-network-biggest-ever

It would probably be easier to create artificially intelligent biorobots ... but silicon is a little more resilient maybe?
 
  • #22
A house mouse has ~100 billion synapses, and a honey bee just 1 billion. But these networks didn't reach the intelligence of a honey bee yet. Recognizing cats with thousands of training samples is nice, but ultimately not the main task of brains in animals.
 
  • Like
Likes aaroman
  • #23
AI assistants: while not earth-shaking, this AI application is a step toward integrating AI more seamlessly into our daily lives while at the same time introducing unexpected concerns.

Pundits have argued that "chat-bot" applications are too unhumanlike to replace humans where conversation is the primary mode of interaction, as in retail sales for example. Google and Microsoft have introduced what is called full duplexing into AI/human communication, that is, the ability to speak and listen at the same time. The results have been described as "creepy" for Google's product. Microsoft has introduced its product in China, finding wide acceptance.



Some think that AI assistants used to make phone calls should be identified. California is introducing a bill https://www.artificialintelligence-news.com/2018/05/24/bill-ai-bot-reveal-eff/ to require chat-bots to identify themselves as not human. Is such legislation necessary?

While many think the evolution of AI will continue to be slow, I still believe that with improved dedicated AI hardware such as neural-net processors and "brains on a chip", and with 10 nm technology, we will experience periodic quantum leaps in capability, with full general intelligence arriving significantly earlier than 2100. There are just too many people working on too many angles, with too much money and security at stake, for there not to be more rapid progress.

 
  • Like
Likes mfb
  • #24
cosmik debris said:
When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.

Cheers

Perhaps true AI will come online at about the same time as commercial nuclear fusion reactors? That's always five years in the future as well.
 
  • #25
During the mid-1990s, when I was principal computer engineer at the Speech Technology and Research Laboratory (STAR-lab) at SRI International, the term artificial intelligence (AI) was notable primarily by its absence.

A significant problem facing speech researchers was the limited funding available to purchase equipment. A typical grant might purchase a quality audio device, one workstation, sometimes a UNIX server, and a few disk drives -- all off-the-shelf (OTS) items. My main contribution to speech technology -- along with fellow computer scientists -- was to design and implement a STAR-net that (legally) seamlessly connected disparate research project devices into unified networks able to share language databases, collect data in any spoken language, and distribute processes across connected systems, including individual workstations. STAR-lab had so many individual disk drives that my team was able to mirror drives, greatly reducing data loss while enhancing DB and table look-up data reliability.

The star-network concept came from my work on earlier Digital Equipment Corp (DEC) VAX computers running VMS operating system (OS) at NASA's advanced concepts flight simulator (ACFS), later implemented on Sun Microsystem servers running Solaris OS. The beauty of these private networks is the ability to connect different platforms and systems to share data and distribute processing in near real time (nRT) in a secure environment.

Apropos of the original and subsequent posts, the speech technologists solved many problems in speech recognition, speaker identification (ID), native-speaker language selection, phonetic and word prediction, sentence completion, and much more, based on clever use of common OTS hardware and software. At the time I would not call the ability to predict the next syllable or word in a sentence "artificial intelligence" any more than I thought that the computer network could "understand" or "speak Spanish" when the Spanish language databases were online. Perhaps I'm short-sighted and take technology for granted.
 
  • Like
Likes russ_watters
  • #26
AI is allowing the use of unusual techniques to perform unusual tasks, for example the use of RF by AI to detect human movement behind a barrier. It is also able to determine the posture of the subjects.



Another task that AI can do that humans can't. Anybody keeping score?
 
  • #27
I really need to catch up with the latest criteria for what constitutes AI. Unaided human senses are notoriously weak compared to our technology. Using the military definition of intelligence, a 1950's era Nike-style RADAR [NATO designation Fansong] extends human vision out many kilometers -- this example includes boresighted visible light tracking -- down deep into the microwave band -- India band in this unclassified example -- under conditions where a human would be helpless.

Let me attach some MIDI audio processes and a few speakers in the RADAR van and a little voice can shout "Here I am!" in the relative direction of an actual target.

Add some visual recognition software and a few table look-ups, digitize the analogue outputs from the track-range computer, and the little voice could shout "Boeing 777 approaching from x direction, distance y km, height above ground z km" in the apparent direction and height (angle) of the actual aircraft, in any language in our DB that matches the native dialect of the human operator. Mostly off-the-shelf HW & SW, given an old refurbished RADAR van and a 400 Hz multi-phase power source.

Wait one... let me think... OK I convinced myself: this likely constitutes artificial intelligence! Certainly compared to an unaided human.
 
  • Like
Likes russ_watters
  • #28
Below is a short review of major accomplishments last year in AI and predictions of advances to be expected this year by Siraj Raval, director of the School of AI.

 
  • #29
gibberingmouther said:
Biological neural networks are massively parallel but computer CPUs are not. You can have a computer with a few cpus set in parallel but you're still at a huge disadvantage compared to a biological brain when you're trying to run a neural network on silicon. There are such things as "artificial neurons" but putting several billion together into an artificial brain is not going to happen ...
45 years ago analogue computers were fairly popular. What did them in were digital computers that could model them faster and cheaper than the real thing. Biological neural nets are slow. You could emulate a million of them in real time with a single modern CPU core.
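A back-of-envelope check on that "million neurons per core" figure (all numbers below are rough assumptions, not measurements): biological neurons operate on millisecond timescales, while a modern core performs billions of simple operations per second.

```python
# Rough plausibility check (order-of-magnitude assumptions, not measurements).
ops_per_second = 5e9      # simple operations per second for one modern core (assumed)
update_rate_hz = 1e3      # update each model neuron every millisecond (assumed)
ops_per_update = 1e3      # cost of one point-neuron update incl. synapse sums (assumed)

neurons_real_time = ops_per_second / (update_rate_hz * ops_per_update)
print(f"~{neurons_real_time:,.0f} simple model neurons per core in real time")
# -> roughly a few million, so the claim is at least plausible for very simple
#    point-neuron models (detailed biophysical models would be far costlier).
```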

That said, computers have "completely overtaken" society without much AI. They will continue that trend with it.
As for the singularity, that's still some decades away.
 
  • #30
Klystron said:
I really need to catch up with the latest criteria for what constitutes AI. Unaided human senses are notoriously weak compared to our technology. Using the military definition of intelligence, a 1950's era Nike-style RADAR [NATO designation Fansong] extends human vision out many kilometers -- this example includes boresighted visible light tracking -- down deep into the microwave band -- India band in this unclassified example -- under conditions where a human would be helpless.

Let me attach some MIDI audio processes and a few speakers in the RADAR van and a little voice can shout "Here I am!" in the relative direction of an actual target.

Add some visual recognition software and a few table look-ups, digitize the analogue outputs from the track-range computer, and the little voice could shout "Boeing 777 approaching from x direction, distance y km, height above ground z km" in the apparent direction and height (angle) of the actual aircraft, in any language in our DB that matches the native dialect of the human operator. Mostly off-the-shelf HW & SW, given an old refurbished RADAR van and a 400 Hz multi-phase power source.

Wait one... let me think... OK I convinced myself: this likely constitutes artificial intelligence! Certainly compared to an unaided human.
OK. So Automobile "Blind Side Detection" would also count.
Actually, "AI" usually implies some sort of machine learning and statistical processing. In that sense, the difference can be what development tools you are using - as oppose to the application functionality or user interface.
 
  • #31
.Scott said:
OK. So Automobile "Blind Side Detection" would also count.
Actually, "AI" usually implies some sort of machine learning and statistical processing. In that sense, the difference can be what development tools you are using - as oppose to the application functionality or user interface.

Agreed.

.Scott said:
45 years ago analogue computers were fairly popular. ...[snip]...
That said, computers have "completely overtaken" society ...[snip]...

Concur; with the proviso that analogue computers were/are 'popular' because they work.
You likely noticed I included an analog electro-mechanical computer in my "hodge-podge" example. The track range computer (TRC) provided quite a bit of intelligence to the radar system freeing the operators to concentrate on target identification, other targets in a cell, and radio (voice) communication.

Certainly, high-speed digital computation solves or approximates many problems but need not replace slower analogue computers for all tasks. For instance, the TRC compared returns derived from a rotating feed-horn independent of the radar frequency and pulse rates, automatically correcting antenna positions essentially with clockwork mechanisms coupled with (1950's) electronics.
 
  • Like
Likes sysprog and .Scott
  • #32
.Scott said:
45 years ago analogue computers were fairly popular. What did them in were digital computers that could model them faster and cheaper than the real thing. Biological neural nets are slow. You could emulate a million of them in real time with a single modern CPU core.

That said, computers have "completely overtaken" society without much AI. They will continue that trend with it.
As for the singularity, that's still some decades away.

Yeah, I realize there's a bit more to this than I understand. But I do know the biggest attempts at making a digital "brain" out of neural net nodes connected to each other took a ton of supercomputing power and were still well below what a human brain has in terms of the number of neurons and the number of connections between them. There is also all the other information that human brains handle at the level of DNA and cellular interactions (or intra-actions), and a modern computer system would not come close to being able to model that.

Still, for the sake of creating useful AI, neural nets seem to be a very powerful tool.
 
  • #33
Relentlessly, AI progress continues to encroach on human activities, most recently in the writing of original copy by an AI system called GPT-2, developed by OpenAI. So good are the results that the release of the research has been held back from publication to further explore what mischief it might be used for.

A British columnist had one of her columns synthesized by the system, and here is her observation/opinion.
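For anyone who wants to poke at the part of GPT-2 that OpenAI did release publicly (the small model), a minimal text-generation sketch follows. It assumes the third-party Hugging Face transformers library, which is not mentioned in the thread; output quality will be well below the examples in the press coverage.

```python
# Minimal text generation with the publicly released small GPT-2 model.
# Assumes the third-party Hugging Face `transformers` library and PyTorch are
# installed (pip install transformers torch); not part of OpenAI's own release.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "The latest developments in artificial intelligence"
input_ids = tokenizer.encode(prompt, return_tensors="pt")

# Sample a continuation token by token (top-k sampling keeps output varied).
output = model.generate(
    input_ids,
    max_length=60,
    do_sample=True,
    top_k=50,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```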
 
  • Like
Likes Klystron and mfb
  • #34
The way that an AI makes a prediction has been of particular interest, especially when the prediction is unexpected or evades logical verification. Was its decision reasonable based on the information that it used? We do not want to just take the AI's "word" for it. When an AI makes an obvious mistake we can go back and analyze the process. For example, there was an article in the popular news illustrating a generic AI problem where the AI made its decision based on irrelevant cues in the data: an AI system misidentified a husky as a wolf based not on any characteristics of the animals but on the presence of snow in the background, because all the wolf images used for learning contained snow.

Researchers have come up with an algorithm that helps determine how an AI made its prediction, which can be used to gauge how intelligent the decision/conclusion was.

By using their newly developed algorithms, researchers are finally able to put any existing AI system to a test and also derive quantitative information about them: a whole spectrum starting from naive problem solving behavior, to cheating strategies up to highly elaborate "intelligent" strategic solutions is observed.

Dr. Wojciech Samek, group leader at Fraunhofer HHI said: "We were very surprised by the wide range of learned problem-solving strategies. Even modern AI systems have not always found a solution that appears meaningful from a human perspective, but sometimes used so-called 'Clever Hans Strategies'."

Clever Hans was a horse that could supposedly count and was considered a scientific sensation during the 1900s. As it was discovered later, Hans did not master math but in about 90 percent of the cases, he was able to derive the correct answer from the questioner's reaction.

The team around Klaus-Robert Müller and Wojciech Samek also discovered similar "Clever Hans" strategies in various AI systems. For example, an AI system that won several international image classification competitions a few years ago pursued a strategy that can be considered naïve from a human's point of view. It classified images mainly on the basis of context. Images were assigned to the category "ship" when there was a lot of water in the picture. Other images were classified as "train" if rails were present. Still other pictures were assigned the correct category by their copyright watermark. The real task, namely to detect the concepts of ships or trains, was therefore not solved by this AI system -- even if it indeed classified the majority of images correctly.

The researchers were also able to find these types of faulty problem-solving strategies in some of the state-of-the-art AI algorithms, the so-called deep neural networks -- algorithms that were so far considered immune against such lapses. These networks based their classification decision in part on artifacts that were created during the preparation of the images and have nothing to do with the actual image content.

"Such AI systems are not useful in practice. Their use in medical diagnostics or in safety-critical areas would even entail enormous dangers," said Klaus-Robert Müller. "It is quite conceivable that about half of the AI systems currently in use implicitly or explicitly rely on such 'Clever Hans' strategies. It's time to systematically check that, so that secure AI systems can be developed."

With their new technology, the researchers also identified AI systems that have unexpectedly learned "smart" strategies. Examples include systems that have learned to play the Atari games Breakout and Pinball. "Here the AI clearly understood the concept of the game and found an intelligent way to collect a lot of points in a targeted and low-risk manner. The system sometimes even intervenes in ways that a real player would not," said Wojciech Samek.

"Beyond understanding AI strategies, our work establishes the usability of explainable AI for iterative dataset design, namely for removing artefacts in a dataset which would cause an AI to learn flawed strategies, as well as helping to decide which unlabeled examples need to be annotated and added so that failures of an AI system can be reduced," said SUTD Assistant Professor Alexander Binder.

https://www.sciencedaily.com/releases/2019/03/190312103643.htm
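To make the "snow instead of wolf" failure concrete, here is a minimal sketch of the kind of per-feature contribution analysis the article is about, on fabricated tabular data rather than images, and using plain logistic regression rather than the researchers' layer-wise relevance propagation. The spurious "snow in background" feature is constructed to correlate with the "wolf" label, and the contribution breakdown exposes that the model leans on it.

```python
# Toy "Clever Hans" detector: train a linear classifier on data with a spurious
# feature, then inspect per-feature contributions (coefficient * feature value).
# Fabricated data and plain logistic regression; not the LRP method from the article.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
is_wolf = rng.integers(0, 2, size=n)

# Two weakly informative "animal" features and one strongly correlated background
# feature ("snow"), present in 95% of wolf photos but only 5% of husky photos.
ear_shape = is_wolf + rng.normal(scale=2.0, size=n)       # barely informative
snout_len = is_wolf + rng.normal(scale=2.0, size=n)       # barely informative
snow = (rng.random(n) < np.where(is_wolf == 1, 0.95, 0.05)).astype(float)

X = np.column_stack([ear_shape, snout_len, snow])
names = ["ear_shape", "snout_len", "snow_in_background"]

clf = LogisticRegression().fit(X, is_wolf)

# Average absolute contribution of each feature across the dataset.
mean_abs_contrib = np.mean(np.abs(clf.coef_[0] * X), axis=0)
for name, c in zip(names, mean_abs_contrib):
    print(f"{name:20s} mean |contribution| = {c:.2f}")
# The snow feature dominates: the classifier has learned the background, not the animal.
```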
 
  • Like
Likes mfb and Klystron
  • #35
Currently most AI systems run on conventional computers, the cloud, or systems that use components originally designed for other purposes, such as GPUs or FPGAs. These systems are often large and require large amounts of power. I have noted previously that progress in AI implementation will depend on the development of dedicated AI chips and systems. Intel has been working on a chip called Loihi, a neuromorphic chip whose architecture more closely imitates the human neural net. This chip has 130,000 neurons in a 60 mm2 area. Intel has taken 64 of these chips and created a system (code-named Pohoiki Beach) with 8 million neurons (think frog). This system is 1,000 times faster and uses 10,000 times less power than equivalent AI systems. By the end of the year they expect to have a 100 million neuron system. A Pohoiki system can be scaled up to 16,000 chips. Now you are talking about a serious neural net.

It is expected that this system will be able to overcome a serious problem with today's machine learning called catastrophic forgetting. If you try to add a new item to the repertoire of an AI system, you have to start the training process all over to include the new item; trying to add it as an addendum can cause the system to forget all that it has previously learned. It is also expected that it will be harder to fool this system with distorted images.
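A minimal illustration of catastrophic forgetting itself (independent of Loihi, with all data and parameters made up for the demo): train a small classifier incrementally on one set of classes, then on a new set, and watch accuracy on the first set collapse.

```python
# Toy catastrophic-forgetting demo: sequential training on task A, then task B.
# Synthetic data and scikit-learn's SGDClassifier; purely illustrative.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

def make_task(centers, labels, n=1000):
    """Gaussian blobs around `centers`, one blob per label."""
    X = np.vstack([rng.normal(loc=c, scale=0.5, size=(n, 2)) for c in centers])
    y = np.repeat(labels, n)
    return X, y

# Task A: classes 0 vs 1.  Task B: classes 2 vs 3, in a different region of the plane.
Xa, ya = make_task([(0, 0), (2, 2)], [0, 1])
Xb, yb = make_task([(6, 0), (8, 2)], [2, 3])

clf = SGDClassifier(random_state=0)
classes = np.array([0, 1, 2, 3])

clf.partial_fit(Xa, ya, classes=classes)          # learn task A
print("task A accuracy after A:", clf.score(Xa, ya))

for _ in range(20):                               # then keep training on task B only
    clf.partial_fit(Xb, yb)
print("task B accuracy after B:", clf.score(Xb, yb))
print("task A accuracy after B:", clf.score(Xa, ya))   # typically collapses
```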

For more see https://spectrum.ieee.org/tech-talk/robotics/artificial-intelligence/intels-neuromorphic-system-hits-8-million-neurons-100-million-coming-by-2020

More on the Loihi chip : https://www.intel.com/content/www/us/en/research/neuromorphic-computing.html
 
  • Like
Likes Klystron, QuantumQuest and Buzz Bloom
