Latest Notable AI accomplishments

  • Thread starter gleem
  • Start date
  • #36
gleem
Science Advisor
Education Advisor
2,088
1,525
Another indirect contribution to AI implementation has been developed. Researchers in Zurich have found a way to store information in DNA molecules and to embed them in nanobeads of ceramic material. Possible applications include huge information storage densities and self-replicating machines.

https://physicsworld.com/a/embedded-dna-used-to-reproduce-3d-printed-rabbit/ Rabbit is not real.
 
  • #38
berkeman
Mentor
64,197
15,450
Rabbit is not real.
Thanks for the clarification! :oldbiggrin:
 
  • #39
gleem
Science Advisor
Education Advisor
2,088
1,525
Up until just recently, a successful model for AI has been the neural net, based on a network of interconnected neurons. This was because the neuron was identified as the leading component of information processing in the brain. The brain is composed mostly of two types of cells: neurons and glial cells. The glial cells (from the Greek for glue), of which there are several types, were originally believed to be support cells for the neurons, performing maintenance and protective functions. Fairly recently their role was found to also include communicating with the neurons, especially the astrocytes, which have many dendrite-like structures.

https://medicalxpress.com/news/2020-04-adult-astrocytes-key-memory.html

Developing a human-level neural net system for even a dedicated task has been challenging and hardware-limited. A human brain has roughly 86 billion neurons, each with possibly 10,000 or more synapses. You can see the problem with software models of a neural net using standard computer hardware. Even with neural net processors built on 10-nanometer technology there are still challenges, for example the brain being three-dimensional. Astrocytes are at least as numerous as neurons.
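A crude back-of-the-envelope estimate (my own numbers, not from the linked articles) shows why: just storing one weight per synapse already runs into petabytes, before any computation is done.

```python
# Rough estimate of what a synapse-for-synapse software model of a human
# brain would need. Illustrative figures only, not from the linked articles.
neurons = 86e9                 # ~86 billion neurons
synapses_per_neuron = 1e4      # ~10,000 synapses each (order of magnitude)
bytes_per_weight = 4           # one 32-bit float per synaptic weight

total_synapses = neurons * synapses_per_neuron            # ~8.6e14
weight_storage_pb = total_synapses * bytes_per_weight / 1e15
print(f"synaptic weights alone: ~{weight_storage_pb:.1f} petabytes")

# Updating every synapse once per millisecond (a rough biological timescale)
# would take on the order of 1e18 operations per second.
print(f"updates/s at 1 kHz: ~{total_synapses * 1e3:.1e}")
```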

https://techxplore.com/news/2020-07-astrocytes-behavior-robots-neuromorphic-chips.html

Now a group at Rutgers University has integrated some astrocyte functionality into a commercial neuromorphic chip from Intel to control the movement of a six-legged robot.


"As we continue to increase our understanding of how astrocytes work in brain networks, we find new ways to harness the computational power of these non-neuronal cells in our neuromorphic models of brain intelligence, and make our in-house robots behave more like humans," Michmizos said. "Our lab is one of the few groups in the world that has a Loihi spiking neuromorphic chip, Intel's research chip that processes data by using neurons, just like our brain, and this has worked as a great facilitator for us. We have fascinating years ahead of us."

One final note: they use the term "plastic" a few times; both in its standard neuroscience sense and as applied to AI, it refers to the ability to adapt.
 
  • #41
BWV
1,246
1,426

Drug-Discovery AI Designs 40,000 Potential Chemical Weapons In 6 Hours

In a recent study published in the journal Nature Machine Intelligence, a team from pharmaceutical company Collaborations Pharmaceuticals, Inc. repurposed a drug discovery AI. It successfully identified 40,000 new potential chemical weapons in just 6 hours, with some remarkably similar to the most potent nerve agent ever created.
According to an interview with the Verge, the researchers were shocked by how remarkably easy it was.

“For me, the concern was just how easy it was to do. A lot of the things we used are out there for free. You can go and download a toxicity dataset from anywhere. If you have somebody who knows how to code in Python and has some machine learning capabilities, then in probably a good weekend of work, they could build something like this generative model driven by toxic datasets,” said Fabio Urbina, lead author of the paper, to the Verge.

“So that was the thing that got us really thinking about putting this paper out there; it was such a low barrier of entry for this type of misuse.”
https://www.iflscience.com/drugdisc...0-potential-chemical-weapons-in-6-hours-63017
 
  • #43
gleem
Science Advisor
Education Advisor
2,088
1,525
I have mentioned before that useful applications will be accelerated by hardware development. Recently a company developed the largest processor chip to date, at 462 cm². It eliminates the need to use hundreds of GPUs for the calculations, and the need to mind their intricate interconnections, which costs extensive (and expensive) time to program the system. This chip will help accelerate AI research; however, it still requires a huge amount of power, about 20 kW.

Some researchers are giving AI access to other ways of interacting with the outside world. They are giving AI the ability to learn about itself, that is, to self-model.

Summary: Researchers have created a robot that is able to learn a model of its entire body from scratch, without any human assistance. In a new study, the researchers demonstrate how their robot created a kinematic model of itself, and then used its self-model to plan motion, reach goals, and avoid obstacles in a variety of situations. It even automatically recognized and then compensated for damage to its body.
https://www.sciencedaily.com/releases/2022/07/220713143941.htm
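As a very rough sketch of the idea (my own toy example, with made-up link lengths, not the researchers' method): "self-modeling" here amounts to the robot fitting a forward model that predicts where its body ends up for a given set of joint commands, from its own motion data, and then planning with that learned model rather than with the real body.

```python
import numpy as np

# Toy version of "self-modeling": a 2-joint planar arm learns a forward model
# of its own body (joint angles -> fingertip position) from its own motion
# data, then uses that model to plan a reach. Illustrative only.
L1, L2 = 1.0, 0.8                      # true link lengths (unknown to the learner)

def robot(theta):                      # stands in for the physical robot
    x = L1 * np.cos(theta[0]) + L2 * np.cos(theta[0] + theta[1])
    y = L1 * np.sin(theta[0]) + L2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

rng = np.random.default_rng(0)

# 1) Motor babbling: try random joint angles and record where the hand ends up.
thetas = rng.uniform(-np.pi, np.pi, size=(2000, 2))
hands = np.array([robot(t) for t in thetas])

# 2) The "self-model": predict hand position from the nearest recorded experience.
def self_model(theta):
    return hands[np.argmin(np.linalg.norm(thetas - theta, axis=1))]

# 3) Plan with the self-model only: pick the candidate angles it says reach the goal.
goal = np.array([1.2, 0.9])
candidates = rng.uniform(-np.pi, np.pi, size=(500, 2))
best = candidates[int(np.argmin([np.linalg.norm(self_model(c) - goal) for c in candidates]))]
print("planned angles:", best, "actual hand position:", robot(best))
```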
 
  • #44
gleem
Science Advisor
Education Advisor
2,088
1,525
While my previous post shows significant progress in reducing learning time using computers to model the human brain, MIT researchers have taken another tack, creating a system in which the learning takes place within a memory structure more in line with the architecture of a biological brain. Current computer-generated neural networks emulate the conductivity of a neuron's synapses by weighting each synapse through a computation. This involves using memory to store the weighting factor, as well as shuttling information between that memory and a CPU for the weighting calculation, as I understand it. The new approach uses what is known as a resistive memory, in which the weight is held as a conductance within the memory element itself, removing the need for a CPU, the movement of data, and the associated power requirement. This process is really analog rather than digital. The system uses a silicon-compatible inorganic substrate to build the artificial neurons, which are 1,000 times smaller than biological neurons and promise to process information much faster, with a power requirement much closer to a biological system's. Additionally, the system is massively parallel, reducing learning time further.
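A toy way to picture the difference, as I understand it (my own sketch, not MIT's design): in the digital case every weighted sum is a loop of memory fetches and multiply-accumulates, while in an idealized resistive crossbar the stored conductances perform the same weighted sum in one analog step via Ohm's and Kirchhoff's laws.

```python
import numpy as np

# Digital picture: weights live in a separate memory; each output requires
# fetching them and doing explicit multiply-accumulate operations in a processor.
def digital_layer(x, W):
    y = np.zeros(W.shape[0])
    for i in range(W.shape[0]):          # one output neuron at a time
        for j in range(W.shape[1]):      # fetch weight, multiply, accumulate
            y[i] += W[i, j] * x[j]
    return y

# Idealized analog crossbar picture: weights are stored as conductances G,
# inputs are applied as voltages V, and each output line's current I = G @ V
# *is* the weighted sum -- no weight fetches, no multiply-accumulate loop.
def crossbar_layer(voltages, conductances):
    return conductances @ voltages       # Ohm's law + Kirchhoff's current law

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
x = rng.normal(size=5)
print(np.allclose(digital_layer(x, W), crossbar_layer(x, W)))  # True: same math
```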

Earlier work published last year demonstrated the feasibility of a resistive memory. One year later, enough progress has been made that a working resistive neural network can be built with silicon fabrication techniques.

Bottom line: smaller size, lower power, faster learning. Most predictions put the development of artificial general intelligence at 2050 at the earliest, probably closer to 2100, if it happens at all. It is beginning to look like it might be earlier.

MIT's Quest for Intelligence Mission Statements
https://quest.mit.edu/research/missions
 
  • #45
gleem
Science Advisor
Education Advisor
2,088
1,525
A question that is often asked is when we might expect AGI. Well, there is some evidence that it might occur sooner than we think. Human language use is one of the most difficult tasks for AI, given the large number of exceptions, nuances, and contexts. This makes translation from one language to another challenging. The reason AGI might be reached a lot sooner than most AI experts suggest comes from a metric: the time it takes a human to correct a language translation generated by AI. It takes a human translator about one second per word to edit the translation of another human. In 2015 it took a human 3.5 seconds per word to edit a machine-generated translation. Today it takes 2 seconds. If the progress in accuracy continues at the same rate, then machine translations will be as good as a human's in about 7 years.

https://www.msn.com/en-us/news/tech...A16FldN?cvid=2c1db71908854908b3a14b864e9c1eec
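The article does not spell out the arithmetic behind that projection; a back-of-the-envelope version (my own, and sensitive to whether you assume the edit time falls linearly or by a constant factor each year) lands in the same ballpark:

```python
import math

# Back-of-envelope extrapolation of the edit-time metric (my arithmetic, not
# the article's). Editing human output: ~1 s/word. Editing machine output:
# 3.5 s/word in 2015, ~2 s/word in 2022.
human, mt_2015, mt_2022, span = 1.0, 3.5, 2.0, 2022 - 2015

# Linear trend: about 0.21 s/word shaved off per year.
slope = (mt_2015 - mt_2022) / span
t_linear = (mt_2022 - human) / slope

# Exponential trend: edit time shrinks by the same factor each year.
factor = (mt_2022 / mt_2015) ** (1 / span)
t_exp = math.log(human / mt_2022) / math.log(factor)

print(f"linear: ~{t_linear:.0f} years, exponential: ~{t_exp:.0f} years")
# Either way, machine output reaches the 1 s/word human baseline within
# roughly 5-9 years -- the same ballpark as the article's figure.
```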

Some will find it difficult to accept an AGI and will find numerous reasons to reject the idea. But we do not understand how we do what we do any better than we understand what the machines are doing. The "proof" will probably come in some sort of intellectual contest, perhaps a debate between an AGI and a human.

Go AGI.
 
  • #46
Hornbein
1,125
829
A question that is often asked is when we might expect AGI. Well, there is some evidence that it might occur sooner than we think. Human language use is one of the most difficult tasks for AI, given the large number of exceptions, nuances, and contexts. This makes translation from one language to another challenging. The reason AGI might be reached a lot sooner than most AI experts suggest comes from a metric: the time it takes a human to correct a language translation generated by AI. It takes a human translator about one second per word to edit the translation of another human. In 2015 it took a human 3.5 seconds per word to edit a machine-generated translation. Today it takes 2 seconds. If the progress in accuracy continues at the same rate, then machine translations will be as good as a human's in about 7 years.

https://www.msn.com/en-us/news/tech...A16FldN?cvid=2c1db71908854908b3a14b864e9c1eec

Some will find it difficult to accept an AGI and will find numerous reasons to reject the idea. But we do not understand how we do what we do any better than we understand what the machines are doing. The "proof" will probably come in some sort of intellectual contest, perhaps a debate between an AGI and a human.

Go AGI.
Choice of language matters. I have noted that Google translates Japanese poorly. This isn't surprising. The written language is extremely ambiguous, so much so that constructing sentences with dozens if not hundreds of possible meanings is a national pastime.
 
  • #47
36,253
13,309
If the progress in accuracy continues at the same rate, then machine translations will be as good as a human's in about 7 years.
Machines have beaten humans in chess for 25 years. Neither of these tasks is AGI.

ChatGPT is a recent example: It produces text with great grammar that is full of factual errors - it knows grammar but has a very poor understanding of content.
 
  • #48
gleem
Science Advisor
Education Advisor
2,088
1,525
Machines have beaten humans in chess for 25 years. Neither of these tasks is AGI.

ChatGPT is a recent example: It produces text with great grammar that is full of factual errors - it knows grammar but has a very poor understanding of content.
Deep Blue, IBM's computer that beat Kasparov, was an 11.5 GFLOPS machine and would be incapable of what ChatGPT can do. BTW, use of language is considered an element of human intelligence. GPT is not capable of reflecting on its responses the way humans do; if we misspeak, we can correct ourselves. Keep in mind that using the internet, with its tainted data, to train it is really a bad way to train anything or anybody. When humans are taught, they are generally provided with vetted data. If we were taught garbage we would spew garbage, and actually, some do anyway.
 
  • #49
gleem
Science Advisor
Education Advisor
2,088
1,525
Choice of language matters. I have noted that Google translates Japanese poorly. This isn't surprising. The written language is extremely ambiguous, so much so that constructing sentences with dozens if not hundreds of possible meanings is a national pastime.

From "The History of Computer Langauge Translation" https://smartbear.com/blog/the-history-of-computer-language-translation/
Human errors in translation can be and have been cataclysmic. In July 1945, during World War II, the United States issued the Potsdam Declaration, demanding the surrender of Japan. Japanese Premier Kantaro Suzuki called a news conference and issued a statement that was supposed to be interpreted as, "No comment. We're still thinking about it." That is not what got back to Harry Truman. Suzuki used the word "mokusatsu." The problem is, "mokusatsu" can also mean "We're ignoring it in contempt." Less than two weeks later, the first atomic bomb was dropped.
 
  • #50
anorlunda
Staff Emeritus
Insights Author
11,209
8,614
  • #51
gleem
Science Advisor
Education Advisor
2,088
1,525
This is why the translation of languages is challenging for AI as well, and why equaling a human translator will be such an accomplishment.
 
  • #53
gleem
Science Advisor
Education Advisor
2,088
1,525
GPT-4 will be released sometime this year, possibly as soon as this spring. There are rumors that it will be disruptive. There is an overview of what might be released, including the possibility that it will be multimodal, i.e., using text, speech, and images, although OpenAI will not confirm this. A review of an interview with Sam Altman, CEO of OpenAI, can be found here: https://www.searchenginejournal.com/openai-gpt-4/476759/#close and the actual interview/podcast here: https://greylock.wpengine.com/greymatter/sam-altman-ai-for-the-next-era/

One thing that Altman has brought up is that these agents, as they are called, often have surprising characteristics. He emphasizes that GPT-4 will not be released until it is assured to be safe. Another interesting tidbit is that work is being done on approaches to NLP beyond GPT. Issues that he believes will arise with AI in the future are wealth distribution, along with access to and governance of AI.
 
