Latest Notable AI accomplishments

  • Thread starter: gleem
  • Tags: AI
SUMMARY

The forum discussion highlights significant advancements in AI and machine learning, particularly in natural language processing, as demonstrated by Alibaba's and Microsoft's systems outperforming humans on the Stanford Question Answering Dataset (SQuAD). The conversation emphasizes the rapid progression of AI technologies, including neural networks' capabilities in pattern recognition and their potential to transform various industries. Participants express concerns about the societal impact of AI, particularly regarding job displacement and the implications of reaching a technological singularity. The discussion underscores the need for a deeper understanding of AI's evolving role in society and its potential to redefine human labor.

PREREQUISITES
  • Understanding of natural language processing techniques
  • Familiarity with machine learning concepts, particularly neural networks
  • Knowledge of the Stanford Question Answering Dataset (SQuAD)
  • Awareness of the implications of AI on labor markets and societal structures
NEXT STEPS
  • Research advancements in natural language processing frameworks like BERT and GPT-3
  • Explore the applications of neural networks in various industries, including healthcare and finance
  • Investigate the ethical implications of AI in autonomous systems and military applications
  • Study the economic impact of AI on job markets and workforce transformation strategies
USEFUL FOR

This discussion is beneficial for AI researchers, machine learning practitioners, policymakers, and anyone interested in the societal implications of AI technologies and their potential to reshape the future of work.

  • #61
Researchers at Rice University, in conjunction with Intel, have developed a new machine-learning algorithm, the Sub-LInear Deep learning Engine (SLIDE), which uses hashing to avoid most of the dense matrix multiplication in training and, with it, the need for a GPU. In a comparison on a 100-million-parameter test case, SLIDE running on a 44-core Intel Xeon CPU trained the network in less than one-third of the time taken by the standard approach, Google's TensorFlow software on an Nvidia V100 GPU.

https://www.unite.ai/researchers-cr...Us) without specialized acceleration hardware.
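
For a rough sense of the trick, here is a toy sketch of hashing-based neuron selection in the spirit of SLIDE (my own illustration in Python, not the Rice/Intel code): a SimHash table maps an input to the small set of neurons likely to activate, and only those rows of the weight matrix are touched, replacing the full dense product.

```python
import numpy as np

class LSHLayer:
    """Toy SLIDE-style layer: evaluate only the neurons whose weight vectors
    fall in the same SimHash bucket as the input, skipping the dense product."""

    def __init__(self, in_dim, out_dim, n_bits=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.standard_normal((out_dim, in_dim)) * 0.1
        self.planes = rng.standard_normal((n_bits, in_dim))  # random hyperplanes
        self.table = {}                                      # hash code -> neuron ids
        for j in range(out_dim):
            self.table.setdefault(self._code(self.W[j]), []).append(j)

    def _code(self, x):
        return tuple(int(b) for b in (self.planes @ x > 0))  # SimHash signature

    def forward(self, x):
        active = self.table.get(self._code(x), [])           # likely-active neurons
        out = np.zeros(self.W.shape[0])
        if active:
            out[active] = self.W[active] @ x                 # sparse partial product
        return out, active

layer = LSHLayer(in_dim=64, out_dim=1000)
x = np.random.default_rng(1).standard_normal(64)
_, active = layer.forward(x)
print(f"evaluated {len(active)} of 1000 neurons")            # typically a small fraction
```

As I understand it, the real system layers multiple hash tables, periodic rehashing during training, and heavy CPU parallelism on top of this basic idea, which is where the Xeon-versus-V100 result comes from.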
 
Reactions: Filip Larsen (Informative)
  • #62
Reactions: Greg Bernhardt (Like)
  • #63
A few companies are taking a different approach to AI; one is VERSES AI. From their website: https://www.verses.ai/rd-overview

At the heart of VERSES: Active Inference

We are investigating this hypothesis using Active Inference, an approach to modeling intelligence based on the physics of information. With Active Inference, we can design and create software programs or agents capable of making decisions and taking real-time actions based on incoming data. These agents update their beliefs in real-time as they receive new data and use their understanding of the world to plan ahead and act appropriately within it.

Our approach allows us explicit access to the beliefs of agents. That is, it gives us tools for discovering what an agent believes about the world and how it would change its beliefs in light of new information. Our models are fundamentally Bayesian and thus tell us what an optimal agent should believe and how it should act to maintain a safe and stable environment.

Our goal is to leverage Active Inference to develop safe and sustainable AI systems at scale:
  • Opening the black box. Equip agents with interpretable models constructed from explicit labels, thus making our agents’ decisions auditable by design.
  • Hybrid automated and expert systems. Active inference affords an explicit methodology for incorporating existing knowledge into AI systems and encourages structured explanations.
  • Optimal reasoning. Bayesian systems optimally combine what agents already know, i.e., prior knowledge, with what they learn from newly observed data to reason optimally in the face of uncertainty.
  • Optimal planning. We implement the control of actions using Bayesian reasoning and decision-making, allowing for optimal action selection in the presence of risk and uncertainty.
  • Optimal model selection. Bayesian active inference can automatically select the best model and identify “the right tool for the right job.”
  • Explanatory power and parsimony. Instead of maximizing reward, active inference agents resolve uncertainty, balancing exploration and exploitation.
  • Intrinsic safety. Built on the principles of alignment, ethics, empathy, and inclusivity, agents maximize safety at all levels.
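
The Bayesian belief updating the page describes is easy to illustrate. A toy sketch (mine, not VERSES code): an agent maintains a belief over two hidden states and revises it with each noisy observation, which is the "updating beliefs in real time" mechanism referred to above.

```python
import numpy as np

# Toy Bayesian belief updating (illustration only, not VERSES code).
# likelihood[o, s] = P(observation o | hidden state s)
likelihood = np.array([[0.8, 0.2],
                       [0.2, 0.8]])
belief = np.array([0.5, 0.5])            # prior over the two hidden states

for obs in [0, 0, 1, 0]:                 # stream of incoming data
    belief = likelihood[obs] * belief    # Bayes' rule: prior times likelihood
    belief /= belief.sum()               # normalize to a posterior
    print(f"obs={obs} -> belief={np.round(belief, 3)}")
```

A full active inference agent would also score candidate actions by expected free energy, trading information gain against goal-seeking; this sketch covers only the perception half.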
They believe that they can achieve AGI within two years. Their current AI agent, GENIUS, is anticipated to be available to the public by the end of this year; it is currently in private beta testing with JPL, among others.
Go to the above link for more details.
 
  • #64
.Scott said:
To put it simply, an AI assigned to evaluate x-rays for pneumonia will prefer "cheating" over meritorious evaluation whenever cheating yields better answers. ... AI picking up indications in the imagery (subtle or otherwise) of this staff knowledge?

So blind trials involving AI will have to make sure that the AI, and not just the humans, is truly blind to the intervention or technology.
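
This failure mode is usually called shortcut learning, and a synthetic toy example (scikit-learn, all data made up for illustration) shows how it plays out: give a classifier a weak genuine signal plus a spurious marker correlated with the label, say a ward tag in the image metadata, and it leans on the marker; make the marker uninformative at deployment and accuracy collapses.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
y = rng.integers(0, 2, n)                     # ground-truth "pneumonia" label
signal = y + rng.normal(0, 2.0, n)            # weak genuine radiological signal
marker = y + rng.normal(0, 0.1, n)            # spurious proxy, e.g. a ward tag

X_train = np.column_stack([signal, marker])
model = LogisticRegression().fit(X_train, y)

# At deployment the tag no longer carries any label information:
X_deploy = np.column_stack([signal, 0.5 + rng.normal(0, 0.1, n)])
print("train accuracy: ", model.score(X_train, y))    # near-perfect via the marker
print("deploy accuracy:", model.score(X_deploy, y))   # falls back to the weak signal
```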
 
Reactions: .Scott (Like)
  • #65
Lots going on this week.

First up: the Stargate Project.
The Stargate Project is an American artificial intelligence (AI) joint venture created by OpenAI, SoftBank, Oracle and investment firm MGX.[1] The venture plans on investing up to $500 billion in AI infrastructure in the United States by 2029.

Then yesterday, OpenAI announced the release of its Operator platform.
Today we’re releasing Operator, an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling. It is currently a research preview, meaning it has limitations and will evolve based on user feedback. Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.

Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes. The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses.
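
Operator's internals aren't public, but the observe-decide-act loop described above is the same pattern you can prototype with an ordinary browser-automation library. A rough sketch using Playwright, with the model's decision step stubbed out (none of this is OpenAI's code; the URL, selector, and task are invented for illustration):

```python
from playwright.sync_api import sync_playwright

def choose_action(page_text):
    """Placeholder for the model: map what the agent 'sees' to one action."""
    if "search" in page_text.lower():
        return ("fill", "input[name=q]", "weekly groceries")
    return ("done", None, None)

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("https://example.com")
    for _ in range(5):                           # bound the agent loop
        action, selector, value = choose_action(page.inner_text("body"))
        if action == "fill":
            page.fill(selector, value)           # type into the field
            page.keyboard.press("Enter")         # submit, then observe again
        elif action == "done":
            break
    browser.close()
```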
 
Reactions: gleem and Greg Bernhardt (Like)
  • #66
I expect that AI customer service will be even worse.
 
  • #67
If this weren't mid-winter, I would have thought it an April Fools' joke. :cry:
 
  • #68
I came across an article, "AI hallucinations can’t be stopped — but these techniques can limit their damage":
https://www.nature.com/articles/d41586-025-00068-5

Developers have tricks to stop artificial intelligence from making things up, but large language models are still struggling to tell the truth, the whole truth and nothing but the truth.

When computer scientist Andy Zou researches artificial intelligence (AI), he often asks a chatbot to suggest background reading and references. But this doesn’t always go well. “Most of the time, it gives me different authors than the ones it should, or maybe sometimes the paper doesn’t exist at all,” says Zou, a graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania.

It’s well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It’s the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences. “They sound like politicians,” says Santosh Vempala, a theoretical computer scientist at Georgia Institute of Technology in Atlanta. They tend to “make up stuff and be totally confident no matter what”.

The particular problem of false scientific references is rife. In one 2024 study, various chatbots made mistakes between about 30% and 90% of the time on references, getting at least two of the paper’s title, first author or year of publication wrong [1]. Chatbots come with warning labels telling users to double-check anything important. But if chatbot responses are taken at face value, their hallucinations can lead to serious problems, as in the 2023 case of a US lawyer, Steven Schwartz, who cited non-existent legal cases in a court filing after using ChatGPT.

. . . .
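
One practical mitigation for the fabricated-reference problem is simply to check claimed citations against a bibliographic database. A sketch against the public Crossref REST API (the word-overlap matching heuristic is my own, not from the article):

```python
import requests

def reference_exists(title, author=None):
    """Look up a claimed citation on Crossref; return the best-matching
    real title, or None if nothing plausible is found."""
    params = {"query.bibliographic": title, "rows": 3}
    if author:
        params["query.author"] = author
    r = requests.get("https://api.crossref.org/works", params=params, timeout=10)
    r.raise_for_status()
    for item in r.json()["message"]["items"]:
        found = (item.get("title") or [""])[0]
        # Crude heuristic: most of the claimed title's words must reappear.
        overlap = len(set(title.lower().split()) & set(found.lower().split()))
        if overlap >= 0.8 * len(title.split()):
            return found
    return None

print(reference_exists("Attention is all you need", author="Vaswani"))
```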

As part of training to access AI tools, I was directed to these:
https://www.nist.gov/trustworthy-and-responsible-ai
https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF
https://airc.nist.gov/home

https://new.nsf.gov/news/nsf-announces-7-new-national-artificial?sf176473159=1
 
  • #69
Stadium Rock written by AI. Better than the real thing, if you ask me.

 
Reactions: gleem (Like)
