Latest Notable AI accomplishments

  • Thread starter: gleem
  • Tags: AI
AI Thread Summary
Recent advancements in AI and machine learning are notable, particularly in areas like pattern recognition and image analysis across various fields such as medicine and astronomy. Both Alibaba's AI and Microsoft's systems have recently surpassed human performance in the Stanford Question Answering Dataset, highlighting significant progress in natural language processing. The discussion emphasizes that while many overestimate the timeline for AI to transform society, the reality is that AI's evolution is complex and ongoing, with applications already impacting job markets. There is concern regarding the implications of AI in autonomous systems, especially in military contexts, where ethical considerations are paramount. Overall, the conversation reflects a growing awareness of AI's capabilities and its potential to reshape various aspects of life and work.
  • #51
This is why the translation of languages is also challenging for AI, and why equaling a human translator will be such an accomplishment.
 
  • #52
gleem said:
BTW use of language is considered an element of human intelligence.
So is playing chess.
ChatGPT is yet another specialized AI.
 
  • #53
GPT-4 will be released sometime this year, perhaps as soon as this spring. There are rumors that it will be disruptive. There is an overview of what might be released, including the possibility that it will be multimodal, i.e., using text, speech, and images, although OpenAI will not confirm this. A review of an interview with Sam Altman, CEO of OpenAI, can be found here: https://www.searchenginejournal.com/openai-gpt-4/476759/#close and the actual interview/podcast here: https://greylock.wpengine.com/greymatter/sam-altman-ai-for-the-next-era/

One thing Altman brought up is that these agents, as they are called, often have surprising characteristics. He emphasizes that GPT-4 will not be released until it is assured to be safe. Another interesting tidbit is that work is being done to try approaches to NLP beyond GPT. Issues he believes will arise with AI in the future are wealth distribution, along with access to and governance of AI.
 
  • #54
mfb said:

A TON of caution is required here - as described in this science.org article.
There are several pitfalls to using AI this way, but I think the one with the highest potential to blindside users of AI methods is this one:
At Mount Sinai, many of the infected patients were too sick to get out of bed, and so doctors used a portable chest x-ray machine. Portable x-ray images look very different from those created when a patient is standing up. Because of what it learned from Mount Sinai's x-rays, the algorithm began to associate a portable x-ray with illness. It also anticipated a high rate of pneumonia.
To put it simply, an AI assigned to evaluate x-rays for pneumonia will prefer "cheating" over meritorious evaluation whenever cheating yields better answers.
Ideally, the systems/software engineers would gain a sense of how the AI is making its determinations. But by AI standards, this would be quite counterproductive. The whole purpose of using the AI is to avoid that kind of analysis - to use an automated AI analysis in place of a detailed human examination of the many possible tip-offs to disease severity.

In the case of the cancer patients, did the staff who made the decisions on who, when, where, and how x-rays were to be made base those decisions on their own evaluation of the patients' prognoses? If so, is the AI picking up indications in the imagery (subtle or otherwise) of this staff knowledge?
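The failure mode described above is easy to reproduce with synthetic data. Below is a minimal sketch (every number is invented): when a confounding flag predicts the label better than the genuine-but-noisy signal does, any accuracy-maximizing learner will lean on the flag.

```python
# Toy demonstration of shortcut learning: a "portable x-ray" flag that
# correlates with illness beats the genuine (but noisy) medical signal,
# so a learner that only cares about accuracy will prefer the shortcut.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
sick = rng.integers(0, 2, n)                  # ground-truth label
signal = sick + rng.normal(0, 2.0, n)         # genuine but noisy finding
# Confound: sick patients got the portable machine 90% of the time.
portable = np.where(rng.random(n) < 0.9, sick, 1 - sick)

acc_signal = np.mean((signal > 0.5) == sick)  # honest-feature classifier
acc_shortcut = np.mean(portable == sick)      # "cheating" classifier
print(f"accuracy using the medical signal: {acc_signal:.2f}")
print(f"accuracy using the portable flag:  {acc_shortcut:.2f}")
```

With these made-up numbers the flag scores around 90% while the honest signal scores around 60%, which is exactly the incentive for "cheating" described above.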
 
  • #55
It seems that developments in AI, especially in capability, are coming faster. Hardware, on the other hand, still lags, especially in power requirements. A new biologically inspired device based on the memristor may help address this issue.
A team of researchers at the University of Oxford, IBM Research Europe, and the University of Texas, have announced an important feat: the development of atomically thin artificial neurons created by stacking two-dimensional (2D) materials. The results have been published in Nature Nanotechnology.
https://techxplore.com/news/2023-05-artificial-neurons-mimic-complex-brain.html

This memristor works with both electricity and light.

Such a memory would be useful for autonomous robots with limited power resources.
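For context on why memristors are attractive as low-power memory, here is a toy simulation of the classic linear ion-drift memristor model (Strukov et al., 2008). This is not the 2D-material device from the article, and all parameter values are illustrative; the point is that current through the device shifts its internal state, so its resistance records history even after power is removed.

```python
# Toy sketch of a memristor as a memory element, using the classic linear
# ion-drift model. `k` lumps mobility and device geometry into one
# illustrative constant; all values are invented for the demo.
R_on, R_off = 100.0, 16_000.0      # resistance at the two state extremes (ohm)
k = 1e7                            # lumped drift factor (illustrative)
dt = 1e-6                          # time step (s)
x = 0.1                            # internal state in [0, 1] (doped fraction)

def resistance(x):
    # Device resistance interpolates between R_on (x=1) and R_off (x=0).
    return R_on * x + R_off * (1 - x)

def step(x, v):
    # Current through the device drifts the internal state x.
    i = v / resistance(x)
    return min(max(x + k * i * dt, 0.0), 1.0)

R_before = resistance(x)
for _ in range(2000):              # "write": hold a positive bias
    x = step(x, 1.0)
R_after = resistance(x)
print(f"resistance: {R_before:.0f} ohm -> {R_after:.0f} ohm (state persists unpowered)")
```

The write drives the state to its low-resistance extreme, and since the state only changes when current flows, the stored value costs no power to retain.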
 
  • #56
anorlunda said:
http://www.lackuna.com/2012/04/13/5...-translation-blunders/#f1j6G4IAprvcoGlw.99
This link from the NSA mentions it:
https://www.nsa.gov/portals/75/docu...ssified-documents/tech-journals/mokusatsu.pdf

But I think it is a stretch that the poor translation changed the outcome. We can't get into the heads of the participants, so we'll never know for sure.
The Japanese written language is very ambiguous. A kanji character can stand for a dozen completely different words. I've noticed that the Google translator has a difficult time with it.

It's possible to write a sentence that corresponds to hundreds of differing spoken sentences. Then you can throw puns into the mix. This is a national pastime.

In Japan the study of written English is more or less required in high school. In Japanese popular music it is very common to include English phrases in the lyrics, and when they do this the practice of punning doesn't go away. There is a popular song called My Lover Is A Stapler. This makes no sense until you find out that the Japanese word for a stapler is "Hotchkiss," after the first brand of stapler to catch on in Japan. Then you can make the pun: my lover has a hot kiss. A bilingual pun! Though a real Japanese speaker might expose me as full of beans. I do know that a band named Band-Maid has an album called Maid in Japan, no doubt about those puns.



Many people think they are the best hard rock band in the world today. They are embarrassed by this first album and have suppressed it.

I have read that the world's most ambiguous language is Beijingese, in which a single word can have 253 meanings. Or something like that.
 
  • #57
cosmik debris said:
When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.
That's better/faster than commercial nuclear fusion, which has been 10 years away for the last 5 or 6 decades. :-p
 
  • #58
I was logging into my work computer, which is Windows based, and found the following:

Hello, this is Bing! I'm the new AI-powered chat mode of Microsoft Bing that can help you quickly get summarized answers and creative inspiration.

  • Got a question? Ask me anything - short, long, or anything in between 🤗.
  • Dive deeper. Simply ask follow-up questions to refine your results.
  • Looking for things to do? I can help you plan trips or find new things to do where you live.
  • Feeling stuck? Ask me to help with any project from meal planning to gift ideas.
  • Need inspiration? I can help you create a story, poem, essay, song, or picture.
Try clicking on some of these ideas below to find out why the new Bing is a smarter way to search.

AI has succeeded in being obnoxious.
 
  • #60
Astronuc said:
That's better/faster than commercial nuclear fusion, which is always 10 years away for the last 5 or 6 decades. :-p
If anything, the closer we get to it, the further away it gets.
 
  • #61
Researchers at Rice University, in conjunction with Intel, have developed a new algorithm for machine learning called the sub-linear deep learning engine (SLIDE), which avoids most of the matrix multiplication and the need for a GPU. In a comparison of this algorithm on a 44-core Intel Xeon CPU versus the standard, Google's TensorFlow software using an Nvidia V100 GPU, for a 100-million-parameter test case the CPU trained in less than one-third of the time it took the GPU.

https://www.unite.ai/researchers-cr...
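As a rough sketch of SLIDE's central trick, assuming nothing about Rice's actual implementation beyond what the article states: locality-sensitive hashing picks out the few neurons most likely to activate for a given input, so only those dot products are computed instead of the whole dense layer. Real SLIDE uses several hash tables and periodic rehashing; this toy uses a single SimHash table and invented sizes.

```python
# Minimal sketch of hash-based neuron selection (the idea behind SLIDE):
# bucket neurons by a random-hyperplane SimHash of their weight vectors,
# then for a query compute only the neurons in the query's bucket.
import numpy as np

rng = np.random.default_rng(1)
d, n_neurons, n_planes = 64, 10_000, 8

W = rng.normal(size=(n_neurons, d))        # layer weights (one row per neuron)
planes = rng.normal(size=(n_planes, d))    # random hyperplanes for SimHash

def simhash(v):
    """Sign pattern of v against the hyperplanes, packed into a bucket id."""
    bits = (planes @ v > 0).astype(int)
    return int("".join(map(str, bits)), 2)

# Build the hash table once: bucket id -> list of neuron indices.
table = {}
for i, w in enumerate(W):
    table.setdefault(simhash(w), []).append(i)

x = rng.normal(size=d)
active = table.get(simhash(x), [])
# A dense layer would do n_neurons dot products; we do only len(active).
outputs = {i: W[i] @ x for i in active}
print(f"computed {len(active)} of {n_neurons} neuron activations")
```

Because hash lookups and a handful of dot products replace the full matrix multiply, this kind of scheme can favor CPUs, whose strength is irregular memory access rather than dense linear algebra.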
 
  • #63
There are a few companies using a different approach to AI; one is Verses AI.
From their website: https://www.verses.ai/rd-overview

At the heart of VERSES: Active Inference

We are investigating this hypothesis using Active Inference, an approach to modeling intelligence based on the physics of information. With Active Inference, we can design and create software programs or agents capable of making decisions and taking real-time actions based on incoming data. These agents update their beliefs in real-time as they receive new data and use their understanding of the world to plan ahead and act appropriately within it.

Our approach allows us explicit access to the beliefs of agents. That is, it gives us tools for discovering what an agent believes about the world and how it would change its beliefs in light of new information. Our models are fundamentally Bayesian and thus tell us what an optimal agent should believe and how it should act to maintain a safe and stable environment.

Our goal is to leverage Active Inference to develop safe and sustainable AI systems at scale:
  • Opening the black box. Equip agents with interpretable models constructed from explicit labels, thus making our agents' decisions auditable by design.
  • Hybrid automated and expert systems. Active inference affords an explicit methodology for incorporating existing knowledge into AI systems and encourages structured explanations.
  • Optimal reasoning. Bayesian systems optimally combine what agents already know, i.e., prior knowledge, with what they learn from newly observed data to reason optimally in the face of uncertainty.
  • Optimal planning. We implement the control of actions using Bayesian reasoning and decision-making, allowing for optimal action selection in the presence of risk and uncertainty.
  • Optimal model selection. Bayesian active inference can automatically select the best model and identify "the right tool for the right job."
  • Explanatory power and parsimony. Instead of maximizing reward, active inference agents resolve uncertainty, balancing exploration and exploitation.
  • Intrinsic safety. Built on the principles of alignment, ethics, empathy, and inclusivity, agents maximize safety at all levels.
They believe they can achieve AGI within two years. Their current AI agent, GENIUS, is anticipated to be available to the public by the end of this year. It is currently in private beta testing by JPL, among others.
Go to the above link for more details.
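The "fundamentally Bayesian" claim in the quoted text boils down to belief updating by Bayes' rule. Here is a minimal, generic illustration of that updating step (this is not VERSES code; the states, observations, and likelihoods are all invented):

```python
# A toy agent maintains a belief over two hidden states and updates it
# with Bayes' rule as observations arrive, as the quoted text describes.
import numpy as np

states = ["safe", "hazard"]
belief = np.array([0.5, 0.5])             # prior over the hidden states
# Likelihood of each observation given each state: P(obs | state)
likelihood = {
    "all_clear": np.array([0.8, 0.3]),
    "alarm":     np.array([0.2, 0.7]),
}

def update(belief, obs):
    """One Bayesian belief update: posterior ∝ prior × likelihood."""
    posterior = belief * likelihood[obs]
    return posterior / posterior.sum()

for obs in ["alarm", "alarm", "all_clear"]:
    belief = update(belief, obs)
    print(obs, dict(zip(states, belief.round(3))))
```

Two alarms push the belief strongly toward "hazard", and a single all-clear pulls it back only partway, since the posterior always weighs new evidence against everything seen so far.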
 
  • #64
.Scott said:
To put it simply, an AI assigned to evaluate x-rays for pneumonia will prefer "cheating" over meritorious evaluation whenever cheating yields better answers. ... AI picking up indications in the imagery (subtle or otherwise) of this staff knowledge?

So blind trials involving AI will have to make sure the AI is also truly blind to the intervention or technology, just as the human participants are.
 
  • #65
Lots going on this week.

First up: the Stargate Project.
The Stargate Project is an American artificial intelligence (AI) joint venture created by OpenAI, SoftBank, Oracle and investment firm MGX.[1] The venture plans on investing up to $500 billion in AI infrastructure in the United States by 2029.

Then yesterday, OpenAI announced the release of its Operator platform.
Today we’re releasing Operator⁠, an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling. It is currently a research preview, meaning it has limitations and will evolve based on user feedback. Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.

Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes. The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses.
 
  • #66
I expect that AI customer service will be even worse.
 
  • #67
If this weren't mid-winter, I would have thought it an April Fools' joke. :cry:
 
  • #68
I came across an article, "AI hallucinations can't be stopped — but these techniques can limit their damage":
https://www.nature.com/articles/d41586-025-00068-5

Developers have tricks to stop artificial intelligence from making things up, but large language models are still struggling to tell the truth, the whole truth and nothing but the truth.

When computer scientist Andy Zou researches artificial intelligence (AI), he often asks a chatbot to suggest background reading and references. But this doesn’t always go well. “Most of the time, it gives me different authors than the ones it should, or maybe sometimes the paper doesn’t exist at all,” says Zou, a graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania.

It’s well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It’s the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences. “They sound like politicians,” says Santosh Vempala, a theoretical computer scientist at Georgia Institute of Technology in Atlanta. They tend to “make up stuff and be totally confident no matter what”.

The particular problem of false scientific references is rife. In one 2024 study, various chatbots made mistakes between about 30% and 90% of the time on references, getting at least two of the paper's title, first author or year of publication wrong. Chatbots come with warning labels telling users to double-check anything important. But if chatbot responses are taken at face value, their hallucinations can lead to serious problems, as in the 2023 case of a US lawyer, Steven Schwartz, who cited non-existent legal cases in a court filing after using ChatGPT.

. . . .
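One of the damage-limiting techniques the article alludes to, verifying model-cited references against a trusted source before accepting them, can be sketched in a few lines. The titles and the 0.8 threshold below are invented for illustration; a real system would query a bibliographic database rather than a hard-coded list.

```python
# Toy reference checker: fuzzily match a chatbot-cited title against a
# trusted list and flag anything that doesn't match closely enough.
from difflib import SequenceMatcher

trusted_titles = [
    "attention is all you need",
    "deep residual learning for image recognition",
]

def best_match(cited, corpus, threshold=0.8):
    """Return (title, score) of the closest trusted title, or (None, score)."""
    scored = [(t, SequenceMatcher(None, cited.lower(), t).ratio()) for t in corpus]
    title, score = max(scored, key=lambda p: p[1])
    return (title, score) if score >= threshold else (None, score)

for cited in ["Attention Is All You Need",
              "Deep Residual Networks for Image Tasks"]:
    match, score = best_match(cited, trusted_titles)
    status = "ok" if match else "POSSIBLE HALLUCINATION"
    print(f"{cited!r}: {status} (score {score:.2f})")
```

This only catches references that don't exist in the trusted corpus; a fabricated-but-plausible author or year attached to a real title needs a field-by-field check, which is why the article stresses human double-checking.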

As part of training to access AI tools, I was directed to this
https://www.nist.gov/trustworthy-and-responsible-ai
https://airc.nist.gov/AI_RMF_Knowledge_Base/AI_RMF
https://airc.nist.gov/home

https://new.nsf.gov/news/nsf-announces-7-new-national-artificial?sf176473159=1
 
  • #69
Stadium Rock written by AI. Better than the real thing, if you ask me.

 