This is why the translation of languages is challenging for AI as well, and why equaling a human translator will be such an accomplishment.
gleem said:
BTW use of language is considered an element of human intelligence.

So is playing chess.
mfb said:
To put it simply, an AI assigned to evaluate x-rays for pneumonia will prefer "cheating" over meritorious evaluation whenever cheating yields better answers.

At Mount Sinai, many of the infected patients were too sick to get out of bed, and so doctors used a portable chest x-ray machine. Portable x-ray images look very different from those created when a patient is standing up. Because of what it learned from Mount Sinai's x-rays, the algorithm began to associate a portable x-ray with illness. It also anticipated a high rate of pneumonia.
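This failure mode is often called shortcut learning, and it is easy to reproduce in miniature. Below is a toy sketch in Python, not the Mount Sinai model: the features, probabilities, and the "portable scanner" flag are all invented for illustration. A logistic regression given a weak medical signal plus a spuriously correlated scanner flag leans on the flag, and its accuracy drops once the shortcut disappears.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# A weakly informative "medical" feature and a noisy ground-truth label.
lung_opacity = rng.normal(0.0, 1.0, n)
pneumonia = (lung_opacity + rng.normal(0.0, 2.0, n)) > 0.5

# A scanner flag spuriously correlated with the label, because the
# sickest patients were imaged with the portable machine.
portable = np.where(pneumonia,
                    rng.random(n) < 0.9,   # 90% of sick patients: portable
                    rng.random(n) < 0.1)   # 10% of healthy patients: portable

X = np.column_stack([lung_opacity, portable.astype(float)])
model = LogisticRegression().fit(X, pneumonia)

# The learned weight on the scanner flag dwarfs the medical feature.
print(dict(zip(["lung_opacity", "portable_flag"], model.coef_[0])))

# "Deploy" at a hospital where everyone is imaged standing up (flag = 0).
X_deploy = np.column_stack([lung_opacity, np.zeros(n)])
print("accuracy with shortcut available: %.2f" % model.score(X, pneumonia))
print("accuracy without the shortcut:    %.2f" % model.score(X_deploy, pneumonia))
```

On this synthetic data the model scores well while the flag is available and degrades sharply once it is zeroed out, which is exactly the transfer failure the Mount Sinai anecdote describes.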
https://techxplore.com/news/2023-05-artificial-neurons-mimic-complex-brain.html

A team of researchers at the University of Oxford, IBM Research Europe, and the University of Texas has announced an important feat: the development of atomically thin artificial neurons created by stacking two-dimensional (2D) materials. The results have been published in Nature Nanotechnology.
anorlunda said:
http://www.lackuna.com/2012/04/13/5...-translation-blunders/#f1j6G4IAprvcoGlw.99

The Japanese written language is very ambiguous. A kanji character can stand for a dozen completely different words. I've noticed that the Google translator has a difficult time with it. This link from the NSA mentions it:
https://www.nsa.gov/portals/75/docu...ssified-documents/tech-journals/mokusatsu.pdf
But I think it is a stretch that the poor translation changed the outcome. We can't get into the heads of the participants, so we'll never know for sure.
cosmik debris said:
When I started programming in 1969 I was told by a professor that a career in programming was unlikely because in 5 years the AIs would be doing it. It has been 5 years ever since.

That's better/faster than commercial nuclear fusion, which has been 10 years away for the last 5 or 6 decades.
Hello, this is Bing! I’m the new AI-powered chat mode of Microsoft Bing that can help you quickly get summarized answers and creative inspiration.
Try clicking on some of these ideas below to find out why the new Bing is a smarter way to search.
- Got a question? Ask me anything - short, long, or anything in between.
- Dive deeper. Simply ask follow-up questions to refine your results.
- Looking for things to do? I can help you plan trips or find new things to do where you live.
- Feeling stuck? Ask me to help with any project, from meal planning to gift ideas.
- Need inspiration? I can help you create a story, poem, essay, song, or picture.
Astronuc said:
That's better/faster than commercial nuclear fusion, which has been 10 years away for the last 5 or 6 decades.

If anything, the closer we get to it, the further away it gets.
They believe that they can achieve AGI in two years. Their current AI agent, GENIUS, is anticipated to be available to the public by the end of this year. It is currently under private beta testing by JPL, among others.

At the heart of VERSES: Active Inference
We are investigating this hypothesis using Active Inference, an approach to modeling intelligence based on the physics of information. With Active Inference, we can design and create software programs or agents capable of making decisions and taking real-time actions based on incoming data. These agents update their beliefs in real-time as they receive new data and use their understanding of the world to plan ahead and act appropriately within it.
Our approach allows us explicit access to the beliefs of agents. That is, it gives us tools for discovering what an agent believes about the world and how it would change its beliefs in light of new information. Our models are fundamentally Bayesian and thus tell us what an optimal agent should believe and how it should act to maintain a safe and stable environment.
Our goal is to leverage Active Inference to develop safe and sustainable AI systems at scale:
- Opening the black box. Equip agents with interpretable models constructed from explicit labels, thus making our agents’ decisions auditable by design.
- Hybrid automated and expert systems. Active inference affords an explicit methodology for incorporating existing knowledge into AI systems and encourages structured explanations.
- Optimal reasoning. Bayesian systems optimally combine what agents already know, i.e., prior knowledge, with what they learn from newly observed data to reason optimally in the face of uncertainty.
- Optimal planning. We implement the control of actions using Bayesian reasoning and decision-making, allowing for optimal action selection in the presence of risk and uncertainty.
- Optimal model selection. Bayesian active inference can automatically select the best model and identify “the right tool for the right job.”
- Explanatory power and parsimony. Instead of maximizing reward, active inference agents resolve uncertainty, balancing exploration and exploitation.
- Intrinsic safety. Built on the principles of alignment, ethics, empathy, and inclusivity, agents maximize safety at all levels.
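Stripped of the marketing language, the core operation the passage describes ("agents update their beliefs in real-time as they receive new data") is Bayes' rule applied repeatedly. Here is a minimal sketch in Python, assuming a toy two-state world; the state names, observations, and probabilities are invented for illustration, and this is not VERSES' GENIUS code:

```python
import numpy as np

states = ["safe", "unsafe"]
prior = np.array([0.5, 0.5])            # the agent's initial belief

# P(observation | state); rows = states, columns = ("alarm", "quiet")
likelihood = np.array([[0.1, 0.9],      # safe   -> alarm is rare
                       [0.8, 0.2]])     # unsafe -> alarm is common

def update(belief, obs_index):
    """Bayes' rule: posterior is proportional to likelihood times prior."""
    posterior = likelihood[:, obs_index] * belief
    return posterior / posterior.sum()

def entropy(p):
    """Remaining uncertainty in the belief, in nats."""
    return -(p * np.log(p + 1e-12)).sum()

belief = prior
for obs in [0, 0, 1]:                    # observe: alarm, alarm, quiet
    belief = update(belief, obs)
    print(dict(zip(states, belief.round(3))),
          "uncertainty:", round(entropy(belief), 3))
```

Because the belief is an explicit probability vector rather than opaque network weights, it can be printed and audited after every observation, which is the "opening the black box" and "explicit access to the beliefs of agents" property the passage advertises.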
.Scott said:
To put it simply, an AI assigned to evaluate x-rays for pneumonia will prefer "cheating" over meritorious evaluation whenever cheating yields better answers. ... AI picking up indications in the imagery (subtle or otherwise) of this staff knowledge?
Today we’re releasing Operator, an agent that can go to the web to perform tasks for you. Using its own browser, it can look at a webpage and interact with it by typing, clicking, and scrolling. It is currently a research preview, meaning it has limitations and will evolve based on user feedback. Operator is one of our first agents, which are AIs capable of doing work for you independently—you give it a task and it will execute it.
Operator can be asked to handle a wide variety of repetitive browser tasks such as filling out forms, ordering groceries, and even creating memes. The ability to use the same interfaces and tools that humans interact with on a daily basis broadens the utility of AI, helping people save time on everyday tasks while opening up new engagement opportunities for businesses.
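For a sense of the plumbing such an agent needs, here is a bare-bones sketch of the observe/type/click/scroll loop using the open-source Playwright library. To be clear, this is an assumption, not OpenAI's implementation: Operator reportedly has a model choose each action from screenshots, whereas this sketch hard-codes the actions, and the URL and selectors are placeholders.

```python
# Requires: pip install playwright && playwright install
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page()

    # "Look at a webpage and interact with it by typing, clicking, and scrolling":
    page.goto("https://example.com")
    page.screenshot(path="observation.png")   # what the model would "see"
    page.mouse.wheel(0, 500)                   # scroll down 500 px
    # page.fill("input[name='q']", "query")   # type into a field (if present)
    # page.click("text=More information")     # click an element (if present)

    browser.close()
```

An agent wraps this loop: capture a screenshot, ask the model for the next action, execute it, and repeat until the task is done or the user is asked to take over.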
When computer scientist Andy Zou researches artificial intelligence (AI), he often asks a chatbot to suggest background reading and references. But this doesn’t always go well. “Most of the time, it gives me different authors than the ones it should, or maybe sometimes the paper doesn’t exist at all,” says Zou, a graduate student at Carnegie Mellon University in Pittsburgh, Pennsylvania.
It’s well known that all kinds of generative AI, including the large language models (LLMs) behind AI chatbots, make things up. This is both a strength and a weakness. It’s the reason for their celebrated inventive capacity, but it also means they sometimes blur truth and fiction, inserting incorrect details into apparently factual sentences. “They sound like politicians,” says Santosh Vempala, a theoretical computer scientist at Georgia Institute of Technology in Atlanta. They tend to “make up stuff and be totally confident no matter what”.
The particular problem of false scientific references is rife. In one 2024 study, various chatbots made mistakes between about 30% and 90% of the time on references, getting at least two of the paper's title, first author, or year of publication wrong [1]. Chatbots come with warning labels telling users to double-check anything important. But if chatbot responses are taken at face value, their hallucinations can lead to serious problems, as in the 2023 case of a US lawyer, Steven Schwartz, who cited non-existent legal cases in a court filing after using ChatGPT.
. . . .
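Since the standing advice is to double-check anything important, one practical way to vet a chatbot-supplied citation is to look its DOI up in a bibliographic database. Below is a minimal sketch using the public Crossref REST API; the function name and example DOI are mine, not the article's, and a real checker would also compare authors and publication year:

```python
import requests

def check_reference(doi: str, claimed_title: str) -> bool:
    """Return True if the DOI resolves on Crossref and its recorded title
    loosely matches the title the chatbot claimed."""
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code != 200:
        return False                      # DOI not found: likely hallucinated
    real_title = (resp.json()["message"].get("title") or [""])[0]
    return claimed_title.lower() in real_title.lower()

# Example with a real paper (LeCun, Bengio & Hinton, Nature 2015):
print(check_reference("10.1038/nature14539", "Deep learning"))
```

A lookup like this catches the most common failure the study describes: a reference whose DOI does not exist, or whose metadata does not match the paper the chatbot claims to be citing.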