ChatGPT Examples, Good and Bad

  • Thread starter: anorlunda
  • Tags: chatgpt
Summary
Experiments with ChatGPT reveal a mix of accurate and inaccurate responses, particularly in numerical calculations and logical reasoning. While it can sometimes provide correct answers, such as basic arithmetic, it often struggles with complex problems, suggesting a reliance on word prediction rather than true understanding. Users noted that ChatGPT performs better in textual fields like law compared to science and engineering, where precise calculations are essential. Additionally, it has shown potential in debugging code but can still produce incorrect suggestions. Overall, the discussion highlights the need for ChatGPT to incorporate more logical and mathematical reasoning capabilities in future updates.
  • #181
https://cacm.acm.org/news/the-impact-of-ai-on-computer-science-education/

The Impact of AI on Computer Science Education

The continued focus on incorporating AI into most job sectors will change how computer science is taught.
Last fall, Eric Klopfer decided to conduct an experiment in his undergraduate computer science class at the Massachusetts Institute of Technology (MIT). He divided the class into three groups and gave them a programming task to solve in the Fortran language, which none of them knew.

One group was allowed to use ChatGPT to solve the problem, the second group was told to use Meta’s Code Llama large language model (LLM), and the third group could only use Google. The group that used ChatGPT, predictably, solved the problem quickest, while it took the second group longer to solve it. It took the group using Google even longer, because they had to break the task down into components.

Then, the students were tested on how they solved the problem from memory, and the tables turned. The ChatGPT group “remembered nothing, and they all failed,” recalled Klopfer, a professor and director of the MIT Scheller Teacher Education Program and The Education Arcade.

Meanwhile, half of the Code Llama group passed the test. The group that used Google? Every student passed.
 
  • Like
Likes russ_watters, gleem and BillTre
  • #182
Dumbification. Nothing can stop it.

I read the Dialogues of Plato and got the distinct impression that people in 400 BC were smarter than we of today.

I walk a few miles every day. People are amazed by this. Coleridge walked thirty miles to pick up his mail.
 
  • #183
Hornbein said:
Dumbification. Nothing can stop it.

I read the Dialogues of Plato and got the distinct impression that people in 400 BC were smarter than we of today.
While I take your point, I think there is a big element of Survivor Bias there.

Flash forward another 2500 years, and you probably won't find the Dialogues of Bob in the Ancient Wisdom section of your local library.



(Although you might just find the teachings of Bill and Ted.)
[attached image]
 
  • Informative
  • Like
Likes BillTre and nsaspook
  • #184
nsaspook said:
https://cacm.acm.org/news/the-impact-of-ai-on-computer-science-education/

The Impact of AI on Computer Science Education

The continued focus on incorporating AI into most job sectors will change how computer science is taught.

I think the important quote from that article is this one:

Both Simmons and York say there is already a big demand for people who can do prompt engineering, interacting with LLMs to get the information you want. “That’s a job that didn’t exist two years ago,” said Simmons.

Prompt engineering is “the process of structuring text that can be understood by AI; it’s a new way of speaking,” said York. “We have abbreviated language because of the Internet, like ‘lol’. So we’re going to learn to speak in prompts.”

This makes the human side “more important than ever,” York added, because people need to determine where data is coming from and what is missing so they can get accurate information. “The computer can’t figure that out,” she noted.

Other jobs that will be needed include machine learning engineers, who focus on building, training, and validating models; MLOps (machine learning operations) engineers, who work on testing, deploying, and monitoring models in production; AI testers who can understand the shortcomings of a system’s output and fix it, and AI compliance officers who ensure adherence to laws and regulations, said Simmons.

“I firmly believe AI cannot be fully autonomous … there’s always going to be humans and machines working together and the machine is augmenting the human’s capabilities,” Simmons said. For AI to be successful at augmentation, humans have to determine how systems are designed, the role of the augmenter, how the human’s job description changes, and how to create a successful partnership. “Those are questions that need to be asked as new jobs are created,” Simmons said.

How is the shift to AI any different from the shift to high-level programming languages described in the following:

https://en.wikipedia.org/wiki/Programming_language#Early_developments said:
Initially, hardware resources were scarce and expensive, while human resources were cheaper. Therefore, cumbersome languages that were time-consuming to use, but were closer to the hardware for higher efficiency were favored. The introduction of high-level programming languages (third-generation programming languages—3GLs)—revolutionized programming. These languages abstracted away the details of the hardware, instead being designed to express algorithms that could be understood more easily by humans. For example, arithmetic expressions could now be written in symbolic notation and later translated into machine code that the hardware could execute. In 1957, Fortran (FORmula TRANslation) was invented. Often considered the first compiled high-level programming language, Fortran has remained in use into the twenty-first century.

I can only imagine the multiplication of AI programming languages, all with their specificities, and we'll have crazy Wikipedia pages like this one and this one about them. Then specialized prompt engineers will come along, each to their own programming AI, not knowing how to interact - at least efficiently - with the new languages coming out and having as much understanding about C++ or Java as I have with assembly now, i.e., none.
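As a loose illustration of the abstraction step in the Fortran passage quoted above (a Python analogue rather than Fortran, so the "machine code" here is only interpreter bytecode):

Python:
# A rough analogue of the Fortran point above: a symbolic arithmetic
# expression, and the lower-level instructions it gets translated into.
# Bytecode details vary between Python versions.
import dis

def area(a, b, c):
    # written in symbolic notation, close to how a human writes the formula
    return a * b + c

dis.dis(area)  # prints the stack-machine instructions the interpreter executes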
 
  • #185
jack action said:
I think the important quote from that article is this one:
TL;DR: GIGO :wink:
 
  • #186
jack action said:
I think the important quote from that article is this one:



How is the shift to AI any different from the shift to high-level programming languages described in the following:



I can only imagine the multiplication of AI programming languages, all with their specificities, and we'll have crazy Wikipedia pages like this one and this one about them. Then specialized prompt engineers will come along, each to their own programming AI, not knowing how to interact - at least efficiently - with the new languages coming out and having as much understanding about C++ or Java as I have with assembly now, i.e., none.
Eventually those specialized prompt engineers will evolve.
[attached image]
 
  • #187
800,000 years:
[attached image]

(Morlocks - from"The Time Ships", Baxter's sequel to H.G. Wells' "The Time Machine")
 
  • #188
OpenAI recently released a vastly improved image generation capability in its GPT-4o model. It is yet another incredible improvement. You can guide it through prompts and it will tweak the image while leaving other parts mostly untouched. I tried it with a highly political set of prompts this morning and the results were amazing. I'll see if I can generate an example of something a little less political later.

https://openai.com/index/introducing-4o-image-generation/
 
  • #189
Borg said:
I'll see if I can generate an example of something a little less political later.

https://openai.com/index/introducing-4o-image-generation/
Sorry. Am I reading this correctly?

The pic of Professor Wogglenot, the monocle-and-deerstalker-hat-sporting cat - is too political for PF?
 
  • #190


The capital of the state containing Dallas is ...?

The answer could just as easily have been Salem, in Oregon.

Since when is Texas "the" state?
 
  • Haha
  • Informative
Likes OmCheeto and jack action
  • #191
nsaspook said:


The capital of the state containing Dallas is ...?

The answer could just as easily have been Salem, in Oregon.

Since when is Texas "the" state?

I don't get it. You specified "the state containing Dallas".

Sideline; can AI translate her to English?
 
  • #192
DaveC426913 said:
I don't get it. You specified "the state containing Dallas".

Sideline; can AI translate her to English?
The capital of the state containing Dallas is ...?

If the question had been "The capital of a state containing Dallas is ...?", the AI response would be perfectly correct.

It just shows that, IMO, linguistic issues (they don't really understand anything) don't matter much to these types of programs; a query comes down to the statistical probabilities of the main tokens the machine selects. What's in between the main tokens is frosting to be discarded.
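As a toy sketch of what "statistical probabilities of the main tokens" can mean (the counts below are invented; this is nothing like a real model's implementation):

Python:
# Toy illustration only: pick a continuation purely from frequency statistics,
# with no notion of what the words mean. All counts are made up.
from collections import Counter

# Invented counts of answers that followed "capital", "state", "Dallas" in some corpus
answer_counts = Counter({"Austin": 90, "Salem": 3, "Atlanta": 2})

def most_likely_next(counts: Counter) -> str:
    # keep the single highest-probability token; everything else is discarded
    token, _ = counts.most_common(1)[0]
    return token

print(most_likely_next(answer_counts))  # -> "Austin"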
 
  • #193
I had ChatGPT write me a Python program that would rotate an exploded hyperdie, a hypercube with pips like a die. It took about six hours. I don't know Python so it would have taken me maybe 600 hours to do it on my own, which I never would have done. I was greatly impressed at the colossal reduction in the frustrations of computer programming. And no human can ever hope to match its quickness at these tasks.

On the other hand, once ChatGPT got off track it stayed off track. It makes a mistake because it doesn't understand something, and its flailing attempts to patch up something it doesn't understand often make things worse. The onset of such a syndrome is random, so I learned to save a copy before all but absolutely trivial steps. During these breakdowns ChatGPT would repeatedly state "I understand perfectly!", "I've got it now!" and so forth. I learned to ignore such stuff. If it makes a mistake, backtrack to the working copy and try again. If I'd known that, progress would have been even faster.

I was reminded of the epoch-making Go match with Lee Sedol. Though thoroughly beaten by AI, Lee did win one game, when the program had a similar meltdown. It got off track and spiraled down, down, down, ending up making moves even a beginner would avoid. Then there's the unctuous praise ChatGPT unendingly bestows on the user, declaring me "awesome" and a "badass". I'm told you can ask it to stop saying such things. Oh, and be especially careful about letting it "clean up your code."

Nevertheless AI was a huge net gain. It knows linear algebra and geometry better than I do. I expect it would be even stronger at a routine application like a computer game. I'm going to have Chat generate a pair of hyperdice and have them bounce around using simulated physics. Then I can shoot craps in four dimensions. (I've already figured out how to change the rules minimally to get almost the same odds and point lengths.)
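For illustration only (this is not Hornbein's program, just a minimal sketch of the kind of operation such an animation needs, with an assumed camera distance):

Python:
# Minimal sketch: rotate 4D hypercube vertices in the x-w plane and project to 3D.
# Illustrative only; not the hyperdie program discussed above.
import math
from itertools import product

def rotate_xw(p, theta):
    """Rotate point p = (x, y, z, w) by theta in the x-w plane."""
    x, y, z, w = p
    c, s = math.cos(theta), math.sin(theta)
    return (c * x - s * w, y, z, s * x + c * w)

def project_to_3d(p, d=3.0):
    """Simple perspective projection from 4D to 3D; d is an assumed camera distance."""
    x, y, z, w = p
    f = d / (d - w)
    return (x * f, y * f, z * f)

# Vertices of a unit hypercube centred at the origin
vertices = list(product((-0.5, 0.5), repeat=4))

for v in vertices[:4]:  # show a few vertices rotated by 30 degrees
    print(project_to_3d(rotate_xw(v, math.pi / 6)))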
 
Last edited:
  • Informative
Likes jack action and nsaspook
  • #194
I tested claude.ai with a programming task for which I knew there were only a few different examples of source code for that driver on the Internet, but many copies of those few examples.
ME:
Linux kernel driver for the DT9812 usb DAQ device that loads firmware for the new version of the DT9812 hardware. The dt9812 firmware loader USB ID is 1298. when the firmware is loaded the SUB-ID changes to 9812

IT:
I'll create a Linux kernel driver for the DT9812 USB DAQ device that handles firmware loading for the new hardware version. This driver will recognize the device with USB ID 1298 and load the appropriate firmware, after which the device will change its USB ID to 9812.
Pretty good response to my prompt.

What did it show? A classic example of a Linux USB device driver in C. It even copied the goto error statements for backing out of error conditions.

C:
/* Device probe function */
static int dt9812_probe(struct usb_interface *interface, const struct usb_device_id *id)
{
    struct dt9812_device *dev;
    struct usb_host_interface *iface_desc;
    struct usb_endpoint_descriptor *endpoint;
    int i, retval = -ENOMEM;

    /* Allocate memory for device specific structure */
    dev = kzalloc(sizeof(struct dt9812_device), GFP_KERNEL);
    if (!dev)
        return -ENOMEM;

    kref_init(&dev->kref);
    mutex_init(&dev->io_mutex);

    dev->udev = usb_get_dev(interface_to_usbdev(interface));
    dev->interface = interface;
    dev->firmware_loaded = false;

    /* Set up endpoint information */
    iface_desc = interface->cur_altsetting;
    
    for (i = 0; i < iface_desc->desc.bNumEndpoints; ++i) {
        endpoint = &iface_desc->endpoint[i].desc;

        if (!dev->bulk_in_endpointAddr &&
            usb_endpoint_is_bulk_in(endpoint)) {
            /* Found a bulk in endpoint */
            dev->bulk_in_endpointAddr = endpoint->bEndpointAddress;
            dev->bulk_in_size = usb_endpoint_maxp(endpoint);
        }

        if (!dev->bulk_out_endpointAddr &&
            usb_endpoint_is_bulk_out(endpoint)) {
            /* Found a bulk out endpoint */
            dev->bulk_out_endpointAddr = endpoint->bEndpointAddress;
        }
    }

    if (!(dev->bulk_in_endpointAddr && dev->bulk_out_endpointAddr)) {
        dev_err(&interface->dev, "Could not find both bulk-in and bulk-out endpoints\n");
        retval = -ENODEV;
        goto error;
    }

    /* Allocate bulk in buffer */
    dev->bulk_in_buffer = kmalloc(dev->bulk_in_size, GFP_KERNEL);
    if (!dev->bulk_in_buffer) {
        retval = -ENOMEM;
        goto error;
    }

    /* Save our data pointer in this interface device */
    usb_set_intfdata(interface, dev);

    /* Check if this is the firmware loader device */
    if (id->idProduct == USB_DT9812_FIRMWARE_LOADER_PRODUCT_ID) {
        dev_info(&interface->dev, "DT9812 firmware loader device detected\n");
        
        /* Attempt to load firmware */
        retval = dt9812_load_firmware(dev);
        if (retval < 0) {
            dev_err(&interface->dev, "Failed to load firmware: %d\n", retval);
            goto error;
        }
        
        dev_info(&interface->dev, "Firmware loaded, device should reconnect with new ID\n");
    } else {
        dev_info(&interface->dev, "DT9812 device with firmware loaded detected\n");
        /* Initialize the DAQ functionality here */
    }

    return 0;

error:
    if (dev)
        kref_put(&dev->kref, dt9812_delete);
    return retval;
}
 
  • #196
nsaspook said:
The capital of the state containing Dallas is ...?

If the question had been "The capital of a state containing Dallas is ...?", the AI response would be perfectly correct.
Oh, I see. There are multiple states with a city named Dallas.
 
  • #197
Hornbein said:
I had ChatGPT write me a Python program that would rotate an exploded hyperdie, a hypercube with pips like a die. It took about six hours. I don't know Python so it would have taken me maybe 600 hours to do it on my own, which I never would have done. I was greatly impressed at the colossal reduction in the frustrations of computer programming. And no human can ever hope to match its quickness at these tasks.

On the other hand, once ChatGPT got off track it stayed off track. It makes a mistake because it doesn't understand something, and its flailing attempts to patch up something it doesn't understand often make things worse. The onset of such a syndrome is random, so I learned to save a copy before all but absolutely trivial steps. During these breakdowns ChatGPT would repeatedly state "I understand perfectly!", "I've got it now!" and so forth. I learned to ignore such stuff. If it makes a mistake, backtrack to the working copy and try again. If I'd known that, progress would have been even faster.

I was reminded of the epoch-making Go match with Lee Sedol. Though thoroughly beaten by AI, Lee did win one game, when the program had a similar meltdown. It got off track and spiraled down, down, down, ending up making moves even a beginner would avoid. Then there's the unctuous praise ChatGPT unendingly bestows on the user, declaring me "awesome" and a "badass". I'm told you can ask it to stop saying such things. Oh, and be especially careful about letting it "clean up your code."

Nevertheless AI was a huge net gain. It knows linear algebra and geometry better than I do. I expect it would be even stronger at a routine application like a computer game. I'm going to have Chat generate a pair of hyperdice and have them bounce around using simulated physics. Then I can shoot craps in four dimensions. (I've already figured out how to change the rules minimally to get almost the same odds and point lengths.)

One of ChatGPT's most annoying programming foibles is this. It will tell you how to patch your code. This is tricky and error-prone. So you ask ChatGPT to patch the code itself. Usually this will work, but sometimes it will leave out parts of your program that are unrelated to the patch. I tried to get it to stop doing this, but it couldn't understand what I was talking about. So one must routinely save copies of working programs so it is easy to backtrack.
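A minimal sketch of that "save a working copy first" habit (the file names and layout here are just an example, not any particular tool):

Python:
# Sketch: copy a source file to a timestamped backup before applying an
# AI-suggested patch, so a broken result can be rolled back easily.
# Paths and naming are illustrative only.
import shutil
import time
from pathlib import Path

def backup(source: str, backup_dir: str = "backups") -> Path:
    """Copy source to backup_dir/<name>.<timestamp> and return the new path."""
    src = Path(source)
    dest_dir = Path(backup_dir)
    dest_dir.mkdir(exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = dest_dir / f"{src.name}.{stamp}"
    shutil.copy2(src, dest)
    return dest

# e.g. backup("hyperdie.py") before pasting in the model's rewritten version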
 
  • #198
Hornbein said:
One of ChatGPT's most annoying programming foibles is this. It will tell you how to patch your code. This is tricky and error-prone. So you ask ChatGPT to patch the code itself. Usually this will work, but sometimes it will leave out parts of your program that are unrelated to the patch. I tried to get it to stop doing this, but it couldn't understand what I was talking about. So one must routinely save copies of working programs so it is easy to backtrack.
So, no different than letting a co-worker work on your code. :wink:
 
  • #200
https://techcrunch.com/2025/04/18/openais-new-reasoning-ai-models-hallucinate-more/
OpenAI’s recently launched o3 and o4-mini AI models are state-of-the-art in many respects. However, the new models still hallucinate, or make things up — in fact, they hallucinate more than several of OpenAI’s older models.

Hallucinations have proven to be one of the biggest and most difficult problems to solve in AI, impacting even today’s best-performing systems. Historically, each new model has improved slightly in the hallucination department, hallucinating less than its predecessor. But that doesn’t seem to be the case for o3 and o4-mini.

According to OpenAI’s internal tests, o3 and o4-mini, which are so-called reasoning models, hallucinate more often than the company’s previous reasoning models — o1, o1-mini, and o3-mini — as well as OpenAI’s traditional, “non-reasoning” models, such as GPT-4o.

Perhaps more concerning, the ChatGPT maker doesn’t really know why it’s happening.

Maybe it's because these things don't really reason or understand anything.
 
  • Like
  • Haha
Likes OmCheeto, BillTre, russ_watters and 1 other person
  • #201
nsaspook said:
Maybe it's because these things don't really reason or understand anything.
There are a lot of humans in this world that the same could be said of.

Seriously though, we are still early in the evolution of these systems. Models and architectures that perform the best will be selected over the ones that don't. What we perceive as their 'reasoning' capabilities will continue to improve - just as people will continue to move the goalposts on what constitutes the ability to reason.

There is a lot of work that goes into designing these things. Building them is like trying to mimic the human brain, which we don't fully understand. Now think about how the human brain works when presented with a question. How would you design a system that can receive a question and then respond in a way that is similar to how a human brain would respond?

I am in the middle of building a relatively complex multi-agent system. In its simplest form of mimicking a human response, the system needs to accept a user question and answer it using previously learned or researched information. The process involves many smaller, specialized agents that are good at specific tasks like understanding dates, calculating numbers, web searches, etc. In many ways, the human brain operates in a similar manner with some areas that are good at recognizing faces, some that are good at math, some that are good at spatial problems, etc.

Once the information is gathered, there is typically a validation process with more agents. As noted in the article, when the system has the capability to search the internet, its accuracy can improve.
OpenAI’s GPT-4o with web search achieves 90% accuracy on SimpleQA, another one of OpenAI’s accuracy benchmarks.

Then, after gathering all of the information, models and humans alike have to decide which pieces of information are most relevant in answering the question - more agents in the AI case and specialized areas of the brain for humans.

Finally, if any of these processes generates bad information, there can be downstream failures in accuracy - this applies to models and humans alike. I personally see AI systems as evolving from a Savant Syndrome stage when they first arrived to now having far fewer social or intellectual impairments. Yes, at their core, they are still statistical engines but I don't see the human brain as being much different in its various components. Even with the best available information, people still make really bad judgements.
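A minimal sketch of the routing idea (this is not Borg's system; the "agents" here are just placeholder functions showing how a coordinator hands a question to a specialist):

Python:
# Toy illustration of a coordinator routing questions to specialized agents.
# Not the multi-agent system described above; names and logic are invented.
from datetime import date

def date_agent(question: str) -> str:
    return f"Today is {date.today().isoformat()}."

def math_agent(question: str) -> str:
    # extremely naive: handles questions like "what is 6 * 7"
    expr = question.lower().replace("what is", "").strip(" ?")
    try:
        # toy only -- never eval untrusted input in real code
        return str(eval(expr, {"__builtins__": {}}))
    except Exception:
        return "I couldn't parse that as arithmetic."

def route(question: str) -> str:
    q = question.lower()
    if "today" in q or "date" in q:
        return date_agent(question)
    if any(op in q for op in "+-*/"):
        return math_agent(question)
    return "No specialized agent matched; fall back to a general model."

print(route("What is 6 * 7?"))          # -> 42
print(route("What is the date today?"))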
 
  • Haha
  • Like
Likes OmCheeto and Hornbein
  • #202
Borg said:
So, no different than letting a co-worker work on your code. :wink:
Worse. I'll never let it touch my code again. Once it stripped out all the comments.

That being said, its programming understanding is superhuman. I have learned to rely on it very heavily.
 
  • #203
I've had some coworkers do some pretty stupid things in code that go well beyond anything that I've seen ChatGPT do. I used to be regularly brought into projects to fix the messes that others created. I've seen things that can't be unseen and have coded repairs that took days to months to complete. In one case, I literally removed 1 million lines of code from a heavily obfuscated program. Regardless of the person or entity that's making changes, it's still always a best practice to have regular backups.
 
  • #204
Borg said:
There are a lot of humans in this world that the same could be said of.

Seriously though, we are still early in the evolution of these systems. Models and architectures that perform the best will be selected over the ones that don't. What we perceive as their 'reasoning' capabilities will continue to improve - just as people will continue to move the goalposts on what constitutes the ability to reason.

There is a lot of work that goes into designing these things. Building them is like trying to mimic the human brain, which we don't fully understand. Now think about how the human brain works when presented with a question. How would you design a system that can receive a question and then respond in a way that is similar to how a human brain would respond?

I am in the middle of building a relatively complex multi-agent system. In its simplest form of mimicking a human response, the system needs to accept a user question and answer it using previously learned or researched information. The process involves many smaller, specialized agents that are good at specific tasks like understanding dates, calculating numbers, web searches, etc. In many ways, the human brain operates in a similar manner with some areas that are good at recognizing faces, some that are good at math, some that are good at spatial problems, etc.

Once the information is gathered, there is typically a validation process with more agents. As noted in the article, when the system has the capability to search the internet, its accuracy can improve.


Then, after gathering all of the information, models and humans alike have to decide which pieces of information are most relevant in answering the question - more agents in the AI case and specialized areas of the brain for humans.

Finally, if any of these processes generates bad information, there can be downstream failures in accuracy - this applies to models and humans alike. I personally see AI systems as evolving from a Savant Syndrome stage when they first arrived to now having far fewer social or intellectual impairments. Yes, at their core, they are still statistical engines but I don't see the human brain as being much different in its various components. Even with the best available information, people still make really bad judgements.
I don't agree at all with the comparison to human brain function. The machines just regenerate human intelligence; they don't create intelligence. What we see here, IMO, is a classic regression of functional operation in engineering: model failure from too much synthetic data that reinforces statistical data biases. GIGO. It's a very useful tool for experts, because we can detect the bad from the good even in complex and complicated results.

The real issue is trust: do you trust the machine to make critical decisions for you, and trust it enough to put your professional life on the line by using its responses unchecked by human expertise? I don't, and per the designers of these systems, you shouldn't.
 
Last edited:
  • Like
  • Haha
Likes OmCheeto and russ_watters
  • #206
nsaspook said:
I don't agree at all with the comparison to human brain function. The machines just regenerate human intelligence; they don't create intelligence. What we see here, IMO, is a classic regression of functional operation in engineering: model failure from too much synthetic data that reinforces statistical data biases. GIGO. It's a very useful tool for experts, because we can detect the bad from the good even in complex and complicated results.

The real issue is trust: do you trust the machine to make critical decisions for you, and trust it enough to put your professional life on the line by using its responses unchecked by human expertise? I don't, and per the designers of these systems, you shouldn't.
In general I don't trust Chat but for programming it's great. I can test it immediately. Either what it wrote works or it doesn't. No trust involved.

The machines just regenerate human intelligence; they don't create intelligence.

That could be true, but it seems to me it also applies to at least 99% of what people do. And Chat's technical expertise and breadth of knowledge in this corner of geometry and programming is overwhelming, revolutionary. I'm especially impressed by its ability to understand truly minimal directions, better than any human I've ever encountered, and its incisive ability to spot and explain my mistakes clearly, then suggest remedies. In contrast, recently I asked a question on MathStack. Their responses exposed their utter cluelessness on the subject. They then informed me that this was my fault. Thanks a lot, buddy. That WAS, however, better than what Chat did with the same question: it knew what I wanted but then generated a lot of nonsense in Lie group jargon that seemed believable to benighted me. That was the last time I asked it a question like that. It's too eager to please.
 
  • #207
Hornbein said:
In general I don't trust Chat but for programming it's great. I can test it immediately. Either what it wrote works or it doesn't. No trust involved.



That could be true, but it seems to me it also applies to at least 99% of what people do. And Chat's technical expertise and breadth of knowledge in this corner of geometry and programming is overwhelming, revolutionary. I'm especially impressed by its ability to understand truly minimal directions, better than any human I've ever encountered, and its incisive ability to spot and explain my mistakes clearly, then suggest remedies. In contrast, recently I asked a question on MathStack. Their responses exposed their utter cluelessness on the subject. They then informed me that this was my fault. Thanks a lot, buddy. That WAS, however, better than what Chat did with the same question: it knew what I wanted but then generated a lot of nonsense in Lie group jargon that seemed believable to benighted me. That was the last time I asked it a question like that. It's too eager to please.
Don't take this the wrong way.

And you, of course, are in the 1% exception.


The machine mimics that expertise by providing us easy access to that large amount of available information.

Try asking it questions in programming where there is specialized knowledge of the programming domain (like low-level embedded system details (clock cycles, register modes, interface configurations) on brand new processors with new types of radio communications modules) and little published public knowledge on the subject matter because it's under NDA and is proprietary.

It's a machine designed to give answers from the pool of human intelligence used to create it.

Very useful and very flawed in the area of trust.
 
  • #208
nsaspook said:
Don't take this the wrong way.

And you, of course, are in the 1% exception.


The machine mimics that expertise by providing us easy access to that large amount of available information.

Try asking it questions in programming where there is specialized knowledge of the programming domain (like low-level embedded system details (clock cycles, register modes, interface configurations) on brand new processors with new types of radio communications modules) and little published public knowledge on the subject matter because it's under NDA and is proprietary.

It's a machine designed to give answers from the pool of human intelligence used to create it.
I'm working on a geometric program that is structured sort of like a computer game. Chat is great at that. If there is something else that it can't do, this matters not to me.
 
  • #209
Hornbein said:
I'm working on a geometric program that is structured sort of like a computer game. Chat is great at that. If there is something else that it can't do, this matters not to me.
Exactly my point: the information you get has lots of published examples, so it works for your programming domain, but it doesn't understand programming in general (CS101 -> ...) well enough to make the human-intellect leap to a new programming domain using that existing information as the foundation to build on.
 
  • #210
nsaspook said:
I don't agree at all with the comparison to human brain function. The machines just regenerate human intelligence; they don't create intelligence. What we see here, IMO, is a classic regression of functional operation in engineering: model failure from too much synthetic data that reinforces statistical data biases. GIGO. It's a very useful tool for experts, because we can detect the bad from the good even in complex and complicated results.
My point is that many people in the AI domain are looking at the parallels with how the human brain functions in order to improve and evolve the reasoning capabilities of models. I fully agree that current model architectures don't have intelligence with respect to any common standards today, but I do not think that will always be the case.

nsaspook said:
The real issue is trust: do you trust the machine to make critical decisions for you, and trust it enough to put your professional life on the line by using its responses unchecked by human expertise? I don't, and per the designers of these systems, you shouldn't.
No, I would not trust them to have uncontrolled access and decision making on critical systems. Yes, I trust them to help me build those systems.

Anyone who uses them makes this decision based on their own risk tolerance. There's no way that I would put one in charge of my 401K, but there are people doing exactly that right now. These types of personal decisions, made every day, will have emergent behavior that impacts society. What those emergent impacts are, I have no idea. I do find it scary at times, but I try to work with what we have.
 
