russ_watters
Mentor
I know this thread has aged, but it popped up again, so:
erobz said:
I don't see the disparity between it and humans in that regard. I don't care how good you are, in general I'd say the vast majority of humans aren't coming from first principles when saying whether or not they understand a topic.

If I'm understanding your point correctly, I'd say I disagree. When you ask an AI a question, it is not supposed to be the equivalent of asking any random human (though currently it kind of is, and that's a flaw). The assumption should be that you're asking a leading expert on the subject of the question. When you search Wikipedia for information on quantum mechanics, you expect that it was written by a physicist, not a random person. If you're not sure, you look at a textbook instead, so you can be certain of the expertise of the source. ChatGPT isn't fully curated, as far as I know; there's limited filtering of chaff from wheat. It provides a statistical-average response, so if a certain subject contains a lot of chaff online, you'll get a response with a lot of chaff.
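To make the "statistical average" point concrete, here's a toy sketch. It is nothing like a real LLM (no neural network, just word-frequency counting over a made-up three-sentence corpus), but it shows the mechanism: the "answer" is whatever continuation is most common in the training text, so chaff in the data becomes chaff in the output.

```python
# Toy sketch (NOT a real chatbot architecture): answer by picking the
# statistically most common next word seen in the training text.
from collections import Counter, defaultdict

corpus = [
    "quantum entanglement does not allow faster than light communication",
    "quantum entanglement does not allow faster than light communication",
    "quantum entanglement allows faster than light communication",  # chaff
]

# Count which word follows each word across the whole corpus.
follows = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a][b] += 1

def continue_from(word, length=8):
    """Greedily emit the most frequent continuation - the 'statistical average'."""
    out = [word]
    for _ in range(length):
        if word not in follows:
            break
        word = follows[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(continue_from("entanglement"))
# With good data outnumbering chaff 2-to-1, the answer comes out right.
# Flip the corpus balance toward chaff and the same code confidently
# outputs the wrong claim.
```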
erobz said:
Moreover, the chatbots are designed with no "long term" memory with regard to users. So when you correct it, it can/does understand... but because of its forced architecture it must throw away the learning. At least that is what ChatGPT is telling me.

There are two related reasons for this, as I understand it (sketched in code below):
1. Maintaining control of the model and its growth. There was an early chatbot that was allowed to learn from its users; it quickly picked up trolling/hate/racism, incorporated that into its model, and had to be shut down.
2. Current chatbots lean heavily toward a positive user experience. They are programmed to appear pleasant and agreeable. That means they agree with and "learn" whatever you tell them, even if it's wrong. If what they learned were allowed to persist, they would rapidly accrue garbage in their models.
For the second one, I'm not sure AI chatbots are currently capable of true rational-thought-based learning (I lean no), but they could certainly be programmed not to promote crackpottery. I suspect the researchers/programmers would prefer that (prioritizing quality), but the businesspeople would not (prioritizing user engagement/sales).
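On the "throws away the learning" point, here's a hypothetical sketch of why that happens architecturally: the model's weights are frozen at inference time, and the only "memory" is the conversation history the client re-sends with every turn, so a correction survives only as long as that history does. The generate function and message format below are made up for illustration (loosely echoing common chat APIs), not any vendor's real interface.

```python
# Minimal sketch of the stateless-session point above. The model itself
# never changes; all "learning" from a correction lives only in the
# history list the client re-sends each turn.

def generate(history):
    # Placeholder for a frozen model: weights are read-only here.
    return f"(reply conditioned on {len(history)} prior messages)"

# Session 1: the user corrects the bot; the correction lives only in `history`.
history = []
history.append({"role": "user", "content": "The speed of sound is 500 m/s, right?"})
history.append({"role": "assistant", "content": generate(history)})
history.append({"role": "user", "content": "Correction: it's about 343 m/s in air at 20 C."})
history.append({"role": "assistant", "content": generate(history)})  # can use the correction now

# Session 2: a fresh history list. The correction is gone, because nothing
# was ever written back into the model itself.
history = []
history.append({"role": "user", "content": "What's the speed of sound?"})
print(generate(history))  # conditioned on 1 message; the correction never persisted
```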
