Forums
Other Sciences
Computing and Technology
Bias, errors, etc. within ChatGPT & other AI chatbots
[QUOTE="Borg, post: 6865350, member: 185214"]
Some of the anecdotal stories that I've read on PF and other places lead me to believe that ChatGPT is doing something akin to [URL='https://ai.googleblog.com/2016/06/wide-deep-learning-better-together-with.html']Google's Wide & Deep model[/URL], where a model has a standard deep-learning component alongside a wide component that allows it to learn over time. In ChatGPT's case, it obviously has a large (300 billion+) neural network, but it also seems to learn during a conversation, so that a person is able to convince it of different beliefs. This seems probable to me since you can tell it that it's wrong about something and it will adjust accordingly. However, those 'beliefs' don't appear to carry over from one conversation to the next - I asked ChatGPT if it could see information in another chat on my account and it couldn't. This approach would have several benefits that I can see:
[LIST]
[*]The deep portion provides a very good base model that starts each conversation.
[*]The wide aspect allows the model to adjust itself to the user's responses (for good or bad). This might explain some of the odd conversations that have been posted.
[*]The adjustments made during a conversation aren't carried over to other conversations, which avoids the trolling issues that Microsoft's Tay suffered from. The OpenAI team controls the updates that get into the base model.
[/LIST]
[/QUOTE]
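For readers unfamiliar with the architecture the quote refers to, here is a minimal sketch of the Wide & Deep idea from Google's paper: a linear "wide" path and a small MLP "deep" path whose logits are summed before a sigmoid. This is purely a toy illustration of that general architecture - it is not a claim about ChatGPT's actual internals, and every name, layer size, and weight scale here is invented for the example.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class WideAndDeep:
    """Toy Wide & Deep model: a linear 'wide' component plus a one-hidden-layer
    MLP 'deep' component, combined by summing their logits."""

    def __init__(self, n_features, hidden=8, seed=0):
        rng = np.random.default_rng(seed)
        # Wide part: a plain linear model over the raw (or crossed) features.
        self.w_wide = rng.normal(scale=0.1, size=n_features)
        # Deep part: dense features -> hidden layer -> scalar logit.
        self.W1 = rng.normal(scale=0.1, size=(n_features, hidden))
        self.b1 = np.zeros(hidden)
        self.w2 = rng.normal(scale=0.1, size=hidden)
        self.b = 0.0

    def predict_proba(self, x):
        # The two components produce separate logits that are summed,
        # which is the defining trait of the Wide & Deep architecture.
        wide_logit = x @ self.w_wide
        deep_logit = relu(x @ self.W1 + self.b1) @ self.w2
        return sigmoid(wide_logit + deep_logit + self.b)

model = WideAndDeep(n_features=4)
p = model.predict_proba(np.ones(4))
print(p)  # a probability strictly between 0 and 1
```

In the paper's framing, the wide part memorizes specific feature combinations while the deep part generalizes; the quote's hypothesis is that something loosely analogous lets the model adapt within a conversation without those adaptations persisting into the base model.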