Forums
The Lounge
Art, Music, History, and Linguistics
With ChatGPT, is the college essay dead?
[QUOTE="Jarvis323, post: 6840758, member: 475688"] I think that its lines, when acknowledging a mistake, are either hard-coded or reinforced. That is, it was the decision of the developers to make it say that specific line, rather than an emergent intelligent behavior. In that sense, those patterns are not really very reflective of its "intelligence," in my opinion, and you could in a sense say that "it" isn't saying those lines so much as someone else is taking over and speaking for it. Its answers are based on the text you've given it as context. It seems that sometimes, when you tell it to try again, it uses its last answer as part of the context without properly framing it as a wrong previous response, and instead of fixing the problem it reinforces the original mistake. Instead of telling it to try again, you can just click the regenerate button. It will be interesting to see the difference. Or you can edit your original prompt and try phrasing things differently, or try giving it clues. Getting it to give the best answers is sort of an art at this point, and a lot is trial and error. The other day I was asking it to list all of the nouns in the first paragraph of The Lord of the Rings in alphabetical order. It would tell me it doesn't know the first paragraph because it doesn't have access to the internet (which is another developers-talking situation). So next I asked it to write the first paragraph of The Lord of the Rings, and it did so. Subtle differences in the question can sometimes determine whether its pre-filter gets activated or not (or whether its evaluated probability of generating the reinforced rote answers dominates; I'm not sure exactly). Then I asked it to list the nouns in alphabetical order, and it did so with a few mistakes. And then I asked it to reverse each word and re-sort the list, and it made a lot more mistakes.
Then I asked it to list the verbs in alphabetical order, and it glitched out and started repeating the same word until it reached its max response length. No matter how I tried to tell it not to do that, it would always enter the same loop. I asked it, for example, to end any future response if the same word were repeated more than twice, and it still looped. Basically, telling it to detect a pattern in its own output and then modify its behavior is not something it can do, I assume. Maybe that is because once it begins its response, the context is fixed until the response is complete? If it re-evaluated the context after each word to include everything said so far, it might be able to overcome that limitation. I am just speculating. I think it has some major limitations now, but within the scope of its abilities it is very impressive. And a lot of the limitations are possibly fairly trivial to overcome in the future. [/QUOTE]
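To illustrate the point about the repetition loop: a model like this generates one token at a time, and each step does re-read everything generated so far, so a "stop if the same word repeats more than twice" rule is easy to enforce in the decoding harness outside the model, even if the model can't reliably follow that instruction itself. Here's a minimal toy sketch (the "model" is just a stub that deliberately gets stuck, mimicking the glitch described above; nothing here is real ChatGPT internals):

```python
# Toy sketch of an autoregressive decode loop with a repetition guard.
# fake_next_token is a stand-in for a real model: it starts looping
# on the same word after a few steps, like the glitch in the post.

def fake_next_token(context):
    vocab = ["the", "ring", "was", "found"]
    if len(context) < len(vocab):
        return vocab[len(context)]
    return "found"  # degenerate repetition loop

def generate(max_tokens=20, max_repeats=2):
    context = []
    for _ in range(max_tokens):
        token = fake_next_token(context)
        # Guard applied by the harness, not the model: stop once the
        # same token would appear more than max_repeats times in a row.
        if len(context) >= max_repeats and all(t == token for t in context[-max_repeats:]):
            break
        context.append(token)
    return context

print(generate())  # the run of "found" is cut off after two repeats
```

The point is just that the cutoff lives in the sampling loop, not in the model's learned behavior, which may be why prompting alone couldn't stop the looping.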