Forums
The Lounge
Art, Music, History, and Linguistics
With ChatGPT, is the college essay dead?
[QUOTE="Jarvis323, post: 6850119, member: 475688"]
OpenAI may well use hidden techniques, but I wasn't convinced by the author's speculations. Some of his claims were clearly wrong, such as the claim that the neural network should not be able to recognize patterns in words, like writing styles, or that it should not be able to output or repeat words from the prompt that it wasn't trained on.

It isn't as if the model has a database of extracted words that it picks from as it assembles its sentences. It represents words as numeric vectors in an embedding space. A word in the input (even one the model has never seen) becomes a vector in that space, and the new word's position will be meaningful to the extent that the learned embedding generalizes, or to the extent that the parts of the model that determine the relationships between the words in the input generalize. That the model can take an input vector from the prompt and output the same vector is not strange, and even that it does so in a meaningful way is not strange, especially for a word that is a number.

The idea that it shouldn't be able to understand garbled words also doesn't make sense. First, garbled words exist in the training data; second, even words that are not in the training data may be embedded in a way that captures relevant information about them through generalization. Basically, in theory, if a human can understand a new word from context, so can a transformer model.

Regarding example 7, where the author is surprised that ChatGPT understood which parts of a copied-and-pasted previous session were its own, that is also not surprising. Aside from whatever secrets OpenAI has, the model is supposed to base its responses entirely on the prompt. Its own output (and the whole conversation at any given point) is just an extension of the prompt, so it should make no difference whether you paste a previous interaction or continue an ongoing one. It can infer which part it wrote from the patterns in the dialog.
[/QUOTE]
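The embedding point above can be sketched in code. The following is a toy illustration, not OpenAI's actual tokenizer or embedding (the vocabulary, function names, and vector scheme are all made up): a greedy longest-match subword tokenizer with a single-character fallback guarantees that any string, including a garbled or never-seen word, decomposes into known pieces, and each piece maps to a vector, so even a novel word lands somewhere in the embedding space.

```python
import random

# Hypothetical subword vocabulary. A real model learns tens of thousands of
# pieces (e.g. via byte-pair encoding) plus low-level fallbacks, so that
# *any* input string can be tokenized.
VOCAB = {"un", "break", "able", "writ", "ing", "style"}

def tokenize(word, vocab=VOCAB):
    """Greedy longest-match subword split, falling back to single characters."""
    pieces, i = [], 0
    while i < len(word):
        for j in range(len(word), i, -1):
            if word[i:j] in vocab:
                pieces.append(word[i:j])
                i = j
                break
        else:  # no known piece starts here: emit one character as a fallback
            pieces.append(word[i])
            i += 1
    return pieces

def embed(piece, dim=8):
    """Stand-in for a learned embedding row: a deterministic vector per piece."""
    rng = random.Random(piece)  # seeded so the same piece always maps to the same vector
    return [rng.uniform(-1.0, 1.0) for _ in range(dim)]

def word_vector(word, dim=8):
    """Average the piece embeddings: even an unseen word gets a position in the space."""
    vecs = [embed(p, dim) for p in tokenize(word)]
    return [sum(v[k] for v in vecs) / len(vecs) for k in range(dim)]

# A word the "model" never saw whole still decomposes into familiar pieces...
print(tokenize("unbreakable"))   # ['un', 'break', 'able']
# ...and a garbled variant still tokenizes (via fallback) and gets a vector
# that partly overlaps the original's, which is one route to generalization.
print(tokenize("unbreakabel"))   # ['un', 'break', 'a', 'b', 'e', 'l']
print(len(word_vector("unbreakabel")))  # 8
```

GPT-style models use byte-level tokenization for the same reason the character fallback exists here: coverage is total, so there is no "unknown word" the model simply cannot represent.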