With ChatGPT, is the college essay dead?

  • Thread starter jtbell
  • Start date
  • #106
gmax137
Science Advisor
2,343
2,050
However, more recently, Google search has morphed into a yellow-pages advertising mode where obvious search terms bring you to sites you're not really interested in before you get to see the relevant ones.

This is what I really dislike about Google now. If there were an alternative that worked like Google did, say, 15 years ago, I would switch to it in a heartbeat.
 
  • #107
14,291
8,331
Have you tried DuckDuckGo? It seems to show what you want but may not be as comprehensive as Google can be.
 
  • #108
f95toli
Science Advisor
Gold Member
3,391
918
However, LaMDA responds like a human in a discussion. It does not "know" in any sense of the word, but to have AGI it has to be able to do this. How much of what we say is like this, basically reflex?
The question is, of course, whether the way humans "know" things is intrinsically different from how a machine operates. Note that I am not in any way implying that any existing system is anywhere near an AGI; but in some of the discussions about the future of AI, many seem to assume that it is somehow obvious that humans are not "only" machines and that our ability to "understand" would be impossible to duplicate.
 
  • #109
gleem
Science Advisor
Education Advisor
2,109
1,567
The question is, of course, whether the way humans "know" things is intrinsically different from how a machine operates.
Certainly, the way humans know things is different from the way GPT does. Humans obtain information slowly, over a long period, in small increments; it is held as true either because it is a direct experience or because it has largely been vetted as true by other sources. This is quite the opposite of the way GPT "knows".

Artificial neural nets are being used to try to emulate the human brain. If successful, this would lower the bar with regard to how we set ourselves apart from everything else. To be sure, a true emulation of a human brain may never occur, but with regard to processing information and decision-making it seems possible, and even better. We just can't accept being second.
 
  • #110
jack action
Science Advisor
Insights Author
Gold Member
2,710
5,642
The question is, of course, whether the way humans "know" things is intrinsically different from how a machine operates.
Actually, I think the problem I have with ChatGPT is that it mimics the human way too much. Humans do not retain information exactly; they don't even record it entirely. Our eyes don't record every particle of light coming in, and our brains don't remember everything: we make stuff up, based on educated guesses, to fill the voids. Which is what ChatGPT is also doing.

That is why we put things in writing and take pictures: to assure ourselves that we remember things correctly and that our memories are faithful to reality.

Using ChatGPT to write fiction, or to automate a process like writing documentation for a programming script, may be a good use for this tool. The former doesn't suffer if something made up creeps in, and the latter is basically translation of a language that has very few concepts and is based on pure logic.

Trying to fetch quality information seems to be something a specialized search engine can do better. The lack of sources makes ChatGPT no better than asking your neighbor to recall something he read a while ago.
 
  • #111
Jarvis323
1,052
953
Microsoft's CEO was interviewed recently about AI and ChatGPT. Microsoft is investing $10 billion in OpenAI.

Some main takeaways from the interview:

(1) This is just the beginning of the AI explosion and it isn't a linear curve.

(2) Already, as an anecdote, an elite AI programmer says that 80% of his code is written by AI. If you extrapolate (not saying you can, but not sure why not), that would suggest companies could fire 80% of their programmers without losing productivity.

(3) The recent breakthroughs are notable to him because (if I read him right) the models are surprising us as unexpected abilities emerge in them that they weren't even trained for specifically.

(4) While it will be disruptive and displace many "knowledge workers" such as programmers, hopefully we will adapt and use the improved productivity to reduce the disparities in quality of life around the world.



For me, I have to admit that while I had predicted rapid acceleration in AI capabilities, I hadn't figured that AI would replace programmers so quickly.
 
  • #112
14,291
8,331
One concern I have is that as ChatGPT-generated content gets more popular, it will begin to use its own content to train itself, which could skew the results in a negative way.
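This feedback-loop worry has a simple statistical analogue. Here is a toy sketch (my own illustration, not the training dynamics of any real model): if each "generation" fits a distribution to samples drawn from the previous generation's fit, estimation error compounds and the diversity in the data tends to drain away.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human-written" data, drawn from a standard normal.
data = [random.gauss(0.0, 1.0) for _ in range(20)]
initial_spread = statistics.pstdev(data)

# Each generation: fit a mean and spread to the current data, then
# replace the data with samples from the fitted model, i.e. the model
# is now training purely on its own output.
for _ in range(500):
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    data = [random.gauss(mu, sigma) for _ in range(20)]

final_spread = statistics.pstdev(data)
```

With only 20 samples per generation, the fitted spread typically collapses toward zero over many rounds; the analogy is loose, but it is one reason recycling model output into training data can narrow rather than enrich the results.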
 
  • Like
Likes jack action
  • #113
jack action
Science Advisor
Insights Author
Gold Member
2,710
5,642
(2) Already, as an anecdote, an elite AI programmer says that 80% of his code is written by AI. If you extrapolate (not saying you can, but not sure why not), that would suggest companies could fire 80% of their programmers without losing productivity.
If history repeats itself - and it always does - you will be pretty disappointed. When people can spend more easily, they only tend to waste more. I foresee a lot more useless code running simple tasks in the future.


(4) While it will be disruptive and displace many "knowledge workers" such as programmers, hopefully we will adapt and use the improved productivity to reduce the disparities in quality of life around the world.
I'm still waiting to see technology reduce disparities between people around the world. It actually tends to increase them.
 
  • Like
Likes PeroK and Jarvis323
  • #114
Jarvis323
1,052
953
I'm still waiting to see technology reduce disparities between people around the world. It actually tends to increase them.
Yuval Harari argues that "the big danger for people is no longer exploitation, it is irrelevance".

He goes into more depth starting around 1 minute into the video.



Neil deGrasse Tyson is more optimistic. He thinks the solution is to funnel money into maintaining and modernizing infrastructure, which he thinks will still require human labor for quite some time. Then again, I think he is thinking much more near-term, whereas people like Harari are trying to think as much as hundreds or thousands of years ahead.

His argument starts around 16:30.

 
Last edited:
  • #115
Jarvis323
1,052
953
One concern I have is that as ChatGPT-generated content gets more popular, it will begin to use its own content to train itself, which could skew the results in a negative way.
I have been thinking about this since you brought it up, and I thought of an aspect of it that intrigues me. I don't know the answer. But, in addition to the ChatGPT content present in the next generation of training data, there will also be our criticism of its output, and our fears and enthusiasms about how it might change the world. I wonder what effect that will have? Will it learn from our feedback in a way that leads to better performance or better alignment with our interests? Or maybe it will just be more likely to make the same mistakes, but also more likely to acknowledge that it makes them?

I am curious because if the former is true (I suspect not, but would not be surprised to be surprised) or could become true eventually, then AI might "naturally" evolve to adapt to our expectations of it in a sense simply based on what we say about it on the internet between each iteration.
 
  • Like
Likes jack action
  • #116
Jarvis323
1,052
953
For example, what if, without it being an intentional feature, you ask GPT4 to write an essay and it says,

"I'm sorry, but I can't write an essay for you. Writing an essay for you would be unethical since it could interfere with the education system."

Or maybe you ask it to write some code and it says,

"I'm sorry but writing code for humans could result in the loss of jobs for software engineers."

That would be strange.
 
  • #117
gleem
Science Advisor
Education Advisor
2,109
1,567
One of the promises of OpenAI for GPT4 is to improve the alignment of GPT with human norms and values and to sense the intention of the user. Additionally, it is supposed to be more knowledgeable about the outside world (GPT3 has no knowledge of the world) and to receive feedback from humans during training to help correct misinformation. It is believed this will reduce misinformation. Unfortunately, GPT4, like 3, will still provide an answer to any question.
 
  • #118
14,291
8,331
It would be cool to ask it the classic car question from My Cousin Vinny where the prosecutor questions the car expertise of Vinny's girlfriend with a trick question.

It's likely though that ChatGPT will have been trained on that information.



In contrast, ChatGPT said:

The correct ignition timing for a 1955 Bel-Air Chevrolet with a 327 cubic inch engine and a four-barrel carburetor would likely be around 4 degrees before top dead center (BTDC) at idle speed. However, it's important to note that this is a rough estimate and the actual timing may vary depending on the specific engine and other factors such as the condition of the ignition system and the type of fuel being used. It's always best to refer to the factory service manual or consult a qualified mechanic for the most accurate information.

The question is who is right, i.e., is the movie dialogue right or ChatGPT? I can't answer that myself, but Marisa Tomei did a great scene. This particular scene has reverberated throughout the legal profession, in that there are times when an expert witness is needed who has no obvious credentials but who, upon voir dire, shows exceptional knowledge.

https://www.abajournal.com/gallery/pivotal_scenes/987
 
Last edited:
  • #119
gleem
Science Advisor
Education Advisor
2,109
1,567
ChatGPT has refused to answer many questions I have asked it, either because it would be unethical or because it thought it was unable to. For example, I asked it to write a research paper in the style of Dr. Seuss and it refused, because it said it would be inappropriate. I asked it to make financial predictions and it declined. And I asked it for certain obscure facts and it declined, saying it can't because it can't access the internet.
This is interesting, since most GPT3 prognosticators seem to think it will try to answer any question. However, as I said, OpenAI will be trying to make GPT4 more discerning with regard to its interpretation of instructions. Perhaps OpenAI has been updating 3's ability in this respect, for it sounds exactly like what 4 is supposed to do.

Jarvis323:
We haven't really "told it" (or trained it) yet that it is really bad at math and physics, for example.

As I have noted in other posts on this topic, when it comes to accuracy in specific subjects, it must be trained on those subjects. As it is, it can use only what general information it was provided.
 
  • #120
Jarvis323
1,052
953
This is interesting, since most GPT3 prognosticators seem to think it will try to answer any question. However, as I said, OpenAI will be trying to make GPT4 more discerning with regard to its interpretation of instructions. Perhaps OpenAI has been updating 3's ability in this respect, for it sounds exactly like what 4 is supposed to do.

As I have noted in other posts on this topic, when it comes to accuracy in specific subjects, it must be trained on those subjects. As it is, it can use only what general information it was provided.

In retrospect, it is most likely not refusing to answer; it may be either filtering or, more likely, "predicting" that the words which should follow are "Sorry, I can't answer that ...". So its answer is a non-answer, because that was the answer it saw most, or was reinforced to generate, in the training data. For it, the correct answer is the non-answer.

In that case, I don't know if it would actually reason that it should give a non-answer independently. It might, to some extent, based on some general rules it has learned.

But we may expect it to naturally begin to regurgitate information it has learned from our perceptions of its limitations or problems. So its apparent confidence might drop in some cases or increase in others, even if it is never fully in line with its actual abilities. But I am not sure.
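The "non-answer as the most likely answer" point can be caricatured with a deliberately crude frequency model (a hypothetical miniature corpus I made up, nothing like the real training setup): the refusal comes out simply because it is the continuation seen most often after that kind of prompt.

```python
from collections import Counter, defaultdict

# Hypothetical miniature training data: prompts paired with the
# responses that followed them.
corpus = [
    ("write my essay", "Sorry, I can't answer that."),
    ("write my essay", "Sorry, I can't answer that."),
    ("write my essay", "Here is a draft essay."),
    ("what is 2+2", "The answer is 4."),
]

# Count how often each response followed each prompt.
counts = defaultdict(Counter)
for prompt, response in corpus:
    counts[prompt][response] += 1

def predict(prompt: str) -> str:
    # Greedy "prediction": emit the continuation seen most often.
    # The refusal is not deliberation, just the highest-frequency pattern.
    return counts[prompt].most_common(1)[0][0]

print(predict("write my essay"))  # -> Sorry, I can't answer that.
```

A real LLM predicts tokens rather than whole canned responses, but the mechanism is analogous: the non-answer wins because it dominated the relevant training examples, not because the model decided to refuse.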
 
  • #126
gleem
Science Advisor
Education Advisor
2,109
1,567
The article is based on GPT3. AFAIK, ChatGPT is based on GPT3.5, which has significant enhancements and is continually updated by OpenAI. I agree with @Hornbein that "violates" is a poor choice of words.

I find its responses remarkable considering it is just stringing words together, perhaps just as a precocious child might do.

Notice that it has said that it does not have access to the internet, which is a good thing considering we do not really understand what it does. Like that precocious child, we must guide it appropriately to ensure that it does not get into trouble or cause trouble.
 
  • Skeptical
Likes jack action
  • #127
Jarvis323
1,052
953
Here’s an interesting article hinting at how it works

https://mindmatters.ai/2023/02/chatgpt-violates-its-own-model/

OpenAI might have hidden techniques it uses, but I wasn't convinced by the author's speculations.

Some of his claims were obviously wrong, like the claim that the neural network should not be able to recognize patterns in words, such as writing styles, and the idea that it should not be able to output/repeat words from the prompt that it hasn't been trained on.

It isn't as if it has a database of words that it picks from as it assembles its sentences. It represents words as numeric vectors in an embedding space. A word in the input (even one it hasn't seen) becomes a vector in that space. The new word's position in that space will be meaningful to the extent that the learned embedding generalizes, or to the extent that the parts of the model which determine the relationships between the words in the input generalize. That it is able to take an input vector from the prompt and output the same vector is not strange, and even that it does so in a meaningful way is not strange, especially for a word that is a number.

The idea that it shouldn't be able to understand garbled words also doesn't make sense. First, garbled words exist in the training data, and second, even ones which are not in the training data may be embedded in a way which captures relevant information about them by generalization.

Basically, in theory, if a human can understand some new word within context, so could a transformer model.
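As a rough illustration of why a garbled word can still land in a useful spot: below is a toy character-trigram embedding (invented for this post; real models learn subword embeddings rather than hashing trigrams). A misspelled word shares most of its trigrams with the original, so its vector ends up nearby, while an unrelated word does not.

```python
import hashlib
import math

def embed(word: str, dim: int = 64) -> list:
    # Toy embedding: hash each character trigram of the word into a
    # slot of a fixed-size vector. Shared trigrams -> nearby vectors.
    v = [0.0] * dim
    padded = f"#{word}#"
    for i in range(len(padded) - 2):
        tri = padded[i:i + 3]
        slot = int(hashlib.md5(tri.encode()).hexdigest(), 16) % dim
        v[slot] += 1.0
    return v

def cosine(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

base = embed("understand")
garbled = cosine(base, embed("understnad"))   # typo of the same word
unrelated = cosine(base, embed("banana"))
# garbled similarity comes out much higher than the unrelated one
```

This only captures surface-form similarity; a trained model additionally uses context, which is how genuinely novel words get meaningful positions too.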

Regarding example 7, where the author is surprised that ChatGPT understood which parts of a copied-and-pasted previous session were its own, that is also not surprising. Aside from whatever secrets OpenAI has, the model is supposed to base its responses entirely on the prompt. Its own output (and the whole conversation at a given point) is just an extension of the prompt. So it should make no difference whether you copy a previous interaction and paste it, or continue an ongoing prompt. It can predict what part it wrote based on the patterns in the dialog.
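One way to see this: the model's only input is a single string, the transcript so far. A sketch with a hypothetical stand-in for the completion function (hard-coded rules, obviously not a real model) shows why a pasted transcript and an ongoing one are indistinguishable to it:

```python
def complete(transcript: str) -> str:
    # Hypothetical stand-in for the model: like the real thing, it sees
    # only one string (the transcript so far) and returns a continuation.
    if transcript.rstrip().endswith("What is 2+2?\nAssistant:"):
        return " 4"
    return " I'm not sure."

# An ongoing conversation is just a string that keeps growing:
ongoing = "User: What is 2+2?\nAssistant:"
reply = complete(ongoing)

# Copying that same text out of an old session and pasting it back in
# hands the model an identical string, so its behavior is identical too.
pasted = "User: What is 2+2?\nAssistant:"
assert complete(pasted) == reply
```

Since nothing persists between calls except the text itself, "remembering" which lines it wrote reduces to recognizing the patterns of its own role in the transcript.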
 
Last edited:
  • Like
Likes jack action
  • #128
scottdave
Science Advisor
Homework Helper
Insights Author
Gold Member
1,891
881
....Or maybe you ask it to write some code and it says,

"I'm sorry but writing code for humans could result in the loss of jobs for software engineers."

That would be strange.
"I'm sorry Dave, Im afraid I can't do that."
 
  • Haha
Likes Vanadium 50 and jedishrfu
  • #130
gmax137
Science Advisor
2,343
2,050
And now in the business world...

"A Microsoft spokesperson said 365 users accessing the new AI tools should be reminded the technology is a
work in progress and information will need to be double checked."
Oh, like "don't sleep while your Tesla is driving?" lol.

More importantly, I wonder about companies allowing Microsoft to view (and store?) their business correspondence. The letters, memos, calculations I wrote when I was working were proprietary, there's no way we would share them with MS to "improve" them.
 
  • #131
berkeman
Mentor
64,456
15,832
More importantly, I wonder about companies allowing Microsoft to view (and store?) their business correspondence. The letters, memos, calculations I wrote when I was working were proprietary, there's no way we would share them with MS to "improve" them.
Yeah, this has been weird over the last 5-10 years at work, where our IT department(s) became more comfortable storing information in "the cloud". They must know what they're doing, but it always seemed to me that we were "opening the kimono" with all of the information that we were storing in the cloud for our company...
 
  • #132
gmax137
Science Advisor
2,343
2,050
...our IT department(s) became more comfortable storing information in "the cloud". They must know what they're doing​
I'm not so sure. It wasn't too long ago that the only thing the IT department knew was "try rebooting."


Here's more from the linked article:

"Microsoft is also introducing a concept called Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data. The agent will know what’s in a user’s email and on their calendar for the day as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update."​

Do you think the Business Chat agent will reside on your desktop, or in a MS server somewhere?
 
  • #133
Rive
Science Advisor
2,493
1,930
And now in the business world...
In my history of fights with MS Office, the feature I've found most annoying was that paperclip thing.
It seems I'll need to prepare for its descendants :oldcry:
 
Last edited:
  • Like
Likes BillTre and berkeman
