With ChatGPT, is the college essay dead?

SUMMARY

The discussion centers on the implications of AI tools like GPT-3 on academic writing, particularly college essays. Participants argue that the use of AI for essay writing raises questions about academic integrity and the assessment of student knowledge. A consensus emerges that traditional methods of evaluation, such as oral presentations, may be necessary to ensure students understand their work. The conversation highlights the potential for AI to disrupt conventional educational practices, with some suggesting that future curricula may need to adapt to include AI literacy.

PREREQUISITES
  • Understanding of AI tools, specifically GPT-3.
  • Familiarity with academic integrity and plagiarism policies.
  • Knowledge of pedagogical assessment methods.
  • Awareness of current trends in educational technology.
NEXT STEPS
  • Research the impact of AI on academic integrity and plagiarism detection tools.
  • Explore pedagogical strategies for assessing student understanding beyond written essays.
  • Investigate the role of AI in future educational curricula and its implications for teaching.
  • Learn about software tools for analyzing writing styles to detect AI-generated content.
USEFUL FOR

Educators, academic administrators, students concerned about academic integrity, and anyone interested in the intersection of AI technology and education.

  • #122
Using it after business hours is better in my experience.
 
  • #125
jedishrfu said:
Here’s an interesting article hinting at how it works

https://mindmatters.ai/2023/02/chatgpt-violates-its-own-model/
Violates? Strange choice of words. "Exceeds" is more like it.

I don't really expect OpenAI to give away the recipe for their amazing secret sauce. I'm sure they have a great deal up their sleeve.
 
  • #126
The article is based on GPT-3. AFAIK ChatGPT is based on GPT-3.5, which has significant enhancements and is continually updated by OpenAI. I agree with @Hornbein that "violates" is a poor choice of words.

I find its responses remarkable considering it is just stringing words together, perhaps much as a precocious child might do.

Notice that it has said it does not have access to the internet, which is a good thing considering we do not really understand what it does. Like that precocious child, we must guide it appropriately to ensure that it does not get into trouble or cause trouble.
 
  • #127
jedishrfu said:
Here’s an interesting article hinting at how it works

https://mindmatters.ai/2023/02/chatgpt-violates-its-own-model/

OpenAI might have hidden techniques it uses, but I wasn't convinced by the author's speculations.

Some of his claims were obviously wrong, like the claim that the neural network should not be able to recognize patterns in words, such as writing styles, and the idea that it should not be able to output/repeat words from the prompt that it hasn't been trained on.

It isn't as though it has a database of extracted words that it picks from as it assembles its sentences. It represents words as numeric vectors in an embedding space. A word in the input (even one it hasn't seen) becomes a vector in that space. The new word's position in that space will be meaningful to the extent that the learned embedding generalizes, or to the extent that the parts of the model which determine the relationships between the words in the input generalize. That it is able to take an input vector from the prompt and output the same vector is not strange, and even that it does so in a meaningful way is not strange, especially for a word that is a number.

The idea that it shouldn't be able to understand garbled words also doesn't make sense. First, garbled words exist in the training data, and second, even ones which are not in the training data may be embedded in a way which captures relevant information about them by generalization.

Basically, in theory, if a human can understand some new word within context, so could a transformer model.

Regarding example 7, where the author is surprised that ChatGPT understood which parts of a copy-and-pasted previous session were its own: this is also not surprising. Aside from whatever secrets OpenAI has, the model is supposed to be basing its responses entirely on the prompt. Its own output (and the whole conversation at a given point) is just an extension of the prompt. So it should make no difference whether you copy a previous interaction and paste it, or continue an ongoing prompt. It can predict which part it wrote based on the patterns in the dialog.
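To illustrate the point about unseen or garbled words with a toy sketch (this is not how OpenAI tokenizes or embeds anything; the hashed character-trigram "embedding" below is entirely made up for illustration): a word that never appeared in training can still land somewhere meaningful in an embedding space if it is built from smaller pieces it shares with familiar words.

```python
import hashlib
import math

DIM = 64  # size of the toy embedding space

def embed(word: str) -> list[float]:
    """Toy embedding: hash each character trigram of the word into a fixed-size vector.

    The point is only that a never-seen (or garbled) word still maps to a vector
    that shares components with related, well-formed words.
    """
    padded = f"#{word.lower()}#"
    vec = [0.0] * DIM
    for i in range(len(padded) - 2):
        trigram = padded[i:i + 3]
        bucket = int(hashlib.md5(trigram.encode()).hexdigest(), 16) % DIM
        vec[bucket] += 1.0
    return vec

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm if norm else 0.0

# A garbled word ends up close to its clean form, and far from an unrelated word.
print(cosine(embed("transformer"), embed("trnsformer")))  # relatively high
print(cosine(embed("transformer"), embed("banana")))      # relatively low
```

A transformer has far richer machinery than this (learned subword vocabularies, attention over the whole prompt), but the basic reason an out-of-vocabulary word is not a brick wall is the same.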
 
  • #128
Jarvis323 said:
....Or maybe you ask it to write some code and it says,

"I'm sorry but writing code for humans could result in the loss of jobs for software engineers."

That would be strange.
"I'm sorry Dave, Im afraid I can't do that."
 
  • #129
And now in the business world...


https://www.cnn.com/2023/03/16/tech/openai-gpt-microsoft-365/index.html
 
  • #130
berkeman said:
And now in the business world...

"A Microsoft spokesperson said 365 users accessing the new AI tools should be reminded the technology is a
work in progress and information will need to be double checked."
Oh, like "don't sleep while your Tesla is driving?" lol.

More importantly, I wonder about companies allowing Microsoft to view (and store?) their business correspondence. The letters, memos, and calculations I wrote when I was working were proprietary; there's no way we would share them with MS to "improve" them.
 
  • #131
gmax137 said:
More importantly, I wonder about companies allowing Microsoft to view (and store?) their business correspondence. The letters, memos, calculations I wrote when I was working were proprietary, there's no way we would share them with MS to "improve" them.
Yeah, this has been weird over the last 5-10 years at work, where our IT department(s) became more comfortable storing information in "the cloud". They must know what they're doing, but it always seemed to me that we were "opening the kimono" with all of the information that we were storing in the cloud for our company...
 
  • #132
berkeman said:
...our IT department(s) became more comfortable storing information in "the cloud". They must know what they're doing​
I'm not so sure. It wasn't too long ago that the only thing the IT department knew was "try rebooting." Here's more from the linked article:

"Microsoft is also introducing a concept called Business Chat, an agent that essentially rides along with the user as they work and tries to understand and make sense of their Microsoft 365 data. The agent will know what’s in a user’s email and on their calendar for the day as well as the documents they’ve been working on, the presentations they’ve been making, the people they’re meeting with, and the chats happening on their Teams platform, according to the company. Users can then ask Business Chat to do tasks such as write a status report by summarizing all of the documents across platforms on a certain project, and then draft an email that could be sent to their team with an update."​

Do you think the Business Chat agent will reside on your desktop, or in a MS server somewhere?
 
  • #133
berkeman said:
And now in the business world...
In my history of fights with MS Office, the feature I've found the most annoying was that paperclip thing.
It seems I'll need to prepare for its descendants :oldcry:
 
  • #136
There is now an app available called GPTZero which can be used to determine whether something might have been written by ChatGPT. I say might have, since it only estimates the likelihood of a text having been written by AI, or perhaps merely assisted by AI. It looks at the structure of the prose: AI is more consistent than humans in this regard, and human-produced prose also has more variability in content.

The app needs sufficient data, at least ten lines, to have a reasonable chance of making the distinction; the more the better. So social media posts written with ChatGPT probably cannot be detected. It is discussed here.
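GPTZero's exact scoring isn't spelled out here, but detectors of this kind are generally described as measuring how predictable the text is to a language model and how much that predictability varies from sentence to sentence, with uniform, low-variability prose flagged as more machine-like. A minimal sketch of the "variability" idea, using sentence length as a crude stand-in for a per-sentence score (the sample texts are invented for illustration):

```python
import re
import statistics

def variability_proxy(text: str) -> dict:
    """Crude stand-in for a detector's sentence-to-sentence variability signal.

    Real detectors score each sentence with a language model; here we just
    measure how much sentence length varies, since uniform sentences are one
    symptom of machine-generated prose.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:  # too little data to say anything, as the post notes
        return {"sentences": len(lengths), "mean_len": None, "spread": None}
    return {
        "sentences": len(lengths),
        "mean_len": round(statistics.mean(lengths), 1),
        "spread": round(statistics.stdev(lengths), 1),  # low spread -> more uniform, more "AI-like"
    }

human_like = ("I tried it once. Honestly? The output rambled on for three whole "
              "paragraphs before it said anything I could actually use.")
machine_like = ("The system produces text. The text is generated from a model. "
                "The model predicts the next token in the sequence.")

print(variability_proxy(human_like))    # larger spread
print(variability_proxy(machine_like))  # smaller spread
```

A real detector would replace sentence length with the score a language model assigns to each sentence, but the shape of the calculation is the same, which is also why very short samples are hard to classify.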
 
  • #137
jtbell said:
This article reminded me of the current Fun with ChatGPT thread in General Discussion:

The College Essay Is Dead (The Atlantic)

My wife (a retired professor of German language and literature) commented that students will have to be forced to write their essays in the classroom, after having their phones confiscated. :frown:
THAT, exactly!

I don't have the patience to recheck whether I ever commented on this forum topic, but by now the discussion has become long, with many posts, some of them drifting.

The intent of writing an essay for class while in class is to write the essay genuinely yourself, without any technological assistance. But some kinds of essays are assigned without restricting the writer to a supervised time and place; so who does what to ensure the writer composes and designs the essay genuinely himself, without any technological assistance? How should an essay be verified for authenticity, or should it be verified at all? Maybe a live, in-person interview before the writer writes the essay?
 
  • #138

‘Overemployed’ Hustlers Exploit ChatGPT To Take On Even More Full-Time Jobs​

https://www.vice.com/en/article/v7b...t-chatgpt-to-take-on-even-more-full-time-jobs

"ChatGPT does like 80 percent of my job," said one worker. Another is holding the line at four robot-performed jobs. "Five would be overkill," he said.

One of them "helps financial technology companies market new products; the job involves creating reports, storyboards, and presentations, all of which involve writing."

Last year, he started to hear more and more about ChatGPT, an artificial intelligence chatbot developed by the research lab OpenAI. Soon enough, he was trying to figure out how to use it to do his job faster and more efficiently, and what had been a time-consuming job became much easier. ("Not a little bit more easy," he said, "like, way easier.") That alone didn't make him unique in the marketing world. Everyone he knew was using ChatGPT at work, he said. But he started to wonder whether he could pull off a second job. Then, this year, he took the plunge, a decision he attributes to his new favorite online robot toy.

For a small cohort of fast-thinking and occasionally devious go-getters, AI technology has turned into an opportunity not to be feared but exploited, with their employers apparently none the wiser.
 
  • #139
Tough to know how to feel about that. I've seen a bunch of ChatGPT generated content but haven't directly used it myself. But what I've seen has not impressed me. Here on PF a lot of posts generated by it have been basically technobabble: grammatically complete sentences with a collection of technical sounding words that don't sum to a coherent thought. Which is not surprising for a glorified search/predictive text engine.

But if your boss doesn't see it, I guess there's no harm in it.
 
  • #140
I had ChatGPT write a blurb for my book. Did a great job, much better than I could. It excels at writing vacuously in a traditional style. There are many people with desk jobs doing this.
 
  • #141
russ_watters said:
basically technobabble: grammatically complete sentences with a collection of technical sounding words that don't sum to a coherent thought
so, advertising copy... "buy something" ?:)
 
  • #142
russ_watters said:
...grammatically complete sentences with a collection of technical sounding words that don't sum to a coherent thought....
So... perfect for marketing?
 
  • #143
This article appeared in Physics Education this past April:

"The death of the short-form physics essay in the coming AI revolution"​

The course used for the study was Physics in Society.

Abstract​

The latest AI language models can produce original, high-quality, full short-form (300-word) Physics essays within seconds. These technologies, such as ChatGPT and davinci-003, are freely available to anyone with an internet connection. In this work, we present evidence of AI-generated short-form essays achieving First-Class grades on an essay-writing assessment from an accredited, current university Physics module. The assessment requires students to answer five open-ended questions with a short, 300-word essay each. Fifty AI answers were generated to create ten submissions that were independently marked by five separate markers. The AI-generated submissions achieved an average mark of $71 \pm 2\%$, in strong agreement with the current module average of $71 \pm 5\%$. A typical AI submission would therefore most likely be awarded a First Class, the highest classification available at UK universities. Plagiarism detection software returned a plagiarism score between $2 \pm 1\%$ (Grammarly) and $7 \pm 2\%$ (Turnitin). We argue that these results indicate that current natural language processing AI represent a significant threat to the fidelity of short-form essays as an assessment method in Physics courses.
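As a quick sanity check on that claimed agreement (my arithmetic, not the paper's): the difference between the two averages is $0\% \pm \sqrt{2^2 + 5^2}\,\% \approx 0\% \pm 5.4\%$, consistent with zero within the combined uncertainty, so at the stated precision the AI submissions are indistinguishable from the human cohort.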
 
  • #144
Thanks. I only read the "Abstract" you posted.
Can the teacher (or whoever the authority is) verify each essay? Like, verify that an identified person wrote each essay.

Some essays are assigned to be written in person and supervised.* All the work must be handwritten, using paper and pen, and the result is given to the essay tester, proctor, or whoever. That should be part of how to verify that real people are doing real, genuine work.

*Or is this now obsolete?
 
  • #145
symbolipoint said:
Can the teacher(or whoever authority) verify each essay? Like verify that an identified person wrote each essay.
In the study, the essay markers knew the essays were AI-generated.
 
