Is ChatGPT Becoming Teachers' Favorite Tool for AI Writing?

In summary, ChatGPT is rapidly becoming teachers' favorite tool, and why not? Notably, Khan Academy uses its own version of ChatGPT as a math tutor.
  • #2
And why not? If I were teaching today, I would use it. It is interesting that Khan Academy uses its own version of ChatGPT as a math tutor.
 
  • #3
For teachers, it can assist with those mundane tasks they must do, like writing brief notes to contact parents about specific kids' issues.

As an aside, there was a recent video short by a teacher about teacher-speak that parents don't grasp. When the teacher says your kid is very sociable, that means they talk too much; when they say your kid is a born leader, it means they are too bossy…
 
  • #4
My complaint is that if a communication is generated by AI, it is unclear to anyone receiving it whether it is a sincere and truthful representation of the sender's state of affairs (for lack of a better term).

Personally, I think that, ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should clearly indicate that it, or which parts of it, were generated by AI.
Otherwise, even without the intent to deceive or confuse, people will blur and obfuscate each other's understanding of one another. It will ultimately make us isolated, confused, and unable to collaborate effectively.
 
  • #5
Jarvis323 said:
My complaint is that if a communication is generated by AI, it is unclear to anyone receiving it whether it is a sincere and truthful representation of the sender's state of affairs (for lack of a better term).

Personally, I think that, ideally, if a person, company, or organization uses generative AI to automate any form of communication, whether a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should clearly indicate that it, or which parts of it, were generated by AI.
Otherwise, even without the intent to deceive or confuse, people will blur and obfuscate each other's understanding of one another. It will ultimately make us isolated, confused, and unable to collaborate effectively.
From the article:
In January, the New York City education department, which oversees the nation’s largest school district with more than 1 million students, blocked the use of ChatGPT by both students and teachers, citing concerns about safety, accuracy and negative impacts to student learning.
Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's nothing wrong with asking a bot "write me a two-hour lecture on the Battle of Gettysburg" versus finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real or potential problem that doesn't make it sound like they are treating their teachers like students?

It's the same reason that, when discussing rule changes on PF, the default starting position was that nothing has changed. The poster is responsible for the content either way.

I alluded to this in the other thread where I said teachers shouldn't fear for their jobs; writing lesson plans is not what makes a teacher a teacher, it's the human interaction of "teaching" that does.
 
  • #6
Jarvis323 said:
Personally, I think that ideally, if a person, or company, or organization, uses generative AI to automate any form of communication, whether it is a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should indicate clearly that it, or which parts of it, was/were generated by AI.
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?
 
  • #7
renormalize said:
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?
That is why I said "ideally". Obviously no communication a corporation puts out can reasonably be expected to be sincere. I don't honestly believe that Qunol is the brand Tony Hawk trusts. There is a great deal of communication in our society which is obviously insincere and misrepresentative. But that isn't a good thing; it's a bad thing.

When you receive incentive-based communications written insincerely by bots or assistants, it is basically spam or junk mail.

I just don't think teachers should be sending out junk mail instead of authentic mail, without disclosing it, just because it makes the job easier.
 
  • #8
russ_watters said:
From the article:

Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's nothing wrong with asking a bot "write me a two-hour lecture on the Battle of Gettysburg" versus finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real or potential problem that doesn't make it sound like they are treating their teachers like students?

It's the same reason that, when discussing rule changes on PF, the default starting position was that nothing has changed. The poster is responsible for the content either way.

I alluded to this in the other thread where I said teachers shouldn't fear for their jobs; writing lesson plans is not what makes a teacher a teacher, it's the human interaction of "teaching" that does.

Would there be a problem if a teacher outsourced all of the technical aspects of their job completely? For example, what if teachers were just there for moral support, but didn't actually know the course material, couldn't answer any technical questions, didn't grade or comment on any of the work, and so on?
 
  • #10
Jarvis323 said:
Would there be a problem if a teacher outsourced all of the technical aspects of their job completely? For example, what if teachers were just there for moral support, but didn't actually know the course material, couldn't answer any technical questions, didn't grade or comment on any of the work, and so on?
I don't see how that's possible. You can't have interaction if the teacher doesn't know the material.
 
  • #11
What about a document whose content was prescribed by a person but written by AI? This seems to me to be the most common way personal correspondence will be written. Such correspondence would be reviewed by the originator.

Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and candor that might otherwise be inferred.

 
  • #13
gleem said:
Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and candor that might otherwise be inferred.


If being honest and candid about the origin of the document would reduce the credibility and candor that would otherwise be inferred, then something is wrong with the system.

I hope we don't end up lying to each other more in an attempt to be perceived as more honest, and then that becoming normal, so that most of us know we are lying to each other anyway, but still do it. Maybe some portion of the population is still outright deceived (e.g., kids and people with very low intelligence or disabilities), and the rest of us are just a little psychologically manipulated while perhaps being conscious of it.

It reminds me of YouTube videos where the thumbnail implies the video includes something that it doesn't. After a while, you realize this is normal now; you don't expect the promise in the thumbnail to be kept. They might as well add a disclaimer that the promise is false from the start. Except everyone is doing it, so the people who know they are being lied to accept it, and there are enough vulnerable, new, or gullible people to deceive to make the lie worth maintaining.

It's not much different from advertising in general. A company puts out a commercial which is obviously a big lie. You might think it would be reasonable for an intelligent enough person to form a negative view of the company or product and avoid it based on the dishonesty of the company or ad. But we don't, because it's normal.

It might end up being the same way with all kinds of other forms of communication.
 
  • #14
gleem said:
Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and candor that might otherwise be inferred.
Yup. Or, rather, being AI-written reduces the credibility. Indicating it just announces it.
 

What is ChatGPT?

ChatGPT is an AI-powered writing tool that uses natural language processing to generate human-like text responses based on prompts given by the user.

How does ChatGPT work?

ChatGPT uses a large neural network trained on a vast amount of text data to generate responses to prompts. It analyzes the context and structure of the prompt and uses its knowledge of language patterns to generate a coherent response.
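The generate-one-token-at-a-time idea in the answer above can be shown with a deliberately tiny sketch. This is not how ChatGPT actually works internally: the snippet below is a toy bigram (Markov) model whose corpus, function names, and sampling scheme are all invented for illustration; real models replace the bigram table with a large neural network, but the loop of repeatedly predicting a likely next word from the words so far is conceptually similar.

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Record which words follow which in the training text."""
    words = text.split()
    following = defaultdict(list)
    for a, b in zip(words, words[1:]):
        following[a].append(b)
    return following

def generate(following, start, n_words=8, seed=0):
    """Generate text by repeatedly sampling a plausible next word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n_words):
        options = following.get(out[-1])
        if not options:  # no known continuation: stop generating
            break
        out.append(rng.choice(options))
    return " ".join(out)

# Tiny made-up corpus, purely for demonstration.
corpus = "the teacher writes a plan the teacher grades a quiz"
model = train_bigrams(corpus)
print(generate(model, "the"))
```

Each generated word is chosen only from words that actually followed the previous word in the training data, which is a crude stand-in for "using knowledge of language patterns to generate a coherent response".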

Why is ChatGPT becoming teachers' favorite tool for AI writing?

ChatGPT is becoming teachers' favorite tool for AI writing because it can assist students in generating high-quality written content, provide feedback on grammar and spelling, and help students improve their writing skills in a fun and interactive way.

What are the benefits of using ChatGPT in the classroom?

Using ChatGPT in the classroom can help students develop their critical thinking, creativity, and problem-solving skills. It can also save teachers time by providing automated feedback on student writing assignments.

Are there any limitations to using ChatGPT in the classroom?

While ChatGPT can be a useful tool for AI writing, it is not a replacement for human instruction and feedback. It also has limitations in understanding context and may generate responses that are not relevant or appropriate for the prompt given.
