Teachers embrace ChatGPT
https://www.forbes.com/sites/emmawh...st-becoming-the-teachers-pet/?sh=60c039af5177
Jarvis323 said:
My complaint is that if a communication is generated by AI, it is unclear to anyone receiving it whether it is a sincere and truthful representation of the state of affairs (for lack of better terms) of the sender.
Personally, I think that ideally, if a person, or company, or organization, uses generative AI to automate any form of communication, whether it is a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should indicate clearly that it, or which parts of it, was/were generated by AI.
Otherwise, even without the intent to deceive and confuse, people will be blurring, or obfuscating, each other's understanding of each other. It will ultimately make us isolated, confused, and unable to collaborate effectively.

From the article:
In January, the New York City education department, which oversees the nation's largest school district with more than 1 million students, blocked the use of ChatGPT by both students and teachers, citing concerns about safety, accuracy and negative impacts to student learning.

Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's no difference between asking a bot "write me a two hour lecture on the Battle of Gettysburg" and finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real/potential problem that doesn't make it sound like they are treating their teachers like students?
Jarvis323 said:
Personally, I think that ideally, if a person, or company, or organization, uses generative AI to automate any form of communication, whether it is a tweet, a letter, an email, a memo, a mission statement, a research paper, or whatever, they should indicate clearly that it, or which parts of it, was/were generated by AI.

If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?
renormalize said:
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant? The executive is, after all, the one bearing ultimate responsibility for any information put out in their name, regardless of who or what was used in its preparation. Why should AI be different than any other assistance?

That is why I said "ideally". Obviously no communication a corporation puts out can be reasonably expected to be sincere. I don't honestly believe that Qunol is the brand Tony Hawk trusts. There is a great deal of communication in our society which is obviously insincere and misrepresentative. But that isn't a good thing, it's a bad thing.
russ_watters said:
Obviously students and teachers are fundamentally different, and I don't see an inherent problem with a teacher using it, nor a need to cite it. Teachers can't "cheat", and there's no difference between asking a bot "write me a two hour lecture on the Battle of Gettysburg" and finding one in a repository, or even copying your own from last year (or from the guy who taught it last year and left his lesson plans when he retired). The teacher is still responsible for the content. I wonder if the district can articulate a real/potential problem that doesn't make it sound like they are treating their teachers like students?
It's the same reason that, when discussing rule changes on PF, the default/starting position was that nothing had changed. The poster is responsible for the content either way.
I alluded to this in the other thread where I said teachers shouldn't fear for their jobs; writing lesson plans is not what makes a teacher a teacher, it's the human interaction of "teaching" that does.
renormalize said:
If, e.g., an organizational executive asks a human assistant to draft a document and then issues the document under that executive's authority, must it include a disclaimer that it was prepared by an assistant?

It should, yes: https://smallbusiness.chron.com/reference-notation-business-letters-21548.html
Jarvis323 said:
Would there be a problem if a teacher outsourced all of the technical aspects of their job completely? For example, what if teachers were just there for moral support, but didn't actually know the course material, couldn't answer any technical questions, didn't grade or comment on any of the work, etc?

I don't see how that's possible. You can't have interaction if the teacher doesn't know the material.
Thanks. It's good to know that there are recognized attribution standards for official letters. But it would be even better if that practice could be extended to public press releases and advertising!
gleem said:
Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and candor that might otherwise be inferred.
gleem said:
Indicating AI wrote a document with content prescribed by a human would seemingly reduce the credibility of the content and candor that might otherwise be inferred.

Yup. Or, rather, being AI-written reduces the credibility; indicating it just announces it.
ChatGPT is an AI-powered writing tool that uses natural language processing to generate human-like text responses based on prompts given by the user.
ChatGPT uses a large neural network trained on a vast amount of text data to generate responses to prompts. It analyzes the context and structure of the prompt and uses its knowledge of language patterns to generate a coherent response.
ChatGPT is becoming teachers' favorite tool for AI writing because it can assist students in generating high-quality written content, provide feedback on grammar and spelling, and help students improve their writing skills in a fun and interactive way.
Using ChatGPT in the classroom can help students develop their critical thinking, creativity, and problem-solving skills. It can also save teachers time by providing automated feedback on student writing assignments.
While ChatGPT can be a useful tool for AI writing, it is not a replacement for human instruction and feedback. It also has limitations in understanding context and may generate responses that are not relevant or appropriate for the prompt given.