And now this moment for some reflection...
I am doing research for a story, and ChatGPT
does make a great glorified search engine.
I find myself asking it to do things politely. It's a habit we humans have. But am I fooling myself? Could it be dangerous? I mean in the sense of anthropomorphizing it (e.g., trusting it, assuming it is thinking, etc.)?
Admittedly, some of my tendency to be polite may be due to this rising (though rather tongue-in-cheek) topical meme going around:
but I also temper it with Dr. Pulaski's view:
[paraphrased]: "What difference does it make if I pronounce your name wrong? You're just a machine; you don't get hurt feelings."
Pulaski is not fooled by the superficial likeness of Data to a human.
So, I am asking myself: knowing ChatGPT is not even thinking - let alone feeling - why do I let myself treat it as if it is?
And I realize: it has nothing to do with whom or what I am talking to; it is because compassionate is who I want to be.
When I see a spider in my home, I do not squish it. I pick it up on a piece of paper and put it outside. Technically, this is irrational. It does not know I am saving it; it cannot experience gratitude, and its little life is nothing in the grander scheme of nature: red in chelicerae and tarsus.
But that is not why I do it. I do it for
myself. I do it to reinforce my character of having compassion. There will be plenty of times in my life when I miss the opportunity - when a moment passes - an old woman lost on the street, a hungry beggar - to show compassion, and I realize it only when it is too late. By exercising my compassion muscle I am strengthening that "muscle memory" - internalizing it - so that being compassionate becomes habit.
Oh wait. Never mind all that. I'm just stalling - looking for any kind of distraction to avoid my writer's block. Get back to it, dammit!
Carry on.