PeroK said:
In a practical sense, you could live according to what answers ChatGPT gives you.
For your sake I sincerely hope you don't try this. Unless, of course, you only ask it questions whose answers you don't really care about anyway and aren't going to use to determine any actions, particularly any that involve risk of harm to you or others.
PeroK said:
Wolfram Alpha is a mathematical engine. It's not able to communicate on practical everyday matters.
Sure it is. You can ask it questions in natural language about everyday matters and it gives you answers, if the answers are in its databases. Unlike ChatGPT, it "knows" when it doesn't know an answer and tells you so. ChatGPT doesn't even have the concept of "doesn't know", because it doesn't even have the concept of "know". All it has is the relative word frequencies in its training data, and all it does is produce a "continuation" of the text you give it as input, according to those relative word frequencies.
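To make that concrete, here is a toy sketch (Python, purely illustrative) of what "producing a continuation from relative word frequencies" means. The real ChatGPT uses a neural network with learned weights over tokens rather than raw counts, so treat this bigram model only as a cartoon of the idea: it extends a prompt word by word from observed frequencies, with no notion of "knowing" anything.

```python
# Toy sketch of "continuation from relative word frequencies".
# This is a crude bigram model, NOT how ChatGPT actually works
# (GPT models use learned neural-network weights over tokens);
# it just illustrates extending a prompt by sampling a
# statistically likely next word.
import random
from collections import defaultdict

corpus = ("the cat sat on the mat . the dog sat on the rug . "
          "the cat chased the dog .").split()

# Count which words follow which: duplicates in the list mean
# random.choice samples in proportion to observed frequency.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def continue_text(prompt, n_words=8):
    words = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(words[-1])
        if not candidates:
            break  # no data on what follows this word
        words.append(random.choice(candidates))
    return " ".join(words)

print(continue_text("the cat"))  # e.g. "the cat sat on the rug . the dog"
```

Note that the model happily "answers" any prompt it can extend, whether or not the continuation is true; there is nothing in it that could represent "I don't know".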
Granted, Wolfram Alpha doesn't communicate its answers in natural language, but the answers are still understandable. Plus, it also includes in its answers the assumptions it made while parsing your natural language input, which ChatGPT doesn't do at all: not only does it not include any assumptions in its output, it doesn't even parse its input in the first place. For example, if you ask Wolfram Alpha "what is the distance from New York to Los Angeles", it includes in its answer that it assumed that by "New York" you meant the city, not the state.
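For anyone who wants to see those assumptions directly, Wolfram|Alpha exposes them through its Full Results API. The sketch below is hedged: "YOUR_APP_ID" is a placeholder for a key from Wolfram's developer portal, and the exact JSON layout (e.g. whether a single assumption comes back as a list or an object) isn't guaranteed here, so the normalization is a defensive guess rather than a documented schema.

```python
# Hedged sketch: query the Wolfram|Alpha Full Results API and print
# the parsing assumptions it reports. "YOUR_APP_ID" is a placeholder.
import requests

resp = requests.get(
    "https://api.wolframalpha.com/v2/query",
    params={
        "input": "what is the distance from New York to Los Angeles",
        "appid": "YOUR_APP_ID",    # from Wolfram's developer portal
        "output": "json",
        "format": "plaintext",
    },
)
result = resp.json()["queryresult"]

def as_list(x):
    # The JSON sometimes collapses a single element to an object;
    # normalize to a list so we can iterate either way.
    return x if isinstance(x, list) else [x]

# E.g. a "Clash" assumption saying "New York" was taken as the city,
# with the state offered as an alternative interpretation.
for assumption in as_list(result.get("assumptions", [])):
    for value in as_list(assumption.get("values", [])):
        print(value.get("desc"))
```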
PeroK said:
You are too focused, IMO, on how it does things and not what it does.
Huh? The Insights article under discussion, and the Wolfram article it references, are entirely about what ChatGPT does and what it doesn't do. Wolfram also goes into some detail about the "how", but the "what" is the key part I focused on.