Dale said:
the actual risks that they currently pose and their typical malfunctions.
Fair enough - you've been asking for discussion around this, and so far no one has engaged directly on that topic. Maybe it's OT for this post; if the OP complains I'll open a separate thread.
I don't think you're talking about the dangers of humans using AI that is working as intended to do bad things - deepfakes, for instance? That is dangerous and bad, but it's still people deciding to do bad things and then doing them - that scenario, IMO, is more like a discussion around gun control.
IMO, the greater risk is not malfunction but that AI works so well at pretending to do critical thinking (or actually doing it - it doesn't matter, I have the same concern either way) that it trains us humans to depend on it for our critical thinking. I have LLMs in mind when I say this, but that line of concern probably applies to AI other than LLMs as well - your example of equity trading software, for instance.
I am concerned that in 40 or 60 years we will be a society that depends on AI and, by and large, no longer really understands how the AI ecosystem even works. I am envisioning a global system that has no top-down design but has evolved bit by bit, system by system, until understanding how it works at a high level requires reverse engineering that very few people two or three generations hence will have the energy to bother with. We will become a slothful, stagnant society - not because AI wants us that way, if indeed it evolves into something with wants, but because that is where we will allow ourselves to sink. AI will write our books (if anyone still reads books), create our movies and our games, and manage our social lives and our careers. We needn't speculate about whether AI can do physics research, because no one will care about fundamental physics - a few folks may still care about technology, but I suspect not many.
I'm not sure I've given you anything worth responding to. I would comment on current risks and malfunctions, but I don't know enough about AI applications to say anything meaningful there. I'm aware that ChatGPT will occasionally tell me things that are clearly wrong - but since I'm typically doing things like asking for orderable part numbers for a mounting bracket for my car, the errors have been fairly benign so far.