Astronuc
Staff Emeritus
Science Advisor
Gold Member
Greg Bernhardt said:
If there is money to be made, a pause will never happen. Also, Elon is just bitter that he couldn't buy OpenAI.

I'm sure AI development will continue, but it's the implementation that must be considered. It is one thing to use AI to perform an analysis, particularly a non-critical one; it's another thing entirely to allow AI to assume command and control of a critical system.
In Quality Assurance, we have different levels (or grades) of QA for software/hardware, based on whether the function is minor, major, or critical. For example, a scoping calculation or a preliminary/comparative analysis may be allowed a lower level of QA, whereas the design and analysis of a 'critical' system requires a higher level. By 'critical', I mean a system whose failure could cause serious injury or death to one or many persons.
A critical system could be control of a locomotive or a set of locomotives, an aircraft, a road vehicle (truck or car), . . . .
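The graded approach above can be sketched in code. This is a minimal, hypothetical illustration (the level names, descriptions, and mapping are invented for this example, not taken from any actual QA standard):

```python
from enum import Enum

class Criticality(Enum):
    MINOR = 1      # e.g., a scoping calculation
    MAJOR = 2      # e.g., a preliminary/comparative design analysis
    CRITICAL = 3   # failure could cause serious injury or death

def required_qa_level(criticality: Criticality) -> str:
    """Map a function's criticality to a (hypothetical) QA grade."""
    levels = {
        Criticality.MINOR: "QA-3: reduced rigor, limited review",
        Criticality.MAJOR: "QA-2: independent review required",
        Criticality.CRITICAL: "QA-1: full verification and validation",
    }
    return levels[criticality]

print(required_qa_level(Criticality.CRITICAL))
```

The point of grading is that the rigor (and cost) of verification scales with the consequence of failure, so an AI-produced scoping calculation and an AI-controlled locomotive would sit at opposite ends of the scale.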
In genealogical research, some organizations use AI systems to try to match people with their ancestors. However, I often find garbage in the results, because one finds many instances of the same name, e.g., John Smith, in a given geographical area such as a county/shire/district/parish (or several of them), and many participants are not careful, placing unrelated people in their family trees. That's an annoyance, and I can choose to ignore it. However, if the same lack of care were applied in a medical care situation, the outcome could be life-threatening for one or more persons (e.g., mixed-up patients receiving each other's care, or a misdiagnosis).
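The ambiguity problem can be shown concretely. In this toy Python sketch (all names, parishes, and dates are invented), matching on name and place alone returns several equally plausible candidates, which is exactly how unrelated people end up in a family tree:

```python
# Hypothetical parish records: several people share the same name
# in the same district, so name + place cannot identify a person.
records = [
    {"name": "John Smith", "parish": "Ashford", "born": 1801},
    {"name": "John Smith", "parish": "Ashford", "born": 1803},
    {"name": "John Smith", "parish": "Ashford", "born": 1805},
]

def match_by_name(name: str, parish: str, records: list) -> list:
    """Naive matcher: filters only on name and parish."""
    return [r for r in records if r["name"] == name and r["parish"] == parish]

candidates = match_by_name("John Smith", "Ashford", records)
print(len(candidates))  # three candidates -- the match is ambiguous
```

A careful matcher would demand additional corroborating fields (birth date, spouse, parents) before linking records; the annoyance in genealogy becomes a hazard when the same naive matching logic is applied to patient records.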