russ_watters
Mentor
I agree with most of what you're saying in that post, but I think there is some nuance here you might be missing or omitting:

f95toli said:
This is most definitely not correct. In fact, this is already happening. ML is already being used to make important decisions about people's lives (the police use automatic image recognition, government agencies use it to determine whether you are eligible for certain benefits, banks use it to evaluate credit card applications, etc.), and the fact that even the creators of the software do not always understand how the machines make decisions is already a problem.
1. We already use software to do this kind of analysis, machine learning or not. In theory, machine learning should offer improvement/refinement over traditional dumb criteria ("discard every resume without a 4-year degree"). So the main risk is simply that it fails to be an improvement.
2. In most of those examples the software isn't making decisions; it is providing analysis with which humans make decisions. In the cases where it does seem to decide, it appears limited to screening/sorting based on human-provided criteria, so it only "seems" to be making a decision.
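To make the distinction in points 1 and 2 concrete, here is a minimal sketch (all names, data, and the scoring formula are invented for illustration) contrasting a traditional hard rule, which silently discards candidates, with an ML-style score that merely ranks candidates for a human reviewer who makes the actual decision:

```python
# Hypothetical applicant pool; fields are invented for this example.
applicants = [
    {"name": "A", "has_degree": False, "years_experience": 8},
    {"name": "B", "has_degree": True,  "years_experience": 1},
    {"name": "C", "has_degree": True,  "years_experience": 5},
]

# Traditional "dumb" criterion: a hard discard. No human ever sees
# the rejected resumes, so the rule itself is the decision.
passed_rule = [a for a in applicants if a["has_degree"]]

def score(applicant):
    # Stand-in for a trained model's output: a relevance score,
    # not a decision. Weights here are arbitrary.
    return (0.5 * applicant["has_degree"]
            + 0.1 * min(applicant["years_experience"], 10))

# ML-style output: a ranked list handed to a human reviewer,
# who decides whom to interview.
ranked = sorted(applicants, key=score, reverse=True)
```

Note that the hard rule drops applicant A entirely, while the ranked list still surfaces A (with 8 years of experience) near the top for a human to consider; that difference is what "providing analysis" rather than "making decisions" means here.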
Most of the risk I see with these software tools comes from an exaggerated belief in their capabilities, which then paradoxically leads to giving them more executive control than they warrant.