In addition to other problems plaguing the nascent technology, a recent study suggests that self-driving cars may be biased against dark-skinned pedestrians and children.
Self-driving cars have been in the spotlight after causing traffic jams, getting stuck in concrete, and being pulled over by confused police officers.
However, according to a recent study, these basic technical failures pale in comparison to a more serious issue: skin-color bias.
The study, conducted by researchers at King's College London and not yet peer-reviewed, examined eight AI pedestrian detectors trained on widely used real-world datasets. It found that the software performed markedly worse at detecting dark-skinned pedestrians than white ones: because the systems struggle to detect people with darker skin, their detection rate for light-skinned pedestrians was about 8% higher.
This number is considered high and illustrates the real and deadly danger of biased AI systems.
According to the study, the researchers first fully annotated the data, tagging a total of 8,111 images with "16,070 gender markers, 20,115 age markers, and 3,513 skin color markers."
Crunching the numbers, the researchers found that detection accuracy for light-skinned pedestrians was 7.52 percentage points higher than for dark-skinned pedestrians. The study also showed that people with darker skin were even more likely to go undetected in "low contrast" or "low brightness" conditions, that is, at night.
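The disparity figure above comes from comparing per-group detection rates. As a minimal illustrative sketch (not the study's actual evaluation code; the groups and numbers below are hypothetical), such a gap can be computed like this:

```python
# Hypothetical detection outcomes: 1 = annotated pedestrian detected, 0 = missed.
# The labels and values are illustrative, not data from the study.
def detection_rate(results):
    """Fraction of annotated pedestrians the detector found."""
    return sum(results) / len(results)

light_skin = [1, 1, 1, 0, 1, 1, 1, 1, 1, 1]   # 9 of 10 detected
dark_skin  = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]   # 7 of 10 detected

gap = detection_rate(light_skin) - detection_rate(dark_skin)
print(f"Detection-rate gap: {gap:.2%}")  # prints "Detection-rate gap: 20.00%"
```

The study's reported 7.52-point difference is this kind of group-wise accuracy gap, aggregated over thousands of annotated images.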
In addition to this racial bias, the pedestrian detectors shared another troubling blind spot: children, who were shown to be roughly 20% less likely to be detected than adults.
It should be noted that none of the systems tested actually belong to a specific driverless car company, as such information generally falls into the category of "proprietary information".
However, Gang Li, co-author of the study and a senior lecturer in computer science at King's College London, told New Scientist that commercial models are unlikely to differ much, and given that self-driving vehicles are on the verge of major regulatory victories, that is deeply worrying.
"It is confidential information, and the companies do not let others know which models they are using," he said. "However, we do know that they are usually built on existing open-source models, so we can be quite sure their models have similar problems."
Many have warned that machine bias is a serious problem, and one whose impact is becoming more apparent as more and more advanced AI technologies are integrated into everyday life.
However, since lives are at stake, regulators cannot afford to overlook such bias and wait for an avoidable tragedy.