New research from MIT reinforces the idea that facial recognition software is subject to biases stemming from the data sets it is trained on and the way these algorithms are created.
The data set for this study was built by Joy Buolamwini, a researcher at the MIT Media Lab, using 1,270 faces of politicians from around the world.
The results from the facial recognition software showed inaccuracies in gender identification depending on the person’s skin color. According to The Verge, gender was misidentified in less than one percent of lighter-skinned males, in up to seven percent of lighter-skinned females, in up to 12 percent of darker-skinned males, and in up to 35 percent of darker-skinned females.
Buolamwini concluded that male subjects were more accurately classified than female ones, and that lighter-skinned subjects were more accurately classified than darker-skinned subjects. The subjects that fared the worst were darker-skinned females.
Since computer vision technology is being used in high-stakes sectors such as healthcare and law enforcement, more work needs to be done on benchmarking vision algorithms across various demographic and phenotypic groups.
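As a rough illustration of what such benchmarking involves (this is not code or data from the study), per-group error rates like those reported above can be computed by partitioning a model’s predictions by demographic group and tallying misclassifications. The group labels and sample records below are hypothetical.

```python
from collections import defaultdict

# Hypothetical records: (true gender, predicted gender, demographic group).
# These values are illustrative only, not the study's data.
predictions = [
    ("female", "male", "darker-skinned female"),
    ("female", "female", "lighter-skinned female"),
    ("male", "male", "lighter-skinned male"),
    ("male", "male", "darker-skinned male"),
]

errors = defaultdict(int)
totals = defaultdict(int)

# Count total predictions and misclassifications per group.
for true_label, predicted_label, group in predictions:
    totals[group] += 1
    if predicted_label != true_label:
        errors[group] += 1

# Report the misclassification rate for each demographic group.
for group in sorted(totals):
    rate = errors[group] / totals[group]
    print(f"{group}: {rate:.1%} misclassified ({errors[group]}/{totals[group]})")
```

Breaking accuracy out by group in this way, rather than reporting a single aggregate number, is what exposes the disparities the study describes.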
This is not the first time results like these have emerged, highlighting the need for more diverse data sets and more diverse teams building these technologies, so that there is less opportunity for such biases to creep in. This is particularly important in law enforcement, where biases could result in the incrimination of innocent people.