I'm really not trying to exaggerate with that title, but that's what the article says: "AI that can determine a person's sexuality from photos shows the dark side of the data age"
And the photos were from a DB where the people classified THEMSELVES, so it's not researchers cherry-picking what "they thought" sexuality looked like.
It's in this forum because there's no way this doesn't turn political in SOME way or another.
91% and 83% is pretty good. Not perfect of course, but... frighteningly accurate. But I need this paragraph explained:

"When presented with multiple pictures of a pair of faces, one gay and one straight, the algorithm could determine which was which 91 percent of the time with men and 83 percent of the time with women. People provided the same images were correct 61 and 54 percent of the time, respectively — not much better than flipping a coin."
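If it helps to make that pairwise number concrete, here's a toy simulation in Python. The score distributions are completely made up by me, not the study's actual model; I just picked numbers whose overlap reproduces the ~91% figure. The idea: assume the classifier gives each face a score and, shown a pair, calls the higher-scoring face gay. The fraction of pairs it gets right that way is exactly the AUC of the scores.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up score model: the classifier gives each face a "gay" score.
# The means/spreads are invented purely so the overlap lands near the
# article's 91% figure; the real model's scores are unknown to me.
gay_scores = rng.normal(loc=1.9, scale=1.0, size=100_000)
straight_scores = rng.normal(loc=0.0, scale=1.0, size=100_000)

# Pairwise task: one gay face, one straight face, call the higher score gay.
g = rng.choice(gay_scores, 200_000)
s = rng.choice(straight_scores, 200_000)
print(f"pairwise accuracy: {np.mean(g > s):.2f}")  # ~0.91 with these numbers
```

So "right 91 percent of the time" is a statement about comparing two faces side by side, not about screening a population, which is exactly where the next paragraph comes in.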
And this one:

"This accuracy, it must be noted, is only in the system's ideal situation of choosing between two people, one of whom is known to be gay. When the system evaluated a group of 1,000 faces, only 7 percent of which belonged to gay people (in order to be more representative of the actual proportion of the population), it did relatively poorly. Only its top 10 showed a 90 percent hit rate."

Top 10 of what? I don't get that paragraph.
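Best I can tell, "top 10" means the 10 faces out of the 1,000 that the system was MOST confident about. Extending the same toy model (again, invented numbers, not the real classifier) shows why the 7% base rate hurts so much: the 10 most confident calls can still be ~90% hits, while flagging enough people to cover everyone gay in the pool gets roughly half of them wrong.

```python
import numpy as np

rng = np.random.default_rng(0)

n_gay, n_straight = 70, 930          # 7% of 1,000, as in the article
is_gay = np.concatenate([np.ones(n_gay, bool), np.zeros(n_straight, bool)])

top10, top70 = [], []
for _ in range(2000):
    # Same invented score model as the pairwise sketch above.
    scores = np.concatenate([rng.normal(1.9, 1.0, n_gay),
                             rng.normal(0.0, 1.0, n_straight)])
    order = np.argsort(scores)[::-1]         # most confident first
    top10.append(is_gay[order[:10]].mean())  # hit rate among 10 most confident
    top70.append(is_gay[order[:70]].mean())  # hit rate if it flags 70 people

print(f"top 10 hit rate: {np.mean(top10):.0%}")  # ~90%, matching the article
print(f"top 70 hit rate: {np.mean(top70):.0%}")  # roughly half are false alarms
```

That's the classic base-rate problem: 91% accuracy on curated pairs still produces piles of false positives when the trait is rare in the population being scanned.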