Michal Kosinski, a psychologist at Stanford University, recently made a bold claim: an artificial intelligence (AI) tool he created can infer sensitive traits about a person from their appearance alone. According to Business Insider, the AI can guess a person's intelligence level (IQ), sexual orientation, and political views from a single photograph.
Kosinski's findings have caused considerable debate and raised significant ethical concerns. Critics argue that facial-recognition studies of this kind resemble phrenology, an old pseudoscience that improperly linked physical characteristics to mental traits. Kosinski rejects the comparison. He argues that his findings are not only reliable but also serve as a crucial warning to authorities about the risks such technologies pose.

One of Kosinski's studies, published in 2021, illustrates the technology's potential. His AI model detected a person's political views with 72% accuracy, far higher than the 55% achieved by human evaluators. Kosinski cautions that the widespread use of facial-recognition systems could have serious implications for privacy and freedom of expression.
Whatever his intentions, Kosinski's work has a darker side. Although he frames his results as cautionary tales, they also carry grave consequences. Publishing them may inadvertently set a precedent for discriminatory practices, particularly since such AI models are far from infallible.
For example, a 2017 paper co-authored by Kosinski suggested that facial-recognition technology could predict sexual orientation with 91% accuracy. Organizations such as the Human Rights Campaign and GLAAD condemned the study, calling it "dangerous and flawed," and warned that it could be used to target and persecute LGBTQ+ individuals.
In an era when facial recognition is already being abused, whether through the misclassification of minority groups or the false accusation of innocent people, Kosinski's findings raise serious concerns. His research may be intended as a warning, but it can also read as a set of instructions for anyone seeking to use this kind of technology to do harm.
Kosinski's work underscores the need for rigorous dialogue about the ethical implications of AI and facial-recognition technologies, particularly as society struggles to strike the right balance between innovation and the safeguarding of individual liberties.
Labels: #AI, #Future-Techs, #Mental, #Science, #Technology
