Not all is fair in algorithms

Image credit: @nytimes

While most of us believe that we are rational and logical beings, we are in fact driven by our cognitive biases.

Social media is one place where these biases get propagated, and sadly, with the reach and influence social media now has, they get amplified even further.

Here, I highlight a few instances of NLP algorithm bias (gathered from various news and research sources) in the context of Instagram, Facebook, TikTok and similar platforms, and how these biases can dangerously amplify polarization and affect minorities or particular groups of people.

Examples of how linguistic biases can contribute to polarization:

During the Black Lives Matter (BLM) protests, many activists were left frustrated when Facebook flagged or even blocked their accounts for policy violations, yet didn’t do enough to stop posts that were racist against the Black community.

Most NLP algorithms running in the back end of social media platforms are trained on datasets in standard English, or in the language variety spoken by one particular group or community. It is a known problem that dialects and language variations affect natural language processing accuracy in deciding what gets marked offensive and what doesn’t. Depending on the social setting, the same slur can be offensive in one community and totally acceptable in another.
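As a minimal, hypothetical sketch of this training-distribution problem (the toy texts and labels below are invented for illustration, not any platform’s real moderation data), a classifier trained only on one variety of English has no grounded basis for scoring text from a dialect it never saw:

```python
# A toy sketch: a "toxicity" classifier fit only on standard-English
# examples behaves arbitrarily on out-of-dialect text. All texts and
# labels here are invented placeholders, not a real moderation dataset.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny, invented "standard English" training set (1 = flagged as offensive).
train_texts = [
    "you are a wonderful person",
    "have a great day everyone",
    "you are an idiot and I hate you",
    "get lost you worthless fool",
]
train_labels = [0, 0, 1, 1]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

# In-dialect text vs. out-of-dialect slang: every token in the second
# sentence is unseen, so its TF-IDF vector is effectively empty and the
# score is driven by the model's intercept alone, not by meaning.
for text in ["have a great day", "dat party was da bomb, no cap"]:
    prob = model.predict_proba([text])[0][1]
    print(f"{text!r} -> P(offensive) = {prob:.2f}")
```

Real systems are far larger, but the failure mode is the same: what the model never saw during training, it cannot judge reliably.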

In two computational linguistics studies published in 2019, researchers discovered that AI intended to identify hate speech actually ended up amplifying racial bias.

  • In one study, researchers found that tweets written in African-American English, commonly spoken by Black Americans, are up to twice as likely to be flagged as offensive compared to others.
  • Another study, which used 155,800 tweets, showed evidence of systematic racial bias in all the datasets examined: classifiers trained on them flagged tweets written in African-American English as abusive at substantially higher rates.
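The kind of disparity those studies report can be quantified with a simple per-group audit. Below is a hedged sketch (the flag and group arrays are invented placeholders, not data from the cited studies) that compares how often each dialect group’s posts get flagged:

```python
# A minimal per-group audit: given a moderation model's flags and each
# author's dialect group, compare flag rates across groups. The arrays
# below are invented placeholders, not data from the 2019 studies.
from collections import defaultdict

# 1 = post flagged as offensive by the model.
flags  = [1, 0, 1, 1, 1, 1, 0, 0, 1, 0]
groups = ["AAE"] * 5 + ["SAE"] * 5  # African-American vs. standard English

totals, flagged = defaultdict(int), defaultdict(int)
for flag, group in zip(flags, groups):
    totals[group] += 1
    flagged[group] += flag

rates = {g: flagged[g] / totals[g] for g in totals}
for g, r in sorted(rates.items()):
    print(f"{g}: flagged {r:.0%} of posts")

# A ratio above 1 means AAE posts are flagged more often; the 2019
# studies found roughly a 2x gap for AAE tweets.
print(f"AAE/SAE flag-rate ratio: {rates['AAE'] / rates['SAE']:.1f}x")
```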

In 2017, ProPublica published a report based on Facebook’s internal documents and found that an unanticipated outcome of the way the algorithm was trained was that Facebook would censor hate speech against “protected categories,” which included white males, but allow attacks on “subsets” such as female drivers and Black children.

Another example of how these algorithms can amplify existing biases came in mid-2020, when Facebook’s algorithm deleted the accounts of Syrian journalists and activists on the pretext of terrorism when, in reality, they were campaigning against violence and terrorism.

These studies show how dangerous algorithmic bias can be when it receives so little consideration: it can negatively impact underrepresented communities (which are often already at risk) on social media by wrongly categorizing them as offensive, criminal or even terrorist.

Prime reasons these biases exist, and potential fixes:

  1. The models themselves are still not robust enough to handle large language variations. On the positive side, there has been promising research in cross-lingual NLP and in handling dialect/language fluidity (a simple output-side calibration idea is also sketched after this list).
  2. The datasets the algorithms are trained on carry systematic racial bias. For example, classifiers trained on standard English flag tweets written in African-American English as abusive at substantially higher rates. A potential fix is including people from diverse backgrounds in the entire development process, from data collection through algorithm and model development. Diversity is one issue many organizations still struggle with; as a result, these platforms are developed by a predominantly homogeneous group (white, male, American), and these potential issues are never thought of during the development or training stages.
  3. There is too little transparency from the companies themselves, and too little regulation from governments to push for research into reducing the polarization caused by these algorithms.
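One lightweight mitigation, sketched below under the assumption that dialect labels are available at evaluation time (a hypothetical setup, not any platform’s actual pipeline), is to calibrate the decision threshold per group so that flag rates are not skewed against one dialect:

```python
# Hypothetical mitigation sketch: instead of one global cutoff on the
# model's offensiveness score, pick a per-group threshold so each group
# is flagged at the same target rate. Scores and groups are invented.
import numpy as np

def per_group_thresholds(scores, groups, target_flag_rate=0.2):
    """For each group, choose the score cutoff whose flag rate equals the
    shared target (the (1 - target) quantile of that group's scores)."""
    thresholds = {}
    for g in set(groups):
        g_scores = np.array([s for s, grp in zip(scores, groups) if grp == g])
        thresholds[g] = np.quantile(g_scores, 1 - target_flag_rate)
    return thresholds

# Invented scores: the model systematically scores AAE posts higher.
scores = [0.9, 0.7, 0.8, 0.6, 0.75, 0.3, 0.2, 0.5, 0.1, 0.4]
groups = ["AAE"] * 5 + ["SAE"] * 5

cutoffs = per_group_thresholds(scores, groups)
for g, t in sorted(cutoffs.items()):
    print(f"{g}: flag only scores above {t:.2f}")
```

This equalizes flag rates across groups, which is only one notion of fairness and can trade off against others; it treats the symptom, while fixes 1 and 2 above address the underlying models and data.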

Follow me on LinkedIn here.

References:

[1] https://toronto.citynews.ca/2021/04/05/the-growing-criticism-over-instagrams-algorithm-bias/

[2] https://bloggeronpole.com/2020/06/instagram-quietly-admitted-algorithm-bias-but-how-will-it-fight-it/

[3] https://www.bbc.com/news/technology-57306800

[4] https://theconversation.com/beyond-a-technical-bug-biased-algorithms-and-moderation-are-censoring-activists-on-social-media-160669

[5] https://www.theverge.com/2019/3/19/18273018/facebook-housing-ads-jobs-discrimination-settlement

[6] https://www.propublica.org/article/facebook-hate-speech-censorship-internal-documents-algorithms

#algorithm #bias #underrepresented #unconsciousbias #inclusion #diversity #socialmedia #futureofwork #transformation #conversationsforchange #technology #engineering #ai #artificialintelligence #nlp #neuralnetworks
