October 23, 2015

Our relationship with the digital world, and with the math behind it, has grown increasingly intertwined and seamless. Math now shapes how we move, communicate, find a date, or look up directions to a restaurant (one with good reviews on Yelp, of course).

The power behind this technology comes from algorithms that identify patterns and make predictions based on user feedback and personal preferences.
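
To make that concrete, here is a minimal sketch, in Python, of the kind of feedback-driven prediction described above. The features, restaurants, and learning rule are illustrative assumptions, not any real product's algorithm:

```python
# Minimal sketch of feedback-driven prediction (illustrative only).
# Feature names, restaurants, and the update rule are assumptions
# for demonstration, not any real recommender system.

FEATURES = ["italian", "cheap", "nearby", "highly_rated"]

# Restaurants described by simple binary features.
restaurants = {
    "Trattoria Roma": {"italian": 1, "cheap": 0, "nearby": 1, "highly_rated": 1},
    "Noodle Bar":     {"italian": 0, "cheap": 1, "nearby": 1, "highly_rated": 0},
}

# The user's preference weights, learned from feedback.
preferences = {f: 0.0 for f in FEATURES}

def score(place):
    """Predict how much the user will like a place."""
    return sum(preferences[f] * v for f, v in place.items())

def record_feedback(place, liked, lr=0.1):
    """Nudge preference weights toward features of places the user liked."""
    signal = 1.0 if liked else -1.0
    for f, v in place.items():
        preferences[f] += lr * signal * v

# One "like" on an Italian place shifts every future prediction.
record_feedback(restaurants["Trattoria Roma"], liked=True)
ranked = sorted(restaurants, key=lambda name: score(restaurants[name]), reverse=True)
print(ranked)  # ['Trattoria Roma', 'Noodle Bar']
```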

Is it helpful? Certainly. But can it be harmful? Seeta Peña Gangadharan, a program fellow at New America's Open Technology Institute, thinks there’s more to the story than an algorithm’s ability to improve our access to knowledge.

Instead, she says, algorithms often strengthen existing biases in society, perpetuating discrimination.

Gangadharan finds that in our day-to-day lives, algorithms facilitate more sophisticated tracking and predatory practices.

“The use of algorithmic analysis on big data or vast amounts of information really enables this type of efficiency in targeting and tracking individuals,” she says. 

Big data provides a picture of individual spending habits and behavior, but is discrimination built in?

According to Gangadharan, in the run-up to the 2008 recession, certain ethnic groups within low-income populations were targeted for risky loans.

“Through the collection of data on individuals - both online and offline - and this merging of information between data brokers, lenders, marketers, online platforms, and lead-generators, there was this effect of identifying mostly African Americans and Latinos for these high-risk loans.”

Entrapping specific groups may never have been an intended consequence, but there is evidence that it is happening:

“We know from research done by Pew, for example, that low-income populations generally tend to be targeted for risky financial products.  But that situation or condition is particularly exacerbated in communities of color where there’s a whole confluence of social, economic, and political problems unfolding,” says Gangadharan.
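
One way this can happen is through proxy variables, sketched below in Python with made-up numbers: a targeting rule that never looks at race can still concentrate on particular neighborhoods when it keys on correlated fields such as ZIP code and income. The records, incomes, and cutoff here are assumptions for illustration:

```python
# Illustrative sketch with assumed data: a targeting rule with no
# "race" field can still single out particular neighborhoods when it
# keys on a correlated proxy such as ZIP code.
from collections import Counter

# Hypothetical merged broker records; note there is no race field.
records = [
    {"zip": "60620", "income": 28000},
    {"zip": "60620", "income": 31000},
    {"zip": "60614", "income": 95000},
    {"zip": "60614", "income": 88000},
]

# A hypothetical lead-generator rule for "subprime-eligible" prospects.
def target_for_risky_loan(r):
    return r["income"] < 40000

targeted = [r for r in records if target_for_risky_loan(r)]

# Because income and ZIP code correlate with race in many US cities,
# the cutoff concentrates targeting in one neighborhood.
print(Counter(r["zip"] for r in targeted))  # Counter({'60620': 2})
```

This is also part of why a company can run a discriminatory system without realizing it: nothing in the code ever mentions a protected attribute.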

Many tech companies aren’t even aware that their algorithms are discriminatory.

She says algorithms rely on the “original kind of bias in a certain population and then that grows and grows as the network becomes bigger.”
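
A toy simulation can make that feedback loop visible. In the Python sketch below, all of the numbers are assumptions: two groups behave identically, but the system directs its checks wherever its past records point, and new records are generated only where it checks. A small initial skew in the data then compounds on its own:

```python
# Toy model of a bias-amplifying feedback loop. All numbers are
# assumptions for illustration; both groups have the same true rate.
import random
random.seed(1)

TRUE_RATE = 0.3                          # identical for both groups
records = {"group_a": 6, "group_b": 5}   # tiny initial skew in the data

for _ in range(200):
    # Greedy policy: check the group with more records so far.
    checked = max(records, key=records.get)
    # New incidents are only observed where the system looks.
    if random.random() < TRUE_RATE:
        records[checked] += 1

# group_a, checked every round, piles up records; group_b stays frozen
# at its starting count, even though both groups behave identically.
print(records)  # e.g. {'group_a': 68, 'group_b': 5}
```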

What are the solutions? Consumers certainly notice when an app feels creepy or even racist, but noticing a problem is not the same as fixing it.

Gangadharan acknowledges that there is still a long way to go when it comes to diversifying a company’s network of users so that there are fewer discriminatory outcomes. As our integration with the online world deepens - via apps on our phones, in our homes, and in our cars - algorithms will only grow more sophisticated.

Gangadharan says that improved policies and transparency about how companies use the information collected about us can alleviate these issues.

“I think it’s possible to imagine a scenario where, a few years down the line, people will have gained that vocabulary for understanding how algorithmic analysis - how code - can structure the machines and our interaction with machines that suffuse our everyday lives.”
