When Netflix recommends you watch “Everything Sucks” after you’ve finished “Unbreakable Kimmy Schmidt,” an algorithm decided that would be the next logical thing for you to watch. When Google shows you one search result ahead of another, an algorithm decided that one page was more important than the other. And when law enforcement wrongly identifies a suspect using facial recognition, that’s an algorithm at work, too, and one that got it wrong.
Algorithms are sets of rules that computers follow in order to solve problems and decide on a course of action. Whether it’s the type of information we receive, the information people see about us, the jobs we get hired for, the credit cards we get approved for or, down the road, the driverless cars that either see us or don’t, algorithms play an ever-larger part in our lives.
But algorithms have an inherent problem, one that begins at the most basic level and persists throughout their adoption: human bias baked into these machine-based decision-makers.
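To make that mechanism concrete, here is a minimal, entirely hypothetical sketch in Python. The records, the zip codes and the rule are invented for illustration; the point is only that a rule derived from biased historical decisions reproduces that bias even though no line of the code mentions it.

```python
# A hypothetical sketch: how a "neutral" rule learned from biased
# historical decisions carries that bias forward.

# Invented past hiring records: (years_experience, zip_code, hired).
# Suppose past managers systematically rejected applicants from 94601.
history = [
    (5, "94105", True),
    (6, "94105", True),
    (5, "94601", False),
    (7, "94601", False),
]

def learned_rule(years_experience: int, zip_code: str) -> bool:
    """Approve applicants who resemble past hires: enough experience
    and a zip code with a successful track record in the data. The
    zip code, often a proxy for neighborhood and race, smuggles the
    old bias into an apparently objective rule."""
    experienced = years_experience >= 5
    zip_history = [hired for _, z, hired in history if z == zip_code]
    return experienced and bool(zip_history) and all(zip_history)

# Two equally qualified applicants get different outcomes:
print(learned_rule(6, "94105"))  # True
print(learned_rule(6, "94601"))  # False: bias baked in, not coded in
```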
So what does it take to ensure the algorithms used to make decisions about potentially life-changing circumstances like bail and policing are fair? And what does fair even mean?
At TechCrunch Disrupt San Francisco, I’ll be delving further into the topic with Kairos founder and CEO Brian Brackeen, and Kristian Lum and Patrick Ball of the Human Rights Data Analysis Group. This is a conversation you don’t want to miss.
Disrupt SF will take place in San Francisco’s Moscone Center West from September 5 to 7. The full agenda is here, and you can still buy tickets right here.