{"id":2631,"date":"2015-06-07T13:27:00","date_gmt":"2015-06-07T21:27:00","guid":{"rendered":"http:\/\/associatednews.info\/content\/what-makes-algorithms-go-awry\/2631\/"},"modified":"2015-06-07T13:27:00","modified_gmt":"2015-06-07T21:27:00","slug":"what-makes-algorithms-go-awry","status":"publish","type":"post","link":"https:\/\/associatednews.info\/content\/what-makes-algorithms-go-awry\/","title":{"rendered":"What Makes Algorithms Go Awry?"},"content":{"rendered":"<p><span style=\"font-style:italic;font-size:16px\">By <a target=\"_blank\" href=\"http:\/\/www.npr.org\/sections\/alltechconsidered\/2015\/06\/07\/412481743\/what-makes-algorithms-go-awry?utm_medium=RSS&amp;utm_campaign=business\">NPR Staff<\/a><\/span><\/p>\n<div class=\"ftpimagefix\" style=\"float:left\"><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.npr.org\/sections\/alltechconsidered\/2015\/06\/07\/412481743\/what-makes-algorithms-go-awry?utm_medium=RSS&amp;utm_campaign=business\"><img decoding=\"async\" width=\"150\" src=\"http:\/\/media.npr.org\/assets\/img\/2015\/06\/07\/istock_000021488772large-1afb6720a19f3e37be81211841faadbc03a94086-s1100-c15.jpg\"><\/a><\/div>\n<div>\n<div><\/div>\n<div>\n<div>\n<p>By clicking &#8220;Like&#8221; and commenting on Facebook posts, users signal to the social network&#8217;s algorithm that they care about something. That in turn helps influence what they see later. Algorithms like that operate all over the web \u2014 and the programs can reflect human biases. <strong>iStockphoto<\/strong><\/p>\n<\/div>\n<\/div>\n<\/div>\n<p>Like it or not, much of what we encounter online is mediated by computer-run algorithms \u2014 complex formulas that help determine our Facebook feeds, Netflix recommendations, Spotify playlists or Google ads.<\/p>\n<p>But algorithms, like humans, can make mistakes. 
Last month, users found the photo-sharing site Flickr&#8217;s new image-recognition technology was <a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.theguardian.com\/technology\/2015\/may\/20\/flickr-complaints-offensive-auto-tagging-photos\">labeling<\/a> dark-skinned people as &#8220;apes&#8221; and auto-tagging photos of Nazi concentration camps as &#8220;jungle gym&#8221; and &#8220;sport.&#8221;<\/p>\n<p>How does this happen? <a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/technosociology.org\/\">Zeynep Tufekci,<\/a> an assistant professor at the University of North Carolina at Chapel Hill&#8217;s School of Information and Library Science, tells NPR&#8217;s Arun Rath that biases can enter algorithms in various ways \u2014 not just intentionally.<\/p>\n<p>&#8220;More often,&#8221; she says, &#8220;they come through the complexity of the program and the limits of the data they have. And if there are some imperfections in your data \u2014 and there always [are] \u2014 that&#8217;s going to be reflected as a bias in your system.&#8221;<\/p>\n<div>\n<hr>\n<\/div>\n<h3>Interview Highlights<\/h3>\n<p><strong>On bias in the Facebook &#8220;environment&#8221;<\/strong><\/p>\n<p>These systems have very limited input capacity. So for example, on Facebook, which is most people&#8217;s experience with an algorithm, the only thing you can do to signal to the algorithm that you care about something is to either click on &#8220;Like&#8221; or to comment on it. The algorithm, by forcing me to only &#8220;Like&#8221; something, it&#8217;s creating an environment \u2014 to be honest, my Facebook is full of babies and engagements and happy vacations, which I don&#8217;t mind. I mean, I like that. 
When I see it, I click on &#8220;Like&#8221; \u2014 and then Facebook shows me more babies.<\/p>\n<p>And it doesn&#8217;t show me the desperate, sad news that I also care about a lot, that might be coming from a friend who doesn&#8217;t have &#8220;likable&#8221; news.<\/p>\n<p><strong>How biases creep into computer code<br \/><\/strong><\/p>\n<p>One, they can be programmed in directly, but I think that&#8217;s rare. I don&#8217;t think programmers sit around thinking, you know, &#8220;Let us make life hard for a certain group&#8221; or not. More often, they come through the complexity of the program and the limits of the data they have. And if there are some imperfections in your data \u2014 and there always [are] \u2014 that&#8217;s going to be reflected as a bias in your system.<\/p>\n<p>Sometimes [biases] can come in through the confusing complexity. A modern program can be so multi-branch that no one person has all the scenarios in their head.<\/p>\n<p>For example, increasingly, hiring is being done by algorithms. And an algorithm that looks at your social media output can figure out fairly reliably if you are likely to have a depressive episode in the next six months \u2014 before you&#8217;ve exhibited any clinical signs. So it&#8217;s completely possible for a hiring algorithm to discriminate and not hire people who might be in that category.<\/p>\n<p>It&#8217;s also possible that the programmers and the hiring committee [have] no idea that&#8217;s what&#8217;s going on. All they know is, well, maybe we&#8217;ll have lower turnover. They can test that. So there&#8217;s these subtle but crucial biases that can creep into these systems that we need to talk about.<\/p>\n<p><strong>How to limit human bias in computer programs<br \/><\/strong><\/p>\n<p>We can test it under many different scenarios. We can look at the results and see if there&#8217;s discrimination patterns. 
In the same way that we try to judge decision-making in many fields, when the decision making is done by humans, we should apply a similar critical lens \u2014 but with a computational bent to it, too.<\/p>\n<p>The fear I have is that every time this is talked about, people talk about it as if it&#8217;s math or physics, therefore some natural, neutral world. And they&#8217;re programs! They&#8217;re complex programs. They&#8217;re not like laws of physics or laws of nature. They&#8217;re created <em>by<\/em> us. We should look into what they do and not let them do everything. We should make those decisions explicitly.<\/p>\n<p>Source:: <a href=\"http:\/\/www.npr.org\/sections\/alltechconsidered\/2015\/06\/07\/412481743\/what-makes-algorithms-go-awry?utm_medium=RSS&amp;utm_campaign=business\" target=\"_blank\" title=\"What Makes Algorithms Go Awry?\" rel=\"nofollow\">http:\/\/www.npr.org\/sections\/alltechconsidered\/2015\/06\/07\/412481743\/what-makes-algorithms-go-awry?utm_medium=RSS&amp;utm_campaign=business<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<div class=\"ftpimagefix\" style=\"float:left\"><a target=\"_blank\" rel=\"nofollow\" href=\"http:\/\/www.npr.org\/sections\/alltechconsidered\/2015\/06\/07\/412481743\/what-makes-algorithms-go-awry?utm_medium=RSS&amp;utm_campaign=business\"><img decoding=\"async\" width=\"150\" src=\"http:\/\/media.npr.org\/assets\/img\/2015\/06\/07\/istock_000021488772large-1afb6720a19f3e37be81211841faadbc03a94086-s1100-c15.jpg\"><\/a><\/div>\n<p>Like it or not, much of what we encounter online is mediated by computer-run algorithms \u2014 complex formulas that help determine our Facebook feeds, Netflix recommendations, Spotify playlists or Google ads. But algorithms, like humans, can make mistakes.<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[19],"tags":[],"class_list":["post-2631","post","type-post","status-publish","format-standard","hentry","category-business-2"],"_links":{"self":[{"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/posts\/2631","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/comments?post=2631"}],"version-history":[{"count":0,"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/posts\/2631\/revisions"}],"wp:attachment":[{"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/media?parent=2631"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/categories?post=2631"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/associatednews.info\/content\/wp-json\/wp\/v2\/tags?post=2631"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}