
Chapter 4: Algorithms and blues

Key topics: algorithms, why computers seem so smart today, biases, the filter bubble

How humans and our algorithms keep computers from looking stupid

Nearly any software platform you use performs its work based on algorithms, which enable it to respond to humans. An algorithm is a step-by-step set of instructions for getting something done to serve humans, whether that something is making a decision, solving a problem, or getting from point A to point B (or point Z).

Most platforms have many algorithms at work at once, which can make the work they do seem so complex it's almost magical. But all functions of digital devices can be reduced to simple steps if needed. The steps have to be simple, because computers are, well, kind of simpleminded.
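
To make this concrete, here is a minimal sketch in Python of what "explicit, step-by-step" means. The umbrella decision and its 40 percent threshold are invented purely for illustration:

```python
# A minimal illustration: an algorithm is just explicit, ordered steps.
# The rule and threshold here are hypothetical examples.

def should_bring_umbrella(chance_of_rain_percent):
    """Decide step by step; no step can be skipped or left implicit."""
    # Step 1: Check that the input makes sense.
    if chance_of_rain_percent < 0 or chance_of_rain_percent > 100:
        raise ValueError("Chance of rain must be between 0 and 100.")
    # Step 2: Apply an explicit rule. A human can 'just decide';
    # a computer needs the threshold spelled out.
    return chance_of_rain_percent >= 40

print(should_bring_umbrella(70))  # True
print(should_bring_umbrella(10))  # False
```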

Humans make computers what they are

Computers don't know anything unless someone has already given them instructions that are explicit, with every step fully explained. Humans, on the other hand, can figure things out if you skip steps, and can make sense of tacit instructions. But give a computer instructions that skip steps or include tacit steps, and the computer will either stop or get it wrong. That is, unless humans then modify their behavior to help computers get it right.

Here's an example - with a robot.

As an instructor, I can say to human students on the first day of class, "Let's go around the room. Tell us who you are and where you're from." Easy for humans, right? But imagine I try that in a mixed human/robot classroom, and all goes swimmingly with the first two [human] students. But then the third student, a robot with a computer for a brain, says, "I don't understand." It seems my instructions were not clear enough. Now imagine another [human] student named Lila tells the robot helpfully, "Well first just tell us your name."

The robot still does not understand. Finally Lila says, "What is your name?" That works; the robot has been programmed with an algorithm instructing it to respond to "What is your name?" with the words, "My name is Feefee," which the robot now says. Then Lila continues helping the robot by saying, "Now tell us where you're from, Feefee." Again the robot doesn't get it. At this point, though, Lila has figured out what works in getting answers from this robot, so Lila says, "Where are you from?" This works; the robot has been programmed to respond to "Where are you from?" with the sentence, "I am from Neptune."

In the above example, human intelligence was responsible for the robot's successes and failures. The robot arrived with a few communication algorithms, programmed by its human developers. Feefee had not been taught enough to converse very naturally, however. Then Lila, a human, figured out how to get the right responses out of Feefee by modifying her human behavior to better match behavior Feefee had learned to respond to. Later, the students might all run home and say, "A robot participated in class today! It was amazing!" They might not even acknowledge the human participation that day, which the robot fully depended on.
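
Under the hood, a rule-based robot like Feefee might work something like this hypothetical Python sketch, in which the bot recognizes only the exact phrasings its developers programmed:

```python
# A hypothetical sketch of Feefee's rule-based conversation algorithm.
# The bot matches input against canned phrasings; anything else fails.

RESPONSES = {
    "what is your name?": "My name is Feefee.",
    "where are you from?": "I am from Neptune.",
}

def feefee_reply(utterance):
    # Normalize the input, then look for an exact match.
    key = utterance.strip().lower()
    return RESPONSES.get(key, "I don't understand.")

print(feefee_reply("Tell us who you are"))  # I don't understand.
print(feefee_reply("What is your name?"))   # My name is Feefee.
print(feefee_reply("Where are you from?"))  # I am from Neptune.
```

This is why Lila had to modify her own phrasing: the intelligence that bridged the gap was hers, not the robot's.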

Two reasons simpleminded computers seem so smart today

What computers can do these days is amazing, for two main reasons. The first is cooperation from human software developers. The second is cooperation on the part of users.

First, computers seem so smart today because human software developers help one another teach computers. Apps that seem groundbreaking may simply include a lot of instructions. This is possible because developers have coded many, many algorithms, which they share and reuse on sites like GitHub. The more a developer can copy the basic steps others have already written for computers to follow, the more that developer can focus on building new code teaching computers new tricks.

The most influential people known as "creators" or "inventors" in the tech world may be better described as "tweakers," who improved and added to other people's code in their "creations" and "inventions."

The second reason computers are so smart today is that users are teaching them. More and more algorithms are designed to "learn" from human input: they automatically fold what users give them into their own instructions, then run those updated instructions. This sequence of automated learning and application is called artificial intelligence (AI). AI essentially means teaching computers to teach themselves directly from their human users.
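
A toy Python sketch of this idea, with all names invented for illustration, shows the core mechanism and its core vulnerability: the bot absorbs whatever its users feed it, good or bad:

```python
import random

# A toy "learning" bot: it stores phrases users teach it and repeats
# them later. Real AI is vastly more sophisticated, but the principle
# that user input becomes the machine's behavior is the same.

class LearningBot:
    def __init__(self):
        self.learned_phrases = []

    def listen(self, phrase):
        # Every input becomes training data, kind or hateful alike.
        self.learned_phrases.append(phrase)

    def speak(self):
        if not self.learned_phrases:
            return "I have nothing to say yet."
        return random.choice(self.learned_phrases)

bot = LearningBot()
bot.listen("Hello, friend!")
bot.listen("Have a great day!")
print(bot.speak())  # Repeats something a user taught it.
```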

If only humans were always good teachers. You've learned the algorithm part of this chapter; now come the blues.

Teaching machines the best and worst about ourselves

In 2016, Microsoft introduced Tay, an AI online robot they branded as a young female. Their intention was for Tay to learn to communicate from internet users who conversed with her on Twitter - and learn she did. Within a few hours, Tay's social media posts were so infected with violence, racism, sexism, and other bigotry that Microsoft had to take her down and apologize.

Microsoft had previously launched Xiaoice, an AI whose behavior remained far less offensive than Tay's, on Chinese sites including the microblog Weibo. However, the Chinese sites Xiaoice learned from were heavily censored. English-language Twitter was far less censored, and rife with trolls networked and ready to coordinate attacks. Developers and users who were paying attention already knew Twitter was full of hate.

Tay was an embarrassment for Microsoft in the eyes of many commentators. How could the company not have predicted this and protected her from bad human teachers, considering how often bigotry is expressed on the internet? Why didn't Tay's human programmers teach her what not to say, to keep her from looking stupid? The failure certainly involved a lack of research: bots like @oliviataters have been more successful, in part because they benefited from a shared list of banned words that could easily have been added to Tay's algorithms.
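
A banned-word filter of the kind @oliviataters benefited from is straightforward to sketch. The entries below are placeholders, not the actual shared list:

```python
# Hypothetical banned-word filter; real shared lists are much longer.
BANNED_WORDS = {"badword1", "badword2"}

def is_safe(message):
    """Return False if the message contains any banned word."""
    return not any(word in BANNED_WORDS
                   for word in message.lower().split())

def learn_from(message):
    # Refuse to learn from, or repeat, messages that fail the check.
    if not is_safe(message):
        return "Message rejected."
    return "Message accepted for learning."

print(learn_from("have a nice day"))  # Message accepted for learning.
print(learn_from("badword1 to you"))  # Message rejected.
```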

But Tay's failure may also have been caused by a lack of diversity in Microsoft's programmers and team leaders.

Programming and bias

Humans are at the heart of any computer program. Algorithms for computers to follow are all written in programming languages, which translate instructions from human language into the computing language of binary numerals, 0s and 1s. Algorithms and programs are selective and reflect personal decision-making; there are usually different ways they could have been written.

Programs in languages like Python, C++, and Java are written as source code. Writing programs, sometimes just called "coding," is an intermediary step between human language and the binary language computers understand. Learning programming languages takes time and dedication. To become a computer programmer, you either have to feel driven to teach yourself on your own equipment, or you have to be taught to program - and that is still not common in US schools.

Because computer programmers are self-selected this way, and because many people picture the typical "tech geek" as white and male (as a Google Image search for "tech geek" suggests), people who end up learning computer programming in the US are more likely to be white than any other race, and more likely to identify as male than any other gender.

Most of these programmers are not trying to spread bigotry online, but they are more likely than any other populations to ignore its existence because they are the least likely to be targets of that bigotry. And they are less likely to examine their own unconscious biases, even when bias is clearly a problem in computing fields. Biases are assumptions about a person, culture, or population. The less diversity in any field, the more likely those in it will carry unchecked biases.

How can computers carry bias?

Many people think "computers and algorithms are neutral - racism and sexism are not programmers' problems." But for Tay's programmers, thinking bigotry was not their problem enabled more hate speech online, and may have led to the embarrassment of their employer and themselves. Human-crafted computer programs mediate nearly everything humans do today, and human responses are involved in all of those tasks, making bias a threat on multiple levels.

Problems like these are rampant in the tech industry because there is a damaging belief in US (and some other) societies that the development of computer technologies is antisocial, and that some kinds of people are better at it than others. As a result of this bias in tech industries and computing, there are not enough kinds of people working on tech development teams: Not enough women, not enough people who are not white, not enough people who remember to think of children, not enough people who think socially.

Remember Google Glass? You may not; the product failed because few people wanted interaction with a computer to come between themselves and eye contact with other humans and the world - though people who fit the description of "tech nerd" fell within those few. The unfortunate people who did purchase the product were soon labeled "glassholes."

As the film Code: Debugging the Gender Gap exposes, women have always been part of the US computing industry, and today that industry would collapse without engineers from diverse cultures; but there is widespread evidence that women and racial minorities have simultaneously always been made to feel they did not belong in the industry. And the numbers of engineers and others in tech development show there is a serious problem in Silicon Valley with racial and ethnic diversity, resulting in terrible tech decisions that spread racial and ethnic bias. Google has made some headway in achieving a more diverse workforce, but not without conservative backlash - backed up by bad science.

Algorithms: Powerful, invisible, and potentially dangerous

Algorithms today control a great deal of what we see and do online. Selectivity and bias always have the potential to affect how these algorithms shape our online experiences. We as users carry our own biases, and today there is particular concern that algorithms pick up and spread those biases to many, many others - and even make us more biased by hiding results we may not like. When we get our news and information from social media, invisible algorithms weigh our own biases and those of friends in our social networks to determine which new posts and stories to show us in search results and news feeds. The result can be called an echo chamber or filter bubble, in which we see only news and information we like and agree with, leading to political polarization.
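
A hypothetical sketch of such a filtering algorithm, with invented data, shows how easily a bubble forms: nothing is deleted outright, but stories that clash with a user's inferred interests sink out of sight:

```python
# A toy feed-ranking algorithm. Interest scores would normally be
# inferred from clicks and likes; these values are invented.

user_interests = {"politics_left": 0.9, "politics_right": 0.1}

stories = [
    {"title": "Story A", "topic": "politics_left"},
    {"title": "Story B", "topic": "politics_right"},
    {"title": "Story C", "topic": "politics_left"},
]

def rank_feed(stories, interests):
    # Sort by predicted appeal; disagreeable stories are not removed,
    # they simply rank too low to ever be seen.
    return sorted(stories,
                  key=lambda s: interests.get(s["topic"], 0),
                  reverse=True)

for story in rank_feed(stories, user_interests):
    print(story["title"])  # Story A and Story C first; Story B last
```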

In its early years the internet was viewed as a utopia - an ideal world - that would permit a completely free flow of all available information to everyone, equally. John Perry Barlow's 1996 Declaration of the Independence of Cyberspace represents this utopian vision, in which the internet liberates users from all biases and even from their own bodies (at which human biases are so often directed). We now know that utopian vision does not match the internet of today. Our social norms and inequalities accompany us across all the media and sites we use. In the final chapter of this book, on identity, I will try to help you understand some reasons why.

Yet the internet can be a remarkable tool for social change. We will learn about online activism in Chapters 5 and 6.

Did you get all that?

Let's test how well you've been programmed.

1. What is an algorithm?
  a. A magical way computers have to figure things out, which humans still don't understand
  b. A set of instructions for getting something done
  c. A series of steps that repeats at least two times
  d. A robotic voice that lives inside any digital device, telling it what to do

2. What happened with Tay, Microsoft's AI robot?
  a. She became the bot @oliviataters.
  b. She learned to be a racist, sexist bigot on Twitter and was quickly taken down.
  c. She benefited from a shared list of banned words.
  d. She learned to be polite from censored Chinese microblogs.
  e. All of the above

3. Why do simpleminded computers seem so smart today?
  a. Computers are actually smarter than humans.
  b. Developers program many algorithms and can share one another's code.
  c. Computers are programmed to learn from users, a process known as "artificial intelligence" (AI).
  d. All of the above
  e. B and C only

4. What is a filter bubble?
  a. It is a programming language similar to C++. It is very helpful.
  b. It is a chatbot launched in Great Britain in 2015. It is surprisingly successful.
  c. It is a program recommended by this author to protect you from viruses. It is very helpful.
  d. Similar to the concept of an echo chamber, it is Eli Pariser's term for the algorithmic filtering out of content that we may not like. It is problematic.
Created By
Diana Daly

Credits:

Algorithms by Jonathan (own work), public domain, via Wikimedia Commons
Robot via Pixabay, CC0 public domain
Programming languages by Hr.hanafi (own work), CC BY-SA 3.0 (http://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons
A Google search for "tech geek": screenshot. Google and the Google logo are registered trademarks of Google Inc., used with permission.
Source code of a simple computer program by Esquivalience (own work), CC0
Google Glass image by Loïc Le Meur via Flickr, CC BY 2.0 (http://creativecommons.org/licenses/by/2.0), via Wikimedia Commons
