{"id":1116,"date":"2012-04-20T07:00:47","date_gmt":"2012-04-20T11:00:47","guid":{"rendered":"https:\/\/enterprisestrategies.com\/?p=1116"},"modified":"2015-07-27T04:41:13","modified_gmt":"2015-07-27T04:41:13","slug":"automation_bias","status":"publish","type":"post","link":"https:\/\/enterprisestrategies.com\/2012\/04\/20\/automation_bias\/","title":{"rendered":"We Can’t Lose; We Have Computer Power!"},"content":{"rendered":"
Let me start by retelling one of my favorite bedtime stories from childhood. No, not the one about the little boy who kept getting external symbol linkage errors in C++ only to discover that the assembler directives in the makefile were wrong. (Although that one is a classic!) This story is about a down-trodden Little League baseball team. At the beginning of the season, the coach, who I guess was tired of losing, proposed a radical new approach.

He gave each child a survey and fed the answers into a computer. The computer assigned each child a new position in the lineup. (I was assigned to play “the bench,” which my mom assured me was a very important position.) As expected in these kinds of stories, the new lineup began to win. When they fell behind, the children would rally by saying things like, “We can’t lose, we are the computer team” or “We have computer power.” Inevitably, they went on to win the championship.

At the celebration, the coach told the team, who were so inspired by their perfectly aligned, computer-selected roles, that the computer had said everyone should be a pitcher. He had lied to the kids and simply shuffled the lineup at random. In the end, the kids were the real reason the team won, not “the machine.” The story had a happy ending…as long as you ignore that an adult in a position of power lied to children and manipulated them to win games. But I digress.

The purpose of retelling this story is to introduce an idea called automation bias. To be fair, automation bias isn’t widely known. In fact, it doesn’t even have an entry in Wikipedia, which nowadays practically means it never existed. (But I assure you, it is as real as I am…wait a minute, I’m not in Wikipedia either!) It is a real phenomenon, and one you have probably experienced first-hand. Linda Skitka from the University of Illinois has done some very interesting research in this area, especially around medical and pilot errors.

Like the children in our story, real-life humans often make similar mistakes. Automation bias refers to the phenomenon in which people mistakenly believe in the infallibility of computer-based systems. It is called a bias because users become predisposed to accept the computer’s authority and act in accordance with its recommendations. Like the kids in our story, people become emotionally invested in the “power of computers,” and it affects their behavior.

This extends even into situations involving life and death. For example, research shows that air traffic controllers, despite their extensive training and ability to perform well under stress, lean towards delegating key decisions to a computer system. Studies show that people will trust a computer’s advice even when it contradicts their own intuition or training. This includes highly trained professionals such as pilots and doctors. The Therac-25 accidents recount numerous doctors ignoring their intuition and training because of their bias towards the machines. The effect also extends to people specifically trained in technology, who presumably should know how fallible technology can be. Even working in groups with other people doesn’t totally eliminate it.

Most research indicates that automation bias is an unconscious cognitive shortcut meant to improve efficiency. That’s ironic, because studies have shown that adding automation can actually decrease performance. It turns out that automated cues diminish users’ willingness to put forth the cognitive effort to process all the information, especially when the task at hand is complex. Other studies show that over-reliance on computers weakens our social intuition for perceiving bias and faulty expertise. Said another way, the presence of technology makes it harder for us to tell who is full of it.

What does this have to do with social media?

Because automation today is amorphous. Technology is everywhere, and automation is integrated into everything we do. We use technology as a second brain. Edwin Hutchins developed the theory of distributed cognition in the mid-1980s. Interestingly enough, he also studied pilots. (Note to self: if I want to be studied by future psychologists, become a pilot…or a serial killer.) We use devices as extensions of our own brains, doing things we can’t or don’t want to do. Seriously, when is the last time you manually dialed a phone number? But automation bias is something more, something deeper than offloading hard-to-remember data. Automation bias replaces vigilant information seeking and processing. That is why it is a challenge.

What you have to realize is that automation bias affects how users view and respond to the information presented to them. You have to be sensitive to the influence (perhaps undue?) that a system has over users’ perception of the information itself. Sure, deep down we know that social networks are populated by people; in fact, that’s why we use them. But on another level, how do we access our “social” network? By logging into a machine, or picking up a device, or clicking a mouse. At that point, we are at the fringes of automation bias, subconsciously trying to differentiate system from human. We are, after all, not interacting with people. We are interacting with a machine that is a proxy for the people we really want to reach.

And the more technology does for us automatically, the stronger the chance that our cognitive laziness undermines the very systems we are building. So it is an interesting challenge: how do you introduce and socialize technology into a complex organization? You want to empower employees to use technology without overpowering them. If you lose the ability to leverage their instincts and experience, if they defer mindlessly to a system’s authority, then you lose what makes them human and what makes them your best asset.

Sherry Turkle (@STurkle) is a renowned researcher from MIT who writes a lot about these issues. (And she has a Wikipedia page!) Though she doesn’t talk about automation bias per se, she does describe how the virtual world interacts with (and supplants) the real world. In some sense, you could view that as an extreme case of automation bias. She gave a fantastic TEDx talk in 2011 about the “culture of distraction.” But think about how we often manage these distractions…with more and more automation. #irony