We Can’t Lose; We Have Computer Power!

Let me start by retelling one of my favorite bedtime stories from childhood. No, not the one about the little boy who kept getting external symbol linkage errors in C++ only to discover that the assembler directives in the makefile were wrong. (Although that one is a classic!) This story is about a downtrodden Little League baseball team. At the beginning of the season, the coach, who I guess was tired of losing, proposed a radical new approach.

He gave each child a survey and fed those answers into a computer. The computer gave each child a new position in the line-up. (I was assigned to play “the bench,” which my mom assured me was a very important position.) As expected in these types of stories, the new line-up began to win. When they would fall behind, the children would rally by saying things like, “We can’t lose, we are the computer team” or “We have computer power.” Inevitably, they went on to win the championship.

At the celebration, the coach told the team – who were so inspired by their perfectly aligned, computer-selected roles – that the computer said everyone should be a pitcher. He had lied to the kids and simply shuffled the lineup at random. In the end, the kids were the real reason the team won, not “the machine”. The story had a happy ending…as long as you ignore that an adult in a position of power lied to children and manipulated them to win games. But I digress.

The purpose of retelling this story is to introduce an idea called automation bias. To be fair, automation bias isn’t widely known. In fact, it doesn’t even have an entry in Wikipedia – which nowadays practically means it never existed. (But I assure you, it is as real as I am…wait a minute, I’m not in Wikipedia either!) It is a real phenomenon and one you have probably experienced first-hand. Linda Skitka from the University of Illinois at Chicago has done some very interesting research in this area, especially around medical and pilot errors.

Like the children in our earlier story, real-life humans often make similar mistakes. Automation bias refers to the phenomenon in which people mistakenly assume that computer-based systems are infallible. It is called a bias because users become predisposed to accept the computer’s authority and act in accordance with its recommendations. Like the kids in the story, we become emotionally invested in the “power of computers,” and it affects our behavior.

This extends even into situations involving life and death. For example, research shows that air traffic controllers, despite their extensive training and ability to perform well under stress, lean toward delegating key decisions to a computer system. Studies show that people will trust a computer’s advice even when it contradicts their own intuition or training. This includes highly trained professionals such as pilots and doctors. Accounts of the Therac-25 accidents describe numerous doctors ignoring their intuition and training because of their bias toward the machines. The effect also extends to those specifically trained in technology, who presumably should know how fallible technology can be. Even working in groups with other people doesn’t totally eliminate it.

Most research indicates that automation bias is an unconscious cognitive shortcut meant to improve efficiency. That’s ironic, because studies have shown that increased automation can actually decrease performance. It turns out that automated cues diminish users’ willingness to put forth the cognitive effort to process all the information, especially when the task at hand is complex. Other studies show that over-reliance on computers weakens our social intuition for perceiving bias and faulty expertise. Said another way, the presence of technology makes it harder for us to tell who is full of it.

What does this have to do with social media?

Because automation today is amorphous. Technology is everywhere, and automation is integrated into everything we do. We use technology as a second brain. Edwin Hutchins developed the theory of distributed cognition in the mid-1980s. Interestingly enough, he also studied pilots. (Note to self: if I want to be studied by future psychologists, become a pilot…or a serial killer.) We use devices as an extension of our own brains, doing things we can’t or don’t want to do ourselves. Seriously, when is the last time you manually dialed a phone number? But automation bias is something more, something deeper than storing hard-to-remember data. Automation bias replaces vigilant information seeking and processing. That’s why it is a challenge.

What you have to realize is that automation bias will affect how users view and respond to the information presented to them. You have to be sensitive to the influence (perhaps undue?) that a system has over users’ perception of the information itself. Sure, deep down, we know that social networks are populated by people…in fact, that’s why we use them. But on another level, how do we access our “social” network? By logging into a machine, or accessing a device, or clicking a mouse. At that point, we are at the fringes of automation bias, subconsciously trying to differentiate system from human. We are, after all, not interacting with people. We are interacting with a machine that is a proxy for the people we really want to reach.

And the more technology does for us automatically, the stronger the chance that our cognitive laziness undermines the very system we are building. So it is an interesting challenge. How do you introduce and socialize technology into a complex organization? You want to empower employees to use technology without overpowering them. If you lose the ability to leverage their instincts and experience – if they defer mindlessly to a system’s authority – then you lose what makes them human and what makes them your best asset.

Sherry Turkle (@STurkle) is a renowned researcher from MIT who writes a lot about these issues. (And she has a Wikipedia page!) Though she doesn’t talk about automation bias per se, she does describe how the virtual world interacts with (and supplants) the real world. In some sense, you could view that as an extreme case of automation bias. She gave a fantastic TEDx talk in 2011 about the “culture of distraction”. But think about how we often manage these distractions…with more and more automation. #irony

The subtitle of her book Alone Together says it all: Why We Expect More from Technology and Less from Each Other.

I happen to prefer another book of hers entitled Life on the Screen. It is near and dear to my heart since it was written years ago about text-based MUDs – which kids today would stare at like a T-Rex skeleton. Years from now my grandchildren will tug on my shirt and ask, “Papa, why did you play games that were text-based? Were you, like, in jail or something?” Anyway, one of my favorite lines from that book is, “You are what you pretend to be…” (Then I really am a ninja!)

One last thought about automation bias. The children’s story I opened with is etched in my memory. I know – almost without a doubt – that my mother read it to me as a kid. I look back and think maybe that is what drew me to technology. But I cannot for the life of me find that story – or even the name of the book – anywhere on the web. So I have come to doubt that the story ever existed. Automation bias in action. That’s how powerful it is. If Google says it isn’t there, it doesn’t exist. I start to think perhaps I am wrong; perhaps I made it all up. Or perhaps the CIA implanted the story in my head to make me some kind of Manchurian blogger. Sounds crazy, but like my mom used to say as we made tin-foil hats, “You can never be paranoid enough, Adam…if that is your real name.” Ok, that’s probably not what happened, but you get the idea…