AI is Just Opinions Written in Code
Artificial Intelligence (AI) is such a promising technology. It's so exciting that I worry a little that people treat it like magic. However, magic may be the wrong word. Instead, it could be the same old human biases, rendered in ones and zeroes and given a new authority they might not have earned.
We discussed this on a recent podcast with our guest, Broderick Turner, Ph.D., Assistant Professor of Marketing at Virginia Tech. Dr. Turner founded TRAP LAB (Technology, Race, and Prejudice LAB) to expose the inherent biases about race and racism that have mixed into the foundations of marketing, marketplaces, consumer technology, and market research. The lab holds a weekly meeting that will resume in September, and he invites our audience to join. The Zoom meetings feature research-active faculty from across the globe and from all academic disciplines and are designed to enhance organizations' economic, environmental, and social outcomes.
Dr. Turner explains that he thinks of AI the way author Cathy O'Neil does in Weapons of Math Destruction: algorithms are opinions embedded in code. Dr. Turner believes that description captures exactly how algorithms and machine learning work. In his classes, Dr. Turner asks students if they are familiar with the basic linear regression formula, Y = mX + B. They are (although I might have been hard-pressed to remember that one without his help, I must admit).
Dr. Turner explains that AI decides the "Y" of an outcome. For example, approve a loan or not? Approve parole or not? These are the Y. What matters, Dr. Turner says, is how much weight is placed on each "X." Since AI is only doing the math, what matters is how much importance the programmer puts on the inputs, and who decides that. As of now, only humans decide; AI hasn't progressed to that level of decision-making.
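To make that concrete, here is a minimal sketch of a loan decision as a weighted sum of inputs. The feature names, weights, and threshold are all illustrative assumptions, not from any real lending model; the point is that a human chose them, and the "AI" only does the arithmetic.

```python
# Hypothetical sketch: a loan decision (the "Y") as a weighted sum
# of inputs (the "X"s). All weights here are human choices.

def approve_loan(income, credit_score, years_employed,
                 w_income=0.5, w_score=0.4, w_years=0.1,
                 threshold=0.6):
    """Return True if the weighted score clears the threshold."""
    # Normalize inputs to a rough 0-1 scale for this toy example.
    score = (w_income * min(income / 100_000, 1.0)
             + w_score * (credit_score / 850)
             + w_years * min(years_employed / 10, 1.0))
    return score >= threshold

# The same applicant can be approved or denied depending only on
# the weights a human picked, not on anything the applicant did.
applicant = dict(income=60_000, credit_score=700, years_employed=3)
print(approve_loan(**applicant))  # True with the default weights
print(approve_loan(**applicant, w_income=0.8, w_score=0.1, w_years=0.1))  # False
```

Notice that nothing about the applicant changed between the two calls; only the opinion encoded in the weights did.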
So, Dr. Turner says, AI is not magic. Instead, it's a 10th-grade algebra formula stacked on top of itself.
Moreover, we have not reached the point where AI decides the importance of its own inputs; human beings do. We humans make a bunch of small decisions that add up to something that looks big and magical but is, in reality, that same 10th-grade equation stacked on top of itself.
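The "stacked on top of itself" idea can be sketched in a few lines. Each layer of a neural network is just y = wX + b passed through a simple nonlinearity; stack a couple of them and you have a tiny network. The weights below are arbitrary illustrations.

```python
# "A 10th-grade algebra equation stacked on top of itself":
# each layer is a weighted sum plus an intercept (y = wX + b),
# followed by a simple nonlinearity.

def layer(inputs, weights, bias):
    """One 'linear regression' step."""
    y = sum(w * x for w, x in zip(weights, inputs)) + bias
    return max(0.0, y)  # ReLU: keep the positive part

# Stack two layers and you have a (very tiny) neural network.
x = [1.0, 2.0]
hidden = [layer(x, [0.5, -0.25], 0.0),   # first "equation"
          layer(x, [0.25, 0.375], 0.0)]  # second "equation"
output = layer(hidden, [1.0, 1.0], -0.5)  # stacked on top
```

Every number in `weights` and `bias` is, at bottom, a value someone (or some human-designed training procedure) put there.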
The misconception is that machine learning and other forms of AI are pure: that technological brains build ideas from scratch, free of emotional clutter, and self-develop to generate new revelations. Someday that might be true, but today, Dr. Turner explains, AI is built by humans, and, like anything made by humans, it is at risk of human influence, which can skew the results.
In other words, there is no need to build a panic room or stock up on canned goods for a SkyNet takeover just yet.
A machine takeover is purely (ridiculous) fiction with the technology we have today, per Dr. Turner, because using a bunch of linear regressions to replace the complexity of human thought and cognition doesn't make any sense. The way humans process information is much more complex. Dr. Turner says a person with below-average intelligence has more to offer than the "smartest" AI today. As it stands, Dr. Turner says AI should not stand for artificial intelligence but for artificial ignorance.
So, What's AI Good for Then?
Even with these limitations, it is clear that people are excited about AI and its future potential, and they are looking to use it in various applications. So, what are its advantages? Dr. Turner says it depends on the weights that humans put in there. Humans make those decisions, even in unsupervised machine learning, which means the weights are still opinions, and those opinions affect AI's suitability for a task. Therefore, Dr. Turner says today's AI works well in spaces with limited inputs where we humans have high confidence about what the weights should be.
From a consumer perspective, it is interesting to consider the most successful AI robotic product. Any guesses?
If you said, "The Roomba," you are right.
Let's be honest. That's not quite what we were promised in our youth about the future of robotics. But the Roomba does clean the floor decently, and when it bumps into the sofa, it turns around and goes the other way. So Dr. Turner says we should get used to it because that's as good as it will get for quite some time.
AI tends to work well when there are limited numbers of inputs, when the correlations with outputs are pretty straightforward, and there are rule systems to access, such as mapping a room or playing chess.
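A toy example of such a limited-input, rule-based domain: a "Roomba" policy with one input (did I bump into something?) and one rule (turn away). This is an illustration of the idea, not iRobot's actual algorithm.

```python
# Sketch of a limited-input domain where simple rules suffice.
# One input (bumped or not), one rule (turn away). Illustrative only.

import random

def step(heading_degrees, bumped):
    """One decision: keep going straight, or turn away from the obstacle."""
    if bumped:
        # Turn a random amount between 90 and 270 degrees.
        heading_degrees = (heading_degrees + random.randint(90, 270)) % 360
    return heading_degrees
```

With so few inputs and such a clear input-output correlation, there is little room for a bad opinion to hide, which is exactly why this kind of AI works.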
So, if algorithms are opinions written in code, and lots of opinions at that, we can start to use the tools we've learned as social psychologists, cognitive psychologists, or consumer behavior researchers to investigate what those opinions are and what outcomes those systems produce.
The Opinions in Algorithms Affect What You See
One of the things that Dr. Turner learned in his research auditing algorithms is that algorithms are only as unbiased as humans can make them. Unfortunately, our biases creep into results in many ways.
For example, when an algorithm is used to suggest a thumbnail for content on a social media platform, the biases become clear. If a video has a person in it, nearly all the platforms will gravitate toward a frame that includes a human face.
However, which emotion they show on their face differs between platforms. For example, YouTube is likely to show a smiling face; on Facebook, a frowny one.
No matter the expression, the face shown is most likely that of a white male. Twitter's algorithm gained notoriety for its racial bias: its automatic cropping favored a crop that included Senator Mitch McConnell instead of President Barack Obama when the two appeared in the same large photo that needed resizing. Twitter ditched the feature after this bias came to light.
So, What Should You Do About This?
This fascinating area of bias and algorithms and AI is a discussion I can indulge in for a long time. However, in terms of practicality, what is the key takeaway here? Besides that you don't have to worry about T-800 in Arnold Schwarzenegger's likeness rolling up on you with a sawed-off anytime soon. Dr. Turner has a few for you, starting with:
Investigate claims. Dr. Turner encourages you to look into who is making the claims that a new AI is sentient or "superhuman." For example, if a Google engineer says the language model is aware, it probably isn't. AI isn't there yet.
Find out who put the inputs there. If you use the technology or are developing it, find out who decided what information went into the algorithm and how it was weighted. The AI isn't capable of choosing its inputs itself. Someone had to select them, and that's where the biases start.
Experiment with your algorithms. Dr. Turner encourages researchers to use their skills to experiment with their inputs and test whether the results pass muster for bias. The tools are in place, so use them to verify that your results are as fair as you intend.
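The third step can be sketched with a minimal audit: run the same model over applicants from different groups and compare outcome rates. The data, model, and threshold below are fabricated purely for illustration; a real audit would use your own system's decisions.

```python
# Minimal sketch of an algorithm audit: compare a model's approval
# rates across groups. All data here is fabricated for illustration.

def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def audit(model, applicants):
    """Group the model's decisions by a sensitive attribute and compare rates."""
    by_group = {}
    for person in applicants:
        by_group.setdefault(person["group"], []).append(model(person))
    return {g: approval_rate(d) for g, d in by_group.items()}

# A toy model that leans entirely on one score variable.
model = lambda p: p["score"] >= 650

applicants = [
    {"group": "A", "score": 700}, {"group": "A", "score": 680},
    {"group": "B", "score": 640}, {"group": "B", "score": 660},
]
print(audit(model, applicants))  # {'A': 1.0, 'B': 0.5} -- a gap worth investigating
```

A gap like this doesn't prove the model is biased by itself, but it tells you exactly where to start asking who chose the inputs and weights.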
This discussion gave me a radically different view of AI. It certainly gave me another way of thinking about it.
We all need to become more familiar with these systems as consumers. Moreover, for those of us who work for companies or systems where AI use is becoming more widespread, it is essential to overcome some misunderstandings of what AI is and how it works. Sure, it does incredible things and is advanced. However, AI also introduces a black-box effect: we optimize outcomes and get better results, but we lose track of why we are getting them. The system is opaque and doesn't let us see what led to those outcomes. Audits let us reclaim some of that in-between knowledge and trace results back to that original 10th-grade equation. This process is valuable and essential and can apply to all spaces.
Colin Shaw is an original pioneer of 'Customer Experience.' LinkedIn has recognized him as one of the 'World's Top 150 Business Influencers', where he has 291,000 followers.
Shaw’s Customer Experience consulting company, Beyond Philosophy LLC, has been recognized by the Financial...