Are customers trading their digital souls for better CX?


Customers are seemingly happy to trade their data for more online choice and convenience, clicking 'accept all' without a thought. But should customers and companies care more about this Faustian pact?

2nd Feb 2021

As a society, we are more connected than ever, and with that connectivity comes great opportunities but also risks – many hidden in plain sight – such as the increased use of artificial intelligence.

In this article, we will discuss some of the issues raised by our addiction to convenience and instant gratification in today’s world, where our data is constantly gathered, analysed, and used in ways most of us can’t conceive of, and which may be exposing us to dangers never before imagined. In doing so, we will use an analogy – the legendary deal made between Faust and the Devil.

Trading your digital soul

Like us, you have probably moved everything you can to the cloud, from your pictures and music to your money; it’s all stored out there somewhere in cyberspace and readily accessible to you and to the people you choose to share it with – all through your smartphone and your many other digital devices. You do so because you find it convenient, think of it as secure, and it fits in with a busy life. It all seems so effortless compared to how you did things a decade ago.

With little to no thought, and in exchange for having access to your friends and your ‘stuff’ at any time of day or night and from anywhere, you readily hit the ‘accept all’ button on cookie banners, or input your personal data on demand: after all, it’s the only way to get what you want, and everyone else does it.

However, you may be unknowingly consenting to – and providing a constant stream of data to – third parties: your network operator, your favourite search engines, app providers, and an untold number of advertising services.

In the legend, Faust is a ‘scholar’ – a man at the pinnacle of his career, and yet he wants more. In exchange for knowledge and magical powers, Faust makes a deal with the Devil (through his agent, Mephistopheles) – selling his soul in the future for power now. With his newly granted powers, Faust is able to indulge every whim and acquire the knowledge of the world. However, in the end and as agreed, the Devil appears and claims Faust’s soul.

For the purpose of this article, we want to focus on one aspect of the legend – what Faust knowingly traded away in exchange for his version of instant gratification.

In our analogy, your digital ‘soul’ (your data) is what you are bargaining away in exchange for more choice and more convenience now. But what are you really giving away? Do you know what is happening to all that data? Do you appreciate the correlation between your digital alter-ego and your real-life wellbeing?

In many stories and myths, the cost of satisfying wishes can be high and appear in unexpected ways. As a consumer, you have little real control over how your data is collected and how someone else will use it and, all too often, consumers don’t care – so long as their wishes are granted with little effort on their part. However, in this competitive commercial world, those organisations that satisfy your wishes may be presenting you with choices that prioritise their needs, rather than yours.

What’s more, they use technology and psychology to formulate messaging that is constantly whispered in your ear: that you need this product, that you should shop at that store, or that you should ‘buy now to avoid disappointment’.

It all appears so reasonable: they seem to know what you want (or should want), and they feed on your insecurities as well as your desire to be happier, better looking, more popular, and so on.

Why care about how it works, as long as it does?

Despite the horror stories about data breaches and the misuse of personal data, most people have little idea of the feeding frenzy around their data and just how much of it is being harvested by governments and businesses. Even fewer know the ways in which that data is analysed to form insights into who you are, what you do and why – and how those insights can be exploited.

For this article, Peter explored what a popular search engine provider ‘knows’ about him: that he is a married, middle-aged male, a business owner, who is also interested in astronomy, web-hosting, science fiction and about 150 other topics. However, it has also incorrectly identified that he is interested in beauty services, coffee makers, country music, and about 50 other things. Two things worry us about this:

  1. They know a lot about Peter; and
  2. About a quarter of what they ‘know’ is wrong.

Bearing in mind that business decision-making (especially using artificial intelligence) is only as accurate as the data used to inform it, this means they are going to be making a lot of incorrect assumptions and ill-informed decisions. Also, Peter has no insight into how those decisions are being made, nor control over what data is being collected about him and why (and if you have ever clicked ‘accept all’ on a cookie banner, you are just as vulnerable).
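To make that point concrete, here is a minimal sketch in Python. The numbers are assumptions loosely modelled on Peter’s example (roughly 150 genuine interests and 50 incorrectly inferred ones); it simply shows that a naive targeter working from a partly wrong profile will waste a predictable share of its decisions – the model is only as good as its data.

```python
import random

random.seed(42)

# Hypothetical illustration (assumed numbers, not any real ad platform's data):
# a profile where ~25% of the inferred interests are simply wrong.
true_interests = set(range(150))            # topics the person actually cares about
wrong_interests = set(range(150, 200))      # ~50 incorrectly inferred topics
profile = true_interests | wrong_interests  # what the platform 'knows'

# A naive targeter recommends content for any topic found in the profile.
ads_shown = random.sample(sorted(profile), 40)
relevant = [topic for topic in ads_shown if topic in true_interests]

print(f"{len(relevant)}/{len(ads_shown)} recommendations were actually relevant")
# On average, roughly a quarter of the recommendations will miss entirely,
# because a quarter of the profile is wrong to begin with.
```

No amount of clever modelling downstream fixes this: if the input profile is 25% wrong, decisions built on it inherit that error rate.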

In a previous article, we talked about the deliberate misuse of this power to influence consumer behaviour – so-called ‘dark psychology’ – but whether it’s done in the name of good or ill, everyone needs to be careful about how this power is used. AI may eventually solve some of the most intractable problems of our age, or create a whole host of new ones.

Like any innovation, artificial intelligence is a tool – a double-edged sword, if you will; it is neither ethical nor unethical, neither intrinsically good nor evil – that is down to the discretion of those who wield it. It is up to humans to set limits on where AI is applied and how it acts.

If you teach an AI that only the ‘ends’ are important, it will surely come up with some ‘means’ that are unpalatable (to us) – a kind of machine utilitarianism. As Facebook discovered, unconstrained AI will find the most efficient way to achieve an end, but few humans would say that the things they treasure most are treasured for their ‘efficiency’.

According to behavioural economics, we humans have over 150 ways in which our logic lets us down (cognitive biases), and if these are known, they can be anticipated and exploited – we can even be manipulated. Marketers have been doing this for decades, and now organisations are teaching machines to focus their huge computing power on the individual, down to the level of individual feelings and motivations.

AI can now anticipate your mood, your daily movements, even your need for healthcare, and make recommendations that directly influence your decision-making: from which route to take as you drive home, to which movie to watch or which product to buy – and perhaps even the way you vote. But it is still humans who decide how, and whether, an AI is working well (at least for now).

Not a bargain between equals

Taking our analogy further: the ‘Devil’ (AI in the hands of unscrupulous big business or government) has tremendous insight and overwhelming power, while you, by comparison, have very little – it is not a bargain between equals.

That doesn’t mean you are powerless – just that you need to exercise your power over what is yours: your data. What is more, doing so needs to be as simple as walking without thinking, or stretching out your hand to grasp your teacup when you want a sip. Your data is the food and fuel that powers and informs these systems, and today most of us simply ‘accept all’, giving away our golden goose (to mix metaphors) for the sake of perceived convenience or social ‘popularity’.

To achieve this, we have to make a fundamental shift in the thinking and supporting frameworks we live by. We need to expand private as well as public investment in initiatives like the EU’s Next Generation Internet (NGI), where the focus is human- and life-centric, rather than ‘profit by any means’. We need to humanise the language of technology so that it is legible to all, rather than the incomprehensible jargon and technobabble it currently is.

We also need to put competition aside and foster collaboration between organisations like W3C, ISO, the IEEE, and the many other interest groups, where a great deal of work is being done on ethical AI standards. 

When we venture outside of our silos and bring our specialised viewpoints together into a co-creation process – one that questions the limits of what is possible and resets the limits of what is permissible – we will be able to see the many paths to prosperity that were hidden in plain sight. As Lubna always says, “If you only saw red and I only saw blue, how would either of us see purple?”

There is no reason to prohibit the technology, or to limit the creativity of people, but there is a need to examine the merits and drivers of our socioeconomic and geopolitical systems. Society must re-examine where value is placed, measured, and rewarded, and the broader purpose of our institutions and businesses.

In conclusion

We know that our analogy is not perfect, but individuals need to consciously safeguard their power (their digital selves). Governments and enterprises also have a responsibility not to abuse their overwhelming advantage – and if they won’t do so voluntarily, we will need policy and regulation with the power to back it up.

Let’s be realistic: we can talk all we want, and ask all we want, but if we do not reset our definition of success, we cannot expect change to happen. We cannot dream of planetary wellbeing while fiduciary responsibility to shareholders means that corporate success is measured only by profit.

On a wider front, we need to evolve our socioeconomic and geopolitical systems, as the current models are concentrating knowledge, power, and wealth in the hands of a very few de facto superpowers. Otherwise, as the world becomes increasingly ‘smart’ and connected, individuals are in danger of becoming irrelevant. We would be well served by shifting our thinking from ‘what’s in it for me?’ to ‘what’s in it for us all?’.

We call on you, the reader, to get involved; think about what ‘accept all’ may mean and exercise your right to decide how much of your data to share, with whom, and why.

Remember, just like Faust, be careful what you wish for and what you give away in exchange!


About Lubna Dajani

Lubna is a pioneering design thinker and ICT innovator with over 25 years’ executive experience with multinational brands. She currently serves as an advisory board member and coach to Horizon 2020 and NGI Trust, and as a mentor and board member to innovators and several accelerators, including SOSV and Springboard Enterprise. She is also a W3C Invited Expert on privacy and an accredited contributor to several IEEE standards, including the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems.

Lubna is a champion and role model for diversity, inclusion and women in STEAM. She is committed to applying technology and science to elevating the human experience and regenerating planetary wellbeing.

About Peter Dorrington

Peter is the founder of XMplify Consulting and an expert in using a combination of data and behavioural sciences to lead transformation in the field of Experience Management (XM).

Over the last five years, Peter has focussed on developing and using Predictive Behavioural Analytics to understand why people do what they do, what they are likely to do next, and how businesses should respond. As the inventor of Predictive Behavioural Analytics, Peter is an internationally recognised expert in the field of customer experience analysis. He is also an executive advisor and an award-winning blogger and writer.
