Like many geeky teenagers in the 80's, I was fascinated by electronic music, especially bands like Kraftwerk that were developing the technology as well as the genre, and I ended up building a synthesizer in the vain hope of releasing my inner electro pop star. Suffice to say that knowledge without talent (and practice) is not enough to top the charts.
One of the things I did learn about was sound 'envelopes'; specifically: Attack, Decay, Sustain and Release (also known as ADSR).
In a synthesizer, ADSR controls and modifies the amplitude (volume) of a sound during four distinct phases: after the initial stimulus (e.g. a key press, the attack), whilst the key remains pressed (decay and sustain), and when the key is let go (release). It partly explains why a harp does not sound like a piano, even though both use strings. However, I squirrelled this knowledge away with no expectation of ever finding a use for it again.
So why think about it now? Because I am working a lot with emotions and how to analyse and represent them in predictive computer models of behaviour.
Early in my research, I realised that I was going to have to take into account that emotions are not simply 'on or off', nor do they maintain a continuous amplitude once the stimulus has passed. In some ways, the intensity of emotions is very like an ADSR envelope, with different envelopes for different emotions and in response to different stimuli (e.g. the difference between striking a string and plucking it, a piano and a harp):
- Attack phase - how quickly the intensity of an emotion changes in response to the initial stimulus;
- Decay phase - how quickly it starts to subside, even though the stimulus remains in place;
- Sustain phase - the lingering effect of a continuous stimulus; and finally
- Release phase - the rate of return to an underlying normal state once the stimulus is removed.
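The four phases above can be sketched in code. This is a minimal illustrative model of my own (not taken from any synthesizer specification or from the author's behavioural models), assuming simple linear ramps and arbitrary example timings:

```python
def adsr_amplitude(t, attack=0.1, decay=0.3, sustain=0.6,
                   release=0.5, hold=1.0):
    """Amplitude of a linear ADSR envelope at time t (seconds).

    The stimulus is assumed to start at t=0 and be removed at t=hold;
    all parameter values here are illustrative, not measured.
    """
    if t < 0:
        return 0.0                       # before the stimulus
    if t < attack:
        return t / attack                # attack: ramp 0 -> 1
    if t < attack + decay:
        frac = (t - attack) / decay      # decay: fall 1 -> sustain level
        return 1.0 - frac * (1.0 - sustain)
    if t < hold:
        return sustain                   # sustain: stimulus still present
    released = t - hold
    if released < release:
        return sustain * (1.0 - released / release)  # release: fade to 0
    return 0.0                           # back to the underlying normal state

# A 'surprise'-like envelope might use a near-instant attack and fast decay,
# while a 'joy'-like one might use a slower attack and longer sustain.
```

Varying the four parameters gives the differently shaped envelopes described above; a plot of `adsr_amplitude` over time traces the familiar ADSR curve.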
Taking two core emotions as examples: the intensity envelope for surprise looks very different from the one for joy.
To continue the analogy: the complex interaction of emotions is a little like playing chords, some pleasing, some discordant and jarring. Repeated stimuli can add to the overall sound or distract from it. I could go on, but you probably get the picture (or hear the tune) by now.
In my opinion, it is not enough to say that a customer feels 'happy' or 'sad', or even 'very happy' or 'a little sad'. If we are to get even close to analysing and modelling emotions over time (and I have), we are going to have to start listening to the music, especially if we ever hope to be able to play.
About Peter Dorrington
Inventor of the Customer Experience Vector (CXV), Peter is a seasoned information strategist and leader with nearly 20 years’ core business development and operational experience in data management, business intelligence & analytics for financial services, telecommunications, retail and public sectors.
Peter’s core strength is in building, developing and leading high-performance data & analytics teams and aligning them to the achievement of strategic business objectives that positively impact the bottom line. At TeleTech Consulting, Peter is focused on using Customer Insights (analytics) to drive the next generation of customer engagement.