You can’t create a great employee experience in one hit. Even if you’ve identified where you want to begin, it’s very hard to predict how your people will respond to new interventions. Instead of trying to design the ‘perfect’ employee experience, our suggestion is to take an iterative, data led approach that responds to user (employee) behaviour over time. If you’d like to know how to do that, read on…
Traditionally HR teams have spent a long time designing their frameworks and people processes – taking months (if not years) to develop the ‘complete’ answer. They’ve looked to the latest ‘best practice’ and HR expertise for guidance and typically rolled out one-size-fits-all solutions for the entire business.
In the old HR world, measurement has been an afterthought. By the time we’ve got to implementing a solution, we’ve invested so much time and effort that we’re not overly keen to see evidence that our intervention might not be working. Where they have existed, metrics have usually been targeted at employee satisfaction, not tied to commercial outcomes.
This is a pretty unsatisfactory state of affairs. So how can we do a better job?
Here are three steps to taking a data led approach to employee experience…
Designing evidence based prototypes is about starting with good theory – the scientific evidence on what really motivates people, how we make decisions and how you can influence people’s behaviour. When it comes to designing new interventions to improve your employee experience, you should start with good science. If you’re looking for a good place to find well researched ideas, Science For Work is an excellent online resource.
There’s lots of exciting new research out there, but there are very few replication studies and most of the research is conducted in “non-natural” environments. So even if you’re following the science, there’s no guarantee it will work in your context… You can’t predict how a complex human system will respond to a new intervention.
So once you have good ideas on the table, you need to take an experimental approach to implementing them by setting up explicit tests.
To do this, you need to agree what you’re going to measure in advance and build in a feedback loop before you release a new intervention.
This helps you avoid two common human biases: confirmation bias (only paying attention to the evidence that supports your idea) and the sunk cost fallacy (sticking with an intervention because you’ve already invested so much in it).
Agreeing what you’re going to measure starts with identifying a clear hypothesis: “If we do ‘x’, then we expect ‘y’ to happen.” Once you have your hypothesis, you not only need to decide how you’re going to build ‘x’, you also need to decide how you’re going to measure ‘y’. This feedback loop on ‘y’ is an important part of your prototype.
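As a minimal sketch of that idea (the field names and the example values are purely illustrative, not taken from any particular HR tool), you could write each hypothesis down as a small record that pairs ‘x’ with the measure for ‘y’ and a review date, so the feedback loop is agreed before anything gets built:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Hypothesis:
    intervention: str     # the 'x' you are going to build
    measure: str          # the 'y' you are going to track
    expected_change: str  # the effect you predict
    review_date: date     # when you'll look at the data together

# Illustrative example only, echoing the onboarding/turnover theme later in this post.
example = Hypothesis(
    intervention="Give every new starter a structured first-week plan",
    measure="90 day turnover rate",
    expected_change="lower than the previous quarter's benchmark",
    review_date=date(2025, 6, 30),
)
```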
Once you have your hypothesis, you then have two main options for setting up tests: a pre and post intervention test, where you compare results before and after the change across the whole company, or a comparison test, where a subset of your employees gets the intervention and the rest don’t.
Option 2 is only open to you if you can split your employees up. This will depend on things like the size of your company, your setting (it’s easier to split people up if they’re spread across multiple sites) and the type of intervention (it’s easier to randomise recruitment candidates than changes in an office environment).
If you’re small, you’ll probably want to do a pre and post intervention test, i.e. measure for a fixed period of time to get a benchmark, then implement the idea and continue to measure for the same period of time, so you can compare results during the implementation period with the benchmark.
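As a rough sketch of what that comparison might look like (the weekly numbers and the idea of a weekly score are invented for illustration, and the significance check is optional):

```python
from statistics import mean
from scipy import stats  # optional: a simple significance check

# The same measure, collected for equal-length periods before and after the change.
benchmark = [6.8, 7.0, 6.9, 7.1, 6.7, 7.0]   # six weeks before the intervention
post      = [7.2, 7.4, 7.1, 7.5, 7.3, 7.4]   # six weeks during the implementation period

print(f"Benchmark average: {mean(benchmark):.2f}")
print(f"Post-change average: {mean(post):.2f}")
print(f"Change: {mean(post) - mean(benchmark):+.2f}")

t_stat, p_value = stats.ttest_ind(post, benchmark)
print(f"p-value: {p_value:.3f} (smaller means the difference is less likely to be chance)")
```

With only a few data points either side of the change, treat any difference as a signal worth investigating rather than proof.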
If you have more employees, you may want to test interventions with a subset of your employee population. To make these kinds of tests fair, it’s important the subset is identified at random and as far as possible is representative of the wider population. Otherwise you risk learning things that work (or don’t work) for the subset but don’t have the same effect in the wider population.
If possible (scale, setting and intervention dependent) you may want to set up A/B tests so you can learn which of your interventions are most effective.
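Here’s a minimal sketch of a random split and comparison; the employee IDs, the placeholder scores and the simple 50/50 split are assumptions for illustration rather than a recommended design:

```python
import random
from statistics import mean

random.seed(42)  # make the split reproducible

# In practice these would come from your HRIS; here they're stand-ins.
employees = [f"emp_{i:03d}" for i in range(200)]
random.shuffle(employees)

midpoint = len(employees) // 2
group_a = employees[:midpoint]   # receives version A of the intervention
group_b = employees[midpoint:]   # receives version B (or carries on as before)

# After the agreed test period, look up the agreed measure for each person.
# Placeholder outcome data for illustration only.
outcome = {e: random.gauss(7.0, 0.5) for e in employees}

print("Group A average:", round(mean(outcome[e] for e in group_a), 2))
print("Group B average:", round(mean(outcome[e] for e in group_b), 2))
```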
Exactly what and how you measure will depend on the size of your organisation, what you’re testing and what you’re aiming to improve, e.g. engagement, retention, performance etc.
However, the sorts of things you can measure include survey and pulse scores, retention and turnover rates, performance data, and how likely people are to recommend you as an employer.
The most important thing to remember is that you are measuring to reduce the uncertainty about whether your intervention is having the effect you want and to support better future decision making.
You’ve started with a good evidence based idea and set up a test to measure its impact; now you need to review the results! Just having data doesn’t make you data led. You need to build in prompts to review the data and to use it to inform what you should build next to keep improving your employee experience.
During your reviews you not only want to analyse the data you’re collecting but also compare your results with business outcomes. You’re looking to discover whether your interventions appear to correlate with or cause better business outcomes. This analysis is critical for making the business case for continuing to invest in the employee experience.
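As an illustrative sketch of that comparison (the monthly figures are invented, and a Pearson correlation via scipy is just one straightforward way to look at the relationship):

```python
from scipy import stats

# Monthly values, aligned by month; in practice they'd come from your survey
# tool and your commercial reporting. These figures are made up for illustration.
experience_scores = [6.9, 7.1, 7.0, 7.3, 7.4, 7.6]        # e.g. onboarding experience score
business_outcomes = [18.0, 17.5, 17.8, 16.9, 16.2, 15.8]   # e.g. 90-day turnover %

r, p_value = stats.pearsonr(experience_scores, business_outcomes)
print(f"Correlation: {r:.2f} (p = {p_value:.3f})")
# A strong negative correlation would suggest better experience scores move with
# lower early turnover; it's evidence of a relationship, not proof of causation.
```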
To make this concrete, let’s walk through an example. You develop the hypothesis: “If we give rejected candidates timely, personal feedback during the recruitment process, then it will improve their experience and they’ll be more likely to recommend us as an employer to their network despite being rejected.”
The hypothesis is based on good evidence – research has shown that people respond more positively to experiences where they feel recognised as individuals and personally significant, and similar ideas have delivered good results when implemented in other businesses.
You now need to set up your experiment. To do this you need to decide how you’ll deliver the timely, personal feedback (your ‘x’), how you’ll measure whether rejected candidates go on to recommend you – for example via a short follow-up question (your ‘y’) – and how long the test will run and, if your candidate volumes allow, which candidates get the new feedback and which don’t.
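One way this measurement loop could be wired up is sketched below; the variant names, candidate IDs and answers are placeholders invented for illustration, assuming you can tag each rejected candidate with the version of the rejection they received and capture a simple follow-up response:

```python
import random

# Stage 1: at the point of rejection, randomly assign each candidate a variant.
def assign_variant(candidate_id: str) -> str:
    return random.choice(["personal_feedback", "standard_rejection"])

candidates = [f"cand_{i:03d}" for i in range(8)]
assignments = {c: assign_variant(c) for c in candidates}

# Stage 2: after the test period, capture each candidate's answer to a follow-up
# question such as "Would you recommend us as an employer?" (1 = yes, 0 = no).
# These answers are placeholder data for illustration.
answers = {c: random.choice([0, 1]) for c in candidates}

# Stage 3: compare recommendation rates between the two variants.
for variant in ("personal_feedback", "standard_rejection"):
    group = [answers[c] for c in candidates if assignments[c] == variant]
    if group:
        print(f"{variant}: {sum(group) / len(group):.0%} would recommend (n={len(group)})")
```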
Once your experiment is up and running, you need to build in reviews to discuss whether the data shows the effect you predicted, what you’re learning about the candidate experience, and what you should build, change or stop doing next.
Wherever possible you want to be bringing commercial data into this discussion too.
The more regular and frequent you make this test and review process, the better.
It is only by taking a data led approach to the design, build and testing of your employee experience that you’ll be able to iteratively improve. Without a good hypothesis, you won’t know what to measure. Without building the appropriate measurement mechanisms, you’ll never capture the data that will help you to discover what works. And without reviewing this data at the end of a pre-agreed test period, you’ll miss the opportunity to learn and inform your next steps based on the real experiences of people in your organisation.
At this point it’s worth noting that data will only take you so far. We’re always going to have an incomplete picture of the whole system and we have most uncertainty at the start of developing new interventions. There’s a danger that we put too much weight on one piece of data and only see the problem through the narrow lens of what we’ve chosen to measure. It’s important we never forget to use our judgement too. Data isn’t truth, particularly in the people space. So even if you follow all this guidance around taking a data led approach to improving your employee experience, remember to exercise judgement throughout.
Click here to read about how we took a data led approach to improving the onboarding experience at Five Guys. Over the course of 5 months we reduced 90 day turnover by 20%.
If you’re already convinced this is an approach you should be taking in your business and want to talk through the challenges you’re trying to solve, please get in touch. I’m happy to help and share advice to get you started.