Reboot: Measuring Wellbeing

19 Jun 2017 | Written by Laurence Piercy


Why collecting wellbeing data is hard, and our most recent approach.

Evaluations are always limited by the data they can accumulate, and that data is itself limited in many ways: by how individuals respond to data collection, by the necessary safeguarding of service-level data sets, and by the measures that are used.

At Good Things Foundation, these issues are often compounded by our relationship with Online Centres. We do not deliver projects directly; we fund Online Centres to deliver them. Because of this relationship, when we need data for an evaluation, we often build a data requirement into our funding agreement with centres.


For our first year of Reboot UK, we were looking for ways to show the health and wellbeing impact of a digital skills programme. So, we started to investigate ways of collecting wellbeing data within a project. We followed the sound advice of the Big Lottery Fund's 'National Wellbeing Evaluation' and used a combination of wellbeing surveys. The survey collected wellbeing measures from the Short Warwick-Edinburgh Mental Wellbeing Scale (SWEMWBS), the Personal Well-being questions from the ONS's Annual Population Survey, and a Social Trust question. As the Big Lottery Fund says, this combination of questions is close to the minimum required for accurate wellbeing data. Using national measures also allows comparison with national statistics.

Still, that is 12 questions (seven SWEMWBS items, the four ONS personal wellbeing questions, and the social trust question) at the beginning and end of an intervention. And the questions ask for responses to statements like "I've been feeling optimistic about the future" and "I've been able to make up my own mind about things". These questions are not only quite hard to answer, but also the kind of question that needs careful thought. I'm not sure that a community centre is the best place to answer them, and it feels slightly odd to ask them just before an informal digital skills class.

These complications were compounded in Reboot UK because we were working specifically with people with mental health problems. Many had been through clinical mental health services and had a lot of experience of answering these very questions, which often made them reluctant to answer them again. In this case, you might get an effect like this, in which there are a lot of repeated answers:

[Image: survey responses with the same answer repeated across questions]

Or, you get an effect which shows no change at all. Although this might have been legitimate in some cases, we also suspect that tutors were protecting their clients from data collection and filling in the survey on their behalf (an understandable response if the learner is reluctant to engage with the survey, or if the tutor feels that data collection is getting in the way of the learner's experience).

In the first year of Reboot, we worked very hard to collect data from a high percentage of learners. This was labour-intensive for us, and it put pressure on Online Centres. In the end we had a large data set, but we still needed to clean it of responses we were uncertain about. The result was a single statistic: robust within reason, but needing a lot of explanation to make sense on its own.
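For readers curious about what that cleaning step can look like, here is a minimal sketch of the kind of screening involved, assuming pre- and post-intervention item scores are held per learner. The data layout, column names, and flags here are hypothetical illustrations, not our actual pipeline:

```python
import pandas as pd

# Hypothetical layout: one row per learner, with the seven pre- and
# post-intervention SWEMWBS item scores stored as lists of integers.
df = pd.DataFrame({
    "learner_id": [1, 2, 3],
    "pre":  [[3, 3, 3, 3, 3, 3, 3], [2, 4, 3, 3, 2, 4, 3], [1, 2, 2, 3, 1, 2, 2]],
    "post": [[3, 3, 3, 3, 3, 3, 3], [3, 4, 4, 3, 3, 4, 4], [1, 2, 2, 3, 1, 2, 2]],
})

# Flag "straight-lining": every item given the same answer, which can
# indicate a reluctant respondent repeating answers rather than reflecting.
df["straight_lined"] = df.apply(
    lambda row: len(set(row["pre"])) == 1 or len(set(row["post"])) == 1,
    axis=1,
)

# Flag surveys showing no change at all between pre and post; sometimes
# legitimate, but also consistent with a form filled in pro forma.
df["no_change"] = df.apply(lambda row: row["pre"] == row["post"], axis=1)

# Exclude flagged responses before computing any headline statistic.
clean = df[~df["straight_lined"] & ~df["no_change"]]
print(clean["learner_id"].tolist())  # -> [2]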

In the second year of Reboot, we are trying a slightly different approach. Rather than monitoring wellbeing directly through wellbeing measures, we are using proxy measures for wellbeing.

We are not surveying learners. Instead, we have taken successful elements from the first year of the project and embedded them in the delivery model. These features of the model are also supported by an evidence base which links them to improved wellbeing. We are not just delivering digital skills training; we are delivering digital skills in ways which are known to support wellbeing.

There are three main ways in which we are doing this:

  1. Digital is the cornerstone of this project, because digital exclusion exacerbates the conditions of social exclusion. It limits the social connections that people can make, deepens the poverty premium, and weakens their locus of control.
  2. Peer Mentoring provides a volunteering route for learners to become mentors. This builds explicit incentives for volunteering and community building into the Reboot programme. The New Economics Foundation identifies that 'feelings of happiness and life satisfaction have been strongly associated with active participation in social and community life'. Peer mentoring also draws on the idea, from JRF, that individuals in poverty respond 'more positively to those who are socially close'.
  3. Findings from the first year of Reboot pointed strongly to digital skills as a platform for progression. We took this up in the second year, and now fund a progression interview for each project participant. This is important because, as NEF says, 'the practice of setting goals has been strongly associated with higher levels of well-being'. In the project, we have made space for goal-setting to happen, on the assumption that goal-setting, progression, and achievement will have a positive impact on wellbeing.

Rather than collecting lots of quantitative data, we are minimising the data we require from learners, to reduce friction in their support.

That doesn't mean that we aren't evaluating the project. We are focussing our efforts on collecting progression data from learners: what have they gone on to do, and what role did digital skills play in this? If we know that progression improves wellbeing, then it can act as a proxy for it. We are also conducting a realist evaluation and running service design workshops with three research partners. We want to help develop services to ensure that they are delivering digital inclusion to the best of their ability. When services work with minimum friction and maximum value for the user, users will experience better wellbeing.
