For over eight years now we have been running regular surveys asking middle-class (ABC1) consumers and opinion leaders about their perceptions of a number of British universities' reputations. A number of our university clients have found it useful to monitor how their reputation scores change (or don't change) over time, and to ask audiences how and why they assess universities' reputations and standing.

This year, for the first time, we looked beyond the survey data to see whether statistical analysis could identify any relationship between perceptions of reputation and other known data about the institutions we ask about.

We looked at a range of publicly available data, including: current and past league table positions; the factors that feed into the league tables (e.g. National Student Survey results, enrolment figures, entry grades and student-to-staff ratios); and even the number of social media followers each institution has. We then looked for correlations between these factors and our survey results.

The findings were fascinating. In reality, very few of the measures showed a strong relationship to our survey scores. This, of course, doesn't negate the value of the survey results; rather, it means that much of the publicly available data doesn't seem to have a bearing on people's perceptions of universities' reputations.

The strongest and clearest exception to this was Entry Standards (i.e. the average UCAS tariff score for new students). The higher an institution's required grades, the more likely it is that both opinion leaders and consumers will rate that university highly. For those of you who are interested in the stats: the correlation coefficient was 0.75 for consumers and 0.85 for opinion leaders (a score of 1 would indicate perfect correlation). Of course, our analysis cannot show the nature of this relationship; it can only show that there is one. For institutions thinking of lowering their entry grades, however, this may at least give some pause for thought as to the possible longer-term impact on their reputation.
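For readers curious about how a coefficient like this is calculated, here is a minimal sketch in Python. The numbers are invented for illustration only, not drawn from our survey data:

```python
import numpy as np

# Hypothetical figures for five invented institutions -- reputation
# scores (out of 100) and average UCAS entry tariffs. These are NOT
# the real survey results; they simply illustrate the calculation.
survey_scores = np.array([62, 78, 55, 84, 70])       # reputation ratings
entry_tariffs = np.array([310, 420, 280, 460, 360])  # average UCAS points

# Pearson's correlation coefficient: 1 means a perfect positive
# relationship, 0 no linear relationship, -1 a perfect negative one.
r = np.corrcoef(survey_scores, entry_tariffs)[0, 1]
print(f"correlation coefficient: {r:.2f}")
```

As in our analysis, a coefficient close to 1 indicates a strong association between the two measures, but it says nothing about which drives the other.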

Perhaps even more interesting is the finding that, amongst middle-class consumers, survey reputation scores correlate more strongly with universities' league table positions in 1999 than with their current league table positions. Much of our work over the years has suggested that university reputations are slow to move. We hypothesise that this is because a university's reputation is something we assess very infrequently – for most of us only once or twice in our lives (when we apply and, if we have children, when they apply). This appears to be reflected in the fact that league table positions from 17 years ago bear a stronger relationship to consumers' current perceptions than the current rankings do. Amongst opinion leaders, however, the slightly stronger correlation is with the more recent league table position.

It is also interesting that so many of the measures we looked at have no real correlation at all with our survey results. Student satisfaction scores, for example, bear no discernible relationship with the perceptions of reputation that we have uncovered.

The results of this exercise may be of particular interest to universities, but it is also worth all of our clients considering whether and how they might cross-reference their survey results with other external data to gain added value and insight from their research.

With thanks for the statistical work to Amy Lopata.

Image: Alan Light
