“You
can’t manage what you don’t measure,” they say. Generally speaking, I think
most of us would agree with that. Maybe you have sat down with the senior
management team while they go through their Key Performance Indicators and tut
over a failing sales campaign or rejoice over a positive customer satisfaction result. They need to know what’s going well and what isn’t in order to fix it.
Obvious.
Not quite
so easy with internal comms, however. I’ve included measurement in every single
comms strategy I’ve ever written, but (don’t tell anyone) I don’t think I’ve
ever completely cracked it. Of course, you seek feedback from people and
measure site hits on the relevant intranet pages, but it only takes someone with an axe to grind to pick holes in your findings (I do love a good mixed metaphor). If your findings are challenged, you need to be able to justify them.
Measuring what you have communicated is not much use in itself; what matters is whether the message has landed as you intended. We may communicate perfectly what we would like people to do, and they may understand it. If they then carry on regardless and keep doing what they always did, it doesn’t matter how much you have communicated or how beautiful it looks: it’s essentially a waste of money. We can say we have communicated x key messages in line with the comms plan, but it’s whether behaviour or attitudes have changed, and whether the outcome is different, that’s important.
Another
reason that communications efforts are notoriously difficult to measure is that
absorption of the information we are trying to share is subjective and
influenced by how negative or positive the message is. If you’re telling people that they have a pay rise, you get their attention immediately and, if it’s equal to or (less likely these days) more than they expected, they are likely to be happy with that communication. We need to consider carefully what we ask people and be specific about the topic.
It also depends on what we are measuring for:
the successful communication of a programme or an activity, or the performance
of the communications department? You’d
ask different questions, I think.
So we
have a good reason to be selective about what we measure and what conclusions
we draw from it. By all means measure site hits, but don’t conclude that the higher the hit rate, the better. There are a number of reasons people might return to a page, and it’s not always different people checking it out because they have been told it’s so good (unfortunately).
Where
appropriate, it’s a good idea to focus on whether the outcome matches
expectations. To what extent are people following the new process (as an
example)? Focus groups or other F2F methods help with digging below the surface
to find out what could have been better. Both qualitative and quantitative methods
have their place.
I’m a bit suspicious of hanging too much on surveys, ever since I ran two versions of a survey across matched samples and found that the results were quite different: where I had asked a number of questions leading up to a final ‘overall, what do you think?’ question, the results were much more positive than where I had asked that question first and then gone through the separate components afterwards. The questions were the same; they were just in a different order. My conclusion? Asking
the question up front gave an instant reaction, a reflex. Asking it last, after
people had been reminded about all aspects, seemed to have made them think that
overall the change had been positive. Further exploration was needed.
The upshot is that it seems wise to (a) be clear about why you are measuring, (b) be selective and focus on a few important things (unless you’re doing a simple comms audit), (c) use a wide range of methods to help you cross-check your findings, and (d) be careful in your conclusions, taking account of other
factors that might skew the results.