A measure of success


Is he counting what he can easily measure or measuring what counts?

One of the significant changes in the new Matrix quality assurance framework for careers services is an increased emphasis on evaluating the outcomes of our work with clients.

The most significant of these changes are an increased focus on outcomes, on the competence of staff, on commitment to continuous improvement, on service delivery linked to outcomes, and on responding to advances in information technology.

A few services that have stuck with collecting more traditional feedback have been judged as falling short in this area. Just asking clients whether they found a session useful or interesting is not enough any more (if it ever was).

In the Value and Impact Toolkit developed by the Association of Managers of Student Services in Higher Education (AMOSSHE), measures of impact are differentiated from measures of satisfaction.

Impact is about change, which implies that a situation needs to be evaluated before an action to stimulate change takes place, and after to determine whether indeed change has taken place. Impact might also be evaluated in terms of the effect of an activity on different groups; for example, students might attend a particular programme on a voluntary basis, so impact might be measured after the programme takes place in relation to the knowledge levels of those who attended against those who did not attend.
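
As a rough sketch of how such a comparison might be tallied (a hypothetical illustration only; the names, scores and groupings below are invented and not taken from the AMOSSHE toolkit):

    # Hypothetical before/after and attender/non-attender comparison.
    # All names and scores are invented (0-10 knowledge quiz results).
    before_scores = {"amy": 4, "ben": 5, "cho": 3, "dev": 6}
    after_scores  = {"amy": 8, "ben": 6, "cho": 7, "dev": 6}
    attended      = {"amy", "cho"}  # who actually came to the programme

    def mean(values):
        values = list(values)
        return sum(values) / len(values)

    # Change for each student between the two measurements.
    changes = {s: after_scores[s] - before_scores[s] for s in before_scores}

    # Impact estimated as the difference in average change between
    # those who attended and those who did not.
    attender_change     = mean(changes[s] for s in attended)
    non_attender_change = mean(changes[s] for s in changes if s not in attended)
    print(f"Average change (attended):     {attender_change:+.1f}")
    print(f"Average change (not attended): {non_attender_change:+.1f}")
    print(f"Estimated impact:              {attender_change - non_attender_change:+.1f}")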

So what else could you measure and what would it tell you about the impact you are having?

Donald Kirkpatrick (former president of the American Society for Training and Development) formulated a four-level model for the evaluation of training, which could apply to any intervention designed to bring about learning and behaviour change.

Level 1: Reactions

This has traditionally meant obtaining immediate feedback on how participants feel about the session they have just taken part in. As such, in most cases you will be evaluating their emotional responses to the coach or presenter and to the methods of delivery as much as their reactions to the perceived usefulness of the material. It is often possible to get good feedback merely by being entertaining, even if the session you deliver has no significant long-term consequences for participants.

You may also get good reactions simply by telling people what they already believe. People tend to seek validation for their existing beliefs and assumptions, and they are likely to give higher ratings to someone who confirms what they already think than to someone who challenges their perceptions. This means that the interactions most likely to produce change may be rated lower than ones that reinforce the status quo.

Perceptions of helpfulness are also influenced by the client’s own self-image (Schedin & Armelius, 2008). Clients with a positive self-image are more likely to engage in constructive behaviours themselves during guidance, and more likely to perceive constructive behaviours on the part of the guidance practitioner, than clients with a negative self-image.

This doesn’t mean that it is pointless to measure immediate reactions. But instead of measuring emotional responses to the session itself, it might make more sense to measure responses and changes in attitude towards the career management tasks covered in the content. How you feel about those tasks (confused, tentative, apprehensive, optimistic, enthusiastic) will obviously affect your likelihood of following through with them.

In a recent careers workshop for people facing redundancy I started the session by asking participants to pick words that described their emotional state. I did the same exercise at the end of the session. For that particular group it was an important sign of success that the first selection was dominated by words such as ‘suspicious’, ‘helpless’, ‘angry’, ‘shattered’, etc., and the final selection contained more words such as ‘eager’, ’empowered’, ‘hopeful’ and ‘determined’. (See Towards or away from?)
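
A minimal sketch of how the word selections from an exercise like that could be tallied, assuming (purely for illustration) a hand-made grouping of words into a ‘towards’ category:

    from collections import Counter

    # Invented responses: each participant picks one word describing
    # their emotional state at the start and at the end of the workshop.
    before = ["suspicious", "helpless", "angry", "shattered", "angry"]
    after  = ["eager", "empowered", "hopeful", "determined", "hopeful"]

    # A hand-made grouping for this example; a real exercise would need
    # a fuller, agreed word list.
    towards_words = {"eager", "empowered", "hopeful", "determined", "optimistic"}

    def towards_share(words):
        """Proportion of selections that fall in the 'towards' group."""
        counts = Counter(words)
        positive = sum(n for word, n in counts.items() if word in towards_words)
        return positive / len(words)

    print(f"'Towards' words at the start: {towards_share(before):.0%}")  # 0%
    print(f"'Towards' words at the end:   {towards_share(after):.0%}")   # 100%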

Questions worth asking:

  • How have your levels of optimism changed?
  • How has your self-confidence changed?
  • How has your attitude changed?
  • How has your motivation changed?

Level 2: Learning

This is often interpreted as the acquisition of knowledge. It is therefore often measured by some form of testing or assessment of the participant to see how much knowledge they have retained. A broader approach looks at the extent to which particular learning outcomes have been achieved within the session.

There’s nothing wrong with assessing whether clients have obtained new information (knowing what) or developed new skills (knowing how). But so many careers sessions I observe stop there. Perhaps we should be hoping to measure something more transformational. How often do your learning outcomes for training or your goals for coaching seek to develop new understanding, new perspectives, new mental models and new meanings for the client (knowing why)? Do you venture into the territory of developing new identities, new self-perceptions and new self-definitions (knowing who)? (See In the right zone, Intentional change and New year, new identity.)

Obviously, not everything that a participant learns during a session will be retained over time. So a more accurate measure of impact would involve assessing the learning that has ‘stuck’ after a certain period.

Over the years, a number of different measures of career learning have been developed, such as the Career Decision Scale, My Vocational Situation, the Career Development Inventory and the Career Maturity Inventory (see Bernes et al., 2007), and the Career Decision-Making Self-Efficacy Scale (Betz & Luzzo, 1996).

Questions worth asking:

  • What do you know that you didn’t know previously?
  • What can you do that you couldn’t do previously?
  • What do you understand that you didn’t understand previously?
  • How do you see yourself that is different from how you viewed yourself previously?

Level 3: Behaviours

This third level seeks to evaluate whether there has been a transfer of learning into the real world. Have the actions, the habits, the working methods of the individual been changed by the interaction?

You could attempt to measure the strength of participants’ intentions to follow through on actions as a result of an intervention (and there is some evidence that asking people whether they will act on an intention actually increases the likelihood that they will).

Of course, you will never know if they did follow through unless you check up on them later, but how much later? One guide might be the time it takes for a new behaviour, through repetition, to become an automatic habit. Recent research indicates that this could be around 66 days of continued practice on average (Lally et al., 2010).

Another problem with measuring behaviours is that, if you simply ask what people are doing, you cannot be sure whether they would have done those things anyway without your intervention. Asking specifically about behaviour changes mitigates this somewhat.

Questions worth asking:

  • What new actions or behaviours are you likely to (did you) start as a result of this intervention?
  • What actions or behaviours are you likely to (did you) stop as a result?
  • What actions or behaviours are you likely to (did you) change as a result?
  • What actions or behaviours are you likely to (did you) continue or persist with as a result?
  • How likely?

Level 4: Results

At this level, the outcomes you are looking at relate to the impact of the individual on their environment. What are they able to achieve as a result of your intervention? Is there a tangible return on the investment?

Here we are talking about success, and most of our current measures of career success (e.g. destination surveys) only look at objective success criteria, such as attainment of particular types of position or salary levels. But we know that success is more complicated than that (see Success: what is it and how do you achieve it?, More success and What does success mean to you?). It has always puzzled me that destination surveys do not include any quantifiable measure of subjective success, such as job satisfaction or optimism about one’s future career.

Why don’t we look at other measures of success that are relevant to our stakeholders? For example, one important metric for academic institutions is student retention. It doesn’t pay to have lots of students dropping out of courses. There are established links between career decision-making self-efficacy and student persistence (see Sandler, 2000). This might be easier to measure and easier to relate to our efforts than job success.

And this leads back to a bigger question worth asking:

  • Why are we trying to achieve that? (And it may be worth asking the ‘why?’ question more than once.)

For good measure (sorry), here are a few more questions from the AMOSSHE Toolkit:

  1. Before starting
    • Who needs to be involved?
    • What skills will be required?
    • What is the timing/timescale?
    • What are the budgetary implications?
  2. What is the issue being investigated?
  3. What is the purpose of the evaluation?
  4. Where to get the information needed?
  5. Who should be studied?
  6. What is the best evaluation method?
  7. How should we collect the data?
  8. What instrument should we use?
  9. Who should collect the data?
  10. How should we analyse the data?
  11. What are the implications of the evaluation for policy and practice?
  12. How should we report the results effectively?

Further reading

  • Bernes, K.B., Bardick, A.D. & Orr, D.T. (2007). Career guidance and counselling efficacy studies: an international research agenda. International Journal for Educational and Vocational Guidance, 7(2), 81-95. DOI: 10.1007/s10775-007-9114-8
  • Betz, N.E. & Luzzo, D.A. (1996). Career assessment and the Career Decision-Making Self-Efficacy Scale. Journal of Career Assessment, 4(4), 413-428. DOI: 10.1177/106907279600400405
  • Kirkpatrick, D. L. & Kirkpatrick, J.D. (2006). Evaluating Training Programs (3rd ed.). San Francisco, CA: Berrett-Koehler Publishers. (See this summary of Kirkpatrick on businessballs.)
  • Lally, P., van Jaarsveld, C.H.M., Potts, H.W.W. & Wardle, J. (2010). How are habits formed: Modelling habit formation in the real world. European Journal of Social Psychology, 40(6), 998-1009. DOI: 10.1002/ejsp.674
  • Maguire, M. (2004). Measuring the outcomes of career guidance. International Journal for Educational and Vocational Guidance, 4(2-3), 179-192. DOI: 10.1007/s10775-005-1022-1
  • Sandler, M. (2000). Career decision-making self-efficacy, perceived stress, and an integrated model of student persistence: A structural model of finances, attitudes, behavior, and career development. Research in Higher Education, 41(5), 537-580. DOI: 10.1023/A:1007032525530
  • Schedin, G. & Armelius, K. (2008). Does self-image matter? Client’s self-image, behaviour and evaluation of a career counselling session: an exploratory study. International Journal for the Advancement of Counselling, 30(3), 189-201. DOI: 10.1007/s10447-008-9057-x


  1. #1 by Ghislaine Dell on 27 July 2012 - 11:59

    This 4-level framework sounds very like the Rugby Team Impact Framework for assessing the impact of the various activities undertaken for the development of researchers via the Roberts initiative. (I suspect the RTIF was based on it but haven’t got it to hand to check.) For any of your readers who are new to assessing impact in this formal way, it could be worth their while liaising with the team in their institution responsible for delivering the Roberts-related services, as they will already have been using this sort of assessment (they had to report back to the Research Councils who funded the work). Careers workshops and guidance interventions have been assessed using this framework too.
    I like the idea of measuring impact, but the most important thing we have found is that you need to know, before you deliver anything, what you are trying to achieve with it. How else can you assess whether you are succeeding? And I’ve found it very helpful to think in this way. Only time will tell whether my clients agree….

    • #2 by David Winter on 30 July 2012 - 09:21

      Thanks Ghislaine

      It appears that the Rugby Team Impact Framework was based on Kirkpatrick.

      I suppose that your point about knowing what you are trying to achieve was the ultimate message of my post. And that goes beyond just thinking about learning outcomes.

      I sometimes use the “5 Whys” approach to force me to think about it. (Although it’s intended to find root causes of problems, it’s also quite useful for exploring ultimate purposes.)
