3.3. Challenges for Evaluation

The limited number of evaluations is itself a useful indicator that organisations face significant challenges in evaluating Cyber-Security education campaigns. There is no direct literature explaining why so few evaluations have been undertaken, but some challenges can be identified from the comparative analysis of the campaigns:

  • Campaigns may be designed so that evaluation occurs only at their conclusion, and many campaigns have been initiated so recently that there has been no opportunity for evaluation;
  • Evaluation may have occurred but has not been publicly disclosed;
  • Qualitative and quantitative metrics[3] are difficult to establish for the evaluation of training and awareness initiatives (in any sector); and
  • Project budgets may not have included an evaluation component, and evaluation can be an expensive undertaking.

There are also challenges in determining ‘when’ to evaluate a campaign. Post-project ‘program evaluation’ is useful and often required, but it occurs ‘after the fact’: it may guide the design of new projects, yet it comes too late to have an impact on the project just completed.

The most reliable evaluations have involved a multi-process method in which key performance indicators and metrics are in place before, during and after the project. The French project Family Online is a good example of multi-process metrics [Refer to Section 3.2 Campaign Evaluations: A. F@milie en ligne: Sur internet, la sécurité ça commence aussi par vous (Family Online: Internet Security Begins With You) at page 12].
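As a simple illustration of this approach, the sketch below (in Python, using hypothetical figures and names that are not drawn from the French campaign) shows how awareness survey results taken before, during and after a campaign might be compared against the pre-campaign baseline:

    # A minimal sketch, assuming awareness is measured as the share of
    # survey respondents who meet some fixed awareness criterion.
    from dataclasses import dataclass

    @dataclass
    class SurveyResult:
        """One awareness survey, taken at a single phase of a campaign."""
        phase: str        # 'before', 'during' or 'after' the campaign
        respondents: int  # total number of people surveyed
        aware: int        # respondents meeting the awareness criterion

        @property
        def rate(self) -> float:
            return self.aware / self.respondents

    def report_awareness(results):
        """Print each phase's awareness rate and its lift over the baseline."""
        baseline = results[0].rate
        for r in results:
            print(f"{r.phase:>7}: {r.rate:5.1%} (lift {r.rate - baseline:+.1%})")

    # Hypothetical figures for illustration only.
    report_awareness([
        SurveyResult("before", respondents=1000, aware=220),
        SurveyResult("during", respondents=950, aware=310),
        SurveyResult("after", respondents=980, aware=410),
    ])

Because the same indicators are defined before the campaign begins, mid-campaign readings (the ‘during’ phase above) can inform adjustments while the project is still running, rather than only informing successor projects.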

Another perspective on when to evaluate is drawn from software development methodology, which uses ‘work in progress’ evaluation throughout a project to address the high failure rate of large software projects. This approach integrates evaluation, particularly of alignment with well-understood user needs, into the design process rather than only the program governance and review stage, and thereby offers a technique for assisting new and current projects. See the end of this chapter for further discussion.


[3] Examples of metrics include:
- Consumer surveys to measure awareness before and after campaigns
- Analysis of website traffic
- Analysis of skills acquisition before and after campaigns