Once the objectives have been defined, they should be translated into a set of more specific outcomes that the exercise is expected to produce. These outcomes may be either tangible or intangible. Thus, you will need to:
• Define the main outcomes from your exercise
• Relate outcomes to user groups
• Describe these outcomes in a way suitable for different audiences so as to promote the exercise
Relating outcomes to users
When defining the outcomes it is crucial to discuss them with all their possible users. The desired outcomes from the Foresight exercise may vary between actors – some may want a focus on certain types of work, others on particular sectors of the economy or on certain social groups, and so on. Some expectations as to outcomes may be unrealistic, in that they will be informed by too optimistic a view of the emphasis that will be given to certain issues, of how far decision-makers are liable to heed the inputs from Foresight in dealing with such issues, and of how rapidly change can be expected.
For these reasons, it is helpful to have a clear notion of the sorts of benefits for different groups of people that can reasonably be expected. This needs to be conveyed as part of the Foresight activity. It needs to be communicated by capturing relevant information, and putting it into a form suitable for stakeholders to examine. As the exercise proceeds, and better understanding is gained as to what it can and cannot hope to accomplish, there may also need to be some modifications to these expectations.
Background note: Let’s take the example of a roadmap developed within the framework of a policy-intelligence institute. The experts involved will want a roadmap that is as ‘scientifically exact’ as possible. However, the policy makers sponsoring the project may actually need an overall picture including socio-economic and human factors, even if the technological and scientific issues are treated in a more superficial way. Accordingly, the validation of the process and outcome should be considered as much an assessment of relevance – a ‘market test’ or quality control – as a purely scientific validation. The ultimate evaluation of a Foresight study is whether the outcomes have been translated into actions and have triggered some changes in the client organisations.
Defining the target outcomes
In close interaction with the sponsor, and possibly with representatives of the main user groups, the team should prepare a set of outcomes from the Foresight exercise. As a start you can get some inspiration from the list of typical outcomes of Foresight exercises compiled in this guide or from other exercises. However, it is vital that you enumerate the expected outcomes for your specific exercise taking into account:
• The identified users’ needs
• The set objectives
• The specific context
When considering the outcomes it is vital not to focus only on tangible (formal) outcomes such as reports, priority lists, etc., but also to consider explicitly whether more process-related (informal) outcomes, such as improved networks or changed mindsets, are aimed at as well.
Remember that all later design steps are likely to refer back to the target outcomes. Therefore, outcomes that are not explicitly recorded in the early design stage are not likely to be adequately addressed later on. On the other hand, the success of the exercise will later be assessed in the light of the target outcomes, so you should be careful not simply to list everything that might conceivably be done.
There is more information on defining the expected outcomes in one of the example cases: Futur – the German Research Dialogue
Describing the target outcomes
It is advisable to describe the outcomes you are expecting in a way that is suitable for different audiences (policy makers, various stakeholder groups, and the general public). In many cases it will be useful to prepare different types of descriptions for different audiences. In the same way as when defining the objectives it is important to think about both tangible and intangible outcomes.
Again, these two types call for different kinds of descriptions: For tangible products like reports and action lists you will need to come up with formal descriptions such as topics to be addressed, structure, number of pages, etc. For communication purposes you might want to draw up templates at an early point in the exercise to show users and stakeholders what to expect from the exercise.
For the process-related outcomes you could provide some best practice examples from other exercises showing what the intangible benefits were of an exercise with a similar focus to yours. Together with the objectives the description of outcomes should become part of the scoping document and other documents used to promote and communicate the exercise.
To avoid misplaced expectations, it should always be clear to everybody involved at the various stages what outcomes are expected.
The definition of outcomes is part of the iterative process of the design phase of the exercise. At various stages of the design phase (e.g. when assessing the resources available) it will have to be re-examined.
Once the main tasks of the Foresight exercise have been completed, a number of follow-up activities are required to ensure that the results are used effectively and all possible lessons are learned and passed on to future exercises. These activities may include:
• Disseminating the results
• Evaluating the effectiveness of the exercise
• Turning Foresight into an ongoing activity
All too often, insufficient thought is given to the action to be taken following the Foresight exercise. In many cases this has led to implementation gaps (i.e. recommendations have been prepared, but there has been no mechanism to check whether they have been implemented; networks that were working productively have been allowed to dissolve). Making the results of the exercise known to a wide audience and passing them on to future exercises is a key part of achieving full implementation.
The evaluation of ongoing or completed Foresight exercises, their processes, products, and outcomes, is essential to ensure accountability, the credibility of the activity and to demonstrate to potential clients that Foresight is a worthwhile investment. All Foresight exercises lead to a report, but a synthesis of the scenarios or the conclusions in terms of strategic options or the description of the vision or project decided upon needs to be more widely disseminated. Passing on good practice requires more than just distributing the exercise’s documentation.
In recent years, there has been a shift from one-off studies towards more continuous iterations of the process of envisioning future challenges and opportunities. Although making Foresight an ongoing activity brings with it the risk of ‘bureaucratising’ Foresight, there are ways to avoid this and so sustain its value.
The evaluation of Foresight is necessary to assess whether objectives were met, to learn lessons on how the exercise was managed, and to define follow-up activities. Evaluation is likely to refer back to the implementation plan.
In any evaluation strategy it is important to recognise that benefits from Foresight tend to emerge on different levels and at different times. It is also important to keep an eye out for unexpected benefits and to identify ‘success stories’ as possible ‘demonstrators’ of positive outcomes. However, you should bear in mind that impacts often also depend on external factors such as luck and timing. There is certainly no ‘one best way’ to evaluate Foresight, and thus actual exercises cannot be evaluated against an ‘ideal’, ‘optimal’ or ‘best practice’ design. Nonetheless, benchmarking against successful past exercises is a possible approach.
Foresight evaluation has to be designed carefully and various approaches are possible. It should not be so obtrusive as to disrupt operations and annoy stakeholders; nor so cursory as to fail to be useful to the majority of these stakeholders. It also needs to be sufficiently independent to provide a credible and legitimate overview of the activity.
A wide range of data may be relevant to the evaluation. Some of these data may be “by-products” such as records of meeting attendance, press reports, publication lists, etc. But often it will be necessary to generate new data – often by surveying people participating in (or potentially being influenced by) the activity.
One key feature of evaluation is measuring the exercise’s achievements against its intended objectives using the “Logic Diagram” approach.
The EFP brief below provides an example of Foresight evaluation. Furthermore, the FTA conference 2011 had a parallel session on evaluation and impact. Material from the conference is available here.
EFP Brief 119 Evaluating Foresight: Colombian Case: This brief introduces the evaluation framework designed by the Manchester Institute of Innovation Research (MIoIR) for the evaluation of the Colombian Technology Foresight Programme (CTFP). An assessment of the First Cycle of CTFP (2002-2004) carried out by PREST (now MIoIR) produced the 2004 Recommendations to CTFP report (Popper and Miles, 2004) that was used to reshape the objectives and activities of the Second Cycle of CTFP. The current evaluation framework is a follow-up of these activities.
As explained in the “Logic Diagram” below, foresight exercises are in general evaluated against:
• overall policy objectives
• objectives of the exercise
• main activities pursued
• immediate effects
• intermediate impacts
• ultimate impacts
Steps and their relevance to evaluation:
• Overall Policy Objectives: Identifying the overall mission of the organisations sponsoring Foresight, leading to a specific Foresight exercise and a range of other activities. Evaluation focuses on the relationship between these different activities.
• Objectives of the Foresight Exercise: The main goals selected for the Foresight activities, including goals that remained implicit, as well as goals added to the exercise during its operation. Evaluation examines how well all of these goals have been accomplished.
• Main Activities Pursued in the Foresight Exercise: The exercise will pursue a number of major activities. Evaluation examines how well these activities have contributed to achieving the Foresight objectives. Monitoring, in contrast, examines the detailed operation of the activities, how far milestones are being met, etc.
• Immediate Effects: Evaluation examines the extent to which tangible outputs have been achieved (e.g. reports produced and circulated, meetings held and attended).
• Intermediate Impacts: Evaluation, using methods such as interviews and surveys with participants in the projects and with the “users” of their results, asks questions such as: Have new networks been formed? Have people changed their behaviour? Have other organisations incorporated Foresight methods or results?
• Ultimate Impacts: Evaluation will try to identify effects of the exercise on regional performance as a whole, although the effects of diverse Foresight and other interventions may be difficult to disentangle.
Data for evaluation
The most straightforward way of evaluating outcomes is to ask the people involved in the activity to report on them systematically. This systematic approach has to be open enough to allow unexpected benefits to be captured, and will need to be employed at several points in time (if not continually), so as to capture immediate and longer-term benefits, and changing appraisals of how important these have proved to be. Furthermore, benefits may be experienced at different levels – in terms of the effectiveness and careers of individuals, the organisational capabilities of participating agencies and firms, and improvements in communication networks and social interaction more generally. Thus survey questions need to be framed so as to capture different types of benefits.
Examples of the sorts of data on potential benefits that might be generated include:
• Are there improved linkages? Are participants (especially the stakeholders who might be more peripheral to existing networks) more aware of, and better known by, relevant organisations and experts? Are they involved in meetings and discussion groups, do they have access to sources of knowledge and assistance when faced with problems and opportunities? Such benefits can be assessed by asking participants directly about their experiences, or by examining data on meetings, websites, help lines, etc.
• Have new activities or initiatives been undertaken, and have priorities been shifted as a result of Foresight? This involves examining what the sponsors of these activities claim, and what the other people involved in collaboration or implementation believe to be the case, how far reference is made to Foresight in supporting documents, etc.
• Is there evidence of the creation of a “Foresight culture”, with longer-term perspectives being taken seriously by a wider spectrum of actors? Have other bodies undertaken Foresight activities of their own, and is there evidence of the results of Foresight being discussed within user organisations?
Many regions, sectors or organisations find ongoing Foresight to be a valuable way of adapting to new challenges as they arise. Although a one-off Foresight exercise may inform decisions for a length of time beyond the particular policy need that triggered the exercise, in the end it is likely that:
• the reports will be viewed as out of date and increasingly irrelevant
• the personal links forged in networks will have decayed as people move around within and between organisations
• even the Foresight skills acquired may grow rusty through disuse
• other topics are likely to arise which require longer-term perspectives, making a new Foresight effort necessary
The upshot of this is that some form of continuous Foresight activity is bound to be of value in a country, region or sector. This does not necessarily mean that a full-blown Foresight programme should be run on a permanent basis – though this is not inconceivable, as long as plenty of room is built into it for renewal and reorganisation to deal with changing circumstances.
It may be something far more modest, such as setting up a Foresight Unit charged with conducting small-scale Foresight exercises or training activities with particular agencies or sets of users on a continual basis. Such a unit could also play a valuable role in organising regular meetings to maintain and reinvigorate the networks set up during the original Foresight activity, and in providing information and analysis that can help update the reports and considerations that such networks may have generated.
The final evaluation of a Foresight project depends primarily on client satisfaction. However, the following points are important:
Keeping the Process Focused
It is essential to keep the process focused. The temptation to scan a functional or technological field exhaustively should be avoided, so as to highlight the most important information and avoid becoming overwhelmed with details. The challenges, functions and technologies have to be prioritised and only the more relevant or important ones selected.
Inclusion of Human Factors
It is essential that the policy-intelligence roadmap is centred on some of the major challenges society is facing rather than be “pushed” by technology and the technology developers. Therefore the ‘challenge’ and the human factors, i.e. the economic, social, human and demographic dimensions, have to be intrinsic to the roadmapping process.
Transparency and Legitimacy
It is important to ensure the legitimacy of studies, which may later be used to support major decisions in R&D policy. Therefore, the requirement for transparency of the roadmapping project should be considered early in the definition stage.
Reliability and Repeatability
Reliability and repeatability are essential for the credibility of the products and the process. Even though roadmapping deals with uncertainty, this should not imply that uncertainty and randomness are part of the process itself. The transparency of the process is a pre-requirement for the reliability of the output.
User-Friendliness of the Outputs
Considering the information overload of clients and stakeholders (the Foresight and S&T communities, industry, citizens’ groups, etc.), securing the uptake of the outputs of Foresight studies is always challenging. The need to deliver the outputs in user-friendly formats should therefore be integrated from the definition stage, the form being in this case almost as important as the content.