
Assessing Deviance, Crime and Prevention in Europe

Crimprev info n°20bis – Evaluating safety and crime prevention policies

Philippe Robert


Within the framework of the thematic workpackage on methodological problems and good practice, one workshop was devoted to the evaluation of crime prevention and safety policies.

It started with a seminar1 at the University of Bologna, where four very different examples of evaluation were presented and compared: from Belgium, France, the Netherlands, and England and Wales.


A general report drawing on the lessons of this seminar and the international scientific literature was then prepared, providing an overview of the state of evaluative research and its uses in Europe.

This Crimprev info document is a summary of the workpackage.

I - A variety of evaluative approaches

The steering committee of the workpackage assembled a representative sample of the wide variety of practices, and even conceptions, pertaining to the evaluation of safety and crime prevention policies in Europe.

In Belgium, the evaluative effort – which is linked to security contracts between the Federal State and the municipalities – seems to have lost momentum in the last few years.

France offers the example of an ‘official’ evaluation undertaken by the administration.

The Netherlands has chosen a meta-evaluation that examines all the available country-wide data on the basis of ‘quasi-experimental’ scientific canons of Anglo-Saxon origin.

Lastly, England and Wales present an old and systematic evaluative tradition which covers the entire range of prevention policies, making it possible to address complex methodological problems.

1 - A fast-shrinking evaluation2

In Belgium, the security contracts signed between the Federal Ministry of the Interior and the municipalities were accompanied by evaluations to decide on the continuation, extension or reduction of the funds released by the central government. All of these ‘evaluations’ are ‘summative’ in the sense that their aim is to assess the contract between the State and a municipality in order to decide whether or not to extend it.


2 - An administrative evaluation3

Since the Rocard government of the late 1980s, the evaluation of public policies has become an entirely official activity; a statutory instrument was even promulgated some ten years later. Generally, this activity is conducted by the departments concerned, mostly through the intervention of the inspectorate that exists in each administrative branch. The most prestigious among them, the Inspection générale des Finances (IGF), holds a pre-eminent place, generally in association with the inspectorate belonging to the relevant ministry. The evaluation of public policies is also one of the permanent tasks of the Cour des Comptes, the French court of audit. The place of these two bodies underlines the highly ‘financial’ bent of the official conception of evaluation ‘à la française’: primarily, it involves verifying the proper use of public monies.


With regard to safety and crime prevention policies, however, we can only cite a limited number of examples. In 1993, the Conseiller d'État Jean-Michel Belorgey headed the evaluation mission of the politique de la Ville4, but without giving a very prominent place to the crime prevention programmes which had been grouped with this package of public policies since the late 1980s. This report was a follow-up to the review of the Neighbourhood Social Development local contracts – the ancestor of the politique de la Ville – which was completed in 1988 under the direction of François Lévy, an Ingénieur général des Ponts et Chaussées (Engineer General of Bridges and Highways).


This quasi-administrative monopoly of policy evaluation, especially in crime prevention and safety matters, was seemingly accompanied, on the part of policy-makers and officials, by a solid mistrust of academics, whom they feel are only too ready to criticise public programmes.

For their part, academic circles do not seem particularly keen on this evaluation activity. Only political scientists specialising in the analysis of public policies show some interest in it, and not all of them. This perspective does not really hold the attention of those who study public policies with a bottom-up approach, i.e. through the interactions between the street-level administrative officer and the client, or from the point of view of the latter. There exists a sort of scholarly consensus holding that what is undertaken by the administration in the name of ‘evaluation’ constitutes merely an internal audit, and that there is practically no genuine evaluation of public policies done in France, especially on crime prevention and safety.

However, the Institut national de la statistique et des études économiques (INSEE, National Institute for Statistics and Economic Studies) has conducted a study on the trend in the relative situation of neighbourhoods where the politique de la Ville was implemented between the 1990 and 1999 censuses. This work by government statisticians is what comes closest to an evaluative study. For the most recent period, among the many national observatories operating in the field of crime prevention and safety, those dealing with disadvantaged urban zones and, above all, with drugs and addiction have published reports that are clearly in line with an evaluative perspective. Some specific studies can also be mentioned, for example on the use of CCTV in high schools of the Île-de-France region (the Greater Paris area), but only to point out that their dissemination has been seriously curtailed by the instructions of the sponsors. Lastly, mention should be made of a certain number of studies, evaluative in nature, on the penal careers of convicts sentenced to various types of penalties, after their release.

3 - An example of a meta-evaluation of a safety and crime prevention policy5

In the Netherlands, queries by the Dutch Court of Audit regarding the performance of a crime prevention and safety programme initiated in 2002 led to a three-phase meta-evaluation by the Dutch Institute for Social Research (Sociaal en Cultureel Planbureau, SCP), entrusted to Van Noije and Wittebrood.


These authors first collected some 150 evaluations covering various aspects of this policy. They then selected those which they thought conformed to the evaluative principles defined in the classic report for the United States Congress prepared under Lawrence Sherman’s direction, as well as to the criteria of the Campbell Collaboration Crime and Justice Group. Finally, they used the selected studies to review the different chapters of the government policy under scrutiny6.


This is not the first time that the Dutch government has asked for a systematic review of the available evaluations. The raw material has been provided by the long-standing practice of reserving 10% of the amount allocated to prevention and safety programmes for evaluation. Such a routine points to an established practice of meta-evaluation, although, each time, the reviewers deplored the extremely inconsistent quality of the evaluations carried out.

4 - An abundance of evaluations and discussions on methodology7

In England and Wales, crime prevention and safety programmes have been systematically accompanied by evaluation since the 1980s. What is more, since its return to power in 1997, the Labour party has combined its policy of crime and disorder reduction with evidence-based policy and practice (EBPP), which primarily rests on the accumulation of expert knowledge acquired from the systematic examination of evaluations. In undertaking the latter, the standardised criteria developed in the aftermath of Sherman’s work at the University of Maryland, and those developed by the group pursuing the work initiated by Donald Campbell, carry more and more weight.


This abundance of works using a variety of methods and data enables a detailed discussion of evaluative methodology.

The systematic use of evaluation in piloting crime prevention and safety policies also facilitates discussion of the consequences of such an extensive mobilisation of scientific resources, and of their harnessing to the ultimate aims of public policies. In this regard, Hope points out that such a situation could paradoxically lead, not to the ‘scientification’ of politics, but to the politicisation of science, if it is combined with a compelling aversion to critical conclusions. Science can contribute to politics only if each maintains its autonomy in relation to the other, and hence its own qualities.

II – Evaluative know-how

Evaluation constitutes a paradoxical subject: everyone extols its virtues, but in reality all are rather wary of it. On the policy-makers’ side, it would be delightful to have proof (with an aura of science) that a policy works, but they are always apprehensive that the policy on which their success and reputation are based may not prove to be as good as claimed. On the academics’ side, the difficulty of the method can prove a problem. There is always the dread of ridicule, i.e. having pronounced a programme effective only to discover later that it is counter-productive. Most of all, they dread having their arm twisted by sponsors who will only accept a laudatory evaluation. All in all, each protagonist, policy-maker or scholar, potentially has a lot to gain through evaluation, but also runs the risk of losing enormously. And perhaps this is what explains the real reluctance to work on a theme which everyone nevertheless talks about.

The temptation is great to reap the benefits of evaluation without subjecting oneself to its risks. There are two ways of doing this: the first consists of evaluating oneself; the second is to control the external evaluator so tightly that he is practically forced into arriving at only positive conclusions. By proceeding internally, the institution responsible for the prevention and safety policy will at best produce an audit, which measures the output on the basis of the initial intentions and the inputs. Evaluation, on the contrary, only begins when what is measured is not what was done, but the outcome – the impact that this action has had on an external target, a group or an area. The setting up of twenty CCTV cameras in the streets of a city constitutes an output, not an outcome: this is what has been done. On the other hand, achieving a 20% reduction of crime on the street, or reducing fear, definitely represents an outcome.

For an evaluation, a view from a distance is a necessary, if not sufficient, condition. Similarly, it is necessary to use data external to administrative life in order to estimate the impact.

For all that, determining beforehand the real substance of the policy or programme under review – its objectives, its inputs, their implementation and the outputs – is a prerequisite for any evaluation. Few domains of public action are more subject to announcement effects than crime: there is talk and there are promises, but all these statements do not constitute faithful descriptions of what is really going to be undertaken. Failure due to the ineffectiveness of a programme has to be differentiated from failure attributable to non-existent or uncompleted implementation.

A sort of minimal standard has been set, comprising a before/after comparison, consideration of control groups and areas, and lastly, examination of the operation/impact relationship8.


The before/after comparison is obviously fundamental: without it, there simply cannot be any evaluation. At least four points deserve attention. First, evaluation should preferably be provided for before the commencement of the programme: the situation is easier to observe ex ante than to reconstitute painfully after the fact. Also, a sufficient number of criteria for this before/after comparison should be retained so as not to overlook unexpected effects. Such a precaution makes it easier to discern unintended effects: harassing dealers can reduce the impact of drugs in an area, but the police intervention methods may exasperate the youth so much that violence increases. Extending the before/after procedure beyond the sole implementation area allows for the identification of possible displacement effects of crime: it may decrease in areas where the programme has been implemented but resurface in an adjoining area. It also allows for the identification of virtuous contagion effects, where the investment in crime prevention is powerful enough to radiate into areas contiguous to its particular zone of intervention.

Specialists are highly critical of before/after measures that are not accompanied by the observation of control zones or groups where the programme to be assessed is not implemented. They even insist on the importance of having a pool of control zones or groups in order to neutralise the effect of a sudden crisis in one of them. This amounts to an aspiration to move forward from the rather primitive model of the ‘black box’ to a method deemed quasi-experimental. The aim of control zones or groups is to settle the issue: can the change observed be attributed to the assessed programme, or would it have occurred anyway in its absence? However, the functioning of this control is not automatic: the zero effect of a neighbourhood watch programme can simply be an expression of the similar levels of ‘social capital’ available in both the experimental and the control communities. This type of difficulty obviously recedes with Tilley’s ‘realist’ evaluation: as it is no longer a matter of determining ‘what works’, but what has worked in a given context and in view of this context, it is clearly no longer necessary to find control zones or groups, nor to expose oneself to the agony of this quest. From this perspective, it is not the programme that is being tested but the theory that underlies the actual implementation.
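The logic of comparing an implementation zone against a pool of control zones can be sketched as a simple difference-in-differences calculation. The figures below are purely hypothetical, chosen only to illustrate the reasoning; a real evaluation would of course rest on far richer data and on checking that the zones are genuinely comparable.

```python
# Hypothetical before/after crime counts (offences per year) for an
# implementation zone and a pool of control zones -- illustrative only.
before_after = {
    "implementation": (400, 300),
    "control_a": (380, 360),
    "control_b": (420, 400),
    "control_c": (390, 370),
}

def relative_change(before: float, after: float) -> float:
    """Proportional change from the 'before' to the 'after' period."""
    return (after - before) / before

# Average change across the control pool: an estimate of what would
# likely have happened anyway, in the absence of the programme.
controls = [v for k, v in before_after.items() if k != "implementation"]
control_trend = sum(relative_change(b, a) for b, a in controls) / len(controls)

programme_change = relative_change(*before_after["implementation"])

# The difference is the change attributable to the programme, under the
# (strong) assumption that the control zones are truly comparable.
net_effect = programme_change - control_trend
print(f"programme zone: {programme_change:+.1%}")
print(f"control trend:  {control_trend:+.1%}")
print(f"net effect:     {net_effect:+.1%}")
```

Using a pool of several control zones, rather than a single one, is precisely what cushions the estimate against a sudden crisis in any one of them; and, as the text notes, a net effect of zero may still mask similarities (such as shared ‘social capital’) between the experimental and control communities.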

More generally, the mechanical application of a quasi-experimental procedure does not guard against all selection biases. A community can be selected because it seems favourable to the programme to be implemented, but the effect observed can be due as much to the ‘social capital’ present in this community as to the programme operating there. Particularly deprived zones or groups can also be selected, in which case what is observed may be only their reversion to the mean. This is why Hope suggests modelling selection effects by drawing inspiration from micro-econometrics.

In any case, all these difficulties accord particular importance to the third phase of evaluation. There are three prerequisites to concluding that an outcome exists: i) envisage and discard alternative explanations; ii) explain, on the contrary, how the programmes actually implemented arrived at the result observed; and iii) assess the likelihood of this process. This is a crucial point where the evaluator’s know-how and experience prove useful.

There remains the downstream part of the evaluation: its utilisation. The monitoring of the programme should be considered separately from its replicability. As a mere post-implementation judgement, an evaluation may only be of historical relevance to the programme, something which may not be of particular interest to its directors and to the professionals involved. This is the real advantage of an impact study set up at the very outset: it is capable of providing information while underway, thus making it possible to adjust the tools.

But evaluation is also expected to indicate solutions promising enough to be replicated. This is in fact the reason why evaluation should preferably focus on specific programmes which, by their rigour and scope, are likely, in the case of a positive evaluation, to be re-used on a wider scale. Nevertheless, the exercise is a delicate one: the generalisation of a pilot experiment is not straightforward, and something that has given good results in a particular context can turn out to be less successful when transposed to very different ones. Here, selection biases can have a major impact, hence the importance of detecting and neutralising them before concluding as to the external validity of the programme under evaluation.


In the end, evaluation is a cumulative exercise. It is through evaluation that know-how may be improved. It is also while evaluating that practical knowledge of the impact of the different crime prevention measures is gradually built up... knowledge that can always be revised, as is normally the case with scientific research, but also knowledge that makes it possible to account for public policies and adjust them as and when required. It is useless to try to evaluate everything, especially when the experience and skills at hand are modest: the outcome would be a pseudo-evaluation, based on impressions or trends and therefore doubtful. It is preferable to start modestly by choosing some specific programmes on which available resources and skills can be concentrated. Thus, little by little, know-how will be developed and reasonably reliable diagnoses accumulated. Thereafter the process will have a snowball effect through trial and error.

It should be remembered, however, that in the domain of evaluation the relationship between policy-makers and scholars is of a particularly delicate nature. Between the former’s refusal to approach the latter and, on the contrary, a quasi-takeover, it is not easy to promote cooperation based on mutual respect for the independence of the two spheres. Without it, however, evaluation can only be pretence.

For all that, the effectiveness of a programme does not settle the question of the relevance of its location: the resources expended in one place may be in short supply in another where the needs are more urgent. Over and above all evaluation, we cannot shirk the task of reflecting on how the target areas of intervention are prioritised; otherwise, security measures will in the long run be merely the privilege of the well-off.

Lastly, crime prevention and safety policies are on the whole incapable of preventing the devastating effects of an accumulation of negative socio-economic conditions on certain social groups or poverty-ridden urban zones. These policies should not help to mask the absence of effective social and economic policies or, worse, the continued accumulation of segregative decisions and practices. Without an effective re-affiliation policy, they would merely be an illusion.

This said, it would be interesting to find out how many (scientifically acceptable) evaluations have had a real change-inducing effect on public policies.

On the subject of recommendations, we could suggest to anyone who would like to be involved in the evaluation of crime prevention and safety policies:

- not to mix up evaluation – which applies to the impact of these policies on a target – with audit, programme controlling or cost-effectiveness calculations;

- to entrust the evaluation to a scientific body, competent and external to the institutions in charge of the programmes to be evaluated;

- to respect the mutual exteriority of the policy-makers’ and the evaluators’ spheres;

- to plan the evaluation before the start of the programme;

- to provide data and know-how coherent with the nature of the evaluation.


1  The people present were Wolfgang Heinz (Univ. Konstanz), Tim Hope (Keele Univ.), Marion Jendly (CIPC), Michel Marcus (FESU), Gian-Guido Nobili (Città sicure), Amadeu Recasens i Brunet (UCB), Philippe Robert (CESDIP, CNRS, UVSQ, MJ), Sybille Smeets (ULB), Carrol Tange (ULB), Anne Wyvekens (CERSA, Univ. Panthéon-Assas and CNRS), Karin Wittebrood (SCP), Renée Zauberman (CESDIP, CNRS, UVSQ, MJ) and post-graduate students from Bologna University.

2  Based on the report on the Belgian example presented at the Bologna seminar by Sybille Smeets and Carrol Tange.

3  Based on the report on the French example presented at the Bologna seminar by Anne Wyvekens.

4  A set of public policies addressing impoverished urban areas.

5  Based on the report on the Dutch case presented at the Bologna seminar by Karin Wittebrood.

6  Law enforcement, ‘developmental’ crime prevention, situational prevention and lastly, systemic measures (i.e. those involving the functioning of the criminal justice system).

7  Based on the report on the England and Wales example presented at the Bologna seminar by Tim Hope.

8  The classic references have been provided by the report prepared under the direction of Lawrence Sherman for the American Congress, the recommendations of the Campbell Collaboration Crime and Justice Group and the Scientific Methods Scale (SMS), even if the exclusive nature of these models has led to various reservations.

Date of publishing :

1 May 2009

Paper ISBN :

978 2 917565 51 3

To cite this document

Philippe Robert, «Crimprev info n°20bis – Evaluating safety and crime prevention policies», CRIMPREV [En ligne], CRIMPREV programme, Crimprev Info, URL :

About : Philippe Robert

Centre de recherches sociologiques sur le droit et les institutions pénales (CESDIP), Centre national de la recherche scientifique (CNRS), Université de Versailles Saint Quentin (UVSQ), Ministry of Justice

Contacts :

Philippe Robert, CESDIP, Immeuble Edison, 43, boulevard Vauban, F – 78280 Guyancourt. E-mail :