Editorial

"Evaluation of Research Has Increased Over Time But it is Necessary"

(April 13th, 2016) As a member of an independent expert group, Stefan Kuhlmann was involved in investigating the outcomes and impact of the 7th EU Framework Programme. Lab Times talked to him about his career, his experiences as an FP7 evaluator and current trends in research evaluation.

Stefan Kuhlmann is a full professor at the Department of Science, Technology & Policy Studies of the University of Twente in Enschede, the Netherlands. The German studied political science and history in Marburg and received his PhD from the University of Kassel in 1986. Thereafter, he joined the Fraunhofer Institute for Systems and Innovation Research (ISI) in Karlsruhe, Germany, where he became managing director (2005-2006) and one of the leading international experts in public policy analysis with a focus on research, technology and innovation. Kuhlmann was Professor of Innovation Policy Analysis at the Copernicus Institute of Utrecht University (2001-2006). He has published numerous studies on different aspects of research and technological innovation and has served on many scientific advisory boards and evaluation committees at national and international levels.


What has sparked your interest in policy analysis?

Kuhlmann: I got into the field of technology and society more or less accidentally as a young social scientist. Research in this area gathered momentum in Germany in the late 1970s and early 80s. Questions raised at that time addressed the influence of novel technologies, such as computers, on society. Initially, I investigated the effects of computer technology on public administration, as a member of a research group on ‘Informatization of Public Administration’ at the University of Kassel. Very soon, the topic of governance became one of my favourite subjects: what are the possibilities and limitations of governing research and innovation processes? During the 1990s, at the Fraunhofer ISI, I continued to work on these topics but placed more emphasis on technology, innovation and research policies. It became very obvious to me that scientific research, technological development and innovation are creative processes with a great impact on both the past and the future development of our society. What are the risks and opportunities? How can we actively intervene in and shape these processes? What is the influence of research policy or research administration on them? These are all serious questions requiring substantiated answers.


You were a member of the group in charge of the ex-post evaluation of FP7 (2007-2013). What were the main challenges?

Kuhlmann: European Framework Programmes have evolved over several decades, starting in the 1980s. Their size, diversity, complexity and objectives have become substantially more diversified since the early days. Nowadays there are many different sub-programmes, which, to a large extent, are conducted independently of each other. Although they are referred to as ‘Framework Programmes’, one has to keep in mind that FPs are de facto a mixed bag of individual programmes with differing objectives, target groups and expectations. During the work of the High Level Expert Group (HLEG) it soon became clear that we could not apply one single measure to everything.


For the analysis, the HLEG had to plough through numerous reports and data sets. In addition, many interviews with experts were conducted. How was this mountain of work organised and mastered? What was your contribution?

Kuhlmann: The HLEG was, and had to be, composed of experts with widely divergent backgrounds. We had face-to-face meetings once a month. Since we all hold leading positions at our institutions, with strict time limits for such activities, we took a task-sharing approach and addressed particular questions in small groups of two or three people. We received great support from experts hired by the European Commission whenever topics needed to be investigated and addressed in more detail. The European Commission also set up its own working group for the operational support of our study. Early on, the members of the HLEG decided that the report would be a piece of joint work and that we would all share responsibility for it. This does not mean that there were no intense discussions during the preparation of the report. But, in the end, it does not really matter who was specifically in charge of a given part of the report.


Your report, “Commitment and Coherence”, mentions several times that the available data or current methodologies are insufficient to give definite answers. Please elaborate on this topic.

Kuhlmann: The possibilities to fully capture the dimensions of research and innovation activities in a systematic manner are definitely limited. Longer-term outcomes of science as a creative process are not predictable. It is often easier to address the more direct effects of research programmes from a short-term perspective. For example, did the research funds find their way to the researchers, and was the money spent efficiently and according to EU regulations? In contrast, indirect effects often become apparent only after longer periods of time. What is, for example, the effect of a single finding in a given research project on the generation of new knowledge in a whole branch of science? Even more complicated is the question of how newly generated knowledge may influence other branches of science through so-called spillover effects in the long term. In our annual international R&D Evaluation Course, we emphasise that, while we can often demonstrate positive input-output correlations, it is almost impossible to establish causality. This also holds true for the economic effects of Framework Programmes. Politicians, in particular, are keen to establish the causal effects of funding programmes on employment or market competitiveness. But to establish causality, we need not just more data but data that are collected in a systematic, coherent and long-term way. Although the HLEG had access to numerous studies and reports, we often had to compare apples with oranges: the reports were produced from different perspectives and for different purposes, or applied varying methodologies. It was therefore one of our recommendations to the European Commission that evaluations and analyses should be conducted in a regular and coherent manner. There is also a need for more retrospective studies on the long-term effects of FPs. Right now, there are only a few pilot projects that follow up isolated aspects across several Framework Programmes. These studies are of particular interest to me, since they show how structural changes take place not over years but decades, and how different forms of cooperation evolve over time.


Are you satisfied with the final report? How was the feedback?

Kuhlmann: As I already said, the report was created in a highly cooperative fashion and was able to integrate the different perspectives of the HLEG members. As far as I can see, it turned out well. Many colleagues have shown great interest and given positive feedback.


One might get the impression that, nowadays, we are dealing with an evaluation hype. Please comment.

Kuhlmann: It is true that the evaluation of research has increased significantly over time, and I am convinced that evaluation is necessary. For publicly funded projects, it has to be shown that taxpayers’ money is used efficiently and reasonably. But what we also see today is that more and more evaluation exercises are becoming uncoupled from the primary scientific process. Evaluations are increasingly done mechanistically by administrators or service providers. This may lead to misjudgement and, in addition, carries the risk of losing objectivity and transparency, which may worsen the conditions for creative research. Research assessments like those performed in the UK, but also here in the Netherlands, have far-reaching effects not only on individual scientists and their institutions but also on the whole research landscape. Many colleagues share the view that research evaluation has got out of control. In the Netherlands, for example, we have the “Science in Transition” initiative. One of its concerns is that there is too much mechanistic evaluation today and that the quality of science is measured primarily by highly formalised key performance indicators. This may lead to an undervaluation of other dimensions of science, such as societal relevance.


The Imboden report on the German Excellence Initiative was published a few weeks ago. Are you familiar with it?

Kuhlmann: I was myself involved in the appraisal of several universities’ applications for the second round of the German Excellence Initiative. I am convinced that the German research landscape needs such an initiative to overcome notorious limitations and deficits in areas such as internationalisation, interdisciplinarity and competitiveness, which are still present at many institutions. The Excellence Initiative has also had definite beneficial effects on university governance. I agree with the Imboden report’s main conclusions, such as continuing the promotion of the so-called clusters of excellence (first funding line). I also consent to the proposed bonus for excellently performing universities as a second funding line, although it will certainly be a challenge to find reliable and transparent evaluation criteria for excellence.


What about your current research?

Kuhlmann: I head a team of about 30 co-workers focusing on the dynamics and governance of science, technology and innovation. We take an interdisciplinary approach to three broad topics: science and innovation policies, technology dynamics and assessment, and the history of science, technology and society. Equally important to me are the various networks and international projects I have been involved with over the years. For example, I am the President of the European Forum for Studies of Policies for Research and Innovation (Eu-SPRI), which was founded in 2010 and involves 15 member institutions.


Do you think that the topic of scientific evaluation should be addressed in more detail in the education of future scientists?

Kuhlmann: In my opinion, it is not so much the topic of evaluation per se that needs to be addressed. Rather, each researcher should understand that her/his work has implications beyond science. She/he has to learn early on how to deal with the varying demands and expectations expressed, for example, by society, politicians or the economy. At the University of Twente, we have built an interdisciplinary strand of courses into our Bachelor's education, in which we encourage students to reflect on the various aspects of science, innovation and technology. We have also just published a good-practice framework called the Responsibility Navigator. It was developed within the framework of Res-AGorA, a three-year FP7-funded research project, and is based on the premise that research and innovation activities need to become more responsive to societal challenges and concerns. It targets decision makers in various institutional or organisational settings, providing them with guiding principles for the identification, development and implementation of measures and procedures that can transform research and innovation in such a way that responsibility becomes an institutionalised ambition.

Stefan Kuhlmann, thank you very much for your time.

Interview: Ralf Schreck

Photo: S. Kuhlmann



Last updated: 12.06.2016