Prospects for Technology Assessment in a framework of responsible research and innovation.

This article discusses, from a European perspective, the broad societal quest for the "right" impacts of science and technology and the imperative for governmental bodies to make "impact assessments" part and parcel of the planning and justification of their major activities. Building on the systematic use of impact assessments and foresight, a framework for responsible research and innovation is proposed.


Introduction
In the context of European policy making, Technology Assessment (TA) ideally has to merge with other types of impact assessments, as the success of major public policies increasingly depends on the anticipated impacts of the selected scientific and technological options. In practice, this merging is taking place, driven both by a "policy pull" for impact assessments and by the practice of "assessing" itself. This can be illustrated as follows: 1. The European Commission must deliver general impact assessments on all its major legislative proposals within the framework for better regulation (European Communities, 2006). The Commission impact assessment follows an integrated approach, introduced in 2002, which also includes ex-ante impact assessments for the Framework Programmes for Research. These impact assessments cover, among others, social, environmental and economic impacts. These circumstances also bring into focus the interwovenness of Technology Assessments with broader impacts. Results of Technology Assessments can and should feed into impact assessments of prospective, planned research activities. There is, in other words, a certain "policy pull" to merge and use various impact assessments.
2. In the tradition of Technology Assessment, there has been a preoccupation with assessing the intended as well as the unintended consequences of the introduction of new technologies. Increasingly, TA practitioners have had to make use of other assessment activities, such as environmental and sustainability impact assessments, in order to deliver such assessments. Conversely, those engaged in sustainability assessments, public policy evaluation or broad impact assessments of important legislative proposals cannot ignore the role of science and technology, and they too look towards technology assessments. In other words, the practices of "assessors" already show a certain interwovenness of the various assessment processes.
The "policy pull" dimension does have a reconfiguring influence on the "type" of impacts to be assessed. Whereas TAs have traditionally addressed the "negative consequences" of technologies in terms of risks and adverse effects, the focus of attention within policy is predominantly on demonstrating the potentially positive impacts of future outcomes of public policy, including research policy. "Negative impacts" are dealt with within the context of broader cost-benefit analysis or within specialised fields of policy, such as risk management and risk assessment. The quest for positive or the "right" impacts is a much more overarching feature of public policy. A natural question to consider is therefore: what are the "right" impacts, and how can policy legitimately pursue this quest for them? The subsequent question is then, of course, how these impacts should be assessed on the basis of various impact assessments, including TAs. This article will attempt to answer these questions and show how they can be tackled within a new framework for responsible research and innovation.

Defining the "right" impacts for science and technology policy
Some philosophers of technology have recently argued that science should move beyond a contractual relationship with society and join in the quest for the common good. In their view, the "good in science, just as in medicine, is integral to and finds its proper place in that overarching common good about which both scientists and citizens deliberate" (Mitcham and Frodeman, 2000). This view may sound attractive, but it fails to show how various communities with competing concepts of the "good life" within modern societies could arrive at a consensus, and how this could drive public (research) policy. Moreover, an Aristotelian concept of the good life is difficult to marry with a modern rights approach, whereby, for instance in the case of the European Union, the European Charter of Fundamental Rights provides a legitimate and actual basis for European public policy. Nonetheless, their point of departure remains challenging: "We philosophers believe that publicly funded scientists have a moral and political obligation to consider the broader effects of their research; to paraphrase Socrates, unexamined research is not worth funding" (Frodeman and Holbrook, 2007). The US National Science Foundation assesses research proposals in terms of "broader impacts" when considering whether they are worth funding. Under the European Framework Programmes for Research, there is a long tradition of awarding research grants on the basis of anticipated impacts; indeed, even at the stage of evaluation of research proposals, particular impacts are sought. Currently, the expected impacts of research topics which are subject to public calls for proposals are listed in the work programmes of the 7th Framework Programme. But what are the legitimate normative assumptions that make these expected impacts the right impacts, allowing us to steer public research agendas?
We cannot make an appeal to concepts of the good life, but we can appeal to the normative targets which we find in the Treaty on European Union. These normative targets have been democratically agreed and provide the legitimate basis for having a public framework programme for research at the European level. From Article 3 of the Treaty on European Union (European Union, 2010) we can derive the following:
• "The Union shall (…) work for the sustainable development of Europe based on balanced economic growth and price stability, a highly competitive social market economy, aiming at full employment and social progress, and a high level of protection and improvement of the quality of the environment. It shall promote scientific and technological advance.
• It shall combat social exclusion and discrimination, and shall promote social justice and protection, equality between women and men, solidarity between generations and protection of the rights of the child.
• To promote (..) harmonious, balanced and sustainable development of economic activities, a high level of employment and of social protection, equality between men and women, sustainable and non-inflationary growth, a high degree of competitiveness and convergence of economic performance, a high level of protection and improvement of the quality of the environment, the raising of the standard of living and quality of life, and economic and social cohesion and solidarity among Member States.
Rather than pre-empting views and concepts of the "good life", the Treaty on European Union provides us with normative anchor points. These normative anchor points and their mutual relationship thus provide a legitimate basis for defining the type of impacts, or the "right" impacts, that research and innovation should pursue (see Figure 1 below). There are, of course, normative anchor points which have implications beyond the EU. A reflection on "solidarity" and the promotion of human rights refers to possible implications for the use of technology. Benefit sharing from the use of technologies and the use of genetic resources should address particular technology divides and potential inherent injustice, and be translated into international commitments (Schröder, 2010). The subsequent question is how the normative anchor points are reflected [or neglected] in the development of technologies. A short historical perspective can shed some light on this question.

The responsible development of technologies: An historical perspective
The formation of public opinion on new technologies is not a historically or geographically isolated process; rather, it is inevitably linked to prior (national and international) debate on similar topics. Ideally, such debates should enable a learning process, one that allows for the fact that public opinion forms within particular cultures and political systems. It is therefore not surprising that, in the case of relatively new technologies, such as nanotechnologies, the nature of public debate and its role in the policy-making process are articulated against a background of previous discussion of the introduction of new technologies (such as biotechnology), or that specific national experiences with these technologies become important. For example, the introduction of genetically modified organisms (GMOs) into the environment is a frequent reference point within Europe (and often absent in such debates in the USA).
This historical development of policy frameworks can be followed through the ways in which terms are used and defined: initially, definitions are often determined by the use of analogies which, in the initial stages of the policy process, serve to 'normalise' new phenomena. In a number of countries, for instance, GMOs were initially regulated through laws which dealt with toxic substances. Subsequently, such analogies tend to lose their force as scientific insights on the technology grow and distinct regulatory responses can be developed. GMOs, for example, eventually became internationally defined as 'potentially hazardous', and, in the European Union, a case-by-case approach was adopted under new forms of precautionary regulation. This framework was developed over a period of decades, and took into account the ever-widening realm in which GMOs could have effects (developing from an exclusive focus on direct effects to eventually including indirect and long-term effects). It is not, however, solely the scientific validity of analogies which determines definitions and policy: public interest also plays an important role. Carbon dioxide, for instance, has changed from being viewed as a gas essential to life on earth to being a 'pollutant'. (The latest iteration of this evolution came just prior to the Copenhagen summit on climate change in December 2009, when the American Environmental Protection Agency defined greenhouse gases as a "threat to public health", a definition which has important implications for future policy measures.) In the case of policies for relatively new or emerging technologies, such as nanotechnology policy, it seems likely that we are still in the initial phases of development. So far there are no internationally agreed definitions relating to the technology (despite repeated announcements of their imminence), and nanoparticles continue to be defined as "chemical substances" under the European regulatory framework REACH.
(Analogies are also made with asbestos, as a way to grasp hold of possible environmental and human health effects, but these are contested. There is no certainty that they will become the definitive way to frame risk assessments.) To cite one topical example, nanotechnology in food will not start its public and policy life with a historically blank canvas but will be defined as a 'novel food' under a proposal for renewing the Novel Foods regulation. (The Novel Foods regulation came into existence in the 1990s with foods containing or consisting of GMOs in mind).
The European Commission has adopted a European strategy and action plan on nanotechnologies, which addresses topics ranging from research needs to regulatory responses, and from ethical issues to the need for international dialogue. This strategy emphasises the "safe, integrated and responsible" development of nanosciences. The "safe, integrated and responsible" development gives us a new, overarching anchor point for making nanotechnology policy. This should be built on the basic anchor points in the Treaty, concerning "a high level of protection of the environment and human health", applying precaution, etc.
These normative anchor points, and their mutual interdependency, should guide the impact assessments of technologies and the desirable expected impacts of research. Foresight, and the ability to identify plausible outcomes, should be employed to identify the right impacts.

Identifying Plausibility and use of Foresight
One can distinguish, within the thought tradition of Charles Sanders Peirce, the plausibility of knowledge claims from the predictability of individual statements in the context of scientific discourse (von Schomberg, 1993). For instance, epistemic discussions in science can be characterised as discussions triggered by controversies arising from the acquisition of new scientific knowledge, whereby scientific methods and the fundamental understanding of the nature of the subject matter often become subject to dispute themselves. In such cases, the authorities within scientific disciplines are mutually challenged in terms of which discipline can claim to offer the best solution to the problem in question. Recent examples of epistemic discussions in science include the debates between molecular biologists and ecologists on the risks of GMOs, the debate on climate change as either being induced by human interventions or as caused by natural cycles, and the debate between K. Eric Drexler and Richard Smalley on the plausibility of molecular nanotechnology and engineering.
Typically, epistemic discussions induce public debate long before any scientific closure on the issue is to be expected, which provides a significant challenge for developing reasonable public policy. Which group of scientists can we believe, and which should we endorse? Plausible epistemic approaches to the acquisition of knowledge in science are associated with problem definitions, which in turn frame (although often only implicitly) policy approaches. Unidentified and unacknowledged epistemic debate can result in unbalanced public policy: the "wait and see" character, until recently not uncommon, of nation states' public policies on climate change, or the concentration on the promises and blessings of all kinds of new technologies, provide examples whereby public policy takes sides prematurely in a scientific debate that is still unfolding.
It is therefore of the utmost importance to be able to identify such epistemic discourses and knowledge gaps within the various plausible options on the table, in order to have a more robust outlook on potential technological solutions and to keep open the possibility of alternative developments. Foresight projects can contribute to keeping alternative developments in sight for possible public policy responses, and to enabling democratic choices at early stages of technological development. The use of foresight projects can help us to overcome the often too narrowly conceived problem definitions scientists implicitly work with (Karinen and Guston, 2010). Social scientists could do some heuristic work by spelling out these problem definitions. For example, an imaginary nanotechnology-enabled product such as a "disease detector" (a device which would enable disease detection before symptoms emerge) is probably based on the problem definition that it is a medical imperative that any "disease" be identified, irrespective of available treatment and irrespective of whether the individual in question would define himself or herself as ill; it possibly also sidetracks preventive approaches based on adopting particular lifestyles. Moreover, the problem definitions scientists implicitly work with often correspond to a centuries-old, general standard list of fundamental human needs (which represent overarching problem definitions) to which new technologies will presumably provide answers in a given future: food and energy supply, human health, security and, for the last half-century, also "the environment". The case of recent technologies such as nanotechnology is in no way different, especially if one considers the public reasons for its funding.
Because of its enabling and diverse character, it would open a future with very efficient solar energy, nanorobots cleaning our blood vessels, water sanitation solutions for the "third world", etc.
The link between options, which may only look plausible at a particular stage of development in science and technology, and particular ways of social problem solving is a perplexing one. For instance, it seems obvious that our world food problem is principally not a technological problem but a political-economic distribution problem. Yet the increase of land use for biofuels may well cause a situation whereby a political-economic solution becomes increasingly less likely, if not impossible, before it ever arrives at a (world) policy level in a historical period in which this type of solution was still an option. Putting our attention, and with it our hopes and/or fears, primarily on an accelerated form of innovation by (nano)technological means is therefore irresponsible.
In order to help mitigate this, foresight projects could benefit from a prior analysis of potential relationships between types of plausible technological pathways and particular (social) problem definitions, rather than starting with "naïve product scenes," which are, as Selin (2009) outlines them, "short vignettes that describe in technical detail, much like technical sales literature, a nano-enabled product of the future," thereby methodologically ignoring the underlying problem definitions. It is also important to analyse the linkages between technological pathways and social problem definitions, and how they may well get the support of particular stakeholders or give a boost to particular ideologies within public policies. A process of "negotiating plausibility" eventually means reaching consensus on such problem definitions. Minimally, we could help to avoid continually funding developments which are later shown to be fictitious; more constructively, we could create deliberative forms of decision making on the problem definitions themselves and place them in a wider perspective.

Deliberative approaches to the policy making process
Public engagement projects such as Nanofutures [2] or the Nanosoc project [3] adopt both a foresight and a deliberative approach, which is to be welcomed. It is, however, important to note that the rationale for this approach is not limited to the normative case for a more democratic and transparent decision-making process. The deliberative foresight approach can also improve the quality of the decision-making process and help to identify knowledge gaps for which we would need to go back to science. Part of this potential "quality" gain gets lost when we limit deliberation to stakeholder or public deliberation, although these constitute necessary components. An immediate normative deficiency of stakeholder deliberation is that the involved actors do not necessarily represent the interests of non-included actors. That said, foresight exercises need to be progressively embedded in public policy in order to make a real qualitative step forward.
We cannot rely on stakeholder and/or public deliberation as such, since epistemic debate in science is immediately mirrored by stakeholder and public dissent in society. Policy makers are equally challenged by dissent in science as by dissent among stakeholders and the public. If we deal unreflexively with public debate induced by epistemic debate, an improper politicising effect inevitably occurs and translates into an irrational struggle concerning the "right" data and the "most trustworthy and authoritative scientists" in the political arena. Interest groups can pick and choose the experts who share their political objectives. A functional deliberative approach, apart from public and stakeholder deliberation, includes a deliberative extension of the science-policy interface. Such an interface institutionalises particular deliberation based on normative filters such as notions of proportionality and precaution (or, as we have in the EU, the requirement to implement the precautionary principle in policy frameworks), various forms of impact analysis (such as sustainability impact analysis, cost-benefit analysis, environmental policy impact analysis, etc.), the application of particular consensual norms or prioritisations of norms (for instance, that health and the environment take precedence over economic considerations) and the application of normative standards for product acceptability.
[2] A project conducted by the Center for Nanotechnology in Society at Arizona State University: http://cns.asu.edu/program/rtta3.htm
[3] 'Nanotechnologies for tomorrow's society' (Nanosoc): the Nanosoc research consortium seeks to understand and address these issues by calling for an early and informed dialogue between nanotechnology researchers, social scientists, technology assessment experts, industry representatives, policy makers, non-governmental agencies, and interested citizens in Flanders, Belgium. http://www.nanosoc.be/ResearchDesign-en.asp
These normative filters are themselves results of public and policy deliberation and enable consensual decision making at the public policy level. Although democratic societies have these deliberative filters in place, they need to be consciously applied and be subject to public monitoring. Von Schomberg (2007) sees a procedural gap, especially when it comes to the identification of knowledge gaps and the assessment of the quality of the available knowledge. He therefore argues for a deliberative form of "knowledge assessment" at the science-policy interface to allow for a qualified knowledge input.
Moreover, in the context of scientific uncertainty and the production of knowledge by a range of different actors, we need knowledge assessment mechanisms which will assess the quality of available knowledge for the policy process. We are currently forced to act upon developments [in terms of public policy] while at the same time being uncertain about the quality and comprehensiveness of the available scientific knowledge and the status of public consensus. A deliberative approach to the policy-making process would complement and connect with deliberative mechanisms outside policy. The outcomes of ongoing knowledge assessment (Von Schomberg, 2007; Von Schomberg, Guimaraes Pereira and Funtowicz, 2005) should feed into other assessment mechanisms and into deliberation on the acceptability of risk, the choice of regulatory frameworks, or the measures taken under those frameworks (see Figure 2). Knowledge assessment following the results of foresight exercises would then be an important tool in setting out arguments for the necessity and nature of future legislative actions. At the same time, we have to ensure that science policies are consistent with other public policies: the challenge is not only to focus on the conditions for good and credible science but to make knowledge production, dissemination and use a key factor for virtually all public policy goals. Both impact assessments and assessments of expected impacts of research should reflect this. In the following section, the necessary elements of a framework for responsible research and innovation which systematically takes up the quest for "the right impacts" will be described.

Responsible Research and Innovation
The following working definition for Responsible Research and Innovation is proposed:

Definition: Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).
There is a significant time lag (which can be several decades) between the occurrence of technical inventions (or planned promising research) and the eventual marketing of products resulting from RTD and innovation processes. The societal impacts of scientific and technological advances are difficult to predict. Even major technological advances, such as the use of the internet, and partial failures, such as the introduction of GMOs in Europe, were not anticipated by governing bodies. Early societal intervention in the research and innovation process can help to avoid technologies failing to embed in society and/or help ensure that their positive and negative impacts are better governed and exploited at a much earlier stage. Two interrelated dimensions can be identified: a product dimension, capturing products in terms of overarching and specific normative anchor points, and a process dimension, reflecting a deliberative democracy. The normative anchor points should be reflected in the product dimension. Products should be:
- (Ethically) acceptable: this refers to mandatory compliance with the fundamental values of the EU Charter on Fundamental Rights [the right to privacy, etc.] and the safety protection level set by the EU. This may sound obvious, but the practice of implementing ICT technologies has already demonstrated, in various cases, neglect of the fundamental right to privacy and data protection. It also refers to the "safety" of products in terms of acceptable risks. It goes without saying that ongoing risk assessment is part of the procedure towards acceptable products where safety issues are concerned. However, the issue of safety should be taken in a broader perspective. The United Kingdom's largest public funder of basic innovation research, the Engineering and Physical Sciences Research Council, has asked applicants to report the wider implications and potential risks (environmental, health, societal and ethical) of their research (Owen and Goldberg, 2010).
Often, the risks related to new technologies can neither be quantified, nor can a normative baseline of acceptability be assumed by scientists, as if such an assumed baseline would represent the baseline of societal acceptance.
- Sustainable: contributing to the EU's objective of sustainable development. The EU follows the 1997 UN "definition" of sustainable development, consisting of the economic, social and environmental dimensions in their mutual dependency. This overarching anchor point can be further materialised under the following overarching anchor point:
- Socially desirable: "socially desirable" captures the relevant, and more specific, normative anchor points of the Treaty on European Union, such as "quality of life", "equality between men and women", etc. It has to be noted that a systematic inclusion of these anchor points in product development and evaluation would clearly go beyond simple market profitability, although the latter could be a precondition for a product's viability in market-competitive economies. However, it would be consistent with the EU Treaty to promote such product development through financing RTD actions. In other words, at this point, Responsible Research and Innovation would not need any new policy guidelines, but would simply require a consistent application of the EU's fundamentals, reflected in the Treaty on European Union, to the research and innovation process. Perhaps it has been wrongly assumed that these values could not be considered in the context of research and innovation.

Product dimension:
Products should be evaluated and designed with a view to their normative anchor points: a high level of protection of the environment and human health, sustainability, and societal desirability.
Deployment of Methods:

Use of Technology Assessment and Technology Foresight.
These methods are deployed in order to anticipate positive and negative impacts or, whenever possible, to define desirable impacts of research and innovation, both in terms of impact on consumers and on communities. The setting of research priorities and their anticipated impacts needs to be subjected to a societal review. This implies broadening the review of research proposals beyond scientific excellence to include societal impacts [4]. Specific Technology Assessment methods also help to identify societally desirable products by addressing the normative anchor points throughout their development. Methodologies to further specify and "script" the future expected impacts of research should be developed (Den Boer, Rip and Speller, 2009). A good example exists in the field of synthetic biology: Bedau et al. (2009) have identified six key checkpoints in protocell development (i.e. cells produced from non-living components by means of synthetic biology) at which particular attention should be given to specific ethical, social and regulatory issues, and have made ten recommendations for responsible protocell science that are tied to the achievement of these checkpoints.
The advantage is that Technology Assessment and Technology Foresight can reduce the human cost of trial and error and take advantage of a societal learning process among stakeholders and technical innovators. They create a possibility for anticipatory governance. This should ultimately lead to products which are (more) societally robust.

Application of Precautionary Principle
The precautionary principle is embedded in EU law and applies especially within EU product authorisation procedures (e.g. REACH, the GMO directives, etc.). The precautionary principle works as an incentive to make safe and sustainable products and allows governmental bodies to intervene with risk management decisions (such as temporary licensing, case-by-case decision making, etc.) whenever necessary, in order to avoid negative impacts.
As discussed above, the responsible development of new technologies must be viewed in its historical context. Some governance principles have been inherited from previous cases: this is particularly notable for the application of the precautionary principle to the field of nanosciences and nanotechnologies. This principle is firmly embedded in European policy, and is enshrined in the 1992 Maastricht Treaty as one of the three principles upon which all environmental policy is based. It has been progressively applied to other fields of policy, including food safety, trade and research.
The principle runs through legislation that is applied to nanotechnologies, for example in the 'no data, no market' principle of the REACH regulation for chemical substances, the pre-market reviews required by the Novel Foods regulation, and the directive on the deliberate release of GMOs into the environment. More generally, within the context of the general principles and requirements of European food law, it is acknowledged that "scientific risk assessment alone cannot provide the full basis for risk management decisions" (European Commission, 2002), leaving open the possibility of risk management decision making partly based on ethical principles or particular consumer interests.
In the European Commission's Recommendation on a Code of Conduct for nanosciences and nanotechnologies research, the principle appears in the call for risk assessment before any public funding of research (a strategy currently applied in the 7th Framework Programme for research). Rather than stifling research and innovation, the precautionary principle acts within the Code of Conduct as a focus for action, in that it calls for funding for the development of risk methodologies, the execution of risk research, and the active identification of knowledge gaps. Under the Framework Programme, for example, an observatory has been funded to create a network for the communication and monitoring of risk.

Use of demonstration projects: moving from risk to innovation governance
These projects should bring together actors from industry, civil society and research to jointly define an implementation plan for the responsible development of a particular product within a specific research/innovation field, such as information and communication technology or nanotechnology. Responsible innovation should be materialised in terms of the research and innovation process as well as in terms of (product) outcomes. The advantage is that actors cannot focus exclusively on particular aspects (for instance, civil society organisations addressing only the risk aspects) but have to take a position on the innovation process as such. This allows the process to go beyond risk governance and move towards innovation governance. The company BASF, for example, has established a dialogue forum with civil society organisations and has also developed a code of conduct for the development of new products. 5

Process dimension
The challenge is to arrive at a more responsive, adaptive and integrated management of the innovation process. A multidisciplinary approach involving stakeholders and other interested parties should lead to an inclusive innovation process whereby technical innovators become responsive to societal needs and societal actors become co-responsible for the innovation process by providing constructive input towards defining societally desirable products.

Deployment of Codes of Conduct for Research and Innovation: organizing collective co-responsibility
Codes of Conduct, in contrast to regulatory interventions, allow a constructive steering of the innovation process. They enable the establishment of a proactive scientific community which identifies and reports risks and benefits to public authorities at an early stage. Codes of Conduct are particularly useful when risks are uncertain and when there is uncertain ground for legislative action (as in the case of nanotechnology). They also help to identify knowledge gaps and to direct research funds towards societal objectives.
Policy development treads a fine line: governments should not make the mistake of responding too early to a technology, thereby failing to adequately address its nature, or of acting too late, thereby missing the opportunity to intervene. A good governance approach, then, might be one which allows flexibility in responding to new developments. After a regulatory review in 2008, the European Commission came to the conclusion that there is no immediate need for new legislation on nanotechnology, and that adequate responses, especially with regard to risk assessment, can be developed by adapting existing legislation.
While, in the absence of a clear consensus on definitions, the preparation of new nano-specific measures will be difficult, and although there continues to be significant scientific uncertainty about the nature of the risks involved, good governance will have to go beyond policy making focused on legislative action. The power of governments is arguably limited by their dependence on the insights and cooperation of societal actors when it comes to the governance of new technologies: the development of a code of conduct, then, is one of their few options for intervening in a timely and responsible manner. The European Commission states in the second implementation report on the action plan for nanotechnologies that "its effective implementation requires an efficient structure and coordination, and regular consultation with the Member States and all stakeholders" (Commission of the European Communities, 2009). Similarly, legislators are dependent on scientists' proactive involvement in communicating possible risks of nanomaterials, and must steer clear of any legislative actions which might restrict scientific communication and reporting on risk. The ideal is a situation in which all the actors involved communicate and collaborate. The philosophy behind the European Commission's code of conduct, then, is precisely to support and promote active and inclusive governance and communication. It assigns responsibilities to actors beyond governments, and promotes these actors' active involvement against the backdrop of a set of basic and widely shared principles of governance and ethics. Through codes of conduct, governments can allocate tasks and roles to all actors involved in technological development, thereby organising collective responsibility for the field (Von Schomberg, 2007). Similarly, Mantovani et al. (2010) propose a governance plan which both makes use of existing governance structures and suggests new ones, and which proposes how they should relate to each other.
The European Commission recommendation on a Code of Conduct views Member States of the European Union as responsible actors, and invites them to use the Code as an instrument to encourage dialogue amongst "policy makers, researchers, industry, ethics committees, civil society organisations and society at large" (recommendation number 8 to Member States, cited on page 6 of the Commission's recommendation), as well as to share experiences and to review the Code at European level every two years. Such Codes of Conduct could in the future extend their scope beyond research and also address the innovation process. 6

Ensuring market accountability: Use of Standards, Certification and accreditation schemes and labels
The adoption of standards, and even of "definitions", is a fundamental requirement for responsible development. The still outstanding adoption of a definition of nanoparticles, for example, makes legislation and adequate labelling practices difficult, if not impossible. Busch (2010) notes that the use of standards, certifications and accreditations constitutes a new form of governance which has progressively replaced and transmuted positive law, as a product of the state, with its market equivalent. Although this form of governance is in need of improvement, we unavoidably have to make productive use of it, as the flood of products and processes coming onto the market will not be manageable through governmental bodies and agencies alone. Yet the perception and working practice of these standards matter. In 2005, it was claimed that the EU had forced local authorities to remove see-saws from children's playgrounds. No such EU measures were taken. Some standards had been set by the European Committee for Standardisation (CEN), a voluntary organisation made up of national standards bodies. CEN sought to limit the height from which children could fall, by specifying the maximum height for seats and stands, and by setting standards for hand supports and footrests. Manufacturers could choose to follow these standards, which carried the advantage of being able to export across Europe instead of having to apply for certification in each country (European Communities, 2006).
The area of data and privacy protection in the context of the use of ICT and security technologies is also open to forms of self-regulation and standard setting. Data controllers based at operators need to provide accountability, which can be understood as a form of verifiable responsibility (Guagnin, Hempel and Ilten, 2010). The involvement of third parties which can implement, at a minimum, a transparent verification practice will be crucial. In other fields, the whole certification process can be carried out by a third party. For example, in 1996 the World Wildlife Fund (WWF) and Unilever joined forces and collectively constructed a long-term programme for sustainable fishery. They founded an independent non-profit organisation to foster sustainable fishery worldwide, which applies "standards of sustainable fishing" that are monitored by independent certifying agencies.
Standards will also need to reflect particular ethical considerations and go well beyond mere technical safety issues. Currently, the development of new ISO standards for Nanofood might involve the inclusion of ethical standards (Forsberg, 2010).

Ethics as a "Design" factor of Technology and increasing social-ethical reflexivity in research practices
Ethics should not be seen only as a constraint on technological advance. Incorporating ethical principles in the design process of technology can lead to well-accepted technological advances. For instance, in Europe the employment of body imaging technology at airports has raised constitutional concerns in Germany, where it has been questioned whether its introduction is proportional to the objectives being pursued. The introduction of "smart meters" in Dutch homes, intended to allow the detection and optimisation of energy use, was rejected on privacy grounds, as it might have allowed third parties to monitor whether people were actually at home. These concerns could have been avoided if societal actors had been involved in the design of the technology early on. "Privacy by design" has become a good counter-example in the field of ICT, whereby technology is designed with a view to taking privacy into account as a design principle of the technology itself. Yet practising it is still rare. The European project ETICA 7 has recommended the introduction of specific governance structures for emerging (ICT) technologies.
Recently, "Midstream Modulation" (Fisher et al., 2006; Fisher, 2007) has emerged as a promising approach to increasing social-ethical reflexivity within research practices. In the form of laboratory engagement practices, social scientists and ethicists are embedded in the research teams of natural scientists. The embedded social scientist engages the natural scientists on the wider impact of their work while research is ongoing in the laboratory. Reports from these practices could feed into schemes on responsible research and innovation.

Deliberative mechanisms for allowing feedback with policymakers: devise models for responsible governance
Continuous feedback from information generated in Technology Assessment, Technology Foresight and demonstration projects to policy makers could allow for a productive innovation cycle.
In addition, as outlined above, "knowledge assessment" procedures should be developed to allow the quality of information within the policy process to be assessed, especially in areas in which scientific assessments contradict each other or in the case of serious knowledge gaps (the EC practises this partly with its impact assessments for legislative actions). Knowledge assessment could integrate the distinct cost-benefit analyses and environmental and sustainability impact assessments. In short, models of responsible governance should be devised which allocate roles of responsibility to all actors involved in the innovation process. Ideally, this should lead to a situation in which actors can resolve conflicts and go beyond their traditional roles, in which companies address only the benefits and non-governmental organisations only the risks. Co-responsibility implies here that actors have to become mutually responsive, with companies adopting a perspective that goes beyond immediate market competitiveness and NGOs reflecting on the constructive role of new technologies for sustainable product development. In this context, Technology Assessment, as practised, for example, by the Dutch Rathenau Institute, can take up the function of "seducing actors to get involved and act" (Van Est, 2010).

Public debate: Moderating "Policy Pull" and "Technology Push"
On-going public debate and the monitoring of public opinion are needed for the legitimacy of research funding and of particular scientific and technological advances. Continuous public platforms should replace one-off public engagement activities around a particular technology and, ideally, a link with the policy process should be established. The function of public debate in viable democracies includes enabling policy makers to exercise agenda and priority setting. Public debate, ideally, should have a moderating impact on the "technology push" and "policy pull" of new technologies. Technology push occurred in the European Union when operators tried to accomplish a fait accompli with the market introduction of genetically modified soya in the mid-1990s. Environmental groups, notably Greenpeace, which had not treated GMOs as an environmental concern prior to their introduction on the market, responded with outright rejection. Technology push as a product-acceptance strategy does not work. At the other extreme, we can notice a strong policy pull concerning the introduction of security technologies, such as the use of biometrics for passports and asylum applications and of whole body image technology ("body scanners") at airports. Politicians and policy makers have been eager to accept and promote the implementation of these technologies, sometimes beyond their technical feasibility. Impact assessments should include a proportionality analysis, identifying whether particular measures, or potential infringements of privacy and data protection, are proportional to the legitimate objectives for implementing security technologies. However, neither "technical safety" nor the determination of proportionality can be fully left to scientists or, in the case of proportionality, to legal experts. Both assume normative baselines for acceptable risks or acceptable infringements of privacy rights.
These baselines should be subject to public debate.