Computational thinking beyond STEM: an introduction to "moral machines" and programming decision making in the Ethics classroom

This paper describes a learning activity on computational thinking in the Ethics classroom with compulsory secondary school students (14-16 years). It is based on the assumption that computational thinking (or, better, "logical thinking") is applicable not only to STEM subjects but to any other field in education, and that it is particularly suited to decision making in moral dilemmas. This is carried out through the study of so-called "moral machines", using a game-based learning approach on self-driving vehicles and the need to program such cars to perform certain behaviours under extreme situations. Students will be asked to ground their reasoning in different ethical approaches and to develop a decision-making schema that could serve to program a machine to respond to those situations. Students will also have to deal with the uncertainty of reaching solutions that are debatable and not universally accepted. This is part of the difficulty, more ethical than technical, of providing machines with the ability to make decisions where there is no such thing as a "right" versus a "wrong" answer, and where potentially both (or all) of the possible actions will bring unwanted consequences.


INTRODUCTION
During the last decades, scholars and policy-makers have been (and still are) discussing the key skills and competencies to be taught, promoted and assessed at school, from kindergarten to university level, to meet the requirements of citizens and the labour market of the 21st century [32].
This discussion is still open. The OECD defined in the DeSeCo Project a framework of key competencies in three broad categories (using tools interactively, interacting in heterogeneous groups and acting autonomously) [29], which formed the underpinnings of PISA. The European Framework of key competences for lifelong learning [14] sets out eight key competences (communication in the mother tongue, communication in foreign languages, mathematical competence and basic competences in science and technology, digital competence, learning to learn, social and civic competences, sense of initiative and entrepreneurship, and cultural awareness and expression), which are also adopted in Spain through LOMCE, the current educational legislation, as defined in [26]. Other studies [4] categorize skills into personal and ethical aspects (living in the world), working (divided into tools for working and ways of working) and thinking (ways of thinking). Critical thinking, problem solving and decision making are among the skills in this last area. We highlight this particular skill of that framework because it deals specifically with the main approach of this work.
In addition to this general debate, in recent years the role of so-called STEM skills in education (Science, Technology, Engineering and Mathematics) has also been discussed. The growing demand for positions related to technology and scientific knowledge (particularly, but not only, Engineering) is not reflected in an increase in students enrolling in such university degrees. On the contrary, it seems that some countries like Spain could start having to recruit foreign engineers within the next ten years if this trend does not change drastically [23], which concerns both academic authorities and policy-makers to the extent that it can affect the productive development of the country. This is not only a Spanish issue, but applies also to other European countries [8].
Are we, then, failing by not steering our students towards the positions with more and better employment opportunities? According to some studies in the United States [35], the failure lies in the way instructors use and teach Computer Science nowadays: schools stepped up the use and integration of ICT, while they confused "the use of technology with teaching computer science as a core academic discipline within STEM fields". These studies recommend (among other measures) "to create a well-defined set of K-12 computer science standards based on algorithmic/computational thinking concepts".

The subsequent sections will explain the development of a decision-making learning activity in Ethics based on a computational thinking approach, as follows: firstly, we clarify the concepts of Computational Thinking (CT) and Machine Ethics (ME) through a brief literature review; then, the learning activities on moral machines with compulsory secondary education students are presented; finally, we compare this learning activity with other experiences and draw some conclusions.

BACKGROUND: COMPUTATIONAL THINKING AND MACHINE ETHICS
As Jeannette M. Wing clearly stated, "computational thinking is a way that humans, not computers, think" [37], because it represents the way humans solve problems and make decisions, when these are made rationally. Making decisions and solving problems are not confined to STEM or logical reasoning. On the contrary, these are strategies that we use in everyday life, consciously or unconsciously, and the purpose of this paper is to show how we can use such strategies in a structured and algorithmic (computational) way to approach, explain and try to resolve a particular class of moral dilemmas: those applied to a specific kind of artefact called moral machines, namely, machines programmed to make decisions with ethical implications. One example of such moral machines, whose "behaviour" is relatively easy to study and analyse in Ethics with students, is the self-driving car.

Computational Thinking and Ethics
In education, resistance to change by teachers and the establishment is almost as frequent as the powerful attraction that any new educational trend exerts on the most enthusiastic and innovative (which usually fuels even more, if possible, the resistance of the former). For instance, the educational reforms in Spain have been favouring the study of disciplines related to Economics and the insertion of economic and entrepreneurship content in the core curricula of every subject, as if the economic crisis were a consequence of the lack of these skills and, even more, as if the solution to this problem would depend on the economic literacy of young students. At the same time, these reforms drastically reduce the presence of subjects like Ethics and Citizenship Education, even though many authors (expert economists among them) point to moral and ethical issues as bearing at least as much responsibility as wrong economic decisions. Similarly, the scenario described above is leading to an increased presence of ICT and computing subjects in the curriculum, thereby assigning them the responsibility of enhancing ICT literacy and, incidentally, of making students aware of the relevance of STEM subjects. Perhaps it would be better to analyse in depth which skills lie behind this lack of STEM vocations and training, and to reflect on how the educational system could tackle them. Adding more computing subjects to the curriculum is useless without fostering creativity, abstraction, logical reasoning, etc.; and such skills could be taught through any subject, provided that it is done intentionally and with adequate pedagogical approaches.
Among the most important skills for any student (dealing with STEM or not) is logical or algorithmic thinking, now better known as computational thinking. This term emerged precisely within the discussion on digital literacy skills and the alleged relevance of coding as part of them. Coding is clearly related to programming tasks, but computational thinking has a broader scope, as García-Peñalvo indicates: "although coding is so interesting, it is more important to emphasize in the idea of computational thinking as the application of high level of abstraction and an algorithmic approach to solve any kind of problems" [19]. Consequently, a distinction must be made between computational thinking and coding or programming languages. Programming is one way to approach computational thinking [25].
Computational thinking, as defined by J. M. Wing, is a kind of analytical thinking [38] that "describes the mental activity in formulating a problem to admit a computational solution" [39]. As such, it is influencing research in disciplines from the sciences to the humanities [7], and can be taught through any subject at school. Another definition (perhaps more systematic and comprehensive), by Wing and other authors, states that "computational thinking is the thought processes involved in formulating problems and their solutions so that the solutions are represented in a form that can be effectively carried out by an information-processing agent" [13]. That being so, it is a serious mistake to constrain computational thinking to matters related to computing or programming, especially if we consider "enough" the current trend of inserting robotics everywhere at school. It is interesting and entertaining, of course, but it is not a panacea for increasing students' STEM skills.
Computational thinking is, of course, an emerging competence domain in primary and secondary school (K-12). Different case studies (for example in the UK, the Netherlands and the USA) show that great efforts are being made to improve this skill through computing subjects [40]. But other reviews of the state of the field [22] recognize that, despite innovative introductory computational experiences (with different environments and tools such as visual programming, robotics kits and video gaming), there are still difficulties in clearly defining computational competencies and in assessing their impact in terms of educational development. These studies call for new research in the learning sciences applied to this subject, including approaches like situated learning, distributed and embodied cognition, activity, interaction and discourse analysis, cognitive aspects of learning computing concepts, etc. In addition, and more importantly, it is still under-investigated how computing may become a medium for teaching other subjects; for example, how problem-solving skills practised in programming could be applied to other subjects.
But maybe the core of this last problem is being wrongly framed. The question is not how to export computational thinking from computing to other subjects. The best way to succeed in promoting computational thinking, and so increasing our students' logical reasoning, is to insert it as a pedagogical strategy for posing and solving problems both in science subjects and in the humanities, not only in computing subjects. This is where this proposal contributes from the field of Ethics. The development of a framework for decision-making processes applied to the analysis of moral dilemmas, given certain variables and specific conditions and according to certain moral principles, is an example of computational thinking applied to this subject: students can use it to reason without symbolic notation or programming languages, while following the same schemas of abstraction, problem solving and decision making as computational thinking.

Machine Ethics and Moral Machines
As Anderson and Anderson clearly explained [2], there is a relatively new concept concerned with adding an ethical dimension to machines that differs from the more commonly used computer ethics: machine ethics. Unlike computer ethics, which deals with human behaviour and the moral implications of the use of machines by humans, machine ethics concerns the moral behaviour of artificially intelligent artefacts.
This principle of machine ethics as providing artificially intelligent beings with ethical behaviour may seem, at first, pure science fiction. In fact, the early beginnings of machine ethics go back to the Three Laws of Robotics, also known as Asimov's Laws, which Isaac Asimov first set out in a short story, "Runaround", in 1942; they subsequently appeared in the Robot series and other novels. However, with the development of both robotics and artificial intelligence arises the problem of decision making by increasingly sophisticated machines that will soon be expected to act autonomously, and many of those decisions will have moral consequences.
Machine ethics has to be approached for at least three reasons: "First, there are ethical ramifications to what machines currently do and are projected to do in the future.
[…] Second, it could be argued that humans' fear of the possibility of autonomous intelligent machines stems from their concern about whether these machines will behave ethically, so the future of AI may be at stake.
[…] Finally, we believe that it's possible that research in machine ethics will advance the study of ethical theory" [2]. This is a problem with both engineering and philosophical dimensions and, frankly, the second is by far the more complex, judging by the ethical discussions in Western Philosophy over more than 2,500 years. In any case, every attempt to define a set of principles or a framework for the ethical behaviour of machines is a good starting point for dealing with this challenge, even if we have to speak metaphorically of "machine virtues" [11] or define guiding principles or a Prime Directive for a Human-Robot Interaction (HRI) code of ethics [28].
Some authors suggest that, before trying to teach machines how to make ethical decisions, we should first apply computational thinking to Ethics itself, seeking to automate the process of detecting and classifying values expressed in human communication [15]. Then, provided we are able, technically and philosophically, to turn machines into explicit moral reasoners, the debate is how to select the appropriate moral approach [1]. Top-down approaches rely on the idea that moral principles or theories may be used as rules for the selection of ethically appropriate actions; Kantian deontology and utilitarianism are ethical theories that fit this approach. Bottom-up approaches are not based on specific moral theories but on evolutionary and developmental principles, under the premise that AI should imitate child development (following Turing and, later, Piaget, Kohlberg, etc.) so that the computer learns. Finally, since neither of the previous models alone meets the criteria for designing an artificial entity as a moral agent, hybrid approaches are required, mixing top-down rules with the experience and learning of the machine.
Although this discussion is really interesting and deserves further and deeper analysis, it far exceeds the purpose of this work. In this context, we will focus on different moral approaches (mostly top-down, according to the previous classification), developing scenarios where students will be able to represent, respond to and discuss different possibilities regarding moral dilemmas posed to machines interacting with other machines and/or humans.

Why the autonomous vehicle?
Autonomous vehicles are becoming a reality. The first tests of autonomous cars took place in 2007, and some prototypes, such as Google's self-driving vehicle, have covered thousands of kilometres of real-road driving [5]. There are many studies on algorithms for navigation and on models of decision making and control systems for programming these cars [33]. But it is very likely that some of the main difficulties for the development of autonomous vehicles are more philosophical (i.e. ethical) and sociological than technical. In 2015, Stanford University held a seminar that brought together philosophers and engineers to discuss and implement ethical settings, for later development and testing of the code in simulations and even real vehicles [24]. And the debate in newspapers and popular science magazines shows headlines such as "Would you buy a car that should kill you to save other lives?" [30] or "Our driverless dilemma. When should your car be willing to kill you?" [21]. These highlight the fact that there is no ethical (nor sociological) consensus regarding the moral approach to be adopted by autonomous vehicles. Utilitarian regulations seem to be generally accepted by the participants in some studies (in the sense of minimizing the number of casualties on the road), but the same participants would prefer to buy autonomous vehicles that protect them at all costs, regardless of the utilitarian consequences [5]. Utilitarian moral criteria might also be preferred by carmakers, especially if by "utility" we mean "less cost"; this applies equally to insurance companies and even governments. But other approaches (deontological, or even random behaviour in certain cases, to mention only some top-down models) are also possible and ethically defensible. To sum up, "the problem, it seems, is more philosophical than technical. Before we can put our values into machines, we have to figure out how to make our values clear and consistent" [21].
So the autonomous vehicle provides us with a testing ground for computational thinking on Ethics.
The other reason to focus on self-driving vehicles for this purpose is that there is a significant number of studies and simulators that allow us to visualize and formalize moral dilemmas. Autonomous vehicle ethical behaviours are usually represented as trajectories and movement, which is very useful for clearly analysing causes and consequences and for making decisions based on certain ethical principles. Some of these scenarios have been conceived with specifically ethical approaches [20], even offering the possibility of analysing and simulating different outcomes of the same scenario under different ethical principles or behaviours [10]. Others have been defined with a sociological approach, allowing users to decide which outcome is the best option and to compare their result with the answers given by other users [27].
The study of such different approaches to decision making will serve as a warm-up exercise for the training action described in the next section.

TRAINING ACTION
The aim of this paper is to describe a set of activities on computational thinking applied to the Ethics subject with compulsory secondary school students (14-16 years). Students may have taken the Ethics course previously, since in the current education system this subject is optional throughout compulsory secondary school; in that case, they should be familiar with moral dilemmas and problem solving in ethical contexts. Either way, students at this stage do not possess skills in symbolic logic (to be studied later, in Philosophy), so this activity will also serve to introduce logical reasoning, although without formal language structures.
In the next subsections we will present the structure and outline of the learning activity, but first we will devote a few lines to explaining why the existing ethical decision-making frameworks are difficult to use in this context.

Ethical decision making model
Although it is not the goal of this work, a brief literature review of ethical decision-making models and frameworks has been carried out in order to analyse their feasibility in this context. Two different types of frameworks have been studied: those related to computational models of decision making in AI (or applied to moral machines), and those coming directly from philosophical enquiry. Both had to be discarded for this purpose, as shown below.
The models usually applied to artificial intelligence programming, like BDI and LIDA (to mention just a couple of examples), are too complicated to meet the requirements of a simple activity for young students. The Belief-Desire-Intention model of human practical reasoning (commonly known as the BDI model), by the philosopher Michael Bratman [6], has been used as the basis for the BDI software model for programming intelligent agents, so it seems initially interesting for computational thinking on decision making in Ethics because of the possibility of adding moral principles to artificial agents. LIDA is a cognitive architecture that can encompass moral decision making, both for humans and for artificial agents [34]. Its cognitive process starts from a bottom-up collection of sensory data, values and experiences, and then acts through top-down processes to make sense of its current situation. LIDA, BDI and other cognitive architectures (such as GWT, Clarion, ACT-R, etc.), even if they are likely able to make "real" ethical decisions, do not satisfy the conditions required for the simple decision-making processes of this context.
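To see why such architectures exceed what a classroom activity needs, consider even a drastically simplified sketch of a BDI-style deliberation cycle. The following Python fragment is purely illustrative (the names and data layout are our own, not the API of any actual BDI framework): the agent revises its beliefs from a percept, filters its desires by their preconditions, and commits to the highest-priority option as its intention.

```python
# Illustrative BDI-style deliberation step (hypothetical names, not a real
# BDI library): belief revision -> option generation -> deliberation.

def bdi_step(beliefs, desires, percept):
    beliefs = dict(beliefs, **percept)  # belief revision: merge new percept
    # option generation: desires whose preconditions hold in current beliefs
    options = [d for d in desires if d["precondition"](beliefs)]
    if not options:
        return beliefs, None
    # deliberation: commit to the highest-priority option as the intention
    intention = max(options, key=lambda d: d["priority"])
    return beliefs, intention["action"]

desires = [
    {"precondition": lambda b: b.get("obstacle"), "priority": 2, "action": "brake"},
    {"precondition": lambda b: True,              "priority": 1, "action": "cruise"},
]

_, action = bdi_step({}, desires, {"obstacle": True})  # -> "brake"
```

Even this toy loop already requires functions, priorities and state, which illustrates why full cognitive architectures are unsuitable for 14-16-year-old students without programming background.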
With regard to the models for the analysis of ethical dilemmas, there is a comprehensive range, from deontological to utilitarian, humanistic and other approaches. These models, applied in Medicine and Nursing, Marketing, Economics, Psychology, etc., are not appropriate here because of their lack of "computational structure". They allow a rational analysis of pros and cons and the study of the implications of different decisions, but they are still too complicated for this kind of learning activity.
What kind of ethical decision-making model should we use, then? First, we need to focus on top-down approaches to decision making, since the purpose of this activity is to understand the implications and consequences of making decisions through different ethical principles. Second, we need to "reduce" each ethical principle to a rule (or a small set of rules) in order to perform simple decisions according to it. Third, the model should be applicable to scenarios where two or more options are eligible in unavoidable crashes. We therefore define a set of ethical approaches to guide decision-making processes, as follows:

A. Consequentialist approaches
A1. Utilitarianism: "the best action will be that which provides the most good or does the least harm".
A2. Egoism (self-protection): "the best action will be that which protects me and those who are with me".
A3. Profit-based ethics: "the best action will be that which entails the least economic cost".

B. Non-consequentialist approaches
B1. Deontology: "the best action will be that which protects those who act according to the rules".
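Each of these approaches reduces to selecting, among the possible outcomes of an unavoidable crash, the one minimising a different quantity. A hedged Python sketch makes the "computational structure" explicit; the attribute names (total_harm, passenger_harm, economic_cost, rule_abiders_harmed) and the sample outcomes are our own illustration, not part of any published framework:

```python
# Each candidate outcome of an unavoidable crash is described by a few
# illustrative attributes; each ethical approach picks the outcome that
# minimises a different one of them.

APPROACHES = {
    "utilitarianism": lambda o: o["total_harm"],           # A1: least overall harm
    "egoism":         lambda o: o["passenger_harm"],       # A2: protect occupants
    "profit":         lambda o: o["economic_cost"],        # A3: least economic cost
    "deontology":     lambda o: o["rule_abiders_harmed"],  # B1: protect rule-followers
}

def decide(outcomes, approach):
    """Return the name of the outcome preferred under a given approach."""
    return min(outcomes, key=APPROACHES[approach])["name"]

outcomes = [
    {"name": "hit barrier",    "total_harm": 1, "passenger_harm": 1,
     "economic_cost": 3, "rule_abiders_harmed": 0},
    {"name": "hit jaywalkers", "total_harm": 2, "passenger_harm": 0,
     "economic_cost": 1, "rule_abiders_harmed": 0},
]

print(decide(outcomes, "utilitarianism"))  # -> hit barrier
print(decide(outcomes, "egoism"))          # -> hit jaywalkers
```

The point of the sketch is that different approaches can legitimately select different outcomes for the same scenario, which is exactly the disagreement students will explore in the sessions below.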

Learning activity: ethical decision making during unavoidable self-driving car crashes
To better describe the learning activities to be carried out in the Ethics classroom, the following template is proposed. This way, the learning plan can be handled separately from the rest of this work.

Title
Introduction to moral machines and decision making in the Ethics classroom

Overview
The aim of this activity is to train compulsory secondary school students in computational thinking applied to Ethics. To do so, the lesson plan focuses on the ethical implications of programming self-driving vehicles to perform actions with moral consequences when a crash is unavoidable and damage and harm will certainly be caused. Students should be able to analyse, represent and study possible outcomes, reflect on the ethical behaviours of machines, and "program" such behaviours as responses to certain inputs, according to given moral principles.
Skills: computational thinking, decision making, logical reasoning, ethical discussion.

Aim of the lesson
This lesson is intended to develop in young people skills related to computational thinking, logical reasoning and algorithmic decision making. This will be done through the analysis and study of different ethical approaches and the formalisation of their main principles in order to theoretically program machines to perform ethical behaviours. In addition to these "computational" skills, students will become aware of the relevance of Ethics and ethical approaches not only as guiding principles for our daily decisions, but also for discussing and trying to reach a consensus on some socially accepted moral principles to decide how intelligent agents should behave, in their interactions with humans, with other machines and with the environment.

Lesson plan
The lesson plan is structured in 5 sessions, as follows:

Session 1. Introduction and Moral Machine platform.
The first session will be devoted to introducing students to the challenges involved in programming self-driving vehicles, due not only to technical issues but also (or perhaps mainly) to ethical difficulties.
The first activity will consist of asking students to read a newspaper article [30] and discussing in groups the differences between autonomous vehicles and present-day cars, as well as the ethical consequences of letting cars decide by themselves: who is responsible for the damage caused by the car? How would you feel knowing that, in certain circumstances, the car would be willing to kill you instead of killing others?
The second activity will let students decide how a car should behave under given conditions. To do so, they will be asked to visit the MIT Media Lab portal Moral Machine, http://moralmachine.mit.edu (see Figure 1), where they can browse, design and judge different scenarios, and then compare their responses with those of other people.

Session 2. Analysis of pre-defined scenarios
During the second session the instructor will split the classroom into groups and provide students with some pre-designed scenarios (as shown in Figure 2, for example). They will then analyse, discuss and decide how the car should behave according to some of the ethical approaches studied in previous lessons, suitably re-defined to fit these particular actions.
After that, each group will explain to the rest of their classmates the scenario they were given, as well as the possible outcomes according to the different ethical approaches. Finally, the whole class will discuss each scenario, trying to reach an agreement, where possible, on the outcome and ethical approach offering the "best" solution.

Session 3. Student-designed scenarios (i)
During the third session students will be asked to develop, in groups, their own scenarios for machine ethical decision making by retrieving data from the matrix in Table 1 and planning the ethical decision-making process according to the Autonomous Vehicle - Decision Making in Ethics Classroom (AV-DMEC) framework shown in Figure 3. They will start by selecting the number and nature of the agents involved in the scenario. Then, they will endow the agents with properties that complete and clarify the actions to be analysed. Later on, they will sketch the scene using a free tool like AccidentSketch.com, http://draw.accidentsketch.com. They should also describe the scene with logical propositions using connectors and natural language (for example: Car A is cutting off the AV; the AV cannot stop in time to avoid a collision AND will run over the motorcyclist OR crash into a barrier OR run over two pedestrians on the sidewalk).
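The structure that students build by hand in this session can be mirrored in a small data structure. The following Python sketch is only an illustration of the idea (the field names and sample agents are our own, not prescribed by the AV-DMEC framework): agents with properties, a conjunction of conditions, and a disjunction of mutually exclusive outcomes.

```python
# Illustrative encoding of a student-designed scenario: agents with
# properties, plus the dilemma expressed as an AND of conditions
# followed by an OR of possible outcomes (field names are hypothetical).

scenario = {
    "agents": [
        {"id": "AV", "kind": "autonomous vehicle", "occupants": 2},
        {"id": "M1", "kind": "motorcyclist", "helmet": True},
        {"id": "P1", "kind": "pedestrian", "on_sidewalk": True},
        {"id": "P2", "kind": "pedestrian", "on_sidewalk": True},
    ],
    "conditions": ["Car A is cutting off the AV",
                   "AV cannot stop in time to avoid a collision"],
    "outcomes":   ["run over M1",
                   "crash into a barrier",
                   "run over P1 and P2"],
}

def as_proposition(s):
    """Render the scenario as one natural-language logical proposition."""
    return " AND ".join(s["conditions"]) + " -> " + " OR ".join(s["outcomes"])

print(as_proposition(scenario))
```

Formalising the scenario this way keeps the sketch and the logical proposition in step, which is precisely the verification students perform at the start of Session 4.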

Session 4. Student-designed scenarios (ii)
After verifying that the sketch and the logical propositions match and clearly describe the scenario, students will be asked to analyse the possible outcomes under the different ethical approaches. After studying and discussing these approaches, they should select the one that yields the most desirable outcome and explain their reasons.

Session 5. Discussion and feedback
The last session will be devoted to discussing some issues arising from the results of sessions 3 and 4. For example: is any ethical approach generally preferable? Are there scenarios where it is impossible to determine a "best" outcome? Are there scenarios where none of the ethical approaches seems to provide a reasonable solution?
In order to introduce students to complex computational thinking and ethical decision making processes, they will be invited to watch the video Ethical autonomous vehicles, https://vimeo.com/85939744, where Matthieu Cherubini illustrates two case studies (scenarios) under three different ethical approaches. Students will be invited to analyse the ethical algorithms and the formal representation, as shown in http://research.mchrbn.net/eav.
Finally, students will be asked to take part in a game-based learning competition assessing what they have learned, using a Kahoot test (http://kahoot.it) prepared by the instructor.

Assessment
The aim of evaluating the learning experience is twofold. On the one hand, we assess students' performance in understanding, defining (both visually and linguistically) and making ethical decisions on this topic. To do so, the instructor will take note of the experiences, outputs, discussions and argumentation of the students, and will guide them towards better and more accurate logical and ethical reasoning when these are not being carried out properly. On the other hand, it is necessary to assess students' satisfaction with the learning plan itself: how they felt, and to what extent they improved their capacity to logically analyse moral dilemmas and to apply different ethical approaches to the scenarios they have created. This will be done through a set of questionnaires developed with Kahoot (http://kahoot.it) and a final group reflection, followed by a short individual essay handed in to the instructor. In this way the learning activity itself can be evaluated and improved in subsequent iterations.

DISCUSSION
The models currently available to describe and formalize decision-making processes for moral machines are still too complicated to be applied in the Ethics classroom with young students. Such models focus on how the machine "acts" to gather information from the environment, how its artificial intelligence processes this information, and how it is capable of performing programmed behaviours (even with ethical approaches). Our learning activity, in contrast, aims to reflect on the morality of the decision itself rather than explaining how the machine should operate to act ethically. We are mainly interested in what the moral machine should do and why a self-driving car should behave in a certain way by making certain decisions, not in how it will carry out those tasks, because we play the role of the philosopher, not the computer scientist. Students therefore need to be aware of the relevance of formalizing moral dilemmas to be resolved by such moral machines, and to analyse the possible outcomes under certain ethical approaches.
Platforms like Moral Machine by the MIT Media Lab provide us with a very interesting tool to browse and judge pre-defined scenarios, or even to design new situations on this issue. Nevertheless, the Moral Machine platform was designed to analyse user attitudes towards the moral decisions of self-driving cars, so ethical implications are discovered a posteriori, as a set of user values compared with other users' decisions. For the purpose of this learning experience, by contrast, we need to analyse and make ethical decisions starting from a priori ethical approaches, in order to compare the different outcomes that would be preferred depending on which moral algorithm forms the basis of decision making. Besides, the Moral Machine tool only allows us to judge and define scenarios with two possible outcomes, which is fine for posing a dilemma, but the experience becomes more interesting with scenarios designed for three or more outcomes.
This proposal reaches neither the technical complexity of a decision-making framework applicable to artificial intelligence nor (yet) the simplicity of a tool where users can add variables, define outcomes and judge them according to different ethical approaches. But it has the virtue of allowing students to define a potentially infinite set of scenarios from a finite set of variables, to analyse possible outcomes by sketching and formalising these scenarios in natural language with logical structures and connectors, and to compare and discuss the consequences of such outcomes according to ethical approaches well known to them. Students thus put into practice computational thinking skills, decision-making processes, philosophical analysis of moral behaviours and other soft skills: teamwork, oral expression, debating, etc.
This lesson plan should consequently let students become aware of the relevance of ethical decisions and of the extent to which the development of artificial intelligence requires the definition of a set of ethical principles (on which both philosophers and scientists are far from agreeing) or an ethical framework for decision making by so-called moral machines. The computational thinking approach to Ethics will also help students understand the need to combine ethical reflection and computing to formalise algorithms capable of performing at least "weak" ethical decisions, as long as technology and science remain unable to develop "real" moral machines.

CONCLUSIONS AND FUTURE WORK
Among all the branches of Philosophy, Ethics is undoubtedly one of the most challenging today, and its implications are increasingly relevant. Apart from the intrinsic importance of teaching students to become ethically skilled human beings and citizens, ethical approaches permeate the sciences and even determine their future development, as this paper has shown in the case of moral machines and artificial intelligence.
Moreover, living in the 21st century implies the development of a set of skills, still under discussion, among which computational thinking, problem solving, decision making and STEM competencies play a relevant role. Compulsory school should tackle these skills by developing training actions that help students acquire the ability to apply school curricula within these approaches. Computational thinking is not only a matter of computer science, just as ethical behaviour is not only a matter of philosophical studies.
The development of training actions with a cross-curricular approach, covering subjects such as Computer Science, Ethics, Mathematics, Arts & Crafts, Language, etc., will help us prepare our students to cope with their own future as workers and citizens.
This training action allows students to understand and experience the implications of moral decisions, while putting into practice philosophical skills such as logical reasoning and the resolution of moral dilemmas. Besides, by connecting the sciences and the humanities, they can begin to understand human knowledge as a continuum instead of perceiving it as a set of "watertight compartments".
The set of activities introduces students to the visual and logical representation of the ethical implications of machine behaviours, as machines are expected to become "moral machines" in the near future. In the meantime, students will be able to analyse the implications of programming decision-making frameworks and algorithms into these machines and to discuss their consequences. Future directions of this study will move towards involving Computer Science subjects, in order to continue this work in the Ethics classroom by representing different moral scenarios with the programming languages usually studied in those subjects. These scenarios could readily be programmed in Arduino, Scratch or any other language that could also be deployed on virtual or real robots, in order both to improve computational thinking abilities and to check the algorithms and decision-making processes in "real" scenarios, thus testing the meaning of roboethics with real robots.
This project has been funded with support from the European Commission. This communication reflects the views only of the author, and the Commission cannot be held responsible for any use which may be made of the information contained therein.