Approaching the Data Protection Impact Assessment as a legal methodology to evaluate the degree of privacy by design achieved in technological proposals. A special reference to Identity Management systems

The digitalization of societies and ongoing innovation are driving the rapid introduction of new technologies in different sectors. However, the development of technology represents a challenge, as it involves technical, legal, economic and social aspects that have to be considered from its conception or design. The aim of this paper is to offer an adaptation of an existing legal methodology, the Data Protection Impact Assessment, as a legal obligation to evaluate technological proposals and ensure compliance with privacy by design requirements. For that purpose, we refer to the specific case of Identity Management technologies. We introduce the main challenges in the sector of Digital Identity Management, as well as the importance of covering the "architecture" and "user" sides in the development of safer technologies, by citing concrete examples. Finally, in order to provide a more practical view of the methodology for adapting the Data Protection Impact Assessment, we refer to the work carried out in the research project OLYMPUS to evaluate its privacy implications. Through this example, the paper offers a specific methodology directly reusable for the study of technological proposals in IdM and adaptable to any other sector.


INTRODUCTION
The identification of individuals in online environments plays a key role in guaranteeing safe and trustworthy online activities. Digital identification and subsequent authentication have become the vehicle for the development of online environments and services where it is necessary to ascertain that the right individual is behind the "screen" and therefore has the corresponding right to perform a specific action or process. However, the digital identification and authentication of individuals is a complex task and poses a set of risks that threaten the basis of free and democratic societies, in particular due to the surveillance practices developed by a reduced number of Identity Providers (hereafter, IdP) [1].
On the other hand, the change in habits and means of communication has led identity theft to become one of the most widespread forms of cybercrime [2][3][4]. Identity theft is a specific type of cybercrime linked to digital identity, and it can be defined in different ways. Following the definition provided by the U.S. Department of Justice, "identity theft and identity fraud are terms used to refer to all types of crime in which someone wrongfully obtains and uses another person's personal data in some way that involves fraud or deception, typically for economic gain" [5]. In some cases, identity theft is included as a category of identity fraud, or is considered a precursor of identity fraud [6]. Consequently, identity theft does not appear as a stand-alone crime, and its consequences can affect the victims for a long time [7]. Furthermore, as with surveillance practices, in the case of identity theft the excessive "centralization" of Identity Management (hereafter, IdM) in a single IdP, or in a reduced number of IdPs in the case of federated IdM, favours targeted attacks against them, as they represent a single point of failure.
The fight against surveillance practices and identity theft requires a multidisciplinary approach involving the collaboration of technical, legal and social experts in order to assess the problem, propose solutions and, in a final stage, validate them. To limit these practices, new forms of safe and privacy-preserving IdM will have to be designed, and regulations will have to support the adoption of such technologies.
With regard to the prevention of identity theft, the adoption of safe technologies is essential. Nevertheless, safe technologies do not refer only to complex techniques or encryption processes; they also imply concepts comprehensible to end users, who will likewise be decisive in the fight against this specific form of cybercrime. The degree of privacy achieved in a technology's design, and its balance with security and usability requirements, can be studied by means of different methodologies. In this sense, the methodology par excellence for the study of privacy implications is the Data Protection Impact Assessment (hereafter, DPIA), envisaged, among others, by Article 35 of the General Data Protection Regulation (hereafter, GDPR). In addition, methodologies such as risk analysis (usually subsumed in the DPIA) and new approaches, such as the concept of privacy engineering adopted by the Spanish Data Protection Agency, are highly relevant.
The aim of this paper is to offer a legal methodology for the study of the degree of privacy by design achieved in technological proposals prior to their implementation. For that purpose, we will refer to the two "sides" or "dimensions" that must be covered in the development of privacy-preserving technologies, including specific examples in the sector of IdM. Then, a comprehensive explanation of our proposal for adapting the DPIA to the above-mentioned scenario (technologies that have not yet been implemented) will be provided, specifying the phases, steps and special considerations that must be taken into account to facilitate subsequent analyses in context-specific scenarios.

PRIVACY-PRESERVING TECHNOLOGIES IN IDM
IdM technologies are evolving through the development of new architectures and the implementation of encryption techniques to face the challenges referred to above and to ensure better compliance with privacy by design requirements. Privacy by design is envisaged as a legal obligation in Article 25.1 of the GDPR. To determine the content of this obligation we should refer to Article 5 of the same legal text, which establishes the principles that every data processing activity must fulfil (i.e., lawfulness, data minimization and security in the data processing) and that must also be taken into account in the design of a technology. Throughout this process, technical experts have noted the need not only to design resilient architectures but also to protect their users [8]. In this sense, we distinguish two directions in which IdM technologies must be improved, with innovations in each complementing and supporting the other. These dimensions refer, on the one hand, to the architecture or "internal side" and, on the other hand, to the user or "external side". By the "internal side" we mean protection against attacks that aim to compromise the technology itself for unlawful purposes (e.g., attacks directed against databases). Conversely, by the "external side" we mean attacks directed against the user, which do not aim to compromise the technology but to trick the natural person into voluntarily providing his/her data (e.g., identification data, passwords...).
The design of safe and privacy-respectful technologies has been one of the main objectives in the improvement of IdM, and important research has been carried out in the area [9-11]. By way of example, in this paper we refer to the European Union research project OLYMPUS [12]. OLYMPUS is an IdM system developed in the framework of delegated/federated IdM. More specifically, it introduces three main innovations [13]:
a) It distributes the task of the IdP among several partial IdPs (which together form the virtual IdP, hereafter vIdP) by means of novel cryptographic approaches that allow password "fragmentation".
b) It envisages the possibility of offline deployment through Privacy Attribute-Based Credentials (hereafter, p-ABCs), a cryptographic technique.
c) It allows the redistribution of password fragments among the partial IdPs at established intervals.
The result of these innovations translates into two main possibilities. On the one hand, the possibility of preventing surveillance practices through the implementation of p-ABCs in offline scenarios. In other words, the user will be able to request the issuance of a credential containing a set of attributes (e.g., name, date of birth...), store it in his/her digital wallet and employ it to authenticate at a later stage. Through this process, the connection between the IdP and the service provider is "broken".
On the other hand, OLYMPUS innovations improve the prevention of identity theft. Indeed, the distributed architecture introduced by OLYMPUS hampers different types of identity theft. Regarding token-related identity theft, OLYMPUS requires the collaboration of all the partial IdPs that make up the vIdP for token issuance, forcing the attacker to gain control over the entire structure, as the user's password (necessary for the token issuance) is disaggregated across it. In addition, against traditional identity theft attacks, or those relating to the discovery of a fragment of the password, OLYMPUS allows the redistribution of the password segments through the mechanism of "key-resharing", confronting the potential attacker with countless scenarios.
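Although the actual OLYMPUS protocols rely on more sophisticated cryptography [13], the intuition behind password fragmentation and key-resharing can be illustrated with a minimal sketch based on simple XOR secret sharing. The function names and the sharing scheme below are our own illustrative assumptions, not the project's implementation:

```python
import os

def split_secret(secret: bytes, n: int) -> list[bytes]:
    """Split a secret into n XOR shares; all n are needed to recover it."""
    shares = [os.urandom(len(secret)) for _ in range(n - 1)]
    last = secret
    for share in shares:
        last = bytes(a ^ b for a, b in zip(last, share))
    return shares + [last]

def recover_secret(shares: list[bytes]) -> bytes:
    """XOR all shares together to reconstruct the secret."""
    out = bytes(len(shares[0]))
    for share in shares:
        out = bytes(a ^ b for a, b in zip(out, share))
    return out

def reshare(shares: list[bytes]) -> list[bytes]:
    """'Key-resharing': issue fresh shares of the same secret. (A real
    protocol reshares without ever reconstructing the secret at a single
    point; we reconstruct here only for brevity.)"""
    return split_secret(recover_secret(shares), len(shares))

secret = b"correct horse battery staple"
fragments = split_secret(secret, n=3)       # one fragment per partial IdP
assert recover_secret(fragments) == secret  # all partial IdPs must cooperate
fresh = reshare(fragments)                  # periodic redistribution
# Mixing pre- and post-refresh fragments fails (except with negligible
# probability), so fragments captured before a refresh go stale.
assert recover_secret([fragments[0], fresh[1], fresh[2]]) != secret
```

The sketch shows why a single compromised partial IdP learns nothing: each fragment is indistinguishable from random noise, and resharing invalidates anything an attacker collected in a previous period.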
Nevertheless, measures to prevent identity theft attacks do not only require the development of resilient architectures; they must also focus attention on the user. As stated in the introduction, identity theft has become one of the most widespread forms of cybercrime and the means for its commission have evolved, with social engineering being one of the most common techniques used for that purpose [14]. Among technical proposals that focus their attention on the user, we could refer to the Expanded Password System (hereafter, EPS).
The EPS is an authentication method that introduces the possibility of converting text passwords into images. Authentication takes place by selecting a set of images that only the user is able to select correctly, since these images are associated with his/her autobiographical or episodic memories. Nevertheless, this collection of images is presented exclusively to the user, since software translates the images into text passwords, which are what is finally stored [15][16]. In such a scenario it is more difficult to steal the user's password, and the user will easily detect fake login attempts, since the images he/she selected would not appear. Indeed, as Hitoshi Kokumai notes, "a would-be phisher can easily copy the login screen and show it to a target user whose User ID is known. But the phisher does not know which image was registered by the user as the credential of the genuine login server as against the other images, whereas both the user and the genuine login server know which one was registered" [17]. Consequently, if the user is given a password box, or the choices do not include the registered images, the user knows immediately that it is a phishing attempt.
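A minimal sketch of this idea, assuming the client deterministically derives a text password from the ordered selection of images; the identifiers and the derivation function are hypothetical, not those of the EPS implementation:

```python
import hashlib

def images_to_password(selected_image_ids: list[str]) -> str:
    """Translate an ordered selection of image identifiers into a text
    password; only this derived text is ever stored server-side."""
    material = "|".join(selected_image_ids).encode()
    return hashlib.sha256(material).hexdigest()

# Registration: the user picks images tied to episodic memories.
registered = images_to_password(["dog_2009", "lake_trip", "red_bike"])

# Login: the same selection yields the same derived password...
assert images_to_password(["dog_2009", "lake_trip", "red_bike"]) == registered
# ...while a phishing page, not knowing which images were registered,
# cannot present them and so cannot elicit a valid selection.
assert images_to_password(["dog_2009", "red_bike", "lake_trip"]) != registered
```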
Besides, there exist other mechanisms, such as Firefox Monitor or Google's leaked-password checker, to detect privacy violations at an early stage and minimize the possible harm of online risks [18]. Data Protection Authorities and public/private intermediaries carry out awareness-raising work, encouraging individuals to choose strong passwords [19] and to understand social engineering techniques so as to avoid becoming victims of cybercrime. Nevertheless, these have been just a few examples of how to implement privacy-preserving mechanisms that improve IdM to face the challenges it raises nowadays. All these technical "tools" have required a prior evaluation to determine their suitability and their compliance with regulations. From our side, as legal experts, the challenge is how to determine whether a specific technology fulfils privacy and data protection requirements prior to its implementation. For that purpose, we suggest the adaptation of the DPIA methodology. The DPIA is not a new tool, but the approach given to it in this paper is an application that differs from its traditional uses. Indeed, in the following section we propose using the DPIA methodology to perform a preliminary assessment of technological proposals prior to their implementation, in order to ease further context-based studies and, above all, ensure compliance with privacy by design requirements.

THE DPIA AS A METHODOLOGY TO ENSURE GDPR COMPLIANCE IN TECHNOLOGICAL PROPOSALS
The importance of considering privacy as part of a system's development process is widely accepted as essential to the development of privacy-aware technologies [20]. In addition, there exists an effort to standardize privacy-friendly technological solutions. In this scenario, Privacy Enhancing Technologies (PETs), defined in the Communication of the European Commission to the European Parliament and the Council as "a coherent system of ICT measures that protects privacy by eliminating or reducing personal data or by preventing unnecessary and/or undesired processing of personal data, all without losing the functionality of the information system" [21], acquire extraordinary importance. Nevertheless, the development of privacy-respectful technologies is a complex task. Although technical experts handle some privacy concepts, they tend to omit essential legal requirements that could be decisive in the final evaluation of a technology or that, in some cases, imply an inadequate approach to the technology's design.
The Spanish Data Protection Agency has referred to the concept of privacy engineering [22] as the process for the implementation of privacy throughout the lifecycle of information systems where the processing of personal data takes place. In this process, we distinguish a set of phases, described in Figure 1. The first step takes place even before the design of the technology has started: it is critical that the properties and functionalities a system must fulfill in terms of privacy are clearly stated beforehand. These properties must refer to the minimal requirements that would make the implementation of the technology possible. The second step consists of the design of the architecture and the implementation of the elements in the system that cover the privacy requirements previously defined. Finally, it must be confirmed whether the privacy requirements have been correctly implemented and satisfy expectations and needs. In the final evaluation, or during the technology's design, modifications or safeguards can be proposed for integration into the final architecture design. Consequently, these prior evaluations, or in other words collaborative designs, can bring important benefits and enhance a technology's chances of success.
Nevertheless, developing a methodology for the evaluation of technological proposals prior to their implementation represents a challenge, as it involves not only technical but also very relevant legal concepts. The DPIA is an instrument specifically envisaged for the assessment of data processing operations in those cases where, as stated in Recital 84 of the GDPR, "processing operations are likely to result in a high risk to the rights and freedoms of natural persons" [23], and, as Article 35 adds, "where a type of processing in particular using new technologies, and taking into account the nature, scope, context and purposes of the processing, is likely to result in a high risk" [23]. From these two provisions we can conclude that the DPIA is a tool specifically envisaged for context-based scenarios. However, some of the privacy and data protection risks detected in a DPIA are linked to the technology deployed. In this sense, performing this study at a prior stage could prevent discarding technological proposals with great potential or, in the worst cases, implementing technological solutions that entail a high risk but are too expensive to modify at that stage of development.
For these reasons, we propose adapting the DPIA for the appraisal of technological solutions prior to their implementation, by giving the DPIA methodology a new approach based on "layers". By way of example, we will propose a methodology taking as reference the work developed in the research project OLYMPUS.

First layer
The first step in the development of a DPIA is the description of the scope of the study. This description should contain at least an explanation of the technological proposal and its main differences from, or improvements upon, existing technologies. Once the characteristics of the technology have been explained, the data flows must be described. The data flows refer to the transfer, exchange, storage or modification of data that takes place in the use of the technology or in the performance of operations involving the processing of personal data. In the case of IdM systems, the data flows refer to the lifecycle of digital identities, that is to say, how enrollment, authentication and digital identity management will be carried out in the specific system. Furthermore, it would also be advisable to include a graphical representation of the architecture design to make the technology easily understandable and to highlight the main differences introduced. This phase does not present major difficulties, but the description of the technology must be written in language that both technical and non-technical experts can understand, so that compliance with legal requirements is properly considered.
Once the technology and the data flows have been described, and before proceeding with the DPIA, it is advisable to analyze at least one prior issue. Considering the latest developments in cryptography and the emergence of techniques aiming to achieve data anonymity, the first aspect to study is whether the data processed can actually be considered personal data and, therefore, whether the GDPR applies. If we conclude from this step that the data processed do not qualify as personal data, the assessment finishes at this phase.
In order to determine whether the data processed qualify as personal data, we have to refer to the definition contained in Article 4(1) of the GDPR, which states that "personal data means any information relating to an identified or identifiable natural person (data subject)" [23]. Hence, personal data is information that directly or indirectly relates to an identified or identifiable natural person. Conversely, when data do not relate to an identified or identifiable natural person, the data must be considered anonymous, which, according to Recital 26 of the GDPR, means they do not fall under the scope of the data protection principles, and our DPIA will conclude at this phase.
Consequently, personal data does not only refer to data that directly identify an individual, but also to data that make individuals identifiable [24]. An identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier, or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person. In order to determine whether an individual is identifiable, the criteria provided by the Article 29 Working Party (hereafter, A29 WP) in its Opinion 05/2014 on anonymization techniques [25] are of interest. The A29 WP introduces three criteria to analyze whether data can be considered anonymous:
1. Singling out: the possibility to isolate some or all records which identify an individual in the dataset.
2. Linkability: the risk generated where at least two datasets contain information about the same data subject.
3. Inference: the possibility to deduce, with significant probability, the value of an attribute from the values of a set of other attributes.
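The first two criteria, in particular, lend themselves to concrete checks over datasets. A simplified sketch follows, in which the field names and datasets are illustrative; inference, which depends on background knowledge, is harder to mechanize:

```python
def singling_out(records: list[dict], quasi_identifiers: list[str]) -> bool:
    """A record can be singled out if its combination of
    quasi-identifier values is unique within the dataset."""
    combos = [tuple(r[q] for q in quasi_identifiers) for r in records]
    return any(combos.count(c) == 1 for c in combos)

def linkable(dataset_a: list[dict], dataset_b: list[dict], key: str) -> bool:
    """Two datasets are linkable if they share values of some stable
    identifier (e.g., a pseudonym) for the same subjects."""
    return bool({r[key] for r in dataset_a} & {r[key] for r in dataset_b})

health = [{"pid": "u1", "zip": "28001", "age": 34},
          {"pid": "u2", "zip": "28001", "age": 34}]
browsing = [{"pid": "u1", "site": "example.org"}]

print(singling_out(health, ["zip", "age"]))   # False: both records share the combination
print(linkable(health, browsing, key="pid"))  # True: pseudonym 'u1' links the datasets
```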
In addition, it is essential to determine with respect to which parties the data subject is identified or identifiable. Here, we have to determine the parties involved in the service or data flow that identify, or could potentially identify, the individual, as well as the specific risks that could cause the "discovery" of data or the identification of the natural person. In the case of IdM services, the analysis could be the one contained in Table 1.
The specific techniques implemented in the technology under study must be considered with respect to each party. Depending on the existence or absence of identification risks, as well as their likelihood, we could classify the "degree of anonymization" achieved with respect to each party into the categories of low, medium or high. Nevertheless, note that only if a high degree of anonymization is achieved with respect to all parties can the DPIA stop at this phase. For example, a medium degree of anonymization could be achieved by making use of multiple pseudonyms; however, pursuant to Recital 26 of the GDPR, pseudonymized data shall be considered personal data, as pseudonymization merely reduces linkability.
Once it has been determined that personal data are processed, the next step in the DPIA consists of the evaluation of risks. Risk management is necessary to determine the potential damages or risks to which an activity is exposed. From the perspective of data protection, the analysis focuses on those threats that affect the rights and freedoms of individuals. As a first step in the risk analysis, we should classify threats depending on the risk source [26]. Our proposal is to consider at least three risk sources:
1. Risks relating to the particularities of the service.
2. Risks relating to the architecture and system components.
3. Risks relating to the user.
For the subject of study of this paper, IdM, these risk sources could materialize in the threats included in Table 2.
As in the common methodology for the DPIA, the risk is the result of multiplying likelihood by impact. Likelihood and impact of threats are variable, and we could make use of different scales of a quantitative or qualitative nature, such as the ones contained in Table 3. The likelihood criteria could be classified according to the possible frequency of threats (e.g., almost never, once per year, more than three times per year...), while the impact values could be associated with the degree of deprivation of rights and freedoms. Once the likelihood and impact of each specific threat have been determined, the results must be classified into different levels of risk. We could take as reference for the maximum level of risk the result of multiplying a very likely threat by a maximum impact; conversely, the minimum level of risk would be obtained by multiplying an unlikely threat by a negligible impact. Between these two values we can create as many classifications as desired, provided a proportional relation is kept between them. We provide an example in Table 4.
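This computation can be made explicit in a minimal sketch, where the labels and thresholds are illustrative assumptions in the spirit of Tables 3 and 4 rather than their exact values:

```python
# Illustrative 1-4 scales in the spirit of Tables 3 and 4.
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3, "very likely": 4}
IMPACT = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

def risk_score(likelihood: str, impact: str) -> int:
    """Risk is the product of likelihood and impact."""
    return LIKELIHOOD[likelihood] * IMPACT[impact]

def risk_level(score: int) -> str:
    """Map the 1-16 range onto proportional risk levels."""
    if score <= 4:
        return "low"
    if score <= 8:
        return "medium"
    if score <= 12:
        return "high"
    return "very high"

# A very likely threat with maximum impact gives the reference maximum (16);
# an unlikely threat with negligible impact gives the minimum (1).
print(risk_score("very likely", "maximum"))             # 16
print(risk_level(risk_score("unlikely", "negligible"))) # low
```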
Note that, in the scenario of a technology that has not been implemented yet, there will be impact values that cannot be determined and will have to be restudied in a subsequent context-based analysis. Nevertheless, we propose for these cases to assign a medium level of impact (in our scale, the level "significant"), corresponding to average data.
In addition, risks can involve different consequences or effects. In this sense, there might exist risks that affect the availability of the service but that do not have privacy implications. In the study of privacy risks, three risk dimensions are of interest [27]:
a) Integrity of the data: data are correct and complete.
b) Confidentiality of the data: data remain unknown to unauthorized parties.
c) Authenticity of users and information: the user is the authorized person and the information corresponds to this person.
Consequently, only those risks whose consequences affect these dimensions will be considered. The clearest example is the case of risks affecting hardware components. Stealing physical hardware was once a common method of committing cybercrime [28]. However, due to the existence of complex encryption processes, this method now lacks efficiency in many cases and only affects the availability of the service.
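A sketch of how this filter could be mechanized; the threat names and dimension tags below are illustrative:

```python
PRIVACY_DIMENSIONS = {"integrity", "confidentiality", "authenticity"}

threats = [
    {"name": "database breach", "affects": {"confidentiality"}},
    {"name": "user impersonation", "affects": {"authenticity", "confidentiality"}},
    {"name": "hardware theft (encrypted disks)", "affects": {"availability"}},
]

# Keep only threats whose consequences touch a privacy dimension;
# availability-only threats such as encrypted-hardware theft drop out.
privacy_threats = [t for t in threats if t["affects"] & PRIVACY_DIMENSIONS]
print([t["name"] for t in privacy_threats])
# ['database breach', 'user impersonation']
```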
At this stage we must already have determined the risks with privacy implications that might affect our technological proposal. Depending on the features or characteristics of our technology, likelihood values must be assigned (e.g., in the case of the OLYMPUS technology, taking into account the distribution of the IdP's task and password fragmentation, identity theft was qualified as unlikely [29]). Concerning impact values, if we do not know the specific data that will be processed (e.g., a technological proposal to be applied in the health sector), we recommend considering the values corresponding to the processing of average data.
By performing this first assessment, or first "layer", we obtain a set of results concerning the degree of privacy by design achieved in a specific technological proposal. In the research project OLYMPUS, the degree of privacy by design achieved in the proposed technology was studied using the methodology proposed here, with certain modifications. In this study it was detected at an early stage that OLYMPUS suffered from two specific drawbacks in its conception or design [29]. On the one hand, its distributed architecture required the replication of the user's attributes in each partial IdP, which could increase information risks and challenge the principle of proportionality in the data processing. On the other hand, the authentication method supporting the OLYMPUS solution was limited to text passwords, which was problematic in the scenario of a highly resilient architecture that might encourage attacks to focus on the user. The first problem is currently under study, while the second has already been solved by implementing a multi-factor authentication process [30]. In addition, thanks to this preliminary study of privacy implications, technical and business partners were informed about the conditions of deployment needed to achieve the best privacy results.
This first assessment can conclude in three different ways:
- The technical solution achieves an adequate level of privacy by design.
- The technical solution still requires modifications or safeguards.
- The technical solution will never achieve an adequate level of privacy by design.
If safeguards are proposed as a result of this first study, a deadline must be granted for their conception and implementation. Note that in this phase a proactive and collaborative attitude among the members of the team is extremely important. In this sense, legal experts must discuss with the experts in the area the technical viability of the proposed safeguards, as well as the adaptations or modifications that these measures may undergo. Once the time for the implementation of safeguards has elapsed, it should be reevaluated whether the technology has resolved the relevant risks. Consequently, we repeat the classification performed before (i.e., the technical solution achieves an adequate level of privacy by design; it still requires modifications; or it will never achieve an adequate level of privacy by design).
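The resulting iteration can be summarized as a simple loop. This is a sketch under labels of our own choosing; the assessment itself and the number of rounds granted remain matters of expert judgment:

```python
from enum import Enum, auto

class Outcome(Enum):
    ADEQUATE = auto()          # proceed to the second layer
    NEEDS_SAFEGUARDS = auto()  # propose safeguards and re-assess
    INADEQUATE = auto()        # the proposal will never reach adequacy

def first_layer(assess, propose_safeguards, max_rounds: int = 3) -> Outcome:
    """Run the first-layer assessment, re-evaluating after each round of
    safeguards agreed between legal and technical experts."""
    for _ in range(max_rounds):
        outcome = assess()
        if outcome != Outcome.NEEDS_SAFEGUARDS:
            return outcome
        propose_safeguards()   # implemented within the agreed deadline
    return Outcome.INADEQUATE  # still unresolved after the allotted rounds

# E.g., a proposal that needs one round of safeguards before passing:
rounds = iter([Outcome.NEEDS_SAFEGUARDS, Outcome.ADEQUATE])
print(first_layer(assess=lambda: next(rounds), propose_safeguards=lambda: None))
```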
To conclude this subsection, it must be noted that privacy is not an absolute value and will have to be considered jointly with the nature and scope of the data processed. Nevertheless, performing these prior assessments can bring important benefits, as they not only help to detect possible problems but can also anticipate whether a technology is adequate for the processing of certain types of personal data.

Second layer
In the first and second scenarios mentioned in the previous subsection (in the latter case, once safeguards have been introduced and reevaluated), the technology can be implemented in a specific context. At this moment it is necessary to perform the second assessment or "layer", consisting of the study of the level of compliance with data protection principles in the specific scope of the data processing.
Consequently, in this case the risks will refer to the lack of compliance with data protection principles. Likewise, in this second assessment we can also take into account other context-dependent aspects, such as the training of the staff in charge of deploying the service. The development of this subsequent assessment (or assessments) will be easier, since the implications of making use of the specific technology have already been determined. This second analysis is closer to the traditional conception of the DPIA, as a methodology designed for legal professionals, and should include at least the following elements.
The first aspect to study is the lawfulness of the data processing. The data processing could be based on consent or on any of the other circumstances envisaged by Articles 6 or 9 of the GDPR. In those cases where the data processing is based on user consent, the data controller shall be able to demonstrate that the consent was informed and given voluntarily; the same applies to the procedure for withdrawing it. On the other hand, where the processing is not based on user consent, the data processing must be justified by any of the other grounds listed in Article 6.1 of the GDPR, and specific regulations must be taken into account (e.g., Directive (EU) 2016/680 on the protection of natural persons with regard to the processing of personal data by competent authorities for the purposes of the prevention, investigation, detection or prosecution of criminal offences or the execution of criminal penalties [31]).
Once the lawfulness of the data processing has been determined, other circumstances must be considered. In this sense, the data minimization principle implies processing only the data strictly necessary for justified purposes, as well as limiting storage to the strictly necessary period. In the risk analysis, examples of threats of non-compliance with the data minimization principle could be excessive data collection or an excessive storage period.
Likewise, proportionality in the data processing activity must be analyzed. From this perspective it is necessary to evaluate whether the aim pursued by the data processing can be achieved by other means which imply a lower risk. In order to determine the proportionality of a data processing activity, the guidelines provided by the Spanish Data Protection Agency [26] may be useful. Following these guidelines, three successive assessments are proposed (sketched in code below):
1. Whether the measure can achieve the proposed objective (suitability criterion).
2. Whether no more moderate measure can meet this goal with the same effectiveness (necessity criterion).
3. Whether the measure implies more benefits than damages for other assets or values in conflict (proportionality criterion in the strict sense).
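Because the three tests are successive, each must pass before the next is considered. A minimal sketch, where the inputs are the legal expert's judgments rather than computable facts:

```python
def proportionate(suitable: bool, necessary: bool, balanced: bool) -> bool:
    """Three successive tests from the Spanish DPA guidelines [26]:
    each must pass before the next is considered."""
    if not suitable:    # can the measure achieve the objective?
        return False
    if not necessary:   # is there no more moderate, equally effective measure?
        return False
    return balanced     # do benefits outweigh damages to conflicting values?

# E.g., biometric authentication in a low-stakes service may pass
# suitability but fail necessity (passwords plus MFA achieve the goal).
print(proportionate(suitable=True, necessary=False, balanced=True))  # False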
The evaluation of proportionality in a data processing activity does not refer to a single threat only; it requires considering the specific characteristics of the data processing. A good example is the implementation of biometric authentication. Biometric data are qualified as a special category of personal data and present a high risk since, once compromised, they are compromised forever [23]. However, biometric data also offer important advantages in the process of binding identity during authentication and are extremely convenient for end users. Consequently, if a technology involving the use of biometrics is implemented, in this "second layer" it must be studied whether the deployment of that specific technology is justified or balanced for the concrete scenario, following the criteria cited above.
In addition, accuracy, integrity, transparency and confidentiality in the data processing must be studied. Pursuant to Article 5.1 of the GDPR, accuracy in the data processing requires data to be accurate in order to ensure the correct fulfillment of requests and rights, as well as the possibility for the user to demand the correction of inaccurate data, as stated in Article 16 of the same text [23]. Transparency in the data processing, in turn, can be considered from different perspectives: it means that the data subject can access his/her data at any moment with no need to provide special justification, but also that the data processing must be carried out in a way that enables and facilitates eventual controls by Law Enforcement Authorities. To conclude, confidentiality of the data implies that data must remain unknown to non-authorized parties, and thus requires appropriate mechanisms to ensure that the person accessing the data is the authorized user.
Besides, in this second assessment we will count on additional information, such as the staff in charge of providing the service or the specific security measures adopted, that could modify the initial result of our DPIA. By way of example, we will construct a hypothetical use case in which the OLYMPUS technology could be deployed. We noted the following aspects with regard to OLYMPUS: a) it increases the amount of data processed, as it replicates the user's attributes in each partial IdP; b) it was exclusively based on passwords.
As the second problem has been solved, we should already have adjusted the likelihood of social engineering attacks in our first DPIA.
The hypothetical use case in which we would deploy OLYMPUS is the following: "OLYMPUS technology is implemented to provide identification and identity management services (identity as a service) for authentication before streaming services. In this case the data collected will be the name and surname of the user, as well as his/her age and email address. User consent is obtained in a comprehensible and informed way. These data will be used exclusively for the purpose of providing identification services and will be erased the moment the user decides to delete his/her account. The user can access his/her account and visualize his/her data at any moment. Financial information (i.e., credit card information) remains on the side of the service provider. The IdP does not receive/store any information about the content visualized".
The analysis is presented in Table 5.
Considering that financial information remains on the service provider's side (the streaming service), the nature of the data processed by the IdP makes the risk of unauthorized access limited. Conversely, user impersonation would allow access to financial information and the content visualized; hence, should this threat materialize, the impact would be maximum. Nevertheless, the resulting risk of impersonation can be considered medium thanks to the OLYMPUS distributed architecture, which reduces the likelihood of this threat compared with common IdPs.
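Using the illustrative scales sketched for the first layer, this reasoning can be reproduced numerically; the values assigned are assumptions for this hypothetical use case:

```python
# Illustrative scales from the first layer (in the spirit of Tables 3 and 4).
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3, "very likely": 4}
IMPACT = {"negligible": 1, "limited": 2, "significant": 3, "maximum": 4}

# Impersonation would expose financial data and viewing history (maximum
# impact), but the distributed vIdP lowers its likelihood to "possible".
score = LIKELIHOOD["possible"] * IMPACT["maximum"]
print(score)  # 8, which falls in the 5-8 band -> medium risk
```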
This analysis can be repeated for the different use cases in which the OLYMPUS technology is to be implemented. The process will be straightforward, as the previous analysis of the technological proposal for IdM has already defined, for this specific technology, the likelihood of the risks commonly linked to IdM services.

CONCLUSIONS
The methodology set out in this paper evidences the need to adopt multidisciplinary approaches involving the collaboration of experts from different areas in the development of safer, more privacy-respectful and human-centered technologies. More specifically, we have proposed a multiphase DPIA, in other words, the division of the DPIA methodology into two phases or "layers", in order to obtain more efficient results and avoid problems such as the ones described in the introduction and throughout this paper (i.e., vulnerable technologies, wide rejection of technological proposals, or implementation of technological proposals involving a high privacy risk). Indeed, a multiphase DPIA is better adapted to the evolving reality of a technological project. In this sense, it is not enough to determine a set of requirements and definitions at the beginning of a project; legal compliance must be dynamic and adapted in line with the technology's evolution.
We consider this new approach necessary because the DPIA, despite being conceived as a legal tool, involves some technical aspects that are difficult for legal experts to understand. Conversely, it also involves legal concepts that technical experts are not used to handling. In consequence, we propose an innovative and interdisciplinary approach that favors the collaboration between both kinds of professionals and ensures privacy not only from the earliest stage in the development of the technology, but also during the technology's evolution and adaptation.
Nevertheless, it must be noted that the DPIA is not a tool designed to obtain absolute values but to study the level of balance achieved. This is particularly important when evaluating a technology that has not been implemented yet. Therefore, in the development of the first "layer", conclusions have to be interpreted and compared with other technologies and the current state of the art, or complemented with subsequent analyses of real use cases, i.e., second "layers".
In conclusion, technological developments cannot ignore social or economic realities, nor the protection of the rights and freedoms established by regulations. Therefore, although perfect solutions do not exist, all these aspects must be studied in order to help society adopt technologies that are safe and respectful of the rights of individuals and of society as a whole. Digitalized societies are here to stay. However, the future of technology is multidisciplinary, and legal compliance must adapt to these challenges so that law is not perceived as a barrier to innovation but as an essential safeguard for the protection of fundamental rights and civil liberties.