Subversion-Resilient Enhanced Privacy ID

Anonymous attestation for secure hardware platforms leverages tailored group signature schemes and assumes the hardware to be trusted. Yet, there is an ever-increasing concern on the trustworthiness of hardware components and embedded systems. A subverted hardware may, for example, use its signatures to exfiltrate identifying information or even the signing key. In this paper we focus on Enhanced Privacy ID (EPID)—a popular anonymous attestation scheme used in commodity secure hardware platforms like Intel SGX. We define and instantiate a subversion-resilient EPID scheme (or SR-EPID). In a nutshell, SR-EPID provides the same functionality and security guarantees of the original EPID, despite potentially subverted hardware. In our design, a “sanitizer” ensures no covert channel between the hardware and the outside world, both during enrollment and during attestation (i.e., when signatures are produced). We design a practical SR-EPID scheme secure against adaptive corruptions and based on a novel combination of malleable NIZKs and hash functions modeled as random oracles. Our approach has a number of advantages over alternative designs. Namely, the sanitizer bears no secret information—hence, a memory leak does not erode security. Further, the role of sanitizer may be distributed in a cascade fashion among several parties, so that sanitization remains effective as long as one of the parties has access to a good source of randomness. Also, we keep the signing protocol non-interactive, thereby minimizing latency during signature generation.


Introduction
Anonymous attestation is a key feature of secure hardware platforms, such as Intel SGX [5] or the Trusted Computing Group's Trusted Platform Module [6]. It allows a verifier to authenticate a party as a member of a trusted set, while keeping the party itself anonymous (within that set). This functionality is realized by using a privacy-enhanced flavor of group signatures in which signatures cannot be traced, not even by the group manager.
Given such a realization paradigm, the security of anonymous attestation schemes is grounded on the trustworthiness of the signer. In particular, anonymity and unforgeability definitions assume that the signer is trusted and does not exfiltrate any information via its signatures. Yet, in most applications, the signer is a small piece of hardware with closed-source firmware (e.g., a smart card) to which a user has only black-box access. In such a scenario, trusting the hardware to behave honestly may be too strong of an assumption, for mainly two reasons. First, having only black-box access to a piece of hardware makes it virtually impossible to verify whether the hardware provides the claimed guarantees of security and privacy. Second, recent news on state-level adversaries corrupting security services have shown that subverted hardware is a realistic threat. In the context of anonymous attestation, if the hardware gets subverted (e.g., via firmware bugs or backdoors), it may output valid, innocent-looking signatures that, in reality, covertly encode identifying information (e.g., using special nonces). Such signatures may allow a remote adversary to trace the signer, thereby breaking anonymity. Using a similar channel, a subverted signer could also exfiltrate its secret key, and this would enable an external adversary to frame an honest signer, for example by signing bogus messages on its behalf.
Previous work has studied subversion resilience in the context of Direct Anonymous Attestation (DAA), the anonymous attestation scheme used in TPMs. The subversion-resilient DAA proposed by Camenisch et al. [9] leverages a "split" signature scheme where the secret key is split between the TPM and the host. Intuitively, this approach guarantees security in the presence of a subverted TPM as long as the host behaves honestly and does not leak its share of the secret key.

Our Contribution
We continue the study of subversion-resilient anonymous attestation and focus on Enhanced Privacy ID (EPID) [8,7], a popular anonymous attestation scheme that is currently deployed on commodity trusted execution environments like Intel SGX. Our contribution is mainly twofold: we first formalize the notion of Subversion-Resilient EPID (SR-EPID), and then we propose an efficient realization of this cryptographic primitive in bilinear groups.
The Model of Subversion-Resilient EPID. Enhanced Privacy ID is essentially a privacy-enhanced group signature where the group manager cannot trace a signature but signers can be revoked. In the context of remote attestation, a group member is instantiated by its signing component (the "signer"), which is typically a piece of hardware.
In order to counter subverted signers, our main idea is to enhance the EPID model by adding a "sanitizer" party whose goal is to ensure that no covert channel is established between a potentially subverted signer and external adversaries. In practical application scenarios, the sanitizer could run on the same host as the signer (e.g., on a phone, to sanitize signatures issued by the SIM card), or on a separate one (e.g., on a corporate firewall, to sanitize signatures issued by local machines).
Compared to a subversion-resilient anonymous attestation scheme that uses split signatures [9], our approach comes with multiple benefits. First, signature generation is non-interactive and the communication flow is unidirectional, from the signer to the sanitizer and on to the verifier. Thus, our design decreases signing latency and provides more flexibility, as the sanitization of a signature does not need to be done online. Another benefit of our design is the fact that the sanitizer holds no secret. This means that if a memory leak occurs on the sanitizer, there is nothing to recover but public information. By contrast, in a split-signature approach, security properties no longer hold if the TPM is subverted and the key share of its host is leaked. Further, as sanitization is non-interactive and requires no secret, it may even be carried out by multiple parties in a cascade fashion, so that covert channels are eradicated as long as one of the sanitizers has access to a good source of randomness—and such randomness is not available to the adversary. It is not clear how to achieve such "fault tolerance" with split signatures. One may split the signing key across several parties and design a multiparty signing protocol, but very likely this would lead to high latency for signature generation.
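To make the cascade idea concrete, here is a minimal Python sketch. It is a toy ElGamal re-randomization in a small prime-order subgroup, not our actual scheme, and all parameters and names are illustrative: each sanitizer multiplies the incoming object by a fresh encryption of 1, so the result is functionally unchanged, while its randomness ends up uniform as long as any single sanitizer in the chain sampled honestly.

```python
# Toy illustration (NOT the SR-EPID scheme): cascade re-randomization of an
# ElGamal ciphertext in a small prime-order subgroup. Each sanitizer multiplies
# in a fresh encryption of 1; the result decrypts identically, but its
# randomness is uniform as long as ANY single sanitizer sampled honestly.
import random

P = 467          # small prime; Z_P^* contains a subgroup of order Q = 233
Q = 233
G = 4            # generator of the order-Q subgroup (4 = 2^2 mod 467)

def keygen():
    x = random.randrange(1, Q)
    return x, pow(G, x, P)

def encrypt(y, m, r=None):
    r = random.randrange(1, Q) if r is None else r
    return (pow(G, r, P), m * pow(y, r, P) % P)

def decrypt(x, ct):
    c1, c2 = ct
    return c2 * pow(c1, Q - x, P) % P   # c2 / c1^x, since c1 has order dividing Q

def sanitize(y, ct):
    # multiply by a fresh encryption of 1 -> same plaintext, fresh randomness
    s = random.randrange(1, Q)
    c1, c2 = ct
    return (c1 * pow(G, s, P) % P, c2 * pow(y, s, P) % P)

x, y = keygen()
m = pow(G, 42, P)            # message must lie in the subgroup
ct = encrypt(y, m, r=1)      # adversarially fixed ("subverted") randomness
for _ in range(3):           # cascade of three sanitizers
    ct = sanitize(y, ct)
assert decrypt(x, ct) == m   # functionality is preserved
```

After the cascade, the ciphertext's randomness is the sum of all sanitizers' contributions, so a single honest contribution already makes it uniform.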
The idea of adding a sanitizer to mitigate subversion attacks in anonymous attestation is inspired by the cryptographic reverse firewalls of Mironov and Stephens-Davidowitz [22]. Besides subversion-resilient unforgeability (as in Ateniese et al. [2]), in an EPID scheme we have to guarantee additional properties such as anonymity and non-frameability, as well as deal with the complications of supporting revocation. Formalizing all these properties in rigorous definitions turned out to be non-trivial and is a significant contribution of this paper.
Our construction requires NIZK proofs that are not only malleable but also have a form of simulation-extractable soundness. In the EPID of [7], simulation-extractable soundness is also needed, but it is obtained for free by using Fiat-Shamir-transformed Sigma protocols (Faust et al. [16]). In our case, this approach is not viable because the Fiat-Shamir compiler breaks any chance for re-randomizability. One could use a re-randomizable and (controlled) simulation-extractable NIZK (Chase et al. [13]), but in practice these tools are very expensive: they would require hundreds of pairings for verification and hundreds of group elements for the proofs.
To overcome this problem, we propose a combination of (plain) GS proofs with the random oracle model. Briefly speaking, we use the random oracle to generate the common reference string that will be used by the GS proof system, and we use the property that, in perfectly-hiding mode, this CRS can be created from a uniform random string. (In particular, we need cryptographic hash functions that allow hashing directly onto G_1 and G_2; see Galbraith et al. [17].) In this way we can program the random oracle to produce extractable common reference strings for the forged signature made by the adversary and for the messages in the join protocol with corrupted members, and program the random oracle to output perfectly-hiding common reference strings for all the material that the reduction needs to simulate. Our technique is reminiscent of techniques based on programmable hash functions [19,12] and linearly homomorphic signatures [20]. However, our ROM-based technique enables more efficient schemes with unbounded simulation soundness.
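The "random oracle output = CRS" idea can be sketched as follows. This toy derives group elements deterministically by hashing a label into a small prime-order subgroup (try-and-increment plus cofactor clearing); the real construction instead hashes into the pairing groups G_1 and G_2 to obtain a Groth-Sahai CRS, and the group, labels, and function names here are our own illustrative choices.

```python
# Toy sketch: derive "CRS" elements by hashing into a prime-order subgroup.
# Try-and-increment with cofactor clearing; parameters are illustrative only.
import hashlib

P, Q = 467, 233   # toy subgroup of order Q inside Z_P^*

def hash_to_subgroup(label: bytes) -> int:
    ctr = 0
    while True:
        d = hashlib.sha256(label + ctr.to_bytes(4, "big")).digest()
        x = int.from_bytes(d, "big") % P
        if x != 0:
            g = pow(x, (P - 1) // Q, P)   # cofactor clearing: lands in order-Q subgroup
            if g != 1:
                return g
        ctr += 1

# a "CRS" of four group elements derived from the (modeled) random oracle
crs = [hash_to_subgroup(b"GS-CRS" + bytes([i])) for i in range(4)]
assert all(g != 1 and pow(g, Q, P) == 1 for g in crs)
```

Anyone can recompute the same CRS from the public label, while a reduction that programs the oracle is free to answer with a CRS of its choice (extractable or perfectly hiding).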
The resulting scheme provides the same functionality as EPID, tolerates subverted signers, and features signatures that are shorter than the ones in [7] for reasonable sizes of the revocation list: ours have 28 + 2n group elements whereas EPID signatures have 8 + 5n, where n is the size of the revocation list (i.e., ours are shorter already for n ≥ 7).
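The crossover point follows directly from the two size formulas; a one-line check:

```python
# Signature sizes in group elements as a function of the revocation-list size n,
# per the comparison in the text: ours are 28 + 2n, EPID's are 8 + 5n.
ours = lambda n: 28 + 2 * n
epid = lambda n: 8 + 5 * n

crossover = next(n for n in range(100) if ours(n) < epid(n))
assert crossover == 7                 # 28 + 2n < 8 + 5n  <=>  n > 20/3
assert ours(6) > epid(6)              # at n = 6, EPID is still shorter (40 vs 38)
assert (ours(7), epid(7)) == (42, 43)
```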

Related work
Subversion-resilient signatures and Cryptographic Reverse Firewalls. Ateniese et al. [2] study subversion-resilient signature schemes and show that unique signatures, as well as the use of a cryptographic reverse firewall (RF) of [22], ensure unforgeability despite a subverted signing algorithm. Our scheme could be roughly interpreted as a new EPID scheme equipped with a cryptographic reverse firewall for the join protocol that allows a new party to join the group, and a cryptographic reverse firewall that protects the signatures sent by the signer. However, as already mentioned, there are some technical details that differentiate our model from the cryptographic reverse firewall framework.
Subversion-resilient anonymous attestation. Camenisch et al. [10] modify the UC corruption model and provide a UC definition for DAA that guarantees privacy despite a subverted TPM. The DAA scheme presented in [10] leverages dual-mode signatures of Camenisch and Lehmann [11] and builds upon the ideas of Bellare and Sandhu [5] to provide a signature scheme where the signing key is split between the host and the TPM. Later on, Camenisch et al. [9] build on the same idea of [10] and show a UC-secure DAA scheme that requires only minor changes to the TPM 2.0 interface and tolerates a subverted TPM by splitting the signing key between the host and the TPM.
We argue that splitting the signing key between the potentially subverted hardware (e.g., the TPM) and the host to achieve resilience to subversions is viable in scenarios where (i) the channel between the two parties has low latency (because of the interactive nature of the signing protocol) and (ii) the user can trust the host. Both conditions hold for TPM scenarios. In particular, a TPM is soldered to the motherboard of the host and has a high-speed bus to the main processor. Also, the TPM manufacturer is usually different from that of the main processor—hence, the user may trust the latter but not the former.
In the case of TEEs such as Intel SGX, we note that there is no real separation between the TEE and the main processor. Thus, it would be hard to justify an untrusted TEE and a trusted processor since, in reality, they lie on the same die and are shipped by the same manufacturer. As such, the entity in charge of preventing the TEE from exfiltrating information (i.e., the one holding a share of the signing key) must be placed elsewhere along the channel between the TEE and the verifier, thereby paying a latency penalty to generate signatures.
We argue that our solution is more suitable for TEE platforms like Intel SGX. In particular, the non-interactive nature of the signing protocol allows us to place the sanitizer "away" from the signer, without impact on performance. Thus, the sanitizer may be instantiated by a co-processor next to the TEE, or it may run on a company gateway that sanitizes attestations produced by hosts within the company network before they are sent out. As the sanitizer and the potentially subverted hardware may run on different platforms, they may come from different manufacturers. For example, one could pick an AMD or RISC-V processor to sanitize an Intel-based TEE such as SGX. A sanitizer may even be built by combining different COTS hardware as in [21].
Finally, we note that our definition of SR-EPID is not UC, but it caters for adaptive corruptions, whereas the UC definition of DAA in [10,9] only considers static corruptions.

Subversion-Resilient Enhanced Privacy ID
In this section we introduce our notion of Subversion-Resilient Enhanced Privacy ID (SR-EPID). Before we do so, we discuss EPID and its shortcomings in the case of subverted hardware.
Background on EPID. Enhanced Privacy ID is essentially a privacy-enhanced group signature scheme with a group manager and a number of group members.
Compared to classic group signatures (see Bellare et al. [4]), EPID drops the ability of the group manager to trace signatures, and adds novel revocation mechanisms. In particular, EPID allows revoking a group member by adding its private key to a revocation list named PrivRL; while verifying a signature σ, the verification algorithm checks that none of the private keys in PrivRL may have produced σ. In case the secret key of a misbehaving group member did not leak, EPID can still revoke that member by using one of its signatures. That is, EPID accounts for an additional revocation list, named SigRL, containing signatures of revoked members. Thus, a valid signature σ must carry a zero-knowledge proof that the private key used to compute σ is different from any of the keys used to produce any of the signatures in SigRL.
Security notions for EPID include anonymity and unforgeability. Informally, anonymity ensures that signatures are not traceable by any party, including the group manager. Unforgeability ensures that only non-revoked group members can generate valid signatures.
We note that EPID does not account for pseudonymous signatures. The latter allow for a sort of controlled linkability, as each signature is bound to a "basename", and one can easily tell—via a Link algorithm—whether two signatures on the same basename were produced by the same group member. This signature mode is actually available in DAA and in the version of EPID used by Intel SGX. Further, DAA defines a security property tailored to pseudonymous signatures called non-frameability. Informally, non-frameability ensures that no adversary—not even a corrupted group manager—can create a signature on a message m and basename B that links to a signature of an honest group member (when this honest group member never signed m, B). Given the usefulness of pseudonymous signatures in real-world deployments, we decide to include them—along with a definition of non-frameability—in our definition of subversion-resilient EPID.
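The basename-bound linkability can be sketched with a toy pseudonym tag. This is our own simplification, not the scheme's pairing-based algebra: each signature on basename bsn carries a tag T = H(bsn)^sk in a small prime-order subgroup, and Link simply compares tags.

```python
# Toy sketch (illustrative simplification, NOT the actual EPID/DAA algebra) of
# pseudonymous signatures: the per-basename tag T = H(bsn)^sk is deterministic
# per (signer, basename), so Link can compare tags without identifying signers.
import hashlib

P, Q = 467, 233                      # toy subgroup of order Q inside Z_P^*

def H(bsn: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(bsn).digest(), "big") % P
    return pow(h, 2, P)              # squaring lands in the order-Q subgroup

def pseudonym(sk: int, bsn: bytes) -> int:
    return pow(H(bsn), sk, P)

def Link(t1: int, t2: int) -> bool:
    return t1 == t2                  # same signer + same basename <=> same tag

sk_a, sk_b = 5, 6
t1 = pseudonym(sk_a, b"service-1")
t2 = pseudonym(sk_a, b"service-1")
t3 = pseudonym(sk_b, b"service-1")
assert Link(t1, t2)                  # same signer, same basename: links
if H(b"service-1") != 1:             # holds except with negligible probability
    assert not Link(t1, t3)          # different signers do not link
```

Tags on different basenames use unrelated bases H(bsn), which is why linkability stays controlled per basename.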

Subversion-Resilient EPID
Overview and rationale of the definition. We introduce a "sanitizer" that proxies the communication between the signer and the outside world. For simplicity, we assume each signer to be paired with a sanitizer, and we denote a signer-sanitizer pair as a "platform". In the security experiments we denote with I the issuer, with S the sanitizer, with M the signer, and with P the platform. Very often we refer to the signer as the "hardware" or the "machine" (thus the letter M in our notation). We assume group members to be platforms and gear security definitions towards them.

The goal of the sanitizer is to remove any possible covert channel from the signer to an external adversary. For example, a subverted signer could establish a covert channel through the randomness used at signature generation. Alternatively, a subverted signer may maliciously influence the join protocol so as to obtain as output a fixed secret key that is a priori known to the adversary; later on, the adversary may simply use this known private key to break anonymity (since, by definition, private-key-based revocation allows a verifier to tell if a signature has been produced with a given private key). Yet another option is for the signer to behave honestly during the join protocol, but later use a preloaded secret key to produce signatures. Once again, the adversary may use that known key and a signature to break platform anonymity.
To deal with these issues, our notion of SR-EPID is designed so that (i) the sanitizer participates in the join protocol, contributing to the private key of the signer, (ii) each signature output by the signer carries a proof (for the sanitizer to verify) that the private key used for signing is the very same one obtained during the join protocol, and (iii) the sanitizer sanitizes signatures to avoid covert channels based on maliciously-sampled randomness.
The resulting syntax is a generalization of EPID that adds a Sanitize algorithm and modifies the original Join and Sig algorithms.

Syntax of Subversion-Resilient EPID (SR-EPID)
We denote by ⟨d, e, f⟩ ← P⟨A(a), B(b), C(c)⟩ an interactive protocol P between parties A, B, and C, where a, b, c (resp. d, e, f) are the local inputs (resp. outputs) of A, B, and C, respectively.
An SR-EPID consists of an interactive protocol Join and of the algorithms Init, Setup, Sig, Ver, Sanitize. All the algorithms (and the protocol) but Init take as input public parameters (generated by Init); for readability, we keep this input implicit.
Init(1^λ) → pub. This algorithm takes as input the security parameter λ and outputs public parameters pub.

Setup(pub) → (gpk, isk). This algorithm takes the public parameters pub and outputs a group public key gpk and an issuing secret key isk for the issuer I.

Join⟨I(gpk, isk), S_i(gpk), M_i(gpk)⟩ → ⟨b, (b, svt_i), sk_i⟩. This is a three-party protocol between the issuer I, a sanitizer S_i, and a signer M_i.

In our syntax, we assume PrivRL to be a set of private keys {sk_i}_i, and SigRL to be a set of triples {(bsn_i, M_i, σ_i)}_i, each consisting of a basename, a message, and a signature. We define two forms of correctness, with and without revocation lists.

Subversion-resilient Security
The security of an SR-EPID scheme is captured by three main properties, namely anonymity, unforgeability, and non-frameability, which are defined below.
We consider subverted signers that can behave arbitrarily during the join protocol and, in particular, abort the execution of the protocol. However, once the join protocol is completed, we assume that signers, although subverted, maintain a correct "input-output behavior". That is, a subverted signer produces a valid signature for a message and basename, namely a signature that would verify if the signer were not revoked, but that could be arbitrarily (and maliciously) distributed over the set of all valid signatures. We formalize this idea in the following assumption.
Assumption 1. Let Π be an SR-EPID. For any public parameters pub, any adversary A, any gpk and auxiliary information aux, and any (possibly adaptively chosen) sequence of tuples (bsn_1, M_1), ..., (bsn_q, M_q), let ⟨b, (b, svt), state_0⟩ be a possible output of the join protocol Join⟨A(gpk, aux), S(gpk), M(gpk, aux)⟩ conditioned on b = 1, or a possible output of the join protocol Join⟨I(gpk, aux), A(gpk), M(gpk, aux)⟩ conditioned on b = 1, and let (σ_i, state_i) ← M(state_{i−1}, M_i, bsn_i) for i = 1, ..., q. Then for all i = 1, ..., q: Vf(gpk, M_i, bsn_i, σ_i) = 1.

Assumption 1 models the fact that, if signers can be subverted, a signer should be considered safe as long as it does not return errors when it comes to generating signatures. The occurrence of such an error should alert a sanitizer anyway. First, such an error can occur if one of the signatures produced by the signer was included in the signature-based revocation list: if the list was honestly created, it means that the signer has been revoked; if the list was maliciously crafted, then the signature request may constitute an attempt to deanonymize the signer. Second, if the errors are arbitrary, then they inevitably enable the signer to signal any kind of information.
Macros for the Join Protocol and Signature generation. As mentioned, the join protocol is a three-party protocol with the sanitizer in the middle. To simplify the already heavy notation, we define the macro Join(M, state_S, state_M, γ_I), which identifies one full round of the join protocol from the issuer's point of view, with an honest sanitizer and a machine M. In more detail, the macro takes as input the description of the (possibly subverted) machine M, the state of the sanitizer state_S, the state of the machine state_M, and the message sent by the issuer γ_I, and it identifies the corresponding set of actions. Notice that the procedures additionally take as input the group public key gpk, which we keep implicit. Similarly, the signature procedure is a two-phase protocol between the signer and the sanitizer, for which we define an analogous macro. The macro additionally checks in step 2 that svt is a valid string; we use this check to discriminate the case when the sanitizer is corrupted.
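The round structure captured by the Join macro can be sketched as simple message plumbing. The function and parameter names below are ours, and the toy parties are placeholders; the point is only the issuer → sanitizer → machine → sanitizer → issuer data flow of one round.

```python
# Illustrative plumbing (our naming, not the paper's formal macro) for one
# round of the join protocol from the issuer's point of view: the issuer's
# message gamma_I travels down through the honest sanitizer to the machine M,
# and the machine's reply travels back up through the sanitizer.
def join_round(machine_step, sanitize_down, sanitize_up, state_S, state_M, gamma_I):
    down, state_S = sanitize_down(state_S, gamma_I)   # issuer -> machine leg
    reply, state_M = machine_step(state_M, down)      # (possibly subverted) machine
    up, state_S = sanitize_up(state_S, reply)         # machine -> issuer leg
    return up, state_S, state_M

# toy instantiation: pass-through sanitizer legs; the machine replies with
# (its state + incoming message) mod 233
passthrough = lambda st, msg: (msg, st)
machine = lambda st, msg: ((st + msg) % 233, st)

up, _, _ = join_round(machine, passthrough, passthrough, None, 5, 7)
assert up == 12
```

In the real protocol the sanitize_down/sanitize_up legs are where the sanitizer injects its own randomness and checks the machine's messages, rather than passing them through.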
Subversion-Resilient Anonymity. This notion formalizes the idea that an adversarial issuer cannot identify a group member through the signatures it produces. Recall that we assume a signer M_i to be paired with a sanitizer S_i; we denote the platform constituted by M_i and S_i with P_i. We assume M_i to be subverted, i.e., it runs an adversarially specified program, while S_i is honest. The case when both M_i and S_i are corrupted is meaningless for anonymity, since the adversary controls all the relevant parties. The remaining case, in which M_i is honest but S_i is corrupted, is also hopeless for anonymity, since a corrupted sanitizer could always maul the outputs of the signer in order to reveal its identity. We formalize subversion-resilient anonymity for SR-EPID in a security experiment that appears in Fig. 1, and we formally define anonymity as follows.
Definition 1. Consider the experiment described in Fig. 1. We say that an SR-EPID Π is anonymous if and only if, for any PPT adversary A, the advantage in the experiment is negligible.

In the experiment we let M_i be an adversarially specified program; yet, as argued above, we assume that it preserves the expected input-output functionality. Namely, there is a command M_i.Sig that is supposed to follow the input-output behavior of the Sig algorithm.
Here we provide an intuitive explanation of the anonymity experiment. The idea is that the adversary plays the role of the issuer, i.e., it selects the group public key, and it can do the following: (1) ask platforms with subverted signers to join the system; (2) ask platforms with subverted signers to sign messages; (3) corrupt platforms. For (1), this means that the adversary specifies the code of a signer M_i that, together with an honest sanitizer S_i, runs the Join protocol with the adversary playing the role of the issuer. For (2), a subverted signer M_i produces a signature that is sanitized by S_i and then delivered to the adversary. Finally, (3) simply models a full corruption of the platform, in which the adversary learns the secret key sk_i obtained by M_i at the end of its Join protocol.
The adversary can choose two platforms (P_{i_0}, P_{i_1}), a basename bsn*, and a message M*, and it receives a sanitized signature on M*, bsn* produced by one of the two platforms. The goal of the adversary is to figure out which platform produced the signature. In order to avoid trivial attacks, the two "challenge" platforms must be non-corrupted and none of their signatures can be included in the SigRL used to produce the challenge signature. Further, if the adversary has previously requested a signature with bsn* from either platform, the challenger aborts. Similarly, after seeing the challenge signature, the adversary may not ask for a signature by any of the challenge platforms on basename bsn*.
Technical details. The structure is the one depicted earlier: the adversary chooses the group public key on input the public parameters and then starts interacting with the oracle C. The experiment maintains lists L_join, L_usr, L_corr to keep track of the state of the Join protocol sessions, and of the non-corrupted and corrupted platforms, respectively. Also, it maintains a flag Bad, initialized to false, which is turned to true whenever the adversary violates the rules of the experiment (see below). At some point the adversary outputs a message M*, a basename bsn*, and two indices i_0, i_1, along with a signature revocation list SigRL*; it receives a sanitized signature generated using the subverted signer M_{i_b}. In line 8 of Exp^anon_{A,Π}(λ, b) we ensure that the adversary did not previously query for a signature with basename bsn* by one of the challenge platforms; if that were the case, the adversary could trivially win by using the Link algorithm. In line 11 of Exp^anon_{A,Π}(λ, b) we ensure that both challenge platforms generate valid signatures after sanitization. Indeed, if a difference occurred (e.g., one of them is ⊥), the adversary could trivially win the game. For example, this would be the case if the SigRL chosen by A contained a signature from, e.g., M_{i_0}. Similar checks are done in lines 15-19 of the C oracle upon a signing query that involves one of the challenge platforms, say i_{1−β}. The code of those lines essentially ensures that the queried basename is not the challenge one, and that the other challenge platform i_β would generate a signature on the same message M that is valid iff so is the one generated by i_{1−β}. Again, if such a difference occurred, the adversary could trivially distinguish and win the experiment. Similarly to the other case, this could occur if the queried SigRL contains a signature of (only) one of the challenge platforms.
We stress that the mechanism that uses the verification tokens is necessary. Indeed, consider the definition above where the svt and the proof π_σ are missing. An attacker can first perform two join protocols with two subverted machines M_1 and M_2 with hardcoded secret keys sk_1 (resp. sk_2) that act honestly during joining time, thus obtaining new fresh secret keys, but that compute valid signatures using the hardcoded secret keys. Suppose the scheme has a secret-key-based revocation mechanism; then the adversary, who knows sk_1 and sk_2, can easily distinguish which machine produced the signature. In particular, it could verify the challenge signature using the revocation list {sk_1}. Because the signatures are anonymous, the sanitizer, which only possesses public information, has no way to detect that a different secret key has been used and so cannot prevent this attack.
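The attack is easy to replay in a toy model. We again use a simplified per-basename tag T = H(bsn)^sk in a small subgroup (our own illustrative stand-in for private-key revocation, not the scheme's algebra): the adversary who planted hardcoded keys sk_1, sk_2 tests the challenge signature against the revocation list {sk_1} and thereby identifies the signer.

```python
# Toy illustration of why the verification-token mechanism is needed.
# Simplified model (NOT the actual scheme): a signature reduces to a tag
# T = H(bsn)^sk, and private-key revocation recomputes T from each revoked key.
import hashlib

P, Q = 467, 233                      # toy subgroup of order Q inside Z_P^*

def H(bsn: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(bsn).digest(), "big") % P
    return pow(h, 2, P)

def sig_tag(sk: int, bsn: bytes) -> int:
    return pow(H(bsn), sk, P)

def revoked(tag: int, bsn: bytes, priv_rl) -> bool:
    return any(sig_tag(sk, bsn) == tag for sk in priv_rl)

sk1, sk2 = 11, 13                    # hardcoded keys known to the adversary
challenge = sig_tag(sk2, b"bsn*")    # challenge signature produced by M_2
if H(b"bsn*") != 1:                  # holds except with negligible probability
    # the adversary identifies M_2: the signature survives revocation of sk1
    assert not revoked(challenge, b"bsn*", [sk1])
    assert revoked(challenge, b"bsn*", [sk2])
```

The sanitizer cannot run this test usefully, since it holds no information distinguishing the fresh key from the hardcoded one; the verification token closes exactly this gap.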
Finally, we notice that the model without the verification-token mechanism, after some necessary cosmetic changes, fits the cryptographic reverse firewall framework. In the lingo of [22], the sanitizer of a scheme satisfying the anonymity property that works without the verification-token mechanism is a cryptographic reverse firewall that weakly preserves the anonymity property for the signer.
Another aspect of the anonymity experiment that we would like to point out is that the adversary receives the verification token immediately after the Join protocol is over. This models the fact that the adversary could have access to the internal state of an honest sanitizer (except for its random tape), and this does not break anonymity.
Subversion-Resilient Unforgeability. This notion formalizes the idea that an adversary who does not control the issuer cannot generate signatures on new messages on behalf of non-corrupted platforms. To model subversion attacks, we let the platform signer M_i be an adversarially specified program. The sanitizer S_i is instead honest (unless the platform is fully corrupted).
Here we provide an intuition of the notion. The idea is that the adversary receives the group public key, and it can do the following: (1) ask platforms with subverted signers to join the system; (2) ask corrupted platforms to join the system; (3) ask platforms with subverted signers to sign messages; (4) corrupt platforms. For (1), this means that the adversary specifies the code of a signer M_i, and that signer, together with the sanitizer S_i, runs the Join protocol where both the issuer and S_i are controlled by the challenger. For (2), the adversary runs the Join protocol with the challenger playing the role of the issuer, whereas both the signer M_i and the sanitizer S_i are fully controlled by the adversary. For (3), the adversary asks a platform that joined the system to create a signature using the subverted signing algorithm (specified in M_i at Join time); this signature is sanitized by S_i and given to the adversary. Finally, (4) simply models a full corruption of the platform, in which the adversary learns the secret key sk_i obtained by M_i at the end of its Join protocol.

The adversary's goal is to produce a valid signature on a basename-message tuple bsn*, M*. On the one hand, we cannot require the tuple bsn*, M* to be fresh, since it is reasonable to assume that multiple platforms may sign the same bsn*, M*. On the other hand, strong unforgeability is impossible, as we require that signatures be valid before and after sanitization. To satisfy these two apparently contrasting requirements simultaneously, we instead require that the adversary's forgery does not link to any of the other queried signatures on the same basename-message tuple. This essentially guarantees that the forgery is not a trivial re-randomization of a signature obtained through a signing query.
Since an SR-EPID is a (kind of) group signature, and in the above game the adversary may have learnt the secret keys of some group members, we add some additional checks to formalize what counts as a forgery, namely to rule out trivial attacks that are unavoidable in this model. Intuitively, we want the signature to verify with respect to a private-key revocation list PrivRL* (resp. signature-based revocation list SigRL*) that includes the secret keys of (resp. a signature from) all corrupted group members. These corrupted group members include both the ones that honestly joined the system and were later corrupted, and those that were already corrupted (i.e., adversarially controlled) at join time. Modeling which keys should be revoked is not straightforward, though. The first issue is that, in case of a corrupted platform joining the group, the challenger does not know which key the adversary obtained. Essentially, unless we revoke exactly that key or a signature produced with that key, the adversary is able to create valid signatures on any message of its choice. The second issue is similar and involves cases when a platform with a subverted signer joins the group: the challenger obtains a secret key sk_i from the signer M_i at the end of the Join protocol, but M_i is subverted and thus we have no guarantee that sk_i is the "real" secret key. To define forgeries, we solve these issues by assuming the existence of an extractor that, by knowing a trapdoor and seeing the transcript of the Join protocol between the issuer and the sanitizer, can extract a token uniquely linkable (via an efficient procedure) to the secret key that is supposed to correspond to such a transcript. This definition is close to the notion of uniquely identifiable transcripts used by [6] for DAA schemes. We stress that the extractor does not exist in the real world and is only an artifact of the security definition.
A practical interpretation of our definition is that unforgeability is guaranteed under the assumption that the revocation system is "perfect", namely that one revokes all the secret keys, or signatures produced by those secret keys, that an adversary obtained by interacting with the issuer in the Join protocol.
We formalize subversion-resilient unforgeability for SR-EPID via the experiment of Fig. 2, and we formally define unforgeability as follows.
Definition 2. Consider the experiment described in Fig. 2. We say that an SR-EPID Π is unforgeable if there exist PPT algorithms CheckTK, CheckSig, and a PPT extractor E = (E 0 , E 1 ) such that the following properties hold: 1. For any pair of keys (gpk, isk) in the support of Setup(pub) and for any (even adversarial) tk, sk 1 , sk 2 , if CheckTK(gpk, sk 1 , tk) = 1 and CheckTK(gpk, sk 2 , tk) = 1 then sk 1 = sk 2 . (Namely, any tk is uniquely associated to one and only one sk.) 2. For any pair of keys (gpk, isk) in the support of Setup(pub) and for any (even adversarial) tk, sk, M, bsn, σ, SigRL, PrivRL such that Vf(gpk, bsn, M, σ, SigRL, PrivRL) = 1 and Vf(gpk, bsn, M, σ, SigRL, PrivRL ∪ {sk}) = 0, it is always the case that CheckTK(gpk, sk, tk) = 0 ∨ CheckSig(gpk, tk, σ) = 1. (Namely, the token tk and the algorithm CheckSig allow one to verify whether a signature comes from a specific secret key.) 3. For any PPT adversary A, Pr[Exp unf A,E,Π (1 λ ) = 1] is negligible in λ.

Technical details. Besides the use of the extractor, the security experiment is rather technical in some of its parts. Here we explain the main technicalities. As mentioned earlier, the structure of the experiment is that the adversary receives the group public key and then starts interacting with the oracle. The experiment maintains lists L join , L usr , L corr , L msg to keep track of the state of the Join protocol sessions, the lists of uncorrupted and corrupted platforms, respectively, and the list of the messages on which the adversary obtained signatures.
After interacting with the oracle, the adversary outputs a message M * , a basename bsn * , a signature σ * , and revocation lists PrivRL * , SigRL * . The adversary wins if either event (4), or the conjunction of events (1), (2) and (3), occurs. Intuitively, event (4) means that the adversary has "fooled" the extractor. Namely, the adversary produced a secret key sk (provided in the private-key revocation list PrivRL * ) that the algorithm CheckTK recognizes as associated to a token tk extracted by E 1 , but sk is not a valid signing key. In other words, our definition requires that any secret key extracted by E 1 be valid. For the other winning case, events (2) and (3) are a generalization of the classical winning condition of digital signatures, i.e., where the adversary returns a valid signature on a new message. The conjunction of events (2) and (3) is more general than the classical unforgeability notion because, instead of considering as new just the message, we also include the basename and, more importantly, the fact that the forged signature apparently comes from a machine that either has never been set up or has never signed the basename-message tuple.
Event (1), instead, is there to avoid trivial attacks due to the possibility of corrupting group members. Basically, (1) ensures that, for any corrupted platform, either its secret key is in PrivRL * or a signature produced by that platform is in SigRL * . For the latter condition to be efficiently checkable in the experiment, we require the existence of an algorithm CheckSig for this purpose that works with the token tk extracted by E 1 .
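To make property 1 of Definition 2 concrete, consider a toy (and deliberately insecure) instantiation of our own, not the paper's actual scheme: the secret key is an exponent sk ∈ Z_q and the token tk = g^sk plays the role of the group element [y] 2 . CheckTK simply re-computes the exponentiation, and uniqueness of the associated sk follows from the injectivity of x ↦ g^x in a prime-order group:

```python
# Toy instantiation of CheckTK (illustrative assumption, not the paper's scheme):
# sk is an exponent in Z_q and the token tk = g^sk plays the role of [y]_2.
q = 1019               # prime order of the subgroup (insecurely small, for brute force)
p = 2 * q + 1          # 2039, also prime
g = 4                  # generator of the order-q subgroup of Z_p*

def check_tk(sk: int, tk: int) -> bool:
    """CheckTK: accept iff the token corresponds to the key (here: tk == g^sk)."""
    return pow(g, sk % q, p) == tk

sk = 123
tk = pow(g, sk, p)
assert check_tk(sk, tk)
# Property 1 (uniqueness): no other key matches the same token, because
# x -> g^x is injective on Z_q when g generates a group of prime order q.
assert all(not check_tk(sk2, tk) for sk2 in range(q) if sk2 != sk)
```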
With honest join queries the adversary specifies the code of a signer M i , which then runs the Join protocol with an honest issuer and an honest sanitizer controlled by the challenger.At the end, if the issuer accepts, we extract a secret-key token tk i from the transcript τ of the Join protocol, and we store information about M i , its state, the verification token and the extracted secret-key token.The verification token svt i is also returned to the adversary.
With dishonestP join queries the adversary can let a fully corrupted platform (i.e., both M i and S i are under its control) join the group. In this case, the adversary runs the Join protocol with the honest issuer controlled by the challenger: the oracle allows the adversary to start a Join session and then send one message, γ, at a time; lines 9-11 formalize this step-by-step execution of the honest issuer on each message sent by the adversary on behalf of S i . At the end, if the issuer accepts, we extract a secret-key token tk i from the transcript τ of the Join protocol, and we store this token in the list L corr of corrupted users.
With dishonestS join queries we consider the case in which the adversary fully controls the sanitizer but the signer is not subverted. In this case, the oracle allows the adversary to take part in the Join protocol with the honest issuer and honest signer. This is done by letting the adversary send messages to either M or I; lines 15-17 formalize this step-by-step execution of the honest issuer and honest signer on each message γ sent by the corrupted sanitizer. At the end, if the issuer accepts, we extract a secret-key token tk i from the transcript τ of the Join protocol, and we store all the relevant information in the list L usr of honest platforms. Note that in this case we do not necessarily know the verification token, since it is received by the sanitizer, which is the adversary.
For sign queries, the oracle first checks that the platform has joined the system and, if so, it lets the (possibly subverted) signer M i generate a signature σ and a corresponding proof π σ . Next, if svt i ≠ ⊥ the signature is sanitized and given to the adversary; otherwise, a non-sanitized signature is returned. Notice that the case svt i = ⊥ (when i is in L usr ) can occur only if the platform joined the system via a dishonestS join query, in which case the sanitizer is controlled by the adversary but, we recall, the signer is not subverted.
Finally, corrupt queries allow the adversary to corrupt an existing platform, which may have joined through either an honest join or a dishonestS join query. As a result, the adversary learns the internal state of the signer, which is supposed to contain the secret key (note that the state of the sanitizer, i.e., the verification token, was already returned after the Join).
Subversion-Resilient Unforgeability in the Random Oracle Model. In order to also capture constructions in the random oracle model (ROM), as ours, we provide a suitable adaptation of the unforgeability definition. A dedicated ROM-based definition is needed in order to consider extractors that may simulate, and program, the random oracle. The ROM definition is essentially the same as Def. 2, except that condition (3) is modified to account for the programmability powers granted to the extractor. In more detail, all the random oracle queries (both those made by the adversary and those made by the corrupted signer M i ) are passed to the extractor, which is now a stateful machine; the extractor must provide a view to the adversary that is indistinguishable from the real-world view, where the ROM outputs uniformly random strings. To formalize this, we consider a dummy extractor Ẽ that (i) initializes the public parameters as done by the SR-EPID scheme, and (ii) does not program the ROM answers, but simply outputs uniformly random values. We additionally require that the view of the adversary in an execution of the experiment with the extractor and its view in an execution of the experiment with the dummy extractor be indistinguishable. Namely, all the queries to the random oracle made by A are redirected to, and answered by, the extractor E. We say that an SR-EPID scheme Π is unforgeable in the ROM if conditions (1), (2), (3) and (4) of Def. 2 hold and, additionally, the view of the adversary at the end of the experiment Exp unf A,E,Π (1 λ ) and the view of the adversary at the end of the experiment Exp unf A, Ẽ,Π (1 λ ) are computationally indistinguishable.
Comparison with Unforgeability of EPID. The notion of unforgeability defined above closely follows the one defined for EPID in [8], with the following main differences. First, in [8] there is no sanitizer. Second, in [8] the adversary cannot specify a subverted signer, namely honest join and sign queries are executed according to the protocol description. Third, valid forgeries in [8] include fresh signatures on messages already signed by the oracle. Such a forgery is not valid in our case since signatures are sanitizable (essentially re-randomizable).
Notice that the unforgeability definition of [8] requires the adversary to return the secret key obtained via dishonest join queries (called Join of type (i) in [8]). Nevertheless, the definition does not enforce at any point that the adversary return the correct key. Possibly the authors are implicitly assuming that the adversary is honest at this stage, and this is what seems to be used in the security proof (where the reduction does not even look at the key returned by the adversary but uses the key extracted from the PoK made by A during the Join protocol). This is a quite strong assumption. If this assumption is not made, we can show an attack. A first performs a dishonest join query by playing honestly (the same works if this query is an honest join followed by corrupt) and obtains a key sk 1 . Next, A performs another dishonest join query in which it again plays honestly in the Join protocol; it obtains another key sk 2 but returns sk 1 to the challenger. When it comes to the forgery step, from the point of view of the challenger the key that must be in PrivRL * is sk 1 (maybe twice). This means that, technically, sk 2 is not revoked, and thus the adversary can use it to create a signature that passes the forgery checks and wins the game. Note that this attack works even if the forgery checks ensure that every sk in PrivRL * must be "valid" (this check was proposed as part of the Revoke algorithm of the EPID construction).
In our definition of unforgeability we avoid the above attack by requiring a security property of the Join protocol. Specifically, the Join protocol is such that, if the execution of the protocol ends successfully, then the platform must have learned one (and only one) secret key. We formalize this by requiring the existence of an extractor that can find this key by only looking at the transcript. In this way, we avoid the unrealistic requirement that the adversary surrender all the corrupted secret keys. Notice that the existence of the extractor serves only a definitional purpose, namely, only to assess the security statement that "unforgeability holds if all the corrupted secret keys are revoked".
Subversion-Resilient Non-frameability.This notion formalizes the idea that an adversarial issuer should not be able to produce a signature that links to the identity of an honest platform.Since "linking" is only possible across signatures, we treat non-frameability as the property that guarantees that no adversary can output a signature that links to another signature output by an honest platform.
We formalize subversion-resilient non-frameability for SR-EPID in a security experiment in Fig. 3, and we formally define non-frameability as follows.
Definition 4. Consider the experiment described in Fig. 3. We say that an SR-EPID Π is non-frameable if, for any PPT adversary A, the probability that the experiment of Fig. 3 outputs 1 is negligible in λ.

Here we provide an intuition on the notion. Similarly to the anonymity experiment, in the non-frameability one the adversary plays the role of the issuer and can do the following: (1) ask platforms with subverted signers to join the system; (2) ask platforms with subverted signers to sign messages; (3) corrupt platforms. For (1), this means that the adversary specifies the code of a signer M i , and that signer, together with the sanitizer S i , runs the Join protocol, where both the issuer and S i are controlled by the challenger. For (2), a platform that joined the system creates a signature using the subverted signing algorithm (specified in M i ); this signature is sanitized by the honest sanitizer S i and given to the adversary. Finally, (3) simply models a full corruption of the platform, in which the adversary learns the secret key sk i obtained by M i at the end of its Join protocol.
The adversary must output (i * , bsn * , M * , σ * ), providing the victim platform index i * and a basename-message-signature triple bsn * , M * , σ * . The adversary wins the experiment if (1) σ * is a valid signature for bsn * , M * , (2) the signature "links" to one of the signatures produced by the oracle when queried on platform i * , and (3) whenever the oracle has output a signature on bsn, M on behalf of platform i * with bsn = bsn * , then M ≠ M * . In the experiment, the challenger keeps a list L i of signatures and their respective basename-message pairs for each of the non-corrupted platforms that have joined the group.

Bilinear groups
An asymmetric bilinear group generator is an algorithm G that, on input a security parameter 1 λ , produces a tuple bgp = (p, G 1 , G 2 , G T , e, P 1 , P 2 ), where G 1 , G 2 and G T are groups of prime order p ≥ 2 λ , the elements P 1 , P 2 are generators of G 1 , G 2 respectively, and e : G 1 × G 2 → G T is an efficiently computable, non-degenerate bilinear map. In our construction we use Type-3 groups, in which it is assumed that there is no efficiently computable isomorphism between G 1 and G 2 . We use the bracket notation introduced in [15]. Elements in G i are denoted in implicit notation as [a] i := aP i , where i ∈ {1, 2, T } and P T := e(P 1 , P 2 ). Every element in G i can be written as [a] i for some a ∈ Z p , but note that, given [a] i , it is in general hard to compute a ∈ Z p (discrete logarithm problem). Given a, b ∈ Z p , we distinguish between [ab] i , namely the group element whose discrete logarithm base P i is ab; [a] i · b, namely the multiplication of the group element [a] i by the scalar b; and [a] 1 · [b] 2 := e([a] 1 , [b] 2 ), namely the pairing of [a] 1 and [b] 2 . Vectors and matrices are denoted in boldface, and we extend the pairing operation to vectors and matrices in the natural entry-wise fashion. All the algorithms we describe next take implicitly as input the public parameters bgp.
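As a sanity check of the bracket notation, the following insecure toy model (our own illustration: additive groups Z_p with e(a, b) = ab mod p, a stand-in for a pairing-friendly elliptic-curve group) makes the identities above executable:

```python
# Insecure toy bilinear group: G1 = G2 = GT = (Z_p, +), P1 = P2 = 1,
# and e(a*P1, b*P2) = a*b mod p. Real schemes use elliptic-curve pairings.
p = 1019  # prime group order

def br(a: int) -> int:
    """Implicit (bracket) notation: [a]_i := a * P_i; here P_i = 1, so a mod p."""
    return a % p

def e(x: int, y: int) -> int:
    """The bilinear map e : G1 x G2 -> GT."""
    return (x * y) % p

a, b = 57, 923
assert (br(a) + br(b)) % p == br(a + b)            # [a]_i + [b]_i = [a + b]_i
assert (br(a) * b) % p == br(a * b)                # [a]_i . b     = [ab]_i
assert e(br(a), br(b)) == br(a * b)                # e([a]_1, [b]_2) = [ab]_T
assert e(br(2 * a), br(b)) == e(br(a), br(2 * b))  # bilinearity
assert e(br(1), br(1)) != 0                        # non-degeneracy
```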

Structure-Preserving Signatures
A signature scheme over groups generated by G is a triple of efficient algorithms (KGen, Sig, Ver).Algorithm KGen outputs a public verification key vk and a secret signing key sk.Algorithm Sig takes as input a signing key and a message m in the message space, and outputs a signature σ.Algorithm Ver takes as input a verification key vk, a message m and a signature σ, and returns either 1 or 0 (i.e., "accept" or "reject", respectively).The scheme (KGen, Sig, Ver) is correct if for every correctly generated key-pair vk, sk, and for every message m in the message space, we have Ver(vk, m, Sig(sk, m)) = 1.
We say that a signature scheme (KGen, Sig, Ver) is existentially unforgeable under adaptive chosen message attack (EUF-CMA) if, for any PPT adversary A with access to a signing oracle Sig(sk, ·), the probability that A outputs a pair (m * , σ * ) such that Ver(vk, m * , σ * ) = 1 and m * ∉ Q is negligible, where Q is the set of messages queried by A to the signing oracle. A stronger notion of unforgeability, named "strong" EUF-CMA or sEUF-CMA, further prevents the adversary from forging a new signature on a message that has already been signed. This notion is captured by modifying the above definition so that (m * , σ * ) ∉ Q, where Q is now defined as the set of message-signature pairs stemming from the adversary's queries to the signing oracle. Finally, a signature scheme over groups generated by G is structure-preserving [1] if (1) the verification key, the messages, and the signatures consist solely of elements of G 1 and G 2 , and (2) the verification algorithm evaluates the signature by deciding group membership of the elements in the signature and by evaluating pairing product equations.
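To make the (KGen, Sig, Ver) interface and the correctness condition concrete, here is a Lamport one-time signature sketch. This is our own illustrative example: it is hash-based, one-time, and not structure-preserving, so it could not instantiate SS; it only illustrates the syntax.

```python
import hashlib, secrets

def H(x: bytes) -> bytes:
    return hashlib.sha256(x).digest()

def kgen(n: int = 256):
    """KGen: one secret preimage pair per bit of the hashed message."""
    sk = [(secrets.token_bytes(32), secrets.token_bytes(32)) for _ in range(n)]
    vk = [(H(s0), H(s1)) for s0, s1 in sk]
    return vk, sk

def bits(m: bytes, n: int):
    d = int.from_bytes(H(m), "big")
    return [(d >> i) & 1 for i in range(n)]

def sig(sk, m: bytes):
    """Sig: reveal one preimage per message bit."""
    return [sk[i][b] for i, b in enumerate(bits(m, len(sk)))]

def ver(vk, m: bytes, sigma) -> int:
    """Ver: check every revealed preimage against the verification key."""
    return int(all(H(s) == vk[i][b]
                   for (i, b), s in zip(enumerate(bits(m, len(vk))), sigma)))

vk, sk = kgen()
m = b"attest me"
assert ver(vk, m, sig(sk, m)) == 1   # correctness: Ver(vk, m, Sig(sk, m)) = 1
assert ver(vk, b"another message", sig(sk, m)) == 0   # wrong message rejected
```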

Non-Interactive Zero-Knowledge Proof of Knowledge
A non-interactive zero-knowledge (NIZK) proof system for a relation R is a tuple N IZK = (Init, P, V) of PPT algorithms such that: Init on input the security parameter outputs a (uniformly random) common reference string crs ∈ {0, 1} λ ; P(crs, x, w), given (x, w) ∈ R, outputs a proof π; V(crs, x, π), given instance x and proof π outputs 0 (reject) or 1 (accept).
In this paper we consider the notion of NIZK with labels, that is, NIZKs where P and V additionally take as input a label L ∈ L (e.g., a binary string). A NIZK (with labels) is correct if for every crs $ ← Init(1 λ ), any label L ∈ L, and any (x, w) ∈ R, we have V(crs, L, x, P(crs, L, x, w)) = 1. We also use NIZKs that are adaptively extractable sound, meaning that: (i) there is an algorithm Init snd that outputs a common reference string, indistinguishable from a uniformly random string, together with an extraction trapdoor tp e ; and (ii) there exists a PPT algorithm E(tp e , x, π) such that, for every PPT adversary A that outputs an accepting statement-proof pair (x, π), the extractor E(tp e , x, π) outputs a witness w with (x, w) ∈ R, except with negligible probability.
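For intuition, a minimal NIZK with labels in the ROM can be sketched as a Fiat–Shamir-compiled Schnorr proof of knowledge of a discrete logarithm, with the label hashed into the challenge. This is our own illustrative example with toy parameters, not the proof system used in the construction:

```python
import hashlib, secrets

q = 1019; p = 2 * q + 1; g = 4   # toy order-q subgroup of Z_p* (insecurely small)

def ro(*parts) -> int:
    """Random oracle mapping (label, statement, commitment) to a challenge in Z_q."""
    digest = hashlib.sha256("|".join(map(str, parts)).encode()).digest()
    return int.from_bytes(digest, "big") % q

def prove(label: str, x: int):
    """P(L, x): proof of knowledge of x with h = g^x; the label is bound into c."""
    h = pow(g, x, p)
    r = secrets.randbelow(q)
    a = pow(g, r, p)              # commitment
    c = ro(label, g, h, a)        # Fiat-Shamir challenge, label included
    z = (r + c * x) % q           # response
    return h, (a, z)

def verify(label: str, h: int, proof) -> bool:
    """V(L, x, pi): accept iff g^z = a * h^c for the label-bound challenge c."""
    a, z = proof
    c = ro(label, g, h, a)
    return pow(g, z, p) == (a * pow(h, c, p)) % p

h, pi = prove("bsn||M", 77)
assert verify("bsn||M", h, pi)                    # accepts under the right label
a, z = pi
assert not verify("bsn||M", h, (a, (z + 1) % q))  # tampered proof rejected
# A proof made under label L verifies under a different label L' only with
# probability about 1/q, since the challenge c depends on the label.
```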

Definition 5 (Adaptive composable perfect zero-knowledge). A NIZK N IZK for relation R satisfies adaptive composable perfect zero-knowledge if the following properties hold:
where the experiment is defined in Fig. 4.

Malleable Proofs. We use the definitional framework of Chase et al. [13] for malleable proof systems.
For simplicity of exposition we consider only unary transformations (see the aforementioned paper for more details). Let T = (T x , T w ) be a pair of efficiently computable functions, which we refer to as a transformation.
Definition 7 (Admissible transformations [13]). An efficient relation R is closed under a transformation T = (T x , T w ) if for any (x, w) ∈ R the pair (T x (x), T w (w)) ∈ R. If R is closed under T then we say that T is admissible for R. Let T be a set of transformations. If every T ∈ T is admissible for R, then T is an allowable set of transformations.
Definition 8 (Malleable NIZK [13]). Let N IZK = (Init, P, V) be a NIZK for a relation R. Let T be an allowable set of transformations for R. The proof system N IZK is malleable with respect to T if there exists a PPT algorithm ZKEval that, on input (crs, L, (x, π), T ), where T ∈ T , L is a label and V(crs, L, x, π) = 1, outputs a valid proof π′ for the statement x′ = T x (x).
For malleable NIZKs one can define the property that one should not be able to distinguish between "freshly" generated proofs and derived ones. This property is formalized with the notion of derivation privacy.

Definition 9. Let N IZK = (Init, P, V, ZKEval) be a malleable NIZK argument for a relation R and an allowable set of transformations T . We say that N IZK is strong derivation private if for any PPT adversary A the advantage in the game Exp der-priv described in Fig. 4 is negligible. Moreover, we say that N IZK is perfectly strong derivation private (resp. statistically strong derivation private) when for any (possibly unbounded) adversary the advantage above is 0 (resp. negligible).
Re-randomizable NIZKs. First, we notice that the derivation privacy property implicitly says that proofs are re-randomizable (since outputs of ZKEval are indistinguishable from freshly generated proofs). In the special case of a malleable NIZK where the only allowable transformation is the identity function, we simply say that it is a re-randomizable NIZK, and we omit the transformation from the inputs of ZKEval.

Our SR-EPID Construction
In this section we describe our construction of a subversion-resilient EPID.We start by providing a high-level explanation of our technique, next we describe the scheme, discuss how to instantiate it efficiently, and prove its security.
An Overview of Our Scheme. We elaborate further on the overview from Sec. 1.1. Recall that our construction follows the classical template, common to many group signature schemes, of proving in zero-knowledge the knowledge of a signature originated by the issuer. In particular: (I) The issuer I keeps a secret key isk of a (structure-preserving) signature scheme. (II) The secret key of a platform is a signature σ sp on a Pedersen commitment [t] 1 whose opening y is known to the signer only.
Following the description given in Sec. 1.1, the conjunction of σ sp and [t] 1 forms a blind signature on y. (III) The signer generates a signature on a message M and basename bsn by creating a NIZK, with label (bsn, M ), of the knowledge of a valid signature σ sp made by I on a commitment [t] 1 , and of the knowledge of the opening of such commitment to a value y. To realize the NIZK, our idea is to use a random oracle H to hash the string bsn, M and use the output string as the common reference string of a (malleable) NIZK for the knowledge of σ sp , the commitment [t] 1 , and the opening y = (y 0 , y 1 ). Furthermore, to be able to re-randomize the signature, we make use of a re-randomizable NIZK. (IV) To support revocation and linkability, the final signature additionally contains the pseudorandom value [c 1 ] 1 := K(bsn) · y 0 , where K is a random oracle. In more detail, linkability is trivially obtained, as two signatures by the same signer and for the same basename share the same value of [c 1 ] 1 , while for (signature-based) revocation we additionally let the signer prove that each of the revoked signatures contains a [c 1 ] 1 computed with a value y 0 ′ different from its own y 0 .
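The linkability mechanism of step (IV) can be sketched as follows. This is a toy model of our own in which K hashes the basename into Z_q and the group operation is plain modular multiplication; real instantiations hash into G 1 :

```python
import hashlib

q = 2**61 - 1  # a Mersenne prime, used as a toy stand-in for the group order

def K(bsn: str) -> int:
    """Random oracle K: hashes the basename into Z_q (stand-in for hashing into G1)."""
    return int.from_bytes(hashlib.sha256(bsn.encode()).digest(), "big") % q

def pseudonym(bsn: str, y0: int) -> int:
    """[c1]_1 := K(bsn) . y0; in this additive toy group, K(bsn) * y0 mod q."""
    return (K(bsn) * y0) % q

# y0 components of two different signers' secret keys (arbitrary toy values):
alice_y0, bob_y0 = 1234567, 7654321
# Same signer, same basename: the two signatures share [c1]_1 (linkable).
assert pseudonym("service-A", alice_y0) == pseudonym("service-A", alice_y0)
# Same signer, different basenames: pseudonyms differ (unlinkable across basenames).
assert pseudonym("service-A", alice_y0) != pseudonym("service-B", alice_y0)
# Different signers, same basename: pseudonyms differ (basis of revocation checks).
assert pseudonym("service-A", alice_y0) != pseudonym("service-A", bob_y0)
```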
Specific Building Blocks. Our scheme works over bilinear groups generated by a generator G, and it makes use of the following building blocks:
- A structure-preserving signature scheme SS = (KGen sp , Sig sp , Ver sp ) where messages are elements of G 1 and signatures are in
- A re-randomizable NIZK N IZK sign for the relation R sign defined as: , and y = (y 0 , y 1 ) T . To simplify the exposition, in the description of the protocol below we omit gpk (the public key of the scheme) from the instance, and we consider ([b] 1 , SigRL) as an instance for the relation.
- A malleable and re-randomizable NIZK N IZK com for the following relation R com and set of transformations T com defined below: Namely, the relation proves knowledge of the opening of a Pedersen commitment (in G 1 ) whose commitment key is [h] 1 . The transformation allows one to re-randomize the commitment by adding fresh randomness.
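The relation R com and the transformation T com can be illustrated with a toy Pedersen commitment over a small multiplicative group (our own sketch, insecure parameters, for exposition only): re-randomizing C with fresh s maps the witness (m, r) to (m, r + s), exactly an admissible transformation in the sense of Def. 7.

```python
# Toy Pedersen commitment in the order-q subgroup of Z_p* (insecure parameters).
q = 1019                   # prime subgroup order
p = 2 * q + 1              # 2039, also prime
g = 4                      # generator of the order-q subgroup
h = pow(g, 5, p)           # second base; in practice log_g(h) must be unknown

def commit(m: int, r: int) -> int:
    """Pedersen commitment C = g^m * h^r (the statement of R_com)."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def rerandomize(C: int, s: int) -> int:
    """T_x: re-randomize the commitment as C' = C * h^s."""
    return (C * pow(h, s, p)) % p
# T_w: the corresponding witness transformation maps (m, r) to (m, r + s).

m, r, s = 42, 7, 99
C  = commit(m, r)
C2 = rerandomize(C, s)
assert C2 == commit(m, (r + s) % q)   # (T_x(x), T_w(w)) is again in R_com
assert C2 != C                        # fresh randomness yields a new commitment
```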
Theorem 1. If SS is EUF-CMA secure, both N IZK sign and N IZK com are adaptive extractable sound, perfect composable zero-knowledge, and strong derivation private, N IZK svt is adaptive extractable sound and composable zero-knowledge, and both the XDH assumption in G 1 and Assumption 1 hold, then the SR-EPID presented above is unforgeable in the ROM.
We first give a proof sketch. To prove unforgeability we need to define an extractor: its main idea is to program the random oracle J to output strings (used as common reference strings in the protocol) that come with extraction trapdoors. Recall that, by the properties of the NIZK, such strings are indistinguishable from random strings. Then, whenever required, the extractor can run the NIZK extractor on the NIZK proof provided by the platform during the Join protocol to obtain a value [y] 2 . Finally, looking at the transcript of the Join protocol, the extractor can produce the token tk = ([t] 1 , σ sp , [y] 2 ). Notice that the created token looks almost like the secret key, with the only difference that, in the secret key, the value y is given in Z 2 p . It is clear that the token is uniquely linked to the secret key.
With this extractor in place, we proceed with a sequence of hybrid experiments to prove unforgeability. In the first part of the hybrid argument (from H 0 to H 6 in the formal proof) we exploit the programmability of the random oracle to puncture the tuple (bsn * , M * ) selected by the adversary for its forgery. In particular, we reach a stage where we can always extract the witnesses from valid signatures for (bsn * , M * ), while for all the other basename-message tuples the challenger can always send the adversary simulated signatures. To reach this point, we make use of the strong derivation privacy property of the NIZK proof system (which states that re-randomizations of valid proofs are indistinguishable from brand-new simulated proofs for the same statement). Specifically, we can switch from signatures produced by the subverted hardware and re-randomized by the challenger of the experiment to signatures directly simulated by the challenger. The latter cuts off any possible channel that the subverted machines could set up with the adversary using biased randomness.
At this point we can define the set Q sp of all the messages [t] 1 signed by the challenger (impersonating the issuer) using the structure-preserving signature scheme. Notice that our definition allows the adversary to query the challenger for a signature on the tuple (bsn * , M * ) itself. As the signatures for this basename-message tuple are always extractable, the challenger has no way to simulate them. However, by the security definition, the adversary is bound to output a forgery that does not link to any of the signatures for (bsn * , M * ) output by the challenger. We exploit this property, together with the fact that two non-linkable signatures must have different values of y 0 , to show that the forged signature must be produced with a witness that contains a fresh value [t * ] 1 that is not in Q sp . Slightly more technically, we can reduce this to the binding property of the Pedersen commitment scheme that we use. Now, we can divide the set of adversaries into two classes: those that produce a forged signature where [t * ] 1 is in Q sp and those where [t * ] 1 is not in Q sp . For the latter, we can easily reduce to the unforgeability of the structure-preserving signature scheme. For the former, instead, we need to proceed with more caution.
First of all, the previous step assures us that adversaries from the first class never query the signature oracle on (bsn * , M * ). Secondly, we use the puncturing technique again; this time, however, we select the platform (say, platform number j * ) that is linked to the forged signature. By the definition of this class of adversaries, such a platform always exists. For this platform, we switch the common reference string used in the Join protocol to zero-knowledge mode. Once we are in zero-knowledge mode, we can use strong derivation privacy to make sure that the Join protocol does not leak any information about the secret key that the platform computes (even if the machine is corrupted). At this point the secret key of the j * -th platform is completely hidden from the view of the adversary; in fact: (1) all the signatures are simulated and (2) the Join protocol of the j * -th platform is simulated. However, the j * -th platform is still using a subverted machine which, although it can no longer communicate with the outside adversary using biased randomness, still receives the secret key. We show that we can substitute this subverted machine with a well-behaved machine that might abort during the Join protocol but that, if it does not abort, always signs every basename-message tuple it receives (here we rely on Assumption 1).
The last step is to show that such a forgery would break the hiding property of the Pedersen commitment scheme that we make use of.
Proof. Given an adversary A for the unforgeability game, we assume, without loss of generality, that if the adversary sends the query (sign, * , bsn, M, * ) for some bsn, M , then it has already queried the random oracle H on the tuple (bsn, M ).
Given a PPT adversary A, we define the extractor E. Let E com be the extractor for N IZK com . The extractor E is defined below: Extractor E(•): - At the first call, initialize the database D RO as empty and generate the group parameters bgp. Property 1 is clearly satisfied: the function that maps x ∈ Z p to [x] 2 ∈ G 2 is injective. Property 2 holds as well: step (3) of the verification algorithm checks that [c 0 ]

In the following we define two sequences of hybrid experiments. In the first sequence of hybrids we consider the random variable view A,i , that is, the view of the adversary A in the hybrid experiment H i . Recall that Def. 3 also requires comparing the view of the adversary in the unforgeability experiment with the dummy extractor to the same view with the extractor defined above.
Let H 0 (λ) := Exp unf A, Ẽ,Π (λ), namely the experiment run with the dummy extractor, which answers the random-oracle queries as a random oracle would.
, where E′ is the same as E, as defined above, except that, when called on input (extract, τ ), it simply returns ⊥.

Proof. Notice that the difference between the two hybrids is that in the second the extractor additionally computes the tokens tk. However, these tokens are never added to the view of the adversary.
By the two lemmas above and the triangle inequality, we already have the extra condition of unforgeability in the ROM (Def. 3).
In the next sequence of hybrids we gradually modify the winning condition of the adversary. Recall that in the unforgeability experiment of Fig. 2 we defined the winning condition of the adversary as W := ((1) ∧ (2) ∧ (3)) ∨ (4). As for notation, we call W i the winning condition in the hybrid experiment H i , we set W 0 := W and, whenever we do not mention it explicitly, we set W i := W i−1 .
Proof. We reduce to the adaptive knowledge soundness of N IZK com . Moreover, we rely on the perfect correctness of N IZK sign and of the signature scheme SS.

Hybrid H 2 (λ). Let H 2 be the same as H 1 but with a modified winning condition. In particular, let q H be an upper bound on the number of oracle queries made by A to H; w.l.o.g., we assume the adversary does not query the RO twice on the same input. The hybrid samples an index i * $ ← [q H ] and a common reference string crs * , tp e * $ ← N IZK sign .Init snd (bgp). At the i * -th call to the random oracle H, it sets the output of the random oracle to be crs * . Moreover, consider the condition (5) defined as: (bsn * , M * ) (the basename-message tuple of the forgery) is queried to the random oracle H at the i * -th query.
Proof. First consider the intermediate hybrid H 2,1 , equal to H 2 (we sample crs * and assign it to the i * -th query to the random oracle) but where we do not change the winning condition. By property (i) of Def. 6 (extractable-sound CRSs are indistinguishable from random strings), the view of the adversary is independent of the random variable i * . Moreover, the probability of (5) is 1/q H .

Hybrid H 3 (λ). Let H 3 be the same as H 2 but where the winning condition of the adversary is changed. In particular, after the adversary outputs its forgery, the hybrid additionally computes

Proof. The proof of the lemma follows by property (i) of Def. 5 (adaptive composable perfect zero-knowledge) of N IZK sign .
Hybrid H 5 (λ). Let H 5 be the same as H 4 but where the queries (sign, * , * , * ) are answered in a different way. Let S sign be the zero-knowledge simulator of N IZK sign . Upon a query (sign, i, bsn, M, SigRL) where (i, M i , state i , svt i , tk i ) ∈ L usr and svt i ≠ ⊥ (namely, the sanitizer S is honest) and (bsn, M ) ≠ (bsn * , M * ) (where (bsn * , M * ) is the i * -th query to the random oracle H), the hybrid computes σ = ([c] 1 , π), state i ← M i (state i , bsn, M, SigRL), retrieves the tuple (H, (bsn, M ), crs, tp s ) from D RO (or creates it if it does not exist), computes π ← S sign (tp s , ([c] 1 , SigRL)) and outputs (

First notice that the simulation given by B is statistically close to the hybrid experiment H 7 . In fact, the only difference is that in H 7 there might be collisions in K; however, the probability of such an event is negligible in the security parameter. If z is the real value, then the test must hold, while if z is uniformly random in Z p then the test holds only with negligible probability.
Next, we define two different classes of adversaries.Let A 1 be the class of adversaries such that the event [t * ] 1 ∈ Q sp happens with noticeable probability in H 7 .Similarly, let A 2 be the class of adversaries such that the same event happens with negligible probability in H 7 .The two classes partition the entire class of adversaries.
We now fork our hybrid argument in two.The first sequence is to argue the unforgeability for the adversaries from the class A 1 .
Hybrid H 8 (λ). Let H 8 be the same as H 7 but where the winning condition is changed. Let q join be a polynomial in λ that upper bounds the number of join queries that the adversary performs. Pick j * $ ← [q join ] and change the winning condition to W 8 := W 7 ∧ (8), where (8) is defined as follows: check that (j * , * , * , * , ([t * ] 1 , * , * )) ∈ L usr . Namely, the witness [t * ] 1 extracted from the proof π * in the forged signature was signed by the issuer at the j * -th Join protocol, and the parties S j * , M j * were not (both) corrupted.

Lemma 10. For any
Proof. Let p(λ) be a polynomial such that Pr[[t*]_1 ∈ Q_sp] ≥ 1/p(λ); by the definition of A_1, such a polynomial exists. Notice that condition (8) holds when [t*]_1 ∈ Q_sp and [t*]_1 is the message signed by I (using SS) at the j*-th join session. These two events are independent, so the probability that (8) holds is 1/q_join · 1/p(λ), which is noticeable in λ.
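In symbols, writing E for the event [t*]_1 ∈ Q_sp and G for the event that j* indexes the join session in which the issuer signed [t*]_1 (both event names are ours, introduced only for this restatement), the bound in the proof reads:

```latex
\Pr[(8)] \;=\; \Pr[G \wedge E] \;=\; \Pr[G]\cdot\Pr[E]
\;\ge\; \frac{1}{q_{\mathrm{join}}}\cdot\frac{1}{p(\lambda)},
```

since j* is sampled uniformly from [q_join] and independently of the adversary's view.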
Hybrid H_9(λ). Let H_9 be the same as H_8 but where the random oracle J is programmed differently. Let NIZK_com.Init_zk be the zero-knowledge common-reference-string generator of NIZK_com. In particular, when the challenger is queried with either (honest join, j*, *) or with (dishonest join, j*, I, ξ), it picks a random id* ←$ {0,1}^λ (we assume that id* was not queried to J before), computes (crs, tp_s) ← NIZK_com.Init_zk(bgp), and sets the entry (J, id*, crs, tp_s) in the database D_RO. Finally, it outputs the message id* as the first message of the issuer I in the join protocol.
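The programming step above can be sketched in code. This is a minimal illustration (the API and names are ours, not the paper's): the challenger installs a simulated CRS as the oracle's answer on a fresh id* and keeps the zero-knowledge trapdoor next to it in D_RO.

```python
import os

# Sketch of how the challenger in H_9 programs the random oracle J at the
# j*-th join: instead of lazily sampling J(id*), it installs a simulated CRS
# and stores the zero-knowledge trapdoor alongside it in D_RO.

D_RO = {}  # (oracle, query) -> (answer, trapdoor or None)

def init_zk_crs():
    """Stand-in for NIZK_com.Init_zk(bgp): returns (crs, tp_s)."""
    return os.urandom(32), os.urandom(32)

def query_J(x: bytes) -> bytes:
    """Random oracle J, lazily sampled unless an entry was programmed."""
    key = ("J", x)
    if key not in D_RO:
        D_RO[key] = (os.urandom(32), None)  # honest lazy sampling
    return D_RO[key][0]

def program_jstar_session() -> bytes:
    """Challenger's step at the j*-th join: fresh id*, set J(id*) := crs."""
    id_star = os.urandom(16)            # fresh, hence unqueried except with
    assert ("J", id_star) not in D_RO   # probability about q_RO / 2^lambda
    crs, tp_s = init_zk_crs()
    D_RO[("J", id_star)] = (crs, tp_s)  # programmed entry, trapdoor stored
    return id_star
```

Subsequent queries on id* return the programmed CRS, so the adversary's view stays consistent with a lazily sampled oracle.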

Lemma 11. For any
Proof. We reduce to the composable zero-knowledge property of NIZK_com. Notice also that the probability that id* was already queried to J is at most q_RO/2^λ, where q_RO upper bounds the number of queries made to the RO.

Hybrid H_10(λ). Let H_10 be the same as H_9 but where the transcript output in the j*-th join protocol is different. Let S_com be the zero-knowledge simulator of NIZK_com. Upon query (honest join, j*, M), let τ be the transcript at the end of the execution of the join protocol; find in τ the message ([t]_1, π_S), compute π̃_S ← S_com(tp_s, [t]_1), and let τ̃ be the same as τ but with the message ([t]_1, π_S) substituted by ([t]_1, π̃_S). Return (svt_i, τ̃) to the adversary.
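The transcript rewrite of H_10 can be sketched as follows. This is an illustration only (message encodings and helper names are ours): the sanitizer's real proof π_S in the j*-th join transcript is replaced by a simulated proof, leaving the statement [t]_1 untouched.

```python
import hashlib

# Sketch of the transcript rewrite in hybrid H_10: the sanitizer's real
# proof pi_S is replaced by a zero-knowledge simulated proof for the same
# statement [t]_1; all other transcript messages are passed through.

def simulate_proof(tp_s: bytes, statement: bytes) -> bytes:
    """Stand-in for S_com(tp_s, [t]_1): a simulated proof for the statement."""
    return hashlib.sha256(tp_s + statement).digest()

def rewrite_transcript(tau, tp_s):
    """Return tau~: same as tau, with ([t]_1, pi_S) -> ([t]_1, simulated)."""
    out = []
    for msg in tau:
        if isinstance(msg, tuple) and msg[0] == "sanitizer":
            _, t, _pi_S = msg                       # drop the real proof
            out.append(("sanitizer", t, simulate_proof(tp_s, t)))
        else:
            out.append(msg)                         # unchanged messages
    return out
```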

Lemma 12. For any
Proof. The proof of the lemma follows by the strong derivation privacy of NIZK_com. In particular, we can perform a hybrid argument over the number of executions of the join protocol with an honest sanitizer. The reduction is straightforward and therefore omitted.
Hybrid H_11(λ). Let H_11 be the same as H_10 but where, at the j*-th join protocol, if the adversary plays with a subverted machine and an honest sanitizer, then we substitute the subverted machine with a well-behaving machine. Recall that, in the description of the join protocol, the machine M sends two messages. Consider the machine M̃ that samples a random index r ←$ {1,2,3} and executes the same code as the honest machine Π.M, but: if r = 1, it does not send the first message (or the message is invalid); if r = 2, it does not send the second message (or the message is invalid); and if r = 3, it completes the join protocol. If the adversary sends a query of the kind (honest join, j*, M_i), then the hybrid executes the query with the machine M̃ instead of M_i.
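The replacement machine M̃ can be sketched in a few lines. This is an illustration under assumed interfaces (the message contents and function names are ours): the honest code produces two join messages, and M̃ truncates the run according to the sampled index r.

```python
import random

# Sketch of hybrid H_11's replacement machine M~: it runs the honest code's
# two join messages, but first samples r in {1,2,3} and truncates the run:
# r=1 sends nothing, r=2 sends only the first message, r=3 completes.

def honest_machine_messages():
    """Stand-in for the honest machine Pi.M's two join-protocol messages."""
    return ["first_message", "second_message"]

def tilde_M_join(rng=random):
    r = rng.randint(1, 3)
    msgs = honest_machine_messages()
    if r == 1:
        return []        # abort before the first message
    if r == 2:
        return msgs[:1]  # drop the second message
    return msgs          # r == 3: complete the protocol
```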

Lemma 13. For any
Proof. Let r̃ be the random variable that is 1 if the machine M_i does not send the first message (or the message is invalid), 2 if it does not send the second (or the message is invalid), and 3 otherwise.
We prove that, for any assignment l ∈ {1,2,3}, Pr[H_10 | r̃ = l] = Pr[H_9 | r̃ = l]. Notice that the distribution of the transcript of the join protocol, conditioned on r̃ = l, is the same whether the machine is M_i or M̃. In fact, if l = 1, the two distributions are trivially equivalent (as no message was sent by either M_i or M̃). If l = 2, the first message of the transcript is ([t]_1, [t]_2, π̃_S), where the proof is simulated and therefore independent of the machine's message, and t is a uniformly chosen vector in the span of (1, y_0). If l = 3, the last message is deterministic (the message `completed`); moreover, by Assumption 1 the machine M_i never aborts after the join protocol has successfully completed.
Also, if H_9 = 1 then the sanitizer S_i is honest, therefore all the signatures are re-randomized and for the correct key y_0. Specifically, let ([c]_1, π) be a signature output by the challenger on query (sign, j*, M, SigRL); the vector [c]_1 is a function of K and y_0 (we used the soundness of the proof π_σ sent by the machine to S_i to establish this in H_6), and thus independent of the machine's messages. Moreover, by the change introduced in hybrid H_5, we simulate the proof π, which is therefore also independent of the machine's messages.
With this derivation we can conclude the proof of the lemma. By simplification, the equation above implies that y = αy*_0 + y*_1 − αy_0; thus the reduction B will always output 1. On the other hand, if z is uniformly random, the reduction B will output 0 (with overwhelming probability).
By the triangle inequality and by putting together all the lemmas above, we have now shown that adversaries from class A_1 can win the unforgeability game only with negligible probability. It remains to show the same statement for adversaries from class A_2. We roll back to hybrid H_7 and show that the winning probability in H_7 is negligible.

Proof. We reduce to the unforgeability of the structure-preserving signature scheme SS. Consider the following adversary B against the existential unforgeability under chosen-message attacks of SS:

Adversary B(pk_sp) with oracle access to O_sign(sk_sp, ·):
1. Simulate the hybrid H_6; in particular, use pk_sp to define the public material in Setup.
2. Simulate the join protocol using the oracle access to O_sign; in particular, whenever the hybrid executes the party I in a join protocol and receives the message ([t]_1, π_S), it queries O_sign on [t]_1 to obtain the issuer's signature.

We first give a sketch of the proof of anonymity. First we notice that adaptive corruption and selective corruption for anonymity are equivalent up to a polynomial degradation of the advantage of the adversary. In particular, we can assume that the adversary corrupts all the platforms but the i_1-th and the i_2-th, the platforms used for the challenge of the security game.
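The bookkeeping done by the class-A_2 reduction B can be sketched as follows. This is an illustration under assumed interfaces (class and method names are ours): B answers issuer-side join messages through the EUF-CMA signing oracle of SS, records the queried messages in Q_sp, and forwards a forgery on a message outside Q_sp as its own SS forgery.

```python
# Sketch of reduction B for class A_2: the issuer's SS signatures are
# produced via the EUF-CMA signing oracle, queried messages are recorded in
# Q_sp, and a signature on a fresh message is output as B's forgery.

class ReductionB:
    def __init__(self, sign_oracle):
        self.sign = sign_oracle   # O_sign(sk_sp, .) of the EUF-CMA game
        self.Q_sp = set()         # messages signed through the oracle

    def issuer_join(self, t):
        """Issuer's step in a simulated join: sign [t]_1 via the oracle."""
        self.Q_sp.add(t)
        return self.sign(t)

    def output_forgery(self, t_star, sigma_star):
        """For A_2-adversaries, t_star lies outside Q_sp w.h.p."""
        if t_star in self.Q_sp:
            return None           # not a fresh message: abort
        return (t_star, sigma_star)
```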
The idea of the reduction is to switch to zero-knowledge the common reference strings used in the join protocols for the platforms i_1 and i_2 by programming the random oracle, and, similarly, to switch to zero-knowledge and simulate all the signatures output by the two platforms (again by programming the random oracle). We then use the strong derivation privacy property of NIZK_sign and NIZK_com to make sure that no information about the platform keys is exfiltrated. Notice that at this point the machines cannot communicate any information using biased randomness; on the other hand, they could still communicate using valid/invalid signatures. Although the definition of anonymity disallows telling apart i_1 from i_2 using this channel, for technical reasons, in the last step of the proof (when we reduce to XDH) we need to completely disconnect the subverted machines and, again, substitute them with well-behaving machines; here we rely on Assumption 1.
Similarly to anonymity, adaptive corruption and selective corruption for non-frameability are equivalent up to a polynomial degradation of the advantage of the adversary. So we can assume that the challenger knows the honest platform that will be attacked by the adversary; let it be the i*-th platform. Again, similarly to the proofs of anonymity and of unforgeability of our scheme, we switch, thanks to the strong derivation privacy of the NIZK schemes, to a hybrid experiment where all the messages, both during the join protocol and the signature queries, are simulated by the challenger and where, moreover, the signature forged by the adversary is extractable. Also, similarly to the proof of unforgeability, thanks to Assumption 1, we substitute the machine of the i*-th platform with a well-behaving machine.
At this point we can reduce security to the computational problem of finding [x]_2 given [x]_1, whose hardness is implied by the XDH assumption in G_1. The idea of the reduction is that, given the challenge [x]_1, we can (implicitly) install the element x as the first element of the platform key of the i*-th platform. Notice that, given [x]_1, by programming the random oracle and thanks to the simulation trapdoors, we can faithfully run this hybrid version of the non-frameability game. Moreover, we do not need to explicitly communicate the platform key to the machine of the i*-th platform, because we substituted it with a well-behaving one. Once the adversary outputs its forgery, we can use the extraction trapdoor to extract the witness from the signature. A successful adversary forges a signature ([c*]_1, π*) that links to another signature ([c]_1, π) produced by the i*-th platform; recall that the linking procedure, given the two signatures on the same basename bsn*, checks that [c*]_1 = [c]_1 and verifies the signatures. Thus we have [c*]_1 = K(bsn)·x, and the reduction must have extracted the value [x]_2 from the proof π* of the forged signature.

As for anonymity, it is not hard to see that for any PPT adversary that adaptively corrupts the platforms there exists another adversary that commits to its corruptions at the very beginning of the experiment, and, in particular, independently of all the public parameters. In more detail, given an adversary A that performs at most q different join protocols, let A' be the adversary that (1) first samples an index i' ←$ [q] and corrupts all the platforms except the i'-th, and (2) runs the same as A but aborts if the index i* chosen by A in the forgery is not equal to i'. Clearly, for any b ∈ {0,1} we have:

Pr[Exp^{non-frame}_{A',Π}(λ) = 1] = (1/q) · Pr[Exp^{non-frame}_{A,Π}(λ) = 1].

In the following, we therefore consider adversaries that non-adaptively corrupt the platforms. We give a sequence of hybrid experiments, where H_0(λ) := Exp^{non-frame}_{A',Π}(λ). Moreover, we can assume that the machine M_{i*} does not abort during the join protocol; if it did, the winning condition (3) would not be satisfied.

The first step of the hybrid argument proceeds exactly as in the hybrid steps H_2 and H_3 of the proof of Theorem 1; the next hybrid summarizes the change. As in the proof of Theorem 1, we call W_i the winning condition in the hybrid experiment H_i, set W_1 := (1) ∧ (2) ∧ (3) ∧ (4) and, whenever we do not mention it explicitly, set W_{i+1} := W_i.

Definition 3 (Unforgeability in the ROM). Consider a game similar to Fig. 2 where, additionally, the extractor can program the random oracle.

Fig. 4: The security experiments for strong derivation privacy and adaptive extractable soundness.

Proof. Recall the security experiment in Fig. 3. The experiment postulates adaptive corruption of the platforms, namely, the query (corrupt, *) can be a function of the view of the adversary.
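The linking step exploited by the reduction can be sketched with a toy group. This is an illustration only (the modulus, encodings, and function names are ours, not the scheme's): two signatures on the same basename link exactly when they share the component derived from the basename and the platform key.

```python
import hashlib

# Toy sketch of the Link check: two signatures on the same basename link iff
# they carry the same component c = K(bsn)^x, where K is the hash-to-group
# map and x the platform's key. The group here is an illustrative
# multiplicative group mod a prime, not the scheme's pairing groups.

P = 2**255 - 19  # a prime, standing in for the group modulus

def K(bsn: bytes) -> int:
    """Toy hash-to-group: hash the basename into Z_p^*."""
    return int.from_bytes(hashlib.sha256(bsn).digest(), "big") % P or 1

def signature_component(bsn: bytes, x: int) -> int:
    """The component [c]_1 = K(bsn)·x, written K(bsn)^x multiplicatively."""
    return pow(K(bsn), x, P)

def link(c1: int, c2: int) -> bool:
    """Same basename and same key imply equal components."""
    return c1 == c2
```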
The issuer inputs (gpk, isk), while the other parties only input gpk. At the end of the protocol, I obtains a bit b indicating whether the protocol terminated successfully, M_i obtains a private key sk_i, and S_i obtains a sanitizer verification token svt_i and the same bit b as I.

Sig(gpk, sk_i, bsn, M, SigRL) → ⊥/(σ, π_σ). The signing algorithm takes as input the group public key gpk, a private key sk_i, a basename bsn, a message M, and a signature-based revocation list SigRL. It outputs a signature σ and a proof π_σ, or an error ⊥ (if SigRL contains a signature produced with sk_i).

Ver(gpk, bsn, M, σ, SigRL, PrivRL) → 0/1. The verification algorithm takes as input the group public key gpk, a basename bsn, a message M, a signature σ, a signature-based revocation list SigRL, and a private-key-based revocation list PrivRL. It outputs 0 or 1 if σ is respectively an invalid or a valid signature on M.

Sanitize(gpk, bsn, M, (σ, π_σ), SigRL, svt_i) → ⊥/σ'. The sanitization algorithm takes as input the group public key gpk, a basename bsn, a message M, a signature σ with corresponding proof π_σ, a signature-based revocation list SigRL, and a sanitizer verification token svt_i. It outputs either ⊥ or a sanitized signature σ'.

Link(gpk, bsn, M_1, σ_1, M_2, σ_2) → 0/1. The linking algorithm takes as input the group public key gpk, a basename bsn, and two message-signature pairs (M_1, σ_1) and (M_2, σ_2).
- Upon input (RO, H, x): check if (H, x, y, ⊥) exists in D_RO and if so return y; else sample y ←$ {0,1}^λ, add the tuple (H, x, y, ⊥) to the database, and return y.
- Upon input (RO, J, x): check if (J, x, crs, tp_e) exists in D_RO and if so return crs; else sample (crs, tp_e) ← NIZK_com.Init_snd(bgp), add the tuple (J, x, crs, tp_e) to the database, and return crs.
- Upon input (extract, τ): parse the transcript τ as described by the messages sent in the join protocol and find the value id; look up the tuple (J, id, crs, tp_e) in the database D_RO, and if it does not exist output ⊥. Else, find the message ([t]_1, π_S) from S, run the extractor [y]_2 ← E_com(tp_e, π_S), find the message σ_sp sent by the issuer I, and output tk = ([t]_1, σ_sp, [y]_2).

We define the CheckTK algorithm: given as input gpk, sk and tk, it parses sk as ([t]_1, [σ_sp]_1, y) and checks whether tk = ([t]_1, [σ_sp]_1, [y]_2). We define the CheckSig algorithm: given as input gpk, tk and a signature σ, it parses tk = ([t]_1, [σ_sp]_1, [y_0, y_1]_2) and σ = ([c_0, c_1]_1, π) and returns 1 if and only if the pairing check e([c_0]_1, ...) holds.

The extractor E computes [y]_2 using the knowledge extractor of NIZK_com and outputs tk = ([t]_1, σ_sp, [y]_2); since [t]_1 and σ_sp are generated by the issuer I, they form a valid message-signature pair. Suppose that there exists sk ∈ PrivRL* linked to tk, so that sk = ([t]_1, σ_sp, y), and that CheckSK(gpk, sk) = 0. Then either [h · y]_T ≠ [t]_T, which would violate the adaptive knowledge soundness of NIZK_com, or the latter holds but the signature ([c]_1, π) for a random message M does not verify, which would violate either the correctness of NIZK_sign or the correctness of SS.
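The challenger's database D_RO described above can be sketched as follows. This is an illustration under assumed interfaces (the API is ours): H is a plain lazily sampled oracle with no trapdoor, while J answers with a CRS generated in extraction mode, whose trapdoor tp_e is stored in D_RO and never leaves the challenger.

```python
import os

# Sketch of the challenger's oracle database D_RO: H entries carry no
# trapdoor; J entries carry the extraction trapdoor tp_e of the CRS, which
# the extract procedure later looks up from the join transcript's id.

D_RO = {}

def init_snd_crs():
    """Stand-in for NIZK_com.Init_snd(bgp): returns (crs, tp_e)."""
    return os.urandom(32), os.urandom(32)

def RO(oracle: str, x: bytes) -> bytes:
    key = (oracle, x)
    if key not in D_RO:
        if oracle == "H":
            D_RO[key] = (os.urandom(32), None)  # lazy sampling, no trapdoor
        else:                                   # oracle "J"
            D_RO[key] = init_snd_crs()          # CRS plus extraction trapdoor
    return D_RO[key][0]                         # the trapdoor never leaves D_RO

def extraction_trapdoor(id_: bytes):
    """Challenger-side lookup used by the extract procedure on a transcript."""
    entry = D_RO.get(("J", id_))
    return None if entry is None else entry[1]
```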