Group Signatures with User-Controlled and Sequential Linkability


Abstract. Group signatures allow users to create signatures on behalf of a group while remaining anonymous. Such signatures are a powerful tool to realize privacy-preserving data collections, where, e.g., sensors, wearables or vehicles can upload authenticated measurements into a data lake. The anonymity protects the user's privacy yet enables basic data processing of the uploaded unlinkable information. For many applications, however, full anonymity is often neither desired nor useful, and selected parts of the data must eventually be correlated after being uploaded. Current group signature solutions do not provide such functionality in a satisfactory way: they either rely on a trusted party to perform opening or linking of signatures, which clearly conflicts with the core privacy goal of group signatures; or they require the user to decide upon the linkability of signatures before they are generated. In this paper we propose a new variant of group signatures that provides linkability in a flexible and user-centric manner. Users, and only they, can decide before and after signature creation whether their signatures should remain unlinkable or be correlated. To prevent attacks where a user omits certain signatures when a sequence of events in a certain section (e.g., a time frame) should be linked, we further extend this new primitive to allow for sequential link proofs. Such proofs guarantee that the provided sequence of data not only originates from the same signer, but also occurred in that exact order and contains all of the user's signatures within the time frame. We formally define the desired security and privacy properties, propose a provably secure construction based on DL-related assumptions, and report on a prototypical implementation of our scheme.
Schemes with trusted openers. To reduce this dependency, alternatives quickly sprouted. In group signatures with Verifier-Local Revocation, verifiers keep local lists of revoked signers, which spares them from opening incoming signatures [10]. Traceable signatures [24,18] add an extra trusted entity who, after opening a signature by any given member, can produce member-specific trapdoors that can be used to link signatures originating from that member. Convertibly linkable signatures remove the opener, but incorporate a party who can (non-transitively) blindly link signatures within sets of queried signatures [23]. Recently, blind variants of central opening have also been proposed [26]. Still, all these alternatives use some sort of central entity for opening or linking, which needs to be fully trusted to ensure privacy. While this trust can be distributed [13], this still gives control to a set of central entities rather than to users.
Schemes with user-controlled linkability. Instead of relying on trusted parties, it may suffice to let signers control which signatures will be linkable, and when. This is also ideal from a privacy perspective, as users retain full control. In this vein, Direct Anonymous Attestation (DAA) [6,12] and anonymous credential systems [15], also aimed at preserving signer/holder privacy, follow this approach. They enable user-controlled linkability through deterministically computed pseudonyms (from a scope and the user's key) within each signature. This makes all signatures for the same scope automatically linkable. Otherwise, they remain unlinkable. Such implicit linking has the drawback of being static: a signature that was decided to be unlinkable to some or all other signatures, will remain unlinkable forever. Thus, use cases with even a remote probability of needing to link signatures a posteriori would require to make them all linkable by default, eliminating all privacy.
Further, relying on the more privacy-friendly option of user-controlled and implicit linkability instead of having an almighty opener, makes formally defining the desired security and privacy properties of such group signatures much more challenging. In fact, to date no satisfactory security model for DAA in the form of accessible game-based security notions is known; we refer to [12,6] for a summary of the long line of failed security notions in that respect.
Alternatively, some existing group signatures offer user-controlled a posteriori linking or opening of previously anonymous signatures. In [29], users can claim signatures by outputting their secret key, which allows anyone to test whether a signature stemmed from that user. But this is an all-or-nothing approach that immediately destroys privacy of all the user's signatures, and is thus unsuitable for most realistic scenarios. The recent work by Krenn et al. [26] implements a more flexible explicit linking by enabling users to issue link proofs for two (or, in theory, more) signatures. However, their model still crucially relies on the presence of a trusted opener to model and prove the desired security properties. Thus, even if only explicit linking were needed, the scheme must allow full opening through a central entity in order to fit their model and obtain any provable security guarantees.
Ideally, one would hope for group signatures supporting both implicit and explicit linking to increase utility and, for scenarios handling sensitive data, without trusted parties that can unilaterally remove privacy.

Our Contributions
In this paper we provide the first provably secure group signatures that are purely user-centric, i.e., where only the user can control the linkage of her signatures. To allow for the necessary flexibility, our solution supports both implicit and explicit linkability. That is, the user can make signatures linkable with respect to pseudonyms when she generates them, and also link signatures with different pseudonyms afterwards through explicit link proofs.
Security model without opener, and for implicit and explicit linking. Our first challenge was to provide meaningful security notions when no opener is available that can be leveraged, e.g., to express who is a valid member of the group. Instead, we take inspiration from security models for DAA [6,12] to express group membership through linking. We define anonymity by requiring that it must not be possible to link signatures by the same user, except when she decides to make them linkable by default, or when she explicitly links them. For traceability, (1) it must not be possible to create signatures that are not traceable to any valid member of the group, and (2) it must not be possible to explicitly link signatures originating from different (possibly corrupt) users. Finally, for non-frameability we require that (1) no signature can be implicitly linkable to another honest signature unless it was honestly generated by the same user, who also made both signatures linkable by default, and (2) no adversary can explicitly link honest and dishonest signatures, or honest signatures that have not been explicitly linked by their signer. Note that we give two variants for both traceability and non-frameability. This is needed due to the possibility to implicitly and explicitly link signatures, and is a direct consequence of leveraging linkability to replace the opener. We emphasize that, to the best of our knowledge, implicit linking has not been modelled previously for group signatures, let alone in combination with explicit linking.
Sequential link proofs. When the pseudonymous signatures are over data with inherent order properties, e.g., time series, merely re-establishing linkage is not enough. In such settings, it may be necessary to attest that the linked messages are given in the same order in which they were produced, and without omitting (possibly relevant) ones. For instance, smart vehicles in Intelligent Transportation Systems (ITSs) are required to send measurements to a data lake. There, the order of a sequence of events may be useful to detect anomalies: e.g., a vehicle reporting 35-45-30-40 litres of fuel in a short timespan is probably an anomaly, while one reporting 45-40-35-30 is probably not. The same applies to contact tracing systems, where pseudonyms are reused during a limited time, after which new ones are derived. Users may eventually be required to reveal their pseudonymous data spanning several of those pseudonyms, and omitting specific chunks of this data (or altering its order) may preclude effective contact tracing. In these use cases, the number of pseudonymously signed messages that may need to be linked can be expected to reach many tens (and possibly a few hundred) of signatures within short time spans. Additionally, order may be relevant in less throughput-demanding scenarios. For instance, failing to pay X mortgage installments in a row may have very different implications than X defaults spread over months far apart in time.
This motivates our next contribution. We extend our previous model and construction to enable sequential link proofs: signers can prove that a sequence of signatures was produced in the specified order, with no signature omitted. To model this, we introduce a new unforgeability property, sequentiality, ensuring that honest-then-corrupt users cannot create sequential proofs for wrongly ordered sequences, nor omit signatures. Our extended construction builds on efficient hash-chain ideas from anonymous payment systems [27].
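The hash-chain idea can be illustrated in isolation: each signature additionally carries a token that hashes its predecessor's token, so a verifier handed a claimed (sub)sequence can recompute the chain and detect any swap, omission, or insertion. The following Python sketch is a simplification with hypothetical names; it omits the group signature layer entirely.

```python
import hashlib

def chain_token(prev_token: bytes, payload: bytes) -> bytes:
    """Each signature embeds a token hashing its predecessor, fixing the sequence."""
    return hashlib.sha256(prev_token + payload).digest()

def verify_sequence(seed, payloads, tokens) -> bool:
    """Recompute the chain; any swap, omission, or insertion breaks a link."""
    t = seed
    for p, tok in zip(payloads, tokens):
        t = chain_token(t, p)
        if t != tok:
            return False
    return True

seed = b"\x00" * 32
msgs = [b"fuel=45", b"fuel=40", b"fuel=35", b"fuel=30"]
toks, t = [], seed
for m in msgs:
    t = chain_token(t, m)
    toks.append(t)

assert verify_sequence(seed, msgs, toks)                       # correct order accepted
assert not verify_sequence(seed, list(reversed(msgs)), toks)   # reordering detected
assert not verify_sequence(seed, msgs[:1] + msgs[2:], toks[:1] + toks[2:])  # omission detected
```

Note that verifying a chain of n tokens costs n hash evaluations, which keeps sequential proofs cheap even for the hundreds of signatures per interval mentioned above.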
Efficient construction with batch proofs for linking. We give an efficient construction realizing our model. Pseudonymous signatures are computed using the scope-exclusive nym approach from DAA and anonymous credentials, where the pseudonym is deterministically derived from a scope and the same secret key in the user's credential. This gives implicit linkage. For explicitly linking signatures, we propose a new way to batch the signatures being linked, leveraging the fact that pseudonyms are group elements that can be "aggregated". This leads to an efficient mechanism for linking large sets of signatures.
Implementation and comparison. To further assess the efficiency of our constructions, we implement them and report on the obtained experimental results (see Appendix A for notes on the implementation and a demo). Both the basic scheme and the sequential extension outperform the most closely related previous work [26]: we link sets of ∼100 signatures in ∼40 ms, while [26] requires ∼300 ms for linking only 2 signatures (besides requiring a trusted opener).

Preliminaries
Notation. G = ⟨g⟩ denotes a cyclic group G generated by g, a ← A(·) denotes a being obtained by applying algorithm A, a ←$ S means a is picked uniformly at random from set S, and [n] denotes the closed interval [1, n]. H and Ĥ are cryptographic hash functions. Signed messages are represented as a tuple of elements. When arguing about sets of such tuples, Σ denotes a set, and Σ_i the i-th element in Σ. Σ^o is an ordered set, and A^o ∈^o S^o denotes that A^o appears in S^o, respecting order.
Hardness assumptions. We base the security of our scheme on the well-known Discrete Logarithm and DDH assumptions [16], and on the q-SDH assumption for Type-III pairings [9], which we informally recall next.
q-SDH assumption (for Type-III pairings [9]). Given g_1 ∈ G_1, g_2 ∈ G_2, χ ←$ Z_p, and a tuple (g_1, g_1^χ, ..., g_1^{χ^q}) ∈ G_1^{q+1} along with g_2^χ, it is computationally infeasible for any polynomial-time machine to output a pair (g_1^{1/(χ+x)}, x) for some x ∈ Z_p.

BBS+ signatures and Pseudonyms. We rely on the BBS+ signature scheme proposed in [1] for Type-II pairings, and adapted to Type-III pairings in [11]. We use the following convention for BBS+ operations, for some previously generated Type-III pairing group (p, G_1, G_2, G_T, g_1, g_2, e): the secret key is sk ← y for y ←$ Z_p^*, and the public key is pk ← (W, h_1, h_2) with W ← g_2^y and h_1, h_2 ←$ G_1. We extend the proof of knowledge in BBS+ signatures to prove correctness of the pseudonyms that signers generate.
For pseudonyms, we follow [14]. Roughly, with the help of a hash function, pseudonyms are deterministically generated from a scope scp and a private key sk as H(scp)^sk.
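As a concrete illustration, this deterministic pseudonym derivation can be sketched in a few lines of Python. This is a toy sketch only: a multiplicative group modulo a Mersenne prime stands in for the scheme's pairing-friendly group, hash_to_group is a naive stand-in for a proper hash-to-group function, and all names are hypothetical.

```python
import hashlib
import secrets

P = (1 << 127) - 1  # Mersenne prime; toy stand-in for the scheme's group

def hash_to_group(scp: str) -> int:
    """Naive H: map a scope string to an element of Z_P* (toy stand-in)."""
    return int.from_bytes(hashlib.sha256(scp.encode()).digest(), "big") % P or 1

def derive_nym(sk: int, scp: str) -> int:
    """Scope-exclusive pseudonym: nym = H(scp)^sk."""
    return pow(hash_to_group(scp), sk, P)

sk = secrets.randbelow(P - 2) + 1
# Same key and same scope always give the same pseudonym (implicit linkability) ...
assert derive_nym(sk, "epoch-42") == derive_nym(sk, "epoch-42")
# ... while a different scope yields an unlinkable-looking pseudonym.
assert derive_nym(sk, "epoch-42") != derive_nym(sk, "epoch-43")
```

Signatures carrying the same scope thus link automatically, matching the scope-exclusive behaviour described above.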
Proof protocols. We use non-interactive proofs of knowledge obtained through the Fiat-Shamir transform [21].
π ← SPK{(x, r) : statement}(ctx, m) denotes a signature of knowledge of (x, r) meeting the condition to the right of the colon, for public message m, and parameters ctx used to prevent malleability attacks [7]. For verification, we write SPKVerify(π, ctx, m), returning 1 (correct) or 0 (incorrect).
Additional building blocks. We rely on an append-only bulletin board BB and pseudorandom functions (PRFs). PRFs generate pseudorandom output from a secret key and arbitrary inputs. PRF.KeyGen(1^τ) → k generates the keys, and PRF.Eval(k, m) → r derives pseudorandomness r from key k and message m. The BB is assumed to verify the data before writing, and written data cannot be erased.
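As one possible realization of this interface (not necessarily the paper's concrete choice), the PRF can be instantiated with HMAC-SHA256, a standard PRF candidate; the function names below mirror the notation above but are otherwise hypothetical.

```python
import hashlib
import hmac
import os

def prf_keygen() -> bytes:
    """PRF.KeyGen(1^tau): sample a fresh 256-bit key."""
    return os.urandom(32)

def prf_eval(k: bytes, m: bytes) -> bytes:
    """PRF.Eval(k, m) -> r: pseudorandom output derived from key k and message m."""
    return hmac.new(k, m, hashlib.sha256).digest()

k = prf_keygen()
assert prf_eval(k, b"seq-1") == prf_eval(k, b"seq-1")              # deterministic per key
assert prf_eval(k, b"seq-1") != prf_eval(prf_keygen(), b"seq-1")   # output depends on the key
```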

Scheme with User-Controlled Linkability (UCL)
In this section we present our basic group signature scheme with user-controlled and selective linkability. We start by presenting the general syntax, then describe how the desired security properties can be formulated without the presence of an opening entity, and finally present our secure instantiation.
The core contribution of this section is the new security model that captures the desired security and privacy properties without a central (trusted) entity and allows for selective, user-centric linkability. The proposed scheme follows in most parts the standard approach of group signatures, integrates the pseudonym idea from DAA, and provides a new way to prove linkage of a batch of signatures.

Syntax
In group signatures, an issuer interacts with users who want to join the group and become group members. Members create anonymous signatures on behalf of the group, which verifiers can check without learning the signers' identity. In our setting, the anonymity of the signer is steered via pseudonyms, generated with every signature, as well as explicit link proofs. More precisely, a UCL scheme supports two types of linkability (see Fig. 1 for a pictorial representation):
Implicit Linkability: Every signature is accompanied by a pseudonym, generated by the user for a particular scope. Re-using the same scope leads to the same pseudonym, making all signatures generated for the same scope immediately linkable for the verifier. Pseudonymous signatures for different scopes cannot be linked, except via explicit link proofs generated by the user.
Explicit Linkability: After signatures have been generated, they can be claimed and linked by the user: given a set of signatures, the user proves that she created all of them, i.e., links the signatures in the set.
We emphasize that users have full control over the scopes, which can be any arbitrary (bit)string. For instance, in the contact tracing example given in Section 1, where identifiers are reused during 15 minutes, the scope could be derived from publicly available information, such as the current epoch. Alternatively, using randomly chosen scopes would lead to unlinkable signatures.
A UCL group signature scheme consists of the following algorithms:
- Setup(1^τ) → param: Generates the public parameters for the scheme.
- IKGen(param) → (isk, ipk): Generates the issuer's keypair (isk, ipk).
- Join(ipk), Issue(ipk, isk) → (usk, ⊥): To become a member of the group, the user runs the interactive join protocol with the issuer. If successful, the user obtains a user secret key usk.
- Sign(ipk, usk, m, scp) → (σ, nym): Signs a message m w.r.t. scope scp via user secret key usk. The output is a pseudonym nym and group signature σ.
- Verify(ipk, Σ) → 0/1: On input a group public key ipk and tuple Σ = (m, scp, σ, nym), containing a group signature σ and a pseudonym nym purportedly corresponding to m and scp, returns 1 when the tuple is valid and 0 otherwise.
- Link(ipk, usk, lm, Σ) → π_l/⊥: On input a set of signature tuples Σ and user secret key usk, produces a proof π_l of these signatures being linked, or ⊥ indicating failure. The link proof is also bound to a specific link message lm, which can be used, e.g., to ensure freshness of the proof.
- VerifyLink(ipk, lm, Σ, π_l) → 0/1: Returns 1 if π_l is a valid proof that all signatures in Σ were produced by the same signer and for link message lm, or 0 otherwise.
We delay the definition of the correctness properties for a UCL scheme after introducing some extra notation in the next section.

Security Model
A UCL group signature scheme should provide the following privacy and security properties: For privacy, signatures should not leak anything about the signer's identity beyond what is exposed by the user through implicit and explicit linkability (anonymity). Security is expressed through a number of properties covering the desired unforgeability guarantees: signatures should only be created by users that have correctly joined the group (traceability), and even a corrupt issuer should not be able to impersonate honest users (non-frameability).
Oracles and State. Our definitional framework closely follows existing work on group signatures, and in particular the work by [5] for security of dynamic schemes, which makes use of a number of oracles and global variables that allow the adversary to engage with honest parties, and which we adjust to our setting.
- SNDI (the SeND-to-Issuer oracle): Runs the Issue process on behalf of an honest issuer, allowing the adversary to join in the role of corrupt users in games with an honest issuer. Updates transcript[uid] with a transcript of the exchanged messages.
- SIGN/LINK: Allow the adversary to obtain honest users' signatures/link proofs for messages/signatures of his choice (with restrictions in the anonymity game).
- CH-SIGN_b/CH-LINK_b: Challenge oracles in the anonymity game that allow the adversary to get signatures and link proofs for a challenge user uid_b.
Fig. 2 presents the details of the oracles used in our games: the standard ADDU, SNDU, and SNDI oracles as defined in [5]; SIGN and CH-SIGN_b, which we modify from [5]; and LINK and CH-LINK_b, which are specific to our model.
Global state:
- uid_b: challenge user in the anon-b game; ignored in the other games.
- HUL: uids of honest users that have joined.
- CUL: uids of corrupt users that have joined (needed when the issuer is honest).
- USK[uid]: signing key of honest user uid.
- transcript[uid]: messages from the join protocol between user uid and the honest issuer.

Helper Function Identify. In some security games we need to determine if a certain user secret key was used to create a given signature. For this we follow DAA work [6,12] and assume the availability of a function Identify(ipk, usk, Σ) → 0/1, returning 1 when Σ = (m, scp, σ, nym) was produced by usk, or 0 otherwise.
We use the function for keys of both honest and corrupt users. Abusing notation, we write Identify(uid, Σ) to indicate that Identify is run for the secret key usk of user uid (where ipk is clear from the context). For honest users, Identify simply uses USK[uid]; while keys of corrupt users can be extracted from the join transcript. For the latter, note that Identify is only used in games where the issuer is honest, i.e., such a transcript is available. In our concrete scheme we exploit the random oracle to extract a user's keys via rewinding. If online-extractable proofs are used, then Identify will also receive the trapdoor information as input.
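In the pseudonym-based construction given later, Identify reduces to recomputing the pseudonym from the candidate key and comparing. A toy Python sketch under the same simplifying assumptions as elsewhere in this write-up (a multiplicative group modulo a Mersenne prime in place of the pairing group, a naive hash-to-group, hypothetical names):

```python
import hashlib
import secrets

P = (1 << 127) - 1  # toy prime modulus standing in for the scheme's group

def hash_to_group(scp: str) -> int:
    return int.from_bytes(hashlib.sha256(scp.encode()).digest(), "big") % P or 1

def identify(usk: int, sig_tuple) -> int:
    """Identify(ipk, usk, Sigma) -> 0/1: does usk explain the pseudonym in Sigma?"""
    m, scp, sigma, nym = sig_tuple
    return 1 if pow(hash_to_group(scp), usk, P) == nym else 0

usk = secrets.randbelow(P - 2) + 1
# A signature tuple (m, scp, sigma, nym); sigma is a placeholder string here.
sig = ("reading=42", "epoch-7", "<proof>", pow(hash_to_group("epoch-7"), usk, P))
assert identify(usk, sig) == 1       # the key that created the nym is identified
assert identify(usk + 1, sig) == 0   # any other key is rejected
```

The scope-exclusiveness of pseudonyms is what makes this check sound: per scope and key there is exactly one valid pseudonym.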
We now formally capture the expected security properties.

Correctness. We formalize the correctness of Sign and correctness of Link properties in the full version [19].
Anonymity. We adapt the classic privacy notion to our setting. It expresses that signatures must not reveal anything about the signer's identity beyond what was intended by her, even when the issuer is corrupt. The adversary plays the role of the issuer and can trigger honest users to join, sign and link. Eventually, he chooses two honest users uid*_0 and uid*_1, one of which becomes the challenge user uid*_b. The adversary can receive signatures and link proofs of uid*_b (via CH-SIGN_b and CH-LINK_b) and must determine b better than by random guessing.
As our signatures support user-controlled linkability, we must be careful to exclude trivial wins leveraging it. There are two ways in which the adversary can trivially win. First, by leveraging implicit linkability: signatures by the same user and with the same scope are directly linkable. The adversary could exploit this by calling CH-SIGN_b and SIGN (the latter for uid*_0 or uid*_1) with the same scope. Second, the adversary can leverage explicit linkability by obtaining link proofs via LINK or CH-LINK_b for a set of signatures that contains both challenge signatures, obtained through CH-SIGN_b, and non-challenge signatures (for a challenge user), obtained from SIGN.

Definition 1. (Anonymity). A group signature scheme UCL with user-controlled linkability is anonymous if for all ppt adversaries A the advantage |Pr[Exp^anon-0_A(τ) = 1] - Pr[Exp^anon-1_A(τ) = 1]| is negligible in τ.

Traceability. This property covers the desired unforgeability guarantees for corrupt users of groups with an honest issuer. Intuitively, it guarantees that only legitimate members of the group are able to generate valid signatures on behalf of that group. The traditional approach in group signature models [5,26] is to ask the adversary for a forgery and leverage the trusted opener to check whether the forged signature opens to any user that has joined the group.
As our setting does not have such an opening entity, we cannot follow this approach and instead take inspiration from the DAA security models [6,12]. These use the implicit availability of an Identify function (introduced above), which allows checking whether a given signature belongs to a certain user secret key (which we know for honest users, and can extract for corrupt ones). The adversary wins if he can produce valid signatures (or link proofs) that cannot be traced back via Identify to any member of the group. This alone would not be sufficient, though, as our signatures also carry information in their implicit and explicit linkability, which an adversary should not be able to manipulate either. That is, the adversary also wins if he can produce more standalone signatures that are unlinkable (for the same scope) than the number of corrupt users he controls, or if he manages to produce a valid link proof for signatures of different corrupt users.
We have grouped these properties along the statement that the adversary has to forge, i.e., we have signature traceability for forgeries of standalone signatures, and link traceability that works analogously for link proofs.

Non-Frameability. This property guarantees that an honest user cannot be framed by the adversary, even when the issuer is corrupt. In our setting, such framing occurs when signatures of an honest user become linkable to signatures that she has not generated. As we support two different types of linkability, we again need a dedicated variant of this property for each of them. The first captures non-frameability for standalone signatures, i.e., via implicit linking. In this case, the adversary can only frame an honest user by producing a signature that holds for the same pseudonym that an honest signature generated for that scope. Linkability (and thus framing attacks) across scopes is not possible and thus does not have to be considered here. Such linkage for different scopes is only possible via explicit link proofs. The second property we define captures non-frameability for these proofs, which the adversary can leverage to frame an honest user in two ways: (1) producing a proof that links honestly generated signatures with adversarial ones; or (2) producing a proof that links honestly generated signatures by the same user, but where the honest user did not create that proof, i.e., the proof itself is forged and aims to impersonate the honest user.

Definition 6. (Security of UCL). A group signature scheme UCL with user-controlled linkability is secure if it ensures the previous anonymity, traceability and non-frameability properties.

Construction
We now present our scheme satisfying the desired security and privacy properties. The core of our construction follows the standard approach of group signatures (see, e.g., [8]): during join, users receive a membership credential from the issuer, and signing is essentially a proof of knowledge of such a credential. We use BBS+ signatures for these blindly issued membership credentials. Adding implicit linkability: Whereas standard group signatures usually include an encryption of the user's identity (for opening) in her signature, we use the pseudonym idea of DAA and anonymous credentials instead [6,12,14] and, specifically, of [11]. That is, when creating a signature, the user also reveals a pseudonym nym ← H(scp)^y for her key y and a particular scope scp. Clearly, these pseudonyms are scope-exclusive, i.e., there is only one valid pseudonym per scope and user key [14]. The user also proves that she has computed the pseudonym from her key.
Adding explicit linkability: The existing solution for link proofs [26,14] of signatures with different pseudonyms is to let the user provide a fresh proof that all pseudonyms are based on the same user key. So far, this approach has been proposed for linking only two signatures, and grows linearly when used for many signatures. For our proofs, we instead use the observation that all individual pseudonyms the signatures are associated with can be combined into a "meta-nym" nym = ∏_{i∈[n]} nym_i = ∏_{i∈[n]} H(scp_i)^y. That is, the user can simply prove that she knows the secret key y such that nym ← hscp^y, where nym and hscp = ∏_{i∈[n]} H(scp_i) are uniquely determined by the signatures. We stress that we do not claim novelty of the main parts of the group signatures. The core contribution here is (1) the simple trick for making efficient batched link proofs, and (2) making the pseudonym idea of credentials and DAA formally available for group signatures as well.
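The batched link proof can be illustrated as a Schnorr-style proof of knowledge of y for the aggregated base hscp, made non-interactive via Fiat-Shamir. The Python sketch below is a toy model under simplifying assumptions: a multiplicative group modulo a Mersenne prime replaces the pairing group, hash_to_group is a naive stand-in, and the proof covers only the pseudonym relation (the real scheme additionally proves knowledge of a BBS+ credential); all names are hypothetical.

```python
import hashlib
import secrets

P = (1 << 127) - 1   # toy Mersenne-prime modulus
ORDER = P - 1        # exponents reduced modulo the order of Z_P*

def hash_to_group(scp: str) -> int:
    return int.from_bytes(hashlib.sha256(scp.encode()).digest(), "big") % P or 1

def challenge(*vals) -> int:
    """Fiat-Shamir challenge over the proof transcript."""
    h = hashlib.sha256("|".join(str(v) for v in vals).encode()).digest()
    return int.from_bytes(h, "big") % ORDER

def link(y, lm, scopes):
    """Batched link proof: one Schnorr proof for meta-nym = hscp^y over all scopes."""
    hscp = 1
    for s in scopes:
        hscp = hscp * hash_to_group(s) % P       # hscp = prod_i H(scp_i)
    meta_nym = pow(hscp, y, P)                   # equals prod_i nym_i
    r = secrets.randbelow(ORDER)
    t = pow(hscp, r, P)                          # commitment
    c = challenge(lm, hscp, meta_nym, t)
    return c, (r + c * y) % ORDER

def verify_link(lm, scopes, nyms, proof) -> bool:
    c, s = proof
    hscp, meta_nym = 1, 1
    for scp, n in zip(scopes, nyms):
        hscp = hscp * hash_to_group(scp) % P
        meta_nym = meta_nym * n % P              # verifier aggregates the nyms itself
    t = pow(hscp, s, P) * pow(meta_nym, -c, P) % P   # recompute commitment
    return c == challenge(lm, hscp, meta_nym, t)

y = secrets.randbelow(ORDER - 1) + 1
scopes = ["scope-%d" % i for i in range(100)]
nyms = [pow(hash_to_group(s), y, P) for s in scopes]
proof = link(y, "fresh-nonce", scopes)
assert verify_link("fresh-nonce", scopes, nyms, proof)    # 100 signatures, one proof
assert not verify_link("other-msg", scopes, nyms, proof)  # proof is bound to lm
```

Note that only the aggregation of bases and nyms grows with the number of signatures; the proof itself remains a single constant-size Schnorr proof, which is the source of the efficiency gain over per-pair link proofs.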
Join(ipk), Issue(ipk, isk) → (usk, ⊥). This interactive protocol lets the user blindly obtain a BBS+ signature from the issuer on her secret key y:
- Issuer: sends a random nonce n ←$ Z_p^* to the user.
Sign(ipk, usk, m, scp) → (σ, nym). To sign a message m for scope scp, the user generates the pseudonym nym ← H(scp)^y and computes a proof that the pseudonym was computed for a key that she has a BBS+ credential on, including the message m in the Fiat-Shamir hash of the proof.
Link(ipk, usk, lm, Σ) → π_l/⊥. Linking signatures is done by batching all nyms and scopes into nym and hscp, and proving knowledge of the discrete logarithm of nym w.r.t. hscp. The link message lm is included in the hash of the proof.

Security of our Construction
Theorem 1. Assuming SPK is zero-knowledge and simulation-sound, our construction is secure under the discrete logarithm, DDH, and q-SDH assumptions, in the random oracle model for H and SPK.
Proof sketch. Under the DDH assumption [28], anonymity follows from zero-knowledgeness and simulation-soundness of the SPKs, and from the fact that pseudonyms are indistinguishable from random when different scopes are used. We realize Identify with the help of the pseudonyms. Given a signature (m, scp, σ, nym), Identify fetches y from the usk of the specified uid and, if H(scp)^y = nym, returns 1; else, it returns 0. Scope-exclusiveness of pseudonyms ensures the required uniqueness [14]. Then, signature traceability follows from unforgeability of the BBS+ credentials, and zero-knowledgeness and soundness of SPK: if the adversary produces, for the same scope, more unlinkable signatures than there are corrupt users, or a signature from a non-member, we extract a forged BBS+ credential and can break the q-SDH assumption [11]. Winning condition 1 of link traceability is shown similarly. For condition 2, soundness of SPK ensures that the individual signatures and the link proof are valid discrete logarithm proofs. Also, by the uniqueness property of pseudonyms, no two nyms in the same link proof can have different values if derived from the same scp. This prevents malleability attacks: e.g., corrupt users joining with y = a and y = b - a and using nyms derived from those keys and the same scp in the same link proof. Thus, an adversary can only try to subvert the proof with nyms derived from different scopes. But this requires finding non-trivial roots in an equation of the form g^{α_1 y_1} · · · g^{α_n y_n} = 1, where the y_i's are controlled by the adversary, but the α_i's are not, as the g^{α_i}'s are produced by H (a random oracle). We show that a successful adversary can be used to break the discrete logarithm assumption.
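The malleability attack ruled out above can be checked numerically: two colluding keys a and b - a, with pseudonyms on the same scope, aggregate to a pseudonym under a key b that neither user individually holds. A toy check (insecure toy group parameters, hypothetical names):

```python
import hashlib
import secrets

P = (1 << 127) - 1  # toy prime modulus standing in for the scheme's group

def hash_to_group(scp: str) -> int:
    return int.from_bytes(hashlib.sha256(scp.encode()).digest(), "big") % P or 1

h = hash_to_group("shared-scope")        # same scp for both colluding users
a = secrets.randbelow(P - 2) + 1
b = secrets.randbelow(P - 2) + 1
nym1 = pow(h, a, P)                      # corrupt user 1, key a
nym2 = pow(h, (b - a) % (P - 1), P)      # corrupt user 2, key b - a
# Their aggregated "meta-nym" is a valid pseudonym under key b:
assert nym1 * nym2 % P == pow(h, b, P)
```

This is exactly why uniqueness of same-scope pseudonyms within a link proof is enforced: with distinct scopes the bases differ, and combining keys would require the non-trivial roots mentioned above, breaking the discrete logarithm assumption.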
For signature non-frameability, we rely on the uniqueness property of the pseudonyms and on zero-knowledgeness and soundness of SPK. We break the discrete logarithm assumption from an adversary forging a signature with the same scope and nym as a signature of an honest user. For link non-frameability, we rely on the zero-knowledgeness and soundness of SPK. First, an argument similar to that in traceability ensures that the link proof must be over the same exponents. We leverage this to embed a DL challenge into the nyms and link proofs of an honest user. If the adversary forges a signature (for winning condition 1) or a link proof (winning condition 2) for this user, we can extract a solution to the challenge.
The full proofs are given in the full version of this work [19].

Leveraging a Trusted Bulletin Board.
Our UCL group signatures target a setting where signatures are generated and collected in a pseudonymous manner, and where linkability can still be refined later on by the users. Such a setting implicitly assumes the storage and availability of the originally exposed group signatures, e.g., in the form of a central data lake that collects all individual signatures. In applications where the data lake is trusted by the verifiers (or even maintained by them), we can leverage this to improve the efficiency of our scheme. For clarity, we refer to such a trusted data lake, and the additional functionality it must provide, as a bulletin board (BB), which can be used as follows:
- All signatures Σ_i are sent to the BB, which verifies and appends them if valid.
- Link and VerifyLink no longer check the validity of all Σ_i in Σ, but simply check whether all signatures are in the BB.
By using such a trusted BB we can improve the efficiency of Link and VerifyLink significantly, of course at the price of trusting a central entity again. This trust assumption would be necessary for the anonymity, link traceability and link non-frameability properties. However, the functionality of the BB can easily be distributed, e.g., using a blockchain; or the trust enforced and verified via regular audits, where verifiers randomly pick signatures from the BB and check their validity. Thus, we believe that such a trust assumption is much more relaxed than trusting an entity that can single-handedly revoke the anonymity of all users.
Requirements on long-term storage capacity of the bulletin board depend on the use case. However, it seems reasonable to assume that, for most real world settings, a maximum timespan for storing past signatures can be established.
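As an illustration, the BB interface described above can be sketched as follows. This is a toy Python model, not part of the scheme: `verify_sig` stands in for the scheme's signature verification and is purely hypothetical.

```python
import hashlib

class BulletinBoard:
    """Toy append-only bulletin board (illustrative sketch only)."""

    def __init__(self, verify_sig):
        # verify_sig: hypothetical stand-in for the scheme's Verify algorithm.
        self.verify_sig = verify_sig
        self.entries = set()

    def append(self, sig_bytes):
        # Only valid signatures are appended to the board.
        if not self.verify_sig(sig_bytes):
            return False
        self.entries.add(hashlib.sha256(sig_bytes).hexdigest())
        return True

    def contains(self, sig_bytes):
        # Link/VerifyLink replace per-signature verification
        # with this cheap membership check.
        return hashlib.sha256(sig_bytes).hexdigest() in self.entries
```

With such a board, Link and VerifyLink only pay a set-membership lookup per signature instead of a full signature verification.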

Scheme with Sequential Linkability (sUCL)
We extend our basic UCL scheme to allow for sequential link proofs. These sequential proofs target a setting where the originally signed (and unlinkable) data has an inherent order, e.g., time series data when sensors or vehicles continuously upload their measurements into a data lake. While the data is collected in unlinkable form, the eventual subsequent link proof must re-establish not only the correlation but also the order of a selected subset in an immutable manner.
We start by describing the minor syntax changes needed for our sequential group signatures (sUCL), and then discuss the additional security property we want such a sUCL scheme to achieve. Roughly, when making a sequential link proof, a corrupt user should not be able to swap, omit or insert signatures within the selected interval; at the same time, the proof neither proves nor reveals anything about signatures outside the proven interval. For this sequentiality property, we consider security against honest-then-corrupt users. While this may seem too lenient, note that it fits many real-world applications where signing is an automatic process performed in the background by some device or application. In those cases, the need to alter sequences will only arise after the signatures have been created and sent. But, as described, the produced signatures, which contain extra information to enable proving order, are assumed to be stored in a data lake. Then, eventually, users have to make some claim that involves proving order with respect to those previously stored signatures. But this limits the options of malicious users. E.g., assume signatures Σ 1 , Σ 2 and Σ 3 are produced in that order (i.e., first Σ 1 , then Σ 2 and finally Σ 3 ), but a malicious user A wants to prove the reverse order. Then, A needs to commit to that strategy before sending the signatures, by altering the order information embedded in the signatures accordingly. Our argument is that, in many real-world cases, A will not know which order he will want to prove in the future. For instance, in a contact tracing scenario (for a pandemic), malicious users will not know which order they want to prove until after learning which contact was the risky one.
Moreover, which specific alteration is needed would also depend on the originally produced (and signed) data, and uninformed/random alterations may very well be useless or even counterproductive for the purposes of a malicious user. Nevertheless, even modeling this weak property requires a non-trivial approach. In Section 6, we give some insight into what seems to be possible beyond the honest-then-corrupt approach.
Finally, we present a simple extension to our Π UCL scheme that uses the trusted bulletin board sketched in Section 3.3.2 and embeds a hidden hash chain into the group signatures, which allows the order of signatures to be re-established.
Syntax of sUCL. The signatures, despite being unlinkable per se, must now have an implicit order that can be recovered and verified through SLink and VerifySLink, respectively. Abusing notation, we consider the set of signatures Σ o to be given as an ordered set, and the proof and verification are done with respect to this order. Further, to allow signatures to have an implicit order, we need to turn SSign into a stateful algorithm. That is, in addition to the standard input, it also receives a state st and outputs an updated state st′. We model that the state is initially set together with usk during the Join protocol. In summary, a sUCL scheme follows the UCL syntax from Section 3.1 with the following modifications:
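The stateful signing interface can be sketched as follows. This is a minimal Python illustration of the syntax change only; `sign_inner` is a hypothetical stand-in for the underlying UCL signing operation, not the actual construction.

```python
import hashlib

def sign_inner(usk, st, m, scp):
    # Hypothetical stand-in for the UCL signing operation (toy hash,
    # NOT the real scheme): included only so the sketch is runnable.
    return hashlib.sha256(b"|".join([usk, str(st).encode(), m, scp])).hexdigest()

def join_init_state():
    # The state st is initialized together with usk during Join.
    return 1

def ssign(usk, st, m, scp):
    # Stateful signing: consumes the state st and returns the signature
    # together with the updated state st' (here, a simple counter bump).
    sigma = sign_inner(usk, st, m, scp)
    return sigma, st + 1
```

The point of the sketch is only the shape of the API: SSign receives st and hands back st′, so two signatures by the same user are implicitly ordered by their state values.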

Security Model for sUCL
We want the sUCL scheme to have (essentially) the same traceability, non-frameability and anonymity properties as in Section 3.2, and to additionally guarantee the correctness and security of the re-established sequential order.
Traceability and Non-frameability. These properties cover the security expected through the controlled linkage (not order) and only need minor adjustments to cater for the changed syntax: in the games, we use SSIGN/SLINK instead of SIGN/LINK.

Sequentiality.
This property captures the security we can expect from proofs that reveal the sequential order of several signatures issued by the same user. Namely, when a user makes a sequential link proof for an ordered set Σ o = Σ 1 , . . . , Σ n , we want to ensure that Σ 1 , . . . , Σ n indeed occurred in that order and that no signature was omitted or inserted. The latter prevents attacks where a corrupt user tries to "hide" or add certain signatures, e.g., when a driver is asked to reveal the speed measurements from a certain time interval and wants to omit the moment she was speeding.
We follow the classic unforgeability style of definition and ask the adversary to output a forged link proof with an incorrect sequence. Clearly, such a definition needs to capture what the "right order" of signatures is, in order to decide whether a forgery violates that order. To do so, we opted for a two-stage game where the adversary can engage with honest users and make them sign (and link) messages of his choice. This ensures that we know the correct order in which the signatures are generated. Eventually, the adversary picks one of the honest users uid * , upon which uid * becomes corrupted and the adversary receives her secret key and current state. The adversary wins if he outputs a valid sequential link proof that violates the sequence produced by the originally honest user, e.g., by re-ordering, omitting or inserting signatures.
Clearly we must allow the adversary to include maliciously generated signatures in his forgery, but we must be careful to avoid trivial wins: as soon as we give the adversary the secret key of uid * , he can trivially (re-)generate signatures on behalf of the honest user. Thus, we ask the adversary to commit to a set of maliciously generated signatures Σ before corrupting uid * , and require that his link forgery for alleged ordered signatures Σ * be a subset of Σ ∪ SIG[uid * ].
In the basic scheme (UCL), we defined anonymity with the typical approach: the adversary first picks two honest users and must then guess which one is used to produce challenge signatures and link proofs. In UCL, we just needed to prevent the adversary from leveraging implicit and explicit linkability. This boils down to disallowing the reuse of scopes between calls to CH-SIGN b and SIGN (for challenge users), and disallowing the linking of signatures produced by CH-SIGN b and SIGN (again, for challenge users).
In the sequential extension (sUCL), the idea is still the same, i.e., the adversary has to guess which of the two users he picked is the challenge user. However, the adversary has more ways to trivially learn the challenge user by leveraging the order information unavoidably revealed by the sequential link queries. Take, for instance, the scenario sketched in Fig. 3. There, the adversary interleaves a call to CH-SSIGN b (the one producing Σ * 1 ) between calls to SSIGN for the same challenge user (the call that produces Σ 2 and the calls producing Σ 3 -Σ 5 ). If the adversary makes a call to SLINK with the signatures produced before and after the call to CH-SSIGN b (e.g., including Σ 2 , Σ 3 in Fig. 3) and the call fails, then the challenge user is the same as the one used in the calls to SSIGN. Indeed, the link call fails because one signature is missing in the sequence (in Fig. 3, the correct sequence would be the dashed one). Similarly, if the call succeeds, then the challenge user is not the one used in the calls to SSIGN (and the correct sequence in Fig. 3 is the solid one). Note that this works even when the scopes in all signatures are different: hence, it would not constitute a disallowed action in the UCL model. A similar strategy interleaving a call to SSIGN between calls to CH-SSIGN b also applies.
Oracles and state. In the previous example, we saw that calls to CH-SSIGN b and SSIGN (the latter for uid * 0 or uid * 1 ) can later be used to (trivially) expose the challenge user, by linking signatures produced before those calls with signatures produced after. However, linking signatures produced within the same interval of such calls should not leak any information about the challenge user.

Fig. 3: Sketch of a strategy leading to a trivial win by A leveraging order information in sUCL, and the model to detect it.

To capture
those intervals, we assign every honestly generated signature to a cluster (set of signatures). Since the calls to CH-SSIGN b and SSIGN are the events defining the linkage of which signatures would lead to trivial wins, we use those calls to mark when we need to start assigning signatures to a new cluster. More specifically, to keep track of the cluster to which we need to assign signatures by challenge users, we resort to two counters: i SIG * and i CSIG . Every time the adversary makes a call to CH-SSIGN b , we dump all signatures produced by SSIGN(uid * b , . . . ) since the last call to CH-SSIGN b to a new cluster SIG * [uid * b , i SIG * ], and increment i SIG * . Similarly, when a call to SSIGN(uid * b , . . . ) is made, we increment i CSIG so that all signatures produced by CH-SSIGN b from that point onwards start being assigned to a new cluster CSIG[i CSIG ].
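The cluster bookkeeping can be illustrated with the following sketch. It is a hypothetical Python model of the game's state, restricted to oracle calls involving challenge users; the names mirror the counters and tables of the model.

```python
from collections import defaultdict

class ClusterTracker:
    """Toy bookkeeping for the sUCL anonymity game (illustrative only):
    signatures are grouped into clusters delimited by interleavings of
    SSIGN (for a challenge user) and CH-SSIGN_b calls."""

    def __init__(self):
        self.i_sig = 0                     # counter i_SIG*
        self.i_csig = 0                    # counter i_CSIG
        self.SIG = defaultdict(list)       # SIG*[uid, i_SIG*]
        self.CSIG = defaultdict(list)      # CSIG[i_CSIG]
        self.pending = defaultdict(list)   # SSIGN sigs since last CH-SSIGN_b

    def on_ssign(self, uid, sigma):
        # A SSIGN call for a challenge user: record the signature and
        # make subsequent CH-SSIGN_b output start a new cluster.
        self.pending[uid].append(sigma)
        self.i_csig += 1

    def on_ch_ssign(self, uid_b, sigma):
        # A CH-SSIGN_b call: dump the pending SSIGN signatures of the
        # challenge user into a fresh cluster, then record the challenge sig.
        self.SIG[(uid_b, self.i_sig)].extend(self.pending.pop(uid_b, []))
        self.i_sig += 1
        self.CSIG[self.i_csig].append(sigma)
```

Linking only within one cluster is harmless; linking across cluster boundaries is what the model forbids, since it would reveal the challenge bit.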
The oracles used to model sUCL are summarized next and fully defined in Fig. 4. The state variables are summarized in Table 2. We emphasize that the new modifications only affect the anonymity property, while the other properties just need to adjust for the updated syntax.
-SSIGN/SLINK extend SIGN/LINK. SSIGN uses st uid , the state of user uid, to call SSign, and updates it with the returned st′ uid . SLINK takes an ordered set of signatures.
-CH-SSIGN b /CH-SLINK b : challenge oracles for the anonymity game, allowing the adversary to get signatures and link proofs for the challenge user.
Helper Function Adjacent. We rely on a helper function, Adjacent(LNK[uid], CLNK) → 0/1. It explores LNK to check link queries for honest signatures and CLNK to check link queries for challenge signatures. It returns 1 if SLINK and CH-SLINK b have been queried with two sets of signatures that were sequentially generated, and 0 otherwise. This is an artifact of our specific construction rather than a general requirement, though. In Π sUCL , given two adjacent signatures Σ n , Σ n+1 , if Σ n is included in one link proof and Σ n+1 in another, it is possible to determine that they were sequentially issued. Consequently, if one is a challenge signature and the other is not, it would be possible to trivially guess the bit b in the anonymity game. The Adjacent function is defined in Fig. 5.
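A possible realization of the Adjacent check is sketched below. This is a toy Python model; the data layout (per-oracle query lists and a global creation index) is an assumption made for illustration, not the paper's formalization.

```python
def adjacent(lnk_queries, clnk_queries, seq_index):
    """Toy Adjacent helper (hypothetical layout): lnk_queries and
    clnk_queries are the sets of signatures queried to SLINK and
    CH-SLINK_b respectively; seq_index maps each signature to its
    global creation position. Returns 1 if some honest-link query and
    some challenge-link query contain signatures that were generated
    back-to-back, else 0."""
    for hset in lnk_queries:
        for cset in clnk_queries:
            for s in hset:
                for c in cset:
                    if abs(seq_index[s] - seq_index[c]) == 1:
                        return 1
    return 0
```

The quadratic scan is fine for a game definition, where only clarity matters; an implementation of the experiment could index by position instead.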
(Fig. 5, conditions for Adjacent to return 1: either Σ 0 was produced by SSIGN immediately after Σ n was produced by CH-SSIGN b , or Σ 0 was produced by CH-SSIGN b immediately after Σ n was produced by SSIGN.)

Anonymity definition. Beyond the cumbersome changes required to prevent the new trivial wins, and the extra Adjacent check required by our specific construction, we capture anonymity in sUCL as in UCL. Specifically, the adversary controls the issuer and allows users to join, sign and link signatures. He chooses a pair of honest users, one of which is randomly picked to initialize the challenge oracles. Eventually, the adversary needs to guess which one of the users was chosen, a task for which he can query the oracles again, subject to the restrictions described above. The formal definition is given next.

Trusting an append-only bulletin board BB. In our sequential scheme construction, the BB is required. It now also checks that the commitments to the pseudorandom numbers specified above are unique across all the uploaded signatures: this is critical to prevent malleable sequences. Also, being append-only prevents removing signatures once added, which avoids tampering with the order.
Our construction Π sUCL . For brevity, we only describe the modified functions.
-Join: operates as in Π UCL , but the user additionally adds k ← PRF.KeyGen(τ ) to her usk and sets st ← 1.
The signatures in our construction are required to be uploaded to the bulletin board BB; the entity responsible for doing so may depend on the use case. BB verifies (m, scp, (σ, (seq 1 , seq 2 , seq 3 )), nym) and checks uniqueness of seq, rejecting the signature if either check fails. Uniqueness of seq ensures that no Σ′ = (·, ·, (·, (seq′ 1 , seq′ 2 , ·)), ·) exists in BB such that seq′ 1 = seq 1 or seq′ 2 = seq 2 .
-If any Σ i does not exist in BB, return 0. Else, verify π l as in VerifyLink.
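One plausible instantiation of the hidden hash chain is sketched below in Python. The exact layout of the seq components (here, seq 1 committing to the current chain value and seq 2 to the next, so that consecutive signatures overlap by one value) is an assumption made for illustration; the paper's construction may differ.

```python
import hashlib
import hmac

def prf(k, st):
    # HMAC-based PRF, as in the paper's instantiation choice.
    return hmac.new(k, str(st).encode(), hashlib.sha256).digest()

def Hp(x):
    # Stand-in for the hash H' used for the chain commitments.
    return hashlib.sha256(b"Hp|" + x).hexdigest()

def seq_components(k, st):
    # Hypothetical layout: seq1 commits to the current chain value n_st,
    # seq2 to the next one n_{st+1}, so consecutive signatures satisfy
    # sig_i.seq2 == sig_{i+1}.seq1.
    return Hp(prf(k, st)), Hp(prf(k, st + 1))

def adjacent_in_chain(seq_i, seq_j):
    # Verifiable adjacency: the second component of the earlier signature
    # must equal the first component of the later one.
    return seq_i[1] == seq_j[0]
```

Under this layout, re-ordering, omitting or inserting a signature breaks the overlap between consecutive seq values, and BB's uniqueness check prevents a user from publishing two signatures with the same chain position.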
Efficiently fetching previously created signatures. Finally, note that users can leverage the n st values to easily fetch signatures from the bulletin board BB. If a user has a rough idea of the value of st when the signature was created, she can use the PRF to recompute n st for nearby values of st. Otherwise, it is always possible to iterate from the initial value until finding the desired signature (as opposed to locally storing all signatures, or iterating through all signatures in BB).
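This fetching strategy can be sketched as follows. This is a toy Python illustration; `bb_index` is a hypothetical lookup table from chain commitments to stored signatures, and the PRF/hash stand-ins are purely illustrative.

```python
import hashlib
import hmac

def prf(k, st):
    # HMAC-based PRF stand-in.
    return hmac.new(k, str(st).encode(), hashlib.sha256).digest()

def Hp(x):
    # Stand-in for the hash H' committing to chain values.
    return hashlib.sha256(b"Hp|" + x).hexdigest()

def fetch_signature(bb_index, k, st_guess, window=10):
    # bb_index: hypothetical map from H'(n_st) to the stored signature.
    # Scan st values near the guess, recomputing n_st with the PRF,
    # instead of storing all signatures locally or scanning all of BB.
    for st in range(max(1, st_guess - window), st_guess + window + 1):
        tag = Hp(prf(k, st))
        if tag in bb_index:
            return st, bb_index[tag]
    return None
```

The user thus trades local storage for a handful of PRF evaluations around the guessed state value.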

Security of our Construction
Theorem 2. Assuming zero-knowledgeness and simulation-soundness of SPK, collision resistance of H′, pseudorandomness of PRF, and a trusted BB verifying signatures and checking uniqueness of seq (across all signatures in BB), our construction is secure under the discrete logarithm, DDH, and q-SDH assumptions, in the random oracle model for H, H′ and SPK.
Proof sketch. Proving anonymity essentially requires showing that the newly added seq components can be simulated, which follows from the pseudorandomness of PRF and the modelling of H and H′ as random oracles.
For sequentiality, we show how to find collisions in H′, assuming a trusted BB that verifies signatures and checks uniqueness of their seq components, and pseudorandomness of PRF. Since honest signatures must exist in Σ * , all the attacker can do is remove or swap honest signatures, or insert dishonest signatures before or after honest ones. However, the adversary commits to the set Σ of dishonest signatures in the first stage of the game, and he can only use signatures in this set and SIG[uid * ] to produce Σ * . First, the uniqueness checks by BB prevent the adversary from creating multiple signatures with the same seq values and re-ordering them as desired. Then, we show that to remove or swap honest signatures, or insert malicious ones, the adversary must find different openings of the seq 1 or seq 2 values in the committed signatures that are consistent with their hash chain, implying a collision in H′. This ensures that, before corrupting the user, the probability of the adversary producing a dishonest signature that can be "chained" with an honest one is negligible.
Full proofs for the new and modified properties are given in the full version of this work [19]. The rest of the properties are proven as in the basic scheme.

Table 3 summarises the functionality provided by the UCL and sUCL variants proposed in the present work, as well as that of the most closely related works [23,26]. The table focuses on the linkability aspects, and on which entities can perform such linking.

Table 3: Functionality comparison between the schemes presented here and [23,26].

Evaluation and Measurements
We now analyse the computational and space costs of our constructions, comparing with related work. In Table 4, we denote with e G X , p and h, respectively, an exponentiation (in G X ), a pairing and a computation of a hash function; and with nG 1 , nZ p , nh, n elements in G 1 , Z p and hashes, respectively (elements associated with the Paillier encryption used in [23] are denoted with Z n 2 ). For the SPKs, we use the Fiat-Shamir transform, and for the PRF an HMAC construction [4]. The curve used is BLS12-381 [3,2]. The costs derived from verifying and storing the individual signatures involved in Link and VerifyLink are omitted, i.e., we only account for the costs derived from storing/computing or verifying the linkability proof itself. Note also that [23] does not include a linking functionality per se; the (mostly) equivalent functionality is a combination of their Blind, Convert and Unblind operations, so in the table we show the aggregate of their costs. In addition, other operations supported by [26], but not compatible with our model, are also omitted. These include their Opn, Lnk and LnkJdg functions (in Table 4, Link and VerifyLink refer to SLnk and SLnkJdg in [26]).

Table 4: Computational (top) and space (bottom) costs. In the "Our scheme" column, we show in black font the costs of the UCL scheme (Section 3), and the text in red corresponds to the added costs of the sUCL scheme (Section 4). Since [26,23] only support explicit linkability, we only compare the linking costs in those schemes against the explicit linking of our schemes. Link costs for [23] aggregate their blinding, converting and unblinding costs. Operations from [26] that are not compatible with our model are omitted.

Fig. 6 shows the results of experiments obtained with a C implementation of both variants of our scheme (run on a MacBook Pro 2.5 GHz Quad-Core Intel i7, 16 GB 2133 MHz LPDDR3 RAM), iterating every trial 1000 times.
Setup, Join and Issue are omitted, as they will typically take place either rarely or in non-time-critical contexts. Sign and Verify run well below 5 ms. For Link and VerifyLink (and the sequential variants), we experiment with sets of 10, 50 and 100 signatures. As in Table 4, this does not include verification of individual signatures. Note that even in the case of 100 signatures, we are still in the order of 40 ms for linking and 20 ms for verifying the proofs. For comparison, [26] reports signing and signature verification times around 100-150 ms, and linking and link verification times (for only two signatures) in the order of 330 ms.

Conclusion
We have presented a new variant of group signatures that allows users to explicitly link large sets of signatures, supports implicit signature linking, and does not rely on a trusted opener. We have then extended this variant to allow proving order within a sequence of linked signatures, including the guarantee that no signature originally produced between the first and last signatures of the sequence has been omitted. We have also given a formal model capturing the extended unforgeability and privacy properties in this setting, and efficient constructions realizing our model, which we have proved secure under discrete-logarithm-related assumptions. Finally, we have reported on an experimental evaluation obtained from an implementation of our schemes.
Several lines of further work are possible. First, we give an unforgeability property ensuring that order is maintained against honest-then-corrupt users, but we do not consider the equivalent for initially corrupt ones. While we argue that modeling honest-then-corrupt users is applicable to many real-world use cases, it is interesting to consider the stronger variant. In that case, it initially seems that we can only hope to detect inconsistent proofs: if we only consider independent sequence proofs, a malicious signer may just "precompute" the sequence in the order he intends to prove afterwards, even if he publishes the signatures in a different order. Also, being able to prove non-linkage of signatures may be an interesting functionality, which would also impact the model. In practice, there may be use cases where proving not having issued a (set of) signature(s) can be useful, for instance, as a basic mechanism for (privacy-respecting) blacklisting. Efficiency-wise, taking inspiration from [20,25], a great improvement would be to study the incorporation of batch verification of signatures (in addition to batch linking). On a more specific note, our construction for proving linked sequences introduces an artifact that affects the anonymity property. Namely, separately linking two adjacent sequences (i.e., where the last signature of one sequence was created immediately before the first signature of the other) makes both sequences linkable. Hence, removing this constraint would be an obvious improvement.