Internet-Draft                Privacy Pass Architecture            August 2023
Davidson, et al.              Expires 10 February 2024
This document specifies the Privacy Pass architecture and requirements for its constituent protocols used for authorization based on privacy-preserving authentication mechanisms. It describes the conceptual model of Privacy Pass and its protocols, its security and privacy goals, practical deployment models, and recommendations for each deployment model that help ensure the desired security and privacy goals are fulfilled.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 10 February 2024.¶
Copyright (c) 2023 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
Privacy Pass is an architecture for authorization based on privacy-preserving authentication mechanisms. In other words, relying parties authenticate clients in a privacy-preserving way, i.e., without learning any unique, per-client information through the authentication protocol, and then make authorization decisions on the basis of that authentication succeeding or failing. Possible authorization decisions might be to provide clients with read access to a particular resource or write access to a particular resource.¶
Typical approaches for authorizing clients, such as through the use of long-term state (cookies), are not privacy-friendly since they allow servers to track clients across sessions and interactions. Privacy Pass takes a different approach: instead of presenting linkable state-carrying information to servers, e.g., a cookie indicating whether or not the client is an authorized user or has completed some prior challenge, clients present unlinkable proofs that attest to this information. These proofs, or tokens, are private in the sense that a given token cannot be linked to the protocol interaction where that token was initially issued.¶
At a high level, the Privacy Pass architecture consists of two protocols: redemption and issuance. The redemption protocol, described in [AUTHSCHEME], runs between Clients and Origins (servers). It allows Origins to challenge Clients to present tokens for consumption. Origins verify the token to authenticate the Client -- without learning any specific information about the Client -- and then make an authorization decision on the basis of the token verifying successfully or not. Depending on the type of token, e.g., whether or not it can be cached, the Client either presents a previously obtained token or invokes an issuance protocol, such as [ISSUANCE], to acquire a token to present as authorization.¶
This document describes requirements for both redemption and issuance protocols and how they interact. It also provides recommendations on how the architecture should be deployed to ensure the privacy of clients and the security of all participating entities.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "NOT RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in BCP 14 [RFC2119] [RFC8174] when, and only when, they appear in all capitals, as shown here.¶
The following terms are used throughout this document:¶
An entity that seeks authorization to an Origin. Using [RFC9334] terminology, Clients implement the RATS Attester role.¶
A cryptographic authentication message used for authorization decisions.¶
A privacy-preserving authenticator that is used for authorization.¶
An entity that consumes tokens presented by Clients and uses them to make authorization decisions.¶
The mechanism by which Origins request tokens from Clients.¶
The mechanism by which Clients present tokens to Origins for consumption.¶
An entity that issues tokens to Clients for properties attested to by the Attester.¶
The mechanism by which an Issuer produces tokens for Clients.¶
An entity that attests to properties of Clients for the purposes of token issuance. Using [RFC9334] terminology, Attesters implement the RATS Verifier role.¶
The procedure by which an Attester determines whether or not a Client has the specific set of properties that are necessary for token issuance.¶
The trust relationships between each of the entities in this list are further elaborated upon in Section 3.3.¶
The Privacy Pass architecture consists of four logical entities -- Client, Origin, Issuer, and Attester -- that work in concert for token redemption and issuance. This section presents an overview of Privacy Pass, a high-level description of the threat model and privacy goals of the architecture, and the goals and requirements of the redemption and issuance protocols. Deployment variations for the Origin, Issuer, and Attester in this architecture, including considerations for implementing these entities, are further discussed in Section 4.¶
The typical interaction flow for Privacy Pass uses the following steps:¶
1. The Client requests a resource from the Origin, and the Origin responds with a token challenge.¶
2. If the Client does not already have a token that satisfies the challenge, it runs an issuance protocol with an Attester and Issuer, completing the required attestation procedure, to obtain one.¶
3. If the Client has a token, it includes it in a subsequent request to the Origin, as authorization. This token is sent only once in response to a challenge; Clients do not send tokens more than once, even if they receive duplicate or redundant challenges. The Origin validates that the token was generated by the expected Issuer and has not already been redeemed for the corresponding token challenge. If the Client does not have a token, perhaps because issuance failed, the Client does not reply to the Origin's challenge with a new request.¶
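The following non-normative Python sketch simulates this flow with trivial in-memory stand-ins for the Origin and for the issuance protocol; all names (StubOrigin, stub_issuance, client_flow) and message shapes are illustrative assumptions, not structures defined by any Privacy Pass specification.¶

   from typing import Optional

   class StubOrigin:
       """Trivial in-memory stand-in for an Origin that challenges for tokens."""
       def __init__(self):
           self.redeemed = set()

       def request(self, token: Optional[str]):
           if token is None:
               return ("challenge", "token-challenge-1")  # challenge the Client
           if token in self.redeemed:
               return ("denied", None)                    # replayed token
           self.redeemed.add(token)
           return ("ok", None)                            # authorized

   def stub_issuance(challenge: str) -> Optional[str]:
       # Stand-in for the attestation and issuance protocols.
       return "token-for-" + challenge

   def client_flow(origin: StubOrigin) -> str:
       status, challenge = origin.request(token=None)
       if status != "challenge":
           return status
       token = stub_issuance(challenge)
       if token is None:
           return "no-token"  # issuance failed: do not retry the request
       status, _ = origin.request(token=token)  # present the token exactly once
       return status

   print(client_flow(StubOrigin()))  # -> ok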
Use cases for Privacy Pass are broad and depend greatly on the deployment model as discussed in Section 4. The initial motivating use case for Privacy Pass [PrivacyPassCloudflare] was to help rate limit malicious or otherwise abusive traffic from services such as Tor [DMS2004]. The generalized and evolved architecture described in this document also works for this use case. However, for added clarity, some more possible use cases are described below.¶
The end-to-end flow for Privacy Pass described in Section 3.1 involves three different types of contexts:¶
Redemption context: The interactions and set of information shared between the Client and Origin, i.e., the information that is provided or otherwise available to the Origin during redemption that might be used to identify a Client and construct a token challenge. This context includes all information associated with redemption, such as the timestamp of the event, Client visible information (including the IP address), and the Origin name.¶
Issuance context: The interactions and set of information shared between the Client, Attester, and Issuer, i.e., the information that is provided or otherwise available to Attester and Issuer during issuance that might be used to identify a Client. This context includes all information associated with issuance, such as the timestamp of the event, any Client visible information (including the IP address), and the Origin name (if revealed during issuance). This does not include the token challenge in its entirety, as that is kept secret from the Issuer during the issuance protocol.¶
Attestation context: The interactions and set of information shared between the Client and Attester only, for the purposes of attesting the validity of the Client, that is provided or otherwise available during attestation that might be used to identify the Client. This context includes all information associated with attestation, such as the timestamp of the event and any Client visible information, including information needed for the attestation procedure to complete.¶
The privacy goals of Privacy Pass assume a threat model in which Origins trust specific Issuers to produce tokens, and Issuers in turn trust one or more Attesters to correctly run the attestation procedure with Clients. This arrangement ensures that tokens which validate for a given Issuer were only issued to a Client that successfully completed attestation with an Attester that the Issuer trusts. Moreover, this arrangement means that if an Origin accepts tokens issued by an Issuer that trusts multiple Attesters, then a Client can use any one of these Attesters to issue and redeem tokens for the Origin. Whether or not these different entities in the architecture collude for learning redemption, issuance, or attestation contexts, as well as the necessary preconditions for context unlinkability, depends on the deployment model; see Section 4 for more details.¶
The mechanisms for establishing trust between each entity in this arrangement are deployment specific. For example, in settings where Clients interact with Issuers through an Attester, Attesters and Issuers might use mutually authenticated TLS to authenticate one another. In settings where Clients do not communicate with Issuers through an Attester, the Attesters might convey this trust via a digital signature that Issuers can verify.¶
Clients explicitly trust Attesters to perform attestation correctly and in a way that does not violate their privacy. In particular, this means that Attesters which may be privy to private information about Clients are trusted to not disclose this information to non-colluding parties; see Section 4 for more about different deployment models and non-collusion assumptions. In contrast, Clients do not trust Issuers or Origins: the privacy goals below are meant to hold even if Issuers and Origins are malicious.¶
Given this threat model, the privacy goals of Privacy Pass are oriented around unlinkability based on redemption, issuance, and attestation contexts: Origin-Client unlinkability, Issuer-Client unlinkability, and Attester-Origin unlinkability, as described below.¶
These unlinkability properties ensure that only the Client is able to correlate information that might be used to identify them with activity on the Origin. The Attester, Issuer, and Origin only receive the information necessary to perform their respective functions.¶
The manner in which these unlinkability properties are achieved depends on the deployment model, type of attestation, and issuance protocol details. For example, as discussed in Section 4, in some cases it is necessary to use an anonymization service such as Tor [DMS2004], which hides Clients' IP addresses. In general, anonymization services ensure that all Clients which use the service are indistinguishable from one another, though in practice there may be small distinguishing features (TLS fingerprints, HTTP headers, etc.). Moreover, Clients generally trust these services not to disclose private Client information (such as IP addresses) to untrusted parties. Failure to use an anonymization service when interacting with Attesters, Issuers, or Origins can allow the set of possible Clients to be partitioned by the Client's IP address and can therefore lead to unlinkability violations. Similarly, malicious Origins may attempt to link two redemption contexts together by using Client-specific Issuer public keys. See Section 5 and Section 6 for more information.¶
The remainder of this section describes the functional properties and security requirements of the redemption and issuance protocols in more detail. Section 3.6 describes how information flows between Issuer, Origin, Client, and Attester through these protocols.¶
The Privacy Pass redemption protocol, described in [AUTHSCHEME], is an authorization protocol wherein Clients present tokens to Origins for authorization. Normally, redemption follows a challenge-response flow, wherein the Origin challenges Clients for a token with a TokenChallenge ([AUTHSCHEME], Section 2.1) and, if possible, Clients present a valid Token ([AUTHSCHEME], Section 2.2) in response. This interaction is shown below.¶
Alternatively, when configured to do so, Clients may opportunistically present Token values to Origins without a corresponding TokenChallenge.¶
The structure and semantics of the TokenChallenge and Token messages depend on the issuance protocol and token type being used; see [AUTHSCHEME] for more information.¶
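As a rough illustration only, the following Python sketch models a token challenge as a simple record. The field names loosely follow the TokenChallenge fields described in [AUTHSCHEME]; the normative structures, encodings, and token type values are defined in that document, and the concrete values used here are purely hypothetical.¶

   from dataclasses import dataclass

   @dataclass
   class TokenChallenge:
       token_type: int            # identifies the token type / issuance protocol
       issuer_name: str           # the Issuer whose tokens the Origin accepts
       redemption_context: bytes  # empty, or binds tokens to this redemption context
       origin_info: str           # empty, a single Origin, or a cross-Origin set

   # Hypothetical per-Origin challenge naming a single Issuer.
   challenge = TokenChallenge(
       token_type=0x0002,
       issuer_name="issuer.example",
       redemption_context=b"",
       origin_info="origin.example",
   )
   print(challenge)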
The challenge provides the Client with the information necessary to obtain tokens that the Origin might subsequently accept in the redemption context. There are a number of ways in which the token may vary based on this challenge, including the issuance protocol and Issuer indicated by the challenge, whether the challenge is bound to a specific redemption context, and whether the resulting token is per-Origin or cross-Origin.¶
Origins that admit cross-Origin tokens bear some risk of allowing tokens issued for one Origin to be spent in an interaction with another Origin. In particular, cross-Origin tokens issued in response to a challenge for one Origin can be redeemed at another Origin in the cross-Origin set, which can make it difficult to regulate token consumption. Depending on the use case, Origins may need to maintain state to track redeemed tokens. For example, Origins that accept cross-Origin tokens across shared redemption contexts SHOULD track which tokens have been redeemed already in those redemption contexts, since these tokens can be issued and then spent multiple times in response to any such challenge. Note that Clients which redeem the same token to multiple Origins do risk those Origins being able to link Client activity together, which can disincentivize this behavior. See Section 2.1.1 of [AUTHSCHEME] for discussion.¶
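A minimal sketch of such per-context replay tracking is shown below; it assumes tokens and redemption contexts can be reduced to opaque identifiers, which is an illustrative simplification rather than anything mandated by [AUTHSCHEME].¶

   class RedemptionState:
       """Tracks tokens already redeemed within each shared redemption context."""
       def __init__(self):
           self._redeemed = {}  # context id (bytes) -> set of redeemed token ids

       def try_redeem(self, context_id: bytes, token_id: bytes) -> bool:
           seen = self._redeemed.setdefault(context_id, set())
           if token_id in seen:
               return False  # already spent in this redemption context
           seen.add(token_id)
           return True

   state = RedemptionState()
   assert state.try_redeem(b"ctx-1", b"token-A") is True
   assert state.try_redeem(b"ctx-1", b"token-A") is False  # replay rejected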
How Clients respond to token challenges can have privacy implications. For example, if an Origin allows the Client to choose an Issuer, then the choice of Issuer can reveal information about the Client used to partition anonymity sets; see Section 6.2 for more information about these privacy considerations.¶
The Privacy Pass issuance protocol, described in [ISSUANCE], is a two-message protocol that takes as input a TokenChallenge from the redemption protocol ([AUTHSCHEME], Section 2.1) and produces a Token ([AUTHSCHEME], Section 2.2), as shown in Figure 1.¶
The structure and semantics of the TokenRequest and TokenResponse messages depend on the issuance protocol and token type being used; see [ISSUANCE] for more information.¶
Clients interact with the Attester and Issuer to produce a token in response to a challenge. The context in which an Attester vouches for a Client during issuance is referred to as the attestation context. This context includes all information associated with the issuance event, such as the timestamp of the event and Client visible information, including the IP address or other information specific to the type of attestation done.¶
Each issuance protocol may be different, e.g., in the number and types of participants, underlying cryptographic constructions used when issuing tokens, and even privacy properties.¶
Clients initiate the issuance protocol using the token challenge, a randomly generated nonce, and public key for the Issuer, all of which are the Client's private input to the protocol and ultimately bound to an output Token; see Section 2.2 of [AUTHSCHEME] for details. Future specifications may change or extend the Client's input to the issuance protocol to produce Tokens with a different structure.¶
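A non-normative sketch of these Client inputs is shown below; the class, field names, and the digest used to illustrate the binding are assumptions made for this example, while the normative Token structure and its binding are defined in Section 2.2 of [AUTHSCHEME].¶

   import hashlib
   import os
   from dataclasses import dataclass

   @dataclass
   class ClientIssuanceInput:
       token_challenge: bytes    # TokenChallenge received from the Origin
       nonce: bytes              # fresh per-token randomness
       issuer_public_key: bytes  # public key under which the token will verify

   def new_issuance_input(token_challenge: bytes, issuer_public_key: bytes) -> ClientIssuanceInput:
       return ClientIssuanceInput(token_challenge, os.urandom(32), issuer_public_key)

   def binding_digest(inp: ClientIssuanceInput) -> bytes:
       # One illustrative way to capture that the output Token is bound to all
       # three inputs: hash them together.
       h = hashlib.sha256()
       for field in (inp.token_challenge, inp.nonce, inp.issuer_public_key):
           h.update(hashlib.sha256(field).digest())
       return h.digest()

   inp = new_issuance_input(b"example-challenge", b"example-issuer-key")
   print(binding_digest(inp).hex())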
The issuance protocol itself can be any interactive protocol between the Client, Issuer, or other parties that produces a valid token bound to the Client's private input, subject to security requirements such as input secrecy, which ensures that the Client's private input, including the token challenge, is not directly revealed to the Issuer.¶
See Section 3.5.4 for requirements on new issuance protocol variants and related extensions.¶
In the sections below, we describe the Attester and Issuer roles in more detail.¶
In Privacy Pass, attestation is the process by which an Attester bears witness to, confirms, or authenticates a Client so as to verify properties about the Client that are required for Issuance. Issuers trust Attesters to perform attestation correctly, i.e., to implement attestation procedures in a way that are not subverted or bypassed by malicious Clients.¶
[RFC9334] describes an architecture for attestation procedures. Using that architecture as a conceptual basis, Clients are RATS attesters that produce attestation evidence, and Attesters are RATS verifiers that appraise the validity of attestation evidence.¶
The type of attestation procedure is a deployment-specific option and outside the scope of the issuance protocol. Example attestation procedures include solving a CAPTCHA, demonstrating possession of a valid application-layer account, or proving that the Client is running on trusted hardware.¶
Attesters may support different types of attestation procedures.¶
Each attestation procedure has different security properties. For example, attesting to having a valid account is different from attesting to running on trusted hardware. Supporting multiple attestation procedures is an important step towards ensuring equitable access for Clients; see Section 5.1.¶
The role of the Attester in the issuance protocol and its impact on privacy depends on the type of attestation procedure, issuance protocol, and deployment model. For instance, requiring a conjunction of attestation procedures could decrease the overall anonymity set size. As an example, the number of Clients that have solved a CAPTCHA in the past day, that have a valid account, and that are running on a trusted device is less than the number of Clients that have solved a CAPTCHA in the past day. Attesters SHOULD NOT be based on attestation procedures that result in small anonymity sets.¶
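The following toy example illustrates this point with hypothetical Client identifiers: requiring the conjunction of several attestation procedures can only shrink the qualifying set.¶

   # Hypothetical sets of Clients that satisfy each attestation procedure.
   solved_captcha = {"c1", "c2", "c3", "c4"}
   has_valid_account = {"c2", "c3"}
   on_trusted_device = {"c3", "c4"}

   # Requiring all three procedures yields the intersection, which is never
   # larger than any individual set.
   conjunction = solved_captcha & has_valid_account & on_trusted_device
   assert len(conjunction) <= len(solved_captcha)
   print(conjunction)  # -> {'c3'}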
Depending on the issuance protocol, the Issuer might learn information about the Origin. To ensure Issuer-Client unlinkability, the Issuer should be unable to link that information to a specific Client. In cases where the Attester has access to Client-specific information, whether because the attestation procedure involves Client-specific information (such as application-layer account information) or because the deployment model exposes it (such as Client IP addresses), Clients trust the Attester not to share any Client-specific information with the Issuer. In deployments where the Attester does not learn Client-specific information, the Client does not need to explicitly trust the Attester in this regard.¶
Issuers trust Attesters to correctly and reliably perform attestation. However, certain types of attestation can vary in value over time, e.g., if the attestation procedure is compromised. Broken attestation procedures are considered exceptional events and require configuration changes to address the underlying cause. For example, if attestation is compromised because of a zero-day exploit on compliant devices, then the corresponding attestation procedure should be untrusted until the exploit is patched. Addressing changes in attestation quality is therefore a deployment-specific task. In Split Attester and Issuer deployments (see Section 4.4), Issuers can choose to remove compromised Attesters from their trusted set until the compromise is patched.¶
From the perspective of an Origin, tokens produced by an Issuer with at least one compromised Attester cannot be trusted, assuming the Origin does not know which attestation procedure was used for issuance. This is because the Origin cannot distinguish between tokens that were issued via compromised Attesters and tokens that were issued via uncompromised Attesters, absent some distinguishing information in the tokens themselves or from the Issuer. As a result, until the attestation procedure is fixed, the Issuer cannot be trusted by Origins, and any tokens issued by an Issuer with a compromised Attester may no longer be trusted by Origins, even if those tokens were issued to Clients interacting with an uncompromised Attester.¶
In Privacy Pass, the Issuer is responsible for completing the issuance protocol for Clients that complete attestation through a trusted Attester. As described in Section 3.5.1, Issuers explicitly trust Attesters to correctly and reliably perform attestation. Origins explicitly trust Issuers to only issue tokens from trusted Attesters. Clients do not explicitly trust Issuers.¶
Depending on the deployment model, issuance may require some form of Client anonymization service, similar to an IP-hiding proxy, so that Issuers cannot learn information about Clients. This can be provided by an explicit participant in the issuance protocol, or it can be provided via external means, such as through the use of an IP-hiding proxy service like Tor [DMS2004]. In general, Clients SHOULD minimize or remove identifying information where possible when invoking the issuance protocol.¶
Issuers are uniquely identifiable by all Clients with a consistent identifier. In a web context, this identifier might be the Issuer host name. Issuers maintain one or more configurations, including issuance key pairs, for use in the issuance protocol. Each configuration is assumed to have a unique and canonical identifier, sometimes referred to as a key identifier or key ID. Issuers can rotate these configurations as needed to mitigate risk of compromise; see Section 6.2 for more considerations around configuration rotation. The Issuer public key for each active configuration is made available to Origins and Clients for use in the issuance and redemption protocols.¶
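The sketch below shows one way an Issuer configuration and its key identifier might be represented. Deriving the key ID as a digest of the public key is an assumption made for illustration; the actual derivation and encoding are defined by the issuance protocol in use (see [ISSUANCE]).¶

   import hashlib
   from dataclasses import dataclass

   @dataclass(frozen=True)
   class IssuerConfig:
       issuer_name: str   # consistent Issuer identifier, e.g., a host name
       public_key: bytes  # serialized issuance public key for this configuration

       @property
       def key_id(self) -> bytes:
           # Illustrative canonical identifier for this configuration.
           return hashlib.sha256(self.public_key).digest()

   current = IssuerConfig(issuer_name="issuer.example", public_key=b"\x01" * 32)
   rotated = IssuerConfig(issuer_name="issuer.example", public_key=b"\x02" * 32)
   assert current.key_id != rotated.key_id  # rotation yields a new key ID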
Certain instantiations of the issuance protocol may permit public or private metadata to be cryptographically bound to a token. As an example, one trivial way to include public metadata is to assign a unique Issuer public key for each value of metadata, such that N keys yields log2(N) bits of metadata. Metadata may be public or private.¶
Public metadata is that which Clients can observe as part of the token issuance flow. Public metadata can either be transparent or opaque. For example, transparent public metadata is a value that the Client either generates itself or that the Issuer provides during the issuance flow and the Client can check for correctness. Opaque public metadata is metadata the Client can see but cannot check for correctness. As an example, the opaque public metadata might be a "fraud detection signal", computed on behalf of the Issuer during token issuance. Generally speaking, Clients cannot determine whether this value is generated honestly or is otherwise a tracking vector.¶
Private metadata is that which Clients cannot observe as part of the token issuance flow. Such instantiations can be built on the Private Metadata Bit construction from Kreuter et al. [KLOR20] or the attribute-based VOPRF from Huang et al. [HIJK21].¶
Metadata can be arbitrarily long or bounded in length. The amount of permitted metadata may be determined by the application or by the underlying cryptographic protocol. The total amount of metadata bits included in a token is the sum of public and private metadata bits. Every bit of metadata can be used to partition the Client issuance or redemption anonymity sets; see Section 6.1 for more information.¶
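Returning to the earlier example of encoding public metadata in distinct Issuer keys, the arithmetic is shown below: with one distinct Issuer public key per metadata value, N keys can encode at most log2(N) bits of public metadata.¶

   import math

   def public_metadata_bits(num_issuer_keys: int) -> float:
       # N distinct keys can distinguish at most N metadata values, i.e., log2(N) bits.
       return math.log2(num_issuer_keys)

   assert public_metadata_bits(2) == 1.0   # two keys encode one bit
   assert public_metadata_bits(16) == 4.0  # sixteen keys encode four bits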
The Privacy Pass architecture and ecosystem are both intended to be receptive to extensions that expand the current set of functionalities through new issuance protocols. Each new issuance protocol and extension MUST adhere to the following requirements:¶
The end-to-end process of redemption and issuance protocols involves information flowing between Issuer, Origin, Client, and Attester. That information can have implications on the privacy goals that Privacy Pass aims to provide as outlined in Section 3.3. In this section, we describe the flow of information between each party. How this information affects the privacy goals in particular deployment models is further discussed in Section 4.¶
To use Privacy Pass, Origins choose an Issuer from which they are willing to accept tokens. Origins then construct a token challenge using this specified Issuer and information from the redemption context it shares with the Client. This token challenge is then delivered to a Client. The token challenge conveys information about the Issuer and the redemption context, such as whether the Origin desires a per-Origin or cross-Origin token. Any entity that sees the token challenge might learn things about the Client as known to the Origin. This is why input secrecy is a requirement for issuance protocols, as it ensures that the challenge is not directly available to the Issuer.¶
Clients interact with the Attester to prove that they meet some required set of properties. In doing so, Clients contribute information to the attestation context, which might include sensitive information such as application-layer identities, IP addresses, and so on. Clients can choose whether or not to contribute this information based on local policy or preference.¶
Clients use the issuance protocol to produce a token bound to a token challenge. In doing so, there are several ways in which the issuance protocol contributes information to the attestation or issuance contexts. For example, a token request may contribute information to the attestation or issuance contexts as described below.¶
Moreover, a token response may contribute information to the attestation or issuance contexts as described below.¶
Exceptional cases in the issuance protocol, such as when either the Attester or Issuer aborts the protocol, can contribute information to the attestation or issuance contexts. The extent to which information in this context harms the Issuer-Client or Attester-Origin unlinkability goals in Section 3.3 depends on deployment model; see Section 4. Clients can choose whether or not to contribute information to these contexts based on local policy or preference.¶
Clients send tokens to Origins during the redemption protocol. Any information that is added to the token during issuance can therefore be sent to the Origin. Information can either be explicitly passed in a token, or it can be implicit in the way the Client responds to a token challenge. For example, if a Client fails to complete issuance, and consequently fails to redeem a token in response to a token challenge, this can reveal information to the Origin that it might not otherwise have access to. However, an Origin cannot necessarily distinguish between a Client that fails to complete issuance and one that ignores the token challenge altogether.¶
The Origin, Attester, and Issuer portrayed in Figure 1 can be instantiated and deployed in a number of ways. The deployment model directly influences the manner in which attestation, issuance, and redemption contexts are separated to achieve Origin-Client, Issuer-Client, and Attester-Origin unlinkability.¶
This section covers some expected deployment models and their corresponding security and privacy considerations. Each deployment model is described in terms of the trust relationships and communication patterns between Client, Attester, Issuer, and Origin.¶
The discussion below assumes non-collusion between entities that have access to the attestation, issuance, and redemption contexts, as collusion between such entities would enable linking of these contexts and may lead to unlinkability violations. Generally, this means that entities operated by separate parties do not collude. Mechanisms for enforcing non-collusion are out of scope for this architecture.¶
In this model, the Attester and Issuer are operated by the same entity that is separate from the Origin. The Origin trusts the joint Attester and Issuer to perform attestation and issue Tokens. Clients interact with the joint Attester and Issuer for attestation and issuance. This arrangement is shown in Figure 4.¶
This model is useful if an Origin wants to offload attestation and issuance to a trusted entity. In this model, the Attester and Issuer share an attestation and issuance context for the Client, which is separate from the Origin's redemption context.¶
For certain types of issuance protocols, this model achieves Origin-Client, Issuer-Client, and Attester-Origin unlinkability. However, issuance protocols that require the Issuer to learn information about the Origin, such as that which is described in [RATE-LIMITED], are not appropriate since they could lead to Attester-Origin unlinkability violations through the Origin name.¶
In this model, the Origin and Issuer are operated by the same entity, separate from the Attester, as shown in the figure below. The Issuer accepts token requests that come from trusted Attesters. Since the Attester and Issuer are separate entities, this model requires some mechanism by which Issuers establish trust in the Attester (as described in Section 3.3). For example, in settings where the Attester is a Client-trusted service that directly communicates with the Issuer, one way to establish this trust is via mutually-authenticated TLS. However, alternative authentication mechanisms are possible. This arrangement is shown in Figure 5.¶
This model is useful for Origins that require Client-identifying attestation, e.g., through the use of application-layer account information, but do not otherwise want to learn information about individual Clients beyond what is observed during the token redemption, such as Client IP addresses.¶
In this model, attestation contexts are separate from issuance and redemption contexts. As a result, any type of attestation is suitable in this model.¶
Moreover, any type of token challenge is suitable assuming there is more than one Origin involved, since no single party will have access to the identifying Client information and unique Origin information. Issuers that produce tokens for a single Origin are not suitable in this model since an Attester can infer the Origin from a token request, as described in Section 3.6.3. However, since the issuance protocol provides input secrecy, the Attester does not learn details about the corresponding token challenge, such as whether the token challenge is per-Origin or cross-Origin.¶
In this model, the Origin, Attester, and Issuer are all operated by different entities. As with the joint Origin and Issuer model, the Issuer accepts token requests that come from trusted Attesters, and the details of that trust establishment depend on the issuance protocol and relationship between Attester and Issuer; see Section 3.3. This arrangement is shown in Figure 1.¶
This is the most general deployment model, and is necessary for some types of issuance protocols where the Attester plays a role in token issuance; see [RATE-LIMITED] for one such type of issuance protocol.¶
In this model, the Attester, Issuer, and Origin have a separate view of the Client: the Attester sees potentially sensitive Client identifying information, such as account identifiers or IP addresses, the Issuer sees only the information necessary for issuance, and the Origin sees token challenges, corresponding tokens, and Client source information, such as their IP address. As a result, attestation, issuance, and redemption contexts are separate, and therefore any type of token challenge is suitable in this model as long as there is more than a single Origin.¶
As in the Joint Origin and Issuer model in Section 4.3, and as described in Section 3.6.3, if the Issuer produces tokens for a single Origin, then per-Origin tokens are not appropriate since the Attester can infer the Origin from a token request.¶
Section 4 discusses deployment models that are possible in practice. Beyond possible implications on security and privacy properties of the resulting system, Privacy Pass deployments can impact the overall ecosystem in two important ways: (1) discriminatory treatment of Clients and the gated access to otherwise open services, and (2) centralization. This section describes considerations relevant to these topics.¶
Origins can use tokens as a signal for distinguishing between Clients that are capable of completing attestation with one Attester trusted by the Origin's chosen Issuer, and Clients that are not capable of doing the same. A consequence of this is that Privacy Pass could enable discriminatory treatment of Clients based on attestation support. For example, an Origin could only authorize Clients that successfully authenticate with a token, prohibiting access to all other Clients.¶
The type of attestation procedures supported for a particular deployment depends greatly on the use case. For example, consider a proprietary deployment of Privacy Pass that authorizes clients to access a resource such as an anonymization service. In this context, it is reasonable to support specific types of attestation procedures that demonstrate Clients can access the resource, such as with an account or specific type of device. However, in open deployments of Privacy Pass that are used to safeguard access to otherwise open or publicly accessible resources, diversity in attestation procedures is critically important so as to not discriminate against Clients that choose certain software, hardware, or identity providers.¶
In principle, Issuers should strive to mitigate discriminatory behavior by providing equitable access to all Clients. This can be done by working with a set of Attesters that are suitable for all Clients. In practice, this may require tradeoffs in what type of attestation Issuers are willing to trust so as to enable more widespread support.¶
For example, to avoid discriminating between Clients with and without device attestation support, an Issuer may wish to support Attesters that offer CAPTCHA-based attestation. This means that the overall attestation value of a Privacy Pass token is bounded by the difficulty of spoofing or bypassing either one of these attestation procedures.¶
A consequence of limiting the number of participants (Attesters or Issuers) in Privacy Pass deployments for meaningful privacy is that it forces concentrated centralization amongst those participants. [CENTRALIZATION] discusses several ways in which this might be mitigated. For example, a multi-stakeholder governance model could be established to determine what candidate participants are fit to operate as participants in a Privacy Pass deployment. This is precisely the system used to control the Web's trust model.¶
Alternatively, Privacy Pass deployments might mitigate this problem through implementation. For example, rather than centralize the role of attestation in one or few entities, attestation could be a distributed function performed by a quorum of many parties, provided that neither Issuers nor Origins learn which Attester implementations were chosen. As a result, Clients could have more opportunities to switch between attestation participants.¶
The previous section discusses the impact of deployment details on Origin-Client, Issuer-Client, and Attester-Origin unlinkability. The value these properties afford to end users depends on the size of the anonymity sets in which Clients or Origins are unlinkable. For example, consider two different deployments, one wherein there exists a redemption anonymity set of size two and another wherein there exists a redemption anonymity set of size 2^32. Although Origin-Client unlinkability guarantees that the Origin cannot link any two requests to the same Client based on these contexts, respectively, the probability of determining the "true" Client is higher the smaller these sets become.¶
In practice, there are a number of ways in which the size of anonymity sets may be reduced or partitioned, though they all center around the concept of consistency. In particular, by definition, all Clients in an anonymity set share a consistent view of information needed to run the issuance and redemption protocols. An example type of information needed to run these protocols is the Issuer public key. When two Clients have inconsistent information, these Clients effectively have different redemption contexts and therefore belong in different anonymity sets.¶
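The toy example below illustrates this consistency point with hypothetical Clients and key names: Clients that observe different Issuer public keys effectively end up in different anonymity sets.¶

   from collections import defaultdict

   # Hypothetical view of which Issuer public key each Client was served.
   observed_keys = {
       "client-1": "key-A",
       "client-2": "key-A",
       "client-3": "key-B",  # served a different (possibly targeted) key
   }

   anonymity_sets = defaultdict(set)
   for client, key in observed_keys.items():
       anonymity_sets[key].add(client)

   print(dict(anonymity_sets))
   # key-A -> {client-1, client-2}; key-B -> {client-3}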
The following sections discuss issues that can influence anonymity set size. For each issue, we discuss mitigations or safeguards to protect against the underlying problem.¶
Any metadata bits of information can be used to further segment the size of the Client's anonymity set. Any Issuer that wanted to track a single Client could add a single metadata bit to Client tokens. For the tracked Client it would set the bit to 1, and 0 otherwise. Adding additional bits provides an exponential increase in tracking granularity similarly to introducing more Issuers (though with more potential targeting).¶
For this reason, the amount of metadata used by an Issuer in creating redemption tokens must be taken into account -- together with the bits of information that Issuers may learn about Clients otherwise. Since this metadata may be useful for practical deployments of Privacy Pass, Issuers must balance this against the reduction in Client privacy.¶
In general, limiting the amount of metadata permitted helps limit the extent to which metadata can uniquely identify individual Clients. Clients SHOULD bound the number of possible metadata values in practice. Most token types do not admit any metadata, so this bound is implicitly enforced. Moreover, Privacy Pass deployments SHOULD NOT use metadata unless its value has been assessed and weighed against the corresponding reduction in Client privacy.¶
Anonymity sets can be partitioned by information used for the issuance protocol, including: metadata, Issuer configuration (keys), and Issuer selection.¶
Any issuance metadata bits of information can be used to partition the Client anonymity set. For example, any Issuer that wanted to track a single Client could add a single metadata bit to Client tokens. For the tracked Client it would set the bit to 1, and 0 otherwise. Adding additional bits provides an exponential increase in tracking granularity similarly to introducing more Issuers (though with more potential targeting).¶
The number of active Issuer configurations also contributes to anonymity set partitioning. In particular, when an Issuer updates their configuration and the corresponding key pair, any Client that invokes the issuance protocol with this configuration becomes part of a set of Clients which also ran the issuance protocol using the same configuration. Issuer configuration updates, e.g., due to key rotation, are an important part of hedging against long-term private key compromise. In general, key rotations represent a trade-off between Client privacy and Issuer security. Therefore, it is important that key rotations occur on a regular cycle to reduce the harm of an Issuer key compromise.¶
Lastly, if Clients are willing to issue and redeem tokens from a large number of Issuers for a specific Origin, and that Origin accepts tokens from all Issuers, segregation can occur. In particular, if a Client obtains tokens from many Issuers and an Origin later challenges that Client for a token from each Issuer, the Origin can learn information about the Client. Each per-Issuer token that a Client holds essentially corresponds to a bit of information about the Client that the Origin learns. Therefore, there is an exponential loss in privacy relative to the number of Issuers.¶
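The following back-of-the-envelope calculation, under the simplifying assumption that each per-Issuer token contributes one independent bit, illustrates this exponential partitioning.¶

   def max_partitions(num_issuers: int) -> int:
       # Whether a Client can present a token from each of N Issuers contributes
       # roughly one bit per Issuer, so an Origin can distinguish up to 2**N groups.
       return 2 ** num_issuers

   assert max_partitions(1) == 2
   assert max_partitions(10) == 1024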
The fundamental problem here is that the number of possible issuance configurations, including the keys in use and the Issuer identities themselves, can partition the Client anonymity set. To mitigate this problem, Clients SHOULD bound the number of active issuance configurations per Origin as well as across Origins. Moreover, Clients SHOULD employ some form of consistency mechanism to ensure that they receive the same configuration information and are not being actively partitioned into smaller anonymity sets. See [CONSISTENCY] for possible consistency mechanisms. Depending on the deployment, the Attester might assist the Client in applying these consistency checks across Clients. Failure to apply a consistency check can allow Client-specific keys to violate Origin-Client unlinkability.¶
Side-channel attacks, such as those based on timing correlation, could be used to reduce anonymity set size. In particular, for interactive tokens that are bound to a Client-specific redemption context, the anonymity set of Clients during the issuance protocol consists of those Clients that started issuance between the time of the Origin's challenge and the corresponding token redemption. Depending on the number of Clients using a particular Issuer during that time window, the set can be small. Applications should take such side channels into consideration before choosing a particular deployment model and type of token challenge and redemption context.¶
This document describes security and privacy requirements for the Privacy Pass redemption and issuance protocols. It also describes deployment models built around non-collusion assumptions and privacy considerations for using Privacy Pass within those models. Ensuring Client privacy -- separation of attestation and redemption contexts -- requires active work on behalf of the Client, especially in the presence of malicious Issuers and Origins. Implementing mitigations discussed in Section 4 and Section 6 is therefore necessary to ensure that Privacy Pass offers meaningful privacy improvements to end-users.¶
Depending on the Origin's token challenge, Clients can request and cache more than one token using an issuance protocol. Cached tokens help improve privacy by separating the time of token issuance from the time of token redemption, and also allow Clients to reduce the overhead of receiving new tokens via the issuance protocol.¶
As a consequence, Origins that send token challenges which are compatible with cached tokens need to take precautions to ensure that tokens are not replayed. This is typically done via keeping track of tokens that are redeemed for the period of time in which cached tokens would be accepted for particular challenges.¶
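A minimal sketch of such time-bounded replay tracking is shown below; the acceptance window and identifiers are illustrative deployment parameters rather than values taken from this document.¶

   class ReplayCache:
       """Remembers redeemed tokens for the period in which their challenge is accepted."""
       def __init__(self, window_seconds: float):
           self.window = window_seconds
           self._seen = {}  # token id (bytes) -> redemption time (seconds)

       def try_redeem(self, token_id: bytes, now: float) -> bool:
           # Forget redemptions older than the acceptance window; by then the
           # corresponding challenge would no longer be accepted anyway.
           self._seen = {t: ts for t, ts in self._seen.items() if now - ts < self.window}
           if token_id in self._seen:
               return False  # replay within the acceptance window
           self._seen[token_id] = now
           return True

   cache = ReplayCache(window_seconds=300.0)
   assert cache.try_redeem(b"token-A", now=0.0) is True
   assert cache.try_redeem(b"token-A", now=10.0) is False  # replayed token rejected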
Moreover, since tokens are not intrinsically bound to Clients, it is possible for malicious Clients to collude and share tokens in a so-called "hoarding attack." As an example of this attack, many distributed Clients could obtain cacheable tokens and then share them with a single Client to redeem in a way that would violate an Origin's attempt to limit tokens to any one particular Client. Depending on the deployment model, it can be possible to detect these types of attacks by comparing issuance and redemption contexts; for example, this is possible in the Joint Origin and Issuer model.¶
This document has no IANA actions.¶
The authors would like to thank Eric Kinnear, Scott Hendrickson, Tommy Pauly, Christopher Patton, Benjamin Schwartz, Martin Thomson, Steven Valdez and other contributors of the Privacy Pass Working Group for many helpful contributions to this document.¶