Corruption-Parametrized Security in Cryptography
Corruption-Parametrized Security in Cryptography
August 8, 2024
Abstract
In the multi-user with corruptions (muc) setting there are n ≥ 1 users, and the goal is
to prove that, even in the face of an adversary that adaptively corrupts users to expose their
keys, un-corrupted users retain security. This can be considered for many primitives including
signatures and encryption. Proofs of muc security, while possible, generally suffer a factor n
loss in tightness, which can be large. This paper gives new proofs where this factor is reduced
to the number c of corruptions, which in practice is much smaller than n. We refer to this
as corruption-parametrized muc (cp-muc) security. We give a general result showing it for a
class of games that we call local. We apply this to get cp-muc security for signature schemes
(including ones in standards and in TLS 1.3) and some forms of public-key and symmetric
encryption. Then we give dedicated cp-muc security proofs for some important schemes whose
underlying games are not local, including the Hashed ElGamal and Fujisaki-Okamoto KEMs
and authenticated key exchange. Finally, we give negative results to show optimality of our
bounds.
1
Department of Computer Science & Engineering, University of California San Diego, 9500 Gilman Drive, La
Jolla, California 92093, USA. Email: [email protected]. URL: http://cseweb.ucsd.edu/˜mihir/. Supported in
part by NSF grant CNS-2154272 and KACST.
2
Department of Computer Science & Engineering, University of California San Diego, 9500 Gilman Drive, La
Jolla, California 92093, USA. Email: [email protected]. Supported in part by KACST.
3
Paul G. Allen School of Computer Science & Engineering, University of Washington, Seattle, WA, USA. Email:
[email protected]. Supported in part by NSF grants CNS-2026774, CNS-2154174, a JP Morgan Faculty
Award, a CISCO Faculty Award, and a gift from Microsoft.
4
Department of Computer Science & Engineering, University of California San Diego, 9500 Gilman Drive, La
Jolla, California 92093, USA. Email: [email protected].
Contents
1 Introduction 2
1.1 Spotlight on signatures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
1.2 HWD samplers and general cp-muc theorem . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.4 Optimality results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2 Preliminaries 8
3 HWD samplers 10
5 Applications 23
5.1 Direct applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
5.2 Indirect applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
6 Optimality results 33
References 40
B Proof of FO-Transform 48
1
1 Introduction
In practice, keys can be exposed, through system infiltration by hackers or phishing attacks; a
striking example is the exposure of a Microsoft signing key to the Storm-0558 threat actor in
July 2023 [Whi23]. This motivates the multi-user-with-corruptions (muc) setting, where there are
n ≥ 1 users, each holding some secret key, and the goal is to prove that, even in the face of an
adversary that adaptively corrupts users to expose their keys, un-corrupted users retain security.
(The last means their signatures remain unforgeable, ciphertexts encrypted to them retain privacy
or whatever else the underlying primitive decrees.)
The good news is that proving muc-security is possible. The bad is that, in general —and in
particular for canonical and standardized schemes such as the Schnorr signature scheme [Sch91]—
current reductions lose a factor of the total number n of users, which can be very large. This
leaves implementations to either use larger security parameters (inefficient), ignore corruptions
(dangerous) or turn to special schemes that, while offering tight reductions, are less efficient than the
canonical schemes [BHJ+ 15, HJK+ 21, DGJL21]. And meanwhile, dauntingly and disappointingly,
negative results [BJLS16, JSSOW17] appear to say that the factor n loss is unavoidable for the
canonical schemes.
Counting corruptions. In practice, corruptions certainly happen, but we suggest that the num-
ber of successful corruptions will likely be small, much smaller than the total number of users. Why
is this? Because key-owners, recognizing the loss they face if their keys are exposed, are taking sig-
nificant steps to prevent it. Important Internet services are increasingly storing their TLS signing
keys in HSMs (Hardware Security Modules), which makes them harder to expose. Breaches lead
to systems being hardened to prevent further breaches. (Microsoft, for example, has taken steps
to prevent a repetition of the exposure of their signing key to Storm-0558 [Mic23].) Threshold
cryptography is used to make key-exposure more difficult, a mitigation particularly popular for the
signing keys securing digital wallets in cryptocurrencies. Employees and users are regularly trained
to not fall prey to phishing.
Contributions in brief. Based on the above, this paper brings to muc security a new dimension,
namely to view the number of corruptions as an adversary resource parameter. Denoting it c, we
then give techniques and results that prove muc-security with tightness loss c rather than n. We
refer to this as corruption-parametrized muc (cp-muc) security. Since c is in practice much smaller
than n, cp-muc security allows theoretically-sound instantiation with practical security parameters.
We show cp-muc security for many primitives including signatures, encryption and authenti-
cated key exchange. Our main results are universal, applying to all schemes, including the canonical
ones. Existing negative results are not contradicted because they implicitly assume n − 1 corrup-
tions, and we give new negative results to show the optimality of our new bounds.
Our inspiration and starting point is a technique of Coron [Cor00] used to show security with
improved bounds for the RSA-FDH signature scheme of [BR96]. We generalize and improve this via
a modular approach. First, we define and study a primitive we call a Hamming-Weight Determined
(HWD) sampler. Using HWD samplers, our general cp-muc security theorem then gives a reduction
from multi-user (mu) to cp-muc security that (1) loses only a factor c and (2) holds for any security
game satisfying a condition we call locality. Intuitively, this means that the game does not make
use of global secrets across users, such as a global challenge bit. This directly yields cp-muc security
for many schemes and goals, and avoids repeating similar proofs across them. For important games
that are not local, we go on to give dedicated proofs of cp-muc security. In particular, we do so (for
security games with a global challenge bit) for the Hashed ElGamal and Fujisaki-Okamoto KEMs .
2
Broader context. The idea of assuming a bound c on the number of corruptions, amongst
some larger number n of users, arises, and is indeed the basis for security, in some other and
prior contexts in cryptography. The most prominent example is certainly secret sharing [Sha79,
Bla79], where c is the threshold. Thence it enters multi-party computation [GMW87] and threshold
cryptography [DDFY94]. With cp-muc, we are bringing this classical perspective to a broader
setting, and to basic primitives like signatures, encryption and authenticated key exchange.
3
with c corruptions, adversary advantage grows by at most a factor about c, regardless of the number
n of users and for all signature schemes Sig. But we can do better. In Eq. (2), muc security for
n users (and c corruptions) is provided assuming mu security for the same number n of users. It
turns out, curiously, that we can (substantially) reduce the number of users, now denoted m, for
which mu security is assumed. Our second universal result (Theorem 5.1) is:
uf -muc-(n,c) -mu-m for m = ⌊(n − 1)/(c + 1)⌋.
N2 : ∀ Sig : ϵSig ≤ e · (c + 1) · ϵuf
Sig (3)
How and when the new N1, N2 results yield improvements over the prior P1 may not be obvious
due to the starting points being different (mu for the former and su for the latter). The following
will clarify this.
Specialized results. These are results that hold for particular (but not all) schemes. Let’s call
Sig tightly-mu-secure if (roughly) ϵuf -mu-n = ϵuf -su . In an important class of specialized results, prior
Sig Sig
work [Ber15,KMP16,GJKW07,HJ12,HS16,Lac18,PR20] shows that many schemes —including the
Schnorr scheme [Ber15, KMP16]— have tight mu security. The proofs, however, exploit algebraic
self-reducibility amongst keys and cannot tolerate corruptions, so that, even for these schemes, for
muc security, Eq. (1) remains the best known bound. We offer a substantial improvement; Eq. (2)
implies that
uf -muc-(n,c)
N3 : For all tightly-mu-secure Sig : ϵ Sig ≤ e · (c + 1) · ϵuf -su .
Sig (4)
That is, the muc advantage degrades by at most a factor about c even relative to the (standard)
su advantage, again regardless of n. Eq. (4) holds in particular for the Schnorr signature scheme.
While it would be desirable to show that su security implies cp-muc security with a loss c in
general and for all schemes, such a statement does not seem possible without additional assumption
about the scheme. Therefore, our results can be viewed as a middle ground, either (using N3)
applying to tightly-mu-secure schemes or (using N2) reducing to mu security for a number of users
m substantially smaller than n.
Numerical example. Let Sig be the Schnorr scheme over a size p elliptic curve group G. Suppose
we target 128-bit security for n = 230 users with c = 210 corruptions, and let t be the running-time
of the adversary. Assuming discrete-logarithm computation over G is the best attack, we have
ϵuf -su ≤ t2 /p, which by the prior P1 result of (1) yields ϵuf -muc-n ≤ nt2 /p. Now 128-bit security
Sig Sig
requires nt2 /p ≤ t/2128 or p = 2128 · nt. This means we use a group G1 of size p1 = 2128 · nt.
Meanwhile the tight mu security of Sig implies ϵuf -mu-n ≤ t2 /p which by our new N1 result of (2)
Sig
uf -muc-(n,c)
(and ignoring the e factor) yields ϵSig ≤ ct /p, and ct2 /p ≤ t/2128 now yields p = 2128 · ct,
2
so we use a group G2 of size p2 = 2128 · ct. Now note that p2 /p1 = c/n = 2−20 so we have dropped
the group size by a factor of 220 while retaining security. For example if p1 = 2256 then p2 = 2236 .
Exponentiation takes time cubic in the logarithm of the group size, so in this case we conclude that
the cost of signing or verifying for Sig over G1 is (236/256)3 = 0.78 times that in G2 , meaning we
have reduced the running time of the implemented scheme by 22%, a significant gain in practice,
while retaining security.
Standards and key exchange. The best muc-security bound we have for the standardized RSA-
SSAPSS [Nat23], EdDSA [BDL+ 12, Nat23, JL17] and ECDSA [Nat23] signature schemes is the
generic one of Eq. (1) with its factor n loss. This lead tight proofs for the TLS 1.3 key ex-
change [DG21, DJ21] to simply assume tight muc security —meaning ϵuf -muc-n = ϵuf -su — for these
Sig Sig
schemes. Can we do better? Unfortunately, Result N3 (Eq. (4)) will not help since these schemes
have evaded proofs of tight mu security. However, Result N2 (Eq. (3)) is helpful here. It says that
assuming ϵuf -mu-m = ϵuf -su for some small number m of users, we get ϵuf -muc-(n,c) ≤ c · ϵuf -su . While
Sig Sig Sig Sig
4
this is still a non-standard assumption, it is better than directly assuming ϵuf -muc-n = ϵuf -su . We
Sig Sig
only need to assume mu (rather than muc) security, and that too for a small number of users m.
Tight muc security. The absence of results better than P1 (Eq. (1)) for standardized schemes
lead researchers to develop new signature schemes that are tightly muc secure [BHJ+ 15, GJ18,
HJK+ 21, DGJL21, PW22, HLG23]. (As above this means ϵuf -muc-n = ϵuf -su .) In terms of just the
Sig Sig
bound, this is better than cp-muc security. But canonical schemes (Schnorr and standardized ones)
are not known to be in this class so there is no improvement for in-use signatures or TLS 1.3 key
exchange. Moreover, the key sizes and computation time of the new schemes is larger than that of
the canonical schemes. Hence, cp-muc security is a pragmatic alternative.
Techniques. We outline the simplest case of our proof technique, specialized to signatures and
for the weaker of our two universal results, namely N1 (Eq. (2)). Given a cp-muc adversary A
uf -muc-(n,c) -mu-n and
with advantage ϵSig , we want to construct a mu adversary B, with advantage ϵuf Sig
about the same running time as A, such that Eq. (2) holds. Letting p be a parameter in the range
0 ≤ p ≤ 1, adversary B picks a vector (c1 , . . . , cn ) of independent, p-biased bits. (That is, each
ci is 1 with probability p and 0 with probability 1 − p.) We’ll say user i is red if ci = 1 and let
R ⊆ {1, . . . , n} be the set of red users; correspondingly i is blue if ci = 0 and B is the set of blue
users. Now, B runs A. In answering A’s oracle queries, B simulates red users directly and forwards
queries for blue users to its own mu game. In more detail, B generates a signing-verifying key pair
(vk i , sk i ) for each i ∈ R, allowing it to easily answer both Sign and Corrupt queries to i. It
answers Sign queries for i ∈ B via its own Sign oracle. The difficulty is a Corrupt(i) query for
i ∈ B; adversary B has no way to answer this, and aborts. If it does not abort, then A outputs
a user j (called the forgery victim) and a forgery under vk j . If j ∈ B then B returns this to win
its mu game, else it aborts. Let GD be the event that i ∈ R for all Corrupt(i) queries of A and
also the forgery victim j is in B. The probability of GD can be computed as f (p) = pc (1 − p)
(Theorem 3.2) and a careful analysis (that we make precise via Lemma 2.2) shows that GD is
independent of the success of A, so that ϵuf -mu-n ≥ f (p) · ϵuf -muc-(n,c) . Calculus shows that f (p) is
Sig Sig
maximized at p = 1 − 1/(c + 1) where f (p) ≥ 1/(e(c+1)) (Theorem 3.2), yielding Eq. (2). For some
intuition, note that with this choice the expected number of red users is (cn − 1)/(c + 1), meaning
a high number of red users are needed to successfully answer the small number c of Corrupt
queries.
This adapts Coron’s [Cor00] proof of the security of the RSA-FDH signature scheme. However,
his setting has only one user and no secret-key exposing corruptions; what for us are “users” and
“corruptions” are for Coron messages queried to the random oracle and signing queries, respectively.
The path ahead. We generalize and improve this with a modular approach. First, we introduce
and study HWD samplers, as a way to find the best way to sample the vector (c1 , . . . , cn ). Second,
we give a framework and general cp-muc security theorem that shows how to use any HWD sampler
to promote mu security to cp-muc security for a large class of games satisfying a condition we call
locality. Numerous applications, as well as improvements such as Eq. (3), are then obtained, some
directly, some with more dedicated work.
5
Weight Determined (HWD). Section 3 associates to any HWD sampler a success probability α
and an error probability β. Continuing to use signatures as an example, Theorem 4.1 implies that
uf -muc-(n,c) -mu-m for an m that grows with β, so we want to make α large and β small.
ϵSig ≤ (1/α) · ϵuf
Sig
Now the way of sampling (c1 , . . . , cn ) discussed in Section 1.1 corresponds to a particular choice of
D, that we call the biased-coin sampler and analyze in Theorem 3.2; it has good success probability
but unfortunately high error probability. We then give a new sampler that we call the fixed-set-
size sampler. Theorem 3.3 shows that it has not only optimal success probability but zero error
probability, which is crucial to obtaining the improved Eq. (3). Figure 3 shows some numbers.
General framework and theorem. We now ask how far these techniques will generalize be-
yond signatures. We seek to show a result of the form “For all security games in a certain class, mu
security can be promoted to cp-muc security with loss only a factor about c.” To do this rigorously,
we first (in Section 4) define something we call a formal security specification (FSS). Denoted Π,
it is simply an algorithm that, given a scheme Sch, specifies the initializations and oracle-response
computations (including for Corrupt queries) for the intended target notion of security for Sch.
This leads naturally to an actual (code-based) game GΠ Sch defining su security (Figure 4), and
Π - mu - n Π-muc-(n,c)
thence to games GSch and GSch capturing mu and cp-muc security, respectively, where n
is the number of users and c the number of allowed corruptions (Figure 5). For example, Figure 4
shows an FSS UF such that, when Sch = Sig is a signature scheme, the corresponding games are
exactly the standard su, mu and cp-muc games for signatures discussed above. The su, mu and
muc versions of many other notions in the literature can in this way be recovered by simply giving
a single FSS for the notion, as we will see. Let ϵΠ -su Π-mu-n and ϵΠ-muc-(n,c) denote the su, mu
Sch , ϵSch Sch
and muc adversary advantages, respectively, for FSS Π with scheme Sch.
In Section 4 we define a condition on an FSS that we call locality; roughly it means that in
the mu and muc settings, different users share no common secret unknown to the adversary. This
is what is needed for our technique to work. The main result is Theorem 4.1, showing how to
use any HWD sampler D to promote mu to cp-muc for any local FSS Π for a scheme Sch, and
giving bounds in terms of quantities related to D. The proof uses game playing including a crucial
use of the Second Fundamental Lemma (Lemma 2.2). For applications however, it is easier to use
Theorem 4.2, which sets D to the optimal fixed-set-size sampler to say that for any local FSS Π for
a scheme Sch, we have
Π-muc-(n,c)
ϵSch ≤ e · (c + 1) · ϵΠ-mu-m for m = ⌊(n − 1)/(c + 1)⌋.
Sch (5)
As an example, FSS UF for signature schemes is local, so this immediately yields Eqs. (2),(3). We
now turn to further applications.
1.3 Applications
TG-(n,c) -m where
An overview of our applications is in Figure 1. The results are of the form ϵTS ≤ c · ϵSG
SS
TG, SG, TS, SS are the target goal, starting goal, target scheme and starting scheme, respectively.
That is, TG-security of TS for n users and c corruptions is shown with loss c assuming SG-security
for SS for m users. What are “users” and “corruptions” depends on the goal, showcasing the
breadth of our framework. The last column indicates the application type. Direct (D) means we
show that an underlying FSS is local and apply Theorem 4.2. Indirect (I) means the FSS is not
local but we can nevertheless show the result via a dedicated proof that uses the general theorem
in an intermediate step.
Direct applications. D1 is the result for signatures (Theorem 5.1). D2 is for IND-CCA security
of KEMs in the multi-challenge-bit (mb) setting [HS21] (Theorem 5.2). D3 is for OW-PCVA (one-
6
Target Start
m Type
TG TS SG SS
D1 UF-muc Sig UF-mu Sig ⌊(n−1)/(c+1)⌋ D
D2 CCA-MB-muc KEM CCA-MB-mu KEM ⌊(n−1)/(c+1)⌋ D
D3 OW-PCVA-muc PKE OW-PCVA-mu PKE ⌊(n−1)/(c+1)⌋ D
D4 AE-MB-muc SE AE-MB-mu SE ⌊(n−1)/(c+1)⌋ D
SB1 CCA-SB-muc HEG St-CDH (G, p, g) 1 I
SB2 CCA-SB-muc KEM CPA-SB-mu PKE ⌊(n−1)/(c+1)⌋ I
KE1 IND-WFS CCGJJ St-CDH (G, p, g) ⌊(n−1)/(c+1)⌋ I
KE2 IND-FFS AKE UF-mu Sig ⌊(n−1)/(c+1)⌋ I
SO1 SIM-SO-CCA DH-PKE St-CDH (G, p, g) 1 I
SO2 SIM-SO-CCA RSA-PKE RSA RSAGen 1 I
TG-(n,c) -m where TS, TG
Figure 1: Overview of our applications: We show that ϵTS ≤ c · ϵSG
SS
are the target scheme and goal, and SS, SG are the starting scheme and goal. D1-D4 are direct
applications of our general cp-muc security theorem; the rest are indirect. SB1-SB2 relate to
encryption in the single-bit setting. KE1-KE2 are for (authenticated) key exchange. SO1-SO2
are for selective opening security.
way security under plaintext checking and ciphertext validity attacks) security of PKE [HHK17]
(Theorem 5.3), which will be used in our results for the FO transform. D4 is for (nonce-based)
authenticated encryption in the mb setting (Theorem 5.4), demonstrating the generality of our
approach in that our results also apply in the symmetric setting. In each case we observe that the
underlying FSSs, denoted UF, CCA-MB, OW-PCVA and AE-MB respectively, are local, yielding the
results via Theorem 4.2. We stress that these rows are purely illustrative; there are far more such
applications than we can exhaustively list.
Indirect applications. In the definition of mu security for encryption (KEM, PKE) from [BBM00],
there is a single challenge bit across all users. Therefore, the underlying FSS CCA-SB is not lo-
cal. However, this is recognized as the leading and stronger definition [GKP18] so rather than
give up on it, we ask what one can show. Our answer is to show cp-muc security for particu-
lar, important schemes via dedicated proofs, making use of our general theorem for intermediate
steps. We first prove such a result (SB1, Theorem 5.5) for the hashed ElGamal KEM HEG, as-
suming strong computational Diffie-Hellman (St-CDH) [ABR01]. Then, we use the modular FO
transform [HHK17] to get a more general result (SB2, Theorem 5.8). In particular, we show how
to turn a CPA-SB-mu PKE scheme into a CCA-SB-muc KEM. For this, we first go tightly from
CPA-SB-mu to OW-PCVA-mu. Next, we exploit locality of the corresponding FSS OW-PCVA and
use Theorem 4.2 to go to OW-PCVA-muc. Finally, we go tightly from the latter to CCA-SB-muc.
As a natural extension, we then turn to authenticated key exchange (AKE). Security games for
AKE (e. g., [BR94]) also use a single challenge bit and are not local, but we can still give several
cp-muc security results. The Diffie-Hellman based CCGJJ protocol [CCG+ 19] is shown in [CCG+ 19]
to achieve weak forward secrecy (wfs) [Kra05], with a factor n loss from St-CDH. We show (KE1)
that by considering our corruption-parameterized approach for AKE, we can reduce the loss to
the number c of corruptions. Previous work on tightly-secure AKE [GJ18, LLGW20, HJK+ 21],
including analyses of the TLS protocol [DJ21, DG21], all use muc-secure signatures as a building
block. Our results on signatures allow AKE security based on UF-mu secure schemes (KE2)
which yields tightness improvements when using the canonical and standardized signature schemes
actually used in practice.
7
We then extend our indirect results to applications in a quite different setting, looking at
selective opening security for PKE. In simulation-based selective opening (SIM-SO-CCA) security
for PKE [BHK12,BDWY12], the adversary can reveal randomness underlying encrypted messages.
Practical Diffie-Hellman and RSA-based PKE schemes are shown in [HJKS15] to be SIM-SO-CCA,
with loss factor the total number n of ciphertexts encrypted. Modeling randomness reveal as a
“corruption” and each encryption as a “user” in our framework allows us (with some extra steps)
to reduce the loss to the number c of openings (SO1-SO2, Theorems C.1 and C.2). Finally, of
pedagogic interest but no novelty, we show (Theorem A.2, Appendix A) how our framework can
be used to recover Coron’s result [Cor00].
2 Preliminaries
Notation. If w is a vector then |w| is its length (the number of its coordinates) and w[i] is its
i-th coordinate, where we start with i = 1. Strings are identified with vectors over {0, 1}∗ , so that
|Z| denotes the length of a string Z and Z[i] denotes its i-th bit. By ε we denote the empty string
or vector. By x∥y we denote the concatenation of strings x, y. If x, y are equal-length strings then
x⊕y denotes their bitwise xor. If S is a finite set, then |S| denotes its size. For integers a ≤ b we
let [a..b] be shorthand for {a, . . . , b}. The notation JBK for a boolean statement B returns true if
B is true and false otherwise. When we write e, we mean Euler’s number.
8
If X is a finite set, we let x ← $ X denote picking an element of X uniformly at random and
assigning it to x. Algorithms may be randomized unless otherwise indicated. If A is an algorithm,
we let y ← $ A[O1 , . . .](x1 , . . .) denote running A on inputs x1 , . . ., with oracle access to O1 , . . ., and
assigning the output to y. We let Out(A[O1 , . . .](x1 , . . .)) denote the set of all possible outputs of
A on this run. Running time is worst case, which for an algorithm with access to oracles means
across all possible replies from the oracles. We use ⊥ (bot) as a special symbol to denote rejection,
and it is assumed to not be in {0, 1}∗ .
We may consider an algorithm A that takes some input in and a current state, and returns an
output out and updated state, written (out, st) ← $ A(in, st). We may refer to such an algorithm
as stateful, but note that A is, syntactically, just an algorithm like any other. There is no implicit
state maintained by the algorithm; it is the responsibility of the game executing A to maintain the
state variable st, which it will do explicitly.
Games. We use the code-based game-playing framework of BR [BR06]. By Pr[G(A) ⇒ y] we
denote the probability that the execution of game G with adversary A results in the game output
(what is returned by Fin) being y, and write just Pr[G(A)] for Pr[G(A) ⇒ true]. Different games
may have procedures (oracles) with the same names. If we need to disambiguate, we may write
G.O to refer to oracle O of game G. In games, integer variables, set variables, boolean variables
and string variables are assumed initialized, respectively, to 0, the empty set ∅, the boolean false
and ⊥. For the following, recall that games G, H are identical-until-bad if their code differs only in
statements that follow the setting of flag bad to true [BR06].
Lemma 2.1 [First Fundamental Lemma of Game Playing [BR06]] Let G, H be identical-until-bad
games. Then for any adversary A we have
|Pr[G(A)] − Pr[H(A)]| ≤ Pr[H(A) sets bad] = Pr[G(A) sets bad] .
Lemma 2.2 [Second Fundamental Lemma of Game Playing [BNN07]] Let G, H be identical-until-
bad games and let GD be the event that bad is not set to true in the execution of these games with
adversary A. Then
Pr[G(A) ∧ GD] = Pr[H(A) ∧ GD] .
Signature schemes. A signature scheme Sig allows generation of a verifying key vk and cor-
responding signing key sk via (vk, sk) ← $ Sig.Kg. A signature of a message M is produced as
σ ← $ Sig.Sign(sk, M ). Verification returns a boolean d ← Sig.Vf(vk, M, σ). Correctness asks that
for all M we have Sig.Vf(vk, M, Sig.Sign(sk, M )) = true with probability 1, where the probability
is over the choice of keys and the coins of the signing algorithm, if any. To measure security we
define the uf advantage of an adversary A as AdvUF -su UF
Sig (A) = Pr[GSig (A)] where the game is shown
on the bottom right of Figure 4.
Public-key encryption schemes. A public-key encryption (PKE) scheme PKE allows genera-
tion of a public encryption key ek and corresponding secret decryption key dk via (ek, dk) ← $ PKE.Kg.
We denote the message space by PKE.MS and require that it is a length-closed set (i. e., for
any m ∈ PKE.MS it is the case that {0, 1}|m| ⊆ PKE.MS). Encryption of a message M ∈
PKE.MS is via C ← $ PKE.Enc(ek, M ). Decryption M ← PKE.Dec(ek, dk, C) returns a value
M ∈ {0, 1}∗ ∪ {⊥}. Correctness asks that Pr[PKE.Dec(dk, PKE.Enc(ek, M )) = M ] = 1 for ev-
ery (ek, dk) ∈ Out(PKE.Kg) and every M ∈ PKE.MS. We will consider several security definitions;
they will be given in Section 5 and Appendices B and C.
Key-encapsulation mechanisms. A key-encapsulation mechanism (KEM) KEM allows genera-
tion of a public encapsulation key ek and corresponding secret decapsulation key dk via (ek, dk) ← $
9
KEM.Kg. There is no message. Instead, encapsulation KEM.Encaps(ek) returns a symmetric key
K ∈ {0, 1}KEM.kl and a ciphertext C encapsulating it. Decapsulation KEM.Decaps(dk, C) returns
a value K ∈ {0, 1}KEM.kl ∪ {⊥}. Correctness asks that Pr[KEM.Decaps(dk, C) = K] = 1 for ev-
ery (ek, dk) ∈ Out(KEM.Kg), where the probability is over (C, K) ← $ KEM.Encaps(ek). Security
definitions are in Section 5.
3 HWD samplers
We introduce and define HWD samplers and their success and error probabilities. We give and
analyze a natural HWD sampler called the biased-coin sampler and then give an optimal HWD
sampler called the fixed-set-size sampler.
HWD samplers. A sampler is an algorithm D that on input a positive integer n and a non-negative
integer c < n returns an n-bit string s ← $ D(n, c). In our application, n will be the number of users
and c will be the number of corruptions. For s ∈ {0, 1}n , let R(s) = { i ∈ [1..n] : s[i] = 1 } and
B(s) = { i ∈ [1..n] : s[i] = 0 }. We think of sampling s ← $ D(n, c) as coloring the points 1, 2, . . . , n,
with point i ∈ [1..n] colored red if s[i] = 1 and colored blue if s[i] = 0, so that R(s) and B(s) are
the sets of red and blue points, respectively.
We say that sampler D is Hamming-weight determined (HWD) if for all n, c and all s1 , s2 ∈
{0, 1}n we have: if wH (s1 ) = wH (s2 ) then s1 and s2 have the same probability of arising as outputs
of D(n, c). This will allow our applications. Note that this condition is true if and only if there is
a function W such that Pr[s = s′ : s′ ← $ D(n, c)] = W(n, c, wH (s)) for all s ∈ {0, 1}n and all n, c.
We refer to W as D’s Hamming-weight probability function.
As another way of understanding this, if π : [1..n] → [1..n] is a permutation, let π(s) ∈ {0, 1}n
denote the string whose j-th bit is s[π(j)] for j ∈ [1..n]. That is, the bits of s are permuted
according to π. Now say that D is permutation invariant if for all n, c, all s ∈ {0, 1}n and all
permutations π : [1..n] → [1..n] we have that s and π(s) have the same probability of arising as
outputs of D(n, c). Then it is easy to see that D is permutation invariant if and only if it is
Hamming-weight determined.
For a set C ⊆ [1..n] of size c < n, and i ∈ [1..n] \ C, we let
PD,n,c (C, i) = Pr[ i ∈ B(s) and C ⊆ R(s) : s ← $ D(n, c)] . (6)
This is the probability that s ← $ D(n, c) colors the points in C red and colors i blue. (How points
not in C ∪ {i} are colored does not matter.) Our applications need a lower bound on it. Towards
this, we define the sampler success probability
P∗D (n, c) = n−c−1
Pn−c−1
j=0 W(n, c, c + j) · j . (7)
The following says PD,n,c (C, i) is determined by the success probability and thus in particular
independent of the particular choices of C, i.
Proof: Let E = { s ∈ {0, 1}n : i ∈ B(s) and C ⊆ R(s) }. Recall that the characteristic function
χE : {0, 1}n → {0, 1} of set E is defined by: χE (s) = 1 if s ∈ E and χE (s) = 0 if s ̸∈ E. Let
Hℓ = { s ∈ {0, 1}n : wH (s) = ℓ } and Eℓ = E ∩ Hℓ = { s ∈ Hℓ : χE (s) = 1 }. Writing PD (s) for the
10
Sampler D-BCx : Sampler D-FXt :
1 For j = 1, . . . , n do 1 T ← $ P(n, t)
2 s[j] ← x {0, 1} 2 For j = 1, . . . , n do
3 Return s 3 If j ∈ T then s[j] ← 1 else s[j] ← 0
4 Return s
Figure 2: Our Hamming-Weight Determined Samplers. Left: Biased-coin sampler with probability
parameter x ∈ [0, 1]. Right: Fixed set size sampler with set-size parameter t ∈ [0..n − 1].
= P∗D (n, c) ,
where the last equality is by Eq. (7). This completes the proof.
Theorem 3.1 will be used in the proof of Theorem 4.1, our general cp-muc security result. Here it
allows us to focus on the success probability as defined by Eq. (7). Now define the error probability
of sampler D via
RD (n, c, m) = Pr[ |B(s)| > m : s ← $ D(n, c)] . (8)
We want to upper bound this, which in our application will allow m to be the number of users for
which mu security without corruptions is assumed.
Biased-coin sampler. We extract from the Coron technique [Cor00] a sampler D-BCx that we
call the biased-coin sampler and show on the left in Figure 2. Here x ∈ [0, 1] is a probability and
b ← x {0, 1} returns a x-biased coin b ∈ {0, 1}, meaning Pr[b = 1] = x and Pr[b = 0] = 1 − x.
We let D-BC = D-BCx for x = 1 − 1/(c + 1) and refer to this as the optimal biased-coin sampler.
Theorem 3.2 below justifies this name by showing that D-BC maximizes the success probability
across samplers in the biased-sampler class. It also gives a good lower bound on this success
probability and bounds the sampler error probability.
Theorem 3.2 Let n, m ≥ 1 and c ≥ 0 be integers such that n/(c + 1) < m and c < n. Let D-BCx
be the biased-coin sampler associated to x ∈ [0, 1] as per Figure 2. Then D-BCx is Hamming-weight
determined and P∗D-BCx (n, c) = xc (1 − x). Furthermore P∗D-BC (n, c) ≥ P∗D-BCx (n, c) for all x ∈ [0, 1]
11
and
c
1 1 1 1
P∗D-BC (n, c) = 1− ≥ · . (9)
c+1 c+1 e c+1
2 /2n
Letting a = m − n/(c + 1) we also have RD-BC (n, c, m) ≤ e−a if m < n and RD-BC (n, c, m) = 0
if m ≥ n.
Proof: Let W(n, c, ℓ) = xℓ (1 − x)n−ℓ . If s ∈ {0, 1}n has Hamming weight ℓ then its probability
of being output by D-BCx is W(n, c, ℓ), showing that the sampler D-BCx is Hamming-weight de-
termined with Hamming-weight probability function W. Define the function f : [0, 1] → [0, 1] by
f (x) = xc (1 − x). Now from Eq. (7) we have
P∗D-BCx (n, c) = x (1 − x)n−c−j · n−c−1
Pn−c−1 c+j
j=0 j
x (1 − x)n−c−1−j · n−c−1
Pn−c−1 j
= xc (1 − x) · j=0
j (10)
= xc (1 − x) = f (x) . (11)
The sum in Eq. (10) has value 1 by the Binomial Theorem, justifying Eq. (11). Now we seek to
maximize f . Let p = 1−1/(c+1). The derivative of f is f ′ (x) = cxc−1 (1−x)−xc = xc−1 (c−x(c+1)).
This derivative is 0 for x = c/(c + 1) = 1 − 1/(c + 1) = p and x = 0, and it is easy to see that f
attains its maximum at p, so that
P∗D-BC (n, c) = P∗D-BCp (n, c) = f (p) ≥ f (x) = P∗D-BCx (n, c)
for all x ∈ [0, 1], as claimed in the Proposition statement. Now we evaluate f (p) to get
c
1 1
P∗D-BC (n, c) c
= f (p) = p (1 − p) = 1 − .
c+1 c+1
We now claim that
c
1 1
1− ≥ , (12)
c+1 e
which will justify Eq. (9). Towards this we first note that for all ϵ in the range 0 ≤ ϵ < 1 we have
(1 − ϵ) ln(1 − ϵ) = −(1 − ϵ)(ϵ + ϵ2 /2 + ϵ3 /3 + · · · ) (13)
= −ϵ + (ϵ2 /2 + ϵ3 /6 + ϵ4 /12 + · · · )
≥ −ϵ ,
where Eq. (13) used a Taylor series for ln(1 − ϵ). Now set ϵ = 1/(c + 1). Then by the above we
have ln(1 − 1/(c + 1)) = ln(1 − ϵ) ≥ −ϵ/(1 − ϵ) = −1/c and thus 1 − 1/(c + 1) ≥ e−1/c , from which
we get Eq. (12).
Moving to the bounds on the error probability, the second claim is clear since the number of blue
points cannot be more than n. To show the first claim, we first recall the following:
Chernoff bound. Let X1 , . . . , Xn be independent random variables over {0, 1}, each with
2
expectation q, and let X = X1 + · · · + Xn . Then Pr[X − nq > a] ≤ e−a /2n for any a > 0.
To apply this let Xi (s) = 1 − s[i] for i ∈ [1..n], with the probability over s ← $ D-BC. Then
X(s) = |B(s)| is the number of blue points in s. Each Xi has expectation q = 1 − p so nq =
n(1 − p) = n/(c + 1) and we have assumed m > n/(c + 1) so a = m − nq > 0 and we have
RD-BC (n, c, m) = Pr[ |B(s)| > m : s ← $ D-BC(n, c)] = Pr[X > m]
12
D-BC D-FX
n c e(c + 1)
1/P∗D-BC (n, c) m 1/P∗D-FX (n, c) m
100 M 10 K 27, 184 150, 665 27, 183 9, 999 27, 185
100 M 100 K 271, 829 143, 293 271, 694 999 271, 830
100 M 1000 K 2, 718, 283 144, 002 2, 704, 680 99 2, 718, 284
250 M 25 K 67, 958 233, 439 67, 955 9, 999 67, 959
250 M 250 K 679, 571 227, 001 679, 232 999 679, 573
250 M 2500 K 6, 795, 705 228, 633 6, 761, 699 99 6, 795, 707
500 M 50 K 135, 915 327, 085 135, 909 9, 999 135, 916
500 M 500 K 1, 359, 142 321, 695 1, 358, 463 999 1, 359, 143
500 M 5000 K 13, 591, 410 324, 365 13, 523, 397 99 13, 591, 411
Figure 3: Evaluation of samplers, where n is stated in millions (M) and c in thousands (K). For
D-BC we compute m such that RD-BC (n, c, m)/P∗D-BC (n, c) ≤ 2−128 since this will be an additive
term in our main theorem (cf. Theorem 4.1). For D-FX we set m = ⌊(n − 1)/(c + 1)⌋. The last
column is our estimate of e(c + 1) for the reciprocal of the success probabilities.
2 /2n
= Pr[X − nq > a] ≤ e−a ,
which completes the proof.
Fixed-set-size sampler. Can one do better? It turns out the success probability of D-BC is not
optimal but still very good; its more important drawback is having non-zero error probability. Our
fixed-set-size sampler fills both gaps. Its success probability is optimal, meaning maximal in the
class of all HWD samplers, and it achieves this with zero error probability.
For intuition, returning to Eq. (7), let ℓ be such that n−c−1 ≥ n−c−1
ℓ j for all j ∈ [0..n − c − 1].
Then clearly P∗D (n, c) is maximized by setting W(n, c, c + j) = 1 if j = ℓ and 0 otherwise. That is,
all the probability is on strings of Hamming weight t = c+ℓ. This leads to our fixed-set-size sampler
D-FXt , which is parameterized by an integer t ∈ [c..n − 1] and shown on the right in Figure 2. Here
P(n, t) denotes the set of all size t subsets of [1..n]. The sampler picks a random set T ⊆ [1..n] of
size t and returns as s its characteristic vector, meaning s[j] = 1 if j ∈ T and s[j] = 0 otherwise
for all j ∈ [1..n]. We let D-FX = D-FXt for t = ⌈(cn − 1)/(c + 1)⌉ and refer to this as the optimal
fixed-set-size sampler. Theorem 3.3 below justifies this name by showing that D-FX maximizes the
success probability, first across all samplers in the fixed-set-size class, and second across all HWD
samplers, and meanwhile has error probability is zero.
Theorem 3.3 Let n, m ≥ 1 and c, t ≥ 0 be integers such that n − t ≤ m and c ≤ t < n. Let
D-FXt be the fixed-set-size sampler associated to t as per Figure 2. Then D-FXt is Hamming-weight
n−c−1 n −1
determined and P∗D-FXt (n, c) = t−c · t . Furthermore P∗D-FX (n, c) ≥ P∗D-FXt (n, c) for all
∗ ∗
t ∈ [c..n − 1]. Also PD-FX (n, c) ≥ PD (n, c) for all HWD samplers D and
1 1
P∗D-FX (n, c) ≥ · . (14)
e c+1
Finally, RD-FX (n, c, m) = 0.
Proof: Let W(n, c, ℓ) = 1/ nℓ if ℓ = t and 0 otherwise. If s ∈ {0, 1}n has Hamming weight ℓ
then its probability of being output by D-FXt is W(n, c, ℓ), showing that the sampler D-FXt is
Hamming-weight determined with Hamming-weight probability function W. Define the function
13
n−c−1 n−1
f : [c..n − 1] → [0, 1] by f (t) = t−c · t . Now from Eq. (7) we have
P∗D-FXt (n, c) = n−c−1
Pn−c−1
j=0 W(n, c, c + j) · j
n−c−1
n−c−1 t−c
= W(n, c, t) · t−c = n = f (t) .
t
Now we seek to maximize f . Calculus does not seem much help in the face of this complex function.
We instead look at ratios of consecutive terms, namely we seek t such that f (t)/f (t + 1) = 1. We
start by noting that
a
b b+1
a = (15)
b+1 a−b
for any integers 0 ≤ b < a. Now for t ∈ [c..n − 2], and using Eq. (15), we have
n−c−1 n
f (t) t−c t+1 t−c+1 n−t
= n−c−1 · n = · . (16)
f (t + 1) t+1−c t n−t−1 t+1
This ratio is 1 when (t − c + 1)(n − t) = (n − t − 1)(t + 1) which solves to t = (cn − 1)/(c + 1).
Let us define t̄ to be this value. The latter may however not be an integer. We observe that f
increases to its maximum and then decreases, so the maximum for integer arguments occurs at
∗
t∗ = ⌈t̄ ⌉, which was the choice of t used to define D-FX = D-FXt . This shows the claim in the
Theorem that P∗D-FX (n, c) ≥ P∗D-FXt (n, c) for all t ∈ [c..n − 1]. Now the broader optimality claim,
namely that P∗D-FX (n, c) ≥ P∗D (n, c) for all HWD samplers D, follows from Eq. (7). Why? The
expression in the equation is maximized by picking t ∈ [c..n − 1] such that n−c−1 n−c−1
t−c ≥ j for
∗
all j ∈ [0..n − c − 1] and setting W(n, c, c + j) = 1 if c + j = t and 0 otherwise. That is, PD (n, c)
≤ P∗D-FXt (n, c) ≤ P∗D-FX (n, c). Eq. (14) now follows from Theorem 3.2. The error probability is
clearly zero. This completes the proof.
Numerical estimates. Figure 3 gives numbers for a few values of n (in millions) and c (in
thousands). We see that 1/P∗D-FX (n, c) ≤ 1/P∗D-BC (n, c), meaning D-FX is always better than D-BC,
but the difference is small. (Intuitively this is because in the biased coin sampler, the expected
Hamming weight of the sampled s is w = cn/(c+1), which is very close to t∗ = ⌈w −1/(c+1)⌉.) We
see that the approximation e(c + 1) is very good, which is why we will use it in our applications. We
see that e(c + 1), the factor our bounds will give up in applications, is much less than n, the factor
that prior bounds gave up. In the least conservative estimation, we set c = 1% as this already gives
improvements for concrete parameters. Depending on the application, an asymptotic improvement
√
can be achieved if c can be bounded by n. For D-BC, we have chosen m to put the error probability
at 2−128 , and see that it is already significantly less than n, but the m = ⌊(n − 1)/(c + 1)⌋ for D-FX
is appreciably better (lower).
Conclusion. Moving forward, we will use the fixed-set-size sampler D-FX and Theorem 3.3. This
may raise the question of why we have considered the biased-coin sampler at all. One reason is that
the analysis and results of Theorem 3.2 are used and needed as comparison points to determine and
eventually conclude that Theorem 3.3 does better. The other reason is historical, namely that the
biased-coin sampler is the natural extension of Coron’s technique and thus worthy of formulating
and analyzing.
14
4 General framework and cp-muc security theorem
Our technique works across many games (security notions) and schemes. We want to avoid repeating
similar proofs each time and also want to understand the scope and limits of the technique: under
what conditions does it work, and when does it not work? This Section develops answers to these
questions with a general framework.
Our goal is to prove statements of the form “For all security games satisfying a certain condition,
mu security can be promoted to cp-muc security.” For this to be mathematically sound requires
a formal definition of a “security game.” The only approach we know, code-based games [BR06],
would require formalizing a programming language. We suggest here a simpler approach suited to
our ends. We define an object called a formal security specification (FSS) that, formally, is just
an algorithm that functionally specifies the initializations, oracle input-output behaviors (including
what happens under corruptions) and final decision of the game underlying the target security
notion. To an FSS Π we then associate three (standard, code-based) games capturing su, mu and
muc security, respectively. We define locality of an FSS as the condition needed for the result,
which is stated and proved in our General Theorem (Theorem 4.1).
Schemes. An FSS aims to define security of a scheme Sch, so we start with a simple and general
abstraction of the latter. Namely, a scheme Sch is simply a tuple. Its entries may include algo-
rithms as well as (descriptions of) associated sets or numbers. An example is a signature scheme,
which specifies a key-generation algorithm Sch.Kg, a signing algorithm Sch.Sign and a verification
algorithm Sch.Vf. As this indicates, we extract individual components of the scheme tuple with dot
notation. Another example is an encryption scheme, specifying key-generation algorithm Sch.Kg,
encryption algorithm Sch.Enc and decryption algorithm Sch.Dec. It may also specify the length
Sch.rl of the randomness (coins) used by the encryption algorithm, and a message space Sch.MS.
In the ROM, it is often the case that the range set of the RO needed depends on the scheme.
(For example, for a KEM, it could be the set of strings of length the desired session key.) To
accommodate this, we allow schemes to name the set Sch.ROS from which they ask their random
oracle to be drawn.
Formal security specifications and the single-user game. A formal security specification
(FSS) Π is an algorithm whose input is a string name, drawn from a finite set Names ⊆ {0, 1}∗
associated to Π, that serves to name a sub-algorithm Π(name, · · · ). The names gs, init, fin ∈
Names, and in our context also corr ∈ Names, are reserved; they stand, respectively, for “Global
Setup,” “Initialize,” “Finalize” and “Corrupt.” Particular FSSs can define more names. Through
(code-based) game GΠ Sch shown in Figure 4, Π specifies a notion of security for a scheme Sch. In
Init, line 1 picks, from the random oracle space of the scheme, a function h that will serve as the
random oracle. The global setup sub-algorithm Π(gs) of the FSS is then run to produce a pair
(pp, os) ← $ Π[Sch, h](gs) whose components are called the public parameters and oracle secrets,
respectively. (An example os is a global challenge bit for an ind-style game.) As the notation
indicates, sub-algorithms of Π have access to the scheme Sch and may thus call its algorithms, and
they also have oracle access to h. At line 2, the initialize sub-algorithm Π(init, ·) is run. It takes
as input pp, os and produces an output iout together with an initial state St. The adversary is
given (pp, iout). The GΠ Sch game will maintain the state St and update it as necessary. After that,
the FSS maps to the actual game in a straightforward way. For any name ∈ {gs, init, fin, corr},
sub-algorithm Π(name, · · · ) implements an actual oracle Oracle(name, · · · ) in GΠ Sch . Given an
argument arg, the latter runs Π[Sch, h](name, arg, St), where St is the current state, to get an
output oout (that is returned to the adversary) and updated state St (that is maintained by game
GΠ Π
Sch ). Game GSch also provides a random oracle RO that gives the adversary access to h. Fin
15
Game GΠ
Sch
Init:
1 h ← $ Sch.ROS ; (pp, os) ← $ Π[Sch, h](gs)
2 (iout, St) ← $ Π[Sch, h](init, (pp, os)) ; Return (pp, iout)
Oracle(name, arg): // name ̸∈ {gs, init, fin, corr}
3 (oout, St) ← $ Π[Sch, h](name, arg, St) ; Return oout
RO(x): // Random oracle
4 y ← h(x) ; Return y
Fin(farg):
5 dec ← $ Π[Sch, h](fin, farg, St) ; Return dec
Figure 4: Top: Game associated to formal security specification Π and scheme Sch in the single-
user setting. Bottom Left: Formal security specification UF. Bottom Right: The GUF Sig game,
rewritten in standard form.
uses sub-algorithm Π(fin, · · · ) to return a game decision dec. Beyond the argument farg provided
by the adversary, Π(fin, · · · ) gets the current state as input.
As an example, we show on the bottom left of Figure 4 the FSS UF corresponding to uf-security
of a signature scheme Sig. To its right we show GUF
Sig re-written in the usual way to see that it is the
standard game. Of course, if one is interested in just one notion (such as uf-security) one can give
GUF
Sig directly and there is no need for FSSs, but as we have discussed, FSSs will allow us to give
precise yet general results covering many games, and also allow us to classify games into different
types.
The GΠ Sch game captures the single-user setting and does not use sub-algorithm Π(corr, · · · ); it
will be used below. The difference between pp and os is that the FSS gets both but the adversary
gets only the former. This distinction will allow us to distill exactly what our general result needs
to work and also let us to apply the latter broadly and in settings where “corrupt” does not have
the usual interpretation.
An FSS Π has an associated type, which is either “search” or “decision.” The advantage of
Π
an adversary A playing GΠ Π
Sch is defined as AdvSch (A) = Pr[GSch (A)] if Π is a search FSS and
Π Π
AdvSch (A) = 2 Pr[GSch (A)] − 1 if Π is a decision FSS. In the latter case we will make an extra
technical assumption, namely that for all Sch, h, St the decision dec ← $ Π[Sch, h](fin, farg, St) is
16
Π-mu-n Π-muc-(n,c)
Games GSch and GSch
Init:
1 h ← $ Sch.ROS ; (pp, os) ← $ Π[Sch, h](gs)
2 For i = 1, . . . , n do (iout[i], St[i]) ← $ Π[Sch, h](init, (pp, os))
3 Return (pp, iout)
Oracle(name, (i, arg)): // name ̸∈ {gs, init, fin, corr} and i ∈ [1..n]
4 (oout, St[i]) ← $ Π[Sch, h](name, arg, St[i]) ; Return oout
Π-muc-(n,c)
Corrupt(i, arg): // Game GSch only
5 CS ← CS ∪ {i}
6 If |CS| > c then return ⊥
7 (oout, St[i]) ← $ Π[Sch, h](corr, arg, St[i]) ; Return oout
RO(x): // Random oracle
8 y ← h(x) ; Return y
Figure 5: Game describing the execution of a formal security specification Π with scheme Sch in
the multi-user setting, both without corruptions (mu) and with corruptions (muc). The difference
is that oracle Corrupt is included only in the second game.
uniformly distributed over {true, false} when farg ← $ {0, 1}. This is typically true because os is a
random challenge bit and Π[Sch, h](fin, farg, St) returns [[farg = os]], but it does not in general
follow merely through the definition of the advantage as 2 Pr[GΠSch (A)]−1. So we make it a separate
requirement. This will be used in the proof of Theorem 4.1.
We say that an FSS Π is local if the oracle secret os is always ε. (A bit more formally, the
probability that os = ε when (pp, os) ← $ Π[Sch, h](gs) is 1 for all Sch, h.) Intuitively, different
users in (upcoming) games GΠ -mu-n and GΠ-muc-(n,c) will have no secret common to them all but
Sch Sch
unknown to the adversary. This captures the condition for our general result of Theorem 4.1.
Multi-user security for an FSS. To FSS Π, scheme Sch, a number n ≥ 1 of users and a
number c ≥ 0 of corruptions, we now associate a multi-user (without corruptions) game GSch Π-mu-n
Π-muc-(n,c)
and a multi-user with corruptions game GSch as shown in Figure 5. We see here the reason
to separate initialization into the two sub-algorithms Π(gs) and Π(init, · · · ): The first is run once
to produce parameters common to all users, while the second is run once per user and produces a
state St[i] for each user i ∈ [1..n]. Oracle Oracle(name, · · · ) now takes a user identity i in addition
to arg and answers via the sub-algorithm Π(name, · · · ) using the state of user i. Fin also takes a
user identity i and returns its decision using the state of that user. Oracle Corrupt is present
Π-muc-(n,c)
only in game GSch and answers as per the associated sub-algorithm, again for a given user.
Line 6 restricts the number of corruptions to c and line 9 excludes wins on corrupted users. Note
that the models of prior works, which allow any number of corruptions, can be captured as the
special case of our model in which we set c = n − 1. The advantage AdvΠ -mu-n (A) of an adversary
Sch
A playing GΠ -mu-n is defined as Pr[GΠ-mu-n (A)] if Π is a search FSS and 2 Pr[GΠ-mu-n (A)] − 1
Sch Sch Sch
Π-muc-(n,c) Π-muc-(n,c)
if Π is a decision FSS, and correspondingly AdvSch (A) is defined as Pr[GSch (A)] or
Π-muc-(n,c)
2 Pr[GSch (A)] − 1.
General cp-muc theorems. We give our main, general results that promote mu to corruption-
parameterized muc security for all local game specifications. Later we will see many applications.
Below we crucially leverage HWD samplers.
17
Theorem 4.1 Let n, m ≥ 1 and c ≥ 0 be integers such that m ≤ n and c < n. Let D be a Hamming-
weight determined sampler. Let α = P∗D (n, c) and β = RD (n, c, m). Let Sch be a scheme, and Π
a formal security specification for it. Assume Π is local. Let γ = 1 if Π is a search-type FSS and
Π-muc-(n,c)
γ = 2 if Π is a decision-type FSS. Let A be an adversary for game GSch . Then we construct
an adversary B for game GSch Π -mu -m such that
Π-muc-(n,c)
Adv Sch (A) ≤ (1/α) · AdvΠ-mu-m (B) + γ · β/α .
Sch (17)
Adversary B makes, to any oracle in its game, the same number of queries as A made. The running
time of B is about that of A plus the time for an execution of the sampler D.
Before discussing the proof we state a corollary obtained by plugging the fixed-set-size sampler into
the above; this is what we will most often use in applications.
Theorem 4.2 Let n, c be integers such that 0 ≤ c < n, and let m = ⌊(n − 1)/(c + 1)⌋. Let Sch be
a scheme, and Π a formal security specification for it. Assume Π is local. Let A be an adversary
Π-muc-(n,c) -mu-m such that
for game GSch . Then we construct an adversary B for game GΠ Sch
Π-muc-(n,c) -mu-m (B) .
AdvSch (A) ≤ e(c + 1) · AdvΠ
Sch (18)
Adversary B makes, to any oracle in its game, the same number of queries as A made. The running
time of B is about that of A plus the time for an execution of the sampler D-FX.
Proof of Theorem 4.2: Let D = D-FX be the optimal fixed-set-size sampler and apply Theo-
rem 4.1. We have α = P∗D-FX (n, c) and β = RD-FX (n, c, m) in Eq. (17). Now Theorem 3.3 says
that 1/α ≤ e(c + 1), while β = 0, yielding Eq. (18).
It remains to prove Theorem 4.1.
Proof of Theorem 4.1: We start with an overview. Adversary B (shown in detail in Figure 6)
picks s by running the sampler D, thereby coloring each user i ∈ [1..n] as either red (s[i] = 1) or
Π-muc-(n,c)
blue (s[i] = 0). It now runs A. In answering the game-GSch queries that A makes, B will
simulate a red user i directly. This means it will pick and maintain this user’s state St[i], allowing
it to answer all queries to this user, including, most importantly, a Corrupt(i) query. Meanwhile,
it will aim to identify blue users with the users in its own underlying GΠ -mu-m game, using its
Sch
oracles from that game to reply to queries of A for these users. It hopes that A runs to a win with
a Fin query to a blue user, in which case B will win its own game.
There are a number of bad events, ways in which this strategy can fail. The first is that n−wH (s) >
m, meaning the number of blue users is too large for them to be mapped to the users in game
Π-mu-m . Letting G (Figure 7) be a game capturing B’s advantage, a first game hop will bound
GSch 0
the probability of this bad event via the First Fundamental Lemma of Game Playing (Lemma 2.1),
and lead to the β/α term in the bound while putting us now in a game G1 where the limitation
on the number of blue users has vanished. The second source of failure is that either there is a
Corrupt query to a blue user (B has no way to answer this) or A’s Fin query is to a red user
(in which case B won’t win). Game G2 (Figure 8) flags this through settings of flag bad. The
difficulty is that the probability (that bad is set) is large, namely close to 1, and not small. So,
while handling it again via the First Fundamental Lemma of Game Playing is possible, it would
yield a large additive term in the bound, making the latter vacuous. Instead, letting GD be the
probability that bad is not set and W the event that B wins, we seek to lower bound Pr[W ∧ GD] in
G2 . A crucial use of the Second Fundamental Lemma of Game Playing (Lemma 2.2) equates this
with the same probability in a game G3 where events GD and W are (unlike in G2 ) independent.
18
Adversary B:
1 s ← $ D(n, c) ; (pp, iout′ ) ← $ GΠ-mu-m .Init ; farg′ ← $ {0, 1}
Sch
2 If n − wH (s) > m then dec ← $ GΠ-mu-m .Fin(1, farg′ )
Sch
This leaves us with the product Pr[W] · Pr[GD] in G3 . To conclude, we will lower bound Pr[W]
in terms of the advantage of A. Then we will use Theorem 3.1 to lower bound Pr[GD] in terms of
the sampler success probability P∗D (n, c) of Eq. (7).
A wrinkle is that the analysis sketched above is for the case that Π is a search-type FSS. The
case of it being a decision-type FSS is more delicate. We will need to ensure that when bad events
happen, B has advantage zero, which is done leveraging the assumption we have made that decision
formal security specifications return a random decision in this case. While the adversary B and the
games will be written to cover both cases (search and decision), the analyses are different enough
that, below, we give them separately. (And indeed the bound in Eq. (17) is slightly different.)
Why do we need to assume that Π is local? The secret os in the GΠ -mu-n game is not available to
Sch
Π-muc-(n,c)
B, yet it has to simulate game GSch for A in such a way that it is underlain by this same os.
This happens correctly for blue users because their queries are forwarded, but B cannot simulate
the oracle replies for red users to be consistent with os. If the FSS is local, however, os = ε so the
difficulty goes away.
Proceeding to the full proof, we assume A makes exactly c queries to its Corrupt oracle and also
respects the rules of its game, allowing us to omit the corresponding checks in our games. Adversary
B is now shown in Figure 6. Recall that it is playing game GΠ -mu-m , whose oracles it calls. At
Sch
line 1 it picks s by running D. If there are too many blue users, it terminates at line 2 with a call
Π-mu-m .Fin using a random bit farg′ (chosen at line 1) and (as default) user 1. Now B runs A,
to GSch
Π-muc-(n,c)
responding to the latter’s oracle queries as shown. In the simulation of GSch .Init, if user i is
red, then (line 5) its state is initialized honestly as per Π, and otherwise (line 6) i is identified with
19
Games G0 , G1
Init:
1 h ← $ Sch.ROS ; (pp, ε) ← $ Π[Sch, h](gs) ; s ← $ D(n, c) ; farg′ ← $ {0, 1}
2 For i = 1, . . . , n do
3 (iout[i], St[i]) ← $ Π[Sch, h](init, (pp, ε))
4 If n − wH (s) > m then
5 bad ← true ; dec ← $ Π[Sch, h](fin, farg′ , St[1])
6 Return (pp, iout)
Oracle(name, (i, arg)): // name ̸∈ {init, fin, corr} and i ∈ [1..n]
7 (oout, St[i]) ← $ Π[Sch, h](name, arg, St[i]) ; Return oout
Corrupt(i, arg): // i ∈ [1..n]
8 CS ← CS ∪ {i}
9 If s[i] = 1 then (oout, St[i]) ← $ Π[Sch, h](corr, arg, St[i])
10 Else oout ← ⊥ ; B ← B ∪ {i}
11 Return oout
RO(x): // Random oracle
12 y ← h(x) ; Return y
Fin(i, farg): // i ∈ [1..n]
13 If dec ̸= ⊥ then return dec
14 If i ∈ CS then return false
15 If (s[i] = 0 and B = ∅) then dec ← $ Π[Sch, h](fin, farg, St[i])
16 Else dec ← $ Π[Sch, h](fin, farg′ , St[i])
17 Return dec
Figure 7: Games G0 , G1 for the proof of Theorem 4.1, where the former includes the boxed code
and the latter does not.
user π(i) of game GSchΠ-mu-m . For queries GΠ-muc-(n,c) .Oracle(name, (i, arg)), the reply is computed
Sch
directly if i is red and forwarded to user π(i) in game GΠ -mu-m otherwise. If i is red, a correct
Sch
Π-muc-(n,c)
reply is possible, and given, to a GSch .Corrupt(i, arg) query, and otherwise the reply is ⊥.
Random oracle queries of A are simply forwarded to B’s random oracle. (The latter is also used to
reply to random oracle queries made by Π in any direct executions by B of the latter, as at lines
Π-muc-(n,c) -mu-m .Fin for user
5, 8, 11.) When A calls GSch .Fin(i, farg), adversary B calls its own GΠSch
π(i) with the same argument farg, and otherwise fails through calling with a random argument
farg′ .
For the analysis, consider games G0 , G1 of Figure 7, where the former includes the boxed code and
the latter does not. These are to be executed with adversary A. (Not B.) In the following we make
the case distinction between search-type and decision-type FSS.
Case 1: Search-type FSS. Assume Π is a search-type FSS. The clump of equations below is
justified after their statement:
AdvΠ -mu-m (B) = Pr[G (A)] (19)
Sch 0
20
Games G2 , G3
Init:
1 h ← $ Sch.ROS ; (pp, ε) ← $ Π[Sch, h](gs) ; s ← $ D(n, c) ; farg′ ← $ {0, 1}
2 For i = 1, . . . , n do
3 (iout[i], St[i]) ← $ Π[Sch, h](init, (pp, ε))
4 Return (pp, iout)
Figure 8: Games G2 , G3 for the proof of Theorem 4.1, where the former includes the boxed code
and the latter does not.
A if there are too many blue users, while G0 computes the decision in the boxed code at line 5 but
its execution with B would continue; however, the outcome is the same due to line 13 (Figure 7)
and the assumption that adversaries always call their Fin oracle. Recalling that Π is a search-type
FSS, this justifies Eq. (19). (Note that we use here the assumption that Π is local, meaning os = ε,
without which this equation may not be true.) Eq. (20) is trivial. We have written games G0 , G1
to be identical-until-bad, so Eq. (21) follows from the First Fundamental Lemma of Game Playing
(Lemma 2.1). Now n − wH (s) at line 4 is the number of blue points in s so by Eq. (8) we have
Pr[G1 (A) sets bad] ≤ RD (n, c, m) = β. (22)
The benefit of moving from G0 to G1 is that the restriction on the number of blue users has
vanished. We turn to lower bounding Pr[G1 (A)]. For this consider games G2 , G3 of Figure 8 (the
former includes the boxed code and the latter does not), again to be executed with A. In either
game, let GD be the event that bad is not set to true in the execution of A with the game. We
claim that
Pr[G1 (A)] = Pr[G2 (A)] (23)
≥ Pr[G2 (A) ∧ GD] (24)
= Pr[G3 (A) ∧ GD] (25)
= Pr[G3 (A)] · Pr[GD] . (26)
Π-muc-(n,c)
Let us justify this. With the intent of moving towards game GSch , game G2 initializes all
users honestly. However, the boxed code at line 8 ensures that it fails on Corrupt queries to blue
users like G1 , and the boxed code on line 14 ensures the decision returned is that same as in G1 ,
justifying Eq. (23). Eq. (24) is trivial. Games G2 , G3 have been defined to be identical-until-bad, so
21
that we may now (crucially) apply the Second Fundamental Lemma of Game Playing (Lemma 2.2)
to get Eq. (25). The benefit of this move is that in G3 , due to the absence of the boxed code,
the setting of bad does not influence what the game returns, so the events “G3 (A)” and GD are
independent, justifying Eq. (26).
Next we observe that
Π-muc-(n,c)
Pr[G3 (A)] = AdvSch (A) (27)
Pr[GD] = P∗D (n, c) = α . (28)
Π-muc-(n,c)
With the boxed code gone, G3 is the same as GSch Eq. (27). If bad is set at line 8
, justifying
it is also set at line 13, so for the purpose of computing Pr[GD] we can consider only the latter.
Now we see that the probability that bad is not set at line 13 in the Fin(i, farg) call is exactly
PD,n,c (CS, i) as defined via Eq. (6). But we have assumed A makes exactly c queries to Corrupt,
so |CS| = c. Now Proposition 3.1 implies that PD,n,c (CS, i) = P∗D (n, c). (In particular, it does not
depend on the choices of CS, i, which here are random variables.) This justifies Eq. (28).
Putting together the above we have
AdvΠ -mu-m (B) ≥ α · AdvΠ-muc-(n,c) (A) − β ,
Sch Sch
yielding Eq. (17), and completing the proof, in the case that Π is a search-type FSS.
Case 2: Decision-type FSS. We now proceed to the case that Π is a decision-type FSS. The
adversary B and games are already written to handle this. We indicate how to adapt the analysis.
We start with the key claim, namely that
1
Pr[ G2 (A) | ¬GD ] = (29)
2
where ¬GD, the complement of GD, is the event that bad is set to true in the game. The reason
for this is that whenever ¬GD happens, the output of the game is determined at line 14 with farg′
the random bit from line 1, and our definition of decision formal security specifications then says
that the game output dec ∈ {true, false} is also random. Note Eq. (29) implies that
2 Pr[G2 (A) ∧ ¬GD] − Pr[¬GD]
= 2Pr[ G2 (A) | ¬GD ] · Pr[¬GD] − Pr[¬GD] = 0 , (30)
which we will use below. Now, turning to the full analysis, we start with
AdvΠ-mu-m (B) = 2 Pr[G (A)] − 1
Sch 0
22
= (2 Pr[G3 (A)] − 1) · Pr[GD] .
Eq. (32) is by Eq. (30). Other equations are either standard probability theory or justified as above
in the search case. We also use throughout that (by Lemma 2.1) Pr[GD] is the same in G2 and
G3 , and hence do not indicate the game in the notation. Now we observe that
Π-muc-(n,c)
2 Pr[G3 (A)] − 1 = AdvSch (A) . (33)
Then using Eq. (28) and putting all this together we have
AdvΠ -mu-m (B) ≥ α · AdvΠ-muc-(n,c) (A) − 2β ,
Sch Sch
yielding Eq. (17), and completing the proof, in the case that Π is a decision-type FSS.
5 Applications
We can promote mu to cp-muc for many target goals and schemes simply by noting that the FSS is
local and applying Theorem 4.2. We call these direct applications, and below we illustrate with a
few examples, but there are many more. However, the FSSs for some important goals, most notably
ind-style games with a single challenge bit across all users, are not local. Nonetheless, we can give
dedicated proofs for specific schemes. Since these results use Theorem 4.2 in an intermediate step,
we call them indirect applications.
Theorem 5.1 Let n, c be integers such that 0 ≤ c < n, and let m = ⌊(n − 1)/(c + 1)⌋. Let A be
UF-muc-(n,c) -mu-m such
an adversary for game GSig . Then we construct an adversary B for game GUF
Sig
that
UF-muc-(n,c) -mu-m (B) .
AdvSig (A) ≤ e(c + 1) · AdvUF
Sig
Adversary B makes, to any oracle in its game, the same number of queries as A made. The running
time of B is about that of A plus the time for an execution of the sampler D-FX.
As explained in Section 1.1: (1) for tightly mu secure schemes, this implies
UF-muc-(n,c)
AdvSig (A) ≤ e(c + 1) · AdvUF
Sig (B) ,
a significant improvement over the factor n from the hybrid argument bound, and (2) this yields
tightness improvements for standardized schemes that are used in practical applications such as
AKE, which we will discuss in Section 5.2. Further, we note that the analogous result is true for
strongly-unforgeability of signatures.
Public-Key Encryption. Mu (and thus muc) security for public-key encryption and KEMs can
be defined with a single challenge bit across all users [BBM00], which we call the single-bit (sb)
setting, or with a per-user challenge bit, which we call the multi-bit (mb) setting. The settings are
asymptotically equivalent, but we are interested in tightness. Heum and Stam [HS21] show that
23
CCA-MB-muc-(n,c)
CCA-MB[KEM, h](gs): Game GKEM
1 Return (ε, ε) Init:
1 h ← $ Sch.ROS
CCA-MB[KEM, h](init, ε):
2 For i = 1, . . . , n do
2 (ek, dk) ← $ KEM.Kg[h]
3 (ek i , dk i ) ← $ KEM.Kg[h]
3 St.ek ← ek ; St.dk ← dk
4 bi ← $ {0, 1}
4 b ← $ {0, 1} ; St.b ← b
5 Return (ek 1 , . . . , ek n )
5 Return (ek, St)
Oracle(enc, i): // i ∈ [1..n]
CCA-MB[KEM, h](enc, ε, St):
6 (C, K0 ) ← $ KEM.Encaps[h](ek i )
6 (C, K0 ) ← $ KEM.Encaps[h](St.ek)
7 K1 ← $ {0, 1}KEM.kl
7 K1 ← $ {0, 1}KEM.kl
8 Si ← Si ∪ {C} ; Return (C, Kbi )
8 St.S ← St.S ∪ {C}
Oracle(dec, (i, C)): // i ∈ [1..n], C ∈
/ Si
9 Return (C, KSt.b , St)
9 K ← KEM.Decaps[h](dk i , C)
CCA-MB[KEM, h](dec, C, St): 10 Return K
10 If C ∈ St.S then return ⊥
Corrupt(i): // i ∈ [1..n]
11 K ← KEM.Decaps[h](St.dk, C) 11 If Si ̸= ∅ then return ⊥
12 Return (K, St) 12 CS ← CS ∪ {i} ; Return dk i
CCA-MB[KEM, h](corr, ε, St): RO(x):
13 Return (St.dk, St) 13 h ← h(x) ; Return h
Figure 9: Left: Formal security specification CCA-MB[KEM, h] capturing security with multiple
CCA-MB-muc-(n,c)
challenge bits. Right: Game GKEM describing the execution of CCA-MB[KEM, h] in the
muc setting.
without corruptions, SB security is the stronger definition, but which is stronger in the presence of
adaptive corruptions is open.
Our framework is able to capture both settings, in the sense that we can give FSSs for both, but
interestingly, only the multi-bit one is local, so our general result only applies in the MB setting.
We discuss this here; we give results for the SB setting in Section 5.2.
Our FSS CCA-MB for KEMs capturing indistinguishability under chosen-ciphertext attacks in
the MB setting and the corresponding muc game are shown in Figure 9. It picks one challenge bit
for each user which is used for encryption queries. During the game, the adversary can adaptively
learn user’s decryption keys via a corruption, thus also learning the user’s challenge bit. In the end,
the adversary has to guess the challenge bit of an uncorrupted user. Note that CCA-MB does not
need oracle secret, i. e., it is local, and we can apply Theorem 4.2 to obtain the following result.
Theorem 5.2 Let n, c be integers such that 0 ≤ c < n, and let m = ⌊(n − 1)/(c + 1)⌋. Let A be an
CCA-MB-muc-(n,c) -mu-m
adversary for game GKEM . Then we construct an adversary B for game GCCA-MB
KEM
such that
CCA-MB-muc-(n,c) -mu-m (B) .
AdvKEM (A) ≤ e(c + 1) · AdvCCA-MB
KEM
Adversary B makes, to any oracle in its game, the same number of queries as A made. The running
time of B is about that of A plus the time for an execution of the sampler D-FX.
As a second illustrative example, we consider one-way security of PKE under plaintext checking
and ciphertext validity attacks [Den03,HHK17], which serves as a useful intermediate notion for the
24
OW-PCVA-muc-(n,c)
OW-PCVA[PKE, h](init, ε): GPKE
1 (ek, dk) ← $ PKE.Kg[h] Init:
2 St.ek ← ek ; St.dk ← dk 1 h ← $ Sch.ROS
Figure 10: Left: Formal security specification OW-PCVA[PKE, h], where the global setup is trivial.
OW-PCVA-muc-(n,c)
Right: Game GPKE describing the execution of OW-PCVA[PKE, h] in the muc setting.
Fujisaki-Okamoto (FO) transform [FO99, FO13], which we will have a closer look at in Section 5.2.
The FSS OW-PCVA is given in Figure 10. It is local, and thus we get:
Theorem 5.3 Let n, c be integers such that 0 ≤ c < n, and let m = ⌊(n − 1)/(c + 1)⌋. Let
OW-PCVA-muc-(n,c)
A be an adversary for game GPKE . Then we construct an adversary B for game
GOW-PCVA -mu-m such that
PKE
Adversary B makes, to any oracle in its game, the same number of queries as A made. The running
time of B is about that of A plus the time for an execution of the sampler D-FX.
25
AE-MB-muc-(n,c)
AE-MB[SE, h](init, ε): Game GSE
1 k ← $ {0, 1}SE.kl ; St.k ← k Init:
1 h ← $ Sch.ROS
2 b ← {0, 1} ; St.b ← b
$
2 For i = 1, . . . , n do
3 Return (ε, St)
3 ki ← $ {0, 1}SE.kl ; bi ← $ {0, 1}
AE-MB[SE, h](enc, (M, N ), St):
Oracle(enc, (i, M, N )): // i ∈ [1..n]
4 C0 ← $ SE.Enc[h](St.k, M, N )
SE.cl(|M |)
4 C0 ← $ SE.Enc[h](ki , M, N )
5 C1 ← $ {0, 1}
5 C1 ← $ {0, 1}SE.cl(|M |)
6 St.Si ← St.Si ∪ {(CSt.b , N )}
6 Si ← Si ∪ {Cbi } ; Return Cbi
7 Return (CSt.b , St)
Oracle(dec, (i, C, N )): // i ∈ [1..n], (C, N ) ∈
/ Si
AE-MB[SE, h](dec, (C, N ), St):
7 M0 ← $ SE.Dec[h](dk i , C, N ); M1 ← ⊥
8 If (C, N ) ∈ St.S then return ⊥ 8 Return Mbi
9 M0 ← SE.Dec[h](St.k, C, N ) ; M1 ← ⊥
Corrupt(i): // i ∈ [1..n]
10 Return (MSt.b , St)
9 CS ← CS ∪ {i} ; Return ki
AE-MB[SE, h](corr, ε, St):
RO(x):
11 Return (St.k, St) 10 h ← h(x) ; Return h
AE-MB[SE, h](fin, b′ , St): Fin(i, b′ ): // i ∈ [1..n]
12 Return (JSt.b = b′ K, ϵ) 11 If i ∈ CS then return false
12 Return Jbi = b′ K
Figure 11: Left: Formal security specification AE-MB[SE, h] for multi-bit security of authenticated
AE-MB-muc-(n,c)
encryption. Right: Game GSE describing the execution of AE-MB[SE, h] in the muc
setting.
expressed via a local FSS which we provide in Figure 11. We then apply our general theorem to
get the following.
Theorem 5.4 Let n, c be integers such that 0 ≤ c < n, and let m = ⌊(n − 1)/(c + 1)⌋. Let SE be an
AE-MB-muc-(n,c)
SE scheme and let A be an adversary for game GSE . Then we construct an adversary
B for game GAE-MB -mu-m such that
SE
AE-MB-muc-(n,c) -mu-m (B) .
AdvSE (A) ≤ e(c + 1) · AdvAE-MB
SE
Adversary B makes, to any oracle in its game, the same number of queries as A made. The running
time of B is about that of A plus the time for an execution of the sampler D-FX.
26
CCA-SB-muc-(n,c)
CCA-SB[KEM, h](gs): Game GKEM
1 b ← $ {0, 1} ; Return (ε, b) Init:
1 h ← $ Sch.ROS ; b ← $ {0, 1}
CCA-SB[KEM, h](init, (ε, b)):
2 For i = 1, . . . , n do
2 (ek, dk) ← $ KEM.Kg[h]
3 (ek i , dk i ) ← $ KEM.Kg[h]
3 St.ek ← ek ; St.dk ← dk
4 Return (ek 1 , . . . , ek n )
4 St.corr ← false ; St.b ← b
5 Return (ek, St) Oracle(enc, i): // i ∈ [1..n] \ CS
5 (C, K0 ) ← $ KEM.Encaps[h](ek i )
CCA-SB[KEM, h](enc, ε, St):
6 K1 ← $ {0, 1}KEM.kl
6 If St.corr then return ⊥
7 Si ← Si ∪ {C} ; Return (C, Kb )
7 (C, K0 ) ← $ KEM.Encaps[h](St.ek)
Oracle(dec, (i, C)): // i ∈ [1..n], C ∈
/ Si
8 K1 ← $ {0, 1}KEM.kl ; St.S ← St.S ∪ {C}
8 K ← KEM.Decaps[h](dk i , C)
9 Return ((C, KSt.b ), St)
9 Return K
CCA-SB[KEM, h](dec, C, St):
Corrupt(i): // i ∈ [1..n]
10 If C ∈ St.S then return ⊥ 10 If Si ̸= ∅ then return ⊥
11 K ← KEM.Decaps[h](St.dk, C) 11 CS ← CS ∪ {i} ; Return dk i
12 Return (K, St)
RO(x):
CCA-SB[KEM, h](corr, ε, St): 12 h ← h(x) ; Return h
13 If St.S ̸= ∅ then return ⊥ Fin(b′ ):
14 St.corr ← true ; Return (St.dk, St) 13 Return Jb = b′ K
Figure 12: Left: Formal security specification CCA-SB[KEM, h] capturing security with a single
CCA-SB-muc-(n,c)
challenge bit. Right: Game GKEM describing the execution of CCA-SB[KEM, h] in the
muc setting.
muc setting [LLP20,HLG23,HLWG23]. As for signature schemes, we can however observe that the
latter schemes are less efficient in terms of size (of public keys and ciphertexts) and computation
complexity than canonical schemes.
In Figure 12, we define the FSS CCA-SB for KEM schemes capturing indistinguishability under
chosen-ciphertext attacks in the SB setting and its muc game. (The one for PKE is similar and
provided in Figure 27 in Appendix B.) The game setup chooses a global challenge bit (stored as
oracle secret os); thus the game is not local. This also requires to restrict queries to the challenge
and corruption oracle, otherwise the adversary can trivially learn the challenge bit.
The main motivation to study security in the presence of adaptive corruptions for KEMs is
that they are of high importance in practice. Here, the SB setting is particularly interesting for
composition using the KEM/DEM paradigm [CS03]. While it has already been studied in the mu
setting without corruptions [Zav12,GKP18], the result can be trivially extended to the muc setting
using a CCA-SB-muc secure KEM. (This composition result was recently studied for simulation-
based security definitions [Jae23].) For these reasons, we now aim to establish CCA-SB-muc security
with improved tightness for practical encryption schemes.
Hashed ElGamal. On the left-hand side of Figure 13, we describe the KEM underlying the
hashed ElGamal encryption scheme, which we denote by HEG. It is known that HEG can be proven
tightly mu secure under the strong computational Diffie-Hellman (St-CDH) assumption [ABR01]
which we define in terms of game GSt -CDH
(G,p,g) on the right-hand side of Figure 13. We denote the
advantage of an adversary A in the game by AdvSt-CDH (A). For muc security, we only know the
(G,p,g)
27
HEG.Kg: GSt-CDH
(G,p,g)
x
1 x ← $ Zp ; X ← g Init:
2 Return (ek, dk) ← (X, (X, x)) 1 x, y ← $ Zp
HEG.Encaps[H](ek): 2 Return (g x , g y )
Figure 13: Left: Hashed ElGamal KEM HEG for a group (G, p, g), where HEG.ROS is the set of all
H : {0, 1}∗ → {0, 1}HEG.kl . Right: Game for the strong computational Diffie-Hellman assumption
in group (G, p, g).
generic result with a security loss in the number of users. Thus, we ask whether we can do better
using our technique and we provide an affirmative answer.
Theorem 5.5 Let HEG be the KEM defined in Figure 13. Let n, c be integers such that 0 ≤ c < n.
CCA-SB-muc-(n,c)
Let A be an adversary for game GHEG . Then we construct an adversary B for game
GSt -CDH such that
(G,p,g)
CCA-SB-muc-(n,c) -CDH
AdvHEG (A) ≤ e(c + 1) · AdvSt
(G,p,g) (B) .
Let qro be the number of random oracle queries A makes. Then B makes at most qro queries
to oracle Ddh. The running time of B is about that of A plus the time for an execution of the
sampler D-FX, re-randomization of group elements and maintaining lists of encryption, decryption
and random oracle queries.
Proof: We modularize the proof as illustrated on the left of Figure 14, using the multi-user strong
asymmetric CDH assumption with corruptions as introduced in [KPRR23] for which we can apply
Theorem 4.2. We further use the fact that without corruptions, this assumption is tightly equivalent
to the standard St-CDH assumption (cf. Figure 13). Finally, we show that its muc variant implies
muc security of HEG.
Multi-User Strong Asymmetric CDH. In Figure 15 we define the asymmetric multi-user ver-
sion of CDH with corruptions as a formal security specification k-St-ACDH[(G, p, g)] for an integer
k ≥ 1. The game requires to compute the Diffie-Hellman share of two group elements which can
be chosen from two (distinct) sets of group elements. Asymmetric means that exponents of group
elements of only one of the sets can be revealed. We can view these group elements as the users
in the game. The other set of k elements is generated during game setup and part of the public
parameters pp. As in GSt -CDH
(G,p,g) , the game provides Oracle(ddh, ·) which can be queried for each
user.
Note that without corruptions (i. e., game Gk-St-ACDH-mu-n ), the game is essentially a multi-user
(G,p,g)
version of St-CDH and known to be tightly equivalent to single-user St-CDH which is the same as
1-St-ACDH.1 The following lemma combines this fact with Theorem 4.2.
1
More precisely, showing that St-CDH implies n-user St-CDH loses a factor of 2, cf. Lemma 9 in the full version
of [KPRR23]. However, the asymmetry in k-St-ACDH-mu avoids this factor since the final output is restricted to a
smaller set of solutions.
28
HEG CCA-SB-muc KEM CCA-SB-muc
Figure 14: Achieving CCA-SB-muc security for KEM: We prove security for the hashed
ElGamal KEM (top left) based on the St-CDH assumption (bottom left) via the intermediate
St-ACDH-mu(c) assumption. We also lift the modular analysis of the FO transform [HHK17] to the
muc setting showing CCA-SB-muc security for a KEM (top right) based on a CPA-SB-mu secure
PKE (bottom right) via the intermediate notion of OW-PCVA-mu(c) security. Solid arrows denote
tight transformations (in the ROM), dashed arrows apply our general cp-muc theorem with a loss
of c.
k-St-ACDH-muc-(n,c)
k-St-ACDH[(G, p, g)](gs): Game G(G,p,g)
1 For j = 1, . . . , k do Init:
2 yj ← $ Zp 1 For i = 1, . . . , n do
3 Y ← [g y1 , . . . , g yk ] 2 xi ← $ Zp ; Xi ← g xi
4 Return (pp, os) ← (Y, ε) 3 For j = 1, . . . , k do
Figure 15: Left: Formal security specification k-St-ACDH[(G, p, g)] for the strong asymmetric CDH
k-St-ACDH-muc-(n,c)
problem in (G, p, g), where k ≥ 1 is an integer. Right: Game G(G,p,g) describing the
execution of k-St-ACDH[(G, p, g)] in the muc setting.
29
k-St-ACDH-muc-(n,c)
Lemma 5.6 Let n, k ≥ 1 and c ≥ 0 such that c < n. Let A be an adversary for game G(G,p,g) .
Then we construct an adversary B for game GSt -CDH
(G,p,g) such that
Advk-St-ACDH-muc-n (A) ≤ e(c + 1) · AdvSt-CDH (B) .
(G,p,g) (G,p,g)
Adversary B makes, to any oracle in its game, the same number of queries as A made. The
running time of B is about that of A plus the time for an execution of the sampler D-FX and
re-randomization of group elements.
It remains to show that St-ACDH-muc tightly implies the security of HEG. We capture this in the
following lemma.
Lemma 5.7 Let HEG be the KEM defined in Figure 13. Let n, c be integers such that 0 ≤ c < n.
CCA-SB-muc-(n,c)
Let A be an adversary for game GHEG that issues at most k queries to Oracle(enc, ·).
k-St-ACDH-muc-(n,c)
Then we construct an adversary B for game G(G,p,g) such that
CCA-SB-muc-(n,c) k-St-ACDH-muc-(n,c)
AdvHEG (A) ≤ Adv(G,p,g) (B) .
Adversary B makes the same number of corruption queries as A makes. Let qro be the number
of random oracle queries A makes. Then B makes at most qro queries to Oracle(ddh, ·). The
running time of B is about that of A plus the time for maintaining lists of decryption and random
oracle queries.
Proof: We construct an adversary B as described in Figure 16. It runs adversary A and simulates
CCA-SB-muc-(n,c)
game GHEG as follows.
To initialize the game, B queries its own Init oracle and receives a list of n + k group elements,
where it outputs the first n group elements as public keys. It uses a counter j for the remaining
k group elements. On the j-th query to Oracle(enc, i), B will use the j-th group element as
ciphertext C. If a user i was corrupted, B does not respond to encryption queries as specified by
the game. Since B does not know the exponent to C, it cannot compute the key K using RO.
Instead, it chooses K uniformly at random from {0, 1}KEM.kl and stores (C, K) in a set Si . Note
that as long as RO is not queried on the respective input, this simulation is perfect.
For queries to Oracle(dec, (i, C)), B keeps an additional set DS which stores tuples of the form
(ek i , C, K). If a query is repeated, the same key is given as output. If a new ciphertext is queried, B
chooses a random key K and stores it in DS. In order to be consistent with random oracle queries,
we will also add entries there, which we will describe below. Corruption queries, if allowed, are
simply forwarded.
It remains to describe the simulation of RO. Queries are stored in a list HS and repeating queries
are answered consistently. If ek belongs to one of the users in the game, B queries Oracle(ddh, ·)
on (i, C, Z) provided by A. If (ek i , C, Z) is a valid DDH tuple, where C was previously output by
a query to enc, then (since corruption and encryption queries are not allowed for the same user)
B stops the execution of A and queries Fin. If C was previously queried to Oracle(dec, ·), B
patches the random oracle using list DS. Otherwise, it adds a new entry. Note that until now we
were looking at the case that the query forms a valid DDH tuple. If this is not the case, we only
add an entry to HS and output a fresh K.
30
Adversary B
▷ Run A, responding to its oracle queries as follows:
CCA-SB-muc-(n,c)
GHEG .Init:
k-St-ACDH-muc-(n,c)
1 (X1 , . . . , Xn , Y1 , . . . , Yn ) ← G(G,p,g) .Init
2 j ← 0 ; Return (ek 1 , . . . , ek n ) ← (X1 , . . . , Xn )
CCA-SB-muc-(n,c)
GHEG .Oracle(enc, i):
3 If i ∈ CS then return ⊥
4 j ← j + 1 ; C ← Yj ; K ← $ {0, 1}KEM.kl
5 Si ← Si ∪ {(C, K)} ; Return (C, K)
CCA-SB-muc-(n,c)
GHEG .Oracle(dec, (i, C)):
6 If (C, ·) ∈ Si then return ⊥
7 If ∃K s. t. (ek i , C, K) ∈ DS then return K
8 K ← $ {0, 1}KEM.kl ; DS ← DS ∪ {(ek i , C, K)} ; Return K
CCA-SB-muc-(n,c)
GHEG .Corrupt(i):
9 If Si ̸= ∅ then return ⊥
k-St-ACDH-muc-(n,c)
10 CS ← CS ∪ {i} ; Return dk i ← G(G,p,g) .Corrupt(i)
CCA-SB-muc-(n,c)
GHEG .RO(ek, C, Z):
11 If ∃K s. t. (ek, C, Z, K) ∈ HS then return K
12 K ← $ {0, 1}KEM.kl
k-St-ACDH-muc-(n,c)
13 If ∃i s. t. ek = ek i and G(G,p,g) .Oracle(ddh, (i, C, Z)) = true:
∗ k-St-ACDH-muc-(n,c)
14 If ∃j s. t. C = Yj ∗ then stop and return G(G,p,g) .Fin(i, j ∗ , Z)
15 If ∃K ′ s. t. (ek, C, K ′ ) ∈ DS then K ← K ′
16 Else: DS ← DS ∪ {(ek, C, K)}
17 HS ← HS ∪ {(ek, M, C, K)} ; Return K
CCA-SB-muc-(n,c)
GHEG .Fin(i, b):
k-St-ACDH-muc-(n,c)
18 G(G,p,g) .Fin(1, ⊥, ⊥)
In order to win, A must query the random oracle on a valid pair of (ek i , C, Z), otherwise K is uni-
k-St-ACDH-muc-(n,c)
formly random in any case. Further, if A issues such a query, B can win game G(G,p,g) ,
which concludes the proof.
31
PKE.Enc[G](ek, M ): KEM.Encaps[G, H](ek):
1 C ← $ PKE.Enc(ek, M ; G(ek, M )) 7 M ← $ PKE.MS
2 Return C 8 C ← $ PKE.Enc[G](ek, M )
PKE.Dec[G](dk, c): 9 K ← H(ek, M, C) ; Return (C, K)
Figure 17: Left: PKE scheme PKE = T[PKE] obtained from the T-Transform, where PKE.ROS
is the set of all G : {0, 1}∗ → {0, 1}PKE.rl and PKE.Kg := PKE.Kg. Right: KEM scheme KEM =
U[PKE, s] obtained from the U-Transform, where KEM.ROS is the set of all (G, H) such that G ∈
PKE.ROS and H : {0, 1}∗ → {0, 1}s . Further, KEM.Kg := PKE.Kg.
for every (ek, dk) ∈ Out(PKE.Kg) and every M ∈ PKE.MS, where the probability is taken over the
coins of PKE.Enc.
Theorem 5.8 Let PKE be γ-spread and KEM = U[T[PKE], s] the KEM scheme obtained from
applying the modular FO transform as described in Figure 17. Let n, c be integers such that 0 ≤
CCA-SB-muc-(n,c)
c < n, and let m = ⌊(n − 1)/(c + 1)⌋. Let A be an adversary for game GKEM that issues
at most qe queries to Oracle(enc, ·), qd queries to Oracle(dec, ·) and qro queries to both random
oracles. Then we construct an adversary B for game GCPA-SB -mu-m such that
PKE
CCA-SB-muc-(n,c)
AdvKEM (A) ≤ 2e(c + 1) · AdvCPA-SB-mu-m (B) + q · 2−γ + 2nqe qro + 1 .
PKE d
|PKE.MS|
Adversary B makes the same number of encryption queries as A makes. The running time of B
is about that of A plus the time for maintaining lists of encryption, decryption and random oracle
queries and the time for an execution of the sampler D-FX.
We provide the full proof in Appendix B. Starting with a CPA-SB-mu secure scheme PKE, step
(1) applies transform T to obtain a OW-PCVA-mu secure scheme PKE. In step (2), we apply
Theorem 5.3 for OW-PCVA. Finally, in step (3), we apply the second part of the transform,
namely, we take the OW-PCVA-muc secure scheme and use transform U to obtain the CCA-SB-muc
scheme KEM. We want to note that Jaeger [Jae23] analyzes transform U in a simulation-based
setting with corruptions, focusing on step (3) only.
Corruption-parameterized security for AKE. Security models for authenticated key ex-
change (AKE) protocols (e. g., [BR94]) allow for corruptions of long-term secret keys which makes
the construction of tightly-secure AKE [BHJ+ 15, GJ18, LLGW20, JKRS21, HJK+ 21, PWZ23] non-
trivial. All these works require some kind of muc security for their building blocks. As for signatures
and encryption, we propose to study AKE with the number of corruptions being an additional ad-
versary resource. This allows us to give improved tightness results for many AKE protocols.
In [CCG+ 19] Cohn-Gordon et al. analyze very efficient Diffie-Hellman based AKE protocols.
We denote their main protocol by CCGJJ which is proven secure based on St-CDH with a loss linear
in the number of users. They also show that this loss is optimal for their and similar DH-based
protocols. By considering cp security, however, we can improve the previous bound to the number
of corruptions. For this we make use of the analysis in [KPRR23] which proves tight security of the
protocol based on the variant of St-CDH which we used for HEG. The improved bound for CCGJJ
follows from combining Theorem 5.6 with [KPRR23, Thm. 3].
32
A common approach to generically construct AKE protocols is to use signatures for authenti-
cation, as for example in TLS. This means that long-term keys are now signing keys of a signature
scheme. When aiming for tight security proofs, we thus require UF-muc security. We want to use
our direct results for signatures here. Combining it with the results in [GJ18,DJ21,HJK+ 21,DG21],
we get AKE with forward secrecy and with a security loss linear in the number of corruptions, re-
quiring a UF-mu secure signature scheme. The other assumptions made remain untouched and
their bound is already tight.
Improved bounds for selective opening security. In selective opening security, the adver-
sary is given multiple PKE ciphertexts and is allowed to reveal not only the encrypted messages,
but also the encryption randomness. We then want non-revealed ciphertexts to remain secure. In
this setting, cp security concerns the number of opened ciphertexts. The security notion we are in-
terested in is simulation-based selective opening security (in the following denoted by SIM-SO-CCA)
which is considered the strongest notion of security [BHK12, BDWY12].
While our framework is designed for game-based rather than simulation-based security, we show
how to leverage our framework to improve the bounds of the schemes considered in [HJKS15]. In
particular, we show that their Diffie-Hellman based PKE scheme can be proven secure assuming
St-CDH with a loss linear in the number of opened ciphertexts rather than the total number of
ciphertexts. The scheme is conceptually simpler than the scheme of [PZ22] (which has full tightness)
and allows to save one group element in the ciphertext. Further, we show that the improved bound
also applies to the RSA-based variant of the scheme. We provide formal definitions and theorem
statements in Appendix C.
6 Optimality results
We want to show that the loss of c is indeed optimal for a large class of schemes and games. For
this, we use the framework of Bader, Jager, Li and Schäge [BJLS16]. We adapt their definitions to
our framework of formal security specifications.
Re-randomizable relations. A relation Rel specifies a generation algorithm Rel.Gen that via
(x, ω) ← $ Rel.Gen outputs an instance x and witness ω. The (possibly randomized) verification
algorithm d ← $ Rel.Vf(x, ω ′ ) returns a boolean decision d as to whether ω ′ is a valid witness for x.
Correctness asks that Pr[Rel.Vf(Rel.Gen)] = 1, where the probability is over the coins of Rel.Gen
(and possibly Rel.Vf). Let Rel.WS(x) = { ω : Rel.Vf(x, ω) = true } be the witness set of x. We say
an algorithm Rel.ReRand is a re-randomizer for Rel if on input an instance x and a witness ω such
that Rel.Vf(ω, x) = true, it outputs a witness ω ′ which is uniformly distributed over Rel.WS(x) with
probability 1.
We define a witness recovery game for a scheme Rel via FFS REC. Its formal description and
REC-muc-(n,c)
muc game GRel are given in Figure 18. The adversary in this game gets n statements and
may ask for witnesses of c of them. Its task is then to compute a witness for an “uncorrupted”
statement.
Simple adversaries and (black-box) reductions. Our optimality results will involve a spe-
cial class of “simple” adversaries issuing all corruptions at once. More precisely, a (n, c)-simple
adversary for relation Rel is a randomized and stateful algorithm sA, which induces an adversary
REC-muc-(n,c)
A = A[sA] for GRel as described on the left-hand side of Figure 19.
We also consider a class of natural black-box reductions that transform an (n, c)-simple ad-
REC-muc-(n,c)
versary sA for Rel into an adversary for a game G. Hence, in the following, GRel will
play the role of the security game of the considered scheme, whereas G will be associated with
33
REC-muc-(n,c)
REC[Rel, h](init, (ε, ε)): Game GRel
1 (x, ω) ← $ Rel.Gen Init:
2 (St.x, St.wit) ← (x, ω) 1 h ← $ Rel.ROS
Fin(i, ω ∗ ): // i ∈ [1..n]
7 If i ∈ CS then return false
8 Return Rel.Vf(ω ∗ , xi )
Figure 18: Left: Formal security specification REC[Rel, h] defining a witness recovery game for Rel.
REC-muc-(n,c)
The global setup is trivial. Right: Game GRel describing the execution of REC[Rel, h] in
the multi-user setting with corruptions.
Figure 19: Left: Adversary A[sA] induced by an (n, c)-simple adversary sA for Rel. Right: Ad-
versary R[sR, sA] defined by an (n, c, r)-simple reduction sR.
REC-muc-(n,c)
some underlying assumption. More formally, an (n, c, r)-simple (G → GRel )-reduction
sR is a stateful and randomized algorithm which, for every (n, c)-simple adversary sA, induces the
adversary R = R[sR, sA] described on the right of Figure 19. Crucially, we require R to be a
valid adversary for game G. In particular, sR can make calls to the oracles provided by G, except
for calling its Fin procedure. Here, we do not allow the reduction to choose the random tape of
the adversary, but note that it is easy to incorporate this into our formalization (and to extend
Theorem 6.1 to take this into account) via standard techniques.2
Stateless games. We prove our first tightness result, showing that any (n, c, r)-simple (G →
REC-muc-(n,c)
GRel )-reduction sR must incur an advantage loss of roughly a factor c + 1 for any re-
randomizable relation Rel whenever the game G is stateless. A stateless game is one for which the
2
More specifically, sR in Figure 19 outputs the random coins to be used as input to sA. In the proof, we then
make the meta-adversary deterministic, and use an efficient and secure pseudorandom function, with a hard coded
key, to efficiently simulate the random choices based on its input.
34
Algorithm sA((x1 , . . . , xn ), ϵ): Algorithm sA((ωi )i∈[c+1]\{i∗ } , st = (x1 , . . . , xn , i∗ ):
∗
1 i ← $ [c + 1] 1 If ∃i ∈ [c + 1] \ {i∗ }: Rel.Vf(ωi , xi ) = false then
2 st ← (x1 , . . . , xn , i∗ ) 2 Return (i∗ , ε)
∗
3 Return ([c + 1] \ {i }, st) 3 Else
4 ω ∗ ← $ Rel.WS(xi∗ )
5 Return (i∗ , ω ∗ )
Figure 20: Hypothetical adversary in the proof of Theorem 6.1. The first and the second stage are
given on the left-hand and right-hand sides, respectively.
Init procedure sets some global variables, and queries to all game oracles never update these global
variables, and the query output only depends on the global variables and the query input.
Our result will not require the game itself to be efficiently implementable, and often a game
can have both an inefficient stateless version, as well as an equivalent efficient stateful version. For
example, the efficient realization of the game defining PRF security would proceed by lazy sampling
a random function, whereas an equivalent stateless version would initially sample a whole random
function table. This is an example of game for which our result applies. Other games, such as those
modeling the unforgeability of signatures under chosen message attacks, are inherently stateful, as
the validity of the final forgery depends on previously issued signing queries.
Here, we will consider search games G (rather than distinguishing games) as the starting point
for our reductions, and thus we associate an advantage metric AdvG (A) = Pr [G(A)]. However, it
is straightforward to extend our result to distinguishing games.
Tightness result. The proof of the following theorem generalizes that of Bader et al. [BJLS16]
by extending it to stateless games and to a setting with c out of n corruptions.
Theorem 6.1 Let n, c be integers such that 0 ≤ c < n. Let G be a stateless game and let Rel
be a relation with re-randomizer ReRand. There exists an (n, c)-simple adversary sA such that
REC-muc-(n,c) REC-muc-(n,c)
AdvRel (A[sA]) = 1, and such that for any (n, c, r)-simple (G → GRel )-reduction
sR, there exists an adversary B for G such that
r
AdvG (B) ≥ AdvG (R) −
c+1
for R = R[sR, sA]. The number of queries B makes to the oracles in G is at most c times the
number of queries sR makes. Further, the running time of B is at most r · (c + 1) times the running
time of sR plus running the relation’s verification algorithm rc · (c + 1) times plus running the
re-randomizer once.
Proof: We start by giving an inefficient hypothetical simple adversary sA which operates as de-
scribed in Figure 20. The adversary initially selects the index i∗ of one of the first c + 1 users it
does not corrupt, and then learns all remaining c witnesses. (Overall, thefore, it corrupts exactly c
users.) Then, the adversary checks it received indeed valid witnesses for all j ∈ [c + 1] \ {i∗ }, and
if so, it outputs a random witness for xi∗ . (The latter step is of course in general inefficient.)
The rest of the proof will culminate in a so-called “meta-adversary” B which will efficiently simulate
the adversary R = R[sR, sA], up to some small error probability. We will get to B by providing a
sequence of three adversaries B0 , B1 , B2 for game G, where B0 = R and B2 is the meta-adversary
B. We describe them in Figure 21. In these descriptions, and the analysis, we also assume that sR
is a deterministic algorithm. (It is easy to drop this assumption by having the initial state of sR
by a sufficiently long random tape, instead of ε.)
35
Adversary B0 :
1 stsR ← ε
2 For k = 1, . . . , r do
3 (x1 , . . . , xn , stsR ) ← $ sR(stsR )
4 i∗ ← $ [c + 1]
5 ((ωj )j∈[c+1]\{i∗ } , stsR,i∗ ) ← $ sR([c + 1] \ {i∗ }, stsR )
6 If ∃j ∈ [c + 1] \ {i∗ }: Rel.Vf(xj , ωj ) = false then ω ∗ ← ε
7 Else ω ∗ ← $ Rel.WS(xi∗ )
8 stsR ← sR(i∗ , ω ∗ , stsR,i∗ )
9 win ← G.Fin(stsR )
Adversary B1 , B2 :
1 stsR ← ε
2 For k = 1, . . . , r do
3 For i = 1, . . . , c + 1 do W k [i] ← ⊥
4 (x1 , . . . , xn , stsR ) ← sR(stsR )
5 i∗ ← $ [c + 1]
6 For i = 1, . . . , c + 1 do
7 ((ωi,j )j∈[c+1]\{i} , stsR,i ) ← sR([c + 1] \ {i}, stsR )
8 If ∀j ∈ [c + 1] \ {i}: Rel.Vf(xj , ωi,j ) = true then
9 For all j ∈ [c + 1] \ {i} do W k [j] ← ωi,j
10 If ∃j ∈ [c + 1] \ {i∗ }: Rel.Vf(xj , ωi∗ ,j ) = false then ω ∗ ← ε
11 Else
12 If W k [i∗ ] = ⊥ then
13 bad ← true; abort
14 ω ∗ ← $ Rel.WS(xi∗ )
15 ω ∗ ← $ Rel.ReRand(xi∗ , W k [i∗ ])
16 stsR ← sR(i∗ , ω ∗ , stsR,i∗ )
17 win ← G.Fin(stsR )
Figure 21: The sequence of adversaries B0 , B1 and B2 . Boxed statements are only executed by
either of B1 or B2 .
By inspection, it is clear that B0 = R. The main difference between B0 and B1 is that in the latter,
after i∗ is sampled in an iteration, the reduction is run, on the same state stsR , for every possible
choice of the corruption sets [c + 1] \ {i}, and not only on [c + 1] \ {i∗ }. Further, the adversary also
keeps track of the witnesses output by sR when run on each i, as long as they are all indeed correct.
This is done by keeping an array W k in the k-th iteration, so that W k [i] is always set to ⊥ or to a
valid witness for xi . Finally, B1 sets a flag bad whenever all witnesses output by the reduction for
the case i = i∗ are indeed correct, yet W k [i∗ ] has not been set.
Note that sR can make calls to G’s procedures, but because the game is stateless, these calls do
not affect the overall state of G, and the behavior of B1 is only affected by the state stsR,i∗ output
for the case i = i∗ . Therefore, these additional executions to sR do not modify the behavior with
respect to B0 . Thus,
AdvG (R) = Pr [G(B0 )] = Pr [G(B1 )] . (34)
Moving to the final adversary B2 , we make two modifications. One is that if the flag bad is set
to true, the adversary actually aborts. Further, instead of sampling ω ∗ (inefficiently) from the
witness set Rel.WS(xi∗ ), we sample from Rel.ReRand(xi∗ , W k [i∗ ]). The latter is identical, because
the instruction is only executed if B2 has not previously aborted, and thus W k [i∗ ] is set to a valid
36
witness of xi∗ . Therefore, G(B1 ) and G(B2 ) are identical until bad is set to true, and thus
Pr [G(B1 )] − Pr [G(B2 )] ≤ Pr [G(B2 ) sets bad] . (35)
Further, as we set B = B2 , we have Pr [G(B2 )] = AdvG (B). By (34) and (35) we have
AdvG (R) = Pr [G(B1 )] − Pr [G(B2 )] + Pr [G(B2 )]
(36)
≤ Pr [G(B2 ) sets bad] + AdvG (B) .
To upper bound Pr [G(B2 ) sets bad], in a particular iteration k, for each i ∈ [c + 1], define the event
BadRunk (i) as the event that there exists j ∈ [c + 1] \ {i} such that ωi,j is invalid. We also let,
for ℓ ∈ [c + 1],
^
BadPickk (ℓ) = ¬BadRunk (ℓ) ∧ BadRunk (i)
i∈[c+1]\{ℓ}
Note that the events BadRunk (i) are all determined by the value stsR when entering the For loop
on Line 6. We observe that bad is set to true in a particular iteration k ∈ {1, . . . , r} if and only if
BadPickk (i∗ ) happens. However, note that BadPickk (ℓ) can only happen for at most a single ℓ.
Thus, by the union bound,
r h i r
Pr BadPickk (i∗ ) ≤
X
Pr [G(B2 ) sets bad] ≤ .
k=1
c+1
This concludes the proof.
Interpretation of the above result. To explain optimality, let us define tightness more for-
mally. Let sA be an (n, c)-simple adversary for G running in time tA = tsA with advantage εA =
REC-muc-(n,c) REC-muc-(n,c)
AdvRel (A[sA]). Following [BJLS16], we say that an (n, c, r)-simple (G → GRel )-
reduction sR running in time tsR with advantage εR = AdvG (R[sR, sA]) loses a factor ℓ if tR /εR ≥
ℓ · tA /εA , where tR = tsR + r · tsA .
Now we want to look closer at Theorem 6.1. Applying the bound gives us
−1
tR tsR + r · tsA r · tsA r tsA r(c + 1) tA
= ≥ = r εB + · = · .
εR εR εB + r/(c + 1) c+1 1 εB (c + 1) + r εA
In the last step we use that the reduction must work for any (n, c)-simple adversary and, in par-
ticular, for the adversary sA with advantage 1 that we construct in the proof. We conclude that if
εB is small, then ℓ ≈ c + 1 and R must lose a factor c + 1.
Example 1: CPA-secure KEM. The above theorem naturally covers non-interactive complexity
assumptions as considered by [BJLS16] which are, by definition, stateless. As a more interesting ex-
ample, the result also applies when using GCPA-SB -mu-n for KEM schemes as a starting game G and
KEM
a suitable relation, Rel[KEM], which we provide in Figure 22. That is, if there exists an efficient re-
randomizer Rel[KEM].ReRand, then any (n, c, r)-simple (GCPA-SB -mu-n → GREC-muc-(n,c) )-reduction
KEM Rel[KEM]
must lose a factor c + 1.
All schemes for which the above relation is unique, i. e., there exists exactly one decryption key
such that the relation holds, have a trivial efficient re-randomizer. An example is the ElGamal
KEM.
In order to conclude that the optimality result also holds when the target game is the cor-
responding muc game (rather than the relation game), we describe in Figure 22 how to trans-
REC-muc-(n,c)
form any simple adversary sA for GRel[KEM] into an equivalent adversary A′ [sA] for game
CPA-SB-muc-(n,c) CPA-SB-mu-n
GKEM . Following [BJLS16], the existence of a black-box reduction from GKEM
37
Rel[KEM].Gen: Adversary A′ [sA]:
1 (ek, dk) ← $ KEM.Kg CPA-SB-muc-(n,c)
1 (ek 1 , . . . , ek n ) ← $ GKEM .Init
2 Return (x, ω) ← (ek, dk) 2 st ← ε
Rel[KEM].Vf(ek, dk): 3 (CS, st) ← $ sA(ek 1 , . . . , ek n , st)
3 (C, K) ← KEM.Encaps(ek)
$ 4 If |CS| > c then abort
4 Return JKEM.Decaps(dk, C) = KK 5 For all i ∈ CS do
CPA-SB-muc-(n,c)
6 dk i ← GKEM .Corrupt(i)
7 (i∗ , dk ∗ ) ← $ sA((dk i )i∈CS , st)
CPA-SB-muc-(n,c)
8 (C, K) ← GKEM .Oracle(enc, i∗ )
′
9 K ← KEM.Decaps(dk i∗ , C)
10 b′ ← JK ′ ̸= KK
CPA-SB-muc-(n,c)
11 win ← GKEM .Fin(i∗ , b′ )
Figure 22: Left: Relation Rel[KEM] for a KEM scheme KEM. Right: The adversary A′ [sA] induced
by an (n, c)-simple adversary sA for relation Rel[KEM].
CPA-SB-muc-(n,c)
to GKEM , rewinding the given adversaries r times, implies the existence of an (n, c, r)-
CPA-SB
simple (GKEM -mu - n → GREC-muc-(n,c) )-reduction sR with the same loss. Roughly, this is because
Rel[KEM]
we can build sR which, when given access to sA, behaves exactly the same as the reduction using
A′ [sA]. As in [BJLS16], we do not make this argument fully formal, as it requires formalizing a
more general notion of black-box reduction.
38
Adversary B[Bmain , Bside ]: Algorithm Bmain (stsR , it):
1 st ← ε ; it ← 0 ; halt ← false 1 (x1 , . . . , xn , stsR ) ← $ sR(stsR )
2 While ¬halt do 2 i∗ ← $ [c + 1]
3 it ← it + 1 3 st′ ← (stsR , i∗ )
′
4 (st, st , halt) ← $ Bmain (st, it) 4 CS ← [c + 1] \ {i∗ }
′
5 v ← $ Bside (st , it) 5 ((ωj )j∈[c+1]\{i∗ } , stsR ) ← $ sR(CS, stsR )
6 win ← G.Fin(st) 6 If ∃j ∈ CS: Rel.Vf(xj , ωj ) = false then
7 ω∗ ← ε
8 Else
9 ω ∗ ← $ Rel.WS(xi∗ )
10 stsR ← sR(i∗ , ω ∗ , stsR )
11 If it < r then halt ← false else halt ← true
12 Return (stsR , st′ , halt)
Figure 23: Left: The adversary B[Bmain , Bside ] for G defined by the branching adversary (Bmain ,
Bside ). Right: The branching adversary used to generalize Theorem 6.1 to an arbitrary game G.
Advbranch
G (Bmain , Bside ) ≤ qs /2γ , (37)
where qs is a bound on the overall number of signatures obtained by all executions of Bside . To
prove (37), it is easy to verify that the advantage is upper bounded by the probability that one of
the executions of Bside makes a signing query which corresponds to the same user, message, and
signature used for the final forgery output by Bmain . This probability is upper bounded by qs /2γ ,
using the min-entropy of signatures and union bound over the total number of signing queries.
For signatures we specify a relation Rel[Sig] as shown on the left-hand side of Figure 24. That
is, if there exists an efficient re-randomizer Rel[Sig].ReRand, then any (n, c, r)-simple (GSigSUF-mu-n →
REC-muc-(n,c)
GRel[Sig] )-reduction must lose a factor c + 1. For completeness, and similar to Rel[KEM]
REC-muc-(n,c)
considered in Section 6, we show how to transform any simple adversary sA for GRel[Sig] into
′ SUF-muc -(n,c) ′
an equivalent adversary A [sA] for game GSig . Adversary A [sA] is given on the right-hand
side of Figure 24, which gives us the desired optimality result.
As a concrete example consider the Boneh-Boyen signature scheme [BB08] in bilinear groups
for which the relation Rel[Sig] is unique and thus efficiently re-randomizable. Further, signatures
have min-entropy log(p), where p is the size of the group.
39
Rel[Sig].Gen: Adversary A′ [sA]:
1 (vk, sk) ← $ Sig.Kg SUF-muc-(n,c)
1 (ek 1 , . . . , ek n ) ← $ GSig .Init
2 Return (x, ω) ← (vk, sk) 2 st ← ε
Rel[Sig].Vf(vk, sk): 3 (CS, st) ← $ sA(vk 1 , . . . , vk n , st)
ℓ
3 M ← $ {0, 1} ; σ ← $ Sig.Sign(sk, M ) 4 If |CS| > c then abort
Figure 24: Left: Relation Rel[Sig] for a signature scheme Sig, where we assume w.l.o.g. the message
space is finite, i. e., {0, 1}ℓ for some ℓ. Right: The adversary A′ [sA] induced by an (n, c)-simple
adversary sA for relation Rel[Sig].
Example 3: CCA-secure KEM. We can make a similar argument for KEM schemes. The CCA-
mu game is not stateless since it disallows to query challenge ciphertexts to the decryption oracle.
Thus, we need to bound the probability that either any such query happens. The idea is to also
use min-entropy (or γ-spreadness), similar to that defined for PKE in Section 5.2. Then, for all n,
with G ∈ {GCCA-SB -mu-n , GCCA-MB-mu-n }, and all B
PKE PKE main , Bside ,
Advbranch
G (Bmain , Bside ) ≤ (qe qd )/2γ , (38)
where qe and qd are the bound on the overall number of encryption resp. decryption queries made
in Bmain and all executions of Bside . To prove (38), observe that the advantage is upper bounded by
the probability that any execution of Bmain or Bside makes a decryption query which corresponds
to a previously received challenge ciphertext. Recall that ciphertexts have min-entropy γ and
those received in Bside do not influence stsR , meaning that a subsequent execution of Bmain or Bside
with stsR will make an invalid query can be upper bounded by γ-spreadness. The same argument
applies to Bside . The state given to Bside in the k-th iteration does not contain information about
the ciphertexts received in the k-th execution of Bmain . Union bound over all decryption queries
gives us the bound. Note that it does not matter here whether there is a single or multiple challenge
bits.
Note that the relation Rel[KEM] and adversary A′ [sA] from Figure 22 apply to CCA security as
well.
References
[ABR01] Michel Abdalla, Mihir Bellare, and Phillip Rogaway. The oracle Diffie-Hellman as-
sumptions and an analysis of DHIES. In David Naccache, editor, CT-RSA 2001,
volume 2020 of LNCS, pages 143–158. Springer, Heidelberg, April 2001. 7, 27
[BB08] Dan Boneh and Xavier Boyen. Short signatures without random oracles and the SDH
assumption in bilinear groups. Journal of Cryptology, 21(2):149–177, April 2008. 39
[BBM00] Mihir Bellare, Alexandra Boldyreva, and Silvio Micali. Public-key encryption in a
multi-user setting: Security proofs and improvements. In Bart Preneel, editor, EURO-
40
CRYPT 2000, volume 1807 of LNCS, pages 259–274. Springer, Heidelberg, May 2000.
3, 7, 23, 26
[BDK+ 18] Joppe Bos, Leo Ducas, Eike Kiltz, T Lepoint, Vadim Lyubashevsky, John M. Schanck,
Peter Schwabe, Gregor Seiler, and Damien Stehle. CRYSTALS - Kyber: A CCA-
secure module-lattice-based KEM. In 2018 IEEE European Symposium on Security
and Privacy (EuroS&P), pages 353–367, 2018. 31
[BDL+ 12] Daniel J Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang. High-
speed high-security signatures. Journal of cryptographic engineering, 2(2):77–89, 2012.
4
[BDWY12] Mihir Bellare, Rafael Dowsley, Brent Waters, and Scott Yilek. Standard security does
not imply security against selective-opening. In David Pointcheval and Thomas Jo-
hansson, editors, EUROCRYPT 2012, volume 7237 of LNCS, pages 645–662. Springer,
Heidelberg, April 2012. 8, 33
[Ber15] Daniel J. Bernstein. Multi-user Schnorr security, revisited. Cryptology ePrint Archive,
Report 2015/996, 2015. https://eprint.iacr.org/2015/996. 3, 4
[BHJ+ 15] Christoph Bader, Dennis Hofheinz, Tibor Jager, Eike Kiltz, and Yong Li. Tightly-
secure authenticated key exchange. In Yevgeniy Dodis and Jesper Buus Nielsen, edi-
tors, TCC 2015, Part I, volume 9014 of LNCS, pages 629–658. Springer, Heidelberg,
March 2015. 2, 5, 32
[BHK12] Florian Böhl, Dennis Hofheinz, and Daniel Kraschewski. On definitions of selective
opening security. In Marc Fischlin, Johannes Buchmann, and Mark Manulis, editors,
PKC 2012, volume 7293 of LNCS, pages 522–539. Springer, Heidelberg, May 2012. 8,
33
[BJLS16] Christoph Bader, Tibor Jager, Yong Li, and Sven Schäge. On the impossibility of tight
cryptographic reductions. In Marc Fischlin and Jean-Sébastien Coron, editors, EU-
ROCRYPT 2016, Part II, volume 9666 of LNCS, pages 273–304. Springer, Heidelberg,
May 2016. 2, 8, 33, 35, 37, 38
[BNN07] Mihir Bellare, Chanathip Namprempre, and Gregory Neven. Unrestricted aggregate
signatures. In Lars Arge, Christian Cachin, Tomasz Jurdzinski, and Andrzej Tarlecki,
editors, ICALP 2007, volume 4596 of LNCS, pages 411–422. Springer, Heidelberg, July
2007. 9
[BR94] Mihir Bellare and Phillip Rogaway. Entity authentication and key distribution. In Dou-
glas R. Stinson, editor, CRYPTO’93, volume 773 of LNCS, pages 232–249. Springer,
Heidelberg, August 1994. 7, 32
[BR96] Mihir Bellare and Phillip Rogaway. The exact security of digital signatures: How to
sign with RSA and Rabin. In Ueli M. Maurer, editor, EUROCRYPT’96, volume 1070
of LNCS, pages 399–416. Springer, Heidelberg, May 1996. 2
41
[BR06] Mihir Bellare and Phillip Rogaway. The security of triple encryption and a framework
for code-based game-playing proofs. In Serge Vaudenay, editor, EUROCRYPT 2006,
volume 4004 of LNCS, pages 409–426. Springer, Heidelberg, May / June 2006. 9, 15
[CCG+ 19] Katriel Cohn-Gordon, Cas Cremers, Kristian Gjøsteen, Håkon Jacobsen, and Tibor
Jager. Highly efficient key exchange protocols with optimal tightness. In Alexandra
Boldyreva and Daniele Micciancio, editors, CRYPTO 2019, Part III, volume 11694 of
LNCS, pages 767–797. Springer, Heidelberg, August 2019. 3, 7, 8, 32
[Cor00] Jean-Sébastien Coron. On the exact security of full domain hash. In Mihir Bellare,
editor, CRYPTO 2000, volume 1880 of LNCS, pages 229–235. Springer, Heidelberg,
August 2000. 2, 5, 8, 11, 46
[CS03] Ronald Cramer and Victor Shoup. Design and analysis of practical public-key en-
cryption schemes secure against adaptive chosen ciphertext attack. SIAM Journal on
Computing, 33(1):167–226, 2003. 27
[DDFY94] Alfredo De Santis, Yvo Desmedt, Yair Frankel, and Moti Yung. How to share a function
securely. In 26th ACM STOC, pages 522–533. ACM Press, May 1994. 3
[Den03] Alexander W. Dent. A designer’s guide to KEMs. In Kenneth G. Paterson, editor, 9th
IMA International Conference on Cryptography and Coding, volume 2898 of LNCS,
pages 133–151. Springer, Heidelberg, December 2003. 24
[DG21] Hannah Davis and Felix Günther. Tighter proofs for the SIGMA and TLS 1.3 key
exchange protocols. In Kazue Sako and Nils Ole Tippenhauer, editors, ACNS 21,
Part II, volume 12727 of LNCS, pages 448–479. Springer, Heidelberg, June 2021. 3, 4,
7, 33
[DGJL21] Denis Diemert, Kai Gellert, Tibor Jager, and Lin Lyu. More efficient digital signatures
with tight multi-user security. In Juan Garay, editor, PKC 2021, Part II, volume 12711
of LNCS, pages 1–31. Springer, Heidelberg, May 2021. 2, 5
[DHK+ 21] Julien Duman, Kathrin Hövelmanns, Eike Kiltz, Vadim Lyubashevsky, and Gregor
Seiler. Faster lattice-based KEMs via a generic fujisaki-okamoto transform using prefix
hashing. In Giovanni Vigna and Elaine Shi, editors, ACM CCS 2021, pages 2722–2737.
ACM Press, November 2021. 31, 49
[DJ21] Denis Diemert and Tibor Jager. On the tight security of TLS 1.3: Theoretically sound
cryptographic parameters for real-world deployments. Journal of Cryptology, 34(3):30,
July 2021. 3, 4, 7, 33
[Dra23] SSL Dragon. Top 12 revealing ssl stats you should know, May 2023. https://www.
ssldragon.com/blog/ssl-stats/. 3
[FO99] Eiichiro Fujisaki and Tatsuaki Okamoto. Secure integration of asymmetric and sym-
metric encryption schemes. In Michael J. Wiener, editor, CRYPTO’99, volume 1666
of LNCS, pages 537–554. Springer, Heidelberg, August 1999. 25, 31
[FO13] Eiichiro Fujisaki and Tatsuaki Okamoto. Secure integration of asymmetric and sym-
metric encryption schemes. Journal of Cryptology, 26(1):80–101, January 2013. 25,
31
42
[GGJJ23] Kai Gellert, Kristian Gjøsteen, Håkon Jacobsen, and Tibor Jager. On optimal tightness
for key exchange with full forward secrecy via key confirmation. In Helena Handschuh
and Anna Lysyanskaya, editors, CRYPTO 2023, Part IV, volume 14084 of LNCS,
pages 297–329. Springer, Heidelberg, August 2023. 8
[GHK17] Romain Gay, Dennis Hofheinz, and Lisa Kohl. Kurosawa-desmedt meets tight security.
In Jonathan Katz and Hovav Shacham, editors, CRYPTO 2017, Part III, volume 10403
of LNCS, pages 133–160. Springer, Heidelberg, August 2017. 26
[GHKW16] Romain Gay, Dennis Hofheinz, Eike Kiltz, and Hoeteck Wee. Tightly CCA-secure
encryption without pairings. In Marc Fischlin and Jean-Sébastien Coron, editors,
EUROCRYPT 2016, Part I, volume 9665 of LNCS, pages 1–27. Springer, Heidelberg,
May 2016. 26
[GJ18] Kristian Gjøsteen and Tibor Jager. Practical and tightly-secure digital signatures and
authenticated key exchange. In Hovav Shacham and Alexandra Boldyreva, editors,
CRYPTO 2018, Part II, volume 10992 of LNCS, pages 95–125. Springer, Heidelberg,
August 2018. 5, 7, 32, 33
[GJKW07] Eu-Jin Goh, Stanislaw Jarecki, Jonathan Katz, and Nan Wang. Efficient signature
schemes with tight reductions to the Diffie-Hellman problems. Journal of Cryptology,
20(4):493–514, October 2007. 3, 4
[GKP18] Federico Giacon, Eike Kiltz, and Bertram Poettering. Hybrid encryption in a multi-
user setting, revisited. In Michel Abdalla and Ricardo Dahab, editors, PKC 2018,
Part I, volume 10769 of LNCS, pages 159–189. Springer, Heidelberg, March 2018. 7,
27
[GMR88] Shafi Goldwasser, Silvio Micali, and Ronald L. Rivest. A digital signature scheme
secure against adaptive chosen-message attacks. SIAM Journal on Computing,
17(2):281–308, April 1988. 3
[GMW87] Oded Goldreich, Silvio Micali, and Avi Wigderson. How to play any mental game or A
completeness theorem for protocols with honest majority. In Alfred Aho, editor, 19th
ACM STOC, pages 218–229. ACM Press, May 1987. 3
[HHK17] Dennis Hofheinz, Kathrin Hövelmanns, and Eike Kiltz. A modular analysis of
the Fujisaki-Okamoto transformation. In Yael Kalai and Leonid Reyzin, editors,
TCC 2017, Part I, volume 10677 of LNCS, pages 341–371. Springer, Heidelberg,
November 2017. 7, 24, 29, 31
[HJ12] Dennis Hofheinz and Tibor Jager. Tightly secure signatures and public-key encryption.
In Reihaneh Safavi-Naini and Ran Canetti, editors, CRYPTO 2012, volume 7417 of
LNCS, pages 590–607. Springer, Heidelberg, August 2012. 3, 4, 26
[HJK+ 21] Shuai Han, Tibor Jager, Eike Kiltz, Shengli Liu, Jiaxin Pan, Doreen Riepel, and Sven
Schäge. Authenticated key exchange and signatures with tight security in the standard
model. In Tal Malkin and Chris Peikert, editors, CRYPTO 2021, Part IV, volume
12828 of LNCS, pages 670–700, Virtual Event, August 2021. Springer, Heidelberg. 2,
3, 5, 7, 32, 33
43
[HJKS15] Felix Heuer, Tibor Jager, Eike Kiltz, and Sven Schäge. On the selective opening secu-
rity of practical public-key encryption schemes. In Jonathan Katz, editor, PKC 2015,
volume 9020 of LNCS, pages 27–51. Springer, Heidelberg, March / April 2015. 8, 33,
51
[HLG21] Shuai Han, Shengli Liu, and Dawu Gu. Key encapsulation mechanism with tight
enhanced security in the multi-user setting: Impossibility result and optimal tightness.
In Mehdi Tibouchi and Huaxiong Wang, editors, ASIACRYPT 2021, Part II, volume
13091 of LNCS, pages 483–513. Springer, Heidelberg, December 2021. 8
[HLG23] Shuai Han, Shengli Liu, and Dawu Gu. Almost tight multi-user security under adaptive
corruptions & leakages in the standard model. In Carmit Hazay and Martijn Stam,
editors, EUROCRYPT 2023, Part III, volume 14006 of LNCS, pages 132–162. Springer,
Heidelberg, April 2023. 5, 27
[HLLG19] Shuai Han, Shengli Liu, Lin Lyu, and Dawu Gu. Tight leakage-resilient CCA-security
from quasi-adaptive hash proof system. In Alexandra Boldyreva and Daniele Miccian-
cio, editors, CRYPTO 2019, Part II, volume 11693 of LNCS, pages 417–447. Springer,
Heidelberg, August 2019. 26
[HLWG23] Shuai Han, Shengli Liu, Zhedong Wang, and Dawu Gu. Almost tight multi-user secu-
rity under adaptive corruptions from LWE in the standard model. In Helena Handschuh
and Anna Lysyanskaya, editors, CRYPTO 2023, Part V, volume 14085 of LNCS, pages
682–715. Springer, Heidelberg, August 2023. 27
[HS16] Goichiro Hanaoka and Jacob C. N. Schuldt. On signatures with tight security in the
multi-user setting. In 2016 International Symposium on Information Theory and Its
Applications (ISITA), pages 91–95, 2016. 3, 4
[HS21] Hans Heum and Martijn Stam. Tightness subtleties for multi-user PKE notions. In
Maura B. Paterson, editor, 18th IMA International Conference on Cryptography and
Coding, volume 13129 of LNCS, pages 75–104. Springer, Heidelberg, December 2021.
6, 23
[Jae23] Joseph Jaeger. Let attackers program ideal models: Modularity and composabil-
ity for adaptive compromise. In Carmit Hazay and Martijn Stam, editors, EURO-
CRYPT 2023, Part III, volume 14006 of LNCS, pages 101–131. Springer, Heidelberg,
April 2023. 27, 32
[JK22] Joseph Jaeger and Akshaya Kumar. Memory-tight multi-challenge security of public-
key encryption. In Shweta Agrawal and Dongdai Lin, editors, ASIACRYPT 2022,
Part III, volume 13793 of LNCS, pages 454–484. Springer, Heidelberg, December 2022.
49, 50
[JKRS21] Tibor Jager, Eike Kiltz, Doreen Riepel, and Sven Schäge. Tightly-secure authenticated
key exchange, revisited. In Anne Canteaut and François-Xavier Standaert, editors, EU-
ROCRYPT 2021, Part I, volume 12696 of LNCS, pages 117–146. Springer, Heidelberg,
October 2021. 32
[JL17] Simon Josefsson and Ilari Liusvaara. Edwards-curve digital signature algorithm (Ed-
DSA). RFC 8032, Jan. 2017. https://datatracker.ietf.org/doc/html/rfc8032. 4
[JSSOW17] Tibor Jager, Martijn Stam, Ryan Stanley-Oakes, and Bogdan Warinschi. Multi-key
authenticated encryption with corruptions: Reductions are lossy. In Yael Kalai and
Leonid Reyzin, editors, TCC 2017, Part I, volume 10677 of LNCS, pages 409–441.
Springer, Heidelberg, November 2017. 2, 8, 25
[KMP16] Eike Kiltz, Daniel Masny, and Jiaxin Pan. Optimal security proofs for signatures
from identification schemes. In Matthew Robshaw and Jonathan Katz, editors,
CRYPTO 2016, Part II, volume 9815 of LNCS, pages 33–61. Springer, Heidelberg,
August 2016. 3, 4
[KPRR23] Eike Kiltz, Jiaxin Pan, Doreen Riepel, and Magnus Ringerud. Multi-user CDH prob-
lems and the concrete security of NAXOS and HMQV. In Mike Rosulek, editor, CT-
RSA 2023, volume 13871 of LNCS, pages 645–671. Springer, Heidelberg, April 2023.
28, 29, 32
[Kra05] Hugo Krawczyk. HMQV: A high-performance secure Diffie-Hellman protocol. In Vic-
tor Shoup, editor, CRYPTO 2005, volume 3621 of LNCS, pages 546–566. Springer,
Heidelberg, August 2005. 7
[Lac18] Marie-Sarah Lacharité. Security of BLS and BGLS signatures in a multi-user setting.
Cryptography and Communications, 10(1):41–58, 2018. 3, 4
[LJYP14] Benoît Libert, Marc Joye, Moti Yung, and Thomas Peters. Concise multi-challenge
CCA-secure encryption and signatures with almost tight security. In Palash Sarkar
and Tetsu Iwata, editors, ASIACRYPT 2014, Part II, volume 8874 of LNCS, pages
1–21. Springer, Heidelberg, December 2014. 26
[LLGW20] Xiangyu Liu, Shengli Liu, Dawu Gu, and Jian Weng. Two-pass authenticated key
exchange with explicit authentication and tight security. In Shiho Moriai and Huaxiong
Wang, editors, ASIACRYPT 2020, Part II, volume 12492 of LNCS, pages 785–814.
Springer, Heidelberg, December 2020. 7, 32
[LLP20] Youngkyung Lee, Dong Hoon Lee, and Jong Hwan Park. Tightly CCA-secure encryp-
tion scheme in a multi-user setting with corruptions. DCC, 88(11):2433–2452, 2020.
27
[Mic23] Microsoft. Results of major technical investigations for storm-0558 key acquisi-
tion. Microsoft Blog, September 2023. https://msrc.microsoft.com/blog/2023/09/
results-of-major-technical-investigations-for-storm-0558-key-acquisition/. 2
[MPS20] Andrew Morgan, Rafael Pass, and Elaine Shi. On the adaptive security of MACs
and PRFs. In Shiho Moriai and Huaxiong Wang, editors, ASIACRYPT 2020, Part I,
volume 12491 of LNCS, pages 724–753. Springer, Heidelberg, December 2020. 8
[MS04] Alfred Menezes and Nigel Smart. Security of signature schemes in a multi-user setting.
Designs, Codes and Cryptography, 33(3):261–274, 2004. 3
[Nat] National Institute of Standards and Technology (NIST). Post-quantum cryptography
standardization. https://csrc.nist.gov/projects/post-quantum-cryptography. 31
[Nat23] National Institute of Standards and Technology. Digital Signature Standard (DSS).
FIPS PUB 186-5, Feb. 2023. https://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.
186-5.pdf. 4
[PR20] Jiaxin Pan and Magnus Ringerud. Signatures with tight multi-user security from search
assumptions. In Liqun Chen, Ninghui Li, Kaitai Liang, and Steve A. Schneider, editors,
ESORICS 2020, Part II, volume 12309 of LNCS, pages 485–504. Springer, Heidelberg,
September 2020. 3, 4
[PW22] Jiaxin Pan and Benedikt Wagner. Lattice-based signatures with tight adaptive cor-
ruptions and more. In Goichiro Hanaoka, Junji Shikata, and Yohei Watanabe, editors,
PKC 2022, Part II, volume 13178 of LNCS, pages 347–378. Springer, Heidelberg,
March 2022. 5
[PWZ23] Jiaxin Pan, Benedikt Wagner, and Runzhi Zeng. Lattice-based authenticated key
exchange with tight security. In Helena Handschuh and Anna Lysyanskaya, editors,
CRYPTO 2023, Part V, volume 14085 of LNCS, pages 616–647. Springer, Heidelberg,
August 2023. 32
[PZ22] Jiaxin Pan and Runzhi Zeng. Compact and tightly selective-opening secure public-key
encryption schemes. In Shweta Agrawal and Dongdai Lin, editors, ASIACRYPT 2022,
Part III, volume 13793 of LNCS, pages 363–393. Springer, Heidelberg, December 2022.
33
[Sch91] Claus-Peter Schnorr. Efficient signature generation by smart cards. Journal of Cryp-
tology, 4(3):161–174, January 1991. 2
[Sha79] Adi Shamir. How to share a secret. Communications of the Association for Computing
Machinery, 22(11):612–613, November 1979. 3
[Whi23] Zack Whittaker. Microsoft lost its keys, and the government got
hacked. TechCrunch, July 2023. https://techcrunch.com/2023/07/17/
microsoft-lost-keys-government-hacked/. 2
[Zav12] G.M. Zaverucha. Hybrid encryption in the multi-user setting. Cryptology ePrint
Archive, Report 2012/159, 2012. https://eprint.iacr.org/2012/159. 27
The running time of B is about that of A plus the time for an execution of the sampler D-FX and
re-randomization of elements.
RSA-FDH.Kg:
1  (N, e, d) ←$ RSAGen
2  (vk, sk) ← ((N, e), (N, d))
3  Return (vk, sk)

RSA-FDH.Sign[H](sk, M):
4  y ← H(M)
5  Return σ ← y^d mod N

RSA-FDH.Vf[H](vk, M, σ):
6  y ← σ^e mod N
7  Return ⟦y = H(M)⟧

Figure 25: Signature scheme RSA-FDH for an RSA instance generator RSAGen, where RSA-FDH.ROS is the set of all H : {0,1}^* → Z^*_N.
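To make the scheme concrete, here is a toy Python sketch of RSA-FDH; the counter-based hash into Z^*_N, the SHA-256 instantiation of H, and the tiny demo key are illustrative choices of ours, not part of the scheme as specified.

# Toy RSA-FDH sketch (illustration only: the modulus is far too small for
# real use, and a deployed FDH needs a carefully implemented full-domain hash).
import hashlib
from math import gcd

def fdh(M: bytes, N: int) -> int:
    # Naive hash into Z_N^*: re-hash (counter, M) until the value is usable.
    ctr = 0
    while True:
        h = hashlib.sha256(ctr.to_bytes(4, "big") + M).digest()
        y = int.from_bytes(h, "big") % N
        if y > 0 and gcd(y, N) == 1:
            return y
        ctr += 1

def sign(N: int, d: int, M: bytes) -> int:
    return pow(fdh(M, N), d, N)            # sigma = H(M)^d mod N

def verify(N: int, e: int, M: bytes, sigma: int) -> bool:
    return pow(sigma, e, N) == fdh(M, N)   # sigma^e =? H(M) mod N

# Tiny demo key: N = 61 * 53, e = 17, d = e^{-1} mod phi(N) = 2753.
N, e, d = 3233, 17, 2753
assert verify(N, e, b"hello", sign(N, d, b"hello"))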
RSA[RSAGen](gs):
1  (N, e, d) ←$ RSAGen
2  Return (pp, os) ← ((N, e), ε)

RSA[RSAGen](init, ((N, e), ε)):
3  St.N ← N ; St.e ← e
4  x ←$ Z^*_N ; St.x ← x ; St.y ← x^e mod N
5  Return (y, St)

RSA[RSAGen](corr, ε, St):
6  Return (St.x, St)

RSA[RSAGen](fin, x, St):
7  Return (⟦St.x = x⟧, ε)

Game G^{RSA-muc-(n,c)}_{RSAGen}

Init:
1  (N, e, d) ←$ RSAGen
2  For i = 1, . . . , n do
3    x_i ←$ Z^*_N ; y_i ← x_i^e mod N
4  Return (N, e, y_1, . . . , y_n)

Corrupt(i): // i ∈ [1..n]
5  CS ← CS ∪ {i}
6  Return x_i

Fin(i, x): // i ∈ [1..n]
7  If i ∈ CS then return false
8  Return ⟦x_i = x⟧

Figure 26: Left: Formal security specification RSA for an RSA instance generator RSAGen. Right: Game G^{RSA-muc-(n,c)}_{RSAGen} describing the execution of RSA[RSAGen] in the muc setting.
Theorem A.2 Let RSA-FDH be the signature scheme defined in Figure 25. Let q_ro, q_s be integers such that 0 ≤ q_s < q_ro. Let A be an adversary for game G^{UF}_{RSA-FDH}. Then we construct an adversary B for game G^{RSA}_{RSAGen} such that

Adv^{UF}_{RSA-FDH}(A) ≤ e(q_s + 1) · Adv^{RSA}_{RSAGen}(B) ,

where q_s is the number of signing queries and q_ro the number of random oracle queries A makes. The running time of B is about that of A plus the time for an execution of the sampler D-FX and re-randomization of elements.
We prove the theorem via the following lemma, showing that RSA-muc tightly implies the security
of RSA-FDH. Theorem A.2 then follows by combining Lemmas A.1 and A.3.
Lemma A.3 Let RSA-FDH be the signature scheme defined in Figure 25. Let q_ro, q_s be integers such that 0 ≤ q_s < q_ro. Let A be an adversary for game G^{UF}_{RSA-FDH} that issues at most q_ro queries to random oracle RO and q_s queries to the signing oracle. Then we construct an adversary B for game G^{RSA-muc-(q_ro,q_s)}_{RSAGen} such that

Adv^{UF}_{RSA-FDH}(A) ≤ Adv^{RSA-muc-(q_ro,q_s)}_{RSAGen}(B) .

The running time of B is about that of A.
Proof of Lemma A.3: We construct adversary B for game G^{RSA-muc-(q_ro,q_s)}_{RSAGen} as follows: When A calls G^{UF}_{RSA-FDH}.Init, B calls its own Init oracle to receive (N, e) and y_1, . . . , y_{q_ro}.
CCA-SB[PKE, h](gs):
1  b ←$ {0,1} ; Return (ε, b)

CCA-SB[PKE, h](init, (ε, b)):
2  (ek, dk) ←$ PKE.Kg[h]
3  St.ek ← ek ; St.dk ← dk ; St.b ← b
4  St.corr ← false ; Return (ek, St)

CCA-SB[PKE, h](enc, (M_0, M_1), St):
5  If St.corr then return ⊥
6  C ←$ PKE.Enc[h](St.ek, M_{St.b})
7  St.S ← St.S ∪ {C} ; Return (C, St)

CCA-SB[PKE, h](dec, C, St):
8  If C ∈ St.S then return ⊥
9  M ← PKE.Dec[h](St.dk, C) ; Return (M, St)

CCA-SB[PKE, h](corr, ε, St):
10 St.corr ← true ; Return (St.dk, St)

Game G^{CCA-SB-muc-(n,c)}_{PKE}

Init:
1  h ←$ Sch.ROS
2  b ←$ {0,1}
3  For i = 1, . . . , n do
4    (ek_i, dk_i) ←$ PKE.Kg[h]
5  Return (ek_1, . . . , ek_n)

Oracle(enc, (i, M_0, M_1)): // i ∈ [1..n] \ CS
6  C ←$ PKE.Enc[h](ek_i, M_b)
7  S_i ← S_i ∪ {C} ; Return C

Oracle(dec, (i, C)): // i ∈ [1..n], C ∉ S_i
8  Return M ←$ PKE.Dec[h](dk_i, C)

Corrupt(i): // i ∈ [1..n]
9  If S_i ≠ ∅ then return ⊥
10 CS ← CS ∪ {i} ; Return dk_i

Figure 27: Left: FSS CCA-SB[PKE, h] capturing security with a single challenge bit. Right: Game G^{CCA-SB-muc-(n,c)}_{PKE} describing the execution of CCA-SB[PKE, h] in the muc setting. The FSS for CPA security, CPA-SB[PKE, h], is obtained by omitting the dec sub-algorithm.
It sets vk ← (N, e) and returns it to A. We assume w.l.o.g. that A makes a query to RO for each message M it queries to the signing oracle and for the message M^* of its final forgery. On the i-th fresh query to RO, B sets the output to y_i. When A queries the signing oracle for a message M with RO(M) = y_i, B queries Corrupt(i) to obtain x_i = y_i^d mod N, which is a valid signature for M since y_i = x_i^e mod N. When A outputs its forgery (M^*, σ^*), B looks up the index i^* such that RO(M^*) = y_{i^*} and calls Fin on (i^*, σ^*). If A is successful, i.e., σ^* is a valid forgery and M^* has not been queried to the signing oracle, then B wins game G^{RSA-muc-(q_ro,q_s)}_{RSAGen}, and the statement follows.
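In code form, the reduction reads roughly as follows. This is a schematic sketch: `game` is a hypothetical handle to the Init, Corrupt and Fin oracles of game G^{RSA-muc-(q_ro,q_s)}_{RSAGen}, and the method names are ours.

# Schematic rendering of adversary B from the proof of Lemma A.3.
class AdversaryB:
    def __init__(self, game):
        self.game = game
        self.table = {}      # message M -> (index i, programmed value y_i)
        self.next = 0        # index of the next unused y_i

    def init(self):
        N, e, ys = self.game.init()     # receive (N, e) and y_1, ..., y_qro
        self.ys = ys
        return (N, e)                   # verification key handed to A

    def ro_query(self, M):
        # Program the i-th fresh RO query to return y_i.
        if M not in self.table:
            self.table[M] = (self.next, self.ys[self.next])
            self.next += 1
        return self.table[M][1]

    def sign_query(self, M):
        i, _ = self.table[M]            # wlog A queried RO(M) beforehand
        return self.game.corrupt(i)     # x_i with x_i^e = y_i signs M

    def finalize(self, M_star, sigma_star):
        i, _ = self.table[M_star]
        return self.game.fin(i, sigma_star)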
B Proof of FO-Transform
Since we start from a PKE scheme, we provide the FSS for single-bit security of PKE in Figure 27.
Proof of Theorem 5.8: We prove the theorem in three steps. This allows us to prove security
of the resulting KEM scheme in the sb setting with corruptions. As an intermediate notion we will
use one-way security for PKE as defined via the local FSS from Figure 10 in Section 5.1. For a
schematic overview we refer to the right side of Figure 14 in Section 5.2.
T-Transform. We start with the PKE scheme PKE′ constructed using the T-Transform T[PKE, G] as shown in Figure 17, where we model G as a random oracle mapping to randomness space PKE.RS. The following lemma shows OW-PCVA-mu security of PKE′.
Lemma B.1 Let PKE be γ-spread and PKE′ = T[PKE, G] as described in Figure 17. Let m ≥ 1 be an integer. Let A be an adversary for game G^{OW-PCVA-mu-m}_{PKE′} that issues q_e queries to Oracle(enc, ·), q_p queries to Oracle(pco, ·), q_v queries to Oracle(cvo, ·) and q_ro queries to random oracle RO. Then we construct an adversary B for game G^{CPA-SB-mu-m}_{PKE} such that the advantage of A is bounded by that of B together with the additive terms collected in the proof below.
Note that this lemma does not consider corruptions and is essentially the same as the one proved by Jaeger and Kumar [JK22]. However, they make small changes to the scheme in order to additionally achieve memory-tightness. Further, Duman et al. [DHK+ 21] prove multi-user security (without corruptions) of the full FO transform. Since our proof is very similar to theirs, we only give a sketch here.
Proof: Starting with the OW-PCVA-mu game, we first change how Oracle(pco, ·) and Oracle(cvo, ·) work. Instead of checking whether C decrypts to M, Oracle(pco, ·) only checks whether M encrypts to C; assuming perfect correctness, this does not make a difference. Oracle(cvo, ·) performs its check using the random oracle queries and rejects if there exists no query that explains C. This change is bounded by the γ-spreadness of PKE, contributing the term q_v · 2^{-γ}.
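As a small illustration of this re-encryption check (our own sketch: `enc` stands for the base scheme's randomized encryption taking explicit coins, and `G` for the random oracle):

# The T-transform derandomizes encryption with coins G(M); Oracle(pco, .)
# can then answer "does C decrypt to M?" by re-encrypting M.
def t_encrypt(enc, G, ek, M):
    return enc(ek, M, G(M))          # coins fixed to G(M)

def pco_check(enc, G, ek, M, C):
    # Equivalent to the decryption check under perfect correctness.
    return t_encrypt(enc, G, ek, M) == C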
The next modification is that the game sets a flag bad when the random oracle is queried on any message used by the encryption oracle. In order to bound the probability of bad being set, we construct a reduction B for game G^{CPA-SB-mu-m}_{PKE}. For each encryption query, B picks two random messages (M_0, M_1) and forwards them to its own encryption oracle to receive an encryption of M_b. If the random oracle is queried on one of these messages, B outputs the corresponding bit. With probability 1/|PKE.MS|, A queries the message M_{1−b} to either the random oracle or Oracle(pco, ·); a union bound over the number of encryption queries and users gives the term m·q_e(q_ro + q_p)/|PKE.MS|. Otherwise, if A queries M_b, then B wins.
We construct another reduction B′ to bound the probability that the final game returns true. Note that the game aborts when the random oracle is queried on any M that is used for an encryption query. B′ is similar to B: it chooses two random messages and forwards them to its encryption oracle. When A calls Fin with a message M, B′ checks whether this message equals one of the two messages and outputs the respective bit. Note that for this the message space needs to be sufficiently large. Folding the two adversaries into a single adversary B and collecting the probabilities yields the claimed bound.
From mu to muc. Luckily, the OW-PCVA FSS is local and we can apply Theorem 5.3 from Sec-
tion 5.1 to go from multi-user security without corruptions to multi-user security with corruptions.
U-Transform. Finally, we prove security of the KEM scheme KEM constructed using the U-Transform U[PKE′, H] as shown in Figure 17, where we now model H as a random oracle mapping to {0,1}^{KEM.kl}.
Lemma B.2 Let KEM = U[PKE′, H] be as defined in Figure 17. Let n, c be integers such that 0 ≤ c < n. Let A be an adversary for game G^{CCA-SB-muc-(n,c)}_{KEM}. Then we construct an adversary B for game G^{OW-PCVA-muc-(n,c)}_{PKE′} such that

Adv^{CCA-SB-muc-(n,c)}_{KEM}(A) ≤ Adv^{OW-PCVA-muc-(n,c)}_{PKE′}(B) .
Adversary B makes the same number of corruption and encryption queries as A makes. Let qd and
qro be the number of decryption and random oracle queries A makes. Then B makes at most qd
queries to Oracle(cvo, ·) and at most qro queries to Oracle(pco, ·). The running time of B is
about that of A plus the time for maintaining lists of encryption, decryption and random oracle
queries.
Jaeger and Kumar [JK22] prove this statement without corruptions. Since we have already moved to a setting with corruptions, both games now allow corruptions, and the reduction we construct can simply forward those queries.
Proof: The proof is very similar to that of Theorem 5.7, where here Oracle(pco, ·) serves the purpose of the Oracle(ddh, ·) oracle. The adversary B we construct is similar to that in Figure 16, so we only describe it in words. It runs adversary A and simulates game G^{CCA-SB-muc-(n,c)}_{KEM} as follows.
To initialize the game, B queries its own Init oracle and receives a list of n encryption keys. When A queries Oracle(enc, i), B first checks whether i was corrupted, in which case the query is not allowed. Otherwise, it queries its own oracle Oracle(enc, i) to receive a ciphertext C. Since B's task is to compute the underlying message of one of the ciphertexts it receives, it cannot compute the key K honestly. Instead, it chooses K uniformly at random from {0,1}^{KEM.kl} and stores (C, K) in a set S_i. Note that as long as RO is not queried on the respective input, this simulation is perfect.
For queries to Oracle(dec, (i, C)), B keeps an additional set DS which stores tuples of the form (ek_i, C, K). If a query is repeated, the same key is output. If a new ciphertext is queried and it is not valid, which B can check using its oracle Oracle(cvo, ·), B outputs ⊥. Otherwise it chooses a random key K and stores it in DS. In order to stay consistent with random oracle queries, we will also add entries there, as described below. Corruption queries, if allowed, are simply forwarded.
It remains to describe the simulation of RO. Queries are stored in a list HS and repeated queries are answered consistently. If ek belongs to one of the users in the game, B queries Oracle(pco, ·) on the pair (M, C) provided by A. If C is a valid ciphertext for M and C was previously output by a query to Oracle(enc, ·), then (since corruption and encryption queries are not allowed for the same user) B stops the execution of A and queries Fin. If C was previously queried to Oracle(dec, ·), B patches the random oracle using list DS. Otherwise, it adds a new entry. Note that so far we considered the case that C is a valid encryption of M; if it is not, we only add an entry to HS and output K.
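Schematically, the random-oracle simulation could look as follows; this is our own sketch, with `st.pco` and `st.fin` standing in for B's oracles and the bookkeeping mirroring the sets HS, DS and S_i from the text.

import os

def simulate_ro(st, ek, M, C):
    # st bundles B's state: st.HS (RO table), st.S[i] (challenge pairs),
    # st.DS (keys chosen for decryption queries), st.users (ek -> index i).
    if (ek, M, C) in st.HS:                       # repeated query
        return st.HS[(ek, M, C)]
    if ek in st.users and st.pco(st.users[ek], M, C):
        i = st.users[ek]
        if any(C == C2 for (C2, _) in st.S[i]):   # C came from Oracle(enc, .)
            st.fin(i, M)                          # B wins: M inverts C
        if (i, C) in st.DS:                       # C was asked to dec before
            st.HS[(ek, M, C)] = st.DS[(i, C)]     # patch RO with that key
            return st.HS[(ek, M, C)]
    K = os.urandom(st.kl // 8)                    # fresh uniform answer
    st.HS[(ek, M, C)] = K
    return K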
In order to win, A must query the random oracle on a valid pair (M, C); otherwise K is uniformly random in any case. Further, if A issues such a query, B wins game G^{OW-PCVA-muc-(n,c)}_{PKE′}, which concludes the proof.
The statement in Theorem 5.8 now follows from Lemmas B.1 and B.2 and Theorem 5.3.
Game G^{REAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}

Init:
1  (ek, dk) ←$ PKE.Kg[h]
2  Return ek

Enc(M): // only one query
3  For i = 1, . . . , n do
4    M_i ←$ M ; r_i ←$ PKE.RS
5    C_i ←$ PKE.Enc[h](ek, M_i ; r_i)
6    CS ← CS ∪ {C_i}
7  Return (C_1, . . . , C_n)

Dec(C): // C ∉ CS
8  Return PKE.Dec[h](dk, C)

Open(i): // c queries and i ∈ [1..n]
9  IS ← IS ∪ {i} ; Return (M_i, r_i)

Fin(out):
10 Return Rel((M_1, . . . , M_n), M, IS, out)

Game G^{IDEAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}

Init:
1  Return ε

Enc(M): // only one query
2  For i = 1, . . . , n do
3    M_i ←$ M
4  Return (|M_1|, . . . , |M_n|)

Open(i): // c queries and i ∈ [1..n]
5  IS ← IS ∪ {i} ; Return M_i

Fin(out):
6  Return Rel((M_1, . . . , M_n), M, IS, out)

Figure 28: Games G^{REAL-SIM-SO-CCA-(n,c)}_{PKE,Rel} and G^{IDEAL-SIM-SO-CCA-(n,c)}_{PKE,Rel} for selective opening security.
under selective opening attacks. We then show that we can improve the tightness of the practical
encryption schemes based on Diffie-Hellman and RSA considered in [HJKS15] in the sense that the
security loss is linear in the number of openings rather than linear in the number of ciphertexts.
Simulation-based security for selective openings. We recall the definition of SIM-SO-CCA security for PKE schemes from [HJKS15]. We define two games: a real game G^{REAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}, which runs an adversary A, and an ideal game G^{IDEAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}, which runs a simulator S. The games are presented in Figure 28, where M is a distribution over the message space PKE.MS, specified by A, from which the n messages are drawn, and Rel is a relation. We define the advantage function as

Adv^{SIM-SO-CCA-(n,c)}_{PKE,Rel}(A) := | Pr[G^{REAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}(A)] − max_S Pr[G^{IDEAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}(S)] | .
Diffie-Hellman and RSA-based PKE. We present schemes DH-PKE and RSA-PKE in Fig-
ure 29. They are instantiations of the KEM-based scheme from [HJKS15]. In contrast to [HJKS15],
we prove their security with a tightness loss in the number of Open queries instead of the number
of ciphertexts n. This is captured in the following theorems.
Theorem C.1 Let DH-PKE be the PKE scheme as defined in Figure 29. Let n, c be integers such that 0 ≤ c < n and Rel a relation. For any adversary A in game G^{REAL-SIM-SO-CCA-(n,c)}_{DH-PKE,Rel}, there exists an adversary B against G^{St-CDH}_{(G,p,g)} such that

Adv^{SIM-SO-CCA-(n,c)}_{DH-PKE,Rel}(A) ≤ e(c + 1) · Adv^{St-CDH}_{(G,p,g)}(B) + nq_d/p + q_ro/2^{kl} + q_d/2^{tl} ,

where kl = DH-PKE.kl and tl = DH-PKE.tl. Let q_ro be the number of random oracle queries and q_d the number of decryption queries A makes. Then B makes at most q_ro queries to oracle Ddh. The running time of B is about that of A plus the time for an execution of the sampler D-FX and re-randomization of group elements.
Theorem C.2 Let RSA-PKE be the PKE scheme as defined in Figure 29. Let n, c be integers such that 0 ≤ c < n and Rel a relation. For any adversary A in game G^{REAL-SIM-SO-CCA-(n,c)}_{RSA-PKE,Rel}, there exists an adversary B against G^{RSA}_{RSAGen} such that the analogue of the bound of Theorem C.1 holds, with p replaced by |Z^*_N| and the St-CDH advantage replaced by Adv^{RSA}_{RSAGen}(B); the terms are collected in the proof sketch below.
DH-PKE.Enc[G, H](ek, M):
1  r ←$ Z_p ; C_1 ← g^r
2  (K_sym, K_mac) ← G(C_1, ek^r)
3  C_2 ← K_sym ⊕ M
4  C_3 ← H(K_mac, C_1, C_2)
5  Return (C_1, C_2, C_3)

RSA-PKE.Enc[G, H](ek, M):
1  y ←$ Z^*_N ; C_1 ← y^e mod N
2  (K_sym, K_mac) ← G(C_1, y)
3  C_2 ← K_sym ⊕ M
4  C_3 ← H(K_mac, C_1, C_2)
5  Return (C_1, C_2, C_3)

Figure 29: Left: PKE scheme DH-PKE for a group (G, p, g), where DH-PKE.Kg = HEG.Kg. Right: PKE scheme RSA-PKE for RSAGen, where RSA-PKE.Kg = RSA-FDH.Kg. For both PKE ∈ {DH-PKE, RSA-PKE}, the set PKE.ROS contains all G : {0,1}^* → PKE.MS × {0,1}^{PKE.kl} and H : {0,1}^* → {0,1}^{PKE.tl} for integers kl, tl.
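For concreteness, here is a toy Python rendering of DH-PKE; the group Z^*_p with p = 2^255 − 19, the SHAKE-256 instantiation of G, the SHA-256 instantiation of H, and the 16-byte MAC key are illustrative choices of ours.

import hashlib, secrets

p = 2**255 - 19          # a known prime; illustrative group choice
g = 2

def G(C1: int, Z: int, msg_len: int):
    # Derive (K_sym, K_mac) from (C1, Z); SHAKE-256 stands in for oracle G.
    out = hashlib.shake_256(b"G" + C1.to_bytes(32, "big")
                            + Z.to_bytes(32, "big")).digest(msg_len + 16)
    return out[:msg_len], out[msg_len:]

def H(Kmac: bytes, C1: int, C2: bytes) -> bytes:
    return hashlib.sha256(b"H" + Kmac + C1.to_bytes(32, "big") + C2).digest()

def keygen():
    dk = secrets.randbelow(p - 1) + 1
    return pow(g, dk, p), dk                 # (ek, dk)

def encrypt(ek: int, M: bytes):
    r = secrets.randbelow(p - 1) + 1
    C1 = pow(g, r, p)
    Ksym, Kmac = G(C1, pow(ek, r, p), len(M))
    C2 = bytes(a ^ b for a, b in zip(Ksym, M))
    return C1, C2, H(Kmac, C1, C2)

ek, dk = keygen()
C1, C2, C3 = encrypt(ek, b"attack at dawn")
# Decryption recomputes (K_sym, K_mac) from C1^dk and checks the tag C3.
Ksym, Kmac = G(C1, pow(C1, dk, p), len(C2))
assert H(Kmac, C1, C2) == C3
assert bytes(a ^ b for a, b in zip(Ksym, C2)) == b"attack at dawn"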
We start with the proof of Theorem C.1 and give a proof sketch of Theorem C.2 noting the
differences.
Proof of Theorem C.1: We prove the theorem via the sequence of games G_0 to G_6 in Figure 30, where ROG and ROH use lazy sampling to simulate random oracles for G and H, respectively. G_0 is the same as G^{REAL-SIM-SO-CCA-(n,c)}_{DH-PKE,Rel}, except that we draw r_i and compute C_{1,i} and (K_{sym,i}, K_{mac,i}) already during Init. We also introduce a boolean flag queried-enc to record whether the encryption oracle has been queried; we will use it in a later game hop. Since these are only conceptual changes, we have

Pr[G_0(A)] = Pr[G^{REAL-SIM-SO-CCA-(n,c)}_{DH-PKE,Rel}(A)] .
In G_1, we change the way decryption queries are handled. Instead of calling ROG directly, we use lists GS and DS, where the latter stores entries which have not been queried to ROG by A. More specifically, whenever A queries Dec, the game checks whether there exist K_sym, K_mac in either GS or DS and, if so, uses them for decryption. If this is not the case, fresh K_sym, K_mac are chosen and added to DS together with C_1; these are then used to perform the decryption. We also adapt the behavior of ROG accordingly: we check list DS and, if Z is the correct value, ROG returns the (K_sym, K_mac) chosen during decryption. If a new query to ROG happens and Z = C_1^{dk}, the game adds this entry to DS as well, in order to simulate future decryption queries consistently. Note that G_1 behaves exactly as G_0, thus Pr[G_1(A)] = Pr[G_0(A)].
In G_2, we change decryption again: G_2 sets bad_1 if A queries the decryption oracle on a value C_{1,i} before the encryption oracle has even been queried. We have

| Pr[G_2(A)] − Pr[G_1(A)] | ≤ Pr[G_2(A) sets bad_1] ,
Games G_0–G_6:

Init:
1  dk ←$ Z_p ; ek ← g^{dk}
2  For i = 1, . . . , n do
3    r_i ←$ Z_p ; C_{1,i} ← g^{r_i}
4    (K_{sym,i}, K_{mac,i}) ← ROG(C_{1,i}, ek^{r_i}) // G_0–G_5
5    (C_{2,i}, K_{mac,i}) ←$ {0,1}^{|DH-PKE.MS|} × {0,1}^{DH-PKE.kl} // G_6
6    C_{3,i} ← ROH(K_{mac,i}, C_{1,i}, C_{2,i}) // G_6
7  Return ek

Enc(M): // only one query
8  queried-enc ← true
9  For i = 1, . . . , n do
10   M_i ←$ M
11   C_{2,i} ← K_{sym,i} ⊕ M_i ; C_{3,i} ← ROH(K_{mac,i}, C_{1,i}, C_{2,i}) // G_0–G_5
12   C_i ← (C_{1,i}, C_{2,i}, C_{3,i}) ; CS ← CS ∪ {C_i}
13 Return (C_1, . . . , C_n)

Dec(C_1, C_2, C_3): // (C_1, C_2, C_3) ∉ CS
14 If ∃i ∈ [n] s.t. C_1 = C_{1,i}: // G_2–G_6
15   If queried-enc = false then bad_1 ← true ; Abort // G_2–G_6
16   If C_2 ≠ C_{2,i} ∧ (K_{mac,i}, C_1, C_2, C_3) ∉ HS then return ⊥ // G_3–G_6
17   If C_2 ≠ C_{2,i} then return ⊥ // G_5–G_6
18 (K_sym, K_mac) ← ROG(C_1, C_1^{dk}) // G_0
19 If ∃K, K′ s.t. (C_1, K, K′) ∈ DS // G_1–G_6
20   (K_sym, K_mac) ← (K, K′) // G_1–G_6
21 Else // G_1–G_6
22   (K_sym, K_mac) ←$ {0,1}^{|DH-PKE.MS|} × {0,1}^{DH-PKE.kl} // G_1–G_6
23   DS ← DS ∪ {(C_1, K_sym, K_mac)} // G_1–G_6
24 If ROH(K_mac, C_1, C_2) ≠ C_3 then return ⊥
25 Else return C_2 ⊕ K_sym

Open(i): // only after Enc and i ∈ [1..n]
26 IS ← IS ∪ {i}
27 GS ← GS ∪ {(C_{1,i}, ek^{r_i}, C_{2,i} ⊕ M_i, K_{mac,i})} // G_6
28 Return (M_i, r_i)

ROG(C_1, Z):
29 If ∃i ∈ [n] \ IS s.t. C_1 = C_{1,i} and Z = C_1^{dk} then bad_2 ← true ; Abort // G_4–G_6
30 If ∃K_sym, K_mac s.t. (C_1, Z, K_sym, K_mac) ∈ GS then return (K_sym, K_mac)
31 If ∃K_sym, K_mac s.t. (C_1, K_sym, K_mac) ∈ DS and Z = C_1^{dk}: // G_1–G_6
32   Return (K_sym, K_mac) // G_1–G_6
33 (K_sym, K_mac) ←$ {0,1}^{|DH-PKE.MS|} × {0,1}^{DH-PKE.kl}
34 If Z = C_1^{dk} then DS ← DS ∪ {(C_1, K_sym, K_mac)} // G_1–G_6
35 GS ← GS ∪ {(C_1, Z, K_sym, K_mac)} ; Return (K_sym, K_mac)

ROH(K_mac, C_1, C_2):
36 If ∃C_3 s.t. (K_mac, C_1, C_2, C_3) ∈ HS then return C_3
37 C_3 ←$ {0,1}^{DH-PKE.tl} ; HS ← HS ∪ {(K_mac, C_1, C_2, C_3)} ; Return C_3

Fin(out):
38 Return Rel((M_1, . . . , M_n), M, IS, out)

Figure 30: Games G_0–G_6 for the proof of Theorem C.1. A comment // G_i–G_j indicates that the instruction is present only in games G_i through G_j.
since the games are identical until bad_1 is set to true and we can apply Theorem 2.1. As long as A has not queried the encryption oracle, the values C_{1,i} are hidden from A, so we can bound Pr[G_2(A) sets bad_1] ≤ nq_d/p.
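Spelled out, since each C_{1,i} = g^{r_i} is uniform in a group of order p and independent of A's view before Enc is queried, the union bound is:

\[
\Pr[G_2(A) \text{ sets } \mathrm{bad}_1]
  \;\le\; \sum_{j=1}^{q_d} \Pr\big[\,\exists\, i \in [n] : C_1^{(j)} = C_{1,i}\,\big]
  \;\le\; q_d \cdot \frac{n}{p}\,,
\]

where C_1^{(j)} denotes the first component of the j-th decryption query.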
In G_3, the decryption oracle rejects all ciphertexts where C_1 equals a challenge value C_{1,i} and C_2 is new, but there exists no entry (K_{mac,i}, C_1, C_2, C_3) in the list HS for random oracle ROH. We have

| Pr[G_3(A)] − Pr[G_2(A)] | ≤ q_d/2^{tl} .

For this, note that if there has been no query (K_mac, C_1, C_2) to ROH at all, C_3 can only be valid with probability 2^{−tl}. If (K_mac, C_1, C_2) has been queried, but the C_3 stored in HS differs from the C_3 queried to Dec, then the output would have been ⊥ anyway.
In G_4, we add a flag bad_2 that is set if ROG is queried on a challenge value C_{1,i} and Z such that i has not been queried to Open and Z = C_{1,i}^{dk}. The games are identical until bad_2 is set, and this event can be bounded by a reduction B that embeds a St-CDH instance into the values C_{1,i} using the sampler D-FX, treating Open queries like corruptions and detecting the critical ROG query with its Ddh oracle. We have

| Pr[G_4(A)] − Pr[G_3(A)] | ≤ Pr[G_4(A) sets bad_2] ≤ e(c + 1) · Adv^{St-CDH}_{(G,p,g)}(B) .
In G_5, the decryption oracle rejects all ciphertexts whose first component C_1 was part of a challenge, that is, we no longer have to check list HS. We claim that

| Pr[G_5(A)] − Pr[G_4(A)] | ≤ q_ro/2^{kl} ,   (39)

which follows from observing that K_{mac,i} is determined by the output of ROG, but the game aborts when ROG is queried on the correct input for C_{1,i}; thus K_{mac,i} is hidden from A. In order to produce a valid pair C_2, C_3, the adversary must query the random oracle ROH on K_{mac,i}. Since C_{1,i} is included in the hash, a critical query happens with probability 2^{−kl}. A union bound over all queries gives Eq. (39).
Finally, in G_6, we compute the whole ciphertext at the beginning of the game, without querying ROG explicitly and before the message is known. For this, the game simply chooses C_{2,i} and K_{mac,i} uniformly at random. Note that K_{sym,i} is then only defined when Enc is queried. Once Open is queried, the game adds the correct value to list GS. Because the game aborts when ROG is queried on a critical input, this maintains consistency and the adversary's view does not change. We have Pr[G_6(A)] = Pr[G_5(A)].
We now construct a simulator S that runs in the ideal game and simulates G_6 for A. We show that

Pr[G^{IDEAL-SIM-SO-CCA-(n,c)}_{PKE,Rel}(S)] = Pr[G_6(A)] .

First, S runs DH-PKE.Kg and gives ek to A. When A calls Enc, S calls its own Enc oracle to obtain the lengths of the messages. It then computes the ciphertexts obliviously, without knowing M_i, as described in G_6. If A queries Open, S queries Open as well to receive M_i. Since it has chosen the randomness r_i, it can simply output (M_i, r_i) to A. All other oracles can be simulated as described in G_6. When A calls Fin, S calls Fin in game G^{IDEAL-SIM-SO-CCA-(n,c)}_{PKE,Rel} with the same output. The theorem then follows by collecting the bounds.
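A compact sketch of the two steps that make the simulation work (our own rendering: ROH and the list GS are as in Figure 30, and byte-level details are illustrative):

import secrets

def oblivious_ciphertext(ROH, C1, msg_len):
    # Game G6 / simulator: sample C2 and Kmac before the message is known.
    C2 = secrets.token_bytes(msg_len)
    Kmac = secrets.token_bytes(16)
    return C2, Kmac, ROH(Kmac, C1, C2)      # C3 from the simulated ROH

def explain_on_open(GS, C1, Z, C2, Kmac, M):
    # On Open(i): program G(C1, Z) := (C2 xor M, Kmac), so the revealed
    # message and randomness are consistent with the ciphertext.
    Ksym = bytes(a ^ b for a, b in zip(C2, M))
    GS[(C1, Z)] = (Ksym, Kmac)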
We now briefly sketch the differences for the proof of scheme RSA-PKE.
Proof of Theorem C.2: The proof is very similar to that of Theorem C.1 and we use essentially the same sequence of games. Since the randomness for C_1 is chosen from Z^*_N, we bound the probability that bad_1 is set by nq_d/|Z^*_N|. Further, we bound the event that bad_2 is set by constructing an adversary for game G^{RSA}_{RSAGen}. Here, we can make use of game G^{RSA-muc-(n,c)}_{RSAGen}, which we also use for RSA-FDH in Appendix A. This simplifies the proof and does not require using the sampler directly. The difference from the previous proof lies in the fact that we can check whether Z belongs to C_1 by checking whether Z^e mod N = C_1. The remaining steps are then similar to the proof of Theorem C.1.