Correlated-Output Differential Privacy and Applications to Dark Pools
James Hsin-yu Chiang #
Technical University of Denmark, Denmark
Bernardo David #
IT University of Copenhagen, Denmark
Mariana Gama #
COSIC, KU Leuven, Belgium
Christian Janos Lebeda #
IT University of Copenhagen, Denmark
Abstract
In the classical setting of differential privacy, a privacy-preserving query is performed on a private
database, after which the query result is released to the analyst; a differentially private query
ensures that the presence of a single database entry is protected from the analyst’s view. In this
work, we contribute the first definitional framework for differential privacy in the trusted curator
setting (Fig. 1); clients submit private inputs to the trusted curator, which then computes individual
outputs privately returned to each client. The adversary is more powerful than in the standard
setting: it can corrupt up to n − 1 clients and subsequently decide the inputs and learn the outputs
of corrupted parties. In this setting, the adversary also obtains leakage from any honest output that
is correlated with a corrupted output. Standard differentially private mechanisms protect client
inputs but do not prevent output correlation from leaking arbitrary client information, which can
forfeit client privacy completely. We initiate the investigation of a novel notion of correlated-output
differential privacy to
bound the leakage from output correlation in the trusted curator setting. We define the satisfaction
of both standard and correlated-output differential privacy as round differential privacy and highlight
the relevance of this novel privacy notion to all application domains in the trusted curator model.
We explore round differential privacy in traditional “dark pool” market venues, which promise
privacy-preserving trade execution to mitigate front-running; privately submitted trade orders and
trade execution are kept private by the trusted venue operator. We observe that dark pools satisfy
neither classic nor correlated-output differential privacy; in markets with low trade activity, the
adversary may trivially observe recurring, honest trading patterns, and anticipate and front-run
future trades. In response, we present the first round differentially private market mechanisms that
formally mitigate information leakage from all trading activity of a user. This is achieved with
fuzzy order matching, inspired by the standard randomized response mechanism, which weakens
output correlation; however, it also introduces a liquidity mismatch, as buy and sell orders are no
longer guaranteed to execute pairwise. This mismatch is compensated for by a round differentially
private liquidity provider mechanism, which freezes a noisy amount of assets from the liquidity
provider for the duration of a privacy epoch, while leaving trader balances unaffected. We propose
oblivious algorithms for realizing our proposed market mechanisms with secure multi-party
computation (MPC) and implement these in the Scale-Mamba framework using Shamir secret
sharing based MPC. We demonstrate practical, round differentially private trading with throughput
comparable to prior work implementing (traditional) dark pool algorithms in MPC; our experiments
demonstrate practicality for both traditional finance and decentralized finance settings.
Keywords and phrases Differential Privacy, Secure Multi-party Computation, Dark Pools, Decentralized Finance
[Figure 1: (left) the standard model — parties P1, P2, P3 submit private inputs x1, x2, x3 to a trusted party T, which releases the query result M(x = (x1, x2, x3)) to the analyst A; (right) the trusted curator model — parties submit private inputs to the curator C, which privately returns each individual output M^i(x) to party Pi.]
Figure 1 The standard model of differential privacy (L) vs. the trusted curator model (R). In this
work, we contribute the first definitional framework for differential privacy in the trusted curator
model (§3.1).
Funding James Hsin-yu Chiang: Part of the work was supported by a DTU Compute scholarship.
Bernardo David: This work was supported by the Independent Research Fund Denmark (IRFD)
grants number 9040-00399B (TrA2 C), 9131-00075B (PUMA) and 0165-00079B. Mariana Gama:
This work was supported by CyberSecurity Research Flanders with reference number VR20192203
and by the FWO under an Odysseus project GOH9718N. Christian Janos Lebeda: This work was
supported by the VILLUM Foundation grant 16582.
1 Introduction
In the standard differential privacy setting (Fig. 1, left), a single analyst (A) receives the
result of a query over private inputs from clients (P1, P2, P3), computed by a trusted third party (T). A
differentially private query protects the privacy of an input xi submitted by client Pi. In
the trusted curator model (Fig. 1, right), the curator C evaluates a function on all privately
submitted inputs, (y1 , y2 , y3 ) ← M (x1 , x2 , x3 ), and returns each output yi privately to client
Pi , which may be corrupted by the adversary. A (classically) differentially private mechanism
M will protect the honest input x1. However, if the honest output y1 = M^1(x1, x2, x3) and
an adversarial output y_i = M^i(x1, x2, x3) for i ≠ 1 are correlated, honest y1 may be trivially inferred
from an adversarial y_i, breaking client privacy. In this work, we introduce correlated-output
differential privacy (§3.3) to protect against such leakage to achieve client privacy in the
setting of the trusted curator. The conjunction with standard differential privacy protecting
inputs is defined as round differential privacy (§3.4), protecting the entire client transcript
in each interaction round. In this model, the adversary can inject inputs into each round;
round differential privacy ensures that such a chosen-input attack has a bounded effect
on the honest user's output. We highlight the investigation of round differentially private
algorithms for general and specific application domains as a research question of independent
interest. In this work, we investigate round differentially private market applications to
prevent front-running in traditional or decentralized finance.
The term front-running originates from the notion of “getting in front” of pending trades.
A party anticipating a large buy order may purchase the same asset first, as the pending
large buy order will likely drive up the price of the asset; the front-running party can then
sell the asset at a higher price following the execution of the large buy order. Front-running
occurs whenever submitted trade orders that have yet to be executed are observable by the
front-running adversary. In traditional finance, the presence of pending orders may be public
or inferred from market order books. In decentralized finance, pending transactions are
to both traditional dark pool venue operators and decentralized finance. Our fair markets
can be instantiated in privacy-preserving smart contract frameworks realized by an MPC
committee and privacy-preserving ledger, most recently demonstrated by Baum et al. in [4]
with minimal complexity overhead; here, trade execution is settled in private on a public
ledger.
Dark pool markets. Recent proposals [6, 7, 11, 12] have convincingly demonstrated that the
role of the dark pool operator can be instantiated in practice with multi-party computation
(MPC) to prevent abuse of private order information. Still, these works do not consider the
entirety of the information flow leaking from all honest trader activity: firstly, adversarial
outputs reveal information about privately submitted honest inputs (Lemma 4), and secondly,
outputs are correlated, such that an adversary also obtains information about honest outputs
(Lemma 6). In the decentralized finance setting, homomorphic encryption has been proposed
to aggregate orders obliviously [23]; however, since all inputs are encrypted to the same public
key, any subsequent decryption to reveal the aggregated order will leak the privacy of the
single remaining trade if all but one client has been corrupted.
Differential privacy and MPC. Whilst differentially private mechanisms have been implemented
in MPC, these works do not consider privacy over the full, individual transcript in the
trusted curator model (§3.1), where clients submit private inputs and receive private outputs.
Instead, the MPC output is a single query result computed over inputs from a private
database, and the returned query result is not considered private. The main use case is generating
differentially private machine learning models over private data with MPC [21, 1, 26, 22].
2 Preliminaries
Differential privacy. Differential privacy was introduced in [13] as a technique for quantifying
the privacy guarantees of a mechanism. A central concept is the definition of neighbouring
datasets which are denoted x ∼ x′ . Intuitively, this definition is used to capture the
information we want to protect. Typically x and x′ are identical except for the data about
one individual. We formally define neighbouring inputs in our setting of the trusted curator
in Section 3.1. Differential privacy is a restriction on how much the output distribution of a
mechanism can change between any neighbouring input datasets.
▶ Definition 1 ((ε, δ)-DP). A randomized mechanism M satisfies (ε, δ)-differential privacy
if for all pairs of neighbouring datasets x ∼ x′ and all sets of outputs S we have:

Pr[ M(x) ∈ S ] ≤ exp(ε) · Pr[ M(x′) ∈ S ] + δ
Secure multi-party computation. We use MPC based on Shamir secret sharing, where a secret
s ∈ Fp is shared via a random polynomial f of degree t and coefficients in Fp such that f(0) = s.
The protocol assumes an honest majority, i.e., t < n/2, and it is actively secure with abort,
meaning that a malicious party deviating from the protocol is caught with overwhelming
probability and the honest parties abort the protocol when this happens. In this work, we
use Scale-Mamba [3], a framework that
implements various MPC protocols in the preprocessing model. In this methodology, the
computation has a preprocessing phase where input independent data is generated. This
data is then used in the input dependent online phase, where the desired computation over
private inputs is performed.
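For concreteness, the following is a minimal cleartext sketch of Shamir sharing over a toy prime field; it illustrates only the share-and-reconstruct logic, not the actively secure, preprocessing-based protocols that Scale-Mamba actually implements (the field size, function names and parameters below are our own assumptions).

import random

PRIME = 2**61 - 1  # toy prime field F_p; real deployments use an MPC-friendly prime

def share(secret, n=3, t=1):
    # Sample a random polynomial f of degree t with f(0) = secret, and hand
    # party i the evaluation f(i); honest majority requires t < n/2.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over any t+1 shares recovers the secret.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

assert reconstruct(share(42)[:2]) == 42  # any t+1 = 2 shares suffice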
1. Input phase All parties send their individual inputs to the trusted curator C, which
obtains the input set x1 , ..., xn from parties P1 , ..., Pn respectively.
2. Evaluation phase Upon receiving all inputs, the trusted curator locally computes a
known algorithm M over inputs received in the input phase: namely y ← M(x), where
x = (x1 , ..., xn ) and y = (y1 , ..., yn ). Further, curator C is assumed to have access to
randomness to evaluate randomized algorithms.
3. Output phase The trusted curator privately sends each output element yi in y to
party Pi , and enters the input phase again. Any “public output” ypub is encoded in each
individual output; ∀i ∈ [n] : ypub ∈ yi .
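For illustration, the three phases above can be summarized as a short round interface; the sketch below uses our own (hypothetical) names and elides networking and the privacy of the channels.

from typing import Callable, List, Optional, Tuple

Mechanism = Callable[[List[int]], List[int]]  # randomized M: x = (x1, ..., xn) -> y = (y1, ..., yn)

def curator_round(mechanism: Mechanism, inputs: List[int],
                  public_output: Optional[int] = None) -> List[Tuple[int, Optional[int]]]:
    # (1) input phase: the curator has received inputs x1, ..., xn;
    # (2) evaluation phase: y <- M(x), using the curator's own randomness;
    # (3) output phase: yi is returned privately to Pi, with any public
    #     output ypub encoded into every individual output.
    y = mechanism(inputs)
    assert len(y) == len(inputs)  # one private output per client
    return [(yi, public_output) for yi in y]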
Client corruption. The adversary A can statically corrupt up to n − 1 clients, upon which
it decides what inputs the corrupted clients submit in each interaction round. The adversary
decides corrupted inputs and observes the output for each corrupted client returned from the
trusted curator; the adversary cannot corrupt the curator itself. We denote the adversarial
output view from a round evaluating mechanism M on round inputs x as MA (x).
Public outputs. We permit the trusted curator to also return public outputs; naturally,
any public output is part of the adversarial output view MA (x).
Privacy against the network adversary. We assume that the physical presence of a
party in each round is observable by the network adversary. Since obfuscating the active
participation across the network may be challenging, we assume parties to be physically
online and to participate in each round, but permit them to submit dummy inputs, allowing
for passive participation and obfuscating the logical presence of a party in a given round.
Without dummy inputs, the physical presence of a party will always leak the presence of
a logical input contributed by a party to the computation by the trusted curator; in the
setting of privacy-preserving markets, for example, the network adversary would learn that a
party is submitting some trade in a given round.
Further, we assume that parties can anonymously submit inputs to the trusted curator via
techniques such as mixnets [9, 8], thereby hiding their identity from the network adversary.
In practice, parties can delegate the physical interaction with the trusted curator in
each round to trusted servers, and only need to come online when they wish to forward a
valid, non-dummy input.
Group privacy. We highlight that individual differential privacy guarantees introduced in
the subsequent section naturally imply group privacy; a mechanism protecting the presence
of a single client can do so for multiple clients, consuming equal privacy budget amounts for
each additional group member.
For a randomized algorithm M evaluated on input vector x, let MA (x) = {Yj }j∈A denote
output distributions observed by corrupted clients. Then, the following definition follows
directly from the standard notion [14] of differential privacy where we consider the input
vector as the private database on which the query M is performed and the adversary obtains
the output view MA (x) of all corrupted parties. Note that there is no restriction on the
output distribution seen only by the honest user.
▶ Definition 3 ((ε, δ)-input DP). For an evaluation of an (ε, δ)-input differentially private
algorithm M in the trusted curator model over neighboring private input vectors x ∼ x′, the
following must hold for any adversarially observable output event S^A:

Pr[ MA(x) ∈ S^A ] ≤ exp(ε) · Pr[ MA(x′) ∈ S^A ] + δ
As we will see in Section 3.3, input differential privacy is necessary but insufficient
to protect both the input and the output of an honest client in the trusted curator round. Whilst
Definition 3 protects the privacy of a user input, it does not guarantee that the honest
output remains private. This motivates correlated-output differential privacy, introduced in
the subsequent Section 3.3. Again, the standard setting of differential privacy does
not consider the privacy of the query output, as there is only a single query result released
publicly or to the adversarial analyst.
J. Chiang, B. David, M. Gama and C. Lebeda :7
▶ Lemma 4. Dark pools violate (ε, δ)-input differential privacy for any δ < 1.
Proof. (Sketch) A dark pool venue operator can be idealized as a trusted curator which
privately receives trade orders from clients. Upon evaluating the market algorithm in private,
it privately outputs trade executions to clients. Assume an honest user submits the only
buy order and the corrupted client submitting a sell order observes that its trade order is
executed. Any change in the honest counter-party’s privately submitted buy order cancels
the matching of this order pair, observable to the adversary with probability 1, thereby
violating Definition 3. ◀
Adversarially chosen inputs. Note that input differential privacy in Definition 3 naturally
protects against chosen input attacks; informally, such an attack permits the adversary to
change its inputs and observe induced effects on its output distributions to learn something
about honest inputs. However, note that (ε, δ)-input DP applies equal privacy guarantees to
any input submitted to the trusted curator. Thus, for appropriately chosen privacy parameters,
a chosen input attack on an (ε, δ)-input DP mechanism will not reveal meaningful information
to the adversary, as its chosen input perturbation will not induce a sufficiently observable
effect on its output distributions.
▶ Definition 5 ((ε, δ)-correlated-output DP). For an evaluation of algorithm M in the trusted
curator model on input vector x, the following must hold for any honest output event S^h and
any adversarially observable output event S^A:

Pr[ MA(x) ∈ S^A | Mh(x) ∈ S^h ] ≤ exp(ε) · Pr[ MA(x) ∈ S^A | Mh(x) ∉ S^h ] + δ

Definition 5 is interpreted as follows: for any set of inputs and two different honest output
events (Mh(x) ∈ S^h vs. Mh(x) ∉ S^h), the output distribution MA(x) of the adversary
remains (ε, δ)-similar. In other words, any change in the honest output can only have a
bounded effect on the adversarially observable output distribution.
We highlight an immediate consequence of Definition 5 for economic applications: a
correlated-output DP mechanism cannot distribute funds to all clients where the supply of
output funds is known or public; an adversary corrupting n − 1 clients can trivially infer the
funds privately output to the single honest client by aggregating its own outputs and
observing the difference to the total supply. Thus:
▶ Lemma 6. Economic mechanisms evaluated in the trusted curator model which allocate a
fixed supply of “assets” over client outputs violate (ε, δ)-correlated-output differential privacy
for any δ < 1.
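The inference behind Lemma 6 is a one-line computation; the following sketch (with hypothetical names) makes the attack explicit.

def infer_honest_output(total_supply, corrupted_outputs):
    # An adversary corrupting n - 1 clients aggregates its own outputs and
    # subtracts from the known, fixed supply to learn the honest allocation.
    return total_supply - sum(corrupted_outputs)

assert infer_honest_output(100, [40, 33, 20]) == 7  # honest client received 7 units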
Applications with correlated outputs. We argue there exist many applications in the
trusted curator setting which require correlated outputs; most closely related to this work are
economic applications which govern the private allocation of finite resources, which include
auctions, markets, financial derivatives and other economic contracts.
Let m-round client inputs x̄ = (x1, ..., xm) and x̄′ = (x′1, ..., x′m) be neighboring if they only
differ in inputs submitted by a single client throughout the m rounds:

∃! client i : ∀ round r ∈ [m] : xr = x′r or xr ∼ x′r where xr(i) ≠ x′r(i)

Further, we denote an m-round output event for the adversary and honest client as S^A_mul =
S^A_1, ..., S^A_m and S^h_mul = S^h_1, ..., S^h_m respectively.
The m-round interaction is (ε^in, δ^in)-(ε^out, δ^out)-m-round differentially private if for any
two neighbouring m-round inputs x̄ and x̄′, and any adversarial and honest m-round events S^A_mul
and S^h_mul, the following hold:

Pr[ M^A_mul(x̄) ∈ S^A_mul ] ≤ exp(ε^in) · Pr[ M^A_mul(x̄′) ∈ S^A_mul ] + δ^in   (a)

Pr[ M^A_mul(x̄) ∈ S^A_mul | M^h_mul(x̄) = S^h_mul ] ≤ exp(ε^out) · Pr[ M^A_mul(x̄) ∈ S^A_mul | M^h_mul(x̄) ≠ S^h_mul ] + δ^out   (b)
The following theorem relates single-round DP (def. 7) with m-round DP (def. 8),
allowing us to achieve multi-round privacy from sequential interaction rounds between clients
and the trusted curator.
Proof. We use the basic version of the adaptive composition theorem for the proof. Since
the inputs in each round are either equal or neighboring, the m-round notion of input DP
(Eq. (a) in Def. 8) follows directly from applying the composition theorem for approximate
DP (see [25, Theorem 22]).
Towards satisfying the m-round notion of correlated-output DP (Eq. (b) in Def. 8), notice
that for each round we can define the event S^h that the honest output agrees with S^h_mul.
Definition 5 then tells us that each round is (εj, δj)-indistinguishable. Similarly to the input DP
case, we can use the composition theorem to obtain guarantees for the m-round correlated-output
DP. ◀
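As a sanity check on the composition arithmetic, a minimal sketch of basic composition (our own helper; [25, Theorem 22] also gives tighter advanced bounds):

def compose_basic(rounds):
    # Basic composition for approximate DP: epsilons and deltas add up
    # across sequential rounds.
    return (sum(e for e, _ in rounds), sum(d for _, d in rounds))

# e.g. a privacy epoch of m = 5 rounds, each (0.5, 1e-5)-DP,
# composes to (2.5, ~5e-5)-DP overall:
eps, delta = compose_basic([(0.5, 1e-5)] * 5)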
As a consequence of randomized matching, the executed buy and sell volumes may not sum
to zero; in any given round, the total buy volume may not equal the total sell
volume. We handle this mismatch in the second phase of rDP-volume-match.
2. Liquidity compensation. We introduce a liquidity provider, which compensates for
the mismatch between buy and sell volume; however, without any additional treatment, an
adversary corrupting n − 1 traders and the liquidity provider can trivially learn the honest
output from the implied flow of assets between corrupt and honest parties (i.e., output
correlation). To ensure that the corrupt liquidity provider's output is correlated-output
differentially private, a randomized amount of its liquidity is frozen; here, the parameterization
of rDP-volume-match permits the choice of an upper limit (ρmax) on frozen volumes of
both the risky and numeraire asset types, thereby bounding the opportunity cost imposed
on the liquidity provider. We define a privacy epoch over multiple rounds in Definition 10,
during which the privacy guarantees of rDP-volume-match hold; if the frozen liquidity is later
returned to the liquidity provider, round differential privacy is no longer guaranteed. In
practice, we argue that it is acceptable to guarantee round differential privacy for a bounded
number of rounds, during which honest users can complete their multi-round trading strategies
without front-running interference. For privacy guarantees to hold indefinitely, assets would
have to be burned. Note that assets are never minted, preserving the integrity of their supply.
We also note that, in principle, multiple liquidity providers could participate in each round
of rDP-volume-match; we model a single liquidity provider to simplify exposition and formal
proofs.
Next, we detail and motivate steps of rDP-volume-match and refer to Fig. 2 for a formal
description of the algorithm.
Orders in rDP-volume-match. Let a valid, privately submitted trade order be the tuple
(b, s, id), where b and s represent buy and sell bits respectively, and id is the trader identifier.
Thus, let (b, s) ∈ {(1, 0), (0, 1), (0, 0)} represent a buy, sell and dummy order respectively. We
fix buy and sell unit volumes such that a single sell and buy order always match in exchanged
asset value.
1a. Deterministic matching (1a. in Fig. 2). Let the orders sent to the trusted
curator by n clients be x = {(b1, s1, id1), ..., (bn, sn, idn)}. Then, the maximum possible number
of matches between buy ((b, s) = (1, 0)) and sell ((b, s) = (0, 1)) orders is computed, which is
simply the smaller of the number of buy and sell orders. Let the result of the deterministic
matching phase be the bit array match = (match1, ..., matchn), where bit matchi indicates whether
the i'th submitted order was matched (1) or not (0). Once the total number of preliminarily
matched pairs is computed, they are assigned randomly to the non-dummy orders, as sketched
below; dummy orders are never matched.
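A minimal cleartext sketch of step [1a] under the order format above (function names are our own; the oblivious MPC realization appears in Fig. 6):

import random

def deterministic_match(orders):
    # orders: list of (b, s, id) with (b, s) in {(1,0), (0,1), (0,0)}.
    # The maximum number of matched pairs is min(#buys, #sells); every order
    # on the smaller side matches, and u random orders on the larger side match.
    buys  = [i for i, (b, s, _) in enumerate(orders) if (b, s) == (1, 0)]
    sells = [i for i, (b, s, _) in enumerate(orders) if (b, s) == (0, 1)]
    u = min(len(buys), len(sells))
    smaller, larger = (buys, sells) if len(buys) <= len(sells) else (sells, buys)
    match = [0] * len(orders)  # dummy orders are never matched
    for i in smaller:
        match[i] = 1
    for i in random.sample(larger, u):
        match[i] = 1
    return match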
1b. Randomized response over matches (1b. in Fig. 2). Here, we apply the standard
randomized response mechanism [14, 27] to determine whether a trade or no-trade is returned
to the trader who submitted a valid trade order; for each bit in array match where matchi = 1,
the probability of the final tradei bit equaling 1 or 0 is given by:

Pr[ tradei = 1 | matchi = 1 ] = e^{ε^in} / (1 + e^{ε^in})   (1)
Pr[ tradei = 0 | matchi = 1 ] = 1 / (1 + e^{ε^in})

Conversely, for each bit matchi = 0 in match, and in the case that party i did not submit a
dummy order, the probability of the final tradei outcome being sampled as 1 or 0 is given by:

Pr[ tradei = 1 | matchi = 0 ] = 1 / (1 + e^{ε^in})   (2)
Pr[ tradei = 0 | matchi = 0 ] = e^{ε^in} / (1 + e^{ε^in})
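For intuition, a cleartext sketch of the randomized response in step [1b] (Eqs. 1 and 2); the names are our own, and the deployed version samples this randomness obliviously in MPC (Fig. 6):

import math
import random

def randomized_response(match, is_dummy, eps_in):
    # Keep each preliminary match bit with probability e^eps_in / (1 + e^eps_in),
    # flip it otherwise; dummy orders always return no-trade.
    p_keep = math.exp(eps_in) / (1 + math.exp(eps_in))
    trade = []
    for m, dummy in zip(match, is_dummy):
        if dummy:
            trade.append(0)
        elif random.random() < p_keep:
            trade.append(m)        # report the true match bit
        else:
            trade.append(1 - m)    # matched order unfilled, or unmatched order filled
    return trade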
Thus, for parties submitting valid, non-dummy trades, each of the final trading results in
array trade = [trade1, ..., traden] is obtained by independently sampling from distributions
Eq. (1) or (2) according to the matchi bit output from the deterministic matching subroutine
[1a]. Trader outputs are given by the array [(b1^out, s1^out, id1), ..., (bn^out, sn^out, idn)], where each
entry (bi^out, si^out, idi) returned to party i indicates whether a buy (bi^out, si^out) = (1, 0), sell
(bi^out, si^out) = (0, 1) or no trade (bi^out, si^out) = (0, 0) was executed.
We emphasize that a trade can only be executed if a non-dummy order was submitted
at the beginning of the round, and in the same direction (sell or buy) as intended by the
trader. Dummy orders always return (b^out, s^out) = (0, 0) as output; the fuzzy matching is
applied to valid, non-dummy orders only, and thus the trading "interface" remains the
same as in traditional volume matching algorithms: a trade order is either filled or not filled at all.
2a. Liquidity compensation for sampled trades (2a. in Fig. 2). Fuzzy matching of orders
via randomized response implies that traded volumes from step [1b] in rDP-volume-match do
not match precisely; for the trade outputs [(b1^out, s1^out, id1), ..., (bn^out, sn^out, idn)], the following
can occur:

Σ_{i∈[n]} si^out ≠ Σ_{i∈[n]} bi^out
Since sells and buys may not cancel out, we introduce the presence of a liquidity provider,
which compensates for this mismatch in traded asset liquidity. Then, the amount of the
numeraire asset (∆0) and risky asset (∆1) provided (∆ < 0) or received (∆ > 0) by the
liquidity provider is given as:

∆0 = −( Σ_{i∈[n]} si^out − Σ_{i∈[n]} bi^out ),   ∆1 = Σ_{i∈[n]} si^out − Σ_{i∈[n]} bi^out   (3)
The liquidity provider compensates for this liquidity imbalance resulting from fuzzy
matching; its initial balances (x0^liq, x1^liq) are updated to (x0^liq + ∆0, x1^liq + ∆1); however, note
that any change in the honest user's trade execution will affect ∆0, ∆1 with probability 1,
observable by the corrupted liquidity provider and violating correlated-output differential
privacy (Def. 5); relaxing the correlation between the final exchange of assets and the update
in funds observed by the liquidity provider can only imply the minting or removal of funds
in the round outputs.
We propose a compromise: a randomized mechanism to freeze liquidity, protecting
the privacy of traders for the m-round duration that the liquidity remains frozen; we call this
a privacy epoch (Def. 10). Our algorithm rDP-volume-match refrains from minting, preserving
the integrity of the underlying asset types.
2b. Randomized liquidity freezing (2b. in Fig. 2; L8 in Figure 6). The liquidity
provider inputs (x0^liq, x1^liq) amounts of numeraire (0) and risky (1) asset to a given round,
and is returned updated reserve balances (y0^liq, y1^liq) = (x0^liq + ∆0 − ρ0, x1^liq + ∆1 − ρ1), where
(ρ0, ρ1) is the volume of assets (0) and (1) frozen in the given round and returned at the end
of the privacy epoch, chosen to be sufficiently long to protect common trading strategies
executed over multiple rounds.
Note that it would be easy to freeze liquidity with perfect privacy if we had unbounded
liquidity. The liquidity provider could provide n units of each asset in every round and
liquidity would be frozen such that (y0^liq, y1^liq) = (x0^liq − n, x1^liq − n). However, the required
liquidity would not be feasible for large n. Our mechanism instead provides a trade-off
between privacy and frozen liquidity. In each round we sample ρ0 ∈ [0, ρmax] and set
ρ1 = ρmax − ρ0. We give the probability mass function Pfrz from which ρ0 is sampled in
Equation (4); this distribution is parameterized by a maximum amount of frozen liquidity
ρmax ≥ 1 in the round, and correlated-output differential privacy parameters ε^out, δ^out.

Pfrz(ρ0) = δ^out · exp(ε^out · ρ0)             for ρ0 ∈ [ 0 : ⌈(ρmax − 1)/2⌉ ]
           δ^out · exp(ε^out · (ρmax − ρ0))    for ρ0 ∈ [ ⌈(ρmax − 1)/2⌉ + 1 : ρmax ]   (4)
           0                                   otherwise
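A cleartext sketch of sampling from Pfrz in Eq. (4); it assumes a parameterization (ε^out, δ^out, ρmax) for which the mass sums to 1, as discussed below:

import math
import random

def sample_frozen_liquidity(eps_out, delta_out, rho_max):
    # Weights follow Eq. (4): mass delta_out * exp(eps_out * rho0) on the lower
    # half of [0, rho_max] and the mirrored mass on the upper half.
    mid = math.ceil((rho_max - 1) / 2)
    weights = [delta_out * math.exp(eps_out * r) if r <= mid
               else delta_out * math.exp(eps_out * (rho_max - r))
               for r in range(rho_max + 1)]
    rho0 = random.choices(range(rho_max + 1), weights=weights)[0]
    return rho0, rho_max - rho0    # (rho_0, rho_1) with rho_1 = rho_max - rho_0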
The sensitivity of ρ0 + ∆0 and ρ1 + ∆1 to the execution of a single trade is ±1. Distribution
Pfrz allocates probability mass across multiples of unit trade volume; neighbouring freezing
events ρ0 and ρ0 ± 1 are allocated probabilities which differ by a factor of exp(ε^out). Since we
limit the amount of frozen tokens to the range [0 : ρmax], we must accept a non-zero δ^out
probability of violating (ε^out)-correlated-output differential privacy (see Lemma 13).
Parameterization of Pfrz. The freeze distribution is parameterized by (ε^out, δ^out, ρmax),
but we note that these cannot be chosen independently; parameters are set so that the aggregate
probability mass of Pfrz is 1. We illustrate various parameterizations of ε^out, δ^out and
ρmax in Fig. 3. To achieve (2.5, 4.5 · 10⁻⁴)-correlated-output differential privacy, ρmax must
be set to 6, implying that up to 6 unit volumes of each asset type provided by the liquidity
provider will be frozen. Lowering ρmax reduces frozen liquidity, but implies higher privacy
parameters δ^out or ε^out. For ρmax = 6 and rounds exceeding 10³ submitted orders
(as benchmarked in §5.3), we argue the opportunity cost of freezing up to 6 unit volumes of
each asset represents an acceptable cost for round differential privacy.
Cost of liquidity provisioning. In fuzzy order matching, the worst-case liquidity mismatch
occurs when all submitted orders are in the same direction and are all executed or fulfilled.
Here, the maximum mismatch equals the number of clients submitting orders in the round.
Thus, the liquidity provider has to provide as much liquidity as there are clients (x_{0,1}^liq = n),
in addition to ρmax of each asset type in each round. However, in rDP-volume-match, the
exchange rate is decided a priori according to an external reference price; we argue that the
vast majority of the liquidity can be sourced directly from the external market trading at
the reference price. In the blockchain context, this could be a large Automated Market
Maker with sufficient liquidity, thereby reducing the amount of liquidity required from the
liquidity provider to just ρmax of each asset type. We leave the detailed analysis of effective
incentivization of liquidity provisioning to future work; we imagine traders submitting trade
fees in each round, but do not model this explicitly.

Figure 3 We plot selected parameterizations of Pfrz in Eq. 4 (δ^out against ρmax for ε^out ∈ {0.0, 0.5, 1.0, 1.5, 2.0, 2.5}). The choice of parameters represents a trade-off between degree of privacy (ε^out, δ^out) and frozen funds (ρmax).
▶ Definition 10 (Privacy epoch). We define a privacy epoch over the repeated execution of
rDP-Volume-match for m rounds, during which the participating liquidity provider contributes
amounts of risky and numéraire assets to be frozen in each round; all frozen funds are
returned when the m rounds of the privacy epoch are completed.
We emphasize that the following privacy properties hold for client in- and outputs during
the duration of a single privacy epoch; once the frozen funds are returned, round differential
privacy no longer holds. For purposes of mitigating front-running, we argue that the epoch
duration should be chosen to be sufficiently long to permit the execution of common, long-
running honest user strategies. Alternatively, if the frozen funds provided by the liquidity
provider are never returned, i.e., burnt, the following privacy properties hold absolutely.
We refer to Appendix A for formal proofs of the following theorem and lemmas which
demonstrate round differential privacy for rDP-Volume-matching.
Theorem 11 follows directly from Lemmas 13 and 14, while the latter is demonstrated by
leveraging bounds from Lemmas 12 and 13; we refer to the proof strategy in Appendix A.
Then, the probability distribution over which the clearing price is sampled is given by the
exponential mechanism parameterized by utility function u_x, which in turn is determined
from the submitted trade orders x. Thus, the probability of each discrete price rj ∈ r is
given by:

Pr[j] = exp(ε^in_1 · u_x(j)/2) / Σ_{i∈[|r|]} exp(ε^in_1 · u_x(i)/2)   (5)

Since the exponential mechanism is (ε^in_1, 0)-input differentially private over all inputs
([17]), we consume ε^in_1 of our (ε^in, 0)-input differential privacy budget when outputting the
clearing price computed over x, leaving another ε^in_2 for the subsequent rDP-volume-match at
price r, such that ε^in = ε^in_1 + ε^in_2.
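For intuition, a cleartext sketch of the exponential mechanism over discrete prices (Eq. 5); the utility function is an assumed input here, and the MPC realization (FindPrice, Fig. 8) computes the weights obliviously:

import math
import random

def sample_clearing_price(prices, utility, eps_in1):
    # prices: candidate prices r_1, ..., r_l; utility(j) returns u_x(j),
    # computed from the submitted orders x. Pr[j] follows Eq. (5), with
    # random.choices normalizing the weights as in the denominator.
    weights = [math.exp(eps_in1 * utility(j) / 2) for j in range(len(prices))]
    return random.choices(prices, weights=weights)[0]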
Sample: On input P, the probability distribution of a discrete random variable X that may
take k different values x1, ..., xk:
1. Sample ⟨z⟩ ∈ (0, 1] uniformly at random.
2. For all i: Fi ← Σ_{j=1}^{i} P(X = xj)
3. For all i: ⟨ci⟩ ← (Fi ≥ ⟨z⟩).
4. For all i: ci ← Open(⟨ci⟩).
5. Return xj for the lowest j such that cj = 1.
Figure 4 Oblivious sampling of a discrete random variable with distribution P.
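In the clear, the Sample procedure is standard inverse-transform sampling over the cumulative distribution, e.g.:

import random

def sample_discrete(pmf):
    # pmf: list of (value, probability) pairs summing to 1.
    z = random.random() or 1.0       # z in (0, 1]
    cumulative = 0.0
    for value, p in pmf:
        cumulative += p              # F_i = sum_{j <= i} P(X = x_j)
        if cumulative >= z:          # first index with F_i >= z
            return value
    return pmf[-1][0]                # guard against floating-point rounding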
Input correctness check. We must verify that each submitted order is well-formed, i.e.,
(bi, si) ∈ {(1, 0), (0, 1), (0, 0)}. To do so, we run the InputCheckVM procedure in Figure 5,
where orders without the correct format are rejected.
InputCheckVM: On input x′ = [x′1 , ..., x′n ], where x′i = (⟨bi ⟩, ⟨si ⟩, ⟨idi ⟩) and bi , si , idi ∈ Fp :
Check validity of input bits: (0, 0) ∨ (0, 1) ∨ (1, 0).
1. Sample αi , βi , γi uniformly at random.
2. ⟨ti ⟩ ← αi · (⟨bi ⟩ · ⟨bi ⟩ − ⟨bi ⟩) + βi · (⟨si ⟩ · ⟨si ⟩ − ⟨si ⟩) + γi · (⟨bi ⟩ · ⟨si ⟩)
3. ti ← Open(⟨ti ⟩)
4. If ti = 0 then add x′i to a list x, otherwise reject x′i .
5. Return x.
Figure 5 Input correctness check for the rDP-volume-match algorithm (from [11], Figure 3).
Fuzzy order matching. To achieve the desired differential privacy guarantees, we avoid
revealing any of the secret shared values throughout the computation. This is unlike the
Bucket Match mechanism from [11], where the trading direction with the most total volume
was revealed, and the matching procedure was simplified by opening successful orders as soon
as they were matched. As a consequence, we obtain a more complex, oblivious procedure,
described in MatchVol in Figure 6. Here, we calculate the cumulative total volume for each
i and for each direction, thus obtaining ⟨σi^b⟩ and ⟨σi^s⟩ (note that we need to perform the
calculations in both directions to hide which direction has more total volume). We then
compare ⟨u⟩ (the total matched volume in each direction) with the cumulative volume at
each index i, and accept every order i until ⟨u⟩ is exceeded. A randomized response over the
matches is obtained by using the randomness ⟨πi⟩ sampled during MPC pre-processing.
Liquidity compensation. This phase of the oblivious algorithm, realized in step 12 of the
MatchVol procedure (Fig. 6), is identical to the liquidity compensation procedure described
in Section 4.1, except that we are now operating over secret shared values.
NoiseGen: Use Sample from Figure 4 to compute the noise for steps 5 and 9:
- For all i: ⟨πi ⟩ ← Sample(Prr ) (def. in Eq. 1).
- ⟨ρ0 ⟩, ⟨ρ1 ⟩ ← Sample(Pfrz ) (def. in Eq. 4)
rDP-volume-matching: On input x′, x^liq, submitted by (P1^trd, ..., Pn^trd) and P^liq, respectively,
where x′ = [x′1, ..., x′n], x′i = (⟨bi⟩, ⟨si⟩, ⟨idi⟩), x^liq = (⟨x0^liq⟩, ⟨x1^liq⟩) and bi, si, idi, x0^liq, x1^liq ∈ Fp:
1. Let x ← InputCheckVM(x′)
2. Let y, y^liq ← MatchVol(x, x^liq)
3. Return y = [y1, ..., yn], y^liq to (P1^trd, ..., Pn^trd) and P^liq, respectively.
Subroutines invoked by rDP-volume-matching
MatchVol: On input x = [x1, ..., xn] and x^liq = (⟨x0^liq⟩, ⟨x1^liq⟩):
Step [1a] Deterministic matching of buy & sell orders
1. For all i: ⟨B⟩ ← ⟨B⟩ + ⟨bi⟩, and ⟨S⟩ ← ⟨S⟩ + ⟨si⟩
2. Let ⟨c⟩ ← (⟨S⟩ > ⟨B⟩) and ⟨u⟩ ← ⟨c⟩ · ⟨B⟩ + (1 − ⟨c⟩) · ⟨S⟩.
3. For all i: ⟨bigi⟩ ← ⟨c⟩ · ⟨si⟩ + (1 − ⟨c⟩) · ⟨bi⟩.
4. For all i, let ⟨σi^b⟩ ← Σ_{h=1}^{i} ⟨bh⟩ and ⟨σi^s⟩ ← Σ_{h=1}^{i} ⟨sh⟩.
5. For all i, let ⟨σi′⟩ ← ⟨c⟩ · ⟨σi^s⟩ + (1 − ⟨c⟩) · ⟨σi^b⟩
6. For all i, let ⟨matchi′⟩ ← (⟨σi′⟩ ≤ ⟨u⟩) · ⟨bigi⟩
7. For all i: ⟨matchi⟩ ← (1 − ⟨c⟩) · ⟨si⟩ + ⟨c⟩ · ⟨bi⟩ + ⟨matchi′⟩
8. Set match = [⟨match1⟩, ..., ⟨matchn⟩]
Step [1b] Randomized response over order matches
9. For all i:
- Let ⟨tradei⟩ ← ⟨πi⟩ · ⟨matchi⟩ + (1 − ⟨πi⟩) · (1 − ⟨matchi⟩)
- Let ⟨bi^out⟩ ← ⟨bi⟩ · ⟨tradei⟩
- Let ⟨si^out⟩ ← ⟨si⟩ · ⟨tradei⟩
- Add yi = (⟨bi^out⟩, ⟨si^out⟩, ⟨idi⟩) to the output list y.
Step [2a] Liquidity compensation for sampled trades
10. For all i: ⟨o^b⟩ ← ⟨o^b⟩ + ⟨bi^out⟩, and ⟨o^s⟩ ← ⟨o^s⟩ + ⟨si^out⟩
InputCheckDA: On input x′ = [x′1, ..., x′n], where x′i = (wi, ⟨diri⟩, ⟨idi⟩), wi = [⟨wi1⟩, ..., ⟨wil⟩]
and wij, diri, idi ∈ Fp:
Check that all inputs are bits.
1. For all j: sample αij uniformly at random.
2. Sample βi uniformly at random.
3. ⟨ti⟩ ← αi1 · (⟨wi1⟩ · ⟨wi1⟩ − ⟨wi1⟩) + ... + αil · (⟨wil⟩ · ⟨wil⟩ − ⟨wil⟩)
4. ⟨ti⟩ ← ⟨ti⟩ + βi · (⟨diri⟩ · ⟨diri⟩ − ⟨diri⟩)
5. ti ← Open(⟨ti⟩)
6. If ti ≠ 0 then reject x′i. Otherwise, continue to the next step.
7. For all j, let ⟨bij⟩ = ⟨wij⟩ · (1 − ⟨diri⟩) and ⟨sij⟩ = ⟨wij⟩ · ⟨diri⟩.
8. Add xi = (⟨bi1⟩, ⟨si1⟩, ..., ⟨bil⟩, ⟨sil⟩, ⟨idi⟩) to a list x.
9. Return x.
Figure 7 Input correctness check for rDP-double-auction.
5.3 Experiments
To benchmark the performance of our MPC algorithms, we implemented and executed them
using Scale-Mamba [3] with Shamir secret sharing between 3 parties. All parties run
on identical machines with an Intel i9-9900 CPU and 128 GB of RAM. The ping time between
the machines is 1.003 ms. Precise numerical values for the results presented here are given
in Appendix B.
rDP-double-auction: On input x′, x^liq, submitted by (P1^trd, ..., Pn^trd) and P^liq, respectively,
where x′ = [x′1, ..., x′n], x′i = (wi, ⟨diri⟩, ⟨idi⟩), wi = [⟨wi1⟩, ..., ⟨wil⟩], x^liq = (⟨x0^liq⟩, ⟨x1^liq⟩) and
wij, diri, idi, x0^liq, x1^liq ∈ Fp, as well as a list of prices r = [r1, ..., rl]:
1. Let x ← InputCheckDA(x′)
2. x^match, ⟨cR⟩, ⟨uR⟩ ← FindPrice(x)
3. Execute MatchVol from Figure 6 from step 3 with inputs x^match = [x1^match, ..., xn^match], x^liq, ⟨cR⟩
and ⟨uR⟩.
Subroutine invoked by rDP-double-auction
FindPrice: On input x = [x1, ..., xn], where xi = (⟨bi1⟩, ⟨si1⟩, ..., ⟨bil⟩, ⟨sil⟩, ⟨idi⟩):
1. For all j: ⟨Bj⟩ ← ⟨Bj⟩ + ⟨b1j⟩ + ... + ⟨bnj⟩, and ⟨Sj⟩ ← ⟨Sj⟩ + ⟨s1j⟩ + ... + ⟨snj⟩.
2. For all j, let ⟨cj⟩ ← (⟨Sj⟩ > ⟨Bj⟩) and ⟨uj⟩ ← ⟨cj⟩ · ⟨Bj⟩ + (1 − ⟨cj⟩) · ⟨Sj⟩.
3. Calculate weights ⟨W1⟩, ..., ⟨Wl⟩ using Algorithm 3 from [5] on input ⟨u1⟩, ..., ⟨ul⟩.
4. For all j: ⟨Fj⟩ ← Σ_{h=1}^{j} ⟨Wh⟩
5. Sample ⟨z′⟩ ∈ (0, 1] uniformly at random and let ⟨z⟩ ← ⟨z′⟩ · ⟨Fl⟩.
The throughput of rDP-volume-match should still be high enough for most real-world applications,
especially considering the improved privacy it provides.
Online phase of rDP-double-auction. The runtimes for the online phase of the rDP-
double-auction algorithm for an increasing number of submitted orders and different values of
ε^in_1 can be found in Figure 9. These runtimes include the InputCheckDA procedure described
in Figure 7, as well as the FindPrice procedure from Figure 8 and the MatchVol procedure from
Figure 6 starting from step 3.
Figure 10 Runtimes in seconds (with logarithmic scale on the x-axis) for the online phase of
the rDP-double-auction algorithm with different values of ε^in_1, showing: (left) selection between 10
different price points; (right) selection between 100 different price points. ε^in_1 is the amount of input
privacy budget consumed when executing the exponential mechanism to find the clearing price.
The average runtime of InputCheckDA is 0.00030 seconds (0.30 ms) per order when
considering 10 price points and 0.00145 seconds (1.45 ms) per order when considering 100
price points. The percentage contribution of this part of the algorithm to the total runtime
becomes more significant as the number of orders increases, constituting around 50% of the
total runtime across all ε^in_1 values when considering 10 thousand orders with 10 price points,
and 70% to 80% when considering 10 thousand orders with 100 price points, depending on
the choice of ε^in_1. The FindPrice procedure, on the other hand, does not get significantly
slower as the number of orders increases. This is also the only part of the algorithm
that depends on the choice of ε^in_1, since the method for calculating the weights associated
with each price point changes depending on ε^in_1, as described in Section 5.2. As expected,
the difference in runtime for different ε^in_1 values becomes more noticeable when considering more
price points, with FindPrice taking around 2.2 seconds more with ε^in_1 = ln(2)/2 than with
ε^in_1 = 2 ln(2). Nonetheless, this increase remains comparatively small when we consider large
numbers of orders.
6 Future work
In this work, we have initiated the study of differential privacy in the trusted curator model,
resulting in a definitional framework of round differential privacy, which protects both
private inputs and private, yet correlated, outputs. We argue this setting applies to many
economic or financial application domains. We introduce round differentially private market
mechanisms for traditional finance, but also decentralized finance when instantiated with
privacy-preserving smart contracts [4].
We highlight the investigation of general correlated-output differentially private mechanisms
for common output correlation classes as an interesting avenue for future work. In the
References
1 Abbas Acar, Z Berkay Celik, Hidayet Aksu, A Selcuk Uluagac, and Patrick McDaniel. Achieving
secure and differentially private computations in multiparty settings. In 2017 IEEE Symposium
on Privacy-Aware Computing (PAC), pages 49–59. IEEE, 2017. https://doi.org/10.1109/
PAC.2017.12.
2 Mehrdad Aliasgari, Marina Blanton, Yihua Zhang, and Aaron Steele. Secure computation on
floating point numbers. 20th Annual Network and Distributed System Security Symposium,
NDSS 2013, San Diego, California, USA, 2013.
3 Abdelrahaman Aly, Kelong Cong, Daniele Cozzo, Marcel Keller, Emmanuela Orsini, Dragos
Rotaru, Oliver Scherer, Peter Scholl, Nigel P. Smart, Titouan Tanguy, and Tim Wood. SCALE-
MAMBA v1.12: Documentation, 2021. URL: https://homes.esat.kuleuven.be/~nsmart/
SCALE/Documentation.pdf.
4 Carsten Baum, James Hsin-yu Chiang, Bernardo David, and Tore Kasper Frederiksen. Eagle:
Efficient Privacy Preserving Smart Contracts. Cryptology ePrint Archive, 2022. https://eprint.iacr.org/2022/1435.
5 Jonas Böhler and Florian Kerschbaum. Secure multi-party computation of differentially
private median. In 29th USENIX Security Symposium (USENIX Security 20), pages 2147–
2164. USENIX Association, August 2020. URL: https://www.usenix.org/conference/
usenixsecurity20/presentation/boehler.
6 John Cartlidge, Nigel P Smart, and Younes Talibi Alaoui. MPC joins the dark side. In
Proceedings of the 2019 ACM Asia Conference on Computer and Communications Security,
pages 148–159, 2019. https://doi.org/10.1145/3321705.3329809.
7 John Cartlidge, Nigel P Smart, and Younes Talibi Alaoui. Multi-party computation mechanism
for anonymous equity block trading: A secure implementation of Turquoise Plato Uncross.
Intelligent Systems in Accounting, Finance and Management, 28(4):239–267, 2021. https://doi.org/10.1002/isaf.1502.
8 David Chaum, Debajyoti Das, Farid Javani, Aniket Kate, Anna Krasnova, Joeri De Ruiter,
and Alan T Sherman. cmix: Mixing with minimal real-time asymmetric cryptographic
operations. In Applied Cryptography and Network Security: 15th International Conference,
ACNS 2017, Kanazawa, Japan, July 10-12, 2017, Proceedings 15, pages 557–578. Springer,
2017. https://doi.org/10.1007/978-3-319-61204-1_28.
9 David L Chaum. Untraceable electronic mail, return addresses, and digital pseudonyms.
Communications of the ACM, 24(2):84–90, 1981. https://doi.org/10.1145/358549.358563.
10 Tarun Chitra, Guillermo Angeris, and Alex Evans. Differential privacy in constant function
market makers. Cryptology ePrint Archive, 2021. https://eprint.iacr.org/2021/1101.
11 Mariana Botelho da Gama, John Cartlidge, Antigoni Polychroniadou, Nigel P Smart, and
Younes Talibi Alaoui. Kicking-the-bucket: Fast privacy-preserving trading using buckets.
Cryptology ePrint Archive, 2021. To appear at FC’22. https://eprint.iacr.org/2021/1549.
12 Mariana Botelho da Gama, John Cartlidge, Nigel P. Smart, and Younes Talibi Alaoui. All for
one and one for all: Fully decentralised privacy-preserving dark pool trading using multi-party
computation. Cryptology ePrint Archive, Paper 2022/923, 2022. https://eprint.iacr.org/2022/923.
13 Cynthia Dwork, Frank McSherry, Kobbi Nissim, and Adam Smith. Calibrating noise to
sensitivity in private data analysis. In Theory of cryptography conference, pages 265–284.
Springer, 2006. https://doi.org/10.1007/11681878_14.
14 Cynthia Dwork, Aaron Roth, et al. The algorithmic foundations of differential privacy.
Foundations and Trends® in Theoretical Computer Science, 9(3–4):211–407, 2014. http://dx.doi.org/10.1561/0400000042.
15 Fabienne Eigner, Aniket Kate, Matteo Maffei, Francesca Pampaloni, and Ivan Pryvalov.
Differentially private data aggregation with optimal utility. ACSAC ’14, pages 316–325, New
York, NY, USA, 2014. Association for Computing Machinery. https://doi.org/10.1145/2664243.2664263.
16 Peter Kairouz, Sewoong Oh, and Pramod Viswanath. The composition theorem for differential
privacy. In ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 1376–1385.
JMLR.org, 2015.
17 Frank McSherry and Kunal Talwar. Mechanism design via differential privacy. In 48th Annual
IEEE Symposium on Foundations of Computer Science (FOCS’07), pages 94–103. IEEE, 2007.
https://doi.org/10.1109/FOCS.2007.66.
18 United States of America before the Securities and Exchange Commission. In the matter of
itg inc. and alternet securities, inc., exchange act release no. 75672. https://www.sec.gov/
litigation/admin/2015/33-9887.pdf, 12 Aug 2015.
19 United States of America before the Securities and Exchange Commission. In the matter of
pipeline trading systems llc, et al., exchange act release no. 65609. https://www.sec.gov/
litigation/admin/2011/33-9271.pdf, 24 Oct 2011.
20 United States of America before the Securities and Exchange Commission. In the matter
of liquidnet, inc., exchange act release no. 72339. https://www.sec.gov/litigation/admin/
2014/33-9596.pdf, 6 Jun 2014.
21 Manas Pathak, Shantanu Rane, and Bhiksha Raj. Multiparty differential privacy via
aggregation of locally trained classifiers. Advances in Neural Information Processing
Systems, 23, 2010. https://proceedings.neurips.cc/paper_files/paper/2010/file/0d0fd7c6e093f7b804fa0150b875b868-Paper.pdf.
22 Sikha Pentyala, Davis Railsback, Ricardo Maia, Rafael Dowsley, David Melanson, Anderson
Nascimento, and Martine De Cock. Training differentially private models with secure multiparty
computation. arXiv preprint arXiv:2202.02625, 2022. https://arxiv.org/abs/2202.02625.
23 Penumbra. ZSwap documentation. https://protocol.penumbra.zone/main/zswap.html,
2023.
24 Monica Petrescu and Michael Wedow. Dark pools in European equity markets: emergence,
competition and implications. ECB Occasional Paper, (193), 2017. https://doi.org/10.2866/555710.
25 Thomas Steinke. Composition of differential privacy & privacy amplification by subsampling.
CoRR, abs/2210.00597, 2022.
26 Sameer Wagh, Xi He, Ashwin Machanavajjhala, and Prateek Mittal. Dp-cryptography:
marrying differential privacy and cryptography in emerging applications. Communications of
the ACM, 64(2):84–93, 2021. https://doi.org/10.1145/3418290.
27 Stanley L Warner. Randomized response: A survey technique for eliminating evasive answer
bias. Journal of the American Statistical Association, 60(309):63–69, 1965. https://doi.org/
10.1080/01621459.1965.10480775.
A Proofs
▶ Theorem 11. rDP-volume-match is (ε^in + ε^out, δ^out)-(ε^out, δ^out)-m-round differentially
private.
Proof. (Theorem 11) Follows directly from Lemmas 13 and 14, stated and proven
below, for the duration of the privacy epoch.
We note that round differential privacy must be analyzed in the presence of corrupted
traders and a corrupted liquidity provider; even if the liquidity provider only interacts with
the "outputs" of clients by compensating for the liquidity mismatch, it can potentially infer
knowledge about honest inputs from its output, complicating the proof.
Thus, our proof strategy is summarized as follows: we first state and prove Lemma 12 to
demonstrate input differential privacy against corrupted traders; correlated-output differential
privacy against corrupted traders holds trivially, as trader outputs are independently sampled.
We then demonstrate correlated-output differential privacy against a corrupted liquidity
provider in Lemma 13 (in isolation); finally, input differential privacy against both corrupted
traders and a corrupted liquidity provider is considered in Lemma 14, which leverages privacy
bounds from both Lemmas 12 and 13. ◀
Proof. (Lemma 12) Let x, x′ be neighboring vectors of trade orders (Definition 2) submitted
to the rDP-volume-match algorithm. Suppose for the rest of the proof that the honest
user submits a buy order in one of x, x′ and a dummy order in the other. The proof is the
same in the case of a sell order. The key to the proof is that at most one adversarial order is
affected in the deterministic matching by changing the honest user's trade order.
Let b^A and s^A denote the number of adversarial buy and sell orders, respectively. If
b^A < s^A, the number of matches increases by 1 by changing the honest user's trade from a
dummy to a buy order. Here, the (b^A + 1)'th adversarial sell order changes from unmatched
to matched and all other orders are unaffected. For b^A ≥ s^A, the number of matches is the
same for both inputs. If the honest user's buy order is not matched, changing the input has
no impact on any matches. However, if it is matched, the s^A'th adversarial buy order changes
from matched to unmatched. All other adversarial trades are unaffected.
We need to show that the probability of the adversary observing any trade output
changes by a factor of at most e^{ε^in} between the two inputs. It is easy to see that
this holds for the case where no adversarial trades are changed in step [1a], so it remains to
show this for the case where one trade is changed. Let j be the index of the trader whose
trade was changed between matched and unmatched in the deterministic phase and let t denote
a binary vector of trade outcomes. From the independence of the samples in step [1b] and
Equations (1) & (2), we have for any t:

∏_{i∈A} Pr[ tradei = ti | x ] / Pr[ tradei = ti | x′ ] = Pr[ tradej = tj | x ] / Pr[ tradej = tj | x′ ] ≤ ( e^{ε^in}/(1 + e^{ε^in}) ) / ( 1/(1 + e^{ε^in}) ) = e^{ε^in}
Proof. (Lemma 13) As per Definition 5, we must demonstrate that adversarial output event
probability distributions are (ε^out, δ^out)-indistinguishable to a change in the honest user's
output; the adversarial output view is composed of corrupted trader outputs and the view of
the corrupted liquidity provider.
Note that inputs are fixed in correlated-output differential privacy. Thus, the output
match = [match1, ..., matchn] of deterministic matching in step [1a] remains unaffected;
the distribution of trader outputs also remains unchanged in step [1b]. Correlated-output
differential privacy holds trivially for the adversarial trader output view.
The corrupted liquidity provider provides reserves (x0^liq, x1^liq) and observes the updated
reserves (y0^liq, y1^liq) = (x0^liq + ∆0 − ρ0, x1^liq + ∆1 − ρ1), where ∆0 and ∆1 := −∆0 are based on
the liquidity mismatch from step [1b] and ρ0 and ρ1 are noisy values as described in step [2b].
The sensitivity of ∆0 to the honest output is 1. An adversary who knows the adversarial
trade outputs can compute the mismatch between them and therefore knows ∆0 up to an error
of at most 1. For simplicity of presentation, we assume that there is no mismatch between
adversarial sell and buy orders. For the rest of the proof we assume that the honest user
issued a buy order and S^h is the event where the order was not fulfilled. That is, ∆0 = 0
and ∆′0 = 1. The other cases follow from symmetric proofs.
We split the outputs into two categories. For any event where y0^liq < x0^liq, we know that
ρ0 > 0 liquidity was frozen when conditioning on S^h. At the same time, ρ0 − 1 liquidity was
frozen when conditioning on S^h not happening. We can see directly from the probability
mass in Equation (4) that the conditional probabilities of observing any such output differ
by a factor of e^{ε^out}. The probability of observing the special case of y0^liq = x0^liq is 0 when the
buy order was fulfilled, because it implies that ρ0 = −1. In contrast, the probability is δ^out
when conditioning on S^h. Therefore, the algorithm satisfies (ε^out, δ^out)-correlated-output
differential privacy, since for any event S^A we have

Pr[ MA(x) ∈ S^A | Mh(x) ∈ S^h ] ≤ exp(ε^out) · Pr[ MA(x) ∈ S^A | Mh(x) ∉ S^h ] + δ^out ◀
B Experimental Results
Here we provide the precise numerical values for the results presented in Section 5.3.
Table 1 Runtimes in seconds for the online phase of the rDP-volume-match algorithm.
Table 2 Runtimes in seconds for the online phase of the rDP-double-auction algorithm with
ε^in_1 = 2 ln(2). Note that the FindPrice procedure includes both price determination and order
matching.
Table 3 Runtimes in seconds for the online phase of the rDP-double-auction algorithm with
ε^in_1 = ln(2). Note that the FindPrice procedure includes both price determination and order matching.
Table 4 Runtimes in seconds for the online phase of the rDP-double-auction algorithm with
ε^in_1 = ln(2)/2. Note that the FindPrice procedure includes both price determination and order
matching.