Papers updated in last 7 days (33 results)
The Supersingular ℓ-Isogeny Path and Endomorphism Ring Problems: Tighter Unconditional Reductions
In this paper we show that the supersingular ℓ-isogeny path problem and the endomorphism ring problem are unconditionally equivalent under polynomial-time reductions, given access to a factoring oracle. We show that access to such an oracle suffices to solve the Quaternion Path Problem of [KLPT14], removing the heuristic and GRH-based assumptions of previous work. Using Shor’s factorization algorithm, this implies unconditional quantum polynomial-time equivalences between the ℓ-Isogeny Path and Endomorphism Ring problems. Our result strengthens Wesolowski’s earlier reduction by removing the Generalized Riemann Hypothesis assumption and achieving polynomial-time equivalences of significantly lower polynomial degree; the original reduction incurs a polynomial overhead of impractically high degree.
Meta-PBS: Compact High-Precision Programmable Bootstrapping
Currently, most FHE schemes realize bootstrapping through the linear-decrypt-then-round paradigm. For the programmable bootstrapping (PBS) of TFHE, this means the lookup table (LUT) needs a redundancy of $O(\sqrt{N})$ to be able to remove the modulus-switching noise, which limits the plaintext modulus of PBS to $O(\sqrt{N})$. We remove this redundancy requirement by proposing the Meta-PBS framework, which allows us to start with under-redundant or non-redundant LUTs. Meta-PBS iteratively blind-rotates the LUT, during which the LUT redundancy gradually increases; the bootstrapping outputs the correct result once the redundancy exceeds the noise bound. Asymptotically, Meta-PBS requires $O(1)$ blind rotations in dimension $N$ to evaluate a negacyclic function modulo $2N$, whereas PBS needs $O(\sqrt{N})$ blind rotations. Meta-PBS also enjoys additive noise growth, allowing for more homomorphic arithmetic on bootstrapped ciphertexts. We further modify Meta-PBS to support the simultaneous evaluation of multiple LUTs on the same ciphertext and/or arbitrary LUTs. In our implementation, when evaluating a 12-bit negacyclic function, Meta-PBS outperforms EBS (PKC'23) by a factor of 79. When evaluating an arbitrary function on an 8-bit LWE ciphertext, Meta-PBS halves the running time of the Refined LHE (CCS'25) while allowing for a 27-times-larger post-bootstrap linear combination.
UPPR: Universal Privacy-Preserving Revocation
Self-Sovereign Identity (SSI) frameworks enable individuals to receive and present digital credentials in a user-controlled way. Revocation mechanisms ensure that invalid or withdrawn credentials cannot be misused. These revocation mechanisms must be scalable (e.g., at national scale) and preserve core SSI principles such as privacy, user control, and interoperability. Achieving both is hard, and finding a suitable trade-off remains a key challenge in SSI research. This paper introduces UPPR, a revocation mechanism for One-Show Verifiable Credentials (oVCs) and unlinkable Anonymous Credentials (ACs). Revocations are managed using per-credential Verifiable Random Function (VRF) tokens, which are published in a Bloom filter cascade on a blockchain. Holders prove non-revocation via a VRF proof for oVCs or a single Zero-Knowledge Proof for ACs. The construction prevents revocation status tracking, allows holders to stay offline, and hides issuer revocation behavior. We analyze the privacy properties of UPPR and provide a prototype implementation on Ethereum. Our implementation enables off-chain verification at no cost. On-chain checks cost 0.56-0.84 USD, while issuers pay only 0.00002-0.00005 USD per credential to refresh the revocation state.
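The non-revocation check described above relies on a Bloom filter cascade: each level encodes the false positives of the previous level among the opposite set, so exact membership can be decided despite per-filter false positives. Below is a minimal Python sketch of the cascade idea only; the parameters are toy values, plain byte strings stand in for the VRF tokens, and nothing here models the on-chain publication or the zero-knowledge proofs of the actual UPPR construction.

```python
import hashlib

class Bloom:
    """A small Bloom filter with k SHA-256-derived hash positions."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)
    def _positions(self, item):
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(2, "big") + item).digest()
            yield int.from_bytes(h[:4], "big") % self.m
    def add(self, item):
        for j in self._positions(item):
            self.bits[j] = 1
    def __contains__(self, item):
        return all(self.bits[j] for j in self._positions(item))

def build_cascade(revoked, valid):
    # Level 0 encodes the revoked tokens; each subsequent level encodes the
    # false positives of the previous level, alternating between the sets.
    cascade, inc, exc = [], set(revoked), set(valid)
    while inc:
        f = Bloom()
        for t in inc:
            f.add(t)
        false_positives = {t for t in exc if t in f}
        cascade.append(f)
        inc, exc = false_positives, inc
    return cascade

def is_revoked(cascade, token):
    # Walk levels until some filter rejects; the parity of that level
    # (or of the cascade length, if every filter accepts) gives the answer.
    for depth, f in enumerate(cascade):
        if token not in f:
            return depth % 2 == 1
    return len(cascade) % 2 == 1

revoked = [f"r{i}".encode() for i in range(20)]
valid = [f"v{i}".encode() for i in range(200)]
cascade = build_cascade(revoked, valid)
```

By construction, every enrolled token is classified exactly, even though each individual filter admits false positives; only tokens outside both sets can be misclassified.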
Efficient Multi-instance Vector Commitment and Application to Post-quantum Signatures
The MPC-in-the-Head (MPCitH) and the VOLE-in-the-Head (VOLEitH) paradigms have recently been utilized to develop post-quantum signatures. Both rely on a mechanism that allows the signer to commit to $N$ values and then later open all-but-one. In particular, MPCitH-based signatures achieve this using a puncturable pseudorandom function (PPRF) primitive, while VOLEitH-based signatures utilize an all-but-one vector commitment scheme.
A novel and efficient multi-instance PPRF, introduced by Bui et al. (Asiacrypt'24), provides a significant performance boost for MPCitH-based signatures, employing only a fixed-key block cipher to instantiate the PPRF while being provably secure in the ideal cipher model. This work presents an efficient multi-instance vector commitment derived from the multi-instance PPRF. Our vector commitment scheme is secure in the multi-instance setting, i.e., when handling repeated parallel executions. As a result, it can be directly applied to enhance the efficiency of VOLEitH-based signatures.
We implemented our vector commitment scheme in FAEST (\url{faest.info}), a round-one candidate in the NIST post-quantum cryptography standardization process. Our experiments show a $10\%$--$27\%$ improvement in both signing and verification times across various settings.
Additionally, we adapt the VOLE-in-the-Head framework to the Multivariate Quadratic (MQ) problem, reducing the signature size. Specifically, at the 128-bit security level, our approach yields signature sizes between 3.1 KB and 3.6 KB with a small field-size parameter.
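The all-but-one opening that both paradigms rely on is commonly realized with a GGM-style seed tree: to withhold one leaf, the committer reveals the sibling seed at every level along the path to that leaf, from which the receiver can derive all other leaves. The sketch below illustrates this standard construction only, not the specific multi-instance scheme of Bui et al.; SHA-256 stands in for the fixed-key block cipher.

```python
import hashlib

def expand(seed):
    # Length-doubling PRG from SHA-256 (illustrative; real schemes use
    # a fixed-key block cipher here).
    return (hashlib.sha256(b"L" + seed).digest(),
            hashlib.sha256(b"R" + seed).digest())

def leaves(seed, depth):
    layer = [seed]
    for _ in range(depth):
        layer = [child for s in layer for child in expand(s)]
    return layer

def open_all_but_one(root, depth, hidden):
    """Reveal, at each level, the sibling of the node on the path to
    leaf `hidden`; every leaf except `hidden` is derivable from these."""
    opening, node = [], root
    for level in range(depth):
        bit = (hidden >> (depth - 1 - level)) & 1
        left, right = expand(node)
        opening.append(right if bit == 0 else left)  # off-path sibling
        node = left if bit == 0 else right           # stay on the path
    return opening

def recover(opening, depth, hidden):
    # Each revealed sibling roots a subtree; expand it to its leaves.
    known, prefix = {}, 0
    for level, sibling in enumerate(opening):
        bit = (hidden >> (depth - 1 - level)) & 1
        sibling_prefix = (prefix << 1) | (1 - bit)
        sub_depth = depth - level - 1
        base = sibling_prefix << sub_depth
        for j, leaf in enumerate(leaves(sibling, sub_depth)):
            known[base + j] = leaf
        prefix = (prefix << 1) | bit
    return known

root, depth, hidden = b"\x00" * 32, 4, 5
all_leaves = leaves(root, depth)
known = recover(open_all_but_one(root, depth, hidden), depth, hidden)
```

The opening consists of `depth` seeds, i.e., logarithmically many in the number of committed values, which is the efficiency property the signature schemes exploit.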
Worst-Case Lattice Sampler with Truncated Gadgets and Applications
Gadget-based samplers have proven to be a key component of several cryptographic primitives, in particular in the area of privacy-preserving mechanisms. Most constructions today follow the approach introduced by Micciancio and Peikert (MP), yielding preimages whose dimension grows linearly with that of the gadget. To improve performance, some papers have proposed truncating the gadget, but at the cost of losing an important feature of the MP sampler, namely the ability to invert arbitrary syndromes. Technically speaking, they replace the worst-case MP sampler by an average-case sampler that can only be used in specific contexts. Far from being a mere theoretical restriction, this prevents the main applications of gadget-based samplers from using truncated variants and thus from benefiting from the associated performance gains.
In this paper, we solve this problem by describing a worst-case sampler that still works with truncated gadgets. Its main strength is that it retains the main characteristics of the MP sampler while providing flexibility in the choice of the truncation parameter. As a consequence, it can be used as a plug-in replacement for all applications relying on the MP sampler so far, leading to performance improvements of up to 30%, as illustrated by several examples in this paper. Our sampler is supported by a thorough security analysis that addresses the hurdles met by previous works, and its practicality is demonstrated by a concrete implementation.
Honest Majority Constant-Round MPC with Linear Communication from One-Way Functions
Secure multiparty computation (MPC) faces a fundamental efficiency trade-off between round complexity and communication complexity: without fully homomorphic encryption, protocols with constant round complexity (e.g., protocols based on garbled circuits) incur high communication cost, while communication-efficient approaches (e.g., protocols based on secret sharing) have round complexity linear in the depth of the circuit. In this work, we focus on reducing the communication complexity of constant-round MPC protocols in the honest majority setting. Existing results either rely on strong assumptions (e.g., random oracles, DDH, LPN) or incur high communication of $\Omega(|C|n^2\kappa)$ bits under one-way functions (OWFs). However, non-constant-round MPC protocols can achieve linear communication in the number of parties even with information-theoretic security.
We resolve this gap by presenting the first constant-round honest-majority MPC protocol with linear communication complexity of $O(|C|n\kappa + n^2\kappa^2+n^4\kappa)$, assuming only OWFs. We introduce novel techniques for computing garbled circuits via party virtualization and efficient local computation of virtual parties, which optimize existing protocols for multiparty garbling. These allow us to overcome the $O(n^2\kappa)$-bits-per-gate communication bottleneck of prior protocols, matching the scalability of the best non-constant-round protocols in the same setting.
Mina: Decentralized Cryptocurrency at Scale
We introduce the notion of a succinct blockchain, a replicated state machine in which each state transition (block) can be efficiently verified in constant time regardless of the number of prior transitions in the system. Traditional blockchains require verification time linear in the number of transitions. We show how to construct a succinct blockchain using recursively composed succinct non-interactive arguments of knowledge (SNARKs). Finally, we instantiate this construction to implement Coda (now known as Mina), a payment system (cryptocurrency) using a succinct blockchain. Coda offers payment functionality similar to Bitcoin, with a dramatically faster verification time of 200ms making it practical for lightweight clients and mobile devices to perform full verification of the system’s history.
From OT to OLE with Subquadratic Communication
Oblivious Linear Evaluation (OLE) is an algebraic generalization of oblivious transfer (OT) that forms a critical part of a growing number of applications. An OLE protocol over a modulus $q$ enables the receiver party to securely evaluate a line $a\cdot X+b$ chosen by the sender party on a secret point $x\in\mathbb{Z}_q$.
Motivated by the large efficiency gap between OLE and OT and by fast OT-extension techniques, we revisit the question of reducing OLE to OT, aiming to improve the communication cost of known reductions.
We start by observing that the Chinese Remainder Theorem (CRT) can be combined with a prior protocol of Gilboa (Crypto ’99) to reduce its communication cost from $O(\ell^2)$ to $\tilde O(\ell)$ bits, for $\ell=\log q$. Unfortunately, whereas Gilboa's protocol is secure against a semi-honest sender and a malicious receiver, a direct application of the CRT technique is only semi-honest secure (it is insecure against malicious receivers). Thus, we employ number-theoretic techniques to protect our CRT-based protocol against malicious receivers, while still retaining a concrete advantage over Gilboa's protocol (e.g., $10.2\times$ less communication for $\ell=256$). Furthermore, we obtain a fully malicious OLE-to-OT reduction by applying either information-theoretic techniques with moderate overhead, or RSA-based cryptographic techniques with very low overhead.
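The CRT observation can be made concrete: an OLE over a large modulus decomposes into independent OLEs over small coprime moduli, whose additive output shares are recombined with CRT. In the sketch below, `ideal_ole` is a stand-in black box for a per-prime OLE subprotocol (such as Gilboa's); the chosen moduli and inputs are purely illustrative, and the semi-honest-only caveat from the abstract applies.

```python
import secrets

def ideal_ole(a, b, x, p):
    # Idealized OLE mod p: outputs additive shares of a*x + b, with the
    # sender holding r and the receiver holding (a*x + b - r) mod p.
    r = secrets.randbelow(p)
    return r, (a * x + b - r) % p

def crt(residues, moduli):
    # Standard CRT recombination to the product modulus M.
    M = 1
    for m in moduli:
        M *= m
    total = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        total += r * Mi * pow(Mi, -1, m)  # pow(.., -1, m): Python 3.8+
    return total % M, M

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]  # pairwise coprime moduli
a, b, x = 123456, 654321, 31337

sender_shares, receiver_shares = [], []
for p in primes:
    s, r = ideal_ole(a % p, b % p, x % p, p)
    sender_shares.append(s)
    receiver_shares.append(r)

# Each party recombines its own share vector; the two results are
# additive shares of a*x + b over the large modulus M.
s_crt, M = crt(sender_shares, primes)
r_crt, _ = crt(receiver_shares, primes)
```

Because CRT is a ring isomorphism, the recombined shares sum to $a\cdot x+b \bmod M$ whenever the per-prime shares are correct, which is the structural fact the protocol builds on.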
We demonstrate the usefulness of our results in the context of OLE applications, including a post-quantum oblivious pseudorandom function (OPRF) and distributed signatures. In particular, assuming pre-existing random OT correlations, we can use our malicious-receiver OLE protocol to realize (a single instance of) the power-residue based OPRF candidate with security against a malicious client and a semi-honest server using only 1.14 KB of communication, a $16\times$ improvement over the best previous protocol in this setting. Using our RSA-based fully malicious OLE protocol, we achieve a $5\times$ communication improvement over previous OT and EC-based distributed ECDSA protocols. Compared to other ECDSA protocols (including ones that use Paillier and class groups), the communication gains are more modest, but come at only a fraction of the computational cost as we avoid all expensive group operations.
Memory Optimizations of Wagner's Algorithm with Applications to Equihash
The Generalized Birthday Problem ($\textsf{GBP}$) serves as a cornerstone for a broad spectrum of cryptanalytic research. The classical solution, Wagner's $k$-tree algorithm (CRYPTO’02), is characterized by inherently high memory complexity. Subsequently, Biryukov and Khovratovich (NDSS'16) introduced $\textsf{Equihash}$, a memory-hard proof-of-work scheme constructed upon a single-list variant of Wagner's algorithm. Due to its robust resistance to ASIC mining, $\textsf{Equihash}$ has emerged as one of the most widely adopted proof-of-work schemes in blockchain. Memory optimization for Wagner-type algorithms remains a persistent challenge in cryptographic research. Conventional approaches primarily focus on reducing list size to lower memory consumption, yet this strategy typically incurs a prohibitive time penalty. For instance, halving the memory usage of the mining algorithm for $\textsf{Equihash}(200, 9)$ increases its theoretical time complexity by a factor of $2^{24.6}$.
In this work, we explore a new optimization direction: List Item Reduction (LIR), which facilitates practical and fine-grained memory-time trade-offs. We systematize existing LIR techniques and propose novel optimization methods integrated into a new hybrid framework. While our approach does not improve asymptotic memory complexity, it achieves a near-linear trade-off in practice, offering substantial benefits for the concrete design of efficient Wagner-style algorithms. Specifically, our techniques reduce peak memory usage by more than 50% (from $> 2nN$ to $nN$ bits) across all $\textsf{Equihash}$ parameters, with only an approximate twofold time penalty. For $\textsf{Equihash}(144,5)$, our optimized algorithm requires only 700 MB of memory, compared to approximately 2.5 GB in previous implementations.
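To fix ideas, Wagner's $k$-tree algorithm for $k=4$ merges pairs of lists on successive blocks of low-order bits until the full XOR vanishes. The toy Python instance below uses hash-derived values and illustrative parameters ($n=24$ bits, four lists of $2^9$ items); it demonstrates the merge structure only, not the memory-optimized solvers discussed in the paper.

```python
import hashlib
from collections import defaultdict

def H(list_id, i, nbits):
    # Hash-derived "random" values standing in for a GBP instance.
    h = hashlib.sha256(f"{list_id}:{i}".encode()).digest()
    return int.from_bytes(h, "big") % (1 << nbits)

def merge(L1, L2, mask):
    # Join pairs agreeing on the masked bits; keep the XOR and index trail.
    table = defaultdict(list)
    for v, trail in L1:
        table[v & mask].append((v, trail))
    out = []
    for v2, t2 in L2:
        for v1, t1 in table[v2 & mask]:
            out.append((v1 ^ v2, t1 + t2))
    return out

nbits, step = 24, 8          # goal: 4 values XORing to 0 on all 24 bits
N = 1 << (step + 1)          # list size 2^9, slightly above 2^{n/3}
lists = [[(H(s, i, nbits), ((s, i),)) for i in range(N)] for s in range(4)]

low = (1 << step) - 1
ab = merge(lists[0], lists[1], low)   # level 1: clear the low 8 bits
cd = merge(lists[2], lists[3], low)
full = (1 << nbits) - 1
solutions = [t for v, t in merge(ab, cd, full) if v == 0]  # level 2
```

With these parameters the expected number of solutions is around $2^{4}$, while the dominant memory cost is the intermediate lists, which is exactly the quantity that LIR-style trade-offs target.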
OverModRaise: Reducing Modulus Consumption of CKKS Bootstrapping
The Cheon-Kim-Kim-Song (CKKS) homomorphic encryption scheme is widely adopted for securely evaluating circuits over real numbers, such as those arising in privacy-preserving machine learning (PPML), because it efficiently supports approximate floating-point arithmetic of messages. A CKKS ciphertext has a finite level, which corresponds to the budget for how many multiplicative operations can be applied. Once these levels are consumed, the ciphertext must be refreshed through a bootstrapping procedure to restore its capacity for further computation. The CKKS bootstrapping procedure consists of four main steps: 1) ModRaise, which raises the ciphertext coefficient modulus; 2) C2S, which homomorphically evaluates the inverse-DFT (iDFT) to enable further operations on the ciphertext coefficients; 3) EvalMod, which homomorphically removes the unintended coefficients introduced by ModRaise and shifted to the message side by C2S; and 4) S2C, which homomorphically evaluates the DFT to map the ciphertext back from the iDFT domain. However, these bootstrapping procedures also consume a significant number of levels, leaving fewer levels after each bootstrapping.
In this work, we introduce three techniques—OverModRaise1, OverModRaise2, and Tuple-C2S/S2C—that target reductions in the modulus consumption of C2S/S2C among the CKKS bootstrapping procedures, without introducing substantial overhead or compromising security. By combining these techniques, our implementation demonstrates at most 41% throughput improvement compared to the state-of-the-art bootstrapping.
SoK: Approximate Agreement
Approximate Agreement (AA) is a relaxation of consensus that requires honest parties to output values that are close and within the honest inputs' range. Introduced as a relaxation of exact consensus, AA has become a versatile primitive with applications from blockchain oracles to cyber-physical systems.
This paper provides a systematization of knowledge (SoK) on Byzantine-resilient AA in complete networks.
We mainly focus on the real-valued variant, and chart the feasibility frontiers in synchronous, asynchronous, and network-agnostic models. We compare protocols in terms of resilience, round complexity, and communication efficiency, while also clarifying overlooked details and gaps.
Beyond standard requirements on the outputs, we discuss stronger conditions, such as having the outputs \emph{close} to the honest inputs' median. Moreover, we briefly situate the real-valued AA problem within the broader landscape of AA, where other input domains such as higher-dimensional spaces and graphs introduce further challenges.
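A standard building block behind many synchronous AA protocols is the trimmed-mean round: each party discards the $t$ smallest and $t$ largest received values and averages the rest, which keeps every output inside the honest inputs' range. The toy simulation below is an illustration of this generic rule, not any specific surveyed protocol; the parameters ($n=5$, $t=1$) and the Byzantine equivocation pattern are made up.

```python
def aa_round(received, t):
    # Trimmed mean: drop the t smallest and t largest values, average the
    # rest. Every kept value has t values above and t below it, so with at
    # most t Byzantine senders it is sandwiched between honest values.
    vals = sorted(received)
    core = vals[t:len(vals) - t]
    return sum(core) / len(core)

# Toy network: 4 honest parties plus 1 Byzantine sender (n = 5, t = 1).
# The Byzantine party equivocates, sending a different value to each
# receiver; honest values start spread over [0, 10].
honest = [0.0, 1.0, 2.0, 10.0]
byzantine_view = [-1e9, 1e9, 5.0, -3.0]  # value sent to party i
for _ in range(30):
    honest = [aa_round(honest + [byzantine_view[i]], 1)
              for i in range(len(honest))]
```

Running the loop shows the honest range contracting geometrically per round while never leaving the initial honest range, which is precisely the validity-plus-convergence behavior AA demands.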
Formal Verification of Privacy Pass
CAPTCHA is a ubiquitous challenge-response system for preventing spam (typically from bots) on the internet. Requiring users to solve visual challenges, its design is inherently cumbersome, and it can unfairly penalize those using low-reputation IP addresses, such as users of anonymity services like Tor.
To minimise the frequency with which a user must solve CAPTCHAs, Privacy Pass (PETS 2018) allows users to collect and spend anonymous tokens instead of solving challenges. Despite 400,000 reported monthly users and standardisation efforts by the IETF, it has not been subjected to formal verification, which has proven to be a valuable tool in security analysis.
In this paper we perform the first analysis of Privacy Pass using formal verification tools, and verify standard security properties hold in the symbolic model. Motivated by concerns of Davidson et al. and the IETF contributors, we also explore a stronger attack model, where additional key leakage uncovers a potential token forgery. We present a new protocol, Privacy Pass Plus, in which we show the attack fails in the symbolic model and give new cryptographic reductions to show our scheme maintains the security properties. Moreover, our work also highlights the complementary nature of analysing protocols in both symbolic and computational models.
Experimentally studying path-finding problem between conjugates in supersingular isogeny graphs: Optimizing primes and powers to speed-up cycle finding
We study the problem of finding a path between conjugate supersingular elliptic curves, a sub-routine of the cycle-finding algorithm in the graph $\chi_l(\overline{\mathbb{F}_p})$ of all supersingular elliptic curves over $\mathbb{F}_{p^2}$ connected by $l$-isogenies, for primes $l,p$. We prove that, asymptotically, the most time-efficient way of traversing the graph to find conjugate paths is to consider $i=3$ steps at a time, answering a question posed in Remark 3.5 of Eisenträger et al. [eisentrager2020]. We show, both theoretically and experimentally, that selecting $l$ and $i$ optimally speeds up cycle finding.
Rubato: Provably Post-Quantum Secure and Batched Asynchronous Randomness Beacon
Distributed randomness beacons are essential for distributed systems (e.g., blockchain and MPC), providing unbiased and unpredictable shared randomness. However, implementations in asynchronous networks often suffer from poor scalability, low throughput, and high resource consumption when deployed as independent protocols. The state-of-the-art HashRand (CCS'24) achieves high throughput and low computational intensity using only lightweight post-quantum cryptographic primitives for an independent asynchronous beacon. Building on this, we propose Rubato, a low-overhead, high-throughput beacon protocol that leverages lightweight batched Asynchronous Complete Secret Sharing with Byzantine Atomic Broadcast-based state machine replication (via our tailored RubatoSMR). Rubato reduces communication complexity by an $O(c \log n)$ factor compared to HashRand and resolves the circular dependency between beacon and BAB-SMR in asynchronous settings. Experiments on AWS demonstrate that Rubato achieves strong overall performance in scalability, resource usage, and throughput; for instance, at $n=121$ nodes, it produces an average of 174 beacons per minute, with RubatoSMR further optimizing memory and bandwidth consumption.
Revisiting the IPA-sumcheck connection
Inner Product Arguments (IPA) [BCC+16,BBB+17] are a family of proof systems with $O(\log n)$ sized proofs, $O(n)$ time verifiers, and transparent setup.
Bootle, Chiesa and Sotiraki [BCS21] observed that an IPA can be viewed as a sumcheck protocol [LFKN92] where the summed polynomial is allowed to have coefficients in a group rather than a field. We leverage this viewpoint to improve the performance of multi-linear polynomial commitments based on IPA.
Specifically,
- We introduce a simplified variant of Halo-style accumulation that works for multilinear evaluation claims, rather than only univariate ones as in [BGH19,BCMS20].
- We show that the size $n$ MSM the IPA verifier performs can be replaced by a ``group variant'' of $\mathsf{basefold}$ [ZCF23].
This reduces the verifier complexity from $O(n)$ to $O_{\lambda}(\log^2 n)$ time at the expense of an additional $4n$ scalar multiplications for the IPA prover.
Attacks on Goldreich's Pseudorandom Generators by Grouping and Solving
Goldreich's pseudorandom generators (PRGs) with constant locality admit highly parallel implementations, yet the concrete security of instantiations based on the $\mathsf{XOR}\text{-}\mathsf{THR}$ predicate has remained unclear.
In this work, we present novel seed-recovery attacks on Goldreich's PRGs instantiated with $\mathsf{XOR}\text{-}\mathsf{THR}$ predicates with $m=n^s$, where $m$ and $n$ are the output and input lengths, respectively.
By partitioning the output bits into groups according to common input bits, highly biased noisy equations can be derived for the group whose common input bits are all 1s (or all 0s). Leveraging two solvers tailored to these equations, we mount seed-recovery attacks that require roughly $2^{n(1-\log_2{(1 + \frac{2s}{\sqrt{2\pi (b-s)}})})}$ calls to Gaussian elimination when the input length of the $\mathsf{THR}$ predicate satisfies $b\geq (n-s)^{1/s}+s-1$, for small stretch $s$.
Applying our attack to the $\mathsf{XOR}\text{-}\mathsf{MAJ}$ challenges in STOC 2016 yields complexity \(2^{\,n\!\left(1-\log_{2}\!\left(1+\sqrt{\frac{s}{18\pi}}\right)\right)} \le 2^{0.82n}\), i.e., at least a \(2^{0.18n}\) speedup over exhaustive search for any stretch \(s>1\).
We also deploy our attack on an instance used in the construction of silent oblivious transfer protocols (Eurocrypt 2024) with \(n = 256\). This attack is capable of breaking the instance using approximately \(2^{31.3}\) calls to Gaussian elimination over 244 variables. We successfully implemented the attack on a cluster of 14 CPU cores, recovering the seed in 71 hours, demonstrating the attack's efficiency.
Beyond PRGs, with appropriate adaptations our method extends to FiLIP stream ciphers instantiated with $\mathsf{THR}$-related predicates. Our evaluation indicates that most FiLIP instances do not meet their claimed security. For instance, the configuration \(\mathsf{XOR}_{100}\text{-}\mathsf{THR}_{11,22}\text{-}\mathsf{THR}_{11,22}\) is broken in about \(2^{59}\) calls to Gaussian elimination over 357 variables despite a 397-bit key and a claimed 80-bit security level.
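The claimed $2^{0.82n}$ bound for the $\mathsf{XOR}\text{-}\mathsf{MAJ}$ challenges can be sanity-checked numerically from the quoted exponent $1-\log_{2}(1+\sqrt{s/(18\pi)})$, which decreases with the stretch $s$ and approaches its worst case as $s \to 1$. A short check of that arithmetic:

```python
from math import log2, sqrt, pi

def xor_maj_exponent(s):
    # Exponent c in the 2^{c*n} attack cost for stretch s, as quoted above.
    return 1 - log2(1 + sqrt(s / (18 * pi)))

# The bound 2^{0.82n} should hold for every stretch s > 1, with the
# exponent strictly decreasing (i.e., the attack cheapening) as s grows.
exponents = [xor_maj_exponent(s) for s in (1.0, 1.5, 2, 3, 5, 10)]
```

At $s=1$ the exponent already evaluates to about $0.8199$, so the $0.82$ bound holds for all $s>1$.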
E2E-AKMA: An End-to-End Secure and Privacy-Enhancing AKMA Protocol Against the Anchor Function Compromise
The Authentication and Key Management for Applications (AKMA) system is a recently developed protocol established by 3GPP that is anticipated to become a pivotal component of the 5G standards. AKMA enables application service providers to delegate user authentication processes to mobile network operators, thereby eliminating the need for these providers to store and manage authentication-related data themselves. This delegation enhances the efficiency of authentication procedures but simultaneously introduces certain security and privacy challenges that warrant thorough analysis and mitigation.
The 5G AKMA service is facilitated by the AKMA Anchor Function (AAnF), which may operate outside the boundaries of the 5G core network. A compromise of the AAnF could potentially allow malicious actors to exploit vulnerabilities, enabling them to monitor user login activities or gain unauthorized access to sensitive communication content. Furthermore, the exposure of the Subscription Permanent Identifier (SUPI) to external Application Functions poses substantial privacy risks, as the SUPI could be utilized to correlate a user's real-world identity with their online activities, thereby undermining user privacy.
To mitigate these vulnerabilities, we propose a novel protocol named E2E-AKMA, which facilitates the establishment of a session key between the User Equipment (UE) and the Application Function (AF) with end-to-end security, even in scenarios where the AAnF has been compromised. Furthermore, the protocol ensures that no entity, aside from the 5G core network, can link account activities to the user's actual identity. This architecture preserves the advantages of the existing AKMA scheme, such as eliminating the need for complex dynamic secret data management and avoiding reliance on specialized hardware (apart from standard SIM cards). Experimental evaluations reveal that the E2E-AKMA framework incurs an overhead of approximately 9.4\% in comparison to the original 5G AKMA scheme, demonstrating its efficiency and practicality for deployment.
SoK: Privacy-Preserving Transactions in Blockchains
Ensuring transaction privacy in blockchain systems is essential to safeguard user data and financial activity from exposure on public ledgers. This paper conducts a systematization of knowledge (SoK) on privacy-preserving techniques in cryptocurrencies with native privacy features. We define and compare privacy notions such as confidentiality, k-anonymity, full anonymity, and sender-receiver unlinkability, and categorize the cryptographic techniques employed to achieve these guarantees. Our analysis highlights the trade-offs between privacy guarantees, scalability, and regulatory compliance. Finally, we evaluate the usability of the most popular private cryptocurrencies, providing insights into their practical deployment and user interaction.
Through this analysis, we identify key gaps and challenges in current privacy solutions, highlighting areas where further research and development are needed to enhance privacy while maintaining scalability and security.
Auntie: Unobservable Contracts from Zerocash and Trusted Execution Environments
Privacy-oriented cryptocurrencies like Zerocash only support direct payments and not the execution of more complex contracts. Bitcoin and Ethereum, on the other hand, cannot guarantee privacy, and using them for contract execution leaves open questions about fungibility of the proceeds and requires contract designers to take frontrunning countermeasures. This work reconciles the two worlds and develops a practical framework for decentralized execution of complex contracts that (1) is undetectable to the network at large, (2) maintains anonymity of the potentially mutually distrustful counterparties and unlinkability of their repeated interactions, (3) guarantees fair termination, and (4) is immediately resistant to frontrunning and miner bribery attacks. This is achieved by leveraging the confidentiality and anonymity guarantees of Zerocash and the verifiability and flexibility of trusted execution environments.
A Certified Framework for Deterministic Navigation in Higher-Genus p-Isogeny Graphs
We present a deterministic framework for navigating $p$-isogeny graphs of genus $g \ge 2$, addressing the lack of canonical and auditable primitives in higher dimensions. The framework integrates two components: the Certified $p$-Isogeny Step (PICS) and a Non-Decomposition Certificate (ND). PICS constructs the unique Frobenius-compatible inseparable isogeny by extracting kernel directions from Hasse--Witt invariants and differential subresultant profiles, thereby eliminating randomized kernel selection. Complementarily, ND serves as an algebraic filter that rejects Jacobians compatible with product decompositions by enforcing cyclicity in the associated differential operator module. We prove that the rejection density scales asymptotically as $O(p^{-1})$. Experimental validation using a C-based backend over 256-bit prime fields demonstrates that the certification logic incurs a relative overhead of less than $0.2\%$ compared to the mandatory Hasse--Witt computation. By enforcing strict determinism and structural safety, the resulting transition unit provides a verifiable primitive for auditable parameter generation and isogeny-based time-lock puzzles.
SNARGs for NP and Non-Signaling PCPs, Revisited
We revisit the question of whether it is possible to build succinct non-interactive arguments ($\mathsf{SNARG}$s) for all of $\mathsf{NP}$ under standard assumptions using non-signaling probabilistically checkable proofs [Kalai-Raz-Rothblum, STOC '14]. In particular, we observe that using exponential-length PCPs appears to circumvent all of the existing barriers.
For our main result, we give a candidate non-adaptive $\mathsf{SNARG}$ for $\mathsf{NP}$ and prove its soundness under:
- the learning with errors assumption (or other standard assumptions such as bilinear maps), and
- a mathematical conjecture about multivariate polynomials over the reals.
In more detail, our conjecture is an upper bound on the minimum total coefficient size of Nullstellensatz proofs (Potechin-Zhang, ICALP 2024) of membership in a concrete polynomial ideal. We emphasize that this is not a cryptographic assumption or any form of computational hardness assumption.
Of particular interest is the fact that our security analysis makes non-black-box use of the $\mathsf{SNARG}$ adversary, circumventing the black-box barrier of Gentry and Wichs (STOC '11). This gives a blueprint for constructing $\mathsf{SNARG}$s for $\mathsf{NP}$ that is not subject to the Gentry-Wichs barrier.
Impersonating Quantum Secrets over Classical Channels
We show that a simple eavesdropper listening in on classical communication between potentially entangled quantum parties will eventually be able to impersonate any of the parties. Furthermore, the attack is efficient if one-way puzzles do not exist. As a direct consequence, one-way puzzles are implied by reusable authentication schemes over classical channels with potentially evolving quantum pre-shared secrets.
As an additional application, we show that any quantum money scheme that can be verified through only classical queries to any oracle cannot be information-theoretically secure. This significantly generalizes the prior work by Ananth, Hu, and Yuen (ASIACRYPT'23) where they showed the same but only for the specific case of random oracles. Therefore, verifying black-box constructions of quantum money inherently requires coherently evaluating the underlying cryptographic tools, which may be difficult for near-term quantum devices.
TSM+ and OTSM - Correct Application of Time Sharing Masking in Round-Based Designs
Among the countermeasures against side-channel analysis attacks, masking offers formal security guarantees and composability, yet remains challenging to implement efficiently in hardware due to physical defaults like glitches and transitions. Low-latency masking techniques aim to mitigate the performance penalties but can inadvertently compromise security in certain architectural contexts. In particular, the recently proposed Time Sharing Masking (TSM) technique enables single-cycle masked implementations with composability under the SNI and PINI notions but fails to satisfy stronger composability guarantees required in iterative designs, i.e., OPINI. In this work, we show that TSM-based constructions can exhibit first-order leakage when used in single-register feedback architecture, such as round-based implementations of ciphers. To address this, we propose two new masking schemes: TSM+, a more efficient variant of TSM satisfying only PINI (but not SNI), and OTSM, a construction satisfying OPINI, enabling secure round-based designs.
Our improved round-based masked implementations of PRINCE and AES ensure security in latency-critical applications under both the glitch- and transition-extended probing models, while requiring only slightly more area.
SNARGs for NP from LWE
We construct the first succinct non-interactive argument (SNARG) for NP in the common reference string model based solely on the sub-exponential hardness of the learning with errors (LWE) assumption. Our scheme achieves non-adaptive security, partial succinctness with an argument size of $O(n^{0.91})$, and is plausibly post-quantum secure. Previous constructions of SNARGs from falsifiable assumptions either relied on indistinguishability obfuscation or were restricted to idealized models (e.g., the random oracle model or generic group model).
Our construction is also the first to instantiate the Micali transformation (Fiat-Shamir applied to Kilian's protocol) in the standard model with concrete hash functions. We achieve this by developing a new mechanism to securely instantiate the Fiat-Shamir hash function for interactive arguments, overcoming the known barriers that limit standard techniques to interactive proofs. As a result, our scheme refutes "universal" attacks on the Micali framework by demonstrating that there exist concrete instantiations of the underlying components for which the transformation is sound.
Our construction relies on two primitives of independent interest: a PCP with a new property which we term "shadow soundness", and a lattice-based vector commitment that provides statistical binding with respect to a hidden function.
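For context, the Fiat-Shamir paradigm referenced above replaces the verifier's random challenge with a hash of the transcript. A textbook toy instance (a non-interactive Schnorr proof over a tiny group, purely illustrative and unrelated to this paper's construction, which applies Fiat-Shamir to Kilian's succinct argument) looks like:

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of x with h = g^x mod p.
# Tiny parameters for illustration only; real deployments use
# large prime-order groups.
q = 83          # prime group order
p = 167         # p = 2q + 1, prime
g = 4           # element of order q in Z_p^*

def hash_challenge(*vals):
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    data = b"|".join(str(v).encode() for v in vals)
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def prove(x):
    h = pow(g, x, p)
    k = secrets.randbelow(q)
    t = pow(g, k, p)             # prover's commitment
    c = hash_challenge(g, h, t)  # challenge = hash, no verifier needed
    s = (k + c * x) % q
    return h, (t, s)

def verify(h, proof):
    t, s = proof
    c = hash_challenge(g, h, t)
    return pow(g, s, p) == (t * pow(h, c, p)) % p
```

Soundness of this hashed-challenge step is exactly what is delicate for interactive *arguments* (as opposed to proofs), which is the barrier the construction above overcomes.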
TOOP: A transfer of ownership protocol over Bitcoin
We present the Transfer of Ownership Protocol (TOOP). TOOP solves a limitation of all existing BitVM-like protocols (and UTxO blockchains at large) that restricts unlocking transfers to addresses known and preregistered during lock and setup. Accordingly, our protocol avoids the financially costly, regulatorily problematic, and congestion-prone front-and-reimburse paradigm.
Furthermore, we note that one of the main applications of TOOP is as an enabler of secure transfer of assets from one UTxO blockchain to another and back. We showcase this by sketching a committee-based validation protocol that requires only 1-out-of-n honesty. This protocol operates in distinct phases: the lock phase, where the initial setup is performed and individual assets are locked on Bitcoin, and the unlock phase, where ownership is transferred and the asset is released to a possibly different legitimate owner.
This cross-chain bridge protocol, where TOOP plays a key role, is being formalized in concurrent work, and has been implemented for the first time in Cardinal, a protocol for wrapping Bitcoin Unspent Transaction Outputs (UTxOs) onto the Cardano blockchain, with Bitcoin Ordinals represented as Cardano Non-Fungible Tokens (NFTs).
Batch Arguments with Optimal Communication
Batch arguments (BARGs) are non-interactive arguments for conjunctions of NP statements, with proof size that is sublinear in the number of statements.
Several previous works studied the communication complexity of BARGs, focusing both on the CRS size and on the additive overhead of the proof, defined as the difference between the proof size and the size $m$ of a single NP witness:
- Devadas et al.~[FOCS 22] constructed BARGs with additive overhead that is independent of $m$, however, their CRS size is polynomial in $m$.
- Paneth and Pass [FOCS 22] constructed BARGs where the CRS size is independent of $m$, but with higher additive overhead $m^{1-\epsilon}$.
Under the hardness of LWE, we construct BARGs where both the CRS size and the additive overhead of the proof are independent of $m$.
Such BARGs can be recursively composed an unbounded polynomial number of times without losing succinctness.
Along the way, we also considerably simplify the construction of fully local somewhere extractable hash functions used by Devadas et al.
LatORAM: ORAMs from Lateral Stashes and Delayed Shuffling
We study the design of Oblivious RAMs (ORAMs) that allow a client to access memory outsourced to a remote, untrusted server without revealing the client’s data access pattern. We are interested in concretely efficient constructions, and prior works have yielded different ORAM frameworks with various trade-offs. Tree-based constructions such as RingORAM [Ren et al., USENIX’15] obtain low communication overhead, but require client storage of linear-size position maps and two roundtrips per query. Hierarchical schemes such as FutORAMa [Asharov et al., CCS’23] further reduce communication at the cost of more roundtrips during queries. Finally, SQRT-ORAM [Goldreich, STOC ’87] enables fast queries of one roundtrip and one block of communication, at the cost of larger amortized communication.
We present two new constructions, LatORAM and Lat$^2$ORAM, that simultaneously obtain the positive traits of all three types of ORAM constructions. Online queries are fast, with one roundtrip and a single block of communication as in SQRT-ORAM. Fixing the client memory sizes for comparison, the online communication cost of our constructions is 5-8x smaller than RingORAM's and 5-10x smaller than FutORAMa's, even though both RingORAM and FutORAMa require multiple roundtrips per online query. Furthermore, our total amortized communication is also up to 50% smaller. To obtain our constructions, we present a new lazy approach of lateral stash growth that delays large shuffles.
Of independent interest, we present improved oblivious merging schemes for specific settings important for our ORAMs. Our constructions solely rely on symmetric cryptography.
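For readers unfamiliar with the square-root framework compared against above, here is a toy, non-cryptographic sketch of Goldreich-style square-root ORAM access (our own illustration, not LatORAM; a real scheme stores encrypted data at the server and shuffles obliviously):

```python
import secrets

class ToySqrtORAM:
    """Access-pattern sketch of square-root ORAM: a randomly permuted
    main array with sqrt(n) dummy slots, a stash of the current epoch's
    accessed blocks, and a full reshuffle every sqrt(n) accesses."""

    def __init__(self, data):
        self.n = len(data)
        self.period = max(1, int(self.n ** 0.5))   # epoch length ~ sqrt(n)
        self.store = dict(enumerate(data))         # logical addr -> value
        self._reshuffle()

    def _reshuffle(self):
        slots = list(range(self.n + self.period))  # n real + sqrt(n) dummies
        secrets.SystemRandom().shuffle(slots)
        self.pos = {a: slots[a] for a in range(self.n)}
        self.dummies = iter(slots[self.n:])        # one dummy per stash hit
        self.stash = {}
        self.count = 0
        self.trace = []                            # slots the server sees

    def read(self, addr):
        if addr in self.stash:                     # fetched earlier this epoch:
            self.trace.append(next(self.dummies))  # touch a fresh dummy instead
            val = self.stash[addr]
        else:
            self.trace.append(self.pos[addr])      # touch the real permuted slot
            val = self.store[addr]
            self.stash[addr] = val
        self.count += 1
        if self.count == self.period:              # epoch over: reshuffle
            self._reshuffle()
        return val
```

Within each epoch the server sees distinct, freshly permuted slots regardless of the logical addresses queried; the reshuffle is what drives the amortized cost that LatORAM's lazy lateral stash growth targets.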
Tight Lower Bound on Witness Update Frequency in Additive Positive Accumulators
We study additive positive accumulators, which maintain a short digest of a growing set such that each value in the set can prove membership via a generated witness. Due to the compactness of the digest, previously added values may require updated witnesses as the set grows.
In this paper, we establish a trade-off between the bit-length of the accumulator value and the number of witness updates. Specifically, we show that if the accumulator value has bit-length \( \mathsf{poly}(\log n) \), where \( n \) is the number of accumulated values, then some values must incur \( \Omega(\log n / \log \log n) \) witness updates. This improves upon the recent \( \omega(1) \) lower bound of [BCCK25] and matches the upper bound in [MQ23].
Building on the framework of [MQR22], we introduce a new combinatorial structure that removes the fixed-update-time assumption. Our approach also applies to Registration-based Encryption [GHMR18], thereby resolving the open problem left in [MQR22]: it shows that the tight lower bound on decryption-update frequency continues to hold even without any fixed-update-time assumption.
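Why growth forces witness updates at all can be made concrete with a toy simulation of a naive Merkle-forest ("binary counter") accumulator, in which equal-size trees are merged on insertion and every merge refreshes the witnesses (Merkle paths) of the merged elements. This naive scheme incurs $O(\log n)$ updates, worse than the $O(\log n / \log \log n)$ upper bound of [MQ23]; it is only an illustration, not the optimal construction.

```python
def simulate_updates(n):
    """Insert n elements into a binary-counter Merkle forest and count
    how many times each element's witness is rebuilt by a merge."""
    updates = [0] * n
    forest = []  # list of trees (element lists); sizes are distinct powers of 2
    for i in range(n):
        tree = [i]
        while forest and len(forest[-1]) == len(tree):
            tree = forest.pop() + tree
            for e in tree:
                updates[e] += 1  # merge rebuilds the tree: all witnesses change
        forest.append(tree)
    return updates
```

For $n = 2^k$ the earliest elements are merged once per doubling, giving exactly $k = \log_2 n$ updates.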
How to Prove Post-Quantum Security for Succinct Non-Interactive Reductions
Hash-based succinct non-interactive arguments (SNARGs) are widely used in practice, owing to their ease of deployment, notable efficiency, and post-quantum properties. They are constructed via the BCS transformation, which combines an interactive oracle proof (IOP) and a hash-based vector commitment. This success has motivated the study of hash-based succinct non-interactive reductions (SNRDXs), used for recursively ensuring the correctness of distributed computations, by extending the BCS transformation to work with an interactive oracle reduction (IOR) rather than an IOP.
We prove that hash-based SNRDXs constructed from IORs are secure in the quantum random oracle model (QROM), provided the IOR satisfies a natural post-quantum analogue of state-restoration security; moreover, we show that (classical) round-by-round security implies post-quantum state-restoration security. Our results thus achieve a post-quantum analogue of the classical security of SNRDXs in the ROM, and generalize a prior result about SNARGs in the QROM to cover recent SNRDX constructions.
Moreover, for SNRDXs we propose and achieve an adaptively-secure straightline quantum extraction property in the QROM, while prior work obtains non-adaptive security for SNARGs in the QROM. Along the way, we develop a modular framework for proving the security of the (extended) BCS transformation based on a new quantum extraction property for vector commitments (which we prove is achieved by Merkle commitments), mirroring classical security analyses and departing from prior "monolithic" post-quantum analyses. This demands a new commutator bound that shows the almost-commutativity between quantum extraction and quantum oracle queries, by bounding a natural classical extraction property.
The Cokernel Pairing
We study a new pairing, beyond the Weil and Tate pairings. The Weil pairing is a non-degenerate pairing $E[m] \times E[m] \to \mu_{m}$, which operates on the kernel of $[m]$. Similarly, when $\mu_{m} \subseteq \mathbb{F}_q^*$, the Tate pairing is a non-degenerate pairing $E[m](\mathbb{F}_q) \times E(\mathbb{F}_q) / [m]E(\mathbb{F}_q) \to \mu_{m}$, which connects the kernel and the rational cokernel of $[m]$. We define a pairing \[ \langle \cdot\,, \cdot \rangle_m : E(\mathbb{F}_q) / [m]E(\mathbb{F}_q) \times E(\mathbb{F}_q) / [m]E(\mathbb{F}_q) \to \mu_{m}\] on the rational cokernels of $[m]$, filling the gap left by the Weil and Tate pairings. When $E[m] \subseteq E(\mathbb{F}_q)$, this pairing is non-degenerate and can be computed using three Tate pairings and two discrete logarithms in $\mu_{m}$, given a basis for $E[m]$. For $m = \ell$ prime, this pairing allows us to study $E(\mathbb{F}_q) / [\ell]E(\mathbb{F}_q)$ directly and to simplify the computation of a basis of $E[\ell^k]$, and more generally the Sylow $\ell$-torsion. This finds natural applications in isogeny-based cryptography when computing $\ell^k$-isogenies.
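One way such a computation could proceed, consistent with the operation count just stated but not necessarily the paper's actual definition:

```latex
% Illustrative recipe only; the paper's definition may differ.
% Assume E[m] \subseteq E(\mathbb{F}_q) with basis (P_1, P_2), a fixed
% generator g of \mu_m, and the Tate pairing
% t_m : E[m] \times E(\mathbb{F}_q)/[m]E(\mathbb{F}_q) \to \mu_m.
\begin{align*}
  t_m(P_1, R) = g^{a}, \quad t_m(P_2, R) = g^{b}
    &\quad \text{(two Tate pairings, two discrete logs in } \mu_m\text{)} \\
  \langle R, S \rangle_m := t_m([a]P_1 + [b]P_2, S)
    &\quad \text{(one further Tate pairing)}
\end{align*}
```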
OOPS: One-time Oblivious Polynomial Signatures
We introduce one-time oblivious polynomial signatures (OOPS), a signature scheme based on polynomials over pairing-based elliptic curves that can securely produce signatures for up to a threshold of $n$ different messages. Signing more than $n$ messages allows anyone to forge signatures under the given parameters, making it necessary to reparameterize the scheme occasionally. We show, however, that this property is not a severe limitation by demonstrating how to build various efficient OOPS-based cryptographic protocols, including delegatable signatures, $1$-out-of-$n$ oblivious transfer, and partially oblivious PRFs.
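OOPS itself is pairing-based, but the degree-bounded forgery property described above can be illustrated with a plain polynomial toy over a prime field (our own sketch, not the paper's scheme): the secret key is a random degree-$n$ polynomial $f$, a signature on message $m$ is $f(m)$, and any $n+1$ signatures let anyone interpolate $f$ and forge.

```python
import secrets

P = 2**61 - 1  # Mersenne prime field (toy parameters)

def keygen(n):
    """Secret key: coefficients of a random polynomial f of degree n."""
    return [secrets.randbelow(P) for _ in range(n + 1)]

def sign(sk, m):
    """Toy 'signature' on message m is the evaluation f(m) mod P."""
    y, x = 0, 1
    for c in sk:
        y = (y + c * x) % P
        x = (x * m) % P
    return y

def forge(points, m_star):
    """Given n+1 (message, signature) pairs, Lagrange-evaluate f at a
    fresh message m_star: a valid signature without the secret key."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (m_star - xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total
```

With a degree-3 secret, four released signatures suffice to forge on any fifth message, mirroring the reparameterization requirement above.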
WISCH: Efficient data signing via correlated signatures
We present WISCH, a commit-reveal protocol that combines compact aggregate signatures with hash-based commitments to enable selective disclosure of correlated data in multiparty computation. The protocol separates an on-chain verification core from off-chain preparation, so that verification cost depends only on the number of openings, not on the size of the underlying message space. This yields asymptotic efficiency: on-chain cost grows linearly in the number of revealed items and is independent of the ambient domain, while the per-byte overhead decreases with the message granularity. Security is established via a simulation-based proof in a UC framework with an ideal ledger functionality, in the algebraic group and global random-oracle models, under standard assumptions for discrete-log-based signatures and hash-based commitments. Thus WISCH provides selectively verifiable revelation with succinct on-chain checks and provable security guarantees.
Reducing the Number of Qubits in Quantum Factoring
This paper focuses on the optimization of the number of logical qubits in quantum algorithms for factoring and computing discrete logarithms in $\mathbb{Z}_N^*$. These algorithms contain an exponentiation circuit modulo $N$, which is responsible for most of their cost, both in qubits and operations.
In this paper, we show that using only $o(\log N)$ work qubits, one can obtain the least significant bits of the modular exponentiation output. We combine this result with May and Schlieper's truncation technique (ToSC 2022) and the Eker{\aa}-H{\aa}stad variant of Shor's algorithm (PQCrypto 2017) to solve the discrete logarithm problem in $\mathbb{Z}_N^*$ using only $d + o(\log N)$ qubits, where $d$ is the bit-size of the logarithm. Consequently we can factor $n$-bit RSA moduli using $n/2 + o(n)$ qubits, while current envisioned implementations require about $2n$ qubits.
Our algorithm uses a Residue Number System and succeeds with a parametrizable probability. The algorithm is completely classical, so we have implemented and tested it. For RSA factorization, we can reach a gate count $\mathcal{O}(n^3)$ for a depth $\mathcal{O}(n^2 \log^3 n)$, which then has to be multiplied by $\mathcal{O}(\log n)$ (the number of measurement results required by Eker{\aa}-H{\aa}stad). To factor an RSA-2048 instance, we estimate that 1730 logical qubits and $2^{36}$ Toffoli gates will suffice for a single run, and the algorithm needs on average 40 runs. To solve a discrete logarithm instance of 224 bits (112-bit classical security) in a safe-prime group of 2048 bits, we estimate that 684 logical qubits would suffice, and 20 runs with $2^{32}$ Toffoli gates each.
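A residue number system represents an integer by its residues modulo pairwise-coprime moduli, so multiplication acts independently on each residue channel. A generic RNS/CRT sketch (standard textbook material with toy moduli, not the paper's quantum circuit):

```python
from math import prod

def to_rns(x, moduli):
    """Represent x by its residues modulo pairwise-coprime moduli."""
    return [x % m for m in moduli]

def rns_mul(a, b, moduli):
    """Multiplication acts channel-by-channel, with no carries
    crossing between channels."""
    return [(ai * bi) % m for ai, bi, m in zip(a, b, moduli)]

def from_rns(residues, moduli):
    """Chinese remainder theorem reconstruction (valid while the
    represented value stays below the product of the moduli)."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M
    return x
```

The channel independence is what lets the modular-exponentiation circuit trade work qubits for parallel small-modulus arithmetic.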