
AI-Powered Collaborative Mining Pool

(AICMP)

Contents

Abstract
1 Introduction
2 Background: Bitcoin Mining & Current Pool Limitations
  2.1 Overview of the Bitcoin Protocol
  2.2 Mining Pools: Evolution and Common Models
  2.3 Gaps in Existing Pool Designs
3 Problem Statement
4 AICMP Core Design and Features
  4.1 Dynamic Task Allocation
  4.2 Network and Market Prediction
  4.3 Equitable Revenue Distribution
  4.4 Reinforcement Learning for Optimization
5 Technical Architecture
  5.1 AI Orchestration Layer
  5.2 Miner Interface Layer
  5.3 Revenue Distribution Module
  5.4 Feedback and Learning Loop
  5.5 Security, Trust & Communication Protocol
6 Mathematical and Algorithmic Formulations
  6.1 Task Allocation Optimization
  6.2 Predictive Modeling (Difficulty & Price)
  6.3 Reward Distribution Schemes
  6.4 Reinforcement Learning Framework
7 Implementation Methodology
  7.1 Data Pipeline and Storage
  7.2 AI Model Training & Validation
  7.3 Infrastructure & Scaling
  7.4 Advanced Features: Transaction Selection & Fee Optimization
8 Security Considerations
  8.1 Network Security & Miner Authentication
  8.2 Prevention of Malicious or Faulty Miners
  8.3 Resilience Against Pool Attacks
  8.4 Code Audits and Governance
9 Benefits and Trade-Offs
  9.1 Efficiency & Sustainability
  9.2 Inclusivity
  9.3 Adaptability
  9.4 Complexity & Maintenance
10 Extended Roadmap
  10.1 Phase 1: Development & Testing
  10.2 Phase 2: Pilot Deployment
  10.3 Phase 3: Full-Scale Implementation
  10.4 Phase 4: Cross-Blockchain Expansion
  10.5 Phase 5: Transaction Optimization & Mempool Analytics
11 Conclusion
Abstract
The AI-Powered Collaborative Mining Pool (AICMP) introduces a comprehensive
solution to longstanding issues in Bitcoin mining pool operation. By integrating Reinforce-
ment Learning (RL) for dynamic share allocation, advanced predictive analytics (for both
network difficulty and market forecasting), and transparent weighted reward distribution,
AICMP addresses suboptimal resource usage and inequitable payouts. With a focus on
fairness, adaptiveness, and long-term scalability, AICMP aspires to create a more inclusive,
profitable, and ecologically responsible mining landscape. This whitepaper provides an in-
depth view of AICMP’s architecture, mathematical models, and security considerations to
guide future adopters in research, development, and implementation.

1 Introduction
Bitcoin, the first decentralized cryptocurrency, secures its ledger via a Proof-of-Work
(PoW) consensus algorithm. Miners, using specialized hardware (ASICs, FPGAs, and
occasionally GPUs), compete to solve cryptographic puzzles to validate new blocks. Over
the years, escalating hash power has driven miners to pool resources, ensuring more frequent
payouts and smoothing out income variance.
Despite mining pools being integral to the ecosystem, many operate with limited adap-
tation to changing conditions. Traditional pools fix share difficulty uniformly, neglect hard-
ware heterogeneity, local energy costs, or sudden shifts in Bitcoin’s market price. As a result,
large-scale industrial miners often dominate, while smaller participants struggle or abandon
mining entirely.
AICMP aims to bridge these gaps by using AI-based resource orchestration and data-
driven decision-making [3, 4]. It redistributes tasks based on miner performance profiles,
forecasts future network parameters to optimize earnings, and ensures that smaller players
receive proportionately fair payouts. Through a combination of mathematical modeling,
blockchain-based transparency, and continuous reinforcement learning, AICMP could serve
as a blueprint for the next generation of mining pools.

2 Background: Bitcoin Mining & Current Pool Limitations

2.1 Overview of the Bitcoin Protocol
Bitcoin’s security model relies on solving computationally expensive SHA-256 hashes. The
network automatically adjusts its difficulty every 2,016 blocks (roughly two weeks) to maintain a
10-minute block interval. When a miner finds a valid block (hash < difficulty target), they
receive the block reward (currently 3.125 BTC, halved roughly every four years) plus any
transaction fees included. This structure incentivizes miners to continually expand or up-
grade their hardware to maintain competitiveness, a phenomenon strongly observed since
Bitcoin’s inception [1, 9, 10].

2.2 Mining Pools: Evolution and Common Models
As individual miners found it difficult to attain consistent payouts, mining pools emerged
to aggregate computational power. Popular pool reward methods include:

• Proportional: Rewards in a round are proportional to the number of valid shares
each miner contributes before the pool solves a block [2, 11].

• PPS (Pay-Per-Share): Each valid share has a fixed payout, providing predictable
income for miners but transferring variance risk to the pool operator.

• PPLNS (Pay-Per-Last-N-Shares): Only the last N shares prior to block discovery
matter for reward calculation, reducing potential “pool-hopping” exploits.

While these models introduced crucial trust and fairness concepts, they generally ignore
a miner’s actual power efficiency (Ei ), local costs, or real-time hardware constraints. Fur-
thermore, the lack of adaptive difficulty for each miner often results in inefficient resource
usage, and minimal attention is paid to short-term market or difficulty trends [3, 4, 11].

2.3 Gaps in Existing Pool Designs


1. Inefficient Resource Use: Uniform share distribution does not exploit differences in
miners’ ASIC models, power profiles, or network conditions.

2. High Entry Barriers: Smaller operations, constrained by less powerful hardware or
higher electricity costs, receive marginal payouts.

3. Opaque Reward Schemes: Many pools rely on black-box methods for calculating
shares and fees, which can degrade trust among participants.

4. Limited Real-Time Adaptation: Market volatility and difficulty spikes can sud-
denly erode profitability, and traditional pools rarely adjust immediately to new con-
ditions.

3 Problem Statement
1. Inefficient Resource Allocation: Uniform distribution of mining tasks overlooks
hardware diversity, leading to wasted energy and underutilized capacity.

2. Barriers for Smaller Miners: Large pools become more profitable due to economies
of scale, leaving small contributors with minimal rewards.

3. Revenue Distribution Inequity: Linear reward models fail to incentivize smaller
participants to remain in the mining ecosystem, undermining decentralization.

4. Lack of Adaptability: Existing pools generally do not leverage advanced predictive
or optimization techniques, making them vulnerable to economic shifts.

4 AICMP Core Design and Features
4.1 Dynamic Task Allocation
AICMP employs an AI-driven Task Allocation Engine that uses real-time data to tailor
share difficulty to each miner’s performance profile. Key inputs include:

• Hash Rate (Hi ): The speed at which a miner attempts solutions.

• Power Efficiency (Ei ): The ratio of hash rate to energy consumption.

• Latency (Li ): Average network round-trip time, impacting how quickly shares are
submitted and validated.

By matching share difficulty to these metrics, high-throughput ASICs can handle more
complex tasks, while smaller or energy-constrained devices receive proportionally lighter
workloads. This ensures a more efficient use of aggregated hash power, reducing wasted
energy from overburdened miners [4, 12] and maximizing the pool’s effective hash rate on
the network.
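
A minimal sketch of how such per-miner difficulty targeting might be computed is shown below. The MinerProfile fields, the 30-second share interval, and the latency heuristic are illustrative assumptions rather than a fixed AICMP specification; the energy term enters mainly through the pool-level constraints of Section 6.1.

```python
from dataclasses import dataclass

@dataclass
class MinerProfile:
    hash_rate_ths: float        # H_i, terahashes per second
    efficiency_j_per_th: float  # E_i proxy; used by the allocation engine's energy
                                # constraints, not by this particular heuristic
    latency_ms: float           # L_i, average round-trip time

def assign_share_difficulty(miner: MinerProfile,
                            pool_min_difficulty: float,
                            target_share_interval_s: float = 30.0) -> float:
    """Pick a share difficulty so the miner submits roughly one share per
    target interval, then dampen it for high-latency connections."""
    hashes_per_interval = miner.hash_rate_ths * 1e12 * target_share_interval_s
    # One share of difficulty d takes about d * 2**32 hashes on average.
    raw_difficulty = hashes_per_interval / 2**32
    # Heuristic: lower the target slightly for laggy miners to limit stale shares.
    latency_factor = 1.0 / (1.0 + miner.latency_ms / 1000.0)
    return max(pool_min_difficulty, raw_difficulty * latency_factor)

# Example: a 100 TH/s ASIC with 50 ms round-trip latency.
print(assign_share_difficulty(MinerProfile(100.0, 25.0, 50.0), pool_min_difficulty=1.0))
```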

4.2 Network and Market Prediction


AICMP’s Predictive Analytics Unit uses machine learning models—especially time-series
neural networks (e.g., RNN, LSTM)—to forecast:

• Upcoming Difficulty Adjustments (Dt+1 )

• Bitcoin Spot Price (Pbtc,t+1 )

• Potential Mempool Congestion for transaction fee optimization

By analyzing historical volatility patterns alongside real-time market signals, the system
can proactively scale share difficulties or energy allocations. This predictive approach aims to
maintain profitability and stay agile during sudden price swings or difficulty jumps [3, 13, 14].
Additionally, the system can integrate external data (e.g., global crypto market trends, local
energy prices) for more accurate modeling.

4.3 Equitable Revenue Distribution


AICMP incentivizes smaller miners through a weighted reward scheme. Instead of a
strictly linear ratio of hash rate contributions, a non-linear exponent η < 1 is applied:

R_i = \frac{H_i^{\eta}}{\sum_{j=1}^{n} H_j^{\eta}} \times \text{Block Reward}.

This mathematical formulation ensures that while large miners still earn more due to
higher Hi , smaller miners receive a greater share than they would under purely linear distri-
bution. This approach is designed to bolster decentralization, maintain trust, and encourage
broader participation, ultimately supporting the security of the Bitcoin network [9, 19].
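
For illustration, with two miners at 100 TH/s and 10 TH/s and η = 0.8, the smaller miner's share of the reward rises from roughly 9% under a purely linear split to about 14% under the weighted scheme, while the larger miner still receives the clear majority.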

4.4 Reinforcement Learning for Optimization
AICMP’s orchestration leverages Reinforcement Learning (RL) algorithms to continu-
ously optimize the pool’s allocation policies. By modeling the pool’s operational environ-
ment—miner states, incoming data, block difficulty, and reward outcomes—as a Markov
Decision Process (MDP), the system trains a policy π that maximizes long-term profit.
RL’s iterative nature is well-suited for dynamic, sequential decision-making and can adapt
to evolving hardware and market conditions over time [5, 6, 7, 12].

5 Technical Architecture
5.1 AI Orchestration Layer
The AI Orchestration Layer is the central hub of AICMP, containing four primary sub-
modules:

1. Data Collection Module

• Gathers miner metrics—Hi , Ei , Li —through secure protocols (e.g., Stratum V2,
WebSockets).
• Aggregates and normalizes incoming data in real-time, storing it in a time-series
database [8, 15].
• Maintains robust monitoring for detecting outliers or anomalies (e.g., sudden
drops in hash rate).

2. Task Allocation Engine

• Applies the RL policy to assign share difficulties, solving a constrained optimization
problem to meet the pool’s efficiency targets.
• Updates assignments on intervals of a few seconds to minutes, depending on pool
size and volatility.
• Communicates directly with miners, ensuring minimal latency in share assign-
ments.

3. Predictive Analytics Unit

• Trains LSTM-based models on historical difficulty, price data, and mempool sta-
tus.
• Offers near-future estimates of block intervals, network difficulty, and potential
transaction fee outcomes [14, 16].
• Integrates with the RL agent, allowing the policy to account for likely future
states.

4. Policy Management & Reinforcement Learning Module

• Implements RL algorithms (e.g., Proximal Policy Optimization (PPO), A2C,
DQN) that control resource distribution.
• Maintains a replay buffer of (s, a, r) tuples to refine the policy over time [5, 7].
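
A minimal sketch of such a replay buffer is given below; it stores the successor state alongside the (s, a, r) tuple, as most off-policy methods require, and the fixed capacity and uniform sampling are assumptions rather than a prescribed design.

```python
import random
from collections import deque

class ReplayBuffer:
    """Fixed-size store of (state, action, reward, next_state) transitions,
    sampled uniformly for off-policy RL updates."""

    def __init__(self, capacity: int = 100_000):
        self.buffer = deque(maxlen=capacity)  # oldest transitions drop off automatically

    def push(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size: int):
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states = zip(*batch)
        return states, actions, rewards, next_states

    def __len__(self):
        return len(self.buffer)
```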

5.2 Miner Interface Layer


The Miner Interface Layer provides tools and dashboards to:
• Visualize each miner’s real-time performance, including submitted shares, accepted
shares, and estimated rewards.
• Configure operational parameters (e.g., max power usage, temperature thresholds).
• Notify users when unusual conditions arise, such as large latency spikes or critical
hardware failures.
A user-friendly interface is paramount for fostering trust and transparency, especially
among miners who might be unfamiliar with machine learning technologies.

5.3 Revenue Distribution Module


Once the pool successfully mines a block, the block reward and transaction fees go to the
pool’s coinbase address. The Revenue Distribution Module then:
1. Computes each miner’s payout (Ri ) using the η-weighted formula.
2. Executes the payout automatically, ensuring an immutable audit trail.
3. Retains a pool fee (δ) to finance server infrastructure, AI research, and other opera-
tional costs.

5.4 Feedback and Learning Loop


All operational data (e.g., frequency of mined blocks, forecast accuracy, miner performance
shifts) feeds back into the AI Orchestration Layer. This closed-loop system refines the
entire pipeline, continuously tuning share difficulty, adjusting the weighting exponent η if
necessary, and improving forecast models for upcoming epochs.

5.5 Security, Trust & Communication Protocol


AICMP employs multiple layers of network security to protect against attacks:
• Encryption (TLS/SSL): Shields share submissions from interception or spoofing.
• Miner Authentication: Unique certificates or cryptographic keys validate each miner’s
identity.
• DDoS Protection: Distributed architecture, load balancers, and rate-limiting mech-
anisms help maintain pool uptime in hostile environments.

6 Mathematical and Algorithmic Formulations
6.1 Task Allocation Optimization
Let the pool be composed of n miners, each with:
Hash rate: Hi , Power consumption: Ei , Latency: Li .
Define an objective to optimize the pool’s effective efficiency:
\max \sum_{i=1}^{n} \frac{H_i}{E_i},

subject to constraints ensuring L_i \le L_{\max} for each miner and

P_{\text{eff}} = \frac{\sum_{i=1}^{n} H_i}{\sum_{i=1}^{n} E_i} \ge P_{\min}.

To keep block solution times stable, a maximum pool hash power target (Htarget ) can also
be imposed. Optimization can be done with Lagrangian multipliers or mixed-integer
linear programming if share difficulties are discrete [11, 17].
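
One way to make this concrete is to relax the problem and let x_i ∈ [0, 1] denote the fraction of miner i's capacity the pool activates; the efficiency floor then linearizes to \sum_i x_i (H_i - P_{\min} E_i) \ge 0. The sketch below solves this relaxation with SciPy. The variable names, the latency filter, and the relaxation itself are illustrative assumptions, not AICMP's prescribed solver.

```python
import numpy as np
from scipy.optimize import linprog

def allocate_fractions(H, E, L, L_max, P_min, H_target):
    """LP relaxation of the task-allocation problem: choose activation
    fractions x_i in [0, 1] maximizing sum(x_i * H_i / E_i)."""
    H, E, L = map(np.asarray, (H, E, L))
    eligible = L <= L_max                       # hard latency filter
    c = -(H / E) * eligible                     # linprog minimizes, so negate
    A_ub = np.vstack([
        H * eligible,                           # total activated hash power <= H_target
        -(H - P_min * E) * eligible,            # linearized efficiency floor
    ])
    b_ub = np.array([H_target, 0.0])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0.0, 1.0)] * len(H), method="highs")
    return res.x * eligible                     # zero out miners filtered by latency

# Example: three miners, one of them too laggy to participate.
x = allocate_fractions(H=[100.0, 30.0, 50.0], E=[3.0, 1.5, 2.0],
                       L=[40.0, 80.0, 900.0], L_max=500.0,
                       P_min=20.0, H_target=120.0)
print(x)
```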

6.2 Predictive Modeling (Difficulty & Price)


AICMP’s difficulty model can follow a simplified linear pattern:

D_{t+1} = \alpha D_t + \beta H_{\text{pool},t} + \gamma \nabla P_{\text{btc},t} + \epsilon_t,

or employ Long Short-Term Memory (LSTM) networks [3, 13] to learn complex temporal dependencies:

\hat{D}_{t+1} = f_{\text{LSTM}}(D_t, D_{t-1}, \ldots).

For price:

\hat{P}_{\text{btc},t+1} = g_{\text{LSTM}}(P_{\text{btc},t}, \text{market data}, \ldots).
Minimizing the Mean Squared Error (MSE) or Mean Absolute Percentage Error (MAPE)
allows fine-tuning the forecast models, enabling quicker and more confident reallocation of
tasks.
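
A compact PyTorch sketch of such an LSTM forecaster, trained with the MSE objective mentioned above, might look as follows; the layer sizes, window length, and single-feature input are illustrative assumptions rather than a tuned configuration.

```python
import torch
import torch.nn as nn

class DifficultyLSTM(nn.Module):
    """Maps a window of past difficulty values to a one-step-ahead forecast."""

    def __init__(self, hidden_size: int = 32, num_layers: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, window, 1) of normalized past difficulties D_t, D_{t-1}, ...
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])        # use the last time step's hidden state

# Minimal training step on synthetic data, minimizing MSE as in the text.
model = DifficultyLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

window = torch.randn(16, 24, 1)                # 16 sequences of 24 past readings
target = torch.randn(16, 1)                    # next-epoch difficulty (normalized)
loss = loss_fn(model(window), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```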

6.3 Reward Distribution Schemes


1. Linear:

R_i^{\text{(linear)}} = \frac{H_i}{\sum_{j=1}^{n} H_j} \times \text{Block Reward} \times (1 - \delta).

2. Weighted (η < 1):

R_i^{\text{(weighted)}} = \frac{H_i^{\eta}}{\sum_{j=1}^{n} H_j^{\eta}} \times \text{Block Reward}.

By raising Hi to a power less than 1, smaller miners obtain a proportionally higher fraction
of the total. The parameter η can be adjusted after periodic governance votes or via the RL
module, balancing inclusivity and large-miner retention [9, 19].
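
For concreteness, both schemes can be expressed in a few lines. In the sketch below the pool fee δ is applied to both variants for comparability, which goes slightly beyond the formulas above; the names and the default values are illustrative.

```python
def linear_payouts(hash_rates, block_reward, pool_fee=0.02):
    """R_i proportional to H_i, net of the pool fee delta."""
    total = sum(hash_rates.values())
    return {m: (h / total) * block_reward * (1 - pool_fee)
            for m, h in hash_rates.items()}

def weighted_payouts(hash_rates, block_reward, eta=0.8, pool_fee=0.02):
    """R_i proportional to H_i**eta; eta < 1 favors smaller miners."""
    total = sum(h ** eta for h in hash_rates.values())
    return {m: (h ** eta / total) * block_reward * (1 - pool_fee)
            for m, h in hash_rates.items()}

rates = {"farm_a": 100.0, "farm_b": 10.0, "hobbyist": 1.0}
print(linear_payouts(rates, block_reward=3.125))
print(weighted_payouts(rates, block_reward=3.125))
```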

6.4 Reinforcement Learning Framework
We define:
• State st : A snapshot of the pool’s operational status (Hpool,t , Epool,t , Dt , Pbtc,t , etc.).

• Action at : The share difficulty assignments or changes in weighting factor η.

• Reward rt : The net earnings over cost within a time step t.

• Transition P (st+1 | st , at ): Shaped by miner behavior and external network events
[5, 6, 7, 12].
The goal is to learn a policy π that maximizes the expected discounted return:

\max_{\pi} \; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right].

A variety of RL algorithms (e.g., PPO, A2C, DQN with modifications) can handle either
discrete or continuous share difficulty spaces.
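
As a minimal illustration of this MDP, the toy environment below tracks a small pool state, applies a single continuous action (a global difficulty-scaling factor), and returns revenue minus energy cost as the per-step reward. The dynamics and constants are invented for the sketch and are not calibrated to real pool behavior.

```python
import random

class PoolEnv:
    """Toy MDP: state = (pool hash rate, energy draw, difficulty, BTC price),
    action = a multiplicative adjustment to assigned share difficulty,
    reward = revenue minus energy cost over one time step."""

    def __init__(self, energy_price=0.05):
        self.energy_price = energy_price
        self.reset()

    def reset(self):
        self.state = {"H_pool": 1000.0, "E_pool": 30_000.0,
                      "difficulty": 80e12, "price": 60_000.0}
        return self.state

    def step(self, action: float):
        s = self.state
        # Toy dynamics: an over- or under-aggressive difficulty scale wastes work.
        utilization = max(0.0, 1.0 - abs(action - 1.0))
        expected_blocks = 0.001 * s["H_pool"] * utilization / (s["difficulty"] / 80e12)
        revenue = expected_blocks * 3.125 * s["price"]   # 3.125 BTC block subsidy
        cost = s["E_pool"] * self.energy_price
        # Difficulty and price drift randomly between steps.
        s["difficulty"] *= random.uniform(0.99, 1.01)
        s["price"] *= random.uniform(0.97, 1.03)
        return s, revenue - cost, False                  # state, r_t, done flag
```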

7 Implementation Methodology
7.1 Data Pipeline and Storage
A robust data pipeline is essential for near-real-time AI decisions:
• Ingestion: Use Apache Kafka or RabbitMQ to handle large volumes of miner
metrics.

• Time-Series Database: InfluxDB or Prometheus for streaming writes and real-time
queries [15, 18].

• Historical Archive: Relational (e.g., PostgreSQL) or cloud-based warehouses (BigQuery)
for large-scale offline ML training.
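
As an illustration of the ingestion path, the sketch below publishes one miner telemetry sample to a Kafka topic using the kafka-python client; the broker address, topic name, and payload fields are assumptions, not a fixed AICMP schema.

```python
import json
import time
from kafka import KafkaProducer  # kafka-python client

# Broker address, topic name, and payload fields are illustrative assumptions.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

def publish_miner_metrics(miner_id: str, hash_rate_ths: float,
                          power_kw: float, latency_ms: float) -> None:
    """Push one telemetry sample onto the ingestion topic for downstream
    aggregation into the time-series store."""
    producer.send("miner-metrics", value={
        "miner_id": miner_id,
        "hash_rate_ths": hash_rate_ths,
        "power_kw": power_kw,
        "latency_ms": latency_ms,
        "timestamp": time.time(),
    })

publish_miner_metrics("asic-0042", 110.0, 3.2, 45.0)
producer.flush()
```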

7.2 AI Model Training & Validation


1. Offline Training: Collect historical difficulty, price, and miner performance logs to
initialize LSTM or RL agents in a simulated environment.

2. Online Fine-Tuning: Deploy continuous or periodic retraining, adapting to newly
observed conditions (e.g., evolving ASIC technology, major price swings).

3. Validation Metrics:

• Predictive: MSE, RMSE, MAPE for difficulty/price forecasting.

• RL: Cumulative reward, policy convergence rate, real-world profitability improvements [5, 7, 12, 13].
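
The forecast-quality metrics above reduce to a few NumPy lines; the sketch below is illustrative and assumes nonzero ground-truth values for MAPE.

```python
import numpy as np

def forecast_metrics(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MSE, RMSE, and MAPE for difficulty or price forecasts."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    rmse = float(np.sqrt(mse))
    mape = float(np.mean(np.abs(err / y_true)) * 100.0)  # assumes y_true != 0
    return {"mse": mse, "rmse": rmse, "mape_pct": mape}

print(forecast_metrics(np.array([80e12, 82e12]), np.array([79e12, 83e12])))
```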

7.3 Infrastructure & Scaling
• Cloud vs. On-Prem: Large-scale training (e.g., RL or LSTM) may be hosted on
GPU clusters in the cloud, while time-critical allocation services run on edge servers.

• Microservices: Decompose the system—data collection, RL engine, predictor, reward
distributor—into loosely coupled services. This ensures each can be scaled or updated
independently.

• Geo-Distribution: Locate servers strategically to reduce latency for miners worldwide,
bridging multiple continents if needed for high availability.

7.4 Advanced Features: Transaction Selection & Fee Optimization


By analyzing mempool data, AICMP can integrate transaction selection strategies:

• Greedy: Pick the highest-fee transactions until block space is filled.

• RL-based: Use short-term predictions of future blocks or mempool congestion to
maximize total fees over multiple blocks [14, 16].

• Layer-2 Synergies: Potential to incorporate LN or off-chain aggregator strategies for
additional profitability or network load balancing.
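
A bare-bones version of the greedy strategy might look like the sketch below; it ignores parent/child (package) dependencies, and the block-size budget and type names are illustrative assumptions.

```python
from typing import NamedTuple

class MempoolTx(NamedTuple):
    txid: str
    fee_sats: int
    vsize: int        # virtual size in vbytes

MAX_BLOCK_VSIZE = 1_000_000   # approximate block template budget, illustrative

def greedy_select(mempool: list[MempoolTx]) -> list[MempoolTx]:
    """Fill the block template with the highest fee-rate transactions first.
    Ignores package (parent/child) relationships for brevity."""
    selected, used = [], 0
    for tx in sorted(mempool, key=lambda t: t.fee_sats / t.vsize, reverse=True):
        if used + tx.vsize <= MAX_BLOCK_VSIZE:
            selected.append(tx)
            used += tx.vsize
    return selected
```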

8 Security Considerations
8.1 Network Security & Miner Authentication
• Encrypted Protocols: Use TLS/SSL or Stratum V2 cryptographic channels to pre-
vent sniffing or share tampering [8, 15].

• Authentication: Each miner receives unique credentials, preventing impersonation
or unauthorized usage.

8.2 Prevention of Malicious or Faulty Miners


• Share Validation: Reject invalid shares and flag miners whose invalid-share rate is
abnormally high, which can indicate hardware faults or malicious attempts.

• Hardware Fingerprinting: Optional tracking of ASIC model signatures to detect
mismatched or duplicated device IDs [9, 11].
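
One simple way to operationalize the share-validation check is a rolling invalid-share ratio per miner, as sketched below; the window size, 5% threshold, and minimum sample count are illustrative assumptions.

```python
from collections import defaultdict, deque

class ShareMonitor:
    """Track recent share outcomes per miner and flag abnormal invalid rates."""

    def __init__(self, window: int = 1000, max_invalid_ratio: float = 0.05,
                 min_samples: int = 100):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.max_invalid_ratio = max_invalid_ratio
        self.min_samples = min_samples

    def record(self, miner_id: str, share_valid: bool) -> None:
        self.history[miner_id].append(share_valid)

    def is_suspect(self, miner_id: str) -> bool:
        shares = self.history[miner_id]
        if len(shares) < self.min_samples:
            return False                        # not enough evidence yet
        invalid_ratio = 1.0 - sum(shares) / len(shares)
        return invalid_ratio > self.max_invalid_ratio
```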

8.3 Resilience Against Pool Attacks


• DDoS Mitigation: Multi-region nodes, robust load balancers, and dynamic IP black-
listing protect uptime.

8.4 Code Audits and Governance
• Open-Source Releases: Community transparency fosters trust and contributions to
improve security or efficiency.

• Governance Model: On-chain voting or a decentralized committee can decide changes
to key parameters like η, pool fees, or RL hyperparameters.

9 Benefits and Trade-Offs


9.1 Efficiency & Sustainability
• Pros: Dynamic share allocation exploits hardware diversity, potentially reducing over-
all energy per valid share [12, 17].

• Cons: Operating RL models requires significant compute resources and specialized AI
expertise.

9.2 Inclusivity
• Pros: Weighted reward mechanisms (η < 1) keep smaller miners in the game, promot-
ing decentralization [9, 19].

• Cons: Some large farms may feel slighted if they don’t receive purely linear returns.

9.3 Adaptability
• Pros: Predictive analytics let the pool adjust to real-time changes in difficulty, price,
or network conditions.

• Cons: Forecast errors or unforeseen market disruptions can result in suboptimal allo-
cation decisions, requiring robust fallback strategies.

9.4 Complexity & Maintenance


• Pros: Advanced automation via RL can reduce manual oversight once stable.

• Cons: Maintaining AI/ML systems requires specialized knowledge, ongoing data cu-
ration, and frequent code audits [18, 20].

10 Extended Roadmap
10.1 Phase 1: Development & Testing
• Initial AI Modules: Implement basic RL (e.g., DQN or PPO) and simple forecasting
on historical data.

• Simulation Environment: Create a virtual network of diverse miners to test how
dynamic allocation affects energy efficiency, payout distribution, and system stability.

• Reward Mechanism Tuning: Experiment with different η values (e.g., 0.8, 0.9) to
balance inclusivity and large-miner buy-in.

10.2 Phase 2: Pilot Deployment


• Testnet Deployment: Roll out a minimal version of AICMP on a test network or
alternative PoW chain.

• Empirical Feedback: Monitor real miner behaviors, track predictive model perfor-
mance, and refine the RL policy in live conditions.

10.3 Phase 3: Full-Scale Implementation


• Production-Ready AI: Use advanced RL algorithms (A2C, PPO, multi-agent RL)
capable of handling thousands of miners concurrently.

• Global Infrastructure: Set up data centers in multiple regions, with local caching
and edge servers to minimize latency.

• Integration Partnerships: Coordinate with major mining hardware vendors to facilitate
real-time telemetry and streamlined onboarding.

10.4 Phase 4: Cross-Blockchain Expansion


• Multi-Coin Pools: Extend AICMP’s AI-driven framework to other PoW currencies
(e.g., Litecoin, Zcash).

• Coin-Switching Logic: Dynamically switch between chains to capitalize on varying
profitability, while remaining mindful of potential negative externalities on smaller networks.

10.5 Phase 5: Transaction Optimization & Mempool Analytics


• Fee Maximization: Incorporate mempool scanning algorithms that prioritize high-
fee transactions, increasing overall miner earnings.

• Potential Layer-2 Integration: Investigate synergy with LN or sidechains where
consolidated transactions yield higher fees at lower network overhead [14, 16, 19].

11 Conclusion
The AI-Powered Collaborative Mining Pool (AICMP) provides a holistic strategy to
upgrade the current mining ecosystem. By implementing reinforcement learning to allocate
tasks, combining predictive analytics for real-time difficulty and price forecasts, and intro-
ducing an η-based weighted reward scheme, AICMP can simultaneously tackle inefficiency,
encourage inclusivity, and maintain profitability. The system’s architecture, mathemati-
cal frameworks, and security features illustrate how cutting-edge AI research can merge
with decentralized blockchain infrastructures. AICMP thereby points the way to a fairer,
more sustainable, and efficient future for Bitcoin mining [1, 9, 10, 19].

References
[1] Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System.

[2] Rosenfeld, M. (2011). Analysis of Bitcoin Pooled Mining Reward Systems.

[3] Garay, J., Kiayias, A., & Leonardos, N. (2015). The Bitcoin Backbone Protocol: Anal-
ysis and Applications. In Eurocrypt.

[4] Decker, C., & Wattenhofer, R. (2013). Information propagation in the Bitcoin network.
In IEEE P2P.

[5] Mnih, V., et al. (2015). Human-level control through deep reinforcement learning. Na-
ture, 518, 529–533.

[6] Lillicrap, T. P., et al. (2015). Continuous control with deep reinforcement learning.
arXiv preprint arXiv:1509.02971.

[7] Schulman, J., et al. (2017). Proximal policy optimization algorithms. arXiv:1707.06347.

[8] Stratum V2 Protocol (2021). [Online]. Available: https://braiins.com/stratum-v2

[9] Boneh, D., & Shoup, V. (2020). A Graduate Course in Applied Cryptography. Draft
manuscript.

[10] Garzik, J. (2015). O(1) block propagation. Bitcoin developer mailing list.

[11] Rosenfeld, M. (2012). More analysis of Bitcoin pooled mining reward systems.
arXiv:1112.4980.

[12] Silver, D., et al. (2016). Mastering the game of Go with deep neural networks and tree
search. Nature, 529, 484–489.

[13] Abadi, M., et al. (2016). TensorFlow: A System for Large-scale Machine Learning. OSDI
‘16.

[14] Schulman, J., et al. (2015). Trust region policy optimization. In ICML.

[15] Kreps, J., Narkhede, N., & Rao, J. (2011). Kafka: A distributed messaging system for
log processing. NetDB.

[16] Lurie, E. (2020). Mempool analytics and fee estimation in Bitcoin. arXiv:2010.00541.

[17] Demers, A., Greene, D., et al. (1987). Epidemic algorithms for replicated database
maintenance. In ACM SOSP.

[18] Dean, J., & Ghemawat, S. (2004). MapReduce: Simplified data processing on large
clusters. In OSDI.

[19] Kiayias, A., & Panagiotakos, G. (2016). Speed-security tradeoffs in blockchain protocols.
IACR ePrint.

[20] Bach, F., & Moulines, E. (2013). Non-strongly convex smooth stochastic approximation.
NIPS.
