Title: Branch Predictor Design

Aim:
Develop and simulate a branch predictor using algorithms such as G-share or Two-Level Adaptive Predictors.
Measure its performance on benchmark programs.

Submitted by [Link]
6302820532
erssayyappa@[Link]
[Link]@[Link]
Phase 1: Understanding Branch Prediction and Algorithms
Step 1: Research Fundamentals of Branch Prediction
● Goal: Gain a solid understanding of why branch prediction is crucial in modern
processors.
● Activities:
o Read introductory material on pipelining and control hazards. Understand how
branches can stall the pipeline.
o Learn the basic concepts of branch prediction: predicting whether a branch
will be taken or not taken.
o Explore different types of branch predictors (static vs. dynamic). Your project
focuses on dynamic predictors.
Step 2: In-depth Study of G-share Predictor
● Goal: Understand the architecture and operation of the G-share branch predictor.
● Activities:
o Research the G-share algorithm in detail. Pay attention to:
▪ Global History Register (GHR), also called the global Branch History Register (BHR): how it records the taken/not-taken outcomes of the most recent branches in a shift register.
▪ Pattern History Table (PHT): how it stores prediction information (saturating counters) indexed by the combination of history and branch address.
▪ Hashing function: how the branch address and GHR are combined (typically XORed) to
index into the PHT.
o Understand the prediction mechanism: how the PHT entry is used to make a
prediction (taken or not taken).
o Learn how the PHT is updated based on the actual outcome of the branch.
Step 3: In-depth Study of Two-Level Adaptive Predictor
● Goal: Understand the architecture and operation of a Two-Level Adaptive branch
predictor.
● Activities:
o Research the Two-Level Adaptive prediction scheme. Understand the different
levels involved. A common configuration is the (m,n) predictor:
▪ Level 1: Branch History Table (BHT) indexed by some bits of the
branch address. Each entry stores an n-bit history of that specific
branch.
▪ Level 2: Pattern History Table (PHT) indexed by the n-bit history from
Level 1. Each entry stores a saturating counter for prediction.
o Understand the prediction mechanism and the update process for both levels.
o Explore variations of two-level predictors (e.g., with different indexing
schemes).
Step 4: Compare and Contrast G-share and Two-Level Adaptive Predictors
● Goal: Identify the strengths and weaknesses of each algorithm.
● Activities:
o Compare the complexity (hardware cost), accuracy, and responsiveness to
different branch patterns of G-share and Two-Level Adaptive predictors.
o Consider scenarios where one predictor might perform better than the other.
Phase 2: Design and Simulation
Step 5: Choose a Simulation Environment
● Goal: Select a suitable tool for implementing and simulating your branch predictors.
● Options:
o Software-based simulators: Tools like SimpleScalar, gem5 (more complex),
or even writing your own simulator in a language like Python or C++. These
offer flexibility but might require more setup.
o Hardware Description Languages (HDLs): If your internship involves
hardware design, you might use VHDL or Verilog and a simulation tool like
ModelSim or Vivado Simulator. This is more complex but provides a
hardware-level perspective.
● Recommendation (for a typical internship project): Starting with a software-based
simulator (like a simplified Python implementation) might be more manageable for
understanding the core logic. If your internship focuses on hardware, your mentors
will likely guide you on the appropriate HDL and tools.
Step 6: Implement the G-share Predictor
● Goal: Translate the G-share algorithm into code within your chosen simulation
environment.
● Activities:
o Define the data structures for the BHR/GHR and PHT. Consider the size of
these structures (number of entries, bit width).
o Implement the logic for:
▪ Updating the GHR based on branch outcomes (taken/not taken).
▪ Calculating the index into the PHT using the branch address and GHR.
▪ Making a prediction based on the PHT entry (e.g., using a 2-bit
saturating counter).
▪ Updating the PHT entry based on the actual branch outcome.
Step 7: Implement the Two-Level Adaptive Predictor
● Goal: Translate the Two-Level Adaptive algorithm into code within your chosen
simulation environment.
● Activities:
o Define the data structures for the BHT and PHT. Consider the size of these
structures.
o Implement the logic for:
▪ Updating the BHT for the specific branch.
▪ Using the history from the BHT to index into the PHT.
▪ Making a prediction based on the PHT entry.
▪ Updating the PHT entry based on the actual branch outcome.
Step 8: Prepare Benchmark Programs
● Goal: Obtain or create a set of benchmark programs with representative branch
behavior.
● Activities:
o Look for publicly available benchmark suites (e.g., parts of SPEC CPU
benchmarks, if your simulator supports them).
o Alternatively, you can create simpler synthetic benchmarks with different
branch patterns (e.g., loops with predictable branches, code with more random
branching); a minimal trace-generator sketch follows this step.
o Ensure your simulator can process these benchmark programs and provide a
trace of branch instructions and their actual outcomes.
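A minimal sketch of such a synthetic trace generator, emitting (branch_address, outcome) pairs; the addresses, branch patterns, and function name are illustrative assumptions, not part of any standard benchmark suite:

import random

def generate_synthetic_trace(num_iterations=1000, seed=0):
    """Generate an illustrative branch trace as (branch_address, outcome) pairs.

    Models three hypothetical branches:
      0x100 - loop-closing branch, taken 9 times out of 10
      0x104 - alternating branch, taken every second time
      0x108 - effectively random branch
    """
    rng = random.Random(seed)
    trace = []
    for i in range(num_iterations):
        trace.append((0x100, "taken" if i % 10 != 9 else "not taken"))
        trace.append((0x104, "taken" if i % 2 == 0 else "not taken"))
        trace.append((0x108, "taken" if rng.random() < 0.5 else "not taken"))
    return trace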
Step 9: Simulate and Collect Performance Data
● Goal: Run your implemented predictors on the benchmark programs and gather
performance metrics.
● Activities:
o For each benchmark program and each predictor (G-share and Two-Level
Adaptive):
▪ Initialize the predictor structures.
▪ Process the branch trace of the benchmark program, one branch at a
time.
▪ For each branch:
▪ Make a prediction using the current state of the predictor.
▪ Compare the prediction with the actual outcome.
▪ Update the predictor state based on the actual outcome.
▪ Record whether the prediction was correct or incorrect.
o Calculate the prediction accuracy for each benchmark and each predictor.
Accuracy is typically calculated as:
Accuracy (%) = (Number of Correct Predictions / Total Number of Branches) × 100
For example, 9,450 correct predictions out of 10,000 branches gives 94.5% accuracy. A minimal evaluation-loop sketch follows this step.
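The per-branch loop described above maps directly to code. A minimal sketch, assuming a predictor object that exposes predict(branch_address) and update(branch_address, actual_outcome) methods (as the implementations in the CODES section of this document do) and a trace of (address, outcome) pairs:

def evaluate(predictor, trace):
    """Run a predictor over a branch trace and return prediction accuracy in percent.

    trace: iterable of (branch_address, actual_outcome) pairs,
           where actual_outcome is "taken" or "not taken".
    """
    correct = 0
    total = 0
    for branch_address, actual_outcome in trace:
        prediction = predictor.predict(branch_address)    # predict with the current state
        if prediction == actual_outcome:
            correct += 1
        predictor.update(branch_address, actual_outcome)  # train on the real outcome
        total += 1
    return 100.0 * correct / total if total else 0.0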
Phase 3: Analysis and Reporting
Step 10: Analyze the Results
● Goal: Interpret the performance data you collected.
● Activities:
o Compare the prediction accuracy of the G-share and Two-Level Adaptive
predictors across the different benchmark programs.
o Analyze why one predictor might perform better than the other on specific
benchmarks. Consider the types of branch patterns present in those
benchmarks.
o Experiment with different parameters of the predictors (e.g., length of the
GHR, size of the PHT, length of the per-branch history) and observe their impact on
accuracy; a minimal parameter-sweep sketch follows this step.
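A minimal parameter-sweep sketch, assuming the evaluate() helper sketched in Step 9, the GSharePredictor class from the CODES section below, and a branch trace such as the synthetic one sketched in Step 8; the parameter values are arbitrary examples:

# Sweep the global history length; the PHT is sized to 2**history_length so that
# every history/address hash can reach a distinct counter.
trace = generate_synthetic_trace()  # or a trace extracted from a real benchmark

for history_length in (2, 4, 8, 12):
    predictor = GSharePredictor(history_length=history_length,
                                pht_size=1 << history_length)
    accuracy = evaluate(predictor, trace)
    print(f"history_length={history_length:2d}  pht_size={1 << history_length:5d}  accuracy={accuracy:.2f}%")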
INTRODUCTION:
In computer architecture, a branch predictor is a digital circuit that tries to guess which way
a branch (e.g., an if–then–else structure) will go before this is known definitively. The
purpose of the branch predictor is to improve the flow in the instruction pipeline. Branch
predictors play a critical role in achieving high performance in many
modern pipelined microprocessor architectures.

Figure 1: Example of a 4-stage pipeline. The colored boxes represent instructions independent of each other.
Two-way branching is usually implemented with a conditional jump instruction. A
conditional jump can either be "taken" and jump to a different place in program memory, or it
can be "not taken" and continue execution immediately after the conditional jump. It is not
known for certain whether a conditional jump will be taken or not taken until the condition
has been calculated and the conditional jump has passed the execution stage in the instruction
pipeline (see fig. 1).
Without branch prediction, the processor would have to wait until the conditional jump
instruction has passed the execute stage before the next instruction can enter the fetch stage in
the pipeline. The branch predictor attempts to avoid this waste of time by trying to guess
whether the conditional jump is most likely to be taken or not taken. The branch that is
guessed to be the most likely is then fetched and speculatively executed. If it is later detected
that the guess was wrong, then the speculatively executed or partially executed instructions
are discarded and the pipeline starts over with the correct branch, incurring a delay.
The time that is wasted in case of a branch misprediction is equal to the number of stages in
the pipeline from the fetch stage to the execute stage. Modern microprocessors tend to have
quite long pipelines so that the misprediction delay is between 10 and 20 clock cycles. As a
result, making a pipeline longer increases the need for a more advanced branch predictor.[6]
The first time a conditional jump instruction is encountered, there is not much information to
base a prediction on. However, the branch predictor keeps records of whether or not branches
are taken, so when it encounters a conditional jump that has been seen several times before, it
can base the prediction on the recorded history. The branch predictor may, for example,
recognize that the conditional jump is taken more often than not, or that it is taken every
second time.
Branch prediction is not the same as branch target prediction. Branch prediction attempts to
guess whether a conditional jump will be taken or not. Branch target prediction attempts to
guess the target of a taken conditional or unconditional jump before it is computed by
decoding and executing the instruction itself. Branch prediction and branch target prediction
are often combined into the same circuitry.
Implementation
Static branch prediction
Static prediction is the simplest branch prediction technique because it does not rely on
information about the dynamic history of code executing. Instead, it predicts the outcome of a
branch based solely on the branch instruction.[7]
The early implementations of SPARC and MIPS (two of the first
commercial RISC architectures) used single-direction static branch prediction: they always
predict that a conditional jump will not be taken, so they always fetch the next sequential
instruction. Only when the branch or jump is evaluated and found to be taken, does the
instruction pointer get set to a non-sequential address.
Both CPUs evaluate branches in the decode stage and have a single cycle instruction fetch.
As a result, the branch target recurrence is two cycles long, and the machine always fetches
the instruction immediately after any taken branch. Both architectures define branch delay
slots in order to utilize these fetched instructions.
A more advanced form of static prediction presumes that backward branches will be taken
and that forward branches will not. A backward branch is one that has a target address that is
lower than its own address. This technique can help with prediction accuracy of loops, which
are usually backward-pointing branches, and are taken more often than not taken.
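A minimal sketch of this heuristic; the function name and the "taken"/"not taken" string convention are chosen here to match the code later in this document:

def static_btfn_predict(branch_address, target_address):
    # Backward branches (target below the branch address) are assumed to close loops
    # and are predicted taken; forward branches are predicted not taken.
    return "taken" if target_address < branch_address else "not taken"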
Some processors allow branch prediction hints to be inserted into the code to tell whether the
static prediction should be taken or not taken. The Intel Pentium 4 accepts branch prediction
hints, but this feature was abandoned in later Intel processors.[8]
Static prediction is used as a fall-back technique in some processors with dynamic branch
prediction when dynamic predictors do not have sufficient information to use. Both the
Motorola MPC7450 (G4e) and the Intel Pentium 4 use this technique as a fall-back.[9]
In static prediction, all decisions are made at compile time, before the execution of the
program.
Dynamic branch prediction
Dynamic branch prediction uses information about taken or not taken branches gathered at
run-time to predict the outcome of a branch.
Random branch prediction
Using a random or pseudorandom bit (a pure guess) would guarantee every branch a 50%
correct prediction rate, which cannot be improved (or worsened) by reordering instructions.
(With the simplest static prediction of "assume taken", compilers can reorder instructions to
get better than 50% correct prediction.) Also, it would make timing much more
nondeterministic.
Next line prediction
Some superscalar processors (MIPS R8000, Alpha 21264, and Alpha 21464 (EV8)) fetch
each line of instructions with a pointer to the next line. This next-line predictor
handles branch target prediction as well as branch direction prediction.
When a next-line predictor points to aligned groups of 2, 4, or 8 instructions, the branch
target will usually not be the first instruction fetched, and so the initial instructions fetched
are wasted. Assuming, for simplicity, a uniform distribution of branch targets, 0.5, 1.5, and
3.5 of the fetched instructions are discarded, respectively.
Since the branch itself will generally not be the last instruction in an aligned group,
instructions after the taken branch (or its delay slot) will be discarded. Once again, assuming
a uniform distribution of branch instruction placements, 0.5, 1.5, and 3.5 instructions fetched
are discarded.
The discarded instructions at the branch and destination lines add up to nearly a complete
fetch cycle, even for a single-cycle next-line predictor.
One-level branch prediction
Saturating counter
A 1-bit saturating counter (essentially a flip-flop) records the last outcome of the branch. This
is the simplest form of dynamic branch predictor possible, although it is not very
accurate.
A 2-bit saturating counter is a state machine with four states:

Figure 2: State diagram of 2-bit saturating counter


● Strongly not taken
● Weakly not taken
● Weakly taken
● Strongly taken
When a branch is evaluated, the corresponding state machine is updated. Branches evaluated
as not taken change the state toward strongly not taken, and branches evaluated as taken
change the state toward strongly taken. The advantage of the two-bit counter scheme over a
one-bit scheme is that a conditional jump has to deviate twice from what it has done most in
the past before the prediction changes. For example, a loop-closing conditional jump is
mispredicted once rather than twice.
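A minimal sketch of this counter, encoding the four states as the integers 0-3 (the same convention used by the code in the CODES section later in this document):

STRONGLY_NOT_TAKEN, WEAKLY_NOT_TAKEN, WEAKLY_TAKEN, STRONGLY_TAKEN = 0, 1, 2, 3

def counter_predict(counter):
    """Predict 'taken' for the two taken-leaning states, otherwise 'not taken'."""
    return "taken" if counter >= WEAKLY_TAKEN else "not taken"

def counter_update(counter, taken):
    """Saturating update: move one step toward strongly taken or strongly not taken."""
    if taken:
        return min(STRONGLY_TAKEN, counter + 1)
    return max(STRONGLY_NOT_TAKEN, counter - 1)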
The original, non-MMX Intel Pentium processor uses a saturating counter, though with an
imperfect implementation.[8]
On the SPEC'89 benchmarks, very large bimodal predictors saturate at 93.5% correct, once
every branch maps to a unique counter.[11]: 3
The predictor table is indexed with the instruction address bits, so that the processor can fetch
a prediction for every instruction before the instruction is decoded.
Two-level predictor
The Two-Level Branch Predictor, also referred to as Correlation-Based Branch Predictor,
uses a two-dimensional table of counters, also called "Pattern History Table". The table
entries are two-bit counters.
Two-level adaptive predictor

Figure 3: Two-level adaptive branch predictor. Every entry in the pattern history table represents a 2-bit saturating counter of the type shown in figure 2.
If an if statement is executed three times, the decision made on the third execution might
depend upon whether the previous two were taken or not. In such scenarios, a two-level
adaptive predictor works more efficiently than a saturating counter. Conditional jumps that
are taken every second time or have some other regularly recurring pattern are not predicted
well by the saturating counter. A two-level adaptive predictor remembers the history of the
last n occurrences of the branch and uses one saturating counter for each of the
2^n possible history patterns. This method is illustrated in figure 3.
Consider the example of n = 2. This means that the last two occurrences of the branch are
stored in a two-bit shift register. This branch history register can have four
different binary values, 00, 01, 10, and 11, where zero means "not taken" and one means
"taken". A pattern history table contains four entries per branch, one for each of the 2 2 = 4
possible branch histories, and each entry in the table contains a two-bit saturating counter of
the same type as in figure 2 for each branch. The branch history register is used for choosing
which of the four saturating counters to use. If the history is 00, then the first counter is used;
if the history is 11, then the last of the four counters is used.
Assume, for example, that a conditional jump is taken every third time. The branch sequence
is 001001001... In this case, entry number 00 in the pattern history table will go to state
"strongly taken", indicating that after two zeroes comes a one. Entry number 01 will go to
state "strongly not taken", indicating that after 01 comes a zero. The same is the case with
entry number 10, while entry number 11 is never used because there are never two
consecutive ones.
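A minimal sketch of this worked example: one branch, a 2-bit local history register, and four 2-bit saturating counters, trained on the repeating not-taken, not-taken, taken pattern. This is an illustration only, separate from the predictor classes in the CODES section:

# Branch outcome pattern: taken every third time (0 = not taken, 1 = taken)
pattern = [0, 0, 1] * 20

history = 0                  # 2-bit branch history register
counters = [2, 2, 2, 2]      # one 2-bit saturating counter per history value, start weakly taken
correct = 0

for outcome in pattern:
    prediction = 1 if counters[history] >= 2 else 0
    if prediction == outcome:
        correct += 1
    # Update the counter selected by the current history, then shift in the new outcome
    if outcome:
        counters[history] = min(3, counters[history] + 1)
    else:
        counters[history] = max(0, counters[history] - 1)
    history = ((history << 1) | outcome) & 0b11

# Entry 0b00 trains toward strongly taken, entries 0b01 and 0b10 toward strongly
# not taken, and entry 0b11 is never selected, matching the discussion above.
print(counters)
print(correct, "/", len(pattern), "correct")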
The general rule for a two-level adaptive predictor with an n-bit history is that it can predict
any repetitive sequence with any period if all n-bit sub-sequences are different.[8]
The advantage of the two-level adaptive predictor is that it can quickly learn to predict an
arbitrary repetitive pattern. This method was invented by T.-Y. Yeh and Yale Patt at
the University of Michigan.[13] Since the initial publication in 1991, this method has become
very popular. Variants of this prediction method are used in most modern microprocessors.
Two-level neural predictor
A two-level branch predictor where the second level is replaced with a neural network has
been proposed.
Local branch prediction
A local branch predictor has a separate history buffer for each conditional jump instruction. It
may use a two-level adaptive predictor. The history buffer is separate for each conditional
jump instruction, while the pattern history table may be separate as well or it may be shared
between all conditional jumps.
The Intel Pentium MMX, Pentium II, and Pentium III have local branch predictors with a
local 4-bit history and a local pattern history table with 16 entries for each conditional jump.
On the SPEC'89 benchmarks, very large local predictors saturate at 97.1% correct.[11]: 6
Global branch prediction
A global branch predictor does not keep a separate history record for each conditional jump.
Instead it keeps a shared history of all conditional jumps. The advantage of a shared history is
that any correlation between different conditional jumps is part of making the predictions.
The disadvantage is that the history is diluted by irrelevant information if the different
conditional jumps are uncorrelated, and that the history buffer may not include any bits from
the same branch if there are many other branches in between. It may use a two-level adaptive
predictor.
This scheme is better than the saturating counter scheme only for large table sizes, and it is
rarely as good as local prediction. The history buffer must be longer in order to make a good
prediction. The size of the pattern history table grows exponentially with the size of the
history buffer. Hence, the big pattern history table must be shared among all conditional
jumps.
A two-level adaptive predictor with globally shared history buffer and pattern history table is
called a "gshare" predictor if it xors the global history and branch PC, and "gselect" if
it concatenates them. Global branch prediction is used in AMD processors, and in
Intel Pentium M, Core, Core 2, and Silvermont-based Atom processors.
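A minimal sketch of the difference between the two indexing schemes; function and variable names are illustrative, and the table size follows from the number of index bits (2^n entries for gshare with an n-bit history, 2^(p+n) entries for gselect with p PC bits):

def gshare_index(branch_pc, global_history, history_bits):
    # gshare: XOR the global history with the low bits of the branch PC
    mask = (1 << history_bits) - 1
    return (branch_pc ^ global_history) & mask

def gselect_index(branch_pc, global_history, history_bits, pc_bits):
    # gselect: concatenate some low PC bits with the global history
    pc_part = branch_pc & ((1 << pc_bits) - 1)
    history_part = global_history & ((1 << history_bits) - 1)
    return (pc_part << history_bits) | history_part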
CODES:
1. Basic G-share Predictor Implementation (Conceptual Python)

class GSharePredictor:
    def __init__(self, history_length, pht_size):
        self.history_length = history_length
        self.pht_size = pht_size
        self.global_history_register = 0  # Initialize GHR to all 0s
        self.pattern_history_table = [2] * pht_size  # Initialize PHT to weakly taken (2-bit counters)
        self.pht_mask = pht_size - 1
        self.history_mask = (1 << history_length) - 1

    def predict(self, branch_address):
        # Combine branch address (lower bits) and global history using XOR
        index = (branch_address ^ self.global_history_register) & self.pht_mask
        if self.pattern_history_table[index] >= 2:
            return "taken"
        else:
            return "not taken"

    def update(self, branch_address, actual_outcome):
        index = (branch_address ^ self.global_history_register) & self.pht_mask
        if actual_outcome == "taken":
            self.pattern_history_table[index] = min(3, self.pattern_history_table[index] + 1)
            self.global_history_register = ((self.global_history_register << 1) | 1) & self.history_mask
        else:
            self.pattern_history_table[index] = max(0, self.pattern_history_table[index] - 1)
            self.global_history_register = (self.global_history_register << 1) & self.history_mask

# Example usage:
gshare_predictor = GSharePredictor(history_length=3, pht_size=8)
print(f"Initial prediction for address 0x100: {gshare_predictor.predict(0x100)}")
gshare_predictor.update(0x100, "taken")
print(f"Prediction after 'taken' for address 0x100: {gshare_predictor.predict(0x100)}")
gshare_predictor.update(0x104, "not taken")
print(f"Prediction after 'not taken' for address 0x104: {gshare_predictor.predict(0x104)}")

Explanation of G-share Code:

● __init__(self, history_length, pht_size):
o Initializes the length of the global history register (history_length) and the size of the pattern history table (pht_size).
o global_history_register: Stores the history of recent branch outcomes (1 for taken, 0 for not taken).
o pattern_history_table: A list representing the PHT. Each entry is a 2-bit saturating counter (00: strongly not taken, 01: weakly not taken, 10: weakly taken, 11: strongly taken). Initialized to weakly taken (2).
o pht_mask: Used for indexing into the PHT.
o history_mask: Used to keep the GHR at the specified length.
● predict(self, branch_address):
o Calculates the index into the PHT by XORing the lower bits of the branch_address with the global_history_register and applying the pht_mask.
o Returns "taken" if the PHT counter at the calculated index is 2 or 3 (weakly or strongly taken), otherwise returns "not taken".
● update(self, branch_address, actual_outcome):
o Calculates the PHT index.
o Updates the PHT counter based on the actual_outcome: increments if taken (up to 3), decrements if not taken (down to 0).
o Updates the global_history_register: shifts left by one bit, appends 1 if taken or 0 if not taken, and masks to maintain the history length.
2. Basic Two-Level Adaptive Predictor Implementation (Conceptual Python - (2,2) predictor as
an example)
class TwoLevelPredictor:
    def __init__(self, bht_size, pht_size):
        self.bht_size = bht_size
        self.pht_size = pht_size
        self.branch_history_table = {}  # Key: BHT index (lower address bits), Value: 2-bit history
        self.pattern_history_table = [2] * pht_size  # 2-bit saturating counters, initialized to weakly taken
        self.bht_mask = bht_size - 1
        self.pht_mask = pht_size - 1

    def _get_bht_index(self, branch_address):
        return branch_address & self.bht_mask

    def predict(self, branch_address):
        bht_index = self._get_bht_index(branch_address)
        history = self.branch_history_table.get(bht_index, 0)  # Default history is 0 if not seen
        pht_index = history & self.pht_mask  # Using the 2-bit history directly as the PHT index
        if self.pattern_history_table[pht_index] >= 2:
            return "taken"
        else:
            return "not taken"

    def update(self, branch_address, actual_outcome):
        bht_index = self._get_bht_index(branch_address)
        old_history = self.branch_history_table.get(bht_index, 0)
        # Update the counter that was used for the prediction, i.e. the one indexed by the old history
        pht_index = old_history & self.pht_mask
        if actual_outcome == "taken":
            self.pattern_history_table[pht_index] = min(3, self.pattern_history_table[pht_index] + 1)
            new_history = ((old_history << 1) | 1) & 0b11  # Shift in a 1 for "taken", keep 2 bits
        else:
            self.pattern_history_table[pht_index] = max(0, self.pattern_history_table[pht_index] - 1)
            new_history = (old_history << 1) & 0b11  # Shift in a 0 for "not taken", keep 2 bits
        self.branch_history_table[bht_index] = new_history

# Example usage:
two_level_predictor = TwoLevelPredictor(bht_size=4, pht_size=4)
print(f"Initial prediction for address 0x200: {two_level_predictor.predict(0x200)}")
two_level_predictor.update(0x200, "taken")
print(f"Prediction after 'taken' for address 0x200: {two_level_predictor.predict(0x200)}")
two_level_predictor.update(0x204, "not taken")
print(f"Prediction after 'not taken' for address 0x204: {two_level_predictor.predict(0x204)}")
two_level_predictor.update(0x200, "taken")
print(f"Prediction after another 'taken' for address 0x200: {two_level_predictor.predict(0x200)}")
Explanation of Two-Level Adaptive Code (Basic (2,2) Example):

● __init__(self, bht_size, pht_size):
o Initializes the size of the Branch History Table (bht_size) and the Pattern History Table (pht_size).
o branch_history_table: A dictionary where the key is the lower bits of the branch address (acting as the BHT index) and the value is a 2-bit history for that branch.
o pattern_history_table: The PHT, similar to the G-share PHT.
o bht_mask: Used for indexing into the BHT.
o pht_mask: Used for indexing into the PHT (in this simple case, the history itself acts as the index).
● _get_bht_index(self, branch_address):
o Extracts the lower bits of the branch address to index into the BHT.
● predict(self, branch_address):
o Gets the BHT index.
o Retrieves the 2-bit history for that branch from branch_history_table. If the branch hasn't been seen before, it defaults to a history of 0.
o Uses this 2-bit history directly as the index into the pattern_history_table.
o Makes a prediction based on the PHT counter.
● update(self, branch_address, actual_outcome):
o Gets the BHT index.
o Retrieves the old 2-bit history for the branch.
o Uses the old 2-bit history (the one the prediction was based on) to index into the PHT and updates the corresponding counter: increments if taken (up to 3), decrements if not taken (down to 0).
o Updates the 2-bit history by shifting left and appending 1 (for taken) or 0 (for not taken). The & 0b11 keeps it as a 2-bit value.
o Stores the new_history back into the branch_history_table for the current branch address.

Important Considerations:

● Simplification: These are very basic implementations. Real-world predictors are much more
complex, involving more sophisticated indexing schemes, multiple levels of tables, and
optimizations.

● Integration: To use these codes for your project (a minimal end-to-end sketch follows these considerations), you would need to:

o Develop a way to read branch traces from your benchmark programs.

o Instantiate the predictor objects.

o Process the branch trace, calling the predict() and update() methods for each branch.

o Keep track of the number of correct predictions to calculate accuracy.

● Parameter Tuning: You'll need to experiment with different values for history_length,
pht_size, and bht_size to see their impact on performance.

● More Advanced Two-Level Predictors: The provided Two-Level example is very basic.
More advanced designs might use a global history register to index into a table of branch
history registers, or use more bits for the history and PHT indexing.
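As referenced in the Integration note above, a minimal end-to-end sketch that compares the two classes from this document on a synthetic trace; the sizes, the trace source, and the evaluate() helper (sketched in Phase 2, Step 9) are illustrative assumptions:

# Assumes GSharePredictor, TwoLevelPredictor, generate_synthetic_trace, and the
# evaluate() helper sketched earlier in this document are all defined.
trace = generate_synthetic_trace(num_iterations=5000)

predictors = {
    "G-share": GSharePredictor(history_length=8, pht_size=256),
    "Two-Level Adaptive": TwoLevelPredictor(bht_size=64, pht_size=4),
}

for name, predictor in predictors.items():
    print(f"{name}: {evaluate(predictor, trace):.2f}% accuracy on {len(trace)} branches")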
RESULTS:

Simulation Results and Presentation:

When you run your implemented G-share and Two-Level Adaptive predictors on your chosen
benchmark programs, you'll be collecting data on the prediction accuracy. Here's how you might
present these results:

1. Accuracy Table:

You would likely create a table summarizing the prediction accuracy for each predictor on each
benchmark.

Benchmark Name                                 G-share Accuracy (%)   Two-Level Adaptive Accuracy (%)
LoopBenchmark (loop-intensive)                 95.50                  97.25
ConditionalsBenchmark (conditional branches)   88.75                  92.10
DataProcessing                                 91.20                  89.55

(The numbers above are illustrative example values.)

2. Accuracy Comparison Chart:

A bar chart can visually compare the performance of the two predictors across the benchmarks; a minimal plotting sketch follows the figure placeholder below.

[Figure: grouped bar chart, one pair of bars per benchmark, comparing G-share and Two-Level Adaptive prediction accuracy (%)]
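A minimal plotting sketch for such a chart, assuming matplotlib and NumPy are available and reusing the illustrative accuracy values from the table above:

import numpy as np
import matplotlib.pyplot as plt

benchmarks = ["LoopBenchmark", "ConditionalsBenchmark", "DataProcessing"]
gshare_accuracy = [95.50, 88.75, 91.20]
two_level_accuracy = [97.25, 92.10, 89.55]

x = np.arange(len(benchmarks))   # one group per benchmark
width = 0.35                     # bar width within each group

plt.bar(x - width / 2, gshare_accuracy, width, label="G-share")
plt.bar(x + width / 2, two_level_accuracy, width, label="Two-Level Adaptive")
plt.xticks(x, benchmarks, rotation=15)
plt.ylabel("Prediction accuracy (%)")
plt.ylim(80, 100)
plt.title("Branch predictor accuracy by benchmark")
plt.legend()
plt.tight_layout()
plt.savefig("accuracy_comparison.png")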
