Parallel Ablation Studies for Machine Learning with Maggy on Apache Spark
Sina Sheikholeslami
PhD Student, KTH Royal Institute of Technology
Jim Dowling
CEO, Logical Clocks AB
Assoc Prof, KTH Royal Institute of Technology
sinash93
jim_dowling
Agenda
▪ Ablation Studies: why are they important for deep learning?
▪ Asynchronous ML Trials on Spark: the Maggy framework
▪ Parallel Ablation Studies with Maggy: programming model with a worked-through example
Ablation for Machine Learning
3
[Diagram: the ML workflow (Problem Definition, Data Preparation, Model Selection, Model Training, Evaluate; repeated if needed) built from a Dataset, a Machine Learning Model, and an Optimizer, illustrated with a housing dataset whose features are area, floors, and rooms and whose target is price.]
Ablation study: Remove, retrain, measure.
[Diagram: the model's accuracy (0.98) depicted as contributions from individual components (0.6, 0.17, 0.05, 0.05, 0.1): remove a component, retrain, and measure to find its contribution.]
4
Problem: Rewrite ML Code for Ablations, Distribution
Explore and Design
Experimentation: Tune and Search
Model Training (Distributed)
Explainability and Ablation Studies
5
Maggy: Unified code for Distributed ML + Ablations
OBLIVIOUS TRAINING FUNCTION

# RUNS ON THE WORKERS
def train():
    def input_fn():  # return dataset
        ...
    model = ...
    optimizer = ...
    model.compile(...)
    rc = tf.estimator.RunConfig('CollectiveAllReduceStrategy')
    keras_estimator = tf.keras.estimator.model_to_estimator(...)
    tf.estimator.train_and_evaluate(keras_estimator, input_fn)

EDA | HParam Tuning | Training (Dist) | Ablation Studies
Apache V2 - https://github.com/logicalclocks/maggy
6
Maggy: Programming Model
from maggy import experiment
experiment.set_dataset_generator(gen_dataset)
experiment.set_model_generator(gen_model)
# Hyperparameter optimization
experiment.set_context('optimization', 'randomsearch', searchspace)
result = experiment.lagom(train_fun)
params = result.get('best_hp')
# Distributed Training
experiment.set_context('dist_training', 'MultiWorkerMirroredStrategy', params)
experiment.lagom(train_fun)
# Ablation study
experiment.set_context('ablation', 'loco', ablation_study, params)
experiment.lagom(train_fun)
7
Maggy: Distribution and Tracking in One Function*
# RUNS ON THE WORKERS
def train(depth, lr):
    from hops import model as mr
    def build_data():
        ..
    model = generate_model()
    optimizer = ...
    model.compile(...)
    print(...)
    mr.export_model(model)
    return {'accuracy': acc}

# RUNS ON THE DRIVER
from maggy import experiment, Searchspace
sp = Searchspace(depth=('INTEGER', [2, 8]), lr=(..))
experiment.set_context('optimization', 'random',
                       sp, direction='max', num_trials=15)
experiment.lagom(train)

Slide callouts:
▪ training function & HParams
▪ save model to Hopsworks Model Registry
▪ track this dict with Experiment results
▪ print to notebook & store in experiment log
▪ define HParams
▪ launch 15 'train' functions on workers
▪ define Trials
https://youtu.be/xora_4iDcQ8
8
Maggy vs. MLflow*
*https://www.logicalclocks.com/blog/hopsworks-ml-experiments

Maggy: Tracking, Model Registry & HParam Tuning

def train(depth, weight):
    X_train, X_test, y_train, y_test = build_data(..)
    ...
    model.fit(X_train, y_train)  # auto-logging
    ...
    hops.export_model(model, "tensorflow", .., model_name)
    ...
    # import matplotlib, create diagram.png
    plt.savefig('diagram.png')
    return {'accuracy': accuracy, 'diagram': 'diagram.png'}

from maggy import experiment, Searchspace
sp = Searchspace(depth=('INTEGER', [2, 8]), weight=('INTEGER', [2, 8]))
experiment.set_context('optimization', 'random', sp,
                       direction='max', num_trials=15)
experiment.lagom(train)

MLflow: Tracking & Model Registry (No HParam Tuning)

def train(depth, weight):
    X_train, X_test, y_train, y_test = build_data(..)
    mlflow.set_tracking_uri("jdbc:mysql://uname:pwd@host:3306/db")
    mlflow.set_experiment("My Experiment")
    with mlflow.start_run() as run:
        ...
        mlflow.log_param("depth", depth)
        mlflow.log_param("weight", weight)
        with open("test.txt", "w") as f:
            f.write("hello world!")
        mlflow.log_artifact("/full/path/to/test.txt")
        ...
        model.fit(X_train, y_train)  # auto-logging
        ...
        mlflow.tensorflow.log_model(model, "tensorflow-model",
                                    registered_model_name=model_name)
10
PySpark for Distribution
[Diagram: PySpark cluster layouts for the stages below: a Single Host for exploration; a Driver acting as Experiment Controller with Workers 1..N for parallel trials; a Driver distributing TF_CONFIG to Workers 1-8 for distributed training.]

Explore and Design
Experimentation: Tune and Search
Model Training (Distributed)
Explainability and Ablation Studies
11
Maggy makes transparent:
▪ … fixing parameters
▪ … launching the function
▪ … launching trials (parametrized instantiations of the function)
▪ … generating new trials
▪ … collecting and logging results
▪ … setting up TF_CONFIG (see the sketch below)
▪ … wrapping in a Distribution Strategy
▪ … launching the function as workers
▪ … collecting results
12
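To make the "transparent" claim concrete, here is a hedged sketch of the per-worker boilerplate that Maggy's distributed-training context takes care of; the hostnames, ports, and worker index are placeholders for this example, not anything generated by Maggy.

# Illustrative only: the kind of per-worker setup Maggy hides from the user.
# Hostnames, ports, and the task index below are placeholders.
import json
import os

import tensorflow as tf

os.environ["TF_CONFIG"] = json.dumps({
    "cluster": {"worker": ["host1:2222", "host2:2222", "host3:2222"]},
    "task": {"type": "worker", "index": 0},  # must differ on every worker
})

# Model building and compilation have to happen inside the strategy scope.
strategy = tf.distribute.experimental.MultiWorkerMirroredStrategy()
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(3,))])
    model.compile(optimizer="adam", loss="mse")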
Maggy: Asynchronous Trials in PySpark for Ablations
[Diagram: Tasks 1..N run trials inside a Spark barrier stage; each task streams Metrics to the Driver (Global Optimizer), which replies with a New Trial as soon as a trial finishes.]
13
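The loop is asynchronous because a trial can stream its current metric to the driver instead of only returning a final value. The sketch below follows the reporter pattern from Maggy's hyperparameter-tuning examples; the exact signature may differ between Maggy versions, and build_model / train_one_epoch are hypothetical helpers introduced only for illustration.

def train(depth, lr, reporter):
    # Hedged sketch: reporter is passed in by Maggy so the trial can send
    # heartbeat metrics to the driver's global optimizer / ablator.
    model = build_model(depth=depth)          # hypothetical helper
    acc = 0.0
    for epoch in range(10):
        acc = train_one_epoch(model, lr=lr)   # hypothetical helper
        reporter.broadcast(metric=acc)        # lets the driver early-stop or schedule new trials
    return acc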
Ablation Studies in Maggy
15
LOCO: Leave One Component Out
A simple, “natural” ablation policy, implemented as an ablator
Currently supports Feature, Layer, and Module Ablation
16
Feature Ablation
Uses the Feature Store to access the dataset metadata
Generates Python callables that, once called, return modified datasets (sketched below)
▪ Removes one feature at a time
17
[Diagram: the housing dataset (area, floors, rooms, price) next to an ablated copy with the area feature removed (floors, rooms, price).]
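A rough sketch of the idea: for every included feature, LOCO produces a callable that builds the training dataset with that one feature left out. The column names and the load_dataframe helper are assumptions for this example, not Maggy internals.

import tensorflow as tf

ALL_FEATURES = ["area", "floors", "rooms"]
LABEL = "price"

def make_dataset_fn(ablated_feature):
    """Return a callable that builds the dataset with one feature removed."""
    kept = [f for f in ALL_FEATURES if f != ablated_feature]

    def dataset_fn():
        df = load_dataframe()  # hypothetical helper standing in for a Feature Store read
        ds = tf.data.Dataset.from_tensor_slices((df[kept].values, df[LABEL].values))
        return ds.batch(32)

    return dataset_fn

# One dataset callable per included feature, i.e. one ablation trial each:
trial_datasets = {f: make_dataset_fn(f) for f in ALL_FEATURES}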
Model Ablation
Uses a base model function
Generates Python callables that, once called, return modified models (sketched below)
▪ Uses the model configuration to find and remove layer(s)
▪ Removes one layer, one layer group, or one module at a time
18
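An illustrative sketch of the mechanism: read the base model's Keras configuration, drop the entry for one named layer, and rebuild the model from the pruned config. Here base_model_generator stands for the user's model function (a sketch appears under "User API: Define Model Creation" below); this is not Maggy's actual implementation, and the config layout can vary across TensorFlow versions.

import tensorflow as tf

def make_model_fn(ablated_layer, base_model_generator):
    """Return a callable that rebuilds the base model without one named layer."""
    def model_fn():
        # Note: the exact config layout varies across TF versions.
        config = base_model_generator().get_config()
        config["layers"] = [layer for layer in config["layers"]
                            if layer["config"]["name"] != ablated_layer]
        return tf.keras.Sequential.from_config(config)
    return model_fn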
Ablation User & Developer API
(Scan for Example Notebooks)
Programming Workflow
20
User API: Define Dataset Creation
21
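A minimal sketch of this step, in the spirit of the set_dataset_generator call on the programming-model slide; the feature columns and the load_housing_dataframe helper are assumptions for this example.

import tensorflow as tf

def gen_dataset(batch_size=32):
    """Dataset generator registered with experiment.set_dataset_generator()."""
    df = load_housing_dataframe()  # hypothetical helper (e.g. a Feature Store read)
    features = df[["area", "floors", "rooms"]].values
    labels = df["price"].values
    return tf.data.Dataset.from_tensor_slices((features, labels)).shuffle(1000).batch(batch_size)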
User API: Define Model Creation
22
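A minimal sketch of the base model generator; the important detail is that every layer carries an explicit name, since the ablation policy refers to layers by name. The layer sizes and names are illustrative.

import tensorflow as tf

def gen_model():
    """Base model generator registered with experiment.set_model_generator()."""
    return tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu", name="dense_one", input_shape=(3,)),
        tf.keras.layers.Dense(32, activation="relu", name="dense_two"),
        tf.keras.layers.Dense(1, name="price_out"),
    ])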
User API: Define Training Function
23
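A hedged sketch of the "oblivious" training function for the ablation context: it receives dataset and model callables, so the same code runs unchanged whether a trial uses the full model and dataset or an ablated one. Whether Maggy passes callables or built objects, and under which parameter names, may differ by version.

def train_fn(dataset_function, model_function):
    import tensorflow as tf

    dataset = dataset_function()
    model = model_function()
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    history = model.fit(dataset, epochs=5, verbose=0)
    # Return the metric Maggy tracks and compares across ablation trials.
    return float(history.history["mae"][-1])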
User API: Initialize the Study
24
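A hedged sketch of initializing the study; the dataset name, version, and label are placeholders for a Hopsworks Feature Store training dataset, and the constructor arguments follow Maggy's published examples, which may have changed since.

from maggy.ablation import AblationStudy

# Placeholder training dataset from the Feature Store, with 'price' as the label.
ablation_study = AblationStudy('housing_train_dataset',
                               training_dataset_version=1,
                               label_name='price')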
User API: Setup Model Ablation
25
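A hedged sketch of declaring which parts of the model to ablate, continuing from the ablation_study and gen_model defined in the sketches above; the method names follow Maggy's example notebooks and may have changed.

# Register the base model and the layers to leave out, one trial each (LOCO).
ablation_study.model.set_base_model_generator(gen_model)
ablation_study.model.layers.include('dense_one', 'dense_two')

# Layer groups (or whole modules) can also be ablated as a single unit:
ablation_study.model.layers.include_groups(['dense_one', 'dense_two'])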
User API: Setup Feature Ablation
26
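A hedged sketch of the feature-ablation setup on the same ablation_study object; each included feature produces one trial in which that feature is left out of the training dataset.

# One LOCO trial per feature listed here, plus the un-ablated baseline.
ablation_study.features.include('area', 'floors', 'rooms')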
User API: Launch Parallel Trials
27
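Launching follows the programming model shown earlier: the context selects the 'loco' ablator and the same training function is passed to lagom. Here params stands for the best hyperparameters found in an earlier tuning run, as on the programming-model slide; treat the exact signatures as a sketch.

from maggy import experiment

experiment.set_context('ablation', 'loco', ablation_study, params)
result = experiment.lagom(train_fn)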
Developer API: Policy Implementation (1/2)
28
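A heavily simplified sketch of what a LOCO-style policy boils down to: keep a queue of components to ablate and hand the driver one trial whenever a worker is free. The class and method names are illustrative, not Maggy's actual abstract-ablator API.

class LocoAblatorSketch:
    """Illustrative policy: one trial per included feature or layer, plus a baseline."""

    def __init__(self, included_features, included_layers):
        self.pending = ([None]  # None = un-ablated baseline trial
                        + [('feature', f) for f in included_features]
                        + [('layer', l) for l in included_layers])

    def get_number_of_trials(self):
        return len(self.pending)

    def get_trial(self):
        """Called by the driver whenever a worker is free; None when exhausted."""
        if not self.pending:
            return None
        component = self.pending.pop()
        # In Maggy the trial would carry dataset/model callables with the chosen
        # component removed (see the earlier sketches); here we only name it.
        return {'ablated_component': component}


ablator = LocoAblatorSketch(['area', 'floors', 'rooms'], ['dense_two'])
assert ablator.get_number_of_trials() == 5  # baseline + 3 features + 1 layer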
Developer API: Policy Implementation (2/2)
29
Maggy is Open-source
Code Repository: https://github.com/logicalclocks/maggy
API Documentation: https://maggy.readthedocs.io/en/latest/
30
Acknowledgments
Thanks to our colleagues at Logical Clocks and DC@KTH:
Moritz Meister, Robin Andersson, Kim Hammar,
Kai Jeggle, Alessio Molinari, Alex Ormenisan, Tianze Wang,
Amir Payberah, Vladimir Vlassov
This work is supported by the ExtremeEarth project, funded by the European Union's Horizon 2020 Research and Innovation Programme under grant agreement No. 825258.
Demo: Ablation Study of Common DL Network Architectures with Maggy
Feedback
Your feedback is important to us. Don't forget to rate and review the sessions.