A full Machine learning pipeline
in Scikit-learn vs Scala-Spark:
pros and cons
Jose Quesada and David Anderson
@quesada, @alpinegizmo, @datascienceret
Why this talk?
• How do you get from a single-machine workload to a fully distributed one?
• Answer: Spark machine learning
• Is there something I'm missing out on by staying with Python?
• Mentors are world-class: CTOs, library authors, inventors, founders of fast-growing companies, etc.
• DSR accepts fewer than 5% of applications
• Strong focus on commercial awareness
• Participants have 5 years of working experience on average
• 30+ partner companies in Europe
DSR participants do a portfolio project
Why is DSR talking about Scala/Spark?
• They are behind Scala
• IBM is behind this
• They hired us to make training materials
[Chart: mind share in 'data science badasses' (subjective) vs. time. Source: Spark 2015 infographic]
Scala
“Scala offers the easiest refactoring experience that I've ever had due to the type system.”
Jacob, Coursera engineer
Spark
• Basically distributed Scala
• API
• Scala, Java, Python, and R bindings
• Libraries
• SQL, streams, graph processing, machine learning
• One of the most active open source projects
“Spark will inevitably become the de-facto Big Data framework
for Machine Learning and Data Science.”
Dean Wampler, Lightbend
All under one roof (big win)
[Diagram: Spark Core underneath Spark SQL, Spark Streaming, Spark.ml (machine learning), and GraphX (graphs). Source: Spark 2015 infographic]
Spark Programming Model
[Diagram: a Driver (SparkContext) sends work from the Input to multiple Workers]
Data is partitioned; code is sent to the data
[Diagram: the same Driver and Workers, now with a Data partition resident on each Worker]
Example: word count
Input: "hello world" / "foo bar" / "foo foo bar" / "bye world"
Data is immutable, and is partitioned across the cluster
Example: word count
We get things done by creating new, transformed copies of the data. In parallel.
words: hello, world, foo, bar, foo, foo, bar, bye, world
pairs: (hello, 1), (world, 1), (foo, 1), (bar, 1), (foo, 1), (foo, 1), (bar, 1), (bye, 1), (world, 1)
Example: word count
Some operations require a shuffle to group data together
pairs: (hello, 1), (world, 1), (foo, 1), (bar, 1), (foo, 1), (foo, 1), (bar, 1), (bye, 1), (world, 1)
after the shuffle and reduce: (hello, 1), (foo, 3), (bar, 2), (bye, 1), (world, 2)
Example: word count

  lines = sc.textFile(input)
  words = lines.flatMap(lambda x: x.split(" "))
  word_count = (words.map(lambda x: (x, 1))
                .reduceByKey(lambda x, y: x + y))
  # everything above is pipelined into the same Python executor;
  # nothing happens until the next line, when this "action"
  # forces evaluation of the RDD
  word_count.saveAsTextFile(output)
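For comparison, the same job in Scala reads almost identically; this is a minimal sketch, assuming sc, input, and output are defined as in the Python version:

  val lines = sc.textFile(input)
  val words = lines.flatMap(_.split(" "))
  val wordCount = words.map(word => (word, 1))
                       .reduceByKey(_ + _)   // this step requires a shuffle
  wordCount.saveAsTextFile(output)           // action: forces evaluation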
RDD – Resilient Distributed Dataset
• An immutable, partitioned collection of elements that can be
operated on in parallel
• Lazy
• Fault-tolerant
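A small sketch of these properties in action (the path and the ERROR filter are hypothetical):

  val logs = sc.textFile("hdfs:///logs")         // lazy: nothing is read yet
  val errors = logs.filter(_.contains("ERROR"))  // still lazy
  errors.cache()                                 // keep in memory for reuse
  val n = errors.count()                         // action: triggers evaluation
  // fault tolerance: a lost partition is recomputed from the lineage graph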
PySpark RDD Execution Model
Whenever you provide a lambda to operate on an RDD:
• each Spark worker forks a Python worker
• data is serialized and piped to those Python workers
Impact of this execution model
• Worker overhead (forking, serialization)
• The cluster manager isn't aware of Python's memory needs
• Very confusing error messages
Spark Dataframes (and Datasets)
• Based on RDDs, but tabular; something like SQL tables
• Not Pandas
• Rescues Python from serialization overhead
• df.filter(df["color"] == "red") vs. rdd.filter(lambda x: x.color == "red")
• processed entirely in the JVM
• Python UDFs and maps still require serialization and piping to Python
• can write (and register) Scala code, and then call it from Python
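As a sketch of that last point (isRed is a made-up example function): register the logic once in Scala, and any language can call it through the SQL layer, so the work stays in the JVM:

  // Scala side, using the Spark 2.0 SparkSession
  spark.udf.register("isRed", (color: String) => color == "red")
  // Python side: df.selectExpr("isRed(color)") runs with no Python workers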
DataFrame execution: unified across languages
[Diagram: Python, Java/Scala, and R DataFrames all compile to one Logical Plan, which drives Execution]
API wrappers create a logical plan (a DAG). Catalyst optimizes the plan; Tungsten compiles the plan into executable code.
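You can watch Catalyst and Tungsten at work with explain(); a sketch, assuming df is any DataFrame with a color column:

  // prints the parsed, analyzed, and optimized logical plans,
  // plus the physical plan that will actually execute
  df.filter(df("color") === "red").groupBy("color").count().explain(true)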
DataFrame performance
ML Workflow: Data Ingestion → Data Cleaning / Feature Engineering → Model Training → Testing and Validation → Deployment
Machine learning with scikit-learn
• Easy to use
• Rich ecosystem
• Limited to one machine (but see sparkit-learn package)
Machine learning with Hadoop (in short: NO)
• Each iteration is a new M/R job
• Each job must store data in HDFS – lots of overhead
How Spark killed Hadoop map/reduce
• Far easier to program
• More cost-effective: less hardware can perform the same tasks much faster
• Can do real-time processing as well as batch processing
• Can do ML, graphs
Machine learning with Spark
• Spark was designed for ML workloads
• Caching (reuse data)
• Accumulators (keep state across iterations)
• Functional, lazy, fault-tolerant
• Many popular algorithms are supported out of the box
• Simple to productionalize models
• MLlib is RDD-based (the past); spark.ml is DataFrame-based (the future)
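Two of those features in miniature: caching for reuse across iterations, and an accumulator for cluster-wide state. A sketch using the Spark 2.0 accumulator API; the data and counter are illustrative:

  val training = sc.textFile("hdfs:///train").cache()  // reused every iteration
  val badRows = sc.longAccumulator("badRows")          // shared counter
  training.foreach(line => if (line.isEmpty) badRows.add(1))
  println(badRows.value)                               // read back on the driver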
Spark is an Ecosystem of ML frameworks
• Spark was designed by people who understood the needs of ML practitioners (unlike Hadoop)
• MLlib
• Spark.ml
• SystemML (IBM)
• KeystoneML
Spark.ml – the basics
• DataFrame: ML requires DFs holding vectors
• Transformer: transforms one DF into another
• Estimator: fit on a DF; produces a transformer
• Pipeline: chain of transformers and estimators
• Parameter: there is a unified API for specifying parameters
• Evaluator: measures model quality on held-out data (used for model selection)
• CrossValidator: model selection via grid search
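How the pieces fit together, as a minimal sketch (the column names, the LogisticRegression stage, and the trainingDF/testDF DataFrames are illustrative):

  import org.apache.spark.ml.Pipeline
  import org.apache.spark.ml.classification.LogisticRegression
  import org.apache.spark.ml.feature.{StringIndexer, VectorAssembler}

  val indexer = new StringIndexer()            // Estimator
    .setInputCol("color").setOutputCol("color_index")
  val assembler = new VectorAssembler()        // Transformer
    .setInputCols(Array("color_index", "price")).setOutputCol("features")
  val lr = new LogisticRegression()            // Estimator
    .setFeaturesCol("features").setLabelCol("label")

  val pipeline = new Pipeline().setStages(Array(indexer, assembler, lr))
  val model = pipeline.fit(trainingDF)         // fitting returns a Transformer
  val predictions = model.transform(testDF)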
Machine Learning scaling challenges that Spark solves
• Hyper-parameter tuning
• ETL/feature engineering
• Model
Q: Hardest scaling problem in data science?
A: Adding people
• Spark.ml has a clean architecture and APIs that should encourage
code sharing and reuse
• Good first step: can you refactor some ETL code as a Transformer?
• Don't see much sharing of components happening yet
• Entire libraries, yes; components, not so much
• Perhaps because Spark has been evolving so quickly
• E.g., a pull request implementing non-linear SVMs has been stuck for a year
Structured types in Spark

                   SQL        DataFrames      Datasets (Java/Scala only)
  Syntax errors    Runtime    Compile time    Compile time
  Analysis errors  Runtime    Runtime         Compile time
User experience: Spark.ml vs. scikit-learn
Indexing categorical features
• You are responsible for identifying and indexing categorical features
  val color_indexer = new StringIndexer()
    .setInputCol("color")
    .setOutputCol("color_index")
    .fit(dataset)

  val status_indexer = new StringIndexer()
    .setInputCol("status")
    .setOutputCol("status_index")
    .fit(dataset)
Assembling features
• You must gather all of your features into one Vector, using a VectorAssembler

  val assembler = new VectorAssembler()
    .setInputCols(Array("color_index", "status_index", ...))
    .setOutputCol("features")
Spark.ml – Scikit-learn: Pipelines (good news!)
• Spark ML and scikit-learn: same approach
• Chain together Estimators and Transformers
• Support non-linear pipelines (must be a DAG)
• Unify parameter passing
• Support for cross-validation and grid search
• Can write your own custom pipeline stages
Spark.ml just like scikit-learn
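Cross-validation and grid search look much like their scikit-learn counterparts. A sketch reusing the pipeline and lr from the earlier example:

  import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
  import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

  val grid = new ParamGridBuilder()
    .addGrid(lr.regParam, Array(0.01, 0.1, 1.0))
    .build()

  val cv = new CrossValidator()
    .setEstimator(pipeline)            // the whole pipeline is tuned at once
    .setEvaluator(new BinaryClassificationEvaluator())
    .setEstimatorParamMaps(grid)
    .setNumFolds(3)

  val cvModel = cv.fit(trainingDF)     // grid search parallelizes on the cluster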
Transformer           Description                                          scikit-learn
Binarizer             Threshold numerical feature to binary                Binarizer
Bucketizer            Bucket numerical features into ranges                —
ElementwiseProduct    Scale each feature/column separately                 —
HashingTF             Hash text/data to vector; scale by term frequency    FeatureHasher
IDF                   Scale features by inverse document frequency         TfidfTransformer
Normalizer            Scale each row to unit norm                          Normalizer
OneHotEncoder         Encode k-category feature as binary features         OneHotEncoder
PolynomialExpansion   Create higher-order features                         PolynomialFeatures
RegexTokenizer        Tokenize text using regular expressions              (part of text methods)
StandardScaler        Scale features to 0 mean and/or unit variance        StandardScaler
StringIndexer         Convert String feature to 0-based indices            LabelEncoder
Tokenizer             Tokenize text on whitespace                          (part of text methods)
VectorAssembler       Concatenate feature vectors                          FeatureUnion
VectorIndexer         Identify categorical features, and index             —
Word2Vec              Learn vector representation of words                 —
Spark.ml – Scikit-learn: NLP tasks (thumbs up)
Graph stuff (GraphX, GraphFrames; not great)
• Extremely easy to run monster algorithms on a cluster
• GraphX has no Python API
• GraphFrames are cool, and should provide access to Spark's graph tools from Python
• In practice, it didn't work too well
Things we liked in Spark ML
• Architecture encourages building reusable pieces
• Type safety, plus types are driving optimizations
• Model fitting returns an object that transforms the data
• Uniform way of passing parameters
• It's interesting to use the same platform for ETL and model fitting
• Very easy to parallelize ETL and grid search, or work with huge models
Disappointments using Spark ML
• Feature indexing and assembly can become tedious
• Surprised by the maximum depth limit for trees: 30
• Data exploration and visualization aren't easy in Scala
• Wish list: non-linear SVMs, deep learning (but see Deeplearning4j)
What is new for machine learning in Spark 2.0
• DataFrame-based Machine Learning API emerges as the primary ML
API: With Spark 2.0, the spark.ml package, with its “pipeline” APIs,
will emerge as the primary machine learning API. While the original
spark.mllib package is preserved, future development will focus on
the DataFrame-based API.
• Machine learning pipeline persistence: Users can now save and
load machine learning pipelines and models across all programming
languages supported by Spark.
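A sketch of pipeline persistence (the path is a placeholder); a model fitted in one language can be loaded from another:

  import org.apache.spark.ml.PipelineModel

  model.write.overwrite().save("/models/color-classifier")
  val sameModel = PipelineModel.load("/models/color-classifier")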
What is new for data structures in Spark 2.0
Unifying the API for streams and static data (Structured Streaming): infinite Datasets with the same interface as DataFrames
What have Spark and Scala ever given us?
… Other than distributed dataframes,
distributed machine learning,
easy distributed grid search,
distributed SQL,
distributed stream analysis,
more performance than MapReduce,
an easier programming model,
and easier deployment …
What have Spark and Scala ever given us?
Reminder: 25 videos explaining ML on Spark
• For people who already know ML
• http://datascienceretreat.com/videos/data-science-with-scala-and-spark
Thank you for your attention!
@quesada, @datascienceret
Editor's Notes
  • #3: Scala and Spark are very close: if you learn one, you learn the other. Spark is distributed Scala.
  • #6: Scala and Spark are very close: if you learn one, you learn the other. Spark is distributed Scala. This has been possible for years, but nowadays it's not only possible but pleasant.
  • #7: You attend a Retreat, not a training
  • #15: A talk should give you a superpower. - Am I missing out?
  • #17: redo the diagram
  • #26: fault-tolerant: missing partitions can be recomputed by using the lineage graph to rerun operations
  • #27: When using Python, the SparkContext in Python is basically a proxy. py4j is used to launch a JVM and create a native Spark context, and py4j manages communication between the Python and Java Spark context objects. In the workers, some operations can be executed directly in the JVM. But, for example, if you've implemented a map function in Python, a Python process is forked to execute this user-supplied mapping. Each thread in the Spark worker has its own Python sub-process. When the Python wrapper calls the underlying Spark code (written in Scala, running on a JVM), translation between the two environments and languages can be a source of bugs and issues.
  • #28: Scala and Spark are very close: if you learn one, you learn the other. Spark is distributed Scala. This has been possible for years, but nowadays it's not only possible but pleasant.
  • #37: Just one Map/Reduce step per job, but many algorithms are iterative. Disk-based → long startup times. Spark is a wholesale replacement for MapReduce that leverages lessons learned from MapReduce. The Hadoop community realized that a replacement for MR was needed. While MR has served the community well, it's a decade old and shows clear limitations and problems, as we've seen. In late 2013, Cloudera, the largest Hadoop vendor, officially embraced Spark as the replacement. Most of the other Hadoop vendors have followed suit. When it comes to one-pass, ETL-like jobs (for example, data transformation or data integration), MapReduce remains a good fit: this is what it was designed for. Advantages for Hadoop: security, staffing.
  • #38: sample use case for accumulators: gradient descent
  • #48: Spark.ml departs from scikit-learn quite a bit
  • #50: Good
  • #51: from https://databricks.com/blog/2015/07/29/new-features-in-machine-learning-pipelines-in-apache-spark-1-4.html