INDUSTRIAL DATA SCIENCE
Tuesday 20 August 2013
WHY DO WE NEED BIG DATA?
“SIMPLE MODELS AND A LOT OF DATA
TRUMP MORE ELABORATE MODELS
BASED ON LESS DATA”
Peter Norvig
BIG DATA CHANGES PEOPLE AND TECHNOLOGY
• Data changes the management mindset: supporting data is expected to be
available for every decision
• Decision making then creates its own data stream that can be analyzed
• Data is an asset: What is its return? Net value? Depreciation? Future
investment plan?
“IN GOD WE TRUST,
ALL OTHERS BRING DATA”
W.E. Deming
DEPLOYING DATA
DATA STORAGE FOR LEARNING
• Efficient storage is critical for modeling feasibility
• What is efficient storage depends on data, algorithms and environment
• Memory: working sets, small data, online learning, fast iterations needed
• Disk: M-estimation, local context sufficient
• Data warehouse: simple models in enterprises, complex input generation
• Distributed: stochastic/ensemble methods, large and complex production models
• Cloud: variable workloads, very massive data
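For example, when the working set will not fit in memory but the data sits on local disk, a model can still be trained in a single streaming pass. A minimal sketch with pandas and scikit-learn, assuming a numeric CSV; the file name, the “label” column and the chunk size are placeholders:

```python
# Stream a large on-disk CSV in chunks and train a linear model online.
# Assumes purely numeric features and a binary "label" column.
import pandas as pd
from sklearn.linear_model import SGDClassifier

model = SGDClassifier()                    # linear model trained by stochastic updates
classes = [0, 1]                           # assumed binary target

for chunk in pd.read_csv("events.csv", chunksize=100_000):
    X = chunk.drop(columns=["label"]).values
    y = chunk["label"].values
    model.partial_fit(X, y, classes=classes)   # one chunk at a time, constant memory
```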
UNSUPERVISED LEARNING IN USE
[Diagram: unsupervised modeling feeds significance testing, decision making, inputs to other models, and know-how; the selection of useful pattern types guides the modeling]
DEPLOYMENT OF A COMMON MODEL
[Diagram: the modeling tool periodically pulls datasets from the database for learning and writes predictions back; the service sends prediction requests to the database and reads the stored answers. A minimal sketch of this loop follows below.]
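A minimal sketch of the batch loop behind this diagram, using SQLite as a stand-in for the production database; the table and column names are assumptions, not anything prescribed by the deck:

```python
# Hypothetical job run periodically (e.g. nightly): pull a learning dataset,
# refit the model, and write predictions back so the service can answer
# prediction requests with a plain database lookup.
import sqlite3
import pandas as pd
from sklearn.linear_model import LogisticRegression

con = sqlite3.connect("warehouse.db")

train = pd.read_sql("SELECT * FROM training_snapshot", con)
model = LogisticRegression(max_iter=1000).fit(
    train.drop(columns=["id", "label"]), train["label"])

score = pd.read_sql("SELECT * FROM scoring_snapshot", con)
score["prediction"] = model.predict_proba(score.drop(columns=["id"]))[:, 1]
score[["id", "prediction"]].to_sql("predictions", con, if_exists="replace", index=False)
con.close()
```

The service side then reduces to a lookup such as SELECT prediction FROM predictions WHERE id = ?.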
DEPLOYMENT OF A LOCALIZED MODEL
[Diagram: as above, the modeling tool learns from datasets pulled periodically from the database, but predictions are produced per request: a data builder constructs the model input (querying the database for the input data) and the prediction is returned to the service]
DEPLOYMENT OF ONLINE LEARNING
[Diagram: the service answers requests that carry their own data and returns predictions; data and/or labels from the incoming stream and from the service feed the modeling tool and the database, so the model is updated continuously. A minimal sketch follows below.]
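A minimal sketch of the service-side pattern, assuming an incrementally trained scikit-learn model; the feature encoding, class labels and fallback value are assumptions:

```python
# Hypothetical wrapper: predict() answers requests from the in-memory model,
# learn() folds in data/labels as they arrive from the stream. Persisting the
# model state to the database is left out of this sketch.
import numpy as np
from sklearn.linear_model import SGDClassifier

class OnlineModel:
    def __init__(self, classes=(0, 1)):
        # logistic loss gives probability outputs (loss name is "log" in older scikit-learn)
        self.model = SGDClassifier(loss="log_loss")
        self.classes = np.array(classes)
        self.trained = False

    def predict(self, features):
        if not self.trained:
            return 0.5                               # fallback before any training
        return float(self.model.predict_proba([features])[0, 1])

    def learn(self, features, label):
        self.model.partial_fit([features], [label], classes=self.classes)
        self.trained = True

model = OnlineModel()
model.learn([0.2, 1.3, -0.7], 1)                     # label arrives from the stream
print(model.predict([0.1, 1.1, -0.5]))               # request with data, answered in place
```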
EVALUATING RESULTS AND QUALITY
• Properly evaluating the quality of modeling results depends on project
objectives, error costs and data specifics
• Classification error is meaningless with heavily skewed class sizes;
ranking metrics and ROC curves still work (see the sketch below)
• Operational improvements evaluated as lift and incremental $$$ over previous
• Uneven error costs:
• earthquake risk estimation
• medical research: molecule potential vs. patient safety
• upsetting recommendations to an e-commerce customer
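A minimal sketch of the skewed-class point, on synthetic stand-in labels and scores:

```python
# With ~1% positives, "predict nothing" scores ~99% accuracy, while ranking
# metrics (ROC AUC) and top-decile lift still separate useful models from useless ones.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y = (rng.random(10_000) < 0.01).astype(int)          # heavily skewed classes
scores = 0.3 * y + 0.7 * rng.random(10_000)          # noisy but informative model scores

print("accuracy of 'always negative':", accuracy_score(y, np.zeros_like(y)))
print("ROC AUC of the scores:", roc_auc_score(y, scores))

top = np.argsort(scores)[-len(y) // 10:]             # top 10% by score
print("top-decile lift:", y[top].mean() / y.mean())  # positive rate vs. overall rate
```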
WHAT IS REAL-TIME?
• Real-time can mean very different things to different people
• Analyst: “What’s the user count today? By source? Now? From France?”
• Sysadmin: “Network traffic up 5x in 5 seconds! What’s going on?”
• Google: “Make a bid for these placements. You have 50 ms”
PROCESSING LARGE DATA
EXAMPLES OF DATA SIZE
Human-generated
• 5K tweets/s
• 25K events/s from a mobile game (that’s 200 GB / day)
• 40K Google searches/s
Machine-generated
• 5M quotes/s in the US options market
• 120 MB/s of diagnostics from a single gas turbine
• 1 PB/s peaking from CERN LHC
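As a rough sanity check on the mobile-game figure above (the per-event size is inferred, not given in the deck):

$$25{,}000\ \tfrac{\text{events}}{\text{s}} \times 86{,}400\ \tfrac{\text{s}}{\text{day}} \approx 2.2\times10^{9}\ \tfrac{\text{events}}{\text{day}}, \qquad \frac{200\ \text{GB}}{2.2\times10^{9}\ \text{events}} \approx 90\ \tfrac{\text{bytes}}{\text{event}}$$

i.e. 200 GB/day corresponds to roughly 90 bytes per logged event.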
HUMAN AND MACHINE GENERATED DATA
• Human-generated data will get more detailed
• … but won’t grow much faster than the underlying userbase
• Relative to machine-generated data, it will eventually look small
• Machine-generated data will grow with Moore’s law
• … and it’s already massive
PROCESSING DATA THE OLD WAY
• User actions modify the current state in a transaction DB
• Single events go to an offline audit log for re-running
• Snapshots of data are exported for modeling
• Production models consume snapshot exports and
write back snapshot-versioned results
[Diagram: events feed the transaction DB, from which successive snapshots are exported]
PROCESSING DATA IN STATUS QUO
• Data from operational databases is constantly copied over to a data
warehouse or an analytic database
• Ideally this is a one-stop shop for all analytics and data science
• Production models preferably work inside the database,
providing high performance and data integrity
• Model learning can try pushing back some operations to the database, but
complex models will need an external tool
• Expensive modeling may require a separate testing database
PROCESSING DATA IN THE CLOUD
• Cloud allows endless scale
• No fixed limits on CPU and data usage, but everything is I/O-bound
• Enterprise hybrid clouds allow testing environments and “cloud bursting”
• Large datasets may require specialized algorithms or retrofits to MapReduce
• Combining stochastic learning, online learning and ensemble methods has
proven itself for the task
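A minimal sketch of that combination on toy data: train one stochastic learner per partition, as a MapReduce worker would, then ensemble the partial models; the data, partitioning and model choice are stand-ins:

```python
# "Map": fit an SGD model on each data partition. "Reduce": average the
# per-partition probability estimates into an ensemble prediction.
import numpy as np
from sklearn.linear_model import SGDClassifier

def fit_partition(X_part, y_part):
    return SGDClassifier(loss="log_loss").fit(X_part, y_part)   # one worker's share

def ensemble_predict(models, X):
    probs = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (probs > 0.5).astype(int)

# toy data standing in for partitions of a large distributed dataset
rng = np.random.default_rng(1)
X = rng.normal(size=(30_000, 20))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

models = [fit_partition(Xp, yp)
          for Xp, yp in zip(np.array_split(X, 10), np.array_split(y, 10))]
print(ensemble_predict(models, X[:5]), y[:5])
```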
PRACTICAL ISSUES
REAL WORLD DATA IS RIDDLED WITH PROBLEMS
• Corrupted incoming data
• Corrupted IDs
• Transient IDs
• Multiple transient IDs without match
• Crazy timestamps
• Data types mixed up
• New variables emerge
• Old variables disappear
• Changes in variable definitions
• And much, much more …
[Diagram: garbage in → you → great insights out]
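A minimal sketch of the defensive validation this list forces on any ingestion pipeline; the field names, expected types and accepted timestamp window are assumptions:

```python
# Reject or quarantine records with corrupted IDs, crazy timestamps or
# mixed-up types before they reach storage or models.
from datetime import datetime, timedelta, timezone

def validate(record):
    errors = []
    if not str(record.get("user_id", "")).isdigit():
        errors.append("corrupted or missing user_id")
    try:
        ts = datetime.fromtimestamp(float(record["timestamp"]), tz=timezone.utc)
        now = datetime.now(timezone.utc)
        if not (now - timedelta(days=365) < ts < now + timedelta(hours=1)):
            errors.append("timestamp outside accepted window")
    except (KeyError, TypeError, ValueError, OSError, OverflowError):
        errors.append("unparseable timestamp")
    if not isinstance(record.get("amount"), (int, float)):
        errors.append("amount has wrong type")
    return errors

print(validate({"user_id": "42", "timestamp": "not a time", "amount": "12"}))
```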
AND WITH MORE PROBLEMS
• Collected data is enriched with many operationally attainable sources
⇒ varying schemas and complicated ID soup
• Analytic data is often developed by frontline teams instead of an IT waterfall
⇒ faster process, but volatile data definitions
• Data scientists asking for more data ⇒ temporary kludges
• Data is big and growing ⇒ risks of unnoticed discontinuity
NO, I’M NOT FINISHED YET
• The data is not a CSV file sitting on your disk
• It’s coming in every second of the year, often gigabytes per hour
• Availability of this data is a business critical issue
• Availability of modeling results is a business critical issue
• Robustness of modeling results is a business critical issue
DATA DRIFT
• Real-world data is rarely stationary
• Equipment ages, people’s preferences change
• The quality of old models decays
• Training and testing data may need to be specially designed
• Prefer recent data via sample weights or online learning (see the sketch below)
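A minimal sketch of the weighting option: exponentially decay each row's influence by its age, with a half-life that would in practice be tuned to how fast the domain drifts:

```python
# Recent rows count more; a row's weight halves every `half_life_days`.
import numpy as np
from sklearn.linear_model import LogisticRegression

def recency_weights(ages_in_days, half_life_days=30.0):
    return 0.5 ** (np.asarray(ages_in_days) / half_life_days)

rng = np.random.default_rng(2)
X = rng.normal(size=(5_000, 5))                      # toy features
y = (X[:, 0] > 0).astype(int)                        # toy target
ages = rng.uniform(0, 365, size=5_000)               # days since each row was observed

model = LogisticRegression().fit(X, y, sample_weight=recency_weights(ages))
```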
ROBUST RESULTS?
• Inputs to a decision making process must be assessed for significance
“Can I trust these numbers? Is my decision justified?”
• Ad-hoc analyses can freely employ complex and bleeding edge modeling
• In operations, stability and robustness override everything else
• Sanity checks and fallbacks can be used to avoid failures and errors
POWER LAWS
[Chart: revenue per user against the number of users, a steeply decaying power-law curve]
POWER LAWS
• Power laws are ubiquitous in the real world
• They follow from the principle: “Whoever has will be given more”
• Example: new links to web pages emerge in proportion to their popularity
• Product improvements can be tracked through changes in the power-law curve
• Power laws often have a cut-off at the low end:
not enough mass to fill the lowest ranks
• Examples (see the sketch below):
• User engagement and value
• Social network activity
• Brain activity
• Wealth distribution
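A minimal sketch of the “whoever has will be given more” mechanism (preferential attachment), which produces exactly this kind of heavily skewed, power-law-like distribution; the page and link counts are arbitrary:

```python
# Each new link attaches to a page with probability proportional to the links
# it already has; a few pages end up holding a disproportionate share.
import numpy as np

rng = np.random.default_rng(3)
links = np.ones(1_000)                               # 1,000 pages, one link each to start
for _ in range(50_000):                              # new links arrive one at a time
    page = rng.choice(len(links), p=links / links.sum())
    links[page] += 1

links.sort()
print("median links per page:", np.median(links))
print("share of all links held by the top 1% of pages:", links[-10:].sum() / links.sum())
```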
CONSEQUENCES OF POWER LAWS
• Power laws imply extremely skewed distributions
⇒ yet most models assume a Gaussian or otherwise more balanced distribution
• The huge mass at the bottom of the ladder breaks most traditional analyses
• Different parts of the curve have complex real world interaction
• On the other hand it is relatively easy to segment power laws
⇒ separately designed treatment for different target groups
• Bringing in new users at the bottom of the power law lifts the whole curve,
as new entries slowly diffuse along it
THE IMPORTANCE OF PRESENTATION
• Operations or not, visualization is critical for acceptance
• Challenger shuttle disaster linked to poor visualization of O-ring failure risks
• Requires attention from business concept to implementation
• What information do these users want to see?
• How does this information support decision making?
• How can it be visualized with clarity and impact?
DATA SCIENCE IN BUSINESS
• Data analysis in business is not the sole task of the data scientist
• The whole organization must gradually mature and engage data
• This is not a technical barrier, it is a human barrier
• How to design business and social processes to employ data?
• Average business has tons of low-hanging data fruit
• Developing and automating all that takes years (and years)
• There is no use for advanced modeling without visibility into the underlying data
WHAT’S COMING UP
PROCESSING DATA IN THE FUTURE
• The event stream itself is increasingly becoming the master input data for
analytics and data solutions
• This is a big sea change, requiring new designs of storage and processing
• Seeing the full timeline and interactions of each object is a mixed blessing
PROS: a huge opportunity for discovering significant value
CONS: a very complex haystack that needs additional processing;
how can a human focus on the essential?
STREAM PROCESSING
• Instead of handling static states of the data, the data is processed
as it enters the system
• The tables turn: the stream’s internal state, persisted to a database,
now serves as the backup in case of failure
• Obvious fit for quickly reactive online learning solutions
• The whole domain was spearheaded by computer trading
• Another example: credit card transaction processing and fraud prevention
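A minimal sketch of the credit-card example: every transaction is handled as it arrives, and the only state kept is a small per-card rolling window; the window length and spend limit are assumptions:

```python
# Flag a card when its spend inside a 10-minute window exceeds a limit.
from collections import defaultdict, deque

WINDOW_SECONDS = 600
SPEND_LIMIT = 5_000.0

recent = defaultdict(deque)                          # card_id -> deque of (timestamp, amount)

def on_transaction(card_id, timestamp, amount):
    window = recent[card_id]
    window.append((timestamp, amount))
    while window and window[0][0] < timestamp - WINDOW_SECONDS:
        window.popleft()                             # drop transactions outside the window
    if sum(a for _, a in window) > SPEND_LIMIT:
        return "flag for review"                     # decision made on the spot
    return "ok"

print(on_transaction("card-1", 1000, 4200.0))        # ok
print(on_transaction("card-1", 1200, 1500.0))        # window total 5700 > limit -> flag
```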
HADOOP AND DATA SCIENCE
• Hadoop is a general service platform, not just a MapReduce engine
• HBase is already becoming a hugely popular service backend
• In the long run Hadoop will also host a successful analytic database
• A wide selection of very different approaches to analytics and data science
exists already:
Hive and Pig, Impala, Mahout, Vowpal Wabbit, DataFu, Cloudera ML,
Giraph, RHadoop, …
REARRANGING THE MAP
• Change is not driven by replacing current bad solutions, but by innovating
around their shortcomings
• Stream processing of data will capture a large corner, driven by a sweeping
push closer to real-time
• High-level functional interfaces to data are another winner
• Examples: Cascading for batch processing, Trident for stream processing
• Further innovation in fixing MapReduce shortcomings
• Examples: Spark and Shark for iterative tasks, Impala for analytics
THE END