Beginner's Guide to Data Science
Data Science is: Popular
Lots of Data => Lots of Analysis => Lots of Jobs
Universities: Starting new multidisciplinary programs
Industry: A cottage industry of online courses and training is evolving
Goal of this Talk:
● Hear it from people who do it and learn what they do
● Use it for further learning and specialization
Data is: Big!
● 2.5 quintillion (10^18) bytes of data are generated every day!
● Everything around you collects/generates data
● Social media sites
● Business transactions
● Location-based data
● Sensors
● Digital photos, videos
● Consumer behaviour (online and store transactions)
● More data is publicly available
● Database technology is advancing
● Cloud based & mobile applications are widespread
Source: IBM http://www-01.ibm.com/software/data/bigdata/
Lots of Data => Lots of Analysis => Lots of Jobs
If I have data, I will know :)
Everyone wants better predictability, forecasting, customer satisfaction, market
differentiation, prevention, great user experience, ...
● How can I price a particular product?
● What can I recommend online customers buy after they buy X, Y or Z?
● How can we discover market segments? group customers into market segments?
● What will customers buy in the upcoming holiday season? (what to stock?)
● What is the price point for customer retention for subscriptions?
Data Science is: making sense of Data
Lots of Data => Lots of Analysis => Lots of Jobs
● Multidisciplinary study of data collections for analysis, prediction, learning and
prevention.
● Utilized in a wide variety of industries.
● Involves both structured and unstructured data sources.
Data Science is: multidisciplinary
● Statisticians
● Mathematicians
● Computer Scientists in
○ Data mining
○ Artificial Intelligence & Machine Learning
○ Systems Development and Integration
○ Database development
○ Analytics
● Domain Experts
○ Medical experts
○ Geneticists
○ Finance, Business, Economy experts
○ etc.
[Pipeline diagram] Start → Plan (What is the question? What type of data is needed?) → Data Acquisition (Data Quality Analysis; Reformatting & Imputing Data; Clean Data) → Data Analysis (Explore the Data; Feature Engineering) → Modeling (Feature Selection; Model Selection; Results Evaluation) → Deployment and Optimization (Deployment; Maintenance; Optimization). Scripts carry the data from one stage to the next.
[Pipeline diagram repeated, highlighting the Data Acquisition stage]
Data Acquisition Stage
● As soon as the data scientist has identified the problem she is trying to solve, she
must assess:
● What type of data is available
● What might be required and currently is not collected
● Is it available from other units of the company?
● Does she need to crawl/buy data from third parties?
● How much data is needed? (Data volume)
● How to access the data?
● Is the data private?
● Is it legally OK to use the data?
Data Acquisition Stage
● Data may not exist
● Sources of data may be public or private
● Not all sources of data may be suitable for processing
● Data are often incomplete and dirty
● Data consolidation and cleanup are essential
○ Pieces of data may be in different sources
○ Formats may not match/may be incompatible
○ Unstructured data may need to be accounted for
Data Acquisition Stage -- Example
Example: Online customer experience may require collecting lots of data such
as
● clicks
● conversions
● add-to-cart rate
● dwell time
● average order value
● foot traffic
● bounce rate
● exits and time to purchase
Data Acquisition: Type and Source of Data
● Time spent on a page, browsing and/or
search history
○ Website Logs
● User and Inventory Data
○ Transaction databases
● Social Engagement
○ Social Networks (Yelp, Twitter,...)
● Customer Support
○ Call Logs, Emails
● Gas prices, competitors, news, stock prices, etc.
○ RSS Feeds, News Sites, Wikipedia,...
● Training Data?
○ CrowdFlower, Mechanical Turk
Data Acquisition: Storage and Access
● Where the data resides
○ Cloud or Computing Clusters
● Storage System
○ SQL, NoSQL, File System
○ SQL: MySQL, Oracle, MS SQL Server,...
○ NoSQL: MongoDB, Cassandra, Couchbase, HBase, Hive, ...
○ Text Indexing: Solr, Elasticsearch,...
● Data Processing Frameworks:
○ Hadoop, Spark, Storm, etc.
Data Acquisition: Data Integration
Data integration involves combining data residing
in different sources and providing users with a
unified view of these data. (Wikipedia)
● Schema Mapping
● Record Matching
● Data Cleaning
[Diagram: Data Sources 1–4 → ETL → Data Warehouse]
Data Cleaning
● Data are often incomplete or incorrect.
○ Typos: e.g., text data in numeric fields
○ Missing values: some fields may not be collected for some of the examples
○ Impossible data combinations: e.g., gender = MALE, pregnant = TRUE
○ Out-of-range values: e.g., age = 1000
● Garbage In Garbage Out
● Scripting, Visualization
Figure ref: https://thedailyomnivore.net/2015/12/02/
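Checks like these are easy to script. A minimal pandas sketch on a hypothetical customer table exhibiting the problems above (the column names and thresholds are illustrative, not from the talk):

```python
import pandas as pd

# Hypothetical customer records with typical quality problems.
df = pd.DataFrame({
    "age": [34, 1000, 28, None],            # out-of-range value, missing value
    "gender": ["F", "M", "M", "F"],
    "pregnant": [False, True, False, None],  # row 1 is an impossible combination
})

# Flag ages outside a plausible range (missing ages also fail this check).
bad_age = ~df["age"].between(0, 120)

# Flag impossible combinations: gender = M and pregnant = TRUE.
impossible = (df["gender"] == "M") & (df["pregnant"] == True)

# Keep only rows that pass both checks.
clean = df[~(bad_age | impossible)]
```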
[Pipeline diagram repeated, highlighting the Data Analysis stage]
Analysis - Data Preparation
● Univariate Analysis: Analyze/explore variables one by one
● Bivariate Analysis: Explore relationship between variables
● Coverage, missing values: treating unknown values
● Outliers: detect and treat values that are distant from other observations
● Feature Engineering: Variable transformations and creation of new, better
variables from raw features
Commonly used tools:
● SQL
● R: plyr, reshape, ggplot2, data.table
● Python: NumPy, Pandas, SciPy, matplotlib
Analysis - Exploratory Analysis
Univariate Analysis: Analyze/explore variables one by one
- Continuous variable: explore central tendency and spread of the values
- Summary statistics
- mean, median, min, max
- IQR, standard deviation, variance, quartile
- Visualize Histograms, Boxplots
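In pandas these summaries are one-liners. A sketch on made-up temperature-like data:

```python
import pandas as pd

# Hypothetical continuous variable, e.g. daily temperatures.
temps = pd.Series([-7.3, 45.9, 60.7, 59.4, 73.9, 102.0, 61.2, 58.0])

# Central tendency and spread.
stats = {
    "mean": temps.mean(),
    "median": temps.median(),
    "min": temps.min(),
    "max": temps.max(),
    "std": temps.std(),
    "iqr": temps.quantile(0.75) - temps.quantile(0.25),
}
# temps.hist() and temps.plot.box() would draw the histogram and boxplot.
```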
Analysis - Exploratory Analysis
Walmart Store Sales Forecasting Data, Kaggle
Summary statistics for “Temperature”:
Min. 1st Qu. Median Mean 3rd Qu. Max. Std Dev.
-7.29 45.90 60.71 59.36 73.88 102.00 18.68
Analysis - Exploratory Analysis
Univariate Analysis: Analyze/explore variables one by one
- Categorical Variable: frequency tables
- Count and count %
- Visualize Bar charts
Analysis - Exploratory Analysis
Bivariate Analysis: Explore relationship between variables
- Continuous to continuous variables: Correlation measures the strength and
direction of a linear relationship
- Visualize Scatterplots -> relationship may not be linear
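A sketch of the continuous-to-continuous case in pandas (the data here is invented):

```python
import pandas as pd

# Hypothetical pair of continuous variables.
df = pd.DataFrame({
    "price":  [10, 12, 15, 18, 22, 30],
    "demand": [95, 90, 80, 70, 55, 40],
})

# Pearson correlation: strength and direction of the *linear* relationship.
r = df["price"].corr(df["demand"])

# df.plot.scatter(x="price", y="demand") would reveal non-linear patterns
# that a single correlation coefficient can hide.
```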
Analysis - Exploratory Analysis
Bivariate Analysis: Explore relationship between variables
- Categorical to categorical variables -> crosstab table
- Visualize Stacked bar charts
- Continuous to categorical variables -> Boxplots, Histograms for each level (category)
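The categorical-to-categorical case is a one-line crosstab in pandas (hypothetical segment/churn data):

```python
import pandas as pd

# Hypothetical categorical pair: market segment vs. churn flag.
df = pd.DataFrame({
    "segment": ["A", "A", "B", "B", "B", "C"],
    "churned": ["yes", "no", "no", "no", "yes", "no"],
})

# Contingency table of counts per (segment, churned) combination.
table = pd.crosstab(df["segment"], df["churned"])

# table.plot.bar(stacked=True) would draw the stacked bar chart.
```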
Analysis - Correlation vs Causation
Correlation ⇏ causation!
To establish causation:
● Randomized controlled experiments
● Hypothesis testing, A/B testing
Analysis - Feature Engineering
Create new features from existing raw features: discretize, bin
Transform Variables
Create new categorical variables: too many levels, levels that rarely occur, one
level that almost always occurs
Extremely skewed data - outliers
Imputation: Filling in missing data
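Two of these transformations sketched in pandas/NumPy (the variables and bin edges are hypothetical):

```python
import numpy as np
import pandas as pd

# Discretize / bin a continuous variable into a new categorical feature.
ages = pd.Series([5, 17, 25, 42, 67, 80])
age_group = pd.cut(ages, bins=[0, 18, 40, 65, 120],
                   labels=["child", "young", "middle", "senior"])

# Transform an extremely skewed variable: log1p compresses the long tail.
income = pd.Series([20_000, 35_000, 40_000, 2_000_000])
log_income = np.log1p(income)
```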
Analysis - Missing Values
Missing values are unknown values of a feature.
They are important as they may lead to biased models or incorrect estimates and conclusions.
Some ML algorithms accept missing values: for example, some tree-based models treat
missing values as a separate branch, while many other algorithms require a complete
dataset. Therefore, we can
● omit: remove missing values and use available data
● impute: replace missing values with estimates from the mean/median/mode of the
existing data, from the most similar data points (KNN), or from more complex
algorithms like Random Forest
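Both options in pandas, on a toy column (median imputation is just one of the strategies listed above):

```python
import pandas as pd

df = pd.DataFrame({"income": [50_000, None, 62_000, None, 58_000]})

# Option 1 -- omit: drop rows with missing values and use available data.
omitted = df.dropna()

# Option 2 -- impute: fill missing values with the median of the observed data.
imputed = df["income"].fillna(df["income"].median())
```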
Analysis - Outliers
Outliers are values distant from other observations: for example, values more than
~three standard deviations from the mean, values in the top or bottom 5 percentiles,
or values outside 1.5 × IQR beyond the quartiles.
Visualization methods like Boxplots, Histograms and Scatterplots help spot them
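The 1.5 × IQR rule is a few lines in pandas (toy data):

```python
import pandas as pd

values = pd.Series([12, 14, 15, 15, 16, 17, 18, 95])  # 95 looks suspicious

# Quartiles and interquartile range.
q1, q3 = values.quantile([0.25, 0.75])
iqr = q3 - q1

# Anything outside [q1 - 1.5*IQR, q3 + 1.5*IQR] is flagged as an outlier.
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = values[(values < lower) | (values > upper)]
```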
Analysis - Outliers
Some algorithms, like regression, are sensitive to outliers, which can cause high
variance and bias in the estimated values.
Treatment options: delete, cap, transform, or impute them like missing values.
[Pipeline diagram repeated, highlighting the Modeling stage]
Predictive data modeling
Prediction: that is the end goal of many data science adventures!
Data on consumer behaviour is collected:
● to predict future consumer behaviour and to take action accordingly
Examples:
● Recommendation systems (Netflix, Pandora, Amazon, etc.)
● Online user behaviour is used to predict the best targeted ads
● Customer purchase histories are used to determine how to price, stock,
market and display future products.
Machine learning
● Machine Learning is the study of algorithms that improve their performance at
some task with example data or past experience
○ The foundations of many ML algorithms lie in statistics and optimization theory
○ Role of Computer science: Efficient algorithms to
■ Solve the optimization problem
■ Represent and evaluate data models for inference
● Wide variety of off-the-shelf algorithms are available today. Just pick a library
and go! (is it really that easy?)
○ Short answer: no. Long answer: model selection and tuning requires deeper understanding.
Machine learning - basics
Machine learning systems are made up of
3 major parts, which are:
● Model: the system that makes
predictions.
● Parameters: the signals or factors
used by the model to form its
decisions.
● Learner: the system that adjusts the parameters — and in turn the model — by looking at differences between predictions and actual outcomes.
Ref: http://marketingland.com/how-machine-learning-works-150366
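The model/parameters/learner loop can be sketched in a few lines of plain Python. This is a hypothetical one-parameter model trained by gradient steps, not code from the referenced article:

```python
# (input, actual outcome) pairs; the true relationship is outcome = 2 * input.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]

w = 0.0    # parameter: the signal the model uses to form its decision
lr = 0.05  # learning rate of the learner

for _ in range(200):
    for x, actual in data:
        pred = w * x            # model: makes a prediction
        error = pred - actual   # difference between prediction and actual outcome
        w -= lr * error * x     # learner: adjusts the parameter (gradient step)
```

After training, `w` should settle near 2.0, the slope hidden in the data.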
Machine learning application examples
● Association Analysis
○ Basket analysis: Find the probability that somebody
who buys X also buys Y
● Supervised Learning
○ Classification: Spam filter, language prediction,
customer/visit type prediction
○ Regression: Pricing
○ Recommendation
● Unsupervised Learning
○ Given a database of customer data, automatically
discover market segments and group customers into
different market segments
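The market-segmentation example might be sketched with scikit-learn's KMeans (the customer features below are invented):

```python
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical customer features: [annual spend, visits per month].
customers = np.array([
    [200, 1], [250, 2], [220, 1],        # low-spend, infrequent visitors
    [5000, 20], [5200, 22], [4800, 18],  # high-spend, frequent visitors
])

# Automatically discover two market segments from the data alone (no labels).
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(customers)
segments = km.labels_
```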
Model selection and generalization
● Learning is an ill-posed problem; data is
not sufficient to find a unique solution
● There is a trade-off between three
factors:
○ Model complexity
○ Training set size
○ Generalization error (expected error
on new data)
● Overfitting and underfitting problems
Ref: http://www.inf.ed.ac.uk/teaching/courses/iaml/slides/eval-2x2.pdf
Generalization error and cross-validation
● Measuring the generalization error is a major
challenge in data mining and machine
learning
● To estimate generalization error, we need
data unseen during training. We could split
the data as
○ Training set (50%)
○ Validation set (25%) (optional, for selecting ML
algorithm parameters)
○ Test (publication) set (25%)
● How to avoid selection bias: k-fold cross-validation
Figure ref: https://www.quora.com/I-train-my-system-based-on-the-10-fold-cross-validation-framework-Now-it-gives-me-10-different-models-Which-model-to-select-as-a-representative
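A k-fold sketch with scikit-learn, on synthetic data (logistic regression is chosen only for illustration):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Synthetic classification dataset standing in for real data.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# 5-fold cross-validation: each fold serves once as the held-out test set,
# giving a less biased estimate of generalization error than one split.
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
generalization_estimate = scores.mean()
```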
Deep Learning
● Neural networks (NNs) have been around for decades, but they just weren’t “deep” enough. NNs with
several hidden layers are called deep neural networks (DNNs).
● Unlike many ML approaches, deep learning attempts to model high-level abstractions in data.
● Deep learning is best suited when the input space is locally structured – spatial or temporal – rather
than arbitrary input features
[Pipeline diagram repeated, highlighting the Deployment and Optimization stage]
Deployment, maintenance and optimization
● Deployed solutions might include:
○ A trained data model (model + parameters)
○ Routines for data input and prediction
○ (Optional) Routines for model improvement (through feedback, the deployed system can improve
itself)
○ (Optional) Routines for training
● Once the model has been deployed in production, it is time for regular
maintenance and operations.
● The optimization phase could be triggered by failing performance, the need to
add new data sources and retrain the model, or the opportunity to deploy improved
versions of the model based on better algorithms.
Ref: http://www.datasciencecentral.com/m/blogpost?id=6448529%3ABlogPost%3A234092
Recap - Software Toolbox of Data Scientists:
● Database
○ SQL
○ NoSQL languages for target databases
● Programming Languages and Libraries
○ Python (due to the availability of libraries for data management): scikit-learn, PyML, pandas
○ R
○ General programming languages such as Java for gluing different systems
○ C/C++: mlpack, dlib
● Tools: Orange, Weka, Matlab
● Vendor Specific Platforms for data analytics
(such as Adobe Marketing Cloud, etc.)
● Hive
● Spark
Conclusion: It takes a team
Must haves:
- Programming and Scripting skills
- Statistics and data analysis skills
- Machine learning skills
Necessary but not sufficient:
- Database management skills
- Distributed computing skills
Domain knowledge may make or break a system: if you do not realize a type of
data is essential, the results will not be very useful.
THANK YOU
