Tools and Methods for Big Data Analytics
One hour of everything you need to know to navigate the data science jungle
by Dahl Winters, RTI International
Overview
• What is Big Data Analytics
• What Tools to Use When
• Most Common Hadoop Use Cases
• Geospatial Analytics
o NoSQL and Graph Databases
o Machine Learning
• Classification
• Clustering
o Deep Learning
• Resources: http://www.scoop.it/u/dahl-winters
Big Data Analytics
[Diagram: Big Data Analytics sits at the intersection of Statistics, Machine Learning, Data Science, Software Engineering, and Analytics (descriptive, predictive, prescriptive), applied to lots of complex data across domains such as Image Processing, Geospatial Analytics, Network Analytics, Text Analytics, Sentiment Analysis, and Social Media Analytics]
What is Hadoop Good For
• Essentially, anything involving complex data and/or
multiple data sources requiring batch processing, parallel
execution, spreading data over a cluster of servers, or
taking the computation to the data because the data are
too big to move

• Text mining, index building, graph creation/analysis,
pattern recognition, collaborative filtering, prediction
models, sentiment analysis, risk assessment
• If your data are small, Hadoop will be slow – use
something else (scikit-learn, R, etc.)
What is Hadoop?
When to Use What
• Depends on whether you need real-time analysis or not
o Affects what products, tools, hardware, data sources, and data frequency you will
need to handle

• Data frequency and size
o Determine the storage mechanism, storage format, and necessary preprocessing
tools
o Examples: on-demand (social media data), continuous real-time feed (weather
data, transaction data), time-based data (time series)

• Type of data
o Structured (RDBMS)
o Unstructured (audio, video, images)
o Semi-structured
Decision Tree
[Flowchart: How big is your data? Less than 10 GB → Small Data Methods. Larger (10 GB < x < 200 GB, or more than 200 GB) → What size queries? Single element at a time → Big Storage; One pass over all the data → Streaming; Multiple passes over big chunks → Response time? Less than 100 s → Impala, Drill, Titan; Don't care, just do it → Batch Processing]
Big Data Considerations

http://www.ibm.com/developerworks/library/bd-archpatterns1/
Survey of Use Cases
9 general use cases for big data tools and methods
2 real-time analytics tools
8 MapReduce use cases – what you can use Hadoop for

1 geospatial use case
Use Cases
1. Utilities want to predict power consumption
o Use machine-generated data
o Smart meters generate huge volumes of data to analyze, and the power grid contains
numerous sensors monitoring voltage, current, frequency, etc.

2. Banks and insurance companies want to understand
risk
o Use machine-generated, human-generated, and transaction data from credit
card records, call recordings, chat sessions, emails, and banking activity
o Want to build a comprehensive data picture using sentiment analysis, graph
creation, and pattern recognition

3. Fraud detection
o Machine-generated, human-generated, and transaction data
o Requires real-time or near real-time transaction analysis and the generation of
recommendations for immediate action
Use Cases
4. Marketing departments want to understand customers
o Use web and social data such as Twitter feeds
o Conduct sentiment analysis to learn what users are saying about the company
and its products/services; sentiment must be integrated with customer profile
data to derive meaningful results.
o Customer feedback may vary according to demographics, which are
geographically uneven and thus have a geospatial component

5. They also want to understand customer churn
o Use web and social data, along with transaction data
o Build behavioral models including social media and transaction data to predict
and manage churn by analyzing customer activity. Graph creation/traversal and
pattern recognition may be involved.

6. They may also just want to get insights from the data
o Use Hadoop to try out different analyses on the data to find potential new
patterns/relationships that yield additional value
Use Cases
7. Recommendations
o If you bought this item, what other items might you buy?
o Collaborative filtering = using information from users to predict what similar users
might like.
o Requires batch processing across large, distributed datasets

8. Location-Based Ad Targeting
o Uses web and social data, perhaps also biometrics for facial recognition; also
machine-generated data (GPS) and transaction data
o Predictive behavioral targeting and personalized messaging – companies can
use facial recognition technology in combination with a photo from social media
to make personalized offers based on buying behavior and location
o Serious privacy concerns

9. Threat Analysis
o Pattern recognition to identify anomalies
Real-Time Analytics
• Streaming data management is the key technology for delivering
low-latency analytics at large scale
• Scale by adding more servers
• Twitter Storm – can be used with any programming
language. For online machine learning or continuous
computation. Can process more than a million tuples
per second per node.
• LinkedIn Samza – built on top of LinkedIn’s Kafka
messaging system
MapReduce Use Cases
1. Counting and Summing
o Given N documents, each with a set of terms, we want to calculate the total number
of occurrences of each term across all N documents (see the sketch after this list)

2. Collating
o Each item in a set has some property, and we want to save all items sharing a
property into one file, or perform some computation requiring all property-containing
items to be processed as a group (e.g., building inverted indices)

3. Filtering, Parsing, and Data Validation
o We want to collect all records that meet some condition, or transform each record
into another representation (e.g., text parsing, value extraction, conversion from
one format to another)

4. Distributed Task Execution
o Any large computational problem that can be divided into multiple parts and
results from all parts can be combined into a final result
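
A minimal sketch of use case 1 as a runnable job, using the mrjob Python library (an assumption – the deck does not prescribe a framework; any Hadoop Streaming wrapper would do). Each mapper emits (term, 1) pairs and the reducer sums them per term:

from mrjob.job import MRJob

class MRTermCount(MRJob):
    def mapper(self, _, line):
        # One input line at a time; emit a count of 1 per term
        for term in line.split():
            yield term.lower(), 1

    def reducer(self, term, counts):
        # Hadoop groups values by key; sum for the total across all documents
        yield term, sum(counts)

if __name__ == "__main__":
    MRTermCount.run()

Run locally with `python term_count.py input.txt`, or on a cluster with `-r hadoop`.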
MapReduce Use Cases
5. Sorting
o We want to sort records by some rule or process the records in a certain order

6. Iterative Message Passing (Graph Processing)
o Given a network of entities and relationships between them, calculate each
entity’s state based on the properties of surrounding entities

7. Distinct Values (Unique Items Counting)
o A set of records contain fields A and B, and we want to count the total number of
unique values of field A, grouped by B

8. Cross-Correlation
o Given a list of items bought by customers, calculate for each pair of items how
frequently customers bought both items together (see the sketch after this list)
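
Use case 8 is commonly implemented with the "pairs" pattern: the mapper emits every co-occurring item pair in a basket and the reducer counts each pair. A minimal mrjob sketch, assuming a hypothetical input format of one comma-separated basket per line:

from itertools import combinations
from mrjob.job import MRJob

class MRItemPairs(MRJob):
    def mapper(self, _, line):
        # Deduplicate and sort so (a, b) and (b, a) map to the same key
        items = sorted(set(line.strip().split(",")))
        for a, b in combinations(items, 2):
            yield [a, b], 1

    def reducer(self, pair, counts):
        # Number of baskets containing both items
        yield pair, sum(counts)

if __name__ == "__main__":
    MRItemPairs.run()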
Geospatial Analytics
• Question: What defines a community?
• Tools and Methods
o Graph Databases
o Classification Algorithms to Identify Characteristics of Community Members
o Clustering Algorithms to Identify Community Boundaries

• Base Dataset
o Synthetic Population Household Viewer
o https://www.epimodels.org/midas/synthpopviewer_index.do
Graph Databases
• Think of nodes as points and edges as lines connecting them
• Nodes can have attributes (properties); edges can have labels
(see the sketch after this list)
• In the Hadoop ecosystem: Giraph, Titan, Faunus
• Giraph: in-memory, lots of Java code
• Titan: database allowing fast querying of large,
distributed graphs; choice of 3 storage backends
• Faunus: graph analytics engine performing batch
processing of large graphs; fastest with breadth-first
searches
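
To see the property-graph model in miniature without standing up Titan, here is a toy Python sketch using NetworkX; the node names and attributes are invented for illustration:

import networkx as nx

g = nx.Graph()
g.add_node("alice", age=34, zipcode="27701")   # nodes carry attributes
g.add_node("bob", age=29, zipcode="27701")
g.add_edge("alice", "bob", label="household")  # edges carry labels

# A graph-database traversal reduces to neighborhood queries like this one
household = [n for n in g.neighbors("alice")
             if g["alice"][n]["label"] == "household"]
print(household)  # ['bob']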
Identify This
Synthetic Population Household Viewer
http://portaldev.rti.org/10_Midas_Docs/SynthPop/portal.html
Machine Learning
Algorithm Roadmap

http://peekaboo-vision.blogspot.de/2013/01/machine-learning-cheat-sheet-for-scikit.html
Classification Algorithms
• kNN, Naïve Bayes, Logistic Regression, Decision Trees,
Random Forests, Support Vector Machines, Neural
Networks, oh my! How to decide?
• Look at the size of your training set
o Small: high bias/low variance classifiers like Naïve Bayes are better since the
others will overfit, but high bias classifiers aren’t powerful enough to provide
accurate models.
o Large: low bias/high variance classifiers such as kNN or logistic regression are
better because they have lower asymptotic error

• When to use kNN
o Personalization tasks – might employ kNN to find similar customers and base an
offer on their purchase behaviors
o Have to decide what k to use – vary k, calculate the accuracy against a holdout
set, and plot the results (sketched after this list)
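
A minimal scikit-learn sketch of that k-selection procedure on synthetic data (the dataset and the resulting curve are illustrative only):

import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.3, random_state=0)  # 30% holdout set

ks = range(1, 31)
scores = [KNeighborsClassifier(n_neighbors=k)
          .fit(X_train, y_train)
          .score(X_hold, y_hold) for k in ks]  # accuracy per k

plt.plot(ks, scores)
plt.xlabel("k")
plt.ylabel("holdout accuracy")
plt.show()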
Classification Algorithms
• When to use Naïve Bayes
o When you don’t have much training data; Naïve Bayes converges more quickly than
discriminative models like logistic regression (contrasted in the sketch after this list)
o Any time – this should be a first thing to try especially if your features are
independent (no correlation between them)

• When to use Logistic Regression
o When you don’t have to worry much about features being correlated
o When you want a nice probabilistic interpretation, which you won’t get with
decision trees or SVMs, in order to adjust classification thresholds or get
confidence intervals
o When you want to easily update the model to take in new data (using gradient
descent), again unlike decision trees or SVMs

• When to use Decision Trees
o They are easy to interpret and explain, but easy to overfit. To solve that problem,
use random forests instead.
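
One way to see the Naïve Bayes vs. logistic regression trade-off is to train both on growing slices of the same training set; a scikit-learn sketch on synthetic data (the exact numbers are illustrative):

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=5000, n_features=20, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

for n in (50, 500, 3000):  # growing training-set sizes
    nb = GaussianNB().fit(X_train[:n], y_train[:n])
    lr = LogisticRegression(max_iter=1000).fit(X_train[:n], y_train[:n])
    print(n, round(nb.score(X_test, y_test), 3),
          round(lr.score(X_test, y_test), 3))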
Classification Algorithms
• When to use Random Forests
o Whenever you think about using decision trees (random forests almost always
have lower classification error and better f-scores, and almost always perform as
well or better than SVMs but are far easier to understand).
o If your data are very uneven with many missing variables
o If you want to know which features in the data set are important (see the sketch
after this list)
o If you want something that will train fast and that will be scalable
o Logistic Regression vs. Random Forests: both are fast and scalable; the latter
tends to beat the former in terms of accuracy

• When to use SVMs
o When working with text classification or any situation where high-dimensional
spaces are common
o Advantage: high accuracy, generally superior in classifying complex patterns.
Disadvantage: memory intensive. Unsuitable for large training sets.
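
A minimal sketch of reading feature importances out of a random forest with scikit-learn (synthetic data, so the indices and scores mean nothing in themselves):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=2000, n_features=10,
                           n_informative=3, random_state=2)
rf = RandomForestClassifier(n_estimators=200, random_state=2).fit(X, y)

# Rank features by how much each contributes to the forest's splits
ranked = sorted(enumerate(rf.feature_importances_), key=lambda t: -t[1])
for i, imp in ranked[:5]:
    print("feature %d: importance %.3f" % (i, imp))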
Classification Algorithms
• When to Use Neural Networks
o Slow to converge, hard to set parameters, but good at capturing fairly complex
patterns. Slow to train but fast to use; unlike SVMs the execution speed is
independent of the size of the data it was trained on.
o MLP neural network – well-suited for complex real-world problems – on average,
superior to both SVM and Naïve Bayes. However, cannot easily understand the
model built for classifying.

• General Points
o Better data often beats better algorithms – designing good features goes a long
way.
o With a huge dataset, choice of classification algorithm might not really affect
performance much, so choose based on speed or ease of use instead.
o If accuracy is paramount, try many different classifiers and select the best one by
cross-validation (sketched below), or combine them all with an ensemble method.
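
A minimal sketch of the select-by-cross-validation advice, comparing a few candidate classifiers with scikit-learn (toy data; the candidates are just examples):

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=1000, random_state=3)
candidates = {
    "naive_bayes": GaussianNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=100),
}
for name, clf in candidates.items():
    scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validation
    print(name, round(scores.mean(), 3))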
Clustering Algorithms

http://scikit-learn.org/stable/auto_examples/cluster/plot_cluster_comparison.html
Clustering Algorithms
• Canopy clustering
o Pre-clustering algorithm, often used prior to k-means or hierarchical in order to
speed up clustering operations on large data sets and potentially improve
clustering results

• DBSCAN/OPTICS
o DBSCAN (density-based spatial clustering of applications with noise) – finds
density-based clusters in spatial data (contrasted with k-means in the sketch
after this list)
o OPTICS (ordering points to identify the clustering structure) – a generalization
of DBSCAN to multiple distance ranges, so meaningful clusters can be found in
areas of varying density
• Hierarchical clustering
• K-means clustering
o Most common

• Spectral clustering
o Dimensionality reduction before clustering in fewer dimensions
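
The define-k-beforehand vs. varying-density distinction is easy to see on non-convex toy data; a scikit-learn sketch (eps and min_samples are illustrative):

from sklearn.cluster import DBSCAN, KMeans
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=500, noise=0.05, random_state=4)

km = KMeans(n_clusters=2, n_init=10, random_state=4).fit(X)  # k fixed up front
db = DBSCAN(eps=0.2, min_samples=5).fit(X)  # clusters found by density

print(set(km.labels_))  # {0, 1}
print(set(db.labels_))  # density-based labels; -1 marks noise points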
Clustering Decision Tree
[Flowchart: Do you want to define the number of clusters beforehand? No → Do your points have varying densities? No → DBSCAN; Yes → OPTICS. Yes → How many clusters would you have? A few → Spectral clustering; A medium number → K-means; A large number → Hierarchical clustering]
Deep Learning
• Why?
o Computers can learn without being taught
o Can adapt to experience rather than being dependent on a human programmer
o Think of the baby that learns sounds, then words, then sentences – must start at
low-level features and graduate to higher-level representations

• What?
o Essentially, layers of neural networks
o Restricted Boltzmann Machines, Deep Belief Networks, Auto-Encoders
o http://www.meetup.com/Chicago-Machine-Learning-Study-Group/files/

• Examples
o Word2vec – pre-packaged deep learning software that can recognize similarities
among words (countries in Europe) as well as how they relate to other words
(countries and capitals); see the sketch below
o AlchemyAPI – for image recognition of common objects
(demo: http://www.youtube.com/watch?v=n1ViNeWhC24)
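
A minimal word2vec sketch using the gensim library (an assumption – the deck refers to the pre-packaged word2vec software). The two-sentence corpus exists only to show the API; recovering the country–capital relation requires a real corpus:

from gensim.models import Word2Vec

sentences = [["paris", "is", "the", "capital", "of", "france"],
             ["berlin", "is", "the", "capital", "of", "germany"]]
model = Word2Vec(sentences, vector_size=50, min_count=1, epochs=50)

# On a full corpus, vector arithmetic recovers relations such as
# paris - france + germany ~ berlin
print(model.wv.most_similar(positive=["paris", "germany"],
                            negative=["france"], topn=3))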
Hadoop Connectors
• R: rmr2 allows MapReduce jobs from R environment;
bridges in-memory and HDFS
o Non-Hadoop R for Big Data: pbdR (programming with big data in R) – allows R
to use large HPC platforms with thousands of cores by providing an interface to
MPI, NetCDF4, and more

• MongoDB and Hadoop: Mongo-Hadoop 1.1
• Pattern: migrating predictive models from SAS,
MicroStrategy, SQL Server, etc. to Hadoop via PMML
(an XML standard for predictive model markup)
• .NET MapReduce API for Hadoop
• Python for Hadoop
Python-Hadoop Options

http://blog.cloudera.com/blog/2013/01/a-guide-to-python-frameworks-for-hadoop/
Python-Hadoop Benchmarks

http://blog.cloudera.com/blog/2013/01/a-guide-to-python-frameworks-for-hadoop/
Questions?
Dahl Winters, dahlwinters@gmail.com

http://www.scoop.it/u/dahl-winters

