
Department of

Computer Science and Engineering

10212CS210 – Big Data Analytics

Course Category : Program Elective
Credits : 4
Slot : S1 & S5
Semester : Summer
Academic Year : 2024-2025
Faculty Name : Dr. S. Jagan

School of Computing
Vel Tech Rangarajan Dr. Sagunthala R&D Institute of
Science and Technology
Unit 5 Big Data Cloud Concepts and Optimization

Big Data Cloud Computing: Features, Cloud Deployment Models, Cloud
Delivery Models, Cloud for Big Data, Real-Time Analytics Platform (RTAP)
applications – Using Graph Analytics for Big Data: Graph Analytics – Big Data
Optimization: Smooth Convex Optimization (projection-free methods,
accelerated gradient descent methods), Non-smooth Convex Optimization
(smoothing techniques, Mirror-Prox method), Sparsity learning, Large-scale
kernel machines.

Department of Computer Science and Engineering 2


Big Data

Real-time/Fast Data sources:

• Mobile devices (tracking all objects all the time)
• Social media and networks (all of us are generating data)
• Scientific instruments (collecting all sorts of data)
• Sensor technology and networks (measuring all kinds of data)

• Progress and innovation are no longer hindered by the ability to collect data
• But by the ability to manage, analyze, summarize, visualize, and discover
  knowledge from the collected data in a timely manner and in a scalable fashion


Big Data
Real-Time Analytics/Decision Requirement

• Product recommendations that are relevant & compelling
• Learning why customers switch to competitors and their offers, in time to counter
• Influencing customer behavior
• Friend invitations to join a game or activity that expands the business
• Improving the marketing effectiveness of a promotion while it is still in play
• Preventing fraud as it is occurring & preventing more proactively




Big Data

• OLTP: Online Transaction Processing (DBMSs)
• OLAP: Online Analytical Processing (Data Warehousing)
• RTAP: Real-Time Analytics Processing (Big Data architecture & technology)


Big Data

The Model Has Changed…


• The model of generating and consuming data has changed
  • Old model: a few companies generate data; all others consume it
  • New model: all of us generate data, and all of us consume data


Big Data

What’s driving Big Data

Big Data:
- Optimizations and predictive analytics
- Complex statistical analysis
- All types of data, and many sources
- Very large datasets
- More real-time

Traditional BI:
- Ad-hoc querying and reporting
- Data mining techniques
- Structured data, typical sources
- Small to mid-size datasets


Big Data

The Evolution of Business Intelligence

• 1990's – BI Reporting, OLAP & Data warehouse:
  Business Objects, SAS, Informatica, Cognos, other SQL Reporting Tools
• 2000's – Interactive Business Intelligence & In-memory RDBMS (Speed):
  QlikView, Tableau, HANA
• 2010's – Big Data: Batch Processing & Distributed Data Store (Scale):
  Hadoop/Spark; HBase/Cassandra
• 2010's – Big Data: Real Time & Single View (Speed & Scale):
  Graph Databases
Big Data

• Big data is more real-time in nature than traditional DW applications
• Traditional DW architectures (e.g. Exadata, Teradata) are not well-suited
  for big data apps
• Shared-nothing, massively parallel processing, scale-out architectures are
  well-suited for big data apps




Big Data & Cloud Computing

• IT resources provided as a service
  • Compute, storage, databases, queues
• Clouds leverage economies of scale of commodity hardware
  • Cheap storage, high-bandwidth networks & multicore processors
  • Geographically distributed data centers
• Offerings from Microsoft, Amazon, Google, …




Big Data & Cloud Computing

• Cost & management
  • Economies of scale, "out-sourced" resource management
• Reduced time to deployment
  • Ease of assembly, works "out of the box"
• Scaling
  • On-demand provisioning, co-locating data and compute
• Reliability
  • Massive, redundant, shared resources
• Sustainability
  • Hardware not owned


Cloud Deployment Models

• Public Cloud: Computing infrastructure is hosted at the vendor's premises.
• Private Cloud: Computing infrastructure is dedicated to the customer and is
  not shared with other organisations.
• Hybrid Cloud: Organisations host critical, secure applications in private
  clouds; less critical applications are hosted in the public cloud.
  • Cloud bursting: the organisation uses its own infrastructure for normal
    usage, but the cloud is used for peak loads.
• Community Cloud: Infrastructure shared by several organisations with
  common concerns.


Cloud Delivery Models

• Infrastructure as a Service (IaaS)
  • Offering hardware-related services using the principles of cloud computing.
    These could include storage services (database or disk storage) or virtual
    servers.
  • Examples: Amazon EC2, Amazon S3, Rackspace Cloud Servers and FlexiScale.
• Platform as a Service (PaaS)
  • Offering a development platform on the cloud.
  • Examples: Google's App Engine, Microsoft's Azure, Salesforce.com's Force.com.
• Software as a Service (SaaS)
  • A complete software offering on the cloud. Users can access a software
    application hosted by the cloud vendor on a pay-per-use basis. This is a
    well-established sector.
  • Examples: Salesforce.com's offering in the online Customer Relationship
    Management (CRM) space, Google's Gmail and Microsoft's Hotmail, Google Docs.


Cloud Delivery Models

Infrastructure as a Service (IaaS)



Cloud Delivery Models

• Storage-as-a-service
• Database-as-a-service
• Information-as-a-service
• Process-as-a-service
• Application-as-a-service
• Platform-as-a-service
• Integration-as-a-service
• Security-as-a-service
• Management/
Governance-as-a-service
• Testing-as-a-service
• Infrastructure-as-a-service



Cloud Delivery Models

• Service-Oriented Architecture (SOA)
• Utility Computing (on demand)
• Virtualization (P2P Network)
• SaaS (Software as a Service)
• PaaS (Platform as a Service)
• IaaS (Infrastructure as a Service)
• Web Services in the Cloud


Cloud Delivery Models

Enabling Technology: Virtualization

• Traditional stack: Apps run directly on a single Operating System, which
  runs on the Hardware
• Virtualized stack: each App runs on its own guest OS, and a Hypervisor
  multiplexes the Hardware among the guest OSes


Cloud Delivery Models

Everything as a Service
• Utility computing = Infrastructure as a Service (IaaS)
  • Why buy machines when you can rent cycles?
  • Examples: Amazon's EC2, Rackspace
• Platform as a Service (PaaS)
  • Give me a nice API and take care of the maintenance, upgrades, …
  • Example: Google App Engine
• Software as a Service (SaaS)
  • Just run it for me!
  • Examples: Gmail, Salesforce


Cloud Delivery Models

Cloud versus cloud

• Amazon Elastic Compute Cloud (EC2)
• Google App Engine
• Microsoft Azure
• GoGrid
• AppNexus


Cloud Delivery Models

The Obligatory Timeline Slide (Mike Culver @ AWS)

Darkness (COBOL, Edsel) → Web Awareness (ARPANET) → Web as a Platform
(Internet, Amazon.com, Dot-Com Bubble) → Web Services (Web 2.0) →
Resources Eliminated (Web-Scale Computing)


Cloud Delivery Models

AWS
• Elastic Compute Cloud – EC2 (IaaS)
• Simple Storage Service – S3 (IaaS)
• Elastic Block Storage – EBS (IaaS)
• SimpleDB (SDB) (PaaS)
• Simple Queue Service – SQS (PaaS)
• CloudFront (S3 based Content Delivery Network –
PaaS)
• Consistent AWS Web Services API



Cloud Delivery Models
What does the Azure platform offer to developers?


RTAP

• Real-time analytics refers to finding meaningful patterns in data at the
  actual time it is received.
• A Real-Time Analytics Platform (RTAP) analyses the data, correlates it,
  and predicts outcomes in real time.


RTAP

• Manages and processes data and supports timely decision-making
• Helps to develop dynamic analysis applications
• Leads to the evolution of business intelligence


RTAP

• Apache Spark Streaming: a Big Data platform for data-stream analytics in
  real time.
• Cisco Connected Streaming Analytics (CSA): a platform that delivers insights
  from high-velocity streams of live data from multiple sources and enables
  immediate action.
RTAP

• Oracle Stream Analytics (OSA): a platform that provides a graphical
  interface to "Fast Data".
• SAP HANA: a streaming analytics tool which also does real-time analytics.


RTAP

• SQLstream Blaze: an analytics platform offering a real-time, easy-to-use
  and powerful visual development environment for developers and analysts.
• TIBCO StreamBase: streaming analytics which accelerates action so that
  applications can be built quickly.
RTAP

• Informatica: a real-time data streaming tool which transforms a torrent of
  small messages and events into business agility.


RTAP

• IBM Stream Computing: a data streaming tool that analyzes a broad range of
  streaming data (unstructured text, video, audio, geospatial, sensor),
  helping organizations spot opportunities and risks and make decisions in
  real time.


RTAP Applications

1. Fraud detection systems for online transactions
2. Log analysis for understanding usage patterns
3. Click analysis for online recommendations
4. Social media analytics


RTAP Applications

5. Push notifications to customers for location-based advertisements in retail
6. Action for emergency services such as fires and accidents in an industry
7. Abnormal measurements requiring immediate reaction in healthcare monitoring
RTAP Applications

• Positive/Negative sentiments
• Sentiment analysis features:
  1. NEGATION
  2. POSITIVE SMILEY
  3. NEGATIVE SMILEY
  4. DON'T — YOU, OH, SO, AS FAR AS
  5. LAUGH
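The feature names above can be turned into a minimal detector. This is only an illustrative sketch: the regular expressions below are assumptions, not the course's actual feature definitions.

```python
import re

# Patterns for a few of the listed sentiment features (illustrative).
FEATURE_PATTERNS = {
    "NEGATION": re.compile(r"\b(not|no|never|don'?t|can'?t)\b", re.I),
    "POSITIVE_SMILEY": re.compile(r"[:;]-?[)D]"),
    "NEGATIVE_SMILEY": re.compile(r":-?[(\\/]"),
    "LAUGH": re.compile(r"\b(haha+|lol|lmao)\b", re.I),
}

def extract_features(text: str) -> dict:
    """Return a binary feature vector for one message."""
    return {name: bool(p.search(text)) for name, p in FEATURE_PATTERNS.items()}

print(extract_features("I don't like this :( at all"))
```

Such binary features would then feed a classifier that labels each message as positive or negative.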


Graph Analytics



Big Data Optimization



Big Data Optimization

[Figure: a classifier f maps a set of temple images to corresponding
architecture labels, e.g. f{temple images} = "Vijayanagara Style"]
Sparsity Learning

• Trained networks occupy huge memory (~200 MB for AlexNet)
• Many DRAM accesses
• Filters can be sparse. Sparsity percentages for filters in AlexNet
  convolutional layers:

  Layer   Sparsity Percentage
  CONV2   6
  CONV3   7
  CONV4   7

• Sparsity is too low
• Thresholding at inference will lead to accuracy loss
• Account for sparsity during training


Sparsity Learning

• Deep Compression
  • Pruning low-weight values, then retraining to recover accuracy
  • Less storage, and also speedup reported in Fully Connected layers
• Structured Sparsity Learning (SSL) in Deep Neural Networks
  • Employs locality optimization for sparse weights
  • Memory savings and speedup in Convolutional layers

  Layer   Sparsity Percentage
  CONV1   14
  CONV2   54
  CONV3   72
  CONV4   68
  CONV5   58
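Deep Compression's pruning step can be sketched as magnitude-based thresholding. The percentile-based threshold below is an illustrative assumption; the original method prunes per layer and then retrains with the pruned positions held at zero.

```python
import numpy as np

def prune_by_magnitude(w: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction `sparsity` of weights."""
    thresh = np.percentile(np.abs(w), sparsity * 100)
    mask = np.abs(w) >= thresh
    return w * mask

rng = np.random.default_rng(0)
w = rng.standard_normal((96, 363))           # e.g. one AlexNet CONV weight matrix
wp = prune_by_magnitude(w, 0.54)             # target ~54% sparsity (CONV2 in the SSL table)
print(1.0 - np.count_nonzero(wp) / wp.size)  # achieved sparsity
```

Retraining after this step is what recovers the lost accuracy; pruning alone is the "thresholding in inference" case the previous slide warns against.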


Sparsity Learning

• Caffe: an open-source framework to build and run convolutional neural nets
  • Provides Python and C++ interfaces for inference
  • Source code in C++ and CUDA
  • Employs efficient data structures for feature maps and weights
    (the Blob data structure)
• Caffe Model Zoo: a repository of CNN models for analysis
  • Pretrained models for base and compressed versions of AlexNet are
    available in the Model Zoo
Sparsity Learning
Convolution = Matrix Multiply

IFM: 227x227x3, Filter: 11x11x3x96, Stride: 4 → OFM: 55x55x96

• IFM converted to a 363x3025 matrix
  • each filter looks at an 11x11x3 input volume, at 55 locations along W and H
• Weights converted to a 96x363 matrix
• OFM = Weights x IFM
• BLAS libraries used to implement the matrix multiply (GEMM)
  • MKL for CPU, cuBLAS for GPU
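The lowering above (commonly called im2col) can be sketched in NumPy; the dimensions follow the AlexNet CONV1 example on this slide. This is a naive reference implementation, not the optimized GEMM path Caffe actually uses.

```python
import numpy as np

def im2col(ifm, k, stride):
    """Unroll kxkxC patches of ifm (H, W, C) into columns of shape (k*k*C, L)."""
    H, W, C = ifm.shape
    out = (H - k) // stride + 1
    cols = np.empty((k * k * C, out * out))
    for i in range(out):
        for j in range(out):
            patch = ifm[i*stride:i*stride+k, j*stride:j*stride+k, :]
            cols[:, i * out + j] = patch.ravel()
    return cols

ifm = np.random.rand(227, 227, 3)
cols = im2col(ifm, k=11, stride=4)      # (363, 3025), matching the slide
weights = np.random.rand(96, 363)       # 96 filters of 11x11x3
ofm = weights @ cols                    # (96, 3025) -> reshape to 55x55x96
print(cols.shape, ofm.shape)
```

The 55 comes from (227 - 11) / 4 + 1, and 363 = 11 x 11 x 3; the single `weights @ cols` product is the GEMM that MKL or cuBLAS would perform.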
Sparsity Learning

• The weight matrix can be represented in sparse format for sparse networks
• Compressed Sparse Row (CSR) format: the matrix is converted to three arrays
  representing the non-zero values
  • Array A: the non-zero values
  • Array JA: the column index of each element in A
  • Array IA: the cumulative count of non-zero values in preceding rows
• Sparse representation saves memory and could result in efficient computation

• Wen-Wei: a new Caffe branch for sparse convolution
  • Represents convolutional-layer weights in CSR format
  • Uses sparse matrix multiply routines (CSRMM)
  • Weights in sparse format, IFM dense, output dense
  • MKL library for CPU, cuSPARSE for GPU
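The three CSR arrays can be inspected directly in SciPy, where A, JA and IA are named `data`, `indices` and `indptr`:

```python
import numpy as np
from scipy.sparse import csr_matrix

M = np.array([[0, 2, 0, 0],
              [3, 0, 0, 4],
              [0, 0, 0, 0],
              [0, 5, 0, 0]])
S = csr_matrix(M)
print(S.data)     # A  -> [2 3 4 5]
print(S.indices)  # JA -> [1 0 3 1]
print(S.indptr)   # IA -> [0 1 3 3 4]

x = np.ones(4)
print(S @ x)      # sparse matrix-vector product -> [2. 7. 0. 5.]
```

`indptr[i+1] - indptr[i]` gives the number of non-zeros in row i, which is how CSRMM/SpMV routines iterate over only the stored values.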


Sparsity Learning
• Initially gem5-gpu was planned as the simulation framework, but gem5 ended
  up being very slow due to the large size of deep neural networks.
• Analysis was instead performed by running Caffe and CUDA programs on
  native hardware.
• For CPU analysis, an AWS system with 2 Intel Xeon cores running @ 2.4 GHz
  was used.
• For GPU analysis, a dodeca system with an NVIDIA GeForce GTX 1080 GPU
  was used.
Sparsity Learning

• Deep Compression and SSL trained networks were used for analysis.
  Both showed similar trends.
• Memory savings obtained with the sparse representation:

  Layer   Memory Saving (x)
  CONV1   1.11
  CONV2   2.48
  CONV3   2.72
  CONV4   2.53
  CONV5   2.553

• The time taken for the multiplication was recorded. Conversion time to CSR
  format was not included, as weights are sparsified only once for a set of IFMs.
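As a rough sanity check, CSR memory saving can be estimated as a function of sparsity. The float32 values and int32 indices below are assumptions; the measured savings in the table also depend on layer shapes and implementation details.

```python
def csr_saving(rows: int, cols: int, sparsity: float) -> float:
    """Ratio of dense storage to CSR storage for a rows x cols matrix."""
    nnz = int(rows * cols * (1.0 - sparsity))
    dense = rows * cols * 4                      # float32 values
    sparse = nnz * 4 + nnz * 4 + (rows + 1) * 4  # A + JA + IA arrays
    return dense / sparse

for s in (0.6, 0.8, 0.9, 0.95):
    print(f"sparsity {s:.2f}: {csr_saving(384, 2304, s):.2f}x")
```

Since CSR stores roughly 8 bytes per non-zero against 4 bytes per dense element, the saving only exceeds 1x once sparsity passes about 50%, and grows roughly as 1 / (2 x density) beyond that.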


Sparsity Learning
CPU: CSRMM vs GEMM

• CSRMM is slower compared to GEMM.
• The overhead depends on the sparsity percentage.
Sparsity Learning
GPU: CSRMM vs GEMM

• The CSRMM overhead is greater on the GPU.
• GPU operations are faster compared to CPU.
Sparsity Learning
Fully Connected Layer (FC) Analysis

• Fully connected layers form the final layers of a typical CNN and are
  implemented as a matrix-vector multiply operation (GEMV).
• Caffe's internal data structures (blob) were modified to represent the
  weights of FC layers in sparse format.
• Sparse matrix-vector multiplication (SpMV) was used for the sparse
  computation.
• The Deep Compression model was used for analysis.

(Image taken from petewarden.com)


Sparsity Learning
FC Layer Analysis

• Speed-up of 3x observed for both CPU and GPU.



Sparsity Learning
• Custom C++ and CUDA programs were written to measure the time taken to
  execute only the matrix-multiplication routines.
• This allowed us to vary the sparsity of the weight matrix to find the
  break-even point where CSRMM performs faster than GEMM.
• The size of the weight matrix was chosen to equal that of the largest
  AlexNet CONV layer.
• The zeros were distributed randomly in the weight matrix.
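The break-even sweep can be sketched with SciPy standing in for the MKL/cuSPARSE C++ programs actually used; matrix sizes and the sparsity values below are illustrative, so absolute ratios will differ from the measured ones.

```python
import time
import numpy as np
from scipy.sparse import random as sparse_random

def bench(sparsity, m=384, k=2304, n=1024, reps=5):
    """Return GEMM time / CSRMM time; > 1 means CSRMM is faster."""
    W_sparse = sparse_random(m, k, density=1.0 - sparsity, format="csr")
    W_dense = W_sparse.toarray()
    X = np.random.rand(k, n)

    t0 = time.perf_counter()
    for _ in range(reps):
        W_dense @ X                    # GEMM
    t_gemm = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(reps):
        W_sparse @ X                   # CSRMM
    t_csrmm = time.perf_counter() - t0
    return t_gemm / t_csrmm

for s in (0.90, 0.95, 0.99):
    print(f"sparsity {s:.2f}: ratio {bench(s):.2f}")
```

Sweeping the sparsity and watching where the ratio crosses 1 reproduces the experiment's logic: dense GEMM is so well optimized that CSRMM only wins at very high sparsity.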
Sparsity Learning
Matrix Multiply Analysis

[Figure: Speedup of CSRMM over GEMM for random sparsity on CPU; sparsity
swept from 0.945 to 0.995, with speedup reaching about 2x at the highest
sparsities]

[Figure: Speedup of CSRMM over GEMM for random sparsity on GPU; sparsity
swept from 0.945 to 0.995, with speedup staying below about 0.6x throughout]
Sparsity Learning

GPU Memory Scaling Experiment


• Motivation:
• As Sparse Matrix representation occupies significantly
less space compared to dense representation, a larger
working set can fit in the Cache/Memory system in
the case of sparse.
• Implementation:
• As the “Weights” matrix was the one being passed in
the Sparse format, we increased its size to a larger
value.

Department of Computer Science and Engineering 53


Sparsity Learning
GPU Results (GEMM vs CSRMM)

GEMM vs CSRMM (Weight Matrix = 256 x 1200)
Format   Total Memory Used (MB)   Runtime (ns)
Dense    3377                     60473
Sparse   141                      207910

GEMM vs CSRMM (Weight Matrix = 25600 x 24000)
Format   Total Memory Used (MB)   Runtime (ns)
Dense    5933                     96334
Sparse   1947                     184496

While GEMM is still faster in both cases, the GEMM/CSRMM time ratio
increases from 0.29 to 0.52 as the weight-matrix dimensions are bumped.
Sparsity Learning

GPU: Sparse x Sparse

• IFM sparsity arises from the ReLU activation. Sparsity in the CONV layers
  of AlexNet:

  Layer   Sparsity Percentage
  CONV2   23.57
  CONV3   56.5
  CONV4   64.7
  CONV5   68.5

• The CUSP library was used in a custom program for the sparse x sparse
  multiply of IFM and weights.
• Speedup of 4x observed compared to the GEMM routine.
• Memory savings of 4.2x compared to GEMM.
• Could not yet scale to typical AlexNet dimensions; this is in progress.
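A sparse x sparse product analogous to the CUSP experiment can be sketched with SciPy CSR matrices; the sizes and densities below are illustrative stand-ins, not the experiment's actual configuration.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
# Sparse weights (pruned) and a ReLU-sparsified IFM, both in CSR format.
W = sparse_random(96, 363, density=0.3, format="csr", random_state=rng)
IFM = sparse_random(363, 3025, density=0.35, format="csr", random_state=rng)

OFM = W @ IFM            # sparse x sparse product, result is also sparse
print(OFM.shape)

# Memory comparison for the weight matrix: dense float64 vs CSR arrays.
dense_bytes = 96 * 363 * 8
sparse_bytes = W.data.nbytes + W.indices.nbytes + W.indptr.nbytes
print(f"weight memory ratio dense/sparse: {dense_bytes / sparse_bytes:.1f}x")
```

Because both operands skip their zeros, the work scales with the product of the two densities rather than the full matrix dimensions, which is where the reported 4x speedup comes from.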


Sparsity Learning

• Representing matrices in sparse format results in significant memory
  savings, as expected.
• We did not observe any practical computational benefit for convolutional
  layers using the library routines provided by MKL & cuSPARSE, on either
  CPU or GPU.
• Fully connected layers showed around 3x speedup for layers with high
  sparsity.
• For a large dataset & GPU memory, we might see a drop in convolutional
  runtime for the sparse representation.
• Sparse x sparse computation showed promising results and will be
  implemented in Caffe.


Large-scale kernel machines
