Luis Rico – luis.rico@redhat.com
AMTEGA - CDTIC – 4 April 2018 – Santiago de Compostela
New use cases for Ceph, beyond OpenStack
Agenda
•  Intro to Ceph
•  Ceph as the best unified storage for OpenStack
•  New use cases for Ceph
Free virtual training on Red Hat Ceph Storage:
https://red.ht/storage-testdrive
RED HAT CEPH STORAGE
OPEN SOFTWARE DEFINED STORAGE
•  Contributions from Intel, SanDisk, CERN, and Yahoo
•  Presenting Ceph Days in cities around the world and quarterly virtual Ceph Developer Summit events
•  Over 11M downloads in the last 12 months
•  Increased development velocity, authorship, and discussion have resulted in rapid feature expansion
[Chart: community activity metrics: 97 authors/mo, 2,453 commits/mo, 260 posters/mo vs. 33 authors/mo, 97 commits/mo, 138 posters/mo]
RED HAT CEPH STORAGE
Distributed, enterprise-grade object storage, proven at web scale
Open source, massively scalable, software-defined storage based on Ceph
Flexible, scale-out architecture on clustered standard hardware
Single, efficient, unified storage platform (object, block, file)
User-driven storage lifecycle management with 100% API coverage
S3 compatible object API
Designed for modern workloads like cloud infrastructure and data lakes
DIFFERENT KINDS OF STORAGE
BLOCK STORAGE
Physical storage media appears to computers as a series of sequential blocks of a uniform size.
FILE STORAGE
File systems allow users to organize the data stored in those blocks using hierarchical folders and files.
OBJECT STORAGE
Object stores distribute data algorithmically throughout a cluster of storage media, without a rigid structure.
RED HAT CEPH STORAGE
ARCHITECTURAL COMPONENTS
LIBRADOS
A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby)
RADOS
A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors
RBD
A reliable, fully distributed block device with cloud platform integration
RGW
A web services gateway for object storage, compatible with S3 and Swift
CEPHFS
A distributed file system with POSIX semantics and scale-out metadata
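To make the LIBRADOS layer concrete, here is a minimal sketch using the python-rados bindings; the config path, pool name, and object name are placeholders, and the pool is assumed to already exist.

```python
import rados

# Connect to the cluster using a local ceph.conf (placeholder path).
cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()

ioctx = cluster.open_ioctx('demo-pool')      # placeholder pool name
try:
    ioctx.write_full('hello-object', b'Hello, RADOS')   # store an object
    print(ioctx.read('hello-object'))                   # read it back
finally:
    ioctx.close()
    cluster.shutdown()
```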
BUSINESS BENEFITS
OPEN SOURCE
No proprietary lock-in, with a large commercial ecosystem and broad community
PEACE OF MIND
Over a decade of active development, proven in production and backed by Red Hat
LOWER COST
More economical than traditional NAS/SAN, particularly at petabyte scale
TECHNICAL BENEFITS
•  Massive scalability to support petabytes of data
•  Relies on no single point of failure, for maximum uptime
•  Self-manages and self-heals to reduce maintenance
•  Data distributed among servers and disks dynamically
DETAILED TECHNICAL ARCHITECTURE
PLACEMENT GROUPS
Placement Groups (PGs) are shards or fragments of a logical object pool that place objects as a
group into OSDs.
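Conceptually, an object lands in a PG by hashing its name and folding the hash into the pool's PG count; a toy sketch of that idea follows (Ceph's real mapping uses its own rjenkins hash and a stable-mod variant, and the names below are made up).

```python
import zlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Illustrative only: map an object name to one of the pool's PGs.
    Any client computes the same PG for the same name, with no lookup table."""
    return zlib.crc32(object_name.encode()) % pg_num

# With a hypothetical pool of 128 PGs:
print(object_to_pg('vm-disk-0001', 128))
```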
CRUSH OVERVIEW
CRUSH (Controlled Replication Under Scalable Hashing)
•  Controlled, Scalable, Decentralized Placement of Replicated Data.
•  The CRUSH algorithm determines how to store and retrieve data by computing data storage locations.
•  CRUSH requires a map of your cluster, and uses the CRUSH map to pseudo-randomly store and
retrieve data in OSDs with a uniform distribution of data across the cluster.
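To give a feel for what CRUSH does with that map, here is a toy sketch of deterministic pseudo-random placement: each choice depends only on the PG id, the candidate OSD, and the OSD's weight, so every client with the same map computes the same OSDs. This is a simplification under stated assumptions; the real algorithm walks a hierarchy of buckets and respects failure domains, which this ignores.

```python
import hashlib

# Hypothetical flat CRUSH map: OSD id -> weight (roughly proportional to capacity).
OSD_MAP = {0: 1.0, 1: 1.0, 2: 2.0, 3: 1.0, 4: 2.0}

def crush_like_placement(pg_id: int, replicas: int = 3):
    """Toy stand-in for CRUSH: score every OSD with a hash of (pg, osd),
    scale by weight, and keep the top scorers. Deterministic, so no
    central lookup service is needed."""
    def score(osd_id):
        digest = hashlib.sha256(f'{pg_id}:{osd_id}'.encode()).digest()
        draw = int.from_bytes(digest[:8], 'big') / 2**64
        return draw * OSD_MAP[osd_id]
    return sorted(OSD_MAP, key=score, reverse=True)[:replicas]

print(crush_like_placement(pg_id=42))   # e.g. an acting set of 3 OSD ids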
HOW DOES IT WORK?
WRITE to a replica-3 pool: the client uses CRUSH to find the placement group's primary OSD and sends the write there; the primary replicates it to the two secondary OSDs and acknowledges the client once all three copies are stored.
HOW DOES IT WORK?
READ from a replica-3 pool: the client computes the same placement group with CRUSH and reads the object directly from the primary OSD, with no central metadata lookup.
CORE PRODUCT FEATURES
FEATURES IN RED ARE SPECIFIC TO RED HAT CEPH STORAGE 3 (BASED ON LUMINOUS)
EFFICIENCY
•  Standard servers and disks
•  Erasure coding - reduced footprint
•  Thin provisioning
•  Traditional and containerized deployment, including CSDs
SCALABILITY
•  Multi-petabyte support
•  Hundreds of nodes
•  CRUSH algorithm – placement/rebalancing
•  No single point of failure
PERFORMANCE
•  Server-side journaling
•  BlueStore (updated tech preview)
APIs & PROTOCOLS
•  S3, Swift, Apache Hadoop S3A filesystem client
•  Cinder block storage
•  Native API protocols
•  NFS v3, v4
•  iSCSI
SECURITY
•  Integrated on-premises monitoring dashboard
•  RGW SSL support
•  Pool-level authentication
•  Active Directory, LDAP, Keystone v3
•  At-rest encryption with keys held on separate hosts
DATA SERVICES
•  Global clusters for S3/Swift storage
•  Disaster recovery for block and object storage
•  Snapshots, cloning, and copy-on-write
MONITORING CLUSTERS WITH MORE PRECISION
●  Red Hat Ceph Storage dashboard, based on upstream ‘cephmetrics’ project
●  New web interface adds ease of use and insight into Ceph cluster activity
●  14 dashboards to monitor health / troubleshoot issues
●  Detailed graphical view of usage data for cluster or components
TARGET USE CASES
•  Private Cloud: enterprise deployments growing for test & dev and production application deployments, particularly in the FSI, retail, and technology sectors.
•  Archive & Backup: object storage as a replacement for tape and expensive dedicated appliances; hybrid cloud compatibility is critical.
•  NFVi (new): OpenStack with Ceph is the dominant reference platform for next-generation telco networks, with global demand for Ceph both standalone and hyperconverged.
•  Enterprise Virtualization (new): legacy protocol support so legacy VM storage can be managed on the same platform as modern private cloud storage.
•  Big Data (new): object storage providing a common data lake for multiple analytics applications, for greater efficiency and better business insights.
CEPH FOR OPENSTACK
COMPLETE OPENSTACK STORAGE
•  Deeply integrated with modular architecture and components for ephemeral & persistent storage:
   Nova, Cinder, Manila, Glance, Keystone, Ceilometer, Swift
[Diagram: OpenStack APIs (Keystone, Swift, Glance, Cinder, Nova, Manila) backed by Red Hat Ceph Storage through the hypervisor’s librbd, the Ceph Object Gateway, and CephFS]
ADVANTAGES FOR OPENSTACK USERS
•  Instantaneous booting of one or hundreds of VMs (see the RBD clone sketch below)
•  Instant backups via seamless data migration between Glance, Cinder, and Nova
•  Multi-site replication for disaster recovery or archiving
[Diagram: VMs on a hypervisor backed by Red Hat Ceph Storage]
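The "instantaneous booting" point comes from RBD's copy-on-write clones: a Glance image stored in RBD is snapshotted once and then cloned for each new VM disk instead of being copied. A minimal sketch with the python rbd bindings follows; the pool, image, and snapshot names are hypothetical, and the parent image is assumed to have the layering feature enabled.

```python
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')          # hypothetical pool holding a golden image

# Snapshot the golden image and protect the snapshot so it can be cloned.
# (Cloning requires the parent image to have the layering feature enabled.)
with rbd.Image(ioctx, 'golden-image') as img:
    img.create_snap('base')
    img.protect_snap('base')

# Copy-on-write clone: the new VM disk is usable immediately,
# without copying the image data.
rbd.RBD().clone(ioctx, 'golden-image', 'base', ioctx, 'vm-0001-disk')

ioctx.close()
cluster.shutdown()
```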
OVERWHELMINGLY PREFERRED FOR OPENSTACK
[Chart: Ceph adoption among OpenStack deployments]
SOURCE: OpenStack User Survey, October 2016
SPECIAL INTEGRATION WITH RED HAT
OPENSTACK PLATFORM DIRECTOR
•  Automated object and block deployment
•  Automated upgrades from Red Hat Ceph Storage 1.3
•  Support for existing Ceph Clusters
•  OpenStack Manila file deployment as composable controller service via
integrated CephFS driver
•  Co-location of Red Hat OpenStack Platform and Red Hat Ceph Storage
PRODUBAN DELIVERS MODERN CLOUD SERVICES
CHALLENGE:
Produban wanted to create a private cloud platform to provide cloud services across Grupo Santander’s businesses, aiming to increase its agility and reduce costs.
RESULTS:
•  Created a reliable, production-ready, and controlled IaaS environment while reducing Produban’s technology footprint and costs
•  Built a standardized and efficient IaaS environment with consistent management and deployment across its hybrid cloud services
•  Gained a single, efficient platform to support the demanding storage needs of its OpenStack-based cloud
•  Increased agility and reduced time-to-market for different services, including big data analytics
NEW USE CASES FOR CEPH
OBJECT STORAGE FOCUS
RGW, Ceph’s object storage interface:
•  Support for authentication using Active Directory, LDAP & OpenStack Keystone v3
•  Greater compatibility with the Amazon S3 and OpenStack Swift object storage APIs
•  AWS v4 signatures, object versioning, bulk deletes
•  NFS gateway for bulk import and export of object data
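Because RGW speaks the S3 API (including AWS v4 signatures), standard S3 tooling works unchanged. A minimal sketch with boto3, where the endpoint URL, credentials, and bucket name are placeholders for an RGW user:

```python
import boto3

# Point the standard AWS SDK at a Ceph RGW endpoint (placeholder values).
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='RGW_ACCESS_KEY',
    aws_secret_access_key='RGW_SECRET_KEY',
)

s3.create_bucket(Bucket='demo-bucket')
s3.put_object(Bucket='demo-bucket', Key='hello.txt', Body=b'hello from RGW')
print(s3.get_object(Bucket='demo-bucket', Key='hello.txt')['Body'].read())
```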
MULTISITE CAPABILITIES
Global object storage clusters with a single namespace
•  Enables deployment of clusters across multiple geographic locations
•  Clusters synchronize, allowing users to read from or write to the closest one (see the sketch below)
Multi-site replication for block devices
•  Replicates virtual block devices across regions for disaster recovery and archival
[Diagram: two synchronized storage clusters, US-EAST and US-WEST]
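In an RGW multi-site zonegroup the same buckets are visible through every zone's endpoint, so an application simply talks to whichever endpoint is closest. A hedged sketch, assuming a bucket that already exists in the zonegroup; both endpoints and the credentials are made up:

```python
import boto3

def rgw_client(endpoint):
    # The same RGW user is valid in every zone of the zonegroup
    # (endpoint URLs and keys here are placeholders).
    return boto3.client('s3', endpoint_url=endpoint,
                        aws_access_key_id='RGW_ACCESS_KEY',
                        aws_secret_access_key='RGW_SECRET_KEY')

east = rgw_client('http://rgw-us-east.example.com:8080')
west = rgw_client('http://rgw-us-west.example.com:8080')

# Write through the nearest zone; once replication catches up, the same
# object is readable through the other zone's endpoint.
east.put_object(Bucket='demo-bucket', Key='report.csv', Body=b'a,b,c\n1,2,3\n')
print(west.get_object(Bucket='demo-bucket', Key='report.csv')['Body'].read())
```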
INSURANCE COMPANY IN SPAIN
CHALLENGE:
Wanted to replace an existing document management solution based on IBM technology, which was very expensive to maintain, with a new solution able to scale in a more cost-effective way.
RESULTS:
•  With a leading global system integrator, delivered a new document management system based on open source components
•  Reduced annual costs by 80%
•  The storage platform is based on Ceph as object storage
•  Used multi-site active-active capability for high availability and workload distribution
COMPATIBILITY WITH HADOOP S3A FILESYSTEM CLIENT
[Diagram: Hadoop S3A clients and data ingest reaching RADOS object and file data through the RGW S3 API and RGW NFS]
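With the S3A filesystem client, Hadoop and Spark jobs can read and write RGW buckets directly over s3a:// URLs. A minimal PySpark sketch, assuming the hadoop-aws/S3A jars are on the classpath; the endpoint, keys, and bucket paths are placeholders:

```python
from pyspark.sql import SparkSession

# Point Hadoop's S3A connector at the Ceph RGW endpoint instead of AWS.
spark = (SparkSession.builder
         .appName("ceph-s3a-demo")
         .config("spark.hadoop.fs.s3a.endpoint", "http://rgw.example.com:8080")
         .config("spark.hadoop.fs.s3a.access.key", "RGW_ACCESS_KEY")
         .config("spark.hadoop.fs.s3a.secret.key", "RGW_SECRET_KEY")
         .config("spark.hadoop.fs.s3a.path.style.access", "true")
         .getOrCreate())

# Read a dataset straight out of the Ceph-backed data lake and write results back.
events = spark.read.json("s3a://datalake/events/")
events.groupBy("event_type").count().write.parquet("s3a://datalake/reports/by-type/")
```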
ELASTIC COMPUTE AND STORAGE FOR BIG DATA
Analytics vendors focus on analytics... Red Hat on infrastructure.
[Diagram: analytics vendors provide the analytics software; Red Hat provides the infrastructure software: an OpenStack or OpenShift compute pool on top of a shared data lake on Ceph object storage]
plus.google.com/+RedHat
linkedin.com/company/red-hat
youtube.com/user/RedHatVideos
facebook.com/redhatinc
twitter.com/RedHat
THANK YOU
