Seminar held on Monday, November 5, 2018
What is the DB virtualization technology that builds 1,000 databases in 10 minutes?
~Database as Code in DevOps~
Presentation materials from the seminar.
"What is DevOps"
Adam Bowen, Office of the CTO, Delphix
What is DevOps, and what should database environments look like in DevOps? This talk explains DevOps and database best practices, drawing on DevOps case studies from Facebook, eBay, and Walmart.
[db tech showcase Tokyo 2015] C16:Oracle Disaster Recovery at New Zealand sto...Insight Technology, Inc.
This document provides an agenda and introduction for a presentation on disaster recovery using physical replication technology. The presentation will include an overview of Dbvisit Standby software, which enables disaster recovery for Oracle Standard Edition databases. It will also present a case study of how the New Zealand Stock Exchange uses Dbvisit Standby to ensure continuous availability of critical trading systems across two data centers.
Achieving compute and storage independence for data-driven workloadsAlluxio, Inc.
Alluxio provides a unified interface to access data across multiple storage systems, allowing compute and storage to scale independently for data-driven applications. It uses a virtual unified file system with a global namespace and server-side API translation to abstract data location and access. Alluxio intelligently manages data placement across memory, SSDs and HDDs using multi-tier caching for local performance on remote data. This allows flexible deployment of compute like Spark on any cloud while keeping data fully controlled on-premises. Alluxio is seeing wide adoption with many large production deployments handling thousands of nodes. Upcoming features include POSIX API support and preview of version 2.0.
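A minimal sketch (not from the document) of the POSIX-style access the summary describes as upcoming: once an Alluxio FUSE mount is in place (the mount path and dataset name here are assumptions), data in any connected under-store can be read with ordinary file I/O, regardless of where the bytes physically live.
```python
# Hypothetical alluxio-fuse mount point; path and file name are assumptions.
ALLUXIO_MOUNT = "/mnt/alluxio"

# Plain file I/O works the same whether the underlying data sits on S3, HDFS,
# or local disk, because Alluxio presents a single global namespace.
with open(f"{ALLUXIO_MOUNT}/datasets/events/part-00000.csv") as f:
    header = f.readline().rstrip()
    print(header)
```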
[db tech showcase Tokyo 2015] D25:The difference between logical and physical...Insight Technology, Inc.
This document discusses the differences between physical and logical database replication in Oracle. It begins with introductions and an overview of Dbvisit Software. The main sections summarize physical replication, logical replication, and compare the two approaches. Physical replication uses complete redo blocks to keep the target database identical to the source. Logical replication mines redo logs and converts the information to SQL statements to replicate the data. The document outlines the advantages and disadvantages of each approach and how they work at a technical level.
Our own Sean Doherty was in Madrid this week, presenting at the Red Hat Partner summit on the rise of big data and what it means for the future of the RDBMS in the enterprise. Check out his presentation!
[db tech showcase OSS 2017] Azure Database for MySQL / PostgreSQL by 日本マイクロソフ...Insight Technology, Inc.
This document announces new Azure database services for MySQL and PostgreSQL. The services provide fully managed database instances with high availability, scalability, and compatibility with existing tools. The services are available in public preview across 11 Azure regions, with basic, standard, and premium performance tiers offering different levels of IOPS, memory, and storage capacity. Migration from on-premises databases to the new managed database services can be done using common tools like mysqldump and pg_dump.
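As a rough illustration of the migration path mentioned above, the sketch below drives mysqldump and the mysql client from Python to copy an on-premises schema into a managed Azure endpoint. The host names, database name, and credentials are placeholders, not values from the document.
```python
import subprocess

DUMP_FILE = "appdb.sql"

# Export the schema and data from the on-premises server.
with open(DUMP_FILE, "w") as out:
    subprocess.run(
        ["mysqldump", "--host=onprem-db", "--user=app", "--password=secret",
         "--databases", "appdb"],
        stdout=out, check=True,
    )

# Import the dump into the managed Azure Database for MySQL endpoint.
with open(DUMP_FILE) as dump:
    subprocess.run(
        ["mysql", "--host=myserver.mysql.database.azure.com",
         "--user=app@myserver", "--password=secret"],
        stdin=dump, check=True,
    )
```
The same two-step approach applies to PostgreSQL with pg_dump and psql.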
Webinar: Don't believe the hype, you don't need dedicated storage for VDI NetApp
This webinar covers how the combination of SolidFire and Citrix XenDesktop enables customers to confidently support the storage demands of a virtual desktop environment in a multi-tenant or multi-application environment.
The document is a report from Yamada Takaya about his attendance at the Apache Big Data North America 2017 conference in Miami from May 16-18. It provides an overview of the conference and various sessions, with a focus on stream processing engines and Apache Beam. Key highlights discussed include the growing support for core streaming capabilities across engines and the potential of Beam to further integrate streaming solutions.
Alluxio provides a virtual unified file system that allows for unified access and accelerated performance of data across multiple storage systems and tiers. It addresses challenges of separating compute and storage in modern data architectures by providing a global namespace, server-side API translation between storage systems, and intelligent multi-tiering of data across RAM, SSDs and HDDs. Alluxio has been deployed in over 100 production environments across financial services, retail, telecom and other industries to accelerate analytics, machine learning and other workloads.
- The document discusses Actifio's solution to address three key IT priorities - agility, resiliency, and cloud - by virtualizing copy data management to break the bond between application data and physical infrastructure silos.
- Actifio's approach simplifies infrastructure, reduces costs by eliminating multiple third-party tools and vendors, and improves business agility by providing instant access to data.
- The solution has been adopted by hundreds of enterprise customers across over 35 countries, receiving recognition from analysts, press and investors.
Big Data Taiwan 2014 Keynote 4: Monetize Enterprise Data – Classic Big Data Applications and Initiatives in Taiwan (Etu Solution)
Speaker: 陳育杰, Senior Director, Etu
Over the past two years, enterprise Big Data application architectures have gradually taken shape. Across different industries we have seen Hadoop being adopted to solve different problems, and the IT architectures behind these deployments share common traits. This keynote explores concrete enterprise applications of Big Data / Hadoop through those common architectures.
This document discusses building applications with DataStax Enterprise (DSE) using the Killr video catalog application as an example. It shows how to make video data searchable by adding tags to videos and storing them in a Cassandra table with the tag as the primary key. It then demonstrates how to use DSE Search to enable searching on video titles, descriptions, and other fields without adding other infrastructure components. The document highlights improvements to DSE Search in the upcoming 5.1 release, including upgrading Solr and allowing core management via CQL.
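A minimal sketch (not taken from the deck) of the tag-keyed table idea, using the DataStax Python driver; the keyspace, table, and contact point are assumptions for illustration.
```python
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])            # assumed local DSE/Cassandra contact point
session = cluster.connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS killrvideo
    WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1}
""")
session.set_keyspace("killrvideo")

# Tag is the partition key, so every video sharing a tag is fetched in one query.
session.execute("""
    CREATE TABLE IF NOT EXISTS videos_by_tag (
        tag text,
        video_id uuid,
        title text,
        PRIMARY KEY (tag, video_id)
    )
""")

for row in session.execute("SELECT title FROM videos_by_tag WHERE tag = %s", ["cassandra"]):
    print(row.title)
```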
[db tech showcase Tokyo 2016] E22: Getting real time Oracle data into Kafka a...Insight Technology, Inc.
Kafka is quickly gaining momentum as a popular, very fast messaging platform that excels at integrating different types of data, making that data available as a real-time stream for consumption by enterprise users. There is a wealth of hidden data in our Oracle databases. How can we turn the database inside out and make this data available in real time to Kafka alongside the other data sources in our enterprise? This paper presents use cases for Oracle real-time data streaming, gives an introduction to Kafka, and shows how to use Oracle logical replication to get Oracle data into Kafka in real time. It includes a live, real-time demo streaming data from Oracle into Kafka.
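As a hedged sketch of the consuming side (not the demo from the paper), the snippet below reads replicated change events from a Kafka topic with the kafka-python client; the broker address, topic name, and JSON payload shape are assumptions.
```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "oracle.orders.changes",                 # hypothetical topic fed by logical replication
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)

# Each message is assumed to carry one row-level change captured from the redo stream.
for message in consumer:
    change = message.value
    print(change.get("operation"), change.get("table"), change.get("row"))
```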
The flash market started out monolithically. Flash was a single media type (high performance, high endurance SLC flash). Flash systems also had a single purpose of accelerating the response time of high-end databases. But now there are several flash options. Users can choose between high performance flash or highly dense, medium performance flash systems. At the same time, high capacity hard disk drives are making a case to be the archival storage medium of choice. How does an IT professional choose?
Highlights
- Complete: combining compute, interconnect, storage and system software
- Modular and Extensible: match the right combination of components and configurations to meet your workload
- Integrated: racked and tested in IBM manufacturing to reduce time to compute
Copy Data Management & Storage Efficiency - Ravi Namboori
In this PPT Ravi Namboori explains how copy data management practices can bring about changes in our workplaces. Creating more room to operate is one of its main benefits, along with improved storage efficiency.
OpenStack at the speed of business with SolidFire & Red Hat NetApp
When it comes to OpenStack® and the enterprise, it’s critical that you can rapidly deploy a plug-and-play solution that delivers mixed workload capabilities on a shared infrastructure. Join Red Hat and SolidFire to see how Agile Infrastructure for OpenStack can help your cloud move at the speed of business.
Developing Software for Persistent Memory / Willhalm Thomas (Intel)Ontico
NVDIMMs provide applications the ability to access in-memory data that will survive reboots. This is a huge paradigm shift happening in the industry. Intel has announced new instructions to support persistence. In this presentation, we educate developers on how to take advantage of this new kind of persistent memory tier. Using simple practical examples [3] [4], we discuss how to identify which data structures are suited for this new memory tier, and which are not. We provide developers a systematic methodology for identifying how their applications can be architected to take advantage of persistence in the memory tier. Furthermore, we will provide basic programming examples for persistent memory and present common pitfalls.
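A loose, minimal sketch of the persistence idea from Python rather than the talk's native examples: real NVDIMM code would use PMDK and the new cache-flush instructions, but memory-mapping a file on a hypothetical DAX-mounted pmem filesystem and flushing it shows the basic store-then-persist pattern.
```python
import mmap
import os

PMEM_PATH = "/mnt/pmem0/example.dat"       # assumed DAX-mounted pmem filesystem path

fd = os.open(PMEM_PATH, os.O_CREAT | os.O_RDWR)
os.ftruncate(fd, 4096)                     # reserve one page for the mapped region

with mmap.mmap(fd, 4096) as buf:
    buf[0:5] = b"hello"                    # update the in-memory data structure
    buf.flush()                            # make the write durable before relying on it
os.close(fd)
```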
Architecting Virtualized Infrastructure for Big DataRichard McDougall
This document discusses architecting virtualized infrastructure for big data. It notes that data is growing exponentially and that the value of data now exceeds hardware costs. It advocates using virtualization to simplify and optimize big data infrastructure, enabling flexible provisioning of workloads like Hadoop, SQL, and NoSQL clusters on a unified analytics cloud platform. This platform leverages both shared and local storage to optimize performance while reducing costs.
View this presentation to gain insight into optimizing Postgres and realizing savings in your data management. Visit EnterpriseDB's Resources > Webcasts section to view the presentation by Jay Barrows, VP of Field Operations.
During this 45-minute presentation, Jay Barrows, VP of Field Operations, will provide a business review of how, where and why businesses are leveraging PostgreSQL. In addition, he will go over the primary pains and business drivers shaping the data management landscape, such as significant cost pressures combined with recent improvements to open source database options. Oracle migration is often the most powerful cost reduction opportunity, provided you understand the migration risks and have a clear migration game plan.
Jay will discuss several use cases that highlight how enterprise customers are applying lessons learned from adopting other OSS products to bring Postgres to the most expensive and mission-critical part of their IT stack: the database. By doing so they are driving TCO down in very meaningful ways while sacrificing nothing in terms of performance, scalability, security or reliability. Many businesses already leverage OSS in much lower-cost parts of the IT stack (OS, middleware).
This presentation will be beneficial to decision-makers interested in enhancing their data management with PostgreSQL.
Webinar: End NAS Sprawl - Gain Control Over Unstructured DataStorage Switzerland
The key to ending NAS Sprawl is to fix the file system so it can offer cost effective, scalable, high performance storage. In this webinar Storage Switzerland Lead Analyst George Crump, Quantum VP of Global Marketing Molly Rector, and the Quantum StorNext Solution Marketing Senior Director Dave Frederick discuss the challenges facing the typical scale-out storage environment and what IT professionals should be looking for in solutions to eliminate NAS Sprawl once and for all.
Continuous integration, continuous development & continuous delivery? What does this mean? How does Kangaroot see this process & how can they help & support IT organisations?
Several scenarios are possible, which we would like to show you.
This document discusses transforming a traditional data center to a software-defined data center by starting with software-defined storage. It recommends implementing a scale-out software-defined storage solution like SUSE Enterprise Storage powered by Ceph to address growing storage needs that outpace budgets. SUSE is well-suited as a partner because of its expertise in storage, reference architectures, and ability to support current infrastructure while enabling future transformation to a software-defined model. The presentation provides guidance on evaluating requirements, architecting a solution, and implementing storage-first to overcome objections typically associated with traditional storage.
Webinar: Three Reasons Why NAS is No Good for AI and Machine LearningStorage Switzerland
Artificial Intelligence (AI) and Machine Learning (ML) are becoming mainstream initiatives at many organizations. Data is at the heart of AI and ML. Immediate access to large data sets is pivotal to successful ML outcomes. Without data, there is no learning. The goal of AI and ML is to try to simulate human thinking and understanding. AI and ML initiatives cannot however be realized unless the data processing layer has immediate access to, and a constant supply of, data.
The problem is that NAS solutions, often those designed for HPC environments, are what most organizations try to leverage as their AI/ML storage architecture. Legacy storage systems like NAS cannot support AI and ML workloads, because they were architected when spinning disk and slower networking technologies were the industry standard.
Join Storage Switzerland and WekaIO for our on demand webinar to learn the three reasons why NAS is no good for AI and ML:
* NAS wasn’t architected to leverage today’s flash technology and can’t keep pace with the I/O demands, leaving GPUs starved for data
* NAS has no or very rudimentary Cloud Integration. Tiering to the cloud can play an integral role in AI and ML workloads
* NAS data protection schemes are expensive given the amount of data required to feed an AI/ML environment
20100806 cloudera 10 hadoopable problems webinarCloudera, Inc.
Jeff Hammerbacher introduced 10 common problems that are suitable for solving with Hadoop. These include modeling true risk, customer churn analysis, recommendation engines, ad targeting, point of sale transaction analysis, analyzing network data to predict failures, threat analysis, trade surveillance, search quality, and using Hadoop as a data sandbox. Many of these problems involve analyzing large and complex datasets from multiple sources to discover patterns and relationships.
Need For Speed- Using Flash Storage to optimise performance and reduce costs-...NetAppUK
Flash Storage technologies are opening up a wealth of new opportunities for improving the optimisation of applications, data and storage, as well as reducing costs. In this session, Peter Mason, NetApp Consulting Systems Engineer, shares his experiences and discusses the use and impact of different Flash technologies.
This document provides an introduction to big data, including what it is, sources of big data, and how it is used. It discusses key concepts like volume, velocity, variety, and veracity of big data. It also describes the Hadoop ecosystem for distributed storage and processing of large datasets, including components like HDFS, MapReduce, Hive, HBase and ecosystem players like Cloudera and Hortonworks. The document outlines common big data use cases and how organizations are deploying Hadoop solutions in both on-premise and cloud environments.
“TODAY, COMPANIES ACROSS ALL INDUSTRIES ARE BECOMING SOFTWARE COMPANIES.”
The familiar refrain is certainly true of the new-school, born-in-the-cloud set. But it can also apply to traditional enterprises that are reinventing themselves by coupling DevOps excellence with intelligent DataOps.
DataOps in Financial Services: enable higher-quality testing + lower levels ...Ugo Pollio
In this session, you will learn how banks and financial services all over the world are using DataOps tools to:
- Comply with GDPR with fully masked test data
- Achieve faster environment refreshes
- Shift Left with production-like test data
- Reduce infrastructure requirements
- Enable continuous integration and continuous delivery
“The next release is probably going to be late”... these are words that every AppDev leader has uttered, and often.
Development teams burdened with complex release requirements often run over schedule and over budget. One of the biggest offenders? Data. Your teams are cutting corners, sacrificing quality and delivering projects late because they don’t have a good solution for managing data.
You’re one of many AppDev leaders that face these challenges. You need a new approach to manage, secure and provision your data in order to stay relevant. You need DataOps.
Confessions of the AppDev VP Webinar (Delphix)Sam Molmud
This document appears to be a presentation about challenges faced by application development VPs and how the Delphix Dynamic Data Platform addresses them. It discusses issues like long wait times for environments, testing being pushed too far right, and competing priorities and resource constraints. The Delphix platform allows automation of data for application development to provide productive developers, less worry for VPs, and ensuring the right resources are available. It enables continuous integration/delivery workflows with automated data deployment. Customers have seen benefits like significantly reduced migration times to cloud environments and increased developer productivity through rapid provisioning of virtual databases.
The document discusses how to plug the data gap in DevOps by automating database deployments. It notes that while companies have spent over $100 billion on accelerators like Agile and DevOps, 84% fail at digital transformation due to database bottlenecks. The Delphix Data Platform provisions data in minutes, eliminating the process of copying data across systems. When combined with Datical, it automates deployment of database changes, integrates with existing tools, and validates changes are deployed alongside application changes. This solution speeds development, improves the user experience, reduces risk, and increases quality and time to market.
As companies have adopted faster development methodologies a new constraint has emerged in the journey to digital transformation: data. Data has long been the neglected discipline, the weakest link in the tool chain, with provisioning times still counted in days, weeks, or even months. In addition, most companies are still using decades-old processes to manage and deploy database changes, further anchoring development teams.
Data Agility for Enterprise DevOps AdoptionDelphix
Most organizations start their DevOps journey by automating the flow of application code in their delivery pipeline and improving the speed of provisioning production-like environments. These competencies, while critical to increasing release velocity, fail to address a key element in the software development lifecycle—data.
Ensuring that the right data is securely provisioned to the right environments at the right time is often addressed last, and not very effectively. This is a problem. Organizations can’t achieve a state of Continuous Integration and Continuous Delivery (CI/CD) without first automating data delivery.
This document discusses the role of database administrators (DBAs) in DevOps environments. It begins with an introduction to DevOps, emphasizing collaboration between developers and IT professionals. It then explores how DBAs are impacted, noting both opportunities for DBAs to influence decisions and embrace automation, as well as risks of being seen as roadblocks. The document provides overviews of various DevOps practices and tools that DBAs can learn, such as configuration management, continuous delivery, and GitHub. It argues that DBAs should update their skills while automating some traditional tasks, and embrace techniques like data virtualization, snapshots, and DataOps to remove databases as roadblocks to DevOps goals.
Kellyn Pot’Vin-Gorman presents on empowering agile development with containers. As data increases, traditional methods of database provisioning are no longer sustainable for agile development. The document proposes virtualizing databases to create virtual database copies that can be provisioned quickly. It also suggests containerizing databases into "data pods" that package related environments together for easier management and portability. This allows development, testing, and production environments to be quickly provisioned in the cloud. The solution aims to remove "data gravity" that slows agile development by virtualizing and containerizing databases into portable data pods.
Scale Continuous Deployment to Production with DeployHub and CloudBeesDevOps.com
Moving from a simple Jenkins CI workflow to Continuous Delivery requires a focus on Continuous Deployment. Join us for a discussion on how to integrate DeployHub, an open source application release automation solution, into your CloudBees pipeline to support automated deployments across dev, test and production. You will see how to create a Continuous Feedback loop, track change request and support rollback and version jumping all orchestrated via the CloudBees platform. Maturing your CD process to support continuous deployment using ARA has always been possible, but extremely expensive. DeployHub OSS solves the budget problem, integrated into CloudBees - and it is agentless for fast easy implementation.
The document discusses how the role of the database administrator (DBA) is evolving from a database-centric role to a DevOps and DataOps focused role. It notes that data is a source of friction for development teams due to "data gravity", but that virtualizing databases and creating "data pods" allows DBAs to remove this friction and enable self-service access to development data. This evolution is necessary for DBAs and organizations to support modern practices like DevOps in a world where data and development cycles are constantly increasing.
451 Research: Data Is the Key to Friction in DevOpsDelphix
- The document discusses how data friction impacts DevOps initiatives and the benefits of using Delphix to remove data friction.
- It provides an overview of 451 Research findings that most organizations deploy code changes daily and have large, complex application changes. This puts pressure on development teams to access production-like data for testing.
- Choice Hotels' journey is presented as a case study where they implemented Delphix to automate provisioning of test databases from production data. This allowed developers faster access to fresh data for testing and removed bottlenecks in their testing cycles.
- The key benefits of Delphix are that it provides instant access to production-like data for various teams while ensuring data is secure and compliant through
This document discusses ServiceNow's use of MariaDB as its database platform. It summarizes ServiceNow's evolution from a single-tenant to multi-tenant to multi-instance architecture, which provides dedicated databases for each customer. It highlights key metrics like supporting over 150 million users and 50,000 instances. It also notes benefits of MariaDB like its stability, support, and protection as an open source project.
The document discusses three companies - Orasi, Delphix, and Skytap - that provide services related to application testing, data management, and environments. Orasi provides testing tools and services to help with quality assurance. Delphix offers a data management platform that provides data services and virtual copies of production data for development and testing environments. Skytap provides cloud-based virtual testing environments that allow for rapid deployment and provisioning. The document discusses how these three companies can help organizations accelerate application delivery through more efficient testing, data management, and environment provisioning.
Enterprise DevOps and the Modern Mainframe Webcast PresentationCompuware
Compuware and CloudBees demonstrate how you can apply modern DevOps practices to your mainframe applications using Compuware ISPW and Topaz for Total Test with CloudBees Jenkins. Compuware Product Manager Steve Kansa and CloudBees DevOps Evangelist Brian Dawson will:
- Position the mainframe as part of your DevOps and CI/CD journey
- Explain how Jenkins automates mainframe source code management and testing
- Demo a CI/CD workflow on a COBOL application
Watch the full presentation on YouTube: https://www.youtube.com/watch?v=x4MWrPy3bKM.
The document discusses challenges with moving databases to the cloud and proposes a solution using data virtualization. It summarizes that virtualizing databases with tools like Delphix and DBVisit allows for instant provisioning of development environments without physical copies. Databases are packaged into "data pods" that can be easily replicated and kept in sync. This streamlines cloud migrations by removing bottlenecks around copying and moving large amounts of database data.
1. The document discusses DevOps and hybrid cloud, with DevOps being an approach combining culture, processes, and technologies to continuously deliver applications and innovation.
2. APIs are key to hybrid cloud and DevOps, allowing components and services to be developed and reused across teams and cloud environments.
3. IBM recommends organizations build a common toolchain including tools for development, testing, deployment, and monitoring to facilitate DevOps practices and hybrid cloud deployments.
SplunkLive! London 2017 - DevOps Powered by SplunkSplunk
DevOps is powering the computing environments of tomorrow. When properly configured, the Splunk platform allows us to gain real-time visibility into the velocity, quality, and business impact of DevOps-driven application delivery across all roles, departments, process, and systems. Splunk can be used by DevOps practitioners to provide continuous integration/deployment and the real-time feedback to help the organisation with their operational intelligence. Join us for an exciting talk about Splunk’s current approach to DevOps, and for examples of how Splunk is being used by customers today to transform DevOps initiatives.
Some might think Docker is for developers only, but this is not really the case. Docker is here to stay, and we will only see more of it in the future.
In this session, learn what Docker is and how it works. This session will cover core areas such as volumes, and will also step it up with a few tips and tricks to help you get the most out of your Docker environment. The session will dive into a few examples of how to create a database environment within just a few minutes - perfect for testing, development, and possibly even production systems.
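A minimal sketch of the "database environment in minutes" idea (not the session's own demo), using the Docker SDK for Python; the image tag, password, and port mapping are assumptions.
```python
import docker

client = docker.from_env()

# Launch a throwaway PostgreSQL container for testing or development.
db = client.containers.run(
    "postgres:16",
    name="scratch-db",
    environment={"POSTGRES_PASSWORD": "devonly"},
    ports={"5432/tcp": 5433},              # host port 5433 -> container port 5432
    detach=True,
)

print(db.status)
# ... run tests against localhost:5433, then tear the environment down:
db.stop()
db.remove()
```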
Machine Learning explained with Examples
Everybody is talking about machine learning. What is it actually and how can I use it?
In this presentation we will see some examples of solving real-life use cases using machine learning. We will define tasks and see how each task can be addressed using machine learning.
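As one concrete (and hypothetical, not from the talk) example of framing a task for machine learning, the sketch below treats a labelled dataset as a supervised classification problem with scikit-learn.
```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Task: given measurements, predict the class label; the bundled iris data
# stands in for a real-life dataset.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)                # learn the task from labelled examples

print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```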
SQL Server 2017 added support for Linux, and building on that it became possible to run SQL Server in Docker containers and to build high-availability configurations on Kubernetes. SQL Server 2019, now approaching release, is slated to offer a Big Data Cluster feature built on Kubernetes, further expanding the range of container use cases.
This session provides the foundational knowledge you need to start working with SQL Server containers, along with procedures and samples for trying them out yourself.
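A minimal sketch (not from the session materials) of checking that a containerised SQL Server instance is reachable from Python with pyodbc; the server address, port, and SA password are assumptions.
```python
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=localhost,1433;"               # assumed port published by the container
    "UID=sa;PWD=YourStrong!Passw0rd;"
    "TrustServerCertificate=yes;"
)

cursor = conn.cursor()
cursor.execute("SELECT @@VERSION")         # confirms the containerised instance answers
print(cursor.fetchone()[0])
conn.close()
```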
Dev Dives: System-to-system integration with UiPath API WorkflowsUiPathCommunity
Join the next Dev Dives webinar on May 29 for a first contact with UiPath API Workflows, a powerful tool purpose-built for API integration and data manipulation!
This session will guide you through the technical aspects of automating communication between applications, systems and data sources using API workflows.
📕 We'll delve into:
- How this feature delivers API integration as a first-party concept of the UiPath Platform.
- How to design, implement, and debug API workflows to integrate with your existing systems seamlessly and securely.
- How to optimize your API integrations with runtime built for speed and scalability.
This session is ideal for developers looking to solve API integration use cases with the power of the UiPath Platform.
👨🏫 Speakers:
Gunter De Souter, Sr. Director, Product Manager @UiPath
Ramsay Grove, Product Manager @UiPath
This session streamed live on May 29, 2025, 16:00 CET.
Check out all our upcoming UiPath Dev Dives sessions:
👉 https://community.uipath.com/dev-dives-automation-developer-2025/
Introducing the OSA 3200 SP and OSA 3250 ePRCAdtran
Adtran's latest Oscilloquartz solutions make optical pumping cesium timing more accessible than ever. Discover how the new OSA 3200 SP and OSA 3250 ePRC deliver superior stability, simplified deployment and lower total cost of ownership. Built on a shared platform and engineered for scalable, future-ready networks, these models are ideal for telecom, defense, metrology and more.
Microsoft Build 2025 takeaways in one presentationDigitalmara
Microsoft Build 2025 introduced significant updates. Everything revolves around AI. DigitalMara analyzed these announcements:
• AI enhancements for Windows 11
By embedding AI capabilities directly into the OS, Microsoft is lowering the barrier for users to benefit from intelligent automation without requiring third-party tools. It's a practical step toward improving user experience, such as streamlining workflows and enhancing productivity. However, attention should be paid to data privacy, user control, and transparency of AI behavior. The implementation policy should be clear and ethical.
• GitHub Copilot coding agent
The introduction of coding agents is a meaningful step in everyday AI assistance. However, it still brings challenges. Some people compare agents with junior developers. They noted that while the agent can handle certain tasks, it often requires supervision and can introduce new issues. This innovation holds both potential and limitations. Balancing automation with human oversight is crucial to ensure quality and reliability.
• Introduction of Natural Language Web
NLWeb is a significant step toward a more natural and intuitive web experience. It can help users access content more easily and reduce reliance on traditional navigation. The open-source foundation provides developers with the flexibility to implement AI-driven interactions without rebuilding their existing platforms. NLWeb is a promising level of web interaction that complements, rather than replaces, well-designed UI.
• Introduction of Model Context Protocol
MCP provides a standardized method for connecting AI models with diverse tools and data sources. This approach simplifies the development of AI-driven applications, enhancing efficiency and scalability. Its open-source nature encourages broader adoption and collaboration within the developer community. Nevertheless, MCP can face challenges in compatibility across vendors and security in context sharing. Clear guidelines are crucial.
• Windows Subsystem for Linux is open-sourced
It's a positive step toward greater transparency and collaboration in the developer ecosystem. The community can now contribute to its evolution, helping identify issues and expand functionality faster. However, open-source software in a core system also introduces concerns around security, code quality management, and long-term maintenance. Microsoft’s continued involvement will be key to ensuring WSL remains stable and secure.
• Azure AI Foundry platform hosts Grok 3 AI models
Adding new models is a valuable expansion of AI development resources available at Azure. This provides developers with more flexibility in choosing language models that suit a range of application sizes and needs. Hosting on Azure makes access and integration easier when using Microsoft infrastructure.
Annual (33-year) study of the Israeli enterprise / public IT market. It covers sections on the Israeli economy, IT trends for 2026-28, several surveys (AI, CDOs, OCIO, CTO, staffing, cyber, operations and infra), plus rankings of 760 vendors across 160 markets (market sizes and trends) and a comparison of products according to support and market penetration.
Adtran’s SDG 9000 Series brings high-performance, cloud-managed Wi-Fi 7 to homes, businesses and public spaces. Built on a unified SmartOS platform, the portfolio includes outdoor access points, ceiling-mount APs and a 10G PoE router. Intellifi and Mosaic One simplify deployment, deliver AI-driven insights and unlock powerful new revenue streams for service providers.
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....Jasper Oosterveld
Sensitivity labels, powered by Microsoft Purview Information Protection, serve as the foundation for classifying and protecting your sensitive data within Microsoft 365. Their importance extends beyond classification, and they play a crucial role in enforcing governance policies across your Microsoft 365 environment. Join me, a Data Security Consultant and Microsoft MVP, as I share practical tips and tricks to unlock the full potential of sensitivity labels. I discuss sensitive information types, automatic labeling, and seamless integration with Data Loss Prevention, Teams Premium, and Microsoft 365 Copilot.
Jira Administration Training – Day 1 : IntroductionRavi Teja
This presentation covers the basics of Jira for beginners. Learn how Jira works, its key features, project types, issue types, and user roles. Perfect for anyone new to Jira or preparing for Jira Admin roles.
Introduction and Background:
Study Overview and Methodology: The study analyzes the IT market in Israel, covering over 160 markets and 760 companies/products/services. It includes vendor rankings, IT budgets, and trends from 2025-2029. Vendors participate in detailed briefings and surveys.
Vendor Listings: The presentation lists numerous vendors across various pages, detailing their names and services. These vendors are ranked based on their participation and market presence.
Market Insights and Trends: Key insights include IT market forecasts, economic factors affecting IT budgets, and the impact of AI on enterprise IT. The study highlights the importance of AI integration and the concept of creative destruction.
Agentic AI and Future Predictions: Agentic AI is expected to transform human-agent collaboration, with AI systems understanding context and orchestrating complex processes. Future predictions include AI's role in shopping and enterprise IT.
Introducing FME Realize: A New Era of Spatial Computing and ARSafe Software
A new era for the FME Platform has arrived – and it’s taking data into the real world.
Meet FME Realize: marking a new chapter in how organizations connect digital information with the physical environment around them. With the addition of FME Realize, FME has evolved into an All-data, Any-AI Spatial Computing Platform.
FME Realize brings spatial computing, augmented reality (AR), and the full power of FME to mobile teams: making it easy to visualize, interact with, and update data right in the field. From infrastructure management to asset inspections, you can put any data into real-world context, instantly.
Join us to discover how spatial computing, powered by FME, enables digital twins, AI-driven insights, and real-time field interactions: all through an intuitive no-code experience.
In this one-hour webinar, you’ll:
-Explore what FME Realize includes and how it fits into the FME Platform
-Learn how to deliver real-time AR experiences, fast
-See how FME enables live, contextual interactions with enterprise data across systems
-See demos, including ones you can try yourself
-Get tutorials and downloadable resources to help you start right away
Whether you’re exploring spatial computing for the first time or looking to scale AR across your organization, this session will give you the tools and insights to get started with confidence.
Supercharge Your AI Development with Local LLMsFrancesco Corti
In today's AI development landscape, developers face significant challenges when building applications that leverage powerful large language models (LLMs) through SaaS platforms like ChatGPT, Gemini, and others. While these services offer impressive capabilities, they come with substantial costs that can quickly escalate especially during the development lifecycle. Additionally, the inherent latency of web-based APIs creates frustrating bottlenecks during the critical testing and iteration phases of development, slowing down innovation and frustrating developers.
This talk will introduce the transformative approach of integrating local LLMs directly into their development environments. By bringing these models closer to where the code lives, developers can dramatically accelerate development lifecycles while maintaining complete control over model selection and configuration. This methodology effectively reduces costs to zero by eliminating dependency on pay-per-use SaaS services, while opening new possibilities for comprehensive integration testing, rapid prototyping, and specialized use cases.
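A minimal sketch of the local-LLM workflow described above (not from the talk): the OpenAI Python client pointed at a locally hosted, OpenAI-compatible endpoint such as one served by Ollama. The URL, API key, and model name are assumptions.
```python
from openai import OpenAI

# Local runtimes such as Ollama expose an OpenAI-compatible API; no SaaS key is needed.
client = OpenAI(base_url="http://localhost:11434/v1", api_key="not-needed-locally")

response = client.chat.completions.create(
    model="llama3",                        # hypothetical locally pulled model
    messages=[{"role": "user", "content": "Summarise what a local LLM buys a developer."}],
)
print(response.choices[0].message.content)
```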
Agentic AI - The New Era of IntelligenceMuzammil Shah
This presentation is specifically designed to introduce final-year university students to the foundational principles of Agentic Artificial Intelligence (AI). It aims to provide a clear understanding of how Agentic AI systems function, their key components, and the underlying technologies that empower them. By exploring real-world applications and emerging trends, the session will equip students with essential knowledge to engage with this rapidly evolving area of AI, preparing them for further study or professional work in the field.
Exploring the advantages of on-premises Dell PowerEdge servers with AMD EPYC processors vs. the cloud for small to medium businesses’ AI workloads
AI initiatives can bring tremendous value to your business, but you need to support your new AI workloads effectively. That means choosing the best possible infrastructure for your needs—and many companies are finding that the cloud isn’t right for them. According to a recent Rackspace survey of IT executives, 69 percent of companies have moved some of their applications on-premises from the cloud, with half of those citing security and compliance as the reason and 44 percent citing cost.
On-premises solutions provide a number of advantages. With full control over your security infrastructure, you can be certain that all compliance requirements remain firmly in the hands of your IT team. Opting for on-premises also gives you the ability to design your infrastructure to the precise needs of that team and your new AI workloads. Depending on the workload, you may also see performance benefits, along with more predictable costs. As you start to build your next AI initiative, consider an on-premises solution utilizing AMD EPYC processor-powered Dell PowerEdge servers.
European Accessibility Act & Integrated Accessibility TestingJulia Undeutsch
Emma Dawson will guide you through two important topics in this session.
Firstly, she will prepare you for the European Accessibility Act (EAA), which comes into effect on 28 June 2025, and show you how development teams can prepare for it.
In the second part of the webinar, Emma Dawson will explore with you various integrated testing methods and tools that will help you improve accessibility during the development cycle, such as Linters, Storybook, Playwright, just to name a few.
Focus: European Accessibility Act, Integrated Testing tools and methods (e.g. Linters, Storybook, Playwright)
Target audience: Everyone, Developers, Testers
Maxx nft market place new generation nft marketing placeusersalmanrazdelhi
PREFACE OF MAXXNFT
MaxxNFT: Powering the Future of Digital Ownership
MaxxNFT is a cutting-edge Web3 platform designed to revolutionize how digital assets are owned, traded, and valued. Positioned at the forefront of the NFT movement, MaxxNFT views NFTs not just as collectibles, but as the next generation of internet equity: unique, verifiable digital assets that unlock new possibilities for creators, investors, and everyday users alike.
Through strategic integrations with OKT Chain and OKX Web3, MaxxNFT enables seamless cross-chain NFT trading, improved liquidity, and enhanced user accessibility. These collaborations make it easier than ever to participate in the NFT ecosystem while expanding the platform's global reach.
With a focus on innovation, user rewards, and inclusive financial growth, MaxxNFT offers multiple income streams, from referral bonuses to liquidity incentives, creating a vibrant community-driven economy. Whether you're minting your first NFT or building a digital asset portfolio, MaxxNFT empowers you to participate in the future of decentralized value exchange.
https://maxxnft.xyz/
New Ways to Reduce Database Costs with ScyllaDBScyllaDB
How ScyllaDB’s latest capabilities can reduce your infrastructure costs
ScyllaDB has been obsessed with price-performance from day 1. Our core database is architected with low-level engineering optimizations that squeeze every ounce of power from the underlying infrastructure. And we just completed a multi-year effort to introduce a set of new capabilities for additional savings.
Join this webinar to learn about these new capabilities: the underlying challenges we wanted to address, the workloads that will benefit most from each, and how to get started. We’ll cover ways to:
- Avoid overprovisioning with “just-in-time” scaling
- Safely operate at up to ~90% storage utilization
- Cut network costs with new compression strategies and file-based streaming
We’ll also highlight a “hidden gem” capability that lets you safely balance multiple workloads in a single cluster. To conclude, we will share the efficiency-focused capabilities on our short-term and long-term roadmaps.
Cyber Security Legal Framework in Nepal.pptxGhimire B.R.
The presentation reviews the existing legal framework on cyber security in Nepal, highlighting the strengths and weaknesses of the major acts and policies to date. It further highlights the need for a data protection act.