This document discusses optimizing object-oriented code for performance. It begins with an overview of object-oriented programming and how CPU and memory performance have changed significantly since C++ was first created. It then analyzes a common scene tree example and finds it is slow due to excessive cache misses from scattered data. The solution is to restructure the code to have homogeneous, sequential data by allocating nodes and matrices contiguously in memory. Processing data in order and removing virtual function calls further improves performance. Prefetching is also able to reduce cache misses, resulting in a 6x speedup over the original implementation. The key lessons are to optimize for data locality and consider data-oriented design principles when performance is important.
Data-oriented design (DOD) focuses on how data is accessed and transformed, rather than how code is organized. This improves performance by minimizing cache misses and allowing better utilization of parallelism. The document provides an example comparing an object-oriented design (OOD) approach that stores related data together in objects, resulting in scattered memory access and many cache misses, versus a DOD approach that groups together data that is accessed together, resulting in fewer cache misses and faster performance.
The document discusses data-oriented design principles for game engine development in C++. It emphasizes understanding how data is represented and used to solve problems, rather than focusing on writing code. It provides examples of how restructuring code to better utilize data locality and cache lines can significantly improve performance by reducing cache misses. Booleans packed into structures are identified as having extremely low information density, wasting cache space.
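The slides' own code is not reproduced in these summaries, so here is a minimal C++ sketch (my own, not from the decks) contrasting the scattered array-of-structures layout with the contiguous structure-of-arrays layout described above; the `ParticleAoS`/`ParticlesSoA` names and the `integrate` routine are illustrative assumptions.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Array-of-structures (typical OOD layout): position, velocity and a rarely
// used flag live side by side, so updating positions also drags the flag
// bytes (and padding) through the cache.
struct ParticleAoS {
    float px, py, pz;
    float vx, vy, vz;
    bool  alive;   // ~1 bit of information occupying padded bytes
};

// Structure-of-arrays (DOD layout): fields that are processed together are
// stored contiguously, so a position update touches only position/velocity.
struct ParticlesSoA {
    std::vector<float> px, py, pz;
    std::vector<float> vx, vy, vz;
    std::vector<bool>  alive;   // packed to 1 bit per particle
};

void integrate(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.px.size(); ++i) {
        p.px[i] += p.vx[i] * dt;   // sequential, prefetch-friendly access
        p.py[i] += p.vy[i] * dt;
        p.pz[i] += p.vz[i] * dt;
    }
}
```

The SoA loop touches only the arrays it actually needs, which is the locality win the summaries attribute to the restructured scene tree.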
Presentation from DICE Coder's Day (November 2010) by Johan Torp:
This talk is about making object-oriented code more cache-friendly and how we can incrementally move towards parallelizable data-oriented designs. Filled with production code examples from Frostbite’s pathfinding implementation.
Linux Synchronization Mechanism: RCU (Read-Copy-Update), by Adrian Huang
RCU (Read-Copy-Update) is a synchronization mechanism that allows for lock-free reads with concurrent updates. It achieves this through a combination of temporal and spatial synchronization. Temporal synchronization uses rcu_read_lock() and rcu_read_unlock() for readers, and synchronize_rcu() or call_rcu() for updaters. Spatial synchronization uses rcu_dereference() for readers to safely load pointers, and rcu_assign_pointer() for updaters to safely update pointers. RCU guarantees that readers will either see the old or new version of data, but not a partially updated version.
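The kernel primitives named above cannot run in userspace, but the spatial half of the pattern (initialize fully, then publish a pointer) can be sketched with C++ atomics. This is an analogy of my own, not the kernel implementation: release/acquire ordering plays the role of `rcu_assign_pointer()`/`rcu_dereference()`, and grace periods (`synchronize_rcu()`/`call_rcu()`) and reclamation are deliberately omitted.

```cpp
#include <atomic>
#include <cassert>

// Userspace sketch of RCU's spatial synchronization only: a reader sees
// either the old or the new Config, never a half-initialized one.
struct Config { int timeout_ms; int retries; };

std::atomic<Config*> g_config{nullptr};

// Analogue of rcu_assign_pointer(): the release store guarantees the
// Config's fields are visible before the pointer is.
void publish(Config* next) {
    g_config.store(next, std::memory_order_release);
}

// Analogue of rcu_dereference(): load the pointer once, then read through it.
const Config* read_config() {
    return g_config.load(std::memory_order_acquire);
}
```

The missing temporal half is the hard part of real RCU: an updater must wait for all pre-existing readers before freeing the old Config.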
Ceph Object Storage Performance Secrets and Ceph Data Lake Solution, by Karan Singh
In this presentation, I have explained how Ceph object storage performance can be improved drastically, together with some object storage best practices, recommendations, and tips. I have also covered Ceph Shared Data Lake, which is getting very popular.
Siggraph2016 - The Devil is in the Details: idTech 666, by Tiago Sousa
A behind-the-scenes look into the latest renderer technology powering the critically acclaimed DOOM. The lecture will cover how the technology was designed to balance good visual quality against performance. Numerous topics will be covered, among them details about the lighting solution, techniques for decoupling costs by frequency, and GCN-specific approaches.
Original slides at: http://crytek.com/cryengine/presentations
For Crysis 2, the R&D team at Crytek created the third iteration of CryENGINE. This lecture covers various topics related to Crytek’s latest engine iteration. Tiago Sousa provides an overview of the rendering pipeline, and the successful transition to a multiplatform friendly deferred lighting approach; how gamma-correct HDR rendering and its multiplatform details have been handled, with a focus on performance and quality. Other topics include deferred lighting techniques such as efficiently handling skin-rendering, and overcoming alpha-blending problems for hair/fur rendering for current-generation hardware; water-rendering and dynamic interaction; batched HDR post-processing; and how AA was handled. The lecture also includes multiplatform comparisons on final image quality, optimization strategies, and performance analysis insights. It also unveils the DX11 implementation of certain features.
Talk by Yuriy O’Donnell at GDC 2017.
This talk describes how Frostbite handles rendering architecture challenges that come with having to support a wide variety of games on a single engine. Yuriy describes their new rendering abstraction design, which is based on a graph of all render passes and resources. This approach allows implementation of rendering features in a decoupled and modular way, while still maintaining efficiency.
A graph of all rendering operations for the entire frame is a useful abstraction. The industry can move away from “immediate mode” DX11 style APIs to a higher level system that allows simpler code and efficient GPU utilization. Attendees will learn how it worked out for Frostbite.
Performance tuning in BlueStore & RocksDB - Li Xiaoyan, Ceph Community
This document discusses performance tuning in BlueStore and RocksDB for Ceph object storage. It provides an overview of BlueStore's architecture using RocksDB for metadata storage and direct writing of data to block devices. It then examines various RocksDB and BlueStore configuration optimizations for random write workloads, including increasing parallelization, tuning memory usage, and testing different flush styles. The document concludes with ideas for future work on alternatives to RocksDB for certain data types.
Next generation gaming brought high resolutions, very complex environments and large textures to our living rooms. With virtually every asset being inflated, it's hard to use traditional forward rendering and hope for rich, dynamic environments with extensive dynamic lighting. Deferred rendering, on the other hand, has traditionally been described as a nice technique for rendering scenes with many dynamic lights, but one that unfortunately suffers from fill-rate problems and a lack of anti-aliasing, and very few games that use it have been published.
In this talk, we will discuss our approach to face this challenge and how we designed a deferred rendering engine that uses multi-sampled anti-aliasing (MSAA). We will give in-depth description of each individual stage of our real-time rendering pipeline and the main ingredients of our lighting, post-processing and data management. We'll show how we utilize PS3's SPUs for fast rendering of a large set of primitives, parallel processing of geometry and computation of indirect lighting. We will also describe our optimizations of the lighting and our parallel split (cascaded) shadow map algorithm for faster and stable MSAA output.
This document discusses various optimizations for the z-buffer algorithm used in 3D graphics rendering. It covers hardware optimizations like early-z testing and double-speed z-only rendering. It also discusses software techniques like front-to-back sorting, early-z rendering passes, and deferred shading. Other topics include z-buffer compression, fast clears, z-culling, and potential future optimizations like programmable culling units. A variety of resources are provided for further reading.
The document describes the process of generating voxelized shadows using a voxel DAG representation. It involves capturing shadow maps from the GPU and transmitting them to system memory. Min/max mip levels are also captured and transmitted. The shadow data is then used to build a voxel DAG from SVO or DAG representations, with nodes marked as lit or shadowed.
Optimizing the Graphics Pipeline with Compute, GDC 2016, by Graham Wihlidal
With further advancement in the current console cycle, new tricks are being learned to squeeze the maximum performance out of the hardware. This talk will present how the compute power of the console and PC GPUs can be used to improve the triangle throughput beyond the limits of the fixed function hardware. The discussed method shows a way to perform efficient "just-in-time" optimization of geometry, and opens the way for per-primitive filtering kernels and procedural geometry processing.
Takeaway:
Attendees will learn how to preprocess geometry on-the-fly per frame to improve rendering performance and efficiency.
Intended Audience:
This presentation is targeting seasoned graphics developers. Experience with DirectX 12 and GCN is recommended, but not required.
The document discusses the Linux networking architecture, covering several key topics:
It first describes the basic structure and layers of the Linux networking stack including the network device interface, network layer protocols like IP, transport layer, and sockets. It then discusses how network packets are managed in Linux through the use of socket buffers and associated functions. The document also provides an overview of the data link layer and protocols like Ethernet, PPP, and how they are implemented in Linux.
More Performance! Five Rendering Ideas From Battlefield 3 and Need For Speed:..., by Colin Barré-Brisebois
This talk covers techniques from Battlefield 3 and Need for Speed: The Run. It includes chroma sub-sampling for faster full-screen effects, a novel DirectX 9+ scatter-gather approach to bokeh rendering, HiZ reverse-reload for faster shadows, improved temporally-stable dynamic ambient occlusion, and tile-based deferred shading on Xbox 360.
The document provides an overview of graphics programming on the Xbox 360, including details about the system and GPU architecture, graphics APIs like Direct3D, shader development, and tools for graphics debugging and optimization like PIX. Key points include that the Xbox 360 GPU is designed by ATI and includes 10MB of EDRAM, supports shader model 3.0, and has dedicated hardware for features like tessellation, procedural geometry, and anti-aliasing. Direct3D is optimized for the Xbox 360 hardware and exposes new features. PIX is a powerful tool for performance analysis and debugging graphics applications on the Xbox 360.
Taking Killzone Shadow Fall Image Quality Into The Next Generation, by Guerrilla
This talk focuses on the technical side of Killzone Shadow Fall, the platform exclusive launch title for PlayStation 4.
We present the details of several new techniques that were developed in the quest for next generation image quality, and the talk uses key locations from the game as examples. We discuss interesting aspects of the new content pipeline, next-gen lighting engine, usage of indirect lighting and various shadow rendering optimizations. We also describe the details of volumetric lighting, the real-time reflections system, and the new anti-aliasing solution, and include some details about the image-quality driven streaming system. A common, very important, theme of the talk is the temporal coherency and how it was utilized to reduce aliasing, and improve the rendering quality and image stability above the baseline 1080p resolution seen in other games.
An illumination model, also called a lighting model and sometimes referred to as a shading model, is used to calculate the intensity of light that we should see at a given point on the surface of an object.
Surface rendering means a procedure for applying a lighting model to obtain pixel intensities for all the projected surface positions in a scene.
A surface-rendering algorithm uses the intensity calculations from an illumination model to determine the light intensity for all projected pixel positions for the various surfaces in a scene.
Surface rendering can be performed by applying the illumination model to every visible surface point.
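As a concrete instance of the definitions above, here is a minimal Lambertian illumination model in C++ (a standard textbook formula applied per surface point; the type and function names are mine): intensity at a point is an ambient term plus a diffuse term that scales with max(0, N·L).

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// I = ka*Ia + kd*Il*max(0, N.L)
// ka/kd: ambient/diffuse reflection coefficients of the surface,
// Ia/Il: ambient and light-source intensities.
float lambert(Vec3 normal, Vec3 toLight,
              float ka, float ambient, float kd, float lightIntensity) {
    float ndotl = std::max(0.0f, dot(normalize(normal), normalize(toLight)));
    return ka * ambient + kd * lightIntensity * ndotl;
}
```

A surface-rendering algorithm in the sense above would evaluate `lambert` (or a richer model with specular terms) at every projected pixel position.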
This talk is about our experiences gained during making of the Killzone Shadow Fall announcement demo.
We’ve gathered hard data about our assets, memory, CPU and GPU usage, along with a whole bunch of tricks.
The goal of the talk is to help you form a clear picture of what’s already possible to achieve on PS4.
Unreal Open Day 2017 UE4 for Mobile: The Future of High Quality Mobile Games, by Epic Games China
This document summarizes a presentation about Unreal Engine 4 for mobile game development. It discusses UE4's mobile rendering pipeline and features for high-end graphics on mobile, including OpenGL ES 3.1, Vulkan and Metal. It provides an overview of the state of the mobile game market and examples of AAA open-world games made with UE4. It also outlines UE4's feature levels for mobile, describes the components of the mobile rendering pipeline, and highlights specific rendering techniques like HDR encoding.
The document summarizes new features and updates in Ceph's RBD block storage component. Key points include: improved live migration support using external data sources; built-in LUKS encryption; up to 3x better small I/O performance; a new persistent write-back cache; snapshot quiesce hooks; kernel messenger v2 and replica read support; and initial RBD support on Windows. Future work planned for Quincy includes encryption-formatted clones, cache improvements, usability enhancements, and expanded ecosystem integration.
This document describes a rendering technique called Forward+ that brings the benefits of both forward and deferred rendering. Forward+ uses a depth prepass and light culling pass to limit the number of lights evaluated per pixel in the shading pass. This results in better performance than deferred rendering while allowing the use of many lights and complex materials like deferred. The technique is demonstrated to render over 3000 dynamic lights in real-time on a Radeon HD 7970 GPU.
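The per-pixel light limiting described above can be sketched on the CPU. This is a simplified illustration of the light-culling idea only (a depth-slab overlap test, with tile frustum planes omitted), not the actual compute-shader pass from the talk; the `Light` layout and function names are assumptions.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A point light reduced to what the depth test needs: view-space depth and
// radius of influence.
struct Light { float z; float radius; };

// Keep only the lights whose sphere of influence overlaps the tile's
// [minZ, maxZ] depth range, as recorded during the depth prepass. The
// shading pass then evaluates just this per-tile list instead of all lights.
std::vector<int> cullTile(const std::vector<Light>& lights,
                          float tileMinZ, float tileMaxZ) {
    std::vector<int> visible;
    for (std::size_t i = 0; i < lights.size(); ++i) {
        const Light& l = lights[i];
        if (l.z + l.radius >= tileMinZ && l.z - l.radius <= tileMaxZ)
            visible.push_back(static_cast<int>(i));  // overlaps the depth slab
    }
    return visible;
}
```

The real pass also tests against the four side planes of each screen tile's frustum; the depth-slab test alone already removes most lights in depth-complex scenes.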
This document provides an introduction to Nodejs, NoSQL technologies like MongoDB, and how to build applications using these technologies. It discusses key aspects of Nodejs like its event-driven architecture and how it uses JavaScript. It then covers setting up and running basic CRUD operations in MongoDB. Finally, it demonstrates how to build sample applications integrating Nodejs and MongoDB.
Monitoring Big Data Systems - "The Simple Way", by Demi Ben-Ari
Once you start working with distributed Big Data systems, you start discovering a whole bunch of problems you won’t find in monolithic systems.
All of a sudden, monitoring all of the components becomes a big data problem in itself.
In the talk we’ll mention all of the aspects that you should take into consideration when monitoring a distributed system once you’re using tools like:
Web Services, Apache Spark, Cassandra, MongoDB, Amazon Web Services.
Not only the tools, what should you monitor about the actual data that flows in the system?
We’ll also cover the simplest solution, built from your day-to-day open source tools; the surprising thing is that it comes not from an Ops guy.
Demi Ben-Ari is a Co-Founder and CTO @ Panorays.
Demi has over 9 years of experience in building various systems both from the field of near real time applications and Big Data distributed systems.
He describes himself as a software development groupie, interested in tackling cutting-edge technologies.
Demi is also a co-founder of the “Big Things” Big Data community: http://somebigthings.com/big-things-intro/
Igor Berman presented an overview of Mario, a job scheduling system they developed for Dynamic Yield to address some limitations of Luigi. Mario defines jobs as classes with inputs, outputs, and work to be done. It provides features like running jobs on different clusters, saving job statuses to Redis for better performance than Luigi, and a UI for controlling job execution. Some challenges discussed include properly partitioning RDDs to avoid shuffles, persisting data efficiently like to local SSDs, and issues with Avro and HDFS/S3. Future plans include open sourcing Mario and using technologies like Elasticsearch and Flink.
Web-scale data processing: practical approaches for low-latency and batch, by Edward Capriolo
The document is a slide deck presentation about batch processing, stream processing, and relational and NoSQL databases. It introduces the speaker and their experience with Hadoop, Cassandra, and Hive. It then covers batch processing using Hadoop, describing common architectures and use cases like processing web server logs. It discusses limitations of batch processing and then introduces stream processing concepts like Kafka and Storm. It provides an example of using Storm to perform word counting on streams of text data and discusses storing streaming results. Finally, it covers temporal databases and storing streaming results incrementally in Cassandra.
Doctrine is a PHP library that provides persistence services and related functionality. It includes an object relational mapper (ORM) for mapping database records to PHP objects, and a database abstraction layer (DBAL). Other libraries include an object document mapper (ODM) for NoSQL databases, common annotations, caching, data fixtures, and migrations. The presentation provides an overview of each library and how to use Doctrine for common tasks like mapping classes, saving and retrieving objects, and schema migrations. Help and contribution opportunities are available on Google Groups, IRC channels, and the project's GitHub page.
2021-04-20: Apache Arrow and Its Impact on the Database Industry, by Andrew Lamb
The talk will motivate why Apache Arrow and related projects (e.g. DataFusion) are a good choice for implementing modern analytic database systems. It reviews the major components of most databases, explains where Apache Arrow fits in, and describes the additional integration benefits of using Arrow.
A team knowledge-sharing presentation covering decision trees, XGBoost, logistic regression, neural networks, and deep learning using scikit-learn, statsmodels, and Keras over TensorFlow in Python, within Power BI, Azure Notebooks, AWS SageMaker notebooks, and Google Colab notebooks.
A super fast introduction to Spark and glance at BEAM, by Holden Karau
Apache Spark is one of the most popular general purpose distributed systems, with built-in libraries to support everything from ML to SQL. Spark has APIs across languages including Scala, Java, Python, and R, with more 3rd-party language support (like Julia & C#). Apache BEAM is a cross-platform tool for building on top of different distributed systems, but it’s in its early stages. This talk will introduce the core concepts of Apache Spark, and look to the potential future of Apache BEAM.
Apache Spark has two core abstractions for representing distributed data and computations. This talk will introduce the basics of RDDs and Spark DataFrames & Datasets, and Spark’s method for achieving resiliency. Since it’s a big data talk, we will include the almost-required wordcount example, and end the Spark part with follow-up pointers on Spark’s new ML APIs. For folks who are interested, we’ll then talk a bit about portability, and how Apache BEAM aims to improve it (as well as its unique approach to cross-language support).
Slides from Holden's talk at https://www.meetup.com/Wellington-Data-Scaling-Chats/events/mdcsdpyxcbxb/
Vladislav Supalov introduces data pipeline architecture and workflow engines like Luigi. He discusses how custom scripts are problematic for maintaining data pipelines and recommends using workflow engines instead. Luigi is presented as a Python-based workflow engine that was created at Spotify to manage thousands of daily Hadoop jobs. It provides features like parameterization, email alerts, dependency resolution, and task scheduling through a central scheduler. Luigi aims to minimize boilerplate code and make pipelines testable, versioning-friendly, and collaborative.
Modern computationally intensive tasks are rarely bottlenecked on the absolute performance of your processor cores; the real bottleneck in 2012 is getting data out of memory. CPU caches are designed to alleviate the difference between CPU core clock speed and main memory clock speed, but developers rarely understand how this interaction works or how to measure or tune their application accordingly.
This Talk aims to solve that by:
1. Describing how the CPU caches work in the latest Intel Hardware.
2. Showing people what and how to measure in order to understand the caching behaviour of their software.
3. Giving examples of how this affects Java Program performance and what can be done to address things.
Advanced Administration, Monitoring and BackupMongoDB
Sailthru has been using MongoDB for 4 years, pushing the system to scale. Maintaining a high degree of client-side customizability while growing aggressively has posed unique challenges to our infrastructure. We have maintained high uptime and performance by using monitoring that covers expected use patterns as well as monitoring that catches edge cases for new and unexpected access to the database. In this session, we will talk about Sailthru's use of MongoDB Management Service (MMS), as well as areas in which we have implemented custom monitoring and alerting tools. I will also discuss our transition from a hybrid backup solution using on-premise hardware and AWS snapshots, to using backups with MMS, and how this has benefited Sailthru.
What does OOP stand for?
When Object Oriented Programming(OOP) is taught so extensively, do computer programmers, specifically within games development, realise what it's possibly doing to productivity and performance? I explain my own view from experience in personal projects and professional work.
This talk was given to the Edinburgh meet of IGDA Scotland, on 2011/07/27.
Persistent Data Structures - partial::ConfIvan Vergiliev
This document discusses persistent data structures. It begins by introducing the speaker and their background. It then defines different types of persistent data structures including ephemeral, partially persistent, fully persistent, and confluently persistent structures. It provides examples of applications of persistent data structures like avoiding side effects, multithreading, and transaction rollback. It discusses implementation considerations like dealing with old nodes and using garbage collection. It concludes by discussing techniques to improve performance of persistent data structures like tail optimization, focus in Scala, and using transients.
The document discusses several Java and Android internals topics:
1. How ArrayList and StringBuilder work internally using arrays and memory copying as the size increases. This can lead to inefficient memory usage.
2. How inner classes are implemented by compilers by generating additional accessor methods, increasing method count and affecting optimizations.
3. How the Android zygote process improves startup and memory usage by loading the framework once and sharing it across apps.
4. How the CPU cache works and how optimizing code to improve cache locality can significantly increase performance despite doing less work.
5. Issues like memory fragmentation that can occur if the Android garbage collector and compactor are unable to run due to the app being
This document discusses how big data and machine learning can be used to gain insights from large datasets and answer complex questions. It describes challenges in working with big data like data cleaning, modeling large datasets, and limitations of traditional tools. It then introduces H2O as a platform for performing fast, distributed machine learning on big data through an in-memory key-value store, distributed fork/join framework, and APIs for math hacking and model building. H2O aims to allow users to manipulate big data interactively like small data through its distributed, parallel architecture.
This document discusses data ingestion with Spark. It provides an overview of Spark, which is a unified analytics engine that can handle batch processing, streaming, SQL queries, machine learning and graph processing. Spark improves on MapReduce by keeping data in-memory between jobs for faster processing. The document contrasts data collection, which occurs where data originates, with data ingestion, which receives and routes data, sometimes coupled with storage.
Distributed real time stream processing- why and howPetr Zapletal
In this talk you will discover various state-of-the-art open-source distributed streaming frameworks, their similarities and differences, implementation trade-offs, their intended use-cases, and how to choose between them. Petr will focus on the popular frameworks, including Spark Streaming, Storm, Samza and Flink. You will also explore theoretical introduction, common pitfalls, popular architectures, and much more.
The demand for stream processing is increasing. Immense amounts of data has to be processed fast from a rapidly growing set of disparate data sources. This pushes the limits of traditional data processing infrastructures. These stream-based applications, include trading, social networks, the Internet of Things, and system monitoring, are becoming more and more important. A number of powerful, easy-to-use open source platforms have emerged to address this.
Petr's goal is to provide a comprehensive overview of modern streaming solutions and to help fellow developers with picking the best possible solution for their particular use-case. Join this talk if you are thinking about, implementing, or have already deployed a streaming solution.
This document discusses MongoDB and scaling strategies when using MongoDB. It begins with an overview of MongoDB's architecture, data model, and operations. It then describes some early performance issues encountered with MongoDB including issues with durability settings, queries locking servers, and updates moving documents. The document recommends strategies for scaling such as adding more RAM, partitioning data through sharding, and monitoring replication delay closely for disaster recovery.
The document provides an agenda for understanding Hadoop which includes an introduction to big data, the core Hadoop components of HDFS and MapReduce, the Hadoop ecosystem, planning and installing Hadoop clusters, and writing simple streaming jobs. It discusses the evolution of big data and how Hadoop uses a scalable architecture of commodity hardware and open source software to process and store large datasets in a distributed manner. The core of Hadoop is HDFS for reliable data storage and MapReduce for parallel processing. Additional projects like Pig, Hive, HBase, Zookeeper, and Oozie extend the capabilities of Hadoop.
Measuring Microsoft 365 Copilot and Gen AI SuccessNikki Chapple
Session | Measuring Microsoft 365 Copilot and Gen AI Success with Viva Insights and Purview
Presenter | Nikki Chapple 2 x MVP and Principal Cloud Architect at CloudWay
Event | European Collaboration Conference 2025
Format | In person Germany
Date | 28 May 2025
📊 Measuring Copilot and Gen AI Success with Viva Insights and Purview
Presented by Nikki Chapple – Microsoft 365 MVP & Principal Cloud Architect, CloudWay
How do you measure the success—and manage the risks—of Microsoft 365 Copilot and Generative AI (Gen AI)? In this ECS 2025 session, Microsoft MVP and Principal Cloud Architect Nikki Chapple explores how to go beyond basic usage metrics to gain full-spectrum visibility into AI adoption, business impact, user sentiment, and data security.
🎯 Key Topics Covered:
Microsoft 365 Copilot usage and adoption metrics
Viva Insights Copilot Analytics and Dashboard
Microsoft Purview Data Security Posture Management (DSPM) for AI
Measuring AI readiness, impact, and sentiment
Identifying and mitigating risks from third-party Gen AI tools
Shadow IT, oversharing, and compliance risks
Microsoft 365 Admin Center reports and Copilot Readiness
Power BI-based Copilot Business Impact Report (Preview)
📊 Why AI Measurement Matters: Without meaningful measurement, organizations risk operating in the dark—unable to prove ROI, identify friction points, or detect compliance violations. Nikki presents a unified framework combining quantitative metrics, qualitative insights, and risk monitoring to help organizations:
Prove ROI on AI investments
Drive responsible adoption
Protect sensitive data
Ensure compliance and governance
🔍 Tools and Reports Highlighted:
Microsoft 365 Admin Center: Copilot Overview, Usage, Readiness, Agents, Chat, and Adoption Score
Viva Insights Copilot Dashboard: Readiness, Adoption, Impact, Sentiment
Copilot Business Impact Report: Power BI integration for business outcome mapping
Microsoft Purview DSPM for AI: Discover and govern Copilot and third-party Gen AI usage
🔐 Security and Compliance Insights: Learn how to detect unsanctioned Gen AI tools like ChatGPT, Gemini, and Claude, track oversharing, and apply eDLP and Insider Risk Management (IRM) policies. Understand how to use Microsoft Purview—even without E5 Compliance—to monitor Copilot usage and protect sensitive data.
📈 Who Should Watch: This session is ideal for IT leaders, security professionals, compliance officers, and Microsoft 365 admins looking to:
Maximize the value of Microsoft Copilot
Build a secure, measurable AI strategy
Align AI usage with business goals and compliance requirements
🔗 Read the blog https://nikkichapple.com/measuring-copilot-gen-ai/
UiPath Community Zurich: Release Management and Build PipelinesUiPathCommunity
Ensuring robust, reliable, and repeatable delivery processes is more critical than ever - it's a success factor for your automations and for automation programmes as a whole. In this session, we’ll dive into modern best practices for release management and explore how tools like the UiPathCLI can streamline your CI/CD pipelines. Whether you’re just starting with automation or scaling enterprise-grade deployments, our event promises to deliver helpful insights to you. This topic is relevant for both on-premise and cloud users - as well as for automation developers and software testers alike.
📕 Agenda:
- Best Practices for Release Management
- What it is and why it matters
- UiPath Build Pipelines Deep Dive
- Exploring CI/CD workflows, the UiPathCLI and showcasing scenarios for both on-premise and cloud
- Discussion, Q&A
👨🏫 Speakers
Roman Tobler, CEO@ Routinuum
Johans Brink, CTO@ MvR Digital Workforce
We look forward to bringing best practices and showcasing build pipelines to you - and to having interesting discussions on this important topic!
If you have any questions or inputs prior to the event, don't hesitate to reach out to us.
This event streamed live on May 27, 16:00 pm CET.
Check out all our upcoming UiPath Community sessions at:
👉 https://community.uipath.com/events/
Join UiPath Community Zurich chapter:
👉 https://community.uipath.com/zurich/
Contributing to WordPress With & Without Code.pptxPatrick Lumumba
Contributing to WordPress: Making an Impact on the Test Team—With or Without Coding Skills
WordPress survives on collaboration, and the Test Team plays a very important role in ensuring the CMS is stable, user-friendly, and accessible to everyone.
This talk aims to deconstruct the myth that one has to be a developer to contribute to WordPress. In this session, I will share with the audience how to get involved with the WordPress Team, whether a coder or not.
We’ll explore practical ways to contribute, from testing new features, and patches, to reporting bugs. By the end of this talk, the audience will have the tools and confidence to make a meaningful impact on WordPress—no matter the skill set.
Supercharge Your AI Development with Local LLMsFrancesco Corti
In today's AI development landscape, developers face significant challenges when building applications that leverage powerful large language models (LLMs) through SaaS platforms like ChatGPT, Gemini, and others. While these services offer impressive capabilities, they come with substantial costs that can quickly escalate especially during the development lifecycle. Additionally, the inherent latency of web-based APIs creates frustrating bottlenecks during the critical testing and iteration phases of development, slowing down innovation and frustrating developers.
This talk will introduce the transformative approach of integrating local LLMs directly into their development environments. By bringing these models closer to where the code lives, developers can dramatically accelerate development lifecycles while maintaining complete control over model selection and configuration. This methodology effectively reduces costs to zero by eliminating dependency on pay-per-use SaaS services, while opening new possibilities for comprehensive integration testing, rapid prototyping, and specialized use cases.
UiPath Community Berlin: Studio Tips & Tricks and UiPath InsightsUiPathCommunity
Join the UiPath Community Berlin (Virtual) meetup on May 27 to discover handy Studio Tips & Tricks and get introduced to UiPath Insights. Learn how to boost your development workflow, improve efficiency, and gain visibility into your automation performance.
📕 Agenda:
- Welcome & Introductions
- UiPath Studio Tips & Tricks for Efficient Development
- Best Practices for Workflow Design
- Introduction to UiPath Insights
- Creating Dashboards & Tracking KPIs (Demo)
- Q&A and Open Discussion
Perfect for developers, analysts, and automation enthusiasts!
This session streamed live on May 27, 18:00 CET.
Check out all our upcoming UiPath Community sessions at:
👉 https://community.uipath.com/events/
Join our UiPath Community Berlin chapter:
👉 https://community.uipath.com/berlin/
ELNL2025 - Unlocking the Power of Sensitivity Labels - A Comprehensive Guide....Jasper Oosterveld
Sensitivity labels, powered by Microsoft Purview Information Protection, serve as the foundation for classifying and protecting your sensitive data within Microsoft 365. Their importance extends beyond classification and play a crucial role in enforcing governance policies across your Microsoft 365 environment. Join me, a Data Security Consultant and Microsoft MVP, as I share practical tips and tricks to get the full potential of sensitivity labels. I discuss sensitive information types, automatic labeling, and seamless integration with Data Loss Prevention, Teams Premium, and Microsoft 365 Copilot.
Jeremy Millul - A Talented Software DeveloperJeremy Millul
Jeremy Millul is a talented software developer based in NYC, known for leading impactful projects such as a Community Engagement Platform and a Hiking Trail Finder. Using React, MongoDB, and geolocation tools, Jeremy delivers intuitive applications that foster engagement and usability. A graduate of NYU’s Computer Science program, he brings creativity and technical expertise to every project, ensuring seamless user experiences and meaningful results in software development.
Create Your First AI Agent with UiPath Agent BuilderDianaGray10
Join us for an exciting virtual event where you'll learn how to create your first AI Agent using UiPath Agent Builder. This session will cover everything you need to know about what an agent is and how easy it is to create one using the powerful AI-driven UiPath platform. You'll also discover the steps to successfully publish your AI agent. This is a wonderful opportunity for beginners and enthusiasts to gain hands-on insights and kickstart their journey in AI-powered automation.
GDG Cloud Southlake #43: Tommy Todd: The Quantum Apocalypse: A Looming Threat...James Anderson
The Quantum Apocalypse: A Looming Threat & The Need for Post-Quantum Encryption
We explore the imminent risks posed by quantum computing to modern encryption standards and the urgent need for post-quantum cryptography (PQC).
Bio: With 30 years in cybersecurity, including as a CISO, Tommy is a strategic leader driving security transformation, risk management, and program maturity. He has led high-performing teams, shaped industry policies, and advised organizations on complex cyber, compliance, and data protection challenges.
Agentic AI - The New Era of IntelligenceMuzammil Shah
This presentation is specifically designed to introduce final-year university students to the foundational principles of Agentic Artificial Intelligence (AI). It aims to provide a clear understanding of how Agentic AI systems function, their key components, and the underlying technologies that empower them. By exploring real-world applications and emerging trends, the session will equip students with essential knowledge to engage with this rapidly evolving area of AI, preparing them for further study or professional work in the field.
Droidal: AI Agents Revolutionizing HealthcareDroidal LLC
Droidal’s AI Agents are transforming healthcare by bringing intelligence, speed, and efficiency to key areas such as Revenue Cycle Management (RCM), clinical operations, and patient engagement. Built specifically for the needs of U.S. hospitals and clinics, Droidal's solutions are designed to improve outcomes and reduce administrative burden.
Through simple visuals and clear examples, the presentation explains how AI Agents can support medical coding, streamline claims processing, manage denials, ensure compliance, and enhance communication between providers and patients. By integrating seamlessly with existing systems, these agents act as digital coworkers that deliver faster reimbursements, reduce errors, and enable teams to focus more on patient care.
Droidal's AI technology is more than just automation — it's a shift toward intelligent healthcare operations that are scalable, secure, and cost-effective. The presentation also offers insights into future developments in AI-driven healthcare, including how continuous learning and agent autonomy will redefine daily workflows.
Whether you're a healthcare administrator, a tech leader, or a provider looking for smarter solutions, this presentation offers a compelling overview of how Droidal’s AI Agents can help your organization achieve operational excellence and better patient outcomes.
A free demo trial is available for those interested in experiencing Droidal’s AI Agents firsthand. Our team will walk you through a live demo tailored to your specific workflows, helping you understand the immediate value and long-term impact of adopting AI in your healthcare environment.
To request a free trial or learn more:
https://droidal.com/
As data privacy regulations become more pervasive across the globe and organizations increasingly handle and transfer (including across borders) meaningful volumes of personal and confidential information, the need for robust contracts to be in place is more important than ever.
This webinar will provide a deep dive into privacy contracting, covering essential terms and concepts, negotiation strategies, and key practices for managing data privacy risks.
Whether you're in legal, privacy, security, compliance, GRC, procurement, or otherwise, this session will include actionable insights and practical strategies to help you enhance your agreements, reduce risk, and enable your business to move fast while protecting itself.
This webinar will review key aspects and considerations in privacy contracting, including:
- Data processing addenda, cross-border transfer terms including EU Model Clauses/Standard Contractual Clauses, etc.
- Certain legally-required provisions (as well as how to ensure compliance with those provisions)
- Negotiation tactics and common issues
- Recent lessons from recent regulatory actions and disputes
Adtran’s SDG 9000 Series brings high-performance, cloud-managed Wi-Fi 7 to homes, businesses and public spaces. Built on a unified SmartOS platform, the portfolio includes outdoor access points, ceiling-mount APs and a 10G PoE router. Intellifi and Mosaic One simplify deployment, deliver AI-driven insights and unlock powerful new revenue streams for service providers.
Neural representations have shown the potential to accelerate ray casting in a conventional ray-tracing-based rendering pipeline. We introduce a novel approach called Locally-Subdivided Neural Intersection Function (LSNIF) that replaces bottom-level BVHs used as traditional geometric representations with a neural network. Our method introduces a sparse hash grid encoding scheme incorporating geometry voxelization, a scene-agnostic training data collection, and a tailored loss function. It enables the network to output not only visibility but also hit-point information and material indices. LSNIF can be trained offline for a single object, allowing us to use LSNIF as a replacement for its corresponding BVH. With these designs, the network can handle hit-point queries from any arbitrary viewpoint, supporting all types of rays in the rendering pipeline. We demonstrate that LSNIF can render a variety of scenes, including real-world scenes designed for other path tracers, while achieving a memory footprint reduction of up to 106.2x compared to a compressed BVH.
https://arxiv.org/abs/2504.21627
Introduction and Background:
Study Overview and Methodology: The study analyzes the IT market in Israel, covering over 160 markets and 760 companies/products/services. It includes vendor rankings, IT budgets, and trends from 2025-2029. Vendors participate in detailed briefings and surveys.
Vendor Listings: The presentation lists numerous vendors across various pages, detailing their names and services. These vendors are ranked based on their participation and market presence.
Market Insights and Trends: Key insights include IT market forecasts, economic factors affecting IT budgets, and the impact of AI on enterprise IT. The study highlights the importance of AI integration and the concept of creative destruction.
Agentic AI and Future Predictions: Agentic AI is expected to transform human-agent collaboration, with AI systems understanding context and orchestrating complex processes. Future predictions include AI's role in shopping and enterprise IT.
Protecting Your Sensitive Data with Microsoft Purview - IRMS 2025Nikki Chapple
Session | Protecting Your Sensitive Data with Microsoft Purview: Practical Information Protection and DLP Strategies
Presenter | Nikki Chapple (MVP| Principal Cloud Architect CloudWay) & Ryan John Murphy (Microsoft)
Event | IRMS Conference 2025
Format | Birmingham UK
Date | 18-20 May 2025
In this closing keynote session from the IRMS Conference 2025, Nikki Chapple and Ryan John Murphy deliver a compelling and practical guide to data protection, compliance, and information governance using Microsoft Purview. As organizations generate over 2 billion pieces of content daily in Microsoft 365, the need for robust data classification, sensitivity labeling, and Data Loss Prevention (DLP) has never been more urgent.
This session addresses the growing challenge of managing unstructured data, with 73% of sensitive content remaining undiscovered and unclassified. Using a mountaineering metaphor, the speakers introduce the “Secure by Default” blueprint—a four-phase maturity model designed to help organizations scale their data security journey with confidence, clarity, and control.
🔐 Key Topics and Microsoft 365 Security Features Covered:
Microsoft Purview Information Protection and DLP
Sensitivity labels, auto-labeling, and adaptive protection
Data discovery, classification, and content labeling
DLP for both labeled and unlabeled content
SharePoint Advanced Management for workspace governance
Microsoft 365 compliance center best practices
Real-world case study: reducing 42 sensitivity labels to 4 parent labels
Empowering users through training, change management, and adoption strategies
🧭 The Secure by Default Path – Microsoft Purview Maturity Model:
Foundational – Apply default sensitivity labels at content creation; train users to manage exceptions; implement DLP for labeled content.
Managed – Focus on crown jewel data; use client-side auto-labeling; apply DLP to unlabeled content; enable adaptive protection.
Optimized – Auto-label historical content; simulate and test policies; use advanced classifiers to identify sensitive data at scale.
Strategic – Conduct operational reviews; identify new labeling scenarios; implement workspace governance using SharePoint Advanced Management.
🎒 Top Takeaways for Information Management Professionals:
Start secure. Stay protected. Expand with purpose.
Simplify your sensitivity label taxonomy for better adoption.
Train your users—they are your first line of defense.
Don’t wait for perfection—start small and iterate fast.
Align your data protection strategy with business goals and regulatory requirements.
💡 Who Should Watch This Presentation?
This session is ideal for compliance officers, IT administrators, records managers, data protection officers (DPOs), security architects, and Microsoft 365 governance leads. Whether you're in the public sector, financial services, healthcare, or education.
🔗 Read the blog: https://nikkichapple.com/irms-conference-2025/
Protecting Your Sensitive Data with Microsoft Purview - IRMS 2025Nikki Chapple
Ad
Intro to data-oriented design
1. Intro to data-oriented design
Stoyan Nikolov
@stoyannk
stoyannk.wordpress.com
github.com/stoyannk
2. What we do
● Game UI middleware
○ Coherent UI 2.x
○ Coherent GT 1.x
○ Unannounced project
● Stoyan Nikolov - Co-Founder & Software Architect
3. Quick OOP overview
● Introduces “real-world” abstractions
● Couples data with the operations (code) on it
● Treats objects as black boxes
● Promises easier code reuse and maintenance
4. But..
● Was born in an era when machines were very different from the ones we have now
● Tries to hide the data instead of embracing it
● Reuse and maintainability are often hurt through excessive coupling
6. Data-oriented design
● Relatively recent trend, primarily in game development (but the idea is old)
● Think about the data & flow first
● Think about the hardware your code runs on
● Build software around data & the transforms on it, instead of an artificial abstraction
8. Sounds good, but..
● Although simple in essence, data-oriented design can be difficult to achieve
● We probably need more time to shake off years of OOP indoctrination
● Many “text-book” examples in presentations are too obvious
9. Classic examples
● Breaking classes into pieces for better cache utilization
● AoS -> SoA (arrays of structures to structures of arrays)
● Arrays of components in a game engine
● “Where there’s one - there are many”
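The AoS -> SoA transformation can be sketched as follows. This is an illustrative example (particle names and fields are invented, not code from the talk): when a pass touches only some fields, the SoA layout streams through dense, useful cache lines instead of dragging unused fields along.

```cpp
#include <vector>
#include <cstddef>

// AoS: each particle's fields are interleaved in memory. Updating only
// positions still pulls velocity and lifetime into the cache every time.
struct ParticleAoS {
    float x, y, z;
    float vx, vy, vz;
    float lifetime;
};

// SoA: each field lives in its own contiguous array, so a pass that
// touches only positions and velocities reads fully useful cache lines.
struct ParticlesSoA {
    std::vector<float> x, y, z;
    std::vector<float> vx, vy, vz;
    std::vector<float> lifetime;
};

// A position-integration pass over the SoA layout.
void integrate(ParticlesSoA& p, float dt) {
    for (std::size_t i = 0; i < p.x.size(); ++i) {
        p.x[i] += p.vx[i] * dt;
        p.y[i] += p.vy[i] * dt;
        p.z[i] += p.vz[i] * dt;
    }
}
```

The loop body is also trivially vectorizable, which is a second benefit of the SoA layout beyond cache behaviour.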
10. An example from practice
● The rendering backend in our products is a custom class for every API we support (9 graphics platforms)
● The library calls methods of an interface that is implemented for every API
13. What is happening?
● We have interleaved the processing of the library data and the Renderer data
● Their combined data footprint is large
● We jump into the Renderer, but the cache is full of Library data (cache miss)
● The Renderer does a lot of computation and populates the cache with its own data
● We jump back into the Library, but the cache is full of Renderer data -> again a cache miss
● … and so on …
16. Why it works
● We stay in the Library - a big chance for data to stay in cache
● Control is transferred to the Renderer once, with all the commands
● “Where there’s one, there are many”
● ~15% improvement JUST by changing the API!
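The restructuring described above - handing control to the Renderer once, with all the commands - can be sketched as a command buffer (types and names are hypothetical, not the actual Frostbite/Coherent code):

```cpp
#include <vector>
#include <cstddef>

// Hypothetical draw command: plain data instead of one virtual call per item.
struct DrawCommand {
    int geometryId;
    int materialId;
};

// The library fills a contiguous buffer of commands while its own data
// stays hot in cache...
std::vector<DrawCommand> buildCommands(const std::vector<int>& visibleIds) {
    std::vector<DrawCommand> commands;
    commands.reserve(visibleIds.size());
    for (int id : visibleIds) {
        commands.push_back({id, id % 4}); // illustrative material selection
    }
    return commands;
}

// ...then transfers control to the renderer exactly once, which loops
// over homogeneous, sequential data instead of ping-ponging with the library.
std::size_t submit(const std::vector<DrawCommand>& commands) {
    std::size_t drawn = 0;
    for (const DrawCommand& c : commands) {
        (void)c; // the actual graphics-API draw call would go here
        ++drawn;
    }
    return drawn;
}
```

The point is not the buffer itself but the access pattern: each side now runs to completion over its own data before the other side's data evicts it from cache.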
17. Key take-away
Think about what is happening on the machine when it executes your code.
Algorithmic complexity is rarely the problem - constant factors often hit performance!
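A minimal illustration of constant factors (my example, not from the slides): these two functions have identical O(rows x cols) complexity and compute the same result, yet on large matrices the row-major version is typically far faster because it walks memory sequentially, while the column-major version strides a full row per access and misses the cache constantly.

```cpp
#include <vector>
#include <cstddef>

// Row-major traversal: consecutive accesses are adjacent in memory,
// so each fetched cache line is fully used.
double sumRowMajor(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t r = 0; r < rows; ++r)
        for (std::size_t c = 0; c < cols; ++c)
            s += m[r * cols + c];
    return s;
}

// Column-major traversal: same work on paper, but each access jumps
// `cols` elements ahead, touching a new cache line almost every time.
double sumColMajor(const std::vector<double>& m, std::size_t rows, std::size_t cols) {
    double s = 0.0;
    for (std::size_t c = 0; c < cols; ++c)
        for (std::size_t r = 0; r < rows; ++r)
            s += m[r * cols + c];
    return s;
}
```

Big-O analysis sees no difference between the two; the machine does.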
19. References
● Data-Oriented Design, Richard Fabian, http://www.dataorienteddesign.com/dodmain/
● Pitfalls of Object Oriented Programming, Tony Albrecht, http://harmful.cat-v.org/software/OO_programming/_pdf/Pitfalls_of_Object_Oriented_Programming_GCAP_09.pdf
● Data-Oriented Design in C++, Mike Acton, https://www.youtube.com/watch?v=rX0ItVEVjHc
● Typical C++ Bullshit, Mike Acton, http://macton.smugmug.com/gallery/8936708_T6zQX#!i=593426709&k=BrHWXdJ
● Introduction to Data-Oriented Design, DICE, http://www.dice.se/wp-content/uploads/2014/12/Introduction_to_Data-Oriented_Design.pdf
● Culling the Battlefield: Data Oriented Design in Practice, Daniel Collin, http://www.slideshare.net/DICEStudio/culling-the-battlefield-data-oriented-design-in-practice
● Adventures in data-oriented design, Stefan Reinalter, https://molecularmusings.wordpress.com/2011/11/03/adventures-in-data-oriented-design-part-1-mesh-data-3/
● Building a Data-Oriented Entity System, Niklas Frykholm, http://bitsquid.blogspot.com/2014/08/building-data-oriented-entity-system.html