Paper Reactive Systems
Introduction
Understanding Messaging
Summary
[Figure: Global mobile data traffic from 2014 to 2019, in exabytes per month – 2014: 2.5; 2015*: 4.5; 2016*: 6.8; 2017*: 10.7; 2018*: 16.1; 2019*: 24.3. Source: Cisco Systems; Additional Information: Worldwide 2014; © Statista 2015]
But the world of “traditional IT” has come to an end. Today businesses, partners, customers, and employees all demand more flexibility and capability.

Rather than a tactical approach to integration, a broad enabling fabric will deliver the breadth of capability users demand with the performance, scale and resilience they expect. So when it comes to IT infrastructure, it is no longer viable to simply invest in products and services that cover only the requirements of today.

Architects, in any environment, must think about creating a sufficiently generic layer of infrastructure that can be applied to many different projects and serve the unknown requirements of tomorrow. With this approach, you naturally move away from a model of buying technology that solves just one problem, to instead investing in broad capacity that can be leveraged across multiple problem areas. Reusing capacity improves efficiency and effectiveness, while reducing costs and increasing the potential return from projects.
3. http://www.reactivemanifesto.org
Understanding Messaging
Message-Driven
As described in the manifesto, a message-driven architecture is one of four key components
within a reactive system. Typically, this integration pattern can be event-driven or actor-based.
Event-Driven:
This architecture is based on events which are monitored by zero or more interested parties (clients). It differs from imperative programming in that the client doesn’t need to wait for a response to each request. Events are not directed to a specific destination, but rather made available to interested endpoints.
Actor-Based:
An actor-based system is a conceptual model for handling concurrency. It’s important to understand that, although multiple actors can run at the same time, each actor processes its messages sequentially. The idea is similar to object-oriented languages, but the main difference is that actors are completely isolated from each other and they will never share memory.
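To make the actor model concrete, here is a minimal, illustrative sketch (not any particular actor library): each actor owns private state, receives messages through a mailbox, and processes them one at a time, even though many actors may run concurrently.

```python
import threading
import queue

class Actor:
    """A minimal actor sketch: isolated state, a mailbox, and strictly
    sequential message processing. Names here are illustrative only."""

    def __init__(self):
        self._mailbox = queue.Queue()
        self._state = {"count": 0}          # private; never shared
        self._thread = threading.Thread(target=self._run, daemon=True)
        self._thread.start()

    def tell(self, message):
        """Fire-and-forget: enqueue a message for later processing."""
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()   # one message at a time
            if message is None:             # poison pill stops the actor
                break
            self._state["count"] += message # mutate only local state

    def stop(self):
        self._mailbox.put(None)
        self._thread.join()
        return self._state["count"]

counter = Actor()
for i in range(5):
    counter.tell(i)                         # 0 + 1 + 2 + 3 + 4
total = counter.stop()
print(total)  # 10
```

Because the mailbox serializes delivery, the actor’s state never needs a lock, which is the isolation property described above.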
In this paper, we focus on event-driven concurrency and the realtime messaging layer that
provides the reactive infrastructure a given service will use to publish and subscribe to events.
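As an illustration of the event-driven pattern, the following hypothetical sketch shows a publisher emitting events without waiting for replies, while zero or more subscribers react; names like `EventBus` are invented for this example and do not correspond to any specific product API.

```python
from collections import defaultdict

class EventBus:
    """Minimal event-driven bus sketch: events are broadcast to zero or
    more interested subscribers; the publisher never waits on a reply."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_name, callback):
        self._subscribers[event_name].append(callback)

    def publish(self, event_name, payload):
        # Fire-and-forget: deliver to every interested endpoint.
        for callback in self._subscribers[event_name]:
            callback(payload)

bus = EventBus()
received = []
bus.subscribe("price.updated", lambda p: received.append(p))
bus.subscribe("price.updated", lambda p: received.append(p * 2))

bus.publish("price.updated", 10)    # two interested parties react
bus.publish("volume.updated", 99)   # zero interested parties: no error
print(received)  # [10, 20]
```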
Realtime
The world of “realtime” data is perhaps misunderstood and there can be strong opinions on
the definition – here, we are discussing realtime web technologies that enable applications to
react to events as they happen.
Across much of today’s application integration there is no ability for systems to react in this way. Typically, system A (the client) will ask system B (the server) if new data is available, based on a set of query parameters. So to really understand the value of realtime technology, we need to discuss the limitations of this integration pattern:

Polling is used by the vast majority of applications today. In this model, the client application repeatedly polls a server for data. Built entirely on the foundations of the web, these applications leverage the HTTP protocol, which allows the fetching of data in a request/response pattern. The application asks (requests) the server for data based on some query parameters, and waits for a response, which is often of unknown size. If no data is available, an empty response is returned. The problem here is that empty responses, or multiple responses containing duplicate data, cause a huge amount of overhead.
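That overhead can be illustrated with a small simulation (the numbers are purely hypothetical): the client must issue a request on every tick, even though new data actually arrives only twice.

```python
# Simulated server state: new data becomes available only at ticks 3 and 7,
# yet a polling client must issue a request on every tick to find out.
updates = {3: "price=101", 7: "price=102"}

empty_responses = 0
received = []

for tick in range(10):                  # client polls once per tick
    response = updates.get(tick)        # one request/response round trip
    if response is None:
        empty_responses += 1            # wasted round trip
    else:
        received.append(response)

print(received)         # ['price=101', 'price=102']
print(empty_responses)  # 8 of 10 requests returned nothing
```

In an event-driven model, the server would instead push the two updates as they happen, eliminating the eight empty round trips.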
The 5 V’s of data complexity
• Volume – the sheer scale of data we can or need to access, distributed across many services and systems.
• Velocity – the speed at which data is being generated, the blurred lines between consumer and producer, and the need to move data around in realtime.
• Variety – the ever-growing differences between data resources (structured & unstructured), data models and data location.
• Veracity – the unknowns, inconsistencies and fragmentation of data that are making it increasingly difficult to harness.
• Value – the inherent business opportunity that exists within and between data sets, and being able to unlock this value at the right time.

Data efficiency is a core component of Push Technology’s realtime messaging, and various unique and patent-pending features are offered to achieve this.

1. Network communication is performed by an event-driven kernel that uses non-blocking I/O to interact efficiently. The lock-free design exploits the way that modern CPUs access memory to avoid contention between sessions. This allows a single server to scale linearly across CPU resources, and achieve very high message rates.

2. Messages are serialized in a compact binary form. A small binary header is used to frame each message.

3. Topics provide stateful streams of data to each session. Once a session is subscribed to a topic, a subsequent update is sent as a “delta” (i.e. the difference from the previous value), if doing so will reduce the amount of data sent. This happens transparently to the application, so doesn’t affect ease of use: the delta is automatically applied by the client SDK, and passed to the application as a new value.

4. Messages that can’t be immediately delivered to a session are queued. If the network connection fails, bandwidth is limited, or the client is simply slow, messages can back up on the queue. The platform can conflate the queue to remove messages that are stale or no longer relevant, or combine multiple related messages into a single consolidated message.

5. In addition to automatic throttling and conflation, the rate of messages delivered to a session can be manually constrained, to prioritize one session over another or to place limits on bandwidth utilization.

6. Sessions subscribe to topics using wildcard selectors. Unlike traditional messaging products, when new topics are added, sessions with matching selectors are automatically subscribed.

This data-efficient approach to messaging means that most development teams see savings in bandwidth of up to 90%.
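Delta streaming can be sketched in simplified form. Real implementations compute compact binary deltas, but a field-level diff over a map-shaped value shows the same idea; all names here are illustrative, not Push Technology’s API.

```python
def make_delta(previous, current):
    """Server side: keep only the fields that changed since the last
    published value (a simplified stand-in for binary deltas)."""
    return {k: v for k, v in current.items() if previous.get(k) != v}

def apply_delta(previous, delta):
    """Client side: reconstruct the full value from the delta, so the
    application always sees a complete new value."""
    merged = dict(previous)
    merged.update(delta)
    return merged

v1 = {"bid": 100.0, "ask": 100.5, "venue": "LSE"}
v2 = {"bid": 100.1, "ask": 100.5, "venue": "LSE"}

delta = make_delta(v1, v2)
reconstructed = apply_delta(v1, delta)
print(delta)                 # {'bid': 100.1} — one field sent, not three
print(reconstructed == v2)   # True
```

When only a small part of a value changes between updates, sending the delta rather than the whole value is where the bandwidth savings come from.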
Enterprise messaging and middleware has long provided effective connectivity between
legacy systems that would otherwise be unable to talk to each other. However, with application
data, services and computing resources now widely distributed across a multitude of cloud
platforms, a new approach to application integration is urgently required. Legacy middleware
was designed to handle data traffic across dedicated and managed corporate networks.
Organizations now need a data distribution layer that can adapt to the realities of the internet,
connect the explosion of cloud services, and react to the ever mobile nature of all users.
Short term business objectives and time-to-market demands often mean tactical integration
decisions are made for many cloud and mobile applications, rather than considering a
scalable long-term view across application architecture. Apps should leverage a reactive integration and abstraction layer that hides the complexity of your data model and decouples applications from potential backend changes. Then app developers can spend more time building features, and less time fixing integration problems.
Microservices have emerged as the preferred approach to build scalable applications that can
adapt as requirements change over time. In contrast to monolithic applications which typically
have a single relational database, a microservices architecture requires a mechanism to
manage data communication between a potentially large number of services that are often
running different technology stacks and located across multiple cloud services. By using an
event-driven data backplane that offers the SDKs your developers need, a given microservice
can easily publish events that other services (existing or future) may subscribe to.
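The backplane idea can be sketched as follows. This is a hypothetical in-process stand-in (using glob-style matching to illustrate wildcard selectors), not any vendor’s API: one service publishes events to topic paths, and other services subscribe with patterns, so topics added later still reach existing subscribers.

```python
from fnmatch import fnmatch

class Backplane:
    """Hypothetical in-process stand-in for an event-driven data
    backplane: services publish to topic paths; subscribers register
    wildcard selectors that also match topics created later."""

    def __init__(self):
        self._selectors = []    # list of (pattern, callback) pairs

    def subscribe(self, pattern, callback):
        self._selectors.append((pattern, callback))

    def publish(self, topic, event):
        for pattern, callback in self._selectors:
            if fnmatch(topic, pattern):     # glob-style wildcard match
                callback(topic, event)

backplane = Backplane()
audit_log = []

# An audit service subscribes once, before most topics exist.
backplane.subscribe("orders/*", lambda t, e: audit_log.append((t, e)))

# The order service publishes; "orders/shipped" is a new topic but
# still matches the existing selector.
backplane.publish("orders/created", {"id": 1})
backplane.publish("orders/shipped", {"id": 1})
backplane.publish("payments/settled", {"id": 9})   # not matched

print(audit_log)
```

The publishing service never needs to know which other services exist, which is what lets future services subscribe without changes to existing ones.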
The massive scale associated with the Internet of Things presents some unique challenges in terms of application architecture, integration and data movement. With around 5 billion connected devices today, and the market expected to grow to 50 billion connected devices by 2020, organizations must act now to ensure they can support the speed and scale of this growth. An effective IoT architecture requires an event-driven integration platform that offers predictable performance and latency as connections increase, can intelligently distribute only necessary data, and also ensures connections are secure.

Realtime Event Processing & Analytics

Big data resources often exist in various forms, across multiple data centers, presenting challenges for event processing and analytics. When integrated with data-efficient messaging, organizations can provide realtime distribution of this data in a way that manages the complexities of connectivity across the internet.
✓ Value-Oriented Programming
A value-oriented programming model is a fundamental feature of a reactive data model.
Applications are built against an API that provides streams of values, rather than individual
messages that need further decoding. Client SDKs provide a common programming model,
making best use of the features particular to the implementation language. This frees
developers to focus on application concerns, rather than data integration.
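A value-oriented API can be contrasted with a message-oriented one in a short, hypothetical sketch: subscribers are handed whole, ready-to-use values (including the latest value at the moment of subscription), never raw messages that need further decoding. The `ValueStream` name is invented for this example.

```python
class ValueStream:
    """Sketch of a value-oriented programming model: subscribers
    receive complete current values, not individual messages."""

    def __init__(self, initial):
        self._value = initial
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)
        listener(self._value)       # late joiners get the latest value

    def set(self, value):
        self._value = value
        for listener in self._listeners:
            listener(value)         # every update arrives as a new value

stream = ValueStream({"symbol": "ABC", "price": 100})
seen = []
stream.subscribe(seen.append)       # immediately receives current value
stream.set({"symbol": "ABC", "price": 101})
print(seen)
```

The application logic only ever handles a complete, current value, which is the separation of application concerns from data integration described above.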