Best Time Series Databases - Page 2

Compare the Top Time Series Databases as of June 2025 - Page 2

  • 1
    Alibaba Cloud TSDB
    Time Series Database (TSDB) supports high-speed data reading and writing and offers high compression ratios for cost-efficient data storage. The service also supports precision reduction (downsampling), interpolation, multi-metric aggregate computing, and visualization of query results. TSDB reduces storage costs and improves the efficiency of data writing, querying, and analysis, which enables you to handle large numbers of data points and collect data more frequently. The service has been widely applied to systems in different industries, such as IoT monitoring systems, enterprise energy management systems (EMSs), production security monitoring systems, and power supply monitoring systems. With optimized database architectures and algorithms, TSDB can read or write millions of data points within seconds, and an efficient compression algorithm reduces each data point to about 2 bytes, saving more than 90% in storage costs.
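    As a rough illustration of the compression claim, here is a back-of-the-envelope calculation in Python; the 16-byte raw point size is an assumption for illustration, not an Alibaba Cloud figure.
```python
# Back-of-the-envelope storage-savings estimate (illustrative only).
# Assume an uncompressed point is an 8-byte timestamp plus an 8-byte double value.
raw_bytes_per_point = 8 + 8
compressed_bytes_per_point = 2   # figure quoted for the TSDB service

savings = 1 - compressed_bytes_per_point / raw_bytes_per_point
print(f"Estimated storage savings: {savings:.0%}")
# -> 88%; with per-point metadata and tag overhead in the raw form,
# the savings exceed 90%.
```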
  • 2
    Amazon Timestream
    Amazon Timestream is a fast, scalable, and serverless time series database service for IoT and operational applications that makes it easy to store and analyze trillions of events per day up to 1,000 times faster and at as little as 1/10th the cost of relational databases. Amazon Timestream saves you time and cost in managing the lifecycle of time series data by keeping recent data in memory and moving historical data to a cost-optimized storage tier based upon user-defined policies. Amazon Timestream's purpose-built query engine lets you access and analyze recent and historical data together, without needing to specify explicitly in the query whether the data resides in the in-memory or cost-optimized tier. Amazon Timestream has built-in time series analytics functions, helping you identify trends and patterns in your data in near real time.
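    A minimal sketch of writing and querying a record with the AWS SDK for Python (boto3); the database, table, and dimension names are placeholders, and the table is assumed to already exist.
```python
import time
import boto3

# Placeholder database/table names; create them in Timestream beforehand.
DATABASE, TABLE = "iot_db", "sensor_readings"

write_client = boto3.client("timestream-write")
write_client.write_records(
    DatabaseName=DATABASE,
    TableName=TABLE,
    Records=[{
        "Dimensions": [{"Name": "device_id", "Value": "dev-42"}],
        "MeasureName": "temperature",
        "MeasureValue": "21.7",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),   # milliseconds since epoch
        "TimeUnit": "MILLISECONDS",
    }],
)

# Query recent and historical data together; Timestream picks the tier.
query_client = boto3.client("timestream-query")
result = query_client.query(
    QueryString=f'SELECT device_id, avg(measure_value::double) AS avg_temp '
                f'FROM "{DATABASE}"."{TABLE}" '
                f"WHERE time > ago(1h) GROUP BY device_id"
)
print(result["Rows"])
```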
  • 3
    KX Streaming Analytics
    KX Streaming Analytics provides the ability to ingest, store, process, and analyze historical and time series data so that analytics, insights, and visualizations are instantly available. To help ensure your applications and users are productive quickly, the platform provides the full lifecycle of data services, including query processing, tiering, migration, archiving, data protection, and scaling. Our advanced analytics and visualization tools, used widely across finance and industry, enable you to define and perform queries, calculations, aggregations, machine learning, and AI on any streaming and historical data. The platform is deployable across multiple hardware environments, and data can come from real-time business events and high-volume sources including sensors, clickstreams, radio-frequency identification, GPS systems, social networking sites, and mobile devices.
  • 4
    Versio.io
    Versio.io is enterprise software for managing the detection and post-processing of changes across an enterprise. Our unique and innovative approaches have enabled us to build a completely new kind of enterprise product. Relationships can exist between assets & configurations and represent an important extension of the information, which the original data sources only partially provide. In Versio.io, the topology service can automatically recognise and map such relationships, so relationships or dependencies between instances from any data source can be mapped. All business-relevant assets and configuration items from all levels of an organisation can be captured, historicised, topologised, and stored in a central repository.
  • 5
    OneTick (OneMarketData)
    Its performance, superior features, and unmatched functionality have led OneTick Database to be embraced by leading banks, brokerages, data vendors, exchanges, hedge funds, market makers, and mutual funds. OneTick is the premier enterprise-wide solution for tick data capture, streaming analytics, data management, and research. OneTick's proprietary time series database is a unified, multi-asset-class platform that includes a fully integrated streaming analytics engine and built-in business logic to eliminate the need for multiple disparate systems. The system provides the lowest total cost of ownership available.
  • 6
    OpenTSDB
    OpenTSDB consists of a Time Series Daemon (TSD) as well as a set of command-line utilities. Interaction with OpenTSDB is primarily achieved by running one or more of the independent TSDs. There is no master and no shared state, so you can run as many TSDs as required to handle any load you throw at them. Each TSD uses the open-source database HBase or the hosted Google Bigtable service to store and retrieve time series data. The data schema is highly optimized for fast aggregations of similar time series and minimal storage space. Users of the TSD never need to access the underlying store directly. You can communicate with the TSD via a simple telnet-style protocol, an HTTP API, or a simple built-in GUI. The first step in using OpenTSDB is to send time series data to the TSDs, and a number of tools exist to pull data from various sources into OpenTSDB.
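    For example, a single data point can be pushed over the HTTP API with a plain POST to /api/put; the host, port, metric name, and tags below are assumptions for a local TSD on the default port 4242.
```python
import time
import requests  # third-party HTTP client

# Assumes a TSD listening locally on the default port 4242.
datapoint = {
    "metric": "sys.cpu.user",
    "timestamp": int(time.time()),
    "value": 42.5,
    "tags": {"host": "web01", "dc": "lga"},
}
resp = requests.post("http://localhost:4242/api/put", json=datapoint, timeout=5)
resp.raise_for_status()  # 204 No Content on success
```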
  • 7
    Machbase
    Machbase is a time series database that stores and analyzes large volumes of sensor data from various facilities in real time, and it is the only DBMS solution that can process and analyze big data at high speed. Experience the amazing speed of Machbase, the most innovative product for real-time processing, storage, and analysis of sensor data. It offers high-speed storage and querying of sensor data by embedding the DBMS in edge devices, best-in-class data storage and retrieval performance from a DBMS running on a single server, multi-node cluster configuration for availability and scalability, and a total edge-computing management solution covering devices, connectivity, and data.
  • 8
    Blueflood
    Blueflood is a high-throughput, low-latency, multi-tenant distributed metric processing system behind Rackspace Metrics, which is used in production by the Rackspace Monitoring team and the Rackspace public cloud team to store metrics generated by their systems. In addition to Rackspace Metrics, other large-scale deployments of Blueflood are listed on the community wiki. Data from Blueflood can be used to construct dashboards, generate reports and graphs, or serve any other use involving time series data. It focuses on near-real-time data, which is queryable mere milliseconds after ingestion. You send metrics to the ingestion service, you query your metrics from the query service, and in the background, rollups are batch-processed offline so that queries over large time periods return quickly.
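    A sketch of that ingest/query round trip over HTTP, assuming a local deployment with the commonly documented default ports (19000 for ingestion, 20000 for query) and a sample tenant ID; the field names follow Blueflood's JSON ingestion format, but treat the ports and tenant as assumptions.
```python
import time
import requests  # third-party HTTP client

TENANT = "836986"          # sample tenant id (assumption)
now_ms = int(time.time() * 1000)

# Send a metric to the ingestion service.
requests.post(
    f"http://localhost:19000/v2.0/{TENANT}/ingest",
    json=[{
        "collectionTime": now_ms,
        "ttlInSeconds": 172800,
        "metricValue": 66,
        "metricName": "example.metric.one",
    }],
    timeout=5,
).raise_for_status()

# Query it back from the query service.
resp = requests.get(
    f"http://localhost:20000/v2.0/{TENANT}/views/example.metric.one",
    params={"from": now_ms - 3600_000, "to": now_ms + 1, "resolution": "FULL"},
    timeout=5,
)
print(resp.json())
```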
  • 9
    Hawkular Metrics
    Hawkular Metrics is a scalable, asynchronous, multi-tenant, long-term metrics storage engine that uses Cassandra as the data store and REST as the primary interface. Hawkular Metrics is all about scalability: you can run a single instance backed by a single Cassandra node, or scale out Cassandra to multiple nodes to handle increasing loads. The Hawkular Metrics server employs a stateless architecture, which makes it easy to scale out as well. Its scalable architecture supports a range of deployment options, from the simplest setup of a single Cassandra node and a single Hawkular Metrics node up to configurations that run more Hawkular Metrics nodes than Cassandra nodes.
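    Because REST is the primary interface and the store is multi-tenant, a write looks roughly like the following; the server address, tenant name, and metric id are assumptions, and the gauge "raw" endpoint is taken from the Hawkular Metrics REST documentation.
```python
import time
import requests  # third-party HTTP client

BASE = "http://localhost:8080/hawkular/metrics"   # assumed server address
HEADERS = {"Hawkular-Tenant": "my-tenant"}        # tenant header is mandatory

# Push raw data points for a gauge metric.
requests.post(
    f"{BASE}/gauges/cpu.load/raw",
    headers=HEADERS,
    json=[{"timestamp": int(time.time() * 1000), "value": 0.42}],
    timeout=5,
).raise_for_status()

# Read the raw data points back.
resp = requests.get(f"{BASE}/gauges/cpu.load/raw", headers=HEADERS, timeout=5)
print(resp.json())
```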
  • 10
    Heroic
    Heroic is an open-source monitoring system originally built at Spotify to address the problems of large-scale gathering and near-real-time analysis of metrics. Heroic uses a small set of components, each responsible for a very specific task. Consumers are the components responsible for ingesting metrics. Retention is indefinite, for as long as you have the hardware to spend on it. When building Heroic, it was quickly realized that navigating hundreds of millions of time series without context is hard. Heroic supports federating requests, which allows multiple independent Heroic clusters to serve clients through a single global interface. This can be used to reduce geographical traffic by allowing each cluster to operate completely isolated within its zone.
  • 11
    Proficy Historian
    Proficy Historian is a best-in-class historian software solution that collects industrial time series and A&E data at very high speed, stores it efficiently and securely, distributes it, and allows for fast retrieval and analysis, driving greater business value. With decades of experience and thousands of successful customer installations around the world, Proficy Historian changes the way companies perform and compete by making data available for asset and process performance analysis. The most recent Proficy Historian release enhances usability, configurability, and maintainability with significant architectural improvements, including remote collector management through the UX and horizontal scalability that enables enterprise-wide data visibility. Take advantage of the solution's simple yet powerful features to unlock new value from your equipment, process data, and business models.
  • 12
    Circonus IRONdb
    Circonus IRONdb makes it easy to handle and store unlimited volumes of telemetry data, easily handling billions of metric streams. Circonus IRONdb enables users to identify areas of opportunity and challenge in real time, providing forensic, predictive, and automated analytics capabilities that no other product can match. Rely on machine learning to automatically set a “new normal” as your data and operations dynamically change. Circonus IRONdb integrates with Grafana, which has native support for our analytics query language. We are also compatible with other visualization apps, such as Graphite-web. Circonus IRONdb keeps your data safe by storing multiple copies of your data in a cluster of IRONdb nodes. System administrators typically manage clustering, often spending significant time maintaining it and keeping it working. Circonus IRONdb allows operators to set and forget their cluster, and stop wasting resources manually managing their time series data store.
  • 13
    KairosDB
    Data can be pushed into KairosDB via multiple protocols, such as Telnet, REST, and Graphite; other mechanisms, such as plugins, can also be used. KairosDB stores time series in Cassandra, the popular and performant NoSQL datastore, using a schema that consists of 3 column families. The REST API provides operations to list existing metric names, list tag names and values, store metric data points, and query for metric data points. With a default install, KairosDB serves up a query page where you can query data within the data store; it is designed primarily for development purposes. Aggregators perform an operation on data points and downsample them, and standard functions like min, max, sum, count, mean, and more are available. Import and export are available on the KairosDB server from the command line, and internal metrics of the data store can be used to monitor the server's performance.
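    A sketch of the REST path, assuming a local server on the default port 8080 and an illustrative metric name: push data points to /api/v1/datapoints, then query them back with the avg aggregator.
```python
import time
import requests  # third-party HTTP client

BASE = "http://localhost:8080/api/v1"   # default KairosDB port assumed
now_ms = int(time.time() * 1000)

# Store a data point.
requests.post(f"{BASE}/datapoints", json=[{
    "name": "cpu.load",
    "datapoints": [[now_ms, 1.25]],
    "tags": {"host": "server1"},
}], timeout=5).raise_for_status()

# Query the last hour, downsampled with the avg aggregator.
query = {
    "start_relative": {"value": 1, "unit": "hours"},
    "metrics": [{
        "name": "cpu.load",
        "tags": {"host": ["server1"]},
        "aggregators": [{
            "name": "avg",
            "sampling": {"value": 5, "unit": "minutes"},
        }],
    }],
}
resp = requests.post(f"{BASE}/datapoints/query", json=query, timeout=5)
print(resp.json())
```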
  • 14
    QuestDB
    QuestDB is a relational column-oriented database designed for time series and event data. It uses SQL with extensions for time series to assist with real-time analytics. These pages cover core concepts of QuestDB, including setup steps, usage guides, and reference documentation for syntax, APIs, and configuration. This section describes the architecture of QuestDB, how it stores and queries data, and introduces features and capabilities unique to the system. The designated timestamp is a core feature that enables time-oriented language capabilities and partitioning. The symbol type makes storing and retrieving repetitive strings efficient. The storage model describes how QuestDB stores records and partitions within tables. Indexes can be used for faster read access on specific columns, and partitions can deliver significant performance benefits on calculations and queries. SQL extensions allow performant time series analysis with a concise syntax.
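    A small sketch of those SQL extensions via QuestDB's HTTP /exec endpoint; it assumes a local instance on the default port 9000 and an illustrative 'trades' table, which are not part of the description above.
```python
import requests  # third-party HTTP client

# Assumes a local QuestDB instance with its HTTP endpoint on port 9000 and an
# illustrative 'trades' table with designated timestamp 'ts', a SYMBOL column
# 'symbol', and a DOUBLE column 'price'.
SQL = """
SELECT symbol, avg(price) AS avg_price
FROM trades
WHERE ts > dateadd('h', -1, now())
SAMPLE BY 5m
"""
resp = requests.get("http://localhost:9000/exec", params={"query": SQL}, timeout=5)
print(resp.json()["dataset"])
```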
  • 15
    Canary Historian
    The beauty of the Canary Historian is that the same solution works as well on site as it does for the entire enterprise. You can log data locally while sending it to your enterprise historian simultaneously. Best of all, as you grow, so does the solution. A single Canary Historian can log more than two million tags, and multiple Canary Historians can be clustered to handle tens of millions of tags. Enterprise historian solutions can be hosted in your own data centers or in AWS and Azure. And, unlike other enterprise historian solutions, Canary Historians don't require specialized teams of ten or more people to maintain them. The Canary Historian is a NoSQL time series database that uses lossless compression algorithms to provide the best of both worlds: high-speed performance without requiring data interpolation.
    Starting Price: $9,970 one-time payment
  • 16
    DataStax
    The open, multi-cloud stack for modern data apps, built on open-source Apache Cassandra™. Global scale and 100% uptime without vendor lock-in. Deploy on multi-cloud, on-prem, open-source, and Kubernetes. Elastic and pay-as-you-go for improved TCO. Start building faster with Stargate APIs for NoSQL, real-time, reactive, JSON, REST, and GraphQL, and skip the complexity of multiple OSS projects and APIs that don't scale. Ideal for commerce, mobile, AI/ML, IoT, microservices, social, gaming, and richly interactive applications that must scale up and scale down with demand. Start building modern data applications with Astra, a database-as-a-service powered by Apache Cassandra™. Use REST, GraphQL, and JSON with your favorite full-stack framework to build richly interactive apps that are elastic and viral-ready from day 1, on a pay-as-you-go Apache Cassandra DBaaS that scales effortlessly and affordably.
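    A hedged sketch of storing a JSON document through the Stargate Document API against an Astra database; the endpoint shape follows Stargate's documented v2 path layout, while the namespace, collection, and environment variable names are assumptions, and the token comes from the Astra dashboard.
```python
import os
import requests  # third-party HTTP client

# Assumed endpoint shape: https://{db-id}-{region}.apps.astra.datastax.com
ASTRA_BASE = os.environ["ASTRA_DB_ENDPOINT"]   # hypothetical env var name
TOKEN = os.environ["ASTRA_DB_TOKEN"]           # application token (hypothetical env var)
HEADERS = {"X-Cassandra-Token": TOKEN, "Content-Type": "application/json"}

# Store a JSON document in a collection via the Document API.
doc = {"device": "sensor-7", "reading": 21.4, "ts": "2025-06-01T12:00:00Z"}
resp = requests.post(
    f"{ASTRA_BASE}/api/rest/v2/namespaces/iot/collections/readings",
    headers=HEADERS,
    json=doc,
    timeout=10,
)
resp.raise_for_status()
print(resp.json())   # returns the generated document id
```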
  • 17
    kdb+ (KX Systems)
    A high-performance, cross-platform, historical time series columnar database featuring:
    - An in-memory compute engine
    - A real-time streaming processor
    - An expressive query and programming language called q
    kdb+ powers the kdb Insights portfolio and KDB.AI, together delivering time-oriented data insights and generative AI capabilities to the world's leading enterprise organizations. Independently benchmarked* as the fastest in-memory, columnar analytics database available, kdb+ delivers unmatched value to businesses operating in the toughest data environments. kdb+ improves decision-making processes to help navigate rapidly changing data landscapes.
  • 18
    GridDB
    GridDB uses multicast communication to form a cluster, so configure the network to enable multicast communication. First, check the host name and IP address: execute the "hostname -i" command to check the IP address configured for the host. If the command returns the machine's correct IP address, no additional network settings are needed and you can skip to the next section. GridDB is a database that manages groups of data (known as rows), each made up of a key and multiple values. Besides an in-memory configuration that places all the data in memory, it can also adopt a hybrid configuration that combines the use of a disk (including SSD) and memory.
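    The host-name check described above can also be scripted; the following is a minimal Python equivalent of "hostname -i", purely illustrative and not part of GridDB itself.
```python
import socket

# Resolve this host's name to an IP address, mirroring `hostname -i`.
hostname = socket.gethostname()
ip_address = socket.gethostbyname(hostname)
print(f"{hostname} resolves to {ip_address}")

# If this prints a loopback address such as 127.0.0.1, fix /etc/hosts or DNS
# before configuring the cluster, since nodes must reach each other via a
# routable address.
```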
  • 19
    JaguarDB
    JaguarDB enables fast ingestion of time series data coupled with location-based data, and it can index in both dimensions, space and time. Back-filling time series data (inserting large volumes of data at past times) is also fast. Normally, a time series is a series of data points indexed in time order. In JaguarDB, a time series has a broader meaning: it is both a sequence of data points and a series of tick tables holding aggregated data values at specified time spans. For example, a time series table in JaguarDB can have a base table storing data points in time order, plus tick tables, such as 5-minute, 15-minute, hourly, daily, weekly, and monthly tables, that store the data aggregated over these spans. The format of the RETENTION is the same as the TICK format, except that it can have any number of retention periods. The RETENTION specifies how long the data points in the base table should be kept.
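    The base-table-plus-tick-tables idea can be pictured with a small, generic Python sketch; this is a conceptual illustration of the aggregation scheme, not JaguarDB syntax or its actual storage format.
```python
from collections import defaultdict

# Conceptual model only: a "base table" of (epoch_seconds, value) points and a
# 5-minute "tick table" holding per-bucket aggregates.
base_table = [(1700000000, 10.0), (1700000060, 12.0), (1700000400, 9.0)]

tick_5m = defaultdict(list)
for ts, value in base_table:
    bucket = ts - ts % 300          # start of the 5-minute span
    tick_5m[bucket].append(value)

# Aggregated values per 5-minute span, as a tick table would store them.
for bucket, values in sorted(tick_5m.items()):
    print(bucket, sum(values) / len(values), min(values), max(values))
```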