
Published at Mar 6, 2024


Is Postgres Partitioning Really That Hard? An Introduction To Hypertables

A colorful tiger with a grid (representing the many Postgres partitions) in the background.

Written by Carlota Soto

If you’re working with growing PostgreSQL tables, you're likely no stranger to the challenges of managing large datasets efficiently:

  • Your query performance is degrading.

  • You’re dealing with maintenance overhead. 

  • You’re finding it hard to keep up with high ingestion rates.

  • You’re having trouble managing your data lifecycle.

Postgres partitioning is your most powerful ally in solving these problems. By partitioning your large PostgreSQL tables, you can keep them fast and efficient. But setting up and maintaining partitioned PostgreSQL tables in production can be difficult.  

“Yes,” your mind may go, “I might be able to improve my performance if I partition my tables, but at the cost of countless hours spent on manual configuration, maintenance jobs, and testing, not to mention the unforeseen issues that might pop up during scaling. It’s like having a powerful car with an incredibly complicated gearbox.” If you’re using vanilla PostgreSQL in products like Amazon RDS, there’s a lot of truth to this. You will undoubtedly spend much of your time managing your partitioned tables. Plus, you’ll have to deal with custom scripts, keep up rigorous maintenance practices, and carefully monitor performance so you can revisit and tweak your configuration whenever your dataset or ingestion rate changes.

But guess what: there’s a better way of creating a Postgres partition, and it’s called hypertables.

Meet Hypertables: Automatic PostgreSQL Partitioning for Your Large PostgreSQL Tables

Hypertables (which are available via the TimescaleDB extension and, in AWS, via the Timescale platform) are an innovation that makes the experience of creating a Postgres partition completely seamless. They automate the generation and management of data partitions without changing your user experience. 

Working with a hypertable feels exactly like working with a regular PostgreSQL table. But under the covers, hypertables do all the partitioning magic, speeding up your queries and ingests. This performance boost is sustained as your tables keep growing, making hypertables extremely scalable.


Hypertables look like regular PostgreSQL tables, but under the hood, they’re being automatically partitioned to enhance performance

Hypertables are optimized for time-based partitioning, so this is the type of partitioning that we’ll focus on in this article. However, hypertables also work for tables that aren’t time-based but have something similar, for example, a BIGINT primary key.
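For instance, a table keyed by a monotonically increasing BIGINT can be partitioned on that column instead of a timestamp. A minimal sketch (the events table and the one-million-row partition interval are hypothetical examples):

```sql
-- Hypothetical table with a monotonically increasing BIGINT key
CREATE TABLE events (
    id      BIGINT NOT NULL,
    payload TEXT
);

-- Partition by ranges of one million ids instead of a time interval
SELECT create_hypertable('events', by_range('id', 1000000));
```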

Let’s explain how hypertables work with an example.

Imagine you have a PostgreSQL table called sensor_data, where data from various IoT devices is stored with a timestamp. The table might look something like this:

CREATE TABLE sensor_data (
    device_id   INT NOT NULL,
    event_time  TIMESTAMPTZ NOT NULL,
    temperature FLOAT NOT NULL,
    humidity    FLOAT NOT NULL
);

Now, as the volume of sensor_data grows, you start facing performance issues and management complexities. Here’s where hypertables come to help. If you were using Timescale, the only thing you’d need to do is convert your sensor_data table into a hypertable: 

SELECT create_hypertable('sensor_data', by_range('event_time'));

This is how easy it is. With this simple command, sensor_data is now a hypertable that automatically partitions your data by the event_time column. 

Your PostgreSQL partitioning is all set.
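From here on, reads and writes look exactly like they would against any other PostgreSQL table; for example (the values are illustrative):

```sql
-- Inserts are routed to the right partition automatically
INSERT INTO sensor_data (device_id, event_time, temperature, humidity)
VALUES (42, NOW(), 21.5, 60.0);

-- Queries with a time filter only scan the relevant partitions
SELECT device_id, AVG(temperature)
FROM sensor_data
WHERE event_time > NOW() - INTERVAL '1 day'
GROUP BY device_id;
```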


Your data will be automatically partitioned as it gets ingested into the hypertable, with no manual work required on your end to create or manage such partitions

Native Partitioning vs. Hypertables: How Much Easier Does It Get? 

Let’s look at what’s happening under the hood.

If you were using a traditional native method to create a Postgres partition, you would have to go through all these steps to set up partitioning in sensor_data: 

  1. Create a parent table with the common schema and constraints but no data.

  2. Define child tables, each covering a specific time range. 

  3. Add indexes to the parent and child tables. 

  4. Set up a job for scheduling the creation of partitions. 

  5. Set up a job for managing old partitions.

  6. Attach it all together.

Each of these steps comes with its own chunk of code; they require additional extensions, like pg_partman and pg_cron; and you’ll have to monitor each step for potential issues and make manual adjustments along the way. Overall, you’ll create significant maintenance overhead for yourself.
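For contrast, here is a condensed sketch of what the native route looks like for sensor_data (the partition names and date ranges are illustrative, and the scheduling jobs are only hinted at):

```sql
-- 1. Parent table, schema only
CREATE TABLE sensor_data (
    device_id   INT NOT NULL,
    event_time  TIMESTAMPTZ NOT NULL,
    temperature FLOAT NOT NULL,
    humidity    FLOAT NOT NULL
) PARTITION BY RANGE (event_time);

-- 2. One child table per time range, created by hand (or by pg_partman)
CREATE TABLE sensor_data_2024_03 PARTITION OF sensor_data
    FOR VALUES FROM ('2024-03-01') TO ('2024-04-01');

-- 3. Indexes on the parent (each child's index is still its own object)
CREATE INDEX ON sensor_data (event_time DESC);

-- 4./5. Jobs to create future partitions and detach old ones,
--       typically scheduled with pg_cron or managed by pg_partman
```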

What hypertables do instead is encapsulate and automate all these steps, significantly reducing the complexity, manual effort, and potential for errors on your end:

  • With hypertables, there’s no need to manually create a parent table or define child tables for each time range. You simply convert your existing table into a hypertable. 

  • Hypertables also simplify indexing. When you create an index on a hypertable, Timescale automatically creates the corresponding indexes on all current and future partitions, ensuring consistent query performance without manual adjustments.

  • Hypertables automatically create new partitions on the fly based on the specified time interval. As new data is ingested, appropriate partitions are ready to store the data without manual intervention or scheduled jobs. Using Timescale eliminates the risk of partitions not existing, completely removing partition management from your to-do list. 

  • Timescale maintains its own partition catalogs and implements its own minimized locking strategy to ensure that your application’s read or write operations are never blocked by the underlying partitioning operations (something that can be an issue in native PostgreSQL partitioning).

Once your PostgreSQL table becomes a hypertable, you can keep querying it as usual. You will instantly experience a performance boost. When you execute a query, Timescale’s query planner intelligently routes the query to the appropriate partition(s), ensuring that only relevant data is scanned. This process remains completely transparent; you don't need to think about it or worry about which partition contains which data.

Something similarly straightforward happens when you ingest data. Timescale takes care of routing your new data to the appropriate partition under the hood, ensuring that each partition remains optimally sized. (The default partition size is seven days in Timescale, but you can easily modify this.)
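Changing the partition (chunk) interval is a one-liner; for example, to switch sensor_data to daily chunks:

```sql
-- Applies to newly created chunks; existing chunks keep their interval
SELECT set_chunk_time_interval('sensor_data', INTERVAL '1 day');
```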

Partitioning Is Only the Beginning: Features Unlocked With Hypertables

Hypertables make partitioning seamless and unlock a wealth of features that will help you improve your PostgreSQL performance even further and save you time when managing your data. 

A few examples: 

  • Columnar compression for faster queries and cheaper storage. By enabling Timescale compression, your hypertable will change from row-oriented to column-oriented. This can reduce storage usage by up to 95% and unlock blazing-fast analytical queries while still allowing the data to be updated.

  • Blazing-fast analytical views. By creating incrementally updated materialized views, known as continuous aggregates, you’ll improve the performance of aggregate queries tremendously. 

    Continuous aggregates automatically refresh and store aggregated data, enabling you to build fast visualizations, including real-time insights and historical analytics that go back in time. 

  • Easy and configurable data retention. Hypertables allow you to set up automatic data retention policies with one simple command: add_retention_policy. Just tell Timescale when you want your data dropped, and your hypertables will automatically drop outdated partitions when the time comes. 

  • SQL hyperfunctions to run analytics with fewer lines of code. Hypertables come with a full set of hyperfunctions: blazing-fast analytical functions, procedures, and data types optimized for efficiently querying, aggregating, and analyzing large volumes of data. 

  • Faster DISTINCT and now() queries. Queries that reference now() when pruning partitions perform better in Timescale, and your ordered DISTINCT queries benefit from SkipScan. 

  • Built-in job scheduler. The Timescale job scheduler lets you schedule any SQL or function-based job within PostgreSQL, meaning you don’t need an external scheduler or another extension like pg_cron.
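Several of the features above boil down to one-line policies on the hypertable. A sketch against the earlier sensor_data table (the intervals are arbitrary examples):

```sql
-- Columnar compression: compress chunks older than seven days
ALTER TABLE sensor_data SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'device_id'
);
SELECT add_compression_policy('sensor_data', compress_after => INTERVAL '7 days');

-- Data retention: drop chunks older than ninety days
SELECT add_retention_policy('sensor_data', drop_after => INTERVAL '90 days');
```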

When To Use Hypertables: Example Use Cases 

In sum, if you plan to partition your PostgreSQL tables by time, you’ll surely benefit from hypertables. But who doesn’t love some concrete use-case examples? 

Let’s paint a few scenarios where hypertables would be most useful. Needless to say, this is not a comprehensive list! If you’re intrigued by hypertables and Timescale but are unsure if your use case is a fit, don’t hesitate to contact us.

Ingesting thousands of energy metrics per second 

An engineering team at a leading energy company is tasked with managing the data from a newly installed smart grid, a big investment that gives the company granular insights into energy consumption, distribution efficiency, and grid health metrics. Elements in the smart grid generate thousands of energy metrics per second that need to be properly collected, analyzed, and managed. 

These energy metrics are currently stored in PostgreSQL, but the engineering team has to figure out the best solution to ingest this high-velocity data efficiently without losing granularity or accuracy. They must also ensure they can query this data quickly for real-time monitoring and analysis. 

This would be an ideal use case for Timescale: 

  • Timescale’s hypertables can handle the high ingestion without imposing manual work on the team. 

  • Hypertables also optimize query performance, ensuring that real-time energy data remains readily accessible for queries. 

  • As the smart grid expands, Timescale's hypertables will seamlessly scale, accommodating increased data volumes without compromising performance. 

  • Given that Timescale is built on PostgreSQL, the engineering team can leverage their existing knowledge and tools, ensuring a smooth transition and minimal learning curve.

-- Creating a table to store energy metrics from the smart grid
CREATE TABLE energy_metrics (
    element_id INT NOT NULL,
    event_time TIMESTAMPTZ NOT NULL,
    voltage    DECIMAL NOT NULL,
    current    DECIMAL NOT NULL,
    frequency  DECIMAL NOT NULL,
    PRIMARY KEY (element_id, event_time)
);

-- Converting the energy_metrics table into a hypertable
SELECT create_hypertable('energy_metrics', by_range('event_time'));

-- Sample query to ingest new metrics data into the hypertable
INSERT INTO energy_metrics (element_id, event_time, voltage, current, frequency)
VALUES (1, NOW(), 210.5, 10.7, 50.01);

-- Sample query to retrieve the latest energy metrics for real-time monitoring
SELECT *
FROM energy_metrics
WHERE element_id = 1
ORDER BY event_time DESC
LIMIT 10;

Building dashboards for monitoring sensor data  

An industrial manufacturing company operates a range of heavy machinery and equipment in its facilities. Each piece of machinery is equipped with sensors that continuously monitor and log temperature data in a sensor_data table to ensure optimal performance and safety.

The company needs its PostgreSQL database to achieve two distinct yet critical objectives:

  • Provide engineers and maintenance staff with real-time temperature data to detect anomalies and ensure that machinery is operating within safe temperature ranges.

  • Analyze historical temperature data to identify trends, predict maintenance needs, and improve operational efficiency.

The team decides to turn sensor_data into a hypertable. To facilitate real-time monitoring, they create a continuous aggregate that calculates the average temperature for every piece of machinery and is refreshed every minute:

CREATE MATERIALIZED VIEW real_time_avg_temp
WITH (timescaledb.continuous) AS
SELECT device_id,
       time_bucket('1 minute', event_time) AS one_min,
       AVG(temperature) AS avg_temp
FROM sensor_data
GROUP BY device_id, one_min;

-- Refresh the aggregate every minute
SELECT add_continuous_aggregate_policy('real_time_avg_temp',
    start_offset      => INTERVAL '1 hour',
    end_offset        => INTERVAL '1 minute',
    schedule_interval => INTERVAL '1 minute');

With real_time_avg_temp, the maintenance team has immediate access to the average temperature of every machinery piece, enabling swift responses to temperature anomalies and preventing potential breakdowns. 

For historical analysis, the team creates another continuous aggregate view, this time aggregating daily average temperatures:

CREATE MATERIALIZED VIEW daily_avg_temp
WITH (timescaledb.continuous) AS
SELECT device_id,
       time_bucket('1 day', event_time) AS one_day,
       AVG(temperature) AS avg_temp
FROM sensor_data
GROUP BY device_id, one_day;

Both views (real_time_avg_temp and daily_avg_temp) feed into a monitoring dashboard. The maintenance team would get alerted of potential issues as they arise. At the same time, the team can review historical temperature trends, conduct analyses to predict when machinery might need maintenance, and optimize operational protocols to mitigate excessive temperature fluctuations.
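Dashboards then query the continuous aggregates like ordinary views; for instance, to chart the last hour of per-minute averages for one machine (the device id is illustrative):

```sql
SELECT one_min, avg_temp
FROM real_time_avg_temp
WHERE device_id = 7
  AND one_min > NOW() - INTERVAL '1 hour'
ORDER BY one_min;
```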

Storing large volumes of weather data effectively 

An environmental research institute is collecting and analyzing many TBs of weather data to study climate change. The team already knows PostgreSQL, so they want to stick to it—but the storage cost is becoming a concern. 

The team decides to start using Timescale. After optimizing their database to reduce storage use and enabling compression, their storage costs become a fraction of what they were, with the data remaining fully accessible for analysis.

CREATE TABLE weather_data (
    sensor_id   INT NOT NULL,
    event_time  TIMESTAMPTZ NOT NULL,
    temperature DECIMAL NOT NULL,
    humidity    DECIMAL NOT NULL,
    pressure    DECIMAL NOT NULL,
    PRIMARY KEY (sensor_id, event_time)
);

-- Conversion to hypertable
SELECT create_hypertable('weather_data', by_range('event_time'));

-- Enabling compression
ALTER TABLE weather_data SET (
    timescaledb.compress,
    timescaledb.compress_segmentby = 'sensor_id'
);

-- Sample query
SELECT *
FROM weather_data
WHERE sensor_id = 1
  AND event_time > NOW() - INTERVAL '1 month'
ORDER BY event_time DESC
LIMIT 10;

Querying high volumes of crypto data in real-time 

A new crypto exchange is grappling with the challenge of providing real-time analytics to traders. As the data volume stored in the underlying PostgreSQL database increases, the engineering team struggles to keep the database fast enough. To them, it’s essential to deliver a better user experience than the competition, which has a more established but slower product. Keeping up the speed and responsiveness of their portal is paramount. 

The team knows that by partitioning their large pricing table, they’ll most likely improve query performance. Since they’re already swamped, instead of attempting to manage partitioning themselves, the engineers decide to implement Timescale. 

Once turned into a hypertable, their pricing table automatically partitions the data as it comes in. Real-time analytics, previously marred by delays, are now swift and accurate. P.S. Check out the story of how Messari improved its performance with hypertables.
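As a sketch of the kind of query that becomes fast, assume a hypothetical pricing table with symbol, price, and trade_time columns: per-minute OHLC candlesticks fall out of the first() and last() hyperfunctions.

```sql
SELECT time_bucket('1 minute', trade_time) AS bucket,
       symbol,
       first(price, trade_time) AS open,
       max(price)               AS high,
       min(price)               AS low,
       last(price, trade_time)  AS close
FROM pricing
WHERE trade_time > NOW() - INTERVAL '1 hour'
GROUP BY bucket, symbol
ORDER BY bucket;
```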

Get Started With Hypertables 

PostgreSQL partitioning is a powerful tool for managing large tables. On its own, Postgres partitioning can be complex to implement and maintain, but Timescale's hypertables make the whole process seamless and automatic. The best part is that by using hypertables, you’ll unlock a myriad of other awesome features (like columnar compression and automatic materialized views) that will make it even easier to scale your PostgreSQL database. 

If you're ready to explore Timescale's hypertables, start by signing up for Timescale, our fully managed cloud solution: PostgreSQL, but faster. It’s free for 30 days, and no credit card is required. If you’re self-hosting your own PostgreSQL instance, you can get access to hypertables by adding the TimescaleDB extension.
