[YSQL][PG15 Online Upgrade] REFRESH MATERIALIZED VIEW CONCURRENTLY table_1mv failing #26007
Closed
Labels
2025.1 Backport Required
2025.1.0_blocker
area/ysql
Yugabyte SQL (YSQL)
kind/bug
This issue is a bug
priority/medium
Medium priority issue
Comments
kai-franz added a commit that referenced this issue on May 30, 2025:
Summary:

# Background

Postgres stores everything in 8K pages. What happens when you have a large value (e.g. a long string) that is longer than 8K? Postgres uses TOAST (The Oversized Attribute Storage Technique) to deal with this. TOAST has two approaches for fitting large tuples into 8K pages:

1. Compression: Postgres always tries this first. It compresses the datums in the tuple one by one until either the target size has been reached or everything has been compressed and the tuple is still too big.
2. Out-of-line storage: for each table, Postgres also creates an auxiliary TOAST table for values that are still larger than 8160 bytes after compression. It splits the value into 2K chunks and stores them in the TOAST table; in the main table, it stores a pointer to the TOAST table entry.

# The state of toasting in YugabyteDB

For regular tables, we generally have all forms of toasting disabled. This is mainly because we replaced Postgres's page-based storage engine with our own RocksDB-based storage engine, which does not have the page size constraint (RocksDB also applies its own compression to SST files on disk, but it decompresses them as soon as it reads them into the in-memory block cache). We recently enabled TOAST compression for catalog tuples in the catcache. This affects only the catcache, not how tuples are stored in DocDB. TOAST tables remain disabled for all DocDB tables. See {D29916}.

# Temp tables

Temp tables in YB use Postgres's storage engine rather than DocDB. This means they have the 8K size restriction, and, because we don't support TOAST tables at all, large values cannot be inserted into temp tables. Attempts to insert such values result in an error such as the following:

```
row is too big: size 8560, maximum size 8160
```

# The fix

To fix this, we enable TOAST tables for temp relations. These TOAST tables are themselves also considered temporary and use PG storage. They have all the other properties of temp tables (dropped at the end of a session, etc.).

Jira: DB-15339

Test Plan:

```
./yb_build.sh release --cxx-test pg_temp_toast-test
```

The data generation in this test has been modified to force Postgres to use both compression and out-of-line storage for the rows we insert into PG.

Reviewers: fizaa
Reviewed By: fizaa
Subscribers: yql
Differential Revision: https://phorge.dev.yugabyte.com/D42530
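The two-step TOAST strategy described in the commit message (compress first, then fall back to out-of-line storage in 2K chunks) can be sketched as follows. This is a minimal illustrative model, not YugabyteDB or Postgres source code; names like `toast_value`, `detoast`, `INLINE_LIMIT`, and `CHUNK_SIZE` are hypothetical, and a real TOAST pointer also records a compression flag rather than probing at read time.

```python
# Illustrative sketch of TOAST's two strategies (hypothetical names, not
# real Postgres identifiers): try compression first, then split into 2K
# chunks stored in an auxiliary "toast table" keyed by (oid, chunk_seq).
import zlib

INLINE_LIMIT = 8160   # a tuple larger than this cannot fit in an 8K page
CHUNK_SIZE = 2000     # TOAST chunks are roughly 2K each

def toast_value(value: bytes, toast_table: dict, next_oid: list):
    """Return the (possibly compressed) inline value, or a pointer into
    `toast_table` when the value is still too big after compression."""
    # Step 1: compression -- always tried first; keep it only if it helps.
    compressed = zlib.compress(value)
    candidate = compressed if len(compressed) < len(value) else value
    if len(candidate) <= INLINE_LIMIT:
        return candidate  # fits inline after (optional) compression
    # Step 2: out-of-line storage -- split into CHUNK_SIZE chunks.
    oid = next_oid[0]
    next_oid[0] += 1
    for seq, off in enumerate(range(0, len(candidate), CHUNK_SIZE)):
        toast_table[(oid, seq)] = candidate[off:off + CHUNK_SIZE]
    # The main tuple stores only this small pointer.
    return ("toast_pointer", oid, len(candidate))

def detoast(pointer, toast_table: dict) -> bytes:
    """Reassemble an out-of-line value from its chunks and decompress it.
    (Real TOAST stores a compression flag; this sketch just probes.)"""
    _, oid, _size = pointer
    chunks, seq = [], 0
    while (oid, seq) in toast_table:
        chunks.append(toast_table[(oid, seq)])
        seq += 1
    data = b"".join(chunks)
    try:
        return zlib.decompress(data)
    except zlib.error:
        return data  # value was stored uncompressed
```

A highly compressible value (e.g. a long run of one character) stays inline after step 1, while incompressible data larger than the limit ends up chunked out-of-line; it is exactly this second path that temp tables lacked before the fix.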
kai-franz added a commit that referenced this issue on Jun 2, 2025:
Summary: Original commit: 823474f / D42530 (the commit message is otherwise identical to the May 30 commit above).

Jira: DB-15339

Differential Revision: https://phorge.dev.yugabyte.com/D44436
Jira Link: DB-15339
Description
After upgrading from 2024.2.2.0 to 2.25.1.0-b282 (finalize included), running a concurrent materialized view refresh fails with the following error:

```
row is too big: size 8560, maximum size 8160
```

(REFRESH MATERIALIZED VIEW CONCURRENTLY internally builds the diff in a temporary table, which is why it runs into the temp-table row-size limit.)

Query run:

```
REFRESH MATERIALIZED VIEW CONCURRENTLY table_1mv
```
Issue Type
kind/bug