Fix sscanf limits in pg_dump
author     Daniel Gustafsson <[email protected]>    Tue, 19 Oct 2021 10:59:50 +0000 (12:59 +0200)
committer  Daniel Gustafsson <[email protected]>    Tue, 19 Oct 2021 10:59:50 +0000 (12:59 +0200)
Make sure that the string parsing in sscanf() is limited by the
size of the destination buffer.

The buffer is bounded by MAXPGPATH, and since scanf-style field
widths must be spelled as literals in the format string (scanf()
has no counterpart to printf()'s "*" for passing the width as an
argument), the limit is inserted via preprocessor stringification
with CppAsString2(). The buffer is also grown by one byte to
account for the terminating NUL, which the field width does not
count. There is no risk of overflow here, since the buffer being
scanned is smaller than the destination buffer.
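
For illustration, a minimal standalone sketch of the same
technique follows. The STRINGIFY macros are hypothetical
stand-ins for PostgreSQL's CppAsString()/CppAsString2() from
c.h, and the sample input line is made up:

    #include <stdio.h>

    #define MAXPGPATH 1024      /* same value as pg_config_manual.h */

    /*
     * Two-level expansion: the outer macro expands MAXPGPATH to 1024
     * before the inner macro stringifies it, yielding "1024" rather
     * than "MAXPGPATH".
     */
    #define STRINGIFY_(x) #x
    #define STRINGIFY(x) STRINGIFY_(x)

    int
    main(void)
    {
        char        line[MAXPGPATH] = "16384 blob_16384.dat";
        char        fname[MAXPGPATH + 1];   /* +1 for the NUL */
        unsigned int oid;

        /*
         * String literal concatenation yields "%u %1024s": sscanf()
         * stores at most 1024 characters plus a terminating NUL,
         * hence the MAXPGPATH + 1 buffer.
         */
        if (sscanf(line, "%u %" STRINGIFY(MAXPGPATH) "s", &oid, fname) == 2)
            printf("oid = %u, fname = %s\n", oid, fname);

        return 0;
    }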

Backpatch all the way down to 9.6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/B14D3D7B-F98C-4E20-9459-C122C67647FB@yesql.se
Backpatch-through: 9.6

src/bin/pg_dump/pg_backup_directory.c

index acf7a485e9221c9b6eb97f2c28f9605144a5434c..27615bfa9b6f91aa33c90a6a40c7c1ba47099bab 100644 (file)
@@ -458,11 +458,11 @@ _LoadBlobs(ArchiveHandle *AH)
    /* Read the blobs TOC file line-by-line, and process each blob */
    while ((cfgets(ctx->blobsTocFH, line, MAXPGPATH)) != NULL)
    {
-       char        fname[MAXPGPATH];
+       char        fname[MAXPGPATH + 1];
        char        path[MAXPGPATH];
 
        /* Can't overflow because line and fname are the same length. */
-       if (sscanf(line, "%u %s\n", &oid, fname) != 2)
+       if (sscanf(line, "%u %" CppAsString2(MAXPGPATH) "s\n", &oid, fname) != 2)
            exit_horribly(modulename, "invalid line in large object TOC file \"%s\": \"%s\"\n",
                          fname, line);
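
For reference, with MAXPGPATH defined as 1024 (its value in
src/include/pg_config_manual.h), the new format string expands to
"%u %1024s\n": sscanf() stores at most 1024 characters plus the
terminating NUL into fname, which is exactly what the enlarged
MAXPGPATH + 1 buffer accommodates.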