path: root/src/backend/utils/mmgr
2023-04-14  Adjust Valgrind macro usage to protect chunk headers  (David Rowley)
Prior to this commit we only ever protected MemoryChunk's requested_size field with Valgrind NOACCESS. This means that if the hdrmask field is ever accessed accidentally then we're not going to get any warnings from Valgrind about it. Valgrind would have warned us about the problem fixed in 92957ed98 had we already been doing this.

Per suggestion from Tom Lane
Reviewed-by: Richard Guo
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/CAApHDvr=FZNGbj252Z6M9BSFKoq6BMxgkQ2yEAGUYoo7RquqZg@mail.gmail.com
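A minimal sketch of the idea, using Valgrind's client requests directly; the chunk layout below is illustrative, and PostgreSQL actually wraps these requests in src/include/utils/memdebug.h:

    #include <stddef.h>
    #include <stdint.h>
    #include <valgrind/memcheck.h>

    /* Illustrative layout; the real MemoryChunk lives in
     * src/include/utils/memutils_memorychunk.h. */
    typedef struct MemoryChunk
    {
        size_t      requested_size;   /* the only field protected before */
        uint64_t    hdrmask;          /* encoded context/offset bits */
    } MemoryChunk;

    static inline void
    protect_chunk_header(MemoryChunk *chunk)
    {
        /* Cover the whole header, not only requested_size, so any stray
         * access to hdrmask is reported too. */
        VALGRIND_MAKE_MEM_NOACCESS(chunk, sizeof(MemoryChunk));
    }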
2023-04-05  Fix minor signed/unsigned mixup  (Peter Eisentraut)
The chunk header is unsigned, and the output format takes unsigned, so casting it to signed in between is incorrect.
2023-02-21  Fix erroneous Valgrind markings in AllocSetRealloc.  (Tom Lane)
If asked to decrease the size of a large (>8K) palloc chunk, AllocSetRealloc could improperly change the Valgrind state of memory beyond the new end of the chunk: it would mark data UNDEFINED as far as the old end of the chunk after having done the realloc(3) call, thus tromping on the state of memory that no longer belongs to it. One would normally expect that memory to now be marked NOACCESS, so that this mislabeling might prevent detection of later errors. If realloc() had chosen to move the chunk someplace else (unlikely, but well within its rights) we could also mismark perfectly-valid DEFINED data as UNDEFINED, causing false-positive valgrind reports later. Also, any malloc bookkeeping data placed within this area might now be wrongly marked, causing additional problems.

Fix by replacing relevant uses of "oldsize" with "Min(size, oldsize)". It's sufficient to mark as far as "size" when that's smaller, because whatever remains in the new chunk size will be marked NOACCESS below, and we expect realloc() to have taken care of marking the memory beyond the new official end of the chunk.

While we're here, also rename the function's "oldsize" variable to "oldchksize" to more clearly explain what it actually holds, namely the distance to the end of the chunk (that is, requested size plus trailing padding). This is more consistent with the use of "size" and "chksize" to hold the new requested size and chunk size. Add a new variable "oldsize" in the one stanza where we're actually talking about the old requested size.

Oversight in commit c477f3e44. Back-patch to all supported branches, as that was, just in case anybody wants to do valgrind testing on back branches.

Karina Litskevich
Discussion: https://postgr.es/m/CACiT8iaAET-fmzjjZLjaJC4zwSJmrFyL7LAdHwaYyjjQOQ4hcg@mail.gmail.com
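Schematically, the rule the fix enforces looks like this; a hedged paraphrase of the commit message, not the literal aset.c code:

    #include <stddef.h>
    #include <valgrind/memcheck.h>

    #define Min(x, y) ((x) < (y) ? (x) : (y))

    /*
     * After realloc(), adjust Valgrind state no further than the smaller
     * of the new and old chunk sizes: bytes past "size" are re-marked
     * NOACCESS by the caller, and bytes past the new chunk end belong to
     * realloc()'s own bookkeeping.
     */
    static void
    remark_after_realloc(void *newchunk, size_t size, size_t oldchksize)
    {
        VALGRIND_MAKE_MEM_UNDEFINED(newchunk, Min(size, oldchksize));
    }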
2023-02-20  Add MSVC support for pg_leftmost_one_pos32() and friends  (John Naylor)
To allow testing for general support for fast bitscan intrinsics, add symbols HAVE_BITSCAN_REVERSE and HAVE_BITSCAN_FORWARD.

Also do related cleanup in AllocSetFreeIndex(): Previously, we tested for HAVE__BUILTIN_CLZ and copied the relevant internals of pg_leftmost_one_pos32(), with a special fallback that does less work than the general fallback for that function. Now that we have a more general test, we just call pg_leftmost_one_pos32() directly for platforms with intrinsic support. On gcc at least, there is no difference in the binary for non-assert builds.

Discussion: https://www.postgresql.org/message-id/CAFBsxsEPc%2BFnX_0vmmQ5DHv60sk4rL_RZJ%2BMD6ei%3D76L0kFMvA%40mail.gmail.com
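For context, pg_leftmost_one_pos32() reports the 0-based position of the most significant set bit. A sketch of how a freelist index can be derived from it, hand-written here with the GCC builtin; ALLOC_MINBITS = 3 (an 8-byte minimum chunk) is assumed from aset.c's conventions:

    #include <stddef.h>
    #include <stdint.h>

    #define ALLOC_MINBITS 3           /* smallest chunk is 8 bytes */

    /* 0-based position of the leftmost one bit; word must be nonzero. */
    static inline int
    leftmost_one_pos32(uint32_t word)
    {
        return 31 - __builtin_clz(word);
    }

    /* Map a request size to its power-of-2 freelist index. */
    static inline int
    freelist_index(size_t size)
    {
        if (size <= (1 << ALLOC_MINBITS))
            return 0;
        /* round up to the next power of 2 above ALLOC_MINBITS */
        return leftmost_one_pos32((uint32_t) (size - 1)) - ALLOC_MINBITS + 1;
    }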
2023-01-25  Avoid type cheats for invalid dsa_handles and dshash_table_handles.  (Tom Lane)
Invent separate macros for "invalid" values of these types, so that we needn't embed knowledge of their representations into calling code. These are all zeroes anyway ATM, so this is not fixing any live bug, but it makes the code cleaner and more future-proof.

I (tgl) also chose to move DSM_HANDLE_INVALID into dsm_impl.h, since it seems like it should live beside the typedef for dsm_handle.

Hou Zhijie, Nathan Bossart, Kyotaro Horiguchi, Tom Lane
Discussion: https://postgr.es/m/OS0PR01MB5716860B1454C34E5B179B6694C99@OS0PR01MB5716.jpnprd01.prod.outlook.com
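Given that the message says the invalid values are all zeroes at present, the definitions presumably look something like this (names and types sketched from DSM_HANDLE_INVALID's style, not copied from the commit):

    #include <stdint.h>

    typedef uint32_t dsm_handle;
    typedef dsm_handle dsa_handle;

    #define DSM_HANDLE_INVALID  ((dsm_handle) 0)
    #define DSA_HANDLE_INVALID  ((dsa_handle) 0)

    /* Callers compare against the macro rather than a bare zero: */
    static inline int
    dsa_handle_is_valid(dsa_handle h)
    {
        return h != DSA_HANDLE_INVALID;
    }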
2023-01-11  Fix typos in code and comments  (Michael Paquier)
Author: Justin Pryzby
Discussion: https://postgr.es/m/[email protected]
2023-01-04  Fix some compiler warnings in aset.c and generation.c  (David Rowley)
This fixes a couple of unused variable warnings that could be seen when compiling with MEMORY_CONTEXT_CHECKING but not USE_ASSERT_CHECKING. Defining MEMORY_CONTEXT_CHECKING without asserts is a little unusual, however, we shouldn't be producing any warnings from such a build.

Author: Richard Guo
Discussion: https://postgr.es/m/CAMbWs4_D-vgLEh7eO47p=73u1jWO78NWf6Qfv1FndY1kG-Q-jA@mail.gmail.com
2023-01-02  Update copyright for 2023  (Bruce Momjian)
Backpatch-through: 11
2022-12-22  Add palloc_aligned() to allow aligned memory allocations  (David Rowley)
This introduces palloc_aligned() and MemoryContextAllocAligned() which allow callers to obtain memory which is allocated to the given size and also aligned to the specified alignment boundary. The alignment boundaries may be any power-of-2 value. Currently, the alignment is capped at 2^26, however, we don't expect values anything like that large. The primary expected use case is to align allocations to perhaps CPU cache line size or to maybe I/O page size. Certain use cases can benefit from having aligned memory by either having better performance or more predictable performance.

The alignment is achieved by requesting 'alignto' additional bytes from the underlying allocator function and then aligning the address that is returned to the requested alignment. This obviously does waste some memory, so alignments should be kept as small as what is required. It's also important to note that these alignment bytes eat into the maximum allocation size. So something like:

    palloc_aligned(MaxAllocSize, 64, 0);

will not work, as we cannot request MaxAllocSize + 64 bytes.

Additionally, because we're just requesting the requested size plus the alignment requirements from the given MemoryContext, this does not work if that context is the Slab allocator: slab can only provide chunks of the size that was specified when the slab context was created, so it will generate an error to indicate that the requested size is not supported.

The alignment that is requested in palloc_aligned() is stored along with the allocated memory. This allows the alignment to remain intact through repalloc() calls.

Author: Andres Freund, David Rowley
Reviewed-by: Maxim Orlov, Andres Freund, John Naylor
Discussion: https://postgr.es/m/CAApHDvpxLPUMV1mhxs6g7GNwCP6Cs6hfnYQL5ffJQTuFAuxt8A%40mail.gmail.com
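For example, a caller wanting a buffer aligned to an assumed 4KB I/O page might write (illustrative sizes only):

    #include "postgres.h"

    /*
     * Illustrative use of palloc_aligned(): an 8KB buffer aligned to an
     * assumed 4KB I/O page.  The third argument takes MCXT_ALLOC_* flags;
     * 0 requests the default behavior.
     */
    static char *
    alloc_io_buffer(void)
    {
        return (char *) palloc_aligned(8192, 4096, 0);
    }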
2022-12-21  Fix newly introduced bug in slab.c  (David Rowley)
d21ded75f changed the way slab.c works but introduced a bug that meant we could end up with the slab's curBlocklistIndex pointing to the wrong list. The condition which was checking for this was failing to account for two things:

1. The curBlocklistIndex could be 0 as we've currently got no non-full blocks to put chunks on. In this case, the dlist_is_empty() check cannot be performed as there can be any number of completely full blocks at that index.

2. The curBlocklistIndex may be greater than the index we just moved the block onto. Since we need to ensure we fill up fuller blocks first, we must reset curBlocklistIndex when changing any blocklist element that's less than the curBlocklistIndex too.

Reported-by: Takamichi Osumi
Discussion: https://postgr.es/m/TYCPR01MB8373329C6329768D7E093D68EDEB9@TYCPR01MB8373.jpnprd01.prod.outlook.com
2022-12-20  Add copyright notices to meson files  (Andrew Dunstan)
Discussion: https://postgr.es/m/[email protected]
2022-12-20  Improve the performance of the slab memory allocator  (David Rowley)
Slab has traditionally been fairly slow when compared with the AllocSet or Generation memory allocators. Part of this slowness came from having to write out an entire block when we allocate a new block in order to populate the free list indexes within the block's memory. Additional slowness came from having to move a block onto another dlist each time we palloc or pfree a chunk from it. Here we optimize both of those cases and do a little bit extra to improve the performance of the slab allocator.

Instead of writing out the free list indexes when allocating a new block, we introduce the concept of "unused" chunks. When a block is first allocated, all chunks are unused. These chunks only make it onto the free list when they are pfree'd. When allocating new chunks on an existing block, we have the choice of consuming a chunk from the free list or an unused chunk. When both exist, we opt to use one from the free list, as these have been used already and their memory is more likely to be cached by the CPU.

We also reduce the number of block lists from one for every possible value of free chunks on a block to just a small fixed number of block lists. We keep the 0th block list for completely full blocks, and anything else stores blocks for some range of free chunks, with fuller blocks appearing on lower block list array elements. This reduces how often we must move a block to another list when we allocate or free chunks, but still allows us to prefer to put new chunks on fuller blocks and perhaps allow blocks with fewer chunks to be freed later once all their remaining chunks have been pfree'd.

Additionally, we now store a list of "emptyblocks", which are blocks that no longer contain any allocated chunks. We keep up to 10 of these around to avoid having to thrash malloc/free when allocation patterns continually cause blocks to become free of any allocated chunks only to allocate more chunks again. Only once we already have 10 of these do we free a newly emptied block. This does raise the high-water mark for the total memory that a slab context can consume. It does not seem entirely unreasonable that we might one day want to make this a property of SlabContext rather than a compile-time constant; let's wait and see if there is any evidence that this is required before doing it.

Author: Andres Freund, David Rowley
Tested-by: Tomas Vondra, John Naylor
Discussion: https://postgr.es/m/[email protected]
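A schematic of the chunk-source preference described above; structure and field names are assumptions for illustration, not slab.c itself:

    #include <stddef.h>

    typedef struct SlabBlockSketch
    {
        void       *freehead;       /* head of list of pfree'd chunks */
        int         nunused;        /* chunks never yet handed out */
        char       *unused_start;   /* first never-used chunk */
        size_t      chunk_size;
    } SlabBlockSketch;

    /* Prefer a previously pfree'd chunk (likely still CPU-cached) over a
     * never-used one; unused chunks need no up-front initialization. */
    static void *
    slab_take_chunk(SlabBlockSketch *block)
    {
        if (block->freehead != NULL)
        {
            void       *chunk = block->freehead;

            /* each free chunk stores the address of the next free chunk */
            block->freehead = *(void **) chunk;
            return chunk;
        }
        if (block->nunused > 0)
        {
            void       *chunk = block->unused_start;

            block->unused_start += block->chunk_size;
            block->nunused--;
            return chunk;
        }
        return NULL;                /* block is completely full */
    }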
2022-12-15  Static assertions cleanup  (Peter Eisentraut)
Because we added StaticAssertStmt() first before StaticAssertDecl(), some uses as well as the instructions in c.h are now a bit backwards from the "native" way static assertions are meant to be used in C. This updates the guidance and moves some static assertions to better places.

Specifically, since the addition of StaticAssertDecl(), we can put static assertions at the file level. This moves a number of static assertions out of function bodies, where they might have been stuck out of necessity, to perhaps better places at the file level or in header files.

Also, when the static assertion appears in a position where a declaration is allowed, then using StaticAssertDecl() is more native than StaticAssertStmt().

Reviewed-by: John Naylor <[email protected]>
Discussion: https://www.postgresql.org/message-id/flat/941a04e7-dd6f-c0e4-8cdf-a33b3338cbda%40enterprisedb.com
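The distinction in a nutshell; both macros are real c.h facilities, while the toy struct is just for illustration:

    #include "postgres.h"

    typedef struct Pair
    {
        int32       a;
        int32       b;
    } Pair;

    /* Declaration context: file scope, next to the definition it guards. */
    StaticAssertDecl(sizeof(Pair) == 8, "Pair must pack into 8 bytes");

    static void
    check_in_function(void)
    {
        /* Statement context: inside a function body. */
        StaticAssertStmt(sizeof(int32) == 4, "int32 must be 4 bytes");
    }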
2022-11-12  Add repalloc0 and repalloc0_array  (Peter Eisentraut)
These zero out the space added by repalloc. This is a common pattern that is quite hairy to code by hand.

Reviewed-by: Tom Lane <[email protected]>
Discussion: https://www.postgresql.org/message-id/[email protected]
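Conceptually the new function behaves like this sketch (not the committed mcxt.c code; the real version raises an error if asked to shrink):

    #include "postgres.h"

    /*
     * Grow an allocation, then zero the newly added tail.  The caller
     * must supply the old size, since chunk headers no longer reliably
     * record the requested size.
     */
    static void *
    repalloc0_sketch(void *pointer, Size oldsize, Size size)
    {
        void       *ret;

        Assert(oldsize <= size);    /* shrinking is not supported */

        ret = repalloc(pointer, size);
        memset((char *) ret + oldsize, 0, size - oldsize);
        return ret;
    }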
2022-10-28  Remove AssertArg and AssertState  (Peter Eisentraut)
These don't offer anything over plain Assert, and their usage had already been declared obsolescent.

Author: Nathan Bossart <[email protected]>
Reviewed-by: Michael Paquier <[email protected]>
Discussion: https://www.postgresql.org/message-id/20221009210148.GA900071@nathanxps13
2022-10-18  Rename SetSingleFuncCall() to InitMaterializedSRF()  (Michael Paquier)
Per discussion, the existing name of the routine used to initialize a SRF in materialize mode is unpopular, so rename it. The function's flags are renamed as well:

- SRF_SINGLE_USE_EXPECTED -> MAT_SRF_USE_EXPECTED_DESC
- SRF_SINGLE_BLESS -> MAT_SRF_BLESS

The previous function and flags introduced in 9e98583 are kept around for compatibility purposes, so that any extension code already compiled with v15 continues to work as-is. The declarations introduced here for compatibility will be removed from HEAD in a follow-up commit.

The new names have been suggested by Andres Freund and Melanie Plageman.

Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 15
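Typical use in a set-returning function looks roughly like this condensed sketch (error handling and realistic column values omitted):

    #include "postgres.h"
    #include "funcapi.h"
    #include "utils/builtins.h"
    #include "utils/tuplestore.h"

    PG_FUNCTION_INFO_V1(my_srf);

    /*
     * InitMaterializedSRF() checks ReturnSetInfo, builds the tuple
     * descriptor, and creates the tuplestore, leaving them in
     * rsinfo->setDesc / rsinfo->setResult.
     */
    Datum
    my_srf(PG_FUNCTION_ARGS)
    {
        ReturnSetInfo *rsinfo = (ReturnSetInfo *) fcinfo->resultinfo;
        Datum       values[1];
        bool        nulls[1] = {false};

        InitMaterializedSRF(fcinfo, 0);     /* 0 = no MAT_SRF_* flags */

        values[0] = CStringGetTextDatum("hello");
        tuplestore_putvalues(rsinfo->setResult, rsinfo->setDesc,
                             values, nulls);

        return (Datum) 0;
    }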
2022-10-14  Make some minor improvements in memory-context infrastructure.  (Tom Lane)
We lack a version of repalloc() that supports MCXT_ALLOC_NO_OOM semantics, so invent repalloc_extended() with the usual set of flags. repalloc_huge() becomes a legacy wrapper for that.

Also, fix dynahash.c so that it can support HASH_ENTER_NULL requests when using the default palloc-based allocator. The only reason it didn't do that already was the lack of the MCXT_ALLOC_NO_OOM option when that code was written, ages ago.

While here, simplify a few overcomplicated tests in mcxt.c.

Discussion: https://postgr.es/m/[email protected]
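With the no-OOM flag, a failed grow returns NULL instead of raising an error, so callers can fall back gracefully (illustrative sketch):

    #include "postgres.h"

    /* Try to grow a buffer without risking an out-of-memory error. */
    static char *
    try_grow(char *buf, Size newsize)
    {
        char       *newbuf = repalloc_extended(buf, newsize,
                                               MCXT_ALLOC_NO_OOM);

        if (newbuf == NULL)
            return buf;         /* keep the old buffer on failure */
        return newbuf;
    }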
2022-10-10  Harden memory context allocators against bogus chunk pointers.  (Tom Lane)
Before commit c6e0fe1f2, functions such as AllocSetFree could pretty safely presume that they were given a valid chunk pointer for their own type of context, because the indirect call through a memory context object and method struct would be very unlikely to work otherwise. But now, if pfree() is mistakenly invoked on a pointer to garbage, we have three chances in eight of ending up at one of these functions. That means we need to take extra measures to verify that we are looking at what we're supposed to be looking at, especially in debug builds.

Hence, add code to verify that the chunk's back-link to a block header leads to a memory context object that satisfies the right sort of IsA() check. This is still a bit weaker than what we did before, but for the moment assume that an IsA() check is sufficient.

As a compromise between speed and safety, implement these checks as Asserts when dealing with small chunks but plain test-and-elogs when dealing with large (external) chunks. The latter case should not be too performance-critical, but the former case probably is. In slab.c, all chunks are small; but nonetheless use a plain test in SlabRealloc, because that is certainly not performance-critical, indeed we should be suspicious that it's being called in error.

In aset.c, additionally add some assertions that the "value" field of the chunk header is within the small range allowed for freelist indexes. Without that, we might find ourselves trying to wipe most of memory when CLOBBER_FREED_MEMORY is enabled, or scribbling on a "freelist header" that's far away from the context object.

Eventually, field experience might show us that it's smarter for these tests to be active always, but for now we'll try to get away with just having them as assertions.

While at it, also be more uniform about asserting that context objects passed as parameters are of the type we expect. Some places missed that altogether, and slab.c was for no very good reason doing it differently from the other allocators.

Discussion: https://postgr.es/m/[email protected]
2022-10-07  Improve our ability to detect bogus pointers passed to pfree et al.  (Tom Lane)
Commit c6e0fe1f2 was a shade too trusting that any pointer passed to pfree, repalloc, etc will point at a valid chunk. Notably, passing a pointer that was actually obtained from malloc tended to result in obscure assertion failures, if not worse. (On FreeBSD I've seen such mistakes take down the entire cluster, seemingly as a result of clobbering shared memory.)

To improve matters, extend the mcxt_methods[] array so that it has entries for every possible MemoryContextMethodID bit-pattern, with the currently unassigned ID codes pointing to error-reporting functions. Then, fiddle with the ID assignments so that patterns likely to be associated with bad pointers aren't valid ID codes. In particular, we should avoid assigning bit patterns 000 (zeroed memory) and 111 (wipe_mem'd memory).

It turns out that on glibc (Linux), malloc uses chunk headers that have flag bits in the same place we keep MemoryContextMethodID, and that the bit patterns 000, 001, 010 are the only ones we'll see as long as the backend isn't threaded. So we can have very robust detection of pfree'ing a malloc-assigned block on that platform, at least so long as we can refrain from using up those ID codes. On other platforms, we don't have such a good guarantee, but keeping 000 reserved will be enough to catch many such cases.

While here, make GetMemoryChunkMethodID() local to mcxt.c, as there seems no need for it to be exposed even in memutils_internal.h.

Patch by me, with suggestions from Andres Freund and David Rowley.

Discussion: https://postgr.es/m/[email protected]
2022-10-06  Remove MemoryContextContains().  (Tom Lane)
MemoryContextContains is no longer reliable in the wake of c6e0fe1f2, because there's no longer very much redundancy in chunk headers. (It wasn't *completely* reliable even before that, as there was a chance of a false positive if you passed it something that didn't point to an mcxt chunk at all. But it was generally good enough.)

Hence, remove it. There is no remaining core code that requires it. Extensions that have been using it might be able to substitute a test like "GetMemoryChunkContext(ptr) == context", recognizing that this explicitly requires that the pointer point to some chunk.

Tom Lane and David Rowley
Discussion: https://postgr.es/m/[email protected]
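For extension authors, the suggested replacement test can be wrapped like this (a sketch; unlike the old function, the pointer must genuinely point to a palloc'd chunk):

    #include "postgres.h"
    #include "utils/memutils.h"

    /*
     * Replacement for the removed MemoryContextContains(): only valid
     * when "ptr" is known to point at the start of an allocated chunk.
     */
    static bool
    chunk_is_in_context(void *ptr, MemoryContext context)
    {
        return GetMemoryChunkContext(ptr) == context;
    }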
2022-09-22  meson: Add initial version of meson based build system  (Andres Freund)
Autoconf is showing its age, fewer and fewer contributors know how to wrangle it. Recursive make has a lot of hard to resolve dependency issues and slow incremental rebuilds. Our home-grown MSVC build system is hard to maintain for developers not using Windows and runs tests serially. While these and other issues could individually be addressed with incremental improvements, together they seem best addressed by moving to a more modern build system.

After evaluating different build system choices, we chose to use meson, to a good degree based on the adoption by other open source projects. We decided that it's more realistic to commit a relatively early version of the new build system and mature it in tree.

This commit adds an initial version of a meson based build system. It supports building postgres on at least AIX, FreeBSD, Linux, macOS, NetBSD, OpenBSD, Solaris and Windows (however only gcc is supported on aix, solaris). For Windows/MSVC postgres can now be built with ninja (faster, particularly for incremental builds) and msbuild (supporting the visual studio GUI, but building slower).

Several aspects (e.g. Windows rc file generation, PGXS compatibility, LLVM bitcode generation, documentation adjustments) are done in subsequent commits requiring further review. Other aspects (e.g. not installing test-only extensions) are not yet addressed.

When building on Windows with msbuild, builds are slower when using a visual studio version older than 2019, because those versions do not support MultiToolTask, required by meson for intra-target parallelism.

The plan is to remove the MSVC specific build system in src/tools/msvc soon after reaching feature parity. However, we're not planning to remove the autoconf/make build system in the near future. Likely we're going to keep at least the parts required for PGXS to keep working around until all supported versions build with meson.

Some initial help for postgres developers is at https://wiki.postgresql.org/wiki/Meson

With contributions from Thomas Munro, John Naylor, Stone Tickle and others.

Author: Andres Freund <[email protected]>
Author: Nazir Bilal Yavuz <[email protected]>
Author: Peter Eisentraut <[email protected]>
Reviewed-By: Peter Eisentraut <[email protected]>
Discussion: https://postgr.es/m/[email protected]
2022-09-08  Temporarily make MemoryContextContains return false  (David Rowley)
5265e91fd updated MemoryContextContains so that it works correctly with the new MemoryChunk code added in c6e0fe1f2. However, 5265e91fd was done with the assumption that MemoryContextContains would only ever be given pointers to memory that had been returned by one of our MemoryContext allocators. It seems that's not true, and many of our 32-bit buildfarm animals are clearly showing that.

There are some code paths that call MemoryContextContains with a pointer pointing part way into an allocated chunk. The example of this found by the 32-bit buildfarm animals is the int2int4_sum() function. This function returns transdata->sum, which is not a pointer to memory that was allocated directly. This return value is then subsequently passed to MemoryContextContains, which causes it to crash due to it thinking the memory directly prior to that pointer is a MemoryChunk. What's actually in that memory is the field in the struct that comes prior to the "sum" field. This problem didn't occur in the 64-bit world because BIGINT is a byval type, and the code which was calling MemoryContextContains with the bad pointer only does so with non-byval types.

Here, instead of reverting 5265e91fd and making MemoryContextContains completely broken again, let's just make it always return false for now. Effectively, prior to 5265e91fd it was doing that anyway; this at least makes that more explicit. The only repercussion of this with the current MemoryContextContains calls is that we perform a datumCopy() when we might not need to. This should make the 32-bit buildfarm animals happy again and give us more time to consider a long-term fix.

Discussion: https://postgr.es/m/20220907130552.sfjri7jublfxyyi4%40jrouhaud
2022-09-07  Make MemoryContextContains work correctly again  (David Rowley)
c6e0fe1f2 recently changed the way we store headers for allocated chunks of memory. Prior to that commit, we stored a pointer to the owning MemoryContext directly prior to the pointer to the allocated memory. That's no longer true and c6e0fe1f2 neglected to update MemoryContextContains() so that it correctly obtains the owning context with the new method.

A side effect of this change and c6e0fe1f2, in general, is that it's even less safe than it was previously to pass MemoryContextContains() an arbitrary pointer which was not allocated by one of our MemoryContexts. Previously some comments in MemoryContextContains() seemed to indicate that the worst that could happen by passing an arbitrary pointer would be a false positive return value. It seems to me that this was a rather wishful outlook as we subsequently proceeded to subtract sizeof(void *) from the given pointer and then dereferenced that memory. So it seems quite likely that we could have segfaulted instead of returning a false positive. However, it's not impossible that the memory sizeof(void *) bytes before the pointer could have been owned by the process, but it's far less likely to work now, as obtaining a pointer to the owning MemoryContext is less direct than before c6e0fe1f2 and will access memory that's possibly much further away to obtain the owning MemoryContext.

Because of this, I took the liberty of updating the comment to warn against any future usages of the function and checked the existing core usages to ensure that we only ever pass in a pointer to memory allocated by a MemoryContext. Extension authors updating their code for PG16 who are using MemoryContextContains should check to ensure that only NULL pointers and pointers to chunks allocated with a MemoryContext will ever be passed to MemoryContextContains.

Reported-by: Andres Freund
Discussion: https://postgr.es/m/[email protected]
2022-09-07  Make more effort to put a sentinel at the end of allocated memory  (David Rowley)
Traditionally, in MEMORY_CONTEXT_CHECKING builds, we only ever marked a sentinel byte just beyond the requested size if there happened to be enough space on the chunk to do so. For Slab and Generation context types, we only rounded the size of the chunk up to the next maxalign boundary, so it was often not that likely that those would ever have space for the sentinel given that the majority of allocation requests are going to be for sizes which are maxaligned. For AllocSet, it was a little different as smaller allocations are rounded up to the next power-of-2 value rather than the next maxalign boundary, so we're a bit more likely to have space for the sentinel byte, especially when we get away from tiny sized allocations such as 8 or 16 bytes.

Here we make more of an effort to allow space so that there is enough room for the sentinel byte in more cases. This makes it more likely that we'll detect when buggy code accidentally writes beyond the end of any of its memory allocations.

Each of the 3 MemoryContext types has been changed as follows:

- The Slab allocator will now always set a sentinel byte. Both the current usages of this MemoryContext type happen to use chunk sizes which were on the maxalign boundary, so these never used sentinel bytes previously.

- For the Generation allocator, we now always ensure there's enough space in the allocation for a sentinel byte.

- For AllocSet, this commit makes an adjustment for allocation sizes which are greater than allocChunkLimit. We now ensure there is always space for a sentinel byte. We don't alter the sentinel behavior for request sizes <= allocChunkLimit. Making way for the sentinel byte for power-of-2 request sizes would require doubling up to the next power of 2. Some analysis done on the request sizes made during installcheck shows that a fairly large portion of allocation requests are for power-of-2 sizes. The amount of additional memory for the sentinel there seems prohibitive, so we do nothing for those here.

Author: David Rowley
Discussion: https://postgr.es/m/[email protected]
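The sentinel mechanism itself is simple; a sketch of the idea (the real set_sentinel()/sentinel_ok() live in src/include/utils/memdebug.h, and the marker value below is an assumption):

    #include <stdbool.h>
    #include <stddef.h>

    #define SENTINEL_BYTE 0x7E      /* assumed marker value */

    /* Plant a marker just past the requested size of a chunk. */
    static inline void
    set_sentinel_sketch(void *base, size_t requested_size)
    {
        ((unsigned char *) base)[requested_size] = SENTINEL_BYTE;
    }

    /* A clobbered marker means somebody wrote past their allocation. */
    static inline bool
    sentinel_ok_sketch(const void *base, size_t requested_size)
    {
        return ((const unsigned char *) base)[requested_size] == SENTINEL_BYTE;
    }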
2022-08-30  Use MAXALIGN() in calculations using sizeof(SlabBlock)  (David Rowley)
c6e0fe1f2 added a new pointer field to SlabBlock, making it 4 bytes larger on 32-bit machines. Prior to that commit, the size of that struct was a multiple of 8, which meant that MAXALIGN(sizeof(SlabBlock)) was the same as sizeof(SlabBlock). After c6e0fe1f2, due to the addition of the new pointer field to store a pointer to the owning context, that was no longer true on builds with sizeof(void *) == 4.

This problem was highlighted by an Assert failure which was checking that the pointer given to pfree() was MAXALIGNED. Various 32-bit ARM buildfarm animals were failing. These have MAXIMUM_ALIGNOF of 8. The only 32-bit testing I'd managed to do on c6e0fe1f2 had been on x86, which has a MAXIMUM_ALIGNOF of 4 and therefore did not exhibit this issue.

Here we define Slab_BLOCKHDRSZ and copy what is being done in aset.c and generation.c for doing calculations based on the size of the context's block type. This means that SlabAlloc() will now always return a MAXALIGNed pointer.

This also fixes a sentinel_ok() check in SlabCheck() that was examining the wrong sentinel byte. This must have previously not caused any issues due to the fullChunkSize never being large enough to store the sentinel byte.

Diagnosed-by: Tomas Vondra, Tom Lane
Author: Tomas Vondra, David Rowley
Discussion: https://postgr.es/m/CAA4eK1%2B1JyW5TiL%3DyV-3Uq1CrfnTyn0Xrk5uArt31Z%3D8rgPhXQ%40mail.gmail.com
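The pattern being copied from aset.c and generation.c is just a MAXALIGN'd header-size constant used in all chunk-offset arithmetic; a self-contained sketch (MAXALIGN shown expanded, struct fields assumed):

    #include <stddef.h>
    #include <stdint.h>

    #define MAXIMUM_ALIGNOF 8
    #define MAXALIGN(LEN) \
        (((uintptr_t) (LEN) + (MAXIMUM_ALIGNOF - 1)) & \
         ~((uintptr_t) (MAXIMUM_ALIGNOF - 1)))

    typedef struct SlabBlockSketch
    {
        void       *slab;       /* owning context (the new pointer field) */
        int         nfree;      /* illustrative bookkeeping */
    } SlabBlockSketch;

    /* Chunks start at this offset, so they stay MAXALIGNed even when
     * sizeof(SlabBlock) itself is not a multiple of MAXIMUM_ALIGNOF. */
    #define Slab_BLOCKHDRSZ MAXALIGN(sizeof(SlabBlockSketch))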
2022-08-29  Improve performance of and reduce overheads of memory management  (David Rowley)
Whenever we palloc a chunk of memory, traditionally, we prefix the returned pointer with a pointer to the memory context to which the chunk belongs. This is required so that we're able to easily determine the owning context when performing operations such as pfree() and repalloc().

For the AllocSet context, prior to this commit we additionally prefixed the pointer to the owning context with the size of the chunk. This made the header 16 bytes in size. This 16-byte overhead was required for all AllocSet allocations regardless of the allocation size. For the generation context, the problem was worse; in addition to the pointer to the owning context and chunk size, we also stored a pointer to the owning block so that we could track the number of freed chunks on a block. The slab allocator had a 16-byte chunk header.

The changes being made here reduce the chunk header size down to just 8 bytes for all 3 of our memory context types. For small to medium sized allocations, this significantly increases the number of chunks that we can fit on a given block, which results in much more efficient use of memory.

Additionally, this commit completely changes the rule that pointers to palloc'd memory must be directly prefixed by a pointer to the owning memory context. Instead, we now insist that they're directly prefixed by an 8-byte value where the least significant 3 bits are set to a value to indicate which type of memory context the pointer belongs to. Using those 3 bits as an index (known as MemoryContextMethodID) into a new array which stores the methods for each memory context type, we're now able to pass the pointer given to functions such as pfree() and repalloc() to the function specific to that context implementation, allowing each to devise its own method of finding the memory context which owns the given allocated chunk of memory.

The reason we're able to reduce the chunk header down to just 8 bytes is the way we make use of the remaining 61 bits of the required 8-byte chunk header. Here we also implement a general-purpose MemoryChunk struct which makes use of those 61 remaining bits to allow the storage of a 30-bit value which the MemoryContext is free to use as it pleases, and also the number of bytes which must be subtracted from the chunk to get a reference to the block that the chunk is stored on (also 30 bits). The 1 additional remaining bit is to denote if the chunk is an "external" chunk or not. External here means that the chunk header does not store the 30-bit value or the block offset. The MemoryContext can use these external chunks at any time, but must use them if either of the two 30-bit fields is not large enough for the value(s) that need to be stored in them. When the chunk is marked as external, it is up to the MemoryContext to devise its own means to determine the block offset.

Using 3 bits for the MemoryContextMethodID does mean we're limiting ourselves to only having a maximum of 8 different memory context types. We could reduce the bit space for the 30-bit value a little to make way for more than 3 bits, but it seems like it might be better to do that only if we ever need more than 8 context types. This would only be a problem if some future memory context type which does not use MemoryChunk really couldn't give up any of the 61 remaining bits in the chunk header.

With this MemoryChunk, each of our 3 memory context types can quickly obtain a reference to the block any given chunk is located on. AllocSet is able to find the context to which the chunk is owned by first obtaining a reference to the block, by subtracting the block offset as stored in the 'hdrmask' field, and then referencing the block's 'aset' field. The Generation context uses the same method, but GenerationBlock did not have a field pointing back to the owning context, so one is added by this commit.

In aset.c and generation.c, all allocations larger than allocChunkLimit are stored on dedicated blocks. When there's just a single chunk on a block like this, it's easy to find the block from the chunk: we just subtract the size of the block header from the chunk pointer. The size of these chunks is also known, as we store the endptr on the block, so we can just subtract the pointer to the allocated memory from that. Because we can easily find the owning block and the size of the chunk for these dedicated blocks, we just always use external chunks for allocation sizes larger than allocChunkLimit. For generation.c, this sidesteps the problem of non-external MemoryChunks being unable to represent chunk sizes >= 1GB. This is less of a problem for aset.c, as we store the free list index in the MemoryChunk's spare 30-bit field (the value of which will never be close to using all 30 bits), and we can easily reverse-engineer the chunk size from this when needed. Storing this saves AllocSetFree() from having to make a call to AllocSetFreeIndex() to determine which free list to put the newly freed chunk on.

For the slab allocator, this commit adds a new restriction that slab chunks cannot be >= 1GB in size. If there happened to be any users of slab.c which used chunk sizes this large, they really should be using AllocSet instead.

Here we also add a restriction that normal non-dedicated blocks cannot be 1GB or larger. It's now not possible to pass a 'maxBlockSize' >= 1GB during the creation of an AllocSet or Generation context. Allocations can still be larger than 1GB; it's just that these will always be on dedicated blocks (which do not have the 1GB restriction).

Author: Andres Freund, David Rowley
Discussion: https://postgr.es/m/CAApHDvpjauCRXcgcaL6+e3eqecEHoeRm9D-kcbuvBitgPnW=vw@mail.gmail.com
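The bit budget the message describes packs into a single 64-bit word: 3 method-ID bits + 1 external bit + 30 + 30 = 64. A generic sketch of such an encoding (field positions and names are illustrative, not the real memutils_memorychunk.h):

    #include <stdint.h>

    #define METHODID_MASK    UINT64_C(0x7)          /* bits 0-2 */
    #define EXTERNAL_BIT     (UINT64_C(1) << 3)     /* bit 3 */
    #define VALUE_SHIFT      4                      /* bits 4-33 */
    #define BLOCKOFF_SHIFT   34                     /* bits 34-63 */
    #define THIRTY_BIT_MASK  ((UINT64_C(1) << 30) - 1)

    static inline uint64_t
    encode_chunk_header(unsigned methodid, uint64_t value,
                        uint64_t blockoffset)
    {
        return ((uint64_t) methodid & METHODID_MASK) |
            ((value & THIRTY_BIT_MASK) << VALUE_SHIFT) |
            ((blockoffset & THIRTY_BIT_MASK) << BLOCKOFF_SHIFT);
    }

    /* pfree()/repalloc() would first read the 3-bit method ID ... */
    static inline unsigned
    hdr_methodid(uint64_t hdrmask)
    {
        return (unsigned) (hdrmask & METHODID_MASK);
    }

    /* ... and a context's own code recovers the owning block. */
    static inline char *
    hdr_block(char *chunk, uint64_t hdrmask)
    {
        return chunk - ((hdrmask >> BLOCKOFF_SHIFT) & THIRTY_BIT_MASK);
    }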
2022-08-28  Add more detail why repalloc and pfree do not accept NULL pointers  (Peter Eisentraut)
Per discussion, we choose not to change this. This just gives a little bit more information.

Discussion: https://www.postgresql.org/message-id/flat/cf26e970-8e92-59f1-247a-aa265235075b%40enterprisedb.com
2022-07-21  Remove fls(), use pg_leftmost_one_pos32() instead.  (Thomas Munro)
Commit 4f658dc8 provided the traditional BSD fls() function in src/port/fls.c so it could be used in several places. Later we added a bunch of similar facilities in pg_bitutils.h, based on compiler builtins that map to hardware instructions. It's a bit confusing to have both 1-based and 0-based variants of this operation in use in different parts of the tree, and neither is blessed by a standard. Let's drop fls.c and the configure probe, and reuse the newer code.

Reviewed-by: David Rowley <[email protected]>
Reviewed-by: Tom Lane <[email protected]>
Discussion: https://postgr.es/m/CA%2BhUKG%2B7dSX1XF8yFGmYk-%3D48dbjH2kmzZj16XvhbrWP-9BzRg%40mail.gmail.com
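The two conventions differ only by an off-by-one; a sketch using the GCC-style builtin that pg_leftmost_one_pos32() wraps:

    #include <assert.h>
    #include <stdint.h>

    /* 0-based: position of the most significant set bit (pg_bitutils style). */
    static inline int
    leftmost_one_pos32(uint32_t word)
    {
        return 31 - __builtin_clz(word);
    }

    /* 1-based: traditional BSD fls() semantics, via the 0-based form. */
    static inline int
    fls_equivalent(uint32_t word)
    {
        return word == 0 ? 0 : leftmost_one_pos32(word) + 1;
    }

    int
    main(void)
    {
        assert(fls_equivalent(1) == 1);     /* bit 0 -> 1 */
        assert(fls_equivalent(0x80) == 8);  /* bit 7 -> 8 */
        assert(leftmost_one_pos32(0x80) == 7);
        return 0;
    }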
2022-07-16  Replace many MemSet calls with struct initialization  (Peter Eisentraut)
This replaces all MemSet() calls with struct initialization where that is easily and obviously possible. (For example, some cases have to worry about padding bits, so I left those.)

(The same could be done with appropriate memset() calls, but this patch is part of an effort to phase out MemSet(), so it doesn't touch memset() calls.)

Reviewed-by: Ranier Vilela <[email protected]>
Reviewed-by: Alvaro Herrera <[email protected]>
Discussion: https://www.postgresql.org/message-id/[email protected]
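The pattern, in before/after form (HASHCTL chosen as a typical zero-initialized parameter struct; illustrative only):

    #include "postgres.h"
    #include "utils/hsearch.h"

    static void
    example(void)
    {
        /* Before: explicit bulk zeroing. */
        HASHCTL     ctl_old;

        MemSet(&ctl_old, 0, sizeof(ctl_old));
        ctl_old.keysize = sizeof(int32);

        /* After: a designated initializer zeroes all unnamed members. */
        HASHCTL     ctl_new = {
            .keysize = sizeof(int32),
        };

        (void) ctl_old;
        (void) ctl_new;
    }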
2022-06-26  Fix relptr's encoding of the base address.  (Thomas Munro)
Previously, we encoded both NULL and the first byte at the base address as 0. That confusion led to the assertion in commit e07d4ddc, which failed when min_dynamic_shared_memory was used. Give them distinct encodings, by switching to 1-based offsets for non-NULL pointers. Also improve macro hygiene in passing (missing/misplaced parentheses), and remove open-coded access to the raw offset value from freepage.c/h.

Although e07d4ddc was back-patched to 10, the only code that actually makes use of relptr at the base address arrived in 84b1c63a, so no need to back-patch further than 14 for now.

Reported-by: Justin Pryzby <[email protected]>
Reviewed-by: Robert Haas <[email protected]>
Discussion: https://postgr.es/m/20220519193839.GT19626%40telsasoft.com
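The 1-based scheme in miniature (a hand-written encode/decode pair; the real relptr machinery in src/include/utils/relptr.h is macro-generated and type-safe):

    #include <stddef.h>
    #include <stdint.h>

    /*
     * Base-relative pointer: 0 means NULL, and a pointer at offset N from
     * the base encodes as N + 1, so the base address itself gets a
     * distinct, nonzero encoding.
     */
    static inline uint64_t
    relptr_encode(const char *base, const void *ptr)
    {
        if (ptr == NULL)
            return 0;
        return (uint64_t) ((const char *) ptr - base) + 1;
    }

    static inline void *
    relptr_decode(char *base, uint64_t relptr)
    {
        if (relptr == 0)
            return NULL;
        return base + (relptr - 1);
    }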
2022-04-13  Remove extraneous blank lines before block-closing braces  (Alvaro Herrera)
These are useless and distracting. We wouldn't have written the code with them to begin with, so there's no reason to keep them.

Author: Justin Pryzby <[email protected]>
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/attachment/133167/0016-Extraneous-blank-lines.patch
2022-04-11  Fix various typos and spelling mistakes in code comments  (David Rowley)
Author: Justin Pryzby
Discussion: https://postgr.es/m/[email protected]
2022-04-04  Improve the generation memory allocator  (David Rowley)
Here we make a series of improvements to the generation memory allocator, namely:

1. Allow generation contexts to have minimum, initial and maximum block sizes. The standard allocator allows this already, but when the generation context was added, it only allowed fixed-sized blocks. The problem with fixed-sized blocks is that it's difficult to choose how large to make the blocks: if the chosen size is too small then we'd end up with a large number of blocks and a large number of malloc calls; if the block size is made too large, then memory is wasted.

2. Add support for "keeper" blocks. This is a special block that is allocated along with the context itself but is never freed. Instead, when the last chunk in the keeper block is freed, we simply mark the block as empty to allow new allocations to make use of it.

3. Add a facility to "recycle" newly empty blocks instead of freeing them and having to later malloc an entire new block again. We do this by recording a single GenerationBlock which has become empty of any chunks. When we run out of space in the current block, we check to see if there is a "freeblock" and use that if it contains enough space for the allocation.

Author: David Rowley, Tomas Vondra
Reviewed-by: Andy Fan
Discussion: https://postgr.es/m/[email protected]
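After this change, creating a generation context presumably mirrors the AllocSet pattern; a sketch with the parameter list assumed from the description above:

    #include "postgres.h"
    #include "utils/memutils.h"

    /* Create a generation context with growing block sizes; the exact
     * signature is assumed to mirror AllocSetContextCreate. */
    static MemoryContext
    make_gen_context(MemoryContext parent)
    {
        return GenerationContextCreate(parent,
                                       "example generation context",
                                       ALLOCSET_DEFAULT_MINSIZE,
                                       ALLOCSET_DEFAULT_INITSIZE,
                                       ALLOCSET_DEFAULT_MAXSIZE);
    }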
2022-03-07  Create routine able to set single-call SRFs for Materialize mode  (Michael Paquier)
Set-returning functions that use the Materialize mode, creating a tuplestore to include all the tuples returned in a set rather than doing so in multiple calls, use roughly the same set of steps to prepare ReturnSetInfo for this job:

- Check if ReturnSetInfo supports returning a tuplestore and if the materialize mode is enabled.
- Create a tuplestore for all the tuples part of the returned set in the per-query memory context, stored in ReturnSetInfo->setResult.
- Build a tuple descriptor, mostly from get_call_result_type(), then stored in ReturnSetInfo->setDesc. Note that there are some cases where the SRF's tuple descriptor has to be the one specified by the function caller.

This refactoring is done such that there are (well, should be) no behavior changes in any of the in-core functions refactored, and the centralized function that checks and sets up the function's ReturnSetInfo can be controlled with a set of bits32 options. Two of them prove to be necessary now:

- SRF_SINGLE_USE_EXPECTED to use expectedDesc as tuple descriptor, as expected by the function's caller.
- SRF_SINGLE_BLESS to validate the tuple descriptor for the SRF.

The same initialization pattern is simplified in 28 places per my count as of src/backend/, shaving up to ~900 lines of code. These mostly come from the removal of the per-query initializations and the sanity checks now grouped in a single location. There are more locations that could be simplified in contrib/, that are left for a follow-up cleanup.

fcc2817, 07daca5 and d61a361 have prepared the areas of the code related to this change, to ease this refactoring.

Author: Melanie Plageman, Michael Paquier
Reviewed-by: Álvaro Herrera, Justin Pryzby
Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
2022-02-28  Fix SPI's handling of errors during transaction commit.  (Tom Lane)
SPI_commit previously left it up to the caller to recover from any error occurring during commit. Since that's complicated and requires use of low-level xact.c facilities, it's not too surprising that no caller got it right. Let's move the responsibility for cleanup into spi.c.

Doing that requires redefining SPI_commit as starting a new transaction, so that it becomes equivalent to SPI_commit_and_chain except that you get default transaction characteristics instead of preserving the prior transaction's characteristics. We can make this pretty transparent API-wise by redefining SPI_start_transaction() as a no-op. Callers that expect to do something in between might be surprised, but available evidence is that no callers do so.

Having made that API redefinition, we can fix this mess by having SPI_commit[_and_chain] trap errors and start a new, clean transaction before re-throwing the error. Likewise for SPI_rollback[_and_chain]. Some cleanup is also needed in AtEOXact_SPI, which was nowhere near smart enough to deal with SPI contexts nested inside a committing context.

While plperl and pltcl need no changes beyond removing their now-useless SPI_start_transaction() calls, plpython needs some more work because it hadn't gotten the memo about catching commit/rollback errors in the first place. Such an error resulted in longjmp'ing out of the Python interpreter, which leaks Python stack entries at present and is reported to crash Python 3.11 altogether. Add the missing logic to catch such errors and convert them into Python exceptions.

We are probably going to have to back-patch this once Python 3.11 ships, but it's a sufficiently basic change that I'm a bit nervous about doing so immediately. Let's let it bake awhile in HEAD first.

Peter Eisentraut and Tom Lane

Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/[email protected]
2022-02-24  Clean up and simplify code in a couple of set-returning functions  (Michael Paquier)
The following set-returning functions have their logic simplified, to be more consistent with other in-core areas:

- pg_prepared_statement()'s tuple descriptor is now created with get_call_result_type() instead of being created from scratch, saving from some duplication with pg_proc.dat.
- show_all_file_settings(), similarly, now uses get_call_result_type() to build its tuple descriptor instead of creating it from scratch.
- pg_options_to_table() made use of a static routine called only once. This commit removes this internal routine to make the function easier to follow.
- pg_config() was using a unique logic style, doing checks on the tuple descriptor passed down in expectedDesc, but it has no need to do so. This switches the function to use a tuplestore with a tuple descriptor retrieved from get_call_result_type(), instead.

This simplifies an upcoming patch aimed at refactoring the way tuplestores are created and checked in set-returning functions, this change making sense as its own independent cleanup by shaving some code.

Author: Melanie Plageman, Michael Paquier
Reviewed-by: Justin Pryzby
Discussion: https://postgr.es/m/CAAKRu_azyd1Z3W_r7Ou4sorTjRCs+PxeHw1CWJeXKofkE6TuZg@mail.gmail.com
2022-02-17  Remove all traces of tuplestore_donestoring() in the C code  (Michael Paquier)
This routine is a no-op since dd04e95 from 2003, with a macro kept around for compatibility purposes. This has led to the same code patterns being copy-pasted around for no effect, sometimes in confusing ways like in pg_logical_slot_get_changes_guts() from logical.c where the code was actually incorrect.

This issue has been discussed on two different threads recently, so rather than living with this legacy, remove any uses of this routine in the C code to simplify things. The compatibility macro is kept to avoid breaking any out-of-core modules that depend on it.

Reported-by: Tatsuhito Kasahara, Justin Pryzby
Author: Tatsuhito Kasahara
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/CAP0=ZVJeeYfAeRfmzqAF2Lumdiv4S4FewyBnZd4DPTrsSQKJKw@mail.gmail.com
2022-01-28  Prevent memory context logging from sending log message to connected client.  (Fujii Masao)
When pg_log_backend_memory_contexts() is executed, the target backend should use LOG_SERVER_ONLY to log its memory contexts, to prevent them from being sent to its connected client regardless of client_min_messages. But previously the backend unexpectedly used LOG to log the message "logging memory contexts of PID %d" and it could be sent to the client. This is a bug in memory context logging.

To fix the bug, this commit changes that message so that it's logged with LOG_SERVER_ONLY.

Back-patch to v14 where pg_log_backend_memory_contexts() was added.

Author: Fujii Masao
Reviewed-by: Bharath Rupireddy, Atsushi Torikoshi
Discussion: https://postgr.es/m/[email protected]
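The fix is essentially a one-token change at the ereport() call site, along these lines (paraphrased, not the literal diff):

    #include "postgres.h"

    /* LOG_SERVER_ONLY keeps the message out of the client connection no
     * matter what client_min_messages says. */
    static void
    log_memory_context_header(int pid)
    {
        ereport(LOG_SERVER_ONLY,
                (errmsg("logging memory contexts of PID %d", pid)));
    }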
2022-01-08  Update copyright for 2022  (Bruce Momjian)
Backpatch-through: 10
2021-12-22  Fix incorrect format placeholders  (Peter Eisentraut)
2021-10-01  Fix Portal snapshot tracking to handle subtransactions properly.  (Tom Lane)
Commit 84f5c2908 forgot to consider the possibility that EnsurePortalSnapshotExists could run inside a subtransaction with lifespan shorter than the Portal's. In that case, the new active snapshot would be popped at the end of the subtransaction, leaving a dangling pointer in the Portal, with mayhem ensuing.

To fix, make sure the ActiveSnapshot stack entry is marked with the same subtransaction nesting level as the associated Portal. It's certainly safe to do so since we won't be here at all unless the stack is empty; hence we can't create an out-of-order stack.

Let's also apply this logic in the case where PortalRunUtility sets portalSnapshot, just to be sure that path can't cause similar problems. It's slightly less clear that that path can't create an out-of-order stack, so add an assertion guarding it.

Report and patch by Bertrand Drouvot (with kibitzing by me). Back-patch to v11, like the previous commit.

Discussion: https://postgr.es/m/[email protected]
2021-05-21  Restore the portal-level snapshot after procedure COMMIT/ROLLBACK.  (Tom Lane)
COMMIT/ROLLBACK necessarily destroys all snapshots within the session. The original implementation of intra-procedure transactions just cavalierly did that, ignoring the fact that this left us executing in a rather different environment than normal. In particular, it turns out that handling of toasted datums depends rather critically on there being an outer ActiveSnapshot: otherwise, when SPI or the core executor pop whatever snapshot they used and return, it's unsafe to dereference any toasted datums that may appear in the query result. It's possible to demonstrate "no known snapshots" and "missing chunk number N for toast value" errors as a result of this oversight.

Historically this outer snapshot has been held by the Portal code, and that seems like a good plan to preserve. So add infrastructure to pquery.c to allow re-establishing the Portal-owned snapshot if it's not there anymore, and add enough bookkeeping support that we can tell whether it is or not.

We can't, however, just re-establish the Portal snapshot as part of COMMIT/ROLLBACK. As in normal transaction start, acquiring the first snapshot should wait until after SET and LOCK commands. Hence, teach spi.c about doing this at the right time. (Note that this patch doesn't fix the problem for any PLs that try to run intra-procedure transactions without using SPI to execute SQL commands.)

This makes SPI's no_snapshots parameter rather a misnomer, so in HEAD, rename that to allow_nonatomic.

replication/logical/worker.c also needs some fixes, because it wasn't careful to hold a snapshot open around AFTER trigger execution. That code doesn't use a Portal, which I suspect someday we're gonna have to fix. But for now, just rearrange the order of operations. This includes back-patching the recent addition of finish_estate() to centralize the cleanup logic there.

This also back-patches commit 2ecfeda3e into v13, to improve the test coverage for worker.c (it was that test that exposed that worker.c's snapshot management is wrong).

Per bug #15990 from Andreas Wicht. Back-patch to v11 where intra-procedure COMMIT was added.

Discussion: https://postgr.es/m/[email protected]
2021-04-06  Add function to log the memory contexts of specified backend process.  (Fujii Masao)
Commit 3e98c0bafb added the pg_backend_memory_contexts view to display the memory contexts of the backend process. However, its target process is limited to the backend that is accessing the view, so this is not so convenient when investigating the local memory bloat of other backend processes. To improve this situation, this commit adds the pg_log_backend_memory_contexts() function, which requests that the specified backend process log its memory contexts.

This information can also be collected by calling MemoryContextStats(TopMemoryContext) via a debugger. But this technique cannot be used in some environments because no debugger is available there. So, pg_log_backend_memory_contexts() allows us to see the memory contexts of a specified backend more easily.

Only superusers are allowed to request to log the memory contexts, because allowing any user to issue this request at an unbounded rate would cause lots of log messages, which could lead to denial of service.

On receipt of the request, at the next CHECK_FOR_INTERRUPTS(), the target backend logs its memory contexts at LOG_SERVER_ONLY level, so that these memory contexts will appear in the server log but not be sent to the client. It logs one message per memory context, because buffering all memory contexts into a StringInfo to log them as one message might require the buffer to be enlarged very much and lead to an OOM error, since there can be a large number of memory contexts in a backend.

When a backend process is consuming huge memory, logging all its memory contexts might overrun available disk space. To prevent this, this patch limits the number of child contexts to log per parent to 100. As with MemoryContextStats(), it supposes that practical cases where the log gets long will typically be huge numbers of siblings under the same parent context, while the additional debugging value from seeing details about individual siblings beyond 100 will not be large.

There was another proposed patch in the discussion to add a function returning the memory contexts of the specified backend as a result set, instead of logging them. However, that patch is not included in this commit because it had several issues to address.

Thanks to Tatsuhito Kasahara, Andres Freund, Tom Lane, Tomas Vondra, Michael Paquier, Kyotaro Horiguchi and Zhihong Yu for the discussion.

Bump catalog version.

Author: Atsushi Torikoshi
Reviewed-by: Kyotaro Horiguchi, Zhihong Yu, Fujii Masao
Discussion: https://postgr.es/m/[email protected]
2021-01-26  Improve performance of repeated CALLs within plpgsql procedures.  (Tom Lane)
This patch essentially is cleaning up technical debt left behind by the original implementation of plpgsql procedures, particularly commit d92bc83c4. That patch (or more precisely, follow-on patches fixing its worst bugs) forced us to re-plan CALL and DO statements each time through, if we're in a non-atomic context. That wasn't for any fundamental reason, but just because use of a saved plan requires having a ResourceOwner to hold a reference count for the plan, and we had no suitable resowner at hand, nor would the available APIs support using one if we did. While it's not that expensive to create a "plan" for CALL/DO, the cycles do add up in repeated executions.

This patch therefore makes the following API changes:

* GetCachedPlan/ReleaseCachedPlan are modified to let the caller specify which resowner to use to pin the plan, rather than forcing use of CurrentResourceOwner.

* spi.c gains a "SPI_execute_plan_extended" entry point that lets callers say which resowner to use to pin the plan. This borrows the idea of an options struct from the recently added SPI_prepare_extended, hopefully allowing future options to be added without more API breaks. This supersedes SPI_execute_plan_with_paramlist (which I've marked deprecated) as well as SPI_execute_plan_with_receiver (which is new in v14, so I just took it out altogether).

* I also took the opportunity to remove the crude hack of letting plpgsql reach into SPI private data structures to mark SPI plans as "no_snapshot". It's better to treat that as an option of SPI_prepare_extended.

Now, when running a non-atomic procedure or DO block that contains any CALL or DO commands, plpgsql creates a ResourceOwner that will be used to pin the plans of the CALL/DO commands. (In an atomic context, we just use CurrentResourceOwner, as before.) Having done this, we can just save CALL/DO plans normally, whether or not they are used across transaction boundaries. This seems to be good for something like 2X speedup of a CALL of a trivial procedure with a few simple argument expressions. By restricting the creation of an extra ResourceOwner like this, there's essentially zero penalty in cases that can't benefit.

Pavel Stehule, with some further hacking by me

Discussion: https://postgr.es/m/CAFj8pRCLPdDAETvR7Po7gC5y_ibkn_-bOzbeJb39WHms01194Q@mail.gmail.com
2021-01-02  Update copyright for 2021  (Bruce Momjian)
Backpatch-through: 9.5
2020-12-15  Improve hash_create()'s API for some added robustness.  (Tom Lane)
Invent a new flag bit HASH_STRINGS to specify C-string hashing, which was formerly the default; and add assertions insisting that exactly one of the bits HASH_STRINGS, HASH_BLOBS, and HASH_FUNCTION be set. This is in hopes of preventing recurrences of the type of oversight fixed in commit a1b8aa1e4 (i.e., mistakenly omitting HASH_BLOBS).

Also, when HASH_STRINGS is specified, insist that the keysize be more than 8 bytes. This is a heuristic, but it should catch accidental use of HASH_STRINGS for integer or pointer keys. (Nearly all existing use-cases set the keysize to NAMEDATALEN or more, so there's little reason to think this restriction should be problematic.)

Tweak hash_create() to insist that the HASH_ELEM flag be set, and remove the defaults it had for keysize and entrysize. Since those defaults were undocumented and basically useless, no callers omitted HASH_ELEM anyway.

Also, remove memset's zeroing the HASHCTL parameter struct from those callers that had one. This has never been really necessary, and while it wasn't a bad coding convention it was confusing that some callers did it and some did not. We might as well save a few cycles by standardizing on "not".

Also improve the documentation for hash_create().

In passing, improve reinit.c's usage of a hash table by storing the key as a binary Oid rather than a string; and, since that's a temporary hash table, allocate it in CurrentMemoryContext for neatness.

Discussion: https://postgr.es/m/[email protected]
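A typical post-change caller, with the now-mandatory HASH_ELEM and exactly one key-hashing flag (the entry type is illustrative):

    #include "postgres.h"
    #include "utils/hsearch.h"

    typedef struct OidEntry
    {
        Oid         key;        /* hash key; must be first */
        int         counter;
    } OidEntry;

    static HTAB *
    make_oid_table(void)
    {
        HASHCTL     ctl;

        ctl.keysize = sizeof(Oid);
        ctl.entrysize = sizeof(OidEntry);

        /* HASH_ELEM is now required; HASH_BLOBS selects binary key
         * hashing (HASH_STRINGS would select C-string hashing). */
        return hash_create("oid table", 128, &ctl,
                           HASH_ELEM | HASH_BLOBS);
    }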
2020-09-08  Use <unnamed> for name of unnamed portal's memory context  (Peter Eisentraut)
Otherwise just printing an empty string makes the memory context debug output slightly confusing.

Discussion: https://www.postgresql.org/message-id/flat/ccb353ef-89ff-09b3-8046-1d2514624b9c%402ndquadrant.com
2020-09-04  remove redundant initializations  (Bruce Momjian)
Reported-by: Ranier Vilela
Discussion: https://postgr.es/m/CAEudQAo1+AcGppxDSg8k+zF4+Kv+eJyqzEDdbpDg58-=MQcerQ@mail.gmail.com
Author: Ranier Vilela
Backpatch-through: master
2020-08-26  Move code for pg_backend_memory_contexts from mmgr/mcxt.c to adt/mcxtfuncs.c.  (Fujii Masao)
Previously the code for pg_backend_memory_contexts was in src/backend/utils/mmgr/mcxt.c. This commit moves it to src/backend/utils/adt/mcxtfuncs.c so that mcxt.c basically includes only the low-level interface for memory contexts.

Author: Atsushi Torikoshi
Reviewed-by: Michael Paquier, Fujii Masao
Discussion: https://postgr.es/m/[email protected]
2020-08-19  Add pg_backend_memory_contexts system view.  (Fujii Masao)
This view displays the usages of all the memory contexts of the server process attached to the current session. This information is useful to investigate the cause of backend-local memory bloat.

This information can also be collected by calling MemoryContextStats(TopMemoryContext) via a debugger. But this technique cannot be used in some environments because no debugger is available there, and it outputs lots of text messages which are not easy to analyze. So, the pg_backend_memory_contexts view allows us to access backend-local memory context information more easily.

Bump catalog version.

Author: Atsushi Torikoshi, Fujii Masao
Reviewed-by: Tatsuhito Kasahara, Andres Freund, Daniel Gustafsson, Robert Haas, Michael Paquier
Discussion: https://postgr.es/m/[email protected]