Is it safe to decommit memory from under an Arena?

I know the point of an arena is to free all the memory at once, but in some of my use cases memory usage spikes momentarily and afterwards most of the arena stays committed despite being unused. Does the memory being de-committed need to be the same location and size as was originally committed, or can it be any range? (This is assuming I update the Arena metadata manually when I do it.)

I want to give the user of the software the ability to free unused memory (maybe through a button or something), but I'm not sure it's safe on all platforms. I tried it on my machine and it works fine, but I don't know whether that behavior is universal.


If you mean this decommit, then the pointer should be page-aligned on all POSIX platforms (^virtual.Memory_Block is page-aligned), as it uses madvise internally (I don’t know about Windows). You can’t just pick a random offset in a memory block; it’s either all or nothing. Also note that decommit just marks the page as “can be freed” but won’t actually free it on most platforms unless there’s memory pressure. It can also lead to weird side effects:

  • if the page is touched before memory pressure results in a free, the mark is removed and the page won’t get freed
  • if the page is read after it got freed, you may read zeroes or the actual values you had there

Best is to release the memory instead (munmap internally), so your program crashes if you have a dangling pointer. Less confusing…
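To make the difference concrete, here is a minimal sketch using the low-level procs from core:mem/virtual (reserve, commit, decommit, release). The exact signatures and the hard-coded 4 KiB size are assumptions from memory, so check the package source before relying on them:

package main

import "core:mem/virtual"

main :: proc() {
	// Reserve and commit a small region directly through the low-level procs.
	data, rerr := virtual.reserve(4096)
	if rerr != nil do panic("reserve failed")
	if cerr := virtual.commit(rawptr(raw_data(data)), 4096); cerr != nil do panic("commit failed")

	data[0] = 42

	// decommit only *marks* the pages as reclaimable (madvise on POSIX).
	// Until the OS actually reclaims them, a read may still return 42;
	// afterwards it may return 0. That is exactly the weirdness listed above.
	virtual.decommit(rawptr(raw_data(data)), 4096)

	// release unmaps the region entirely (munmap on POSIX). Any access
	// through `data` after this is a hard fault, which makes a dangling
	// pointer obvious instead of silently reading stale or zeroed memory.
	virtual.release(rawptr(raw_data(data)), 4096)
}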

But at this point you could just use Arena_Temp and arena_temp_begin/end/ignore to roll back to the last “definitely used” checkpoint.

Although, to me it sounds like the wrong allocator for the job… If you want to release larger, random blobs of data, just use a general-purpose allocator. If you want the free_all behaviour as well, I’d probably just wrap the system allocator so that every allocation carries an intrusive list node as a header. Since I don’t know your application, it’s hard to suggest alternatives.
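In case it helps, here is a minimal sketch of that wrapping idea. The names (Tracked_List, tracked_alloc, and so on) are made up for illustration, it sits on top of core:mem rather than implementing the full mem.Allocator interface, and alignment of the user data is glossed over; the header is simply prepended to every allocation:

package main

import "core:mem"

Tracked_Header :: struct {
	prev, next: ^Tracked_Header,
}

Tracked_List :: struct {
	head:    ^Tracked_Header,
	backing: mem.Allocator,
}

tracked_alloc :: proc(list: ^Tracked_List, size: int) -> (rawptr, mem.Allocator_Error) {
	// Allocate the header and the user data in one go and link it in.
	raw, err := mem.alloc(size + size_of(Tracked_Header), allocator = list.backing)
	if err != nil do return nil, err
	header := cast(^Tracked_Header)raw
	header.prev = nil
	header.next = list.head
	if list.head != nil do list.head.prev = header
	list.head = header
	return rawptr(uintptr(raw) + size_of(Tracked_Header)), nil
}

tracked_free :: proc(list: ^Tracked_List, ptr: rawptr) {
	// Step back to the header, unlink it, and free header + data together.
	header := cast(^Tracked_Header)rawptr(uintptr(ptr) - size_of(Tracked_Header))
	if header.prev != nil do header.prev.next = header.next
	if header.next != nil do header.next.prev = header.prev
	if list.head == header do list.head = header.next
	mem.free(header, list.backing)
}

tracked_free_all :: proc(list: ^Tracked_List) {
	// free_all is just a walk over the intrusive list.
	for list.head != nil {
		next := list.head.next
		mem.free(list.head, list.backing)
		list.head = next
	}
}

main :: proc() {
	list := Tracked_List{backing = context.allocator}
	p, _ := tracked_alloc(&list, 64)
	_, _ = tracked_alloc(&list, 128)
	tracked_free(&list, p) // individual frees work like a normal allocator
	tracked_free_all(&list)
}

The header costs two pointers per allocation, but in exchange you get both individual frees and a free_all that just walks the list.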


Thanks for the detailed response! I’m curious how that Arena_Temp can be used to achieve this.

Assuming I use a growing arena with 128 MB blocks: if I call arena_temp_begin twice, once after using 128 MB and again after using another 128 MB, will calling arena_temp_end once free the last block, and calling it again free all the memory?

I should probably just write an allocator for this…

Not quite… Each Arena_Temp records which block was current in the arena when you called arena_temp_begin(); arena_temp_end will then free every block that was allocated after that one.

It’s easy to test this behaviour:

package main

import "core:fmt"
import "core:mem/virtual"

main :: proc() {
	MIN_BLOCK :: virtual.DEFAULT_ARENA_GROWING_MINIMUM_BLOCK_SIZE
	a: virtual.Arena
	if err := virtual.arena_init_growing(&a); err != nil do panic("EOM")
	defer virtual.arena_destroy(&a)

	_, _ = virtual.arena_alloc(&a, MIN_BLOCK, align_of(rawptr))
	checkpoint0 := virtual.arena_temp_begin(&a)
	fmt.printfln("1st block - reserved: %v, temp: %v", a.total_reserved, a.temp_count)

	_, _ = virtual.arena_alloc(&a, MIN_BLOCK, align_of(rawptr))
	checkpoint1 := virtual.arena_temp_begin(&a)
	fmt.printfln("2nd block - reserved: %v, temp: %v", a.total_reserved, a.temp_count)

	_, _ = virtual.arena_alloc(&a, MIN_BLOCK, align_of(rawptr))
	checkpoint2 := virtual.arena_temp_begin(&a)
	fmt.printfln("3rd block - reserved: %v, temp: %v", a.total_reserved, a.temp_count)

	virtual.arena_temp_ignore(checkpoint2)
	fmt.printfln("ignore cp2 - reserved: %v, temp: %v", a.total_reserved, a.temp_count)
	virtual.arena_temp_ignore(checkpoint1)
	fmt.printfln("ignore cp1 - reserved: %v, temp: %v", a.total_reserved, a.temp_count)

	virtual.arena_temp_end(checkpoint0)
	fmt.printfln("free until cp0 - reserved: %v, temp: %v", a.total_reserved, a.temp_count)
}

The above code will print:

1st block - reserved: 1048576, temp: 1
2nd block - reserved: 2097152, temp: 2
3rd block - reserved: 3145728, temp: 3
ignore cp2 - reserved: 3145728, temp: 2
ignore cp1 - reserved: 3145728, temp: 1
free until cp0 - reserved: 1048576, temp: 0

If you have persistent data in your arena, just do those allocations first, take a checkpoint, and reset to that checkpoint at times when you are sure nothing above that point is still in use.

This approach assumes allocations go down the call stack and that you can free at the top level. Basically, you can’t keep a block that sits in between checkpoints.
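As a rough illustration of that pattern, here is a minimal sketch; the sizes, the loop, and the variable names are placeholders rather than anything from your application:

package main

import "core:mem/virtual"

main :: proc() {
	arena: virtual.Arena
	if err := virtual.arena_init_growing(&arena); err != nil do panic("OOM")
	defer virtual.arena_destroy(&arena)
	allocator := virtual.arena_allocator(&arena)

	// 1. Allocate the data that must live for the whole program first.
	persistent := make([]byte, 4096, allocator)
	_ = persistent

	for _ in 0..<3 {
		// 2. Checkpoint before the scratch work of this iteration.
		checkpoint := virtual.arena_temp_begin(&arena)
		defer virtual.arena_temp_end(checkpoint)

		// 3. Everything allocated here lives above the checkpoint and gets
		//    rolled back at the end of the iteration; `persistent`, which
		//    sits below the checkpoint, stays valid the whole time.
		scratch := make([]byte, 1024*1024, allocator)
		_ = scratch
	}
}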
