Description
Hi,
I'm working on a strange bug where jemalloc throws a std::bad_alloc, although there is plenty of physical memory and virtual address space available, for a 'simple' newImpl allocation (a bunch of 64-bit integers allocated through a vector resize and/or reserve).
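For context, the allocation pattern is roughly the following. This is a minimal sketch, not the real workload; the sizes are assumptions chosen only for illustration:

```cpp
// Minimal sketch of the failing allocation pattern (sizes are illustrative):
// a plain vector of 64-bit integers grown via reserve()/resize(), which goes
// through operator new (jemalloc's newImpl when jemalloc provides the C++
// operators).
#include <cstdint>
#include <iostream>
#include <new>
#include <vector>

int main() {
    try {
        std::vector<std::uint64_t> v;
        v.reserve(1u << 20);  // ~8 MiB request; assumed size, adjust to taste
        v.resize(1u << 20);
    } catch (const std::bad_alloc&) {
        std::cerr << "bad_alloc despite free physical memory\n";
        return 1;
    }
    return 0;
}
```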
This happens when vm.overcommit_memory=2, even with a high vm.overcommit_ratio. With vm.overcommit_memory=1, the bad_alloc is gone and the Committed_AS value is much lower.
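Since under vm.overcommit_memory=2 the kernel accounts mappings against CommitLimit, I compare CommitLimit and Committed_AS from /proc/meminfo in both modes. A small reader like this sketch works (this helper is mine for observation, not part of the failing program):

```cpp
// Dump the commit accounting lines from /proc/meminfo, to compare the two
// overcommit modes while the failing process runs.
#include <fstream>
#include <iostream>
#include <string>

int main() {
    std::ifstream meminfo("/proc/meminfo");
    for (std::string line; std::getline(meminfo, line); ) {
        // Keep only the lines relevant to strict overcommit accounting.
        if (line.rfind("CommitLimit", 0) == 0 ||
            line.rfind("Committed_AS", 0) == 0) {
            std::cout << line << '\n';
        }
    }
    return 0;
}
```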
Some work was done for handling overcommit, but are there corner cases that remain unidentified? What would be a good approach to better understand this issue?
This behavior was seen with jemalloc 4.3.1, 4.5, 5.0.1, and 5.1.0 on different Linux servers (all 64-bit, kernels 2.6, 3.10, and 4.4).
Thanks,
Eloi