Sardonic

Why can malloc allocate more memory than mmap?

I'm experimenting to see how much virtual memory I can allocate on 64-bit Linux, currently running Ubuntu via repl.it. I find this limit empirically with some simple code that repeatedly calls realloc() or mmap(), and I also query the OS for the maximum address space with getrlimit() and RLIMIT_AS.

Here is the output:

Soft mem limit: 17592186044415 MB
Hard mem limit: 17592186044415 MB
----------------- Using mmap() -----------------
Trying 32768 MB.. Success.
Trying 65536 MB.. Failed.
----------------- Using realloc() -----------------
Trying 32768 MB.. Success.
Trying 65536 MB.. Success.
Trying 131072 MB.. Failed.

This surprised me for a few reasons, which should perhaps each be their own SO question:

  1. The title question: Why can realloc() allocate 64GB while mmap() fails after 32GB? Perhaps I'm misusing mmap() somehow?
  2. Why can't realloc() or mmap() come anywhere close to the memory limit? In a 64-bit process, I would expect hundreds of terabytes of virtual address space to be available.
  3. When removing PROT_WRITE and using only PROT_READ or PROT_NONE, mmap() manages to allocate up to 67108864 MB, which is around 64 terabytes (!). How does PROT_WRITE cause that allocation to fail? What use would this have (if any) with anonymous mappings?

Here is the code in full, in case that offers any insight:

#include <errno.h>
#include <iostream>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/resource.h>

const size_t KB = 1024;
const size_t MB = 1024 * 1024;
const size_t GB = 1024 * 1024 * 1024;

void usingMmap(size_t size);
void usingRealloc(size_t size);

int main() {
  rlimit limit;
  getrlimit(RLIMIT_AS, &limit);
  std::cout << "Soft mem limit: " << limit.rlim_cur / MB << " MB\n";
  std::cout << "Hard mem limit: " << limit.rlim_max / MB << " MB\n";
  std::cout << "----------------- Using mmap() -----------------\n";
  usingMmap(32 * GB);
  std::cout << "----------------- Using realloc() -----------------\n";
  usingRealloc(32 * GB);
  return 0;
}

void usingRealloc(size_t size) {
  void *p = NULL;
  while (true) {
    std::cout << "Trying " << size / MB << " MB.. ";
    p = realloc(p, size);
    if (p == NULL)
      break;
    std::cout << "Success.\n";
    size *= 2;
  }
  std::cout << "Failed.\n";
  if (errno != ENOMEM)
    perror("realloc");
}

void usingMmap(size_t size) {
  void *p = NULL;
  while (true) {
    std::cout << "Trying " << size / MB << " MB.. ";
    p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_PRIVATE | MAP_ANONYMOUS,
             -1, 0);
    if (p == MAP_FAILED)
      break;
    std::cout << "Success.\n";
    if (munmap(p, size) == -1) {
      perror("munmap");
      exit(-1);
    }
    size *= 2;
  }
  std::cout << "Failed.\n";
  if (errno != ENOMEM)
    perror("mmap");
}

Note that changing the starting size up or down, or calling only one of mmap() or realloc(), did not change this behavior. In fact, changing the starting value to 64 GB causes both mmap() and realloc() to fail. I'm starting to think it has more to do with how large an allocation each call can handle at once than with how much virtual address space a process is allowed to use.

(I know there's a lot of error-reporting code that distracts from the main point; I've kept it to show that there are no unexpected errors coming from mmap(), munmap(), or realloc().)

Tags: c++, linux, malloc, mmap
