High resident set size (RSS) #1220
I tried to log RSS usage during a run. Before MMTk initializes, the RSS is 14M. After MMTk initializes, … Though it is worth investigating how the RSS grows over the run, we should focus on the SFT and the VM map first: together they contribute 80% of the RSS usage.
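As an aside, here is a minimal sketch of how such RSS logging can be done on Linux, reading the resident-page count from /proc/self/statm; the `log_rss` helper is hypothetical and not part of MMTk:

```rust
use std::fs;

/// Read the process's resident set size in bytes from /proc/self/statm.
/// The second field of statm is the resident page count.
fn resident_set_bytes() -> std::io::Result<u64> {
    let statm = fs::read_to_string("/proc/self/statm")?;
    let resident_pages: u64 = statm
        .split_whitespace()
        .nth(1)
        .and_then(|s| s.parse().ok())
        .unwrap_or(0);
    // Assumes the common 4 KiB page size; query sysconf(_SC_PAGESIZE)
    // if the target may differ.
    Ok(resident_pages * 4096)
}

fn log_rss(tag: &str) {
    if let Ok(bytes) = resident_set_bytes() {
        eprintln!("[rss] {tag}: {} MiB", bytes / (1024 * 1024));
    }
}

fn main() {
    log_rss("before MMTk init");
    // ... initialize MMTk and run the workload here ...
    log_rss("after MMTk init");
}
```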
@qinsoon Can you also evaluate the effect of … and see whether they affect the RSS impact of …?
I was using a compressed pointer build, so it used … .
Yes. I ran tests a few months ago, and I mentioned back then that we had regressed because the restricted address space uses the sparse chunk map. That, the VMMap, and not returning pages back to the OS were the largest sources of RSS overhead, as far as I remember.
Right. Without compressed pointers (using …), … . On the contrary, with compressed pointers (using …), … .
I compared the mmap entries after running the Liquid benchmark on mmtk-ruby. I used the same binary, and used a command line argument to control whether to use MMTk or CRuby's default GC. When using MMTk, the plan is StickyImmix, and the heap size is set to 36M, that is, 1.5x the min heap size. I printed the mmap entries and calculated their pages in RAM using the methodology described here (a sketch of that measurement appears after this comparison). The data is collected at the time of … .

Note that the overhead of the SFTMap is trivial because the mmtk-ruby binding is currently using SFTSpaceMap on 64-bit systems, and the tables don't have many entries. The overhead of Map64 is also trivial because the length of its tables (descriptor map, base address and high watermark) is … .

The mmap entries specific to MMTk include: …
The mmap entries specific to CRuby's own GC include: …
The main mmap entry for malloc (the …) … .
In summary, MMTk has a larger RSS footprint in … .
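For reference, here is a sketch of the per-mapping measurement, assuming the standard Linux /proc/self/smaps layout in which each mapping header line is followed by attribute lines such as "Rss: N kB". This illustrates the methodology; it is not the exact script used above:

```rust
use std::fs;

fn main() -> std::io::Result<()> {
    let smaps = fs::read_to_string("/proc/self/smaps")?;
    let mut header = String::new();
    for line in smaps.lines() {
        if let Some(rest) = line.strip_prefix("Rss:") {
            // Attribute line, e.g. "Rss:                 132 kB"
            let kb: u64 = rest
                .trim()
                .trim_end_matches("kB")
                .trim()
                .parse()
                .unwrap_or(0);
            if kb > 0 {
                println!("{kb:>8} kB resident  {header}");
            }
        } else if line
            .split_whitespace()
            .next()
            .map_or(false, |t| t.contains('-'))
        {
            // Mapping header, e.g. "7f0b4c000000-7f0b4c021000 rw-p ... [heap]"
            header = line.to_string();
        }
    }
    Ok(())
}
```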
In this execution, the MMTk heap size was set to 1.5x the min heap size. If we divide the ImmixSpace RSS by 1.5, we get 16MB, which is still larger than that of CRuby's default GC, which is 7MB in size. One possible explanation is that the current implementation of MMTk uses the work packet system: all work packets are allocated in the malloc heap, and the … .
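A small illustration of that hypothesis, assuming glibc on Linux and the libc crate; the fixed-size boxed arrays below are a stand-in for short-lived work packets, not MMTk's actual types:

```rust
fn main() {
    // Allocate many small objects at once, the way a burst of work
    // packets would land in the malloc heap during a GC.
    let packets: Vec<Box<[u8; 128]>> =
        (0..1_000_000).map(|_| Box::new([0u8; 128])).collect();
    drop(packets); // ~128 MB goes back to malloc, not necessarily to the OS

    // RSS can stay elevated here: glibc may cache the freed memory.
    // On glibc, malloc_trim asks the allocator to return free memory:
    unsafe {
        libc::malloc_trim(0);
    }
}
```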
We observed on several VM bindings that the RSS memory consumption is higher when using MMTk compared to those VMs' default GCs. We need to inspect what that memory is used for. Possibilities include (but are not limited to):

- BlockQueue, which caches memory blocks instead of returning the memory to the OS by unmapping (see the sketch after this list).
- Rust allocations such as Vec, Box, etc.
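To make the first possibility concrete, here is a minimal sketch (assuming Linux and the libc crate) of the design choice a block cache faces: keep freed blocks resident for fast reuse, or use madvise(MADV_DONTNEED) so the pages stop counting toward RSS at the cost of refaulting them later. The type and constant are illustrative, not MMTk's actual BlockQueue:

```rust
use std::collections::VecDeque;

const BLOCK_BYTES: usize = 32 * 1024; // illustrative block size

struct BlockCache {
    free: VecDeque<*mut u8>,
    return_pages_to_os: bool,
}

impl BlockCache {
    /// Put a no-longer-needed block back into the cache.
    fn release(&mut self, block: *mut u8) {
        if self.return_pages_to_os {
            // Keep the mapping but let the kernel reclaim the backing
            // pages, so the block no longer contributes to RSS. The
            // next access faults in fresh zero pages (Linux-specific).
            unsafe {
                libc::madvise(block.cast(), BLOCK_BYTES, libc::MADV_DONTNEED);
            }
        }
        // Either way, the block stays in the cache for fast reuse.
        self.free.push_back(block);
    }

    fn acquire(&mut self) -> Option<*mut u8> {
        self.free.pop_front()
    }
}
```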