Brennan Vincent
2018-06-02 22:16:10 UTC
The attached program `eatmem.c` is a simple program that wastes N GiB of memory as quickly as possible.
When I run something like `eatmem 32` (on a system with less than 32GB of RAM), about half the time everything works fine: the system quickly runs out of RAM and swap, the kernel kills `eatmem`, and everything recovers. However, the other half of the time, the system becomes completely unusable: my ssh session is killed, important processes like `init` and `getty` are killed, and it's impossible to even log into the system (the local terminal is unresponsive, and I can't ssh in because sshd is killed immediately whenever it tries to run). The only way to recover is by rebooting.
Is this expected behavior?
My system details are as follows:
FreeBSD 12-CURRENT x86_64 guest on VMware Fusion.
ram: 8 GB
swap: 1 GB
Host: MacBook Pro running macOS.