From: Antonio Diaz Diaz
Subject: Re: [Lzip-bug] bug report: 'plzip -vf9' fails to compress files over a certain size
Date: Sat, 16 Dec 2017 20:58:42 +0100
User-agent: Mozilla/5.0 (X11; U; Linux i586; en-US; rv:1.9.1.19) Gecko/20110420 SeaMonkey/2.0.14
Erik Jan Tromp wrote:
Essentially, I *think* the patch originally worked by accident on my somewhat odd hardware/software combination. Only by having a larger (very time consuming) data set did I spot this.
I have been doing some testing with massif (the valgrind heap profiler), and I have obtained interesting results.
First, we need to take into account that physical RAM is a soft limit: if it is exceeded, swap is used. OTOH, the 3 GiB limit of address space per process on a 32-bit system is a hard limit: if it is exceeded, even if only by one byte, the process crashes badly. The crash may happen, for example, when allocating stack to call a function.
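A memory budget chosen with this hard limit in mind could look something like the following. This is just an illustrative sketch, not plzip's actual code; the 2 GiB cap is an arbitrary example.

#include <climits>

// Illustrative only: keep the memory budget well below the ~3 GiB of
// address space available to a process on a 32-bit system.
long long memory_budget()
  {
  if( sizeof( void * ) <= 4 )		// 32-bit address space
    return 2048LL << 20;		// example cap of 2 GiB
  return LLONG_MAX;			// no practical limit on a 64-bit system
  }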
Second, it seems that the amount of address space used by a process can be much larger than the amount of memory allocated from the heap. For example, the 'top' command reports 23 MiB of virtual memory and 5 MiB of resident memory for 'plzip -0 -n1' on my 32-bit system. On my 64-bit system it reports 161 MiB virtual for the same 5 MiB resident.
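The figures shown by 'top' can also be read directly from /proc/self/status. Here is a small sketch (not part of plzip) of how to print them:

#include <cstdio>
#include <cstring>

// Print the VmSize (virtual) and VmRSS (resident) lines of the calling
// process; these are the values 'top' reports as virtual and resident memory.
void show_memory_use()
  {
  FILE * f = std::fopen( "/proc/self/status", "r" );
  if( !f ) return;
  char line[256];
  while( std::fgets( line, sizeof line, f ) )
    if( std::strncmp( line, "VmSize:", 7 ) == 0 ||
        std::strncmp( line, "VmRSS:", 6 ) == 0 )
      std::fputs( line, stdout );
  std::fclose( f );
  }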
As the gap between virtual and resident memory use seems to decrease with an increasing number of threads, I have made some tweaks and have added 90 MiB per thread to the memory-limiting code. I am sending you the modified source in private.
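The idea is roughly the following (a sketch with made-up names, not the modified source itself): count about 90 MiB of extra address space per thread on top of the compression buffers, and reduce the number of worker threads until the total fits within the limit.

// Illustrative sketch: 'mem_per_thread' is the memory needed by one worker's
// buffers; 90 MiB of address space per thread is added as overhead.
int limit_workers( const long long mem_limit, const long long mem_per_thread,
                   int workers )
  {
  const long long overhead = 90LL << 20;	// 90 MiB per thread
  while( workers > 1 &&
         workers * ( mem_per_thread + overhead ) > mem_limit )
    --workers;
  return workers;
  }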
Best regards, Antonio.