On Sat, Oct 23, 1999 at 12:07:21AM +1000, info@talent.com.au wrote:
> On a Mac, I can switch what Apple call "Virtual Memory" (& what every
> one else calls swap disks) off altogether & the Mac goes a lot
> quicker. Yes, it runs out of memory, but you allocate memory
> manually so if you're running servers you don't have a problem!
First word of reassurance: the Mac (and Windoze) implementations of
virtual memory via swap file are many times worse than the UNIX
implementations, for no excusable reason.
<Tirade>The various ways to implement virtual memory via swap files
had been worked out exhaustively by the mid-'70s at latest, and
certainly in the '80s when those OSes were written, there was no excuse
for any OS implementing it poorly!</Tirade>
In any UNIX variant I know of, there is zero penalty associated with
just having swap. The only performance hit is when a process actually
gets moved out of RAM to the swap area. This will happen to your
servers only when and if some other process needs to take RAM in use by
your server software, e.g. if you run some memory-intensive command
from the shell on the same machine.
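To see whether that hit is actually happening, here is a hedged Linux-specific sketch (the /proc interface is an assumption on my part; the mail above is about BSD tools). The kernel's pswpin/pswpout counters record pages actually swapped in and out since boot; if both stay flat, having swap configured is costing you nothing:

```shell
# Hedged sketch, Linux only: read the cumulative swap-in/swap-out page
# counters from /proc/vmstat. Flat counters mean swap is configured
# but unused, i.e. zero penalty, as described above.
pswpin=$(awk '/^pswpin/ {print $2}' /proc/vmstat)
pswpout=$(awk '/^pswpout/ {print $2}' /proc/vmstat)
echo "pages swapped in: ${pswpin:-0}, out: ${pswpout:-0}"
```

Run it twice a few minutes apart; it's the growth between samples, not the absolute numbers, that indicates active paging.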
> I know you can use the command "swapoff -a" with Linux, but I have a vague
> memory of there being a problem with that.
There would definitely be a problem if some process suddenly becomes
a memory hog; it would probably mean that you can't start other
processes due to lack of memory, possibly including the root shell or
the "kill" command you'd need to shut down the problem child.  As
noted, this would also mean that inactive but still-resident processes
will always take up RAM instead of being shuffled off to disk. I don't
know what Linux-specific problems there might be.
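If you do want to try it, a hedged sanity check before running "swapoff -a" (a Linux sketch of my own, not from this thread; it assumes a kernel new enough to report MemAvailable in /proc/meminfo) is to confirm that everything currently sitting in swap would fit back into free RAM:

```shell
# Hedged sketch, Linux only: swapoff must pull all swapped-out pages
# back into RAM, so check that in-use swap fits in available memory
# before attempting it.
swap_used_kb=$(awk '/SwapTotal/ {t=$2} /SwapFree/ {f=$2} END {print t-f}' /proc/meminfo)
mem_avail_kb=$(awk '/MemAvailable/ {print $2}' /proc/meminfo)
if [ "${swap_used_kb:-0}" -lt "${mem_avail_kb:-0}" ]; then
    echo "safe: ${swap_used_kb}K in swap, ${mem_avail_kb}K RAM available"
else
    echo "unsafe: ${swap_used_kb}K in swap exceeds ${mem_avail_kb}K available"
fi
```

Even when the check says "safe", you still inherit the no-headroom risks described above once swap is off.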
> Is there a way that Squid (& other Linux/UNIX) programs can be
> convinced that they just cannot have more RAM than is available
> without spitting the dummy such that you can use swapoff without
> fear? If not, would it be possible to build something of the kind in
> future versions of Squid? I'm sure a lot of people would love to
> know the answer to this...
If it were implemented, the "cure" might be worse than the problem:
if Squid needs more memory to service a request, is it better for it to
increase its memory footprint (possibly slowing down) or to just stop
taking requests altogether? You can limit the amount dedicated to
in-memory caching, but most of the memory use will scale with the
number of simultaneous connections and transactions, and with the size
of the disk cache.
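The in-memory caching limit mentioned above is Squid's cache_mem directive; a minimal squid.conf fragment (the 8 MB figure is an arbitrary example, not a recommendation):

```
# squid.conf fragment: cap the memory used for caching hot objects in
# RAM.  This bounds only the object cache; per-connection buffers and
# the disk-cache index still grow with load and cache size, as noted
# above.
cache_mem 8 MB
```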
Next word of reassurance: Your best bet is to leave swap enabled, and
simply watch the total memory usage of Squid (and its DNS servers,
don't forget!) with ps or top, and make sure it's not growing bigger
than your available memory. Alternatively, watch swap use with "pstat
-s" (or the Linux equivalent) and see that it either stays zero, or
grows to a small amount and then grows no further. Our main server:
% pstat -s
Device name 1K-blocks Type
sd0b 525308 Interleaved
0 (1K-blocks) allocated out of 525308 (1K-blocks) total, 0% in use
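On Linux the same two checks can be sketched roughly as follows (a hedged equivalent of my own; this mail shows the BSD tools, and the ps flags here assume a ps that supports -e and -o, e.g. procps):

```shell
# Hedged Linux sketch: sum the resident set sizes (RSS, in 1K blocks)
# of squid and its dnsserver helpers -- the figure to keep below your
# physical RAM, as suggested above.
total_kb=$(ps -eo rss=,comm= | awk '$2 ~ /squid|dnsserver/ {t+=$1} END {print t+0}')
echo "squid + dnsservers: ${total_kb}K resident"

# /proc/swaps is a rough Linux equivalent of "pstat -s": per-device
# swap size and usage in 1K blocks (skip the header line).
swap_total_kb=$(awk 'NR>1 {t+=$3} END {print t+0}' /proc/swaps)
swap_used_kb=$(awk 'NR>1 {u+=$4} END {print u+0}' /proc/swaps)
echo "${swap_used_kb}K allocated out of ${swap_total_kb}K total swap"
```

As above, what you want to see is the swap usage staying at zero, or settling at a small constant.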
As a final point, this past week I got Squid running (mostly for
laughs, but with some real use planned) on one of my home systems, an
old 486 running OpenBSD with 16MB of real RAM.  Performance isn't
great, but it works!
% pstat -s
Device 512-blocks Used Avail Capacity Type
/dev/wd0b 132048 25616 106432 19% Interleaved
% top
load averages: 0.13, 0.10, 0.08 08:58:02
22 processes: 1 running, 21 idle
CPU states: 0.3% user, 0.0% nice, 0.2% system, 0.5% interrupt, 99.1% idle
Memory: Real: 5268K/10M act/tot Free: 384K Swap: 13M/64M used/tot
-- Clifton
-- Clifton Royston -- LavaNet Systems Architect -- cliftonr@lava.net
   "An absolute monarch would be absolutely wise and good.  But no man
   is strong enough to have no interest.  Therefore the best king would
   be Pure Chance.  It is Pure Chance that rules the Universe;
   therefore, and only therefore, life is good." - AC

Received on Fri Oct 22 1999 - 13:10:39 MDT