Bruce,
To see how the logfile became so large, open it up and take a look at
what you have. Make sure that you have plenty of RAM, though...
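Actually, rather than loading the whole thing into an editor, a pager
streams the file without eating all your memory. Something like this
(the path is the usual Red Hat location; adjust for your install):

    # check the size and line count first, then page through it
    ls -lh /var/log/squid/access.log
    wc -l /var/log/squid/access.log
    less /var/log/squid/access.log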
One user? You must have been very busy or have been testing that server
for quite some time without rotating your logs.
The text file is large because it keeps track of every single HTTP
request that Squid receives. It lists the originating IP of the
requesting system, the URL, plus a date and time. Many websites will
generate more than a handful of URLs when you visit them. Most banner
ads will generate their own line as they load as well.
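For example, a single request in the default (native) log format looks
something like this -- the address and URL here are made up:

    1024002449.359    114 192.168.1.25 TCP_MISS/200 4312 GET http://www.example.com/ads/banner.gif - DIRECT/192.0.2.10 image/gif

That is one line per object fetched, which adds up fast.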
In fact, in some cases there will be one or two lines listed in an
access.log file that are for inappropriate material (porn). Of course,
those one or two lines could come from a mostly "clean" website that has
simply sold ad space to a porn site.
To test what would happen if someone were to go to a porn site, clear
out the access.log, then fire up your browser and point it directly at a
porn site without hitting any other sites. Then kill the browser and
check the log. You might be astounded by the amount of garbage that fills
the log.
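If you want to try that, something along these lines should work (the
log path is a guess for your install):

    # set the current log aside and have Squid start a fresh one
    squid -k rotate
    # ...now point the browser at the one test site, then close it...
    wc -l /var/log/squid/access.log
    less /var/log/squid/access.log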
That is just a bit of info, in case the boss wants you to find out who
is surfing for questionable material during the work day. If there are
only one or two lines, it was probably a banner ad. If a whole screen is
filled with questionable URLs, it was a porn site. Of course, that
doesn't do much for those annoying pop-up sites that load many, many
pages. That is a hard thing to cover.
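If you do get asked to dig through the log, a quick filter like this
pulls out the client IP and URL of any suspect lines (the keywords are
just placeholders, and the field numbers assume the default native log
format):

    grep -i -E 'porn|xxx' /var/log/squid/access.log \
        | awk '{print $3, $7}' | sort | uniq -c | sort -rn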
If you are really concerned about that, use ACLs to block all
questionable sites.
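As a rough sketch (the domains and file path below are just
placeholders), squid.conf entries along these lines will do it:

    # block by destination domain
    acl badsites dstdomain .porn-example.com .another-example.net
    # block by case-insensitive regex on the URL, patterns in a file
    acl badwords url_regex -i "/etc/squid/badwords"
    http_access deny badsites
    http_access deny badwords

The deny lines need to come before your existing http_access allow
rules.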
Ask me about that in a few days, when my site's ISP is supposed to
return our www connection. Once that is up, I can pull up a wealth of
info on that and forward it to you.
What I would suggest is creating a job to shut down Squid, preferably
late at night, tar.gz the log into a filename based on the date, and then
restart Squid. There is probably an easier way to do that. That is only
one idea; I would have to research how to implement it myself, as I am
not extremely familiar with cron at this time.
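Something like this is what I have in mind -- treat it as a sketch,
since the paths and the init script location are guesses for a Red Hat
box:

    #!/bin/sh
    # rotate-squid-log.sh -- stop Squid, archive the log by date, restart
    /etc/rc.d/init.d/squid stop
    cd /var/log/squid || exit 1
    tar czf access.log.`date +%Y%m%d`.tar.gz access.log
    rm -f access.log
    /etc/rc.d/init.d/squid start

with a crontab entry to run it at 3:00 AM every night:

    0 3 * * * /root/rotate-squid-log.sh

For what it is worth, that easier way probably exists already: set
"logfile_rotate 10" in squid.conf and run "squid -k rotate" from cron,
and Squid will cycle its own logs without a shutdown.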
I have to thank you for the information regarding the error message as
well. Now, I know what to expect when that message crops up.
Regards,
Robert Adkins
IT Manager/Buyer
IMPEL Industries, Inc.
-----Original Message-----
From: Squid [mailto:squid@wister.k12.ok.us]
Sent: Thursday, June 13, 2002 2:47 PM
To: mailinglistsquid-users@squid-cache.org; Squid-Users; Robert Adkins
Subject: [squid-users] RE: (Squid-users) Anybody know what this means???
Here I will reply to myself...
I ran out of disk space... my log files were each over 100 MB... and this
is on a test machine with one user...
Here is some more info... Red Hat 7.3 on an old 200 MHz machine...
Any ideas on how to limit the log size... or even why a text log is this
big...
Bruce
Squid wrote:
> Squid worked fine yesterday, now I get this... any ideas?
>
> storeUfsWriteDone:got failure (-6)
>
> Bruce