m0f0x said in the last message:
> I think you should set up -CURRENT FreeBSD boxes to test gjournal[1].
> Maybe gjournal can help you out, but you'll only know if you test it
> yourself.
>
> gjournal will probably be in the next FreeBSD engineering release,
> 7.0-RELEASE[2].
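>
> If you want to play with it, something along these lines should get a
> journaled UFS volume going on a -CURRENT box (the device name and mount
> point are only examples, see gjournal(8)):
>
>     kldload geom_journal              # load the gjournal GEOM class
>     gjournal label /dev/ad1s1d        # attach a journal to the provider
>     newfs -J /dev/ad1s1d.journal      # create UFS with journaling enabled
>     mount -o async /dev/ad1s1d.journal /cache
>
> With the journal in place, the file system should come back consistent
> after a crash without a long fsck, which is the whole point.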
>
Yep, I know; ZFS may eventually be interesting too, but that does not
change the problem.
Certainly, using -CURRENT on production servers is not very wise at all.
Still, I do not know whether the fs is the problem or not, because if
squid runs and compiles well on FreeBSD, it should handle the default
UFS2 without any problem.
Michel
> Cheers,
> m0f0x
>
> [1] http://wiki.freebsd.org//gjournal (historic)
> ... http://docs.freebsd.org/cgi/mid.cgi?20060619131101.GD1130
> [2] http://www.freebsd.org/releases/7.0R/schedule.html
>
> On Wed, 8 Aug 2007 07:12:37 -0300 (BRT)
> "Michel Santos" <michel@lucenet.com.br> wrote:
>
>>
>> I am coming back to this issue again since it still persists.
>>
>> This problem is real and easy to reproduce, and it destroys the
>> complete cache_dir content. The squid version is 2.6.STABLE14, and it
>> certainly affects all 2.6 versions I have tested so far. The problem
>> is not as easy to trigger with 2.5, where it happens in a different
>> way after an unclean shutdown.
>>
>> Reproducing it is easy: on any 2.6 version, shut down the machine with
>> an rc.shutdown timeout shorter than the time squid needs to close the
>> cache_dirs, which then kills the still-running squid process[es]. No
>> hard reset or power failure is necessary. A possible workaround is
>> sketched below.
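>>
>> One workaround (untested, the values are examples) would be to give
>> the shutdown more headroom on both sides: in /etc/rc.conf raise the rc
>> timeout,
>>
>>     rcshutdown_timeout="120"    # the stock timeout is much shorter
>>
>> and in squid.conf keep squid's own grace period well below that:
>>
>>     shutdown_lifetime 30 seconds
>>
>> so that rc.shutdown never gets a chance to kill squid while it is
>> still closing the cache_dirs.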
>>
>> After reboot, squid goes crazy with swap.state on the affected
>> cache_dirs, as you can see in the messages and the cache_dir graphs I
>> put together from two different machines in the following file:
>>
>> Important here: the partitions ARE clean from the OS's point of view,
>> so fsck is not being invoked, and running fsck manually before
>> mounting them does NOT change anything.
>>
>> You can also see, on the machine with 4 cache_dirs, that only two dirs
>> are being destroyed, probably because their size meant they took
>> longer to close.
>>
>> http://suporte.lucenet.com.br/supfiles/cache-prob.tar.gz
>>
>> This happens every single time with AUFS and DISKD, while UFS still
>> behaves the way squid-2.5 did:
>>
>>
>> - squid-2.6 creates an ever-growing swap.state until the disk is full
>> and the squid process dies because of the full disk (a possible
>> stop-gap is sketched below)
>>
>> - squid-2.5 leaves swap.state as it is and empties the cache_dirs
>> partially or completely
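>>
>> The only stop-gap I can think of for the 2.6 case (untested as a real
>> fix, the paths are examples, and it throws the index away so squid has
>> to rebuild it by scanning the directories) would be:
>>
>>     /usr/local/etc/rc.d/squid stop
>>     rm /cache1/swap.state /cache1/swap.state.new   # .new only if present
>>     /usr/local/etc/rc.d/squid start                # slow DIRTY rebuild from disk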
>>
>>
>> Even if I can see that this can be understood as an unclean shutdown,
>> I must insist that the growing swap.state, the negative cache_dir
>> Store rebuild values, and the 2000%-and-whatever values in the
>> messages are kind of strange and probably wrong.
>>
>> What I do not understand here is the following.
>>
>> So far I have always been told that the problem is a corrupted
>> swap.state file.
>>
>> But to my understanding, a cached file is referenced in swap.state as
>> soon as it is cached.
>>
>> That obviously should have happened BEFORE squid shuts down or dies,
>> so why does squid still need to write to swap.state at that stage?
>>
>> And if for any reason it did not happen, then the swap.state rebuild
>> process detects and destroys the invalid objects in each cache_dir on
>> startup anyway.
>>
>> If squid only needs to read swap.state in order to close the
>> cache_dirs, then wouldn't it be enough to have swap.state open
>> read-only? Then it certainly could not get corrupted, could it?
>>
>>
>> Since you tell me that *nobody* has this problem, which I certainly
>> cannot believe ;) but it seems you guys are using Linux or Windows,
>> might this be related to FreeBSD's soft updates on the file system,
>> which squid cannot handle? Should I disable them and check? (A sketch
>> of how to do that follows.)
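>>
>> Checking and toggling soft updates is quick; something like this
>> (device name and mount point are examples, tunefs wants the fs
>> unmounted):
>>
>>     dumpfs /dev/ad0s1f | grep -i soft   # see whether soft updates are on
>>     umount /cache1
>>     tunefs -n disable /dev/ad0s1f       # switch soft updates off
>>     mount /cache1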
>>
>>
>> michel
>> ...
>
...
****************************************************
Datacenter Matik http://datacenter.matik.com.br
E-Mail and Data Hosting Service for Professionals.
****************************************************
Received on Wed Aug 08 2007 - 07:53:34 MDT