> > Sounds like one of the ideas I have..
> >
> > Run multiple "independent" Squid processes, all using the
> > same backend store. The only thing the processes share is
> > a) The backend store
> > b) The listening socket
> > c) Some IPC mechanism for sharing the accept() load
Actually it would be nice to share something more yet:
- shared caches (at least the IP, FQDN and authentication
  caches)
- and, why not, shared in-core cached objects (see the sketch
  below)
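Something like this could work for the shared caches, assuming
plain SysV shared memory inherited across fork(). A minimal
sketch; the table layout, names and sizes are all invented, and
real use would need locking around updates (a semaphore per
bucket, say):

#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/ipc.h>
#include <sys/shm.h>
#include <time.h>
#include <unistd.h>

#define IPCACHE_SLOTS 1024      /* arbitrary table size */

struct ipcache_entry {
    char host[64];
    struct in_addr addr;
    time_t expires;
};

int main(void)
{
    size_t sz = IPCACHE_SLOTS * sizeof(struct ipcache_entry);
    /* created once in the parent, inherited by every child */
    int shmid = shmget(IPC_PRIVATE, sz, IPC_CREAT | 0600);
    struct ipcache_entry *ipcache = shmat(shmid, NULL, 0);

    memset(ipcache, 0, sz);

    if (fork() == 0) {
        /* child: same physical pages, so an answer cached by
         * one worker is immediately visible to all the others */
        strcpy(ipcache[0].host, "www.example.com");
        _exit(0);
    }
    sleep(1);                   /* crude: let the child run first */
    printf("parent sees: %s\n", ipcache[0].host);
    return 0;
}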
> Here's the cute bit - either (a) we can have a socket
> redirector process sitting at the front of the X squids, or
> (b) we can modify the ipf/ipfw/ipchains/whatever code to do
> load-balancing redirection.
(a) has some drawbacks: either the front-end process does HTTP
header parsing, or we won't be able to correctly log accesses.
(b) is heavily OS-dependent, I fear.
Why can't the OS just do the load-balancing for us? We can't
all sit on accept(), but there should be some other way to
just let the OS do the job for us.
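For what it's worth, most Unixes do already let several
processes block in accept() on one inherited listening socket,
with the kernel handing each connection to exactly one of them
(older kernels wake every waiter - the "thundering herd" - but
only one accept() succeeds). A minimal sketch of that pre-fork
model; the worker count is arbitrary and the names are
illustrative only:

#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

#define NWORKERS 4              /* arbitrary worker count */

static void worker(int lfd)
{
    for (;;) {
        int cfd = accept(lfd, NULL, NULL); /* kernel picks one waiter */
        if (cfd < 0)
            continue;
        /* ... hand cfd to the normal HTTP state machine ... */
        close(cfd);
    }
}

int main(void)
{
    int i;
    struct sockaddr_in sa;
    int lfd = socket(AF_INET, SOCK_STREAM, 0);

    memset(&sa, 0, sizeof(sa));
    sa.sin_family = AF_INET;
    sa.sin_port = htons(3128);          /* Squid's usual http_port */
    sa.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(lfd, (struct sockaddr *)&sa, sizeof(sa));
    listen(lfd, 128);

    for (i = 0; i < NWORKERS; i++)
        if (fork() == 0) {              /* children inherit lfd */
            worker(lfd);
            _exit(0);
        }
    for (;;)
        pause();                        /* parent just supervises */
}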
> > Unfortunately raw reiserfs is not entirely suitable yet due
> > to not being able to differentiate between partially and
> > fully stored objects, but I imagine it could quite easily be
> > extended to exclusive writer access (no other readers while
> > writing to an object).
>
> Depends how it is implemented. See, if we use aio, we can
> schedule reads of the file from multiple processes and have
> the reads block whilst the object is being written. Squid
> can't do that now due to how the storage manager is written,
> but I don't imagine it'd be that hard.
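To make the aio point concrete, a minimal sketch of queueing a
read with POSIX aio. The "block while a writer holds the
object" part is approximated here with an fcntl() read lock;
whether that is the right mechanism is an open question, and
the swap file name is made up:

#include <aio.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
    char buf[4096];
    struct aiocb cb;
    const struct aiocb *list[1];
    struct flock fl = { .l_type = F_RDLCK, .l_whence = SEEK_SET };
    int fd = open("/cache/00/0000A3", O_RDONLY); /* made-up swap file */

    if (fd < 0)
        return 1;
    fcntl(fd, F_SETLKW, &fl);       /* wait until no exclusive writer */

    memset(&cb, 0, sizeof(cb));
    cb.aio_fildes = fd;
    cb.aio_buf = buf;
    cb.aio_nbytes = sizeof(buf);
    cb.aio_offset = 0;

    aio_read(&cb);                  /* queue the read, don't block */
    /* ... do other work; many reads can be in flight at once ... */

    list[0] = &cb;
    aio_suspend(list, 1, NULL);     /* wait for this one to finish */
    printf("read %zd bytes\n", aio_return(&cb));
    close(fd);
    return 0;                       /* link with -lrt on Linux */
}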
Can't we just expand on the diskd concept?
Idea: the parent process forks one diskd for each store and
passes the shmem IDs to all children. Accesses are then done
through this farm of helper processes.
The idea would be to delegate as much of the storage management
as possible to this "storage manager" process.
--
/kinkie