Actually, the way to do this (I was thinking about it last night) is
to set the select/poll timeout quite low if there's stuff waiting in the
async queue. The other way would be to make it genuinely event driven
rather than polling for finished tasks. This could probably be done by
opening up a dummy FD (say an unnamed pipe) and having each sub-thread write
a single byte to it when it's done. Then include that FD in the
regular poll/select set of FDs and we have event-driven async I/O.
What do you think?
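For what it's worth, here's a minimal sketch of the unnamed-pipe idea in plain C,
assuming POSIX pipe()/select() and pthreads; the names (async_done_pipe, worker,
main_io_loop) are made up for illustration and aren't anyone's actual code:

    /*
     * Sketch: worker threads write one byte to an unnamed pipe when an async
     * operation completes; the main loop watches the pipe's read end in its
     * normal select() set alongside the network FDs.
     */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/select.h>

    static int async_done_pipe[2];   /* [0] = read end, [1] = write end */

    /* Worker thread: do the blocking work, then signal the main loop. */
    static void *worker(void *arg)
    {
        /* ... perform the async disk I/O here ... */
        char byte = 1;
        (void)write(async_done_pipe[1], &byte, 1);  /* wakes up select() */
        return arg;
    }

    static void main_io_loop(void)
    {
        for (;;) {
            fd_set rfds;
            FD_ZERO(&rfds);
            FD_SET(async_done_pipe[0], &rfds);
            /* ... FD_SET() the regular network sockets here as well ... */

            int maxfd = async_done_pipe[0];
            if (select(maxfd + 1, &rfds, NULL, NULL, NULL) < 0)
                continue;   /* e.g. EINTR */

            if (FD_ISSET(async_done_pipe[0], &rfds)) {
                char buf[64];
                (void)read(async_done_pipe[0], buf, sizeof(buf)); /* drain */
                /* ... reap completed requests from the async queue ... */
            }
            /* ... service whichever network FDs are ready ... */
        }
    }

    int main(void)
    {
        if (pipe(async_done_pipe) < 0) {
            perror("pipe");
            return EXIT_FAILURE;
        }
        pthread_t tid;
        pthread_create(&tid, NULL, worker, NULL);
        main_io_loop();   /* never returns in this sketch */
        return EXIT_SUCCESS;
    }

Single-byte pipe writes are atomic, so any number of worker threads can share the
same write end; the main loop just drains whatever has accumulated and then reaps
the queue once. The low-timeout alternative mentioned above would instead pass a
short struct timeval to select() whenever the async queue is non-empty, which is
simpler but still polls.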
The behaviour I think you're looking for has two dimensions. One is critical
and the other is 'nice if you can get it':
1) the method should adapt to changes in traffic. If traffic is light,
the outcome should be 'better' than the default for heavy traffic. An
alternative view of this is to say that the method is optimal for both
cases.
2) if the method is self-controlling, rather than having to be enabled, then
it responds dynamically, and so is inherently more efficient and more widely
applicable than one enabled by a low/high tide mark or rate monitor.
If you change the behaviour to make it unnecessary to dynamically adjust, I
think you need to show whether there is a downside cost. It may be that even a
slight shift of the median response/load ratio would be 'worse' for a loaded
cache. If the variance of cache-served load is high, then you might live with
that, to retain responsiveness under light load. If it's always saturated, then
the change is not a good idea.
So does your proposal meet {either,both} conditions?
-George
--
George Michaelson       | DSTC Pty Ltd
Email: ggm@dstc.edu.au  | University of Qld 4072
Phone: +61 7 3365 4310  | Australia
Fax: +61 7 3365 4311    | http://www.dstc.edu.au