On 21/01/2013 11:54 p.m., Steve Hill wrote:
> On 18.01.13 18:37, Alex Rousskov wrote:
>
>> However, this brings us back to trunk r11920 and bug 3420.
>
> Ok, reading that bug it seems I shouldn't be calling
> enableAutoConsumption() directly - this needs to be handled through
> expectNoConsumption()?
>
> I rather suspected this was going to be the case to be honest -
> calling enableAutoConsumption() directly seemed messy and probably wrong.
>
>> Indeed, having two methods with approximately the same purpose and a
>> hidden critical distinction is asking for trouble. A lot of existing
>> expectNoForwarding-path code comments describe the situation that
>> matches yours. Moreover, as far as I can see, we may call the old
>> expectNoForwarding() first and then call your forwardToBitbucket()
>> later, all for the same error!
>
> I'm not sure under what circumstances we'd ever want the current
> expectNoForwarding() behaviour (i.e. closing the socket immediately).
> It seems to me that whatever happens, we should be allowing the
> client's request to conclude (possibly with some timeout to prevent
> the client from tying up the connection forever).
>
> As far as I can tell, we ideally have the following options:
>
> 1. Return a response with Connection: keep-alive, wait until we've
> received the whole request from the client and then start reading the
> next request.
> 2. Return a response with Connection: close, wait until we've received
> the whole request from the client and then close the socket.
This (2) opens the door to "Slowloris"-style attacks. If we are
sure there is nothing to follow, AND the server side is not using the
data, we should close as soon as possible. IIRC this early closure is the
case we have comm_lingering_close() for, to ensure both FIN packets and
any pending data are handled gracefully.
As an alternative if we don't care about the client receiving any
data/error page from our side we can use comm_reset_close() to send a
TCP RST packet down the line.
>
> So I'm not sure there's ever any requirement to ditch the connection
> before we've got the whole request - obviously in the case of a
> completely fatal error it would be nice to not have to deal with the
> overhead of a large POST that is just going to end up in the
> bitbucket, but I'm not sure we actually have a choice since closing
> the socket early may not be handled well by the client.
>
>> 1. Authentication: Must read next request on the same connection so must
>> have readNextRequest set (if not, this is probably a bug). Have to live
>> with the possibility of the client sending forever.
>
> I suppose whether a client can send forever depends on the
> encoding it's using. If the client has set a Content-Length header
> then it can only send that much content. If the client is using
> chunked encoding then no upper limit on content length is required (I
> think?), so in this case maybe Squid needs to impose its own limit and
> chop the connection if it's exceeded.
>
> In either case, a client sitting idle for a long time should probably
> be dropped anyway.
>
Yes. read_timeout during a transaction, client_idle_pconn_timeout when
between requests.
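For reference, both of those are tunable in squid.conf; the values below are illustrative, not the defaults:

```
# abort a transaction if the client stalls mid-request for this long
read_timeout 5 minutes

# drop a persistent client connection left idle between requests
client_idle_pconn_timeout 2 minutes
```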
>> 2. Adaptation errors: The code may have been written to abort the client
>> connection, but it also sets readMore or readNextRequest. This sounds
>> inconsistent. We should review this, especially since this is the only
>> place where we call expectNoForwarding() today!
>
> Just to be clear, in what circumstances does the adaptation code need to
> abort the client connection? The only one I can think of off the top
> of my head is where something breaks half way through handling the
> response body (i.e. we haven't actually delivered an error to the
> client - we've started delivering a real response but can't complete it).
>
> As an aside, the adaptation errors probably need a bit of review anyway;
> I especially like the "Communication with the ICAP server produced the
> following error: No error" error messages. :) (Can't recall the exact
> wording, but it's along those lines).
>
>> * Use expectNoForwarding() instead of adding forwardToBitbucket().
>>
>> * In ConnStateData::noteBodyConsumerAborted() do not stop reading if
>> flags.readMore is set. Just keep auto-consuming. This may prevent
>> authenticating connections closures even though expectNoForwarding() is
>> used. Needs to be tested: Will we correctly stop and switch to the new
>> request when the body finally ends?
>
> I shall have a look at these today (hopefully :) and see if I can
> engineer a better patch.
>
>> * Discuss and possibly remove readNextRequest setting from
>> ClientHttpRequest::handleAdaptationFailure(), to remove inconsistency
>> discussed in #2 above.
>
> I'll have a look at this, although I'm quite unfamiliar with that bit
> of code so may not be able to understand it well enough.
I *think* that might be a good idea. The way HTTP/2 is progressing, we
are going to be multiplexing requests on each connection very shortly.
At the framing level each request will be isolated from the others rather
than dependent on the previous one completing. So all we need is
signals about the state of the current request; the ConnStateData
manager object will have to keep the other request(s) isolated from
how this one is handled.
Amos
Received on Mon Jan 21 2013 - 22:57:40 MST