With the help of some more testing, I managed to capture this
again. Below is cache.log output at log level 5 for an attempt to
connect to an HTTP server which didn't exist (http://76.31.53.108 in
this case).
The 'shutdown' sequence starts with the client closing its connection
to the server:
2013/01/16 17:18:55.145| comm.cc(143) commHandleRead: comm_read_try: FD 10, size 4095, retval 0, errno 0
2013/01/16 17:18:55.145| IoCallback.cc(108) finish: called for local=76.31.53.108:80 remote=10.1.3.64:53181 FD 10 flags=33 (0, 0)
2013/01/16 17:18:55.145| AsyncCall.cc(85) ScheduleCall: IoCallback.cc(127) will call ConnStateData::clientReadRequest(local=76.31.53.108:80 remote=10.1.3.64:53181 FD 10 flags=33, data=0xe93ca8, size=0, buf=0xe1b2d0) [call85454]
2013/01/16 17:18:55.145| AsyncCallQueue.cc(51) fireNext: entering ConnStateData::clientReadRequest(local=76.31.53.108:80 remote=10.1.3.64:53181 FD 10 flags=33, data=0xe93ca8, size=0, buf=0xe1b2d0)
2013/01/16 17:18:55.145| AsyncCall.cc(30) make: make call ConnStateData::clientReadRequest [call85454]
2013/01/16 17:18:55.145| AsyncJob.cc(116) callStart: ConnStateData status in: [ job64]
2013/01/16 17:18:55.145| client_side.cc(2932) clientReadRequest: local=76.31.53.108:80 remote=10.1.3.64:53181 FD 10 flags=33 size 0
2013/01/16 17:18:55.145| client_side.cc(2968) clientReadRequest: local=76.31.53.108:80 remote=10.1.3.64:53181 FD 10 flags=33 closed?
2013/01/16 17:18:55.145| client_side.cc(2414) connFinishedWithConn: local=76.31.53.108:80 remote=10.1.3.64:53181 FD 10 flags=33 aborted (half_closed_clients disabled)
2013/01/16 17:18:55.145| comm.cc(1087) _comm_close: comm_close: start closing FD 10
2013/01/16 17:18:55.146| comm.cc(745) commUnsetFdTimeout: Remove timeout for FD 10
2013/01/16 17:18:55.146| comm.cc(940) commCallCloseHandlers: commCallCloseHandlers: FD 10
2013/01/16 17:18:55.146| comm.cc(948) commCallCloseHandlers: commCallCloseHandlers: ch->handler=0xe87d20*1
2013/01/16 17:18:55.146| AsyncCall.cc(85) ScheduleCall: comm.cc(949) will call ConnStateData::connStateClosed(FD -1, data=0xe93ca8) [call85449]
2013/01/16 17:18:55.146| AsyncCall.cc(18) AsyncCall: The AsyncCall comm_close_complete constructed, this=0xe635a0 [call85571]
2013/01/16 17:18:55.146| AsyncCall.cc(85) ScheduleCall: comm.cc(1163) will call comm_close_complete(FD 10) [call85571]
2013/01/16 17:18:55.146| AsyncJob.cc(145) callEnd: ConnStateData status out: [ job64]
2013/01/16 17:18:55.146| AsyncCallQueue.cc(53) fireNext: leaving ConnStateData::clientReadRequest(local=76.31.53.108:80 remote=10.1.3.64:53181 flags=33, data=0xe93ca8, size=0, buf=0xe1b2d0)
2013/01/16 17:18:55.146| AsyncCallQueue.cc(51) fireNext: entering ConnStateData::connStateClosed(FD -1, data=0xe93ca8)
2013/01/16 17:18:55.146| AsyncCall.cc(30) make: make call ConnStateData::connStateClosed [call85449]
2013/01/16 17:18:55.146| AsyncJob.cc(116) callStart: ConnStateData status in: [ job64]
2013/01/16 17:18:55.146| AsyncJob.cc(48) deleteThis: ConnStateData will NOT delete in-call job, reason: ConnStateData::connStateClosed
2013/01/16 17:18:55.146| AsyncJob.cc(130) callEnd: ConnStateData::connStateClosed(FD -1, data=0xe93ca8) ends job [Stopped, reason:ConnStateData::connStateClosed job64]
2013/01/16 17:18:55.146| client_side.cc(784) swanSong: local=76.31.53.108:80 remote=10.1.3.64:53181 flags=33
[...]
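As an aside, the "retval 0, errno 0" read at the very top of the
trace is the usual way a peer close shows up: a readable socket that
returns zero bytes has hit EOF. A minimal sketch of that check in
plain POSIX terms (an illustration only, not Squid's comm layer):

#include <sys/socket.h>
#include <sys/types.h>

// Returns true if the peer closed its end of the connection. Squid's
// commHandleRead performs the moral equivalent, after which
// clientReadRequest/connFinishedWithConn start the comm_close
// sequence seen in the trace above.
static bool peerClosed(int fd)
{
    char buf[4096];
    const ssize_t n = recv(fd, buf, sizeof(buf), 0);
    if (n == 0)
        return true;  // zero-byte read: EOF, the client has closed
    // n > 0 means request bytes arrived; n < 0 means a (possibly
    // transient) error such as EAGAIN on a non-blocking socket.
    return false;
}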
In the process of tearing the client side down,
2013/01/16 17:18:55.146| AsyncCallQueue.cc(51) fireNext: entering comm_close_complete(FD 10)
2013/01/16 17:18:55.146| AsyncCall.cc(30) make: make call comm_close_complete [call85571]
2013/01/16 17:18:55.146| fd.cc(116) fd_close: fd_close FD 10 http://76.31.53.108/
2013/01/16 17:18:55.146| ModEpoll.cc(139) SetSelect: FD 10, type=1, handler=0, client_data=0, timeout=0
2013/01/16 17:18:55.146| ModEpoll.cc(139) SetSelect: FD 10, type=2, handler=0, client_data=0, timeout=0
2013/01/16 17:18:55.146| AcceptLimiter.cc(42) kick: size=0
2013/01/16 17:18:55.146| AsyncCallQueue.cc(53) fireNext: leaving comm_close_complete(FD 10)
2013/01/16 17:18:55.146| event.cc(250) checkEvents: checkEvents
2013/01/16 17:18:55.146| AsyncCall.cc(18) AsyncCall: The AsyncCall mem_obj->abort.callback constructed, this=0xea5fe0 [call85572]
2013/01/16 17:18:55.146| AsyncCall.cc(85) ScheduleCall: event.cc(259) will call mem_obj->abort.callback(0xe08128*?) [call85572]
2013/01/16 17:18:55.146| AsyncCallQueue.cc(51) fireNext: entering mem_obj->abort.callback(0xe08128*?)
2013/01/16 17:18:55.146| AsyncCall.cc(30) make: make call mem_obj->abort.callback [call85572]
2013/01/16 17:18:55.146| forward.cc(240) ~FwdState: FwdState destructor starting
2013/01/16 17:18:55.146| forward.cc(206) completed: entry aborted
2013/01/16 17:18:55.146| store.cc(570) unlock: StoreEntry::unlock: key '282B5777A2A8B2DA9D31ADD62A609D88' count=0
2013/01/16 17:18:55.146| store_client.cc(782) storePendingNClients: storePendingNClients: returning 0
2013/01/16 17:18:55.146| store.cc(1262) release: storeRelease: Releasing: '282B5777A2A8B2DA9D31ADD62A609D88'
2013/01/16 17:18:55.146| store.cc(466) destroyStoreEntry: destroyStoreEntry: destroying 0xe4fa28
2013/01/16 17:18:55.146| store.cc(444) destroyMemObject: destroyMemObject 0xe4fa90
2013/01/16 17:18:55.146| MemObject.cc(111) ~MemObject: del MemObject 0xe4fa90
2013/01/16 17:18:55.146| AsyncCall.cc(48) cancel: will not call fwdConnectDoneWrapper [call85455] because FwdState destructed
2013/01/16 17:18:55.146| forward.cc(270) ~FwdState: FwdState destructor done
the FwdState etc. are destroyed. A little later, the ConnectTimeout is
scheduled,
2013/01/16 17:18:56.999| AsyncCall.cc(18) AsyncCall: The AsyncCall ConnectTimeout constructed, this=0xea5fe0 [call85575]
2013/01/16 17:18:56.999| AsyncCall.cc(85) ScheduleCall: event.cc(259) will call ConnectTimeout(0xe4cd60) [call85575]
2013/01/16 17:18:56.999| AsyncCallQueue.cc(51) fireNext: entering ConnectTimeout(0xe4cd60)
2013/01/16 17:18:56.999| AsyncCall.cc(30) make: make call ConnectTimeout [call85575]
2013/01/16 17:18:56.999| AsyncCall.cc(85) ScheduleCall: ConnOpener.cc(398) will call Comm::ConnOpener::timeout(local=0.0.0.0 remote=76.31.53.108:80 flags=1, data=0xea6788) [call85459]
2013/01/16 17:18:56.999| AsyncCallQueue.cc(53) fireNext: leaving ConnectTimeout(0xe4cd60)
2013/01/16 17:18:56.999| AsyncCallQueue.cc(51) fireNext: entering Comm::ConnOpener::timeout(local=0.0.0.0 remote=76.31.53.108:80 flags=1, data=0xea6788)
2013/01/16 17:18:56.999| AsyncCall.cc(30) make: make call Comm::ConnOpener::timeout [call85459]
2013/01/16 17:18:56.999| AsyncJob.cc(116) callStart: Comm::ConnOpener status in: [ job65]
2013/01/16 17:18:56.999| AsyncJob.cc(130) callEnd: Comm::ConnOpener::timeout(local=0.0.0.0 remote=76.31.53.108:80 flags=1, data=0xea6788) ends job [ job65]
but not actually executed (no diagnostic) because of the initial
callback check (a sketch of that check follows the next trace).
Afterwards, the half-open server-side connection is torn down:
2013/01/16 17:18:56.999| AsyncCall.cc(48) cancel: will not call Comm::ConnOpener::earlyAbort [call85458] because Comm::ConnOpener::swanSong
2013/01/16 17:18:56.999| AsyncCall.cc(48) cancel: will not call Comm::ConnOpener::timeout [call85459] because Comm::ConnOpener::swanSong
2013/01/16 17:18:56.999| ConnOpener.cc(141) doneConnecting: local=0.0.0.0 remote=76.31.53.108:80 flags=1 closing temp FD 12
2013/01/16 17:18:56.999| fd.cc(116) fd_close: fd_close FD 12 76.31.53.108
2013/01/16 17:18:56.999| ModEpoll.cc(139) SetSelect: FD 12, type=1, handler=0, client_data=0, timeout=0
2013/01/16 17:18:56.999| ModEpoll.cc(139) SetSelect: FD 12, type=2, handler=0, client_data=0, timeout=0
2013/01/16 17:18:56.999| AsyncJob.cc(33) ~AsyncJob: AsyncJob destructed, this=0xea6788 type=Comm::ConnOpener [job65]
2013/01/16 17:18:56.999| AsyncCallQueue.cc(53) fireNext: leaving Comm::ConnOpener::timeout(local=0.0.0.0 remote=76.31.53.108:80 flags=1, data=0xea6788)
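To make the "initial callback check" mentioned above concrete, here
is a heavily simplified, hypothetical sketch of the pattern (the
names and structure are mine, not Squid's actual AsyncCall code): the
job remembers which timeout call it currently has registered, and the
handler returns early, without any diagnostic, when the call that
fired is no longer that one:

#include <iostream>
#include <memory>

struct CallStub { const char *name; };
using CallPtr = std::shared_ptr<CallStub>;

class OpenerStub
{
    CallPtr registeredTimeout_; // the timeout call the job still expects

public:
    void registerTimeout(const CallPtr &c) { registeredTimeout_ = c; }
    void clearTimeout() { registeredTimeout_.reset(); }

    // Entered from the call queue, like Comm::ConnOpener::timeout above.
    void timeout(const CallPtr &firing)
    {
        if (firing != registeredTimeout_)
            return; // stale call: bail out silently, hence "no diagnostic"
        std::cout << "timeout: giving up on the connect attempt\n";
        // a real handler would run the doneConnecting() path here
    }
};

int main()
{
    OpenerStub job;
    CallPtr call = std::make_shared<CallStub>(CallStub{"ConnOpener::timeout"});
    job.registerTimeout(call);
    job.clearTimeout(); // client-side teardown raced ahead of the event
    job.timeout(call);  // enters and leaves without doing anything
    return 0;
}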
This behaviour is consistent with the original code, except that the
check which prevents the timeout code from executing now happens
elsewhere. If this is not the proper way to deal with this situation,
what is?
An argument against it: had the timeout handler actually been
executed in this case, doneConnecting would have called
ipcacheMarkBadAddr(host_, conn_->remote); for the destination
address, which looks as if it might be desirable.
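For context, a small self-contained sketch of what executing the
handler would have bought us; everything here is stubbed except the
ipcacheMarkBadAddr call quoted above:

#include <string>

struct IpAddress { std::string text; };
struct Connection { IpAddress remote; };

// Stub standing in for Squid's ipcache bad-address bookkeeping;
// marking an address bad lets later connect attempts prefer the
// host's other addresses.
static void ipcacheMarkBadAddr(const char *, const IpAddress &) {}

struct ConnOpenerSketch {
    const char *host_;
    Connection *conn_;

    // Hypothetical shape of the failure path: on a timed-out connect,
    // record that this particular address for host_ is bad before
    // reporting the failure back.
    void doneConnecting(bool timedOut)
    {
        if (timedOut)
            ipcacheMarkBadAddr(host_, conn_->remote); // the call quoted above
        // ... then hand the (failed) connection back to the requestor
    }
};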