Hi.
Thank you for your reply.
I have attached the valgrind logs for builds both with and without the
'--with-valgrind-support' option. The build was configured as follows:
./configure --prefix=/home/jc/LEAK/HPM1/3.1.9/
[--with-valgrind-support] --enable-icap-client
--enable-linux-netfilter --with-filedescriptors=65535
The execution command line was:
valgrind --leak-check=full --show-reachable=yes
--log-file=./valgrind.log ./3.1.9/sbin/squid -d 2 -N -f ./hpm1.conf
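
To confirm which of the two binaries is which, 'squid -v' prints the
configure options the binary was built with (a suggested check, not
part of the original test):

# expect '--with-valgrind-support' in the instrumented build's output
./3.1.9/sbin/squid -v | grep -i valgrind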
Also, note that there appeared to be a large and growing number of
cbdata HttpStateData entries when viewing memory usage via
cachemgr.cgi.
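
For reference, the same counters can be read from the command line; a
sketch using squidclient, assuming the cachemgr password from
squid1.conf below:

# mgr:mem lists allocation counts per type, including cbdata entries
squidclient -h 127.0.0.11 -p 3128 mgr:mem@cachemgr | grep HttpStateData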
Thanks again,
John Craws
On Fri, Jan 7, 2011 at 7:27 PM, Amos Jeffries <squid3_at_treenet.co.nz> wrote:
> On 08/01/11 08:57, John Craws wrote:
>>
>> Hi,
>>
>> I originally posted this on December 14th, but did not get any reply.
>> Maybe someone will be able to help this time.
>>
>
> Can you ensure you are using a build with valgrind support, and please run
> these tests with SQUID1 started inside valgrind. That will hopefully give
> a useful report on what memory is leaking.
>
> FWIW: we had leaks fixed in 3.1.10, though these symptoms don't appear to
> match any of those fixed problems.
>
> Amos
>
>> Thanks,
>>
>> --
>>
>> I am seeing a level of memory consumption that I do not understand
>> from a squid instance configured to use a single cache_peer over
>> multicast ICP.
>>
>> Please disregard oddities in the setup unless they are actually relevant;
>> this is for testing purposes and is not meant to be used as is.
>> The same behavior has been observed on a more standard, multi-machine
>> setup.
>> Tried on 3.1.4, 3.1.9 and 3.2.0.3.
>>
>> [Instance 1 (SQUID1 / 127.0.0.11)] squid1.conf
>>
>> visible_hostname squid1
>> debug_options ALL,1
>> http_access allow all
>> acl manager proto cache_object
>> acl localhost src 127.0.0.0/24
>> http_access allow manager localhost
>> cache_mgr cachemgr
>> cachemgr_passwd cachemgr
>> cache deny all
>> http_port 127.0.0.11:3128
>> http_port 127.0.0.11:3129 transparent
>> coredump_dir .
>> cache_mem 8 MB
>> icp_port 3130
>> icp_access allow all
>> cache_peer 239.255.99.8 multicast 3128 3130 ttl=64
>> icp_query_timeout 2000
>> cache_peer 172.16.5.69 sibling 3128 3130 multicast-responder proxy-only
>> access_log access.log
>> cache_log cache.log
>> cache_store_log none
>> pid_filename squid1.pid
>>
>> [Instance 2 (SQUID2 / 172.16.5.69)] squid2.conf
>>
>> visible_hostname squid2
>> debug_options ALL,1
>> http_access allow all
>> acl manager proto cache_object
>> acl localhost src 127.0.0.0/24
>> http_access allow manager localhost
>> cache_mgr cachemgr
>> cachemgr_passwd cachemgr
>> http_port 172.16.5.69:3128
>> http_port 172.16.5.69:3129 transparent
>> coredump_dir .
>> cache_mem 512 MB
>> icp_port 3130
>> icp_access allow all
>> mcast_groups 239.255.99.8
>> icp_query_timeout 2000
>> access_log access.log
>> cache_log cache.log
>> cache_store_log none
>> pid_filename squid2.pid
>>
>> ---------------------------------------
>>
>> A lighttpd origin server listens at 127.0.0.33. It is used strictly to
>> prime the SQUID2 cache.
>> JMeter is used to perform the following steps (also validated with
>> squidclient and wget).
>>
>> Step 1:
>> SQUID2 is primed with a large number (20,000) of small (20 kB) distinct
>> objects.
>> SQUID1 has not received a single request.
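>>
>> For reference, a minimal wget loop that would do the same priming (the
>> actual test used JMeter; the URL pattern is an assumption based on the
>> access.log entry shown under Step 2):
>>
>> # fetch 20,000 distinct objects from the origin through SQUID2
>> for i in $(seq 1 20000); do
>>   http_proxy=http://172.16.5.69:3128 \
>>     wget -q -O /dev/null "http://127.0.0.33/media/20k.jpg-$i"
>> done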
>>
>> Step 2:
>> SQUID1 receives a continuous flow of requests for objects that have
>> already been cached by SQUID2 (from the origin server).
>> All requests lead to an entry in access.log similar to:
>>
>> 1292361190.281 1 127.0.0.11 TCP_MISS/200 20848 GET
>> http://127.0.0.33/media/20k.jpg-20362 - SIBLING_HIT/172.16.5.69
>> application/octet-stream
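>>
>> Again for reference, a sketch of the request loop against SQUID1 (the
>> actual load was generated with JMeter; the URL pattern is an assumption
>> based on the entry above):
>>
>> # request already-primed objects through SQUID1 in a tight loop
>> while true; do
>>   i=$(( (RANDOM % 20000) + 1 ))   # bash: pick one of the primed objects
>>   http_proxy=http://127.0.0.11:3128 \
>>     wget -q -O /dev/null "http://127.0.0.33/media/20k.jpg-$i"
>> done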
>>
>> Result:
>> Memory consumption by SQUID1 increases rapidly and constantly.
>> Below is the output of a simple script that prints selected fields of
>> the ps output for the squid process every 5 seconds:
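>>
>> A sketch of such a script (the field order matches the header below;
>> the pid_filename from squid1.conf is assumed):
>>
>> while true; do
>>   ps -o etime,cmd,rss,vsz,%cpu,%mem --no-headers -p "$(cat squid1.pid)"
>>   sleep 5
>> done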
>>
>> [ETIME, CMD, RSS, VSZ, %CPU, %MEM]
>> 01:16 ./3.1.9/sbin/squid -d 2 -N 11156 50540 0.6 0.2
>> 01:21 ./3.1.9/sbin/squid -d 2 -N 11156 50540 0.6 0.2
>> 01:26 ./3.1.9/sbin/squid -d 2 -N 11156 50540 0.6 0.2
>> 01:31 ./3.1.9/sbin/squid -d 2 -N 15948 55028 2.8 0.4
>> 01:36 ./3.1.9/sbin/squid -d 2 -N 22256 61220 7.4 0.5
>> 01:41 ./3.1.9/sbin/squid -d 2 -N 28704 67684 11.8 0.7
>> 01:46 ./3.1.9/sbin/squid -d 2 -N 34944 74016 15.6 0.8
>> 01:51 ./3.1.9/sbin/squid -d 2 -N 41424 80472 19.2 1.0
>> 01:56 ./3.1.9/sbin/squid -d 2 -N 47784 86796 22.5 1.2
>> 02:01 ./3.1.9/sbin/squid -d 2 -N 54272 93248 25.6 1.3
>> 02:06 ./3.1.9/sbin/squid -d 2 -N 60716 99704 28.3 1.5
>> 02:11 ./3.1.9/sbin/squid -d 2 -N 66844 105908 31.0 1.7
>> 02:16 ./3.1.9/sbin/squid -d 2 -N 73004 111976 33.3 1.8
>> 02:21 ./3.1.9/sbin/squid -d 2 -N 79644 118700 35.6 2.0
>> 02:26 ./3.1.9/sbin/squid -d 2 -N 85916 124888 37.6 2.1
>> 02:31 ./3.1.9/sbin/squid -d 2 -N 91932 130948 39.5 2.3
>> 02:36 ./3.1.9/sbin/squid -d 2 -N 98268 137284 41.3 2.5
>> 02:41 ./3.1.9/sbin/squid -d 2 -N 104452 143476 43.0 2.6
>> 02:46 ./3.1.9/sbin/squid -d 2 -N 111108 150188 44.6 2.8
>> 02:51 ./3.1.9/sbin/squid -d 2 -N 117404 156392 46.2 2.9
>> 02:56 ./3.1.9/sbin/squid -d 2 -N 123960 162984 47.6 3.1
>> 03:01 ./3.1.9/sbin/squid -d 2 -N 129896 168912 48.9 3.3
>> 03:06 ./3.1.9/sbin/squid -d 2 -N 136360 175364 50.2 3.4
>> 03:11 ./3.1.9/sbin/squid -d 2 -N 142368 181416 51.3 3.6
>> 03:16 ./3.1.9/sbin/squid -d 2 -N 148480 187464 52.4 3.7
>> 03:21 ./3.1.9/sbin/squid -d 2 -N 154592 193660 53.5 3.9
>> 03:27 ./3.1.9/sbin/squid -d 2 -N 160800 199856 54.2 4.1
>> 03:32 ./3.1.9/sbin/squid -d 2 -N 166948 205920 55.2 4.2
>> 03:37 ./3.1.9/sbin/squid -d 2 -N 172984 211980 56.1 4.4
>> 03:42 ./3.1.9/sbin/squid -d 2 -N 179048 218044 57.0 4.5
>> 03:47 ./3.1.9/sbin/squid -d 2 -N 184972 223976 57.8 4.7
>> 03:52 ./3.1.9/sbin/squid -d 2 -N 191468 230432 58.6 4.8
>> 03:57 ./3.1.9/sbin/squid -d 2 -N 197524 236496 59.4 5.0
>> 04:02 ./3.1.9/sbin/squid -d 2 -N 203284 242300 60.1 5.1
>> 04:07 ./3.1.9/sbin/squid -d 2 -N 209296 248352 60.8 5.3
>> 04:12 ./3.1.9/sbin/squid -d 2 -N 215384 254412 61.5 5.4
>> 04:17 ./3.1.9/sbin/squid -d 2 -N 221652 260612 62.2 5.6
>> 04:22 ./3.1.9/sbin/squid -d 2 -N 227964 266944 62.9 5.8
>> 04:27 ./3.1.9/sbin/squid -d 2 -N 233808 272876 63.5 5.9
>> 04:32 ./3.1.9/sbin/squid -d 2 -N 239676 278668 64.0 6.1
>> 04:37 ./3.1.9/sbin/squid -d 2 -N 245720 284732 64.6 6.2
>> 04:42 ./3.1.9/sbin/squid -d 2 -N 251520 290532 65.1 6.4
>> 04:47 ./3.1.9/sbin/squid -d 2 -N 257524 296592 65.7 6.5
>> 04:52 ./3.1.9/sbin/squid -d 2 -N 263496 302524 66.2 6.7
>> 04:57 ./3.1.9/sbin/squid -d 2 -N 269452 308464 66.7 6.8
>> 05:02 ./3.1.9/sbin/squid -d 2 -N 275408 314396 67.1 7.0
>> 05:07 ./3.1.9/sbin/squid -d 2 -N 281600 320592 67.6 7.1
>> 05:12 ./3.1.9/sbin/squid -d 2 -N 287572 326636 68.1 7.3
>> 05:17 ./3.1.9/sbin/squid -d 2 -N 293692 332700 68.5 7.4
>> 05:22 ./3.1.9/sbin/squid -d 2 -N 299728 338764 68.9 7.6
>> 05:27 ./3.1.9/sbin/squid -d 2 -N 305676 344700 69.3 7.8
>> 05:32 ./3.1.9/sbin/squid -d 2 -N 311668 350632 69.8 7.9
>> 05:37 ./3.1.9/sbin/squid -d 2 -N 317704 356692 70.1 8.1
>> 05:42 ./3.1.9/sbin/squid -d 2 -N 323652 362624 70.5 8.2
>> 05:47 ./3.1.9/sbin/squid -d 2 -N 329700 368676 70.9 8.4
>>
>>
>> I would definitely appreciate any help in understanding this issue.
>>
>> Thank you,
>>
>> John Craws
>
>
> --
> Please be using
> Current Stable Squid 2.7.STABLE9 or 3.1.10
> Beta testers wanted for 3.2.0.4
>