Posts by krazykat1980

    The 20% is from the refresh_pattern in the link you posted above about dynamic content, and it's the default in 2.7...

    From the config file:
    # 'Percent' is a percentage of the objects age (time since last
    # modification age) an object without explicit expiry time
    # will be considered fresh.

    Which I think doesn't affect most sites I've seen since they all use 'Expires'....

    So, 0% would mean it would always be stale if it didn't have an 'Expires'... I think... and a bigger percentage would keep it fresh longer (though even 100% wouldn't mean 'never stale', just fresh for as long as the object was old when it was fetched)... but I could be reading it wrong... this stuff is confusing (which is why after using Squid for 3+ years I'm still using almost completely default settings lol)...
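
    If I'm reading the docs right, it works roughly like this (my own back-of-the-envelope example, not from the config file):
    # object has no 'Expires' and was last modified 10 days before Squid fetched it
    # with the default 'refresh_pattern . 0 20% 4320':
    #   freshness lifetime = 20% of 10 days = 2 days (capped at 4320 minutes = 3 days)
    # so Squid serves it from cache for ~2 days, then revalidates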

    I'm afraid to get rid of that line completely because I don't want to treat truly dynamic sites as if they were static.

    I just figured cloudflare would keep it for a day or 2 at least...

    I think I 'might' have fixed it by adding 'reload-into-ims' to these lines:
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 reload-into-ims
    refresh_pattern . 0 20% 4320 reload-into-ims

    Using Wireshark I got a look at the headers on my end... it seems I send no-cache and/or max-age=0, which makes no sense... although there was also some really odd cookie behavior... I'd end up with a string of several cookies, sometimes including one that expired in 1970... I think that one usually came right after one that did something like set-cookie: randomStuff: delete
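
    From what I've read, the 1970 expiry is just the standard way a server deletes a cookie, so the pair probably looked something like this (the cookie name is made up, I didn't save the exact capture):
    Set-Cookie: randomStuff=deleted; expires=Thu, 01 Jan 1970 00:00:01 GMT; path=/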

    I would send an 'If-None-Match', get a 304, and still download it... odd... I'm not thrilled with this solution as it might cause problems on other sites (although it's fixed a couple too lol)... I'd like to figure out how to apply it to just specific sites when I have a little more time.
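
    Something like this is what I'm thinking of trying when I get the time (completely untested, and the site regex is just a guess), since refresh_pattern rules are matched top to bottom:
    refresh_pattern -i ^http://flowergame\.net/ 0 20% 4320 reload-into-ims
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
    refresh_pattern . 0 20% 4320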

    I was about to ask you if you'd be willing to disable the 'Vary' header temporarily, because if you're running Apache, certain versions assign the same ETag to the gzipped and non-gzipped versions of a file....
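
    If it helps, I think the temporary change would just be something like this with mod_headers (an untested guess on my part, and FileETag is the usual workaround I've seen mentioned for the ETag/gzip thing):
    Header unset Vary
    # or, for the ETag side of it:
    FileETag None
    Header unset ETag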

    But so far it looks like it might be solved by the 'reload-into-ims'... I'll let you know if it isn't. Thank you so much for the help!

    (now if I could just figure out where to set 'stale-if-error' for a problem on another site... maybe it goes in the same place as the ims one... lol)
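
    (For my own notes: from what I've read, 'stale-if-error' isn't a squid.conf option at all but a Cache-Control extension the origin server sends, something like the line below... whether my Squid actually honors it is another question.)
    Cache-Control: max-age=600, stale-if-error=86400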

    I swear that said Nov not Dec... grrrr... I'm going to throw my monitor if this keeps up... I'm going to have to see if I can get a look at the headers on my end - they might be different from what redbot sees, and the headers my end sends might have something to do with it... maybe Firefox being HTTP/1.1 and Squid being 1.0 has something to do with it. Except Firefox never kept it more than a day or 2 either, but it also usually crashed and lost its cache every few days from being open 24/7 (the reason for Squid).

    Thank you for the forum fix! And thank you for being so patient trying to help me figure this out. I'm even learning a lot about headers and caching that will help me when I finally start working on any of my websites.

    My max file size in squid is something like 4mb, so that's not the problem... the fix for the dynamic content was default in 2.7... Cache size is 5gb and contains less than 300mb... My clock is definitely right...
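
    Roughly what the relevant squid.conf lines look like (from memory, and the cache path is just a placeholder):
    maximum_object_size 4096 KB
    cache_dir ufs c:/squid/var/cache 5120 16 256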

    Next is to look at the headers on my end (I think Wireshark gives me that info), make sure my plugins like ImgLikeOpera aren't interfering, and compare the headers to sites that definitely cache correctly to see if I can find a difference. Maybe upload an image to my server and make sure the fix for dynamic content actually works. Make sure Squid is actually sending an "If-Modified-Since". Then if I can't find anything it'll be time to post my squid.conf file on Stack Exchange and see if anyone else can figure it out.
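
    For reference, this is roughly the exchange I should be seeing in Wireshark if the revalidation is working (illustrative, not a real capture):
    GET /img/bg/body_top.png HTTP/1.0
    Host: flowergame.net
    If-Modified-Since: Mon, 23 Nov 2015 13:42:15 GMT

    HTTP/1.0 304 Not Modified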

    What's interesting is that Cloudflare isn't keeping it cached either, but I don't see that being able to mess my cache up. Or at least the first time after I don't have it cached that I run it through redbot it's saying:
    CF-Cache-Status: MISS
    Then next load it's a HIT.

    Unfortunately, the dynamic content thing doesn't apply, it was fixed in 2.7 (I checked my config anyways hoping that was the answer for some other caching issues, but those lines were fixed)... and it wouldn't explain these 2:
    http://flowergame.net/img/bg/body_bottom.png
    http://flowergame.net/img/bg/body_top.png
    (it actually seems none of the images were cached, but I only noticed the 3 because they loaded slowest)

    Although this does explain it (today's redbot header):
    Date: Mon, 23 Nov 2015 13:42:15 GMT
    Expires: Wed, 23 Dec 2015 13:15:12 GMT
    Cache-Control: public, max-age=2592000

    Because Squid is HTTP/1.0 it doesn't do Cache-Control but uses the Expires... same with older browsers and a lot of proxies... But the real problem here is (and I can't believe I only just now noticed it, but I was only paying attention to the weekday):

    "Many HTTP/1.0 cache implementations will treat an Expires value that is less than or equal to the response Date value as being equivalent to the Cache-Control response directive "no-cache"."

    So you are preventing Squid (and anyone using HTTP/1.0) from caching pages because the 2 dates match... and even if the 'Expires' said Wed, 25 Nov 2015 that would still mean HTTP/1.0 could only cache it for 2 days, which I don't think is what you want. Also, the day of the week doesn't match the date in 'Expires'; I don't have any idea what kind of problems that can cause, but from that I assume the 'Expires' is supposed to be 2016.

    Also, if you could use '304 Not Modified' on the flower images that get posted in the forums that would also help a lot since 'Cache-Control: must-revalidate' means nothing in HTTP/1.0 and those images don't cache for more than a couple minutes...

    It did it again... I think the issue might be Squid not playing well with the 'Vary: Accept-Encoding' line in the header, although I can't find any verification that was a problem in Squid 2.7... I also see 'The Pragma header is being used in an undefined way.' on Redbot.org... Or it could be the ETags, because it looks like Squid doesn't always play nice with those either... I wish I could find another caching proxy that will run on WinXP 32-bit. But I seem to remember having this problem without Squid too.


    Now I just need to figure out how to see the actual headers I'm getting on my end because they might be different than what that site is seeing.


    Edit: This in the header is interesting...
    Expires: Tue, 22 Dec 2015 18:43:27 GMT
    Cache-Control: public, max-age=2592000

    Also, is this actually correct for the flower image? If it's not, it won't be cached:
    Content-Length: 1180936

    For the heck of it I just opened http://flowergame.net/img/flower.png?1447851275 again... it's already not cached... I don't get it... the rest of the images on the page are cached - just not that 1 file. This is strange... I'll let you know if it keeps happening.

    Edit:
    Actually I was wrong, I keep forgetting these 2 don't always cache properly either...
    http://flowergame.net/img/bg/body_bottom.png
    http://flowergame.net/img/bg/body_top.png

    It's very odd sitting there waiting for the page to fully load, looking at a screen with parchment with squares on it in the middle of the screen, a pixie, and green leaves filling the top and bottom.

    Next time it isn't cached I'll have to take a screenshot.

    You can ignore the previous post, although I left it because it's good info. It IS staying cached... the problem is that the other day the address of the image was http://flowergame.net/img/flower.png?1445373065 and today the image url is http://flowergame.net/img/flower.png?1447851275... those aren't the same address, so even though the first one is cached, the new one isn't... so what this means is that every couple days I end up with a new cached copy of an identical file...

    Is there a way to change that?
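
    (Or maybe I can work around it on my end... Squid 2.7 apparently has 'storeurl' rewriting that can map all the flower.png?whatever URLs to one cache entry. Completely untested sketch, and the helper script name/path is made up - the script itself would still need to be written to strip off the '?...' part:)
    storeurl_rewrite_program c:/squid/etc/storeurl.pl
    acl flowerimg url_regex ^http://flowergame\.net/img/flower\.png\?
    storeurl_access allow flowerimg
    storeurl_access deny all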

    My cache has more than enough room... and I just reinstalled my caching proxy now that I have hard drive space... the plain flower.png seems to stay cached... but the url that looks like flower.png?1445373065 wasn't staying cached...

    Or rather, it seems to be staying cached now when I open it directly, but when I load my greenhouse after a day or 2 it still takes a lot longer to load than that cached image does and uses a lot of bandwidth. And I know it's just that image loading because I use a plugin that lets me choose to only load the cached images on the page and that's the only thing that doesn't load.

    I miss the individual images because with the ImgLikeOpera plugin I was able to load the images for just my seeds and seedlings without having to load the images for the rest of the plants.

    I assume the '?1445373065' means that it's processing php (I'm not sure on this because it's not a 'blah.php?1445373065' link)? If that's the case, I've noticed that kind of page often doesn't cache the way it would be expected to.
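
    Actually, those numbers look like Unix timestamps (seconds since 1970), so it's probably just a cache-buster the site tacks on whenever the image is regenerated, rather than PHP as such (rough conversion, give or take my math):
    1445373065 ≈ Tue, 20 Oct 2015
    1447851275 ≈ Wed, 18 Nov 2015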

    Could you please do something about the way the flower images are loaded in the greenhouse and Compendium? Maybe load each flower image separately (I think that actually used to be the way it worked)... it seems pointless to load all the flower images when a person doesn't have all the flowers. Or at least shrink the file size of the flower image? http://flowergame.net/img/flower.png takes 6 minutes to load on dialup internet and my flowers don't stay cached for more than a day or two...