Please change the way images are loaded...

  • Could you please do something about the way the flower images are loaded in the greenhouse and Compendium? Maybe load each flower image separately (I think that's actually the way it used to work)... it seems pointless to load all the flower images when a person doesn't have all the flowers. Or at least shrink the file size of the flower image? It takes 6 minutes to load on dial-up internet, and my flowers don't stay cached for more than a day or two...

  • Changing it back to loading the graphics individually would worsen page load times for users on broadband connections, plus this single file is already smaller than the individual files were. I can't exactly shrink the file either; it's already compressed as much as possible - there is just too much detail in these images.

    I would recommend you turn up the cache size in your browser - the files should actually be cached for ~1 month, not only two days.

    I could get the filesize down to ~1/10th, but only with visible loss of detail.

  • My cache has more than enough room... and I just reinstalled my caching proxy now that I have hard drive space... the plain flower.png seems to stay cached... but the url that looks like flower.png?1445373065 wasn't staying cached...

    Or rather, it seems to be staying cached now when I open it directly, but when I load my greenhouse after a day or 2 it still takes a lot longer to load than that cached image does and uses a lot of bandwidth. And I know it's just that image loading because I use a plugin that lets me choose to only load the cached images on the page and that's the only thing that doesn't load.

    I miss the individual images because with the ImgLikeOpera plugin I was able to load the images for just my seeds and seedlings without having to load the images for the rest of the plants.

    I assume the '?1445373065' means that it's processing php (I'm not sure on this because it's not a 'blah.php?1445373065' link)? If that's the case, I've noticed that kind of page often doesn't cache the way it would be expected to.
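    As a side note, the '?1445373065' suffix is most likely not PHP at all but a cache-buster: caches key their entries by the full URL, query string included, so `flower.png` and `flower.png?1445373065` are two separate cache entries. A minimal sketch of how a site might generate such URLs, assuming it appends the image file's modification time (the `versioned_url` helper is hypothetical, not this site's actual code):

```python
import os

def versioned_url(path, url_base="/img/"):
    """Build a cache-busting URL by appending the file's last
    modification time (a Unix timestamp) as a query string.

    Caches key entries by the full URL, query string included, so
    bumping the number forces clients to fetch the updated file,
    while the old URL stays cached under its own, now-unused key.
    """
    mtime = int(os.path.getmtime(path))
    return f"{url_base}{os.path.basename(path)}?{mtime}"
```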

  • You can ignore the previous post, although I left it because it's good info. It IS staying cached... the problem is that the other day the image had one address and today it has a different one... those aren't the same address, so even though the first one is cached, the new one isn't... so what this means is that every couple of days I end up with a new cached file that is identical...

    Is there a way to change that?

  • That last one was a one-time change, the number is just the Unix epoch timestamp of the last file modification. It actually is expected to change whenever a new variant or plant has been added or updated.

    I will think about some other solution though...
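    For reference, the timestamp in the URL decodes straightforwardly; a quick check using the number quoted earlier in the thread:

```python
from datetime import datetime, timezone

# '?1445373065' is seconds since 1970-01-01 UTC, i.e. the file's
# last-modified time - nothing PHP-specific about it.
stamp = 1445373065
modified = datetime.fromtimestamp(stamp, tz=timezone.utc)
print(modified.strftime("%a, %d %b %Y %H:%M:%S GMT"))
# -> Tue, 20 Oct 2015 20:31:05 GMT
```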

  • For the heck of it I just opened the image again... it's already not cached... I don't get it... the rest of the images on the page are cached - just not that 1 file. This is strange... I'll let you know if it keeps happening.

    Actually I was wrong, I keep forgetting these 2 don't always cache properly either...

    It's very odd sitting there waiting for the page to fully load, looking at a screen with parchment with squares on it in the middle of the screen, a pixie, and green leaves filling the top and bottom.

    Next time it isn't cached I'll have to take a screenshot.

  • It did it again... I think the issue might be Squid not playing well with this line in the header, 'Vary: Accept-Encoding', although I can't find any verification that was a problem in Squid 2.7... I also see 'The Pragma header is being used in an undefined way.' from redbot. Or it could be the ETags, because it looks like Squid doesn't always play nice with those either... I wish I could find another caching proxy that will run on WinXP 32-bit. But I seem to remember having this problem without Squid too.

    Now I just need to figure out how to see the actual headers I'm getting on my end because they might be different than what that site is seeing.

    Edit: This in the header is interesting...
    Expires: Tue, 22 Dec 2015 18:43:27 GMT
    Cache-Control: public, max-age=2592000

    Also, is this actually correct for the flower image? If it's not, it won't be cached:
    Content-Length: 1180936
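    Side note: those two headers are meant to agree - max-age=2592000 is exactly 30 days, so the Expires should sit 30 days after the response's Date header (HTTP/1.1 clients prefer Cache-Control; Expires mostly matters to HTTP/1.0-era caches). Quick arithmetic using the quoted Expires value:

```python
from datetime import timedelta
from email.utils import parsedate_to_datetime

max_age = 2592000
print(max_age / 86400)  # -> 30.0 (days)

# Working backwards: the Date header this Expires implies.
expires = parsedate_to_datetime("Tue, 22 Dec 2015 18:43:27 GMT")
implied_date = expires - timedelta(seconds=max_age)
print(implied_date.strftime("%a, %d %b %Y %H:%M:%S GMT"))
# -> Sun, 22 Nov 2015 18:43:27 GMT
```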

  • Content-Length: 1180936
    That's only the transfer size - not the actual file size. gzip compression or whatever squid and the server agree on can make this differ from the actual file size.

    Have a look at this:
    Squid isn't behaving correctly when it encounters a query string (the part after the "?"), wrongly assuming a dynamic page, no matter what the headers say.
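    The transfer-size point is easy to demonstrate; a small sketch (the payload here is made-up text - real PNG data is already compressed and would shrink far less, if at all):

```python
import gzip

# Content-Length on the wire reflects the encoded (e.g. gzipped)
# body, which can differ from the size of the file on disk.
body = b"a highly repetitive payload " * 1000
wire = gzip.compress(body)
print(f"on disk: {len(body)} bytes, on the wire: {len(wire)} bytes")
```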

  • Unfortunately, the dynamic content thing doesn't apply, it was fixed in 2.7 (I checked my config anyways hoping that was the answer for some other caching issues, but those lines were fixed)... and it wouldn't explain these 2:
    (it actually seems none of the images were cached, but I only noticed the 3 because they loaded slowest)

    Although this does explain it (today's redbot header):
    Date: Mon, 23 Nov 2015 13:42:15 GMT
    Expires: Wed, 23 Dec 2015 13:15:12 GMT
    Cache-Control: public, max-age=2592000

    Because Squid is HTTP/1.0 it doesn't do Cache-Control but uses the Expires... same with older browsers and a lot of proxies... But the real problem here is (and I can't believe I only just now noticed it, but I was only paying attention to the weekday):

    "Many HTTP/1.0 cache implementations will treat an Expires value that is less than or equal to the response Date value as being equivalent to the Cache-Control response directive "no-cache"."

    So you are preventing Squid (and anyone using HTTP/1.0) from caching pages because the 2 dates match... and even if the 'Expires' said Wed, 25 Nov 2015, that would still mean HTTP/1.0 could only cache it for 2 days, which I don't think is what you want. Also - the day of the week doesn't match the date in 'Expires'; I have no idea what kind of problems that can cause, but from that I assume the 'Expires' is supposed to be 2016.

    Also, if you could use '304 Not Modified' on the flower images that get posted in the forums that would also help a lot since 'Cache-Control: must-revalidate' means nothing in HTTP/1.0 and those images don't cache for more than a couple minutes...
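    The quoted HTTP/1.0 rule is mechanical enough to check in a few lines; a sketch, using header values shown earlier in the thread:

```python
from email.utils import parsedate_to_datetime

def http10_treats_as_no_cache(date_header, expires_header):
    """Many HTTP/1.0 caches treat Expires <= Date as 'no-cache'
    (RFC 2616, section 14.21)."""
    return parsedate_to_datetime(expires_header) <= parsedate_to_datetime(date_header)

# Expires a month after Date: cacheable even for HTTP/1.0 caches.
print(http10_treats_as_no_cache("Mon, 23 Nov 2015 13:42:15 GMT",
                                "Wed, 23 Dec 2015 13:15:12 GMT"))  # False

# Expires equal to (or before) Date: treated as 'no-cache'.
print(http10_treats_as_no_cache("Mon, 23 Nov 2015 13:42:15 GMT",
                                "Mon, 23 Nov 2015 13:42:15 GMT"))  # True
```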

  • We still have November, at least according to my calendar. So December 23rd is still a full month in the future, which is reasonable.

    You are right about the forum signatures. They even used to do that, looks like that one broke a while ago. It should be working again.

  • I swear that said Nov not Dec... grrrr... I'm going to throw my monitor if this keeps up... I'm going to have to see if I can get a look at the headers on my personal end - might be different than redbot, and my response headers might have something to do with it... maybe firefox being http 1.1 and Squid being 1.0 have something to do with it. Except Firefox never kept it more than a day or 2 either, but it also usually crashed and lost the cache every few days from being open 24/7 (the reason for Squid).

    Thank you for the forum fix! And thank you for being so patient trying to help me figure this out. I'm even learning a lot about headers and caching that will help me when I finally start working on any of my websites.

    My max file size in squid is something like 4mb, so that's not the problem... the fix for the dynamic content was default in 2.7... Cache size is 5gb and contains less than 300mb... My clock is definitely right...

    Next is to look at the headers on my end (I think Wireshark gives me that info), make sure my plugins like ImgLikeOpera aren't interfering, and compare the headers to sites that definitely cache correctly to see if I can find a difference. Maybe upload an image to my server and make sure the fix for dynamic content actually works. Make sure Squid is actually sending an "If-Modified-Since". Then if I can't find anything, it'll be time to post my squid.conf file on Stack Exchange and see if anyone else can figure it out.

    What's interesting is that Cloudflare isn't keeping it cached either, but I don't see how that could mess my cache up. At least the first time I run it through redbot when I don't have it cached, it says:
    CF-Cache-Status: MISS
    Then next load it's a HIT.

  • Cloudflare not having the file cached is expected - it's a distributed CDN, and not every reverse proxy has enough activity to keep the files cached indefinitely.

    If you want, you can upload your Squid configuration and I will have a quick look. I suspect that there is a wrong refresh_pattern directive in your Squid configuration.

  • I just figured cloudflare would keep it for a day or 2 at least...

    I think I 'might' have fixed it by adding 'reload-into-ims' to these lines:
    refresh_pattern -i (/cgi-bin/|\?) 0 0% 0 reload-into-ims
    refresh_pattern . 0 20% 4320 reload-into-ims

    Using Wireshark I got a look at the headers on my end... it seems I send no-cache and/or max-age:0 which makes no sense... although there was also some really odd cookie behavior... ending up with a string of several cookies including sometimes one that expired in 1970... I think that one usually ended up after one that did something like set-cookie: randomStuff: delete

    I would send an 'If-None-Match', get a 304, and still download it... odd... I'm not thrilled with this solution as it might cause problems on other sites (although it's fixed a couple too lol)... I'd like to figure out how to apply it to just specific sites when I have a little more time.

    I was about to ask you if you'd be willing to disable the 'vary' header temporarily because if you're running Apache certain versions were assigning the same etag to the gzip versions and non-gzip versions of a file....

    But so far it looks like it might be solved by the 'reload-into-ims'... I'll let you know if it isn't. Thank you so much for the help!

    (now if I could just figure out where to set 'stale-if-error' for a problem on another site... maybe it goes in the same place as the ims one... lol)
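    For what it's worth, the revalidation dance being described here can be sketched as server-side logic. This is not the site's actual code, just the general shape of a conditional GET: `If-Modified-Since` comes in, and either a 304 (keep your cached copy) or a full 200 goes out:

```python
from email.utils import formatdate, parsedate_to_datetime

def conditional_get_status(if_modified_since, file_mtime):
    """Return 304 if the client's cached copy (dated by its
    If-Modified-Since header) is still current, else 200.
    file_mtime is the resource's mtime as a Unix timestamp."""
    if if_modified_since:
        try:
            client_ts = parsedate_to_datetime(if_modified_since).timestamp()
        except (TypeError, ValueError):
            return 200  # unparseable header: send the full response
        if int(file_mtime) <= int(client_ts):
            return 304  # Not Modified - client keeps its cached copy
    return 200

# A client revalidating an unchanged file gets a 304:
print(conditional_get_status(formatdate(1445373065, usegmt=True), 1445373065))
# -> 304
```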

  • Why that 20% figure? That means that 20% of all requests should still bypass the cache, even if the cache wasn't stale yet. If bandwidth matters to you - 0% for both patterns. And you can ditch the first pattern entirely.
    The following should be sufficient on its own:

    refresh_pattern . 0 0% 0 reload-into-ims

    It's not Apache but Nginx running here, and it doesn't have that bug.

    Oh and yes, it goes into the same place.

    The 20% comes from the pattern in the link you posted above about dynamic content, and it's the default in 2.7...

    From the config file:
    # 'Percent' is a percentage of the objects age (time since last
    # modification age) an object without explicit expiry time
    # will be considered fresh.

    Which I think doesn't affect most sites I've seen since they all use 'Expires'....

    So, 0% would mean that it would always be stale if it didn't have an 'Expires'... I think... and I think to never be stale it would be 100%... but I could be reading it wrong... this stuff is confusing (which is why after using Squid for 3+ years I'm still using almost complete default settings lol)...

    I'm afraid to get rid of that line completely because I don't want to treat truly dynamic sites as if they were static.
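    For anyone untangling this later, here is a rough model of how Squid's refresh_pattern decides freshness for a response with no explicit expiry - a simplification of the real algorithm, with all values in seconds (squid.conf takes MIN and MAX in minutes). It suggests the reading above is close: 0% makes such objects go stale as soon as they pass MIN, while even 100% can't keep anything fresh past MAX:

```python
def refresh_pattern_fresh(age, min_s, percent, max_s, lm_age=None):
    """Simplified Squid freshness check for an object without an
    explicit expiry time (no Expires / max-age).

    age:    seconds since Squid cached the object
    lm_age: seconds between the object's Date and Last-Modified
            headers (None if no Last-Modified was sent)
    """
    if age > max_s:
        return False  # past MAX: always stale
    if age <= min_s:
        return True   # within MIN: always fresh
    if lm_age is not None and age < (percent / 100.0) * lm_age:
        return True   # LM-factor under PERCENT: still fresh
    return False

# With '. 0 20% 259200' (the 4320-minute default, as seconds): a file
# last modified 10 days before it was fetched stays fresh for about
# 2 days (20% of 10 days), never longer than MAX.
print(refresh_pattern_fresh(86400, 0, 20, 259200, lm_age=864000))   # True (1 day old)
print(refresh_pattern_fresh(216000, 0, 20, 259200, lm_age=864000))  # False (2.5 days old)
```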