WebPageTest's 'simple' test: Dulles, VA - Moto G4 - Chrome - 3GSlow. The (gzipped) home page HTML is still well within the initcwnd, at 8.4kB. (Assuming a typical IW10 initial congestion window, ie ten ~1460-byte segments or ~14kB, the whole compressed page can arrive in the first round trip.)
With a 'simple' 3G 'fast' test (~150ms RTT, ~200ms with the trans-Atlantic hop) Dulles, VA - Moto G4 - Chrome - 3GFast, I can just about hit the 1s target on the front page. On other, lighter pages (eg with no hero image at all) I come even closer to hitting 1s. (Disabling JavaScript, so that parsing it does not compete for CPU, has WebPageTest claiming visually complete in an implausibly fast ~300ms, though the charts suggest ~750ms! Even the home page seems to come in under 1s ATF without JavaScript.)
Interestingly, the sizes of the mobile/lite and main gzipped pages are now fairly similar, the inlined image filling the space freed by minimising the HTML boilerplate.
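As a quick way to compare those transfer sizes, something like the following (URLs are hypothetical placeholders) reports the compressed bytes actually sent over the wire:

# Report the on-the-wire (gzipped) size of each page; URLs are placeholders.
for u in https://www.example.org/ https://m.example.org/; do
  curl -s -H 'Accept-Encoding: gzip' -o /dev/null \
       -w '%{size_download}\t%{url_effective}\n' "$u"
done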
Given Prefer DEFER Over ASYNC ("DEFER scripts don't execute until the HTML document is done being parsed"), I am testing switching the already-at-the-end Share42 script from async to defer, to possibly marginally improve behaviour on mobiles in particular (CPU starved, share icons not shown ATF).
A preliminary WebPageTest run suggests at least that the script still loads late (at low priority), and that there is no obvious harm in the change.
Note that defer seems to be slightly less well supported than async; Opera, for example, was reported as not supporting it.
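As a quick check that the deployed page now carries the new attribute, a minimal sketch (the URL and script path are stand-ins, not the real ones):

# Show how the Share42 loader is included; defer should appear in place of async.
curl -s https://www.example.org/ | grep -o '<script[^>]*share42[^>]*>'
# expected, roughly: <script defer src="/js/share42.js">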
As an experiment I am dropping most of the ads from the mobile/lite site as the nominal revenue (loss) is tiny but the effect on page weight is huge! Partly inspired by "Banner Ads Considered Harmful (Here)".
Always poking the hornets' nest out of sheer devilment, I went and took another run at PageSpeed Insights.
By folding in some of Google's specific ImageMagick convert suggestions from "Optimize Images", in particular -sampling-factor 4:2:0 for JPEGs, I was able to hit 100/100 for the mobile home page in a mobile browser, with all other site/browser combinations in the high 90s.
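For reference, the sort of invocation involved; the filenames and quality setting here are illustrative, close to what "Optimize Images" suggests:

# Re-encode a JPEG with 4:2:0 chroma subsampling, metadata stripped,
# and progressive (interlaced) encoding; names and -quality 85 are examples.
convert hero.jpg -sampling-factor 4:2:0 -strip -quality 85 \
    -interlace JPEG hero-opt.jpg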
Googlebot owns an interesting warp in the fabric of space-time!
See GSC Crawl Stats up to a couple of days ago. (All the record highs are in that snapshot, plus the record low download time.) Up until the last day the usual rule of thumb of ~200ms + ~1ms/kB was holding true (so an ~8kB page should take roughly 210ms to download), but for the very last point the download time drops ~50ms below that usual baseline, due to nothing at my end that I know of; that usually only happens on days with a very small number of fetches.
I looked through my logs for signs of being crawled locally (eg from within the EU), which could knock the RTT down significantly (vs ~150ms London to California).
Other than a few obvious fakes, all the (non-Images) Googlebot IPs appear to be registered in Mountain View, but a couple of traceroutes are interesting:
% traceroute 66.249.76.95
traceroute to 66.249.76.95 (66.249.76.95), 30 hops max, 60 byte packets
 1  192.168.0.254 (192.168.0.254)  0.766 ms  0.997 ms  1.160 ms
...
18  crawl-66-249-76-95.googlebot.com (66.249.76.95)  20.784 ms  20.212 ms  *

% traceroute 66.249.70.31
traceroute to 66.249.70.31 (66.249.70.31), 30 hops max, 60 byte packets
 1  192.168.0.254 (192.168.0.254)  0.825 ms  1.116 ms  1.407 ms
...
19  crawl-66-249-70-31.googlebot.com (66.249.70.31)  103.652 ms  103.580 ms  95.009 ms
Note that it can take ~10--20ms RTT across my local FTTC link to my ISP's infrastructure. StatusCake's London server sees a minimum of ~27ms and a mean of ~70ms to pull down the mobile home page.
One Googlebot is 80ms RTT closer than the other, and both seem closer than Mountain View (for normal traffic)!
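As for weeding out the fakes in the first place, the standard reverse-then-forward DNS check is enough; a minimal sketch using the first IP above (output abridged):

# Reverse-resolve the claimed Googlebot IP; a genuine one resolves under googlebot.com.
host 66.249.76.95
# ... domain name pointer crawl-66-249-76-95.googlebot.com.
# Forward-resolve that name; it should map back to the same IP.
host crawl-66-249-76-95.googlebot.com
# crawl-66-249-76-95.googlebot.com has address 66.249.76.95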
(Just after writing the above, the data for the next day became visible, with a more normal download time but still fairly vigorous spidering.)