The Browser Hacker's Guide to Instantly Loading Everything


[Music] I'm sorry, this conference loves video effects, so I've got to do it. All right, so today we're going to talk about loading on the web. Loading is a user journey with very disparate expectations: you're basically sending thousands and thousands of bytes down the wire and hoping that whatever comes out at the very end is something that's actually useful to your users and helps them interact with your application relatively quickly. So I thought that today we'd have a conversation about loading, and we can't really have that conversation without talking about where we're at right now.

This is what the average web page on mobile looks like in 2017, whether it's built with a JavaScript framework or is just a static site: in many cases it takes 16 seconds to get interactive on a real mobile device on 3G, it usually takes 19 seconds to be fully loaded, and people mostly send somewhere in the region of 420 to 450 kilobytes of JavaScript down the wire.

Why does any of this actually matter? If we take a high-level look at how the browser gets anything from the network rendered to the screen, it's a relatively simple process: we send a request out, the server returns some HTML, we parse the CSS, JavaScript, images, and anything else that comes back, and then we have to parse, compile, and render that code in order to turn it into pixels on the screen. But it's never quite that simple. We're usually developing on relatively powerful, high-end desktop machines, and the expectations we have when we're profiling there are quite different from mobile, particularly when it comes to things like JavaScript start-up performance, where on a real-world mobile device you can end up seeing anywhere between a 4x and 5x slowdown.

So the first shift I think we as a community need to make is actually testing on real phones and real networks. I know there are a bunch of people doing some of this already; how many
people here have actually used the DevTools network emulation, CPU throttling, or device mode? A lot of people, that's great, and it's a great first step. We need to do better than that, though, because mobile devices have different GPUs, different CPUs, different memory, and different battery characteristics, so there's a lot more we can do. For people who want to start doing that today, we recently shipped a new part of webpagetest.org called /easy: webpagetest.org/easy has a whole farm of average mobile devices on it right now, with easy profiles for checking out your performance.

Now, a lot of the time when we talk about mobile performance these days, we reference this idea of "time to interactive". The idea there is just making sure that the user can actually tap around your interface and have something useful happen. In this case I think someone is going through withdrawal symptoms, because there isn't actually anything on their phone; it looks a little bit dead.

There are a few rules I like to follow when I'm building modern web apps that try to load efficiently. The first is: only load what you need. Try to make sure that if you're shipping script and CSS and everything else down the wire, it's only the things that are actually going to be useful to the user's initial experience, and use idle time to load in anything else: your comment threads, any additional pages that might be needed for the rest of the user experience.

There are a lot of things we can do to help load less code. One of the first is code splitting. Code splitting is something a lot of you are probably familiar with as a concept, and it's relatively straightforward to get set up using webpack, Closure Compiler, or Browserify. The basic idea is that instead of making your user eat an entire pizza and get really bloated, you just give them a single slice at a time.
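The single-slice-at-a-time idea is usually expressed with dynamic import(), which bundlers such as webpack treat as a split point. A minimal sketch; the on-demand comments module is hypothetical, and a Node built-in stands in for it so the snippet runs anywhere:

```javascript
// Sketch of code splitting with dynamic import(). In a bundler, each import()
// becomes its own chunk, fetched only when this code actually runs (say, an
// on-demand comment thread) instead of shipping in the initial bundle.
async function loadCommentsChunk(threadId) {
  // Nothing is fetched until the first call reaches this line. A real app
  // would do something like: const { render } = await import('./comments.js');
  // Here a Node built-in stands in so the sketch is runnable as-is.
  const { posix } = await import('node:path');
  return posix.join('/threads', String(threadId));
}

loadCommentsChunk(42).then((route) => console.log(route)); // logs "/threads/42"
```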
That way, they hopefully feel a little bit better about the experience you're shipping. Tree-shaking, removing unused exports using things like Rollup, is also worth spending time on.

Something we don't talk about enough is the fact that the baselines we use when trying to ship powerful experiences on mobile don't always set us up for success today. Take frameworks, for example. And I love frameworks: I created TodoMVC, I use React and Vue all the time. But the frameworks we use today are often built with desktop machines in mind, and when you actually try them out on mobile, if we're saying you've got to be interactive in five seconds and your framework is eating up four seconds of that in boot time, that's not setting you up for success. So there's probably room for us to improve there. There are lots of lightweight options today (Preact, Vue, Svelte, Polymer, and plenty of others), and they generally have a relatively low parse and start-up time.

Over on the DevTools side, one of the things we recently shipped to help with this idea of shipping less code is a feature called code coverage. The idea is that you load up your app, hit record in this new item in the drawer, and we tell you which blocks of code actually got executed and which didn't. Here we've got an indication that maybe I'm not really using 50% of the code in the bundle I'm shipping down, and as we click on that and scroll through the Sources panel, you can see we highlight in green the code that got executed and in red the code that didn't. In this case, because this is a camera app, I've got a lot of code for cross-browser saving of a file to an export, so I can probably be lazily loading that.

There are other things we can do. Most of us here are probably using a transpiler of some sort to use all the juicy new features JavaScript has got, but the reality is that
cross-browser ES2015 support is in a relatively good place right now; we don't always need to be sending people ES5. So if you're shipping an experience using Babel today, I strongly encourage you to try out babel-preset-env. This will only transpile code for the browsers that need it and keep everything else in ES2015, so check that out. If you happen to be using Lodash, check out babel-plugin-lodash: a neat transform that rewrites your Lodash code so it only pulls in the modules you're actually using in your source, rather than the entire package.

This week we also announced, after a very long wait, support for ES2015 modules in Chrome. Thank you. This is something that will hopefully encourage a little bit less transformation, and it opens up a few more opportunities for interesting loading experiences across the board.

The next thing I like to do is order loading thoughtfully. You, more than anybody else in the stack, know what's important to your user journey and what needs to be sent down earlier than anything else. Something we recently shipped in DevTools to help with this is a feature called network request blocking; it's in Canary right now. The idea is that in the network waterfall for any site, you can right-click on a network request and block it, or block the whole domain, and take a look at what impact that has on the overall critical path of your site. It's particularly useful if you've got a lot of third-party code slowing you down.

And finally: cache aggressively and granularly. Cache as much as you can locally, both with the HTTP cache and using service worker caching. At Google we've been ramping up our investment in service worker for a lot of our flagship apps. Inbox has been using service worker quite a lot, and recently saw a 10% improvement in time to interactive just by making sure they're using it for things like static resource caching.
Now, about three years ago I was speaking at CSSConf, and we spent the whole talk optimizing JSConf EU, so I thought, given that I'm back here, it'd be kind of fun to do that again, in a much shorter timeframe but also a little more unorthodoxly. What we're going to do is hack Chrome to make JSConf EU load a little bit faster. This guy's not in fact a hacker; he's just scrolling through his webpack config or something.

So you dive into the C++ in Chrome and you find this file called ResourceFetcher. Nobody can see this, so let's just zoom in. This is one of those files that defines how Chrome actually handles prioritization for different types of resources: your CSS, your JavaScript, your images. I don't expect anybody to read this, so here's the table of how we actually handle it. Layout-blocking resources like CSS and fonts get the highest priority. Load-in-layout-blocking-phase resources, like scripts or images that happen to be in the viewport, get a medium priority. And everything from your async scripts to images outside the viewport to media-mismatched CSS gets a much, much lower priority. What's interesting about this table is that images in your viewport get a medium priority and images outside of it get a lower one, so we kind of do automatic lazy loading of images, to some extent, by default. That's great, but as a developer you probably care more about where this is exposed for you: in the Network panel there's a column called Priority that will tell you exactly what priority was used for any of the resources you loaded.

So we're back here, and the part of Chrome we're going to hack: we're going to change absolutely every single type of resource to load with an extremely high priority. Sounds like a great idea, right? So let's do that, and that's going to fix all of our
problems, and I can just end the talk there, right? We could ship a new browser, we could call it... So, I discovered Germany's got this wonderful word called "Verschlimmbesserung" (that's a terrible pronunciation), which means an attempted improvement that actually makes things a whole lot worse. And that's what I did. This is the original filmstrip for JSConf EU, and this is what it looks like when everything is considered high priority: we've actually pushed first meaningful paint way back, and performance is worse. The lesson there: when everything is high priority, nothing is. I ended up fixing this by going through the different types of resources used on this particular page and trying to figure out, okay, are the images the most important thing, or the CSS? As it turns out, it was the CSS and fonts.

So let's do this right by the browser. We already talked at a high level about how the network processing works, but there's a piece of this puzzle that I didn't quite dive into, and that's the browser's preload scanner. Browsers like Chrome have a document parser, and as we go through the tokenization phase, reading through all the different tokens that compose your HTML, we go and try to fetch those resources and start processing them. If we run into a blocking script, that's going to stop the document parser in its tracks, which is why we have a fallback: the preload scanner, which is able to look ahead, even when the document parser is blocked, and find other resources we can continue to fetch and process. When this change was first introduced in Chrome, I believe it brought something like a 20% improvement in overall load time.

So the preload scanner is pretty cool, but we run into this other interesting challenge, which is discovery. No browser knows exactly what sequence of things to load to make sure your page is going to be fast. You, more than anybody else, know what's important on your page, whether it's your webpack bundles that need to be loaded early on, or something else. To address discoverability, the ability for you as an author to say what you consider high priority, you can use things like link rel=preload, which works with scripts, stylesheets, and other types of resources. It's basically a declarative fetch that tells the browser you consider something to be high priority.

This is what the impact of using it on a site that happens to be using webpack looked like: you kind of shift all of the yellow that's on the right all the way over to the left, at parse time. I'm seeing this pattern used increasingly in progressive web apps, where it's having a positive impact on time to interactive for a lot of folks, so check it out. If you're interested in hooking this up to your build process today, I wrote a webpack plugin called preload-webpack-plugin that can do this for asynchronous chunks as well as normal chunks.

Over on Chrome, the Polymer team worked on an app a while ago called Shop, and the idea behind Shop was to see, if we use the web platform and the web platform features available to us today, just how fast we can make a modern, non-trivial web experience. Shop kind of checked that off: it achieves granular loading, you're able to tap on things, and everything is really nice and buttery smooth on mobile.
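Declaratively, link rel=preload looks like this; the paths are illustrative, and note that font preloads need the crossorigin attribute even for same-origin fonts:

```html
<!-- A declarative, high-priority fetch: the browser downloads these early,
     without executing or applying them until something references them. -->
<link rel="preload" href="/static/vendor.abc123.js" as="script">
<link rel="preload" href="/static/fonts/app.woff2" as="font" type="font/woff2" crossorigin>
```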
But how did they accomplish this? They used a pattern we came up with about a year ago called PRPL. The idea with PRPL is that you try to make sure you're sending down the most important things for the user as early as possible: you Push the minimal code needed for a route; you Render that route; for the next routes, you Pre-cache anything using service worker, so that not only is that stuff already available locally in the cache when the user tries navigating to it, it's consistently available on repeat visits (and for JavaScript, using service worker will actually opt you in early to V8's code cache, which saves you a little time on things like parsing and compilation); and then the pattern suggests Lazy-loading the code you need for other parts of the user experience.

So let's take a look at what that actually looks like. This is Shop before any optimizations were applied. You see this step pattern in the timeline; this is on 3G. And remember that little block at the very start where we're not seeing any activity; we're going to come back to that in a minute. With preload, we changed the shape of this completely: we've gone from this to something that looks like this. Basically we've shifted our time a little, everything is now attempting to load in parallel to some extent, and that starts shaving time off the overall user experience. But it still comes with the cost of multiple round trips, and this is where things like HTTP/2 server push can come in useful. What push allows us to do as an author is specify, using a manifest, the files we know are going to be critical to the user journey. Instead of just sending the browser some HTML that's waiting to be parsed, when we send back that initial HTML we can also start sending down the list of files that are super important to start fetching for the experience.
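One common way to ask a server to push (supported by several HTTP/2 servers and CDNs, though the exact mechanism varies by server) is a preload Link header on the HTML response; the paths here are illustrative:

```
Link: </static/vendor.abc123.js>; rel=preload; as=script
Link: </static/app.css>; rel=preload; as=style
```

Servers that don't push still pass the hint through to the browser, which treats it as an ordinary preload.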
Effectively, we're filling up server think time, where today, in a lot of cases, we're not actually doing anything, so HTTP/2 server push is great for that. And the impact it had on this particular app was quite stark: again, we shaved thousands of milliseconds off the overall time for this app to get interactive and to load in general.

Unfortunately, HTTP/2 server push is not a silver bullet by any means. It's not particularly cache-aware: in a perfect world we'd have something like a cache digest that lets the server know exactly what's in the user's cache, so it's very easy to run into cases where, every single time someone comes to your site, you force-push them the same set of files, even if they're already sitting in their cache, which is not exactly ideal.

So, push versus preload. Push can cut out a whole RTT, but it's not cache-aware and there's no real prioritization in place. Preload is particularly useful because, in addition to what push can do, it supports cross-origin requests, it's got load and error events, and it's got content negotiation. But how do we address this issue of H2 push not knowing what's in the cache? We can use service worker. If we have a service worker registered in such a way that, instead of going to the network every single time we need more resources fetched, we first try to get them locally based on what's already cached, we avoid the need for cache digests, and the entire setup becomes relatively sane. For Shop in particular, this meant that on repeat visits, once you tie absolutely everything in PRPL together, you're able to boot up and get interactive in just a few hundred milliseconds. It's quite a powerful pattern. So: preload is good for moving the start download time of an asset closer to the initial request, and push is good for cutting out a full RTT if you have a service worker. Thanks to Sam Saccone for a bunch of the research he did in this area.
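The cache-first strategy described above can be sketched as a plain function. Here a Map stands in for the Cache Storage API a real service worker's fetch handler would use, and fetchFn stands in for fetch():

```javascript
// Sketch of a cache-first lookup: serve from the local cache when possible,
// and only go to the network (paying the round trip) on a miss.
async function cacheFirst(cache, url, fetchFn) {
  if (cache.has(url)) return cache.get(url); // local hit: no network round trip
  const response = await fetchFn(url);       // miss: fetch once...
  cache.set(url, response);                  // ...and populate for repeat visits
  return response;
}

const cache = new Map();
const fakeFetch = async (url) => 'response for ' + url;
cacheFirst(cache, '/app.js', fakeFetch).then((r) => console.log(r)); // logs "response for /app.js"
```

In a real service worker, the same logic lives in a 'fetch' event handler using caches.match() and cache.put().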
The next thing I want to talk about is how a lot of this applies to the apps you're probably building today. Out of interest, how many people here are using React as part of their default stack? Almost everybody; a good size of the audience. So, I had the privilege of working with Twitter on their new progressive web app, Twitter Lite, and I want to talk a little bit about the learnings we had there.

Twitter started off with their old mobile web experience: a server-side rendered thing that was really, really slow. It wasn't particularly pleasant, and it didn't particularly encourage users to engage with the app. This is the new progressive web app that Twitter shipped very recently, Twitter Lite, and one of the accomplishments they had, by taking advantage of some of the primitives we just talked about, is that they're able to get interactive in under five seconds on 3G, which is quite a nice feat. Now, this didn't come without an amount of pain. You can use modern frameworks like React to ship progressive web apps, but you're going to have to put the work in to cut down how much application code you've got going on there: you're going to have to take advantage of code splitting, and make sure you're granularly loading and serving things in as well as you can.

When Twitter first started working on this app, they had a relatively poor time-to-interactive score: they were looking at about 15 to 16 seconds before anyone could actually start tapping around the interface, so not too far away from where a lot of us probably are today. Most of their critical path was dominated by time spent in script just booting up, so they started looking at patterns like PRPL and how they could take advantage of them. The first thing they introduced was support for DNS prefetching: the ability to specify
declaratively which servers you want to start warming your DNS connections up to. That led to an 18% performance improvement on what they initially had, just for hitting things like first meaningful paint. Next, they investigated using preload for their critical scripts. I can't tell you how easy this stuff is to set up: if you're using a static site it's probably less than ten minutes, and if you're using something full-stack it will probably take you an hour, but it's worth trying out just to see if it has a perceivable improvement on your site. They ended up preloading their critical scripts (their vendor bundle, their async chunk scripts, as well as their main scripts), and that led to a 36% improvement on their overall time to interactive.

Next, they put work into actually getting pixels on the screen much faster. Twitter is one of those experiences that's very media-rich; there are a lot of images in there, so it's unsurprising that media and images were among the things slowing them down render-wise. One of the things they did was use requestIdleCallback to defer loading of some of the images in their timeline, and that led to a four-times improvement in render performance. requestIdleCallback is kind of awesome and definitely worth exploring.

Another thing they noticed (and it's so silly, but images are still such a big part of what slows us down these days) was that they were still sending down relatively large images that weren't the right height or width, that were encoded sub-optimally, and that were taking a long time to decode as soon as they hit Chrome. They went through the process of optimizing that, and it shaved a whole lot of time off their image decodes, helping to make sure that as you scroll through the timeline, images are at least not one of the things causing a bottleneck.
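The requestIdleCallback deferral mentioned above can be sketched like this. The setTimeout fallback is a common pattern (requestIdleCallback isn't available everywhere, including in Node, where this sketch runs); it's an assumption of mine, not Twitter's exact code:

```javascript
// Defer non-critical work, such as loading offscreen timeline images, until
// the browser is idle, falling back to a short timeout where rIC is missing.
const idle = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(cb, 1);

function deferImageLoad(loadFn) {
  // The critical render happens first; this work runs when there's slack.
  idle(() => loadFn());
}

deferImageLoad(() => console.log('decoding offscreen images now'));
```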
Data saver mode was the next thing they introduced: the idea that you, as a user, can say "I've got a limited data plan, so don't show me any images or videos unless I actually tap on them." This led to a 70% improvement, in many cases, in the amount of data consumed by the application. If you're looking at the web platform level for what we're doing to enable these types of experiences, we've got the Save-Data client hint, which Twitter is going to investigate using next.

Then you've got precaching. Initially, Twitter Lite didn't have support for anything offline or for service worker caching, so they took an incremental approach to adopting service worker. They started off by statically caching their scripts and the emoji they use whenever you try DMing someone or replying to tweets, as well as their CSS, and then they ramped that up over time to include things like application shell caching, so UI caching. What that did on repeat visits was take load time down from 6.1 seconds to 1.49 seconds. After the first time you visit Twitter Lite, it feels instant coming back and navigating across different views; it's pretty powerful. Now, I was cheeky and asked Twitter to host 20 different versions of their site so I could go and profile them, and they were kind enough to do that. On second load without a service worker, after some of these changes, they saw they were 47% faster; on repeat visits with a service worker, they were 65% faster.

And then you've got lazy-loading. This is a Lighthouse report, if I didn't mention it before. Remember when we were saying earlier that their time-to-interactive score kind of sucked and needed a little bit of work? Breaking that work up was one of the first things they needed to do. They had these large, monolithic JavaScript bundles.
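On the data saver point above: server-side, honoring the Save-Data client hint can be as simple as a header check. The handler shape here is an illustrative sketch, not Twitter's implementation; the header itself is real, and browsers with data saver enabled send "Save-Data: on":

```javascript
// Decide whether to send full-resolution media based on the Save-Data hint.
function shouldSendRichMedia(headers) {
  const saveData = String(headers['save-data'] || '').trim().toLowerCase();
  return saveData !== 'on';
}

console.log(shouldSendRichMedia({ 'save-data': 'on' })); // false: send tap-to-load placeholders
console.log(shouldSendRichMedia({}));                    // true: full images and video
```

On the client, the same signal is exposed as navigator.connection.saveData in supporting browsers.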
I know some people are like, "Don't look at me, I'm not doing that," but a lot of people actually still do this, and it's relatively slow to load on mobile. If you have a relatively large bundle, even if you think a few hundred kilobytes isn't a lot, that's still extra work the browser has to do to parse and compile that code before it can even start to boot it up. Initially, their bundles were taking five and a half seconds (five-and-a-half-seconds-ish) to get ready, before code splitting. And then they had this great moment of trying to figure out how you're supposed to configure webpack, and having the best of times with it. I hear it's gotten better with webpack 2; we'll see.

This is what a lot of their code splitting looks like: beyond just using require.ensure, they're making sure they correctly use vendor splitting for all of their bundles across different views. Twitter ended up creating something like 40 different asynchronous chunks that are granularly loaded as you navigate from one view to another, and the impact on their experience was that the actual bundle only took about three seconds to fully process, and it improved their overall time to interactive. By the end of it (they're actually doing much better than this now), they were getting interactive in about 5.7 seconds, which is still impressive as a return on just code splitting and some relatively low-friction ideas around efficient loading.

One other thing they found incredibly valuable, and keep coming back to pretty regularly, is making sure they use bundle analyzers like webpack-bundle-analyzer to find the low-hanging fruit in their bundles. I keep running into people who don't realize that Moment.js or other libraries are a big part of their bundles and can probably be trimmed down.
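A webpack 2-era sketch of the vendor splitting plus bundle analysis described above; the entry points and library names are illustrative, and this assumes webpack and webpack-bundle-analyzer are installed:

```javascript
// webpack.config.js: split a vendor chunk out of the app bundle and emit a
// report of what each bundle actually contains.
const webpack = require('webpack');
const { BundleAnalyzerPlugin } = require('webpack-bundle-analyzer');

module.exports = {
  entry: {
    app: './src/index.js',
    vendor: ['react', 'react-dom'], // illustrative: your heavy dependencies
  },
  output: { filename: '[name].[chunkhash].js' },
  plugins: [
    // Pull modules shared with the vendor entry out of app.*.js.
    new webpack.optimize.CommonsChunkPlugin({ name: 'vendor' }),
    // Static HTML report of bundle contents: the low-hanging-fruit finder.
    new BundleAnalyzerPlugin({ analyzerMode: 'static', openAnalyzer: false }),
  ],
};
```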
So if you're not using things like webpack-bundle-analyzer or source-map-explorer, do check them out. They generally lead to at least understanding a little bit more about what it is you're sending down the wire. In Twitter's case, they made sure they were using the bundle analyzer plugin so that every single time someone worked directly on the application, they could see what impact it had on their bundle size and their actual bundle shape.

Performance is a continuous game of measuring for areas to improve. There isn't a single thing you can do and then just leave it, such that your site is always going to be fast regardless of the devices your users end up trying it out on. For anyone interested in getting involved in more continuous performance profiling, where your team stays on top of it and you build up a performance culture, I'm happy to encourage you to try out Lighthouse, one of the projects we work on. Lighthouse is an auditing tool for performance metrics, but also for progressive web app features and general web platform best practices. I'm also happy to suggest trying out Calibre by Ben Schwarz. Calibre is great in that it allows you to track, over time, everything from your bundle size through to different performance metrics, and to see what impact your different deploys had. So check that out.

I'd also encourage people to check out WebPageTest integration with your git hooks. Housing.com is a progressive web app that did this, and they have this really awesome setup where any time someone submits a new PR for a feature, it runs through WebPageTest and includes a filmstrip at the bottom of the PR to show exactly what impact it had on the user experience.

So, with that: I hope you found this little journey into loading useful. I hope you
found the story about some of the experiences we had with Twitter useful. I know that Twitter is great; it also sometimes feels like group therapy where no one ever gets any better, but it's great. So yeah, if you're interested in learning a little bit more about loading, we started a new blog over on the Chrome team called "reloading", over on Medium (medium.com/reloading), and we intend on publishing more material there over the coming months, about everything from H2 server push through to Brotli and a lot of the other work we're doing in the loading space. And that's it for me. Thank you. [Applause]