Ember performance tweaks: Optimising Assets

15 min read • 12th May 2020

In the previous post, we spoke about build time strategies and how to optimise them, and saw strategies to improve developer productivity. This is the second post in the series:

  1. Improving build timelines & optimising build size
  2. Optimising Assets ← this post
  3. Search engine optimisation
  4. Improving the accessibility of Ember apps
  5. Making Ember apps installable

ember-cli handles bundling your frontend codebase into assets like JavaScript, CSS, HTML, images, etc., which are fingerprinted by default when running a production build. These assets can be deployed to any server or CDN, and that comes at a cost.

Let us take a look at Amazon's S3 pricing, from here.

The crucial point to notice is that storage, API calls, and data transfer all come at a cost. That means the number of assets we push and the bandwidth we consume add to that cost. This applies not just to Amazon S3, but also to services like Netlify and other CDN hosts.

Fingerprinting

Ember CLI's build pipeline takes care of fingerprinting assets for the production environment by default. An asset file's content defines its final fingerprint. broccoli-asset-rev is the addon that handles fingerprinting and is included by default. It automatically fingerprints your JS, CSS, PNG, JPG, and GIF assets by appending an md5 checksum to the end of their names, and rewrites your HTML, JS, and CSS files to reference the new names. Ember CLI also provides a customHash option; when specified, it is appended to your file names as the fingerprint instead of the md5 checksum.
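
For illustration, here is what the rewrite looks like in index.html after a production build (the checksum below is made up):

<!-- before the production build -->
<script src="assets/app.js"></script>

<!-- after fingerprinting; the checksum is hypothetical -->
<script src="assets/app-d41d8cd98f00b204e9800998ecf8427e.js"></script>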

For the sake of understanding, let's take an example app that contains 400 assets consisting of JS, CSS, and images. Also, let's pretend your CDN sets a high cache-expiry header on them. A higher cache-expiry value helps these assets stay cached in the user's browser.

A change in your app's JS generates a new fingerprint for that file alone. When a user visits your app's index.html, it refers to the new app.js (with the new fingerprint), which is fetched from the CDN; the remaining 399 or so assets are picked up from the user's browser - the disk/memory cache.
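
A long-lived caching policy on fingerprinted assets is usually expressed through a response header like the one below; the exact directives depend on your server or CDN configuration:

cache-control: public, max-age=31536000, immutable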

On the other hand, let's say you use a custom fingerprint like the one below via customHash:

// ember-cli-build.js
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  let app = new EmberApp(defaults, {
    fingerprint: {
      prepend: 'https://subdomain.cloudfront.net/',
      // The current timestamp, computed once when the build configuration
      // is evaluated, becomes the fingerprint for every asset
      customHash: `${+new Date()}`
    }
  });

  //...
  return app.toTree();
};

A change in your application logic now generates 400 newly named assets, which means 400 transactions to your CDN - and 400 transactions' worth of cost on your services.

There is another problem associated with customHash: browser caching at the user's end. Since your app now refers to 400 new asset URLs, the browser ends up fetching all of them again, including assets whose content never changed. The more assets users download, the longer your application pages take to render.

ember-cli-deploy, one of the recommended addons for deploying Ember assets to various services, has plugins that can push only those assets that have changed - refer to ember-cli-deploy-manifest. Quoting from the plugin's readme:

Note
This plugin generates a manifest file listing the versioned asset files generated by your app's build process. By comparing the latest manifest to the previous one, your deployment plugin (such as ember-cli-deploy-s3) can determine which files have changed and only upload those, improving efficiency.
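
Conceptually, the diff boils down to a set difference on file names - a minimal sketch of the idea, not the plugin's actual code:

// Each manifest is a list of fingerprinted file names from a build.
// Because fingerprints are content-based, a changed file always gets a new name.
function changedAssets(previousManifest, currentManifest) {
  const previous = new Set(previousManifest);
  return currentManifest.filter((asset) => !previous.has(asset));
}

// changedAssets(['app-abc.js', 'vendor-123.js'], ['app-def.js', 'vendor-123.js'])
// => ['app-def.js'] - only this file needs to be uploaded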

This blog is hosted on Netlify and uses Cloudflare for CDN caching. You may want to have a look at the assets' network requests to see the fingerprinting and caching in action. A high value on an asset's cache-control header is an efficient caching strategy at the user's end, and it is one of Lighthouse's recommended Page Speed optimisations.

Another point to note is that fingerprinting images that rarely change only adds to build time. For example, the images shared in this blog will not change often, which is one reason why this blog's JS and CSS assets are fingerprinted but its images are not. Here's the snippet from this blog engine's ember-cli-build.js:

fingerprint: {
  enabled: config.environment === 'production',
  extensions: ['js', 'css', 'map']
}

However, you may want to include SVG assets in the fingerprinting configuration if you use them as icons in your application.

Dealing with image assets - especially large ones

If you are working with websites or blogs, images contribute far more to asset sizes than JS or CSS do. Consider the posts I have shared in the Photography category as an example: their images are large.

One way to optimise them is to shrink them at build time, or to save them as smaller assets after editing. Another approach is to optimise them at runtime, a series of steps backed by services like Cloudinary that serve optimised images based on quality, device resolution, screen ratio, etc. Both approaches have their pros and cons, and we will see them in the discussions below.

Optimising images at build time

This approach optimises all the images that the website or blog consumes at build time. Libraries like Sharp can do this for you; better still, the Ember addon ecosystem has you covered through an addon called ember-responsive-image. To use this addon, all you need to do is add a configuration to config/environment.js like the one below:

module.exports = function(environment) {
  var ENV = {
    'responsive-image': {
      sourceDir: 'assets/images/generate',
      destinationDir: 'assets/images/responsive',
      quality: 80,
      supportedWidths: [2048, 1536, 1080, 750, 640],
      removeSourceDir: true,
      justCopy: false,
      extensions: ['jpg', 'jpeg', 'png', 'gif']
    }
  };

  // ...

  return ENV;
};

The addon then compiles all the assets in the assets/images/generate directory, creating 5 variants of each at the sizes listed in the supportedWidths array using the Sharp library. It then writes them to the destination directory configured as assets/images/responsive.

The generated images are namespaced with their size, so they can be easily consumed in your templates using the "responsive-image" component or the "responsive-image-resolve" helper. The "responsive-image" component creates an <img> tag whose srcset attribute lists the assets generated for a particular image. If you support mobile and tablet devices, this is an essential checklist item: you don't want users on a 320px-wide mobile device downloading a 5MB, 3000px x 3000px image. The same image at lower quality and a hundredth of the size, say 50KB, is enough to show the details on a small screen.
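
For instance, here is roughly what an <img> with a srcset looks like (the file names are hypothetical; the addon generates and resolves them for you). The browser picks the smallest candidate that satisfies the current viewport:

<img
  src="/assets/images/responsive/hero1080w.jpg"
  srcset="/assets/images/responsive/hero640w.jpg 640w,
          /assets/images/responsive/hero750w.jpg 750w,
          /assets/images/responsive/hero1080w.jpg 1080w,
          /assets/images/responsive/hero1536w.jpg 1536w,
          /assets/images/responsive/hero2048w.jpg 2048w"
  sizes="100vw"
  alt="A hero image">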

The build time approach works fine, and the Sharp library converts at astonishing speed. Its authors claim that:

Resizing an image is typically 4x-5x faster than using the quickest ImageMagick and GraphicsMagick settings due to its use of libvips.

In the process, we resize the images and create new assets at build time, and the resulting configuration is stored in <meta> tags for consumption by the "responsive-image" component.
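
If you want a feel for what happens per image, here is a minimal standalone sketch using Sharp directly - the widths and quality mirror the config above, and the JPEG output format and file names are assumptions for the example:

const sharp = require('sharp');

const supportedWidths = [2048, 1536, 1080, 750, 640];

async function generateResponsiveImages(sourcePath, destinationDir) {
  // One resized file per supported width, at quality 80 as in the config above
  await Promise.all(
    supportedWidths.map((width) =>
      sharp(sourcePath)
        .resize({ width })
        .jpeg({ quality: 80 })
        .toFile(`${destinationDir}/image${width}w.jpg`)
    )
  );
}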

Optimising images at runtime

Runtime optimisation is somewhat different from resizing images at build time. Here, we use a service like Cloudinary, or host our own Thumbor service on Heroku or GCP. This approach doesn't resize 400 images into 5 different sizes (2,000 files in total) at build time and push all of them to our server or CDN. Instead, the idea is to resize the image on the server on demand and maintain a cache of the resized output. So, here's what happens:

The browser asks for image.png > the server resizes image.png > stores the resized image on disk > returns the image to the browser.
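
A toy version of that flow - the idea only, not Thumbor's implementation - could look like this, using Express and Sharp (paths and port are illustrative):

const express = require('express');
const fs = require('fs');
const path = require('path');
const sharp = require('sharp');

const app = express();
const CACHE_DIR = '/tmp/resized';
fs.mkdirSync(CACHE_DIR, { recursive: true });

app.get('/:width/:image', async (req, res) => {
  const width = parseInt(req.params.width, 10);
  const cached = path.join(CACHE_DIR, `${width}-${req.params.image}`);

  // Resize only on a cache miss; subsequent requests hit the disk cache
  if (!fs.existsSync(cached)) {
    await sharp(path.join('originals', req.params.image))
      .resize({ width })
      .toFile(cached);
  }
  res.sendFile(cached);
});

app.listen(3000);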

I chose this approach for my blog engine to explore and learn more about Thumbor. All the images you see in this post and others are routed via Cloudflare's CDN; a Thumbor service running on a GCP Compute Engine instance resizes them as they are requested at runtime.

If you inspect the Network tab for images, you will see they are requested as cdn.abhilashlr.in/<hash>/<size>/<image-path>.

Let's dig a bit deeper into each part of the requested URL. The Thumbor service hosted on the subdomain cdn.abhilashlr.in is responsible for the runtime resizing.

The hash you observe is a security feature of the Thumbor service. It is the recommended way to stop arbitrary clients from using your Thumbor service and exposing it to a DDoS; requests signed with a secret key prevent unwanted access by others. Here's a good read about securing your hosted Thumbor service.
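
To sketch the idea (in practice, use a library like libthumbor), the hash is an HMAC-SHA1 of the operation path, computed with your secret key and encoded as URL-safe base64:

const crypto = require('crypto');

// operationPath is everything after the hash in the URL,
// e.g. '320x0/smart/https://abhilashlr.in/assets/images/some-image.png'
function signedThumborUrl(securityKey, operationPath) {
  const hash = crypto
    .createHmac('sha1', securityKey)
    .update(operationPath)
    .digest('base64')
    .replace(/\+/g, '-') // URL-safe base64, as Thumbor expects
    .replace(/\//g, '_');
  return `https://cdn.abhilashlr.in/${hash}/${operationPath}`;
}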

You can read about how Thumbor works and its detailed documentation from here.

To support such dynamic URLs, I built an addon called ember-thumbor-images. At build time, it generates the URLs of the image assets and stores them in a <script> tag in your index.html. Similar to ember-responsive-image, ember-thumbor-images takes a configuration describing which assets to process, and provides a component and a helper. Its responsive-image component generates the following HTML:

<picture>
  <source media="(max-width: 320px)" srcset="https://cdn.abhilashlr.in/n9B_e40GUy3tBf-8BnPrZZYRpnI=/320x0/smart/https://abhilashlr.in/assets/images/blogs/ember/ember-performance-part-2-1.png">
  <source media="(max-width: 768px)" srcset="https://cdn.abhilashlr.in/wOvq_p1x9_MIQL13sm-FH0WrY88=/768x0/smart/https://abhilashlr.in/assets/images/blogs/ember/ember-performance-part-2-1.png">
  <source media="(max-width: 1024px)" srcset="https://cdn.abhilashlr.in/7ytyKJd6FFUY9rW7syupz8BoCdM=/1024x0/smart/https://abhilashlr.in/assets/images/blogs/ember/ember-performance-part-2-1.png">
  <img src="https://cdn.abhilashlr.in/7ytyKJd6FFUY9rW7syupz8BoCdM=/1024x0/smart/https://abhilashlr.in/assets/images/blogs/ember/ember-performance-part-2-1.png" alt="Prember built dist path folder structure" loading="lazy">
</picture>

Optimising other assets

In the previous sections, we discussed strategies for JS, CSS, and images. But we have an important segment left to cover - the HTML.

If you are building static websites or blogs, JavaScript frameworks like Ember, Angular, and Vue, and libraries like React, can be overwhelming in terms of the bytes delivered to the user. The simple reason: your blog's JS holds sole responsibility for the page's functionality - rendering the page, routing, component UI logic, and so on - and script parsing and execution can end up hurting metrics like First Contentful Paint (FCP) and First Input Delay (FID).

I've left the links for you to understand what each of these metrics means and how it could help you improve your page speeds.

Single page apps are a modern web strategy built on pure frontend routing with no page reloads. But if you time travel ~10 years back in web development, the pages hosted on servers were responsible for the UI, and the UI logic or reusable plugins for those pages were included in the HTML under script tags. In an SPA, we ship minimal HTML, and approximately 90-95% of the functional logic lives in the JS assets. That means that until the browser downloads, parses, and executes the JS, the page's UI will not render - or if it does, it renders only a minimal set of DOM nodes.

An Ember app ships 2 JS assets: app.js and vendor.js. For your entire application to render, both need to be downloaded, parsed, and executed. If you have run Lighthouse for performance optimisation, one of the factors it suggests is to "Minimise main-thread work". You can read further on it here.

Modern-day frontend stacks have improved in this area across frameworks and libraries. Especially if you are using a framework or library for a blog, you should consider Server Side Rendering. You must have heard of Jekyll, React's GatsbyJS, and Ember's empress-blog. empress-blog builds multiple HTML files (like the olden days' server-rendered pages), each containing the DOM required to render the page upfront.

I chose to build my blog engine myself rather than use any of these, because I wanted to learn to build one from scratch. So here's how I approached it:

  • An Ember application that works with HTML + JS
  • Used ember-cli-fastboot, which enables SSR using Node.js.
  • Used prember to generate the HTML files, since I would not be running a Node server to serve them.

These are pretty much the same steps empress-blog takes, and if you are getting started with building your blog, I highly recommend using that addon. This strategy works well for blogs and static websites.
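
For reference, prember's configuration is mostly a list of URLs to pre-render, declared in ember-cli-build.js; the URLs below are illustrative:

// ember-cli-build.js
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function(defaults) {
  let app = new EmberApp(defaults, {
    prember: {
      // Each URL listed here is rendered to a static HTML file at build time
      urls: ['/', '/blog', '/blog/my-first-post']
    }
  });
  return app.toTree();
};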

However, you might wonder how to implement it for large apps. It is something that even I'm exploring and would be happy to learn from you if you have done some work in this regard.

I am pretty confident Embroider will solve this for large applications: chunking the assets into what the current page needs to load and execute, as against loading, parsing, and executing the whole of app.js and vendor.js.

With prember-generated HTML pages, we now serve pages with the full DOM and text required to render the UI, alongside the JS assets. The JS then takes care of subsequent functionality like routing and page transitions, UI components, and so on.

Here's how my dist path looks after introducing prember:

Prember built dist path folder structure

As icing on the cake, you could try some of the following as well:

  1. Mark the vendor.js and app.js script tags with a defer attribute so they execute only after the page has finished parsing (see the snippet after this list). An important point to note here is that you should not use async for this: async scripts execute in whatever order they finish downloading, and your app.js 'requires' vendor.js to have run first. Deferred scripts, by contrast, execute in document order.

  2. Enable compression on HTML files so that your servers serve lightweight HTML to the user.

  3. Inline the most critical CSS (the loading state and a minimal page layout style) and move the stylesheet link tag to the bottom of the HTML page.

  4. Besides 3, if you can dynamically fetch your CSS using a script tag marked with defer, that will shave off a couple more milliseconds. I must warn you, though, that 3 and 4 together can cause minor UI flickering until the DOM paint is complete.

  5. Use service workers to cache assets. This one, though, comes at the price of over-caching, where users sometimes don't receive the latest assets.
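
On point 1, the script tags would look like the following (the fingerprints are hypothetical). Since deferred scripts execute in document order, vendor.js still runs before app.js:

<script src="assets/vendor-1a2b3c.js" defer></script>
<script src="assets/app-4d5e6f.js" defer></script>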

If you are looking for service worker implementations, Ember's addon ecosystem has again got you covered: take a look at ember-service-worker and its family of plugins.

To conclude, we covered three primary topics on assets and how to optimise caching for them. We saw how customising asset fingerprints can lead to improper caching and unwanted side effects, and how to improve images and their caching mechanisms. We learnt how to use SSR to render HTML and optimise for FCP and FID. Finally, we saw some quick tips on other strategies, including service workers.

Hope that was helpful to you! In the next post, we will see how to optimise Ember apps built for websites or blogs for SEO.
