
JJ's Mostly Adequate Summary of Chrome Dev Summit 2018

I went to Chrome Dev Summit 2018. Here is the schedule. Here is the official news from day 1. Here are all the recordings. And, here are my notes:


Chrome started with WebKit from Safari.

They added V8.

They made a joke about the fact that they decided not to add Dart ;)

Chrome has been around for 10 years.

Day 1 Keynote

Ben Galbraith, Director, Chrome and Dion Almaer, Director, Web Developer Ecosystem.

Google was founded 20 years ago.

Android was founded 15 years ago.

In 2008, Chrome launched with process isolation: secure and stable. It's now the standard.

This year, they launched site isolation. Even within a tab, content from different domains is isolated into separate processes.

HTTP pages are now marked as "Not Secure".

80% of the top 100 sites are HTTPS.

V8 has been around for 10 years.

There was something in React Hooks that was slow, and they reacted quickly to fix it.

The new WASM (WebAssembly) compiler is called Liftoff. This made Unity load 10X faster.

Someone got Windows 95 running as an Electron app.

Chrome landed support for AV1. It's a next generation, royalty-free video codec. It's 30% more efficient.

Edge shipped support for WebP, and Firefox has signaled an intent to ship it.

The average webpage is now the size of the original version of Doom.

Sites use 8X more JS than they used to.

They plugged Lighthouse, PageSpeed Insights, and the Chrome User Experience Report.

They talked about Performance Budgets.

Pinterest's mobile web experience used to just be an upsell for their mobile app. They decided to make it first class. They made it a lot faster. They use Service Workers. Mobile Web is now their top platform for new signups. Adding the site to a user's homescreen is now a core feature for them.

Wayfair is another example. They have a Performance Portal. It's helped them improve performance a lot.

Web Package is a new feature to sign a web page. You can prove that the content comes from the original domain.

Portals are a new iframe-like element. They create a new, top-level web page. This allows you to securely pre-load pages that transition (i.e. zoom in) to the new site instantly.

Squoosh is a new example they're launching to demonstrate WebASM, Web Workers, PWAs, Service Workers, etc.

Offscreen Canvas lets you paint to surfaces like Canvas and WebGL offscreen and then copy that to the screen.

Smooth, buttery UIs are important.

Worklets, Virtual Scroller, and a Task Scheduler API are new things they're announcing.

You can install web apps as desktop apps. This works in ChromeOS, Linux, and Windows. Mac support is coming soon.

Twitter is a nice example of this.

WebAuthN enables websites to integrate with biosensors, etc.

They talked about "open digital economy", "federated learning", and "secure aggregation".

Next year, they'll ship new privacy stuff.

web-platform-tests is a project the browser vendors use to test interoperability.

Chrome has been contributing to the Mozilla Developer Network docs. web.dev (BETA) is a new website. It explains the principles of web development. They have guides and clear tutorials. There is a lot of sample code. Glitch is embedded into it. So is Lighthouse.

They announced that you can register your own .dev domain.

Service Workies is an online game to learn about Service Workers.

Project VisBug provides "Open Source Browser Design Tools". It has a tool called "guides" for aligning things. There are a whole bunch of tools. There's a hover tool. It's a Chrome extension. It makes the browser into a design tool. It's neat how he was tweaking the colors of a site.

He briefly mentioned ChromeOS which now has support for Linux and Android apps.

They are providing a 75% discount on Chrome Pixelbooks for all attendees.

Between Session

They did the Big Web Quiz again. At some point, they joked about the fact that someone should hire whoever wins the contest. Later, I tweeted to them that when Jilles won the Big Web Quiz two years ago, I actually did manage to hire him ;)

Get Down to Business: Why the Web Matters

Aanchal Bahadur, Strategic Partner Development Manager; Zack Argyle, Engineering Lead, Pinterest; David Brunelle, Director, Product Engineering, Starbucks; and Matt Rizzo, Senior Product Manager, Spotify.

She goes back to India 3 times a year.

She's from Landour, India, the land of flaky internet. It's either 2G or offline.

Reliable, fast 4G is still a privilege, although the graph looks better than I thought it would.

Investing in web performance pays off.

LinkedIn saw a 4X improvement in job applications by switching to a PWA.

Unlocking the power of the web

Matt Rizzo, Senior Product Manager, Spotify.

In Brazil, many of their users have less than 1GB of space free on their phones.

App downloads are a large commitment for a storage-conscious user.

They used the mobile web to unlock growth.

They started with a limited mobile web player that could play just one song.

For them, there were 3 key features:

  1. Media Session API
  2. Add to home screen
  3. Protected content support (i.e. DRM)

They saw a 54% increase in day one plays.

It was also good for bringing churned users back onto the platform.

They didn't see any cannibalization of their mobile app traffic. In fact, they saw the opposite.

Meeting customers where they are

David Brunelle from Starbucks.

6 million customers were using their iOS app.

They created a PWA that's on par with their mobile app.

For them, the important features were precaching, runtime caching, and service workers.

They focused on less code, caching, and image optimization.

They use Workbox for Service Workers.

Important: Their CDN dynamically optimizes their images.

They use the Credential Management API for sign in. 28% of Chrome users are using it across mobile and desktop.

Add to home screen is important.

They use a pattern library. They open sourced it.

They talked about the importance of thoughtful animations.

They're using React and Webpack.

They have Blackberry users using their mobile web app.

Surprisingly, 25% of their orders come from desktop browsers.

They saw a 65% increase in Starbucks Rewards registrations via the web.

We've shipped, now what?

Zack Argyle from Pinterest.

They rebuilt their mobile web app from scratch a year ago. It took them 3 months. It was 4X faster to load and 6X faster on subsequent visits. They only use 200kb of JS and 6kb of CSS (inlined into the app shell).

You have to keep fighting performance entropy. Performance budgets were the answer for them. They talked about logging, alerting, and prevention with regards to performance budgets.

They talked about JavaScript bundle sizes. WebPack has a plugin to track this. It logs bundle sizes. They have dashboards and alerts. Importing a single dependency can kill you if it itself has a large dependency tree.

They have a custom performance metric called "Pinner Wait Time" (PWT). It includes time to interactive + above-the-fold images.
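Their exact definition wasn't shown, but as a hypothetical illustration, a composite metric like PWT could take the later of "time to interactive" and the last above-the-fold image (the function and parameter names here are made up):

```javascript
// Hypothetical sketch of a composite metric in the spirit of Pinterest's
// "Pinner Wait Time": the page isn't "done" until it's interactive AND the
// above-the-fold images have rendered, so take the later of the two.
function pinnerWaitTime(timeToInteractiveMs, aboveFoldImageTimesMs) {
  // The latest above-the-fold image load time (0 if there are none).
  const lastImageMs = Math.max(0, ...aboveFoldImageTimesMs);
  return Math.max(timeToInteractiveMs, lastImageMs);
}
```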

They have 100s of experiments at any given time. They can tell if an experiment impacts the PWT.

This resulted in more users and longer sessions. Their PWA is their number one source of new signups.

They saw a 6X increase in number of "pins" seen, which directly correlates to their revenue.

Perf has directly impacted the company's bottom line.

  1. Reduce friction to get more users.
  2. Use Workbox caching strategies to meet users where they are.
  3. Set a perf budget in order to focus on perf.

Between Sessions

The closing tag for "sarcasm" is in the HTML element spec but the opening tag is missing.

acronym is a tag in browsers, but it's no longer in the specs. It's very deprecated.

feFuncA is a real element (it's an SVG filter primitive).

State of the Union for Speed Tooling

Paul Irish, Performance Engineer and Elizabeth Sweeny, Product Manager.

You can't improve what you don't measure. -- Peter Drucker

Metrics: Measure, optimize, and monitor.

Focus your metrics on the user.

First Contentful Paint is the time between navigation start and the first text on the page.

Time to Interactive is the time between navigation start and once the main thread and network have calmed down. This is really a lab metric.

First Input Delay is the time from when the first input has been received until it's been dispatched. It's another field metric.

These metrics are available in various places such as Lighthouse, PageSpeed, Chrome UX Report, and WebPerf API.

Lighthouse
They refactored a bunch of the PWA stuff. This is launching today. The results are a lot more black and white.

They made Lighthouse about twice as fast.

They changed how scores are represented. Green was previously 75-100. Now, it's 90-100. If you're in the green, you should feel really good about it--don't worry about hitting 100.

They improved the throttling simulation. "Fast 3G" is now "Slow 4G".

Calibre, treo, and SpeedCurve are commercial products that integrate Lighthouse.

They plugged again.

Squarespace used Lighthouse to improve their 95th-percentile TTI by 3X.

Chrome UX Report (CrUX)

CrUX is a dataset powered by real user measurements from across the web.

They have regional analysis and country-specific datasets.

The new CrUX dashboard was just announced. It's more easily accessible. You no longer have to write scripts using BigQuery.

They went from 10k origins to 4 million origins.

Unified Tooling

PageSpeed Insights is now powered by Lighthouse.

It even has CrUX data built in.

They have mobile and desktop tabs.

PageSpeed Insights has an API. PageSpeed API v5 is Lighthouse API v1. There's a CrUX summary for the URL/origin.

PageSpeed Insights has a nice mix of field and lab data.

  1. Measure well, measure often.
  2. Use PageSpeed Insights for quick Lighthouse analysis.
  3. Utilize CrUX real world data to round out your perf perspective.
  4. Evaluate perf at every stage. This is now aided by the PageSpeed Insights API.

Speed Essentials: Key Techniques for Fast Websites

Houssein Djirdeh, Developer Advocate and Katie Hempenius, Chrome Engineer.

They're going to cover images, web fonts, and JavaScript.

Everything they cover can be audited by Lighthouse.

Images
Images can eat your entire perf budget.

Use the appropriate format, compression, display size, and density, and lazy load your images. Automate and systematize these things.

Use the right format. Animated GIFs are huge. Videos are much smaller. Animated GIFs are 5-20X larger.

Twitter automatically converts animated GIFs to autolooping videos.

Videos have inter-frame compression.

She showed how to go from an animated GIF to a video using ffmpeg and the video tag. See the video for the command and the code.

WebP has arrived. Edge shipped support last month. Firefox has shown intent to ship.

You can use the picture and source tags to use WebP while also supporting other browsers.
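For example (filenames are placeholders):

```html
<picture>
  <!-- Browsers that understand WebP use this source... -->
  <source type="image/webp" srcset="hero.webp">
  <!-- ...everyone else falls back to the plain img. -->
  <img src="hero.jpg" alt="Hero image">
</picture>
```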

She talked about the AV1 video format. It's the future of video on the web. It has 40-50% better compression than MP4. It's still too new to use, though.

She talked about lossless vs. lossy image compression.

Configuring the quality setting to 80-85 will reduce file size by 30-40% while having minimal effect on image quality.

Use imagemin. It has plugins for the various image types.

Sites should be doing a better job with image sizing. Serve multiple image sizes. Use 3-5 different image sizes.

Use sharp and jimp to resize images.

You can use the img tag with the srcset and sizes attributes.
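For example (filenames and breakpoints are placeholders):

```html
<!-- The browser picks the smallest candidate that still fills the slot. -->
<img src="photo-800.jpg"
     srcset="photo-400.jpg 400w, photo-800.jpg 800w, photo-1600.jpg 1600w"
     sizes="(max-width: 600px) 100vw, 50vw"
     alt="Photo">
```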

She talked about lazy loading images. This alleviates the bottleneck for the initial load. It can make a huge difference.

Look at lazysizes and lozad.

Important: Lazy loading is simple and important. Native lazy loading is coming to Chrome.

Web fonts
Flash of Invisible Text (FOIT) affects 2 out of 5 sites.

Flash of Unstyled Text (FOUT) is better.

Important: Use font-display: swap every place you use @font-face.
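For example (the font name and URL are placeholders):

```css
/* With font-display: swap, text renders immediately in a fallback font
   (FOUT) instead of staying invisible until the web font loads (FOIT). */
@font-face {
  font-family: "Body Font";
  src: url("/fonts/body.woff2") format("woff2");
  font-display: swap;
}
```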

JavaScript
The average amount of JavaScript sent to browsers is 370k for mobile and 420k for desktop.

It's usually about 1MB uncompressed.

It's too easy to pull in a ton of dependencies.

He talked about bundle splitting.

He talked about dynamic imports.
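A runnable sketch of a dynamic import; a Node built-in stands in for a heavy app module here so the example runs anywhere:

```javascript
// Dynamic import(): the module is fetched and evaluated only when this
// function first runs, so it stays out of the initial bundle. (Bundlers
// such as webpack split each import() target into its own chunk.)
async function joinLazily(...parts) {
  // In an app this would be one of your own heavy modules,
  // e.g. await import('./charts.js').
  const path = await import('path');
  return path.join(...parts);
}
```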

Vue, Angular, and React all have higher-level approaches to code splitting.

Important: React 16.6 has Suspense and lazy.

Use preload to fetch things ahead of time when it makes sense.

Only transpile what you need to transpile.

He talked about @babel/preset-env and the useBuiltIns setting.

He talked about differential serving. This is where you serve untranspiled code to newer browsers. You can do this using ES Modules. In @babel/preset-env, you can set targets: { esmodules: true }. Use a script tag with type=module vs. nomodule.
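A minimal differential-serving sketch (filenames are placeholders; the modern bundle would be produced with @babel/preset-env's esmodules target):

```html
<!-- Modern browsers load the untranspiled ES module bundle... -->
<script type="module" src="app.modern.js"></script>
<!-- ...older browsers ignore type=module and load the transpiled one. -->
<script nomodule src="app.legacy.js"></script>
```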

The NY Times uses Sapper as their client-side framework. They're also using Rollup.

He talked about code coverage in Chrome.

He talked about Webpack Bundle Analyzer.

Important: You can find the cost of a library with Bundlephobia.

Set budgets. Integrate these into your build workflow. You can use Lighthouse CI.

They talked about doing code splitting at the route-level, for i18n, etc.

bundlesize is a tool they use to set budgets for their bundles.

Chrome can serve a lightweight version of the page when the user has "Data Saver" mode enabled.

navigator.connection.saveData will tell you if users have "Data Saver" enabled.

You can use navigator.connection.effectiveType to tell what sort of connection your users have.
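A hedged sketch of using these signals together (the helper name and variant strings are made up; the connection object is injected so the logic runs and tests outside a browser, where it would be navigator.connection):

```javascript
// Decide how heavy a payload to send based on Network Information API hints.
function chooseImageVariant(connection) {
  if (!connection) return 'default';     // API not available in this browser
  if (connection.saveData) return 'low'; // user explicitly asked to save data
  const slow = ['slow-2g', '2g', '3g'];
  return slow.includes(connection.effectiveType) ? 'low' : 'default';
}
```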

Use what works.

All of this stuff is covered on web.dev.

Building Faster, More Resilient Apps with Service Worker: A Caching Strategy Deep Dive

Ewa Gasperowicz, Developer Programs Engineer and Phil Walton, Developer Programs Engineer.

ServiceWorker is now supported in all modern browsers.

It's great for offline users, and it can help with perf for repeat users.

Important: If used improperly, Service Workers can make things worse. There are benefits and costs.

Requests can skip the network entirely.

You can show stale data while fetching new data in the background.

Chrome can cache bytecode for JavaScript.

Service worker startup isn't free. If you're just going to fetch stuff from the network, firing up the service worker makes things slightly slower.

Based on his experience with his own website:

  • 75% of the time, the service worker hasn't been fired up yet
  • It takes 20-100ms on desktop to fire it up
  • It takes 100ms on mobile to fire it up

There are multiple possible scenarios. If you're going to cache, cache hit is the fastest scenario. Then no SW. Then, all the other approaches with SW.

Cache reads aren't necessarily always instant.

Aggressive precaching can hog the network. If your service worker caches everything, this might bite you.

Advice: Wait until after the load event before registering your service worker.

They talked about Workbox.

Pay attention to which requests you optimize.

He talked about navigation requests vs. resource requests.

Most people aren't very good at responding to navigation requests from the cache, yet this is where it will help the user most.

It can be helpful to use HTTP caching headers, link preload, and SWs.

Here are three ways to speed up navigations:

  1. Respond from cache right away. Then check for updates afterwards. Use staleWhileRevalidate in Workbox.
  2. Important: When the network is needed, fetch partial content, and stream the rest from cache. Workbox can help. (This technique was new to me.)
  3. Use navigation preload so network requests and service worker startup can happen in parallel.
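The first strategy can be sketched without any framework; this is roughly what Workbox's staleWhileRevalidate does for you (cache and fetchFn are injected here so the logic can run, and be tested, outside a service worker):

```javascript
// Minimal stale-while-revalidate sketch.
async function staleWhileRevalidate(request, cache, fetchFn) {
  const cached = await cache.match(request);
  // Always revalidate in the background so the cache stays fresh.
  const network = fetchFn(request).then(async (response) => {
    await cache.put(request, response);
    return response;
  });
  // Serve the (possibly stale) cached copy instantly; fall back to the
  // network on a cache miss.
  return cached !== undefined ? cached : network;
}
```

In a real service worker you'd cache response.clone() rather than the response itself, since a Response body can only be read once.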
Cache Management

Store the right resources at the right time. Control size. Prevent quota overflow. Update efficiently.

Critical resources include the HTML, the core JS scripts, and basic styles.

Non-critical resources include chat widgets, images below-the-fold, and video.

Trash resources that you shouldn't even have include unoptimized images, unused libraries, and dead CSS rules.

She talked about precaching vs. runtime caching in your service worker.

You can store partials instead of full HTML.

You can control cache expiration in Workbox.

Workbox's precaching feature can use content hashes.

Workbox has a Webpack plugin.

You can give the user explicit controls they can use to save things for later.

Look at "Network independence" on web.dev.


  1. Have a plan.
  2. Don't just re-invent the HTTP cache.
  3. Optimize navigation requests.
  4. Make perf decisions based on data.
  5. Control app size and quota usage.
  6. Respect the user.

Smooth Moves: Rendering at the Speed of Right ®

Jason Miller, Developer Programs Engineer and Adam Argyle, UX Engineer.

It's good to test with a Nexus 5 which is from 2013.

You can think of smoothness as perceived performance.

RAIL stands for Response, Animation, Idle, and Load.

Responsive: Respond to user input in 100ms--ideally 50ms.

Animation: You have 16ms to render a frame--ideally 10ms since the browser might take 6ms.

Idle: Do your background work in 50ms chunks.

Load: Load in 5s. Ideally in 1s for small changes.
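The Idle guideline can be sketched as chunked background work. In the browser the deadline object would come from requestIdleCallback; it's injected here so the chunking logic itself is testable:

```javascript
// Run queued tasks until the idle deadline runs out, then stop.
function processInChunks(tasks, deadline) {
  let done = 0;
  while (done < tasks.length && deadline.timeRemaining() > 0) {
    tasks[done]();
    done += 1;
  }
  return done; // the caller re-schedules the remainder on the next idle period
}
```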

We want our apps to be smooth everywhere--even low-end hardware.

Chrome DevTools has all the tools to measure this stuff.

In the expansion menu next to the console, there's a Rendering panel.

In the top-right expansion menu, there's a Layers panel.

In the expansion menu next to the console, there's a Performance Monitor.

Finally, there's Lighthouse.

Three keys to "smooth moves":

Efficient animations

He talked about how rendering, layout, paint, and compositing work.

(I stepped out for a moment.)

GPUs are really good at transforming composited layers.

Please don't animate max-height. Use a transform-based slide instead.


Interaction with the DOM affects everything else.

He talked about read vs. write operations with respect to the DOM.

Deferring reads with requestAnimationFrame doesn't help.

Reading properties that require layout data before writing layout data can lead to problems.
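A sketch of the fix, batching all reads ahead of all writes so the browser only computes layout once (libraries like fastdom implement this idea; the function names here are made up):

```javascript
// Batch DOM reads before DOM writes to avoid forced synchronous layout
// ("layout thrashing").
const reads = [];
const writes = [];
function queueRead(fn) { reads.push(fn); }
function queueWrite(fn) { writes.push(fn); }
function flushFrame() {
  while (reads.length) reads.shift()();   // measure first...
  while (writes.length) writes.shift()(); // ...then mutate
}
```

In the browser, flushFrame would run inside a requestAnimationFrame callback.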

Lazy wins

Avoid synchronous interactions. For instance, let users type as many messages as they want--don't block after each message.

HEART stands for happiness, engagement, adoption, retention, and task success.

Whenever possible, make use of browser primitives. Use position: sticky instead of doing it by hand.

Use scrollIntoView with behavior: smooth.

He talked about building a carousel. Doing things manually requires pointer events and transforms. Another option is scroll-snaps in CSS. You get nice things like bounce, etc.
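A minimal CSS scroll-snap carousel sketch (class names are placeholders):

```css
/* The browser handles momentum, snapping, and "bounce" for free. */
.carousel {
  display: flex;
  overflow-x: auto;
  scroll-snap-type: x mandatory;
}
.carousel > .slide {
  scroll-snap-align: center;
  flex-shrink: 0;
}
```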

Keep the browser pipeline in mind.

Complex JS-heavy Web Apps, Avoiding the Slow

Jake Archibald, Developer Advocate and Mariko Kosaka, Drawsplainer.

Mosaic 1.0 shipped with img tags.

Images account for 52% of bandwidth. However, they aren't otherwise as resource intensive.

cjpeg and cwebp are command line tools for image compression.

They announced Squoosh. You can drag and drop an image onto the window.

"It works great on a Chromebook. I was paid to say that."

We're more sensitive to changes in brightness than changes in color.

You can save even more using WebP over jpeg, even though Mozilla did a great job with their jpeg compressor.

WebP's lossless codec is very different than its lossy codec.

Squoosh gives you a little UI to try lots of different image compression tools.

Squoosh works in the latest version of all modern browsers.

Squoosh is based on WebASM.

WebASM is supported by all modern browsers.

They compiled various C codecs to WebASM using Emscripten.

Browsers do have image compressors built into the Canvas API, but they're really bad.

Surma is their WebASM guy. He wrote an article about it. He had to build some C++ code to make it all work.

They compile stuff using Emscripten. It generates WebASM and JavaScript.

They used Preact and Webpack for Squoosh.

Preact is very small, and one of the authors works at Google.

The whole app is 400kb unzipped. Most of that size is in the codecs--they're 300kb.

They use web workers to do the image compression in the background so the UI stays responsive.

They're using dynamic imports with await. Webpack can handle await import(...).

They use two workers. If the first worker is already working, but you want to kill it, then start the second. When the first notices that you're using the second, it'll kill itself.

Lazy load as much of the JS as possible.

Their main page is 15kb. On 3G, they can load in 3s. They're interactive in less than 5s.

They couldn't have hit these numbers with React (30kb), Angular (bigger), or Ember (even bigger).

Figure out what the first interaction is going to be, and only ship the code necessary for that.

Keep an eye on the bundles between builds.

They plugged Rollup, but it doesn't have "the holistic asset graph we needed."

You can use Preact and Web Components together.

Polymer != Web Components. Polymer is a framework on top of that lower-level mechanism.

They released a few of their "components" as Web Components. You can use them in whatever framework you want.

See custom-elements-everywhere.

The site works offline using Service Workers.

They cache the whole site when the user first visits it. They don't download all the codecs ahead of time. They wait for the user to interact a bit.

They talked about why web is better than mobile. It's easier to start using the app, easier to share the app, easier to download the app, etc.

Between Session

There were a bunch of keywords that were reserved in ES2, but are no longer reserved.

Building Modern Web Media Experiences: Picture-in-Picture and AV1

François Beaufort, Developer Programs Engineer and Angie Bird, Chrome Engineer.

Media on the web matters.


Picture-in-Picture is a new web API.


The user has to interact with the page first.

It's available in Chrome 70 for Linux, Mac, and Windows. Android support is coming soon.

Soon, they'll support web cam video streams.

There will be a screen capture API. This will allow screen sharing.

You can custom tailor the media controls.

It's perfect for:

  1. Multi-tasking
  2. Recording the desktop while using the camera as well
  3. An always on top media center
AV1 video codec

Angie Chiang.

They're trying to make AV1 the future preferred video codec for the web.

It's open source and royalty free.

They want to deploy widely and quickly.

Using H.264, a 25GB uncompressed video can be compressed to 300MB.

VP9 is the predecessor of AV1.

AV1 has 30% better compression than VP9.

The spec is published. There's still more work to be done to roll it out world-wide.

Codec switching

You can switch from AV1 to VP9 fairly seamlessly.

She demoed switching from H.264 to AV1 seamlessly.

Media capabilities

It's not always clear if the device can play back something smoothly and power efficiently.

There are new APIs that can give you more information.

See: navigator.mediaCapabilities.decodingInfo
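decodingInfo resolves with { supported, smooth, powerEfficient } flags for a given configuration. As a hedged sketch (the helper and the codec labels are made up; only the three flags come from the API), you could query several candidates and pick the best:

```javascript
// Given decodingInfo-style results for several codec candidates, prefer one
// the device can play smoothly and power-efficiently; otherwise fall back
// to anything merely supported.
function pickPlayableCodec(results) {
  const ideal = results.find((r) => r.supported && r.smooth && r.powerEfficient);
  if (ideal) return ideal.codec;
  const fallback = results.find((r) => r.supported);
  return fallback ? fallback.codec : null;
}
```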

These features enabled YouTube users to see the buffering UI 7% less often.

Google for "Chrome media updates".

Modern Websites for E-commerce in the Real World

Cheney Tsai, Mobile Solutions Consultant and Ramya Raghavan, Mobile Solutions Consultant.

Houses are a great metaphor for the cleanliness of codebases.

Just because Google is providing some cool tool doesn't mean it's going to work well for your web site.

Focus on the user, and all else will follow. -- Google

Users value consistency.

Expedia's new frontend platform is 4X faster.

Reduce friction.

There are three main challenges: Organizational alignment, technical approach, and measurement.

Organizational alignment

Everyone says they want performance, but when it really comes down to it, it's hard for people to budget for it.

Performance is not just an engineering priority.

Cross-functional buy-in is key to the success of performance initiatives.

Technical approach

Balance long-term vision with achievable short-term goals.

Align your goals against your long-term vision.

Walmart reduced their time to interactive by 70%. They removed 500KB of JS and 40KB of CSS from their page loads.

eBay prefetches the first 5 search results.

Airbnb moved to client-side routing. They saw a 7-8X speed improvement. 88% of search result "loads" are now client-side rendered.

Measurement
Create a measurement strategy that is shared, automated, and actionable.

She plugged performance budgets again.

She mentioned RUM speed index.

She talked about performance portals.

Between Session

window.window.window... works.

Progressive Content Management Systems

Alberto Medina, Developer Advocate and Weston Ruter, Developer Programs Engineer.

There are some pillars of delightful UX:

  • Perf and reliability
  • Trust and safety
  • Accessibility and integration
  • Great content quality

"Progressive web development" means developing:

  • Modern workflows and web APIs
  • Coding and perf best practices
  • Effective incentives and validation structures

He talked about web apps vs. content management systems.

54% of sites are built on a CMS. CMSs are seeing 11% YoY growth.

The CMS space is large and complex.

Upgrading these CMSs is hard:

  • High complexity
  • Architectural choices that are difficult to work with
  • High levels of fragmentation
  • Lack of effective incentives

CMSs have a core, an extension system, a developer community, and a user base.

They're trying to bring PWA technology to WordPress.

30% of the web runs on WordPress.

AMP is a well-lit path for modern web development.

They built AMP for WordPress.

The next step is to integrate modern web capabilities and APIs.

Surma built a WordPress theme that did "all the things." It had a service worker. It had offline capabilities. It had background sync. It had smooth transitions. There were a lot of challenges because WordPress made it hard to do the things he wanted to do.

He said, "Service Worker API Core Integration".

They built some PHP abstraction that compiles down to JavaScript for use in the Service Worker.

They talked about the App Shell Model.

They built a SPA in WordPress with AMP content in the middle of the navigation.

They made it so that the content had to be AMP compatible.

They used a Service Worker for various things like the offline page, offline commenting, etc.

He talked about Gutenberg. It's a web editor.

Progressive CMS highlights:

  • Drupal
  • Magento
  • Adobe Experience Manager
  • EC-CUBE (popular in Japan)

XWP is the WordPress agency that they're working with.

Making Modern Web Content Discoverable for Search

Martin Splitt, Developer Advocate and Tom Greenaway, Developer Advocate.

Google search and indexing

There's now a renderer. There's a version of Chrome that runs the page.

Important: The rendering of JS-powered websites in Google Search is deferred until Googlebot has resources available to process that content. Sometimes, it can take up to a week before the render is complete.

In 2016, there were 130 trillion documents on the web.

Dynamic rendering means switching between client-side rendered and pre-rendered content on the server. I.e., you can send a different payload when you're hit by Googlebot.

You can use puppeteer or rendertron to help with this.

There's an npm module called rendertron-middleware for Express.

Use user-agent sniffing to look for Googlebot. You can also use reverse DNS lookup.
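A naive sketch of the user-agent check (don't rely on this alone; user agents are trivially spoofed, which is why they also suggest the reverse DNS lookup):

```javascript
// Naive Googlebot detection by user-agent string.
function looksLikeGooglebot(userAgent) {
  return /Googlebot/i.test(userAgent || '');
}
```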

Tools for "Debugging Google Search"

There are a bunch of new Search Console features.

Within the Search Console, you can export a link to share something with someone who doesn't have access to your Search Console.

There are SEO audits in Lighthouse.

Best practices

Googlebot runs Chrome 41.

It only supports ES5. You should be using Babel to transpile.

It has mixed support for web components. Use polyfills. He recommends

It's stateless. It doesn't have service workers, local storage, session storage, IndexedDB, WebSQL, cookies, or the cache API.


You should use dynamic rendering if you have rapidly changing content, modern JavaScript features (that aren't transpiled), or for sites with a strong social media presence (their crawlers don't use JavaScript). It's still a workaround though.

Google is going to be improving on their side too. They're going to be integrating crawling and rendering. They're hoping to catch up with Chrome and use a more sustainable update process.

End of day 1.

Between Session

Yesterday was all about what you can do today. And today is all about the future, including emerging proposals.

Day 2 Keynote

Malte Ubl, Engineering Lead and Nicole Sullivan, Product Manager.

Web frameworks

The web platform and frameworks don't take each other into account often enough.

You can't not use a framework. You either use a good one, or you're going to cobble together some unmaintainable mess.

There are web primitives, built-in modules, frameworks, and web components (speaking generically).

Most people in the room had built a React application.

We're struggling to build awesome stuff because it's actually really hard to make things buttery smooth and highly performant.

There's a tradeoff between developer experience and user experience.

Frameworks sometimes make web apps slower. They are also our best hope to make them faster.

React has been working on some really foundational work, including lazy, suspense, and concurrent react/Fiber.

Angular added performance budgets to their CLI, removed unnecessary polyfills, etc.

Vue ships modern code to modern browsers, and it preloads and prefetches by default.

Polymer transitioned to lit-elements for super small components and got faster because Firefox shipped native web components.

Svelte was already super fast. You can use it to build an idiomatic Hacker News app in under 20kb total.

AMP shipped a feature policy rule against synchronous XHR for all ads and reduced JS size on the wire by up to 20% via Brotli compression.

Ember removed jQuery from its default bundle and implemented incremental progressive rendering which allows "batched rehydration".

Frameworks + integrated best practices = great outcomes for everyone

Frameworks and Chrome

They announced that:

  1. They're going to include frameworks in the Chrome Intent to Implement process.
  2. They're going to provide $200k of funding for improving perf best practices in frameworks.
  3. They're going to increase collaboration with framework teams.

There are a bunch of things "under construction", including:

  1. Display locking: Don't want to update the DOM inadvertently? You can lock it!
  2. Important: "OMG, page transitions!" (i.e. modern navigation): For example, you can swipe-up to navigate. On today's web, you have to choose between fancy transitions and doing a real navigation. There is no solution for fancier cross-origin navigations. They're improving page transitions with "portals". This is a highly experimental web API. It enables fancy transitions for real navigations. Web pages are the new SPAs.
  3. Feature policies: These allow you to opt-into better control over policy. There are two modes: report-only and enforce. For instance, you can configure the browser to disallow synchronous XHRs, unoptimized images, oversized images, unsized media, etc.
  4. Instant loading: Load the app before the user even clicks. In the past, preloading a web page has resulted in privacy problems. The solution is web packaging. The app author signs the content with their certificate. Anyone can deliver the app. Browsers treat it as if it came from the original domain. They collaborated with the AMP team on this. They can even make the AMP URL problem go away. Web packaging can bring back cell tower edge caching even in an HTTPS world. (Note, this is not related to Webpack.) It'll bring bundling to the web platform as a first-class citizen.
  5. Scheduling API: It puts developers in control of when things happen in the browser. Schedulers in frameworks have serious downsides. They're working with the React team on this. They were inspired by Grand Central Dispatch from iOS.
  6. Animation Worklet: This will enable jank-free parallax. There's a Web Animation API that just landed in Safari preview. CSS animations are awesome and everywhere, but they only work for time-based animations. Animation Worklets are web animations that support a custom time source.
  7. Virtual Scroller: "Holy Sh*T" UITableView is coming to the web platform. They're going to have searchable, invisible DOM.

Key takeaways:
  1. The future of the web platform is going to be framework-inclusive.
  2. Instant loading and page transitions are coming to the web.
  3. There is a new set of low-level APIs that make it easier to build reliably fast web apps.

Feature Policy & the Well-Lit Path for Web Development

Jason Chase, Chrome Engineer.

"Web development is hard."

They're adding guardrails.

AMP is an example of guardrails in web development.

Feature policies establish a contract between you and the browser.

For example, you can disallow:

  1. Images that are not optimized.
  2. Images that are too big for the viewport.
  3. Unsized media.

Here's an example HTTP header:

Feature-Policy: oversized-images 'none'

You can use this to allow videos to autoplay on your site.

You can get granular control over features.

You can also do it via an iframe:

<iframe allow="autoplay" src="...">

Policies can be applied in two ways:

  1. Inherited from the top-level frame
  2. Via a one-way toggle

There are two approaches to handling policy violations:

  1. Simply detect the bad behavior
  2. Prevent the bad behavior

You can use feature policy prevention mode during development.

You might use report-only mode in production.

There are browser interventions. For instance, it might block navigator.vibrate because the user hasn't tapped on the frame yet.

There are lots of things going on in the wild on your site.

They block document.write on 2G.

There's a reporting API which reports on network errors, feature policy violations, deprecations, crashes, CSP violations, and browser interventions.

There's a ReportingObserver to funnel these things to your own system. You can even use the Report-To header to have the browser send reports out-of-band to a URL.
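A sketch of that funnel (the `/reports` endpoint and the `summarize()` helper are my own illustration, not from the talk):

```javascript
// Collect browser-generated reports and forward them to your own backend.
// summarize() keeps just the fields we care about (illustrative).
function summarize(report) {
  return { type: report.type, url: report.url, body: report.body };
}

// ReportingObserver only exists in supporting browsers, so guard it.
if (typeof ReportingObserver !== 'undefined') {
  const observer = new ReportingObserver((reports) => {
    navigator.sendBeacon('/reports', JSON.stringify(reports.map(summarize)));
  }, { types: ['deprecation', 'intervention'], buffered: true });
  observer.observe();
}
```

The `buffered: true` option asks for reports that were generated before the observer was created.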

CSP (Content Security Policy) now integrates with the reporting API.

Firefox is implementing this. Safari has partial support. Chrome has already shipped this stuff.

There are about 25 policies. Some are hidden behind a flag.

There is a DevTools extension to configure policies.

virtual-scroller: Let there be less (DOM)

Gray Norton, Engineering Manager.

Trying to render too much can slow things down.

For a certain class of performance problems, the best thing you can do is minimize the number of things in the DOM.

They're adding first-class virtual scrolling to the platform.

This is a core feature in native mobile toolkits.

In React, there's already react virtualized and react window.

But it'd be better if it was part of the platform. Find in page is a problem. The content is not indexable by search engines. This is an example of "paving cow paths."

He's using an "Android Go" phone as a typical low-end phone.

You really start feeling the pain when you get up to around 500 items.

The HTML spec is about a million words. It's notoriously slow to load.

When adding new high-level web features, it's important to "nail the basics" and embrace layering.

They want to both identify and implement missing primitives. They want to enable "vanilla" usage but also give frameworks and libraries a base to build on.

They want to use this feature as a testing ground for new principles for building new, higher-level features.

It's being implemented in JavaScript in the open on GitHub. This is unusual, but it's part of the whole layered approach.

<script type="module">
    import 'std:virtual-scroller';
</script>

It looks like it's based on Web Components.

One missing primitive they noted was "invisible DOM". He showed an "invisible" attribute.

Adding this enables intra-document links to content that isn't even in the DOM yet.

Invisible content could even help out with making the content indexable by search engines.

They're going to work on more invisible DOM integration. It's available behind a flag in Chrome Canary.

They may add more primitives.

They want to do down-the-stack explorations.

They're collaborating with frameworks.

A Quest to Guarantee Responsiveness: Scheduling On and Off the Main Thread

Shubhie Panicker, Chrome Engineer and Jason Miller, Developer Programs Engineer.

They showed a sequence of long tasks blocking the main thread; user input gets queued behind them, and frames get dropped.

Typing is input and should occur within 10ms.

There are a bunch of different types of work each with their own deadlines.

Potential solutions:

  • Do less work.
  • Do work in chunks to avoid blocking. Give chunks different priorities.

Doing this manually can be difficult and impractical. Frameworks can help.
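A minimal sketch of the chunking idea in plain JavaScript (my own illustration, deliberately simpler than a real framework scheduler):

```javascript
// Process items in time-budgeted chunks, yielding to the event loop
// between chunks so input handlers and rendering can run in between.
function processInChunks(items, processItem, { budgetMs = 10 } = {}) {
  return new Promise((resolve) => {
    let i = 0;
    (function runChunk() {
      const deadline = Date.now() + budgetMs;
      // Work until either the time budget or the items run out.
      while (i < items.length && Date.now() < deadline) {
        processItem(items[i++]);
      }
      if (i < items.length) {
        setTimeout(runChunk, 0); // yield, then continue
      } else {
        resolve();
      }
    })();
  });
}
```

A real scheduler also orders chunks by priority (user-blocking before idle), which is the part this talk argues should be standardized.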

We need a scheduler.

Google Maps has an internal scheduler. It gives a much higher priority to user input.

A scheduler needs to know when to execute things at the best time.

Task priorities: user-blocking, default, or idle.

Building this stuff purely in JavaScript has its challenges because it doesn't have enough control.

They're considering adding new, low-level APIs.

Hence, they thought of building a built-in scheduler.

They're working on a TaskQueue API.

There are multiple different queues with different priorities.

They are collaborating with frameworks. It's still pretty early stage.

Sometimes work can't be chunked. That's why they're thinking of background threads.

There are a bunch of places where threads really make sense.

In the browser, the closest thing we have to threads is Workers.

You can transfer a buffer between Workers with postMessage.

However, structured cloning means each thread hop has a cost.

There are also MessageChannels and BroadcastChannels.

Soon, they'll be adding "Transferable Streams".

All of these are message-based.

They're thinking of a higher-level API based on proxying.

Thread hops are expensive.

There are all sorts of problems with the messaging approach.

They're thinking about something like await postTask(...). This returns the result.
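A sketch of how such an await-able call can be layered over postMessage today (the `makePostTask` name and the message shape are my own illustration, not the actual proposal):

```javascript
// Wrap a worker's postMessage/onmessage pair in a promise-returning call:
// each request gets an id, and the matching response resolves its promise.
function makePostTask(worker) {
  const pending = new Map();
  let nextId = 0;
  worker.onmessage = (event) => {
    const { id, result } = event.data;
    pending.get(id)(result);
    pending.delete(id);
  };
  return function postTask(taskName, ...args) {
    return new Promise((resolve) => {
      const id = nextId++;
      pending.set(id, resolve);
      worker.postMessage({ id, taskName, args });
    });
  };
}
```

The worker side would dispatch on `taskName` and post `{ id, result }` back; the ids are what let multiple in-flight calls share one channel.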

iOS has Grand Central Dispatch, and it's really good.

They wanted something ergonomic, pooled, and managed.

Naive task queueing is not enough.

Hops can take up to 15ms.

There are a bunch of pitfalls if they do this stuff wrong.

They have a new proposal called Task Worklet.

It allows multiple steps to be broken down, and it only returns to the main thread when it's all done.

They showed a typical problem that took 3 tasks and 6 thread hops. Using their approach, it only required 2 thread hops.

The Task Worklet proposal involves thread pools, named tasks, and task graphs. It's still really early.

Right now, each Worker is both an OS thread and a V8 Isolate (which requires 4 MB). There's no code sharing with the main thread. The base cost of a worker includes 10ms of startup overhead. The termination overhead is 5ms. Each thread hop is 1-15ms (depending on the device and data).

Workers can make rendering smoother but input slower.

Use workers if:

  • The job blocks for a long time.
  • The job involves a small amount of input and output.
  • The job follows the request response model.

Avoid workers if:

  • The job relies on DOM.
  • The job needs minimal delay.

Make sure the worker is worth the cost.

Key messages:

  1. Chunk up work and prioritize tasks.
  2. See if your framework uses a scheduler.
  3. Offload long tasks to workers.
  4. Try out Task Worklet.

Architecting Web Apps - Lights, Camera, Action!

Paul Lewis, Chrome Engineer and Surma, Developer Advocate.

Too much is going through the main thread.

Should we just add threads to the platform?

JavaScript is inherently single-threaded. Each thread is essentially isolated.

3D UI? VR? Voice? The demands just keep getting worse.

The Actor Model is about 45 years old. They realized it's a good fit for the web.

Imagine a UI actor, another actor for state, another for storage, another for broadcasting, etc.

They want a separation of concerns.
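A toy version of that shape (my own sketch; the experiments they showed were more elaborate):

```javascript
// Each actor owns its own state and reacts only to messages it receives;
// actors never touch each other's internals directly.
class Actor {
  constructor(handlers) {
    this.handlers = handlers;
    this.state = {};
  }
  send(type, payload) {
    this.handlers[type]?.call(this, payload);
  }
}

const stateActor = new Actor({
  increment() { this.state.count = (this.state.count || 0) + 1; },
});
const uiActor = new Actor({
  render({ count }) { this.state.lastRendered = `count: ${count}`; },
});

stateActor.send('increment');
uiActor.send('render', { count: stateActor.state.count });
```

Because every interaction is a message, an actor can later be moved off the main thread (or even to a server) without changing its callers.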

Every time you ship a message, you're giving the browser a chance to ship a frame.

They talked about "some actors not running on the main thread" and "location independence."

They're not launching a product, framework, or library, but they've been playing around with these ideas.

They used xstate for their state machine.

They plugged TypeScript. (I haven't heard them mention Dart other than within a joke.)

They're using Preact.

Certain APIs are only available on the main thread--not just the DOM, but also things like media, etc.

The cost of a thread hop might be more expensive than leaving the actor on the main thread.

They moved a lot of non-chatty actors to workers.

However, sometimes they run them on the server-side. Think of the server as just another actor.

They used Firebase.

BYOF = bring your own framework

Beware of actor perf challenges.

This is not appropriate for all use-cases.

But, it's just experimental.

Take home message: they're excited about the Actor model. (I know I've been excited about it since 2006.)

From Low Friction to Zero Friction with Web Packaging and Portals

Kinuko Yasuda, Chrome Engineer; Rudy Galfi, Product Manager; Sumantro Das, Director, Product Innovation and Growth Brands, 1-800-Flowers; and Rustam Lalkaka, Director of Product, Cloudflare.

In today's web, you can feel the navigations.

They talked about instant search.

They talked about AMP HTML and JavaScript, AMP cache, and platform prerendering.

You have to worry about the privacy implications of loading a page before the user picks it.

They talked about the AMP viewer header which is used for source attribution below the URL bar. You need that since the URL refers to Google.

They started down the path of improving URLs for AMP pages.

They want to show the publisher's URL in the URL bar, but with the instant loading of AMP.

They're extending these learnings to the whole web, and they're growing standards.

Web Packaging provides privacy-preserving, instant navigation for all.

She talked about prefetching.

There is a proof of origin that goes with the web package. Cryptographically signing lets the browser verify the origin of the resource.

It's a bundled exchange.

They showed demos from 1-800-Flowers and Cloudflare.

They did a Google search. They showed an AMP badge. Clicking on it made the site load instantly. The URL shown was the publisher's own URL instead of Google's cache URL.

AMP Cache Flow -> AMP Packager -> AMP Cache.

Cloudflare plugged Cloudflare Workers which let you write JavaScript code and run it on V8 at the edge.

Portals let you turn your navigations into "transitions".

They showed a demo where he clicked on an image, and then zoomed into that image. This was a navigation from one domain to another.

Portals are like iframes, but you can navigate into them by "activating" them. You can even add an animation.

It's still in the early stages.

Bundled exchanges are on the horizon. This allows multiple signed exchanges to go into one bundled exchange.

They will enable offline loading of cross-site content.

Some of these things are in the more distant future.

State of Houdini

Surma, Developer Advocate.

Style, layout, paint, and composite are the four stages in the rendering pipeline.

Houdini is a standards effort in the CSS world to expose hooks in the various rendering stages.

They're working with other browsers.

There are a bunch of APIs under the one umbrella.

There are four major, higher-level APIs, and there are also some new lower-level APIs.

"Worklets" are a new, general-purpose feature that they are using to achieve high performance with these APIs.

Worklets != Workers.

Workers get their own event loops. They can communicate with the main event loop.

Worklets are isolated. They don't get their own event loop. They're cheaper to create. They're mostly stateless.

For paint, there's a Houdini API called the CSS Paint API. You can teach CSS how to draw the exact look you want.

You load a JavaScript file into the worklet.

There's a CSS namespace.

There's a registerPaint function.

Here's how to use your painter in CSS: background-image: paint(my-paint)
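A sketch of the registerPaint shape (the painter class and the name 'my-paint' are illustrative; in real use this code lives in a worklet file loaded via CSS.paintWorklet.addModule):

```javascript
// A painter class: paint() receives a canvas-like drawing context and
// the size of the element being painted.
class CheckerPainter {
  paint(ctx, size) {
    // Fill the top-left quadrant as a trivial example.
    ctx.fillRect(0, 0, size.width / 2, size.height / 2);
  }
}

// registerPaint only exists inside a paint worklet's global scope.
if (typeof registerPaint === 'function') {
  registerPaint('my-paint', CheckerPainter);
}
```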

There was some bug, and his system got hosed. It froze Chrome. Someone yelled, "Are you running Slack?" Everyone laughed. He had to reboot. And then it crashed again. He said, "Houdini broke my laptop!" Poor guy.

He had to resume the talk later.

Never animate width and height ;)

The worklet code doesn't run on the main thread. (Or at least, it doesn't in theory. Currently, it does still run on the main thread in Chrome.)

There's no DOM overhead.

His machine crashed again. It said, "Ah, snap!"

His demo showed a really neat sparkling effect.

It's more efficient in a paint worklet rather than doing it in the DOM. Hence, it'll run better on low-end devices.

He added his own CSS attributes.

You can use it for border-images.

He came up with the "3PigStability index" based on the story of the 3 pigs.

The CSS Paint API is stable.

The next Houdini API is for compositing. It's called the Animation Worklet API.

There are currently three animation APIs:

  1. CSS Transitions
  2. CSS Keyframe Animations
  3. The Web Animations API (which is badly supported)

You can create a WorkletAnimation. You link in a separate JavaScript file. You register an animator. You can implement arbitrarily-complex time sources (instead of using the real clock) to control the animation.
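A sketch of the animator shape (illustrative; in real use it's registered with registerAnimator inside an animation worklet):

```javascript
// An animator maps the incoming time onto the effect's local time.
// Here, a sine curve warps time instead of the real clock's linear progress.
class SineAnimator {
  animate(currentTime, effect) {
    effect.localTime = Math.abs(Math.sin(currentTime / 1000)) * 1000;
  }
}

// registerAnimator only exists inside an animation worklet's global scope.
if (typeof registerAnimator === 'function') {
  registerAnimator('sine-animator', SineAnimator);
}
```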

He implemented a bouncing animation. It runs on the compositor thread, not on the main thread. Even if the main thread is busy, it'll still be buttery smooth.

The spec is a bit less stable.

There's a CSS Layout API. It's not stable at all.

You can define your own display elements.

He showed how to create a custom layout algorithm.

He created "Is Houdini ready yet?"

Building Engaging Immersive Experiences

Chris Wilson, Developer Advocate and John Pallett, Product Manager.

The "Immersive Web" is:

  • Virtual Reality: immersing yourself in an alternate reality.
  • Augmented Reality: immersing your computer in your reality.

Google has been exposing this to the web for a while.

There's a WebXR polyfill that you can use even if you don't have the right hardware.

He talked about a browser inside the VR world showing content with further VR content.

The Chrome VRBrowser is used by 83% of Daydream users.

With AR, your computer looks at your reality and integrates into it.

There are various systems for AR.

There is also camera pass through on a mobile device.

The web is a great fit for immersive computing. It's very ephemeral.

The WebXR Device API is new. The WebVR API is deprecated.

Lots of companies are working on this stuff together.

He's on the working group. They're trying to finish the standard.



He mentioned the WebXR polyfill. This polyfill runs WebXR on top of WebVR. It's implemented in pure JavaScript. It works in any browser.

Cardboard VR also works in any mobile browser.

You can add AR to an existing 2D website.

You can put virtual objects in reality.

Showing things in the context of the real world provides a lot of benefits.

Using the phrase "view in your room" makes more sense than trying to use the term "AR".

The realism could be better, but it looks surprisingly good.

You might want to start with three.js. It's a helper library for WebGL. That way you don't have to use WebGL directly.

A "reticle" is a UX widget. It's a circle shown on the screen over the real world in the background. It's like a cursor.

3D models can be difficult.

This stuff is super early. You can create 3D models without 3D programming. It works across browsers. This stuff will improve over time, and if you use this library, you'll get those benefits for free.

Using WebAssembly and Threads

Alex Danilo, Developer Advocate and Thomas Nattestad, Product Manager.

WebAssembly is a new language for the web. It's not a replacement for JS. It's a compilation target for other languages. It offers maximized, reliable performance.

It's supported by all four major browsers. It's the first new language supported in every browser since JavaScript itself.
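To make "compilation target" concrete, here's a complete WebAssembly module — hand-assembled bytes for a function that adds two 32-bit integers — instantiated from JavaScript (my own minimal example, not from the talk):

```javascript
// The binary encoding of a module exporting add(a, b) = a + b.
const bytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // magic + version
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0, local.get 1, i32.add, end
]);

const { exports } = new WebAssembly.Instance(new WebAssembly.Module(bytes));
exports.add(2, 3); // → 5
```

In practice you'd never write bytes by hand — Emscripten or another toolchain emits them — but the module/instance/exports shape is the same.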

This level of performance unlocks new possibilities.

It enables amazing portability and new levels of flexibility.

There were new goodies added in the last year:

  • Source maps: These are super important for debugging your code.
  • Streaming compilation: It compiles your code as you download it. It's almost always true that compilation is faster than downloading.
  • Liftoff: This is a new compiler. There's also a TurboFan optimizing compiler. It made things more than 7X faster.
  • Threads: This is the biggest, new feature.

He showed demos from SketchUp.

He mentioned Emscripten.

Common core code written in C++ -> Emscripten -> WebAssembly.

Google Earth has been ported to WebAssembly as well. It works in Chrome and Firefox. And they're using threads.

He also showed Soundation. This enables online music creation for casual producers.

He mentioned Audio Worklets which is a new feature in Chrome.

JavaScript <-> WebAssembly mixing engine <-> shared buffers <-> JavaScript

Adding threads greatly improved their performance.

There are a bunch of amazing community projects. For instance, Mozilla brought Rust to WebAssembly.

There are other languages which can run via WebAssembly including Go, Perl, Python, Ruby, Kotlin, .NET, and PHP.

However, there is no built-in garbage collection, and doing garbage collection yourself is really experimental.

The three that work the best are C, C++, and Rust.

Microsoft Windows 2000 can run in a browser tab.

Single-threaded web applications leave so much of the CPU unused.

With WebAssembly threads, you can use lots of cores.

Web Workers run concurrently. There is the main thread and then background threads which can't access stuff like the DOM.

More workers == more V8 instances. Each instance uses a lot of memory. The instances don't talk to each other. There's no shared state. Each has a separate copy of V8 in memory. They talk to each other using postMessage.

WebAssembly threads share the same module. The WebAssembly module is shared between multiple workers. Shared array buffers are shared across workers. V8 can also see the SharedArrayBuffer.
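The shared-memory model can be sketched in plain JavaScript (illustrative; in real code the second view would live in a different worker that received the buffer via postMessage):

```javascript
// Unlike postMessage's structured clone, a SharedArrayBuffer is a single
// block of memory that every worker holding it sees simultaneously.
const sab = new SharedArrayBuffer(4);
const view = new Int32Array(sab);

// Atomics makes cross-thread reads and writes well-defined.
Atomics.store(view, 0, 42);
Atomics.load(view, 0); // → 42, from any thread holding `sab`
```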

emcc is the Emscripten compiler.

You can configure the pthread pool size at compile time. The threads will be spawned at startup time. You should specify the maximum expected number of threads. You should tune this.

You have to enable this using Chrome flags.

You need an origin trial token. You need to embed it in a meta tag. You need this for WebAssembly threads.

You can now single step over WebAssembly instructions in order to debug your code. However, source map support is way better.

"Unlock the power of the super computer sitting in your pocket right now."

The Virtue of Laziness: Leveraging Incrementality for Faster Web UI

Justin Fagnani, Chrome Engineer.

Do less. Be lazy. Take breaks.

This will lead to better user experience, performance, responsiveness, and developer experience.

The stuff he's showing is experimental, but it's also based on current browser features.

He's using Web Components and lit-html.

Web Components = custom elements + shadow DOM

Web Components + lit-html = LitElement. You can declare observable properties. (It sounded a little like MobX.)

Batch work for better perf and DX

His code doesn't do a new render every time you set a single property. (This sounds like actions in MobX.)

LitElement rendering is always async.

Non-blocking rendering

Break up the rendering. Use a task per component. Each thing should fit in under 10ms. He's talking about how he implemented LitElement.

React is working on async rendering.

He showed smooth animations in LitElement by making use of React's Sierpinski triangle demo.

He talked about scheduling. He'd like to use a native scheduling API.

Managing async state for great UX and DX

He talked about fetching data asynchronously in order to render it.

lit-html handles promises automatically.

He implemented something called until(). It takes a promise and renders the result when it resolves. It renders a placeholder in the meantime.
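The idea behind until() can be sketched independently of lit-html (my own simplified version, not lit-html's directive):

```javascript
// Render a placeholder immediately, then re-render with the real value
// once the promise resolves.
function until(promise, placeholder, render) {
  render(placeholder);
  return promise.then((value) => render(value));
}
```

In lit-html the "render" step is the template re-evaluating; here it's reduced to a callback so the control flow is visible.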

(He's basically re-implementing his own framework with many of the same features as the other frameworks.)

runAsync() performs an async operation, but only when the data it depends on changes.

He showed an example with search that properly handles all the states (initial, loading, error, and success). He built separate UI for each state.

Coordinating async UIs for better UX

Coordinate things with events and promises.


He talked about the various approaches to showing a hierarchy of components, each with different things loading.

Chrome OS: Ready for Web Development

Dan Dascalescu, Partner Developer Advocate and Stephen Barbers, Chrome Engineer.

Why develop on Chrome OS? It brings together Web / PWA, Android, and Linux tools.

You can have a terminal, VS Code, a browser, etc.

Chrome OS powers a wide variety of devices and even types of devices.

Why target Chrome OS? It has a large market share. Optimizing for Chrome OS will also improve other form factors. You'll also future-proof yourself against yet-to-be-imagined devices.

Why? Diversity! Chrome OS brings together Linux stuff, different form factors, and browsers (Android and Linux). There's a diversity of Android browsers that work on it, including Edge. (But isn't this running via Android, and hence doesn't it use Chrome's rendering engine?)

You can install Docker. Docker support is unofficial though. There's a thread on Reddit. See r/crostini.

Chrome OS is fast, simple, and secure.

It's based on a containers architecture.


  • Chrome OS UI
  • Chrome | Android | Crostini
  • Chrome OS System Services

Linux is very isolated from the rest of Chrome OS. They added the ability to share files. Google Drive support is coming soon. (It has always amazed me that Google Drive runs in the cloud on Linux servers and is partially built by developers running Linux workstations, but Google Drive doesn't have an official Linux client.)

There's a lightweight VM and a container. It is tightly integrated with Chrome OS.

You get a terminal.

It feels like most other Linux systems. It's based on Debian stable.

It targets web developers first.

The container supports port forwarding, so it doesn't feel like a separate container.

They'll soon add support for: USB, GPU, audio, FUSE, file sharing, etc.

They want to let developers do everything they need to do locally.

Most things work as expected, including almost anything you can install with apt.

Consider using Carlo instead of Electron. You don't have to ship Chromium. It uses the local version of Chrome.

Or, just build a PWA. You can install those.

Mac is finally getting installable PWAs.

You can run Chromium for Linux on ChromeOS.

You can also use Chrome Dev (on the Android side?) talking to a server on the Linux side.

He showed remote debugging from Chromium on Linux talking to Chrome Dev on the Android side.

Things aren't terribly fast.

He showed VS Code running.

When it comes to optimizing PWAs for Chrome OS, it's kind of a non-issue. Use Lighthouse to learn more.

Can I detect whether or not my code is running on Chrome OS? Not really. Use feature detection instead.

Make your app responsive.

A ton of users order Starbucks from their desktops.

Pointer Events are a unified UI for all sorts of different types of input.

In the future, they'll work on:

  • Improving the PWA implementation.
  • Low-latency canvas rendering.

Remember that Chromebooks run Linux natively and that they have access to the Android Play store.

The end.

