Monday, November 14, 2016

Chrome Dev Summit 2016 Day 1

I attended day 1 of Chrome Dev Summit 2016. Here are my notes:
The intermissions between the talks were really entertaining ;)

Keynote: Darin Fisher: Chrome Engineering

Mission: Move the web platform forward.

Over 2 billion active Chrome browsers.

Bluetooth beacons are broadcasting the URL for the Chrome Dev Summit website.

Mobile web usage has far eclipsed desktop web usage.

Almost 60% of mobile is 2G.

India is experiencing the most growth in people getting online.

A lot of Indian users have so little storage space on their phones, they can't afford to install a lot of apps. They routinely install and then uninstall apps.

The web works really well in emerging markets.

Progressive Web Apps are radically improving the web user experience.

He demoed a Progressive Web App. He showed a smooth experience even though his phone was in airplane mode. The icon was added to his home screen.

11/11 is Singles' Day in China. It's happening right now. It's the biggest shopping day of the year there. Alibaba built a Progressive Web App, and it increased conversion by 76%.

53% of users abandon sites that take longer than 3 seconds to load.

It's all based on the Service Worker API. It lets you control your caching strategy and offline approach.
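As a sketch of what that caching control looks like in practice, here's a "cache, falling back to network" strategy written as a plain function. The `cache` and `fetchFn` parameters are stand-ins for the browser's Cache Storage and `fetch()` so the logic is visible outside a real service worker; this is illustration, not production code.

```javascript
// "Cache, falling back to network": serve from cache when possible,
// otherwise hit the network and remember the response for next time.
async function cacheFirst(cache, fetchFn, request) {
  const cached = await cache.match(request);
  if (cached) {
    return cached; // served from cache: instant, and works offline
  }
  const response = await fetchFn(request); // cache miss: go to the network
  await cache.put(request, response.clone()); // store a copy for next time
  return response;
}

// Inside a real service worker this would be wired up roughly as:
//   self.addEventListener('fetch', event => {
//     event.respondWith(
//       caches.open('v1').then(cache => cacheFirst(cache, fetch, event.request))
//     );
//   });
```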

Apps should be interactive within 5 seconds on a 3G connection.

Users re-engage 4x more often once you've added the site to the home screen.

Prompting users sooner yields 48% more installs.

18 billion push notifications are sent daily.

50,000 domains use push.
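The push subscription flow can be sketched like this. The `registration` parameter stands in for a real ServiceWorkerRegistration so the flow can be shown outside the browser, and `serverKey` is an assumed application server key; treat this as a sketch, not the definitive integration.

```javascript
// Sketch: subscribe the user to push notifications via the Push API.
async function subscribeToPush(registration, serverKey) {
  const existing = await registration.pushManager.getSubscription();
  if (existing) {
    return existing; // already subscribed: reuse the subscription
  }
  return registration.pushManager.subscribe({
    userVisibleOnly: true, // Chrome requires notifications the user can see
    applicationServerKey: serverKey,
  });
}
```

The resulting subscription's endpoint is what you'd send to your server so it can deliver pushes later.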

Seamless Sign-In using the Credential Management API: -85% sign-in failure, +11% conversion for Alibaba.

Lighthouse is a Chrome extension that they built to help you optimize your Progressive Web App.

There's a new Security tab in Chrome Dev Tools.

Polymer is sugaring on top of Web Components that makes the web platform more capable.

Browsers have implemented Web Components, minus the HTML imports part.

Polymer App Toolbox makes it easy and fast to prototype a Progressive Web App, and to transition from prototype to real app.

AMP: Accelerated Mobile Pages

700,000 domains are using AMP.

Progressive Web App Roadshow:

He mentioned Web Assembly, WebGL, WebVR, AR (augmented reality) etc.

Imagine you walk up to a device. It uses web Bluetooth to share a URL. The URL takes you to a website that allows you to control the device. The website uses web Bluetooth to control the device.
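That walk-up-and-control flow might be sketched as below. The `bluetooth` parameter stands in for `navigator.bluetooth`, and `'battery_service'` is just an example GATT service filter, not anything from the talk.

```javascript
// Sketch: discover and connect to a nearby device with Web Bluetooth.
async function connectToNearbyDevice(bluetooth) {
  const device = await bluetooth.requestDevice({
    filters: [{ services: ['battery_service'] }], // prompts the user to pick a device
  });
  const server = await device.gatt.connect(); // GATT server used for reads/writes
  return server;
}
```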

There are a lot of subtle anti-Trump references from the various speakers.

Building Progressive Web Apps


A 1MB download can cost up to 5% of the monthly salary for some people.

She talks to lots of companies about being early adopters of web technologies.

Flipkart built a Progressive Web App that was really successful.

They saw as much traffic on mobile web as on their mobile app on the biggest shopping day of the year in India.

Smashing Magazine: A Beginner's Guide to Progressive Web Apps

What is a Progressive Web App: it's about radically improving the web user experience.

It should be your main web site. There's a baseline checklist and an exemplary checklist.

She mentioned the Lighthouse Chrome extension.

Alibaba conversion rate improvement:
  • +14% iOS
  • +30% Android

They were paying $3.75 to acquire new Android users. However, it only cost them $0.07 to acquire new mobile web users.

Lyft is no longer app-only. They built a web app, and they got 5X more traffic than they expected.
  • Android app: 17MB
  • iOS: 75MB
  • PWA: < 1MB

They found that for first-visit bookings, PWA users book 3X more often than app users.

IBM acquired The Weather Company (The Weather Channel).

Things they're talking about:
  • Progressive Web Apps
  • Accelerated Mobile Pages
  • Push Notifications
  • Seamless Sign-In
  • One Tap Checkout

Seamless Sign-In on the Web

54% of users will abandon rather than register.

There's an autocomplete attribute that you can use on your login forms.

92% of users will leave a site instead of resetting login info.

2-cookie handoff using Service Workers.

There's a Credential Management API.

It's a standards-based browser API.

It's in Chrome.
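A sketch of the silent sign-in flow: the `credentials` parameter stands in for `navigator.credentials`, and `signInFn` is an assumed app-specific function that posts the credential to your backend. The option names follow the Credential Management spec, but this is a sketch rather than a drop-in integration.

```javascript
// Sketch: try to sign the user in automatically from stored credentials.
async function autoSignIn(credentials, signInFn) {
  const cred = await credentials.get({
    password: true,      // allow stored password credentials
    mediation: 'silent', // no UI; resolves null if user interaction is needed
  });
  if (!cred) {
    return null; // fall back to showing the normal sign-in form
  }
  return signInFn(cred.id, cred.password);
}
```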

Automatically sign in the user on the homepage.

  • 11% increase in conversion
  • 85% drop in sign-in failures
  • 60% decrease in time spent signing in
It can work with federated accounts such as logging in with Google or Facebook.

Google Smart Lock: the browser remembers your credentials. Then, on another device, it'll automatically sign you in. Nice demo.

Google Smart Lock even works with federated accounts.

It also works even if you've signed in in multiple ways.

Really impressive.

Faster Web Payments



Let the browser help you with checkout.

Mobile has 66% fewer conversions than desktop.

Chrome started with autofill. Set autocomplete types on the checkout forms.

But, today, he's talking about PaymentRequest. It's a new payments API for the web.

It's *not* a new payment method or a processor/gateway.

It's a standards-based API baked into the browser.

The API is based on native UI.

Really seamless. No keyboard usage needed (except typing in the CVC).

For now, Chrome supports credit cards and Android Pay.

It's really about helping the user avoid filling in a bunch of form fields. The browser can help a lot.
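A minimal sketch of constructing a PaymentRequest. `PaymentRequestCtor` stands in for `window.PaymentRequest`, and the payment method list, labels, and amounts are made up for illustration.

```javascript
// Sketch: build a PaymentRequest from a payment method list and a total.
function buildPaymentRequest(PaymentRequestCtor, totalValue) {
  const methods = [{ supportedMethods: 'basic-card' }];
  const details = {
    total: {
      label: 'Order total',
      amount: { currency: 'USD', value: totalValue },
    },
  };
  return new PaymentRequestCtor(methods, details);
}

// In a browser, treat it as a progressive enhancement:
//   if (window.PaymentRequest) {
//     buildPaymentRequest(window.PaymentRequest, '9.99').show().then(/* ... */);
//   } // else: fall back to the regular checkout form
```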

Kill the cart: give the user an option to "Buy Now".

Think of this as a progressive enhancement.

PaymentRequest works best for new users and guest checkout flows.

Coming soon: support for third-party payment apps.

Integration guide:

I think this is only mobile for now.

Debugging the Web


The call stack now looks better, even when you shrink the window.

Chrome supports ES6 pretty well. Even in the console, ES6 is supported pretty well. Even async/await.
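For example, here's the kind of tiny async/await snippet you can now paste straight into the console:

```javascript
// Wrap setTimeout in a promise so it can be awaited.
function delay(ms, value) {
  return new Promise(resolve => setTimeout(() => resolve(value), ms));
}

async function fetchGreeting() {
  const who = await delay(10, 'world'); // await works on any promise
  return `hello, ${who}`;
}
```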

You can now enter multiple lines in the console without hitting shift-return. It indents as you go. It does brace matching. He pointed out that Safari had this first.

Tab completion works in the console quite nicely. It even works for arrays once you hit '['. The tab completion works with substring completion instead of just prefixes--just type any part of the word.

There's a snippets tab in the sources tab where you can have snippets of code that might be useful. These are persistent.

Inline breakpoints: You can place a breakpoint, and then it allows you to drag the breakpoint within the line.

Note to self: never use the unauthenticated GitHub API during a demo--you might get rate limited. At the very least, have some static data to use as a backup.

The workspaces feature is improved: In Sources: Network: Drag an OS X folder onto the left pane. Without configuration, you should be able to edit your code in Chrome. It works even if you're using transpilation with sourcemaps.

There's a new shadow editor.

As of today, in the Canary, there's a checkbox with CSS coverage. You hit record, and then start using the site. It'll mark the CSS styles that were never used.

He's using iTerm2.

node --inspect app.js

It gives you a URL. That's the old way.

However, in Sources, there's a Node thing under Threads. Then, in Console, there's a context.

You can use a single instance of Dev Tools to debug both the frontend and the backend.

Application tab >> Manifest: Tells you about the state of your Service Workers.

There's also Application >> Clear storage.

Application: Service Workers >> Update on reload. Lets you reload the service worker every time. There's also "Bypass for network". That's useful to skip the service worker.

He plugged the Lighthouse extension again. It's useful to turn a web app into a Progressive Web App. It also helps audit for best practices. Captures performance metrics too.

It's just a Node module. A lot of tools are already building on top of it. You can even set it up for use in CI.

Chrome Dev Tools >> Settings >> More tools >> Audits 2.0 is powered by Lighthouse.

They wrote


He's using Visual Studio Code a bit.

Polymer, Web Components, & You


"We are at war with phones."


Polymer is the Chrome team's set of libraries for web developers to take advantage of the web platform.

Web components is an umbrella term for a bunch of low-level features.

Web Components v1 is a real thing today. All major browsers are on board with everything--except HTML Imports, which is still on hold.

Cross-browser web components are here.

Polymer is a lightweight layer on top of Web Components. It has evolved with Web Components.

Polymer 1.0 is production-ready. That was released last year.

Comcast, USA Today, ING, Coca Cola, EA, etc. use Polymer.

It's used by Chrome, Play Music, YouTube Gaming, YouTube (their next version of mobile and desktop app).

There's a Polymer App Toolbox that they released a while ago to show how to build full apps:
75% of mobile connections in sub-Saharan Africa are 2G. It's forecasted to be 45% in 2020.

She showed off Jumia Travel. It uses very little data, yet still provides a nice UI.

They're working on Polymer 2.0. They just released a preview release.

They're focused on:
  • Web Component v1 support.
  • Better interoperability with other JavaScript libraries. It's truly web native, and looks just like a normal web component.
  • Minimal breaking. There's an interoperability layer. You can incrementally upgrade.
Polymer is only 12kb.

Mobile web development is hard, expensive, inefficient, slow, and confusing.

There are about 500+ apps using Polymer.

There's an event system.

Use properties, not methods.

Use = true instead of

Properties go down. Events go up.

"Fear driven development." Be afraid of your users. Be afraid of breaking changes.

tattoo = Test All The Things Over & Over = it's their repo for testing Web Components.

Be afraid of perf regressions.

polydev is a tool that you can use to see how expensive the various elements are.

polyperf lets you compare the performance between changes.

demo-snippets shows you an element and how to use it. It's dynamic, so you can edit the code.

There's a catalog of re-usable elements.

Progressive Performance


Great talk!

He showed this comic:

"So it takes a lot for me to get to this point. But seriously folks, time to
throw out your frameworks and see how fast browsers can be."

He mentioned extensibleweb/manifesto.

  • Respond: 100ms
  • Animate: 8ms
  • Idle work: 50ms chunks
  • Load: 1000ms to interactive
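The "idle work in 50ms chunks" guideline can be sketched as a queue drained under a deadline. The `deadline` parameter stands in for the object `requestIdleCallback` passes you; this is a sketch of the pattern, not a complete scheduler.

```javascript
// Drain a queue of small tasks, yielding whenever the deadline says
// our idle budget is used up.
function drainDuringIdle(queue, deadline) {
  let completed = 0;
  while (queue.length > 0 && deadline.timeRemaining() > 0) {
    const task = queue.shift();
    task(); // each task should be small so we never blow the 50ms budget
    completed++;
  }
  // if work remains, the caller schedules another requestIdleCallback
  return completed;
}
```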
53% of users bounce from sites that take longer than 3 seconds to load.

The average mobile site takes 19 seconds to load.

"Your Laptop Is A Filthy Liar": Simulating real devices using Chrome is just not the real thing.

Use Chrome inspect in order to use USB to debug real devices.

It's really, really hard to hit RAIL on mobile, even with a reasonable phone like a Nexus 5X.

"The Truth Is In The Trace". Use Dev Tools attached to real devices.

He tests with cheap, slow phones. He doesn't think that emulation cuts the mustard.

If you're using a $700 iPhone and assuming the other users are like that, you're wrong. World-wide phones are getting slower as more poor people buy cheap phones.

Desktop is 25X faster than mobile.

He added an ice pack to the bottom of a phone in order to keep it cool, and his benchmark ran 15% faster ;)

Desktops are faster because they can dissipate more heat.

He recommended the paper, "Dark Silicon and the End of Multicore Scaling".

We can't use all the silicon in our phones because of the heat dissipation constraints.

And the battery has only 10 watt hours.

We can't dissipate enough power, and we can't carry enough power.

A cell phone battery can't keep a 60W light bulb lit for more than a few minutes (10 Wh ÷ 60 W = 10 minutes).

big.LITTLE refers to the technique of having a mix of:
  • Infrequently used "big" (high power) cores
  • Aggressively used "little" (lower power) cores
You get what you pay for when it comes to mobile devices.

"Touch boost": when you touch the screen, it turns on the more powerful CPUs.

"Benchmarketing" is a thing.

A MacBook Pro has the maximum amount of power stored in a battery that you can bring onto a plane.

Nexus 5X has 2 fast cores and 4 slow ones.

Flash read performance:
  • MBP: ~2GB/s
  • N5X: ~400MB/s
Mobile phones use fewer flash chips which means less parallelism which means slower.

Think of mobile phones as having spinning disks from 2008.

"High Performance Browser Networking" is a really good book. He highly recommends it.

"Mobile networks hate you."

"Adaptive Congestion Control for Unpredictable Cellular Networks".

What's really killing you is the variance / volatility.

LTE speed is actually getting slower.

Different markets are wildly different.

"A 4G user isn't a 4G user most of the time." -Ilya Grigorik

It's really hard to get anything on the screen in 3 seconds because it takes so long to even spin up the radio.

The tools and the methods that we've brought over from the desktop just aren't working.

Using the platform well is the only way to go.

Today's frameworks are signs of ignorance or privilege or both. It's broken by default.

Server-side rendering = isomorphic rendering = universal rendering

The Pixel phone is pretty good. Most phones aren't.

Scrolling != interaction.

We want progressive interactivity.

Only load the code that you need right now.

Using HTML imports with tiny scripts means you don't have big bundles that make the thread choke.

Service workers lets you return something quickly from cache.

The top-level SW shouldn't depend on the network. Don't use a SW that hits the network first.

"Mobile Hates You; Fight Back!"

Test on real hardware.

Use a Moto G4. Use Chrome Inspect on Dev Tools. That'll show you what it's like in the median real world.

Use WebPageTest since it lets you use real mobile phones.

PRPL is important (but I don't know what it's about yet).

The challenge is larger than you think it is.

Real Talk About HTTPS


Over 50% of pages are loaded over HTTPS.

For desktop platforms, 75% of time is spent on HTTPS. It's probably on top sites.

For some reason, it's somewhere around 33% for Android. They don't know why, but their theory is that those users aren't using sites like Gmail and Facebook the way they would on desktop, because they use apps for those instead. Hence, any web usage is most likely on longer-tail sites.

They're restricting powerful features if you're not using HTTPS. Geolocation is one example.

Mozilla is only going to support new features over HTTPS. From Ilya Grigorik.

HTTP/2 is only available if you're also using HTTPS.

HTTP/2's performance advantages can more than offset the cost of TLS.

They have a goal of HTTPS everywhere.

At this point, we can't yet use an insecure icon every time a site uses HTTP. It would just desensitize people to the risk.

Let's Encrypt: Free, easy certificates. It's a certificate authority.

Chrome made a large donation to Let's Encrypt.

Its usage is growing exponentially.

All Google sourced ads are already served over HTTPS.

Moving to HTTPS can be done with no impact on search rankings.

There's a Security tab in Chrome Dev Tools. It can help.

They want to start easing users into thinking that HTTP is bad.

Chrome will start saying "(i) Not secure" in the URL bar for pages that have passwords or credit card inputs on them.

Building a Service Worker

They built a service worker in real time.

He used async and await. I think he said it's only in Canary, though :-/

The service worker will look to see if it itself has been updated.

He's using VS Code.

A Service Worker can be spun down even if the tab is still open.

Wow, they keep making the Service Worker more and more complex. It seems like a giant house of cards.

There's an offline checkbox in Chrome Dev Tools in the Application tab, and there's also one at the top of Dev Tools. They're not in sync.

He's clear that his example is not production code.

Cache invalidation is hard.

That was some pretty impressive real-time coding.

Planning for Performance


"Mobile web" is no longer a subset of the web. It is simply the web.

It's more than just responsive CSS.

Mobile has been bigger than desktop since 2014.

The average mobile web user is not on a $600+ phone. They're more likely to use the free phone you get when you sign up for a plan.

The average Android phone has 1GB or less of RAM.

The average load time for mobile sites is 19 seconds.

HTTP2 + link rel="preload"
<link href="bower_components/polymer/polymer.html" rel="preload">
He used this trick to move from 5 seconds to 3 seconds from first byte to first useful paint.

H2 Server Push:
  • Push assets from the server to the client before the client even requests them.
  • It's not cache aware.
  • It lacks resource prioritization.
  • But, H2 Push + Service Workers = awesomeness
He went from 5 seconds to 1.7 seconds.

Preload is good for moving the start download time of an asset closer to the initial request.

H2 Push is good for cutting out a full RTT if you have SW support.

Older phones are slow when parsing a ton of JavaScript. Even relatively recent phones suffer because of how much JavaScript there is to parse.

There is no simple way to see the parse cost of JavaScript.

There's some cool new thing in Chrome Canary: V8 internal metrics in timeline

Webpack Bundle Analyzer is very helpful to figure out what's in your bundle.

Fetch but don't parse:
<script src="..." type="inert">
Put it in the page, but don't eval it until you strip the comments out:
<script id="inert">
    /* (...) */
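The activation side of that trick can be sketched as below: the script body ships inside a block comment, and is only unwrapped (paying the parse cost) when actually needed. The helper name here is mine, not from the talk.

```javascript
// Sketch: recover the source hidden inside a /* ... */ wrapper.
function unwrapInertSource(text) {
  // strip the outer /* ... */ wrapper, keeping whatever is inside
  const match = text.match(/\/\*([\s\S]*)\*\//);
  return match ? match[1] : text;
}

// In the browser, you'd then evaluate the unwrapped source, e.g.:
//   const inert = document.getElementById('inert');
//   eval(unwrapInertSource(inert.textContent));
```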
He talked about Webpack require.ensure and the aggressive splitting plugin.

Test on real devices on mobile networks.

Optimize network utilization: sw, preload, and push for fast first loads.

Parsing JavaScript has a cost: ship the smallest amount of JS possible.

Predictability for the Web

This was a product manager-like talk about making Chrome and the web in general more predictable.

The browser companies are working together more and more.

He talked about resolving bugs in Blink, the engine behind Chrome.

2/3rds of the top 1% of bugs have been fixed.

Using chrome://flags, you can turn on "Experimental Web Platform Features".

They fix regressions very quickly.

They do want to occasionally deprecate things.

Chrome shipped something that improves power consumption substantially by not running requestAnimationFrame for iframes that aren't visible.

Filing or starring a bug is the quickest way to get your issue resolved.

They built a Browser Bug Searcher to search for bugs across browsers.

They're trying to treat the web as a single platform.


A lot of Google apps aren't Progressive Web Apps.

One of the most difficult things to debug in web apps is memory leaks.

HTML imports will probably not survive the way they are, but eventually, there will be something better.

A few people complained about Chrome Web Apps being deprecated.

Friday, October 14, 2016

Books: Self Leadership and The One Minute Manager

I finished reading Self Leadership and The One Minute Manager. In summary, it was short, easy, and moderately useful.

Most of the book is written as a story about an advertising account executive who's having a hard time at work and feels like he's right about to lose his job. That made it interesting and very easy to read. There are a few nuggets of wisdom. However, I wouldn't put it on the same level as some of my other favorite books, like "Getting More: How You Can Negotiate to Succeed in Work and Life".

Nonetheless, at a mere 140 pages, it was worth reading, and I think it'll impact my thinking going forward. For instance, have you ever experienced being really excited about starting a new job, but then feeling like you were right about to quit (or on the verge of getting fired) once reality hit, and you realized it was going to be a lot harder than you originally thought? It talks a lot about coping with that.

Disclaimer: The book was given to me.

Monday, October 10, 2016

My Short Review of Visual Studio Code

I decided to try out Microsoft's Visual Studio Code. I think it's a useful open source project with a lot of potential, and I congratulate Microsoft on their contributions to the open source community. On the other hand, I think I'll stick with IntelliJ IDEA Ultimate Edition and Sublime Text 3 for the time being. I used it for a few days after watching some of the videos. What follows is not a full review, but rather a few thoughts based on my experience.

VS Code is usable. On the other hand, a few of the extensions that I picked were buggy. They either munged the source code in clearly broken ways, or they caused the editor to go into a weird, infinite loop where it kept trying to edit and then save the text. I think the situation will improve with time--Rome wasn't built in a day.

One thing I really missed was being able to search for multiple things at the same time. In IntelliJ, I often start a carefully crafted search as part of a refactoring effort. That search tab might be open for a day or more. However, I can start up as many additional search tabs as I need. NetBeans also had this feature. I couldn't figure out how to do it in VS Code.

In general, it seems like the search interface was designed more by designers than by hard core engineers. Looking at the image, imagine trying to do a regex-based search. You have to click on the tiny ".*" symbol that's printed using gray text on black. Then, the search results themselves are shown using an inadequate amount of horizontal space. It all feels very dark and cramped.

Emacs does something useful when you hit Control-k: it deletes to the end of the line. If you hit it again, it deletes the line ending as well, which joins the next line with the current line. If this were a feature in just Emacs, it wouldn't be very important. However, this feature works pretty much everywhere in OS X, so it's something I've learned to rely on. It doesn't work quite right in VS Code. Here's the bug.

Many editors (Sublime, Vim, etc.) can by default rewrap a paragraph of text, i.e. re-insert newlines so that the text has line lengths of consistent width--not just in how you see the text, but also in the file itself. This is a critical feature for those of us who like to edit lots of plain text files. This isn't built into VS Code. However, there's a plugin--no biggie. Some editors (Vim, Sublime) get bonus points for doing this really well, such as being able to rewrap a paragraph of text even if it has comment symbols (like '#') at the beginning of each line.

One more thing I really missed from Sublime Text is that even if I'm in the middle of editing a file and have unsaved changes, if I close and restart the app, it puts things back to exactly the way they were before I closed it. In VS Code, I have to work to re-open the files I had open, etc. This is very inconvenient if you need to restart your editor because you installed an extension, or for any other reason.

The configuration system is pleasant. I liked the fact that it's based on text, and that there are global settings, user settings, and project settings. I can imagine committing settings files so that everyone on the same project shares project settings. Similarly, I liked the fact that a lot of commands are meant to be searched for using fuzzy search--the UI for this was nice.

The Git integration was nice, but perhaps inadequate. I very often have to do much more than git add, git commit, and git push. Using the command line, using Tower, and using IntelliJ's Git UI (which is kind of awkward compared to Tower, by the way), are all doable. I don't feel like I could use VS Code's Git integration without falling back to using the command line a lot.

My buddy said he preferred VS Code over IntelliJ because IntelliJ had too much "bloat". Whether or not a feature is bloat is certainly a matter of opinion. For instance, he would never rely on IntelliJ to work with Git. He'd only use the command line. I can use the command line or IntelliJ, but I really prefer using IntelliJ for dealing with messy rebases. IntelliJ's rebase flow is incredibly helpful. He'd also never use IntelliJ to refactor an expression to use a variable or rename a local variable (which isn't based on simple search and replace, but is actually based on understanding a programming language's scoping rules). Those are things I rely on IntelliJ to do very often. Hence, I think it's fair to say that IntelliJ is a little bit greedy when it comes to memory (it'll use everything you give it). On the other hand, it has tons of very advanced features that actually do help me quite a bit on a daily basis. There were a lot of features I missed when I used VS Code.

The last thing I'd say about VS Code (which I hinted at earlier) is that it made me feel very cramped and uncomfortable. I felt like it was difficult to see the text and I felt like I was typing with 10 thumbs. I don't know if it was because of the dark theme and my aging eyes (none of the themes felt exactly right), or if it was because of my inexperience with it. I don't remember feeling this way when I first tried Sublime Text. It was this uncomfortable feeling that pushed me back toward using IntelliJ and Sublime Text 3.

Nonetheless, I suspect it'll continue to get better. The plugins will stabilize. Missing features will be added. Soon, it'll be yet another perfectly good editor that some of my friends swear by. I remember going to a talk by Bram Moolenaar, the author of Vim. Someone asked why his Vi clone succeeded while all the other Vi clones didn't. He said it was because he kept making it better. I think that's good advice ;)

Sunday, October 02, 2016

Having Fun with Linux on Windows on a Mac

I thought I'd have a little fun. Here's a picture of Linux running on Windows on a Mac. I'm running Linux in two ways:

  • In the top left, I'm running bash on Ubuntu inside Docker for Windows.
  • In the bottom left, I'm not actually running Linux. I'm running "Bash on Ubuntu on Windows" using Microsoft's "Windows Subsystem for Linux".

I'm running it all on a Mac I borrowed from work because I don't actually own any Windows machines :-P

Monday, August 15, 2016

Ideas: Mining the Asteroid Belt

Disclaimer: I don't even pretend to know what the heck I'm talking about. Feel free to ignore this post.

I've been thinking lately about efficient ways to mine the asteroid belt. My guess is that there's a lot of useful raw materials out there, but getting them back to earth is kind of a challenge.

Now, in my thinking, I'm presupposing that we have a working space elevator. Nonetheless, it's still a challenge because the asteroid belt is so far from Earth's orbit. It would take a lot of time and energy to travel there and back in order to gather materials. Certainly, we'd need some robotic help.

However, the distance (and time involved) becomes less of an issue once you have a system in place. To use an analogy, selling whiskey that's been aged for 10 years is only difficult when you're waiting for those first 10 years to pass. After that, there's always another batch about ready to be sold.

One problem is that it would take a lot of energy to move large amounts of raw materials back toward earth. Inertia sucks when you're trying to move heavy things. Furthermore, you have to somehow transport that energy all the way out there. Sure, a space elevator might help you get things off the surface, but the asteroid belt is still a long way away.

The next problem is that, "For every action, there's an equal and opposite reaction." You can waste a lot of fuel trying to push the raw material toward Earth. However, it might be helpful to push some material toward Earth and an equal amount away from Earth.

Next up, consider that it would be useful to push the material toward a position near Earth's orbit so that it can be captured and brought down to the surface. Hopefully, we won't mess this part up and doom humankind to the same fate as the dinosaurs ;)

I was thinking of creating three bundles of raw materials at a time and then using a contraption between the three bundles to send the three bundles in different directions. By varying the angles and the relative weights of the bundles, you could slightly vary the speeds. The contraption might look like a triangle with a piston on each side. I'm not sure what is the best way of causing the pistons to move.

However, one good thing is that you'd be able to reuse the contraption over and over again. Only the raw materials would move. The contraption would stay in place to be reused.

It would probably make sense for bundles headed toward earth to have some sort of propulsion and positioning mechanism. The contraption that I mentioned above is only there for initial launch.

Thursday, August 11, 2016

JavaScript: ForwardJS

Here are my notes from ForwardJS:

My favorite talks were:

Keynote: On how your brain is conspiring against you making good software

Jenna Zeigen @zeigenvector.

I particularly enjoyed thinking about how this talk relates to politics ;)

She studied cognitive science. She wrote a thesis on puns.

"Humans are predictably irrational." -- Dan Ariely

"Severe and systematic errors."

Humans aren't great logical thinkers.

People will endorse a bad argument if it leads to something they believe to be true. This is known as the belief bias.

"Debugging is twice as hard as writing a program in the first place" -- Brian Kernighan

We tend to interpret and favor information in a way that confirms our pre-existing beliefs.

We even distrust evidence that goes against our prior beliefs. It's even harder for emotionally charged issues.

We have a tendency to be rigid in how we approach a problem.

We sometimes block problem solutions based on past experiences.

We often have no idea we're going to solve a problem, even thirty seconds before we crack it.

Breaks are more important than you think.

Creativity is just about having all the right ingredients.

Again, we tend to think about problems in fixed ways. That makes it harder to understand other people's code.

We prefer things that we have made or assembled ourselves.

We're bad at making predictions about how much time it will take us to do something.

We tend to be pessimistic when predicting how long it will take other people to do something, and optimistic when predicting how long it will take us to do something.

We think that bad things are more likely to happen to other people than to us.

We're actually pretty good at filtering out unwanted stimuli, but we're not totally oblivious to it. Selective attention requires both ignoring and paying attention.

We're very limited in our mental processing. For instance, we can't do a good job writing code while doing a good job listening in on someone else's conversation.

We're sometimes helpless to the processing power of our brain.

Software is about people.

Relatively unskilled people think they are better at tasks than they actually are.

We tend to overestimate our own skills and abilities. 90% of people think they're above average at teaching.

Skilled people underestimate their abilities and think tasks that are easy for them are easy for others.

She mentioned Imposter Syndrome and said that the Wikipedia page was pretty good.

We favor members of our own in-group.

We prefer the status quo.

We weigh potential losses caused by switching options to be greater than the potential gains available if we do switch options.

We're liable to uphold the status quo, even when it hurts other people.

People have a tendency to attribute situations to other people's character rather than to external factors.

People have a tendency to believe that attributes of a group member apply to the group as a whole.

We rely on examples that come to mind when evaluating something.

We assume things in a group will resemble the prototype for that group, and vice versa.

In some cases, we ignore probabilities in favor of focusing on details.

Diversity is important for a team. The more diverse the team, the more diverse their background, the more creative they can be.

Real-time Application Panel

I didn't think this talk was very interesting.

There were 3 people on a panel: Guillermo Rauch (he wrote, Aysegul Yonet (she works at Autodesk), and Daniel Miller (a developer relations guy from RethinkDB).

The RethinkDB guy said that RethinkDB doesn't "have any limitations or make any compromises" which seems quite impossible. Certainly, it can't violate the CAP theorem ;)

From their perspective, real-time is a lot about getting updates from your database (change feeds) as the changes come in.

Some databases can give you a change feed.

The XHR object has an upload event so you can track uploads.
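To make this concrete, here's a minimal sketch (my own, not from the panel) of tracking an upload with the XHR upload object. The `/upload` endpoint and the `uploadPercent` helper are invented for illustration.

```javascript
// Pure helper (illustrative): percent complete from a progress event's fields.
function uploadPercent(loaded, total) {
  if (!total) return 0; // total is 0 when the length isn't computable
  return Math.round((loaded / total) * 100);
}

// Browser-only wiring, guarded so this sketch is inert outside a browser.
if (typeof XMLHttpRequest !== 'undefined') {
  const xhr = new XMLHttpRequest();
  xhr.upload.addEventListener('progress', (e) => {
    if (e.lengthComputable) {
      console.log('upload: ' + uploadPercent(e.loaded, e.total) + '%');
    }
  });
  xhr.open('POST', '/upload'); // hypothetical endpoint
  // xhr.send(formData);
}
```

Note that the progress events fire on `xhr.upload`, not on the XHR object itself.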

Real-time is about minimizing latency.

REST doesn't fit very well with push architectures / WebSockets.

Sometimes you need to be notified that a long running job has completed.

WebSocket tooling can be insufficient.

It's unclear what's going to happen with HTTP/2 vs. WebSockets.

The fact that HTTP/2 provides multiplexing is perhaps an advantage over WebSockets especially when you have wildly different parts of the client app talking to the server.

Server push is an exciting part of HTTP/2.

2 members of the panel were excited about observables and RxJS.

Fetch, the new way of making requests, doesn't have a way of aborting.

The RethinkDB guy gave a shoutout to MobX, but he didn't seem to know much about it.

(By the way, looking around, I see a sea of Macs. One guy said there were only 3 Windows machines in the workshop he was in.)

There was a little bit of talk about REST perhaps not being the best approach for a lot of situations.

Fireside Chat: Serverless Application Architecture

This was another panel, but I thought it was pretty good. Alex Salazar (from Stormpath) and Ben Sigelman (who wrote Dapper, the distributed tracing system, at Google) were the two people on the panel.

Stormpath has adopted this sort of architecture. The company itself provides authentication as a service.

"Serverless architecture" is a serious buzzward. It started 6 months ago. It's trending a lot right now.

Alex says it can refer to two different things:
  • Using services like Twilio, etc.: He says this is more "backend as a service".
  • Functions as a service: He says this is more accurate. He mentioned AWS Lambda. The idea is that you can write some business logic specific to you, and you don't have to manage anything that even looks remotely like a server.
When people say "servers", they're kind of referring to VMs. With serverless, you don't even need to think about VMs.
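Here's a rough sketch (mine, not from the panel) of what "functions as a service" looks like in the AWS-Lambda callback style. The event shape (`{ name }`) and the greeting logic are made up for illustration.

```javascript
// A Lambda-style handler: just a piece of business logic. The platform wires
// it to triggers (HTTP requests, queues, etc.); you never see a VM.
function handler(event, context, callback) {
  if (!event || typeof event.name !== 'string') {
    callback(new Error('missing name'));
    return;
  }
  callback(null, { greeting: 'Hello, ' + event.name + '!' });
}
```

The whole deployable unit is that one function, which is what makes it possible to update one piece of logic without versioning a whole application.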

There's a lot of cross-over with microservices.

Although Heroku got pretty far, he says serverless goes beyond what Heroku provides and is different from it.

Heroku automates server work (i.e. a worker). You're still writing a Python, JVM, or Node application.

Google App Engine, Heroku, etc. were all trying to be platforms as a service. It's still a backend, monolithic application on their architecture.

Serverless in the lambda model is not a server application. It's a set of finite functions that run in a stateless environment. He says it's a superior model depending on your use case.

Stormpath started with a Java monolith. They moved to asynchronous microservices. They spent a lot of time looking at AWS Lambda. They wanted to update and deploy modular pieces of code without versioning the whole system. For instance, they wanted to be able to update just one function.

Ben is terrified of CI (Continuous Integration) and CD (Continuous Delivery). Some PRs (pull requests) may not have considered all the weird interactions that actually happen. He thinks the pendulum might swing back the other way so that there's one deploy per day.

Monolithic apps can be easier since the different parts are all versioned together. It's scarier with microservices because there might be more version mismatches.

Stormpath tried to move to the Lambda model, and it didn't work. Latency is a real problem. Stateless pieces of code take a while to spin up and spin down. Functions that aren't used very often take a while to spin up--especially with the JVM. They went from serverless to even more servers. It resulted in more infrastructure, not less.

NoSQL databases are beneficial, but there's also too much hype. They kind of hid their drawbacks, which was bad. Case in point: see the guy from RethinkDB above :-P

Ben said that Lambda is the absolute extreme of where this movement will go. Consider GRPC, Finagle, Thrift; can we have hosted services for these things with a little caching?

Anytime you have latency issues, you apply caching.

The difference between memcached vs. in-memory caching (i.e. in the current process) is huge. So stateless has a huge drawback since you can't do in-memory caching. If you have to cache anything between requests to minimize latency, Lambda isn't the right thing.
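To illustrate why that matters, here's a tiny in-process memoization sketch (my own example, not from the panel). A cache hit here is a `Map` lookup; a memcached hit would be a network round trip.

```javascript
// In-process cache: a plain Map keyed on the argument.
function memoize(fn) {
  const cache = new Map();
  return function (arg) {
    if (cache.has(arg)) return cache.get(arg);
    const value = fn(arg);
    cache.set(arg, value);
    return value;
  };
}

// Hypothetical expensive call, counting invocations to show the cache working.
let calls = 0;
const slowSquare = (n) => { calls++; return n * n; };
const fastSquare = memoize(slowSquare);
```

In a Lambda-style stateless environment, there's no guarantee the process (and thus the `Map`) survives between requests, which is exactly the drawback being described.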

If performance is important for you, Lambda (serverless) isn't ready yet. That's not true of microservices, though.

If you need to create a chain of these functions as a service, then maybe you shouldn't be using serverless. The latency compounds. One time, he was doing 4 chains, and he was seeing multi-second latency.

Serverless is really good for certain applications.

The history of computing is all about more and more abstraction.

From microservices to serverless is a natural transition.

What will the learning curve be like for learning how to use this model?

Stormpath had to invent a lot of stuff to adopt the model because a lot didn't exist. This included testing, messaging, authentication, etc.

He likes async, promises, etc. Scala and Node both get this right, though Ben thinks Scala is "ridiculous". If you're all asynchronous anyway, it's easier to move functions off the server into a separate service.

Promises need to have more things like deadlines, etc. He thinks promises are a good fit.

It's an anti-pattern to wait on anything that's not in process.

A lot of stuff from Scala (async, reactive, etc.) is making its way back into Java.

JavaScript developers are already in the async paradigm, which is why they adapt to this stuff more easily.

The average Java developer hasn't wrapped his head around async yet.

Stormpath is hoping to provide authentication as a service for services, not just authentication for users.

Ben thinks security needs to be applied more at the app layer. Auth, security, provisioning, monitoring, etc. should all start happening at the app layer.

There is indeed business risk in depending on a bunch of external services like Lambda, Stormpath, etc. There's no silver bullet. Can you trust the vendor? What's the uptime and SLA? Now, with SaaS, you're not just depending on them for code, but also for ops.

Having an SLA for high percentile latency (99%) is important. Uptime doesn't mean anything. However, historical uptime is still important.

Ben says that the main advantage of the monolithic model is that all deps get pushed out every week along with any updates to the code.

Your team needs processes and automation. Test before things go out. Integration testing is how you keep things from blowing up.

The Web meets the Virtual and Holographic Worlds

This was a very fun talk by Maximiliano Firtman, @firt, author of High Performance Mobile Web! I can't find the video, but if you can find it, it's worth watching. He's written and translated a bunch of books. He's very engaging.

He started by remembering the web from 22 years ago.

He says that the web has been trapped in a 2D rectangle.

New worlds:
  • Physical world
  • Virtual reality
  • Mixed reality (holographic world)
Immersion modes:
  • Seated mode
  • Room space mode
You can be a part of all of this, even as a web developer!

We're at a point now like where mobile web was 10 years ago.

VRML is from 20 years ago! We're not talking about it anymore. Technology has changed.

Content types:
  • Flat web content (this stuff still works on these devices)
  • 3D world (stereo)
  • 360 still or animated content
  • Apps
  • Websites
  • PWAs (Progressive Web Apps)
Human interfaces:
  • Gaze (move your head to select things)
  • Voice
  • Touch controls
  • Clickers (remote controls)
  • Controllers
  • Your body (mostly hand gestures)
  • Mouse and keyboard (good for mixed reality)
Hardware:
  • Oculus Rift (Windows)
  • HTC Vive
  • Cardboard (Android, iOS)
  • Gear VR (by Samsung, powered by Oculus; has 1 million users)
  • LG 360 VR (LG G5)
  • Hololens (this is the most different; uses Windows 10; not connected to a computer; mixed reality)
New worlds:
  • Virtual / mixed reality
  • Physical world
  • Immersion modes
  • Human interface
  • Hardware
User experience:
  • Safari or Chrome (iOS or Android) with Cardboard:
    Use a gyroscope to recognize your position.
  • Samsung Internet (browser) on Gear VR:
    It's a 3D environment, but the web is still a box in that environment. It has different modes for watching videos.
  • Microsoft Edge on Hololens:
    This was a really neat demo! You can use hand gestures. He showed a hologram floating in your room that you can see using the headset. You can walk into the "window". It's like it's floating in space. It recognizes the room around you. You can put a hologram in a certain place. For instance, you can have a window running Netflix that lives in your bathroom.
APIs and specs:
  • Web VR:
    • You can get data about the device's current capabilities.
    • You can poll the HMD (head-mounted display) pose.
    • You can get room scale data (data about the size of the room).
    • You can request a VR mode which is like the fullscreen API.
    • There are two different versions of the API. He recommended API version 1.0 with the changes they've made to it. It's optimized for WebGL content.
    • You can use this API to get data from the devices. This API is not for drawing to the screen.
    • It's supported by Chrome, Firefox, and Samsung Internet browser.
    • There's a polyfill to work on other browsers.
  • Web Bluetooth:
    • This allows you to talk to any type of Bluetooth device.
    • You can scan for BLE devices.
    • You can scan for services available.
    • You can connect to the services.
    • Note, Bluetooth is complex.
    • This is only in Chrome, and it's hidden under a flag.
  • Other APIs
    • The ambient light API lets you get info about the current lighting conditions.
    • There's a gamepad API. Chrome is the furthest ahead on this API.
    • There's speech synthesis and recognition. This allows you to interact with the user by voice. Note that synthesis is more broadly supported than recognition.
    • Web audio lets you generate dynamic audio, including 3D audio. You can do ultrasound communication with devices. He recommends a library called Omnitone for doing spatial audio on the web. This API is available all over the place.
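Here's a small feature-detection sketch for a couple of the APIs above (my own example, not from the talk). It's written against a navigator-like object so the logic is easy to exercise; the property names (`getVRDisplays`, `bluetooth`) match the 2016-era proposals, and support has varied since.

```javascript
// Feature detection written against a navigator-like object so it can be
// exercised with a plain mock object.
function detectFeatures(nav) {
  return {
    webvr: typeof nav.getVRDisplays === 'function',
    bluetooth: typeof nav.bluetooth !== 'undefined',
  };
}

// In a real page, you'd pass the global navigator:
if (typeof navigator !== 'undefined') {
  const features = detectFeatures(navigator);
  if (features.webvr) {
    // navigator.getVRDisplays() resolves with the available headsets.
    navigator.getVRDisplays().then((displays) => console.log(displays.length));
  }
}
```

This matches the spirit of the talk: these APIs are behind flags or partial implementations, so you detect first and polyfill or degrade otherwise.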
What we can do today:
  • You can show flat content in these environments:
    • I.e., you can show the kind of content we already have in these environments.
    • It's like a 2D box in a 3D environment.
  • You can show 360 content:
    • You can get the device's orientation.
    • You can touch some of these devices, or move around.
    • There's something called VRView from Google.
    • The Vizor Editor is pretty useful. You can use it to create a 360 degree environment.
    • You can capture 3D content using cameras. However, most browsers don't support live streaming 360 content today. You have to use YouTube if you want that.
    • You can use the Cardboard Camera app from Google. Then use the cardboard-camera-converter.
  • VR 3D:
    • This is mostly based on WebGL.
    • You can use ThreeJS with VR support.
  • Holographic experiences:
    • This is only native at this point. You can't do it from the Web yet.
    • AltSpace VR is like a social network using VR. It's a native app. It has an SDK. You can use ThreeJS.
  • More will come in the future...
These are things he expects we'll see in the future:
  • New Devices:
    • Daydream from Google looks pretty cool.
  • Think about pixels:
    • What do they mean now in a VR environment?
    • How do we think about bitmaps? Can we use responsive images based on distance? See MIP Maps.
      • There's a new format called FLIF. It has multiple resolutions in the same file.
  • Future CSS:
    • We might see media queries based on how close the user is.
  • New challenges:
    • Consider infinite viewports.
    • If I increase the size of a virtual window, what should happen? Should it show more content, or should it show the content larger?
    • Augmented reality physical web:
      • Laforge:
        • Like Google Glass but not from Google.
    • We need more from Edge and Windows:
      • Holographic
    • New challenges:
      • Responsive VR.

React Native: Learn from my mistakes

This was a talk from Joe Fender. He's from a company named Lullabot. He lives in London. Here are the slides.

"We create digital experiences for the world's best brands."

(React is very popular at the conference.)

(There are a lot of women at the conference. It's not all men.)

Not as many people use React Native.

Being a web person is enough to build mobile apps.

What's the fuss? Developing native mobile apps is a pain. It's difficult to code the same thing twice (Java and Swift), especially if you're not very familiar with those languages.

PhoneGap/Titanium just didn't feel right. Furthermore, there are some memory and performance limitations. You're stuck with a non-native UI. It's a WebView embedded in a mobile app. Furthermore, it's a bit weak on community support.

React Native is really simple. It allows you to make truly native apps. React itself is nice. He thought coding in JavaScript was nice. His team was more successful than when they were trying to write the app in Java and Swift.

10 things he wishes he knew. These things weren't obvious to him when he started:
  1. You need to know some things ahead of time such as JS, ES6, React (including the component lifecycle), and the differences between various devices (i.e. their differing UIs and features).
  2. You need to learn about the various React Native components. React Native has a whole bunch of components--about 40. You can get a lot done with just the core components. However, there are also contributed packages--about 200 of them. Sometimes you'll need to write your own components.
  3. You need to think a lot about navigation. It's a really important part of the app. How will users get between screens? How will they get back? How will the transitions look? NavigatorIOS is good, but it's only for iOS. Navigator is cross-platform. NavigatorExperimental is very bleeding edge.
  4. You need to think about how data flows through your app. Consider how your app will scale. Consider using Flux or something similar. He used Redux and liked it.
  5. Think about how you will structure your code if you need to support both iOS and Android. Will you use a single code base? Will there be platform-specific code? Will you write expressions like { iOS ? 'nice' : 'ok' } to match the different platform conventions? Some people have very high expectations and expect your app to look very native. You can get started with npm install -g react-native-cli and then react-native init MyApp. You can have an index.ios.js that just requires index.js. Note, many components work on both platforms, and usually the component will take care of platform-specific things.
  6. Flexbox is a really great way to lay out components on the screen. It's responsive. That's really important in mobile. It's pretty much the way to go with React Native. There's a bit of a learning curve, though. It's just a different way of thinking about things.
  7. You still need to test on real devices. Simulators are insufficient. It took them a while to realize this:
    1. The performance is very different. Consider animations and load times. Your laptop is too fast.
    2. The networking is very different. With a mobile device, you're not always on a nice WiFi connection. What if you're disconnected or on a 3G connection? WebSockets behave differently when you have a spotty network.
    3. Push notifications are a pain. You can't test this on the iOS simulator. For Android you can.
    4. Real devices don't have separate keyboards. This caused a big problem for them. Using an on screen keyboard takes up half the screen, and some of their layouts were incompatible with that.
    5. Think about landscape mode. Think about different layouts.
  8. Debugging is important. The developer menu in React Native is very helpful. In general, React Native has a really nice developer experience. On iOS, shake the device. There's a Debug JS Remotely feature so that you can debug via Chrome's DevTools. This supports live reload, and it only takes 100ms to reload new code. console.log() works too. You can pause on caught exceptions and add debugger; to your code like normal. Unfortunately, the React Dev Tools Chrome extension doesn't work with React Native.
  9. Be aware of the React Native release cycle. It's every 2 weeks. Sometimes, it's weekly. It's very fast, and very bleeding edge. His company is 10 versions behind. In general, the release notes are quite good. He recommends that you don't try too hard to stay on the most recent version.
  10. Remember that you'll need to release your app since it's a mobile app. Submitting your app to the app store is not easy. You need certificates, icon sets, a privacy policy, a support page, etc. It took them 2 weeks. Furthermore, it takes Apple a while to approve it. You also need to think about things like automated testing. React Native has a bunch of useful stuff to help. You can also use ESLint, which is nice.
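The platform-splitting idea in point 5 can be sketched as a pure selection function. React Native's real mechanism is Platform.select (plus the .ios.js/.android.js file suffixes); the function and labels below are a standalone stand-in I made up so the pattern is visible.

```javascript
// Choosing platform-specific values from one code base. This mimics the
// shape of React Native's Platform.select; the labels are invented.
function select(platform, options) {
  if (platform in options) return options[platform];
  return options.default;
}

// Hypothetical example: button labels that match platform conventions.
const doneLabel = (platform) =>
  select(platform, { ios: 'Done', android: 'OK', default: 'OK' });
```

Keeping the platform forks in small, data-like expressions like this is what makes a single code base workable.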
React Native has Android, but it came a bit late.

His company didn't have any problems with missing APIs.

They used Parse which is an external DB as a service. Unfortunately, Parse is going away.

He thinks Firebase has good React Native support.

If you want to have really native-looking components, it's going to be hard to just use one codebase with flexbox.

Dealing with WebSockets was a pain.

Push notifications are pretty different between platforms.

Building Widget Platform with Isomorphic React and Webpack

Unfortunately, I didn't think this talk was very good or very useful for most people.

It was given by a guy named Roy Yu.

He was considering SEO value and discussing his architectural decisions.

He was creating isomorphic widgets that were shared by PHP, Java, and Python.

React isn't so simple once you hit the advanced stuff (e.g. high order components vs. dumb components).

I knew the keywords he was using, but I didn't understand what he was trying to convey to me.

He says that Webpack is more powerful than Grunt and Gulp.

He introduced React and Webpack a little.

You can package and deliver widgets to third parties using a range of mechanisms:
  • S3, CDN
  • Web component / open component
  • Public / private registry
  • Docker container as a resource / API proxy
You can have an API that other servers hit to fetch HTML.

I left early.

Generating GIF Art with JavaScript

This was a fun talk by Jordan Gray, @staRpauSe.

Here are the slides. They contain lots of fun GIF art.

He pronounced GIF with a hard g.

He works at Organic and Codame (in his spare time). Codame unites art and tech.

"Creativity is the basis for life."

He releases everything he does under Creative Commons. He likes the remix culture.

His stuff has showed up in major magazines, films, etc.

GIFs have stood the test of time. The format is 30 years old. It's completely free of patents.

They're inherently social.

Creativity loves constraints, and GIFs have interesting constraints. They only permit so many colors, frames, and dimensions.

Constraint considerations:
  • Tumblr is the holy grail to target. There's no censorship.
  • 2 MB 540px wide (or less) is a good target.
  • He talked about VJing (like DJing, but with artwork). He talked about GIF Slap. You must target 5 MB or less.
  • He talked about PixiVisor. It supports any height as long as it's 0.5625 times the width (i.e. 64x36). You can embed the GIF in audio.
  • You can target CAT Clutch which is for LED displays. They are 32x16 pixels in size.
  • You can use GIFPop Prints to print the GIF on a "lenticular" piece of paper. You're limited to 10 frames, but you can have up to 1500px. This was really neat.
How to make GIFs:
  • Use gif.js.
  • In general, it's really easy.
  • Define a GIF. Add frames. Render it.
  • You can use three.js to do 3D manipulation, etc.
  • The best resource he offered is
  • He also talked about
  • Anything else that renders to canvas is usable as well. See
  • is incredibly helpful. It's "A lightweight graphical user interface for changing variables in JavaScript."
  • He mentioned jzz.js. It's an Asynchronous MIDI library. It lets you control a web page with an actual MIDI device. is a good example. They used a Kinect and a body-sized turntable to help create the GIFs.
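The "define a GIF, add frames, render it" workflow looks roughly like this (my own sketch against the gif.js API, guarded so it only runs in a browser). The worker/quality options and the `frameDelay` helper are illustrative choices, not requirements.

```javascript
// Pure helper (illustrative): GIF frame delay in ms for a target frame rate.
function frameDelay(fps) {
  return Math.round(1000 / fps);
}

// Browser-only gif.js sketch: define a GIF, add frames, render it.
if (typeof window !== 'undefined' && typeof GIF !== 'undefined') {
  const gif = new GIF({ workers: 2, quality: 10 }); // typical options
  const canvas = document.querySelector('canvas');
  gif.addFrame(canvas, { delay: frameDelay(30), copy: true });
  gif.on('finished', (blob) => {
    // The finished GIF arrives as a Blob you can offer for download.
    window.open(URL.createObjectURL(blob));
  });
  gif.render();
}
```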

gif.js generates the gif, and then you can download it from the web page.

(He's using Sublime Text.)

His next demo was or involved Technopticon.

Next, he talked about the theory of animation. "The Illusion of Life" is really good. Some Disney animators wrote a book about this stuff. There are some YouTube videos about it which looked really good. He talked about "the12principles".

He talked about keyframes vs. straight ahead. It's mostly about straight ahead when generating gifs.

He talked about GreenSock.


He mentioned Movecraft.

React Application Panel

There were 5 people on the panel.

There are some people at Twitter using React.

The main reason people are initially turned off by React is JSX. However, at Facebook, they were using something like JSX for PHP, so it was a natural fit for them.

Reddit started using React.

Here's how one team split up their code:
  • Dumb components
  • Containers
  • Modules
  • Screens (route handlers with React Router)
  • An API layer
Try to reuse data layers and view layers.

Lots and lots of people use Redux.

"It's the web. We'll probably rewrite it in 3 Angular 3."

The Twitter guy does some work to try to keep the connection between React and Redux as thin (small) as possible. He doesn't want them overly coupled.

One person was from Adroll.

One person was from Reddit.

Lots of people started adding React to their existing Backbone apps.

One person said try to use stateless components when possible. Push state as high up the hierarchy as possible.

The Facebook guy didn't like the class syntax. The function syntax makes it clearer that you're just mapping props to render output.

The Facebook guy says that people are a little too scared to use component state, but it's there for a reason.

Time traveling debugging is "sick AF (as f*ck)".

One guy mentioned MobX, but I didn't hear many people mention it at the conference.

One person said to introduce Redux only when it's time to add complex state management. One person said you're going to need it at some point, so you might as well add it to begin with.

The Facebook guy said to start with the simplest thing (component state), and then when you're familiar with the pain points that Redux addresses, you can start using Redux. You probably don't need Redux on day one.
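For context, the pattern Redux formalizes is small enough to sketch: state changes only via dispatched actions run through a pure reducer. This toy version is mine (it is not the real library, which adds much more), with a made-up counter reducer.

```javascript
// A toy version of the store pattern Redux formalizes. Not the real library.
function createStore(reducer, initialState) {
  let state = initialState;
  const listeners = [];
  return {
    getState: () => state,
    dispatch: (action) => {
      state = reducer(state, action); // all changes flow through the reducer
      listeners.forEach((l) => l());
    },
    subscribe: (listener) => { listeners.push(listener); },
  };
}

// Hypothetical counter reducer: a pure function of (state, action).
const counter = (state, action) => {
  switch (action.type) {
    case 'INCREMENT': return state + 1;
    case 'DECREMENT': return state - 1;
    default: return state;
  }
};

const store = createStore(counter, 0);
```

Once you've felt the pain of ad-hoc state being mutated from many places, this "single store, pure reducer" discipline is the thing Redux buys you.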

The Reddit guy said he would add Redux later. Introduce data stores once it becomes a necessity, such as when multiple apps need to access the same data.

The Twitter guy said that it's helpful to keep things separate to make them more testable and adaptable to change.

A lot of people had different ideas of how to do HTTP async with React.

2 people really liked Axios.

Promises are great, but they can't be cancelled, for instance if you move to a different page. They might add that. It's a significant weakness.

The Facebook guy said they use promises on the outside, but something more complicated inside. He gave a plug for await. It has reached stage 4, which means it'll make it to all browsers soon.

One guy plugged RxJS. You can unsubscribe from an observable. Observables are more advanced than promises, and they provide more granular control.
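The unsubscribe point is easy to show with a minimal subject-style stream (my own sketch of the idea, not RxJS itself): subscribing returns a handle you can tear down, which is exactly what promises lack.

```javascript
// Minimal observable-like subject: next() pushes a value to all current
// subscribers; subscribe() returns an unsubscribe handle.
function createSubject() {
  const observers = new Set();
  return {
    next(value) { observers.forEach((o) => o(value)); },
    subscribe(observer) {
      observers.add(observer);
      return { unsubscribe: () => observers.delete(observer) };
    },
  };
}

const clicks = createSubject(); // hypothetical stream of click events
```

After `unsubscribe()`, later values simply never reach the observer, whereas a promise's `.then` callback can't be detached once attached.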

Someone asked what people's favorite side effect library was for Redux. Most people didn't have an opinion. The Twitter guy uses Redux Thunk.

They've been rewriting Twitter from scratch in the last year. A lot of abstractions that were there got in the way. Too many layers of abstraction sucks.

Something like Saga (?) is overkill.

Make the code more approachable.

No abstraction is better than the wrong abstraction. It's ok to duplicate a little code if you don't know the right way to abstract the code to make it DRY. It sucks to have some abstraction that handles 3 use cases, but not the 4th. Too DRY sometimes makes things horrible. Sometimes you need more flexibility. One guy used the acronym WET (write everything twice) and suggested that it was a good approach to avoiding bad abstractions.

Create React App is nice. It helps people get started quicker. One person said that it's awesome.

React is not very prescriptive about which build system or packager you use, although those things are useful. Most people should be using Webpack or something like that.

One person is using CSS modules. She said "it's fine," but admitted that she hates it.

There are so many ways to attack the CSS problem.

The Twitter guys are using Webpack.

React is quite a hackable platform. That's really useful. The Twitter guy uses CSS modules. "It's been ok." Some things are really not fun to debug. It's a black box. CSS really limits your ability to share components across projects in open source because different people use different CSS libraries.

"CSS is like riding a bike...but the bike is on fire...and you're in hell."

It's hard to know what CSS is being applied. If you're looking at CSS, it's hard to know if the CSS is even being used. If you apply CSS in the wrong order, it changes what things look like.

Some people think that styles in JavaScript seems interesting.

The CSS thing isn't really solved.

The Netflix guy says, "Write CSS that can be thrown away." They use LESS. They do some static analysis.

Aphrodite is a CSS library.

The Twitter guy is more excited by the idea of moving CSS into JavaScript since it enables a bunch of new things. Move away from strings to something introspectable. It'll be more powerful and predictable, but it may have performance issues.

One person has a team with a woman who does amazing UX design and amazing CSS.

One woman at Netflix said that they have one CSS file per component. Just separating files is not enough. Putting CSS into the code modules is an interesting movement. The React guy is afraid of it causing performance regressions.

Just like we've moved onclick handlers back into React, we'll probably move the styles back into the DOM code.

Code Trade-offs: Benefits and Drawbacks of Each Decision

This was another 5 person panel. It was ok, but not great. I didn't get all the names, but here are the ones I remember:
  • Ben Lesh from Netflix. I think he wrote RxJS 5.
  • Richard Feldman who wrote Seamless Immutable. He's an Elm guy.
  • Amy Lee from Salesforce.
  • Rachel Myers from Obsolutely, GitHub, etc.
  • Brian Lonsdorf
Abstractions come at the cost of performance.

RxJS is really an abstraction for async. But, he says, start off with just a callback. Maybe move to a promise. Move to Rx when things get more complex.

John Carmack had some nice comments on when to inline vs. abstract. Performance is a consideration. Readability is a consideration. One guy said inline by default, but then pull it out when it makes sense.

One React user said YAGNI (you ain't gonna need it).

You can add the greatest abstraction ever, but if it's not readable, understandable, and greppable, it's not going to help. Abstractions can get out of control pretty quickly.

The Ruby world has no constraints. They look at constraints as if they're a horrible thing. There are 8 ways to add things to an array. In Python, there's one way. The speaker said that that's much nicer.

In Elm everything is immutable, and everything is a constant. NoRedInk uses Elm. They have 35k lines of code. They don't ever get runtime exceptions. Elm makes refactoring very easy. Elm is one of the panelist's favorite programming languages.

RxJS started using TypeScript. Dealing with users using it from JavaScript is kind of painful.

With MVC, you think you can replace each of the pieces separately, but in practice, this doesn't really happen. All three pieces are very wed to each other. It's the "myth of modularity."

Simple Made Easy is a great talk.

Drawing the lines of modularity in the wrong places causes a lot of pain. Doing modularity just right is really hard.

Concise naming patterns are really important when you're doing modularity.

Generic code and interfaces are "dangerous". "No abstraction is better than the wrong abstraction."

Imagine you have 3 things, and you notice a common interface among them. Now suppose you create an interface, but then you suddenly get a 4th thing that doesn't match the interface. Now, you're in bad shape.

When you create a public API, you have to have an interface, but creating an interface for your own code when you don't necessarily need one might cause more problems later.

When speaking of adaptation in JavaScript, someone said "It's not a duck, so you punch it until it's a duck."

If you can come up with a good interface that can fit a lot of things, like Observable, it's really useful. It gives wildly different things some sameness.

Then, you can write code that people can understand.

Lisp has a very pleasant interface: a paren, a command, some stuff, a paren.

Rx is arguably a DSL.

The key is finding an interface that's really simple that can solve a wide variety of needs. It's tough. You sacrifice some specificity.

On the subject of principles:

Someone said, "I used to have principles. I no longer do." For instance, I used to always have unit tests. Then, she started a company, and figured integration tests were good enough.

One guy wrote a lib with a little DI. He achieved 100% code coverage. Then, he ended up breaking it anyway. Unit tests are not enough. He is a little integration test heavy. Integration test first, unit test second. He was told the opposite, but changed his mind.

Another person said: unit tests have a lot of value, but you also need integration tests. You also need to test with real users.

One guy's principle was: put UX first. The end user is more important than clean code.

Principle: If I feel 100% sure I'm right, that worries me. Question your own decisions.

Principle: never write the same thing twice unless there's some really good reason. (Presumably, they've written "if" more than once :-P)

Monday, August 08, 2016

JavaScript: Mastering Chrome Developer Tools

I went to an all day tutorial on Mastering Chrome Developer Tools. It was my favorite part of the whole conference. Here are my notes:

Jon Kuperman @jkup gave the talk.

Here are the slides. However, there isn't much in them. Watching him use the DevTools was the most important part of the tutorial. I did my best to take notes, but of course, it's difficult to translate what I saw into words.

He created a repo with some content and some exercises. Doing the exercises was fun.

Chrome moves really fast, and they move things around all the time.

Everything in this talk is subject to change. For instance, the talk used to talk about the resources panel, but that's now gone. Now there's an application panel.

In the beginning, there was view source. Then we had alert; however, you can't use alert to show an object; you have to show a string. Then, there was Live DOM Viewer. Then, there was Firebug. It kind of set the standard. Firefox has completely rewritten their dev tools before, and they're rewriting them again.

Here's the current state of browser developer tools:

Firefox's dev tools are a few years behind Chrome. For instance, it can't edit source files, and it doesn't have support for async stack traces like Chrome has.

Safari and Chrome were based on WebKit. Chrome split off and created Blink.

Safari and Opera have really stepped up their game. There are a few features that are in Safari or Firefox that aren't in Chrome. Performance and memory auditing is better in Chrome. Firefox has a better animation tool.

Edge has very rudimentary tooling, but they let you speak directly to a Chrome debugger; it's built in. It's called something like "project to Chrome".

Chrome just came out with a new user interface update to their DevTools.

Chrome has a nice API for their DevTools. They abstract their tools so you can use them with other products.

See node-inspector for using Chrome tools with Node--it's "Node.js based on Blink dev tools." It's coming to Node directly (see v8_inspector).

For React, you need React Developer Tools. There are similar tools for Redux, Ember, etc.

DevTools can:
  • Create files
  • Write code (you can use it as your IDE)
  • Persist changes to disk
  • Do step through debugging
  • Audit pages
  • Emulate devices
  • Simulate network conditions
  • Simulate CPU conditions
  • Help you find and fix memory leaks
  • Profile your code
  • Analyze JavaScript performance
  • Spot page jank
The docs are at, but they're sometimes behind.

He really recommends Chrome Canary. It has all the latest and greatest tools several months before Chrome stable. He uses Chrome Canary for development. He even uses it for his daily driver; he says he hasn't had any stability issues.

All the browsers have canary builds these days.

He also teaches an accessibility workshop. Chrome moves so quickly that they broke something he was teaching, but only half of the people had a version of Chrome new enough for it to be broken. The fact that Chrome has rolling updates means there's actually a lot of variety in the versions of Chrome out there.

Chrome version 52 merged in "most of the stuff" (which I assume means updates to the DevTools).

When it comes to the DevTools, there have been a bunch of UI changes in the last six months, so Chrome Canary looks a bit different.

I asked if you'll run into compatibility issues if you only use Canary. He said it's no worse than deciding to skip cross-browser testing in general. Rendering issues are at an all-time low. However, you probably still need to test different browsers. Of course, if you're supporting old IE (IE 10 and below), you always have to test. Edge is pretty compliant. ES6 support differs across browsers, but a lot of people just use a transpiler.

The nice thing about Canary is that there's only one version of Canary, whereas everyone is running slightly different versions of Chrome because of how they roll out updates.

He likes Frontend Masters workshops. Douglas Crockford has a 13 hour workshop on JavaScript that's amazing. They're nicely edited, and they have nice coursework.

He gave us a quick walk through the panels:

Most people only use the element panel and the console, but there's so much more!

Right click on the page, and click Inspect.

In DevTools, click the three dots in the top right in order to pick where you want the window to be.

In DevTools, click the icon in the top right (the arrow on top of a box), and then click on an element to inspect it.

If you place the DevTools dock to the right, you can drag the dock to the left in order to make your window a certain width. This is a super easy way to test responsive layouts. Make the site 320px wide--that's a good thing to target.

Sometimes he pops DevTools out to a full screen and puts it on a separate monitor.

You can drag the tabs like Console, Elements, Profile, etc. around, and it'll persist your changes.

If you're not on the console, you can press escape to show or hide the console.

You can use Settings >> Reset defaults.

Next to the inspect icon, there's an icon showing a phone and a tablet on top of each other. You can use that to simulate a device. This isn't just changing the screen size; it's also sending a different user agent string, which is important to note. If you want a generic mobile view rather than a particular device, pick "Responsive" at the top of the screen.

In general, use relative units. Don't use pixels for your font sizes. Use ems, rems, etc. Use %s for Flexbox. Flexbox gets iffy with cross browser support.

The reliability of the mobile emulation is pretty good, but you do also need to try it with real mobile devices. Mobile emulation will get you pretty far, though, especially during development.

Twitter has a separate mobile app.

There's device, network, and CPU emulation.

He talked about the DOM representation on the Elements tab. Remember, the DOM representation (i.e. the current state of the DOM) is probably very different than what view source will show (since that shows the original HTML that came down).

You can use $('.tweet') on

Select an element in the element tab, then right click on it, and select scroll into view. That's a great way to find an element on the page.

You can select an element and press "h" to toggle hiding it. It uses visibility: hidden, not display: none.

Twitter has style sheets all over the place. They're bundled in prod, but in dev, there might be something like 13 different stylesheets.

CSS specificity (most to least specific):
  • Style attribute
  • ID
  • Class, pseudo-class, attribute
  • Element selector (like a div selector)
In the Elements tab, go to the Computed tab, and you can see what's actually being applied. Click on the attribute, like border-bottom, and then open the arrow, and it'll show you where the rule comes from. That's a really nice way to find where styles are coming from.

He showed the box model widget on the computed tab. Inner to outer:
  • Element
  • Padding (inside the border)
  • Border
  • Margin (outside the border)
  • Position (like "position: relative; top: 200px")
He started playing around with

Elements >> Computed is really helpful.

Next, he showed Elements >> DOM Breakpoints:

Find an element in the page. Right click on it. Click "Break on...":
  • Subtree modifications
  • Attributes modifications
  • Node removal
Then, it'll break into JavaScript. Hence, if you don't know the code, but you do know the UI, this is a nice way to find the code that changed the DOM.

However, keep in mind that it'll usually break on the jQuery code, not your application code, which is a few layers up the stack. Usually, that gives you enough to backtrack to the app code.

Color formats:

Shift click on any hex value, and it'll switch between different color formats. He prefers hex codes over RGB. If you click on the box of a color, you get a cool screen with a color picker. He talked more about the color picker:

Click on the arrows next to the color palette. It has material design colors. Right click on the color to get various shades.

Find an element. Find the color in Elements >> Styles. Click on the color box. Click on the arrows next to the color palette. It has Material, Custom, and Page Colors. The page colors thing helps you pick a color that matches other colors on the page.

Material Design is Google's color scheme. All those colors look good on white and black. He likes using the Material Design stuff; it makes it easier since he's not a designer.

In Elements >> Styles: You can click on the checkbox next to a rule to apply or unapply a rule to play with colors.

Use Cmd-z to undo. Refresh will also go back to what's in the source.

Workspaces are really helpful so that DevTools writes your changes to your files.

Elements >> Event Listeners is sometimes useful.

Next, he showed the Sources tab:

It looks like an IDE.

Use Cmd-P to fuzzy search for files.

By default, you can make changes. However, if you save and then refresh, you'll lose your changes. Here's how to make your changes persist:

See: Set Up Persistence with DevTools Workspaces

Drag your project folder to the left pane of the Sources tab. Then, you can map local sources to remote sources (I explain this again below). It's a bit clumsy, but then you can actually edit your real code.

Right click in the left pane. Add Folder to Workspace. Pick the local folder. Click on a file from that local folder. When prompted map the file. Then, when you save, it saves to disk. It's awesome, but it's limited.

Anything you change in styles persists to disk (Elements >> Styles). Anything you change in the markup (the DOM thing on the left), doesn't persist to disk (because who knows what code created that DOM).

Only styles defined in external CSS files can be saved.

To see all the limitations, open Set Up Persistence with DevTools Workspaces and search for "there are some limitations you should be aware of".

You have to try it out to get the hang of it.

You can even try this out with your production server. You can still map your files. However, when you save changes, it'll only change your local files--it can't automatically deploy your changes to prod.

It does work with sourcemaps a bit.

There's an experiment for SASS support. They're going to add support for templates.

We should prefer SASS over LESS because LESS will probably go away. Bootstrap was the last major user of LESS, but Bootstrap 4 is moving to SASS.

When viewing a file in the Sources tab, on the lower left, there's a {} icon to prettify the code. It can't fill in minified names, but it gets you pretty far.

Use Elements >> Styles >> + to add new styles to a particular file.

To set up persistence, the key thing is to drag your project folder onto the Sources tab, find the file in your local sources, and double click to open it. Chrome will guess at the mapping.

In the DOM section, you can select an element, right click, and use edit as HTML. Edit the HTML. Then click outside the box. However, these changes can't be persisted to disk.

He's using Atom for his editor.

He's using something called Pug, which is like Mustache.

The color "rebeccapurple" is named in memory of Rebecca Meyer, the daughter of CSS expert Eric Meyer, who died at age six.

If you have things mapped, then with the elements panel, anything you edit is implicitly saved.

Use Elements >> Event Listeners to see all the event listeners for an element. However, this isn't perfect because there might be event abstractions in the way.

Scroll listeners are expensive, so sometimes there's some sort of abstraction, like jQuery's on('scroll').

In Elements >> Styles, there's :hov in the top right. This lets you play with forcing a particular hover state. That way you don't need to keep trying to hover over the element to test out its hover handling.

There are a few things that are only in Chrome Canary.

In Elements, long press on an item, and then you can drag the element around.

Workspaces is one of the cooler things you can do with DevTools. You can go pretty far without using another editor, and you can do design work from the elements panel.

We're going to debug code in the Sources panel.

The step through debugger is pretty top notch and pretty clean.

Click on a line to add a breakpoint. It has to be on an executable line of code. Just move it down or up a line if it doesn't work. Then refresh.

Now, we're in the debugger:

The Watch widget allows you to put in an expression and watch that expression. If you're some place random, most of your watch expressions will be undefined because those variables probably don't make sense in that context.

Even the source code widget has useful stuff in it. If it knows the value of a variable, it'll show you the value in a popup near the variable.

When the debugger is paused, you're in a certain context, and you can interact with the current state using the console. Go to the Console tab, or press escape to open the console at the bottom.

Press the Play icon to resume execution.

He talked about the step over and step into icons near the play icon.

If you press step into, it'll find the next function call and step into it. That's a little different than most debuggers which will simply go down one line if the current line isn't a function. You'll probably want to use step over by default.

Just play with the debugger. You're not really breaking anything.

Pause on all exceptions is pretty helpful. However, sometimes, various libraries are using exceptions internally for various reasons.

In the Call Stack widget, right click on a line and click Blackbox Script. Then, it'll hide all the stuff from that script. That way, you can ignore the framework code. This is per script, per domain. It persists per session. If you restart the browser, you'll lose your blackboxes.

Right click, Add Conditional Breakpoint. Use any expression you want.

There is an XHR Breakpoints widget. You can use that to set a breakpoint that gets tripped anytime there is an XHR that matches a particular URL. That's really useful if you don't know the code very well, but you know the related requests to the server.

You can also just put "debugger;" in your source code. Remember to use a linter to prevent yourself from committing it ;)
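A debugger statement is an ordinary JavaScript statement: when a debugger is attached it acts as a breakpoint, and otherwise it's a no-op, which is why it's so easy to ship by accident:

```javascript
// `debugger;` pauses here when DevTools (or any debugger) is
// attached; with no debugger attached it does nothing at all.
function applyDiscount(price, percent) {
  debugger; // breakpoint-in-code -- lint for this before committing
  return price - price * (percent / 100);
}

console.log(applyDiscount(200, 25)); // 150
```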

Click the "Async" checkbox to turn on async debugging. This captures async stack traces, so you can see the stack trace from before and after the asynchronous activity (such as making an XHR, setting a timer, etc.). It will make your call stacks taller, but it's super helpful for understanding how you got into a particular state.

For most of these things, your changes only impact the current session.

Warning: If you use the GitHub API in your exercises, you might get hit by their rate limiting.

Now we're going to talk about Profiling:

For Google, a half a second page load time increase will result in 20% traffic loss.

Amazon reported 100ms decrease in speed resulted in a 1% sales loss.

We do know that slow sites, non-SSL sites, sites with a bad mobile experience, etc. get penalized by search engines. However, we don't know if it's related to the DOMContentLoaded event, or if they're measuring perceived performance. Google is pushing the RAIL performance model, and it's based on perceived performance.

Twitter measured "time to first Tweet". Facebook has something similar.

Build, then profile, and only if it's a problem, tackle it.

There's an Audits tab in DevTools. It's very high level, but very helpful.

Memory leaks are not very common. Browsers and frameworks are pretty good these days.

He played with a particular course on

Go to a page. Click the Audits tab. Click Select All. Click Reload Page and Audit on Load. Click Run.

It prioritizes and suggests things.

He found that Udemy has 10,429 unused CSS rules on a course's landing page. 90% of the CSS is not used. He says everyone has that. Advanced apps use bundle splitting. It's easy to see the problem. It's much harder to figure out a fix.

Modern web apps tend to put a lot of stuff in the head tag. If you have an async script tag, it doesn't matter if it's in the head. The beginning of the body is fine.

People tend to use Bootstrap, but it has lots of stuff they may not be using.

It's also important to remove CSS that isn't being used anywhere.

Everything should be gzipped. This is a big win.

Udemy's course landing page has 2.7 MB of content when uncompressed. He said that's about average.

In DevTools, it can give you a list of unused CSS rules in the Audits tab.

Here are some common audit problems:
  • Combine external CSS and JS
  • Enable gzip compression
  • Compress images
  • Leverage browser caching
  • Put CSS in the document head
  • Unused CSS rules
If you have HTTP/1, combine separate JS and CSS files. With HTTP/2, keep them separate.

CloudFlare is one of the only CDNs that currently supports HTTP/2.

Combine and minify your assets.

Compressing images is probably one of your biggest wins. He said that it seems like Udemy is doing okay in this regard.

He recommended ImageOptim.

He has the settings set to JPEG 70%, PNG 70%, GIF 40%.

If you don't need transparency, you can switch to JPG.

Browser caching is another big win.

Next, he showed the Network tab:

Hit record. Refresh the page.

Press the camera icon to capture screenshots. It grabs screenshots every step along the way (anytime there's going to be a major refresh).

However, if you refresh, it'll start taking screenshots before the page comes back from the server.

It only shows the visible portions of the screen.

You can get a lot of wins by trying to optimize what things load first. The solutions are little hackier, but you can impact the experience.

Server side rendering would help, but it's hard.

Udemy's loading indicators are making Chrome take a lot of screenshots.

It stops recording when the page is fully loaded.

If you record manually, it'll keep recording and taking screenshots until you explicitly hit stop.

He showed the waterfall in the Network tab.

Hover over the colors in the waterfall to see more details.

If something is "Queued", that means Chrome has postponed loading it.

Chrome prioritizes the order in which to load various assets. CSS is ranked higher than images. Prioritization is mostly by filetype.

Webpack can compile your CSS into your static HTML.

With Google's AMP, all the CSS has to be inlined into HTML.

Performance is often about give and take.

Chrome allows up to 6 TCP connections per origin.

If you see that a resource is "Stalled" or "Blocking", that's usually the result of some weird proxy negotiation.

DNS tends to be pretty stable.

He went through all the different parts of the request, explaining each one. He talked about:
  • Proxy negotiation
  • DNS lookup
  • Initial connection/connecting
  • SSL
  • Request sent, etc.
  • TTFB (time to first byte): This will suffer if your server is slow.
  • Content download/downloading
The important thing is to triage performance problems.

You can get really good information not just about what's slow, but why it's slow.

Common problems:
  • Queued or stalled: Too many concurrent requests.
  • Slow time to first byte: Bad network conditions or slowly responding server app.
The Audit tab has some useful stuff, but PageSpeed Insights has even more stuff.

We talked about the fact that we have a blocking script tag for Optimizely in our head. He said that sometimes, you need blocking code in the head, and there's no getting away from it. Optimizely is one of those times. is also useful. It's a bit more simplistic.

GTmetrix is the most advanced tool. It uses all the other tools, and then combines everything. The analysis of "What does this mean?" is pretty helpful.

Saucelabs and BrowserStack have some performance stuff.

In the Network tab, hold Shift and hover over a file; it'll highlight in green and red which resources called it and which resources it called.

Hover over the initiator to get the call stack for the initiator.

In the Network tab, you can right click on the column headings, and there's more stuff that you can see. For instance Domain is helpful.

Preserve Log is helpful to save the log messages across refreshes.

You should almost always have Disable Cache selected. By the way, it only applies when DevTools is open.

There's an Offline checkbox. You can check this if you're working on service workers.

The throttling is super helpful. Don't forget to turn off the throttling, though ;) Good 3G is a good thing to try. Remember to disable cache.

1MB is not huge for sites these days. It's not really a problem until you're above 3MB.

100 requests is too many requests.

You should put your CSS above your images.

Next, he talked about the Timeline tab:

Chrome keeps adding more stuff to this tab. It's the most overwhelming tab in DevTools.

It has CPU throttling.

You can hide the screenshots (uncheck the checkbox) to make some space.

If you look at the memory graph and you see a sawtooth that's trending upwards, it could be a memory leak.

If you don't see a memory leak, you can uncheck that checkbox.

The summary tells you how the browser is spending its time. CSS is in Painting and Rendering. Now, you can hide the summary.

At the top, there are red dots that might mean page jank.

The green at the top has to do with frames per second.

Then it shows you what the CPU is up to. The colors match the summary.

Selecting the timeline:
  • Click and drag.
  • Double click to select everything.
  • Single click to select a small piece.
  • You can also scroll side to side or scroll in.
  • When you're on certain flame charts, use shift when scrolling.
You can probably hide the Network and Screenshots stuff since it's already on the Network tab.

See FPS, CPU, and NET on the top right.

This stuff is very different between Chrome Stable and Canary.

He talked about Flame Charts. They're under Main. Wide is bad; tall is not a problem. Find your widest frames; those are the functions taking a long time to execute. This can help you find your slow code. Then, you can zoom in.

What he does is zoom in and look for the last function that's really fat, and everything under it is obviously skinny.

Total Time tells you how much time your function took to execute, and all the functions under it as well.

Self Time is just how much the function itself took, without counting how much the children took.
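With made-up numbers, the relationship between the two columns for a parent() that calls one child():

```javascript
// Hypothetical profile entries for parent() calling child() once.
var childTotalMs = 30; // everything spent inside child()
var parentSelfMs = 10; // parent()'s own body, excluding children

// Total Time = Self Time + time spent in all children.
var parentTotalMs = parentSelfMs + childTotalMs;

console.log(parentTotalMs); // 40
```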

In the flame charts, dark yellow is native browser stuff, whereas yellow is the application code.

The colors in the flame charts correspond to the colors in the summary at the bottom of the page.

Ads and analytics are often times the performance headaches.

CPU throttling is pretty helpful. One thing it's good for is that it makes it more obvious where the slow parts are in the flame charts.

You might need to turn on a Chrome flag to see the CPU throttling.

People used to use try/catch for lexical scoping before we had let. You can have a try that always raises, and then a var inside the catch is scoped to the catch. Traceur used this trick. Babel just renames things.
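A sketch of that trick: the catch parameter is scoped to the catch block, so each pass creates a fresh binding, which is what let gives you for free today:

```javascript
// Pre-`let` block scoping via try/catch: each execution of the
// catch block creates a fresh `j`, unlike the shared `var i`.
function makeGetters() {
  var getters = [];
  for (var i = 0; i < 3; i++) {
    try { throw i; } catch (j) {
      getters.push(function () { return j; });
    }
  }
  return getters.map(function (get) { return get(); });
}

console.log(makeGetters()); // [0, 1, 2] -- with a plain `var`, it'd be [3, 3, 3]
```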

Sometimes Chrome extensions get in your way.

PageSpeed Insights is more robust than the Audits tab.

The AMP project has their own Twitter and YouTube embeds.

If you're using a pure Node server, there's a compression thing to turn on gzip.

For Bootstrap, you can use the SASS version and just pull in the parts that you need.

Don't use a 1000 pixel wide image and shrink down the image to 200px using the img tag. Compress (lossy) them and resize them.

He really likes the screenshot view. Remember to pop out DevTools so that it takes a larger screenshot.

Server side rendering is nice in order to get text onto the screen faster.

You can use command click to show both CSS and Img in the Network tab.

On the Network tab, you can use regexes, but that's not so common.

Embedding YouTube videos loads a lot of JavaScript. Consider deferring them. That could help finish loading much faster.

He talked about the DOMContentLoaded event and the Load event:

You can hook into either. They're native browser events.

For his example, it was the YouTube embeds that totally killed his page performance.

AMP has its own video.js.

Most of what he does on the Timeline tab is to look at stuff and then hide it when he figures out that that's not where the problems lie.

In the Summary, you can click on Bottom-Up, Sort by Descending Self Time. That's an easy way to find the slow parts of your code.

(He's using zsh with a very colorful prompt.)

Start with Audit and Network. Then go to the Timeline. Next, you can go to Profile.

Next, he talked about the Profile tab:

You can use Profile for CPU profiles or to take snapshots of the heap.

Remember, they kind of push everything into the Timeline. Profile has a simpler view of some stuff that's in Timeline.

Start profiling, then refresh the page.

Once you run a profile, when you go back to your code, it'll show times next to your functions in the Sources tab.

Next, he talked about page jank:

Jank is any stuttering, juddering, or just plain halting that users see when a site or app isn't keeping up with the refresh rate.

He talked about 60 FPS (frames per second):

Most devices refresh their screens at 60 FPS. The browser needs to match the device's refresh rate: 1 sec / 60 = 16.66 ms per frame. In reality, after the browser's own overhead, you have about 10 ms for your code.

In the Timeline tab, hit Esc. There's a rendering panel (it might be in the drop down). There's an FPS meter.

Most of the time, page jank is obvious just using the site.

He explained some causes of page jank:

Every time you do a write to the DOM, it invalidates the layout. For instance: = (h1 * 2) + 'px';

You can use window.requestAnimationFrame() to have the browser call you back right before the next repaint. Do all your DOM writes in there.

There's a fastdom library. Basically, never read from or write to the DOM without going through it. It eliminates DOM thrashing.
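A minimal sketch of that read/write batching idea (hypothetical names; the real library schedules the flush with requestAnimationFrame):

```javascript
// Batch DOM reads before DOM writes so layout is computed once
// per frame instead of being invalidated mid-read.
function createBatcher() {
  var reads = [];
  var writes = [];
  return {
    measure: function (fn) { reads.push(fn); },  // queue a DOM read
    mutate: function (fn) { writes.push(fn); },  // queue a DOM write
    flush: function () {  // in a browser: requestAnimationFrame(flush)
      reads.splice(0).forEach(function (fn) { fn(); });
      writes.splice(0).forEach(function (fn) { fn(); });
    }
  };
}

var batcher = createBatcher();
var order = [];
batcher.mutate(function () { order.push('write'); });
batcher.measure(function () { order.push('read'); });
batcher.flush();
console.log(order); // ['read', 'write'] -- reads always run first
```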

React has a lot of stuff tied to requestAnimationFrame. A lot of things use the fastdom library. It makes scrolling butter-smooth.

He doesn't know whether Angular 1 uses requestAnimationFrame correctly.

His favorite demo is

Escape (to show the Console tab) >> Rendering (next to the Console tab) >> Paint Flashing. This shows green wherever there's a re-paint.

This will help you find cases where you're re-rendering things that don't need to be re-rendered. This is a common performance problem.

There's this thing in CSS, will-change: transform, that you can use to tell the browser to kick some work off to the GPU.

In his demos, the old Macs and the Windows machines were getting jank even though the newer Macs weren't.

There's an ad in his demo that has fixed position. That kind of thing can cause page jank. Adding will-change: transform to the ad container helped.

If you have some ad that's 100x100 with a fixed position, it's likely you'll get jank.

JavaScript animations don't use the GPU, but CSS animations do. CSS animations are way smoother.

Use fixed position sparingly.

Next, he talked about how to find and fix memory leaks:

JavaScript is pretty good at garbage collecting. Leaks are not super common.

JavaScript uses mark and sweep.

Browsers are always getting better.

He talked about some common causes of JavaScript memory leaks:
  • Accidental globals: you forgot to use var (e.g. bar = "foo";). Strict mode disallows that.
  • Forgotten intervals: once you start an interval, it keeps going. This can lead to a leak if it keeps pulling in more and more data.
  • Holding onto a reference to a DOM element that is no longer in the DOM anymore: browsers and frameworks are getting better at handling this. However, event listeners are particularly likely to hold onto things.
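The first two causes can be sketched in a few lines (names are made up):

```javascript
// 1. Accidental global: assigning to an undeclared name creates a
//    global in sloppy mode; strict mode makes it a ReferenceError.
function accidentalGlobal() {
  'use strict';
  undeclaredCache = []; // no var/let/const -- throws in strict mode
}

var threw = false;
try { accidentalGlobal(); } catch (e) { threw = e instanceof ReferenceError; }
console.log(threw); // true

// 2. Forgotten interval: the closure keeps `cache` alive and the
//    timer keeps growing it until someone calls stop().
function startPolling() {
  var cache = [];
  var id = setInterval(function () {
    cache.push(new Date()); // grows unbounded if never cleared
  }, 1000);
  return function stop() { clearInterval(id); };
}

var stop = startPolling();
stop(); // always keep -- and eventually call -- the cleanup
```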
Again, he starts with Audits and Network tabs. Then he goes to the Timeline tab.

You can leak memory in the form of either JavaScript objects or DOM nodes.

With memory recording, use a lot of time to see a lot of growth.

Next, he talked about the Profiles tab:

Use Take Heap Snapshot twice. Then, you can compare them. Sort by allocated size descending.

Profiles >> Allocation Timeline >> Summary >> Click on an object >> Object: you can use this to find the code that caused the allocation to happen.

Shallow Size: the amount of space needed for the actual item.

Retained Size: how much you'll be able to get rid of if you get rid of the object. For instance, there might be a parent that points to a lot of stuff.
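As a crude command-line analogue of diffing two snapshots, Node exposes its heap numbers via process.memoryUsage() (exact numbers vary by machine and GC timing):

```javascript
// Compare heap usage before and after an allocation -- the same
// "diff two snapshots" idea, done crudely outside the browser.
function heapUsedMB() {
  return process.memoryUsage().heapUsed / 1024 / 1024;
}

var before = heapUsedMB();
var retained = new Array(1e6).fill(0); // roughly 8 MB of array storage
var after = heapUsedMB();

console.log('grew by about ' + (after - before).toFixed(1) + ' MB');
```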

Don't start profiling your memory until you see an obvious sawtooth on your timeline.

Chrome DevTools makes it pretty simple to tackle memory leaks.

Chrome top right of the browser three dots >> More Tools >> Task Manager: Add JavaScript as a Category: you can use this to see how much JS memory is used per tab and per extension.

Next, he talked about Experiments:

Go to chrome://flags. Ignore the nuclear icon ;) Search for DevTools. Enable Developer Tools Experiments.

There are so many experiments.

Now, open Devtools >> Settings >> Experiments. There's CPU throttling. There's accessibility stuff. There's live SASS stuff. There's request blocking.

Hit shift 7 times, and you'll get super secret experiments!!! They're very much still in progress ;) They're not very stable yet.

Here are some resources: