Tuesday, July 26, 2016

JavaScript: Advanced JS Foundations

I went to an all day tutorial by Kyle Simpson. He has a book called You Don't Know JS. Here are my notes:

Scope

I missed the first 2 hours. However, that only consisted of 14 or so slides, and I managed to catch up by reviewing the slides and someone else's notes.

JS is a compiled language. It compiles the code and then immediately executes it.

var foo = "bar" is 2 operations not 1. It's both a declaration as well as an assignment.

The scope manager reserves spots in memory for variables.

When you assign to a variable, if the scope manager has heard of it, it uses it; otherwise (in non-strict mode), the assignment creates a global.

Functions have their own scopes.

He talked about shadowing.
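He didn't give this exact example, but shadowing looks like this:

```javascript
var answer = "outer";

function demo() {
    var answer = "inner"; // shadows the outer `answer` inside this function
    return answer;
}

console.log(demo());  // "inner" -- the inner declaration wins here
console.log(answer);  // "outer" -- the outer variable is untouched
```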

Figuring out the scope of things is done during the compilation phase.

He talked about the LHS and RHS of an assignment.

window and global scope are different. Global scope sits above window. window is an alias to global scope only in the browser.

It's not a good idea to use global scope because:

  • You're more likely to see variable name clashes.
  • Looking up something in the global scope is harder (slower?).

You should use strict mode.

"Undefined" means that a variable has been declared, but has not been assigned to yet.
"Undeclared" means that a variable has not been declared, and no scope has any record of it.
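A small example of the difference (mine, not his):

```javascript
var declared; // declared, but never assigned

console.log(declared);        // undefined
console.log(typeof declared); // "undefined"

// typeof is the one safe way to probe an undeclared name:
console.log(typeof neverDeclared); // "undefined" -- no error

// Any other reference to an undeclared name throws:
try {
    neverDeclared;
} catch (e) {
    console.log(e instanceof ReferenceError); // true
}
```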

Strict mode takes away many operations that make your program slower.

If you execute a non-function as if it were a function, you'll get a runtime type error.

Formal function declarations start with the word "function" as the first thing in the statement. Hence, if you wrap a function in parenthesis, that's a function expression, not a function declaration.

Here's another function expression (not a function declaration):

var foo = function bar() {}

Function declarations bind to their enclosing scope.

A named function expression should be preferred to an anonymous function expression.

Lexical scope is something that is determined at author time, and is thus fixed at runtime.

Dynamic scopes are determined at runtime.

"Scope" is where to look for things like variables and function declarations.

JavaScript mostly uses function scope.

However, when you use try {} catch (err) {}, err is actually block scoped. You can't access it outside the catch block.
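For example (my sketch):

```javascript
var caught;
try {
    throw new Error("boom");
} catch (err) {
    caught = err.message; // err is visible inside the catch block
}

console.log(caught);     // "boom"
console.log(typeof err); // "undefined" -- err doesn't exist out here
```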

He showed lexical scope as a series of concentric rings.

Named functions can refer to themselves using their name.

3 reasons why named function expressions are preferable over anonymous function expressions:

  1. You sometimes need to refer to the function's name within the function. This is useful for unbinding event handlers. Note, arguments.callee has been deprecated.
  2. Anonymous functions don't have a name, and hence they make stack traces a lot less readable.
  3. If you receive a function object, you can look at the name to know a little about what it is and who's sending it.
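The self-reference point (reason 1) is easy to demonstrate; the same trick is what lets an event handler refer to itself when unbinding. My sketch:

```javascript
// `countdown` is only in scope inside the function expression itself:
var tick = function countdown(n) {
    if (n <= 0) return "done";
    return countdown(n - 1); // reliable self-reference, no arguments.callee
};

console.log(tick(3));          // "done"
console.log(typeof countdown); // "undefined" -- the name doesn't leak out
```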

The only reason to not name functions is if "you're lazy, uncreative, or you're trying to save space in your slides".

He thinks arrow functions suck; one of the reasons is that they're anonymous. The only thing he likes about them is that they fix lexical this.

In the newest JavaScript standards, they're adding a new feature called name inferencing. JavaScript can now infer the name of this function:

var f = () => 1;

However, callbacks don't have name inferencing, and that's the one time where you're most likely to use an arrow function.

He prefers named functions over arrow functions.

He says he's big into readability.

He uses arrow functions only when he needs to maintain lexical this. He says it's better than var self = this.

Only 2% of his code uses lexical this.

Linters are useful, but only if you know what they're for. Linters check code style. They can't check for correctness, accuracy, etc.

He doesn't think any linter understands lexical scope.

He says that linters are great in theory, but he ends up turning off a lot of rules because the linter is not smart enough.

He wishes the tools were smarter.

He says that linters are a bit stupid with respect to hoisting as well.

He thinks of lexical scopes as nested bubbles.

Lexical scope is fixed by the developer. It doesn't change dynamically.

You can use eval to add new variables to a scope dynamically:

function f(s) {
    eval(s);
}

f('var x = 1;');

This is an exception to the normal rule that scopes are determined statically. The way to think about it is that eval executes the code as if you had written it in the place where you call eval. Using eval inhibits the VM from making certain optimizations.

He doesn't think that the security argument is a good argument against using eval. He says that your system can be insecure even if you don't use eval. Hence, he doesn't think that eval is inherently insecure. He doesn't like eval because it is horrible from a performance point of view.

If you have to ask whether you should use eval or not, the answer is no. You should only use it if you already know what you're doing and the tradeoffs.

He says he's a compiler nerd.

You can use a function constructor instead of eval.

The valid reasons for eval are extremely esoteric.

One reason is if you're using eval to figure out at runtime what syntax is supported by the engine.

You should probably still use the function constructor instead of eval.

In strict mode, eval creates its own scope. However, even in strict mode, the performance is still severely impacted by eval.

He says let and const are not the solution to var.

He says that browsers use a regex to look for eval, and when they see it, they disable the performance optimizations for that one file.

The with keyword is the other way of messing with lexical scoping in a dynamic manner.

with looks like a syntactic shortcut. It turns an object into a lexical scope at runtime. That's slow, and it has some subtle problems. You shouldn't use with.

He says one of the worse sins in programming is to create a construct that you don't know what it's going to do until runtime. (What about if statements?)

If you use:

with (obj) {
    d = 3;
}

Then if obj has a property d, it'll set that. Otherwise, it'll look up the scope hierarchy for a d. He says that that's horrible because you don't know what it's going to do until runtime.

You cannot use the with keyword in strict mode.

Knockout.js uses the with keyword all over the place. Code created via the Function constructor doesn't run in strict mode, and Knockout makes use of that.

The Chrome console wraps all of your code in a with statement in order to provide a bunch of stuff that's only available in the console.

The with statement makes your code run slower.

Next, he talked about properties when it comes to lexical scoping.

An IIFE is an immediately invoked function expression:

(function() {
    ...
})();

He prefers to give his IIFEs names. If you can't come up with a name, call it IIFE.

This idiom also works:

!function IIFE() {
    ...
}();

The negation doesn't really do anything except turn the function declaration into a function expression.

He prefers:

void function IIFE() {
    ...
}();

The void operator just turns the function into a function expression. Furthermore, the void is kind of a hint that the function has no return value.

He prefers void as a stylistic preference, but most people use parenthesis.

It's bad to have a bunch of global stuff.

He talked about the "principle of least exposure" or "principle of least privilege". You should protect everything by default and expose things only when necessary.

It's very common to put your entire file into an IIFE.

He's working his way toward module systems.

You could just use this idiom to expose things from within the IIFE:

(function() {
    window.foo = ...;
})();

Now, elsewhere in the codebase, people can just use foo().

Here's another idiom:

(function(global, $) {
    global.foo = foo;
})(window, jQuery);

Now, you can use $ to reference jQuery even if you have multiple libraries that each like to use $.

You could name your IIFE something like CustomerLogin. (It's strange to see him use this naming convention since that convention is usually reserved for constructor functions.)

Variables declared using var are scoped within the nearest enclosing function.

let enables you to scope a variable to a block instead of a function.

He says that if you're not using let, putting var inside a block is still useful as a stylistic signal that the variable isn't used outside the block even though variables are hoisted to function scope.

However, using let is even better.

He thinks the idea of replacing all vars with lets is really stupid.

He thinks that let doesn't replace var, it augments it.

If he wants function scope, he uses var. If he wants block scope, he uses let.
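The difference in one example (mine):

```javascript
function demo() {
    if (true) {
        var f = "function-scoped";
        let b = "block-scoped";
        console.log(b); // fine inside the block
    }
    console.log(f);        // "function-scoped" -- var escapes the block
    console.log(typeof b); // "undefined" -- let does not
    return [f, typeof b];
}
demo();
```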

He doesn't recommend using const very much. He doesn't think it really helps make anything better.

He is not consistent as to whether he puts a space before the curly when defining a function.

In general, he has very strong opinions and doesn't seem to have much regard for other people's opinions, even members of the standards bodies. Even the name of his book, "You don't know JS" presupposes he knows more than you.

He would have preferred this syntax for let since it's more explicit:

let (tmp = x) {
    x = y;
    y = tmp;
}

He likes this idiom:

if (x > y) {
    { let tmp = x;
        x = y;
        y = tmp;
    }
}

He likes it because it's clearer, and he's okay with the extra level of indentation.

The let keyword is good for for loops:

for (let i = 0; i < 10; i++) {
    ...
}

Dynamic scope is dependent on runtime conditions.

He says he doesn't know why most languages prefer lexical scoping over dynamic scoping other than for performance reasons. (This was explained in SICP. Lexical scoping makes it easier to reason about the program.)

You can think about hoisting by imagining that all the variable declarations and function declarations get moved to the top of the function. It's an important feature.

He says it enables mutually recursive functions, and that you wouldn't be able to use mutually recursive functions without function hoisting. (I think that as long as you have first class functions, you can do mutual recursion.)
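His mutual-recursion point, sketched (hoisting is why the first line works at all):

```javascript
console.log(isEven(10)); // true -- both declarations below are hoisted

function isEven(n) { return n === 0 ? true : isOdd(n - 1); }
function isOdd(n)  { return n === 0 ? false : isEven(n - 1); }
```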

Function expressions don't get hoisted.

Function hoisting is quite helpful. He routinely uses functions before they're defined.

foo();

function foo() {}

He likes to put the executable code at the top and the functions at the bottom.

If you try to use a let variable before it's been declared, you get a TDZ (temporal dead zone) error. I think they may call it a ReferenceError.

He recommends that you put all the let declarations all on the first line.

Closure


Closure is when a function "remembers" its lexical scope even when the function is executed outside that lexical scope.
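The canonical example (mine, not one of his slides):

```javascript
function makeCounter() {
    var count = 0; // lives in makeCounter's scope
    return function increment() {
        count++; // still reachable, even though makeCounter has returned
        return count;
    };
}

var counter = makeCounter();
console.log(counter()); // 1
console.log(counter()); // 2 -- increment "remembers" count via closure
```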

He claims that JavaScript is the first "mainstream" (i.e. non-academic) language to have closures. I mentioned that Perl probably got them first and was mainstream at the time. He claims that Perl wasn't mainstream by the time JavaScript got them. As far as I understand it, Perl added my in version 5 which was released in 1994, and Perl was certainly mainstream by then.

Furthermore, I would also take issue with his claim that Lisp (which had closures) wasn't mainstream since various Lisp variants were certainly used outside of academia.

He talked about the problem that occurs when you generate closures from inside a for loop. He talked about how to use IIFEs to avoid this problem.
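The problem and the IIFE fix look roughly like this (my reconstruction):

```javascript
// The problem: all three callbacks close over the same `i`.
var broken = [];
for (var i = 0; i < 3; i++) {
    broken.push(function() { return i; });
}
console.log(broken.map(function(f) { return f(); })); // [ 3, 3, 3 ]

// The IIFE fix: each iteration gets its own copy of `i`.
var fixed = [];
for (var i = 0; i < 3; i++) {
    (function IIFE(j) {
        fixed.push(function() { return j; });
    })(i);
}
console.log(fixed.map(function(f) { return f(); })); // [ 0, 1, 2 ]
```

(Using let in the loop header solves the same problem without the IIFE.)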

Here's the module pattern:

var foo = (function() {
    var o = { bar: 'bar' };
    return {
        bar: function() {
            console.log(o.bar);
        }
    };
})();

foo.bar();

Similarly:

var foo = (function() {
    var publicAPI = {
        bar: function() { ... },
        baz: function() { ... }
    };
    return publicAPI;
})();

That way, the different things in publicAPI have access to each other via the name publicAPI.

He likes to criticize the TC39 committee.

He mentioned HTTP 2.

He said that if you want to use ES6 import syntax you should use HTTP 2 :-/ I think he may be glossing over how bundlers like Webpack fit into the picture.

Export syntax:

export function bar() { ... }

Import syntax:

import { bar } from "foo";
import * as foo from "foo";

He said there are about two dozen variations of the import / export syntax.

He hasn't switched to ES6 modules because there's political upheaval around them.

Even though the import syntax is valid in some browsers, the loader necessary to load them isn't yet a standard.

He said that TC39 didn't consider whether the Node guys would be able to switch to ES6 syntax.

He said the Node community isn't going to implement ES6 import syntax.

He says he's still on the sidelines.

He said the import syntax is synchronous.

Two characteristics that makes something the module pattern:

  • There has to be an enclosing function that executes at least once.
  • That function has to return something that has a function that closes around its outer scope.

He said the module pattern is the most important pattern in JavaScript.

Object-oriented Programming


He doesn't think JavaScript has classes, and you should not do anything like classes in JavaScript.

He created the term "OLOO" which stands for objects linked to other objects.

He says that "behavior delegation" better matches what JavaScript does.

Every function, when executing, has a reference to its execution context, which it calls this.

(He uses Sublime Text.)

4 ways to bind this:

  1. Default binding: If none of the other rules apply: if we're in non-strict mode, this refers to window. In strict mode, this is undefined.

    By the way, you almost never want this to refer to the global object.
  2. Implicit binding: If there is a context object at the call site (e.g. obj.f()), then use the context object.

    By the way, you can "borrow" functions by attaching them to another object and then calling them through that object.
  3. Explicit binding: f.call(obj).

    This allows us to define a single function and then use it in multiple contexts.

    The this system is very dynamic whereas lexical scoping is very fixed.

    This breaks when you have code like this:

    $("#btn").click(obj.foo);

    It's because jQuery saves obj.foo as something like cb, and cb doesn't reference obj anymore. (JavaScript doesn't have implicit bound methods like Python does.) However, you can use explicit binding:

    $("#btn").click(obj.foo.bind(obj));

    jQuery has a proxy method that does something similar.

    Some people just abandon using this entirely and just bind things 
    explicitly since it gives you predictability back. However, hard binding is less flexible.

    The this keyword is very powerful and very dynamic. Sometimes, lexical scoping is simpler and more predictable.
  4. Using new: new foo();

There's a precedence order for the 4 rules, because it's possible to hit multiple rules at the same time:
  1. Was the function called with new?
  2. Was the function called with call or apply with an explicit this?
  3. Was the function called via a containing/owning object (i.e. context)?
  4. Otherwise, use the global object (or undefined in strict mode).
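A sketch of how explicit binding beats implicit, and new beats even a hard binding (my example; I skip the default-binding rule since it depends on strict mode):

```javascript
function setName(name) {
    this.name = name;
}

// Implicit binding: called through a context object.
var obj = { setName: setName };
obj.setName("via obj");
console.log(obj.name); // "via obj"

// Explicit binding beats implicit: call/apply pick the `this`.
var target = {};
setName.call(target, "via call");
console.log(target.name); // "via call"

// `new` beats even a hard-bound function:
var hard = setName.bind(target);
var made = new hard("via new");
console.log(made.name);   // "via new" -- a brand-new object, not `target`
console.log(target.name); // "via call" -- untouched
```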
When you use new:

  1. A brand new empty object is created.
  2. The brand new empty object gets linked to another object (the prototype).
  3. The newly created and linked object gets passed into the function as this.
  4. If the function doesn't return an object, it assumes that the function should return this.
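Those four steps can be approximated by hand (the helper name `construct` is my invention, not something he showed):

```javascript
function construct(fn, ...args) {
    var obj = Object.create(fn.prototype);  // steps 1 & 2: new object, linked
    var result = fn.apply(obj, args);       // step 3: passed in as `this`
    return (typeof result === "object" && result !== null)
        ? result  // step 4: an explicitly returned object wins...
        : obj;    // ...otherwise `this` is returned
}

function Foo(who) { this.me = who; }

var f = construct(Foo, "f");
console.log(f.me);             // "f"
console.log(f instanceof Foo); // true
```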

You can use new with almost any function. He says that constructor functions really don't do much and really don't construct anything. It's new that does all the work.

The new keyword can override even a hard-bound function (i.e. one that already has its own this).

Every single "object" is built by a constructor call. (I wonder if he considers an object literal to be a constructor call.)

A constructor makes an object prototype its own.

He explained the prototypal system.

He said the traditional approach to OOP (e.g. classes) is all about "copying" stuff from the parent into the child.

Pretending JavaScript is like other languages causes a lot of confusion.

Rather than copy down, it's a link up.

An object has a property that refers to another object.

The base prototype doesn't have a good name. It has things like toString().

The prototype object has a property called constructor. That doesn't necessarily mean that the thing it points to is really a constructor.

An object has a prototype. That prototype might refer to another prototype.

obj.__proto__ === ObjConstructor.prototype

obj.constructor === ObjConstructor

If you ask for something on this that isn't on this, it looks it up on the prototype.
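Putting those pieces together (my sketch):

```javascript
function Foo() {}
Foo.prototype.greet = function() {
    return "hello from the prototype";
};

var obj = new Foo();

console.log(obj.hasOwnProperty("greet")); // false -- not on obj itself
console.log(obj.greet());                 // found by walking up the chain

console.log(obj.__proto__ === Foo.prototype);              // true
console.log(Object.getPrototypeOf(obj) === Foo.prototype); // true
console.log(obj.constructor === Foo);                      // true
```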

A core theme of this class is how to compare and contrast JS's lexical scoping system with its prototypal system.

He's really big on the idea that functions aren't really constructors; they're just initializers. new does the real work of constructing the instance.

There's Object.getPrototypeOf(obj).

In the IE8 days, we used to write this hack:

obj.constructor.prototype

Both the constructor and prototype properties are changeable at runtime.

JavaScript didn't originally have super. ES6 has a super keyword.

It's not easy and automatic for a method in a child class to refer to the method of the same name in its prototype:

Foo.prototype.method.call(this)

If the child and parent have differently named methods, it's way easier.

To link from a child to a parent:

function Bar(who) {
    Foo.call(this, who);
}
Bar.prototype = Object.create(Foo.prototype);

All the magic of the prototype system really comes down to the fact that objects are linked together.

He said it's too complex. He doesn't like "classes" in JavaScript.

He prefers thinking of it as "behavior delegation" over "inheritance".

OLOO = objects linked to other objects

He likes this idiom. He calls this OLOO style coding:

var Foo = {
    init: function(who) {
        this.me = who;
    }
};
var Bar = Object.create(Foo);
Bar.speak = function() {
    ...
};
var b1 = Object.create(Bar);
b1.init("b1");
b1.speak();

This does the same as the other way of writing the code.

He likes the fact that Douglas Crockford created Object.create. However, he thinks that Crockford "went off the rails" when abandoning this.

In his code:

  • 95% = module pattern
  • 5% = delegation pattern (OLOO)

Object.create was added in ES5. Crockford suggested it. Here's how it's basically implemented:

Object.create = function(o) {
    function F() {}
    F.prototype = o;
    return new F();
};

Simpson's entire OO system is just based on Object.create.

There was some discussion at the end. He said that JavaScript's object system was radically different than any other language's since:

  • It's less about classes copying things into the object and more about an object pointing to its prototype.
  • Constructor functions don't actually construct instances, they only initialize them.

I pointed out that actually:
  • The prototype system started with Self in the 80's. Python's class system is very similar to JavaScript's if you treat classes as objects and never instantiate them. Each parent class is like the child class's prototype. When you look for something on a class, it'll go up the parent chain dynamically.
  • Java's constructor methods don't instantiate objects either. The constructor methods themselves are really just initializers like in JavaScript. By the time they're called, the object already exists. Objective C is one of the few OO languages that I know that lets you explicitly allocate and initialize the object in two different steps.

He also said that JavaScript and Lua are the only languages that allow you to just create an object and start attaching things to it without creating a class first. However, I think you can certainly do this in Python and Ruby, and I suspect you can do it in some other languages as well.

Delegation Design Pattern


"Delegation-oriented design".

He says that it was the late 90s when we finally started preferring composition over inheritance.

He says the prototypal chain is "virtual composition".

He says with delegation, it's peer-peer.

He says when you're writing JS, you should go with the flow and use JS the way it was meant to be used.

He says you should ignore the ES6 class syntax. He says that's the only part of the language that he categorically discards. He doesn't even touch the class keyword. He says it's duck tape on top of duck tape.

He doesn't think that class is just syntactic sugar. He says that's not true.

He says they're adding even more class-oriented stuff (doubling down) in later versions of JavaScript.

Anything you put on the prototype is public. It's not like closures where there's a lot of room to hide things.

Some parts of your app should use modules. Some parts should use the prototype system.

Here's an idiom he likes:

var Button = Object.assign(Object.create(Widget), {
    configure: ...,
    activate: ...
});

He said to avoid shadowing method names in different parts of the prototype chain.

He said he's been working in JavaScript for 18 years.

Tuesday, October 27, 2015

Web Performance Short Course

I went to a tutorial on web performance at HTML5DevConf. These are my notes:

Daniel Austin was the instructor.

He worked down the hallway from Vint Cerf when he was creating the world wide web, and he was the manager of the team at Yahoo that created frontend performance as a discipline. He was the manager of the guy who created YSlow. A lot of the books on web performance are from people he used to manage. He was the "chief architect of performance" at Yahoo.

He's writing a book called "Web Performance: The Definitive Guide".

He started by asking us how many hops it took to get to Google (per traceroute).

He had us install the HTTP/2 and SPDY indicator Chrome extension.

He's given this class at this conference 5 years in a row. It's changed dramatically over the years.

This is only a class on the basics.

The most important key to understanding performance problems is to understand how the web works and work within those constraints.

Understand what's going on under the covers, and identify the problems. That's half the battle.

There are lots of tools.

We're always focused on the end user's point of view (the "user narrative").

This is both an art and a science.

Most of the people doing web performance now started at Yahoo.

He didn't think the first book on web performance was very good.

All of his slides are on SlideShare.

Capacity planning and performance are opposite sides of the same coin.

Most performance problems are actually capacity problems.

Tools:

  • spreadsheets
  • webpagetest.org
  • speedtest.net
  • your browser's developer tools
  • YSlow
  • netmon
  • dig
  • ping
  • curl
  • Fiddler
  • there are a bunch of mobile tools

The site is fast enough when it's faster than the user is.

Theme: Ultimately, performance is about respect. He thinks Google is just making stuff up when it says that slower responses result in X amount of lost dollars. He thinks it's really just about respect.

He seems to have an anti-Google bias ;) He even asked who in the class was a Googler.

Section I: What is Performance?

It's all about response time!

Latency is about packets on a wire. Humans experience response time, not latency.

The goal: "World-class response times compared to our competitors."

We want reliable, predictable performance.

It must be efficient and scalable.

We want to delight our users.

Performance is a balancing act.

Security vs. performance is a common tradeoff.

Section II: Performance Basics

Statistics 101:
  • sort
  • mean
  • median
  • mode
  • variance
  • standard deviation
  • coefficient of variation (standard deviation divided by the mean)
  • minimum
  • maximum
  • range

He compared the mean, median, and mode. The mean is rarely used in performance work.

The median is the number in the middle. We use that more often than the mean.

The mode is the most frequent number in some set.

Performance data is full of outliers. The outliers disturb the mean which is why we can't use it.

Pay close attention that you're talking about the median, not the mean.

If the mean and the median are more than one standard deviation apart, then the data is wrong because that's not possible.

The standard deviation is the average distance between a point and the mean. It's a measure of how scattered the data is.

Performance is vastly different between Asia, the EU, and the US. It's a network infrastructure issue.

The margin of error is a measure of how close the results are likely to be.

The more data you have, the lower the margin of error.

You need 384 data points to get a 5% margin of error.

You need to gather a considerable amount of data to be confident in your analysis.

"5 Number Reports" consist of:

  • median
  • 1st quartile
  • 3rd quartile
  • minimum
  • maximum
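A sketch of the same report in JavaScript (quartiles can be computed several ways; this uses linear interpolation, so the numbers may differ slightly from Excel's QUARTILE or R's fivenum):

```javascript
function fiveNumberReport(data) {
    var s = data.slice().sort(function(a, b) { return a - b; });
    // Linear interpolation between the two closest ranks:
    function quantile(p) {
        var pos = (s.length - 1) * p;
        var lo = Math.floor(pos);
        var hi = Math.ceil(pos);
        return s[lo] + (s[hi] - s[lo]) * (pos - lo);
    }
    return {
        min: s[0],
        q1: quantile(0.25),
        median: quantile(0.5),
        q3: quantile(0.75),
        max: s[s.length - 1]
    };
}

console.log(fiveNumberReport([0.039, 0.044, 0.045, 0.048, 0.05,
                              0.086, 0.098, 0.107, 0.603, 1.233]));
```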

The typical performance curve:

  • No users get response times less than a certain amount.
  • Most people get response times somewhere in the middle.
  • There's a long tail of people getting much longer response times.
  • Sometimes they even time out.

You know you have a problem if:

  • A lot of people are getting bad response times.
  • A lot of people are timing out.
  • There's a second hump in the curve for the people getting slower response times.

Curl is our favorite browser! ;)

curl -o /dev/null -s -w '%{time_total}\n' http://www.twitter.com

Run it 10 times in a row, put the numbers in a spreadsheet, and calculate a 5 Number Report.

The results are really messy!

Curl is way more widely used than you might think. It's even used in production at very large companies.

I don't think I got this completely right:

0.039 min
0.044
0.045 1st quartile
0.048
0.05 median
0.086
0.098
0.107 3rd quartile
0.603
1.233 max

25% of the data points are in each of the quartiles.

In the performance world, there's usually a big difference between the mean and the median.

You can look at Wikipedia to get the exact formulas for these things.

Across the class, we did 100 measurements, and we had a huge range.

There's a significant amount of variation on the Internet in general. "The web is subject to very high variance."

dRt / dt = crazy ratio = the derivative of the response time

Anytime your slope is greater than 0.5, then it's crazy. It's possible that your connection is bad.

We have to figure out: Is the DNS slow? Is the SSL slow? Are the servers slow?

The first thing you want to do is calculate the crazy ratio.

Excel:

  • Min = MIN(Data Range)
  • Q1 = QUARTILE(Data Range, 1)
  • Q2 = QUARTILE(Data Range, 2)
  • Q3 = QUARTILE(Data Range, 3)
  • Max = MAX(Data Range)

R:

  • RT <- c(...)   # put the response-time numbers here
  • fivenum(RT)

Operational research:

  • Supply chains
  • Utilization Law
  • Forced Flow Law
  • Little's Law
  • Response Time Law

You have to understand how queues work. Think of freeways.

Resources and queues:

  • Service time (Si)
  • Queue residence time (Ri)
  • Queue length (i)

In general, systems consist of many combined queues and resources.

Workload differentiation: different lanes for different speed vehicles.

The Utilization Law: Ui = Xi * Si

The utilization (Ui) of resource i is the fraction of time that the resource is busy.

If you let your systems get to 95% load, you should be fired.

Xi: average throughput of queue i, i.e. the average number of requests that complete from queue i per unit of time.

Si: average service time of a request at queue i per visit to the resource.

The Interactive Response Time Law: R = (N/X)-Z

R = response time
N = number of users
X = number of requests/s
Z = time the user is thinking (think time)
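A worked example of both laws with made-up numbers:

```javascript
// Utilization Law: Ui = Xi * Si
var X = 50;    // throughput: 50 requests/s complete at this resource
var S = 0.01;  // service time: each request needs 10 ms of the resource
var U = X * S;
console.log(U); // 0.5 -- the resource is busy 50% of the time

// Interactive Response Time Law: R = (N / X) - Z
var N = 100;   // 100 concurrent users
var Xsys = 20; // the system completes 20 requests/s
var Z = 3;     // each user "thinks" for 3 s between requests
var R = N / Xsys - Z;
console.log(R); // 2 -- average response time of 2 seconds
```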

This doesn't make sense to me because in my mind, the number of requests/second varies a lot based on the number of concurrent requests.

He suggested that you can always increase the number of requests/second by adding more capacity. However, my understanding is that it takes a lot of work to get to a horizontally scalable architecture, and that it's often the case that there is a bottleneck that throwing more servers at the problem can't immediately solve.

Figure out if you have a capacity planning problem.

Capacity and performance are intimately related.

Often, your performance problems are really capacity problems.

Antipattern: keyhole optimization: optimizing your project at the expense of everyone else.

Section III: The MPPC Model

Dimensions of performance:

  • Geography
  • Network location
  • Bandwidth
  • Transport type
  • Browser/device type: RT varies by as much as 50%
  • Page composition: client-side rendering and execution effects
  • Network transport effects: number of connections; CDN use

You have to test on multiple types of devices.

Take some crap off of your page to make it faster.

CSS used to be benign in terms of performance. That's now no longer true. CSS can cause performance issues.

He's big on CDNs.

He talked about how hardware and routing work. It was a pretty complex slide.

The backbone is about as good as it can get. It's the last mile that is the problem.

He talked about the OSI Stack model.

Microsoft invented ethernet type 2.

MTU = 1500 bytes = maximum transmission unit

MSS = 1460 bytes = maximum segment size = the size of the data in the packet

20 bytes for IP, 20 bytes for TCP.

SSL is a good example of the session layer.

He said HTTP is layer 7, application.

We care about:

  • IP (layer 3)
  • SSL (layer 5)
  • HTTP (layer 7)

OSI = Open Systems Interconnection

HTTP connection flow:
  • Make a TCP connection
  • Send a request
  • Get the response

HTTP is a request/response protocol.

MPPC = Multiple Parallel Persistent Connections

He wrote the original paper on this model.

To calculate the end-to-end time, you can use the given equation: E2E = T1 + T2 + T3 + T4

  • T1 = network connection:
    • T1 = T(DNS) + T(TCP) + T(SSL)
  • T2 = server duration = time it takes the server to respond
  • T3 = network transport
  • T4 = client processing = process the response, display the result, plus the user's think time
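A toy illustration of the decomposition (every number here is invented):

```javascript
// T1 = T(DNS) + T(TCP) + T(SSL), all in seconds
var t1 = 0.040 + 0.030 + 0.120; // connection setup
var t2 = 0.080;                 // server duration
var t3 = 0.250;                 // network transport of the response
var t4 = 0.150;                 // client processing and rendering

var e2e = t1 + t2 + t3 + t4;
console.log(e2e.toFixed(3) + " s end to end");
console.log((t1 / e2e * 100).toFixed(0) + "% of E2E is connection setup");
```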

For Facebook, it's usually T3 (network transport) that takes the longest, whereas most developers are almost entirely focused on T2 (server duration).

Don't go chasing after T2 too quickly. Figure out all of the Ts.

He thinks Microsoft's browser, Edge, is perfectly fine.

There are two types of hyperlinks on the web:

  1. Transitive hyperlinks: The ones you click on.
  2. Intransitive hyperlinks: The ones the browser clicks on for you (images, JS, CSS, etc.).

He said that the number of intransitive hyperlinks is way more than the number of transitive hyperlinks. [I did some tests on a bunch of sites, and that turns out to often not be true.]

95% of the bytes are from intransitive hyperlinks (images, JS, CSS, etc.).

DNS is typically a larger part of the E2E than expected.

TCP is highly variable.

SSL is slow!

T1 might be bigger than you think. For PayPal, T1 accounts for 40% of their E2E.

Nothing happens and the user doesn't see anything before DNS.

Google runs their own DNS servers to improve response times. It makes it more reliable and predictable.

He had us install Dyn Dig on our phones.

Using Dyn Dig on my iPhone, it took 40 msec to resolve udemy.com.

Using dig, it took 175 msec to resolve udemy.com.

He worked on x.com. It's the only single letter domain that you can sign up for your own email address.

Among all the people in the class, there was very high variance in the DNS response times. There was a factor of 10 difference. A factor of 3 is more common.

A DNS lookup anywhere on earth should take less than 500 ms.

It shouldn't take you longer than 10ms to get to your ISP.

Mobile DNS lookup times are all over the map.

For popular sites, DNS lookups are fairly constant because it only involves talking to your ISP.

It takes 14 steps to make an SSL connection.

He said that wherever he said SSL, he really meant TLS.

SSL takes up the lion's share of T1.

If you're using SSL, it is likely the biggest thing in T1-T4.

EV certificates = extended validation certificates

EV certificates take twice as long. It's a 2048-bit key.

Banks use EV certificates.

When they're used, there's a nice green bar in your browser.

"The current internet is overly reliant on encryption and confuses encryption with security...Don't confuse being encrypted with being secure."

T2 - The Server Duration

He treats the server as a black box. He doesn't care what's inside it. He only cares about how long the server takes to respond to a request.

If there's a lot of variance in T2, it's a capacity problem.

We want servers to scale linearly with the number of users.

At some point, a server can't respond to more load in a linear way. Don't load your machines past the point where they go non-linear.

Typically, machines in production run at 70% utilization or less. 40% is actually pretty common.

You have to have enough capacity to account for machines going down.

T3 - TCP Transport Time

This is the part he likes the most since he's a network guy.

TCP is pretty predictable.

Remember that HTTP has evolved over time.

He said that HTTP/1.1 came out in 1998.

We got HTTP Keepalive in HTTP/1.1.

HTTP/2 became a standard on May 14, 2015.

Firefox will open up to 6 connections for each unique host on a page. IE will only open 2.

"There was no equation until yours truly solved it...published in IEEE."

With HTTP/2, you make one connection, but then there's a bunch of streams within that one connection.

TCP is not very efficient for transferring small files.

The size distribution of objects on the internet peaks around 7k.

The Yahoo logo is always fairly small in file size. It only uses a single color.

T4 - What the Browser Does

He showed the waterfall of request times for yahoo.com.

T4 is especially important for mobile devices. They have smaller processors, so they take longer to render pages.

The big guys have mobile versions of their sites that have less stuff on them.

Mobile devices often run JavaScript much more slowly than desktop devices. Part of this is because of how they do floating point arithmetic.

Bandwidth and latency have to do with the network.

More bandwidth is like having a wider hose.

Latency is like the length of the hose.

Adding bandwidth only helps up to about 5 Mbps.

Reducing latency helps linearly with reducing response times.

In the US, more than 90% of people have a 5 Mbps connection or better.

If the pipe is fixed, then put stuff in the pipe more efficiently.

He talked about packet loss.

He talked about the congestion window.

Every time TCP loses a packet, it cuts the bandwidth in half.

He really likes using equations with Greek characters to model things. He calls it "solving the equation".

On mobile, packet loss is typically 5-7%.

For any given user, their latency and bandwidth is fairly fixed.

Packet loss is a limiting factor for bandwidth.

Packet loss almost always happens because of overflowed buffers or failure to reassemble fragmented packets.

Antipattern: saying "that's outside my control."

It's never the case that there is nothing you can do about a performance problem.

Compensate in some other part of the E2E. Think outside the box.

Section IV: Tools and Testing

I didn't get his entire list of tools. Sorry.

  • YSlow
  • HTTPWatch (very good)
  • Your browser's development tools

You must gather data from lots of users. Performance work is statistical in nature.

Remember, we have a special position here in the valley. Think about people who don't have internet connections as good as ours.

Those tools aren't going to help you make the network faster in India. But, they can help you fix problems with page composition.

For instance, what things are being loaded? What things are blocking progress on your page?

There might be an ad making your page slow.

There are commercial performance services:

  • Gomez (Compuware)
  • Keynote
  • AlertSite
  • ThousandEyes

Gomez and Keynote are super expensive corporate tools.

New Relic is a less expensive tool to try to solve some of those problems.

Performance numbers are going to vary between the backbone and the last mile (of course).

gamma = last mile response time / backbone response time

His goal is to identify problems. How to solve them is another thing.

He's worked at a lot of the big dot coms.

RUM = real user measurements

Yahoo alone was responsible for 16% of JavaScript errors on the Internet. It was mostly because of ads.

Users were seeing response times that were 10X the response times on the backbone.

When he tests things, a test is a set of pages. A monitor is a set of tests.

He talked about YSlow. The rules were published by his team at Yahoo. People pay attention to the first 14 rules, but there were actually 105.

PageSpeed is from Google.

HTTPWatch is the commercial software.

UNIX tools: ping, dig, traceroute, curl

WebPageTest.org.

MSS / RTT = maximum segment size / round trip time = a good way to guess how long it'll take for your page to arrive.

Use ping to figure out the RTT.
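Putting those two notes together, here's a back-of-the-envelope sketch in JavaScript. The MSS, RTT, and page size below are made-up illustration values, not measurements:

```javascript
const mss = 1460;               // maximum segment size in bytes (a typical value)
const rttMs = 50;               // round trip time in ms, as measured with ping
const pageBytes = 100 * 1024;   // size of the page we want to fetch

// MSS / RTT gives a rough throughput ceiling for small transfers:
const bytesPerSecond = (mss * 1000) / rttMs;

// A naive guess at how long the page takes to arrive:
const estSeconds = pageBytes / bytesPerSecond;
```

This ignores slow start, packet loss, and parallel connections, so treat it as a ballpark figure only.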

WebPageTest.org will give you a lot of the same information that the commercial tools provide.

All the performance work in the web world came out of Yahoo.

The 14 YSlow rules are all about T3.

Here are the original 14 YSlow rules:

  1. Make fewer HTTP requests.
  2. Use a CDN.
  3. Add an expires header. (There are now better headers.)
  4. Gzip components.
  5. Put CSS at the top.
  6. Put scripts at the bottom. (We now say to put them in the head.)
  7. Avoid CSS expressions.
  8. Make JS and CSS external.
  9. Reduce DNS lookups.
  10. Minify JS.
  11. Avoid redirects.
  12. Remove duplicate scripts.
  13. Configure ETags. (He says don't bother.)
  14. Make AJAX cacheable.

stevesouders.com

Mobile devices are weak on floating point operations. Hence, they may not be as good at decompressing things.

Do not put the scripts at the bottom. The advice has changed. Chrome compiles your scripts to binary, but only if you put them at the top.

It halts the rendering process if you have JavaScript in the body. If it's in the head, it doesn't.

He's mixed on whether minification is good or not. It makes debugging harder. Maybe gzip is enough.

The rules are now different.

Unix performance testing tools:

  • ping
  • nslookup, dig (These are somewhat interchangeable.)
  • traceroute
  • netstat (This lists the network connections on the machine.)
  • curl

When you traceroute a site, the number of hops varies between runs.

If you get stars during a traceroute, that means there's a firewall that is preventing you from getting that information.

traceroute google.com

When tracerouting google, we got a range of 13-17 hops.

UNIX can't really measure things less than a millisecond.

netstat -a
netstat -A
netstat -A | grep -i HTTP

curl only returns the base page. It doesn't retrieve the images, etc.

WebPageTest.org is really good.

When you look at a waterfall diagram, find the long pole.

cache ratio = cached response time / uncached response time

Cache more.

WebPageTest.org is running real browsers.

It's okay if your base page isn't cached. Make sure the images, etc. are cached.

Task-based performance thinking: Users have use cases. They don't care about just a single page.

Look at the paths users use. Then, make those paths easier.

Users don't do what you thought they would do when you designed the website.

Focus on optimizing the 2-3 things the users do the most.

Test your competitor's performance.

Tumblr was a top 20 website, and it ran out of the founder's basement before Yahoo bought it.

Stormcat is for global performance testing.

Antipattern: design-time failure.

You can't bolt performance onto your website after you launch it.

Section V: ???

He talked about "W3C navigation timing". He almost never uses this. He doesn't think it's very good even though he worked on it.

Antipattern: we'll be done with this soon.

Performance is an ongoing activity, not a fire and forget activity.

Antipattern: not treating performance as a property of the system, or only testing at release time.

Pattern: establishing a long-term performance management plan as part of your cycle.

Native apps run 5X faster than HTML5.

Mobile is 10X slower than desktop.

HTML5 on mobile devices can be 50X slower:
  • 10X from the ARM chip
  • 5X from JavaScript

However, chips have gotten a lot better lately.

3G adds 2000ms of latency.

3G is not very common here, but it's very common overseas.

4G is much better.

Since 2009, mobile browsers went from 30X to 5X slower than desktop browsers.

In the US, we're generally on LTE, not 4G.

HTTPWatch is a good app for mobile.

Amazon's home page makes 322 requests. It's insane.

74% of users will leave if a mobile website takes more than 5 seconds to load.

Use the right tool for the right job:

  • Server
  • HTML
  • CSS
  • JavaScript

Nick Zakas architected the Yahoo homepage.

Doug Crockford said, "Don't touch the DOM!" [Not sure about that.]

TTFB = time to first byte

TTFB is not a good measure of server duration.

Use web workers for preloading.

Test performance on different transport types.

Test battery consumption.

The NYT website eats up your battery life.

Mobile networking is a big challenge, so design for delay tolerance.

Speedtest/Ookla.

There's iCurl for the iPhone.

Antipattern: Failing to recognize that the distribution of the mobile E2E is very different from a desktop performance profile.

The server duration is about 35% of the total E2E.

Section VI: Psychology of Performance

100ms to identify distinct objects
150ms to respond
250ms for user "think time"

TVs delay the sound by 30ms.

Th = Tp + Tc + Tm
T(human) = T(perceptual processing) + T(cognitive) + T(motor)

When faced with N choices, users will take O(log N) cycles to proceed.

The size of UI objects on small screens limits your accuracy.

Wearables and small devices are near the point of minimum usability for visual interactions.

humanbenchmark.com/tests/reactiontime

My initial response time was 244ms ;)

  1. Make performance a priority.
  2. Test, measure, test again.
  3. Learn about tools.
  4. Balance performance with features.
  5. Track results over time.
  6. Set targets.
  7. Ask questions; check for yourself!

He pointed at me and said, "This guy has been asking me questions all day, and he's not entirely sure I'm right about everything, which is good...I'm not right about everything...I can be wrong."

Tim Berners-Lee invented the WWW, HTTP, and the URL addressing scheme.

Doug Engelbart invented the mouse and hypertext. He died 2 years ago.

Dr. Charles Nelson (?) invented SGML. [Hmm, Wikipedia says something else.]

HTML is based on CALS which is an SGML dialect.

Tim Berners-Lee wrote the original code for all of this, although he's not very good at writing code. His genius was assembling all the parts into a working system.

Wednesday, October 21, 2015

HTML5DevConf

I went to HTML5DevConf. Here are my notes:

ES6 for Everyone

JavaScript started in 1995.

ECMAScript version 3 came out in 1999. It was in IE6. It's what most people are used to.

for (var state in states) {
  if (states.hasOwnProperty(state)) {
    ...
  }
}

There were 10 years of sadness after that during which not much happened.

ECMAScript 5 came out in 2009. It was a minor improvement.

Object.keys()
Object.create()
Array.forEach()

Then, HTML5 started happening.

Object.keys(states).forEach(function(state) {
  ...
});

Babel is a tool that compiles ES6 code to ES5 code.

Only 10 people in the room were using it in production.

Here is some ES6 code:

let states = {...};
Object.keys(states)
  .forEach(state => console.log(state));

ES6 is mostly syntactic sugar.

To use it, you must have Chrome, Edge, or Babel.

Default parameters:

... = function(height = 50) {
  ...
}

Previously, you had to do:

... = function(height) {
  height = height || 50;
  ...
}
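Note that the two are not quite equivalent; the || fallback also kicks in for falsy arguments like 0 or "". A small sketch:

```javascript
// Old style: || replaces any falsy value, not just a missing one.
var oldWay = function(height) {
  height = height || 50;
  return height;
};

// ES6 default: only applies when the argument is undefined.
var newWay = function(height = 50) {
  return height;
};

oldWay(0);  // 50 -- the 0 gets clobbered
newWay(0);  // 0  -- the 0 survives
```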

Template literals (back ticks):

var name = `${first} ${last}`;

var x = `
I have a wonderful
block of text here
`;

Destructuring:

Grab the house and mouse properties from the thing on the right:

var { house, mouse } = $('body').data();

If those properties aren't defined, you just get undefined.
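A tiny sketch of that (the data object here is made up):

```javascript
var data = { house: 'blue' };     // no "mouse" property
var { house, mouse } = data;

house;  // 'blue'
mouse;  // undefined -- missing properties don't throw
```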

He's not covering let and const.

Grab the middleware property from the my-module module:

var { middleware } = require('my-module');

It works with arrays too:

var [column1, column2] = $('.column');

Skip a column using multiple commas:

var [line1, line2, line3,, line5] = contents;

How do you get started with ES6?

  • Babel converts ES6 to ES5.
  • Webpack takes your JS modules and combines them into a single file.

Arrow functions.

Old way:

var greetings = people.filter(function(person) {
  return person.age > 18;
}).map(function(person) {
  return "Hello " + person.name;
});

New way:

var greetings = people
  .filter(person => person.age > 18)
  .map(person => "Hello " + person.name);

If you have more than one line, you can add brackets, but that gets rid of the implicit return statement.

Arrow functions deal with "this" better.

You don't have to do things like:

var self = this;

Arrow functions implicitly bind "this".

$.get("/awesome/api", data => {
  this.doSomethingWith(data);
});

Be careful of jQuery because jQuery will hijack "this".
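A self-contained sketch of the lexical "this" behavior (the counter object is made up; no jQuery involved):

```javascript
var counter = {
  count: 0,
  incrementAll: function(items) {
    // The arrow function keeps the enclosing "this" (the counter),
    // where a plain function callback would get its own "this".
    items.forEach(item => { this.count += item; });
  }
};

counter.incrementAll([1, 2, 3]);
counter.count;  // 6
```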

Use parentheses if you have more than one argument.

() => console.log("hi");

(person) => person.name;

person => person.name;

(one, two) => {
  return one * two;
};

An IIFE (immediately invoked function expression):

(x => console.log(x))("hey");

All of the above are the simplest, most useful features of ES6.

He's not going to cover classes today. He thinks that they don't add enough value.

Promises:

$.get(`/people/${id}`).then((person) => {
  ...
}).catch(function() {
  console.log(...);
});

This standardizes promises because everyone had their own version.

Only two things are important:

  1. .then
  2. .catch

Errors in the "then" also end up in the catch which is really helpful.
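A minimal sketch of that propagation:

```javascript
var caught = null;

// An error thrown in a .then handler lands in the downstream .catch.
Promise.resolve(1)
  .then(function() { throw new Error('boom'); })
  .catch(function(err) { caught = err.message; });  // caught becomes 'boom'
```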

Use Promise.resolve to convert jQuery promises to ES6 promises.
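This works because Promise.resolve adopts any "thenable". A sketch using a made-up thenable standing in for a jQuery promise:

```javascript
// A stand-in for a jQuery (pre-3.0) promise: it has a then method,
// but it isn't a native ES6 Promise.
var jqueryLike = {
  then: function(onFulfilled) { onFulfilled('payload'); }
};

// Promise.resolve wraps the thenable in a real native promise.
var native = Promise.resolve(jqueryLike);
native instanceof Promise;  // true
```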

The history of modules in JavaScript:

First, we used closures.

var module = (function($) {
  ...
  return ...;
})(jQuery);

Then, AMD:

define(['jquery'], function($) {
  ...
});

Then CommonJS (which is a good place to start):

var $ = require('jquery');

var x, y, z;
module.exports = $.doSomething();

ES6 modules:

import $ from 'jquery';

var x, y, z;
export default $.doSomething();

// More examples:
export var age = 32;

export function printName() {
  ...
}

// In another file:
import { age, printName } from 'my-module';

He still didn't cover a ton of stuff. Babel has a nice tutorial on all the new features.

Reusable Dataviz with React and D3.js

@swizec

http://swizec.com/html5devconf

He showed how to implement a Space Invaders game using React and D3. Very clever!

The problem: Spreadsheets don't scale. Simple libs aren't customizable. D3 is hard.

React principles:

  • Components
  • Immutability
  • Just re-render

Solution: change the data and redraw. This is how a video game works.

Flux is an architecture, not a library.

Only 1 person in the audience admitted to using Emacs. Crazy.

You can use React to build SVG, not just HTML.

Just update the model and let React deal with it.

React gives you hot reloading.

Use D3 more for the math and less for the SVG.

He thinks that React may be better than Angular because of the diffing algorithm (and virtual DOM). Angular also makes it harder to move things around because of the nesting and scopes.

He strongly recommends using Flux with React.

React--Not Just Hype!

@mjackson

@ReactJSTraining

He is the primary author of react-router.

He also works on Redux.

He gave many compelling arguments and examples of how React has an advantage over Angular and Ember.

React has historically had better performance than Ember and Angular.

"It's amazing what you can do with the right abstraction."

He wrote mustache.js, the JavaScript implementation of Mustache.

In Angular, you can put things in scope in the controller, rootScope, directive, somewhere else in the view, etc. If you see a variable in the template, it's hard to know where it's coming from.

He showed a nice side-by-side example of Angular and React.

It's easier to drive things from the JS (in React) than from the template (in Angular).

<div ng-repeat="model in collection">
  {{model.name}}
</div>

vs.

collection.map(model => (
  React.createElement('div', null, model.name)
))

or:

collection.map(model => (
  <div>{model.name}</div>
))

Angular has to keep inventing new DSL syntax for the templating language to implement new features. In React, you just have JavaScript.

He's done a lot of work in both models. He just thinks the React model is better.

Scoping in Angular is complex. Scoping in React is just JavaScript scoping.

In general, it's better to be in JavaScript than to keep on adding stuff to Angular's templating DSL.

MDN's docs are pretty good for JavaScript documentation.

React lets you "describe UI". It doesn't have to render to just HTML. It can render to other things as well, for instance SVG, Canvas, three.js (WebGL), Blessed (terminal), etc.

Netflix uses React to render to TVs.

Decouple from the DOM. Describe the UI and render wherever it makes sense.

This is the same thinking that underlies React Native.

He poked at Ember a bit.

With Ember, it's complex to understand the side effects of setting a property on an object.

If you're setting two properties, it might end up executing the same block of code twice. You can't batch the changes.

He doesn't like Object.observe. He says it makes it hard to know what's going to happen when you make changes to state.

Two important questions:

  1. What state is there?
  2. When does it change?

He likes setState more than the KVO paradigm (i.e. Object.observe).

setState:

  • One object contains state.
  • Imperative, intent is obvious.
  • Easy to grep.

React inserts a virtual DOM between the view logic and the DOM.

This lets you delegate a lot of responsibilities to the framework. The framework can handle a lot of optimizations. You can also introduce new render targets.

Most tech companies have web, iOS, and Android teams that are all mostly separate.

He recommends structuring teams around feature verticals instead of technologies.

He gave a plus for React Native.

Architecting Modern JavaScript Applications

He's one of the Meteor developers.

Meteor is a full stack framework. It encompasses both the frontend and the backend. It's really quite impressive what it can do and how far ahead of other stacks it is.

He wants to talk about the "connected client" architecture.

He used Uber as an example. You can look at a map and watch all the cars move around in realtime.

Move from websites to apps.

The main driver for this is mobile.

Apps:

  • Stateful: There's a stateful server with a connection to the client (WebSockets or XHRs)
  • Publish / subscribe
  • Data: What goes over the wire is data, not presentation

He has mentioned multiple times that the server needs to be able to push data to the client.

The architecture for these modern apps is really different.

We've seen major shifts in how software is built before:

Mainframe => PC => Web => Connected client

Connected client:

  • Stateful clients and servers with persistent network connections (WebSockets) and data cached on the client.
  • Reactive architecture where data is pushed to clients in real time.
  • There's code running on the client and in the cloud. The app spans the network.

You need to cache things on the client in order to get any kind of performance. Hence, the server needs to know what's on the clients.

You have to be able to push code changes.

We have to move away from stateless, share nothing, horizontally scalable architectures.

Each part should "reactively" react to changes in state.

You have to have the same team working on a feature on the client and on the server.

However, you can still have separate teams for separate microservices.

"JavaScript is the only reasonable language for cross-platform app development."

That's because it runs everywhere you need it to run.

He says JavaScript is better than cross-compiling from languages like Java.

He gave a plug for ES 2015.

You can transition to ES 2015 gradually.

He likes ES 2015 classes.

He showed how ES 2015 code made the code much nicer and shorter.

Sass seems to be more popular than Less.

He talked about Reactivity.

He wants more than what Angular or React can provide. He wants reactivity across the whole stack.

With Meteor, you can be notified of changes in your datastore so that you can update your UI. (That's impressive. I've seen the Android guys do that, and the Firebase guys do that, but I haven't seen many other people do it.)

He's very pro-WebSockets. Use an XHR if they're not available.

He doesn't use REST. You publish/subscribe with live data.

He talked about Facebook's Relay Store, GraphQL, etc.

DevOps is different when you have connected clients.

He talked about something called Galaxy.

You need stable sessions. I.e. when a client reconnects, they must reconnect to the same server.

He talked about reactive architectures for container orchestration.

He plugged Kubernetes and Marathon (Mesos).

You have to update the version of the software on the client and the server at the same time.

He showed how he dealt with deploying new versions of the software. It's complicated.

Meteor is an open-source connected client platform. It simplifies a ton of stuff.

Building Web Sites that Work Everywhere

Doris Chen @doristchen from Microsoft.

She is going to talk about polyfills and Modernizr.

Edge is the default browser in Windows 10.

Edge removes as much IE-specific code as possible and implements as much support for cross-browser stuff as possible.

Testing tools:

  • Site Scan: Free
  • Browser screenshots: Free
  • Windows virtual machines: dev.modern.ie/tools/vms/windows
  • BrowserStack: Paid

http://dev.modern.ie/tools is Edge's developer site. It's pretty neat. It gives you a bunch of warnings for when you're doing things that don't have cross-browser support.

She keeps skipping Chrome in a lot of her slides ;)

I think she said stuff for CSS3 needs prefixing.

Autoprefixer is a postprocessor that automatically adds CSS prefixes as necessary.

You can integrate this using Grunt.

Browser detection (using the version string to decide what your code should do) just doesn't work.

All the browsers have User Agent strings that mimic each other. They embed substrings of the other browsers.

Instead use feature detection.

IE browser detection is going to be broken in Edge since it doesn't support the older IE ways of doing things.

Modernizr is the best tool to use for feature detection.

It also has polyfills to keep older browsers supported.

Edge's dev tools are called F12 Tools.

She talked about polyfills.

MP3 is supported in all the browsers.

H.264 (MPEG-4) is supported in all the browsers.

Browsers won't render elements they don't understand (such as video tags), but they will fallback to rendering the stuff inside the tag.

She called Silverlight "legacy", and said people should move away from using any plugins.

She disables Flash as well as all other plugins.

She talked about using Fiddler to modify requests and responses.

She used it to manually change some Flash video to HTML5 video by looking at the MP4 the Flash code was playing (it was an argument to the Flash player).

Edge has an emulation mode to emulate older versions of IE.

Evolution of JavaScript

The story of how the "glue of the internet" became the world's most popular programming language.

His view of the history of JavaScript was, unfortunately, somewhat flawed...Actually, it was very, very flawed.

I left early along with some other people.

WebGL: The Next Generation

Tony Parisi.

http://tonyparisi.com

WebGL is everywhere. It runs generally the same everywhere.

It started in 2006 at Mozilla.

It's OpenGL but built to work the way the web works.

It has very low-level drawing primitives.

No file format, no markup language, no DOM.

Libraries and frameworks are key to ramping up quickly and being productive.

If you use the base API, it takes a 300 line program to make a cube that spins in space with an image on each face.

three.js is the most popular WebGL library. It's very well done.

WebGL 2 is a major upgrade based on OpenGL ES 3.0. It was introduced earlier this year.

Unity can cross-compile from C++ to JavaScript using Emscripten.

WebGL 2 has lots of features that allow you to get low-level efficiency gains.

It has deferred rendering. It enables you to have tons of light sources.

WebVR is virtual reality in the browser.

The world of VR is moving very quickly.

Firefox and Chrome have VR working in the browser.

Head tracking is part of VR.

So is stereo view mode.

Firefox and Chrome can attach to VR devices.

You have to update the head tracking position at 70-90 Hz to avoid motion sickness.

HMD = head mounted display

He talked about Google Cardboard. It's not as good as the professional stuff, but it's still pretty neat.

Browsers have an accelerometer API.

glTF is a web-friendly 3D file format for use with WebGL. It's a "JPEG for 3D".

He wrote the glTF loader for three.js.

FBX is a fairly standard proprietary format. There's an FBX to glTF converter.

He wrote "Learning Virtual Reality" along with two other O'Reilly books related to WebGL, etc.

He has an Oculus Rift DK2 headset. You can buy it on their website.

Firebase: Makes IoT Prototyping Fun

This was one of my favorite talks because it was so simple and yet so inspiring.

Jenny Tong @MimmingCodes

She gave a brief intro to some simple EE topics.

0V = ground

PWM = pulse width modulation

10k ohm resistors are the only resistors you need to worry about right now.

The longer foot of an LED goes to high voltage, and the other foot goes to ground.

She's working with voltages in the range of 0 to 5.

Arduino Unos are simple, durable, and inexpensive. Keep in mind there is no OS. Your code runs directly on the metal.

A Raspberry Pi is a tiny computer. It usually runs Linux.

Instead of doing things mostly in hardware or in C, she tries to make it so you can do things in JavaScript.

Her standard recipe is:

  • Arduino Uno
  • Johnny-Five: a NodeJS library for interacting with hardware stuff
  • Raspberry Pi (or laptop): for talking to the internet
  • Firebase (she's a developer advocate for Google Cloud)

Johnny-Five is very versatile and has great documentation. It has images that walk you through various projects.

Firebase has a realtime database. You can think of it as a giant ball of JSON in the sky. You can create a listener in your JavaScript code to listen for changes to the database.

http://wherebus.firebaseapp.com is a cool example.

The "hello world" of IoT is creating a button that changes something on the Internet. During her talk, she created a button that controlled a real light. The light responded to changes in the data in her Firebase database.

Arduino has a bridge called Firmata that can be used to connect Johnny-Five to your Arduino.

She started wiring things together using a breadboard:

Red = high
Brown = low

She started with some JavaScript to make an LED blink.

A pull-down resistor (10k ohm) connected to ground gives the electrons a place to go when the circuit would otherwise be open (i.e. not connected).

She integrated Firebase really quickly.

You can use APPNAME.firebaseio-demo.com/TABLE as a Firebase DB without even signing up. However, keep in mind that so can everyone else.

You can go to the Firebase URL to get a web view of the data. Change the data there, and it gets pushed into the JavaScript code, which controls the board.

It's really easy to prototype cool things with real hardware, JavaScript, and Internet connectivity.

When you are learning how to build this stuff, these things will catch on fire sometimes. That's okay. Keep a fire extinguisher on hand ;)

mimming.com/presos/internet-of-nodebots
github.com/mimming/snippets

You can buy a bunch of cool hardware from Adafruit.

Upgrade your JavaScript to ES2015

David Greenspan (Meteor Development Group).

The speaker takes a very conservative approach to new browser features. In the talk, he gave an overview of whether using ECMAScript 2015 features via transpiling with Babel is a prudent thing to do.

He started in 2007 with IE 6 (Firefox 1.5, Safari 2, no Chrome).

Meteor dropped IE 6 in 2013.

IE 7 still couldn't garbage-collect cycles between JavaScript and the DOM. [I remember that. I sure am glad they fixed that!]

You can enjoy ECMAScript 2015 because of Babel which is a transpiler.

ECMAScript3 -- 1999 -- IE 8-9
ECMAScript4 -- cancelled in 2008
ECMAScript5 -- 2009 -- IE 9, "modern" browsers -- just a minor update
ECMAScript6 -- 2015 -- being implemented now (aka ECMAScript 2015)

They're going to start rolling new versions annually.

Features will appear quickly in new browsers.

In 0-2 years, we'll stop caring about IE 8.

When people say "ES", they're just using it as an abbreviation for "ECMAScript".

Start using ES 2015 and transpile to ES 5 using Babel.

Babel targets ES 5. You can even get it to transpile to ES 3 if you build the right plugins.

Babel can get you a lot of the newer syntax features.

However, it can't provide all of the "engine-level" features such as efficient typed arrays, Proxy, Reflect, WeakMap, WeakSet, and proper tail call optimization.

Babel can do generators, but it's a tricky transform.

All sophisticated JS apps have a build step of some sort these days, so requiring a build step is nothing new.

Readability / debuggability of generated code isn't bad, especially with source maps.

Even just a year ago, the transpilers weren't that great. Babel has kind of won the war.

Babel is ready for production. However, you do have to follow some guidelines:

  • Use "loose" mode. It's the fastest, and it makes the best trade-offs.
  • Avoid "high-compliancy" mode.
  • Don't go nuts with experimental (ES7+) transforms.
  • Whitelist your transforms.
  • Use external helpers.
  • If you need IE 8 support, you'll need custom helpers.

Here's an ES6 class:

class TextView extends View {
  constructor(text) {
    super();
    this.text = text;
  }

  render() {
    return super.renderText('foo');
  }
}

It's just nice syntactic sugar for what you'd do anyway.
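Roughly (with details like property enumerability glossed over) that sugar corresponds to the prototype pattern below. The stub View here is made up so the sketch stands alone:

```javascript
// A stub base "class" so the example is self-contained.
function View() {}
View.prototype.renderText = function(s) { return 'rendered ' + s; };

// What the TextView class desugars to, approximately:
function TextView(text) {
  View.call(this);          // super();
  this.text = text;
}
TextView.prototype = Object.create(View.prototype);
TextView.prototype.constructor = TextView;
TextView.prototype.render = function() {
  return View.prototype.renderText.call(this, 'foo');  // super.renderText('foo')
};
```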

Classes are very controversial in JavaScript. However, he said that there is no significant reason not to use classes.

Arrow functions:

[1, 2, 3].map(x => x*x)

['a', 'b', 'c'].forEach((letter, i) => {
  this.put(i, letter);
});

setTimeout(() => {
  this.foo();
}, 1000);

"this" refers to the same thing inside and outside the function. It's not broken for nested functions.

Use arrow functions.

Block-scoped variable declarations: let and const

The difference between let and var is that var is function scoped, but let is block scoped.

const is like final. It just says that the variable can't be reassigned. That doesn't mean that the thing the const points to isn't mutable.

Switch from var to let and const. Default to using const.
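A quick sketch of both points (the config object is made up):

```javascript
const config = { retries: 3 };
config.retries = 5;        // fine: const blocks reassignment, not mutation
// config = {};            // would throw "Assignment to constant variable"

for (let i = 0; i < 3; i++) {
  // i is visible only inside this block
}
// console.log(i);         // would throw "i is not defined"
```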

Imports:

import {foo, bar} from "mymodule";

export function logFoo() {
  ...
}

It transpiles to CommonJS.

By using transpilers, we'll always be able to try out new features before they're completely set in stone. We get the features early, and the standards bodies get some feedback on their designs. It's a win win.

Use Babel and ECMAScript 2015 today.

However, don't use engine-level features like generators, Proxy, Reflect, etc. yet.

The latest Chrome supports a lot (but not all of) ES 2015.

Proper tail call optimization is not implemented in any of the browsers yet.

ES7 and Beyond!

Jeremy Fairbank @elpapapollo

TC39 is the group responsible for the standard.

He uses React a lot.

ES6 was only standardized this year, and there's already more stuff coming down the pipeline.

Some proposed features might be coming after ES7. They call this "ES.later".

You can use those features now using Babel.

He went over what features were at what level of standardization. I didn't get a chance to write down the exact meaning of each of the stages, but the larger the stage number, the closer the feature is to being in the standard.

Remember, nothing in this talk has been fully standardized yet.

At stage 1:

Decorators:

class Person {
  @enumerable
  get fullName() {
    ...
  }
}

function enumerable(target, name, descriptor) {
  descriptor.enumerable = true;
  return descriptor;
}

Decorators can take arguments.

Decorators can work on classes as well. You can use this to create a mixin @decorator.
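Since a decorator is just a function, a decorator that takes arguments is a function that returns a decorator. Here's a sketch with a hypothetical `configurable(value)` factory, applied by hand since the @ syntax itself needs a transform:

```javascript
// A decorator factory: @configurable(false) calls this with
// `false` and uses the returned function as the decorator.
function configurable(value) {
  return function (target, name, descriptor) {
    descriptor.configurable = value;
    return descriptor;
  };
}

class Person {
  get fullName() {
    return 'Ada Lovelace';
  }
}

// What `@configurable(false)` on the getter desugars to, roughly:
const desc = Object.getOwnPropertyDescriptor(Person.prototype, 'fullName');
Object.defineProperty(
  Person.prototype,
  'fullName',
  configurable(false)(Person.prototype, 'fullName', desc)
);
```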

At stage 2:

Object rest and spread.

// Rest
const obj = { a: 42, b: 'foo', c: 5, d: 'bar' };

// Save the "a" and "b" attributes into the variables named
// "a" and "b", and save the rest of the attributes into an
// object named "rest".
const { a, b, ...rest } = obj;

// Spread
const obj = { x: 42, y: 100 };

// Add all of the attributes from obj inline into the current
// object. In "obj" has a "z" attribute, it'll override the
// earlier "z".
const newObj = { z: 5, ...obj };

// If "obj" has an "x" attribute, it'll be overriden by the
// "x" that comes after it.
const otherObj = { ...obj, x: 13 };

Object spread is transpiled to Object.assign; object rest is transpiled to a small helper that copies the remaining properties.

Here's a nice example of their use:

function createDog(options) {
  const defaults = { a: "Some default", b: "Some default" };

  return { ...defaults, ...options };
}

Be careful, this is not a deep clone.
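A quick sketch of the pitfall: spread copies only the top level, so nested objects are shared between the copy and the original.

```javascript
const original = { name: 'Rex', toys: ['ball'] };
const copy = { ...original };

copy.name = 'Fido';     // does NOT affect original -- top-level value
copy.toys.push('bone'); // DOES affect original -- same shared array
```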

At stage 3:

Async functions. These provide synchronous-like syntax for asynchronous code.

Prepend the function with "async". Use the "await" operator on a promise to obtain a fulfilled promise value.

async function fetchCustomerNameForOrder(orderId) {
  let order, customer;

  try {
    order = await fetchOrder(orderId);
  } catch (err) {
    logError(err);
    throw err;
  }

  try {
    customer = await fetchCustomer(order.customerId);
    return customer.name;
  } catch (err) {
    logError(err);
    throw err;
  }
}

Await awaits a promise and obtains the fulfilled value. It's non-blocking under the covers.

It takes a lot of work to transpile async / await code into ES5 code.

Babel has a REPL that you can use to see how it transpiles things.

Here's how to make requests in parallel:

const orders = await Promise.all(
  orderIds.map(id => fetchOrder(id))
);

You should prefer making a bunch of requests in parallel rather than making them sequentially.

Sometimes you have to do things sequentially because of the business logic. You have to get one thing before you know what the next thing you need is.
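Both shapes side by side; the fetchOrder here is a stand-in that resolves immediately, where real code would hit a server:

```javascript
// Hypothetical stand-in for a network request.
function fetchOrder(id) {
  return Promise.resolve({ id, total: id * 10 });
}

async function sequential(ids) {
  const orders = [];
  for (const id of ids) {
    // Each await waits for the previous request to finish --
    // necessary when one result feeds the next request.
    orders.push(await fetchOrder(id));
  }
  return orders;
}

async function parallel(ids) {
  // All requests start at once; await the combined promise.
  return Promise.all(ids.map(id => fetchOrder(id)));
}
```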

Async functions let you use the language's native flow control constructs like for...of and try/catch. With raw promise chains, you can't easily use those constructs; you're stuck with .then() and .catch() instead.

There are many other upcoming features in various stages:

SIMD: single instruction, multiple data.

Array.prototype.includes
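A quick sketch of why includes is nicer than the old indexOf check: it reads as a boolean, and it can find NaN, which indexOf cannot.

```javascript
[1, 2, NaN].includes(NaN); // true  (uses SameValueZero comparison)
[1, 2, NaN].indexOf(NaN);  // -1    (NaN !== NaN under strict equality)
[1, 2, 3].includes(2);     // true, instead of indexOf(2) !== -1
```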

Object.observe

See: https://github.com/tc39/ecma262

To get every possible crazy feature, use babel --stage 0 myAwesomeES7code.js. However, keep in mind that if the standard goes in a different direction, your code is going to break.

http://speakerdeck.com/jfairbank/html5devconf-es7-and-beyond