
JavaScript: ForwardJS Day 1


I went to ForwardJS. Here are my notes for day 1:

Livable Code

Sarah Mei @sarahmei @LivableCode

She's been doing software consulting for 10 years--half of her career. She runs RubyConf and RailsConf.

She talked at a very high level about the software process in general such as the Capability Maturity Model, Conway's Law, the statement that there's no silver bullet, etc.

There's the code, and then there's the team. She's focusing on the codebase.

If you see a problem in your code, it's reflecting a problem in your team, and vice versa. She referenced Brooks' paper "No Silver Bullet".

Software is more like theater people putting together a play than factory workers making cars.

Software projects are never "done" these days.

She talked about software architecture.

New jobs are in application development. Rarely do they focus on the architectural concerns. The frameworks already exist these days.

A codebase isn't something we build anymore; it's more like a place we live. Just like with a house, some things are easier to change, and some things are much harder to change.

Livable code means you can make the changes you need to make without undue pain. It's different based on the team.

Your day to day happiness depends a lot more on the people you work with than on the code or framework.

A codebase goes bad one decision at a time.

She talked about the show "Hoarders" and how it compares to codebases.

Most hoarders eventually become hoarders again even after the one big cleanup. The more useful approach is to slowly help them build up good habits, making the situation a little bit better one decision at a time.

It's not the big decisions that kill you (like framework, language, etc.). It's the little decisions, group habits, etc.

Everyone has to help clean up every day.

You can leave the big refactors to people interested in doing them, but everyone has to help out.

We need something in between "Simple Living" magazine and "Hoarders".

"No Silver Bullet" doesn't say improvement can't happen. Brooks said that an order of magnitude improvement wasn't possible within a decade.

Thinking of software as engineering is holding us back. It's not a useful model.

You can't live in a codebase that's as perfect as a house in "Simple Living" just like you can't really live in a staged house.

How to improve the "Hoarders" approach to software:

  1. Don't make it worse.

  2. Value improvement over consistency.

  3. Inline everything:

    1. No more stories in your backlog that say "refactor" such and such. Just do it.
  4. Liaise:

    1. You need to talk within your team as well as outside your team.
  5. Don't ask for permission:

    1. …before cleaning stuff up. But do be upfront.
  6. Don't ask for forgiveness:

    1. What you're doing is part of your job. Learn from your mistakes.
  7. Do ask for advice:

    1. But don't always take it.
  8. Do work together:

    1. We all have to live here.
    2. You get to live here.

The new model for software has some bits from engineering. Think again about how people put together a play and all of the parts that go into that.

The important part of software is the system (both the people and the code).

We're not quite engineers like civil engineers. Software is different than any other type of artifact that humans create.

Types for Frontend Developers


They had an Angular 1 codebase that was two years old. They had tests and linting. There was a ton of boilerplate. It was hard to understand code you hadn't seen in a while.

He said that this isn't a talk on the various systems (although he did do a lot of that). It's about why they chose types.

Static vs. dynamic types.

Strong vs. weak typing.

Statically typed: types are checked at compile time.

He mentioned TypeScript, Flow, and PureScript.

PureScript is a Haskell-like language similar to Elm.

He mentioned type inferencing.

Yes, JavaScript has types. They're just not static types.

They wanted to be precise about what the code was meant to do.

"Strings are literally the worst" because they are too imprecise.

He likes union types (he showed something that looked like an enum). He showed some stuff that I would call algebraic types.

Next, they wanted to enforce invariants.

He showed more PureScript examples that looked just like Haskell to me.

He talked about handling undefined.

TypeScript can enforce null checks.

He talked about Maybe in TypeScript.

Flow has n?: number as well as n: ?number, and they're slightly different.

You need types and tests, but, with types, your tests can be more focused on the things that really matter and your code doesn't have to do a bunch of runtime checks.

They wanted something that would allow them to define the shape of the data rather than what they wanted to call it. Some people call this structural typing.

"Inheritance is literally the worst."

They wanted more to be able to reason about their code.

Elm and PureScript both enforce immutability.

PureScript will tell you "more about your code" even than Elm.

He's not a fan of tooling as an argument for static typing, but it is an argument.

There are tradeoffs involved.

There is ramp up time.

For instance, Elm and PureScript may not be familiar to people coming from a JavaScript background. Keep in mind that everything was unfamiliar to you at some point.

Type annotations will increase verbosity, especially if you start using generics. Type inference can help a bit. PureScript is nice because the types are on a separate line.

It requires a build step, but perhaps this isn't an issue if you're already building your JS.

Interop is important.

TypeScript and Flow use definition files, but they may have problems of their own. Sometimes the module and definitions are out of sync. Maybe the definition file doesn't exist. They often write their own.

He says the Any type is a complete non-starter in his book. "Any is literally the worst."

In TypeScript, you should disable the support for implicit Any.
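In a tsconfig.json, that means enabling the noImplicitAny flag (strictNullChecks, mentioned earlier, lives in the same place):

```json
{
  "compilerOptions": {
    "noImplicitAny": true,
    "strictNullChecks": true
  }
}
```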

He showed an example, and then said "That type is a lie."

Any, like inheritance and strings, has its place.

Interop is the biggest tradeoff.

He said moving to types was worth it. They switched to TypeScript.

He said that, despite his best efforts, he couldn't get his team to move to Elm or PureScript, and they may not work for your team.

He linked to a few other talks and papers.

If you're calling into typed code from untyped code, all bets are off.

He doesn't have a lot of reasoning for why you should use TypeScript vs. Flow. TypeScript was just more familiar to them at the time.

Here Be Dragons: Refactoring Terrifying Legacy Code with Jest

Christian Schlensker @wordofchristian from Bugsnag

He's a very entertaining speaker.

He talked about how they used Jest to deal with legacy code.

He talked about how to get rid of a bunch of legacy CoffeeScript, including why they did it.

They did a gradual rewrite, intermixed with doing new feature work.

Initially, they were trying to use manual testing during the refactor. He talked about the problems with that approach.

Some of Jest's features (snapshot testing) are particularly nice for testing legacy code. It's easy to add new tests. The tests are fast. It has built in code coverage.

Snapshot testing is perfect for regression testing.

Find the happy path, and test that first.

Throw a console log into the function at the top to get an example of the input the function really gets.

Always tweak the code to test that a test can fail.

They used decaffeinate to translate their CoffeeScript code into idiomatic ES6 code. It produces code that is behaviorally equivalent. It prioritizes safety over style.

CoffeeScript has implicit returns that get translated into explicit returns even in places where no return is needed.

Jest has spies.

Snapshots can't catch everything.

Jest runs in a node environment in order to make it fast.

There's something called bulk-decaffeinate for running decaffeinate on a project as a whole.

There are nice codemods to move from Mocha / Karma tests to Jest's Jasmine-style tests. They moved from Karma to Jest very quickly.

Jest is a unit testing framework, but you can use it for tests that are more like integration tests.

Jest will determine which files a test depends on, and if a file changes, it'll only run the tests that depend on that file.


Bugsnag looks good according to a person named Talin. It's a big improvement over just using Loggly.

Immersive Analytics

Todd Margolis

He talked about Qlik Playground and Qlik Branch.

"Immersive Analytics is an area of emerging research that blends analytical reasoning with immersive virtual spaces to enhance collaborative decision-making.

"Through the use of virtual reality, augmented reality and large scale tiled display environments, Immersive Analytics can be used to connect colleagues both synchronously and asynchronously as well as locally and remotely.

"Come join the conversation with Todd Margolis who will present a highly interactive discussion that offers analytical examples from a variety of industries, from healthcare, manufacturing, metagenomics and retail, demonstrated by the Qlik analytics platform with Oculus Rift, Microsoft Hololens and tiled-displays through Unity 3D and SAGE2 collaboration middleware."

He came from academia and has been doing VR for a long time.

As you might expect, he had a bunch of really interesting demos that I can't replicate in text.

He talked about different form factors: VR, AR, tablets, phones, a projection screen that covers all the walls, etc.

He mentioned Unity 3D and using it to target Oculus Rift, HoloLens, etc.

Stonehenge is one of the earliest forms of immersive analytics. You walk into it, and it shows you data encoded in the placement of the stones.

Wikipedia is a "highly-interactive form of immersive storytelling".

Are we ready to move beyond the desktop, both technically and socially? Google Glass was a failure ("glassholes").

He showed something with HoloLens. It's AR. It's running Microsoft Edge. It was built with Unity.

Getting Hooked on Vue.js

Dalton Mitchell @daltonamitchell

All React and Angular fans should find something familiar in Vue.js.

It's great because you can add it just a tiny bit at a time, but it also scales well. It's incrementally adoptable. It's 19kb gzipped.

You don't have to focus on the framework.

They were re-writing their web app which was written in PHP and jQuery. Vue.js is popular among PHP developers.

  • Flexible
  • Lightweight
  • Performant
  • Incrementally adoptable

He walked through building a small app.

  • Vue instance
  • Vue component
  • Single file components

He's putting HTML, JavaScript, and CSS all in the same file. At first it seemed weird, but then it became very productive.

It has the notion of a computed property that is re-evaluated only as necessary. It reminds me of a mix of React and MobX.

It has the concept of watchers which reminds me of autorun in MobX or a watcher in Angular.

Vue is just an object with a set of properties that each have their own purpose. Once you understand these properties, you can go back to thinking about your application. It's like a simple toolbox.

You can just start with some HTML with a single script tag to pull in vue.js, and then your code can live in an inline script tag.

It supports Angular syntax where you add properties to HTML, and it also supports a JSX-like syntax for embedding HTML templates alongside JavaScript.

Vue components let you split up your logic.

It has the notion of properties passed from parent components to sub-components.

In his example, he still doesn't have a build step. He said something about a compiler that is similar to Elm's compiler. Hmm, he's using an arrow function; I wonder how he's coping with older browsers.

There's also a custom event system.

There's a vue-loader for Webpack for doing single file components.

He showed a file. There's some HTML in a template tag. There's some JavaScript in a script tag. There's some CSS in a style tag at the bottom. The CSS can be written in SASS. The CSS can be scoped.

There's a vue-cli. You can use it to generate a project skeleton.

He was pretty productive in just a few hours.

"Maybe we need a few more tools like this that focus on beginners but also scale well as your project grows."

ClojureScript in your Pocket: ClojureScript, ReactNative, & GraphQL

Dom Kiva-Meyer @DomKM and Lily M. Goh @LilyMGoh

They had to replicate an iOS app in Android. They were web people. They had a tight deadline.

They're using Clojure on the backend.

They already had React on the frontend.

They wanted to move from REST to GraphQL.

With ReactNative, the app renders on a different thread than your app logic thread.

These are the things they most wanted:

  • Fast iteration
  • Familiarity
  • Cross-platform
  • Performance
  • Large ecosystem

They achieved all of these goals.

ClojureScript and React are a good match. React is very functional at its heart.

They're using Expo with ReactNative:

  • Additional APIs
  • Developer tools
  • Instant deployment
  • ReactNative + Expo is roughly similar to web development
  • Fast iteration

Next, they introduced GraphQL.

The schemas, requests, and responses all have a similar JSON-like shape.

There aren't many people using ClojureScript, ReactNative, and GraphQL all at the same time.

Moving from React to ReactNative is very easy.

ReactNative is not a browser environment and also not Node. User interactions are also different.

Styling in ReactNative has a surprisingly high learning curve. There's no CSS. There is no global styling. However, using JavaScript for styling is powerful.

Programming and styling are more intertwined in ReactNative than React.

Just because your code works on Android and you're using only cross-platform APIs doesn't mean it's going to seamlessly work on iOS. It's not straightforward.

ReactNative itself is incomplete. There are tons of native APIs that aren't wrapped / available. If you know both platforms, you can write native code to bridge the gap. Expo helps a lot, but if you use Expo, you can't use the rest of the ReactNative ecosystem's native bridges. They had to "detach" the project, which means they embedded the Expo source code into their own project, and now it's more like a normal native project. This forces you to use the platform-specific build tools.

Live reload is sweet in ClojureScript. It's very fast. However, the initial configuration was difficult since not many people use ReactNative with ClojureScript.

GraphQL has really great tooling. It's much better than what you can do with REST.

He said that GraphQL is as useful an abstraction as React is. He also mentioned Apollo. A lot of complexity entirely vanishes for the client. It was a huge win for them. You declaratively state the data that you need. Your data is either loading, loaded, or there was an error.

A GraphQL server will be more complex than a RESTful server. He said with REST, the clients connect various different resources together, but with GraphQL, those connections are done on the server. He said this is irreducible complexity, and he would prefer to deal with that complexity on the server.

There are also incidental complexities that it adds. For instance, you often serve everything through a single endpoint, so HTTP caching is no longer an option, and yet caching is really, really important with GraphQL.

D3, TypeScript, and Deep Learning

Oswald Campesato

Summary: This is meant to be an intro, but I really don't think people could possibly understand this unless they already knew the material.

You have to understand the concepts before you can understand the code. Keep in mind, you have to put in your time to understand all of these things.

Deep Learning is part of Machine Learning which is at the top of the hype curve right now.

The official start of AI was in 1956.

He's talking about neural networks.

AI vs. ML vs. DL (Deep Learning).

They all started in the 50s.

Expert systems were a big thing in the 80s.

This was the era of LISP and Prolog.

The machine power just wasn't there at the time.

ML and DL use tons and tons of data.

Deep Learning is based on neural networks. It involves massive data sets and lots of heuristics. It's heavily based on empirical results.

The big bang in 2009 was deep-learning neural networks and NVIDIA GPUs.

Microsoft came up with something with 1000 hidden layers.

Beating the best Go player and improving Japanese <=> English translation were huge accomplishments.

He introduced linear regression.

It seems like he's covering stuff from Andrew Ng's Machine Learning course.

He's throwing out lots of terms like supervised learning, etc.

He mentioned that semi-supervised learning is the hot topic right now.

If you're going to get into Deep Learning, you should learn Python. His code sample used numpy.

It's all about heuristics in machine learning. Try all the possibilities. See what works, and go in that direction. It's all about trial and error.

Stuff changes all the time. Stuff could be relatively different in a period of 6 months.

He talked about gradient descent.

He talked about neural networks with more hidden layers.

He talked about the fact that training the models can take a lot of CPU time (and also calendar time).

"Add layers until you start overfitting your training set."

"Add layers until the test error does not improve anymore."

Convolutional Neural Networks are stateless.

Recurrent Neural Networks are more complex. They're good for NLP and audio processing. They have long term memory.

There are APIs that give you all these things.

He talked about convolution matrices. Blur, edge, sharpen, emboss--it's like photo processing, but just applied to numbers in a matrix.

You can generate an image that looks like the original image but defeats any neural network-based processor.

He moved onto SVG.

Someone converted the TensorFlow playground to TypeScript.

"So now you know SVG and D3..."

Quora is the Stack Overflow of Deep Learning.

You Don't Know Node

Samer Buna

You probably don't know most of Node.

Most Node educational content only covers packages; it doesn't cover the Node runtime itself.

The call stack is part of v8.

You only get one call stack.

The event loop is part of libuv, not V8.

The event loop is a loop that picks events from the event queue and pushes their callbacks onto the call stack.

There is the V8 engine. There's the Node API. The event queue is in the middle.

setTimeout is not part of V8.

If the event queue is empty and the call stack is empty, Node will just exit.

Besides V8 and libuv, what external deps does Node have? http-parser, c-ares, OpenSSL, and zlib.

You can run Node with a different engine such as Chakra (kudos Microsoft).

He talked about some details about how exports work.

He talked a little about the module system. Every Node file gets its own IIFE.

The IIFE gets passed exports, require, module, __filename, and __dirname.

When you do a require, Node will:

  1. Resolve the thing you're trying to require to a path.
  2. Load the thing into memory.
  3. Wrap the content with an IIFE.
  4. It evaluates the file with V8.
  5. It caches the file so that it only gets evaluated once.

require.resolve('module-name') will tell you if the module exists.

He talked about circular module dependencies. That's not actually illegal. However, one module might get a partial version of another module which leads to cryptic errors.

Aside from .js, Node will also look for .json and .node files automatically.

.node files are compiled binary addons (typically written in C or C++).

He has written more about how require works.

He talked about spawn vs. exec: spawn, by default, does not use a shell. exec does. spawn works with streams. exec buffers the whole output. spawn is much better for both reasons. He wrote a blog post about this.

Use node --use-strict to make Node strict everywhere. This is actually a V8 argument.

process.argv has command line arguments.

Use process.on('uncaughtException', (err) => { … }) to synchronously handle things before a crash. You can't do anything asynchronous here. You have to call process.exit(1) in this callback.

Dot commands in the Node REPL: .help, .break, .clear, .editor, etc.

_ contains the result of the last expression evaluated in the REPL.

Buffer.alloc(num) gives you memory initialized to 0s. Buffer.allocUnsafe(num) gives you uninitialized memory.

The node cluster module can set up a master process with worker processes. This lets you use multiple CPUs. However, each worker is a separate Node process with its own heap.

Many modules use the cluster module by default.

Sometimes it's okay to use the synchronous filesystem APIs, such as at initialization. Don't use these APIs when you're handling a request on a server.

console.dir(global, { depth: 0 }) prints one layer of a deeply-nested object.

You can debug a Node program in Chrome dev-tools. Use the --inspect argument.


He covered the ways of handling asynchronous code:

  • Callbacks
  • Promises
  • async/await

Callbacks aren't necessarily asynchronous.

You really should make your functions always be synchronous or asynchronous.

There's an event system. See the events module.

Event emitters are better than callbacks if you need multiple things to respond to an event.

Event emitters are at the heart of things in Node.

Streams are an important topic in Node. If you're not using streams, you're doing it wrong.

Paused streams have to be read from explicitly using read().

With flowing streams, you have to register events to capture their data.

You can move back and forth between paused and flowing streams.

"Everything in Node is a frickin' stream."

There are transform streams that take data, do something to the data, and then let you read from them.

All streams are event emitters.


Shell pipes map directly onto stream pipes: a | b | c | d => a.pipe(b).pipe(c).pipe(d)

Implementing streams is very different than consuming streams.

He's going to give a 7 hour tutorial on Node later this week.


