Monday, August 15, 2016

Ideas: Mining the Asteroid Belt

Disclaimer: I don't even pretend to know what the heck I'm talking about. Feel free to ignore this post.

I've been thinking lately about efficient ways to mine the asteroid belt. My guess is that there are a lot of useful raw materials out there, but getting them back to Earth is kind of a challenge.

Now, in my thinking, I'm presupposing that we have a working space elevator. Nonetheless, it's still a challenge because the asteroid belt is so far from Earth's orbit. It would take a lot of time and energy to travel there and back in order to gather materials. Certainly, we'd need some robotic help.

However, the distance (and time involved) becomes less of an issue once you have a system in place. To use an analogy, selling whiskey that's been aged for 10 years is only difficult when you're waiting for those first 10 years to pass. After that, there's always another batch about ready to be sold.

One problem is that it would take a lot of energy to move large amounts of raw materials back toward Earth. Inertia sucks when you're trying to move heavy things. Furthermore, you have to somehow transport that energy all the way out there. Sure, a space elevator might help you get things off the surface, but the asteroid belt is still a long way away.

The next problem is Newton's third law: "For every action, there is an equal and opposite reaction." You can waste a lot of fuel trying to push the raw material toward Earth. However, it might be helpful to push some material toward Earth and an equal amount away from Earth.

Next up, consider that it would be useful to push the material toward a position near Earth's orbit so that it can be captured and brought down to the surface. Hopefully, we won't mess this part up and doom humankind to the same fate as the dinosaurs ;)

I was thinking of creating three bundles of raw materials at a time and then using a contraption between them to send the bundles off in different directions. By varying the angles and the relative weights of the bundles, you could slightly vary the speeds. The contraption might look like a triangle with a piston on each side. I'm not sure what the best way of driving the pistons would be.
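To make the idea concrete, here's a toy momentum-balance check (a minimal sketch in JavaScript with made-up masses and speeds; the real problem is 3-D, and orbital mechanics makes it far messier). If the bundles' momenta sum to zero, the contraption gets no net kick and stays put:

    // Toy 1-D momentum check: positive velocity = toward Earth.
    var bundles = [
      { mass: 2000, velocity: 1.0 },  // kg and km/s -- headed toward Earth
      { mass: 1000, velocity: -1.0 }, // headed away from Earth
      { mass: 1000, velocity: -1.0 }, // headed away from Earth
    ];
    var totalMomentum = bundles.reduce(function (sum, b) {
      return sum + b.mass * b.velocity;
    }, 0);
    console.log(totalMomentum); // 0 => no net push on the contraption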

However, one good thing is that you'd be able to reuse the contraption over and over again. Only the raw materials would move. The contraption would stay in place to be reused.

It would probably make sense for bundles headed toward Earth to have some sort of propulsion and positioning mechanism. The contraption that I mentioned above is only there for the initial launch.

Thursday, August 11, 2016

JavaScript: ForwardJS

Here are my notes from ForwardJS:

My favorite talks were:

Keynote: On how your brain is conspiring against you making good software


Jenna Zeigen @zeigenvector.

jenna.is/at-forwardjs.pdf

I particularly enjoyed thinking about how this talk relates to politics ;)

She studied cognitive science. She wrote a thesis on puns.

"Humans are predictably irrational." -- Dan Ariely

"Severe and systematic errors."

Humans aren't great logical thinkers.

People will endorse a bad argument if it leads to something they believe to be true. This is known as the belief bias.

"Debugging is twice as hard as writing a program in the first place" -- Brian Kernighan

We tend to interpret and favor information in a way that confirms our pre-existing beliefs.

We even distrust evidence that goes against our prior beliefs. It's even harder for emotionally charged issues.

We have a tendency to be rigid in how we approach a problem.

We sometimes block problem solutions based on past experiences.

We often have no idea we're going to solve a problem, even thirty seconds before we crack it.

Breaks are more important than you think.

Creativity is just about having all the right ingredients.

Again, we tend to think about problems in fixed ways. That makes it harder to understand other people's code.

We prefer things that we have made or assembled ourselves.

We're bad at making predictions about how much time it will take us to do something.

We tend to be pessimistic when predicting how long it will take other people to do something, and optimistic when predicting how long it will take us to do something ourselves.

We think that bad things are more likely to happen to other people than to us.

We're actually pretty good at filtering out unwanted stimuli, but we're not totally oblivious to it. Selective attention requires both ignoring and paying attention.

We're very limited in our mental processing. For instance, we can't do a good job writing code while doing a good job listening in on someone else's conversation.

We're sometimes helpless against the processing power of our own brains.

Software is about people.

Relatively unskilled people think they are better at tasks than they actually are.

We tend to overestimate our own skills and abilities. 90% of people think they're above average at teaching.

Skilled people underestimate their abilities and think tasks that are easy for them are easy for others.

She mentioned Imposter Syndrome and said that the Wikipedia page was pretty good.

We favor members of our own in-group.

We prefer the status quo.

We weigh the potential losses from switching options more heavily than the potential gains available if we do switch.

We're liable to uphold the status quo, even when it hurts other people.

People have a tendency to attribute situations to other people's character rather than to external factors.

People have a tendency to believe that attributes of a group member apply to the group as a whole.

We rely on examples that come to mind when evaluating something.

We assume things in a group will resemble the prototype for that group, and vice versa.

In some cases, we ignore probabilities in favor of focusing on details.

Diversity is important for a team. The more diverse the team, the more diverse their background, the more creative they can be.

Real-time Application Panel


I didn't think this talk was very interesting.

There were 3 people on a panel: Guillermo Rauch (he wrote socket.io), Aysegul Yonet (she works at Autodesk), and Daniel Miller (a developer relations guy from RethinkDB).

The RethinkDB guy said that RethinkDB doesn't "have any limitations or make any compromises" which seems quite impossible. Certainly, it can't violate the CAP theorem ;)

From their perspective, real-time is a lot about getting updates from your database (change feeds) as the changes come in.

Some databases can give you a change feed.
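For instance, RethinkDB's change feeds look roughly like this (a sketch using the official rethinkdb Node driver, assuming a local server with a messages table):

    var r = require('rethinkdb');

    r.connect({ host: 'localhost', port: 28015 }, function (err, conn) {
      if (err) throw err;
      // .changes() returns a cursor that yields { old_val, new_val }
      // documents as writes happen -- the "change feed" idea from the panel.
      r.table('messages').changes().run(conn, function (err, cursor) {
        if (err) throw err;
        cursor.each(function (err, change) {
          if (err) throw err;
          console.log(change.new_val);
        });
      });
    });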

The XHR object has an upload event so you can track uploads.
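A quick sketch of that (the /upload endpoint and the form are hypothetical; xhr.upload and its progress event are standard XHR2):

    var xhr = new XMLHttpRequest();
    xhr.open('POST', '/upload'); // hypothetical endpoint
    xhr.upload.addEventListener('progress', function (e) {
      if (e.lengthComputable) {
        console.log('uploaded ' + Math.round(100 * e.loaded / e.total) + '%');
      }
    });
    xhr.send(new FormData(document.querySelector('#upload-form'))); // hypothetical form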

Real-time is about minimizing latency.

REST doesn't fit very well with push architectures / WebSockets.

Sometimes you need to be notified that a long running job has completed.

WebSocket tooling can be insufficient.

It's unclear what's going to happen with HTTP/2 vs. WebSockets.

The fact that HTTP/2 provides multiplexing is perhaps an advantage over WebSockets especially when you have wildly different parts of the client app talking to the server.

Server push is an exciting part of HTTP/2.

2 members of the panel were excited about observables and RxJS.

Fetch, the new way of making requests, doesn't have a way of aborting.
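You can at least stop waiting on a fetch with Promise.race, though the underlying request keeps running (a minimal sketch; /api/data is a made-up URL):

    function fetchWithTimeout(url, ms) {
      var timeout = new Promise(function (resolve, reject) {
        setTimeout(function () {
          reject(new Error('timed out after ' + ms + 'ms'));
        }, ms);
      });
      return Promise.race([fetch(url), timeout]);
    }

    fetchWithTimeout('/api/data', 5000)
      .then(function (res) { return res.json(); })
      .catch(function (err) { console.error(err); });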

The RethinkDB guy gave a shoutout to MobX, but he didn't seem to know much about it.

(By the way, looking around, I see a sea of Macs. One guy said there were only 3 Windows machines in the workshop he was in.)

There was a little bit of talk about REST perhaps not being the best approach for a lot of situations.

Fireside Chat: Serverless Application Architecture


This was another panel, but I thought it was pretty good. Alex Salazar (from Stormpath) and Ben Sigelman (who wrote Dapper, the distributed tracing system, at Google) were the two people on the panel.

Stormpath has adopted this sort of architecture. The company itself provides authentication as a service.

"Serverless architecture" is a serious buzzward. It started 6 months ago. It's trending a lot right now.

Alex says it can refer to two different things:
  • Using services like Twilio, etc.: He says this is more "backend as a service".
  • Functions as a service: He says this is more accurate. He mentioned AWS Lambda. The idea is that you can write some business logic specific to you, and you don't have to manage anything that even looks remotely like a server.
When people say "servers", they're kind of referring to VMs. With serverless, you don't even need to think about VMs.

There's a lot of cross-over with microservices.

Although Heroku got pretty far, he says serverless goes beyond, and is different from, what Heroku provides.

Heroku automates server work (i.e. a worker). You're still writing a Python, JVM, or Node application.

Google App Engine, Heroku, etc. were all trying to be platforms as a service. It's still a backend, monolithic application on their architecture.

Serverless in the lambda model is not a server application. It's a set of finite functions that run in a stateless environment. He says it's a superior model depending on your use case.

Stormpath started with a Java monolith. They moved to asynchronous microservices. They spent a lot of time looking at AWS Lambda. They wanted to update and deploy modular pieces of code without versioning the whole system. For instance, they wanted to be able to update just one function.

Ben is terrified of CI (Continuous Integration) and CD (Continuous Delivery). Some PRs (pull requests) may not have considered all the weird interactions that actually happen. He thinks the pendulum might swing back the other way so that there's one deploy per day.

Monolithic apps can be easier since the different parts are all versioned together. It's scarier with microservices because there might be more version mismatches.

Stormpath tried to move to the Lambda model, and it didn't work. Latency is a real problem. Stateless pieces of code take a while to spin up and spin down. Functions that aren't used very often take a while to spin up--especially with the JVM. They went from serverless to even more servers. It resulted in more infrastructure, not less.

NoSQL databases are beneficial, but there's also too much hype. Their vendors kind of hid their drawbacks, which was bad. Case in point: see the guy from RethinkDB above :-P

Ben said that Lambda is the absolute extreme of where this movement will go. Consider gRPC, Finagle, and Thrift; can we have hosted services for these things with a little caching?

Anytime you have latency issues, you apply caching.

The difference between memcached vs. in-memory caching (i.e. in the current process) is huge. So stateless has a huge drawback since you can't do in-memory caching. If you have to cache anything between requests to minimize latency, Lambda isn't the right thing.
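The point is easy to see in code (a sketch; fetchUserFromDb is a hypothetical slow lookup). In a long-lived process the Map survives across requests; in a stateless function-as-a-service environment it's thrown away between invocations:

    var cache = new Map();

    function getUser(id) {
      if (cache.has(id)) {
        return Promise.resolve(cache.get(id)); // in-memory hit: no network hop
      }
      return fetchUserFromDb(id).then(function (user) { // hypothetical
        cache.set(id, user);
        return user;
      });
    }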

If performance is important for you, Lambda (serverless) isn't ready yet. That's not true of microservices, though.

If you need to create a chain of these functions as a service, then maybe you shouldn't be using serverless. The latency compounds. One time, he chained 4 of them together, and he was seeing multi-second latency.

Serverless is really good for certain applications.

The history of computing is all about more and more abstraction.

From microservices to serverless is a natural transition.

What will the learning curve be like for learning how to use this model?

Stormpath had to invent a lot of stuff to adopt the model because a lot didn't exist. This included testing, messaging, authentication, etc.

He likes async, promises, etc. Scala and Node both get this right, although Ben thinks Scala is "ridiculous". If you're all asynchronous anyway, it's easier to move functions off the server into a separate service.

Promises need to have more things like deadlines, etc. He thinks promises are a good fit.

It's an anti-pattern to wait on anything that's not in-process.

A lot of stuff from Scala (async, reactive, etc.) is making its way back into Java.

JavaScript developers are already in the async paradigm, which is why they adapt to this stuff more easily.

The average Java developer hasn't wrapped his head around async yet.

Stormpath is hoping to provide authentication as a service for services, not just authentication for users.

Ben thinks security needs to be applied more at the app layer. Auth, security, provisioning, monitoring, etc. should all start happening at the app layer.

There is indeed business risk in depending on a bunch of external services like Lambda, Stormpath, etc. There's no silver bullet. Can you trust the vendor? What's the uptime and SLA? Now, with SaaS, you're not just depending on them for code, but also for ops.

Having an SLA for high-percentile latency (99th percentile) is important. An uptime promise by itself doesn't mean much, but historical uptime is still important.

Ben says that the main advantage of the monolithic model is that all deps get pushed out every week along with any updates to the code.

Your team needs processes and automation. Test before things go out. Integration testing is how you keep things from blowing up.

The Web meets the Virtual and Holographic Worlds


This was a very fun talk by Maximiliano Firtman, @firt, author of High Performance Mobile Web! I can't find the video, but if you can find it, it's worth watching. He's written and translated a bunch of books. He's very engaging.

He started by remembering the web from 22 years ago.

He says that the web has been trapped in a 2D rectangle.

New worlds:
  • Physical world
  • Virtual reality
  • Mixed reality (holographic world)
Immersion modes:
  • Seated mode
  • Room space mode
You can be a part of all of this, even as a web developer!

We're at a point now like where mobile web was 10 years ago.

VRML is from 20 years ago! We're not talking about it anymore. Technology has changed.

Experiences:
  • Flat web content (this stuff still works on these devices)
  • 3D world (stereo)
  • 360 still or animated content
Distribution:
  • Apps
  • Websites
  • PWAs (progressive web apps)
Human interfaces:
  • Gaze (move your head to select things)
  • Voice
  • Touch controls
  • Clickers (remote controls)
  • Controllers
  • Your body (mostly hand gestures)
  • Mouse and keyboard (good for mixed reality)
Hardware:
  • Oculus Rift (Windows)
  • HTC Vive
  • Cardboard (Android, iOS)
  • Gear VR (by Samsung, powered by Oculus; has 1 million users)
  • LG 360 VR (LG G5)
  • HoloLens (this is the most different; uses Windows 10; not connected to a computer; mixed reality)
New worlds:
  • Virtual / mixed reality
  • Physical world
  • Immersion modes
  • Human interface
  • Hardware
User experience:
  • Safari or Chrome (iOS or Android) with Cardboard:
    Use a gyroscope to recognize your position.
  • Samsung Internet (browser) on Gear VR:
    It's a 3D environment, but the web is still a box in that environment. It has different modes for watching videos.
  • Microsoft Edge on HoloLens:
    This was a really neat demo! You can use hand gestures. He showed a hologram floating in your room that you can see using the headset. You can walk into the "window". It's like it's floating in space. It recognizes the room around you. You can put a hologram in a certain place. For instance, you can have a window running Netflix that lives in your bathroom.
APIs and specs:
  • Web VR:
    • You can get data about the device's current capabilities.
    • You can poll the HMD (head-mounted display) pose.
    • You can get room-scale data (data about the size of the room).
    • You can request a VR mode which is like the fullscreen API.
    • There are two different versions of the API. He recommended API version 1.0 with the changes they've made to it. It's optimized for WebGL content.
    • You can use this API to get data from the devices. This API is not for drawing to the screen.
    • It's supported by Chrome, Firefox, and Samsung Internet browser.
    • There's a polyfill to work on other browsers.
  • Web Bluetooth:
    • This allows you to talk to any type of Bluetooth device.
    • You can scan for BLE devices.
    • You can scan for services available.
    • You can connect to the services.
    • Note, Bluetooth is complex.
    • This is only in Chrome, and it's hidden under a flag. (A minimal sketch follows this list.)
  • Other APIs
    • The ambient light API lets you get info about the current lighting conditions.
    • There's a gamepad API. Chrome is the furthest ahead on this API.
    • There's speech synthesis and recognition. This allows you to interact with the user by voice. Note that synthesis is more broadly supported than recognition.
    • Web audio lets you generate dynamic audio, including 3D audio. You can do ultrasound communication with devices. He recommends a library called Omnitone for doing spatial audio on the web. This API is available all over the place.
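Here's roughly what the Web Bluetooth flow he described looks like (a hedged sketch against the draft API as it existed in Chrome behind a flag; battery_service is just one of the standard BLE services):

    navigator.bluetooth.requestDevice({ filters: [{ services: ['battery_service'] }] })
      .then(function (device) { return device.gatt.connect(); })
      .then(function (server) { return server.getPrimaryService('battery_service'); })
      .then(function (service) { return service.getCharacteristic('battery_level'); })
      .then(function (characteristic) { return characteristic.readValue(); })
      .then(function (value) { console.log('Battery: ' + value.getUint8(0) + '%'); })
      .catch(function (err) { console.error(err); });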
What we can do today:
  • You can show flat content in these environments:
    • I.e., you can show the kind of content we already have in these environments.
    • It's like a 2D box in a 3D environment.
  • You can show 360 content:
    • You can get the device's orientation.
    • You can touch some of these devices, or move around.
    • There's something called VRView from Google.
    • The Vizor Editor is pretty useful. You can use it to create a 360 degree environment.
    • You can capture 3D content using cameras. However, most browsers don't support live streaming 360 content today. You have to use YouTube if you want that.
    • You can use the Cardboard Camera app from Google. Then use the cardboard-camera-converter.
  • VR 3D:
    • This is mostly based on WebGL.
    • You can use ThreeJS with VR support.
  • Holographic experiences:
    • This is only native at this point. You can't do it from the Web yet.
    • AltSpace VR is like a social network using VR. It's a native app. It has an SDK. You can use ThreeJS.
  • More will come in the future...
These are things he expects we'll see in the future:
  • New Devices:
    • Daydream from Google looks pretty cool.
  • Think about pixels:
    • What do they mean now in a VR environment?
    • How do we think about bitmaps? Can we use responsive images based on distance? See MIP Maps.
      • There's a new format called FLIF. It has multiple resolutions in the same file.
  • Future CSS:
    • We might see media queries based on how close the user is.
  • New challenges:
    • Consider infinite viewports.
    • If I increase the size of a virtual window, what should happen? Should it show more content, or should it show the content larger?
    • Augmented reality physical web:
      • Laforge:
        • Like Google Glass but not from Google.
    • We need more from Edge and Windows:
      • Holographic
    • New challenges:
      • Responsive VR.

React Native: Learn from my mistakes


This was a talk from Joe Fender. He's from a company named Lullabot. He lives in London. Here are the slides.

"We create digital experiences for the world's best brands."

(React is very popular at the conference.)

(There are a lot of women at the conference. It's not all men.)

Not as many people use React Native.

Being a web person is enough to build mobile apps.

What's the fuss? Developing native mobile apps is a pain. It's difficult to code the same thing twice (Java and Swift), especially if you're not very familiar with those languages.

PhoneGap/Titanium just didn't feel right. Furthermore, there are some memory and performance limitations. You're stuck with a non-native UI. It's a WebView embedded in a mobile app. It's also a bit weak on community support.

React Native is really simple. It allows you to make truly native apps. React itself is nice. He thought coding in JavaScript was nice. His team was more successful than when they were trying to write the app in Java and Swift.

10 things he wishes he knew. These things weren't obvious to him when he started:
  1. You need to know some things ahead of time such as JS, ES6, React (including the component lifecycle), and the differences between various devices (i.e. their differing UIs and features).
  2. You need to learn about the various React Native components. React Native has a whole bunch of components--about 40. You can get a lot done with just the core components. However, there are also contributed packages--about 200 of them. Sometimes you'll need to write your own components.
  3. You need to think a lot about navigation. It's a really important part of the app. How will users get between screens? How will they get back? How will the transitions look? NavigatorIOS is good, but it's only for iOS. Navigator is cross-platform. NavigatorExperimental is very bleeding edge.
  4. You need to think about how data flows through your app. Consider how your app will scale. Consider using Flux or something similar. He used Redux and liked it.
  5. Think about how you will structure your code if you need to support both iOS and Android. Will you use a single code base? Will there be platform-specific code? Will you write expressions like { iOS ? 'nice' : 'ok' } to match the different platform conventions? Some people have very high expectations and expect your app to look very native. You can get started with npm install -g react-native-cli followed by react-native init MyApp. You can have index.android.js and index.ios.js just require index.js (see the sketch after this list). Note, many components work on both platforms, and usually the component will take care of platform-specific things.
  6. Flexbox is a really great way to lay out components on the screen. It's responsive. That's really important in mobile. It's pretty much the way to go with React Native. There's a bit of a learning curve, though. It's just a different way of thinking about things.
  7. You still need to test on real devices. Simulators are insufficient. It took them a while to realize this:
    1. The performance is very different. Consider animations and load times. Your laptop is too fast.
    2. The networking is very different. With a mobile device, you're not always on a nice WiFi connection. What if you're disconnected or on a 3G connection? WebSockets behave differently when you have a spotty network.
    3. Push notifications are a pain. You can't test this on the iOS simulator. For Android you can.
    4. Real devices don't have separate keyboards. This caused a big problem for them. Using an on screen keyboard takes up half the screen, and some of their layouts were incompatible with that.
    5. Think about landscape mode. Think about different layouts.
  8. Debugging is important. The developer menu in React Native is very helpful. In general, React Native has a really nice developer experience. On iOS, shake the device. There's a Debug JS Remotely feature so that you can debug via Chrome's DevTools. This supports live reload, and it only takes 100ms to reload new code. console.log() works too. You can pause on caught exceptions and add debugger; to your code like normal. Unfortunately, the React Dev Tools Chrome extension doesn't work with React Native.
  9. Be aware of the React Native release cycle. It's every 2 weeks. Sometimes, it's weekly. It's very fast, and very bleeding edge. His company is 10 versions behind. In general, the release notes are quite good. He recommends that you don't try too hard to stay on the most recent version.
  10. Remember that you'll need to release your app since it's a mobile app. Submitting your app to the app store is not easy. You need certificates, icon sets, a privacy policy, a support page, etc. It took them 2 weeks. Furthermore, it takes Apple a while to approve it. You also need to think about things like automated testing. React Native has a bunch of useful stuff to help. You can also use ESLint, which is nice.
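Here's a minimal sketch of the platform-splitting idea from point 5 (assuming the MyApp name from react-native init above; Platform.OS and AppRegistry are standard React Native APIs). index.ios.js and index.android.js can both just be import './index';, with the shared app living in index.js:

    import React from 'react';
    import { AppRegistry, Platform, Text } from 'react-native';

    // Per-platform copy, matching the { iOS ? 'nice' : 'ok' } idea above.
    const greeting = Platform.OS === 'ios' ? 'nice' : 'ok';

    const MyApp = () => <Text>{greeting}</Text>;

    AppRegistry.registerComponent('MyApp', () => MyApp);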
React Native has Android, but it came a bit late.

His company didn't have any problems with missing APIs.

They used Parse which is an external DB as a service. Unfortunately, Parse is going away.

He thinks Firebase has good React Native support.

If you want to have really native-looking components, it's going to be hard to just use one codebase with flexbox.

Dealing with WebSockets was a pain.

Push notifications are pretty different between platforms.

Building Widget Platform with Isomorphic React and Webpack


Unfortunately, I didn't think this talk was very good or very useful for most people.

It was given by a guy named Roy Yu.

He was considering SEO value and discussing his architectural decisions.

He was creating isomorphic widgets that were shared by PHP, Java, and Python.

React isn't so simple once you hit the advanced stuff (e.g. higher-order components vs. dumb components).

I knew the keywords he was using, but I didn't understand what he was trying to convey to me.

He says that Webpack is more powerful than Grunt and Gulp.

He introduced React and Webpack a little.

You can package and deliver widgets to third parties using a range of mechanisms:
  • S3, CDN
  • Web component / open component
  • Public / private registry
  • Docker container as a resource / API proxy
You can have an API that other servers hit to fetch HTML.

I left early.

Generating GIF Art with JavaScript


This was a fun talk by Jordan Gray, @staRpauSe.

Here are the slides. They contain lots of fun GIF art.

He pronounced GIF with a hard g.

He works at Organic and Codame (in his spare time). Codame unites art and tech.

"Creativity is the basis for life."

He releases everything he does under Creative Commons. He likes the remix culture.

His stuff has showed up in major magazines, films, etc.

GIFs have stood the test of time. The format is 30 years old. It's completely free of patents.

They're inherently social.

Creativity loves constraints, and GIFs have interesting constraints. They only permit so many colors, frames, and dimensions.

Constraint considerations:
  • Tumblr is the holy grail to target. There's no censorship.
  • 2 MB 540px wide (or less) is a good target.
  • He talked about VJing (like DJing, but with artwork). He talked about GIF Slap. You must target 5 MB or less.
  • He talked about PixiVisor. It supports any height as long as it's 0.5625 times the width (e.g. 64x36). You can embed the GIF in audio.
  • You can target CAT Clutch which is for LED displays. They are 32x16 pixels in size.
  • You can use GIFPop Prints to print the GIF on a "lenticular" piece of paper. You're limited to 10 frames, but you can have up to 1500px. This was really neat.
How to make GIFs:
  • Use gif.js.
  • In general, it's really easy.
  • Define a GIF. Add frames. Render it.
  • You can use three.js to do 3D manipulation, etc.
  • The best resource he offered is bit.ly/eric3demos.
  • He also talked about processingjs.org.
  • Anything else that renders to canvas is usable as well. See javascripting.com/animation
  • github.com/dataarts/dat.gui is incredibly helpful. It's "A lightweight graphical user interface for changing variables in JavaScript."
  • He mentioned jzz.js. It's an asynchronous MIDI library. It lets you control a web page with an actual MIDI device.
github.com/CODAME/modbod3d-gifgen is a good example. They used a Kinect and a body-sized turntable to help create the GIFs.

gif.js generates the gif, and then you can download it from the web page.
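The whole flow is roughly this (a sketch based on the gif.js README; canvas is assumed to be a <canvas> you've already drawn a frame onto):

    // Define a GIF. Add frames. Render it.
    var gif = new GIF({ workers: 2, quality: 10 });

    gif.addFrame(canvas, { delay: 100, copy: true });
    // ...redraw the canvas for the next frame and call addFrame again...

    gif.on('finished', function (blob) {
      // The finished GIF arrives as a Blob you can link to or download.
      window.open(URL.createObjectURL(blob));
    });

    gif.render();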

(He's using Sublime Text.)

His next demo was (or involved) Technopticon.

Next, he talked about the theory of animation. "The Illusion of Life" is really good. Some Disney animators wrote a book about this stuff. There are some YouTube videos about it which looked really good. He talked about "the12principles".

He talked about keyframes vs. straight ahead. It's mostly about straight ahead when generating GIFs.

He talked about GreenSock.

See vimeo.com/80018593.

He mentioned Movecraft.

React Application Panel


There were 5 people on the panel.

There are some people at Twitter using React.

The main reason people are initially turned off by React is JSX. However, at Facebook, they were using something like JSX for PHP, so it was a natural fit for them.

Reddit started using React.

Here's how one team split up their code:
  • Dumb components
  • Containers
  • Modules
  • Screens (route handlers with React Router)
  • An API layer
Try to reuse data layers and view layers.

Lots and lots of people use Redux.

"It's the web. We'll probably rewrite it in 3 weeks...in Angular 3."

The Twitter guy does some work to try to keep the connection between React and Redux as thin (small) as possible. He doesn't want them overly coupled.

One person was from Adroll.

One person was from Reddit.

Lots of people started adding React to their existing Backbone apps.

One person said try to use stateless components when possible. Push state as high up the hierarchy as possible.

The Facebook guy didn't like the class syntax. The function syntax makes it clearer that you're just mapping props to render output.

The Facebook guy says that people are a little too scared to use component state, but it's there for a reason.

Time traveling debugging is "sick AF (as f*ck)".

One guy mentioned MobX, but I didn't hear many people mention it at the conference.

One person said to introduce Redux only when it's time to add complex state management. One person said you're going to need it at some point, so you might as well add it to begin with.

The Facebook guy said to start with the simplest thing (component state), and then when you're familiar with the pain points that Redux addresses, you can start using Redux. You probably don't need Redux on day one.

The Reddit guy said he would add Redux later. Introduce data stores once it becomes a necessity, such as when multiple apps need to access the same data.

The Twitter guy said that it's helpful to keep things separate to make them more testable and adaptable to change.

A lot of people had different ideas of how to do HTTP async with React.

2 people really liked Axios.

Promises are great, but they can't be cancelled, for instance if you move to a different page. They might add that. It's a significant weakness.

The Facebook guy said they use promises on the outside, but something more complicated inside. He gave a plug for await. It has reached stage 4, which means it'll make it to all browsers soon.

One guy plugged RxJS. You can unsubscribe from an observable. Observables are more advanced than promises, and they provide more granular control.
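For instance, with RxJS 5 (a small sketch; the interval/map chain is just a stand-in for a request stream):

    var Rx = require('rxjs/Rx');

    // Unlike a promise chain, a subscription can be torn down.
    var subscription = Rx.Observable.interval(1000)
      .map(function (n) { return n * 2; })
      .subscribe(function (value) { console.log(value); });

    // Later -- e.g. when the user navigates to a different page:
    subscription.unsubscribe(); // no more values are delivered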

Someone asked what people's favorite side-effect library was for Redux. Most people didn't have an opinion. The Twitter guy uses Redux Thunk.

They've been rewriting Twitter from scratch in the last year. A lot of abstractions that were there got in the way. Too many layers of abstraction sucks.

Something like Saga (?) is overkill.

Make the code more approachable.

No abstraction is better than the wrong abstraction. It's ok to duplicate a little code if you don't know the right way to abstract the code to make it DRY. It sucks to have some abstraction that handles 3 use cases, but not the 4th. Too DRY sometimes makes things horrible. Sometimes you need more flexibility. One guy used the acronym WET (write everything twice) and suggested that it was a good approach to avoiding bad abstractions.

Create React App is nice. It helps people get started quicker. One person said that it's awesome.

React is not very prescriptive about which build system or packager you use, although those things are useful. Most people should be using Webpack or something like that.

One person is using CSS modules. She said "it's fine," but admitted that she hates it.

There are so many ways to attack the CSS problem.

The Twitter guys are using Webpack.

React is quite a hackable platform. That's really useful. The Twitter guy uses CSS modules. "It's been ok." Some things are really not fun to debug. It's a black box. CSS really limits your ability to share components across projects in open source because different people use different CSS libraries.

"CSS is like riding a bike...but the bike is on fire...and you're in hell."

It's hard to know what CSS is being applied. If you're looking at CSS, it's hard to know if the CSS is even being used. If you apply CSS in the wrong order, it changes what things look like.

Some people think that styles in JavaScript seems interesting.

The CSS thing isn't really solved.

The Netflix guy says, "Write CSS that can be thrown away." They use LESS. They do some static analysis.

Aphrodite is a CSS library.

The Twitter guy is more excited by the idea of moving CSS into JavaScript since it enables a bunch of new things. Move away from strings to something introspectable. It'll be more powerful and predictable, but it may have performance issues.

One person has a team with a woman who does amazing UX design and amazing CSS.

One woman at Netflix said that they have one CSS file per component. Just separating files is not enough. Putting CSS into the code modules is an interesting movement. The React guy is afraid of it causing performance regressions.

Just like we've moved onclick handlers back into React, we'll probably move the styles back into the DOM code.

Code Trade-offs: Benefits and Drawbacks of Each Decision


This was another 5 person panel. It was ok, but not great. I didn't get all the names, but here are the ones I remember:
  • Ben Lesh from Netflix. I think he wrote RxJS 5.
  • Richard Feldman who wrote Seamless Immutable. He's an Elm guy.
  • Amy Lee from Salesforce.
  • Rachel Myers from Obsolutely, GitHub, etc.
  • Brian Lonsdorf
Abstractions come at the cost of performance.

RxJS is really an abstraction for async. But, he says, start off with just a callback. Maybe move to a promise. Move to Rx when things get more complex.

John Carmack had some nice comments on when to inline vs. abstract. Performance is a consideration. Readability is a consideration. One guy said inline by default, but then pull it out when it makes sense.

One React user said YAGNI (you ain't gonna need it).

You can add the greatest abstraction ever, but if it's not readable, understandable, and greppable, it's not going to help. Abstractions can get out of control pretty quickly.

The Ruby world has no constraints. They look at constraints as if they're a horrible thing. In Ruby, there are 8 ways to add things to an array; in Python, there's one way. The speaker said that that's much nicer.

In Elm, everything is immutable, and everything is a constant. NoRedInk uses Elm. They have 35k lines of code. They don't ever get runtime exceptions. Elm makes refactoring very easy. Elm is one of the panelist's favorite programming languages.

RxJS started using TypeScript. Dealing with users using it from JavaScript is kind of painful.

With MVC, you think you can replace each of the pieces separately, but in practice, this doesn't really happen. All three pieces are very wed to each other. It's the "myth of modularity."

Simple Made Easy is a great talk.

Drawing the lines of modularity in the wrong places causes a lot of pain. Doing modularity just right is really hard.

Concise naming patterns are really important when you're doing modularity.

Generic code and interfaces are "dangerous". "No abstraction is better than the wrong abstraction."

Imagine you have 3 things, and you notice a common interface among them. Now suppose you create an interface, but then you suddenly get a 4th thing that doesn't match the interface. Now, you're in bad shape.

When you create a public API, you have to have an interface, but creating an interface for your own code when you don't necessarily need one might cause more problems later.

When speaking of adaptation in JavaScript, someone said "It's not a duck, so you punch it until it's a duck."

If you can come up with a good interface that can fit a lot of things, like Observable, it's really useful. It gives wildly different things some sameness.

Then, you can write code that people can understand.

Lisp has a very pleasant interface: a paren, a command, some stuff, a paren.

Rx is arguably a DSL.

The key is finding an interface that's really simple that can solve a wide variety of needs. It's tough. You sacrifice some specificity.

On the subject of principles:

Someone said, "I used to have principles. I no longer do." For instance, I used to always have unit tests. Then, she started a company, and figured integration tests were good enough.

One guy wrote a lib with a little DI (dependency injection). He achieved 100% code coverage. Then, he ended up breaking it anyway. Unit tests are not enough. He is a little integration-test heavy. Integration test first, unit test second. He was told the opposite, but changed his mind.

Another person said: unit tests have a lot of value, but you also need integration tests. You also need to test with real users.

One guy's principle was: put UX first. The end user is more important than clean code.

Principle: If I feel 100% sure I'm right, that worries me. Question your own decisions.

Principle: never write the same thing twice unless there's some really good reason. (Presumably, they've written "if" more than once :-P)

Monday, August 08, 2016

JavaScript: Mastering Chrome Developer Tools

I went to an all day tutorial on Mastering Chrome Developer Tools. It was my favorite part of the whole conference. Here are my notes:

Jon Kuperman @jkup gave the talk.

Here are the slides. However, there isn't much in them. Watching him use the DevTools was the most important part of the tutorial. I did my best to take notes, but of course, it's difficult to translate what I saw into words.

He created a repo with some content and some exercises. Doing the exercises was fun.

Chrome moves really fast, and they move things around all the time.

Everything in this talk is subject to change. For instance, the talk used to cover the Resources panel, but that's now gone. Now there's an Application panel.

In the beginning, there was view source. Then we had alert; however, you can't use alert to show an object; you have to show a string. Then, there was Live DOM Viewer. Then, there was Firebug. It kind of set the standard. Firefox has completely rewritten their dev tools before, and they're rewriting them again.

Here's the current state of browsers developer tools:

Firefox's dev tools are a few years behind Chrome. For instance, it can't edit source files, and it doesn't have support for async stack traces like Chrome has.

Safari and Chrome were based on WebKit. Chrome split off and created Blink.

Safari and Opera have really stepped up their game. There are a few features that are in Safari or Firefox that aren't in Chrome. Performance and memory auditing is better in Chrome. Firefox has a better animation tool.

Edge has very rudimentary tooling, but they let you speak directly to a Chrome debugger; it's built in. It's called something like "project to Chrome".

Chrome just came out with a new user interface update to their DevTools.

Chrome has a nice API for their DevTools. They abstract their tools so you can use them with other products.

See node-inspector for using Chrome tools with Node--it's a "Node.js debugger based on Blink Developer Tools." It's coming to Node directly (see v8_inspector).

For React, you need React Developer Tools. There are similar tools for Redux, Ember, etc.

DevTools can:
  • Create files
  • Write code (you can use it as your IDE)
  • Persist changes to disk
  • Do step through debugging
  • Audit pages
  • Emulate devices
  • Simulate network conditions
  • Simulate CPU conditions
  • Help you find and fix memory leaks
  • Profile your code
  • Analyze JavaScript performance
  • Spot page jank
The docs are at developers.google.com/web/tools/chrome-devtools, but they're sometimes behind.

He really recommends Chrome Canary. It has all the latest and greatest tools several months before Chrome stable. He uses Chrome Canary for development. He even uses it as his daily driver; he says he hasn't had any stability issues.

All the browsers have canary builds these days.

He also teaches an accessibility workshop. Chrome moves so quickly that they broke something he was teaching, but only half of the people had a version of Chrome that was new enough for it to be broken. The fact that Chrome has rolling updates means there's actually a lot of variety in the versions of Chrome out there.

Chrome version 52 merged in "most of the stuff" (which I assume means updates to the DevTools).

When it comes to the DevTools, there have been a bunch of UI changes in the last six months, so Chrome Canary looks a bit different.

I asked if you'll run into compatibility issues if you only use Canary. He said it's no worse than deciding to skip cross-browser testing in general. Rendering issues are at an all-time low. However, you probably still need to test different browsers. Of course, if you're supporting old IE (IE 10 and below), you always have to test. Edge is pretty compliant. The ES6 support is different across browsers, but a lot of people just use a transpiler.

The nice thing about Canary is that there's only one version of Canary, whereas everyone is running slightly different versions of Chrome because of how they roll out updates.

He likes Frontend Masters workshops. Douglas Crockford has a 13 hour workshop on JavaScript that's amazing. They're nicely edited, and they have nice coursework.

He gave us a quick walk through the panels:

Most people only use the element panel and the console, but there's so much more!

Right click on the page, and click Inspect.

In DevTools, click the three dots in the top right in order to pick where you want the window to be.

In DevTools, click the icon in the top right (the arrow on top of a box), and then click on an element to inspect it.

If you place the DevTools dock to the right, you can drag the dock to the left in order to make your window a certain width. This is a super easy way to test responsive layouts. Make the site 320px wide--that's a good thing to target.

Sometimes he pops DevTools out to a full screen and puts it on a separate monitor.

You can drag the tabs like Console, Elements, Profile, etc. around, and it'll persist your changes.

If you're not on the console, you can press escape to show or hide the console.

You can use Settings >> Reset defaults.

Next to the inspect icon, there's an icon showing a phone and a tablet on top of each other. You can use that to simulate a device. This isn't just changing the screen. It's also sending a different user agent string. That's important to note. If you want mobile web instead, instead of picking a particular device at the top of the screen, you can use "Responsive".

In general, use relative units. Don't use pixels for your font sizes. Use ems, rems, etc. Use %s for Flexbox. Flexbox gets iffy with cross browser support.

The reliability of the mobile emulation is pretty good, but you do also need to try it with real mobile devices. Mobile emulation will get you pretty far, though, especially during development.

Twitter has a separate mobile app.

There's device, network, and CPU emulation.

He talked about the DOM representation on the Elements tab. Remember, the DOM representation (i.e. the current state of the DOM) is probably very different than what view source will show (since that shows the original HTML that came down).

You can use $('.tweet') on twitter.com.
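(On twitter.com, $ is jQuery, but the DevTools console provides the same shorthand itself when the page doesn't define $. A quick sketch of the console helpers:)

    $('.tweet')          // like document.querySelector('.tweet') -- first match
    $$('.tweet')         // like document.querySelectorAll('.tweet') -- all matches
    inspect($('.tweet')) // jumps to that node in the Elements panel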

Select an element in the element tab, then right click on it, and select scroll into view. That's a great way to find an element on the page.

You can select an element and press "h" to toggle hiding it. It uses visibility: hidden, not display: none.

Twitter has style sheets all over the place. They're bundled in prod, but in dev, there might be something like 13 different stylesheets.

CSS specificity (most to least specific):
  • Style attribute
  • ID
  • Class, pseudo-class, attribute
  • Element selector (like a div selector)
In the Elements tab, go to the Computed tab, and you can see what's actually being applied. Click on the attribute, like border-bottom, and then open the arrow, and it'll show you where the rule comes from. That's a really nice way to find where styles are coming from.

He showed the box model widget on the computed tab. Inner to outer:
  • Element
  • Padding (inside the border)
  • Border
  • Margin (outside the border)
  • Position (like "position: relative; top: 200px")
He started playing around with jsbin.com.

Elements >> Computed is really helpful.

Next, he showed Elements >> DOM Breakpoints:

Find an element in the page. Right click on it. Click "Break on...":
  • Subtree modifications
  • Attributes modifications
  • Node removal
Then, it'll break into JavaScript. Hence, if you don't know the code, but you do know the UI, this is a nice way to find the code that changed the DOM.

However, keep in mind, it'll usually break on the jQuery code, not your application code, which is up a few layers in the stack. Usually, it'll give you enough to backtrack to the app code.

Color formats:

Shift click on any hex value, and it'll switch between different color formats. He prefers hex codes over RGB. If you click on the box of a color, you get a cool screen with a color picker. He talked more about the color picker:

Click on the arrows next to the color palette. It has material design colors. Right click on the color to get various shades.

Find an element. Find the color in Elements >> Styles. Click on the color box. Click on the arrows next to the color palette. It has Material, Custom, and Page Colors. The page colors thing helps you pick a color that matches other colors on the page.

Material Design is Google's color scheme. All those colors look good on white and black. He likes using the Material Design stuff; it makes it easier since he's not a designer.

In Elements >> Styles: You can click on the checkbox next to a rule to apply or unapply a rule to play with colors.

Use Cmd-z to undo. Refresh will also go back to what's in the source.

Workspaces are really helpful so that DevTools writes your changes to your files.

Elements >> Event Listeners is sometimes useful.

Next, he showed the Sources tab:

It looks like an IDE.

Use Cmd-P to fuzzy search for files.

By default, you can make changes. However, if you save and then refresh, you'll lose your changes. Here's how to make your changes persist:

See: Set Up Persistence with DevTools Workspaces

Drag your project folder to the left pane of the Sources tab. Then, you can map local sources to remote sources (I explain this again below). It's a bit clumsy, but then you can actually edit your real code.

Right click in the left pane. Add Folder to Workspace. Pick the local folder. Click on a file from that local folder. When prompted, map the file. Then, when you save, it saves to disk. It's awesome, but it's limited.

Anything you change in styles persists to disk (Elements >> Styles). Anything you change in the markup (the DOM tree on the left) doesn't persist to disk (because who knows what code created that DOM).

Only styles defined in external CSS files can be saved.

To see all the limitations, open Set Up Persistence with DevTools Workspaces and search for "there are some limitations you should be aware of".

You have to try it out to get the hang of it.

You can even try this out with your production server. You can still map your files. However, when you save changes, it'll only change your local files--it can't automatically deploy your changes to prod.

It does work with sourcemaps a bit.

There's an experiment for SASS support. They're going to add support for templates.

We should prefer SASS over LESS because LESS will probably go away. Bootstrap was the last major user of LESS, but Bootstrap 4 is moving to SASS.

When viewing a file in the Sources tab, on the lower left, there's a {} icon to prettify the code. It can't fill in minified names, but it gets you pretty far.

Use Elements >> Styles >> + to add new styles to a particular file.

To set up persistence, the key thing is to drag your project folder onto the Sources tab, find the file in your local sources, and double click to open it. Chrome will guess at the mapping.

In the DOM section, you can select an element, right click, and use edit as HTML. Edit the HTML. Then click outside the box. However, these changes can't be persisted to disk.

He's using Atom for his editor.

He's using something called Pug which is like Mustache.

The color "rebeccapurple" is named after a guy's daughter who died.

If you have things mapped, then with the elements panel, anything you edit is implicitly saved.

Use Elements >> Event Listeners to see all the event listeners for an element. However, this isn't perfect because there might be event abstractions in the way.

Scroll listeners are expensive, so sometimes there's some sort of abstraction, like jQuery's on('scroll').

In Elements >> Styles, there's :hov in the top right. This lets you play with forcing a particular hover state. That way you don't need to keep trying to hover over the element to test out its hover handling.

There are a few things that are only in Chrome Canary.

In Elements, long press on an item, and then you can drag the element around.

Workspaces is one of the cooler things you can do with DevTools. You can go pretty far without using another editor, and you can do design work from the elements panel.

We're going to debug code in the Sources panel.

The step through debugger is pretty top notch and pretty clean.

Click on a line to add a breakpoint. It has to be on an executable line of code. Just move it down or up a line if it doesn't work. Then refresh.

Now, we're in the debugger:

The Watch widget allows you to put in an expression and watch that expression. If you're some place random, most of your watch expressions will be undefined because those variables probably don't make sense in that context.

Even the source code widget has useful stuff in it. If it knows the value of a variable, it'll show you the value in a popup near the variable.

When the debugger is paused, you're in a certain context, and you can interact with the current state using the console. Go to the Console tab, or press escape to open the console at the bottom.

Press the Play icon to resume execution.

He talked about the step over and step into icons near the play icon.

If you press step into, it'll find the next function call and step into it. That's a little different than most debuggers which will simply go down one line if the current line isn't a function. You'll probably want to use step over by default.

Just play with the debugger. You're not really breaking anything.

Pause on all exceptions is pretty helpful. However, some libraries use exceptions internally for various reasons.

In the Call Stack widget, right click on a line and click Blackbox Script. Then, it'll hide all the stuff from that script. That way, you can ignore the framework code. This is per script, per domain. It persists per session. If you restart the browser, you'll lose your blackboxes.

Right click, Add Conditional Breakpoint. Use any expression you want.

There is an XHR Breakpoints widget. You can use that to set a breakpoint that gets tripped anytime there is an XHR that matches a particular URL. That's really useful if you don't know the code very well, but you know the related requests to the server.

You can also just put "debugger;" in your source code. Remember to use a linter to prevent yourself from committing it ;)

Click the "Async" checkbox to turn on async debugging. This captures async stack traces. That way, you can see the stack trace from before and after the the asynchronous activity (such as making an XHR, setting a timer, etc.). It will make your call stacks taller, but it's super helpful for understanding how you got into a particular state.

For most of these things, your changes only impact the current session.

Warning: If you use the GitHub API in your exercises, you might get hit by their rate limiting.

Now we're going to talk about Profiling:

For Google, a half-second increase in page load time results in a 20% traffic loss.

Amazon reported that a 100ms slowdown resulted in a 1% sales loss.

We do know that slow sites, non-SSL sites, sites with a bad mobile experience, etc. get penalized by search engines. However, we don't know if it's related to the DOMContentLoaded event, or if they're measuring perceived performance. Google is pushing the RAIL performance model, and it's based on perceived performance.

Twitter measured "time to first Tweet". Facebook has something similar.

Build, then profile, and only if it's a problem, tackle it.

There's an Audits tab in DevTools. It's very high level, but very helpful.

Memory leaks are not very common. Browsers and frameworks are pretty good these days.

He played with a particular course on udemy.com.

Go to a page. Click the Audits tab. Click Select All. Click Reload Page and Audit on Load. Click Run.

It prioritizes and suggests things.

He found that Udemy has 10,429 unused CSS rules on a course's landing page. 90% of the CSS is not used. He says everyone has that. Advanced apps use bundle splitting. It's easy to see the problem. It's much harder to figure out a fix.

Modern web apps tend to put a lot of stuff in the head tag. If you have an async script tag, it doesn't matter if it's in the head. The beginning of the body is fine.

People tend to use Bootstrap, but it has lots of stuff they may not be using.

It's also important to remove CSS that isn't being used anywhere.

Everything should be gzipped. This is a big win.

Udemy's course landing page has 2.7 MB of content when uncompressed. That's kind of average, he said.

In DevTools, it can give you a list of unused CSS rules in the Audits tab.

Here are some common audit problems:
  • Combine external CSS and JS
  • Enable gzip compression
  • Compress images
  • Leverage browser caching
  • Put CSS in the document head
  • Unused CSS rules
If you have HTTP/1, combine separate JS and CSS files. With HTTP/2, keep them separate.

CloudFlare is one of the only CDNs that currently supports HTTP/2.

Combine and minify your assets.

Compressing images is probably one of your biggest wins. He said that it seems like Udemy is doing okay in this regard.

He recommended ImageOptim.

He has the settings set to JPEG 70%, PNG 70%, GIF 40%.

If you don't need transparency, you can switch to JPG.

Browser caching is another big win.

Next, he showed the Network tab:

Hit record. Refresh the page.

Press the camera icon to capture screenshots. It grabs screenshots every step along the way (anytime there's going to be a major refresh).

However, if you refresh, it'll start taking screenshots before the page comes back from the server.

It only shows the visible portions of the screen.

You can get a lot of wins by trying to optimize what things load first. The solutions are a little hackier, but you can impact the experience.

Server side rendering would help, but it's hard.

Udemy's loading indicators are making Chrome take a lot of screenshots.

It stops recording when the page is fully loaded.

If you record manually, it'll keep recording and taking screenshots until you explicitly hit stop.

He showed the waterfall in the Network tab.

Hover over the colors in the waterfall to see more details.

If something is "Queued", that means Chrome has postponed loading it.

Chrome prioritizes the order in which to load various assets. CSS is ranked higher than images. Prioritization is mostly by filetype.

Webpack can compile your CSS into your static HTML.

With Google's AMP, all the CSS has to be inlined into HTML.

Performance is often about give and take.

Chrome allows up to 6 TCP connections per origin.

If you see that a resource is "Stalled" or "Blocking", that's usually the result of some weird proxy negotiation.

DNS tends to be pretty stable.

He went through all the different parts of the request, explaining each one. He talked about:
  • Proxy negotiation
  • DNS lookup
  • Initial connection/connecting
  • SSL
  • Request sent, etc.
  • TTFB (time to first byte): This will suffer if your server is slow.
  • Content download/downloading
The important thing is to triage performance problems.

You can get really good information not just about what's slow, but why it's slow.

Common problems:
  • Queued or stalled: Too many concurrent requests.
  • Slow time to first byte: Bad network conditions or slowly responding server app.
The Audit tab has some useful stuff, but PageSpeed Insights has even more stuff.

We talked about the fact that we have a blocking script tag for Optimizely in our head. He said that sometimes, you need blocking code in the head, and there's no getting away from it. Optimizely is one of those times.

tools.pingdom.com is also useful. It's a bit more simplistic.

GTmetrix is the most advanced tool. It uses all the other tools, and then combines everything. The analysis of "What does this mean?" is pretty helpful.

Saucelabs and BrowserStack have some performance stuff.

In the Network tab, hold Shift while hovering over a file; it highlights in green and red which requests initiated it and which requests it initiated.

Hover over the initiator to get the call stack for the initiator.

In the Network tab, you can right click on the column headings, and there's more stuff that you can see. For instance Domain is helpful.

Preserve Log is helpful to save the log messages across refreshes.

You should almost always have Disable Cache selected. By the way, it only applies when DevTools is open.

There's an Offline checkbox. You can check this if you're working on service workers.

The throttling is super helpful. Don't forget to turn off the throttling, though ;) Good 3G is a good thing to try. Remember to disable cache.

1MB is not huge for sites these days. It's not really a problem until you're above 3MB.

100 requests is too many requests.

You should put your CSS above your images.

Next, he talked about the Timeline tab:

Chrome keeps adding more stuff to this tab. It's the most overwhelming tab in DevTools.

It has CPU throttling.

You can hide the screenshots (uncheck the checkbox) to make some space.

If you look at the memory graph and you see a sawtooth pattern that's trending upwards, it could be a memory leak.

If you don't see a memory leak, you can uncheck that checkbox.

The summary tells you how the browser is spending its time. CSS is in Painting and Rendering. Now, you can hide the summary.

At the top, there are red dots that might mean page jank.

The green at the top has to do with frames per second.

Then it shows you what the CPU is up to. The colors match the summary.

Selecting the timeline:
  • Click and drag.
  • Double click to select everything.
  • Single click to select a small piece.
  • You can also scroll side to side or scroll in.
  • When you're on certain flame charts, use shift when scrolling.
You can probably hide the Network and Screenshots stuff since it's already on the Network tab.

See FPS, CPU, and NET on the top right.

This stuff is very different between Chrome Stable and Canary.

He talked about Flame Charts. They're under Main. Wide is bad. Tall is not a problem. Find your widest things, and they're taking a long time to execute. This can help you find your slow code. Then, you can zoom in.

What he does is zoom in and look for the last function that's really fat, and everything under it is obviously skinny.

Total Time tells you how much time your function took to execute, and all the functions under it as well.

Self Time is just how much the function itself took, without counting how much the children took.

In the flame charts, dark yellow is native browser stuff, whereas yellow is the application code.

The colors in the flame charts correspond to the colors in the summary at the bottom of the page.

Ads and analytics are often the performance headaches.

CPU throttling is pretty helpful. One thing it's good for is that it makes it more obvious where the slow parts are in the flame charts.

You might need to turn on a Chrome flag to see the CPU throttling.

People used to use try/catch for lexical scoping before we had let. You can have a try that always throws, and the catch parameter is scoped to the catch block. Traceur used this trick. Babel just renames things.
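Here's a minimal sketch of the trick (the variable name is just for illustration):
try { throw void 0; } catch (x) {
    x = 42;          // x only exists inside this catch block
    console.log(x);  // 42
}
// console.log(x);   // ReferenceError: x is not defined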

Sometimes Chrome extensions get in your way.

PageSpeed Insights is more robust than the Audits tab.

The AMP project has their own Twitter and YouTube embeds.

If you're using a plain Node server, there's a compression middleware you can turn on to get gzip.
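For instance, here's a minimal sketch assuming an Express-style server with the compression middleware from npm (the route and port are made up):
var express = require('express');
var compression = require('compression');

var app = express();
app.use(compression());  // gzip responses for clients that send Accept-Encoding: gzip
app.get('/', function(req, res) {
    res.send('hello');
});
app.listen(3000);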

For Bootstrap, you can use the SASS version and just pull in the parts that you need.

Don't use a 1000 pixel wide image and shrink it down to 200px using the img tag. Compress them (lossy) and resize them.

He really likes the screenshot view. Remember to pop out DevTools so that it takes a larger screenshot.

Server side rendering is nice in order to get text onto the screen faster.

You can use command click to show both CSS and Img in the Network tab.

On the Network tab, you can use regexes, but that's not so common.

Embedding YouTube videos loads a lot of JavaScript. Consider deferring them. That could help the page finish loading much faster.

He talked about the DOMContentLoaded event and the Load event:

You can hook into either. They're native browser events.

For his example, it was the YouTube embeds that totally killed his page performance.

AMP has its own video.js.

Most of what he does on the Timeline tab is to look at stuff and then hide it when he figures out that that's not where the problems lie.

In the Summary, you can click on Bottom-Up, Sort by Descending Self Time. That's an easy way to find the slow parts of your code.

(He's using zsh with a very colorful prompt.)

Start with Audit and Network. Then go to the Timeline. Next, you can go to Profile.

Next, he talked about the Profile tab:

You can use Profile for CPU profiles or to take snapshots of the heap.

Remember, they kind of push everything into the Timeline. Profile has a simpler view of some stuff that's in Timeline.

Start profiling, then refresh the page.

Once you run a profile, when you go back to your code, it'll show times next to your functions in the Sources tab.

Next, he talked about page jank:

Jank is any stuttering, juddering, or just plain halting that users see when a site or app isn't keeping up with the refresh rate.

He talked about 60 FPS (frames per second):

Most devices refresh their screens at 60 FPS. The browser needs to match the device's refresh rate. 1 sec / 60 = 16.66ms per frame. In reality, you have about 10ms for your code.

In the Timeline tab, hit Esc. There's a rendering panel (it might be in the drop down). There's an FPS meter.

Most of the time, page jank is obvious just using the site.

He explained some causes of page jank:

Every time you do a write to the DOM, it invalidates the layout. For instance:

element1.style.height = (h1 * 2) + 'px';

You can use window.requestAnimationFrame() to ask the browser to let you know when it's about to do layout. Do all your writes in there.
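A minimal sketch of that pattern, reusing the element1 example from above:
var h1 = element1.offsetHeight;  // do the read up front
window.requestAnimationFrame(function() {
    element1.style.height = (h1 * 2) + 'px';  // batch the write right before the next frame
});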

There's a fastdom library. Basically, never do a write or a read to the DOM without going through the library. It eliminates DOM thrashing.
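Here's a rough sketch of how that looks (this assumes fastdom's measure/mutate API; older versions named these read/write):
fastdom.measure(function() {
    var h = element1.offsetHeight;  // batched DOM read
    fastdom.mutate(function() {
        element1.style.height = (h * 2) + 'px';  // batched DOM write
    });
});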

React has a lot of stuff tied to requestAnimationFrame. A lot of things use the fastdom library. It makes scrolling butter smooth.

He doesn't know if NG 1 takes care of using requestAnimationFrame correctly.

His favorite demo is koalastothemax.com.

Escape (to show the Console tab) >> Rendering (next to the Console tab) >> Paint Flashing. This shows green wherever there's a re-paint.

This will help you find cases where you're re-rendering things that don't need to be re-rendered. This is a common performance problem.

There's this thing in CSS, will-change: transform, that you can use to tell the browser to kick some work off to the GPU.

In his demos, the old Macs and the Windows machines were getting jank even though the newer Macs weren't.

There's an ad in his demo that has fixed position. That kind of stuff can cause page jank. Adding will-change: transform to the ad container helped.

If you have some ad that's 100x100 with a fixed position, it's likely you'll get jank.

JavaScript animations don't use the GPU, but CSS animations do. CSS animations are way smoother.

Use fixed position sparingly.

Next, he talked about how to find and fix memory leaks:

JavaScript is pretty good at garbage collecting. Leaks are not super common.

JavaScript uses mark and sweep.

Browsers are always getting better.

He talked about some common causes of JavaScript memory leaks:
  • Accidental globals: you forgot to use var (e.g. bar = "foo";). Strict mode disallows that.
  • Forgotten intervals: once you start an interval, it keeps going. This can lead to a leak if it keeps pulling in more and more data.
  • Holding onto a reference to a DOM element that is no longer in the DOM anymore: browsers and frameworks are getting better at handling this. However, event listeners are particularly likely to hold onto things.
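Here are minimal sketches of those three patterns (all the names are made up):
// 1. Accidental global: without var, bar becomes a property of window.
function leaky() {
    bar = "foo";  // in strict mode, this throws a ReferenceError instead
}

// 2. Forgotten interval: the callback (and everything it closes over) lives forever.
var cache = [];
var id = setInterval(function() {
    cache.push(new Array(10000));  // grows until someone calls clearInterval(id)
}, 1000);

// 3. Detached DOM node held by a reference.
var button = document.getElementById('btn');
button.parentNode.removeChild(button);  // removed from the DOM...
// ...but the button variable still pins it in memory.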
Again, he starts with Audits and Network tabs. Then he goes to the Timeline tab.

You can leak memory in the form of either JavaScript objects or DOM nodes.

With memory recording, record over a long period of time so that the growth is easier to see.

Next, he talked about the Profiles tab:

Use Take Heap Snapshot twice. Then, you can compare them. Sort by allocated size descending.

Profiles >> Allocation Timeline >> Summary >> Click on an object >> Object: you can use this to find the code that caused the allocation to happen.

Shallow Size: the amount of space needed for the actual item.

Retained Size: how much you'll be able to get rid of if you get rid of the object. For instance, there might be a parent that points to a lot of stuff.

Don't start profiling your memory until you see an obvious sawtooth on your timeline.

Chrome DevTools makes it pretty simple to tackle memory leaks.

Chrome's three-dots menu (top right of the browser) >> More Tools >> Task Manager: add the JavaScript memory column: you can use this to see how much JS memory is used per tab and per extension.

Next, he talked about Experiments:

Go to chrome://flags. Ignore the nuclear icon ;) Search for DevTools. Enable Developer Tools Experiments.

There are so many experiments.

Now, open DevTools >> Settings >> Experiments. There's CPU throttling. There's accessibility stuff. There's live SASS stuff. There's request blocking.

Hit shift 7 times, and you'll get super secret experiments!!! They're very much still in progress ;) They're not very stable yet.


Tuesday, July 26, 2016

JavaScript: Advanced JS Foundations

I went to an all day tutorial by Kyle Simpson. He has a book called You Don't Know JS. Here are my notes:

Scope

I missed the first 2 hours. However, that only consisted of 14 or so slides, and I managed to catch up by reviewing the slides and someone else's notes.

JS is a compiled language. It continuously compiles and executes the code.
var foo = "bar"
is 2 operations, not 1. It's both a declaration and an assignment.

The scope manager reserves spots in memory for variables.

When you assign to a variable, if the scope manager has heard of it, it uses it; otherwise, the assignment implicitly creates a global.

Functions have their own scopes.

He talked about shadowing.

Figuring out the scope of things is done during the compilation phase.

He talked about the LHS and RHS of an assignment.

window and global scope are different. Global scope sits above window. window is an alias to global scope only in the browser.

It's not a good idea to use global scope because:
  • You're more likely to see variable name clashes.
  • Looking up something in the global scope is harder (slower?).
You should use strict mode.

"Undefined" means that a variable has been declared, but has not been assigned to yet. "Undeclared" means that a variable has not been declared, and there is not instance attached to it.

Strict mode takes away many operations that make your program slower.

If you execute a non-function as if it were a function, you'll get a runtime type error.

Formal function declarations start with the word "function" as the first thing in the statement. Hence, if you wrap a function in parentheses, that's a function expression, not a function declaration.

Here's another function expression (not a function declaration):
var foo = function bar() {}
Function declarations bind to their enclosing scope.

A named function expression should be preferred to an anonymous function expression.

Lexical scope is something that is determined at author time, and is thus fixed at runtime.

Dynamic scopes are determined at runtime.

"Scope" is where to look for things like variables and function declarations.

JavaScript mostly uses function scope.

However, when you use try {} catch (err) {}, err is actually block scoped. You can't access it outside the catch block.

He showed lexical scope as a series of concentric rings.

Named functions can refer to themselves using their name.

3 reasons why named function expressions are preferable over anonymous function expressions:
  1. You sometimes need to refer to the function's name within the function. This is useful for unbinding event handlers (see the sketch after this list). Note that arguments.callee has been deprecated.
  2. Anonymous functions don't have a name, and hence they make stack traces a lot less readable.
  3. If you receive a function object, you can look at the name to know a little about what it is and who's sending it.
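For example, here's a minimal sketch of reason 1 (the btn element is made up):
btn.addEventListener('click', function onClick(evt) {
    console.log('clicked once');
    btn.removeEventListener('click', onClick);  // the expression's own name is in scope here
});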
The only reason to not name functions is if "you're lazy, uncreative, or you're trying to save space in your slides".
He thinks arrow functions suck; one of the reasons is that they're anonymous. The only thing he likes about them is that they fix lexical this.

In the newest JavaScript standards, they're adding a new feature called name inferencing. JavaScript can now infer the name of this function:
var f = () => 1;
However, callbacks don't have name inferencing, and that's the one time where you're most likely to use an arrow function.

He prefers named functions over arrow functions.

He says he's big into readability.

He uses arrow functions only when he needs to maintain lexical this. He says it's better than var self = this.

Only 2% of his code uses lexical this.

Linters are useful, but only if you know what they're for. Linters check code style. They can't check for correctness, accuracy, etc.

He doesn't think any linter understands lexical scope.

He says that linters are great in theory, but he ends up turning off a lot of rules because the linter is not smart enough.

He wishes the tools were smarter.

He says that linters are a bit stupid with respect to hoisting as well.

He thinks of lexical scopes as nested bubbles.

Lexical scope is fixed by the developer. It doesn't change dynamically.

You can use eval to add new variables to a scope dynamically:
function f(s) {
    eval(s);  // runs s as if its code were written right here
}

f('var x = 1;');  // dynamically adds x to f's scope
This is an exception to the normal rule that scopes are determined statically. The way to think about it is that eval executes the code as if you had written it in the place where you call eval. Using eval inhibits the VM from making certain optimizations.

He doesn't think that the security argument is a good argument against using eval. He says that your system can be insecure even if you don't use eval. Hence, he doesn't think that eval is inherently insecure. He doesn't like eval because it is horrible from a performance point of view.

If you have to ask whether you should use eval or not, the answer is no. You should only use it if you already know what you're doing and the tradeoffs.

He says he's a compiler nerd.

You can use a function constructor instead of eval.

The valid reasons for eval are extremely esoteric.

One reason is if you're using eval to figure out at runtime what syntax is supported by the engine.

You should probably still use the function constructor instead of eval.

In strict mode, eval creates its own scope. However, even in strict mode, the performance is still severely impacted by eval.

He says let and const are not the solution to var.

He says that browsers use a regex to look for eval, and if they see it, they disable the performance optimizations for that one file.

The with keyword is the other way of messing with lexical scoping in a dynamic manner.

with looks like a syntactic shortcut. It turns an object into a lexical scope at runtime. That's slow, and it has some subtle problems. You shouldn't use with.

He says one of the worst sins in programming is to create a construct where you don't know what it's going to do until runtime. (What about if statements?)

If you use:
with (obj) {
    d = 3;
}
Then if obj has d, it'll set that. Otherwise, it'll look up the scope hierarchy for a d. He says that that's horrible because you don't know what it's going to do until runtime.

You cannot use the with keyword in strict mode.

Knockout.js uses the with keyword all over the place. Code created via the Function constructor doesn't run in strict mode; Knockout makes use of that.

The Chrome console wraps all of your code in a with statement in order to provide a bunch of stuff that's only available in the console.

The with statement makes your code run slower.

Next, he talked about properties when it comes to lexical scoping.

An IIFE is an immediately invoked function expression:
(function() {
    ...
})();
He prefers to give his IIFEs names. If you can't come up with a name, call it IIFE.

This idiom also works:
!function IIFE() {
    ...
}();
The negation doesn't really do anything except turn the function declaration into a function expression.

He prefers:
void function IIFE() {
    ...
}();
The void operator just turns the function into a function expression. Furthermore, the void is kind of a hint that the function has no return value.

He prefers void as a stylistic preference, but most people use parenthesis.

It's bad to have a bunch of global stuff.

He talked about the "principle of least exposure" or "principle of least privilege". You should protect everything by default and expose things only when necessary.

It's very common to put your entire file into an IIFE.

He's working his way toward module systems.

You could just use this idiom to expose things from within the IIFE:
(function() {
    window.foo = ...;
})();
Now, elsewhere in the codebase, people can just use foo().

Here's another idiom:
(function(global, $) {
    global.foo = foo;
})(window, jQuery);
Now, you can use $ to reference jQuery even if you have multiple libraries that each like to use $.

You could name your IIFE something like CustomerLogin. (It's strange to see him use this naming convention since that convention is usually reserved for constructor functions.)

Variables declared using var are scoped within the nearest enclosing function.

let enables you to scope a variable to a block instead of a function.

He says that if you're not using let, putting var inside a block is still useful as a stylistic signal that the variable isn't used outside the block even though variables are hoisted to function scope.

However, using let is even better.

He thinks the idea of replacing all vars with lets is really stupid.

He thinks that let doesn't replace var, it augments it.

If he wants function scope, he uses var. If he wants block scope, he uses let.

He doesn't recommend using const very much. He doesn't think it really helps make anything better.

He is not consistent as to whether he puts a space before the curly when defining a function.

In general, he has very strong opinions and doesn't seem to have much regard for other people's opinions, even members of the standards bodies. Even the name of his book, "You don't know JS" presupposes he knows more than you.

He would have preferred this syntax for let since it's more explicit:
let (tmp = x) {
    x = y;
    y = tmp;
}
He likes this idiom:
if (x > y) {
    { let tmp = x;
        x = y;
        y = tmp;
    }
}
He likes it because it's clearer, and he's okay with the extra level of indentation.

The let keyword is good for for loops:
for (let i=0; i<10; i++) {
    ...
}
Dynamic scope is dependent on runtime conditions.

He says he doesn't know why most languages prefer lexical scoping over dynamic scoping other than for performance reasons. (This was explained in SICP. Lexical scoping makes it easier to reason about the program.)

You can think about hoisting by imagining that all the variable declarations and function declarations get moved to the top of the function. It's an important feature.

He says it enables mutually recursive functions, and that you wouldn't be able to use mutually recursive functions without function hoisting. (I think that as long as you have first class functions, you can do mutual recursion.)

Function expressions don't get hoisted.

Function hoisting is quite helpful. He routinely uses functions before they're defined.
foo();

function foo() {}
He likes to put the executable code at the top and the functions at the bottom.

If you try to use a let variable before it's been declared, you get a TDZ (temporal dead zone) error. I think they may call it a ReferenceError.
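A minimal sketch of the TDZ (the variable name is made up):
{
    // console.log(a);  // ReferenceError: a is still in the temporal dead zone here
    let a = 2;
    console.log(a);  // 2
}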

He recommends that you put all the let declarations on the first line.

Closure


Closure is when a function "remembers" its lexical scope even when the function is executed outside that lexical scope.

He claims that JavaScript is the first "mainstream" (i.e. non-academic) language to have closures. I mentioned that Perl probably got them first and was mainstream at the time. He claims that Perl wasn't mainstream by the time JavaScript got them. As far as I understand it, Perl added my in version 5 which was released in 1994, and Perl was certainly mainstream by then.

Furthermore, I would also take issue with his claim that Lisp (which had closures) wasn't mainstream since various Lisp variants were certainly used outside of academia.

He talked about the problem that occurs when you generate closures from inside a for loop. He talked about how to use IIFEs to avoid this problem.
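Here's a rough sketch of that gotcha and the IIFE fix (the names are made up):
for (var i = 0; i < 3; i++) {
    setTimeout(function() {
        console.log(i);  // 3, 3, 3 -- every callback closes over the same i
    }, 100);
}

for (var j = 0; j < 3; j++) {
    (function capture(k) {
        setTimeout(function() {
            console.log(k);  // 0, 1, 2 -- each IIFE call gets its own k
        }, 100);
    })(j);
}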

Here's the module pattern:
var foo = (function() {
    var o = { bar: 'bar' };
    return {
        bar: function() {
            console.log(o.bar);
        }
    };
})();

foo.bar();
Similarly:
var foo = (function() {
    var publicAPI = {
        bar: function() {
            publicAPI.baz();  // members can call each other via publicAPI
        },
        baz: function() {
            console.log('baz');
        }
    };
    return publicAPI;
})();
That way, the different things in publicAPI have access to each other via the name publicAPI.

He likes to criticize the TC39 committee.

He mentioned HTTP 2.

He said that if you want to use ES6 import syntax you should use HTTP 2 :-/ I think he may be glossing over how transpilers like Webpack fit into the picture.

Export syntax:
export function bar() { ... }
Import syntax:
import { bar } from "foo";
import * as foo from "foo";
He said there are about two dozen variations of the import / export syntax.

He hasn't switched to ES6 modules because there's political upheaval around them.

Even though the import syntax is valid in some browsers, the loader necessary to load them isn't yet a standard.

He said that TC39 didn't consider whether the Node guys would be able to switch to ES6 syntax.

He said the Node community isn't going to implement ES6 import syntax.

He says he's still on the sidelines.

He said the import syntax is synchronous.

Two characteristics that make something the module pattern:
  • There has to be an enclosing function that executes at least once.
  • That function has to return something that has a function that closes around its outer scope.
He said the module pattern is the most important pattern in JavaScript.

Object-oriented Programming


He doesn't think JavaScript has classes, and you should not do anything like classes in JavaScript.

He created the term "OLOO" which stands for objects linked to other objects.

He says that "behavior delegation" better matches what JavaScript does.

Every function, when executing, has a reference to its execution context, which it calls this.

(He uses Sublime Text.)

4 ways to bind this:
  1. Default binding: If none of the other rules apply: if we're in non-strict mode, this refers to window. In strict mode, this is undefined.

    By the way, you almost never want this to refer to the global object.

  2. Implicit binding: If there is a context object at the call site (e.g. obj.f()), then use the context object.

    By the way, you can "borrow" functions by attaching them to the object and then on that object.

  3. Explicit binding: f.call(obj).

    This allows us to define a single function and then use it in multiple contexts.

    The this system is very dynamic whereas lexical scoping is very fixed.

    This breaks when you have code like this:

    $("#btn").click(obj.foo);

    It's because jQuery saves obj.foo as something like cb, and cb doesn't reference obj anymore. (JavaScript doesn't have implicit bound methods like Python does.) However, you can use explicit binding:

    $("#btn").click(obj.foo.bind(obj));

    jQuery has a proxy method that does something similar.

    Some people just abandon using this entirely and just bind things explicitly since it gives you predictability back. However, hard binding is less flexible.

    The this keyword is very powerful and very dynamic. Sometimes, lexical scoping is simpler and more predictable.

  4. Using new: new foo();
There's a precedence order for the 4 rules, because it's possible to hit multiple rules at the same time:
  1. Was the function called with new?
  2. Was the function called with call or apply with an explicit this?
  3. Was the function called via a containing/owning object (i.e. context)?
  4. Otherwise, use the global object (or undefined in strict mode).
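Here's a minimal sketch of the four rules in action (all the names are made up; assume non-strict mode):
function speak() { console.log(this && this.name); }

var obj = { name: 'obj', speak: speak };

speak();                        // default binding: this is the global object (undefined in strict mode)
obj.speak();                    // implicit binding: logs 'obj'
speak.call({ name: 'other' });  // explicit binding: logs 'other'
new speak();                    // new binding: logs undefined (this is a brand new, empty object)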
When you use new:
  1. A brand new empty object is created.
  2. The brand new empty object gets linked to another object (the prototype).
  3. The newly created and linked object gets passed into the function as this.
  4. If the function doesn't return an object, it assumes that the function should return this.
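As a rough sketch, those four steps amount to approximately this (the helper name is made up):
function fakeNew(Ctor) {
    var obj = Object.create(Ctor.prototype);                       // steps 1 and 2: create and link
    var ret = Ctor.apply(obj, [].slice.call(arguments, 1));        // step 3: call with this = obj
    return (typeof ret === 'object' && ret !== null) ? ret : obj;  // step 4: prefer a returned object
}
// fakeNew(Foo, 'x') behaves roughly like new Foo('x').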
You can use new with almost any function. He says that constructor functions really don't do much and really don't construct anything. It's new that does all the work.

The new keyword can override even a hard-bound function (i.e. one that already has its own this).

Every single "object" is built by a constructor call. (I wonder if he considers an object literal to be a constructor call.)

A constructor call makes an object whose prototype is the constructor's prototype object.

He explained the prototypal system.

He said the traditional approach to OOP (e.g. classes) is all about "copying" stuff from the parent into the child.

Pretending JavaScript is like other languages causes a lot of confusion.

Rather than copy down, it's a link up.

An object has a property that refers to another object.

The base prototype (Object.prototype) doesn't have a good name. It has things like toString().

The prototype object has a property called constructor. That doesn't necessarily mean that the thing it points to is really a constructor.

An object has a prototype. That prototype might refer to another prototype.
obj.__proto__ === ObjConstructor.prototype
obj.constructor === ObjConstructor
If you ask for something on this that isn't on this, it looks it up on the prototype.

A core theme of this class is how to compare and contrast JS's lexical scoping system with its prototypal system.

He's really big on the idea that functions aren't really constructors; they're just initializers. new does the real work of constructing the instance.

There's Object.getPrototypeOf(obj).

In the IE8 days, we used to write this hack:

obj.constructor.prototype

Both the constructor and prototype properties are changeable at runtime.

JavaScript didn't originally have super. ES6 has a super keyword.

It's not easy and automatic for a method in a child class to refer to the method of the same name in its prototype:
Foo.prototype.method.call(this)
If the child and parent have differently named methods, it's way easier.

To link from a child to a parent:
function Bar(who) {
    Foo.call(this, who);  // borrow Foo to initialize this (a manual "super" call)
}
Bar.prototype = Object.create(Foo.prototype);  // link Bar.prototype up to Foo.prototype
All the magic of the prototype system really comes down to the fact that objects are linked together.

He said it's too complex. He doesn't like "classes" in JavaScript.

He prefers thinking of it as "behavior delegation" over "inheritance".

OLOO = objects linked to other objects

He likes this idiom. He calls this OLOO style coding:
var Foo = {
    init: function(who) {
        this.me = who;
    }
};
var Bar = Object.create(Foo);
Bar.speak = function() {
    console.log("Hello, " + this.me + "!");
};
var b1 = Object.create(Bar);
b1.init("b1");
b1.speak();
This does the same as the other way of writing the code.

He likes the fact that Douglas Crockford created Object.create. However, he thinks that Crockford "went off the rails" when abandoning this.

In his code:
  • 95% = module pattern
  • 5% = delegation pattern (OLOO)
Object.create was added in ES5. Crockford suggested it. Here's how it's basically implemented:
Object.create = function(o) {
    function F() {}
    F.prototype = o;
    return new F();
};
Simpson's entire OO system is just based on Object.create.

There was some discussion at the end. He said that JavaScript's object system was radically different than any other language's since:
  • It's less about classes copying things into the object and more about an object pointing to its prototype.
  • Constructor functions don't actually construct instances, they only initialize them.
I pointed out that actually:
  • The prototype system started with Self in the 80's. Python's class system is very similar to JavaScript's if you treat classes as objects and never instantiate them. Each parent class is like the child class's prototype. When you look for something on a class, it'll go up the parent chain dynamically.
  • Java's constructor methods don't instantiate objects either. The constructor methods themselves are really just initializers like in JavaScript. By the time they're called, the object already exists. Objective C is one of the few OO languages that I know that lets you explicitly allocate and initialize the object in two different steps.
He also said that JavaScript and Lua are the only languages that allow you to just create an object and start attaching things to it without creating a class first. However, I think you can certainly do this in Python and Ruby, and I suspect you can do it in some other languages as well.

Delegation Design Pattern


"Delegation-oriented design".

He says that it was the late 90s when we finally started preferring composition over inheritance.

He says the prototypal chain is "virtual composition".

He says with delegation, it's peer-to-peer.

He says when you're writing JS, you should go with the flow and use JS the way it was meant to be used.

He says you should ignore the ES6 class syntax. He says that's the only part of the language that he categorically discards. He doesn't even touch the class keyword. He says it's duct tape on top of duct tape.

He doesn't think that class is just syntactic sugar. He says that's not true.

He says they're adding even more class-oriented stuff (doubling down) in later versions of JavaScript.

Anything you put on the prototype is public. It's not like closures where there's a lot of room to hide things.

Some parts of your app should use modules. Some parts should use the prototype system.

Here's an idiom he likes:
var Button = Object.assign(Object.create(Widget), {
    configure: function(options) { /* ... */ },
    activate: function() { /* ... */ }
});
He said to avoid shadowing method names in different parts of the prototype chain.

He said he's been working in JavaScript for 18 years.