Friday, February 03, 2017

Microservices.com Practitioner Summit

I went to the Microservices.com Practitioner Summit. Here are the videos. Here are my notes:

Lyft's Envoy: From Monolith to Service Mesh

Matt Klein, Senior Software Engineer at Lyft

Envoy from Lyft looks pretty cool. It's a proxy that runs on every server that facilitates server-to-server communication. It takes care of all sorts of distributed systems / microservices problems. It implements backoff, retry, and all sorts of other things.
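
The retry behavior he described can be sketched as capped exponential backoff with full jitter. This is a hypothetical illustration of the general technique, not Envoy's actual implementation (which is C++); the function name and constants are made up:

```javascript
// Hypothetical sketch of retry with capped exponential backoff and jitter,
// the kind of policy a proxy like Envoy applies to failed upstream calls.
async function retryWithBackoff(attemptCall, maxRetries = 3, baseMs = 100, capMs = 2000) {
  for (let attempt = 0; ; attempt++) {
    try {
      return await attemptCall();
    } catch (err) {
      if (attempt >= maxRetries) throw err;
      // Full jitter: sleep a random amount up to the capped exponential delay.
      const delay = Math.random() * Math.min(capMs, baseMs * 2 ** attempt);
      await new Promise((resolve) => setTimeout(resolve, delay));
    }
  }
}
```

The jitter matters: without it, a fleet of clients retrying a recovering server all hit it at the same instants.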

It works as a byte-oriented proxy, but it has filters to apply smarts to the bytes going over the wire.

It takes care of a lot of the hard parts of building a microservices architecture--namely server-to-server communication.

It's written in C++11 for performance and latency reasons.

He said that there are a bunch of solutions for doing service discovery. A lot of them try to be fully consistent. These include ZooKeeper, Etcd, Consul. However, he felt that it was better to build an eventually consistent service discovery system.

When you build a system using those fully consistent systems, you usually end up with a team devoted to managing them. However, Lyft's eventually consistent system was only a few hundred lines of code and had been rock solid for six months.

They used lots of health checks, and the results of the health checks were more important than what the discovery system said.
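
That policy--trusting active health checks over the (eventually consistent) discovery data--might look something like this; the names are hypothetical:

```javascript
// Hypothetical sketch: trust active health checks over discovery data
// when picking routable hosts.
function routableHosts(discoveredHosts, healthStatus) {
  // healthStatus maps host -> true/false from the latest active health checks.
  // A host the discovery system still lists but that fails its checks is
  // excluded; staleness in discovery is tolerated, unhealthiness is not.
  return discoveredHosts.filter((host) => healthStatus.get(host) === true);
}
```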

He also recommended Lightstep and Wavefront.

Envoy can work with gRPC, although there's a little bit of overlap in terms of what each provides.

Microservices are the Future (and Always Will Be)

Josh Holzman, Director, Infrastructure Engineering at Xoom

Xoom is a company that lets you send money to people in foreign countries. It's been around for something like 16 years (as I recall).

They're running Apache Curator on top of ZooKeeper for service discovery. Apparently, that removes some of the need to be fully consistent. He completely agreed with the earlier speaker's suggestion that eventually consistent systems were better for service discovery.

He mentioned Grafana and InfluxDB.

He said that moving to microservices gave them more visibility into their overall software stack, and that enabled them to achieve better performance and lower latency. However, their latency distribution is wider.

They use 2 DC's as well as AWS.

He mentioned Terraform and Packer.

They use Puppet and Ansible to manage their machines.

He said that the whole infrastructure as code idea is a good one, but it's important to use TDD when writing such code. He said that they use Beaker for writing such tests.

They have self service and automated deploys.

He recommended that you start eliminating cross-domain joins now. However, he admitted that they still haven't achieved this.

He said that analytics is hard when you have a bunch of separate databases.

Listening to a lot of people, it sounds like most companies still have monoliths that they're chipping away at.

You need to think about how to scale your monitoring. They had a metrics explosion that took out their monitoring system.

He said that moving to microservices was worth it.

They have "Xoom in a box" for integration testing. There are a bunch that are running all the time, and you can deploy to one of them.

They also use mocks for doing integration testing.

They have to jump through a bunch of regulatory compliance hoops. One of the requirements that they have to follow is that the people who have access to the code and the people who have access to prod must be completely separate.

They have production-like data sets for testing with.

They have dev, QA, stage, and prod environments.

Bringing Learnings from Googley Microservices into gRPC

Varun Talwar, Product Manager for gRPC at Google

HTTP/JSON doesn't cut it.

gRPC is Google's open source RPC framework. Google is all Stubby internally. gRPC is their open source version of it.

They have 10s of billions of RPCs per second using Stubby.

Just like gRPC is their open source version of Stubby, Kubernetes is their open source version of Borg.

He joked that there are only two types of jobs at Google:
  • Protobuf to protobuf
  • Protobuf to UI
Forward and backward compatibility is really important.

So is using binary on the wire.

gRPC supports both sync and async APIs. It also supports streaming and non-streaming.

Deadlines are a first class feature.

It supports deadline propagation.
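
Deadline propagation means each hop forwards only its remaining time budget downstream rather than starting a fresh timeout. Here's a minimal sketch of the idea--not gRPC's actual API, which carries the deadline in request metadata; all names are made up:

```javascript
// Hypothetical sketch of deadline propagation: the caller sets an absolute
// deadline, and each downstream hop gets only the time that remains.
function remainingBudgetMs(deadlineMs, nowMs = Date.now()) {
  return deadlineMs - nowMs;
}

async function callDownstream(deadlineMs, doCall) {
  const budget = remainingBudgetMs(deadlineMs);
  if (budget <= 0) throw new Error('DEADLINE_EXCEEDED'); // fail fast
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('DEADLINE_EXCEEDED')), budget);
  });
  try {
    // Race the real call against the remaining budget.
    return await Promise.race([doCall(), timeout]);
  } finally {
    clearTimeout(timer); // avoid a stray timer firing after the call resolves
  }
}
```

The fail-fast check is the payoff: once the overall deadline is blown, downstream services stop doing work nobody is waiting for.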

You can cancel requests, and that cancellation is cascaded to downstream services.

It can do flow control.

You can create configuration for how your service should be used.

You can even use gRPC for external clients (such as the website or mobile clients).

It's based on HTTP/2.

The Hardest Part of Microservices: Your Data

Christian Posta, Principal Architect at Red Hat

Here are the slides.

A lot of smaller companies use the same tools that the big guys released in order to build their microservices, but that doesn't always work out so well. You can't just cargo cult. You have to understand why the big guys did what they did.

Microservices are about optimizing for being able to make changes to the system more quickly.

Stick with relational DBs as long as you can.

I loved the image he used.

mysql_streamer from Yelp lets you stream changes as they come into MySQL.

Debezium.io is similar. It's built on top of Kafka Connect.

Funny: the guy from Red Hat is using OS X.

Note to self: always have a video of your demo just in case it fails.

They currently support MySQL and Mongo, and they're working on PostgreSQL (although it has its own mechanism).

Systems are Eating the World

Rafi Schloming, CTO / Chief Architect at Datawire

WTF is a microservice?

Just because you know distributed systems doesn't mean you know microservices.

According to Wikipedia, "There is no industry consensus yet regarding the properties of microservices, and an official definition is missing as well."

He said it's about technology, process, and people.

3 other things to keep in mind are experts, bootstrapping, and migrating.

He told his story of building microservices infrastructure using microservices.

Minikube is for running Kubernetes locally.

This cheat sheet covers the what, why, and how of microservices.

Engineering & Autonomy in the Age of Microservices

Nic Benders, Chief Architect at New Relic

This was perhaps the most interesting talk, although really it's a talk about management, not microservices. It starts at about 5:46:00 in the livestream. Here are the slides.

He and many others kept mentioning Conway's Law, "Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."

New Relic wanted, "Durable, full-ownership teams, organized around business capabilities, with the authority to choose their own tasks and the ability to complete those tasks independently."

They wanted to eliminate or at least minimize the dependencies between teams.

He talked about optimizing team structure to make the company more successful. They didn't just talk about how to reorganize, they looked empirically at how teams communicated.

They wanted to invert control in the org.

They figured the best way to get the engineers into the right teams was to let the engineers pick which team to be in. Hence, they did self selection. They didn't even conduct internal interviews in order to switch teams. The ICs were fully in control.

Early on, this made the managers and even the ICs unhappy. The managers wanted control of who was on their team, and the ICs were fearful that this was a game of musical chairs, and they were going to be left without a chair.

They almost backed down but they didn't.

They really wanted the ICs to have self determination.

They have a core value as a company: we will take care of you.

In my mind, this was like a gigantic experiment in management which is why it made for such a fascinating talk; although, to be fair, the speaker was also quite engaging.

1/3 of people ended up switching teams.

Each team crafted a "working agreement" based on answering the question "We work together best when..."

They optimized for agility, not efficiency. These two things are very different. Most companies optimize for efficiency. Hence, they have deep backlogs, and engineers are never out of things to do. He said that people should optimize for agility instead. This means that people can switch projects and teams easily, although they may suffer from going more slowly. Also, the backlogs are shorter.

At least one team practiced "mob programming" which is where the entire team participates in "pair" programming. This is terrible for efficiency, but it's great for helping people get up to speed.

The experiment worked. The reorg took about a quarter before things really settled down, but over the course of a year, they got a lot more stuff done than they would have otherwise.

Autonomous teams have rights and responsibilities.

You hired smart people--trust them.

Toward the end, he gave a great list of book recommendations:
  • The Art of Agile Development
  • Liftoff: Launching Agile Teams & Projects
  • Creating Great Teams: How Self-Selection Lets People Excel
  • Turn the Ship Around!
  • The Principles of Product Development Flow
All of their teams had embedded PMs.

It's more important to have autonomy than to have technological consistency.

Engineers were allowed to deploy whatever they wanted using containers. They also had to meet some minimum observability requirements.

They even had one team that used Elixir and Phoenix.

The managers weren't allowed to change teams; they provided stability during the process.

There was turnover, but it wasn't that much different than their yearly average. He said, "You're not going to win them all."

The stuff between teams (such as the protocols used between services) is owned by the architecture team. He said this was the "interstate commerce clause."

They came up with a list of every product, library, etc. in the company, and then transferred each of these things during a careful 2 week handover period. However, there were still a few balls that got dropped.

They're still working on what to do going forward, such as how often to do such a reshuffle or whether they should do something continuous.

Microservice Standardization

Susan Fowler, Engineer at Stripe (and previously Uber) and author of "Production Ready Microservices".

Microservices in Production is an ebook that summarizes "Production Ready Microservices".

She said that the inverse of Conway's Law causes the structure of a company to mirror its architecture.

I think it's interesting that she left Uber for Stripe.

She was in charge of standardizing microservices at Uber. It was a bit of a mess.

Microservices should not mean you can do whatever you want, however you want.

Every new programming language costs a lot.

Microservices are not a silver bullet.

Microservices can lead to massive technical sprawl and debt, which isn't scalable.

She was trained as a physicist, but she said there are no jobs in physics.

At Uber, there was no trust at the org, cross-team, or team level.

There was a need for standardization at scale.

They needed to hold each service to high standards.

Good logging is critical for fixing bugs. With microservices, it can be very hard to reproduce bugs.

Production readiness should be a guide, not a gate.

Monday, November 14, 2016

Chrome Dev Summit 2016 Day 1

I attended day 1 of Chrome Dev Summit 2016. Here are my notes:
Intermission
The intermissions between the talks were really entertaining ;)

Keynote: Darin Fisher: Chrome Engineering

Mission: Move the web platform forward.

Over 2 billion active Chrome browsers.

Bluetooth beacons are broadcasting the URL for the Chrome Dev Summit website.

polymon.polymer-project.org

Mobile web usage has far eclipsed desktop web usage.

Almost 60% of mobile is 2G.

India is experiencing the most growth in people getting online.

A lot of Indian users have so little storage space on their phones, they can't afford to install a lot of apps. They routinely install and then uninstall apps.

The web works really well in emerging markets.

Progressive Web Apps are radically improving the web user experience.

He demoed cnet.com/tech-today. It's a progressive web app. He showed a smooth experience even though his phone was in airplane mode. The icon was added to his home screen.

11/11 is singles day in China. It's happening right now. It's the biggest shopping day of the year for them. Alibaba built a Progressive Web App, and it increased conversion by 76%.

53% of users abandon sites that take longer than 3 seconds to load.

It's all based on the Service Worker API. It lets you control your caching strategy and offline approach.

Apps should be interactive within 5 seconds on a 3G connection.

Users re-engage 4x more often once you've added the site to the home screen.

Prompting users sooner yields 48% more installs.

18 billion push notifications are sent daily.

50,000 domains use push.

Seamless Sign-In using the Credential API: -85% sign-in failure, +11% conversion for Alibaba.

Lighthouse is a Chrome extension that they built to help you optimize your Progressive Web App.

There's a new Security tab in Chrome Dev Tools.

Polymer is sugaring on top of Web Components that makes the web platform more capable.

Browsers have implemented Web Components, minus the HTML imports part.

Polymer App Toolbox makes it easy and fast to prototype a Progressive Web App, and to transition from prototype to real app.

beta.webcomponents.org

AMP: Accelerated Mobile Pages

700,000 domains are using AMP.

developers.google.com/web/

browser-issue-tracker-search.appspot.com: Search for bugs across the major browsers.

Progressive Web App Roadshow: developers.google.com/web/events/

He mentioned Web Assembly, WebGL, WebVR, AR (augmented reality) etc.

Imagine you walk up to a device. It uses web Bluetooth to share a URL. The URL takes you to a website that allows you to control the device. The website uses web Bluetooth to control the device.
Intermission
bigwebquiz.com

There are a lot of subtle anti-Trump references from the various speakers.

Building Progressive Web Apps

@thaotran

A 1MB download can cost up to 5% of the monthly salary for some people.

She talks to lots of companies about being early adopters of web technologies.

Flipkart built a Progressive Web App that was really successful.

They saw as much traffic on mobile web as on their mobile app on the biggest shopping day of the year in India.

Smashing Magazine: A Beginner's Guide to Progressive Web Apps

What is a Progressive Web App: it's about radically improving the web user experience.

It should be your main web site.

developers.google.com/web/progressive-web-apps/checklist has a baseline and an exemplary checklist.

She mentioned the Lighthouse Chrome extension.

Alibaba conversion rate improvement:
  • +14% iOS
  • +30% Android
housing.com was paying $3.75 to acquire new Android users. However, it only cost them $0.07 to acquire new mobile web users.

Lyft is no longer app only. They built a web app, and they got 5X more traffic than they expected.
  • Android app: 17MB
  • iOS: 75MB
  • PWA: < 1MB
makemytrip.com found that for first visit bookings, PWA users book 3X more often than app users.

IBM acquired The Weather Company (The Weather Channel).

Things they're talking about:
  • Progressive Web Apps
  • Accelerated Mobile Pages
  • Push Notifications
  • Seamless Sign-In
  • One Tap Checkout

Sign-in on the Web

54% of users will abandon rather than register.

There's an autocomplete attribute that you can use on your login forms.

92% of users will leave a site instead of resetting login info.

g.co/2CookieHandoff: 2 cookie handoff using Service Workers

There's a Credential Management API.

It's a standards-based browser API.

It's in Chrome.

Automatically sign in the user on the homepage.

AliExpress:
  • 11% increase in conversion
  • 85% drop in sign-in failures
  • 60% decrease in time spent signing in
It can work with federated accounts such as logging in with Google or Facebook.

Google Smart Lock: the browser remembers your credentials. Then, on another device, it'll automatically sign you in. Nice demo.

Google Smart Lock even works with federated accounts.

It also works even if you've signed in in multiple ways.

Really impressive.

g.co/CredentialManagementAPI

Faster Web Payments

@zachk

PaymentRequest.

Let the browser help you with checkout.

Mobile has 66% fewer conversions than desktop.

Chrome started with autofill. Set autocomplete types on the checkout forms.

But, today, he's talking about PaymentRequest. It's a new payments API for the web.

It's *not* a new payment method or a processor/gateway.

It's a standards-based API baked into the browser.

polykart-credential-payment.appspot.com

The API is based on native UI.

Really seamless. No keyboard usage needed (except typing in the CVC).

For now, Chrome supports credit cards and Android Pay.

It's really about helping the user avoid filling in a bunch of form fields. The browser can help a lot.

Kill the cart: give the user an option to "Buy Now".

Think of this as a progressive enhancement.

PaymentRequest works best for new users and guest checkout flows.

Coming soon: support for third-party payment apps.

Integration guide: goo.gl/eLefwM

I think this is only mobile for now.

Debugging the Web

@paul_irish

The call stack now looks better, even when you shrink the window.

Chrome supports ES6 pretty well. Even in the console, ES6 is supported pretty well. Even async/await.

You can now enter multiple lines in console.log without hitting shift return. It indents as you go. It does brace matching. He pointed out that Safari had this first.

Tab completion works in the console quite nicely. It even works for arrays once you hit '['. The tab completion works with substring completion instead of just prefixes--just type any part of the word.

There's a snippets tab in the sources tab where you can have snippets of code that might be useful. These are persistent.

Inline breakpoints: You can place a breakpoint, and then it allows you to drag the breakpoint within the line.

Note to self: never use the unauthenticated GitHub API during a demo--you might get rate limited. At the very least, have some static data to use as a backup.

The workspaces feature is improved: In Sources: Network: Drag an OS X folder onto the left pane. Without configuration, you should be able to edit your code in Chrome. It works even if you're using transpilation with sourcemaps.

There's a new shadow editor.

As of today, in the Canary, there's a checkbox with CSS coverage. You hit record, and then start using the site. It'll mark the CSS styles that were never used.

He's using iTerm2.

node --inspect app.js

It gives you a URL. That's the old way.

However, in Sources, there's a Node thing under Threads. Then, in Console, there's a context.

You can use a single instance of Dev Tools to debug both the frontend and the backend.

Application tab >> Manifest: Tells you about the state of your Service Workers.

There's also Application >> Clear storage.

Application: Service Workers >> Update on reload. Lets you reload the service worker every time. There's also "Bypass for network". That's useful to skip the service worker.

He plugged the Lighthouse extension again. It's useful to turn a web app into a Progressive Web App. It also helps audit for best practices. Captures performance metrics too.

It's just a Node module. A lot of tools are already building on top of it. You can even set it up for use in CI.

Chrome Dev Tools >> Settings >> More tools >> Audits 2.0 is powered by Lighthouse.

They wrote caltrainschedule.io.

@ChromeDevTools.

He's using Visual Studio Code a bit.

Polymer, Web Components, & You

@TaylorTheSavage

"We are at war with phones."

#UseThePlatform

Polymer is the Chrome team's set of libraries for web developers to take advantage of the web platform.

Web components is an umbrella term for a bunch of low-level features.

Web Components v1 is something you can use today. All major browsers are onboard with everything--except HTML Imports, which are still on hold.

Cross-browser web components are here.

Polymer is a light weight layer on top of Web Components. It's evolved with Web Components.

Polymer 1.0 is production-ready. That was released last year.

Comcast, USA Today, ING, Coca Cola, EA, etc. use Polymer.

It's used by Chrome, Play Music, YouTube Gaming, YouTube (their next version of mobile and desktop app).

There's a Polymer App Toolbox that they released a while ago to show how to build full apps.

75% of mobile connections in sub-Saharan Africa are 2G. It's forecast to be 45% in 2020.

She showed off Jumia Travel. It uses very little data, yet still provides a nice UI.

They're working on Polymer 2.0. They just released a preview.

They're focused on:
  • Web Component v1 support.
  • Better interoperability with other JavaScript libraries. It's truly web native, and looks just like a normal web component.
  • Minimal breaking. There's an interoperability layer. You can incrementally upgrade.
Polymer is only 12kb.

Mobile web development is hard, expensive, inefficient, slow, and confusing.

There are about 500+ apps using Polymer.

There's an event system.

Use properties, not methods.

Use dialog.open = true instead of dialog.open().

Properties go down. Events go up.
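
The dialog.open advice can be sketched with a plain class: state flows in through a property, and changes flow out through an event. The element name here is made up (EventTarget and Event are standard, and available in Node 15+):

```javascript
// Hypothetical sketch of "properties go down, events go up": callers set
// state via a property, and the component announces changes via an event,
// mirroring the dialog.open = true advice.
class FakeDialog extends EventTarget {
  #open = false;
  get open() { return this.#open; }
  set open(value) {
    if (this.#open === value) return; // no change, no event
    this.#open = value;
    // Announce the change upward instead of making callers poll.
    this.dispatchEvent(new Event('open-changed'));
  }
}
```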

"Fear driven development." Be afraid of your users. Be afraid of breaking changes.

tattoo = Test All The Things Over & Over = it's their repo for testing Web Components.

Be afraid of perf regressions.

polydev is a tool that you can use to see how expensive the various elements are.

polyperf lets you compare the performance between changes.

demo-snippets shows you an element and how to use it. It's dynamic, so that you can edit the code.

webcomponents.org has re-usable elements.

Progressive Performance

@slightlylate

Great talk!

He showed this comic: xkcd.com/1367

"So it takes a lot for me to get to this point. But seriously folks, time to throw out your frameworks and see how fast browsers can be."

He mentioned extensibleweb/manifesto.

RAIL:
  • Respond: 100ms
  • Animate: 8ms
  • Idle work: 50ms chunks
  • Load: 1000ms to interactive
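
A hypothetical helper for checking a measured duration against those budgets (the budget values are the ones from the talk; the function is made up):

```javascript
// RAIL budgets in milliseconds, as listed above.
const RAIL_BUDGET_MS = { respond: 100, animate: 8, idle: 50, load: 1000 };

// Returns true if a measured duration fits within the budget for a phase.
function withinRailBudget(phase, durationMs) {
  if (!(phase in RAIL_BUDGET_MS)) throw new Error(`unknown phase: ${phase}`);
  return durationMs <= RAIL_BUDGET_MS[phase];
}
```
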
53% of users bounce from sites that take longer than 3 seconds to load.

The average mobile site takes 19 seconds to load.

"Your Laptop Is A Filthy Liar": Simulating real devices using Chrome is just not the real thing.

Use Chrome inspect in order to use USB to debug real devices.

It's really, really hard to hit RAIL on mobile, even with a reasonable phone like a Nexus 5X.

"The Truth Is In The Trace". Use Dev Tools attached to real devices.

He tests with cheap, slow phones. He doesn't think that emulation cuts the mustard.

If you're using a $700 iPhone and assuming that other users have similar hardware, you're wrong. Worldwide, phones are getting slower on average as more people buy cheap phones.

Desktop is 25X faster than mobile.

He added an ice pack to the bottom of a phone in order to keep it cool, and his benchmark ran 15% faster ;)

Desktops are faster because they can dissipate more heat.

He recommended the paper, "Dark Silicon and the End of Multicore Scaling".

We can't use all the silicon in our phones because of the heat dissipation constraints.

And the battery has only 10 watt hours.

We can't dissipate enough power, and we can't carry enough power.

A cell phone battery can't keep a 60W light bulb lit for more than a few minutes.
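
The arithmetic behind that claim, given the ~10 watt hour battery mentioned above:

```javascript
// Runtime in minutes for a battery of a given capacity driving a fixed load:
// (watt hours / watts) gives hours; multiply by 60 for minutes.
function runtimeMinutes(batteryWh, loadWatts) {
  return (batteryWh / loadWatts) * 60;
}
```

A 10 Wh battery into a 60 W bulb is 10/60 of an hour: about ten minutes.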

big.LITTLE refers to the technique of having a mix of:
  • Infrequently used "big" (high power) cores
  • Aggressively used "little" (lower power) cores
You get what you pay for when it comes to mobile devices.

"Touch boost": when you touch the screen, it turns on the more powerful CPUs.

"Benchmarketing" is a thing.

A MacBook Pro has the maximum amount of power stored in a battery that you can bring onto a plane.

Nexus 5X has 2 fast cores and 4 slow ones.

Flash read performance:
  • MBP: ~2GB/s
  • N5X: ~400MB/s
Mobile phones use fewer flash chips which means less parallelism which means slower.

Think of mobile phones as having spinning disks from 2008.

"High Performance Browser Networking" is a really good book. He highly recommends it.

"Mobile networks hate you."

"Adaptive Congestion Control for Unpredictable Cellular Networks".

What's really killing you is the variance / volatility.

LTE speed is actually getting slower.

Different markets are wildly different.

"A 4G user isn't a 4G user most of the time." -Ilya Grigorik

It's really hard to get anything on the screen in 3 seconds because it takes so long to even spin up the radio.

The tools and the methods that we've brought over from the desktop just aren't working.

Using the platform well is the only way to go.

Today's frameworks are signs of ignorance or privilege or both. It's broken by default.

Server-side rendering = isomorphic rendering = universal rendering

The Pixel phone is pretty good. Most phones aren't.

Scrolling != interaction.

We want progressive interactivity.

Only load the code that you need right now.

shop.polymer-project.org

Using HTML imports with tiny scripts means you don't have big bundles that make the thread choke.

Service workers lets you return something quickly from cache.

The top-level SW shouldn't depend on the network. Don't use a SW that hits the network first.
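
A cache-first strategy along those lines can be sketched like this. A Map stands in for the browser's Cache API, and the function names are made up; a real service worker would use caches.match inside a fetch handler:

```javascript
// Hypothetical sketch of a cache-first strategy: serve from cache when
// possible and only fall back to the network, so the response never waits
// on the radio if it doesn't have to.
async function cacheFirst(url, cache, fetchFn) {
  if (cache.has(url)) {
    return cache.get(url); // fast path: no network involved
  }
  const response = await fetchFn(url); // fallback: hit the network
  cache.set(url, response);            // populate the cache for next time
  return response;
}
```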

github.com/GoogleChrome/lighthouse

"Mobile Hates You; Fight Back!"

Test on real hardware.

Use a Moto G4. Use Chrome Inspect on Dev Tools. That'll show you what it's like in the median real world.

Use WebPageTest since it lets you use real mobile phones.

PRPL is important (but I don't know what it's about yet).

The challenge is larger than you think it is.

Real Talk About HTTPS

@estark37

Over 50% of pages are loaded over HTTPS.

www.google.com/transparencyreport/https/metrics

For desktop platforms, 75% of time is spent on HTTPS. It's probably on top sites.

For some reason, it's somewhere around 33% for Android. They don't know why, but they have a theory: those users rely on apps for sites like Gmail and Facebook rather than the browser, so most of their web usage goes to longer-tail sites.

They're restricting powerful features if you're not using HTTPS. Geolocation is one example.

Mozilla is only going to support HTTPS for all new features.

istlsfastyet.com: From Ilya Grigorik.

HTTP/2 is only available if you're also using HTTPS.

HTTP/2's performance advantages can more than offset the cost of TLS.

They have a goal of HTTPS everywhere.

At this point, we can't yet show an insecure icon every time a site uses HTTP. It would just desensitize people to the risk.

Let's Encrypt: Free, easy certificates. It's a certificate authority.

Chrome made a large donation to Let's Encrypt.

Its usage is growing exponentially.

All Google sourced ads are already served over HTTPS.

developers.google.com/web/fundamentals/security/encrypt-in-transit/enable-https

Moving to HTTPS can be done with no impact on search rankings.

There's a Security tab in Chrome Dev Tools. It can help.

They want to start easing users into thinking that HTTP is bad.

Chrome will start saying "(i) Not secure" in the URL bar for pages that have passwords or credit card inputs on them.

Building a Service Worker

They built a service worker in real time.

He used async and await. I think he said it's only in Canary, though :-/

The service worker will look to see if it itself has been updated.

He's using VS Code.

A Service Worker can be spun down even if the tab is still open.

Wow, they keep making the Service Worker more and more complex. It seems like a giant house of cards.

There's an offline checkbox in Chrome Dev Tools in the Application tab, and there's also one at the top of Dev Tools. They're not in sync.

He's clear that his example is not production code.

Cache invalidation is hard.

That was some pretty impressive real-time coding.

Planning for Performance

@samccone

"Mobile web" is no longer a subset of the web. It is simply the web.

It's more than just responsive CSS.

Mobile has been bigger than desktop since 2014.

The average mobile web user is not on a $600+ phone. They're more likely to use the free phone you get when you sign up for a plan.

The average Android phone has 1G or less of RAM.

The average load time for mobile sites is 19 seconds.

HTTP/2 + link rel="preload"
<link href="bower_components/polymer/polymer.html" rel="preload">
He used this trick to move from 5 seconds to 3 seconds from first byte to first useful paint.

H2 Server Push:
  • Push assets from the server to the client before the client even requests them.
  • It's not cache aware.
  • It lacks resource prioritization.
  • But, H2 Push + Service Workers = awesomeness
He went from 5 seconds to 1.7 seconds.

Preload is good for moving the start download time of an asset closer to the initial request.

H2 Push is good for cutting out a full RTT if you have SW support.

Older phones are slow when parsing a ton of JavaScript. Even relatively recent phones suffer because of how much JavaScript there is to parse.

There is no simple way to see the parse cost of JavaScript.

There's some cool new thing in Chrome Canary: V8 internal metrics in the timeline.

Webpack Bundle Analyzer is very helpful to figure out what's in your bundle.

Fetch but don't parse:
<script src="..." type="inert">
Put it in the page, but don't eval it until you strip the comments out:
<script id="inert">
    /* (...) */
</script>
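
One hypothetical way to implement the strip-and-eval step above--a sketch of the technique, not the speaker's actual code:

```javascript
// Hypothetical sketch of the "fetch but don't parse" trick: the payload
// ships wrapped in a block comment, so the JS parser skips it, and the
// comment markers are stripped off just before evaluation.
function activateInertScript(scriptText) {
  const source = scriptText
    .replace(/^\s*\/\*/, '')   // strip the opening /*
    .replace(/\*\/\s*$/, '');  // strip the closing */
  return new Function(source); // parse only now, when actually needed
}
```
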
He talked about Webpack require.ensure and the aggressive splitting plugin.

Test on real devices on mobile networks.

Optimize network utilization: sw, preload, and push for fast first loads.

Parsing JavaScript has a cost: ship the smallest amount of JS possible.

Predictability for the Web

This was a product manager-like talk about making Chrome and the web in general more predictable.

The browser companies are working together more and more.

He talked about resolving bugs in Blink, the engine behind Chrome.

2/3rds of the top 1% of bugs have been fixed.

Using chrome://flags, you can turn on "Experimental Web Platform Features".

They fix regressions very quickly.

They do want to occasionally deprecate things.

Chrome shipped something that improves power usage substantially by not running requestAnimationFrame callbacks for iframes that aren't visible.

Filing or starring a bug is the quickest way to get your issue resolved.

developers.google.com/web/feedback

They built a Browser Bug Searcher to search for bugs across browsers.

They're trying to treat the web as a single platform.

Panel

A lot of Google apps aren't Progressive Web Apps.

One of the most difficult things to debug in web apps is memory leaks.

HTML imports will probably not survive the way they are, but eventually, there will be something better.

A few people complained about Chrome Web Apps being deprecated.

Friday, October 14, 2016

Books: Self Leadership and The One Minute Manager

I finished reading Self Leadership and The One Minute Manager. In summary, it was short, easy, and moderately useful.

Most of the book is written as a story about an advertising account executive who's having a hard time at work and feels like he's right about to lose his job. That made it interesting and very easy to read. There are a few nuggets of wisdom. However, I wouldn't put it on the same level as some of my other favorite books, like "Getting More: How You Can Negotiate to Succeed in Work and Life".

Nonetheless, at a mere 140 pages, it was worth reading, and I think it'll impact my thinking going forward. For instance, have you ever experienced being really excited about starting a new job, but then feeling like you were right about to quit (or on the verge of getting fired) once reality hit, and you realized it was going to be a lot harder than you originally thought? It talks a lot about coping with that.

Disclaimer: The book was given to me.

Monday, October 10, 2016

My Short Review of Visual Studio Code

I decided to try out Microsoft's Visual Studio Code. I think it's a useful open source project with a lot of potential, and I congratulate Microsoft on their contributions to the open source community. On the other hand, I think I'll stick with IntelliJ IDEA Ultimate Edition and Sublime Text 3 for the time being. I used it for a few days after watching some of the videos. What follows is not a full review, but rather a few thoughts based on my experience.

VS Code is usable. On the other hand, a few of the extensions that I picked were buggy. They either munged the source code in clearly broken ways, or they caused the editor to go into a weird, infinite loop where it kept trying to edit and then save the text. I think the situation will improve with time--Rome wasn't built in a day.

One thing I really missed was being able to search for multiple things at the same time. In IntelliJ, I often start a carefully crafted search as part of a refactoring effort. That search tab might be open for a day or more. However, I can start up as many additional search tabs as I need. NetBeans also had this feature. I couldn't figure out how to do it in VS Code.

In general, it seems like the search interface was designed more by designers than by hard core engineers. Looking at the image, imagine trying to do a regex-based search. You have to click on the tiny ".*" symbol that's printed using gray text on black. Then, the search results themselves are shown using an inadequate amount of horizontal space. It all feels very dark and cramped.

Emacs does something useful when you hit Control-k: it deletes to the end of the line. If you hit it again, it deletes the line ending as well, which joins the next line with the current line. If this were a feature in just Emacs, it wouldn't be very important. However, this feature works pretty much everywhere in OS X, so it's something I've learned to rely on. It doesn't work quite right in VS Code. Here's the bug.

Many editors (Sublime, Vim, etc.) can by default rewrap a paragraph of text, i.e. re-insert newlines so that the text has line lengths of consistent width--not just in how you see the text, but also in the file itself. This is a critical feature for those of us who like to edit lots of plain text files. This isn't built into VS Code. However, there's a plugin--no biggie. Some editors (Vim, Sublime) get bonus points for doing this really well, such as being able to rewrap a paragraph of text even if it has comment symbols (like '#') at the beginning of each line.
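The core of that rewrap feature is a simple greedy refill. Here's a minimal sketch in JavaScript of what such an editor command might do, including the bonus behavior of preserving a comment prefix like '#' (the function name and width default are my own, not from any editor's source):

```javascript
// Rewrap a block of text to a maximum width, preserving an optional
// comment prefix (e.g. '# ' or '// ') at the start of every line.
// A hypothetical sketch of what Vim's gq / Sublime's wrap command do.
function rewrap(text, width = 72) {
  const lines = text.split('\n');
  // Detect a comment prefix on the first line, if any.
  const match = lines[0].match(/^(\s*#\s*|\s*\/\/\s*)?/);
  const prefix = match[1] || '';
  // Strip the prefix and join everything into one stream of words.
  const words = lines
    .map(line => (line.startsWith(prefix) ? line.slice(prefix.length) : line))
    .join(' ')
    .split(/\s+/)
    .filter(Boolean);
  // Greedily refill lines up to the target width.
  const out = [];
  let current = prefix;
  for (const word of words) {
    const candidate = current === prefix ? prefix + word : current + ' ' + word;
    if (candidate.length > width && current !== prefix) {
      out.push(current);
      current = prefix + word;
    } else {
      current = candidate;
    }
  }
  if (current !== prefix) out.push(current);
  return out.join('\n');
}
```

For instance, `rewrap('# one two three four five', 12)` refills the text into lines of at most 12 characters, each still starting with `# `.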

One more thing I really missed from Sublime Text is that even if I'm in the middle of editing a file and have unsaved changes, if I close and restart the app, it puts things back to exactly the way they were before I closed it. In VS Code, I have to work to re-open the files I had open, etc. This is very inconvenient if you need to restart your editor because you installed an extension, or for any other reason.

The configuration system is pleasant. I liked the fact that it's based on text, and that there are global settings, user settings, and project settings. I can imagine committing settings files so that everyone on the same project shares project settings. Similarly, I liked the fact that a lot of commands are meant to be searched for using fuzzy search--the UI for this was nice.

The Git integration was nice, but perhaps inadequate. I very often have to do much more than git add, git commit, and git push. Using the command line, using Tower, and using IntelliJ's Git UI (which is kind of awkward compared to Tower, by the way), are all doable. I don't feel like I could use VS Code's Git integration without falling back to using the command line a lot.

My buddy said he preferred VS Code over IntelliJ because IntelliJ had too much "bloat". Whether or not a feature is bloat is certainly a matter of opinion. For instance, he would never rely on IntelliJ to work with Git. He'd only use the command line. I can use the command line or IntelliJ, but I really prefer using IntelliJ for dealing with messy rebases. IntelliJ's rebase flow is incredibly helpful. He'd also never use IntelliJ to refactor an expression to use a variable or rename a local variable (which isn't based on simple search and replace, but is actually based on understanding a programming language's scoping rules). Those are things I rely on IntelliJ to do very often. Hence, I think it's fair to say that IntelliJ is a little bit greedy when it comes to memory (it'll use everything you give it). On the other hand, it has tons of very advanced features that actually do help me quite a bit on a daily basis. There were a lot of features I missed when I used VS Code.

The last thing I'd say about VS Code (which I hinted at earlier) is that it made me feel very cramped and uncomfortable. I felt like it was difficult to see the text and I felt like I was typing with 10 thumbs. I don't know if it was because of the dark theme and my aging eyes (none of the themes felt exactly right), or if it was because of my inexperience with it. I don't remember feeling this way when I first tried Sublime Text. It was this uncomfortable feeling that pushed me back toward using IntelliJ and Sublime Text 3.

Nonetheless, I suspect it'll continue to get better. The plugins will stabilize. Missing features will be added. Soon, it'll be yet another perfectly good editor that some of my friends swear by. I remember going to a talk by Bram Moolenaar, the author of Vim. Someone asked why his Vi clone succeeded while all the other Vi clones didn't. He said it was because he kept making it better. I think that's good advice ;)

Sunday, October 02, 2016

Having Fun with Linux on Windows on a Mac

I thought I'd have a little fun. Here's a picture of Linux running on Windows on a Mac. I'm running Linux in two ways:

  • In the top left, I'm running bash on Ubuntu inside Docker for Windows.
  • In the bottom left, I'm not actually running Linux. I'm running "Bash on Ubuntu on Windows" using Microsoft's "Windows Subsystem for Linux".

I'm running it all on a Mac I borrowed from work because I don't actually own any Windows machines :-P


Monday, August 15, 2016

Ideas: Mining the Asteroid Belt

Disclaimer: I don't even pretend to know what the heck I'm talking about. Feel free to ignore this post.

I've been thinking lately about efficient ways to mine the asteroid belt. My guess is that there's a lot of useful raw materials out there, but getting them back to earth is kind of a challenge.

Now, in my thinking, I'm presupposing that we have a working space elevator. Nonetheless, it's still a challenge because the asteroid belt is so far from Earth's orbit. It would take a lot of time and energy to travel there and back in order to gather materials. Certainly, we'd need some robotic help.

However, the distance (and time involved) becomes less of an issue once you have a system in place. To use an analogy, selling whiskey that's been aged for 10 years is only difficult when you're waiting for those first 10 years to pass. After that, there's always another batch about ready to be sold.

One problem is that it would take a lot of energy to move large amounts of raw materials back toward earth. Inertia sucks when you're trying to move heavy things. Furthermore, you have to somehow transport that energy all the way out there. Sure, a space elevator might help you get things off the surface, but the asteroid belt is still a long way away.

The next problem is that, "For every reaction, there's an equal and opposite reaction." You can waste a lot of fuel trying to push the raw material toward Earth. However, it might be helpful to push some material toward Earth and an equal amount away from Earth.

Next up, consider that it would be useful to push the material toward a position near Earth's orbit so that it can be captured and brought down to the surface. Hopefully, we won't mess this part up and doom humankind to the same fate as the dinosaurs ;)

I was thinking of creating three bundles of raw materials at a time and then using a contraption between the three bundles to send the three bundles in different directions. By varying the angles and the relative weights of the bundles, you could slightly vary the speeds. The contraption might look like a triangle with a piston on each side. I'm not sure what is the best way of causing the pistons to move.

However, one good thing is that you'd be able to reuse the contraption over and over again. Only the raw materials would move. The contraption would stay in place to be reused.

It would probably make sense for bundles headed toward earth to have some sort of propulsion and positioning mechanism. The contraption that I mentioned above is only there for initial launch.
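The physics underlying the three-bundle idea is just conservation of momentum: if the departing bundles' momenta sum to zero, the contraption itself stays put. A toy JavaScript sketch (all masses, angles, and speeds are made-up illustrative numbers):

```javascript
// Toy sketch: choose bundle velocities so the total momentum of the
// three departing bundles is zero, leaving the launcher contraption
// (ideally) motionless. All numbers are illustrative.
function totalMomentum(bundles) {
  // Each bundle: { mass, vx, vy } in arbitrary units.
  return bundles.reduce(
    (p, b) => ({ x: p.x + b.mass * b.vx, y: p.y + b.mass * b.vy }),
    { x: 0, y: 0 }
  );
}

// One heavy bundle toward Earth, two lighter bundles angled away,
// with speeds chosen so the momenta cancel.
const toward = { mass: 2, vx: -1, vy: 0 };  // headed Earthward
const away1  = { mass: 1, vx: 1, vy: 1 };   // headed outward
const away2  = { mass: 1, vx: 1, vy: -1 };  // headed outward, mirrored
const p = totalMomentum([toward, away1, away2]);
// p.x = 2*(-1) + 1*1 + 1*1 = 0; p.y = 0 + 1 - 1 = 0
```

Varying the masses and angles, as described above, lets you trade off how much material goes Earthward versus how much gets thrown away as "reaction mass."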

Thursday, August 11, 2016

JavaScript: ForwardJS

Here are my notes from ForwardJS:

My favorite talks were:

Keynote: On how your brain is conspiring against you making good software


Jenna Zeigen @zeigenvector.

jenna.is/at-forwardjs.pdf

I particularly enjoyed thinking about how this talk relates to politics ;)

She studied cognitive science. She wrote a thesis on puns.

"Humans are predictably irrational." -- Dan Ariely

"Severe and systematic errors."

Humans aren't great logical thinkers.

People will endorse a bad argument if it leads to something they believe to be true. This is known as the belief bias.

"Debugging is twice as hard as writing a program in the first place" -- Brian Kernighan

We tend to interpret and favor information in a way that confirms our pre-existing beliefs.

We even distrust evidence that goes against our prior beliefs. It's even harder for emotionally charged issues.

We have a tendency to be rigid in how we approach a problem.

We sometimes block problem solutions based on past experiences.

We often have no idea we're going to solve a problem, even thirty seconds before we crack it.

Breaks are more important than you think.

Creativity is just about having all the right ingredients.

Again, we tend to think about problems in fixed ways. That makes it harder to understand other people's code.

We prefer things that we have made or assembled ourselves.

We're bad at making predictions about how much time it will take us to do something.

We tend to be pessimistic when predicting how long it will take other people to do something, and optimistic when predicting how long it will take us.

We think that bad things are more likely to happen to other people than to us.

We're actually pretty good at filtering out unwanted stimuli, but we're not totally oblivious to it. Selective attention requires both ignoring and paying attention.

We're very limited in our mental processing. For instance, we can't do a good job writing code while doing a good job listening in on someone else's conversation.

We're sometimes at the mercy of our brain's limited processing power.

Software is about people.

Relatively unskilled people think they are better at tasks than they actually are.

We tend to overestimate our own skills and abilities. 90% of people think they're above average at teaching.

Skilled people underestimate their abilities and think tasks that are easy for them are easy for others.

She mentioned Imposter Syndrome and said that the Wikipedia page was pretty good.

We favor members of our own in-group.

We prefer the status quo.

We weigh potential losses caused by switching options to be greater than the potential gains available if we do switch options.

We're liable to uphold the status quo, even when it hurts other people.

People have a tendency to attribute situations to other people's character rather than to external factors.

People have a tendency to believe that attributes of a group member apply to the group as a whole.

We rely on examples that come to mind when evaluating something.

We assume things in a group will resemble the prototype for that group, and vice versa.

In some cases, we ignore probabilities in favor of focusing on details.

Diversity is important for a team. The more diverse the team, the more diverse their background, the more creative they can be.

Real-time Application Panel


I didn't think this talk was very interesting.

There were 3 people on a panel: Guillermo Rauch (he wrote socket.io), Aysegul Yonet (she works at Autodesk), and Daniel Miller (a developer relations guy from RethinkDB).

The RethinkDB guy said that RethinkDB doesn't "have any limitations or make any compromises" which seems quite impossible. Certainly, it can't violate the CAP theorem ;)

From their perspective, real-time is a lot about getting updates from your database (change feeds) as the changes come in.

Some databases can give you a change feed.

The XHR object has an upload event so you can track uploads.
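In a browser, the hookup looks roughly like this; the endpoint and callback are hypothetical, and only the percentage arithmetic is portable:

```javascript
// Portable part: turn a progress event's numbers into a percentage.
function percentDone(loaded, total) {
  return total > 0 ? Math.round((loaded / total) * 100) : 0;
}

// Browser-only sketch (hypothetical URL and callback): XHR exposes a
// separate `upload` object whose progress events track bytes sent.
function trackUpload(file, onUpdate) {
  const xhr = new XMLHttpRequest();
  xhr.upload.addEventListener('progress', e => {
    if (e.lengthComputable) onUpdate(percentDone(e.loaded, e.total));
  });
  xhr.open('POST', '/upload'); // hypothetical endpoint
  xhr.send(file);
}
```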

Real-time is about minimizing latency.

REST doesn't fit very well with push architectures / WebSockets.

Sometimes you need to be notified that a long running job has completed.

WebSocket tooling can be insufficient.

It's unclear what's going to happen with HTTP/2 vs. WebSockets.

The fact that HTTP/2 provides multiplexing is perhaps an advantage over WebSockets especially when you have wildly different parts of the client app talking to the server.

Server push is an exciting part of HTTP/2.

2 members of the panel were excited about observables and RxJS.

Fetch, the new way of making requests, doesn't have a way of aborting.

The RethinkDB guy gave a shoutout to MobX, but he didn't seem to know much about it.

(By the way, looking around, I see a sea of Macs. One guy said there were only 3 Windows machines in the workshop he was in.)

There was a little bit of talk about REST perhaps not being the best approach for a lot of situations.

Fireside Chat: Serverless Application Architecture


This was another panel, but I thought it was pretty good. Alex Salazar (from Stormpath) and Ben Sigelman (who wrote Dapper, the distributed tracing system, at Google) were the two people on the panel.

Stormpath has adopted this sort of architecture. The company itself provides authentication as a service.

"Serverless architecture" is a serious buzzward. It started 6 months ago. It's trending a lot right now.

Alex says it can refer to two different things:
  • Using services like Twilio, etc.: He says this is more "backend as a service".
  • Functions as a service: He says this is more accurate. He mentioned AWS Lambda. The idea is that you can write some business logic specific to you, and you don't have to manage anything that even looks remotely like a server.
When people say "servers", they're kind of referring to VMs. With serverless, you don't even need to think about VMs.
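In the functions-as-a-service model, your unit of deployment is just a handler function. A minimal sketch of what that looks like in a Node runtime (the event shape and greeting logic are hypothetical; in real AWS Lambda you'd assign the function to exports.handler):

```javascript
// Business logic is a plain function; the platform only sees the handler.
// Hypothetical event shape: { name: '...' }
function greet(event) {
  return { statusCode: 200, body: `Hello, ${event.name || 'world'}!` };
}

// Classic Node Lambda-style signature: (event, context, callback).
const handler = (event, context, callback) => {
  callback(null, greet(event));
};
```

The appeal is exactly what Alex describes: you deploy `greet` and nothing else, with no VM, container, or server process of your own to manage.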

There's a lot of cross-over with microservices.

Although Heroku got pretty far, he says serverless goes beyond what Heroku provides.

Heroku automates server work (i.e. a worker). You're still writing a Python, JVM, or Node application.

Google App Engine, Heroku, etc. were all trying to be platforms as a service. It's still a backend, monolithic application on their architecture.

Serverless in the lambda model is not a server application. It's a set of finite functions that run in a stateless environment. He says it's a superior model depending on your use case.

Stormpath started with a Java monolith. They moved to asynchronous microservices. They spent a lot of time looking at AWS Lambda. They wanted to update and deploy modular pieces of code without versioning the whole system. For instance, they wanted to be able to update just one function.

Ben is terrified of CI (Continuous Integration) and CD (Continuous Delivery). Some PRs (pull requests) may not have considered all the weird interactions that actually happen. He thinks the pendulum might swing back the other way so that there's one deploy per day.

Monolithic apps can be easier since the different parts are all versioned together. It's scarier with microservices because there might be more version mismatches.

Stormpath tried to move to the Lambda model, and it didn't work. Latency is a real problem. Stateless pieces of code take a while to spin up and spin down. Functions that aren't used very often take a while to spin up--especially with the JVM. They went from serverless to even more servers. It resulted in more infrastructure, not less.

NoSQL databases are beneficial, but there's also too much hype. They kind of hid their drawbacks, which was bad. Case in point: see the guy from RethinkDB above :-P

Ben said that Lambda is the absolute extreme of where this movement will go. Consider GRPC, Finagle, Thrift; can we have hosted services for these things with a little caching?

Anytime you have latency issues, you apply caching.

The difference between memcached vs. in-memory caching (i.e. in the current process) is huge. So stateless has a huge drawback since you can't do in-memory caching. If you have to cache anything between requests to minimize latency, Lambda isn't the right thing.
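The in-process caching that statelessness rules out is trivially cheap when your process survives across requests. A minimal memoizing sketch (the lookup function is a made-up stand-in for something expensive):

```javascript
// Minimal in-process cache. It only works because the process itself
// survives across requests -- exactly what a stateless FaaS model
// doesn't guarantee.
function cached(fn) {
  const cache = new Map();
  return key => {
    if (!cache.has(key)) cache.set(key, fn(key));
    return cache.get(key);
  };
}

let lookups = 0; // counts how often we do the "expensive" work
const expensiveLookup = key => { lookups++; return key.toUpperCase(); };
const fastLookup = cached(expensiveLookup);
```

Calling `fastLookup('a')` twice performs the expensive work only once; a memcached round trip, by contrast, costs a network hop every time.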

If performance is important for you, Lambda (serverless) isn't ready yet. That's not true of microservices, though.

If you need to create a chain of these functions as a service, then maybe you shouldn't be using serverless. The latency compounds. One time, he was doing 4 chains, and he was seeing multi-second latency.

Serverless is really good for certain applications.

The history of computing is all about more and more abstraction.

From microservices to serverless is a natural transition.

What will the learning curve be like for learning how to use this model?

Stormpath had to invent a lot of stuff to adopt the model because a lot didn't exist. This included testing, messaging, authentication, etc.

However, he likes async, promises, etc. Scala and Node both have this right. However, Ben thinks Scala is "ridiculous". If you're all asynchronous anyway, it's easier to move functions off the server into a separate service.

Promises need to have more things like deadlines, etc. He thinks promises are a good fit.
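A deadline can be bolted onto any promise with Promise.race; a sketch (the helper name and error message are my own):

```javascript
// Attach a deadline to any promise: whichever settles first wins the race.
function withDeadline(promise, ms) {
  const timeout = new Promise((_, reject) =>
    setTimeout(() => reject(new Error(`deadline of ${ms}ms exceeded`)), ms)
  );
  return Promise.race([promise, timeout]);
}
```

For example, `withDeadline(fetch(url), 500)` would reject if the request takes longer than half a second, regardless of what the underlying operation does.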

It's an anti-pattern to wait on anything that's not in process.

A lot of stuff from Scala (async, reactive, etc.) is making its way back into Java.

JavaScript developers are already in the async paradigm, which is why they adapt to this stuff more easily.

The average Java developer hasn't wrapped his head around async yet.

Stormpath is hoping to provide authentication as a service for services, not just authentication for users.

Ben thinks security needs to be applied more at the app layer. Auth, security, provisioning, monitoring, etc. should all start happening at the app layer.

There is indeed business risk depending on a bunch of external services like Lambda, Stormpath, etc. There's no silver bullet. Can you trust the vendor? What's the uptime and SLA? Now, with SAAS, you're not just depending on them for code, but also for ops.

Having an SLA for high percentile latency (99%) is important. Uptime doesn't mean anything. However, historical uptime is still important.

Ben says that the main advantage of the monolithic model is that all deps get pushed out every week along with any updates to the code.

Your team needs processes and automation. Test before things go out. Integration testing is how you keep things from blowing up.

The Web meets the Virtual and Holographic Worlds


This was a very fun talk by Maximiliano Firtman, @firt, author of High Performance Mobile Web! I can't find the video, but if you can find it, it's worth watching. He's written and translated a bunch of books. He's very engaging.

He started by remembering the web from 22 years ago.

He says that the web has been trapped in a 2D rectangle.

New worlds:
  • Physical world
  • Virtual reality
  • Mixed reality (holographic world)
Immersion modes:
  • Seated mode
  • Room space mode
You can be a part of all of this, even as a web developer!

We're at a point now like where mobile web was 10 years ago.

VRML is from 20 years ago! We're not talking about it anymore. Technology has changed.

Experiences:
  • Flat web content (this stuff still works on these devices)
  • 3D world (stereo)
  • 360 still or animated content
Distribution:
  • Apps
  • Websites
  • PWAs (progressive web apps)
Human interfaces:
  • Gaze (move your head to select things)
  • Voice
  • Touch controls
  • Clickers (remote controls)
  • Controllers
  • Your body (mostly hand gestures)
  • Mouse and keyboard (good for mixed reality)
Hardware:
  • Oculus Rift (Windows)
  • HTC Vive
  • Cardboard (Android, iOS)
  • Oculus Gear VR (by Samsung; has 1 million users)
  • LG 360 VR (LG G5)
  • Hololens (this is the most different; uses Windows 10; not connected to a computer; mixed reality)
New worlds:
  • Virtual / mixed reality
  • Physical world
  • Immersion modes
  • Human interface
  • Hardware
User experience:
  • Safari or Chrome (iOS or Android) with Cardboard:
    Use a gyroscope to recognize your position.
  • Samsung Internet (browser) on Gear VR:
    It's a 3D environment, but the web is still a box in that environment. It has different modes for watching videos.
  • Microsoft Edge on Hololens:
    This was a really neat demo! You can use hand gestures. He showed a hologram floating in your room that you can see using the headset. You can walk into the "window". It's like it's floating in space. It recognizes the room around you. You can put a hologram in a certain place. For instance, you can have a window running Netflix that lives in your bathroom.
APIs and specs:
  • Web VR:
    • You can get data about the device's current capabilities.
    • You can poll HMD pose.
    • You can get room-scale data (data about the size of the room).
    • You can request a VR mode which is like the fullscreen API.
    • There are two different versions of the API. He recommended API version 1.0 with the changes they've made to it. It's optimized for WebGL content.
    • You can use this API to get data from the devices. This API is not for drawing to the screen.
    • It's supported by Chrome, Firefox, and Samsung Internet browser.
    • There's a polyfill to work on other browsers.
  • Web Bluetooth:
    • This allows you to talk to any type of Bluetooth device.
    • You can scan for BLE devices.
    • You can scan for services available.
    • You can connect to the services.
    • Note, Bluetooth is complex.
    • This is only in Chrome, and it's hidden under a flag.
  • Other APIs
    • The ambient light API lets you get info about the current lighting conditions.
    • There's a gamepad API. Chrome is the furthest ahead on this API.
    • There's speech synthesis and recognition. This allows you to interact with the user by voice. Note that synthesis is more broadly supported than recognition.
    • Web audio lets you generate dynamic audio, including 3D audio. You can do ultrasound communication with devices. He recommends a library called Omnitone for doing spatial audio on the web. This API is available all over the place.
What we can do today:
  • You can show flat content in these environments:
    • I.e., you can show the kind of content we already have in these environments.
    • It's like a 2D box in a 3D environment.
  • You can show 360 content:
    • You can get the device's orientation.
    • You can touch some of these devices, or move around.
    • There's something called VRView from Google.
    • The Vizor Editor is pretty useful. You can use it to create a 360 degree environment.
    • You can capture 3D content using cameras. However, most browsers don't support live streaming 360 content today. You have to use YouTube if you want that.
    • You can use the Cardboard Camera app from Google. Then use the cardboard-camera-converter.
  • VR 3D:
    • This is mostly based on WebGL.
    • You can use ThreeJS with VR support.
  • Holographic experiences:
    • This is only native at this point. You can't do it from the Web yet.
    • AltSpace VR is like a social network using VR. It's a native app. It has an SDK. You can use ThreeJS.
  • More will come in the future...
These are things he expects we'll see in the future:
  • New Devices:
    • Daydream from Google looks pretty cool.
  • Think about pixels:
    • What do they mean now in a VR environment?
    • How do we think about bitmaps? Can we use responsive images based on distance? See MIP Maps.
      • There's a new format called FLIF. It has multiple resolutions in the same file.
  • Future CSS:
    • We might see media queries based on how close the user is.
  • New challenges:
    • Consider infinite viewports.
    • If I increase the size of a virtual window, what should happen? Should it show more content, or should it show the content larger?
    • Augmented reality physical web:
      • Laforge:
        • Like Google Glass but not from Google.
    • We need more from Edge and Windows:
      • Holographic
    • New challenges:
      • Responsive VR.

React Native: Learn from my mistakes


This was a talk from Joe Fender. He's from a company named Lullabot. He lives in London. Here are the slides.

"We create digital experiences for the world's best brands."

(React is very popular at the conference.)

(There are a lot of women at the conference. It's not all men.)

Not as many people use React Native.

Being a web person is enough to build mobile apps.

What's the fuss? Developing native mobile apps is a pain. It's difficult to code the same thing twice (Java and Swift), especially if you're not very familiar with those languages.

PhoneGap/Titanium just didn't feel right. Furthermore, there are some memory and performance limitations. You're stuck with a non-native UI. It's a WebView embedded in a mobile app. It's also a bit weak on community support.

React Native is really simple. It allows you to make truly native apps. React itself is nice. He thought coding in JavaScript was nice. His team was more successful than when they were trying to write the app in Java and Swift.

10 things he wishes he knew. These things weren't obvious to him when he started:
  1. You need to know some things ahead of time such as JS, ES6, React (including the component lifecycle), and the differences between various devices (i.e. their differing UIs and features).
  2. You need to learn about the various React Native components. React Native has a whole bunch of components--about 40. You can get a lot done with just the core components. However, there are also contributed packages--about 200 of them. Sometimes you'll need to write your own components.
  3. You need to think a lot about navigation. It's a really important part of the app. How will users get between screens? How will they get back? How will the transitions look? NavigatorIOS is good, but it's only for iOS. Navigator is cross-platform. NavigatorExperimental is very bleeding edge.
  4. You need to think about how data flows through your app. Consider how your app will scale. Consider using Flux or something similar. He used Redux and liked it.
  5. Think about how you will structure your code if you need to support both iOS and Android. Will you use a single code base? Will there be platform specific code? Will you write expressions like { iOS ? 'nice' : 'ok' } to match the different platform conventions? Some people have very high expectations and expect your app to look very native. You can get started with npm install -g react-native-cli followed by react-native init MyApp. You can have index.android.js and index.ios.js just require index.js. Note, many components work on both platforms, and usually the component will take care of platform-specific things.
  6. Flexbox is a really great way to lay out components on the screen. It's responsive. That's really important in mobile. It's pretty much the way to go with React Native. There's a bit of a learning curve, though. It's just a different way of thinking about things.
  7. You still need to test on real devices. Simulators are insufficient. It took them a while to realize this:
    1. The performance is very different. Consider animations and load times. Your laptop is too fast.
    2. The networking is very different. With a mobile device, you're not always on a nice WiFi connection. What if you're disconnected or on a 3G connection? WebSockets behave differently when you have a spotty network.
    3. Push notifications are a pain. You can't test this on the iOS simulator. For Android you can.
    4. Real devices don't have separate keyboards. This caused a big problem for them. Using an on screen keyboard takes up half the screen, and some of their layouts were incompatible with that.
    5. Think about landscape mode. Think about different layouts.
  8. Debugging is important. The developer menu in React Native is very helpful. In general, React Native has a really nice developer experience. On iOS, shake the device. There's a Debug JS Remotely feature so that you can debug via Chrome's DevTools. This supports live reload, and it only takes 100ms to reload new code. console.log() works too. You can pause on caught exceptions and add debugger; to your code like normal. Unfortunately, the React Dev Tools Chrome extension doesn't work with React Native.
  9. Be aware of the React Native release cycle. It's every 2 weeks. Sometimes, it's weekly. It's very fast, and very bleeding edge. His company is 10 versions behind. In general, the release notes are quite good. He recommends that you don't try too hard to stay on the most recent version.
  10. Remember that you'll need to release your app since it's a mobile app. Submitting your app to the app store is not easy. You need certificates, icon sets, a privacy policy, a support page, etc. It took them 2 weeks. Furthermore, it takes Apple a while to approve it. You also need to think about things like automated testing. React Native has a bunch of useful stuff to help. You can also use ESLint, which is nice.
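The platform-conditional expression from item 5 is just ordinary JavaScript; here's a sketch of the pattern (in real React Native you'd branch on the Platform module's Platform.OS or use Platform.select rather than passing a string, and the 'nice'/'ok' labels are the talk's made-up examples):

```javascript
// Sketch of per-platform copy, mirroring the { iOS ? 'nice' : 'ok' }
// expression from the talk. In React Native proper you'd check
// Platform.OS instead of threading a platform string through.
function buttonLabel(platform) {
  return platform === 'ios' ? 'nice' : 'ok';
}
```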
React Native has Android, but it came a bit late.

His company didn't have any problems with missing APIs.

They used Parse which is an external DB as a service. Unfortunately, Parse is going away.

He thinks Firebase has good React Native support.

If you want to have really native-looking components, it's going to be hard to just use one codebase with flexbox.

Dealing with WebSockets was a pain.

Push notifications are pretty different between platforms.

Building Widget Platform with Isomorphic React and Webpack


Unfortunately, I didn't think this talk was very good or very useful for most people.

It was given by a guy named Roy Yu.

He was considering SEO value and discussing his architectural decisions.

He was creating isomorphic widgets that were shared by PHP, Java, and Python.

React isn't so simple once you hit the advanced stuff (e.g. higher-order components vs. dumb components).

I knew the keywords he was using, but I didn't understand what he was trying to convey to me.

He says that Webpack is more powerful than Grunt and Gulp.

He introduced React and Webpack a little.

You can package and deliver widgets to third parties using a range of mechanisms:
  • S3, CDN
  • Web component / open component
  • Public / private registry
  • Docker container as a resource / API proxy
You can have an API that other servers hit to fetch HTML.

I left early.

Generating GIF Art with JavaScript


This was a fun talk by Jordan Gray, @staRpauSe.

Here are the slides. They contain lots of fun GIF art.

He pronounced GIF with a hard g.

He works at Organic and Codame (in his spare time). Codame unites art and tech.

"Creativity is the basis for life."

He releases everything he does under Creative Commons. He likes the remix culture.

His stuff has showed up in major magazines, films, etc.

GIFs have stood the test of time. The format is 30 years old. It's completely free of patents.

They're inherently social.

Creativity loves constraints, and GIFs have interesting constraints. They only permit so many colors, frames, and dimensions.

Constraint considerations:
  • Tumblr is the holy grail to target. There's no censorship.
  • 2 MB 540px wide (or less) is a good target.
  • He talked about VJing (like DJing, but with artwork). He talked about GIF Slap. You must target 5 MB or less.
  • He talked about PixiVisor. It supports any height as long as it's 0.5625 times the width (e.g. 64x36). You can embed the GIF in audio.
  • You can target CAT Clutch which is for LED displays. They are 32x16 pixels in size.
  • You can use GIFPop Prints to print the GIF on a "lenticular" piece of paper. You're limited to 10 frames, but you can have up to 1500px. This was really neat.
How to make GIFs:
  • Use gif.js.
  • In general, it's really easy.
  • Define a GIF. Add frames. Render it.
  • You can use three.js to do 3D manipulation, etc.
  • The best resource he offered is bit.ly/eric3demos.
  • He also talked about processingjs.org.
  • Anything else that renders to canvas is usable as well. See javascripting.com/animation.
  • github.com/dataarts/dat.gui is incredibly helpful. It's "A lightweight graphical user interface for changing variables in JavaScript."
  • He mentioned jzz.js, an asynchronous MIDI library. It lets you control a web page with an actual MIDI device.
github.com/CODAME/modbod3d-gifgen is a good example. They used a Kinect and a body-sized turntable to help create the GIFs.

gif.js generates the gif, and then you can download it from the web page.
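The "define a GIF, add frames, render it" flow might look roughly like this. It's a browser-only sketch that assumes gif.js is loaded on the page; `drawFrame` is a hypothetical callback that paints one frame onto the canvas:

```javascript
// Sketch of the gif.js flow: define a GIF, add frames, render.
function makeGif(canvas, drawFrame) {
  const gif = new GIF({ workers: 2, quality: 10 }); // GIF is provided by gif.js

  for (let i = 0; i < 10; i++) {
    drawFrame(canvas, i);                        // caller paints frame i
    gif.addFrame(canvas, { copy: true, delay: 100 });
  }

  gif.on('finished', blob => {
    // the finished GIF arrives as a Blob you can download from the page
    window.open(URL.createObjectURL(blob));
  });

  gif.render();
}
```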

(He's using Sublime Text.)

His next demo involved Technopticon.

Next, he talked about the theory of animation. "The Illusion of Life" is really good. Some Disney animators wrote a book about this stuff. There are some YouTube videos about it which looked really good. He talked about "the12principles".

He talked about keyframes vs. straight ahead. It's mostly about straight ahead when generating GIFs.

He talked about GreenSock.

See vimeo.com/80018593.

He mentioned Movecraft.

React Application Panel


There were 5 people on the panel.

There are some people at Twitter using React.

The main reason people are initially turned off by React is JSX. However, at Facebook, they were using something like JSX for PHP, so it was a natural fit for them.

Reddit started using React.

Here's how one team split up their code:
  • Dumb components
  • Containers
  • Modules
  • Screens (route handlers with React Router)
  • An API layer
Try to reuse data layers and view layers.

Lots and lots of people use Redux.

"It's the web. We'll probably rewrite it in 3 weeks...in Angular 3."

The Twitter guy does some work to try to keep the connection between React and Redux as thin (small) as possible. He doesn't want them overly coupled.

One person was from Adroll.

One person was from Reddit.

Lots of people started adding React to their existing Backbone apps.

One person said try to use stateless components when possible. Push state as high up the hierarchy as possible.

The Facebook guy didn't like the class syntax. The function syntax makes it clearer that you're just mapping props to render output.
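To illustrate that "props to render output" point, here's a stripped-down sketch using plain functions and strings instead of JSX (the component names are made up):

```javascript
// A function component is just a mapping from props to render output.
const Greeting = ({ name }) => `<h1>Hello, ${name}!</h1>`;

// State is pushed as high up as possible; stateless children only get props.
const Page = ({ user }) => `<div>${Greeting({ name: user })}</div>`;

console.log(Page({ user: 'Ada' })); // -> <div><h1>Hello, Ada!</h1></div>
```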

The Facebook guy says that people are a little too scared to use component state, but it's there for a reason.

Time traveling debugging is "sick AF (as f*ck)".

One guy mentioned MobX, but I didn't hear many people mention it at the conference.

One person said to introduce Redux only when it's time to add complex state management. One person said you're going to need it at some point, so you might as well add it to begin with.

The Facebook guy said to start with the simplest thing (component state), and then when you're familiar with the pain points that Redux addresses, you can start using Redux. You probably don't need Redux on day one.

The Reddit guy said he would add Redux later. Introduce data stores once it becomes a necessity, such as when multiple apps need to access the same data.

The Twitter guy said that it's helpful to keep things separate to make them more testable and adaptable to change.

A lot of people had different ideas of how to do HTTP async with React.

2 people really liked Axios.

Promises are great, but they can't be cancelled, for instance, when the user moves to a different page. Cancellation might be added to the language. It's a significant weakness.
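One common workaround (a sketch of the pattern, not a standard API) is to wrap a promise so the consumer can at least ignore a result that arrives after it's no longer wanted:

```javascript
// A promise can't truly be cancelled once created, but a wrapper can
// flag cancellation so late results are rejected instead of delivered.
function makeCancellable(promise) {
  let cancelled = false;
  const wrapped = new Promise((resolve, reject) => {
    promise.then(
      value => (cancelled ? reject({ isCancelled: true }) : resolve(value)),
      error => (cancelled ? reject({ isCancelled: true }) : reject(error))
    );
  });
  return { promise: wrapped, cancel: () => { cancelled = true; } };
}

// Usage sketch (e.g. in a component that unmounts on navigation):
// const { promise, cancel } = makeCancellable(fetch('/api/data'));
// cancel(); // call when leaving the page
```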

The Facebook guy said they use promises on the outside, but something more complicated inside. He gave a plug for await. It has reached stage 4, which means it'll make it to all browsers soon.

One guy plugged RxJS. You can unsubscribe from an observable. Observables are more advanced than promises, and they provide more granular control.
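The unsubscribe point can be shown with a tiny hand-rolled observable (a sketch of the idea, not the actual RxJS implementation):

```javascript
// A minimal observable: subscribing starts the work and returns an
// unsubscribe function that stops it entirely -- something promises lack.
function interval(ms) {
  return {
    subscribe(observer) {
      let count = 0;
      const id = setInterval(() => observer.next(count++), ms);
      return () => clearInterval(id); // unsubscribe
    },
  };
}

// Usage sketch:
// const unsubscribe = interval(1000).subscribe({ next: n => console.log(n) });
// unsubscribe(); // e.g. when the component unmounts
```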

Someone asked what people's favorite side-effect library was for Redux. Most people didn't have an opinion. The Twitter guy uses Redux Thunk.
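The thunk idea is tiny; the middleware amounts to roughly this (a sketch of the concept, not the library's exact source):

```javascript
// If the "action" is a function, call it with dispatch so it can do
// async work; otherwise pass the plain action object along.
const thunk = ({ dispatch, getState }) => next => action =>
  typeof action === 'function' ? action(dispatch, getState) : next(action);

// A thunk action creator returns a function instead of a plain action.
// The action types here are made up for illustration.
const loadUser = id => dispatch => {
  dispatch({ type: 'LOAD_USER_START', id });
  // ...fetch the user, then dispatch({ type: 'LOAD_USER_DONE', ... })
};
```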

They've been rewriting Twitter from scratch in the last year. A lot of abstractions that were there got in the way. Too many layers of abstraction sucks.

Something like Saga (?) is overkill.

Make the code more approachable.

No abstraction is better than the wrong abstraction. It's ok to duplicate a little code if you don't know the right way to abstract the code to make it DRY. It sucks to have some abstraction that handles 3 use cases, but not the 4th. Too DRY sometimes makes things horrible. Sometimes you need more flexibility. One guy used the acronym WET (write everything twice) and suggested that it was a good approach to avoiding bad abstractions.

Create React App is nice. It helps people get started quicker. One person said that it's awesome.

React is not very prescriptive about which build system or packager you use, although, those things are useful. Most people should be using Webpack or something like that.

One person is using CSS modules. She said "it's fine," but admitted that she hates it.

There are so many ways to attack the CSS problem.

The Twitter guys are using Webpack.

React is quite a hackable platform. That's really useful. The Twitter guy uses CSS modules. "It's been ok." Some things are really not fun to debug. It's a black box. CSS really limits your ability to share components across projects in open source because different people use different CSS libraries.

"CSS is like riding a bike...but the bike is on fire...and you're in hell."

It's hard to know what CSS is being applied. If you're looking at CSS, it's hard to know if the CSS is even being used. If you apply CSS in the wrong order, it changes what things look like.

Some people think that styles in JavaScript seems interesting.

The CSS thing isn't really solved.

The Netflix guy says, "Write CSS that can be thrown away." They use LESS. They do some static analysis.

Aphrodite is a CSS library.

The Twitter guy is more excited by the idea of moving CSS into JavaScript since it enables a bunch of new things. Move away from strings to something introspectable. It'll be more powerful and predictable, but it may have performance issues.

One person has a team with a woman who does amazing UX design and amazing CSS.

One woman at Netflix said that they have one CSS file per component. Just separating files is not enough. Putting CSS into the code modules is an interesting movement. The React guy is afraid of it causing performance regressions.

Just like we've moved onclick handlers back into React, we'll probably move the styles back into the DOM code.

Code Trade-offs: Benefits and Drawbacks of Each Decision


This was another 5 person panel. It was ok, but not great. I didn't get all the names, but here are the ones I remember:
  • Ben Lesh from Netflix. I think he wrote RxJS 5.
  • Richard Feldman, who wrote Seamless Immutable. He's an Elm guy.
  • Amy Lee from Salesforce.
  • Rachel Myers from Obsolutely, GitHub, etc.
  • Brian Lonsdorf
Abstractions come at the cost of performance.

RxJS is really an abstraction for async. But, he says, start off with just a callback. Maybe move to a promise. Move to Rx when things get more complex.

John Carmack had some nice comments on when to inline vs. abstract. Performance is a consideration. Readability is a consideration. One guy said inline by default, but then pull it out when it makes sense.

One React user said YAGNI (you ain't gonna need it).

You can add the greatest abstraction ever, but if it's not readable, understandable, and greppable, it's not going to help. Abstractions can get out of control pretty quickly.

The Ruby world has no constraints. They look at constraints as if they're a horrible thing. There are 8 ways to add things to an array. In Python, there's one way. The speaker said that that's much nicer.

In Elm, everything is immutable, and everything is a constant. NoRedInk uses Elm. They have 35k lines of code. They don't ever get runtime exceptions. Elm makes refactoring very easy. It's one of that panelist's favorite programming languages.

RxJS started using TypeScript. Dealing with users using it from JavaScript is kind of painful.

With MVC, you think you can replace each of the pieces separately, but in practice, this doesn't really happen. All three pieces are very wed to each other. It's the "myth of modularity."

Simple Made Easy is a great talk.

Drawing the lines of modularity in the wrong places causes a lot of pain. Doing modularity just right is really hard.

Concise naming patterns are really important when you're doing modularity.

Generic code and interfaces are "dangerous". "No abstraction is better than the wrong abstraction."

Imagine you have 3 things, and you notice a common interface among them. Now suppose you create an interface, but then you suddenly get a 4th thing that doesn't match the interface. Now, you're in bad shape.

When you create a public API, you have to have an interface, but creating an interface for your own code when you don't necessarily need one might cause more problems later.

When speaking of adaptation in JavaScript, someone said "It's not a duck, so you punch it until it's a duck."

If you can come up with a good interface that can fit a lot of things, like Observable, it's really useful. It gives wildly different things some sameness.

Then, you can write code that people can understand.

Lisp has a very pleasant interface: a paren, a command, some stuff, a paren.

Rx is arguably a DSL.

The key is finding an interface that's really simple that can solve a wide variety of needs. It's tough. You sacrifice some specificity.

On the subject of principles:

Someone said, "I used to have principles. I no longer do." For instance, she used to always insist on unit tests. Then, she started a company and figured integration tests were good enough.

One guy wrote a lib with a little DI. He achieved 100% code coverage. Then, he ended up breaking it anyway. Unit tests are not enough. He is a little integration test heavy. Integration test first, unit test second. He was told the opposite, but changed his mind.

Another person said: unit tests have a lot of value, but you also need integration tests. You also need to test with real users.

One guy's principle was: put UX first. The end user is more important than clean code.

Principle: If I feel 100% sure I'm right, that worries me. Question your own decisions.

Principle: never write the same thing twice unless there's some really good reason. (Presumably, they've written "if" more than once :-P)