
JavaScript: Mastering Chrome Developer Tools

I went to an all day tutorial on Mastering Chrome Developer Tools. It was my favorite part of the whole conference. Here are my notes:

Jon Kuperman @jkup gave the talk.

Here are the slides. However, there isn't much in them. Watching him use the DevTools was the most important part of the tutorial. I did my best to take notes, but of course, it's difficult to translate what I saw into words.

He created a repo with some content and some exercises. Doing the exercises was fun.

Chrome moves really fast, and they move things around all the time.

Everything in this talk is subject to change. For instance, the talk used to talk about the resources panel, but that's now gone. Now there's an application panel.

In the beginning, there was view source. Then we had alert; however, you can't use alert to show an object; you have to show a string. Then, there was Live DOM Viewer. Then, there was Firebug. It kind of set the standard. Firefox has completely rewritten their dev tools before, and they're rewriting them again.
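A quick illustration of the alert limitation (the object and names here are made up): alert coerces its argument to a string, so an object needs JSON.stringify first to be readable.

```javascript
// alert() shows its argument as a string, so an object comes out as
// "[object Object]". Stringifying it first makes it readable.
const user = { name: 'Ada', role: 'admin' };
const coerced = String(user);          // what alert(user) would display
const readable = JSON.stringify(user); // what alert(JSON.stringify(user)) shows
console.log(coerced, readable);
```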

Here's the current state of browsers' developer tools:

Firefox's dev tools are a few years behind Chrome. For instance, it can't edit source files, and it doesn't have support for async stack traces like Chrome has.

Safari and Chrome were based on WebKit. Chrome split off and created Blink.

Safari and Opera have really stepped up their game. There are a few features that are in Safari or Firefox that aren't in Chrome. Performance and memory auditing is better in Chrome. Firefox has a better animation tool.

Edge has very rudimentary tooling, but they let you speak directly to a Chrome debugger; it's built in. It's called something like "project to Chrome".

Chrome just came out with a new user interface update to their DevTools.

Chrome has a nice API for their DevTools. They abstract their tools so you can use them with other products.

See node-inspector for using Chrome tools with Node--it's "Node.js based on Blink dev tools." It's coming to Node directly (see v8_inspector).

For React, you need React Developer Tools. There are similar tools for Redux, Ember, etc.

DevTools can:
  • Create files
  • Write code (you can use it as your IDE)
  • Persist changes to disk
  • Do step through debugging
  • Audit pages
  • Emulate devices
  • Simulate network conditions
  • Simulate CPU conditions
  • Help you find and fix memory leaks
  • Profile your code
  • Analyze JavaScript performance
  • Spot page jank
There are official docs, but they're sometimes behind.

He really recommends Chrome Canary. It has all the latest and greatest tools several months before Chrome stable. He uses Chrome Canary for development. He even uses it for his daily driver; he says he hasn't had any stability issues.

All the browsers have canary builds these days.

He also teaches an accessibility workshop. Chrome moves so quickly that they once broke something he was teaching, but only half of the people had a version of Chrome new enough for it to be broken. The fact that Chrome has rolling updates means there's actually a lot of variety in the versions of Chrome out there.

Chrome version 52 merged in "most of the stuff" (which I assume means updates to the DevTools).

When it comes to the DevTools, there have been a bunch of UI changes in the last six months, so Chrome Canary looks a bit different.

I asked if you'll run into compatibility issues if you only use Canary. He said it's no worse than deciding to skip cross-browser testing in general. Rendering issues are at an all-time low. However, you probably still need to test different browsers. Of course, if you're supporting old IE (IE 10 and below), you always have to test. Edge is pretty compliant. ES6 support differs across browsers, but a lot of people just use a transpiler.

The nice thing about Canary is that there's only one version of Canary, whereas everyone is running slightly different versions of Chrome because of how they roll out updates.

He likes Frontend Masters workshops. Douglas Crockford has a 13 hour workshop on JavaScript that's amazing. They're nicely edited, and they have nice coursework.

He gave us a quick walk through the panels:

Most people only use the element panel and the console, but there's so much more!

Right click on the page, and click Inspect.

In DevTools, click the three dots in the top right in order to pick where you want the window to be.

In DevTools, click the icon in the top right (the arrow on top of a box), and then click on an element to inspect it.

If you place the DevTools dock to the right, you can drag the dock to the left in order to make your window a certain width. This is a super easy way to test responsive layouts. Make the site 320px wide--that's a good thing to target.

Sometimes he pops DevTools out to a full screen and puts it on a separate monitor.

You can drag the tabs like Console, Elements, Profile, etc. around, and it'll persist your changes.

If you're not on the console, you can press escape to show or hide the console.

You can use Settings >> Reset defaults.

Next to the inspect icon, there's an icon showing a phone and a tablet on top of each other. You can use that to simulate a device. This isn't just changing the screen. It's also sending a different user agent string. That's important to note. If you want mobile web instead, instead of picking a particular device at the top of the screen, you can use "Responsive".

In general, use relative units. Don't use pixels for your font sizes. Use ems, rems, etc. Use %s for Flexbox. Flexbox gets iffy with cross browser support.

The reliability of the mobile emulation is pretty good, but you do also need to try it with real mobile devices. Mobile emulation will get you pretty far, though, especially during development.

Twitter has a separate mobile app.

There's device, network, and CPU emulation.

He talked about the DOM representation on the Elements tab. Remember, the DOM representation (i.e. the current state of the DOM) is probably very different than what view source will show (since that shows the original HTML that came down).

You can use $('.tweet') on

Select an element in the element tab, then right click on it, and select scroll into view. That's a great way to find an element on the page.

You can select an element and press "h" to toggle hiding it. It uses visibility: hidden, not display: none.

Twitter has style sheets all over the place. They're bundled in prod, but in dev, there might be something like 13 different stylesheets.

CSS specificity (most to least specific):
  • Style attribute
  • ID
  • Class, pseudo-class, attribute
  • Element selector (like a div selector)
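As a rough illustration of that ordering, here's a toy specificity calculator (my own sketch; it handles only simple selectors, while real CSS engines cover many more cases):

```javascript
// Toy specificity: returns [ids, classes/attrs/pseudo-classes, elements].
// The style attribute outranks all of these and isn't a selector,
// so it's not counted here.
function specificity(selector) {
  const ids = (selector.match(/#[\w-]+/g) || []).length;
  const classes = (selector.match(/\.[\w-]+|\[[^\]]+\]|:[\w-]+/g) || []).length;
  const elements = (selector.match(/(^|[\s>+~])[a-zA-Z][\w-]*/g) || []).length;
  return [ids, classes, elements];
}

console.log(specificity('#nav .item a')); // [1, 1, 1]
console.log(specificity('div.item'));     // [0, 1, 1]
```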
In the Elements tab, go to the Computed tab, and you can see what's actually being applied. Click on the attribute, like border-bottom, and then open the arrow, and it'll show you where the rule comes from. That's a really nice way to find where styles are coming from.

He showed the box model widget on the computed tab. Inner to outer:
  • Element
  • Padding (inside the border)
  • Border
  • Margin (outside the border)
  • Position (like "position: relative; top: 200px")
He started playing around with

Elements >> Computed is really helpful.

Next, he showed Elements >> DOM Breakpoints:

Find an element in the page. Right click on it. Click "Break on...":
  • Subtree modifications
  • Attributes modifications
  • Node removal
Then, it'll break into JavaScript. Hence, if you don't know the code, but you do know the UI, this is a nice way to find the code that changed the DOM.

However, keep in mind that it'll usually break on the jQuery code, not your application code, which is a few layers up the stack. Usually, it'll give you enough to backtrack to the app code.

Color formats:

Shift click on any hex value, and it'll switch between different color formats. He prefers hex codes over RGB. If you click on the box of a color, you get a cool screen with a color picker. He talked more about the color picker:

Click on the arrows next to the color palette. It has material design colors. Right click on the color to get various shades.

Find an element. Find the color in Elements >> Styles. Click on the color box. Click on the arrows next to the color palette. It has Material, Custom, and Page Colors. The page colors thing helps you pick a color that matches other colors on the page.

Material Design is Google's color scheme. All those colors look good on white and black. He likes using the Material Design stuff; it makes it easier since he's not a designer.

In Elements >> Styles: You can click on the checkbox next to a rule to apply or unapply a rule to play with colors.

Use Cmd-z to undo. Refresh will also go back to what's in the source.

Workspaces are really helpful so that DevTools writes your changes to your files.

Elements >> Event Listeners is sometimes useful.

Next, he showed the Sources tab:

It looks like an IDE.

Use Cmd-P to fuzzy search for files.

By default, you can make changes. However, if you save and then refresh, you'll lose your changes. Here's how to make your changes persist:

See: Set Up Persistence with DevTools Workspaces

Drag your project folder to the left pane of the Sources tab. Then, you can map local sources to remote sources (I explain this again below). It's a bit clumsy, but then you can actually edit your real code.

Right click in the left pane. Add Folder to Workspace. Pick the local folder. Click on a file from that local folder. When prompted map the file. Then, when you save, it saves to disk. It's awesome, but it's limited.

Anything you change in styles persists to disk (Elements >> Styles). Anything you change in the markup (the DOM thing on the left), doesn't persist to disk (because who knows what code created that DOM).

Only styles defined in external CSS files can be saved.

To see all the limitations, open Set Up Persistence with DevTools Workspaces and search for "there are some limitations you should be aware of".

You have to try it out to get the hang of it.

You can even try this out with your production server. You can still map your files. However, when you save changes, it'll only change your local files--it can't automatically deploy your changes to prod.

It does work with sourcemaps a bit.

There's an experiment for SASS support. They're going to add support for templates.

We should prefer SASS over LESS because LESS will probably go away. Bootstrap was the last major user of LESS, but Bootstrap 4 is moving to SASS.

When viewing a file in the Sources tab, on the lower left, there's a {} icon to prettify the code. It can't fill in minified names, but it gets you pretty far.

Use Elements >> Styles >> + to add new styles to a particular file.

To set up persistence, the key thing is to drag your project folder onto the Sources tab, find the file in your local sources, and double click to open it. Chrome will guess at the mapping.

In the DOM section, you can select an element, right click, and use edit as HTML. Edit the HTML. Then click outside the box. However, these changes can't be persisted to disk.

He's using Atom for his editor.

He's using something called Pug which is like Mustache.

The color "rebeccapurple" is named after a guy's daughter who died.

If you have things mapped, then with the elements panel, anything you edit is implicitly saved.

Use Elements >> Event Listeners to see all the event listeners for an element. However, this isn't perfect because there might be event abstractions in the way.

Scroll listeners are expensive, so sometimes there's some sort of abstraction, like jQuery's on('scroll').
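One common abstraction is throttling: run the handler at most once per interval. A generic sketch of the idea (this is not jQuery's actual implementation):

```javascript
// Throttle: invoke `fn` at most once every `waitMs` milliseconds;
// extra calls inside the window are dropped.
function throttle(fn, waitMs) {
  let last = 0;
  return function () {
    const now = Date.now();
    if (now - last >= waitMs) {
      last = now;
      fn.apply(this, arguments);
    }
  };
}

// Usage in a browser might look like:
// window.addEventListener('scroll', throttle(onScroll, 100));
```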

In Elements >> Styles, there's :hov in the top right. This lets you play with forcing a particular hover state. That way you don't need to keep trying to hover over the element to test out its hover handling.

There are a few things that are only in Chrome Canary.

In Elements, long press on an item, and then you can drag the element around.

Workspaces is one of the cooler things you can do with DevTools. You can go pretty far without using another editor, and you can do design work from the elements panel.

We're going to debug code in the Sources panel.

The step through debugger is pretty top notch and pretty clean.

Click on a line to add a breakpoint. It has to be on an executable line of code. Just move it down or up a line if it doesn't work. Then refresh.

Now, we're in the debugger:

The Watch widget allows you to put in an expression and watch that expression. If you're some place random, most of your watch expressions will be undefined because those variables probably don't make sense in that context.

Even the source code widget has useful stuff in it. If it knows the value of a variable, it'll show you the value in a popup near the variable.

When the debugger is paused, you're in a certain context, and you can interact with the current state using the console. Go to the Console tab, or press escape to open the console at the bottom.

Press the Play icon to resume execution.

He talked about the step over and step into icons near the play icon.

If you press step into, it'll find the next function call and step into it. That's a little different from most debuggers, which simply go down one line if the current line isn't a function call. You'll probably want to use step over by default.

Just play with the debugger. You're not really breaking anything.

Pause on all exceptions is pretty helpful. However, sometimes, various libraries are using exceptions internally for various reasons.

In the Call Stack widget, right click on a line and click Blackbox Script. Then, it'll hide all the stuff from that script. That way, you can ignore the framework code. This is per script, per domain. It persists per session. If you restart the browser, you'll lose your blackboxes.

Right click, Add Conditional Breakpoint. Use any expression you want.

There is an XHR Breakpoints widget. You can use that to set a breakpoint that gets tripped anytime there is an XHR that matches a particular URL. That's really useful if you don't know the code very well, but you know the related requests to the server.

You can also just put "debugger;" in your source code. Remember to use a linter to prevent yourself from committing it ;)
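The debugger statement pauses execution only when DevTools is attached; combined with an inline condition, it behaves like a conditional breakpoint. A minimal sketch (the function and field names are made up):

```javascript
// Outside a debugger, `debugger` is a no-op, so this still runs normally.
function render(items) {
  for (const item of items) {
    // Same effect as a conditional breakpoint on this line:
    if (item.broken) debugger;
  }
  return items.length;
}

console.log(render([{ broken: false }, { broken: true }])); // 2
```

ESLint's no-debugger rule is one way to keep stray debugger statements out of commits.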

Click the "Async" checkbox to turn on async debugging. This captures async stack traces. That way, you can see the stack trace from before and after the the asynchronous activity (such as making an XHR, setting a timer, etc.). It will make your call stacks taller, but it's super helpful for understanding how you got into a particular state.

For most of these things, your changes only impact the current session.

Warning: If you use the GitHub API in your exercises, you might get hit by their rate limiting.

Now we're going to talk about Profiling:

For Google, a half-second increase in page load time resulted in a 20% traffic loss.

Amazon reported that a 100ms slowdown resulted in a 1% sales loss.

We do know that slow sites, non-SSL sites, sites with a bad mobile experience, etc. get penalized by search engines. However, we don't know if it's related to the DOMContentLoaded event, or if they're measuring perceived performance. Google is pushing the RAIL performance model, and it's based on perceived performance.

Twitter measured "time to first Tweet". Facebook has something similar.

Build, then profile, and only if it's a problem, tackle it.

There's an Audits tab in DevTools. It's very high level, but very helpful.

Memory leaks are not very common. Browsers and frameworks are pretty good these days.

He played with a particular course on

Go to a page. Click the Audits tab. Click Select All. Click Reload Page and Audit on Load. Click Run.

It prioritizes and suggests things.

He found that Udemy has 10,429 unused CSS rules on a course's landing page. 90% of the CSS is not used. He says everyone has that. Advanced apps use bundle splitting. It's easy to see the problem. It's much harder to figure out a fix.

Modern web apps tend to put a lot of stuff in the head tag. If you have an async script tag, it doesn't matter if it's in the head. The beginning of the body is fine.

People tend to use Bootstrap, but it has lots of stuff they may not be using.

It's also important to remove CSS that isn't being used anywhere.

Everything should be gzipped. This is a big win.

Udemy's course landing page has 2.7 MB of content when uncompressed. He said that's about average.

In DevTools, it can give you a list of unused CSS rules in the Audits tab.

Here are some common audit problems:
  • Combine external CSS and JS
  • Enable gzip compression
  • Compress images
  • Leverage browser caching
  • Put CSS in the document head
  • Unused CSS rules
If you have HTTP/1, combine separate JS and CSS files. With HTTP/2, keep them separate.

CloudFlare is one of the only CDNs that currently supports HTTP/2.

Combine and minify your assets.

Compressing images is probably one of your biggest wins. He said that it seems like Udemy is doing okay in this regard.

He recommended ImageOptim.

He has the settings set to JPEG 70%, PNG 70%, GIF 40%.

If you don't need transparency, you can switch to JPG.

Browser caching is another big win.

Next, he showed the Network tab:

Hit record. Refresh the page.

Press the camera icon to capture screenshots. It grabs screenshots every step along the way (anytime there's going to be a major refresh).

However, if you refresh, it'll start taking screenshots before the page comes back from the server.

It only shows the visible portions of the screen.

You can get a lot of wins by trying to optimize what things load first. The solutions are a little hackier, but you can impact the experience.

Server side rendering would help, but it's hard.

Udemy's loading indicators are making Chrome take a lot of screenshots.

It stops recording when the page is fully loaded.

If you record manually, it'll keep recording and taking screenshots until you explicitly hit stop.

He showed the waterfall in the Network tab.

Hover over the colors in the waterfall to see more details.

If something is "Queued", that means Chrome has postponed loading it.

Chrome prioritizes the order in which to load various assets. CSS is ranked higher than images. Prioritization is mostly by filetype.

Webpack can compile your CSS into your static HTML.

With Google's AMP, all the CSS has to be inlined into HTML.

Performance is often about give and take.

Chrome allows up to 6 TCP connections per origin.

If you see that a resource is "Stalled" or "Blocking", that's usually the result of some weird proxy negotiation.

DNS tends to be pretty stable.

He went through all the different parts of the request, explaining each one. He talked about:
  • Proxy negotiation
  • DNS lookup
  • Initial connection/connecting
  • SSL
  • Request sent, etc.
  • TTFB (time to first byte): This will suffer if your server is slow.
  • Content download/downloading
The important thing is to triage performance problems.

You can get really good information not just about what's slow, but why it's slow.

Common problems:
  • Queued or stalled: Too many concurrent requests.
  • Slow time to first byte: Bad network conditions or slowly responding server app.
The Audit tab has some useful stuff, but PageSpeed Insights has even more stuff.

We talked about the fact that we have a blocking script tag for Optimizely in our head. He said that sometimes, you need blocking code in the head, and there's no getting away from it. Optimizely is one of those times. is also useful. It's a bit more simplistic.

GTmetrix is the most advanced tool. It uses all the other tools, and then combines everything. The analysis of "What does this mean?" is pretty helpful.

Saucelabs and BrowserStack have some performance stuff.

In the Network tab, hold shift while hovering over a file; green shows the requests that initiated it, and red shows the requests it initiated.

Hover over the initiator to get the call stack for the initiator.

In the Network tab, you can right click on the column headings, and there's more stuff that you can see. For instance Domain is helpful.

Preserve Log is helpful to save the log messages across refreshes.

You should almost always have Disable Cache selected. By the way, it only applies when DevTools is open.

There's an Offline checkbox. You can check this if you're working on service workers.

The throttling is super helpful. Don't forget to turn off the throttling, though ;) Good 3G is a good thing to try. Remember to disable cache.

1MB is not huge for sites these days. It's not really a problem until you're above 3MB.

100 requests is too many requests.

You should put your CSS above your images.

Next, he talked about the Timeline tab:

Chrome keeps adding more stuff to this tab. It's the most overwhelming tab in DevTools.

It has CPU throttling.

You can hide the screenshots (uncheck the checkbox) to make some space.

If you look at the memory graph and you see a sawtooth that's trending upwards, it could be a memory leak.

If you don't see a memory leak, you can uncheck that checkbox.

The summary tells you how the browser is spending its time. CSS is in Painting and Rendering. Now, you can hide the summary.

At the top, there are red dots that might mean page jank.

The green at the top has to do with frames per second.

Then it shows you what the CPU is up to. The colors match the summary.

Selecting the timeline:
  • Click and drag.
  • Double click to select everything.
  • Single click to select a small piece.
  • You can also scroll side to side or scroll in.
  • When you're on certain flame charts, use shift when scrolling.
You can probably hide the Network and Screenshots stuff since it's already on the Network tab.

See FPS, CPU, and NET on the top right.

This stuff is very different between Chrome Stable and Canary.

He talked about Flame Charts. They're under Main. Wide is bad; tall is not a problem. Find the widest frames--those are the functions taking the longest to execute. This can help you find your slow code. Then, you can zoom in.

What he does is zoom in and look for the last function that's really fat, and everything under it is obviously skinny.

Total Time tells you how much time your function took to execute, including all the functions it called.

Self Time is just how much the function itself took, without counting how much the children took.

In the flame charts, dark yellow is native browser stuff, whereas yellow is the application code.

The colors in the flame charts correspond to the colors in the summary at the bottom of the page.

Ads and analytics are often times the performance headaches.

CPU throttling is pretty helpful. One thing it's good for is that it makes it more obvious where the slow parts are in the flame charts.

You might need to turn on a Chrome flag to see the CPU throttling.

People used to use try/catch for lexical scoping before we had let. You can have a try that always throws, and the catch parameter is scoped to the catch block. Traceur used this trick. Babel just renames things.
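A sketch of that trick: each catch clause creates a fresh block-scoped binding, which gives you per-iteration scoping the way let does today.

```javascript
// Pre-ES6 block scoping via try/catch: each catch clause creates a new
// scope for its parameter, so each closure captures its own value.
var fns = [];
for (var i = 0; i < 3; i++) {
  try {
    throw i; // always throws, just to enter the catch
  } catch (j) {
    fns.push(function () { return j; }); // j is fresh per iteration
  }
}
console.log(fns.map(function (f) { return f(); })); // [0, 1, 2]
// With a plain `var` instead, every closure would have seen the final i (3).
```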

Sometimes Chrome extensions get in your way.

PageSpeed Insights is more robust than the Audits tab.

The AMP project has their own Twitter and YouTube embeds.

If you're using a pure Node server, there's a compression thing to turn on gzip.

For Bootstrap, you can use the SASS version and just pull in the parts that you need.

Don't use a 1000px-wide image and shrink it down to 200px with the img tag. Resize images and compress them (lossily).

He really likes the screenshot view. Remember to pop out DevTools so that it takes a larger screenshot.

Server side rendering is nice in order to get text onto the screen faster.

You can use command click to show both CSS and Img in the Network tab.

On the Network tab, you can use regexes, but that's not so common.

Embedding YouTube videos loads a lot of JavaScript. Consider deferring them. That could help finish loading much faster.

He talked about the DOMContentLoaded event and the Load event:

You can hook into either. They're native browser events.

For his example, it was the YouTube embeds that totally killed his page performance.

AMP has its own video.js.

Most of what he does on the Timeline tab is to look at stuff and then hide it when he figures out that that's not where the problems lie.

In the Summary, you can click on Bottom-Up, Sort by Descending Self Time. That's an easy way to find the slow parts of your code.

(He's using zsh with a very colorful prompt.)

Start with Audit and Network. Then go to the Timeline. Next, you can go to Profile.

Next, he talked about the Profile tab:

You can use Profile for CPU profiles or to take snapshots of the heap.

Remember, they kind of push everything into the Timeline. Profile has a simpler view of some stuff that's in Timeline.

Start profiling, then refresh the page.

Once you run a profile, when you go back to your code, it'll show times next to your functions in the Sources tab.

Next, he talked about page jank:

Jank is any stuttering, juddering, or just plain halting that users see when a site or app isn't keeping up with the refresh rate.

He talked about 60 FPS (frames per second):

Most devices refresh their screens at 60 FPS. The browser needs to match the device's refresh rate. 1 sec / 60 = 16.66ms per frame. In reality, you have about 10ms for your code.

In the Timeline tab, hit Esc. There's a rendering panel (it might be in the drop down). There's an FPS meter.

Most of the time, page jank is obvious just using the site.

He explained some causes of page jank:

Every time you do a write to the DOM, it invalidates the layout. For instance: elem.style.height = (h1 * 2) + 'px';

You can use window.requestAnimationFrame() to ask the browser to let you know when it's going to do layout invalidation. Do all your writes in there.

There's a fastdom library. Basically, never read from or write to the DOM without using this library. It eliminates DOM thrashing.
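The core idea behind fastdom, sketched without the DOM (fastdom's real API is measure/mutate, but the details here are my own simplification, and requestAnimationFrame is stubbed with setTimeout so the sketch runs outside a browser):

```javascript
// Batch DOM reads and writes so each frame does all reads first, then
// all writes - that way the layout is invalidated at most once per frame.
const raf = typeof requestAnimationFrame === 'function'
  ? requestAnimationFrame
  : function (cb) { return setTimeout(cb, 16); };

const reads = [];
const writes = [];
let scheduled = false;

function measure(fn) { reads.push(fn); schedule(); }
function mutate(fn) { writes.push(fn); schedule(); }

function schedule() {
  if (scheduled) return;
  scheduled = true;
  raf(function () {
    scheduled = false;
    reads.splice(0).forEach(function (fn) { fn(); });  // all reads...
    writes.splice(0).forEach(function (fn) { fn(); }); // ...then all writes
  });
}
```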

React has a lot of stuff tied to requestAnimationFrame. A lot of things use the fastdom library. It makes scrolling butter smooth.

He doesn't know if Angular 1 takes care of using requestAnimationFrame correctly.

His favorite demo is

Escape (to show the Console tab) >> Rendering (next to the Console tab) >> Paint Flashing. This shows green wherever there's a re-paint.

This will help you find cases where you're re-rendering things that don't need to be re-rendered. This is a common performance problem.

There's this thing in CSS, will-change: transform, that you can use to tell the browser to kick some work off to the GPU.

In his demos, the old Macs and the Windows machines were getting jank even though the newer Macs weren't.

There's an ad in his demo that has fixed position. That kind of stuff can cause page jank. Adding will-change: transform to the ad container helped.

If you have some ad that's 100x100 with a fixed position, it's likely you'll get jank.

JavaScript animations don't use the GPU, but CSS animations do. CSS animations are way smoother.

Use fixed position sparingly.

Next, he talked about how to find and fix memory leaks:

JavaScript is pretty good at garbage collecting. Leaks are not super common.

JavaScript uses mark and sweep.

Browsers are always getting better.

He talked about some common causes of JavaScript memory leaks:
  • Accidental globals: you forgot to use var (e.g. bar = "foo";). Strict mode disallows that.
  • Forgotten intervals: once you start an interval, it keeps going. This can lead to a leak if it keeps pulling in more and more data.
  • Holding onto a reference to a DOM element that is no longer in the DOM anymore: browsers and frameworks are getting better at handling this. However, event listeners are particularly likely to hold onto things.
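Minimal sketches of the first two patterns above (all names are made up):

```javascript
'use strict';

// 1. Accidental global: without strict mode, `cache = []` here would
//    create a never-collected global. Strict mode makes it throw instead.
function accidentalGlobal() {
  cache = []; // ReferenceError in strict mode
}
let threw = false;
try { accidentalGlobal(); } catch (e) { threw = e instanceof ReferenceError; }
console.log(threw); // true

// 2. Forgotten interval: the callback and everything it closes over
//    stay reachable until clearInterval is called.
const results = [];
const id = setInterval(function () { results.push(new Array(1e4)); }, 1000);
clearInterval(id); // without this, `results` would grow forever
```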
Again, he starts with Audits and Network tabs. Then he goes to the Timeline tab.

You can leak memory in the form of either JavaScript objects or DOM nodes.

With memory recording, record over a long period of time so you can see a lot of growth.

Next, he talked about the Profiles tab:

Use Take Heap Snapshot twice. Then, you can compare them. Sort by allocated size descending.

Profiles >> Allocation Timeline >> Summary >> Click on an object >> Object: you can use this to find the code that caused the allocation to happen.

Shallow Size: the amount of space needed for the actual item.

Retained Size: how much you'll be able to get rid of if you get rid of the object. For instance, there might be a parent that points to a lot of stuff.

Don't start profiling your memory until you see an obvious sawtooth on your timeline.

Chrome DevTools makes it pretty simple to tackle memory leaks.

In Chrome, click the three dots in the top right of the browser >> More Tools >> Task Manager, then add JavaScript Memory as a column. You can use this to see how much JS memory is used per tab and per extension.

Next, he talked about Experiments:

Go to chrome://flags. Ignore the nuclear icon ;) Search for DevTools. Enable Developer Tools Experiments.

There are so many experiments.

Now, open DevTools >> Settings >> Experiments. There's CPU throttling. There's accessibility stuff. There's live SASS stuff. There's request blocking.

Hit shift 7 times, and you'll get super secret experiments!!! They're very much still in progress ;) They're not very stable yet.

Here are some resources:


jjinux said…
Action palette: search for a command.

Search across all files.
jjinux said…
Cool tricks in the console from

* copy(obj) -- copy something to the clipboard as JSON.
* debug(nativeFunction) -- set up breakpoints on a native function.
* queryObjects(constructor) -- list all instances of a constructor.

In the triple dots next to the console tab, there's a "Search" tab that lets you search across all the sources. You can even find objects that you don't have access to.

He showed a new experiment where you can turn on eager evaluation to evaluate an expression before you even hit enter. It'll keep re-evaluating as you tweak the code. It's pretty magical in a good way. It's based on a new feature in V8 that can tell whether code has side effects or not. You need Chrome Canary. Turn on "Eager evaluation" in the console settings.
