
JavaScript: DOM vs. innerHTML, Server-driven vs. Client-driven

What's the best approach to architecting JavaScript, and which frameworks best support that approach? Is it best to build the app mostly on the client like Gmail and Google Maps, or is it better to provide a normal HTML page, but with lots of Ajax mixed in like YouTube? Which approach leads to the fewest bugs when the client and server get out of sync? How does your server respond to Ajax requests? Does it serve up JavaScript code to run, JSON or XML data to digest, or pre-rendered HTML?
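To make those last questions concrete, here's a sketch of the three shapes an Ajax response typically takes. The endpoint payloads and names are invented for illustration:

```javascript
// Sketch of the three shapes an Ajax response can take.
// The payloads and names here are invented for illustration.

// 1. Data (JSON or XML) for the client to digest and render itself:
const jsonResponse = '{"videos": [{"id": 1, "title": "Cats"}]}';
const data = JSON.parse(jsonResponse);
console.log(data.videos[0].title); // the client decides how to display this

// 2. A pre-rendered HTML fragment for the client to inject verbatim:
const htmlResponse = '<li><a href="/videos/1">Cats</a></li>';
// In a browser: document.getElementById('results').innerHTML = htmlResponse;

// 3. JavaScript code for the client to execute (the Rails .js.erb style):
const jsResponse = 'insertResult("<li>Cats</li>")';
// In a browser, the Ajax helper would eval() this response.
```

Each shape pushes the rendering work to a different place: option 1 keeps it on the client, option 2 keeps it on the server, and option 3 lets the server drive the client directly.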

In the Rails world, there are all these helper functions that generate JavaScript in your HTML pages. The JavaScript might result in Ajax requests that themselves serve up more JavaScript (via .js.erb or .rjs files). There is also heavy use of innerHTML. The server is in control of the application flow.

In the jQuery world, it's standard to keep the JavaScript separate from the HTML. I think innerHTML is still used heavily under the hood, e.g. via the append() method.

It seems like having the client in control is more common in the YUI world. That is, rich Internet applications that talk to a server that just serves up data seem more common with YUI than with jQuery. I think that's true of Dojo too.

I've seen some applications that only request data from the server and build the entire UI using DOM functionality. I've heard that this approach is painful, heavy, and occasionally very frustrating.

You could also build an application that only requests data from the server and builds the entire UI using mostly innerHTML. Building up a lot of HTML using JavaScript strings doesn't seem particularly pleasant either.
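As a hypothetical sketch of that string-building approach (all names invented), here's what rendering a list purely from data looks like; note that every value has to be escaped by hand, or you've written an XSS hole:

```javascript
// A minimal sketch of building UI as one big HTML string from JSON data.
// Every value has to be escaped by hand, or you've opened an XSS hole.
function escapeHtml(s) {
  return String(s)
    .replace(/&/g, '&amp;')
    .replace(/</g, '&lt;')
    .replace(/>/g, '&gt;')
    .replace(/"/g, '&quot;');
}

function renderUserList(users) {
  const rows = users.map(
    (u) => '<li class="user">' + escapeHtml(u.name) + '</li>'
  );
  return '<ul>' + rows.join('') + '</ul>';
}

const html = renderUserList([{ name: 'Alice' }, { name: '<script>' }]);
console.log(html);
// In a browser you'd finish with: container.innerHTML = html;
```

Multiply this by every widget in an app, and it's easy to see why living in concatenated strings gets old fast.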

In GWT and Pyjamas, you write your JavaScript application in Java or Python respectively and then compile the app down to JavaScript. I'm guessing that the JavaScript builds the UI using DOM calls, but I'm not 100% sure. Has anyone out there tried Pyjamas and liked it better than, say, jQuery or YUI?

I've read the documentation for MochiKit, Dojo, YUI, and jQuery at various times over the years, and I've even read a couple of books on Ajax. However, I've never read anything that gave a comprehensive breakdown of the pluses and minuses of each of these approaches.

At Metaweb, I do believe they started with the "build everything from scratch on the client using DOM calls" approach, and eventually the browser keeled over because there was just too much data. (Freebase produces a lot of data.) They switched to generating HTML on the server, and using Ajax to ask for even more HTML from the server when it was necessary. They liked that better. That approach is also recommended in JavaScript Best Practices on Dev.Opera.

I think most people pick an approach without even really thinking about it and never think about alternatives. Have you ever taken one approach and switched to another?


jjinux said…
I'm looking at, since that's one of my favorite UIs. I tried turning JavaScript off. The page loads and looks right. That means that the markup is being generated on the server.

However, you can't do much. A lot of things are no longer clickable, and the search just ignores you. They really aren't putting any serious effort into making the app usable for people with JavaScript turned off. (That's not a complaint, just a useful thing to note.)

Some of the tables do look like they are generated on the client. I double checked, and they are HTML, not Flex. It's beautiful, semantic HTML.

The front page uses jQuery, but most of the app is built using YUI as mentioned here:

I used Firebug to take a peek at the Ajax requests. Here are a few interesting Ajax requests that I saw:


I'm guessing that one returns JSON ;)


That one returns a snippet of HTML.


I wonder what xevent is. Notice that updateTransaction is *not* a noun ;) That means they're not all hung up on RESTful, resource-based routing.
said…
I was going to say that the big disadvantage to me of having the client build the page is that you don't really have "the web" anymore. This is fine for apps (like Gmail say) but not what you want if you hope to be searchable/indexable/scrapeable or hope to degrade gracefully.

I tend to use the AHAH approach because I like that you start out with fairly complete html before js is involved. Asynchronous calls can send data as json or whatever but you are getting back a pre-rendered snippet of HTML and this leads to easy code reuse - the server side code that builds one page can probably spit out fragments simply by omitting the outer UI that surrounds the dynamic data...
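For example (all names invented), the reuse looks something like this in server-side JavaScript: one fragment renderer shared by the full-page path and the Ajax path:

```javascript
// Sketch of the AHAH reuse idea: one renderer produces the dynamic
// fragment; the full-page view just wraps it in the outer UI.
function renderCommentsFragment(comments) {
  return '<ul id="comments">' +
    comments.map((c) => '<li>' + c + '</li>').join('') +
    '</ul>';
}

function renderFullPage(comments) {
  return '<html><body><h1>Post</h1>' +
    renderCommentsFragment(comments) +
    '</body></html>';
}

// The initial page load gets the whole thing...
const page = renderFullPage(['first!', 'nice post']);

// ...while an Ajax refresh of #comments gets just the fragment,
// which the client drops into place with innerHTML.
const fragment = renderCommentsFragment(['first!', 'nice post', 'me too']);
```

(Escaping is omitted here to keep the sketch short; real code would escape the comment bodies.)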

About the only times I break out of this sort of pattern is with large data sets (say datagrids) where json is significantly more compact than the pre-rendered html... I tend to think of this a more of an optimization technique.
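As a rough sketch of that size difference (rows and markup invented), compare the same data serialized both ways:

```javascript
// Rough sketch of why JSON wins for big datagrids: compare payload sizes
// for the same rows sent as JSON vs. as pre-rendered table markup.
const rows = [];
for (let i = 0; i < 1000; i++) {
  rows.push({ id: i, name: 'user' + i, score: i * 2 });
}

const asJson = JSON.stringify(rows);
const asHtml = rows
  .map((r) => '<tr class="row"><td>' + r.id + '</td><td>' + r.name +
              '</td><td>' + r.score + '</td></tr>')
  .join('');

console.log(asJson.length, asHtml.length);
// The HTML version repeats tag soup for every row, so it comes out larger;
// the exact ratio depends on how heavy your markup is.
```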

I wonder if part of the consideration in choosing techniques is people's language preferences. I don't feel nearly as solid in managing extensive codebases in Javascript as I do in PHP or Python so I'd rather do more work server-side and keep the client side code simple. Somebody with serious javascript chops might feel differently.
jjinux said…
I think we're on the same page.
Jeff said…
DOM performance is very implementation-dependent. innerHTML is *much* faster than programmatically building a tree.

The problem with a JS-heavy interface is that you cannot control the performance of the browser; you *can* directly affect the performance of the server-side stuff.

Therefore, I tend to have the server do any heavy lifting, and build the page in as much HTML as possible. When I need dynamic updates, I use injected HTML when possible. That way, JS can be relegated to UI effects and page updates, without putting too much onus on the client.
jjinux said…
Good comment, Jeff.
jjinux said…
The Rails JavaScript helpers generate JavaScript code in your HTML that hooks into prototype and, which is loaded separately. This is convenient, but it loses the conceptual purity of having your HTML and JavaScript be separate.

jRails is a Rails plugin that provides all those Rails helpers, but with jQuery. That's cool, but it still means there's JavaScript in the HTML.

There's an intrinsic issue with how jQuery works. When you set up your event handlers, they only apply to content that's already in the document; they don't apply to any HTML that is later fetched by an Ajax request. There are a few workarounds. See:
I wonder if YUI has this issue as well.
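One well-known workaround is event delegation: bind a single handler on a stable ancestor and let events bubble up to it, so freshly injected children are covered without rebinding (this is the idea behind jQuery's delegation helpers). Here's a framework-free sketch of the principle, with browser event bubbling simulated by a tiny walk-up loop:

```javascript
// Framework-free sketch of event delegation: the parent owns one handler;
// children can be swapped out freely and clicks on new ones still work.
const clicked = [];

function makeNode(name, parent) {
  return { name: name, parent: parent || null, handlers: [] };
}

// Simulate a bubbling click: run handlers from the target up to the root.
function click(target) {
  for (let node = target; node; node = node.parent) {
    node.handlers.forEach((h) => h(target));
  }
}

const list = makeNode('ul', null);
list.handlers.push((target) => clicked.push(target.name));

// "Ajax" adds a brand-new child -- no rebinding needed:
const freshItem = makeNode('li.fresh', list);
click(freshItem);
console.log(clicked); // [ 'li.fresh' ]
```

Because the handler lives on the ancestor, replacing the children with Ajax-fetched HTML never orphans it.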

If you use jRails, when you request a snippet of HTML, the JavaScript for that HTML can come with it. This alleviates the jQuery issue. It's "impure", but effective and convenient.

Of course, I'm making all this stuff up off the top of my head since I've only read the documentation at this point ;)
jjinux said…
I asked about this problem, and I got this response:

(From Aaron B., Philadelphia, PA, Re: YUI (JavaScript))
I'm not sure how it works specific to YUI, but in jQuery i usually setup an ajax complete event to rebind my event handlers to the new content
jjinux said…
This conversation is also happening on the BayPiggies mailing list:
jjinux said…
I talked to Brandon Goldman who did the JS for He agreed that building HTML fragments on the server and rendering them on the client using jQuery is a good idea. He also agreed that building everything from scratch using DOM methods will cause the browser to keel over painfully. Last of all, we verified that jQuery is using code that at its heart uses innerHTML when you render a snippet of HTML from the server.
very cool & good tip, thank you very much for sharing.
Godfryd's Blog said…
I see two approaches:
1. The whole UI is handled in JavaScript using e.g. ExtJS. This is pleasant because the server side just serves data and all the logic is in one language, but then the app is invisible to Google search.
2. The UI is HTML generated server-side with JavaScript mixed in. This is OK, but the HTML gets mixed with the server-side language; on the other hand, it's searchable by Google.
jjinux said…
> I see two approaches:
> 1. The whole UI is handled in JavaScript using e.g. ExtJS. This is pleasant because the server side just serves data and all the logic is in one language, but then the app is invisible to Google search.

The server still has to enforce some business logic.

> 2. The UI is HTML generated server-side with JavaScript mixed in. This is OK, but the HTML gets mixed with the server-side language; on the other hand, it's searchable by Google.

You can have the server generate HTML, and place all your JavaScript in a static JavaScript file. That's what I'm currently doing. You might have to work a little harder to tell jQuery to bind your event handlers to any new content you fetch via Ajax.
Eric Bloch said…
You should check out what my friend Adam is doing at some point. I think he's going down an excellent path that allows fine-grained choice and code sharing between client and server.

As to DOM vs. innerHTML, I think it's mostly a pragmatic choice between performance (DOM bad sadly) and hideous code (innerHTML worse than DOM). I think it's OK to mix and match as you see fit. You can always migrate from one to the other if need be. Another choice is to wrap up common use of innerHTML in better APIs (sort of like ExtJS, which is "all client" but mostly uses innerHTML to construct DOM).
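As a sketch of that last idea (all names invented; this is not ExtJS's actual API), a tiny tag() helper can own the string building and escaping so application code never concatenates markup by hand:

```javascript
// Sketch of "wrap innerHTML in a better API": a tiny tag() helper builds
// the HTML string, so app code never touches raw markup or escaping.
function escapeHtml(s) {
  return String(s).replace(/&/g, '&amp;').replace(/</g, '&lt;')
                  .replace(/>/g, '&gt;').replace(/"/g, '&quot;');
}

function tag(name, attrs, children) {
  const attrStr = Object.keys(attrs || {})
    .map((k) => ' ' + k + '="' + escapeHtml(attrs[k]) + '"')
    .join('');
  const body = (children || [])
    .map((c) => (typeof c === 'string' ? escapeHtml(c) : c.html))
    .join('');
  return { html: '<' + name + attrStr + '>' + body + '</' + name + '>' };
}

const widget = tag('div', { class: 'user' }, [
  tag('span', null, ['Alice & Bob']),
]);
console.log(widget.html);
// -> <div class="user"><span>Alice &amp; Bob</span></div>
// In a browser: container.innerHTML = widget.html;
```

You get innerHTML's speed with something much closer to the DOM API's readability.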
jjinux said…
Thanks, Eric. I'll ping Adam. He's read some of my other stuff, but he probably hasn't read this one.
Adam Wolff said…
JJ, Nice post. I see it as damned-if-you-do, etc. If you mostly keep your app on the server, you usually end up having to stand on your head to factor out common code for the initial page load and the ajax callback. There's also this impediment to introducing additional AJAX fanciness: every bit is more work and more bugs.

If you choose to mostly keep your app on the client, the biggest problem you have is that you're writing twice the code; when you add a record, you have to write the code to add it to the local state on the client, and to add it to the server. In such environments, it's also usually very hard to write apps where the client state can be altered by an external action (like a record being updated by another user) since the state is mostly kept on the client. You end up having to make the unfortunate choice between sending relevant state with every server call (like a POP email client) or modeling the client state on the server (which means that you're writing even more of your program twice.) Add to this the additional problems you mention around delivering application functionality into under-equipped browsers, like email clients and search robots (let alone IE,) the slowness of the DOM APIs, and the somewhat sucky initial experience of having to load a bunch of app code and then call back to the server before your app can really start. I'm pretty down on this approach, even though I evangelized it at laszlo for several years.

The framework that Eric referenced above is something that we've been developing for the last year. We call it msjs (pronounced like "messages") and it's a dataflow style system where programs are composed out of nodes that are written in javascript. These nodes send and receive JSON messages and are connected in a well defined way at initialization time. As such, in msjs, the program can be automatically distributed between the client and server by doing some relatively simple graph analysis. Since all of the messaging in the system is JSON, it's easy to call across the client/server boundary without doing any formal RPC.
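To give a flavor of the style, a toy graph might look like this. This is very simplified, illustrative JavaScript with invented names, not the real msjs API:

```javascript
// Toy sketch of the dataflow idea: nodes are wired together at
// initialization time and pass JSON-ish messages along the graph's edges.
function makeNode(transform) {
  return { transform: transform, listeners: [] };
}

function connect(from, to) {
  from.listeners.push(to);
}

function send(node, message) {
  const out = node.transform(message);
  node.listeners.forEach((next) => send(next, out));
}

// A minimal graph: parse input -> compute -> render.
const results = [];
const parse = makeNode((msg) => JSON.parse(msg));
const double = makeNode((data) => ({ value: data.value * 2 }));
const render = makeNode((data) => { results.push(data.value); return data; });

connect(parse, double);
connect(double, render);

send(parse, '{"value": 21}'); // results is now [42]
```

Because every edge carries plain JSON, any edge could in principle cross the client/server boundary without a formal RPC layer, which is what makes the automatic distribution possible.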

As for how we handle markup, we've done a little of both. For big blocks of declarative markup -- usually for structural layout -- we've enabled inline-XHTML by using E4X on the server. The other thing we did is to implement much of the low-level DOM APIs within our framework, so that we can manipulate DOM elements on the server just as we would on the client. This means that we can deliver properly rendered XHTML pages to the client at startup, but then have a seamless transition to AJAX updates after the page loads; and it's literally the same code running in both places. It's conceivable that we could take this further and automatically work in an environment without javascript, but for now, that's a special consideration for the app developer.

At some point, I hope to open-source msjs, but for now it's proprietary, and attached to the product I'm working on. Still, I'd be happy to show you what we've done if you want to check it out sometime.
jjinux said…
Hey Adam,

Thanks a lot for your response. I really respect your opinion on this subject. I think I get the gist of your opinion, and it reminds me of 280 Slides and Objective-J.

Please keep me in mind as your opinions grow and change on this subject.

I think the conclusion that I've come to is the least crappy option for my current situation:

* Keep the app mostly server side.
* Use a little Ajax to enhance the page.
* Use jQuery.
* jQuery uses innerHTML instead of the DOM API.
* Send chunks of HTML to the client.
* Mostly render the page on the server.
