Nicholas C. Zakas's Blog, page 18
June 15, 2011
Introducing CSS Lint
Not too long ago, Nicole Sullivan and I announced that we've started working together. Today, we're happy to announce the release of our first collaboration effort: CSS Lint. The goal of CSS Lint, as you may guess, is to help you write better CSS code. We've spent huge chunks of time over the past couple of weeks building and debating rules to help everyone write more efficient and less problematic CSS.
The rules
To begin with, we defined several rules (explained in more detail on the CSS Lint About page). The rules are:
Parsing errors should be fixed
Don't use adjoining classes
Remove empty rules
Use correct properties for a display
Don't use too many floats
Don't use too many web fonts
Don't use too many font-size declarations
Don't use IDs in selectors
Don't qualify headings
Heading styles should only be defined once
Be careful using width: 100%
Zero values don't need units
Vendor prefixed properties should also have the standard
CSS gradients require all browser prefixes
Avoid selectors that look like regular expressions
Beware of broken box models
The rules are all created using a very simple plugin model that makes it easy to change specific rules or add new ones. The ability to turn off or on specific rules isn't yet exposed in the web or command line interfaces but is supported by the underlying API, so look for this addition soon.
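To give a rough idea of what a pluggable rule model can look like, here is a simplified sketch of the general pattern. This is my own illustration, not CSS Lint's actual API: the RuleSet name, the rule object shape, and the check function are all invented for the example.

```javascript
// A minimal pluggable rule registry: rules register by id and can be
// toggled on or off before a run.
function RuleSet(){
    this.rules = {};
    this.enabled = {};
}
RuleSet.prototype.register = function(rule){
    this.rules[rule.id] = rule;
    this.enabled[rule.id] = true;       // rules are on by default
};
RuleSet.prototype.toggle = function(id, on){
    this.enabled[id] = on;
};
RuleSet.prototype.run = function(css){
    var messages = [], self = this;
    Object.keys(this.rules).forEach(function(id){
        if (self.enabled[id]){
            self.rules[id].check(css, messages);
        }
    });
    return messages;
};

// An example rule in this hypothetical model: flag empty rules like "a { }"
var emptyRules = {
    id: "empty-rules",
    check: function(css, messages){
        if (/\{\s*\}/.test(css)){
            messages.push("Rule is empty.");
        }
    }
};
```

A custom build would then just register the rules you care about and skip the rest.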
In your build…
While we're happy to introduce the web interface, we also realized that you may want to incorporate this into your build system. To help, there is CSS Lint for Node.js. You can install the CSS Lint command line tool via npm:
sudo npm install -g csslint
Once installed, you can pass in any number of files or directories with CSS files:
csslint foo.css bar.css dir_of_css/
The tool then outputs the same information as the web interface.
Contribute
CSS Lint is completely open source and available on GitHub. We're actively looking for people to contribute rules, bug fixes, and extensions. The rules, by the way, are completely pluggable. You can easily strip out rules you don't want or add new rules that are more specific to your needs. Then, build a custom version that suits your needs or contribute the changes back.
The CSS parser that it's built on is also open source and available on GitHub. There are some known issues with the parser that I'm planning on addressing soon, but it's generally CSS3-compliant.
I hope Nicole and I will be able to make more tools of this nature to help everyone write better front-end code. Enjoy!





June 3, 2011
On leaving Yahoo! and what's next
After nearly five years, today is my last day at Yahoo!. It really seems like only yesterday I was blogging about my new job and packing up my Peabody, Massachusetts condo to move to California. My plan at the time was to work at Yahoo! for a year to help finish work on My Yahoo! and then evaluate if I wanted to stay in California or move back to Massachusetts. Somewhere along the line, I forgot to make that evaluation and before I knew it, years had passed.
Leaving Yahoo! is incredibly difficult for me. I moved to California without knowing anyone, and so for the past five years, Yahoo! has been my family. Almost everyone I know I met through Yahoo!, including some wonderful and inspirational mentors to whom I will be forever grateful. I was welcomed into the outstanding Yahoo! front-end community and learned so much through hallway conversations and internal talks as well as through my day job. The passion of the front-end engineers at Yahoo! is truly inspirational and is something I will remember for a very long time.
To everyone at Yahoo!: thank you for every single moment of the past five years. I've learned so much from all of you, and your willingness to share and educate is truly your greatest strength. Walking away from you is one of the hardest things I've had to do, and I will continue to cheer for you and hope for your success – but from now on, I'll do so on the sidelines.
I'm leaving Yahoo! to take a risk. Earlier this year, a few things happened that caused me to do some deep thinking about my life and what I want from it. I realized that I was in a pretty good position: I have some money saved up and no major expenses. I'm not married, don't have kids, and don't have a mortgage. I'm a big believer that you should strive for stability and security in life, and once you find it, take a risk and repeat the process. This seemed like the perfect time to make a big leap. So what's next is actually a couple of things.
First, a friend had been talking about a startup idea, and the more I thought about it, the more I liked it. I approached him to ask if he could use my help and off we went. I've been working nights and weekends for several months on this idea and we're now getting close to having something real. The things I like about this product include its simplicity, utility to the average person, and the business model. Yes, I'm speaking in generalities because we're not ready to unleash it yet, but hopefully soon.
So I'll be spending most of my time over the next few months working on this startup idea and trying to make it real. I'm proud to say that the team for this product is made up of several former Yahoos, which is another reason that it is so appealing. We've got a really great group of people working on this product, though there is room for at least one more: I'm looking for a really great back-end engineer, preferably with search experience, to join the team. If you're interested and live in the San Francisco Bay Area, please contact me.
The second thing I'll be doing is teaming up with my friend (and former Yahoo) Nicole Sullivan to do consulting work. Nicole and I have talked off and on about working together on outside projects after having fun working together on a couple of projects at Yahoo!. Between the two of us, we hope to provide a wide range of front-end consulting services including performance evaluations, general architecture, and of course, JavaScript and CSS. If you're interested in hiring us, please email projects (at) stubbornella.org.
With all of the changes, there are also some things that won't change. I'll still be living in California (for the time being). I'll still be a contributor to YUI and will be pushing YUI Test towards version 1.0. I'll still be writing books and blog posts. I'll still be speaking at conferences. And most importantly, I'll still be working with some of the great friends I've made while at Yahoo!.
Hopefully five years from now I'll look back and see this as another good decision in my life. And if not, at least it will be an interesting experience.





May 3, 2011
Better JavaScript animations with requestAnimationFrame
For a long time, timers and intervals have been the state of the art for JavaScript-based animations. While CSS transitions and animations make some animations easy for web developers, little has changed in the world of JavaScript-based animation over the years. That is, until Firefox 4 was released with the first way to improve JavaScript animations. But to fully appreciate the improvement, it helps to take a look at how animations have evolved on the web.
Timers
The very first pattern for creating animations was to use chained setTimeout() calls. Long-time developers will remember the obsession with statusbar news tickers that littered the web during Netscape 3's heyday. It usually looked something like this:
(function(){
    var msg = "NFL Draft is live tonight from Radio City Music Hall in New York City!",
        len = 25,
        pos = 0,
        padding = msg.replace(/./g, " ").substr(0, len),
        finalMsg = padding + msg;

    function updateText(){
        var curMsg = finalMsg.substr(pos++, len);
        window.status = curMsg;
        if (pos == finalMsg.length){
            pos = 0;
        }
        setTimeout(updateText, 100);
    }

    setTimeout(updateText, 100);
})();
If you want to test this code out in a browser, create an element to hold the text and use that instead of window.status, as I did in this newsticker example.
This annoying web pattern was later countered with restrictions on window.status, but the basic technique re-emerged with the release of Internet Explorer 4 and Netscape 4, the first browsers to give developers more control over how elements were laid out on the page. With that came the ability to dynamically change the size, location, color, etc. of elements using JavaScript, and a whole new breed of animations. For example, the following animates a <div> to a width of 100% (often found in progress bars):
(function(){
    function updateProgress(){
        var div = document.getElementById("status");
        div.style.width = (parseInt(div.style.width, 10) + 5) + "%";
        if (div.style.width != "100%"){
            setTimeout(updateProgress, 100);
        }
    }
    setTimeout(updateProgress, 100);
})();
Even though the animated parts of the page were different, the basic technique remained the same: make a change, use setTimeout() to yield and let the page update, then the timer would be called to apply the next change. This process repeated until the animation was complete (see the progressbar in action). Same technique as the early status scrollers, just a different animation.
Chaining calls to setTimeout() together, as in both of these examples, creates an animation loop. Animation loops are used in computer programs to handle updating a user interface at regular intervals. All animation loops operate the same way: make an update, sleep, make an update, sleep. Early on, setTimeout() was the primary animation loop technique for JavaScript.
Intervals
With the successful re-introduction of animations to the web (much to the dismay of purists like myself), came new explorations. It was no longer good enough to have just one animation, there had to be multiple. The first attempts were to create multiple animation loops, one for each animation. Creating multiple timers using setTimeout() proved to be a bit much for these early browsers to handle, and so developers began using a single animation loop, created with setInterval(), to manage all of the animations on the page. A basic animation loop using setInterval() looks like this:
(function(){
    function updateAnimations(){
        updateText();
        updateProgress();
    }
    setInterval(updateAnimations, 100);
})();
To build out a small animation library, the updateAnimations() method would cycle through the running animations and make the appropriate changes to each one (see both a news ticker and a progressbar running together). If there are no animations to update, the method can exit without doing anything and perhaps even stop the animation loop until more animations are ready for updating.
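The idea of stopping the loop when nothing needs updating can be sketched like this. This is my own minimal illustration, not code from the original examples; it assumes animation callbacks return false when they're finished.

```javascript
var animations = [];    // currently-running animation callbacks
var timer = null;       // the single shared interval

function addAnimation(update){
    animations.push(update);
    if (timer === null){    // restart the loop on demand
        timer = setInterval(updateAnimations, 100);
    }
}

function updateAnimations(){
    // run each animation; drop the ones that report they're finished
    animations = animations.filter(function(update){
        return update() !== false;
    });
    if (animations.length === 0){    // nothing left to animate, so stop
        clearInterval(timer);
        timer = null;
    }
}
```

The registry stays cheap when idle because the interval only exists while at least one animation is running.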
The tricky part about this animation loop is knowing what the delay should be. The interval has to be short enough to handle a variety of different animation types smoothly but long enough so as to produce changes the browser could actually render. Most computer monitors refresh at a rate of 60 Hz, which basically means there's a repaint 60 times per second. Most browsers cap their repaints so they do not attempt to repaint any more frequently than that, knowing that the end user gets no improvement in experience.
Given that, the best interval for the smoothest animation is 1000ms / 60, or about 17ms. You'll see the smoothest animation at this rate because you're more closely mirroring what the browser is capable of doing. Compare this example with a 17ms interval to the previous example and you'll see a much smoother animation (it's also much faster, because the animations are updating more frequently and I've not done any calculation to take that into account). Multiple animations may need to be throttled so that they don't complete too quickly when using an animation loop with a 17ms interval.
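That throttling usually means advancing the animation based on elapsed time rather than by a fixed amount per tick. Here's my own sketch of the calculation, not code from the examples above; the function name and parameters are invented for illustration.

```javascript
// at 60 frames per second, each frame lasts 1000ms / 60, or about 17ms
var FRAME_MS = Math.round(1000 / 60);

// advance a percentage based on how much time actually passed, so a 17ms
// loop and a 100ms loop animate at the same overall speed
function advance(percent, elapsedMs, percentPerSecond){
    return Math.min(100, percent + (elapsedMs / 1000) * percentPerSecond);
}
```

With this, shrinking the interval just makes the motion smoother instead of making it finish sooner.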
The problem(s)
Even though setInterval()-based animation loops are more efficient than having multiple sets of setTimeout()-based loops, there are still problems. Neither setInterval() nor setTimeout() are intended to be precise. The delay you specify as the second argument is only an indication of when the code is added in the browser's UI thread queue for possible execution. If there are other jobs in the queue ahead of it, then that code waits to be executed. In short: the millisecond delay is not an indication of when the code will be executed, only an indication of when the job will be queued. If the UI thread is busy, perhaps dealing with user actions, then that code will not execute immediately.
Understanding when the next frame will be drawn is key to smooth animations, and until recently, there was no way to guarantee when the next frame would be drawn in a browser. As <canvas> became popular and new browser-based games emerged, developers became increasingly frustrated with the inaccuracy of setInterval() and setTimeout().
Exacerbating these problems is the timer resolution of the browser. Timers are not accurate to the millisecond. Here are some common timer resolutions[1]:
Internet Explorer 8 and earlier have a timer resolution of 15.625ms
Internet Explorer 9 and later have a timer resolution of 4ms.
Firefox and Safari have a timer resolution of ~10ms.
Chrome has a timer resolution of 4ms.
A timer resolution of 15.625ms means that any value between 0 and 15 results in either 0 or 15 but nothing else. So even if you set your interval for the optimal display rate, you're still only getting close to the timing you want.
mozRequestAnimationFrame
Robert O'Callahan of Mozilla was thinking about this problem and came up with a unique solution. He pointed out that CSS transitions and animations benefit from the browser knowing that some animation should be happening, so it can figure out the correct interval at which to refresh the UI. With JavaScript animations, the browser has no idea that an animation is taking place. His solution was to create a new method, called mozRequestAnimationFrame(), that indicates to the browser that some JavaScript code is performing an animation. This allows the browser to optimize appropriately after running some code.
The mozRequestAnimationFrame() method accepts a single argument, which is a function to call prior to repainting the screen. This function is where you make appropriate changes to DOM styles that will be reflected with the next repaint. In order to create an animation loop, you can chain multiple calls to mozRequestAnimationFrame() together in the same way previously done with setTimeout(). Example:
function updateProgress(){
    var div = document.getElementById("status");
    div.style.width = (parseInt(div.style.width, 10) + 5) + "%";
    if (div.style.width != "100%"){
        mozRequestAnimationFrame(updateProgress);
    }
}
mozRequestAnimationFrame(updateProgress);
Since mozRequestAnimationFrame() only runs the given function once, you need to call it again manually the next time you want to make a UI change for the animation. You also need to manage when to stop the animation in the same way. Pretty cool, and the result is a very smooth animation as seen in this enhanced example.
So far, mozRequestAnimationFrame() has solved the problem of browsers not knowing when a JavaScript animation is happening and the problem of not knowing the best interval, but what about the problem of not knowing when your code will actually execute? That's also covered with the same solution.
The function you pass in to mozRequestAnimationFrame() actually receives an argument, which is a time code (in milliseconds since January 1, 1970) for when the next repaint will actually occur. This is a very important point: mozRequestAnimationFrame() actually schedules a repaint for some known point in the future and can tell you when that is. You're then able to determine how best to adjust your animation.
In order to determine how much time has passed since the last repaint, you can query mozAnimationStartTime, which contains the time code for the last repaint. Subtracting this value from the time passed into the callback allows you to figure out exactly how much time will have passed before your next set of changes are drawn to the screen. The typical pattern for using these values is as follows:
function draw(timestamp){
    //calculate difference since last repaint
    var diff = timestamp - startTime;

    //use diff to determine correct next step

    //reset startTime to this repaint
    startTime = timestamp;

    //draw again
    mozRequestAnimationFrame(draw);
}

var startTime = mozAnimationStartTime;
mozRequestAnimationFrame(draw);
The key is to make the first call to mozAnimationStartTime outside of the callback that is passed to mozRequestAnimationFrame(). If you call mozAnimationStartTime inside of the callback, it will be equal to the time code that is passed in as an argument.
webkitRequestAnimationFrame
The folks over at Chrome were clearly excited about this approach and so created their own implementation called webkitRequestAnimationFrame(). This version is slightly different from the Firefox version in two ways. First, it doesn't pass a time code into the callback function, so you don't know when the next repaint will occur. Second, it adds a second, optional argument, which is the DOM element where the changes will occur. So if you know the repaint will only occur inside of one particular element on the page, you can limit the repaint to just that area.
It should come as no surprise that there is no equivalent to mozAnimationStartTime, since that information isn't very useful without the time of the next paint. There is, however, a webkitCancelAnimationFrame(), which cancels the previously scheduled repaint.
If you don't need precision time differences, you can create an animation loop for Firefox 4 and Chrome 10 with the following pattern:
(function(){
    function draw(timestamp){
        //calculate difference since last repaint
        var drawStart = (timestamp || Date.now()),
            diff = drawStart - startTime;

        //use diff to determine correct next step

        //reset startTime to this repaint
        startTime = drawStart;

        //draw again
        requestAnimationFrame(draw);
    }

    var requestAnimationFrame = window.mozRequestAnimationFrame || window.webkitRequestAnimationFrame,
        startTime = window.mozAnimationStartTime || Date.now();

    requestAnimationFrame(draw);
})();
This pattern uses the available features to create an animation loop with some idea of how much time has passed. In Firefox, this uses the time code information that is available while Chrome defaults to the less-accurate Date object. When using this pattern, the time difference gives you a general idea of how much time has passed but certainly isn't going to tell you the next time a repaint will occur in Chrome. Still, it's better to have some idea of how much time has passed rather than none.
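For browsers that support neither vendor-prefixed method, the lookup can fall back to a plain timer at the optimal 17ms interval discussed earlier. This is my own sketch of that common fallback pattern; the function name and structure are invented, and only the two prefixed method names come from the article.

```javascript
// pick whichever frame-scheduling API the environment provides,
// falling back to setTimeout at roughly 60 frames per second
function getAnimationScheduler(global){
    return global.mozRequestAnimationFrame ||
           global.webkitRequestAnimationFrame ||
           function(callback){
               return setTimeout(function(){
                   callback(Date.now());    // pass a timestamp like Firefox does
               }, 1000 / 60);
           };
}

// usage sketch: var schedule = getAnimationScheduler(window);
//               schedule(draw);
```

The fallback keeps the same callback signature, so the draw function doesn't need to care which path it's running on.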
Wrap up
The introduction of the mozRequestAnimationFrame() method is perhaps the most significant contribution to improving JavaScript animations in the history of the web. As discussed, the state of JavaScript animation has been pretty much the same since the early days of JavaScript. With browsers getting better at animation and the introduction of CSS transitions and animations, it's nice to see some attention being paid to JavaScript-based animations, as these will most certainly become more important and more CPU-intensive with the proliferation of <canvas>-based games. Knowing when JavaScript is attempting animation allows browsers to do more optimal processing, including stopping that processing when a tab is in the background or when the battery on a mobile device is running low.
The requestAnimationFrame() API is now being drafted as a new recommendation by the W3C and is being worked on jointly by Mozilla and Google as part of the Web Performance group. It's good to see the two groups moving so quickly to get compatible (if not completely identical) implementations out into the wild.
Update (03-May-2011): Fixed typo, added mobile information.
Update (04-May-2011): Fixed link to enhanced example.
References
Chrome: Cranking up the clock, by Mike Belshe
requestAnimationFrame implementation (Chrome)





April 5, 2011
Lessons on @font-face from the F2E Summit
Last week, I helped host the F2E Summit at Yahoo!, our internal developer event that brings together front end engineers from all around the world. One of the most heavily covered topics was @font-face, and more specifically, its pros and cons. Before I forget all of the great information, I wanted to write it down. This information comes directly from yahoo.com/tablet: Lessons from the Tablet Front Page[1] by Matt Seeley and Case Study: Wretch New Front Page – Featuring CSS3 for the future[2] by Adam Wang. I'm doing nothing aside from summarizing their findings.
Compatibility
If you think other new web features have compatibility problems, then you might not have looked closely at @font-face. While there is reasonably decent compatibility across desktop browsers via OpenType, TrueType, and WOFF, this is not necessarily the case for mobile. iOS 4.1 and earlier only supports SVG web fonts while Android only supports TrueType; iOS 4.2 and later support OpenType, TrueType, and SVG fonts. That means your minimum CSS code to support iOS and Android ends up looking like this:
@font-face {
    font-family: "Gotham Medium";
    font-weight: normal;
    font-style: normal;
    src: url(gothmed.ttf) format(truetype),
         url(gothmed.svg#id) format(svg);
}
Perhaps not the worst thing in the world, but the compatibility issue isn't yet resolved. Since Internet Explorer prior to 9 does not support multiple values in the src property, it incorrectly parses the above as:
@font-face {
    font-family: "Gotham Medium";
    font-weight: normal;
    font-style: normal;
    src: url(gothmed.ttf)%20format(truetype),%20url(gothmed.svg#id)%20format(svg);
}
Catch that? The entire src property value turns into a single URL. That means Internet Explorer 8 and earlier makes a request to your server with this URL:
/gothmed.ttf)%20format(truetype),%20url(gothmed.svg
Note that the part following # is considered a fragment identifier and so isn't part of the HTTP request. The problem is that this causes a 404 for every page view, so even though earlier Internet Explorer versions don't support this file format, they still make a request.
To fix this, Matt used a data URI for the TTF file, which Internet Explorer 8 and earlier drops on the floor without making any further requests. He also noted that data URIs don't work for SVG fonts, as the data URI size hits some sort of upper limit on iOS.
Unexpected behavior
Matt also pointed out an unexpected behavior when applying text-overflow: ellipsis to SVG fonts. This works fine for other fonts and behaves as expected. When applied to SVG fonts, text-overflow: ellipsis causes all of the characters to disappear except for the ellipsis. So instead of seeing "My text…", you end up seeing just "…" when an SVG font is used on the element.
Performance
When talking about performance, I'm also talking about user experience, and more importantly, user-perceived performance. Both Matt and Adam touched on the performance issues surrounding @font-face. Matt pointed out that the flash of unstyled text (FOUT)[3] on iOS is actually a flash of no text. He found that even though the page would render, text using the web font would not render until the font file was completely downloaded. He further noted that the download only began when an element using the web font was received by the browser. This led to applying styles that use the web font as early in the page as possible, to trigger the download as quickly as possible. The problem isn't completely solved and is probably worth more research.
Another aspect of web font performance is the size of an individual font file. We in the United States are pretty spoiled when it comes to web fonts since our alphabet has only 26 letters and a few punctuation marks. Adam pointed out that Asian character sets are much larger, and so font files can be as large as 4-5 MB per file. So if you're thinking about using non-standard fonts for Asian web sites, you may want to think again before imposing this on your users.
Conclusion
It appears we still have a lot to learn about @font-face and its use on high-traffic web sites. There are still a lot of caveats to consider, not the least of which are compatibility and performance. I'd like to thank Matt and Adam for sharing their learnings so we can better understand the implications of a design centered around web fonts.
Update (6-Mar-2011) – Included iOS version information for web font support.
References
yahoo.com/tablet: Lessons from the Tablet Front Page by Matt Seeley
Case Study: Wretch New Front Page – Featuring CSS3 for the future by Adam Wang
Fighting the @font-face FOUT by Paul Irish





March 22, 2011
Using HTML5 semantic elements today
Over the past year, the argument over whether or not to use the new HTML5 semantic elements has morphed into how to use the new HTML5 semantic elements. All major browsers will officially support these elements before the end of the year (many before the end of the quarter), and as such, the time to start using these new elements is now. Of course, the world is not just made up of HTML5-capable browsers, and so backwards compatibility is a major question that many have attempted to answer.
The problem
The biggest issue with using the new semantic elements is how non-supporting browsers deal with them. There are essentially three possible outcomes when HTML5 elements are used in a page:
The tag is considered an error and is completely ignored. The DOM is constructed as if the tag doesn't exist.
The tag is considered an error and a DOM node is created as a placeholder. The DOM is constructed as indicated by the code but the tag has no styles applied (considered an inline element).
The tag is recognized as an HTML5 tag and a DOM node is created to represent it. The DOM is constructed as indicated by the code and the tag has appropriate styling applied (in many cases, as a block element).
As a concrete example, consider this code:

<div>
    <section>
        <h1>title</h1>
        <p>text</p>
    </section>
</div>

Many browsers (such as Firefox 3.6 and Safari 4) will parse this as a top-level <div> element with an unknown child element (<section>) that is created in the DOM but treated as an inline element. The <h1> and <p> elements are children of <section>. Because <section> is represented in the DOM, it is possible to style the element. This is case #2.

Internet Explorer prior to 9 parses this as a top-level <div> but sees <section> as an error. So <section> is ignored and then <h1> and <p> are parsed, both becoming children of <div>. The closing </section> is also seen as an error and skipped. The effective understanding of this code in the browser is equivalent to:

<div>
    <h1>title</h1>
    <p>text</p>
</div>

So older Internet Explorer browsers actually recover quite nicely from unknown elements but create a different DOM structure than other browsers. Because there is no DOM representation of the unknown element, you also cannot apply styles to <section>. This is case #1.

Of course, HTML5-capable browsers such as Internet Explorer 9, Firefox 4, and Safari 5 create the correct DOM structure and also apply the correct default styles to that element as specified in HTML5.

So the big problem is that browsers produce not only different DOM structures for the same code, but also different styling rules for the same DOM structures.
The solutions
A number of people have come up with a number of different solutions to using HTML5 elements in pages today. Each attempts to attack one or more of the specific problems already mentioned in an effort to provide cross-browser compatibility.
JavaScript shims
JavaScript shims aim to primarily solve the problem of styling HTML5 elements in older Internet Explorer browsers. There is a now-well-known quirk in Internet Explorer where it won't recognize unknown elements unless one of these elements has already been created via document.createElement(). So the browser will create a <section> DOM element and will allow styling of a <section> element so long as document.createElement("section") is called first.
Shims such as html5shim[1] use this capability to ensure that HTML5 elements correctly create DOM elements in Internet Explorer and therefore allow you to apply styles. Shims typically also set HTML5 block elements to display: block so they display correctly across other browsers as well.
I don't like this approach because it breaks one of my primary web application principles: JavaScript should not be relied on for layout. This is about more than creating a bad experience for those with JavaScript disabled, it's about making a predictable and maintainable web application codebase where there is a clear separation of concerns amongst layers. It does have the benefit of producing the same DOM structure across all browsers, thus making sure your JavaScript and CSS works exactly the same everywhere, but that benefit doesn't outweigh the downside in my opinion.
Namespace hack
Never short on hacks, Internet Explorer also has another technique for making the browser recognize unknown elements. This one first gained wide attention through Elco Klingen's article, HTML5 elements in Internet Explorer without JavaScript[2]. The technique involves declaring an XML-style namespace and then using elements with the namespace prefix, such as <html5:section>.
The html5 prefix is purely pretend and isn't official at all – you could just as well have the prefix be "foo" and the effect would be the same. With the prefix in place, Internet Explorer will recognize the new elements so that you can apply styles. This also works in other browsers, so you'll end up with the same DOM and same styling everywhere.
The downside is clear: you must use XML-style namespaces in an HTML document and also use them in CSS, meaning something like this:
html5\:section {
    display: block;
}
This isn't the way I'd like web developers to have to write their code. It's a brilliant solution to the problem but one that teaches what I consider to be an unnatural application of the new elements. I don't want to see files full of namespaced elements.
"Bulletproof" technique
I was first exposed to this technique at YUIConf 2010, when Tantek Çelik gave a talk entitled HTML5: Right Here, Right Now[3]. In that talk, Tantek suggests using an inner <div> element for each of the new HTML5 block elements, and including a CSS class name on that <div> indicating that it represents the HTML5 element. For example:

<section>
    <div class="section">
        content
    </div>
</section>

The intent of this approach is to ensure that content flows correctly in all browsers. Using one block element inside of an HTML5 element that should be a block means you'll either have a single block element (Internet Explorer < 9), a block element inside of an inline element (Firefox 3.6, Safari 4, etc.), or a block element inside of a block element (Internet Explorer 9, Firefox 4, Safari 5, etc.). In each of these three cases, the default rendering is the same.
Tantek did note one exception where this doesn't work, and that is with <hgroup>, which explicitly disallows non-heading child elements. For that he recommended putting the <div> on the outside:

<div class="hgroup">
    <hgroup>
        headings
    </hgroup>
</div>
For styling, Tantek recommended not to try to style the HTML5 element itself but rather to style the surrogate <div>. So instead of this:
section {
color: blue;
}
Use this:
.section {
color: blue;
}
The rationale is that it will be easy to automatically convert this pattern into one referencing the HTML5 element tag name later on. I'm not a fan of this part of his suggestion, since I generally do not like applying styles via tag name.
The downside of this approach is that different browsers create different DOM structures, and so you must be careful in how you write JavaScript and CSS. For instance, using the immediate child selector (>) across an HTML5 element won't work in all browsers. Also, directly accessing parentNode might return a different node in different browsers. If you have a selector such as section > .main, it will not be applied in Internet Explorer 8 and earlier, because the <section> element doesn't exist in its DOM. Whenever you cross the HTML 4 to HTML5 to HTML 4 barrier, you'll end up with these issues.
Reverse bulletproof technique
There are other posts, such as Thierry Koblentz's HTML elements and surrogate DIVs[4], that have explored reversing Tantek's approach so that the HTML5 elements appear inside of the `<div>` elements. For example:
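A sketch of the reversed pattern, reconstructed from the description:

```html
<div class="section">
    <section>
        <h1>Title</h1>
        <p>Content...</p>
    </section>
</div>
```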
The only difference is the placement of the HTML5 element – everything else is the same. Proponents like this technique because of its consistency: it works the same way for all elements, including `<hgroup>`. It's worth noting that this approach has the same caveats as Tantek's as far as selector usage and JavaScript DOM traversal go.
My approach
My main goal in choosing an approach was to ensure that I would only have to make changes to the HTML of a page. That meant zero changes to either CSS or JavaScript. Why make such a requirement? The more layers of a web application (or any application) that have to change, the more likely you are to introduce bugs. Limiting the changes to one layer limits the introduction of bugs and, if they occur, limits your search for the underlying problem to one area. For example, if a layout breaks, I'll know it was because I added an HTML5 element, rather than the combination of that plus a change to the CSS that styles that area.
After researching each of these techniques, doing some prototyping and testing, I eventually arrived back at Tantek's approach. It was the only one where I could get all of the existing pages I was prototyping with to work without requiring changes to CSS and JavaScript. Now, I didn't follow his approach to the letter and made several changes where I thought improvements could be made.
First, I never styled anything based on the class name representing the HTML5 element (so no .section in my selectors). I kept the same elements that were already in the page and used the semantic class names already applied to those elements as my style and JavaScript hooks. For instance, this code:
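A sketch of the sort of starting markup, assuming a hypothetical semantic class name of content:

```html
<div class="content">
    ...
</div>
```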
Became this code:
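A sketch of the converted markup under the same assumption, with the existing `<div>` left untouched inside the new HTML5 element:

```html
<section>
    <div class="content">
        ...
    </div>
</section>
```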
With this change, I still used .content as the style and scripting hook for that area of the page. In doing so, the JavaScript and CSS I already had didn't need to change.
Second, instead of having a special case for `<hgroup>`, I opted not to use it. The honest truth is that I didn't find anywhere in any of my existing pages where this element would have been useful. Since `<hgroup>` can only contain headings, it's mostly safe to include on its own if you really want to (assuming it's contained within another block element).
I did spend a considerable amount of time bouncing back and forth between bulletproof and reverse bulletproof trying to determine which one worked best. The key determining factor for me was that reverse bulletproof required me to add CSS to make it work. In browsers that created a DOM node for the HTML5 element but did not apply default styling, having an HTML5 block element inside of a `<div>` messed up my layouts on more than one occasion, because the HTML5 elements were treated as inline elements in those browsers. I had to explicitly add rules to make them into block elements, and that broke my own requirement of not changing CSS to make things work.
The proof
One of the things I've found incredibly frustrating in this realm of discussion is how quickly people dismiss an approach because they can find at least one situation where it doesn't work. None of the solutions presented here is perfect; none of them works in every single situation you may run into. Given any technique, I can virtually guarantee that someone can come up with a situation where it won't work. That doesn't invalidate the technique; it simply informs you of the technique's limitations so you can make a better decision.
In my research I took several existing pages and converted them to use the modified bulletproof technique. These included pages with simple layouts and complex layouts, and pages with and without JavaScript interactions. In each case, the only changes I made were to the HTML, and everything continued to work correctly (no changes to JavaScript or CSS). What about those caveats about child nodes and parent node relationships? The interesting thing is that I never ran into these problems.
Granted, the reason it may have been so easy for me is because of the rigor I apply to my coding. I religiously double-check that:
Tag names and IDs are not being used to apply styles (only use class names)
CSS selectors are as general as possible and use as few selector types as possible
JavaScript doesn't rely on a specific DOM structure to work
Tag names aren't being used to manipulate the DOM
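As a hypothetical illustration of the first two rules, styling by class rather than by tag name or structure keeps selectors independent of the surrounding DOM:

```css
/* Fragile: tied to tag names and a specific DOM structure */
section > div.content h2 {
    color: #333;
}

/* Robust: a single class, independent of the elements around it */
.content-title {
    color: #333;
}
```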
Another interesting thing I noted is that I was using the HTML5 elements as containers. These new elements really are just boundaries between groups of functionality rather than anything else. You spend most of your time styling and scripting items inside of these boundaries rather than crossing the boundaries themselves. Since my JavaScript and CSS targets what's going on inside of containers, everything continued to work. I suspect this would be the case for most sites that have been well-coded.
Conclusion
The technique I ultimately decided on and would recommend to others is a modification of Tantek's bulletproof technique. Clearly the name is a bit of a misnomer as there are some side effects in CSS and JavaScript, but in my experiments it really did seem to be the one approach that allowed me to change just the HTML of a page and have everything continue to work. I'm sure the debate will continue both inside of companies and on the Internet in general, and I hope this post helps you make an informed decision.
References
1. html5shim
2. HTML5 elements in Internet Explorer without JavaScript, by Elco Klingen
3. HTML5: Right Here, Right Now, by Tantek Çelik (Video, Slides)
4. HTML elements and surrogate DIVs, by Thierry Koblentz