Nicholas C. Zakas's Blog, page 9
August 6, 2013
Understanding how you provide value
As a geeky teenager, I watched in envy as all the pretty girls started dating the jocks, the rebels, and the mysteriously popular kids who didn’t seem to have any discernible talent. I, like most guys in my situation, would sit and watch and daydream about the day that those girls would want to date me.
By the time I got to college I was a bit obsessed with trying to hide my geeky side. I wanted to be the type of guy those girls wanted to date. I wanted to be good at picking up girls at bars, nevermind the fact that I didn’t drink nor did I have any desire to. I wanted to be exciting and dangerous, nevermind that I was afraid of my own shadow. I wanted to be a badass. As you can probably imagine, that plan didn’t go very well.
In dating, people always tell you to “be yourself” and then you’ll find someone. But myself was boring, I thought, in comparison to the guys who got all the girls. The problem with my perception was that I didn’t understand how I was providing value in a relationship. It took me some years, but then I finally figured it out: the way I provide value in a relationship is by using what I’m good at. I’m not dangerous, or exciting, or adventurous. I’m not a non-stop adrenaline rush that likes to push the edge. Girls who want that would never find value in me, and that’s okay.
My value in relationships is that I am trustworthy, I’m stable, I’m safe. I call when I say I will and show up when you need me. I remember birthdays and anniversaries, and I’ll take care of you when you’re sick. I listen, I understand, I help. I don’t run and hide when things go sideways. All of these things that I perceived as weaknesses when I was young turned out to be invaluable assets. This is how I provide value in relationships. It’s not the value everyone seeks, but it’s the value I’m best equipped to provide. And that’s what matters.
Oddly enough, I learned this lesson about love first, before realizing the same basic principle applies in other types of relationships. Understanding how you provide value is key to getting the most out of your relationships and enjoying what you do. It wasn’t until I joined Yahoo that I went through the exact same process with my career.
As the front-end lead for the Yahoo homepage, heading up a team of 20 front-end engineers located in three countries, I had become overwhelmed. There was so much work to do and I felt like I had to do it all. I was coding so much, and people were coming to me so frequently, that I was severely stressed. There was just too much going on, too much code to write, too many people’s questions to answer, just too much.
I mentioned this during a one-on-one with one of my mentors and he, as usual, had some fantastic advice. He told me that I’d transitioned into a new role and my true value to him and the team was no longer as a coder. I was confused, because when I interviewed he specifically said he needed me as a coder, and now he was telling me that this was not what I should be doing? My heart sank – I must not be good at coding anymore. He doesn’t want me doing it.
He correctly pointed out that there was too much code for me to write all by myself and that I had to learn to delegate effectively. I could no longer be the go-to guy for any important piece of code to be written; I had to learn to trust others to do that work. My role, he went on, was to be a “multiplier”. It sounded like management lingo to me. He said I was far more valuable as a leader on the team, acting as a multiplier that enabled everyone else to be more effective at their jobs. I might be a 10x coder amongst a group of 1x coders, he explained, and so if I’m spending all of my time coding then I’m providing 10 times the productivity in one area. However, if by working with the 20 other front-end engineers I can even just double or triple their productivity, then I’ve created more value for the team as a whole. With my guidance, he said, I could probably get several of the team members to get to 10x as well. I was a multiplier: by inserting me into the equation, things got better for everyone.
It took me several months to process this conversation. My ego took a severe hit because I had thought of myself as a coder for so long. I was used to coding from morning till night every day, and now I was being asked not to do that. I got depressed. If I’m not able to do what I love, should I really be doing this at all? For a brief point in time, I considered leaving high tech altogether. Doing something completely unrelated, like teaching or medicine. I was so dismayed at handing off interesting work to other people that I could see no benefit to sticking around.
Then, something changed. More specifically, the people I was working with changed for the better. I had put in place some guidelines and rules for how we should be writing code, and had devised this thing I’d end up calling code workshops. The result was that people were improving dramatically. The quality of code across the board was rising, even from people who had reputations for being sloppy coders. People I feared would end up getting fired started rising to the occasion and the group became incredibly high-functioning. It all fell into place and my mentor was completely right. I was a multiplier.
It turned out that the satisfaction I got from coding was smaller than the satisfaction I got from seeing other people succeed. In helping them do better work, I was happier than I had been previously. Through further self-reflection, I realized that I was not, in fact, the best programmer in the world. I suck at algorithms, I only understand compilers at a superficial level, the number of languages I write is pretty small, and I’m not even that fast a coder.
Where I provide value in my career is through two skills that I apply towards a large number of goals: problem-solving and communication. I’m a very good problem solver, I’m able to quickly break down alternative approaches and choose the best. That skill allows me to be very good at designing systems (as opposed to components). It allows me to step into situations and figure out a path forward even in the direst of situations and it also allows me to identify when rules are arbitrary and outdated. However, none of that would be very useful if I weren’t able to communicate effectively. I communicate clearly in whatever form the communication needs to happen: email, audio, video, face-to-face. I just know how to talk to people and get my point across in a way that is consumable by my audience.
Don’t get me wrong, I still love coding, and I do it whenever I can, but I realize that my professional value is in being that multiplier. The more I can help others to do better work, the more valuable I become. It was by focusing on these qualities, the ones where I provide the most value, that I was able to progress in my career. Likewise, by accepting how I provided value in my dating life, I ended up in some very fulfilling relationships.
Understanding how you provide value to a situation, relationship, career, or anything else, is incredibly important. To borrow a sports analogy, understanding your role on a team is what leads to team success. In basketball, everyone can’t have the ball at the same time. Sometimes your role is to pass, sometimes your role is to score, sometimes your role is to rebound. All of the roles are important and all of them provide value to the team.
All of this can be summed up really well in one of my favorite moments from one of my favorite TV shows, SportsNight.
Taking the time to figure out how you provide value to a situation is a worthwhile pursuit. Once you understand that, and come to grips with exactly why you are valuable, the rest is easy.





July 16, 2013
Introducing ESLint
A long time ago, JSLint was the state of the art in JavaScript linting technology. Then JSHint came along as a fork and took over due to increased flexibility. I welcomed JSHint as my linter of choice and used it everywhere, happily submitting patches and customizing which rules to apply based on the project. At some point I started to feel stifled and frustrated by JSHint as well. There’s no easy way to add additional rules or to create your own that may be project-specific.
One of the design decisions made on CSS Lint was to make all of the rules pluggable. Each rule would be a standalone file with a standalone test file accompanying it. In this way, it would be easy to incorporate new rules at any point in time and compile them into the final, distributed version. I really wanted the ability to do the same thing, and more, for JavaScript.
After talking with Anton about the possibilities available with JSHint, we both came to the conclusion that it wouldn’t be possible to do what I wanted. I really wanted an AST to evaluate for context and to be able to dynamically plug in new rules at any time, including run time.
This is ESLint
And so I somewhat regrettably introduce ESLint. ESLint is a JavaScript linting tool built on top of Esprima. The goal of the project is to create a linting tool where all rules are pluggable. This is achieved by having one rule per file and allowing each rule to inspect the AST at the points it wants (a rough sketch of what a rule looks like follows the feature list below). Additionally, here are some of the key features:
Easy to create and incorporate new rules by inspecting the AST.
Rules can be dynamically loaded at runtime, so if you have a company- or project-specific rule that isn’t appropriate for inclusion in the tool, you can still easily use it.
All rules are turned on and off the same way, avoiding the confusing rule configuration used by JSLint and inherited by JSHint.
Individual rules can be configured as warnings, errors, or disabled. Errors make ESLint return a non-zero error code while warnings have a zero exit code.
The output format for results is also completely pluggable. There’s only one formatter now but you can easily create more. These will also eventually be able to be dynamically loaded at runtime.
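To make the pluggable-rule idea concrete, here is a rough sketch of the shape a rule file takes: a module that receives a context object and returns handlers for the AST node types it cares about. The specific node type and message below are just for illustration.

module.exports = function(context) {
    return {
        // called for every `with` statement found in the AST
        "WithStatement": function(node) {
            context.report(node, "Unexpected use of 'with' statement.");
        }
    };
};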
How ESLint differs from JSHint
Despite similar goals, ESLint and JSHint have some very specific differences. First and foremost, JSHint uses a progressive parser, finding errors along the way. ESLint uses Esprima, so the parsing is done first and then the rules are applied. That means JSHint will print out warnings up to and including a syntax error where ESLint will show only the syntax error. This makes JSHint much better for use in editors.
ESLint is much better suited for use in build systems and as a general command line utility. It works great for pre-commit hooks.
ESLint is a two-pass utility. The first pass is done by Esprima to parse the JavaScript and the second pass is a traversal of the AST to apply certain rules. JSHint is a single-pass utility, meaning that it will generally be faster.
ESLint is strictly a Node.js utility. JSHint runs on most JavaScript runtimes, including Rhino.
You can help
The project is in a good enough state that I can now start asking for contributions from others. What you can do:
Write documentation on the wiki
Create new formatters
Create new rules (I want to get feature parity with the important JSHint rules)
Work on some open issues
Anything else you want
I want the development of ESLint to be as open as possible and accept as many contributions as possible. I’ve already started a Developer Guide to help people get started, but what the project really needs is contributions from the community.
I’m excited about this project, as I believe it’s providing a key missing piece in the JavaScript toolchain. The ability to create arbitrary rules for your project in a standard way is a powerful capability that enables a whole host of possibilities. I’m already planning to get this into our JavaScript workflow at Box, and I hope others will do the same.





July 9, 2013
The case for setImmediate()
One of my favorite new APIs that has been beaten about is setImmediate(). While I’ll concede the naming is completely wrong, the functionality is completely awesome. The basic idea is to tell the browser that you want some JavaScript code executed after the last UI task in the event loop completes. To put it more simply, this is a much better implementation of setTimeout(fn, 0). Since browsers clamp their timers to 4ms, it really doesn’t matter if you say 0, 1, 2, 3, or 4. You aren’t actually going to get exactly what you specified and so using setTimeout(fn, 0) introduces an observable delay as well as the overhead of using a timer when it’s not needed (see more about this in my Velocity talk from last year[1]). The setImmediate() function was designed to do what setTimeout(fn, 0) seems like it should do.
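As a rough sketch of the intent (not a full polyfill), the pattern most people actually want looks something like this: use setImmediate() when the browser provides it and fall back to the clamped timer otherwise.

function defer(fn) {
    // use the real thing when available (IE10+), otherwise accept the 4ms clamp
    if (typeof setImmediate === "function") {
        setImmediate(fn);
    } else {
        setTimeout(fn, 0);
    }
}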
The setImmediate() method was first described in Efficient Script Yielding[2] and spearheaded by Microsoft. To date, Internet Explorer 10+ is the only browser to implement setImmediate(). For some reason both Mozilla and WebKit are against adding this method[3] despite what I consider to be a large amount of evidence that this would be a useful addition to the web platform. Here’s why.
Animations and requestAnimationFrame()
One of the primary arguments against including setImmediate() is that we already have requestAnimationFrame(), and that is actually what people mean to do when they use setTimeout(fn, 0). If I were to make up statistics about the correctness of this assumption, I would say that this is probably the case 50% of the time. That is to say, I believe that half of the setTimeout(fn, 0) calls on the Internet are likely tied to animations in some way, and in that case requestAnimationFrame() is the correct answer.
However, I have been a big proponent of using timers to break up large chunks of work that need to be done (see my post, Timed array processing in JavaScript[4]). In fact, my work was cited on Microsoft’s demo page[5] as part of the reason for introducing setImmediate(). This particular use case is ill-suited to requestAnimationFrame() because it doesn’t necessarily require a UI update.
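Here’s a minimal sketch of that use case, loosely based on the timed array processing idea: work through a large array in small batches, yielding back to the browser between batches. The function names and batch size are illustrative, and it assumes setImmediate() (or a fallback like the defer() sketch above) is available.

function processInChunks(items, process, done) {
    var index = 0;

    function nextBatch() {
        var count = 0;

        // handle a small batch of items per turn of the event loop
        while (index < items.length && count < 100) {
            process(items[index++]);
            count++;
        }

        if (index < items.length) {
            setImmediate(nextBatch);    // yield, then keep going
        } else if (done) {
            done();
        }
    }

    nextBatch();
}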
Keep in mind that requestAnimationFrame() schedules a new paint task in the event loop queue and allows the specified code to execute prior to that paint task occurring. This is in sharp contrast to setImmediate() where its JavaScript task is inserted after the last paint task already existing in the event loop queue. The implementation differences lend to the use case differences.
process.nextTick() and callbacks
The assertion that setTimeout(fn, 0) is always used for animations also falls apart when you look at Node.js. Node.js has had a method called process.nextTick()[6] for a long time. This method is similar to setImmediate() in Node.js: it simply inserts a task at the end of the current event loop turn’s queue (whereas setImmediate() inserts a task at the beginning of the next event loop turn). If setTimeout(fn, 0) is mostly used for animations then why would Node.js, an environment devoid of graphics, find it necessary to add such a method?
First and foremost, Node.js was designed to use asynchronous processing wherever possible. That means using callbacks to be notified when an operation is complete. The callbacks must always be executed asynchronously for consistency even when the result could be achieved synchronously. To illustrate this, consider a read-through cache of a remote data fetch, such as:
var remote = require("./remote"),
    cache = {};

function getValue(key, callback) {
    if (key in cache) {
        process.nextTick(function() {
            callback(key, cache[key]);
        });
    } else {
        remote.fetch(key, function(value) {
            cache[key] = value;
            callback(key, cache[key]);
        });
    }
}
In this case, remote data is stored in a variable called cache whenever it is retrieved. Whenever getValue() is called, the cache is checked first to see if the data is there and otherwise makes the remote call. Here’s the issue: the actual remote call will return asynchronously while reading from cache is executed synchronously. You wouldn’t want callback() to be called synchronously in the cached situation and asynchronously in the non-cached situation as it would completely destroy the application flow. So process.nextTick() is used to ensure the same flow by deferring execution of the callback until the next time through the event loop.
When you want to do something like the previous example, using a timer is incredibly inefficient, which is why process.nextTick() was created (as mentioned in the Node.js documentation itself[7]). Node.js as of v0.9 also includes setImmediate(), which is equivalent to the browser version.
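A quick sketch of the ordering described above (Node.js 0.9+): the nextTick callback runs before the event loop turns over, so it fires before the setImmediate() callback even though it was scheduled second.

setImmediate(function() {
    console.log("setImmediate: start of the next turn of the event loop");
});

process.nextTick(function() {
    console.log("process.nextTick: end of the current turn, so this logs first");
});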
This asynchronous callback pattern isn’t unique to Node.js. As more browser APIs move to asynchronous models, the need to defer code execution until the next time through the event loop becomes more and more important. Read-through caches in the browser will only become more prominent, as I suspect will the need to defer execution so that it doesn’t block UI interaction (something that can sometimes, but not always, be done via web workers).
Polyfills, etc.
In 2010, David Baron of Mozilla wrote what has become the definitive resource for creating truly zero-millisecond timeout calls, entitled setTimeout with a shorter delay[8]. David’s post highlighted the desire for shorter timeout delays and introduced a method to achieve it using the postMessage() API.
The approach is a bit circuitous but nonetheless effective. The onmessage event handler is called asynchronously after a window receives a message, so the approach is to post a message to your own window and then use onmessage to execute what would otherwise be passed into setTimeout(). This approach works because the messaging mechanism is effectively using the same methodology as setImmediate() under the hood.
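Here’s a simplified sketch along the lines of David’s approach (the message marker and function name are illustrative, and a real polyfill needs more bookkeeping):

var deferredCallbacks = [];

window.addEventListener("message", function(event) {
    // only react to our own marker messages
    if (event.source === window && event.data === "zero-timeout") {
        event.stopPropagation();
        if (deferredCallbacks.length) {
            deferredCallbacks.shift()();
        }
    }
}, true);

function setZeroTimeout(fn) {
    deferredCallbacks.push(fn);
    window.postMessage("zero-timeout", "*");
}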
Since David’s post, a number of polyfills, most notably the NobleJS version[9], have been released. Such polyfills continue to get used despite the availability of requestAnimationFrame() due to the different use case.
What’s holding us back?
As mentioned previously, both Mozilla and WebKit have been against setImmediate() for some reason. The arguments seem to range from “you should just use requestAnimationFrame()” to “it’s easy enough to create a polyfill, just do that instead.” The first argument I hope I’ve already dispelled in this post; the second I find devoid of meaning, as there are plenty of things that are easy to polyfill (note: setImmediate() is not actually one of them) and yet are still standardized. The classList property of DOM elements comes to mind.
What’s most surprising has been the reaction of the Chrome team, a group who I’ve credited numerous times for pushing forward on incremental API changes that make the web platform better. In a ticket asking for support[10] there is a conversation that has gone nowhere, instead focusing on how polyfills could work (the bug is still open but hasn’t been updated in a while). A more recent one looking at Chrome’s poor performance on an IE11 demo[11] that uses setImmediate() has a rather disappointing sequence of comments:
…Yup, it looks like a bug in their test. They’re specifically using setTimeout() in the Chrome version which gets clamped to 5ms (as per the spec). If they used postMessage() instead then it would run fine in Chrome…
…To summarize my findings, this test is running intentionally slow JS in all browsers besides IE. This is because setTimeout(0) is incorrectly used as a polyfill for setImmediate()…
…Yes, that’s due to a bug in the test, see comment #7. The test basically has a check for IE11 (more or less) and does something unnecessarily slow on all other browsers…
So basically, the commenters are saying that the IE demo was rigged to make IE look faster than the other browsers because it wasn’t including the correct hack to do something similar in Chrome. Lest we ascribe evil to every single thing Microsoft does, I would suggest several alternative explanations:
Many people are using setTimeout(fn, 0) when they would much rather use setImmediate(), so this is a reasonable, cross-browser fallback.
The person writing the test likely didn’t have the bandwidth to fully develop a polyfill for setImmediate(). The person’s goal was to write a demo page, not write a library.
Why would they include a postMessage()-based solution when the whole point of the demo was to show the utility of setImmediate()?
This was an omission because the person writing the demo didn’t have the knowledge about alternative polyfills.
I would comment on the bug itself, but comments have been locked-down for some reason.
The interesting thing is that with a postMessage() polyfill, the commenters claim that Chrome runs faster than IE11 in this demo. That’s great, now why not just wrap that into a formal setImmediate() implementation and then brag about how you beat IE at their own API? I’d buy you a beer for that!
Conclusion
I’m still not entirely sure why there’s such an allergy to setImmediate() outside of Microsoft. It has demonstrated utility and some pretty clear use cases that are not adequately serviced by requestAnimationFrame() or any other means. Node.js recognized this from the start so we know for sure that there are non-UI-based reasons for using setTimeout(fn, 0).
I can’t draw any conclusions as to why this particular API, which pretty much just exposes something that’s already in the browser, is being vilified. It seems like there’s enough data at this point to say that setImmediate() is useful – the presence of polyfills alone is a strong indicator as are the continuing discussions around why postMessage() is faster than setTimeout(fn, 0). I think it’s time for the holdouts to listen to what developers are asking for and implement setImmediate().
Update (09-July-2013): Updated description of process.nextTick() per Isaac’s comments below.
References
July 2, 2013
Internet Explorer 11: “Don’t call me IE”
This past week, Microsoft officially unveiled the first preview of Internet Explorer 11 for Windows 8.1[1]. Doing so put to rest a whirlwind of rumors based on leaked versions of the much-maligned web browser. We now know some very important details about Internet Explorer 11, including its support for WebGL, prefetch, prerender, flexbox, mutation observers, and other web standards. Perhaps more interestingly, though, is what is not in Internet Explorer 11.
For the first time in a long time, Microsoft has actually removed features from Internet Explorer. The user-agent string has also changed. It seems that Microsoft has gone out of their way to ensure that all existing isIE() code branches, whether in JavaScript or on the server, will return false for Internet Explorer 11. The optimistic view of this change is that Internet Explorer 11 finally supports enough web standards such that existing IE-specific behavior is no longer needed.
User-agent changes
The user-agent string for Internet Explorer 11 is shorter than previous versions and has some interesting changes:
Mozilla/5.0 (Windows NT 6.3; Trident/7.0; rv:11.0) like Gecko
Compare this to the Internet Explorer 10 user-agent string (on Windows 7):
Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0)
The most glaring difference is the removal of the “MSIE” token, which has been part of the Internet Explorer user-agent from the beginning. Also noticeable is the addition of “like Gecko” at the end. This suggests that Internet Explorer would prefer to be identified as a Gecko-type browser if it’s not identified as itself. Safari was the first browser to add “like Gecko” so that anyone sniffing for “Gecko” in the user-agent string would allow the browser through.
Any sniffing code that looks for “MSIE” now will not work with the new user-agent string. You can still search for “Trident” to identify that it’s Internet Explorer (the “Trident” token was introduced with Internet Explorer 9). The true Internet Explorer version now comes via the “rv” token.
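For anyone stuck maintaining sniffing code, here is a minimal sketch of what detection against the new string might look like (the function name is made up):

function getIE11Version(ua) {
    // IE11 identifies itself only by the "Trident" token plus an "rv" version
    if (ua.indexOf("Trident/") > -1) {
        var match = /rv[ :](\d+(\.\d+)?)/.exec(ua);
        if (match) {
            return parseFloat(match[1]);    // 11 for the string shown above
        }
    }
    return null;    // no rv token, so this isn't an IE11-style user-agent
}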
Additionally, there are changes to the navigator object that also obscure which browser is being used:
navigator.appName is now set to “Netscape”
navigator.product is now set to “Gecko”
This may seem like a sneaky attempt to trick developers, but this behavior is actually specified in HTML5[2]. The navigator.product property must be “Gecko” and navigator.appName should be either “Netscape” or something more specific. Strange recommendations, but Internet Explorer 11 follows them.
The side effect of these navigator changes is that JavaScript-based logic for browser detection may end up using these and will end up identifying Internet Explorer 11 as a Gecko-based browser.
document.all and friends
Since Internet Explorer 4, document.all has been an omnipresent force in Internet Explorer. Prior to the implementation of document.getElementById(), document.all was the “IE way” of getting an element reference. Despite Internet Explorer 5’s DOM support, document.all has remained in Internet Explorer through version 10. As of 11, this vestige of a bygone era has now been made falsy, meaning that any code branches based on the presence of document.all will fail for Internet Explorer 11 even though code that actually uses document.all will work.[3]
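In practice, that means presence checks now take the standards branch even though the property itself still works (the element ID here is just for illustration):

var element;

if (document.all) {
    // old IE branch: never taken in IE11 because document.all is falsy
    element = document.all["header"];
} else {
    // standards branch: this is what IE11 (and everyone else) runs
    element = document.getElementById("header");
}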
Another holdover is the attachEvent() method for adding event handlers. This method, as well as detachEvent(), has now been removed from Internet Explorer 11. Removing these methods is a means to short-circuit logic such as:
function addEvent(element, type, handler) {
    if (element.attachEvent) {
        element.attachEvent("on" + type, handler);
    } else if (element.addEventListener) {
        element.addEventListener(type, handler, false);
    }
}
Of course, it’s recommended that you always test for the standards-based version first, in which case the removal of attachEvent() would yield no different behavior. However, the Internet is littered with bad feature detection logic and removing attachEvent() ensures that any code written in the above manner will use the standard version instead of the IE-specific one.
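For comparison, a sketch with the recommended standards-first ordering behaves the same way before and after the removal:

function addEvent(element, type, handler) {
    // test for the standard method first, fall back to the IE-specific one
    if (element.addEventListener) {
        element.addEventListener(type, handler, false);
    } else if (element.attachEvent) {
        element.attachEvent("on" + type, handler);
    }
}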
Some of the other features that have been removed:
window.execScript() – IE’s own version of eval()
window.doScroll() – IE’s way of scrolling the window
script.onreadystatechange – IE’s way of listening for when a script has loaded
script.readyState – IE’s way to test the load state of a script
document.selection – IE’s way of getting currently selected text
document.createStyleSheet – IE’s way to create a style sheet
style.styleSheet – IE’s way to reference a style sheet from a style object
All of these have standards-based equivalents that should be used instead of the old Internet Explorer way of doing things. As with removing the other features, removing these means that cross-browser code that does feature detection for standards-based features should continue working without change.
Conclusion
It looks like Internet Explorer 11 could be the best Internet Explorer yet by a long shot. By finally removing the evidence of past mistakes, Microsoft is ready to take a place amongst the standards-based browsers of today. Removing old features and adjusting the user-agent string to not be identified as Internet Explorer is a rather unique move to ensure that all sites that work today continue to work. If web applications are using feature detection instead of browser sniffing, then the code should just work with Internet Explorer 11. For servers that are sniffing the user-agent, users should still get a fully functional site because of Internet Explorer 11’s excellent standards support.
A future without IE-specific code branches is near, and I for one am happy to welcome it.
Update (02-July-2013): Revised to mention that document.all is not actually removed, rather has been changed to be falsy.
References
Internet Explorer 11 preview guide for developers (MSDN)
Navigator Object – Client Identification (HTML5)
Obsolete – Behavior of document.all (HTML5)





June 25, 2013
eval() isn’t evil, just misunderstood
In all of JavaScript, I’m not sure there is a more maligned piece than eval(). This simple function, designed to execute a string as JavaScript code, has been the source of more scrutiny and misunderstanding during the course of my career than nearly anything else. The phrase “eval() is evil” is most often attributed to Douglas Crockford, who has stated[1]:
The eval function (and its relatives, Function, setTimeout, and setInterval) provide access to the JavaScript compiler. This is sometimes necessary, but in most cases it indicates the presence of extremely bad coding. The eval function is the most misused feature of JavaScript.
Since Douglas hasn’t put dates on most of his writings, it’s unclear whether he actually coined the phrase, as an article in 2003[2] also used it without mentioning him. Regardless, it has become the go-to phrase for anyone who sees eval() in code, whether or not they really understand its use.
Despite popular theory (and Crockford’s insistence), the mere presence of eval() does not indicate a problem. Using eval() does not automatically open you up to a Cross-Site Scripting (XSS) attack nor does it mean there is some lingering security vulnerability that you’re not aware of. Just like any tool, you need to know how to wield it correctly, but even if you use it incorrectly, the potential for damage is still fairly low and contained.
Misuse
At the time when “eval() is evil” originated, it was a source of frequent misuse by those who didn’t understand JavaScript as a language. What may surprise you is that the misuse had nothing to do with performance or security, but rather with not understanding how to construct and use references in JavaScript. Suppose you had several form inputs whose names contained a number, such as “option1” and “option2”; it was common to see this:
function isChecked(optionNumber) {
    return eval("forms[0].option" + optionNumber + ".checked");
}

var result = isChecked(1);
In this case, the developer is trying to write forms[0].option1.checked but is unaware of how to do that without using eval(). You see this sort of pattern a lot in code that is around ten years old or older, as developers of that time just didn’t understand how to use the language properly. The use of eval() is inappropriate here because it’s unnecessary, not because it’s bad. You can easily rewrite this function as:
function isChecked(optionNumber) {
    return forms[0]["option" + optionNumber].checked;
}

var result = isChecked(1);
In most cases of this nature, you can replace the call to eval() by using bracket notation to construct the property name (that is, after all, one reason it exists). Those early bloggers who talked about misuse, Crockford included, were mostly talking about this pattern.
Debugability
A good reason to avoid eval() is for debugging purposes. Until recently, it was impossible to step into eval()ed code if something went wrong. That meant you were running code into a black box and then out of it. Chrome Developer Tools can now debug eval()ed code, but it’s still painful. You have to wait until the code executes once before it shows up in the Source panel.
Avoiding eval()ed code makes debugging easier, allowing you to view and step through the code easily. That doesn’t make eval() evil, necessarily, just a bit problematic in a normal development workflow.
Performance
Another big hit against eval() is its performance impact. In older browsers, you encountered a double interpretation penalty, which is to say that your code is interpreted and the code inside of eval() is interpreted. The result could be ten times slower (or worse) in browsers without compiling JavaScript engines.
With today’s modern compiling JavaScript engines, eval() still poses a problem. Most engines can run code in one of two ways: fast path or slow path. Fast path code is code that is stable and predictable, and can therefore be compiled for faster execution. Slow path code is unpredictable, making it hard to compile and may still be run with an interpreter[3]. The mere presence of eval() in your code means that it is unpredictable and therefore will run in the interpreter – making it run at “old browser” speed instead of “new browser” speed (once again, a 10x difference).
Also of note, eval() makes it impossible for YUI Compressor to munge variable names that are in scope of the call to eval(). Since eval() can access any of those variables directly, renaming them would introduce errors (other tools like Closure Compiler and UglifyJS may still munge those variables – ultimately causing errors).
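A small sketch of why minifiers have to back off (the variable and function names here are made up):

function lookup(name) {
    var secretValue = 42;       // can't safely be renamed by a minifier...
    return eval(name);          // ...because eval() may refer to it by its original name
}

lookup("secretValue");          // 42, but renaming secretValue would break this call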
So performance is still a big concern when using eval(). Once again, that hardly makes it evil, but is a caveat to keep in mind.
Security
The trump card that many pull out when discussing eval() is security. Most frequently the conversation heads into the realm of XSS attacks and how eval() opens up your code to them. On the surface, this confusion is understandable, since by its definition eval() executes arbitrary code in the context of the page. This can be dangerous if you’re taking user input and running it through eval(). However, if your input isn’t from the user, is there any real danger?
I’ve received more than one complaint from someone over a piece of code in my CSS parser that uses eval()[4]. The code in question uses eval() to convert a string token from CSS into a JavaScript string value. Short of creating my own string parser, this is the easiest way to get the true string value of the token. To date, no one has been able or willing to produce an attack scenario under which this code causes trouble because:
The value being eval()ed comes from the tokenizer.
The tokenizer has already verified that it’s a valid string.
The code is most frequently run on the command line.
Even when run in the browser, this code is enclosed in a closure and can’t be called directly.
Of course, since this code has a primary destination of the command line, the story is a bit different.
Code designed to be used in browsers faces different issues; however, the security of eval() typically isn’t one of them. Once again, if you are taking user input and passing it through eval() in some way, then you are asking for trouble. Never ever do that. However, if your use of eval() has input that only you control and cannot be modified by the user, then there are no security risks.
The most common attack vector cited these days is in eval()ing code that is returned from the server. This pattern famously began with the introduction of JSON, which rose in popularity specifically because it could quickly be converted into a JavaScript object by using eval(). Indeed, Douglas Crockford himself used eval() in his original JSON utility due to the speed with which it could be converted. He did add checks to make sure there was no truly executable code, but the implementation was fundamentally eval().
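For illustration, the two approaches side by side (the JSON text is made up):

var jsonText = '{"name":"Nicholas","admin":false}';

// the original approach: fast, but it executes whatever the string contains
var data = eval("(" + jsonText + ")");

// the modern approach: the built-in parser rejects anything that isn't valid JSON
var safeData = JSON.parse(jsonText);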
These days, most use the built-in JSON parsing capabilities of browsers for this purpose, though some still fetch arbitrary JavaScript to execute via eval() as part of a lazy-loading strategy. This, some argue, is the real security vulnerability. If a man-in-the-middle attack is in progress, then you will be executing arbitrary attacker code on the page.
The man-in-the-middle attack is wielded as the ever-present danger of eval(), opening the security can of worms. However, this is one scenario that doesn’t concern me in the least, because anytime you can’t trust the server you’re contacting means any number of bad things are possible. Man-in-the-middle attacks can inject code onto the page in any number of ways:
By returning attacker-controlled code for JavaScript loaded via <script> elements.
By returning attacker-controlled code for JSON-P requests.
By returning attacker-controlled code from an Ajax request that is then eval()ed.
Additionally, such an attack can easily steal cookies and user data without altering anything, let alone the possibility for phishing by returning attacker-controlled HTML and CSS.
In short, eval() doesn’t open you up to man-in-the-middle attacks any more than loading external JavaScript does. If you can’t trust the code from your server then you have much bigger problems than an eval() call here or there.
Conclusion
I’m not saying you should go run out and start using eval() everywhere. In fact, there are very few good use cases for running eval() at all. There are definitely concerns with code clarity, debugability, and certainly performance that should not be overlooked. But you shouldn’t be afraid to use it when you have a case where eval() makes sense. Try not using it first, but don’t let anyone scare you into thinking your code is more fragile or less secure when eval() is used appropriately.
References
About JSLint by Douglas Crockford (JSLint)
Eval is evil, Part One by Eric Lippert (Eric’s blog)
Know Your Engines by David Mandelin (SlideShare)
eval() usage in my CSS parser by me (GitHub)





May 28, 2013
On the politics, cargo-culting, and maintainability of JavaScript
There has recently been a renewed focus on what I’ve come to call the anti-convention movement in JavaScript. It seems like once or twice a year, someone either does a talk or writes an article saying that all of the things so-called JavaScript experts tell you are wrong and you should do whatever you want. I take notice because frequently I’m listed along with those who are telling you not to do certain things (you know, the people you shouldn’t listen to). The most recent contributions are Angus Croll’s Politics of JavaScript talk[1] from Web Directions and James Padolsey’s NetTuts article, Cargo-Culting in JavaScript[2]. Both take stances against commonly held beliefs in how you should write JavaScript. While I always enjoy a good debate about whether best practices makes sense or not, I feel that sometimes the discussion ends up in the wrong place.
Maintainability
I have a bias. I think that maintainability is important in all code (not just with JavaScript). If you are at all familiar with my work, then this will come as no surprise. After all, I’ve written a book about maintainability practices in JavaScript and I’ve written several articles and given talks about the subject as well. To me, maintainability is about creating high functioning teams that can move seamlessly between one another’s code. Code conventions and other best practices designed to increase maintainability do so by decreasing the likelihood that two people on the same team will solve the same problem differently. That may seem like a minor point to some, but in practice, seeing things the same way is important for teams.
I like to think of American football as a good example. Perhaps the most interesting relationship on the field is that between the quarterback and his wide receivers. The quarterback’s main job is to read the defense and figure out how best to make progress. The wide receivers’ job is to read the defense and figure out how best to get open so the quarterback can throw the ball to them. The most interesting part of this process is that the quarterback must actually throw the ball before the receiver arrives at the reception location. Because it takes a couple of seconds for the ball to get there, waiting until the receiver is wide open means waiting an extra couple of seconds during which the defense can get in the way. That’s why it’s important that the quarterback and the wide receivers see the same thing on defense and react the same way. When a defensive player behaves a certain way, both the quarterback and the wide receiver must realize it and react in complementary ways. That’s the only way a pass will work.
It’s the same thing with a team of developers. You want everyone reading the field the same way. The fewer unique patterns there are in the code base, the easier it is for everyone to work with. As I’ve said in many of my writings and talks, code is actually a communication medium between developers. Making sure everyone speaks the same dialect is important.
What I do
The very first talk that I gave was on maintainability. I wasn’t trying to trail blaze nor was I trying to prevent anyone from doing anything they wanted to do. What I did then, and what I continue to do now, is to share my experiences. When I say to avoid something, it’s because I actually ran into trouble using it. When I say something is a good way to approach a problem, it’s because I found it to be successful in my own work. Most of my advice has to do with building large web applications and working on large teams because that’s how I’ve spent the past several years of my career. If you have ever seen me give a talk in person, you probably heard me say that some of these don’t apply when it’s just you working on a project all by yourself or with a couple of other people.
Because I enjoy working on large projects and with large numbers of people, I focus most of my own energy on making those systems work. I love the scalability problem because it is much more difficult than anything else. I never talk from a theoretical background and I never claim that my way is the only way to do things. Everything I share publicly, from my blog posts, to my books, to my talks, is just about sharing what I’ve learned in the hope that it also helps you. If it doesn’t help you, I wholeheartedly invite you to leave my advice off to the side where it doesn’t get in the way. I have no desire to convince you that I’m right or that you’re wrong; my only desire is to share what I’ve learned and let you use that however you see fit.
“I’m not stupid!”
Both Angus and James base their arguments around the assumption that those who are recommending certain practices believe that everyone else is stupid. While I can’t speak for everyone, I don’t think that this is the case. Recommending certain practices has nothing to do with whether or not you think that developers are stupid. If that were true, you could say the same thing about every person who gave a talk or wrote a book recommending anything. I don’t know when people started getting so upset about recommendations, but pointing the finger back at those making the recommendations and saying, “don’t call me stupid,” is ridiculous. Unfortunately, this seems to happen whenever somebody disagrees with a recommendation.
That’s not to say that all advice is good. That’s also not to say that you should follow all of the advice all the time. You should, though, stop and think about the context in which the advice is given and whether or not that context applies to you. There is no rule that applies 100% of the time. That’s not just true with JavaScript, it’s true with every rule in the entire world. The fact that there are exceptions doesn’t mean that it’s a bad rule. If the rule holds 99% of the time, then it’s worth codifying as a good idea. The recommendations that people make around best practices should be treated the same way. All rules are starting points and it’s up to you to continue the journey.
Think about driving. Most roads have a line down the center and some have guardrails along the side. Most of the time, you expect people to drive on the correct side of the road and not drive off of the road onto the sidewalk. Why bother having those lines and guardrails? I’m relatively sure that everyone within a country knows which side of the road to drive on and that staying within your defined driving lane is expected. The lines and guardrails just serve to reinforce what you already know when you’re driving a car. They give you a few extra hints. So if you start to veer over the line in the middle of the road, you know that you may be entering into some dangerous territory. The line doesn’t stop you from doing it, it’s just an indicator of expectations. Yet I don’t know anyone who is offended by the lines in the road or guardrails.
And just like with best practices, sometimes you actually have to cross over the line or drive over a sidewalk. What if you’re making a turn to the other side of the street? What if you need to pull into a driveway? What if a car is broken down and you need to get around it? There are even exceptions to the rules of the road. No one really thinks about it because we all just do it naturally.
If you come from a position that anyone recommending a practice to you thinks you’re stupid then you are doing yourself a disservice. There is no global JavaScript contest to see who can get the most people to follow their practices. No one wins anything by having more people using comma-first than comma-last. Literally there is no skin in this game for anyone.
Coding for the maintainer
Both Angus and James use the following quote (one of my favorites, from Code for the Maintainer[3]):
Always code as if the person who ends up maintaining your code is a violent psychopath who knows where you live.
Unfortunately both miss the context of this quote before dismissing it as bad advice. This quote doesn’t speak about your current teammates nor does it imply the person who is going to maintain your code will be stupider than you. The meaning behind this quote is that you don’t know who is going to be maintaining your code in the future and that person will lack context to figure out what your code is doing. Lacking context has nothing to do with intelligence.
Think back to a time when you had to take over code from somebody else. Maybe that person was still at the company or maybe not. How did you feel needing to work with that code? I can tell you from personal experience, I’ve inherited some really bad code over the years. Code that is hard to work with because it’s very difficult to understand what it’s doing. I consider myself to be reasonably intelligent, typically above average on most days, but if you sit me down in front of some code that I’ve never seen before and tell me to fix a problem it will likely take me a while to do that.
If I were to restate the quote in a way that would hopefully make people understand the intent better, I would restate it as this:
Always code as if the person who ends up maintaining your code will not be able to talk to you for help.
Removing the scare tactic phrases from the quote makes it a bit more palatable. The idea is that the person who maintains your code won’t have you as a resource and therefore the code has to make sense on its own. The assumptions and organizational knowledge that exist only in your head are the enemy of that maintainer. It doesn’t matter how intelligent that person is, the job is a nightmare without proper context. That’s why I can’t jump in and start maintaining your JavaScript library even though I know JavaScript pretty well. That’s why things like code conventions and documentation are so important for maintainability.
If your code can’t be easily maintained by someone else, then that’s not a mark of quality. The teams I’ve worked on have all converged on common conventions and that has allowed anyone to be able to work with any file at any point in time. Understanding the conventions means that you understand the files and that means you can do your job with a very low barrier to entry. It’s a point of pride for my teams that code ends up looking the same regardless of who wrote it because ultimately it’s the team’s responsibility for that code rather than an individual’s responsibility.
It’s a starting point
Thankfully, Angus ends his presentation with a very important statement: there are no absolutes. All of the rules and best practices that you hear about are simply a starting point. I always tell people on my teams that we’re going to define some rules and follow them until they don’t make sense. When they don’t make sense, we’re going to talk about why that is and figure out what we’ve learned. The rules are there to help you get off on the right foot and make sure that you don’t need to stop and ask at every moment what the right approach is. That’s important because our jobs are fundamentally repetitive.
We go into work mostly doing the same thing every day. If your job is to create features on a product, you’ll find that the features can get implemented in very similar ways even though the features themselves are very different. If your job is to fix bugs, you tend to debug and fix things in the same way. It’s the same for all of us: programming is repetitive. If everyone ends up doing the same task in different ways, then the code becomes harder to manage. So you start by defining some rules about how things will be written and deal with the exceptions as they come up.
And there will be exceptions. Exceptions don’t mean that the rule is bad, it just means that the context has changed and the rule may not apply.
What we’re really talking about here is skill acquisition[4]. The rules are there to get you started on a journey of learning. All beginners are taught the rules that let them get moving quickly while avoiding common pitfalls. As you get more experienced, you learn more rules and also start to figure out the context in which the rules don’t apply. Not everyone is at the same level of professional development, and not everyone has a good enough handle on what they’re doing to throw the rules away. It’s only through experience that these exceptions become more apparent, as the novice chess player eventually becomes a grandmaster.
Effective learning
This really all comes down to how you choose to learn. Every single person who takes the time to write a blog post or give a talk or otherwise share their knowledge is saving you valuable time. They are doing the heavy lifting of presenting an idea and it’s just up to you to decide if that idea fits with what you do or not. Thinking those people automatically believe you are stupid is counterproductive and doesn’t matter at all. Recommendations are simply ideas presented for consideration. Many times, the ideas flow from a problem that the recommender experienced at some point in time. Figure out the problem and you can figure out whether or not the context applies to you. That’s the most effective way to learn. Or to put it more eloquently:
Do not believe in anything simply because you have heard it. Do not believe in anything simply because it is spoken and rumored by many. Do not believe in anything simply because it is found written in your religious books. Do not believe in anything merely on the authority of your teachers and elders. Do not believe in traditions because they have been handed down for many generations. But after observation and analysis, when you find that anything agrees with reason and is conducive to the good and benefit of one and all, then accept it and live up to it.
- Buddha
References
The Politics of JavaScript by Angus Croll (SpeakerDeck)
Cargo-Culting in JavaScript by James Padolsey (NetTuts)
Code for the Maintainer (Cunningham & Cunningham)
Dreyfus Model of Skill Acquisition (Wikipedia)





May 21, 2013
GitHub workflows inside of a company
Recently I asked engineers to share their experiences working with GitHub at companies. I’ve always used GitHub for open source projects, but I was interested in learning more about using it professionally and how one’s development workflow might change given all of GitHub’s capabilities. I set up a gist[1] so people could leave the answers to my questions and got some great responses. The information comes from companies such as Yammer, BBC News, Flickr, ZenDesk, Simple, and more. This is an overview of the responses I received plus some detail from Scott Chacon’s post on Git Flow at GitHub[2].
Basic setup
Everyone has at least one GitHub organization under which the official repositories live. Some have more than one organization, each representing a different aspect of the business, however all official repositories are owned by an organization. I suspect this would be the case as it would be horribly awkward to have an important repository owned by a user who may or may not be at the company next year. Also, using an organizational owner for these repositories allows better visibility as to what’s going on with official projects just by looking at the organization.
Several people mentioned that no one is barred from creating their own repositories on GitHub for side projects or other purposes. Creating repositories for company-related work is generally encouraged. If a side project becomes important enough, it can be promoted to an organizational repository.
Developer setup
Companies took a couple of different approaches to submitting code:
Most indicated that developers clone the organization repository for their product and then work on feature branches within that repository. Changes are pushed to a remote feature branch for review and safe-keeping.
Some indicated that each developer forks the organization repository and does the work there until it’s ready for merging into the organization repository.
A couple indicated that they started out with forks and then switched to feature branches on the organization repository due to better transparency and easier workflow.
The general trend is in the direction of feature branches on the organization repository. Since you can send pull requests from one branch to another, you don’t lose the review mechanism.
Submitting code
In the open source world, external contributors submit pull requests when they want to contribute while the maintainers of the project commit directly to the repository. In the corporate world, where everyone may logically be a maintainer for the repository, does it make sense to have developers send pull requests? The responses were:
Some required pull requests for all changes.
Some required pull requests only for changes outside of their responsibility area (i.e., making a change to another team’s repo). Other changes can be submitted directly to the organization repository.
Some left this up to the developer’s discretion. The developer should know the amount of risk associated with the change and whether or not another set of eyes is useful. The option to submit directly to the repository is always there.
The responsibility for merging in pull requests varied across the responses. Some required the team leads to do the merging, others allowed anyone to do the merging.
Interestingly, some indicated that they start a pull request as soon as a new feature branch is created in order to track work and provide better visibility. That way, there can be a running dialog about the work being done in that branch instead of a temporary one at the time of work completion.
Preparing code for submission
A secondary part of this process is how the code must be prepared before being merged in. The accepted practice of squashing commits and rebasing still remains common across the board though the benefits aren’t clear to everyone. Of those who responded:
Some required a squash and rebase before a pull request can be merged in.
Some will merge in a pull request regardless of the makeup of commits.
Some care about keeping a strict, non-branching history while others do not.
It’s hard to outline any consistent trends in this regard. Whether or not you squash, rebase, or merge is very much a team-specific decision (not an organization-specific one).
What about git-flow?
I didn’t ask this question specifically, but it came up enough. Git-flow[3] is a process for managing changes in Git that was created by Vincent Driessen and accompanied by some Git extensions[4] for managing that flow. The general idea behind git-flow is to have several separate branches that always exist, each for a different purpose: master, develop, feature, release, and hotfix. The process of feature or bug development flows from one branch into another before it’s finally released.
Some of the respondents indicated that they use git-flow in general. Some started out with git-flow and moved away from it. The primary reason for moving away is that the git-flow process is hard to deal with in a continuous (or near-continuous) deployment model. The general feeling is that git-flow works well for products in a more traditional release model, where releases are done once every few weeks, but that this process breaks down considerably when you’re releasing once a day or more.
Conclusion
A lot of people shared a lot of very interesting anecdotes about using GitHub at their companies. As I expected, there’s no one accepted way that people are using GitHub for this purpose. What’s interesting is the range of ways people have chosen to adapt what is essentially an open source workflow for enterprise use. Since GitHub also has GitHub Enterprise, I’m certain that this trend will continue. It will be interesting to see if the feedback from GitHub Enterprise and corporate needs will end up changing the public-facing GitHub in any way.
I’m interested in doing more research about how Git and GitHub in particular are used inside of companies. I’ve yet to see any good research done on whether or not squashing and rebasing is important in the long run, and I think that would be great to figure out. Please feel free to share your experiences in the comments.
References
How do you use GitHub at your company? (GitHub Gist)
GitHub flow by Scott Chacon (Scott Chacon’s Blog)
A successful Git branching model by Vincent Driessen (nvie.com)
git-flow Git Extensions (GitHub)





April 30, 2013
Blink and the end of vendor prefixes
When Google announced that it was forking WebKit into Blink, there was a lot of discussion around what this would mean for web developers and whether the WebKit monoculture was truly breaking or not. Amongst the consternation and hypothesizing was a detail that went overlooked by many: Blink’s plan to stop creating new vendor prefixes. This, to me, is one of the most significant shifts in browser philosophy that has occurred in recent memory.
Why vendor prefixes anyway?
The idea behind vendor-prefixed features (both JavaScript and CSS) is quite simple: give browser developers and web developers opportunities to work with as-yet-incompletely-specified features and to provide feedback. The goal was a just one. How do browser developers really know if a proposed feature will work if they don’t actually try to build it? Likewise, how do web developers know that a feature is useful if they never have the chance to try to use it? Vendor-prefixed features gave browsers the freedom to implement stuff that might not be quite ready and web developers the chance to give feedback to the browsers regarding these features. It seemed like a win-win situation. So what went wrong?
To a person with a hammer…
There were problems with these vendor-prefixed features. First, browser developers were hanging on to them for far longer than originally anticipated. Firefox had -moz-border-radius from the beginning, and it was only officially removed in Firefox 13, a timespan of over eight years. Vendor-prefixed properties ended up being a playground for browser vendors, where any experiment, prototype, or other “not exactly standard” feature could be implemented. After all, it was free to do so, and web developers were forewarned by the vendor prefix that the feature shouldn’t be relied upon.
Web developers, however, saw vendor-prefixed properties more as “the way that browser X is choosing to support this feature” rather than as an experimental extension. That meant web sites started to depend on these experimental features for their functionality or appearance, leading to the creation of tools that would help by pointing out or automatically adding the correct vendor-prefixed version of CSS properties. Lest we forget this mess:
.box {
    background: #1e5799; /* Old browsers */
    background: -moz-linear-gradient(top, #1e5799 0%, #2989d8 50%, #207cca 51%, #7db9e8 100%); /* FF3.6+ */
    background: -webkit-gradient(linear, left top, left bottom, color-stop(0%,#1e5799), color-stop(50%,#2989d8), color-stop(51%,#207cca), color-stop(100%,#7db9e8)); /* Chrome,Safari4+ */
    background: -webkit-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* Chrome10+,Safari5.1+ */
    background: -o-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* Opera 11.10+ */
    background: -ms-linear-gradient(top, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* IE10+ */
    background: linear-gradient(to bottom, #1e5799 0%,#2989d8 50%,#207cca 51%,#7db9e8 100%); /* W3C */
    filter: progid:DXImageTransform.Microsoft.gradient( startColorstr='#1e5799', endColorstr='#7db9e8',GradientType=0 ); /* IE6-9 */
}
And in JavaScript, more and more code ended up looking like this[1]:
var vendors = ['ms', 'moz', 'webkit', 'o'];
for(var x = 0; x < vendors.length && !window.requestAnimationFrame; ++x) {
    window.requestAnimationFrame = window[vendors[x]+'RequestAnimationFrame'];
    window.cancelAnimationFrame = window[vendors[x]+'CancelAnimationFrame'] ||
        window[vendors[x]+'CancelRequestAnimationFrame'];
}
These patterns in CSS and JavaScript indicated that web developers were not treating these features as experimental, but as dependencies they needed for things to work. Browser vendors didn’t do web developers any favors by keeping these features around as long as they did. Surely, quickly disappearing vendor-prefixed features would have taught web developers not to rely on them.
More recently
The quickly-disappearing theory of vendor-prefixed features also seemed to be incorrect. Internet Explorer 10 introduced some features with an ms prefix in the beta version and then removed the prefixes when the final version was released. This created a lot of confusion amongst web developers as to whether or not ms was necessary to use certain features. Likewise, Chrome had started moving features through vendor-prefixed versions to standard ones much faster than before. Yet there were still features that lagged far behind, so there’s a mix of webkit-prefixed functionality: some features have been around for a while and no one’s quite sure where they’re going (-webkit-appearance), while others moved quickly into a standardized version (webkitIndexedDB).
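That mix meant a lot of application code ended up normalizing the names itself, along these lines (a simplified sketch of the pattern, not taken from any particular library):
window.indexedDB = window.indexedDB ||
    window.webkitIndexedDB ||
    window.mozIndexedDB ||
    window.msIndexedDB;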
Due to this confusion, and the desire of browsers to properly support a large number of sites, both Mozilla and Opera considered supporting -webkit- CSS properties to have better compatibility with the web at large.[2] There were a lot of opinion pieces about whether or not this was a good thing, but all agreed that vendor prefixes had resulted in a massive mess with no easy way to clean it up.
Goodbye, vendor prefixes!
Blink’s decision to stop creating new vendor-prefixed functionality is the right approach to me. Remember, the whole point of vendor prefixes was to allow browser developers to test implementations and for web developers to give feedback. You don’t need vendor prefixes for that. Blink’s approach of allowing you to turn on certain experimental features through browser settings is something I’m very excited about. Doing so allows browser developers to continue with experimental implementations without the fear that web developers will come to rely on those features. It also gives web developers the opportunity to turn on those features and try them out, with a very clear directive that you cannot rely on your users having the feature enabled and therefore should not depend on it.
I definitely welcome and look forward to a prefix-free world. We got into such a mess and placed so much cognitive burden on so many people for so long that it may be a crime in certain parts of the universe. Everyone needs the ability to experiment so that the web can continue to develop new functionality, but we all need to be smarter about how we implement and use those features. The Blink approach follows a systems approach that I like: make it difficult for people to do the wrong thing. If other browsers adopt the same approach, then we can be assured that the future CSS and JavaScript we write won’t be weighed down by numerous duplicate declarations or annoying feature tests. We can all just go back to what we really want to do: make awesome web experiences without forking our code for every browser.
References
requestAnimationFrame() polyfill (GitHub)
tl;dr on vendor prefix drama by Chris Coyier (CSS Tricks)





April 16, 2013
Getting the URL of an iframe’s parent
Dealing with iframes is always a double-edged sword. On the one hand, you get sandboxing of content within another page, ensuring that JavaScript and CSS from one page won’t affect another. If the iframe is displaying a page from a different origin, then you can also be assured that the page can’t do anything nefarious to the containing page. On the other hand, iframes and their associated window objects are a mess of permissible and impermissible actions that you need to remember[1]. Working with iframes is frequently an exercise in frustration as you methodically move through what you’re allowed to do.
I was recently asked if there’s a way to get the URL of an iframe’s parent page, which is to say, the URL of the page containing the <iframe> element. This seems like a simple enough task. For a regular page, you typically get the URL by using window.location. There is also parent to get the window object of the parent page and top to get the window object of the outermost page. You can then use parent.location or top.location to get a URL from the containing page, depending on your needs. At least, that’s how you do it when both the iframe page and the containing page are from the same origin.
When the iframe page and containing page are from different origins, then you are completely cut off from parent.location and top.location. This information is considered unsafe to share across origins. However, that doesn’t mean you can’t find out the URL of the containing page. To do so, you simply need to keep in mind what information the iframe owns and what information it does not.
To start, you should double-check that the page is actually in an iframe, which you can do with this code:
var isInIframe = (parent !== window);
When a page is running inside of an iframe, the parent object is different than the window object. You can still access parent from within an iframe even though you can’t access anything useful on it. This code will never cause an error even when crossing origins.
Once you know you’re in an iframe, you can take advantage of a little-known fact: the HTTP Referer header for a page inside of an iframe is always set to the containing page’s URL. That means a page embedded in an iframe on http://www.nczonline.net will have a Referer header equal to that URL. Knowing this fact, you need only use the oft-forgotten document.referrer property. As the name suggests, this property contains the value of the Referer header for the given document. So you can get the URL of the iframe’s parent page like this:
function getParentUrl() {
    var isInIframe = (parent !== window),
        parentUrl = null;
    if (isInIframe) {
        parentUrl = document.referrer;
    }
    return parentUrl;
}
While this may look like a security issue, it’s really not. The Referer is already being sent to the server that is serving up the iframe page, so that information is already known by the web application. The document.referrer property is just exposing the information that the server already has. Since the document object is owned by the iframe window, you aren’t crossing the same-origin policy boundary.
Iframes are always a little bit hairy to deal with, especially when you throw JavaScript into the mix. The good news is that there’s usually a way to do something that makes sense and won’t put a user at risk, whether that be through document.referrer, cross-document messaging, or some other means.
References
Iframes, onload, and document.domain by me (NCZOnline)





April 1, 2013
Making accessible icon buttons
Last week, Pamela Fox tweeted a question to me:
@slicknet Do you know the best way to make a <button> that just has an icon accessible? title, aria-label, hidden text?
— Pamela Fox (@pamelafox) March 26, 2013
As tends to happen on Twitter, we fruitlessly exchanged 140-character messages trying to get some resolution before I finally offered to write a blog post explaining my view. Along the way, I discovered that I had misunderstood the original question (yay, 140-character responses) and needed to do a little research. And so here it is.
Simple icon buttons
In the beginning, there was <input type="image">. Many seem to have forgotten this part of HTML. Early on, web developers wanted to use images as submit buttons rather than the plain submit button, and <input type="image"> allowed you to create an image that actually works like a button. Further, this type of image actually announces itself as a button in screen readers. Anytime you want the user to click on something and not navigate to another page, you’re looking for a button, and <input type="image"> gives you a nice compact way of doing that while supporting the same attributes as <img>. For example:
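A representative version of that markup (the image file name here is just a placeholder):
<input type="image" src="email.png" alt="Email">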
In this case, the major screen readers (JAWS, NVDA, VoiceOver) announce “Email button” in all major browsers, reading the alt text and identifying the image as a button. Part of the reason this pattern isn’t used much anymore is the ubiquity of CSS sprites. However, it’s still my favorite pattern, and with a little adjusting, it works fine with sprites:
.email-btn {
    width: 14px;
    height: 14px;
    background: url(activities.png) 0 -85px no-repeat;
}
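The corresponding markup might look something like this (the transparent image file name is a placeholder):
<input type="image" src="transparent.gif" class="email-btn" alt="Email">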
This example uses a single transparent GIF as the src of the image. Doing so allows a background image to be applied and show through via the “email-btn” class. If the thought of having an extra download for an icon button is unpalatable to you, you can also use a data URI representing a single transparent pixel:
data:image/gif;base64,R0lGODlhAQABAIAAAP///wAAACH5BAEAAAAALAAAAAABAAEAAAICRAEAOw==
In either case, the major screen readers still announce “Email button” when focus is set to the button, which is exactly what you want.
Using <button>
The <button> element has become the element of choice for making non-text buttons in HTML. This makes complete sense because you can put whatever HTML you desire inside, and it renders as a button. However, the end result is not the same as with the more traditional approach. First, consider a really simple example:
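A sketch of that kind of markup (again with a placeholder image file name):
<button><img src="email.png" alt="Email"></button>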

This button produces some very different results in different browsers and different screen readers:
Chrome 25 (Win7/NVDA): “Button”
Internet Explorer 9 (Win7/NVDA): “Button”
Firefox 19 (Win7/NVDA): “Email graphic button”
Chrome 25 (Win7/JAWS): “Email button”
Internet Explorer 9 (Win7/JAWS): “Email button”
Firefox 19 (Win7/JAWS): “Email button”
Chrome 25 (Mac OS X/VoiceOver): “Email button”
Safari 6 (Mac OS X/VoiceOver): “Button”
Firefox 19 (Mac OS X/VoiceOver): “Email button”
Mobile Safari (iOS 6/VoiceOver): “Button”
So basically, using a <button> element introduces a barrier for most screen reader/browser combinations to figure out what the button is doing. It doesn’t matter whether the image inside is used more traditionally or with a sprite; you don’t get much information in most places.
You can give screen readers a hint by using the aria-label attribute on the <button> element. Doing so means providing a plain-text label for the button as a whole:
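For instance (same placeholder image as before):
<button aria-label="Email"><img src="email.png" alt="Email"></button>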

By adding an aria-label, the various screen readers and browsers respond as follows:
Chrome 25 (Win7/NVDA): “Email button”
Internet Explorer 9 (Win7/NVDA): “Email button”
Firefox 19 (Win7/NVDA): “Email graphic button”
Chrome 25 (Win7/JAWS): “Email Button”
Internet Explorer 9 (Win7/JAWS): “Email Button”
Firefox 19 (Win7/JAWS): “Email button”
Chrome 25 (Mac OS X/VoiceOver): “Email button”
Safari 6 (Mac OS X/VoiceOver): “Email button”
Firefox 19 (Mac OS X/VoiceOver): “Email button”
Mobile Safari (iOS 6/VoiceOver): “Email button”
So now you actually have a nice response from all of the major browsers and screen readers. You can use this same technique regardless of what you place inside of the <button> element.
Font Awesome
Part of what I missed from my original Twitter discussion with Pamela was that she was using Font Awesome[1]. For the uninitiated, Font Awesome is an icon font that contains numerous common icons for use in HTML. Instead of using separate image files or a sprite that you have to manage, you can use an icon font and reference the relevant icon by using a class name. The icon is inserted via CSS, so it has no negative accessibility concerns. This example is similar to the one Pamela brought up:
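Roughly something like this (the icon class name is illustrative; the exact class depends on the Font Awesome version):
<button><i class="icon-envelope"></i></button>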
The question is, how do you add descriptive text to this? One way would be to add an aria-label attribute as in the previous section:
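Something along these lines (again with an illustrative icon class):
<button aria-label="Email"><i class="icon-envelope"></i></button>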
Since that always works for <button> elements, that’s the fastest and easiest way forward. The result in various screen readers:
Chrome 25 (Win7/NVDA): “Email button”
Internet Explorer 9 (Win7/NVDA): “Email button”
Firefox 19 (Win7/NVDA): “Email button”
Chrome 25 (Win7/JAWS): “Email Button”
Internet Explorer 9 (Win7/JAWS): “Email Button”
Firefox 19 (Win7/JAWS): “Email button”
Chrome 25 (Mac OS X/VoiceOver): “Email button”
Safari 6 (Mac OS X/VoiceOver): “Email button”
Firefox 19 (Mac OS X/VoiceOver): “Email button”
Mobile Safari (iOS 6/VoiceOver): “Email button”
I prefer this over the second (and often overused) option of hiding text off screen, in which case you would have code similar to this:
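A sketch of that approach (the class name is arbitrary, and there are several possible off-screen techniques):
<button><i class="icon-envelope"></i><span class="offscreen">Email</span></button>
Here the hypothetical offscreen class would use CSS to position the text out of view.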
The idea here is to position text far off in some direction such that it’s not visible to sighted users but is still read out for screen reader users. I’m not a huge fan of hiding text off screen, primarily because it feels very hacky…a bit like a sleight of hand trick. Additionally, each of the major ways of hiding text off screen comes with some sort of caveat. Using a big negative text-indent doesn’t work well with RTL languages, using a height of 0 means VoiceOver won’t announce the contents, and so on. Jonathan Snook put together a fantastic post[2] outlining the different approaches and the caveats to each.
The screen readers all end up announcing the same message as when using aria-label.
Do I hide text off screen periodically? Yes, but only as a measure of last resort when I’ve exhausted all other possibilities. I would encourage you to do the same.
One final note: don’t use title as your button label. Example:
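That is, something like (again with an illustrative icon class):
<button title="Email"><i class="icon-envelope"></i></button>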
While it would be ideal if screen readers were able to use this value, the results are very uneven:
Chrome 25 (Win7/NVDA): “Email button”
Internet Explorer 9 (Win7/NVDA): “Email button”
Firefox 19 (Win7/NVDA): “Button email”
Chrome 25 (Win7/JAWS): “Email Button”
Internet Explorer 9 (Win7/JAWS): “Email Button”
Firefox 19 (Win7/JAWS): “Email button”
Chrome 25 (Mac OS X/VoiceOver): “Button”
Safari 6 (Mac OS X/VoiceOver): “Button”
Firefox 19 (Mac OS X/VoiceOver): “Email button”
Mobile Safari (iOS 6/VoiceOver): “Email button”
Even though the title attribute is helpful for sighted users as a hint, it doesn’t provide any consistent benefit as far as screen readers go.
Conclusion
I still prefer the old-school <input type="image"> element for creating clickable icons. However, in the case of using an icon font, that really doesn’t work. In that situation, I prefer to use the aria-label attribute to provide additional text for screen readers. Doing so yields the most consistent treatment for buttons across major browsers and screen readers. As a bonus, you don’t have to worry too much about how positioning text off screen might affect other parts of the page.
References
Font Awesome (GitHub)
Hiding content for accessibility by Jonathan Snook (snook.ca)




