Native Prototypes

One of the most widely known and classic pieces of JavaScript best practice wisdom is: never extend native prototypes.

Whatever method or property name you come up with to add to Array.prototype that doesn’t (yet) exist, if it’s a useful, well-designed, and properly named addition, there’s a strong chance it will eventually be added to the spec, in which case your extension is now in conflict.

Here’s a real example that happened to me, and it illustrates this point well.

I was building an embeddable widget for other websites, and my widget relied on jQuery (though pretty much any framework would have suffered this gotcha). It worked on almost every site, but we ran across one where it was totally broken.

After almost a week of analysis/debugging, I found that the site in question had, buried deep in one of its legacy files, code that looked like this:

    // Netscape 4 doesn't have Array.push
    Array.prototype.push = function(item) {
        this[this.length] = item;
    };

Aside from the crazy comment (who cares about Netscape 4 anymore!?), this looks reasonable, right?

The problem is that Array.prototype.push was added to the spec sometime after this Netscape 4-era code was written, and what was added is not compatible with this code. The standard push(..) allows multiple items to be pushed at once. This hacked version ignores all but the first item.

Basically all JS frameworks have code that relies on push(..) with multiple elements. In my case, it was code around the CSS selector engine that was completely busted. But there could conceivably be dozens of other places susceptible.
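To see the incompatibility concretely, here’s a minimal sketch. It shadows push(..) on a single array instance (rather than actually clobbering Array.prototype) purely for safe illustration; the effect on callers is the same:

```javascript
// shadow push(..) on one array instance with the old
// single-item hack, so we don't touch Array.prototype
var a = [];
a.push = function(item) {
    this[this.length] = item;
};

a.push( 1, 2, 3 );          // extra arguments silently ignored

console.log( a.length );    // 1, not 3 -- only `1` was stored
```

Any caller that reasonably expects the standard multi-item behavior gets silently wrong results, which is exactly why the breakage was so hard to track down.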

The developer who originally wrote that push(..) hack had the right instinct to call it push, but didn’t foresee pushing multiple elements. They were certainly acting in good faith, but they created a landmine that didn’t go off until almost 10 years later when I unwittingly came along.

There are multiple lessons to take away on all sides.

First, don’t extend the natives unless you’re absolutely sure your code is the only code that will ever run in that environment. If you can’t say that 100%, then extending the natives is dangerous. You must weigh the risks.

Next, don’t unconditionally define extensions (because you can overwrite natives accidentally). In this particular example, had the code said this:

    if (!Array.prototype.push) {
        // Netscape 4 doesn't have Array.push
        Array.prototype.push = function(item) {
            this[this.length] = item;
        };
    }

The if statement guard would have only defined this hacked push() for JS environments where it didn’t exist. In my case, that probably would have been OK. But even this approach is not without risk:

  1. If the site’s code (for some crazy reason!) was relying on a push(..) that ignored multiple items, that code would have been broken years ago when the standard push(..) was rolled out.
  2. If any other library had come in and hacked in a push(..) ahead of this if guard, and it did so in an incompatible way, that would have broken the site at that time.

What that highlights is an interesting question that, frankly, doesn’t get enough attention from JS developers: Should you EVER rely on native built-in behavior if your code is running in any environment where it’s not the only code present?

The strict answer is no, but that’s awfully impractical. Your code usually can’t define its own private, untouchable versions of every piece of built-in behavior it relies on. Even if it could, that’s pretty wasteful.
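For what it’s worth, here’s a sketch of what such a “private version” of one built-in could look like: capturing a reference to the native method at load time, before any other code gets a chance to overwrite it. This pattern is an illustration of the idea, not a recommendation from the text; it guards only against later tampering, not tampering that’s already happened, and doing it for every built-in is exactly the wasteful exercise just described:

```javascript
// capture a "private" reference to the native push(..) at
// load time; `arrayPush( arr, .. )` behaves like `arr.push( .. )`
var arrayPush = Function.prototype.call.bind( Array.prototype.push );

// ...later, even if other code overwrites Array.prototype.push,
// our captured reference still has the standard behavior
var a = [];
arrayPush( a, 1, 2, 3 );

console.log( a.length );    // 3
```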

So, should you feature-test for the built-in behavior as well as compliance-testing that it does what you expect? And what if that test fails — should your code just refuse to run?

    // don't trust Array.prototype.push
    (function(){
        if (Array.prototype.push) {
            var a = [];
            a.push( 1, 2 );
            if (a[0] === 1 && a[1] === 2) {
                // tests passed, safe to use!
                return;
            }
        }

        throw Error(
            "Array#push() is missing/broken!"
        );
    })();

In theory, that sounds plausible, but it’s also pretty impractical to design tests for every single built-in method.

So, what should we do? Should we trust but verify (feature- and compliance-test) everything? Should we just assume existence is compliance and let breakage (caused by others) bubble up as it will?

There’s no great answer. The only fact that can be observed is that extending native prototypes is the only way these things bite you.

If you don’t do it, and no one else does in your application’s code, you’re safe. Otherwise, you should build in at least a little bit of skepticism, pessimism, and expectation of possible breakage.

Having a full set of unit/regression tests of your code that runs in all known environments is one way to surface some of these issues earlier, but it doesn’t do anything to actually protect you from these conflicts.

Shims/Polyfills

It’s usually said that the only safe place to extend a native is in an older (non-spec-compliant) environment, since that’s unlikely to ever change — new browsers with new spec features replace older browsers rather than amending them.

If you could see into the future, and know for sure what a future standard was going to be, like for Array.prototype.foobar, it’d be totally safe to make your own compatible version of it to use now, right?

    if (!Array.prototype.foobar) {
        // silly, silly
        Array.prototype.foobar = function() {
            this.push( "foo", "bar" );
        };
    }

If there’s already a spec for Array.prototype.foobar, and the specified behavior is equal to this logic, you’re pretty safe in defining such a snippet, and in that case it’s generally called a “polyfill” (or “shim”).

Such code is very useful to include in your code base to “patch” older browser environments that aren’t updated to the newest specs. Using polyfills is a great way to create predictable code across all your supported environments.
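For a non-silly illustration, here’s a sketch of a polyfill for a real ES5 method, String.prototype.trim, guarded so it only fills a genuine gap. (Note: this simplified regex covers the common whitespace cases; production shims such as es5-shim handle additional edge cases the spec requires.)

```javascript
// polyfill ES5 String.prototype.trim only where it's missing
if (!String.prototype.trim) {
    String.prototype.trim = function() {
        // strip leading and trailing whitespace
        return this.replace( /^\s+|\s+$/g, "" );
    };
}

console.log( "  hello  ".trim() );    // "hello"
```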

Tip: ES5-Shim (https://github.com/es-shims/es5-shim) is a comprehensive collection of shims/polyfills for bringing a project up to ES5 baseline, and similarly, ES6-Shim (https://github.com/es-shims/es6-shim) provides shims for new APIs added as of ES6. While APIs can be shimmed/polyfilled, new syntax generally cannot. To bridge the syntactic divide, you’ll want to also use an ES6-to-ES5 transpiler like Traceur (https://github.com/google/traceur-compiler/wiki/Getting-Started).

If there’s a likely upcoming standard, and most discussions agree on what it’s going to be called and how it will operate, creating an ahead-of-time polyfill for future standards compliance is called a “prollyfill” (probably-fill).

The real catch is if some new standard behavior can’t be (fully) polyfilled/prollyfilled.

There’s debate in the community over whether a partial polyfill for the common cases is acceptable (with documentation of the parts that cannot be polyfilled), or whether a polyfill should be avoided entirely if it can’t be 100% compliant with the spec.

Many developers accept at least some common partial polyfills (such as Object.create(..)), because the parts that aren’t covered are not parts they intend to use anyway.
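The classic partial polyfill for Object.create(..) is a good example of that trade-off. It covers the common single-argument case (creating an object with a given [[Prototype]]), but it makes no attempt to support the second `properties` argument, which can’t reasonably be polyfilled in old engines:

```javascript
// partial polyfill: handles Object.create( proto ), but
// ignores the second `properties` argument entirely
if (!Object.create) {
    Object.create = function(proto) {
        function F(){}
        F.prototype = proto;
        return new F();
    };
}

var parent = { greeting: "hello" };
var child = Object.create( parent );

console.log( child.greeting );    // "hello" -- found via [[Prototype]]
```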

Some developers believe that the if guard around a polyfill/shim should include some form of conformance test, replacing the existing method either if it’s absent or fails the tests. This extra layer of compliance testing is sometimes used to distinguish “shim” (compliance tested) from “polyfill” (existence checked).
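A minimal sketch of that compliance-tested style of guard, reusing the earlier push(..) scenario, might look like this. The behavior test here is my own illustration, assuming that “broken” means the old single-item hack:

```javascript
// "shim" style guard: replace push(..) not only if it's
// missing, but also if it fails a behavior (compliance) test
(function(){
    var needsFix = true;

    if (Array.prototype.push) {
        var a = [];
        a.push( 1, 2 );
        needsFix = (a.length !== 2);    // broken single-item push?
    }

    if (needsFix) {
        Array.prototype.push = function() {
            for (var i = 0; i < arguments.length; i++) {
                this[this.length] = arguments[i];
            }
            return this.length;         // per spec, returns new length
        };
    }
})();
```

In a compliant environment the test passes and the native method is left alone; only a missing or misbehaving push(..) gets replaced.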

The only absolute take-away is that there is no absolute right answer here. Extending natives, even when done “safely” in older environments, is not 100% safe. The same goes for relying upon (possibly extended) natives in the presence of others’ code.

Either should always be done with caution, defensive code, and lots of obvious documentation about the risks.