I’m an original co-signer of the Extensible Web Manifesto, which urges Web standards to focus on powerful, efficient, and composable primitives, in order to allow developers — who are far more efficient and scalable than standards can ever be — to innovate building higher layers of the platform. The TAG has recognized the Extensible Web as a core principle. We need to build on this momentum to continue educating people about how the principles play out in practice for designing new APIs and platform capabilities that empower developers to extend the web forward.
Thinking Big and Working Collaboratively
For the Web to compete with native platforms, I believe we have to think big. This means building on our competitive strengths like URLs and dynamic loading, as well as taking a hard look at our platform’s weaknesses — lack of access to modern hardware, failures of the offline experience, or limitations of cross-origin communication, to name a few. My entire job at Mozilla Research is focused on thinking big: from ES6 modules to asm.js and Servo, my goal is to push the Web as far forward as possible. I’m running for TAG because I believe it’s an opportunity to set and articulate big goals for the Web.
At the same time, standards only work by getting people working together. My experience with open source software and standards work — particularly in shepherding the process of getting modules into ES6 — has taught me that the best way to build community consensus is the layers of the onion approach: bring together key stakeholders and subject experts and iteratively widen the conversation. It’s critical to identify those stakeholders early, particularly developers. Often we see requests for developer feedback too late in the process, at which point flawed assumptions are too deeply baked into the core structure of a solution. The most successful standards involve close and continuous collaboration with experienced, productive developers. Pioneers like Yehuda Katz and Domenic Denicola are blazing trails building better collaboration models between developers and platform vendors. Beyond the bully pulpit, the TAG should actively identify and approach stakeholders to initiate important collaborations.
Articulating Design Principles
And there’s room to lead more proactively still. One area I’d like to help with is evolving or reforming WebIDL, which browser vendors use to specify and implement Web APIs, but which carries a legacy of C++- and Java-centric API design. Several current members of the TAG have begun investigating alternatives to WebIDL that can provide the same convenience for creating libraries but lead to more idiomatic APIs.
If you’re a developer who finds my perspective compelling, I’d certainly appreciate your public expression of support. If you belong to a voting member organization, I’d very much appreciate your organization’s vote. I also highly recommend Domenic Denicola as the other candidate whose vision and track record are most closely aligned with my own. Thanks!
On his impossibly beautiful blog (seriously, it’s amazing, take some time to bask in it), Steven Wittens expressed some sadness about asm.js. It’s an understandable feeling: he compares asm.js to compatibility hacks like UTF-8 and x86, and longs for the browser vendors to “sit down and define the most basic glue that binds their platforms”—referring to a computational baseline that could form a robust and portable VM for the web.
So for developers like Steven who are put off by the web’s idiosyncratic twists of fate, let’s keep working to build better abstractions to extend the web forward. In particular, in 2014 I want to invest in LLJS, as James Long has been doing in his spare time, to build better developer tools for generating high-performance code—and asm.js can be our stepping stone to get there.
We will be hosting a virtual doc sprint to work on these pages next Thursday, August 23rd. If you enjoy writing documentation or coming up with bite-sized example programs to demonstrate new language features, please join us! A few of us will be on US Eastern time, starting around 9 to 10am (UTC-5), and others will come online on US Pacific time, around 9am (UTC-8). You’re welcome to join us for any part of the day.
We’ll be hanging out all day in the #jsdocs channel on irc.mozilla.org. Hope you can join us!
What’s the difference? BitSet.prototype.set doesn’t have to test whether its argument is an array. It’ll work for any object that acts like an array (i.e., has indexed properties and a numeric length property). It’ll even accept values like an arguments object, a NodeList, some custom object you create that acts like an array, or even a primitive string.
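As a sketch of that style (the `BitSet` internals here are my own assumptions for illustration, not the original implementation), a `set` method like this reads only the numeric `length` and the indexed properties:

```javascript
// Hypothetical sketch: a BitSet whose set method accepts any
// array-like value. The internal storage is an assumption.
function BitSet() {
  this.bits = {};
}

// No type test: just read the numeric length and indexed properties.
BitSet.prototype.set = function (indices) {
  for (var i = 0, n = indices.length; i < n; i++) {
    this.bits[indices[i]] = true;
  }
};

var bs = new BitSet();
bs.set([1, 4, 9]);                       // a real array
bs.set({ 0: 2, 1: 3, length: 2 });       // a plain array-like object
(function () { bs.set(arguments); })(5); // an arguments object
```

All three calls work identically, because the method never asks what its argument *is*, only what it *does*.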
One answer you’ll sometimes see is what I call “duck testing”: use some sort of heuristic that probably indicates the client intended the argument to be an array:
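One common shape of this heuristic (a sketch of the pattern, not any particular library’s code) is to check for a numeric `length` property:

```javascript
// Duck test: guess that a value was meant to be an array if it looks
// array-ish, i.e., it's a non-null object with a numeric length.
function looksLikeArray(x) {
  return x !== null &&
         typeof x === 'object' &&
         typeof x.length === 'number';
}

looksLikeArray([1, 2, 3]);      // true
looksLikeArray({ foo: 'bar' }); // false
```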
Beware the word “probably” in programming! Duck testing is a horribly medieval form of computer science:
For example, what happens when a user passes in a dictionary object whose keys happen to include the string 'length'?
The user clearly intended this to be the dictionary case, but the duck test saw a numeric 'length' property and gleefully proclaimed “it’s an array!”
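Concretely (the dictionary below is a made-up example of the failure mode):

```javascript
// A dictionary whose keys just happen to include the string 'length':
var wordCounts = { hello: 3, length: 7 };

// The numeric-length duck test misfires on it:
typeof wordCounts === 'object' &&
  typeof wordCounts.length === 'number'; // true -- "it's an array!"
```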
This comes down to the difference between nominal and structural types.
A nominal type is a type that has a unique identity or “brand.” It carries a tag with it that can be atomically tested to distinguish it from other types.
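In JavaScript, `Array.isArray` and `instanceof` are nominal tests: they check a value’s brand, not its shape.

```javascript
// Nominal tests check the brand, not the shape:
Array.isArray([1, 2, 3]);             // true
Array.isArray({ 0: 'a', length: 1 }); // false: right shape, wrong brand

var d = new Date();
d instanceof Date;                    // true
```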
A structural type, also known as a duck type, is a kind of interface: it’s just a contract that mandates certain behaviors, but doesn’t say anything about what specific implementation is used to provide that behavior. The reason people have such a hard time figuring out how to test for structural types is that they are designed specifically not to be testable!
There are a few common scenarios in dynamically typed languages where you need to do dynamic type testing, such as error checking, debugging, and introspection. But the most common case is implementing overloaded APIs like the set and add methods above.
The BitSet.prototype.set method treats arrays as a structural type: its argument can be any kind of value whatsoever, as long as it has indexed properties and a corresponding length. But StringSet.prototype.add overloads array and object types, so it has to check for “arrayness.” And you can’t reliably check for structural types.
It’s specifically when you overload arrays and objects that you need a predictable nominal type test. One answer would be to punt and change the API so the client has to explicitly tag the variants:
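One way that could look (the method names and internals here are hypothetical, not the original API): give each variant its own entry point, so the library never has to guess.

```javascript
// Hypothetical StringSet with explicitly tagged variants: the caller
// names the variant, so no type test is needed.
function StringSet() {
  this.items = {};
}

StringSet.prototype.add = function (s) {       // a single string
  this.items[s] = true;
};

StringSet.prototype.addArray = function (a) {  // elements of an array
  for (var i = 0; i < a.length; i++) this.items[a[i]] = true;
};

StringSet.prototype.addObject = function (o) { // keys of a dictionary
  for (var key in o) this.items[key] = true;
};

var set = new StringSet();
set.add('a');
set.addArray(['b', 'c']);
set.addObject({ d: 1, length: 2 }); // 'length' is just a key here
```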
But a nominal test like Array.isArray is very different from the structural type accepted by BitSet.prototype.set. For example, you can’t pass an arguments object to StringSet.prototype.add:
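Assuming the nominal test in question is `Array.isArray` (ES5’s standard brand check for arrays), an arguments object fails it despite having exactly the right shape:

```javascript
// An arguments object is array-like but does not carry the Array
// brand, so the nominal Array.isArray check rejects it:
(function () {
  Array.isArray(arguments); // false, despite indexed props and length
})('foo', 'bar');
```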
As a final note, ES6’s Array.from API will do that exact conversion. This would make it very convenient, for example, for the update method above to be fixed:
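For instance, `Array.from` turns any array-like (or iterable) into a true Array, which then passes the nominal brand check:

```javascript
// Array.from converts an array-like into a true Array:
var arrayLike = { 0: 'a', 1: 'b', length: 2 };
var real = Array.from(arrayLike);

Array.isArray(arrayLike); // false
Array.isArray(real);      // true: real is ['a', 'b']

(function () {
  Array.isArray(Array.from(arguments)); // true
})(1, 2, 3);
```

So an overloaded method can normalize its array-like inputs up front and use the reliable nominal test afterward.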
Over time, I’ve gotten a bunch of good critiques about the API from people. I probably don’t want to make any huge changes, but there are a couple of small changes that would be nice:
Bug 770567 - rename callee to constructor to match the documentation
Bug 742612 - separate guarded/unguarded catch clauses
Ariya is graciously willing to change Esprima to keep it in sync with SpiderMonkey. But some of these changes would affect existing clients of either library, so I wanted to post publicly to ask whether anyone would be opposed to our making them. Ariya and I would make sure to be very clear about when we’re making the changes, and we’d try to batch them so that people don’t have to repeatedly update their code.
Feel free to leave a comment if you are using Esprima or Reflect.parse and have thoughts about this.