Why Mozlandia Was So Effective

Dec 8th, 2014

When Chris Beard first announced that over a thousand Mozilla staff and contributors would be descending on Portland this month for an all-hands work week, I worried about two things. I knew a couple of the groups in my department would be approaching deadlines. And I was afraid that so many groups of people in one place would be chaotic and hard to coordinate. I wasn’t wrong – but it didn’t matter.

The level of focus and effectiveness last week was remarkable. For Mozilla Research’s part, we coordinated with multiple groups, planned 2015 projects, worked through controversial technical decisions, removed obstacles, brought new contributors on board, and even managed to get a bunch of project work done all at the same time.

There were a few things that made last week a success:

Articulating the vision: Leaders have to continually retell their people’s story. This isn’t just about morale, although that’s important. It’s about onboarding new folks, reminding old-timers of the big picture, getting people to re-evaluate their projects against the vision, and providing a group with the vocabulary to help them articulate it themselves.

While Portland was primarily a work week, it’s always a good thing for leadership to grab the opportunity to articulate the vision. This is something that Mitchell Baker has always been especially good at doing, particularly in connecting our work back to Mozilla’s mission; but Chris and others also did a good job of framing our work around building amazing products.

Loosely structured proximity: The majority of the work days were spent without excessive organization, leaving broad groups of people in close proximity but with the freedom to seek out the specific contact they needed. Managers were able to set aside quieter space for the groups of people who needed to get more heads-down work done, but large groups (for example, most of Platform) were close enough together that you could find people for impromptu conversations, whether on purpose or – just as important! – by accident.

Cross-team coordination: Remote teams are the lifeblood of Mozilla. We have a lot of techniques for making remote teams effective. But it can be harder to coordinate across teams, because they don’t have the same pre-existing relationships, or as many opportunities for face-to-face interaction. Last week, Mozilla Research got a bunch of opportunities to build new relationships with other teams and have higher-bandwidth conversations about tricky coordination topics.

I hope we do this kind of event again. There’s nontrivial overhead, and a proper cadence to these things, but every once in a while, getting everyone together pays off.

I’m Running for the W3C TAG

Jan 2nd, 2014

The W3C’s Technical Architecture Group (TAG) has two open seats for 2014, and I’m running for one of those seats.

In recent years a reform effort has been underway to help the TAG improve the cohesiveness and transparency of the many moving parts of Web standards. Domenic Denicola and I would like to help continue that reform process. My particular interests in running focus on several themes:

Designing for Extensibility

I’m an original co-signer of the Extensible Web Manifesto, which urges Web standards to focus on powerful, efficient, and composable primitives, in order to allow developers — who are far more efficient and scalable than standards bodies can ever be — to innovate by building the higher layers of the platform. The TAG has recognized the Extensible Web as a core principle. We need to build on this momentum to continue educating people about how the principles play out in practice for designing new APIs and platform capabilities that empower developers to extend the web forward.

Thinking Big and Working Collaboratively

For the Web to compete with native platforms, I believe we have to think big. This means building on our competitive strengths like URLs and dynamic loading, as well as taking a hard look at our platform’s weaknesses — lack of access to modern hardware, failures of the offline experience, or limitations of cross-origin communication, to name a few. My entire job at Mozilla Research is focused on thinking big: from ES6 modules to asm.js and Servo, my goal is to push the Web as far forward as possible. I’m running for TAG because I believe it’s an opportunity to set and articulate big goals for the Web.

At the same time, standards only work by getting people working together. My experience with open source software and standards work — particularly in shepherding the process of getting modules into ES6 — has taught me that the best way to build community consensus is the “layers of the onion” approach: bring together key stakeholders and subject experts, then iteratively widen the conversation. It’s critical to identify those stakeholders early, particularly developers. Often we see requests for developer feedback come too late in the process, when flawed assumptions are already baked deep into the core structure of a solution. The most successful standards involve close and continuous collaboration with experienced, productive developers. Pioneers like Yehuda Katz and Domenic Denicola are blazing trails in building better collaboration models between developers and platform vendors. Beyond the bully pulpit, the TAG should actively identify and approach stakeholders to initiate important collaborations.

Articulating Design Principles

When Alex Russell joined the TAG, he advocated for setting forth principles for idiomatic Web API design. We can do this in part by advising standards work in progress, which is the ongoing purview of the TAG. Web API creators are often browser implementors, who are under aggressive schedules to ship functionality and don’t always have firsthand experience using the APIs they create. Worse yet, those creators are often primarily C++ programmers, and they sometimes break key invariants of JavaScript without realizing it. One area of particular concern to me is data races: several APIs, including the File API and some proposed extensions to WebAudio, introduce low-level data races into JavaScript, something that has been carefully avoided ever since the run-to-completion model was introduced on day one.

And there’s room to lead more proactively still. One area I’d like to help with is evolving or reforming WebIDL, which is used by browser vendors to specify and implement Web APIs, but which carries a legacy of more C++- and Java-centric APIs. Several current members of the TAG have begun investigating alternatives to WebIDL that can provide the same convenience for creating libraries but that lead to more idiomatic APIs.

If you’re a developer who finds my perspective compelling, I’d certainly appreciate your public expression of support. If you belong to a voting member organization, I’d very much appreciate your organization’s vote. I also highly recommend Domenic Denicola as the other candidate whose vision and track record are most closely aligned with my own. Thanks!

On “On Asm.js”

Nov 27th, 2013

On his impossibly beautiful blog (seriously, it’s amazing, take some time to bask in it), Steven Wittens expressed some sadness about asm.js. It’s an understandable feeling: he compares asm.js to compatibility hacks like UTF-8 and x86, and longs for the browser vendors to “sit down and define the most basic glue that binds their platforms”—referring to a computational baseline that could form a robust and portable VM for the web.

I get it: it’s surprising to see a makeshift VM find its way into the web via JavaScript, rather than through the perhaps more direct approach of a new bytecode language and standardization effort. What’s more, it has become clear to me that emotions always run high when it comes to JavaScript. It’s easy for observers to suspect that what we’re doing is the result of a blind fealty to JavaScript. But the fact is, our strategy has nothing to do with the success of JavaScript. It’s about the success of the web. On a shared medium like the web, where content has to run across all OSes, platforms, and browsers, backwards-compatible strategies are far more likely to succeed than discrete jumps. In short, we’re betting on evolution because it works: UTF-8 and x86 may be ugly hacks, but the reason we’re talking about them at all is that they’re success stories.

There’s more work to be done, but between sweet demos, rapid improvements in browser performance, and the narrowing gap to native via float32 and SIMD, I see plenty of reason to keep betting on evolution. The truth is, in my heart I’m an idealist. I love beautiful, clean designs done right from scratch. (I spent my academic years working in Scheme, after all!) But my head tells me that this is the right bet. In fact, I’ve spent my career at Mozilla betting on evolution: growing JavaScript with modules and classes, leveling up the internal architecture of browser engines with Servo, and kicking the web’s virtualization powers into high gear with asm.js.

So for developers like Steven who are put off by the web’s idiosyncratic twists of fate, let’s keep working to build better abstractions to extend the web forward. In particular, in 2014 I want to invest in LLJS, as James Long has been doing in his spare time, to build better developer tools for generating high-performance code—and asm.js can be our stepping stone to get there.

ECMAScript Doc Sprint next Thursday

Aug 17th, 2012

I’ve been working on a reboot of the ECMAScript web site lately, which you can preview at tc39wiki.calculist.org. One of the most important parts of this will be a set of high-level descriptions of the proposals for ES6.

We will be hosting a virtual doc sprint to work on these pages next Thursday, August 23rd. If you enjoy writing documentation or coming up with bite-sized example programs to demonstrate new language features, please join us! A few of us will be on US Eastern time, starting around 9-10am (UTC-5), and others will be coming online on US Pacific time, around 9am (UTC-8). You’re welcome to join us for any part of the day.

We’ll be hanging out all day in the #jsdocs channel on irc.mozilla.org. Hope you can join us!

JavaScript’s two array types

Jul 16th, 2012

Imagine a BitSet constructor with an overloaded API for setting bits:

var bits = new BitSet();

bits.set(4);
bits.set([1, 4, 8, 17]);

The interface for BitSet.prototype.set is:

// set :: (number | [number]) -> undefined

Now imagine a StringSet constructor with an overloaded API for adding strings:

var set = new StringSet();

set.add('foo');
set.add(['foo', 'bar', 'baz']);
set.add({ foo: true, bar: true, baz: true });

The interface for StringSet.prototype.add is something like:

// add :: (string | [string] | object) -> undefined

These both look pretty similar, but there’s a critical difference. Think about how you might implement BitSet.prototype.set:

BitSet.prototype.set = function set(x) {
    // number case
    if (typeof x === 'number') {
        this._add1(x);
        return;
    }
    // array case
    for (var i = 0, n = x.length; i < n; i++) {
        this._add1(x[i]);
    }
};

Now think about how you might implement StringSet.prototype.add:

StringSet.prototype.add = function add(x) {
    // string case
    if (typeof x === 'string') {
        this._add1(x);
        return;
    }
    // array case
    if (/* hmmmm... */) {
        for (var i = 0, n = x.length; i < n; i++) {
            this._add1(x[i]);
        }
        return;
    }
    // object case
    for (var key in x) {
        if ({}.hasOwnProperty.call(x, key)) {
            this._add1(key);
        }
    }
};

What’s the difference? BitSet.prototype.set doesn’t have to test whether its argument is an array. It’ll work for any object that acts like an array (i.e., has indexed properties and a numeric length property). It’ll accept values like an arguments object, a NodeList, a custom object you create that acts like an array, or even a primitive string.
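To see what that flexibility buys, here’s a sketch that reuses the hypothetical BitSet from above (the specific calls are illustrative); every call relies only on the structural contract of indexed properties plus length:

var bits = new BitSet();

// A true array works, of course:
bits.set([1, 4, 8]);

// So does a hand-rolled object that merely acts like an array:
bits.set({ 0: 2, 1: 16, length: 2 });

// And an arguments object, which is array-like but not an array:
function setAll() {
    bits.set(arguments);
}
setAll(3, 5, 7);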

But StringSet.prototype.add actually needs a test to see if x is an array. How do you distinguish between arrays and objects when JavaScript arrays are objects?

One answer you’ll sometimes see is what I call “duck testing”: use some sort of heuristic that probably indicates the client intended the argument to be an array:

if (typeof x.length === 'number') {
    // ...
}

Beware the word “probably” in programming! Duck testing is a horribly medieval form of computer science.

For example, what happens when a user passes in a dictionary object that happens to include 'length' among its keys?

symbolTable.add({ a: 1, i: 1, length: 1 });

The user clearly intended this to be the dictionary case, but the duck test saw a numeric 'length' property and gleefully proclaimed “it’s an array!”

This comes down to the difference between nominal and structural types.

A nominal type is a type that has a unique identity or “brand.” It carries a tag with it that can be atomically tested to distinguish it from other types.

A structural type, also known as a duck type, is a kind of interface: it’s just a contract that mandates certain behaviors, but doesn’t say anything about what specific implementation is used to provide that behavior. The reason people have such a hard time figuring out how to test for structural types is that they are designed specifically not to be testable!
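To make the contrast concrete, here’s a small example (it uses the ES5 Array.isArray test discussed below):

// Structurally an array: indexed properties plus a numeric length.
var arrayLike = { 0: 'a', 1: 'b', length: 2 };

// Code written against the structural contract accepts it happily:
for (var i = 0, n = arrayLike.length; i < n; i++) {
    console.log(arrayLike[i]); // logs 'a', then 'b'
}

// But it lacks the nominal brand of a true array:
Array.isArray(arrayLike);  // false
Array.isArray(['a', 'b']); // true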

There are a few common scenarios in dynamically typed languages where you need to do dynamic type testing, such as error checking, debugging, and introspection. But the most common case is when implementing overloaded APIs like the set and add methods above.

The BitSet.prototype.set method treats arrays as a structural type: its argument can be any kind of value whatsoever, as long as it has indexed properties and a corresponding length property. But StringSet.prototype.add overloads array and object types, so it has to check for “arrayness.” And you can’t reliably check for structural types.

It’s specifically when you overload arrays and objects that you need a predictable nominal type test. One answer would be to punt and change the API so the client has to explicitly tag the variants:

set.add({ key: 'foo' });
set.add({ array: ['foo', 'bar', 'baz'] });
set.add({ dict: { foo: true, bar: true, baz: true } });

This overloads three different object types that can be distinguished by their relevant property names. Or you could get rid of overloading altogether:

set.add('foo');
set.addArray(['foo', 'bar', 'baz']);
set.addDict({ foo: true, bar: true, baz: true });

But these APIs are heavier and clunkier. Rather than rigidly avoiding overloading arrays and objects, the lighter-weight approach is to use JavaScript’s latent notion of a “true” array: an object whose [[Class]] internal property is "Array". That internal property serves as the brand for a built-in nominal type of JavaScript. And it’s a pretty good candidate for a universally available nominal type: clients get the concise array literal syntax, and the ES5 Array.isArray function (which can be shimmed pretty reliably in older JavaScript engines) provides the exact test needed to implement the API.
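Concretely, the /* hmmmm... */ test in StringSet.prototype.add becomes if (Array.isArray(x)). And for engines that predate ES5, the familiar shim exploits the fact that [[Class]] leaks through the default Object.prototype.toString (a sketch of that standard technique):

if (!Array.isArray) {
    Array.isArray = function isArray(x) {
        // True arrays always report their [[Class]] as '[object Array]',
        // no matter which frame or prototype chain they come from.
        return Object.prototype.toString.call(x) === '[object Array]';
    };
}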

But this test is very different from the structural type accepted by BitSet.prototype.set. For example, you can’t pass an arguments object to StringSet.prototype.add:

MyClass.prototype.update = function update() {
    this.wibbles.add(arguments);
};

This code clearly means to pass arguments as an array, but it’ll get interpreted as a dictionary. Similarly, you can’t pass a NodeList, or a primitive string, or any other JavaScript value that acts array-like.

In other words, JavaScript has two latent concepts of array types. Library writers should clearly document when their APIs accept any array-like value (i.e., the structural type) and when they require a true array (i.e., the nominal type). That way clients know whether they need to convert array-like values to true arrays before passing them in.

As a final note, ES6’s Array.from API will do exactly that conversion, making it convenient, for example, to fix the update method above:

MyClass.prototype.update = function update() {
    this.wibbles.add(Array.from(arguments));
};
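Until then, the same conversion can be spelled with the long-standing Array.prototype.slice idiom, which copies any array-like value into a true array:

MyClass.prototype.update = function update() {
    // slice copies the array-like arguments object into a true array,
    // which passes the Array.isArray test inside add.
    this.wibbles.add(Array.prototype.slice.call(arguments));
};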

Thanks to Rick Waldron for helping me come to this understanding during an awesome IRC conversation this morning.