
Hackers & Painters Predictions

After finishing Hackers & Painters, I was browsing through the notes and one section contained a few interesting “predictions”.

One thing that would help web-based applications, and help keep the next generation of software from being overshadowed by Microsoft, would be a good open source browser. A small, fast browser would be a great thing in itself, and would encourage companies to build little web appliances. If you want to change the world, write a new Mosaic. Think it’s too late? In 1998 a lot of people thought it was too late to launch a new search engine, but Google proved them wrong. There is always room for something new if it is significantly better.

Not much to say here. That’s exactly what Mozilla Firefox is (or was, at least). It’s debatable whether or not it changed the world, but its impact is indisputable.

I would not even use Javascript, if I were you; Viaweb didn’t. Most of the Javascript I see on the Web isn’t necessary, and much of it breaks. And when you start to be able to browse actual web pages on your cell phone or PDA (or toaster), who knows if they’ll even support it?

We shouldn’t hold this one against him. JavaScript was pretty awful back then.

If Apple were to grow the iPod into a cell phone with a web browser, Microsoft would be in big trouble.

This is so succinct and accurate that I’ll instead point out that he wasn’t smart enough to predict the term “smart phone” instead of cell phone.

When 10K is really 6K

10K Apart was a contest held to build a web app in less than 10 kilobytes. Unforgetit, the entry Steve and I built, was a simple alarm/reminder app. Like any other alarm app, we needed an audio notification. We quickly settled on a simple beep, and I re-encoded the MP3 to a pretty low bit-rate. The file size at this point was about 500 bytes. Perfect.

We were using HTML5 to build Unforgetit and wanted to take advantage of as many native features as possible. Embedding audio on the page with the new audio tag was dead simple. The only problem is that different browsers support different codecs. Firefox is the odd one out, since it only supports OGG audio. We didn’t really have much of a choice: if we wanted our app to work in most browsers, OGG was the way to go.
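The codec check itself is simple. As a sketch (the real Unforgetit code isn’t shown here, so the filenames and function names are made up), you can ask the browser which source it can play via canPlayType:

```javascript
// Pick the first source the browser claims it can play.
// canPlayType returns "probably", "maybe", or "" for a MIME type,
// mirroring HTMLMediaElement.canPlayType.
function pickAudioSource(sources, canPlayType) {
  for (var i = 0; i < sources.length; i++) {
    if (canPlayType(sources[i].type) !== "") {
      return sources[i].url;
    }
  }
  return null; // no supported codec
}

var sources = [
  { url: "beep.ogg", type: "audio/ogg" },  // hypothetical filenames
  { url: "beep.mp3", type: "audio/mpeg" }
];

// In a browser you would pass the element's own method:
//   var audio = document.createElement("audio");
//   audio.src = pickAudioSource(sources, audio.canPlayType.bind(audio));
```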

Unfortunately, OGG apparently has a minimum file size of around 4K because of its file header. Who knew? Setting aside 500 bytes for audio wasn’t a problem, but 4K definitely was. We had just turned the contest into 6K Apart.

One thing I forgot to mention was that we left most of this until the last minute. Thus began a mad dash to the deadline where we tried every minification, optimization, and compression technique possible. YUI Compressor alone wasn't even close. In case anyone was wondering, compressing then packing JavaScript actually works fairly well. Next up was to manually remove all CSS selectors not being used, merge and remove other styles/selectors, and minify it. This was better but not good enough.

HTML was the only thing left to compress after the CSS and JS. Since we didn't have much markup, the gain in file size was minimal.

We had pretty much reached the limits of automated compression tools at this point. However, any compression technique is more effective when combined with manual optimization. We started replacing every string possible (IDs, classes, filenames, jQuery selectors, etc.) with one character. Finally, this got us under the limit by about 10-20 bytes.
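As a rough sketch of that last step (the actual identifiers from our entry aren’t shown here, so these names are made up), the renaming can be a simple find-and-replace pass run before minification:

```javascript
// Hypothetical build step: swap verbose names for one-character ones
// before minifying and gzipping.
function shortenNames(source, nameMap) {
  return Object.keys(nameMap).reduce(function (out, longName) {
    // Word boundaries stop "reminder" from matching inside "reminders".
    return out.replace(new RegExp("\\b" + longName + "\\b", "g"), nameMap[longName]);
  }, source);
}

var css = ".reminderList { color: red } .reminderList span { font-weight: bold }";
var js  = "$('.reminderList').click(toggleReminder);";

var map = { reminderList: "r", toggleReminder: "t" };
var shortCss = shortenNames(css, map);
var shortJs  = shortenNames(js, map);
// shortCss: ".r { color: red } .r span { font-weight: bold }"
// shortJs:  "$('.r').click(t);"
```

The catch, of course, is that the same map has to be applied to the HTML, CSS, and JS together, or the selectors stop matching.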

Abstractions: from jQuery to node.js

High-level abstractions are one of the driving forces of technological and computing progress. Abstractions have allowed us to be productive and innovative by ignoring low-level details. Overall, their advantage is indisputable. But there is a cost to abstracting. In mathematics, abstractions work by removing dependencies so a concept can be more widely applied. In programming, abstractions are often achieved by automating away low-level annoyances so we can focus on high-level solutions. Automation poses a few problems: loss of control, overreliance, and a disconnect between the user’s and system’s model (more on this in a bit)[1].

Let’s take a look at two examples of abstractions in the world of JavaScript which provide a unique perspective.

jQuery

Not many people would argue that jQuery is a bad thing (unless you’re David Mark), and I’m not going to either. For all the good jQuery does, there are very few disadvantages. Looking at the problem of automation, loss of control isn’t an issue thanks to its prototypal nature. The same can’t be said for overreliance. How many times have you seen a site that loads 25 KB of jQuery just to handle one click event? Or uses jQuery to set CSS instead of just manipulating classes? More importantly, overreliance leads to a dangerous problem: a disconnect between the user’s and system’s model.

Any time a user interacts with a system, they develop a mental model of that system. Good things happen when the user’s mental model aligns with the system’s model[2]. If something works the way you think it does, interacting with it becomes easier. jQuery adds another level on top of the system (JavaScript) and this is where the problems start. Since jQuery is just a JavaScript library and not a complete abstraction, a disconnect in models arises.

Take JavaScript animations as an example. In the pre-library days, animations were done by messy setTimeouts/setIntervals. There’s no magic involved. A CSS property’s value is simply changed every few milliseconds. How are animations done in jQuery? The exact same way. Except most jQuery users don’t know that due to the simple function it’s been wrapped in. In fact, I’d argue they don’t even have a mental model for how animations happen. jQuery has abstracted that concept into oblivion.
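For illustration, here is roughly what that timer-based approach boils down to (the function names, tick interval, and step size are placeholders, not jQuery’s actual internals):

```javascript
// Nudge a numeric value toward a target, one tick at a time.
function step(current, target, delta) {
  // Clamp so the final tick lands exactly on the target.
  if (Math.abs(target - current) <= delta) return target;
  return current + (target > current ? delta : -delta);
}

// The "animation": change a CSS property every few milliseconds
// until it reaches the target, then stop the timer.
function animate(el, prop, target, delta) {
  var timer = setInterval(function () {
    var current = parseFloat(el.style[prop]) || 0;
    var next = step(current, target, delta);
    el.style[prop] = next + "px";
    if (next === target) clearInterval(timer); // done animating
  }, 13); // a classic ~13ms library tick
}
```

No magic, just a timer and arithmetic, which is exactly the mental model most jQuery users never form.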

According to Joel Spolsky’s Law of Leaky Abstractions, abstractions trade off work time for learning time. The problem is, however, that the majority of people using jQuery don’t learn how it works and what’s being abstracted. This widens the gap between jQuery and native JavaScript to a point where developers can be competent in jQuery and not know any JavaScript beyond if statements.

node.js

node.js has been garnering a lot of buzz lately, and for good reason. Described as “evented I/O for V8 JavaScript”, it unabashedly uses JavaScript which is uniquely suited to provide an interface to anything evented. Ryan Dahl (node’s creator) succinctly provided the inspiration for this post with this quote:

Be careful about making abstractions. You might have to use them.

Ryan’s philosophy is to “present the low-level interface and allow people to build on top of that”. Despite being low-level, node.js is still an abstraction (pretty much everything in programming is), but its advantage is how the two models (node.js and evented I/O) are conceptually compatible. If your goal is to write a high-performance, efficient, and concurrent network program, using a low-level layer written in a language that’s asynchronous and non-blocking by nature just makes sense.

Some common feelings about node.js are that it’s fun to use and just feels right. People usually get it or they don’t. I believe this is due to the convergence of mental models much more so than it is to a technical reason like syntax or features. node.js is special because it exposes the underlying essence of its system model rather than hiding it, or even worse, misrepresenting it.

Abstract

There actually is no problem with jQuery itself, since it’s an abstraction that achieves its goal almost perfectly. The problem is for the web development community as a whole, since a decreasing number of people need to learn and know JavaScript. Fortunately, there will always be abstractions like node.js which enable people to create automations rather than just use them.

[1,2] Donald A. Norman: The Design of Everyday Things

Privacy Paradox

Facebook’s announcements the past few years are mostly the same: new features, less privacy by default, and more convoluted privacy controls. And for every one of these updates, the public outcry became a little louder. It finally reached a critical mass with Facebook’s latest update last month. Apparently, the average Facebook user really does care about privacy. Or do they? Here’s a quick chart I made plotting the rise of FB visitors vs. their privacy:

Here’s a summary: Facebook became ridiculously popular and crushed all competitors while systematically exposing more and more of their users’ information. The factual correctness of this chart is irrelevant. All that matters is that it mirrors public perception. So do users really care about privacy?

Of course, it isn’t that simple. Facebook has always had a difficult time with the dichotomy of privacy and sharing information, and it always will. In a previous post, I showed how Google is trusted because of the geeks who love them. Unfortunately for Facebook, they went slightly too far and pissed off two important groups: geeks and the media.

Back to the future

Let’s go back to the beginning of Facebook. Most people think that the above chart is correct and that privacy has been steadily declining. However, when Facebook was only open to students, and you were only a member of your school network, everything was open to that network by default. The scope of your shared data was much more limited than it is now, but it was completely open within that scope by default, and most people left it that way.

The underlying irony here is that by expanding the data that’s shared, and widening the scope of it, for the purpose of becoming more “social”, Facebook’s recent updates have had the opposite effect. More people than ever are completely locking down their data to non-friends, diminishing the effectiveness of Facebook’s new features. Users are actually discovering the privacy settings in their profile, and despite how complex and confusing they may be, they are being set to the most private (least shared).

Putting the “social” in Social

Most of this is an overreaction by users who are being told they should care about privacy. Open data is inherent to a social network: it’s what makes it social. There are two reasons people join Facebook: to share information and to view shared information. Social networks need multiple tipping points before they become popular: more important than the tipping point of registered users is the tipping point of information shared by them. Facebook’s early success was built on the innocence of Harvard students openly sharing everything.

There’s a growing opinion that Facebook may actually be in trouble with all the bad press recently, but they have something which no one can compete with: an information monopoly. The average user has invested far too much information and time to simply walk away – especially when no real alternative exists.

Google’s Greatest Trick

What does AMC’s Mad Men have to do with Google? On the surface, not that much. But a connection led me down the path to this post. As I was reading over my last post from almost a year ago, marvelling over how prescient it was with the recent announcement of Google’s Chrome OS, it reminded me of a great quote from Mad Men:

They don’t sell campaigns or ideas or jingles, they sell media, at a 15% mark-up. Creative is just window dressing, it is thrown in for free.

Google’s creative, their window dressing, is free web apps and services. Of course, I am nowhere near the first person to realize this. It’s no secret that Google’s business model is advertising: it accounts for 99% of their revenue. Mike Elgan’s excellent article about how we are Google’s product got me thinking even more. Elgan goes on to envision how all of Google’s services will converge to eventually know more about you than you do and sell that information to advertisers.

But this realization isn’t new either; almost everyone knows how much information Google collects about them, but they still line up to use its newest service. What’s interesting to me is the psychology behind this phenomenon. It isn’t just a straight-up trade of privacy for email, maps, or search, either. People actually trust Google, and that’s why they are willing to forgo their privacy. If Google weren’t trusted, it wouldn’t matter how good their free apps were. So why do people trust the world’s largest advertising company?

The greatest trick the Devil ever pulled was convincing the world he didn’t exist.

There’s something that can never be taken away from Google: its humble beginnings as an academic research project by two geeks at Stanford. Replace Larry Page or Sergey Brin with a slick ad man like Don Draper and everything changes. Larry and Sergey represent the Modern American Dream for geeks – turn a research project into billions.

Instead of seeing a giant advertising company invading their privacy, the geeks view Google as a software company. That is the key distinction that everything else follows from. It’s hard to hate a company when it’s your dream employer, with their academic-like “campus”, 20% of time devoted to personal projects, and free food. Along with their ever-increasing number of free and open source services, it’s like giving candy to a kid.

Technology, and even more so, the internet and the web, are the great equalizers. Ordinarily, geeks aren’t the most influential group, but that all changes with the internet. I’m a big fan of Malcolm Gladwell, and his book The Tipping Point helps explain the rise of Google. Gladwell says that any successful social epidemic is the work of a few special type of people. The two that apply here are Connectors (“link us up with the world … people with a special gift for bringing the world together.”) and Mavens (“information specialists”, or “people we rely upon to connect us with new information.”), who in this case, both happen to be geeks.

Paradoxically, geeks are also the most aware and vocal about privacy. The early-adopters to each new service Google releases are always going to be the geeks. It’s obvious they don’t care about the privacy implications when they are the ones sending out invites to beta apps, writing blog posts about new features, and convincing everyone they know to switch to Google’s newest offering. If the Connectors and Mavens ignore the privacy concerns, why would anybody else listen? Once Google captured the geeks, everyone else followed.

Despite all of this, Google is still walking a fine line. When the company’s unofficial motto is “Don’t be evil”, there’s a good chance it’s inevitable – if not actually being evil, then being perceived that way.

What was Google’s greatest trick? Convincing geeks it’s a software company.

Google Chrome: The OS that’s a web browser

Google’s new browser, dubbed Chrome (for now at least), comes out today, and there are already enough blogs talking about its features and technical aspects. I have seen some talk about how this fits into Google’s strategy and long-term plan of global domination, but I think most people aren’t fully grasping the impact of Chrome.

There have been endless rumours for years now that Google was developing its own OS. This seemed backwards to me because an OS itself isn’t online, and everything Google does is online. Why would they want to compete with the Windows juggernaut (and even OS X)? A web browser is arguably the most used piece of software on an OS today. More and more web apps are being made every day (led by Google, of course), which expands the domain of the browser.

So what would allow Google to increase their already massive web presence? Building a browser that would then allow them to create even better web apps. And what would make better apps? You would have to address their two biggest problems: offline integration and stability/performance. Google Chrome addresses both of these problems with built-in Gears and its completely new JavaScript virtual machine, “V8”.

Google didn’t make Chrome so it could win the browser wars. They don’t care about their browser market share. For them, it’s all about getting more people using more Google web apps (which increases their advertising revenue). There are hints about this littered all over the Chrome comic they released. They are using open source technologies because they want other browsers to adopt these features which once again would allow more people to use Google web apps.

Google Chrome is the Google OS that has been rumoured for years. Tabs are separate processes. Web apps are the software: you can detach tabs from Chrome and, via Gears, use a web app as a standalone application offline. There’s even a task manager!

That’s not the end of it though. This summer was the start of the “netbook” craze: small, cheap, and truly portable computers. The interesting part of netbooks is that most of them come with a simple, bare-bones, customized Linux OS. More importantly, all of them come with Mozilla Firefox pre-installed. Web browsing is the primary use of netbooks, and if you had a browser that allowed you to run more complete web apps, would you really need a more expensive OS with expensive software installed?

Another new trend is motherboards with Linux embedded in flash memory for almost instant boot-up. The main benefit of this feature is quickly being able to browse the web. Do you see where this is going?

Users with cheap, mobile computers booting into Linux almost instantly, completely bypassing Windows or OS X, going straight into Google Chrome and, finally, using Google web apps. Oh, and don’t forget Android.