The purpose of my last post was to explain one reason why, in 2012, virtual reality is more than a pipe dream. It was meant to preempt a reaction of "Well, they've also been talking about 3D movies and photos for 20 years, and we've only made modest progress on that front." As a coworker said when he heard I was excited about virtual reality: "That sounds like the 90s." If you hype a technology for 20 years and it doesn't really go anywhere, people become jaded and give up. I'm writing this because I believe there is good reason to hope. Virtual reality is real, and it is cool. Let me explain why.
This post goes into more detail about the current challenges of virtual reality and the state of the art. Almost everything I'm going to write about is sourced from John Carmack's 2012 QuakeCon keynote, and the follow-up panel discussion with Michael Abrash (seminal FPS developer alongside John Carmack, now researching VR at Valve Software) and Palmer Luckey (VR headset enthusiast and founder of Oculus, the makers of the Rift VR headset due for release in 2013). These discussions are the most comprehensive treatment of the current state of VR I've seen or read anywhere, and they are extraordinarily timely.
Virtual reality has been a geek dream for decades. If you haven't been following closely, you might have missed the fact that all the technology needed to make it happen is here, right now. It's not years off; it's in use right now, and it will be available to consumers in the coming months. To be honest, I haven't been this excited for a gaming phenomenon since I rode my bike to Babbage's to pay $6 for a few floppies with Doom Shareware on them. How did this happen? After years of talk and marketing, we've made only modest inroads with 3D movies, and yet, all of a sudden, we have virtual reality. People are building it, and it is affordable.
Welcome to the desert of the real.
You may already know that this blog is powered by Octopress (it says so at the bottom), which is a Ruby-powered static site generator. In the course of my travels, I found a wiki written in Ruby as well, courtesy of the folks over at GitHub. The wiki is called Gollum, and it powers GitHub's project pages.
Gollum is badass.
In the beginning, there were static web sites: sites that served documents, because that's what the web was designed to do. The Hypertext Transfer Protocol is document-oriented, as is the Hypertext Markup Language. But the web became increasingly dynamic, with new sites requiring that documents be...
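To make the document-oriented model concrete, here's a minimal Ruby sketch of what a purely static exchange reduces to. This is my own hypothetical illustration (the `build_response` helper and the `docs` hash are invented for the example, not part of Octopress or Gollum): the client names a document by path, and the server just wraps the document's bytes in an HTTP response, with no computation or per-user state.

```ruby
# A static "server" reduced to its essence: look up a request path in a
# set of documents and wrap the document's bytes in an HTTP/1.0 response.
def build_response(path, docs)
  body = docs[path]
  if body
    "HTTP/1.0 200 OK\r\n" \
      "Content-Type: text/html\r\n" \
      "Content-Length: #{body.bytesize}\r\n" \
      "\r\n#{body}"
  else
    "HTTP/1.0 404 Not Found\r\n\r\n"
  end
end

# The whole "site" is nothing more than a collection of documents.
docs = { "/index.html" => "<html><body>Hello</body></html>" }

puts build_response("/index.html", docs).lines.first
# prints "HTTP/1.0 200 OK"
```

Everything a dynamic site adds, such as templating, sessions, and per-request computation, is layered on top of this basic document-retrieval transaction, which is why generating the documents ahead of time (as Octopress does) works at all.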
Since Chromium became available in the CrunchBang/Debian repositories, I've switched away from Firefox almost entirely. Lately, though, I'm coming to realize Chromium's shortcomings. If you're an Emacs user, or if you appreciate software with great extensibility, I'd assert that Firefox has more to offer than Chromium. In fact, Firefox is something like the spiritual successor to Emacs in the web browser world. A great example of that modularity is the Conkeror web browser, built on Mozilla's XULRunner. It's a neat piece of software, but this post is about Firefox itself, and how its extensibility gives its users a better experience than Chromium's model can.
I just discovered Ward Cunningham's Smallest Federated Wiki. It's amazing that the man who started the first revolution in internet collaboration and sharing seems poised to do so again. Anyone who knows me can tell you how much I value the idea of federation on the web, whether it be social...
I've written up some of my recent trials and tribulations with Ubuntu 11.10 on my ThinkPad. While that (not yet complete) saga is really about problems that are not easily solvable by the community, there are other problems that are, and it's getting a bit frustrating.
The major new desktop environments are influencing the Linux desktop in harmful ways. In particular, I'm thinking of Gnome 3 (Gnome Shell), Unity (both 3D and 2D) and KDE 4. Here's how they're eroding the Linux desktop through their "adopt early, adopt often" methodology.
The Eclipse Foundation announced yet-another-JVM-language called Xtend. I clicked through to the announcement out of curiosity, since I enjoy playing with new languages and have done most of my professional work on the JVM (Java, Scala, and Clojure, with a bit of Jython). After looking through what they're doing, I really like it. Xtend makes me want to start a new project just to try it out.
But, when I went to download Xtend, I was presented with an interesting requirement:
You need a running Eclipse including the Java Development Tools (JDT).
Gina Trapani shared a post from Brian Shih, who used to be a product manager for Google Reader. To provide some context, Google Reader has been the dominant RSS reader for just about six years, garnering an extremely strong but limited following (inherent in the genre; RSS is not used by the majority of web surfers).
Reader recently underwent a redesign. Most users evaluated it on its aesthetic merits, but those who made extensive use of keyboard shortcuts, and for whom sharing was an integral part of the Google Reader experience, noticed that the sharing functionality was altered significantly to bring the product more in line with Google Plus. Because people use Reader differently, the changes have hit them to varying degrees: some simply don't understand what the big deal is, others complain about the whitespace that is now more prevalent than ever, and still others (myself among them) lament the nerfed sharing functionality, especially the parts accessible via keyboard shortcuts.
This post isn't about any of that.
Rather, when I step back and evaluate what has happened, I'm reminded of the fundamental flaw in Google's strategy, and the fundamental flaw in cloud computing. All the trends in computing are moving away from a world in which the user has control, and toward a model in which the user is merely a consumer.