In 2008, Jeff Atwood wrote of magpie developers: those who are always attracted to the latest programming framework or tool. He concludes:
Be selective in your pursuit of the shiny and new, and you may find yourself a better developer for it.
Unfortunately, nowhere in the blog post does Jeff offer any criteria for being 'selective'. In fact, the final sentence is the first time he advocates selectivity at all; the rest of the post is dedicated to detailing the costs of picking up new tools:
Eventually, you grow weary of the endless procession of shiny new things.
and the value of simply sticking to whatever you know:
Who cares what technology you use, as long as it works, and both you and your users are happy with it?
What Atwood is advocating in this post is what I call the 'sloth' mentality. You settle into whatever stack you happened upon, and so long as you're happy with it, you don't bother examining new things.
This is terrible advice. Frameworks must be evaluated against some set of criteria, and those that fare well by those criteria should be examined further. Part of an engineer's job is to understand and contextualize technology, keeping an eye out for advances that offer good value propositions. But that requires constantly working to understand and evaluate new technology. This is the antithesis of the sloth approach.
Atwood's point, of course, has merit. You don't want to build your production application on top of the latest fad technology, since it might not even exist in a few months.
Another aspect of the magpie pitfall is reading a motivational blog post and deciding that some new technology is The Right Way. It's hard to evaluate an enthusiastic article about a piece of technology if you don't know how it fits into the larger picture. What technologies is it based on? What were the motivations behind its creation? Who is behind the project? How mature is it? What theory is it based upon? Unless you do constant work to understand the landscape of software engineering and computer science, many of these questions will remain unanswered, and any evaluation of new frameworks will remain correspondingly incomplete.
The Three Pillars
So, how should new technologies be evaluated? Everyone has their own answer. I can only offer mine, which rests on three major pillars: theory, heritage, and execution.
Every new technology has some grounding in theory. Poor technologies have a weak grounding in theory and largely ignore it during development, hoping that the abstractions that seem 'most useful' will also turn out to be those with the correct theoretical properties.
The abstractions that software rests upon are important because they determine how stable the software's outward-facing contracts will be. Abstractions that are well grounded in theory tend to endure; those created only to solve a problem today tend to fade with the problem they were created to solve. Such abstractions may last months or years, but they rarely remain prevalent for decades.
New software that builds upon well-established theory is often a very good bet. Conversely, software that is largely tone-deaf to the theory it rests upon will have much more trouble enduring. Rails is a good example here: it was built by an inexperienced software developer with little regard for theory, but lots of regard for productivity. As a result, its API breaks with every major release, and often with minor releases as well, even more than ten years after its first release. The lure of productivity is strong, however, and Rails owes its success to its strong execution, as well as to Ruby's strong design and heritage.
Every technology is built upon prior technologies. Why were those technologies built? What problems were they intended to solve? What people and companies were involved, and how did they make money?
Ruby was initially created by Matz as a set of C macros to make programming more enjoyable. How did he design those macros? What was their design based on? It helps to know that Matz is an Emacs user and familiar with Lisp, but also loves the purity of object-oriented programming in Smalltalk. He built on the multi-paradigm nature of Lisp, while still offering an interface that boldly boasts: 'Everything is an object!'
Erlang was created at Ericsson almost 30 years ago as a high-level language with primitives to support concurrency and fault tolerance. Fault tolerance was vital, as Ericsson was building very high availability telecom systems; the telecom industry didn't believe in "best effort", which caused some tension when TCP came around.
It's often tempting to think of languages and tools in terms of the features listed 'on the back of the box', but that's a small cross-section of what software is really about. Software is the history of the people and companies that built it, what they valued, and how much effort and time they poured into the project.
Which brings us to execution.
Successful technologies are built by teams that endure. It is very hard to predict who will have the endurance to stick with a project for years, but once a person has demonstrated that ability, you can often count on them to demonstrate it again.
Examining the source repository for a project provides another indicator of good execution. Reliable projects tend to have at least 500 commits and receive commits regularly, with the most recent no older than one month. The figure of 500 is pretty arbitrary, but it has worked well for the projects I've tested it against. Linux, Ruby, Emacs and Vim are among those that do well by these metrics; excellent projects vastly exceed these guidelines.
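As a rough sketch, this heuristic is easy to automate against a local git clone. The function and threshold names below are my own illustrative choices, not part of any standard tooling; only the two git commands are standard.

```python
import subprocess
from datetime import datetime, timedelta, timezone

# Thresholds mirroring the (admittedly arbitrary) heuristic above:
# at least 500 commits, and a commit within the last month.
MIN_COMMITS = 500
MAX_AGE = timedelta(days=30)

def passes_activity_bar(commit_count: int, last_commit: datetime,
                        now: datetime) -> bool:
    """Apply the commit-count and recency bar to a project's stats."""
    return commit_count >= MIN_COMMITS and now - last_commit <= MAX_AGE

def repo_stats(repo_path: str) -> tuple[int, datetime]:
    """Read commit count and last-commit time from a local git clone."""
    count = int(subprocess.check_output(
        ["git", "-C", repo_path, "rev-list", "--count", "HEAD"]))
    last_ts = int(subprocess.check_output(
        ["git", "-C", repo_path, "log", "-1", "--format=%ct"]))
    return count, datetime.fromtimestamp(last_ts, tz=timezone.utc)
```

For example, `passes_activity_bar(*repo_stats("/path/to/linux"), datetime.now(timezone.utc))` would report whether a clone of Linux clears the bar. Of course, this is only a coarse filter; it says nothing about theory or heritage.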
Another indicator of good execution is documentation. If the documentation is poor, it suggests that the project is a labor of love, with little attention given to the needs of its community. That is a warning sign that the project will not endure, since growing a community is a survival technique for software.
Good software is constructed over years by teams that have an awareness of broad technological trends. These teams have to marry theory and execution to build stable, enduring platforms that the community can trust. It's often hard to spot which projects will succeed and which will fail, but it is very rare to see a project with good theoretical grounding, consistent execution, and strong heritage fall into obscurity. Some projects endure without one of these, but few (if any) projects endure without two or more.
These three pillars help build a model of a piece of software. That model suggests a sort of trajectory for the software, and while not infallible, it does indicate the arc the software will follow. With this context, it becomes easier to predict which projects are reliable enough to bring into production systems, and which are likely to be superseded relatively quickly.