
Navigating The Unknown

There are two basic types of software projects: those that have no existing code where you'll be starting with a blank slate and those that already have working code that you'll be modifying in some way. There are also two basic types of engineers that work on software projects: those that have no prior experience with the domain and those that have already built something similar in the project's domain before. Of course, both of these pairs of categories are extremes on a continuum, but I'm a software engineer with roots as a hardware engineer so I'm a little too comfortable with binary. Bear with me.

These types of software projects and engineers can potentially be combined in four different ways. Imagine you're the engineer starting work on a project. You could be starting a brand new project in a domain that you know a lot about. You've written other programs in this domain, and you have plenty of experience with the constraints and pitfalls that you'll likely encounter. Things should go pretty well.

Or you could be dropping into a project with a significant code base, maybe one that's even gone through a few releases, but for one reason or another you don't know a thing about the domain you've found yourself working in. Maybe you're a new team member on a mature project, but you came from a completely different area of expertise. Maybe you've inherited the project from someone else with a specialized skill set that doesn't overlap with your own. Even though you'll have to learn the domain and orient yourself, you have a code base to learn from and work on. Changing existing code is much easier than writing new code because you can iterate over a working foundation instead of building the entire architecture from nothing.

The easiest of these situations is working in a domain where you've already done a lot of work and have plenty of working code available to draw from. Depending on what you're doing, this could also be the most boring situation to be in. Adding cool new features to a product that you previously built can be exciting, but reimplementing something you've done before with different constraints but nothing really new can be a real drag. In either case, lack of knowledge is not likely to be a problem.

The real unknowns happen in the last case, and that is what I want to focus on here. When you're taking on a brand new project in an area that you've never worked in before, how do you even get started? How do you design the architecture? How do you cope with the immensity of the blank slate before you?

Programming a Piano Concerto


I was poking around on the Pragmatic Programmer's site, and I came across a fascinating article about a team of software engineers that designed a program to convert old piano recordings into high-definition MIDI files that could be played back live on a special Disklavier Pro piano. They had to deal with plenty of unknowns including zero documentation of the piano's MIDI file format, no supporting libraries, and incomplete data.

The technology was so new that almost no one had programmed with it before, meaning domain experience was hard to come by. They didn't even know initially how they were going to test their software to make sure it was reproducing the recordings accurately. How did the team deal with all of these unknowns?
Some teams would want to start testing with a Rachmaninoff piano concerto or some such, and that’s a huge mistake. You always need to start with small, isolated unit tests before moving on to more advanced functional or acceptance testing.

In this case, the first six months or so of unit tests comprised some beautiful piano solos made of just one note. Just one single note at a time, mapping out the full range of the instrument. It was quite a while before the unit tests got the software to the point where they could try “Mary Had A Little Lamb,” and quite a while after that before poor Mary got any rhythm.
They went exploring to learn everything they could about the domain they were working in and took careful notes of their excursions by writing tests. It may seem like they spent a lot of time on the basics and that there would still be a lot of work ahead of them, but in reality, they were laying a stable foundation that they could quickly and efficiently build on later. Once they had a good test structure in place, had mapped out all of the variations of every note, and had figured out how to produce the right rhythm, they were all set up to rip through a real concert piece.

The team had built the primitives they needed for their domain through exploration and made a map of what they had found by writing tests. This process got me thinking about how often I have done similar things on projects I've worked on.

Drawing A Map In The Wilderness


One of the first projects I worked on as an engineer fell squarely into the 'no domain knowledge' and 'no existing code' category of our little classification system. The team I was on had to design an ASIC that would read an array of magnetic field sensors and calculate the position of a target magnet. The sensors had a complex non-linear response to the magnet, so the equations were complicated. To make things even harder, only two sensors could be measured at a time, the magnet needed to be between those two sensors to calculate a position, and the system needed to produce results as fast as possible. The resulting algorithm was going to be difficult to say the least.

I started in on the project the only way I could think of. I built a model of the system and the calculations in C++ and proceeded to explore it. The software model turned out to be an excellent map that we used constantly throughout the development process. We used it as a communication device when interfacing with the customer. We used it to experiment and try out new ideas. We used it to test and verify that the ASIC design was correct before we released it for prototypes. And we used it to validate that the ASIC was operating correctly when we got first silicon back. Having an executable model of our system was so invaluable that we continued building software models of every ASIC project after that and used them to validate the designs through tests.

High-level models and tests are maps of the wilderness of design. They are extremely effective at helping you find your way and remember where things are. When you're dropped into a new domain and need to figure out the lay of the land, your best bet is to draw a map. You can start exploring in little bits and pieces, making a record of where you've been and what you've learned.

At first it may seem like slow going, like you're not making much progress and there's an overwhelming amount of territory to cover. But as your map gets more fleshed out, you'll have a much better idea of where you are and making further progress will get easier and easier. Drawing a map is one of the quickest and most reliable ways to go from not knowing what's around you to knowing your environment in great detail.

We have been making and using maps for millennia to find our way in the world. We can do the same on software projects to equally great effect. The tools we use are not called maps, but models and tests, although they work the same way. The next time you find yourself in unknown territory on a software project, don't forget to draw a map.

RSpec or Minitest, How Does One Choose?

A few months ago I started learning Ruby on Rails with two tutorials: Agile Web Development with Rails 4 and the Ruby on Rails Tutorial. The former used Minitest as its test framework, and the latter used RSpec. At the time I had some minor confusion as to why these two tutorials would use different test frameworks, but otherwise, I didn't think much of it. I was somewhat taken with RSpec's natural language approach to writing tests, and I figured I'd continue using RSpec.

Then a few weeks ago I read 7 Reasons Why I'm Sticking with Minitest and Fixtures in Rails by Brandon Hilkert, and I started reconsidering using RSpec, Factory Girl, and Capybara for Rails testing. Maybe RSpec was making things unnecessarily complicated without providing any real value with its closer-to-English tests. I decided to take a deeper look at the differences between RSpec and Minitest by converting one of the tutorials' tests to the other framework and doing a direct comparison of them.

Since I haven't had much experience with RSpec yet and I'm not familiar with all of its features, it was easier to convert the Ruby on Rails Tutorial RSpec specs to Minitest tests, so that's what I did. The complete sample app with both test suites is up on GitHub for your viewing pleasure. I tried to do as direct a conversion as possible, and I make no claims about being a testing guru, so there are most likely better ways to write the Minitest tests than the way I did them. I did learn a lot in the process, though. Here are some of those findings.

Setup for Minitest is Much Easier


Not that the setup for RSpec, Factory Girl, and Capybara is hard, but it is something that needs to be done and maintained if you're going to use these gems. You'll also most likely use Selenium WebDriver and Database Cleaner as well as other gems for an RSpec testing stack, so at minimum you've got five additional gems to keep track of, staying current on their features, usage, and issues.
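As a rough sketch, that stack in a Gemfile looks something like this (gem names are the commonly used ones; versions are omitted and would need pinning):

```ruby
# Illustrative Gemfile test group for the RSpec stack described above.
# This is a sketch, not the exact Gemfile from the sample app.
group :test do
  gem 'rspec-rails'
  gem 'factory_girl_rails'
  gem 'capybara'
  gem 'selenium-webdriver'
  gem 'database_cleaner'
end
```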

Minitest and fixtures are already built into Rails, so setup is trivial and it works out of the box. There are a few configuration parameters that you may want to tweak, but the setup is dramatically simplified. Simple is good; keep it simple.

Run Time for Minitest is not Substantially Faster


I was a bit surprised by this result. After running the test suite ten times for both RSpec and Minitest, I got average run times of 13.60s and 12.68s, respectively. That makes Minitest less than 10% faster for this test suite, which is relatively minor compared to how much speedup I would likely get from moving from my slow 5400rpm laptop hard drive to a fast SSD or spending some time optimizing the tests. Actually, I found that plugging in my laptop so the processor runs at top speed knocks 4 seconds off both run times. That's a 30% improvement for both test suites. At any rate, performance is probably not a good reason to pick one framework or the other. There are much better ways to improve test times with either one of them.
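Those percentages fall straight out of the measured averages; here's the quick arithmetic:

```ruby
# Sanity check of the run-time comparison above.
rspec_avg    = 13.60 # seconds, average over ten runs
minitest_avg = 12.68 # seconds, average over ten runs

speedup = (rspec_avg - minitest_avg) / rspec_avg * 100
puts speedup.round(1)       # => 6.8 (percent), i.e. less than 10% faster

plugged_in_gain = 4.0 / rspec_avg * 100
puts plugged_in_gain.round  # => 29 (percent), roughly the 30% improvement
```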

Code Size for Minitest is Significantly Smaller


As reported by rake stats, the Minitest tests came to 506 lines of code (LOC), while the specs totaled 701 LOC. With more careful consideration of thoroughly testing page elements only once, using tests that target unique page identifiers when testing the same page repeatedly, and refactoring duplication in tests, the Minitest LOC could be further reduced.

Why are tests so much smaller than specs? Let's look at a few examples. First, here's the spec in the user model for verifying that a user with the admin flag set is really an admin:
  describe "with admin attribute set to 'true'" do
    before do
      @user.save!
      @user.toggle!(:admin)
    end

    it { should be_admin }
  end
The corresponding Minitest code is:
  test "with admin attribute set to 'true'" do
    @user.save!
    @user.toggle!(:admin)
    assert @user.admin?
  end
They are very similar, but the spec needs a before block that requires a couple extra lines. This is a typical way that RSpec adds lines to specs while Minitest doesn't need these extra blocks. Moving on to a more complicated example, but still in the user model, here is a spec for verifying that a user can follow and stop following another user:
  describe "following" do
    let(:other_user) { FactoryGirl.create(:user) }
    before do
      @user.save
      @user.follow!(other_user)
    end

    it { should be_following(other_user) }
    its(:followed_users) { should include(other_user) }

    describe "followed user" do
      subject { other_user }
      its(:followers) { should include(@user) }
    end

    describe "and unfollowing" do
      before { @user.unfollow!(other_user) }

      it { should_not be_following(other_user) }
      its(:followed_users) { should_not include(other_user) }
    end
  end
Notice the use of Factory Girl to create a second user here, and using let to bind that user to a variable. In Minitest the same verification is much easier and more succinct:
  test "following" do
    other_user = users(:one)
    @user.save
    @user.follow!(other_user)

    assert @user.following?(other_user)
    assert @user.followed_users.include?(other_user)
    assert other_user.followers.include?(@user)

    @user.unfollow!(other_user)
    assert_not @user.following?(other_user)
    assert_not @user.followed_users.include?(other_user)
    assert_not other_user.followers.include?(@user)
  end
Minitest uses fixtures (with users(:one)) to accomplish the same binding to a variable, but it is much more direct. The flow of the test is also much cleaner and shorter without the extra describe and it blocks. It's a full 6 lines shorter than RSpec, not counting blank lines, and I think it's actually easier to read without all of the extra line noise.

Here's one last example from the user page integration tests that verifies the page that shows an index of all users. It's a longer test that signs in a user, tests that the contents of the user index page are correct, tests that pagination is present, and tests that there are delete links that work correctly when the signed-in user is an admin.
  describe "index" do
    let(:user) { FactoryGirl.create(:user) }
    before(:each) do
      sign_in user
      visit users_path
    end

    it { should have_title('All users') }
    it { should have_content('All users') }

    describe "pagination" do
      before(:all) { 30.times { FactoryGirl.create(:user) } }
      after(:all) { User.delete_all }

      it { should have_selector('div.pagination') }

      it "should list each user" do
        User.paginate(page: 1).each do |user|
          expect(page).to have_selector('li', text: user.name)
        end
      end
    end

    describe "delete links" do
      it { should_not have_link('delete') }

      describe "as an admin user" do
        let(:admin) { FactoryGirl.create(:admin) }
        before do
          sign_in admin
          visit users_path
        end

        it { should have_link('delete', 
                              href: user_path(User.first)) }
        it "should be able to delete another user" do
          expect { click_link('delete', match: :first) }.to \
            change(User, :count).by(-1)
        end
        it { should_not have_link('delete', 
                                  href: user_path(admin)) }
      end
    end
  end
Factory Girl is used multiple times, and there is a fairly complex set of nested describe blocks making up this set of tests. There is a lot going on here, and the nesting makes things more difficult to read than they need to be. Now look at the Minitest version:
  def setup
    @user = users(:one)
    @user.password = "foobar"
    sign_in @user
    get users_path
  end

  test "index" do
    assert_select 'title', full_title('All users')
    assert_select 'h1', 'All users'
  end
  
  test "pagination" do
    assert_select 'div.pagination'

    User.paginate(page: 1).each do |user|
      assert_select 'li', user.name
    end
  end

  test "delete links" do
    assert_select 'delete', false

    admin = users(:admin)
    admin.password = 'foobar'
    sign_in admin
    get users_path

    assert_select 'a[href=?]', 
                  user_path(User.first), 
                  'delete' 
    assert_difference 'User.count', -1 do
      delete user_path(User.first)
    end
    assert_select 'a[href=?]', 
                  user_path(admin), 
                  text: 'delete', 
                  count: 0
  end
I find this much easier to read and understand. The setup method is actually used for all of the tests in the user pages integration test file, so signing in the user and visiting the user index page is shared among all of the tests. There's a fixture that takes care of the user creation tasks, so all you see in the code is users(:one) and users(:admin) in place of the Factory Girl code for the RSpec version.

The tests are also split into the three main areas of focus - basic content, pagination, and delete links - in a much more straightforward way. The testing has a more direct feel to it and is much easier to follow than the RSpec version. I also really like the power of assert_select. Nearly all DOM verification can be done with this one assert method, and it's easy to learn and understand. It's also easy to wrap it in little helper methods to make new assert methods that are geared specifically for certain types of DOM tests, like assert_title or assert_link for verifying the presence of title or link text, respectively.
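To illustrate the helper idea, here's a minimal sketch; assert_title and assert_link are hypothetical names suggested above, not part of Minitest or Rails, and they simply delegate to assert_select:

```ruby
# Hypothetical DOM-assertion helpers in the spirit described above.
# These assume they'll be mixed into a test class that already
# provides assert_select (e.g. a Rails integration test).
module DomAssertions
  # Verify the page title text.
  def assert_title(text)
    assert_select 'title', text
  end

  # Verify link text, optionally constrained to a specific href.
  def assert_link(text, href: nil)
    if href
      assert_select 'a[href=?]', href, text
    else
      assert_select 'a', text
    end
  end
end
```

Mixed into an integration test class, the earlier examples would then read as assert_title full_title('All users') or assert_link 'delete', href: user_path(User.first).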

Minitest is Much Easier to Use


Sometimes a DSL isn't an advantage, especially if you have to learn it to do something that's not related to your product's domain. A test DSL built mostly for its own sake, rather than to make tests easier to write for the product you're testing, doesn't provide much benefit. A DSL that makes it easier to write software for your product's domain provides real benefits; one that you have to take extra time to learn just so your tests read like natural language might be a waste of time.

RSpec is a DSL that risks focusing too much on making the perfect testing language and not enough on making a testing interface that is easy to understand and build on to efficiently test the Rails app that you really care about. Minitest is much closer to Ruby. It's a simple framework that provides a nice set of primitives that you can use to build out a test suite quickly and efficiently.

I'm sure a lot of people love writing tests in RSpec, and it feels very natural once you learn it well. That's a great thing, and I'm glad that it works for those people. But for me, RSpec seems to be an end in itself because it just wants to be a pretty testing language, whereas Minitest is a means to writing a clear, straightforward test suite that gets the job done. I want a testing framework that enables me to write tests quickly and get them out of the way so I can focus on improving the production code. Minitest does that for me, so that's the framework I'm going to use.

A Barely Adequate Guide to Syntax Highlighting Code in a Web Page

Since I talk about programming somewhat frequently, I figured it was about time that I add syntax highlighting to my code examples. This post will be an interesting experiment because I only have a vague idea of how to do it, but I think I can figure it out quickly enough while writing an article about it at the same time. I'll set out three goals for this post:
  1. Figure out how to add syntax highlighting to my posts on Blogger as quickly as possible.
  2. Do a little survey of different ways to do syntax highlighting. This won't be comprehensive, but hopefully will give a few reasonable alternatives.
  3. Write fairly continuously. I've wanted to do a little writing exercise to work on writing faster, and here's a good opportunity to do it.
Okay, with that settled, are you ready? Let's see how this turns out.

A Brute Force Attempt


I'll focus on highlighting Ruby code in this post, and I normally do my Ruby coding in Vim. I know Vim has a way to convert syntax highlighted code to HTML with the command :TOhtml. I'll make a little code snippet that has a decent variety of Ruby elements and run the converter on it to see what it looks like. Here it is:
class Speaker
  # Greet someone
  def self.say_hello(params)
    @name = params[:name]
    puts "#{params[:greeting]}, #{@name}"
  end
end

Speaker.say_hello({greeting: "Hello", name: "Sam"})
That looks pretty good right off the bat. I could tweak the CSS a bit to change colors, but other than that, it's straightforward. The problem with this method is that it's not too easy to go back and convert all of my old code samples to add syntax highlighting. I have to copy each one into Vim, run the converter, and then copy the HTML back into Blogger. I'm not looking forward to that. Also, I had to remove some of the generated HTML because Vim produces HTML for an entire web page, and I just need the <style> and <pre> tags and related content. I don't want to have to nip and tuck the HTML for every code snippet on my blog.

There Must Be a Better Way


There must be a way to use a JavaScript library to do the highlighting for me. It's time to pull out the programmer's most essential tool - Google. If I do a search for "syntax highlighting javascript," I get the following top 10 list:
  1. highlightjs.org
  2. prismjs.com 
  3. stackoverflow.com/questions/160694/syntax-highlighting-code-with-javascript
  4. shjs.sourceforge.net
  5. craig.is/making/rainbows
  6. alexgorbatchev.com/SyntaxHighlighter/
  7. code.google.com/p/google-code-prettify/
  8. softwaremaniacs.org/playground/showdown-highlight/ 
  9. softwaremaniacs.org/blog/2011/05/22/highlighters-comparison/
  10. github.com/LeaVerou/prism 
That looks mighty promising. We've got a number of JavaScript syntax highlighters, another list of them on Stackoverflow, and a comparison down at #9 that may prove useful for my second goal. Let's take the first one off the top of the list.

Highlight.js


Highlight.js looks really easy to use. All I have to do is add three lines to my template header in Blogger as described in the basic usage instructions, wrap my code in <pre><code>...</code></pre> tags, and format the code in HTML with the exact tab spacing that I want. Really, I can just copy the code out of Vim and into the HTML, and it should work. The script is even hosted so I don't have to figure out how to add it to my Blogger template, although I'm sure that's easy, too. Here's what the result of all that looks like:
class Speaker
  # Greet someone
  def self.say_hello(params)
    @name = params[:name]
    puts "#{params[:greeting]}, #{@name}"
  end
end

Speaker.say_hello({greeting: "Hello", name: "Sam"})
Nice! To change the theme, you can change the default.min.css to a number of different CSS templates. There's even a live demo at highlightjs.org so you can easily pick a theme that you like. I like to light up my code like a Christmas tree because it's easier for me to recognize different code elements. I also want a theme with a black background and good contrast so it doesn't seem like I'm looking through a dense fog at the code. Finally, I like my keywords to be blue or purple, just because. I thought the Tomorrow Night Bright theme was okay, so I went with that one. I might change it in the future, though, so no guarantees that what you see here is that theme.

Wait, We're Done Already?


Yup, that was actually easier than I thought it would be, and I'm happy with how easy it will be to go back and convert my other code samples to be highlighted. But not today. Actually, we're not quite done yet, because I've only shown two highlighting options. I'll spend a few minutes looking into the other options that turned up in Google.

Prism.js: Prism.js seems pretty nice. The website is straightforward, there's some good API documentation, and there's a test drive page where you can see what code for your favorite language will look like in one of their six prepackaged themes. There aren't as many themes or supported languages as Highlight.js, but it already supports Swift, which is pretty impressive and means it's definitely in active development. Prism.js does support line numbers, which is nice for long code listings or if you want to reference specific lines in your text. There's no hosted JavaScript as far as I could tell, so you'll have to download it for your own use. Most people will probably do that anyway, but it's so much easier to do a quick-and-dirty setup with a hosted instance of the plug-in.

SHJS: SHJS doesn't look like it's been updated since the end of 2008. That's not necessarily a problem, but the other syntax highlighters have probably advanced beyond it. It does support a decent selection of languages, although not as many as Highlight.js, and a decent set of themes, including many of the default themes from popular editors such as Emacs, Eclipse, and Vim. You can test out how they look with a snippet of C++ code on the website. As with Prism.js, there is no hosted JavaScript available.

Rainbow.js: Rainbow.js looks to be a more basic option, with a bare-bones, although probably sufficient, single-page website. It supports 18 languages, which is on the low side of all of these options, but they're all the most popular ones so it should work for most people. The minified JS file is tiny at only 1.4kB (no hosted instance again), and it includes a decent set of predefined themes. You can't preview the themes on the website, though.

SyntaxHighlighter: The language support is also on the low side with 23 different language 'brushes' included. As far as I could tell, there was only one default theme available, so if it's not to your liking, you'll have to tweak the CSS yourself. It does support line numbering, and the styling of the line numbers looks especially nice. Once again, there was no hosted instance of the JavaScript.

Prettify: Prettify supports a nice variety of languages, and it's the only one of these highlighters, other than Highlight.js, that auto-detects which language it's highlighting. It only has a few prepackaged themes, and they are shown as previews on the website. It does support line numbering, but only labels every fifth line by default. Getting it to number every line takes a bit of extra CSS code.

Syntax highlighting code with JavaScript - Stack Overflow: The answers to this Stack Overflow question had most of the JavaScript syntax highlighters from the first page of Google search results and a few more. Since the search results already turned up enough, I won't go into the few additional ones listed here.

Completely unfair comparison of Javascript syntax highlighters: The author of Highlight.js wrote up a nice comparison of Highlight.js, SHJS, SyntaxHighlighter, and Prettify. It's now three years old, so things could have changed since it was written, but it's a nice read for understanding what to look for in a JavaScript syntax highlighter. It is admittedly biased, but it convinced me that my original choice of Highlight.js was adequate.

In the future I'll probably figure out how to include the Highlight.js code in my Blogger template instead of linking to the hosted instance of the code, and I'll tweak the colors and code block decorations a bit. Overall, I was quite pleased with how quickly I could get the syntax highlighting working. I should have tried it out sooner! It's nice to have some colorful code now.

Why Electric Vehicles Will Finally Beat The ICE Car

I recently read yet another article from someone who does not believe that electric vehicles (EVs) will have any significant impact on the auto industry or help much in reducing CO2 emissions. He proceeded to trot out several weak arguments to support his theory. While I will go into these arguments in more detail, I want to make it clear that I'm not responding to this article specifically. I'm addressing the misguided ideas in general. I believe these misconceptions are fairly widely held by people who think of electric cars as toys that have no hope of becoming a real force in the market, and they justify their reasoning with outdated information that has become irrelevant or has long since been proven flat out wrong.

The article in question was written by Jason Perlow as a response to Matthew Inman (a.k.a. The Oatmeal) and his awesome, over-the-top review of his new Tesla Model S. I take Mr. Perlow at his word when he says that he is concerned about climate change and wants to see the most efficient and practical technologies adopted to alleviate the dangerous trends we're seeing in the environment. But then he trots out these tired old saws:
However, what I think has been lost in all this positivism and blind futurism about EVs and Tesla is how unrealistic electric cars still are for the average family.

Not only that, but they do not fundamentally solve the problems of moving to more sustainable energy sources; nor are they particularly "greener" or less fossil-powered than their gasoline, diesel, or even hybrid cousins.
These statements get at the three main criticisms of EVs: high cost, short range, and dirty energy source. Assuming that the current state of these issues for EVs will remain constant, or even improve only slowly, betrays a serious lack of vision and judgement. People who think like this must think, "Well, EVs have been widely available for two full years now. Why can't they go for 600 miles on a charge, pull electrons out of the sky, drive autonomously, sprout wings, and fly? Oh, and they shouldn't cost more than a used, stripped down Honda Civic."

Seriously, EVs have only had significant research and development investments in the last decade or so, yet they are advancing amazingly quickly. ICE cars, on the other hand, have had over a century of enormous capital investment, and they're struggling to achieve minor performance and efficiency gains. Let's take a look at the cost and range issues, because they're related, before tackling the more involved issue of how green EVs are and will be.

Choose Low Cost or Long Range, For Now


EVs are primarily electronic devices that happen to have some wheels to make them move. ICE cars are primarily mechanical devices that happen to have electronic components for sensing and control. Historically, electronics drop in price much more rapidly than mechanical devices, so it stands to reason that EVs will quickly drop in price relative to ICE cars.

Batteries are the single most expensive component of an EV, so if battery costs fall, the price of the car will fall with it. As battery production volumes increase (think Gigafactories) and raw material sourcing improves, prices are sure to come down significantly. In fact, over the 2010-2012 time frame, battery prices fell by 40%, and that trend looks to be continuing or even accelerating.

Batteries are also increasing in energy density by about 7% per year, or doubling about every 10 years, so they're getting smaller and lighter in addition to getting cheaper. Nissan is already talking about that trend with a Leaf that has double the current range slated for 2017. They also claim that battery energy density is improving much faster than anticipated, so range improvements will likely be coming much faster than their six year model cycle.
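The doubling figure checks out with simple compounding (the 7% rate is the number quoted above; the arithmetic is just a sanity check):

```ruby
# A 7% annual energy-density improvement compounds to about 2x in 10 years.
annual_growth = 1.07
factor = annual_growth ** 10
puts factor.round(2)  # => 1.97, roughly a doubling
```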

One way to think about EVs is that they are like computers. Back in the 80s and 90s, computers were expensive because they were advancing rapidly and companies were focusing on grabbing performance gains. Most reasonably powerful computers sat in the $2000-$3000 range. By the 00s, you could get more than enough power for most uses, and prices started to drop precipitously. Now you can get a ridiculously powerful computer for less than $1000, and even $500 will get you more computer than you need in most cases.

EVs are solidly in the first phase right now where they can add range while keeping the price constant, or they can drop the price while keeping the range constant if there's enough of a market for the lower range cars. Once they achieve adequate range at a low price, it's going to look like the 00s all over again. The other main components of EVs - the motors, inverters, chargers, and regenerative brakes - will also contribute to falling costs as they are standardized and mass-produced.

That's not to say that all EVs are expensive even now. Already there are many different options serving different customers. Want a mid-sized car for tooling around town or as a second commuter car? There's the Nissan Leaf S for less than $20k. Want more range for longer trips, but most trips are less than 40 miles? There's the Chevy Volt for about $28k. Want more luxury and possibly extended range for longer trips? There's the BMW i3 for about $38k. Want to go balls to the wall with price no object? There's the Tesla Model S for $65k-$110k+. (All prices are after the federal tax credit.)

(Update: I, of course, forgot to mention the gas savings. The EPA estimates savings of about $9,000 over five years versus the average (23mpg) gasoline car. This assumes 45% city and 55% highway driving at 15,000 miles per year. Your mileage will vary substantially, but you can customize the estimate on fueleconomy.gov to see what you would save. Regardless, it's significant, and makes the already affordable EVs look even better.)
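For a rough sense of where a number like that comes from, here's a sketch of the five-year comparison. The gas price, electricity price, and EV efficiency below are my own illustrative assumptions, not the EPA's exact inputs, so treat the output as ballpark only:

```python
# Five-year fuel cost comparison, EPA-style where stated.
miles_per_year = 15_000   # EPA assumption cited above
years = 5
gas_mpg = 23              # average gasoline car, from the EPA estimate

gas_price = 3.50          # $/gallon -- illustrative assumption
ev_kwh_per_mile = 0.30    # EV efficiency -- illustrative assumption
elec_price = 0.12         # $/kWh -- illustrative assumption

gas_cost = miles_per_year / gas_mpg * gas_price * years
ev_cost = miles_per_year * ev_kwh_per_mile * elec_price * years

# With these inputs the savings land around $8,700, in the same
# ballpark as the EPA's $9,000 figure.
print(f"Gasoline: ${gas_cost:,.0f}, EV: ${ev_cost:,.0f}, "
      f"savings: ${gas_cost - ev_cost:,.0f}")
```

Plugging your local prices into fueleconomy.gov will give a far better estimate than these hard-coded assumptions.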

There are more options out there, but those seem to be the big four right now. New models are coming out every year to fill different market holes and increase consumer choice. And then there's Tesla. Tesla is disrupting the industry, and other manufacturers are forced to respond. If Tesla does the same thing to the average consumer market that they're doing now to the luxury market, they will dominate most of the auto market within the next 5-10 years. Nissan, GM, and BMW are racing to get EVs ready in time to compete with Tesla's mass market EV (formerly the Model E) before it's too late. The other auto manufacturers are taking notice, but if they don't step up soon, they're going to miss out big time.

When people say that EVs have a huge technical hurdle to overcome, and will only become viable if they can solve these problems, they are being disingenuous at best. These technical problems are being solved as we speak. EVs are already better than ICE cars in the luxury segment and as an everyday commuter car. Within a few years EVs will meet or beat ICE cars on most metrics. Within a decade they will be far superior to ICE cars in nearly all cases. The detractors are choosing to ignore these obvious trends because they can't seem to envision a world where everyone is driving around in fun, quiet electric cars.

How Green Is An EV, Really?


EVs have to be charged up with electricity, and that electricity comes from a variety of sources. Mr. Perlow presents a graph produced by the US Energy Information Administration (EIA) that predicts rising electricity generation from natural gas and renewables, while the use of nuclear and coal electricity generation stays flat.

Projected electricity generation by fuel, 1990-2040 (in trillions of kilowatt-hours). Source: United States Energy Information Administration, May 2014

The interesting thing about the graph is the sharp drop in coal use for the five years leading up to the last year of data in 2012. Why won't that continue? What is causing the prediction to run flat instead of plummeting to zero by 2025? What would this chart have looked like in 2005 at coal's peak? That last question is easy to answer. Here's a chart from the EIA produced ten years ago.

Electricity generation by fuel, 2004 and 2030 (billion kilowatthours)

This chart shows coal-based electricity generation heading well north of 3 trillion kWh by 2030, while the latest prediction has coal staying below 2 trillion kWh through 2040! What happened? Natural gas mostly, but also renewables. The prediction from 2004 is clearly wildly off-base even ten years out, and totally useless for a 25 year outlook. Also consider that the energy sector is in a state of flux now, and things are going to change drastically over the next few decades. Using any predictions right now to make claims about the state of electricity generation in 25 years is probably ill-advised.

Long-term predictions are largely irrelevant, but one thing seems to be obvious. Nuclear power isn't progressing much, yet Mr. Perlow promotes nuclear electricity generation as the option to focus on. The problems with nuclear power are numerous. Nuclear plants are extremely capital intensive and take a long time to develop. Nuclear fuel is a very limited resource that's hard to get and expensive to refine. Maintenance of nuclear plants is expensive, and critical to public safety.

These are all difficult problems, but the most difficult problems to overcome are the sociopolitical ones. No one wants a new nuclear power plant in their back yard. Period. Need I mention Fukushima? Then there's the problem of guarding the nuclear fuel before, during, and after using it. The government absolutely does not want terrorists getting their hands on uranium and plutonium, even if it's not weapons grade material.

Why not spend all of that time and capital building solar fields, wind farms, geothermal plants, and other renewable energy sources for much less risk? They scale much better than nuclear, and especially with solar (another electronic device), the more they scale, the cheaper they get. Due to their variable nature, renewable energy sources are also a great complement to EVs because EVs provide part of the storage solution for when the sun isn't shining or the wind isn't blowing. Then there's Solar Freakin' Roadways, solar panels that can replace pavement and are perfectly aligned with EVs.

The point of all of this is that no matter which way the energy sector goes in the future, it's most likely not going to be coal, so all EVs, including all those currently in use, will automatically get cleaner as the electric grid gets cleaner. And for those people that install solar panels or wind turbines, their emissions will drop to zero. If we manage to eliminate coal and natural gas as electricity generation fuel sources at some point, then all EVs will have zero emissions. How's that for potential?

But that's the future; what about now? Mr. Perlow makes a curious claim about diesel cars, his personal favorite gasoline alternative: "While they aren't emissions-free by any means, modern diesel car engines also produce less CO2 when compared with gasoline engines." I'm not sure why people think this is true. I keep hearing it, but I've never seen any proof.

Sure, modern diesel cars are cleaner than they used to be, but on fueleconomy.gov, my main source for comparing fuel efficiency and emissions, every fuel-efficient diesel car is matched by a gasoline car with equivalent fuel efficiency and lower emissions. That doesn't even include hybrid cars, which are significantly better than diesels, and EVs, which blow them all out of the water.

Since you can only compare up to four cars on fueleconomy.gov, here's a table with a selection of the best 2014 EVs, hybrids, diesels, and high efficiency gasoline cars in both the luxury and mid-sized segments:


| Luxury Car | Combined MPG/MPGe | Total Emissions (g/mile) |
|---|---|---|
| Tesla Model S-85 kWh (NY) | 89 | 110 |
| Tesla Model S-85 kWh (US average) | 89 | 250 |
| Tesla Model S-85 kWh (WI) | 89 | 310 |
| Lexus GS 450h (Hybrid) | 31 | 357 |
| Mercedes-Benz E250 (Diesel) | 33 | 392 |
| Audi A6 (Gasoline) | 28 | 396 |
| BMW 535d (Diesel) | 30 | 430 |

| Mid-sized Car | Combined MPG/MPGe | Total Emissions (g/mile) |
|---|---|---|
| Nissan Leaf (NY) | 114 | 80 |
| Nissan Leaf (US average) | 114 | 200 |
| Nissan Leaf (WI) | 114 | 240 |
| Chevy Volt (NY) | 37/98 | 170 |
| Chevy Volt (US average) | 37/98 | 250 |
| Chevy Volt (WI) | 37/98 | 290 |
| Toyota Prius (Hybrid) | 50 | 222 |
| Toyota Corolla (Gasoline) | 35 | 317 |
| VW Passat (Diesel) | 35 | 368 |

I loosely sorted them by lower emissions first, but kept each model of EV together. The EVs have three entries corresponding to different electricity generation fuel mixes in different areas of the country. I picked a best-case scenario of New York state, the US average, and my home state of Wisconsin, which has a fairly dirty fuel mix of mostly coal-fired plants. If you charge your EV with solar panels or purchase renewable energy offsets (as I do), then your total emissions would be zero.

Notice how the diesel car emissions are worse than all of the other cars, even if the mileage is better. Both the Mercedes-Benz E250 and the BMW 535d get better mileage than the Audi A6, but the A6 emissions are essentially the same as the E250 and 8% lower than the 535d. The Lexus hybrid's emissions are 10% lower than the E250, and the Tesla's emissions are nearly one quarter those of the E250 on New York state electricity.

On the mid-sized car side, the Toyota Corolla has 14% lower emissions than the VW Passat with equivalent fuel efficiency, and the Prius just crushes the Passat with 40% lower emissions. The Prius competes rather well with the Nissan Leaf and Chevy Volt over the average mix of US electricity, but it's no contest if the EV and PHEV are in clean electricity areas. On New York electricity, the Leaf's emissions are roughly a third of the Prius's and less than a quarter of the Passat's!
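Those percentages fall straight out of the emissions column in the table; a quick check:

```python
# Emissions figures (g/mile) copied from the table above.
leaf_ny, prius, corolla, passat = 80, 222, 317, 368

print(f"Corolla vs Passat: {1 - corolla / passat:.0%} lower")  # 14% lower
print(f"Prius vs Passat:   {1 - prius / passat:.0%} lower")    # 40% lower
print(f"Passat vs Leaf (NY): {passat / leaf_ny:.1f}x higher")  # 4.6x higher
```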

I really don't understand why people want to buy diesel cars. They're more expensive than equivalent gasoline cars, and diesel fuel costs more than gasoline. There's a wide selection of gasoline and hybrid cars that are just as fuel efficient as diesels, if not much more, so there's no fuel cost savings there. It's also more difficult to find fuel because not every gas station offers diesel. And diesels still have higher emissions than many gasoline cars, most hybrids, and all electric cars. Diesel is not a viable alternative clean fuel. EVs and PHEVs are already hitting dramatically lower emission levels, so there is no need for an interim technology anyway.

The Real Reason EVs Will Dominate


The main arguments against electric cars are range, cost, and dirty electricity sources, but these are all largely overblown or only issues right now. EVs are not standing still; not by a long shot. Every year they improve significantly with no end in sight. Detractors that point to the state of EVs this year or last year and try to extrapolate that into claims about how they'll never work out are being either short-sighted or naive. The disadvantages are just engineering problems to be solved, while the advantages - much better comfort and convenience - are fundamental, and no ICE car can compete with them.

People love comfort and convenience. Most of the choices we make that are not directly related to basic needs center around maximizing comfort and convenience. EVs go a long way to improving both of those things. EVs are super quiet with silky smooth acceleration and are really fun to drive. Then when you're done with your comfortable drive, you can pull into your garage, plug in your car, and you'll be all juiced up for the next day. That means no runs to the gas station ever again. Oh, and there's also the convenience of nearly zero maintenance. That's why EVs will finally beat the ICE car. I say good riddance.

To Build Big, Start Small

"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system." - John Gall
The more I learn about programming, and the more programming I do, the more I am certain that this statement is true. Think about the big, complex, mature software systems out there that you use every day: Google, Amazon, Facebook, Twitter, etc. Or how about the development tools you use: Ruby on Rails, StackOverflow, GitHub, Vim, etc. (These are some of my favorites; feel free to substitute your own.) If you had to design one of these systems from scratch, you would be completely and utterly overwhelmed, probably to the point of paralysis.

These systems didn't start in the imposing state they are now. Google started out as a simple web search engine based on an insightful idea of how to rank web pages for search keywords, and it originally ran on a few servers that Larry Page and Sergey Brin cobbled together. Amazon started out only selling books from a basic web store in Jeff Bezos' garage. Ruby on Rails was extracted from the Basecamp online project management software when it was just a simple MVC framework that made the popular web app easier to develop.

None of these systems started out serving hundreds of millions of users, responding to billions of requests per day, or offering dozens of elegant, productivity-enhancing features. They didn't have to. At first they only needed to work for hundreds of users and do a few things really, really well. Then they needed to be flexible enough to scale.

These systems would have been terribly over-designed if they had started out trying to handle massive loads that were non-existent in their infancy. Most likely the designers would have gotten everything wrong if they had gone down that path. They would have been trying to solve problems that didn't yet exist and not solving the problems that would make the system loved enough by enough people to make scaling problems an issue. Worrying about scale too early is an instance of Big Design Up Front, and Uncle Bob Martin warns about it in Clean Code:
It is not necessary to do a Big Design Up Front (BDUF). In fact, BDUF is even harmful because it inhibits adapting to change, due to the psychological resistance to discarding prior effort and because of the way architecture choices influence subsequent thinking about the design.
Scale is not the first problem. The first problem is how to make scaling up become the primary problem. There's a lot of learning that happens along the way, and if the most pressing problems are always the ones being solved, the best possible knowledge gets integrated into the system at every step of its growth. The system ends up growing with and adapting to its environment as they change together instead of the system coming into existence bloated, rigid, and completely ill-suited to its environment.

That all seems reasonable in general, but what does it mean when you get down to the specifics of programming the system? First, you need an agile development environment. In particular, you need tests in place so you can make sure you don't break anything that used to be working. Code is going to change dramatically as the system evolves, so you want to make sure that you're never regressing. Whether you practice TDD (Test-Driven Development) or not, a good test suite enables change in a way that nothing else can, and change is the name of the game.
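As a concrete (and entirely hypothetical) illustration, a regression test pins down behavior so later rewrites can't silently break it. The Cart class below is invented for the example; the shape works with pytest or plain asserts:

```python
# A minimal regression test. Cart is a hypothetical class; the point
# is that behavior pinned down by a test makes refactoring safe.

class Cart:
    def __init__(self):
        self.items = []

    def add(self, price, qty=1):
        self.items.append((price, qty))

    def total(self):
        return sum(price * qty for price, qty in self.items)

def test_total_sums_line_items():
    cart = Cart()
    cart.add(10.0, qty=2)  # two items at $10
    cart.add(5.0)          # one item at $5
    assert cart.total() == 25.0

test_total_sums_line_items()  # a test runner would discover this itself
```

However Cart's internals change later, this test keeps its contract honest.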

Second, decouple optimization from development. Don't try to do them at the same time because you'll do each one better if you focus on developing great features and functionality when you're developing, and you focus on targeted optimizations where they are measurably needed when you're optimizing. If development is quick and dirty, meaning that you do the simplest thing that could possibly work, most of the system will never need to be optimized, which saves time. That leaves more time to optimize the parts that actually need it. If the architecture is done right with sane algorithms and data structures, optimizing only where it’s needed isn’t a big deal.
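Measuring before optimizing can be as simple as a profiler run. This sketch uses Python's stdlib cProfile; build_report is a made-up stand-in for the "simplest thing that could possibly work":

```python
# Profile first, optimize second: let measurements pick the target.
import cProfile
import io
import pstats

def build_report(n):
    # Simplest-thing version: string concatenation in a loop is
    # quadratic, but perfectly fine until a profile says otherwise.
    report = ""
    for i in range(n):
        report += f"row {i}\n"
    return report

profiler = cProfile.Profile()
profiler.enable()
build_report(10_000)
profiler.disable()

out = io.StringIO()
stats = pstats.Stats(profiler, stream=out).sort_stats("cumulative")
stats.print_stats(5)  # top entries show where the time actually goes
print(out.getvalue())
```

Only if the profile shows this function dominating would you bother rewriting it with, say, a single join over a list of rows.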

Of course, the system does have to be well-designed so that the parts of the system that need optimization can be refactored in isolation without affecting much of the rest of the system. The ability to design a decoupled system comes partly from experience, but such a system also emerges naturally when it is thoroughly unit tested. Even without much experience, designing an easily optimized system is possible, provided that a good test suite is developed along with the system that will support the changes that need to be made.

Third, split code into classes when it gets painful not to, but not before. When files get to be many hundreds of lines long, when a class is doing too many different things, when sections of a class are not talking to each other, that is the time to reorganize a class into a more complex structure of classes. Don't spend time creating hundreds of miniature classes that form a dozen-level inheritance hierarchy with every derived class differing by one line of code because You Aren't Gonna Need It! Wait until you feel the pain of working with a bloated class or duplicating code between classes, and then address the pain.
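Here's a hypothetical after-the-pain split: a report class that grew two unrelated jobs, computing statistics and rendering them, gets reorganized only once those halves stop talking to each other:

```python
# Hypothetical result of splitting one bloated Report class in two,
# done when the pain showed up, not preemptively.

class ReportStats:
    """Owns the numbers."""
    def __init__(self, values):
        self.values = values

    def mean(self):
        return sum(self.values) / len(self.values)

class ReportRenderer:
    """Owns the presentation; takes stats as input."""
    def render(self, stats):
        return f"mean: {stats.mean():.2f}"

stats = ReportStats([1, 2, 3, 4])
print(ReportRenderer().render(stats))  # mean: 2.50
```

Each half can now change, or be tested, without dragging the other along.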

Until you feel the pain, you can spend your time on other things. Optimization has a different cost structure than bugs. Bugs get more expensive the longer they live and should be squashed as soon as a test is in place to reproduce them, but optimization can wait. It may seem like you're wasting time and money by re-architecting something in the heat of the moment when it's not performing as needed. But the current architecture did get you where you are today, and if you had tried to architect it for higher performance from the get-go, you probably wouldn't have come up with the same design anyway. Any effort to design for possible future needs will likely be wasted because it doesn't allow the system to evolve to fit its environment.

Remember, hindsight is 20/20. Now that you know how to architect the code for higher performance for the workload that you have now, you can do it. You haven’t really wasted any time because you didn't have the problem or the experience before. Now that you have both, you can do it right. Maybe you feel that the system should have been designed right in the first place, and now it’s your time being used, not the time of the person that built it, to do it the right way. But you are working on much more pertinent information than whoever designed it in the first place. Remember that the environment and design constraints could have been entirely different when that code was first written, and now it needs to adapt to a new environment and work under new constraints. Your job is to make that happen.

When a system is designed for change, with a healthy test suite and constant doses of refactoring, adding to the system incrementally becomes easy. In contrast, trying to design a huge, complicated system up front is ridiculously hard. Avoid that rabbit hole. When designing a new system, don't think too much about the interaction of lots of features and how the system will behave at scale. You'll never make any progress. You'll be running in circles trying to keep everything in your head at once, or you'll create a mess of design documents that will detail some elaborate system that could never work in real life.

Start small. Think about what the most essential part of the system is, get that part working, and then expand the system. Mold it, slap some more functionality on, refine it, and build it up some more. The process mirrors that of any complex system: art, construction, and even living organisms. They don’t come into existence all at once. They grow and change over time to become complex systems as they mature. But they always start small.