
Things Remembered and Things Left Unsaid

Many attempts to communicate are nullified by saying too much.
-Robert Greenleaf
I've been writing this blog now for three years straight. Once a week I sit down and write about a topic that interests me, and I haven't missed a single week since I started. Sometimes that is an extremely hard schedule to meet. Through it all I've learned and remembered a few things about writing, I've struggled through several especially challenging posts, and I've appreciated everyone who's taken the time to read what I've had to say. I hope it's been insightful.

Something I've learned from all of this writing (besides the fact that it's hard and doesn't seem to get easier because I'm constantly pushing at the edge of my skill level) is that it's nearly as important to think about what to leave out as it is to think about what to include. The obvious reasons to leave things out are that those ideas aren't especially relevant to the topic at hand, or that it's impossible to address every issue related to a topic without writing an epic post. Such all-inclusive posts would never end, so it's necessary to write with focus.

Another reason to leave things out of a post is a bit more subtle. I remember as a kid getting extremely frustrated with characters in movies or novels that seemed to be incapable of saying what needed to be said to resolve whatever conflict was going on. Parents not saying what they needed to their kids, kids not talking to their parents, friends letting misunderstandings get out of hand—why couldn't they all say what they were thinking? Part of this behavior was intentional plot devices, and part of it was normal poor communication. I get that. However, part of it also stems from communication being more complicated than simply laying it all out and having the other person listen to cold, hard reason. People want and need to come to conclusions on their own.

Writing this blog isn't exactly like relationships in a novel, of course. We're not in a major conflict that we're trying to resolve, but the desire to reach our own conclusions is the same. The best writing I've read isn't good so much because everything was explained in plain, indisputable language, but because it sparked ideas in my own mind and allowed me to fill in the gaps with compelling thoughts. It's much more engaging to read something that makes you think and allows you to take ideas and expand on them. I find that it's a hard thing to do when writing, to say enough to paint a clear picture of the topic, but not create so much detail that the picture gets muddled and confused. It's a delicate balance.


Another aspect of writing that I'm continually aware of, and that I learned in high school, is how hard but important it is to avoid Hefty Bag Words. This is an idea that has stuck with me ever since my Research and Comp class in 10th grade. I had an amazing teacher, Mrs. Hoffman, and I remember much of what she taught in that class, but nothing so much as Hefty Bag Words. The idea was that these words are used so often that they have basically lost all meaning, and should be bagged up and thrown in the trash. We were not allowed to use these words in our papers ever. Every time you used a Hefty Bag Word, you were immediately docked some number of points (5 points, if memory serves). It didn't matter one whit if the word fit well or not. If you needed to rephrase the sentence or restructure an entire paragraph to get rid of it, then that's what you had to do. Here are the ten Hefty Bag Words:
  • there
  • really
  • very
  • many
  • things/stuff
  • society
  • which
  • just
  • interesting
  • some
When Mrs. Hoffman presented the words to the class, we immediately built one of the most meaningless sentences in the English language to more easily remember them, and that's how I remember them to this day. They're even listed above in the correct order: There really are very many things about society which are just interesting to some people. The words can be rearranged somewhat, but the meaninglessness stays the same.

These are hard words to remove from your writing. I'm already guilty of using 'things' twice in the title and three times in the body of this post, not counting the times I was referring to the word itself. (It's a good thing Mrs. Hoffman isn't grading this post.) Even though I do use all of those words periodically in my writing, in the end I don't think it's entirely bad. The point is not to completely eliminate these and other mostly meaningless words from your writing, but to make the use of them count. Be as explicit as you can in your writing, and recognize when you're falling back on a vague filler word because it's easy.

So I've learned a lot over the past three years of blogging, and I've practiced writing more than at any other point in my life. It has been a worthwhile experience, and now I'm at somewhat of a transition point. Whereas at the end of the past two years of writing I found myself with more ideas for topics than I had before, this year I find my list winding down. I have a few more ideas that I can write about now, but only a handful. In addition, I have more ideas that I'd like to write about, but I haven't had the time to learn enough about them to write about them competently. I'm also finding that I need more time to work on other projects.

Working on other projects and having extra time to read more will give me more to write about, but I have to have the time to experiment and learn first. That means I'm planning on tapering off my writing schedule a bit. Instead of writing a 2,000 word post every week, I'll shoot for once or twice per month and try to trim down the length as well. That will give me more time to learn new, um, stuff, and keep this blog somewhat, er, well, interesting. Happy Holidays, and watch out for those Hefty Bag Words.

Less Friction Generates More Waste

Last week I explored how reducing friction can increase choice to the point where it becomes overwhelming, the paradox of choice, and how that ends up reintroducing friction. Reducing friction can have another undesirable side effect: when things get easier, the amount of waste generated in a system goes up.

This outcome may seem counterintuitive because in physical systems friction generates waste as heat, and reducing friction makes the system more efficient because less energy is lost in the form of heat. Less tangible systems like the economy or civilization as a whole don't work exactly like physical systems, though. When you look at how our civilization has progressed, we seem to generate more and more waste as we reduce the amount of friction in our lives. Will this trend continue, and how will we deal with it?

Finding Optimal Friction

In the last twenty years, the Internet and mobile devices have reduced or eliminated friction in numerous industries. Obviously the communication sector has been dramatically affected, including telecom, music, television, and publishing industries. Now anyone with an Internet connection can put their stuff up on-line for the world to see, and new players like Netflix have been able to challenge the big networks for our prime time hours.

The Internet has leveled the playing field across the communication industries, and it's now easier than ever for competing producers to get their products in front of customers. That's one way to look at the concept of friction in markets, from the perspective of producers. Another way to look at friction is from the consumer's perspective, and that friction has been dramatically reduced, as well. From having all of the world's information literally at your fingertips to being able to buy nearly anything at the click of a button and having it shipped to your door, the Internet has gone a long way in removing friction from consumers' lives.

However, not all industries or aspects of our lives have been affected equally by the Internet, and sectors like energy and transportation still have a lot of friction that could be reduced with the right advances in technology. Energy production and automobiles are ripe for a technological revolution.

Reducing friction isn't the be-all and end-all for making our lives easier, though. It comes with its own costs, and I think we sometimes forget how high those costs can be. We can end up wasting more time and energy in a frictionless environment due to distraction and an overwhelming amount of choice. Finding the right balance means recognizing where too much friction is wasting our energy so that we can target those inefficiencies, and realizing where too little friction is wasting our time so that we can avoid those time sinks. It's a constant struggle as we push forward with technology.

How to Get Better at Programming with Feedback

Feedback is a critical part of improving at any skill. Without feedback, you have no idea whether you're getting better or worse. You lose any sense of accomplishment or direction in what you're doing. Imagine trying to do something new and difficult without any kind of feedback whatsoever. You wouldn't even know when you've done something worthwhile, and you'd quickly lose any interest or motivation in the task.

Instead, with feedback you can gauge your progress. You can make changes in what you're doing more quickly and efficiently to improve your technique and tackle more difficult challenges. Feedback gives you the confidence to know when you're doing things right and to course correct when you're doing things wrong.

Programming is a challenging skill that takes years to learn and a lifetime to master. Programmers can use as much feedback as they can get to improve at this skill, so let's look at a few ways we can create good feedback loops to get better at programming.

Memorization Vs. Observation

How much do you remember of what you learned in high school? Depending on how old you are, my guess is little or very little. I know that I've forgotten more than I care to admit. If I had to sit down today and take a test in US History, English Literature, or Chemistry, I would most likely fail miserably. It's not because I don't think those things are important; it's because I don't need to know them for my career or my daily life.

I've forgotten the details and facts that I had to memorize when I was learning about President Lincoln or William Shakespeare. I used to think I had a good memory for facts, but as I get older, more and more of them seem to be leaking out of my head. That doesn't really worry me, though, because there's something much more powerful than memorizing, and that's observation.

By observing what's happening, understanding how it works, and applying that knowledge to new situations, you can build up higher-level thinking skills that generalize much better than a long list of memorized facts. You can use this power of observation to derive solutions to problems without having to remember the solution or needing to look it up. Let's look at a few examples to get a better idea of what I'm talking about.

Everyday DSP for Programmers: DC and Impulsive Noise Removal

For the last installment of Everyday DSP for Programmers, we'll take a look at how to remove unwanted cruft from a signal. Real-world signals can come with all kinds of unwanted noise and biases. A DC bias can wreak havoc on many DSP algorithms because the bias accumulates in the algorithm and saturates the output. The bias doesn't even have to be pure DC to cause problems. The low frequency hum of power lines is pervasive, and most electronic signals ride on top of a 60 Hz sine wave in the US and 50 Hz in Europe. Other data sources can have their own biases that need to be removed.

Another insidious type of noise is impulsive noise. Sometimes called spikes, sparkles, fliers, or outliers, these high-intensity deviations from normal input values can be hard to remove from a signal without affecting the values around them. Averaging tends to spread them out when they should be simply removed from the input signal altogether.

We'll explore how to remove both of these undesirable signal components, starting with DC removal.
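
To give a concrete sense of what we're aiming for before digging into the details (the function names below are my own, and the post develops the real approach step by step), a sketch of the two operations might look like this: subtract a running average to strip out a slowly varying bias, and replace each sample with the median of a small window to knock out isolated spikes without smearing them.

// Remove a slowly varying DC bias by subtracting an exponential moving
// average of the signal from each sample.
function removeDC(samples, alpha) {
  var avg = samples[0];
  var output = [];
  for (var i = 0; i < samples.length; i++) {
    avg = alpha*samples[i] + (1 - alpha)*avg;
    output.push(samples[i] - avg);
  }
  return output;
}

// Remove impulsive noise by replacing each sample with the median of a small
// window around it, which leaves smooth regions of the signal nearly untouched.
function removeSpikes(samples, halfWindow) {
  var output = [];
  for (var i = 0; i < samples.length; i++) {
    var start = Math.max(0, i - halfWindow);
    var end = Math.min(samples.length, i + halfWindow + 1);
    var window = samples.slice(start, end).sort(function(a, b) { return a - b; });
    output.push(window[Math.floor(window.length / 2)]);
  }
  return output;
}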

Everyday DSP for Programmers: Frequency Detection

Up until now we've seen two ways to detect the frequency of a signal. If the signal has a dominant frequency, we can measure that frequency by counting zero crossings, and if there's more frequency content, we can use a DFT to measure the full frequency spectrum of the signal. If we're looking for frequency peaks, we can use spectral peak detection to calculate exact frequency values from the quantized frequency bins of the DFT.

Now we'll explore another way to detect a specific frequency in a signal using a type of filter called an Infinite Impulse Response (IIR) filter. There is a huge variety of IIR filters—the exponential average is an example of one that we've already seen—and in this case, the type of filter we'll use for detecting frequencies is called a complex resonator. We can use the DFT to help analyze the frequency response of the complex resonator, but first let's see why we would want to use it.
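
As a rough preview of where we're headed (this is a bare-bones sketch with my own function name, not the finished filter from the post), a complex resonator is an IIR filter with a single complex pole near the unit circle. Feed it a signal, and its output magnitude grows when the signal contains energy at the pole's frequency.

// A minimal complex resonator: y[n] = x[n] + r*e^(j*w0)*y[n-1], tracked with
// separate real and imaginary parts. w0 is the target frequency in radians
// per sample, and r a little less than 1 keeps the filter stable.
function resonatorMagnitude(samples, w0, r) {
  var yRe = 0, yIm = 0;
  var cosW = r * Math.cos(w0);
  var sinW = r * Math.sin(w0);
  for (var i = 0; i < samples.length; i++) {
    var re = samples[i] + cosW*yRe - sinW*yIm;
    var im = sinW*yRe + cosW*yIm;
    yRe = re;
    yIm = im;
  }
  // A large magnitude means the input had energy at (or near) w0.
  return Math.sqrt(yRe*yRe + yIm*yIm);
}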

Everyday DSP for Programmers: FIR Filters

Now that we have this shiny new tool called the DFT, it's time to look more closely at Finite Impulse Response (FIR) filters. We can use the DFT to create and analyze FIR filters so that we can better understand how they behave with different signals.

To review, an FIR filter is named as such because when it is applied to an impulse function, the response is complete (i.e. it has reached and will remain at zero) within a finite number of samples. An FIR filter consists of an array of values, called taps, and to apply the filter to a signal, the taps are overlapped with a block of samples, and each sample is multiplied by its corresponding tap. Then the results are added together to get a single filtered sample. The filter is advanced by one sample, and the process is repeated for the next filtered sample. This process is called convolution, as we covered way back in the post on averaging.
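
Written out in code, that convolution is only a few lines. This is a generic sketch (the tap values and the signal array are just for illustration):

// Apply an FIR filter by convolution: overlap the taps with a block of
// samples, multiply element by element, sum the products, then slide the
// filter forward by one sample and repeat.
function applyFIR(samples, taps) {
  var output = [];
  for (var i = 0; i <= samples.length - taps.length; i++) {
    var sum = 0;
    for (var j = 0; j < taps.length; j++) {
      sum += samples[i + j] * taps[j];
    }
    output.push(sum);
  }
  return output;
}

// A 5-tap moving average is about the simplest low-pass FIR filter there is.
var smoothed = applyFIR(signal, [0.2, 0.2, 0.2, 0.2, 0.2]);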

We would use a filter when we want to remove some part of the signal before analyzing what's left. Normally we want to remove high-frequency noise, and a filter that does this is called a low-pass filter because it only allows low frequencies to get through it. We can also design high-pass filters that remove low-frequency bias from a signal or band-pass filters that only allow certain frequency ranges of interest through. All of these types of filters have similar design processes, so we'll focus on the low-pass filter. Let's try to design a perfect low-pass FIR filter.

Everyday DSP for Programmers: Spectral Peak Detection

For the last couple of weeks we've been looking at how the DFT works and some example DFTs. Throughout those posts we've been using signals with frequencies that fit nicely into the frequency bins of the DFT. Now we're going to see what happens when the signal frequency isn't so clean and how to deal with it. We want a simple algorithm for detecting the main frequency values of the signal without paying too high of a price in computation. This type of algorithm is called spectral peak detection. Let's get started.

Everyday DSP for Programmers: The DFT in Use

Last week we built up the DFT (Discrete Fourier Transform) from fundamentals, and while that exercise provides a good way to remember how to calculate the DFT and how the DFT works under the hood, looking at some examples is a good way to get a sense of how the DFT behaves with different types of signals. We'll take a look at the frequency response of a few basic signals by calculating their DFTs, but first, we'll briefly explore a way to calculate the DFT much faster than we did with the direct algorithm from the last post. This optimized DFT is called the FFT (Fast Fourier Transform), and if you can conform your problem to its restrictions, it's the transform that you want to use because, well, it's fast.

Everyday DSP for Programmers: The Discrete Fourier Transform

Last week we covered how to measure the frequency of a periodic signal with a single dominant frequency. This week we'll cover a way to measure the full frequency spectrum of a signal that can have any number of frequencies in it, up to the Nyquist frequency, of course. This tool is called the Discrete Fourier Transform (DFT), and we can derive it from the basic concepts of sine waves, signal transforms, and averaging.

The DFT may seem like a complicated, confusing thing at first, but fundamentally, it's actually fairly straightforward, if a bit tedious to calculate by hand. We'll explore the DFT by building it from the ground up using a signal with some known frequencies in it as a starting point. That way, we'll have a way to check that the results we're getting are reasonable. As for the tedious parts, we can write code to take care of that.
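
To show where we'll end up (this is the direct, brute-force calculation rather than the optimized FFT, and the code is my own sketch rather than the post's), the whole DFT fits in a couple of loops: for each frequency bin, correlate the signal against a cosine and a sine at that bin's frequency.

// Direct DFT: for each frequency bin k, accumulate the signal multiplied by
// a cosine (real part) and a sine (imaginary part) at that bin's frequency.
function dft(samples) {
  var N = samples.length;
  var bins = [];
  for (var k = 0; k < N; k++) {
    var re = 0, im = 0;
    for (var n = 0; n < N; n++) {
      var angle = 2 * Math.PI * k * n / N;
      re += samples[n] * Math.cos(angle);
      im -= samples[n] * Math.sin(angle);
    }
    bins.push({ re: re, im: im, magnitude: Math.sqrt(re*re + im*im) });
  }
  return bins;
}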

Everyday DSP for Programmers: Frequency Measurement

In DSP, when you're not calculating averages, you're calculating frequencies. Much of DSP involves frequency analysis, and for that task we have the Fourier Transform, a calculation that translates a signal from the time domain to the frequency domain. We don't always have the resources or the need to resort to that heavy-handed operation, though. Sometimes the signal we're dealing with is made up of a single frequency that varies over time, and the value of that frequency is what's of most interest. That is what we'll be exploring today.
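
Counting zero crossings is the simplest way to get at that single frequency, and it boils down to something like this sketch (it assumes any DC bias has already been removed so the signal actually crosses zero, and the function name is mine):

// Estimate the dominant frequency by counting zero crossings. Counting only
// rising crossings gives one count per full cycle of the signal.
function frequencyFromZeroCrossings(samples, sampleRate) {
  var crossings = 0;
  for (var i = 1; i < samples.length; i++) {
    if (samples[i - 1] < 0 && samples[i] >= 0) {
      crossings++;
    }
  }
  var seconds = samples.length / sampleRate;
  return crossings / seconds;  // cycles per second (Hz)
}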

Everyday DSP for Programmers: Signal Envelopes

Sometimes we don't care so much about the exact details of a signal as we do about whether a signal is even present or not. If the signal is periodic, it can be difficult to directly detect when the signal is there and when it goes away because when it is present, it's oscillating between various values. In cases like these, what we want to calculate is the envelope of the signal, which tells us whether the signal is present and what its approximate amplitude is. As an added bonus, a signal envelope can be calculated in real-time, so other DSP operations can run on the signal as soon as it's detected.
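
One simple way to build an envelope (there are fancier approaches, and the function name and attack/decay parameters here are my own illustration) is to rectify the signal and smooth the result with an exponential average that responds quickly when the signal grows and slowly when it shrinks:

// A basic envelope follower: take the absolute value of each sample, then
// smooth it with an exponential average that attacks fast and decays slowly.
function envelope(samples, attack, decay) {
  var env = 0;
  var output = [];
  for (var i = 0; i < samples.length; i++) {
    var magnitude = Math.abs(samples[i]);
    var alpha = (magnitude > env) ? attack : decay;
    env = alpha*magnitude + (1 - alpha)*env;
    output.push(env);
  }
  return output;
}

// Example: respond quickly to a signal appearing, fade slowly when it stops.
var env = envelope(signal, 0.5, 0.05);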

Everyday DSP for Programmers: Edge Detection

Up until now we've been learning about the fundamental concepts and basic tools of digital signal processing (DSP). Now we're going to use those tools to solve a common problem in DSP: edge detection. When you think of edge detection, you probably immediately think of image processing and finding edges in photographs or a video signal. It's true that there are plenty of edge detection applications in image processing, especially with some of the new technologies being developed for self-driving cars, facial recognition, and automatic photo processing, but edge detection is useful in a much wider range of applications than that.

Anywhere you need to monitor a signal for a change of state is a likely candidate for edge detection. That would include the feedback systems in robotic motion, industrial automation, touch screens and other touch inputs, pressure sensors in car airbag safety systems, accelerometers in any toy or gadget that senses motion, and the list goes on. Sometimes, the relevant signal in these systems varies within a well-defined range, and the edge is easy to detect by setting a fixed threshold. If the signal crosses the threshold, the system detects an edge and does what it needs to do.
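
In code, the fixed-threshold case really is as simple as it sounds. A sketch of a rising-edge detector might look like this (the names and structure are mine, just for illustration):

// Detect rising edges by watching for the signal to cross a fixed threshold,
// reporting the index of each crossing.
function detectEdges(samples, threshold) {
  var edges = [];
  for (var i = 1; i < samples.length; i++) {
    var wasBelow = samples[i - 1] < threshold;
    var isAbove = samples[i] >= threshold;
    if (wasBelow && isAbove) {
      edges.push(i);  // rising edge at sample i
    }
  }
  return edges;
}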

Other times edge detection is not so simple. If the signal drifts around over long periods of time, or the threshold for the occurrence of an edge doesn't necessarily happen at the same value every time, it's difficult or impossible to set a fixed threshold for detecting an edge. In cases like this, we need to do some further processing to build up a signal that we can assign a fixed threshold to for detecting edges. That's the kind of problem we'll attempt to solve here, detecting edges in a signal where the edges tend to happen anywhere within its range. Let's start by generating a representative signal that would be somewhat difficult to process.

Everyday DSP for Programmers: Signal Variance

After covering averaging in the last two posts, the natural next thing to look at is how much the signal varies around the average. This property of a signal is called signal variance, and there are a couple different ways to calculate it, depending on how the average has been calculated. Let's see how variance can be calculated and what it tells us about a signal.
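
For the simplest case, where the average is a single full average over the whole signal, the calculation is short enough to show up front (a sketch with my own naming; the variations for running averages come later):

// Variance around a full average: the mean of the squared differences between
// each sample and the mean of the whole signal.
function variance(samples) {
  var sum = 0;
  for (var i = 0; i < samples.length; i++) {
    sum += samples[i];
  }
  var mean = sum / samples.length;

  var sumSquares = 0;
  for (var j = 0; j < samples.length; j++) {
    var diff = samples[j] - mean;
    sumSquares += diff * diff;
  }
  return sumSquares / samples.length;
}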

Everyday DSP for Programmers: Step Response of Averaging

Last week we took a look at different kinds of averaging, and used them to analyze historical gas prices. Looking at a complex signal like gas prices gives us a nice comparison of the behaviors of the various averaging methods, but that only gives us an idea of what averaging does for one specific signal. What if we want to understand what the different averaging methods do in a more general way?

One way to analyze the different methods is by applying them to the fundamental signals. The output that results from applying an averaging function to one of the fundamental signals is called the response of the function. If the signal is the DC signal, it's called the DC response. If the signal is the step function, it's called the step response, and so on. We'll look at the step response in more detail, but first, let's briefly discuss the responses of the different averaging functions to each of the fundamental signals.

Everyday DSP for Programmers: Averaging

After finishing up the introductory concepts of DSP last week, we've now moved firmly into the digital, sampled domain, and we're going to start taking a look at the programming algorithms that are commonly used in DSP. Fundamentally, digital signal processing has a lot in common with statistics. When we start analyzing signals, some of the first tools we look at using are based in statistical analysis: maximum and minimum values, averages, and variances.

Maximums and minimums mostly come into play when bounding and normalizing signals. If you want to find the amplitude of a signal, then you'll scan through the samples to find the extreme values. They're fairly straightforward, and there's no need to delve into them until they're needed.

Averages are another story, though. Not all averages are created equal, and different types of averages are more useful in different situations. We'll take a look at five types of averages in this post, including the full, block, moving, exponential, and filter averages. To ground the discussion a bit, let's apply these different averages to a real-world dataset. Gas prices are something that most people watch pretty closely, and they happen to form a series with some interesting characteristics, so let's go with that. I'll use the weekly U.S. regular reformulated retail gasoline prices from the last twenty years, made available online through the EIA.gov website.
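
Two of those averages, the moving average and the exponential average, will come up again and again in this series, so here's a quick sketch of each (my own illustrative code; the gas price series would be the samples array passed in):

// Moving average: the mean of the last windowSize samples, recomputed as each
// new sample arrives. Until the window fills up, average what we have so far.
function movingAverage(samples, windowSize) {
  var output = [];
  var sum = 0;
  for (var i = 0; i < samples.length; i++) {
    sum += samples[i];
    if (i >= windowSize) {
      sum -= samples[i - windowSize];
    }
    output.push(sum / Math.min(i + 1, windowSize));
  }
  return output;
}

// Exponential average: each new sample nudges the running average by a
// fraction alpha, so old samples fade out gradually instead of dropping off.
function exponentialAverage(samples, alpha) {
  var avg = samples[0];
  var output = [];
  for (var i = 0; i < samples.length; i++) {
    avg = alpha*samples[i] + (1 - alpha)*avg;
    output.push(avg);
  }
  return output;
}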

Everyday DSP for Programmers: Sampling Theory

It's time to tackle sampling theory. The last couple posts dealt with concepts that are equally applicable to the analog and digital worlds of signal processing, but once a signal has been sampled, we have landed squarely in digital signal processing because the signal has been digitized. Sampling theory sounds like a big, scary thing that takes months of study to comprehend and makes your brain melt. It isn't. Sure, it can get complicated if you dig down deep enough, but we're going to steer clear of those dark trenches and focus on the high-level concepts that will build up your intuition for the issues surrounding digital signals.

Everyday DSP for Programmers: Transforms

Programmers work with signals all the time. We may not normally think of the data we're working with as signals, but treating it that way and learning some basic ways of processing those signals can come in quite handy. Last week I went over the basic signal types that are commonly used in digital signal processing. Today, I'll cover the basic ways we can change a signal using a set of transforms.

These transforms can be used in a variety of ways to accomplish different goals when processing signals, and we'll look at a few use cases as we go through the transforms. We'll need an example signal to apply these transforms to, and I can think of no better signal to use than the workhorse of DSP—the sine wave. The basic equation for a sine wave is

f(t) = sin(t)

where t is the variable of time, marching forever forward. This basic equation will get embellished with other parameters as we explore the different transforms. We'll start off with the simplest of the transforms.
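
As a preview of where those embellishments lead (the parameter names are my own), the fully decorated sine wave has an amplitude, a frequency, a phase, and an offset:

// A sine wave with all of the parameters the transforms will adjust:
// amplitude, frequency (in radians per unit time), phase, and offset.
function sineWave(t, amplitude, frequency, phase, offset) {
  return amplitude * Math.sin(frequency*t + phase) + offset;
}

// The basic f(t) = sin(t) is the special case where amplitude and frequency
// are 1 and phase and offset are 0.
var sample = sineWave(0.5, 1, 1, 0, 0);  // same as Math.sin(0.5)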

Everyday DSP for Programmers: Basic Signals

Digital Signal Processing (DSP) is a field that can be incredibly useful to almost any programmer.  When you think about DSP, you probably think of complex computational tasks for audio filtering, image processing, or video editing, but DSP is much more general than that.

The name 'DSP' aptly describes what it is. It has to do with the digital domain, which is directly applicable to any kind of programming. It deals with signals, which include much more than the audio and video signals we normally think of. Any series of data points that changes over time is a signal. That means a stock's price history is a signal, your kid's height over time is a signal, and the number of users on a website is a signal. Signals are all around us, and we deal with them every day. Finally, DSP involves processing those signals in some way to either transform the signal into something more useful for a given purpose or extract desirable information from the signal. Encoding that processing in a fast, automated program is a natural fit for what a programmer does.

A Barely Adequate Guide to JavaScript Canvas

Whenever I want to add a new feature to this blog, I write a post describing the process of researching and implementing that feature first. I've done two Barely Adequate Guides so far, one on syntax highlighting with JavaScript and one on JavaScript charts. The goal with these guides is twofold. First, I get to document what I'm learning about how to implement these features for future reference. Second, it shows at least one way of researching, choosing, and working with a new software library starting from square one.

Nearly everything we do as programmers has already been done before. Maybe not exactly in the same way as we're trying to do it this time, but pretty much every component that makes up any given project already exists in some form out on the Internet, probably in more than one library. That means we have a choice as to how to spend our time. We can either write that component from scratch and know exactly how it works, or we can find a library and integrate it into our project. Each method has its benefits and costs, and I'll normally spend at least a small amount of time looking for a ready-made solution. If I can find one quickly, it's robust, and it has a decent API, I can save a lot of time by programming at a higher level than I would otherwise.

An Initial Search


For this Barely Adequate Guide, I want to learn about the HTML5 canvas and start using a JavaScript library that makes using the canvas even easier. I'm planning some articles that will make good use of animated drawings, and a good canvas library should prove useful for future posts as well. I start, as I always do, with a Google search:

Google Search: javascript canvas

I start with the basic "javascript canvas" search terms to see what comes up. (By the way, I just had a great experience getting this screen cap. I switched from Windows to Linux about half a year ago, and this is the first time I did a SHIFT-PrintScreen. The cursor turned into a crosshair, and I selected an area to capture. Then a dialog box appeared where I could name the file and choose where to save it. This is a way better experience than having to muck around with Paint on Windows. Linux: 1, Windows: 0.) Here are the top ten results:
  1. Canvas tutorial - Web API Interfaces | MDN
  2. HTML5 Canvas - W3Schools
  3. HTML Canvas Reference - W3Schools
  4. HTML Canvas - W3Schools
  5. HTML Canvas Coordinates - W3Schools
  6. Canvas - Dive Into HTML5
  7. HTML5 Canvas Element Tutorial - HTML5 Canvas Tutorials
  8. Fabric.js Javascript Canvas Library
  9. Drawing on Canvas :: Eloquent JavaScript
  10. Canvas element - Wikipedia, the free encyclopedia 
This looks like a lot of HTML5 canvas stuff. I dive into the first link, but it looks like a lengthy tutorial so I back out. It's from the Mozilla folks and it's probably good, but I don't want to spend that much time right now. The next link leads me to a brief description of the HTML5 canvas with some code samples. After scanning through the samples, it's clear that most of the interaction with the canvas is already done through JavaScript, but I want something with higher-order functionality that will make drawing and animation even easier.

I skip past the other HTML5 links and take a look at the Fabric.js library. This looks promising. The Fabric.js library appears to have a lot of features, good documentation, and active support on Github. I take note and look at the next link. It's actually a chapter out of a free online book called Eloquent JavaScript. It looks quite good, but not what I need right now so I bookmark it and keep searching. With the tenth link, I've reached the end of the first page of search results, learned a bit about the canvas, and found one higher-level library.

Refining the Search


I could continue paging through the results, but I think I'll have better luck refining my search.

Google Search: best javascript canvas library

That's right. I want the best. I figure by using the 'best' search term, I'll get at least a few links that will rank or review different JavaScript libraries. I'm not disappointed:
  1. My Favorite 5 JavaScript Canvas Libraries • Crunchify
  2. html5 - Current State of Javascript Canvas Libraries? - Stack Overflow 
  3. powerful javascript canvas libraries - Stack Overflow
  4. EaselJS | A JavaScript library that makes working with the ...
  5. What are the best JavaScript drawing libraries? - Slant
  6. 10 Cool JavaScript Drawing and Canvas Libraries - SitePoint
  7. Fabric.js Javascript Canvas Library
  8. Chart.js | Open source HTML5 Charts for your website
  9. Brief Overview of HTML5 Canvas Libraries - JSter Javascript ...
  10. Pixi.js - 2D webGL renderer with canvas fallback
It looks like six of the top ten results are rankings or reviews of JavaScript canvas libraries. Then there are three sites for popular canvas libraries (EaselJS, Fabric.js, and Pixi.js) and another site for a JavaScript chart library that happens to use the canvas but is not a general purpose drawing library. The second and third links catch my eye because they're from Stack Overflow, a reliable programmer's resource even if they tend to close these types of questions.

The second link leads me to a comparison table of JavaScript canvas libraries in a Google doc. It looks a little out of date and doesn't have too many different comparison points, but it's still useful to browse through. I find Pixi.js, EaselJS, and Fabric.js at the top of the list again, along with Paper.js. Both Paper.js and Fabric.js appear to be much larger than the other two libraries, although Fabric.js is more modular so it's potentially comparable to the smaller libraries if I only use the core of it.

I follow the links to each library's website, and I find that they all have demos, example code, and documentation ranging from good to great in quality. In the short time that I spend on each site, I feel that Paper.js and EaselJS have slightly more complicated APIs, while the Pixi.js and Fabric.js APIs look immediately familiar. Then I visit each of the GitHub repositories to get a sense of how active and popular the libraries are. They all seem to be well maintained with regular and recent checkins and good notes in the readme files for getting started. Pixi.js has by far the most watchers, stars, forks, and contributors, with the other three libraries having similar stats to each other for the most part.

Between its potential ease of use, small size, and popularity, I decide to go with Pixi.js. I wouldn't normally go with the crowd so quickly when choosing a library, but in this context it's a reasonable course of action. If this decision was for a more complex project that depended on a drawing library for its success, I would do a more in-depth analysis and probably experiment with all of the top contenders, but for using a library with this blog, any of these libraries would likely be more than adequate. I want something I can learn quickly, and I want to waste as little time as possible deciding which library to use so I can start learning the library sooner. Clear code examples and popularity are a decent first-order proxy for a detailed evaluation in this case.

Experimenting with Pixi.js


The first thing I need to do to start playing with Pixi.js in my blog is to load the JavaScript library. I can either load the source file from https://cdn.rawgit.com/GoodBoyDigital/pixi.js/dev/bin/pixi.min.js or clone the git repo from https://github.com/pixijs/pixi.js, host the pixi.min.js myself, and load it from that location.

One easy way to host JavaScript files is to set up GitHub Pages with this tutorial, make a javascript folder to dump in any JavaScript files you want, and push them up to GitHub. Then you can source the files from your own <username>.github.io URL. I chose to load the source from the main GitHub project, so I added this line within the <header> tag of my Blogger template:
<script src='https://cdn.rawgit.com/GoodBoyDigital/pixi.js/dev/bin/pixi.min.js' type='text/javascript'/>
With the library loaded, I can start experimenting with drawing shapes on a canvas. Let's start with drawing a simple line. On the examples page on GitHub there's a Graphics example with sample code. Pulling out the necessary setup code and the code to draw a single line gives me the following first attempt:
<div id="canvas-line" style="width: 550px; height: 400px">
</div>
<script type='text/javascript'>
 var canvas = $('#canvas-line');
 var renderer = PIXI.autoDetectRenderer(canvas.width(), 
                                        canvas.height(), 
                                        { antialias: true });
 canvas.append(renderer.view);

 var stage = new PIXI.Container();

 var graphics = new PIXI.Graphics();

 graphics.lineStyle(10, 0xffd900, 1);
 graphics.moveTo(50,50);
 graphics.lineTo(500, 300);

 stage.addChild(graphics);

 animate();

 function animate() {
  renderer.render(stage);
 }
</script>
The code starts out by defining a <div> with an id that I can refer to in the JavaScript code. Then the JavaScript finds the element with that id, creates a renderer and a canvas, creates a stage for adding graphics contexts, and creates the first graphics context. The graphics context is what's used to draw things, in this case a line. The line style is defined for the context and then the line is defined with a starting point and an end point. Finally, the context is added to the stage and the animate() function is called to render the drawing. The result looks like this:


Not too bad, but a bit rudimentary. Let's try something a bit more interesting. How about an oscillating sine wave? For this drawing, I'll have to add animation, but that shouldn't be too hard because animation support is already there by default in the animate function. The hardest part is constructing the sine wave from line segments. Here's the code:
 var canvas = $('#canvas-sinewave');
 var renderer = PIXI.autoDetectRenderer(canvas.width(),
                                        canvas.height(),
                                        { antialias: true });
 canvas.append(renderer.view);

 var stage = new PIXI.Container();

 // Create the graphics context once and position it where the wave starts.
 var sinewave = new PIXI.Graphics();
 stage.addChild(sinewave);
 sinewave.position.x = 75;
 sinewave.position.y = 200;

 var count = 0;

 animate();

 function animate() {
  count += 0.1;

  // Redraw the wave from scratch on every frame.
  sinewave.clear();
  sinewave.lineStyle(4, 0xff0000, 1);

  // The counter slowly oscillates the wave's amplitude between -100 and 100.
  var amplitude = Math.sin(count) * 100;
  sinewave.moveTo(0, 0);
  for (var i = 0.1; i < 6.3; i+=0.1) {
    var x = i*63.5;
    var y = amplitude*Math.sin(i);
    sinewave.lineTo(x, y);
    sinewave.moveTo(x, y);
  }
  renderer.render(stage);
  requestAnimationFrame( animate );
 }
In this case I create a graphics context for the sine wave outside of the animate function and set its origin to where I want the sine wave to start. Then in the animate function, I increment a counter that determines what the amplitude of the sine wave is for the current frame and draw out the sine wave with line segments. Finally, I have to request an animation frame to advance the animation. The result looks like this:


Mesmerizing. That's not bad for a few hours of experimentation and a couple dozen lines of code. Now I can add animated graphics to my posts, and I learned a new trick to add to my repertoire. Pixi.js is a pretty deep library, and you can do a ton of different things with it. It would probably take months to become proficient with it, but getting a handle on it in a few hours was worth the time. If you're in need of graphics or animation for your website, it's a good place to start.

How to Determine if Something is Good

I recently came across an old article entitled How Art Can Be Good by Paul Graham that I had tagged as something of interest, and I decided to read through it again. In it he argues that art can be good in an objective sense, and that good art is not simply a matter of good taste. I found this article fascinating because while I agreed with his premise that art can be objectively judged, I disagreed with most of his arguments. With most of his articles, he exhibits clear, sound reasoning, and I learn a great deal from his writing. This article is peculiar in that the reasoning seemed much weaker and more vague, but it still made me think deeply about what makes a thing good, so I want to explore that idea more carefully.

Keep in mind that Graham's article was written in December of 2006. He may no longer hold the same beliefs that he did when he wrote this article, and I'm not trying to tear down his ideas about good art or attack him in any way. I respect him both as a writer and as a thought leader in the tech startup community. I'm merely attempting to analyze the reasoning in the article and describe my own thoughts on the subject.

Being able to determine if something is good has much practical value, especially if you're the one creating the thing that you hope is good. When creating a work of art, or any other product for that matter, you want to have a keen sense of what makes it good because the better a product is, the more value it will have for more people. I'm going to widen the net well beyond art at this point because the qualities that make art good can apply to almost anything, so we'll be comparing art to music, movies, video games, literature, food, consumer products, Mathematics, and, of course, software.

You Keep Using That Word…


Right from the outset, Graham entangles the idea of good art with the idea of good taste, arguing that one cannot exist without the other:
One problem with saying there's no such thing as good taste is that it also means there's no such thing as good art. If there were good art, then people who liked it would have better taste than people who didn't. So if you discard taste, you also have to discard the idea of art being good, and artists being good at making it.
He later concludes this argument by saying that if there is no way to make art better, then the Sistine Chapel is as good as a blank canvas, and since that's absurd, we have a contradiction. Therefore good art exists. I agree with the conclusion, but the argument is either circular, a straw man, or a slippery slope. I can't decide which one it is.

First, taste is a characteristic of the person who is judging the art, not a characteristic of the art itself, so they are already two different things. It's easy to think of something that I consider to be good but that I personally don't like. Take music, for example. There is plenty of music, whole genres in fact, that I don't like to listen to, but I can still appreciate that songs from those genres take great skill to perform and that plenty of other people like those songs. I also have guilty pleasures that I know aren't that good in a musicality sense, but I enjoy them all the same. Does that mean I have good taste for some music but not other music? No, it means I have varied tastes in music, and my tastes are different from other people's.

Second, good and better are also different things, yet he uses them as if the person with better taste automatically determines what is good taste. The world is not so narrow and simple. Faulkner and Hemingway are both considered good writers (an understatement, I know), but if one person liked both of their works and another person only liked Faulkner, would the person who liked them both have better taste? There are literary experts that like Faulkner's stream of consciousness style and hate Hemingway's terse, matter-of-fact style and plenty of other experts that love Hemingway's writing and hate Faulkner's. They can still agree that both are great authors and are important to read. Neither literary expert is better than the other because of which author they prefer, and the authors are difficult to rank because they are so different from each other.

Finally, each step of this argument doesn't immediately follow from the previous one, and I think that's because the definition of good is vague and indeterminable. In the context of art, the word good can have at least two distinct meanings that I've only alluded to so far. Art can be good in the sense that it takes great skill to create, and it can be good in the sense that it evokes strong emotions in the viewers. Art created with great skill can include new techniques that have never been used before. There are numerous examples of famous paintings that were some of the first to use perspective effectively. At the time the technique was discovered, perspective was a difficult skill to master, and the paintings that showed good use of perspective were ground-breaking. Today a painting can't be considered good purely for its use of perspective.

We can see the distinction between these two concepts of good in different kinds of movies. Certain movies break ground with special effects. The Matrix is a great movie partly because so many incredible special effects were invented for it, and they were put to good use telling the story of the movie. On the other hand, I think Mr. Magorium's Wonder Emporium is a great movie partly because of the strong reactions of joy and sadness I get while watching it. I can jump into that movie at any time during the last third of it, and I'll be choked up within minutes.

Good has multiple distinct meanings, and throughout Graham's article the word seems to shift between these two meanings of requiring great skill and evoking strong emotions. It is generally easier to objectively analyze whether or not something requires great skill than it is to analyze that it evokes strong emotions, but they both play a part in making something good. Good can have other meanings as well, in the sense of morality for example, but Graham didn't get into those and I won't, either.


Know Your Audience


After the introduction, Graham goes on an extended discussion of what type of audience we're talking about when someone says that a piece of art is good. Who is the art good for? Who would appreciate the work of art? After touching on a number of characteristics that would appeal to people generally, noting that art can appeal to different groups of people in different ways, and even bringing aliens into the discussion, he settles on all human beings as the intended audience when art is described as good.

I generally agree with his reasoning here, although it is a bit drawn out and the bit about aliens seemed unnecessary. He did overlook an important group of people, though—experts in the field. He was arguably talking about how to judge whether a work of art is good for a general audience, but expert opinion is still important because experts in a field will have a much different perspective on something in their field than the general populace would. Once you have a certain level of knowledge about a given subject, your ideas about what is important, what is difficult, and what is elegant will change dramatically.

In mathematics, proofs that are particularly elegant can be considered beautiful. This is something that the average person will likely not appreciate, but an expert mathematician will see beauty where anyone else would see incomprehensible jargon. Sometimes the beauty comes from structuring a proof in a way that neatly solves the theorem in a more concise way than was thought possible. Other times the beauty comes from building up intricate mathematical machinery, and just when you think things are going to get even more complex, everything falls into place and the solution practically drops out of the proof's structure quite unexpectedly. These are forms of beauty that take intense study and specific knowledge to appreciate.

Experts will also see depth in something that the average person will fail to notice. A good example of this phenomenon happens in video games. Some games have layers of depth in their gameplay that aren't at all obvious on a single play-through. Devil May Cry is an old action game from 2001 where you are a half-demon named Dante who fights demons with a sword and guns. At first glance it's a run-of-the-mill action game, but if you spend enough time with it, you'll discover all kinds of expert-level secrets in the game. The enemies and levels are designed so that it's possible to complete every level without ever taking a hit, and the game keeps track of that, giving you bonuses and special rankings for good performance. The holy grail is a perfect SS game where you never take a hit for the entire game and complete each level within its time limit. To achieve this feat, you need to learn special tricks for using your weapons and defeating enemies. The game has a ton of depth that only experts will see and appreciate. Most of the games that are considered the best by expert gamers are like this, and the opinions of the expert audience are important when determining whether a game is good or not.

Comparing Apples and Oranges


Graham moves on to a discussion of how the general public leans in its preferences for art, and how those preferences stem from errors of personal bias and artists' tricks. He claims that good art can't be determined by a vote the way a good apple or a good beach could because the public is easily swayed by branding and advertising. I disagree both with the claim that voting is ineffective in determining if something is good and with the idea that apples and beaches are all that different from art.

Regarding voting, it's true that you may not think something is good while the majority of people think it is good, but it's likely that the set of things that come to the top in a vote includes some pretty good stuff. (Let's ignore politicians here because no one's going to agree that that statement holds for them.) Take smart phones as an easy example. People vote for smart phones with their wallet, and the top 10 list of best selling smart phones is dominated by Apple, Samsung and Xiaomi. You may not like iPhones. You may not like Galaxy phones. But it's pretty likely that you'll find a good phone on that list. They all have good performance, good build quality, and good design.

I find voting to be a pretty good guide for finding all kinds of good stuff. It's how I choose books to read on Amazon. I will nearly always pass on a book that gets less than a four-star rating because in my experience, they've been almost universally bad. I have a much better chance getting a good book with a four- or five-star rating, even though the occasional stinker still makes it through. It takes a strong recommendation from someone I trust to convince me to read a three-star rated book, and a rating less than that is out of the question. Life's too short, ya know?

Regarding apples, it seems entirely possible that I would not like an apple that the majority of people vote as being exceptionally good. What if I like sour apples and most people like sweet apples? What if I like my apples tart, covered in cinnamon, and baked in a pie? What if I hate apples? The point is that there are enough kinds of apples in the world that deciding which ones are good isn't all that different than deciding which works of art are good. Reasonable people are going to disagree on what is good art and what is a good apple. It kind of comes down to what tastes good to you, and now we've come full circle.

I Said I'd Talk About Software


All of these ideas about what makes good art or movies or smart phones also apply to software. The definition of good changes slightly because the user of a piece of software likely doesn't care whether it took a lot of skill to make, and it's unlikely to evoke strong emotions of joy or sadness. Maybe it will cause intense frustration, but that shouldn't be a goal. What good software does do is make the user feel highly skilled. It gives the user a great sense of accomplishment because she can do things easily that were difficult or impossible before. It should also be enjoyable to use and help the user feel productive. These things will make the user happy.

Expert users will find elegance and depth in good software. I am continually impressed by the cleanness of features in Firefox and the limitless capabilities of Vim, although I wouldn't say I'm an expert in either, yet. The point is that good software has layers of depth that keep enticing advanced users to explore further and discover new powers to make themselves more awesome. In the end good software, like good art, is still a matter of taste. Some people will like Chrome instead of Firefox. Some people will like Emacs or Sublime instead of Vim. For something to be good, it has to meet the requirements of being high quality and being appealing to an audience, and beyond that people's preferences are going to come down to taste.

So coming back to Graham's article, I think there are ways to determine objectively if art is good, but once those objective criteria are met, we are still left with a wide variety of art. Then it comes down to personal preferences, and different people are going to like different things. Ideally, good art is determined by both objective criteria and taste, and on that we seem to agree.

Tech Book Face Off: Refactoring Vs. Refactoring to Patterns

I've read a number of books that talk about and encourage refactoring—the practice of modifying code to make it cleaner and clearer without changing its function—but I've never read a book about refactoring until now. I decided to flesh out my knowledge of various refactoring techniques with Martin Fowler's classic Refactoring: Improving the Design of Existing Code, and because refactoring is often used to convert existing code into various software design patterns, I paired it with Joshua Kerievsky's Refactoring to Patterns. I normally read two related books on a subject to cover the subject in more detail and from different perspectives, and in this case the two books turned out to be intimately related, with Refactoring to Patterns building on and referencing Refactoring extensively. Here is an in-depth review of these two books.

Refactoring: Improving the Design of Existing Code VS. Refactoring to Patterns

Refactoring


Martin Fowler decided to write this book because at the time many books were being written about the new agile design movement, refactoring was an integral part of programming in agile design methodology, and he felt that no one else was going to write about the details of refactoring. The result is this clear and thorough book describing 68 low-level refactorings and 4 higher-level refactorings.

Before getting into the nuts and bolts, he spent a few chapters introducing refactoring, discussing when and why you would refactor, and how to identify code that could benefit from refactoring with a list of "code smells" that reek of bad code. These introductory chapters were an excellent condensed treatment of most of the agile principles found in other agile books, and Fowler had some well-reasoned arguments for his recommendations.

His advice includes such things as minimizing published interfaces within a team because interfaces create friction to code changes; starting with the simplest design if there is a clear path to refactoring to other designs; analyzing programs before optimizing for performance because the wasted time is not where you think it is; and concentrating tests where the risk of defects is highest because comprehensive testing has diminishing returns. He also has a lot to say about flexibility in design, and starts the discussion by cautioning against making designs too flexible too early:
Building flexibility in all these places makes the overall system a lot more complex and expensive to maintain. The big frustration, of course, is that all this flexibility is not needed. Some of it is, but it’s impossible to predict which pieces those are. To gain flexibility, you are forced to put in a lot more flexibility than you actually need.
He proposes that refactoring provides a better path that becomes a productive habit, and sums up KISS and YAGNI in two brief sentences:
Refactoring can lead to simpler designs without sacrificing flexibility. This makes the design process easier and less stressful. Once you have a broad sense of things that refactor easily, you don’t even think of the flexible solutions. You have the confidence to refactor if the time comes. You build the simplest thing that can possibly work. As for the flexible, complex design, most of the time you aren’t going to need it.
Later he brings it all together by explaining how refactoring can actually make the design process more flexible, even if the code initially seems less flexible:
As refactoring becomes less expensive, design mistakes become less costly. Because it is less expensive to fix design mistakes, less design needs to be done up front. Upfront design is a predictive activity because the requirements will be incomplete. Because the code is not available, the correct way to design to simplify the code is not obvious.
While most of the introduction was well written and a good review, the introductory content could largely be found in other excellent books (such as The Pragmatic Programmer or Clean Code). The real meat of Refactoring is in the set of refactorings that are named, explained, and shown with examples. Each refactoring is given a descriptive name, like Extract Method or Replace Magic Number with Symbolic Constant, that is mostly self-explanatory and easily recognizable. This name is followed by a short description of when to use the refactoring and a diagram of what it looks like. Then there is a motivation section describing in more detail when you would want to use it and a mechanics section describing the recommended steps to perform the refactoring. Finally, an example is shown for the refactoring with code so you can see what it looks like in practice.
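
To give a flavor of that format, here's roughly what one of the simplest refactorings, Replace Magic Number with Symbolic Constant, looks like. The book's examples are in Java; this is my own minimal version of the idea in JavaScript:

// Before: a magic number buried in the calculation.
function potentialEnergy(mass, height) {
  return mass * 9.81 * height;
}

// After: the number gets a name that explains what it is.
var GRAVITATIONAL_ACCELERATION = 9.81;
function potentialEnergy(mass, height) {
  return mass * GRAVITATIONAL_ACCELERATION * height;
}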

I found the examples to be the most valuable part of the book, and I would have been quite happy with just the name, short description, and example for nearly all of the refactorings. The mechanics sections were almost universally confusing because it's hard to describe code changes in words. The terminology and descriptions of moving code around end up becoming a muddled mess no matter how you organize the prose. As for the examples, I was thinking as I read through the refactorings that they could have been even further improved if they were set up as a set of check-ins to Git with code diffs for each step of the refactoring. Such a setup would be quite slick and very useful as a reference.

It quickly became apparent that most of the refactorings have dual refactorings that are complementary. In some situations you want to refactor in one direction, and in other situations you'll want to refactor in the opposite direction. This leads to refactoring pairs like Extract Method and Inline Method, Hide Delegate and Remove Middle Man, and Change Value to Reference and Change Reference to Value. Sometimes going through these pairs felt tedious since it seemed like repetition for completeness' sake.

By the end of the refactorings, I was wondering if most of them could be sufficiently summed up by one generic refactoring called Extract Behavior/Data and its dual Inline Behavior/Data—or maybe even the single Move Behavior/Data—with a few other oddball refactorings that don't quite fit that mold. Of course, that classification would be too generic to be especially useful for teaching, but it was an obvious way to think about what was being accomplished in most cases.

I can appreciate where Jeff Atwood was coming from when he described Refactoring as too prescriptive, although I don't think that criticism is entirely fair to Fowler's intent. Fowler meant for Refactoring to be a catalog of refactorings that you could peruse and select from, with suggested implementations. The programmer is free to follow a different path while using the catalog as a guide or a store of ideas.

I do wonder who would get a lot of value out of this book, though. I felt that most of the refactorings were trivial and obvious. Any programmer with a few years of experience should be able to come up with them on their own, and the handful of refactorings that were novel didn't require a 460-page book to present them. A programmer who needs to learn most of these refactorings is probably more of a novice and would be better served by gaining real programming experience and discovering refactorings on their own. They'll remember how to refactor much better from experimentation and exploration than from reading through a catalog.

Refactoring to Patterns


This book clearly builds on Refactoring with numerous references to the original refactorings and bridges the gap between basic refactorings and design patterns. The book's format is also very similar to Refactoring, with a few introductory chapters explaining what refactoring is, what design patterns are, and what code smells are. The rest of the book is the catalog of refactorings that lead from ad hoc designs to specific design patterns with the same series of description, diagram, motivation, mechanics, and example sections for each refactoring as Fowler's book.

Like Refactoring, the whole book is quite clear and readable, maybe even more so. Kerievsky is a good writer, and I enjoyed reading his many anecdotes, especially in the introductory chapters. He also had several snappy comments warning programmers not to use patterns too much:
The patterns-happy malady isn’t limited to beginner programmers. Intermediate and advanced programmers fall prey to it too, particularly after they read sophisticated patterns books or articles.
And not to optimize to patterns prematurely:
If you want to be a good software designer, don’t optimize code prematurely. Prematurely optimized code is harder to refactor than code that hasn’t been optimized. In general, you’ll discover more alternatives for improving your code before it has been optimized than after.
He was also careful to explain what the reader should really be getting from the book:
The true value of this book lies not in the actual steps to achieve a particular pattern but in understanding the thought processes that lead to those steps. By learning to think in the algebra of refactoring, you learn to solve design problems in behavior-preserving steps, and you are not bound by the small subset of actual problems that this book represents.
I totally agree with this sentiment. To learn anything well, it's best to understand the fundamental reasons why and how something works. Once you've mastered the fundamentals, the higher-level stuff that used to be confusing and hard now has a place in your brain to hook into, and it becomes much easier. Once you have internalized the methods for refactoring to patterns and clearly understand how to do it, it becomes trivial to work through a much wider array of refactorings to any pattern you desire. In fact, to learn how to refactor efficiently, it's probably best not to try to memorize these refactorings, but to work out the steps on your own and gain a much more solid understanding through practice and discovery. That kind of understanding is difficult to achieve purely through reading a book. It's a lot like a mechanical puzzle, like a Rubik's Cube, where the satisfaction and the real learning come from figuring it out for yourself. It's almost impossible to learn and remember how to solve a Rubik's Cube by watching someone else do it, and refactoring has the same sort of feel to it.

Because it shares the same format as Refactoring, this book also suffers from the same shortcomings. The mechanics sections are again filled with complicated terminology and confusing descriptions, and it's much better to skim or skip them. The examples are the most important part of the refactorings, but they would be much improved by checking the code into Git as a series of commits that could be browsed and compared to more clearly show the evolution from less structured code to design pattern. Despite these issues, Refactoring to Patterns was an interesting read. I wouldn't say it's a must-read, but it's worth a look if you're curious and like reading about code transformations.

Refactoring is Something You Do


I tend to judge books based on how much I learn from them. The more sparks of insight I get and the more buzzing my brain does, the more engaged I am in the book. Unfortunately, with Refactoring and Refactoring to Patterns, I did not feel like I learned much. It was a good review and a nice overview of available options when cleaning code, but what I basically learned was that the Creation Method pattern I had worked out on my own a couple years ago is actually a thing (and has been for a while) and that refactoring is something you do, not something you read about. The best way to get good at refactoring is to write a lot of code, realize it's ugly and dirty, study some books on patterns and writing clean code, and then figure out the puzzle of refactoring for yourself by cleaning up your code and making it beautiful. Studying refactoring won't get you very far. Practicing refactoring will.
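
For anyone who hasn't run into the name, a Creation Method is essentially a named static method that constructs an object in one specific, intention-revealing way. Here's a minimal sketch with a made-up Loan class (my own code, not Kerievsky's example):

```cpp
// Instead of overloaded constructors whose parameter lists all look alike,
// named static methods say exactly what kind of object you are creating.
class Loan {
public:
    static Loan createTermLoan(double commitment, int riskRating, int maturityYears) {
        return Loan(commitment, riskRating, maturityYears, /*revolving=*/false);
    }

    static Loan createRevolvingLoan(double commitment, int riskRating) {
        return Loan(commitment, riskRating, /*maturityYears=*/0, /*revolving=*/true);
    }

private:
    Loan(double commitment, int riskRating, int maturityYears, bool revolving)
        : commitment_(commitment), riskRating_(riskRating),
          maturityYears_(maturityYears), revolving_(revolving) {}

    double commitment_;
    int riskRating_;
    int maturityYears_;
    bool revolving_;
};
```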

Exploring Different Types of Programming

Not all programming is the same. In fact, programming can be split into quite different categories depending on what type of program you're writing or what type of hardware you're writing for. Different types of programming, let's call them programming styles, have different constraints and require different environments and design trade-offs for programmers to work effectively in each style.

Programmers working in different styles will care deeply about entirely different aspects of programming. That's why you often see programmers vehemently debating the merits of opposing programming methods without ever reaching agreement. From each of their perspectives, their way is right. The defining difference in the argument is actually one of context.

To get a general overview of the different programming styles out there, and to get a better sense of what other programmers are concerned about with their code, let's look at the various programming styles. Some of these styles I know quite well because they are my day job, some of them I'm learning, some of them I've dabbled in, and some of them I know only from what I've heard. I'll try to be careful in what I say about those styles that I don't know too much about. There may be other styles that I haven't thought of, but these should be the main ones.

Embedded Programming


The main programming style I've been doing for the last few years is embedded programming. In this style, you are programming for a particular microprocessor or closely related family of microprocessors. The software you write is called 'firmware' and it will be programmed directly into flash on the processor or an external flash chip connected to the processor. Normally, this firmware is not changed very often, if at all, during the life of the product in which it resides. It's firm, not soft, hence the name.

Even within embedded programming, the differences in constraints from one programmer to another can be quite large, depending on whether the programmer is targeting a beefy quad-core GHz ARM processor or a meager 8-bit microcontroller. In most cases, the processor is chosen to have just enough horsepower to do the job it's meant to do, or more likely, slightly less power than the job requires, and the programmer has to make up the difference with crafty coding skills. Low power is a common consideration when choosing an embedded processor, and much of firmware design involves figuring out how often, how long, and how deeply you can put the processor to sleep.

Embedded processors have a number of communication interfaces and peripherals, and the embedded programmer must become well versed in bit-banging registers in the processor to configure these peripherals to interface with the outside world through attached sensors, storage devices, network interfaces, and user interfaces. Programs are mainly interrupt driven with real-time deadlines to meet. An RTOS (Real-Time Operating System) will provide mechanisms for defining interrupt service routines, tasks to run when interrupts aren't being handled, events and message queues for communicating between interrupts and tasks, and locks and semaphores for synchronization.
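
To show what that register-level configuration looks like, here's a small sketch; the UART register addresses and bit positions are completely made up, since the real ones come from whatever part's datasheet you happen to be working from:

```cpp
#include <cstdint>

// Hypothetical memory-mapped UART registers; addresses and bit positions
// are invented for illustration and would come from the part's datasheet.
constexpr std::uintptr_t kUartBase = 0x40001000;
volatile std::uint32_t* const UART_CTRL   = reinterpret_cast<volatile std::uint32_t*>(kUartBase + 0x00);
volatile std::uint32_t* const UART_STATUS = reinterpret_cast<volatile std::uint32_t*>(kUartBase + 0x04);
volatile std::uint32_t* const UART_DATA   = reinterpret_cast<volatile std::uint32_t*>(kUartBase + 0x08);

constexpr std::uint32_t CTRL_ENABLE    = 1u << 0;  // peripheral enable bit
constexpr std::uint32_t CTRL_RX_IRQ_EN = 1u << 3;  // receive-interrupt enable bit
constexpr std::uint32_t STATUS_TX_BUSY = 1u << 1;  // transmitter still shifting

void uart_init() {
    // Set the enable and interrupt bits without disturbing the rest of the register.
    *UART_CTRL |= CTRL_ENABLE | CTRL_RX_IRQ_EN;
}

void uart_putc(char c) {
    // Busy-wait until the transmitter is free, then write the data register.
    while (*UART_STATUS & STATUS_TX_BUSY) {
    }
    *UART_DATA = static_cast<std::uint8_t>(c);
}
```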

Programming environments are most often proprietary and provided by the microprocessor vendor. All of the ones I've used have been Eclipse-based, and they provide a debugging interface through a hardware emulator that you connect to the processor, giving you the normal debugging features of breaking into and stepping through code, reading and writing memory, and inspecting the processor's registers. They also usually display peripheral register values in a decoded, human-readable way, show various characteristics of the RTOS state, and allow some level of profiling of running code. Non-proprietary IDEs are available as well, but they tend to be expensive.

In general, embedded programming happens very close to the metal, and you have to exert fine control over the resources available in the processor. Programming is usually done in C or C++, although Lua, JavaScript, and Python are starting to make inroads. It's as close to classic programming as you can get, with all of the considerations of limited memory spaces, special hardware instructions, and printing characters through a UART terminal included.

Systems Programming


Systems programming is the design and implementation of software that interfaces between hardware and other types of software, otherwise known as operating systems. Windows, Linux, iOS, Android, and the RTOS that an embedded programmer uses are all examples of operating systems. Systems programming is similar to embedded programming in many ways because the programmer needs intimate knowledge of the hardware, but whereas an embedded program normally targets a specific microprocessor, an operating system will run on a wider variety of hardware and include drivers for many, many more peripherals.

Operating systems provide the basic services that other programs use, like disk and file management, virtual memory, preemptive multitasking, device drivers, and application loading, to name a few. The systems programmer has to worry about designing algorithms that will have high performance on a wide variety of hardware, writing drivers that work with an incredible variety of peripherals, and making sure that programs play nicely together without stepping on each other's toes or bringing down the operating system altogether. Security and reliability are constant concerns of the systems programmer.

Most systems programming involves at least some C, and more likely a lot of C. The core of the operating system, referred to as the kernel, is normally written in C. C++ and Java are also commonly used outside of the kernel. Development environments are as varied as the programmers doing the work, but there are often many specialized tools written specifically to support developers working on a particular operating system. Systems programming requires strong knowledge of algorithms and data structures, a meticulous eye for detail, and an ability to think about software running at the low level of the hardware-software interface.

Language and Compiler Design


Designing languages and writing compilers and interpreters is similar to systems programming in that programming languages are an interface between hardware and the software programs that run on that hardware. When the language runs on a virtual machine (VM) it even further blurs the line between language design and systems programming because the VM provides many of the same services as an OS. A programming language is not an OS, though. It's a lower level construct than an OS, and while an OS provides abstractions for various hardware features and peripherals of the processor, programming languages provide abstractions for the machine instructions and computational model of the processor.

Like embedded programmers and systems programmers, compiler writers are concerned with low-level performance, but they have to worry about performance in at least three languages: the host language that the compiler is written in, the source language being compiled, and the target language that the compiler outputs. After all, a compiler's basic job is to translate from the source language that other programmers write in to a target language that either a VM or the processor executes. Many compilers are written in C or C++, although some are written in other languages; Rubinius, for example, is a Ruby implementation written largely in Ruby.
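
To make the source-to-target translation concrete, here's a toy sketch of my own (no real compiler is this simple): the "source language" is an arithmetic expression tree, and the "target language" is instructions for a tiny stack machine.

```cpp
#include <iostream>
#include <memory>
#include <vector>

struct Expr {
    char op = 0;                       // '+', '*', or 0 for a literal
    int value = 0;                     // used when op == 0
    std::unique_ptr<Expr> lhs, rhs;
};

struct Instr {
    enum class Kind { Push, Add, Mul } kind;
    int operand;
};

// "Compile": translate the expression tree into stack-machine instructions.
void compile(const Expr& e, std::vector<Instr>& out) {
    if (e.op == 0) {
        out.push_back({Instr::Kind::Push, e.value});
        return;
    }
    compile(*e.lhs, out);
    compile(*e.rhs, out);
    out.push_back({e.op == '+' ? Instr::Kind::Add : Instr::Kind::Mul, 0});
}

// "Run": a trivial VM that executes the target language.
int run(const std::vector<Instr>& program) {
    std::vector<int> stack;
    for (const Instr& i : program) {
        switch (i.kind) {
            case Instr::Kind::Push: stack.push_back(i.operand); break;
            case Instr::Kind::Add: { int b = stack.back(); stack.pop_back(); stack.back() += b; break; }
            case Instr::Kind::Mul: { int b = stack.back(); stack.pop_back(); stack.back() *= b; break; }
        }
    }
    return stack.back();
}

int main() {
    // Source "program": 1 + 2 * 3, built directly as a tree for brevity.
    auto two = std::make_unique<Expr>();   two->value = 2;
    auto three = std::make_unique<Expr>(); three->value = 3;
    auto mul = std::make_unique<Expr>();   mul->op = '*'; mul->lhs = std::move(two); mul->rhs = std::move(three);
    auto one = std::make_unique<Expr>();   one->value = 1;
    Expr root; root.op = '+'; root.lhs = std::move(one); root.rhs = std::move(mul);

    std::vector<Instr> program;
    compile(root, program);
    std::cout << run(program) << "\n";  // prints 7
}
```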

Compiler writers need to know the deep, dark corners of these languages to make the host and target code as fast as possible and to cover all of the things that any programmer could possibly do in the source language. Programmers want fast compile times, good interpreter performance, and the best runtime performance they can get, so compiler writers need to be well-versed in all of the low-level optimizations they can do to squeeze every last bit of performance out of their code, both the code the compiler runs on and the code it produces. On top of that, the code for the source language has to be interpreted correctly in all cases, even cases that most programmers will never think to exercise. Compiler writers need to think about and deal with complicated edge cases that result from interactions between seemingly simple language features that would make anyone's head hurt. I can't even begin to imagine how hard it must be to write a compiler for the more complicated languages (I'm looking at you, C++).

Application Development


The field of application development is as wide and varied as embedded programming. It ranges from the creation of simple utility apps like Notepad or a calculator to huge, complex apps like Adobe Photoshop or Firefox. The bigger an app gets, the more its development looks like systems programming, but at its core, an app provides a user interface and features to help the user do something useful with the computer. A computer with only an OS doesn't do anything meaningful. It's the applications that give the user the power to create and consume in all of the ways we've come to depend on.

For the most part, performance doesn't matter as much in application development. An application normally has huge amounts of memory, storage, and processor performance to work with, and it spends most of its time waiting for user input, which is a glacially slow process compared with the speed of the underlying hardware. More computationally or I/O intensive parts of an app may need to be optimized, but the code should be profiled for hot spots and bottlenecks before blindly trying to improve slow operations. It's easy to get fooled into optimizing parts of the app that don't need it while ignoring the parts that are actually causing poor performance. In most cases, though, the application programmer can get by with writing clean, clear code in a straightforward way and letting the compiler, OS, and hardware take care of making the app fast.
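
As a minimal sketch of measuring before optimizing, here's one crude way to check where the time actually goes; loadDocument and renderPreview in the usage comment are hypothetical stand-ins for whatever parts of your app you suspect:

```cpp
#include <chrono>

// Wrap a suspect call with a steady clock. A real profiler gives far more
// detail, but even this often shows the slow part is not where you expected.
template <typename F>
double millisecondsTaken(F&& work) {
    auto start = std::chrono::steady_clock::now();
    work();
    auto stop = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(stop - start).count();
}

// Usage sketch with hypothetical functions loadDocument() and renderPreview():
//   double loadMs   = millisecondsTaken([] { loadDocument(); });
//   double renderMs = millisecondsTaken([] { renderPreview(); });
```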

The application programmer has enough to worry about in making the app as easy to use and as useful as possible. Choosing the right features, organizing the app so that it makes sense to most users, and creating an awesome user experience are insanely hard to get right. Users are fickle and demanding (not on purpose; they have their own lives and stuff they want to get done), and striking the right balance of features in any application is a constant struggle.

The environments for application development are generally quite good. Debugging features are much better than those for embedded programming, and IDEs like Visual Studio and Xcode make building a GUI and setting up the boilerplate code for an application much faster than it used to be. Application development can be done in almost any language, but native applications are most commonly written in C#, Java, Objective-C, and, to a lesser extent these days, C++.

Mobile Development


I'm not convinced that mobile development is much different from application development, except that the development is done on a different platform from the one the app will run on. That gives mobile development some characteristics of embedded development, because you need to connect to the target system to debug the code running on it. Mobile development tools include a simulator for running the code on the host platform so that more debugging information is available while you're getting the app up and running. Otherwise, mobile development is very similar to application development in most respects.

Web Development


Web development is similar to application development in that the programmer is concerned with making a great user experience and producing a product that meets the needs of the majority of users for a given set of tasks. Also like application development, as a web application gets larger and more complex, development starts to look more and more like systems programming. The server infrastructure and services provided through APIs become the overarching concern as a web application grows and attracts more users.

Web development is different from application development in that network and database interfaces are a prerequisite for web development. A native application may have these things, especially the bigger applications, but a web application without them would be meaningless. The client and the server are linked, but undeniably separated, and the server needs to manage connections to hundreds, thousands, or even millions of clients whereas most native applications deal with a handful of users at most.

Web developers work in a wide variety of environments and languages. Text editors like Vim, Emacs, and Sublime Text are commonly used, but full-fledged IDEs like Eclipse, NetBeans, Visual Studio, or anything from JetBrains are also popular. To avoid reinventing a bunch of wheels in web programming, you'll use some kind of web framework, and there are numerous options in any language you choose. The most well-known frameworks are (in no particular order) Ruby on Rails for Ruby, Django for Python, ASP.NET MVC for C#, Grails for Groovy on the JVM, Laravel and Symfony2 for PHP, and AngularJS and Ember.js for JavaScript. Tools for web development are evolving quickly, and debugging requires a wide range of skills, from classic print statements and reading logs to modern site monitoring and testing with cloud-based tools from various companies.

Game Programming


I don't know much about game programming, since I've never made any serious video games, but I'll give this description my best shot. Game programming has elements of application development because of the game's user interface, systems programming because a game will normally have its own memory and storage management, and embedded programming because the game code needs to be as close to the hardware as possible and has real-time deadlines. The video game programmer is a slave to the frame clock, and everything that needs to be done for a particular frame—user input, computer AI, physics calculations, 3D model calculations, and graphics redrawing—needs to be done before the next frame gets drawn on the screen. That sequence needs to happen at least 60 times per second to make a smooth user experience.
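
A skeletal version of that frame loop might look something like this sketch (the input, simulation, and rendering functions are empty placeholders; real engines are vastly more involved):

```cpp
#include <chrono>
#include <thread>

// Illustrates the 60 Hz deadline: everything for a frame has to fit in
// roughly 16.7 ms or the next frame slips. These are hypothetical stubs.
void pollInput() {}
void updateSimulation(double dtSeconds) { (void)dtSeconds; }
void renderFrame() {}

int main() {
    using clock = std::chrono::steady_clock;
    const std::chrono::duration<double> frameBudget(1.0 / 60.0);

    auto frameStart = clock::now();
    for (int frame = 0; frame < 600; ++frame) {   // run ~10 seconds, then exit
        pollInput();
        updateSimulation(frameBudget.count());
        renderFrame();

        // Sleep away whatever is left of this frame's 16.7 ms budget.
        auto nextDeadline = frameStart + std::chrono::duration_cast<clock::duration>(frameBudget);
        std::this_thread::sleep_until(nextDeadline);
        frameStart = nextDeadline;
    }
}
```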

Game programming may have more similarities with the next style, scientific computing, than anything else. The graphics programming is especially computationally intensive, and the graphics card is a massively parallel processor that's designed specifically for doing complex vector calculations. The most common game programming languages are chosen for their raw speed and closeness to the physical hardware that they're running on. That means C, C++, and assembly for most video games. Tools and environments are mostly custom and proprietary because they need to be so specialized for the type of development they're used for.

Scientific Computing


Surprisingly, scientific computing is most similar to game programming in that it is defined by massively parallel computation. While games model a fictional universe, the goal of scientific computing is to model a real-world system with enough accuracy to explore new truths about the system being modeled. Some examples of scientific computing include weather simulations, galaxy formation, fluid dynamics, and chemical reactions. Strong mathematics and algorithm knowledge is a definite requirement for scientific computing, and parallel programming is obviously common both when programming on a graphics card with CUDA and on a server cluster.
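
Here's a toy sketch of that decomposition idea using plain C++ threads via std::async; real scientific codes would use MPI, OpenMP, or CUDA, but the split, reduce, and combine shape is similar:

```cpp
#include <cstddef>
#include <future>
#include <numeric>
#include <vector>

// Split a big array across threads, reduce each chunk, then combine the
// partial results. This is the simplest possible data-parallel reduction.
double parallelSum(const std::vector<double>& data, std::size_t chunks) {
    std::vector<std::future<double>> partials;
    const std::size_t chunkSize = data.size() / chunks;

    for (std::size_t i = 0; i < chunks; ++i) {
        const auto first = data.begin() + static_cast<std::ptrdiff_t>(i * chunkSize);
        const auto last  = (i + 1 == chunks) ? data.end()
                                             : first + static_cast<std::ptrdiff_t>(chunkSize);
        partials.push_back(std::async(std::launch::async, [first, last] {
            return std::accumulate(first, last, 0.0);
        }));
    }

    double total = 0.0;
    for (auto& p : partials) total += p.get();
    return total;
}
```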

I'm hesitant to say that any of these programming styles is more complex or requires more skill than any of the others. Each style has its own set of issues, things that are challenging, and things that are interesting. They each overlap in interesting ways with other styles. Exceptional programmers populate every one of these programming categories, and what differentiates the programming styles is not how hard they are, but the types of problems that need to be overcome to create awesome software.

Creativity and Hard Work Are Both Necessary for Problem Solving

As programmers, as engineers, we solve problems all the time. It's a significant part of what we do. How do we solve problems efficiently and effectively? It takes a balanced mix of creativity and hard work. Both are hard to achieve, and a good combination of the two is even harder. Creativity requires getting yourself into a mood where you can let your thoughts run free and play with ideas, while hard work requires a mood where you can knuckle down and produce results. These moods are in constant conflict, and you can be at your most productive when you manage the two of them well.

I was going to make this post about how creativity is not what most people think. Instead of being driven by flashes of insight, I was going to argue that it was driven mainly by hard work. While I still think that's true in a sense, and I'll get into that more later, after looking at my notes and links to references on the subject, I believe creativity is a combination of insight and hard work in the form of intense thinking. Both things require time and space to accomplish, and both are required to creatively solve problems.

How to be Creative


I recently found this excellent talk by John Cleese on how to be creative. Much of his talk is based on the work of Dr. Donald W. MacKinnon, and although I haven't read MacKinnon's book, I thoroughly enjoyed the talk. The gist of it is that there's no defined way to be creative, as that's somewhat of an oxymoron, but there are ways to enable creativity, and that is what Cleese focuses his talk on.

He starts off with an explanation of what creativity is, and it's not a personality trait, but a way of operating:
...the most creative had simply acquired a facility for getting themselves into a particular mood - a way of operating - which allowed their natural creativity to function. 
That mood is playfulness. They would play with ideas without any immediate practical purpose, but purely for the enjoyment of doing so. Cleese goes on to explain the two modes of operation that he labels the open and closed modes. Creativity requires an open mode of operation, but completion of the idea requires a closed mode of operation. To be most efficient, we must be able to readily switch between the open and closed modes.

The ability to get into the open mode requires five elements:
  1. Space free from distraction
  2. Time also free from distraction
  3. Time playing with uncertainty before deciding
  4. Confidence that silly thoughts may still be fruitful
  5. Humor to keep the mind playful and exploratory
Once you achieve this setup, Cleese recommends about an hour and a half to play with ideas: a half hour to let your mind calm down and an hour of creative work. I realized that this time frame mirrors my own flow for blog posts pretty closely. Every post is a problem that I need to solve, and when I sit down at night to work on a post, I'll spend about a half an hour collecting my thoughts, looking through my notes, and considering what I want to write and how to write about it. Sometimes I have to wrangle my thoughts into submission because my mind wanders around, not wanting to focus on the task at hand. Then I'll have an hour or so of good productive time where I can write down thoughts and play with ideas in a pretty good state of flow. I'll reach a point sometime after the two-hour mark where I start to tire out and my productivity slows down. I used to think that had more to do with getting tired before bed, but it may be as much a result of having worked creatively for an hour or two; I'm simply tired of being creative.

You probably noticed there were not one but two time elements in the list. The second one is surprising, and Cleese had a great justification for it:
One of my Monty Python colleagues who seemed to be more talented than I was never produced scripts as original as mine. And I watched for some time and then I began to see why. If he was faced with a problem and saw a solution he was inclined to take it even if he knew it was not very original.  Whereas if I was in the same situation, while I was sorely tempted to take the easy way out, and finish by 5 o’clock, I just couldn’t. I’d sit there with the problem for another hour and a quarter and by sticking at it, would in the end, almost always come up with something more original. It was that simple.
My work was more creative than his simply because I was prepared to stick with the problem longer. So imagine my excitement when I found this was exactly what MacKinnon found in his research. He discovered the most creative professionals always played with the problem for much longer before they tried to resolve it. Because they were prepared to tolerate that slight discomfort, as anxiety, that we all experience when we haven’t solved it.

I find this type of thing happening frequently when I'm trying to solve program design problems. I'll come to a workable solution fairly quickly most of the time, but if I pocket that solution for later and think about the problem a while longer, I can usually come up with a more elegant solution.

Humor may seem out of place in solving problems, but Cleese strongly held that creativity is closely related to humor - it is a way of connecting two separate ideas that results in something new and interesting. In the case of humor, it's funny, and in the case of creativity, it's useful for solving a problem. He certainly opposed the idea that humor and serious business should be separated:
There is a confusion between serious and solemn. Solemnity…I don’t know what it’s for. What is the point of it? The two most beautiful memorial services I’ve ever attended both had a lot of humor. And it somehow freed us all and made the services inspiring and cathartic. But solemnity serves pomposity. And the self important always know at some level of their consciousness that their egotism is going to be punctured by humor, and that’s why they see it as a threat. And so dishonestly pretend that their deficiencies makes their views more substantial, when it only makes them feel bigger…ptttttth.
Humor is an essential part of spontaneity, an essential part of playfulness, an essential part of the creativity we need to solve problems no matter how serious they may be.
Solemnity should have no place in the creative process. It kills our confidence in being able to play with ideas freely and distracts us from thinking fully about the problem by limiting thought to the safest, most acceptable paths. Humor is definitely more productive. My coworkers and I joke around a lot at work, and we get a tremendous amount of creative work done. On the other hand, I've never heard anyone claim that their workplace is serious and that they do especially creative work there. More likely, seriousness is accompanied by words like deliberate, careful, and protocol. Oh, and if a company tells you that they work hard but they know how to have fun once in a while, that's a big warning sign.

Hard Work is Still Necessary


Creativity alone is not enough, and while creativity does involve hard work, coming up with a great idea to solve a problem is not the end of the task. Now comes the extra hard work of implementing the idea, resolving all of the details that you hadn't thought of before, and actually finishing solving the problem. Scott Berkun has written extensively on this topic, and it's some of his better writing. In his post on The Secrets of Innovation Secrets, he reminds us of how much work is involved in innovation:
If there’s any secret to be derived from Steve Jobs, Jeff Bezos, or any of the dozens of people who often have the name innovator next to their names, is the diversity of talents they had to posses, or acquire, to overcome the wide range of challenges in converting their ideas into successful businesses.
These people didn't succeed because of flashes of insight that just happened to them, and they didn't succeed by coming up with a great idea and selling it. The ideas themselves would have been nothing if it wasn't for their superb execution. (The importance of execution is one reason why patent trolls are so destructive to innovation—they haven't done any of the hard work of implementing an idea, but they want to get paid royalties simply for patenting an idea with no intention of doing the hard work themselves.)

Even the flash of insight that leads to an idea is the result of a lot of hard work. If it wasn't for all of the studying, researching, and experimenting that came before the actual moment of clarity, the idea may not have materialized at all. Scott Berkun again has some clear reasoning on why epiphanies are more of a process than an event:
One way to think about the experience of epiphany is that it’s the moment when all of the pieces fall into place. But this does not require that the last piece has any particular significance (the last piece might be the hardest, but it doesn’t have to be). Whichever piece of the puzzle is sorted out last becomes the epiphany piece and brings the satisfying epiphany experience. However the last piece isn’t necessarily more magical than the others, and has no magic without its connection to the other pieces.
I have this experience all the time when I'm trying to solve problems (which is also pretty much all of the time). I'll be mulling over a problem for hours, days, or weeks, struggling to find a workable solution, and then the answer hits me all of a sudden, usually while I'm lying down to go to sleep. I call it Bedtime Debugging because that's when it normally happens to me, but other people have it happen when they're taking a shower or brushing their teeth. It feels like a sudden eureka moment, but it would never have happened if I hadn't spent all of that time thinking about the problem, researching options, and studying the code I'm working on. The final connection that made everything make sense may have happened in a moment, but bringing all of the other pieces of the puzzle together so that that moment could happen took much more time.

Preparing for Epiphanies


I spend huge amounts of time reading and learning. I never know when I'll need to use a particular piece of knowledge, and the more of it that I have at my disposal, the better. Constantly exercising my mind and learning new things also keeps my thinking process flexible so that I can connect ideas from different fields and think about problems in new and different ways.

Our culture tends to romanticize the eureka moment while ignoring all of the hard work that's involved in the process because the eureka moment is so much more exciting than the work that came before it and must follow it. For one of innumerable examples, Cal Newport, an assistant professor of computer science at Georgetown University, contrasts the theatrical impression of Stephen Hawking's discovery of Hawking Radiation in The Theory of Everything with the reality:
In a pivotal scene in the Stephen Hawking biopic, The Theory of Everything, the physicist is staring into the embers of a dying fire when he has an epiphany: black holes emit heat!
The next scene shows Hawking triumphantly announcing his result to a stunned audience — and just like that, his insight vaults him into the ranks of scientific stardom.…
In reality, Hawking had encountered a theory by two Russian physicists that argued rotating black holes should emit energy until they slowed to a stationary configuration.
Hawking, who at the time was a promising young scientist who had not yet made his mark, was intrigued, but also skeptical.
So he decided to look deeper.
In the (many) months that followed, Hawking trained his preternatural analytical skill to investigate the validity of the Russians’ claims. This task required any number of minor breakthroughs, all orbiting the need to somehow reconcile (in a targeted way) both quantum theory and relativity.
The reality of Hawking's discovery is a clear example of the hard work involved in solving big problems. He needed to have a lot of knowledge about multiple complex fields of physics and develop new advances in those fields over a long stretch of time to make progress towards his goal of answering a question he had about a theory he had come across. Creativity was absolutely involved in this process, but the eureka moment is a vanishingly small part of the story. These instances of major breakthroughs and discoveries through hard work are the norm, rather than the exception. After all, if these moments were as simple and easy as they are portrayed, they would be much more common than they actually are. We can't all just sit staring into a campfire or reading under a tree and expect all of the answers to hit us in the head.

In a way, creativity is only a small part of succeeding in solving a big problem, such as producing a great product, creating a market around it, and building a business from it. The idea of what product to make is a drop in the bucket compared to the colossal amount of work involved in this process. Yet creativity is also an integral part of the entire process. Every step of the way there are little problems to be solved and details to be resolved, and doing this creatively is incredibly important. Creativity and hard work are intimately woven together in the process of solving big problems.