
Tech Book Face Off: Growing Object-Oriented Software, Guided by Tests Vs. Agile Testing

It's been nearly eight months since I've done a Tech Book Face Off. That is way too long. Granted, I had been reading plenty of other stuff—math books, physics books, and econ books—but it's high time that I got back to some software development books. One area where I thought there was still somewhat of a hole in my understanding was automated testing, so I picked out a couple books on testing that I thought would be good: Growing Object-Oriented Software, Guided by Tests by Steve Freeman and Nat Pryce and Agile Testing: A Practical Guide for Testers and Agile Teams by Lisa Crispin and Janet Gregory.


[Growing Object-Oriented Software front cover] VS. [Agile Testing front cover]

I'm not going to lie. Getting through these books was kind of a slog. I had hoped to learn lots of great new insights on how to go about automating tests and working testing into the software development process without going crazy. I found out that pretty much everything I needed to know was already covered in other books I had read, especially Clean Code and Agile Principles, Patterns, and Practices in C#, two books I've reviewed previously.

While I have some misgivings about panning anyone's work because these authors obviously put tons of time and effort into their books, I don't want to mislead any readers about the value that I got out of these books. After all, these reviews are supposed to give my assessment of whether a pair of books is helpful or not. Sometimes both books are good and complement each other nicely, sometimes one is clearly better than the other, and sometimes I don't find either one very useful. I try to describe those opinions as respectfully and honestly as I can while providing some of the reasoning behind those opinions. With that in mind, let's take a look at these two books on agile testing.

Growing Object-Oriented Software, Guided by Tests


The premise of this book, that software grows and evolves over time, is something I totally agree with. Freeman and Pryce nicely describe three distinct ways that a code base grows and changes through refactoring:
  • Breaking out: When we find that the code in an object is becoming complex, that’s often a sign that it’s implementing multiple concerns and that we can break out coherent units of behavior into helper types. …
  • Budding off: When we want to mark a new domain concept in the code, we often introduce a placeholder type that wraps a single field, or maybe has no fields at all. As the code grows, we fill in more detail in the new type by adding fields and methods. With each type that we add, we’re raising the level of abstraction of the code.
  • Bundling up: When we notice that a group of values are always used together, we take that as a suggestion that there’s a missing construct. A first step might be to create a new type with fixed public fields—just giving the group a name highlights the missing concept. Later we can migrate behavior to the new type, which might eventually allow us to hide its fields behind a clean interface, satisfying the “composite simpler than the sum of its parts” rule.
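To make one of these moves concrete, here is a minimal Java sketch of "bundling up." It's my own illustration, not code from the book's auction example, and the names are made up: three values that always travel together get bundled into an Address type, and behavior starts migrating into it.

class Address {
    private final String street;
    private final String city;
    private final String postalCode;

    Address(String street, String city, String postalCode) {
        this.street = street;
        this.city = city;
        this.postalCode = postalCode;
    }

    // Behavior that used to be repeated in every caller now lives behind the type.
    String label() {
        return street + ", " + city + " " + postalCode;
    }
}

public class BundlingUpSketch {
    // Before: void ship(String street, String city, String postalCode) { ... }
    // After: the parameter list shrinks and the missing concept has a home.
    static void ship(Address address) {
        System.out.println("Shipping to " + address.label());
    }

    public static void main(String[] args) {
        ship(new Address("221B Baker Street", "London", "NW1 6XE"));
    }
}

Naming the group is the step that matters; once the concept has a home, later steps like hiding the fields behind a clean interface become possible at all.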
I find that a lot of my software design progresses in these ways. The architecture of a system emerges over many iterations, step by step. This constant flux is a characteristic of Agile development in particular, and it can be an issue for teams and management that are new to it. The authors make some good points about that initial discomfort as well:
[I]ncremental development can be disconcerting for teams and management who aren’t used to it because it front-loads the stress in a project. Projects with late integration start calmly but generally turn difficult towards the end as the team tries to pull the system together for the first time. Late integration is unpredictable because the team has to assemble a great many moving parts with limited time and budget to fix any failures. The result is that experienced stakeholders react badly to the instability at the start of an incremental project because they expect that the end of the project will be much worse.
The goal with Agile development is to avoid ending up in crunch time at the end of the project, and instead to spread the chaotic work more evenly over its full length, making it more consistent and manageable in the long run.

The first part of the book was much like these excerpts with clear, well thought out discussions of TDD (Test-Driven Development). The second part of the book was an extended example of developing an internet auction bidding program with Agile testing principles, and that is what I had a hard time with. It became increasingly difficult to stay interested as the example went on.




There is something to be said for working through a non-trivial example to better show how a process would work in a more realistic project, but the example that they used seemed too complicated to me. Maybe it was because I haven't written in Java since college and I'm not familiar with any Java frameworks. However, I hadn't done much programming in C# before reading Agile Principles, Patterns, and Practices in C#, and I didn't have any problems with the examples used in that book. In this Java example, the authors seemed to pick a bunch of graphics and networking frameworks that required some prior familiarity to understand what was really going on. For all I know, they are common Java frameworks, but using them served to narrow the audience that could follow along. The complexity of the example and the prior knowledge required obscured the message that the authors were trying to get across.

Once the complexity of the application increased, the big ideas tended to get buried in the details of working through the code. The main point that the authors kept coming back to was that encapsulating related functionality enabled programmers to understand the code quickly and change it easily in one place. This is programming nirvana, but it was hard to see how that encapsulation developed through all of the details of their code samples.

Another issue that came up was that after a couple of tasks were completed, it wasn't clear that it was useful to continue with the example. New concepts and benefits of TDD were no longer being demonstrated; we were just plowing through the rest of the example using the same process we already knew about. The danger with a long example is that the focus shifts from teaching concepts through a demonstration to completing the example for its own sake. It's a difficult balance that I'm well aware of. You start off with good intentions, but as you get further into the example, it takes on a life of its own. After a while you pull back and wonder how you got where you are, and whether you're really saying what you meant to say anymore. The overarching ideas have gotten lost in the details of the example. In this case, the goal should have been to show how software can grow efficiently while using TDD, not the gritty details of how to implement an auction sniper.

Even though the extended example was a drag, the authors still had some nice insights into practical testing:
Some TDD practitioners suggest that each test should only contain one expectation or assertion. This is useful as a training rule when learning TDD, to avoid asserting everything the developer can think of, but we don’t find it practical. A better rule is to think of one coherent feature per test, which might be represented by up to a handful of assertions. If a single test seems to be making assertions about different features of a target object, it might be worth splitting up.
I like the pragmatic nature of this recommendation. Dogmatically splitting up tests so that each has a single assertion only serves to extend your test time and bloat your test code. When several assertions focus on one feature and share the exact same setup and tear down, they should be combined into one test.
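As a hypothetical illustration (my own, assuming JUnit 4 is on the classpath; the ShoppingCart class is made up purely for the sketch), a test like this keeps a handful of assertions together because they all verify one coherent feature from the same setup:

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertFalse;

import org.junit.Test;

public class ShoppingCartTest {

    // One coherent feature per test: adding an item updates every visible
    // aspect of the cart's state, so the related assertions share one setup.
    @Test
    public void addingAnItemUpdatesCountAndTotal() {
        ShoppingCart cart = new ShoppingCart();

        cart.add("book", 2999); // price in cents

        assertFalse(cart.isEmpty());
        assertEquals(1, cart.itemCount());
        assertEquals(2999, cart.totalInCents());
    }

    // Hypothetical class under test, included only so the sketch is self-contained.
    static class ShoppingCart {
        private int count = 0;
        private int totalInCents = 0;

        void add(String name, int priceInCents) {
            count++;
            totalInCents += priceInCents;
        }

        boolean isEmpty() { return count == 0; }
        int itemCount() { return count; }
        int totalInCents() { return totalInCents; }
    }
}

Splitting that into three one-assertion tests would triple the setup code without telling you anything more when one of them fails.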

Overall, Growing Object-Oriented Software, Guided by Tests was okay, but I can't recommend it. There are much better books out there, and this one didn't add much to my understanding of TDD. If you're an experienced Java programmer who happens to work with OpenFire, Smack, Swing, and the other libraries that they used in their example, you may get more out of it. Otherwise, I would pass this one by.

Agile Testing: A Practical Guide for Testers and Agile Teams


After getting through the last book, I was hoping for something a little more engaging with Agile Testing. This book was written by a couple of testers who have worked on a number of Agile teams, and it gives their perspective on how testers can fit into the Agile process once the programmers on the team have adopted TDD. Crispin and Gregory make the case that there are many other types of testing that still have a place in improving software quality, and I strongly agree. Automation with unit and functional testing only scratches the surface of the testing that needs to be done for a product, and having dedicated testers on the team who offer their unique perspective and talents is invaluable.

I did find some insights that resonated well:
If you’ve worked in the software industry long, you’ve probably had the opportunity to feel like Lisa’s friend. Working harder and longer doesn’t help when your task is impossible to achieve. Agile development acknowledges the reality that we only have so many good productive hours in a day or week, and that we can’t plan away the inevitability of change.
This is, of course, one of the key arguments for adopting an Agile process, and you'll see similar statements in any Agile book you read. Most of the book was much like this quote—reasonable and well thought-out—when taken in isolation, but taken as a whole, it was excessively verbose and redundant. The authors were constantly discussing the benefits of automated testing, the use of this or that tool during the testing process, or how you really don't want to make an iteration a mini-waterfall process. I felt like they could have easily covered the same amount of material in less than half of the 576 pages they took to do it.

The book was organized into five parts:
  • Introduction to Agile Testing
  • Organizational Challenges in moving to Agile Testing
  • The Agile Testing Quadrants and the different types of testing that make up each quadrant
  • Automation of any testing that warrants it
  • An Iteration in the Life of a Tester and how testers fit into an Agile development environment
At a high level it seems well organized, but there was so much overlap between these sections that much of every chapter was repetition and cross referencing. After a while the periodic tidbits of wisdom didn't seem worth the trouble of wading through pages and pages of generalities. Even the beginning of each chapter repeated the chapter in miniature, with a mind map that exactly duplicated the chapter's section headings. Maybe that's useful to some, but it just added pages to the book for me.

Some of their points were surprising as well. For example:
“AADD,” Agile Attention Deficit Disorder. Anything not learned quickly might be deemed useless. Agile team members look for return on investment, and if they don’t see it quickly, they move on. This isn’t a negative characteristic when you’re delivering production-ready software every two weeks or even more often.
I wrote one small note on my Kindle when I highlighted this quote: "definitely sounds like a flaw to me." This attitude struck me as a cavalier disregard for deeper thought and more exploratory learning, especially for difficult problems. Maybe it works when you're implementing yet another CRUD app, but some software requires careful research and design to make something that's going to have a big impact. My interpretation of this statement is that I shouldn't be doing Agile Testing if I'm developing ground-breaking software, but I don't think that's really true. I think Agile promotes examining the trade-offs before making a decision on how to proceed, and sometimes the right decision is research and learning. AADD isn't really defensible.

Later in the book the authors talk about the difficulties of bringing new testers up to speed on a project, and they make this recommendation:
This conversation sparked the idea of a FAQ page, an outstanding issues list, or something along that line that would provide new testers a place to find all of the issues that had been identified but for which the decision had been made not to address them.
Ideas like this should be questioned as to whether they scale and whether they add unnecessary complexity to the process by piling on tools. I can't imagine such a thing scaling well at all, and anytime you create documentation that you'll never use personally, it will not be adequately maintained. I find that keeping these kinds of lists up to date is nearly impossible when they're not part of some other process that the maintainer does find useful.

The book did contain a few pleasantly surprising nuggets, though. Near the end when the authors were discussing defect tracking software, they claimed:
If your team is working closely with the programmers and is practicing pair testing as soon as a story is completed, we strongly recommend that you don’t log those bugs as long as the programmer addresses them right away.
It's an interesting departure from what proponents of bug tracking software usually recommend, and such pragmatic practices can eliminate a lot of tedious bookkeeping.

All in all, I can't recommend this book either. I had pretty much heard it all before, and it has been presented much better in the other books I've already mentioned. Maybe if you're a tester who's completely new to the Agile process, you'd find Agile Testing much more useful, but I'd hope you could find a more concise resource. As for programmers, The Pragmatic Programmer and Robert Martin's books are more engaging and are all you're going to need.

More Than Enough Testing


I set out to fill what I thought was a gap in my understanding, and found out that other books I read had already covered what I needed to know. I probably didn't need to read either of these books. It was too much reading about testing and not enough practicing. You only need to read a little bit about testing to understand the benefits and how to go about it, and then it's best to spend the time experimenting with it. Clean Code and Agile Principles, Patterns, and Practices in C# are more than enough on the subject of testing for developers.
