It’s happening again: An extremely successful software developer with a very large following lists his rules for success, people jump for joy at the occasion, and then use that list to justify their existing practices. That sounds great, doesn’t it?
While reading Testing like the TSA, I found myself nodding in agreement with most of what David was suggesting. I didn’t agree with all of it, but that’s fine; there is definitely room for positive discussion and disagreement when it comes to solving problems with code. The best part about the blog post wasn’t the post itself. The comments and other discussion on sites like HackerNews are the real jewels.
I’d now like to think out loud about testing. We’ve come so far in the Ruby community, but in reality, we still have a long way to go.
Just Enough Testing
A question that I hear constantly is, “How many tests should we have?” I always reply with, “Just enough.” What does that mean exactly? It boils down to this: your application or library should have just enough tests that you are confident it works as it should. Do I think your code base has enough tests? I don’t know. You should know. There isn’t a magic number. You’ll know when you’ve found it. Velocities will increase and regressions will decrease.
There’s a difference between spiking (or prototyping) and writing code that is destined for production.
Too many times, we see a blog post or a book that echoes our thoughts, and we grasp on to it as if it were gospel. When I see a comment that basically says, “… and that’s why I don’t test”, I think we should mark that as a failure. There are plenty of times when all of us don’t test. Last year, while I was learning Clojure, I didn’t write one test (and I still don’t). When I was learning how to write Android apps, I didn’t write tests either. The reason is that I wasn’t sure what I was writing. It’s pretty hard to describe the behavior of something with tests. It is even harder when you don’t know the language of description.
This also applies to those of us who build web apps constantly. Sometimes we don’t know what we are trying to build. A little exploratory code helps us understand the problem. What we are looking for is the correct question, and you might have to write a little code before understanding that. I suggest that after you learn the right question, you throw the prototype or spike away, and start again using TDD. That might not always be possible due to time or budgetary constraints, so you might have to retrofit it with tests later. This isn’t optimal, but it’s a part of being a professional developer. You can’t learn this by reading blog posts or books.
As an aside, I almost feel confident when test-driving Android apps. It isn’t perfect, but I feel like this is the beginning of a conversation rather than any kind of complete solution.
Testing for testing’s sake is a waste of time
You may remember back a few years ago when I spoke about TATFT. You might not have known it, but that was a short talk that was more parody than anything else. Many of you took it at face value. Doing that puts you in the same place as taking the “Testing like the TSA” post at face value. You shouldn’t do that. The real value is in understanding why we say what we say. I don’t write tests all the time. I do constantly think about whether I’m writing in a way that makes things easy to test. I do write tests first most of the time. Keep in mind, I certainly don’t advocate that I or anyone else should write tests all the f#$@ time. That’s silly.
I rarely look at tools like code coverage or test-to-code ratios. That isn’t important to me. What I care about is that the tests I’ve written properly describe the behavior of the project. Where code coverage and test-to-code ratios are useful is as metrics over time. If those numbers are trending in the wrong direction, it is a sign that code quality is decreasing. David pointed out in his post, “1:2 is a smell, above 1:3 is a stink.” You are going to have to learn what’s good and bad on your own. Projects with different developers and different problem domains will have different tolerances for smells.
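To make that rule of thumb concrete, here is a toy sketch of it in Ruby. I’m reading “1:2” as two lines of test code per line of application code; the method name, thresholds, and verdict strings are my own interpretation for illustration, not anything from David’s post or an established tool.

```ruby
# Toy classifier for David's test-to-code ratio rule of thumb:
# "1:2 is a smell, above 1:3 is a stink."
# Interpretation (an assumption): ratio = test lines per application line.
def ratio_verdict(app_lines, test_lines)
  ratio = test_lines.to_f / app_lines
  if ratio > 3.0
    "stink"
  elsif ratio >= 2.0
    "smell"
  else
    "fine"
  end
end

puts ratio_verdict(1000, 1200) # => "fine"  (1:1.2)
puts ratio_verdict(1000, 2500) # => "smell" (1:2.5)
puts ratio_verdict(1000, 3500) # => "stink" (1:3.5)
```

The point isn’t the exact cutoffs; it’s that the number only means something when you watch it move release over release on your own project.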
TDD is hard
TDD is hard. Think about the first time you sat down to write tests first. What was the first test you wrote? Does that test still exist? I sure hope not. Red, Green, Refactor is a catchy slogan. It seems easy to write a failing test, write some code to make it pass, then refactor. Unfortunately, it isn’t. There are some good books on the subject. Test Driven Development: By Example (the Kent Beck book) is the one people always talk about. Growing Object-Oriented Software, Guided by Tests is much better, and definitely belongs on everyone’s bookshelf.
You also have to remember that TDD is just a suggestion. Take what works for you and form your own opinions. I personally believe in Write a Test, Make it Pass, or Change the Message. This allows me an easier logical transition between Red and Green.
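As a sketch of that cycle, consider a tiny, hypothetical Stack class driven out with minitest (Ruby’s stdlib test framework). At each step the failure message changes: first a NameError for the missing constant, then a NoMethodError for push, then a failed assertion, and finally Green. This is the end state after walking those steps, not a prescription.

```ruby
# "Write a Test, Make it Pass, or Change the Message" -- a minimal sketch.
# Stack is a hypothetical class; each small change below was driven by
# moving the failure message forward:
#   NameError (no Stack) -> NoMethodError (no #push) -> failed assertion -> pass
require "minitest/autorun"

class Stack
  def initialize
    @items = []
  end

  def push(item)
    @items.push(item)
    self # return self so pushes can be chained
  end

  def pop
    @items.pop
  end
end

class StackTest < Minitest::Test
  def test_pop_returns_the_last_pushed_item
    stack = Stack.new
    stack.push(1).push(2)
    assert_equal 2, stack.pop
  end
end
```

Each step is small enough that when the message changes, you know exactly which line of code caused it, and that’s the easier logical transition I’m after.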
The one thing I can promise you is that with practice and anger, you’ll learn what does and does not work. The goal here is to get working software that you can trust as fast as you can.
Don’t use tool X
I feel like this is always bad advice. Some people like RSpec and some don’t. Some people have had great results with Cucumber, while some flail with it. At one time, all of us knew nothing about code. We found a language we liked, and learned how to be productive with it. Sometimes tools aren’t productive because you aren’t using them right. Sometimes tools are just bad. I appreciate that we have a myriad of tools for our arsenals.
Don’t test your web framework
I don’t have much to say about this. Don’t test it until you absolutely have to. Think twice before writing tests for that validation. In our Rails projects, we should be thinking about behaviors of objects and interactions between objects. Your unit tests should reflect that. Your acceptance tests verify the orchestration of those units. Simple, right?
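To sketch the distinction with plain Ruby (Order here is a hypothetical object, and the example deliberately avoids Rails so it stands alone): asserting that a `validates_presence_of` declaration exists mostly tests Rails, which already has tests. The behavior your object adds is what deserves one.

```ruby
# Test the behavior *your* object adds, not the framework beneath it.
# Order is a hypothetical plain Ruby object standing in for a Rails model.
require "minitest/autorun"

class Order
  def initialize
    @line_items = []
  end

  def add_item(price:, quantity: 1)
    @line_items << price * quantity
    self # chainable, like the builder-style APIs we use elsewhere
  end

  # This calculation is our behavior -- worth describing with a test.
  def total
    @line_items.sum
  end
end

class OrderTest < Minitest::Test
  def test_total_sums_line_items
    order = Order.new.add_item(price: 5, quantity: 2).add_item(price: 3)
    assert_equal 13, order.total
  end
end
```

A unit test like this describes an interaction you designed; an acceptance test on top would then verify that the checkout flow orchestrates Order correctly, without either one re-proving the framework.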