Hampton Lintorn Catlin

Hampton Hates Automated Testing

Hampton’s Testing Theorem: “Unit-tested code often contains more bugs than its non-unit-tested counterpart.”

Hampton Hates Automated Testing from Hampton Catlin on Vimeo.

Comments

Oct 23, 2008
pjb3 said...
Hampton, agreed: there is an overemphasis on automated testing and an underemphasis on human testing in the Ruby community, but I don't see the two as mutually exclusive. Just as automated tests can miss things, human testers miss things too. Using a mixture of both is probably the way to go.
Oct 23, 2008
Carl said...
I dunno, I think for a major project where you can expect to have a lot of so-so coders working together, it makes sense to add automated tests after the fact to keep regressions from accidentally sneaking into the code. The example I'm thinking of is the cpython library, where I read the mailing list. Before a major release they're always moaning about how the build bots are turning red. Which is annoying for them, no doubt, but better than the alternative, where whoever is doing the check-in doesn't realize that they're subtly breaking things left and right.
Oct 23, 2008
Diragor said...
I agree with many of your points, but I think your conclusion of categorically rejecting a useful tool is way off base. I don't see how it's reasonable to expect a human being to think of every possible side effect in every possible code path throughout a complex application and manually test all of them after every significant change. Automated testing doesn't catch everything, but it's more likely than a human being to find a broken edge case in an obscure corner of the app, since you should've written the test for the edge case when you wrote the code (so you don't have to remember to go poke that obscure corner of the app every time you make a change that might possibly be distantly connected to it). You call yourself lazy and paranoid, but I thought that was exactly the kind of person who *likes* automated testing!
Oct 23, 2008
Hampton said...
@Carl - Fire those programmers. Hire a good one. Study after study shows that bad programmers are simply bad, even if they are cheap. I would leave a company like that in a millisecond. Perhaps a nanosecond.

@Diragor - I think that's a flawed argument. The expectation in your argument is that you are significantly changing the data returned by a method. And that's a bad thing. With properly done objects, it's easy to keep legacy methods and to create new ones with new behaviors. If you "Paranoid Program", you don't end up breaking those corners of the app. I rarely find myself having to go test a changed method in a thousand places. If it acts the way it did before, then how would that break other parts of the program? There *are* cases where your theory holds, but I'm not sure if unit testing (or its variants) solves those problems. Or at least, they end up creating new ones. Unit testing developers get *very* myopic. They only see the local area and trust the tests to report any problems. This is a major, major problem. Green lights do NOT mean the app is still functioning as expected.
Oct 23, 2008
Diragor said...
It still sounds like you're pointing out poor use or expectations of automated testing much more than fundamental flaws with the concept of ever doing it. The developers get myopic, the developers act like passed tests mean everything is perfect, the developers wrote tests that broke in a thousand places from one code change, the developers used three different test frameworks and made it difficult to follow... I see a pattern in the problems, and as Steve Ballmer would say, it's developers, developers, developers! I guess what I had in mind with my argument is more continuous integration or functional testing, or whatever you want to call it, rather than fine-grained unit testing. I'm talking about large, interconnected systems where the result of a single input file at the front end is thousands of database entries, thousands of files being created in several places, e-mails being sent, spawned follow-on processes on remote machines... lots of stuff that isn't practical to go around and manually check after one piece of the whole chain is modified.
Oct 23, 2008
sMAshdot said...
Next time... please think it through and write it down. You have a point, but it's not worth 13 minutes of rambling ;-)
Oct 23, 2008
bryanl said...
Wow. That was long. First things first: there is a place for automated testing and a place for human testing. Both are needed. That being said, the power of automated unit testing comes from keeping those nasty regressions out of your code. I find it impossible to believe that you actually eyeball your code and keep all the bugs out. Carl brings up a good point. In some cases you can't fire your developers for poor performance (at least not instantly, anyway). The automated tests are there to prevent those 'so-so developers' from wreaking havoc on your code base. Now, when it is time to release, you should have your human testers run through the application. But, as you may have pointed out, humans are ruled by human nature. People don't do things the same way every time. It is just the way we are built. We have the ability to reason, but we aren't an exact science. TATFT lives! Testing all the fucking time kills fewer babies.
Oct 23, 2008
melissa said...
Whoa, your hair! Obviously I have nothing productive to say about automated testing. P.S. Nice shirt!
Oct 23, 2008
Jon Dahl said...
Reminds me of Luke's "Testing is Overrated" post (http://railspikes.com/2008/7/11/testing-is-overrated). I'll agree 100% that unit testing isn't a panacea, and that it can make people overconfident. But the same is true of QA testing. I'd say the best approach is to combine four types of testing. Each finds different types of bugs, and no approach does everything.
1. Unit testing
2. QA testing
3. Usability testing
4. Peer testing through code review
Oct 23, 2008
Nathan said...
I think whether or not someone tests should just not be a status symbol. You can get your rockstar/ninja badge with or without it. Reading, blogging and listening to arguments about it is just as masturbatory as doing it itself.
Oct 24, 2008
Crack Addict said...
You are obviously on crack
Oct 25, 2008
Sean Ransom said...
@Diragor Programmers are ill equipped to find and test edge cases. Human testers find the edge cases time and time again. To think otherwise is arrogant, and I would not hire someone who does. Now, I feel there is value in adding the test case AFTER it is found. Test loose and easy, and then let your human testers bend and break expectations, because they will always find something the programmer never thought about. Thank you Hampton, this is the first video cast I have actually watched all the way through in a long time. -sean
Oct 25, 2008
nick evans said...
First, anyone who claims that automated unit tests remove the need for QA testers, usability testing, or any form of exploratory (manual) testing... they're completely nuts. I completely agree with you on that point. Second, this is not a claim that I've ever heard any XPer, agilist or TDD/BDD proponent make. Instead, they've been saying for years now that "test driven development is not about testing": http://java.sys-con.com/node/37795 (good article with that title, from 2003). And if that "automated testing" leads to undue confidence, then you are correct: the developer's hubris will allow more bugs to get through. But, in my experience, a humble/paranoid developer will benefit greatly from BDD and put out code with fewer bugs in less time. And an arrogant "my code doesn't have bugs because of pet theory #462" developer will eventually get themselves into trouble with or without automated tests... but the automated tests may help them dig out from it and not get into that particular brand of trouble again.

You keep talking about "bugs" as if the point of automated "testing" is merely to reduce bugs. TDD is about driving yourself towards a better design (which is often also more easily testable). This is also why BDD was coined: people using the term TDD often go back to that word "test" and some of its other connotations. Several other terms were tried on for size, e.g. executable example driven development, but BDD is the one that seemed to catch on. It isn't about "if you're not good at thinking about programming." It's about giving *everyone*, the so-so programmers and the guru programmers, another paradigm through which to view their code. It's about imagining the best possible API/interface/outcome, giving some example of how that code might work (if only the implementation were there), and then filling in the implementation until it works. And then doing it again in short incremental improvements. It's about getting back into "the flow" in minutes, instead of hours.

Yeah, those examples and their assertions also stay around until later as a regression suite. That's nice. The better examples also hang around as documentation to future developers for how the system is expected to behave. That's very nice. But, in my experience, they also allow me to develop better, cleaner code more quickly than otherwise... and the "tests" are both a happy byproduct and an enabler.
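To make that "imagine the API, write an executable example, then fill in the implementation" workflow concrete, here is a minimal RSpec-style sketch. The Cart class, its API, and the file names are hypothetical, invented purely for illustration; this is not code from the video or from any particular project.

# spec/cart_spec.rb: written first, as an executable example of the API we wish existed
require "cart"

RSpec.describe Cart do
  it "totals the prices of the items added to it" do
    cart = Cart.new
    cart.add(price: 10)
    cart.add(price: 15)
    expect(cart.total).to eq(25)
  end
end

# lib/cart.rb: filled in afterwards, in small steps, until the example above passes
class Cart
  def initialize
    @items = []
  end

  # Each item is a plain hash like { price: 10 }, kept deliberately simple.
  def add(item)
    @items << item
  end

  def total
    @items.sum { |item| item[:price] }
  end
end

The spec then hangs around as both a regression check and a small piece of documentation for how Cart is expected to behave, which is the "happy byproduct" described above.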
Oct 27, 2008
Noel O'Sullivan said...
@nick evans - I fully agree with these views. Further to that, having those tests hanging around gives me massively more confidence about refactoring or rewriting code as more use cases are discovered and/or when so-so programmers' code is uncovered. Seeing as many (hopefully most) systems will hang around a lot longer than they took to write, and will be used in different ways and be extended by many developers, maintainability is absolutely key. In addition to all the other advantages of automated tests, they also provide a persistent, long-term additional level of system documentation. Reading the test cases helps developers unfamiliar with the code comprehend how it should work (and if they misinterpret this when they make a change, the automated test failure will alert them to it). This is also a better form of documentation than normal comments: unlike simple doc-comments, if the tests cease to be aligned with the current state of the code, the developer will know immediately. So, before ditching automated tests, spare a thought for the so-so (or brilliant) coder who will be maintaining your software 5 years after you've left the company.
Oct 28, 2008
gmcinnes said...
Moosecock. Four points about your argument:

* You're not clear about what kinds of tests you're talking about. If you're talking about purely unit tests, it's a straw man argument. No one suggests that unit tests without functional/acceptance tests, which test the operation of the whole machinery, are more than a semi-polished turd. Whether these functional tests are run by a human or automated is just an accidental property.
* I am completely sure that a suite of tests gives me more confidence when refactoring. I sure don't trust my own code completely, but I trust it more when I make changes in the implementation and all my tests still pass.
* Writing tests first slows me the fuck down when I'm excited about charging ahead with stuff. It forces me to be methodical. I'm totally down with Nick Evans on that. I'm pretty sure I end up with better APIs.
* It also really depends on the environment you're writing in. Some domains require a higher degree of formal testing than others. If iPedia fails in some corner cases, it's less serious than if hospital information systems fail, for example.

Anyway, everyone's commenting as if there's no evidence in the world to support either view. Chapter 20 of Code Complete 2 gives a decent breakdown of defect detection rates for various techniques. Unit tests rate about 30%, as do functional tests; integration tests are about 40% and desk checking of code is about 40%. The highest rated is a high-volume beta test (> 1000 sites) at about 75%. So the lesson is you should be TATFT *and* desk checking, *and* beta testing, *and* doing every other fucking thing to try and reach the quality level you need. By the way, the video could use some editing. I'm too sober for this much hemming and hawing.
Nov 18, 2008
Rajesh Duggal said...
Tested code should have more bugs in it. "Test what you think could possibly fail"... if you're good at deciding what could possibly fail, then you won't be writing tests for things you believe have a high probability of working. You'll write tests for what you're less sure of, for the complex parts of your app. Tests don't guarantee bug-free code; they reduce the probability of bugs being added to it. Also, you seem to compare "automated tests" with "user acceptance testing"... apples and oranges. Automated unit testing doesn't replace user acceptance testing. You can also automate user acceptance tests with tools (e.g. fitnesse.org). Quality has a cost, and you need to assess the ROI of maintaining the tests... if it's costing you more to maintain/build the test infrastructure than you're getting out of it, hit delete! Cheers, Rajesh Duggal
Nov 22, 2008
desireco said...
OK, so definitely write things down :) before sitting in front of your lovely Mac. I like where you are going with your argument; however, I've noticed that testing in principle makes people write code that is decoupled, nicely modular, and put into proper objects. That is a really good benefit. You bring up a very good point that testing can inhibit refactoring: if you refactor methods into smaller ones, you have to rewrite the tests, and refactoring should be easy to do. If you have to think twice before starting to chop up code, you will be inhibited from making all the changes that you should. Rubyists are definitely a little too obsessed with testing.
Dec 1, 2008
Robert said...
I think you are making some incorrect assumptions and generalizations about people who write tests: that testers think they don't need humans to also test their code, that testers believe their code is now bug-proof, and that testers only write tests from a single perspective. Also, I don't think it needs to be an either-or thing, where either you write automated tests or you just do human tests. It should be both, in my opinion, which will help cover various perspectives. You are also assuming that human testers would inherently understand the intended function of a feature and test it accordingly, which isn't necessarily true. It really shouldn't be an either-or case, but rather an argument for using both ways of testing.
Dec 5, 2008
Mike said...
Hampton, Right on! I pretty much can't stand writing tests for my code, and definitely feel like I'm a minority in the Ruby world because of this. It's refreshing to find someone who feels the same way I do about automated testing. To me it's simply mental masturbation.
Feb 20, 2009
Bob Aman said...
This really depends. If you're talking about a web application, yeah, it's very possible. If you're talking about library code, I think you're 100% wrong. I've written a couple of libraries and I frequently find bugs that nobody's ever noticed while heckling (mutating) the code and rerunning my tests.
Apr 13, 2009
Swards said...
Automated functional tests are great. Logging in and out as different users with various privileges to try multiple user actions on objects in different states takes a long time, is really boring work, and makes it too easy to miss one of the permutations. If you can use production data to race through every object action for each user in the system and check that each one creates, updates, responds, or redirects appropriately (CURR), you have a good feeling that a new release is solid. And, with a RESTful architecture, there isn't a whole lot of refactoring over time. What's not to love?
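To sketch what that kind of sweep might look like in a Rails test suite: the roles, fixtures, routes, and the sign_in_as helper below are all hypothetical, invented only to illustrate iterating users against actions and checking the CURR outcomes.

# test/integration/projects_permissions_test.rb (hypothetical app and routes)
require "test_helper"

class ProjectsPermissionsTest < ActionDispatch::IntegrationTest
  ROLES = %w[admin editor viewer].freeze

  ROLES.each do |role|
    test "#{role} gets a sane response for every basic project action" do
      user = users(role.to_sym)   # one fixture per role (assumed)
      project = projects(:one)    # an existing record from fixtures (assumed)

      sign_in_as(user)            # assumed sign-in test helper

      get projects_path
      assert_response :success                              # R: responds

      post projects_path, params: { project: { name: "New" } }
      assert_includes [201, 302, 403], response.status      # C: creates, redirects, or is refused

      patch project_path(project), params: { project: { name: "Renamed" } }
      assert_includes [200, 302, 403], response.status      # U: updates, redirects, or is refused
    end
  end
end

Each role gets the same sweep of actions, so a missing permutation shows up as a failing test rather than something a human has to remember to click through before a release.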
Jul 6, 2009
Adam Bair said...
Hampton, I may be a year late in saying so but... yes, I find your rambling at the camera to be interesting! That sounds creepy, but it's not. I'm just replying to the last 5 seconds of your video, where you asked us to let you know if your video was interesting. Which it is. Was. Interesting. Now it's getting creepy. I apologize for the descent into creepiness; that was a fine video, my good man. Okay, I just need to stop.