Few table-top games raise a player’s blood pressure more than Jenga. Knowing that every move could bring the whole thing to a violent crash is both terrifying and exhilarating.
Well, in software development, it’s just terrifying.
Every code change comes with a risk that the new functionality will either not work entirely as expected or break pieces of existing functionality. But that doesn’t change the fact that our clients expect stable, reliable software, and that we at Speak expect the same from ourselves.
Fortunately, there’s a way to simultaneously ensure high-quality software and up your Jenga game: test-driven development. But first, let’s take a quick look at how software and Jenga are traditionally played.
The Pluck-’n’-Pray Technique
A software development workflow that isn’t built around test-driven development (TDD) often looks something like this:
Client requests a new feature.
Development team implements and releases the new feature.
All seems well at first, but eventually something breaks due to an unaccounted-for use case.
Development team scrambles to fix the bug, and (sometimes) writes a test to ensure that use case always works in the future.
The problem with this approach is that the users have already encountered the bug. Regardless of the severity or how quickly it’s fixed, it’s not a good user experience.
This methodology is like randomly pulling out Jenga blocks and hoping the tower doesn’t fall down. Eventually, it will. Sure, you can put the tower back up, maybe even in record time, but it still fell and made a whole lot of noise, and now everyone in the bar is staring at you.
Test-driven development is the act of writing code-based tests that establish acceptance criteria, then writing production code that fulfills those criteria.
More simply put, it’s like having a second Jenga tower all to yourself that you can try your moves out on before making those moves on the real tower. If your move on the test tower causes it to collapse, you know not to make that move on the real tower.
This is the part where I confess that nobody plays Jenga with me anymore. While I consider having a test tower a legitimate strategy, most others seem to think of it as a cheat. Fortunately, test-driven development is widely accepted in the software community, and something that our web and mobile developers at Speak employ without remorse.
Jenga towers aside, what does this all actually mean?
The TDD process can be boiled down to the following cycle.
We’ll assume the term “write” means “create or update”, as the cycle is the same for both new and existing code. Writing the test first establishes the definition for “correct functionality”. For example, suppose a client wants a widget on their site that retrieves the top 10 most popular dog breeds from a database. What would correct functionality look like?
The list should have exactly 10 breeds in it.
The breeds should be returned in the proper order (most popular at #1, with each subsequent breed being less popular).
The breeds in the list should be more popular than every breed not in the list.
The breeds in the list should be dog breeds (perhaps the database has cat breeds as well, which we don’t care about in this widget).
The above acceptance criteria should be agreed upon by the client and the development team, because those criteria now define what "correct functionality" means.
Conveniently, each of these acceptance criteria corresponds to one coded test. We can simulate an animal breed database in our development environment and run tests against it. An example test for acceptance criterion 1 would look something like this (in pseudo-code for illustration purposes):
Fill test database with animal breeds of various popularities
Execute production code to get top 10 most popular dog breeds
If the list contains exactly 10 breeds, the test passes; otherwise, it fails
Note that we haven’t actually written the production code to retrieve the top 10 dog breeds yet. So if we run the test now (which we should, to make sure our test doesn’t give us a false positive), it should fail. This is good news.
Once all of the tests are written and confirmed as failing, it’s time to actually write the production code. Once we think we have everything correct, we run all of the tests for the widget. If they all pass, we have successfully proven that the code we just wrote does exactly what it’s supposed to. If not, we head back into the mines and try again until they do.
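As a rough sketch of what that production code might look like, here is a minimal implementation that satisfies all four acceptance criteria, assuming (hypothetically, for illustration) that database rows are (name, species, popularity) tuples:

```python
def get_top_dog_breeds(records, limit=10):
    # Keep only dog breeds (criterion 4), then order them by descending
    # popularity (criteria 2 and 3), and return the top `limit` names
    # (criterion 1). Names and row shape are illustrative assumptions.
    dogs = [r for r in records if r[1] == "dog"]
    dogs.sort(key=lambda record: record[2], reverse=True)
    return [name for name, species, popularity in dogs[:limit]]
```

With this in place, the previously failing tests should turn green; if any of them still fail, the implementation (or the test) needs another pass.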
But wait! We must also run all of the other tests for the rest of the system, even though they seemingly have nothing to do with this widget. Since components are often connected together, adding or changing one could break others, so running our entire test suite ensures this didn’t happen. Fortunately, code-based tests are automated and take very little time to run. Once the entire test suite passes, we’re ready to deliver the product.
It’s well worth the effort.
I’d be lying if I said test-driven development doesn’t take extra time. It’s also not the most glamorous or exciting task for web developers. These are two common reasons why many development teams don’t use it consistently. But the extra effort is invaluable: it establishes the definition of “correct behavior”, greatly improves both the stability and reliability of the software, and gives the client and the development team confidence that their moves aren’t going to topple the tower.
Custom Development for Your Business
Custom web app development can provide an invaluable solution for your business. To speak with one of our mobile or web development specialists, contact us. We'd love to hear from you.
Let's Talk Development