Building new software from scratch is one of life’s great pleasures for a developer. It gives us a chance to “do it right” that we typically don’t have when doing maintenance work. More often than not, though, we find ourselves working on an older system, needing to work around decisions made by long-gone developers. For old applications that still serve the business requirements, maintenance is the name of the game. We have to add new features, fix old bugs, and generally try to modernize the application in manageable steps. Over time we end up with a codebase of mixed old and new code, and any serious refactoring becomes a scary proposition. Changing a few lines of old, uncommented business logic might fix the bug at hand, but it might just as easily release gremlins hidden in the code years ago.
There are a few simple practices, using free and open source tools, that can greatly simplify the problem and allow a developer to refactor old code with confidence. In his book Working Effectively With Legacy Code, Michael Feathers defines “legacy code” as any code that does not have tests around it. By eliminating legacy code as he defines it, we make the application more manageable. The first layer of testing, of course, is a suite of unit tests. This article is not about unit testing, though, as unit tests alone are often not enough to allow a developer to do the sort of large-scale refactoring inherent in modernizing an old codebase. Instead we will cover some best practices using Selenium and Jenkins to ensure that releases are high quality and contain no regressions.
A few words about our infrastructure. We were already using Jenkins to compile the application and run all unit tests each time we push changes. Since the Selenium tests take a while to run, we decided to run them nightly rather than on every commit. The Selenium tests are quite exhaustive, so we split them up to run concurrently across multiple machines in order to complete in a timely fashion. The tests themselves are JUnit tests built with selenium-java, and we spin up a Selenium server instance for each run. We created separate Ant targets for deploying a test DB, compiling, deploying and starting the app on JBoss, starting the Selenium server, and running the Selenium tests. Since Jenkins works great with Ant (and Maven, for those who use it), setting it up was straightforward.
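To give a feel for the build wiring, here is a simplified sketch of what such Ant targets might look like. Target names, property names, and paths are illustrative, not our actual build file:

```xml
<!-- Illustrative Ant targets; names, properties, and paths are hypothetical -->
<target name="deploy-test-db" description="Load a fresh test database">
    <sql driver="${db.driver}" url="${db.url}"
         userid="${db.user}" password="${db.pass}"
         src="sql/test-data.sql"/>
</target>

<target name="start-selenium" description="Start the Selenium server in the background">
    <java jar="lib/selenium-server-standalone.jar" fork="true" spawn="true"/>
</target>

<target name="selenium-tests" depends="deploy-test-db,start-selenium"
        description="Run the JUnit/Selenium suite">
    <junit fork="true" haltonfailure="false">
        <classpath refid="test.classpath"/>
        <batchtest todir="reports">
            <fileset dir="test" includes="**/*SeleniumTest.java"/>
        </batchtest>
        <formatter type="xml"/>
    </junit>
</target>
```

Because each step is its own target, Jenkins jobs can invoke them individually and report failures per stage.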
In order to get started we needed to write Selenium tests for every screen and feature in the application. That may sound like a daunting task, but since we create the tests programmatically we were able to get a lot out of code reuse. For example, if you want to test 10 features on one page, you can reuse all the code that gets you to that page and sets up the data in the background. Tasks like logging in, setting up common test data, and creating user accounts have all become one-line functions in our test code.
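The helper-method pattern can be sketched as follows. The `Driver` interface and `LoggingDriver` below are stand-ins for Selenium's `WebDriver` so the sketch stays self-contained; the class names, locators, and login flow are hypothetical, and real helpers would wrap selenium-java calls instead:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the helper-method pattern: repetitive multi-step tasks
// (logging in, seeding data) collapse into one-line calls every test reuses.
public class TestHelpers {

    // Stand-in for Selenium's WebDriver; real code would use selenium-java.
    interface Driver {
        void open(String url);
        void type(String locator, String text);
        void click(String locator);
    }

    // Records every action so the sketch can run without a browser.
    static class LoggingDriver implements Driver {
        final List<String> actions = new ArrayList<>();
        public void open(String url)              { actions.add("open " + url); }
        public void type(String loc, String text) { actions.add("type " + loc + "=" + text); }
        public void click(String loc)             { actions.add("click " + loc); }
    }

    private final Driver driver;

    public TestHelpers(Driver driver) { this.driver = driver; }

    // One-line login used at the top of nearly every test.
    // The URL and element ids below are hypothetical.
    public void loginAs(String user, String password) {
        driver.open("/login");
        driver.type("id=username", user);
        driver.type("id=password", password);
        driver.click("id=loginButton");
    }

    public static void main(String[] args) {
        LoggingDriver driver = new LoggingDriver();
        new TestHelpers(driver).loginAs("alice", "secret");
        System.out.println(driver.actions.size() + " actions replayed");
    }
}
```

With helpers like these, a test of feature number 10 on a page pays none of the navigation cost in code; it simply calls the same setup lines as the other nine.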
One area that did require real work was writing functions to locate specific items in the DOM. At this point we have a large library of custom locator strategies coded up for our application. They save a ton of time, letting us assert things like “make sure the item in Row #5 in Column X is equal to ‘foo’”. We also use any downtime between releases to improve our testing infrastructure.
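Our actual locator library is application-specific, but the core idea can be sketched as a small builder that turns row and column coordinates into an XPath locator. The table id and column names here are hypothetical:

```java
// Sketch of a custom locator strategy: translate "Row #5, Column X"
// into an XPath string a Selenium test can use with By.xpath(...).
public class TableLocators {

    // Maps a logical column name to its 1-based position in the table.
    // Column names and positions here are hypothetical.
    static int columnIndex(String column) {
        switch (column) {
            case "Name":   return 1;
            case "Status": return 2;
            case "X":      return 3;
            default: throw new IllegalArgumentException("Unknown column: " + column);
        }
    }

    // Builds an XPath like //table[@id='results']/tbody/tr[5]/td[3]
    public static String cell(int row, String column) {
        return String.format("//table[@id='results']/tbody/tr[%d]/td[%d]",
                             row, columnIndex(column));
    }

    public static void main(String[] args) {
        // In a real test this string would feed an assertion such as:
        // assertEquals("foo", driver.findElement(By.xpath(cell(5, "X"))).getText());
        System.out.println(cell(5, "X"));
    }
}
```

Centralizing the DOM knowledge this way means a layout change touches one function instead of hundreds of tests.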
As we wrote test code for the old parts of the application, we also resolved to write tests for every new feature and for every bug we fixed. This way we can be sure the same bug does not recur in the future. In some instances we had to adjust the existing code to expose ids and the like so that Selenium would have an easier time picking up specific elements; these touches are now second nature on the new pages we create. Every new feature we add has a corresponding Selenium test (or is tested as part of a larger test) before release. The only time we delay creating the Selenium test before delivering code to the client is when we are pushing out a hot fix for a production error, and that is a very rare occurrence since the product is thoroughly tested every night.
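The kind of adjustment we mean is usually as simple as giving elements stable, unique ids. The markup and id scheme below are illustrative only:

```html
<!-- Before: nothing stable for a locator to latch onto -->
<td><span class="status">Active</span></td>

<!-- After: a predictable id makes the element trivial to locate -->
<td><span id="order-42-status" class="status">Active</span></td>
```

A small change like this turns a brittle positional locator into a one-line lookup.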
Our initial pass ensured complete coverage on the Firefox web browser. Even though Selenium supports all modern browsers in theory, the tests ran an order of magnitude slower in Internet Explorer than in Firefox. Fortunately, since we have a lot of code reuse, we were able to identify the bottlenecks and refactor them to boost the performance of the IE tests considerably. We recommend this approach: get complete coverage on one browser first, then move on to other browsers. We also recommend keeping both Jenkins and Selenium up to date; Selenium is constantly being improved to be more efficient and to support the latest browsers. A couple of times the Selenium version we had was incompatible with the very latest browser version, but such problems are usually fixed by upgrading and refactoring a few test code functions.
Using this approach has a lot of benefits for the project. First and foremost, when we hand a release over to the client, we can be very confident it is of the highest quality. Indeed this has been proven over many releases: most change requests we receive concern feature tweaks rather than functional errors in the codebase. Second, it creates the safety net necessary for serious refactoring. We have refactored entire sections, spanning multiple screens with elements shared across the whole application, with minimal consequences. All the “feature level” bugs that creep in when completely rewriting something for a different framework were caught by the Selenium tests.
The approach does come with several challenges. You will need to invest resources into creating the tests; this is a worthwhile investment that will save you a lot of time down the road, but it requires work upfront. As noted, Selenium tests are slow as far as tests go, so as your suite grows you will have to contend with a long build cycle. Finally, there are some hurdles with Selenium itself, which can be overcome through workarounds, by making your code more testable, and by upgrading.
Even though this thorough level of testing is not free, the benefits far outweigh the costs, and we highly recommend this approach for your project as well.