Tuesday, July 31, 2012

Three tales of browser-based test suites [Part 1]

The topic of browser-based testing comes up again and again, and I often meet people whose expectations don't match my experience. So I guess it only makes sense to write down these experiences and some of the conclusions I came to.

I'm not even going to try and go into distinctions between acceptance, functional, integration or whatever testing. Whatever you're calling those tests, if they are instrumenting a web browser, that's what I'm talking about. (If you care about the distinctions, or about testing in general, I recommend reading Gojko Adzic's excellent book Specification By Example.)

I want to look at three concrete examples that I have worked on over the past few years.

In the first case, nobody on our team had prior experience, except for a short but traumatic brush with IBM's Rational Functional Tester (remember kids: friends don't let friends use IBM products). To get away from that, we decided to give Selenium a try.

We initially recorded tests with the Firefox plugin but quickly realized that this wasn't maintainable and didn't really give us good control over selecting elements. Our tester Benjamin then took it upon himself to write tests in Java and started to integrate them into our CI environment. Over time he created a solid set of Page Objects that allowed us to write new tests fairly quickly.
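
For anyone who hasn't come across the pattern: a Page Object hides one page (or page fragment) of the application behind an intention-revealing API, so the locators live in exactly one place. Here's a minimal sketch of the kind of class Benjamin built; the page, fields and locators are all invented for illustration:

    import org.openqa.selenium.By;
    import org.openqa.selenium.WebDriver;

    // Hypothetical Page Object; the page and its locators are invented.
    public class LoginPage {
        private final WebDriver driver;

        public LoginPage(WebDriver driver) {
            this.driver = driver;
        }

        // Tests call intention-revealing methods instead of touching
        // selectors, so a changed id only has to be fixed here.
        public void loginAs(String username, String password) {
            driver.findElement(By.id("username")).sendKeys(username);
            driver.findElement(By.id("password")).sendKeys(password);
            driver.findElement(By.id("login")).click();
        }
    }

A test then reads as a sequence of user actions, e.g. new LoginPage(driver).loginAs("alice", "secret"), with no CSS or XPath in sight.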

It took the rest of the team quite a while to take some ownership of these tests, but eventually we came to a point where we'd sometimes even drive new features from these Selenium tests.

There were hurdles along the way. Tests were brittle and the execution time was unacceptable (>30 minutes). This improved with newer versions of Selenium, with Selenium Grid to parallelize tests, and by keeping a close eye on the VMs running the browser instances. In the case of IE < 8 we simply gave up.
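
The nice thing about the Grid is that it barely changes the test code: instead of starting a local browser, you point a RemoteWebDriver at the hub, which farms the session out to whichever node has a free browser of the requested type. A sketch, assuming a hub at a made-up address:

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class GridSmokeTest {
        public static void main(String[] args) throws Exception {
            // The hub URL is a placeholder; the hub picks a node
            // that has a free Firefox instance.
            WebDriver driver = new RemoteWebDriver(
                    new URL("http://grid-hub.example.com:4444/wd/hub"),
                    DesiredCapabilities.firefox());
            try {
                driver.get("http://www.example.com/");
                System.out.println(driver.getTitle());
            } finally {
                driver.quit();
            }
        }
    }

The actual parallelization then happens in the test runner, e.g. by letting the build run several test classes at once, each with its own browser session.
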
Due to some relatively complex business processes it was also fairly hard to test the later stages of those processes. We only slowly got better at setting up the necessary test data without making use of the browser. This left us with some unfortunate holes in automated coverage.
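
What "setting up test data without the browser" amounts to, roughly: seed the state a test needs through some backdoor (an internal HTTP endpoint, a service call, direct database inserts) and only use Selenium for the part you actually want to test. A purely illustrative sketch; the fixture endpoint and payload are made up:

    import java.io.OutputStream;
    import java.net.HttpURLConnection;
    import java.net.URL;
    import java.nio.charset.StandardCharsets;

    // Illustrative only: assumes the application exposes a test-only
    // fixture endpoint. The point is that the data arrives through a
    // backdoor, not through the browser.
    public class TestDataSetup {

        public static void createOrderInState(String state) throws Exception {
            URL url = new URL("http://app.example.com/test-fixtures/orders");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/json");
            byte[] body = ("{\"state\": \"" + state + "\"}")
                    .getBytes(StandardCharsets.UTF_8);
            try (OutputStream out = conn.getOutputStream()) {
                out.write(body);
            }
            if (conn.getResponseCode() != 201) {
                throw new IllegalStateException(
                        "fixture setup failed: " + conn.getResponseCode());
            }
        }
    }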

There were still manual tests for each release, covering all of our major supported browsers. But with more of the core functionality being handled by automated tests, and with the amount of change shrinking due to shorter release cycles, the effort for this went down considerably.

We also became better at writing unit and integration tests and our confidence in these grew.

What did I take away from it?

  • the demand for these tests was driven by Benjamin wanting to automate his tests to make his life easier, and by his desire to learn more about Java. It was a relatively clear goal and the benefits of effort spent on automation were fairly clear. Where automation was hard, this allowed him to balance that effort against the effort of doing the tests manually
  • setting up and maintaining browser instances for tests and keeping them stable is hard, thankless work, and I'm glad that these days you can just hand money over to Sauce Labs to do that for you (see the sketch below this list)
  • getting the CI builds to be reliably green made shared ownership easier. Collaboration on these tests continued to grow.
  • closely related to that, the tests have to run fast. If they run for more than ten minutes they might as well not exist
  • we had relatively few Selenium tests compared to our unit tests, but there was the odd bug that wasn't caught by our other test suites
  • when we rewrote a lot of the front-end JavaScript and test-drove it with Jasmine, browser compatibility issues became very rare
  • as our understanding of all the different tests grew, it became easier to decide when we could avoid writing an extra Selenium test
  • working closely with the rest of the team and writing the tests in Java gave our tester room to grow and learn, and to turn into a regular dev on the team. Seeing that happen was probably one of the most fun things during my time there. The rest of the team also took on some of the manual testing work.
(Edit 13.9.15: removed some point about testers that I didn't like anymore)
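
Regarding the point about hosted browsers: with Sauce Labs (or a similar service) the change is again just the RemoteWebDriver URL, plus capabilities describing the browser/OS combination you want; the service boots and maintains the VM. The credentials below are placeholders and the capability values are only an example:

    import java.net.URL;
    import org.openqa.selenium.WebDriver;
    import org.openqa.selenium.remote.DesiredCapabilities;
    import org.openqa.selenium.remote.RemoteWebDriver;

    public class HostedBrowserExample {
        public static void main(String[] args) throws Exception {
            // Ask for a specific browser/OS combination; the service
            // provides and maintains the VM so you don't have to.
            DesiredCapabilities caps = DesiredCapabilities.internetExplorer();
            caps.setCapability("version", "8");
            caps.setCapability("platform", "Windows XP");

            WebDriver driver = new RemoteWebDriver(
                    new URL("http://USERNAME:ACCESS_KEY@ondemand.saucelabs.com:80/wd/hub"),
                    caps);
            try {
                driver.get("http://www.example.com/");
            } finally {
                driver.quit();
            }
        }
    }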

Continue on to part 2