We all dislike time-consuming and repetitive tasks, especially when working on an exciting project. No wonder, then, that automated UI testing using web browsers is a trending topic of discussion.
That is why, a year ago, my development team at iDalko and I decided to try it for one of our server add-ons.
This blog post is about our journey into UI testing, and all the complications that came with it.
The intuitiveness of automated UI testing
At the very outset, we found that it was easy to write a couple of lines of Java code to automate browser testing.
This really seemed like “magic” at the time. It was made easier by the Atlassian Selenium (PageObjects) library, which is built on top of the Selenium framework.
So we created our first set of integration tests, which walked through all the pages needed to get our add-on configured:
- Creating a custom field
- Editing its default value
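In the page-object style, each such page becomes its own class, and tests talk to pages rather than to raw selectors. The sketch below is purely illustrative (hypothetical class names, method names, and element IDs, not our actual code), with the browser hidden behind a tiny stub interface so the pattern is visible without Selenium itself:

```java
// Minimal page-object sketch. Names and element IDs are hypothetical.
// The browser sits behind a small interface, so tests depend on pages,
// not on raw selectors.
interface Browser {
    void navigate(String url);
    void type(String fieldId, String value);
    void click(String buttonId);
}

// One class per page; methods return page objects so steps can be chained.
class CustomFieldAdminPage {
    private final Browser browser;

    CustomFieldAdminPage(Browser browser) {
        this.browser = browser;
        browser.navigate("/secure/admin/ViewCustomFields.jspa");
    }

    CustomFieldAdminPage createCustomField(String name) {
        browser.click("add-custom-field");
        browser.type("field-name", name);
        browser.click("submit");
        return this;
    }

    CustomFieldAdminPage setDefaultValue(String value) {
        browser.type("default-value", value);
        browser.click("update");
        return this;
    }
}
```

The payoff is that a test reads as a sequence of intentions ("create the field, set its default") while the selector details stay inside the page class.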
Then the tests would initiate the issue creation process.
Our custom field seemed to render in edit mode just fine as well.
Once the issue creation was complete, the test would go into the issue view mode and check if the custom field rendered all the necessary UI.
Simple thus far: we used 4 kinds of pages/navigation patterns.
Or was it more pages?
The complexity of automated UI testing
We bumped into a couple of problems. And this is how it all began: Jira actually allows many ways to create an issue.
- Click the “create” button from the administrative section, and you land on a full-screen page that only lets you create an issue of one type in one project
- Click the “create” button from a dashboard or any other screen accessible to normal users, and you get a popup dialogue form instead. There you can switch the issue type/project, which causes the whole dialogue to re-render
So that made it a total of 5 pages.
But there’s also the edit menu that renders in two different ways: full-screen and popup.
So that made it a total of 7 pages.
Not to forget inline editing.
7 pages plus a special trigger on the view screen: that makes 8 pages already.
Oh, and lest we forget, there are multiple testing scenarios you’d like to check on those same screens.
Want your custom field UI feature tested? Now you have to multiply the number of tests by 8.
Are you beginning to see the problem we’re having?
And we could still find a workaround for this: restructure the test code so that the same testing scenario is executed on different screens, after navigating there through different routes.
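That restructuring idea can be sketched as follows. This is a minimal sketch with hypothetical screen and method names, not our actual code: the scenario is written once as a function of a generic "screen", and then replayed against every navigation route.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Sketch: one scenario, many screens. All names here are hypothetical.
interface IssueScreen {
    String name();                      // e.g. "full-screen create", "popup dialogue"
    void setCustomField(String value);
    String customFieldValue();
}

class ScenarioRunner {
    // Runs the same scenario against each screen and collects the names
    // of the screens where it failed, instead of aborting on the first one.
    static List<String> runEverywhere(List<IssueScreen> screens,
                                      Consumer<IssueScreen> scenario) {
        List<String> failures = new ArrayList<>();
        for (IssueScreen screen : screens) {
            try {
                scenario.accept(screen);
            } catch (AssertionError e) {
                failures.add(screen.name());
            }
        }
        return failures;
    }
}
```

With this shape, adding a ninth way to reach the custom field means adding one more `IssueScreen` implementation, not duplicating the whole test.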
But then the real challenge surfaced. The integration tests were failing!
Some piece of UI could not be found: it hadn’t rendered yet at the moment the test expected it to be there.
So we used timed components and wrote queries that make the test wait for a condition to become true before the assertions run.
You could also make your add-on post events and have your test code listen for them, running the assertions only once the add-on has dispatched the necessary event.
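At its core, a timed condition is just a poll-until-true loop with a timeout. A plain-Java sketch of the idea follows; the names are ours for illustration, not the actual Atlassian Selenium API (which provides this via its timed conditions and poller):

```java
import java.util.function.BooleanSupplier;

// Sketch of a poll-until-true helper, similar in spirit to timed
// conditions in UI-testing libraries. Names are illustrative only.
class Waiter {
    // Polls the condition until it returns true or the timeout elapses.
    // Returns the final state of the condition, so the caller can assert on it.
    static boolean waitUntilTrue(BooleanSupplier condition,
                                 long timeoutMillis,
                                 long pollIntervalMillis) {
        long deadline = System.currentTimeMillis() + timeoutMillis;
        while (System.currentTimeMillis() < deadline) {
            if (condition.getAsBoolean()) {
                return true;   // condition met: safe to run assertions now
            }
            try {
                Thread.sleep(pollIntervalMillis);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return false;  // interrupted: report the condition as unmet
            }
        }
        return condition.getAsBoolean();  // one last check before giving up
    }
}
```

In a test you would call something like `waitUntilTrue(() -> page.customFieldIsVisible(), 5000, 100)` before asserting on the field’s contents.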
The problem with Jira updates
Right, so now you’ve written your tests for the Jira version you support. Does anything change when a new Jira version arrives?
Unfortunately, it does:
- You need to upgrade the page objects library you’re using
- Special attention goes to the Selenium version update: many of the annotations and browser interactions get deprecated over time and require extra work
That means more maintenance.
Moving back to manual testing
Long story short, these were the main practical challenges that we encountered while developing UI tests.
We attempted to leverage this framework to cover about 70% of our UI features.
It resulted in:
- Long-running test suites
- A complicated and lengthy debugging / root-causing cycle: run the test locally with a visible browser and try to see where the assertion goes wrong; if the test does not fail locally, try a virtual X server set up the same way as on CI
- A higher maintenance burden: whenever the Jira version changes, or far worse, your add-on’s UI changes. Remember the refactoring that splits your testing scenario out from the navigation? That refactoring becomes a continuous process because of these changes.
- “Flipping” tests, as in tossing a coin to see which side comes up. These happen when timed conditions fall out of tune with HTML changes or when maintenance is done wrong, resulting in yet more maintenance or longer waiting periods.
In the end, we’ve decided to minimize the number of UI tests we run.
It does make sense to automate UI tests, but only for a limited set of features:
- The most important ones, and those that are hard to test manually.
- Or tests whose setup would need long and repetitive manual actions.
Having said that, you should still ensure coverage of your product’s UI by tracking your test cases and manual test sessions. Luckily, there are tools integrated into Jira for that, such as QMetry!
Be smart about what you test and how, and you’ll get rewarded with more time to develop some awesome features.
(This blog is contributed by Serhiy Onyschchenko, Lead Developer for Exalate. He is an Atlassian Certified Expert and lead of add-on development with the Platinum Atlassian Solutions Partner iDalko. He also has more than 6 years of experience implementing testing.)