Tuesday, June 6, 2017
Let’s say your organization has an iOS app and Apple releases a new version of iOS (sound familiar?). The more automated test cases you already have on hand, the faster you’ll be able to pinpoint what updates need to be made.
With that in mind, does it make sense to automate all testing? Well, possibly.
There are times when it may be best to conduct testing manually and other times when automated testing is the better choice.
Let’s break it down.
Once a user story is accepted, you may be ready to code test cases for regression and cross-platform testing.
The typical case for automation arises from the need for an app to carry out its function on a variety of platforms, many of which are constantly changing.
The question isn’t so much “does it work as we expect?” but “does it still work as we expect?” on a newer OS, on new or different devices, or even on different operating systems.
A classic case for regression testing hits the iOS community about once a year.
Last fall, it was iOS 10; this year, it’s iOS 11. A number of object classes have changed either appearance or function, and regression tests that ran on iOS 10 can provide a roadmap and inventory of where the problems (if any) are in iOS 11.
When regression tests fail in these kinds of scenarios, you get a quick look at what needs to be done for the app to support a new operating system.
The existence of a suite of regression test cases can save critical time and can be the difference between releasing an app in a timely fashion and lagging behind Apple.
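To make the idea concrete, here is a minimal sketch of what such a regression suite can look like. It is illustrative only: the device/OS inventory, the `login_flow_succeeds` helper, and the login scenario are all hypothetical stand-ins for tests that would drive a real app on real devices (for example, through a tool like Mobile Labs Trust).

```python
import unittest

# Hypothetical inventory of device/OS combinations the suite covers.
# In a real suite these entries would drive sessions on physical
# devices; here the app check is stubbed out so the sketch runs.
PLATFORMS = [
    ("iPhone 6s", "iOS 10.3"),
    ("iPhone 7", "iOS 11.0"),
    ("iPad Air 2", "iOS 11.0"),
]

def login_flow_succeeds(device, os_version):
    """Stand-in for driving the app's login flow on a real device."""
    # A real implementation would tap through the UI and verify that
    # the home screen appears; this stub simply reports success.
    return True

class RegressionSuite(unittest.TestCase):
    def test_login_flow_on_all_platforms(self):
        # A failure here pinpoints exactly which device/OS pairing
        # broke, producing the roadmap described above.
        for device, os_version in PLATFORMS:
            with self.subTest(device=device, os=os_version):
                self.assertTrue(login_flow_succeeds(device, os_version))

if __name__ == "__main__":
    unittest.main()
```

When a new OS ships, rerunning the same suite with the new OS added to the inventory turns "does it still work?" into a concrete list of passing and failing combinations.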
Apple is getting better at moving its user base to new releases, so delays in getting updated apps out to users are almost guaranteed to cause frustration.
The need for regression testing occurs when development delivers new function, when a bug is fixed, when tools or libraries used in development are updated, and when the operating system (and/or browser) is updated.
A suite of regression test cases pays the greatest dividends in the most volatile environments – and mobility certainly fits the bill.
We at Mobile Labs recently used regression testing to update our own iOS apps. We ran automated regression tests on our apps using Mobile Labs Trust™. We put a project plan together to address the problems revealed by the tests and met the release date of the newest iOS.
The most common use of manual app testing is during the development phases of a project, while developers are coding new functionality. Developers can quickly access devices using a device access manager like Mobile Labs deviceConnect to perform initial app validation.
In the Agile world, the completion of incremental function needs quick and early validation that the code behaves the way the developer – and designer – intends. A developer can use a real mobile device to make a quick check of the code and can then hand the managed device off for remote access by the designer or product owner.
Quickly accessing the right hardware for such confirmations makes a faster path to what Agile calls “done done” – complete code that satisfies the user story and the product owner. The goal of manual testing in this case is to get a quick course correction to ensure that app functions are correct. There may be no return for automating test cases for code that won’t ship until it has been modified or completed.
Some mobile apps or hardware features can require manual testing. For example, you may want to integrate with a camera function such as barcode scanning. Since there really isn't a way to simulate this function, manual testing is the best way to go.
Manual testing may also be handy for validating the externals and the appearance of the app. Manual tests can sort out whether the app has the right icon, if it handles being interrupted and then re-dispatched, and whether the overall look and feel of the app is correct.
This last dimension is truly aesthetic, and takes a designer’s eye. Automation hasn’t yet progressed into the judgment realms of “cool” and “beautiful.”
For the overall effect, the mind’s eye can hardly be bettered.
Finally, manual testing may be useful to verify spelling, placement of objects, proportions, consistency of UI standards, and general design standards.
Are the buttons all supposed to say “Logon” or “Login?” Do we “delete” or “remove?” Are the names of the states spelled correctly? Is the right version of the company logo being used and are the colors consistent with marketing’s defined palette?
These details are mostly fit-and-finish items and don't tend to vary once they have been done correctly. Of course, if they do vary, then automated tests can be used to ensure that the spellings and terms remain correct.
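Once the terminology decisions are settled, a lightweight automated check can keep them from drifting. The sketch below assumes hypothetical UI strings of the kind you might extract from a localization file; the keys, values, and banned-term list are all illustrative.

```python
# Hypothetical UI strings as they might be extracted from a
# localization file; keys and values are illustrative only.
UI_STRINGS = {
    "auth.button.title": "Login",
    "auth.prompt": "Login to continue",
    "settings.signout": "Logout",
}

# Terms the team has ruled out, mapped to the approved wording.
BANNED_TERMS = {"Logon": "Login", "Remove": "Delete"}

def find_term_violations(strings, banned):
    """Return (key, bad_term) pairs wherever a banned term appears."""
    violations = []
    for key, text in strings.items():
        for bad in banned:
            if bad.lower() in text.lower():
                violations.append((key, bad))
    return violations

# An empty result means the copy is consistent with the approved terms.
violations = find_term_violations(UI_STRINGS, BANNED_TERMS)
assert violations == [], f"Inconsistent terminology: {violations}"
```

A check like this runs in seconds on every build, so a stray "Logon" is caught long before a designer has to spot it by eye.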
When you’re testing new functionality or a single function, manual mobile app testing is often the way to go.
On the other hand, whenever you’re performing cross-platform tests or simple regression testing, automated mobile app tests can deliver a sound return on your investment.
As with most things mobile, developing and testing mobile apps is a bit of a balancing act. On the one hand, development and QA teams are tasked with quickly deploying apps that have been fully tested and perform according to plan. On the other hand, app development times can run longer than anticipated, putting increased pressure on QA to run comprehensive functional and regression tests across all device types and operating systems with far less testing time than originally planned. This puts extra pressure on the QA organization, which must still keep a close eye on budgets and app backlogs while managing high expectations for app acceptance and usability.
It should come as no surprise that enterprise QA teams are also trying to determine if manual testing alone is the best approach. The very nature of mobility, with rapid changes and multiple form factors, paired with the pressure to quickly deploy numerous mobile apps, makes automation a popular test strategy for mobile applications. Organizations can quickly realize value from an automated mobile application testing approach because it can minimize the time it takes to get apps ready for release.
However, just because automated mobile app testing is a possibility, it doesn’t mean that it’s always the best fit for all enterprise testing needs. Mobile apps come with a lengthy list of outside variables. Until testers have isolated and addressed these variables, manual testing still has its place in the testing process. Perhaps the best option is a hybrid approach. In order to find that balance between manual and automated testing – both for software and mobile applications – consider the following key concepts:
Once automated testing has been identified for use with a particular app, QA teams should implement a test automation tool in high-risk areas first and then move down through levels of priority. High-traffic, heavy-use apps benefit the most from the faster testing process that automation yields. Considering the time it takes to put a test together, it makes sense to save automation for the apps that see the most action.
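One simple way to put that prioritization into practice is to rank the app portfolio by a rough risk score. The sketch below is a toy illustration: the app names, usage numbers, defect rates, and the scoring formula are all assumptions, not a prescribed methodology.

```python
# Hypothetical app inventory with rough traffic and quality figures.
apps = [
    {"name": "ExpenseApp", "daily_users": 1200, "defect_rate": 0.02},
    {"name": "StoreFront", "daily_users": 45000, "defect_rate": 0.05},
    {"name": "HRPortal", "daily_users": 300, "defect_rate": 0.01},
]

def automation_priority(app):
    # Illustrative scoring: weight traffic by the observed defect rate,
    # so heavily used, bug-prone apps rise to the top of the queue.
    return app["daily_users"] * app["defect_rate"]

# Automate the highest-risk apps first, then work down the list.
queue = sorted(apps, key=automation_priority, reverse=True)
```

However crude the formula, ranking the backlog this way keeps the automation budget pointed at the apps where a regression would hurt the most.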