Last updated July 29, 2016.
So you’re making the switch from mobile app testing that uses optical character recognition (OCR) or image-within-an-image (IWI) matching to a native object recognition (instrumentation-based) mobile application testing approach.
You’re changing from an understanding of the app based on its surface appearance (the surface of the glass) to a rich and detailed understanding of the objects, classes, and attributes that are in use.
With an instrumentation approach that uses a standard object repository, testers can manage all objects in one location. One of the many benefits of this approach is that your mobile testing software recognizes the objects automatically, so extending scripts to upgraded operating systems or creating cross-platform scripts often requires no code changes or script updates.
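The idea of an object repository can be sketched in a few lines of Python. This is a minimal conceptual model, not any tool's actual implementation; the logical names and identification properties below are hypothetical.

```python
# Minimal sketch of an object-repository lookup, assuming a simple
# dict-backed store. Logical names and properties are hypothetical.

OBJECT_REPOSITORY = {
    "LoginButton": {"class": "Button", "text": "Log In"},
    "UserField": {"class": "EditField", "name": "username"},
}

def identify(logical_name):
    """Resolve a logical name to its identification properties."""
    props = OBJECT_REPOSITORY.get(logical_name)
    if props is None:
        raise KeyError(f"{logical_name!r} not found in object repository")
    return props

# Scripts reference only the logical name; if the app's locator changes,
# only the repository entry is updated, not every script that uses it.
button = identify("LoginButton")
print(button["class"])  # Button
```

Because scripts are decoupled from the raw locators, a change in the application's UI touches one repository entry instead of every test that interacts with the control.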
But before you make the switch, what do you need to keep in mind?
Know thy mobile app.
With your current OCR-based mobile app testing approach, you just need to know whether your device is connected and whether you can view the screen. With an instrumentation-based mobile testing approach, however, you will need a basic understanding of how your mobile app was built and which technologies were used to build it.
As a tester, you should also know the underlying architecture of your mobile app because it can help you pinpoint the cause of problems.
For example, Facebook’s and LinkedIn’s mobile apps were originally developed as hybrid applications, which essentially presented a web view through an app with minor modifications. Unfortunately, their hybrid apps suffered from performance problems.
Since then, both companies have completely rewritten their apps as fully-native mobile apps and performance has dramatically improved.
As you transition your mobile app testing from an OCR/IWI approach to instrumentation, one of the first things to do is determine how your mobile apps were built — with native mobile controls, as a mobile optimized website, or using a hybrid approach combining native and web controls.
Armed with this knowledge, you’ll know which scenarios to use for your mobile app testing and you will be better positioned to ensure the thoroughness and accuracy of your testing efforts. You’ll know what kinds of objects you expect to encounter and you’ll have a good idea what methods the app is using to manipulate them.
Objects in mirror are closer than they appear.
Any licensed driver immediately recognizes this as a safety warning found on vehicles’ passenger-side mirrors.
It’s a warning that should also probably appear on many surface-based mobile app testing tools.
If you’ve ever used a surface-based testing approach, you may not realize that objects aren’t always what they appear to be on the screen.
For example, radio buttons and checkboxes aren’t really radio buttons and checkboxes in iOS. Instead, they are buttons with specific images that make them look like radio buttons and checkboxes based on the state the object is in (for example, active or inactive).
Being aware of which controls are available to you as a developer, whether you’re building an Android- or iOS-based app, is an important step.
Get familiar with the power of the properties inside your application.
When switching from surface-based to instrumentation-based mobile app testing, you will be able to see how many more objects and their associated properties exist.
That’s because the object inventory comes from the application itself rather than just those surface attributes that can be interpreted by OCR or IWI. Accurate classification and identification of objects becomes simple when instrumentation passes information from the application to the object repository.
This can become critical when a web-based app, for example, presents an object that looks on the surface like a button but is actually a web link. By knowing what methods are appropriate to each object, an instrumented toolset can help you avoid problems with object recognition or with invoking the wrong methods on the wrong objects.
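One way to picture that guard is a lookup of which methods each object class supports before a call is dispatched. The sketch below is purely illustrative: the class names are styled after the Mobi* objects discussed later, and the method table is an assumption, not any tool's actual API.

```python
# Hedged sketch: check an object's reported class before invoking a
# method on it. Class names and the method table are hypothetical.

SUPPORTED_METHODS = {
    "MobiButton": {"Click", "WaitProperty"},
    "MobiWebLink": {"Click", "GetText"},  # assumed link methods
}

def invoke(obj_class, method):
    """Refuse to call a method the object's class does not support."""
    if method not in SUPPORTED_METHODS.get(obj_class, set()):
        raise TypeError(f"{method} is not valid for a {obj_class}")
    return f"{method} invoked on {obj_class}"

# A link styled to look like a button still dispatches link methods.
print(invoke("MobiWebLink", "Click"))  # Click invoked on MobiWebLink
```

With surface-based recognition there is no class to check, so a link dressed up as a button would be driven blind; instrumentation makes the mismatch detectable before the wrong method is invoked.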
Another example shows further simplification: a drop-down list (MobiDropDown object) with 1,000 items.
Using surface-based technology, you may have to simulate manual scrolling of the list to find out how many items it contains, search for a value, or scroll to a particular entry (say, #749).
With instrumentation, you can inquire about the current number of items of the list, search the list, and scroll the list to a selected item using methods such as Select, GetItem, RowCount, and Set. Properties that can be used for identification or that may be queried include items count, name, selected item index, selection, etc.
These facilities give the tester much greater programmatic control over the test’s execution. In all, 18 methods and 14 properties are available to the scripter of a MobiDropDown object. Let’s say item 749 contains the text “Simple.” Simulating the user’s selection of that item from the list is as easy as calling Select(749) or Select(“Simple”).
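The drop-down interaction above can be sketched with a small Python model. This is an illustrative stand-in for an instrumented object, not the tool's actual implementation; only the method names (RowCount, GetItem, Select) follow the ones mentioned above.

```python
class DropDownModel:
    """Illustrative stand-in for an instrumented drop-down object."""

    def __init__(self, items):
        self._items = list(items)
        self.selection = None  # text of the currently selected item

    def RowCount(self):
        """Report the item count directly -- no simulated scrolling."""
        return len(self._items)

    def GetItem(self, index):
        """Read any item's text by index."""
        return self._items[index]

    def Select(self, item):
        # Accept an index or the item's text, mirroring Select(749)
        # and Select("Simple") in the example above.
        index = item if isinstance(item, int) else self._items.index(item)
        self.selection = self._items[index]
        return self.selection

items = [f"Item {i}" for i in range(1000)]
items[749] = "Simple"  # per the example, item 749 reads "Simple"
dd = DropDownModel(items)
print(dd.RowCount())        # 1000
print(dd.Select(749))       # Simple
print(dd.Select("Simple"))  # Simple
```

The point of the model is the contrast: each of these one-line queries would require a scripted scroll-and-read loop under a surface-based approach.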
One last example: a user filling out a form on a website might find that the form conditionally enables or disables subsequent items based on the choices the user makes.
A typical scenario is in response to a Yes/No question.
If the user responds, “Yes,” ten checkboxes may be enabled. If the user responds, “No,” five checkboxes might be enabled.
Those checkboxes may look nearly the same on screen, so an OCR-based tool may not be able to tell the difference.
However, instrumentation that uses native object recognition enables a tester to accurately verify that a checkbox is disabled or enabled based on the selection of certain values.
One of the properties of a MobiCheckBox is “enabled,” which can be used to verify whether a given object is enabled for user input. Among the 11 other properties are “checked,” “text,” and “visible,” and the tester has 14 methods available, including “Set,” “Click,” and “WaitProperty.”
The last method can be used to synchronize with a web server’s update to the control.
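A WaitProperty-style synchronization can be sketched as a polling loop. The sketch below is a conceptual model of that behavior, not the tool's implementation, and the CheckBoxModel class is hypothetical.

```python
import time

def wait_property(get_value, expected, timeout=2.0, interval=0.05):
    """Poll a property until it equals `expected` or the timeout expires."""
    deadline = time.monotonic() + timeout
    while True:
        if get_value() == expected:
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

class CheckBoxModel:
    """Hypothetical checkbox whose 'enabled' state a server update flips."""
    def __init__(self):
        self.enabled = False

cb = CheckBoxModel()
cb.enabled = True  # simulate the web server's update arriving
print(wait_property(lambda: cb.enabled, True))          # True
print(wait_property(lambda: False, True, timeout=0.1))  # False
```

Polling on the property itself, rather than sleeping for a fixed interval, is what lets the test stay in step with an asynchronous server update without guessing how long it will take.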
Over time you will most likely find that an automation framework for mobile app testing requires less maintenance using an instrumentation approach and an object repository. You will also get a clearer, more accurate picture of the underlying technology and how your mobile apps actually work.