Last updated July 29, 2016
There are traditionally two approaches to debugging software: hardware-based or software-based.
Embedded systems’ developers sometimes use hardware to debug software, most often with an in-circuit emulator (ICE). An ICE replaces the CPU and gives a window on the state of the system.
Developers often turn to ICEs when there is no support for the second and more popular approach, software-based debuggers such as GDB.
ICEs are almost always more expensive than software debuggers, and they may struggle to produce faithful results.
For example, an ICE may not test the genuine operating conditions of a system because it replaces the CPU. When hardware is sent to do a job that software can do as well or better, costs almost always rise and proper validation often becomes questionable.
There is a similar story in the mobility space, where a window on the app (the surface of the glass) is sometimes used instead of a software-based approach.
When the iPhone® was unveiled in 2007, it offered little more than a cellphone, a web browser, and a few built-in apps (like mail and text messaging) written by Apple®.
The concept of mobile developers putting their own unique apps – software – on the phone wasn’t supported for quite a while.
Just like embedded systems, the original iPhone had no way to install, run, or communicate with intelligent debugging software. With no app on the phone to assist debugging, testing of manufacturer-supplied apps in various worldwide geographies naturally relied on device hardware, like the screen or solder-points on internal circuits.
In the same vein, our own early automation prototypes analyzed screen shots and gave users the ability to draw rectangles to manually define objects. Such hardware-centric approaches share a common dependence upon the surface of the glass.
Surface-based testing uses optical character recognition (OCR) and Image Within Image (IWI) algorithms to make sense of the screen. This approach is a fine place to start, but it offers a view from the outside — and has at times had drawbacks like fuzzy screen pictures, poor performance, and inaccurate results.
Costs of hardware (harnesses, cradles, cameras, servers), as with ICEs, increase tool prices.
Simply stated, surface testing is a hardware answer (screens, coordinates, clicks) to a software problem (objects, methods, and attributes). Mobile Labs found that difficult backgrounds and advanced graphics challenge the accuracy of surface testing (for example, animated clouds that float behind text).
Incorrect attributes, incorrect object classes, and failures to detect object presence can result. Once Apple announced app support and the iOS and Android software development kits became available, it became possible for the first time to use a software-based approach to mobility testing.
A software-based automation component is usually an instrument or agent linked with the app that accurately reports its object inventory, including all attributes and object contents.
Using an instrument for app testing is a 50-year-old tradition whose roots lie in memory leak detectors and performance profilers. Apple supplies many such instruments to iOS developers.
An app agent can eliminate OCR errors, IWI failures, type misidentifications, and Z-order errors (errors caused by one object partially covering another) that can occur with surface testing.
Performance is improved by eliminating the need for expensive algorithms like IWI and by reducing the number of times the screen image must be transmitted to the tester. An instrument or agent can send screen updates only when the screen actually changes, rather than streaming the display continuously as a surface-based, VNC-style (frame-buffer or video-feed) approach must.
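The change-driven update idea can be sketched in a few lines. This is a minimal illustration, not a real product API: `ScreenAgent`, `on_change`, and `render` are hypothetical names, and a hash stands in for whatever change detection a real agent would use.

```python
# Hypothetical sketch: an on-device agent pushes screen updates only when
# the UI actually changes, instead of streaming every frame VNC-style.
# All names (ScreenAgent, on_change, render) are illustrative.

class ScreenAgent:
    def __init__(self):
        self._last_digest = None
        self._listeners = []

    def on_change(self, callback):
        """Register a tester-side callback fired only on real UI changes."""
        self._listeners.append(callback)

    def render(self, screen_state):
        """Called by the app's UI layer after each redraw."""
        digest = hash(screen_state)       # cheap change detection
        if digest != self._last_digest:   # transmit only when something changed
            self._last_digest = digest
            for cb in self._listeners:
                cb(screen_state)

agent = ScreenAgent()
updates = []
agent.on_change(updates.append)

agent.render("login screen")   # first frame: one update sent
agent.render("login screen")   # identical redraw: suppressed
agent.render("home screen")    # real change: second update sent
```

In this sketch the identical redraw generates no traffic at all, which is the performance win over repeatedly shipping frame buffers to the tester.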
Meeting Mobile Users’ High Expectations and Getting the Business Done
As mobility has evolved, an entire market devoted to mobile apps has evolved – and it’s big.
In fact, ABI Research estimated that mobile users would download 70 billion apps in 2013 – 56 billion to smartphones and 14 billion to tablets.
App stores are filled with thousands and thousands of apps, and smartphones and tablets are more general-purpose computers than phones. Those computers we all carry around in our pockets run software programs that need testing.
That testing has evolved from hardware-based approaches to the ‘new’ mobile testing paradigm: testing software with software.
Testing is critical in our increasingly mobile world and mobile application testing tools are often required.
We all have very high expectations of how well our mobile devices and apps work, whether we are accessing a mobile app or a website via a mobile browser.
According to an Equation Research survey, 71 percent of mobile web users expect sites to load about as quickly as, or faster than, they do on their computers at home – up from 58 percent in 2009.
If app quality is low, the app may load slowly, functionality may be poor, or performance may be unpredictable – and adoption rates plummet.
Testing is the only way to ensure that apps function as we hope on the multiple platforms that mobility must support and in the face of both rapidly changing app function and frequent mobile OS updates.
The Benefits of Testing Software with Software
When mobile apps are tested with a ‘testing-software-with-software’ approach, testers get a rich and intimate understanding of the structure, state, data, flow, logic, and content of the mobile application.
By putting an agent on a mobile device, we can determine with 100 percent certainty the objects that the app has defined, the methods those objects support, and their attributes – all without having to decode or analyze a screenshot.
Scripts are more accurate and performance is improved.
An example is testing a sign-in screen and verifying the user name and password fields are saved when the “remember me” option is selected. With the agent-based approach, testers can compare the password to the expected value even though the letters are masked from the user.
With OCR, testers may be able only to verify the password mask (a string of dots) since that’s what shows on the surface.
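The password example can be made concrete with a small sketch. `PasswordField` here is an illustrative stand-in for whatever native control the platform provides, not a real class; the point is only the contrast between the object's `text` attribute (visible to an agent) and the mask rendered on the glass (all OCR can see).

```python
# Hedged sketch: what an agent sees vs. what surface OCR sees for a
# masked password field. PasswordField is a hypothetical stand-in.

class PasswordField:
    def __init__(self, text):
        self.text = text                    # real value, held by the app object

    @property
    def displayed(self):
        return "\u2022" * len(self.text)    # dots: what appears on the glass

field = PasswordField("s3cret!")

ocr_result = field.displayed    # surface approach: only the mask is visible
agent_result = field.text       # agent approach: the actual stored value
```

OCR can at best confirm that seven dots are on screen; the agent can compare `agent_result` directly against the expected password.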
The Downside of Hardware Approaches for Mobile Application Testing
A hardware-based testing approach can leave tools vulnerable to misunderstanding the contents or type of an object.
For example, using surface testing, tools may not be able to tell whether a bitmap functions as a button or is just decoration.
Another problem that plagues the surface approach is Z-ordering – when two objects overlap each other on the surface of the glass. Overlap sharply increases the difficulty of surface analysis, because the algorithms must work out whether the pixels represent one object or two.
A software-based agent can instantly determine the type of both objects and knows which is on top and which is below.
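A brief sketch shows why Z-order is trivial for an agent: each entry in the app's view hierarchy already carries its own type and stacking index, so an overlap never has to be disentangled from pixels. The dictionary fields (`id`, `type`, `frame`, `z`) are hypothetical names chosen for illustration.

```python
# Illustrative sketch: resolving an overlap from the view hierarchy
# instead of from the screen image. Field names are hypothetical.

views = [
    {"id": "bg_image",  "type": "ImageView", "frame": (0, 0, 320, 480),  "z": 0},
    {"id": "login_btn", "type": "Button",    "frame": (60, 200, 200, 44), "z": 1},
]

def overlap(a, b):
    """Axis-aligned rectangle intersection test on (x, y, w, h) frames."""
    ax, ay, aw, ah = a["frame"]
    bx, by, bw, bh = b["frame"]
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

a, b = views
overlapping = overlap(a, b)
top = max(a, b, key=lambda v: v["z"])   # higher z wins: no pixel analysis needed
```

With the hierarchy in hand, the script knows instantly that the two objects overlap, that one is an `ImageView` and the other a `Button`, and that the button is on top.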
As another example, an application might include a list of items to display.
Looking at the surface, a script engine can’t see the entire list and knows nothing about its unseen contents. However, the entire list is available to a software agent, including attributes that tell which elements are visible.
In concert with the agent, a script can direct the app to scroll a particular known element into view, eliminating the need to perform guesswork surface clicks or other UI actions to bring data onto the surface where it can be analyzed.
Moreover, the surface-based approach must search the list or even count its elements using scroll-then-OCR sequences that can be very time-consuming.
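The scrolling contrast can also be sketched briefly. `ListAgent` and its methods are hypothetical: the point is that because the agent holds the full list model, including off-screen rows, a script can jump straight to a known element instead of looping scroll-then-OCR until it happens to appear.

```python
# Hedged sketch: agent-directed scrolling. ListAgent is an illustrative
# stand-in for an agent that mirrors the app's full list model.

class ListAgent:
    def __init__(self, items, visible_rows=5):
        self.items = items                  # entire list, on-screen or not
        self.visible_rows = visible_rows
        self.first_visible = 0

    def visible(self):
        """Rows currently shown on the glass."""
        return self.items[self.first_visible:self.first_visible + self.visible_rows]

    def scroll_to(self, item):
        """Bring a known element into view in one step -- no OCR loop."""
        index = self.items.index(item)      # full list is known to the agent
        self.first_visible = min(index, len(self.items) - self.visible_rows)
        return index

agent = ListAgent([f"row {i}" for i in range(100)])
agent.scroll_to("row 42")
```

A surface-based script would instead have to scroll a screenful at a time, OCR each screenful, and check for the target string, turning one operation into dozens.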
Automated Mobile App Testing Solutions
With automated mobile application testing tools such as Mobile Labs Trust™, testers can determine with 100% certainty the actual color, font, data content, number of items, object types and other attributes of the objects in a mobile app.
There is no ambiguity about which methods the objects support, and the tools can give scripters powerful drag-and-drop and auto-completion support that leads them to the correct object syntax, attributes, and methods very quickly.
Productivity is greatly enhanced.
By testing software with software, testers can craft tests that are consistently thorough, accurate, and efficient. And, in the process, testers increase the quality of the apps they deliver and roll out apps that delight users.