
Mobile Application Testing Tutorial: How Can Record and Play Increase Scripting Velocity?

Last updated July 29, 2016.

Agile aficionados are well acquainted with the term “velocity,” which is used to measure how much new work a scrum team is able to complete in a given period of time.

A similar measure makes sense when we look at the process of creating automated test scripts in mobile app testing.  The question is one management has been asking since the advent of programming: how can we get done sooner?  How can we go faster?

Record and Play is a feature we recently introduced with Release 3.5 of Mobile Labs’ Trust mobile testing tool.

Among testers, Record and Play historically has had mixed reviews; many claim recorded scripts are too hard to maintain; some note that recorded scripts do not contain all the expected features of robust test cases – for example, synchronization, error detection, and recovery.

Few actually look to Record and Play as a silver bullet capable of supplanting common-sense logic and script intelligence.

After all, most Record and Play scripts need a little work before they are capable of dealing with variable conditions – timings or data content that differ from when the script was recorded.  If we need to revise a Record and Play script after its creation, what good is the feature?

If savvy test engineers know these things about Record and Play, why did so many of them ask Mobile Labs to “get ‘er done?”

The answer, I think, lies in seeing Record and Play less as magic that eradicates script writing and instead as a productivity tool that gets script basics created in a hurry. This frees the test engineer from many routine tasks like defining objects and laying down basic step flow.

Accompanying this blog is a video of a Record and Play session performed with UFT and Trust.  In only a few seconds, we created the basic test flow (viewable as keywords or VBScript statements).

As a bonus, each object we touched was created in UFT’s Object Repository, including attributes – automatically, quietly, and behind the scenes. When the session was complete, we had the Object Repository needed to build a robust test case.

Let’s look at the script captured during the recording session in the video.  UFT supports both a script view (VBScript) and a keyword view.  Here is the keyword view:


At the same time this keyword view is created, UFT also records a VB script view:


These two views accomplish exactly the same thing and are merely two representations of the same test logic.
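As a rough illustration, a recorded login-and-search flow in script view tends to look something like the sketch below. The device name, object names, and values here are hypothetical stand-ins for the system-assigned defaults Record and Play generates – they are not taken from the video, and your recording will differ.

```vbscript
' Hypothetical sketch of a recorded login-and-search flow.
' Object names are the kind of defaults Record and Play assigns.
MobiDevice("Device").MobiEdit("MobiEdit").Set "testuser"
MobiDevice("Device").MobiEdit("MobiEdit_2").SetSecure "..."  ' password captured in encrypted form
MobiDevice("Device").MobiSwitch("MobiSwitch").Set "Off"
MobiDevice("Device").MobiButton("MobiButton").Click          ' Login
MobiDevice("Device").MobiButton("MobiButton_2").Click        ' Search
```

Each line is one recorded gesture; the keyword view shows the same steps as a table of item/operation/value rows.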

More significantly, each object manipulated during the recording session was captured in the UFT object repository, showing some of the real power of Record and Play. During the video, as new objects were manipulated and new lines of script created, objects appeared in the repository, one by one.  Here is the object list after the recording session:


Because the object names were all system-assigned, I spent a couple of minutes renaming them with more descriptive names.  Note that both the keyword and script views of the recorded session were automatically updated to use the new names:


Record and Play has, in a few short minutes, given us basic script steps and a nearly complete set of objects for the login-and-search case.

Is the script ready to run? Maybe.

As we know from scripting websites reached with a desktop browser, back-end performance can vary with load.  Although our app has moved from the desktop onto an actual mobile device, variations in back-end server performance or load from other apps on the mobile device may mean that we need to synchronize the script with the app.

The test must proceed at the app’s speed, not at the script’s speed. Such coordination is fairly easy with desktop browsers, which know when a page is “done” loading.  But because we are dealing with a discrete app on a mobile device, there is no global “done” state with which to sync.

Our strategy is to sync on the state of the next object we need to manipulate or some other object that confirms for us the app has reached a desired state.

Because we want to synchronize on specific objects rather than the state of the app, we can use Trust’s CheckProperty method for objects with multiple known states and Trust’s WaitProperty method for simple element objects (for example, to test whether an object has appeared on-screen).

Let’s look at a couple of objects and the power of CheckProperty.  The first object is the  “Remember Me” switch on the login screen, identified with a box in the UFT run results below:


Note that the run result shows how the device screen looked at the time CheckProperty was executed, and shows both the answer (passed) and the question (we checked the switch for its “State” property, and we wanted to wait until the property had the value “Off”).  We made the check with the following VBScript statements:


The test looks like this in keyword view:


The Set method was recorded during the Record and Play session; we added the CheckProperty step to cause the following:

  1. Check the “State” property of the switch to see if it is “Off;” if so, signal “pass” for this step.
  2. If 10 seconds pass without the state becoming “Off,” signal “fail” for this step.
  3. Continue.
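In VBScript terms, the added check is a single line after the recorded Set. The object names below are hypothetical; the timeout argument to CheckProperty is in milliseconds.

```vbscript
' Recorded by Record and Play:
MobiDevice("Device").MobiSwitch("Remember_Me").Set "Off"
' Added by hand: pass if the State property becomes "Off" within
' 10 seconds, fail the step otherwise, then continue either way.
MobiDevice("Device").MobiSwitch("Remember_Me").CheckProperty "State", "Off", 10000
```

CheckProperty is a good fit here because it writes a pass/fail step into the run results on its own, with no extra reporting code.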

Because the switch changed state as expected, a run of our script shows the “passed” status (shown in the run results above).  If the time had expired without the switch achieving the requested state, the run results would show both a failed step and a failed test case:


Another common sync action is waiting for an object to appear on screen. For example, with the test below, we wait until the app presents the search results:


We have added a couple of things to the basic Click recorded during Record and Play: first, we started a transaction timer to measure the duration of the search; we then waited for the search to complete by waiting for the search results page to display.

We did this with “WaitProperty” on the “Visible” property, looking for “True.” We added a test object (“Search_Results”) using Object Spy. It is nothing more than the MobiElement title of the results screen.  The result of running the test is shown in the run report below (the object is identified with a box):


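A sketch of the added sync and timing logic follows. Object names are hypothetical; WaitProperty’s timeout is in milliseconds, and unlike CheckProperty it simply returns True or False without reporting, so we report a failure ourselves.

```vbscript
Services.StartTransaction "Search"
MobiDevice("Device").MobiButton("Search").Click   ' recorded step
' Added: wait for the results screen title to become visible.
If Not MobiDevice("Device").MobiElement("Search_Results").WaitProperty("Visible", True, 30000) Then
    Reporter.ReportEvent micFail, "Search sync", "Results screen did not appear"
End If
Services.EndTransaction "Search"
```

Because the wait sits inside the transaction, the reported transaction time reflects how long the search actually took, not just how long the Click took.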
In looking through the remainder of the script, one sync action we took against a button, using its “visible” property, merits a comment.  Here is the code:


The “WaitProperty” we did on the “Search” button synchronizes the test with the picker that comes up to select a manufacturer.  Note that while the picker is on screen, the “Search” button is covered by the picker list (and so it cannot receive a “click”).  Waiting for it to become visible again ensures that the Click is timely.
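The shape of that sync, with hypothetical object names:

```vbscript
' Selecting a manufacturer brings up the picker, which covers the Search button.
MobiDevice("Device").MobiElement("Manufacturer").Select "Any"
' Added: wait for the picker to dismiss and the button to reappear.
MobiDevice("Device").MobiButton("Search").WaitProperty "Visible", True, 10000
MobiDevice("Device").MobiButton("Search").Click
```

Without the wait, the Click could fire while the picker is still on screen and be swallowed by the picker list.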


(Step with the picker displayed, covering the “Search” button – this is the Select “Any” method)


(Step after picker is taken down and the “Search” button is visible again)

Here is the finished test script with sync actions for each critical transition:


The net of this exercise is that Record and Play created all the objects (except one) needed for the test case, and adding a few sync operations using CheckProperty and WaitProperty equipped the script to handle any timing issues that may crop up on the device or the backend server.

Record and Play was a great help in quickly creating objects and the base script, and Trust’s sync methods were easy to add to make a truly robust test case.  Record and Play is a great start and allowed us to spend time more wisely – on device and server issues – rather than adding objects one at a time.

Not only did we automatically get the objects, correctly identified by type, but each object’s properties were recorded automatically as well.  When used to best advantage, Record and Play can put the pedal to the metal for test-case velocity.


Michael Ryan

Michael Ryan serves as Mobile Labs’ chief technology officer. In this role, Ryan provides the technological vision and drives Mobile Labs Trust’s product road map. Ryan has more than 35 years of experience in leading software development teams that design and build robust and market-leading solutions for large-scale enterprise customers among Fortune 1000 companies. Most recently, Ryan was with Fundamental Software where he worked on large-scale systems CPU emulation architecture, design, and implementation. Prior to Fundamental Software, Ryan was director of development, Sr. VP of R&D, and finally, Chief Technical Officer for CASE tool vendor KnowledgeWare, Inc. Ryan served as senior staff systems engineer, field manager, and regional technical support manager for mainframe manufacturer Amdahl Corporation.
