Blog

Capture Test Evidence with Screenshots

Toffee now supports automated capture of screenshot test evidence. Issue simple commands like “take a screenshot” or “enable automated screenshot capture” and Toffee will capture screenshots of your remote testing desktop, store them securely in the cloud, and publish them online, with your test results, to your campaign stakeholders.

To celebrate this milestone, we’re offering a 50% subscription discount for 12 months to the first 25 respondents. More info at the end of this post.

What makes Toffee’s screenshot capabilities unique?

Continue reading “Capture Test Evidence with Screenshots”

Announcing Collaboration with Teams and Campaigns

With the latest release of Toffee, you can collaborate on your functional testing projects with coworkers, whether they’re in the office next door or on the other side of the world. You can also save your test results to the cloud, so that you can share them with your clients, stakeholders, and developers.

Collaboration requires you to:

  1. Purchase a subscription for a team.
  2. Invite your coworkers to join your team.
  3. Create test campaigns and assign team members to them.

Let’s explore these new terms in a bit more detail. Continue reading “Announcing Collaboration with Teams and Campaigns”

Realizing Continuous Delivery for GxP Applications

Is validation of GxP applications compatible with continuous delivery (CD) of enhancements and patches? Can users reap the benefits of lower time to market? Yes, but only if we apply the techniques of CD to break the acceptance testing bottleneck.


“I am sorry, but productivity enhancements don’t justify the cost of a new release.”

“Sure, our two-year-old version is out of support. It’s cheaper to pay the vendor for custom support than it is to validate a new version, and to re-validate all the downstream applications.”

“The security risks of the old version are manageable for a few months. The business is tied up with other priorities at the moment, and we can’t spare anyone for acceptance testing the new version.”

Sound familiar? We who build, install, and test applications regulated by Good Manufacturing/Laboratory/Clinical Practice (GxP) guidelines contend with higher costs of deployment than our friends in other industries. Before we deploy a new release we must perform acceptance testing to verify its ability to consistently meet our documented requirements. We must often repeat this testing for configurations at multiple sites. We must plan, execute, document, and show evidence of this testing to comply with our documented procedures and with applicable regulations. This acceptance testing is ultimately the responsibility of the customer, not the vendor. Continue reading “Realizing Continuous Delivery for GxP Applications”

Teaching Patience to Automated Tests

Have your automated test steps ever outrun the web application they are testing? Imagine a test that enters an invalid password on a login form, clicks the “Sign In” button, and immediately tests for the presence of an “Invalid username or password” message. If the test does not find that message when it looks for it, it will fail – even if the application displays the message a few milliseconds later.

Oddly enough, the slower execution speed of manual test steps is an advantage here, as is the human capacity for patience. Humans barely notice the milliseconds it takes the server to reject those faulty credentials. If they notice anything, it will be the browser’s visual cues – hourglasses, spinning circles – counseling patience.

The trick, then, is to teach automated tests to be patient.

One naive but common solution is to scatter fixed pauses at strategic points in your tests. Pause your test execution for four or five seconds at the hot spots, to let the web application catch up. Inevitably:

  • For the vast majority of runs, the pause is far longer than needed, meaning your runs take much longer than they need to;
  • For an (annoyingly large) minority of runs, the pause is not long enough, meaning your test has a not insignificant failure rate, even when the application is functioning as expected.

Fixed pauses produce tests that are simultaneously unbearably slow and unacceptably flaky – a most unhelpful combination.

A better solution is to pause only until the expected condition obtains, or until a maximum acceptable timeout expires – whichever comes first. Unlike fixed pauses, these “smart waits” go as fast as the application under test does.
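The polling loop behind a smart wait is simple to sketch. The Python below is an illustrative stand-in, not Toffee’s actual implementation: check the condition at a short interval, return as soon as it holds, and give up only when the timeout expires.

```python
import time

def wait_until(condition, timeout=15.0, interval=0.1):
    """Smart wait: poll `condition` until it holds or `timeout` seconds elapse.

    Returns True as soon as the condition holds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()  # one final check at the deadline

# A condition that starts false and "appears" after 300 ms,
# like the error message in the login example above.
start = time.monotonic()
message_visible = lambda: time.monotonic() - start > 0.3

t0 = time.monotonic()
assert wait_until(message_visible, timeout=5.0)  # passes in roughly 0.3 s
assert time.monotonic() - t0 < 1.0               # nowhere near the 5 s cap
```

Because the loop returns the moment the condition holds, a generous timeout costs nothing on the happy path; it only matters when something is genuinely wrong.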

Smart Waits in Toffee

Toffee implements these smart waits in several commands:

wait 30 seconds for pages to load – sets the maximum amount of time to wait for pages to finish loading. Feel free to use large timeouts here: if it only takes 2 seconds to load the page, this command will only take 2 seconds. This command is “sticky” – that is, it applies to all subsequent page loads in your workspace (until you clear it) or your script (until it completes).

wait 15 seconds for commands to execute – sets the maximum amount of time to wait for an element that is not yet present. This command is also sticky. Again, feel free to use large timeouts, but be aware that negative existence tests will use the full timeout. For example, if you issue the following command sequence:

wait 60 seconds for commands to execute
test that element with id "errorMessage" does not exist

The second command will take 60 seconds (!!) to complete.

wait 15 seconds until button "Archive" exists – an example of the wait…until command, which waits up to a maximum duration for some condition to obtain. You can test whether an element exists (or not), is visible (or not), or is enabled (or not). This command is not sticky: the timeout applies only to this command. It returns as soon as the condition obtains – even negative existence – affording finer-grained control over timeouts than the wait <duration> for commands to execute command.
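The difference in timeout behavior comes down to what the runner polls for. The Python sketch below is a hypothetical polling loop, not Toffee’s implementation, but it shows why a negative existence test under a sticky command timeout consumes the whole timeout, while a wait…until on absence returns immediately.

```python
import time

def poll(condition, timeout, interval=0.05):
    """Return True as soon as `condition` holds; False once `timeout` expires."""
    deadline = time.monotonic() + timeout
    while True:
        if condition():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

page = set()  # ids of elements currently on the (simulated) page; empty here

# "test that element ... does not exist" under a sticky command timeout:
# the runner keeps polling for the element to APPEAR, so its absence only
# becomes a pass after the full timeout expires.
t0 = time.monotonic()
found = poll(lambda: "errorMessage" in page, timeout=1.0)
assert not found
assert time.monotonic() - t0 >= 1.0   # consumed the whole timeout

# "wait ... until element ... does not exist": the absence itself is the
# condition, so it obtains on the first poll and returns immediately.
t0 = time.monotonic()
gone = poll(lambda: "errorMessage" not in page, timeout=1.0)
assert gone
assert time.monotonic() - t0 < 0.5
```

This is why wait…until is the better fit when you expect an element to be absent: the absence is the success condition, not a timeout.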

Get Started Today!

Does your web test automation solution support smart waits? If not, join Toffee today. For more information about Toffee’s “smart waits” see the Toffee documentation.

Testing Push Notifications

Have you ever had to test that the actions performed by one user yield the expected real-time impact on a second user? For example:

  • If one user sends an online chat request to another, does the recipient receive an immediate notification?
  • If a user approves a document controlled by a workflow, does the user responsible for the next step in that workflow immediately see the document appear in their inbox?
  • If an administrator revokes a user’s access, is that user’s session cut off immediately?

If you have to test against requirements like these, a testing tool that can only control one browser session at a time won’t cut it.

Toffee allows you to control multiple browser sessions at once: open sessions in multiple browsers, assign each one a name, and switch between them using the switch to session command.

The animated GIF below demonstrates Toffee testing a chat app. The client on the top right is Chrome; the one on the bottom right is Firefox.

Ping Pong Chat Test
Testing multiple browser sessions in a simple chat application.
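The test pattern behind a ping-pong check like this is: act in one session, then smart-wait in the other session for the effect. Here is a sketch of that pattern in Python, using a toy in-process chat backend; ChatServer, Session, and wait_until are all invented stand-ins for illustration, not Toffee APIs.

```python
import time

class ChatServer:
    """Toy in-process chat backend standing in for the application under test."""
    def __init__(self):
        self.inboxes = {}

    def register(self, user):
        self.inboxes[user] = []

    def send(self, sender, recipient, text):
        self.inboxes[recipient].append((sender, text))

class Session:
    """Stands in for one named browser session (e.g. Chrome vs. Firefox)."""
    def __init__(self, server, user):
        self.server, self.user = server, user
        server.register(user)

    def send_chat(self, recipient, text):
        self.server.send(self.user, recipient, text)

    def inbox(self):
        return self.server.inboxes[self.user]

def wait_until(condition, timeout=5.0, interval=0.05):
    """Smart wait: poll until the condition holds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while not condition():
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)
    return True

server = ChatServer()
ping = Session(server, "ping")   # think: "switch to session ping"
pong = Session(server, "pong")   # think: "switch to session pong"

ping.send_chat("pong", "hello")  # act in the first session...
# ...then smart-wait in the second session for the real-time effect:
assert wait_until(lambda: ("ping", "hello") in pong.inbox())
```

In a real two-browser test, the sessions would be driven by the testing tool rather than a toy object, but the act-then-wait shape of the assertion is the same.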

Modern web apps push notifications from one user to another all the time. Can your testing tool keep up? If not, sign up for your free Toffee Composer account today.

Feature Release: Include, Alias, and Variable Enhancements

The most recent release of Toffee Composer includes several enhancements that improve its usability and extend its testing reach.

Parameterized include statements

The include command already allows you to include the contents of one script inside another. We have added a where clause to the include statement that allows you to pass variables into the script, e.g.

include script "Log in to Composer" where browser is "chrome" and username is "khoerr@toffeetesting.io"
include script "Rotten Potatoes" where movieTitle is "TMNT"

See our “Understanding Scripts” documentation for more information.

The real payoff for this feature is what it means for aliases. Previously, you could alias only single commands; now, you can alias a sequence of commands – that is, your script – as follows (continuing the examples above):

log in as khoerr@toffeetesting.io using chrome
search Rotten Potatoes for movie TMNT

This feature hides the complexity of multiple test steps behind a single statement, making your tests even more readable, accessible, and robust.

Saving variables off the glass

Often your application under test (AUT) will generate values that your tests cannot anticipate. A common example is unique identifiers generated from a sequence or a randomizer. Although you as a tester have no control over the contents of these variables, you still need to save and use them in your tests to reference whatever it is that those identifiers identify.

We have added two new commands that allow you to save values “off the glass” and store them in variables. The first, “save value of,” reads the value of any locator-addressable element, such as a text box, and stores it in a variable. Example:

save value of textbox with id "studyId" as newStudyId

The second, “save attribute,” is more general: it saves the value of any attribute of any locator-addressable element. Example:

save attribute "href" of link "Documentation" as DocLink
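Outside Toffee, the same idea, reading a server-generated value off the page and keeping it in a variable for later steps, can be sketched with Python’s standard-library HTML parser. The markup, element ids, and URL below are hypothetical samples, not output from a real application.

```python
from html.parser import HTMLParser

# Markup the application might render, with a server-generated id we
# cannot predict ahead of time (hypothetical sample, not a real page).
PAGE = ('<input type="text" id="studyId" value="STU-20240117-042">'
        '<a id="docs" href="https://example.invalid/docs">Documentation</a>')

class AttributeSaver(HTMLParser):
    """Collect the attributes of id-bearing elements for later test steps."""
    def __init__(self):
        super().__init__()
        self.saved = {}

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "id" in attrs:
            self.saved[attrs["id"]] = attrs

parser = AttributeSaver()
parser.feed(PAGE)

# The rough equivalents of Toffee's two commands:
new_study_id = parser.saved["studyId"]["value"]  # save value of textbox ...
doc_link = parser.saved["docs"]["href"]          # save attribute "href" of link ...

assert new_study_id == "STU-20240117-042"
assert doc_link == "https://example.invalid/docs"
```

Once captured, the variable can drive later steps, such as navigating to the saved link or searching for the saved study id.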

For more information, please see our “Save and Use Variables” documentation. You must download and install version 723 or higher of Toffee Performer to use these commands.

Other enhancements

  • From the script editor, you can now convert a native Toffee command in a script to an alias. Just click the “Convert to alias” icon (an eye) for that command, enter an alias, and save.
  • From the Composer Workspace, all available aliases are now included in the auto-suggest list. (To make an alias available from the workspace, you must include a script that defines the alias.)
  • Toffee Performer release 723 includes the most recent drivers for Chrome, Firefox, and Internet Explorer.

If you have not yet joined the Toffee Early Access program and created your free Toffee Composer account, get started today!

Augmenting Exploratory Testing with Automation

My friend Brian spends his vacations exploring and mapping caves all over the world. He is part of a global community of like-minded enthusiasts who plan and execute multi-day trips miles underground, in the hopes that they might be the first people in history to lay eyes on a new cavern, pool, or rock formation. They take sophisticated mapping equipment that records their progress and the shape and size of the passages they traverse. Back at the surface, they upload this data for the benefit of future expeditions, along with notes about where they found water and where they slept. They also share updates on the state of improvements to the route: ladders, ropes, and other cached equipment and supplies. These include improvements they themselves added to the route.

rope-descent
Copyright 2017 by Brian Louden. All rights reserved. Used by permission.

Every improvement to the route gets the next expedition to the frontier faster. They can travel lighter by relying on cached equipment and supplies, and faster by knowing which paths present the fewest obstacles. By leaving their own improvements to the route and documenting their steps, they extend the exploratory reach of those to follow, and push the frontier deeper.

Brian is also my colleague at KSM Technology Partners. He writes software to analyze telemetry data from the power grid, paired with the known locations and output of generators for a given point in time, to drive the settlement of complex electricity markets. The state of the system at any given point on the grid comprises hundreds of variables: the proximity of generation and load, line ratings, circuit breakers, and known outages, to name a few. Exploratory testing requires reproducing the state of that system over some subset of variables the tester deems important to control, then fiddling with additional variables to see what breaks. Reproducing the state of the system – or test setup – to enable meaningful exploration takes a long time to perform manually.

Exploratory testing without automation is like exploring caves the first time every time, without benefit of route improvements.  We testers waste a lot of time if we manually execute setup drudgery to get to the frontier, where discovery requires the insight and judgment of a human tester.

And make no mistake, human judgment is required: just as ladders and ropes can’t map out the next unexplored cavern, automation cannot teach us anything about the (mis)behavior of software under unexplored conditions. As testers, exploratory learning is still our job. But by consigning test setup to automatons, we can extend our exploratory reach and push the frontier farther, faster.

The test doesn’t find the bug. A human finds the bug, and the test plays a role in helping the human find it.

Pradeep Soundararajan, H/T to Michael Bolton