

Omniata Blog | Jim Ewel | December 8, 2015

A Framework for High Tempo Testing

What if you could double or triple your growth rate in a systematic, predictable way? That’s the promise of high tempo testing. It requires planning, hard work and the right tools, but the results can be nothing short of amazing.

What is High Tempo Testing?

High tempo testing is based on the premise that a cycle of rapid testing, rapid learning and continuous improvement is the most effective way to grow a business. Running 3 tests a week, for example, is more likely to lead to growth than running 1 test per week. You learn more, and learn more quickly, by increasing the tempo of your testing. The kinds of tests performed during high tempo testing may include analyzing multiple acquisition channels, introducing new features, tuning the user experience, engaging users through email campaigns, or running A/B and multivariate tests.

There are several growth studies that support this premise, including one looking at the growth of GrowthHackers.com and one looking at user growth at Twitter. You can also read an overview of how to implement high tempo testing at my Agile Marketing blog.

Stage Zero - Data Analysis and Visualization

Before you rush off and begin high tempo testing, look at the data you already have, both for insight and for ideas about what to test. You will need a data analysis platform to slice and dice your data in various ways and to visualize what is and isn’t working. Using Omniata, for example, you could begin by analyzing signups and retention, and then look at conversions and lifetime value (LTV). You might want to answer questions like “Which publishers are responsible for your highest number of acquired users? Which publishers are responsible for higher retention and/or conversion rates?”

As an illustration, the visualization below looks at several stages of the user experience by acquisition source. In a unified dashboard, you can see the percentage of users that complete each step of your user experience, and see that one acquisition source significantly outperforms the others in the percentage of first-time users that reach the final step, Create Content.
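To make that analysis concrete, here is a minimal sketch of how you might compute per-source funnel completion rates from raw event data. It is not Omniata-specific; the step names and record fields are assumptions for illustration only.

from collections import defaultdict

# Hypothetical funnel steps; replace with the steps of your own user experience.
FUNNEL_STEPS = ["signup", "complete_profile", "first_session", "create_content"]

def funnel_by_source(events):
    """events: iterable of dicts like {"user": "u1", "source": "Organic", "event": "signup"}."""
    reached = defaultdict(lambda: defaultdict(set))  # source -> step -> users who reached it
    for e in events:
        if e["event"] in FUNNEL_STEPS:
            reached[e["source"]][e["event"]].add(e["user"])

    report = {}
    for source, steps in reached.items():
        entered = len(steps[FUNNEL_STEPS[0]]) or 1  # guard against division by zero
        report[source] = {
            step: round(100.0 * len(steps[step]) / entered, 1) for step in FUNNEL_STEPS
        }
    return report

sample = [
    {"user": "u1", "source": "Organic", "event": "signup"},
    {"user": "u1", "source": "Organic", "event": "create_content"},
    {"user": "u2", "source": "Paid", "event": "signup"},
]
print(funnel_by_source(sample))  # percentage of each source's signups reaching each step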

Two of the most important questions to keep in mind during this stage are “What can I answer with the data I have?” and “What don’t I know?” Answering these questions and discovering gaps can lead to ideas for testing in subsequent stages.

As you analyze your existing data, you may find that you need to unify multiple data sources in order to answer your most important questions. Your data analysis platform should make it easy to generate reports and visualizations from all of your data sources.

Stage One - Unbridled Ideation

You might refer to this step as “brainstorming”. Invite everyone in the company to participate in unbridled ideation. People from different backgrounds and with different skill sets may bring a different perspective to what should or could be tested.

Each test is written as a hypothesis: if we do X, then Y will happen. For example: if we move the signup box from the bottom of the page to the top of the page, signups will increase by more than 10 percent. It is very important that the hypothesis be written as an “if-then” statement; stating a hypothesis in this fashion establishes a clear, provable thesis.

Each test will have a target lever and a key metric.

  • The target lever is whatever you are changing. In the example above, we are moving the signup box from the bottom of the page to the top; that is our target lever.

  • The key metric is what we measure and how we expect it to change. Again, in the example above, we expect signups to increase by more than 10 percent; that is our key metric.

You may want to use tools like Trello, MeisterTask, Pivotal Tracker or even a spreadsheet to track your tests. Get as many ideas as possible during this stage.
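Whatever tool you choose, it helps to capture every idea in the same structured form. Below is a minimal, hypothetical sketch of one test idea as a record; the field names are my own, not a prescribed format, and a spreadsheet row or Trello card carries exactly the same information.

from dataclasses import dataclass

# Hypothetical record for one high tempo test idea; field names are illustrative only.
@dataclass
class TestIdea:
    hypothesis: str    # written as an if-then statement
    target_lever: str  # what you are changing
    key_metric: str    # what you measure and how you expect it to change
    status: str = "backlog"

signup_box_test = TestIdea(
    hypothesis="If we move the signup box from the bottom of the page to the top, "
               "signups will increase by more than 10 percent.",
    target_lever="Position of the signup box",
    key_metric="Signups (expected to increase by more than 10 percent)",
)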

Stage Two - Prioritize the Backlog

The process for prioritizing the high tempo testing backlog is similar to the process for prioritizing feature requests in Agile.

First, every idea should be given a score from 1 to 10 along three dimensions:

• I equals Impact (what do you expect the impact of the change to be on the metric that you’re measuring). A score of 1 represents low impact; 10 represents high impact.

• C equals Confidence (how confident are you that the change is going to have the impact you describe). A score of 1 indicates that you are not very confident that the change will have the desired impact; a score of 10 indicates that you are very confident.

• E equals Ease (how easy or hard is it to perform the test). A score of 1 indicates that it is relatively difficult to perform the test; a score of 10 indicates that it is easy to perform the test.

Once each idea is given a score from 1 to 10 along each of these dimensions, a composite score is calculated as the average of the three. The best ideas obviously have high impact, high confidence and are easy to implement, but of course no idea is likely to be high on all dimensions.


In the above example you would probably test the second idea, “Simplify initial orientation”, first, as its composite score of 6.56 is higher than the composite score of the idea above it.
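As a quick illustration of the scoring arithmetic (the idea names and scores below are invented, not the ones from the table above), the composite is simply the mean of the three 1-to-10 scores, and sorting by it gives you the test order:

# Composite ICE score = average of Impact, Confidence and Ease, each scored 1-10.
def composite(impact, confidence, ease):
    return round((impact + confidence + ease) / 3, 2)

# Hypothetical backlog entries: (idea, impact, confidence, ease).
backlog = [
    ("Idea A: move signup box to top of page", 8, 5, 4),
    ("Idea B: simplify initial orientation", 7, 6, 7),
]
for name, i, c, e in sorted(backlog, key=lambda row: -composite(*row[1:])):
    print(f"{composite(i, c, e):5.2f}  {name}")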

Stage Three - Set Up Your Data Sources and Implement Your Tests

If you don’t already have a well-developed process for pulling, compiling, analyzing, and feeding data into your engagement tools, you should develop one before proceeding.

Many high-growth companies buy a customer data marketing platform rather than attempt to build one from scratch. These platforms provide out-of-the-box capabilities to pull data from multiple sources and to feed that data into sophisticated engagement tools.

As an example of how a customer data marketing platform can enable high tempo testing, let’s look at the major areas where Omniata can help you automate and execute your tests.

User Acquisition

Most high tempo tests fall into one of two categories: “pings” and “optimizations”.

The first category can be illustrated by the question “What are my best sources of new users?” Borrowing an analogy from the board game “Battleship”, these tests are known as “pings”: your first job is to locate the ships (ping them) before destroying them.

For example, in the table below, you can see that acquisition from Organic sources is your single largest source of new users.

The second category, known as “optimizations”, can be illustrated by the question “Once I acquire a user, how can I maximize the revenue or profitability of that acquired user?”

For example, in the charts below, we can see that Gross Lifetime Value (LTV) and Average Revenue per User (ARPU) increase dramatically for users who engage on a second day, and that both continue to increase through 28 days. The second chart illustrates that Gross LTV 360 comes from many different acquisition sources, and that some contribute more than others.
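As a rough sketch of the arithmetic behind those charts (the revenue figures below are invented), ARPU for a given day is the cohort’s revenue that day divided by the number of acquired users, and a gross LTV curve is simply cumulative ARPU by day since acquisition:

# Illustrative only: ARPU per day and a cumulative gross LTV curve for one cohort.
def ltv_curve(daily_revenue, user_count, horizon_days):
    """daily_revenue: dict of day-since-acquisition -> total revenue from the cohort that day."""
    ltv, curve = 0.0, []
    for day in range(horizon_days):
        arpu = daily_revenue.get(day, 0.0) / user_count  # ARPU for that day
        ltv += arpu
        curve.append((day, round(ltv, 2)))
    return curve

# A cohort of 1,000 acquired users with made-up revenue tapering off over the first week.
revenue = {0: 500.0, 1: 400.0, 2: 300.0, 3: 250.0, 4: 200.0, 5: 150.0, 6: 120.0}
print(ltv_curve(revenue, user_count=1000, horizon_days=7))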

Campaigns for User Engagement

Once you understand your user acquisition sources, you will probably want to optimize them through user engagement. The question is, how can you leverage user engagement in a way that is not only automated, but also customized to individual users?

Many campaigns, particularly those using marketing automation tools, take a shotgun approach. Campaigns are automated, but they are not user-specific, which makes it challenging to optimize based on user behavior or on your insights into what works and what doesn’t.

For example, marketers looking to improve retention rates and lifetime value may want to deploy a campaign to re-engage inactive users whenever a new version or a new feature is made available.

Let’s say that, from previous tests, you know that notifying users of feature updates leads to a certain percentage of them re-engaging with the product. But you hypothesize that if you tailor the messaging to feature updates relevant to specific segments of users, then you might get even better results.

Let’s say you’ve identified a group of users that had a hard time finalizing a purchase in your application due to a bug. Typically, you would have informed your entire customer base of an update that fixed the issue and crossed your fingers, hoping that your intended audience saw the update. However, with a platform like Omniata, you’re able to identify the exact set of users who experienced the issue and engage them, via push or email, letting them know that updates have been made, and that they have “2 items in cart waiting to be delivered”.
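The mechanics of that kind of targeted campaign might look roughly like the sketch below. This is not Omniata’s API; the segment filter, the message, and the send function are hypothetical placeholders for whatever your engagement platform provides.

# Hypothetical sketch: select only the users who hit the checkout bug and still
# have items in their cart, then notify that segment. Names and data shapes are
# placeholders, not a real platform API.
def select_affected_users(users):
    """users: iterable of dicts with 'id', 'hit_checkout_bug' and 'cart_items'."""
    return [u for u in users if u["hit_checkout_bug"] and u["cart_items"] > 0]

def send_push(user_id, message):
    # Stand-in for a real push or email call in your engagement tool.
    print(f"push -> {user_id}: {message}")

def run_reengagement_campaign(users):
    for u in select_affected_users(users):
        send_push(u["id"], f"Checkout is fixed - you have {u['cart_items']} items in your cart waiting to be delivered.")

run_reengagement_campaign([
    {"id": "u1", "hit_checkout_bug": True, "cart_items": 2},
    {"id": "u2", "hit_checkout_bug": False, "cart_items": 1},
])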

With the right tools in place, you can:

• Take action (send an email or push notification of a customized offer) based on customer data and customer segmentation

• Guide the user through multiple user experience flows across all platforms (mobile, desktop, etc.), and test which flows deliver the highest levels of user engagement

• Personalize the user experience based on user attributes. For example, depending on a user’s skill, spend type, or engagement level, you may sequence their exposure to certain features of your game, product or service, and test the resulting engagement.

A/B and Multivariate Testing

One of the best ways to optimize a particular experience to move a particular metric is through A/B and multivariate testing. However, A/B and multivariate tests can be hard to implement. Marketers aren’t typically statisticians, and don’t necessarily have the skills to run and/or analyze these kinds of tests.

Omniata takes the complexity out of A/B and multivariate tests by allowing the marketer to implement them with just a few clicks, enabling you to:

• Analyze the impact on ALL of your metrics, not just the target event

• Test experiences at the user level and across channels and platforms

In the example above, we’re looking at changes in Average Revenue Per User (ARPU), tested across 3 different user experiences.

Notice that Omniata reports standard statistical measures like Standard Error, Standard Deviation, and Z-Score. Omniata also makes these reports easy to interpret, with red/green coding of both the direction of change of the target metric (ARPU, in this case) and the confidence level of the difference.
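For readers curious about what sits behind those columns, here is a minimal sketch of the calculation for a per-user metric like ARPU: the standard deviation and standard error of each variant, and a two-sample z-score for the difference in means. The revenue samples are invented for illustration.

import math

def mean_std_se(values):
    """Return the mean, sample standard deviation and standard error of a sample."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / (n - 1)
    std = math.sqrt(variance)
    return mean, std, std / math.sqrt(n)

def z_score(control, variant):
    """Two-sample z-score for the difference in means between variant and control."""
    mean_c, _, se_c = mean_std_se(control)
    mean_v, _, se_v = mean_std_se(variant)
    return (mean_v - mean_c) / math.sqrt(se_c ** 2 + se_v ** 2)

# Invented per-user revenue for two experiences; a larger |z| means higher
# confidence that the difference in ARPU is real.
control = [0.0, 0.0, 1.99, 0.0, 4.99, 0.0, 0.0, 1.99]
variant = [0.0, 1.99, 1.99, 0.0, 4.99, 0.99, 0.0, 4.99]
print(round(z_score(control, variant), 2))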

Stage Four - Capture the Learning

This step may seem unnecessary or obvious, but it’s important to meet every week or two to review the results of the tests and agree upon what you learned. Different people can look at the same data and come to very different conclusions. Make sure you and your team agree on the specific learnings.

Do It All Over Again

One of the most powerful aspects of high tempo testing is that it is iterative. What you learn one week leads you to test something else the next week. It is through this iteration and continuous improvement that companies create remarkable customer experiences.

Conclusion

I hope I’ve convinced you to try out high tempo testing. I understand it can be daunting to increase the pace of your testing three-fold, five-fold or more, but integrated, automated tools like Omniata can make it possible to execute high tempo testing without adding extra staff or working longer hours.

Please feel free to contact us for more information on how Omniata can help you with high tempo testing.

About Jim Ewel

Jim Ewel is an entrepreneur, a growth CEO and one of the early proponents of Agile Marketing. Jim spent 12 years at Microsoft in sales and marketing, including positions as the General Manager of SQL Server marketing and the Vice President of Windows Servers.


After leaving Microsoft in 2001, Jim became Chairman and CEO of GoAhead Software, a startup that provided high-availability software to the telecom and defense industries and that was sold to Oracle. From 2009 to early 2011, he was CEO of Adometry, a startup whose products provide tools to optimize online display advertising. Adometry was sold in February of 2011 to Click Forensics, which was in turn sold to Google. From 2013 to 2015, he was the CEO of InDemand Interpreting, a provider of language interpreting services to hospitals, clinics and medical practices.

He blogs about Agile Marketing at http://www.agilemarketing.net
