Experiments & Experiences

Experiments are the legacy name for Campaigns and are still supported. An Experiment assigns users that are part of a Campaign to different user Experiences, based on the content delivered. Each Experiment (Campaign) contains Experiences (i.e. content) that allow statistical testing between different types of user experiences. By assigning users to these Experiences, you can measure the effect of each user experience (i.e. content) against a “control” Experience, which lets you identify the most effective "variant" Experience.

Technically speaking, an Experiment is a way to deterministically assign a user to an Experience, given the distribution of the Experiences. Once a user is assigned to an Experience, Omniata stores the mapping for later use, for example in detailed analysis in the dashboards. In practice, an Experience normally modifies some aspect of the end-user experience, for example by modifying existing functionality, adding new functionality, or removing functionality. A classic example is changing the price of a virtual item. A user never explicitly knows whether they are part of an Experiment, i.e. the Experiment is transparent to the user.

Experiments can be created in Engagement > Experiments.

The Actions on this page are:

  1. Edit an Experiment
  2. Lock an Experiment
  3. View the Content and Messages associated with an Experiment
  4. View the results of an Experiment
  5. Delete an Experiment

You can also create a new Experiment.

Creating Experiments

Navigate to Engagement > Experiments and click New Experiment, which opens the following screen:

The elements of the New Experiment are:

  1. Name: This is the name of your Experiment.
  2. Experiment Type: This can be display, email, or notification.
  3. Tags: These tags will allow you to classify and group your experiments.
  4. Salt (used for linking Experiments)
  5. Maximum Total # Of Participants: The maximum number of participants in the Experiment, i.e. the sum of the maximum number of participants across all the Experiences of the Experiment. When the maximum has been reached, new users cannot enter the Experiment. Note: this number is not exact; for technical reasons, the actual maximum number of users that Omniata lets enter the Experiment can differ from the configured value, but it should be of roughly the same scale as the configured value.
  6. New Users Only: This flag restricts the Experiment to new users only. These users will continue to be part of the Experiment from that moment on, until the Experiment is over.
  7. Start Experiment on: This is the date and time when your Experiment starts.
  8. End Experiment on: This is the date and time when your Experiment ends.
  9. Experiences
    • Define a name for the Experience
    • Define a % of users that will get exposed to that Experience, expressed as a % with 2 decimal precision (i.e. for 33.33% enter "33.33"). Do not enter the % symbol. The input already assumes that you are entering values as %.
    • Set one of the Experiences to be the "Control" group. This Experience acts as a "benchmark" that the results of the other Experiences are compared against (e.g. "Experience B shows a 25% improvement in ARPU compared to the Control Experience").

    Note: An Experience can be attached to multiple Messages, shown to users across multiple channels. It is not limited to a single Message or piece of Content. For instance, you can measure the effect of a new style of icons, displayed to 10% of users across mobile and web, and still treat that as a single Experience.
    Note: It’s highly recommended not to modify the Experience “% Users” value while the Experiment is running; doing so will cause problems when analyzing the statistical results. In particular, if the Experiment runs in parallel with other Experiments, interpreting the results will become very difficult.
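As a quick sanity check when defining Experiences, the percentages should add up to 100.00 at two-decimal precision. A minimal sketch; the Experience names and values below are illustrative, not taken from a real Experiment:

```python
# Illustrative sanity check for Experience percentages (two-decimal precision).
# The names and values are examples only.
experiences = {
    "Control": 33.34,
    "20% off in diamonds": 33.33,
    "sound on": 33.33,
}

total = round(sum(experiences.values()), 2)
assert total == 100.00, f"Experience percentages sum to {total}, expected 100.00"
```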

Setting Up Experiments

Running Experiments can be a delicate process. Any misconfiguration can result in invalid data that defeats the purpose of the Experiment. For that reason, it is recommended to follow the process described below:

  1. Create Experiment and Experiences
  2. Publish Content in Channels, and associate the Messages with the corresponding Experience
  3. Enable in local environment and test
  4. Review checklist
  5. Enable in production and test
  6. Lock the Experiment
  7. Analyze the results
  8. Conclude the Experiment

Create Experiment and Experiences

Recommended practices:

  • Define (if possible) groups of the same size (e.g. 4 groups of 25%). This will help you make direct comparisons when looking at results.
  • Do not create more than 8 groups in total, to make sure there are enough users in each Experience.
  • Use descriptive names for the Experiences. Instead of A, B or C, use "20% off in diamonds" or "sound on".

Publish Content in Channels

This is the most delicate step. Channels display Content to users. When you publish Content in a Channel, you create a "Message". This Message references the published Content and defines the rules that determine whether that specific Content is shown to a given user. Think of a Message as an envelope with written rules about who should be able to access the Content: "high spenders only", "people that belong to Experience B only", "Wednesdays only". A Channel is a collection of Messages with rules for displaying Content.
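The envelope analogy can be sketched in code. The rule shapes below are illustrative assumptions, not Omniata's actual Message schema:

```python
# Illustrative model of a Message: published Content plus targeting rules.
# The rule predicates are examples, not Omniata's actual rule schema.
def matches(message, user):
    """A Message is shown to a user only if all its rules pass."""
    return all(rule(user) for rule in message["rules"])

message = {
    "content": {"title": "Diamond sale"},
    "rules": [
        lambda u: u.get("experience") == "B",     # "Experience B only"
        lambda u: u.get("total_spend", 0) > 100,  # "high spenders only"
    ],
}

user = {"experience": "B", "total_spend": 250}
print(matches(message, user))  # True: this user passes both rules
```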

Enable in Local Environment and Test

The most effective way to test is to call the Channel API directly, using development keys and the user id being tested. This emulates how your application will retrieve content for a user. The format of the API is the following:


[api key]: Found at Data > API Keys. Be sure to select the API key associated with the environment you are testing on.

[user id]: Your test user id

[channel id]: The channel Id. You can find it in Engagement > Channels

The result is the content that would be delivered to that user, in JSON format.
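Such a call could be scripted as in the sketch below. The endpoint URL and query parameter names here are placeholder assumptions (substitute the actual API format); only the three bracketed parameters come from this article:

```python
# Sketch of calling the Channel API with development keys.
# The URL template below is a placeholder assumption -- replace it with
# the actual API format. Only api_key, user_id, and channel_id are real
# parameters described in this article.
import json
import urllib.request

def build_channel_url(api_key: str, user_id: str, channel_id: int) -> str:
    # Placeholder host, path, and parameter names.
    return (
        "https://api.example.com/channel"
        f"?api_key={api_key}&uid={user_id}&channel_id={channel_id}"
    )

def fetch_channel_content(api_key: str, user_id: str, channel_id: int) -> dict:
    """Return the content that would be delivered to this user, as JSON."""
    url = build_channel_url(api_key, user_id, channel_id)
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

# Example (would perform a real HTTP request):
# fetch_channel_content("DEV_API_KEY", "test-user-123", 42)
```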

In order to validate aspects like targeting, you can access the state of all the user attributes in real time using the User Browser. Go to Users > User Browser, enter a user id, select the corresponding environment and project, and you will obtain the data of that user.

Review Checklist

This is the final checklist you should always validate before putting an Experiment live:

  • All Experiences, including the control group, must have Content associated: Go to Experiments > [Your Experiment] > Content to display all the Content associated with that Experiment and the Channels it is published in.
  • Check the Channel configuration: Go to Channels, select Edit on the Channel associated with your Experiment, and check that Max # of messages is set to a value high enough for the amount of Content published in that Channel. Also make sure that the message ordering is set to "by priority" instead of "randomly".
  • Be sure that the targeting rules are the same across all published content. Go to Experiments > [Your Experiment] > Content and click Edit on each message.
  • Check API keys: Make sure that your application is set to use the API keys associated with the environment: production keys for requesting content from production, development keys for requesting content from development. The API keys used when tracking events and when calling the Channel API must be the same in order to correctly track the behavior of the users exposed to an Experience.
  • Check the schedule: Double check the time when you would like to finalize the Experiment, and any time-related restrictions configured both in the Experiment and in the individual messages.
  • Check the % of users in each Experience: It is common (but strongly discouraged) to tweak percentages during testing. Be sure the values match what you have defined for production.

Enable Experiment in Production

The procedure is the same as described in step 3 (Enable in local environment and test), but using production API keys.

Lock Experiment

If all looks good, go to Experiments and select "Lock" for your Experiment. This prevents anyone from changing the % of users that belong to an Experience and thus invalidating the test.

Analyze the results

You'll find standard reports with statistical significance in Experiments > [Your Experiment] > Results, updated daily after 00:00 UTC. You can also build custom reports by adding the dimensions Experiment Name, Experiment Id, Experience Name, and Experience Id to any reporting table you'd like to use on the Analyzer side. Note: all four dimensions are required in order to get valid results.

Conclude Experiment

The experiment will conclude automatically when the end date is reached.

Using Salt

Salt is an alphanumeric value that allows you to keep an Experiment independent from other Experiments, or to create parallel Experiments. The Salt can only be set once, at the moment you create an Experiment. Omniata automatically proposes a Salt value for a new Experiment; if you don’t have any reason to modify it, use the default value. It’s safer not to modify the value manually (unless needed) so that you don’t accidentally use the same value as another Experiment.

An independent Experiment means that the selection of which users end up in which Experience doesn’t depend on any other Experiment, i.e. given multiple Experiments, a single user may end up in any of the Experiences of each Experiment.

A parallel Experiment means that two or more Experiments use the same Salt to assign users to Experiences, i.e. a single user ends up in the “same” Experience in all the parallel Experiments (see the example below). Note that parallel Experiments don’t need to be active at the same time.

Example: Consider Experiment 1 with 3 Experiences A (20%), B (30%) and C (50%), and Experiment 2 with 3 Experiences A2 (20%), B2 (30%) and C2 (50%).

Case 1: (default) "Use a different Salt in Experiments 1 and 2" - The users in Experience A will be independent from the users in Experience A2.

Case 2: "Use the same Salt in Experiments 1 and 2" - The users in Experience A will be exactly the same users as in Experience A2.

The algorithm for determining the Experience of a user is:

  • Calculate x = map(om_uid + salt) -> [0, 100[, where map is a deterministic function (the same input always yields the same output) from a string to [0, 100[. The value each input maps to is pseudo-random.
  • For example, in the cases above, select the Experience based on x:
    • [0, 20[ -> A / A2
    • [20, 50[ -> B / B2
    • [50, 100[ -> C / C2
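The bucketing above can be sketched as follows. The hash used here is an illustrative stand-in for Omniata's internal map function, but it has the stated properties: deterministic, pseudo-random, range [0, 100[:

```python
# Sketch of deterministic Experience assignment.
# hashlib.md5 is a stand-in for Omniata's internal map function; any
# deterministic, well-distributed string hash behaves the same way.
import hashlib

def bucket(om_uid: str, salt: str) -> int:
    """Map om_uid + salt to an integer in [0, 100[ deterministically."""
    digest = hashlib.md5((om_uid + salt).encode()).hexdigest()
    return int(digest, 16) % 100

def assign(om_uid: str, salt: str) -> str:
    """Pick an Experience using the distribution from the example above."""
    x = bucket(om_uid, salt)
    if x < 20:
        return "A"   # [0, 20[   -> 20% of users
    elif x < 50:
        return "B"   # [20, 50[  -> 30% of users
    else:
        return "C"   # [50, 100[ -> 50% of users

# Same salt => parallel Experiments place the user in the "same" Experience;
# a different salt makes the assignments independent.
assert assign("user-1", "salt-1") == assign("user-1", "salt-1")
```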

This article was last updated on August 26, 2015 22:03.