Experiments is the legacy name for Campaigns and is still supported. An Experiment (Campaign) assigns users to different Experiences (i.e. content), so that users receive different types of user experience depending on the content delivered. In each Experiment (Campaign), the Experiences allow statistical testing between different types of user experience. By assigning users to these Experiences, you can measure the effect of each user experience against a "control" Experience and identify the most effective "variant" Experience.
Technically speaking, an Experiment is a way to deterministically assign a user to an Experience, given the distribution of the Experiences. Once a user is assigned to an Experience, Omniata stores the mapping for later use, for example in detailed analysis in the dashboards. In practice, an Experience normally modifies some aspect of the end-user experience, for example by changing, adding, or removing functionality. A classic example is a change in the price of a virtual item. Users are not explicitly aware that they are part of an Experiment, i.e. the Experiment is transparent to the user.
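Deterministic assignment is typically implemented by hashing the user id into a fixed bucket range and walking the cumulative distribution of the Experiences. Omniata's exact algorithm is internal, so the sketch below is only an illustration of the general technique; the function name and the choice of hash are assumptions.

```python
# Illustrative sketch of deterministic Experience assignment.
# NOTE: this is NOT Omniata's actual algorithm, only the general technique.
import hashlib

def assign_experience(user_id, experiences):
    """experiences: list of (name, percent) pairs summing to 100."""
    # Hash the user id to a stable bucket in [0, 100)
    digest = hashlib.md5(user_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    # Walk the cumulative distribution to find the Experience
    cumulative = 0
    for name, percent in experiences:
        cumulative += percent
        if bucket < cumulative:
            return name
    return experiences[-1][0]  # guard against rounding issues

# The same user id always maps to the same Experience:
exps = [("A", 20), ("B", 30), ("C", 50)]
print(assign_experience("user-42", exps))
```

Because the mapping is a pure function of the user id, the same user always lands in the same Experience, which is what makes the stored mapping stable for later analysis.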
Experiments can be created in Engagement > Experiments.
The Actions on this page are:
You can also create a new Experiment:
Navigate to Engagement > Experiments and click New Experiment, which opens the following screen:
The elements of the New Experiment are:
Note: An Experience can be attached to multiple Messages and shown to users across multiple channels; it is not limited to a single Message or piece of content. For instance, you can measure the effect of a new style of icons displayed to 10% of users across mobile and web, and still consider that a single Experience.
Note: It is highly recommended not to modify an Experience's "% Users" value while the Experiment is running, as this causes problems when analyzing the statistical results. In particular, if the Experiment runs in parallel with other Experiments, interpreting the results becomes very difficult.
Running Experiments can be a delicate process: any misconfiguration can result in invalid data that defeats the purpose of the Experiment. For that reason, it is recommended to follow the process described below:
This is the most delicate step. Channels display Content to users. When you publish Content in a Channel, you create a "Message". This Message references the published Content and defines the rules that determine whether that specific Content is shown to a given user. Think of a Message as an envelope with written rules about who should be able to access the content: "high spenders only", "people who belong to Experience B only", "Wednesdays only". A Channel is a collection of Messages with rules to display Content.
The most effective way to test is to call the Channel API directly, using development keys and the user id being tested. This emulates how your application will retrieve content for a user. The format of the API call is the following:
[api key]: Found at Data > API Keys. Be sure to select the API key associated with the environment you are testing on.
[user id]: Your test user id
[channel id]: The channel id. You can find it in Engagement > Channels.
The result is the content that would be delivered to that user, in JSON format.
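The call above can be scripted to test several users quickly. The host and path in the sketch below are placeholders, not the real Omniata endpoint format; substitute the actual Channel API URL format from your project's documentation.

```python
# Sketch of constructing a Channel API test call.
# The base URL below is a PLACEHOLDER (assumption) -- use the real
# endpoint format from your Omniata project documentation.
import json
import urllib.parse
import urllib.request

def build_channel_url(api_key, user_id, channel_id,
                      base="https://example.omniata.com/channel"):
    params = urllib.parse.urlencode({
        "api_key": api_key,        # Data > API Keys (match your environment)
        "uid": user_id,            # your test user id
        "channel_id": channel_id,  # Engagement > Channels
    })
    return f"{base}?{params}"

url = build_channel_url("dev-api-key", "test-user-123", "42")
print(url)

# To fetch the JSON content that would be delivered to that user:
# with urllib.request.urlopen(url) as resp:
#     content = json.loads(resp.read())
```

Comparing the returned JSON for users in different Experiences is a quick way to confirm that each Experience delivers the content you expect.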
In order to validate aspects like targeting, you can check the status of all the user attributes in real time using the User Browser. Go to Users > User Browser, enter a user id, select the corresponding environment and project, and you will see that user's data.
This is the final checklist you should always validate before putting an Experiment live:
The procedure is the same as described in #3, but using production API keys.
If all looks good, go to Experiments and select "Lock" for your experiment. This prevents anyone from changing the % of users assigned to each Experience, which would invalidate the test.
You'll find standard reports with statistical significance in Experiments > [Your Experiment] > Results, updated daily after 00:00 UTC. You can also build custom reports by adding the dimensions Experiment Name, Experiment Id, Experience Name and Experience Id to any reporting table in the Analyzer. Note: all four dimensions are required in order to get valid results.
The experiment will conclude automatically when the end date is reached.
Salt is an alphanumeric value that lets you keep an Experiment independent from other Experiments or create parallel Experiments. The Salt can be set only once, at the moment the Experiment is created. Omniata automatically proposes a Salt value for a new Experiment; if you have no reason to modify it, use the default. It is safer not to modify the value manually (unless needed), so that you don't accidentally use the same Salt as another Experiment.
An independent Experiment means that the selection of which users end up in which Experience doesn't depend on any other Experiment, i.e. given multiple Experiments, a single user may end up in any of the Experiences in each of the Experiments.
A parallel Experiment means that two or more Experiments use the same Salt to assign users to Experiences, i.e. a single user ends up in the "same" Experience slot in all the parallel Experiments (see the example below). Note that parallel Experiments don't need to be active at the same time.
Example: Consider Experiment 1 with three Experiences A (20%), B (30%) and C (50%), and Experiment 2 with three Experiences A' (20%), B' (30%) and C' (50%).
Case 1 (default), different Salt in Experiments 1 and 2: the users in Experience A are independent of the users in Experience A'.
Case 2, same Salt in Experiments 1 and 2: the users in Experience A are exactly the same users as in Experience A'.
The algorithm for determining a user's Experience is:
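Omniata's exact algorithm is internal, but the behavior described above can be sketched with a salt-based hash assignment. Everything in this sketch (the function name, the choice of hash, the way the salt is combined with the user id) is an illustrative assumption; only the parallel/independent behavior it demonstrates matches the description above.

```python
# Illustrative sketch of salt-based deterministic assignment.
# NOT Omniata's actual algorithm -- it only demonstrates how a shared
# Salt makes Experiments parallel and distinct Salts make them independent.
import hashlib

def assign(user_id, salt, experiences):
    """experiences: list of (name, percent) pairs summing to 100."""
    # Combine the Salt with the user id before hashing, so the bucket
    # depends on both values.
    digest = hashlib.md5(f"{salt}:{user_id}".encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 100
    cumulative = 0
    for name, percent in experiences:
        cumulative += percent
        if bucket < cumulative:
            return name
    return experiences[-1][0]  # guard against rounding issues

exps = [("A", 20), ("B", 30), ("C", 50)]
# Same Salt => parallel: both Experiments partition users identically,
# so a user in slot A of one is in slot A' of the other.
print(assign("user-1", "salt-x", exps), assign("user-1", "salt-x", exps))
# Different Salts => independent: a user's slot in one Experiment
# says nothing about their slot in the other.
print(assign("user-1", "salt-x", exps), assign("user-1", "salt-y", exps))
```

With this structure, reusing a Salt by accident silently couples two Experiments, which is why the documentation above recommends keeping the auto-generated value.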