Here are some frequently asked questions about using Experience Optimization and content testing to create and run content tests.
In the Optimization window, what does Suggested Tests mean and what do the metrics measure?
The Suggested Tests dialog shows you the pages that you should test to gain the most potential value.
The following metrics determine whether a page is a good candidate for testing, using normalized scores between 1 and 10:
- Potential – a score indicating how likely testing the page is to increase engagement for its visitors
- Impact – the expected level of impact of the test
- Recommendation – a combination of potential and impact. This shows you how strong the recommendation to optimize the page is.
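Sitecore does not publish the exact formula behind these scores, but the idea can be sketched as follows. This is an illustrative assumption only: raw values are scaled onto the 1–10 range, and the Recommendation score is shown here as a simple average of Potential and Impact, which may not match the actual weighting.

```python
# Hypothetical sketch of normalized 1-10 scoring; the real Suggested
# Tests formula is not documented, so the weighting here is assumed.

def normalize(value, lo, hi):
    """Scale a raw value onto the 1-10 range used by Suggested Tests."""
    if hi == lo:
        return 1.0
    return 1.0 + 9.0 * (value - lo) / (hi - lo)

def recommendation(potential, impact):
    """Combine two 1-10 scores; a plain average is an assumption."""
    return (potential + impact) / 2.0

# A page with high raw potential (800 of 1000) and moderate impact (60 of 100)
print(recommendation(normalize(800, 0, 1000), normalize(60, 0, 100)))
```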
How does Experience Optimization calculate how long a test should run before it is statistically significant?
Sitecore calculates the number of visitors needed to validate a test. This number depends on the number of experiences that you are testing and the required confidence level of the test.
When starting a test, you can see the number of visits needed for a statistically significant result. If there is historical visitor data available for a page, you see the expected number of days that the test should run before it reaches a valid result.
Sitecore uses historical data from the last month to predict the number of visits needed to achieve a significant result for the test. Website traffic, however, can vary over time, and this can affect the forecast's accuracy.
When the needed number of visitors has been reached, the method validates that one of the experiences has performed significantly better than the original and declares a winner.
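Sitecore's internal calculation is not public, but the relationship between confidence level, visitor count, and duration can be sketched with a standard two-proportion sample-size formula. Everything below is an illustrative assumption, not Sitecore's actual method: the conversion rates, daily traffic figure, and 80% power value are made up for the example.

```python
# Illustrative sketch: a textbook two-proportion z-test sample-size
# calculation, showing how the needed visitor count grows with the
# desired confidence level, and how historical daily traffic turns
# that count into a duration forecast. Not Sitecore's actual code.
import math
from statistics import NormalDist

def visitors_needed(p_original, p_variant, confidence=0.95, power=0.80):
    """Visitors per experience needed to detect the given difference."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    variance = p_original * (1 - p_original) + p_variant * (1 - p_variant)
    return math.ceil((z_alpha + z_beta) ** 2 * variance
                     / (p_original - p_variant) ** 2)

def days_to_run(needed, avg_daily_visits):
    """Forecast duration from last month's average daily traffic."""
    return math.ceil(needed / avg_daily_visits)

n = visitors_needed(0.10, 0.13)        # detect a 10% -> 13% uplift
print(n, days_to_run(n, avg_daily_visits=250))
```

Raising the confidence level or testing more experiences increases the visitor count, which is why tests with many experiences can take much longer to reach a valid result.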
If the number of visitors needed for a test to be statistically significant is reached before a test's end date is reached, will the test still run?
Yes, the test will still run. You should allow tests to run until the suggested end date. This ensures that you have data for each day of the week, which can affect the test results and provide insight into activity on your websites.
If the test's end date passes without a statistically significant result, the test is suspended.
Why would I manually set the minimum and maximum duration of a test when Experience Optimization already suggests how long a test should run for?
Experience Optimization forecasts how long the test should run, but you may want to adjust the time frame of the test in the configuration file, for example, if you are making significant changes to your website that will affect traffic to the pages being tested. Adding a new campaign, for instance, could increase traffic to a page and influence the result of your test.
In general, it is best practice to specify a maximum duration for a test because some search engines interpret indefinitely tested content as an attempt to negatively influence search results.
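The effect of minimum and maximum bounds can be sketched as a simple clamp on the forecast. This is a conceptual illustration only; the default values of 7 and 30 days are assumptions, not documented Sitecore defaults.

```python
# Hypothetical sketch: the traffic-based forecast is clamped to
# configured limits so that a traffic spike (for example, from a new
# campaign) cannot cut a test too short, and so that no test runs
# indefinitely. The 7- and 30-day bounds are illustrative values.

def bounded_duration(forecast_days, min_days=7, max_days=30):
    return max(min_days, min(forecast_days, max_days))

print(bounded_duration(3))    # prints 7: clamped up to the minimum
print(bounded_duration(90))   # prints 30: clamped down to the maximum
```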
Can you explain the Test result overview metrics?
The Test results overview provides information about how your tests are performing. Three metrics give you insight into a test's performance:
- Experience effect – the relative change in engagement value since the test began
- Confidence – the statistical confidence level
- Score – the product of the effect and the number of visits in a month
For example, suppose a test has an 85.9% experience effect. This means that the engagement value generated by the test content has increased by 85.9% over the original. If the confidence level is 56.88%, the result is not yet statistically significant and the test probably needs to run longer.
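One common way to express a confidence level like this is as 1 minus the p-value of a two-proportion z-test. Sitecore's internal calculation may differ, and the visitor and conversion counts below are made-up illustrative figures.

```python
# Hedged sketch: confidence expressed as 1 - p-value from a two-sided
# two-proportion z-test. This is a standard statistical approach, not
# necessarily the formula Sitecore uses; the sample numbers are invented.
import math
from statistics import NormalDist

def confidence_level(conv_a, n_a, conv_b, n_b):
    """Return the confidence (in [0, 1]) that A and B truly differ."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 1 - 2 * (1 - NormalDist().cdf(abs(z)))

c = confidence_level(conv_a=30, n_a=400, conv_b=41, n_b=400)
print(f"{c:.2%}")   # below the usual 95% bar, so not yet significant
```

A confidence level below the conventional 95% threshold, like the 56.88% in the example above, means the observed uplift could still be due to chance.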
In the Experience Explorer, on the Optimization tab, I can select Active Tests to see how a test is performing. Can you explain how the effect is calculated?
Effect shows how much the engagement value of the contacts exposed to the tested content has changed. If the original version of the page has the highest engagement value, the effect score is 0, as the test has had no effect.
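The rule described above can be sketched as a relative uplift that is floored at zero. This is an assumed formulation of the described behavior, not Sitecore's actual code.

```python
# Minimal sketch of the effect calculation as described: the relative
# change in engagement value per visit, floored at zero whenever the
# original experience performs best. Assumed formula for illustration.

def effect(original_value_per_visit, variant_value_per_visit):
    """Relative uplift of the variant over the original, or 0."""
    if variant_value_per_visit <= original_value_per_visit:
        return 0.0
    return ((variant_value_per_visit - original_value_per_visit)
            / original_value_per_visit)

print(effect(4.0, 6.0))   # prints 0.5: the variant generates 50% more value
print(effect(5.0, 4.0))   # prints 0.0: the original wins, so no effect
```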
Can I run an A/B and multivariate test on one specific device?
Yes. You can create device-specific A/B and multivariate tests. Content tests, however, run across all of the relevant devices.
Can I test several different personalized components on a single page?
Can I start a test without a workflow?
Yes. You can start a component or personalization test without a workflow. You may want to do this if your organization does not use workflows to approve content or tests for your website.
Are there different security roles that provide access to Experience Optimization?
Yes. There are several security roles that provide access to different levels of functionality in Experience Optimization.
- Authoring – enables the creation, running, and editing of tests.
You typically assign this role to content authors and marketers.
- Analytics Advanced Testing – provides the same access as the Authoring role, plus additional tabs and controls.
You typically assign this role to marketing analysts.
- Analytics Management Reporting – has full access to all content testing dashboards and historical reports, but cannot create tests.
You typically assign this role to marketing directors.
You can also assign security roles to individual users to give them access to Experience Optimization.