A/B testing

Introduction

Prepr A/B tests give you the power to perform A/B tests with your content in an easy-to-implement way.

By A/B testing your content frequently, you can increase CTR and time spent. Zoom in on the retrieved metrics and learn what content works best, or improve content that needs tweaking.

Handle customer IDs in experiments

Unique customer IDs are used to identify a participant in a running experiment. Supply any string you want to use for the customer ID. A consistent customer ID ensures that a single customer isn't randomly re-assigned to a bucket every time they see your experiment.

In other words: a customer always gets the same A/B version or experience.

For example, if you're running tests on anonymous customers (visitors), you can use a first-party cookie or device ID to identify each participant.
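As an illustration, here's a minimal sketch of deriving a stable ID for anonymous visitors from a first-party cookie. The cookie name and helper function are hypothetical examples, not part of the Prepr SDK:

```ts
// A minimal sketch: read an anonymous customer ID from a first-party cookie,
// or create and persist one so the visitor stays in the same bucket.
// The cookie name "prepr_customer_id" is a hypothetical example.
function getAnonymousCustomerId(): string {
  const match = document.cookie.match(/(?:^|;\s*)prepr_customer_id=([^;]+)/);
  if (match) return match[1];

  const id = crypto.randomUUID(); // any unique string works
  // Persist for a year so return visits keep the same A/B assignment.
  document.cookie = `prepr_customer_id=${id}; path=/; max-age=${60 * 60 * 24 * 365}`;
  return id;
}
```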

If you're running tests on known customers, you can use the Prepr Customer ID to identify each participant. If you're using UUIDs, you can run tests that span multiple devices or applications and ensure that customers have a consistent treatment.

Things to check

Make sure that customer IDs are unique. If a customer ID gets assigned to multiple customers, the test results can be compromised.

Use either logged-out or logged-in IDs. Prepr doesn't currently support a mechanism to alias logged-out IDs with logged-in IDs. If you are running tests that span both logged-out and logged-in states (for example, a test on a signup funnel that tracks conversions after the user has logged in), you must persist logged-out IDs for the lifetime of the session, as shown in the sketch below.
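A minimal sketch of that persistence, assuming a browser context; the storage key and helper name are hypothetical:

```ts
// A minimal sketch: keep the logged-out customer ID for the whole session,
// even after login, so exposure events (pre-login) and conversion events
// (post-login) are attributed to the same participant.
// The sessionStorage key "prepr_ab_customer_id" is a hypothetical example.
function getSessionCustomerId(): string {
  let id = sessionStorage.getItem("prepr_ab_customer_id");
  if (!id) {
    id = crypto.randomUUID();
    sessionStorage.setItem("prepr_ab_customer_id", id);
  }
  return id; // keep using this ID after login instead of switching to the logged-in ID
}
```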

Track exposure and conversion events

Prepr lets you track how users reacted to the different versions of your content. To do so, you need to record events that capture user interactions.

After Prepr aggregates enough events to produce meaningful metrics, you'll be able to judge your experiment variations on the publication detail page.

As part of the implementation process, you'll choose how to aggregate your tracked events. For an experiment to work, you need to set up two events that will be used to determine whether the experiment has been successful.

In general, you want to track the Expose event and a conversion event like a View. Prepr will calculate Statistical Significance using the combined Engagement Value generated in the test.
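For illustration, here's a minimal sketch of recording both events. The endpoint URL, payload shape, and trackEvent helper are assumptions rather than the actual Prepr Capture API; see the link below for the real implementation:

```ts
// A minimal sketch of recording an exposure and a conversion event.
// The endpoint, payload fields, and helper name are hypothetical;
// follow Tracking Events with Prepr Capture for the actual API.
async function trackEvent(
  kind: "Expose" | "View",
  publicationId: string,
  customerId: string,
): Promise<void> {
  await fetch("https://example.com/prepr-events", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ kind, publication_id: publicationId, customer_id: customerId }),
  });
}

// Record an Expose when the variant is shown, and a View when the customer converts.
trackEvent("Expose", "publication-id", getSessionCustomerId()).catch(console.error);
```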

Check out Tracking Events with Prepr Capture to learn how to implement event tracking.

Run A/B tests

An A/B test is a randomized experiment using two or more variants of the same publication. Variant A is the original, and variants B through n each contain at least one element that is modified from the original.

Before you can start testing, you need to create at least one B version of a publication. If you use the Prepr SDKs to query the API, the audience will be evenly balanced between the A and B versions.
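For example, here's a sketch of passing the participant's customer ID along with a content query so the same variant comes back on every request. The endpoint, header name, and query are assumptions for illustration; check the Prepr SDK documentation for the exact mechanism:

```ts
// A minimal sketch: pass the participant's customer ID with each content query
// so Prepr can return a consistent A or B variant. The endpoint and the
// "Prepr-Customer-Id" header name are assumptions for illustration.
async function fetchPublication(customerId: string): Promise<unknown> {
  const response = await fetch("https://example.com/prepr-graphql", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "Prepr-Customer-Id": customerId, // keeps bucketing stable per customer
    },
    body: JSON.stringify({ query: '{ Page (id: "publication-id") { _id title } }' }),
  });
  return response.json();
}
```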

How long should your experiment run?

In Prepr you can view the results of the A/B test in the sidebar of the publication view. A calculation based on the number of Expose and View events results in a verdict about the test. If the test is significant, you can choose to switch off the test and continue with the version that had the best results. The B version has to have a 95% probability of beating the baseline for the result to be significant.
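Prepr doesn't spell out its exact calculation here, but a common way to compute a "probability to beat baseline" is a Bayesian Beta-Binomial comparison of conversion rates. The sketch below illustrates that general approach with Monte Carlo sampling; it is not necessarily Prepr's implementation, and the counts are hypothetical:

```ts
// An illustrative sketch of "probability that B beats baseline" using a
// Bayesian Beta-Binomial model with a uniform prior and Monte Carlo sampling.
// This is a general technique, not necessarily Prepr's exact calculation.

// Standard normal sample via Box-Muller.
function randNormal(): number {
  const u1 = 1 - Math.random(); // shift to (0, 1] to avoid log(0)
  const u2 = Math.random();
  return Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
}

// Gamma(shape, 1) sample via the Marsaglia-Tsang method (valid for shape >= 1,
// which always holds here because the posterior parameters are 1 + counts).
function randGamma(shape: number): number {
  const d = shape - 1 / 3;
  const c = 1 / Math.sqrt(9 * d);
  for (;;) {
    let x: number;
    let v: number;
    do {
      x = randNormal();
      v = 1 + c * x;
    } while (v <= 0);
    v = v * v * v;
    const u = Math.random();
    if (u < 1 - 0.0331 * x ** 4) return d * v;
    if (Math.log(u) < 0.5 * x * x + d * (1 - v + Math.log(v))) return d * v;
  }
}

// Beta(a, b) sample from two Gamma samples.
function randBeta(a: number, b: number): number {
  const g = randGamma(a);
  return g / (g + randGamma(b));
}

// Probability that variant B's conversion rate beats A's, given
// Expose counts and View (conversion) counts for each variant.
function probBBeatsA(expA: number, convA: number, expB: number, convB: number, samples = 100_000): number {
  let wins = 0;
  for (let i = 0; i < samples; i++) {
    const pA = randBeta(1 + convA, 1 + expA - convA);
    const pB = randBeta(1 + convB, 1 + expB - convB);
    if (pB > pA) wins++;
  }
  return wins / samples;
}

// Hypothetical counts: declare B significant once this reaches 0.95.
console.log(probBBeatsA(1000, 100, 1000, 130).toFixed(3));
```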

A/B experiments are still delivered by our CDN and should not impact query performance.

Tracking events

Make sure to send events to Prepr to record users' exposures and conversions for the A/B test. Without these events, Prepr can't calculate the CTR and performance of the running tests.

Search results will not be impacted by A/B testing. Search queries are based on the baseline (A) version of a publication.