Bold claims. I think we need to look at some data…
At SendForensics we periodically dive into the data looking for patterns that can help us advance our end goal: ensuring customers are sending the highest quality email possible for optimal deliverability and, therefore, engagement.
The common pattern is that customers boost their bottom-line results dramatically if they are able to maintain a SendForensics Deliverability Score sweet spot of >75%:
This is well known and has been predicted by our models time and time again. However, as the click-bait headlines go: what happened next shocked us…
This is where it gets really interesting.
As data scientists, we have a love-hate relationship with surprises. If the data turns up something we hadn't anticipated, it could go either way.
Our recent rummagings delivered just such a surprise when we added spam-complaint data to the mix.
It turns out the SendForensics Deliverability Score has a startling inverse correlation with user-generated spam complaints (all else being equal, i.e. similar list size, regular offering, etc.).
The above shows six weeks' worth of campaign data for a customer's weekly newsletter listing various items for sale. It was chosen for its relatively consistent offering and mailing-list size (~55,400 contacts ±250), in order to minimise fluctuations in other variables and isolate the effect of the Deliverability Score alone.
Correlations, of course, abound in statistics and may mean nothing. But when a correlation is accompanied by anti-correlations of complementary variables (variables conveying opposite meaning, such as click-through rates versus abuse rates, measured independently), it becomes much more interesting.
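The complementary-variable check is straightforward to sketch. Below is a minimal example using made-up weekly figures (illustrative only, not actual SendForensics data) showing a score that correlates positively with one metric and negatively with its complement:

```python
import numpy as np

# Hypothetical figures for six weekly sends (illustrative, not real data):
# deliverability score (%), click-through rate (%), spam-complaint rate (%).
score      = np.array([68.0, 72.0, 76.0, 79.0, 74.0, 81.0])
click_rate = np.array([1.8,  2.1,  2.6,  2.9,  2.3,  3.1])
abuse_rate = np.array([0.09, 0.07, 0.04, 0.03, 0.05, 0.02])

# Pearson correlation of the score with each independently measured metric.
r_clicks = np.corrcoef(score, click_rate)[0, 1]
r_abuse  = np.corrcoef(score, abuse_rate)[0, 1]

print(f"score vs click-through rate: {r_clicks:+.2f}")  # strongly positive
print(f"score vs spam-complaint rate: {r_abuse:+.2f}")  # strongly negative
```

When two metrics that pull in opposite directions (clicks up, complaints down) both track the same score, a spurious-correlation explanation becomes much harder to sustain.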
Investigating further across the entirety of the most recent dataset, it seems campaigns that are able to maintain an SF Deliverability Score sweet spot of >75% have, wait for it, four times lower spam-complaint rates.
So what’s going on here?
What really shocked us is what this ultimately suggests. The SendForensics Deliverability Score, the metric we have been so proud of for the past few years, is actually measuring something more than just inbox placement.
It’s measuring deliverability right through to the "final filter" (i.e. the user)
- in other words, predicting engagement itself
Given the growing importance of engagement-based inbox-training on deliverability, we shouldn’t have been so surprised by this relationship.
We built a system to optimise deliverability pre-emptively. Turns out it’s pre-emptively optimising engagement as a whole. Marvellous.