I've been in marketing in various forms for over 20 years, and for the first quarter or so of my career I worked on the data side of direct marketing. Eventually I was involved in things like database propensity modeling, segmentation, and direct mail, all related to CRM.
About 14 years ago, I suddenly had an epiphany that direct mail was probably not going to be very important in the future, and I wanted to switch into more of a digital marketing role. So I got a job with a digital agency as a digital analyst, which was a new kind of role at the time. Coincidentally, around the same time, Google launched Google Website Optimizer, the first commercially available A/B testing platform.
Because of my background in split-testing direct mail, I could bring the learnings from testing in direct marketing into testing website content. The same thing still drives me: what I found so fascinating was testing ideas.
Often we run with ideas that everybody thinks are going to work, things that people believe will have a massive impact. The reality is that it's very difficult to predict or guess what customers are going to do. Testing is just a really interesting way of finding out and seeing the results.
Journey Further is a performance marketing agency, and our three core product pillars are paid media, organic and PR, and conversion. Our LinkedIn newsletter belongs to the conversion division, and its name, “Test Everything”, very neatly summarizes my philosophy around the whole thing.
CRO, as it's been practiced by many different companies, means different things to different people. Mostly, people approach it as a way of tweaking small things on the website. Most of the time, companies have things running on their website which they simply believe to be true.
So they've decided those need to be retained, and then they'll have a small section that is experimentation. They might have a CRO person somewhere in their marketing team who is basically running a few small experiments on button colors and things like that. This misses the point, because the power of experimentation is that you are not simply validating your ideas. Our philosophy is very much that experimentation should fuel everything you do. If you get it right, you can embed it into your way of working as an organization.
At the end of the day, it's about using research and data to figure out what you should do, and experimentation to validate it in real situations with real customers. People think they intrinsically know their customers, but they don't. The only way you can understand what's going to work is by testing.
The CRO industry has this very ingrained language of winning and losing, and naturally there are positive and negative connotations to each. But actually, there is no such thing as winning and losing, because the point of experimentation is to learn.
A test that fails has just as much learning in it as a test that wins, because either way you've demonstrated something about how customers behave and how they will, or won't, react to a certain type of content.
For every single test you should always be thinking: what does this tell us? What can we do next? How do we learn from this? How do we do something bigger, better, or different? How do we change the way we run the business according to what we're learning? That's the power of experimentation.
A scientist will tell you what they're trying to achieve: it's all a process of learning, research, and development towards some kind of goal. It's the same with business experimentation. You should treat it as a fundamental method of research and development that helps you achieve a strategic goal.
There's a problem with the word optimization. People confuse website optimization with tweaking variations of designs and copy; they see it as optimizing and improving what's already there. I believe website optimization is about learning from experimentation, which is fundamentally research and development. The word itself is a bit strange, and I would prefer to talk about strategic development or content development.
Our fundamental approach is very much a combination of research, data, analysis, and experimentation. It's always important to remember that research and data will never get you a perfect answer, but they certainly get you closer to the answer than opinion and guesswork.
We try to understand customer behavior from a research and data point of view. We look for sensitivities around conversion and for the key areas of the site that might be problematic. We bring all that data together into a series of key themes, which are really a starting point.
For example, customers might not be getting the uniqueness of your proposition and how you explain what makes you different from your competitors. That might be the key thing we want to explore, through various experiments, pieces of copy testing, or whatever fits.
Once we've done that, we can come back and evaluate those themes, evaluate what we've learned, and see whether that changes them. It's possible that testing various versions of the value proposition actually changes the proposition itself, which might then lead you to fundamentally rethink the whole business model. This is where experimentation should lead: an almost bottom-up kind of strategy development, where small things ladder up to big strategic questions that help you fundamentally change the way you do business.
We have a very rigorous process built into a project management system we've created, but regardless of the system and the software, it's a systematized process. First of all, we identify and prioritize different types of research, different pieces of work that allow us to come up with ideas. There are all sorts of things you could look at in terms of trying to understand customers.
Tons of ideas come out of the back of those, but how do you manage all of that and prioritize it? The starting point is understanding the strategic direction of the website and what its goals are.
Some businesses, for example, are very focused on driving cross-sell, because their business model and sales strategy are about acquiring customers at a fairly high cost per acquisition and earning profit by nurturing those customers. Cross-sell, up-sell, and repeat purchase are an incredibly important part of that strategy.
Another business might be very much about converting that first customer. Things like that about the business strategy help guide what you focus on and what you're trying to uncover, and that helps us prioritize the types of research we're doing. From there, we come up with ideas, which might be grouped into particular themes emerging from the research.
Initially you don't know which of those themes will pay off, but as the experiment results come in, certain themes start to become more interesting, and you can then start to prioritize them.
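One way to picture that prioritization, as a purely hypothetical sketch rather than the actual system described here, is a running score per theme that each experiment's result nudges up or down. The theme names, uplift numbers, and weighting below are illustrative assumptions, not real results.

```python
# Hypothetical sketch: prioritizing research themes by experiment outcomes.
# All themes, uplifts, and weights below are made-up illustrations.
from collections import defaultdict

theme_scores = defaultdict(float)

def record_result(theme: str, uplift: float, confident: bool) -> None:
    """Nudge a theme's priority by the observed uplift, discounted if noisy."""
    weight = 1.0 if confident else 0.5
    theme_scores[theme] += weight * uplift

# As results accumulate, some themes separate from the pack.
record_result("value proposition clarity", uplift=0.04, confident=True)
record_result("checkout friction", uplift=-0.01, confident=False)
record_result("value proposition clarity", uplift=0.02, confident=True)

# Highest-scoring themes are the ones worth doubling down on next.
for theme, score in sorted(theme_scores.items(), key=lambda kv: -kv[1]):
    print(f"{theme}: {score:+.3f}")
```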
I'm a huge fan of LinkedIn, which is a really good platform for keeping up to date with the industry. However, most of my learning and research happens organically.
In my early days of running experiments on websites, it took me longer than it should have to realize the importance of statistics in experimentation. It's a strange area because, if you really want to, you can go down a very deep rabbit hole into some incredibly complicated maths and statistics, which in itself puts a lot of people off bothering to look into it. But it is fundamentally important. You're trying to infer the probability that your observation is real and would be repeated if you ran the test another time.
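To make that inference concrete, here is a minimal sketch, under assumed conditions that aren't from the interview itself (two variants, binary conversions, a standard two-proportion z-test, and made-up visitor counts), of the calculation behind a typical A/B test read-out:

```python
# Minimal sketch of the statistics behind an A/B test result.
# Assumptions (illustrative): two variants, binary conversions,
# and a two-sided two-proportion z-test.
from math import sqrt, erf

def ab_test_p_value(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided p-value for the difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis of "no real difference".
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Standard normal CDF via the error function; two-sided tail probability.
    cdf = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - cdf)

# Example: control converts 500/10,000 visitors, variant 560/10,000.
p = ab_test_p_value(500, 10_000, 560, 10_000)
print(f"p-value: {p:.3f}")  # ~0.06: suggestive, but could still be chance
```

The p-value is the probability of seeing a difference at least this large by chance if the two variants actually performed identically, which is exactly the "would this repeat if we ran it again?" question.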