Essentials of Conversion Optimization by ConversionXL
CONVERSION OPTIMIZATION
Free optimization guide by ConversionXL
Written by Peep Laja, founder of ConversionXL
that opinions don't make money. My friend and mentor Craig Sullivan likes to
say that "opinions are like assholes - everybody's got one". You are not your
customer, and you have lots of different kinds of customers.
Implement this rule in your company: whenever somebody voices an opinion,
they have to preface it by saying: "In my insignificant, unsupported, baseless
opinion." That will set the right tone for the importance of whatever is to follow.
2. You don't know what will work. Every now and then you meet someone -
typically someone (self-)important - who will proclaim to know what works and what
should be changed on the site for improved results.
Well, they're full of shit. Nobody knows what will work. If we did, we'd all be
billionaires. Unfortunately, magic crystal balls don't exist. That's why we need
split testing.
best product page layout, no best home page design layout. There are no
things that always work. Marketers who tell you otherwise by selling "tests that
always win" ebooks are just after your money. For every best practice you find, I
will show you 10 tests where it failed.
Best practices work - but only on half the sites. You don't know which half your
site belongs to. Stop thinking in tactics, and start thinking in processes. As the
saying goes, if you can't describe what you're doing as a process, you don't
know what you're doing.
Once you accept these truths, it's far easier to move ahead. We humans like our egos, and we like to tickle them. But we need to move past that. Conversion optimization
is very humbling in this regard. Too many times I have seen my own ideas - ones I was super
confident in - fail badly in A/B tests.
I've been in this business for many years - but when I have to predict a winner in a test,
I get it right about 60-70% of the time. Only slightly better than flipping a coin. Not
nearly good enough.
So stop guessing, and stop liking your own ideas so much. Separate yourself from
opinions.
STEP #2: Turn your unsupported and baseless opinions into data-informed, educated
hypotheses.
You need to move away from random guessing, and focus instead on KNOWING.
Supplemental reading:
http://conversionxl.com/your-design-sucks-copy-continuous-optimization/
http://conversionxl.com/sell-conversion-rate-optimization-to-your-boss/
http://conversionxl.com/stop-copying-your-competitors-they-dont-know-what-theyre-doing-either/
ResearchXL framework
You can use this framework for each and every optimization project. It's industry-agnostic - it doesn't matter if the site you're working on is a B2B lead gen, SaaS,
ecommerce or non-profit site. The process you use to get higher conversions is exactly
the same across all websites.
There are 6 steps of data gathering and analysis, followed by creating a master sheet
of all the issues found, which we then turn into action items.
It might sound scarier than it is. Over the coming days and weeks we'll look at each step
individually. This lesson focuses on the big picture + the first 2 steps.
Use the framework as your tool, your guide, your process map.
Cross-browser testing
Cross-device testing
Speed analysis
Identify leaks
Scroll maps
Customer surveys
Chat logs
Interviews
Step 7. Sum-up
Look up the average transaction amount. Let's assume it's $50 for this
example.
Calculate: if IE8 (currently converting at 2%) would convert the same as IE10
(currently 5%), how many more transactions would we have over a 6-month
period? Let's pretend that we'd get 200 more transactions over 6 months.
How much time will it take to identify and fix the bug? 3 hours? Is 3 hours of
developer time more or less than $10k? If less, fix the damn bugs!
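To make that math explicit, here's a minimal sketch in Python. The $50 average order value and the 200 extra transactions come from the example above; the developer hourly rate is an assumption added purely for illustration:

```python
# Back-of-the-envelope math for the IE8 example above.
avg_transaction = 50        # average transaction amount in dollars (from the example)
extra_transactions = 200    # extra transactions over 6 months if IE8 converted like IE10
dev_hourly_rate = 100       # assumed fully-loaded developer cost per hour (illustrative)
hours_to_fix = 3

extra_revenue = avg_transaction * extra_transactions   # 50 * 200 = $10,000 over 6 months
fix_cost = dev_hourly_rate * hours_to_fix              # 100 * 3 = $300

print(f"Revenue left on the table over 6 months: ${extra_revenue:,}")
print(f"Cost to fix the bug: ${fix_cost:,}")
print(f"Worth fixing: {fix_cost < extra_revenue}")
```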
Relevancy: does the page meet user expectations - both in terms of content and
design? How can it match what they want even more?
Clarity: is the content / offer on this page as clear as possible? How can we
make it clearer, simpler?
Distraction: what's on the page that is not helping the user take action? Is
anything unnecessarily drawing attention? If it's not motivation, it's friction - and
thus it might be a good idea to get rid of it.
This topic would take a three-day workshop, but for this essentials course I will point
out the key stuff.
Always approach analytics with a problem: you need to know in advance what you
want to know, and what you are going to change / do based on the answer. If nothing,
then you don't need it.
In a nutshell, we can learn what people do on the site -
but we won't know why. Heuristic analysis and qualitative research are your best
attempts at figuring out the why. Analytics is more like what, where and how much.
"Follow the data!", they say. Well, truth be told, data won't tell you anything. It is up to
you to pull the insights out of the data. And this requires practice. As with everything - the more time you spend looking at data, and trying to make sense of it, the better
you'll get at it.
It's time well spent, no matter what. If I were a (full-stack) marketer today, and not
analytics-savvy, I'd fear for my future. You need to love analytics. Remember, love at
first sight rarely happens. Spend more time together.
Averages lie - look at segments, distributions, comparisons
Most companies market to their average user, and most marketers look at average
numbers in analytics. But that's wrong.
So if your buyer #1 is a 12-year-old girl from Finland, and buyer #2 is a 77-year-old dude
from Spain, the average is a sexually confused 30-something in Austria. That's the
market you think you're after. See what I mean?
Your average conversion rate is 4.2%. But it becomes much more interesting if you
look at it per device category segment - desktop, tablet, mobile. You now have a much
better picture.
Instead of looking at a static number, look at distributions.
Distributions are also insightful in the case of totals. So instead of just looking at
total transactions:
You could look at the number when it's distributed by visits to transaction, and learn
that most people are ready to buy during their first visit:
And always, always use absolute numbers next to ratios. For instance, if landing page
A results in an 8% conversion rate and landing page B has a 2% conversion rate, you need
to know the absolute number of conversions to know whether you can trust the ratio.
If the total number of actions is less than 100, be very suspicious of it - the ratio is
probably wrong (the sample size is not big enough).
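Here's a small Python sketch of that idea. The visit and transaction counts are made up, but they show how a 4.2% site average can hide very different device segments, and why you always want the absolute numbers next to the ratio:

```python
# Made-up numbers: the overall average hides very different device segments.
segments = {
    "desktop": {"visits": 60000, "transactions": 3300},
    "tablet":  {"visits": 15000, "transactions": 500},
    "mobile":  {"visits": 50000, "transactions": 1450},
}

total_visits = sum(s["visits"] for s in segments.values())
total_transactions = sum(s["transactions"] for s in segments.values())
print(f"Site average: {total_transactions / total_visits:.1%} "
      f"({total_transactions:,} transactions)")

for name, s in segments.items():
    rate = s["transactions"] / s["visits"]
    print(f"{name:8s} {rate:.1%} ({s['transactions']:,} transactions)")

# Rule of thumb from above: if a segment has fewer than ~100 conversions,
# don't trust its ratio yet - the sample is too small.
```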
buy stuff,
and so on.
etc.
Next - set up measurement for all of these items. Some of the stuff you might be able
to measure by setting a thank-you page URL as a Goal; for most of this stuff you will
need event tracking.
Here's an example. An ecommerce site has a feature. I set up a segment for people
who use this feature. Now I'm comparing site-average ecommerce performance to this
specific segment:
What do we see? People who use this specific feature convert almost 4x better, and
spend slightly more money. That's an insight!
There could be lots of reasons for this - we don't know for sure just by looking at these
numbers. But right now ~10% of users use this feature. What would happen if we got
20% to use it? This insight can be turned into a test hypothesis.
If we didn't measure this stuff, we would have no idea. No data-driven hypothesis.
If you sell stuff for money on your site - you have a shopping cart system, products at
different price points - then you absolutely need to have ecommerce tracking
configured on your site. You need a developer for this. If you don't have it, you're
completely blind when you don't have to be.
If you have NO GOALS set up - you're a voluntary idiot. Your analytics are 100%
useless. Might as well give up now. Or - get your act together, and start measuring
stuff.
You need to have goals set up for all key actions (purchase, lead generation etc). Don't
set goals for stupid shit like visits to the About page - analytics measures visits to pages
anyway.
And - you absolutely need to have funnels set up (unless everything happens on a
single page): Product page -> Cart Page -> Checkout step 1 -> Step 2 -> Step 3 -> Thank
you.
I suggest you read this article very carefully as it has step-by-step instructions for
setting up your Google Analytics config.
For setting up event tracking you need one of these three options:
1. Use Google Tag Manager to set up event tracking (the best option); this naturally
requires that you have GTM already set up. Read this article to learn more about
tag managers, and read this post on using Google Tag Manager. If you don't run
your analytics through a tag manager, you're being silly and unnecessarily
complicating things.
2. Learn to code, so you can manually add event tracking scripts to your site,
wherever needed.
3. Tell your developer to set up event tracking for everything on your list.
Ideally you only pursue option #1 - it's the fastest and most sustainable option. Having
to go through a developer every time - and hard-code each event tracking script - is a
pain in the butt, and will likely cause problems down the line.
If you work with a Google Analytics setup that was done by someone else, you need to
start with an analytics health check.
In a nutshell: a health check is a series of analytics and instrumentation checks that
answers the following questions:
Is anything broken?
The truth is that nearly all analytics configurations are broken. Take this very seriously.
See if everything that needs to be measured is measured, whether multiple views are set up, and whether funnel
and goal data are accurate (calculate funnel performance manually via Behavior -> Site
Content reports, and compare it to your funnel data as well as your back-end sales reporting
tool).
If you see bounce rates under 10%, you can be sure that this is due to a broken setup - either the GA code being loaded twice, or some event being triggered right away that's not set to non-interactive.
I know I'm not being 1-2-3 here, but this is merely an essentials course - and you
wouldn't want this email to be 10,000 words long. It's important to know that this is an
issue, and you can do your own investigation from here.
Identify high traffic & high bounce / high exit rate pages
Before we wrap up, there are 2 more articles I urge you to check out on Google Analytics:
10 Optimization Experts Share Their Favorite Google Analytics Reports
7+ Under-Utilized Google Analytics Reports for Conversion Insights
Heat maps
What is a heat map? It's a graphical representation of data where the individual values
contained in a matrix are represented as colors. Red equals lots of action, and blue
equals no action. And then there are colors in between.
When people say heat map, they typically mean hover map. It shows you areas that
people have hovered over with their mouse cursor - and the idea is that people look
where they hover, so it's kind of like a poor man's eye tracking.
The accuracy of this thing is always questionable. People might be looking at stuff
that they don't hover over, and might be hovering over stuff that gets very little attention - and hence the heat map is inaccurate. Maybe it's accurate, maybe it's not. How do you
know? You don't.
That's why I typically ignore these types of heat maps. I mean, I do look at the info if it's
there - to see if it confirms my own observations / suspicions (or not) - but I don't put
much weight on it.
There are also tools that algorithmically analyze your user interface, and generate heat
maps off of that. They take into account stuff like colors, contrast and the size of elements.
While I don't fully trust these either (they're not based on actual users), I don't think they're
any less trustworthy than your hover maps.
Using algorithmic tools is an especially good idea if you lack traffic. It gives you instant
results. Check out Feng GUI (relatively cheap) and EyeQuant (best in class).
Click maps
A click map is a visual representation of aggregated data on where people click. Red
equals lots of clicks.
You can also see where people click with Google Analytics - and I actually prefer that.
Provided that you have enhanced link attribution turned on and set up, the Google
Analytics overlay is great (but some people prefer to see it on a click map type of
visual).
And if you go to Behavior -> Site Content -> All Pages and click on a URL, you can
open up the Navigation Summary for any URL - where people came from, and where they
went after. Highly useful stuff.
OK - back to click maps. So there is one useful bit here I like - you can see clicks on
non-links. If there's an image or text that people think is a link or want to be a link,
they'll click on it. And you can see that on a click map.
If you discover something (an image, a sentence etc) that people want to click on, but that isn't
a link, then consider making it one.
Attention maps
Some tools - like SessionCam, for instance - provide attention maps.
These show which areas of the page have been viewed the most within the users' browser viewport.
Scroll map
This shows you scroll depth - how far down people scroll. It can be very useful.
It's absolutely normal that the longer the page, the fewer people make it all the way
down. So once you acknowledge this, it becomes easier to prioritize content - what's
must-have and what's nice-to-have. Must-have content needs to be higher up.
Also, if your page is longer, you probably want to sprinkle multiple calls to action in
there - look at your scroll map to see where the biggest drop-off points are.
Analyzing the scroll map will also help you decide where you need to tweak your
design. If you have strong lines or color changes (e.g. a white background becomes
orange), those are called 'logical ends' - often people think that whatever follows is no
longer connected to what came before.
So you can add better eye paths and visual cues to spots where scrolling activity
seems to drop heavily.
You don't need a million visitors to record user sessions - this is almost like
qualitative data. Use tools like Inspectlet (great), SessionCam (terrible UI, but a
workhorse), or Clicktale to record user sessions, and watch your actual visitors interact
with your site. Some basic heatmap tools like Crazyegg don't even have this feature.
Session replays are extremely useful for observing how people fill out forms on your
site. You can configure event tracking for Google Analytics, but it won't provide the
level of insight that user session replay videos do.
One of our customers has an online resume building service. The process consists of 4
steps, and there was a huge drop-off in the first step. We watched videos to
understand how people were filling out the form. We noticed the first step had too
many form fields, and we saw that out of all the people who started filling out the form,
the majority of users stopped at this question:
Personal references! The form asked for 3. Most people had none. So they abandoned
the process. Solution: get rid of the references part!
Very difficult to learn this without watching the videos.
I typically spend half a day watching videos for a new client site. Not just random
videos, but ones where visitors hit key pages. Try to see what's different between
converters and non-converters, etc.
Form analytics
Not exactly mouse tracking, but several mouse tracking tools
like Inspectlet or Clicktale have this feature. Or use a standalone tool like Formisimo.
These tools will analyze form performance down to individual form fields.
Which form fields people leave empty, even though they're required?
And so on.
If your goal is to make your forms better - and form optimization is a key part of CRO - it really adds a whole new layer of insight when you have data about each and every
form field.
You can remove problematic fields, or re-word instructions, or add help text, or turn
inline field labels into top aligned labels. Whatever. The main point is that you know
WHERE the problem is, so you can try to address it.
No data on form fields = guessing. And your guesswork is no better than flipping a
coin. And you don't want to base the success of your work on a coin toss.
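To make that concrete, here's a tiny Python sketch of the kind of per-field report a form analytics tool gives you. The field names and counts are hypothetical - the point is that per-field drop-off data tells you exactly where to dig:

```python
# Hypothetical per-field form analytics: how many users reached each field,
# and how many abandoned the form while on it.
field_stats = [
    # (field name, users who reached it, users who abandoned the form here)
    ("email",               4200,  130),
    ("phone number",        4050,  610),
    ("personal references", 3400, 1900),
    ("password",            1500,   90),
]

for field, reached, abandoned in field_stats:
    print(f"{field:20s} reached: {reached:5d}  abandoned here: {abandoned:5d} "
          f"({abandoned / reached:.0%})")

# The field with the worst drop-off rate is where to start: re-word it, add help text,
# or - as in the resume-builder example above - remove it altogether.
worst_field, reached, abandoned = max(field_stats, key=lambda f: f[2] / f[1])
print(f"\nBiggest problem field: {worst_field} ({abandoned / reached:.0%} drop-off)")
```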
Good luck!
6 more lessons to go. Getting tired yet?
You see, in conversion optimization there's so much to know. And in this free course
I'm only helping you scratch the surface. But we're getting there, one step at a time.
So for instance if this is an ecommerce product page, the goal would be cart adds. So
the question to ask could be something like "What's holding you back from adding this
product to the cart right now?" or "What's keeping you from buying this right now?".
You don't always know which question is the best one to ask - there is no single best
question. Some questions will get far better response rates, but you won't know in
advance which ones.
So try to come up with multiple different wordings to the question.
Another way to ask about friction could be "Do you have any questions that you can't
find answers to?" - give them a Y/N option, and if they choose Yes, have them type in
their question.
This is my pro tip actually: ask all questions in the form of Y/N. It's easy to just choose
Yes or No. If you hit them with a complicated question right away, fewer people will take
the time to write. But if you start with Y/N, and only pop the open question once they've
picked an answer, they're much more likely to respond.
I see 2% - 4% response rates all the time.
So instead of "what's holding you back from..." you would ask "Is there anything
holding you back from ..."? Y/N. And ask to clarify.
And remember - a different question for each page (e.g. pricing page, category page
etc) - that's the only way to learn about the specific friction they're experiencing on
that very page.
amount of pages.
You need to do some experimentation with this, there's no universal rule.
User testing gives you direct input on how real users use your site. You may have
designed what you believe is the best user experience in the world, but watching real
people interact with your site is often a humbling experience. Because you are not your
user.
You can do this in person or remotely. When you do this in person - you go to the test users
or have them come to you - make sure you film the whole thing. Doing it remotely by
using online user testing tools is definitely the cheapest and fastest way to do it.
You'll catch yourself thinking "these idiots don't see that button". But the real idiot is you for putting that button
somewhere where people don't look. But that's okay - you can fix it!
In most cases you want to include 3 types of tasks in your test protocol.
A specific task
A broad task
Funnel completion
So let's say you run an ecommerce site that sells clothes. Your tasks might be as follows:
You have users who know what they want, and users who are browsing around. This
test protocol accounts for both. And funnel completion is the most important thing - you want to make purchasing as easy and obvious as possible.
Make sure you have them use dummy credit cards to complete the purchase. If you
don't let them complete the full checkout process, you're missing out on critical
insight.
If your platform does not allow dummy credit cards, you might want to run user tests
on a staging server (if available), or get some pre-paid credit cards and share that info
with the testers. Once they've completed the test, just refund the money and cancel the order.
Tasks to avoid
A typical rookie mistake is to form tasks as questions - "Do you feel this page is
secure?" or "Would you buy from this site?". That's complete rubbish, utterly useless.
The point is to OBSERVE the user. If they comment on security voluntarily, great. If
they don't, it's likely not an issue. Don't ask for their opinion on anything, just have
them complete tasks and pay attention to the comments they volunteer and to how
they (try to) use the website interface.
Asking whether they would buy or not is completely useless as humans are not
capable of accurately predicting their future actions. It's one thing to say that you
hypothetically would buy something, and it's a completely different thing to actually
take out your wallet and part with your money.
Test users know that they're not risking their actual money - so their behavior is
not 100% reflective of actual buyer behavior.
Once I ran user testing for an expensive hotel chain. Test users had no problem
booking rooms that cost over $500 per night. I seriously doubt they'd pay that much so
easily in "real life".
Another common mistake is telling them exactly what to do. For instance "use filters
to narrow down the selection". Don't do that. You just give them the goal (e.g. find
stores near you), and watch what happens.
Recruiting testers
Your testers should be people from your target audience (although ANY random tester
is better than no tester) who understand your offer, and might represent the people
you're actually trying to sell to.
Also - it should be the very first time they're using your site. So you can't use past
customers as testers. They're already familiar with your site, and have learned to use it
even if it has a ton of usability issues.
If your service/product is for a wide audience (e.g. you sell shoes or fitness products),
you have it easy. You can turn to services like usertesting.com or TryMyUI.com, and
recruit testers from their pool. I use usertesting.com all the time with every client.
If you have a very niche audience (e.g. software quality assurance testers or cancer
patients on a vegan diet), it can get more complicated. You can reach out to dedicated
communities (e.g. forums for software testers or people with cancer), use your
personal connections (friends of friends) or dedicated recruiting services (expensive).
If you do custom recruiting, you absolutely need to pay your testers, typically $25 to
$50 per tester (depending on how niche they are). Or much more if they're way more
niche.
How many to recruit
In most cases 5 to 10 test users is enough. 15 max - the law of diminishing returns kicks in
after that.
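That diminishing-returns curve can be made concrete with the classic usability-testing model (often attributed to Nielsen and Landauer): if each tester independently uncovers a given problem with probability p, then n testers uncover it with probability 1 - (1 - p)^n. A quick Python sketch, using the commonly cited p ≈ 0.31:

```python
# Share of usability problems uncovered as you add testers, per the 1 - (1 - p)^n model.
p = 0.31  # commonly cited probability that one tester hits a given problem

for n in (1, 3, 5, 10, 15, 20):
    found = 1 - (1 - p) ** n
    print(f"{n:2d} testers -> ~{found:.0%} of problems uncovered")

# Around 5 testers you've already found ~84% of the problems; past 15 the gains are tiny.
```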
How often
You should conduct user testing every time before you roll out a major change (run
tests on the staging server), or at least once a year. Definitely at the start of every
optimization project.
Once you have all the videos done, it's time to review them all at once. Go through the
videos and take notes of every single issue.
Fix the obvious problems and test everything else. If needed, recruit another 5 test
users to see if the issues were solved or any new ones were created in the process.
Technical testing
Heuristic analysis
Customer surveys
User testing
Once you go through all of these, you will have identified issues - some of them severe,
some minor.
Test
If there is an obvious opportunity to shift behavior, expose insight or increase
conversion, this bucket is where you place stuff for testing. If you have traffic and
leakage, this is the bucket for that issue.
Instrument
If an issue is placed in this bucket, it means we need to beef up the analytics reporting.
This can involve fixing, adding or improving tag or event handling in the analytics
configuration. We instrument both structurally and for insight into the pain points we've
found.
Hypothesize
This is where we've found a page, widget or process that's just not working well, but we
don't see a clear single solution. Since we need to really shift the behaviour at this crux
point, we'll brainstorm hypotheses. Driven by evidence and data, we'll create test plans
to find the answers to the questions and change the conversion or KPI figure in the
desired direction.
Just Do It - JFDI
This is a bucket for issues where a fix is easy to identify or the change is a no-brainer.
Items marked with this flag can either be deployed in a batch or as part of a controlled
test. Items here require low effort or are micro-opportunities to increase conversion,
and should be fixed.
Investigate
You need to do some testing with particular devices or need more information to
triangulate a problem you spotted. If an item is in this bucket, you need to ask
questions or do further digging.
Once we start optimizing, we start with high-priority items and leave low-priority ones for last -
but eventually all of it should get done. There are many different ways you can go
about it. A simple yet very useful way is to use a scoring system from 1 to 5 (1 = minor
issue, 5 = critically important).
In your report you should mark every issue with a star rating to indicate the level of
opportunity (the potential lift in site conversion, revenue or use of features):
★★★★★ This rating is for a critical usability, conversion or persuasion issue that will be
encountered by many visitors to the site or has a high impact. Implementing fixes or
testing is likely to drive significant change in conversion and revenue.
★★★★ This rating is for a critical issue that may not be viewed by all visitors or has a lesser
impact.
★★★ This rating is for a major usability or conversion issue that will be encountered by
many visitors to the site or has a high impact.
★★ This rating is for a major usability or conversion issue that may not be viewed by all
visitors or has a lesser impact.
★ This rating is for a minor usability or conversion issue; although it is low in potential
revenue or conversion value, it is still worth fixing at a lower priority.
There are 2 criteria that are more important than others when giving a score:
Ease of implementation (time, complexity, risk). Maybe you would love to
build a feature, but it takes months to do it. So it's not something you'd start
with.
Opportunity score (subjective opinion on how big of a lift you might get). Let's
say you see that the completion rate on the checkout page is 65%. That's a clear
indicator that there's lots of room for growth, and because this is a money page
(payments are taken here), any relative growth in percentages will be a lot of
absolute dollars.
Essentially: follow the money. You want to start with things that will make a positive
impact on your bottom line right away.
Be more analytical when assigning a score to items in the Test and Hypothesize buckets.
Now create a table / spreadsheet with 7 columns: Issue, Bucket, Location, Background, Action, Rating, Responsible.
For example:
Issue: Google Analytics script is loaded twice! | Bucket: Instrument | Location: Every page | Background: Loaded on line 207 and 506 of the home page; analytics bounce info is wrong | Action: Remove the double entry | Responsible: Jack
Issue: Missing value proposition | Bucket: Hypothesize | Location: Home page | Background: Give reasons to buy from you | Action: Add a prominent value proposition | Responsible: Jill
Most conversion projects will have 15-30 pages full of issues. "What to test" is not a
problem anymore, you will have more than enough.
Good research leads to insight, which leads to better hypotheses, which in turn leads to better results.
The better our hypothesis, the higher the chances that our treatment will work and
result in an uplift.
With a hypothesis we're matching identified problems with identified solutions while
indicating the desired outcome.
Identified problem: It's not clear what the product is - what's being sold on this page.
People don't buy what they don't understand.
Proposed solution: Let's rewrite the product copy so it's easy to understand what
the product is, who it's for, and what the benefits are. Let's use better product
photography to further improve clarity.
Hypothesis: By improving the clarity of the product copy and overall presentation,
people can better understand our offering, and we will increase the number of
purchases.
All hypotheses should be derived from your findings from conversion research. Don't test
without hypotheses. This is basic advice, but its importance can't be overstated. There
is no learning without proper hypotheses.
Next lesson: running tests!
The danger lies in stopping the test early after looking at preliminary results. There's no penalty for
having a larger sample size (it only takes more time).
As a very rough ballpark I typically recommend ignoring your test results until you have
at least 350 conversions per variation (or more - depending on the needed sample
size).
If you want to analyze your test results across segments, you need even more
conversions. It's a good idea to run tests targeting a specific segment, e.g. you have
separate tests for desktop, tablets and mobile.
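To get a feel for where a number like "350 conversions per variation" comes from, here's a minimal pre-test sample size sketch in Python using scipy. The baseline conversion rate and the lift we want to be able to detect are made-up assumptions - plug in your own:

```python
# Rough sample size estimate for a two-sided test of two proportions.
from scipy.stats import norm

baseline_cr = 0.04   # control converts at 4% (assumption)
expected_cr = 0.05   # smallest lift we care to detect: 5%, i.e. +25% relative (assumption)
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)   # ~1.96 for 95% confidence
z_beta = norm.ppf(power)            # ~0.84 for 80% power

variance = baseline_cr * (1 - baseline_cr) + expected_cr * (1 - expected_cr)
n_per_variation = (z_alpha + z_beta) ** 2 * variance / (expected_cr - baseline_cr) ** 2

print(f"Visitors needed per variation: {n_per_variation:,.0f}")                     # roughly 6,700
print(f"Control conversions at that point: {n_per_variation * baseline_cr:,.0f}")   # roughly 270
```

With these assumptions you would need on the order of 270 conversions in the control before even looking at the results - and detecting a smaller lift pushes the requirement far higher, which is why the 350-conversion ballpark is a floor, not a stopping rule.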
Once your test has a big enough sample size, we want to see if one or more variations is
better than the Control. For this we look at statistical significance.
Statistical significance (also called statistical confidence) is the probability that a test
result is accurate and not due to just chance alone. Noah from 37Signals said it well:
Running an A/B test without thinking about statistical confidence is worse than not
running a test at all - it gives you false confidence that you know what works for your site,
when the truth is that you don't know any better than if you hadn't run the test.
Most researchers use the 95% confidence level before making any conclusions. At a 95%
confidence level the likelihood of the result being random is very small (5%). Basically
we're saying: this result is not a fluke or caused by chance, it probably happened due
to the changes we made.
When an A/B testing dashboard (in Optimizely or a similar tool) says there is a 95%
chance of beating the original, it's asking the following question: assuming there is no
underlying difference between A and B, how often will we see a difference like we do in
the data just by chance? The answer to that question is called the significance level,
and statistically significant results mean that the significance level is low, e.g. 5% or
1%. Dashboards usually take the complement of this (e.g. 95% or 99%) and report it as
a "chance of beating the original" or something like that.
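Here's what that calculation looks like in a minimal Python sketch, using a standard two-proportion z-test. The visitor and conversion counts are made up, and real testing tools may use somewhat different statistics under the hood:

```python
from statsmodels.stats.proportion import proportions_ztest

# Made-up results: [control, variation]
conversions = [380, 450]
visitors = [9800, 9750]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)

print(f"Control:   {conversions[0] / visitors[0]:.2%}")
print(f"Variation: {conversions[1] / visitors[1]:.2%}")
# p-value: if there were truly no difference, how often we'd see a gap this big by chance
print(f"p-value: {p_value:.4f}")
print(f"Reported as roughly a {(1 - p_value):.1%} 'chance of beating the original'")

if p_value < 0.05:
    print("Statistically significant at the 95% level - but check your sample size first!")
else:
    print("Not significant - the difference could easily be random noise")
```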
If the results are not statistically significant, the results might be caused by random
factors and there's no relationship between the changes you made and the test results
(this is called the null hypothesis).
Once your testing tool says you've achieved 95% statistical significance (or higher),
that doesn't mean anything if you don't have enough sample size. Achieving
significance is not a stopping rule for a test.
Read this blog post to learn why. It's very, very important.
Tests lose all the time, and that's okay. It's about learning.
Some say only 1 out of 8 tests wins; some claim 75% of their tests win. Convert.com ran a
query on their data and found that 70% of the A/B tests performed by individuals
without agencies don't lead to any increase in conversion.
Ignore market averages for this kind of stuff, as your average tester has never done
any conversion research, and is likely to have their testing methodology wrong as well.
The same Convert.com research also showed that using a marketing agency for A/B
testing gives a 235% higher chance of a conversion increase. So competence clearly
matters (of course, your average marketing agency is not very competent at CRO).
When you know that more than half of your tests are likely not to produce a lift, you will
have a new-found appreciation for learning. Always test a specific hypothesis! That way
you never fully fail. With experience, you begin to realize that you sometimes learn
even more from tests that did not perform as expected.
Matt Gershoff, Conductrics:
A test is really data collection. Personally, I think the winner/loser vocabulary perhaps
induces risk aversion.
Some people indeed fear that a losing test would mean they did something wrong, and
because of that their boss, client etc. would not be happy with the performance. Doubt
and uncertainty start to cloud your thought process. The best way to overcome this
is to be on the same page with everyone. Before you even get into testing, get
everyone together and agree that this is about learning.
The company profit is really just a by-product of successfully building on your
customer theory.
Nazli Yuzak, Dell:
There lies the reason why many tests fail: an incorrect initial hypothesis. From numerous
tests, we've found that hypothesis creation has a major impact on the way a test is run,
what is tested, how long a test runs and, just as important, who's being tested.
Quite often you will find that one of the variations was a confident winner in a specific
segment. That's an insight you can build on! One or more segments may be over- and
under-performing, or they may be cancelling each other out - the average is a lie. The segment-level
performance will help you. (Note: in order to accurately assess performance across a
segment, you again need a decent sample size!)
If you genuinely have a test which failed to move any segments, it's a crap test; assess
how you came to this hypothesis and revise your whole hypothesis list.
And finally get testing again!
Conclusion
Conversion optimization is not a set of tactics you can learn from a blog post. It's a
process. Anyone who is not able to describe their CRO work as a systematic,
repeatable process is a complete amateur.
W. Edwards Deming:
If you can't describe what you are doing as a process, you don't know what you're doing.
Looking to further advance your optimization know-how? Make sure you're signed up
to the ConversionXL mailing list, as we regularly announce courses, workshops and
coaching programs.
http://conversionxl.com