Kindle Notes & Highlights
by
Sean Ellis
Started reading
July 4, 2018
Instead of vague suggestions like “Our sign-up form is too hard; we should make it easier to sign up,” submissions must articulate exactly what change is to be tested, why that change might improve results, and how results will be measured.
simple proposition of expected cause and effect.
predicting the results of a test ahead of time is an inexact science at best, and so some teams forgo it.
Most experiments should measure more than one metric because sometimes, improvements in one metric come at the expense of others.
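The multi-metric idea above can be made concrete with a guardrail check: ship only if the primary metric improves and no secondary metric declines past a tolerated floor. This is a hedged sketch, not from the book; the function name, the dict-based interface, and the return strings are all hypothetical:

```python
def evaluate(results: dict, primary: str, guardrails: dict) -> str:
    """Decide an experiment outcome from relative metric changes.

    results    -- metric name -> relative change (e.g. +0.08 means +8%)
    primary    -- the metric the experiment is meant to move
    guardrails -- metric name -> maximum tolerated decline (negative number)
    """
    # Block the change if any guardrail metric regressed too far,
    # even when the primary metric improved.
    for metric, floor in guardrails.items():
        if results.get(metric, 0.0) < floor:
            return f"blocked: {metric} regressed past guardrail"
    return "ship" if results[primary] > 0 else "no effect"
```

For example, a promotion that lifts revenue 5% but drops retention 5% against a 2% guardrail would come back blocked, capturing the point that one metric's gain can come at another's expense.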
While your initial efforts should leverage the creativity of people within the company, over time you should consider opening the call for ideas to outside suppliers and partners. Outsiders can offer surprisingly helpful suggestions that help teams break out of preconceived notions about the types of things they should be trying.
when we shared our focus, a wealth of great ideas flooded in.
conjecture,
as doubling down.
squabble
This is where the cross-functional collaboration really comes into play. Using our trusty example of the shopping app, the marketing team member might work with the graphic design and email teams to create the first-time-shopper-promotion graphics and marketing copy. She’ll also collaborate with the data analyst to identify both the control group—the group of users who won’t be exposed to the experiment—and the experiment group, and to ensure that results can be tracked properly.
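Splitting users into the control and experiment groups described above is usually done deterministically, so a user lands in the same bucket on every visit. A minimal sketch under that assumption (the book does not prescribe an implementation; `assign_group` and its parameters are hypothetical):

```python
import hashlib

def assign_group(user_id: str, experiment: str, holdout: float = 0.5) -> str:
    """Deterministically bucket a user into control or experiment.

    Hashing user_id together with the experiment name keeps the
    assignment stable across sessions and independent across experiments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "control" if bucket < holdout else "experiment"
```

For the shopping-app promotion, the data analyst would call something like `assign_group(user_id, "first-time-shopper-promo")` at render time and log the group alongside each tracked event, which is what makes results trackable later.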
Once the experiments are ready to go, the growth lead will send a notification around the company that they are being launched so that there are no surprises for other teams also working on the product.
A poor test is one less opportunity to learn, slowing the team down, and bad data can send the team down a very wrong path.
99 PERCENT STATISTICAL CONFIDENCE LEVEL:
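The 99 percent threshold highlighted here can be checked with a standard two-proportion z-test on the control and experiment conversion rates. A stdlib-only sketch (the function name and interface are mine, not the book's):

```python
import math

def confidence(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-sided confidence (1 - p-value) that two conversion rates
    differ, using a pooled two-proportion z-test."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 0.0
    z = abs(p_a - p_b) / se
    # Normal CDF via erf; two-sided p-value.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))
    return 1 - p_value
```

A result would clear the bar described in the highlight when `confidence(...) >= 0.99`, i.e. there is less than a 1 percent chance the observed difference is noise.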
Reaching consensus is dicier when the results of a test are neither clearly positive nor negative, especially if running it involved a good deal of time and effort.
team members may want to let a test run much longer than is cost effective in the hopes that a larger sample size will change the current trend.
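One way to avoid the open-ended "let it run longer" debate is to compute the required sample size before launch and commit to stopping there. A rough power-analysis sketch, assuming the team can name the smallest lift worth detecting (the function and its defaults are illustrative, not from the book):

```python
import math
from statistics import NormalDist

def sample_size_per_group(base_rate: float, min_lift: float,
                          confidence: float = 0.99,
                          power: float = 0.80) -> int:
    """Approximate users needed per group to detect an absolute lift
    of `min_lift` over `base_rate` with a two-sided test."""
    z_alpha = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    z_beta = NormalDist().inv_cdf(power)
    p_bar = base_rate + min_lift / 2  # average rate under the alternative
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / min_lift ** 2
    return math.ceil(n)
```

If the test reaches that sample size without a clear winner, the effect, if any, is smaller than the team cared about, which supports the next highlight's advice to fall back to the original.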
when results are inconclusive, the best course is to stick with the origina...
No matter how the reports are stored, the essential requirement is that the results of all experiments are easily searchable so that teams can revisit them and consider variations, and also so that they can assure that they are not repeating tests, which is all too easy to do, especially when operating at high tempo.
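The searchable experiment store described above can be as simple as a keyword index over name, hypothesis, and tags. A minimal in-memory sketch (the book leaves tooling open; these class and field names are hypothetical):

```python
from dataclasses import dataclass, field

@dataclass
class Experiment:
    name: str
    hypothesis: str
    metrics: list
    result: str              # e.g. "win", "loss", "inconclusive"
    tags: set = field(default_factory=set)

class ExperimentLog:
    def __init__(self):
        self._experiments = []

    def record(self, exp: Experiment) -> None:
        self._experiments.append(exp)

    def search(self, keyword: str) -> list:
        """Case-insensitive match against name, hypothesis, and tags,
        so teams can spot prior variations and avoid repeat tests."""
        kw = keyword.lower()
        return [e for e in self._experiments
                if kw in e.name.lower()
                or kw in e.hypothesis.lower()
                or kw in {t.lower() for t in e.tags}]
```

Checking `log.search("signup-form")` before green-lighting a new test is exactly the repeat-test safeguard the highlight calls for, and it stays cheap even at high tempo.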
ideas should be brainstormed well before using the process described above.
Are there any short-term objectives that the team needs to work toward? If the focus is staying the same, this is a simple confirmation. If it’s changing, say from user acquisition to retention or monetization, the team should discuss the new focus area and the reasoning behind the change.
Recognizing top contributors
The speed with which the growth hacking process can produce significant improvements is astonishing. Sometimes a hack can go from germ of an idea to growth driver with just two weeks or so of work, including time spent on the preparatory data analysis and the first team meeting.
One of the great things about growth hacking is that even failed experiments can lead to significant learning over an incredibly short time frame.
user acquisition, activation, retention, and monetization—