Impact Mapping: Making a big impact with software products and projects
66%
time between them for follow-up activities. Beware of measuring what is easy to track instead of what is valuable.
68%
Ask: 'If we achieve the key targets for metrics with a completely different scope than planned, have we succeeded?' If the answer is 'No', go back to start: you don't have the right metrics.
70%
Is it realistic that the feature will contribute to the impact? Is the impact valid for the actor? Will the impact really contribute to achieving the goal?
71%
a shorter journey to the key objectives. Don't criticise any ideas, just throw them on there. Use the existing skeleton map structure as an inspiration and ask the following questions:
71%
What else could those guys do for us? Who else can help? How? Who can obstruct us?
71%
previous ideas give people inspiration, but asking for new ideas makes people think harder.
71%
To really push things to the limit here, you can try one of the collaborative games presented in the books Innovation Games [Hohmann06] and Gamestorming [Gray10].
73%
Ask business sponsors to prioritise impacts, not deliverables (map level 3 and beyond). From my experience, business users think more clearly about business activities and impacts than about software features,
73%
If you want to put more structure into this conversation, investigate the Kano and purpose-alignment models. The Kano model [Cohn06] provides a questionnaire to categorise features into mandatory (must-have), linear (more is better) and exciters (a small amount can dramatically increase satisfaction).
73%
The purpose alignment model [Pixton09] breaks features down into categories of market-differentiating, parity (should be good enough), partner (non-mission-critical, buy som...
75%
What is the simplest way to support this activity? What else could we do? If we're unsure about the assumption, what is the simplest way to test it? Could we test it without software? Could we start earning with a partly manual process?
75%
If you can't define small experiments to test key assumptions, try user story mapping [Patton08b and Patton08c] or the hamburger method [Adzic12] to identify iterative delivery slices that could help you earn or learn sooner.
76%
The first good approach is to list measurements as bullet points next to a particular map node or inside the node. The benefit of this approach is that it makes the distinction between measurements and actors or impacts quite clear, and keeps the map simple.
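A minimal sketch of that idea (my own illustration, not from the book), assuming a hypothetical Python representation in which each map node carries its measurements directly, so metrics stay next to the actor, impact or deliverable they qualify:

    # Hypothetical data-structure sketch: measurements listed "inside the node".
    from dataclasses import dataclass, field

    @dataclass
    class Measurement:
        name: str            # what we measure, e.g. "invitations per active player per week"
        target: float        # the key target committed to
        current: float = 0.0 # latest observed value

    @dataclass
    class MapNode:
        label: str                                           # actor, impact or deliverable
        measurements: list[Measurement] = field(default_factory=list)
        children: list["MapNode"] = field(default_factory=list)

    # Example: an impact node with its measurement kept alongside it
    impact = MapNode(
        label="Players invite friends",
        measurements=[Measurement("invitations per active player per week", target=2.0)],
    )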
79%
shows how plans that narrow attention to a single outcome, typically one of high commercial interest, have “an extraordinary power to increase yields”,
80%
then the whole idea of supporting invitations with further functionality should be questioned. Treat such things as failed experiments if you can. Seriously think about removing functions that fail to meet your expectations from the software.
82%
The overall vision map should capture desired longer-term effects and impacts on consumers. Deliverables on that map become product milestones, each with a separate lower-level impact map when its time comes.
82%
Measure progress periodically against key milestone metrics. If the delivery fails to achieve key targets, it's time for a strategy rethink!
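A minimal sketch of such a periodic check (my own illustration; the function and metric names are hypothetical), assuming a milestone's key targets and latest measurements are kept as simple dictionaries:

    # Compare measured values against key milestone targets and flag a rethink.
    def milestone_on_track(targets: dict[str, float], measured: dict[str, float]) -> bool:
        """Return True only if every key metric meets or exceeds its target."""
        return all(measured.get(metric, 0.0) >= target for metric, target in targets.items())

    targets = {"invitations_per_player_per_week": 2.0, "activation_rate": 0.4}
    measured = {"invitations_per_player_per_week": 0.7, "activation_rate": 0.45}

    if not milestone_on_track(targets, measured):
        print("Key milestone targets missed - time for a strategy rethink")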