Kindle Notes & Highlights
by Lisa Crispin
Read between June 29 - July 25, 2022
A critique can include both praise and suggestions for improvement.
Quadrant 3 classifies the business-facing tests that exercise the working software to see if it doesn’t quite meet expectations or won’t stand up to the competition.
User Acceptance Testing (UAT) gives customers a chance to give new features a good workout and see what changes they may want in the future, and it’s a good way to gather new story ideas.
Usability testing is an example of a type of testing that has a whole science of its own. Focus groups might be brought in.
Knowledge of how people use systems is an advantage when testing usability.
Exploratory testing is central to this quadrant. During exploratory testing sessions, the tester simultaneously designs and performs tests, using critical thinking to analyze the results.
Exploratory testing is a more thoughtful and sophisticated approach than ad hoc testing.
As a result, it is through this type of testing that many of the most serious bugs are usually found.
In the past, we’ve heard complaints that agile development seems to ignore the technology-facing tests that critique the product. These complaints might be partly due to agile’s emphasis on having customers write and prioritize stories.
Technology-facing tests that critique the product should be considered at every step of the development cycle and not left until the very end. In many cases, such testing should even be done before functional testing.
When you and your team plan a new release or project, discuss which types of tests from Quadrants 3 and 4 you need, and when they should be done. Don’t leave essential activities such as load or usability testing to the end, when it might be too late to rectify problems.
Without a foundation of test-driven design, automated unit and component tests, and a continuous integration process to run the tests, it’s hard to deliver value in a timely manner.
As we pointed out in Chapter 8, stories are only a starting place for a prolonged conversation about the desired behavior.
He created a template on the team wiki for this purpose. The checklist specifies the conditions of satisfaction—what the business needs from the story.
Customers can write a few high-level test cases to help round out a story prior to the start of the iteration, possibly using some type of checklist.
Some customer teams simply write a couple of tests, maybe a happy path and a negative test, on the back of each story card. Some write more detailed examples in spreadsheets or whatever format they’re comfortable working with.
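A back-of-the-card pair of tests can be tiny. The sketch below is purely illustrative: the `transfer` function and its rules are hypothetical stand-ins for whatever the story describes, showing the happy-path/negative-test pattern.

```python
# Hypothetical story: "Transfer funds out of an account."
def transfer(balance, amount):
    """Toy domain rule, invented only to illustrate the pattern."""
    if amount <= 0 or amount > balance:
        raise ValueError("invalid transfer amount")
    return balance - amount

# Happy path: a valid transfer reduces the balance.
assert transfer(100, 30) == 70

# Negative test: overdrawing is rejected.
try:
    transfer(100, 200)
    assert False, "expected the transfer to be rejected"
except ValueError:
    pass
```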
the product owner for Lisa’s team, often illustrates complex calculations and algorithms in spreadsheets, which th...
Visuals such as flow diagrams and mind maps are good ways to describe an overview of a story’s functionality, especially if they’re drawn by a group of customers, programmers, and testers.
If your wiki knowledge base has grown to the point where it’s hard to find anything, hire a technical writer to transform it into organized, usable documentation.
The goal of business-facing tests that support the team is to promote communication and collaboration between customers and developers, and to enable teams to deliver real value in each iteration. Some teams do this best with unit-level tools, and others adapt better to functional-level test tools.
After the simple test passes, I write more tests, covering more business rules. I write some more complex tests, run them, and the programmer updates the code or tests as needed. The story is filling out to deliver all of the desired value.
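That progression, a simple test first, then tests for each further business rule, might look like this sketch. The discount rules here are hypothetical, chosen only to show tests accumulating as the story fills out.

```python
# Hypothetical story: order discounts, grown one business rule at a time.
def discount(total, loyal=False):
    """Rule 2: orders of $100+ get 10% off. Rule 3: loyal customers get 5% more."""
    pct = 0.10 if total >= 100 else 0.0
    if loyal:
        pct += 0.05
    return round(total * (1 - pct), 2)

assert discount(50) == 50.0               # simplest test, written and passed first
assert discount(200) == 180.0             # next business rule covered
assert discount(200, loyal=True) == 170.0 # more complex rule added as the story grows
```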
Not all code is testable using automation, but work with the programmers to find alternative solutions to your problems.
You won’t have time to do any Quadrant 3 tests if you haven’t automated the tests in Quadrants 1 and 2.
Exploratory testing combines learning, test design, and test execution into one test approach.
PerlClip is an example of a tool that you can use to test a text field with different kinds of inputs.
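PerlClip's best-known input is James Bach's "counterstring": a string whose embedded digits report their own position, so when a text field truncates it you can read off exactly how many characters were kept. A minimal re-creation (the function name and the padding choice for tight spots are my own):

```python
def counterstring(length, marker="*"):
    """Build a self-describing test string: each number gives the
    position of the marker that follows it, e.g. '*3*5*7*10*'."""
    tokens = []
    pos = length
    while pos > 0:
        token = f"{pos}{marker}"
        if len(token) > pos:          # no room for the digits at the start
            token = marker * pos      # pad with bare markers instead
        tokens.append(token)
        pos -= len(token)
    return "".join(reversed(tokens))

# Paste the result into a text field; the last visible number
# tells you where the field cut the input off.
print(counterstring(10))  # *3*5*7*10*
```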
You should be able to create a performance test that can be run and continue to run as you add more and more functionality to the workflow.
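One lightweight way to make such a test re-runnable is a timing assertion that lives alongside the functional tests. The workflow function and budget below are hypothetical placeholders; the point is that the same check keeps running as the workflow grows.

```python
import time

def checkout_workflow(items):
    """Stand-in for the real workflow under test (hypothetical)."""
    return sum(items)

def assert_fast_enough(fn, args, budget_seconds, runs=5):
    """Re-runnable performance check: fail if the median run exceeds the budget."""
    timings = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        timings.append(time.perf_counter() - start)
    timings.sort()
    median = timings[len(timings) // 2]
    assert median <= budget_seconds, f"median {median:.4f}s over budget"
    return median

# Runs today, and keeps running as more functionality is added to the workflow.
assert_fast_enough(checkout_workflow, ([1] * 10_000,), budget_seconds=0.5)
```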
For many applications, correct functionality is irrelevant without the necessary performance.
Maintainability is not something that is easy to test. In traditional projects, it’s often done by the use of full code reviews or inspections.
After you’ve defined your performance goals, you can use a variety of tools to put a load on the system and check for bottlenecks. This can be done at the unit level, with tools such as JUnitPerf, httperf, or a home-grown harness. Apache JMeter, The Grinder, Pounder, ftptt, and OpenWebLoad are more…
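The dedicated tools do this at scale, but the "home-grown harness" idea can be sketched in a few lines: several threads hammer a call and report throughput. The `handle_request` function and the worker counts are hypothetical.

```python
import threading
import time

def handle_request(payload):
    """Stand-in for the system call being loaded (hypothetical)."""
    return payload * 2

def run_load(workers, requests_per_worker):
    """Tiny home-grown load harness: N threads issue requests, return
    (requests completed, requests per second)."""
    completed = []
    lock = threading.Lock()

    def worker():
        for i in range(requests_per_worker):
            handle_request(i)
        with lock:
            completed.append(requests_per_worker)

    start = time.perf_counter()
    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.perf_counter() - start
    total = sum(completed)
    return total, total / elapsed

total, rps = run_load(workers=4, requests_per_worker=1000)
```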
If there are specific performance criteria that have been defined for specific functionality, we suggest that performance testing be done as part of the iteration to ensure that issues are found before it is too late to fix them.
Garbage collection is one mechanism for reclaiming memory so the program can reuse it. However, it can mask severe memory issues.
Use the performance and load tests described in the previous section to verify that there aren’t any memory problems.

