Agile Testing: A Practical Guide for Testers and Agile Teams
A critique can include both praise and suggestions for improvement.
Quadrant 3 classifies the business-facing tests that exercise the working software to see if it doesn’t quite meet expectations or won’t stand up to the competition.
User Acceptance Testing (UAT) gives customers a chance to give new features a good workout and see what changes they may want in the future, and it’s a good way to gather new story ideas.
Usability testing is an example of a type of testing that has a whole science of its own. Focus groups might be brought in,
Knowledge of how people use systems is an advantage when testing usability.
Exploratory testing is central to this quadrant. During exploratory testing sessions, the tester simultaneously designs and performs tests, using critical thinking to analyze the results.
Exploratory testing is a more thoughtful and sophisticated approach than ad hoc testing.
From the start of each project and story, testers start thinking of scenarios they want to try. As small chunks of testable code become available, testers analyze test results, and as they learn, they find new areas to explore.
Note (Omer Aslam): Exploratory Testing
As a result, it is through this type of testing that many of the most serious bugs are usually found.
In the past, we’ve heard complaints that agile development seems to ignore the technology-facing tests that critique the product. These complaints might be partly due to agile’s emphasis on having customers write and prioritize stories.
If we know the requirements for performance, security, interaction with other systems, and other nonfunctional attributes before we start coding, it’s easier to design and code with that in mind. Some of these might be more important than actual functionality.
Note (Omer Aslam): Performance is important
Technology-facing tests that critique the product should be considered at every step of the development cycle and not left until the very end. In many cases, such testing should even be done before functional testing.
When you and your team plan a new release or project, discuss which types of tests from Quadrants 3 and 4 you need, and when they should be done. Don’t leave essential activities such as load or usability testing to the end, when it might be too late to rectify problems.
For most products, we need all four categories of testing to feel confident we’re delivering the right value.
Note (Omer Aslam): When story done
Without a foundation of test-driven design, automated unit and component tests, and a continuous integration process to run the tests, it’s hard to deliver value in a timely manner.
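A minimal sketch of what that foundation might look like at the unit level, using a made-up ShippingCalculator and business rule rather than anything from the book: the test is written first, the simplest implementation follows, and a continuous integration build runs the whole suite on every check-in.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical illustration only: the tests are written before the code,
    // and the minimal class below is then written to make them pass.
    class ShippingCalculatorTest {

        @Test
        void ordersOverOneHundredDollarsShipFree() {
            assertEquals(0.00, new ShippingCalculator().shippingFor(120.00), 0.001);
        }

        @Test
        void smallerOrdersPayTheFlatRate() {
            assertEquals(5.95, new ShippingCalculator().shippingFor(20.00), 0.001);
        }

        // Minimal implementation, kept inside the example so the sketch is self-contained.
        static class ShippingCalculator {
            double shippingFor(double orderTotal) {
                return orderTotal > 100.00 ? 0.00 : 5.95;
            }
        }
    }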
As we pointed out in Chapter 8, stories are only a starting place for a prolonged conversation about the desired behavior.
He created a template on the team wiki for this purpose. The checklist specifies the conditions of satisfaction—what the business needs from the story.
Customers can write a few high-level test cases to help round out a story prior to the start of the iteration, possibly using some type of checklist.
Some customer teams simply write a couple of tests, maybe a happy path and a negative test, on the back of each story card. Some write more detailed examples in spreadsheets or whatever format they’re comfortable working with.
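For example, suppose the story (a made-up one, not from the book) is "Transfer funds between accounts." The happy path and one negative test jotted on the card might later be automated along these lines; the small Account class is a stand-in included only so the sketch is self-contained.

    import org.junit.jupiter.api.Test;
    import static org.junit.jupiter.api.Assertions.*;

    // Hypothetical story: "Transfer funds between accounts."
    // One happy path and one negative test, as might be sketched on the back of the card.
    class TransferStoryTest {

        // Minimal stand-in for the real account object.
        static class Account {
            private double balance;
            Account(double balance) { this.balance = balance; }
            double balance() { return balance; }
            void transferTo(Account other, double amount) {
                if (amount > balance) {
                    throw new IllegalArgumentException("insufficient funds");
                }
                balance -= amount;
                other.balance += amount;
            }
        }

        @Test
        void happyPath_transferMovesMoneyBetweenAccounts() {
            Account checking = new Account(100.00);
            Account savings = new Account(0.00);
            checking.transferTo(savings, 40.00);
            assertEquals(60.00, checking.balance(), 0.001);
            assertEquals(40.00, savings.balance(), 0.001);
        }

        @Test
        void negative_transferLargerThanBalanceIsRejected() {
            Account checking = new Account(10.00);
            Account savings = new Account(0.00);
            assertThrows(IllegalArgumentException.class, () -> checking.transferTo(savings, 40.00));
            assertEquals(10.00, checking.balance(), 0.001);
        }
    }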
the product owner for Lisa’s team, often illustrates complex calculations and algorithms in spreadsheets, which th...
Visuals such as flow diagrams and mind maps are good ways to describe an overview of a story’s functionality, especially if they’re drawn by a group of customers, programmers, and testers.
If your wiki knowledgebase has grown to the point where it’s hard to find anything, hire a technical writer to transform it into organized, usable documentation.
The goal of business-facing tests that support the team is to promote communication and collaboration between customers and developers, and to enable teams to deliver real value in each iteration. Some teams do this best with unit-level tools, and others adapt better to functional-level test tools.
After the simple test passes, I write more tests, covering more business rules. I write some more complex tests, run them, and the programmer updates the code or tests as needed. The story is filling out to deliver all of the desired value.
Not all code is testable using automation, but work with the programmers to find alternative solutions to your problems.
Manual test scenarios can also drive programming if you share them with the programmers early.
Note (Omer Aslam): Share your test scenarios early with programmers
You won’t have time to do any Quadrant 3 tests if you haven’t automated the tests in Quadrants 1 and 2.
Exploratory testing combines learning, test design, and test execution into one test approach.
PerlClip is an example of a tool that you can use to test a text field with different kinds of inputs.
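PerlClip is best known for generating "counterstrings": strings that label their own length at each asterisk, so you can tell at a glance how many characters a text field really accepted. If PerlClip isn't handy, a few lines of code produce the same kind of input; the Java sketch below is a stand-in written for this note, not part of PerlClip itself.

    public class CounterString {
        // Generates a counterstring in the spirit of PerlClip: each '*' sits at the
        // 1-based position given by the digits just before it, e.g. "2*4*6*8*11*".
        // Paste it into a field to see exactly how many characters were kept.
        static String counterString(int length) {
            StringBuilder reversed = new StringBuilder();
            int pos = length;
            while (pos > 0) {
                String block = pos + "*";
                if (block.length() > pos) {
                    block = "*".repeat(pos);  // not enough room left for a full "<number>*" block
                }
                reversed.append(new StringBuilder(block).reverse());  // building back to front
                pos -= block.length();
            }
            return reversed.reverse().toString();
        }

        public static void main(String[] args) {
            System.out.println(counterString(256));  // a 256-character self-describing test input
        }
    }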
You should be able to create a performance test that can be run and continue to run as you add more and more functionality to the workflow.
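One way to do that (a sketch assuming a hypothetical placeOrder() workflow and a made-up 500 ms budget) is to time the workflow inside an ordinary automated test that stays in the regression suite, so the budget is re-checked every time new functionality lands.

    import org.junit.jupiter.api.Test;
    import java.time.Duration;
    import static org.junit.jupiter.api.Assertions.assertTimeout;

    // Hypothetical workflow performance check that lives in the regression suite.
    class OrderWorkflowPerformanceTest {

        @Test
        void placingAnOrderStaysWithinItsResponseTimeBudget() {
            assertTimeout(Duration.ofMillis(500), () -> new OrderWorkflow().placeOrder("SKU-123", 2));
        }

        // Trivial stand-in so the sketch compiles; the real version would drive the application.
        static class OrderWorkflow {
            void placeOrder(String sku, int quantity) {
                // call into the application under test here
            }
        }
    }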
For many applications, correct functionality is irrelevant without the necessary performance.
Maintainability is not something that is easy to test. In traditional projects, it’s often done by the use of full code reviews or inspections.
After you’ve defined your performance goals, you can use a variety of tools to put a load on the system and check for bottlenecks. This can be done at the unit level, with tools such as JUnitPerf, httperf, or a home-grown harness. Apache JMeter, The Grinder, Pounder, fwptt, and OpenWebLoad are more examples.
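A home-grown harness can be very small. The sketch below (with a placeholder URL and made-up thread and request counts) simply runs a pool of worker threads against one endpoint and reports average and worst response times, which is often enough to expose an obvious bottleneck before reaching for JMeter or The Grinder.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Callable;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    // Tiny home-grown load harness: worker threads hit one URL and report timings.
    // The URL and counts are placeholders; point it at an endpoint in your own application.
    public class MiniLoadTest {
        public static void main(String[] args) throws Exception {
            String url = "http://localhost:8080/search?q=widgets";
            int threads = 10;
            int requestsPerThread = 50;

            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(URI.create(url)).GET().build();

            ExecutorService pool = Executors.newFixedThreadPool(threads);
            Callable<Long> oneRequest = () -> {
                long start = System.nanoTime();
                client.send(request, HttpResponse.BodyHandlers.discarding());
                return (System.nanoTime() - start) / 1_000_000;   // milliseconds
            };

            List<Future<Long>> results = new ArrayList<>();
            for (int i = 0; i < threads * requestsPerThread; i++) {
                results.add(pool.submit(oneRequest));
            }

            long total = 0, worst = 0;
            for (Future<Long> f : results) {
                long ms = f.get();
                total += ms;
                worst = Math.max(worst, ms);
            }
            pool.shutdown();

            System.out.printf("requests=%d  avg=%dms  worst=%dms%n",
                    results.size(), total / results.size(), worst);
        }
    }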
If there are specific performance criteria that have been defined for specific functionality, we suggest that performance testing be done as part of the iteration to ensure that issues are found before it is too late to fix them.
Garbage collection is one tool used to release memory back to the program. However, it can mask severe memory issues.
Use the performance and load tests described in the previous section to verify that there aren’t any memory problems.
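One rough way to reuse that kind of test for memory checking (a sketch with placeholder numbers, not a substitute for a profiler) is to repeat the same operation many times, request a garbage collection, and watch whether the heap still in use keeps creeping upward from round to round.

    // Rough memory-leak smoke test: repeat an operation, ask for GC, and watch
    // whether used heap keeps growing. The workload below is a placeholder.
    public class MemoryCreepCheck {
        public static void main(String[] args) throws InterruptedException {
            Runtime rt = Runtime.getRuntime();
            long previousUsed = 0;

            for (int round = 1; round <= 5; round++) {
                for (int i = 0; i < 100_000; i++) {
                    doOneOperation();          // stand-in for the workflow under test
                }
                System.gc();                   // a request, not a guarantee
                Thread.sleep(500);             // give the collector a moment
                long used = rt.totalMemory() - rt.freeMemory();
                System.out.printf("round %d: used heap = %d KB (change %+d KB)%n",
                        round, used / 1024, (used - previousUsed) / 1024);
                previousUsed = used;
            }
        }

        static void doOneOperation() {
            // Replace with a call into the real application. This throwaway allocation
            // should show a flat trend across rounds; a steady climb suggests a leak.
            new StringBuilder("transaction-").append(System.nanoTime()).toString();
        }
    }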