Central Iowa UX Book Club discussion

Lean UX: Applying Lean Principles to Improve User Experience
Lean UX - Gothelf [Sept '13] > Testing with Very Busy Users


message 1: by Hannah (new)

Hannah (daylightatmidnight) | 14 comments Mod
I've gotten to the portion of the book that discusses validating your hypotheses as often as once a week. The example he uses is Meetup, a web service with a very wide user base. To me, it seems obvious that this is a very effective technique when you have a large user base to pull from. However, not every software product has one.

In my job hunt last fall, I got to talk to several different companies about their UX processes. One that struck me was Epic. They create software for managing patient information in doctors' offices and hospitals. Their end users are doctors and nurses: very, very busy professionals. The problem they run into is that while they would like to do more testing, they get very few volunteers because of the value those users place on their time. The testing they do manage is with internal doctor and nurse consultants who have retired from medicine.

I've been getting similar feedback when trying to recruit users for testing within my own company. The response is typically that our users don't have time to sit down with us very often. Once every couple of months is challenging; once a week is inconceivable. And yet testing with the general population would be almost useless due to the mental models and the amount of training our real users have been through.

So, I guess my question/point of discussion is this: when you have users who are difficult to get to volunteer, what is the best way to validate your designs? Do any of you have experience with this? So often the examples with these "guerrilla testing" methodologies are big web apps where anyone off the street could be a user...


message 2: by Edward (new)

Edward Cupps | 3 comments At my company, our users are fairly generous about lending their time to help us learn. The only caveat is that during SEC filing season little testing can be done, since their first priority is their own company's work. This is understandable. Depending on the time of year, we can test anywhere from once every two weeks to multiple times a week. It really depends on their schedules, so we have to be nimble in planning testing. To Hannah's point, we can't lean on the same users over and over, and we can't use random people either. We try to cycle through our test users, with the goal of not overburdening them. It seems to be working, and like Agile, we iterate on that process as well.
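
Just to make the rotation idea concrete, here's a little Python sketch of one way to schedule it: pick whoever was tested least recently and skip anyone still inside a cooldown window. The participant names, dates, and six-week cooldown are all made up for illustration, not how we actually track it.

    from datetime import date, timedelta

    # Hypothetical participant pool: name -> date last tested (None = never tested)
    pool = {
        "alice": date(2013, 8, 1),
        "bob": None,
        "carol": date(2013, 9, 10),
        "dave": date(2013, 6, 15),
    }

    COOLDOWN = timedelta(weeks=6)  # assumed minimum rest between sessions

    def pick_participants(pool, today, needed=2):
        """Return up to `needed` participants, favoring those tested least
        recently and skipping anyone still inside the cooldown window."""
        eligible = [
            name for name, last in pool.items()
            if last is None or today - last >= COOLDOWN
        ]
        # Never-tested users come first, then oldest last-tested date.
        eligible.sort(key=lambda n: (pool[n] is not None, pool[n] or date.min))
        return eligible[:needed]

    print(pick_participants(pool, date(2013, 9, 20)))  # -> ['bob', 'dave']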


message 3: by Marie (new)

Marie | 2 comments The research groups of Profs Cook and Hofmann (Statistics, ISU) have had good experience with Amazon Mechanical Turk. For testing fundamental concepts (e.g., how well users can differentiate element A from element B in different types of visualizations), it is possible to design experiments that take into account that the test subjects may not have the same expertise as end users. My experience is that this approach is a good way to screen early design phases and shortcut the investment needed from your busiest users. Plus, early findings may be leveraged to generate interest.
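
For anyone who hasn't posted to Mechanical Turk programmatically, here is a minimal sketch of publishing one such forced-choice perception question as a HIT, using the AWS boto3 SDK's MTurk client. The question wording, reward, and assignment counts are all illustrative, not values from an actual study.

    import boto3

    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        # Sandbox endpoint, so a dry run costs nothing; drop this for production.
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # One forced-choice perception question (illustrative wording).
    question_xml = """\
    <QuestionForm xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2005-10-01/QuestionForm.xsd">
      <Question>
        <QuestionIdentifier>chart_ab</QuestionIdentifier>
        <QuestionContent>
          <Text>Which chart shows the larger difference between groups: A or B?</Text>
        </QuestionContent>
        <AnswerSpecification>
          <FreeTextAnswer/>
        </AnswerSpecification>
      </Question>
    </QuestionForm>"""

    hit = mturk.create_hit(
        Title="Compare two simple charts",
        Description="Answer one question about a pair of charts.",
        Keywords="visualization, perception, study",
        Reward="0.10",                     # USD, passed as a string
        MaxAssignments=30,                 # 30 independent subjects
        LifetimeInSeconds=24 * 60 * 60,    # visible for one day
        AssignmentDurationInSeconds=300,   # 5 minutes per worker
        Question=question_xml,
    )
    print("HIT id:", hit["HIT"]["HITId"])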


message 4: by Kathryn (new)

Kathryn Downing | 2 comments We have a similar problem. I guess an alternative would be to work testing into other avenues? For example, we bring agents in for insurance product training. While they're there, we could work in a short demo of a feature and get feedback. Or, we could ask webinar participants a question or two at the end of a webinar. Or take prototypes to agent events.

Maybe you could also identify people within the company (outside the development team) who might have some similarities to the target user.

