Building a User Testing Process

Overview

I’ve been fortunate enough to work on PWA, Android, and iOS applications supporting standard commerce flows, pop-up event flows, and in-store kiosk experiences. Due to the rapid nature of our sprint process and the depth of knowledge available for ecommerce, we were able to build a significant amount of our apps before taking the time to begin true user testing. We hit a point where we knew we needed to bring in user testing to get valuable feedback and steer us back on course anywhere we might have gone astray.

Recognizing the Need

Despite the abundance of online resources, we still had a few areas where we either purposefully deviated from the norm or had built features uncommon enough that there was little relevant material to research. Small steps outside the standard included a visual slider for the category tree rather than a text-based map in the navigation, and 1-tap purchasing offered directly from product lists while browsing. Larger areas of concern mostly centered around our AI features, primarily an intelligent bundle builder.

All of our efforts were aimed at making the app more interactive and immersive, but it had to stand up to reality. In some places we had to adjust our original plan for scope creep or unforeseen obstacles that cut off our ability to continue with the original flow, leading us to craft new solutions as we went. We needed real users to give it a try so we could see how effective our choices had been.

Gameplan

At the time, Upscale had no user testing practices or history. We also had a very limited budget and a very small team available to contribute, but we knew what we wanted to do wouldn’t require many resources either. I worked in a tight three-person team consisting of myself, a product manager, and the manager of CX.

We reviewed the method of testing they had practiced, and I was sent to research it further on my own as well as write up a series of cases to test against. Our testing is task-based: we develop a scenario with a specific need and a pointed task to complete, sometimes a series of small tasks. These were all oriented around the same persona and solution, but the tasks were typically there to ensure that specific touchpoints weren’t missed.

Building Structure

The scenario would provide the overall mission as well as specify the use case, platform, and shopping experience. The intent was to ground the tester in the reason they’re using the app before giving them any specific actions. For example, one of our scenarios was: “You’re shopping at Haverfords and can’t find what you’re looking for in the store. All associates are busy, so you go to the kiosk to find what you came in for.”

Here we provide a specific store name (although a fake one) to give a sense of authenticity. The details of why are vague but the purpose is direct, making it easier for testers to put themselves in the situation as they would naturally imagine it.

Tasks would be based on the opening scenario and be equally simple, but this time give a specific goal. Our tasks for this case were:

Task 1: Put together and purchase a holiday outfit for yourself for an upcoming party.

Task 2: Create a wishlist of accessories for your outfit and send it to yourself.

These tasks set testers on a clear mission without dictating any personal preferences, so they’re free to do as they naturally would in that situation. Since the tasks are use-case based, we can plan them to hit the components we need tested. For these tasks we developed an internal list of the components we knew a tester would have to use in order to complete them. If any component was missing from that list, we knew we needed another scenario better aligned to that case.

Setup & Execution

Our setup was built primarily around keeping the user comfortable and relaxed in order to get the best possible results. For this reason we only ever allowed one person in the room with them: a facilitator to give the scenarios, ensure the recording was going as planned, and ask questions after each case. The remaining team members sat in another room watching and listening to the testing as it progressed. If the facilitator missed any areas of interest while questioning, the observers would message them. This allowed the facilitator to refrain from any writing or typing during the process, which typically makes a tester nervous.

We decided to do an official run-through with people who worked in the building with us, who knew us and likely had more information about our product than our ideal testers would. Ideally we would find people more detached from our work, but for the purpose of testing our process and letting me get comfortable with this particular method, it was perfect.

Besides some awkwardness in making sure the users stayed within view of the recordings, the tests went extremely well. We found the feedback to be consistent: our testers avoided the same components out of confusion or liked the same pieces. They especially loved the visual carousel style we had, though both commented on how strangely it was structured. Our test data included categories such as “Winter Getaways” rather than the standard tree that would follow “Mens > Tops/Bottoms,” for example. Feedback like this was clearly colored by the fact that their own jobs involved commerce; when they focused on factors like Selling Trees, we were losing that clean immersion of the app experience.

Being in the room allowed me to notice where testers hovered over elements before abandoning them, versus ignoring them outright. I was able to ask “Why?” about behaviors like this that I would otherwise miss in a fully virtual testing environment. Best of all, this method provided feedback that was as close to unbiased as we could get.