This martech company builds a digital ad platform for banks, analytics tools for advertisers, and service design tools for its own employees. Everything rests on accurate consumer transaction data, referred to as transaction "strings". The Validation Tool ("Val") lets data scientists manually check the output of an automated string cleaner.
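To make the domain concrete: a raw transaction string is the noisy merchant descriptor that appears on a bank statement. The sketch below is purely illustrative; the function name, cleaning rules, and sample string are my assumptions, not the company's actual pipeline.

```python
import re

def clean_transaction_string(raw: str) -> str:
    """Illustrative normalization of a raw merchant descriptor.
    These rules are hypothetical, not the company's actual cleaner."""
    cleaned = raw.upper()
    cleaned = re.sub(r"\d{4,}", "", cleaned)           # drop long digit runs (store/terminal IDs)
    cleaned = re.sub(r"[#*]+", " ", cleaned)           # strip point-of-sale punctuation noise
    cleaned = re.sub(r"\s{2,}", " ", cleaned).strip()  # collapse leftover whitespace
    return cleaned

print(clean_transaction_string("SQ *COFFEE-HOUSE #0042 SEATTLE"))
# -> "SQ COFFEE-HOUSE SEATTLE"
```

Val's job was to let a human confirm or correct results like these.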
In this role I owned the entire UX process: research, interaction design, visual design, and delivery.
One early challenge was convincing my new team that a "rewind" back to research was necessary. To make the case, I quickly set up a usability test with actual users, then presented my findings and a time-boxed action plan.
First, I needed an understanding of the end users and the problem space. I structured user interviews so that half the time was spent understanding the daily lives of data scientists (their goals, needs, and frustrations) and half was spent in contextual interviews, observing "string cleaning" in practice. The highlights included:
From the interviews and discussions with the larger team, it became clear that we had several new service design projects in flight, with functional overlap between them. We needed to decide how to split functionality across the tools. Here's how I untangled that quandary:
I ran a card sort with a group of end users to bucket the functionalities by tool. It was great to see the relief on their faces once these functions were assigned and organized!
Next, I organized a wireframing activity in which teams mocked up the tools and discussed how they might work. I supplied wireframe components and acted as a facilitator in case anyone got stuck.
As a result, we aligned on the distinctions between the tools and prioritized the user need statements for our respective products.
During the Ideation phase, I first wanted to facilitate sessions around Val design concepts that might have been overlooked. The previous designer had delivered a single design without exploring alternatives on the path to that solution. We used InVision Freehand for our ideation sessions so everyone could contribute easily.
We arrived at the final design iteratively, folding the Data Science team's feedback into each successive iteration. These were the key rationale points:
From the outset of the research, it was clear that this tool would bring joy to data scientists by taking over many of their manual tasks. But how much? A post-launch check on the product revealed the following:
These were great results, but we planned on taking the product even further. Having our users under the same roof really made this project fun, fast, and gratifying.
I learned that it's important not to simply take a handoff from a fellow designer and run forward. You must ask, "Is there anything missing from our evidence that would inform a better design?"