I’ve spent the last two years creating Launch Studio, a web app for testing Bluetooth devices. As the lead UX Designer and Researcher on the project, I drove the product vision from inception to a release-ready product, through deep understanding of underlying processes, policies, and divergent user needs.
All new Bluetooth products must be tested to ensure quality. Launch Studio determines testing requirements and documents test results. The qualification process it supports generates 75% of the Bluetooth SIG’s income, around $15 million annually.
Launch Studio replaces a fragmented, siloed set of tools. Over the past decade, layers of complexity have been added as regulations changed. The process is so complex and opaque that a cottage industry of independent consultants has grown solely to support this tool. Novice users are so intimidated that when testing is required, 97% choose to hire these consultants rather than learn the tool themselves.
While research and product design were challenging, it was a thorough understanding of process, policy, business culture, and collaboration practices that led to our greatest successes.
The Bluetooth SIG is a member-driven organization with advisory boards for various technical roles. I worked with a board of experts, BTI, which defines testing procedures and policies - an independent group of expert users, but one with political power and insider status. I also worked with the BQEs, the independent consultants who guide clients through the qualification process - the same cottage industry the system’s complexity has spawned.
In a three-day workshop, we explored the tools, pain points, and scope in terms of policy and regulation impediments. While the group had prepared multiple specific feature and UI suggestions, I kept them talking about the overall process. By stepping back from the user interface and examining the underlying process, we were able to discover the true pain points and better define how the tools could work.
The research workshop revealed a handful of critical insights:
The business began the project expecting to combine two main tools: a web app for determining test requirements, and an offline tool for conducting the testing. I discovered not only that the existing approach was working at a fundamental level, but that the people using the tool suite’s offline testing platform (PTS) are an entirely different group from those who use the online tools to build a test plan. The existing division of tools, though archaic, aligns well with the different user groups and their needs.

The Managers
This group of experts asked for more features, rather than for making the existing features work well. Individual interviews revealed an ulterior motive: an easy-to-use tool undermines the consultants’ business. They wanted the SIG to provide more expert-oriented features, but not to remove the barriers to entry for novices. Challenging the expert-centric culture of the existing tools - both internally and externally - became a cornerstone of the product.

The Testers
Two months later, I spoke to a completely different user group: users who conduct the testing. Unlike the managers and policy makers from the earlier workshop, these testers know the technology and terminology, but infrequently work with the existing online tools. More commonly, they consume test plans from the online tools and submit their results to test managers.
I conducted interviews by asking about the existing process and tools, then testing a paper prototype. These conversations revealed new issues surrounding collaboration and tasks manually conducted outside of our tools.
Test plans are created by test managers using one of the online tools. These plans are then used by test engineers, who run the tests on multiple platforms. The engineers document the test results and evidence, which are uploaded back to the online tools. Documentation is completed manually, usually in an Excel spreadsheet. This process is tedious and prone to human error. With thousands of tests and testers, plus multiple versions of the hardware and software under test, test managers never get a clear picture of their progress. By speaking with people who don’t directly use the tools in question, I was able to better understand the process and solve problems that would not have been identified with a narrow, UI-centric approach.
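To make the progress-visibility gap concrete, here is a minimal sketch of what replacing the manual Excel tally with an automatic roll-up might look like. Every name here (TestResult, summarizeProgress, the status values) is invented for illustration and does not reflect Launch Studio’s actual code or API.

```typescript
// Hypothetical sketch: aggregate uploaded test results into a
// progress summary, instead of tallying them by hand in a spreadsheet.

type Status = "pass" | "fail" | "pending";

interface TestResult {
  testId: string;
  platform: string; // the platform the test was run on
  status: Status;
}

interface ProgressSummary {
  total: number;
  passed: number;
  failed: number;
  pending: number;
  percentComplete: number; // share of tests with a recorded outcome
}

function summarizeProgress(results: TestResult[]): ProgressSummary {
  const summary: ProgressSummary = {
    total: results.length,
    passed: 0,
    failed: 0,
    pending: 0,
    percentComplete: 0,
  };
  for (const r of results) {
    if (r.status === "pass") summary.passed++;
    else if (r.status === "fail") summary.failed++;
    else summary.pending++;
  }
  const done = summary.passed + summary.failed;
  summary.percentComplete =
    summary.total === 0 ? 0 : Math.round((done / summary.total) * 100);
  return summary;
}
```

Even a roll-up this simple gives a test manager the at-a-glance progress picture that the manual process never provided.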
The underlying process that Launch Studio supports can be complex, but in the majority of cases it can be reduced to a simple form to fill out. The challenge is in directing users down the correct path - the simple one when allowed, the full, complex process only when required.
Launch Studio supports two main processes - Qualification and Declaration. For simple cases where the hardware changes are minimal, the Bluetooth SIG requires only a simple declaration. For scenarios where the Bluetooth hardware design has changed, qualification testing is required before declaration. Launch Studio uses two distinct paths to make the simple cases effortless, but without marginalizing the complex scenarios.
The core issue is determining whether a user needs to conduct qualification testing. The qualification process takes months and costs thousands of dollars, so it’s important to avoid it when it isn’t required. Unfortunately, the requirements run several pages and still don’t provide a clear answer.
The first approach to this problem asked two simple questions at the beginning of every project and altered the resulting steps based on the answers.
When tested, the experts resented having to answer questions when they already knew which process they needed. Furthermore, the questions oversimplified the process and didn’t capture every nuance of our policy.
Charged with making the tool accessible for everyone, but unable to hide the hardcore technical requirements, we created the Getting Started page. This page orients novices by explaining that there are two processes, then provides simple examples of each scenario. Users consciously choose the correct path based upon their requirements. For most cases, this provides a clear answer.
For those who still don’t know which path to take, a second method is offered. The “Help me decide” button opens a modal with yes-or-no questions. Through progressive disclosure, up to three questions are asked and a verdict is rendered.
If the correct path remains unclear, links to customer service, in-depth guides, and even the policy documents are provided. This gives everyone a simple way to reach a resolution without hiding the necessary complexity.
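The “Help me decide” flow above amounts to walking a small decision tree, one question at a time. The sketch below illustrates that shape; the question wording, the branching, and every identifier are invented for illustration, since the real policy logic is far more nuanced than three yes-or-no questions.

```typescript
// Illustrative sketch of a progressive-disclosure decision flow:
// up to three yes/no questions, revealed one at a time, ending in
// a verdict of "qualification" or "declaration". The questions and
// branches here are placeholders, not Bluetooth SIG policy.

type Verdict = "qualification" | "declaration";

interface QuestionNode {
  question: string;
  yes: QuestionNode | Verdict;
  no: QuestionNode | Verdict;
}

const decisionTree: QuestionNode = {
  question: "Did the Bluetooth hardware design change?",
  no: "declaration",
  yes: {
    question: "Was the radio design modified?",
    no: "declaration",
    yes: {
      question: "Has the modified design been qualified before?",
      yes: "declaration",
      no: "qualification",
    },
  },
};

// Walks the tree with the answers given so far. Returns a verdict
// once one is reached, or the next question to show the user -
// the "progressive disclosure" part.
function decide(node: QuestionNode, answers: boolean[]): Verdict | string {
  let current: QuestionNode | Verdict = node;
  for (const answer of answers) {
    if (typeof current === "string") break; // verdict already reached
    current = answer ? current.yes : current.no;
  }
  return typeof current === "string" ? current : current.question;
}
```

Modeling the flow as data rather than hard-coded screens would also let policy experts review and adjust the branching without touching the UI.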
An interactive prototype was developed early in the design process. This helped the UX team establish a pattern library and style guide. The prototype was used extensively for user testing and internal decision making. Updates to the prototype were discontinued once the production tool became mature enough for meaningful testing. The prototype below does not represent the final state of the tool. Launch Studio is currently in private beta; public release is anticipated in 2017.