CASE STUDY
Custom and complex report building for campaign outcomes.
User Research / UX Design
The business stakeholders requested a new internal product to replace the external software used to create campaign evaluation and analytics reporting. The AI team had created a prototype for the algorithm to consume all of the existing data, but a functional UI platform was needed.
This project was exceptionally complex. The end-user population consisted of multiple departments with varying levels of technological skill and analytic contribution. Understanding how to create a single product that solved for all of those needs was a huge task. We set out by conducting a competitive analysis of the original software and any comparable platforms on the market. We then completed multiple rounds of user interviews and research with the different department stakeholders. After an initial card sorting exercise to surface pain points and insights, we created situational use cases instead of personas. From there, product flows were crucial in helping us understand how the application would work at a high level.
Providing system alerts with status updates gives users a broader understanding of what’s going on with their evaluations. Additionally, showing clear validations and messaging helps the process along.
Flexibility is the key here, with a “Mix and Match” style of report builder. Allowing users to save those creations to revisit later saves time as well. Once an evaluation is created, users can export it as raw data in .csv format, export it as a visual report, or view it within the platform itself. The evaluation reports themselves display in Tableau based on a pre-defined template. There are six sections to each evaluation: an overview, members, utilization, quality performance, cost, and outreach. Which sections a user sees is based on their security role. The sections contain high-level metrics, KPI details, rate-of-difference comparisons, and drill-downs to specific time-based metrics. We are also using narrative science to help tell a global story for each evaluation.
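The role-based visibility described above can be pictured as a simple mapping from security roles to the six evaluation sections. This is a minimal, hypothetical sketch: the section names come from the case study, but the role names and their permissions are assumptions for illustration only.

```python
# The six evaluation sections from the case study, in display order.
EVALUATION_SECTIONS = [
    "overview", "members", "utilization",
    "quality_performance", "cost", "outreach",
]

# Assumed mapping of security roles to the sections they may view;
# the real roles and permissions would come from the platform's security model.
ROLE_PERMISSIONS = {
    "analyst": set(EVALUATION_SECTIONS),  # full access
    "outreach_team": {"overview", "members", "utilization", "outreach"},
    "finance": {"overview", "cost"},
}

def visible_sections(role: str) -> list[str]:
    """Return the evaluation sections a role may view, in display order."""
    allowed = ROLE_PERMISSIONS.get(role, set())
    return [s for s in EVALUATION_SECTIONS if s in allowed]
```

In this framing, the Tableau template stays the same for everyone and the UI layer simply hides the sections a role is not entitled to see.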
As part of the evaluations, users compare member populations against one another. They need a robust tool to create specific member populations based on certain criteria, save them for comparisons, and edit those populations later.
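The create/save/edit workflow for populations can be sketched as a small data structure: a named set of criteria that can be saved, used to filter members, and updated later. This is a hypothetical illustration; the field names and in-memory store are assumptions, not the product's actual implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Population:
    """A saved member population, defined by matching criteria."""
    name: str
    criteria: dict = field(default_factory=dict)  # e.g. {"region": "west"}

    def matches(self, member: dict) -> bool:
        """True if a member record satisfies every saved criterion."""
        return all(member.get(k) == v for k, v in self.criteria.items())

# Assumed in-memory store; the real product would persist these.
saved_populations: dict[str, Population] = {}

def save_population(pop: Population) -> None:
    saved_populations[pop.name] = pop

def edit_population(name: str, **new_criteria) -> Population:
    """Update criteria on a saved population so later comparisons stay current."""
    pop = saved_populations[name]
    pop.criteria.update(new_criteria)
    return pop
```

Saving populations by name is what makes side-by-side comparisons cheap to re-run: the criteria are the definition, and editing them updates every future evaluation that references that population.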
The narrative science feature is, at its core, using AI to tell a story. Users needed a quick way to understand exactly what was happening with their evaluation, and the narrative science summary provides a clear and concise snippet of just that.
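The shape of such a summary can be shown with a simple template-based stand-in: headline metrics in, one readable sentence out. The real feature uses AI-driven narrative generation; this sketch and its metric names are assumptions used only to illustrate the idea.

```python
def narrative_summary(metrics: dict) -> str:
    """Turn an evaluation's headline metrics into a one-line story.

    Hypothetical metric names: members, cost_change_pct, quality_score.
    """
    direction = "fell" if metrics["cost_change_pct"] < 0 else "rose"
    return (
        f"Across {metrics['members']:,} members, per-member cost {direction} "
        f"{abs(metrics['cost_change_pct']):.1f}% versus the comparison "
        f"population, with quality performance at {metrics['quality_score']}%."
    )
```

Even this trivial version shows why the snippet works for mixed-skill audiences: the reader gets the direction and magnitude of change without opening a single drill-down.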
The initial designs focused on what the AI tools could do rather than on what the users needed and how the AI tools could support that. A shift in thinking needs to occur to prevent this type of design from happening in the future. Ultimately, the Minimum Viable Product (MVP) was built in Databricks with a .NET UI layer and a Tableau "output". Incorporating pre-built tools in future iterations of this project will save the time and effort of building from scratch, getting users engaging with the product sooner.