Issue Resolvability

Client: Noibu

Timeline: July - August 2022

Roles: Lead Product Designer, Research, Visual Design, Testing

Prototypes:

Resolvability - Issue Details

Resolvability - Table Actions

Resolvability - Occurrence Drop

Overview

Research shows that when triaging errors in Noibu, product owners consider two data points.

1. How impactful is the error?
2. How much developer effort will it take to resolve the error? 


This second point is tricky to quantify unless you have experience with a similar error. Product owners do not possess the same knowledge base as developers, so it is difficult for them to determine how much effort an error would take to resolve, or whether it is resolvable at all.

Our users' time in Noibu should be productive and impactful, so we created a new prioritization metric powered by machine learning: resolvability.


The model examines historical data and predicts whether an error is resolvable. In future versions, we would like to tell our users how difficult an error is: something like easy, moderate, or difficult. I kept this top of mind while working on this first version.



Problem

Research shows that the primary data point the product owner consistently defaults to is the ARL (Annual Revenue Loss). Errors with high monetary values are more likely to be assigned to a developer. The problem with this approach is that many of these high ARL errors tend to be very difficult to resolve or are unresolvable. When this occurs, it creates tension between the product owner and the developer.

If this happens three times, we lose the all-important developer buy-in. When we lose that, there is a 62% chance that the customer will churn. This negative experience results in an erosion of trust in Noibu.

For this first version of resolvability, we were very limited in what data we could use and show the user. We only had information on the specific types of errors that had been solved in the past. Our tree-based machine learning model was very much in its infancy: it correctly identified resolvable errors 81% of the time, and unresolvable errors 75% of the time.
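To make that accuracy claim concrete, here is a minimal sketch of a tree-based resolvability classifier. The data, feature set, and training setup are entirely hypothetical stand-ins, not Noibu's actual model or pipeline; the point is only to show how a per-bucket hit rate like 81%/75% falls out of per-class recall.

```python
# Hypothetical sketch of a tree-based resolvability classifier.
# The features and synthetic data below are illustrative placeholders,
# not Noibu's real signals.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Each row is one historical error; each column is a signal the model
# might learn from (e.g. error type, occurrence count, past fix rate).
n_errors = 2000
X = rng.random((n_errors, 4))
# Synthetic label: 1 = resolved in the past, 0 = not resolved.
y = (X[:, 3] + 0.2 * rng.standard_normal(n_errors) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
# Per-class recall mirrors the "81% resolvable / 75% unresolvable"
# framing: how often the model is right for each bucket.
print("resolvable recall:  ", recall_score(y_test, pred, pos_label=1))
print("unresolvable recall:", recall_score(y_test, pred, pos_label=0))
```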

We now had two buckets. One contained errors that we knew were resolvable, and that's it. The other contained errors that might or might not be resolvable; we had no way of knowing.

Humans take the path of least resistance, and product owners would likely drain the well of resolvable issues. We needed to communicate that the errors in the first bucket were solvable without implying that the errors in the second bucket were not. We immediately knew that written language would not suffice in this first version. Instead, we had to rely on iconography.

Our data scientist and I shared the same concern: the ML model would not learn anything about the issues in that second bucket. If we feed garbage into the model, it will feed garbage to our users, losing all of the value we provide.

Solution

Ultimately, this shifted the way I approached the project. I booked calls with seven product owners and four developers to gather information so I could understand how to provide them with a fantastic experience and, more importantly, long-term value.

For product owners, I discovered that:

Most product owners have no understanding of how complex an error is or how long it would take to fix when assigning it to a developer.
The primary metric they used to triage errors was ARL.
They would use issue resolvability along with the ARL to prioritize and assign errors to the developer.
They desperately needed something to help guide them when assigning errors.
Product owners working with external developers typically had a set amount of developer time per week, which made that resource very valuable.

Early-stage, high-level JTBD (Jobs to Be Done) flow I used to understand the product users' problem.

When it came to developers, I learned that:

They get frustrated when they are unable to solve an error. They worry that it reflects poorly on them.
If they are assigned an unsolvable error, they begin to question Noibu’s value.
A resolvability indicator would help the product owners, but they had low confidence in the score itself.

It dawned on me that I needed to take a data-first approach to give them the value they needed and deserved. The design aspect of this project was now going to be complementary and serve the ML model. The model needs good data. Just as you can't outrun a bad diet in real life, we cannot design our way around bad data.

Early-stage, low-fidelity designs. I'm a big believer in getting ideas out of your head and onto paper.

The ML model needed to learn, which meant a feedback loop was necessary. When the user performs an action like changing the error state to close-fixed, it triggers a pop-up asking how complicated the issue was to resolve. We also do the inverse: when a user changes the state to close-ignore, we ask them whether the error is resolvable at all.
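As a rough illustration of that loop, here is a minimal sketch in Python. The state names come from this case study, but the prompt structure, function, and choices are hypothetical assumptions, not Noibu's implementation.

```python
# Hypothetical sketch of the state-change feedback loop: a state change
# decides which feedback pop-up to show, and the answer would be stored
# as labeled training data for the resolvability model.
from dataclasses import dataclass
from enum import Enum


class ErrorState(Enum):
    IN_PROGRESS = "in-progress"
    CLOSE_FIXED = "close-fixed"
    CLOSE_IGNORED = "close-ignore"


@dataclass
class FeedbackPrompt:
    question: str
    choices: list[str]


def prompt_for_state_change(new_state: ErrorState) -> FeedbackPrompt | None:
    """Return the feedback pop-up to show for a given state change."""
    if new_state is ErrorState.CLOSE_FIXED:
        # The error was fixed, so ask how hard it was: future versions
        # aim to surface easy / moderate / difficult.
        return FeedbackPrompt(
            question="How complicated was this issue to resolve?",
            choices=["easy", "moderate", "difficult"],
        )
    if new_state is ErrorState.CLOSE_IGNORED:
        # The inverse case: learn whether the error was ignored
        # because it is genuinely unresolvable.
        return FeedbackPrompt(
            question="Is this error resolvable?",
            choices=["yes", "no", "not sure"],
        )
    return None  # No prompt for other state changes.


print(prompt_for_state_change(ErrorState.CLOSE_FIXED))
```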

I already had concerns about whether error states were being used appropriately in our users' workflow. Our CSMs told me they would have to call customers up to three times to get them to close-fix errors.

I immediately booked calls with eleven product owners to find out what was happening.

My findings were:

Seven would fix an error but keep the state set to in progress so they could monitor whether the error's occurrence spiked again. They informed me that they then always forgot to close-fix the issue.
Three would simply forget to change the error state.
The remaining one informed me that Noibu does not sync to Jira properly, which I confirmed is true.

I took this information along with other concerns to the PM, VP of Product, and CTO. I presented the problem and a few possible solutions.

I was worried that Issue Resolvability was going to turn out like Woodstock in 1999...

My Takeaways

We were building a feature with a significant dependency on feeding the ML model clean data. But our user workflow was broken, so the likelihood of the feedback loop helping was low. We had a meeting where everyone was able to voice their concerns. I stressed that the value would be very short-term and that we should hit pause and focus our efforts on the user workflow.

In this case, it came down to business needs taking precedence over user needs; a startup has to ship consistent value to its customers. The VP of Product and CTO called for this first version of issue resolvability to launch without the feedback loop. Thankfully, I was told we would be investing a lot of time and effort in the user workflow this coming quarter.

It feels like we went through the process of researching what makes a good pizza dough and then put that research into action.

We made a poolish the night before. We added the salt. Then we used our hands to combine the flour with water until the flour fully absorbed that water. 

Hours later, we realized the yeast was missing. The dough could not rise.

I used a baking analogy to describe this project because there are specific steps you cannot skip and still expect a promising outcome. We can't add the yeast once the flour has absorbed all the water and assume the dough will rise as it should. The process, just like a recipe, cannot be done out of order.