What Does Usability Mean For AI Based Products?

Models are built to predict outcomes. When we’re building these models, we look at accuracy measures. We optimize performance. We often quantify users’ expectations in those terms: speed and reliability. That’s too simplistic a view. Those model measures matter, but they’re often just the tip of the iceberg when it comes to user expectations.

Many traditional approaches to usability fail because users can’t articulate their needs for data science and machine learning products. The technology is too new. That’s one of the biggest roadblocks to getting usability right.

The typical path for users starts with the promise of the new technology. That gets them onboard and excited about how much easier their jobs will become. Only once they get the product in hand do they realize how they needed it to work. Trying to fix the user experience after the fact is expensive and risks losing early adopters.

The key to usability in AI based products is knowing what the users need before they do. There are common threads which lead to great user experience in AI based products. I’ve built enough of them to have seen what works and what is eventually rejected by users. I’ll start with how to build the team then move on to the most important consideration in building usable AI systems.

Collaboration and Diversity – A Team Approach To Usability

Too often a team of data scientists and machine learning engineers is mentally siloed. They have a great connection with the problem space, but little connection to the business needs and users who experience that problem. This results in teams solving a data science problem but not the business or user problem.

There is a difference between a great machine learning solution and a great user solution. Without a diverse team, there’s no way for data scientists to know the difference. Getting data scientists and machine learning engineers out of their technical silo is a critical piece of building AI products people can really use.

It is ideal to embed data scientists within the team they’re supporting. That may be a development team or a functional unit like marketing. This keeps the data scientist close to the implementation and gives them access to subject matter experts.

The implementation team gives the data scientist parameters for how their solution fits with the overall system. They share tools and frameworks to simplify integration. They’re available for knowledge transfers. Data scientists will often pick the tools they know instead of the best tool for the job. With experts around them, they can adopt the team’s tools more easily.

Many businesses spend most of their time getting the solution to work and then integrating it into a larger system. Embedding data scientists within the implementation team reduces that level of effort. That frees up both the data scientist’s time and the integration team’s time to focus on building solutions that meet users’ needs as well as the technical need.

Subject matter experts fill in the blanks on those needs. They have a close connection to existing processes and solutions. Those realities on the ground lay the foundations for products that work the way customers expect them to.

An effective team consists of a data scientist or machine learning engineer, an implementation specialist, and a subject matter expert. Scale each resource appropriately to fit the project.

Transparency & Trust

AI based systems don’t behave like traditional software. Traditional software does the same thing every time and users come to have a level of comfort in that routine. They don’t need to understand what’s under the covers because what’s going on is pretty obvious. AI based software has a level of data driven decision making which creates an entirely different user experience.

There’s a working relationship between users and AI based systems. People look at them as low skill assistants. They expect them to perform more like a person than traditional software. Often, they expect better than human performance which is unrealistic for most systems.

All AI based systems make mistakes. When a user doesn’t understand why the system made a mistake, they lose trust in the system. It’s a rapid downward spiral from there. I’ve seen many quality AI based systems lose traction because of a lack of transparency.

For an AI based system to gain and keep user trust, it needs a way to reveal how it made any given decision and/or a means to provide feedback. That’s a balancing act. The system needs to reveal enough information to give users a sense of control over the process without getting in the way of what they’re doing. Each user base has a different expectation of control, which is why getting this piece of the experience right is so challenging.

I built an AI based system that reads resumes and matches them with job opportunities. In that case, the recruiters who use the system want full transparency into the process. Why did the system pick this resume? What resume terms did the system use to decide this person is a good match? They also needed a way to correct the system in near real time when it made mistakes.
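The principle behind that kind of transparency can be sketched in a few lines. This is a hypothetical, deliberately simplified matcher (a real system would use richer models such as TF-IDF or embeddings); the function name and scoring are illustrative only. The point it demonstrates is returning the evidence with the score, so a recruiter can see which terms drove the match:

```python
from collections import Counter

def match_with_explanation(resume_text, job_text, top_n=3):
    """Score a resume against a job posting and explain the match.

    Hypothetical sketch: real matchers use richer models, but the
    transparency principle is the same -- return the evidence
    (which shared terms mattered), not just an opaque score.
    """
    resume_terms = Counter(resume_text.lower().split())
    job_terms = Counter(job_text.lower().split())
    # Weight each shared term by how often it appears in both documents.
    shared = {t: resume_terms[t] * job_terms[t]
              for t in resume_terms if t in job_terms}
    score = sum(shared.values())
    # Surface the highest-weight shared terms so users see *why* it matched.
    evidence = sorted(shared, key=shared.get, reverse=True)[:top_n]
    return score, evidence

score, why = match_with_explanation(
    "python machine learning engineer",
    "senior machine learning engineer python wanted",
)
```

Here `why` carries the answer to the recruiter’s question, and correcting a bad match becomes actionable: the user can see exactly which terms to down-weight.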

I’ve also built a chatbot which required a lot less transparency. The workflow was fairly static, so much of its functionality was obvious. ‘I didn’t understand that,’ was enough transparency for most users. Add a button to report the issue and most users trusted the chatbot enough to use it regularly. Anything more robust would have interfered with the users’ experience, making the chatbot unusable.
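That lighter-weight pattern reduces to a simple fallback handler. This is a minimal sketch, not the chatbot described above; the keyword-matching intent lookup and the report log are assumptions for illustration. What it shows is the two transparency pieces that were enough: an honest “I didn’t understand” and a way to report the miss:

```python
def handle_message(message, intents, report_log):
    """Minimal chatbot fallback pattern (illustrative sketch).

    A recognized intent gets its canned response; anything else gets
    a plain admission of failure plus a report path, so users keep
    trust without the system over-explaining itself.
    """
    text = message.lower().strip()
    for keyword, response in intents.items():
        if keyword in text:
            return response
    # Log the unrecognized message so the team can review and retrain.
    report_log.append(message)
    return "I didn't understand that. [Report this issue]"

intents = {"hours": "We're open 9 to 5."}
missed = []
handle_message("What are your hours?", intents, missed)
handle_message("qwerty", intents, missed)
```

The report log doubles as training data: the messages users bother to report are exactly the gaps worth closing first.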

Trust is a key aspect of user experience for AI based systems. Building that trust requires the system to reveal some of its inner workings and allow users the control they expect to change those behaviors as needed.

Transparency & Insight

A second aspect of transparency is important for decision support systems. Data science and machine learning are called upon to provide reporting or other complex analysis. These pieces of data support decisions about hiring, business strategy, pricing, marketing, and many other areas of the business.

These are often presented in reports. While they are well visualized and presented, they aren’t very transparent. The mechanisms behind the data are hidden and that’s where the real value is.

The difference between analytics and machine learning driven insights is in how the user interacts with them. Analytics require the user to interpret the data and draw their own conclusions about trends and causes. AI based systems present trends and draw conclusions from the data themselves.

Without knowing that distinction, both types of analysis are frequently presented in the same way. That overlooks much of the value of AI based data analysis. The deeper insights of AI based systems require transparency to achieve their full value.

Analytics describe what happened: basic metrics. AI based analytics can explain why, but only when the underlying patterns are presented alongside the results. A visualization is not enough. These presentations need to dive into the trends that produced these results.
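Concretely, that means returning the fitted pattern with the result instead of the result alone. The sketch below is a hypothetical example using a simple least-squares trend; the function name and output shape are assumptions. The design point is the return value: the forecast (the “what”) travels together with the trend that produced it (part of the “why”), so the presentation layer can show both:

```python
def trend_with_explanation(values):
    """Return a next-period forecast alongside the trend behind it.

    Illustrative sketch: fits a least-squares line to an evenly
    spaced series and reports the slope with the forecast, so the
    report can surface the pattern, not just the number.
    """
    n = len(values)
    xs = range(n)
    x_mean = sum(xs) / n
    y_mean = sum(values) / n
    # Ordinary least-squares slope and intercept.
    slope = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, values)) \
        / sum((x - x_mean) ** 2 for x in xs)
    intercept = y_mean - slope * x_mean
    forecast = intercept + slope * n
    return {"forecast": forecast, "trend_per_period": slope}

result = trend_with_explanation([10, 12, 14, 16])
```

A report built on this output can say “next period is estimated at 18 because the series has been growing by 2 per period,” rather than presenting 18 as an unexplained number.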


There are many other design considerations for AI based systems, but my intent here is to show concrete examples of how they differ from traditional software. Best practices become more important. Team composition is a critical success factor. Understanding new elements like transparency drives user experience design.

Many users are unaware that their applications are using machine learning. Their first conscious experience with AI based software may be your application. That means your business needs to understand what makes a great experience for them before they do. It’s important to understand the concepts of AI based user experience to avoid making that a process of trial and error.


I’m a top applied data scientist. I’m called a thought leader by IBM, Intel, SAP, and many others. I’m a contributor at Fast Company and Silicon Republic. I teach senior leaders and executives about what AI can REALLY do so they can monetize it. Right now, not 5 years from now. With your data, not Google’s. I speak from the perspective of someone who has built AI systems; making their potential relevant to real world business needs. You can connect with me on Twitter: @v_vashishta LinkedIn: https://www.linkedin.com/in/vineetvashishta/ and email: vin@v2ds.com.
