Artificial Intelligence
UX Design
May 5, 2025

10 AI UX principles for designing great AI experiences

Laura Sima

For the past 3 years, AI has been dominating a lot of conversations around digital products. In this context, there are some UX principles to follow to make sure we're designing good AI experiences for our users, beyond chat interfaces.

So, what should we keep in mind when designing AI products? These are powerful technologies and they can bring a lot of value to users. However, as with every new technology, there's an initial stage when technological development takes over, before any consideration of how to create a good user experience. Understanding and testing the capabilities of the new technology seems more interesting and captivating than putting the user at the centre.

As Jakob Nielsen has highlighted quite a few times, the big risk is that these products become unusable in the long run, providing a bad user experience and preventing users from getting value out of them. Each new technological area is characterised by a strong development focus in the beginning, to the detriment of the user.

Also, we're at the beginning of this new wave of products. While proper standards for human-AI interaction are still being established, there are already some UX principles that can be followed to build a good experience for AI products. These principles go beyond adding a magic icon to your product and serve as a guide for building great AI experiences.

Important to note, these AI UX principles can be applied to any AI product.

How to design AI products

1. Identify a real user need

Right now, there's a gold rush to integrate AI, especially generative AI, into a lot of products. However, this can result in overblown products: using a technology not because it's actually needed, but because it's trendy.

While AI has developed a lot in the past two years, it still comes with challenges. 

  • Figuring out what data you need
  • How to collect it
  • Choosing the right model
  • Making sure it performs at a sufficient level of quality for users

Also, some problems could probably be solved more easily without AI. AI requires constant maintenance and support, especially as things change very fast.

To make sure you create a good AI UX experience, start by identifying the need you want the AI to solve. Otherwise, you're giving your users tools and features that might not be that relevant for them. 

The Google PAIR guidebook mentions two cases where AI can be introduced in a product: either to automate repetitive tasks, or to augment, giving users superpowers to do things they previously couldn't. We won't go into detail about how to distinguish between the two, but connecting the AI to a real user need and problem is a key AI UX principle.


2. Scope the AI into smaller bites

In his course “AI for Everyone”, Andrew Ng points out that AI can't automate jobs; it can automate tasks. And at the end of the day, each job is a collection of tasks.

So, once you've identified and validated a real need inside your product, it's time to apply another AI UX principle: breaking down ambitious features into manageable tasks. The thing is, the broader the scope of the AI, the more complicated it will be to actually build it in a way that works. 

A screenshot showing Galileo AI in use, when prompted to design an interface for a breathing app for a hardware product. The output is unbranded and generic, showing a dashboard with an intro text, a button, and some pictures highlighting articles. The breathing sessions are represented by bar charts that show breath count per day, and no other info.

One example here would be Galileo. The platform breaks down UI generation by asking the user to narrow down specific screens, or asks the user for further details in order to generate visuals that are closer to what the user needs.

We've also seen this while designing a personal finance assistant chatbot. While the chatbot was capable of answering any question from spending habits to long-term financial planning, users struggled with what to ask in the first place. To make things easier for them, we came up with conversation prompts so users could get a better idea of its capabilities.

UX studio's UI and UX design for a fintech app, with AI spotlights analysing spending habits, and an AI chatbot.
Read the case study here

Also, once you've outlined the tasks where you'll use AI to either automate or augment human abilities, it's important to understand the users' mental models and build the AI features around them.

3. Set clear expectations for what the AI can and cannot do

AI, and especially generative AI, is a technology that can seemingly do almost anything. So it's essential to communicate clearly what it can do inside your product, or when it comes to a specific feature. Users need to know upfront what the possibilities and limitations are. This is very important, as it helps users understand what they can actually get from that AI product or feature.

The alternative to this AI UX principle is letting users figure out on their own what the AI can and cannot do. Not only can this cause frustration, it can also make users lose trust in your product.

By communicating clearly what AI is capable of within your product, you set the right expectations and create trust with your users. Additionally, users won't expect things that are not currently available in your product and they won't overly depend on your product. 

It should also go without saying that for any claim you make, you need to make sure your product delivers on it. There are heaps of AI products making bold claims whose output still has a lot of room for improvement, falling short of the promise they set.

Laptop mockup with an article open, titled "can ai take over usability testing? we put it to the test"
We explored the capabilities and shortcomings of AI tools in usability testing

In setting clear expectations, you can also reinforce what users are not allowed to do in your product.

4. Collect the right data transparently and responsibly

While LLMs and generative AI offer a relatively good foundation to start from, it's likely that you'll need additional data for the AI to provide more meaningful value for your users. And when it comes to data and AI, the consensus is garbage in, garbage out. 

That shapes another AI UX principle: make sure that you're collecting relevant, high-quality data in order to make your AI actually work for your users. Also, be transparent about what data you collect and how you use it. It should go without saying, but don't collect data, especially personal and sensitive data, without your users' permission. 

A clear example of that is how OpenAI came under heavy scrutiny for using copyrighted texts to train their models. While in their case the issues relate to copyright, things can only get more complicated when it comes to personal data collection.

Data collection is such a big topic when it comes to AI that you could probably talk about it for ages. Bias and representativeness in data alone are very important and complex topics, but we won't go into them for now. Google's PAIR guidebook has an excellent resource on the topic, so we'll just link it here.

5. Tell users how it works - whenever possible 

Machines are weird. They have their own way of working and with AI, even the most experienced engineers are left guessing as to why machines behave the way they do sometimes. 

One such example is an image recognition algorithm built to recognise sheep. When the input was a sheep in a car, however, the algorithm failed to recognise the sheep. Moreover, it started identifying grass in other images as sheep. Ultimately, the researchers realised that the data set used during training contained mostly sheep surrounded by grass, so the AI had learned to associate grass with sheep. This is why, whenever possible, you should explain in simple terms what the AI relies on and how it produces its output: it helps users calibrate their trust and make sense of odd results.

Illustration where AI models mistakenly identify grass as sheep, along with a group of sheep

6. Proactively manage errors

AI and especially generative AI provides non-deterministic user experiences. What this means in practice is that even for the same input, the output provided by the LLM can be different.

In these situations, errors are not a maybe; they're just a matter of time, size and impact.

To add to that, large language models are especially prone to hallucinations, famously suggesting glue as a pizza topping. Moreover, there's usually some hidden bias in the training data that doesn't show up during testing and is not acceptable for users.

All these situations bring us to the next AI UX principle: proactively managing errors. But how can that happen if the systems are non-deterministic?

While it's hard to predict everything that can go wrong with AI, there can be rules and guidelines in place for when the AI provides a weak or incorrect output. One example is the confidence score shown in some image recognition applications: users can see how confident the model is and decide what to do next.

A UI design example where a warning that says "the meal plan doesn't consider allergies" warns users about the lack of data so they can consider it when making a decision
Example by Irina Nik
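To make the idea concrete, here's a minimal sketch of such a rule. It's purely illustrative: the function, messages, and thresholds are hypothetical, not taken from any real product.

```python
# Hypothetical sketch: route a model's output based on its confidence score.
# The thresholds and wording are illustrative, not a real product's API.

LOW_CONFIDENCE = 0.5
HIGH_CONFIDENCE = 0.9

def present_prediction(label: str, confidence: float) -> str:
    """Decide how to present a prediction to the user based on confidence."""
    if confidence >= HIGH_CONFIDENCE:
        # Confident enough to state plainly.
        return f"This looks like: {label}"
    if confidence >= LOW_CONFIDENCE:
        # Hedge the claim and invite the user to confirm.
        return f"This might be: {label} (confidence {confidence:.0%}). Is that right?"
    # Too uncertain: admit it and hand control back to the user.
    return "We couldn't identify this reliably. Try another image or enter it manually."

print(present_prediction("sheep", 0.95))
```

The point isn't the exact thresholds; it's that weak output is flagged as weak instead of being presented as fact, so the user can decide what to do next.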

Managing errors is not only about coming up with solutions for resolving the situation and keeping user trust when it happens. It's also about the processes or guidelines you need to have in place to avoid those situations as much as possible.

To manage delicate situations, Google uses the concept of interaction design policies, which helps develop clear guidelines for when AI systems are faulty.

Additionally, it's also about providing users with the tools to move forward. Which brings us to our next AI UX principle...

7. Give the user control

With any AI product, users need to have some control over the output. Whether it's getting past an error or making sure there's enough input, the user still needs to be given a certain amount of control over the AI. 

This applies to any AI product, whether it's one that relies mostly on automation and needs little intervention from the user once everything's set up, or an agentive product that requires occasional user input.

Nest thermostats have a very minimal AI, but can be fully controlled by users through an app
Nest thermostats are a fitting example for this

Beyond the need to intervene whenever necessary, this AI UX principle is also sometimes referred to as keeping a human in the loop. Even if AI is capable of increasingly complex tasks, human intervention is still often needed to supervise and make sure things are running smoothly, or that the outcome is close to what the user intended.

Image generation is a great example of giving the user more control over the output, as well as allowing the user to make specific edits in certain places. We've discussed this topic in detail before and Midjourney also provides quite a few possibilities regarding this. 

Grammarly is another great example. While it provides a lot of suggestions as to how users can improve their writing, it's up to the user to decide which suggestions they accept and which to ignore. 

Screenshot of Grammarly, where AI insight is presented as suggestions which the user can either accept or dismiss
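The pattern behind this can be sketched in a few lines: the AI proposes changes, but nothing is applied until the user accepts. All the names below are hypothetical, not Grammarly's actual implementation.

```python
# Hypothetical sketch: AI suggestions are applied only when the user accepts them.
def apply_suggestions(text: str,
                      suggestions: list[tuple[str, str]],
                      accepted: set[int]) -> str:
    """Each suggestion is a (find, replace) pair; only accepted indices are applied."""
    for i, (find, replace) in enumerate(suggestions):
        if i in accepted:  # the user keeps final say over every change
            text = text.replace(find, replace)
    return text

draft = "Their going to the park"
suggestions = [("Their", "They're"), ("park", "parc")]
# The user accepts the first suggestion and dismisses the (wrong) second one.
print(apply_suggestions(draft, suggestions, accepted={0}))  # They're going to the park
```

The design choice this encodes is that the AI never mutates the user's work directly; it only produces candidates the user explicitly opts into.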

8. Use clear and simple language 

Trust is of the essence and determines our next AI UX principle: use clear and simple language. Whether it's for setting expectations, error messages or communicating about how the AI actually works, it's essential that you use easy and understandable expressions in any user-facing communication. Stay as far away as possible from any machine learning or large language model jargon as it won't bring any benefits for your users. 

Using jargon and domain terminology is generally a no-no in UX design. In AI products, though, it's even more important to avoid.

New technologies mean new ways of interaction. The more foreign something sounds, the harder it will be to get users started with it.

Using clear and simple language, along with reliable outcomes, will help users build trust in your product. 

Screenshot of Google's Teachable Machine, which explains how to gather examples, train the model, and explore findings
Google's Teachable Machine is a great example of explaining very simply how to train your own model using their product.

9. Collect user feedback

One of the core characteristics of machine learning and AI algorithms is that they get better with time. Or at least, that's what should happen. However, for these algorithms to actually improve, they need a key element: user feedback on their output.

This makes collecting user feedback another important AI UX principle. There are multiple ways in which this can be done within a product, either implicitly or explicitly. 

Spotify's Discover Weekly is a good example of collecting implicit feedback. When you skip one of the recommended songs, it's a signal for the platform that that particular tune was not to your liking.

A good example of explicit user feedback is asking users to rate a specific output. You can ask users to rate a recommendation from 1 to 10, for example, while ChatGPT and Claude both offer thumbs up and thumbs down buttons so you can rate their output.

Screenshot of Claude, an LLM product that has generated interview questions for the user. The user can give thumbs up or thumbs down, or prompt Claude to retry.

Another great example is Bold's Accent Oracle. Once you get a suggestion for your accent, the app asks you how accurate its answer was. 

Accent Oracle asks users to give a thumbs up or thumbs down for the result of their AI-driven accent test
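Both kinds of signal, explicit ratings and implicit behaviour, can funnel into the same feedback log. Here's a hypothetical sketch; the class, method, and event names are made up for illustration, not any product's real API.

```python
# Hypothetical sketch of logging explicit (thumbs) and implicit (skip) feedback.
from dataclasses import dataclass, field

@dataclass
class FeedbackLog:
    events: list = field(default_factory=list)

    def explicit(self, output_id: str, rating: str) -> None:
        """Record a deliberate user judgment, e.g. 'up' or 'down'."""
        self.events.append({"output": output_id, "signal": rating, "kind": "explicit"})

    def implicit(self, output_id: str, action: str) -> None:
        """Record behaviour that implies a judgment, e.g. skipping a song."""
        self.events.append({"output": output_id, "signal": action, "kind": "implicit"})

log = FeedbackLog()
log.explicit("answer-42", "down")   # user clicked thumbs down on an answer
log.implicit("song-7", "skipped")   # user skipped a recommended song
print(len(log.events))  # 2
```

Keeping both signal types in one place makes it easier to feed them back into model evaluation or retraining later.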

10. Make it safe & easy to try

Even if it's the last AI UX principle on our list, that doesn't mean it's any less important. Any good AI product makes it easy and safe for users to experiment. Within any product, users should be able to try it out and see its capabilities without worrying that it will cause irreparable damage to their work or their data.

Part of making it safe and easy for users to try AI in your product includes making the AI features available in multiple places within your product, where it makes sense. Here, context is key, and the last thing you want to do is push users to try a feature at the wrong time.

Google does this with Gemini, adding it in multiple forms and in multiple locations throughout its products. In the screenshot below, you can see how Gemini is embedded in the writing experience with prompts and the magic symbol in other places in the app. 

Google's Gemini integrates into the interface of Google Documents, with short prompts appearing automatically, and a star sign allowing direct interaction with the AI

Another great example of making it safe and easy to try is V0. It allows you to test the product even without creating an account.

Screenshot of V0's main page. Users can use pre-written prompts, or write their own.

To summarise

In terms of UX principles, we'll probably see a lot of changes over the next few years. Especially as users become increasingly used to multimodal interactions, a lot of new interaction patterns will evolve, shaped by user needs and requirements. However, we believe that some of the AI UX principles we've covered here will stay the same. At least for this year :)

To make sure you're designing great AI products, make sure to: 

  1. Identify a real user need
  2. Scope the AI into smaller bites
  3. Set clear expectations for what the AI can and cannot do
  4. Collect the right data transparently and responsibly
  5. Tell users how it works - whenever possible
  6. Proactively manage errors
  7. Give the user control
  8. Use clear and simple language
  9. Collect user feedback
  10. Make it safe & easy to try


Need some help with your AI product?

UX studio has successfully partnered with over 250 companies worldwide. If there’s a challenge we can help you tackle, get in touch with us to discuss your AI UX strategy, or UX/UI design needs. Check out our AI UX service here.

Credits
This blog post was written by Laura Sima, product designer.
Fact-checking by Dan Jecan, UX researcher.
Proofreading by Dr. Johanna Székelyhidi, marketing manager.