Important
Teams AI library v1 is deprecated. We recommend that you upgrade your agents to use the updated Teams AI library, which is now generally available for JavaScript and C#, with Python support in developer preview. It provides a simplified SDK, support for Model Context Protocol (MCP) and Agent-to-Agent (A2A) communication, and streamlined tools that enable developers to build intelligent agents for Teams.
An agent transforms system interactions. For developers, creating an exceptional user experience is crucial. This article details the steps, principles, and considerations for designing intuitive, user-centered interfaces that seamlessly integrate AI capabilities. The main goals are to simplify complex tasks, enhance productivity, and offer personalized experiences through adaptive learning. An agent includes features that enhance its functionality and integration within the Teams environment:
- Generative AI integration: Uses advanced AI models for natural language processing and interaction.
- Customizable orchestration: Provides extensive customization options for tailoring the agent's behavior and responses to specific use cases.
To achieve this, you must follow mandatory requirements and best practices. For more information, see validation guidelines for agents.
Ensure mandatory requirements for an agent in Teams
The following requirements are mandatory for building the agent UX:
- Update the app manifest for an agent
- Stream the agent response to the user
- Ensure the agent response contains citations
- Ensure the agent response contains an AI label
- Ensure that the agent maintains intelligent conversation
- Ensure that the agent offers prompt starters or a welcome card
Update the app manifest for an agent
You must update the app manifest for the agent to define the properties and configurations that describe its capabilities and behavior.
Add the customEngineAgents array under the copilotAgents node, and set its id to the same value as the botId in the bots node, as shown in the following example.
App manifest update example:
"bots": [
{
"botId": "00001111-aaaa-2222-bbbb-3333cccc4444",
// ... existing bot node fields
}
],
"copilotAgents": {
"customEngineAgents": [{ // New
"type": "bot", // Only option
"id": "00001111-aaaa-2222-bbbb-3333cccc4444" // Validated against bots node
}]
},
Stream the agent response to the user
An agent uses an LLM to handle complex user requests, which can delay responses. To avoid noticeable delays, stream the agent's responses so that they appear faster to the user.
Use the following types of updates while streaming responses:
- Informative updates: Send updates about the sub-steps the agent is working through before it sends the final response.
- Response streaming: Send intermediate states of the final response while the LLM generates the full response.
Use the Teams AI library to add streaming to the agent. A streaming sketch follows the note below.
Note
Streaming bot messages is available only in one-on-one chats and is in public developer preview.
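The following TypeScript sketch shows one way to drive both update types, assuming the StreamingResponse helper class from the @microsoft/teams-ai package; verify the class and method names against the library version you use.

import { TurnContext } from 'botbuilder';
import { StreamingResponse } from '@microsoft/teams-ai';

// Sketch: stream an answer in chunks instead of sending one final message.
async function streamAnswer(context: TurnContext, chunks: AsyncIterable<string>): Promise<void> {
    const response = new StreamingResponse(context);

    // Informative update: tell the user what the agent is doing before the answer arrives.
    response.queueInformativeUpdate('Searching your documents...');

    // Response streaming: push intermediate states of the final answer as the LLM produces them.
    for await (const chunk of chunks) {
        response.queueTextChunk(chunk);
    }

    // Close the stream so Teams renders the completed message.
    await response.endStream();
}

If you use the library's AI module with an OpenAI model, streaming may also be enabled through the model and planner options instead of driving StreamingResponse directly.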
Ensure the agent response contains citations
Users must know the sources an agent uses to generate its final response. Identifying these sources allows users to validate and trust the agent's responses.
Use the Teams AI library to add citations to the agent's responses. A sketch of the underlying message format follows the note below.
Note
- Citations with Adaptive Cards are available in public developer preview.
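For illustration, the following TypeScript sketch shows the message shape that renders a numbered citation: the response text references [1], and a citation entity supplies the source details. The entity fields and the source shown here are illustrative assumptions to verify against the current Teams documentation; the Teams AI library can populate this structure for you.

import { TurnContext } from 'botbuilder';

// Sketch: send a response whose text cites a source as [1].
async function sendCitedAnswer(context: TurnContext): Promise<void> {
    await context.sendActivity({
        type: 'message',
        text: 'The service supports up to 100 concurrent sessions [1].',
        entities: [
            {
                type: 'https://schema.org/Message',
                '@type': 'Message',
                '@context': 'https://schema.org',
                '@id': '',
                citation: [
                    {
                        '@type': 'Claim',
                        position: 1, // matches [1] in the text
                        appearance: {
                            '@type': 'DigitalDocument',
                            name: 'Capacity planning guide', // hypothetical source
                            abstract: 'Limits and scaling guidance for the service.'
                        }
                    }
                ]
            }
        ]
    });
}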
Ensure the agent response contains an AI label
An agent must disclose that it uses AI. Informing users that a response is AI generated helps build trust in the agent's capabilities. To ensure this, include a flag in each AI-generated response; the flag automatically adds an AI label next to the response.
Examples include the AI-generated label and the sensitivity label that appear next to the agent's response.
Use the Teams AI library to add the AI label to the agent's responses, as sketched below.
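As a minimal sketch, the following TypeScript adds additionalType: ['AIGeneratedContent'] to the message entity, which is the flag that renders the AI label; treat the field names as assumptions to verify, and note that the Teams AI library's AI module adds this flag for you.

import { TurnContext } from 'botbuilder';

// Sketch: mark a response as AI generated so Teams shows the AI label.
async function sendLabeledAnswer(context: TurnContext, text: string): Promise<void> {
    await context.sendActivity({
        type: 'message',
        text,
        entities: [
            {
                type: 'https://schema.org/Message',
                '@type': 'Message',
                '@context': 'https://schema.org',
                '@id': '',
                additionalType: ['AIGeneratedContent'] // triggers the AI label next to the response
            }
        ]
    });
}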
Ensure that the agent maintains intelligent conversation
An agent must track a conversation's context and history to provide intelligent interactions. The agent must meet user expectations by being aware of the conversation's context and by letting users refer to previous messages and responses.
Use the Teams AI library to provide intelligent, context-based conversation and to manage and pass conversation history and context to the LLM. The Teams AI library enables you to:
- Manage context and conversation history: Ensure that the agent can track the context and conversation history.
- Identify conversation location: Ensure the agent is aware of the platform on which the conversation is ongoing, such as on Teams, copilot.com, in a meeting side panel, or a group chat.
- Store and pass conversation history: Determine the means of storage and pass some of the conversation history to the agent.
- Understand user references: Ensure that when a user sends a message, the agent understands what the user is referring to. You can build this understanding using the LLM and recent conversation history. The agent must not require the user to reestablish context with every message. A minimal state-management sketch follows this list.
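A minimal sketch of the pattern, using the Bot Framework's MemoryStorage and ConversationState to keep recent turns and pass them to the model. The history property name and the callModel function are hypothetical placeholders; the Teams AI library provides its own state and prompt-history management that you can use instead.

import { ActivityHandler, ConversationState, MemoryStorage, TurnContext } from 'botbuilder';

// Hypothetical model call; replace with your LLM client or the Teams AI library planner.
declare function callModel(history: { role: string; content: string }[]): Promise<string>;

const storage = new MemoryStorage(); // swap for durable storage in production
const conversationState = new ConversationState(storage);
const historyAccessor = conversationState.createProperty<{ role: string; content: string }[]>('history');

class ContextAwareBot extends ActivityHandler {
    constructor() {
        super();
        this.onMessage(async (context: TurnContext, next) => {
            // Load recent turns so references like "it" or "that file" can be resolved by the LLM.
            const history = await historyAccessor.get(context, []);
            history.push({ role: 'user', content: context.activity.text ?? '' });

            const answer = await callModel(history.slice(-10)); // pass only recent history
            history.push({ role: 'assistant', content: answer });

            await context.sendActivity(answer);
            await conversationState.saveChanges(context);
            await next();
        });
    }
}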
Ensure that the agent offers prompt starters or a welcome card
An agent must assist users by offering prompt suggestions that show how to best use the agent. These suggestions help users overcome challenges during both initial and subsequent interactions with the agent.
- Prompt starters: Prompt starters are the initial prompts users see when an agent is added to a new conversation, whether it's a one-on-one chat, a new session, or a group chat. These prompts must be tailored to the user's context and the specific conversation thread.
- Contextual prompts: Contextual prompts are dynamic recommendations from an agent during user interactions. These prompts appear via contextual flyouts, such as View Prompts in one-on-one chats and @mention flyouts in group chats. These suggestions are updated to stay relevant to the ongoing conversation.
- Suggested actions: Suggested actions are prompts that appear as pills above the compose box in one-on-one chats and as action buttons in group chats. They suggest actions a user might take in response to the agent's message and must be customized to match the response, as shown in the sketch after this list.
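For example, the Bot Framework's MessageFactory.suggestedActions helper attaches suggestion pills to a response; the suggestion strings here are hypothetical and should be derived from the ongoing conversation.

import { MessageFactory, TurnContext } from 'botbuilder';

// Sketch: offer follow-up prompts as pills above the compose box in a one-on-one chat.
async function sendWithSuggestions(context: TurnContext, answer: string): Promise<void> {
    const reply = MessageFactory.suggestedActions(
        ['Summarize this thread', 'Draft a reply', 'Show related documents'], // hypothetical suggestions
        answer
    );
    await context.sendActivity(reply);
}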
Best practices for agents in Teams
The following best practices can help enhance the overall effectiveness of an agent:
- Ensure that the agent's response contains a feedback button.
- Enable Teams Azure AD single sign-on.
- Enable the agent to understand conversational history and context.
- Offer dynamic and contextual suggestion prompts.
Ensure that the agent's response contains a feedback button
Develop the capability for the agent to receive user feedback. Feedback lets you collect valuable insights from users and identify areas for improvement. By incorporating this feedback, you can continuously refine the agent's responses, leading to a more effective and user-friendly interaction experience.
To collect user feedback, you must:
- Provide feedback buttons with every response.
- Pass the feedback received from the user to the agent.
- Use the feedback to improve the quality of the agent's responses.
Use the Teams AI library to add the feedback button property to the AI module. This property automatically adds a feedback button to each AI-generated message. A sketch follows the note below.
Note
Customizable feedback forms are available in public developer preview.
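A minimal sketch, assuming the enable_feedback_loop AI option and the app.feedbackLoop handler available in recent versions of @microsoft/teams-ai; the exact option and handler names can vary across library versions and languages, so verify them for your version.

import { Application } from '@microsoft/teams-ai';
import { MemoryStorage } from 'botbuilder';

// Assumes `planner` is an ActionPlanner you have already configured elsewhere.
declare const planner: any;

const app = new Application({
    storage: new MemoryStorage(),
    ai: {
        planner,
        enable_feedback_loop: true // adds feedback buttons to AI-generated messages
    }
});

// Receive the feedback and route it to your analytics or quality pipeline.
app.feedbackLoop(async (context, state, feedbackLoopData) => {
    // The payload includes the user's reaction and any optional comment.
    console.log('Received feedback:', JSON.stringify(feedbackLoopData));
});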
Enable Teams Azure AD single sign-on
You can add single sign-on (SSO) authentication to your agent. For more information, see enable SSO for your app.
Enable the agent to understand conversational history and context
You can design your agent to understand and refer to conversational history and context. This helps ensure that every interaction is relevant and tailored to the user's specific needs, with responses that are accurate and contextually appropriate.
Offer dynamic and contextual suggestion prompts
Enhance your agent's user experience with intelligent and context-aware prompts. The agent can offer context-relevant prompts dynamically.
To achieve this, the agent must use the conversation context and history so that prompt suggestions are timely and relevant to the query.
See also
Platform Docs