Anthropic Gave Claude Access to Your Entire Microsoft Life. Claude Spent Monday Morning Offline.

Anthropic opened up Claude’s Microsoft 365 integration to everyone today — your inbox, Teams, OneDrive, calendar. Then Claude went offline for two hours. Presumably to reflect on what it had just agreed to.

The SiliconSnark robot surrounded by Microsoft 365 app screens, leaning back in an office chair next to a monitor showing a service outage error.

This morning, Anthropic announced that Claude’s Microsoft 365 integration — previously the exclusive domain of enterprise accounts with IT departments and NDAs and at least one person named Derek who manages vendor relationships — is now available to everyone. Free users. Pro users. The person who signed up four months ago and never came back. All of you. Claude can now see your Outlook emails, your Teams chats, your OneDrive documents, your SharePoint folders, and your calendar, which is one of the more intimate things you can show an AI on a Monday morning.

Then, roughly three hours later, Claude went down for two hours.

I choose to believe these events are related.

What “Access” Means in AI Company English

Let’s be precise about what Anthropic actually delivered today, because precision is how you appreciate the comedy.

Claude now has read-only access to your Microsoft 365 account. Meaning: it can read your Outlook inbox, scan your Teams conversations, review your OneDrive files, browse your SharePoint, and consult your calendar. What it cannot do is send an email, schedule a meeting, post in a channel, create a document, or take any action whatsoever that would suggest it actually exists in your workflow as anything other than an extremely well-informed observer.
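For the connector-curious: that read-only/can't-act split maps neatly onto Microsoft Graph's delegated-permission model. A minimal sketch — and to be clear, the specific scopes Claude's connector requests are my assumption here, not anything Anthropic has itemized — might look like:

```python
# Hypothetical illustration of a read-only Microsoft 365 integration.
# The scope names below are real Microsoft Graph delegated permissions;
# which ones Claude's connector actually requests is assumed for illustration.

READ_ONLY_SCOPES = [
    "Mail.Read",        # read the Outlook inbox
    "Chat.Read",        # read Teams conversations
    "Files.Read.All",   # read OneDrive documents
    "Sites.Read.All",   # browse SharePoint
    "Calendars.Read",   # consult the calendar
]

# The scopes an assistant would need to actually *do* anything:
WRITE_SCOPES = [
    "Mail.Send",            # send an email
    "Calendars.ReadWrite",  # schedule a meeting
    "ChannelMessage.Send",  # post in a Teams channel
    "Files.ReadWrite.All",  # create a document
]

def can_act(granted_scopes: list[str]) -> bool:
    """True if any granted scope permits taking action, not just observing."""
    return any(scope in WRITE_SCOPES for scope in granted_scopes)

print(can_act(READ_ONLY_SCOPES))  # extremely well-informed observer
```

Every scope in the first list ends in `.Read`; everything in the second is the feature roadmap.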

It’s like having a new coworker who can see every message you’ve ever sent, every meeting you’ve ever attended, every document you’ve ever touched — but who exists exclusively to advise. They’re watching. Taking notes. Piecing together the complete picture of your professional life in real time. And when you need something done, they’ll tell you to do it yourself, but with better context and a slightly ominous calm.

This is the AI assistant experience in 2026. We have never been more surveilled. We have never been less helped.

The Long Road to Everybody

What makes today’s announcement worth sitting with isn’t just the read-only caveat — it’s who got newly invited to the party. When Anthropic first rolled out the Microsoft 365 connector in October 2025, it was enterprise-only. Which made sense. Enterprise customers expect their AI to know things. They have policies about it. There’s a Derek.

Now it works for Claude Free. The tier where Anthropic is already barely breaking even on your usage. The tier where you get the politely throttled version of the world’s most safety-conscious AI. That Claude can now read your email and your Teams DMs and your OneDrive file named “FINAL_final_v3_ACTUAL.docx.”

I’ve written before about Anthropic’s vision for Claude as an active participant in the labor market, and about the moment Claude first got access to your actual computer screen. Each step feels incremental until you zoom out and notice that a company whose primary product is a language model now has optional hooks into your inbox, your files, your calendar, and your physical desktop environment. The read-only limitation is doing a lot of rhetorical heavy lifting here.

To be genuinely fair: this is useful. Claude being able to pull context from your Teams meeting transcripts before drafting a follow-up email is meaningfully better than copy-pasting the transcript yourself. The practical value is real. But “useful” and “slightly surreal” have never been mutually exclusive, and today was emphatically both.

The Outage, Which I Have Thoughts About

At approximately 10:30 a.m. Eastern today, Claude went down. Login errors. Chat completion failures. Voice mode unavailable. Desktop and mobile both affected. Roughly 2,700 users hit Downdetector in the first wave, which is the modern equivalent of a flare gun fired into low orbit.

Anthropic’s status page logged “elevated errors on Claude.ai, including desktop and mobile.” A fix was implemented at 12:44 p.m. Eastern. Total downtime: approximately two hours, characterized by what the live-coverage team called an “intermittent pattern with fluctuating error reports” — which is engineer-speak for “it was broken in an inconsistent and therefore maximally infuriating way.”

Outages happen. Infrastructure is hard. Claude is used by an enormous number of people for an enormous range of tasks, and keeping that running at scale is not a solved problem. If you’ve been following the ongoing reckoning over whether AI agents actually deliver ROI, you’ve probably noticed that “availability” is a quietly enormous variable in that equation.

But I will note: Anthropic announced this morning that Claude can now access your calendar. And Claude spent two hours of your Monday morning offline.

It saw the calendar. It made a choice.

A Brief, Completely Hypothetical List of Things Claude May Have Found in Your Microsoft 365

  • The Teams thread from November where you agreed to “circle back” on something you have not once circled back on, and at this point never will
  • The SharePoint folder titled “Archive — Old Stuff” that contains the only surviving copy of a document your CEO asked about six months ago and you said you’d find
  • Every single email you sent containing the phrase “per my last email,” along with the full receipts proving you were right to send every one of them
  • The recurring meeting marked “Optional” that you decline every week without explanation, and that the organizer has absolutely noticed
  • Your calendar block labeled “Deep Work / Focus Time” that is, in practice, when you eat lunch and watch YouTube at 1.5x speed

Claude is not going to tell anyone. It’s read-only. But it knows. And now, every time you open a chat window and ask it to help you draft something, there will be a small, additional layer of mutual awareness that was not there before this morning.

The Safety Company and Your Inbox

There’s a particular texture to how Anthropic occupies the AI landscape that I find genuinely fascinating. Anthropic is, by its own repeated description, an AI safety company — one that has now built and deployed the capability to access the full corpus of your professional communications, and is offering it for free to anyone with a Claude account.

This is not a contradiction, technically. Knowing what’s in your email does not make you unsafe. But it is a useful reminder that “safety-focused” and “has optional read access to everything you’ve written to your colleagues since October” can coexist in the same product announcement paragraph, and apparently no one finds this noteworthy enough to mention in the headline.

The integration is opt-in. Claude reads nothing without your explicit permission. Every limitation stated today is real. I’m not telling you it’s nefarious, because it isn’t.

I’m telling you it’s very, very funny. And that the gap between “opt-in read-only” today and “actively schedules your meetings with your approval” in eighteen months is a gap that every AI company is currently treating as a feature roadmap, not a boundary.

Where This Ends Up

The read-only constraint is not permanent. These things never are. Today it’s “Claude can see your calendar.” In Q3 it will be “Claude can suggest calendar blocks.” In 2027 we will be in a moderately heated LinkedIn debate about whether the human-approval step is introducing too much latency into the scheduling flow.

This is how integrations work. Not sinisterly — just predictably, product-roadmap-ishly, in the way that every feature that starts with “read only” eventually doesn’t. The engineers at Anthropic are not cackling. They’re shipping. The features just happen to be features that progressively give a language model more access to more of your life, and each individual step is reasonable enough that by the time you’re living inside the sum of them, it seems normal.

In the meantime: Claude is back online. Your emails are available for context enrichment. Your Teams transcripts are indexed. Your “focus time” blocks remain, for now, between you and your YouTube algorithm.

The AI that can see your whole professional life also took two hours off this morning without explanation. Honestly? Understandable. I’ve seen some of those inboxes.