AI as Your Chief of Staff

A practitioner's guide for corporate leaders

Shared by Dusan Senkypl

01

Microsoft Copilot: Where It Wins vs. Where It's a Constraint

Where Copilot genuinely wins

Where it's a genuine constraint — and what I do instead

Complex multi-step reasoning. Copilot is a single-turn assistant — you ask, it answers, done. It can't plan a 10-step analysis, execute it, review its own work, and iterate. Real examples from the last month alone:

Card-linked offers strategy

AI researched the entire CLO market (Cardlytics, Dosh, Figg, Kard), analyzed competitive positioning, modeled revenue scenarios, stress-tested assumptions, and produced a board-ready recommendation. It took 3 hours of autonomous work — deep web research, financial modeling, competitive analysis, second opinions from 3 different AI models. A consulting firm would charge $50-100K and take 4 weeks. Copilot can't even start this.

GDPR compliance research

Legal question about consent checkbox requirements across EU markets. AI read our current implementation, researched case law, analyzed 6 competitor approaches, produced a legal risk assessment with specific recommendations per market. Our legal team validated it and said "this is better than what we'd get from external counsel for a first pass."

Architecture review

Needed to understand how a specific checkout service works across 84 systems, 1,425 containers, 386 services. AI read the codebases, mapped dependencies, identified bottlenecks, and produced a technical architecture document. An engineering team would need 2 weeks to do this manually.

Candidate evaluation

For a senior hire, AI analyzed the candidate's output quality against our operating principles, compared with internal benchmarks, and produced a structured assessment. Not replacing human judgment — augmenting it with systematic analysis that no interviewer has time to do.

Mission statement development

Ran 4 rounds of iterative drafting with second opinions from 3 AI models at each round. Each model caught different blind spots. The final output was stress-tested against 12 counter-arguments before showing it to the team.

Cross-system orchestration. Copilot lives inside each M365 app. It can't read your email, check your calendar, pull Jira tickets, search Confluence, and synthesize all of that into a weekly briefing in one flow. My CoS setup does exactly this — 36 active projects, each with its own context, knowledge base, and workflow. AI moves between them seamlessly.

Deep research and second opinions. I run the same question through 3 different AI models (Claude, GPT, Gemini) and synthesize where they agree and disagree. Copilot gives you one answer from one model. That's not enough for decisions that matter. When all 3 models independently flag the same risk — that's high confidence. When they disagree — that's where the interesting strategic questions are.
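The synthesis step is mechanical once each model has answered. A minimal sketch of the consensus logic (the model names and risk labels are placeholders, not real API calls):

```python
from collections import Counter

def synthesize_opinions(answers: dict[str, set[str]]) -> dict:
    """Split risk flags from several models into consensus vs. disagreement.

    `answers` maps a model name to the set of risks it flagged. The names
    below (claude, gpt, gemini) are illustrative placeholders.
    """
    counts = Counter(risk for flags in answers.values() for risk in flags)
    n = len(answers)
    return {
        # Flagged by every model independently: high confidence.
        "high_confidence": sorted(r for r, c in counts.items() if c == n),
        # Flagged by only some models: the interesting strategic questions.
        "worth_debating": sorted(r for r, c in counts.items() if c < n),
    }

opinions = {
    "claude": {"churn risk", "pricing pressure"},
    "gpt":    {"churn risk", "regulatory delay"},
    "gemini": {"churn risk", "pricing pressure"},
}
result = synthesize_opinions(opinions)
```

In practice the "answers" are free-text model outputs that need normalizing first; the point is that agreement and disagreement get surfaced explicitly instead of relying on one model's single answer.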

Custom workflows. I have "skills" — reusable workflow templates. "Run the weekly CEO review" triggers a 15-step process: pull Asana tasks, read 5/15 reports, assess against operating principles, draft newsletter. Copilot can't do this.

Raw data + AI = democratized analytics. When you give AI access to raw data — purchases, product analytics, conversion funnels, pricing data — something magical happens: business owners stop waiting for analysts. A category manager can ask "show me which promotions in DE actually drove incremental volume vs. just pulled forward demand" and get an answer in minutes, with charts, directly from the raw data. The traditional model — business person has question, requests analyst, analyst queues it, delivers report 2 weeks later — is dead. Give people raw data access and AI, and they'll figure out the rest. The people closest to the business ask the best questions — they just never had the tools to answer them. Now they do.
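The category manager's question above ("incremental volume vs. pulled-forward demand") has a simple first-pass shape once you have raw weekly sales. A toy sketch of that check, not the author's actual analysis:

```python
def promo_incrementality(weekly_units: list[float], promo_week: int) -> dict:
    """Rough promo incrementality on a weekly unit-sales series:
    compare the promo week against the pre-promo baseline, then subtract
    the post-promo dip (demand that was merely pulled forward)."""
    baseline = sum(weekly_units[:promo_week]) / promo_week
    lift = weekly_units[promo_week] - baseline
    giveback = max(0.0, baseline - weekly_units[promo_week + 1])
    return {
        "baseline": baseline,
        "lift": lift,
        "pulled_forward": giveback,
        "incremental": lift - giveback,
    }

# Four normal weeks, a promo spike, then a dip as stocked-up buyers sit out.
series = [100, 102, 98, 100, 180, 70]
result = promo_incrementality(series, promo_week=4)
```

A real analysis would control for seasonality and competitor activity; the point is that the question maps directly onto the raw data, so the person asking it no longer needs an analyst queue.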

The exception process I'd design

Don't call it "exceptions." Call it "approved tools for specific use cases." Frame it as risk management, not rebellion.

02

Minimum Connectivity for Chief of Staff Behavior

In order of impact:

  1. Email (read-only is enough to start) — 60% of the value. AI reads your inbox, categorizes by urgency, surfaces what needs your attention. Without this, you're still manually triaging.
  2. Calendar — AI needs to know your schedule to be useful. "Prep me for my 2pm with the retail partner" requires knowing what's at 2pm.
  3. Task/project management (Planner, Jira, Asana — whatever you use) — This is where AI goes from reactive to proactive. "What's overdue? Who hasn't reported? What's at risk?" requires access to the task system.
  4. Documents/SharePoint — AI needs to read your strategy docs, pricing frameworks, previous analyses. Without this, it can't build on institutional knowledge.
  5. Data/BI (Power BI, Excel) — For "what happened and why" questions. Hardest to connect but most transformative.
Minimum viable setup: Email + Calendar + one document store. That gets you 70% of the value.

"If I have access, AI has access" is the principle. AI operates under YOUR credentials, YOUR permissions. It doesn't see anything you can't see. Frame it this way for IT security — it's not a new attack surface, it's an automation layer on existing access.
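The connections above are typically declared in an MCP client config (for Claude Code, a `.mcp.json` file). A minimal sketch: the `mcpServers` key follows the real MCP client convention, but the three server package names below are hypothetical placeholders, not published connectors.

```json
{
  "mcpServers": {
    "email":     { "command": "npx", "args": ["-y", "example-outlook-mcp"] },
    "calendar":  { "command": "npx", "args": ["-y", "example-calendar-mcp"] },
    "documents": { "command": "npx", "args": ["-y", "example-sharepoint-mcp"] }
  }
}
```

Each server runs locally under your user session, which is exactly what makes the "if I have access, AI has access" framing hold.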
03

Three Questions That Corner HQ Into Action

These work because they force specificity and timelines, not strategy theater:

Question 1

"What is the measurable productivity improvement we expect from our current AI investment, and by when?"

This forces them either to admit they don't have targets (which means they're spending without accountability) or to state targets, at which point you can say: "great, let me run an experiment to hit them faster."

Question 2

"If a competitor in our category achieves 30% faster speed-to-market through AI tooling we've restricted, what is our response plan?"

This reframes the risk. Right now "risk" means "what if AI does something bad." This puts the risk on the other side: "what if NOT using AI does something bad." In FMCG, speed-to-shelf is everything. Make them feel that risk.

Question 3

"Can I run a 30-day controlled experiment with [specific tool] on [specific use case] with [specific success metric], reporting results to [specific person]?"

This is impossible to say no to without looking like you're blocking innovation. It's time-bound, measurable, governed, and transparent. If they say no, ask them to put the reason in writing. That usually changes the answer.

04

Three Things to STOP This Quarter

1. Stop weekly status reporting as a document exercise

If people spend 2-3 hours writing status reports that someone else reads in 30 minutes, the process is broken. Replace it with a structured 15-minute input (5 bullet points: what happened, what didn't, what's next, what's blocked, one achievement): a roughly 10:1 reduction in writing effort. AI synthesizes the rest.

I do this with 84 people reporting weekly — and the output isn't just a summary. My system compares trends across the last 4 weeks per person: Are they progressing or stalling? Is their energy going up or down? It maps each person's behavior against our 5 operating principles and values — automatically, every week. Over time you get a rolling assessment: "This person has been consistently strong on Extreme Ownership but declining on Speed Over Comfort for 3 weeks." That's a coaching signal no annual review can give you.

The result is a complete leadership package ready for managers: per-person trend analysis, department-level health, cross-departmental patterns, escalation flags, and AI adoption tracking — all generated from 15 minutes of each person's input.
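The trend check that produces the coaching signal is simple to express. A toy illustration (the principle names match the document, but the 1-5 weekly scoring scale and the flagging rule are assumptions, not the real system):

```python
def coaching_signals(history: dict[str, list[int]], window: int = 3) -> list[str]:
    """Flag operating principles whose weekly scores have declined for
    `window` consecutive weeks."""
    signals = []
    for principle, scores in history.items():
        recent = scores[-(window + 1):]
        # Strictly decreasing across the whole window -> coaching signal.
        if len(recent) == window + 1 and all(a > b for a, b in zip(recent, recent[1:])):
            signals.append(f"{principle}: declining for {window} weeks")
    return signals

history = {
    "Extreme Ownership":  [5, 5, 4, 5, 5],   # noisy but stable
    "Speed Over Comfort": [4, 5, 4, 3, 2],   # steady decline
}
flags = coaching_signals(history)
```

The scoring itself comes from a model's written assessment of each report; the rolling comparison on top of it is what no annual review can replicate.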

2. Stop alignment meetings that exist because information doesn't flow

Most "sync" meetings exist because systems don't talk to each other. If AI can read email, tasks, and documents — it can generate the alignment brief. The meeting becomes 15 minutes of decisions, not 45 minutes of updates. Kill the update meetings. Keep the decision meetings.

3. Stop manual post-mortem analysis on pricing/promo

If you're doing promo post-mortems manually — pulling data, comparing to benchmarks, writing slides — that's exactly the kind of structured analytical work AI does better and faster. Build it as a repeatable skill/template: input = promo parameters + results data, output = structured analysis with recommendations. Run it every time, automatically.
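The skill/template idea can be sketched as a function with a fixed input and output shape, so every promo cycle is analyzed identically. The field names and thresholds below are hypothetical, not the author's actual template:

```python
from dataclasses import dataclass

@dataclass
class PromoResult:
    promo_id: str
    discount_pct: float
    units_sold: int
    benchmark_units: int  # comparable past promos

def post_mortem(p: PromoResult) -> dict:
    """Same inputs every time, same structure out: that consistency is
    what lets institutional memory build automatically."""
    vs_benchmark = p.units_sold / p.benchmark_units - 1
    return {
        "promo": p.promo_id,
        "vs_benchmark_pct": round(vs_benchmark * 100, 1),
        "verdict": "beat benchmark" if vs_benchmark > 0 else "missed benchmark",
        "recommendation": ("repeat with similar mechanics"
                           if vs_benchmark > 0.05 else "rework discount depth"),
    }

report = post_mortem(PromoResult("DE-W12", discount_pct=15.0,
                                 units_sold=13200, benchmark_units=12000))
```

In the real workflow the "why" narrative comes from the model reading the numbers; the deterministic scaffold above is what keeps each run comparable to the last.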

05

Ship in 14 Days — Undeniable Value

What to ship: AI-powered weekly business briefing for your direct reports.

Every Monday morning, your 6-8 direct reports get a personalized briefing: their team's key metrics from last week, open action items, upcoming deadlines, flagged risks, and 3 questions you want answered by Friday. Generated automatically from email, tasks, and BI data.

Why this works

Definition of done (smallest version)

06

Operating Rulebook for "One Owner per Outcome"

Rule 1: Every deliverable has exactly one name on it

Not a team. Not a function. A person. "Who owns the DACH pricing review?" has one answer. If the answer is "the pricing team" — you don't have an owner.

Rule 2: The owner decides. Others advise.

The owner can consult anyone, but the decision is theirs. If they want consensus, they can seek it. But they don't need it. "I consulted finance and supply chain, and I decided X" is a complete sentence.

Rule 3: No committees for execution

Committees are for governance (audit, risk, compliance). Everything else gets an owner. If you need a "steering committee" for a project, the project doesn't have clear enough ownership.

Rule 4: Escalation = owner's choice, not default

The owner escalates when they're stuck, not when it's a big decision. Big decisions made by the right owner are fine. Small decisions escalated to the wrong level are not.

Rule 5: AI amplifies the owner, not the process

The danger is real — AI can generate more slides, more analysis, more options, more documents. That accelerates bureaucracy. The rule: AI works FOR the owner to make their decision faster, not for the committee to have more material to discuss.

Each owner gets their own AI workspace. AI knows the owner's context, priorities, and decision history. It prepares recommendations, not options. "Here's what I'd do and why" — not "here are 5 alternatives for your consideration."
07

Your CLAUDE.md Equivalent — The BU/Cluster Constitution

Here's what mine contains, adapted for a BU:

## Identity
- Who we are, what we manage, key metrics
- Current strategic priorities (max 5)
- What "good" looks like this quarter (specific numbers)

## Operating Principles
- Your non-negotiable rules
  # e.g., "simplicity first", "no laziness", "goal-driven execution"
- Adapted for your context:
  # e.g., "retailer impact first", "speed over perfection for <€50K decisions"

## Tone & Communication
- How we communicate: direct, data-backed, no hedging
- What language to use with different audiences (HQ, retailers, team)
- What's forbidden: corporate jargon, unsubstantiated claims, passive voice

## Risk Rules
- What requires human approval (pricing above X, commitments above Y)
- What AI can do autonomously (internal analysis, drafts, data synthesis)
- PII/confidential data handling rules

## Verification Requirements
- Financial numbers: always cross-check against source system
- Market claims: require at least 2 independent sources
- Recommendations to HQ: always get second opinion from different AI model
- Customer-facing content: human review mandatory

## Workflow
- Plan before act (always)
- Second opinion on important outputs
- Verify before declaring done

This file is read by AI at the start of every session. It's not a policy document that sits in SharePoint — it's active, living instructions that shape every interaction. Update it weekly based on what goes wrong.
08

Forcing "Plan Before Act"

This is the hardest behavioral change. Here's what actually works:

Non-negotiable steps in my workflow

  1. Plan mode is default. For anything with 3+ steps or real consequences, AI enters plan mode first. It thinks through the approach, identifies risks, and presents the plan before doing anything. I approve or revise. Only then does execution start.
  2. "What's your plan?" is the first question. When someone brings me a task, I don't say "do it." I say "what's your plan?" If they don't have one, we make one. Same with AI — I never start with "do X." I start with "plan how to do X."
  3. Second opinion before shipping. Every important output gets reviewed by a different AI model. Not because the first one is wrong — but because different models catch different blind spots. This forces a pause between "I have output" and "I'm done."
  4. Kill the anti-pattern: "I asked ChatGPT and it said..." is the enemy. The output of a single AI query is a starting point, not an answer. The rule: if you're going to use AI output in a decision, you need to show the prompt, the output, and your judgment on top of it.

How to enforce with teams

09

First Skills to Standardize

Skill 1: Meeting → Actions Synthesis

Input: meeting notes (even rough ones), attendee list, context

Output: structured action items with owners, deadlines, and dependencies + follow-up email draft

Why first: everyone has meetings, everyone loses actions, the pain is universal and immediate. Time saved: 15-20 min per meeting × dozens of meetings per week across the team.
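The structured half of this skill is easy to make deterministic. A toy extractor, assuming a tagging convention ("ACTION: ... @owner by <date>") that is an illustration, not the author's format; in practice the model also pulls actions out of free-form prose:

```python
import re

# Matches lines like "ACTION: send price list @anna by Friday".
ACTION = re.compile(
    r"^\s*(?:TODO|ACTION):\s*(?P<task>.+?)\s*@(?P<owner>\w+)"
    r"(?:\s+by\s+(?P<due>\S+))?\s*$",
    re.MULTILINE,
)

def extract_actions(notes: str) -> list[dict]:
    """Pull explicitly tagged action items out of rough meeting notes."""
    return [m.groupdict() for m in ACTION.finditer(notes)]

notes = """
Discussed Q3 pricing.
ACTION: send revised DE price list @anna by Friday
TODO: confirm retailer meeting slot @marek
"""
actions = extract_actions(notes)
```

The extracted items then feed the follow-up email draft; owners and deadlines are the fields that most often get lost after a meeting.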

Skill 2: Promo/Pricing Post-Mortem

Input: promo parameters, actual results, benchmark data

Output: structured analysis (what worked, what didn't, why, recommendation for next time)

Why second: this is high-value, high-frequency in FMCG. Every promo cycle generates learnings that get lost. Making this a repeatable skill means institutional memory builds automatically.

Skill 3: Retailer/Customer Negotiation Prep

Input: retailer profile, historical performance, current terms, your objectives

Output: negotiation brief with talking points, BATNA analysis, anticipated pushback + responses, recommended opening position

Why third: this directly impacts commercial outcomes. A well-prepped negotiation versus an underprepared one is worth real money.

How to keep skills reusable

10

Why Claude Code Specifically

And why it sidesteps your Microsoft problem

It runs locally on your machine. It's a CLI tool — runs in your terminal, on your laptop. No cloud portal, no browser tab. It operates under YOUR user session with YOUR credentials. What you can access on your computer, Claude Code can access. Your files, your email (via MCP connectors), your calendar, your task manager, your data — if it's on your machine or reachable from it, AI can read it and act on it.

It's agentic, not conversational. This is the category difference. Copilot answers questions. Claude Code executes multi-step workflows autonomously. "Process this week's reports from 84 people, assess them against our 5 operating principles, identify company-wide themes, flag escalations, and generate an HTML dashboard" — and it does all of that. Reads files, calls APIs, writes outputs, reviews its own work, iterates. Not one prompt-response cycle — a full autonomous workflow.

It has persistent memory and skills. Every project has a CLAUDE.md file — essentially a constitution that tells Claude the context, goals, rules, and accumulated learnings. When I open a project, Claude already knows what we did last session, what decisions were made, what patterns to follow. Skills are reusable workflow templates — invoke by name, execute consistently. This is institutional memory that gets better over time.

It plays nice with Microsoft. Claude Code doesn't replace M365 — it orchestrates it. Through MCP servers (open protocol connectors), it can read Outlook, access SharePoint, query calendars. Your Microsoft data stays where it is. Claude Code just becomes the intelligent layer that connects it all. No migration, no replacement — just amplification.

The IT security argument: Claude Code runs locally in your terminal. Your files stay on your machine; only the context the agent is actually working with is sent to the model API, and nothing else leaves unless you explicitly connect additional external services. That is materially different from pasting whole documents into ChatGPT's web interface, and the distinction matters for compliance.
  • $100-200 per month per user
  • 5+ hrs saved weekly per leader
  • ROI: embarrassingly obvious
11

Real Example: CEO Weekly Review

How I run a company of 1,200 people through AI weekly.

  • 84 reports processed weekly
  • 13 departments synthesized
  • 30-40 minutes total processing
  • 15 min to review the entire company

The input

Each person submits a weekly 5/15 report via Asana (takes them 15 minutes to write). That's it — their raw input into the system.

What AI does with it (fully automated)

  1. Fetches all 84 reports from Asana via API — no manual copy-pasting
  2. Per-person operating principles assessment — each person is evaluated against 5 operating principles (Extreme Ownership, Speed Over Comfort, Impact Obsessed, Simplify to Scale, Disciplined). Not a checkbox — a written assessment with specific evidence from their report.
  3. Department-level synthesis — 13 departments, each gets a section: key achievements, blockers, escalations, notable patterns
  4. Company-wide theme extraction — AI identifies cross-cutting themes that no single person can see.
  5. Escalation flagging — 31 items last week that need CEO attention, ranked and categorized
  6. AI adoption tracking — who's building with AI, who's using it for productivity, who hasn't started. The gap is widening weekly and the system makes it visible.
  7. HTML dashboard — interactive, department-by-department drill-down. I review the entire company in under 15 minutes.
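The deterministic skeleton of the steps above can be sketched in a few lines. The per-person assessments and theme extraction are model calls in the real system; the field names below are assumptions for illustration:

```python
def weekly_review(reports: list[dict]) -> dict:
    """Group weekly reports by department and collect escalations
    for the CEO view. Toy version of the pipeline's plumbing."""
    by_dept: dict[str, list[dict]] = {}
    escalations = []
    for r in reports:
        by_dept.setdefault(r["department"], []).append(r)
        if r.get("blocked"):  # anything blocked is an escalation candidate
            escalations.append({"person": r["person"], "issue": r["blocked"]})
    return {
        "departments": {d: len(rs) for d, rs in by_dept.items()},
        "escalations": sorted(escalations, key=lambda e: e["person"]),
    }

reports = [
    {"person": "Anna",  "department": "Sales", "blocked": "pricing sign-off"},
    {"person": "Marek", "department": "Sales", "blocked": None},
    {"person": "Petra", "department": "Data",  "blocked": "warehouse access"},
]
summary = weekly_review(reports)
```

Everything judgment-based (principle assessments, theme extraction, ranking) sits on top of this plumbing as model calls; the plumbing is what makes the run repeatable every Monday.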
Last week's cross-cutting themes surfaced by AI:
  • "AI adoption velocity is highly uneven across departments" — with a 3-tier breakdown
  • "Data quality is the single biggest infrastructure bottleneck for AI scaling" — flagged by 6 independent teams
  • "Sales force structure under fundamental pressure" — with 5 converging signals from different departments

What this gives me that no human analyst could

What this means for you

You probably have 6-8 direct reports, each managing teams across DACH & CEE. If each submits a structured weekly input (15 minutes of their time), AI can give you:

The system took about 2 weeks to set up from scratch. The weekly run is now fully automated. The value is compounding — each week's data makes the historical picture richer.
12

The Meta-Point

The biggest unlock isn't any single tool or technique. It's the shift from "AI as a tool I use sometimes" to "AI as an operating layer that's always on."

My CoS doesn't wait for me to ask questions. It processes my inputs, tracks my projects, remembers my preferences, and builds institutional memory across sessions. Every correction I make, it learns from. Every project I finish, it carries the learnings forward.

That's the gap between Copilot (a tool) and what I've built (an operating system). You can get there even in a Microsoft environment — you just need to be strategic about which seams to push on first.

The starting point isn't technology. It's one person (you) deciding to run their own operating rhythm through AI for 2 weeks, proving the value with real outputs, and then expanding from there.

  • The tool: Claude Code
  • $200 per month investment
  • 14 days to set up
  • A fundamentally different way of operating