Power BI + MCP Tutorial: AI-Powered Report Development Without Data Leaks or Additional Cost
In the last post I promised a tutorial on how to set up a Power BI connection to LLMs using an MCP server. This page provides a step-by-step guide to using the Model Context Protocol (MCP) to connect Power BI to AI models for development while maintaining data privacy.
1. Understanding the Core Concept: What is MCP?
The Model Context Protocol (MCP) acts as a “translator” or bridge. While a standard AI model (like ChatGPT) knows only what it learned from its internet training data, MCP allows an AI to:
- Query your specific Power BI semantic model.
- Understand your actual column names and table relationships.
- Make changes directly to your file (e.g., creating measures, adding descriptions).
2. Setup: Connecting Power BI to AI
To start using AI as a co-developer, you need to set up an MCP server.
Option A: Using Claude or ChatGPT Desktop (Easiest)
- Download Claude Desktop.
- Access Config: Go to Settings > Developer > Edit Config.
- Add MCP Server: Paste the configuration for the Power BI Modeling MCP server (available on GitHub) into the config file; a sketch of the format follows these steps.
- Connect: Once enabled, you can type “Connect to my Power BI desktop file” in the chat.
- Verify: The AI should list your open .pbix files and ask which one to work on.
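For orientation, Claude Desktop registers MCP servers in its claude_desktop_config.json file under an mcpServers key. The entry below is only a sketch: the server name is arbitrary, and the actual command and arguments come from the Power BI Modeling MCP server's GitHub README.

```json
{
  "mcpServers": {
    "powerbi-modeling": {
      "command": "<command from the server's README>",
      "args": ["<arguments from the server's README>"]
    }
  }
}
```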
Option B: Using Visual Studio Code
- Install the Extension: Search for and install the Power BI MCP extension (a manual workspace-level config is sketched after these steps).
- Enable Copilot: Ensure you have GitHub Copilot and Copilot Chat enabled.
- Agent Mode: Use the chat in “Agent Mode” so the AI can not just read your model but actually modify it.
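If you prefer to register the server yourself rather than rely on an extension, recent VS Code versions can load MCP servers from a workspace file (commonly .vscode/mcp.json). Again, this is a hypothetical sketch; the real command comes from the server's README.

```json
{
  "servers": {
    "powerbi-modeling": {
      "command": "<command from the server's README>",
      "args": ["<arguments from the server's README>"]
    }
  }
}
```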
3. Practical Use Cases
Once connected, you can perform development tasks in seconds:
- Bulk Measures: Ask: “Write common time intelligence measures for sales and store them in my measure table.” A sketch of the kind of DAX this produces follows this list.
- Documentation: Ask: “Add descriptions to all my measures based on their DAX logic.”
- Model Exploration: Ask: “List all tables and identify potential missing relationships.”
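For example, a time-intelligence request like the one above might yield measures along these lines. The table and column names (Sales[Amount], a marked 'Date' table) are assumptions for illustration; the AI would use your model's actual names.

```dax
-- Assumed base measure over a hypothetical Sales[Amount] column
Total Sales = SUM ( Sales[Amount] )

-- Year-to-date total; requires a marked date table
Sales YTD = TOTALYTD ( [Total Sales], 'Date'[Date] )

-- Same period last year, for year-over-year comparison
Sales PY = CALCULATE ( [Total Sales], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
```

As section 6 notes, always review generated DAX before publishing.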
4. The Data Privacy Framework
Sharing your semantic model with AI raises security questions. You have three main ways to handle this (a local-model sketch follows the table):
| Option | Setup | Privacy Level | Requirements |
|---|---|---|---|
| Local Model | Using tools like LM Studio or Ollama | Highest (Data never leaves your PC) | High-end GPU / Powerful Hardware |
| Self-Deployed | Azure AI or Amazon Bedrock | High (You control logging & retention) | IT/Operational overhead |
| Managed Model | OpenAI, Claude, or Gemini | Medium/Variable (depends on the terms of service) | Business/Enterprise plan with data-protection terms |
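To illustrate the first row: with Ollama, the model is downloaded and run entirely on your machine, so prompts (and any schema or data inside them) never leave your PC. The model name below is just an example.

```bash
# Download a local model, then chat with it; nothing is sent to a cloud API
ollama pull llama3.1
ollama run llama3.1
```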
5. Critical Privacy Steps (Don’t Skip)
If you are using managed models (like Claude or ChatGPT), follow these rules to avoid leaks:
- Check Your Plan: Free/Pro plans often use your data for training by default. Enterprise/Team plans usually have “Zero Data Retention” agreements.
- Toggle Off “Improve Model”: In Claude or ChatGPT settings, go to Privacy and ensure “Help improve [Model Name]” is turned OFF.
- Audit Traces: You can use services like LangSmith to see exactly what data flows between your Power BI file and the AI, to ensure no confidential row-level data is being sent.
6. Fundamentals for Success
AI is a “yes person” and will try to give an answer even if it’s wrong. To get the best results:
- Use a Star Schema: AI understands structured models better than “spaghetti” relationships.
- Clean Naming: Use human-readable names (e.g., “Total Sales” instead of “TS_01”).
- Human Oversight: Always verify the DAX generated by the AI before publishing the report.
Final Thoughts
I’ll repeat what I wrote in my previous post: we’re in the awkward middle phase right now. The technology works, but it’s not yet refined enough to be fully autonomous. You still need expertise to guide it, validate it, and correct it when it goes off track.
The organizations that will benefit most are those that:
- Have clean, well-structured data to begin with
- Invest in training their teams to work effectively with AI
- Build proper governance around AI usage
- Treat AI as a tool that augments human expertise rather than replaces it
The ones that will struggle are those that expect AI to magically fix their poorly designed models or compensate for lack of fundamental data skills.
