Kristina Bachová

Power BI Specialist — Making BI Reliable, Scalable & Business-Trusted

I help companies turn messy BI into trusted, scalable reporting — fast.

Book a free discovery call

Industries I've worked across:

Logistics · Healthcare · Consultancy · IT

What I do

BI Operations

Keep your Power BI environment healthy, monitored, and running smoothly. Gateway management, performance tuning, incident resolution.

CoE Setup & Governance

Build the structure, best practices, and training that make Power BI self-service actually work across your organisation.

Scalable Reporting

Design dashboards and data models that grow with your business — not ones you'll have to rebuild in six months.

Not sure which package fits? Start with a Health Check — it's the fastest way to find out.

Get in touch

How It Works

1. Discovery Call

A free 30-minute call. You explain the problem; I tell you honestly whether and how I can help.

2. Proposal & Scope

I put together a clear proposal: what's included, what's not, timeline, and price. No surprises.

3. Delivery

Hands-on work, regular check-ins, and a handover that actually makes sense.

Package Details

Health Check

Best for: Teams unsure where BI risks are
Primary goal: Identify risks & priorities
Typical trigger: "We don't trust our reports"
Timeframe: 1–2 weeks
Price: €1.5k – €3k

What you get:

  • Executive-ready BI assessment
  • Risk scoring & heatmap
  • 30–60–90 day prioritised roadmap
Hands-on fixes · Governance & standards · Executive-ready outputs · Ongoing support

Primary ROI: Avoid wrong BI investments

Key risks avoided: Hidden data, security & trust risks

Take the free assessment →

Fill in the questionnaire first — then book your free follow-up call.

CoE & Operating Model

Best for: Organisations scaling BI
Primary goal: Make BI scalable
Typical trigger: "We're growing fast"
Timeframe: 6–8 weeks
Price: €6k – €12k

What you get:

  • Full governance framework & standards
  • Strategic BI roadmap
  • Executive-ready operating model & training plan
Hands-on fixes (Some) · Governance & standards (Full) · Executive-ready outputs · Ongoing support

Primary ROI: Enable scale without chaos

Key risks avoided: BI sprawl, shadow IT, bottlenecks

Fractional BI Ops

Best for: Ongoing BI leadership needs
Primary goal: Prevent future problems
Typical trigger: "We need senior oversight"
Timeframe: Ongoing
Price: €1k – €3k / month

What you get:

  • Continuous governance & oversight
  • Advisory & leadership support
  • Rolling roadmap & prioritisation
Hands-on fixes (Advisory) · Governance & standards (Continuous) · Executive-ready outputs (Some) · Ongoing support

Primary ROI: Avoid future rework & extra hires

Key risks avoided: Burnout, dependency on individuals

Frequently Asked Questions

Do we need new or upgraded Power BI licences?

Not necessarily. I'll assess your current licensing as part of any engagement and advise on what you actually need — not what a vendor wants to sell you.

How long does it take to set up a CoE?

A basic CoE framework can be in place in 6–8 weeks. Making it stick — training, adoption, governance — is an ongoing process, which is exactly what the Fractional BI Ops package supports.

Do you rebuild from scratch, or start from what we already have?

Always from what you have. A Health Check or Stabilisation engagement starts by reviewing your current setup, not replacing it.

Do you work remotely?

Yes. I work remotely with clients across Europe. Everything runs over video calls, shared screens, and collaborative tools — location is not a barrier.

What happens when the engagement ends?

Every engagement ends with clear documentation and a handover. If you want ongoing support, Fractional BI Ops is designed exactly for that.

Industry Template · Hotel

Hotel Industry BI Template for Opera PMS

A ready-to-use Power BI reporting template built around Oracle Opera PMS exports. Covers daily flash, reservations, market segments, package forecasting, and room statistics — all pre-modelled and styled for immediate deployment.

View template →
CoE & Governance

CoE Setup for a Global Medical Devices Company

Built a Power BI Centre of Excellence from scratch — governance framework, best-practice guidelines, training programme, and workspace structure. End result: business users could create their own reports confidently within guardrails.

Outcome: A structured governance foundation adopted by the BI team — clear standards that reduced ambiguity and enabled scalable self-service reporting.

View case study →
BI Ops & Troubleshooting

BI Operations Stabilisation for an IT Services Firm

Took over a chaotic Power BI environment — broken reports, performance issues, no governance. Triaged, fixed, and put a monitoring and support process in place.

Outcome: Report failures dropped by 80%. Team confidence restored.

Dashboard Development

Power BI Reporting for Port Inspections & Logistics

Designed and delivered end-to-end Power BI reports for port inspection KPIs, lab results, and certification tracking. Included RLS for regional access control and trained local superusers.

Outcome: Replaced fragmented Excel reporting with a single source of truth.

Technical Skills

Power BI

DAX · Power Query (M) · Data Modelling · Star Schema · RLS · Conditional Formatting · Calculation Groups · Incremental Refresh

Platform & Governance

CoE Setup · Tenant Administration · Gateway Management · Workspace Management · Premium Capacity Planning

Integration & Automation

SQL Server · Azure · SharePoint · PowerShell & REST API · ServiceNow

Microsoft Fabric

Dataflows · Fabric Capacity · OneLake

I'm currently building new showcase dashboards using public datasets. Check back soon — or follow me on LinkedIn for updates.

Kristina Bachová

I've spent over 10 years in business intelligence — starting as a Power BI developer in logistics, moving through healthcare and consultancy, and eventually into BI operations and governance leadership. Along the way I've built dashboards that people actually use, set up centres of excellence that stuck, and fixed BI environments that were quietly falling apart.

I went on maternity leave in 2024 and used the time to think clearly about what kind of work I actually want to do. The answer was obvious: the CoE and governance side — the part where you help an entire organisation get better at using data, not just one team.

Now I freelance. It lets me work on the problems I find genuinely interesting, for clients who are ready to take their BI seriously, without the politics of a big corporate structure.

"Freelancing isn't a gap-filler. It's a deliberate choice. I take on fewer clients so I can do better work for each one — and I'm always available for the right project."

Languages

English (Fluent) · Slovak (Native) · Czech (Native) · Portuguese (Learning)

Tech Stack

Power BI · DAX · Power Query · SQL Server · Azure · SharePoint · PowerShell · ServiceNow · Fabric · JIRA · Confluence
CoE

How to Set Up a Power BI Centre of Excellence — Step by Step

A practical guide to building a CoE that lasts — from getting executive buy-in to writing governance guidelines that people actually follow.

Read more →
Governance

Power BI vs Microsoft Fabric — What Actually Changed?

Dataflows, capacity, OneLake — here's what's real, what's hype, and what it means for your Power BI setup right now.

Read more →
DAX

5 DAX Functions Every BI Team Should Know (And Why)

Beyond SUM and COUNT. The functions that actually make your measures efficient, maintainable, and readable.

Read more →
Governance

Writing Power BI Best Practices That People Will Actually Follow

Guidelines only work if they're clear, relevant, and not buried in a SharePoint folder nobody visits.

Read more →

If you work with Microsoft Power BI, you've almost certainly heard this question over the past year:

"Do we still use Power BI, or are we supposed to move to Microsoft Fabric now?"

The short answer is straightforward: Power BI didn't go away. Fabric didn't replace it. But the context Power BI operates in has changed — and that matters for architecture, governance, and capacity planning.

This article breaks down what actually changed, what's mostly marketing, and how Power BI professionals should think about Fabric in practice today.

Power BI vs Fabric: The Correct Mental Model

Before Fabric, Microsoft analytics was a collection of loosely connected services:

  • Power BI
  • Azure Data Factory
  • Synapse
  • Dataflows
  • Separate storage layers

Fabric's goal is consolidation:

  • One platform
  • One storage layer
  • One capacity model

Power BI is now one workload inside Fabric, not a separate product line.

Think of Fabric as the house, and Power BI as one very important room inside it. Power BI remains the primary interface for business users — and the primary place where business logic is enforced.

What Actually Changed (And Matters)

1. OneLake Is Real — and It's the Biggest Shift

Fabric introduces OneLake, a single, tenant-wide data lake shared across Fabric workloads.

Why this matters in practice:

  • Data no longer has to be copied between tools
  • Multiple teams can access the same underlying data
  • Storage and compute are more clearly separated

Best-practice implications for Power BI:

  • Treat OneLake as a centralized data foundation, not a dumping ground
  • Keep transformation ownership clear (engineering vs BI)
  • Use Power BI semantic models to consume and shape data, not to store it redundantly

This reinforces a long-standing Power BI principle: centralize data, standardize logic, decentralize reporting.

Reality check: OneLake does not magically solve governance, data quality, or ownership. Without discipline, it can just as easily become a larger — and more expensive — data swamp.

2. Dataflows Didn't Disappear — They Evolved (Carefully)

Power BI Dataflows still exist. Fabric adds Dataflows Gen2, which:

  • Write directly to OneLake
  • Can be reused by non–Power BI workloads
  • Still use Power Query under the hood

What didn't change:

  • They're best suited for light to moderate transformations
  • They're not a replacement for full-scale data engineering pipelines
  • They require careful performance and dependency management

Practical guidance:

  • Use Dataflows for reusable, business-owned transformations
  • Avoid pushing heavy joins, large-scale fact processing, or complex orchestration into them
  • Keep Gen1 dataflows if they're stable and meeting business needs
Important: If your Gen1 dataflows are working well, there is no immediate business value in migrating just because Gen2 exists. Migration should be driven by reuse needs, not platform anxiety.

3. Capacity: Unified, but Less Forgiving

Fabric introduces unified capacity (F-SKUs) that power:

  • Power BI
  • Data Engineering
  • Data Warehousing
  • Real-Time Analytics

This replaces multiple disconnected pricing and capacity models.

What's better:

  • One shared pool of compute
  • Easier architectural alignment across teams
  • Fewer platform silos

What's harder in reality:

  • Power BI Pro licenses are still required for authors
  • BI workloads now compete with engineering workloads
  • Poorly designed pipelines can directly impact report performance
  • Cost attribution across workloads is still evolving
Senior-level reminder: Capacity does not fix bad Power BI models. Inefficient DAX still hurts. Overloaded visuals still hurt. Poor refresh strategies still hurt. Fabric amplifies good Power BI design — and exposes weak design faster.

When Fabric Capacity Makes Sense (And When It Doesn't)

Fabric capacity is a strong fit when:

  • Multiple teams share the same data platform
  • BI, engineering, and analytics workloads coexist
  • You already struggle with Premium constraints
  • Centralized governance is a priority

Traditional Power BI Premium still makes sense when:

  • Power BI is the primary or only workload
  • Fabric engineering features are out of scope
  • Cost predictability matters more than flexibility
  • The BI team owns most of the data lifecycle

Fabric is not mandatory — it's optional architecture.

What's Mostly Hype (For Now)

"You Must Move Everything to Fabric"

Not true.

  • Existing Power BI workspaces continue to work
  • Premium capacities still exist
  • Microsoft supports gradual, selective adoption

There is no requirement to redesign a functioning Power BI estate just because Fabric exists.

"Power BI Is Just a Front-End Now"

Also not true.

Power BI semantic models still:

  • Define business metrics
  • Enforce governance and security
  • Control performance at query time

Fabric adds upstream options — it does not replace the semantic layer. Business logic still belongs closest to consumption.

Fabric Readiness: An Honest Reality Check

Fabric is directionally strong, but not frictionless:

  • Monitoring across workloads is still fragmented
  • CI/CD for Fabric assets is evolving
  • Cost visibility at workload level can be unclear
  • Many organizations lack the engineering maturity Fabric assumes

For some teams, Fabric will feel empowering. For others, it will introduce operational complexity. Both outcomes are valid.

What This Means for Your Power BI Setup Today

If you already follow Power BI best practices, you're not behind.

Keep Doing:

  • Star schema modeling
  • Thin reports over shared semantic models
  • Certified datasets
  • Clear Dev/Test/Prod separation

Start Evaluating:

  • Centralized data in OneLake
  • Dataflows Gen2 where reuse is required
  • Capacity planning that accounts for non-BI workloads

Avoid:

  • Rebuilding solutions "because Fabric"
  • Mixing heavy ETL logic into Power BI models
  • Assuming capacity will compensate for poor design

The Bottom Line

Power BI wasn't replaced — it was repositioned.

Fabric is:

  • An architectural unification
  • A platform-level shift
  • A stress test for existing BI practices

For Power BI professionals, the message is simple: Fabric is not a reason to redesign your Power BI estate. It's a reason to validate whether your existing design was sound to begin with.

If your foundation is solid, Fabric isn't something you need to rush into. It's something you can adopt — deliberately, incrementally, and on your terms.

A Power BI Centre of Excellence (CoE) is not a reporting factory, and it's not a governance police force. At its best, it's a small, focused capability that enables scale: trusted data, consistent standards, and empowered report creators.

This guide outlines a pragmatic, experience-based approach to building a Power BI CoE that works in real organisations — aligned with Microsoft Power BI best practices, without unnecessary theory.

Step 1: Define the Purpose and Operating Model

Before defining tools, roles, or standards, be clear about why the CoE exists.

Most successful Power BI CoEs focus on:

  • Improving trust in data and metrics
  • Reducing duplication and rework
  • Enabling self-service analytics safely
  • Allowing Power BI usage to scale without chaos

Just as important is clarity on what the CoE is not:

  • Not the only team allowed to build reports
  • Not a mandatory gate for every dashboard

Define how the CoE engages

Many CoEs struggle because their engagement model is never explicit. Decide early:

  • When the CoE advises vs approves vs owns assets
  • Whether shared datasets are owned centrally or by domains
  • How decisions are escalated when standards are challenged
In practice: Most mature environments land on a federated model: domain teams own their data products, while the CoE owns standards, certification, and cross-domain alignment.

If you can't describe the CoE's purpose and operating model in two sentences, it isn't ready to launch.

Step 2: Secure Executive Sponsorship (With the Right Framing)

Active executive sponsorship is a strong predictor of CoE success. Without it, standards quickly become optional.

What typically resonates with leaders:

  • Reduced risk (data security, compliance, auditability)
  • Faster, more confident decision-making
  • Lower long-term cost through reuse and standardisation

What rarely resonates:

  • Dataset design patterns
  • Workspace naming conventions

Be explicit about what you're asking for:

  • Visible sponsorship, not just initial approval
  • Clear decision rights for standards and exceptions
  • Time allocation for CoE members
In practice: CoEs without sustained executive backing rarely survive beyond their initial rollout.

Step 3: Start Small With the Right Roles (Not a Big Team)

You don't need a large team to start a CoE. Most effective CoEs begin with 3–6 people, often part-time, covering these roles:

  • CoE Lead / Product Owner — prioritisation, stakeholder alignment
  • Power BI Architect — semantic models, performance, scalability
  • Platform or Tenant Admin — tenant settings, security, deployments
  • Enablement Lead — training, documentation, community building

These are roles, not job titles. Over-staffing early often leads to over-engineering. Scale the team only once demand is proven.

Step 4: Establish Governance People Will Actually Follow

Governance fails when it's either too abstract or too restrictive. Anchor it in everyday Power BI work and make expectations clear.

Workspace strategy

Define a small number of workspace types with clear intent, for example:

  • Personal / Sandbox — experimentation and learning
  • Team / Department — collaborative delivery
  • Certified / Endorsed — trusted, reusable content

Tie permissions, review expectations, and support levels to each type.

Dataset and model standards

Keep standards practical and enforceable:

  • Clear naming conventions for measures
  • Required descriptions for shared or certified models
  • Performance expectations for reusable datasets
In practice: The CoE should not own every semantic model. Domain teams own their datasets; the CoE defines standards and certifies models that are safe for reuse.

A simple certification checklist often works better than long documentation. For example:

  • Refresh succeeds consistently
  • Measures are documented
  • Query performance meets agreed thresholds
  • Security has been reviewed
  • A named business owner exists

Change and release management

Not every change requires heavy process:

  • Use Dev/Test/Prod for shared datasets
  • Apply lightweight peer review for certified assets
  • Define rollback expectations

Equally important is change communication. Define how breaking changes are announced, how long deprecations last, and where consumers can see what changed. Technical controls alone don't protect trust.

Step 5: Configure the Power BI Tenant Intentionally

The Power BI Admin Portal is where governance becomes enforceable.

Align tenant settings with your organisation's maturity:

  • Be cautious with external sharing and publish-to-web early
  • Control who can use premium features and create shared assets
  • Enable audit logs and usage metrics from day one
In practice: Hard restrictions can backfire. Blocking workspace creation or exports often leads to shadow IT. Where restrictions create friction, prefer guardrails, monitoring, and education over blanket bans.

Always document why each tenant setting exists. This prevents policy drift and helps future admins make informed changes.

Step 6: Enable Self-Service Instead of Fighting It

Self-service analytics will happen whether you plan for it or not. The CoE's role is to make the right way the easy way.

Effective enablement typically includes:

  • Shared, well-designed semantic models
  • Report templates with branding and layout guidance
  • Short, task-focused training sessions

Community scales better than ticket queues. Internal user groups, office hours, Teams channels, and showcase sessions often deliver more value than formal support processes.

Step 7: Measure, Prove Value, and Evolve

A CoE should be treated as a product, not a one-time project.

Track platform metrics such as:

  • Active users and creators
  • Reuse of shared and certified datasets
  • Support requests and performance issues

Over time, link these to business outcomes:

  • Fewer duplicate reports
  • Faster delivery of new insights
  • Reduced dependency on central teams for ad-hoc requests

As maturity increases, successful CoEs typically loosen controls, shifting from enforcement to enablement.

Common CoE Failure Modes

  • Lack of clear ownership — No one knows who owns shared datasets, standards, or decisions, leading to slow progress and conflict.
  • Over-governance too early — Heavy approval processes and restrictive tenant settings drive users to work around the platform.
  • CoE as a delivery factory — The CoE becomes the default report-building team, creating bottlenecks and burnout.
  • Ignoring change management — Breaking changes land without warning, eroding trust in shared datasets.
  • No success metrics — Without measurable outcomes, the CoE is eventually seen as overhead rather than value.

Final Thought

The most durable Power BI Centres of Excellence succeed because people want to follow the standards — not because they're forced to.

Start small, stay pragmatic, and optimise for trust over control. Perfect governance is rarely achievable; consistent, trusted insight at scale is.

If you've worked with Microsoft Power BI long enough, you know this pattern well: the model starts clean, measures are simple — and six months later, no one wants to open the measure pane.

This usually isn't because the business logic is hard. It's because the wrong DAX patterns were allowed to spread.

The five functions below aren't advanced tricks. They are foundational tools that, when used consistently, keep models understandable, performant, and safe for teams to evolve over time.

1. CALCULATE — The Engine Behind Context

CALCULATE is the most important function in DAX because it changes filter context. Almost every meaningful business measure relies on it, directly or indirectly.

Why it matters

  • Enables time intelligence and conditional logic
  • Allows measures to define their own business rules
  • Keeps logic centralized instead of scattered across visuals

Example

Sales – Online :=
CALCULATE (
    [Total Sales],
    'Sales'[Channel] = "Online"
)

This measure is explicit: the logic lives in DAX, not in report-level filters.

Practical reality: CALCULATE is powerful — and it's also where many models quietly break. Each filter argument replaces existing filters unless explicitly preserved. BI teams should treat CALCULATE logic as part of the semantic contract of the model, not as a quick fix.
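That replacement behaviour can be made explicit with KEEPFILTERS. A minimal sketch, assuming the same 'Sales'[Channel] column and [Total Sales] measure as the example above:

Sales – Online (Keep Filters) :=
CALCULATE (
    [Total Sales],
    KEEPFILTERS ( 'Sales'[Channel] = "Online" )  -- intersects with, rather than replaces, any existing Channel filter
)

With KEEPFILTERS, a slicer on 'Sales'[Channel] still applies: selecting "Retail" returns blank instead of the online figure. The plain filter argument in the earlier example would silently override the slicer.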

2. VAR — Readability First, Performance Second

VAR lets you store intermediate results inside a measure and reuse them cleanly.

Why it matters

  • Makes measures easier to read and review
  • Simplifies debugging and testing
  • Can improve performance by avoiding repeated evaluation

Example

Profit Margin :=
VAR Revenue = [Total Sales]
VAR Cost = [Total Cost]
RETURN
DIVIDE ( Revenue - Cost, Revenue )

This pattern is easier to reason about than a single dense expression — especially in team environments.

Best practice: VAR always improves maintainability. Performance benefits depend on whether the expression would otherwise be re-evaluated — don't assume magic, but always prefer clarity.

3. DIVIDE — Defensive DAX That Behaves Well in Reports

Division errors are a common cause of broken visuals and confused users. DIVIDE handles divide-by-zero scenarios safely and intentionally.

Why it matters

  • Prevents errors without verbose logic
  • Returns BLANK() by default, which behaves better in visuals
  • Keeps measures concise and readable

Example

Profit Margin :=
DIVIDE ( [Profit], [Total Sales] )

BLANK() values suppress misleading percentages in matrices and charts, instead of showing zeros that imply meaning.

Best practice: Always use DIVIDE instead of / in measures intended for reporting.
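DIVIDE also takes an optional third argument for cases where blank is not the desired fallback. A small sketch, reusing the [Profit] and [Total Sales] measures above:

Profit Margin (Zero Fallback) :=
DIVIDE ( [Profit], [Total Sales], 0 )  -- returns 0 instead of BLANK() when [Total Sales] is zero or blank

Use the fallback deliberately: returning 0 everywhere reintroduces the "zeros that imply meaning" problem described above.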

4. SELECTEDVALUE — Safe Context Awareness

Interactive reports require measures to react to user selections. SELECTEDVALUE retrieves a single value only when exactly one exists, otherwise returning a defined fallback.

Why it matters

  • Cleaner than VALUES + HASONEVALUE
  • Prevents ambiguous or broken results
  • Ideal for slicer-driven logic and dynamic labels

Example

Selected Year :=
SELECTEDVALUE ( 'Date'[Year], "Multiple Years" )
Important guardrail: SELECTEDVALUE is best used for slicers and high-level context checks. Avoid using it in row-level calculations where the context is already singular — it adds unnecessary complexity and confusion.
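The "dynamic labels" use case mentioned above can be sketched like this, assuming the same 'Date'[Year] column:

Report Title :=
"Sales for " & SELECTEDVALUE ( 'Date'[Year], "All Years" )  -- single year selected: that year; otherwise the fallback text

Bound to a card or a dynamic title, this keeps the label logic in the model rather than hard-coded in the visual.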

5. ALL and REMOVEFILTERS — Intentional Control of Filters

Some calculations must ignore parts of the filter context — totals, benchmarks, or contribution percentages. That's where filter-removal functions belong.

Why it matters

  • Enables percent-of-total and share calculations
  • Supports baselines and comparisons
  • Keeps logic consistent across reports

Example

Total Sales (All Products) :=
CALCULATE (
    [Total Sales],
    REMOVEFILTERS ( 'Product' )
)
Team guidance: While ALL and REMOVEFILTERS often behave similarly, many teams prefer REMOVEFILTERS because it communicates intent more clearly and reduces the risk of unintended side effects in complex expressions.
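A typical percent-of-total application, sketched under the same assumptions as the example above (a 'Product' table and a [Total Sales] measure):

% of All Products :=
DIVIDE (
    [Total Sales],
    CALCULATE ( [Total Sales], REMOVEFILTERS ( 'Product' ) )  -- denominator ignores Product filters only
)

Removing filters from just the 'Product' table keeps date, region, and other filters intact, which is exactly the narrow scoping that broad ALL(Table) calls tend to break.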

What Not to Do (Common Team Pitfalls)

Avoid these patterns — they're responsible for most fragile models:

  • Embedding business logic in visuals instead of measures
  • Nesting multiple CALCULATE calls without documenting intent
  • Using / instead of DIVIDE in production measures
  • Overusing SELECTEDVALUE where row context already exists
  • Removing filters broadly (ALL(Table)) when only a column needs to be ignored
  • Writing clever one-line measures that no one else can maintain

If a measure needs explaining in a meeting, it probably needs refactoring.

Final Thought: DAX Is a Team Discipline

These functions don't make a model "advanced" on their own. What matters is how consistently they're applied.

Strong BI teams:

  • Centralize logic in measures
  • Prefer clarity over cleverness
  • Agree on filter-handling patterns
  • Write DAX that explains itself

When your measures are readable, predictable, and performant, you reduce defects, speed up onboarding, and turn your semantic model into something the business can actually rely on.

This article is about how to write Power BI best practices that professionals will actually follow — grounded in Microsoft Power BI guidance, but shaped by real-world delivery in enterprise and shared-reporting environments.

Note: These guidelines are primarily intended for shared datasets and reports that are reused, extended, or handed over between developers. Purely personal or exploratory reports may not require the same level of rigor.

Why Most Power BI Best Practices Fail

Most best practices don't fail because they're wrong. They fail because they're unusable.

Common problems include:

  • Rules that are too abstract ("Use good naming conventions")
  • Documents that are too long to reference while working
  • Generic guidance copied directly from documentation
  • Assumptions of ideal, greenfield projects that don't exist in reality

Power BI professionals work under time pressure, with evolving requirements and imperfect data. If a rule doesn't help them make a decision while building, it gets ignored.

Effective best practices are:

  • Specific
  • Actionable
  • Easy to check in under a minute

Start From How Power BI Is Actually Used

Microsoft's Power BI guidance assumes a few realities that are worth embracing:

  • Models grow over time
  • Multiple developers touch the same dataset
  • Performance problems often appear after adoption
  • Business users rarely read documentation

Best practices should be written for living models, not theoretical ones. That means accounting for refactoring, handovers, and long-term support — not just first delivery.

Key insight: When guidance reflects real usage patterns, it feels helpful instead of academic.

Structure Best Practices Around Real Decisions

Developers don't think in categories like Modeling or Visualization. They think in questions:

  • "Should this be a calculated column or a measure?"
  • "Do I need a new table, or can I reuse an existing one?"
  • "Is this DAX readable enough for someone else to maintain?"

Best practices should be structured to answer those questions directly.

Avoid this

Keep DAX simple.

Prefer this

If a measure contains multiple business rules, use variables and helper measures to improve readability, testing, and long-term maintenance.

The intent is the same — but the second version is something a developer can act on immediately.
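As one possible illustration of the "prefer this" rule, here is a sketch in which each business rule becomes a named variable. The measure and column names ([Gross Sales], [Returns], [Discounts Given]) are hypothetical, not from any specific model:

Net Revenue :=
VAR Gross     = [Gross Sales]        -- rule 1: start from gross
VAR Refunds   = [Returns]            -- rule 2: subtract refunds
VAR Discounts = [Discounts Given]    -- rule 3: subtract discounts
RETURN
    Gross - Refunds - Discounts

Each named step can be promoted to a helper measure and tested on its own, which is what makes the guideline checkable in under a minute.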

Focus on the 20% That Causes 80% of Problems

Microsoft documentation is comprehensive. Your internal best practices shouldn't be.

Prioritize guidance that consistently causes issues in real projects:

  • Poor data modeling
  • Hard-to-maintain DAX
  • Inconsistent naming and formatting
  • Performance degradation at scale
  • Loss of trust in reported numbers

For most Power BI teams, that usually means emphasizing:

  • Star schema modeling
  • Measure-driven calculations
  • Avoiding unnecessary calculated columns
  • Consistent naming and formatting conventions
  • Careful use of bi-directional relationships
Practical reality: If a rule hasn't caused real pain in your environment, it probably doesn't belong in version 1.

Be Precise About Measures vs Calculated Columns

A common source of confusion is when to use calculated columns versus measures.

A practical guideline looks like this:

  • Prefer measures for aggregations and calculations evaluated at query time
  • Avoid calculated columns on large fact tables where possible, as they increase model size and refresh cost
  • Use calculated columns when values must be evaluated at refresh time, used in relationships, or exposed as slicers

This avoids dogma while aligning with how Power BI's engine actually works.
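The distinction can be sketched as follows, using a hypothetical 'Sales' table with [Quantity] and [Unit Price] columns:

-- Measure: evaluated at query time, adds no storage to the model
Total Revenue :=
SUMX ( 'Sales', 'Sales'[Quantity] * 'Sales'[Unit Price] )

-- Calculated column: evaluated at refresh and stored per row;
-- justified only when the value is needed in a relationship or slicer
Order Size Band =
IF ( 'Sales'[Quantity] >= 100, "Bulk", "Standard" )

The column earns its refresh and storage cost because users slice by it; the revenue figure does not need that and stays a measure.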

Make Performance Guidance Concrete — and Contextual

Performance advice is often ignored because it's vague or applied too early.

Instead of

Optimize your model for performance.

Be explicit and situational

  • Disable Auto Date/Time in shared or enterprise datasets
  • Reduce column cardinality where possible
  • Hide unused columns from the report view
  • Avoid bi-directional relationships unless there is a clear requirement
  • Validate performance using Performance Analyzer before publishing
Balance is key: Avoid premature optimization. Focus performance effort on datasets that are shared or heavily used. Optimize in response to real usage patterns, not theoretical concerns.

Treat Naming and Formatting as First-Class Practices

"Be consistent" isn't enough.

Good best practices provide examples, even if the exact standard varies by team.

For example:

  • Use business-friendly names with spaces (not underscores)
  • Avoid abbreviations unless they are widely understood by the business
  • Keep measure names free of table prefixes
  • Apply consistent number formatting at the model level

Clarity here improves usability, reduces confusion, and builds trust with report consumers.

Write for the Author, Not the Auditor

Best practices are rarely enforced by formal reviews. They're followed — or ignored — while someone is building a model.

That means they should be:

  • Skimmable
  • Short
  • Written in plain language
  • Easy to reference during development

If a developer can't quickly confirm a rule while writing DAX or modeling data, the rule won't be used.

Accept Constraints and Trade-Offs

Not every best practice can be applied in every situation.

Legacy data sources, organizational constraints, tight deadlines, and governance rules all force compromises. Good guidelines acknowledge this instead of pretending it doesn't happen.

The goal isn't theoretical perfection. It's consistency, transparency, and maintainability.

Keep Best Practices Alive

Power BI evolves constantly. So should your guidance.

To keep best practices relevant:

  • Review them periodically
  • Update them after major platform changes
  • Adjust them based on real incidents and lessons learned

And make them easy to find:

  • A short internal wiki page
  • A README alongside shared datasets
  • A pinned Teams or Slack post

Remember: If they're hidden, they don't exist.

The Real Goal

The purpose of Power BI best practices isn't to create flawless models.

It's to create predictable, understandable, and maintainable ones.

If your guidelines help someone make a better decision while building, they'll be followed.

If they can be checked quickly and explained easily, they'll last.

That's when best practices stop being rules — and start becoming shortcuts.

Book a free 30-min discovery call

Or send me a message


📍 Based in Portugal · Available remotely across Europe

🕐 Typically respond within 24 hours on working days

Last updated: 26 February 2026

1. Who I am (Data Controller)

Kristina Bachová
Freelance Power BI Specialist
Based in Portugal
Contact: hello@kristinabachova.com

I am the data controller for personal data collected through this website. As a sole trader, I take data protection seriously and process only what is necessary.

2. What personal data I collect

Contact form

When you use the contact form I collect: your name, email address, company name (optional), and your message. This data is processed by Formspree (a third-party form processor) and forwarded to my email inbox.

Newsletter sign-up

If you subscribe to my newsletter, I collect your email address (and optionally your name). This is processed by Kit (formerly ConvertKit), my email marketing platform.

Booking widget

If you book a discovery call via the Calendly booking widget, Calendly collects your name, email address, and scheduling preferences directly. I receive a copy of your booking details. See Calendly's Privacy Policy for details.

Cookies and usage data

The site uses cookies and similar local storage as described in Section 9 below. I do not currently use analytics tools that track your browsing behaviour.

Server logs

Like most websites, my hosting provider (GitHub Pages) may log your IP address and browser information in standard server access logs. I do not control or access these logs directly. See GitHub's Privacy Statement for details.

3. Why I collect it (Legal basis)

4. Who I share your data with

I do not sell your data. I share it only with the service providers named in Section 2 — Formspree, Kit, Calendly, and my hosting provider, GitHub Pages — who act as data processors on my behalf.

I may also be required to disclose data to law enforcement or regulatory authorities if required by law.

5. International data transfers

Formspree, Kit, and Calendly are US-based companies. When your data is transferred to the United States, these transfers are made under standard contractual clauses (SCCs) or other transfer mechanisms approved under the EU GDPR. Please refer to each provider's privacy policy for details.

6. How long I keep your data

  • Contact form enquiries: I keep email correspondence for up to 2 years after our last interaction, or until you ask me to delete it.
  • Newsletter subscribers: Your email is kept until you unsubscribe. I also periodically remove inactive subscribers.
  • Booking records: Calendly retains booking data in accordance with their policy.
  • Cookie preferences: Stored in your browser's localStorage until you clear it or change your preferences.

7. Your rights under GDPR

As a data subject in the EU/EEA, you have the following rights:

  • Access: request a copy of the personal data I hold about you
  • Rectification: ask me to correct inaccurate or incomplete data
  • Erasure: ask me to delete your data ("right to be forgotten")
  • Restriction: ask me to limit how your data is processed
  • Data portability: receive your data in a structured, machine-readable format
  • Objection: object to processing based on legitimate interests
  • Withdrawal of consent: withdraw consent at any time where processing is based on it

8. How to exercise your rights

To exercise any of the rights above, please contact me at: hello@kristinabachova.com

I will respond within 30 days. There is no charge for making a request. If your request is particularly complex, or if you make several requests, I may extend the response period by a further two months, and I will let you know.

To unsubscribe from my newsletter, use the unsubscribe link in any email I send, or contact me directly.

To change your cookie preferences on this website, use the Cookie Settings link in the footer of this page.

9. Cookies and similar technologies

This site uses cookies and browser localStorage. No non-essential cookies are set before you give consent.

You can manage your cookie preferences at any time using the Cookie Settings link in the footer. You can also clear cookies and localStorage through your browser settings.

10. Right to lodge a complaint

If you believe I have not handled your personal data correctly, you have the right to lodge a complaint with the Portuguese data protection supervisory authority:

CNPD — Comissão Nacional de Proteção de Dados
Website: www.cnpd.pt
Address: Rua de São Bento, 148–3º, 1200-821 Lisboa, Portugal

I would, however, appreciate the opportunity to address your concerns directly before you contact the supervisory authority. Please reach out to me first at hello@kristinabachova.com.

11. Third-party links

This site may contain links to external websites (e.g. LinkedIn, GitHub). I am not responsible for the privacy practices of those sites. Please review their privacy policies directly.

12. Changes to this policy

I may update this policy from time to time. The date at the top of this page always reflects the most recent revision. For significant changes, I will update the date and, where appropriate, notify subscribers.

Questions? Email me at hello@kristinabachova.com