TMDL demo 2/3: Change Tracker

Disclaimer

This blog post is part of a three-part series based on my speaker session at the European Microsoft Fabric Community Conference 2025 (FabCon) in Vienna, which I had the pleasure of delivering together with my colleague Roger Unholz.

Our session, titled “TMDL Playoffs”, was a fast-paced showdown where we shared our favorite tips and tricks for working with TMDL. In this series, I’ll take a deeper dive into each topic we presented on stage and explore them in more detail. All source files referenced throughout the posts are available in my public GitHub repository.

Please note that the scripts are not tested for every scenario; they were created for demonstration purposes only and should never be applied to any running, production environments.
Link: ivsch/TMDLPlayoffs.

Situation

For semantic models in Power BI, it is always recommended to maintain a proper version history. I am not talking about version control here; we can already achieve that with TMDL and a proper repository. This post is about tracking the semantic model from a business point of view: customers should be able to see the changes in a manual table, which can also be displayed in a Power BI report built on the semantic model.

Since I don’t want to spend time re-investigating every small change I’ve made to a semantic model, I need a way for customers to clearly see what has changed in their semantic models.

For this demo, I already have a Power BI semantic model saved in TMDL format with a manual table for the semantic model version (file is in GitHub folder “Ivo” > “SM Change Tracker”). In the report view of the Power BI file, there is also a page showing the visual representation of the table.

Idea

As TMDL provides a proper definition of all objects within a semantic model, it is a solid basis for tracking changes from one version to another. What I’d like to do is compare all the changes I have made in my local model with the one published in a Power BI workspace.

For this comparison, I’d like to follow this process:

  1. Download the published semantic model via the Fabric REST API and store it locally.
  2. Compare the TMDL definitions and generate a difference report.
  3. Send the diff report to an LLM, which will summarize the differences in a clear, understandable way. The LLM should also classify each change as minor or major in order to properly increase the version number.
  4. Write the new version number and description back to the local semantic model. As the target table is a manual table, the data is stored within the TMDL definition and can easily be updated.
  5. Publish the updated report again to the destination (this step remains manual for now).

Prerequisites for using the REST API

Before you can call the Microsoft Fabric API with a service principal (like in this demo), a few prerequisites need to be set up. First, create an app registration in Entra ID (this is your service principal). Then, generate a client secret and keep both the ID and secret safe — these will be used for authentication.

Next, place the app registration into a security group that has access to Fabric/Power BI. In the Azure Portal, link this security group to the app registration (important: this step is done in the Azure Portal, not just inside Entra ID). You’ll also need the Tenant ID (and the ctid value mentioned in the setup) to include in your API calls.

Finally, check the Power BI Admin settings to ensure service principals are allowed to use APIs. Without this toggle enabled, the service principal won’t be able to access Fabric resources programmatically.

In short: register the app, generate a secret, put it into the right security group, grab the tenant details, and confirm admin permissions. Once those are in place, the service principal can authenticate directly to the Fabric API.
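To make the authentication step concrete, here is a minimal sketch of the client-credentials token request in Python. The tenant, client ID, and secret are placeholders; the notebook may use a helper library such as MSAL instead, but the underlying request looks like this:

```python
import json
import urllib.parse
import urllib.request


def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the client-credentials token request for the Fabric API."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        # ".default" requests all app permissions already granted to the principal.
        "scope": "https://api.fabric.microsoft.com/.default",
    }
    return url, urllib.parse.urlencode(form).encode()


def get_fabric_token(tenant_id: str, client_id: str, client_secret: str) -> str:
    """Exchange the service principal credentials for a bearer token."""
    url, body = build_token_request(tenant_id, client_id, client_secret)
    with urllib.request.urlopen(urllib.request.Request(url, data=body)) as resp:
        return json.loads(resp.read())["access_token"]
```

The returned token is then sent as an `Authorization: Bearer …` header on every Fabric API call.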

Solution

Because the process involves authentication, downloading model definitions, and file handling, a Jupyter notebook is used to orchestrate everything in one place. It can run in any Jupyter environment, including directly inside Fabric, which keeps the workflow closer to the data and services.

The following sections describe part of the notebook code. The notebook is called compare-sm.ipynb and is also available in the GitHub repository.

Download the published semantic model

  • Authenticates against Fabric with the service principal (client ID, secret, tenant ID).
  • Calls the Fabric API getDefinition to export the current published semantic model in TMDL format.
  • Saves all model definition files into a definition_published/ folder locally.

Compare local vs. published versions

  • Loads your local semantic model (from the .pbip project).
  • Compares it line by line with the published TMDL version.
  • Produces a structured diff report (index.md) showing what was added, removed, or changed.

Summarize changes with an LLM

  • The diff report is sent to an Awan LLM (Meta-Llama-3.1).
  • The LLM generates a short, business-friendly summary of the changes (max 50 words).
  • It also labels the change as [MAJOR] (structural, new objects) or [MINOR] (renames, small tweaks).
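A sketch of the request payload, assuming an OpenAI-compatible chat-completions API. The model id, prompt wording, and parameters below are illustrative assumptions, not the notebook's exact values:

```python
def build_summary_request(diff_report: str) -> dict:
    """Build a chat-completion payload asking the LLM to summarize the diff
    and classify it as [MAJOR] or [MINOR]."""
    system = (
        "You summarize semantic model changes for business users. "
        "Reply with at most 50 words, prefixed by [MAJOR] for structural "
        "changes or new objects, or [MINOR] for renames and small tweaks."
    )
    return {
        "model": "Meta-Llama-3.1-8B-Instruct",  # assumed model id; check your provider
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": diff_report},
        ],
        "max_tokens": 120,
        "temperature": 0.2,  # keep the summary deterministic-ish
    }
```

The payload would then be POSTed to the provider's chat-completions endpoint with the API key in the request headers.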

Parse the LLM output

  • Extracts the summary text and the change tag (MAJOR/MINOR).
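Parsing boils down to finding the tag and stripping it from the summary. A minimal sketch (the fallback to MINOR when no tag is found is my own conservative assumption):

```python
import re


def parse_llm_output(text: str):
    """Extract the change tag (MAJOR/MINOR) and the cleaned summary text."""
    match = re.search(r"\[(MAJOR|MINOR)\]", text)
    tag = match.group(1) if match else "MINOR"  # assumed default if the LLM omits the tag
    summary = re.sub(r"\[(MAJOR|MINOR)\]", "", text).strip()
    return tag, summary
```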

Update the version log inside the semantic model

  • Opens the Semantic Model Version.tmdl file, which stores the version history in a compressed payload.
  • Reads the existing version numbers and finds the last version.
  • Bumps the version automatically:
    • MAJOR → increments the major version (1.3 → 2.0).
    • MINOR → increments the minor version (1.3 → 1.4).
  • Appends a new row with the new version number and the LLM-generated description.
  • Re-encodes and writes the updated version history back into the TMDL file (with a backup saved).
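The bump logic itself fits in a few lines; this sketch assumes a simple major.minor scheme as in the examples:

```python
def bump_version(last_version: str, change_tag: str) -> str:
    """Bump a 'major.minor' version string according to the LLM's classification."""
    major, minor = (int(part) for part in last_version.split("."))
    if change_tag == "MAJOR":
        return f"{major + 1}.0"    # e.g. 1.3 -> 2.0, minor resets to 0
    return f"{major}.{minor + 1}"  # e.g. 1.3 -> 1.4
```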

TMDL demo 1/3: RLS Generator


Idea

The configuration of Row-Level Security (RLS) within Power BI semantic models can sometimes be straightforward, but at other times it can feel like opening Pandora’s box. In most customer scenarios I encounter, the RLS setup is fairly basic. Nevertheless, managing RLS across a large number of semantic models can be a real challenge, especially when ensuring that the correct settings are applied consistently.

To address this, I often maintain role definitions outside of the semantic model to provide a clear overview of all enabled RLS settings. That could be an Excel file or a small “security application”. This definition then serves as the input for a simple script generator that quickly and reliably applies the defined roles to a Power BI semantic model file.

TMDL RLS definition

To check existing RLS definitions in your Power BI semantic model, first enable the TMDL view in Power BI Desktop settings (currently available under Preview features).

On the right side of your semantic model, you’ll find all the objects available in the TMDL scripting language. If you drag and drop the Roles object into the script window, you’ll see the definition of the roles.

Outcome

Even if it’s not a complete solution for handling complex RLS definitions, the most common/basic scenarios can be covered with the Jupyter Notebook apply_rls in the GitHub source folder.

In the very first cell, you can provide all settings for the roles you want to apply. At the moment, the script only works with string values; it can be extended/adjusted to support other data types.


RLS_DEFINITION = [
    {
        "role_name":"CustomerSegmentManagersEnterprise",
        "table_name":"DimCustomer",
        "rls_field_name":"Segment",
        "allowed_values":["Consumer", "Enterprise"]
    },
    {
        "role_name":"CustomerSegmentManagersSMB",
        "table_name":"DimCustomer",
        "rls_field_name":"Segment",
        "allowed_values":["SMB"]
    }
]

This configuration generates a script for two roles—CustomerSegmentManagersEnterprise and CustomerSegmentManagersSMB—that filters the DimCustomer[Segment] column to the specified values.

With those settings, the following output is generated and can be copied into existing semantic model files.

Generated createOrReplace TMDL script:

createOrReplace

	role CustomerSegmentManagersEnterprise
		modelPermission: read

		tablePermission DimCustomer = [Segment] == "Consumer" || [Segment] == "Enterprise"


	role CustomerSegmentManagersSMB
		modelPermission: read

		tablePermission DimCustomer = [Segment] == "SMB"
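Under the hood, such a generator is mostly string templating. A minimal sketch that produces the createOrReplace script from the RLS_DEFINITION list (the function name is mine; the notebook's implementation may differ, and like the notebook it assumes string values only):

```python
def generate_rls_script(definitions: list) -> str:
    """Render a createOrReplace TMDL script from a list of role definitions."""
    lines = ["createOrReplace", ""]
    for d in definitions:
        # DAX filter: one equality test per allowed value, OR-ed together.
        dax = " || ".join(
            f'[{d["rls_field_name"]}] == "{value}"' for value in d["allowed_values"]
        )
        lines += [
            f'\trole {d["role_name"]}',
            "\t\tmodelPermission: read",
            "",
            f'\t\ttablePermission {d["table_name"]} = {dax}',
            "",
        ]
    return "\n".join(lines)
```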

Extendability

With further extensions, this functionality could be applied to an application that centrally defines roles and their respective security settings. This would simplify the management of RLS settings across multiple semantic models. In a pro version, you could even imagine downloading the Power BI semantic model, editing the TMDL role definitions, and re-uploading it via the Fabric REST API.