Tabular Editor Blog

LLMs and semantic models: Complementary technologies for enhanced Business Intelligence

Written by Jonathan Rystrøm | April 28, 2025

Large Language Models (LLMs) – AI models that predict text based on huge volumes of training data – are rapidly being integrated into BI workflows through copilots in everything from Fabric to Power BI. LLMs bring immense promise of unlocking new levels of insight from data, but also immense hype. In this blog, we'll provide a level-headed assessment of how LLMs can help developers and users of semantic models. We'll also show how semantic models can ground LLMs in business relationships to provide accurate and helpful answers to business-critical questions.

LLMs are fundamentally a semantic technology. They excel at transforming text into meaningful predictions and are used for tasks ranging from writing apps from scratch to summarizing complex financial regulations. Given their capabilities, it's natural to ask how LLMs can enhance another semantic technology: semantic models.

As a brief reminder, semantic models represent the meaning of data and the structured relationships between different data entities. As Kurt Buhler explains, "A semantic model is essential for you to meet business data needs" by providing a structured representation that maps relationships between data entities and their business meaning. For instance, a well-built semantic model can allow users to integrate insights across datasets to answer business-critical questions like "Which market is experiencing the highest growth?".
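To make that concrete, here is a minimal, purely illustrative sketch in Python and pandas. The tables, column names, and growth measure are hypothetical stand-ins for what a real semantic model would express through relationships and measures in a tabular model; this is not an excerpt from an actual model, just a picture of how encoding relationships and business logic once lets a question like the one above be answered consistently.

```python
# Illustrative sketch only: a toy "semantic model" expressed as plain pandas
# structures. All table and column names are hypothetical.
import pandas as pd

# Fact and dimension tables (stand-ins for the tables a real model would cover).
sales = pd.DataFrame({
    "market_id": [1, 1, 2, 2, 3, 3],
    "year":      [2023, 2024, 2023, 2024, 2023, 2024],
    "revenue":   [100.0, 130.0, 200.0, 210.0, 80.0, 120.0],
})
markets = pd.DataFrame({
    "market_id": [1, 2, 3],
    "market":    ["EMEA", "Americas", "APAC"],
})

# The "semantic" part: a relationship and a measure defined once,
# rather than rediscovered ad hoc in every report or prompt.
RELATIONSHIP = ("market_id", "market_id")  # sales -> markets

def revenue_growth(df: pd.DataFrame) -> float:
    """Measure definition: year-over-year revenue growth."""
    by_year = df.groupby("year")["revenue"].sum().sort_index()
    return by_year.iloc[-1] / by_year.iloc[-2] - 1

# Answer "Which market is experiencing the highest growth?" via the model's structure.
joined = sales.merge(markets, left_on=RELATIONSHIP[0], right_on=RELATIONSHIP[1])
growth = (
    joined.groupby("market")[["year", "revenue"]]
    .apply(revenue_growth)
    .sort_values(ascending=False)
)
print(growth.head(1))  # market with the highest year-over-year growth
```

The point of the sketch is the separation of concerns: the relationship and the measure are defined centrally, so any consumer, whether a report, an analyst, or an LLM, answers the question the same way.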

In this blog post, we'll explore where and how LLMs can improve the workflow of building and using semantic models, and vice versa.