This is part 4 of the Analysis Services DevOps blog series. Go to part 3
Refining the build pipeline
In the last chapter, we saw how to set up a first, basic build pipeline, which used Tabular Editor to deploy a model from a .bim file or folder structure onto an instance of Analysis Services. It’s time to take this one step further. In real life, there are a couple of things we’d like our build pipeline to do. Essentially, we want to make sure that we’re producing an artifact that is ready for deployment. For a Tabular model, this typically means the following:
- Best Practice Analysis: Ensure that Best Practice Rules are obeyed.
- Schema check: Ensure that the model can connect to its data source(s) and source columns map correctly to imported columns.
- Validation deployment: Ensure that the model does not contain invalid DAX or other semantic errors (for example, circular dependencies).
- Refresh check: Ensure that partitions can be refreshed without errors or warnings.
- Unit testing: Ensure that calculations provide expected results.
- Prepare artifact: Create a Model.bim file containing everything needed for deployment.
The six steps above roughly correspond to the tasks that we need in our build pipeline. Once we have this pipeline set up, we can use branch policies to ensure that all changes build successfully before a pull request can be approved.
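As a preview of how these steps translate into pipeline tasks, the Best Practice Analysis check can be run from the command line. This is a sketch based on the Tabular Editor 2.x CLI flags; the rules file name is a placeholder for whatever rules collection your team uses:

```shell
# Run the Best Practice Analyzer against the model. The -V switch makes
# Tabular Editor emit Azure DevOps logging commands, so rule violations
# surface as build warnings or errors in the pipeline summary.
# BPARules.json is a placeholder for your own rules file.
TabularEditor.exe "$(Build.SourcesDirectory)\AdventureWorks" -A BPARules.json -V
```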
In the following, we will assume that your Tabular Model is saved as a folder structure within the “AdventureWorks” folder in the root of your git repo. If it’s located somewhere else, change the first argument of the calls to TabularEditor.exe from $(Build.SourcesDirectory)\AdventureWorks to point to the Model.bim file or to the folder housing the database.json file of your tabular model.
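To make the remaining steps concrete, a validation deployment and the artifact preparation might look like the sketch below. The server and database names are placeholders, and the flags follow the Tabular Editor 2.x command-line documentation:

```shell
# Validation deployment: deploy to a throwaway build instance, overwriting
# any existing database with the same name (-O). Invalid DAX or other
# semantic errors (e.g. circular dependencies) will fail this step.
# "localhost" and "AdventureWorksBuild" are placeholder names.
TabularEditor.exe "$(Build.SourcesDirectory)\AdventureWorks" -D localhost AdventureWorksBuild -O -V

# Prepare artifact: serialize the folder structure into a single Model.bim
# file, ready to be published for the release pipeline.
TabularEditor.exe "$(Build.SourcesDirectory)\AdventureWorks" -B "$(Build.ArtifactStagingDirectory)\Model.bim"
```

In a YAML or classic pipeline, each invocation would typically live in its own command-line task, so that a failure in one step is clearly attributed in the build log.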