High Concurrency for Notebooks in Pipelines with Microsoft Fabric
How to Use and Enable High Concurrency for Notebooks in Pipelines with Microsoft Fabric
High Concurrency Mode for Notebooks in Pipelines is a game-changer for data engineers and data scientists using Microsoft Fabric. This feature allows multiple notebooks to share a single Spark session, significantly improving performance and reducing costs. Another advantage is that Microsoft Fabric no longer runs into capacity limits caused by every notebook starting its own session. In one of my other blog posts I explained how you could solve this with notebookutils.notebook.runMultiple.
Here’s how you can enable and use this feature effectively.
Why Use High Concurrency Mode?
High Concurrency Mode offers several benefits:
- Faster Session Start: Notebooks can attach to pre-warmed Spark sessions, reducing startup time to around 5 seconds.
- Cost Savings: By sharing a single Spark session across multiple notebooks, you only pay for one session, which can lead to significant cost reductions.
- Improved Efficiency: This mode optimizes pipeline execution, making it faster and more efficient.
Enabling High Concurrency Mode
To enable High Concurrency Mode in your Fabric workspace, follow these steps:
- Access Workspace Settings:
  - Go to your Fabric workspace and select the Workspace Settings option.
- Navigate to High Concurrency Settings:
  - In the settings menu, go to the Data Engineering and Science section.
  - Select Spark Compute and then High Concurrency.
- Enable High Concurrency:
  - In the High Concurrency section, enable the option For pipeline running multiple notebooks.
  - Save your changes.
Once enabled, all notebook sessions triggered by pipelines will be packed into high concurrency sessions automatically.
Using High Concurrency Mode
After enabling High Concurrency Mode, you can start using it in your pipelines:
- Create a Pipeline:
  - Open your Fabric workspace and create a new pipeline item from the Create menu.
- Add Notebook Activities:
  - Navigate to the Activities tab and add a Notebook activity to your pipeline.
- Configure Session Tags:
  - In the advanced settings of the notebook activity, specify a session tag. This tag groups notebooks into shared sessions based on matching criteria.
Session Tags
When you define a session tag, the notebook activity will use a shared session. Session tags can be reused across pipelines, but not across workspaces: in a different workspace a new session is created even if you use the same tag. Think of it as a way of grouping notebook runs. You can define the tag yourself or build it with dynamic content. Be aware that a session tag can only contain letters, numbers, and underscores.
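Because a session tag may only contain letters, numbers, and underscores, it can help to sanitize dynamically built tags before handing them to the pipeline. Below is a minimal sketch in Python; the helper name sanitize_session_tag is my own, not part of Fabric or notebookutils:

```python
import re

def sanitize_session_tag(raw: str) -> str:
    """Replace every character that is not a letter, digit,
    or underscore with an underscore, so the result is a
    valid Fabric session tag."""
    return re.sub(r"[^A-Za-z0-9_]", "_", raw)

# Example: a tag built from a pipeline name and a run date
print(sanitize_session_tag("sales-load 2024.05"))  # -> sales_load_2024_05
```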
Monitoring
In the monitoring view you will now see all executed notebooks one by one. This was not the case with notebookutils.notebook.runMultiple(DAG), where you only saw the main notebook. This is a great step forward when building monitoring solutions.
Below an overview in the Monitor before the session started:
Below an overview in the Monitor when the session started:
Overview of all the executed Notebooks
The Notebook name is extended with the Livy id.
Remark: It currently looks like the snapshots of the notebooks are incorrect, because every notebook execution shows the snapshots of the first notebook, so debugging from the Monitor is not yet possible. I have already raised this with the PM team.
RunMultiple
With notebookutils.notebook.runMultiple(DAG) you have some more options:
- Define any dependency or order among the notebooks.
- Define timeouts per cell.
- Run multiple notebooks in a DAG, where each notebook can depend on the output of one or more previous notebooks.
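To illustrate the options above, here is a minimal sketch of a DAG you could pass to runMultiple. The notebook names, args, and timeout values are made up for the example; check the official runMultiple documentation for the exact supported keys:

```python
# Sketch of a DAG for notebookutils.notebook.runMultiple.
# Notebook names, args, and timeouts below are illustrative only.
dag = {
    "activities": [
        {
            "name": "LoadSales",               # first notebook, no dependencies
            "path": "LoadSales",
            "timeoutPerCellInSeconds": 300,    # per-cell timeout
            "args": {"useRootDefaultLakehouse": True},
        },
        {
            "name": "TransformSales",          # runs after LoadSales finishes
            "path": "TransformSales",
            "timeoutPerCellInSeconds": 300,
            "dependencies": ["LoadSales"],     # ordering/dependency
        },
    ],
    "concurrency": 2,  # how many notebooks may run in parallel
}

# Inside a Fabric notebook you would then run:
# notebookutils.notebook.runMultiple(dag)
```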
Conclusion
High Concurrency Mode for Notebooks in Pipelines with Microsoft Fabric is a powerful feature that enhances performance, reduces costs, and improves efficiency. By following the steps outlined above, you can easily enable and start using it to optimize your data engineering and data science workflows. Personally I am very happy with this new functionality; it also makes it easier to define outputs per notebook for logging purposes.
More details can be found in the official Fabric blog post.
Feel free to leave a comment