This full-day workshop teaches professional developers how to build end-to-end solutions using the new Fabric APIs and the Apache Spark support in the Fabric platform. Attendees will learn to use the Fabric User APIs to create and manage Fabric workspaces and workspace items, including lakehouses, notebooks, Direct Lake datasets, and Power BI reports. Attendees will also learn best practices for implementing effective ALM and CI/CD processes using Fabric Git integration together with developer mode in Power BI Desktop.
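As a rough illustration of the workspace automation covered in the workshop, the sketch below builds the REST calls for creating a workspace and then a lakehouse item inside it. The endpoint paths (`POST /v1/workspaces`, `POST /v1/workspaces/{id}/items`) follow the public Fabric REST API, but authentication (an Entra ID bearer token) is omitted, and all display names, IDs, and the helper functions themselves are illustrative placeholders, not workshop materials.

```python
FABRIC_API = "https://api.fabric.microsoft.com/v1"

def create_workspace_request(display_name, capacity_id=None):
    """Build the POST request for creating a Fabric workspace.

    capacity_id is optional; when supplied, the new workspace is
    assigned to that Fabric capacity at creation time.
    """
    body = {"displayName": display_name}
    if capacity_id:
        body["capacityId"] = capacity_id
    return {"method": "POST", "url": f"{FABRIC_API}/workspaces", "json": body}

def create_item_request(workspace_id, display_name, item_type):
    """Build the POST request for creating a workspace item
    (e.g. a Lakehouse, Notebook, or Report) in an existing workspace."""
    return {
        "method": "POST",
        "url": f"{FABRIC_API}/workspaces/{workspace_id}/items",
        "json": {"displayName": display_name, "type": item_type},
    }

# The two calls a provisioning script would issue in sequence
# (workspace_id would come from the first response in real use):
ws_req = create_workspace_request("Sales Analytics", capacity_id="aaaa-bbbb")
lh_req = create_item_request("1234-5678", "SalesLakehouse", "Lakehouse")
print(ws_req["url"])           # https://api.fabric.microsoft.com/v1/workspaces
print(lh_req["json"]["type"])  # Lakehouse
```

In a real script these request dictionaries would be passed to an HTTP client along with an `Authorization: Bearer <token>` header obtained from Microsoft Entra ID.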
This workshop examines the different approaches a developer can use to automate ingesting data into OneLake. Attendees will learn how to load data into a lakehouse using the ADLS Gen2 APIs together with the Fabric Load Table API. Attendees will also learn how to ingest and transform data by automating the dynamic execution and monitoring of Spark jobs defined with Python code in Fabric notebooks and Spark Job Definitions.

The workshop concludes with an examination of developing multi-tenant applications on the Fabric platform. Attendees will learn best practices for protecting and isolating data for multiple customers. The workshop will also examine how to leverage Power BI embedding in a multi-tenant application so that users can access Power BI reports in a secure and scalable manner.
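The ingestion pattern mentioned above (write a file into the lakehouse Files area through the ADLS Gen2 endpoint, then call the Load Table API to materialize it as a managed Delta table) can be sketched as below. This is an assumption-laden illustration, not workshop code: the OneLake DFS endpoint, the `tables/{name}/load` path, and the payload shape reflect the public Fabric documentation as I understand it, and every ID and file name is a placeholder.

```python
FABRIC_API = "https://api.fabric.microsoft.com/v1"
ONELAKE_DFS = "https://onelake.dfs.fabric.microsoft.com"

def onelake_file_url(workspace_id, lakehouse_id, relative_path):
    """OneLake exposes lakehouse storage through an ADLS Gen2-compatible
    DFS endpoint, so any ADLS Gen2 client can write to this URL."""
    return f"{ONELAKE_DFS}/{workspace_id}/{lakehouse_id}/Files/{relative_path}"

def load_table_request(workspace_id, lakehouse_id, table_name, relative_path):
    """Build the Load Table API call that loads a file already present
    in the lakehouse Files area into a managed Delta table."""
    return {
        "method": "POST",
        "url": (f"{FABRIC_API}/workspaces/{workspace_id}"
                f"/lakehouses/{lakehouse_id}/tables/{table_name}/load"),
        "json": {
            "relativePath": f"Files/{relative_path}",
            "pathType": "File",
            "mode": "Overwrite",
            # formatOptions assume a CSV source with a header row
            "formatOptions": {"format": "Csv", "header": True},
        },
    }

# Step 1: upload sales.csv to this URL with an ADLS Gen2 client.
upload_url = onelake_file_url("ws-1111", "lh-2222", "sales.csv")
# Step 2: ask Fabric to load the uploaded file into a Delta table.
load_req = load_table_request("ws-1111", "lh-2222", "sales", "sales.csv")
print(load_req["json"]["relativePath"])  # Files/sales.csv
```

Splitting the two steps this way mirrors how the APIs divide the work: ADLS Gen2 handles raw file transfer, while the Load Table API handles schema inference and Delta conversion on the Fabric side.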