Patterns for Consistent, Metadata-Driven Processing in Fabric

Fabric provides a plethora of tools to process and analyse data, giving data engineers and other users a strong foundation on which to build an enterprise-level data solution.

When delivering a solution that must scale and be maintained over time, it is essential to implement repeatable patterns that can be hardened, ensuring consistent delivery.

In this session we will explore how you can take the base components within Fabric and build a metadata-driven framework on top of them that enables you to deliver consistent patterns for ingestion, data cleansing, data quality and curation.

We’ll look at the idea of using data contracts to drive processing, and at how tools such as Data Factory and Spark can be extended to utilise metadata and deliver consistent, scalable data pipelines.
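To give a flavour of the approach, the sketch below shows a contract-driven ingestion step in Spark. It is a minimal illustration rather than the session's actual framework: the contract shape, file path, table name and quality rules are all illustrative assumptions, and in practice the contract would be loaded from a metadata store rather than hard-coded.

```python
# Minimal sketch of metadata-driven ingestion in Spark.
# The contract shape, paths and table names are illustrative assumptions.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F

# Hypothetical data contract; in practice this would come from a metadata
# store (e.g. a Lakehouse table or JSON file), not be hard-coded.
contract = {
    "source_path": "Files/landing/sales.csv",   # assumed landing location
    "target_table": "bronze_sales",              # assumed target table
    "columns": {
        "order_id": "string",
        "order_date": "date",
        "amount": "decimal(18,2)",
    },
    "not_null": ["order_id", "order_date"],      # simple quality rules
}

def ingest(spark: SparkSession, contract: dict) -> DataFrame:
    """Apply a data contract: select and cast declared columns, enforce rules, load."""
    df = spark.read.option("header", "true").csv(contract["source_path"])

    # Keep only the columns the contract declares, cast to the declared types.
    df = df.select([F.col(name).cast(dtype).alias(name)
                    for name, dtype in contract["columns"].items()])

    # Enforce the contract's not-null rules by filtering out violating rows.
    for column in contract["not_null"]:
        df = df.filter(F.col(column).isNotNull())

    # Write to the table named by the contract.
    df.write.mode("append").saveAsTable(contract["target_table"])
    return df

if __name__ == "__main__":
    spark = SparkSession.builder.getOrCreate()
    ingest(spark, contract)
```

Because the same function is driven entirely by the contract, adding a new source becomes a metadata change rather than new pipeline code, which is the core of the consistency argument explored in the session.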

 
