Demystifying the Data Lakehouse in Fabric

Are you a Classic BI and Data Warehouse (DWH) developer eager to understand the Data Lakehouse concept that’s taking the data world by storm? This session is tailored just for you.

Delve into key questions: What exactly is a Data Lakehouse, and why are terms like bronze, silver, and gold (the layers of the medallion architecture) gaining popularity? Should you embrace Python, or can you rely on trusty SQL?

Discover the power of decoupling storage and compute, where storage is cheap, compute is expensive, and you can scale compute independently for specific tasks. Learn about OneLake and Delta Lake, and why mastering PySpark makes sense for tasks like data cleaning, handling semi-structured data, and integrating with API-based sources and event streams.
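
As a small taste of what the session covers, here is a minimal PySpark sketch of that kind of cleaning step, as it might look in a Fabric notebook (where the `spark` session is preconfigured). The path, table, and column names are hypothetical, for illustration only:

```python
from pyspark.sql import functions as F

# Hypothetical source path and column names, for illustration only.
raw = spark.read.json("Files/raw/orders/*.json")  # semi-structured JSON input

cleaned = (
    raw
    .withColumn("order_ts", F.to_timestamp("order_ts"))           # normalize types
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
    .filter(F.col("order_id").isNotNull())                        # basic cleaning
    .dropDuplicates(["order_id"])
)

# Persist as a Delta table, the storage format behind a Fabric Lakehouse.
cleaned.write.format("delta").mode("overwrite").saveAsTable("silver_orders")
```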

But don’t forget: SQL is still cool. It remains an excellent fit for defining business logic and makes it easier to port logic between platforms, and with Spark SQL it’s not far from what you already know. Plus, explore how to leverage Markdown cells in Spark notebooks for documentation.
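
For instance, a silver-to-gold aggregation can be expressed in Spark SQL inside a notebook cell, with a Markdown cell above it documenting the business rule. This sketch builds on the hypothetical `silver_orders` table from the earlier example; the grouping columns are again assumptions:

```python
# The same kind of business logic in Spark SQL: close to the T-SQL you know.
gold = spark.sql("""
    SELECT customer_id,
           date_trunc('month', order_ts) AS order_month,
           SUM(amount)                   AS total_amount
    FROM   silver_orders
    GROUP  BY customer_id, date_trunc('month', order_ts)
""")

gold.write.format("delta").mode("overwrite").saveAsTable("gold_monthly_sales")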

Join us to unravel the Data Lakehouse in Fabric and learn how to make the right choices for your data development journey.
