AWS re:Invent is a learning conference hosted by Amazon Web Services (AWS) for the global cloud computing community. The in-person event features keynote announcements, training and certification opportunities, access to 1,500+ technical sessions, the Expo, after-hours events, and so much more. Visit Databricks at booth 3241 and register for our Party at the Lakehouse!
Deriving deep analytics insights from your existing data is hard enough. An even bigger challenge is preparing the fresh data that arrives daily for analytics and automating data pipeline management. Join us for this webinar to discover how powerful open source technologies from Fivetran and Databricks can automate data integration on the data lakehouse. You’ll find out how to unify all your data on a single platform and simplify analytics, BI and AI for the modern data stack.
Customers and Microsoft partners planning to build out a use case in Azure will get an introduction to the Lakehouse on Azure. The outcome of the day is a baseline understanding of how to set up, use and collaborate on Azure Databricks, so that the next steps toward implementing a use case can be taken.
Delivering process improvement, targeted services, and improved citizen outcomes isn’t possible when legacy systems limit insight into all your agency data. Join Databricks to learn why agencies are rapidly adopting a modern open data lakehouse to deliver better citizen outcomes with a true 360-degree view of their data that will grow with them as they advance in their data and predictive analytics maturity.
Learn best practices for training models and managing experiments, projects, and models using MLflow.
Join us December 7 for the Databricks Media & Entertainment Symposium, where industry leaders will talk about the big data challenges they’re solving and the business use cases they’re investing in. Come network with your peers and discover new strategies for becoming a data-driven media company.
Join Economist Impact for a virtual panel discussion, sponsored by Databricks, to examine the factors behind successful digital businesses and learn about competitive digital business models.
In this session, you will learn how to build and deploy a declarative streaming ETL pipeline at scale with DLT, and how DLT automates complex and time-consuming tasks like task orchestration, error handling, recovery and auto-scaling with performance optimizations. Finally, we will show you how DLT enables data teams to deliver fresh, up-to-date data with built-in quality controls and monitoring, ensuring accurate and useful BI, data science and ML.
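To give a flavor of what "declarative pipeline with built-in quality controls" means, here is a minimal, self-contained Python sketch. It is not the actual DLT API (the real thing runs on Databricks and uses the `dlt` module); the `expect_or_drop` decorator and `bronze_orders` function below are hypothetical stand-ins that illustrate the idea of declaring a quality expectation once and having failing rows dropped automatically.

```python
def expect_or_drop(name, predicate):
    """Declare a quality expectation: rows failing `predicate` are dropped.

    Conceptual illustration only -- loosely inspired by the idea behind
    DLT expectations, not the real DLT API.
    """
    def wrap(fn):
        def inner(*args, **kwargs):
            rows = fn(*args, **kwargs)
            kept = [r for r in rows if predicate(r)]
            dropped = len(rows) - len(kept)
            print(f"expectation {name!r}: kept {len(kept)}, dropped {dropped}")
            return kept
        return inner
    return wrap


@expect_or_drop("valid_amount", lambda r: r["amount"] > 0)
def bronze_orders():
    # In a real pipeline this would read from cloud storage or a stream;
    # hard-coded rows keep the sketch runnable anywhere.
    return [
        {"order_id": 1, "amount": 42.0},
        {"order_id": 2, "amount": -5.0},  # fails the expectation
        {"order_id": 3, "amount": 17.5},
    ]


clean = bronze_orders()
```

The point is the shape of the code: the quality rule is declared next to the dataset definition instead of being scattered through imperative cleanup steps, which is the style of pipeline the session covers.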
Learn how the lakehouse combines the best of data warehouses and data lakes. In this live workshop, we’ll cover best practices that bring reliability, performance and security to your data lake and provide the perfect foundation for a cost-effective, highly scalable lakehouse architecture. You’ll learn about the advantages of cloud-based data lakes in terms of security and cost, and how to enable SQL analysts to easily access data in your data lake for reporting and visualization. And finally, learn how to dramatically simplify data engineering to lower costs and boost productivity for your data teams.
Databricks Lakehouse is defining the future by introducing higher collaboration between data teams, simplicity in data workflows, and flexibility in the overall data architecture. Join Databricks and Royal Cyber for an in-person event to learn how you can use open-source technologies like Apache Spark™ and Delta Lake to build a high-functioning data lakehouse.