Panasonic US's central data infrastructure team has an ambitious mandate: serve as the data backbone for multiple business units spanning sales, supply chain, HR, and beyond. When legacy ETL pipelines and fragmented data warehouses slowed down daily reporting, sometimes with multi-hour ingestion windows and unpredictable failures, the team made a strategic decision to modernize from the ground up. By standardizing on the Databricks Platform and Lakeflow, they transformed a fragile, siloed stack into a reliable, enterprise-wide data foundation. Processes that once took hours now complete in minutes, analysts have direct access to data that was previously out of reach, and the team is already building toward its next frontier: AI.
Fragmented legacy pipelines disrupt cross-functional business operations
Panasonic’s central Data and IT infrastructure team drives the overarching data strategy for multiple internal companies and business units. To support daily operations, sales forecasting, and supply chain management, business leaders rely heavily on enterprise systems such as SAP S/4HANA, Workday, and global point-of-sale (POS) systems. However, Panasonic’s legacy data stack, composed of disconnected ETL tools and complex data warehouses, struggled to handle the volume and complexity of this data, leading to severe performance and resiliency issues.
The most critical bottleneck was SAP data ingestion. Recognizing the limitations of legacy Change Data Capture (CDC) processes, the engineering team sought a more reliable approach, but the existing architecture forced them to run full data refreshes across more than 100 tables every day. Massive transactional tables with hundreds of millions of rows required complex partitioning and frequently caused legacy pipelines to fail. These heavy loads took five to six hours to complete and broke roughly 10 times a year, requiring hours or even full days of IT-intensive, cross-team troubleshooting to fix.

For one internal business unit, which operates under strict early-morning reporting cut-offs, these outages caused costly delays. Upper management was frequently left without the daily sales, inventory, and logistics reports needed to make critical business decisions, effectively disrupting daily operations. Meanwhile, valuable data remained locked away in legacy database silos, creating visibility barriers for downstream Business Intelligence (BI) analysts who needed access to raw data for accurate forecasting.
Standardizing enterprise ingestion with Lakeflow Connect
To establish a resilient, centralized data backbone, Panasonic migrated to the Databricks Platform, using Lakeflow Connect to standardize data ingestion across the enterprise's major data sources.
The most urgent priority was SAP S/4HANA. By integrating with SAP Datasphere to land files in Azure Data Lake Storage (ADLS), Panasonic deployed Auto Loader (part of Lakeflow Connect) to handle incremental ingestion for one of its most failure-prone pipelines. The new design provided a stable, automated pipeline with a fraction of the operational overhead, and the results were immediate.
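The core idea behind Auto Loader's incremental ingestion can be sketched without any Databricks machinery: keep a checkpoint of the files already processed and touch only new arrivals on each run. The sketch below is a minimal, stdlib-only illustration of that pattern, not the actual Auto Loader implementation; the directory layout and checkpoint format are assumptions for the example.

```python
# Minimal sketch of the incremental-ingestion idea behind Auto Loader:
# track already-processed files in a checkpoint so each run handles only
# new arrivals instead of re-reading everything. File names and the
# checkpoint format here are illustrative, not the Databricks internals.
import json
from pathlib import Path

def ingest_new_files(landing_dir: str, checkpoint_file: str) -> list:
    """Return files not seen in previous runs, then update the checkpoint."""
    checkpoint = Path(checkpoint_file)
    seen = set(json.loads(checkpoint.read_text())) if checkpoint.exists() else set()

    landed = {p.name for p in Path(landing_dir).glob("*.csv")}
    new_files = sorted(landed - seen)  # process only the delta

    checkpoint.write_text(json.dumps(sorted(seen | landed)))
    return new_files
```

In production, Auto Loader performs this discovery, checkpointing, and schema tracking automatically against cloud storage at far larger scale.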
From there, the team extended the same approach across other critical systems. HR and workforce data whose history was previously difficult to track is now ingested through the Workday connector into structured, relational tables that capture changes such as manager transitions and employee rehires over time. In addition, the SFTP connector continuously pulls global supply chain data, including point-of-sale figures, shipping metrics, and manufacturing records, from Panasonic's Japan headquarters. This time-sensitive data, delivered as CSV and Excel files, is made available for near-real-time supply chain analytics.
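The change history described above, such as manager transitions over time, is a classic slowly changing dimension (Type 2) pattern: each attribute change closes the prior record and opens a new effective-dated one. As a simplified illustration (the field names, dates, and in-memory structure below are hypothetical, not the Workday connector's actual schema), the pattern looks like:

```python
# SCD Type 2 sketch: every manager change closes the open record for that
# employee and appends a new effective-dated row, preserving full history.
# Field names are illustrative only, not the connector's real schema.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class EmployeeHistory:
    rows: list = field(default_factory=list)  # effective-dated records

    def record(self, emp_id: str, manager: str, effective_date: str) -> None:
        current = [r for r in self.rows
                   if r["emp_id"] == emp_id and r["end_date"] is None]
        if current and current[0]["manager"] == manager:
            return  # no change, nothing to record
        for r in current:
            r["end_date"] = effective_date  # close the open record
        self.rows.append({"emp_id": emp_id, "manager": manager,
                          "start_date": effective_date, "end_date": None})

    def manager_on(self, emp_id: str, date: str):
        """Look up which manager was effective on a given ISO date."""
        for r in self.rows:
            if (r["emp_id"] == emp_id and r["start_date"] <= date
                    and (r["end_date"] is None or date < r["end_date"])):
                return r["manager"]
        return None
```

On Databricks this would live in Delta tables rather than Python objects, but the effective-dating logic is the same.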
Beyond structured and semi-structured data, Panasonic is now tackling the challenge of unstructured PDF document repositories. By connecting SharePoint to the Databricks environment, the team has automated the processing of thousands of complex legal and supplier documents. Using Databricks Document Intelligence (ai_parse_document and ai_query), they were able to process and extract dozens of key fields with high precision into structured outputs, transforming static documents into live, queryable data.
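The output shape of that extraction step is contract text in, a structured record of key fields out. The sketch below mimics it with simple regular expressions as a stand-in; the real pipeline uses Databricks' ai_parse_document and ai_query for far more robust, AI-based extraction, and the labels and fields here are hypothetical examples:

```python
# Illustrative stand-in for document field extraction: free-form contract
# text goes in, a structured record of key fields comes out. The real
# pipeline uses ai_parse_document/ai_query; this regex version only shows
# the output shape. Field names and label formats are hypothetical.
import re

def extract_contract_fields(text: str) -> dict:
    """Pull a few key fields out of raw contract text into a structured record."""
    patterns = {
        "supplier":        r"Supplier:\s*(.+)",
        "expiration_date": r"Expiration Date:\s*(\d{4}-\d{2}-\d{2})",
        "contract_value":  r"Contract Value:\s*\$?([\d,]+)",
    }
    record = {}
    for name, pat in patterns.items():
        m = re.search(pat, text)
        record[name] = m.group(1).strip() if m else None
    return record
```

A regex extractor like this only works for rigidly formatted documents, which is precisely why the team relies on AI-based parsing for thousands of heterogeneous legal and supplier PDFs.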
"Processing 10,000 supplier contracts and MSAs used to be a two-week manual ordeal on our legacy stack. By leveraging the Databricks SharePoint connector, serverless compute, and Databricks Document Intelligence, we’ve compressed that entire workflow—from ingestion to extracting critical expiration dates—into just two to three hours." – Shingo Sakamoto, IT Principal Data Architect, Panasonic
Underpinning all of it is a unified governance layer via Unity Catalog, which enables the team to securely share data across business units without duplication. Across all these sources, the team leverages Databricks serverless compute to execute highly performant ETL notebooks in a fraction of the time.
"In our legacy environment, massive SAP tables required five to six hours to load and failed frequently. By standardizing on Databricks and using Auto Loader, ingestion for our largest table dropped from hours to just two minutes. Our pipelines are completely stable now, guaranteeing our leadership has on-time reporting." – Yuka Kato, Lead Data Engineer, Panasonic
Trusted data, enterprise-wide impact
Today, Panasonic's business leaders start every morning with what they need: accurate, on-time reports covering daily sales, billing, and inventory, delivered without fail. End-to-end data processing for all silver tables completes in roughly 30 minutes, and the reliability that once felt out of reach has simply become the new standard.
The impact goes beyond performance. By retiring expensive legacy data warehouse, ETL, and BI licensing, the team achieved a significant reduction in total cost of ownership, freeing up budget and bandwidth for higher-value work. And with a trusted data foundation in place, access has opened up across the organization: BI analysts can now explore data directly, cutting load and refresh times by roughly 50%, while sales representatives and regional managers are building their own views and forecasting models.
"Databricks has empowered our data analysts to do more: they can explore raw data directly, collaborate in shared notebooks, and move faster than ever. As a result of this operational efficiency, our small data science team can tackle solutions at enterprise scale." – Jerry Deng, BI Director, Panasonic
With a stable, unified data foundation in place, that same spirit of access is shaping Panasonic's AI ambitions. The team is implementing a Genie workspace to give its non-technical Quoting Team self-service access to pricing history and predictive insights.
"Our Quoting Team doesn't think in SQL; they think in customers and products. Genie meets them where they are, turning pricing questions into instant answers, enabling a small data team to deliver enterprise-wide impact." – Elena Gusakova, Senior Data Scientist, Panasonic



