
CUSTOMER STORY

Fleetworthy scales safety and compliance with declarative pipelines

>10x

Increase in deployment velocity

>99%

Reduction in processing time

>99%

Reduction in developer onboarding time


Fleetworthy supports some of the largest commercial fleets in North America, helping them operate more safely, efficiently, and in compliance with complex regulations. Trusted by 75% of the top fleets, Fleetworthy offers the only complete Fleet Readiness platform, including safety and compliance, toll management, and weigh station bypass solutions.

As Fleetworthy grew, so did the demands on its data platform. What began as a practical, internally built pipeline framework on AWS increasingly struggled under the weight of scale. Pipelines became harder to maintain, failures more frequent, and development slower as more teams relied on the same brittle foundation.

“Frankly speaking, we were underwater,” said Cameron Lee, P.Eng., Principal Software Architect at Fleetworthy. To resolve these bottlenecks, Fleetworthy moved away from its homegrown approach in favor of Databricks. The shift has reclaimed thousands of engineering hours previously lost to troubleshooting. Instead of navigating workarounds, the team now relies on a streamlined architecture that scales with its data volume. With Databricks, Fleetworthy has turned a period of technical challenge into a competitive advantage, ensuring its platform is as robust as the products it delivers to customers.

Fleetworthy needed a data engineering model that could scale with the organization, reducing operational burden without slowing teams down.

A shift to declarative data engineering

To modernize its platform, Fleetworthy adopted Databricks and embraced Spark Declarative Pipelines as the foundation for its data engineering strategy. The appeal was immediate. Instead of hand-coding retries, incremental logic, and operational safeguards, developers could declare the transformations they wanted while Databricks handled execution, resilience, and monitoring behind the scenes.
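As a rough illustration of the model (a minimal sketch with hypothetical table and column names, not Fleetworthy’s actual code), a declarative table on Databricks is a decorated Python function that returns a DataFrame. The developer declares the result; the runtime owns scheduling, retries, and incremental execution.

```python
# Minimal declarative table sketch. Runs inside a Databricks pipeline,
# where the `dlt` module and the `spark` session are provided by the runtime.
import dlt
from pyspark.sql import functions as F

@dlt.table(
    name="trip_events_cleaned",  # hypothetical table name
    comment="Normalized telematics events",
)
def trip_events_cleaned():
    # Declare what the table should contain; the framework decides
    # how and when to compute it, including retries on failure.
    return (
        spark.read.table("raw.trip_events")  # hypothetical source table
        .withColumn("event_ts", F.to_timestamp("event_ts"))
        .dropDuplicates(["event_id"])
    )
```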

For Fleetworthy, this wasn’t a radical departure from how they already thought about pipelines. In fact, Spark Declarative Pipelines closely resembled patterns the team had been attempting to build internally, just delivered as a fully managed, production-ready framework.

One capability in particular validated the decision early.

“Spark Declarative Pipelines had data quality expectations, which is exactly what we were planning to build ourselves,” Cameron said.

By adopting Spark Declarative Pipelines, Fleetworthy eliminated the need to maintain custom infrastructure for core pipeline functionality and instead gained built-in enforcement of data quality as pipelines evolved.
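Expectations are declared in the same place as the transformation itself. A sketch of what that looks like, continuing the hypothetical table above: each expectation pairs a rule name with a SQL condition, and the decorator chosen determines whether violations are tracked, dropped, or fail the update.

```python
# Data quality expectations on a pipeline table (hypothetical rules).
import dlt

@dlt.table(name="trip_events_validated")
@dlt.expect("valid_speed", "speed_mph BETWEEN 0 AND 120")        # track violations in metrics
@dlt.expect_or_drop("has_vehicle_id", "vehicle_id IS NOT NULL")  # drop failing rows
@dlt.expect_or_fail("valid_event_ts", "event_ts IS NOT NULL")    # fail the update on violation
def trip_events_validated():
    return dlt.read("trip_events_cleaned")
```

Violation counts surface automatically in pipeline monitoring, which is the kind of enforcement Fleetworthy would otherwise have had to build and maintain itself.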

Enabling scale without fragmentation

As Fleetworthy moved toward a more domain-oriented operating model, pipeline ownership expanded beyond a small group of specialists. More than ten teams and roughly thirty developers were now positioned to contribute to data workflows supporting both internal analytics and customer-facing features.

That kind of decentralization often leads to fragmentation, with teams inventing their own frameworks, patterns, and operational shortcuts. To avoid that outcome, Fleetworthy made a deliberate decision to standardize on Spark Declarative Pipelines as the recommended foundation for pipeline development across the organization.

The result was a shared “main road” for developers. Spark Declarative Pipelines abstracts away much of the complexity traditionally associated with Spark-based pipelines, allowing application developers to contribute without needing deep data engineering expertise.

“Spark Declarative Pipelines gives our developers a main road,” Cameron said. “They don’t need a full data engineering degree to contribute.”
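In practice, the main road can be as short as this sketch (the source path and options are assumptions for illustration): an application developer declares a streaming table, and incremental processing, checkpointing, and recovery come from the framework rather than hand-written code.

```python
# Incremental ingestion declared as a streaming table (hypothetical source).
import dlt

@dlt.table(name="raw_toll_transactions")
def raw_toll_transactions():
    # readStream makes the table incremental; checkpointing, retries, and
    # failure recovery are handled by the pipeline, not the developer.
    return (
        spark.readStream.format("cloudFiles")  # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("s3://example-bucket/toll-transactions/")  # assumed location
    )
```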

By removing operational plumbing from day-to-day development, teams could focus on business logic: delivering safety insights, compliance signals, and operational intelligence instead of debugging infrastructure.

From firefighting to forward momentum

The impact of this shift became visible quickly, transforming Fleetworthy’s delivery from a monthly hurdle into a continuous stream of value. By moving away from a legacy ETL process split between Spark and custom Python that could not scale, the organization unlocked horizontal scaling and eliminated persistent bottlenecks. Deployment frequency rose from once per month to multiple times per week, while new developer setup dropped from a full day to roughly one minute.

While the migration effort was significant and involved a deep architectural rework both within and outside of Databricks, the operational gains were substantial. Overall costs remained stable, but the value delivered per dollar rose sharply through a dramatic reduction in data latency: processing that previously took 8 hours now completes in anywhere from 2 hours to as little as 1 minute, depending on the workload. Today, Spark Declarative Pipelines supports 80–90% of Fleetworthy’s pipeline use cases, providing a single, consistent framework that has restored confidence in production workflows and enabled domain teams to deliver insights faster than ever before.

A foundation built for long-term growth

By replacing fragile homegrown systems with Spark Declarative Pipelines, Fleetworthy established a durable foundation for its data platform, one that balances autonomy with consistency, and speed with reliability.

With operational complexity abstracted away and quality built in by default, Fleetworthy’s teams can focus on what matters most: delivering timely, trustworthy data that helps fleets operate more safely and efficiently.

As the company continues to grow, its data platform is no longer a constraint. Instead, Spark Declarative Pipelines has become an enabler supporting innovation at scale while keeping operational risk firmly under control.