Arvind Hosagrahara leads a team that helps organizations deploy MATLAB algorithms in critical engineering applications, with a focus on integrating MATLAB into enterprise IT/OT systems. Arvind has extensive hands-on experience developing MATLAB and Simulink applications and integrating them with external technologies. He has helped design the software and workflows for a variety of production applications, focusing on robustness, security, scalability, maintainability, usability, and forward compatibility across the automotive, energy, production, and finance industries, among others.
May 28, 2021 11:05 AM PT
Semantic segmentation is the classification of every pixel in an image or video. The segmentation partitions a digital image into multiple objects, simplifying or changing the representation of the image into something more meaningful and easier to analyze. The technique has a wide variety of applications, ranging from perception in autonomous driving scenarios to cancer cell segmentation for medical diagnosis.
The datasets that require such segmentation are growing exponentially, driven by improvements in the accuracy and quality of the sensors generating the data, which now extend to 3D point cloud data. This growth is further compounded by advances in cloud technologies that provide the storage and compute such applications demand. Semantically segmented datasets are a key requirement for improving the accuracy of the inference engines built upon them.
Improving the accuracy and efficiency of these systems directly affects the business value for organizations developing such functionality as part of their AI strategy.
This presentation details workflows for labeling, preprocessing, modeling, and evaluating performance and accuracy. Scientists and engineers leverage domain-specific features and tools that support the entire workflow, from labeling ground truth and handling data from a wide variety of sources and formats to developing models and finally deploying them. Users can scale their deployments optimally on GPU-based cloud infrastructure to build accelerated training and inference pipelines while working with big datasets. These environments are optimized for engineers to develop such functionality with ease and then scale against large datasets with Spark-based clusters on the cloud.
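The evaluation step of such a workflow typically compares predicted masks against labeled ground truth pixel by pixel; a common metric is per-class intersection over union (IoU). A minimal sketch in Python (illustrative only; the talk's actual tooling is MATLAB-based, and the masks and class labels here are toy data):

```python
def per_class_iou(pred, truth, classes):
    """Compute intersection-over-union for each class label.

    pred, truth: 2D lists of integer class labels, same shape.
    Returns a dict mapping class -> IoU (NaN if the class is absent
    from both masks).
    """
    ious = {}
    for c in classes:
        inter = union = 0
        for pred_row, truth_row in zip(pred, truth):
            for p, t in zip(pred_row, truth_row):
                if p == c and t == c:
                    inter += 1          # pixel labeled c in both masks
                if p == c or t == c:
                    union += 1          # pixel labeled c in either mask
        ious[c] = inter / union if union else float("nan")
    return ious

# Toy 4x4 masks: class 0 = background, class 1 = object
truth = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
pred  = [[0, 1, 1, 1],     # one false-positive object pixel
         [0, 0, 1, 1],
         [0, 0, 0, 0],
         [0, 0, 0, 0]]
ious = per_class_iou(pred, truth, classes=[0, 1])
```

Averaging the per-class IoUs (mean IoU) gives a single accuracy score commonly used to compare segmentation models.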
June 24, 2020 05:00 PM PT
John Deere is a leading manufacturer of agricultural, construction, and forestry machinery, diesel engines, and drivetrains for a variety of applications ranging from lawn care to heavy equipment. The company collects large transient engineering datasets from John Deere test vehicles in the field and via telematic data loggers. The goal is to leverage physics-based and empirical strategies and algorithms to predict life cycles and damage on engine components. This technology has allowed our organization to reuse our MATLAB-based algorithms with very little rework, retaining all of the functionality that MATLAB has robustly built in, so that the algorithms and models execute efficiently and accurately on duty cycles that may never have been originally defined in engine dynamometer test cells.
An engine engineer can now spin up Spark-enabled parallel compute environments on demand, automatically, to analyze data coming in from around the world. This is an extraordinary capability that allows domain-specialized engineers, not formally trained as data scientists, to apply their understanding of engineering problems successfully and easily. The heavy lifting needed to enable large-scale, on-demand data processing in the cloud is drastically simplified. Overall, this may help establish a more well-rounded data science community of engineers. This talk will discuss the challenges and solutions in working with engineering data and applying physics and statistics approaches to tackle our analysis needs.
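The map-style pattern underlying this kind of on-demand parallel analysis can be sketched in Python (illustrative only; the workflow described in the talk uses MATLAB with Spark-enabled clusters). Here a hypothetical per-vehicle metric, the mean engine load over a logged duty cycle, is computed across independent telematics logs in parallel:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_duty_cycle(samples):
    """Hypothetical per-vehicle analysis: mean engine load (%)
    over one logged duty cycle."""
    return sum(samples) / len(samples)

# Toy stand-ins for telematics logs arriving from three vehicles
logs = {
    "vehicle_a": [40.0, 60.0, 80.0],
    "vehicle_b": [50.0, 60.0],
    "vehicle_c": [30.0, 90.0],
}

# Fan the independent per-vehicle analyses out across workers,
# analogous to how a Spark job fans tasks out across cluster nodes
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(zip(logs, pool.map(analyze_duty_cycle, logs.values())))
```

Because each log is analyzed independently, the same pattern scales from a thread pool on one machine to a cluster, which is what makes the on-demand Spark environments described above a natural fit.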