
    Modernizing risk management

Leveraging a unified approach to data and AI to mitigate operational risk in the financial services industry


    Introduction
    Risk Management Solution Accelerators
    Addressing common risk management challenges
Streamlining model development with MLflow
    Monte Carlo simulations at scale with Apache Spark™
    Aggregations, Backtesting at Scale and Introducing Alternative Data
    Slicing and dicing value-at-risk with Delta Lake
    Conclusion & Next Steps

    Introduction

Managing risk within financial services, especially within the banking sector, has increased in complexity over the past several years. First, new frameworks (such as FRTB) are being introduced that potentially require tremendous computing power and the ability to analyze years of historical data. At the same time, regulators are demanding more transparency and explainability from the banks they oversee. Finally, the introduction of new technologies and business models means the need for sound risk governance is at an all-time high.

Meeting these demands, however, has not been easy for the banking industry. Traditional banks relying on on-premises infrastructure can no longer effectively manage risk. Banks must abandon the computational inefficiencies of legacy technologies and build an agile, modern risk management practice capable of rapidly responding to market and economic volatility through the use of data and advanced analytics.

Recent experience shows that as new threats emerge, historical data and aggregated risk models quickly lose their predictive value. Risk analysts must augment traditional data with alternative datasets in order to explore new ways of identifying and quantifying the risks facing their business, both at scale and in real time.

In this solution brief, we will demonstrate how to modernize traditional value-at-risk (VaR) calculation through the use of various components of the Databricks Unified Data Analytics Platform — Delta Lake, Apache Spark™ and MLflow — in order to enable a more agile and forward-looking approach to risk management.

    Risk Management Solution Accelerators

    Build a Modern Risk Management Solution in Financial Services

    Adopt a more agile approach to risk management by unifying data and AI in the Lakehouse

This solution has two accelerators. The first shows how Delta Lake and MLflow can be used for value-at-risk calculations, demonstrating how banks can modernize their risk management practices with streaming data ingestion, rapid model development and Monte Carlo simulations at scale on the Lakehouse.

    Try out the accelerator – Modernizing Risk Management Part 1

The second accelerator shows how to perform aggregations and backtesting at scale, and how to use alternative data to move toward a more holistic, agile and forward-looking approach to risk management and investments.

    Try out the accelerator – Modernizing Risk Management Part 2

    Benefits:

• Gain a holistic view: Get a more complete view of risk and investments with real-time and alternative data that can be analyzed on demand
• Detect emerging threats: Proactively identify emerging threats to protect capital and optimize exposures
• Achieve speed at scale: Scan through large volumes of data quickly and thoroughly to respond in time and reduce risk

    Addressing common risk management challenges

    Databricks provides a unified approach to data analytics to address the most common challenges when trying to effectively modernize risk management practices. In this section of the solution brief, we’ll cover the following solutions:

    • Using Delta Lake to have a unified view of your market data
    • Leveraging MLflow as a delivery vehicle for model development and deployment
    • Using Apache Spark for distributing Monte Carlo simulations at scale

    Modernizing data management with Delta Lake

With the rise of big data and cloud-based technologies, the IT landscape has drastically changed in the last decade. Yet most FSIs still rely on mainframes and non-distributed databases for core risk operations such as VaR calculations, and move only some of their downstream processes to modern data lakes and cloud infrastructure. As a result, banks are falling behind the technology curve, and their current risk management practices are no longer sufficient for the modern economy. Modernizing risk management starts with the data. Specifically, it means shifting the lens through which data is viewed: not as a cost, but as an asset.

Old Approach: When data is considered a cost, FSIs limit the capacity of risk analysts to explore “what if” scenarios and restrict their aggregated data silos to satisfying only predefined risk strategies. Over time, the rigidity of maintaining silos has led engineers to branch new processes and create new aggregated views on top of already fragile workflows in order to adapt to evolving requirements. Paradoxically, the constant struggle to keep data as a low-cost commodity on-premises has led to a more fragile, and therefore more expensive, ecosystem to maintain overall. Failed processes (annotated with an X symbol in the diagram below) have far too many downstream impacts to guarantee both the timeliness and the reliability of your data. Consequently, an intraday (and reliable) view of market risk has become increasingly complex and cost prohibitive to achieve, given all the moving components and interdependencies schematised in the diagram below.

Modern Approach: When data is considered an asset, organizations embrace the versatile nature of the data, serving multiple use cases (such as value-at-risk and expected shortfall) and enabling a variety of ad hoc analyses (such as understanding risk exposure to a specific country). Risk analysts are no longer restricted to a narrow view of risk and can adopt a more agile approach to risk management. With Delta Lake, an open-source storage layer, risk analysts can ensure data consistency at scale. By unifying streaming and batch ETL, and by ensuring ACID compliance and schema enforcement, Delta Lake brings performance and reliability to your data lake, gradually increasing the quality and relevance of your data through its Bronze, Silver and Gold layers and bridging the gap between operational processes and analytics data.

A modern approach to financial risk management emphasizes increasing the quality and relevance of data and bridging the gap between operational processes and analytics data.

In this example, we evaluate the level of risk of various investments in a Latin American equity portfolio composed of 40 instruments across multiple industries, storing all returns in a centralized Delta Lake table that will drive all our value-at-risk calculations.

    Sample risk portfolio of various Latin American instruments

For the purpose of this demo, we access daily close prices from Yahoo Finance using a Python finance library. In real life, one may acquire market data directly from source systems (such as change data capture from mainframes) into a Delta Lake table, storing raw information in a Bronze table and curated/validated data in a Silver table, in real time.
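As a rough sketch of that ingestion step (assuming the yfinance library and an illustrative ticker list, neither of which is prescribed by the accelerator), daily close prices can be pulled and landed in a Bronze Delta table as follows:

import yfinance as yf

# Hypothetical ticker list for illustration only
tickers = ["AVAL", "EC", "BAP"]

# Download daily close prices and reshape to a long (date, ticker, close) format
closes = yf.download(tickers, start="2019-01-01", end="2020-06-01")["Close"]
close_pdf = closes.reset_index().melt(id_vars="Date", var_name="ticker", value_name="close")

# Land the raw feed in a Bronze Delta table
(spark.createDataFrame(close_pdf)
  .withColumnRenamed("Date", "date")
  .write.format("delta").mode("overwrite")
  .saveAsTable("bronze_market_data"))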

With our core data available on Delta Lake, we apply a simple window function to compute daily log returns and write the results back to a Gold table ready for risk modelling and analysis.
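A minimal sketch of that transformation, assuming a Silver table named silver_market_data with ticker, date and close columns (all names are illustrative):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Previous close per instrument, ordered by trading date
w = Window.partitionBy("ticker").orderBy("date")

returns = (spark.table("silver_market_data")
  .withColumn("prev_close", F.lag("close").over(w))
  .withColumn("log_return", F.log(F.col("close") / F.col("prev_close")))
  .dropna(subset=["log_return"]))

# Gold table that drives all downstream value-at-risk calculations
returns.write.format("delta").mode("overwrite").saveAsTable("gold_market_returns")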

In the example below, we show a specific slice of our investment data for AVAL (Grupo Aval Acciones y Valores S.A.), a financial services company operating in Colombia. Given the expected drop in its stock price post-March 2020, we can evaluate its impact on our overall risk portfolio.

    Sample data used by Databricks to illustrate the effectiveness of a modern approach to financial risk and data management

Streamlining model development with MLflow

Although quantitative analysis is not a new concept, the recent rise of data science and the explosion of data volumes have uncovered major inefficiencies in the way banks operate models. Without any industry standard, data scientists often operate on a best-effort basis. This often means training models against data samples on single nodes and manually tracking models throughout the development process, resulting in long release cycles (it may take between 6 and 12 months to deliver a model to production). The long model development cycle hinders their ability to quickly adapt to emerging threats and to dynamically mitigate the associated risks. The major challenge FSIs face in this paradigm is reducing model development-to-production time without doing so at the expense of governance and regulations, or contributing to an even more fragile data science ecosystem.

MLflow is the de facto standard for managing the machine learning lifecycle, bringing immutability and transparency to model development, but it is not restricted to AI. A bank’s definition of a model is usually quite broad and includes any financial model, from Excel macros to rule-based systems to state-of-the-art machine learning, all of which could benefit from the central model registry provided by MLflow within the Databricks Unified Data Analytics Platform.

MLflow is an open-source platform that streamlines the machine learning lifecycle

    REPRODUCING MODEL DEVELOPMENT

In this example, we want to train a new model that predicts stock returns given market indicators (such as the S&P 500, crude oil and treasury bonds). We can retrieve “AS OF” data in order to ensure full model reproducibility and audit compliance. This capability of Delta Lake is commonly referred to as “time travel”. The resulting dataset will remain consistent throughout all experiments and can be accessed as-is for audit purposes.
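As a sketch, the training set can be pinned to the state of the Gold table at a given point in time (the path and timestamp are illustrative); Delta Lake supports this both through reader options and through SQL:

# Read the returns exactly as they were at a given point in time ("time travel")
training_df = (spark.read.format("delta")
  .option("timestampAsOf", "2020-05-01")
  .load("/mnt/risk/gold_market_returns"))

# Equivalent SQL form:
# SELECT * FROM gold_market_returns TIMESTAMP AS OF '2020-05-01'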

In order to select the right features for their models, quantitative analysts often navigate between Spark and pandas DataFrames. Here we show how to switch from a PySpark to a Python context in order to extract correlations across our market factors. Databricks interactive notebooks come with built-in visualisations and also fully support the use of Matplotlib and seaborn (or ggplot2 for R).
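A sketch of that switch, assuming a Gold table of daily market factor values (the table and column names are illustrative):

import seaborn as sns
import matplotlib.pyplot as plt

# Bring the (small) market factor history back to the driver as a pandas DataFrame
factors_pdf = spark.table("gold_market_factors").toPandas()

# Correlation matrix of the market indicators (e.g. S&P 500, crude oil, treasury bonds)
corr = factors_pdf[["sp500", "crude_oil", "treasury"]].corr()

fig, ax = plt.subplots(figsize=(6, 5))
sns.heatmap(corr, annot=True, cmap="coolwarm", ax=ax)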

     

    Sample variance-covariance table generated by a Databricks interactive notebook, demonstrating its efficacy and rigor in constructing its predictive risk models.

     

Assuming our indicators are not correlated (they are) and are predictive of our portfolio returns (they may be), we want to log this graph as evidence of our successful experiment. This shows internal audit, model validation functions and regulators that model exploration was conducted to the highest quality standards and that model development was driven by empirical results.
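One way to attach that evidence to the MLflow run (a sketch; mlflow.log_figure is available in recent MLflow versions, and older versions can save the figure to a file and call mlflow.log_artifact instead):

import mlflow

with mlflow.start_run(run_name="market_factor_exploration"):
    # Persist the correlation heatmap alongside the run for audit purposes
    mlflow.log_figure(fig, "factor_correlation.png")
    mlflow.log_param("observation_window_days", 90)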

    TRAINING MODELS IN PARALLEL

As the number of instruments in our portfolio increases, we may want to train models in parallel. This can be achieved with a simple Pandas UDF, as sketched below. For convenience (models may be more complex in real life), we train a simple linear regression model per instrument and aggregate all model coefficients as an n x m matrix (n being the number of instruments and m the number of features derived from our market factors).
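A minimal sketch of that pattern using applyInPandas and scikit-learn (the feature columns, table name and return schema are illustrative, and the accelerator's own model may differ):

import pandas as pd
from sklearn.linear_model import LinearRegression

def train_model(pdf: pd.DataFrame) -> pd.DataFrame:
    # Fit one linear regression per instrument: instrument return ~ market factors
    X = pdf[["sp500", "crude_oil", "treasury"]].values
    y = pdf["log_return"].values
    model = LinearRegression().fit(X, y)
    weights = [float(model.intercept_)] + [float(c) for c in model.coef_]
    return pd.DataFrame([[pdf["ticker"].iloc[0], weights]], columns=["ticker", "weights"])

# gold_training_features (illustrative) joins instrument returns with market factors;
# one model is trained per instrument, in parallel across the cluster
model_df = (spark.table("gold_training_features")
  .groupBy("ticker")
  .applyInPandas(train_model, schema="ticker string, weights array<double>"))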

The resulting dataset (the weights for each model) can easily be collected back to memory and logged to MLflow as our model candidate for the rest of the experiment. In the graph below, we report the predicted vs. actual stock return derived from our model for Ecopetrol S.A., an oil and gas producer in Colombia.

    Sample model output visualization retained by MLflow, along with the experiment, its revisions, and the underlying data, providing full transparency, traceability, and context.

Our experiment is now stored on MLflow alongside all the evidence required for an independent validation unit (IVU) submission, which is likely part of your model risk management framework. It is key to note that this experiment is linked not only to our notebook, but to the exact revision of it, giving independent experts and regulators full traceability of our model as well as all the necessary context required for model validation.

    Monte Carlo simulations at scale with Apache Spark™

Value-at-risk is the process of simulating random walks that cover possible outcomes as well as worst-case (n) scenarios. A 95% value-at-risk for a period of (t) days is the best-case scenario out of the worst 5% of trials. We therefore want to generate enough simulations to cover a range of possible outcomes given the 90 days of historical market volatility observed across all the instruments in our portfolio. Given the number of simulations required for each instrument, this system must be designed with a high degree of parallelism in mind, making value-at-risk the perfect workload to execute in a cloud-based environment. Risk management is the number one reason top-tier banks evaluate cloud compute for analytics today and accelerate value through the Databricks runtime.

    CREATING A MULTIVARIATE DISTRIBUTION

Whilst the industry recommends generating between 20,000 and 30,000 simulations, the main complexity of calculating value-at-risk for a mixed portfolio is not measuring individual asset returns but the correlations between them. At a portfolio level, market indicators can be elegantly manipulated in native Python without having to shift complex matrix computation to a distributed framework. As it is common to operate with multiple books and portfolios, this same process can easily scale out by distributing the matrix calculations in parallel. We use the last 90 days of market returns in order to compute today’s volatility (extracting both the average and the covariance).
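In practice this reduces to a couple of NumPy calls over the last 90 days of factor returns (a sketch, reusing the illustrative factors_pdf pandas DataFrame from the earlier correlation example):

import numpy as np

recent = factors_pdf[["sp500", "crude_oil", "treasury"]].tail(90)  # last 90 days of factor returns
mu = recent.mean().values                                          # average return of each market factor
sigma = np.cov(recent.values.T)                                    # covariance matrix across market factors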

We generate a specific market condition by sampling a point from the market’s multivariate distribution (a superposition of the individual normal distributions of our market factors). This provides a feature vector that can be injected into our model in order to predict the return of our financial instrument.
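Sampling one simulated market condition is then a single call (a sketch; model_weights refers to the per-instrument coefficients produced by the earlier training sketch):

# One simulated market condition: a vector with one value per market factor
market_vector = np.random.multivariate_normal(mu, sigma)

# Inject the simulated factors into the instrument's model to predict its return
predicted_return = model_weights[0] + np.dot(model_weights[1:], market_vector)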

    GENERATING CONSISTENT AND INDEPENDENT TRIALS AT SCALE

Another complexity of simulating value-at-risk is avoiding auto-correlation by carefully fixing random numbers using a seed. We want each trial to be independent yet consistent across instruments (market conditions are identical for each simulated position). See below an example of creating an independent and consistent trial set: running this same block twice will result in the exact same set of generated market vectors.
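A sketch of such a block; because the seed list is fixed, re-running it reproduces exactly the same market vectors:

seeds = range(10)  # illustrative; in practice, one seed per simulated trial

market_vectors = []
for seed in seeds:
    np.random.seed(seed)  # fixing the seed makes each trial reproducible and independent
    market_vectors.append(np.random.multivariate_normal(mu, sigma))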

In a distributed environment, we want each executor in our cluster to be responsible for multiple simulations across multiple instruments. We define our seed strategy so that each executor is responsible for num_instruments x ( num_simulations / num_executors ) trials. Given 100,000 Monte Carlo simulations, a parallelism of 50 executors and 10 instruments in our portfolio, each executor will run 20,000 instrument returns.

We group our set of seeds per executor and generate trials for each of our models through the use of a Pandas UDF. Note that there may be multiple ways to achieve the same result, but this approach has the benefit of fully controlling the level of parallelism, ensuring that no hotspots occur and that no executor is left idle waiting for other tasks to finish.
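A sketch of that seed-distribution pattern with applyInPandas (num_simulations, num_executors and the models dictionary of per-instrument weights are assumptions carried over from the earlier sketches):

import numpy as np
import pandas as pd
from pyspark.sql import functions as F

num_simulations, num_executors = 50000, 50

# One row per seed, bucketed so that each executor handles an equal share of trials
seed_df = (spark.range(num_simulations)
  .withColumnRenamed("id", "seed")
  .withColumn("executor_id", F.col("seed") % num_executors))

def run_trials(pdf: pd.DataFrame) -> pd.DataFrame:
    rows = []
    for seed in pdf["seed"]:
        np.random.seed(int(seed))                              # identical market conditions per trial
        market_vector = np.random.multivariate_normal(mu, sigma)
        for ticker, weights in models.items():                 # every instrument sees the same conditions
            ret = float(weights[0] + np.dot(weights[1:], market_vector))
            rows.append((int(seed), ticker, ret))
    return pd.DataFrame(rows, columns=["seed", "ticker", "trial_return"])

trials_df = seed_df.groupBy("executor_id").applyInPandas(
    run_trials, schema="seed long, ticker string, trial_return double")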

    We append our trials partitioned by day onto a Delta Lake table so that analysts can easily access a day’s worth of simulations and group individual returns by a trial Id (i.e. the seed) in order to access the daily distribution of returns and its respective value-at-risk.
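A sketch of that write (table and column names are illustrative):

from pyspark.sql import functions as F

(trials_df
  .withColumn("run_date", F.current_date())
  .write.format("delta")
  .partitionBy("run_date")
  .mode("append")
  .saveAsTable("monte_carlo_trials"))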

With respect to our original definition of data as a core asset (as opposed to a cost), we store all our trials enriched with our portfolio taxonomy (such as industry type and country of operation), enabling a more holistic and on-demand view of the risk facing our investment strategies.

    Aggregations, Backtesting at Scale and Introducing Alternative Data

The first section of this solution brief addressed the technical challenges of modernizing risk management practices with data and advanced analytics, covering the concepts of risk modelling and Monte Carlo simulations using MLflow and Apache Spark. This section focuses on the risk analyst persona and their requirement to efficiently slice and dice risk simulations (on demand) in order to better understand portfolio risks as new threats emerge, in real time. We will cover the following topics:

    • Using Delta Lake and SQL for aggregating value-at-risk on demand
• Using Apache Spark™ and MLflow to backtest models and report breaches to regulators
    • Exploring the use of alternative data to better assess your risk exposure

    Slicing and dicing value-at-risk with Delta Lake

    In this example, we uncover the risk of various investments in a Latin America equity portfolio composed of 40 instruments across multiple industries. For that purpose, we leverage the vast amount of data we were able to generate through Monte Carlo simulations (40 instruments x 50,000 simulations x 52 weeks = 100 million records), partitioned by day and enriched with our portfolio taxonomy.

    VALUE-AT-RISK

Value-at-risk is the process of simulating random walks that cover possible outcomes as well as worst-case (n) scenarios. A 95% value-at-risk for a period of (t) days is the best-case scenario out of the worst 5% of trials.

As our trials were partitioned by day, analysts can easily access a day’s worth of simulation data and group individual returns by trial ID (i.e. the seed used to generate the financial market conditions) in order to access the daily distribution of our investment returns and its respective value-at-risk. Our first approach is to use Spark SQL to aggregate our simulated returns for a given day (50,000 records) and then use in-memory Python to compute the 5% quantile through a simple NumPy operation.
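A sketch of that first approach (the table, column and date values are illustrative):

import numpy as np

# Aggregate the simulated portfolio return per trial for one day (about 50,000 rows)
daily_returns = spark.sql("""
  SELECT seed, SUM(trial_return) AS portfolio_return
  FROM monte_carlo_trials
  WHERE run_date = '2020-03-01'
  GROUP BY seed
""").toPandas()

# 95% value-at-risk: the 5% quantile of the simulated return distribution
var_95 = np.quantile(daily_returns["portfolio_return"], 0.05)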

    Provided an initial $10,000 investment across all our Latin American equity instruments, the 95% value-at-risk – at that specific point in time – would have been $3,000. This is how much our business would be ready to lose (at least) in the worst 5% of all the possible events.

The downside of this approach is that we first need to collect all daily trials in memory in order to compute the 5% quantile. While this can be done easily for one day’s worth of data, it quickly becomes a bottleneck when aggregating value-at-risk over a longer period of time.

    A PRAGMATIC AND SCALABLE APPROACH TO PROBLEM SOLVING

Extracting a percentile from a large dataset is a known challenge in any distributed computing environment. A common (albeit inefficient) practice is to 1) sort all of your data and 2) cherry-pick a specific row using takeOrdered, or to find an approximation through the approxQuantile method. Our challenge is slightly different, since our data does not constitute a single dataset but spans multiple days, industries and countries, where each bucket may be too big to be efficiently collected and processed in memory.

    In practice, we leverage the nature of value-at-risk and only focus on the worst n events (n small). Given 50,000 simulations for each instrument and a 99% VaR, we are interested in finding the best of the worst 500 experiments only. For that purpose, we create a user defined aggregate function (UDAF) that only returns the best of the worst n events. This approach will drastically reduce the memory footprint and network constraints that may arise when computing large scale VaR aggregation.

By registering our UDAF through the spark.udf.register method, we expose that functionality to all of our users, democratizing risk analysis for everyone, even without advanced knowledge of Scala, Python or Spark. One simply has to group by trial ID (i.e. the seed) in order to apply the aggregation above and extract the relevant value-at-risk using plain old SQL, across all of their data.
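The pattern can be sketched in PySpark with a grouped-aggregate pandas UDF registered for SQL use (Spark 3.x). Note that this sketch still materializes each group on an executor; the bounded-buffer behaviour described above is what a hand-written UDAF adds, and the accelerator's own implementation may differ:

import numpy as np
import pandas as pd
from pyspark.sql.functions import pandas_udf

@pandas_udf("double")
def value_at_risk(returns: pd.Series) -> float:
    # 99% VaR over 50,000 trials: keep the worst 500 returns and take the best of them
    worst_n = np.sort(returns.values)[:500]
    return float(worst_n[-1])

# Expose the aggregation to every SQL user
spark.udf.register("VALUE_AT_RISK", value_at_risk)

var_by_day = spark.sql("""
  SELECT run_date, VALUE_AT_RISK(portfolio_return) AS var_99
  FROM (
    SELECT run_date, seed, SUM(trial_return) AS portfolio_return
    FROM monte_carlo_trials
    GROUP BY run_date, seed
  )
  GROUP BY run_date
  ORDER BY run_date
""")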

    We can easily uncover the effect of COVID-19 on our market risk calculation. A 90-day period of economic volatility resulted in a much lower value-at-risk and therefore a much higher risk exposure overall since early March 2020.

    Sample value-at-risk calculation generated for Covid-19 conditions using Spark SQL.

    HOLISTIC VIEW OF OUR RISK EXPOSURE

In most cases, understanding overall value-at-risk is not enough. Analysts need to understand the risk exposure of different books, asset classes, industries or countries of operation. In addition to the Delta Lake capabilities discussed earlier, such as time travel and ACID transactions, Delta Lake and Apache Spark have been highly optimised in the Databricks runtime to provide fast aggregations at read time. High performance can be achieved using our native partitioning logic (by date) alongside Z-order indexing applied to both country and industry. This additional indexing is fully exploited when selecting a specific slice of your data at the country or industry level, drastically reducing the amount of data that needs to be read prior to the VaR aggregation.
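The corresponding layout optimization is a one-line command (a sketch; the table and column names are illustrative):

# Co-locate records for the same country and industry to speed up sliced VaR queries
spark.sql("OPTIMIZE monte_carlo_trials ZORDER BY (country, industry)")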

We can easily adapt the SQL code above by using country and industry as the grouping parameters for the VALUE_AT_RISK method in order to get a more granular and descriptive view of our risk exposure. The resulting dataset can be visualised as-is in a Databricks notebook and can be further refined to understand the exact contribution each of these countries makes to our overall value-at-risk.

    Sample data visualization depicting value-at-risk by country generated by Spark SQL aggregations.

    In this example, Peru seems to have the biggest contribution to our overall risk exposure. Looking at the same SQL code at an industry level in Peru, we can investigate the contribution of the risk across industries.

    Sample data visualization depicting in-country investment portfolio value-at-risk by industry generated by Spark SQL aggregations

With a contribution close to 60% in March 2020, the main risk exposure in Peru seems to be related to the mining industry. An increasingly severe lockdown in response to COVID-19 has been impacting mining projects in Peru, a centre of copper, gold and silver production (source).

Stretching the scope of this article, we may wonder whether we could have identified this trend earlier using alternative data, specifically the Global Database of Events, Language and Tone (GDELT). In the graph below, we report the media coverage for the mining industry in Peru, color-coding positive and negative trends through a simple moving average.

    Sample data visualization depicting financial portfolio risk due to Covid-19 conditions, demonstrating the importance of modernized value-at-risk calculations using augmented historical data with external factors derived from alternative data.

    This clearly exhibits a positive trend in early February, i.e. 15 days prior to the observed stock volatility, which could have been an early indication of mounting risks. This analysis stresses the importance of modernizing value-at-risk calculations, augmenting historical data with external factors derived from alternative data.

    Model backtesting

In response to the 2008 financial crisis, an additional set of measures was developed by the Basel Committee on Banking Supervision. The 1-day 99% VaR results are to be compared against daily P&Ls. Backtests are to be performed quarterly using the most recent 250 days of data. Based on the number of exceedances experienced during that period, the VaR measure is categorized as falling into one of three colored zones.

Level  | Threshold              | Results
Green  | Up to 4 exceedances    | No particular concerns raised
Yellow | Up to 9 exceedances    | Monitoring required
Red    | 10 or more exceedances | VaR measure to be improved
    AS-OF VALUE-AT-RISK

Given the aggregation function we defined earlier, we can extract the daily value-at-risk across our entire investment portfolio. As our aggregated value-at-risk dataset is small (it contains 2 years of history, i.e. 365 x 2 data points), our strategy is to collect the daily VaR and broadcast it to our larger dataset in order to avoid unnecessary shuffles. More details on AS-OF functionality can be found in the blog post Democratizing Financial Time Series Analysis.

    We retrieve the closest value-at-risk to our actual returns via a simple user defined function and perform a 250-day sliding window to extract continuous daily breaches.
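A sketch of that breach count with a 250-day sliding window (backtest_df, with one row per day carrying the actual portfolio return and its as-of 99% VaR, is an assumption for illustration):

from pyspark.sql import functions as F
from pyspark.sql.window import Window

# Rolling 250-day window ending on the current day
w = Window.orderBy("run_date").rowsBetween(-249, 0)

breaches = (backtest_df
  .withColumn("is_breach", (F.col("actual_return") < F.col("var_99")).cast("int"))
  .withColumn("breaches_250d", F.sum("is_breach").over(w)))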

We can observe a consecutive series of 17 breaches from February onwards that would need to be reported to regulators under the Basel III framework. The same can be plotted on a graph over time.

In early 2020, we observed a period of unusual stability that, in hindsight, seems to have presaged the difficult times we are now facing. We can also observe that our value-at-risk decreases dramatically (as our overall risk increases) but does not seem to decrease as fast as the actual returns. This apparent lag in our value-at-risk calculation is due to the 90-day observation period of volatility required by our model.

With our model registered in MLflow, we may want to record these results as evidence for audit and regulation, providing a single source of truth for our risk models, their accuracy and technical context (for transparency), as well as the worst-case scenarios identified here.

    STRESSED VAR

Introducing a “stressed VaR” helps mitigate the risk we face today by including the worst-ever trading days as part of our ongoing calculation. However, this does not change the fact that the whole approach is solely based on historical data and unable to cope with actual volatility driven by new emerging threats. In fact, despite complex “stressed VaR” models, banks are no longer equipped to operate in so-called “unprecedented times” where history no longer repeats itself. As a consequence, most top-tier banks are currently reporting severe breaches in their value-at-risk calculations, as reported in the Financial Times article below.

    Wall St banks’ trading risk surges to highest since 2011

    […] The top five Wall St banks’ aggregate “value-at-risk”, which measures their potential daily trading losses, soared to its highest level in 34 quarters during the first three months of the year, according to Financial Times analysis of the quarterly VaR high disclosed in banks’ regulatory filings

    https://www.ft.com/content/7e64bfbc-1309-41ee-8d97-97d7d79e3f57?shareType=nongift

    A FORWARD LOOKING APPROACH

As demonstrated earlier, a modern risk and portfolio management practice should not be based solely on historical returns but must also embrace the variety of information available today, introducing shocks to Monte Carlo simulations augmented with real-life news events as they unfold. For example, a white paper from Atkins et al. describes how financial news can be used to predict stock market volatility better than close prices. As the Peru example above indicates, the use of alternative data can dramatically augment risk analysts’ intelligence with a more descriptive lens on the modern economy, enabling them to better understand and react to exogenous shocks in real time.

    Conclusion

    Understanding and mitigating risk is at the forefront of any financial services institution. However, banks today are still struggling to keep up with the emerging risks and threats facing their business. Plagued by the limitations of on-premises infrastructure and legacy technologies, banks until recently have not had the tools to effectively build a modern risk management practice. Luckily, a better alternative exists today based on open-source technologies powered by cloud-native infrastructure.

    Banks can modernize their risk management practices by moving to the cloud and adopting a unified approach to data analytics with Databricks. In addition, with the power of Databricks, banks can take back control of their data (consider data as an asset, not a cost) and enrich the view they have on the modern economy through the use of alternative data in order to move towards a forward looking and a more agile approach to risk management and investment decisions.

    Modernizing Your Approach to Risk Management: Next Steps

If you want to learn how unified data analytics can bring data science, business analytics and engineering together to accelerate your data and ML efforts, check out the on-demand workshop — Unifying Data Pipelines, Business Analytics and Machine Learning with Apache Spark™.

And if you are ready to accelerate the modernization of your risk management practices, try the VaR and risk management notebooks below:

    https://www.databricks.com/notebooks/00_context.html
    https://www.databricks.com/notebooks/01_market_etl.html
    https://www.databricks.com/notebooks/02_model.html
    https://www.databricks.com/notebooks/03_monte_carlo.html
    https://www.databricks.com/notebooks/04_var_aggregation.html
    https://www.databricks.com/notebooks/05_alt_data.html
    https://www.databricks.com/notebooks/06_backtesting.html

    Learn more about how we assist customers with market risk use cases.
