
CUSTOMER STORY

Forecasting Narrative Risk for Government and Enterprise

10M+

Social media messages processed daily for threat detection

~2 weeks

To build the conversational AI agent from concept to prototype with Databricks

~1 month

To scale, secure and embed the conversational AI agent into Logically’s production platform


The explosion of digital information brings both opportunity and risk. With this in mind, Logically provides narrative intelligence and predictive insight to help clients understand, anticipate and act on emerging information risks, from coordinated influence to reputational threats. Logically also wanted to make their intelligence platform accessible to non-technical users. Facing user accessibility barriers, massive data volumes and fragmented tooling, they turned to the Databricks Data Intelligence Platform to unify their disparate systems into one environment. The move allowed them to go from concept to a working AI agent prototype in under two weeks and into production within about a month, accelerating the delivery of critical intelligence so clients could act on emerging threats before they escalated.

Striving to make harm detection faster and more accessible

Can you make the global flow of information as measurable and predictable as any other critical system? That’s what Logically is doing with the help of Databricks. Social media has become one of the fastest ways to share information (and run influence operations) at a global scale. Harmful narratives, coordinated harassment and foreign influence campaigns erode public trust, incite violence and destabilize institutions. Logically exists at the intersection of threat and social intelligence to help organizations identify emerging risks, protecting not just reputations, but public safety and confidence.

Logically’s platform was originally built for open source intelligence (OSINT) analysts, who specialize in working with massive volumes of public data. To find the intelligence they needed, these analysts employed complex Boolean queries, advanced filters and manual data exploration techniques to identify boycott and misinformation campaigns, false narratives, hate speech and the targeted harassment of public officials and executives. They then flagged and analyzed malicious or false content to determine its source, track its spread across platforms and equip corporate teams to respond before it spread further. By paying close attention to these online narratives and their transition to offline violence, teams using Logically helped ensure the safety of officials, safeguard democratic elections and protect the public from organized domestic extremism.

While this approach proved highly effective for specialist teams, Logically recognized that the same intelligence could be valuable beyond OSINT analysts. The company set out to broaden access to their platform, making powerful social media monitoring and analysis tools available to non-technical users, including policy advisors, communications teams, campaign staff and corporate leaders. “We wanted to put the same intelligence our analysts rely on directly into the hands of decision makers who needed the same timely, data-backed insights but didn’t have the expertise to build complex queries,” explained Guillem Garcia, Head of Data Science at Logically. “In removing these technical barriers, we could make it possible for anyone with the right permissions to get clear, actionable insights in minutes through a conversational AI, instead of waiting hours or days for an engineer.”

Breaking through technical and operational bottlenecks

Before Logically could open their platform to non-technical users and move into the next phase of their business, they had to confront existing challenges around user accessibility and adoption. Most potential users lacked the technical expertise to extract important insights independently, so analysts became bottlenecks for many clients, managing requests from non-technical teams and slowing the delivery of needed materials. These barriers were compounded by data volume and processing limitations: The platform needed to process more than 10 million social media messages per day, totaling several terabytes per ETL run. Their proprietary solution built on Elasticsearch was expensive, slow and difficult to scale, while existing ETL pipelines struggled to handle high-volume, multi-source ingestion without frequent engineering intervention.

Further complicating matters, ingestion, storage, retrieval and AI model execution existed in separate environments and required manual connectors between systems. This slowed development, increased points of failure and left governance and access control fragmented, making it harder to manage all assets securely and consistently. By investing in the Databricks Data Intelligence Platform, Logically gained the ability to process massive datasets quickly and make intelligence accessible to technical and non-technical users alike.

Unifying data and AI processes to build a conversational AI agent

To address these operational and accessibility barriers, Logically adopted the Databricks Data Intelligence Platform to create a unified environment for data ingestion, storage, governance and AI development. To empower non-technical users with natural language querying and AI-curated insights, Logically started with the platform’s ETL orchestration capability, Lakeflow Jobs. With it, they could automate the orchestration of ETL pipelines, replacing the manual, error-prone ingestion processes that required frequent engineering intervention and eliminating the need for fragile manual connectors between ingestion, storage and AI systems.
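As a rough illustration of what that orchestration can look like, the sketch below uses the Databricks SDK for Python to define a two-task ingestion-and-transform job with a dependency and an hourly schedule. The job name, notebook paths and schedule are illustrative assumptions, not Logically’s actual configuration.

```python
# Minimal sketch: defining a scheduled ingest-then-transform job with the
# Databricks SDK for Python. Job name, notebook paths and schedule are
# illustrative placeholders. Compute is omitted; in practice serverless
# compute or an explicit cluster spec would be configured per task.
from databricks.sdk import WorkspaceClient
from databricks.sdk.service import jobs

w = WorkspaceClient()  # assumes workspace authentication is already configured

job = w.jobs.create(
    name="social-media-ingestion",  # hypothetical job name
    tasks=[
        jobs.Task(
            task_key="ingest",
            notebook_task=jobs.NotebookTask(notebook_path="/Pipelines/ingest_raw"),
        ),
        jobs.Task(
            task_key="transform",
            depends_on=[jobs.TaskDependency(task_key="ingest")],
            notebook_task=jobs.NotebookTask(notebook_path="/Pipelines/clean_and_load"),
        ),
    ],
    schedule=jobs.CronSchedule(
        quartz_cron_expression="0 0 * * * ?",  # run at the top of every hour
        timezone_id="UTC",
    ),
)
print(f"Created job {job.job_id}")
```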

Delta Lake served as the foundation for these ETL workflows, allowing Logically to store, process and query multi-terabyte daily ingestions from more than 10 million social media messages with high reliability and low latency. Once the data was cleaned, structured and stored in Delta Lake, the team leveraged materialized views (pre-computed results of frequent, complex queries) to pull insights instantly without rebuilding those queries each time. This optimization provided the speed and efficiency needed to support the Mosaic AI Agent Framework, a fully integrated environment for building, orchestrating and deploying agentic AI capable of multi-step reasoning, dynamic tool use and contextual memory.
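The pattern is roughly the one sketched below: land cleaned messages in a Delta table, then pre-compute a frequently requested rollup as a materialized view so downstream consumers read results instantly. Catalog, table and column names are illustrative assumptions rather than Logically’s actual schema, and the snippet assumes a Databricks notebook where `spark` is predefined.

```python
# Minimal sketch of the storage-and-precompute pattern: cleaned messages go
# into a Delta table, and a recurring aggregation is exposed as a
# materialized view. All names below are hypothetical.
from pyspark.sql import functions as F

cleaned = (
    spark.read.table("intel.raw.social_messages")      # hypothetical raw table
    .withColumn("ingest_date", F.to_date("created_at"))
    .dropDuplicates(["message_id"])
)

(
    cleaned.write.format("delta")
    .mode("append")
    .saveAsTable("intel.curated.messages")              # hypothetical Delta table
)

# Pre-compute a daily narrative-volume rollup so consumers read it instantly
# instead of re-aggregating terabytes per query. Materialized views require
# compatible (e.g., serverless SQL) compute.
spark.sql("""
    CREATE OR REPLACE MATERIALIZED VIEW intel.curated.daily_narrative_volume AS
    SELECT ingest_date, narrative_id, COUNT(*) AS message_count
    FROM intel.curated.messages
    GROUP BY ingest_date, narrative_id
""")
```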

With Vector Search, the conversational agent performed semantic search (identifying the intent and meaning behind a query) at scale, retrieving nuanced, contextually relevant information from vast datasets without relying on keyword matching or complex Boolean queries. Combining their proprietary models and Google Gemini’s reasoning capabilities with Databricks’ unified platform, Logically created a conversational chatbot that answered direct user questions and continuously learned from feedback and evolving data. LangGraph managed the agent’s reasoning loops and tool orchestration, breaking complex user requests into structured steps, retrieving the right data and producing actionable outputs. MLflow handled the end-to-end ML lifecycle to support continuous refinement, tracking experiments, managing model versions and streamlining deployment pipelines. After the models were validated, they were deployed through Model Serving via AI Gateway to deliver safeguarded inference, meaning everything from raw data to real-time conversations lived in one governed environment.
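A minimal sketch of that retrieve-then-answer loop is shown below, using Databricks Vector Search for semantic retrieval and LangGraph for the agent’s control flow. The endpoint name, index name and placeholder generation step are illustrative assumptions, not Logically’s production agent.

```python
# Sketch of a two-node agent: semantic retrieval from a Vector Search index,
# then an answer step. Names are hypothetical; the LLM call is stubbed out.
from typing import List, TypedDict

from databricks.vector_search.client import VectorSearchClient
from langgraph.graph import END, StateGraph


class AgentState(TypedDict):
    question: str
    documents: List[str]
    answer: str


vsc = VectorSearchClient()
index = vsc.get_index(
    endpoint_name="narrative-intel-endpoint",           # hypothetical endpoint
    index_name="intel.curated.messages_vector_index",   # hypothetical index
)


def retrieve(state: AgentState) -> dict:
    # Semantic search: match on intent and meaning, not keywords or Booleans.
    hits = index.similarity_search(
        query_text=state["question"],
        columns=["message_text"],
        num_results=5,
    )
    docs = [row[0] for row in hits["result"]["data_array"]]
    return {"documents": docs}


def generate(state: AgentState) -> dict:
    # Placeholder for the LLM call (e.g., a Gemini or proprietary model served
    # behind Model Serving); here we only stitch the retrieved context together.
    context = "\n".join(state["documents"])
    return {"answer": f"Based on {len(state['documents'])} messages:\n{context[:500]}"}


graph = StateGraph(AgentState)
graph.add_node("retrieve", retrieve)
graph.add_node("generate", generate)
graph.set_entry_point("retrieve")
graph.add_edge("retrieve", "generate")
graph.add_edge("generate", END)
agent = graph.compile()

result = agent.invoke({"question": "What narratives are targeting the election this week?"})
print(result["answer"])
```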

Maintaining security, compliance and trust in every AI-driven output also calls for centralized data governance, which Logically found in Unity Catalog. The tool applied fine-grained permissions across data assets, models and vector indices. “Because governance lived in the same Databricks environment that handled our ingestion, processing and AI execution, we knew every output was secure, accurate and ready to act on immediately. That level of trust remained critical to us, since we regularly worked with government agencies and enterprises, where confidentiality was paramount,” said Guillem.
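In practice, that kind of fine-grained control can be expressed as Unity Catalog GRANT statements like the hypothetical ones below, which scope which groups can discover and read the curated intelligence data. Catalog, schema, table and group names are illustrative assumptions, and the snippet assumes a Databricks notebook where `spark` is predefined.

```python
# Sketch of Unity Catalog permissions: analysts read the whole curated schema,
# while non-technical users see only the pre-computed rollup. All names are
# hypothetical.

# OSINT analysts can discover and read everything in the curated schema.
spark.sql("GRANT USE CATALOG ON CATALOG intel TO `osint_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA intel.curated TO `osint_analysts`")
spark.sql("GRANT SELECT ON SCHEMA intel.curated TO `osint_analysts`")

# Policy advisors only read the aggregated view the agent surfaces.
spark.sql("GRANT USE CATALOG ON CATALOG intel TO `policy_advisors`")
spark.sql("GRANT USE SCHEMA ON SCHEMA intel.curated TO `policy_advisors`")
spark.sql("GRANT SELECT ON TABLE intel.curated.daily_narrative_volume TO `policy_advisors`")
```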

Transforming millions of social posts into safety and security

Logically achieved their goal of expanding platform accessibility from technical OSINT analysts to non-technical users, growing their total addressable market. This shift not only removed analyst bottlenecks but also streamlined high-volume, multi-source ingestion and analysis to enable faster threat detection across sectors. With Databricks, they could handle data volumes and processing demands at a scale the company had never reached before, processing over 10 million social media posts daily, or several terabytes of data per ETL run.

The team built the initial AI agent in under two weeks — down from at least one and a half months — and completed full integration in about a month, accelerating time to market for new critical AI features. Performance improved with pre-computed materialized views, significantly reducing the AI agent’s query latency for faster, more responsive intelligence delivery.

“We’re just scratching the surface of what our conversational, agentic-driven platform can do,” concluded Guillem. “Next, we want to tailor the agent for specific industries and bring it even closer to real-time dashboards, making it more interactive. The goal is to get even better at spotting risks early and give our clients more time to respond.” With Databricks’ scalable architecture, Logically has expanded its platform to serve new personas across marketing, finance and other non-technical teams, unlocking additional business value and revenue streams. Now, they’re positioned to continue rolling out AI capabilities, maintaining compliance in highly regulated environments and meeting the growing demands for accessible narrative intelligence.