The Mosaic Research Team's posts

Mosaic Research

December 9, 2025 / 12 min read

Introducing OfficeQA: A Benchmark for End-to-End Grounded Reasoning

Product

November 11, 2025 / 5 min read

PDFs to Production: Announcing state-of-the-art document intelligence on Databricks

Mosaic Research

November 4, 2025 / 9 min read

From Pilot to Production with Custom Judges

Mosaic Research

October 21, 2025 / 12 min read

Fast PEFT Serving at Scale

Mosaic Research

September 24, 2025 / 12 min read

Building State-of-the-Art Enterprise Agents 90x Cheaper with Automated Prompt Optimization

Mosaic Research

August 12, 2025 / 10 min read

Judging with Confidence: Meet PGRM, the Promptable Reward Model

Mosaic Research

August 4, 2025 / 7 min read

Agent Learning from Human Feedback (ALHF): A Databricks Knowledge Assistant Case Study

Mosaic Research

July 30, 2025 / 4 min read

The Power of RLVR: Training a Leading SQL Reasoning Model on Databricks

Mosaic Research

March 25, 2025 / 8 min read

TAO: Using test-time compute to train efficient LLMs without labeled data

Mosaic Research

March 27, 2024 / 14 min read

Introducing DBRX: A New State-of-the-Art Open LLM

Mosaic Research

June 22, 2023 / 11 min read

MPT-30B: Raising the bar for open-source foundation models

Mosaic Research

May 5, 2023 / 18 min read

Introducing MPT-7B: A New Standard for Open-Source, Commercially Usable LLMs