Building Graph-Powered Intelligence on Databricks with Neo4j

Live sessions: April 9 and May 7, 2026
9:00 AM PT / 12:00 PM ET
Graphs are critical for understanding complex relationships, but building a production-ready graph layer on top of a Lakehouse takes more than a one-off script.

Join our three-part webinar series, Building Graph-Powered Intelligence on Databricks with Neo4j, where we walk through a practical, end-to-end approach to using graphs alongside the Databricks Lakehouse.

Register once to join all three webinars in the series, which build directly on each other, taking you from graph foundations to feature engineering and finally to AI-driven applications.

This series is ideal for data engineers and architects who want a practical blueprint they can apply immediately in their own environments.



WEBINAR 1 — The Foundation (Watch On-Demand)

The Foundation: Architecting a Scalable Graph Layer on Databricks

Building a graph from Lakehouse tables requires more than a one-off script. In this session, we show how to architect a scalable graph layer using the Neo4j Spark Connector. Using a financial fraud use case, we demonstrate how to transform Databricks Bronze and Silver tables into a high-performance Neo4j graph model, setting the foundation for advanced analytics, feature engineering, and downstream AI workloads.

What Attendees Will Learn

  • Production ETL: Using the Neo4j Spark Connector to move data at scale from Databricks into Neo4j
  • The Schema Shift: Best practices for transforming flat tables into high-performance graph models
  • Graph vs. SQL: When to query Neo4j with Cypher versus staying in the Lakehouse
  • Operational Patterns: How to run Neo4j alongside Databricks in a production data architecture
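
The ETL pattern above can be sketched with the Neo4j Spark Connector's relationship-write mode. The option names below follow the connector's documented API, but the connection details and the Account/Merchant/PAID fraud schema are illustrative assumptions, not the exact model used in the session:

```python
# Hypothetical sketch: pushing a Silver-layer transactions DataFrame from
# Databricks into Neo4j as (:Account)-[:PAID]->(:Merchant) relationships.
# URL, credentials, and labels are placeholders.

def neo4j_relationship_options(url: str, user: str, password: str,
                               source_labels: str, target_labels: str,
                               rel_type: str) -> dict:
    """Assemble Neo4j Spark Connector options for a relationship write."""
    return {
        "url": url,
        "authentication.basic.username": user,
        "authentication.basic.password": password,
        "relationship": rel_type,
        # "keys" strategy maps DataFrame columns onto node keys/properties
        "relationship.save.strategy": "keys",
        "relationship.source.labels": source_labels,
        "relationship.target.labels": target_labels,
    }

# In a Databricks notebook with the connector installed on the cluster:
# (transactions_df.write
#     .format("org.neo4j.spark.DataSource")
#     .mode("Append")
#     .options(**neo4j_relationship_options(
#         "neo4j://host:7687", "neo4j", "password",
#         ":Account", ":Merchant", "PAID"))
#     .save())
```

The helper keeps connection and schema configuration in one place, so the same write pattern can be reused across Bronze-to-graph jobs.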



WEBINAR 2 — Enrichment

Graph-Augmented Intelligence: Feature Engineering with Neo4j and Databricks

Date: Thursday, April 9, 2026 - 9:00 AM PT / 12:00 PM ET

A graph is only as valuable as the insights it returns to your primary data store. In this session, we close the loop by extracting relationship-driven intelligence from Neo4j and sending it back to Databricks. We explore Neo4j Graph Data Science (GDS) to calculate risk scores and community clusters, showing how these graph-derived features are written back to Databricks Feature Stores to significantly improve machine learning model accuracy.

What Attendees Will Learn

  • Feature Engineering: Running GDS algorithms to identify risk patterns
  • The Bi-directional Loop: Patterns for writing graph-derived scores back into Databricks tables and Feature Stores
  • Quantifying Lift: Measuring the performance of graph-augmented models versus traditional tabular approaches
  • Sync Strategies: Keeping Neo4j and Databricks data aligned without manual intervention
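
The write-back loop described above reduces to a join: per-entity scores computed in Neo4j (for example, via the GDS Python client's `gds.pageRank.stream(...)` or a community-detection call) are merged onto flat feature rows before they land in a Databricks Feature Store table. A minimal sketch of that merge, with illustrative column names:

```python
# Hypothetical sketch: attach graph-derived risk scores to tabular feature
# rows. In practice `graph_scores` would come from Neo4j GDS (e.g. PageRank
# or Louvain results keyed by account id); here it is just a dict.

def merge_graph_features(rows: list[dict], graph_scores: dict) -> list[dict]:
    """Add a risk_score column from graph results; default 0.0 if absent."""
    return [
        {**row, "risk_score": graph_scores.get(row["account_id"], 0.0)}
        for row in rows
    ]
```

In a real pipeline the same merge would be expressed as a Spark join against a scores DataFrame read back from Neo4j, keeping the feature table fully reproducible.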



WEBINAR 3 — The AI Agent

Agentic GraphRAG: Orchestrating Neo4j Context via Databricks and MCP

Date: Thursday, May 7, 2026 - 9:00 AM PT / 12:00 PM ET

In the final session, we move from pipelines to agents. While standard RAG relies primarily on vector similarity, GraphRAG provides the structured context and reasoning depth required for reliable agent behavior. We demonstrate how to build a Databricks Mosaic AI Agent that uses the Model Context Protocol (MCP) to query Neo4j as a trusted source of contextual intelligence. You’ll see how agents traverse graph data to answer complex, multi-hop questions that break traditional RAG systems.

What Attendees Will Learn

  • GraphRAG vs. Vector RAG: Why structured context is key to reducing LLM hallucinations
  • The MCP Advantage: Using MCP for a clean, modular connection between Databricks agents and Neo4j
  • Agentic Orchestration: Managing agent logic in Databricks with Neo4j as long-term relationship memory
  • Reference Architecture: A blueprint for building production-ready, agent-driven applications on the Databricks + Neo4j stack
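
The multi-hop traversal that breaks vector-only RAG is easy to illustrate in Cypher. The sketch below only builds the kind of parameterized query an agent tool might send to Neo4j; the MCP server and Mosaic AI agent wiring are out of scope, and the Account/TRANSFER schema is an assumption for illustration:

```python
# Illustrative only: a multi-hop Cypher query an agent tool could issue
# against Neo4j. Label and relationship names are hypothetical.

def multi_hop_fraud_query(hops: int = 3) -> str:
    """Cypher for accounts reachable within `hops` transfer edges."""
    return (
        f"MATCH (a:Account {{id: $account_id}})"
        f"-[:TRANSFER*1..{hops}]->(b:Account) "
        f"RETURN DISTINCT b.id AS related_account"
    )
```

Because the hop depth is explicit in the query, the agent can reason over relationship structure directly instead of hoping that vector similarity surfaces the right intermediate documents.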

Register

Presenters

Will Jeffery

Sr Solution Architect, Databricks

Ryan Knight

Sr Partner Architect, Neo4j