Databricks — 5 min read

How do data scientists combine Kedro and Databricks?

In recent research, we learned that Databricks is the dominant machine-learning platform used by Kedro users. We investigated our users' data science workflows to identify and prioritise ways to make Kedro usage on Databricks more seamless, and we are collaborating with Databricks on our findings.

14 Apr 2023 (last updated 10 May 2023)

In recent research, we found that Databricks is the dominant machine-learning platform used by Kedro users.

The purpose of the research was to identify any barriers to using Kedro with Databricks, and we are collaborating with the Databricks team to create a prioritised list of opportunities to facilitate integration. For example, Kedro is best used with an IDE, but IDE support on Databricks is still evolving, so we are keen to understand the pain points that Kedro users face when combining the two.

Our research drew on qualitative data from 16 interviews, and quantitative data from a poll (140 participants) and a survey (46 participants), covering both the McKinsey and open-source Kedro user bases. We analysed two user journeys.

How to ensure a Kedro pipeline is available in a Databricks workspace

The first user journey we considered is how a user ensures the latest version of their pipeline codebase is available within the Databricks workspace. The most common workflow is to use Git, but almost a third of the users in our research said it involves a lot of steps. The alternative workflow, using dbx sync to synchronise code into a Databricks Repo, was used by fewer than 10% of the users we researched, which suggests awareness of this option is low.

[Image: How to ensure the latest version of a Kedro pipeline runs on Databricks]

How to run Kedro pipelines using a Databricks cluster

The second user journey is how users run Kedro pipelines using a Databricks cluster. The most popular method, used by over 80% of participants in our research, is to use a Databricks notebook, which serves as an entry point to run Kedro pipelines. We discovered that many users were unaware of the IPython extension that significantly reduces the amount of code required to run Kedro pipelines in Databricks notebooks.
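
To illustrate how little code the extension requires, here is a minimal sketch of a Databricks notebook cell; the project path is a hypothetical placeholder and assumes the repository has already been cloned into the workspace.

```python
# A minimal sketch, assuming the Kedro project has been cloned into the
# Databricks workspace (the path below is a hypothetical placeholder).
%load_ext kedro.ipython
%reload_kedro /Workspace/Repos/<your_user>/your-kedro-project

# The extension exposes `catalog`, `context`, `pipelines` and `session`,
# so running the default pipeline becomes a single call:
session.run()
```

Without the extension, users need to bootstrap the project and create a KedroSession by hand inside the notebook, which is exactly the boilerplate the extension removes.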

We also found that some users run their Kedro pipelines by packaging them and running the resulting Python package on Databricks. However, Kedro did not support packaging of configuration until version 0.18.5, which caused problems for this workflow. The final option some users select is Databricks Connect, but this is not recommended since Databricks plans to sunset it soon.
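
For the packaged-project route, the entry point on Databricks can be as small as the sketch below; the package name is hypothetical and the wheel produced by kedro package is assumed to be installed on the cluster already.

```python
# A minimal sketch, assuming `kedro package` produced a wheel for a project
# named `my_project` (hypothetical name) and the wheel is installed on the
# Databricks cluster.
from my_project.__main__ import main

# Runs the project's default pipeline. CLI-style options for `kedro run`,
# such as `--conf-source` pointing at the packaged configuration (supported
# from Kedro 0.18.5 onwards), can also be passed in as a list of strings.
main()
```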

[Image: How to run a Kedro pipeline on a Databricks cluster]

Watch the discussion about Kedro and Databricks from a recent Kedro update meeting

The output of our research

To make it easier to pair Kedro and Databricks, we are updating Kedro’s documentation to cover the latest Databricks features and tools, particularly the development and deployment workflows for Kedro on Databricks with dbx. The goal is to help Kedro users take advantage of the benefits of working locally in an IDE while still deploying to Databricks with ease.

You can expect this new documentation to be released in the next one to two weeks.

We will also be creating a Kedro Databricks plugin or starter project template to automate the recommended steps in the documentation.

Coming soon…

We have added a managed Delta table dataset to our Kedro datasets repo, and it will be available for public consumption soon. We are also planning to support managed MLflow on Databricks.

We have set up a milestone on GitHub so you can check in on our progress and contribute if you want to. To suggest features to us, report bugs, or just see what we’re working on right now, visit the Kedro projects on GitHub. We welcome every contribution, large or small.

Find out more about Kedro

There are many ways to learn more about Kedro:



Jo Stichbury
Technical Writer, QuantumBlack
