Kedro deployment — 5 min read

Seven steps to deploy Kedro pipelines on Amazon EMR

Amazon EMR works with open-source big data frameworks like Apache Spark to help you tackle vast amounts of data. This post explains how to combine Amazon EMR, Kedro, and Apache Spark.

10 May 2023 (last updated 11 May 2023)

This post explains how to launch an Amazon EMR cluster and deploy a Kedro project to run a Spark job.

Amazon EMR (previously called Amazon Elastic MapReduce) is a managed cluster platform for applications built using open source big data frameworks, such as Apache Spark, that process and analyze vast amounts of data with AWS.

What is Kedro?

Kedro is an open-source Python toolbox that applies software engineering principles to data science code. It makes it easier for a team to collaborate on data science projects and reduces the time spent rewriting experiments so that they are fit for production.

Kedro was born at QuantumBlack to solve the challenges faced regularly in data science projects and promote teamwork through standardised team workflows. It is now hosted by the LF AI & Data Foundation as an incubating project.

1. Set up the Amazon EMR cluster

One way to install Python libraries onto Amazon EMR is to package a virtual environment and deploy it. For this to work, the virtual environment must be built on the same Amazon Linux 2 base that Amazon EMR runs on.

We used the following example Dockerfile to package our dependencies on an Amazon Linux 2 base:

FROM --platform=linux/amd64 amazonlinux:2 AS base

RUN yum install -y python3

ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
# Put the virtual environment on PATH so pip and venv-pack operate on it
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

COPY requirements.txt /tmp/requirements.txt

RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install venv-pack==0.2.0 && \
    python3 -m pip install -r /tmp/requirements.txt

RUN mkdir /output && venv-pack -o /output/pyspark_deps.tar.gz

FROM scratch AS export
COPY --from=base /output/pyspark_deps.tar.gz /

Note: Docker BuildKit is required to build this Dockerfile, so make sure it is enabled.

Run the Dockerfile using the following command:

DOCKER_BUILDKIT=1 docker build --output <output-path> .

This will generate a pyspark_deps.tar.gz file at the <output-path> specified in the command above. 

Use this command if your Dockerfile has a different name:

DOCKER_BUILDKIT=1 docker build -f Dockerfile-emr-venv --output <output-path> .
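
Optionally, you can sanity-check the packaged environment by listing the contents of the archive, for example:

tar -tzf <output-path>/pyspark_deps.tar.gz | head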

2. Set up CONF_ROOT

The kedro package command only packages the source code, yet the conf directory is essential for running any Kedro project. To make it available to Kedro separately, its location can be controlled by setting CONF_ROOT.

By default, Kedro looks at the root conf folder for all of its configuration (catalog, parameters, globals, credentials, logging) to run the pipelines, but this can be customised by changing CONF_ROOT in settings.py.

For Kedro versions < 0.18.5

  • Change CONF_ROOT in settings.py to the location where the conf directory will be deployed. It can be any path, e.g. ./conf or /mnt1/kedro/conf

For Kedro versions >= 0.18.5

  • Use the --conf-source CLI parameter directly with kedro run to specify the path; CONF_ROOT does not need to be changed in settings.py. A sketch of both options follows.
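
For illustration, here is a minimal sketch of both options. The path ./conf and the pipeline name are placeholders, and depending on your exact 0.18.x release the setting may be named CONF_SOURCE rather than CONF_ROOT:

# settings.py (Kedro < 0.18.5): point Kedro at the deployed conf directory
CONF_ROOT = "./conf"

# Kedro >= 0.18.5: pass the location at run time instead, for example
# kedro run --conf-source=./conf --pipeline my_new_pipeline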

3. Package the Kedro project

Package the project using the kedro package command from the root of your project folder. This creates a .whl file in the dist folder, which is later passed to spark-submit via --py-files so that the Amazon EMR cluster can find the source code.
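
For example (the project name and version in the generated file name are hypothetical):

kedro package
# creates dist/my_kedro_project-0.1-py3-none-any.whl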

4. Create .tar for conf

As described above, the kedro package command only packages the source code, so the conf directory needs to be deployed separately as a tar.gz file. Note that the contents inside the folder need to be zipped, not the conf folder itself.

Use the following command to zip the contents of the conf directory and generate a conf.tar.gz file containing catalog.yml, parameters.yml and the other files needed to run the Kedro pipeline. It will be passed to spark-submit via the --archives option, which unpacks the contents into a conf directory.

tar -czvf conf.tar.gz --exclude="local" conf/*

5. Create an entrypoint for the Spark application

Create an entrypoint file (named entrypoint.py here for illustration) that the Spark application will use to start the job. Because it passes sys.argv straight through to Kedro, it accepts the same command-line arguments as kedro run:

python entrypoint.py --pipeline my_new_pipeline --params run_date:2023-02-05,runtime:cloud

This mimics the exact kedro run behaviour.

import sys

from proj_name.__main__ import main

if __name__ == "__main__":
	"""
	These params could be used as *args to
	test pipelines locally. The example below
	will run `my_new_pipeline` using `ThreadRunner`,
	applying a set of params:

	params = [
		"--pipeline",
		"my_new_pipeline",
		"--runner",
		"ThreadRunner",
		"--params",
		"run_date:2023-02-05,runtime:cloud",
	]
	main(params)
	"""
	main(sys.argv)

6. Upload relevant files to S3

To run the Spark job, upload the relevant files to an S3 bucket that Amazon EMR can access, for example with the AWS CLI as shown after the list. The following artifacts should be uploaded to S3:

  • .whl file created in step 3

  • virtual environment tar.gz created in step 1 (e.g. pyspark_deps.tar.gz)

  • tar.gz file for the conf folder created in step 4 (e.g. conf.tar.gz)

  • entrypoint.py file created in step 5
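
For example, using the AWS CLI (the bucket name {S3_BUCKET}, the wheel file name and the local paths are placeholders):

aws s3 cp dist/<whl-file>.whl s3://{S3_BUCKET}/
aws s3 cp <output-path>/pyspark_deps.tar.gz s3://{S3_BUCKET}/
aws s3 cp conf.tar.gz s3://{S3_BUCKET}/
aws s3 cp entrypoint.py s3://{S3_BUCKET}/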

7. spark-submit to the Amazon EMR cluster

Use the following spark-submit command as a step on Amazon EMR running in cluster mode. A few points to note:

  • pyspark_deps.tar.gz is unpacked into a folder named environment

  • Environment variables are set to point at the libraries unpacked into the environment directory above, e.g. PYSPARK_PYTHON=environment/bin/python

  • The conf archive is unpacked into the folder named after the # symbol (s3://{S3_BUCKET}/conf.tar.gz#conf)

Note the following:

  • Kedro versions < 0.18.5. The folder location/name after the # symbol should match the CONF_ROOT set in settings.py.


  • Kedro versions >= 0.18.5. You could follow the same approach as above. However, Kedro now allows the configuration location to be passed on the command line with --conf-source instead of setting CONF_ROOT in settings.py, so the path can be specified directly in the CLI parameters and step 2 can be skipped completely.

spark-submit \
    --deploy-mode cluster \
    --master yarn \
    --conf spark.submit.pyFiles=s3://{S3_BUCKET}/<whl-file>.whl \
    --archives=s3://{S3_BUCKET}/pyspark_deps.tar.gz#environment,s3://{S3_BUCKET}/conf.tar.gz#conf \
    --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=environment/bin/python \
    --conf spark.executorEnv.PYSPARK_PYTHON=environment/bin/python \
    --conf spark.yarn.appMasterEnv.<env-var-here>={ENV} \
    --conf spark.executorEnv.<env-var-here>={ENV} \
    s3://{S3_BUCKET}/entrypoint.py --env base --pipeline my_new_pipeline --params run_date:2023-03-07,runtime:cloud
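
To register this command as a step on a running cluster, one option is the AWS CLI's add-steps command. The sketch below assumes a steps.json file whose Args mirror the spark-submit arguments above; the cluster ID, bucket, wheel file name and entrypoint.py are placeholders:

[
  {
    "Type": "Spark",
    "Name": "kedro-pipeline",
    "ActionOnFailure": "CONTINUE",
    "Args": [
      "--deploy-mode", "cluster",
      "--master", "yarn",
      "--conf", "spark.submit.pyFiles=s3://{S3_BUCKET}/<whl-file>.whl",
      "--archives", "s3://{S3_BUCKET}/pyspark_deps.tar.gz#environment,s3://{S3_BUCKET}/conf.tar.gz#conf",
      "--conf", "spark.yarn.appMasterEnv.PYSPARK_PYTHON=environment/bin/python",
      "--conf", "spark.executorEnv.PYSPARK_PYTHON=environment/bin/python",
      "s3://{S3_BUCKET}/entrypoint.py",
      "--env", "base",
      "--pipeline", "my_new_pipeline",
      "--params", "run_date:2023-03-07,runtime:cloud"
    ]
  }
]

aws emr add-steps --cluster-id <cluster-id> --steps file://steps.json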


In summary, this post has described the sequence of steps needed to deploy a Kedro project to an Amazon EMR cluster:

  1. Set up the Amazon EMR cluster

  2. Set up CONF_ROOT (optional for Kedro versions >= 0.18.5)

  3. Package the Kedro project

  4. Create .tar for conf

  5. Create an entrypoint for the Spark application

  6. Upload relevant files to S3

  7. spark-submit to the Amazon EMR cluster

Kedro supports a range of deployment targets including Amazon SageMaker, Databricks, Vertex AI and Azure ML, and our documentation additionally includes a range of approaches for single-machine deployment to a production server.

Find out more about Kedro

There are many ways to learn more about Kedro: explore the documentation, browse the project on GitHub, or join the community on Slack.


Afaque Ahmad
Senior Data Engineer, QuantumBlack
