Kedro deployment — 5 min read

Seven steps to deploy Kedro pipelines on Amazon EMR

Amazon EMR works with open-source big data frameworks like Apache Spark to help you tackle vast amounts of data. This post explains how to combine Amazon EMR, Kedro, and Apache Spark.

10 May 2023 (last updated 11 Jul 2023)

This post explains how to launch an Amazon EMR cluster and deploy a Kedro project to run a Spark job.

Amazon EMR (previously called Amazon Elastic MapReduce) is a managed cluster platform for applications built using open source big data frameworks, such as Apache Spark, that process and analyze vast amounts of data with AWS.

What is Kedro?

Kedro is an open-source Python toolbox that applies software engineering principles to data science code. It makes it easier for a team to collaborate on data science projects and reduces the time spent rewriting experiments so that they are fit for production.

Kedro was born at QuantumBlack to solve the challenges faced regularly in data science projects and promote teamwork through standardised team workflows. It is now hosted by the LF AI & Data Foundation as an incubating project.

1. Set up the Amazon EMR cluster

One way to install Python libraries onto Amazon EMR is to package a virtual environment and deploy it. For this to work, the virtual environment must be built in the same Amazon Linux 2 environment that Amazon EMR uses.

We used the following Dockerfile to package our dependencies on an Amazon Linux 2 base:

FROM --platform=linux/amd64 amazonlinux:2 AS base

RUN yum install -y python3

ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

COPY requirements.txt /tmp/requirements.txt

RUN python3 -m pip install --upgrade pip && \
    python3 -m pip install venv-pack==0.2.0 && \
    python3 -m pip install -r /tmp/requirements.txt

RUN mkdir /output && venv-pack -o /output/pyspark_deps.tar.gz

FROM scratch AS export
COPY --from=base /output/pyspark_deps.tar.gz /

Note: Docker BuildKit is required to build this Dockerfile, so make sure it is enabled in your Docker installation.

Build the Dockerfile using the following command:

DOCKER_BUILDKIT=1 docker build --output <output-path> .

This will generate a pyspark_deps.tar.gz file at the <output-path> specified in the command above. 

Use this command if your Dockerfile has a different name:

DOCKER_BUILDKIT=1 docker build -f Dockerfile-emr-venv --output <output-path> .
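
If you want to confirm the packed environment before moving on, you can list the archive's contents. This is an optional check, not part of the deployment itself, and it assumes the build output landed at <output-path> as above:

# The packed environment should contain bin/python, which step 7
# later references as environment/bin/python.
tar -tzf <output-path>/pyspark_deps.tar.gz | grep 'bin/python'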

2. Set up CONF_ROOT

The kedro package command packs only the source code, yet the conf directory is essential for running any Kedro project. To make it available to Kedro separately, its location can be controlled by setting CONF_ROOT.

By default, Kedro looks in the conf folder at the project root for all of its configuration (catalog, parameters, globals, credentials, logging) to run the pipelines, but this can be customised by changing CONF_ROOT in settings.py.

For Kedro versions < 0.18.5

  • Change CONF_ROOT in settings.py to the location where the conf directory will be deployed. This can be any path, e.g. ./conf or /mnt1/kedro/conf

For Kedro versions >= 0.18.5

  • Use the --conf-source CLI parameter directly with kedro run to specify the path; CONF_ROOT does not need to be changed in settings.py (see the example after this list).
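
For reference, here is a minimal sketch of the two options; the pipeline name and paths are placeholders for your own values:

# Kedro >= 0.18.5: pass the configuration location at run time;
# no change to settings.py is needed. The path "conf" assumes the
# conf archive is unpacked into a folder of that name (see step 7).
kedro run --conf-source=conf --pipeline my_new_pipeline

# Kedro < 0.18.5: there is no --conf-source flag; instead set CONF_ROOT
# in settings.py to the deployed location (e.g. "./conf" or "/mnt1/kedro/conf").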

3. Package the Kedro project

Package the project using the kedro package command from the root of your project folder. This creates a .whl file in the dist folder, which is passed to spark-submit via --py-files so that the cluster can find the source code.
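
For example, a packaging run might look like this; the wheel file name depends on your project's name and version, so treat it as illustrative:

cd <project-root>
kedro package
ls dist/
# e.g. my_project-0.1-py3-none-any.whl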

4. Create .tar for conf

As described above, the kedro package command packs only the source code, while the conf directory is essential for running any Kedro project, so conf needs to be deployed separately as a tar.gz file. It is important to note that the contents of the conf folder must be archived, not the conf folder itself.

Use the following command to archive the contents of the conf directory into a conf.tar.gz file containing catalog.yml, parameters.yml and the other files needed to run the Kedro pipeline. This archive is passed to spark-submit with the --archives option, which unpacks its contents into a conf directory.

tar -czvf conf.tar.gz --exclude="local" -C conf .
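
To double-check that the archive has the expected layout, you can list its contents; this is an optional sanity check:

# Entries should be the contents of conf (e.g. ./base/catalog.yml),
# not a top-level conf/ directory.
tar -tzf conf.tar.gz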

5. Create an entrypoint for the Spark application

Create an entrypoint.py file that the Spark application will use to start the job. The file forwards command-line arguments to Kedro's main(); the commented-out params list inside it can be used instead when testing pipelines locally. For example:

python entrypoint.py --pipeline my_new_pipeline --params run_date:2023-02-05,runtime:cloud

This mimics the behaviour of kedro run exactly.

import sys
from proj_name.__main__ import main

if __name__ == "__main__":
    """
    These params could be used as *args to
    test pipelines locally. The example below
    will run `my_new_pipeline` using `ThreadRunner`,
    applying a set of params:
    params = [
        "--pipeline",
        "my_new_pipeline",
        "--runner",
        "ThreadRunner",
        "--params",
        "run_date:2023-02-05,runtime:cloud",
    ]
    main(params)
    """

    main(sys.argv[1:])

6. Upload relevant files to S3

Upload the relevant files to an S3 bucket that the Amazon EMR cluster can access, so that they are available to the Spark job. The following artifacts should be uploaded to S3 (example upload commands follow the list):

  • .whl file created in step 3

  • virtual environment tar.gz file created in step 1 (e.g. pyspark_deps.tar.gz)

  • tar.gz file for the conf folder created in step 4 (e.g. conf.tar.gz)

  • entrypoint.py file created in step 5
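
For example, the uploads might look like this with the AWS CLI; the bucket name, output path and wheel file name are placeholders for your own values:

aws s3 cp dist/<whl-file>.whl s3://<your-bucket>/
aws s3 cp <output-path>/pyspark_deps.tar.gz s3://<your-bucket>/
aws s3 cp conf.tar.gz s3://<your-bucket>/
aws s3 cp entrypoint.py s3://<your-bucket>/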

7. spark-submit to the Amazon EMR cluster

Use the following spark-submit command as a step on Amazon EMR running in cluster mode. A few points to note:

  • pyspark_deps.tar.gz is unpacked into a folder named environment

  • Environment variables point to the libraries unpacked into the environment directory, e.g. PYSPARK_PYTHON=environment/bin/python

  • The conf archive is unpacked into the folder named after the # symbol (s3://{S3_BUCKET}/conf.tar.gz#conf)

Note the following:

  • Kedro versions < 0.18.5: the folder name after the # symbol should match CONF_ROOT in settings.py.

  • Kedro versions >= 0.18.5: you can follow the same approach, but Kedro now allows the configuration location to be passed on the CLI with --conf-source instead of setting CONF_ROOT in settings.py. In that case --conf-source can be specified directly in the run arguments and step 2 can be skipped completely.

spark-submit \
    --deploy-mode cluster \
    --master yarn \
    --conf spark.submit.pyFiles=s3://{S3_BUCKET}/<whl-file>.whl \
    --archives=s3://{S3_BUCKET}/pyspark_deps.tar.gz#environment,s3://{S3_BUCKET}/conf.tar.gz#conf \
    --conf spark.yarn.appMasterEnv.PYSPARK_PYTHON=environment/bin/python \
    --conf spark.executorEnv.PYSPARK_PYTHON=environment/bin/python \
    --conf spark.yarn.appMasterEnv.<env-var-here>={ENV} \
    --conf spark.executorEnv.<env-var-here>={ENV} \
    s3://{S3_BUCKET}/entrypoint.py --env base --pipeline my_new_pipeline --params run_date:2023-03-07,runtime:cloud

Summary

This post describes the sequence of steps needed to deploy a Kedro project to an Amazon EMR cluster.

  1. Set up the Amazon EMR cluster

  2. Set up CONF_ROOT (optional for Kedro versions >= 0.18.5)

  3. Package the Kedro project

  4. Create .tar for conf

  5. Create an entrypoint for the Spark application

  6. Upload relevant files to S3

  7. spark-submit to the Amazon EMR cluster

Kedro supports a range of deployment targets including Amazon SageMaker, Databricks, Vertex AI and Azure ML, and our documentation additionally includes a range of approaches for single-machine deployment to a production server.

Find out more about Kedro

There are many ways to learn more about Kedro: start with the documentation at docs.kedro.org, or explore the project on GitHub.


Afaque Ahmad
Senior Data Engineer, QuantumBlack
