Most traditional data warehouse or data mart ETL routines consist of multi-stage SQL transformations, often a series of CTAS (CREATE TABLE AS SELECT) statements that create transient or temporary objects – such as volatile tables in Teradata or Common Table Expressions (CTEs).
The initial challenge when moving from a SQL/MPP-based ETL framework built on Oracle, Teradata, SQL Server, and the like to a Spark-based ETL framework is how to reproduce this pattern.
One approach is to use the lightweight, configuration-driven, multi-stage Spark SQL-based ETL framework described in this post.
This framework is driven by a YAML configuration document. YAML was preferred over JSON as a document format because it allows multi-line values (such as SQL statements) as well as comments – which are very useful, as SQL can sometimes be undecipherable even to the person who wrote it.
The YAML config document has three main sections: sources, transforms and targets.
Sources
The sources section is used to configure the input data source(s), including optional column and row filters. In this case the data sources are tables available in the Spark catalog (for instance the AWS Glue Catalog or a Hive Metastore); this could easily be extended to read from other data sources using the Spark DataFrameReader API.
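As an illustration, a sources entry could look something like the following – note that the key names used here (view, table, columns, filter) are assumptions for the sake of the example rather than the framework's actual schema:

sources:
  - view: sv_customers                            # temporary view name exposed to the transforms
    table: sales_db.customers                     # table resolved via the Spark catalog
    columns: [customer_id, region, created_date]  # optional column filter
    filter: created_date >= '2020-01-01'          # optional row filter
  - view: sv_orders
    table: sales_db.orders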
Transforms
The transforms section contains multiple SQL statements that are run in sequence, where each statement creates a temporary view using objects created by preceding statements.
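A hypothetical transforms section might look like this – again the key names are illustrative, but it shows how YAML block scalars and comments make multi-line SQL easy to embed, and how each statement can build on the views created before it:

transforms:
  - view: iv_customer_orders        # intermediate view built from the source views
    sql: |
      SELECT c.customer_id, c.region, o.order_id, o.amount
      FROM sv_customers c
      JOIN sv_orders o
        ON o.customer_id = c.customer_id
  - view: fv_region_totals          # final view built from the intermediate view
    sql: |
      -- comments within the SQL itself are preserved as well
      SELECT region, SUM(amount) AS total_amount
      FROM iv_customer_orders
      GROUP BY region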
Targets
Finally, the targets section writes out the final object or objects to a specified destination (S3, HDFS, etc.).
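A targets entry could then be as simple as the following sketch (key names again assumed for illustration only):

targets:
  - view: fv_region_totals
    format: parquet
    path: s3://my-bucket/warehouse/region_totals/
    mode: overwrite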
Process SQL Statements
The process_sql_statements.py script that is used to execute the framework is very simple (30 lines of code, not including comments, etc.). It loads the sources into Spark DataFrames and creates temporary views to reference these datasets in the transforms section, then sequentially executes the SQL statements in the list of transforms. Lastly, the script writes out the final view or views to the desired destination – in this case Parquet files stored in S3 were used as the target.
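The actual script is in the repository linked at the end of this post; purely as a sketch of the flow described above (using the same assumed config keys as the examples earlier, not the framework's real schema), it could be implemented roughly like this:

import sys
import yaml
from pyspark.sql import SparkSession

# Load the YAML configuration passed on the command line (e.g. config.yml)
with open(sys.argv[1]) as f:
    config = yaml.safe_load(f)

spark = SparkSession.builder.appName("sql_etl").enableHiveSupport().getOrCreate()

# sources: read each catalog table (with optional filters) and expose it as a temp view
for source in config["sources"]:
    df = spark.table(source["table"])
    if "columns" in source:
        df = df.select(*source["columns"])      # optional column filter
    if "filter" in source:
        df = df.filter(source["filter"])        # optional row filter
    df.createOrReplaceTempView(source["view"])

# transforms: run each SQL statement in order, registering the result as a temp view
# so that subsequent statements can reference it
for transform in config["transforms"]:
    spark.sql(transform["sql"]).createOrReplaceTempView(transform["view"])

# targets: write each final view out to its destination, e.g. Parquet on S3
for target in config["targets"]:
    spark.table(target["view"]).write.mode(target.get("mode", "overwrite")).parquet(target["path"])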
You could implement an object naming convention such as prefixing object names with sv_, iv_ and fv_ (for source view, intermediate view and final view respectively) if this helps you differentiate between the different objects.
To use this framework you would simply use spark-submit as follows:
spark-submit process_sql_statements.py config.yml
Full source code can be found at: https://github.com/avensolutions/spark-sql-etl-framework