Teckel is a framework designed to simplify the creation of Apache Spark ETL (Extract, Transform, Load) processes using YAML configuration files. This tool aims to standardize and streamline ETL workflow creation by enabling the definition of data transformations in a declarative, user-friendly format without writing extensive code.
This concept is further developed on my blog: Big Data with Zero Code
- Declarative ETL Configuration: Define your ETL processes with simple YAML files.
- Support for Multiple Data Sources: Easily integrate inputs in CSV, JSON, and Parquet formats.
- Flexible Transformations: Perform joins, aggregations, and selections with clear syntax.
- Spark Compatibility: Leverage the power of Apache Spark for large-scale data processing.
- Apache Spark: Ensure you have Apache Spark installed and properly configured.
- YAML files: Create configuration files specifying your data sources and transformations.
To use Teckel, you can clone the repository and integrate it into your Spark setup:

```shell
git clone https://github.com/rafafrdz/teckel.git
cd teckel
```
TODO: Add instructions for building the project and integrating it into your Spark setup.
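Until official build instructions are documented, here is a plausible sketch. It assumes a standard sbt build with the sbt-assembly plugin; the exact tasks and artifact path are assumptions, not verified against the repository:

```shell
# Assumed build steps for a Scala/Spark project (verify against the repo).
# Compile the sources and run the test suite.
sbt compile test

# Package a fat JAR suitable for spark-submit. This assumes the
# sbt-assembly plugin is configured; the output location is illustrative.
sbt assembly
```

The resulting assembly JAR would typically land under `target/scala-<version>/`, ready to be passed to `spark-submit`.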
Once Teckel is installed, you can use it to run ETL processes defined in your YAML configuration files.
TODO: Add instructions for running ETL processes using Teckel.
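Until official run instructions are added, one plausible pattern is to submit the assembled JAR to Spark along with your YAML definition. The class name and `--file` flag below are hypothetical placeholders, not Teckel's actual CLI:

```shell
# Hypothetical invocation: submit the Teckel fat JAR to a local Spark
# instance, pointing it at a YAML ETL definition. The entry-point class
# and the --file flag are placeholders; check the repository for the
# real command-line interface.
spark-submit \
  --master "local[*]" \
  --class com.example.TeckelMain \
  teckel-assembly.jar \
  --file etl.yaml
```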
Here's an example of a fully defined ETL configuration using a YAML file:
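A configuration generally declares inputs, transformations, and outputs. The sketch below is illustrative only: the field names and structure are assumptions about the format, not taken from Teckel's documentation.

```yaml
# Illustrative ETL definition (schema is assumed, not verified).
input:
  - name: sales              # logical name referenced by later steps
    format: csv
    path: data/sales.csv
    options:
      header: true

transformation:
  - name: totals
    sources: [sales]
    # e.g. an aggregation: total amount per region
    group:
      by: [region]
      agg:
        total_amount: sum(amount)

output:
  - name: totals
    format: parquet
    mode: overwrite
    path: data/out/totals
```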
Contributions to Teckel are welcome. If you'd like to contribute, please fork the repository and create a pull request with your changes.
Teckel is available under the MIT License. See the LICENSE file for more details.
If you have any questions regarding the license, feel free to contact Rafael Fernandez.
For any issues or questions, feel free to open an issue on the GitHub repository.