The latest generation of Pipeline Controller represents a major revision of our Data Lake assets. Key features:
* Apache NiFi engine (vs. Spring Batch)
* Apache Spark for data ingest processing (a minimal sketch follows this list)
* Additional UI module oriented to Data Lake users
* Users can create "data feeds" through our UI (no-code!)
* Powerful ingest pipeline template that implements Think Big best practices
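To give a feel for the kind of work a Spark ingest step performs, here is a minimal sketch. It is illustrative only: the paths, feed name, and validation rule are placeholder assumptions, not the actual ingest templates shipped with the product (those are built as NiFi flows through the UI).

```scala
import org.apache.spark.sql.SparkSession

// Hypothetical standalone ingest step; paths and the feed name are placeholders.
object IngestSketch {
  def main(args: Array[String]): Unit = {
    // Build (or reuse) a Spark session for the ingest job.
    val spark = SparkSession.builder()
      .appName("data-feed-ingest-sketch")
      .getOrCreate()

    // Read raw delimited data from a landing directory.
    val raw = spark.read
      .option("header", "true")
      .csv("hdfs:///landing/example_feed/")

    // A deliberately simple validation step: drop rows containing nulls.
    val valid = raw.na.drop()

    // Land the validated records in the data lake in a columnar format.
    valid.write
      .mode("overwrite")
      .parquet("hdfs:///datalake/example_feed/valid/")

    spark.stop()
  }
}
```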
To get started, please follow the deployment guide.
The code is organized into the following subtrees, described below:
Subfolder | Description |
---|---|
commons | Utility or common functionality used throughout the Think Big accelerator platform |
core | API frameworks that can generally be used by developers to extend the capabilities of the Think Big accelerator platform |
docs | Documentation that should be distributed as part of a release |
integrations | Pure integration projects with 3rd party software such as NiFi and Spark. |
metadata | The metadata server is a top-level project that provides a metadata repository and a REST API for recording metadata |
plugins | Alternative, and often optional, implementations for working with different distributions or technology branches |
security | Support for application security for both authentication and authorization |
services | Provides REST endpoints and core server-side processing |