Big Data Pipeline

Version 1.0
Creation date 02-05-2021

Big Data Pipeline*

The Big Data Pipeline compound pattern generally comprises multiple stages whose objectives are to break complex processing operations down into modular steps for easier understanding and debugging, and to accommodate future data processing requirements.

Author Bert Dingemans
Alias
Stereotypes ApplicationFunction
Details of Big Data Pipeline*
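
A minimal sketch of the pattern in Python, assuming record-oriented data represented as dictionaries; the stage names (ingest, cleanse, enrich) are illustrative assumptions, not part of the pattern definition. Each stage is a small function, so individual steps can be tested and debugged independently, and new stages can be appended as processing requirements evolve.

from typing import Callable, Iterable, List

# A stage transforms a collection of records into another collection of records.
Stage = Callable[[Iterable[dict]], Iterable[dict]]

def ingest(records: Iterable[dict]) -> Iterable[dict]:
    # Hypothetical ingress stage: pass raw records through unchanged.
    return records

def cleanse(records: Iterable[dict]) -> Iterable[dict]:
    # Drop records missing the (assumed) mandatory "id" field.
    return (r for r in records if "id" in r)

def enrich(records: Iterable[dict]) -> Iterable[dict]:
    # Add a derived field; later stages never need to change earlier ones.
    return ({**r, "processed": True} for r in records)

def run_pipeline(records: Iterable[dict], stages: List[Stage]) -> List[dict]:
    # Apply the stages in order, from ingress to egress.
    for stage in stages:
        records = stage(records)
    return list(records)

if __name__ == "__main__":
    raw = [{"id": 1, "value": 10}, {"value": 20}]
    print(run_pipeline(raw, [ingest, cleanse, enrich]))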

Poly Storage*

The Poly Storage compound pattern represents a part of a Big Data platform capable of storing high-volume, high-velocity and high-variety data.

Author Bert Dingemans
Alias
Stereotypes ApplicationFunction
Details of Poly Storage*
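
A minimal Python sketch of polyglot storage, assuming three hypothetical stores: an in-memory SQLite table for structured records, a JSON-lines file acting as a document store for semi-structured data, and a plain directory for unstructured binary payloads. Real deployments would substitute dedicated relational, document and object stores.

import json
import sqlite3
from pathlib import Path

def store_structured(record: dict, db: sqlite3.Connection) -> None:
    # Structured records go to a relational store (illustrative table and columns).
    db.execute("INSERT INTO events (id, value) VALUES (?, ?)",
               (record["id"], record["value"]))

def store_document(record: dict, doc_path: Path) -> None:
    # Semi-structured records are appended to a JSON-lines document store.
    with doc_path.open("a") as f:
        f.write(json.dumps(record) + "\n")

def store_blob(payload: bytes, blob_dir: Path, name: str) -> None:
    # Unstructured payloads are written as raw objects.
    (blob_dir / name).write_bytes(payload)

if __name__ == "__main__":
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE events (id INTEGER, value INTEGER)")
    store_structured({"id": 1, "value": 10}, db)
    store_document({"id": 2, "text": "free-form"}, Path("documents.jsonl"))
    store_blob(b"\x00\x01", Path("."), "sample.bin")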

Big Data Processing Environment*

The Big Data Processing Environment represents an environment capable of handling the range of distinct requirements of large-scale dataset processing.

Author Bert Dingemans
Alias
Stereotypes ApplicationFunction
Details of Big Data Processing Environment*
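
A minimal Python sketch of two of those distinct requirements, assuming the same transformation must run both as a batch job over a complete dataset and as a windowed job over an incoming stream; the window size and the "value" field are illustrative assumptions.

from typing import Callable, Iterable, Iterator, List

def run_batch(records: List[dict], job: Callable[[List[dict]], List[dict]]) -> List[dict]:
    # Batch mode: the job sees the complete dataset at once.
    return job(records)

def run_windowed(stream: Iterable[dict], job: Callable[[List[dict]], List[dict]],
                 window_size: int = 4) -> Iterator[List[dict]]:
    # Stream-style mode: the same job is applied to successive windows of records.
    window: List[dict] = []
    for record in stream:
        window.append(record)
        if len(window) == window_size:
            yield job(window)
            window = []
    if window:
        yield job(window)

def summarise(records: List[dict]) -> List[dict]:
    # Example job: aggregate a (hypothetical) "value" field.
    return [{"count": len(records), "total": sum(r["value"] for r in records)}]

if __name__ == "__main__":
    data = [{"value": i} for i in range(10)]
    print(run_batch(data, summarise))
    print(list(run_windowed(iter(data), summarise)))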

Automated Dataset Execution

How can the execution of a series of data processing activities, from data ingress to egress, be automated?

Author Bert Dingemans
Alias
Stereotypes ApplicationFunction
Details of Automated Dataset Execution
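
A minimal sketch of automated execution in plain Python, assuming three hypothetical tasks (ingest, transform, egress) with declared dependencies; a small scheduler resolves the dependencies and runs the whole ingress-to-egress flow unattended. Orchestration tools such as Apache Airflow realise the same idea at production scale.

from typing import Callable, Dict, List

Task = Callable[[dict], None]

def run_dag(tasks: Dict[str, Task], deps: Dict[str, List[str]], context: dict) -> None:
    # Run every task exactly once, after all of its upstream dependencies.
    done: set = set()

    def run(name: str) -> None:
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)
        tasks[name](context)
        done.add(name)

    for name in tasks:
        run(name)

if __name__ == "__main__":
    ctx: dict = {}
    tasks = {
        "ingest": lambda c: c.update(raw=[1, 2, 3]),
        "transform": lambda c: c.update(clean=[x * 10 for x in c["raw"]]),
        "egress": lambda c: print("publishing", c["clean"]),
    }
    deps = {"transform": ["ingest"], "egress": ["transform"]}
    run_dag(tasks, deps, ctx)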

Poly Source*

The Poly Source compound pattern represents a part of a Big Data platform capable of ingesting high-volume and high-velocity data from a range of structured, unstructured and semi-structured data sources.

Author Bert Dingemans
Alias
Stereotypes ApplicationFunction
Details of Poly Source*
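
A minimal Python sketch of the ingress side, assuming three hypothetical source adapters: CSV files for structured data, JSON-lines files for semi-structured data and plain text files for unstructured data. Each adapter normalises its source into a common record format before the records enter the platform.

import csv
import json
from pathlib import Path
from typing import Iterable, Iterator

def from_csv(path: Path) -> Iterator[dict]:
    # Structured source: each CSV row becomes a record.
    with path.open(newline="") as f:
        yield from csv.DictReader(f)

def from_jsonl(path: Path) -> Iterator[dict]:
    # Semi-structured source: each JSON line becomes a record.
    with path.open() as f:
        for line in f:
            yield json.loads(line)

def from_text(path: Path) -> Iterator[dict]:
    # Unstructured source: each text line is wrapped in a record.
    with path.open() as f:
        for number, line in enumerate(f, start=1):
            yield {"line": number, "text": line.rstrip("\n")}

def ingest_all(sources: Iterable[Iterator[dict]]) -> Iterator[dict]:
    # Merge all sources into one ingress stream.
    for source in sources:
        yield from source

if __name__ == "__main__":
    Path("events.csv").write_text("id,value\n1,10\n")
    Path("events.jsonl").write_text('{"id": 2, "value": 20}\n')
    Path("notes.txt").write_text("free-form note\n")
    for record in ingest_all([from_csv(Path("events.csv")),
                              from_jsonl(Path("events.jsonl")),
                              from_text(Path("notes.txt"))]):
        print(record)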

Poly Sink*

The Poly Sink compound pattern represents a part of a Big Data platform capable of egressing high-volume, high-velocity and high-variety data to downstream enterprise systems.

Author Bert Dingemans
Alias
Stereotypes ApplicationFunction
Details of Poly Sink*
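
A minimal Python sketch of the egress side, assuming two hypothetical downstream targets (a JSON-lines file and the console) behind a common Sink interface; processed records are batched and fanned out to every registered sink.

import json
from typing import Iterable, List, Protocol

class Sink(Protocol):
    # Every downstream target accepts a batch of records.
    def write(self, records: List[dict]) -> None: ...

class FileSink:
    def __init__(self, path: str) -> None:
        self.path = path

    def write(self, records: List[dict]) -> None:
        # Illustrative file-based downstream system.
        with open(self.path, "a") as f:
            for record in records:
                f.write(json.dumps(record) + "\n")

class ConsoleSink:
    def write(self, records: List[dict]) -> None:
        for record in records:
            print("downstream:", record)

def egress(records: Iterable[dict], sinks: List[Sink], batch_size: int = 500) -> None:
    # Batch the outgoing records and fan each batch out to all sinks.
    batch: List[dict] = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            for sink in sinks:
                sink.write(batch)
            batch = []
    if batch:
        for sink in sinks:
            sink.write(batch)

if __name__ == "__main__":
    egress([{"id": 1}, {"id": 2}], [FileSink("out.jsonl"), ConsoleSink()])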