Version | 1.0 | Creation date | 02-05-2021 |
How can large amounts of data be stored in a fault-tolerant manner such that the data remains available in the face of hardware failures?
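A common answer to this problem is replication: each block of data is written to several independent nodes, so the loss of any one node leaves other copies readable (HDFS, for example, defaults to three-way block replication). The following is a minimal in-memory sketch of that idea only; the class and its node layout are illustrative assumptions, not a real storage engine.

```python
# Illustrative sketch: fault tolerance via replication across nodes,
# in the spirit of HDFS-style 3-way block replication.
class ReplicatedStore:
    def __init__(self, num_nodes=5, replication=3):
        # Each dict stands in for an independent storage node.
        self.nodes = [dict() for _ in range(num_nodes)]
        self.replication = replication

    def put(self, key, block):
        # Write the block to `replication` distinct nodes chosen by hash.
        start = hash(key) % len(self.nodes)
        for i in range(self.replication):
            self.nodes[(start + i) % len(self.nodes)][key] = block

    def get(self, key, failed=()):
        # Read from any replica that does not live on a failed node.
        start = hash(key) % len(self.nodes)
        for i in range(self.replication):
            idx = (start + i) % len(self.nodes)
            if idx not in failed and key in self.nodes[idx]:
                return self.nodes[idx][key]
        raise KeyError(key)

store = ReplicatedStore()
store.put("block-1", b"payload")
# The data survives the failure of the primary replica's node:
recovered = store.get("block-1", failed={hash("block-1") % 5})
print(recovered)
```

The trade-off is storage overhead (here 3x) in exchange for availability; real platforms also re-replicate blocks when a node is declared dead.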
How can large amounts of non-relational data that conforms to a nested structure be stored in a scalable manner so that the data retains its internal structure and sub-sections of a data unit can be accessed?
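Document-oriented NoSQL stores address this by persisting each record as a nested document (typically JSON) and letting clients address a sub-section by path rather than retrieving the whole unit. The sketch below, with a hypothetical `DocumentStore` class, illustrates the access pattern under those assumptions; it is not a scalable implementation.

```python
import json

# Illustrative sketch: a document-store-style table that keeps nested
# structure intact and addresses a sub-section of a document by path.
class DocumentStore:
    def __init__(self):
        self._docs = {}

    def put(self, doc_id, document):
        # Round-trip through JSON to store a deep copy of the nested data.
        self._docs[doc_id] = json.loads(json.dumps(document))

    def get_path(self, doc_id, path):
        # Walk a dotted path such as "customer.address.city".
        node = self._docs[doc_id]
        for part in path.split("."):
            node = node[part]
        return node

db = DocumentStore()
db.put("order-42", {"customer": {"name": "Ada", "address": {"city": "Delft"}},
                    "items": {"count": 2}})
city = db.get_path("order-42", "customer.address.city")
print(city)
```

Because the document's internal structure is preserved on write, a reader can fetch just `customer.address` without reassembling the record from joined tables.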
How can the execution of a number of data processing activities starting from data ingress to egress be automated?
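Workflow engines such as Apache Oozie or Airflow solve this by declaring the processing activities as an ordered set of stages and executing them automatically in sequence. A minimal sketch of that control flow, with hypothetical stage names, might look like:

```python
# Illustrative sketch: automating ingress-to-egress processing as an
# ordered pipeline of named stages, in the spirit of a workflow engine.
def run_pipeline(stages, data):
    executed = []
    for name, fn in stages:
        data = fn(data)       # each stage consumes the previous stage's output
        executed.append(name)
    return data, executed

stages = [
    ("ingress",  lambda d: d + ["raw"]),            # acquire the data
    ("cleanse",  lambda d: [x.upper() for x in d]), # transform it
    ("egress",   lambda d: sorted(d)),              # hand it off downstream
]
result, executed = run_pipeline(stages, [])
print(result, executed)
```

Real engines add what this sketch omits: scheduling, retries on failure, and dependency graphs rather than a strict linear order.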
Data transformation represents a solution environment where the Big Data platform is exclusively used for transforming large amounts of data obtained from a variety of sources.
The Poly Source compound pattern represents a part of a Big Data platform capable of ingesting high-volume and high-velocity data from a range of structured, unstructured and semi-structured data sources.
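One way to picture the ingress layer of such a platform is a single entry point that accepts structured, semi-structured, and unstructured payloads and lands them all as uniform records. The sketch below is a loose illustration of that idea (the `ingest` function and its format tags are assumptions, not part of the pattern's definition):

```python
import csv, io, json

# Illustrative sketch: a Poly Source-style ingress layer accepting
# structured (CSV), semi-structured (JSON) and unstructured (text) data.
def ingest(payload, fmt):
    if fmt == "csv":
        return [dict(row) for row in csv.DictReader(io.StringIO(payload))]
    if fmt == "json":
        return [json.loads(payload)]
    # Unstructured input: keep the raw text so no information is lost.
    return [{"raw": payload}]

records = (ingest("id,city\n1,Delft\n", "csv")
           + ingest('{"id": "2", "city": "Leiden"}', "json")
           + ingest("free-form log line", "text"))
print(len(records))
```

The essential point is that no source is rejected for lacking a schema; schema is applied (or skipped) per source at the edge.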
How can the size of the data be reduced to enable more cost-effective storage and greater mobility when moving data, when faced with very large amounts of data?
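The standard answer is compression with a well-known codec before storage or transfer. Big Data platforms usually prefer splittable codecs so compressed files can still be processed in parallel, but plain gzip is enough to demonstrate the size trade-off:

```python
import gzip

# Illustrative sketch: lossless compression reduces storage footprint
# and transfer cost; repetitive data (like logs) compresses especially well.
original = b"2021-05-02,sensor-1,21.5\n" * 10_000
compressed = gzip.compress(original)

smaller = len(compressed) < len(original)
lossless = gzip.decompress(compressed) == original
print(smaller, lossless)
```

The cost is CPU time on compression and decompression, which is typically far cheaper than the disk and network it saves.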
How can very large amounts of data be stored without degrading the access performance of the underlying storage technology?
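One interpretation of this problem is horizontal partitioning: records are spread over a fixed set of buckets by a hash of their key, so no single storage location grows without bound and a lookup touches only one bucket regardless of total volume. A minimal sketch under that assumption:

```python
import hashlib

# Illustrative sketch: hash-partitioning records over buckets so lookups
# stay fast as total data volume grows (each bucket could be a node/shard).
NUM_BUCKETS = 16
buckets = [dict() for _ in range(NUM_BUCKETS)]

def bucket_for(key):
    # A stable hash keeps a key in the same bucket across runs.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % NUM_BUCKETS

def put(key, value):
    buckets[bucket_for(key)][key] = value

def get(key):
    return buckets[bucket_for(key)][key]

for i in range(1000):
    put(f"record-{i}", i)
print(get("record-123"))
```

Distributed stores apply the same idea across machines, often with consistent hashing so buckets can be added without rehashing everything.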
How can very large amounts of data be processed with maximum throughput?
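Batch engines achieve throughput by divide-and-conquer: a map phase runs on each input split in parallel and a reduce phase merges the partial results, the scheme popularized by MapReduce. The word-count sketch below illustrates the shape of that computation (threads stand in for cluster workers):

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

# Illustrative sketch: MapReduce-style parallel processing.
def count_words(split):
    # The "map" phase: independent work on one input split.
    return Counter(split.split())

def merge(partials):
    # The "reduce" phase: combine the per-split partial results.
    total = Counter()
    for counter in partials:
        total.update(counter)
    return total

splits = ["big data big", "data platform", "big platform"]
with ThreadPoolExecutor(max_workers=2) as pool:
    totals = merge(pool.map(count_words, splits))
print(totals["big"])
```

Because each split is processed independently, adding workers (or machines) increases throughput almost linearly until the merge step dominates.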
How can complex processing tasks be carried out in a manageable fashion when using contemporary processing techniques?
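Abstraction layers such as Pig, Hive, or Spark's dataset API answer this by hiding the low-level map/reduce mechanics behind a small set of composable operations. The fluent-API sketch below (the `Dataset` class is hypothetical) shows how a multi-step job reads as a declarative chain rather than hand-written processing code:

```python
from functools import reduce

# Illustrative sketch: a small abstraction over map/filter/reduce, in the
# spirit of higher-level engines that compile such chains to batch jobs.
class Dataset:
    def __init__(self, items):
        self._items = list(items)

    def map(self, fn):
        return Dataset(fn(x) for x in self._items)

    def filter(self, pred):
        return Dataset(x for x in self._items if pred(x))

    def reduce(self, fn, initial):
        return reduce(fn, self._items, initial)

total = (Dataset(range(10))
         .filter(lambda x: x % 2 == 0)    # keep even numbers: 0,2,4,6,8
         .map(lambda x: x * x)            # square them: 0,4,16,36,64
         .reduce(lambda a, b: a + b, 0))  # sum the squares
print(total)
```

In a real engine each chained call would describe (not execute) a stage, letting the planner optimize the whole job before running it.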
How can large amounts of non-relational data be stored in a table-like form where each record may consist of a very large number of fields or related groups of fields?
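Wide-column stores in the BigTable/HBase family solve this by keying every cell on (row key, column name), so each row can carry an arbitrary and sparse set of columns, commonly organized into column families. A minimal in-memory sketch of that data model (the class is illustrative, not an HBase client):

```python
from collections import defaultdict

# Illustrative sketch: a wide-column layout where rows are sparse and
# columns use a "family:qualifier" naming convention, as in HBase.
class WideColumnTable:
    def __init__(self):
        self._cells = defaultdict(dict)   # row key -> {column: value}

    def put(self, row_key, column, value):
        self._cells[row_key][column] = value

    def get_row(self, row_key):
        return dict(self._cells[row_key])

    def get_cell(self, row_key, column):
        return self._cells[row_key][column]

t = WideColumnTable()
t.put("user:1", "profile:name", "Ada")
t.put("user:1", "metrics:visits", 7)   # columns can differ per row
t.put("user:2", "profile:name", "Grace")
visits = t.get_cell("user:1", "metrics:visits")
print(visits)
```

Since absent columns simply do not exist as cells, a row with millions of potential fields costs storage only for the fields it actually has.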
How can large amounts of processed data be ported from a Big Data platform directly to a relational database?
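This egress step is the role tools such as Apache Sqoop play in practice: processed records are bulk-inserted into a relational table downstream systems can query. The sketch below stands in for that hand-off using an in-memory SQLite database (the table name and records are illustrative):

```python
import sqlite3

# Illustrative sketch: porting processed output straight into a
# relational table via bulk insert (a relational-sink style egress step).
processed = [("2021-05-02", "sensor-1", 21.5),
             ("2021-05-02", "sensor-2", 19.8)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (day TEXT, sensor TEXT, value REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?, ?)", processed)
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM readings").fetchone()[0]
print(count)
```

Production exporters batch the inserts and parallelize them per partition, but the contract is the same: the platform's output lands as ordinary relational rows.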