How can the size of the data be reduced to enable more cost-effective storage and greater data mobility when faced with very large amounts of data?
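Compression is the usual answer to this problem. Below is a minimal Python sketch using the standard-library gzip module to compress a dataset before storage or transfer; the file names and sample data are assumptions for illustration.

```python
import gzip
import os
import shutil

def compress_file(src_path: str, dest_path: str) -> None:
    """Compress src_path with gzip, writing the result to dest_path."""
    with open(src_path, "rb") as src, gzip.open(dest_path, "wb") as dest:
        shutil.copyfileobj(src, dest)

if __name__ == "__main__":
    # Create a hypothetical dataset file, compress it, and report
    # the size reduction achieved.
    with open("dataset.csv", "w") as f:
        f.write("id,value\n" + "\n".join(f"{i},{i * 2}" for i in range(100_000)))
    compress_file("dataset.csv", "dataset.csv.gz")
    original = os.path.getsize("dataset.csv")
    compressed = os.path.getsize("dataset.csv.gz")
    print(f"{original} bytes -> {compressed} bytes "
          f"({100 * (1 - compressed / original):.1f}% smaller)")
```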
How can large amounts of processed data be exported from a Big Data platform directly to a relational database?
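In Hadoop environments this is typically handled by a tool such as Apache Sqoop. As a language-level illustration of the underlying idea, the Python sketch below bulk-inserts processed records into a relational table using the standard-library sqlite3 module; the table schema and records are assumptions for the example.

```python
import sqlite3

# Processed records as they might arrive from a Big Data platform
# (hypothetical schema: product id, total sales).
processed_records = [("p1", 1250.0), ("p2", 980.5), ("p3", 4310.25)]

conn = sqlite3.connect("warehouse.db")
conn.execute("CREATE TABLE IF NOT EXISTS sales_summary (product_id TEXT, total REAL)")
# executemany performs the inserts as a single batch, which is the key
# idea behind bulk export into a relational database.
conn.executemany("INSERT INTO sales_summary VALUES (?, ?)", processed_records)
conn.commit()
print(conn.execute("SELECT COUNT(*) FROM sales_summary").fetchone()[0], "rows exported")
conn.close()
```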
How can a dataset whose related attributes are spread across more than one record be stored in a way that lends itself to distributed processing techniques that handle data on a record-by-record basis?
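One answer is to denormalize the dataset up front: merge the related records into single self-contained records, so that each one can be processed independently. A minimal sketch, assuming records keyed by a shared customer id:

```python
from collections import defaultdict

# Related attributes spread across several records (assumed layout).
rows = [
    ("c1", "name", "Alice"),
    ("c1", "city", "Leiden"),
    ("c2", "name", "Bob"),
    ("c2", "city", "Utrecht"),
]

# Merge rows sharing a key into one denormalized record, so a
# record-by-record processor sees all related attributes at once.
merged = defaultdict(dict)
for key, attr, value in rows:
    merged[key][attr] = value

for key, record in merged.items():
    print(key, record)   # e.g. c1 {'name': 'Alice', 'city': 'Leiden'}
```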
How can very large amounts of data be stored without degrading the access performance of the underlying storage technology?
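A standard technique here is to partition (shard) the data across multiple storage locations so that no single volume or node becomes a bottleneck. The sketch below hash-partitions records into a fixed number of shard files; the shard count and file layout are assumptions.

```python
import hashlib

NUM_SHARDS = 4  # assumed shard count

def shard_for(key: str) -> int:
    """Map a record key to a shard deterministically via hashing."""
    digest = hashlib.md5(key.encode()).hexdigest()
    return int(digest, 16) % NUM_SHARDS

records = [(f"user{i}", f"payload-{i}") for i in range(20)]

# Append each record to the file of its shard, spreading load
# evenly across storage locations.
for key, payload in records:
    with open(f"shard_{shard_for(key)}.txt", "a") as f:
        f.write(f"{key}\t{payload}\n")
```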
How can large amounts of data be imported into a Big Data platform from a relational database?
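Apache Sqoop is the usual tool for this in Hadoop environments. As a minimal illustration of the underlying idea, the sketch below reads rows from a relational table in batches and writes them out as delimited part files of the kind a Big Data platform would ingest; the table name, sample data, and batch size are assumptions.

```python
import sqlite3

BATCH_SIZE = 1000  # assumed batch size

conn = sqlite3.connect("source.db")
conn.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [(i, i * 1.5) for i in range(2500)])  # sample data

cursor = conn.execute("SELECT id, amount FROM orders")
part = 0
while True:
    batch = cursor.fetchmany(BATCH_SIZE)
    if not batch:
        break
    # Each batch becomes one delimited file, mirroring how an import
    # tool splits a table into part files for parallel ingestion.
    with open(f"orders_part_{part:04d}.csv", "w") as f:
        f.writelines(f"{row_id},{amount}\n" for row_id, amount in batch)
    part += 1
conn.close()
print(part, "part files written")
```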
The Batch Data Processing compound pattern represents a solution environment capable of ingesting large amounts of structured data for the sole purpose of offloading this processing from existing enterprise systems.
How can large amounts of non-relational data be stored in a table-like form where each record may consist of a very large number of fields or related groups of fields?
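Column-family (wide-column) databases such as HBase and Cassandra address this problem. The sketch below models the core idea in plain Python: each row key maps to named column families, each holding an open-ended set of columns. All names and values are illustrative.

```python
# A toy wide-column store: row key -> column family -> column -> value.
# Rows need not share columns, and a family can hold arbitrarily many
# related fields, which makes the model table-like yet sparse.
table: dict[str, dict[str, dict[str, str]]] = {}

def put(row: str, family: str, column: str, value: str) -> None:
    table.setdefault(row, {}).setdefault(family, {})[column] = value

put("user1", "profile", "name", "Alice")
put("user1", "activity", "login:2016-05-01", "web")
put("user1", "activity", "login:2016-05-02", "mobile")
put("user2", "profile", "name", "Bob")   # no activity columns at all

print(table["user1"]["activity"])
```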
How can the execution of a series of data processing activities, from data ingress to egress, be automated?
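Workflow engines such as Apache Oozie automate this in Hadoop environments. The sketch below captures the essence with a simple Python pipeline runner that executes ingress, processing, and egress stages in order; the stage functions are hypothetical placeholders.

```python
from typing import Callable

def ingest() -> list[int]:
    return list(range(10))            # hypothetical ingress stage

def transform(data: list[int]) -> list[int]:
    return [x * x for x in data]      # hypothetical processing stage

def export(data: list[int]) -> None:
    print("exported:", data)          # hypothetical egress stage

def run_pipeline(stages: list[Callable]) -> None:
    """Run stages in sequence, feeding each one's output to the next."""
    result = None
    for stage in stages:
        result = stage(result) if result is not None else stage()

run_pipeline([ingest, transform, export])
```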
How can complex processing tasks be carried out in a manageable fashion when using contemporary processing techniques?
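A common answer is to decompose the complex task into a chain of simple, independently testable steps, each consuming the previous step's output. A minimal sketch, with the steps themselves chosen purely for illustration:

```python
# Instead of one monolithic job, express the logic as small steps
# that are easy to reason about, test, and rerun individually.
def parse(lines):
    return [line.split(",") for line in lines]

def filter_valid(rows):
    return [r for r in rows if len(r) == 2]

def summarize(rows):
    return sum(float(amount) for _, amount in rows)

steps = [parse, filter_valid, summarize]

data = ["a,1.5", "b,2.0", "malformed", "c,0.5"]
for step in steps:
    data = step(data)
print("total:", data)   # total: 4.0
```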
How can large amounts of data be stored in a fault-tolerant manner, such that the data remains available in the face of hardware failures?
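The canonical answer is replication, as done by HDFS with a configurable replication factor. The toy Python sketch below writes each value to several replicas and falls back to a surviving copy on read; the replica count and failure simulation are assumptions for illustration.

```python
import random

REPLICATION_FACTOR = 3  # assumed, mirroring HDFS's default of 3

replicas = [dict() for _ in range(REPLICATION_FACTOR)]  # one dict per "node"

def write(key: str, value: str) -> None:
    """Store the value on every replica node."""
    for node in replicas:
        node[key] = value

def read(key: str) -> str:
    """Return the value from the first replica that still has it."""
    for node in replicas:
        if key in node:
            return node[key]
    raise KeyError(key)

write("block-1", "payload")
failed = random.randrange(REPLICATION_FACTOR)
replicas[failed].clear()            # simulate one node failing
print(read("block-1"))              # data survives the failure
```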
How can very large amounts of data be processed with maximum throughput?
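Divide-and-conquer frameworks such as MapReduce achieve this by processing partitions of the data in parallel and then combining the partial results. The sketch below imitates the idea with Python's multiprocessing module, counting words across chunks in parallel; the chunking scheme and sample data are illustrative.

```python
from collections import Counter
from multiprocessing import Pool

def map_chunk(lines: list[str]) -> Counter:
    """Map phase: count words within one chunk of the data."""
    counts = Counter()
    for line in lines:
        counts.update(line.split())
    return counts

if __name__ == "__main__":
    data = ["big data batch processing"] * 10_000
    chunks = [data[i::4] for i in range(4)]   # split into 4 partitions
    with Pool(4) as pool:                     # process partitions in parallel
        partials = pool.map(map_chunk, chunks)
    # Reduce phase: merge the partial counts into the final result.
    total = sum(partials, Counter())
    print(total.most_common(2))
```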