Version | 1.0 | Creation date | 22-05-2016 |
How can the size of the data be reduced to enable more cost-effective storage and more efficient data movement when faced with very large amounts of data?
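As a minimal sketch of the trade-off this question points at, the following example (hypothetical data, using Python's standard zlib module) compresses a repetitive raw dataset before storage or transfer and reports the size reduction:

```python
import zlib

# Hypothetical repetitive dataset standing in for bulk raw data.
raw = b"sensor_reading=42;" * 1000

# Compress at the highest level to trade CPU time for storage savings.
compressed = zlib.compress(raw, level=9)

ratio = len(compressed) / len(raw)
print(f"raw: {len(raw)} bytes, compressed: {len(compressed)} bytes, "
      f"ratio: {ratio:.3f}")

# Decompression restores the original data exactly (lossless).
assert zlib.decompress(compressed) == raw
```

The smaller the compressed representation, the cheaper the data is to store and the faster it can be moved between nodes; the cost is the CPU time spent compressing and decompressing.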
How can algorithms that require iterative processing of a large dataset, which may or may not contain connected entities, be executed in an efficient and timely manner?
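A minimal sketch of iterative processing over connected entities (a hypothetical three-node graph, with a simplified PageRank-style computation that loops until the values stop changing meaningfully):

```python
# Adjacency list: node -> nodes it links to (hypothetical graph).
graph = {
    "a": ["b", "c"],
    "b": ["c"],
    "c": ["a"],
}
damping = 0.85
ranks = {node: 1.0 / len(graph) for node in graph}

# Fixed iteration cap; real frameworks also check convergence each pass.
for _ in range(50):
    new_ranks = {}
    for node in graph:
        # Sum the rank contributed by every node that links to this one.
        incoming = sum(
            ranks[src] / len(targets)
            for src, targets in graph.items()
            if node in targets
        )
        new_ranks[node] = (1 - damping) / len(graph) + damping * incoming
    converged = all(abs(new_ranks[n] - ranks[n]) < 1e-9 for n in graph)
    ranks = new_ranks
    if converged:
        break

print({n: round(r, 3) for n, r in sorted(ranks.items())})
```

Each pass reads the entire dataset and feeds its output into the next pass; this repeated full-dataset traversal is exactly what makes naive batch frameworks inefficient for such algorithms.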
How can different distributed processing frameworks be used to process large amounts of data without having to learn the programmatic intricacies of each framework?
The Poly Storage compound pattern represents a part of a Big Data platform capable of storing high-volume, high-velocity and high-variety data.
The Poly Source compound pattern represents a part of a Big Data platform capable of ingesting high-volume and high-velocity data from a range of structured, unstructured and semi-structured data sources.
How can data stored in a Big Data solution environment be kept private so that only the intended client is able to read it?
How can the execution of a series of data processing activities, from data ingress to egress, be automated?
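A minimal sketch of such automation (all stage names and data are hypothetical): each processing activity is a function, and a small driver runs the stages in order, passing each stage's output to the next.

```python
def ingest():
    # Hypothetical raw records arriving at ingress.
    return ["  Alice,30 ", "Bob,25", ""]

def clean(records):
    # Drop empty records and strip surrounding whitespace.
    return [r.strip() for r in records if r.strip()]

def transform(records):
    # Parse each CSV-style record into a structured row.
    return [dict(zip(("name", "age"), r.split(","))) for r in records]

def egress(rows):
    # In a real pipeline this stage would write to downstream storage.
    return rows

# The ordered stage list is the automated workflow definition.
pipeline = [clean, transform, egress]

data = ingest()
for stage in pipeline:
    data = stage(data)

print(data)  # [{'name': 'Alice', 'age': '30'}, {'name': 'Bob', 'age': '25'}]
```

Production platforms add scheduling, retries, and dependency tracking on top of this basic chain-of-stages shape, but the automation principle is the same.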
How can large amounts of raw data be analyzed in place by contemporary data analytics tools without having to export data?
How can very large amounts of data be processed with maximum throughput?
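A minimal sketch of the divide-and-conquer approach this question implies (hypothetical workload): the input is split into chunks, worker processes handle the chunks in parallel, and the partial results are combined at the end.

```python
from concurrent.futures import ProcessPoolExecutor

def count_words(chunk):
    # Each worker processes its chunk independently of the others.
    return sum(len(line.split()) for line in chunk)

def parallel_word_count(lines, workers=4):
    # Split the input into roughly equal chunks, one per worker.
    size = max(1, len(lines) // workers)
    chunks = [lines[i:i + size] for i in range(0, len(lines), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        # Combine the independent partial counts into the final result.
        return sum(pool.map(count_words, chunks))

if __name__ == "__main__":
    lines = ["big data needs big throughput"] * 1000  # hypothetical dataset
    print(parallel_word_count(lines))  # 5 words per line -> 5000
```

Throughput scales because no chunk depends on any other; distributed processing frameworks apply the same split-process-combine shape across many machines rather than local processes.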
The Analytical Sandbox is a standalone solution environment capable of capturing and processing large quantities of data from multiple sources in order to perform analytics in isolation from the enterprise data warehouse.