Hitachi Hyper Scale-Out Platform (HSP) for Big Data Analytics combines server computing, virtualisation and storage into a low-cost, hyper-converged appliance that delivers high-performance NFS integrated with Hadoop, enabling faster insight from mixed analytics workloads.
Hitachi Hyper Scale-Out Platform Features & Benefits:
- Cost: Pay as you grow, adding compute and storage resources on demand. The converged compute, storage and virtualisation platform reduces capital expenditure
- Availability: No single point of failure or contention. Data is automatically distributed across the nodes in the system
- Recoverability: Data protection and availability. Data is protected as it is written: the system automatically creates copies to enable protection and recovery
- Massive scalability (scale-out paradigm): Designed to start relatively small (a few hundred terabytes) and to easily scale to multiple petabytes
- Performance: Scales as the solution grows. Each node added contributes CPU and memory as well as capacity, so performance grows alongside storage
- Self-configuring, self-managing and self-repairing (self-healing): Configuration is applied automatically throughout the system, and errors are automatically identified and repaired
- Manageability: A RESTful API and support for standard Hadoop management tools
- Balancing and rebalancing: Distributed link and metadata management automatically distribute and redistribute large data sets across the nodes
- Ease of deployment, self-managing: HSP provides VM templates for bringing up preconfigured compute nodes, and a CLI for managing the complete life cycle of VMs on the platform. If a physical node fails, another node in the cluster takes over its identity and workload, making the failure transparent to the user.
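The RESTful API and VM life-cycle management mentioned above lend themselves to scripting. As a minimal, hypothetical sketch (the endpoint path, port number and payload fields below are illustrative assumptions, not the documented HSP API), a client might compose a request to deploy a VM from a preconfigured template like this:

```python
import json
from urllib.parse import urljoin

# Assumed base URL for the cluster's REST management endpoint
# (hostname, port and API version are placeholders, not HSP's real values).
BASE_URL = "https://hsp-cluster.example.com:9090/"

def create_vm_request(template, name, cpus=4, memory_gb=16):
    """Compose the URL and JSON body for a hypothetical
    'create VM from template' REST call."""
    url = urljoin(BASE_URL, "api/v1/vms")
    payload = {
        "template": template,   # e.g. a preconfigured Hadoop node template
        "name": name,
        "cpus": cpus,
        "memory_gb": memory_gb,
    }
    return url, json.dumps(payload)

url, body = create_vm_request("hadoop-datanode", "dn-07")
print(url)
# A real client would now POST `body` to `url`, for example with the
# requests library:
#   requests.post(url, data=body,
#                 headers={"Content-Type": "application/json"})
```

Keeping request composition separate from transport, as here, makes such automation easy to test without a live cluster.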