Altiscale Data Cloud Features
A complete Big Data platform.
Ready to go.
Altiscale offers a complete Big Data platform based on the Apache Hadoop and Apache Spark ecosystem, so you can get all your jobs done—batch, interactive, machine learning, you name it. The Altiscale Data Cloud provides leading open-source and third-party solutions and tools, including:
- Apache Spark
- Apache Hive/HCatalog
- Apache Pig
- Apache Oozie
- Apache Sqoop
- Apache Flume
- Apache Kafka/Camus
- Apache Avro
- Apache Parquet
- Apache Zeppelin
- Apache Mahout
- Revolution R
- Pentaho Kettle
Fastest Time to Value
As a cloud-based solution, Altiscale can be deployed immediately, whereas on-premises alternatives take months.
Users simply log in for immediate access to a workbench that has all the Big Data tools they need, on the most production-ready clusters in the world.
Altiscale eliminates the time-consuming and expensive process of procuring and provisioning hardware, installing and configuring Hadoop, integrating other tools, and managing clusters in production. Because Altiscale is a service, upfront costs are low and ongoing costs are more predictable.
Secure and Compliant
Achieving comprehensive security for Hadoop requires policies, procedures, and technologies that go well beyond Hadoop's built-in security capabilities. To safeguard data and meet compliance requirements, Altiscale employs a multi-layered, overlapping approach to data protection, built on years of deep Hadoop security expertise. As a result, the Altiscale Data Cloud offers a more secure environment than typical cloud or on-premises alternatives. Altiscale is the only company to offer fully implemented Kerberos authentication automatically at service start. Altiscale is proud to have achieved the following compliance certifications to meet the needs of the most security-conscious customers:
- SOC 2
Optimized for Performance
The Altiscale Data Cloud is optimized throughout for high performance.
The hardware, networking, and software have all been specifically implemented, integrated, and tuned for fast, reliable job completion. When you're dealing with data on a massive scale, even small friction points can slow you down a lot—or end up blocking you entirely. At Altiscale, our operations team is working for you around the clock to ensure your data environment is ready and optimized to get your jobs done quickly.
Altiscale has also developed its own automation tools that automatically manage and monitor ecosystem operations. This allows us to ensure fast, high-quality performance, as well as to scale rapidly when needed.
It's been estimated that 90% of Big Data success is in the operations. With Altiscale, you’ve got that huge head start locked in.
Key to that is providing not only the best infrastructure for Hadoop and Spark, but also the best Big Data operations. Other vendors make operating your Big Data platform your problem. Altiscale, in contrast, has developed the industry's best technology for managing Big Data and operates it for you.
Big Data platforms have significant ongoing operational demands, which only grow more complex over time. With some of the foremost Big Data experts on your side, you can rest assured that your jobs on Altiscale will get done quickly and without unnecessary stress.
Truly Elastic
Altiscale offers elastic Big Data clusters, capable of quickly adjusting capacity to meet the resource requirements of the jobs at hand.
Massive monthly job? No problem. Nothing at all the next month? Also no problem. The Altiscale Data Cloud readily adapts to meet your needs, and you only pay for what you use.
On-premises implementations require you to always have enough hardware to meet your peak requirements. Other cloud vendors aren’t truly elastic, charging you exorbitant rates for “excess” capacity and requiring manual intervention to scale up and down a cluster. Altiscale Data Cloud clusters are set up for elasticity that is transparent to users, making handling spiky workloads easy and affordable.
Always Up to Date
Altiscale prides itself on staying at the forefront of Big Data technology developments.
The engineering team is always testing out new capabilities and tools to ensure they deliver value and are enterprise ready. When the latest technology meets our standards, we make it available to our customers. That means your team is working with the latest proven developments, not lingering too long with outmoded capabilities or wrestling with lab experiments that aren’t yet reliable or value-creating.