Alternatives to IOMETE

Compare IOMETE alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to IOMETE in 2026. Compare features, ratings, user reviews, pricing, and more from IOMETE competitors and alternatives in order to make an informed decision for your business.

  • 1
    PopSQL
    PopSQL is a collaborative SQL editor and workspace that connects everyone in the data analysis process so that teams can obtain better data insights and visualizations by asking the right questions, together.
    * Get answers faster with real-time collaboration, version history, and searchable shared queries and folders. We make it easy for your power SQL users and data analysts to work with business stakeholders.
    * Built-in data visualization and sharing lets you go from query to chart to Slack in seconds. Build, schedule, and push real-time insights and dashboards in just a few clicks.
    * Our modern, elegant cloud-based workspace offers a rich SQL editing experience. Dive right in, connect to your databases, and iterate on analyses from anywhere. We offer native macOS, Windows, and Linux clients.
    * One workspace to get it done: PopSQL puts your database connections, shared credentials, and an intuitive data catalog at your fingertips so you can access and mine your data safely and securely.
    Starting Price: $199 per month
  • 2
    Databricks Data Intelligence Platform
    The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
  • 3
    Onehouse
    The only fully managed cloud data lakehouse designed to ingest from all your data sources in minutes and support all your query engines at scale, for a fraction of the cost. Ingest from databases and event streams at TB-scale in near real-time, with the simplicity of fully managed pipelines. Query your data with any engine, and support all your use cases including BI, real-time analytics, and AI/ML. Cut your costs by 50% or more compared to cloud data warehouses and ETL tools with simple usage-based pricing. Deploy in minutes without engineering overhead with a fully managed, highly optimized cloud service. Unify your data in a single source of truth and eliminate the need to copy data across data warehouses and lakes. Use the right table format for the job, with omnidirectional interoperability between Apache Hudi, Apache Iceberg, and Delta Lake. Quickly configure managed pipelines for database CDC and streaming ingestion.
  • 4
    BigLake
    Google
    BigLake is a storage engine that unifies data warehouses and lakes by enabling BigQuery and open-source frameworks like Spark to access data with fine-grained access control. BigLake provides accelerated query performance across multi-cloud storage and open formats such as Apache Iceberg. Store a single copy of data with uniform features across data warehouses and lakes. Fine-grained access control and multi-cloud governance over distributed data. Seamless integration with open-source analytics tools and open data formats. Unlock analytics on distributed data regardless of where and how it’s stored, while choosing the best analytics tools, open source or cloud-native, over a single copy of data. Fine-grained access control across open source engines like Apache Spark, Presto, and Trino, and open formats such as Parquet. Performant queries over data lakes powered by BigQuery. Integrates with Dataplex to provide management at scale, including logical data organization.
    Starting Price: $5 per TB
  • 5
    Oracle Cloud Infrastructure Data Lakehouse
    A data lakehouse is a modern, open architecture that enables you to store, understand, and analyze all your data. It combines the power and richness of data warehouses with the breadth and flexibility of the most popular open source data technologies you use today. A data lakehouse can be built from the ground up on Oracle Cloud Infrastructure (OCI) to work with the latest AI frameworks and prebuilt AI services like Oracle’s language service. Data Flow is a serverless Spark service that enables customers to focus on their Spark workloads with zero infrastructure to manage. Oracle customers want to build advanced, machine learning-based analytics over their Oracle SaaS data, or any SaaS data. Our easy-to-use data integration connectors for Oracle SaaS make creating a lakehouse to analyze all your data alongside your SaaS data easy and reduce time to solution.
  • 6
    Apache Spark
    Apache Software Foundation
    Apache Spark™ is a unified analytics engine for large-scale data processing. Apache Spark achieves high performance for both batch and streaming data, using a state-of-the-art DAG scheduler, a query optimizer, and a physical execution engine. Spark offers over 80 high-level operators that make it easy to build parallel apps. And you can use it interactively from the Scala, Python, R, and SQL shells. Spark powers a stack of libraries including SQL and DataFrames, MLlib for machine learning, GraphX, and Spark Streaming. You can combine these libraries seamlessly in the same application. Spark runs on Hadoop, Apache Mesos, Kubernetes, standalone, or in the cloud. It can access diverse data sources. You can run Spark using its standalone cluster mode, on EC2, on Hadoop YARN, on Mesos, or on Kubernetes. Access data in HDFS, Alluxio, Apache Cassandra, Apache HBase, Apache Hive, and hundreds of other data sources.
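    As a rough illustration of the high-level, interactive API described above, here is a minimal PySpark sketch; the people.json file and its schema are assumptions for the example:

    ```python
    from pyspark.sql import SparkSession

    # Start (or reuse) a local Spark session.
    spark = SparkSession.builder.appName("spark-demo").getOrCreate()

    # Load a data source and apply a few of Spark's high-level operators.
    df = spark.read.json("people.json")  # hypothetical input file
    df.filter(df["age"] > 21).groupBy("age").count().show()

    spark.stop()
    ```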
  • 7
    CelerData Cloud
    CelerData is a high-performance SQL engine built to power analytics directly on data lakehouses, eliminating the need for traditional data warehouse ingestion pipelines. It delivers sub-second query performance at scale, supports on-the-fly JOINs without costly denormalization, and simplifies architecture by allowing users to run demanding workloads on open format tables. Built on the open source engine StarRocks, the platform outperforms legacy query engines like Trino, ClickHouse, and Apache Druid in latency, concurrency, and cost-efficiency. With a cloud-managed service that runs in your own VPC, you retain infrastructure control and data ownership while CelerData handles maintenance and optimization. The platform is positioned to power real-time OLAP, business intelligence, and customer-facing analytics use cases and is trusted by enterprise customers (including names such as Pinterest, Coinbase, and Fanatics) who have achieved significant latency reductions and cost savings.
  • 8
    Dataplex Universal Catalog
    Dataplex Universal Catalog is Google Cloud’s intelligent governance platform for data and AI artifacts. It centralizes discovery, management, and monitoring across data lakes, warehouses, and databases, giving teams unified access to trusted data. With Vertex AI integration, users can instantly find datasets, models, features, and related assets in one search experience. It supports semantic search, data lineage, quality checks, and profiling to improve trust and compliance. Integrated with BigQuery and BigLake, it enables end-to-end governance for both proprietary and open lakehouse environments. Dataplex Universal Catalog helps organizations democratize data access, enforce governance, and accelerate analytics and AI initiatives.
    Starting Price: $0.060 per hour
  • 9
    MLlib
    Apache Software Foundation
    Apache Spark's MLlib is a scalable machine learning library that integrates seamlessly with Spark's APIs, supporting Java, Scala, Python, and R. It offers a comprehensive suite of algorithms and utilities, including classification, regression, clustering, collaborative filtering, and tools for constructing machine learning pipelines. MLlib's high-quality algorithms leverage Spark's iterative computation capabilities, delivering performance up to 100 times faster than traditional MapReduce implementations. It is designed to operate across diverse environments, running on Hadoop, Apache Mesos, Kubernetes, standalone clusters, or in the cloud, and accessing various data sources such as HDFS, HBase, and local files. This flexibility makes MLlib a robust solution for scalable and efficient machine learning tasks within the Apache Spark ecosystem.
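    As a hedged sketch of the pipeline tools mentioned above, the following assembles a small text classification pipeline in Python; the column names and toy data are illustrative:

    ```python
    from pyspark.sql import SparkSession
    from pyspark.ml import Pipeline
    from pyspark.ml.classification import LogisticRegression
    from pyspark.ml.feature import HashingTF, Tokenizer

    spark = SparkSession.builder.appName("mllib-demo").getOrCreate()

    # Toy labeled data: a text column and a binary label.
    training = spark.createDataFrame(
        [("spark is fast", 1.0), ("slow batch job", 0.0)], ["text", "label"])

    # Chain feature extraction and a classifier into one pipeline.
    tokenizer = Tokenizer(inputCol="text", outputCol="words")
    tf = HashingTF(inputCol="words", outputCol="features")
    lr = LogisticRegression(maxIter=10)
    model = Pipeline(stages=[tokenizer, tf, lr]).fit(training)

    model.transform(training).select("text", "prediction").show()
    ```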
  • 10
    IBM watsonx.data
    Put your data to work, wherever it resides, with the open, hybrid data lakehouse for AI and analytics. Connect your data from anywhere, in any format, and access through a single point of entry with a shared metadata layer. Optimize workloads for price and performance by pairing the right workloads with the right query engine. Embed natural-language semantic search without the need for SQL, so you can unlock generative AI insights faster. Manage and prepare trusted data to improve the relevance and precision of your AI applications. Use all your data, everywhere. With the speed of a data warehouse, the flexibility of a data lake, and special features to support AI, watsonx.data can help you scale AI and analytics across your business. Choose the right engines for your workloads. Flexibly manage cost, performance, and capability with access to multiple open engines including Presto, Presto C++, Spark, Milvus, and more.
  • 11
    IBM Analytics for Apache Spark
    IBM Analytics for Apache Spark is a flexible and integrated Spark service that empowers data science professionals to ask bigger, tougher questions, and deliver business value faster. It’s an easy-to-use, always-on managed service with no long-term commitment or risk, so you can begin exploring right away. Access the power of Apache Spark with no lock-in, backed by IBM’s open-source commitment and decades of enterprise experience. A managed Spark service with Notebooks as a connector means coding and analytics are easier and faster, so you can spend more of your time on delivery and innovation. A managed Apache Spark service gives you easy access to the power of built-in machine learning libraries without the headaches, time, and risk associated with managing a Spark cluster independently.
  • 12
    Amazon EMR
    Amazon EMR is the industry-leading cloud big data platform for processing vast amounts of data using open-source tools such as Apache Spark, Apache Hive, Apache HBase, Apache Flink, Apache Hudi, and Presto. With EMR you can run petabyte-scale analysis at less than half of the cost of traditional on-premises solutions and over 3x faster than standard Apache Spark. For short-running jobs, you can spin up and spin down clusters and pay per second for the instances used. For long-running workloads, you can create highly available clusters that automatically scale to meet demand. If you have existing on-premises deployments of open-source tools such as Apache Spark and Apache Hive, you can also run EMR clusters on AWS Outposts. Analyze data using open-source ML frameworks such as Apache Spark MLlib, TensorFlow, and Apache MXNet. Connect to Amazon SageMaker Studio for large-scale model training, analysis, and reporting.
  • 13
    Delta Lake
    Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and data engineers have to go through a tedious process to ensure data integrity, due to the lack of transactions. Delta Lake brings ACID transactions to your data lakes. It provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks, or to reproduce experiments.
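    A minimal sketch of the transactional writes and snapshot ("time travel") reads described above, assuming the delta-spark pip package is installed; the table path is illustrative:

    ```python
    from delta import configure_spark_with_delta_pip
    from pyspark.sql import SparkSession

    # Configure a Spark session with the Delta Lake extensions.
    builder = (SparkSession.builder.appName("delta-demo")
               .config("spark.sql.extensions",
                       "io.delta.sql.DeltaSparkSessionExtension")
               .config("spark.sql.catalog.spark_catalog",
                       "org.apache.spark.sql.delta.catalog.DeltaCatalog"))
    spark = configure_spark_with_delta_pip(builder).getOrCreate()

    # Each write is an ACID transaction recorded in the Delta log.
    spark.range(0, 5).write.format("delta").mode("overwrite").save("/tmp/tbl")
    spark.range(5, 10).write.format("delta").mode("append").save("/tmp/tbl")

    # Time travel: read the table snapshot as of an earlier version.
    spark.read.format("delta").option("versionAsOf", 0).load("/tmp/tbl").show()
    ```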
  • 14
    Zaloni Arena
    End-to-end DataOps built on an agile platform that improves and safeguards your data assets. Arena is the premier augmented data management platform. Our active data catalog enables self-service data enrichment and consumption to quickly control complex data environments. Customizable workflows that increase the accuracy and reliability of every data set. Use machine-learning to identify and align master data assets for better data decisioning. Complete lineage with detailed visualizations alongside masking and tokenization for superior security. We make data management easy. Arena catalogs your data, wherever it is and our extensible connections enable analytics to happen across your preferred tools. Conquer data sprawl challenges: Our software drives business and analytics success while providing the controls and extensibility needed across today’s decentralized, multi-cloud data complexity.
  • 15
    Stackable
    The Stackable data platform was designed with openness and flexibility in mind. It provides you with a curated selection of the best open source data apps like Apache Kafka, Apache Druid, Trino, and Apache Spark. While other current offerings either push their proprietary solutions or deepen vendor lock-in, Stackable takes a different approach. All data apps work together seamlessly and can be added or removed in no time. Based on Kubernetes, it runs everywhere, on-prem or in the cloud. stackablectl and a Kubernetes cluster are all you need to run your first Stackable data platform; within minutes, you will be ready to start working with your data. Similar to kubectl, stackablectl is designed to easily interface with the Stackable Data Platform. Use the command line utility to deploy and manage Stackable data apps on Kubernetes. With stackablectl, you can create, delete, and update components.
    Starting Price: Free
  • 16
    Wherobots
    Wherobots, the Spatial Intelligence Cloud, enables any data team to analyze data about the physical world faster, at greater scale, and at lower cost compared to traditional solutions. Built by the creators of Apache Sedona, it's a compute lakehouse engine that unifies spatial and non-spatial data, automates data workflows, and runs AI on planetary scale imagery. Spatial data refers to information about places, objects, or activities. Examples include GPS points and tracks, routes, land, road, parcel, crop, and building data, as well as imagery from drones and satellites. This data is fundamental to various industries including aerospace, mobility, ag-tech, insurance, energy, telecommunications, retail, and logistics. In one solution, Wherobots handles these diverse spatial data types and formats, with customers seeing production workloads run up to 20x faster and at lower cost than popular lakehouse engines.
  • 17
    Apache Mahout
    Apache Software Foundation
    Apache Mahout is a powerful, scalable, and versatile machine learning library designed for distributed data processing. It offers a comprehensive set of algorithms for various tasks, including classification, clustering, recommendation, and pattern mining. Built on top of the Apache Hadoop ecosystem, Mahout leverages MapReduce and Spark to enable data processing on large-scale datasets. Apache Mahout™ is a distributed linear algebra framework and mathematically expressive Scala DSL designed to let mathematicians, statisticians, and data scientists quickly implement their own algorithms. Apache Spark is the recommended out-of-the-box distributed back-end, or Mahout can be extended to other distributed backends. Matrix computations are a fundamental part of many scientific and engineering applications, including machine learning, computer vision, and data analysis. Apache Mahout is designed to handle large-scale data processing by leveraging the power of Hadoop and Spark.
  • 18
    PySpark
    PySpark is an interface for Apache Spark in Python. It not only allows you to write Spark applications using Python APIs, but also provides the PySpark shell for interactively analyzing your data in a distributed environment. PySpark supports most of Spark’s features such as Spark SQL, DataFrame, Streaming, MLlib (Machine Learning) and Spark Core. Spark SQL is a Spark module for structured data processing. It provides a programming abstraction called DataFrame and can also act as distributed SQL query engine. Running on top of Spark, the streaming feature in Apache Spark enables powerful interactive and analytical applications across both streaming and historical data, while inheriting Spark’s ease of use and fault tolerance characteristics.
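    A small sketch of the interchangeable DataFrame and distributed SQL interfaces described above; the toy data is illustrative:

    ```python
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

    df = spark.createDataFrame([("Alice", 34), ("Bob", 23)], ["name", "age"])

    # The DataFrame API and Spark SQL are two views of the same engine.
    df.createOrReplaceTempView("people")
    spark.sql("SELECT name FROM people WHERE age > 30").show()
    ```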
  • 19
    Kylo
    Teradata
    Kylo is an open source enterprise-ready data lake management software platform for self-service data ingest and data preparation with integrated metadata management, governance, security, and best practices inspired by Think Big's 150+ big data implementation projects. Self-service data ingest with data cleansing, validation, and automatic profiling. Wrangle data with visual SQL and interactive transforms through a simple user interface. Search and explore data and metadata, view lineage, and profile statistics. Monitor the health of feeds and services in the data lake. Track SLAs and troubleshoot performance. Design batch or streaming pipeline templates in Apache NiFi and register them with Kylo to enable user self-service. Organizations can expend significant engineering effort moving data into Hadoop yet struggle to maintain governance and data quality. Kylo dramatically simplifies data ingest by shifting ingest to data owners through a simple guided UI.
  • 20
    Apache Atlas
    Apache Software Foundation
    Atlas is a scalable and extensible set of core foundational governance services – enabling enterprises to effectively and efficiently meet their compliance requirements within Hadoop and allows integration with the whole enterprise data ecosystem. Apache Atlas provides open metadata management and governance capabilities for organizations to build a catalog of their data assets, classify and govern these assets and provide collaboration capabilities around these data assets for data scientists, analysts and the data governance team. Pre-defined types for various Hadoop and non-Hadoop metadata. Ability to define new types for the metadata to be managed. Types can have primitive attributes, complex attributes, object references; can inherit from other types. Instances of types, called entities, capture metadata object details and their relationships. REST APIs to work with types and instances allow easier integration.
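    A hedged sketch of the REST APIs mentioned above, using Atlas's v2 endpoints; the host, port, and credentials are assumptions for a default sandbox installation:

    ```python
    import requests

    BASE = "http://localhost:21000/api/atlas/v2"  # default Atlas port
    AUTH = ("admin", "admin")                     # sandbox credentials only

    # List the type definitions that make up the metadata model.
    typedefs = requests.get(f"{BASE}/types/typedefs", auth=AUTH).json()
    print([t["name"] for t in typedefs.get("entityDefs", [])])

    # Basic search for entity instances of a given type.
    hits = requests.get(f"{BASE}/search/basic",
                        params={"typeName": "hive_table"}, auth=AUTH).json()
    for e in hits.get("entities", []):
        print(e["typeName"], e["attributes"].get("qualifiedName"))
    ```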
  • 21
    Tabular
    Tabular is an open table store from the creators of Apache Iceberg. Connect multiple computing engines and frameworks. Decrease query time and storage costs by up to 50%. Centralize enforcement of data access (RBAC) policies. Connect any query engine or framework, including Athena, BigQuery, Redshift, Snowflake, Databricks, Trino, Spark, and Python. Smart compaction, clustering, and other automated data services reduce storage costs and query times by up to 50%. Unify data access at the database or table level. RBAC controls are simple to manage, consistently enforced, and easy to audit. Centralize your security down to the table. Tabular is easy to use, plus it features high-powered ingestion, performance, and RBAC under the hood. Tabular gives you the flexibility to work with multiple “best of breed” compute engines based on their strengths. Assign privileges at the data warehouse, database, table, or column level.
    Starting Price: $100 per month
  • 22
    E-MapReduce
    EMR is an all-in-one enterprise-ready big data platform that provides cluster, job, and data management services based on open-source ecosystems such as Hadoop, Spark, Kafka, Flink, and Storm. Alibaba Cloud Elastic MapReduce (EMR) is a big data processing solution that runs on the Alibaba Cloud platform. EMR is built on Alibaba Cloud ECS instances and is based on open-source Apache Hadoop and Apache Spark. EMR allows you to use the Hadoop and Spark ecosystem components, such as Apache Hive, Apache Kafka, Flink, Druid, and TensorFlow, to analyze and process data. You can use EMR to process data stored on different Alibaba Cloud data storage services, such as Object Storage Service (OSS), Log Service (SLS), and Relational Database Service (RDS). You can quickly create clusters without the need to configure hardware and software. All maintenance operations are completed through its web interface.
  • 23
    e6data
    Limited competition due to deep barriers to entry, specialized know-how, massive capital needs, and long time-to-market. Existing platforms are indistinguishable in price and performance, reducing the incentive to switch. Migrating from one engine’s SQL dialect to another engine’s SQL involves months of effort. Truly format-neutral computing, interoperable with all major open standards. Enterprise data leaders are hit by an unprecedented explosion in computing demand for data intelligence. They are surprised to find that 10% of their heavy, compute-intensive use cases consume 80% of the cost, engineering effort, and stakeholder complaints. Unfortunately, such workloads are also mission-critical and non-discretionary. e6data amplifies ROI on enterprises' existing data platforms and architecture. e6data’s truly format-neutral compute has the unique distinction of being equally efficient and performant across leading data lakehouse table formats.
  • 24
    Cloudera DataFlow
    Cloudera DataFlow for the Public Cloud (CDF-PC) is a cloud-native universal data distribution service powered by Apache NiFi ​​that lets developers connect to any data source anywhere with any structure, process it, and deliver to any destination. CDF-PC offers a flow-based low-code development paradigm that aligns best with how developers design, develop, and test data distribution pipelines. With over 400 connectors and processors across the ecosystem of hybrid cloud services—including data lakes, lakehouses, cloud warehouses, and on-premises sources—CDF-PC provides indiscriminate data distribution. These data distribution flows can then be version-controlled into a catalog where operators can self-serve deployments to different runtimes.
  • 25
    Azure Databricks
    Unlock insights from all your data and build artificial intelligence (AI) solutions with Azure Databricks, set up your Apache Spark™ environment in minutes, autoscale, and collaborate on shared projects in an interactive workspace. Azure Databricks supports Python, Scala, R, Java, and SQL, as well as data science frameworks and libraries including TensorFlow, PyTorch, and scikit-learn. Azure Databricks provides the latest versions of Apache Spark and allows you to seamlessly integrate with open source libraries. Spin up clusters and build quickly in a fully managed Apache Spark environment with the global scale and availability of Azure. Clusters are set up, configured, and fine-tuned to ensure reliability and performance without the need for monitoring. Take advantage of autoscaling and auto-termination to improve total cost of ownership (TCO).
  • 26
    Spark Streaming
    Apache Software Foundation
    Spark Streaming brings Apache Spark's language-integrated API to stream processing, letting you write streaming jobs the same way you write batch jobs. It supports Java, Scala and Python. Spark Streaming recovers both lost work and operator state (e.g. sliding windows) out of the box, without any extra code on your part. By running on Spark, Spark Streaming lets you reuse the same code for batch processing, join streams against historical data, or run ad-hoc queries on stream state. Build powerful interactive applications, not just analytics. Spark Streaming is developed as part of Apache Spark. It thus gets tested and updated with each Spark release. You can run Spark Streaming on Spark's standalone cluster mode or other supported cluster resource managers. It also includes a local run mode for development. In production, Spark Streaming uses ZooKeeper and HDFS for high availability.
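    The classic word count sketch for the language-integrated API described above; it assumes text lines arriving on a local socket (for example via `nc -lk 9999`):

    ```python
    from pyspark import SparkContext
    from pyspark.streaming import StreamingContext

    sc = SparkContext("local[2]", "network-wordcount")
    ssc = StreamingContext(sc, 1)  # 1-second micro-batches

    # Count words in each batch, reusing the same operators as batch jobs.
    lines = ssc.socketTextStream("localhost", 9999)
    counts = (lines.flatMap(lambda line: line.split(" "))
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))
    counts.pprint()

    ssc.start()
    ssc.awaitTermination()
    ```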
  • 27
    Spark NLP
    John Snow Labs
    Experience the power of large language models like never before, unleashing the full potential of Natural Language Processing (NLP) with Spark NLP, the open source library that delivers scalable LLMs. The full code base is open under the Apache 2.0 license, including pre-trained models and pipelines. The only NLP library built natively on Apache Spark. The most widely used NLP library in the enterprise. Spark ML provides a set of machine learning applications that can be built using two main components, estimators and transformers. An estimator provides a method that fits a model to a piece of data; a transformer is generally the result of that fitting process and applies changes to the target dataset. These components are embedded in Spark NLP. Pipelines are a mechanism for combining multiple estimators and transformers in a single workflow, allowing multiple chained transformations along a machine learning task.
    Starting Price: Free
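    A hedged sketch using one of the library's published pre-trained pipelines; model downloads and the exact output keys can vary by version:

    ```python
    import sparknlp
    from sparknlp.pretrained import PretrainedPipeline

    # Starts a Spark session with the Spark NLP jars loaded.
    spark = sparknlp.start()

    # Download and run a pre-trained pipeline of estimators/transformers.
    pipeline = PretrainedPipeline("explain_document_dl", lang="en")
    result = pipeline.annotate("John Snow Labs builds Spark NLP in California.")
    print(result["entities"])
    ```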
  • 28
    Deequ
    Deequ is a library built on top of Apache Spark for defining "unit tests for data", which measure data quality in large datasets. We are happy to receive feedback and contributions. Deequ depends on Java 8. Deequ version 2.x only runs with Spark 3.1, and vice versa. If you rely on a previous Spark version, please use a Deequ 1.x version (legacy version is maintained in legacy-spark-3.0 branch). We provide legacy releases compatible with Apache Spark versions 2.2.x to 3.0.x. The Spark 2.2.x and 2.3.x releases depend on Scala 2.11 and the Spark 2.4.x, 3.0.x, and 3.1.x releases depend on Scala 2.12. Deequ's purpose is to "unit-test" data to find errors early, before the data gets fed to consuming systems or machine learning algorithms. In the following, we will walk you through a toy example to showcase the most basic usage of our library.
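    A hedged sketch of the "unit tests for data" idea using PyDeequ, the Python wrapper for Deequ; `spark` and `df` are assumed to be an existing session and DataFrame, and the column names are illustrative:

    ```python
    from pydeequ.checks import Check, CheckLevel
    from pydeequ.verification import VerificationResult, VerificationSuite

    # Declare constraints the data must satisfy.
    check = (Check(spark, CheckLevel.Error, "basic quality check")
             .isComplete("id")        # no NULLs in the id column
             .isUnique("id")          # id values are distinct
             .isNonNegative("amount"))  # no negative amounts

    # Run the checks on the DataFrame and inspect the results.
    result = VerificationSuite(spark).onData(df).addCheck(check).run()
    VerificationResult.checkResultsAsDataFrame(spark, result).show()
    ```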
  • 29
    Unity Catalog
    Databricks
    Databricks Unity Catalog is the industry’s only unified and open governance solution for data and AI, built into the Databricks Data Intelligence Platform. With Unity Catalog, organizations can seamlessly govern both structured and unstructured data in any format, as well as machine learning models, notebooks, dashboards, and files across any cloud or platform. Data scientists, analysts, and engineers can securely discover, access, and collaborate on trusted data and AI assets across platforms, leveraging AI to boost productivity and unlock the full potential of the lakehouse environment. This unified and open approach to governance promotes interoperability and accelerates data and AI initiatives while simplifying regulatory compliance. Easily discover and classify both structured and unstructured data in any format, including machine learning models, notebooks, dashboards, and files across all cloud platforms.
  • 30
    Archon Data Store
    Platform 3 Solutions
    Archon Data Store is a next-generation enterprise data archiving platform designed to help organizations manage rapid data growth, reduce legacy application costs, and meet global compliance standards. Built on a modern Lakehouse architecture, Archon Data Store unifies data lakes and data warehouses to deliver secure, scalable, and analytics-ready archival storage. The platform supports on-premise, cloud, and hybrid deployments with AES-256 encryption, audit trails, metadata governance, and role-based access control. Archon Data Store offers intelligent storage tiering, high-performance querying, and seamless integration with BI tools. It enables efficient application decommissioning, cloud migration, and digital modernization while transforming archived data into a strategic asset. With Archon Data Store, organizations can ensure long-term compliance, optimize storage costs, and unlock AI-driven insights from historical data.
  • 31
    Cazpian
    Cazpian is a unified data platform designed for modern data teams working with open lakehouse architectures. The platform brings together data governance, compute environments, catalog management, and AI capabilities into a single system. Cazpian allows organizations to connect and query data across object storage, Iceberg tables, and relational databases through one SQL interface. Its unified catalog enables teams to manage data sources without moving or duplicating datasets. The platform also includes tools for scheduling jobs, running queries, and managing compute resources. Built-in AI agents provide evidence-backed insights and help teams analyze data more efficiently. By combining governance, analytics, and automation in one platform, Cazpian helps organizations manage large-scale data environments more effectively.
  • 32
    Apache PredictionIO
    Apache PredictionIO® is an open-source machine learning server built on top of a state-of-the-art open-source stack for developers and data scientists to create predictive engines for any machine learning task. It lets you quickly build and deploy an engine as a web service on production with customizable templates. Respond to dynamic queries in real-time once deployed as a web service, evaluate and tune multiple engine variants systematically, and unify data from multiple platforms in batch or in real-time for comprehensive predictive analytics. Speed up machine learning modeling with systematic processes and pre-built evaluation measures, support machine learning and data processing libraries such as Spark MLlib and OpenNLP. Implement your own machine learning models and seamlessly incorporate them into your engine. Simplify data infrastructure management. Apache PredictionIO® can be installed as a full machine learning stack, bundled with Apache Spark, MLlib, HBase, Akka HTTP, etc.
    Starting Price: Free
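    A hedged sketch of querying an engine once deployed as a web service; deployed engines listen on port 8000 by default, and the query fields below depend on the engine template you chose:

    ```python
    import requests

    # POST a JSON query to the deployed engine's REST endpoint.
    resp = requests.post("http://localhost:8000/queries.json",
                         json={"user": "u1", "num": 4})  # template-specific fields
    print(resp.json())  # predicted results, e.g. scored recommendations
    ```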
  • 33
    Hadoop
    Apache Software Foundation
    The Apache Hadoop software library is a framework that allows for the distributed processing of large data sets across clusters of computers using simple programming models. It is designed to scale up from single servers to thousands of machines, each offering local computation and storage. Rather than rely on hardware to deliver high-availability, the library itself is designed to detect and handle failures at the application layer, so delivering a highly-available service on top of a cluster of computers, each of which may be prone to failures. A wide variety of companies and organizations use Hadoop for both research and production. Users are encouraged to add themselves to the Hadoop PoweredBy wiki page. Apache Hadoop 3.3.4 incorporates a number of significant enhancements over the previous major release line (hadoop-3.2).
  • 34
    FutureAnalytica
    Ours is the world’s first & only end-to-end platform for all your AI-powered innovation needs — right from data cleansing & structuring, to creating & deploying advanced data-science models, to infusing advanced analytics algorithms with built-in Recommendation AI, to deducing the outcomes with easy-to-deduce visualization dashboards, as well as Explainable AI to backtrack how the outcomes were derived, our no-code AI platform can do it all! Our platform offers a holistic, seamless data science experience. With key features like a robust Data Lakehouse, a unique AI Studio, a comprehensive AI Marketplace, and a world-class data-science support team (on a need basis), FutureAnalytica is geared to reduce your time, efforts & costs across your data-science & AI journey. Initiate discussions with the leadership, followed by a quick technology assessment in 1–3 days. Build ready-to-integrate AI solutions using FA's fully automated data science & AI platform in 10–18 days.
  • 35
    Red Hat OpenShift Streams
    Red Hat® OpenShift® Streams for Apache Kafka is a managed cloud service that provides a streamlined developer experience for building, deploying, and scaling new cloud-native applications or modernizing existing systems. Red Hat OpenShift Streams for Apache Kafka makes it easy to create, discover, and connect to real-time data streams no matter where they are deployed. Streams are a key component for delivering event-driven and data analytics applications. The combination of seamless operations across distributed microservices, large data transfer volumes, and managed operations allows teams to focus on team strengths, speed up time to value, and lower operational costs. OpenShift Streams for Apache Kafka includes a Kafka ecosystem and is part of a family of cloud services—and the Red Hat OpenShift product family—which helps you build a wide range of data-driven solutions.
  • 36
    Apache Iceberg
    Apache Software Foundation
    Iceberg is a high-performance format for huge analytic tables. Iceberg brings the reliability and simplicity of SQL tables to big data, while making it possible for engines like Spark, Trino, Flink, Presto, Hive and Impala to safely work with the same tables, at the same time. Iceberg supports flexible SQL commands to merge new data, update existing rows, and perform targeted deletes. Iceberg can eagerly rewrite data files for read performance, or it can use delete deltas for faster updates. Iceberg handles the tedious and error-prone task of producing partition values for rows in a table and skips unnecessary partitions and files automatically. No extra filters are needed for fast queries, and the table layout can be updated as data or queries change.
    Starting Price: Free
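    A sketch of the flexible SQL commands described above, issued from Spark; the catalog name `demo` and the `updates` view are assumptions for the example:

    ```python
    # Assumes `spark` is a session configured with an Iceberg catalog `demo`.
    spark.sql("""
        CREATE TABLE IF NOT EXISTS demo.db.events (
            id BIGINT, status STRING) USING iceberg
    """)

    # Merge new data: update matching rows, insert the rest.
    spark.sql("""
        MERGE INTO demo.db.events t
        USING updates u           -- an existing view of incoming rows
        ON t.id = u.id
        WHEN MATCHED THEN UPDATE SET t.status = u.status
        WHEN NOT MATCHED THEN INSERT *
    """)

    # Targeted deletes work the same way.
    spark.sql("DELETE FROM demo.db.events WHERE status = 'stale'")
    ```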
  • 37
    ER/Studio Enterprise Edition
    ER/Studio is an enterprise data modeling and architecture platform that enables organizations to design, manage, and govern data assets across complex, distributed environments, including data warehouses, lakehouses, data mesh frameworks, and data vault architectures. It connects business requirements to technical implementation through conceptual, logical, and physical models, providing clarity from strategy through deployment. By establishing a consistent modeling foundation, ER/Studio creates a reliable, shared view of enterprise data that supports analytics, AI initiatives, modernization, compliance, and operational systems. Design data models and keep teams aligned with ER/Studio’s multi-user shared repository and web-based collaboration portal, Team Server. The repository supports version control, role-based access, parallel development, and change tracking so modelers can work simultaneously without conflict, preserving integrity and full history.
    Starting Price: $2,687 per user
  • 38
    IBM Data Refinery
    Available in IBM Watson® Studio and Watson™ Knowledge Catalog, the data refinery tool saves data preparation time by quickly transforming large amounts of raw data into consumable, quality information that’s ready for analytics. Interactively discover, cleanse, and transform your data with over 100 built-in operations. No coding skills are required. Understand the quality and distribution of your data using dozens of built-in charts, graphs, and statistics. Automatically detect data types and business classifications. Access and explore data residing in a wide spectrum of data sources within your organization or the cloud. Automatically enforce policies set by data governance professionals. Schedule data flow executions for repeatable outcomes. Monitor results and receive notifications. Easily scale out via Apache Spark to apply transformation recipes on full data sets. No management of Apache Spark clusters needed.
  • 39
    R2 SQL
    Cloudflare
    R2 SQL is Cloudflare’s serverless, distributed analytics query engine (currently in open beta) that enables you to run SQL queries over Apache Iceberg tables stored in R2 Data Catalog without needing to manage your own compute clusters. It is built to efficiently query large volumes of data by leveraging metadata pruning, partition-level statistics, file and row-group filtering, and Cloudflare’s globally distributed compute infrastructure to parallelize execution. The system works by integrating with R2 object storage and an Iceberg catalog layer, so you can ingest data via Cloudflare Pipelines into Iceberg tables, and then query that data with minimal overhead. Queries can be issued via the Wrangler CLI or HTTP API (with an API token granting permissions across R2 SQL, Data Catalog, and storage). During the open beta period, using R2 SQL itself is not billed; only storage and standard R2 operations incur charges.
    Starting Price: Free
  • 40
    Privacera
    At the intersection of data governance, privacy, and security, Privacera’s unified data access governance platform maximizes the value of data by providing secure data access control and governance across hybrid- and multi-cloud environments. The hybrid platform centralizes access and natively enforces policies across multiple cloud services—AWS, Azure, Google Cloud, Databricks, Snowflake, Starburst and more—to democratize trusted data enterprise-wide without compromising compliance with regulations such as GDPR, CCPA, LGPD, or HIPAA. Trusted by Fortune 500 customers across finance, insurance, retail, healthcare, media, public and the federal sector, Privacera is the industry’s leading data access governance platform that delivers unmatched scalability, elasticity, and performance. Headquartered in Fremont, California, Privacera was founded in 2016 to manage cloud data privacy and security by the creators of Apache Ranger™ and Apache Atlas™.
  • 41
    Deeplearning4j
    DL4J takes advantage of the latest distributed computing frameworks including Apache Spark and Hadoop to accelerate training. On multi-GPUs, it is equal to Caffe in performance. The libraries are completely open-source, Apache 2.0, and maintained by the developer community and Konduit team. Deeplearning4j is written in Java and is compatible with any JVM language, such as Scala, Clojure, or Kotlin. The underlying computations are written in C, C++, and Cuda. Keras will serve as the Python API. Eclipse Deeplearning4j is the first commercial-grade, open-source, distributed deep-learning library written for Java and Scala. Integrated with Hadoop and Apache Spark, DL4J brings AI to business environments for use on distributed GPUs and CPUs. There are a lot of parameters to adjust when you're training a deep-learning network. We've done our best to explain them, so that Deeplearning4j can serve as a DIY tool for Java, Scala, Clojure, and Kotlin programmers.
  • 42
    Apache Ranger
    The Apache Software Foundation
    Apache Ranger™ is a framework to enable, monitor, and manage comprehensive data security across the Hadoop platform. The vision with Ranger is to provide comprehensive security across the Apache Hadoop ecosystem. With the advent of Apache YARN, the Hadoop platform can now support a true data lake architecture. Enterprises can potentially run multiple workloads in a multi-tenant environment. Data security within Hadoop needs to evolve to support multiple use cases for data access, while also providing a framework for central administration of security policies and monitoring of user access. Centralized security administration to manage all security-related tasks in a central UI or using REST APIs. Fine-grained authorization to do a specific action and/or operation with a Hadoop component/tool, managed through a central administration tool. Standardized authorization method across all Hadoop components. Enhanced support for different authorization methods, such as role-based access control.
  • 43
    Narrative
    Create new streams of revenue using the data you already collect with your own branded data shop. Narrative is focused on the fundamental principles that make buying and selling data easier, safer, and more strategic. Ensure that the data you access meets your standards, whatever they may be. Know exactly who you’re working with and how the data was collected. Easily access new supply and demand for a more agile and accessible data strategy. Own your data strategy entirely with end-to-end control of inputs and outputs. Our platform simplifies and automates the most time- and labor-intensive aspects of data acquisition, so you can access new data sources in days, not months. With filters, budget controls, and automatic deduplication, you’ll only ever pay for the data you need, and nothing that you don’t.
  • 44
    Apache Phoenix
    Apache Software Foundation
    Apache Phoenix enables OLTP and operational analytics in Hadoop for low-latency applications by combining the best of both worlds: the power of standard SQL and JDBC APIs with full ACID transaction capabilities, and the flexibility of late-bound, schema-on-read capabilities from the NoSQL world, by leveraging HBase as its backing store. Apache Phoenix is fully integrated with other Hadoop products such as Spark, Hive, Pig, Flume, and MapReduce. Its aim is to become the trusted data platform for OLTP and operational analytics for Hadoop through well-defined, industry-standard APIs. Apache Phoenix takes your SQL query, compiles it into a series of HBase scans, and orchestrates the running of those scans to produce regular JDBC result sets. Direct use of the HBase API, along with coprocessors and custom filters, results in performance on the order of milliseconds for small queries, or seconds for tens of millions of rows.
    Starting Price: Free
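    A hedged sketch using the phoenixdb driver against the Phoenix Query Server (default port 8765); the table and columns are illustrative:

    ```python
    import phoenixdb

    conn = phoenixdb.connect("http://localhost:8765/", autocommit=True)
    cur = conn.cursor()

    cur.execute(
        "CREATE TABLE IF NOT EXISTS users (id INTEGER PRIMARY KEY, name VARCHAR)")
    cur.execute("UPSERT INTO users VALUES (1, 'alice')")  # Phoenix upsert semantics
    cur.execute("SELECT * FROM users WHERE id = 1")       # compiled to HBase scans
    print(cur.fetchall())
    ```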
  • 45
    Oracle Cloud Infrastructure Data Flow
    Oracle Cloud Infrastructure (OCI) Data Flow is a fully managed Apache Spark service to perform processing tasks on extremely large data sets without infrastructure to deploy or manage. This enables rapid application delivery because developers can focus on app development, not infrastructure management. OCI Data Flow handles infrastructure provisioning, network setup, and teardown when Spark jobs are complete. Storage and security are also managed, which means less work is required for creating and managing Spark applications for big data analysis. With OCI Data Flow, there are no clusters to install, patch, or upgrade, which saves time and operational costs for projects. OCI Data Flow runs each Spark job in private dedicated resources, eliminating the need for upfront capacity planning. With OCI Data Flow, IT only needs to pay for the infrastructure resources that Spark jobs use while they are running.
    Starting Price: $0.0085 per GB per hour
  • 46
    Cloudera
    Manage and secure the data lifecycle from the Edge to AI in any cloud or data center. Operates across all major public clouds and the private cloud with a public cloud experience everywhere. Integrates data management and analytic experiences across the data lifecycle for data anywhere. Delivers security, compliance, migration, and metadata management across all environments. Open source, open integrations, extensible, & open to multiple data stores and compute architectures. Deliver easier, faster, and safer self-service analytics experiences. Provide self-service access to integrated, multi-function analytics on centrally managed and secured business data while deploying a consistent experience anywhere—on premises or in hybrid and multi-cloud. Enjoy consistent data security, governance, lineage, and control, while deploying the powerful, easy-to-use cloud analytics experiences business users require and eliminating their need for shadow IT solutions.
  • 47
    Apache Eagle
    Apache Software Foundation
    Apache Eagle (called Eagle in the following) is an open source analytics solution for identifying security and performance issues instantly on big data platforms, e.g. Apache Hadoop, Apache Spark, etc. It analyzes data activities, YARN applications, JMX metrics, daemon logs, etc., and provides a state-of-the-art alert engine to identify security breaches and performance issues and surface insights. Big data platforms normally generate huge amounts of operational logs and metrics in real time. Eagle was founded to solve hard problems in securing and tuning performance for big data platforms by ensuring metrics and logs are always available and alerting immediately even under huge traffic. Stream operational logs and data activities into the Eagle platform, including but not limited to audit logs, map/reduce jobs, YARN resource usage, JMX metrics, and various daemon logs. Generate alerts, show historical trends, and correlate alerts with raw data.
  • 48
    Dremio
    Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.
  • 49
    Preset
    Quickly build and share beautiful, customized, and dynamic dashboards that showcase your data with just a few clicks. Easily explore with our no-code viz builder or run deeper analysis using our state-of-the-art SQL editor. Leverage the investments you have made in your data infrastructure with a lightweight and powerful visualization layer on top. Superset is agnostic to your underlying data architecture and doesn't require an additional ingestion layer. Apache Superset is a powerful open source data visualization tool that originated at Airbnb. Founded by the original creator of Superset, Preset delivers a complete, easy-to-consume, enterprise-ready platform for Superset.
    Starting Price: $25/month/user
  • 50
    Catalog
    Coalesce
    Catalog from Coalesce (formerly CastorDoc) is a data catalog designed for mass adoption across the whole company. Get an overview of your entire data environment. Search for data instantly thanks to our powerful search engine. Onboard to a new data infrastructure and access data in a breeze. Go beyond your traditional data catalog. Modern data teams now have numerous data sources; build one source of truth. With its delightful and automated documentation experience, Catalog makes it dead simple to trust data. Column-level, cross-system data lineage in minutes. Get a bird’s-eye view of your data pipelines to build trust in your data. Troubleshoot data issues, perform impact analyses, and comply with GDPR in one tool. Optimize performance, cost, compliance, and security for your data. Keep your data stack healthy with our automated infrastructure monitoring system.
    Starting Price: $699 per month