Alternatives to Kylo
Compare Kylo alternatives for your business or organization using the curated list below. SourceForge ranks the best alternatives to Kylo in 2026. Compare features, ratings, user reviews, pricing, and more from Kylo competitors and alternatives in order to make an informed decision for your business.
1. Teradata VantageCloud (Teradata)
Teradata VantageCloud: The complete cloud analytics and data platform for AI. Teradata VantageCloud is an enterprise-grade, cloud-native data and analytics platform that unifies data management, advanced analytics, and AI/ML capabilities in a single environment. Designed for scalability and flexibility, VantageCloud supports multi-cloud and hybrid deployments, enabling organizations to manage structured and semi-structured data across AWS, Azure, Google Cloud, and on-premises systems. It offers full ANSI SQL support, integrates with open-source tools like Python and R, and provides built-in governance for secure, trusted AI. VantageCloud empowers users to run complex queries, build data pipelines, and operationalize machine learning models—all while maintaining interoperability with modern data ecosystems.
2. AnalyticsCreator
AnalyticsCreator is a metadata-driven data warehouse automation application for teams working in the Microsoft data ecosystem. It enables data engineers to design, generate, and maintain production-ready data products across Microsoft SQL Server, Azure Data Factory, and Microsoft Fabric. By using centralized metadata, AnalyticsCreator generates ELT pipelines, dimensional models, historization logic, and analytical models in a consistent, version-controlled way. This reduces manual implementation effort and tool sprawl while ensuring transparency through built-in lineage tracking and clear visibility into data dependencies and change impact. With CI/CD integration via Azure DevOps and GitHub, plus support for custom SQL, AnalyticsCreator helps data teams scale delivery, enforce standards, and maintain control as complexity grows.
3. Manta
Manta is an automated, code-level data lineage platform for visualizing, optimizing, and modernizing how data moves through your organization. By automatically scanning your data environment with 50+ out-of-the-box scanners, Manta builds a powerful map of all data pipelines to drive efficiency and productivity. Visit manta.io to learn more. With the Manta platform, you can make your data a truly enterprise-wide asset, bridge the understanding gap, enable self-service, and easily:
• Increase productivity
• Accelerate development
• Shorten time-to-market
• Reduce costs and manual effort
• Run instant and accurate root cause and impact analyses
• Scope and perform effective cloud migrations
• Improve data governance and regulatory compliance (GDPR, CCPA, HIPAA, and more)
• Increase data quality
• Enhance data privacy and data security
4. Alation
The Alation Agentic Data Intelligence Platform enables organizations to scale and accelerate their AI and data initiatives. By unifying search, cataloging, governance, lineage, and analytics, it transforms metadata into a strategic asset for decision-making. The platform’s AI-powered agents—including Documentation, Data Quality, and Data Products Builder—automate complex data management tasks. With active metadata, workflow automation, and more than 120 pre-built connectors, Alation integrates seamlessly into modern enterprise environments. It helps organizations build trusted AI models by ensuring data quality, transparency, and compliance across the business. Trusted by 40% of the Fortune 100, Alation empowers teams to make faster, more confident decisions with trusted data.
5. Snowflake
Snowflake is a comprehensive AI Data Cloud platform designed to eliminate data silos and simplify data architectures, enabling organizations to get more value from their data. The platform offers interoperable storage that provides near-infinite scale and access to diverse data sources, both inside and outside Snowflake. Its elastic compute engine delivers high performance for any number of users, workloads, and data volumes with seamless scalability. Snowflake’s Cortex AI accelerates enterprise AI by providing secure access to leading large language models (LLMs) and data chat services. The platform’s cloud services automate complex resource management, ensuring reliability and cost efficiency. Trusted by over 11,000 global customers across industries, Snowflake helps businesses collaborate on data, build data applications, and maintain a competitive edge. Starting Price: $2 compute/month
6. Apache Atlas (Apache Software Foundation)
Atlas is a scalable and extensible set of core foundational governance services, enabling enterprises to effectively and efficiently meet their compliance requirements within Hadoop while integrating with the whole enterprise data ecosystem. Apache Atlas provides open metadata management and governance capabilities for organizations to build a catalog of their data assets, classify and govern these assets, and provide collaboration capabilities around them for data scientists, analysts, and the data governance team. It ships with pre-defined types for various Hadoop and non-Hadoop metadata, plus the ability to define new types for the metadata to be managed. Types can have primitive attributes, complex attributes, and object references, and can inherit from other types. Instances of types, called entities, capture metadata object details and their relationships. REST APIs to work with types and instances allow easier integration.
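As a sketch of the type-and-entity model described above, the snippet below builds a minimal entity payload of the kind Atlas's REST API accepts. The host URL, credentials, and the use of the generic DataSet supertype are illustrative assumptions, not details taken from this listing; check type names against your Atlas deployment.

```python
import json

# Assumed Atlas v2 entity endpoint; adjust host and port for your deployment.
ATLAS_URL = "http://atlas-host:21000/api/atlas/v2/entity"

def make_entity(type_name: str, qualified_name: str, **attrs) -> dict:
    """Build an Atlas entity payload: a typed metadata object whose
    attributes describe one data asset."""
    attributes = {
        "qualifiedName": qualified_name,
        "name": qualified_name.split("@")[0],  # drop the cluster suffix
    }
    attributes.update(attrs)
    return {"entity": {"typeName": type_name, "attributes": attributes}}

payload = make_entity("DataSet", "sales.orders@prod",
                      description="Daily order snapshots")
body = json.dumps(payload)

# To register the entity you would POST `body` to ATLAS_URL, e.g. with
# requests.post(ATLAS_URL, data=body, auth=("admin", "admin"),
#               headers={"Content-Type": "application/json"})
print(body)
```

Classification and lineage are layered on the same model: classifications attach to entities, and lineage is expressed as relationships between entity instances.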
7. Zaloni Arena (Zaloni)
End-to-end DataOps built on an agile platform that improves and safeguards your data assets. Arena is the premier augmented data management platform. Our active data catalog enables self-service data enrichment and consumption to quickly control complex data environments. Customizable workflows increase the accuracy and reliability of every data set. Use machine learning to identify and align master data assets for better data decisioning. Complete lineage with detailed visualizations alongside masking and tokenization for superior security. We make data management easy. Arena catalogs your data wherever it is, and our extensible connections enable analytics to happen across your preferred tools. Conquer data sprawl challenges: our software drives business and analytics success while providing the controls and extensibility needed across today’s decentralized, multi-cloud data complexity.
8. Dremio
Dremio delivers lightning-fast queries and a self-service semantic layer directly on your data lake storage. No moving data to proprietary data warehouses, no cubes, no aggregation tables or extracts. Just flexibility and control for data architects, and self-service for data consumers. Dremio technologies like Data Reflections, Columnar Cloud Cache (C3) and Predictive Pipelining work alongside Apache Arrow to make queries on your data lake storage very, very fast. An abstraction layer enables IT to apply security and business meaning, while enabling analysts and data scientists to explore data and derive new virtual datasets. Dremio’s semantic layer is an integrated, searchable catalog that indexes all of your metadata, so business users can easily make sense of your data. Virtual datasets and spaces make up the semantic layer, and are all indexed and searchable.
9. Dataplex Universal Catalog (Google)
Dataplex Universal Catalog is Google Cloud’s intelligent governance platform for data and AI artifacts. It centralizes discovery, management, and monitoring across data lakes, warehouses, and databases, giving teams unified access to trusted data. With Vertex AI integration, users can instantly find datasets, models, features, and related assets in one search experience. It supports semantic search, data lineage, quality checks, and profiling to improve trust and compliance. Integrated with BigQuery and BigLake, it enables end-to-end governance for both proprietary and open lakehouse environments. Dataplex Universal Catalog helps organizations democratize data access, enforce governance, and accelerate analytics and AI initiatives. Starting Price: $0.060 per hour
10. Tokern
Tokern is an open source data governance suite for databases and data lakes: a simple-to-use toolkit to collect, organize, and analyze a data lake's metadata. Run it as a command-line app for quick tasks, or as a service for continuous collection of metadata. Analyze lineage, access control, and PII datasets using reporting dashboards or programmatically in Jupyter notebooks. Improve the ROI of your data, comply with regulations like HIPAA, CCPA, and GDPR, and protect critical data from insider threats with confidence. Centralized metadata management of users, datasets, and jobs powers other data governance features. Track column-level data lineage for Snowflake, AWS Redshift, and BigQuery. Build lineage from query history or ETL scripts. Explore lineage using interactive graphs or programmatically using APIs or SDKs.
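The "build lineage from query history" idea can be illustrated with a small sketch. This is not Tokern's actual API, just a generic demonstration of pairing each INSERT target with the tables its SELECT reads from; the table names and queries are invented.

```python
import re
from collections import defaultdict

# Hypothetical query history; real tools read this from the warehouse's
# query log (e.g. Snowflake's QUERY_HISTORY view) or from ETL scripts.
QUERY_HISTORY = [
    "INSERT INTO mart.daily_sales SELECT * FROM raw.orders JOIN raw.customers ON 1=1",
    "INSERT INTO mart.churn_features SELECT id FROM mart.daily_sales",
]

def lineage_edges(queries):
    """Return {target_table: sorted list of source tables} by pairing each
    INSERT target with the FROM/JOIN tables in the same statement."""
    edges = defaultdict(set)
    for q in queries:
        target = re.search(r"INSERT\s+INTO\s+([\w.]+)", q, re.I)
        sources = re.findall(r"(?:FROM|JOIN)\s+([\w.]+)", q, re.I)
        if target:
            edges[target.group(1)].update(sources)
    return {t: sorted(s) for t, s in edges.items()}

print(lineage_edges(QUERY_HISTORY))
# mart.daily_sales  <- raw.customers, raw.orders
# mart.churn_features <- mart.daily_sales
```

A production implementation would use a real SQL parser rather than regexes (subqueries, CTEs, and quoting defeat this sketch) and would track columns, not just tables.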
11. Data Lake Governance Center (DGC)
Simplify big data operations and build intelligent knowledge libraries with Data Lake Governance Center (DGC), a one-stop data lake operations platform that manages data design, development, integration, quality, and assets. Build an enterprise-class data lake governance platform with an easy-to-use visual interface. Streamline data lifecycle processes, utilize metrics and analytics, and ensure good governance across your enterprise. Define and monitor data standards, and get real-time alerts. Build data lakes quicker by easily setting up data integrations, models, and cleaning rules, to enable the discovery of new reliable data sources. Maximize the business value of data. With DGC, end-to-end data operations solutions can be designed for scenarios such as smart government, smart taxation, and smart campus. Gain new insights into sensitive data across your entire organization. DGC allows enterprises to define business catalogs, classifications, and terms. Starting Price: $428 one-time payment
12. Atlan
The modern data workspace. Make all your data assets, from data tables to BI reports, instantly discoverable. Our powerful search algorithms combined with an easy browsing experience make finding the right asset a breeze. Atlan auto-generates data quality profiles which make detecting bad data dead easy. From automatic variable type detection and frequency distribution to missing values and outlier detection, we’ve got you covered. Atlan takes the pain away from governing and managing your data ecosystem! Atlan’s bots parse through SQL query history to auto-construct data lineage and auto-detect PII data, allowing you to create dynamic access policies and best-in-class governance. Even non-technical users can directly query across multiple data lakes, warehouses, and DBs using our Excel-like query builder. Native integrations with tools like Tableau and Jupyter make data collaboration come alive.
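A data quality profile of the kind described above (missing values, frequency distribution, and outlier detection) can be sketched in a few lines of stdlib Python. This illustrates the technique, not Atlan's implementation; the sample column is invented.

```python
from collections import Counter
import statistics

def profile_column(values):
    """Profile one column: missing count, distinct count, top values,
    and (for all-numeric columns) IQR-based outliers."""
    present = [v for v in values if v is not None]
    profile = {
        "missing": len(values) - len(present),
        "distinct": len(set(present)),
        "top_values": Counter(present).most_common(3),
    }
    numeric = [v for v in present if isinstance(v, (int, float))]
    if len(numeric) >= 4 and len(numeric) == len(present):
        q1, _, q3 = statistics.quantiles(numeric, n=4)
        iqr = q3 - q1
        lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
        profile["outliers"] = [v for v in numeric if v < lo or v > hi]
    return profile

# One None (missing) and one extreme value (outlier) in the sample.
print(profile_column([10, 12, 11, 13, 12, None, 500]))
```

Running such a profiler per column, on a schedule, is essentially what automated quality profiling adds on top of a static catalog.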
13. Cloudera
Manage and secure the data lifecycle from the Edge to AI in any cloud or data center. Operates across all major public clouds and the private cloud with a public cloud experience everywhere. Integrates data management and analytic experiences across the data lifecycle for data anywhere. Delivers security, compliance, migration, and metadata management across all environments. Open source, open integrations, extensible, & open to multiple data stores and compute architectures. Deliver easier, faster, and safer self-service analytics experiences. Provide self-service access to integrated, multi-function analytics on centrally managed and secured business data while deploying a consistent experience anywhere—on premises or in hybrid and multi-cloud. Enjoy consistent data security, governance, lineage, and control, while deploying the powerful, easy-to-use cloud analytics experiences business users require and eliminating their need for shadow IT solutions.
14. Collate
Collate is an AI‑driven metadata platform that empowers data teams with automated discovery, observability, quality, and governance through agent‑based workflows. Built on the open source OpenMetadata foundation and a unified metadata graph, it offers 90+ turnkey connectors to ingest metadata from databases, data warehouses, BI tools, and pipelines, delivering in‑depth column‑level lineage, data profiling, and no‑code quality tests. Its AI agents automate data discovery, permission‑aware querying, alerting, and incident‑management workflows at scale, while real‑time dashboards, interactive analyses, and a collaborative business glossary enable both technical and non‑technical users to steward high‑quality data assets. Continuous monitoring and governance automations enforce compliance with standards such as GDPR and CCPA, reducing mean time to resolution for data issues and lowering total cost of ownership. Starting Price: Free
15. Mozart Data
Mozart Data is the all-in-one modern data platform that makes it easy to consolidate, organize, and analyze data. Start making data-driven decisions by setting up a modern data stack in an hour - no engineering required.
16. Rocket Data Intelligence (Rocket Software)
Rocket® Data Intelligence (RDI) delivers comprehensive visibility into enterprise data across mainframe, distributed, and cloud environments. It automatically discovers metadata, lineage, and data relationships so organizations can see where critical data resides, how it moves, and which applications and processes rely on it. RDI supports legacy and modern platforms, including Db2, VSAM, IMS, Adabas, Datacom, relational databases, ETL tools like Informatica and DataStage, code such as COBOL, Python, and Java, and cloud data stores. RDI provides enterprise-grade capabilities including automated data discovery and code parsing, impact analysis, lineage filtering, role/LOB-based categorization and governance, workflow management, business glossary, and dependency mapping. By unifying data asset visibility across hybrid environments, RDI reduces operational risk and accelerates data modernization, compliance reporting, discovery, and rationalization initiatives.
17. Datameer
Datameer revolutionizes data transformation with a low-code approach, trusted by top global enterprises. Craft, transform, and publish data seamlessly with no code and SQL, simplifying complex data engineering tasks. Empower your data teams to make informed decisions confidently while saving costs and ensuring responsible self-service analytics. Speed up your analytics workflow by transforming datasets to answer ad-hoc questions and support operational dashboards. Empower everyone on your team with our SQL or drag-and-drop tools to transform your data in an intuitive and collaborative workspace. And best of all, everything happens in Snowflake. Datameer is designed and optimized for Snowflake to reduce data movement and increase platform adoption. Some of the problems Datameer solves:
- Analytics is not accessible
- Drowning in backlog
- Long development
18. Infor Data Lake (Infor)
Solving today’s enterprise and industry challenges requires big data. The ability to capture data from across your enterprise, whether generated by disparate applications, people, or IoT infrastructure, offers tremendous potential. Infor’s Data Lake tools deliver schema-on-read intelligence along with a fast, flexible data consumption framework to enable new ways of making key decisions. With leveraged access to your entire Infor ecosystem, you can start capturing and delivering big data to power your next-generation analytics and machine learning strategies. Infinitely scalable, the Infor Data Lake provides a unified repository for capturing all of your enterprise data. Grow with your insights and investments, ingest more content for better-informed decisions, improve your analytics profiles, and provide rich data sets to build more powerful machine learning processes.
19. Global IDs
Global IDs provides a set of enterprise data solutions spanning data governance, data compliance, cloud migration, rationalization, privacy, analytics, and much more. The Global IDs EDA Platform comprises a set of core functions, automated discovery and profiling, data classification, data lineage, data quality, and more, that render data transparent, trustworthy, and explainable across the ecosystem. The platform architecture is designed for integration from the ground up, with all platform functionality accessible via APIs. The Global IDs EDA platform automates data management for enterprises of any size or data ecosystem.
20. Octopai
Harness the power of data lineage, discovery, and a data catalog that can instantly navigate the most complex data landscapes, and achieve full control of your data. Gain access to the most comprehensive automated data lineage, discovery, and data catalog, providing unprecedented visibility and trust into the most complex data environments. Octopai extracts metadata from your entire data environment and, with a quick, secure, and simple process, instantly analyzes it. In one centralized platform, Octopai gives you automatic access to data lineage, data discovery, and a data catalog. Trace any data end-to-end through your entire data landscape in seconds. Automatically find the data you need anywhere in your data landscape. Create company-wide consistency with a self-creating, self-updating data catalog.
21. Talend Data Fabric (Qlik)
Talend Data Fabric’s suite of cloud services efficiently handles all your integration and integrity challenges — on-premises or in the cloud, any source, any endpoint. Deliver trusted data at the moment you need it — for every user, every time. Ingest and integrate data, applications, files, events and APIs from any source or endpoint to any location, on-premises and in the cloud, easier and faster with an intuitive interface and no coding. Embed quality into data management and guarantee ironclad regulatory compliance with a thoroughly collaborative, pervasive and cohesive approach to data governance. Make the most informed decisions based on high quality, trustworthy data derived from batch and real-time processing and bolstered with market-leading data cleaning and enrichment tools. Get more value from your data by making it available internally and externally. Extensive self-service capabilities make building APIs easy and improve customer engagement.
22. Data360 Govern (Precisely)
Your organization knows the value of data and the need to get it into the hands of business users for maximum impact, but without enterprise data governance, that data might be hard to find, understand, and trust. Data360 Govern is an enterprise data governance, catalog, and metadata management solution that gives you confidence in the quality, value, and trustworthiness of your data. It automates governance and stewardship tasks to help you answer essential questions about your data’s source, use, meaning, ownership, and quality. With Data360 Govern, you can make faster decisions on data usage and management, build collaboration across your entire organization, and allow users to get the answers they need – when they need them. Transparency into your organization’s data landscape gives you the power to track the critical data aligned with your most important business outcomes.
23. Decube
Decube is a data management platform that helps organizations manage their data observability, data catalog, and data governance needs. It provides end-to-end visibility into data and ensures its accuracy, consistency, and trustworthiness. Decube's platform includes data observability, a data catalog, and data governance components that work together to provide a comprehensive solution. The data observability tools enable real-time monitoring and detection of data incidents, while the data catalog provides a centralized repository for data assets, making it easier to manage and govern data usage and access. The data governance tools provide robust access controls, audit reports, and data lineage tracking to demonstrate compliance with regulatory requirements. Decube's platform is customizable and scalable, making it easy for organizations to tailor it to meet their specific data management needs and manage data across different systems, data sources, and departments.
24. Qlik Data Integration (Qlik)
The Qlik Data Integration platform for managed data lakes automates the process of providing continuously updated, accurate, and trusted data sets for business analytics. Data engineers have the agility to quickly add new sources and ensure success at every step of the data lake pipeline from real-time data ingestion, to refinement, provisioning, and governance. A simple and universal solution for continually ingesting enterprise data into popular data lakes in real-time. A model-driven approach for quickly designing, building, and managing data lakes on-premises or in the cloud. Deliver a smart enterprise-scale data catalog to securely share all of your derived data sets with business users.
25. Acryl Data
No more data catalog ghost towns. Acryl Cloud drives fast time-to-value via Shift Left practices for data producers and an intuitive UI for data consumers. Continuously detect data quality incidents in real-time, automate anomaly detection to prevent breakages, and drive fast resolution when they do occur. Acryl Cloud supports both push-based and pull-based metadata ingestion for easy maintenance, ensuring information is trustworthy, up-to-date, and definitive. Data should be operational. Go beyond simple visibility and use automated Metadata Tests to continuously expose data insights and surface new areas for improvement. Reduce confusion and accelerate resolution with clear asset ownership, automatic detection, streamlined alerts, and time-based lineage for tracing root causes.
26. Onehouse
The only fully managed cloud data lakehouse designed to ingest from all your data sources in minutes and support all your query engines at scale, for a fraction of the cost. Ingest from databases and event streams at TB-scale in near real-time, with the simplicity of fully managed pipelines. Query your data with any engine, and support all your use cases including BI, real-time analytics, and AI/ML. Cut your costs by 50% or more compared to cloud data warehouses and ETL tools with simple usage-based pricing. Deploy in minutes without engineering overhead with a fully managed, highly optimized cloud service. Unify your data in a single source of truth and eliminate the need to copy data across data warehouses and lakes. Use the right table format for the job, with omnidirectional interoperability between Apache Hudi, Apache Iceberg, and Delta Lake. Quickly configure managed pipelines for database CDC and streaming ingestion.
27. Collibra
With a best-in-class catalog, flexible governance, continuous quality, and built-in privacy, the Collibra Data Intelligence Cloud is your single system of engagement for data. Support your users with a best-in-class data catalog that includes embedded governance, privacy and quality. Raise the grade, by ensuring teams can quickly find, understand and access data across sources, business applications, BI and data science tools in one central location. Give your data some much-needed privacy. Centralize, automate and guide workflows to encourage collaboration, operationalize privacy and address global regulatory requirements. Get the full story around your data with Collibra Data Lineage. Automatically map relationships between systems, applications and reports to provide a context-rich view across the enterprise. Home in on the data you care about most and trust that it is relevant, complete and trustworthy.
28. SAP Information Steward (SAP)
SAP Information Steward software supports data profiling and monitoring and information policy management. As the information governance layer of SAP Business Technology Platform, it can help you anticipate risk and drive better business outcomes. Combine data profiling, data lineage, and metadata management to gain continuous insight into the integrity of your enterprise data model. Gain a better understanding of data quality across your data management landscape, while accessing and analyzing metrics with intuitive dashboards and scorecards. Improve enterprise information management initiatives by supporting analysts, data stewards, and IT experts with consistent validation rules and guidelines. Discover, assess, define, monitor, and improve the quality of your enterprise data assets with data profiling and metadata management – all with one solution. Forecast the potential savings of improved data quality by running what-if analyses.
29. Talend Data Catalog (Qlik)
Talend Data Catalog gives your organization a single, secure point of control for your data. With robust tools for search and discovery, and connectors to extract metadata from virtually any data source, Data Catalog makes it easy to protect your data, govern your analytics, manage data pipelines, and accelerate your ETL processes. Data Catalog automatically crawls, profiles, organizes, links, and enriches all your metadata. Up to 80% of the information associated with the data is documented automatically and kept up-to-date through smart relationships and machine learning, continually delivering the most current data to the user. Make data governance a team sport with a secure single point of control where you can collaborate to improve data accessibility, accuracy, and business relevance. Support data privacy and regulatory compliance with intelligent data lineage tracing and compliance tracking.
30. Aggua
Aggua is a data fabric augmented AI platform that gives data and business teams access to their data, creating trust and delivering practical data insights for more holistic, data-centric decision-making. Instead of wondering what is going on underneath the hood of your organization's data stack, become immediately informed with a few clicks. Get access to data cost insights, data lineage, and documentation without needing to take time out of your data engineers' workday. Instead of spending hours tracing what a data type change will break in your data pipelines, tables, and infrastructure, with automated lineage your data architects and engineers can spend less time manually going through logs and DAGs and more time actually making the changes to infrastructure.
31. Paxata
Paxata is a visually-dynamic, intuitive solution that enables business analysts to rapidly ingest, profile, and curate multiple raw datasets into consumable information in a self-service manner, greatly accelerating development of actionable business insights. In addition to empowering business analysts and SMEs, Paxata also provides a rich set of workload automation and embeddable data preparation capabilities to operationalize and deliver data preparation as a service within other applications. The Paxata Adaptive Information Platform (AIP) unifies data integration, data quality, semantic enrichment, re-use & collaboration, and also provides comprehensive data governance and audit capabilities with self-documenting data lineage. The Paxata AIP utilizes a native multi-tenant elastic cloud architecture and is the only modern information platform that is currently deployed as a multi-cloud hybrid information fabric.
32. Select Star
Set up your automated data catalog in just 15 minutes, and receive column-level lineage, Entity Relationship (ER) diagrams, and auto-populated documentation within 24 hours. Easily find, tag, and add documentation to your data so everyone can find the right dataset for their use case. Select Star automatically detects and displays your column-level data lineage. You can now trust the data, knowing where it came from. Select Star automatically surfaces how your company uses data. That means you can identify relevant data fields without needing to ask someone else. Select Star treats your data with AICPA SOC 2 Security, Confidentiality, and Availability standards, making sure your data is always safe and sound. Starting Price: $270 per month
33. Lentiq
Lentiq is a collaborative data lake as a service environment that’s built to enable small teams to do big things. Quickly run data science, machine learning and data analysis at scale in the cloud of your choice. With Lentiq, your teams can ingest data in real time and then process, clean and share it. From there, Lentiq makes it possible to build, train and share models internally. Simply put, data teams can collaborate with Lentiq and innovate with no restrictions. Data lakes are storage and processing environments, which provide ML, ETL, schema-on-read querying capabilities and so much more. Are you working on some data science magic? You definitely need a data lake. In the Post-Hadoop era, the big, centralized data lake is a thing of the past. With Lentiq, we use data pools, which are multi-cloud, interconnected mini-data lakes. They work together to give you a stable, secure and fast data science environment.
34. Ataccama ONE (Ataccama)
Ataccama reinvents the way data is managed to create value on an enterprise scale. Unifying Data Governance, Data Quality, and Master Data Management into a single, AI-powered fabric across hybrid and Cloud environments, Ataccama gives your business and data teams the ability to innovate with unprecedented speed while maintaining trust, security, and governance of your data.
35. DataGalaxy
DataGalaxy is a next-generation data governance and intelligence platform designed to help organizations manage, understand, and maximize the value of their data. Built around a unified interface, it empowers everyone—from executives to data consumers—to collaborate seamlessly across data assets, strategies, and analytics. The platform’s automated data catalog, governance hub, and AI co-pilot reduce manual work while ensuring compliance and data quality across systems. With more than 70 integrations, including Snowflake, Databricks, Power BI, and AWS, DataGalaxy connects your data ecosystem into a single source of truth. Its value tracking center and strategy cockpit align data initiatives with business goals, driving measurable outcomes and enterprise-wide visibility. Loved by users, DataGalaxy turns governance into a strategic advantage for the modern enterprise.
36. Databricks Data Intelligence Platform (Databricks)
The Databricks Data Intelligence Platform allows your entire organization to use data and AI. It’s built on a lakehouse to provide an open, unified foundation for all data and governance, and is powered by a Data Intelligence Engine that understands the uniqueness of your data. The winners in every industry will be data and AI companies. From ETL to data warehousing to generative AI, Databricks helps you simplify and accelerate your data and AI goals. Databricks combines generative AI with the unification benefits of a lakehouse to power a Data Intelligence Engine that understands the unique semantics of your data. This allows the Databricks Platform to automatically optimize performance and manage infrastructure in ways unique to your business. The Data Intelligence Engine understands your organization’s language, so search and discovery of new data is as easy as asking a question like you would to a coworker.
37. Upsolver
Upsolver makes it incredibly simple to build a governed data lake and to manage, integrate, and prepare streaming data for analysis. Define pipelines using only SQL on auto-generated schema-on-read. An easy visual IDE accelerates building pipelines. Add upserts and deletes to data lake tables. Blend streaming and large-scale batch data. Automated schema evolution and reprocessing from previous state. Automatic orchestration of pipelines (no DAGs). Fully managed execution at scale. Strong consistency guarantee over object storage. Near-zero maintenance overhead for analytics-ready data. Built-in hygiene for data lake tables including columnar formats, partitioning, compaction, and vacuuming. 100,000 events per second (billions daily) at low cost. Continuous lock-free compaction to avoid the “small files” problem. Parquet-based tables for fast queries.
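The "small files" compaction mentioned above can be illustrated with a simple planning sketch: group many small files into batches near a target output size, so each batch can be rewritten as one larger file. The file names, sizes, and 128 MB target below are invented; this is a sketch of the general idea, not Upsolver's algorithm.

```python
TARGET_BYTES = 128 * 1024 * 1024  # assumed target output file size (~128 MB)

def plan_compaction(files, target=TARGET_BYTES):
    """files: list of (name, size_bytes) tuples.
    Greedily pack files (smallest first) into batches no larger than target;
    each returned batch would be rewritten as one larger file."""
    batches, current, current_size = [], [], 0
    for name, size in sorted(files, key=lambda f: f[1]):
        if current and current_size + size > target:
            batches.append(current)
            current, current_size = [], 0
        current.append(name)
        current_size += size
    if current:
        batches.append(current)
    return batches

small_files = [("a.parquet", 40 * 2**20), ("b.parquet", 60 * 2**20),
               ("c.parquet", 50 * 2**20), ("d.parquet", 100 * 2**20)]
print(plan_compaction(small_files))
```

Real compactors also run this continuously and without locking writers, which is the hard part the blurb alludes to; the planning step itself is this simple.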
38
BryteFlow
BryteFlow
BryteFlow builds highly efficient automated environments for analytics. It converts Amazon S3 into a powerful analytics platform by leveraging the AWS ecosystem intelligently to deliver data at high speed. It complements AWS Lake Formation and automates the modern data architecture, providing performance and productivity. You can completely automate data ingestion with BryteFlow Ingest’s simple point-and-click interface, while BryteFlow XL Ingest handles the initial full ingest of very large datasets. No coding is needed. With BryteFlow Blend you can merge data from varied sources such as Oracle, SQL Server, Salesforce, and SAP, and transform it to make it ready for analytics and machine learning. BryteFlow TruData reconciles the data at the destination with the source, continually or at a frequency you select. If data is missing or incomplete, you get an alert so you can fix the issue easily. -
39
Microsoft Purview
Microsoft
Microsoft Purview is a unified data governance service that helps you manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data. Easily create a holistic, up-to-date map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage, and empower data consumers to find valuable, trustworthy data. Capabilities include automated data discovery, lineage identification, and data classification across on-premises, multicloud, and SaaS sources; a unified map of your data assets and their relationships for more effective governance; semantic search that enables data discovery using business or technical terms; and insight into the location and movement of sensitive data across your hybrid data landscape. Establish the foundation for effective data usage and governance with Purview Data Map. Automate and manage metadata from hybrid sources. Classify data using built-in and custom classifiers and Microsoft Information Protection sensitivity labels. Starting Price: $0.342 -
40
Data Lakes on AWS
Amazon
Many Amazon Web Services (AWS) customers require a data storage and analytics solution that offers more agility and flexibility than traditional data management systems. A data lake is a new and increasingly popular way to store and analyze data because it allows companies to manage multiple data types from a wide variety of sources, and store this data, structured and unstructured, in a centralized repository. The AWS Cloud provides many of the building blocks required to help customers implement a secure, flexible, and cost-effective data lake. These include AWS managed services that help ingest, store, find, process, and analyze both structured and unstructured data. To support our customers as they build data lakes, AWS offers the data lake solution, which is an automated reference implementation that deploys a highly available, cost-effective data lake architecture on the AWS Cloud along with a user-friendly console for searching and requesting datasets. -
41
Azure Data Lake
Microsoft
Azure Data Lake includes all the capabilities required to make it easy for developers, data scientists, and analysts to store data of any size, shape, and speed, and do all types of processing and analytics across platforms and languages. It removes the complexities of ingesting and storing all of your data while making it faster to get up and running with batch, streaming, and interactive analytics. Azure Data Lake works with existing IT investments for identity, management, and security for simplified data management and governance. It also integrates seamlessly with operational stores and data warehouses so you can extend current data applications. We’ve drawn on the experience of working with enterprise customers and running some of the largest scale processing and analytics in the world for Microsoft businesses like Office 365, Xbox Live, Azure, Windows, Bing, and Skype. Azure Data Lake solves many of the productivity and scalability challenges that prevent you from maximizing the value of your data. -
42
Dataedo
Dataedo
Discover, document, and manage your metadata. Dataedo is equipped with multiple automated metadata scanners that connect to various database technologies, extract data structures and metadata, and load them into the metadata repository. With a few clicks, build a catalog of your data and describe each element. Decode cryptic table and column names with business-friendly aliases, and convey the meaning and purpose of data assets with descriptions and user-defined custom fields. Use sample data to learn what is stored in your data assets, so you understand the data before using it and can make sure it is of good quality. Ensure high data quality with data profiling. Democratize access to knowledge about data: build data literacy and empower everyone in your organization to make better use of your data with a lightweight on-premises data catalog. Starting Price: $49 per month -
43
Delta Lake
Delta Lake
Delta Lake is an open-source storage layer that brings ACID transactions to Apache Spark™ and big data workloads. Data lakes typically have multiple data pipelines reading and writing data concurrently, and without transactions, data engineers have to go through a tedious process to ensure data integrity. Delta Lake brings ACID transactions to your data lakes and provides serializability, the strongest isolation level. Learn more at Diving into Delta Lake: Unpacking the Transaction Log. In big data, even the metadata itself can be "big data". Delta Lake treats metadata just like data, leveraging Spark's distributed processing power to handle all its metadata. As a result, Delta Lake can handle petabyte-scale tables with billions of partitions and files with ease. Delta Lake provides snapshots of data, enabling developers to access and revert to earlier versions of data for audits, rollbacks, or to reproduce experiments. -
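The snapshot ("time travel") idea above can be illustrated with a toy, pure-Python model of a versioned table. This is a conceptual sketch only, not Delta Lake's actual API (which exposes history through its transaction log and Spark read options such as `versionAsOf`); the class and method names here are hypothetical.

```python
# Toy model of snapshot-based "time travel": every write commits a new
# immutable version, so earlier states remain readable for audits and
# rollbacks. Conceptual sketch only -- NOT the Delta Lake API.

class VersionedTable:
    def __init__(self):
        self._versions = []  # list of immutable snapshots

    def write(self, rows):
        # Each commit stores a full snapshot here; a real system stores
        # incremental files plus a transaction log instead.
        self._versions.append(tuple(rows))
        return len(self._versions) - 1  # version number of this commit

    def read(self, version_as_of=None):
        # Default: latest snapshot; otherwise read a historical version.
        if version_as_of is None:
            version_as_of = len(self._versions) - 1
        return list(self._versions[version_as_of])


table = VersionedTable()
v0 = table.write([{"id": 1, "qty": 10}])
v1 = table.write([{"id": 1, "qty": 10}, {"id": 2, "qty": 5}])

latest = table.read()                    # two rows
original = table.read(version_as_of=v0)  # the earlier single-row snapshot
```

Because every version is immutable, reproducing an experiment or auditing a past state reduces to reading an old version number, which is the essence of what Delta Lake's snapshots provide at scale.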
44
PHEMI Health DataLab
PHEMI Systems
The PHEMI Trustworthy Health DataLab is a unique, cloud-based, integrated big data management system that allows healthcare organizations to enhance innovation and generate value from healthcare data by simplifying the ingestion and de-identification of data with NSA/military-grade governance, privacy, and security built in. Conventional products simply lock down data; PHEMI goes further, solving privacy and security challenges and addressing the urgent need to secure, govern, curate, and control access to privacy-sensitive personal healthcare information (PHI). This improves data sharing and collaboration inside and outside of an enterprise, without compromising the privacy of sensitive information or increasing administrative burden. PHEMI Trustworthy Health DataLab can scale to any size of organization, is easy to deploy and manage, connects to hundreds of data sources, and integrates with popular data science and business analysis tools. -
45
Trifacta
Trifacta
The fastest way to prep data and build data pipelines in the cloud. Trifacta provides visual and intelligent guidance to accelerate data preparation so you can get to insights faster. Poor data quality can sink any analytics project; Trifacta helps you understand your data so you can quickly and accurately clean it up. All the power with none of the code. Manual, repetitive data preparation processes don’t scale. Trifacta helps you build, deploy, and manage self-service data pipelines in minutes, not months. -
46
NewEvol
Sattrix Software Solutions
NewEvol is a technologically advanced product suite that uses data science and advanced analytics to identify abnormalities in the data itself. Supported by visualization, rule-based alerting, automation, and responses, NewEvol becomes a more compelling proposition for any small to large enterprise. Machine learning (ML) and a security intelligence feed make NewEvol a robust system that can cater to challenging business demands. NewEvol Data Lake is easy to deploy and manage; you don’t require a team of expert data administrators. As your company’s data needs grow, it automatically scales and reallocates resources accordingly. NewEvol Data Lake supports extensive data ingestion and enrichment across multiple sources, ingesting data in formats such as delimited, JSON, XML, PCAP, Syslog, etc. It offers enrichment with the help of a best-of-breed, contextually aware event analytics model. -
47
Validio
Validio
See how your data assets are used and get important insights about them, such as popularity, utilization, quality, and schema coverage. Find and filter the data you need based on metadata tags and descriptions. Drive data governance and ownership across your organization. Stream-lake-warehouse lineage facilitates data ownership and collaboration, and an automatically generated field-level lineage map helps you understand the entire data ecosystem. Anomaly detection learns from your data and seasonality patterns, with automatic backfill from historical data. Machine learning-based thresholds are trained per data segment, on actual data instead of metadata only. -
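The idea of thresholds trained per data segment can be sketched with a minimal example: learn a mean and standard deviation for each segment of the data and flag values outside k standard deviations. This is a toy illustration of the general technique, not Validio's implementation (which also models seasonality); all names here are hypothetical.

```python
# Toy per-segment anomaly thresholds: fit mean/std for each data segment
# and flag values outside k standard deviations of that segment's history.
# Conceptual sketch of segment-aware thresholds, not a vendor's model.
from collections import defaultdict
from statistics import mean, stdev

def fit_thresholds(records, k=3.0):
    """records: iterable of (segment, value) pairs.
    Returns {segment: (low, high)} acceptance bands."""
    by_segment = defaultdict(list)
    for segment, value in records:
        by_segment[segment].append(value)
    thresholds = {}
    for segment, values in by_segment.items():
        m, s = mean(values), stdev(values)
        thresholds[segment] = (m - k * s, m + k * s)
    return thresholds

def is_anomaly(thresholds, segment, value):
    low, high = thresholds[segment]
    return not (low <= value <= high)

# A value of 100 is normal for the "EU" segment but anomalous for "US":
history = [("EU", v) for v in [100, 102, 98, 101, 99]] + \
          [("US", v) for v in [10, 11, 9, 10, 10]]
th = fit_thresholds(history)
flag_us = is_anomaly(th, "US", 100)  # True
flag_eu = is_anomaly(th, "EU", 100)  # False
```

Training per segment matters because a single global threshold would treat 100 as normal everywhere, hiding the anomaly in the low-volume "US" segment.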
48
Dawiso
Dawiso
Dawiso is your modern platform for managing and understanding data, built to unify governance and usability in a way that works for your entire organization. At its core is a powerful, AI-powered data catalog, enabling teams to quickly discover, interpret, and access trusted data across systems, reports, and business tools. With flexible governance features and business-friendly documentation apps, Dawiso bridges the gap between technical and non-technical users, fostering true collaboration. Enhance trust in your data with clear, visual data lineage that maps relationships across sources and systems, giving you full context and control. Support compliance through customizable workflows, role-based access, and structured metadata capture. Starting Price: $49 per user per month -
49
Blindata
Blindata
Blindata covers all the functions of a data governance program: Business Glossary, Data Catalog, and Data Lineage build an integrated and complete view of your data. The Data Classification module gives semantic meaning to the data, while the Data Quality, Issue Management, and Data Stewardship modules improve the reliability of, and trust in, data. Moreover, privacy compliance can leverage specific features: a registry of processing activities, centralized privacy notice management, and a consent registry with integrated blockchain notarization. The Blindata Agent can connect to different data sources, collecting metadata such as data structures (Tables, Views, Fields, …), data quality metrics, reverse lineage, etc. Blindata has a modular and entirely API-based architecture, allowing systematic integration with the most critical business systems (DBMS, Active Directory, e-commerce, data platforms). Blindata is available as SaaS, can be installed on-premises, or can be purchased on AWS Marketplace. Starting Price: $1000/year/user -
50
Tree Schema Data Catalog
Tree Schema
The essential tool for metadata management. Automatically populate your entire catalog in under 5 minutes! Data Discovery: find the data you need anywhere within your data ecosystem, from the database all the way down to the specific values for each field. Automatically document your data from existing data stores, with first-class support for tabular and unstructured data and automated data governance actions. Data Lineage: explore your data lineage and understand where your data comes from and where it is going. View impact analysis of changes, find all upstream and downstream impacts, and visualize relationships and connections. API Access: manage your data lineage as code and keep your catalog up to date with the Tree Schema API. Integrate data lineage into CI/CD pipelines, capture values and descriptions within your code, and analyze impact for breaking changes. Data Dictionary: know the key terms and lingo that drive your business, and define the context and scope for keywords. Starting Price: $99 per month