Delta Lake

Delta Lake is an open-source data platform architecture that addresses the weaknesses of data warehouses and data lakes in modern big data analytics. Also called a data lakehouse, a Delta Lake combines the affordable, directly accessible storage of a data lake with the data management and performance features of a data warehouse.

Delta Lakes implement lakehouse architectures to streamline big data storage and analytics. This scalable solution relieves data teams of pipeline maintenance workloads, simplifies data science projects, and gives data users direct access to larger, more diverse, and more up-to-date datasets.

This guide will help make sense of the terms Delta Lake, data lake, data lakehouse, and data warehouse. You will learn how data lakehouses evolved from previous architectures, the advantages of Delta Lake implementations, and recent advances in Delta Lake technologies.

History of Delta Lake

The introduction of Delta Lake represented a significant departure from traditional data lakes. While built on the same cloud object storage technology, Delta Lake included innovative enhancements to functionality traditionally reserved for databases or data warehouses.

A lakehouse before lakehouses

The development of Delta Lake was a significant advance in the history of data lakehouses, an architecture now also known as the modern data lake.

Delta Lake brought warehouse-style management features to data lake storage before a name existed for that combination. In time, this set of features would become known as a data lakehouse, and other technologies would come to occupy a similar space, notably Iceberg and Hudi. For this reason, you can think of Delta Lake as the first lakehouse.

Is Delta Lake part of Databricks? From proprietary to open source

Delta Lake was originally developed by Databricks as a proprietary system. It became an open source project in 2019 and has gained several strong key features since then.

Data lake vs. Delta Lake

Data lakes store structured and unstructured data in inexpensive, usually cloud-based, storage systems. Open file formats such as Apache Parquet and ORC allow users to query the data directly through APIs.
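For example, an engine such as Apache Spark can query Parquet files in object storage directly through its DataFrame API. A minimal PySpark sketch, where the bucket path and column names are hypothetical placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lake-query").getOrCreate()

# Read an open-format file straight from object storage (hypothetical path).
events = spark.read.parquet("s3://example-bucket/raw/events/")

# Query through the DataFrame API -- no warehouse load step required.
events.filter(events.event_type == "purchase").groupBy("country").count().show()
```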

This architecture addressed several weaknesses in traditional data warehouses. Since a warehouse is a complete data storage and analytics solution, it combines compute and storage within one system. Companies must invest in peak usage of both to guarantee availability. Data lakes decouple data storage from compute, letting companies optimize their infrastructure investments.

However, data lakes cannot match a warehouse’s robust data management, metadata handling, and SQL query optimization tools. Instead, companies create a two-tiered system, using a data lake to store data in bulk and transferring data to a warehouse for analysis.

A Delta Lake eliminates this complexity by combining the flexible, scalable storage of a data lake with the management and optimization features needed to achieve warehouse-like performance in a single platform.

What are the advantages of Delta Lake?

The established data lake plus warehouse architecture creates growing challenges for enterprises needing insights from ever larger datasets.

Data engineering teams must maintain two sets of ETL data pipelines: one for ingestion into the lake and another to transfer data into the warehouse.

Inevitably, this two-step process leaves warehoused data stale relative to the constantly updated lake.

Advanced machine learning systems cannot read the proprietary data formats of commercial warehouse platforms, forcing data teams to develop complex workarounds.

These burdensome workloads impose significant costs on data teams. In addition, companies now duplicate their storage infrastructure to keep the same data in both the warehouse and the lake.

A Delta Lake eliminates the expensive two-step pipeline structure, so engineers need only maintain the ingestion pipelines into a single storage solution. Since business intelligence applications and other systems can access the Delta Lake’s storage layer directly, they always have access to the most current data.

Finally, a Delta Lake supports open file formats and declarative dataframe APIs that let machine learning systems access the lakehouse’s query optimization features.

Data warehouse vs. Delta Lake

Data warehouses are analytics tools that allow enterprises to consolidate and structure business data in formats that efficiently support business intelligence analysts. However, these solutions became less effective as the scope and scale of business data expanded.

Data warehouses cannot use unstructured data such as video, social media content, or the stream of data coming from Industrial Internet of Things devices.

As mentioned above, complex data science projects such as machine learning and artificial intelligence cannot access warehoused data easily. Moreover, warehouses cannot economically store the large datasets these projects need.

Vendor lock-in makes migrations from proprietary data warehouse platforms to alternate storage solutions difficult and expensive. This is particularly challenging since data warehouses are not scalable enough to keep pace with growing data volumes.

Delta Lakes eliminate the need for a parallel storage and analytics platform. They keep structured and unstructured data in a single location that analytics and data science users can access directly. Cloud-native Delta Lakes are more scalable, and their open file formats prevent vendor lock-in.

Data lakehouse vs. Delta Lake

Delta Lake is an implementation of the data lakehouse architecture. It uses the Apache Parquet open-source data storage format. Delta Lakes are compatible with the Apache Spark big-data processing framework as well as the Trino massively parallel query engine.
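As a rough illustration, here is how a Delta table might be created from Spark; the session settings follow Delta Lake's standard documented configuration, and the storage path is a hypothetical placeholder:

```python
from pyspark.sql import SparkSession

# SparkSession configured for Delta Lake (standard documented settings).
spark = (
    SparkSession.builder.appName("delta-demo")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

df = spark.createDataFrame([(1, "alice"), (2, "bob")], ["user_id", "name"])

# Writes Parquet data files plus a _delta_log transaction log (hypothetical path).
df.write.format("delta").save("s3://example-bucket/lakehouse/users")
```

The resulting table is plain Parquet plus a transaction log, so Spark, Trino, and other compatible engines can read it without a proprietary server in front.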

Using Delta Lake

Unlike the immutable records of a data lake, you can change data in a Delta Lake through enhanced create, read, update, and delete operations. This feature, along with ACID compliance, lets companies use Delta Lakes like a transactional database — but without the inefficient storage and high costs of proprietary database management systems.
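A minimal sketch of in-place updates and deletes through the delta-spark Python API, assuming the Delta-enabled SparkSession and hypothetical table path from the sketch above:

```python
from delta.tables import DeltaTable

# `spark` is assumed to be a SparkSession configured for Delta Lake.
users = DeltaTable.forPath(spark, "s3://example-bucket/lakehouse/users")

# Update rows in place; each call commits as a single ACID transaction.
users.update(condition="user_id = 2", set={"name": "'robert'"})

# Delete rows the same way.
users.delete("user_id = 1")
```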

The metadata features of a Delta Lake support data governance. For example, data privacy regulations such as the European Union’s GDPR or the State of California’s CCPA require companies, upon request, to delete personal data about residents of these regions. A Delta Lake’s metadata allows governance systems to identify and remove the correct data to ensure compliance.
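For instance, a right-to-erasure request could be handled with a SQL DELETE followed by VACUUM, which removes the underlying data files that still contain the deleted rows once the retention period has passed. A sketch with a hypothetical table path and column name:

```python
# `spark` is assumed to be a SparkSession configured for Delta Lake.
spark.sql("""
    DELETE FROM delta.`s3://example-bucket/lakehouse/users`
    WHERE user_id = 42
""")

# Deleted rows linger in old data files until they are vacuumed away.
spark.sql("VACUUM delta.`s3://example-bucket/lakehouse/users`")
```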

About Delta Log

Delta Lake uses a delta log to keep track of changes to data and metadata, ensuring that data operations are ACID-compliant. It’s a key component of Delta Lake’s architecture, helping organizations maintain the quality and reliability of their data in data lake environments.

Enhanced metadata collection

Most of the functionality upgrades associated with Delta Lake come from the introduction of a transaction log, known as the Delta Log. This innovation keeps the metadata describing the table up to date. New metadata is written as JSON commit files in the first instance, with older entries bundled into Parquet-based checkpoint files for more compact, persistent storage.
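To make this concrete, here is a rough sketch of inspecting a table's _delta_log directory with plain Python; the local path is a hypothetical placeholder:

```python
import json
import os

log_dir = "/data/lakehouse/users/_delta_log"  # hypothetical local table path

# Commit files are named by version, e.g. 00000000000000000000.json;
# checkpoint files look like 00000000000000000010.checkpoint.parquet.
for name in sorted(os.listdir(log_dir)):
    print(name)

# Each commit file is newline-delimited JSON, one action per line
# (commitInfo, metaData, protocol, add, remove, and so on).
with open(os.path.join(log_dir, "00000000000000000000.json")) as f:
    for line in f:
        print(list(json.loads(line).keys()))
```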

Comprehensive record of changes

Each change made to the table generates a new entry in the Delta Log. This record is what allows for lakehouse functionality on traditional data lake storage, because it provides detailed data about every modification. With this data, a Delta Lake can determine the precise state of the lakehouse at any point and offer the ability to modify, update, and roll back to previous versions.
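One way to see this record is Delta's DESCRIBE HISTORY command. A brief sketch, assuming a Delta-enabled SparkSession and a hypothetical table path:

```python
# Each row of the result corresponds to one entry in the Delta Log.
history = spark.sql(
    "DESCRIBE HISTORY delta.`s3://example-bucket/lakehouse/users`"
)
history.select("version", "timestamp", "operation", "operationParameters").show()
```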

Delta Lake increased functionality

Delta Lake provides a considerable functionality upgrade compared to traditional data lakes. All of these features are made possible by the Delta Log and its ability to record metadata about every change to the table.

ACID compliance on cloud object storage

One of the key feature enhancements is ACID compliance. A system is ACID-compliant if it guarantees atomicity, consistency, isolation, and durability.

These guarantees ensure that a transaction either completes fully or fails fully, leaving no partial transactions or half-written data.

This is particularly important for certain data types and use cases. It provides database-like functionality on top of data lake cloud object storage technology.
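A small sketch of atomicity in practice: a rejected write (here, an append with a mismatched schema) fails as a whole and leaves the table exactly as it was. The table path is hypothetical, and `spark` is assumed to be a Delta-enabled SparkSession:

```python
path = "s3://example-bucket/lakehouse/users"  # hypothetical table path
before = spark.read.format("delta").load(path).count()

# Delta enforces the table schema on write; this append is rejected whole.
bad_df = spark.createDataFrame([(1.5,)], ["not_a_user_id"])
try:
    bad_df.write.format("delta").mode("append").save(path)
except Exception as e:
    print("append rejected:", type(e).__name__)

# Readers still see the last committed version -- no half-written rows.
assert spark.read.format("delta").load(path).count() == before
```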

Recent advances in Delta Lake

Delta Lake has changed in several important ways recently.

With the introduction of Iceberg and the continued use of Hudi, Delta Lake has evolved in response to its competitors. These feature adoptions and performance improvements look set to upgrade the Delta Lake experience and change the architecture in exciting new ways. The project's open source governance also promises continued community-driven improvements.

We outline some of these recent changes below.

Schema evolution

Delta Lake now supports schema evolution, allowing schemas to be changed and extended on the fly in certain circumstances. This is useful in cases where the lakehouse has outgrown the original scope of the schema and must be enlarged. Notably, schemas can only be changed upwards, from a smaller scope to a larger one.

For example, a column can change from Int to BigInt. This is possible because the new type, BigInt, can represent every value an Int can hold. The same would not be true in reverse, because Int cannot capture all of the values a BigInt can contain.
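A brief sketch of schema evolution on write, using the mergeSchema option to let an append add a new column; the table path and columns are hypothetical. (Widening a column type such as Int to BigInt may require an explicit schema overwrite or a newer Delta Lake release with type widening enabled, depending on the version.)

```python
# `spark` is assumed to be a SparkSession configured for Delta Lake.
new_rows = spark.createDataFrame(
    [(3, "carol", "DE")], ["user_id", "name", "country"]  # adds `country`
)

(new_rows.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")  # widen the table schema to include `country`
    .save("s3://example-bucket/lakehouse/users"))

# Existing rows read back with country = NULL; the schema change is
# recorded as a metadata update in the Delta Log.
```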

Time travel

Delta Lake now includes time travel. This is made possible by the Delta Log, the part of the system that keeps an accurate transaction log of all changes to the lakehouse in JSON log files and Parquet checkpoint files.

In modern versions of Delta Lake, users can inspect changes to the system and, if desired, roll back to a previous state. This introduces database-like version control and is particularly important for some use cases, organizations, and industries.
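A short sketch of time travel reads and a rollback, assuming a Delta-enabled SparkSession and hypothetical path and version values:

```python
path = "s3://example-bucket/lakehouse/users"  # hypothetical table path

# Read the table as it existed at an earlier version...
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)

# ...or as of a point in time.
jan = spark.read.format("delta").option("timestampAsOf", "2024-01-01").load(path)

# Roll the live table back to a previous version.
spark.sql(f"RESTORE TABLE delta.`{path}` TO VERSION AS OF 0")
```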

Notably, these changes required significant alterations to the system’s architecture. These non-backward-compatible upgrades enable new functionality like time travel but come at a cost: only modern versions of Delta Lake include these features, creating a discontinuity between versions.

Iceberg vs. Delta Lake: The differences

Despite competition from Iceberg, Delta Lake remains optimal in certain settings. For instance, organizations already deep within the Databricks ecosystem will benefit from the proprietary integrations between the two technologies.

Users in this situation will likely want to remain with Delta Lake. Also, organizations making use of machine learning processes connected to other Databricks technology will likely benefit from retaining Delta Lake.

Related reading: Databricks and Starburst, a true unified open analytics platform
