this post was submitted on 01 Jul 2023

Data Engineering

top 1 comments
[email protected] 1 points 1 year ago

This was announced by Databricks at their Data & AI Summit.

It's interesting to see these older data warehouses starting to implement features that newer projects have proven over the past few years.

This method of incremental materialisation for streaming ingestion is the default in ClickHouse, and there are all sorts of so-called "streaming databases" that have popped up to do exclusively this, e.g. Materialize, RisingWave, etc.
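For anyone who hasn't seen the ClickHouse pattern, here's a rough sketch using the clickhouse-connect Python client. The table and column names (`events`, `daily_totals`, etc.) are just made up for illustration, and it assumes a ClickHouse server running locally:

```python
# Minimal sketch of incremental materialisation in ClickHouse via a
# materialized view, driven from Python with clickhouse-connect.
# All table/column names here are hypothetical examples.
from datetime import datetime

import clickhouse_connect

# Assumes a ClickHouse server reachable on localhost with default credentials.
client = clickhouse_connect.get_client(host='localhost')

# Raw events land here continuously (e.g. from a streaming pipeline).
client.command("""
    CREATE TABLE IF NOT EXISTS events (
        ts DateTime,
        user_id UInt64,
        amount Float64
    ) ENGINE = MergeTree ORDER BY ts
""")

# Target table holding partial aggregate states per day.
client.command("""
    CREATE TABLE IF NOT EXISTS daily_totals (
        day Date,
        total AggregateFunction(sum, Float64)
    ) ENGINE = AggregatingMergeTree ORDER BY day
""")

# The materialized view runs its SELECT on every insert into `events`
# and appends the resulting partial aggregates to `daily_totals` --
# no batch job or scheduler involved.
client.command("""
    CREATE MATERIALIZED VIEW IF NOT EXISTS daily_totals_mv TO daily_totals AS
    SELECT toDate(ts) AS day, sumState(amount) AS total
    FROM events
    GROUP BY day
""")

# Each insert is folded into the aggregate incrementally as it arrives.
client.insert(
    'events',
    [[datetime(2023, 7, 1, 12, 0, 0), 1, 9.99]],
    column_names=['ts', 'user_id', 'amount'],
)

# Query the up-to-date rollup at any time; sumMerge combines the
# partial states written by the materialized view.
print(client.query(
    "SELECT day, sumMerge(total) AS total FROM daily_totals GROUP BY day ORDER BY day"
).result_rows)
```

The sumState/sumMerge split is what makes it incremental: the view only writes partial aggregate states at insert time, and they get merged when you query (or during background merges), so there's never a full recomputation.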

I don't know what Databricks' implementation is like since I'm not a customer of theirs, so I'm interested to see how successful it is. I have been using ClickHouse for some time now, and it's a super powerful feature for getting away from batch jobs & schedules.