Populating Iceberg Tables with Amazon Data Firehose
In this post I look at using an Amazon Data Firehose to populate Iceberg tables, with the automatic optimization features that AWS announced in November 2024.
Keith Gregory teaches Day Two Cloud about data engineering in a way DevOps folks (and hydrologists) can understand. He explains that the role of a data engineer is to create pipelines to transport data from metaphorical rivers and make it usable for data analysts. Keith walks us through the testing process; the difference between streaming …
Partitioning is one of the easiest ways to improve the performance of your data lake, because it reduces the amount of data scanned. But implementing partitions can be surprisingly challenging, as can their effective use. In this post I look at several of the issues that you should consider when partitioning your data.
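To give a concrete (if simplified) picture of what partition pruning buys you, here is a minimal sketch using pyarrow rather than a full S3/Athena data lake; the column names and values are invented for illustration and aren't taken from the post.

```python
# Minimal illustration of partition pruning (names and values are invented).
import pyarrow as pa
import pyarrow.dataset as ds

# A tiny "data lake": events spread across two months.
table = pa.table({
    "year":  [2024, 2024, 2024, 2024],
    "month": [10, 10, 11, 11],
    "event": ["a", "b", "c", "d"],
})

# Write Hive-style partitions: lake/year=2024/month=10/..., etc.
ds.write_dataset(
    table, "lake", format="parquet",
    partitioning=ds.partitioning(
        pa.schema([("year", pa.int64()), ("month", pa.int64())]),
        flavor="hive"),
    existing_data_behavior="overwrite_or_ignore",
)

# A reader that filters on the partition columns only touches matching files.
dataset = ds.dataset("lake", format="parquet", partitioning="hive")
november = dataset.to_table(
    filter=(ds.field("year") == 2024) & (ds.field("month") == 11))
print(november.to_pydict())   # only the files under month=11 are read
```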
My prior posts used Lambda to do data transformation. But what if we could use a non-programmatic tool, in keeping with the Extract-Load-Transform mindset of the modern data pipeline? As it turns out, we can: Amazon Athena can write data as well as query it. There are, of course, a few stumbles along the way. In this blog post I walk through the process of aggregating CloudTrail data using SQL.
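The general shape of that technique is easy to sketch: an INSERT INTO … SELECT statement, submitted to Athena, that reads one table and writes aggregated rows into another. The table, database, and bucket names below are placeholders, not the actual SQL from the post.

```python
# Sketch of using Athena to write data, not just query it.
# Table, database, and bucket names are placeholders.
import boto3

athena = boto3.client("athena")

# Read the raw CloudTrail table and write aggregated rows into a second table
# that Athena manages in S3.
sql = """
INSERT INTO cloudtrail_daily
SELECT eventsource, eventname, count(*) AS event_count
FROM   cloudtrail_raw
WHERE  event_date = date '2024-11-01'
GROUP  BY eventsource, eventname
"""

response = athena.start_query_execution(
    QueryString=sql,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```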
In this final part of a three-part series, I add another aggregation step to combine a month’s worth of data and write it as Parquet.
When I ran the Lambda from my previous post against Chariot’s CloudTrail repository, it took almost four minutes to process a single day’s worth of data. That seems like a long time, and as a developer I want to optimize everything I write. In this post I look into analyzing the current runtime, and options for improving it.
As I’ve written in the past, large numbers of small files make for an inefficient data lake. But sometimes, you can’t avoid small files. Our CloudTrail repository, for example, has 4,601,675 files as of this morning, 44% of which are under 1,000 bytes long. In this post, I develop a Lambda-based data pipeline to aggregate these files, storing them in a new S3 location partitioned by date. Along the way I call out some of the challenges that face such a pipeline.
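The heart of that pipeline is simple enough to sketch: read a batch of small gzipped CloudTrail files and rewrite them as a single larger object under a date-based prefix. The bucket names and key layout below are placeholders, and this leaves out the triggering, batching, and error handling that the post actually deals with.

```python
# Simplified sketch of aggregating small CloudTrail files into one larger object.
# Bucket names and key layout are placeholders; triggering, batching, and error
# handling from the real pipeline are omitted.
import gzip
import json
import boto3

s3 = boto3.client("s3")

def aggregate_day(source_bucket, source_prefix, dest_bucket, dest_key):
    """Combine every small file under source_prefix into one gzipped NDJSON object."""
    records = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=source_bucket, Prefix=source_prefix):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=source_bucket, Key=obj["Key"])["Body"].read()
            # CloudTrail files are gzipped JSON with a top-level "Records" array
            records.extend(json.loads(gzip.decompress(body))["Records"])

    ndjson = "\n".join(json.dumps(r) for r in records).encode("utf-8")
    s3.put_object(Bucket=dest_bucket, Key=dest_key, Body=gzip.compress(ndjson))
    return len(records)

# example invocation, writing to a date-partitioned destination key
aggregate_day(
    "my-cloudtrail-bucket", "AWSLogs/123456789012/CloudTrail/us-east-1/2024/11/01/",
    "my-aggregated-bucket", "cloudtrail/year=2024/month=11/day=01/events.ndjson.gz",
)
```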
We asked our team at Chariot about common omnichannel pitfalls, and how companies of any size can overcome or avoid them.
In this week’s TechChat, we welcome Keith Gregory, our Cloud & Data Engineering Practice Lead here at Chariot. Keith is a prolific writer both on the Chariot blog as well as on his own, and is a wealth of knowledge on all things AWS. We touch on Redshift execution plans, how to appropriately size Redshift …
In my “Friends Don’t Let Friends Use JSON” post, I noted that I preferred the Avro file format to Parquet, because it was easier to write code to use it. I expected some pushback, and got it: Parquet is “much” more performant. So I decided to do some benchmarking.
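A benchmark along these lines doesn't need much code. The sketch below is not the benchmark from the post; it just shows the general shape of one, writing the same records with fastavro and pyarrow and timing a full read of each file.

```python
# Toy comparison of Avro (fastavro) and Parquet (pyarrow) read times.
# Not the benchmark from the post, just the general shape of one.
import time
import pyarrow as pa
import pyarrow.parquet as pq
from fastavro import writer, reader, parse_schema

rows = [{"id": i, "value": f"value-{i}"} for i in range(1_000_000)]

# write the same rows as Avro ...
avro_schema = parse_schema({
    "name": "Row", "type": "record",
    "fields": [{"name": "id", "type": "long"}, {"name": "value", "type": "string"}],
})
with open("test.avro", "wb") as f:
    writer(f, avro_schema, rows)

# ... and as Parquet
pq.write_table(pa.Table.from_pylist(rows), "test.parquet")

# time a full read of each file
start = time.perf_counter()
with open("test.avro", "rb") as f:
    avro_count = sum(1 for _ in reader(f))
print(f"Avro:    {avro_count} rows in {time.perf_counter() - start:.2f}s")

start = time.perf_counter()
parquet_count = pq.read_table("test.parquet").num_rows
print(f"Parquet: {parquet_count} rows in {time.perf_counter() - start:.2f}s")
```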