Data

S3 Table Buckets vs Redshift

AWS released S3 Table Buckets at re:Invent 2024, and at release they were usable only with Amazon EMR. Over the past year, however, the S3 Tables team has been making improvements. And while there are still some limitations, S3 Tables with Athena now gives a user experience similar to traditional data warehouses such as Redshift.

Which leads to the question: can Athena and S3 Tables be a cost-effective replacement for Redshift? In this post I show how to use S3 Tables, and run some performance comparisons to answer that question.
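
For a sense of the workflow: the snippet below creates a table bucket and runs an Athena query against it through boto3. This is a minimal sketch, not the post's benchmark code; the bucket, database, and table names are hypothetical, and the catalog identifier assumes the standard `s3tablescatalog/<bucket-name>` registration created by the Glue Data Catalog integration.

```python
import boto3

# Create an S3 table bucket (the name is hypothetical).
s3tables = boto3.client("s3tables")
s3tables.create_table_bucket(name="analytics-tables")

# Once the bucket is integrated with the Glue Data Catalog, Athena can
# query its Iceberg tables like any other catalog.
athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString="SELECT event_name, count(*) FROM events GROUP BY event_name",
    QueryExecutionContext={
        "Catalog": "s3tablescatalog/analytics-tables",  # assumed catalog name
        "Database": "clickstream",
    },
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```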

Populating a Data Lake with AWS Database Migration Service and Amazon Data Firehose

Data lakes are great for holding large volumes of data, such as clickstream logs. But such data has limited usefulness unless you can combine it with data from your transactional, line-of-business databases. And this is where things get tricky. Simple approaches, such as replicating entire tables, don’t scale. Streaming approaches that include updates and deletes require logic to determine the latest value (or existence!) of any given row. And all of that logic has to be translated into static data files in a data lake.

In this post I look at one approach to solving this problem: AWS Database Migration Service (DMS) to capture changes from the source database and write them to a Kinesis Data Stream, and Amazon Data Firehose to load those records into Iceberg tables.
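
To make the moving parts concrete: DMS writes each change to the stream as a JSON envelope with `data` and `metadata` fields. Here is a minimal sketch of a consumer that unpacks that envelope, following the documented DMS Kinesis record format; the routing logic is illustrative, since in the post Firehose does this work.

```python
import base64
import json

def handler(event, context):
    """Lambda handler for a Kinesis stream fed by DMS."""
    for record in event["Records"]:
        envelope = json.loads(base64.b64decode(record["kinesis"]["data"]))
        metadata = envelope["metadata"]
        if metadata["record-type"] != "data":
            continue  # skip control records (DDL, checkpoints)
        table = f'{metadata["schema-name"]}.{metadata["table-name"]}'
        operation = metadata["operation"]  # load, insert, update, or delete
        # Illustrative stand-in: Firehose applies each change to the
        # matching Iceberg table as an upsert or delete.
        print(f"{operation} on {table}: {envelope['data']}")
```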

An ML tale: From notebook to production

Data Scientists spend their days working in Jupyter notebooks, which are then handed off to an implementation team to prepare for production. This post guides you through that process, emphasizing iterative refinement. I use the scikit-learn and XGBoost libraries, but other ML libraries could be swapped in.
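
As a taste of the starting point, here is a minimal sketch of the kind of pipeline a notebook might produce, using a toy dataset in place of real features; the hyperparameters are placeholders, not recommendations from the post.

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from xgboost import XGBClassifier

# Toy dataset standing in for the real feature table.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bundling preprocessing and model into one Pipeline yields a single
# artifact to hand off, which simplifies the move to production.
model = Pipeline([
    ("scale", StandardScaler()),
    ("xgb", XGBClassifier(n_estimators=100, max_depth=4)),
])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Persist the fitted pipeline for the implementation team.
joblib.dump(model, "model.joblib")
```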

Websockets feeding Kinesis

We recently explored a project to retrieve data from a third-party service. They didn’t offer any push capabilities such as writing to a Kafka or Kinesis stream, or even a webhook. But they did offer a WebSocket interface, so we explored whether we could use that as our streaming source. We didn’t go that route in the end, but I was intrigued enough by the idea to build a proof of concept.
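
For the curious, the core of such a proof of concept can be quite small. This sketch assumes the websockets package and uses a hypothetical endpoint, stream name, and partitioning scheme:

```python
import asyncio

import boto3
import websockets  # pip install websockets

kinesis = boto3.client("kinesis")

async def relay(url: str, stream_name: str) -> None:
    """Forward each WebSocket message to a Kinesis stream."""
    async with websockets.connect(url) as ws:
        async for message in ws:
            data = message if isinstance(message, bytes) else message.encode()
            # A blocking client call is fine for a proof of concept; a real
            # pipeline would batch with put_records and handle backpressure.
            kinesis.put_record(
                StreamName=stream_name,
                Data=data,
                PartitionKey="poc",  # hypothetical partitioning scheme
            )

# Hypothetical endpoint and stream name.
asyncio.run(relay("wss://example.com/feed", "third-party-events"))
```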

Transforming Data with Amazon Athena

My prior posts used Lambda to do data transformation. But what if we could use a non-programmatic tool, in keeping with the Extract-Load-Transform mindset of the modern data pipeline? As it turns out, we can: Amazon Athena can write data as well as query it. There are, of course, a few stumbles along the way. In this blog post I walk through the process of aggregating CloudTrail data using SQL.
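
The post walks through the details, but the shape of the approach is easy to show: submit an INSERT INTO ... SELECT through the Athena API. The table and column names below are assumptions modeled on the standard CloudTrail table definition, and the output location is a placeholder.

```python
import boto3

# Aggregate raw CloudTrail events into a daily summary table.
SQL = """
INSERT INTO cloudtrail_daily_summary
SELECT
    date(from_iso8601_timestamp(eventtime)) AS event_date,
    eventsource,
    eventname,
    count(*)                                AS event_count
FROM cloudtrail_logs
GROUP BY 1, 2, 3
"""

athena = boto3.client("athena")
response = athena.start_query_execution(
    QueryString=SQL,
    QueryExecutionContext={"Database": "default"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
print(response["QueryExecutionId"])
```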

Aggregating Files in your Data Lake – Part 2

When I ran the Lambda from my previous post against Chariot’s CloudTrail repository, it took almost four minutes to process a single day’s worth of data. That seems like a long time, and as a developer I want to optimize everything I write. In this post I analyze the current runtime and look at options for improving it.
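
One lever worth knowing about, since the workload is dominated by S3 GET latency rather than CPU: reading objects concurrently. This is a generic sketch, not the post's actual optimization:

```python
from concurrent.futures import ThreadPoolExecutor

import boto3

s3 = boto3.client("s3")  # boto3 clients are safe to share across threads

def read_object(bucket: str, key: str) -> bytes:
    return s3.get_object(Bucket=bucket, Key=key)["Body"].read()

def read_all(bucket: str, keys: list[str], workers: int = 32) -> list[bytes]:
    # Network latency, not CPU, dominates here, so threads help even
    # under the GIL.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(lambda k: read_object(bucket, k), keys))
```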

Aggregating Files in your Data Lake – Part 1

As I’ve written in the past, large numbers of small files make for an inefficient data lake. But sometimes you can’t avoid small files. Our CloudTrail repository, for example, has 4,601,675 files as of this morning, 44% of which are under 1,000 bytes. In this post, I develop a Lambda-based data pipeline to aggregate these files, storing them in a new S3 location partitioned by date. Along the way I call out some of the challenges such a pipeline faces.
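
To make the pipeline's core step concrete, here is a minimal sketch of the aggregation itself: read one day's worth of small gzipped objects and write them back as a single date-partitioned object. The bucket names, prefix layout, and file format are assumptions, not the post's exact code.

```python
import gzip

import boto3

s3 = boto3.client("s3")

def aggregate_day(bucket: str, src_prefix: str, dst_bucket: str, day: str) -> None:
    """Concatenate one day's small gzipped files into one object under a
    date-partitioned prefix. Expects day in "YYYY/MM/DD" form."""
    pieces = []
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket, Prefix=f"{src_prefix}/{day}/"):
        for obj in page.get("Contents", []):
            body = s3.get_object(Bucket=bucket, Key=obj["Key"])["Body"].read()
            pieces.append(gzip.decompress(body))
    year, month, dom = day.split("/")
    s3.put_object(
        Bucket=dst_bucket,
        Key=f"aggregated/year={year}/month={month}/day={dom}/events.json.gz",
        Body=gzip.compress(b"".join(pieces)),
    )
```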