You may have moved on, but Java is thriving
If you’re one of those people who left Java before it got into functional programming with Lambdas and Streams, we’ve got news for you.
I’ve always been a fan of database servers: self-contained entities that manage both storage and compute, and give you knobs to turn to optimize your queries. The flip side is that I have an inherent distrust of services such as Athena, which promise to run queries efficiently on structured data spread across many files in a data lake. It just doesn’t seem natural; where are the knobs?
So, since I had data generated for my post on Athena performance with different file types, I decided to use that data in a performance comparison with Redshift.
In my “Friends Don’t Let Friends Use JSON” post, I noted that I preferred the Avro file format to Parquet, because it was easier to write code that uses it. I expected some pushback, and got it: Parquet is “much” more performant. So I decided to do some benchmarking.
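The post itself has the real numbers; as a rough illustration of the kind of comparison involved, here’s a minimal Python sketch — not the post’s actual harness — that times writing the same invented records as Avro (via fastavro) and as Parquet (via pyarrow):

```python
import time
import fastavro
import pyarrow as pa
import pyarrow.parquet as pq

# Invented example data: a million trivial records (the real benchmark
# used larger, richer files, so take these timings with a grain of salt)
records = [{"id": i, "value": i * 0.5} for i in range(1_000_000)]

# Avro: the schema travels with the file
avro_schema = fastavro.parse_schema({
    "name": "Example", "type": "record",
    "fields": [{"name": "id", "type": "long"},
               {"name": "value", "type": "double"}],
})
start = time.perf_counter()
with open("example.avro", "wb") as f:
    fastavro.writer(f, avro_schema, records)
print(f"avro write:    {time.perf_counter() - start:.2f}s")

# Parquet: columnar layout, built from the same records
start = time.perf_counter()
table = pa.Table.from_pylist(records)   # requires pyarrow >= 7
pq.write_table(table, "example.parquet")
print(f"parquet write: {time.perf_counter() - start:.2f}s")
```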
In a perfect world, there would never be a need to connect to your resources running on AWS. In the real world, it’s sometimes necessary to get your hands dirty and look at what’s happening on the actual machine, especially during development. This post dives into a few ways to connect your workstation to resources running inside a VPC. It started out as a how-to for using bastion hosts, but quickly expanded to look beyond the bastion.
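For reference, the classic bastion pattern reduces to an SSH tunnel. A minimal sketch, with hostnames invented for illustration, that forwards a local port through a bastion to a database reachable only inside the VPC:

```python
import subprocess

# Hypothetical hosts: a public bastion and a Postgres RDS endpoint in a private subnet
BASTION = "ec2-user@bastion.example.com"
RDS_ENDPOINT = "mydb.cluster-abc123.us-east-1.rds.amazonaws.com"

# -N: no remote command, just forward ports
# -L: listen on local port 5432 and tunnel to the RDS endpoint via the bastion
subprocess.run([
    "ssh", "-N",
    "-L", f"5432:{RDS_ENDPOINT}:5432",
    BASTION,
])
# While this runs, psql or JDBC pointed at localhost:5432 reaches the private database
```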
Decision support databases have a number of quirks that are not obvious to the casual user, particularly someone coming from an OLTP background. In this post I look at how unbalanced distributions can impact your query performance, how you can identify imbalances, and what you can do to fix them.
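As one example of the “identify imbalances” step, a warehouse like Redshift reports per-table skew in its SVV_TABLE_INFO system view. A minimal sketch, with connection details invented, using the redshift_connector driver:

```python
import redshift_connector

# Invented connection details; in practice these would come from Secrets Manager
conn = redshift_connector.connect(
    host="example-cluster.abc123.us-east-1.redshift.amazonaws.com",
    database="dev", user="admin", password="...",
)

cursor = conn.cursor()
# skew_rows is the ratio of rows on the fullest slice to the emptiest slice
cursor.execute("""
    select "table", diststyle, tbl_rows, skew_rows
    from svv_table_info
    where skew_rows is not null
    order by skew_rows desc
    """)
for table, diststyle, rows, skew in cursor.fetchall():
    print(f"{table:40s} {diststyle:16s} rows={rows:>12}  skew={skew}")
```

A skew_rows value near 1 means rows are evenly spread across slices; a large value means a few slices are doing most of the work.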
Are you running a database with RDS? Would you like to manage it via migrations? This article explains how to use AWS CodeBuild to keep a database schema updated with Flyway, an open-source database migration tool. Configuration is outlined via CloudFormation snippets, and an example repository is provided.
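As a taste of the core step, here’s a minimal sketch — not the article’s actual buildspec — of the kind of script a CodeBuild project might run. The environment variable names are invented, and in practice the password would be pulled from Secrets Manager rather than held in plain text:

```python
import os
import subprocess

# Invented variable names; the CodeBuild project would supply these
db_host = os.environ["DB_HOST"]
db_name = os.environ["DB_NAME"]
db_user = os.environ["DB_USER"]
db_password = os.environ["DB_PASSWORD"]

# Flyway applies any not-yet-run SQL migration scripts, in version order
subprocess.run(
    [
        "flyway",
        f"-url=jdbc:postgresql://{db_host}:5432/{db_name}",
        f"-user={db_user}",
        f"-password={db_password}",
        "-locations=filesystem:sql",
        "migrate",
    ],
    check=True,
)
```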
It’s possible to analyze your Glue jobs using just the logs they produce. Possible. But it’s not a pleasant task: your log messages are buried in messages from the framework, and in the case of a distributed PySpark job they’ll be spread amongst multiple CloudWatch log streams. In this post I look at an alternative: AWS X-Ray, which captures and aggregates “trace segments” that monitor specific sections of your code. With X-Ray, you can easily see where your jobs are spending their time, and compare different runs.
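To give a flavor of what those trace segments look like in code, here is a minimal sketch using the aws_xray_sdk library. The segment names and the load_source_data helper are invented, and wiring the recorder up inside a Glue job is what the post itself covers:

```python
from aws_xray_sdk.core import xray_recorder

# The SDK ships segments to an X-Ray daemon; getting that running
# inside a Glue job is part of what the post covers
xray_recorder.configure(service="example-glue-job")

@xray_recorder.capture("transform")       # records a subsegment for each call
def transform(records):
    return [r for r in records if r]      # stand-in for the real work

with xray_recorder.in_segment("example-run"):
    with xray_recorder.in_subsegment("extract"):
        records = load_source_data()      # hypothetical helper, for illustration
    transform(records)
```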
Today’s small microcontrollers offer impressive functionality and provide an opportunity to replace older, more expensive software and hardware. Consider the case where a facility wants to have control over devices or equipment, with rules that evaluate telemetry from sensors and activate, deactivate, or regulate equipment and other devices. Many large facilities have systems in place that rely on a global standard named BACnet. This protocol has been an ISO standard since 2004 and allows for different types of devices…
Two of the biggest mistakes companies make in building an MVP (minimum viable product) and how you can avoid them.
In my last post I recommended using Avro for file storage in a data lake. It has the benefits of compact storage and a schema in every file that tells you what data it holds. In this post I show three ways to generate Avro files: one in Java, and two in Python.
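As a preview of one plausible Python route, here is a minimal sketch using the fastavro library, with an invented schema and records:

```python
from fastavro import parse_schema, writer

# An invented schema for illustration; every Avro data file embeds its
# schema, so readers need no out-of-band information to interpret it
schema = parse_schema({
    "name": "UserAction",
    "type": "record",
    "fields": [
        {"name": "user_id",   "type": "string"},
        {"name": "action",    "type": "string"},
        {"name": "timestamp", "type": "long"},
    ],
})

records = [
    {"user_id": "u123", "action": "login",  "timestamp": 1700000000000},
    {"user_id": "u456", "action": "logout", "timestamp": 1700000060000},
]

with open("actions.avro", "wb") as f:
    writer(f, schema, records, codec="deflate")  # block-compressed output
```

Because the schema lives in the file header, any Avro reader can process the output without being told in advance what it holds.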