Blog

Perils of Partitioning

Partitioning is one of the easiest ways to improve the performance of your data lake, because it reduces the amount of data scanned. But implementing partitions can be surprisingly challenging, as can their effective use. In this post I look at several of the issues that you should consider when partitioning your data.
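
Not from the post itself, but as a concrete illustration: a minimal sketch of partition pruning with Athena, assuming a hypothetical `web_logs` table partitioned by a `dt` date column (the table, database, and bucket names are placeholders).

```python
import boto3

# Minimal sketch: querying a hypothetical Athena table partitioned by `dt`.
athena = boto3.client("athena")

# Filtering on the partition column lets Athena skip every other partition's
# S3 prefix entirely, so only one day of data is scanned.
query = """
    SELECT status, COUNT(*) AS hits
    FROM web_logs
    WHERE dt = '2024-03-01'   -- partition predicate: prunes all other days
    GROUP BY status
"""

athena.start_query_execution(
    QueryString=query,
    QueryExecutionContext={"Database": "analytics"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```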

Large Language Model (LLM) Coding Assistance

With all the hype surrounding generative AI and LLMs, and all the hallucinations mentioned in the news, what are these models actually good for? As it turns out, LLMs trained for code generation are genuinely helpful. But what if you don’t want your code going to some cloud provider? The approach below is a great solution for that. Here is the plan: install Ollama and load the model, install Continue, try it out, and wrap up with a conclusion. Install Ollama and load the model: Ollama allows you to run…
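
For a taste of what that setup looks like, here is a minimal sketch that talks to a locally running Ollama server directly over its HTTP API; the `codellama` model name and the prompt are assumptions, not from the post. Continue can be pointed at this same local endpoint.

```python
import requests

# Minimal sketch: ask a locally running Ollama server for a code suggestion.
# Nothing leaves your machine -- the server listens on localhost:11434.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "codellama",  # assumed model; pull it first with `ollama pull`
        "prompt": "Write a Python function that reverses a string.",
        "stream": False,       # return one JSON object instead of a token stream
    },
    timeout=120,
)
print(resp.json()["response"])
```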

Transforming Data with Amazon Athena

My prior posts used Lambda to do data transformation. But what if we could use a non-programmatic tool, in keeping with the Extract-Load-Transform mindset of the modern data pipeline? As it turns out, we can: Amazon Athena can write data as well as query it. There are, of course, a few stumbles along the way. In this blog post I walk through the process of aggregating CloudTrail data using SQL.
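
To make the idea concrete, here is a minimal sketch (not the post’s actual queries) of a CTAS statement that has Athena write aggregated results back to S3; the table and bucket names are placeholders.

```python
import boto3

# Minimal sketch: a CTAS (CREATE TABLE AS SELECT) statement makes Athena
# write the aggregated result set to S3 as a new table.
athena = boto3.client("athena")

ctas = """
    CREATE TABLE cloudtrail_daily_counts
    WITH (format = 'PARQUET',
          external_location = 's3://my-bucket/cloudtrail-aggregated/') AS
    SELECT eventsource, eventname, COUNT(*) AS event_count
    FROM cloudtrail_logs
    GROUP BY eventsource, eventname
"""

athena.start_query_execution(
    QueryString=ctas,
    QueryExecutionContext={"Database": "audit"},
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},
)
```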

From RAGs to Riches – Adding Context to Your LLM

In my previous post, Experiences in Fine-Tuning LLMs: Time + Power = Potato?, I covered my experiences trying to fine-tune an LLM (large language model) with a dataset, which gave me less than stellar results. Ultimately, fine-tuning is best for use cases where additional reasoning and logic need to be added to an LLM, but it’s subpar for adding information. However, if you’re trying to get an LLM to answer questions using your data, then retrieval augmented generation (RAG)…
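
As a rough illustration of the RAG loop, here is a minimal sketch assuming a local Ollama server with an embedding model pulled; the documents, model names, and question are all placeholders, not the post’s setup.

```python
import numpy as np
import requests

OLLAMA = "http://localhost:11434"

# Placeholder "knowledge base" standing in for your own data.
docs = [
    "Our returns policy allows refunds within 30 days.",
    "Support hours are 9am-5pm Eastern, Monday through Friday.",
]

def embed(text: str) -> np.ndarray:
    # Assumed embedding model; pull it first with `ollama pull nomic-embed-text`.
    r = requests.post(f"{OLLAMA}/api/embeddings",
                      json={"model": "nomic-embed-text", "prompt": text})
    return np.array(r.json()["embedding"])

# Retrieve: pick the stored document with the highest cosine similarity.
doc_vecs = [embed(d) for d in docs]
question = "When can I get a refund?"
q = embed(question)
best = docs[int(np.argmax([v @ q / (np.linalg.norm(v) * np.linalg.norm(q))
                           for v in doc_vecs]))]

# Augment + generate: prepend the retrieved context to the prompt.
answer = requests.post(f"{OLLAMA}/api/generate", json={
    "model": "mistral",
    "prompt": f"Context: {best}\n\nQuestion: {question}",
    "stream": False,
}).json()["response"]
print(answer)
```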

Experiences in Fine-Tuning LLMs: Time + Power = Potato?

Embarking on the journey to fine-tune large language models (LLMs) can often feel like setting sail into uncharted waters, armed with hope and a map of best practices. Yet, despite meticulous planning and execution, the quest for improved performance doesn’t always lead to the treasure trove of success one might anticipate. And I know you may be wondering how potatoes come into play here, but I promise that we’ll get to it. From the challenges of data scarcity to resource…

Apple Silicon GPUs, Docker and Ollama: Pick two.

As part of our research on LLMs, we started working on a chatbot project using RAG, Ollama and Mistral. Our developer hardware varied between MacBook Pros (M1 chip) and one Windows machine with a "Superbad" GPU running Docker on WSL2. All hail the desktop with the big GPU. We planned on deploying to an Amazon EC2 instance as a quick test (running Docker on a g4dn.xlarge instance), and I thought initially that we could use…

Getting started with LLM in the Cloud with Amazon DLAMI EC2 Instances

Large Language Model (LLM) chatbots like ChatGPT are all the rage these days. You may be experimenting with building one of your own using a model runtime engine like Ollama, possibly accessing it with the LangChain API, maybe integrating it with a vector database for your custom data and using Retrieval Augmented Generation (RAG), or even fine-tuning a base model to create one customized for the data you want to access. Whatever the reason, you’ll quickly find out that…
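
For a flavor of the getting-started step, here is a minimal sketch of launching a GPU instance from a Deep Learning AMI with boto3; the AMI ID, key pair, and region are placeholders you would replace with current values for your account.

```python
import boto3

# Minimal sketch: launch a GPU instance from a Deep Learning AMI (DLAMI).
ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder: look up the current DLAMI ID
    InstanceType="g4dn.xlarge",       # entry-level NVIDIA T4 GPU instance
    KeyName="my-key-pair",            # placeholder key pair for SSH access
    MinCount=1,
    MaxCount=1,
)
```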

PostgreSQL Text Search

A common problem in software development is searching through text documents. For example, if you have a database of recipes, you might want to search by one or more ingredients, or if you have a collection of server log files, you might want to search for all errors that did not come from the database. This type of functionality is called “text search”. There are a lot of text search libraries like Lucene, or applications like Elasticsearch (which is…
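
To make the log-file example concrete, here is a minimal sketch of that query from Python with psycopg2; the connection string and the table and column names are assumptions.

```python
import psycopg2

# Minimal sketch of PostgreSQL full-text search, matching the log-file example:
# find rows containing "error" but not "database".
conn = psycopg2.connect("dbname=logs user=postgres")
cur = conn.cursor()

cur.execute(
    """
    SELECT id, message
    FROM server_logs
    WHERE to_tsvector('english', message)
          @@ to_tsquery('english', 'error & !database')
    """
)
for row in cur.fetchall():
    print(row)
```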

