IoT on AWS: A Philly Cloud Computing Event

Wednesday, November 6th — 9AM to 5PM — Science History Museum, Philadelphia, PA

Thank you for attending!

We have archived the slides, and you can view and download them from the link provided.

Videos for each of the ten talks are now online!

The GitHub project is located at https://github.com/chariotsolutions/aws-iot-workshop.

About the Event

Cloud computing is a natural counterpart to smart devices: it’s available anywhere, scalable to meet your needs, and generally more reliable than self-hosted hardware. Amazon Web Services provides a start-to-finish IoT solution: from gathering data, to storing it securely, to analyzing it and providing the results to your users.

In this one-day event we’ll present a high-level overview of the steps involved in building an IoT data pipeline. In short sessions, our speakers will follow data from an IoT device as it is ingested, analyzed, secured, and used to make decisions. By the end of the day you will have a basic understanding of the complete pipeline.

Afternoon Hands-On Workshop

There will be an optional afternoon workshop, limited to 50 participants, providing hands-on experience with connecting IoT devices and working with the data they produce.

Sessions: Building a Data Pipeline

Scenario: An IoT pipeline exists to transform data into information. As a working example, we use sensors that report room temperature, humidity, and occupancy, and translate that raw data into information that would be useful to a building manager.
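
For a concrete sense of what that translation looks like, here is a tiny sketch in Python. The field names and thresholds are made up for illustration; they are not part of the event materials.

    import json

    # A hypothetical raw reading, as a sensor might report it.
    reading = {
        "sensor_id": "room-101",
        "timestamp": "2019-11-06T14:35:00Z",
        "temperature_c": 27.4,
        "humidity_pct": 61.0,
        "occupied": True,
    }

    def summarize(reading):
        """Turn a raw reading into something a building manager can act on."""
        too_warm = reading["occupied"] and reading["temperature_c"] > 25.0
        return {
            "room": reading["sensor_id"],
            "action": "increase cooling" if too_warm else "none",
        }

    print(json.dumps(summarize(reading)))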

Ken Rimple discusses the key concepts and terminology of the Internet of Things, including types of IoT devices, networking hardware, transports and protocols, and how they relate to our event and the real world, with connectivity examples drawn from a variety of devices.

By Ken Rimple, Director of Training at Chariot Solutions

There are many ways to add wireless connectivity to a device. We’ll take a brief tour of some of these: WiFi, Bluetooth, ZigBee, Z-Wave, NFC, cellular, LoRa, and Sigfox. We’ll talk about the pros and cons of each technology to help you decide which is best for your use case.

By Don Coleman, CIO of Chariot Solutions

What is MQTT? How does it work? Why should you care? We'll discuss the MQTT protocol and how AWS IoT Core acts as an MQTT broker, sending and receiving messages to and from devices.
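
As a minimal illustration of publish and subscribe, here is a sketch using the paho-mqtt Python client (1.x API). The broker hostname, topic, and payload are placeholders, not values from the talk.

    import json
    import paho.mqtt.client as mqtt

    BROKER = "broker.example.com"    # placeholder broker hostname
    TOPIC = "sensors/room-101/data"  # placeholder topic

    def on_message(client, userdata, message):
        # Called for every message received on a subscribed topic.
        print(message.topic, message.payload.decode())

    client = mqtt.Client()
    client.on_message = on_message
    client.connect(BROKER, 1883)
    client.subscribe(TOPIC)

    # Publish a reading, then keep processing incoming messages.
    client.publish(TOPIC, json.dumps({"temperature_c": 22.5}))
    client.loop_forever()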

By Don Coleman, CIO of Chariot Solutions

AWS IoT provides connectivity to IoT devices through HTTP and MQTT. In this session we’ll learn how to use AWS IoT Core as an MQTT broker, how to connect your devices using a client certificate, how policies enforce data security, and how rules move data elsewhere in the AWS infrastructure.
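
For example, connecting to AWS IoT Core with a client certificate might look roughly like the following, again using the paho-mqtt 1.x client; the endpoint and certificate file names are placeholders for your account-specific values.

    import json
    import paho.mqtt.client as mqtt

    # Placeholders: your account-specific AWS IoT endpoint and the files
    # created when you registered the device certificate.
    ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"
    CA_CERT = "AmazonRootCA1.pem"
    DEVICE_CERT = "device.pem.crt"
    PRIVATE_KEY = "private.pem.key"

    client = mqtt.Client(client_id="room-101")
    client.tls_set(ca_certs=CA_CERT, certfile=DEVICE_CERT, keyfile=PRIVATE_KEY)

    # AWS IoT Core accepts MQTT over TLS on port 8883; the IoT policy attached
    # to the certificate must allow this client ID and topic.
    client.connect(ENDPOINT, 8883)
    client.publish("sensors/room-101/data", json.dumps({"temperature_c": 22.5}))
    client.loop(2)
    client.disconnect()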

By Ken Rimple, Director of Training at Chariot Solutions

While IoT Core can route your device messages directly to subscribers, you gain flexibility, scalability and reliability when you put a Kinesis stream in the data path. Kinesis is a persistent data log that accepts messages from multiple producers, buffers them for up to a week, and allows multiple consumers to read them.

This talk will cover the high-level design of Kinesis, how it scales, how clients can retrieve records that have been stored in it, and the use of Kinesis Analytics to transform data and extract outliers.
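
A minimal producer/consumer sketch with boto3 gives a sense of the API. The stream name is a placeholder; in the pipeline described above, an IoT Core rule would do the producing for you.

    import json
    import boto3

    kinesis = boto3.client("kinesis")
    STREAM = "iot-sensor-data"  # placeholder stream name

    # Producer: write one record; the partition key keeps readings from a
    # given sensor in order on the same shard.
    kinesis.put_record(
        StreamName=STREAM,
        Data=json.dumps({"sensor_id": "room-101", "temperature_c": 27.4}).encode(),
        PartitionKey="room-101",
    )

    # Consumer: read from the start of the first shard.
    shard_iterator = kinesis.get_shard_iterator(
        StreamName=STREAM,
        ShardId="shardId-000000000000",
        ShardIteratorType="TRIM_HORIZON",
    )["ShardIterator"]

    for record in kinesis.get_records(ShardIterator=shard_iterator)["Records"]:
        print(json.loads(record["Data"]))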

By Keith Gregory, AWS Practice Lead at Chariot Solutions

Data has different purposes over time: when fresh, it can be used for real-time decision-making; as it ages, it becomes useful for analytics; eventually, it becomes a record, useful or perhaps not. Each of these stages requires a different approach to storage and management, and this talk looks at appropriate ways to work with your data at the different stages of its life.
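
One concrete way to express those stages, assuming the aging data lands in S3, is a lifecycle rule that shifts objects to cheaper storage classes as they age and eventually expires them. The bucket name, prefix, and day counts below are illustrative.

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-sensor-archive",  # placeholder bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "age-out-sensor-data",
                    "Filter": {"Prefix": "sensor-data/"},
                    "Status": "Enabled",
                    # Fresh data stays in Standard; older data moves to cheaper
                    # tiers for occasional analytics, then is deleted.
                    "Transitions": [
                        {"Days": 30, "StorageClass": "STANDARD_IA"},
                        {"Days": 365, "StorageClass": "GLACIER"},
                    ],
                    "Expiration": {"Days": 1825},
                }
            ]
        },
    )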

By Keith Gregory, AWS Practice Lead at Chariot Solutions

Storing your data in AWS can be the best decision you ever make, or the worst nightmare you can fathom. It all depends on the decisions you make during the design and implementation phases of your project and the diligence you apply throughout your development and operations cycles. This presentation will take you through the biggest areas where you need to focus your efforts to keep your data safe at AWS, and will show some real-life examples of what can go wrong if you make compromises or allow bad practices.
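
Two of the simpler safeguards in this space can be sketched with boto3: blocking public access on the bucket that holds your data, and encrypting objects at rest by default. The bucket name is a placeholder, and these are examples of the kinds of controls involved rather than the talk's specific recommendations.

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "example-sensor-archive"  # placeholder bucket name

    # Refuse public ACLs and public bucket policies outright.
    s3.put_public_access_block(
        Bucket=BUCKET,
        PublicAccessBlockConfiguration={
            "BlockPublicAcls": True,
            "IgnorePublicAcls": True,
            "BlockPublicPolicy": True,
            "RestrictPublicBuckets": True,
        },
    )

    # Encrypt new objects with KMS by default.
    s3.put_bucket_encryption(
        Bucket=BUCKET,
        ServerSideEncryptionConfiguration={
            "Rules": [
                {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "aws:kms"}}
            ]
        },
    )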

By Steve Pressman, President & Chief Solutions Architect at Alpine Cyber Solutions

This talk will review two common use cases for captured metric data: 1) real-time analysis, visualization, and quality assurance, and 2) ad-hoc analysis. To support these use cases, metric data must be ingested through a robust and fault-tolerant streaming framework once it is generated. The most common open source streaming options will be mentioned, but this talk will focus on Apache Flink specifically. A brief discussion of Apache Beam will also be included in the context of the larger discussion of a unified data processing model.
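
For a sense of what a unified processing model looks like, here is a toy pipeline written with the Apache Beam Python SDK; the sample records and threshold are made up, and the same pipeline shape applies whether the source is a bounded batch or a stream.

    import apache_beam as beam

    # Toy readings standing in for a metric stream.
    readings = [
        {"sensor_id": "room-101", "temperature_c": 22.5},
        {"sensor_id": "room-102", "temperature_c": 31.0},
    ]

    with beam.Pipeline() as pipeline:
        (
            pipeline
            | "CreateReadings" >> beam.Create(readings)
            | "FlagOutliers" >> beam.Filter(lambda r: r["temperature_c"] > 30.0)
            | "Print" >> beam.Map(print)
        )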

Best practices around data persistence will be discussed. An attempt will be made to eliminate confusion about the format data should take when it is ‘at rest’. Different serialization formats will be compared and discussed in the context of the most typical analysis use cases. Finally, fully managed solutions such as AWS Data Lake will be mentioned briefly, along with their relative advantages and disadvantages.
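
As a small illustration of the at-rest format question, the same records can be written as newline-delimited JSON or as Avro (here using the fastavro library; the schema and file names are illustrative). Avro carries an explicit schema and produces a compact binary encoding, which matters once analysis tools read the data repeatedly.

    import json
    from fastavro import parse_schema, writer

    records = [
        {"sensor_id": "room-101", "timestamp": 1573050900, "temperature_c": 27.4},
        {"sensor_id": "room-102", "timestamp": 1573050901, "temperature_c": 21.9},
    ]

    # Option 1: newline-delimited JSON -- human-readable, schema left implicit.
    with open("readings.jsonl", "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")

    # Option 2: Avro -- explicit schema, compact binary encoding.
    schema = parse_schema({
        "name": "Reading",
        "type": "record",
        "fields": [
            {"name": "sensor_id", "type": "string"},
            {"name": "timestamp", "type": "long"},
            {"name": "temperature_c", "type": "double"},
        ],
    })
    with open("readings.avro", "wb") as f:
        writer(f, schema, records)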

By Eric Snyder, Software Architect at Chariot Solutions

In this session we will walk through the steps required to securely communicate with your device using the Device Shadow service. This will include an overview of user authentication and authorization, connecting to AWS IoT, and using MQTT to communicate with the device's “Device Shadow” to read and update its state. All this, using the AWS Amplify CLI and SDK.
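
The session itself uses the AWS Amplify CLI and SDK; as a language-neutral illustration of the shadow document, here is a sketch that reads and updates a thing's shadow with boto3's IoT data-plane client (the thing name and state fields are placeholders).

    import json
    import boto3

    iot_data = boto3.client("iot-data")
    THING = "room-101-sensor"  # placeholder thing name

    # Set a desired state; the device sees the resulting delta and acts on it.
    iot_data.update_thing_shadow(
        thingName=THING,
        payload=json.dumps({"state": {"desired": {"led": "on"}}}),
    )

    # Read back the full shadow document (desired, reported, and delta sections).
    response = iot_data.get_thing_shadow(thingName=THING)
    shadow = json.loads(response["payload"].read())
    print(shadow["state"])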

By Steve Smith, Mobile Practice Lead at Chariot Solutions

Amazon uses a “pay as you go” pricing model: you pay for the resources that you use, and in most cases don’t need to pre-allocate resources. While this allows your business to scale, it means that each component of your data pipeline will incur a separate charge, which can obscure the overall cost of running the pipeline. This talk will examine those charges, along with strategies for partitioning those costs between your clients or organizational units.
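
One common way to partition those costs is to apply a cost-allocation tag to every resource in the pipeline and then group the bill by that tag, for example through the Cost Explorer API. The tag key and dates below are placeholders, and the tag must be activated as a cost-allocation tag in the billing console before it appears in results.

    import boto3

    ce = boto3.client("ce")  # Cost Explorer

    # Group last month's spend by a cost-allocation tag on pipeline resources.
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": "2019-10-01", "End": "2019-11-01"},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "client"}],  # placeholder tag key
    )

    for group in response["ResultsByTime"][0]["Groups"]:
        print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])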

By Keith Gregory, AWS Practice Lead at Chariot Solutions

Hands-On Workshop (Optional, Additional Cost)

Includes All Sessions

This is an additional afternoon workshop, limited to 50 participants, providing hands-on experience with connecting IoT devices and working with the data they produce.

During this work-at-your-own-pace workshop, you will work side by side with Chariot engineers to get hands-on experience programming Arduino-based IoT devices, connecting them to a secure, cloud-connected data pipeline, and working with the data they produce using the AWS tools and services introduced in the morning’s talks.

Sponsored by