Introduction to MQTT – IoT on AWS – A Philly Cloud Computing Event
What is MQTT? How does it work? Why should you care? We’ll discuss the MQTT protocol and how AWS IoT Core acts as an MQTT broker, sending and receiving messages to and from devices.
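For a concrete feel for the protocol, here is a minimal publish/subscribe sketch using the open-source paho-mqtt library (1.x callback API) against a public test broker; the topic name and sample payload are placeholders for illustration, not anything from the session itself.

```python
# Minimal MQTT publish/subscribe sketch (paho-mqtt 1.x callback signatures).
# Broker, topic, and payload are illustrative placeholders.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    # Subscribe and publish only after the broker acknowledges the connection.
    client.subscribe("sensors/temperature")
    client.publish("sensors/temperature", "21.5")

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("test.mosquitto.org", 1883)  # public test broker
client.loop_forever()
```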
AWS IoT provides connectivity to IoT devices through HTTP and MQTT. In this session we’ll learn how to leverage AWS IoT Core as an MQTT broker, how to connect your devices using a client certificate, how policies can enforce data security, and how rules are used to move data elsewhere in the AWS infrastructure.
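As a taste of the certificate-based connection flow, here is a hedged sketch using paho-mqtt with mutual TLS; the endpoint, file paths, topic, and client ID are placeholders you would replace with values from your own AWS IoT account (the endpoint comes from `aws iot describe-endpoint`).

```python
# Connecting a device to AWS IoT Core with X.509 mutual TLS (paho-mqtt 1.x).
# Endpoint, file paths, and topic are illustrative placeholders.
import paho.mqtt.client as mqtt

ENDPOINT = "xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com"  # from `aws iot describe-endpoint`

def on_connect(client, userdata, flags, rc):
    # The IoT policy attached to the certificate must allow
    # iot:Connect and iot:Publish on this topic.
    client.publish("telemetry/device1", '{"temp": 21.5}')

client = mqtt.Client(client_id="my-device")
client.tls_set(
    ca_certs="AmazonRootCA1.pem",  # Amazon root CA
    certfile="device.pem.crt",     # device certificate registered with AWS IoT
    keyfile="private.pem.key",     # matching private key
)
client.on_connect = on_connect
client.connect(ENDPOINT, 8883)     # AWS IoT Core accepts MQTT over TLS on 8883
client.loop_forever()
```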
Data has different purposes over time: when fresh, it can be used for real-time decision-making; as it ages, it becomes useful for analytics; eventually, it becomes a record, useful or perhaps not. Each of these stages requires a different approach to storage and management, and this talk looks at appropriate ways to work with your data at the different stages of its life.
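As one illustration of matching storage to a lifecycle stage, here is a sketch of an S3 lifecycle rule set through boto3; the bucket name, prefix, and day thresholds are assumptions for illustration, not recommendations from the talk.

```python
# Age-based storage tiering with an S3 lifecycle rule via boto3.
# Bucket, prefix, and thresholds are illustrative assumptions.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="my-data-bucket",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "age-out-metrics",
            "Filter": {"Prefix": "metrics/"},
            "Status": "Enabled",
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},  # analytics-age data
                {"Days": 365, "StorageClass": "GLACIER"},     # archival record
            ],
            "Expiration": {"Days": 1825},  # drop records once no longer useful
        }]
    },
)
```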
This talk will review two common use cases for captured metric data: 1) real-time analysis, visualization, and quality assurance, and 2) ad-hoc analysis. The most common open source streaming options will be mentioned; however, this talk will be concerned with Apache Flink specifically. A brief discussion of Apache Beam will also be included in the context of the larger discussion of a unified data processing model.
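For a flavor of the Flink programming model, here is a minimal sketch in the PyFlink DataStream API; the inline sample data and the running-maximum "quality check" are stand-ins for a real metric stream.

```python
# A minimal keyed aggregation in Apache Flink's Python DataStream API (PyFlink).
# The inline collection stands in for a real source such as Kafka or Kinesis.
from pyflink.datastream import StreamExecutionEnvironment

env = StreamExecutionEnvironment.get_execution_environment()

# (sensor_id, reading) pairs.
readings = env.from_collection([("s1", 21.5), ("s2", 19.0), ("s1", 22.1)])

# Running maximum per sensor: a toy stand-in for real-time quality checks.
(readings
    .key_by(lambda r: r[0])
    .reduce(lambda a, b: (a[0], max(a[1], b[1])))
    .print())

env.execute("metric-qa-sketch")
```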
In this session we will walk through the steps required to securely communicate with your device using the Device Shadow service. This will include an overview of user authentication and authorization, connecting to AWS IoT, and using MQTT to communicate with the device’s “Device Shadow” to read and update its state. All this, using the AWS Amplify CLI and SDK.
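The session itself uses the AWS Amplify CLI and SDK; to show the shadow topic contract in a library-neutral way, here is a hedged paho-mqtt sketch that reports state to a Device Shadow. The thing name, endpoint, and certificate paths are placeholders.

```python
# Reporting device state to an AWS IoT Device Shadow over MQTT (paho-mqtt 1.x).
# Thing name, endpoint, and file paths are illustrative placeholders.
import json
import paho.mqtt.client as mqtt

THING = "my-thing"
UPDATE_TOPIC = f"$aws/things/{THING}/shadow/update"

def on_connect(client, userdata, flags, rc):
    # Listen for the broker's response to our shadow updates.
    client.subscribe(f"{UPDATE_TOPIC}/accepted")
    # Report current device state to the shadow document.
    report = {"state": {"reported": {"led": "on"}}}
    client.publish(UPDATE_TOPIC, json.dumps(report))

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode()}")

client = mqtt.Client(client_id=THING)
client.tls_set(ca_certs="AmazonRootCA1.pem",
               certfile="device.pem.crt", keyfile="private.pem.key")
client.on_connect = on_connect
client.on_message = on_message
client.connect("xxxxxxxxxxxxxx-ats.iot.us-east-1.amazonaws.com", 8883)
client.loop_forever()
```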
This presentation will take you through the biggest areas where you need to focus your efforts in order to keep your data safe in AWS, and will show some real-life examples of what could go wrong if you make compromises or allow bad practices.
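As one small example of the kind of guardrail covered here, this boto3 sketch turns on S3 Block Public Access for a bucket; the bucket name is a placeholder.

```python
# Blocking public access to an S3 bucket via boto3; bucket name is a placeholder.
import boto3

s3 = boto3.client("s3")
s3.put_public_access_block(
    Bucket="my-data-bucket",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,        # refuse new public ACLs
        "IgnorePublicAcls": True,       # ignore any existing public ACLs
        "BlockPublicPolicy": True,      # refuse public bucket policies
        "RestrictPublicBuckets": True,  # limit access granted by any public policy
    },
)
```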
Amazon uses a “pay as you go” pricing model: you pay for the resources that you use, and in most cases don’t need to pre-allocate resources. While this allows your business to scale, it means that each component of your data pipeline will incur a separate charge, which can obscure the overall cost of running the pipeline. This talk will examine those charges, along with strategies for partitioning those costs between your clients or organizational units.
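One common partitioning strategy is cost-allocation tags; this hedged sketch queries the Cost Explorer API for a month of unblended cost grouped by a tag named client, an assumed tag key that must first be activated as a cost-allocation tag in the billing console.

```python
# Partitioning a month's costs by a cost-allocation tag via Cost Explorer.
# The tag key "client" and the dates are illustrative assumptions.
import boto3

ce = boto3.client("ce")
resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-09-01", "End": "2019-10-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "client"}],
)
for group in resp["ResultsByTime"][0]["Groups"]:
    print(group["Keys"], group["Metrics"]["UnblendedCost"]["Amount"])
```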