AWS Machine Learning Blog

Building an Autonomous Vehicle Part 3: Connecting Your Autonomous Vehicle

In the first blog post in our autonomous vehicle series, you built your Donkey vehicle and deployed your pilot server onto an Amazon EC2 instance. In the second blog post, you learned to drive the Donkey car, and the Donkey car learned to drive itself. In this blog post, we’ll cover the process of streaming telemetry from the Donkey vehicle into AWS. We’ll use the AWS IoT service because it provides a scalable, reliable, and feature-rich platform for all kinds of connected devices, including our connected vehicle.


1) Build an Autonomous Vehicle on AWS and Race It at the re:Invent Robocar Rally
2) Build an Autonomous Vehicle Part 2: Driving Your Vehicle
3) Building an Autonomous Vehicle Part 3: Connecting Your Autonomous Vehicle
4) Building an Autonomous Vehicle Part 4: Using Behavioral Cloning with Apache MXNet for Your Self-Driving Car


AWS IoT setup

The autonomous car generates a constant stream of telemetry while it is driving. When the vehicle is not driving, there is no telemetry to gather, and we don’t want to pay for idle resources. To accommodate this bursty workload, we are going to rely on serverless technologies to power our entire architecture. To start, we are going to use AWS IoT to design a fleet monitoring service that can support any number of vehicles using the same basic architecture. The following diagram illustrates the architecture for the fleet monitoring service.

The components of this solution are color-coded by logical functionality (and AWS services) to show how each part of the solution is secure, scalable, and entirely usage based. This operational model suits many business models and customers of all sizes, whether it’s a weekend autonomous vehicle hobbyist who wants to track and compare lap times, or an automobile manufacturer developing its own connected vehicle platform.

This solution begins with the Donkey car, shaded in green. Data passes through the AWS IoT service to the pink section for short-term storage in DynamoDB, and then to the blue section for longer-term storage in Amazon S3. Additionally, data published to an AWS IoT topic can be queried in real time, and, as shown below, will be used to drive a dashboard.

The Donkey car is already internet-connected, which means that it is capable of streaming telemetry to a centralized location, in this case AWS IoT. To ensure the security of the telemetry, we’ll generate certificates from AWS IoT and deploy them to the Donkey vehicle. By using certificates, the Donkey car can securely communicate with AWS IoT over MQTT using TLS 1.2. MQTT is an extremely lightweight protocol that tolerates low-quality network connections, so it copes well with the connection reliability challenges a moving vehicle faces.
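To make this concrete, here is a minimal sketch of publishing one telemetry record over MQTT with TLS client-certificate authentication. The endpoint, topic name, certificate file names, and telemetry fields are hypothetical placeholders, and the publish step assumes the third-party paho-mqtt library; a real Donkey deployment might use the AWS IoT Device SDK instead.

```python
import json
import time

# Hypothetical values -- substitute your own AWS IoT endpoint and topic.
IOT_ENDPOINT = "example-ats.iot.us-east-1.amazonaws.com"
TELEMETRY_TOPIC = "donkey/telemetry"


def build_telemetry(vehicle_id, steering, throttle):
    """Assemble one telemetry record as a JSON string."""
    return json.dumps({
        "vehicleID": vehicle_id,
        "time": int(time.time() * 1000),  # epoch milliseconds
        "steering": steering,
        "throttle": throttle,
    })


def publish_telemetry(payload):
    """Publish over MQTT on port 8883 using the AWS IoT certificates.

    Requires the third-party paho-mqtt package and the certificate
    files generated in the AWS IoT console (file names are placeholders).
    """
    import paho.mqtt.client as mqtt  # pip install paho-mqtt
    client = mqtt.Client()
    client.tls_set(ca_certs="root-CA.crt",
                   certfile="donkey.cert.pem",
                   keyfile="donkey.private.key")
    client.connect(IOT_ENDPOINT, 8883)  # AWS IoT accepts MQTT over TLS on 8883
    client.publish(TELEMETRY_TOPIC, payload, qos=1)
    client.disconnect()


record = build_telemetry("donkey-01", steering=-0.2, throttle=0.5)
```

The payload-building and publishing concerns are kept separate so the record format can be tested without a network connection.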

The simplest way to handle security is to use the one-click certificate creation method. To do this, open the AWS IoT console. In the navigation pane at the left, choose Security. Next, choose the Create button.

Create a certificate

A certificate can be created using the AWS IoT console, the AWS IoT API, or, as we did in our previous blog post, using the Amazon EC2 Systems Manager Run Command on the Raspberry Pi to generate and deliver the certificates. By performing this process with EC2 Systems Manager, we don’t need to manually copy the certificates to the Raspberry Pi.

For the sake of simplicity, we’ll use the AWS IoT console to complete the process.

We’re going to create a new policy that will be used to grant our Donkey car specific permissions to AWS services. This allows us to set fine-grained vehicle-specific settings, such as which AWS IoT topics can be accessed, and which IoT actions can be taken.

Begin by opening your AWS Management Console. Go to the AWS IoT console and choose Security, then Policies. Then choose Create.

We’ll use this policy to allow all actions to be performed against the AWS IoT service because we don’t need to restrict the Donkey car from any of the service’s functionality. These privileges include: iot:Publish, iot:Subscribe, iot:Connect, iot:Receive, iot:UpdateThingShadow, iot:GetThingShadow, and iot:DeleteThingShadow.
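A policy document granting exactly the actions listed above might look like the following sketch. The wildcard `Resource` matches the permissive intent described here; a production fleet would typically scope it to specific topic and thing ARNs.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Subscribe",
        "iot:Connect",
        "iot:Receive",
        "iot:UpdateThingShadow",
        "iot:GetThingShadow",
        "iot:DeleteThingShadow"
      ],
      "Resource": "*"
    }
  ]
}
```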

In addition to the certificate and the policy, the AWS IoT service needs a Thing that describes our Donkey car. To create it, in the navigation pane at the left, choose Registry, and then Things. Choose Create, provide your Thing with a name, and then choose Create thing.

Now that the policy and thing are created, we can associate them with the certificate that we previously created. To do that, in the navigation pane on the left, choose Security, Certificates. Locate and then select the certificate that you created previously. From the Actions menu, choose Attach Policy. Then from the Actions menu, choose Attach Thing.
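The same create-and-attach steps can be scripted with the AWS CLI. The thing name, policy name, and certificate ARN below are hypothetical placeholders; substitute the values from your own account.

```shell
# Hypothetical names -- substitute your own thing, policy, and certificate ARN.
aws iot create-thing --thing-name DonkeyCar
aws iot attach-policy --policy-name DonkeyPolicy \
    --target arn:aws:iot:us-east-1:123456789012:cert/abcd1234
aws iot attach-thing-principal --thing-name DonkeyCar \
    --principal arn:aws:iot:us-east-1:123456789012:cert/abcd1234
```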

AWS IoT rules

Now that we have a means of generating and communicating telemetry, we can focus on building the rest of the functionality of the solution. Our solution calls for two rules, one for Amazon DynamoDB to be accessed by our dashboard, and the other for Amazon Kinesis Firehose to distribute all telemetry to Amazon S3 and Amazon Kinesis Analytics for real-time analysis of the telemetry.

DynamoDB Rule

We will start by creating the rule for DynamoDB. In the navigation pane at the left in the AWS IoT console, choose Rules. Then choose the Create button.

On the Create a rule page, give the rule a name and add a description that easily identifies the type of data and the AWS service that is consuming those records.
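The rule also needs a query statement that selects which messages it processes. Assuming the telemetry is published to a topic such as `donkey/telemetry` (the topic name is a placeholder), a query that forwards every field of every message would look like this:

```sql
SELECT * FROM 'donkey/telemetry'
```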


Then choose Add action, and select Split message into multiple columns of a database table (DynamoDBv2).

Next choose Configure action.

Next, you can either create a new table or use an existing one. We’ll create a new table by choosing Create a new resource.

We now follow the Create DynamoDB table wizard. Name the table AutonomousVehicles and set the Primary key (partition key) to the vehicleID attribute, which will be unique across all of the vehicles in the fleet. We’ll also add a sort key on the time attribute, which enables efficient time-ordered queries in DynamoDB. Leave the default settings; they should be more than sufficient to handle telemetry from a single vehicle.
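For reference, the same table can be created programmatically. The sketch below mirrors the wizard choices above (table name, keys, and default throughput); it assumes the time attribute is stored as a number, and the actual API call requires the third-party boto3 library and AWS credentials.

```python
# Table definition matching the console wizard choices described above.
TABLE_SPEC = {
    "TableName": "AutonomousVehicles",
    "KeySchema": [
        {"AttributeName": "vehicleID", "KeyType": "HASH"},   # partition key
        {"AttributeName": "time", "KeyType": "RANGE"},       # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "vehicleID", "AttributeType": "S"},
        {"AttributeName": "time", "AttributeType": "N"},  # assumes numeric timestamps
    ],
    # The wizard defaults: 5 read and 5 write capacity units.
    "ProvisionedThroughput": {"ReadCapacityUnits": 5, "WriteCapacityUnits": 5},
}


def create_table():
    """Create the table (requires boto3 and configured AWS credentials)."""
    import boto3  # pip install boto3
    return boto3.client("dynamodb").create_table(**TABLE_SPEC)
```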

While DynamoDB is creating the table, we can set a Time to Live (TTL) attribute to keep the cost of running DynamoDB very low. If you keep the defaults of 5 read capacity units and 5 write capacity units, the monthly bill is around $2.50. The RCUs and WCUs can be adjusted up and down manually, or you can choose to use Auto Scaling. To keep costs low, we’ll also allow DynamoDB to automatically expire items based on a user-defined attribute. In our case, the program that runs on the Donkey car sets the attribute dynamodb_ttl to the current Unix timestamp plus 2592000 seconds (30 days) and stores it with each item. Thirty days later, DynamoDB sees that the TTL attribute has expired and deletes those items automatically.

Choose Continue to enable TTL. Return to where you left off in the AWS IoT Configure action, and from the Table name drop-down list, select your table. If you do not see it, click the refresh arrows. Next, choose Create a new role and name it AutonomousVehiclesDynamoDB or something similar. Choose Update role, and then choose Add action.

Next we can review all of our choices. When you are ready, choose Create rule.

Kinesis Firehose Rule

We can now follow similar steps to create another rule to send all of the telemetry to Kinesis Firehose. In the navigation pane at the left, choose Rules. Then choose the Create button.

On the Create a rule page, give the rule a name with a description that easily identifies the type of data and the AWS service that is consuming those records.

Then choose Add action, and select Send messages to an Amazon Kinesis Firehose stream:

Then choose Configure action.

Now there is a choice: Create a new stream or use an existing one. We’ll create a new stream by choosing Create a new resource.

We now follow the Create delivery stream wizard and provide a name for the Delivery stream. Choose Next.

Choose Next.

Ensure that Amazon S3 is selected. Choose an appropriate S3 bucket and Prefix where you want the telemetry to be stored.

For the IAM role, choose Create new, or Choose. From the drop-down list, select Create a new IAM Role and provide a Role Name. Choose Allow, and then choose Next.

Finally, confirm everything is correct and choose Create delivery stream.

Return to the Configure action screen and select the new stream that was created.

Then choose Create a new role to be used by AWS IoT to access the Amazon Kinesis Firehose stream. After naming the role, choose Update role and then Add action.

Finally, confirm all of the choices, and then choose Create rule when ready.

The AWS IoT service is now ready and waiting for telemetry to stream in. Because AWS IoT is serverless and usage based, no cost is incurred by deploying it and leaving it on. Amazon Kinesis Firehose and Amazon S3 are likewise priced based on usage. We can now generate telemetry, and then use the AWS IoT console to check that it is being sent to the proper topic.

Alternatively, we could build a dashboard that connects directly to AWS IoT so that it can consume this telemetry in near real time. Here is an example of a dashboard that is hosted on Amazon S3, which means it is completely serverless and requires no management of underlying web servers. An example of how you can visualize IoT telemetry can be found in this tutorial:

Deploy an End-to-End IoT Application (pdf)

And in this blog post:

Build a Visualization and Monitoring Dashboard for IoT Data with Amazon Kinesis Analytics and Amazon QuickSight

In our next blog post, we’ll recap what we’ve done so far and talk about what’s next for autonomous vehicles.