In Part 1 we looked at how to build a basic Rust program that could run in Lambda. In this post I’ll cover how to deploy this function using Terraform.
The minimum components we’ll need to serve this HTTP application are a Lambda function with our Rust program installed and an API Gateway to serve as an edge node. Fortunately, Terraform makes it extremely easy to get this up and running quickly!
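If you’re starting from scratch, a minimal provider configuration might look like the following. The region here is an assumption - adjust it (and any version constraints) for your own setup:

```hcl
terraform {
  required_providers {
    # AWS provider for the Lambda, IAM, and API Gateway resources
    aws = {
      source = "hashicorp/aws"
    }
    # archive provider used below to zip the bootstrap binary
    archive = {
      source = "hashicorp/archive"
    }
  }
}

provider "aws" {
  region = "us-east-1" # assumed region - change to suit your deployment
}
```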
In order to deploy a Lambda we need to ensure that we have a zip file of our bootstrap program. In Part 1 we did this with the zip command; however, we can also get Terraform to do that for us. The advantage of Terraform performing this step is that we can also use Terraform to hash the zip for change detection in a later step.
If you do want Terraform to create the zip, you can do something like:
provider "archive" {}

data "archive_file" "lambda_zip" {
  source_file = "${var.bin_path}/bootstrap"
  output_path = "lambda.zip"
  type        = "zip"
}
One thing to note here is the use of a variable to define the bin_path. I’ve found that having explicit paths certainly helps when packaging - and of course, using a variable helps keep machine-specific paths out of your module.
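For completeness, the bin_path variable itself can be declared like so - the description is illustrative, and you’d supply the actual value via a tfvars file or a -var flag:

```hcl
variable "bin_path" {
  type        = string
  description = "Directory containing the compiled bootstrap binary"
}
```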
In order for our Lambda function to be able to execute, we also need to set up an “execution role” that the Lambda function runs under. This defines permissions that the Lambda function needs - such as connecting to a database, ability to write to a queue, or so forth. For our example, we can keep this simple:
resource "aws_iam_role" "lambda_execution_role" {
  assume_role_policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "sts:AssumeRole",
      "Principal": {
        "Service": "lambda.amazonaws.com"
      },
      "Effect": "Allow",
      "Sid": ""
    }
  ]
}
EOF
}
resource "aws_iam_role_policy_attachment" "lambda_execution_policy" {
  role       = aws_iam_role.lambda_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSLambdaBasicExecutionRole"
}
This effectively creates a role that a Lambda function can assume, and gives it a basic AWS managed policy: AWSLambdaBasicExecutionRole.
Now that we have the zip file and the IAM role, we can wrap it all together using the aws_lambda_function resource:
resource "aws_lambda_function" "api" {
  function_name    = "APIHandler"
  source_code_hash = data.archive_file.lambda_zip.output_base64sha256
  filename         = data.archive_file.lambda_zip.output_path
  handler          = "bootstrap"
  runtime          = "provided"
  role             = aws_iam_role.lambda_execution_role.arn
}
A key thing to note here is that the runtime is set to “provided”, since we’re shipping our own bootstrap executable as a custom runtime rather than using one of the managed runtimes.
Cool - well, that was easy. How do we get the API portion set up?
There are a few parts to getting the API Gateway set up. For simplicity, we’ll use API Gateway Version 2 with the HTTP protocol type - in my opinion, this makes setting up APIs far easier. Anyway, first things first, let’s define the root resource:
resource "aws_apigatewayv2_api" "api" {
  name          = "API"
  description   = "Our example API"
  protocol_type = "HTTP"
}
In order for an API gateway to function, we set up an “Integration” which effectively maps a Lambda function to a route:
resource "aws_apigatewayv2_integration" "api" {
  api_id                 = aws_apigatewayv2_api.api.id
  integration_type       = "AWS_PROXY"
  connection_type        = "INTERNET"
  description            = "Example.API"
  integration_method     = "POST"
  integration_uri        = aws_lambda_function.api.invoke_arn
  payload_format_version = "2.0"
}
resource "aws_apigatewayv2_route" "api" {
  api_id    = aws_apigatewayv2_api.api.id
  route_key = "$default"
  target    = "integrations/${aws_apigatewayv2_integration.api.id}"
}
Using $default as a route means that ALL requests will invoke the integration defined as the target - in this case, our Lambda function. Effectively, we’ll invoke our Lambda function with a payload that matches the version 2.0 specification and return its response back to the client.
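For reference, a trimmed-down sketch of what a version 2.0 payload looks like when it reaches the Lambda function - the field values here are illustrative, and several fields (cookies, query string parameters, and most of requestContext) are omitted for brevity:

```json
{
  "version": "2.0",
  "routeKey": "$default",
  "rawPath": "/hello",
  "rawQueryString": "",
  "headers": {
    "host": "abc123.execute-api.us-east-1.amazonaws.com"
  },
  "requestContext": {
    "http": {
      "method": "GET",
      "path": "/hello",
      "sourceIp": "203.0.113.10"
    }
  },
  "isBase64Encoded": false
}
```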
We’re still not there yet - we need something to “deploy” this API. For this, we’ll generate a “stage” as well as a deployment:
resource "aws_apigatewayv2_stage" "api" {
  api_id      = aws_apigatewayv2_api.api.id
  name        = "prod"
  auto_deploy = true
}
resource "aws_apigatewayv2_deployment" "api" {
  api_id      = aws_apigatewayv2_api.api.id
  description = "API deployment"

  triggers = {
    redeployment = sha1(join(",", [
      jsonencode(aws_apigatewayv2_integration.api),
      jsonencode(aws_apigatewayv2_route.api),
    ]))
  }

  lifecycle {
    create_before_destroy = true
  }
}
This defines two things: a stage named prod to indicate our production version of the API, which allows us to potentially set up dev or test stages in the future; and a deployment, with auto_deploy set to true - namely to save time and effort.
So this is a fair bit to set up - what’s missing? Well, as you can imagine, the API needs permission to be able to invoke the Lambda!
resource "aws_lambda_permission" "api" {
  statement_id  = "allow_apigw_invoke"
  function_name = aws_lambda_function.api.function_name
  action        = "lambda:InvokeFunction"
  principal     = "apigateway.amazonaws.com"
  source_arn    = "${aws_apigatewayv2_stage.api.execution_arn}/${aws_apigatewayv2_route.api.route_key}"
}
One last thing that I think is super handy is having Terraform let you know the URL for invoking your Lambda function:
output "invoke_url" {
  value = aws_apigatewayv2_stage.api.invoke_url
}
We can now deploy this! And lo and behold when we execute the invoke url we get back a nice message:
👋 world
Well this is all fine, but how do we start leveraging this Lambda to deal with real HTTP API requests? We cover that in the next section! Until next time…