# Cargo Lambda Watch
The watch subcommand emulates the AWS Lambda control plane API. Run this command at the root of a Rust workspace and cargo-lambda will use cargo-watch to hot-compile changes in your Lambda functions:

```sh
cargo lambda watch
```
The function is not compiled until the first time you try to execute it. See the invoke command to learn how to execute a function. Cargo Lambda runs the command `cargo run --bin FUNCTION_NAME` to compile the function. `FUNCTION_NAME` can be either the name of the package, if the package has only one binary, or the binary name in the `[[bin]]` section if the package includes more than one binary.
## Environment variables

If you need to set environment variables for your function to run, you can specify them in the metadata section of your Cargo.toml file.

Use the section `package.metadata.lambda.env` to set global variables that will be applied to all functions in your package:

```toml
[package]
name = "basic-lambda"

[package.metadata.lambda.env]
RUST_LOG = "debug"
MY_CUSTOM_ENV_VARIABLE = "custom value"
```
If you have more than one function in the same package, and you want to set specific variables for each of them, you can use a section named after each binary in your package, `package.metadata.lambda.bin.BINARY_NAME`:

```toml
[package]
name = "lambda-project"

[package.metadata.lambda.env]
RUST_LOG = "debug"

[package.metadata.lambda.bin.get-product.env]
GET_PRODUCT_ENV_VARIABLE = "custom value"

[package.metadata.lambda.bin.add-product.env]
ADD_PRODUCT_ENV_VARIABLE = "custom value"

[[bin]]
name = "get-product"
path = "src/bin/get-product.rs"

[[bin]]
name = "add-product"
path = "src/bin/add-product.rs"
```
You can also set environment variables at the workspace level:

```toml
[workspace.metadata.lambda.env]
RUST_LOG = "debug"

[workspace.metadata.lambda.bin.get-product.env]
GET_PRODUCT_ENV_VARIABLE = "custom value"
```
These behave in the same way as the package sections, but package environment variables override workspace settings. The order of precedence, from highest to lowest, is:
- Package Binary
- Package Global
- Workspace Binary
- Workspace Global
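
For example, given the precedence rules above, a binary-level package value wins over a workspace-level global. In this illustrative sketch, the two tables live in the package and workspace Cargo.toml files respectively:

```toml
# Package Cargo.toml: binary-level value, highest precedence
[package.metadata.lambda.bin.get-product.env]
RUST_LOG = "debug"

# Workspace Cargo.toml: global value, lowest precedence
[workspace.metadata.lambda.env]
RUST_LOG = "info"
```

With this configuration, the `get-product` function runs with `RUST_LOG=debug`.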
You can also use the flag `--env-vars` to add environment variables. This flag supports a comma-separated list of values:

```sh
cargo lambda watch --env-vars FOO=BAR,BAZ=QUX
```
The flag `--env-var` allows you to pass several variables in the command line with the format `KEY=VALUE`, repeating the flag for each variable. This flag overrides `--env-vars`, and the two cannot be combined:

```sh
cargo lambda watch --env-var FOO=BAR --env-var BAZ=QUX
```
The flag `--env-file` will read the variables from a file and add them to the function's environment when the emulator starts. Each variable in the file must be on a new line, with the same `KEY=VALUE` format:

```sh
cargo lambda watch --env-file .env
```
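
Assuming an `.env` file like this (the variable names are illustrative):

```
RUST_LOG=debug
MY_CUSTOM_ENV_VARIABLE=custom-value
```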
## Function URLs

The emulator server includes support for Lambda function URLs out of the box. Since we're working locally, these URLs are under the `/lambda-url` path instead of under a subdomain. The function that you're trying to access through a URL must respond to Request events using lambda_http, or raw `ApiGatewayV2httpRequest` events.

You can create functions compatible with this feature by running `cargo lambda new --http FUNCTION_NAME`.

To access a function via its HTTP endpoint, start the watch subcommand, `cargo lambda watch`, then send requests to the endpoint `http://localhost:9000`. You can add any additional path, or any query parameters.
WARNING

Your function MUST have the `apigw_http` feature enabled in the `lambda_http` dependency for Function URLs to work. The payload that AWS sends is only compatible with the `apigw_http` format, not with the `apigw_rest` format.
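
This only matters if you've turned off `lambda_http`'s default features, which normally include `apigw_http`. A sketch of the dependency entry in that case (the version number is illustrative):

```toml
[dependencies]
lambda_http = { version = "0.13", default-features = false, features = ["apigw_http"] }
```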
### Multi-function projects

If your project includes several functions under the same package, you can access them using the function's name as the prefix in the request path: `http://localhost:9000/lambda-url/FUNCTION_NAME`. You can also add any additional path after the function name, or any query parameters.
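
For example, with the `get-product` binary from the earlier example, requests might look like this (the extra path segment and query parameter are illustrative):

```sh
# Invoke get-product through its local function URL
curl http://localhost:9000/lambda-url/get-product
# Additional paths and query parameters are passed through to the function
curl "http://localhost:9000/lambda-url/get-product/123?currency=USD"
```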
### Working with specific packages

You can specify the package that you want to work with by using the `--package` flag. This way, only the function in the specified package will be available through the Function URL:

```sh
cargo lambda watch --package my-package
```

Because only one function is available through the Function URL, you can access it by using the root path `http://localhost:9000`.
### Lambda response streaming

When you work with function URLs, you can stream responses to the client with Lambda's support for Streaming Responses.

Start the watch command in a function that uses the Response Streaming API, like the example function in the Runtime's repository:

```sh
cargo lambda watch
```

Then use cURL to send requests to the Lambda function. You'll see that the client starts printing the response as soon as it receives the first chunk of data, without waiting to have the complete response:

```sh
curl http://localhost:9000
```
## Enabling features

You can pass a comma-separated list of features to the `watch` command to enable them when your function is compiled:

```sh
cargo lambda watch --features feature-1,feature-2
```
## Debug with breakpoints

You have two options to debug your application, setting breakpoints and stepping through your code using a debugger like GDB or LLDB.

The first option is to let Cargo Lambda start your function and manually attach your debugger to the newly created process that hosts your function. With this option, Cargo Lambda automatically terminates the function's process, rebuilds the executable, and restarts it when your code changes. The debugger must be reattached to the process every time the function boots.

The second option is to let Cargo Lambda provide the Lambda runtime APIs for your function by setting the flag `--only-lambda-apis`, and manually starting the Lambda function from your IDE in debug mode. This way, the debugger is attached to the new process automatically by your IDE. When you modify your function's source code, let your IDE rebuild and relaunch the function, and reattach the debugger to the new process.

The drawback of the second option is that essential environment variables are not provided automatically to your function by Cargo Lambda, but have to be configured in your IDE's launch configuration. If you provide a function name when you invoke the function, you must replace `_` with that name.

These environment variables are also printed as info messages in cargo-lambda's log output.
## Ignore changes

If you want to run the emulator without hot reloading the function every time there is a change in the code, you can use the flag `--ignore-changes`:

```sh
cargo lambda watch --ignore-changes
```
## Release mode

You can also run your code in release mode if needed when the emulator is loaded:

```sh
cargo lambda watch --release
```
## Working with extensions

You can boot extensions locally and associate them with a function running under the `watch` command.

In the terminal where your Lambda function code lives, run Cargo Lambda as usual: `cargo lambda watch`.

In the terminal where your Lambda extension code lives, export the runtime API endpoint as an environment variable, and run your extension with `cargo run`:
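A sketch of that command, assuming the emulator's default address; the exact endpoint to use is printed by cargo-lambda as an info message, so check its log output:

```sh
# Assumed endpoint; cargo-lambda prints the exact value in its log output
AWS_LAMBDA_RUNTIME_API=http://localhost:9000 cargo run
```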
This will make your extension send requests to the local runtime to register itself and subscribe to events. If your extension subscribes to `INVOKE` events, it will receive an event every time you invoke your function locally. If your extension subscribes to `SHUTDOWN` events, it will receive an event every time the function is recompiled after code changes.
WARNING

At the moment, Log and Telemetry extensions don't receive any data from the local runtime.
## TLS support

The watch subcommand supports TLS connections to the runtime if you want to send requests to the runtime securely.

To enable TLS, you need to provide a TLS certificate and key. You can use the `--tls-cert` and `--tls-key` flags to specify the paths to the certificate and key files. The certificate and key files must be in PEM format:

```sh
cargo lambda watch --tls-cert cert.pem --tls-key key.pem
```

If the root CA file is not specified, the local CA certificates on your system will be used to verify the TLS connection. You can use the `--tls-ca` flag to specify a custom root CA file:

```sh
cargo lambda watch --tls-cert cert.pem --tls-key key.pem --tls-ca ca.pem
```
If you always want to use TLS, you can place the certificate and key files in the global configuration directory, as defined by XDG_CONFIG_HOME. Cargo Lambda will automatically look for those files in a subdirectory called `cargo-lambda`. The file names must be `cert.pem`, `key.pem`, and `ca.pem`, respectively.

```sh
tree $HOME/.config/cargo-lambda
/home/david/.config/cargo-lambda
├── cert.pem
└── key.pem

1 directory, 2 files
```
TIP

We recommend using mkcert to generate the TLS certificate and key files for development purposes.
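
As a sketch, mkcert can produce the certificate and key in the PEM format these flags expect (the hostname list is illustrative):

```sh
# Install mkcert's local certificate authority once
mkcert -install
# Generate a locally-trusted certificate and key for the emulator
mkcert -cert-file cert.pem -key-file key.pem localhost 127.0.0.1
```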
## Custom HTTP routes

You can add custom HTTP routes to the emulator by setting the `router` field in the `watch` section of your Cargo.toml file. This is useful if you have several functions in your package and you want to access them using paths without the `/lambda-url` prefix.

This configuration can be managed at the workspace level when you have more than one function in your workspace, or at the package level if you want to separate the routes for each package. Routes at the package level will override the ones in the workspace.

Cargo Lambda uses Matchit to match the HTTP routes to the functions. The syntax to specify the route paths is similar to the one used by the Axum router.

Each route is a key-value pair where the key is the path, and the value is either a string with the function name, or a table with the HTTP method to match and the function name.
### Workspace level

This configuration is applied to all functions in your workspace:

```toml
[workspace.metadata.lambda.watch.router]
"/get-product/:id" = "get-product"
"/add-product" = "add-product"
"/users" = [
    { method = "GET", function = "get-users" },
    { method = "POST", function = "add-user" }
]
```
### Package level

This configuration is applied to a function in a package. It will be merged with the workspace level configuration if it exists:

```toml
[package.metadata.lambda.watch.router]
"/products" = "handle-products"
```