Tutorial documenting the steps to create an AWS Lambda function with the Serverless Framework for NodeJS, based on the basic “hello world” project.

Background

This is a very basic tutorial on how to use the Serverless Framework to create and deploy AWS Lambda functionality. The patterns used here are not presented as best practice; rather, they explore ways in which you can quickly and easily create NodeJS functionality and deploy it to AWS Lambda with minimal effort and reasonable defaults.

Note that all instructions within assume an Ubuntu 16.04 installation. While the commands may also work on various other Unix-like operating systems, your mileage may vary.

In addition, you will need an AWS account to deploy the functionality into. If you wish, however, you can perform all of the steps up to the “AWS Deploy” section to get a feel for the framework and run the functionality locally before setting up the AWS components.

NodeJS Setup

To make life easier, we will use nvm to manage NodeJS versions. First, install it (replace the version in the URL with whichever version you wish to use, or the version specified in the nvm installation documentation):

$ wget -qO- https://raw.githubusercontent.com/creationix/nvm/v0.33.2/install.sh | bash

# to start using nvm immediately, perform the following (or re-login/re-establish your session):
$ export NVM_DIR="$HOME/.nvm"
$ [ -s "$NVM_DIR/nvm.sh" ] && \. "$NVM_DIR/nvm.sh"
$ [ -s "$NVM_DIR/bash_completion" ] && \. "$NVM_DIR/bash_completion"

# verify
$ nvm --version
#  0.33.2

Now that we have nvm installed, we need to install a NodeJS version:

$ nvm install 6
# will perform NodeJS 6.x installation

# verify
$ node --version
#  v6.10.3

Project Setup

Now that we have our environment configured for NodeJS, we can create our base project structure:

# install the serverless framework
$ npm install serverless -g

# create a serverless function
$ serverless create --name sls-example \
                    --template aws-nodejs \
                    --path ./sls-example
$ cd sls-example/

# initialize the project for npm
$ npm init
# answer questions appropriately
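
Alternatively, if the defaults are fine, you can skip the prompts entirely:

# accept all npm init defaults without prompting
$ npm init -y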

At this point you should have a few default files in the directory:

  • .gitignore: Tells git which files and directories to ignore
  • handler.js: Main entry point of the Lambda function (default contents shown below)
  • serverless.yml: Configuration for creating the service in AWS
  • package.json: Standard npm dependency and project metadata file
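
For reference, the generated handler.js looks roughly like the following (exact contents may vary slightly between framework versions):

'use strict';

// default "hello" handler generated by the aws-nodejs template
module.exports.hello = (event, context, callback) => {
  const response = {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Go Serverless v1.0! Your function executed successfully!',
      input: event,
    }),
  };

  // hand the response back to the Lambda runtime
  callback(null, response);
};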

For convenience in project setup, create a new file named .nvmrc in the root directory with a single line that reads v6.10.3. This will ensure that any person wishing to use this project can rely on NVM to manage the NodeJS version appropriate for the example application.
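
For example (assuming the NodeJS version installed above):

# pin the project's NodeJS version for nvm
$ echo "v6.10.3" > .nvmrc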

Then, you can instruct NVM to utilize the correct version moving forward, and install any initial dependencies:

$ nvm use
$ npm install

Testing Default Installation

Without any further changes, you can invoke the function locally to mimic how AWS Lambda will invoke it. A disclaimer is warranted (the local environment may differ from Lambda’s, and it is best to do pre-production testing against an actual AWS Lambda setup), but local invocation allows for rapid development and testing.

To test, simply run the following command:

$ sls invoke local -f hello

If all went well, you should see output on the console similar to the following:

{
    "statusCode": 200,
    "body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":\"\"}"
}
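
If you want to see the “input” field populated as well, the local invoke command also accepts an event payload via the --data flag, for example:

# pass a sample event payload into the function
$ sls invoke local -f hello -d '{"name":"serverless"}'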

You now have a fully functioning Serverless environment that can be run/used locally!

AWS Deploy

Now that you can run things locally, it’s time to deploy your application to AWS. The serverless.yml file manages how the application is deployed, in conjunction with the AWS credentials and environment variables configured for the AWS account you are deploying to.
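
For reference, the meaningful (non-commented) portion of the generated serverless.yml looks roughly like the following - the runtime shown is an assumption and should match the NodeJS version you intend to run, and the region can be overridden if you do not want the framework’s default:

service: sls-example

provider:
  name: aws
  runtime: nodejs6.10    # assumed; align with the NodeJS version installed earlier
  # region: us-east-1    # optionally override the deployment region

functions:
  hello:
    handler: handler.hello    # handler.js, exported function "hello"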

This tutorial uses static AWS credentials created through the AWS IAM service. It would be preferable to use the Security Token Service (STS) to generate short-lived tokens instead of persistent/long-lived ones (a topic that may be covered in future tutorials), but we will stick with persistent IAM security credentials for now.

First, log into your AWS account, navigate to the IAM service, and create a security credential (access key) for yourself. Then, for simplicity, we will use the aws command to configure your profile on the local VM.

# install AWS command line interface
$ sudo apt-get install awscli

# configure the initial security credentials for your account
$ aws configure
# answer each question providing the information from the security credentials
# created through the IAM service for your user
#   AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
#   AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
#   Default region name [None]: [PRESS ENTER]
#   Default output format [None]: [PRESS ENTER]

In most cases, the AWS_PROFILE environment variable must be set in order for libraries to automatically detect which profile’s credentials you wish to use (in the case where there is more than one). For consistency, we will set this variable explicitly to demonstrate how you can change the profile when you have different credentials for different functionality:

$ export AWS_PROFILE=default
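
If you later create separate credentials for different functionality, you can configure an additional named profile (the profile name below is just a hypothetical example) and point AWS_PROFILE at it instead:

# configure and switch to a second (hypothetical) profile
$ aws configure --profile some-other-profile
$ export AWS_PROFILE=some-other-profile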

In addition, you will need to ensure that the user you generated credentials for has the permissions required to deploy AWS Lambda functionality. Fine-grained permissions are obviously desirable but, again for simplicity, grant the user (through the AWS IAM interface) the “AWSLambdaFullAccess” policy in order to test this functionality. Note that the Serverless Framework also provisions resources through CloudFormation and an S3 deployment bucket, so additional permissions may be needed if the deploy fails with access errors. Later, you will want to restrict the user to only the fine-grained access it requires.

The environment should now be set up for deployment - let’s deploy the Lambda functionality to your AWS account:

# WARNING: This command will generate resources that may result in costs to your account
$ sls deploy -v

After several minutes and many lines of verbose output, you should see that your Lambda functionality has been successfully created/deployed and is ready for testing!

Execute the following to test the function:

$ sls invoke -f hello

You should again receive output similar to the following:

{
    "statusCode": 200,
    "body": "{\"message\":\"Go Serverless v1.0! Your function executed successfully!\",\"input\":\"\"}"
}

If something goes wrong or you do not get the output you expect, you can stream the function’s logs to inspect what might be going on. Open a new tab/window and invoke the following from the project directory:

$ sls logs -f hello -t

Finally, if you wish to remove the resources created by the Serverless Framework for your project, simply run the following command and all allocated AWS resources will be destroyed:

$ sls remove -v

Next Steps

If you’re interested in continuing to explore, check out this next post that details how to secure data in an AWS Lambda project using the AWS Key Management System.

Credit

The above tutorial was pieced together with some information from the following sites/resources: