Reading & Writing to S3 in AWS Lambda (Node.js)

Intro

Sometimes you just need to read and write files in S3 from inside a Lambda: uploading .pem keys, fetching configs, that kind of thing. Here's a breakdown of how that works, what permissions you need, and some gotchas to avoid.

📥 Reading from S3 in Lambda

Start by importing the required modules from the AWS SDK v3:

```typescript
import { S3Client, GetObjectCommand } from '@aws-sdk/client-s3'
import { Readable } from 'stream'

const s3 = new S3Client({ region: 'ap-northeast-1' })

const readFromS3 = async (
  bucket: string,
  key: string,
): Promise<string | null> => {
  try {
    const { Body } = await s3.send(
      new GetObjectCommand({ Bucket: bucket, Key: key }),
    )
    if (Body instanceof Readable) {
      // Collect the stream chunks, then decode once at the end.
      const chunks: Buffer[] = []
      for await (const chunk of Body) chunks.push(chunk)
      return Buffer.concat(chunks).toString('utf-8')
    }
  } catch (err) {
    console.error('Failed to read from S3:', err)
  }
  return null
}
```

In the real world, you might dynamically construct the filename (e.g. using today's date), and read multiple files in parallel using Promise.all.
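To make that concrete, here's a small sketch: `keyForDate` builds a date-stamped key (the `configs/` prefix and `.json` suffix are made-up conventions, not anything S3 requires), and `readMany` fans out over several keys with Promise.all, taking the reader function as a parameter so it can stand in for the `readFromS3` helper above.

```typescript
// Sketch: date-stamped keys plus parallel reads.
// The "configs/" prefix and ".json" suffix are illustrative assumptions.
const keyForDate = (date: Date): string =>
  `configs/${date.toISOString().slice(0, 10)}.json`

// Fan out over several keys in parallel. `readOne` stands in for a
// readFromS3-style helper that resolves to the file body or null.
const readMany = (
  keys: string[],
  readOne: (key: string) => Promise<string | null>,
): Promise<(string | null)[]> => Promise.all(keys.map(readOne))
```

Something like `readMany([keyForDate(new Date())], (key) => readFromS3(bucket, key))` would then fetch today's file.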

📤 Writing to S3 in Lambda

```typescript
import { S3Client, PutObjectCommand } from '@aws-sdk/client-s3'

const s3 = new S3Client({ region: 'ap-northeast-1' })

const writeToS3 = async (
  bucket: string,
  key: string,
  content: string,
): Promise<boolean> => {
  try {
    const res = await s3.send(
      new PutObjectCommand({
        Bucket: bucket,
        Key: key,
        Body: content,
        ContentType: 'application/x-pem-file',
      }),
    )
    // A resolved promise already means the request didn't throw;
    // checking the status code makes the success check explicit.
    return res.$metadata.httpStatusCode === 200
  } catch (err) {
    console.error('Failed to write to S3:', err)
    return false
  }
}
```

💡 You can use application/json, text/plain, or whatever ContentType fits your use case.
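If you'd rather not hard-code the ContentType, a tiny extension lookup does the job. The mapping below is just an illustrative subset; extend it for whatever file types you actually store:

```typescript
// Sketch: pick a ContentType from the key's file extension.
// The mapping is a small illustrative subset, not exhaustive.
const contentTypeFor = (key: string): string => {
  const types: Record<string, string> = {
    '.json': 'application/json',
    '.txt': 'text/plain',
    '.pem': 'application/x-pem-file',
  }
  const dot = key.lastIndexOf('.')
  const ext = dot >= 0 ? key.slice(dot) : ''
  // Fall back to a generic binary type for unknown extensions.
  return types[ext] ?? 'application/octet-stream'
}
```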

✅ IAM Permissions

To allow Lambda to read/write from S3, attach this policy to the execution role:

```yaml
Version: '2012-10-17'
Statement:
  - Sid: S3Access
    Effect: Allow
    Action:
      - s3:GetObject
      - s3:PutObject
      - s3:DeleteObject
    Resource:
      - arn:aws:s3:::your-s3-bucket-name/*
```

Or if you're using CloudFormation/SAM:

```yaml
Policies:
  - PolicyName: your-custom-policy
    PolicyDocument:
      # Quote the version so YAML doesn't parse it as a date.
      Version: '2012-10-17'
      Statement:
        - Sid: S3Write
          Effect: Allow
          Action:
            - s3:*
          Resource:
            - !Sub arn:aws:s3:::your-s3-bucket-${GlobalEnvironment}
            - !Sub arn:aws:s3:::your-s3-bucket-${GlobalEnvironment}/*
```

🔒 Keep access as narrow as possible: ideally only specific actions for specific buckets or prefixes.
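For instance, a statement scoped to a single action and a single prefix might look like this (the bucket name and the `uploads/` prefix are placeholders):

```yaml
# Illustrative least-privilege statement: write-only, one prefix.
- Sid: S3WriteUploadsOnly
  Effect: Allow
  Action:
    - s3:PutObject
  Resource:
    - arn:aws:s3:::your-s3-bucket-name/uploads/*
```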

🤔 Why All This?

  • S3 is a great place to store static files, dynamic configs, or sensitive assets like .pem keys.
  • Lambdas are stateless, so pushing/pulling files via S3 is a clean way to persist anything.
  • AWS SDK v3 is modular and tree-shakeable: use only what you need.

🔚 Wrap-up

  • Use GetObjectCommand to read and stream file contents
  • Use PutObjectCommand to upload content to S3
  • Attach only the IAM permissions you need
  • Bonus tip: handle the Readable stream carefully to avoid data loss or memory blow-ups on large files
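The stream-handling tip above can be sketched as a helper with a size cap, so one oversized object can't exhaust Lambda memory (the 10 MB default is an arbitrary example value, not an AWS limit):

```typescript
import { Readable } from 'stream'

// Sketch: collect a Readable into a string, bailing out early if it
// exceeds a size cap. The 10 MB default is an arbitrary example.
const streamToString = async (
  stream: Readable,
  maxBytes = 10 * 1024 * 1024,
): Promise<string> => {
  const chunks: Buffer[] = []
  let total = 0
  for await (const chunk of stream) {
    const buf = Buffer.isBuffer(chunk) ? chunk : Buffer.from(chunk)
    total += buf.length
    if (total > maxBytes) throw new Error('S3 object exceeds size limit')
    chunks.push(buf)
  }
  // Decode once at the end so multi-byte UTF-8 sequences split across
  // chunk boundaries are reassembled correctly.
  return Buffer.concat(chunks).toString('utf-8')
}
```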

That's it! Minimal setup, maximum flexibility.