Integrating Services (S3, SNS, SQS)

The Serverless Stack

Most serverless applications combine multiple AWS services:

[Diagram: the serverless stack]


Simple Explanation

What it is

This lesson shows how to connect storage, messaging, and compute so your app can move data and trigger work across services.

Why we need it

Real serverless apps are never one service. You need storage for files, queues for background work, and notifications for users.

Benefits

  • Clear separation of responsibilities across services.
  • Better resilience because queues absorb spikes.
  • Easier scaling as each service grows independently.

Tradeoffs

  • More services to manage and monitor.
  • More permissions to configure correctly.

Real-world examples (architecture only)

  • Upload image → S3 → Lambda → Thumbnail → SNS notification.
  • API request → Lambda → SQS → Worker function.

S3: File Storage

Why S3?

  • Unlimited file storage
  • Serverless (no servers to manage)
  • Highly available
  • Pay for storage, not instances

S3 Operations from Lambda

Upload File

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # Take the text payload from the incoming event body
    file_data = event.get("body", "")
    buffer = file_data.encode("utf-8")

    s3.put_object(
        Bucket="my-bucket",
        Key="path/to/file.txt",
        Body=buffer,
        ContentType="text/plain",
    )

    return {"statusCode": 200}

Download File

def handler(event, context):
    response = s3.get_object(Bucket="my-bucket", Key="path/to/file.txt")
    data = response["Body"].read().decode("utf-8")
    print("File content:", data)
    return {"statusCode": 200, "body": data}

Generate Presigned URL

Let users download files without exposing S3 directly:

import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    url = s3.generate_presigned_url(
        ClientMethod="get_object",
        Params={"Bucket": "my-bucket", "Key": "private-file.pdf"},
        ExpiresIn=3600,  # link is valid for one hour
    )

    return {"statusCode": 200, "body": json.dumps({"downloadUrl": url})}

SNS: Publishing Messages

SNS broadcasts messages to multiple subscribers (email, SMS, HTTP, Lambda).

Send Email Notification

import boto3

sns = boto3.client("sns")

def handler(event, context):
    order_id = event.get("orderId", "unknown")

    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456:notifications",
        Subject="New Order",
        Message=f"Order #{order_id} confirmed!",
    )
    return {"statusCode": 200}

Set Up an SNS Topic

  1. Go to the SNS Console
  2. Click Create topic
  3. Name: notifications
  4. Under Subscriptions, add an email subscription
  5. Confirm the subscription from your inbox

SQS: Async Processing

SQS stores messages for Lambda to process asynchronously.

Why SQS?

  • Decouple producers from consumers
  • Retry failed messages
  • Handle traffic spikes
  • At-least-once delivery (standard queues may occasionally deliver duplicates)

Send Message to Queue

import json
import boto3
from datetime import datetime

sqs = boto3.client("sqs")

def handler(event, context):
    sqs.send_message(
        QueueUrl="https://sqs.us-east-1.amazonaws.com/123456/my-queue",
        MessageBody=json.dumps({
            "userId": "123",
            "action": "process-report",
            "timestamp": datetime.utcnow().isoformat(),
        }),
    )
    return {"statusCode": 200}
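When you have several jobs to enqueue, `send_message_batch` accepts up to 10 entries per call, each with a batch-unique `Id` and a string `MessageBody`. A minimal sketch of building those entries (the message contents and queue URL are placeholders):

```python
import json

def to_batch_entries(messages):
    # SQS send_message_batch accepts at most 10 entries per call;
    # each entry needs a batch-unique Id and a string MessageBody.
    return [
        {"Id": str(i), "MessageBody": json.dumps(msg)}
        for i, msg in enumerate(messages[:10])
    ]

entries = to_batch_entries([
    {"userId": "123", "action": "process-report"},
    {"userId": "456", "action": "process-report"},
])
# Then: sqs.send_message_batch(QueueUrl=queue_url, Entries=entries)
```

One batch call instead of ten single sends reduces both latency and per-request cost.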

Process Queue Messages

import json

def handler(event, context):
    for record in event.get("Records", []):
        message = json.loads(record["body"])
        print("Processing:", message)
        try:
            process_report(message)  # your application's worker logic
        except Exception as exc:
            print(f"Failed: {exc}")
            raise  # re-raise so SQS retries the message
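Re-raising fails the whole batch, so messages that already succeeded get retried too. If the event source mapping has `ReportBatchItemFailures` enabled, the handler can instead return only the IDs of the failed messages. A sketch, with `process_report` stubbed in as a hypothetical worker:

```python
import json

def process_report(message):
    # Hypothetical worker: reject messages missing an action field
    if "action" not in message:
        raise ValueError("missing action")

def handler(event, context):
    # Collect IDs of failed messages so SQS retries only those
    failures = []
    for record in event.get("Records", []):
        try:
            process_report(json.loads(record["body"]))
        except Exception as exc:
            print(f"Failed {record['messageId']}: {exc}")
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}
```

An empty `batchItemFailures` list tells Lambda the whole batch succeeded.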

Combining Services: Real Example

Scenario: User uploads image → Generate thumbnail → Send notification

Architecture

  1. Upload API
  2. S3 (stores original image)
  3. Lambda (triggered by S3 upload)
  4. Process image → Generate thumbnail
  5. Save thumbnail to S3
  6. Update DynamoDB
  7. Publish to SNS
  8. User gets email notification

Lambda Handler

import boto3
from datetime import datetime

s3 = boto3.client("s3")
sns = boto3.client("sns")
ddb = boto3.resource("dynamodb").Table("ImageMetadata")

def handler(event, context):
    try:
        bucket = event["Records"][0]["s3"]["bucket"]["name"]
        key = event["Records"][0]["s3"]["object"]["key"]

        image_data = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        thumbnail = generate_thumbnail(image_data)  # your image-resizing helper

        s3.put_object(Bucket=bucket, Key=f"thumbnails/{key}", Body=thumbnail)

        ddb.update_item(
            Key={"imageId": key},
            UpdateExpression="SET thumbnailGenerated = :now",
            ExpressionAttributeValues={":now": datetime.utcnow().isoformat()},
        )

        sns.publish(
            TopicArn="arn:aws:sns:...",
            Subject="Thumbnail Ready",
            Message=f"Your thumbnail is ready: {key}",
        )

        return {"statusCode": 200, "message": "Processed"}
    except Exception as exc:
        print(exc)
        raise  # let Lambda record the failure and retry per its event source

IAM Permissions Template

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::my-bucket/*"
    },
    {
      "Effect": "Allow",
      "Action": ["sns:Publish"],
      "Resource": "arn:aws:sns:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": ["sqs:SendMessage"],
      "Resource": "arn:aws:sqs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": ["dynamodb:*"],
      "Resource": "arn:aws:dynamodb:*:*:table/*"
    }
  ]
}

Best Practices

  1. Use SQS for heavy lifting — Don't block API responses
  2. Publish to SNS for fanout — Multiple services react to one event
  3. Store files in S3 — Not in Lambda memory or DynamoDB
  4. Presigned URLs — Let users access S3 securely
  5. Error handling — Use DLQs and retries
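For practice 5, a dead-letter queue is attached via the queue's `RedrivePolicy` attribute: a JSON string naming the DLQ's ARN and how many failed receives to allow before a message is moved. A sketch (the ARN and queue URL are placeholders):

```python
import json

# Placeholder ARN for an already-created dead-letter queue
dlq_arn = "arn:aws:sqs:us-east-1:123456:my-queue-dlq"

redrive_policy = json.dumps({
    "deadLetterTargetArn": dlq_arn,
    "maxReceiveCount": 3,  # move to the DLQ after 3 failed receives
})

# Applied with:
# sqs.set_queue_attributes(QueueUrl=queue_url,
#                          Attributes={"RedrivePolicy": redrive_policy})
```

Messages landing in the DLQ can then be inspected, alarmed on, and redriven back once the bug is fixed.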

Hands-On: Document Processing Pipeline

Build a workflow:

  1. User uploads PDF to S3
  2. Lambda triggered automatically
  3. Extract text and metadata
  4. Store metadata in DynamoDB
  5. Send confirmation SNS email
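A starting point for step 2: the bucket and key arrive inside the S3 event record, and object keys are URL-encoded in event payloads (a space becomes `+`), so decode them before calling `get_object`. A sketch of just the extraction step:

```python
from urllib.parse import unquote_plus

def extract_upload(event):
    # Pull the bucket name and decoded object key from the first S3 event record
    record = event["Records"][0]["s3"]
    return {
        "bucket": record["bucket"]["name"],
        "key": unquote_plus(record["object"]["key"]),
        "size": record["object"].get("size", 0),
    }
```

Feed the returned `bucket` and `key` to `s3.get_object`, then hand the bytes to your PDF text extractor.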

Key Takeaway

AWS services work together through well-defined interfaces. S3 stores, SNS broadcasts, SQS queues. Combine them to build scalable serverless systems.


Project (Cloud-Agnostic)

Design a file-processing pipeline with storage, async processing, and notification.

Deliverables:

  1. Describe the vendor-neutral architecture (storage, compute, messaging, data).
  2. Map each component to AWS or GCP services.
  3. Explain how failures are retried and observed.

If you want feedback, email your write-up to [email protected].
