Deploying Python to GCP Cloud Run: A Guide for AWS Developers

Tags: Python, GCP, Cloud Run, GitHub Actions, CI/CD

Intro

I've been deploying to AWS for years. Lambda, ECS, API Gateway, you name it. But when I needed to deploy a Python Slack bot integrated with Google's Vertex AI, it made sense to use GCP instead of wrestling with cross-cloud authentication.

The problem? I had no idea how GCP worked.

After a few hours of trial and error, I got my Flask app running on Cloud Run with automatic GitHub Actions deployment. Here's everything I learned, with constant AWS comparisons to make the transition easier.

GCP vs AWS: What You Need to Know

Before diving in, here's how GCP services map to AWS:

GCP Service | AWS Equivalent | What It Does
Cloud Run | Lambda + API Gateway | Serverless containers with automatic HTTPS endpoints
Artifact Registry | ECR | Docker image registry
Secret Manager | Secrets Manager | Store API keys and credentials
Workload Identity Federation | IAM Roles for service-to-service | Keyless authentication for GitHub Actions
Vertex AI | Bedrock | Managed AI/ML platform

The biggest difference? Cloud Run uses containers, not zip files like Lambda. This means you write a Dockerfile instead of bundling your code.

The Application: Flask + Vertex AI Gemini

The app is a Slack bot that uses Google's Gemini AI to respond to messages:

Project structure:

my-slack-bot/
├── app/
│   ├── __init__.py
│   ├── main.py                       # Flask app
│   ├── handlers/
│   │   ├── __init__.py
│   │   └── slack_events.py           # Slack event handlers
│   └── services/
│       ├── __init__.py
│       └── gemini.py                 # Vertex AI integration
├── .venv/
├── pyproject.toml
├── uv.lock
├── Dockerfile                        # Container definition
├── .dockerignore
└── .github/workflows/deploy.yml      # CI/CD pipeline

Key dependencies (in pyproject.toml):

[project]
dependencies = [
    "flask>=3.1.2",
    "python-dotenv>=1.1.1",
    "slack-bolt>=1.26.0",
    "google-cloud-aiplatform>=1.121.0",
]

The Dockerfile

Unlike Lambda, where you zip your code, Cloud Run requires a container image. Here's the Dockerfile:

FROM python:3.12-slim

WORKDIR /app

# Install uv (fast Python package manager)
RUN pip install uv

# Copy dependency files
COPY pyproject.toml uv.lock ./

# Install dependencies (no dev dependencies for production)
RUN uv sync --frozen --no-dev

# Copy application code
COPY app/ ./app/

# Cloud Run sets PORT environment variable
EXPOSE 8080

# Production settings
ENV FLASK_ENV=production
ENV PORT=8080

# Run the app
CMD uv run python app/main.py

Key differences from a Lambda deployment:

  • No zip file - Your entire app + dependencies are baked into a Docker image
  • Port 8080 - Cloud Run expects your app to listen on the PORT env var (default 8080)
  • No handler function - Just run your Flask app normally (see the sketch below)
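
To make those two points concrete, here's a minimal sketch of what a Flask entry point for this app could look like. The article's main.py isn't shown, so treat this as an assumption: it uses slack-bolt's Flask adapter (based on the dependencies above) and the /slack/events path configured later for Slack Event Subscriptions.

# app/main.py - minimal sketch, not the article's actual code
import os

from flask import Flask, request
from slack_bolt import App
from slack_bolt.adapter.flask import SlackRequestHandler

# Tokens arrive as env vars (Cloud Run injects them from Secret Manager)
bolt_app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)
handler = SlackRequestHandler(bolt_app)

flask_app = Flask(__name__)

@flask_app.route("/slack/events", methods=["POST"])
def slack_events():
    # Bolt verifies the Slack signature and dispatches to event handlers
    return handler.handle(request)

if __name__ == "__main__":
    # Listen on whatever port Cloud Run assigns (defaults to 8080)
    flask_app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))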

.dockerignore (keep the image small):

__pycache__/
*.py[cod]
.venv/
.git/
.github/
.env
.env.*
README.md
tests/
.pytest_cache/

GCP Setup: One-Time Configuration

1. Create a GCP Project

# Set your project ID
export PROJECT_ID="your-project-id"
gcloud config set project $PROJECT_ID

AWS equivalent: roughly, creating a new AWS account - GCP projects are the top-level isolation boundary, like accounts in an AWS Organization

2. Enable Required APIs

gcloud services enable \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  secretmanager.googleapis.com \
  aiplatform.googleapis.com \
  cloudbuild.googleapis.com

AWS equivalent: No action needed - AWS services are enabled by default

3. Create Artifact Registry (Docker Registry)

gcloud artifacts repositories create my-slack-bot \
  --repository-format=docker \
  --location=asia-northeast1 \
  --description="Container images for Slack bot"

AWS equivalent: aws ecr create-repository

4. Store Secrets

Instead of environment variables, use Secret Manager (like AWS Secrets Manager):

# Store Slack credentials
echo -n "xoxb-your-slack-token" | \
  gcloud secrets create SLACK_BOT_TOKEN --data-file=-

echo -n "your-slack-signing-secret" | \
  gcloud secrets create SLACK_SIGNING_SECRET --data-file=-

# Store GCP project ID (needed for Vertex AI)
echo -n "$PROJECT_ID" | \
  gcloud secrets create GCP_PROJECT_ID --data-file=-

Why Secret Manager instead of env vars?

  • Secrets are encrypted at rest
  • Fine-grained IAM permissions
  • Automatic rotation support
  • No secret values stored in GitHub Actions (see the sketch below for how the app reads them)
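
The deploy step later in this post maps these secrets to environment variables with --set-secrets, so application code reads them like any other env var - no Secret Manager client calls needed. A minimal sketch, with names matching the secrets created above:

import os

# With --set-secrets "SLACK_BOT_TOKEN=SLACK_BOT_TOKEN:latest,...",
# Cloud Run resolves each secret version at startup and exposes it
# to the container as a plain environment variable.
SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
GCP_PROJECT_ID = os.environ["GCP_PROJECT_ID"]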

5. Set Up Workload Identity Federation (GitHub Actions)

This is the GCP equivalent of AWS's OIDC provider for GitHub Actions - it lets GitHub authenticate to GCP without storing service account keys.

Create service account:

gcloud iam service-accounts create github-actions \
  --display-name="GitHub Actions Deployment"

# Grant permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/run.admin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"

Create Workload Identity Pool:

# Create pool
gcloud iam workload-identity-pools create "github" \
  --location="global" \
  --display-name="GitHub Actions Pool"

# Create provider
gcloud iam workload-identity-pools providers create-oidc "github-provider" \
  --location="global" \
  --workload-identity-pool="github" \
  --display-name="GitHub Provider" \
  --attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository=assertion.repository,attribute.repository_owner=assertion.repository_owner" \
  --attribute-condition="assertion.repository_owner == 'YOUR_GITHUB_ORG'" \
  --issuer-uri="https://token.actions.githubusercontent.com"

# Get provider resource name (you'll need this for GitHub secrets)
gcloud iam workload-identity-pools providers describe "github-provider" \
  --location="global" \
  --workload-identity-pool="github" \
  --format="value(name)"

# Allow GitHub to impersonate the service account
# Replace PROJECT_NUMBER with your actual project number
# Replace YOUR_GITHUB_ORG/your-repo with your repo
gcloud iam service-accounts add-iam-policy-binding \
  "github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/github/attribute.repository/YOUR_GITHUB_ORG/your-repo"

AWS equivalent: Setting up OIDC provider and IAM role for GitHub Actions

6. Grant Cloud Run Access to Secrets

# Get your Cloud Run service account (PROJECT_NUMBER-compute@developer.gserviceaccount.com)
# Grant it access to secrets
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

Why? Cloud Run needs permission to read secrets when your app starts.

GitHub Actions Workflow

Here's the complete CI/CD pipeline (.github/workflows/deploy.yml):

name: Deploy to Cloud Run

on:
  push:
    branches: [main]
  workflow_dispatch:  # Manual trigger

env:
  PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  SERVICE_NAME: my-slack-bot
  REGION: asia-northeast1

jobs:
  deploy:
    runs-on: ubuntu-latest

    permissions:
      contents: read
      id-token: write  # Required for Workload Identity Federation

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}

      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v2

      - name: Authorize Docker push
        run: gcloud auth configure-docker ${{ env.REGION }}-docker.pkg.dev

      - name: Build and Push Container
        run: |
          docker build -t ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE_NAME }}/${{ env.SERVICE_NAME }}:${{ github.sha }} .
          docker push ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE_NAME }}/${{ env.SERVICE_NAME }}:${{ github.sha }}

      - name: Deploy to Cloud Run
        run: |
          gcloud run deploy ${{ env.SERVICE_NAME }} \
            --image ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE_NAME }}/${{ env.SERVICE_NAME }}:${{ github.sha }} \
            --region ${{ env.REGION }} \
            --platform managed \
            --allow-unauthenticated \
            --set-env-vars "FLASK_ENV=production" \
            --set-secrets "SLACK_BOT_TOKEN=SLACK_BOT_TOKEN:latest,SLACK_SIGNING_SECRET=SLACK_SIGNING_SECRET:latest,GCP_PROJECT_ID=GCP_PROJECT_ID:latest" \
            --memory 512Mi \
            --cpu 1 \
            --min-instances 0 \
            --max-instances 10 \
            --timeout 60

      - name: Show Service URL
        run: |
          gcloud run services describe ${{ env.SERVICE_NAME }} \
            --region ${{ env.REGION }} \
            --format 'value(status.url)'

GitHub Secrets to configure:

Go to your repo → Settings → Secrets and variables → Actions:

  • GCP_PROJECT_ID: Your GCP project ID
  • WIF_PROVIDER: Workload Identity Provider resource name from step 5
  • WIF_SERVICE_ACCOUNT: github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com

Key differences from a GitHub Actions pipeline that targets AWS:

Feature | AWS | GCP
Authentication | OIDC with IAM Role ARN | Workload Identity Federation
Container registry | Push to ECR | Push to Artifact Registry
Deployment | Update Lambda/ECS | Deploy to Cloud Run
Secrets | Environment variables or Parameter Store | Secret Manager with --set-secrets

Deployment

Once everything is configured, deployment is automatic:

git add .
git commit -m "Deploy to Cloud Run"
git push origin main

GitHub Actions will:

  1. Build your Docker image
  2. Push to Artifact Registry
  3. Deploy to Cloud Run
  4. Output the service URL

Example output:

Deploying container to Cloud Run service [my-slack-bot]
✓ Deploying new service... Done.
  https://my-slack-bot-abc123-an.a.run.app

That URL is your live endpoint! Update your Slack app's Event Subscriptions URL to https://your-url.run.app/slack/events.

Cost Comparison: Cloud Run vs Lambda

Cloud Run Pricing (as of 2024):

  • Free tier: 2 million requests/month
  • After free tier: $0.40 per million requests
  • Memory: $0.0000025/GB-second
  • CPU: $0.00002400/vCPU-second
  • Min instances 0: Scales to zero when idle (free)

Lambda Pricing:

  • Free tier: 1 million requests/month
  • After free tier: $0.20 per million requests
  • Memory: Similar per GB-second pricing
  • Min instances 0: Also scales to zero

For a small Slack bot:

  • Cloud Run: ~$0-5/month (usually free tier)
  • Lambda + API Gateway: ~$0-5/month

Nearly identical! But Cloud Run's advantage is simpler deployments (just Docker) and better cold start times for containerized apps.
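
As a rough sanity check, here's a back-of-the-envelope estimate using the Cloud Run prices above and the 512Mi / 1 vCPU settings from the deploy command. The traffic numbers are invented for illustration, and Cloud Run's free compute allowances (not counted here) would typically cover this entirely:

# Back-of-the-envelope Cloud Run estimate with made-up traffic numbers
requests_per_month = 100_000        # hypothetical Slack event volume
seconds_per_request = 0.2           # hypothetical handling time
memory_gb = 0.5                     # 512Mi from the deploy command
vcpus = 1                           # --cpu 1

billable_seconds = requests_per_month * seconds_per_request

request_cost = max(requests_per_month - 2_000_000, 0) / 1_000_000 * 0.40
memory_cost = billable_seconds * memory_gb * 0.0000025
cpu_cost = billable_seconds * vcpus * 0.0000240

# Roughly fifty cents per month before free compute tiers kick in
print(f"${request_cost + memory_cost + cpu_cost:.2f}")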

Common Gotchas

1. Secrets Permission Denied

Error:

Permission denied on secret: SLACK_BOT_TOKEN for Revision service account

Solution: Grant Secret Manager access to Cloud Run service account:

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"

2. Import Errors After Deployment

Error: Works locally but crashes in Cloud Run with ModuleNotFoundError.

Solution: Make sure your Dockerfile installs the package in editable mode:

# Add to Dockerfile
RUN uv pip install -e .

Or structure your imports as absolute paths from the package root, as in the sketch below.
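
For illustration, "absolute paths" here means importing from the app package root. This assumes /app (the container's working directory) is importable - via the editable install above or a PYTHONPATH entry - and the helper name below is hypothetical:

# Inside app/handlers/slack_events.py - absolute import from the package root
# (resolves the same locally and in the container once the package is importable)
from app.services.gemini import generate_reply  # hypothetical helper name

# Fragile alternative that depends on how the script is launched:
# from services.gemini import generate_reply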

3. Getting PROJECT_NUMBER

You need your project number (not ID) for Workload Identity Federation:

gcloud projects describe $PROJECT_ID --format='value(projectNumber)'

Monitoring and Logs

View logs:

# Stream logs in real-time
gcloud run services logs tail my-slack-bot --region asia-northeast1

# View in Cloud Console
# https://console.cloud.google.com/run

AWS equivalent: CloudWatch Logs

The Cloud Run logs UI is actually nicer than CloudWatch - built-in filtering, automatic log grouping, and integrated error reporting.

Local Testing Before Deployment

Test with actual GCP credentials:

# Authenticate
gcloud auth application-default login

# Add to .env
GCP_PROJECT_ID=your-project-id
GCP_LOCATION=asia-northeast1
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...

# Run locally
uv run python app/main.py

Your app will use your personal GCP credentials locally, then switch to the service account when deployed to Cloud Run.
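
This works because the Vertex AI SDK uses Application Default Credentials, so nothing in the code changes between environments. The article's gemini.py isn't shown, so the initialization below is just a sketch of what it might contain:

# app/services/gemini.py - initialization sketch (the real file isn't shown)
import os

import vertexai  # ships with google-cloud-aiplatform

# No key file anywhere: the SDK picks up Application Default Credentials -
# your `gcloud auth application-default login` locally, and the Cloud Run
# service account in production.
vertexai.init(
    project=os.environ["GCP_PROJECT_ID"],
    location=os.environ.get("GCP_LOCATION", "asia-northeast1"),
)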

Wrapping Up

Deploying to GCP Cloud Run as an AWS developer wasn't as scary as I thought. The hardest part was understanding the terminology - once I mapped GCP services to their AWS equivalents, everything clicked.

Key takeaways:

  1. Cloud Run = Lambda + API Gateway but with containers instead of zip files
  2. Workload Identity Federation is cleaner than storing service account keys
  3. Secret Manager is better than environment variables for production
  4. Dockerfile is straightforward if you've used Docker before
  5. GitHub Actions setup is very similar to AWS OIDC

The entire setup took me about 2 hours from zero GCP knowledge to a working deployment. Most of that time was understanding Workload Identity Federation and IAM permissions.

If you're an AWS developer looking to use GCP-specific services (like Vertex AI), Cloud Run is a great starting point. The serverless model is familiar, and the deployment process is actually simpler than juggling Lambda + API Gateway + ECR.

In my next article, I'll cover integrating Vertex AI Gemini for the actual Slack bot functionality.