Intro
I've been deploying to AWS for years. Lambda, ECS, API Gateway, you name it. But when I needed to deploy a Python Slack bot integrated with Google's Vertex AI, it made sense to use GCP instead of wrestling with cross-cloud authentication.
The problem? I had no idea how GCP worked.
After a few hours of trial and error, I got my Flask app running on Cloud Run with automatic GitHub Actions deployment. Here's everything I learned, with constant AWS comparisons to make the transition easier.
GCP vs AWS: What You Need to Know
Before diving in, here's how GCP services map to AWS:
| GCP Service | AWS Equivalent | What It Does |
|---|---|---|
| Cloud Run | Lambda + API Gateway | Serverless containers with automatic HTTPS endpoints |
| Artifact Registry | ECR | Docker image registry |
| Secret Manager | Secrets Manager | Store API keys and credentials |
| Workload Identity Federation | IAM Roles for service-to-service | Keyless authentication for GitHub Actions |
| Vertex AI | Bedrock | Managed AI/ML platform |
The biggest difference? Cloud Run uses containers, not zip files like Lambda. This means you write a Dockerfile instead of bundling your code.
The Application: Flask + Vertex AI Gemini
The app is a Slack bot that uses Google's Gemini AI to respond to messages:
Project structure:
```
my-slack-bot/
├── app/
│   ├── __init__.py
│   ├── main.py              # Flask app
│   ├── handlers/
│   │   ├── __init__.py
│   │   └── slack_events.py  # Slack event handlers
│   └── services/
│       ├── __init__.py
│       └── gemini.py        # Vertex AI integration
├── .venv/
├── pyproject.toml
├── uv.lock
├── Dockerfile               # Container definition
├── .dockerignore
└── .github/workflows/deploy.yml  # CI/CD pipeline
```
Key dependencies (in pyproject.toml):
```toml
[project]
dependencies = [
    "flask>=3.1.2",
    "python-dotenv>=1.1.1",
    "slack-bolt>=1.26.0",
    "google-cloud-aiplatform>=1.121.0",
]
```
The Dockerfile
Unlike Lambda where you zip your code, Cloud Run requires a container image. Here's the Dockerfile:
```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install uv (fast Python package manager)
RUN pip install uv

# Copy dependency files
COPY pyproject.toml uv.lock ./

# Install dependencies (no dev dependencies for production)
RUN uv sync --frozen --no-dev

# Copy application code
COPY app/ ./app/

# Cloud Run sets PORT environment variable
EXPOSE 8080

# Production settings
ENV FLASK_ENV=production
ENV PORT=8080

# Run the app
CMD uv run python app/main.py
```
Key differences from a Lambda deployment:
- No zip file - Your entire app + dependencies are baked into a Docker image
- Port 8080 - Cloud Run expects your app to listen on the `PORT` env var (default 8080)
- No handler function - Just run your Flask app normally (see the sketch after this list)
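Here's a minimal sketch of what the entry point in app/main.py might look like - the health-check route is illustrative, and the Slack handler wiring is omitted:

```python
# app/main.py - minimal entry-point sketch (handler registration and the
# Gemini service are omitted; the /healthz route is just an illustration)
import os

from flask import Flask

app = Flask(__name__)


@app.route("/healthz")
def healthz():
    # Cloud Run only needs the app to answer HTTP on $PORT; no handler function required
    return "ok", 200


if __name__ == "__main__":
    # Cloud Run injects PORT (default 8080); bind to 0.0.0.0 so the container is reachable
    port = int(os.environ.get("PORT", 8080))
    app.run(host="0.0.0.0", port=port)
```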
.dockerignore (keep the image small):
```
__pycache__/
*.py[cod]
.venv/
.git/
.github/
.env
.env.*
README.md
tests/
.pytest_cache/
```
GCP Setup: One-Time Configuration
1. Create a GCP Project
```bash
# Set your project ID
export PROJECT_ID="your-project-id"
gcloud config set project $PROJECT_ID
```
AWS equivalent: Creating an AWS account/setting up IAM
2. Enable Required APIs
```bash
gcloud services enable \
  run.googleapis.com \
  artifactregistry.googleapis.com \
  secretmanager.googleapis.com \
  aiplatform.googleapis.com \
  cloudbuild.googleapis.com
```
AWS equivalent: No action needed - AWS services are enabled by default
3. Create Artifact Registry (Docker Registry)
```bash
gcloud artifacts repositories create my-slack-bot \
  --repository-format=docker \
  --location=asia-northeast1 \
  --description="Container images for Slack bot"
```
AWS equivalent: aws ecr create-repository
4. Store Secrets
Instead of environment variables, use Secret Manager (like AWS Secrets Manager):
```bash
# Store Slack credentials
echo -n "xoxb-your-slack-token" | \
  gcloud secrets create SLACK_BOT_TOKEN --data-file=-

echo -n "your-slack-signing-secret" | \
  gcloud secrets create SLACK_SIGNING_SECRET --data-file=-

# Store GCP project ID (needed for Vertex AI)
echo -n "$PROJECT_ID" | \
  gcloud secrets create GCP_PROJECT_ID --data-file=-
```
Why Secret Manager instead of env vars?
- Secrets are encrypted at rest
- Fine-grained IAM permissions
- Automatic rotation support
- No secrets in GitHub Actions
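Because the deploy step later in the workflow mounts these secrets as environment variables (via --set-secrets), the application code just reads ordinary env vars - a minimal sketch, assuming a .env fallback for local runs:

```python
# Sketch: config loading that works locally (.env via python-dotenv) and on
# Cloud Run, where --set-secrets exposes Secret Manager values as env vars.
import os

from dotenv import load_dotenv

load_dotenv()  # loads .env locally; effectively a no-op on Cloud Run

SLACK_BOT_TOKEN = os.environ["SLACK_BOT_TOKEN"]
SLACK_SIGNING_SECRET = os.environ["SLACK_SIGNING_SECRET"]
GCP_PROJECT_ID = os.environ["GCP_PROJECT_ID"]
```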
5. Set Up Workload Identity Federation (GitHub Actions)
This is the GCP equivalent of AWS's OIDC provider for GitHub Actions - it lets GitHub authenticate to GCP without storing service account keys.
Create service account:
```bash
gcloud iam service-accounts create github-actions \
  --display-name="GitHub Actions Deployment"

# Grant permissions
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/run.admin"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.writer"

gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.serviceAccountUser"
```
Create Workload Identity Pool:
```bash
# Create pool
gcloud iam workload-identity-pools create "github" \
  --location="global" \
  --display-name="GitHub Actions Pool"

# Create provider
gcloud iam workload-identity-pools providers create-oidc "github-provider" \
  --location="global" \
  --workload-identity-pool="github" \
  --display-name="GitHub Provider" \
  --attribute-mapping="google.subject=assertion.sub,attribute.actor=assertion.actor,attribute.repository=assertion.repository,attribute.repository_owner=assertion.repository_owner" \
  --attribute-condition="assertion.repository_owner == 'YOUR_GITHUB_ORG'" \
  --issuer-uri="https://token.actions.githubusercontent.com"

# Get provider resource name (you'll need this for GitHub secrets)
gcloud iam workload-identity-pools providers describe "github-provider" \
  --location="global" \
  --workload-identity-pool="github" \
  --format="value(name)"

# Allow GitHub to impersonate service account
# Replace PROJECT_NUMBER with your actual project number
# Replace YOUR_GITHUB_ORG/your-repo with your repo
gcloud iam service-accounts add-iam-policy-binding \
  "github-actions@$PROJECT_ID.iam.gserviceaccount.com" \
  --role="roles/iam.workloadIdentityUser" \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/github/attribute.repository/YOUR_GITHUB_ORG/your-repo"
```
AWS equivalent: Setting up OIDC provider and IAM role for GitHub Actions
6. Grant Cloud Run Access to Secrets
```bash
# Get your Cloud Run service account (PROJECT_NUMBER-compute@developer.gserviceaccount.com)
# Grant it access to secrets
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```
Why? Cloud Run needs permission to read secrets when your app starts.
GitHub Actions Workflow
Here's the complete CI/CD pipeline (.github/workflows/deploy.yml):
```yaml
name: Deploy to Cloud Run

on:
  push:
    branches: [main]
  workflow_dispatch:  # Manual trigger

env:
  PROJECT_ID: ${{ secrets.GCP_PROJECT_ID }}
  SERVICE_NAME: my-slack-bot
  REGION: asia-northeast1

jobs:
  deploy:
    runs-on: ubuntu-latest

    permissions:
      contents: read
      id-token: write  # Required for Workload Identity Federation

    steps:
      - name: Checkout code
        uses: actions/checkout@v4

      - name: Authenticate to Google Cloud
        uses: google-github-actions/auth@v2
        with:
          workload_identity_provider: ${{ secrets.WIF_PROVIDER }}
          service_account: ${{ secrets.WIF_SERVICE_ACCOUNT }}

      - name: Set up Cloud SDK
        uses: google-github-actions/setup-gcloud@v2

      - name: Authorize Docker push
        run: gcloud auth configure-docker ${{ env.REGION }}-docker.pkg.dev

      - name: Build and Push Container
        run: |
          docker build -t ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE_NAME }}/${{ env.SERVICE_NAME }}:${{ github.sha }} .
          docker push ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE_NAME }}/${{ env.SERVICE_NAME }}:${{ github.sha }}

      - name: Deploy to Cloud Run
        run: |
          gcloud run deploy ${{ env.SERVICE_NAME }} \
            --image ${{ env.REGION }}-docker.pkg.dev/${{ env.PROJECT_ID }}/${{ env.SERVICE_NAME }}/${{ env.SERVICE_NAME }}:${{ github.sha }} \
            --region ${{ env.REGION }} \
            --platform managed \
            --allow-unauthenticated \
            --set-env-vars "FLASK_ENV=production" \
            --set-secrets "SLACK_BOT_TOKEN=SLACK_BOT_TOKEN:latest,SLACK_SIGNING_SECRET=SLACK_SIGNING_SECRET:latest,GCP_PROJECT_ID=GCP_PROJECT_ID:latest" \
            --memory 512Mi \
            --cpu 1 \
            --min-instances 0 \
            --max-instances 10 \
            --timeout 60

      - name: Show Service URL
        run: |
          gcloud run services describe ${{ env.SERVICE_NAME }} \
            --region ${{ env.REGION }} \
            --format 'value(status.url)'
```
GitHub Secrets to configure:
Go to your repo → Settings → Secrets and variables → Actions:
- `GCP_PROJECT_ID`: Your GCP project ID
- `WIF_PROVIDER`: Workload Identity Provider resource name from step 5
- `WIF_SERVICE_ACCOUNT`: `github-actions@YOUR_PROJECT_ID.iam.gserviceaccount.com`
Key differences from AWS CodePipeline/GitHub Actions:
| Feature | AWS | GCP |
|---|---|---|
| Authentication | OIDC with IAM Role ARN | Workload Identity Federation |
| Container registry | Push to ECR | Push to Artifact Registry |
| Deployment | Update Lambda/ECS | Deploy to Cloud Run |
| Secrets | Environment variables or Parameter Store | Secret Manager with --set-secrets |
Deployment
Once everything is configured, deployment is automatic:
```bash
git add .
git commit -m "Deploy to Cloud Run"
git push origin main
```
GitHub Actions will:
- Build your Docker image
- Push to Artifact Registry
- Deploy to Cloud Run
- Output the service URL
Example output:
```
Deploying container to Cloud Run service [my-slack-bot]
✓ Deploying new service... Done.
  https://my-slack-bot-abc123-an.a.run.app
```
That URL is your live endpoint! Update your Slack app's Event Subscriptions URL to https://your-url.run.app/slack/events.
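For reference, this is roughly how the /slack/events route gets wired up with slack-bolt's Flask adapter - a sketch of handlers/slack_events.py with an illustrative app_mention handler, not the repo's actual code:

```python
# Sketch: wiring slack-bolt into Flask (the real handlers/slack_events.py may differ)
import os

from flask import Flask, request
from slack_bolt import App
from slack_bolt.adapter.flask import SlackRequestHandler

bolt_app = App(
    token=os.environ["SLACK_BOT_TOKEN"],
    signing_secret=os.environ["SLACK_SIGNING_SECRET"],
)
handler = SlackRequestHandler(bolt_app)

flask_app = Flask(__name__)


@bolt_app.event("app_mention")
def handle_mention(event, say):
    # Placeholder: the real handler would call the Gemini service here
    say(f"You said: {event.get('text', '')}")


@flask_app.route("/slack/events", methods=["POST"])
def slack_events():
    # slack-bolt verifies the signing secret and dispatches to registered handlers
    return handler.handle(request)
```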
Cost Comparison: Cloud Run vs Lambda
Cloud Run Pricing (as of 2024):
- Free tier: 2 million requests/month
- After free tier: $0.40 per million requests
- Memory: $0.0000025/GB-second
- CPU: $0.00002400/vCPU-second
- Min instances 0: Scales to zero when idle (free)
Lambda Pricing:
- Free tier: 1 million requests/month
- After free tier: $0.20 per million requests
- Memory: Similar per GB-second pricing
- Min instances 0: Also scales to zero
For a small Slack bot:
- Cloud Run: ~$0-5/month (usually free tier)
- Lambda + API Gateway: ~$0-5/month
Nearly identical! But Cloud Run's advantage is simpler deployments (just Docker) and better cold start times for containerized apps.
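A quick back-of-envelope check with assumed numbers (100k requests/month at roughly 200 ms each, on the 512Mi / 1 vCPU config from the workflow) shows why:

```python
# Back-of-envelope Cloud Run cost estimate - traffic numbers are assumptions,
# unit prices are the ones listed above.
requests_per_month = 100_000        # assumption: small Slack workspace
avg_seconds_per_request = 0.2       # assumption: ~200 ms per event
vcpu = 1.0                          # --cpu 1
memory_gb = 0.5                     # --memory 512Mi

billable_seconds = requests_per_month * avg_seconds_per_request  # 20,000 s

cpu_cost = billable_seconds * vcpu * 0.000024         # ~ $0.48
mem_cost = billable_seconds * memory_gb * 0.0000025   # ~ $0.03
request_cost = 0.0                                    # well under the 2M free requests

print(f"~${cpu_cost + mem_cost + request_cost:.2f}/month before the compute free tier")
```

Even before Cloud Run's free compute tier is applied, that works out to around fifty cents a month, which is why a small bot usually lands at $0.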
Common Gotchas
1. Secrets Permission Denied
Error:
```
Permission denied on secret: SLACK_BOT_TOKEN for Revision service account
```
Solution: Grant Secret Manager access to Cloud Run service account:
```bash
gcloud projects add-iam-policy-binding $PROJECT_ID \
  --member="serviceAccount:PROJECT_NUMBER-compute@developer.gserviceaccount.com" \
  --role="roles/secretmanager.secretAccessor"
```
2. Import Errors After Deployment
Error: Works locally but crashes in Cloud Run with ModuleNotFoundError.
Solution: Make sure your Dockerfile installs the package in editable mode:
```dockerfile
# Add to Dockerfile
RUN uv pip install -e .
```
Or structure your imports correctly using absolute paths.
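For example, an absolute import rooted at the app package - this assumes the editable install above (or PYTHONPATH pointing at /app), and the imported function name is illustrative:

```python
# app/handlers/slack_events.py - absolute import that resolves the same way
# locally and inside the container
from app.services.gemini import generate_reply  # illustrative function name
```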
3. Getting PROJECT_NUMBER
You need your project number (not ID) for Workload Identity Federation:
```bash
gcloud projects describe $PROJECT_ID --format='value(projectNumber)'
```
Monitoring and Logs
View logs:
```bash
# Stream logs in real-time
gcloud run services logs tail my-slack-bot --region asia-northeast1

# View in Cloud Console
# https://console.cloud.google.com/run
```
AWS equivalent: CloudWatch Logs
The Cloud Run logs UI is actually nicer than CloudWatch - built-in filtering, automatic log grouping, and integrated error reporting.
Local Testing Before Deployment
Test with actual GCP credentials:
```bash
# Authenticate
gcloud auth application-default login

# Add to .env
GCP_PROJECT_ID=your-project-id
GCP_LOCATION=asia-northeast1
SLACK_BOT_TOKEN=xoxb-...
SLACK_SIGNING_SECRET=...

# Run locally
uv run python app/main.py
```
Your app will use your personal GCP credentials locally, then switch to the service account when deployed to Cloud Run.
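That works because the Vertex AI SDK picks up Application Default Credentials on its own - roughly what the setup in services/gemini.py might look like (the model name and function are my assumptions, not the repo's code):

```python
# Sketch: Vertex AI init that uses ADC locally and the Cloud Run service
# account in production, with no credential-handling code of its own.
import os

import vertexai
from vertexai.generative_models import GenerativeModel

vertexai.init(
    project=os.environ["GCP_PROJECT_ID"],
    location=os.environ.get("GCP_LOCATION", "asia-northeast1"),
)

model = GenerativeModel("gemini-1.5-flash")  # assumed model name


def generate_reply(prompt: str) -> str:
    # Single-turn generation; the real bot may add chat history or streaming
    return model.generate_content(prompt).text
```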
Wrapping Up
Deploying to GCP Cloud Run as an AWS developer wasn't as scary as I thought. The hardest part was understanding the terminology - once I mapped GCP services to their AWS equivalents, everything clicked.
Key takeaways:
- Cloud Run = Lambda + API Gateway but with containers instead of zip files
- Workload Identity Federation is cleaner than storing service account keys
- Secret Manager is better than environment variables for production
- Dockerfile is straightforward if you've used Docker before
- GitHub Actions setup is very similar to AWS OIDC
The entire setup took me about 2 hours from zero GCP knowledge to a working deployment. Most of that time was understanding Workload Identity Federation and IAM permissions.
If you're an AWS developer looking to use GCP-specific services (like Vertex AI), Cloud Run is a great starting point. The serverless model is familiar, and the deployment process is actually simpler than juggling Lambda + API Gateway + ECR.
In my next article, I'll cover integrating Vertex AI Gemini for the actual Slack bot functionality.
