
GCP Core Services Hands-on Tutorial: Compute Engine, Cloud Run, GKE Complete Operations Guide

16 min read
#GCP Tutorial#Compute Engine#Cloud Run#GKE#Kubernetes#VM#Container Deployment#Serverless#Cloud Computing#Hands-on Guide

Want to run your programs on GCP but don't know which service to use?

Compute Engine, Cloud Run, GKE... they all sound similar—what's the difference?

This article will walk you through actually operating GCP's three major compute services. From creating your first VM, to deploying Serverless containers, to managing Kubernetes clusters—step-by-step to get you started.

Want to understand GCP's overall architecture first? Please refer to "GCP Complete Guide: From Beginner Concepts to Enterprise Practice."


GCP Compute Service Selection Guide

Before getting hands-on, understand the differences between these three services.

VM vs Container vs Serverless Comparison

| Service | Type | What You Manage | Suitable Scenarios |
|---|---|---|---|
| Compute Engine | VM | OS, runtime, application | Full control, special software requirements |
| GKE | Container orchestration | Containers, Pods, Deployments | Large-scale microservices, complex orchestration |
| Cloud Run | Serverless container | Container image | API services, web apps, quick deployment |

Simple Memory Aid:

  • Need full control → Compute Engine
  • Want to save effort and money → Cloud Run
  • Need large-scale management → GKE

Choosing Services Based on Workload

Choose Compute Engine when:

  • Need to install specific software (like licensed software)
  • Need GPU for machine learning training
  • Need Windows Server
  • Traditional monolithic applications
  • Need fixed IP services

Choose Cloud Run when:

  • HTTP services (API, Web)
  • Unstable traffic (sometimes busy, sometimes idle)
  • Want automatic scaling
  • Want per-request billing (no traffic = no charge)
  • Quick deployment and iteration

Choose GKE when:

  • Many microservices need orchestration
  • Need fine-grained network control
  • Have on-premise K8s experience to migrate to cloud
  • Need stateful services
  • Enterprise container platform requirements

Service Combinations and Hybrid Architecture

In practice, many projects mix these services:

Common Combo 1: Frontend/Backend Separation

  • Frontend: Cloud Run (static site, SSR)
  • Backend API: Cloud Run
  • Background tasks: Compute Engine

Common Combo 2: Microservices Architecture

  • Main services: GKE
  • Lightweight webhooks: Cloud Run
  • Batch processing: Compute Engine (Spot VM)

Common Combo 3: ML Workflow

  • Model training: Compute Engine (GPU)
  • Model serving: Cloud Run or GKE
  • Data processing: Dataflow

Compute Engine (VM) Hands-on Tutorial

Compute Engine is GCP's most basic compute service. Like renting a computer in the cloud.

Creating Your First VM Instance

Method 1: Using Cloud Console (Web Interface)

  1. Go to Cloud Console → Compute Engine → VM instances

  2. Click "Create Instance"

  3. Set basic info:

    • Name: my-first-vm
    • Region: asia-east1 (Taiwan)
    • Zone: asia-east1-b
  4. Select machine type (detailed next section)

  5. Select boot disk (detailed next section)

  6. Set firewall:

    • Check "Allow HTTP traffic" (if running web)
    • Check "Allow HTTPS traffic"
  7. Click "Create"

Method 2: Using gcloud CLI

gcloud compute instances create my-first-vm \
  --zone=asia-east1-b \
  --machine-type=e2-medium \
  --image-family=debian-11 \
  --image-project=debian-cloud \
  --boot-disk-size=20GB \
  --tags=http-server,https-server

CLI benefits: can be scripted for easy repetition and version control.

Machine Types and Spec Selection

GCP offers many machine series, and picking the wrong one wastes money.

Machine Series Comparison:

| Series | Features | Use Cases | Price |
|---|---|---|---|
| E2 | Cheapest, shared CPU | Dev/test, small services | 💰 |
| N2 | Balanced, dedicated CPU | General production | 💰💰 |
| N2D | AMD processors | High cost-performance needs | 💰💰 |
| C2 | Compute optimized | CPU-intensive work | 💰💰💰 |
| M2 | Memory optimized | Large databases, SAP | 💰💰💰💰 |
| A2 | GPU optimized | ML training, rendering | 💰💰💰💰💰 |

How to Choose?

  • Dev/test environments → e2-micro (free tier eligible in select US regions) or e2-small
  • Small web services → e2-medium
  • General production → n2-standard-2 minimum
  • Databases → n2-highmem-*
  • Batch computing → c2-standard-*

Custom Machine Type:

If standard specs don't fit your needs, customize vCPU and memory:

gcloud compute instances create custom-vm \
  --custom-cpu=6 \
  --custom-memory=12GB
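
The custom shape is not unconstrained. Below is a quick validity check based on the rules as I understand them for custom machine types (verify against current docs, since the exact bounds differ by series): an even vCPU count, memory in 256 MB steps, and 0.5 to 8 GB of memory per vCPU.

```javascript
// Sketch: sanity-check a custom machine shape before calling gcloud.
// Rules assumed here (they vary per machine series -- check the docs):
// even vCPU count, memory in 256 MB steps, 0.5-8 GB of memory per vCPU.
function isValidCustomShape(vcpus, memoryGB) {
  const evenCpus = vcpus % 2 === 0;
  const memoryStepOk = (memoryGB * 1024) % 256 === 0;
  const perVcpu = memoryGB / vcpus;
  return evenCpus && memoryStepOk && perVcpu >= 0.5 && perVcpu <= 8;
}

console.log(isValidCustomShape(6, 12)); // true  (matches the example above: 2 GB per vCPU)
console.log(isValidCustomShape(6, 96)); // false (16 GB per vCPU is out of range)
```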

Boot Disk and Image Settings

Image Selection:

| Type | Options | Cost |
|---|---|---|
| Public images | Debian, Ubuntu, CentOS | Free |
| Premium images | Windows, RHEL, SUSE | Extra license charge |
| Custom images | Your own | Storage cost only |

Disk Types:

| Type | IOPS | Use Cases | Price |
|---|---|---|---|
| pd-standard (HDD) | Low | Backup, cold data | $0.04/GB |
| pd-balanced (SSD) | Medium | General purpose | $0.10/GB |
| pd-ssd (SSD) | High | Databases, high I/O | $0.17/GB |
| pd-extreme (SSD) | Very high | High-performance databases | $0.125/GB plus provisioned IOPS |

Recommendations:

  • Dev/test → pd-balanced, 20-50GB
  • General production → pd-balanced, 50-100GB
  • Databases → pd-ssd or pd-extreme
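
Using the per-GB prices from the table above, a quick back-of-the-envelope monthly estimate can be sketched as follows (prices vary by region, so treat the numbers as illustrative):

```javascript
// Rough monthly disk cost estimate using the per-GB prices listed above.
// Prices are illustrative and region-dependent -- check the pricing page.
const pricePerGB = {
  'pd-standard': 0.04,
  'pd-balanced': 0.10,
  'pd-ssd': 0.17,
};

function monthlyDiskCost(type, sizeGB) {
  return pricePerGB[type] * sizeGB;
}

console.log(monthlyDiskCost('pd-balanced', 50)); // 50 GB general-purpose disk
console.log(monthlyDiskCost('pd-ssd', 100));     // 100 GB database disk
```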

Network and Firewall Configuration

Default Network Settings:

Each VM gets by default:

  • Internal IP (for VPC internal use)
  • External IP (for external connections, optional)

Firewall Rule Setup:

# Allow HTTP
gcloud compute firewall-rules create allow-http \
  --allow=tcp:80 \
  --target-tags=http-server

# Allow HTTPS
gcloud compute firewall-rules create allow-https \
  --allow=tcp:443 \
  --target-tags=https-server

# Allow SSH from specific IP
gcloud compute firewall-rules create allow-ssh-from-office \
  --allow=tcp:22 \
  --source-ranges=203.0.113.0/24
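
To see what a `--source-ranges` filter like `203.0.113.0/24` actually admits, here is a minimal sketch of IPv4 CIDR matching:

```javascript
// Sketch: check whether an IPv4 address falls inside a CIDR range,
// mirroring how --source-ranges scopes the SSH rule above.
function ipToInt(ip) {
  return ip.split('.').reduce((acc, octet) => (acc << 8) | parseInt(octet, 10), 0) >>> 0;
}

function inCidr(ip, cidr) {
  const [base, bits] = cidr.split('/');
  // Special-case /0 because shifting a 32-bit value by 32 is undefined in JS.
  const mask = bits === '0' ? 0 : (~0 << (32 - Number(bits))) >>> 0;
  return (ipToInt(ip) & mask) === (ipToInt(base) & mask);
}

console.log(inCidr('203.0.113.42', '203.0.113.0/24')); // true:  inside the office range
console.log(inCidr('198.51.100.7', '203.0.113.0/24')); // false: blocked by the rule
```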

Security Recommendations:

  • Don't open SSH to 0.0.0.0/0 (the entire internet could connect)
  • Use IAP (Identity-Aware Proxy) instead of direct SSH
  • Regularly review unnecessary firewall rules

SSH Connection and Basic Management

Connection Methods:

1. Cloud Console Built-in SSH

Simplest way—click and connect.

2. gcloud CLI

gcloud compute ssh my-first-vm --zone=asia-east1-b

3. Standard SSH Client

# First set up SSH Key
gcloud compute config-ssh

# Then use regular SSH
ssh my-first-vm.asia-east1-b.your-project

Common Management Commands:

# List all VMs
gcloud compute instances list

# Stop VM (stops vCPU billing, but disk still charges)
gcloud compute instances stop my-first-vm --zone=asia-east1-b

# Start VM
gcloud compute instances start my-first-vm --zone=asia-east1-b

# Delete VM
gcloud compute instances delete my-first-vm --zone=asia-east1-b
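
The comment on `stop` above matters for budgeting: a stopped VM stops compute billing but keeps paying for its disk. A rough sketch, using the pd-balanced price from the disk table and an assumed (not official) hourly rate for an e2-medium:

```javascript
// Sketch: monthly cost of a VM that runs part-time. The hourly VM rate is
// an assumption for illustration; the disk rate is the pd-balanced price
// from the table above. Check the pricing page for real numbers.
const ASSUMED_RATES = {
  e2MediumPerHour: 0.04,       // assumed $/hour while the VM is running
  pdBalancedPerGBMonth: 0.10,  // $/GB-month, from the disk table above
};

function monthlyVmCost({ hoursRunning, diskGB }) {
  return hoursRunning * ASSUMED_RATES.e2MediumPerHour
       + diskGB * ASSUMED_RATES.pdBalancedPerGBMonth;
}

console.log(monthlyVmCost({ hoursRunning: 0, diskGB: 20 }));   // stopped all month: disk only
console.log(monthlyVmCost({ hoursRunning: 730, diskGB: 20 })); // running 24/7
```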

For cost details, see "GCP Pricing and Cost Calculation Complete Guide."


Cloud Run Container Deployment Tutorial

Cloud Run is GCP's Serverless container service. Just give it a container—everything else is handled.

How Cloud Run Works

Core Concepts:

  1. You package a container image
  2. Deploy to Cloud Run
  3. Cloud Run automatically handles:
    • Starting containers
    • Load balancing
    • Auto-scaling (0 to N instances)
    • HTTPS certificates
    • Custom domains

Billing Method:

  • Only charged when processing requests
  • Can scale to 0 instances with no requests
  • Billed by CPU time and memory
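
To make the billing model concrete, here is a rough cost sketch. The rates are placeholders patterned on Cloud Run's pricing dimensions (vCPU-seconds, GiB-seconds, request count), not guaranteed current prices:

```javascript
// Rough sketch of Cloud Run's request-based billing. The rates below are
// placeholder values following the pricing dimensions -- look up current
// per-vCPU-second and per-GiB-second prices before relying on any number.
const ASSUMED_RATES = {
  vcpuSecond: 0.000024,      // assumed $ per vCPU-second
  gibSecond: 0.0000025,      // assumed $ per GiB-second
  perMillionRequests: 0.40,  // assumed $ per million requests
};

function estimateMonthlyCost({ requests, avgSeconds, vcpu, memoryGiB }) {
  const cpuCost = requests * avgSeconds * vcpu * ASSUMED_RATES.vcpuSecond;
  const memCost = requests * avgSeconds * memoryGiB * ASSUMED_RATES.gibSecond;
  const reqCost = (requests / 1e6) * ASSUMED_RATES.perMillionRequests;
  return cpuCost + memCost + reqCost;
}

// The key property: zero traffic (with min-instances=0) means zero cost.
console.log(estimateMonthlyCost({ requests: 0, avgSeconds: 0.2, vcpu: 1, memoryGiB: 0.5 })); // 0
```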

Limitations:

  • Must serve HTTP, listening on the port given by the PORT environment variable
  • Request timeout: max 60 minutes
  • Memory: max 32GB per instance

Deploying Services from Container Registry

Step 1: Prepare Your Application

Using Node.js as example, create index.js:

const express = require('express');
const app = express();
const port = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello from Cloud Run!');
});

app.listen(port, () => {
  console.log(`Server running on port ${port}`);
});

Step 2: Create Dockerfile

FROM node:18-slim
WORKDIR /app
COPY package*.json ./
RUN npm install --production
COPY . .
CMD ["node", "index.js"]

Step 3: Build and Push to Artifact Registry

# Create a Docker repository in Artifact Registry (one-time setup)
gcloud artifacts repositories create REPO_NAME \
  --repository-format=docker \
  --location=asia-east1

# Configure Docker authentication
gcloud auth configure-docker asia-east1-docker.pkg.dev

# Build image
docker build -t asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1 .

# Push
docker push asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1

Step 4: Deploy to Cloud Run

gcloud run deploy my-service \
  --image=asia-east1-docker.pkg.dev/PROJECT_ID/REPO_NAME/my-app:v1 \
  --region=asia-east1 \
  --platform=managed \
  --allow-unauthenticated

After deployment, you'll get an HTTPS URL.

Auto-Scaling and Traffic Management

Auto-Scaling Settings:

# --min-instances: minimum instances (0 = can scale to zero)
# --max-instances: maximum instances
# --concurrency:   max concurrent requests per instance
gcloud run deploy my-service \
  --min-instances=0 \
  --max-instances=100 \
  --concurrency=80
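
These flags interact: by Little's law, in-flight requests roughly equal arrival rate times latency, and Cloud Run adds instances until per-instance concurrency covers that load. A sketch of the steady-state estimate:

```javascript
// Sketch (Little's law): in-flight requests ~= arrival rate x latency.
// The instance count Cloud Run converges toward is roughly that divided
// by per-instance concurrency, clamped between min and max instances.
function estimateInstances({ rps, latencySeconds, concurrency, min = 0, max = 100 }) {
  const inFlight = rps * latencySeconds;
  const needed = Math.ceil(inFlight / concurrency);
  return Math.min(max, Math.max(min, needed));
}

// 1000 req/s at 500 ms latency = 500 in-flight; 500 / 80 rounds up to 7.
console.log(estimateInstances({ rps: 1000, latencySeconds: 0.5, concurrency: 80 })); // 7
```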

Traffic Split (Multi-Version Deployment):

# Deploy new version without traffic
gcloud run deploy my-service \
  --image=my-app:v2 \
  --no-traffic

# Gradually shift traffic
gcloud run services update-traffic my-service \
  --to-revisions=my-service-v2=50,my-service-v1=50

# All traffic to new version
gcloud run services update-traffic my-service \
  --to-latest

Custom Domain and HTTPS Setup

Setting Up Custom Domain:

  1. Go to Cloud Run → Select service → Manage Custom Domains
  2. Click "Add Mapping"
  3. Enter your domain (e.g., api.example.com)
  4. Follow instructions to set up DNS

DNS Setup:

  • Add CNAME record at your DNS provider
  • Point to target provided by Cloud Run

HTTPS:

  • Cloud Run automatically provides SSL certificate
  • Supports auto-renewal
  • No additional setup needed

Environment Variables and Secret Management

Setting Environment Variables:

gcloud run deploy my-service \
  --set-env-vars=DATABASE_URL=xxx,API_KEY=yyy

Using Secret Manager:

# First create Secret
echo -n "my-secret-value" | gcloud secrets create my-secret --data-file=-

# Mount Secret during deployment
gcloud run deploy my-service \
  --set-secrets=API_KEY=my-secret:latest

Benefits:

  • Secrets don't appear in deploy commands or env var lists
  • Can set IAM permissions to control access
  • Supports version management
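
From the application's point of view, a secret mounted with `--set-secrets` looks like an ordinary environment variable. A small helper that fails fast at startup when one is missing (the injectable `env` parameter exists only to keep the helper testable):

```javascript
// Sketch: read a secret exposed via --set-secrets like any other env var,
// failing fast at startup if it is absent.
function requireEnv(name, env = process.env) {
  const value = env[name];
  if (value === undefined || value === '') {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}

// Usage at startup:
// const apiKey = requireEnv('API_KEY');
```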

GKE (Google Kubernetes Engine) Introduction

If your services are large and complex enough, GKE is the most powerful choice.

Creating and Configuring GKE Clusters

Using Console:

  1. Go to GKE → Create Cluster
  2. Choose mode: Autopilot or Standard (explained next section)
  3. Set name and region
  4. Configure node pools (Standard mode)
  5. Create

Using gcloud:

# Autopilot mode
gcloud container clusters create-auto my-cluster \
  --region=asia-east1

# Standard mode
gcloud container clusters create my-cluster \
  --zone=asia-east1-b \
  --num-nodes=3 \
  --machine-type=e2-medium

Get Cluster Credentials:

gcloud container clusters get-credentials my-cluster \
  --region=asia-east1

After running, you can use kubectl to operate the cluster.

Autopilot vs Standard Mode Comparison

| Item | Autopilot | Standard |
|---|---|---|
| Node management | Google manages | You manage |
| Billing unit | Pod resources | Node resources |
| Configuration flexibility | Less | Fully customizable |
| Security | Hardened by default | Self-configured |
| Complexity | Low | High |
| Suitable for | Most users | Special configuration needs |

Recommendations:

  • Just starting with GKE → Autopilot
  • Need GPU, special node configs → Standard
  • Want to save management effort → Autopilot
  • Have dedicated K8s team → Standard

Workload Deployment Basics

Deploying a Simple Application:

Create deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: asia-east1-docker.pkg.dev/PROJECT_ID/REPO/my-app:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            memory: "256Mi"
            cpu: "250m"
          limits:
            memory: "512Mi"
            cpu: "500m"
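
The `resources` quantities above use Kubernetes units: `250m` CPU means 250 millicores (a quarter of a core), and `Mi`/`Gi` are binary mega/gigabytes. A minimal parser showing how they read:

```javascript
// Sketch: interpret the Kubernetes resource quantities used above.
// "250m" CPU = 250 millicores = 0.25 cores; "Mi"/"Gi" are binary units.
function parseCpu(q) {
  return q.endsWith('m') ? Number(q.slice(0, -1)) / 1000 : Number(q);
}

function parseMemoryMi(q) {
  if (q.endsWith('Gi')) return Number(q.slice(0, -2)) * 1024;
  if (q.endsWith('Mi')) return Number(q.slice(0, -2));
  throw new Error('unsupported unit: ' + q);
}

console.log(parseCpu('250m'));       // 0.25
console.log(parseMemoryMi('512Mi')); // 512
```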

Deploy:

kubectl apply -f deployment.yaml

Common Commands:

# View Deployments
kubectl get deployments

# View Pods
kubectl get pods

# View Pod logs
kubectl logs <pod-name>

# Enter Pod
kubectl exec -it <pod-name> -- /bin/sh

# Scale replicas
kubectl scale deployment my-app --replicas=5

Service Exposure and Load Balancing

Create Service:

apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

Service Types:

| Type | Purpose | External Access |
|---|---|---|
| ClusterIP | Cluster-internal communication | No |
| NodePort | Opens a port on every node | Yes (rarely used) |
| LoadBalancer | Provisions a GCP load balancer | Yes |

Ingress (Advanced):

To manage routing for multiple services:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: api.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              number: 80
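
The `pathType: Prefix` above matches on whole path segments, so a prefix of `/api` matches `/api` and `/api/users` but not `/apiary`. A sketch of that matching rule:

```javascript
// Sketch: Kubernetes Ingress "Prefix" path matching -- the prefix must
// match on element boundaries, so "/api" does not match "/apiary".
function prefixMatch(pathPrefix, requestPath) {
  if (pathPrefix === '/') return true; // "/" matches every path
  return requestPath === pathPrefix || requestPath.startsWith(pathPrefix + '/');
}

console.log(prefixMatch('/api', '/api/users')); // true
console.log(prefixMatch('/api', '/apiary'));    // false
```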

Storage Service Integration

Compute services often need storage.

Cloud Storage Mounting and Usage

Accessing Cloud Storage from VM:

# Install gsutil (usually pre-installed)
# Upload file
gsutil cp local-file.txt gs://my-bucket/

# Download file
gsutil cp gs://my-bucket/file.txt ./

# Sync folder
gsutil rsync -r ./local-folder gs://my-bucket/folder

Accessing from Cloud Run:

const {Storage} = require('@google-cloud/storage');
const storage = new Storage();

async function uploadFile() {
  await storage.bucket('my-bucket').upload('local-file.txt');
}

uploadFile().catch(console.error);

Persistent Disk Configuration

Add Disk to VM:

# Create disk
gcloud compute disks create my-disk \
  --size=100GB \
  --type=pd-ssd \
  --zone=asia-east1-b

# Attach to VM
gcloud compute instances attach-disk my-vm \
  --disk=my-disk \
  --zone=asia-east1-b

Mount Inside VM:

# After SSH into the VM. Using /dev/disk/by-id is safer than /dev/sdb,
# since device letters can change across reboots.
sudo mkfs.ext4 -m 0 -F /dev/disk/by-id/google-my-disk
sudo mkdir -p /mnt/data
sudo mount /dev/disk/by-id/google-my-disk /mnt/data

# Set auto-mount on boot
echo '/dev/disk/by-id/google-my-disk /mnt/data ext4 defaults 0 0' | sudo tee -a /etc/fstab

Filestore (NFS) Use Cases

Suitable Scenarios:

  • Multiple VMs need shared files
  • Need POSIX filesystem semantics
  • Traditional apps needing NFS

Create Filestore:

gcloud filestore instances create my-filestore \
  --zone=asia-east1-b \
  --tier=BASIC_HDD \
  --file-share=name=vol1,capacity=1TB \
  --network=name=default

Mount on VM:

sudo apt-get install -y nfs-common
sudo mkdir -p /mnt/filestore
# 10.0.0.2 is an example; use your Filestore instance's IP, shown in the
# console or via: gcloud filestore instances describe my-filestore --zone=asia-east1-b
sudo mount 10.0.0.2:/vol1 /mnt/filestore

Common Issues and Best Practices

Common problems you'll run into in practice, and how to solve them.

Performance Tuning Recommendations

Compute Engine:

  • Choose correct machine type (don't over-provision)
  • Use SSD instead of HDD for databases
  • Consider Local SSD for temporary storage
  • Enable Preemptible/Spot VM for batch jobs

Cloud Run:

  • Set appropriate concurrency (default 80)
  • Use min-instances to avoid cold starts
  • Container images should be small (use Alpine, Distroless)
  • Enable startup CPU boost to reduce cold start latency

GKE:

  • Set Resource Requests and Limits
  • Use HPA (Horizontal Pod Autoscaler)
  • Consider Node Auto-provisioning
  • Use Pod Disruption Budget

Cost Control Techniques

Compute Engine:

  • Use Spot VMs for dev environments
  • Use Scheduling to auto-shutdown after hours
  • Regularly clean unused disks and snapshots

Cloud Run:

  • Set min-instances to 0 (allow scale to 0)
  • Don't set max-instances too high when not needed
  • Optimize container startup time

GKE:

  • Autopilot mode bills more precisely by Pod
  • Use Cluster Autoscaler
  • Consider Spot Node Pool for interruptible work

Monitoring and Logging Setup

Cloud Monitoring:

All GCP service metrics automatically go to Cloud Monitoring.

Key Metrics:

  • CPU usage
  • Memory usage
  • Network traffic
  • Latency and error rates

Cloud Logging:

# View VM logs
gcloud logging read "resource.type=gce_instance"

# View Cloud Run logs
gcloud logging read "resource.type=cloud_run_revision"

# View GKE logs
gcloud logging read "resource.type=k8s_container"

Setting Up Alerts:

  1. Go to Cloud Monitoring → Alerting
  2. Create Alert Policy
  3. Select metrics and conditions
  4. Set notification channels (Email, Slack, PagerDuty)

For security settings, see "GCP Security and Cloud Armor Protection Complete Guide."


Need a Second Opinion on Architecture Design?

A good architecture can cut operational costs several times over.

Schedule Architecture Consultation and let us review your cloud architecture together.

CloudInsight's Architecture Consulting Services:

  • Existing Architecture Assessment: Find performance bottlenecks and cost waste
  • Migration Planning: Complete planning from on-premise to cloud
  • Best Practice Recommendations: Recommend optimal service combinations for your needs
  • Proof of Concept (POC): Help you quickly validate architecture feasibility

Conclusion: Building Your GCP Compute Architecture

After this tutorial, you should know how to choose and use GCP's compute services.

Quick Recap:

| Need | Choice | Reason |
|---|---|---|
| Full control | Compute Engine | Can install any software |
| Minimal ops effort | Cloud Run | No infrastructure to manage |
| Large-scale microservices | GKE | Powerful orchestration |
| Unstable traffic | Cloud Run | Can scale to 0 |
| GPU | Compute Engine | Supports NVIDIA GPUs |
| Complex network needs | GKE | Fine-grained network control |

Next Step Recommendations:

  1. For new projects, start with Cloud Run
  2. If need full control, use Compute Engine
  3. If you have more than about 10 services, consider GKE
  4. Mixed use is normal—don't force everything into one type

Hands-on is the best way to learn. Open a test project and run through all the examples in this tutorial!


Image Descriptions

Illustration: GCP Compute Service Selection Decision Tree

Flowchart-style decision tree starting from "Choose Compute Service" and branching through questions ("Need full control?", "Is it an HTTP service?", "Need large-scale orchestration?") to three outcomes: Compute Engine, Cloud Run, GKE. (Slug: gcp-compute-service-decision-tree)

Illustration: Cloud Run Auto-Scaling Diagram

Timeline chart of Cloud Run scaling behavior: time on the X-axis, request count (line) on the left Y-axis, instance count (bars) on the right. Instances increase as requests increase and fall back to zero when traffic stops. (Slug: cloud-run-autoscaling-timeline-chart)

Illustration: GKE Cluster Architecture Diagram

Architecture diagram of a GKE cluster: the cluster boundary contains a Google-managed Control Plane and a Node Pool with multiple Nodes, each running multiple Pods; an external Load Balancer points at the Pods. (Slug: gke-cluster-architecture-diagram)

Illustration: Three Compute Services Cost Comparison Chart

Line chart comparing monthly cost against monthly requests (0 to 10 million): Compute Engine is flat, Cloud Run starts at zero and rises gradually, GKE starts at a fixed cost and then rises. (Slug: gcp-compute-services-cost-comparison-chart)


