Cloud Native 12 Factor Complete Guide: 12 Principles for Building Scalable Cloud Native Apps [2025]
Your code runs perfectly on your machine, but breaks as soon as it goes live. Config files are scattered everywhere, environment variables are a mess, and every deployment feels like defusing a bomb. The 12 Factor App methodology provided answers to these problems over a decade ago.
12 Factor is a set of battle-tested principles for cloud application development. Regardless of your programming language or framework, these 12 principles apply. After reading this article, you'll understand the specific meaning of each principle and how to implement them in your projects.

What is 12 Factor App?
Origins of 12 Factor
12 Factor App was proposed by Heroku co-founder Adam Wiggins in 2011. At that time, Heroku was one of the earliest PaaS (Platform as a Service) platforms, and they had accumulated extensive experience running SaaS applications in cloud environments.
These experiences were distilled into 12 principles to help developers avoid common pitfalls. Although over a decade has passed, these principles remain relevant today and have even become the foundation for Cloud Native application standards.
Why is 12 Factor Important?
12 Factor solves these problems:
- Environment inconsistency: Works in development, breaks in production
- Deployment difficulties: Manual configuration changes with every release
- Scaling limitations: Applications can't scale horizontally
- Maintenance challenges: New team members take forever to understand the project
Applications following 12 Factor principles:
- Can be seamlessly ported between environments
- Are suitable for deployment on cloud platforms
- Can easily scale horizontally
- Reduce gaps between development and operations
Relationship Between 12 Factor and Cloud Native
12 Factor predates the term Cloud Native, but its philosophy perfectly aligns with Cloud Native architecture requirements. In fact, if you want to adopt Kubernetes or microservices architecture, following 12 Factor is the most basic prerequisite.
Many best practices later proposed by CNCF can be traced back to 12 Factor principles. You could say 12 Factor is the foundation of Cloud Native.
Want to understand the complete Cloud Native concept? See Cloud Native Complete Guide: What is Cloud Native?.
Complete Analysis of 12 Principles
1. Codebase
One codebase, multiple deployments
Principle Explained:
An application should have only one codebase, stored in a version control system (like Git). This codebase can be deployed to multiple environments (development, testing, production), but the code itself is just one copy.
Correct Approach:
my-app/
├── src/
├── package.json
└── .git/
# Deploy to different environments via CI/CD
dev.example.com ← same codebase
staging.example.com ← same codebase
prod.example.com ← same codebase
Violations:
- Using different code versions for development and production
- Forking different versions for different customers
- Copy-pasting shared code across multiple projects
Why It Matters:
Scattered code leads to version chaos. Fix a bug here, forget to fix it there. Following this principle, any change can be tracked, any environment's issue can be reproduced.
2. Dependencies
Explicitly declare and isolate dependencies
Principle Explained:
Applications should not rely on system-level packages. All dependencies should be explicitly declared in a dependency manifest (like package.json, requirements.txt, pom.xml).
Correct Approach:
// package.json
{
  "dependencies": {
    "express": "^4.18.0",
    "pg": "^8.11.0",
    "redis": "^4.6.0"
  }
}
Violations:
- Code assumes certain command-line tools are installed
- Using undeclared global packages
- README says "please manually install ImageMagick first"
Why It Matters:
Explicit dependencies mean new team members can set up the development environment with a single command. Deployments won't fail because "this machine is missing something."
3. Config
Store config in environment variables
Principle Explained:
Configuration (database connections, API keys, service endpoints) should not be hardcoded. They should be injected via environment variables, allowing the same code to use different configurations in different environments.
Correct Approach:
// Read config from environment variables
const dbHost = process.env.DATABASE_HOST;
const apiKey = process.env.API_KEY;
Violations:
// Hardcoded in code (wrong)
const dbHost = 'prod-db.example.com';
const apiKey = 'sk-1234567890';
Why It Matters:
- Avoid committing secrets to Git
- Different environments use different configs without code changes
- Security: secrets won't leak to code repositories
Test Criterion:
Ask yourself: Can this code be immediately open-sourced? If open-sourcing would expose secrets, the config is in the wrong place.
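The principle above can be pushed one step further: read all config from the environment once at startup and fail fast when something required is missing, instead of crashing later at runtime. A minimal sketch, where `loadConfig` is a hypothetical helper and the variable names follow the examples above:

```javascript
// Fail-fast config loader (illustrative; not part of any framework).
// Reads everything from the environment up front and refuses to start
// if a required value is missing.
function loadConfig(env = process.env) {
  const required = ['DATABASE_HOST', 'API_KEY'];
  const missing = required.filter((name) => !env[name]);
  if (missing.length > 0) {
    throw new Error(`Missing required env vars: ${missing.join(', ')}`);
  }
  return {
    dbHost: env.DATABASE_HOST,
    apiKey: env.API_KEY,
    port: Number(env.PORT || 3000),
  };
}
```

A broken deployment then fails immediately with a clear message naming the missing variable, rather than half-starting and erroring on the first request.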
4. Backing Services
Treat backing services as attached resources
Principle Explained:
Databases, caches, message queues, email services, etc., should all be treated as swappable "attached resources." Applications should not distinguish between "local services" and "third-party services."
Correct Approach:
# Specify service locations via environment variables
DATABASE_URL=postgres://user:pass@localhost:5432/mydb
REDIS_URL=redis://localhost:6379
SMTP_URL=smtp://sendgrid.net:587
Switching databases? Just change the environment variable, no code changes needed.
Violations:
- Code assumes database is always on localhost
- Hardcoded service IP addresses
- Using SQLite for local development, PostgreSQL for production (with code differences)
Why It Matters:
Cloud environment services change frequently. Databases may migrate, cache services may switch providers. Following this principle, these changes don't require code changes, just configuration changes.
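As a sketch of what "swappable" means in code, a service URL taken from the environment can be parsed with Node's built-in `URL` class; `parseDatabaseUrl` is a hypothetical helper, and the app never learns whether the host is a local container or a managed cloud database:

```javascript
// Parse a backing-service URL (e.g. the DATABASE_URL above) using
// Node's built-in WHATWG URL class. Swapping providers means changing
// the environment variable, never this code.
function parseDatabaseUrl(raw) {
  const url = new URL(raw);
  return {
    host: url.hostname,
    port: Number(url.port || 5432), // Postgres default if unspecified
    user: url.username,
    password: url.password,
    database: url.pathname.replace(/^\//, ''),
  };
}
```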
How does Kubernetes implement this principle? See Cloud Native Tech Stack Guide to learn about K8s Service and ConfigMap mechanisms.
5. Build, Release, Run
Strictly separate build and run stages
Principle Explained:
From code to execution, there should be three stages:
- Build: Compile code into an executable artifact
- Release: Combine build artifact with config to produce a deployable version
- Run: Start the application in the execution environment
Correct Approach:
Code → [Build] → Docker Image
Docker Image + Config → [Release] → v1.2.3
v1.2.3 → [Run] → Running container
Each Release should have a unique version number for tracking and rollback.
Violations:
- Directly modifying code in production
- Writing config values during build
- No version numbers, unknown what version is running
Why It Matters:
Strict separation of these three stages ensures:
- Same build can be deployed to different environments
- Quick rollback to previous version when issues occur
- Every version is reproducible
6. Processes
Execute the app as stateless processes
Principle Explained:
Each application instance (process) should be stateless. Any data needing persistence should be stored in external services (databases, caches).
Correct Approach:
// Store session in Redis
const session = require('express-session');
const RedisStore = require('connect-redis')(session);

app.use(session({
  store: new RedisStore({ client: redisClient }),
  secret: process.env.SESSION_SECRET
}));
Violations:
// Storing data in memory (wrong)
const userSessions = {};
app.post('/login', (req, res) => {
  userSessions[req.body.userId] = { loggedIn: true };
});
Why It Matters:
Stateless applications can scale horizontally. If state is stored in memory, adding a server causes data inconsistency.
This is why Kubernetes can restart Pods anytime without affecting service—because Pods don't store important data.
7. Port Binding
Export services via port binding
Principle Explained:
Applications should bind to a port to provide services externally, not rely on external Web Servers (like Apache, Nginx) to handle requests.
Correct Approach:
const express = require('express');
const app = express();
const port = process.env.PORT || 3000;

app.listen(port, () => {
  console.log(`Server listening on port ${port}`);
});
Violations:
- Code assumes Apache or Nginx is always in front
- No definition of how to expose services externally
Why It Matters:
In container environments, each container is an independent unit. It should know how to provide services externally without relying on external configuration. This allows applications to be used as "backing services" for other applications.
8. Concurrency
Scale out via the process model
Principle Explained:
Applications should handle more load by increasing process count, not making a single process larger (adding more CPU, RAM).
Scaling Model:
┌─────────────────────────────────────┐
│ Web (3 instances)                   │
│   ├─ web.1   ├─ web.2   ├─ web.3    │
├─────────────────────────────────────┤
│ Worker (2 instances)                │
│   ├─ worker.1   ├─ worker.2         │
├─────────────────────────────────────┤
│ Clock (1 instance)                  │
│   └─ clock.1                        │
└─────────────────────────────────────┘
Violations:
- Using a single process to handle everything
- Using multi-threading to scale, but can't span machines
- Assuming only one instance will ever run
Why It Matters:
Horizontal scaling is more flexible and economical than vertical scaling. Kubernetes HPA (Horizontal Pod Autoscaler) is based on this concept—automatically increasing Pod count when traffic is high.
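As one illustration of how this maps onto Kubernetes, an HPA manifest for a hypothetical Deployment named `web` might look like the following sketch (the name, replica range, and threshold are all illustrative, not prescriptive):

```yaml
# Hypothetical HPA: Kubernetes adds or removes Pods of the "web"
# Deployment to keep average CPU utilization around 70%.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```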

9. Disposability
Fast startup, graceful shutdown
Principle Explained:
Applications should start in seconds and shut down gracefully. This allows systems to scale quickly, deploy quickly, and recover quickly.
Correct Approach:
// Handle shutdown signal
process.on('SIGTERM', async () => {
  console.log('Received SIGTERM, shutting down gracefully');
  // Stop accepting new requests and wait for the listener to close
  await new Promise((resolve) => server.close(resolve));
  // Complete pending requests
  await finishPendingRequests();
  // Close database connections
  await db.close();
  process.exit(0);
});
Violations:
- Startup takes several minutes
- Killing process on shutdown without handling pending requests
- Not handling SIGTERM signal
Why It Matters:
Kubernetes sends SIGTERM to containers during redeployment or Pod scaling. If containers don't shut down gracefully, pending requests fail and users see errors.
10. Dev/Prod Parity
Keep development, staging, and production as similar as possible
Principle Explained:
Differences between development and production environments should be minimal. This includes:
- Time gap: Shorter time from development to deployment is better
- Personnel gap: Developers should also participate in deployment
- Tools gap: Development and production use the same tech stack
Correct Approach:
# docker-compose.yml for local development
services:
  db:
    image: postgres:15   # Same as production
  redis:
    image: redis:7       # Same as production
  app:
    build: .
    environment:
      - DATABASE_URL=postgres://...
Violations:
- Using SQLite for development, PostgreSQL for production
- Using Windows for development, Linux for production
- Development environment several versions behind production
Why It Matters:
Environment differences are the source of "works on my machine" problems. Docker and container technology have made maintaining environment consistency much easier.
11. Logs
Treat logs as event streams
Principle Explained:
Applications should not manage their own log files. They should write logs to standard output (stdout) and let the execution environment decide how to collect and store them.
Correct Approach:
// Write to stdout
console.log(JSON.stringify({
  timestamp: new Date().toISOString(),
  level: 'info',
  message: 'User logged in',
  userId: user.id
}));
Violations:
// Writing to file (wrong)
const fs = require('fs');
fs.appendFileSync('/var/log/app.log', 'User logged in\n');
Why It Matters:
In container environments, containers are ephemeral. Writing logs to files inside containers means logs disappear when containers are deleted.
Logs written to stdout can be collected by Kubernetes and sent to centralized logging systems like Elasticsearch or CloudWatch. This allows searching and analyzing logs across all instances.
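The log calls shown above can be wrapped in a tiny helper so every line has the same shape. A minimal sketch (`logLine` is a hypothetical helper; real projects often use a logging library such as pino or winston instead):

```javascript
// Minimal structured logger: one JSON object per line on stdout, so
// log collectors can parse fields without custom regexes.
function logLine(level, message, fields = {}) {
  const entry = {
    timestamp: new Date().toISOString(),
    level,
    message,
    ...fields,
  };
  console.log(JSON.stringify(entry));
  return entry; // returned only to make the helper easy to test
}
```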
12. Admin Processes
Run admin/management tasks as one-off processes
Principle Explained:
Database migrations, data cleanup, maintenance scripts, and other admin tasks should run in the same environment as the application, using the same code and configuration.
Correct Approach:
# Run the database migration as a one-off pod with the
# same image and config as the app itself
kubectl run db-migrate \
  --image=myapp:v1.2.3 \
  --env="DATABASE_URL=$DATABASE_URL" \
  --restart=Never \
  --command -- npm run migrate
Violations:
- SSH directly into production servers to run scripts
- Running admin tasks with different code versions
- Admin scripts not in version control
Why It Matters:
Admin tasks should see the same environment variables, database, and configuration as the application itself. Running them in a different environment, or against a different code version, can cause subtle failures due to version inconsistency.
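In Kubernetes, the same idea is often expressed as a Job manifest rather than `kubectl run`. A hedged sketch, where `myapp-config` is a hypothetical ConfigMap holding the app's environment:

```yaml
# Hypothetical Job running the migration with the SAME image and
# config as the app, so the task sees exactly what the app sees.
apiVersion: batch/v1
kind: Job
metadata:
  name: db-migrate
spec:
  backoffLimit: 2
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: myapp:v1.2.3
          command: ["npm", "run", "migrate"]
          envFrom:
            - configMapRef:
                name: myapp-config
```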

15 Factor Extended Version
As Cloud Native evolved, the community found the original 12 Factor needed supplementation with some modern practices. Here are 3 commonly mentioned additional principles:
13. API First
Principle Explained:
Design with API interfaces first. Before writing implementation code, define the API contract (OpenAPI, GraphQL Schema).
Why This Principle is Needed:
- In microservices architecture, services communicate via APIs
- Frontend-backend separation requires clear API specifications
- API contracts can auto-generate documentation and SDKs
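As a sketch of what "contract first" looks like in practice, here is a hypothetical OpenAPI fragment for a single endpoint, written before any implementation code exists (all names are illustrative):

```yaml
# Hypothetical contract: frontend and backend teams can code against
# this in parallel, and docs/SDKs can be generated from it.
openapi: 3.0.3
info:
  title: My App API
  version: 1.0.0
paths:
  /users/{id}:
    get:
      summary: Get a user by ID
      parameters:
        - name: id
          in: path
          required: true
          schema:
            type: string
      responses:
        "200":
          description: The requested user
          content:
            application/json:
              schema:
                type: object
                properties:
                  id: { type: string }
                  name: { type: string }
```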
14. Telemetry
Principle Explained:
Applications should have built-in observability capabilities:
- Metrics: Performance indicators (request count, latency, error rate)
- Tracing: Distributed tracing (which services a request passes through)
- Logging: Structured logs
Why This Principle is Needed:
In microservices architecture, traditional debugging methods no longer work. You need to trace the complete path of a request across services.
OpenTelemetry is the current CNCF-promoted standard, handling metrics, tracing, and logging together.
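To make the metrics idea concrete without pulling in a real SDK, here is a hand-rolled sketch of a per-route request counter with average latency. This is illustration only; in production you would use the OpenTelemetry SDK rather than code like this:

```javascript
// Toy metrics registry: counts requests and records latencies per
// route, roughly what a metrics instrument tracks under the hood.
class Metrics {
  constructor() {
    this.counters = new Map();
    this.latencies = new Map();
  }
  recordRequest(route, ms) {
    this.counters.set(route, (this.counters.get(route) || 0) + 1);
    const list = this.latencies.get(route) || [];
    list.push(ms);
    this.latencies.set(route, list);
  }
  snapshot(route) {
    const samples = this.latencies.get(route) || [];
    const avg = samples.length
      ? samples.reduce((a, b) => a + b, 0) / samples.length
      : 0;
    return { route, count: this.counters.get(route) || 0, avgLatencyMs: avg };
  }
}
```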
15. Security
Principle Explained:
Security should be built into the application, not added as an afterthought. This includes:
- Dependency security scanning
- Secret management (not storing passwords in plaintext environment variables)
- Least privilege principle
- Container image security scanning
Why This Principle is Needed:
Traditional perimeter security (firewalls, WAF) isn't enough in Cloud Native environments. Zero trust architecture requires every service to authenticate and authorize.
Want to learn more about Cloud Native security? See Cloud Native Security Complete Guide.
12 Factor Implementation Example
Here's a Node.js application demonstrating how to follow 12 Factor:
Project Structure:
my-app/
├── src/
│   ├── index.js
│   ├── config.js
│   └── routes/
├── package.json
├── Dockerfile
├── docker-compose.yml
└── .env.example
config.js (Principle 3: Config):
module.exports = {
  port: process.env.PORT || 3000,
  database: {
    url: process.env.DATABASE_URL,
  },
  redis: {
    url: process.env.REDIS_URL,
  },
  logLevel: process.env.LOG_LEVEL || 'info',
};
Dockerfile (Principles 2, 5: Dependencies, Build):
FROM node:20-alpine
WORKDIR /app
# Explicitly declare dependencies
COPY package*.json ./
RUN npm ci --omit=dev
COPY src/ ./src/
# Inject config via environment variables
ENV NODE_ENV=production
EXPOSE 3000
CMD ["node", "src/index.js"]
index.js (Principles 7, 9, 11: Port Binding, Disposability, Logs):
const express = require('express');
const config = require('./config');

const app = express();

// Principle 7: Port Binding
const server = app.listen(config.port, () => {
  // Principle 11: Logs to stdout
  console.log(JSON.stringify({
    level: 'info',
    message: `Server started on port ${config.port}`,
    timestamp: new Date().toISOString()
  }));
});

// Principle 9: Disposability - Graceful shutdown
process.on('SIGTERM', () => {
  console.log(JSON.stringify({
    level: 'info',
    message: 'SIGTERM received, shutting down gracefully'
  }));
  server.close(() => {
    console.log(JSON.stringify({
      level: 'info',
      message: 'Server closed'
    }));
    process.exit(0);
  });
});
How does Spring Boot implement 12 Factor? See Cloud Native Java Development Guide.
12 Factor Quick Reference Table
| # | Principle | One Sentence | Checklist |
|---|---|---|---|
| 1 | Codebase | One codebase, multiple deploys | Code in Git? Only one copy? |
| 2 | Dependencies | Explicitly declare dependencies | package.json complete? |
| 3 | Config | Config in environment variables | No hardcoded passwords? |
| 4 | Backing Services | Services are swappable | DB switchable without code changes? |
| 5 | Build, Release, Run | Separate stages | Have CI/CD? Version numbers? |
| 6 | Processes | Stateless | Session stored externally? |
| 7 | Port Binding | Bind your own port | No Apache/Nginx dependency? |
| 8 | Concurrency | Horizontal scaling | Can run multiple instances? |
| 9 | Disposability | Fast start, graceful stop | Handle SIGTERM? |
| 10 | Dev/Prod Parity | Environment consistency | Use Docker for development? |
| 11 | Logs | Write to stdout | Not writing to files? |
| 12 | Admin Processes | One-off tasks | Migration scripts in version control? |
FAQ
Q1: Is 12 Factor mandatory or just recommendations?
12 Factor is a set of recommended best practices, not mandatory requirements. But if you want to run applications smoothly in Cloud Native environments like Kubernetes, following these principles makes things much easier.
Q2: Do I have to follow all of them to be compliant?
Not necessarily. Depending on your application needs, some principles may not apply. But the core ones (Config, Stateless Processes, Logs) are almost essential requirements for Cloud Native.
Q3: What's the relationship between 12 Factor and microservices?
12 Factor was originally designed for monolithic SaaS applications, but its principles equally apply to microservices. In fact, microservices architecture needs these principles even more because there are more services and more complex environments.
Q4: Can I apply 12 Factor to my legacy system?
You can adopt incrementally. Usually starting with config externalization (Principle 3) and log standardization (Principle 11) is easiest, then handling statelessness and dependency management.
Q5: Is 15 Factor an official standard?
No. 15 Factor is a community-suggested extension based on Cloud Native evolution, without official certification. But API First, Telemetry, and Security are indeed important considerations for modern cloud applications.
Next Steps
12 Factor is the foundation of Cloud Native architecture. After mastering these principles, you can explore further:
- Return to core concepts: Cloud Native Complete Guide
- Learn Kubernetes practices: Cloud Native Tech Stack Guide
- Learn Java Cloud Native development: Cloud Native Java Development Guide
- Strengthen security: Cloud Native Security Complete Guide
Need a second opinion on architecture design? 12 Factor looks simple, but implementing it in existing systems isn't easy. Schedule architecture consultation and let experienced experts help you evaluate current status and plan improvement paths.