If you're managing an on-premises Kubernetes cluster, you've probably wondered how to control who can do what in your environment. Maybe you've had that moment of panic thinking, "Wait, can any developer delete our production pods?" Well, that's where RBAC comes in, and I'm here to walk you through it step by step.
Why Do We Even Need RBAC?
Let's start with the basics. Imagine your Kubernetes cluster is like a big apartment building. Without proper access control, it's like giving everyone a master key to every apartment, the maintenance room, and the roof. Chaos, right?
RBAC (Role-Based Access Control) is your building's security system. It ensures that:
- Developers can deploy applications but can't mess with cluster-wide settings
- Your monitoring team can read metrics but not delete deployments
- Service accounts for applications have just enough permissions to function
- You can sleep peacefully knowing someone won't accidentally wipe out your QA deployments or UAT environment
The benefits are pretty compelling:
- Granular control: Define exactly what each user or service can access
- Reduced blast radius: Mistakes or compromised credentials cause limited damage
- Compliance: Meet security requirements for audit trails and access control
- Multi-tenancy: Safely run multiple teams or applications in the same cluster
Understanding the Building Blocks
RBAC in Kubernetes has four main components. Think of them as different pieces of a puzzle that fit together:
1. Role (Namespace-scoped)
A Role defines what actions can be performed on which resources within a specific namespace. It's like saying "in apartment 3B, you can cook and clean, but not redecorate."
2. RoleBinding (Namespace-scoped)
This connects users or service accounts to a Role. It's the actual key handover: "Hey Alice, here's access to apartment 3B with the permissions we defined."
3. ClusterRole (Cluster-wide)
Like a Role, but works across the entire cluster or on cluster-scoped resources (like nodes). Think of it as building-wide permissions.
4. ClusterRoleBinding (Cluster-wide)
Grants the permissions defined in a ClusterRole across the entire cluster.
How they work together: You create a Role/ClusterRole defining permissions → You create a RoleBinding/ClusterRoleBinding linking those permissions to specific users → Kubernetes enforces these rules when users try to perform actions.
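To make that flow concrete, here's a quick imperative sketch you could try in a scratch namespace. The names pod-reader and alice are purely illustrative, and kubectl create role / create rolebinding are just shortcuts for the YAML we'll write by hand later:
# Hypothetical role and user, created imperatively for a quick test
kubectl create role pod-reader --verb=get --verb=list --resource=pods -n default
kubectl create rolebinding pod-reader-binding --role=pod-reader --user=alice -n default
# Kubernetes now enforces the rule whenever "alice" acts in that namespace
kubectl auth can-i list pods -n default --as alice     # yes
kubectl auth can-i delete pods -n default --as alice   # no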
What's Already There? Default RBAC Configuration
When you spin up a fresh Kubernetes cluster, it comes with some pre-configured RBAC settings. Let's peek under the hood:
# Check out the default ClusterRoles
kubectl get clusterroles
# View a specific default role
kubectl describe clusterrole view
You'll see several built-in ClusterRoles:
- cluster-admin: Full control over everything (the master key!)
- admin: Full access to resources in a namespace
- edit: Can modify most resources in a namespace
- view: Read-only access to most resources
These are great starting points, but here's the thing: the defaults are often too permissive for production use. You'll want to create custom roles tailored to your organization's needs.
Let's check what your current user can do:
kubectl auth can-i create deployments
kubectl auth can-i delete nodes
kubectl auth can-i get pods --namespace kube-system
Let's Get Hands-On: Creating Custom Roles
Alright, time to create something useful! Let's build a role for a developer who needs to manage pods and services in the development namespace.
Step 1: Create the Namespace
kubectl create namespace development
Step 2: Define a Custom Role
Create a file called developer-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: pod-manager
rules:
# Allow managing pods
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch", "create", "update", "delete"]
# Allow managing services
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "create", "update", "delete"]
# Allow reading config maps (but not editing)
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["get", "list"]
What's happening here?
- apiGroups: Core Kubernetes resources use "" (the empty string), while other resources use their API group (see the sketch just below for a non-core example)
- resources: The types of objects this role can access
- verbs: The actions allowed (think HTTP methods: get, list, create, delete, etc.)
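For comparison, if this role also needed to read Deployments (which live in the apps API group rather than the core group), the extra rule might look roughly like this, purely as an illustration:
# Hypothetical extra rule: Deployments are in the "apps" API group,
# so apiGroups must name it explicitly instead of ""
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]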
Step 3: Create a RoleBinding
Now let's bind this role to a user. Create developer-rolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: pod-manager-binding
  namespace: development
subjects:
# This is the user we're granting permissions to
- kind: User
  name: poem@example.com
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # This references the Role we created above
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
Step 4: Apply the Configurations
kubectl apply -f developer-role.yaml
kubectl apply -f developer-rolebinding.yaml
# Verify they were created
kubectl get roles -n development
kubectl get rolebindings -n development
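A quick sanity check with impersonation (assuming poem@example.com is the user from the binding above) confirms the role behaves as intended:
kubectl auth can-i create pods --as poem@example.com -n development         # yes
kubectl auth can-i delete configmaps --as poem@example.com -n development   # no, configmaps are read-only here
kubectl auth can-i create deployments --as poem@example.com -n development  # no, not covered by the role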
Real-World Example: Read-Only ClusterRole
Let's create a ClusterRole for a monitoring system that needs read access cluster-wide. Create monitoring-clusterrole.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: monitoring-reader
rules:
# Read pods across all namespaces
- apiGroups: [""]
  resources: ["pods", "pods/log", "pods/status"]
  verbs: ["get", "list", "watch"]
# Read nodes
- apiGroups: [""]
  resources: ["nodes", "nodes/metrics", "nodes/stats"]
  verbs: ["get", "list"]
# Read deployments and replica sets
- apiGroups: ["apps"]
  resources: ["deployments", "replicasets", "statefulsets"]
  verbs: ["get", "list", "watch"]
# Read metrics
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list"]
And the binding monitoring-clusterrolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: monitoring-reader-binding
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: monitoring
roleRef:
  kind: ClusterRole
  name: monitoring-reader
  apiGroup: rbac.authorization.k8s.io
Apply them:
kubectl apply -f monitoring-clusterrole.yaml
kubectl apply -f monitoring-clusterrolebinding.yaml
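You can verify this binding the same way, impersonating the service account (this assumes the monitoring namespace and the prometheus ServiceAccount actually exist). Service accounts are addressed as system:serviceaccount:<namespace>:<name>:
kubectl auth can-i list pods --all-namespaces --as system:serviceaccount:monitoring:prometheus   # yes
kubectl auth can-i delete deployments --as system:serviceaccount:monitoring:prometheus           # no, the role is read-only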
Service Accounts: Your Application's Identity
Here's something super important: Service Accounts are how pods authenticate to the Kubernetes API. Every pod runs with a service account (by default, the default service account in its namespace).
Think of service accounts as robot users for your applications. If your app needs to list pods, create services, or read secrets, it uses a service account.
Creating a Service Account
# Create a service account
kubectl create serviceaccount app-deployer -n development
# Or use a YAML manifest
Create app-serviceaccount.yaml:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: app-deployer
  namespace: development
Now create a role for this service account. Create app-deployer-role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: development
  name: deployment-manager
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "create", "update", "patch"]
- apiGroups: [""]
  resources: ["services"]
  verbs: ["get", "list", "create"]
And bind it with app-deployer-rolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployment-manager-binding
  namespace: development
subjects:
- kind: ServiceAccount
  name: app-deployer
  namespace: development
roleRef:
  kind: Role
  name: deployment-manager
  apiGroup: rbac.authorization.k8s.io
Apply everything:
kubectl apply -f app-serviceaccount.yaml
kubectl apply -f app-deployer-role.yaml
kubectl apply -f app-deployer-rolebinding.yaml
Using the Service Account in a Pod
apiVersion: v1
kind: Pod
metadata:
  name: deployer-app
  namespace: development
spec:
  serviceAccountName: app-deployer  # This is the key line!
  containers:
  - name: deployer
    image: my-deployer-app:v1
Now your pod can perform the actions defined in the deployment-manager role!
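Under the hood, the service account's token is auto-mounted into the container (unless automounting is disabled), so the pod can call the API directly. A minimal sketch from inside the container:
# Standard in-cluster paths for the mounted service account credentials
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
CACERT=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
# List deployments in the development namespace using the pod's own identity
curl --cacert $CACERT -H "Authorization: Bearer $TOKEN" \
  https://kubernetes.default.svc/apis/apps/v1/namespaces/development/deployments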
Adding New Users to Your Cluster
This is where it gets interesting. Kubernetes doesn't manage users directly; it relies on external authentication. Let's explore the common approaches:
Option 1: X.509 Client Certificates (Common for Admins)
This is the traditional approach. Here's how to add a user named "rajendra":
# 1. Create a private key
openssl genrsa -out rajendra.key 2048
# 2. Create a certificate signing request
openssl req -new -key rajendra.key -out rajendra.csr -subj "/CN=rajendra/O=developers"
# 3. Create a CertificateSigningRequest in Kubernetes
cat <<EOF | kubectl apply -f -
apiVersion: certificates.k8s.io/v1
kind: CertificateSigningRequest
metadata:
  name: rajendra
spec:
  request: $(cat rajendra.csr | base64 | tr -d '\n')
  signerName: kubernetes.io/kube-apiserver-client
  usages:
  - client auth
EOF
# 4. Approve the certificate
kubectl certificate approve rajendra
# 5. Get the signed certificate
kubectl get csr rajendra -o jsonpath='{.status.certificate}' | base64 -d > rajendra.crt
# 6. Create a kubeconfig for rajendra
kubectl config set-credentials rajendra \
--client-certificate=rajendra.crt \
--client-key=rajendra.key \
--embed-certs=true
kubectl config set-context rajendra-context \
--cluster=your-cluster-name \
--user=rajendra
Now rajendra can use this context to access the cluster (with whatever permissions you grant via RoleBindings).
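A quick way to confirm the new identity works (assuming you've already applied some bindings for rajendra, as we do in a later section):
kubectl config use-context rajendra-context
kubectl auth can-i --list --namespace development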
Option 2: OIDC (OpenID Connect) - Recommended for Production
OIDC integrates with identity providers like Google, Azure AD, or Okta. You configure your API server with OIDC parameters:
# API server flags (simplified example)
--oidc-issuer-url=https://accounts.google.com
--oidc-client-id=your-client-id
--oidc-username-claim=email
--oidc-groups-claim=groups
Users authenticate with your identity provider, get a token, and use it with kubectl. This is way more manageable for organizations!
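The exact kubectl setup depends on your provider (many teams use an exec plugin such as kubelogin to refresh tokens automatically), but the simplest sketch is to hand kubectl an ID token directly. $ID_TOKEN below is just a placeholder for whatever your IdP issues:
# Illustrative only: use the OIDC ID token as a bearer token
kubectl config set-credentials rajendra@example.com --token="$ID_TOKEN"
kubectl config set-context oidc-context --cluster=your-cluster-name --user=rajendra@example.com
kubectl --context=oidc-context get pods -n development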
Option 3: Service Account Tokens (For Automation)
For CI/CD systems or automated tools:
# Create a service account
kubectl create serviceaccount jenkins -n default
# Create a secret for the token (Kubernetes 1.24+)
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: jenkins-token
  namespace: default
  annotations:
    kubernetes.io/service-account.name: jenkins
type: kubernetes.io/service-account-token
EOF
# Get the token
kubectl get secret jenkins-token -n default -o jsonpath='{.data.token}' | base64 -d
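On recent clusters you can also mint a short-lived token with kubectl create token jenkins instead of keeping a long-lived Secret. Either way, the CI system uses the token roughly like this (the cluster and context names are illustrative):
TOKEN=$(kubectl get secret jenkins-token -n default -o jsonpath='{.data.token}' | base64 -d)
kubectl config set-credentials jenkins --token="$TOKEN"
kubectl config set-context jenkins-context --cluster=your-cluster-name --user=jenkins
kubectl --context=jenkins-context auth can-i --list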
Assigning Roles to Users: Putting It All Together
Let's say we added rajendra using certificates. Now let's give him permissions:
Scenario 1: Rajendra needs admin access in the development namespace
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: rajendra-admin
  namespace: development
subjects:
- kind: User
  name: rajendra  # Must match the CN from the certificate
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin  # Using the built-in admin role
  apiGroup: rbac.authorization.k8s.io
Scenario 2: Rajendra needs read-only access cluster-wide
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: rajendra-viewer
subjects:
- kind: User
  name: rajendra
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: view  # Built-in view role
  apiGroup: rbac.authorization.k8s.io
Scenario 3: Group-based permissions
If you're using OIDC with group claims:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developers-access
  namespace: development
subjects:
- kind: Group
  name: developers@example.com  # Email group from your IdP
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-manager
  apiGroup: rbac.authorization.k8s.io
Save each manifest to its own file and apply it, for example:
kubectl apply -f rajendra-admin-binding.yaml
Test Rajendra's access:
# As an admin, check what rajendra can do
kubectl auth can-i get pods --namespace development --as rajendra
kubectl auth can-i delete nodes --as rajendra
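Impersonation works for groups too, which is handy for checking the OIDC group binding from Scenario 3 (some-user below is just a placeholder, since a group impersonation still needs a user), and --list gives you the full picture for an identity:
kubectl auth can-i create pods -n development --as some-user --as-group developers@example.com
kubectl auth can-i --list --namespace development --as rajendra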
Best Practices: Don't Shoot Yourself in the Foot
Let me share some hard-learned lessons:
1. Principle of Least Privilege
Start restrictive, then add permissions as needed. It's much easier than trying to take permissions away later!
# Bad: Too permissive
rules:
- apiGroups: ["*"]
  resources: ["*"]
  verbs: ["*"]

# Good: Specific and limited
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["get", "list"]
2. Use Namespaces for Isolation
# Create separate namespaces for teams/environments
kubectl create namespace team-alpha
kubectl create namespace team-beta
kubectl create namespace production
3. Regular Audits
Set up a monthly review:
# List all RoleBindings and ClusterRoleBindings
kubectl get rolebindings --all-namespaces
kubectl get clusterrolebindings
# Check who has cluster-admin access (should be minimal!)
kubectl get clusterrolebindings -o json | \
jq -r '.items[] | select(.roleRef.name=="cluster-admin") | .metadata.name'
4. Monitor RBAC Changes
Enable audit logging in your API server to track RBAC modifications:
# Audit policy example
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
# Log RBAC changes
- level: RequestResponse
verbs: ["create", "update", "patch", "delete"]
resources:
- group: rbac.authorization.k8s.io
resources: ["roles", "rolebindings", "clusterroles", "clusterrolebindings"]
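The policy only takes effect once the API server is told about it. On a kubeadm-style cluster that means adding flags like these to the kube-apiserver manifest (the file paths here are examples, adjust them to your setup):
--audit-policy-file=/etc/kubernetes/audit-policy.yaml
--audit-log-path=/var/log/kubernetes/audit.log
--audit-log-maxage=30
--audit-log-maxbackup=10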
5. Use Tools to Visualize RBAC
Check out these helpful tools:
# kubectl rbac-tool
kubectl krew install rbac-tool
kubectl rbac-tool whoami
kubectl rbac-tool lookup rajendra
# rakkess (shows what you can do)
kubectl krew install access-matrix
kubectl access-matrix
6. Document Your RBAC Policies
Keep a README in your GitOps repo:
# RBAC Policies
## Roles
- `pod-manager`: Developers managing pods in dev namespace
- `monitoring-reader`: Read-only for Prometheus
## Users
- poem@example.com: pod-manager in development
- rajendra: view access cluster-wide
7. Test Before Production
# Always test with --dry-run first
kubectl apply -f new-role.yaml --dry-run=server
# Test as a specific user
kubectl auth can-i create deployments --as poem@example.com -n development
Security Considerations You Can't Ignore
Watch out for privilege escalation : Don't give users the ability to create or modify RoleBindings unless they're admins. This prevents them from granting themselves more permissions.
# Dangerous! Allows creating any RoleBinding
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["rolebindings"]
  verbs: ["create", "update"]
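Kubernetes RBAC does have built-in escalation prevention (you can't grant permissions you don't already hold unless your role carries the special escalate or bind verbs), but it's still best not to hand out write access to bindings at all. If someone only needs visibility, a read-only sketch like this is much safer:
# Safer: view bindings without being able to change them
rules:
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings"]
  verbs: ["get", "list", "watch"]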
Protect your service account tokens : They're secrets! Treat them accordingly.
Rotate credentials regularly : Especially for service accounts used by CI/CD.
Use Pod Security Standards : Combine RBAC with Pod Security Admission for defense in depth.
Wrapping Up
RBAC might seem daunting at first, but it's absolutely essential for running a secure Kubernetes cluster. Here's what we covered:
✅ Why RBAC matters (security, compliance, peace of mind)
✅ The four core components and how they interact
✅ Creating custom Roles and ClusterRoles with real examples
✅ Working with service accounts for applications
✅ Adding users through different authentication methods
✅ Assigning appropriate permissions
✅ Best practices to keep your cluster locked down
Your action items:
- Audit your current RBAC setup (run those kubectl get commands!)
- Identify overly permissive roles and tighten them up
- Document your RBAC policies
- Set up regular reviews (put it on the calendar!)
- Enable audit logging to track changes
Remember: security is a journey, not a destination. Start with basic RBAC, learn from mistakes, iterate, and continuously improve. Your future self (and your security team) will thank you!
I'd Love to Hear from You!
Thanks for taking the time to read through this guide on Kubernetes RBAC! I hope you found it helpful and practical.
I'd really appreciate your thoughts:
- Did this guide help clarify RBAC concepts for you?
- Were the examples clear and useful for your use case?
- Is there anything you'd like me to explain differently or in more detail?
- What topics would you like to see covered in future posts?
Whether you're just getting started with Kubernetes or you're a seasoned pro with tips to share, your feedback helps make this content better for everyone. Feel free to drop a comment below with your experiences, questions, or suggestions - I read every single one!
If you found this helpful, please consider sharing it with your team or anyone else who might benefit. And if you've discovered any cool RBAC tricks or tools that I didn't mention, I'd love to learn about them too! 🙌
Happy clustering and looking forward to hearing from you!
Got questions? Try things out in a test cluster first. Break things, learn, and then apply those lessons to production. Happy RBACing! 🚀
Want to dive deeper? Check out the official Kubernetes RBAC documentation and consider exploring tools like Open Policy Agent (OPA) for even more sophisticated access control.