What Is a Process?
A program is an executable file on disk.
When the operating system loads that file into memory and starts execution, it becomes a process.
A process has its own:
Memory address space
Registers
Program counter
Stack and heap memory
I/O and system resources
The key property is isolation.
Each process has its own resources, and one process cannot access or interact with another process directly.
Although processes can communicate via IPC (Inter-Process Communication) mechanisms such as pipes, those channels have to be set up intentionally.
Example:
Google Chrome runs each tab in a separate process. If one tab crashes or malfunctions, other tabs keep running. That isolation comes from process boundaries.
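To make the IPC point concrete, here is a minimal sketch (assuming a Unix-like system where the echo command is on the PATH): the parent Go program starts a separate process and reads its output through a pipe wired up by os/exec.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Launch a separate OS process and read its stdout through a pipe.
	// Assumes "echo" exists on the system (standard on Unix-like environments).
	out, err := exec.Command("echo", "hello from another process").Output()
	if err != nil {
		panic(err)
	}
	fmt.Printf("parent received: %s", out)
}
```

Nothing here happens by accident: the pipe exists only because the parent explicitly set it up.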
Cons:
Heavyweight
Expensive context switching
Higher memory usage
On every context switch between processes, the operating system must save registers, memory mappings, and kernel data structures, including the PCB (Process Control Block).
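As a rough illustration of that weight, this sketch (assuming a Unix-like system with the standard true command on the PATH) times how long it takes to create, run, and reap a trivial process:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Time the creation, execution, and reaping of trivial OS processes.
	// "true" is assumed to be on the PATH (standard on Unix-like systems).
	start := time.Now()
	for i := 0; i < 10; i++ {
		if err := exec.Command("true").Run(); err != nil {
			panic(err)
		}
	}
	fmt.Println("average per process:", time.Since(start)/10)
}
```

Even for a command that does nothing, each round trip usually lands in the hundreds-of-microseconds to milliseconds range, far more than a function call inside the same process.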
What Is a Thread?
A thread is the unit of execution inside a process. It's like a subset of the process that runs within the process's boundaries.
Every process has at least one thread, called the main thread. Many processes run multiple threads.
Threads within the same process share the same:
Memory address space
Heap and global variables
However, each thread has its own stack and registers. Because threads share memory, communication between them is fast; there is no need for inter-process communication.
But this introduces risk: any faulty thread can crash the entire process.
Context switching between threads is faster than between processes because:
No memory address space switch
Fewer kernel-level operations
Still, thread switching requires kernel involvement. That overhead adds up in highly concurrent systems.
The Problem With Heavy Threads
OS threads are expensive:
Large stack memory, typically around 1 MB by default (varies by OS)
Kernel managed
Expensive to create and destroy
A blocking system call will block the entire thread
If you spawn thousands of threads, memory usage explodes.
This is where Go changes the model.
What Is a Goroutine?
A goroutine is a lightweight unit of execution managed by the Go runtime, not the operating system.
You start one by prefixing a function call with the go keyword:
go speak("Hello World")
That's it.
The main function itself runs as a goroutine.
How Goroutines Work
Go uses an M:N scheduling model:
M goroutines
N OS threads
M is usually much larger than N, which means the Go runtime scheduler multiplexes many goroutines onto far fewer OS threads.
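A minimal sketch of that ratio using only the standard runtime package: GOMAXPROCS reports how many OS threads may execute Go code in parallel (N, defaulting to the number of logical CPUs), while NumGoroutine reports how many goroutines currently exist (M).

```go
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	// N: how many OS threads may run Go code in parallel (defaults to NumCPU).
	fmt.Println("logical CPUs:", runtime.NumCPU())
	fmt.Println("GOMAXPROCS  :", runtime.GOMAXPROCS(0))

	// M: spawn far more goroutines than threads; the runtime multiplexes them.
	for i := 0; i < 10000; i++ {
		go func() {
			time.Sleep(1 * time.Second)
		}()
	}
	fmt.Println("goroutines  :", runtime.NumGoroutine())
}
```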
On a logical CPU:
One OS thread runs
One goroutine executes at a time
Many runnable goroutines wait in a local queue
When a goroutine blocks on I/O, the runtime:
Parks the blocked goroutine (or the thread, for a blocking system call)
Moves other runnable goroutines from that thread to another thread
Keeps CPU cores busy
This avoids wasting resources.
Why Goroutines Are Lightweight
Initial stack around 2 KB
Stack grows dynamically
Context switching happens in user space
No full kernel switches for goroutine scheduling
You can run a very large number of goroutines in a single program with modest memory usage.
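One rough, unscientific way to see this is to compare how much memory the process has obtained from the OS before and after spawning a large number of parked goroutines; MemStats.Sys is used below as a coarse proxy, so treat the result as an order-of-magnitude figure.

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
)

// memSys reports total bytes of memory obtained from the OS (a coarse proxy).
func memSys() uint64 {
	runtime.GC()
	var m runtime.MemStats
	runtime.ReadMemStats(&m)
	return m.Sys
}

func main() {
	before := memSys()

	const n = 100_000
	var wg sync.WaitGroup
	release := make(chan struct{})

	wg.Add(n)
	for i := 0; i < n; i++ {
		go func() {
			defer wg.Done()
			<-release // park here, keeping the goroutine alive
		}()
	}

	after := memSys()
	fmt.Printf("spawned %d goroutines, roughly %d bytes each\n", n, (after-before)/n)

	close(release)
	wg.Wait()
}
```

On typical systems this prints a few kilobytes per goroutine, in line with the ~2 KB starting stack plus runtime bookkeeping.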
Work Stealing Scheduler
Go uses a work stealing scheduler.
If one logical CPU runs out of work:
It steals runnable goroutines from another CPU's queue
This balances load across cores
If no local work exists:
The scheduler checks a global queue
This keeps CPUs busy without manual tuning.
Blocking and System Calls
When a goroutine performs a blocking system call:
The OS blocks the thread
The runtime detaches the other runnable goroutines from that thread
Another thread takes over their execution
Some operations, like network polling, use dedicated threads. This reduces unnecessary thread parking.
The result:
You write code that looks synchronous.
The runtime handles concurrency and scheduling.
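Here is a small worker-pool sketch that illustrates the point: each worker's loop reads as plain sequential code, while the runtime quietly parks and resumes goroutines as they block on the channels and the simulated blocking work (the time.Sleep is a stand-in for real I/O).

```go
package main

import (
	"fmt"
	"time"
)

// process reads like plain sequential code; when it blocks on a channel
// or the sleep, the runtime simply parks this goroutine and runs another.
func process(id int, jobs <-chan int, results chan<- string) {
	for j := range jobs {
		time.Sleep(100 * time.Millisecond) // stand-in for blocking work (I/O, a system call)
		results <- fmt.Sprintf("worker %d finished job %d", id, j)
	}
}

func main() {
	jobs := make(chan int)
	results := make(chan string)

	// Three workers share six jobs.
	for w := 1; w <= 3; w++ {
		go process(w, jobs, results)
	}

	go func() {
		for j := 1; j <= 6; j++ {
			jobs <- j
		}
		close(jobs)
	}()

	for i := 0; i < 6; i++ {
		fmt.Println(<-results)
	}
}
```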
Fork Join Model in Go
Go follows the fork join idea.
The parent goroutine spawns child goroutines
Child goroutines run concurrently
After execution each child joins back with the parent using synchronization primitives like WaitGroup or channels
Example:
package main

import (
	"fmt"
	"time"
)

func speak(msg string) {
	fmt.Println(msg)
}

func main() {
	go speak("Hello World")     // runs concurrently with main
	time.Sleep(1 * time.Second) // crude wait so the goroutine gets a chance to run
}
If main exits early, the program terminates before the child goroutine finishes. You must coordinate execution.
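A common fix is sync.WaitGroup (mentioned above), which makes the join explicit instead of guessing with time.Sleep. A minimal rewrite of the example:

```go
package main

import (
	"fmt"
	"sync"
)

func speak(msg string) {
	fmt.Println(msg)
}

func main() {
	var wg sync.WaitGroup

	wg.Add(1)
	go func() {
		defer wg.Done() // the "join": signal completion to the parent
		speak("Hello World")
	}()

	wg.Wait() // block main until the child goroutine finishes
}
```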
Goroutines can share memory and communicate, but then they must synchronize access using:
Channels
Mutexes
Atomic operations
Ignoring synchronization can lead to race conditions.
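For example, a counter incremented from many goroutines needs protection; the sketch below uses sync.Mutex. With the lock removed, running the program under go run -race would report the data race.

```go
package main

import (
	"fmt"
	"sync"
)

func main() {
	var (
		mu      sync.Mutex
		counter int
		wg      sync.WaitGroup
	)

	for i := 0; i < 1000; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			mu.Lock() // without the lock, counter++ is a data race
			counter++
			mu.Unlock()
		}()
	}

	wg.Wait()
	fmt.Println("counter:", counter) // always 1000 with the mutex; unpredictable without it
}
```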
Quick Comparison
| Feature | Process | Thread | Goroutine |
|---|---|---|---|
| Memory | Own memory space | Shared memory within process | Managed within process memory |
| Isolation | Strong isolation | No isolation within process | No isolation within process |
| Weight | Heavyweight | Lighter than process | Very lightweight |
| Context Switching | Expensive | Faster than process | Very fast, user-space scheduled |
| Management | OS managed | Kernel managed | Go runtime managed |
| Scheduling | OS scheduler | OS scheduler | User-space scheduler (Go runtime) |
| Blocking | Blocks independently | A blocking call blocks the OS thread | Runtime reschedules other goroutines |
| Failure Impact | Safe from other process crashes | One bad thread can crash process | One bad goroutine can crash process |
| Synchronization | IPC required | Shared memory requires sync | Requires proper synchronization |
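The failure-impact row is easy to demonstrate: an unrecovered panic in any goroutine terminates the entire process, as this minimal sketch shows.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	go func() {
		panic("boom") // an unrecovered panic in any goroutine...
	}()

	time.Sleep(100 * time.Millisecond)
	fmt.Println("not reached") // ...takes the whole process down before this runs
}
```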
When To Use What
Use processes when the priority is:
Strong isolation
Security boundaries
Crash containment
Use threads when:
You need shared memory
You operate outside Go
You rely on OS level primitives
Use goroutines when:
You build concurrent services in Go
You need high concurrency with low overhead
You want simple concurrency syntax
If you build network services, background workers, or streaming systems in Go, goroutines give you scale without thread explosion.
Final Takeaway
A process isolates.
A thread executes inside a process.
A goroutine executes inside a thread but is managed by the Go runtime.
Isolation decreases as you move down the stack, and efficiency increases.


