Nithin Bharadwaj
**Java Virtual Threads and Structured Concurrency: 5 Game-Changing Techniques for Modern Development**


For years, writing concurrent programs in Java felt like a delicate balancing act. We had powerful tools, but using them efficiently meant constantly worrying about scarce resources. The most precious resource was the platform thread itself—heavyweight, expensive to create, and limited in number. We built elaborate thread pools and embraced complex asynchronous styles, not necessarily because they were easier to understand, but because we had to. Something had to give.

Now, the landscape is changing. Two new ideas are reshaping how we think about concurrency in Java: virtual threads and structured concurrency. These are not just incremental improvements. They represent a fundamental shift towards a model that is both simpler for developers and more efficient for the system. I want to walk you through five practical techniques using these new tools, showing how they turn previous complexities into straightforward code.

Let's start with the first technique: treating threads as an abundant resource.

This is the core promise of virtual threads. Think of a platform thread as a full-sized, dedicated delivery truck. It's powerful, but maintaining a fleet of thousands of them is impractical. A virtual thread is like a package on that truck. You can have millions of packages, but they all share a much smaller fleet of trucks. When a package needs to wait at a loading dock (a blocking operation like a database call), it's set aside so the truck can carry other packages in the meantime.

This changes everything. The old "thread-per-request" model, which was simple but notoriously inefficient with platform threads, becomes not only viable but desirable again. You can write clear, blocking-style code without dooming your application to resource exhaustion.

// The Old Way: A tense balancing act with a fixed pool.
ExecutorService oldExecutor = Executors.newFixedThreadPool(200);
for (int i = 0; i < 10_000; i++) {
    oldExecutor.submit(() -> {
        // Each call to process() holds a precious platform thread hostage.
        processHttpRequest();
    });
}
// If 201 requests come at once, the 201st waits, even if the CPU is idle.

// The New Way: A virtual thread for every single task.
try (var newExecutor = Executors.newVirtualThreadPerTaskExecutor()) {
    for (int i = 0; i < 10_000; i++) {
        newExecutor.submit(() -> {
            // processHttpRequest() still blocks, but only the virtual thread.
            // The carrier thread (the truck) is freed to run other virtual threads.
            processHttpRequest();
        });
    }
}
// All 10,000 requests can be served concurrently, limited only by system resources.
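To get a feel for the scale this enables, here is a small, self-contained sketch (the class name and task count are mine, not from the article): each task blocks for 100 milliseconds, yet thousands of them finish in roughly the time one takes, because sleeping virtual threads release their carriers.

```java
import java.time.Duration;
import java.util.concurrent.Executors;
import java.util.stream.IntStream;

public class VirtualThreadDemo {

    // Submits 'count' tasks, each blocking for 100 ms, and returns elapsed wall time.
    static long runBlockingTasks(int count) {
        long start = System.nanoTime();
        try (var executor = Executors.newVirtualThreadPerTaskExecutor()) {
            IntStream.range(0, count).forEach(i ->
                executor.submit(() -> {
                    Thread.sleep(Duration.ofMillis(100)); // blocks only the virtual thread
                    return i;
                }));
        } // close() waits for all submitted tasks to complete
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsedMs = runBlockingTasks(100_000);
        System.out.println("100,000 blocking tasks finished in " + elapsedMs + " ms");
    }
}
```

Run sequentially, 100,000 sleeps of 100 ms would take nearly three hours; here they complete in seconds, limited mostly by scheduling overhead and memory.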

You can tailor these virtual threads just like platform threads.

// Giving your virtual threads clear names helps in debugging.
ExecutorService namedExecutor = Executors.newThreadPerTaskExecutor(
    Thread.ofVirtual()
        .name("web-request-", 0)  // Creates threads named web-request-0, web-request-1...
        .factory()
);

// Building a custom factory for more control.
ThreadFactory carefulFactory = Thread.ofVirtual()
    .name("data-pipeline-", 0)
    .inheritInheritableThreadLocals(false) // Often a good practice to avoid accidental leaks.
    .uncaughtExceptionHandler((thread, exception) -> {
        System.err.printf("Thread %s failed: %s%n", thread.getName(), exception.getMessage());
    })
    .factory();
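A factory like this plugs straight into a per-task executor. A minimal sketch (the FactoryDemo class is mine, for illustration) showing the configured names in action:

```java
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.ThreadFactory;

public class FactoryDemo {
    public static void main(String[] args) throws Exception {
        // Counter starts at 0, so threads are named data-pipeline-0, data-pipeline-1, ...
        ThreadFactory factory = Thread.ofVirtual().name("data-pipeline-", 0).factory();

        try (var executor = Executors.newThreadPerTaskExecutor(factory)) {
            Future<String> name = executor.submit(() -> Thread.currentThread().getName());
            System.out.println(name.get()); // prints data-pipeline-0
        }
    }
}
```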

The second technique involves structuring your concurrent work as a single, manageable unit.

This is where structured concurrency comes in. Before, launching multiple concurrent tasks felt like releasing balloons into the sky—you hoped they’d come back, but you had to manually track each one. Structured concurrency gives you a box for those balloons. The rule is simple: no subtask can outlive the scope that created it. This prevents resource leaks and makes reasoning about your code much easier.

The StructuredTaskScope is that box. You create a scope, fork tasks inside it, and then wait for them all to finish. The scope’s lifecycle ensures everything is cleaned up.

// This pattern is becoming as familiar as try-with-resources.
try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {

    // Fork three independent tasks to run concurrently.
    Future<String> userTask = scope.fork(() -> fetchUserFromService(userId));
    Future<List<Order>> ordersTask = scope.fork(() -> fetchUserOrders(userId));
    Future<Preferences> prefsTask = scope.fork(() -> fetchUserPreferences(userId));

    // Wait here for ALL forked tasks to complete (or fail).
    scope.join();
    // If any task failed, throw its exception here.
    scope.throwIfFailed();

    // At this point, we know all tasks are definitively done.
    String user = userTask.resultNow();
    List<Order> orders = ordersTask.resultNow();
    Preferences prefs = prefsTask.resultNow();

    return new Dashboard(user, orders, prefs);
} // The scope closes here, guaranteeing no lingering threads.

Different problems call for different scoping policies. Need the first successful result from multiple sources?

try (var scope = new StructuredTaskScope.ShutdownOnSuccess<String>()) {
    scope.fork(() -> callPrimaryAPI(id));
    scope.fork(() -> callBackupAPI(id));
    scope.fork(() -> getFromLocalCache(id));

    scope.join(); // The scope will shut down as soon as one task succeeds.

    // Get the result of the first successful task.
    return scope.result();
}

The third technique is perhaps the most liberating: you don't have to rewrite your blocking code.

A major hurdle with older asynchronous models was "async contagion"—to benefit from it, you often had to rewrite your entire call chain to use CompletableFuture or similar. Virtual threads remove this pressure. Since blocking is now cheap, your existing synchronous code is automatically efficient.

// Your classic, straightforward blocking HTTP call is now perfectly fine.
public String fetchPage(String url) throws IOException, InterruptedException {
    HttpClient client = HttpClient.newHttpClient();
    HttpRequest request = HttpRequest.newBuilder().uri(URI.create(url)).build();

    // This line 'blocks'. With a platform thread, that was costly.
    // With a virtual thread, it just yields control.
    HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

    return response.body();
}

// Your standard JDBC query works exactly as it always did.
public Customer findCustomer(Long id) {
    String sql = "SELECT * FROM customer WHERE id = ?";
    // The virtual thread will yield during the database round-trip.
    return jdbcTemplate.queryForObject(sql, new CustomerRowMapper(), id);
}

Combine this with structured concurrency for clean, concurrent data aggregation.

public Profile assembleUserProfile(String userId) throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        Future<Account> accountFuture = scope.fork(() -> accountRepo.findByUserId(userId));
        Future<List<Message>> messagesFuture = scope.fork(() -> messageService.getInbox(userId));
        Future<Settings> settingsFuture = scope.fork(() -> settingsManager.load(userId));

        scope.join();
        scope.throwIfFailed(); // One failure fails the whole operation.

        // All fetches happened concurrently, but the code reads sequentially.
        return new Profile(
            accountFuture.resultNow(),
            messagesFuture.resultNow(),
            settingsFuture.resultNow()
        );
    }
}

The fourth technique addresses a hidden cost: rethinking thread-local storage.

ThreadLocal variables were a common way to pass context (like user authentication). With only a few hundred platform threads, the memory footprint was manageable. With millions of virtual threads, each holding onto a ThreadLocal map, it can become a serious problem. The new tool for this job is ScopedValue.

A ScopedValue is set once for the dynamic scope of a specific block of code and is automatically cleared when that block ends. It’s immutable and designed for one-way sharing of data from a parent to its child threads.

// The old, potentially hazardous way with ThreadLocal.
private static final ThreadLocal<User> CURRENT_USER = new ThreadLocal<>();

void handleRequest(Request req) {
    CURRENT_USER.set(req.getAuthenticatedUser());
    try {
        process(req);
    } finally {
        CURRENT_USER.remove(); // FORGET THIS, AND YOU LEAK MEMORY.
    }
}

// The new, scoped way. The binding is clear and automatic.
private static final ScopedValue<User> SCOPED_USER = ScopedValue.newInstance();
private static final ScopedValue<String> TRACE_ID = ScopedValue.newInstance();

void handleRequest(Request req) {
    // The SCOPED_USER is bound only for the execution of the run() method.
    ScopedValue.where(SCOPED_USER, req.getAuthenticatedUser())
               .run(() -> process(req));
    // No cleanup required.
}

// You can bind multiple values and nest scopes.
void complexOperation(User user) {
    ScopedValue.where(SCOPED_USER, user)
               .where(TRACE_ID, generateId())
               .run(() -> {
                   // Both values are readable here.
                   doStepOne();
                   // Nested scope can override a value.
                   ScopedValue.where(TRACE_ID, "subtask-" + TRACE_ID.get())
                              .run(() -> doStepTwo());
               });
}
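The automatic cleanup is easy to verify: the binding exists only for the duration of run(). A minimal sketch (class and value names are mine; ScopedValue needs --enable-preview on JDK 21 through 24, and is final from JDK 25):

```java
public class ScopedValueDemo {
    private static final ScopedValue<String> USER = ScopedValue.newInstance();

    public static void main(String[] args) {
        ScopedValue.where(USER, "alice")
                   .run(() -> System.out.println("inside run(): " + USER.get()));

        // Outside run() the binding is gone -- no remove() call required.
        System.out.println("bound afterwards? " + USER.isBound()); // prints false
    }
}
```

Calling USER.get() outside a binding would throw NoSuchElementException, which is why isBound() exists as a guard.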

Using them within structured concurrency is natural, as the forked tasks are children of the scope.

// Bind the context first, then create the scope inside the binding.
// Forking after binding a new scoped value around an already-created
// scope would throw StructureViolationException, since the binding
// would have to close before the scope nested inside it.
ScopedValue.where(REQUEST_ID, requestId)
           .run(() -> {
               try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
                   scope.fork(() -> validateInput()); // Can read REQUEST_ID
                   scope.fork(() -> auditRequest());  // Can read REQUEST_ID
                   scope.join();
               } catch (InterruptedException e) {
                   Thread.currentThread().interrupt();
               }
           });

The fifth technique simplifies a traditionally tricky area: coordinated error handling and cancellation.

With unstructured concurrency, cancelling a group of tasks and ensuring proper cleanup was error-prone. Structured concurrency makes this intrinsic. Cancelling the scope cancels all its subtasks. Timeouts apply to the entire operation, not just pieces of it.

// A timeout for the whole concurrent fetch.
public CombinedData fetchWithTimeout(String id, Duration timeout) throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
        Future<DataA> futureA = scope.fork(() -> fetchServiceA(id));
        Future<DataB> futureB = scope.fork(() -> fetchServiceB(id));

        // Wait for both, but only for the given duration.
        // joinUntil throws TimeoutException itself if the deadline passes;
        // closing the scope then interrupts any subtasks still running.
        scope.joinUntil(Instant.now().plus(timeout));

        // If we reach this line, both subtasks completed. Check for failures.
        scope.throwIfFailed();
        return new CombinedData(futureA.resultNow(), futureB.resultNow());
    }
}

You can build robust patterns like "fetch from primary, with a concurrent fallback."

public ProductInfo getProductInfo(String sku) throws Exception {
    try (var scope = new StructuredTaskScope.ShutdownOnSuccess<ProductInfo>()) {

        scope.fork(() -> primaryService.get(sku));
        scope.fork(() -> fallbackCache.get(sku));

        scope.join(); // Will shut down once either task succeeds.

        // result() returns the value from whichever task succeeded first.
        // If no task succeeded, it throws an ExecutionException wrapping
        // one of the failures -- meaning both sources failed.
        return scope.result();
    }
}

The cleanup is automatic. If a subtask opens a resource, the standard try-with-resources mechanism works in harmony with scope cancellation.

try (var scope = new StructuredTaskScope.ShutdownOnFailure()) {
    Future<DatabaseConnection> connFuture = scope.fork(() -> Database.connect());

    Future<QueryResult> resultFuture = scope.fork(() -> {
        // This will throw if connFuture was cancelled.
        try (DatabaseConnection conn = connFuture.get()) {
            return conn.executeQuery("SELECT ...");
        }
    });

    scope.join();
    scope.throwIfFailed(); // Surface a failed fork before reading results.
    return resultFuture.resultNow();
}
// If the scope is shut down early, both forks are interrupted.
// The try-with-resources in the second fork ensures the connection is closed.

What does this all mean in practice? It means we can design systems that are closer to how we think about problems. We can spawn a task whenever we have independent work, without fear. We can group related tasks, knowing their lifecycle is managed as one. We can write plain, blocking code for I/O operations and still achieve high throughput.

This shift brings concurrency back to its essential purpose: modeling parallel activities in a way that is natural, safe, and efficient. The complexity hasn't vanished, but it has been moved—from our application code into the runtime, where it belongs. We get to write simpler, more reliable programs, and the JVM handles the hard part of scheduling millions of lightweight threads. It feels less like managing a complex machine and more like describing the work that needs to be done.
