Exploring Modern Concurrency in Java: From Classic Heavyweight Threads to Lightweight Virtual Threads (Java 1 to 22)

Imagine running a bustling restaurant. Back in the day, you might have hired one waiter for every single table. Each customer had their own personal server, even if they were just sitting there sipping water and scrolling through their phone. Sounds pretty wasteful, doesn’t it? Well, that’s exactly how Java’s old school thread model used to work. Every task, no matter how tiny, got its own heavyweight thread (known as a Platform Thread in Java, which maps directly to an OS Thread), hogging system resources even when it wasn’t doing much of anything.

Now, fast forward to today. With the introduction of Virtual Threads in Java 19 and beyond, it’s like your waitstaff has been given superpowers. Suddenly, a single waiter (Platform Thread) can juggle dozens or even hundreds of tables (Virtual Threads) at once, seamlessly switching between customers without breaking a sweat. Tasks that used to bog down your system now flow smoothly, making applications faster, more efficient, and a whole lot easier to build.



But how did we get here?

In this blog, we’ll take you on a journey through the evolution of Java’s concurrency model from the clunky, resource-hungry threads of the ‘90s to the sleek, lightweight virtual threads of Java 22. We’ll dive into the challenges, innovations, and game-changing breakthroughs that have shaped modern Java concurrency. Whether you’re feeling nostalgic about the old days of synchronized blocks or itching to explore Project Loom, there’s something here for everyone.

So, get ready because concurrency in Java has come a long way, and it’s never been more exciting (or efficient) than it is today. Let’s dive in!


Like any of my other blogs, I’m going to split the entire discussion into the following topics, so anyone interested in specific sections can easily navigate to them.

  • Foundations of Concurrency in Java (Java 1.0 – Java 1.4, 1996–2002): Covers the fundamentals of concurrency in Java, introducing the Thread class, Runnable interface, and the synchronized keyword for thread coordination. It explains inter-thread communication using wait(), notify(), and notifyAll(), along with daemon threads, priorities, and ThreadGroup for thread management. It also covers thread-safe collections (Vector, Hashtable) and the volatile keyword for memory visibility, highlighting their role in early concurrency control.

  • The Concurrency Revolution – java.util.concurrent (Java 5, 2004): This section covers the java.util.concurrent package introduced in Java 5, which brought high-level concurrency utilities. It explores the Executor Framework for thread pool management, Callable and Future for asynchronous execution, and concurrent collections like ConcurrentHashMap and CopyOnWriteArrayList for improved performance. It also discusses synchronization primitives like ReentrantLock, Semaphore, CountDownLatch, and atomic variables that enable lock-free thread-safe operations.

  • Fork/Join Framework and Fine-Grained Parallelism (Java 7, 2011): Here we discuss the Fork/Join Framework, which introduced divide-and-conquer parallelism using ForkJoinPool and a work-stealing algorithm for efficient CPU utilization. It explores Phaser, an advanced synchronization mechanism, and try-with-resources for simplified lock management. The chapter also discusses StampedLock for optimized read-write locking and techniques like false sharing prevention to improve multi-threaded performance.

  • Functional Concurrency & Reactive Programming (Java 8-9, 2014-2017): This section covers functional concurrency in Java 8 with Parallel Streams for simplified parallelism and CompletableFuture for non-blocking asynchronous programming. It also discusses ConcurrentHashMap optimizations, replacing segmented locking with lock-free nodes and tree binning for better performance. Java 9 introduced the Flow API, aligning with the Reactive Streams specification, enabling backpressure handling, along with enhancements to CompletableFuture and clarifications in the Java Memory Model (JMM).

  • Subtle Enhancements and Preparations for Modern Concurrency (Java 10-18, 2018-2022): Covers the subtle but essential concurrency enhancements from Java 10 to 18, focusing on performance tuning, JVM optimizations, and preparations for modern concurrency. It includes refinements in ForkJoinPool scheduling, Thread-Local Handshakes (JEP 312), lock elision techniques, and NUMA-aware memory allocation for better multi-threaded performance. Java 17 improved garbage collection behavior for concurrent workloads, while Java 18 laid the groundwork for Virtual Threads in Project Loom, ensuring efficient thread management in future Java versions.

  • Virtual Threads and Structured Concurrency (Java 19-22, 2022-2024): The introduction and stabilization of Virtual Threads and Structured Concurrency in Java 19-22, enabling lightweight, user-mode threads for efficient task management. It explores Scoped Values as an immutable, structured alternative to ThreadLocal, and improvements in task scheduling, automatic cancellation, and error handling through StructuredTaskScope. Java 21 finalized Virtual Threads (with Structured Concurrency and Scoped Values continuing as preview features), while Java 22 introduced performance tuning, debugging tools, and optimizations in ForkJoinPool, Parallel Streams, and ConcurrentHashMap for seamless virtual thread integration.

1. Foundations of Concurrency in Java (Java 1.0 – Java 1.4, 1996–2002)


While this section covers the fundamental concepts of Java’s traditional threading model, it is important to note that modern concurrency libraries and frameworks have evolved significantly, offering more efficient and scalable solutions. The core principles of threading remain relevant, but in most real-world applications today, we rarely work directly with low-level thread management.

Instead, we leverage high-level abstractions such as the Executor Framework, Fork/Join Pool, CompletableFuture, and Virtual Threads (Project Loom). Given this, we will navigate through this section only briefly: understanding these concepts is certainly valuable, but spending too much time on them may not be as beneficial for modern-day concurrency challenges.


The Thread Class and the Runnable Interface

There are two primary ways to create a thread in Java:

  1. Extending the Thread class
  2. Implementing the Runnable interface (Preferred way)


1. Extending the Thread Class

This approach involves subclassing the Thread class and overriding the run() method. The run() method contains the code that will execute in a separate thread. When start() is called, a new thread is created, and the JVM invokes the run() method of the new thread.

Note: Calling the run() method directly will not create a new thread; only start() can do that.

Extending the Thread Class
package com.scribbledtech.java1to4;

public class ThreadExample extends Thread {
    public void run() {
        System.out.println("Thread " + Thread.currentThread().getName() + " is running...");
    }
    public static void main(String[] args) {
        ThreadExample t1 = new ThreadExample();
        System.out.println("Starting a new thread from thread "+ Thread.currentThread().getName());
        t1.start(); // Starting a new thread

    }

}

The output would look like this:

Console output
Starting a new thread from thread main
Thread Thread-0 is running...
Process finished with exit code 0


Why is this not a preferred way?

Implementing the Runnable interface is generally considered the preferred way to create a thread. Even though the Thread class itself implements the Runnable interface, subclassing Thread is often discouraged for the following reasons:

  • Java supports only single inheritance, meaning if a class extends Thread, it cannot extend any other class.
  • Thread logic is tightly coupled with the thread creation, reducing reusability.


2. Implementing the Runnable Interface (Preferred)


In this approach, we define the thread’s task using the Runnable interface and pass it to a Thread object.

Java
package com.scribbledtech.java1to4;

public class RunnableExample implements Runnable{
    public void run() {
        System.out.println("Thread " + Thread.currentThread().getName() + " is running...");
    }
    public static void main(String[] args) {
        RunnableExample myTask = new RunnableExample(); // Create a Runnable object
        Thread t1 = new Thread(myTask);  // Pass it to a Thread instance
        System.out.println("Starting a new thread from thread "+ Thread.currentThread().getName());
        t1.start(); // Start the thread
        Thread t2 = new Thread(myTask);  // Create another thread with same instance
        System.out.println("Starting a second thread from thread "+ Thread.currentThread().getName());
        t2.start(); // Start the second thread
    }
}

When using the Runnable interface, your class (like RunnableExample) implements Runnable, meaning it provides its own run() method that defines the task to be performed. This approach cleanly separates the task definition from the thread creation process.

The task is defined independently and then passed to a Thread object for execution. This separation of concerns makes the design more flexible and allows for greater reusability, as the same Runnable task can be executed by multiple threads.

The output of the above code snippet would look like below.

Console output
Starting a new thread from thread main
Starting a second thread from thread main
Thread Thread-1 is running...
Thread Thread-0 is running...

Process finished with exit code 0

Just don’t be confused by the order of the print statements. The output demonstrates that the “starting” statements are running on the main thread, while the “running” statements are executing on two different threads. The order appears a bit shuffled because the OS scheduling of threads doesn’t guarantee any specific order of execution.

Thread lifecycle

In Java, a thread moves through several lifecycle states from its creation to termination. The Java Virtual Machine (JVM) and CPU scheduler control these transitions. A thread can be running, waiting, blocked, or terminated, depending on the execution flow. Understanding these states helps in designing efficient multi-threaded applications while avoiding deadlocks, race conditions, and performance bottlenecks.

Let’s look into the states one by one


1. NEW State


A thread is in the NEW state when it has been created but not yet started. It moves to RUNNABLE when start() is called.

NEW state thread
Thread t = new Thread(() -> System.out.println("Thread running..."));
// The thread is still in NEW state because start() is not called.


2. RUNNABLE State

The thread is ready to run and is simply waiting for its turn to use the CPU. It’s in the runnable queue, indicating it’s eligible to be executed, but it’s not necessarily running at that precise moment.

RUNNABLE state thread
Thread t = new Thread(() -> System.out.println("Thread running..."));
t.start();  // Thread moves from NEW -> RUNNABLE


3. BLOCKED State

A thread enters the BLOCKED state when it is not currently eligible to run because it is waiting to acquire a monitor lock, most often while trying to enter a synchronized block or method whose lock is held by another thread. While blocked, a thread cannot perform any work and doesn’t consume CPU cycles; it remains idle until the lock becomes available. Once the thread acquires the lock, it moves back to RUNNABLE.

BLOCKED state thread
class SharedResource {
    synchronized void access() {
        System.out.println(Thread.currentThread().getName() + " accessing...");
        try { Thread.sleep(1000); } catch (InterruptedException ignored) {}
    }
}

public class BlockedExample {
    public static void main(String[] args) {
        SharedResource resource = new SharedResource();

        Thread t1 = new Thread(() -> resource.access());
        Thread t2 = new Thread(() -> resource.access()); // Will be BLOCKED

        t1.start();
        t2.start();  // t2 will be blocked until t1 releases the lock
    }
}


4. WAITING State

A thread enters the WAITING state when it’s waiting indefinitely for another thread to perform a specific action. Threads in this state do not consume CPU resources and remain inactive until explicitly awakened by another thread. This occurs when a thread calls methods such as Object.wait() (with no timeout), Thread.join() (with no timeout), or LockSupport.park(). For example, a thread that has called Object.wait() on an object is waiting for another thread to call Object.notify() or Object.notifyAll() on that object. Similarly, a thread that has called Thread.join() on another thread is waiting for that thread to terminate.

WAITING state thread
class WaitingExample {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        
        Thread t1 = new Thread(() -> {
            synchronized (lock) {
                try {
                    System.out.println("Thread entering WAITING state...");
                    lock.wait(); // Moves to WAITING state
                    System.out.println("Thread resumed!");
                } catch (InterruptedException ignored) {}
            }
        });

        t1.start();
        Thread.sleep(3000); // Simulating delay before notifying

        synchronized (lock) {
            lock.notify(); // Moves t1 back to RUNNABLE
        }
    }
}


5. TIMED_WAITING State

Threads enter the TIMED_WAITING state when they are waiting for a specific amount of time. This state is often used when a thread needs to wait for a resource but doesn’t want to wait indefinitely. It occurs when a thread calls methods such as Thread.sleep(), Object.wait() (with a timeout), Thread.join() (with a timeout), LockSupport.parkNanos(), or LockSupport.parkUntil(). After the specified time elapses, the thread can either proceed if the resource is available or transition to another state.

TIMED_WAITING state thread
Thread t = new Thread(() -> {
    try {
        System.out.println("Sleeping...");
        Thread.sleep(5000); // Moves to TIMED_WAITING
    } catch (InterruptedException ignored) {}
});
t.start();


6. DEAD (TERMINATED) State

A thread enters the DEAD state when it has completed its execution. This means the run() method has finished executing, either by reaching the end of the method or by throwing an uncaught exception. Once a thread is terminated, it cannot be restarted. The resources associated with the thread are typically reclaimed by the operating system or the Java Virtual Machine (JVM). In this state, the thread is no longer active, and it’s essentially “dead”. There’s no way to bring a terminated thread back to life; you would need to create a new thread if you want to perform the same task again.

DEAD state thread
Thread t = new Thread(() -> System.out.println("Thread finished execution."));
t.start();

// The thread will be TERMINATED after execution.


7. RUNNING State

In the RUNNING state, a thread is actively executing its run() method and performing its intended functions. A thread enters this state only when the thread scheduler selects it from the runnable pool and allocates CPU time to it. The CPU rapidly switches between active threads, giving the illusion of simultaneous execution. Ideally, with ‘n’ cores, you might expect up to ‘n’ times the throughput if each core runs a separate thread performing the same job. Note that the JVM does not expose RUNNING as a separate state: Thread.getState() reports a running thread as RUNNABLE.

RUNNING state thread
class RunningExample extends Thread {
    public void run() {
        for (int i = 1; i <= 5; i++) {
            System.out.println(Thread.currentThread().getName() + " is RUNNING: " + i);
            try {
                Thread.sleep(1000); // Moves to TIMED_WAITING for 1 second
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

public class Main {
    public static void main(String[] args) {
        RunningExample t1 = new RunningExample();
        RunningExample t2 = new RunningExample();

        t1.start(); // Moves from NEW → RUNNABLE → RUNNING (when scheduled)
        t2.start(); // Moves from NEW → RUNNABLE → RUNNING (when scheduled)
    }
}
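To tie these states together, here is a minimal, illustrative sketch (not from the original post; the class name StateObserver is made up) that uses Thread.getState() to watch a thread move from NEW through TIMED_WAITING to TERMINATED.

Observing thread states with getState()
public class StateObserver {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try {
                Thread.sleep(500); // moves the thread into TIMED_WAITING
            } catch (InterruptedException ignored) {}
        });

        System.out.println("Before start(): " + t.getState()); // NEW
        t.start();
        Thread.sleep(100); // give t a moment to reach its sleep() call
        System.out.println("While sleeping: " + t.getState()); // TIMED_WAITING
        t.join(); // wait for t to finish
        System.out.println("After join():   " + t.getState()); // TERMINATED
    }
}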


Key Methods of the Thread Class

The Thread class in Java provides essential methods to control and manage the execution of threads. These methods help in creating, pausing, stopping, and synchronizing thread execution, ensuring smooth multi-threaded application behavior. Below is a detailed explanation of each key method, and at the end, there is an example given to demonstrate their usage.

start() – Starting a New Thread

The start() method is used to create a new thread and transition it from the NEW state to RUNNABLE. Unlike calling run() directly, start() ensures that the run() method executes in a separate thread of execution, allowing true parallel processing. Once called, the thread enters the RUNNABLE state, waiting for CPU time.

run() – Defining the Thread’s Task

The run() method contains the logic that a thread executes when it starts running. It is automatically invoked when start() is called. Overriding run() is necessary when extending the Thread class or implementing the Runnable interface. Calling run() directly does not create a new thread but executes the code in the same thread.
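As a quick illustration of the difference (a minimal sketch, not from the original examples), calling run() directly executes the task on the calling thread, while start() hands it to a brand-new thread:

run() vs start()
Thread t = new Thread(() ->
        System.out.println("Task executed on: " + Thread.currentThread().getName()));

t.run();   // No new thread is created: the task runs on "main"
t.start(); // A new thread is created: the task runs on something like "Thread-0"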


interrupt() – Stopping a Thread Gracefully

The interrupt() method signals a thread to stop its execution in a controlled manner. It does not forcefully terminate a thread but sets an interrupted flag, which the thread can check using isInterrupted(). If the thread is blocked in a waiting call (such as sleep() or wait()), calling interrupt() causes an InterruptedException, allowing the thread to exit safely.
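Here is a minimal sketch of the cooperative pattern described above (the class name GracefulStopExample is illustrative): the worker polls isInterrupted() while it runs and also handles the InterruptedException thrown if it is interrupted mid-sleep.

Graceful interruption
public class GracefulStopExample {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            // Keep working until some other thread sets the interrupted flag
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    Thread.sleep(200); // simulate a unit of work
                } catch (InterruptedException e) {
                    // sleep() clears the flag before throwing, so restore it and exit
                    Thread.currentThread().interrupt();
                    break;
                }
            }
            System.out.println("Worker exiting gracefully.");
        });

        worker.start();
        Thread.sleep(1000);  // let the worker run for a while
        worker.interrupt();  // request a graceful stop
    }
}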


sleep(long millis) – Pausing Execution Temporarily

The sleep() method pauses a thread’s execution for a specified time, moving it into the TIMED_WAITING state. After the sleep duration expires, the thread automatically moves back to RUNNABLE and waits for CPU scheduling. Unlike wait(), sleep() does not release any locks held by the thread during its pause. Since Thread.sleep() is a static method, it always operates on the currently running thread, i.e., the thread that executes this line of code.


yield() – Hinting the Scheduler to Pause Execution

The yield() method suggests to the CPU scheduler that the current thread is willing to pause execution to allow other threads to run. However, thread scheduling is managed by the operating system, and calling yield() does not guarantee an immediate switch to another thread. It is mainly useful for improving responsiveness in CPU-intensive applications. Since Thread.yield() is a static method, it always operates on the currently running thread, i.e., the thread that executes this line of code.


join() – Making a Thread Wait for Another Thread

The join() method makes the current thread wait until another thread has completed execution. When join() is called, the calling thread moves to the WAITING state and resumes execution only after the target thread finishes. If join(time) is used, the calling thread moves to TIMED_WAITING and resumes execution once the specified time expires, even if the target thread has not finished.

Java
class MyThread extends Thread {
    public void run() {
        System.out.println(Thread.currentThread().getName() + " has started."); // Running state

        // Example: Using sleep() - Moves to TIMED_WAITING for 2 seconds
        try {
            System.out.println(Thread.currentThread().getName() + " is sleeping...");
            Thread.sleep(2000);
        } catch (InterruptedException e) {
            System.out.println(Thread.currentThread().getName() + " was interrupted during sleep!");
        }

        // Example: Using yield() - Suggests CPU to switch to another thread
        System.out.println(Thread.currentThread().getName() + " is yielding...");
        Thread.yield();

        // Example: Running a loop to simulate execution
        for (int i = 1; i <= 3; i++) {
            System.out.println(Thread.currentThread().getName() + " running: " + i);
        }
    }
}

public class ThreadMethodsExample {
    public static void main(String[] args) throws InterruptedException {
        // Creating and starting thread1
        MyThread t1 = new MyThread();
        t1.start(); // Moves thread from NEW -> RUNNABLE -> RUNNING

        // Creating and starting thread2
        MyThread t2 = new MyThread();
        t2.start();

        // Example: join() - Main thread waits for t1 to finish
        t1.join(); // Moves main thread to WAITING state until t1 finishes

        System.out.println("Main thread resumes after t1.");

        // Example: Interrupting a sleeping thread
        MyThread t3 = new MyThread();
        t3.start();
        t3.interrupt(); // Sends an interrupt signal to t3, which may wake it from sleep
    }
}


Synchronized Keyword


Synchronization in Java is the process of controlling access to shared resources in a multi-threaded environment to prevent race conditions and data inconsistency. It ensures that only one thread can execute a critical section of code at a time.

RaceCondition Class
package com.scribbledtech.java1to4;

public class RaceCondition {
    public static void main(String[] args) throws InterruptedException {
        Counter counter = new Counter();
        Thread t1 = new Thread(() -> {
            for (int i = 0; i < 1000; i++) counter.increment();
        });
        Thread t2 = new Thread(() -> {
            for (int i = 0; i < 1000; i++) counter.increment();
        });
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println("Final count: " + counter.count);
    }
}

class Counter {
    public int count = 0;

    public int increment() {
        return ++count;
    }
}


In the above example, we create a shared Counter object and pass it to two different threads (t1 and t2), both of which increment the counter simultaneously. Since both threads access and modify the count variable at the same time without synchronization, a race condition occurs.

This means the expected final count should be 2000 (1000 + 1000), but due to thread interleaving, the actual output may be less than 2000. The issue arises because both threads read, increment, and write back the value concurrently, causing lost updates.


How Does the Race Condition Happen?

Let’s assume count is currently 10, and both threads execute the increment() method in this order:

  1. Thread 1 reads count = 10
  2. Thread 2 reads count = 10 (Before Thread 1 writes back)
  3. Thread 1 increments: count = 11 and writes back
  4. Thread 2 increments: count = 11 and writes back (Overwrites Thread 1’s update)
  5. The correct count should have been 12, but it remains 11 due to lost updates.


To solve this problem in Java 1, we use the synchronized keyword. The keyword can be used in several ways, but in all cases one thing is common: it allows only one thread at a time to execute a piece of code. That is all it does.

Under the hood, this is a locking mechanism, i.e., Java provides an intrinsic lock (or monitor lock) for every object. When a thread enters a synchronized block or method, it acquires the lock. Other threads trying to enter the same synchronized block will wait until the lock is released. Once the thread completes execution, it releases the lock, allowing another thread to proceed.
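Applied to the earlier race-condition example, a minimal fix (a sketch that keeps the public field so the earlier main method still compiles) is to mark increment() as synchronized; with that change, the final count reliably reaches 2000.

Counter fixed with synchronized
class Counter {
    public int count = 0;

    // The intrinsic lock of this Counter instance ensures that the read,
    // increment, and write of count happen one thread at a time
    public synchronized int increment() {
        return ++count;
    }
}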

Let’s look at the different ways to use the synchronized keyword.


1. Synchronized Methods

A method can be marked synchronized to allow only one thread at a time to execute it.

Synchronized Methods
class SharedResource {
    private int counter = 0;

    public synchronized void increment() {
        counter++;
        System.out.println(Thread.currentThread().getName() + " incremented counter to " + counter);
    }
}


If multiple threads call increment(), only one thread at a time can execute the method. Other threads will be blocked until the executing thread completes and releases the lock.

Drawbacks:

  • Blocks entire method execution, which can be inefficient.
  • Affects performance if used excessively.


2. Synchronized Blocks

Instead of synchronizing the entire method, we can synchronize a specific code block, reducing contention and improving performance.

Synchronized Blocks
class SharedResource {
    private int counter = 0;

    public void increment() {
        synchronized (this) {  
            counter++;
            System.out.println(Thread.currentThread().getName() + " incremented counter to " + counter);
        }
    }
}

  • Allows fine-grained locking instead of locking the entire method.
  • Other non-critical parts of the method can run without blocking.


3. Static Synchronized Methods


If a method is static synchronized, the lock is on the class object (Class<?>), meaning it applies across all instances.

Static Synchronized Methods
class SharedResource {
    private static int counter = 0;

    public static synchronized void increment() {
        counter++;
        System.out.println(Thread.currentThread().getName() + " incremented counter to " + counter);
    }
}

  • The lock is on SharedResource.class, not an instance.
  • Useful when modifying static variables that should be protected.

4. Synchronized Blocks in Static Methods


Instead of synchronizing the entire method, we can use synchronized blocks inside static methods.

Synchronized Blocks on Static Methods
class SharedResource {
    private static int counter = 0;

    public static void increment() {
        synchronized (SharedResource.class) { 
            counter++;
            System.out.println(Thread.currentThread().getName() + " incremented counter to " + counter);
        }
    }
}

  • Provides better control over which part of the method gets synchronized.
  • Avoids unnecessary locking of non-critical operations.

These intrinsic locks are a bit tricky: if they are not used properly, the system can end up in a deadlock, with threads waiting on each other to release the locks they hold, as the sketch below illustrates.
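Below is a minimal, illustrative sketch of such a deadlock (the class name DeadlockExample is made up): two threads acquire the same two locks in opposite order and end up waiting on each other forever. A common remedy is to always acquire locks in a consistent, global order.

Deadlock with two intrinsic locks
public class DeadlockExample {
    private static final Object lockA = new Object();
    private static final Object lockB = new Object();

    public static void main(String[] args) {
        Thread t1 = new Thread(() -> {
            synchronized (lockA) {
                sleepQuietly(100);       // give t2 time to grab lockB
                synchronized (lockB) {   // waits forever: t2 holds lockB
                    System.out.println("t1 acquired both locks");
                }
            }
        });

        Thread t2 = new Thread(() -> {
            synchronized (lockB) {
                sleepQuietly(100);       // give t1 time to grab lockA
                synchronized (lockA) {   // waits forever: t1 holds lockA
                    System.out.println("t2 acquired both locks");
                }
            }
        });

        t1.start();
        t2.start();
    }

    private static void sleepQuietly(long millis) {
        try { Thread.sleep(millis); } catch (InterruptedException ignored) {}
    }
}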

Inter-thread Communication


Inter-thread communication in Java allows multiple threads to coordinate their execution by using wait(), notify(), and notifyAll(). Note that all of these methods belong to the Object class, not Thread. This is because every Java object has a monitor lock, which is used for synchronization, and these methods are designed to work with an object’s monitor. They help threads efficiently pause and resume execution to avoid excessive CPU usage while waiting for shared resources, and they are typically used in producer-consumer models, where one thread produces data and another consumes it.

wait() – Pauses Execution & Releases the Lock

This method causes a thread to pause execution and enter the WAITING state. While waiting, the thread releases the lock on the shared resource, allowing other threads to access it. The waiting thread remains in this state until another thread calls notify() or notifyAll(). Following are some of the key things to keep in mind:

  • wait() must be called inside a synchronized block or method because it relies on an object’s intrinsic lock (monitor lock).
  • If wait() is called outside synchronization, the program will throw IllegalMonitorStateException, as the sketch below shows.
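A minimal sketch of the second point (the class name WaitWithoutLockExample is illustrative): calling wait() without holding the object’s monitor fails at runtime.

wait() outside synchronization
public class WaitWithoutLockExample {
    public static void main(String[] args) throws InterruptedException {
        Object lock = new Object();
        try {
            lock.wait(); // the current thread does not own lock's monitor
        } catch (IllegalMonitorStateException e) {
            System.out.println("Caught: " + e);
        }
        // Correct usage: call wait() while holding the monitor, e.g.
        // synchronized (lock) { lock.wait(); }
    }
}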

notify() – Wakes Up One Waiting Thread

This method wakes up a single thread waiting on the same monitor lock. The awakened thread then moves to a RUNNABLE state but does not immediately start executing. It will only proceed when it reacquires the lock on the shared resource.

  • If multiple threads are waiting, notify() wakes up only one of them, chosen arbitrarily by the JVM.
  • The awakened thread must reacquire the lock before continuing execution.


notifyAll() – Wakes Up All Waiting Threads


The notifyAll() method wakes up all threads waiting on the monitor lock. However, only one thread at a time will proceed since each must reacquire the lock before continuing execution. This method is preferred when multiple waiting threads need to be considered, ensuring no thread remains indefinitely waiting.


Producer-Consumer Models


A Producer-Consumer Model is a common use case for inter-thread communication. The producer thread generates data and places it in a shared buffer, while the consumer thread retrieves and processes the data. The use of wait() and notify() ensures efficient synchronization so that:

  • The consumer waits if no data is available (using wait()).
  • The producer notifies the consumer when new data is ready (using notify()).
  • The producer waits if the buffer is full, avoiding unnecessary CPU cycles.

Producer Consumer Example
package com.scribbledtech.java1to4;

import java.util.LinkedList;

class SharedBuffer {
    private final LinkedList<Integer> buffer = new LinkedList<>();
    private final int capacity = 5; 

    public synchronized void produce(int value) throws InterruptedException {
        while (buffer.size() == capacity) {
            System.out.println("Buffer full! Producer waiting...");
            wait(); 
        }
        buffer.add(value);
        System.out.println("Produced: " + value);
        notify(); 
    }

    public synchronized int consume() throws InterruptedException {
        while (buffer.isEmpty()) {
            System.out.println("Buffer empty! Consumer waiting...");
            wait(); 
        }
        int value = buffer.removeFirst();
        System.out.println("Consumed: " + value);
        notify(); 
        return value;
    }
}

class Producer extends Thread {
    private final SharedBuffer buffer;

    public Producer(SharedBuffer buffer) {
        this.buffer = buffer;
    }

    public void run() {
        int value = 1;
        try {
            while (true) {
                buffer.produce(value++);
                Thread.sleep(1000); // Simulating time taken to produce an item
            }
        } catch (InterruptedException ignored) {}
    }
}

class Consumer extends Thread {
    private final SharedBuffer buffer;

    public Consumer(SharedBuffer buffer) {
        this.buffer = buffer;
    }

    public void run() {
        try {
            while (true) {
                buffer.consume();
                Thread.sleep(1500); // Simulating time taken to process an item
            }
        } catch (InterruptedException ignored) {}
    }
}

public class ProducerConsumerExample {
    public static void main(String[] args) {
        SharedBuffer buffer = new SharedBuffer();
        Producer producer = new Producer(buffer);
        Consumer consumer = new Consumer(buffer);

        producer.start();
        consumer.start();
    }
}

The JVM internally maintains two key thread queues for managing threads waiting for synchronization. When multiple threads interact using wait(), notify(), and synchronization, the JVM moves threads between these queues:

  1. The Wait Set (Waiting Queue) → Holds threads that have called wait() and released the lock.
  2. The Entry Set (Lock Queue) → Holds threads that are waiting to acquire the lock after being notified.

Let’s walk through the producer-consumer code above to see exactly what happens:

  1. Initial State: Both Threads Start Execution
    • The Producer thread enters the synchronized block and starts adding items to the buffer.
    • The Consumer thread enters the synchronized block but sees the buffer is empty, so it calls wait().
    • The consumer thread is now in the Wait Set (Waiting Queue), and it releases the lock.

  2. Producer Produces Items Until the Buffer is Full
    • The Producer continues producing and adding items.
    • Once the buffer is full, the Producer calls wait(), releasing the lock.
    • Now, both Producer and Consumer are in the Wait Set (both are waiting).

  3. Consumer Wakes Up After Producer Calls notify()
    • The Producer calls notify() before going into wait(), which moves the Consumer thread from the Wait Set to the Entry Set (Lock Queue).
    • The Consumer thread is now RUNNABLE but cannot execute until it reacquires the lock.
    • Consumer is now ready to run but waiting to acquire the lock.

  4. Consumer Acquires Lock and Starts Consuming
    • When the Producer releases the lock (exits the synchronized block), the JVM grants the lock to one thread in the Entry Set.
    • The Consumer gets the lock and starts consuming items.

  5. Consumer Consumes Items, Calls notify(), and Wakes Up Producer
    • As the Consumer removes items from the buffer, space is created.
    • Once the buffer is not full, the Consumer calls notify(), moving the Producer from the Wait Set to the Entry Set.
    • The Consumer then calls wait(), moving itself back to the Wait Set.

  6. Producer Acquires the Lock and Produces Again
    • When the Consumer releases the lock, the Producer moves from the Entry Set back to a running state and produces again.
    • The cycle repeats indefinitely, ensuring smooth producer-consumer coordination.

🔧 Update in Progress ……….

This part of the blog is currently being worked on. Stay tuned for more insights!
