Muhammed Shafin P

Real-World Analysis of TCP Congestion Control: Reno vs. NDM TCP vs. Cubic in a Home Network Environment

1. Introduction

This report documents a real-world comparative analysis of three TCP congestion control algorithms: TCP Reno, TCP Cubic, and ndm_tcp. Unlike theoretical simulations or controlled lab environments, this test was conducted within a live home network during active usage. The primary objective was to observe the impact of high-throughput iperf3 transfers on simultaneous real-time traffic—specifically a streaming YouTube video—to evaluate fairness and aggression in standard consumer-grade hardware.

2. Experimental Setup

The test utilized a client-server architecture within a standard residential layout.

Hardware and Software Configuration

  • Server: Laptop running Debian 13 (Trixie).

  • Client: Laptop running Windows Subsystem for Linux (WSL).

  • Router: Scopus Router (version 3.0), operating on the 2.4 GHz Wi-Fi band and located in the hall.

  • Cabling: The client was connected via a 15-meter Fedus Cat 6 Ethernet cable, eliminating wireless variability on the client leg.

  • Network Path: The server was located in a separate room from the router, so the wireless leg passed through a physical wall that acted as a signal attenuator. The client was also in another room, but since it was connected over Ethernet, its location did not matter.

  • Measurement Tool: iperf3, version 3.12 on the client and version 3.18 on the server.
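
The algorithm under test was switched on the sending host between runs. The exact commands were not logged, so the sketch below is a minimal assumption: it uses the standard Linux sysctl interface, and the module name tcp_ndm for the out-of-tree ndm_tcp algorithm is hypothetical (reno and cubic ship with the stock kernel).

```bash
# See which congestion control algorithms the kernel currently offers.
sysctl net.ipv4.tcp_available_congestion_control

# Load the custom algorithm's module first (module name is hypothetical).
sudo modprobe tcp_ndm

# Make the chosen algorithm the system-wide default for new connections.
sudo sysctl -w net.ipv4.tcp_congestion_control=ndm_tcp
```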

Layout

                [ Room 1 ]
    +------------------------------------+
    |                                    |
    |  Server Laptop                     |
    |  Debian 13 (Trixie)                |
    |  iperf3 v3.18 (Server)             |
    |                                    |
    +-----------------+------------------+
                      )))))
                  2.4 GHz Wi-Fi
                      )))))
            (Physical Wall Barrier)
                      )))))
    +-----------------+------------------+
    |       Scopus Router v3.0           |
    |       (Located in Hall)            |
    +-----------------+------------------+
                      |
                      | 15m Fedus Cat 6 Ethernet Cable
                      |
    +-----------------+------------------+
    |                                    |
    |  Client Laptop                     |
    |  WSL (Windows Subsystem for Linux) |
    |  iperf3 v3.12 (Client)             |
    |                                    |
    +------------------------------------+
                [ Room 2 ]

Real-World Background Traffic

The test was conducted while a high-priority background task was active: a streaming YouTube class for SSLC (10th grade) exam preparation that my sister was watching. This provided a realistic fairness metric: if the iperf3 test caused the stream to buffer, the algorithm was considered highly aggressive or "unfair" to existing flows.

3. Test Scenarios and Observations

Scenario A: ndm_tcp (Initial Test)

The first test was conducted using the ndm_tcp algorithm on the server side.

  • Average Bitrate: ~101 Mbits/sec (sender) and 101 Mbits/sec (receiver).

  • Total Retransmissions: 1,158.

  • Stability: The congestion window (Cwnd) fluctuated between 300 KB and 1.2 MB.

Impact on Background Traffic: The YouTube stream remained stable throughout the 100-second transmission phase.
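
For reproducibility, here is roughly how each run can be invoked. The exact flags used were not recorded in this report, so the server address 192.168.1.10 is a placeholder and the -C option (iperf3's Linux-only flag for selecting the congestion control algorithm on the test connection) is shown as one plausible way to pick the algorithm per run.

```bash
# Server side: listen for the test.
iperf3 -s

# Client side: 100-second test. -C selects the congestion control
# algorithm for this connection (Linux only); 192.168.1.10 is a placeholder.
iperf3 -c 192.168.1.10 -t 100 -C ndm_tcp

# On the sending host, the congestion window (cwnd) values reported
# above can be sampled once per second during the run:
watch -n 1 "ss -tin dst 192.168.1.10"
```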

Scenario B: TCP Cubic

TCP Cubic, the default congestion control algorithm on modern Linux kernels, was tested as a baseline for comparison.

  • Average Bitrate: ~99.9 Mbits/sec (sender) and 99.6 Mbits/sec (receiver).

  • Total Retransmissions: 1,164.

Behavior: Throughput was consistent, slightly lower than ndm_tcp in this specific instance, with a comparable retransmission count.

Scenario C: TCP Reno (The Aggressive Phase)

The test was then switched to the legacy TCP Reno algorithm.

  • Average Bitrate: 19.0 Mbits/sec (sender); the receiver reported 0.00 bits/sec.

  • Observations: While Reno initially showed high Cwnd values (up to 2.97 MB), it experienced a massive surge in retransmissions (1,197 within the first 6 seconds). After the 18-second mark, the receiver reported 0.00 bits/sec, suggesting a significant collapse or stall in the link under the specific congestion conditions of the router's buffer.

4. Key Findings: The "YouTube Stalls"

A critical observation occurred regarding the fairness of these algorithms:

  • Reno Aggression: Approximately 7–11 seconds after the Reno test completed, the YouTube class on my sister's device stopped entirely, showing a loading/buffering spinner. Simultaneously, attempts to load YouTube on the client system also failed. This suggests that Reno's behavior (likely bufferbloat or aggressive window growth prior to the stall) saturated the Scopus router's resources to the point where existing flows were starved.

  • Recovery with ndm_tcp: Once the system was switched back to ndm_tcp, the YouTube class resumed playback within approximately 5–8 seconds.

  • Hardware vs. Simulation: It is important to emphasize that these results were obtained on real hardware. The Scopus router (v3.0) and the physical wall between the router and the server likely contributed to the packet loss and latency patterns that triggered these behaviors.
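
The bufferbloat suspicion above is easy to check in a follow-up run: if the router's buffer is the culprit, latency to it should balloon while a sender saturates the link. A minimal sketch, assuming the router answers ICMP and sits at the placeholder address 192.168.1.1:

```bash
# Baseline: latency to the router on an idle network.
ping -c 20 192.168.1.1

# Then start the iperf3 load in another terminal and ping again.
# A jump from a few milliseconds to hundreds under load is the classic
# bufferbloat signature and would explain the stalled YouTube stream.
ping -c 100 192.168.1.1
```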

5. Statistical Summary (100s Transmissions)

| Algorithm | Avg Bitrate (Sender) | Retransmissions | Key Event |
| --------- | -------------------- | --------------- | --------- |
| ndm_tcp | 101 Mbits/sec | 1,158 | Stable background stream. |
| Cubic | 99.9 Mbits/sec | 1,164 | Minimal impact on stream. |
| Reno | 19.0 Mbits/sec* | 1,264 | YouTube stream stalled; link collapsed. |

*Reno bitrate is averaged over the full 100-second duration; the actual transmission stalled after 18 seconds.
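
If these runs are repeated, the summary figures above are easier to collect from iperf3's machine-readable output than by copying the text table. A minimal sketch, assuming the -J (JSON) flag and the jq tool are available; the server address is again a placeholder:

```bash
# Run a test with JSON output and save it (192.168.1.10 is a placeholder).
iperf3 -c 192.168.1.10 -t 100 -J > reno.json

# Extract the average sender bitrate (bits/sec) and total retransmissions
# from iperf3's standard JSON summary fields.
jq '.end.sum_sent.bits_per_second, .end.sum_sent.retransmits' reno.json
```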

iperf3 Results (final summary lines)

1. ndm_tcp

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.00 sec  1.18 GBytes   101 Mbits/sec  1158             sender
[  5]   0.00-100.01 sec  1.17 GBytes   101 Mbits/sec                  receiver

2. cubic

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.00 sec  1.16 GBytes  99.9 Mbits/sec  1164             sender
[  5]   0.00-100.03 sec  1.16 GBytes  99.6 Mbits/sec                  receiver

3. reno

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-100.00 sec   226 MBytes  19.0 Mbits/sec  1264             sender
[  5]   0.00-100.00 sec  0.00 Bytes  0.00 bits/sec                  receiver

6. Limitations and Conclusion

Test Limitations

  • Sample Size: This report is based on a single real-world test case.

  • Environment: Testing was limited to a residential 2.4 GHz Wi-Fi environment. Results may differ significantly on 5 GHz, Wi-Fi 6, or high-performance enterprise/telecom infrastructure.

  • Lack of 5G/Data Center Data: I do not have access to large-scale data center environments or 5G infrastructure to validate whether these findings scale. This is where community support is needed; even this test could only be run by degrading the network for others in the house.

Conclusion

Based on this hardware test, ndm_tcp appeared to maintain better fairness toward background real-time traffic (YouTube) than TCP Reno. Reno's behavior under these specific conditions was highly aggressive, leading to a temporary denial of service for other devices on the network. However, further community validation and large-scale testing are required before concluding that ndm_tcp performs well across diverse networking scenarios.

Full Data Access

The complete raw iperf3 logs (TXT format) can be accessed here:
Full Test Results - Google Drive
