Muhammed Shafin P

Why NDM-TCP Tests Are Single-Run Snapshots (Not Statistical Averages)

A 3-minute read on testing philosophy and practical constraints


The Reality Check

Let me be transparent with the community: this is a personal hobby project I'm working on between studying for upcoming exams. As a solo developer with limited time and hardware, I've made a deliberate choice to conduct extended 40-second "stress-test snapshots" rather than following the traditional scientific method of running 50–100 trials and averaging results.

This isn't ideal—but it's honest, and here's why I'm doing it this way.


The Constraints I'm Working With

Time Limitations

Running multiple trials for every single test scenario would require dozens of hours I simply don't have right now. With exams approaching and this being a side project, I've had to prioritize breadth over statistical depth—testing many different scenarios once rather than one scenario many times.

Hardware Reality

I'm testing on a single laptop using VMware, not a multi-node hardware lab. This is what I have access to, and I'm working within those boundaries.

Software Challenges

My current iperf3 build (installed via apt) crashes with core dumps when connecting to external servers. This limits me to localhost tc (traffic control) simulations for now. Building from source to fix this would add complexity I don't have bandwidth for at the moment.
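For context, this is roughly what a localhost tc/netem impairment looks like around one 40-second snapshot. It's a minimal sketch, not the actual NDM-TCP harness: the interface (lo), the delay/jitter/loss values, and the run_with_netem helper with its placeholder workload are illustrative assumptions, and the commands need root privileges.

```python
import subprocess

def run_with_netem(delay_ms=50, jitter_ms=10, loss_pct=1.0, duration_s=40):
    """Apply a netem impairment to loopback, run a workload, then clean up."""
    impair = [
        "tc", "qdisc", "add", "dev", "lo", "root", "netem",
        "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
        "loss", f"{loss_pct}%",
    ]
    subprocess.run(impair, check=True)
    try:
        # Placeholder for the real traffic generator / NDM-TCP workload.
        subprocess.run(["sleep", str(duration_s)], check=True)
    finally:
        # Always remove the qdisc so the next snapshot starts from a clean link.
        subprocess.run(["tc", "qdisc", "del", "dev", "lo", "root"], check=True)

if __name__ == "__main__":
    run_with_netem()
```

Even with fixed delay and loss parameters, each snapshot still sees a different random realization of jitter and packet drops, because netem draws its own randomness internally. That leads directly into the next point.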

To properly test real-world conditions (WiFi, Ethernet), I'd ideally need:

  • One device acting as the sender/server on the WiFi or Ethernet link
  • A second device acting as the receiver/client

Right now, I don't have multiple devices available for this setup.


The Philosophy: Embracing Chaos Over Repeatability

Here's where my approach differs philosophically from traditional testing:

Why True Randomness Matters

Some researchers prefer "fixed-seed" repeatability, where the random noise is identical across every test run. This makes results perfectly reproducible, but it doesn't reflect reality.

In the real world, network conditions are never the same twice. You never encounter the exact same sequence of jitter and packet loss. NDM-TCP is designed to handle the unexpected—to adapt to unique, unpredictable "snowflakes" of network stress that occur in production environments.

The Limitations of Pseudo-Random Generation

Computers mostly rely on pseudo-random number generators rather than true randomness. To make a run exactly repeatable, you would need to seed those generators so the same noise sequence reproduces every time.

The tc tool doesn't expose seeding controls easily, so each test run encounters genuinely different random conditions. This means the tests are:

  • More realistic (they mimic real-world unpredictability)
  • Less reproducible (identical conditions can't be guaranteed)

I view this as a feature, not a bug. NDM-TCP needs to prove it can handle whatever the network throws at it, not just one carefully controlled sequence.
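To make the seeding point concrete, here is a tiny Python sketch, purely illustrative and unrelated to tc's internals, contrasting a fixed-seed sequence with draws taken after reseeding from OS entropy:

```python
import random

# Fixed seed: both runs replay exactly the same "noise" sequence.
random.seed(42)
run_a = [random.random() for _ in range(3)]
random.seed(42)
run_b = [random.random() for _ in range(3)]
assert run_a == run_b  # perfectly reproducible

# Reseed from OS entropy: every run sees a genuinely different sequence,
# which is closer to how these tc-based snapshots behave.
random.seed()
run_c = [random.random() for _ in range(3)]
random.seed()
run_d = [random.random() for _ in range(3)]
print(run_c != run_d)  # almost certainly True
```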


Why I Need Community Support

Here's the reality: I cannot test each scenario 50–200 times because of these constraints. That's exactly why I'm asking for community support.

This Started as a Crazy Experiment

NDM-TCP actually came from another crazy experiment I did called NDM (Neural Differential Manifolds). I took concepts from that experiment and wondered: "what if?"

I tested it, and within the limits of my localhost snapshots, it works. The results show promise.

But here's what matters most:

If NDM-TCP truly works—if it can handle real-world networks and real hardware better than these tc-based localhost simulations—then it could solve many old problems that networks still face today:

  • Handling unpredictable loss patterns
  • Adapting to rapid RTT changes
  • Maintaining fairness under stress
  • Reducing retransmissions in chaotic conditions

From Localhost to the Real World

These localhost tests with tc are just the beginning. If this approach proves effective on:

  • Real network hardware
  • Actual WiFi/Ethernet connections
  • Production environments with real congestion
  • Diverse network topologies

...then we might have something genuinely valuable for TCP connections everywhere.

That's the vision—and that's why I need your help to test it properly.


A Call for Community Collaboration

This is where you come in.

If you have:

  • A stable Linux lab environment
  • More time than I currently do
  • Multiple devices for real-world WiFi/Ethernet testing
  • Interest in running 50–100 trial statistical analyses

...then this is exactly where you can help. The NDM-TCP project is open source, and I would love to include community-contributed data in future updates. Your multi-run statistical analysis (a minimal sketch of that kind of aggregation appears after this list) would complement my single-run stress tests perfectly, giving us both:

  • The detail and chaos-handling evidence from snapshots
  • The statistical confidence from averaged multi-run trials
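For anyone who does run repeated trials, even a very simple aggregation would be useful to report back. A minimal sketch, with made-up throughput numbers purely for illustration:

```python
from statistics import mean, stdev

# Hypothetical per-trial throughput samples (Mbit/s) for one scenario:
# the numbers are invented for illustration, not NDM-TCP results.
trials = [91.2, 88.7, 93.5, 90.1, 89.8]

avg = mean(trials)
sd = stdev(trials)
# Rough 95% interval, assuming roughly normal trial-to-trial noise.
half_width = 1.96 * sd / len(trials) ** 0.5

print(f"mean = {avg:.1f} Mbit/s, std dev = {sd:.1f}, "
      f"95% CI ≈ ±{half_width:.1f} Mbit/s over {len(trials)} trials")
```

A mean, a spread, and a trial count per scenario would be enough to set averaged results side by side with the single-run snapshots.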

Closing Thoughts

This testing methodology isn't perfect—it's pragmatic. I'm working within real constraints while trying to provide valuable insights into NDM-TCP's behavior under stress.

Think of these tests as "field reports" rather than "lab results." They show what happened during one intense encounter with network chaos, not what happens on average. Both perspectives have value.

If the community can help expand this to more rigorous statistical testing, we'll have the best of both worlds: the detail of individual stress responses and the confidence of statistical trends.


Want to contribute? Check out the project repository and join the discussion. Together, we can build a more complete picture of NDM-TCP's performance.
