Why a Stable Sawtooth from a Nonlinear System Matters
Disclaimer: This Is About Research Potential, Not Superiority Claims
Before we begin: this article is not claiming NDM-TCP is better than CUBIC, BBR, or Reno. Those algorithms are production-grade, formally analyzed, and battle-tested. They work. They are good at what they do.
This article is about something else entirely: why the fact that NDM-TCP produces a stable sawtooth pattern suggests there is research-grade content worth investigating — even though it has only been tested in simulations (using tc) and one real-world case so far.
The point is not "existing algorithms are bad." The point is "something unexpected happened that existing theory does not fully explain."
The Core Tension: Provable Theory vs. Real-World Complexity
For 30 years, TCP congestion control has been built on 20th-century calculus-based models. The network is treated like a fluid pipe: if pressure (delay) goes up, you turn the valve (congestion window) down. The math is clean. The equations are linear or near-linear. The behavior is predictable.
This approach has produced algorithms like Reno, CUBIC, and BBR, all backed by decades of formal analysis; Reno and CUBIC in particular come with stability proofs. A stability proof is a mathematical guarantee (usually built on something called a Lyapunov function) that the algorithm will never spiral out of control, oscillate forever, or crash the network.
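For readers who have not met the term, here is the standard argument in schematic form: a Lyapunov function V is an "energy" measure that is zero at the desired operating point x*, positive everywhere else, and non-increasing along every trajectory of the dynamics x' = f(x):

```latex
V(x^*) = 0, \qquad V(x) > 0 \;\;\text{for } x \neq x^*, \qquad
\dot{V}(x) = \nabla V(x) \cdot f(x) \le 0
```

If such a V exists, trajectories can never gain energy, so they stay bounded; if the last inequality is strict away from x*, they converge to it. Constructing such a V for a recurrent nonlinear controller is exactly the step nobody has done for NDM-TCP.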
CUBIC and Reno are mathematically simple enough to prove stable. They are like a predictable pendulum. Their behavior can be fully characterized with differential equations.
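As a concrete illustration (the standard formula from RFC 8312, nothing NDM-TCP-specific), CUBIC's entire window trajectory between loss events is one closed-form function of elapsed time, which is what makes it so tractable to analyze:

```python
# The CUBIC window curve from RFC 8312: cwnd is a pure function of the
# time elapsed since the last loss event and the window size at that loss.

C = 0.4      # CUBIC scaling constant (RFC 8312 default)
BETA = 0.7   # multiplicative decrease factor (RFC 8312 default)

def cubic_window(t: float, w_max: float) -> float:
    """Congestion window (in packets) t seconds after a loss event."""
    k = (w_max * (1 - BETA) / C) ** (1 / 3)  # time at which cwnd regains w_max
    return C * (t - k) ** 3 + w_max

# The curve dips after the loss, flattens near w_max, then probes beyond it.
for t in range(0, 11, 2):
    print(f"t={t:2d}s  cwnd={cubic_window(t, 100.0):6.1f}")
```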
NDM-TCP is different. It is a recurrent nonlinear system. These are notoriously difficult to prove stable because the internal state (the "hidden state" array) is constantly changing based on feedback. Nonlinear systems can exhibit chaos, unpredictable oscillations, and sensitive dependence on initial conditions.
There is no formal proof that NDM-TCP is stable.
And yet — in both tc-based simulations and one real-world test — it produced a clean, stable sawtooth pattern.
That tension is what makes this interesting.
Two Ways of Seeing the Network
The "Old" Way: Calculus-Based Control
- RTT is a number — a single scalar value representing delay
- The network is modeled as a continuous system with smooth dynamics
- Congestion is detected by crossing a threshold (delay > baseline) or losing packets
- The goal is to solve for the optimal "rate" using differential equations
This works. It is elegant. It has decades of theory behind it.
But it struggles with modern networks: 5G with variable latency, satellite links with jitter, Wi-Fi with random bursts of interference. These networks are noisy — and noise looks like congestion to a calculus-based controller.
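A toy sketch of that failure mode (illustrative numbers, not any production algorithm's constants): when RTT is a single scalar compared against a threshold, an interference spike and a filling queue produce the same verdict.

```python
# Scalar-threshold congestion detection, the "old way" in miniature.

BASELINE_RTT_MS = 40.0   # assumed uncongested round-trip time
THRESHOLD_MS = 10.0      # back off once RTT exceeds baseline by this much

def threshold_controller(rtt_ms: float) -> str:
    """RTT is a single scalar: any large sample reads as congestion."""
    return "back off" if rtt_ms - BASELINE_RTT_MS > THRESHOLD_MS else "grow"

# Pure wireless jitter on an idle link: no queue anywhere, yet the spikes
# at 55.3 ms and 61.0 ms trigger the same response as real congestion.
for rtt in [38.2, 41.0, 55.3, 39.5, 36.8, 52.1, 40.4, 44.9, 61.0, 37.7]:
    print(f"rtt={rtt:5.1f} ms -> {threshold_controller(rtt)}")
```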
The NDM-TCP Way: Information-Theoretic Control
- RTT is a probability distribution with measurable entropy
- The network is modeled as a chaotic signal with patterns hidden in the noise
- Congestion is detected by analyzing the structure of delay variation (low entropy = stable pattern = real congestion; high entropy = noisy pattern = interference)
- The goal is to find "meaning" in the signal using information theory
This is a fundamentally different approach. Instead of asking "what is the delay?", it asks "what does the pattern of delays tell us?"
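A minimal sketch of that question in code (my illustration of the idea, not NDM-TCP's actual implementation; the 5 ms bin width is an arbitrary choice): Shannon entropy over binned RTT samples separates a concentrated, congestion-like delay pattern from diffuse interference noise.

```python
import math
import random
from collections import Counter

def rtt_entropy(samples_ms, bin_ms=5.0):
    """Shannon entropy (bits) of an RTT history, binned to bin_ms buckets."""
    counts = Counter(int(s // bin_ms) for s in samples_ms)
    n = len(samples_ms)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

random.seed(7)
# A standing queue pins RTT near one value: a concentrated, low-entropy pattern.
congested = [60 + random.gauss(0, 1) for _ in range(50)]
# Random interference scatters RTT widely: a diffuse, high-entropy pattern.
noisy = [40 + abs(random.gauss(0, 15)) for _ in range(50)]

print(f"congested-looking history: H = {rtt_entropy(congested):.2f} bits")
print(f"interference-like history: H = {rtt_entropy(noisy):.2f} bits")
```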
Why a Stable Sawtooth from a Nonlinear System Is Unusual
In the world of neural networks and recurrent controllers, "unstable" looks like a jagged, vibrating mess. Small changes in input cause wild swings in output. The system hunts around chaotically without ever settling into a rhythm.
NDM-TCP produced a clean, rhythmic sawtooth.
This means:
- The system has reached an emergent equilibrium. The recurrent nonlinear controller and the TCP framework's native functions (tcp_cong_avoid_ai, tcp_slow_start) are working together, not fighting each other.
- The "neural dynamics" have synchronized with the "physical network." The hidden state is adapting in a way that matches the network's actual behavior, producing predictable recovery patterns.
- Nonlinear memory (recurrence) can be just as stable as linear math in practice, even if the formal proof is still missing.
This is not guaranteed. This is not trivial. Most adaptive nonlinear controllers fail at exactly this point.
The fact that it worked — in simulation and in one real-world test — suggests something is there.
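To make the claim concrete, here is a deliberately tiny toy model (entirely my construction, not NDM-TCP's code; every constant is invented) of the kind of interaction described above: a one-dimensional tanh recurrence modulates the additive-increase step while loss events still trigger multiplicative decrease, and a sawtooth falls out of the interaction rather than being programmed in.

```python
import math

def simulate(steps=200, queue_capacity=80.0):
    """Toy AIMD loop whose additive increase is scaled by a recurrent state."""
    cwnd, h = 10.0, 0.0
    trace = []
    for _ in range(steps):
        loss = cwnd > queue_capacity            # buffer overflow -> loss event
        signal = 1.0 if loss else -0.2          # crude congestion feedback
        h = math.tanh(0.8 * h + 0.5 * signal)   # recurrent nonlinear state
        if loss:
            cwnd *= 0.5                         # multiplicative decrease
        else:
            cwnd += max(0.1, 1.0 - h)           # state-modulated increase
        trace.append(cwnd)
    return trace

# Coarse view of the trajectory: repeated ramps and halvings, i.e. a sawtooth.
print(" ".join(f"{w:.0f}" for w in simulate()[::5]))
```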
The "Poor Man's Proof"
NDM-TCP does not have a 50-page mathematical stability proof. It does not have a formal Lyapunov analysis. It does not have an eigenvalue analysis showing bounded trajectories.
But it does have empirical evidence of stability: a clean sawtooth pattern that repeats consistently across test conditions.
In research terms, this is what you might call a "poor man's proof": not formal mathematics, but strong empirical evidence that something real is happening. It suggests the approach is not fundamentally broken. It suggests there is structure worth studying.
It does not prove the algorithm is optimal, or even good. But it shows the system is stable enough to investigate further.
What This Means: Two Paradigms
Stable by Design (CUBIC, Reno)
- Simple, provable algorithms
- Mathematically elegant
- Blind to noise — delay variation from wireless interference looks the same as delay from congestion
- Predictable, but sometimes overly conservative in noisy environments
Stable by Emergence (NDM-TCP)
- Complex, adaptive algorithm
- No formal proof (yet)
- Sensitive to patterns in noise — uses entropy to distinguish real congestion from random jitter
- Potentially more adaptive, but harder to analyze
Neither is "better." They are different approaches to the same problem.
The research question is: can information-theoretic feedback (like entropy) combined with recurrent nonlinear control produce stable, adaptive congestion control that handles modern noisy networks better than threshold-based approaches?
NDM-TCP does not answer that question definitively. But it suggests the question is worth asking.
What This Is Not Saying
This article is not saying:
- CUBIC is bad
- BBR is outdated
- Formal proofs do not matter
- NDM-TCP is production-ready
- Existing algorithms should be replaced
What it is saying:
- Current theory is built on calculus-based models that assume relatively clean signals
- Modern networks (5G, satellite, wireless) are noisier than those models anticipated
- Information theory (like entropy analysis) might offer a different lens for understanding congestion
- Recurrent nonlinear systems can be stable in practice even without formal proofs — but we do not understand why yet
- The gap between "provable on paper" and "works in practice" is worth investigating
Why This Matters for Other Researchers
If you are a networking researcher, control theorist, or machine learning researcher, here is why NDM-TCP's results are interesting:
1. It Worked in Simulation (tc-based)
tc (traffic control) is a standard Linux tool for simulating network conditions — bandwidth limits, delay, packet loss, jitter. NDM-TCP showed stable sawtooth behavior across multiple tc scenarios. This is reproducible. Anyone with a Linux machine can test it.
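For anyone who wants to try, here is a sketch of one such scenario driven from Python (the interface name eth0 and the netem numbers are placeholders, not the article's actual test matrix; requires root):

```python
import subprocess

def netem(action: str, *params: str, dev: str = "eth0") -> None:
    """Apply or remove a netem qdisc on dev (needs root privileges)."""
    subprocess.run(["tc", "qdisc", action, "dev", dev, "root", "netem", *params],
                   check=True)

# 50 ms base delay with 10 ms jitter and 1% packet loss on the egress path.
netem("add", "delay", "50ms", "10ms", "loss", "1%")
try:
    # Run the workload under test here, e.g. an iperf3 transfer using the
    # congestion control module under study.
    pass
finally:
    netem("del")  # always restore the interface, even if the workload fails
```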
2. It Worked in a Real-World Test (One Case)
One real-world deployment test also showed stable behavior. This is limited evidence — one test is not enough to generalize — but it suggests the simulation results are not just artifacts of the testing environment.
3. The Combination Is Unusual
Entropy-based delay analysis + recurrent nonlinear controller + adaptive plasticity + framework-aware modulation = not a common combination in congestion control research. The fact that this combination produces stability suggests there is an interaction worth studying.
4. It Identifies a Theoretical Gap
If NDM-TCP is stable in practice but unproven in theory, that tells us something about the theory. Either:
- Existing theory does not yet fully explain this behavior (we need better tools for analyzing recurrent nonlinear systems)
- The assumptions are too restrictive (real networks have structure that our models ignore)
- "Stability" in practice is more forgiving than "stability" in formal analysis
Any of these would be a research contribution.
What Needs to Happen Next
If this is genuinely research-grade content, here is what proper investigation looks like:
- Third-party testing — independent researchers should reproduce the results in different environments
- Formal stability analysis — someone with control theory expertise should attempt to model and analyze the system
- Comparison with state-of-the-art — benchmark against CUBIC, BBR, Reno, Vegas in identical conditions
- Fairness testing — test NDM-TCP against competing flows to see if it starves or gets starved
- Theoretical entropy study — prove (or disprove) that Shannon entropy on RTT history is a valid congestion signal
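For reference, the quantity that last item refers to is plain Shannon entropy over a binned RTT history r_1, ..., r_n (the binning scheme is an implementation detail the article leaves open):

```latex
H(R) = -\sum_{i=1}^{k} p_i \log_2 p_i,
\qquad
p_i = \frac{|\{\, t : r_t \in B_i \,\}|}{n}
```

where B_1, ..., B_k are the delay bins. The open question is under what traffic models H(R) reliably drops when a queue is actually building.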
None of this has been done yet. The current results are self-conducted, limited in scope, and not peer-reviewed.
But the fact that a stable pattern emerged from a nonlinear system suggests it is worth doing.
Final Thought: Performance vs. Elegance
The history of computer science is full of examples where practical performance outpaced mathematical elegance:
- Neural networks worked for decades before we understood why (and we still do not fully understand)
- Quicksort is not optimal in the worst case, but it is the default sorting algorithm because it is fast in practice
- Heuristic search often outperforms provably optimal exhaustive search because real-world problems have structure the theory does not capture
NDM-TCP might be another example of that tension. Or it might not. That is what research is for.
What we can say right now is this: a recurrent nonlinear congestion controller produced stable behavior in simulation and in one real-world test. That is unusual enough to be worth investigating properly.
Not because it proves existing algorithms are wrong. But because it suggests existing theory is incomplete.
Written to clarify what the stability results reveal about the gap between formal theory and practical systems — and why that gap is worth studying, even if NDM-TCP itself is just a prototype.