The History of Credit-based Flow Control (Part 1)

Written by Jing Qiao, Chi, Yao; Translated by Xiaozhen Liu, Chenyang Xu, Yakun Zhou

The backpressure mechanism, also known as credit-based flow control, is a classic scheme for flow control in network communication, and its predecessor is the TCP sliding window. The idea is remarkably simple and effective: as we will see in this article, the same principles apply to virtually any flow control problem, and the idea appears in the design of many hardware and software systems. Yet, as you may not expect, this simple idea has a far from simple history.

1. Flow Control in OneFlow

OneFlow solves the flow control problem through a backpressure mechanism. The following two pipeline figures show how this mechanism works:

Figure 1: Pipeline when training is the bottleneck
Figure 2: Pipeline when dataloading is the bottleneck

As the two figures above show, although dataloading itself takes little time, it does not load data without limit: it waits once its two regsts (OneFlow’s buffers for passing data between actors) are filled.

  • When training is the bottleneck and the data of batch 3 is being trained, dataloading prepares batch 7 and batch 8 and then waits.
  • When Preprocessing is the bottleneck, DataLoading always processes the data of two batches ahead of Preprocessing.

These two figures illustrate that with a decentralized, asynchronous execution design, OneFlow can automatically take care of the slowest processing unit through a backpressure mechanism that allows each execution unit, which we call an Actor, to pace itself accordingly.
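The two-regst pacing described above can be sketched with a bounded queue. This is a simplified model, not OneFlow’s actual implementation; all names here are illustrative. When the downstream stage is slow, `put` blocks, so the producer automatically paces itself, which is exactly the backpressure effect:

```python
import queue
import threading

def run_pipeline(num_batches=8, regst_count=2):
    # A bounded queue plays the role of an Actor's two regsts:
    # at most `regst_count` batches may be in flight at once.
    buf = queue.Queue(maxsize=regst_count)
    trained = []

    def dataloader():
        for batch in range(num_batches):
            buf.put(batch)  # blocks once both "regsts" are filled

    def trainer():
        for _ in range(num_batches):
            trained.append(buf.get())  # consuming frees a slot upstream

    t1 = threading.Thread(target=dataloader)
    t2 = threading.Thread(target=trainer)
    t1.start(); t2.start()
    t1.join(); t2.join()
    return trained

print(run_pipeline())  # [0, 1, 2, 3, 4, 5, 6, 7]
```

However fast the dataloader thread runs, it can never be more than two batches ahead of the trainer, matching the behavior in the figures.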

If you think about it, you will see that the backpressure mechanism here seems to be similar to the well-known TCP sliding window.

Indeed, the backpressure mechanism, also known as credit-based flow control, is a classic scheme for network communication flow control problems. Its predecessor is the TCP sliding window.

This idea is particularly simple and effective. As we will see later, based on the same principles, this idea is applicable to any flow control scheme and is found in the design of many hardware and software systems.

However, you may not imagine that behind this simple idea lies a far from simple history. It even sparked a heated academic debate, and lost. Despite that, the idea of credit-based flow control was refined during the debate and has since made a splash in many fields.

In this article, we will introduce the basic concept of it and its chequered history.

2. What is Flow Control

Network Flow Control is a basic mechanism in networking that prevents frames from being lost under congestion.

In the figure above, assume that between a pair of network communication nodes:

  • Sender produces data at a rate of 2MB/s, Receiver consumes data at a rate of 1MB/s, and data is transmitted at 2MB/s.
  • Both nodes have a data buffer (Send Buffer/Receive Buffer) of 5MB in size.

It follows that, since the Sender produces data faster than the Receiver consumes it, the Receive Buffer fills up after 5 seconds, at which point one of two things happens:

  • If the Receive Buffer is bounded, then the newly arrived data will have to be discarded.
  • If the Receive Buffer is unbounded, then it will continue to expand and eventually lead to memory exhaustion.
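The arithmetic above can be checked with a one-line calculation (the numbers are the ones assumed in the example):

```python
def seconds_until_overflow(produce_rate=2, consume_rate=1, buffer_size=5):
    # The sender produces 2 MB/s, the receiver consumes 1 MB/s,
    # so the backlog grows by the difference each second.
    backlog_per_second = produce_rate - consume_rate
    return buffer_size / backlog_per_second

print(seconds_until_overflow())  # 5.0 -- the 5 MB buffer is full after 5 s
```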

To briefly summarize so far: flow control solves the end-to-end speed mismatch between sender and receiver, or, put more plainly, the “Fast Sender Slow Receiver” problem.

So, what does a flow control scheme consist of? In essence, it is a set of speed-matching measures that curb the Sender’s faster sending rate so that it matches the Receiver’s slower processing rate.

Then the question becomes: how do we curb the Sender’s sending rate? There are two common ideas:

Idea one: direct rate limiting

The Sender sends data at a predetermined rate. For example, implement a speed limiter on the Sender side to reduce the Sender’s sending rate to 1MB/s, so that the Sender’s sending rate matches the Receiver’s processing rate.
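A minimal sketch of such a speed limiter follows; the class name and structure are illustrative, not from any particular library. The sender simply spaces out its own transmissions so that it never exceeds a predetermined rate:

```python
import time

class RateLimitedSender:
    # Illustrative "direct rate limiting": the sender throttles itself
    # to a fixed rate, regardless of what the receiver is doing.
    def __init__(self, rate_mb_per_s):
        self.interval = 1.0 / rate_mb_per_s  # seconds per 1 MB chunk
        self.next_send = time.monotonic()

    def send(self, chunk):
        now = time.monotonic()
        if now < self.next_send:
            time.sleep(self.next_send - now)  # wait for our time slot
        self.next_send = max(now, self.next_send) + self.interval
        return chunk  # stand-in for the actual transmission

sender = RateLimitedSender(rate_mb_per_s=1000)  # fast rate keeps the demo quick
sent = [sender.send(i) for i in range(5)]
print(sent)  # [0, 1, 2, 3, 4]
```

Note the weakness the debate later turned on: the rate is chosen in advance, so if the receiver slows down below it, the buffer can still overflow.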

Idea two: authorization sending

The Sender cannot send directly unless it has received a communication flow permission from the Receiver. This authorization scheme protects the Receiver from buffer overflow.
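A minimal sketch of authorization sending (class names are illustrative): the sender asks the receiver how many free buffer slots it has and never sends more than that, so the receive buffer cannot overflow by construction.

```python
class Receiver:
    def __init__(self, buffer_size):
        self.buffer = []
        self.buffer_size = buffer_size

    def grant_credit(self):
        # Credit = free slots left in the receive buffer.
        return self.buffer_size - len(self.buffer)

    def receive(self, packet):
        assert len(self.buffer) < self.buffer_size  # credit guarantees this
        self.buffer.append(packet)

    def consume(self):
        return self.buffer.pop(0)

class Sender:
    def __init__(self, receiver):
        self.receiver = receiver

    def send_all(self, packets):
        pending = list(packets)
        while pending:
            credit = self.receiver.grant_credit()  # ask permission first
            for _ in range(min(credit, len(pending))):
                self.receiver.receive(pending.pop(0))
            if pending:
                self.receiver.consume()  # freeing a slot renews credit

rx = Receiver(buffer_size=2)
Sender(rx).send_all([1, 2, 3, 4])
print(rx.buffer)  # [3, 4] -- later packets waited for credit; no overflow
```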

Note that the communication flow permission here is often referred to as Credit. What exactly is it? And how does it relate to credit-based flow control?

Next, we will tell the story of credit-based flow control in chronological order. In fact, these two ideas are exactly the two sides in the debate. The answer to the above question will also be revealed in the story.

3. The Story of Credit-based Flow Control

TCP Sliding Window

In 1974, Vinton G. Cerf, the designer of the TCP/IP protocol and known as one of the fathers of the Internet, published the seminal paper on the TCP/IP protocol, “A Protocol for Packet Network Intercommunication”. In this paper, a flow control scheme based on sliding windows was proposed.

In December of the same year, the concept of sliding windows was officially added to RFC 675. Over time, many improvements were made to the TCP/IP protocol and various bugs and inconsistencies were fixed, but the idea of sliding windows was retained. It was later expanded and developed into “windows” for various scenarios in network flow control, and eventually refined into credit-based flow control.

Now, let’s look at the workflow of the TCP sliding window.

The GIF above demonstrates the workflow of the TCP sliding window:

  1. Assume that at initialization the send window size is 3 (it is adjusted dynamically as sending proceeds) and the receive window size is 5 (fixed). The sender produces packets three times as fast as the receiver consumes them.
  2. The Sender sends packets 1–3; the Receiver receives packets 1–3 and places them in its buffer.
  3. The user-space Consumer consumes packet 1.
  4. The Receiver replies with Ack=4, meaning the Sender may send from packet 4 onward with a window size of 3 (total size 5, minus the 2 packets not yet consumed). On receiving this message, the Sender moves its sliding window to packet 4 and keeps a send window size of 3.
  5. The Sender sends packets 4–6; the Receiver receives packets 4–6 and places them in its buffer.
  6. The user-space Consumer consumes packet 2.
  7. The Receiver replies with Ack=7, meaning the Sender may send from packet 7 with a window size of 1. On receiving this message, the Sender moves its sliding window to packet 7 and shrinks the send window to 1; the sending rate thus drops to 1/3 of the original.
  8. The Sender sends packet 7; the Receiver receives packet 7. Now the receiver’s buffer is full.
  9. Suppose the receiver then has trouble consuming and stops draining its buffer while the receive window is full. It replies with Ack=8 and a window size of 0; on receiving this message, the Sender sets its send window to 0, i.e., it stops sending.
  10. Once the receiver resumes consuming data, a process similar to steps 3–5 repeats.
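The shrinking window in the steps above can be modeled in a few lines. This is a toy model using the walkthrough’s numbers (buffer size 5); the function names are illustrative. The receiver advertises a window equal to its free buffer slots, and the sender never sends more than that:

```python
def advertise_window(buffer_size, unconsumed):
    # The receiver's advertised window = free slots in its buffer.
    return buffer_size - unconsumed

def exchange(buffer, buffer_size, next_packet, to_send):
    # The sender transmits at most `window` packets starting at next_packet.
    window = advertise_window(buffer_size, len(buffer))
    sent = list(range(next_packet, next_packet + min(window, to_send)))
    buffer.extend(sent)
    return sent

buffer = []
print(exchange(buffer, 5, 1, 3))   # [1, 2, 3]: window is 5, all 3 fit
buffer.pop(0)                      # consumer takes packet 1 (step 3)
print(exchange(buffer, 5, 4, 3))   # [4, 5, 6]: window shrank to 3
buffer.pop(0)                      # consumer takes packet 2 (step 6)
print(exchange(buffer, 5, 7, 3))   # [7]: window is only 1
print(advertise_window(5, len(buffer)))  # 0 -- sender must stop (step 9)
```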

The Credit-based Flow Control Idea

In 1981, in “Methods, Tools, and Observations on Flow Control in Packet-Switched Data Networks”, Louis Pouzin summarized a number of existing network flow control schemes and proposed the idea of credit-based flow control. He defined Credit as:

credit (or token): which gives permission for message flow.

He also pointed out that flow control schemes based on the window mechanism are an instance of credit-based flow control, which he called the Self-Correcting Credit Scheme.

Figure: “Methods, Tools, and Observations on Flow Control in Packet-Switched Data Networks”

The Heated Debate Over Credit-based Flow Control

This debate started in the early 1990s when the once-promising ATM network was in full swing with the drafting and formulation of standards.

An ATM (Asynchronous Transfer Mode) network is a connection-oriented, general-purpose transfer mode designed for multiple services. It offers good real-time behavior, great flexibility, and excellent quality of service. At the time, many people had high hopes for it and believed it was the future of network technology, so the discussions about technology choices for ATM networks were correspondingly intense.

In 1993, Professor H.T. Kung submitted a proposal for credit-based flow control in ATM networks to the ATM Forum.

The proposal presented a concrete algorithm for implementing credit-based flow control in ATM networks, together with a detailed feasibility analysis. Once put forward, the scheme immediately drew attention from industry.

However, by the end of 1994, the ATM Forum voted for another scheme called rate-based flow control and rejected the credit-based flow control scheme.

Remember the two flow control schemes we mentioned before? In fact, rate-based flow control corresponds to the above-mentioned Direct Rate Limiting, and credit-based flow control corresponds to the above-mentioned Authorized Sending.

Here let’s briefly touch on rate-based flow control.

The idea behind rate-based flow control is that the Sender first estimates the rate it can sustain given the required resources, and then sends data at that predetermined rate.

A typical application of this scheme is peripheral I/O devices, which operate at a fixed rate. For example, when a character terminal acts as a receiver, it generally cannot actively control the rate at which the sender sends data to it; for it to work properly, the Sender must feed it data at the terminal’s fixed rate. Another typical example of sending data at a fixed rate is pipelining in CPU design: the fixed global clock determines that every pipeline stage works at the same rate.

Going back to the story of the ATM network: what happened between 1993 and 1994?

From the article “Congestion control and traffic management in ATM networks: Recent advances and a survey” in 1996, we can get a glimpse of the intensity of the debate on the two schemes at that time.

The debate, which lasted more than a year, was quite “religious”. This is because the believers of each approach had quite different goals in mind and were unwilling to compromise. To achieve their goals, they were willing to make trade-offs that were unacceptable to the other side.

The credit-based side published a paper titled “Credit Where Credit is Due”, and the rate-based side answered with “The Benefits of Rate-Based Flow Control for ABR Service”.

The rate-based side presented “Limitations of Credit-Based Flow Control”, and the credit-based side responded with “The Realities of Flow Control for ABR Service”.

Obviously, the credit-based side was very dissatisfied with the ATM Forum’s vote. They firmly believed that credit-based flow control was the better scheme, and in the following years scholars continued to research it.

For example, Professor Kung went on to publish three articles in a row, making a series of improvements to his original credit-based flow control algorithm, such as support for dynamic credit.

In addition, he established a team at Harvard that, together with BNR, developed an ATM switch based on credit-based flow control and ran extensive experiments on it to demonstrate the effectiveness of the algorithm.

Professor Kung mentioned the reasons for his efforts in “Credit-based Flow Control for ATM Networks”:

We hope thereby to speed the evolution of ATM flow control, and minimize the risk of standardizing inadequate solutions. This article avoids political and short-term pragmatic issues, such as migration paths and interoperability, noting that flow control mechanisms adopted now may be in use long after such issues are forgotten.

In this article, we reviewed the history of credit-based flow control. First, we introduced the basic concept of flow control and then told the story about the heated debate over credit-based flow control. In the next article, we will go deeper and analyze the core principle of credit-based flow control and take a look at its application in other fields nowadays.

I hope this article will help you in your deep learning projects😊. If you want to experience the functions of OneFlow, you can follow the method described in this article. If you have any questions, comments💡, remarks, or suggestions for improvement, please feel free to leave them in the comments section below. In future articles, we’ll introduce more details of OneFlow.

Related articles:

  1. Explore MLIR Development Process: Take OneFlow as an Example
  2. How to Choose the Grid Size and Block Size for a CUDA Kernel?

Welcome to visit OneFlow on GitHub and follow us on Twitter and LinkedIn.

Also, welcome to join our Discord group to discuss and ask OneFlow related questions, and connect with OneFlow contributors and users all around the world.




OneFlow is a performance-centered and open-source deep learning framework.

