We’d like to resolve the shortfalls of the simple version. First, when m1 resumes, we’d like to give it back its priority over m2; let’s get rid of m3. Originally m1 was receiving grants at priority P5, then m1 stalled, so we started granting m2 at P5. Later m1 resumes, and when the first packet of m1 arrives, outstandingBytes of m1 drops below 1 RTT, so we want to restore m1’s priority over m2. We immediately bump m1 up to P4 and send a grant for it; at this point P5 is associated with m2 and P4 with m1. Assuming m1 and m2 then transmit fully, the train of packets would look like this:

2_p5, 2_p5, 1_p5, 2_p5, 1_p5, 2_p5, 1_p5, 2_p5, 1_p4, 1_p4, 1_p4, …, 1_p4, 2_p5, 1_p5, 2_p5, 1_p5, 2_p5, 1_p5

That is, we first get m2 packets at P5, then 1 RTT in which m1 and m2 packets interleave (m1 still draining its leftover P5 grants), then m1 packets at P4, and once m1 is fully granted and all of its P4-granted packets have been received, a final 1 RTT of interleaving m1 and m2 packets.
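The bump-on-resume rule above can be sketched as receiver-side pseudologic. This is a minimal illustration, not the actual implementation: the names (`Message`, `on_data_pkt`, `RTT_BYTES`) and the "lowest active priority minus one" rule are assumptions made for the sketch.

```python
# Hypothetical sketch of the receiver's grant-priority logic when a
# paused message resumes. Smaller prio number = higher priority
# (P4 beats P5). All names here are illustrative assumptions.

RTT_BYTES = 4 * 1500  # assumed: 1 RTT worth of bytes (4 MTU-sized pkts)

class Message:
    def __init__(self, name, prio):
        self.name = name
        self.prio = prio          # current grant priority level
        self.outstanding = 0      # granted-but-not-yet-received bytes

def on_data_pkt(msg, active_msgs, pkt_bytes):
    """Called when a data packet of `msg` arrives at the receiver."""
    msg.outstanding -= pkt_bytes
    # Once the resuming message drops below 1 RTT of outstanding bytes,
    # bump it one level above the message that borrowed its level
    # (e.g. m1 goes from P5 to P4 while m2 stays at P5).
    if msg.outstanding < RTT_BYTES:
        lowest_active = min(m.prio for m in active_msgs)  # e.g. 5
        msg.prio = lowest_active - 1                      # e.g. 4
    return msg.prio
```

With m1 and m2 both at P5 and m1 holding exactly 1 RTT of outstanding bytes, the first arriving m1 packet triggers the bump to P4, matching the transition point in the packet train above.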
When m1 resumes, its packets will be buffered behind 1 RTT of m2 packets. So the question is: is there any way to let that queue subside before we send grants to m1 again? The answer seems to be no. To skip grants long enough for the buffer to drain, we would need to estimate the buffering at the TOR, but we can’t reconstruct the TOR’s buffering from the information available at the receiver, and if we overestimate the buffering we may skip too many grants, causing a bubble and wasting bandwidth.