Hi David,
We are almost in agreement on that… almost
I’m saying that when the library in the Electron (or whatever MCU you use) is asked to send a packet, the networking code has to “choose” how long to hold on to the data before sending it anyway as a small network packet (a choice only visible below the TCP layer). The algorithm used for this “choice” can affect the throughput of the resulting program (and even how much you are billed on an Electron!).
An example to illustrate my meaning (as the above is very terse):
A TCP client sending 80 bytes every 0.05 seconds = 20 user calls per second to TCPClient.write(array, 80).
If the TCP library sends one network packet for every request, there will be 20 network packets a second, each containing user data + media layer header + IP layer header + TCP layer header (for argument’s sake I’ll use 30 bytes of headers), so your 80 bytes becomes maybe 110 bytes * 20 = 2200 bytes sent over the network per second.
If your TCP library decides to “hang on to the data for up to 0.2 seconds” in the hope it can get more data into a single network packet - closer to the MTU - it improves the efficiency, as it combines what could have been 4 user packets:
80 * 4 = 320 data bytes + IP + TCP + media headers (30 bytes) = 350 bytes per network packet * 5 packets per second = 1750 bytes per second.
Both deliver the complete and correct 1600 bytes of user data per second; the first uses 600 bytes of overhead, the second 150, and that is without worrying about any ACK packets coming the other way.
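In case anyone wants to replay that arithmetic, here’s a tiny stand-alone C++ program using the same numbers (the 30-byte header figure is the same rough assumption as above, not a measured value):

```cpp
#include <cstdio>

int main() {
    const int userBytes    = 80;   // bytes per write call
    const int writesPerSec = 20;   // one write every 0.05 s
    const int headers      = 30;   // rough IP + TCP + media header total

    // One network packet per write call:
    int perWriteTotal    = writesPerSec * (userBytes + headers);          // 2200 B/s
    int perWriteOverhead = writesPerSec * headers;                        // 600 B/s

    // Stack coalesces 4 writes (0.2 s worth) into each packet:
    int coalescedPackets  = writesPerSec / 4;                             // 5 packets/s
    int coalescedTotal    = coalescedPackets * (4 * userBytes + headers); // 1750 B/s
    int coalescedOverhead = coalescedPackets * headers;                   // 150 B/s

    printf("per-write: %d B/s (%d B/s overhead)\n", perWriteTotal, perWriteOverhead);
    printf("coalesced: %d B/s (%d B/s overhead)\n", coalescedTotal, coalescedOverhead);
    return 0;
}
```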
The downside of this efficiency gain is that the latency increases, which can be acceptable or can be a problem for the end user. The TCPClient has no way of knowing “what is acceptable” on the Electron.
If you ask the library to send data at the same size as the MTU, there is no “choice” to make - it should send the data immediately, as it can’t get any more efficient than that.
I agree that this isn’t too important for correct data transmission - it shouldn’t make any difference to correctness whether I ask the Electron to send 2 bytes or 200000 in a single write call, but it may affect the lower-level efficiency and latency.
So you’re right, TCP doesn’t need my help:
- if you are sending a known quantity of data, sending it in the biggest chunks you can is a good approach 99% of the time.
My data is bursty and doesn’t care about latency (anything within the 10-minute mark is fine), so if I send it more efficiently that makes me happier (billed less, less battery use), hence trying to help the network algorithm and avoid sending packets smaller than the MTU when it isn’t necessary.
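To make that “helping” concrete, here’s a minimal sketch of what I mean on the application side. TCPClient.write(buffer, size) is the real Particle call, but the buffer, the constants, and all the names are my own invention (untested):

```cpp
// Sketch only - batch bursty data and send near-MTU chunks.
// Assumes Particle's TCPClient and ~30 bytes of IP+TCP+media headers.
#include "Particle.h"
#include <cstring>

const size_t CHUNK_SIZE = 1470;          // MTU (1500) minus ~30 header bytes,
                                         // NOT 1500 - see below for why

TCPClient     client;
uint8_t       pending[2 * CHUNK_SIZE];   // holding buffer for unsent data
size_t        pendingLen   = 0;
unsigned long pendingSince = 0;          // when the oldest unsent byte arrived

// Queue data instead of writing it straight to the socket.
void queueData(const uint8_t* data, size_t len) {
    if (pendingLen == 0) pendingSince = millis();
    memcpy(pending + pendingLen, data, len);   // sketch: no bounds check
    pendingLen += len;

    // Send only whole near-MTU chunks; keep any remainder for the next burst.
    while (pendingLen >= CHUNK_SIZE) {
        client.write(pending, CHUNK_SIZE);
        pendingLen -= CHUNK_SIZE;
        memmove(pending, pending + CHUNK_SIZE, pendingLen);
    }
}
```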
In practice my choice of 1500 was terrible - it almost guarantees small packets:
Assuming an MTU of 1500 for the cellular network:
1500 user bytes broken into MTU-sized network packets, with 30 bytes of headers on each packet, would yield 2 packets:
- 1470 user bytes + 30 bytes of headers,
- 30 user bytes + 30 bytes of headers
Yep, the TCP library may help fix my silly choice, but probably not, because my data is very bursty: it may be 2 or 3 seconds between calls to write, which is likely to cause a timeout and send the second (tiny) packet anyway. As I don’t care about data latency, I should hang on to the 30 bytes until the next lot of data comes in and add it all together - something like the sketch below.
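And for the “hang on to it” part, reusing pending/pendingLen from the sketch above - again just a sketch, and MAX_HOLD_MS is my own name for the 10-minute budget:

```cpp
// Companion to the sketch above: don't hold the leftover tail forever.
// 10 minutes is the latency budget I mentioned earlier.
const unsigned long MAX_HOLD_MS = 10UL * 60UL * 1000UL;

void loop() {
    if (pendingLen > 0 && millis() - pendingSince > MAX_HOLD_MS) {
        client.write(pending, pendingLen);   // send the small tail anyway
        pendingLen = 0;
    }
}
```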
Sorry, that’s a long and verbose reply to an aside. I hope it illustrates why I mention “helping” the network. It’s not hugely important, as it doesn’t change what we both agree on - if data is sent on a TCP connection:
A) it shouldn’t arrive corrupted,
B) the user needs to know whether it arrived, so they don’t send the same data again.
Whatever efficiency optimisations are put in place, TCP should not deliver unexpected or corrupt data, which is what seems to be happening in my experience - and @rickkas7’s analysis seems to indicate that falling foul of (B) is not unexpected when some kind of failure happens in the network code.
W.