TCP or UDP issues consolidation

A set of test cases is a great idea, and a number of simple ones need to be proposed, each to be run in the three modes. But the testing does rely upon understanding what TCP and UDP are supposed to do, and the differences between them. E.g. newly received UDP packets should flush out old ones if the receive buffer is full, whereas newly received TCP packets should be discarded if the receive buffer is full. UDP packet boundaries must be maintained but ordering is not; packets are allowed to be lost and/or duplicated. TCP packet boundaries are not maintained: the byte stream is presented to the user code in strict order, with no duplications and no losses.
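As a quick sanity check of the datagram-boundary claim, here is a minimal Python sketch (Linux-side, nothing Spark-specific) showing that each UDP send becomes exactly one datagram and one `recvfrom()` returns exactly one datagram, regardless of the buffer size passed in:

```python
import socket

# UDP over loopback: each sendto() becomes one datagram, and one
# recvfrom() returns exactly one datagram, however big the buffer.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))      # let the OS pick a free port
rx.settimeout(2.0)
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

for chunk in (b"abc", b"defgh"):
    tx.sendto(chunk, rx.getsockname())

first, _ = rx.recvfrom(4096)   # b"abc", not b"abcdefgh"
second, _ = rx.recvfrom(4096)  # b"defgh"
print(first, second)
tx.close()
rx.close()
```

This is exactly the behaviour Spark's UDP class must reproduce: boundaries preserved, no coalescing.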

UDP tests to be done in each of the 3 modes, MANUAL, SEMI and AUTO.

(1) UDP broadcast coded as plainly as possible, with none of the known tricks employed: send 100 500-char packets in quick succession. Note that UDP does not have flow control, so we should expect to see packets lost. But we know this test will fail after only two or three packets, because we won't be reading the packets we have ourselves sent. And NORMAL mode won't work at all because there is no cloud/no ping.
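For the Linux end of this test, a receiver that simply counts arriving datagrams is enough to measure the loss. A minimal Python sketch (the function name, port choice and timeout are my own, not from the test spec):

```python
import socket

def count_datagrams(port, expected=100, timeout=5.0, bufsize=2048):
    """Listen on `port` and count UDP datagrams until `expected` have
    arrived or `timeout` seconds pass with nothing received."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    s.settimeout(timeout)
    got = 0
    try:
        while got < expected:
            s.recvfrom(bufsize)
            got += 1
    except socket.timeout:
        pass
    finally:
        s.close()
    return got
```

Run this on the listening box, fire the 100 packets from the Spark, and anything under 100 is loss: expected with UDP, but a count of only two or three points at the Spark side, not the network.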

(2) Spark sends a UDP packet by one call to UDP.beginPacket(), several calls to UDP.write(), and one call to UDP.endPacket(). Only one datagram should be received on the recipient Linux / Windows / whatever box.
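On the receiving box, the check is that exactly one datagram arrives, whatever its size. A hedged Python sketch of such a checker (function name and parameters are mine):

```python
import socket

def datagram_sizes(port, count=1, timeout=5.0):
    """Receive up to `count` datagrams and return their individual sizes.
    A correct beginPacket()/write().../endPacket() sequence on the Spark
    should show up here as a single size entry, never several."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    s.bind(("", port))
    s.settimeout(timeout)
    sizes = []
    try:
        while len(sizes) < count:
            data, _ = s.recvfrom(4096)
            sizes.append(len(data))
    except socket.timeout:
        pass
    finally:
        s.close()
    return sizes
```

If the Spark's several write() calls come out as several datagrams, `sizes` will have more than one entry and the test fails.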

(3) Use one of the many utilities (or write one on Linux, trivially; ask me) to send UDP datagrams of variable lengths. Spark must respect the packet boundaries: one UDP.read() must read one datagram only, even though the length is unknown. This test will not work.
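The Linux-side sender really is trivial; here is one possible version in Python (the index-tag payload format is my own invention, purely so the receiving side can verify boundaries and ordering):

```python
import socket

def send_variable_datagrams(host, port, sizes):
    """Send one UDP datagram per entry in `sizes`; each payload starts
    with a 4-byte index tag so the receiver can check boundaries and
    ordering. Returns the actual payload lengths sent."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sent = []
    for i, n in enumerate(sizes):
        payload = (b"%03d:" % i) + b"A" * max(0, n - 4)
        s.sendto(payload, (host, port))
        sent.append(len(payload))
    s.close()
    return sent
```

Point it at the Spark's IP and port; each UDP.read() on the Core should then return exactly one of these payloads, with the matching length.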

(4) Test return codes. E.g. an error should be returned if an attempt is made to send a UDP packet of more than 576(?) chars: we know that the old-fashioned limit is all we can expect to work.
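For reference, the 576 figure comes from RFC 791: every IPv4 host must be able to accept a datagram of at least 576 bytes, which after the IP and UDP headers leaves 548 bytes of payload that can conservatively be assumed safe:

```python
# RFC 791: every IPv4 host must accept a datagram of at least 576 bytes.
MIN_IPV4_DATAGRAM = 576
IPV4_HEADER = 20   # minimum IPv4 header, no options
UDP_HEADER = 8
SAFE_UDP_PAYLOAD = MIN_IPV4_DATAGRAM - IPV4_HEADER - UDP_HEADER
print(SAFE_UDP_PAYLOAD)  # 548-byte payloads are the conservative limit
```

So the boundary cases to probe are payloads of 548 bytes (should succeed) and just above (may legitimately fail, but must fail with an error code, not a crash).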

And then we relax these tests, making them easier for Spark, step by step. E.g. variations on (1):

(1a) Send a packet only every 100ms.
(1b) Send shorter packets.
(1c) Set the local port to another, or read the socket to drain it and to test that it is being sent(!).
(1d) Call ping first.
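Variation (1a) only changes the pacing, so the same sort of Python harness can model the sender side; a sketch with defaults matching the original test (the 100ms interval is from (1a), the rest of the parameters are mine):

```python
import socket
import time

def send_paced(host, port, count=100, size=500, interval=0.1):
    """Send `count` datagrams of `size` bytes each, sleeping `interval`
    seconds between sends instead of blasting them back-to-back."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(count):
        s.sendto(b"B" * size, (host, port))
        if i + 1 < count:
            time.sleep(interval)
    s.close()
    return count
```

Comparing the loss rate of this paced run against the back-to-back run of (1) separates "UDP is lossy" from "the Spark stack falls over under bursts".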

etc etc

ping @ScruffR @Bdub @SomeFixItDude @Raldus @mdma

A search for UDP on the forum quickly gives these in the search results window:
Simple UDP program breaks the core
Unreliable UDP: crashes/freezes when sending at high frequency
UDP received DGRAM boundaries lost: read(), parsePacket(), available() all broken
UDP Broadcast problems with simple application
Strange UDP bug
Beehive Monitor (UDP, Sleep, Thermistor, WiFi antenna, ADC speed, RAM)
[Solved] UDP broadcast occasionally resets system
[Solved/Workaround] Spark Core can’t send UDP broadcast packets without cloud connection
UDP + red LED restart loop
UDP problems with #include "spark_disable_cloud.h"
UDP broadcast only works when connected to the cloud, why?
UDP problem’s with simple program. Router side problem?
I’m having a problem getting UDP traffic from my spark cores
UDP issues and workarounds

Numerous UDP bugs are consolidated here: https://community.spark.io/t/udp-issues-and-workarounds/4975
