UDP issues and workarounds

As noted in the Firmware Reference (http://docs.spark.io/firmware/#communication-udp), there are known issues with UDP on the Spark Core.

UDP protocol summary

This quick summary is meant to re-familiarise those who have already used UDP with its essential features. Please refer to Wikipedia, to this topic https://community.spark.io/t/udp-received-dgram-boundaries-lost-read-parsepacket-available-all-broken/3800, or to other UDP topics here.

UDP, by definition, is unreliable. Datagrams may arrive out of order, may not arrive at all, and are sometimes even duplicated. But a datagram that does arrive arrives intact, or not at all. UDP has no flow control.

Spark UDP implementation issues

Spark implements a UDP class which is supposed to be compatible with the Arduino UDP class. For whatever reasons (at least one being a CC3000 issue), it is not.

  • parsePacket() is supposed to find the first datagram in the UDP receive buffer and return its size in bytes. It does not: it returns the total number of bytes in the buffer, which may contain several datagrams. A subsequent call to parsePacket() does not return the second datagram already in the buffer - it considers the entire buffer parsed.

  • available() is supposed to return the number of bytes remaining unread of the datagram last found by parsePacket(). It does not: it returns the number of bytes in the entire receive buffer, which may contain several datagrams.

  • remoteIP() and remotePort() return the sender's IP and port of the datagram found by parsePacket(). Because parsePacket() does not respect datagram boundaries, the second and subsequent datagrams in the buffer are not parsed for this info. If those datagrams are from different senders, their senders will be unknown.

  • read() is meant to read at most the specified number of bytes of the parsePacket() datagram, or the entire datagram if it is smaller than that. It does not respect datagram boundaries and will return data spanning several datagrams if the specified number allows.

  • write() causes a datagram to be sent every time it is called, whereas it is supposed to append to an internal buffer, the entire contents of which are sent only when endPacket() is called.

  • endPacket() does nothing.

  • broadcasts do not work in MANUAL or SEMI_AUTOMATIC modes

  • broadcasts cause crashes in the sender

Workarounds

The Spark UDP implementation is still usable in restricted circumstances.

Sending is not problematic as long as you remember that one datagram is sent per write() call, not only at endPacket(). endPacket() does nothing at this time. Presumably this will be fixed so that the datagram is assembled by one or more write() calls and sent by the endPacket() call. To future-proof your code it is therefore suggested that you assemble your own datagram into your own buffer and call write() only once, for the entire datagram. Continue to call endPacket() after each write() so that the code continues to work once Spark UDP is fixed.
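
For illustration, a minimal sending sketch along those lines (the IP address, ports and payload are placeholders only):

// Assemble the whole datagram in your own buffer, send it with a single write(),
// and still call endPacket() so the code keeps working once that is fixed.
UDP Udp;
IPAddress remoteIP(192, 168, 1, 100);       // placeholder
const uint16_t remotePort = 8888;           // placeholder

void setup() {
    Udp.begin(8888);                        // local port, required before sending
}

void loop() {
    char datagram[64];
    int len = snprintf(datagram, sizeof(datagram), "reading=%d", analogRead(A0));
    Udp.beginPacket(remoteIP, remotePort);
    Udp.write((const uint8_t *)datagram, len);   // one write() == one datagram today
    Udp.endPacket();                             // currently a no-op, kept for future firmware
    delay(1000);
}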

Receiving with Spark UDP is problematic because parsePacket() does not respect datagram boundaries. To retrieve one datagram using read() you therefore need to know exactly how many bytes to read(). Unfortunately neither parsePacket() nor available() help - they each return the total bytes ready to read, and this may include multiple datagrams. The read buffer contains only the datagrams, all concatenated together with nothing separating them.

To read one datagram you must either know the fixed length of the sent datagram or read one byte at a time looking for an end-of-datagram delimiter. This requires that the sender deliberately includes such a delimiter in the datagram, which means Spark UDP cannot usually be used to receive a pre-existing UDP feed: a Windows or Linux sender will not be appending any delimiter to the sent datagrams. If fixed-length packets are sent, the issue of getting out of sync must be considered. For this reason the writer would recommend the fixed length be 1 :smile: (Note the problem is bigger than you may suspect - because ordering is not preserved, arrival is not guaranteed and duplication can happen, you cannot just ignore this problem and pretend UDP is a reliable ordered stream of 1-byte packets.)

To summarise: for Spark UDP to receive distinguishable datagrams, the sender must append a delimiter which the receiver searches for, or fixed-length packets must be used.
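
As a sketch of the delimiter approach, assuming the sender appends a newline to every datagram (the delimiter and port are just examples):

// Recover individual datagrams from the concatenated receive buffer by
// scanning byte-by-byte for a '\n' delimiter that the sender appends.
UDP Udp;
char datagram[128];
unsigned int pos = 0;

void setup() {
    Udp.begin(8888);                       // example local port
}

void loop() {
    if (Udp.parsePacket() > 0) {           // total bytes buffered, possibly several datagrams
        while (Udp.available() > 0) {
            int c = Udp.read();            // one byte at a time
            if (c < 0) break;
            if (c == '\n' || pos == sizeof(datagram) - 1) {
                datagram[pos] = '\0';      // one complete datagram recovered
                // ... process datagram here ...
                pos = 0;
            } else {
                datagram[pos++] = (char)c;
            }
        }
    }
}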

Because parsePacket() sets what is returned by remoteIP() and remotePort(), and parsePacket() does not "see" datagram boundaries, if the 2nd datagram in the receive buffer is from a different IP or port this will not be discernible. Therefore a typical class of UDP server apps cannot easily be implemented using Spark UDP: those apps which must send a response to the sender of each received datagram will not work where there are multiple senders.

Broadcast buffer issue: on the sender, prevent buffer overflow crashes by setting the local port to one different from the remote port. Or, read your own broadcasts!

Broadcasts when there is no Cloud: ping something first, then broadcasts work. The easiest is to ping the gateway.
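
A rough sketch of both broadcast tips together; it assumes the WiFi connection is already up and that WiFi.ping() and WiFi.gatewayIP() are available in your firmware version, and the ports and broadcast address are examples only:

// Local port 8887 deliberately differs from the destination port 8888 so the
// core does not buffer (and crash on) its own broadcasts; ping the gateway
// first so broadcasts also work without the Cloud.
UDP Udp;
IPAddress broadcastIP(255, 255, 255, 255);   // example broadcast address

void setup() {
    WiFi.ping(WiFi.gatewayIP());             // "ping something first"
    Udp.begin(8887);                         // local port != remote port
}

void loop() {
    Udp.beginPacket(broadcastIP, 8888);
    Udp.write((const uint8_t *)"hello", 5);
    Udp.endPacket();
    delay(1000);
}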

@psb777, I sent you a PM regarding my edits. :smile:

Can you please e-mail me my original? I think you take the BE BOLD advice too much to heart :smile:

@psb777, I donā€™t have your original to email to you unfortunately.

I am not sure why you refer to a tutorial since this is clearly titled "issues and workarounds"! Don't confuse the topic :smile:


@BDub, just learned something! Thanks!

@psb777, I will be sending the text momentarily.

I'm sorry, I shouldn't be using the forum to compose and develop the posting. Thanks for your contributions. I hope you don't mind the changes I've made to them. I was in mid-flow and it all changed under me.


@psb777, if you don't mind, I will continue my edits where I believe they may better convey the message you are trying to put across. This includes the title, as the use of "broken" obscures the fact that many users have very successfully used UDP. So I ask that you remove that word from the title without me editing each time. The fact that it has "issues" is clear enough. :smile:


Has there been any update on fixing the UDP problems? I would be happy to stalk a link if anyone can provide one. I have been holding off on projects waiting for UDP to work correctly. This is part of the core functionality of the Spark Core and I would expect more movement on this issue. Aside from the functions not working correctly, using the UDP protocol at high frequency causes CFOD and CBOD. At one point I started thinking about ways to have a separate watchdog MCU watching the core to reset it, like in other posts I have seen. That is ridiculous to have to do. This post is almost a month old and is a re-cap of many other posts that have been going on for some time. At the moment (albeit a very long moment) core owners that have any substantial network activity can't rely on their core executing the user loop. I love seeing the projects hit Twitter etc, but I also wonder how many times someone is sighing and hitting the reset button.

The product is a great one and has awesome potential. Please fix :smile: Sorry to bring this up but I feel like this issue is going nowhere.


Regarding reading over UDP - parsePacket() et al, this is an issue with the CC3000's handling of recv_from(), where it returns all the data available after stripping the datagram headers. Unfortunately, TI say there will be no fix. More details here:

http://e2e.ti.com/support/wireless_connectivity/f/851/t/340115.aspx and
http://e2e.ti.com/support/wireless_connectivity/f/851/t/298482.aspx

Integrating with existing UDP services may be problematic. Thinking aloud - a possible workaround is to write a small UDP proxy that wraps each received datagram in an additional envelope containing length, source etc. On the Spark, you can then parse the envelope header and know how long each datagram is and from where it originated.
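
As a sketch of what the Spark side of such a scheme might look like (the 8-byte envelope layout here is purely hypothetical, and the PC-side proxy itself is not shown):

// Hypothetical envelope the proxy prepends to each forwarded datagram:
//   2 bytes payload length, 4 bytes original source IP, 2 bytes original source port.
// Knowing the length lets the Spark split the concatenated receive buffer correctly.
struct Envelope {
    uint16_t length;
    uint8_t  srcIP[4];
    uint16_t srcPort;
};

bool readEnvelope(UDP &udp, Envelope &env) {
    if (udp.available() < 8) return false;      // wait for a complete header
    int hi = udp.read();
    int lo = udp.read();
    env.length = (hi << 8) | lo;
    for (int i = 0; i < 4; i++) env.srcIP[i] = udp.read();
    hi = udp.read();
    lo = udp.read();
    env.srcPort = (hi << 8) | lo;
    return true;                                 // now read() exactly env.length payload bytes
}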


Wow, I followed your links and what a disappointment! Ugh… TI not fixing that CC3000 issue is crazy and so many people are going to ditch that chip. A proxy for the UDP packets may be the only thing you can do. What a waste, though, that an additional hop has to be made.

What about the CFOD or CBOD when sending rapidly? Anything to make that work? I can't keep my core alive for long periods of time. That is also frustrating. I hope some advancements are being made to keep the user loop running even when WiFi trips up.

Thanks

Yes, itā€™s not a great situation, but I think we can make it better.

The CFOD/CBOD - this may be because the host maintains a free packet buffer count, which it updates when it sends a packet, or when the CC3000 sends a message about packets it has sent. However, this isn't properly guarded - it's updated both on the main thread and also in an interrupt, leading to the typical issues associated with concurrent updates to a shared value.
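
Not the actual host driver code, just a minimal illustration of the kind of guard such an update needs:

// A counter touched from both loop context and an interrupt: without the
// guard the read-modify-write can be interrupted half-way and an update lost.
volatile int freeBuffers = 6;

void onBufferFreedIsr() {        // runs in interrupt context
    freeBuffers++;
}

void takeBuffer() {              // runs on the main thread
    noInterrupts();              // protect the shared counter from the ISR
    freeBuffers--;
    interrupts();
}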

I have a proposed fix for this, but don't have any good test cases to stress the problem. If you have a test case I can use, I can investigate further. But I'm soon on vacation until the end of the month so I'm afraid I can't look at it until August.

@mdma if you get a chance, the code in the following link causes the core to die every time.

If you change the delay to under 200 ms it dies even faster. The longest I have seen the code run is a couple of hours. If you follow along you see that disabling the cloud helps, but the core still dies. I did not update the thread to reflect this information; I felt like the discussion was going nowhere fast.

At the time of the original post on the other thread, you get two possible outcomes with the above code.

  1. CFOD
  2. The core appears to be alive and the cloud REST functions still poll the device; however, the user loop is not running. You can verify this by using a Spark.variable and the D7 LED. The variable stops updating well under the maximum value an int can hold and the D7 LED no longer blinks.

I do not know if any cloud code updates have been done since then. I was also running the latest TI firmware at the time as well.

Thanks Again

Time to call it broken, I think.

Does this influence DNS hostname resolution?

I have not seen DNS be affected, but I guess it is possible. There have been issues with folks that have a complicated or slow DNS setup. If your wireless access point/router is gatewaying for you, normally everything is fine. One satellite internet user has written his own DNS lookup with a much longer timeout since the default never worked for him.

Writing the proxy to do as described would be impossible, I thought: the boundaries of received UDP datagrams cannot be determined. Hmm, many weeks later I understand now: the proxy would not be on a Spark, so it would recognise the packet boundaries, insert an end-of-packet marker and re-transmit; the Spark can then search for the marker.

UDP.parsePacket and UDP.read are broken. The Spark will sometimes receive incoming packets (AAA), (BBBBB) as (AAABBBBB). I need to be able to rely on the return from parsePacket to tell me the length of the next unread packet.

It's an embarrassment for a serious library to get the basic UDP fundamentals wrong. This will give newbies a skewed idea of UDP and will frustrate people who know what they are doing. Please fix this.


UDP.write and UDP.endPacket are broken. According to the Spark docs, and common sense, UDP.write provides a buffer for you so that you don't have to do your own buffering. Then when you call UDP.endPacket, everything you have written gets sent as a single packet.

Instead, UDP.write sends a packet every time it is called. UDP.endPacket does not function.

Since UDP inherits from Stream, this is especially problematic. Some of the write methods call UDP.write many times in the process of writing a single piece of data, but then UDP.write fires each tiny piece off in its own individual packet.

Can we expect this to ever be resolved on the Spark Core?


Here is the work-around for the write/endPacket issue. Try it; it works great!

The read and parsePacket issue is related to the TI chip and is not fixable by Spark. That is one of many reasons they are moving to a different WiFi chip for the Photon. Encode the packet length in the packet or, if you can't, build a parser that can figure it out. If you are using a well-known network service over UDP, we can probably help you figure it out.

//----- UDP + overriding the inappropriate UDP functions of the Spark Core (REQUIRED !)
class myUDP : public UDP {
private :
	uint8_t myBuffer[128];   // outgoing datagram is assembled here
	int offset = 0;
public :
	virtual int beginPacket(IPAddress ip, uint16_t port){
		offset = 0;                           // start a fresh datagram
		return UDP::beginPacket(ip, port);
	};
	virtual int endPacket(){
		return UDP::write(myBuffer, offset);  // send the whole buffer as one datagram
	};
	virtual size_t write(uint8_t buffer) {
		write(&buffer, 1);
		return 1;
	}
	virtual size_t write(const uint8_t *buffer, size_t size) {
		if (offset + size > sizeof(myBuffer))   // guard against overrunning the 128-byte buffer
			size = sizeof(myBuffer) - offset;
		memcpy(&myBuffer[offset], buffer, size);
		offset += size;
		return size;
	}
};

myUDP Udp;

P.S. You will get better answers if you stick to one thread. I see @Moors7 tidied up for you a bit; he beat me to it again!
