As noted in the Firmware Reference (http://docs.spark.io/firmware/#communication-udp), there are issues with UDP.
UDP protocol summary
This quick summary is meant to re-familiarise those who have already used UDP with its essential features. Please refer to Wikipedia, to this forum topic https://community.spark.io/t/udp-received-dgram-boundaries-lost-read-parsepacket-available-all-broken/3800, and to other UDP topics here.
UDP is, by definition, unreliable. Datagrams may arrive out of order, may not arrive at all, and are sometimes even duplicated. However, a datagram arrives intact or not at all. UDP has no flow control.
Spark UDP implementation issues
Spark implements a UDP class which is supposedly compatible with the Arduino UDP class. For whatever reason (at least one cause may be a CC3000 issue), it is not.
parsePacket() is supposed to find the first datagram in the UDP receive buffer and return its size in bytes. It does not: it returns the total number of bytes in the buffer, which may contain several datagrams. A subsequent call to parsePacket() does not return the second datagram already in the buffer; it considers the entire buffer processed, parsed.
available() is supposed to return the number of bytes remaining unread of the datagram last found by parsePacket(). It does not: it returns the number of bytes in the entire receive buffer, which may contain several datagrams.
remoteIP() and remotePort() return the sender’s IP address and port for the datagram found by parsePacket(). Because parsePacket() does not respect datagram boundaries, the second and subsequent datagrams in the buffer are not parsed for this information; if those datagrams are from different senders, the sender will be unknown.
read() is meant to read at most the specified number of bytes of the parsePacket() datagram, or the entire datagram if it is smaller than that. It does not respect datagram boundaries and will return the entire receive buffer, spanning several datagrams, if the specified number of bytes allows.
write() causes a datagram to be sent on every call, whereas it is supposed to append to an internal buffer whose entire contents are sent only when endPacket() is called.
endPacket() does nothing.
Broadcasts do not work in MANUAL or SEMI_AUTOMATIC modes.
Broadcasts cause crashes in the sender.
The Spark UDP implementation is still usable in restricted circumstances.
Sending is not problematic as long as you remember that one datagram is sent per write() call, not only at endPacket(); endPacket() does nothing at this time. Presumably this will be fixed so that the datagram is assembled by one or more write() calls and sent by the endPacket() call. To future-proof your code, it is therefore suggested that you assemble the datagram in your own buffer and call write() only once, for the entire datagram. Continue to call endPacket() after each write() so that the code keeps working once Spark UDP is fixed.
Receiving with Spark UDP is problematic because parsePacket() does not respect datagram boundaries. To retrieve one datagram using read() you therefore need to know exactly how many bytes to read(). Unfortunately neither parsePacket() nor available() helps: each returns the total bytes ready to read, which may include multiple datagrams. The read buffer contains only the datagrams, all concatenated together with nothing separating them. To read one datagram you must either know the fixed length of the sent datagram or read one byte at a time looking for an end-of-datagram delimiter. This requires that the sender deliberately include such a delimiter in the datagram, which means Spark UDP cannot usually be used to receive a pre-existing UDP feed: the Windows or Linux sender will not be appending any delimiter to the sent datagrams. If fixed-length packets are sent, the issue of getting out of sync must be considered; for this reason the writer would recommend a fixed length of 1 byte. (Note the problem is bigger than you may suspect: because ordering is not preserved, arrival is not guaranteed and duplication can happen, you cannot just ignore this problem and pretend UDP is a reliable ordered stream of 1-byte packets.)
To summarise the above paragraph: For Spark UDP to receive distinguishable datagrams, datagram delimiters must be used by the sender and searched for, or fixed length packets must be used.
Because parsePacket() sets what is returned by remoteIP() and remotePort(), and parsePacket() does not “see” datagram boundaries, if the 2nd datagram in the receive buffer is from a different IP address or port this will not be discernible. A typical class of UDP server apps therefore cannot easily be implemented using Spark UDP: apps which must send a response to the sender of each received datagram will not work where there are multiple senders.
Broadcast buffer issue: on the sender, prevent buffer-overflow crashes by setting the local port to one different from the remote port. Or read your own broadcasts!
Broadcasts when there is no Cloud connection: ping something first, then broadcasts work. It is easiest to ping the gateway.