I'm not home and I'm not set up for local compiles. But what you post seems to show significant improvement over what I have experienced previously.
How are you sending the packets? Are you using netcat or have you written a C program? Can you share that as well, please?
Although I can't attest to the stability of UDP transmission in the latest TI patch, the TCP (client) communication hasn't exactly improved much. Closing a TCP socket still takes upwards of 3.6 seconds at random intervals. Buffer starvation is still present and hangs up the entire CC3000. I'm also seeing dropped TCP packets, albeit infrequently.
I also noted that the lower the throughput, the more stable the CC3000 was. At most I was sending 32 packets of 600 bytes per second over TCP.
I'm using an STM32F407 with modified Spark firmware, so my setup differs a bit from yours.
@kennethlimcp: When will this latest TI firmware with the set of fixes you have been testing be made available to those, like me (and the generally intended population of Spark Core users), who compile in the Cloud?
No, the CC3000 updates are not applied as part of regular firmware compiles. You'll need to compile the cc3000 patch code locally, or ask someone for a binary to flash to your core.
@kennethlimcp Here is some sample code that will fail almost immediately.
If you want code that is not as brutal (in microcontroller terms), let me know. I have a client/server I sent over to @dave, but it needs a DHT22 sensor and a little more setup.
Also testing a bit on the Core to validate my test case. My test sends two packets: 42 bytes, then 2 bytes. I'm seeing packet boundaries being properly preserved on the Core with service pack 1.28. (Some packets are dropped, but the lengths are always 42 or 2.) So that's good!
FWIW, I'm having the same UDP issue as @steelydev on my Photon, same setup except I'm using port 8888. I'm on the develop branch. I'm in MANUAL mode, and have also tried calling Spark.process(), but Udp.parsePacket() always seems to return zero even after sending packets to the Photon (confirmed that packets are being sent to the correct IP and port with Wireshark).
@steelydev I'm building the code locally on my machine and flashing straight to the Photon (instead of using Particle Build and flashing from the cloud). This process is described in the readme file in the firmware GitHub repository (https://github.com/spark/firmware).
'develop' is a git branch name, and is typically the branch into which other feature branches are eventually merged. In other words, develop is usually where you'd find the most up-to-date (though not necessarily release-ready) code.