TCP or UDP issues consolidation

I have yet to apply the latest patch and am running CC3000 firmware version 1.29 as of the last deep update.

Using the same code that you provided one post above, with two changes from write to print for the length serial output. :smiley:

Here's the output from your code, running with a 3 s delay and sending at a 1 s interval:

packet length:76
read length:76
12345678901234567890123456789012345678901234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ
packet length:76
read length:76
12345678901234567890123456789012345678901234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ
packet length:76
read length:76
12345678901234567890123456789012345678901234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ

I'm pretty sure they are working fine in my testing.

The firmware branch: https://github.com/spark/firmware/tree/feature/hal-cc3000-update
The patch file: https://github.com/spark/cc3000-patch-programmer/tree/ti-patch-1.14

You need to flash the patch and compile code locally to test, as there are host driver changes.

I'm not home and I'm not set up for local compiles. But what you posted seems to show a significant improvement over what I have experienced previously.

How are you sending the packets? Are you using netcat or have you written a C program? Can you share that as well, please?

I'm using the software from http://packetsender.com/ on my Mac, but can also use other methods if there are any recommendations.
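
For anyone who would rather script it, a minimal POSIX UDP sender along these lines should reproduce the 1 s interval test above. The destination IP and port are placeholders for the Core's address, and the 76-byte payload matches the output shown earlier:

// Minimal POSIX UDP sender; build with: g++ -o udpsend udpsend.cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    // 76-byte payload, matching the test output above
    const char *payload =
        "12345678901234567890123456789012345678901234567890"
        "ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(51515);                        // placeholder port
    inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr);  // placeholder Core IP

    for (;;) {
        sendto(sock, payload, strlen(payload), 0,
               (sockaddr *)&dest, sizeof(dest));
        sleep(1);  // 1 s send interval, as in the test above
    }
}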

Although I can't attest to the stability of UDP transmission in the latest TI patch, TCP (client) communication hasn't exactly improved much. Closing a TCP socket still takes upwards of 3.6 seconds at random intervals. Buffer starvation is still present and hangs the entire CC3000. I'm also seeing dropped TCP packets, albeit infrequently.

I also noted that the lower the throughput, the more stable the CC3000 was. At most I was sending out 32 packets of 600 bytes per second over TCP.
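
For reference, a minimal sketch of that kind of paced TCP send loop on the Core (Wiring) is below; the server address, port, and payload contents are placeholders, not my actual test values:

// A minimal sketch of a paced TCP send loop on the Core (Wiring).
// The server address, port, and payload contents are placeholders.
TCPClient client;
IPAddress server(192, 168, 1, 100);   // placeholder server address
uint8_t packet[600];                  // 600-byte payload, as in the test above

void setup() {
    memset(packet, 'A', sizeof(packet));
    client.connect(server, 9000);     // placeholder port
}

void loop() {
    if (client.connected()) {
        client.write(packet, sizeof(packet));
        delay(31);  // roughly 32 packets per second; pushing harder destabilized the CC3000
    }
}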

I'm using an STM32F407 with modified Spark firmware, so my setup differs a bit from yours.

@kennethlimcp: When will this latest TI firmware with the set of fixes you have been testing be made available to those, like me (and the generally intended population of Spark Core users), who compile in the Cloud?

I guess this has to be answered by the Spark firmware team :wink:

It's already available: the v1.14 TI firmware is on the cc3000-patch-programmer repo.

Does that mean I will get that update automatically if I compile in the Cloud? And up(down?)load/burn from there? Using the IDE?

No, the CC3000 updates are not applied as part of regular firmware compiles. You'll need to compile the cc3000 patch code locally, or ask someone for a binary to flash to your core.

https://community.spark.io/t/hard-fault-caused-by-the-tcpserver-example-from-the-spark-docs-and-a-simple-python-client/8764 can be a pretty simple test case.

I tried the new TI patch with the example code from the thread I started and it still causes a hard fault within seconds :frowning:
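
For context, the example from that thread is essentially the TCP server from the Spark docs: an echo server on port 23. My recollection of it is roughly the following (the exact listing lives in the linked thread):

// Roughly the TCPServer example from the docs: echo bytes from a
// connected client back out on port 23 (telnet).
TCPServer server = TCPServer(23);
TCPClient client;

void setup() {
    server.begin();   // start listening for clients
}

void loop() {
    if (client.connected()) {
        // echo everything the client sends back to it
        while (client.available()) {
            server.write(client.read());
        }
    } else {
        // check for a new incoming connection
        client = server.available();
    }
}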

My plan for this:

1.) Set up my ST-Link/V2 with the core, get GDB running, and debug

2.) Test the latest CC3000 patch pulled into Spark-cli V 0.4.92

3.) Come up with some example code to replace the existing TCP example code

4.) Fix some related issues like SYSTEM_MODE behavior (see the sketch just below)
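
On point 4, here is a minimal MANUAL-mode sketch of the SYSTEM_MODE behavior in question; the exact call sequence on this firmware branch is an assumption on my part:

// A minimal MANUAL-mode sketch; the exact semantics on this firmware
// branch are an assumption.
SYSTEM_MODE(MANUAL);     // user code starts immediately; no automatic cloud connection

void setup() {
    WiFi.on();
    WiFi.connect();      // bring up the CC3000 ourselves
    Spark.connect();     // then open the cloud connection explicitly
}

void loop() {
    if (Spark.connected()) {
        Spark.process(); // service the cloud connection manually
    }
}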

Will update once I have started :smile:

@kennethlimcp Here is some sample code that will fail almost immediately.

If you want code that is not as brutal (in microcontroller terms), let me know. I have a client/server pair that I sent over to @dave, but it needs a DHT22 sensor and a little more setup.

I don't have a DHT22 sensor, but drop me a PM or something just so that I have access to them when I perform testing.

Thanks!

Hope to make some progress now that I have a month's worth of dev time :wink:

@mtnscott,

time to revive this old thread…

Here's the slightly modified version of your code that I used for testing: https://gist.github.com/kennethlimcp/2a5df2900481ff390537

Also, here's a screenshot. I tried sleep(0.5), but you start to lose data quickly. Another change I made was to use a newer CC3000 patch (1.3.2).

Take note that we have yet to test with improvements to the driver code that account for other issues; those might increase stability further.

However, this huge improvement might be useful to you, so I thought I should reach out early!

UDP issue on the Photon. This code runs on the Core but not on the Photon: the print is never executed.

unsigned int localPort = 51515;   // UDP port to listen on
UDP Udp;

void setup() {
  Serial.begin(115200);

  Udp.begin(localPort);           // start listening on the port
}

void loop() {
  int len;

  // parsePacket() returns the size of the next received datagram, or 0 if none
  if ((len = Udp.parsePacket()) > 0) {
    Serial.print("parsePacket() len="); Serial.println(len);
    Udp.flush();                  // discard the unread payload
  }
}
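
To drive this sketch, anything that sends UDP datagrams to port 51515 should do: the POSIX sender earlier in the thread, netcat, or Packet Sender. On a working build, each received datagram should produce one parsePacket() line on the serial console.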

I'm looking into the Photon issues.

Also testing a bit on the Core to validate my test case. My test sends two packets: 42 bytes, then 2 bytes. I'm seeing packet boundaries being properly preserved on the Core with service pack 1.28. (Some packets are dropped, but the lengths are always 42 or 2.) So that's good!
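
That boundary test is easy to reproduce from the host side; here is a minimal sketch, reusing the placeholder address and port from the sender earlier in the thread:

// Two-datagram boundary test (POSIX); the destination address and port
// are the same placeholders as in the sender above.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstring>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(51515);
    inet_pton(AF_INET, "192.168.1.50", &dest.sin_addr);

    char a[42], b[2];
    memset(a, 'A', sizeof(a));
    memset(b, 'B', sizeof(b));

    // if datagram boundaries are preserved, the receiver's parsePacket()
    // should only ever report lengths of 42 or 2
    sendto(sock, a, sizeof(a), 0, (sockaddr *)&dest, sizeof(dest));
    sendto(sock, b, sizeof(b), 0, (sockaddr *)&dest, sizeof(dest));
    close(sock);
}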

FWIW, I'm having the same UDP issue as @steelydev on my Photon, with the same setup except that I'm using port 8888. I'm on the develop branch. I'm in MANUAL mode and have also tried calling Spark.process(), but Udp.parsePacket() always seems to return zero even after sending packets to the Photon (confirmed with Wireshark that the packets are going to the correct IP and port).

@mdma what is service pack 1.28?

@bmichini what is the develop branch?

@steelydev I'm building the code locally on my machine and flashing straight to the Photon (instead of using Particle Build and flashing from the cloud). This process is described in the readme file in the firmware GitHub repository (https://github.com/spark/firmware).

ā€œdevelopā€ is a git repository branch name, and is typically the tip of the repo into which other feature branches are eventually merged. In other words, develop is usually where youā€™d find the most up-to-date production code.