TCPClient read() data missed

Hi Brian,
Oh, so the comments in the post are referring to the host server side? That's bad, as I have no control over it :frowning:
When I test "Login | Weather Underground" from my PC, it works fine.
Any idea how I might report this to the site?

When I set the SO_SNDBUF size to anything under 2240, like so: "cat /dev/zero | nc -O 2240 192.168.1.109 1111", the problem goes away.

That's why the bug is rightly said to be in the CC3000. Sure, a workaround along the lines @bko suggests is possible at the other end (or would be, if you had control of it), but the proper fix is the one @zachary describes.
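
For anyone who does control the sending side, that workaround boils down to shrinking the server's TCP send buffer below the CC3000's apparent limit, for example with setsockopt(). A minimal C sketch, assuming sock is an already-created TCP socket (2048 is just an example value under the 2240 threshold quoted above):

#include <sys/socket.h>
#include <stdio.h>

/* Workaround sketch (server side): shrink the TCP send buffer so the
 * server emits smaller bursts that the CC3000 can absorb. 'sock' is an
 * already-created TCP socket; call this before sending data. */
int limit_send_buffer(int sock)
{
    int sndbuf = 2048;   /* example value below the ~2240-byte threshold */
    if (setsockopt(sock, SOL_SOCKET, SO_SNDBUF, &sndbuf, sizeof(sndbuf)) != 0) {
        perror("setsockopt(SO_SNDBUF)");
        return -1;
    }
    return 0;
}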

@bko, @Dilbert, @psb777, @mdma: So whether or not the server is being nice in terms of how it sends packets, from what I can tell there is definitely a bug. I stopped being able to send stories to the Choosatron over sockets, with transfers dying at exactly one packet, and I was unable to get it to work even with control over the server side. Spark Cores I have that have NOT been updated to the CC3000 firmware that the Spark CLI and web IDE provide still work for me. After updating, they break.

Last night I decided to flash different versions of the CC3000 firmware to confirm (from Spark's cc3000-patch-programmer GitHub repo), and after flashing the ti-patch-1.14 branch they worked again, so it appears the fix is there. I've already been in touch with @zachary and the team, so they know what is up and have my data.

So in the meantime you can manually flash the CC3000 firmware and try that out! Hope this helps / solves the problem. :slight_smile:


Thanks for testing that patch @jerrytron! Would love to have input from testing by others as well.

I’m a bit concerned that you are, I assume, running the 1.14 TI patch along with an older host driver in firmware. See TI’s warning at the top of the release notes.

Integrating the host driver is the job we haven’t done yet and that will take time. Pull requests welcome as always!

An experimental integration of the 1.14 host driver is here: https://github.com/spark/firmware/tree/feature/hal-cc3000-update

I posted these to the Elites for early testing, although everyone is welcome to test, but please keep in mind that it's experimental and may :bomb: or :fire:!


@mdma has there been much testing/feedback on this?

Hi @Hootie81

@kennethlimcp has been testing this some but I think the results are mixed.


@bko, I believe @kennethlimcp was testing the new v1.32 CC3000 firmware but not the new host driver. If this branch is meant to work with v1.32, then Kenneth and I will need to change how we compile his test app, since right now we are using the cloud compile server.

@mdma, is the 1.14 host driver branch the one to use with v1.32 of the cc3000 firmware or is a new driver not written yet?


Yes, the 1.14 driver and the 1.32 service pack belong together. (Although I've not seen any dire consequences of using the 1.32 service pack with the existing host driver.)


If I flash the new CC3000 firmware and host driver, does that completely remediate the TCP buffering problem that's causing large files to time out?

Will that be part of a new deep update at some point? Or fixed on the photon?

Thanks!
Ryan

I’d be happy to test it for you, if you can get me a program that I can run on my core.

Next sprint I will be doing more network testing on both the Core and the Photon, so I'm happy for any input.

I’m out of the country until Monday, but will post a repo that shows the problem when I get back.

Thanks!
Ryan

I gave up and just created my own library to download large files in HTTP chunks.
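
In case it helps anyone, one way to do this kind of chunking is to request the file a slice at a time with HTTP Range headers, so no single response overruns the CC3000. A rough sketch of one chunk request from the Core (hypothetical host and path; reading the response and looping over offsets is left out):

// Sketch only: request one CHUNK_SIZE slice of a file with an HTTP Range
// header. The caller still has to read the response, skip the headers and
// advance 'offset' until the whole file has been fetched.
#define CHUNK_SIZE 512

TCPClient client;

bool requestChunk(const char* host, const char* path, unsigned long offset) {
    if (!client.connect(host, 80)) {
        return false;
    }
    client.print("GET ");
    client.print(path);
    client.println(" HTTP/1.1");
    client.print("Host: ");
    client.println(host);
    client.print("Range: bytes=");
    client.print(offset);
    client.print("-");
    client.println(offset + CHUNK_SIZE - 1);
    client.println("Connection: close");
    client.println();
    return true;
}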


I would like to know: is there an updated TI driver which may fix the missing-data problem?

You can try

particle flash --usb cc3000_1_14

This will update the Core to the latest TI driver. We are still testing it, which is why it's not rolled out as a deep update yet.

Bad news: I tried this today but it did not work. In one case it was worse than before, with the LED flashing red. I tried reloading the previous version, but that does not seem to be working either.

particle flash --usb cc3000
particle flash --usb deep_update_2014_06

Is there a way to tell from code which version I have?

The flashing red may be an indication of a bug in your application. What SOS code is it displaying? Generally, the driver performance is much improved with the latest service pack.
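
On the version question: I'm not certain the TI host driver's nvmem.h is reachable from application code on the Core, but if it is, nvmem_read_sp_version() reports the installed service pack version as two bytes. Treat the following as a sketch rather than a supported API:

// Sketch: read the CC3000 service pack version via the TI host driver,
// assuming nvmem.h from the host driver is visible to the application.
#include "nvmem.h"

void printCC3000Version() {
    unsigned char patchVer[2] = {0, 0};
    nvmem_read_sp_version(patchVer);      // e.g. {1, 28} or {1, 32}
    Serial.print("CC3000 service pack: ");
    Serial.print(patchVer[0]);
    Serial.print(".");
    Serial.println(patchVer[1]);
}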

2 different issues.

First, I know the cause of the SOS code: I am using the flashee library. If I comment out the code that writes data to flash, the error goes away. However, this code was working just last week, and I had not changed anything in the application while updating the CC3000 and deep_update_2014_06 code.

So I suspect the flashee region overlaps with the Core firmware.

FlashDevice *record;
struct WEATHER_REPORT report;

record = Devices::createAddressErase();                        // OK

record->write(&report, 430, sizeof(struct WEATHER_REPORT));    // OK
record->write(&report, 480, sizeof(struct WEATHER_REPORT));    // ERROR

sizeof(struct WEATHER_REPORT) is 56 bytes.
Further testing shows that certain addresses cause the problem. What function should I use to define the valid flashee address range to use? Something like the sketch below is what I have in mind.
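
Roughly this, if I understand the flashee API correctly (I'm assuming createAddressErase() accepts an explicit start/end address and that length() reports the usable size, so please correct me if the calls are different):

// Sketch of what I am after: reserve a specific region of external flash
// and check its usable size before writing.
#include "flashee-eeprom.h"
using namespace Flashee;

FlashDevice* record;

void setupStorage() {
    // reserve an explicit region rather than taking the default
    record = Devices::createAddressErase(0, 4096 * 4);
    Serial.print("usable bytes: ");
    Serial.println(record->length());   // keep address + sizeof(report) below this
}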

Second, I tried the cc3000_1_14 update and it worked one time. Then, because of the crash, I reloaded the code a few times and now I cannot get it to work again. This is why I ask how to check the firmware version on the system.

@Dilbert, could you read the weather data on a server and pass it to your Core using a cloud event? A standard $5 VPS has an order of magnitude more power than your Core, so it makes more sense to do the processing there.
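
For example (sketch only; "weather/update" is a made-up event name, and the server would publish it through the cloud API), the Core side could just subscribe and react:

// Sketch: let a server do the heavy lifting and push the results to the
// Core as a cloud event. "weather/update" is a hypothetical event name.
void weatherHandler(const char* event, const char* data) {
    // data could be something like "temp=21.5,humidity=60"
    Serial.println(data);
}

void setup() {
    Serial.begin(9600);
    Spark.subscribe("weather/update", weatherHandler);
}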

In my design, each device takes local weather info plus its previous program history and runs an algorithm, so each device ends up with different settings.
I am trying to make the device able to function without depending on a cloud server after configuration.
Of course, a weather server could be used too.