Max Speed of TCP Data (and proper coding methods)

Hey everyone,

I feel like I am right on the edge of a solution here. I am trying to create a simple POST data stream that operates at reasonable speeds. Unfortunately, my current code only manages one POST every 4 seconds (very, very slow).

I’d like to share some of my code, because I think people will find it helpful

Github Gist of Code

The good news

  • This code is stable! I had a lot of problems with calls to client.connect(), etc. causing things to hang (the heartbeat function wouldn’t work)
  • This code sends data at a reliable speed
  • This is because of my manage_connection function, which keeps track of when it should be sending data and allows for easier threading

The bad news
That speed is only once per four seconds. Is there something I am missing?

What I have tried

  • A million variations on calling client.stop(). I’ve waited and then stopped, I’ve done a thousand things
  • You really have to call Spark.disconnect(). This makes the whole thing much more stable (in my experience at least)
  • An interesting point: the extra \r\n at the end causes the server to respond with a 400: Bad Request. Without it, I can only send one piece of data every 12 seconds (when the server reports a timeout)! This leads me to believe that there has to be a way to make this go faster!
  • Am I not sending some kind of “I’m done” character?
  • I’ve tried sending things before client.connected() returns false, i.e. I’ve waited 400 ms and sent another set of data. All this ever does is hang the system (and I still get exactly one signal per 4 seconds)
  • As you might have noticed, it is set up so that you can easily have several client objects (just do ClientStruct clients[3] and you are good to go, since you only pass around pointers). I structured it this way because I tried using multiple clients – and it failed horribly (don’t ever do this)
  • Is there a way I could do multiple clients? This code is all setup to handle them! :smiley:
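For what it’s worth, the ClientStruct clients[3] layout described above can be sketched in plain C++ like this; the fields, the FakeTCPClient stand-in, and the ready_to_send helper are all my assumptions for illustration, not the actual gist code:

```cpp
#include <cstdint>

// Hypothetical stand-in for the Spark TCPClient; on real firmware this
// would be the TCPClient class from the Spark/Particle API.
struct FakeTCPClient {
    bool connected = false;
};

// One entry per connection, as in "ClientStruct clients[3]".
struct ClientStruct {
    FakeTCPClient client;
    uint32_t last_send_ms = 0;   // when this client last transmitted
    uint32_t interval_ms  = 80;  // minimum gap between sends
};

// Pass a pointer so one function can service any entry in the array.
bool ready_to_send(const ClientStruct* cl, uint32_t now_ms) {
    return cl->client.connected &&
           (uint32_t)(now_ms - cl->last_send_ms) >= cl->interval_ms;
}
```

The manage-by-pointer idea is what lets the same bookkeeping function drive one client or several without duplicating code.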

Hi @CloudformDesign

I would avoid using the Arduino String class. I know they are convenient but eventually you will run out of RAM. Statically allocated C strings are just safer.
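A minimal sketch of that suggestion, using snprintf into a statically allocated buffer instead of the String class; the function name, buffer size, and exact header set are illustrative assumptions, not firmware API:

```cpp
#include <cstdio>
#include <cstring>
#include <cstddef>

// Build the POST request into a caller-supplied static buffer instead of
// a heap-backed Arduino String. Returns the request length, or -1 if the
// buffer was too small.
int build_post_request(char* buf, size_t bufsize,
                       const char* endpoint, const char* host,
                       const char* body) {
    int n = snprintf(buf, bufsize,
                     "POST %s HTTP/1.1\r\n"
                     "Host: %s\r\n"
                     "Content-Length: %u\r\n"
                     "Content-Type: application/json\r\n"
                     "Connection: close\r\n"
                     "\r\n"
                     "%s",
                     endpoint, host, (unsigned)strlen(body), body);
    return (n < 0 || (size_t)n >= bufsize) ? -1 : n;
}
```

snprintf reports how many characters it wanted to write, so a too-small buffer is detected up front rather than silently truncating the request on the wire.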

Have you tried adding a “Connection: close” HTTP header to your request? Maybe your server is keeping the line open.

Multiple TCPClients have never worked for me but apparently some of the test code uses multiples. There are only 7 sockets available on the TI CC3000 and one is used by the cloud connection.

Speaking of cloud connections, have you tried your code with the cloud off? Loop runs a lot faster if the cloud is off.

How do you turn off the cloud?

To “turn off the cloud” use Spark.disconnect()

@CloudformDesign: it’s interesting you mention these 4 seconds. In my project based on the Webserver library, I often see a “*** Connection timed out” debug message four to ten seconds after successfully processing a request. Obviously the connection is left open despite my flushBuf and reset calls. I still need to find out how to close the connection, to see whether that improves the stability of my Spark Core webserver, which currently resets due to timeouts far too often to be of any use in a production environment.

Thanks bko!

I have Spark.disconnect() in my setup function, so the cloud should be off.

I have tried a Connection: close before without it working. It seems like this is all a little magical. Should the request look like this:

POST /endpoint HTTP/1.1
Host: HOST:PORT
Content-Length: 19
Content-Type: application/json

{"hi":"from spark"}

Connection: close

Are there too many newlines? Is this what you are thinking? Does the connection:close go somewhere else? Thanks for your help, I have no idea how to write raw http requests.

I think it is also important to note that I have communicated with this server using Python and gotten 5 Hz (I am only getting 0.25 Hz from the Spark Core).

I normally use character arrays as well, I just wanted working code quickly for testing. I will definitely switch in the future – thanks for the suggestion. Do Strings use malloc or something? (from experience, that is terrible for microcontrollers)

maxint – this code seems very stable. I will be adjusting it to use static character arrays, and then I strongly recommend using it for posting!

Hi @CloudformDesign

The Connection: close is an HTTP header so it goes above the data say right below the Host: line. It tells the server to close the connection when the request is done.
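Spelled out, that ordering looks like this (HOST:PORT and the endpoint are the placeholders from the earlier post; expressing the request as a C++ string literal is just for illustration):

```cpp
#include <string>

// The full request, with every header above the single blank line and the
// JSON body after it. Endpoint/host values are placeholders.
const std::string kRequest =
    "POST /endpoint HTTP/1.1\r\n"
    "Host: HOST:PORT\r\n"
    "Connection: close\r\n"   // a header, so it sits with the other headers
    "Content-Length: 19\r\n"
    "Content-Type: application/json\r\n"
    "\r\n"                    // exactly one blank line ends the headers
    "{\"hi\":\"from spark\"}";
```

The key rule: headers, then one blank line, then the body, with nothing after the body.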

Ok, I changed it to what is below. The code below is operating every 8 seconds – still slower than before.

String request = String("POST ") + endpoint; 
request += " HTTP/1.1\r\nHost: ";
request += server_str;
request += "\r\nContent-Length: ";
request += String(len) + "\r\nContent-Type: application/json\r\n";
request += "Connection: close\r\n";
request += "\r\n";
request += String(message) + "\r\n";

cl->client.print(request);

// Without these lines it will NEVER send a value again
delay(1000);
cl->client.stop();

Huge success – I am now reliably transferring at faster than 12 Hz!!!

The key was replacing client.print with client.write – I think this points to a huge bug

I also added Connection: keep-alive – although without the changes to using write this didn’t work

Here is the code

I plan on turning this into a tcp client library in the near future, but for those on the bleeding edge feel free to use it now!
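To make the print-versus-write difference concrete, here is a plain-C++ mock (not the Spark firmware): each small print() call can end up as its own socket write, and potentially its own TCP packet, while assembling the request first needs only one write(buffer, len) call. MockClient and both send functions are hypothetical illustrations:

```cpp
#include <cstdint>
#include <cstring>
#include <string>

// Hypothetical mock of a TCP client that records how many socket writes
// happen; on the Core, each write can become its own packet on the wire.
struct MockClient {
    int writes = 0;
    std::string sent;
    void print(const char* s) {            // one socket write per fragment
        writes++;
        sent += s;
    }
    void write(const uint8_t* buf, size_t len) {  // one write, total
        writes++;
        sent.append(reinterpret_cast<const char*>(buf), len);
    }
};

// Slow pattern from earlier in the thread: a print() per header line.
void send_piecewise(MockClient& c) {
    c.print("POST /endpoint HTTP/1.1\r\n");
    c.print("Host: HOST:PORT\r\n");
    c.print("\r\n");
}

// Fast pattern: assemble the whole request, then a single write().
void send_buffered(MockClient& c) {
    const char* req = "POST /endpoint HTTP/1.1\r\nHost: HOST:PORT\r\n\r\n";
    c.write(reinterpret_cast<const uint8_t*>(req), strlen(req));
}
```

Both paths deliver identical bytes; the buffered one just does it in a single call.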

I have also created an issue about this


My TCP library will be included in the general sparklibs library under the open-source behive project. It will be done soon.

Check it out

Summary

  • Have a 1 second delay before trying to connect
  • Wait at least 80 ms between sending signals
  • Never use client.print – only write your entire buffer in a single client.write(buffer, len) call
  • Always call Spark.disconnect() before doing TCP connection code
  • Send a keep-alive in the header (see my code examples)

This will all be implemented soon.
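The two timing rules in the summary could be wrapped in a small pacing helper like this; the 1000 ms and 80 ms constants come from the summary, while the Pacer struct itself is my sketch of a millis()-style idiom:

```cpp
#include <cstdint>

// Pacing constants from the summary above.
const uint32_t CONNECT_DELAY_MS = 1000;  // settle time before connect()
const uint32_t SEND_GAP_MS      = 80;    // minimum gap between sends

// Tracks elapsed time against millis()-style timestamps; the unsigned
// subtraction handles uint32_t wrap-around the usual Arduino way.
struct Pacer {
    uint32_t last_event_ms = 0;
    bool due(uint32_t now_ms, uint32_t gap_ms) const {
        return (uint32_t)(now_ms - last_event_ms) >= gap_ms;
    }
    void mark(uint32_t now_ms) { last_event_ms = now_ms; }
};
```

In a loop() you would check pacer.due(millis(), SEND_GAP_MS) before each send and call pacer.mark(millis()) after it.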

Some comments
On a separate but related issue, it is apparently necessary to include about a one-second delay (I don’t know if that is the exact amount, but 500 ms was not enough) before the Spark attempts to connect. Without it, it will never connect (crazy as that may sound).

Oddly enough, this problem only exists on my work Windows server – with Linux I communicated just fine (although it always took a while to actually send data).

I want people to know this while they try to debug their connections – a delay could fix your problem!

Hey, I am working on making one Photon a server and two Photons clients over TCP these days. But it’s just not fast enough; I guess a lot of time is consumed reconnecting after the other Photon’s transmit. I am wondering if there is a solution where I don’t need to close the client right after ‘this’ transmit. I mean, is it possible for me to keep two clients connected to the server at the same time? :smile:

@particle_adam, you don’t need to close the socket between transmits. However, you will want to make sure you do have a connection before trying to send data. On one project I will close the socket if nothing has been received for a pre-determined time.
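A rough sketch of that idle-timeout idea, assuming millis()-style uint32_t timestamps; the 5-second timeout and the IdleTracker name are arbitrary examples, not code from the project mentioned:

```cpp
#include <cstdint>

// Close the socket only after nothing has been received for a while,
// rather than after every transmit. 5 s is an arbitrary example value.
const uint32_t IDLE_TIMEOUT_MS = 5000;

struct IdleTracker {
    uint32_t last_rx_ms = 0;
    void saw_data(uint32_t now_ms) { last_rx_ms = now_ms; }
    bool should_close(uint32_t now_ms) const {
        return (uint32_t)(now_ms - last_rx_ms) >= IDLE_TIMEOUT_MS;
    }
};
```

The loop would call saw_data(millis()) whenever client.available() yields bytes, and client.stop() once should_close(millis()) turns true.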

But I have no idea how to keep two clients on the same port, or how to fetch the data from a particular client using just the library functions in the Particle reference. Thank you in advance!

@particle_adam, I missed the part about two clients and one server! I can’t say I’ve ever tried this. Perhaps @bko, @ScruffR or @mdma can provide guidance here.

I’d hope @bko might chime in, since I’m just scratching the surface of topics he’s thoroughly knowledgeable in :blush:

But I’ll give some three-Photon setup a trial-and-error go :wink:


Hi @particle_adam

It sounds like you have a Photon server and two Photon slaves that want to connect simultaneously to the same TCP port on the server. I am not sure this is possible on a Photon or other micro.

What would work instead:

  • Each device has its own TCP port number and you have two instances of TCPServer. This is going to be difficult to code and resource intensive but would work.

  • Switch from TCP to UDP and use broadcast or multicast. This would be easier but requires all the Photons to be on the same network and subnet since these packets will not be routed upstream. You will also need to worry somewhat about dropped packets since UDP does not retry on errors.
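For the UDP option, the address setup might look roughly like this in POSIX-style C++ (a Photon would use the firmware’s UDP class instead; the 239.1.1.1 group and port 8888 are arbitrary choices of mine):

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <cstdint>
#include <cstring>

// Build the destination address for a multicast send and sanity-check
// that the chosen group really is in the IPv4 multicast range.
bool make_multicast_addr(const char* group, uint16_t port,
                         sockaddr_in* out) {
    memset(out, 0, sizeof(*out));
    out->sin_family = AF_INET;
    out->sin_port = htons(port);
    if (inet_pton(AF_INET, group, &out->sin_addr) != 1) return false;
    // 224.0.0.0/4 is the IPv4 multicast range.
    return IN_MULTICAST(ntohl(out->sin_addr.s_addr));
}
```

As bko notes, these packets stay on the local subnet, so all three Photons must share a network, and the application has to tolerate (or re-request) dropped datagrams.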


@bko, I was going to propose this but I needed someone much smarter than me to do it instead! I believe there may be another topic where a member used ack/nak packets with UDP to provide packet in-order sequencing and confirmation. I guess another approach would be to run the TCPServer on the nodes and have the “central” Photon poll the nodes. :smile:


Yeah, I tried the first one this morning and it worked well. Compared to the previous method of switching between the two client Photons, this solution in fact runs much faster – about every 30~40 ms from the server’s perspective, depending on the size of the packet. But it is still not fast enough, because I want to use it to sample and send to the main MCU for autonomous control. :weary:
And now I am trying to make sense of UDP multicast; I was looking into it right before I checked your answer. Thank you very much for showing me another way to achieve my goal, I really appreciate it.
Well, I will keep trying :grin:


I actually don’t get your idea of the “other approach” – can you explain it in more detail? :smiley:

@particle_adam, if you want the primary photon to do autonomous control then it should “own” when and how it gets data from the client photons. One way to do that is to run a TCPServer on the clients and have the main photon “poll” each client for data in a round-robin fashion. This allows you more control over the data collection.
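The round-robin order itself is simple to sketch; the RoundRobin struct here is purely illustrative, with the actual per-node TCPClient connect/request/read left out:

```cpp
#include <cstddef>

// Round-robin index over N client nodes: the "central" Photon polls
// node 0, then 1, ..., then wraps back to 0. The caller performs the
// real poll (connect, request, read) against the node returned.
struct RoundRobin {
    size_t n;
    size_t current = 0;
    explicit RoundRobin(size_t count) : n(count) {}
    size_t next() {
        size_t node = current;
        current = (current + 1) % n;
        return node;
    }
};
```

Because the central Photon initiates every exchange, it fully controls when data arrives, which is what makes this attractive for an autonomous-control loop.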
