TCPClient slowing to a crawl

I would love to have the TCPBuffer dynamically allocated, where the user can directly specify the maximum and minimum size. Initially buffers start at their maximum size, but when the system becomes resource-starved, they are pruned down to their minimum.

At present, our dynamic allocation SOSes (panics) when it runs out of memory. If I can avoid that for system allocation requests, then dynamic buffer management becomes possible!
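A minimal sketch of what that min/max policy could look like, assuming a hypothetical TCPBuffer structure (none of these names are part of the Spark firmware):

#include <stdint.h>
#include <stdlib.h>

// Hypothetical min/max buffer policy -- a sketch, not Spark firmware API.
struct TCPBuffer {
    uint8_t* data;
    size_t   capacity;
    size_t   min_size;   // user-specified floor
    size_t   max_size;   // user-specified ceiling

    // Buffers start at the maximum size the user asked for.
    bool init(size_t min_sz, size_t max_sz) {
        min_size = min_sz;
        max_size = max_sz;
        data     = (uint8_t*) malloc(max_size);
        capacity = data ? max_size : 0;
        return data != NULL;
    }

    // Called when the system becomes resource-starved: prune to the minimum.
    void prune() {
        if (capacity > min_size) {
            uint8_t* smaller = (uint8_t*) realloc(data, min_size);
            if (smaller != NULL) {
                data     = smaller;
                capacity = min_size;
            }
        }
    }
};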

Indeed @bko, that is what I intend to do this weekend. I second @mdma’s request for a dynamic TCPClient buffer size and look forward to it becoming a reality, and I also understand the tradeoffs the Spark team has to make to balance the many applications the Spark Core can serve – it can be a tough crowd when you have a product with this much potential! :smile:

Guys, I’ve done a lot of tests and changes in my code, but whatever I try I keep hitting one problem: the TCP client doesn’t split incoming data into per-message packets, so this code:

_buffer = (uint8_t*) calloc(_client.available(), sizeof(uint8_t));
_total  = _client.available();
_client.read(_buffer, _client.available());

called on bursty incoming messages can capture any fragment of a message, because the TCP client rewrites its buffer cyclically. That produces, for example, this series of messages:

{"message":{"name":"size","type":"range","value":"3","clientName":"Spark_Core1"}}
{"message":{"name":"size","type":"range","value":"4","clientName":"Spark_Core1"}}
{"message":{"name":"size","type":"range","value":"5","clientName":"Spark_Core1"}}
{"message":{"name":"size","type":"range","value":"6","clientName":"Spark_Core1"}}▒Q{"message":{"name":"size","type":"range","v}
alue":"7","clientName":"Spark_Core1"}}▒Q{"message":{"name":"size","type":"range","value":"8","clientName":"Spark_Core1"}}▒Q{"mes1"}}

I’ve even tried disabling IRQs while reading the TCP client’s buffer, but code like:

__disable_irq();
_buffer = (uint8_t*) calloc(_client.available(), sizeof(uint8_t));
_total  = _client.read(_buffer, _client.available());
_offset = 0;
__enable_irq();

just drives the Spark into a reboot (without an SOS message). So, is there any way to pause receiving data into the buffer so that messages don’t get broken into parts?
Or maybe there is a way to call the buffer-processing function from an IRQ?

Hi @ekbduffy

Your code assumes that client.available() is not changing between the first, second, and third lines. I think this is a bad assumption and you should capture the value into a local variable.
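Something like this, as a sketch (reusing the member names from your snippet):

// Capture available() once so the allocation and the read agree on the
// byte count, even if more data arrives between the two calls.
int avail = _client.available();
if (avail > 0) {
    _buffer = (uint8_t*) calloc(avail, sizeof(uint8_t));
    if (_buffer != NULL) {
        _total  = _client.read(_buffer, avail);
        _offset = 0;
    }
}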

If you want to pause incoming data, I don’t know of a way. You can call client.stop() to close the connection and re-open it later, but that is a big hammer.
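If you do go that route, it would look roughly like this (serverHost, serverPort, and processBufferedData() are placeholders for your own code, not Spark API):

// "Big hammer" sketch: close the socket while processing, then reconnect.
void processBufferedData();                  // hypothetical handler for data already read
const char* serverHost = "example.com";      // placeholder
const uint16_t serverPort = 8080;            // placeholder

void pauseAndProcess(TCPClient& client) {
    client.stop();                           // closes the connection; no more data arrives
    processBufferedData();                   // consume what we already have
    client.connect(serverHost, serverPort);  // re-open when ready for more
}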

Thanks, this works much faster, but the result is the same with bursty incoming data.

So there is no way to receive the data exactly as it arrived in one message, right? That’s sad…
OK, I will add checks for a correct start and end of each message and see whether that helps or not :)
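For what it’s worth, that is the standard fix: TCP is a byte stream and preserves no message boundaries, so the receiver has to frame messages itself. A sketch of delimiter-based framing, assuming the sender terminates each JSON message with a newline (handleMessage() is a hypothetical callback):

void handleMessage(const char* msg);   // hypothetical: process one complete message

static char line[256];
static size_t lineLen = 0;

// Drain the TCP byte stream and cut it into messages ourselves.
void pumpClient(TCPClient& client) {
    while (client.available() > 0) {
        int c = client.read();
        if (c == '\n') {                        // end of one message
            line[lineLen] = '\0';
            handleMessage(line);
            lineLen = 0;
        } else if (lineLen < sizeof(line) - 1) {
            line[lineLen++] = (char) c;         // keep accumulating
        } else {
            lineLen = 0;                        // overflow: drop the partial message
        }
    }
}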