Sending high-frequency/large data packets via Particle (BCM chip)

Hey folks,

My project uses a Particle Photon to send data packets to a Node.js backend. However, the data chunks are fairly large (~2 to 3 KB) and the messaging frequency is quite high (~10 Hz).

Currently I am hitting a bad crypto transform error on the Node.js server, and after carefully examining the source code I was able to narrow the problem down to the firmware side.

In short, I used the blocking_send function that exists in the original Particle firmware code. I noticed that, fundamentally, blocking_send relies on socket_send, which lives in /hal/src/photon/socket_hal.cpp. Interestingly, that function does its sending through wiced_tcp_send_buffer, which comes from the bcm-wiced-sdk library.

I noticed that when I send packets (10 Hz, 2~3 KB/packet) from the Photon, after a few iterations (sometimes 10 to 20, sometimes 900 to 1000), wiced_tcp_send_buffer returns 16, causing many subsequent errors. Driven by strong curiosity, I checked the error code definitions of this library: 16 means “WOULD_BLOCK”. I then found this article, which says it may be due to the socket stream buffer being full, which seems to make sense! Here are some of my questions; any suggestions and comments are welcome!

  • I observed that when wiced_tcp_send_buffer returns 16, it actually still sends data, but the data is corrupted, causing deciphering errors on the Node backend. Is there any way to code in this philosophy: if it will succeed, send; if it will fail, don’t send at all, rather than sending corrupted data?

  • How can I check the available socket stream buffer size before each send? The standard system calls don’t seem to work here. For example, I tried including sys/socket.h and using getsockopt to monitor the socket state, but without success…

  • If it is indeed a socket buffer issue, my philosophy is:
    if (socket buffer is full) then skip this send!
    else send the data packet!
    However, the hardest part is the same as above: the standard system calls don’t work here, and the API in the WICED lib is limited.

  • Finally, can someone tell me the available socket buffer size of the BCM chip used in the Photon? Is there any way to increase it manually in code?

Thanks for the help!

The Photon is easily capable of doing this using the standard TCPClient. I’ve transmitted over 800 Kbytes/sec. for days testing this.

The problem you are running into can be seen in how the sample code above is written. The write function returns the number of bytes written, but on the Photon also returns -16 when the underlying send buffer is full. When this happens, all you need to do is try again, such as on the next invocation of loop.

I think the internal send buffer is around 6K, which works out to an optimal write size of around 1024 bytes. If you make the write buffer much smaller, the time it takes to come around by loop, a minimum of 1 millisecond, will cause the internal send buffer to run out of data. If you make it significantly larger, say 4096 bytes, the opposite starvation problem occurs: since the write buffer is so large, the internal send buffer needs to be nearly empty before writing to it, and you’ll run out of data then as well.

In your case, I would break each write up into 1024 byte chunks, repeating when the -16 error occurs, and you should easily be able to sustain that rate.

4 Likes

Thanks for your suggestions @rickkas7, I will try it tonight; hopefully it will work for me!

By the way, there is another problem: when the -16 error is returned, a ‘corrupted’ data chunk is actually sent to the Node.js server, meaning the sending process technically succeeds, but the content being sent is incorrect!

Since packet format is basically formed as below;

==========================================================
| 2-byte integer (packet size) | big encrypted data chunk |
==========================================================

And on the Node.js side, the chunking stream uses that first 2-byte integer to determine the packet size. This means that if the data chunk is corrupted during sending, the alignment when Node.js processes packets will be broken. For example, a broken packet might look like this:

3 KB (header indicating packet size) + 30 B encrypted data (corrupted send)

Node thinks it is a good packet whose size is actually 3 KB, so it truncates 3 KB as one packet, which is surely wrong; worse, it breaks the alignment so that subsequent packets cannot be parsed correctly!

So I am wondering: is there any method for predicting whether the -16 error will occur? For example, if it will occur, wait a while and then send; if not, just send. That way, any packet I actually send must be correct; otherwise, I simply don’t send it.

SOME UPDATES:
hey @rickkas7 I think it will be better if I make an example here :slight_smile:

Suppose I have a Node.js server which serves 2 Photons, with each packet being 3 KB and a sending rate of 1 packet/100 ms.

Then, if the -16 error occurs on one of the Photons, meaning the packet is actually sent but corrupted, the chunk stream on the Node.js side may look like:

(3KB + 30B encrypted data) + (3KB + 3KB encrypted data) + (3KB + 3KB encrypted data)

As you can see, the first packet received by Node.js is corrupted, as its encrypted data is only 30 B << 3 KB.

So when Node.js processes this chunk, it first sees the 3 KB header and thinks the packet size is 3 KB. Based on this, it truncates the first 3 KB as the first packet. But in fact it is not a whole packet! This ruins the rest of the chunks, since the alignment is broken! :frowning:

So the first packet parsed by Node.js causes a bad decrypt error, and the subsequent ones are lost (invalid packet header, which is in fact part of the encrypted data).

@chenc, I think @rickkas7 is saying that by using a 1024 byte packet, you should see different performance, possibly without the “garbage” data. Try as he indicated then report back on your findings :wink:

1 Like

No data is sent when TCPClient.write() returns -16. You absolutely need to call write() again with the same block in order for it to be sent. Also, you must not send any other data after it until that block is sent, otherwise the data will be out-of-order.

As long as you do that, the data will arrive just fine on the node.js side.

2 Likes

Thank you @rickkas7 @peekay123. Yes, you are right! After carefully observing, I think the garbage bytes are sent by the background event loop. It looks like when messaging fails, the event loop is triggered to send something (perhaps a ping, ACK, etc.) that doesn’t conform to the packet format rule, i.e. (header: packet size) + packet data.

1 Like