I’m sending relatively large chunks of binary data (~10 k to 20 k bytes) off an Electron (U260) for audio sampling. I’m writing the data in 1024-byte chunks, and I usually see a lot of data loss (~50%) in terms of what shows up on my server.
I’m trying to understand where the limitation is. The 1 KB TCPClient.write() executes every 4-5 seconds, which is slow, but data is received on the server only every ~20 seconds (REALLY slow, even for 2G). Signal strength is good (4 bars).
Is there a max throughput limitation for the device (firmware or hardware) that could be affecting this?
Is there a way to check whether I’m connecting to a 2G vs. 3G network?
Any insight into how the TCPClient.write() is handled, and how it might affect the above problem would be much appreciated.
Related to the transmission loss, the debug logs show half of the 1024 byte chunks not sending:
sending data 0 1,188,55,145,151,67 //my app logging tells me I sent the first chunk and logs the first bytes so I can match them up to hex on the server later: these bytes show up! Firmware says:
1129.354 AT send 17 "AT+USOWR=1,1024\r\n"
1129.364 AT read > 3 "\r\n@"
1129.414 AT send 1024 "[big blob of data here omitted for clarity]"
sending data 1 91,90,91,91,90,91 //my app thinks the next batch is sent, but these bytes do not show on server. Firmware says…
1139.437 AT send 17 "AT+USOWR=1,1024\r\n"
1140.968 AT read + 18 "\r\n+USOWR: 1,1024\r\n"
1140.978 AT read OK 6 "\r\nOK\r\n" //no blob of data happens and app says…
sending data 2 93,93,92,93,94,94 //these bytes show up
1140.978 AT send 17 "AT+USOWR=1,1024\r\n"
1140.979 AT read + 14 "\r\n+CIEV: 2,4\r\n"
1140.989 AT read + 14 "\r\n+CIEV: 2,3\r\n"
1140.999 AT read > 3 "\r\n@"
1141.049 AT send 1024 "[another big blob of data here]"
And so it goes, in alternating fashion: one 1024-byte chunk is sent, the next is not. But no error is thrown (although, let's be real, I have very little idea of what the AT commands are saying here). Anyone know why this is happening?
int MDMParser::socketSend(int socket, const char * buf, int len)
{
    //DEBUG_D("socketSend(%d,%d)\r\n", socket, len);
    int cnt = len;
    while (cnt > 0) {
        int blk = USO_MAX_WRITE;
        if (cnt < blk)
            blk = cnt;
        bool ok = false;
        {
            // ... (rest of the function omitted)
Basically, USO_MAX_WRITE is 1024, so if you send a chunk of 1024 bytes, it fails the cnt < blk test. Things go bad from there. So use 1023. Or 512. Or anything less than 1024 (but not equal to).
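A minimal sketch of that workaround on the application side, assuming the stock TCPClient API (the function name and sizes here are illustrative, not the exact code):

#include "application.h"

TCPClient client;

// Keep each write strictly below 1024 bytes to stay clear of the USO_MAX_WRITE edge case.
size_t sendAll(const uint8_t* buf, size_t len)
{
    const size_t MAX_CHUNK = 1023;                      // anything < 1024 works
    size_t sent = 0;
    while (sent < len && client.connected()) {
        size_t n = len - sent;
        if (n > MAX_CHUNK)
            n = MAX_CHUNK;
        size_t written = client.write(buf + sent, n);   // bytes accepted by the stack
        if (written == 0)
            break;                                      // give up rather than spin forever
        sent += written;
    }
    return sent;
}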
Hey @mountainecho thanks for posting your question and your findings on how to properly transmit data over TCP.
@BDub, is the fact that sending chunk sizes of 1024 bytes results in a failure an expected behavior, or is that a firmware bug and an allowable case that we should accommodate for?
I am sure you can change USO_MAX_WRITE in the firmware if you compile locally using gcc and the rest of the toolchain. Any other compile method is not likely to give you what you need.
That said, a lot of infrastructure depends on TCP packets being 1500 bytes or smaller since that is the usual MTU size in most routers without enabling jumbo packets. I don’t know enough about the cellular part of the network to say if that is a hard limit there as well, but I think you will break things in general if you use a value larger than 1500 bytes.
I’ve struggled a bit to get the TX speed close to my 8 KB/s audio buffering rate on the device. The binary data moves fast enough, but the whole transaction song and dance between the modem and the server is dog slow. I assume cell networks aren’t great for latency, so I thought that larger chunks would help reduce the back-and-forth time.
Any other suggestions that might make transmission more efficient?
Max throughput on the Electron is limited by the speed at which data is sent to the modem over the USART (baud rate 115200). I’ve done some quick experiments with increasing this rate up to around 900k baud and it didn’t seem to help. More experiments here would help solidify that data, though. At any rate (no pun intended), the max throughput I’ve measured is about 3 KB/s, but I don’t think I was using more than 512-byte chunks with TCPClient. Have you found larger chunk sizes are improving your data rate significantly?
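For rough context on where that ceiling comes from: at 115200 baud with 8N1 framing each byte costs 10 bits on the wire, so the UART link itself tops out around 115200 / 10 ≈ 11.5 KB/s. Measuring ~3 KB/s therefore suggests most of the time is spent on the AT command/prompt handshaking around each chunk rather than on the payload bytes themselves (worth verifying with larger chunks).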
The buffer size in the modem is not currently reconfigurable, and if it was in the future it would probably only be allowed to go up instead of down to make sure we don’t break anything that currently relies on 1024. From the AT command manual 1024 is the max size that can be sent with USOWR, so to fix this it sounds like we would just need to increase the default buffer size to 1025.
@BDub1 one further question I wonder if you could help me with: the u-blox AT command reference notes a required 50 ms wait after receiving the @ prompt when issuing a USOWR command.
The notes for USOST specifically do not include that advice: “After the command is sent, the user waits for the @ prompt. When it appears the stream of bytes can be provided.”
In mdm_hal.cpp, it looks like the 50 ms delay is inserted between the @ prompt and the data transmission for USOST anyway. If this is indeed superfluous, would it be possible to remove it in a future release?
sendFormated("AT+USOST=%d,\"" IPSTR "\",%d,%d\r\n",_sockets[socket].handle,IPNUM(ip),port,blk);
if (RESP_PROMPT == waitFinalResp()) {
HAL_Delay_Milliseconds(50);
send(buf, blk);
if (RESP_OK == waitFinalResp())
ok = true;
}
If that’s possible, that’s a good find! I’ve often discovered through testing though that delays are needed even though their documentation doesn’t state it. Or that a command can take up to NNN seconds to complete even though that is also not stated for the command in question.
Are you building locally? If so please test out your theories and let me know if we should investigate more. If not, if you would like to add a Github issue similar to the one above, that would be very appreciated!
Yeah, I know what you mean. It’s brutal how much of this is pure trial and error.
I haven’t been building firmware locally yet, but I have done some local testing in-app using AT commands instead of the Particle UDP interfaces, and I think perhaps the USOST sensitivity is after the data transmission, rather than after the command itself. In particular, it seems that the modem is blocked while writing data to the lower level, and might not be able to receive another command immediately after completion. But I’ll check further and let you know what I find.
I tried a variety of things with Cellular.command()/USOST. Sent a single chunk and multiple chunks. Sent several USOST commands in sequence. I did not see any negative effect from sending data immediately after the @ prompt. I do see a long delay, 180 ms or so, getting the OK prompt after the data is sent, so the last chunk needs a longer timeout.
Wanted to leave one other item here in case others search later:
If you really want higher transmission rates, you need to put the modem into direct link mode. Having done that, no more commands can be issued to the modem: everything is just sent directly to the other end of the connected socket. BUT…
The decimal value 37, hex 25, character “%” is not tolerated by the modem and gets dropped. I’ve tried to find documentation, but have been unable to find any. Anyway, this dropped character will drive you crazy trying to get TCP chunk sizes correct. Hopefully this will save someone time/headache in the future. I’m playing with audio, so I can just bump a 37 to 38 with no negative effect. What would an actual engineer do?
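For what it's worth, the bump itself is trivial. A sketch (fine for raw audio, where one LSB of error is inaudible; not fine for arbitrary binary data, which would need a real escaping scheme on both ends):

#include "application.h"

// Nudge any 0x25 ('%') byte to 0x26 before sending in direct link mode,
// since the modem appears to drop 0x25 outright.
void scrubPercent(uint8_t* buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        if (buf[i] == 0x25) {
            buf[i] = 0x26;
        }
    }
}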
I am using my Particle Electron to connect to the IBM Watson platform. I use the MQTT and SparkJson libraries. However, I can only send about 1224 bytes/sec. With the microcontroller (ARM Cortex-M3) and the 3G GSM module, it should be faster by my estimation. How do I get higher data speeds to increase the publish frequency?
I had to look up MQTT. That’s what I needed! Anyway, too late now. I wrote my own UDP service that handles state management between the device and server.
Basically, the answer is that the Particle firmware has some limitations based on (1) the max buffer size and (2) the network/modem latency. Neither TCP nor UDP will give you much better than what you’re getting if used through the standard methods.
The first thing you can do is use Cellular.command() to send AT commands directly. This will enable you to put the modem into direct link mode. In this mode, the data goes directly to the other end of the connection; no commands are interpreted by the modem, and there is no command/response back-and-forth for each chunk.
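For reference, entering direct link looks roughly like this. This is only a sketch: AT+USODL is the u-blox direct link command, the socket handle is whatever AT+USOCR returned when you created the socket manually, and the exact responses (and how to exit direct link again) should be checked against the SARA AT command manual:

#include "application.h"

// Sketch: put a manually created, already-connected socket into direct link mode.
// After this, the modem stops interpreting AT commands; every byte written to it
// goes straight to the remote end of the socket.
void enterDirectLink(int socketHandle)
{
    Cellular.command(10000, "AT+USODL=%d\r\n", socketHandle);
}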
If you’re using UDP, you have to manage the connection manually. If TCP, then you have to manually create chunks and make sure they are properly formatted with size and line breaks according to the protocol for chunked content.
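If “chunked content” there means HTTP chunked transfer encoding, the framing per chunk is just the size in hex, CRLF, the payload, and CRLF (plus a final "0\r\n\r\n" to end the body). A rough sketch:

#include <cstdio>
#include <cstring>
#include <cstdint>

// Frame one chunk as: <hex length>\r\n<payload>\r\n
// 'out' needs room for len plus roughly 10 bytes of framing overhead.
size_t frameChunk(const uint8_t* data, size_t len, uint8_t* out)
{
    size_t pos = (size_t)sprintf((char*)out, "%X\r\n", (unsigned)len);
    memcpy(out + pos, data, len);
    pos += len;
    out[pos++] = '\r';
    out[pos++] = '\n';
    return pos;
}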
The other bad news is this: there’s no way to read incoming data with Cellular.command() when the modem is in direct link mode. For this reason, I had to locally customize the Particle firmware to add a method enabling direct reads from the incoming modem buffer.
Actually, reading your post again: my solution may cause various other problems for you if you’re trying to connect to Watson directly. Also, I’m not sure of the impact on MQTT.
Hey @mountainecho, I'm really interested in the way you implemented your streaming, whether it was successful in the end, and what drawbacks you found.
I’m looking at trying to establish a one-way TCP connection with data rates in the 1-2 kilobytes per second range, and also with minimal latency. I’m hoping to send at least a 512-byte chunk at a rate of 2 Hz to a server.
Do you have a follow-up I could use, such as some code that was critical to getting your application working? I’m not too fussed about requiring data to be able to travel the other way.