MQTT-TLS could use Amazon IoT

Oops, are you using a Photon now? The Photon doesn't have much RAM, so I recommend using an Argon.
But if you do use a Photon, please use the "config-mini.h" settings to reduce RAM usage, as described in the comments in this topic.

I think you’re on the right track. We saw instability above 4 KB as well on the E-Series, and as I recall it’s because the application stack is limited to 6144 bytes. We limited packet size and sent packets quickly, but this still artificially limits throughput.


I posted a request on hiro’s GitHub page a while ago, but in the end we decided to just use the Particle stuff and accept the inefficiency for our latest field trial. Your post/callout brought me back to this, though… You can find my original post and hiro’s answer here:

I haven’t gotten back to this and tested it, but it looks like the answer is yes and no; I guess yes, but it takes up a lot of RAM.

Also, hiro: in this post I mention some beer $ and I’d like to make good on that promise. PM me if you’ve got Venmo. :slight_smile:

I think a very large MQTT packet size is inefficient; in my view, anything over 512-1024 bytes is too big.
When you use MQTT on a low-spec IoT device, the packet size will naturally be smaller, because the device’s MCU can’t process very large packets (TLS encryption/decryption, networking on the WiFi stack, etc.).

If you want to send a large payload over MQTT, you need to divide the message into smaller packets, reduce the payload size, or something similar.
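A minimal sketch of the "divide the message" idea in plain C++ (the function name and the 512-byte default are illustrative, not from the MQTT-TLS library): split the payload into fixed-size chunks, then publish each chunk separately, e.g. with a sequence number in the topic.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Split a payload into chunks no larger than maxChunk bytes.
// 512 matches the 512-1024 byte range suggested above; tune it
// to your device's free RAM and MQTT_MAX_PACKET_SIZE.
std::vector<std::string> chunkPayload(const std::string& payload,
                                      std::size_t maxChunk = 512) {
    std::vector<std::string> chunks;
    for (std::size_t off = 0; off < payload.size(); off += maxChunk) {
        chunks.push_back(payload.substr(off, maxChunk));
    }
    return chunks;
}
```

Each element can then be passed to the client's publish call in a loop; the receiver reassembles by chunk index.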

Copied from github:

We have this config working in production:
#define MQTT_MAX_PACKET_SIZE 2048

We played with this a bit and I don’t have old commits to reference, but I’m certain I hit intermittent memory instability at just over 3 KB. Because of some other processes, backing off to 2 KB was stable and convenient. I found this by watching free-memory output across multiple test runs and then testing for extended periods in the field.
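The monitoring approach above (watching free-memory output across runs and looking for the worst case) can be sketched host-side. On a Particle device the samples would come from System.freeMemory(); the struct below is plain C++, with the device call replaced by plain numbers, so only the bookkeeping is shown:

```cpp
#include <algorithm>
#include <climits>

// Track the lowest free-memory reading seen across test runs.
// On-device, sample() would be fed System.freeMemory() periodically.
struct FreeMemoryMonitor {
    long worst = LONG_MAX;

    void sample(long freeBytes) {
        worst = std::min(worst, freeBytes);
    }

    // True if the lowest observed free memory stayed above a safety margin.
    bool stable(long marginBytes) const {
        return worst > marginBytes;
    }
};
```

The 2 KB vs. 3 KB decision described above amounts to choosing the largest packet size for which this check still passes over long runs.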

I agree with Hiro; there is a hard limit on the free memory available to application code.

As for the overhead cost, there are two considerations: bytes of overhead and time to transmit.
We saw a definite impact on throughput, but for us this was due to the latency in the serial delivery of “10x 500b packets”. We lost time both waiting on the ACK and preparing the next 500-byte packet. MQTT itself should be quite efficient (roughly 20-40 bytes of overhead per packet, but check the spec).
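As a back-of-envelope check on that overhead figure: an MQTT 3.1.1 PUBLISH at QoS 1 carries a 1-byte fixed header, a 1-4 byte Remaining Length varint, a 2-byte topic-length field, the topic string itself, and a 2-byte packet identifier. A sketch (the function name is mine; verify the byte counts against the MQTT specification):

```cpp
#include <cstddef>

// Approximate per-packet overhead of an MQTT 3.1.1 PUBLISH at QoS 1,
// i.e. everything on the wire that is not payload.
std::size_t publishOverhead(std::size_t topicLen, std::size_t payloadLen) {
    // variable header: 2-byte topic length + topic + 2-byte packet id
    const std::size_t variableHeader = 2 + topicLen + 2;
    const std::size_t remaining = variableHeader + payloadLen;
    // Remaining Length is a base-128 varint, 1-4 bytes
    std::size_t lenBytes = 1;
    if (remaining >= 128)     lenBytes = 2;
    if (remaining >= 16384)   lenBytes = 3;
    if (remaining >= 2097152) lenBytes = 4;
    return 1 /* fixed header byte */ + lenBytes + variableHeader;
}
```

For a 20-character topic this works out to 27 bytes whether the payload is 500 or 2048 bytes, consistent with the 20-40 byte estimate above; per-packet overhead is small, so the throughput loss really does come from the ACK round-trips, not the framing.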

In a nutshell, 2048-byte packets gave about 3-4x the throughput of Particle.publish(), and I imagine 9500-byte packets would give another 2x or more…

@hirotakaster and I may disagree on that last point in theory. I’d really like to test a 10 KB upload and see how it impacted my battery life; only then could I decide whether it’s too big.

Hi @ian.c

“divide the packet message or reduce the packet size or something”

What I meant is: if you send a very large packet, a lot of memory has to be allocated by the application and on the stack, so dividing the message into smaller packets uses memory more efficiently.

As you commented, TCP ACK transmission has to be considered when doing multiple MQTT pub/sub operations over the network.
Good throughput is better, but it will depend on the device’s MCU spec, the application program (memory and other resources), and WiFi conditions.

So I think device stability with a suitable packet size matters more than high throughput or a large packet size.
Of course, trying a larger packet size is good too.

Oh, I understand now. When you said ‘inefficient’, I thought you might be referring to throughput or computational cost. We agree that packets have to be small to be stable.
