Direct connection from Electron to AWS SQS

For a few different reasons I cannot use the Particle Cloud. The main reason is that in case of network failures I cannot lose data (I need to log it) and send it to a cloud system afterwards (with the correct timestamp).

So for each measurement I need to add the timestamp of the measurement (not necessarily real-time).

I wanted to use AWS SQS, but this requires an HTTPS connection (with a certificate). Is there any way we can achieve this on the Electron?

Kind regards,


Hmm, I don’t quite follow the logic :confused:
You can’t use the Particle Cloud due to possible network failures, but you can use AWS SQS despite that network failure?

But for the HTTPS part of your question, there are several threads (when you search for HTTPS) that will either

  • direct you to webhooks,
  • direct you to the Glowfish HTTPS library,
  • refer you to mbedTLS/DTLS, or
  • tell you that this is not yet possible without any of the above

Sorry, maybe I did not correctly express myself…

Of course if I have a network failure, there is no connection to the AWS cloud either.

But in case of a network failure, I save the measurement with its timestamp to EEPROM. When there is a network connection again, I want to send the measurement with its saved timestamp as one record to SQS. With the Particle Cloud I cannot do that, because if I put the value into a cloud variable it automatically gets assigned the current timestamp, not the timestamp at which the measurement was actually taken.

Does anyone have an example / working project with the Glowfish HTTPS library and mbedTLS/DTLS?

Thank you for your reply,


When using webhooks (via Particle.publish()) rather than Particle.variable(), you can send the timestamp with the value (even multiple timestamp/value pairs at once) as JSON.


I’m pushing data into a Microsoft Azure Table database, using Event Hubs to receive the webhooks and then Stream Analytics to push that data into the table. This setup can scale to 1 million events per second without failing.

If you have to send to Amazon and bypass the Particle Cloud, you should look at the Adafruit WICED Feather, since it does support secure MQTT over TLS, but you then lose all the great Particle Cloud features.

@ScruffR @RWB thank you for your replies. It really sounds like an interesting technology; however, using Particle.publish() together with a webhook does not solve my problem.

Please let me explain:

1.) Particle.publish() solves the timestamp issue when there is a connection problem between the Electron and the Particle Cloud --> good

2.) The webhook is in this case the point of failure: if there is a problem with the Amazon system (load balancer problem, EC2 instance problem, or just a plain network failure between AWS and the Particle Cloud), then the webhook will not be triggered and the value (together with its timestamp) will not be stored in the AWS system.

So the total solution comes back to the same problem: there is a level at which we can lose data.

Either I need to be able to push data directly to AWS in some way (or to another provider), or the Particle Cloud must provide data storage, triggering, and alarming via SMS and push messages. Or the Particle Cloud should have some sort of queue system that only gets emptied when the messages have really been processed by another system. But this is not the case (so far).

If anybody has any ideas, it would be nice to hear them.


Unless you only discard your saved data once you have confirmation of it being stored, either via the webhook response (which would require the AWS server to hand data back to the webhook, as other services do), or by reading back at least one of the newly added data points (via another webhook).

So there are other options besides those.

@ScruffR can you please elaborate a bit more on how you see this? If it would be too much, you can send a private message as well. I am interested in how you see this working (if other values are coming in during that same time as well).

Thank you and kind regards,


I haven’t got any experience with AWS as such, only with the general functionality of webhooks and Particle.publish()/Particle.subscribe(), so I couldn’t help you with the how-to.
There are others with more AWS-specific answers - I’m more the general (bigger-picture) or run-my-own-server guy :blush:

But pushing out data points to webhooks (and awaiting a related subscription to confirm completion of the task) can be done one by one while still collecting new incoming data.
You’d just need some sort of FIFO buffer where you add new data points at the head while pushing out the historic data and, if successful, pulling it off.
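That FIFO idea could be sketched like this (a generic, hypothetical buffer class, not tied to any Particle API):

```cpp
#include <cstddef>

// Minimal fixed-size FIFO ring buffer: new readings are pushed at the
// head, while the oldest entry is only popped off the tail AFTER the
// webhook/subscribe round trip confirms it was stored remotely.
template <typename T, size_t N>
class Fifo {
    T buf[N];
    size_t head = 0, count = 0;
public:
    bool push(const T& item) {          // add newest reading
        if (count == N) return false;   // buffer full: caller must decide
        buf[(head + count) % N] = item;
        ++count;
        return true;
    }
    bool peek(T& out) const {           // oldest entry, not yet removed
        if (count == 0) return false;
        out = buf[head];
        return true;
    }
    void pop() {                        // call only after delivery confirmed
        if (count == 0) return;
        head = (head + 1) % N;
        --count;
    }
    size_t size() const { return count; }
};
```

Because `pop()` is separate from `peek()`, a failed upload simply leaves the entry in place for the next attempt, so nothing is lost between retries.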


@GrtVHecke Have you checked out yet? You can publish directly to their service without using the Particle Publish events.

I’m pretty sure you can send data to Pubnub directly also.

Also, when using Particle.publish() you get an acknowledgement that the webhook was received successfully, so you know whether the data was delivered, and you can then use the sort of buffer ScruffR was talking about.
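One way to sketch that drain-until-failure pattern (`drain` and the sender callback are hypothetical helpers of our own; whether `Particle.publish(..., WITH_ACK)` returns a usable success flag should be verified against your firmware version):

```cpp
#include <functional>
#include <queue>
#include <ctime>

struct Sample { time_t t; float v; };

// Drain a queue of saved samples through a send function that reports
// success. On the Electron the sender could wrap something like
// Particle.publish("measurement", json, PRIVATE, WITH_ACK), whose
// result indicates whether the cloud acknowledged the event (an
// assumption to verify). A sample is only removed from the queue once
// its send succeeded, so a failure mid-drain loses nothing.
size_t drain(std::queue<Sample>& q,
             const std::function<bool(const Sample&)>& send) {
    size_t sent = 0;
    while (!q.empty()) {
        if (!send(q.front())) break;  // stop on first failure, keep the rest
        q.pop();
        ++sent;
    }
    return sent;
}
```

Calling this from `loop()` whenever the connection is up would push out the backlog one record at a time while new measurements keep arriving at the other end of the queue.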