Webhook "sleep" errors.... help!

electron

#1

I have 4 E-Series devices running Device OS 1.4.1. They all send data to Ubidots via a webhook (I am using a webhook rather than the library to save data), and lately they have been showing the “sleep” error in the webhook console logs. I can’t figure out why this is happening… the guys at Ubidots are working on it, but they don’t think it’s on their end.

These 4 devices send their Particle.publish() calls at basically the same time, although I’m pretty sure this isn’t an issue.

Here is the full webhook, which fires when I call Particle.publish("ubi_board", payload, PRIVATE);:

{
    "event": "ubi_board",
    "url": "https://industrial.api.ubidots.com/api/v1.6/devices/{{{PARTICLE_DEVICE_ID}}}",
    "requestType": "POST",
    "noDefaults": true,
    "rejectUnauthorized": true,
    "headers": {
        "X-Auth-Token": "normally my token is here"
    },
    "json": {
        "boxt": {
            "value": "{{boxt}}",
            "timestamp": "{{timestamp}}"
        },
        "boxh": {
            "value": "{{boxh}}",
            "timestamp": "{{timestamp}}"
        },
        "boxp": {
            "value": "{{boxp}}",
            "timestamp": "{{timestamp}}"
        },
        "PTmVavg": {
            "value": "{{PTmVavg}}",
            "timestamp": "{{timestamp}}"
        },
        "PTmVsd": {
            "value": "{{PTmVsd}}",
            "timestamp": "{{timestamp}}"
        },
        "3V3 mA": {
            "value": "{{3V3-mA}}",
            "timestamp": "{{timestamp}}"
        },
        "ORP mA": {
            "value": "{{ORP-mA}}",
            "timestamp": "{{timestamp}}"
        },
        "BattV": {
            "value": "{{BattV}}",
            "timestamp": "{{timestamp}}"
        },
        "RSSI": {
            "value": "{{RSSI}}",
            "timestamp": "{{timestamp}}"
        },
        "FileSize": {
            "value": "{{File Size}}",
            "timestamp": "{{timestamp}}"
        },
        "chrg": {
            "value": "{{CHRG}}",
            "timestamp": "{{timestamp}}"
        }
    }
}

and this is the payload that shows up in the console:

{
    "boxt": 6.09,
    "boxh": 43.42,
    "boxp": 1003.07,
    "PTmVavg": 29.65,
    "PTmVsd": 0.85,
    "RSSI": -67,
    "File Size": 344240,
    "BattV": 4.07,
    "ORP-mA": 41.34,
    "3V3-mA": 5.32,
    "CHRG": 1,
    "timestamp": 1571803268000
}

The usual Particle data is included as well. Variations of this webhook, along with a lot of other data, were working intermittently with “sleep” and “ESOCKETTIMEDOUT” errors, so I split them up into three separate webhooks. But now I just keep getting the “sleep” indication when I look at the logs for this webhook.

I really need to get this functioning so I can get my data to Ubidots…

Any help is appreciated!
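For context, the device-side code producing a payload like the one above might look roughly like this. This is just a sketch: the helper name, buffer size, and parameter order are my own assumptions, and the Particle.publish() call is shown as a comment since it only exists in device firmware:

```cpp
#include <cstdio>
#include <string>

// Build the JSON payload whose fields feed the webhook's {{...}} templates.
// Key names match the console payload shown above.
std::string buildPayload(float boxt, float boxh, float boxp,
                         float ptMvAvg, float ptMvSd, int rssi,
                         long fileSize, float battV,
                         float orpMa, float v33Ma, int chrg,
                         long long timestampMs) {
    char buf[512];
    snprintf(buf, sizeof(buf),
             "{\"boxt\":%.2f,\"boxh\":%.2f,\"boxp\":%.2f,"
             "\"PTmVavg\":%.2f,\"PTmVsd\":%.2f,\"RSSI\":%d,"
             "\"File Size\":%ld,\"BattV\":%.2f,\"ORP-mA\":%.2f,"
             "\"3V3-mA\":%.2f,\"CHRG\":%d,\"timestamp\":%lld}",
             boxt, boxh, boxp, ptMvAvg, ptMvSd, rssi, fileSize,
             battV, orpMa, v33Ma, chrg, timestampMs);
    // On-device, this string would then be sent with:
    //   Particle.publish("ubi_board", payload, PRIVATE);
    return std::string(buf);
}
```

Note that Particle.publish() event data is limited in size, so keeping the keys short (as above) matters once you pack this many fields into one event.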


#2

IIRC, the sleep error comes from too many requests ending in an error (e.g. your ESOCKETTIMEDOUT); it exists to protect both the target server and the Particle infrastructure from devices gone rogue.

I know of reports where the default timeout of 5 seconds was too short for some target servers to respond. I think Particle had to adjust that, but only did it for a known set of servers/clients/… (I can’t remember exactly).

@marekparticle may be better able to help out with that.


#3

Good info, ScruffR! Let’s see what Marek says…

As of this morning, some of the requests are getting through. I have 3 devices now sending data every 30 minutes, and one device still only sending data on the hour. It looks like the devices sending data every 30 minutes are getting through ONLY once per hour (on the 30th minute), but when they try to send data on the hour alongside the 4th device, the webhook sleeps. Hopefully that was clear; I’ve attached a screenshot!


#4

Yup, ESOCKETTIMEDOUT is a read timeout, and I believe the 5 second timeout still remains in place (with 3 sets of 3 retries delimited by sleep periods thereafter). I believe @ScruffR’s assertion is correct, though it was before my time at Particle. @SammyG - it’s perhaps worth filing a support request to evaluate whether or not this is a possibility in this instance.

I’d love to see some Device IDs / timestamps in your support request as well, to make sure nothing else is going on!

Edit - OOO simul-reply! Actually, now I really want to see some device IDs.
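As an aside, the device itself can watch for these failures: to the best of my knowledge, the cloud echoes webhook errors back to the device as hook-error/<event> events (and responses as hook-response/<event>), so a subscribe handler can match on that prefix. A sketch of the matching logic, with the handler wiring shown as a comment since it is firmware-only:

```cpp
#include <string>

// Match cloud events like "hook-error/ubi_board/0" against our hook name.
// (Assumption: error events are prefixed "hook-error/<eventName>".)
bool isHookError(const std::string& eventName, const std::string& hookName) {
    const std::string prefix = "hook-error/" + hookName;
    return eventName.compare(0, prefix.size(), prefix) == 0;
}

// On-device this would be wired up roughly like:
//   Particle.subscribe("hook-error/ubi_board", myErrorHandler, MY_DEVICES);
```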


#5

Sent you a message, Marek!


#6

Ahhhh, wow. The graph for this is… quite sad. :frowning:

All looks well within our backend as it presents to me, hm.

But given the patterning here, it looks like a progressive issue with contacting Ubidots.

I am curious about the results of their investigation. I’ll inquire to see if there’s anything I can do for you with respect to this window. Can you stagger these publish()es at all? It… shouldn’t matter, but they are pretty close together, and it seems like something actionable we can do while we look this over.


#7

What kind of stagger? Seconds or minutes?


#8

If it’s within your power, at least 50 seconds.


#9

OK, I can try that. Do you mean have each of the devices send the data at different times, or just add this delay between the three Particle.publish() calls?


#10

SammyG, sorry if my intentions are not clear - the idea is to avoid firing this Webhook off 2-3 times in 10-15 seconds. Like you, I’m unconvinced that this will solve the issue, but it’s just a simple troubleshooting exercise.


#11

I’m going to try staggering the systems 1 minute apart from each other, so they send the data at XX:00, XX:01, XX:02, and XX:03…

I’ll report back.
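For anyone trying the same thing, the per-minute stagger can be keyed off a small device index, something like the sketch below (how each unit learns its index, e.g. a hard-coded constant per device, is an assumption):

```cpp
// Return true when the current minute is this device's staggered send slot.
// baseMinute is the reference send minute (e.g. 58 for the first device);
// deviceIndex 0..3 shifts each unit one minute later, wrapping at 60.
// On-device, currentMinute would come from Time.minute().
bool isMySendMinute(int currentMinute, int baseMinute, int deviceIndex) {
    return currentMinute == (baseMinute + deviceIndex) % 60;
}
```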


#12

OK, so I set up the four devices with a stagger such that they send data at XX:58, XX:59, XX:00, and XX:01.

Here are the results:

[screenshot: webhook console logs]

So the payloads coming in at XX:58 and XX:59 get through, but the XX:00 and XX:01 ones are still having issues. It looks like the Particle cloud also tries the problem webhooks twice… interesting.

Anyway, there’s the update for now!


#13

Ubidots has been very helpful and they’re still working on their end to see if there’s a fix.


#14

Ubidots to the rescue!

They found a way to improve the response time for what I was doing.

Thanks everyone for the help.


#15

Oh wow, that’s fantastic @SammyG! Thanks for keeping us posted!

