Sorry for the long post. The increased timeout has no effect.

The significant increase in TIMEOUT reports from devices happens when the upload webhook's response, which comes back from the service bus and is forwarded to the device to confirm the upload, is not received by the device.

Below is an example of what it should look like. The timestamps are not a perfect match since they come from different services. First comes our on-device log, followed by the SSE feed; "X" marks the OK ack back to the device.
84K -78 i15 ^0
071618:33:44 >RESP
071618:33:50 >RESP
071618:33:56 >RESP
071618:33:58 >RESP <<< 4/4
84K -78 i15 ^0
2025-07-16T18:34:16.804 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:33:42Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:34:16.906 particle-internal hook-sent/HLP1up
2025-07-16T18:34:17.325 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
2025-07-16T18:34:18.864 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:33:42Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:34:18.937 particle-internal hook-sent/HLP1up
2025-07-16T18:34:19.371 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
2025-07-16T18:34:24.926 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:33:42Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:34:24.991 particle-internal hook-sent/HLP1up
2025-07-16T18:34:25.320 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
2025-07-16T18:34:30.699 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:33:42Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:34:30.766 particle-internal hook-sent/HLP1up
2025-07-16T18:34:31.159 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
Here is an example where the webhook reply is not received, resulting in a TIMEOUT log entry. I have included the connection log to show that the signal strength is fairly OK.
071618:48:44 >RESP
071618:48:46 >RESP
071618:48:48 >RESP
071618:49:14 NORESP <<< results in a timeout log entry
071618:54:15 >RESP
071618:54:15 ^18:48
84K -78 i15 ^0
2025-07-16T18:49:16.817 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:48:43Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:49:16.925 particle-internal hook-sent/HLP1up
2025-07-16T18:49:17.369 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
2025-07-16T18:49:18.895 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:48:43Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:49:19.045 particle-internal hook-sent/HLP1up
2025-07-16T18:49:19.397 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
2025-07-16T18:49:20.885 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:48:43Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:49:20.991 particle-internal hook-sent/HLP1up
2025-07-16T18:49:21.329 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
[...]
2025-07-16T18:49:48.114 e00fce68e15685ca4a338b64 HHH1log Upload response: TIMEOUT
[...]
2025-07-16T18:54:47.962 e00fce68e15685ca4a338b64 HLP1up "data":"{\"T\":\"2025-07-16T18:48:43Z\",\"R\":-78,\"I\":15,\"D\":\"{...
2025-07-16T18:54:48.073 particle-internal hook-sent/HLP1up
2025-07-16T18:54:48.491 particle-internal hook-response/HLP1up_e00fce68e15685ca4a338b64/0 X
2025-07-17T06:05:02.433Z,LTE,H3G,238,06,50202,58131056,LTE,83.33,%,-80,dBm,RSRP,37.5,%,-13,dB,RSRQ,connected,0,0,2,unknown,connected,0,1,0,none,26,14,0,387,0,-210,unknown,unknown,43248,81876,166300,Open,ok,32428,148
2025-07-17T00:05:02.464Z,LTE,H3G,238,06,50202,58131006,LTE,57.5,%,-92,dBm,RSRP,62.5,%,-9,dB,RSRQ,connected,0,0,2,unknown,connected,0,1,0,none,25,14,0,5896,0,-210,unknown,unknown,21648,81876,166300,Open,ok,10828,52
2025-07-16T18:05:02.389Z,LTE,H3G,238,06,50202,58130976,LTE,40,%,-99,dBm,RSRP,50,%,-10,dB,RSRQ,connected,0,0,1,unknown,connected,0,1,0,none,15,13,0,36833,0,-210,unknown,unknown,48,82004,166300,Open,ok,37,2,725
2025-07-16T17:16:26.632Z,LTE,H3G,238,06,50202,58131006,LTE,57.5,%,-92,dBm,RSRP,62.5,%,-9,dB,RSRQ,connected,0,0,1,unknown,connected,0,1,0,none,7,3,0,5836,0,-210,unknown,unknown,11,81916,166316,Open,ok,5,2,
2025-07-16T13:47:56.777Z,LTE,H3G,238,06,50202,58130996,LTE,52.5,%,-94,dBm,RSRP,37.5,%,-13,dB,RSRQ,connected,0,0,1,unknown,connected,0,1,0,none,7,0,0,273,0,-210,unknown,unknown,5,81844,166316,Open,ok,5,6,
I was lucky to catch this on an office device uploading 4 chunks every 15 minutes; it showed this behaviour on average every 3 batches of uploads for a few hours. It happens at any of the four chunks in a batch.
It also happens on devices with just one small upload once an hour. The phenomenon occurs across device IDs, NOs, countries, and times of day, on Device OS 2.2.0, 4.2.0, and 6.2.1, and on both 2-year-old and 1-year-old device code. It naturally happens more frequently on mobile units, but also randomly, over time, across stationary units with excellent signal.
The excellent included libraries PublishQueuePosixRK and BackgroundPublishRK may have covered this for some time. But since early May there has been a tendency for some units to stop receiving the webhook reply for minutes up to hours, resulting in customer alarms. In those cases the connection is often working well, as the device's publishing of the TIMEOUT log goes through.
A working but heavy work-around has been for our server to reset a device when it reports TIMEOUT, preventing customer issues (this includes a reset of the cloud session). I am currently testing an updated device-side work-around, a self-reset that also includes the cloud session, as a lighter and more precise alternative.
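For illustration only, the core of that device-side work-around could be sketched roughly as below. This is plain, platform-independent C++ with hypothetical names (UploadTracker, kAckTimeoutMs are my inventions, not the production code); on the actual device, the point where checkTimeout() fires is where the TIMEOUT would be logged and a self-reset including the cloud session would be scheduled via the Device OS APIs.

```cpp
#include <cstdint>

// Hypothetical sketch of the ack-timeout logic: after publishing an upload
// chunk, wait for the webhook reply (the hook-response "X" ack); if none
// arrives within the window, report TIMEOUT exactly once so a self-reset
// (including the cloud session) can be scheduled.
constexpr uint32_t kAckTimeoutMs = 30000;  // assumed ~30 s ack window

struct UploadTracker {
    uint32_t publishedAtMs = 0;  // when the chunk was published
    bool waitingForAck = false;  // true between publish and hook-response

    void onPublished(uint32_t nowMs) {
        publishedAtMs = nowMs;
        waitingForAck = true;
    }

    void onHookResponse() {  // webhook reply received: all good
        waitingForAck = false;
    }

    // Returns true exactly once when the ack window has expired, i.e. the
    // moment to log TIMEOUT and schedule the self-reset.
    bool checkTimeout(uint32_t nowMs) {
        if (waitingForAck && nowMs - publishedAtMs >= kAckTimeoutMs) {
            waitingForAck = false;  // report only once per upload
            return true;
        }
        return false;
    }
};
```

The one-shot flag keeps the device from resetting repeatedly while the reply is still missing; a single reset per missed ack matches what the server-side work-around already does.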
The consequences of this have been experienced since early May. I remember we experienced and reported a more significant lack of webhook replies on the platform maybe 4? years ago, and some time later it was suddenly fixed.

This time it happens a varying few hundred times out of around 8,000 uploads per 24 h, but enough to disrupt the service before the work-arounds.