Error when publishing on local server

Every time I publish something to my local server I get the error below in my server window. I still see the correct data in the CLI windows. I am not sure if this is a problem or not, but I thought I’d ask. I am running a Photon on a Raspberry Pi local server.

Your server IP address is: 192.168.0.10
server started { host: 'localhost', port: 5683 }
Connection from: ::ffff:192.168.0.24, connId: 1
on ready { coreID: '2b0034001547343339383037',
  ip: '::ffff:192.168.0.24',
  product_id: 6,
  firmware_version: 65535,
  cache_key: '_0' }
Core online!
onSocketData called, but no data sent.
routeMessage got a NULL coap message  { coreID: '2b0034001547343339383037' }
1: Core disconnected: socket close false { coreID: '2b0034001547343339383037',
  cache_key: '_0',
  duration: 4.168 }
Session ended for _0
Connection from: ::ffff:192.168.0.24, connId: 2
on ready { coreID: '2b0034001547343339383037',
  ip: '::ffff:192.168.0.24',
  product_id: 6,
  firmware_version: 65535,
  cache_key: '_1' }
Core online!

@icedMocha Did you make any progress on this?

I’m having the exact same problem.
Running the latest 0.4.9 on the Core, with any Node version.

I have not. I’ve given up on publishing for now and just read variables instead.

I have this problem too… I can’t figure out where the problem is!

Same, I’ve given up trying to debug/fix this. Too bad no one is working on the local server anymore.

This issue prevents my devices from being updated to the latest firmware version, and for that reason I’m considering reimplementing the protocol using MQTT.

Which firmware version are you working with?

I’m on 0.3.4

I have found it!

In spark-protocol/clients/SparkCore.js replace

this.sendReply("EventSlowdown", msg.getId());

with

this.sendReply("EventAck", msg.getId());

Nice, I’ll test it in a bit.
Are you using the latest 0.5.x?

Recheck the previous post… I have corrected the filename.

Yes, I’m on 0.5.x.

Nice, that worked. I’m gonna keep testing this patch.

Interesting that the “slow down” is the cause of the problem.

The problem is not the slowdown itself; it’s that the publish function called before it returns nothing (i.e. a falsy value).
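
To make that concrete, the flow is roughly like this. This is only an illustrative sketch with made-up names, not the actual SparkCore.js source:

// Illustrative sketch only, not the real spark-protocol code.
// The point is that a falsy result from the publish step is what
// reaches the reply below, so the reply name is all the patch changes.
function onEvent(core, msg, publishEvent) {
  var ok = publishEvent(msg); // here this returned undefined, i.e. falsy

  if (!ok) {
    // before the patch: core.sendReply("EventSlowdown", msg.getId());
    core.sendReply("EventAck", msg.getId()); // after the patch
    return;
  }

  core.sendReply("EventAck", msg.getId());
}

// Tiny stand-ins so the sketch runs on its own:
var fakeCore = { sendReply: function (name, id) { console.log(name, id); } };
var fakeMsg = { getId: function () { return 1; } };
onEvent(fakeCore, fakeMsg, function () { /* forgets to return true */ });

Changing the reply name just works around that missing return value.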

Ok, first trial tests didn’t go that well.

I have my Particle receiving BLE connections, and those are reported through the server.
So I can have 8+ messages per second being pushed.

If I have only one device connected, the messages go through (1/sec), but as soon as I bump it up to more than two, the messages stop. I see the Particle “sending” them, but none come through the server.

Any idea why that could be? By the way, I removed the throughput limit that the original code had.

This still sounds as if you were hitting the documented limit of one Particle.publish() per second (with a burst of up to four, followed by a four-second pause).
Where did you remove that limit?

That limit is part of the Particle firmware on your device. To lift that restriction for your local server, you'd need to alter the "framework" build locally and reflash that tweaked firmware.
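
To picture how that limit behaves, here is a rough JavaScript model of it (at most four publishes in any four-second window). This is purely illustrative and not the firmware code:

// Rough model of the documented publish limit, not the firmware implementation.
function makePublishLimiter(maxEvents, windowMs) {
  var stamps = [];
  return function tryPublish(now) {
    now = now === undefined ? Date.now() : now;
    // keep only the publishes that happened inside the rolling window
    stamps = stamps.filter(function (t) { return now - t < windowMs; });
    if (stamps.length < maxEvents) {
      stamps.push(now);
      return true;  // publish goes out
    }
    return false;   // this is where the device rate-limits you
  };
}

// at most 4 publishes per rolling 4-second window:
var allowed = makePublishLimiter(4, 4000);
console.log(allowed(0), allowed(10), allowed(20), allowed(30), allowed(40));
// -> true true true true false

With 8+ events per second across several devices you blow through that budget almost immediately, which would match the symptoms you describe.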

Hold on, you might be right. I honestly don’t remember if I removed that limit in firmware or the spark-server.

Where is it in the firmware? I can’t find it (again).

In communication/src/publisher.h, the function is_rate_limited must return true.
In communication/src/spark_protocol.cpp, the function “bool SparkProtocol::send_event” is what checks the rate per second.

So far so good.

Guys, thank you for the help.
I finally got rid of node 0.10 and old firmware dependencies! :sunglasses:

EDIT: I forked and stripped out the spark-protocol git repo to be able to use it as a spark-server dependency @ https://github.com/mfferreira/spark-protocol
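
In case anyone wants to try the fork: npm can pull a dependency straight from GitHub, so something along these lines in spark-server's package.json should work (adjust to your own setup):

"dependencies": {
  "spark-protocol": "git+https://github.com/mfferreira/spark-protocol.git"
}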


Hi @straccio, shouldn't this return false instead?

What an excellent find for SparkCore.js – I'll be testing this out as well!

@marfife Indeed, it's a pretty lonely space for spark-server – you can tell just from the name (still "spark", not Particle) that it hasn't seen any official updates, but at least we've been able to keep it going through these incremental bugfixes from the community.