Websockets losing connection to server

I am using the websocket library with the following code:

#include "Spark-Websockets/Spark-Websockets.h"

WebSocketClient client;
char server[] = "xxx.xxx.xxx.xxx";
int port = 4000;

int sensor0 = 0;
int sensor1 = 1;
int sensor2 = 2;
int sensor3 = 3;
int sensor4 = 4;
int sensor5 = 5;

int sensorState0;
int sensorState1;
int sensorState2;
int sensorState3;
int sensorState4;
int sensorState5;

int bounceDelay = 150;

void setup() {
  Serial.begin(9600);
  client.connect(server, port);
  pinMode(sensor0, INPUT_PULLUP);
  pinMode(sensor1, INPUT_PULLUP);
  pinMode(sensor2, INPUT_PULLUP);
  pinMode(sensor3, INPUT_PULLUP);
  pinMode(sensor4, INPUT_PULLUP);
  pinMode(sensor5, INPUT_PULLUP);
}

void loop() {
  client.monitor(); // should reconnect if the connection drops

  sensorState0 = digitalRead(sensor0);
  sensorState1 = digitalRead(sensor1);
  sensorState2 = digitalRead(sensor2);
  sensorState3 = digitalRead(sensor3);
  sensorState4 = digitalRead(sensor4);
  sensorState5 = digitalRead(sensor5);

  if (sensorState0 == LOW) { client.send("0"); delay(bounceDelay); }
  if (sensorState1 == LOW) { client.send("1"); delay(bounceDelay); }
  if (sensorState2 == LOW) { client.send("2"); delay(bounceDelay); }
  if (sensorState3 == LOW) { client.send("3"); delay(bounceDelay); }
  if (sensorState4 == LOW) { client.send("4"); delay(bounceDelay); }
  if (sensorState5 == LOW) { client.send("5"); delay(bounceDelay); }
  // client.send("ALIVE!");
}

void onMessage(WebSocketClient client, char* message) {
//   Serial.print("neChimes server returns (echoes): ");
//   Serial.println(message);
}

The Core connects to the cloud and I can flash it fine. When the Serial port (debugging) is activated, I see the pins being read by the Core. The only problem is that if the pins are grounded (a button press, basically) in rapid succession, the communication to the WebSockets server is severed (or hangs; I don’t know which). Yet I can still talk to the server with wscat, so the server is still up and fine. Again, the pins are being read (per the Serial output), but the client.send() calls are being ignored.

I see that the client.monitor() function/method checks .connected() and will try to reconnect if there is any problem. This doesn’t appear to be happening, however.

Also, if two pins are grounded (pressed) at the same time, the Core reboots (?), flashes the red SOS, then 1 flash (fault). Not sure what’s up with that, though it might be indicative of what is happening internally. A reboot (manual, or otherwise) fixes the problem momentarily, but then the problem occurs again.

I am coming from a Processing background with a bit of Arduino in there as well. Is there a better way to poll the pins?

Any ideas would be greatly appreciated.


Just an idea; have you considered using interrupts for your buttons? I’m not sure if that’s the right solution for your case, but it seems like a likely use case for buttons.

Yes! I was thinking of that as well. I agree, that would be a logical next step. Maybe it would clean up the code a bit (or rather, how the code executes, so as not to lose connectivity to the WebSockets server)? There are six switches being triggered in the setup; I'm not sure whether there are six interrupt-capable pins available on the Core.

That said, I am still not sure what is happening on the Core: why is it losing/dropping the WebSocket server connection? Is there a buffer being overrun somewhere? Is there any way to log this?


OK, into the interrupts and such. Here’s my code:

// This #include statement was automatically added by the Spark IDE.
#include "Spark-Websockets/Spark-Websockets.h"

WebSocketClient client;
char server[] = "";
int port = 4000;

void blink(void);

void setup() {
  client.connect(server, port);
  pinMode(D0, INPUT_PULLUP);
  attachInterrupt(D0, blink, FALLING);
  client.send("leaving setup()"); // this never gets sent
}

void loop() {
}

void blink() {
  client.send("tada"); // this gets sent once
}

It does send the ‘tada’ message once and then no more. The core continues to breathe for about 4-5 seconds, then SOSs with a Code of 11 (Invalid Case!?!). So, I think I can now put it down to problems in the WebSockets library.

Is anyone successfully using the WebSockets library to open a robust/persistent socket and ‘send’ data from the Core to a broadcast server? I see examples where a website speaks to the Core, but not so much a Core pushing to a website in real time.

Any suggestions would be welcome.



Does sound like a library problem.

Have you looked into Spark.publish()?


I would be very interested in whether others have gotten this library to work. I have to admit to being somewhat skeptical that it will work as a websocket client. Here’s why:

The websocket protocol requires that frames sent by clients mask their payload - that is, each byte sent in the payload must be XORed with one of four random bytes. The four bytes are chosen when the frame is constructed and sent in the frame header. The idea here, I think, is that it makes it hard for people to read a stream and make sense of it (although it’s a pretty weak form of encoding, so it’s really just meant to keep out busybodies, not serious hackers - that’s my interpretation, FWIW).

Anyway, the Arduino websocket library that is the basis of the Spark port does NOT send masked data (see lines 215 and 457 of the library code). According to the RFC 6455 standard, a websocket server is supposed to reject any unmasked client frame and shut down the socket. Here’s the money quote (the sentence between the asterisks is the key):

5.1. Overview

In the WebSocket Protocol, data is transmitted using a sequence of frames. To avoid confusing network intermediaries (such as intercepting proxies) and for security reasons that are further discussed in Section 10.3, a client MUST mask all frames that it sends to the server (see Section 5.3 for further details). (Note that masking is done whether or not the WebSocket Protocol is running over TLS.) *The server MUST close the connection upon receiving a frame that is not masked.* In this case, a server MAY send a Close frame with a status code of 1002 (protocol error) as defined in Section 7.4.1. A server MUST NOT mask any frames that it sends to the client. A client MUST close a connection if it detects a masked frame. In this case, it MAY use the status code 1002 (protocol error) as defined in Section 7.4.1. (These rules might be relaxed in a future specification.)

Now, I’m sure there are servers that will just roll with it and let the packet through, but most servers are likely to be compliant.

There is one more possibility, however, which I’ve run into in some situations: when you’re stuffing lots of data at once through a websocket, it’s possible that you can cause a buffer overflow on the server side. A well-implemented server should not let that happen, but I’ve experienced it before so there’s that.

But I’m more likely to believe that it’s because the library is not RFC 6455-compliant.

I have been working on a more-compliant version of the library. I’ve got it working, but it’s not fully tested. If you’d like, you’re welcome to take a look and give it a try. I’ve put it on my github site at the following URL:

It doesn’t have a proper example, but I’ve put some of the various functions I use in the Example functions.ino file. I was planning to post it as a library at some point anyway - your clarion call simply forced my hand early ;-).

Lemme know what you think.


PS - if I had to guess why your Core is crashing, it’s because your websocket (or tcp or stream or something) buffer is overflowing. Just a pure guess, however…

Yeah, that would be a solution, and I should investigate it more in depth. But it’s a closed solution (with regard to my project).

I am an artist working on a globally distributed sensor array that should be accessible by anyone from anywhere via a website/page, which is apparently workable via Spark.publish(). Though, I believe, I would have to publish my token and Core ID to everyone (since they would be in the HTML document), and that is a problem.

Now, if READ and WRITE tokens/secrets were separate (à la Xively or Twitter), I would not mind publishing my keys. Or is this currently the case?

See, I need a closed system for management of the installation on one side, and an open system for public participation on the other side. Caveat, all this needs to be as close to real-time as possible (real-time minus network delay).

Any suggestions? I will look into .publish() and see if I can hack a solution.




I will give this a gander over the coming weekend. If you could supply some examples in the interim (basic is fine, client.read/write) it would be helpful.

I think the masking issue might be the thing, as the protocol (and you) point out:

I will let you know how it goes in a few days.

All the best,


If you had a server back end that had the Spark API credentials, then a webpage with websockets to your server back end, that would be secure. :+1:

Yes! We were JUST THINKING of this!

Bravo, Zachary!

I will let you know how it goes,



Yep. That’s what I’m working on more or less. But I’m also looking into taking it a little further.

I’m using my server to manage all interaction with Cores - obtaining access tokens, validating that the device is properly set up in the database on my server, etc. But rather than relying on REST for communication between server and Cores, I’m opening a websocket. Why do that? It gives me the ability to move more data, and with greater flexibility.

For instance, I want to send an image from a Core to the server. I can do that by having the server make a REST call to a Core to execute a function, and then having that function send an HTTP POST with the image data to the server. This is potentially a security issue (no HTTPS support on the Core), and has the additional overhead of an HTTP POST.

Instead, I will try to open a websocket between Core and server, and send messages and data directly back and forth. Security should be better, because I use the Spark API for validation and the websocket is opened by the Core, not the server (and the server only accepts the websocket from a validated Core - I can even pass the bearer token from the API and have the server check it for an extra layer of validation).

Once that’s done, I can streamline communications between server and Core, and have a secure two-way channel that allows for more information to be transferred (rather than just an int result code from calling a function).

That’s the idea. If I get it up and working, I’ll post the example. If not, I can always fall back on the POST approach. And at some point, I’ll share more about the device we’re developing. It’s actually a pretty cool deal - it’s a point-of-care diagnostic that can be used for Ebola (among other things). We’re in a major sprint to get ready for a meeting with CDC in the next couple of weeks…



That sounds exciting, best of luck with your sprint!

I look forward to seeing your solution, and will begin coding on my own tonight.

To all who have contributed so far to this thread, I appreciate the suggestions/expertise. Looking forward to continuing the discussion in due course.

Happy coding!


That sounds like a great plan @leo3linbeck. :+1:

One thing to keep in mind is that your websocket will have authenticity (both sides can be confident about the identity on the other end), but it will not have secrecy (it can be sniffed) or integrity (if messages are altered, the receiver does not know).

This may be perfectly acceptable in your case; I just want to make sure you’re not missing it. :smile:

Good points. We don’t pass any patient-specific information, but it’s still not optimal. I’ve been meaning to do a deeper dive into the Spark protocol to see if there’s a way to use that for direct Core-server communication beyond the Spark API. At 30,000 feet it looks like a great ultimate solution, but I’m not familiar enough with how it would work to feel comfortable making the switch at this point. But any thoughts or ideas on how to use your technologies outside of the Spark API would be greatly appreciated.

Thanks for all you’re doing to build a robust community, and for the great work done to-date on the Spark Core.




It is really working well. I have a chime sensor actuated by the wind at my home here in Chicago. The Core is taking the data in and publishing it to the Spark Cloud. Node.js (on a private server) is listening for the Core’s events and re-publishing the data to the Internet, and a simple visualization/sonification of the activity is carried out on the website.

And the access tokens and secrets stay hidden behind the server. I am going to leave the sensor up for the week to check the robustness of the server/core/etc. There is a lot more to go on the design end for the installation, but you can see it in action here:


Thanks for the help/ideas/etc.



Nice! I thought maybe it wasn’t working for a minute, but then there must’ve been a gust of wind because I got some multi-colored circles and chime-like sounds. Lovely! Way to go!


Did you use my websocket library? If so, I will go ahead and publish it to the community so it’s available in the IDE. Also, if you did, did you make any changes to the code?

Congrats on getting your chimes going. Ring the bell! :wink:



I just downloaded your library on Sunday and still need to have a look at it. So, in this instance, I have not used your library. I hope to be able to give it a go on Friday after I get over/through this week. Use of your library would enhance my project greatly, since I would not be limited by the throttling of the data stream by the Spark Cloud servers. That throttling is why the stream occasionally goes silent: it’s been a very windy November so far in Chicago...

I will get back with you.


Great, with these network things, I always wonder if I am the only person who can see it!

I am wondering about the throttling of the stream on the Spark server side. I understand why it’s taking place, but I was curious whether, if I had a personal server running (you know, going my own cloud route, using Spark’s open software), this throttling would still be in place. That is, is it baked into the personal cloud package, or is it being throttled at some other point?

Also, and I would surmise others have asked, are there any plans to make the WebIDE open as well?

I have really enjoyed this little board, well done!

All the best,