Trying to connect to anything other than Spark Cloud. Spark.process() taking over

Got my first Spark Cores yesterday, the black ones. Super excited, I was able to hook them up to the Spark Cloud in about 2 minutes, very very slick implementation.

However, things went south quickly when I attempted to talk to a local server on my LAN using TCPClient. I started getting some very erratic behavior, at times unable to re-flash from the Cloud IDE.

After several hours of debugging, attempting to ping my local server as well as Google with no success, I finally tried setting SYSTEM_MODE(MANUAL), and BAM! No problems whatsoever: pings, MQTT, TCPClient all worked flawlessly.

It appears that Spark.process() is monopolizing the network connection and not letting anything else use it; that's my newb assessment. Of course I had to set up USB-based flashing, because Cloud flashing won't work without Spark.process(), but I wanted to do that anyway.
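For anyone following along, a minimal sketch of what running in MANUAL mode looks like, assuming the Spark firmware API of that era (SYSTEM_MODE(), WiFi.connect(), Spark.process()); this is an illustration, not the poster's actual code:

```cpp
#include "application.h"

// MANUAL mode: the firmware no longer connects to the cloud on its own,
// so the user loop gets the network to itself.
SYSTEM_MODE(MANUAL);

void setup()
{
    WiFi.on();
    WiFi.connect();          // join the local network only; no cloud handshake
    while (!WiFi.ready()) {
        delay(100);
    }
}

void loop()
{
    // Local TCP/MQTT traffic can run here without the cloud loop interfering.
    // To get cloud features (including OTA flashing) back, you would call
    // Spark.connect() once and then service the connection each pass:
    //   if (Spark.connected()) Spark.process();
}
```

Without that Spark.process() call, the cloud connection is never serviced, which is why OTA flashing stops working in MANUAL mode and USB flashing becomes necessary.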

Not sure if this will help anyone else, and not sure if I found a bug or maybe a faulty processor. Maybe the Spark guys have some insight.

Anyways, amazing product, glad I met you guys at OSCON, and can’t wait to really build something.

Hi @entropealabs

I am sorry that you had a bad out-of-the-box experience. Maybe @Dave or @zachary can comment more, but I wanted to say that I think your experience was unusual.

For my “development” core that sits on my desk, I run an app that scrapes a few web pages (weather, news) and displays some items on an LCD display while also providing local temperature from a DS18B20 sensor as a Spark.variable() and published via Spark.publish(). This app runs anytime I am not actively working on something and just works 24/7. It does sometimes reboot but I would guess the mean uptime is more than a week if I don’t touch it.

I have another core that runs an LED clock and provides temperature on the Spark cloud that also never seems to have problems.

Spark is similar to Arduino but is not an Arduino, and sometimes libraries that worked great over there need a few changes over here.

Lots of folks have problems, for instance, with TCPClient when they forget to read and discard data from the other connected computer (typically for an HTTP GET request). On Spark, the TI CC3000 has a set of internal packet buffers, and if you never read the data, the TI part will eventually overflow and your core will reset. On Arduino boards with the WizNet parts, I think the packet buffer just gets overwritten silently and data is lost, but there is no fatal error. Neither strategy is completely "right" or "wrong": it just depends on the application's needs. But code written for the "never read" strategy obviously fails on Spark.
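To illustrate the "read and discard" pattern described above, here is a hedged sketch using the Spark TCPClient API of that era; the server address and request are made up for the example:

```cpp
#include "application.h"

TCPClient client;
byte server[] = { 192, 168, 1, 10 };   // example LAN address, not from this thread

void setup()
{
    Serial.begin(9600);
    if (client.connect(server, 80)) {
        client.println("GET / HTTP/1.0");
        client.println();
    }
}

void loop()
{
    // Drain the response so the CC3000's internal packet buffers are
    // freed. Skipping this read loop is the failure mode described
    // above: the buffers eventually overflow and the core resets.
    while (client.available()) {
        char c = client.read();
        Serial.print(c);
    }
    if (!client.connected()) {
        client.stop();
    }
}
```

The key design point is that draining is the client's responsibility on this platform; code ported from WizNet-based Arduino shields often omits it because those parts fail silently instead.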

There are problems and bugs, of course, but there is a great community here that can help.

So my advice is to give the cloud another chance, maybe starting with something simple and try again.

Looks like it may have been a network issue the other night, I’m not able to replicate the issue today. Here’s the sample code.

#include "application.h"

IPAddress server(74, 125, 137, 113);

void setup()
{
    Serial.begin(9600);
    pinMode(D7, OUTPUT);
    delay(1000);
    Serial.println("connecting...");
}

bool state = HIGH;

void loop()
{
    digitalWrite(D7, state);
    Serial.println(WiFi.ping(server));
    state = (state) ? LOW : HIGH;    
    delay(1000);
}

I’m also able to communicate with my local MQTT server just fine now, and I don’t really need the Spark Cloud for anything now that USB flashing is working as well!
