Can the Spark Core trigger the Reset Pin?

This is not really a valid argument, because the Arduino does not have a complex set of background tasks handling a connection to the internet. Because of that, we need a hardware watchdog timer here. The Arduino couldn't care less if you go into a hard loop in your code forever.

This is a good point, but not a good reason to disable the watchdog. We need to make sure the code handles this case when you call sleep(), and either disables the watchdog… or perhaps it's handled already in hardware, because it makes sense that if I put the micro to sleep, I don't want it to reset in the middle of that sleep. Not sure, but either way, it's part of the bigger picture of working the watchdog timer fixes into the rest of the code.
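For what it's worth, on the STM32 the independent watchdog can't be stopped by software once it has been started, so "disable before sleep" would really mean "reload it right before sleeping and keep the sleep shorter than the timeout". A minimal sketch, assuming Spark.sleep() and the standard-peripheral IWDG_ReloadCounter() are reachable from user code:

IWDG_ReloadCounter(); // start a fresh watchdog window right before sleeping
Spark.sleep(20);      // the 20 s sleep must end inside that window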

This would prompt someone to learn how to use it, but it's also not a valid reason to keep it off. You want it ON, protecting your application from staying offline when you DON'T know any better and do things like create long delays or hard loops in your code.
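The classic offenders look something like this (a made-up illustration, not anyone's real code):

void loop() {
    while (digitalRead(D0) == LOW);  // hard loop: the cloud never gets serviced
    delay(60000);                    // long delay: same problem
}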

Yes, however the watchdog timeout is currently set to 26.208 seconds in my code… and if you waited longer than 10 seconds, the Core is going to miss its time to handshake with the server and they will get out of sync, forcing a reset. You COULD put the handshake code AND the watchdog timer reset in your user code… but that's really just an example of going outside the architecture of the Spark Core for some odd reason, maybe because you don't want to implement a proper state machine. Either way, that can be written up in an example routine to demonstrate how to do that sort of thing, if you need it (see the sketch below).
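If you really wanted to go that route, a minimal sketch might look like this, assuming Spark.process() (or SPARK_WLAN_Loop() on older firmware trees) services the cloud connection and IWDG_ReloadCounter() from the STM32 standard peripheral library is reachable from user code:

void longBusyWork() {
    uint32_t start = millis();
    while (millis() - start < 60000UL) {  // pretend we're blocked for a minute
        // ... do one small slice of work here ...
        Spark.process();       // keep the cloud handshake alive
        IWDG_ReloadCounter();  // kick the watchdog so it doesn't bite
    }
}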

The watchdog timer is actually CURRENTLY in ALL Spark Cores… ENABLED. It won't ever trip, though, based on the way it's cleared: every second through an interrupt service handler. So Spark wants it enabled, but we must help them come up with just the right "tuning" for it that works for all cases. See @zachary's posts above.
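Conceptually it's doing something like this (names assumed for illustration; this is not the actual core-firmware source):

extern "C" void SysTick_Handler(void) {
    static uint32_t ms = 0;
    if (++ms >= 1000) {       // once per second
        ms = 0;
        IWDG_ReloadCounter(); // reloaded every second, the watchdog never gets near its timeout
    }
}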

@rlogiacco I appreciate your comments and feedback… I do hope this sheds some light on the subject for you and hope we can continue to noodle on this problem! :smile:

Thanks mate, I'll keep speaking my mind, hoping anybody reading us can benefit to a certain extent :wink: On top of that, I find these convos interesting and valuable.
BTW, while I’m not at all into the insides of the :spark: I do understand the watchdog and its intrinsic value, just trying to voice the average user.

Would it make any difference if I add that I always meant to have the watchdog active around the internal code, while letting the user control whether he wants it active around his own code?

I know the Arduino doesn't have running code besides the user code (well, excluding the bootloader, but I wouldn't count that) while the :spark: has plenty, but that shouldn't be an end-user concern unless he wants it to be.

Please consider I'm not planning to do anything myself that would let the watchdog kick in, but I'm sure somebody out there will come up with some valid reason. On top of that, I wasn't aware of that 10 s limit for cloud sync… Actually that is a very good point from your perspective: if 10 s is already a boundary for network code, I believe the only answer I'm left with is "but I might want to have it while disconnected"… Corner case, I know…

@BDub Have you tested any of the new updated code that @david_s5 has worked on here: Davids latest Firmware

I'm interested in trying it. I think I loaded it on the Spark Core, but your watchdog is still activated, so I'm not sure if I did it right.

Can we blend his recent improvements with your watchdog feature?

The reason I ask is because my core seems to be losing the connection a lot more than it used to before I loaded your code. But since it resets, it keeps me online anyway. I'm wondering if his new code will keep me online for longer between resets.

Let me know what you think.

I haven’t really been able to test anything that fixes CFOD because I never see CFOD… not reliably anyway.

I would wait until David works in his changes. He’s also touching the IWDG, so hopefully it all works out well! We’re discussing it in the CFOD thread.

Sounds like this is it then; this is the code that we have to wrap around our existing code?

I would really like to try this and have some uptime.

Let us know if this is it or if I have missed something.

Thanks for your work in getting this solution.

@thebaldgeek Yes it will keep your Spark connected to the web.

What you have to do is this:

1 - You're going to have to follow this video to a T. It will take some time, but it will work. The only thing that tripped me up was the part about copying the "PATH" info: make sure there are no "-" characters in the path you copy and paste. You'll know what I'm talking about when you get to the part in the video where they talk about pasting the "Path =" info.

The Core-Firmware readme is also very useful to double check things that you are doing: https://github.com/spark/core-firmware/blob/master/README.md

2 - You will need to download the zip files from these 3 pages, then unzip them into the 3 separate folders that the video above tells you to create. If you download the zip files, you will not have to fetch them via the command-line interface like they do in the video, which makes things quicker and easier; you're accomplishing the same thing, which is putting the downloaded libraries into the 3 folders.

https://github.com/spark/core-firmware
https://github.com/spark/core-common-lib
https://github.com/spark/core-communication-lib

Unzip those files above into the folders below:

core-firmware
core-common-lib
core-communication-lib

After you do this, follow the video until you get to the NetBeans part of the process. Then pay attention: when he shows you the application.cpp file, that is where you will place the code you want to run. Once you compile your code successfully in NetBeans, you will be able to flash the Spark with your code, and @BDub's Watchdog feature will be up and running, keeping you online.

Give it a try when you have an hour or 2 to walk through it all and you should have no problems. Let us know how it goes.


Thanks for the great post @RWB :wink:

Hey RWB,
Since we are testing the same code with Xively, could you post it so I can see it? For some reason, Xively is not picking things up!
Thanks!

#include "application.h"  // needed at the top of application.cpp when building locally

#define FEED_ID "12345"         //note: fake id here..
#define XIVELY_API_KEY "12345"  //note: fake key here

TCPClient client;

int trigger = A7; // this is the pin talking to the ATtiny
int reading = 0; // latest raw reading from the analog sensor pin (A0)
int ledD = D7; // this is our onboard LED pin
unsigned long LastUpTime = 0; // last time we pushed a reading to Xively
uint32_t lastReset = 0; // last known reset time
bool s = true; // (unused)
char whichApp[64] = "READ TEMPERATURE with XIVELY";

// This routine runs only once upon reset
void setup()
{
   //Register our Spark variables and functions here
  Spark.variable("whichapp", whichApp, STRING); // pass the char array itself, not its address
  Spark.variable("reading", &reading, INT);
  Spark.function("degres", tempCalculation);
  Spark.function("volt", analogReading);
  pinMode(A0, INPUT);
  pinMode(ledD, OUTPUT);
  ledStatus(2,100); //Blink
  lastReset = millis(); // We just powered up 


}

void loop()
{


  reading = analogRead(A0);
  // 12-bit ADC counts -> volts, then TMP36-style scaling: degC = (volts - 0.5) * 100
  int temp_calc = (reading*3.3/4095)*100 - 50;

   if (millis()-LastUpTime>2000)
   {
      xivelyTemp(temp_calc);
      LastUpTime = millis();
   }
}

void xivelyTemp(int temperature) {
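    // For reference, the prints below assemble a Xively TCP-API request
    // shaped roughly like this (whitespace differs on the wire):
    //   {"method":"put","resource":"/feeds/<FEED_ID>","params":{},
    //    "headers":{"X-ApiKey":"<XIVELY_API_KEY>"},
    //    "body":{"version":"1.0.0",
    //            "datastreams":[{"id":"Sensor_Data","current_value":"<temp>"}]},
    //    "token":"0x123abc"}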

   //Serial.println("Connecting to server...");
    if (client.connect("api.xively.com", 8081)) 
    {

        // Connection succesful, update datastreams
        client.print("{");
        client.print("  \"method\" : \"put\",");
        client.print(" \"resource\" : \"/feeds/");
        client.print(FEED_ID);
        client.print("\",");
        client.print("  \"params\" : {},");
        client.print("  \"headers\" : {\"X-ApiKey\":\"");
        client.print(XIVELY_API_KEY);
        client.print("\"},");
        client.print("  \"body\" :");
        client.print("    {");
        client.print("      \"version\" : \"1.0.0\",");
        client.print("      \"datastreams\" : [");
        client.print("        {");
        client.print("          \"id\" : \"Sensor_Data\",");
        client.print("          \"current_value\" : \"");
        client.print(temperature);
        client.print("\"");
        client.print("        }");
        client.print("      ]");
        client.print("    },");
        client.print("  \"token\" : \"0x123abc\"");
        client.print("}");
        client.println();

        ledStatus(2, 500);        
    } 
    else 
    {
        // Connection failed
        //Serial.println("connection failed");
        ledStatus(4, 500);
    }


    if (client.available()) 
    {
        // Read response
        //char c = client.read();
        //Serial.print(c);
    }

    if (!client.connected()) 
    {
        //Serial.println();
        //Serial.println("disconnecting.");
        client.stop();
    }

    client.flush();
    client.stop();

}


void ledStatus(int x, int t)
{
    for (int j = 0; j <= x-1; j++)
    {
        digitalWrite(ledD, HIGH);
        delay(t);
        digitalWrite(ledD, LOW);
        delay(t); 
   }
}

int tempCalculation(String command) {
    int tempCalc = (reading*3.3/4095)*100 - 50;
    return tempCalc;
}

int analogReading(String command) {
    return reading;
}

@BDub

Hey I have this new temp sensor code worked up and now I need to flash the Spark Core with your WatchDog Firmware + the Temp Code.

I know how to add my main program to the Application.cpp file in NetBeans.

I think I put the .h file in the inc folder which is inside the core-firmware folder right? That’s where all the other .h files are.

Now in the Spark IDE there are 2 .cpp files. My main loop file and then there are the SHT1x.h and SHT1x.cpp files that are named the same. Where do I put the SHT1x.cpp file? In the same folder as Application.cpp file?

Hope you can follow what I’m asking.

Yes, I do. You need to put SHT1x.h in the inc folder, and SHT1x.cpp in the src folder along with application.cpp. Then edit the build.mk file in the src directory in Notepad quickly… just right-click, edit. Then find the line where you see application.cpp, copy that line, paste it under that one, and rename application.cpp to SHT1x.cpp in the new line. Save, compile, Bob's your uncle :wink:

edit: don't forget to #include "SHT1x.h" in your application.cpp
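For reference, the end result in src/build.mk should look something like this (line format recalled from memory of the core-firmware tree, so double-check against your copy):

CPPSRC += $(TARGET_SRC_PATH)/application.cpp
CPPSRC += $(TARGET_SRC_PATH)/SHT1x.cpp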


Hi BDub
FYI, I am a newbie at this stuff
I noticed there is a newlib_stubs.cpp, however there is no newlib_stubs.h.
Do you know why that is?
Thanks!
Dup

@Dup I have no idea. Do you need to do something with that file?

@BDub Hey, I received some feedback from @Dup, who is also running your Watchdog firmware, and he told me that the core is resetting a lot more often than it ever did running the stock Spark firmware. I noticed the same thing: the Spark Core resets a lot more often with your Watchdog firmware.

I could get the stock firmware to stay online for almost 24 hours before a reset sometimes; it would stay up for 6 hours easily without issues. With the Watchdog firmware it was resetting every few hours on average. I have my RED LED turn off 30 minutes after a recovery, so I can tell how often it's been resetting. Because the reset process happens so quickly and reliably, you really couldn't tell in a sensor-logging application, and I didn't really care as long as it stayed connected without having to manually reset it.

I'm just curious if you have any thoughts about why the Watchdog is catching a reset fault a lot more often than the stock firmware was freezing due to a fault? It's like the Watchdog is a lot more sensitive somewhere, which causes resets a lot more often than the stock firmware.

How often do you see the RED breathing LED after a reset? Or are you even watching it during the day?

Just figured I would throw it out there.

Interesting data. I would say it's not the watchdog that causes things to reset more… more likely something is blocking the watchdog from being reset at least every 26.208 seconds… and that blocking action then causes the watchdog to reset the Core.

If you were able to stay online and actually communicating for almost 24 hours before, I would say the Core can be blocked somewhere for longer than 26.208 seconds without completely losing its connection to the cloud; it recovers on its own (without a watchdog). However, since that is not always the case… and sometimes it can't recover (e.g. CFOD)… you are still better off with the watchdog running and resetting if the system is blocked for more than 26.208 seconds.
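For the curious: 26.208 seconds is exactly what you get from the STM32 IWDG with the nominal 40 kHz LSI clock, the /256 prescaler, and the full 4095 reload value (my reading of the configuration, but the arithmetic lines up):

timeout = reload * prescaler / f_LSI = 4095 * 256 / 40000 Hz = 26.208 s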

I haven’t been testing it myself because I don’t have CFOD problems… so it would likely never reset.

All of this is also kind of a temporary fix (in lieu of the ATtiny85 solution) until the system is more stable and the watchdog need not come into play (and reset the core repeatedly).

Hey BDub,
No, it was just something I noticed while I was running through the folders. Just trying to help identify issues… maybe I should leave that to the experts :smile:
Dup

Yeah, the Watchdog solution is the only working fix for the Spark freezing issue out there, so it's certainly better than any other option at the moment. I don't want you to waste any more time on it, but I did find it interesting that somebody else said the same thing happened to them.

Hopefully the TI Genius Bar can figure it all out tomorrow :smiley:


@RWB Did you try the spark_master_new? If so, please pull a fresh copy from core-firmware; that test app was using too much memory. See https://community.spark.io/t/bug-bounty-kill-the-cyan-flash-of-death/1322/373

@david_s5 I tried to load it for 2 hours but couldn’t get it to work. I’ll wait for the Spark Update Rollout.

Wow! I haven't read through this humongous thread (I probably don't have enough life left to do that…!) but I'm hoping that someone here has flagged up that the System.reset() call doesn't actually seem to do the same as pressing the reset button.

In my program some of the initialisations re-run after calling System.reset(), but some don't. Unless I am misunderstanding something, shouldn't the page linked below be updated or explain this more fully?

http://docs.spark.io/firmware/#system-reset

I see that this thread offers an alternative which I will try.

I have a question about the code on that page (copied below)

lastReset will surely always be set to zero (or very close to zero) after a reset - so why bother with it? Perhaps I am missing something?

// **********
uint32_t lastReset = 0;

void setup() {
    lastReset = millis();
}

void loop() {
    // Reset after 5 minutes of operation
    // ==================================
    if (millis() - lastReset > 5*60000UL) {
        System.reset();
    }
}