I think it’s this
Hello Dup, using Temboo does the trick of pushing data reliably from a Spark Core straight into a spreadsheet.
I will give it a try!!
Hi all. Would everyone be interested in a feature of Atomiot.com that would log the data from your spark to a Google spreadsheet you define?
I have set up an account and my cores are connected. For some reason the graph does not appear — any ideas?
Were you able to read values from your core?
You create graphs from the series screen.
When I click view graph, I see the words “Graph view” along with the name I gave it but no graph.
Hmm. Post or PM me your graph URL. I’ll take a look.
I did exactly as you describe in your example, but the script fails when it is executed by the trigger. The statement “var response = UrlFetchApp.fetch…” does not execute properly; it gives the error message “Execution failed. Unexpected error”. When I execute the URL from that statement in the browser, it gives back the correct answer, just like you show above via Postman. When I look at the Execution transcript in the Script Editor, I can see it waits exactly 60 seconds before the error message is sent. Looks like some sort of timeout on the statement. 60 seconds is also a long time, so it should execute within that window; in the browser it takes less than 5 seconds.
Any ideas what the problem could be?
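One thing worth trying while this gets investigated is wrapping the fetch in a retry loop. This is only a sketch in plain JavaScript: the fetch call is injected as a parameter so the function can run anywhere (in a real sheet you would pass `UrlFetchApp.fetch` and your own Spark API URL), and `maxRetries` is an arbitrary choice. Note that others in this thread report retries do not help with the trigger pipeline, so treat this as a mitigation attempt, not a fix.

```javascript
// Sketch: retry a fetch-like call a few times before giving up.
// fetchFn stands in for UrlFetchApp.fetch (injected so this runs
// outside Apps Script too); url and maxRetries are placeholders.
function fetchWithRetry(fetchFn, url, maxRetries) {
  var lastError;
  for (var attempt = 1; attempt <= maxRetries; attempt++) {
    try {
      return fetchFn(url); // success: hand the response back
    } catch (e) {
      lastError = e; // “Unexpected error” lands here; try again
    }
  }
  throw lastError; // every attempt failed
}
```

In the sheet you would then call something like `fetchWithRetry(UrlFetchApp.fetch, url, 3)` and parse the response exactly as before.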
Also running into very frequent errors (~90%) using the time-based triggers in Google Sheets, while it works just fine when running the script manually. I work over at Google and I’ll see if I can snag someone to give this a look.
Thanks all for the help setting this up and getting this far.
I am getting the same. If I run it manually it works, but even with the retries it fails every time when run from the time-based trigger.
I filed a bug against Google Apps Script, and one Googler pointed me to this issue:
"Apps Script uses two different UrlFetchApp pipelines: one for when the code is run by a user and one for when the code is run by a trigger. The trigger pipeline has some slightly different rules, which is why you are occasionally seeing these errors. These errors are typically related to missing robots.txt files or IP throttling/blocking at some level in your network stack.
We won’t be changing this architecture, but if there are specific URLs that you believe should be working and are failing mysteriously please open an individual bug to track them."
api.spark.io might have some settings that prevent Apps Script from using it. Maybe the Spark team can work with Google to solve it?
At least that gives some sort of explanation! Sparkies… any thoughts on working with Google to solve this?
Interesting! Something about a missing robots file making it angry? I’ll make a task for myself to try seeing if we’re missing anything, and I’ll follow up after.
Hey Dave, any progress on this? I am also getting the “unexpected error” on the UrlFetchApp.fetch. The script works fine when run manually but does not when configured with a 15-minute trigger. I set up some looping to retry, but it fails every time; it never works on a time trigger for me.
In the meantime I am using a PowerShell script to pull the data on my local machine. If anyone is interested, you can check out the PowerShell code here:
This is obviously not ideal, as it relies on the local PC running to log the data. It would be great to get the Google Docs approach working to pull the data, as their servers are presumably more stable.
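For anyone wanting to roll their own poller in another environment: both the Apps Script and the PowerShell version are just making a GET against the Spark Cloud variable endpoint. A minimal sketch in plain JavaScript that only builds the request URL (the actual fetch is whatever your environment provides); the device ID, variable name, and token below are placeholders — use your own from the Spark build site:

```javascript
// Sketch: build the Spark Cloud URL for reading a device variable.
// deviceId, variableName, and accessToken are placeholders, not
// real credentials.
function sparkVariableUrl(deviceId, variableName, accessToken) {
  return 'https://api.spark.io/v1/devices/' +
    encodeURIComponent(deviceId) + '/' +
    encodeURIComponent(variableName) +
    '?access_token=' + encodeURIComponent(accessToken);
}
```

A GET against that URL returns the variable’s current value as JSON, which is what the spreadsheet script parses.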
Sorry about the delay on this, I haven’t had a chance to work on it yet. I’ve been working hard to make sure the Local Cloud can be released as soon as possible. I’ll try to look into this next sprint.
Interesting, thanks flackmonkey — I was looking for a way to dump to a local CSV while this Google Sheets work was ongoing. I’m going to reach out to the Googler in the bug and see if I can get any more specifics from him to help Dave.
Also, I would take your Core ID and access token link off your website, unless you want people messing with your Core.
Ahh, thanks. I didn’t really think through how insecure that was; I assumed it was read-only access.