Documentation Needs

BTW, just pushed my 4th Docs commit, from a local change this time instead of the GitHub editor. So easy! Much nicer editing in Sublime as well. Also, previewing the changes in a real local copy of Docs before pushing to master is priceless. Spark :spark: makes Documentation :books: FUN!

That’s weird. Searched for the error on StackOverflow and got some not very helpful threads. Let me know if it comes back, or if anyone else runs into it. I’m guessing it has to do with the syntax of the ‘clean’ function, but if it’s working for you now then it’s hard to test a fix…

Well, it always fails, actually… every time a file is updated. It can error multiple times as well, and I have to restart the server every time I change a file. It’s not a big deal, but obviously it would be slick not to have to close and restart the server for every change. Anyone else see this problem?

I’ve addressed the top-of-the-list documentation issue here: https://github.com/spark/docs/pull/169. Will someone approve it, please?

I want to propose a different attitude towards accepting documentation submissions. The gatekeepers who decide whether or not to accept a documentation submission should ask themselves this one simple question: “Is the documentation better or worse with this submission?” They should not be asking themselves “Is this the most ideal contribution there ever could be?”

Rather than proposing wording changes to the submitter as a condition of acceptance, or quibbling about the approach taken, or insisting that the submitter write a tutorial on the subject, just accept the proposed change if it constitutes a marginal improvement. Then, if the accepter can see an obvious improvement, FIX IT, even if it means making big changes to the submission; just do it, as long as you’re sure you’re being constructive! Especially if the gatekeeper can do it in less time than it would take to enter into a debate.

It’s just too dispiriting otherwise.

The gatekeepers are unlikely to be expert in every single aspect of the Spark, or of programming, or networking, or of style, or of what constitutes good English. They are unlikely even to have a consistent approach to the documentation among themselves. What they are there for, in my view, and I thought in the view of others, is to ensure there is no vandalism and that contributions are being made in good faith and constructively. They are NOT there to stamp their own peculiar style on every contribution.

I suggest that your post here be added to the docs so people have a better understanding of the Spark Core security implementation. Especially the part about how placing the access_token in a webpage on the internet is not secure.

Thanks,

Bobby

Added to the list above, thanks! :slight_smile: :spark:

Added to the list:

SPARK_WLAN_Loop()

#pragma SPARK_NO_PREPROCESSOR

Return codes

Many, perhaps most or all, of the functions described in the firmware docs have undocumented return codes. For example, most or all of the functions in the UDP library return codes which are undocumented. A specific example: I read in the forum that UDP.write() returns -1 or -2 on error, but this is undocumented. It would be nice to know what the return codes of all the functions are. Just knowing that they exist, and what signifies failure and what signifies success, would be very useful. The convention in C is that functions return -1 for failure, setting the global errno to identify the error, and 0 for success. Anyway, whatever the convention, where there are return codes they should please be documented.
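
To illustrate, here is a minimal sketch of what user code checking those codes might look like, assuming UDP.write() and UDP.endPacket() signal failure with a negative value (the -1/-2 values come from forum reports, not the docs, and the port and target address are arbitrary example values):

```cpp
// Minimal sketch: checking the (currently undocumented) UDP return codes.
// The meaning of the negative values is an assumption from forum reports.
UDP udp;

void setup() {
    Serial.begin(9600);
    udp.begin(8888);                                      // local port (example value)
}

void loop() {
    udp.beginPacket(IPAddress(192, 168, 1, 100), 8888);   // example target
    int written = udp.write((const uint8_t *)"hello", 5);
    int sent = udp.endPacket();

    // Until the codes are documented, treat any negative value as failure.
    if (written < 0 || sent < 0) {
        Serial.println("UDP send failed");
    }
    delay(5000);
}
```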

@akiesels reports numerous issues / inconsistencies / bugs with the WiFi library.

At https://community.spark.io/t/solved-after-spark-sleep-finished-udp-packets-cannot-be-sent/8234 @ECOFRI reports an issue which needs documenting. After Spark.sleep(), UDP.begin() must be called again. Very likely this is true for TCP as well. And must Spark.connect() be called to re-establish the Cloud connection, or does that come back too? All that is documented is that WiFi.start() need not be called again.

Additionally, it should be stated that for each UDP.begin() a corresponding UDP.stop() should be called, to avoid running out of sockets.
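
A minimal sketch of that pattern, assuming the forum report above is right that the socket must be re-opened after Spark.sleep() (the port, target address and sleep duration are arbitrary example values; whether Spark.connect() is also needed after waking is exactly the open question above):

```cpp
// Minimal sketch: re-open the UDP socket after Spark.sleep() and pair every
// begin() with a stop() so the Core does not run out of sockets.
UDP udp;

void sendReading() {
    udp.begin(8888);                                      // (re)open the socket
    udp.beginPacket(IPAddress(192, 168, 1, 100), 8888);   // example target
    udp.write((const uint8_t *)"ping", 4);
    udp.endPacket();
    udp.stop();                                           // release the socket again
}

void loop() {
    sendReading();
    Spark.sleep(10);   // after waking, the old socket reportedly cannot be reused,
                       // so sendReading() calls begin() afresh next time round
}
```
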
And the current docs state:


UDP

This class enables UDP messages to be sent and received.

The UDP protocol implementation has known issues that will require extra consideration when programming with it. Please refer to the Known Issues category of the Community for details. There are also numerous working examples and workarounds in the searchable Community topics.


Where is the “Known Issues” category of the community? I looked under “Categories” and couldn’t find it. I found “Issues” in the “Open Source” section, tho’. Or is it the “Troubleshooting” category?
A link to it would be helpful.

At this topic https://community.spark.io/t/spark-function-limits/952 several users note that they had missed the documented limitation of 4 registered Spark.function() functions.

I’m unsure if I am making a documentation request or an enhancement suggestion.

Does Spark.function() return anything? If so, this should be documented so that the return codes can be tested by user code. I would suggest that -1 might mean “maximum of 4 functions exceeded” and -2 might mean “maximum 12-character name limit exceeded”.

And I would request that the relevant examples in the docs be amended to use the return codes. (And, indeed, that all our examples always test the return code, wherever there is one to test, to encourage good coding practice.)
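
For illustration, here is a sketch of what such an amended example might look like if Spark.function() returned a status code. The int return type and the -1/-2 values are the proposal above, not the current documented behaviour, and doReset is a hypothetical user function:

```cpp
// Sketch of the *proposed* behaviour: Spark.function() returning a status code.
int doReset(String args) {           // hypothetical user function
    return 0;
}

void setup() {
    Serial.begin(9600);

    int result = Spark.function("reset", doReset);
    if (result == -1) {
        Serial.println("registration failed: more than 4 functions registered");
    } else if (result == -2) {
        Serial.println("registration failed: function name exceeds 12 characters");
    }
}
```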

Each Spark.function() returns its value through the API call; it is not read by the Core itself.

The user can use spark list to quickly determine the functions exposed by each Core and spot that the 5th and subsequent functions are missing!

I think you misunderstand me, @kennethlimcp. I would like the Spark.function() call itself to return a value indicating whether it was successful in registering the user-code function. This would allow your suggestion for the unfortunately impossible compiler enhancement to be given effect in a different way.

Yes, the docs say 4 max, and yes, you can use spark list as you say. But my suggestion that Spark.function() itself return a success/failure code allows @mdma’s 100-iteration loop (in his response to your bug/enhancement report) to know when to stop. And the resulting good practice, which we will encourage in the docs, of always testing the return code will give the two users who spent hours debugging their code a better chance of determining the problem without having to ask for help in the forum. That would be a good idea, yes?

Return to… where? The user code does not care about Spark.function() being called, except to execute the function tagged to the exposed function name and do whatever it is supposed to.

You are welcome to open a GitHub issue or even submit a Pull Request if you find it really beneficial to have a return code for Spark.function().

@kennethlimcp, I am sure my suggestion will have broad if not universal support. I have already posted it as a comment on your request for a compiler enhancement on GitHub.

At https://community.spark.io/t/ambiguous-documentation/8514 there is a report, seemingly confirmed, that the documentation about sleep() is ambiguous if not misleading.