I know that the Spark team has been designing new features for adding libraries to the web IDE, but I don’t think they have the final details nailed down yet. They are big GitHub users, and I am sure the final feature will allow GitHub to be the library repository, so you are on safe ground there.
Let me turn the question around for a minute:
How do you think they should capture all these dependencies?
What would be a good way to say “this library depends on (or was tested with) this version of this other library” ?
You asked a good question that lines up well with what I think the Spark team is thinking about!
The good news is that the Spark Team is currently working on adding library support to the Sparkulator. The bad news is it’s not here yet. I don’t know the exact timeline, but I think it’s at least a couple of weeks out (but still in active discussion!).
Sidenote: At first I thought about merging all the .h and .cpp files into one file and letting people use that. But that’s not very elegant, is it?
I’ve been waiting for this question. I like how Composer solved it: it’s a dependency management system for PHP with simple JSON-structured configuration files. These are basically links to a version of a Git repository, which in turn contains another JSON config file.
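For reference, this is roughly what Composer’s configuration looks like: a composer.json file with a "require" section mapping package names to version constraints (the package name below is just an example):

```json
{
    "require": {
        "monolog/monolog": "~1.0"
    }
}
```

Each required package publishes its own composer.json, which is how the dependency tree gets resolved recursively.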
On the other hand, one will usually not use a big tree of dependencies anyway, simply because memory is so limited.
PS: First of all, we have to keep it stupid simple. Arduino and Spark are designed for “normal” people like my 10-year-old brother or my 67-year-old grandpa. They should not have to worry about (or even understand) what .h and .cpp files are.
@Coffee great question, also great point about keeping things stupid simple.
One of the trickiest parts of building a Spark library system is making it easy to use yet capable of supporting complex things like recursive dependency resolution. In the short term it won’t support recursive dependency resolution, so we can ship something early; longer term, we’d like to support it. It’ll work similarly to PHP Composer, RubyGems, or npm, i.e. dependencies can be specified like this in a spark.json manifest file:
"json-parser": "~> 0.2"
Like @bko and @wgbartley mentioned, this feature is currently in active development. As we build up the functionality to ingest libraries easily into the Web IDE (and later use them locally via a CLI), we’ll be using this open source “Uber Library Example” to illustrate how to organize a Spark library. If you’re looking for ideas for how to structure your code, you could follow the patterns illustrated there. Comments, questions, and suggestions are welcome!