@Coffee great question, also great point about keeping things stupid simple.
One of the trickiest parts of building a Spark library system is making it easy to use while still supporting complex features like recursive dependency resolution. In the short term it won't support recursive dependency resolution, so we can ship something early; longer term, we'd like to support it. It'll work similarly to PHP Composer, RubyGems, or NPM — that is, dependencies can be declared in a spark.json manifest file like this:
"json-parser": "~> 0.2"
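
For context, a full spark.json might look something like the sketch below. Only the dependencies syntax above is confirmed; the `name` and `version` fields (and the exact key names) are assumptions for illustration, borrowed from how Composer and NPM manifests are typically laid out:

```json
{
  "name": "my-library",
  "version": "0.1.0",
  "dependencies": {
    "json-parser": "~> 0.2"
  }
}
```

The `~>` constraint follows the "pessimistic" convention from RubyGems: `~> 0.2` would match 0.2.x releases but not 0.3.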
Like @bko and @wgbartley mentioned, this feature is currently in active development. As we build up the functionality to ingest libraries into the Web IDE (and later use them locally via a CLI), we'll be using this open source "Uber Library Example" to illustrate how to organize a Spark Library. If you're looking for ideas on how to structure your code, you can follow the patterns illustrated there. Comments, questions, and suggestions are welcome!