I'd like to suggest a .ignore file for the cloud-compile feature of the Spark-CLI.
I downloaded a sample library and there are several folders inside, such as examples, inc and src.
I want to keep everything in place but still send the project for cloud compilation; what I'm doing now is moving the files to be compiled to another directory before calling the command.
It would be nice to have a .gitignore-like feature.
I too like this idea. Better still would be the inverse, a .sparkfiles file. The files which need to be passed to the Spark Cloud for compilation stay reasonably constant over time, but other work files, especially during development, do not. I tend to have files such as foo.cpp.good, foo.cpp.1 and foo.cpp.2, as well as other dross such as x and y.

But I am not against the .sparkignore file; we could have both, each optional, with logic similar to the processing of the /etc/hosts.deny and /etc/hosts.allow files. If both .sparkignore and .sparkfiles exist and a file is mentioned in .sparkignore, ignore it UNLESS it is also mentioned in .sparkfiles. If .sparkfiles does not exist, then all files except those mentioned in .sparkignore are included. If .sparkignore does not exist, and so on; someone smart will work out the rules.

I would just prefer that creating some ad hoc work file did not require me to edit .sparkignore, which is why I would rather have a .sparkfiles file. Others work differently from me, though, and I think if one of the two is being implemented, the other is almost trivially simple to do at the same time.
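The hosts.deny/hosts.allow-style rules above could be sketched roughly as follows. This is purely an illustration of the proposed logic, not any actual CLI behavior; the function name and argument conventions are assumptions.

```python
# Hypothetical sketch of the proposed .sparkignore / .sparkfiles
# resolution rules -- not part of any released CLI.


def should_compile(filename, ignore_list, include_list):
    """Decide whether a file is sent for cloud compilation.

    ignore_list  -- entries from .sparkignore, or None if the file is absent
    include_list -- entries from .sparkfiles, or None if the file is absent
    """
    if ignore_list is None and include_list is None:
        return True                      # no rules files: send everything
    if ignore_list is None:
        return filename in include_list  # only .sparkfiles: pure whitelist
    if filename in ignore_list:
        # ignored UNLESS explicitly mentioned in .sparkfiles
        return include_list is not None and filename in include_list
    return True                          # not ignored: include it
```

With both files present, an ad hoc work file like foo.cpp.1 listed in .sparkignore would be skipped unless it also appears in .sparkfiles.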
Also see the thread “Compiling folder using spark-cli cloud compile”, where @Dave is already considering suggestions such as .sparkignore and .sparkfiles, along with my suggestion that the files to be sent for cloud compilation be listed on the spark-cli command line. I would like to use that feature to write dependency rules in Makefiles. https://community.spark.io/t/compiling-folder-using-spark-cli-cloud-compile/3465/17
Cool idea! Some kind of ignore file for directories/projects, plus multiple-filename support on the command line. I should pull in @jgoggins in case he wants to weigh in on naming conventions for the ignore filename.
I think there has been some talk about a Spark ‘project definition’ file of some kind, especially when considering pre-packaged libraries, so an include file might take that form.
Okay, this feature is built and hiding in the git repo for the CLI. I will keep testing it, but it should be included in the next release. The filenames are spark.include and spark.ignore; relative filenames only, and no wildcards, please. You can include files in subdirectories using the include file, but the CLI will only descend into the first directory level.