Docker Build Image

Hey all,

I threw together a docker image for building spark firmware. It includes arm-gcc and a snapshot of the git repos. It might even work via boot2docker on OSX!

Usage goes something like this:

  1. docker pull jjhuff/spark_build
  2. cd <your source dir>
  3. docker run -v $(pwd):/app -u $(id -u) jjhuff/spark_build
  4. Your output images should now be under build/

From here, you should be able to flash via spark-cli or dfu-util
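Concretely, flashing usually looks something like this. The USB ID `1d50:607f` and the `0x08005000` user-firmware offset are the standard Spark Core DFU values, and `my_core` is a placeholder device name; adjust the binary path to wherever your build lands:

```shell
# over USB: put the Core into DFU mode (flashing yellow), then
dfu-util -d 1d50:607f -a 0 -s 0x08005000:leave -D build/core-firmware.bin

# or over the air with the Spark CLI
spark flash my_core build/core-firmware.bin
```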

Check it out and let me know what you think!


This isn’t a very helpful reply but I just wanted to say that I think this is pretty cool. I don’t know when I would personally use it but I hope someone uses it and gives you some feedback.

Nice, thanks for sharing! :slight_smile: I think someone was talking about a docker image of the local server as well...

This reminds me of the virtual machine @Raldus made a while back here:

Thanks!
David

Yeah, sometimes it’s nice to be able to do everything locally :) The cloud compile stuff is pretty nifty though!

Does the local server support the compile API now? It’d probably be fairly easy to hook that up with the docker container with the application source living on a tmpfs volume.

That’s a really excellent idea. I was trying to use CentOS for dfu-util and the Spark command-line tools a couple of days ago, but even CentOS 7 didn’t have the necessary prerequisites. I wasn’t keen on downloading source code and recompiling it on a stable CentOS 7 build, possibly upsetting what’s there already, so I used my Raspberry Pi instead, which took five hours to compile Node.js! Using Docker might be the better way to do this; I’ll have to give it a try. Did you build using a Dockerfile, or manually? Does the image run a shell?

I tried using a VM (under VMware) for Arduino development but it was a bit of a fiddle because every time I plugged in the usb to the Arduino it connected to the host device, so I then had to map it to the VM. When I downloaded code it reset the connection and I had to do it again. Eventually I used an old laptop and it was much better.

Tim.

If you are already using dfu-util, you can just download the ARM gcc toolchain and compile locally and then use dfu-util to flash the code to your core. If you can type “make all” in a shell, you can do it and here’s how:

Note that the library format with three repos will be changing shortly but I believe this is still the current way.
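For anyone who hasn’t done the three-repo build before, it can be sketched roughly like this (assuming the ARM toolchain’s `bin` directory is already on your `PATH`; the sibling-directory layout is what the core-firmware makefile expects):

```shell
# clone the three repos side by side; core-firmware's makefile looks for
# its siblings at ../core-common-lib and ../core-communication-lib
git clone https://github.com/spark/core-firmware.git
git clone https://github.com/spark/core-common-lib.git
git clone https://github.com/spark/core-communication-lib.git

cd core-firmware/build
make all    # binaries end up in this build/ directory
```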

Simply grab the feature/hal branch in the firmware repo and that’s an all in one version. :slight_smile:
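For the all-in-one version, that would be something like the following (the consolidated repo is assumed to live at github.com/spark/firmware):

```shell
# clone the repo directly on the feature/hal branch and build
git clone -b feature/hal https://github.com/spark/firmware.git
cd firmware
make
```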

I tried jjhuff’s Docker image, which worked OK for compiling, but I decided to build my own image because I wanted dfu-util and the Spark CLI. I’ve built it for CentOS 7, because that’s what I use.

I’ve posted my Dockerfile in case anyone is interested in building their own image rather than using someone else’s, and because it’s easier to rebuild on demand. The image is about 1.5GB and takes 20-25 minutes to build on my Docker laptop, which you only do rarely. There is one manual setting you need to make within the Dockerfile before you build.

The ARM compiler is downloaded beforehand but the rest is downloaded upon build. It’s arguable which method is better.

Hope you can make sense of the file below; it doesn’t display perfectly in my preview window, but if you’re familiar with Docker it should be easy enough to reformat.

Tim.

#
# Basic Core compiler environment
#
# On host, create new directory, eg corecompiler. Then cd in to it. Put this Dockerfile in it.
# Build using: sudo docker build --tag="core002" ./     # note the trailing './'
#
# nohup sudo docker build --tag="core002" ./ > log 2>&1 &
#
# **** Manual Setting Required - note the pathname for the ARM compiler and adjust the
#      PATH environment variable towards the end of the Dockerfile where the firmware is
#      installed and built
#

# Base the build on Centos 7 because Centos 6 is missing too many packages
FROM centos:centos7

MAINTAINER Tim v0.1 January 2015

# Note the time in the build log
RUN date

# Include this build file in to the filesystem to show how it was built
COPY Dockerfile /var/log/

# Several packages that are required.
RUN yum -y install gcc git libtool autoconf autogen intltool python guile perl-Package-Constants libusb make gcc-c++ glibc.i686

# ARM Compiler:
# Add ARM gcc to the image - this command automatically unzips it and copies it from host dir to /usr/local
# Obtained from https://launchpad.net/gcc-arm-embedded
# Eg: wget https://launchpad.net/gcc-arm-embedded/4.9/4.9-2014-q4-major/+download/gcc-arm-none-eabi-4_9-2014q4-20141203-linux.tar.bz2
# and put in to host directory alongside Dockerfile
ADD gcc-arm-none-eabi-4_9-2014q4-20141203-linux.tar.bz2 /usr/local/

# dfu-util has a dependency on at least version 1.0 of libusb
# Remove current libusb, which is too old (Centos7 has v0.1.4)
RUN yum -y remove libusb

# Download source for libusb
RUN git clone git://git.libusb.org/libusb.git /usr/local/libusb/
WORKDIR /usr/local/libusb
RUN ./autogen.sh && ./configure && make && make install

# Add shared libraries - might not be required - commented out
#RUN echo /usr/local/lib > /etc/ld.so.conf.d/usrlocallib.conf

# Add /usr/local/lib/pkgconfig to the package path otherwise dfu-util thinks that libusb is missing
# or is too low a version (because we've installed it manually rather than using yum).
ENV PKG_CONFIG_PATH $PKG_CONFIG_PATH:/usr/local/lib/pkgconfig

# debug if required
#RUN echo $PKG_CONFIG_PATH

# Obtain source for dfu-util. Spark needs a version later than about 0.7 or 0.8, otherwise
# when it runs it doesn't work with the Core correctly.
RUN git clone git://gitorious.org/dfu-util/dfu-util.git /usr/local/dfu-util/
WORKDIR /usr/local/dfu-util
RUN ./autogen.sh && ./configure && make && make install

# Test that dfu-util works and displays the version number
RUN dfu-util -V

# Install node (approx 135MB)
# The make command could take at least five minutes to run or a few hours
# if running on a slow device such as a Raspberry Pi.
# Version v0.10.26-release worked - but I guess a later version would also be ok
RUN git clone https://github.com/joyent/node.git /usr/local/node
WORKDIR /usr/local/node
RUN git checkout v0.10.26-release
RUN ./configure && make && make install

# spark cli
RUN npm install -g spark-cli

# Basic test of Spark cli
RUN spark help

#
# Build the Spark specific code under /spark
#

# Install and make the firmware, which includes the default 'tinker' application
RUN mkdir /spark && mkdir /spark/firmware
RUN git clone https://github.com/spark/core-firmware.git          /spark/firmware/core-firmware/
RUN git clone https://github.com/spark/core-common-lib.git        /spark/firmware/core-common-lib/
RUN git clone https://github.com/spark/core-communication-lib.git /spark/firmware/core-communication-lib/
# Important - the PATH must be the same as the ARM compiler installed at start of this Dockerfile
ENV PATH $PATH:/usr/local/gcc-arm-none-eabi-4_9-2014q4/bin
# RUN echo "New path is:" && echo $PATH
WORKDIR /spark/firmware/core-firmware/build
# Binaries will be created in /spark/firmware/core-firmware/build
RUN make

# Create a mountpoint for host directory shared using -v option in the 'docker run' command.
VOLUME ["/app"]

# Note the time in the build log
RUN date

CMD ["/bin/bash"]
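Putting the comments at the top of the Dockerfile together, a typical build-and-run session on the host would look like this (the `-it` flags are my addition so that the `CMD ["/bin/bash"]` drops you into an interactive shell):

```shell
# in a fresh directory containing the Dockerfile, fetch the ARM toolchain first
wget https://launchpad.net/gcc-arm-embedded/4.9/4.9-2014-q4-major/+download/gcc-arm-none-eabi-4_9-2014q4-20141203-linux.tar.bz2

# build the image (takes 20-25 minutes the first time)
sudo docker build --tag="core002" ./    # note the trailing './'

# start a shell in the container, sharing the current directory at /app
sudo docker run -it -v $(pwd):/app core002
```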

Kudos for this - a great step in the right direction! I’d really love to see a common Docker image with all the tools used for local development:

  • ARM gcc toolchain
  • git / git repo
  • dfu-util
  • various C++ IDEs preconfigured with firmware projects for compiling and debugging (Eclipse, NetBeans, etc…)
  • Particle Dev
  • Particle Server (local cloud)
  • Particle CLI
  • st-link (for jtag debugging)
  • WICED SDK (now that we can distribute it!)
      • this includes OpenOCD
  • …etc…

It takes a number of hours to setup a local environment, even with good instructions. Having a docker image would mean everyone gets a springboard into local firmware development!

I’d be very happy to step up and maintain this for the community if someone could get the ball rolling with a clean image to start from! (Disclaimer: I’ve not used Docker, but I’m eager to learn.)

I know @kennethlimcp has also done work here, I hope we can pool our efforts!
