I’m looking to use the spark.io module to stream video from a camera module to a server or the web for viewing. I’m hoping someone has already done the hard work here; if so, I’d certainly like to see which camera module you used, and potentially which library as well.
Not really possible, unfortunately. The fastest method would be to find a camera module that supports 8-bit RGB output and pipe it through a shift register to the Core. You’d need to add some SPI-based SRAM as well to buffer the image, which would halve your read speed unless you hooked the camera straight to 8 I/O pins instead of using the shift register. However, you’d still need 4 pins for the SPI SRAM and 2 pins for the I2C connection to the camera. That would leave you just two pins free on the Core!
After all that you’re still not going to get more than 1 FPS of streaming, due to a number of factors, including the fact that the CC3000 just isn’t designed to stream video.
If you must hook a camera to the Core, a low-resolution UART camera module is your best choice. If that doesn’t work, you can always buy a WiFi webcam and program the Core to talk to it through some web API, or just buy a Raspberry Pi with the Camera Module. (Best choice, IMHO.)
By modern video standards, this is not much to look at, but wow, fitting all that into a 1.5K Arduino frame buffer is impressive. The code uses very low-level timing to get the video into the Arduino, so it would be very hard to port to the Core.
If your goal is to have your robot follow a line with a video camera in a well-lit room with a high-contrast image, then this looks like it could work, but you will really have to be up for a challenge. On the other hand, if your goal is some kind of security camera producing human-viewable pictures, then @timb is right and you should move along to other solutions.
While it only supports one camera, you might want to look into a Raspberry Pi. It’s a bit more expensive than a Spark and draws something like 2.5 W compared to the Spark’s ~1.1 W, but the Pi is vastly more powerful and has much more mature software. The biggest downside of the Pi is that it has no built-in analog I/O, but you probably don’t care about that.
@soulslicer If you go with a UART-based camera, you can really only use one. Again, UART tops out around 11.5 KB/s (115200 baud with 8N1 framing), not counting errors. With a 640x480 UART camera, a single uncompressed 24-bit frame is roughly 0.9 MB! And keep in mind you’d still have to buffer that in some external SRAM. You’re talking well over a minute per frame!
Basically, you need an MCU with a MIPI Camera Interface to work with a camera in any reliable fashion.
You could use the Broadcom chip that powers the Raspberry Pi with an embedded WiFi module. If you want to stream multiple cameras this is really your only option.
@soulslicer I hate to be a pessimist, because I like to think that the Core can be used for almost anything, but streaming video would definitely be pushing its limits. That’s not to say it’s impossible; with a lot of tuning, the CC3000 might be able to handle video, and the STM32 is definitely far more powerful than an Arduino, so if an Arduino can do basic video, as in @bko’s post, then an STM32 could certainly do better.
However, given the pretty intense development time necessary, I would think a Linux-based system would be better. Not necessarily the Raspberry Pi if you’re designing something to go into production, because the Raspberry Pi is difficult to reproduce on your own, but there are other Linux and Android systems that might work.
Can you tell us anything about the product you’re developing? The more detail you can provide, the better we can advise on architecture.
@zach What @bko posted is just a basic NTSC frame buffer, so it’s not storing the data long term or compressing it for re-transmission. If you just wanted to monitor an NTSC security camera for movement, I could easily make that happen on the Core. However, if you wanted to read, compress and upload 640x480 or greater still images it’s going to be a challenge. Streaming video at anything over 1FPS is going to be next to impossible without a camera that does internal H.264 compression and buffers internally or something.
I’ve run the math from pretty much every angle and without external hardware it’s just not possible for the Core by itself to do it.
You’re most likely right about reproducing the Pi; the Broadcom chip used on it is getting long in the tooth, and it’s hard to get unless you’re ordering in large quantities. Perhaps look into the chip powering the BeagleBone Black?
@timb is of course correct–the Arduino board is (1) very limited in the number of pixels, (2) only does one bit per pixel with an analog threshold that the user must set, and (3) only manages a few frames per second. Holding the actual frame buffer in the Arduino is amazing, but limiting. This would be good for a line-following robot under controlled lighting conditions, but not for things people will want to look at.
The Pixy is the CMUcam5. The CMUcam4 was out for a time (SparkFun used to sell it, for instance), but I think it was hard to use if you were not one of the creators. Those are some smart guys behind Pixy, and they have promised Viola-Jones face detection on the Pixy, but a Raspberry Pi, for instance, can already do V-J face detection and recognition using the OpenCV library at a few frames per second of throughput.
I think the Core could do something with video at the level of, say, 160x120 4-bits-per-pixel greyscale, but I agree that H.264 would be next to impossible, and larger raster sizes would become very challenging, very quickly.
I agree with @zach: whether or not low-resolution, low-speed video is useful is just a problem for your imagination! You have to find the right application!
I guess I could discuss alternative solutions here, but that would defeat the purpose, as this is strictly a spark.IO forum. The best option for me would be an open-source, preferably ARM-based, programmable board that interfaces with a camera module. So far I’ve googled but haven’t found any solutions.
Check out the BeagleBone Black or pcDuino. You want something with CSI (Camera Serial Interface) support: dedicated MIPI (differential high-speed serial interface) and I2C ports designed to interface with camera modules. This is what’s used in most cell phones, the Raspberry Pi, the BeagleBone, etc.
Like I said in the other thread, you can use a UART camera to snap a single picture and email it, but it will take 15 to 30 seconds for it all to happen.
To clarify: Snapping a single picture and uploading it = completely doable; streaming realtime video = not going to happen.
I have ordered the Grove Serial Camera Kit. For realtime purposes 160x120 might be okay (e.g. pushing the buffer from UART out to an OLED display over SPI). Other resolutions would require some buffer (the CC3000 or an external SD card?). I just bought one Spark Core and am pushing it to its limits. The onboard memory is very limiting; we can’t really use the ARM Cortex-M3 at full speed for those use cases. I’m thinking about a Cubieboard or Cubietruck for those projects. Now I’m wondering what the Core is good for besides playing with RGB LEDs on a 24/7 cloud connection.