Photon with TensorFlow, Google's Artificial Intelligence Software


Wondering if anyone is interested in getting TensorFlow working with data from the Photon. TensorFlow was released by Google near the start of November 2015. Here are a few links:

And an interesting review by other stakeholders:

Anyway, it has been about 20 years since I played with some simple neural networks. Would anyone else like to take a stab at connecting a Photon sensor output to TensorFlow input?

I think I can get TensorFlow working on Cloud9, but I would need a bit of help going through potential example programs to find something easy enough that I can understand it. Also, it is mainly in Python, which I never became expert at.

A starting point is the MNIST dataset of simple handwritten digits. A repo is at and the example (which does not seem to work well) is at

Draw a digit and see two different ways of analyzing the drawing (Simple and Convolutional).

Not really sure how we could use TensorFlow or any other AI machine with the Particle Photon, but I like the challenge of trying.

Teaching High School Robotics with the Spark Photon

Does anyone know how Python does the equivalent of a Node NPM package.json file? I know Python has its own setup files, but I am not sure that is what I want. I can set up TensorFlow on the Cloud9 site and the examples work, but getting others to install it is too difficult.

Here is an example package.json file which automates the setup of Node.js (seriously, using Cloud9 anyone can have a powerful Node server running in seconds from a GitHub site with a well-written package.json file).

The key ideas I am looking for in Python are “scripts” and “dependencies”. Any suggestions?
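For what it's worth, the closest stock Python analogue I know of is a setuptools setup.py: `install_requires` plays roughly the role of “dependencies”, and `entry_points`/`console_scripts` loosely plays the role of “scripts”. This is only a sketch, with made-up package and module names:

```python
# setup.py -- a rough Python analogue of package.json
# (the package name "tensorflow-photon-demo" and module "demo" are hypothetical)
from setuptools import setup

setup(
    name="tensorflow-photon-demo",      # like "name" in package.json
    version="0.1.0",                    # like "version"
    description="Example TensorFlow + Photon glue code",
    py_modules=["demo"],
    install_requires=[                  # like "dependencies"
        "tensorflow",
    ],
    entry_points={                      # roughly like "scripts"
        "console_scripts": [
            "demo-start=demo:main",     # "demo-start" on the command line runs demo.main()
        ],
    },
)
```

Running `pip install .` then pulls in the dependencies and creates the `demo-start` command. As far as I know there is no direct equivalent of npm's pre/post script hooks, though.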

The other option is to wait until someone has made a Node.js connection to TensorFlow. So far I have only found this, for which the examples did not work.

Here is what a node package.json file looks like:

{
  "name": "module-name",
  "version": "10.3.1",
  "description": "An example module to illustrate the usage of a package.json",
  "author": "Your Name",
  "contributors": [{
    "name": "Foo Bar",
    "email": ""
  }],
  "bin": {
    "module-name": "./bin/module-name"
  },
  "scripts": {
    "test": "vows --spec --isolate",
    "start": "node index.js",
    "predeploy": "echo im about to deploy",
    "postdeploy": "echo ive deployed",
    "prepublish": "coffee --bare --compile --output lib/foo src/foo/.coffee"
  },
  "main": "lib/foo.js",
  "repository": {
    "type": "git",
    "url": ""
  },
  "bugs": {
    "url": ""
  },
  "keywords": [],
  "dependencies": {
    "primus": "",
    "async": "~0.8.0",
    "express": "4.2.x",
    "winston": "git://",
    "bigpipe": "bigpipe/pagelet",
    "plates": ""
  },
  "devDependencies": {
    "vows": "^0.7.0",
    "assume": "<1.0.0 || >=2.3.1 <2.4.5 || >=2.5.2 <3.0.0",
    "pre-commit": "*"
  },
  "preferGlobal": true,
  "private": true,
  "publishConfig": {
    "registry": ""
  },
  "subdomain": "foobar",
  "analyze": true,
  "license": "MIT"
}


Hey Zack (@zach, @zachary),

You already have the “Core”, “Photon” and “Electron”, so how about a “Neuron”? A stripped-down Photon that only does what is needed for whichever way neural networks are going. (My vote: one analog read pin and as many digital read/write pins as you can fit on a board.)

With Facebook, IBM and Google making their A.I. software open source, it is only a matter of time until people want to get their feet wet with the hardware side of machine learning (like what happened 20 years ago). Presently the $5 Raspberry Pi or $99 Parallella (16-core) are probably the best choices; however, your online IDE with IFTTT debug capabilities beats them hands down.

On a regular day I only really need 1 or 2 Photons to test my ideas. Wouldn’t it be great to have a product that people can’t get enough of? Say a 5-“Neuron” neural network actually works; people would want more “Neurons” to make it more powerful. Then you start adding layers, and all of a sudden everyone wants 20 to 100 “Neurons”.

Just a suggestion, have a great day!


Hey @rocksetta - I think there are some pretty spectacular ways that the Internet of Things and machine learning relate to one another, but I’m not sure that running machine learning on the device side is the way that I would go.

Machine learning becomes interesting when you have lots of data, and it also requires some computational power. Rather than attempting to implement a machine learning system on a specific device, I’d encourage you to think about implementing a machine learning algorithm in the cloud and then using Photons/Electrons/etc. to send sensor data to that platform using our APIs and webhooks.

Imagine, for instance, you’re working on an algorithm for a “learning thermostat” (a la Nest). Each thermostat could be very bare bones; just some temperature and presence sensors and an HVAC controller hooked up to a Photon/Electron. You then pipe all of the data to a central server that uses neural networks to determine effective algorithms for controlling the HVAC system so that the user is comfortable (when they’re home, it’s a comfortable temperature) while saving energy costs (when they’re not home, the HVAC system is off as much as possible). Then the thermostat itself would be a “neuron” as you’ve described it.
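To make the cloud-side half of that concrete, here is a toy stand-in for the "learning" server (everything here is a made-up sketch, not Particle's actual API): each Photon publishes a reading, a webhook forwards it to your server as JSON, and the server learns a suggested setpoint per (hour of day, presence) bucket by simple averaging. The JSON field names are hypothetical.

```python
import json
from collections import defaultdict

# Running average of user-chosen setpoints per (hour, presence) bucket --
# a minimal stand-in for the neural network that would live on the server.
sums = defaultdict(float)
counts = defaultdict(int)

def observe(payload: str) -> None:
    """Ingest one JSON reading forwarded by a webhook (field names are assumed)."""
    r = json.loads(payload)
    key = (r["hour"], r["present"])
    sums[key] += r["setpoint"]
    counts[key] += 1

def predict(hour: int, present: bool, default: float = 18.0) -> float:
    """Suggested setpoint for this bucket; unseen buckets fall back to an
    energy-saving default."""
    key = (hour, present)
    return sums[key] / counts[key] if counts[key] else default

# Simulated stream of readings from several thermostats:
for payload in (
    '{"hour": 7, "present": true, "setpoint": 21.0}',
    '{"hour": 7, "present": true, "setpoint": 22.0}',
    '{"hour": 7, "present": false, "setpoint": 16.0}',
):
    observe(payload)

print(predict(7, True))    # average of the "home at 7am" readings -> 21.5
print(predict(23, False))  # unseen bucket -> default 18.0
```

A real version would replace the averaging with a trained model, but the data flow (device publishes, webhook forwards, server learns, device polls for the result) stays the same.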

Make sense? Let me know if I’m off base here in terms of what you’re hoping to experiment with.



That makes sense, @zach. I was wondering how the Photon could be set up to work with Google’s TensorFlow neural network software. A single thermostat is not hard to program, but data from a group of thermostats, humidity sensors, light sensors, time-of-day measurements and motion sensors could be continuously sent to TensorFlow for the software to learn from the user and predict cost-effective comfort settings. That would be a good goal for the original point of this thread.

What I was suggesting is that, historically, hardware improvements lead to software improvements and vice versa. Neural networks went out of fashion as being too hard to generalize. TensorFlow has proven that they can be generically useful, suggesting that there may soon be a resurgence of interest in neural network hardware. Presently the Photon is a good platform to experiment with neural networks, but to be really useful it would have to be tweaked a bit.

I think I will do a bit of messing around with a neural network design on the Photon and get back to you if I find anything interesting. Thanks for the reply.

P.S. The Photon is an awesome product for the classroom. Thank you very much.


@zach I was completely wrong; the Photon is great for making a neural network. The only limitation I have found is that each node (Photon) can only communicate with 8 other “nodes”; however, the total number of “nodes” in my system presently has no limit. If anyone is interested, they can follow along at

Don’t ask me to explain it, as I am sure it will change dramatically as things progress. I still have to work back-propagation (learning) into it, and I have only done programming like that with a software-based neural network, not a hardware-based one.
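For anyone following along, the software version of that learning step is small. A single neuron with a sigmoid activation, trained by gradient descent on a toy target, looks something like this (plain Python sketch, no Photon hardware involved; the learning rate and toy data are arbitrary):

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# One neuron: two inputs, two weights, one bias.
random.seed(0)
w = [random.uniform(-1, 1), random.uniform(-1, 1)]
b = 0.0
lr = 0.5  # learning rate

# Toy training pairs: (inputs, target output)
data = [([0.0, 0.0], 0.0), ([1.0, 1.0], 1.0)]

def loss():
    return sum((sigmoid(w[0]*x[0] + w[1]*x[1] + b) - y) ** 2 for x, y in data)

before = loss()
for _ in range(1000):
    for x, y in data:
        out = sigmoid(w[0]*x[0] + w[1]*x[1] + b)
        # Back-propagation for one neuron: chain rule through the squared
        # error and the sigmoid derivative out * (1 - out).
        delta = 2 * (out - y) * out * (1 - out)
        w[0] -= lr * delta * x[0]
        w[1] -= lr * delta * x[1]
        b -= lr * delta
after = loss()
print(before, "->", after)  # the error shrinks as the weights are trained
```

Doing the same thing across several physical Photons means each “node” would need to receive an error signal from the nodes downstream of it, which is the hard part of a hardware version.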

Hoping firmware version 0.4.8 comes out soon, as I would like to be able to use the DAC1 pin as well as the Timer.changePeriod() function.

Reminder: My main Teacher Info is at


I have made a YouTube video about using TensorFlow for beginners on Cloud9. Instead of using confusing command-line statements, I use bash commands that can be right-clicked and then run. See the video at

I also have a webpage dedicated to Google’s new Artificial Intelligence Python Library at


Can someone give me some feedback about flattening the Photon’s PWM voltage spikes? I need to make as many DAC channels as I can (preferably 8). Not sure if the latest firmware version allows the 2 DAC pins, A3 and A6, to work, but I need more DACs to make a traditional neural network…

@Moors7 @peekay123 @kennethlimcp

Can I mimic a DAC pin using a PWM pin by connecting a capacitor (470 pF?) to flatten the voltage spikes? (Neural networks constantly monitor combined voltages, so a fluctuating PWM signal would cause havoc in the circuit.)


A simple RC filter is usually a first choice, but to reduce the output ripple voltage enough you end up also reducing the output voltage unnecessarily and creating a very laggy system. A good solution is a 2nd order RC filter. For 500Hz PWM, two cascaded 2k ohm & 10uF RC filters would work nicely (2.2k is not going to hurt if that’s what you have). You can optionally run this through a high impedance unity gain op-amp to boost the drive current, or lower the impedance for A/D measurement (although this one is probably low enough already, should be fine for the Photon’s ADC).
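To put rough numbers on that suggestion (a back-of-envelope calculation assuming ideal stages with no loading between them, which a unity-gain buffer between stages would make true):

```python
import math

R, C = 2000.0, 10e-6   # 2k ohm, 10 uF per stage, as suggested above
f_pwm = 500.0          # PWM frequency in Hz

# Cutoff frequency of one RC stage: fc = 1 / (2 * pi * R * C)
fc = 1 / (2 * math.pi * R * C)

def gain(f):
    """Magnitude response of a single ideal first-order RC low-pass stage."""
    return 1 / math.sqrt(1 + (f / fc) ** 2)

one_stage_db = 20 * math.log10(gain(f_pwm))       # one stage at 500 Hz
two_stage_db = 20 * math.log10(gain(f_pwm) ** 2)  # two ideal cascaded stages

print(f"cutoff ~{fc:.1f} Hz, one stage {one_stage_db:.0f} dB, "
      f"two stages {two_stage_db:.0f} dB at {f_pwm:.0f} Hz")
```

So each stage cuts the 500 Hz ripple by roughly 36 dB, and two cascaded stages by roughly 72 dB, while passing the slowly varying DC level. Real passive stages load each other somewhat, so the measured ripple will be a bit worse than the ideal figure.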

Here’s a nice tutorial I found on the topic that helps me keep this post short :wink:


@rocksetta, you could also consider a multi-channel DAC chip like the TI DAC088S085, which is an 8-bit, 8-channel, SPI, single-supply DAC. :wink:


Thanks, that is very useful.

I built the first-order filter (with a different capacitor and resistors that I had lying around, plus a coil just for fun) and it worked fine: when sensing through pin A1 I got OK voltage readings, compared with random high/low readings without the capacitor. The low end was not so great, but OK.

Any word on whether the issues with DAC pins A3 and A6 have been worked out in firmware version 0.4.7?


Got a new video on how to use TensorFlow with the Udacity Deep Learning course. Still a big jump to using artificial intelligence with the Photon, but a small step in the right direction.


Looks like IBM has beaten me to it with a new chip that simulates a neural network, called IBM TrueNorth.


So Deep Learning, Artificial Intelligence (AI), Machine Learning (ML), Neural Networks (NN) and Neural Processing Unit (NPU) are some of the keywords, along with software such as TensorFlow, Theano, skflow/scikit-learn, Caffe, Torch, OpenAI and many others.

At some point someone will make (or has already made) an inexpensive neurally designed chip. Is anyone interested in looking into connecting such a chip to the Photon?

Just reply to or like this note if you're interested. I thought I found a neural board with a camera at for about $100, but have not been able to re-find it. Please reply if you find something. This is interesting, but finding suppliers seems confusing.

Possibly we might be able to connect something like the Pixy camera to the neural board and then from the neural board to the Photon.

What you would be looking at is a Photon that could sense, learn and interact with its environment.

Once again, if you are interested then like this message so I can reply to you if anything develops.


@rocksetta on twitter as well.


I have not been working with the Photon much lately, as I have been working on the TensorFlow Magenta music-generation program. The discussion is in the magenta-discuss Google Group.


So I have taken a few months out from working with my Photon to try to understand TensorFlow deep learning. I have managed to use a neural network to generate some music and also to generate some 3D-printed objects,

but I really want to get a neural network connected to the photon.

Note: training a neural network takes a ton of computing power, but running a trained network (inference) does not.

I think an array of sensors connected to a Photon could be run through a web-based NN, and some action could be taken based on the NN’s training.
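To illustrate why the inference half is cheap: once training (done offline or in the cloud) has produced the weights, running the network is just a handful of multiply-adds, which even a small microcontroller can manage. A sketch of a tiny 3-input, 2-hidden-neuron, 1-output forward pass, with entirely made-up weights:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Hypothetical weights, as if produced by offline training:
# 3 sensor inputs -> 2 hidden neurons -> 1 output.
W_hidden = [[0.4, -0.6, 0.2],
            [-0.3, 0.8, 0.1]]
b_hidden = [0.1, -0.2]
W_out = [1.2, -0.7]
b_out = 0.05

def forward(sensors):
    """One inference pass: a few multiply-adds and two sigmoids per layer."""
    hidden = [sigmoid(sum(w * s for w, s in zip(row, sensors)) + b)
              for row, b in zip(W_hidden, b_hidden)]
    return sigmoid(sum(w * h for w, h in zip(W_out, hidden)) + b_out)

# e.g. three scaled analog reads from the Photon:
action = forward([0.9, 0.1, 0.5])
print(action)  # a value in (0, 1) the Photon could act on
```

The expensive part, finding good values for those weights, would stay on the server; the Photon (or any small device) only needs to store them and run the forward pass.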

Even better would be to use continuous learning, or to have the Photon able to respond without the web connection. (This may take an NN chip like the TrueNorth.)

Anyone interested? Especially anyone in data science or really good with the Photon. I tend to be more on the software side of things.


I took about six months working with TensorFlow and managed to test some music-generating machine learning (I got it working from a web browser, using a dataset generated on an Ubuntu machine). Here is the result:

It generates musical notes in a numerical format and then lets you play the melody in your browser.

I think Photon data could be uploaded to a neural network, and then instructions could be downloaded back to the Photon.


Hi, I am following this thread as I am interested in finding a good and easy way to control Particles in my new home using “natural language”. If Particles could be made HomeKit-compatible, Siri would be a perfect solution for me. Unfortunately, they aren’t…
(On my to-do list: I must make some time to give “Homebridge” a try…)

Today I read the article below about the Raspberry Pi and Google AI. I thought it was appropriate to post it here.
It may interest some members looking for more powerful and flexible solutions, and it opens totally new perspectives:

Unfortunately, it is (probably) too complex a step for me…
I am interested in any reactions and possibly examples of what is possible!



Hi @FiDel, have you considered Amazon Echo which is also available for the RPi?
There are some Alexa threads around.

Alexa now supports colored lights - here's my implementation with FastLED


‘Alexa’ is available for the Pi; the Echo is the speaker :wink: In that vein, you can get an Amazon Dot for $50 (getting cheaper with more units), which has the microphones embedded in a wife-friendly enclosure and works out of the box.
Might be worth looking into if you don’t want the hassle of hardware in multiple rooms.