Wondering if anyone is interested in getting TensorFlow working with data from the Particle.io Photon. TensorFlow was released by Google near the start of November 2015. Here are a few links:
And an interesting review by other stakeholders
Anyway, it has been about 20 years since I played with some simple neural networks. Does anyone else want to take a stab at connecting a Photon sensor output to a TensorFlow input?
I think I can get TensorFlow working on cloud9 (https://c9.io), but I would need a bit of help going through potential example programs to find something simple enough that I can understand it. Also, TensorFlow is mainly in Python, which I never became an expert at.
Does anyone know how Python does the equivalent of a Node.js NPM package.json file? I know Python has a setup.py file, but I am not sure if that is what I want. I can set up TensorFlow on the cloud9 site and the examples work, but getting others to install it is too difficult.
Here is an example package.json file that automates the setup of Node.js (seriously, using cloud9 anyone can have a powerful Node server running in seconds from a GitHub site with a well-written package.json file)
The key ideas I am looking for in Python are "scripts" and "dependencies". Any suggestions?
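The closest thing I have found so far (not sure it is the right approach, so corrections welcome) is a setup.py that lists install_requires for the "dependencies" and console_scripts entry points for the "scripts". Here is a minimal sketch; the project name, the dependency list and the demo.main:main entry point are just placeholders, not a tested configuration:

```python
# setup.py -- minimal sketch only; the project name, the dependency
# list and the demo.main:main entry point are placeholders.
from setuptools import setup, find_packages

setup(
    name="photon-tensorflow-demo",   # hypothetical project name
    version="0.1.0",
    packages=find_packages(),
    # Rough equivalent of package.json "dependencies"
    install_requires=[
        "tensorflow",
        "numpy",
    ],
    # Rough equivalent of package.json "scripts": installs a
    # `run-demo` command that calls main() in demo/main.py
    entry_points={
        "console_scripts": [
            "run-demo = demo.main:main",
        ],
    },
)
```

If I understand it correctly, `pip install .` then pulls the dependencies and installs the `run-demo` command, which is roughly `npm install` plus an npm script rolled into one.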
You already have the “Core”, “Photon” and “Electron”, so how about a “Neuron”? A stripped-down Photon that only does what is needed for wherever neural networks end up going. (My vote: one analog read pin and as many digital read/write pins as you can fit on a board.)
With Facebook, IBM and Google making their A.I. software open source, it is only a matter of time until people want to get their hands dirty with the hardware side of machine learning (like what happened 20 years ago). Presently the $5 Raspberry Pi or the $99 Parallella (16-core) are probably the best choices; however, your online IDE with IFTTT debug capabilities beats them hands down.
On a regular day I only really need one or two Photons to test my ideas. Wouldn’t it be great to have a product that people can’t get enough of? Say a five-“Neuron” neural network actually works; people would want more “Neurons” to make it more powerful. Then you start adding layers, and all of a sudden everyone wants 20 to 100 “Neurons”.
Hey @rocksetta - I think there are some pretty spectacular ways that the Internet of Things and machine learning relate to one another, but I’m not sure that running machine learning on the device side is the way that I would go.
Machine learning becomes interesting when you have lots of data, and it also requires some computational power. Rather than attempting to implement a machine learning system on a specific device, I’d encourage you to think about implementing a machine learning algorithm in the cloud and then using Photons/Electrons/etc. to send sensor data to that platform using our APIs and webhooks.
Imagine, for instance, you’re working on an algorithm for a “learning thermostat” (a la Nest). Each thermostat could be very bare bones; just some temperature and presence sensors and an HVAC controller hooked up to a Photon/Electron. You then pipe all of the data to a central server that uses neural networks to determine effective algorithms for controlling the HVAC system so that the user is comfortable (when they’re home, it’s a comfortable temperature) while saving energy costs (when they’re not home, the HVAC system is off as much as possible). Then the thermostat itself would be a “neuron” as you’ve described it.
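As a very rough sketch of the cloud side (not official Particle example code), each Photon could Particle.publish() its readings and a server could subscribe to the Particle event stream and log the data for training. The event name "thermostat" and the JSON payload format below are assumptions for illustration only:

```python
# Rough server-side sketch: read the Particle cloud Server-Sent Events
# stream and collect readings for later training. Assumes each device
# publishes an event named "thermostat" whose data is a small JSON
# string like {"temp": 21.5, "presence": 1} -- that format is made up.
import json
import requests

ACCESS_TOKEN = "YOUR_PARTICLE_ACCESS_TOKEN"
URL = ("https://api.particle.io/v1/devices/events/thermostat"
       "?access_token=" + ACCESS_TOKEN)

def stream_readings():
    """Yield (device_id, reading_dict) for each published event."""
    with requests.get(URL, stream=True) as resp:
        for raw in resp.iter_lines():
            line = raw.decode("utf-8").strip()
            if not line.startswith("data:"):
                continue  # skip event-name and keep-alive lines
            event = json.loads(line[len("data:"):])
            yield event.get("coreid"), json.loads(event["data"])

if __name__ == "__main__":
    for device_id, reading in stream_readings():
        print(device_id, reading)  # here you would append to a training set
```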
Make sense? Let me know if I’m off base here in terms of what you’re hoping to experiment with.
That makes sense @zach. I was wondering how the Photon could be set up to work with Google’s TensorFlow neural network software. A single thermostat is not hard to program, but data from a group of thermostats, humidity sensors, light sensors, time-of-day measurements and motion sensors could be continuously sent to TensorFlow so that the software learns from the user and predicts cost-effective comfort settings. That would be a good goal for the original point of this thread.
What I was getting at is that, historically, hardware improvements lead to software improvements and vice versa. Neural networks went out of fashion as being too hard to generalize. TensorFlow has proven that they can be generically useful, suggesting that there may soon be a resurgence of interest in neural network hardware. Presently the Photon is a good platform for experimenting with neural networks, but to be really useful it would have to be tweaked a bit.
I think I will do a bit of messing around with a neural network design on the Photon and get back to you if I find anything interesting. Thanks for the reply.
P.S. The Photon is an awesome product for the classroom. Thank you very much.
@zach I was completely wrong; the Photon is great for making a neural network. The only limitation I have found is that each node (Photon) can only communicate with 8 other "nodes"; however, the total number of "nodes" in my current system has no limit. If anyone is interested, they can follow along at
Don't ask me to explain it, as I am sure it will change dramatically as things progress. I still have to work back-propagation (learning) into it, and I have only done that kind of programming with a software-based neural network, not a hardware-based one.
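For anyone following along, the software version of what I mean by back-propagation for a single node is roughly the generic textbook sketch below (this is not my actual Photon code, just the math I would have to spread across the hardware "nodes"):

```python
# Generic single-neuron sketch with a sigmoid activation and one
# gradient-descent (delta-rule) weight update. Software math only,
# not the Photon firmware.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, bias, inputs):
    total = bias + sum(w * x for w, x in zip(weights, inputs))
    return sigmoid(total)

def train_step(weights, bias, inputs, target, rate=0.5):
    out = forward(weights, bias, inputs)
    # error times the derivative of the sigmoid
    delta = (target - out) * out * (1.0 - out)
    weights = [w + rate * delta * x for w, x in zip(weights, inputs)]
    bias = bias + rate * delta
    return weights, bias

# Tiny usage example: learn to output ~1.0 for the input [1, 0]
w = [random.uniform(-1, 1) for _ in range(2)]
b = 0.0
for _ in range(1000):
    w, b = train_step(w, b, [1, 0], 1.0)
print(forward(w, b, [1, 0]))  # should be close to 1.0
```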
Hoping firmware version 0.4.8 comes out soon, as I would like to be able to use the DAC1 pin as well as the Timer.changePeriod() function.
I have made a YouTube video about using TensorFlow for beginners on cloud9. Instead of using confusing command-line statements, I use bash commands that can be right-clicked and then run. See the video at
Can someone give me some feedback about flattening the Photon’s PWM voltage spikes? I need to make as many DAC channels as I can (preferably 8). I am not sure if the latest firmware version allows the two DAC pins, A3 and A6, to work, but I need more DACs to make a traditional neural network…
Can I mimic a DAC pin using a PWM pin by connecting a capacitor (470 pF ??) to flatten the voltage spikes? (Neural networks constantly monitor combined voltages, so a fluctuating PWM signal would cause havoc in the circuit.)
A simple RC filter is usually a first choice, but to reduce the output ripple voltage enough you end up also reducing the output voltage unnecessarily and creating a very laggy system. A good solution is a 2nd order RC filter. For 500Hz PWM, two cascaded 2k ohm & 10uF RC filters would work nicely (2.2k is not going to hurt if that’s what you have). You can optionally run this through a high impedance unity gain op-amp to boost the drive current, or lower the impedance for A/D measurement (although this one is probably low enough already, should be fine for the Photon’s ADC).
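To put rough numbers on that (these are ideal single-stage estimates; real cascaded stages load each other, so treat them as ballpark figures only):

```python
# Quick sanity check of the suggested 2 k / 10 uF second-order filter.
import math

R = 2000.0      # ohms per stage
C = 10e-6       # farads per stage
f_pwm = 500.0   # Photon PWM frequency in Hz

f_cutoff = 1.0 / (2.0 * math.pi * R * C)                  # ~8 Hz
per_stage = 1.0 / math.sqrt(1.0 + (f_pwm / f_cutoff)**2)  # ripple left after 1 stage
two_stage = per_stage ** 2                                # ideal cascaded estimate

print("cutoff frequency : %.1f Hz" % f_cutoff)    # ~8.0 Hz
print("ripple, 1 stage  : x %.4f" % per_stage)    # ~0.016
print("ripple, 2 stages : x %.6f" % two_stage)    # ~0.00025
```

So the 500 Hz ripple ends up attenuated by a few thousand times while the DC level (your “neuron” voltage) passes through, at the cost of an RC time constant of about 20 ms per stage.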
I built the first-order filter (with a different capacitor and resistors that I had lying around, plus a coil just for fun) and it worked fine. When sensing through pin A1 I got OK voltage readings, compared with random high/low readings without the capacitor. The low end was not so great, but OK.
Any word on whether the issues with DAC pins A3 and A6 have been worked out in firmware version 0.4.7?
Got a new video on how to use TensorFlow with the Udacity Deep Learning course. It is still a big jump to using artificial intelligence with the Photon, but a small step in the right direction.
So Deep Learning, Artificial Intelligence (AI), Machine Learning (ML), Neural Networks (NN) and Neural Processing Units (NPU) are some of the keywords, along with software such as TensorFlow, Theano, skflow / scikit-learn, Caffe, Torch, OpenAI and many others.
At some point someone will make (or has already made) an inexpensive chip designed for neural networks. Is anyone interested in looking into connecting such a chip to the Photon?
Just reply to or like this note if you're interested. I thought I found a neural board with a camera at https://www.sparkfun.com/ for about $100, but I have not been able to find it again. Please reply if you find something. This is interesting http://www.research.ibm.com/articles/brain-chip.shtml but finding suppliers seems confusing.
We might be able to connect something like the Pixy camera http://www.cmucam.org/projects/cmucam5 to the neural board, and then the neural board to the Photon.
What you would be looking at is a Photon that could sense, learn and interact with its environment.
Once again, if you are interested then like this message so I can reply to you if anything develops.
So I have taken a few months off from working with my Photon to try to understand TensorFlow deep learning. I have managed to use a neural network to generate some music and also to generate some 3D-printed objects.
I spent about six months working with TensorFlow and managed to test some music-generating machine learning (I got it working from a web browser, using a dataset generated on a Ubuntu machine). Here is the result
Hi, I am following this thread as I am interested in finding a good and easy way to control Particles in my new home using “natural language”. If Particles could be made HomeKit-compatible, Siri would be a perfect solution for me. Unfortunately, they aren’t…
(On my todo list: I must make some time to give “Homebridge” a try…)
Today I read the article below about the Raspberry Pi and Google AI. I thought it was appropriate to post it here.
It may interest some members looking for more powerful and flexible solutions, and it could open up totally new perspectives:
Unfortunately it is (probably) too complex a step for me…
I am interested in any reactions and possibly examples of what is possible!
'Alexa' is available for the Pi; the Echo is the speaker. Along those lines, you can get an Amazon Dot for $50 (getting cheaper with more units), which has the microphones embedded, in a wife-friendly enclosure, working out of the box.
It might be worthwhile looking into if you don't want the hassle of hardware in multiple rooms.