Music Visualizer with spark core, 8x8 led backpack, & microphone

Hello!
I am relatively new to the Spark Core, and I am attempting to replicate this music visualizer tutorial with it.

Because the pins on the Spark Core are different from the Arduino's, I wanted to make sure I am wiring this correctly before I power it on, so that I don’t damage anything. Also, the tutorial says it is crucial to connect the microphone to the AREF pin, but from what I understand the Spark Core doesn’t have an equivalent.

I apologize if any of these are simple questions. I am just overly cautious and wanted to check that my wiring is right before continuing to the next step :slight_smile:

Thank you!!

No guarantee, but some feedback :wink:

You won’t need the AREF pin, since the Core already has a default analog reference voltage of 3.3V.
I’d guess you want an analog (A) pin connected to the mic amp OUT pin (instead of the digital D3).
I’d also guess your LED matrix wants to be powered off VIN rather than 3V3*.
If you’re experiencing some power issues with the mic, you might also try 3V3 instead of 3V3* - this might introduce some noise but better with noise than not at all :wink:
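
If you want to sanity-check the mic wiring before adding the matrix, a tiny test sketch like this should do (assuming the amp OUT ends up on A0 - adjust if you wire it differently):

// Mic sanity check (assumes mic amp OUT -> A0, amp VCC -> 3V3, amp GND -> GND).
// In silence the readings should hover around mid-scale of the 12-bit ADC (~2048).
const int MIC_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  Serial.println(analogRead(MIC_PIN)); // Core ADC range: 0..4095
  delay(100);
}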


Thank you so much! I initially had it like that, and then second guessed myself and moved around the wires. :smile:

One other question: do you have any recommendations on how to convert the Arduino code into code for the Spark Core?

I included the Arduino code at the end of this post, but when I tested it, it fails to compile. Also, it says to include:

avr/pgmspace.h
ffft.h
math.h
Wire.h
Adafruit_GFX.h
Adafruit_LEDBackpack.h

So I included the Adafruit GFX and LEDBackpack libraries, but I am not sure how to include the other files, such as ffft.h.

Also, it gives the warning "Defaulting to Release Build" for the Adafruit libraries.

Code....

// This #include statement was automatically added by the Spark IDE.
#include "Adafruit_GFX/Adafruit_GFX.h"

// This #include statement was automatically added by the Spark IDE.
#include "adafruit-led-backpack/adafruit-led-backpack.h"

/*
PICCOLO is a tiny Arduino-based audio visualizer.

Hardware requirements:
 - Most Arduino or Arduino-compatible boards (ATmega 328P or better).
 - Adafruit Bicolor LED Matrix with I2C Backpack (ID: 902)
 - Adafruit Electret Microphone Amplifier (ID: 1063)
 - Optional: battery for portable use (else power through USB)
Software requirements:
 - elm-chan's ffft library for Arduino

Connections:
 - 3.3V to mic amp+ and Arduino AREF pin <-- important!
 - GND to mic amp-
 - Analog pin 0 to mic amp output
 - +5V, GND, SDA (or analog 4) and SCL (analog 5) to I2C Matrix backpack

Written by Adafruit Industries.  Distributed under the BSD license --
see license.txt for more information.  This paragraph must be included
in any redistribution.

ffft library is provided under its own terms -- see ffft.S for specifics.
*/

// IMPORTANT: FFT_N should be #defined as 128 in ffft.h.

#include <avr/pgmspace.h>
#include <ffft.h>
#include <math.h>
#include <Wire.h>
#include <Adafruit_GFX.h>
#include <Adafruit_LEDBackpack.h>

// Microphone connects to Analog Pin 0.  Corresponding ADC channel number
// varies among boards...it's ADC0 on Uno and Mega, ADC7 on Leonardo.
// Other boards may require different settings; refer to datasheet.
#ifdef __AVR_ATmega32U4__
 #define ADC_CHANNEL 7
#else
 #define ADC_CHANNEL 0
#endif

int16_t       capture[FFT_N];    // Audio capture buffer
complex_t     bfly_buff[FFT_N];  // FFT "butterfly" buffer
uint16_t      spectrum[FFT_N/2]; // Spectrum output buffer
volatile byte samplePos = 0;     // Buffer position counter

byte
  peak[8],      // Peak level of each column; used for falling dots
  dotCount = 0, // Frame counter for delaying dot-falling speed
  colCount = 0; // Frame counter for storing past column data
int
  col[8][10],   // Column levels for the prior 10 frames
  minLvlAvg[8], // For dynamic adjustment of low & high ends of graph,
  maxLvlAvg[8], // pseudo rolling averages for the prior few frames.
  colDiv[8];    // Used when filtering FFT output to 8 columns

/*
These tables were arrived at through testing, modeling and trial and error,
exposing the unit to assorted music and sounds.  But there's no One Perfect
EQ Setting to Rule Them All, and the graph may respond better to some
inputs than others.  The software works at making the graph interesting,
but some columns will always be less lively than others, especially
comparing live speech against ambient music of varying genres.
*/
static const uint8_t PROGMEM
  // This is low-level noise that's subtracted from each FFT output column:
  noise[64]={ 8,6,6,5,3,4,4,4,3,4,4,3,2,3,3,4,
              2,1,2,1,3,2,3,2,1,2,3,1,2,3,4,4,
              3,2,2,2,2,2,2,1,3,2,2,2,2,2,2,2,
              2,2,2,2,2,2,2,2,2,2,2,2,2,3,3,4 },
  // These are scaling quotients for each FFT output column, sort of a
  // graphic EQ in reverse.  Most music is pretty heavy at the bass end.
  eq[64]={
    255, 175,218,225,220,198,147, 99, 68, 47, 33, 22, 14,  8,  4,  2,
      0,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
      0,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,
      0,   0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0,  0 },
  // When filtering down to 8 columns, these tables contain indexes
  // and weightings of the FFT spectrum output values to use.  Not all
  // buckets are used -- the bottom-most and several at the top are
  // either noisy or out of range or generally not good for a graph.
  col0data[] = {  2,  1,  // # of spectrum bins to merge, index of first
    111,   8 },           // Weights for each bin
  col1data[] = {  4,  1,  // 4 bins, starting at index 1
     19, 186,  38,   2 }, // Weights for 4 bins.  Got it now?
  col2data[] = {  5,  2,
     11, 156, 118,  16,   1 },
  col3data[] = {  8,  3,
      5,  55, 165, 164,  71,  18,   4,   1 },
  col4data[] = { 11,  5,
      3,  24,  89, 169, 178, 118,  54,  20,   6,   2,   1 },
  col5data[] = { 17,  7,
      2,   9,  29,  70, 125, 172, 185, 162, 118, 74,
     41,  21,  10,   5,   2,   1,   1 },
  col6data[] = { 25, 11,
      1,   4,  11,  25,  49,  83, 121, 156, 180, 185,
    174, 149, 118,  87,  60,  40,  25,  16,  10,   6,
      4,   2,   1,   1,   1 },
  col7data[] = { 37, 16,
      1,   2,   5,  10,  18,  30,  46,  67,  92, 118,
    143, 164, 179, 185, 184, 174, 158, 139, 118,  97,
     77,  60,  45,  34,  25,  18,  13,   9,   7,   5,
      3,   2,   2,   1,   1,   1,   1 },
  // And then this points to the start of the data for each of the columns:
  * const colData[]  = {
    col0data, col1data, col2data, col3data,
    col4data, col5data, col6data, col7data };

Adafruit_BicolorMatrix matrix = Adafruit_BicolorMatrix();

void setup() {
  uint8_t i, j, nBins, binNum, *data;

  memset(peak, 0, sizeof(peak));
  memset(col , 0, sizeof(col));

  for(i=0; i<8; i++) {
    minLvlAvg[i] = 0;
    maxLvlAvg[i] = 512;
    data         = (uint8_t *)pgm_read_word(&colData[i]);
    nBins        = pgm_read_byte(&data[0]) + 2;
    binNum       = pgm_read_byte(&data[1]);
    for(colDiv[i]=0, j=2; j<nBins; j++)
      colDiv[i] += pgm_read_byte(&data[j]);
  }

  matrix.begin(0x70);

  // Init ADC free-run mode; f = ( 16MHz/prescaler ) / 13 cycles/conversion 
  ADMUX  = ADC_CHANNEL; // Channel sel, right-adj, use AREF pin
  ADCSRA = _BV(ADEN)  | // ADC enable
           _BV(ADSC)  | // ADC start
           _BV(ADATE) | // Auto trigger
           _BV(ADIE)  | // Interrupt enable
           _BV(ADPS2) | _BV(ADPS1) | _BV(ADPS0); // 128:1 / 13 = 9615 Hz
  ADCSRB = 0;                // Free run mode, no high MUX bit
  DIDR0  = 1 << ADC_CHANNEL; // Turn off digital input for ADC pin
  TIMSK0 = 0;                // Timer0 off

  sei(); // Enable interrupts
}

void loop() {
  uint8_t  i, x, L, *data, nBins, binNum, weighting, c;
  uint16_t minLvl, maxLvl;
  int      level, y, sum;

  while(ADCSRA & _BV(ADIE)); // Wait for audio sampling to finish

  fft_input(capture, bfly_buff);   // Samples -> complex #s
  samplePos = 0;                   // Reset sample counter
  ADCSRA |= _BV(ADIE);             // Resume sampling interrupt
  fft_execute(bfly_buff);          // Process complex data
  fft_output(bfly_buff, spectrum); // Complex -> spectrum

  // Remove noise and apply EQ levels
  for(x=0; x<FFT_N/2; x++) {
    L = pgm_read_byte(&noise[x]);
    spectrum[x] = (spectrum[x] <= L) ? 0 :
      (((spectrum[x] - L) * (256L - pgm_read_byte(&eq[x]))) >> 8);
  }

  // Fill background w/colors, then idle parts of columns will erase
  matrix.fillRect(0, 0, 8, 3, LED_RED);    // Upper section
  matrix.fillRect(0, 3, 8, 2, LED_YELLOW); // Mid
  matrix.fillRect(0, 5, 8, 3, LED_GREEN);  // Lower section

  // Downsample spectrum output to 8 columns:
  for(x=0; x<8; x++) {
    data   = (uint8_t *)pgm_read_word(&colData[x]);
    nBins  = pgm_read_byte(&data[0]) + 2;
    binNum = pgm_read_byte(&data[1]);
    for(sum=0, i=2; i<nBins; i++)
      sum += spectrum[binNum++] * pgm_read_byte(&data[i]); // Weighted
    col[x][colCount] = sum / colDiv[x];                    // Average
    minLvl = maxLvl = col[x][0];
    for(i=1; i<10; i++) { // Get range of prior 10 frames
      if(col[x][i] < minLvl)      minLvl = col[x][i];
      else if(col[x][i] > maxLvl) maxLvl = col[x][i];
    }
    // minLvl and maxLvl indicate the extents of the FFT output, used
    // for vertically scaling the output graph (so it looks interesting
    // regardless of volume level).  If they're too close together though
    // (e.g. at very low volume levels) the graph becomes super coarse
    // and 'jumpy'...so keep some minimum distance between them (this
    // also lets the graph go to zero when no sound is playing):
    if((maxLvl - minLvl) < 8) maxLvl = minLvl + 8;
    minLvlAvg[x] = (minLvlAvg[x] * 7 + minLvl) >> 3; // Dampen min/max levels
    maxLvlAvg[x] = (maxLvlAvg[x] * 7 + maxLvl) >> 3; // (fake rolling average)

    // Second fixed-point scale based on dynamic min/max levels:
    level = 10L * (col[x][colCount] - minLvlAvg[x]) /
      (long)(maxLvlAvg[x] - minLvlAvg[x]);

    // Clip output and convert to byte:
    if(level < 0L)      c = 0;
    else if(level > 10) c = 10; // Allow dot to go a couple pixels off top
    else                c = (uint8_t)level;

    if(c > peak[x]) peak[x] = c; // Keep dot on top

    if(peak[x] <= 0) { // Empty column?
      matrix.drawLine(x, 0, x, 7, LED_OFF);
      continue;
    } else if(c < 8) { // Partial column?
      matrix.drawLine(x, 0, x, 7 - c, LED_OFF);
    }

    // The 'peak' dot color varies, but doesn't necessarily match
    // the three screen regions...yellow has a little extra influence.
    y = 8 - peak[x];
    if(y < 2)      matrix.drawPixel(x, y, LED_RED);
    else if(y < 6) matrix.drawPixel(x, y, LED_YELLOW);
    else           matrix.drawPixel(x, y, LED_GREEN);
  }

  matrix.writeDisplay();

  // Every third frame, make the peak pixels drop by 1:
  if(++dotCount >= 3) {
    dotCount = 0;
    for(x=0; x<8; x++) {
      if(peak[x] > 0) peak[x]--;
    }
  }

  if(++colCount >= 10) colCount = 0;
}

ISR(ADC_vect) { // Audio-sampling interrupt
  static const int16_t noiseThreshold = 4;
  int16_t              sample         = ADC; // 0-1023

  capture[samplePos] =
    ((sample > (512-noiseThreshold)) &&
     (sample < (512+noiseThreshold))) ? 0 :
    sample - 512; // Sign-convert for FFT; -512 to +511

  if(++samplePos >= FFT_N) ADCSRA &= ~_BV(ADIE); // Buffer full, interrupt off
}

@kbecks, unfortunately the code uses the ffft library, which makes extensive use of AVR-specific assembler and hardware-specific references. There is NO port for this type of code. Furthermore, the main code makes use of ADC interrupts not currently accessible to the Spark user app. All this to say, this is what I would consider an un-portable library.

Now, that does not mean that a new Spark library can’t be written. There is STM32 platform code for fft and there may be a way to hook into ADC interrupts. However, for the assembler, a low-level programmer would need to get involved. :smile:
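
Just to illustrate the difference: on the Core, the capture buffer would have to be filled from loop() with plain analogRead() calls instead of a free-running ADC interrupt. A very rough sketch (FFT_N and the buffer are placeholders here, and you'd still need an FFT routine to replace ffft):

#define FFT_N 128            // placeholder, matches the size ffft expects
int16_t capture[FFT_N];      // audio capture buffer

void sampleAudio(int pin) {
  for (int i = 0; i < FFT_N; i++) {
    // Center the 12-bit reading (0..4095) around zero for a later FFT
    capture[i] = (int16_t)analogRead(pin) - 2048;
    delayMicroseconds(100);  // very roughly ~10 kHz; not interrupt-accurate
  }
}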

@peekay123 Thank you for your insight, that is good to note. :blush:

Do you think I would have better luck following this tutorial which uses the same hardware (8x8 led backpack & microphone) but does not require the ffft?

Do you think I am able to use this code (with a few changes, “1024” to “4096”, “0” to “A0”), or am I overlooking something?

Lastly, I noticed that this one, in addition to "Adafruit_GFX.h" and "Adafruit_LEDBackpack.h", says to include "Wire.h". How would I go about including that?

For general porting questions like your "Wire.h" question, there is a short thread you can have a look at.
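
The short version for Wire.h: on the Core it comes with the firmware (the Web IDE pulls in application.h automatically), so you don't attach a separate library for it. Your include block would roughly shrink to:

// Only the attached Adafruit libraries need explicit includes;
// Wire, math and Serial are already part of the Core firmware.
#include "Adafruit_GFX/Adafruit_GFX.h"
#include "adafruit-led-backpack/adafruit-led-backpack.h"
// no <Wire.h>, <math.h> or <avr/pgmspace.h> required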

In connection with FFT there are two threads on this forum, but I’ve not read through them, tho’

For your other question:
A0 would be fine, and you’d use 4095 (it should really be 1023 rather than 1024 in the Adafruit code, too ;-))
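
In other words, wherever that sketch reads the mic and scales against the ADC full range, the Core version would look roughly like this (the exact variable names depend on the tutorial's code):

int sample = analogRead(A0);    // Core: 12-bit ADC, 0..4095 (Arduino used analogRead(0), 0..1023)
float level = sample / 4095.0;  // normalized 0.0..1.0 (was / 1023.0 on the Arduino)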

Thank you both sooo much for your help, I finally got it working!! I am so excited!! :smile:

One last question…
Is it possible to graph real time sound levels (so I can see on the computer essentially what the led display is doing)?
If so, can you recommend a tutorial or what to use, I’m not really sure where to begin.

I’ve come across a lot of potential options (publish w/ IFTTT? Freeboard? do I use JSON?) but am a little clueless about whether what I’m trying to do is feasible and what the best route would be.


Realtime is a bit of a stretch here :wink:

And IFTTT is not an option for this kind of use, and neither are any of the cloud-based services.

I’d guess you’d get the best results with UDP, or maybe TCPClient/TCPServer, in your local subnet.
And response time will be even better in SYSTEM_MODE(MANUAL) or SYSTEM_MODE(SEMI_AUTOMATIC), since you cut out the cloud maintenance that demands some processing time too.

UDP has some (minor) issues, but you’ll find workarounds on this forum and some useful hints on how to use it for your purpose, too.
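
As a very rough idea of the UDP route (the IP, port and payload are just placeholders - you'd point it at whatever listener runs on your PC):

UDP udp;
IPAddress pcAddress(192, 168, 1, 100);  // placeholder: your PC's local IP
const int PC_PORT = 8888;               // placeholder: port your listener uses

void setup() {
  udp.begin(8888);                      // local port on the Core
}

// Call this once per frame with the 8 column levels you already compute
void sendLevels(uint8_t levels[8]) {
  udp.beginPacket(pcAddress, PC_PORT);
  udp.write(levels, 8);
  udp.endPacket();
}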

@kbecks, as @ScruffR points out, “real-time” is tricky. One thing you could do is simply stream the data over the serial port (USB or Serial1) to your PC, where you can graph it.
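
For example, something as simple as this in your loop would give you a comma-separated stream you can capture and plot on the PC (levels stands in for whatever per-column values your sketch computes; call Serial.begin() in setup()):

// Print the 8 column levels as one CSV line per frame for plotting on the PC
void streamLevels(uint8_t levels[8]) {
  for (int i = 0; i < 8; i++) {
    Serial.print(levels[i]);
    Serial.print(i < 7 ? "," : "\n");
  }
}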


@kbecks, nice work, this is a cool integration. Did you ever publish the project or the code that you got working?