To answer your request: yes, glowfi.sh can distinguish between acoustic chirp signals from various sources.
I took your 167 acoustic chirp files (the new ones only) and transformed them with a Discrete Cosine Transform (DCT) to get 8000 de-correlated frequency components corresponding to your 8000 time samples (I ignored signal phase for this test). I then ran the 8000 transformed components for all 167 one-second chirps through the glowfi.sh feature_select endpoint to determine which frequency components were discriminating for the three cases you have (washer on - sensor on washer, washer on - sensor on dryer, and dryer on - sensor on dryer). Our feature selection identified 1352 of the 8000 frequency components as significant in differentiating between these cases.

I then used those 1352 components from a randomly selected 146 of the 167 chirps to train a classification model for your three acoustic cases (using our train endpoint), and tested the classifier by running the remaining 21 chirps (~7 per case) through our predict endpoint. The result: a composite accuracy of 81% for correctly predicting the class of your acoustic signals based on their DCT transforms. I include a portion of the JSON return showing the other accuracy numbers below. The total run time for predicting the 21 samples was ~500 ms.
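For reference, here is a minimal sketch of the transform step in Python. The file name/format and the type-II, orthonormal DCT settings are my assumptions for illustration, not necessarily the exact variant we ran:

    import numpy as np
    from scipy.fft import dct

    # Hypothetical input: one 1-sec chirp stored as a plain 8000-sample array.
    samples = np.load("chirp_0001.npy")
    assert samples.shape == (8000,)

    # Type-II DCT with orthonormal scaling: 8000 real time samples in,
    # 8000 real (phase-free) frequency components out.
    components = dct(samples, type=2, norm="ortho")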
I include a plot of log10(Amplitude) of DCT transforms of three example acoustic chirps.
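If you want to reproduce that kind of plot yourself, something like the following would do it (matplotlib; the transform lines repeat the sketch above so this runs on its own):

    import numpy as np
    from scipy.fft import dct
    import matplotlib.pyplot as plt

    samples = np.load("chirp_0001.npy")   # hypothetical file, 8000 samples
    log_amp = np.log10(np.abs(dct(samples, type=2, norm="ortho")) + 1e-12)

    plt.plot(log_amp)
    plt.xlabel("DCT component index")
    plt.ylabel("log10(Amplitude)")
    plt.title("DCT of a 1-sec acoustic chirp")
    plt.show()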
Let me know if you want me to go over the flow to/from our API in more detail; a rough sketch of what the calls could look like follows the JSON below.
Mike
glowfi.sh API return
"accuracy_data": {
    "recall": [
        0.8,
        0.57,
        1.0
    ],
    "f1_scores": [
        0.67,
        0.67,
        1.0
    ],
    "precision": [
        0.57,
        0.8,
        1.0
    ],
    "class_names": [
        "DD",
        "DW",
        "WW"
    ],
    "Composite_Accuracy": 0.81
}
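And here is the rough sketch of the request flow I mentioned. The base URL, payload shapes, and response field names ("selected_features", "model_id") are placeholders meant only to show the order of the calls (feature_select, then train, then predict), not our actual schema, and the data is a random stand-in for your 167 x 8000 DCT matrix:

    import numpy as np
    import requests

    BASE = "https://api.glowfi.sh/v1"              # placeholder base URL

    # Stand-in for the real 167 x 8000 matrix of DCT components,
    # with one "WW"/"DW"/"DD" label per chirp.
    dct_matrix = np.random.rand(167, 8000)
    labels = ["WW"] * 56 + ["DW"] * 56 + ["DD"] * 55

    # 1) feature_select: find the discriminating frequency components.
    r = requests.post(f"{BASE}/feature_select",
                      json={"data": dct_matrix.tolist(), "labels": labels})
    selected = r.json()["selected_features"]       # field name assumed

    # 2) train: fit a classifier on the selected components of 146 chirps.
    train_idx = np.random.choice(167, 146, replace=False)
    test_idx = np.setdiff1d(np.arange(167), train_idx)
    r = requests.post(f"{BASE}/train",
                      json={"data": dct_matrix[np.ix_(train_idx, selected)].tolist(),
                            "labels": [labels[i] for i in train_idx]})
    model_id = r.json()["model_id"]                # field name assumed

    # 3) predict: classify the 21 held-out chirps; the return carries
    #    the accuracy_data block shown above.
    r = requests.post(f"{BASE}/predict",
                      json={"model_id": model_id,
                            "data": dct_matrix[np.ix_(test_idx, selected)].tolist(),
                            "labels": [labels[i] for i in test_idx]})
    print(r.json()["accuracy_data"]["Composite_Accuracy"])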