Signal classification and processing for a gesture sensor module

  • #1
Lord Crc
TL;DR Summary
Want to classify a variable-length sensor signal into multiple categories.
I got a gesture sensor module that produces a signal per event. Each signal is a set of regularly sampled readings from four photodiodes. The number of samples per event varies a lot, from around 20 up to over 100, and each sample value is essentially the measured brightness.

Initially I'd like to focus on a pair of diodes, so I can detect "left to right" and "right to left" movement. In addition I'd like to be able to tell when the classification is uncertain. A "left to right" signal is characteristic in that the left photodiode will measure increased brightness before the right photodiode does, and its brightness will also drop before the right diode's does. However, the absolute levels will vary between events, and the relative levels will often vary between the left and right diode (if the hand does not move perpendicular to the sensor, for example).

As an example, here is some data for three events. The values here are differential values, i.e. the left sample value minus the right sample value.

The first is a relatively clear "left to right":
17, 13, 18, 15, 17, 19, 16, 17, 21, 22, 18, 18, 17, 20, 20, 19, 18, 17, 15, 15, 16, 16, 19, 15, 15, 15, 14, 6, 13, 5, 4, 2, 3, 1, -7, -2, -5, -10, -9, -16, -15, -12, -16, -15, -17, -19, -22, -22, -20, -22, -20, -26, -22, -21, -23, -25, -23

Second is a faster yet fairly decent "right to left":
-27, -22, -23, -18, -20, -11, -13, -12, -7, -3, 3, 4, 10, 15, 19, 20, 25, 24, 20, 19, 20, 15, 7, 0, 0, -7, -4

The last is an even faster but more ambiguous "right to left":
-12, -14, -13, -14, -16, -18, -19, -14, -16, -17, -15, -14, -17, -10, -14, -14, -13, -9, -10, -8, -7
I'm ok with this last one being classified as "uncertain".

I've had a few statistics classes at uni way back, but we never got around to stuff like this. I've tried to do some searching but haven't yet found anything that I could recognize as a good fit.

My initial thought was to do something fairly low-tech and simply expect N consecutive samples that are "fairly high", followed some time later by some that are "fairly in the middle", and finally by some that are "fairly low". But that seems a bit fragile and noise-prone.
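A minimal C sketch of that idea, assuming the event's left-minus-right differential samples have already been collected in a signed array; the threshold and run-length constants are placeholders that would need tuning against real data:

Code (C):
#include <stdint.h>

/* Possible classification outcomes. */
typedef enum {
    GESTURE_UNCERTAIN,
    GESTURE_LEFT_TO_RIGHT,
    GESTURE_RIGHT_TO_LEFT
} gesture_t;

/* Placeholder tuning constants. */
#define HIGH_THRESHOLD   10   /* differential clearly positive (left diode brighter) */
#define LOW_THRESHOLD   -10   /* differential clearly negative (right diode brighter) */
#define MIN_RUN           3   /* N consecutive samples required for each phase        */

/* Look for a run of "fairly high" samples followed later by a run of
 * "fairly low" samples (left to right), or the reverse (right to left).
 * Anything else is reported as uncertain. */
static gesture_t classify_differential(const int16_t *diff, int n)
{
    int high_run = 0, low_run = 0;
    int saw_high_first = 0, saw_low_first = 0;

    for (int i = 0; i < n; i++) {
        if (diff[i] >= HIGH_THRESHOLD) {
            high_run++;
            low_run = 0;
            if (high_run >= MIN_RUN) {
                if (saw_low_first)
                    return GESTURE_RIGHT_TO_LEFT;
                saw_high_first = 1;
            }
        } else if (diff[i] <= LOW_THRESHOLD) {
            low_run++;
            high_run = 0;
            if (low_run >= MIN_RUN) {
                if (saw_high_first)
                    return GESTURE_LEFT_TO_RIGHT;
                saw_low_first = 1;
            }
        } else {
            /* "fairly in the middle": reset both run counters */
            high_run = low_run = 0;
        }
    }
    return GESTURE_UNCERTAIN;
}

With the placeholder values above, the three example events in this post happen to come out as "left to right", "right to left" and "uncertain" respectively, but the thresholds would clearly need checking against more recorded data.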

Another idea I came across while searching was to fit a cubic polynomial to the normalized differential values and then use some classifier like a perceptron on the resulting coefficients. No idea if that's complete rubbish or not though.
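A rough C sketch of that idea, assuming the sample index is mapped to x in [-1, 1] and the values are scaled by the event's peak magnitude before the least-squares fit; the 4x4 normal-equation solve keeps memory use small, and the perceptron weights and bias would have to come from offline training on labelled events (nothing here is tested, all names and constants are illustrative):

Code (C):
#include <math.h>
#include <stdint.h>

/* Least-squares fit of y ~ c0 + c1*x + c2*x^2 + c3*x^3, with the sample index
 * mapped to x in [-1, 1] and y normalized by the peak |value| of the event.
 * Returns 0 on success, -1 if the event is too short, flat or ill-conditioned. */
static int fit_cubic(const int16_t *diff, int n, float c[4])
{
    float A[4][4] = {{0}};   /* normal-equation matrix: sums of x^(i+j) */
    float b[4]    = {0};     /* right-hand side: sums of x^i * y        */
    float peak = 0.0f;

    for (int k = 0; k < n; k++) {
        float a = fabsf((float)diff[k]);
        if (a > peak) peak = a;
    }
    if (n < 4 || peak == 0.0f)
        return -1;

    for (int k = 0; k < n; k++) {
        float x = 2.0f * k / (n - 1) - 1.0f;
        float y = (float)diff[k] / peak;
        float xp[7];                         /* powers x^0 .. x^6 */
        xp[0] = 1.0f;
        for (int p = 1; p < 7; p++) xp[p] = xp[p - 1] * x;
        for (int i = 0; i < 4; i++) {
            for (int j = 0; j < 4; j++) A[i][j] += xp[i + j];
            b[i] += xp[i] * y;
        }
    }

    /* Gaussian elimination with partial pivoting on the 4x4 system. */
    for (int col = 0; col < 4; col++) {
        int piv = col;
        for (int r = col + 1; r < 4; r++)
            if (fabsf(A[r][col]) > fabsf(A[piv][col])) piv = r;
        if (fabsf(A[piv][col]) < 1e-6f)
            return -1;
        if (piv != col) {
            for (int j = 0; j < 4; j++) {
                float t = A[col][j]; A[col][j] = A[piv][j]; A[piv][j] = t;
            }
            float t = b[col]; b[col] = b[piv]; b[piv] = t;
        }
        for (int r = col + 1; r < 4; r++) {
            float f = A[r][col] / A[col][col];
            for (int j = col; j < 4; j++) A[r][j] -= f * A[col][j];
            b[r] -= f * b[col];
        }
    }
    for (int i = 3; i >= 0; i--) {          /* back substitution */
        float s = b[i];
        for (int j = i + 1; j < 4; j++) s -= A[i][j] * c[j];
        c[i] = s / A[i][i];
    }
    return 0;
}

/* Toy linear classifier on the fitted coefficients; weights and bias are
 * placeholders that would come from training. Returns +1, 0 or -1. */
static int perceptron_sign(const float c[4], const float w[4], float bias)
{
    float s = bias;
    for (int i = 0; i < 4; i++) s += w[i] * c[i];
    return (s > 0.0f) - (s < 0.0f);
}

The working set is just the 4x4 matrix and a handful of floats, so the microcontroller constraint shouldn't be the problem; whether the coefficients actually separate "left to right" from "right to left" cleanly is the part that would need checking on recorded events.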

As a final twist, this will be running on a microcontroller with limited memory, so big matrices or lots of heavy computations etc might be an issue.

Anyone got some pointers or ideas?

Not sure if this is the right place, so mods feel free to move.
 
  • #2
There is no fundamental reason the signal should depend on speed (until you hit signal-to-noise limits). Human movements are much slower than the timescales these electronics should be operating on. What is the time interval between samples, and is it fixed? How does the module decide to send a signal (and to stop)?
It looks to me like the module stops signaling before the transition for the last data set. (?)
 
  • #3
hutchphd said:
There is no fundamental reason the signal should depend on speed (until you hit signal-to-noise limits).
A slower hand means more samples, so a longer signal.

hutchphd said:
What is the time interval between samples, and is it fixed?
The ADC takes 1.4 ms per sample, but there's a configurable wait time between samples (essentially multiples of 1.4 ms). The gesture sensing is interleaved with proximity detection (see below), so I'm not entirely sure how many milliseconds there are between samples; I'll have to read the datasheet more carefully. But it should be a fixed rate regardless.
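(For a rough sense of scale, assuming the sample period were just the 1.4 ms conversion time with no extra wait: the 57-sample first event above would span about 57 × 1.4 ms ≈ 80 ms, and the 21-sample third event about 30 ms; any configured wait time stretches these proportionally.)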
hutchphd said:
How does the module decide to send a signal (and to stop)?
It looks to me like the module stops signaling before the transition for the last data set. (?)
When not detecting gestures it uses the photodiodes as proximity sensors. One can configure a proximity value for when a gesture event should start, and there's a similar threshold for when it ends.

Cheers!
 
  • #4
Lord Crc said:
Summary: Want to classify a variable-length sensor signal into multiple categories.

As a final twist, this will be running on a microcontroller with limited memory, so big matrices or lots of heavy computations etc might be an issue.

Anyone got some pointers or ideas?
To do this right, you need to consider using Finite State Machine (FSM) techniques, IMO. Have you been exposed to them before?

https://en.wikipedia.org/wiki/Finite-state_machine

 
  • #5
Lord Crc said:
Not sure if this is the right place, so mods feel free to move.
(moved to EE) :wink:
 
  • #6
berkeman said:
To do this right, you need to consider using Finite State Machine (FSM) techniques, IMO. Have you been exposed to them before?
Yeah, as I mentioned in my original post, my first idea was basically an FSM.

For nice clean signals, such as the first one I listed, I expect this should work well. However, ideally I'd also like it to handle less ideal signals, such as the second one. I wasn't sure I'd be able to get the state machine to reliably recognize those, but maybe focusing on the raw signals would be better for an FSM.
 
  • #7
Lord Crc said:
Yeah, as I mentioned in my original post, my first idea was basically an FSM.
Oops, apologies that I missed that. :smile:

Is there any way that you can digitize all of the sensor inputs at the same time? Interleaved sampling seems to add some hard complications (at least to me at first glance).
 
  • #8
berkeman said:
Is there any way that you can digitize all of the sensor inputs at the same time? Interleaved sampling seems to add some hard complications (at least to me at first glance).

Upon reading the datasheet[1] yet again, I see that I was mistaken in my description. The proximity detection is not done during gesture detection, only before. If something is in proximity it will enter "gesture mode". Once in "gesture mode", the gesture data is used to determine when a gesture is over.

I got confused since the way you configure the "enter threshold" is essentially identical to the way you configure the "exit threshold"; however, the proximity engine has its own separate set of LED brightness and ADC gain settings.

Regardless, what the sensor provides is an array of 4-tuples of measured brightness, each tuple having one 8-bit digitized value from each of the four photodiodes. A pair of diodes is sampled in 0.7 ms, so for all intents and purposes here I assume they can be treated as simultaneously sampled.

[1]: https://docs.broadcom.com/docs/AV02-4191EN
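A minimal sketch of how one event could be represented and reduced to the left-minus-right differential series used earlier in the thread; the struct and field names are invented for illustration, not taken from the datasheet:

Code (C):
#include <stdint.h>

#define MAX_SAMPLES 128   /* generous upper bound for one event (assumption) */

/* One gesture sample: 8-bit brightness from each of the four photodiodes.
 * Field names are illustrative, not the sensor's register names. */
typedef struct {
    uint8_t up, down, left, right;
} gesture_sample_t;

/* One complete gesture event as read out of the sensor. */
typedef struct {
    gesture_sample_t samples[MAX_SAMPLES];
    uint8_t          count;                 /* roughly 20..100+ in practice */
} gesture_event_t;

/* Reduce an event to the left-minus-right differential series, the same
 * representation as the example data earlier in the thread. */
static int make_lr_differential(const gesture_event_t *ev,
                                int16_t *diff, int max_out)
{
    int n = ev->count < max_out ? ev->count : max_out;
    for (int i = 0; i < n; i++)
        diff[i] = (int16_t)ev->samples[i].left - (int16_t)ev->samples[i].right;
    return n;   /* number of differential samples produced */
}

The other diode pair could be reduced the same way if more gesture directions are needed later.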
 

1. What is signal classification in the context of gesture sensor modules?

Signal classification refers to the process of analyzing and categorizing signals received from a gesture sensor module. This involves identifying patterns and features in the signal data to determine the type of gesture being performed.

2. How does signal processing work in a gesture sensor module?

Signal processing in a gesture sensor module involves manipulating and analyzing the signals received from the sensor to extract useful information. This can include filtering, amplifying, and converting the signals to a digital format for further analysis.

3. What factors affect the accuracy of signal classification in a gesture sensor module?

The accuracy of signal classification in a gesture sensor module can be affected by various factors, such as the quality of the sensor, the complexity of the gestures being performed, and the algorithms used for signal processing and classification.

4. How can machine learning be used for signal classification in gesture sensor modules?

Machine learning techniques can be used for signal classification in gesture sensor modules by training algorithms on a large dataset of signals and their corresponding gestures. This allows the algorithm to learn and improve its accuracy over time.

5. What are some common applications of signal classification and processing in gesture sensor modules?

Gesture sensor modules with signal classification and processing capabilities are commonly used in various applications, such as gaming, virtual reality, and smart home devices. They can also be used in healthcare for monitoring and tracking movements of patients.
