Research Article | Peer-Reviewed

Robust Radar-driven Gesture Recognition for Contactless Human-computer Interaction Using Support Vector Machine and Signal Feature Optimization

Received: 15 July 2025     Accepted: 24 July 2025     Published: 8 August 2025
Abstract

Radar-based gesture recognition has emerged as a reliable alternative to vision-based systems for human-computer interaction, especially in environments with low illumination, occlusion, or privacy constraints. This study explores the implementation of a radar-based gesture recognition system using advanced signal processing and machine learning techniques to classify dynamic hand movements with high precision. The central challenge addressed involves extracting discriminative features from radar signals and developing robust classifiers capable of performing effectively under real-world conditions. The proposed approach includes preprocessing radar data through bandpass filtering (5-50 Hz) and normalization, followed by the extraction of key features such as signal energy, mean Doppler shift (7.6-7.9 Hz), and spectral centroid. A Support Vector Machine (SVM) classifier with a radial basis function (RBF) kernel is employed and optimized for gesture classification. Comparative analysis reveals that the SVM model outperforms the K-nearest neighbors (KNN) method, achieving a classification accuracy of 86% and an F1-score of 0.89, compared to 82% accuracy and a 0.84 F1-score obtained with KNN. These results demonstrate the effectiveness of radar-based systems in detecting and classifying hand gestures accurately, and are consistent with reports of up to 97.3% accuracy in controlled environments. Unlike traditional camera-based systems, radar maintains functionality in poor lighting and occluded conditions while preserving user privacy by avoiding optical recordings. The system also offers low power consumption and real-time processing capabilities, making it suitable for deployment in privacy-sensitive and resource-constrained applications. This work confirms radar’s potential in fine-grained gesture interpretation and aligns with prior studies in crowd tracking and digit recognition, where similar performance metrics were observed. 
The integration of radar sensing with machine learning offers a promising path toward more secure, responsive, and environment-agnostic interaction systems.

Published in International Journal of Wireless Communications and Mobile Computing (Volume 12, Issue 2)
DOI 10.11648/j.wcmc.20251202.12
Page(s) 72-80
Creative Commons

This is an Open Access article, distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution and reproduction in any medium or format, provided the original work is properly cited.

Copyright

Copyright © The Author(s), 2025. Published by Science Publishing Group

Keywords

Radar-based Gesture Recognition, Support Vector Machine, Contactless Interface, Signal Processing, Human-computer Interaction

1. Introduction
In an era where technology strives to understand us better, gesture recognition has emerged as a bridge between human movement and machine interaction. Imagine controlling your smart home with a simple wave of your hand, navigating a car’s infotainment system without touch, or even assisting individuals with limited mobility, all without the need for cameras or physical contact. This is the promise of radar-based gesture recognition, a field that blends advanced sensing with artificial intelligence to interpret human motion seamlessly.
Traditional gesture recognition systems often rely on cameras, which struggle in low light, suffer from occlusion, and raise privacy concerns. Radar, however, offers a compelling alternative. By emitting and analyzing radio waves, radar can detect subtle movements with high precision, even through obstacles, without recording identifiable visuals. This makes it ideal for applications where reliability, discretion, and efficiency matter, from gaming and virtual reality to healthcare and industrial automation.
Yet, turning raw radar signals into meaningful gestures is no simple task. The challenge lies in distinguishing intentional movements from noise, extracting meaningful patterns, and training models to recognize diverse gestures accurately. Previous research, such as work on crowd tracking with 97.3% accuracy and a deep learning approach for digit recognition (92.6% accuracy), shows promise but also highlights the need for adaptable, real-world solutions.
This study explores how signal processing and machine learning can refine radar-based gesture recognition. By filtering noise, isolating key features like Doppler shifts and signal energy, and employing optimized classifiers like Support Vector Machines (SVM), we aim to create a system that is not just accurate but also practical for everyday use. Our goal? To make technology respond to human motion as naturally as we interact with each other: intuitively, effortlessly, and invisibly.
2. Literature Review
Gesture recognition is a key technique for advancing contactless human-computer interaction. Although traditional vision-based systems are effective, they face challenges such as occlusion, sensitivity to lighting conditions, and privacy concerns. Radar-based sensing has gained prominence due to its low power consumption, robustness, and adaptability to different operating environments.
Several authors have applied radar to gesture recognition and related sensing problems. The work Three-Dimensional Radar Imaging of Structures and Craters in the Martian Polar Caps employed ground-penetrating radar (GPR) to investigate subsurface formations on Mars. Though not human gesture-related, the study is relevant for its radar imaging techniques. Using the SHARAD radar system aboard the Mars Reconnaissance Orbiter, the authors achieved vertical resolutions of up to 15 m and identified internal layers and impact craters with high precision. This capability reflects the advanced imaging potential of radar when applied to complex geological targets, showcasing techniques that may inspire signal processing methods in gesture recognition systems.
Another study utilized impulse radio ultra-wideband (IR-UWB) radar for tracking people crossing a defined area in both directions. Signal processing included noise reduction and motion detection algorithms with time gating. The quantitative analysis showed that the system could detect individuals with 97.3% accuracy, even in crowded environments. The radar setup proved superior to camera-based systems, especially under low light and occlusion conditions, reinforcing radar’s robustness in gesture and motion sensing contexts.
A 60 GHz short-range mmWave radar has also been used to detect human occupancy. That system integrated frequency-modulated continuous wave (FMCW) radar with digital signal processing for motion detection. The findings included a detection accuracy of 95% within a range of 2.5 meters, and the radar could differentiate between static and dynamic occupancy states. The compact system and low power requirement highlighted its feasibility for real-time gesture monitoring.
In Detecting Mid-Air Gestures for Digit Writing with Radio Sensors and a CNN, the authors employed radar sensors combined with a convolutional neural network (CNN) to recognize hand-written digits in mid-air. The system captured gesture trajectories via Doppler shifts and time-frequency analysis before feeding the features into the CNN. In quantitative evaluations, the system achieved a gesture classification accuracy of 92.6% on a set of predefined digit-writing gestures. This study demonstrated that radar, when paired with deep learning algorithms, can effectively interpret fine-grained gestures without requiring line-of-sight conditions.
Overview of Gesture Recognition Technologies
The field of gesture recognition represents a fascinating intersection of human-computer interaction, signal processing, and artificial intelligence, seeking to interpret human movements as meaningful inputs for machines. Gesture recognition technology is fundamentally about enabling machines to understand and respond to the natural way humans communicate non-verbally: through hand movements, facial expressions, body posture, or other motions. This transformative capability has the potential to revolutionize how people interact with computers and smart devices, making interfaces more intuitive, immersive, and seamless.
Figure 1. Gesture Extraction Schematic Diagram.
3. Methods
3.1. Energy of the Radar Signal Window
Energy E of a signal window indicates the power of gesture reflection, useful for distinguishing between dynamic and static hand movements.
E = \sum_{n=1}^{N} |x[n]|^2    (1)
E: signal energy
x[n]: signal samples
N: number of samples in window
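As an illustration, Eq. (1) can be computed with a sliding window in NumPy. The 100-sample window length matches the one used later in Section 4.3; the two test signals are hypothetical stand-ins for a static hand and a dynamic gesture.

```python
import numpy as np

def window_energy(x, win=100):
    """Energy E = sum of |x[n]|^2 over each non-overlapping window (Eq. 1)."""
    x = np.asarray(x, dtype=float)
    return np.array([np.sum(np.abs(x[i:i + win]) ** 2)
                     for i in range(0, len(x) - win + 1, win)])

# Illustrative signals: a weak static reflection vs. a dynamic gesture.
static = 0.05 * np.ones(300)                               # near-constant return
gesture = np.sin(2 * np.pi * 10 * np.linspace(0, 3, 300))  # 10 Hz Doppler-like tone
print(window_energy(static))   # low, flat energy profile
print(window_energy(gesture))  # much higher energy in every window
```

Dynamic gestures concentrate far more energy per window than static reflections, which is exactly the cue exploited for distinguishing the two.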
3.2. Mean Doppler Frequency Shift
The average Doppler shift f̅D over a time window summarizes gesture velocity characteristics, aiding feature-based classification.
\bar{f}_D = \frac{1}{N} \sum_{n=1}^{N} f_D[n]    (2)
f̅D: mean Doppler shift
fD[n]: Doppler frequency at sample n
N: number of samples
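Eq. (2) is a simple average over per-sample Doppler estimates. In this sketch the 7.73 Hz centre value and the 0.1 Hz noise level are illustrative, chosen only to mirror the 7.6-7.9 Hz band reported in Section 4.4.

```python
import numpy as np

# Mean Doppler shift over a window of per-sample Doppler estimates (Eq. 2).
rng = np.random.default_rng(0)
f_d = 7.73 + 0.1 * rng.standard_normal(500)  # simulated noisy Doppler samples
mean_doppler = f_d.mean()                    # (1/N) * sum of f_D[n]
print(f"{mean_doppler:.2f} Hz")
```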
Figure 2. Schematic Diagram of Gesture Recognition.
Figure 3. Circuit Diagram of the Gesture Recognition System.
Table 1. Data for Gesture Recognition Simulation.

| Data Representation and Dimensions | Algorithmic Details | Frequency | No. of Gestures | Distance Between Hand and Sensor | Participants | Samples per Gesture | Number of Radars |
|---|---|---|---|---|---|---|---|
| N/A | Hardware only, no algorithm proposed | 94 GHz | N/A | Not mentioned | Tested only | Hand tracking only | 1 |
| Time-Range (2D) | SVM | 7.29 GHz | 5 | 0-1 m | 1 | 500 | 1 |
| Time-Amplitude (1D) | Conditional statements | 6.8 GHz | 6 | 1 m | 1 | 50 | 1 |
| Time-Range (2D) | Neural Network | 6.8 GHz | 6 | Not specified | 1 | 10 s (unspecified) | 1 |
| Time-Doppler (3D-RGB) | DCNN | 5.8 GHz | 10 | 0.1 m | 1 | 500 | 1 |
| Time-Range (2D matrix) | K-means (Unsupervised clustering) | 6.8 GHz | 5 | ~1 m | 3 | 50 | 1 |
| Time-Amplitude (1D) | 1-D CNN | Not mentioned | 6 | 0.15 m | 5 | 81 | 1 |
| Time-Doppler (3D-RGB) | DCNN | 5.8 GHz | 7 | 0.1 m | 1 | 25 | 1 |
| Range-Doppler image (2D grayscale) | HMM (Active sensing) | 300 kHz | 7 | Not specified | 9 | 50 | 1 |
| Time-Range (2D grayscale) | Deep Learning | 7.29 GHz | 5 | 0.45 m | 3 | 100 | 1 |
| Time-Range envelope (1D) | DCNN | 60 GHz | 3 | 0.10-0.30 m | 2 | 180 | 1 |
| Range-RCS (1D) | Observing backscattered waves | 60 GHz | 3 | 0.25 m | Not specified | 1000 | 1 |
| Time-Range (2D grayscale) | Multiclass SVM | 7.29 GHz | 9 | < 0.5 m | 4 | 100 | 4 |
| Time-Range (2D grayscale) | CNN | 7.29 GHz | 10 | 0-1 m | 5 | 400 | 3 |
| Time-Range (3D-RGB) | GoogLeNet Framework | 7.29 GHz | 8 | 3-8 m | 3 | 100 | 1 & 2 |
| Time-Range (2D grayscale) | DCNN | 7.29 GHz | Drawing gesture | Not specified | 5 | Not specified | 4 |
| Time-Range (2D grayscale) | CNN | 7.29 GHz | Digit writing | 0-1 m | 3 | 300 | 4 |

3.3. Principal Component Analysis (PCA) for Dimensionality Reduction
PCA transforms the feature matrix X by subtracting mean μ and projecting onto principal components W, reducing feature dimensionality while preserving variance.
z = W^{T}(X - \mu)    (3)
z: reduced feature set
X: original feature matrix
W: eigenvector matrix (principal components)
μ: mean vector
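A minimal NumPy sketch of Eq. (3), assuming a hypothetical 6-dimensional radar feature matrix and keeping the three principal components with the largest variance:

```python
import numpy as np

# PCA: z = W^T (x - mu), with W the top eigenvectors of the feature covariance.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))   # 200 gesture samples x 6 features (illustrative)
mu = X.mean(axis=0)
cov = np.cov(X - mu, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
W = eigvecs[:, order[:3]]           # top-3 principal components
Z = (X - mu) @ W                    # reduced feature set
print(Z.shape)                      # (200, 3)
```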
3.4. Spectral Centroid (Frequency Feature)
The spectral centroid C indicates the "center of mass" of the signal spectrum, capturing where most signal energy concentrates in frequency, useful for differentiating gestures.
C = \frac{\sum_{k=1}^{K} f_k \, |X(f_k)|}{\sum_{k=1}^{K} |X(f_k)|}    (4)
C: spectral centroid
fk: frequency bin k
|X(fk)|: magnitude of spectrum at fk
K: total frequency bins
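Eq. (4) over a one-sided FFT spectrum can be sketched as below; the 200 Hz sampling rate and the two test tones are assumptions for illustration, not values from the paper.

```python
import numpy as np

def spectral_centroid(x, fs):
    """C = sum(f_k * |X(f_k)|) / sum(|X(f_k)|) over the one-sided spectrum (Eq. 4)."""
    mag = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1 / fs)
    return np.sum(freqs * mag) / np.sum(mag)

fs = 200  # assumed sampling rate, Hz
t = np.arange(0, 1, 1 / fs)
slow = np.sin(2 * np.pi * 8 * t)    # slow gesture: energy near 8 Hz
fast = np.sin(2 * np.pi * 40 * t)   # fast gesture: energy near 40 Hz
print(spectral_centroid(slow, fs), spectral_centroid(fast, fs))
```

A faster gesture shifts spectral energy upward, so its centroid is higher, which is what makes this feature discriminative.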
3.5. SVM Optimization Objective
The SVM training minimizes the norm of the weight vector w to maximize the margin, while penalizing misclassification errors ξi through the regularization parameter C.
L(w, b, \xi_i) = \frac{1}{2}\|w\|^2 + C \sum_{i=1}^{N} \xi_i    (5)
L(w,b, ξi): The loss or objective function to minimize
w: weight vector of the hyperplane
b: bias term
ξi: slack variables for misclassification
C: penalty parameter
N: number of training samples
\|w\|^2 is the sum of squares of all elements in the weight vector.
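To make the role of C concrete, the sketch below evaluates Eq. (5) for a hypothetical solution: as C grows, the slack (misclassification) term dominates the objective, pushing training toward fewer violations at the cost of a narrower margin.

```python
import numpy as np

w = np.array([0.8, -0.6])        # hypothetical hyperplane weights
xi = np.array([0.0, 0.3, 1.2])   # slack variables: two margin violations
for C in (0.1, 1.0, 10.0):
    L = 0.5 * np.dot(w, w) + C * xi.sum()   # Eq. (5)
    print(f"C={C}: L={L:.2f}")
```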
3.6. SVM Decision Function
This function classifies a new feature vector x based on the support vectors x_i, their Lagrange multipliers α_i and labels y_i, the kernel function K, and the bias b.
f(x) = \mathrm{sign}\!\left( \sum_{i=1}^{N} \alpha_i y_i K(x_i, x) + b \right)    (6)
f(x): classification output (+1 or -1)
αi: Lagrange multipliers
yi: training labels
K(xi, x) is the kernel function measuring similarity between the support vector xi and input x.
x_i: support vector
x: input feature vector
b: bias
3.7. Radial Basis Function (RBF) Kernel
The RBF kernel maps input data into infinite-dimensional space allowing nonlinear separation. Parameter γ controls kernel width.
K(x_i, x_j) = \exp\!\left(-\gamma \|x_i - x_j\|^2\right)    (7)
xi,xj: input vectors
γ: kernel parameter controlling the width/smoothness
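Eqs. (6) and (7) can be combined into a tiny hand-rolled decision rule. The two support vectors, multipliers, and bias below are hypothetical, chosen only to show the sign mechanics; a trained SVM would learn these from data.

```python
import numpy as np

def rbf_kernel(xi, xj, gamma=0.5):
    """K(xi, xj) = exp(-gamma * ||xi - xj||^2) (Eq. 7)."""
    return np.exp(-gamma * np.sum((np.asarray(xi) - np.asarray(xj)) ** 2))

def decision(x, support_vectors, alphas, labels, b, gamma=0.5):
    """f(x) = sign(sum of alpha_i * y_i * K(x_i, x) + b) (Eq. 6)."""
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return int(np.sign(s + b))

# Hypothetical trained parameters: two support vectors of opposite class.
svs = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
alphas, labels, b = [1.0, 1.0], [+1, -1], 0.0
print(decision([0.2, 0.1], svs, alphas, labels, b))  # → 1  (near the +1 SV)
print(decision([1.9, 2.1], svs, alphas, labels, b))  # → -1 (near the -1 SV)
```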
4. Results and Discussion
4.1. Bandpass Filtered Signal
In Figure 4, the radar signal is shown after being filtered through a bandpass filter ranging from 5 Hz to 50 Hz. The resulting waveform is noticeably cleaner than the raw signal, with most high-frequency noise suppressed. The filtered signal amplitude fluctuates between approximately -1 and +1 arbitrary units. This filtering focuses on the frequency range relevant to gesture-induced Doppler shifts while eliminating irrelevant low-frequency drift and high-frequency noise. The result is a more interpretable signal suitable for gesture detection and further feature extraction.
Figure 4. Bandpass Filtered Signal.
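A sketch of this preprocessing stage, assuming a 4th-order Butterworth design and a hypothetical 200 Hz sampling rate (the paper specifies only the 5-50 Hz passband, not the filter type or sampling rate):

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200.0                                   # assumed sampling rate, Hz
b, a = butter(4, [5.0, 50.0], btype="bandpass", fs=fs)

t = np.arange(0, 2, 1 / fs)
raw = (np.sin(2 * np.pi * 1 * t)             # low-frequency drift (rejected)
       + np.sin(2 * np.pi * 20 * t)          # gesture-band component (kept)
       + 0.3 * np.sin(2 * np.pi * 80 * t))   # high-frequency noise (rejected)
filtered = filtfilt(b, a, raw)               # zero-phase bandpass filtering
```

After filtering, the spectrum is dominated by the in-band 20 Hz tone, mirroring the cleaner waveform shown in Figure 4.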
4.2. Normalized Radar Signal x_{norm}(t)
Figure 5 illustrates the normalized radar signal, where the original signal has been adjusted to have zero mean and unit variance. The amplitude now lies predominantly between -3 and +3 normalized units. This standardization ensures that all features in the signal are evaluated on the same scale, which is crucial for machine learning algorithms or threshold-based detectors. The waveform remains complex and exhibits the underlying structure of the original signal but with statistical uniformity, making it suitable for comparative analysis across datasets or conditions.
Figure 5. Normalized Radar Signal.
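The normalization step is a standard z-score; the raw-signal mean and spread below are illustrative values, not measurements from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
x = 3.0 + 0.5 * rng.standard_normal(1000)   # simulated raw radar samples
x_norm = (x - x.mean()) / x.std()           # zero mean, unit variance
print(abs(round(x_norm.mean(), 6)), round(x_norm.std(), 6))  # → 0.0 1.0
```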
4.3. Signal Energy over Time
In Figure 6, the signal energy E[n] is calculated using a sliding window of 100 samples. The plot shows a gradually increasing trend in energy with localized peaks, reaching values up to about 5 arbitrary units. These peaks correspond to periods when the radar signal strength is higher, potentially indicating the presence or motion of a target. Energy analysis like this helps detect events or transitions in time without relying solely on amplitude or frequency. It offers a cumulative view of how strong the signal is over short time intervals.
Figure 6. Signal Energy.
4.4. Mean Doppler Frequency Shift over Time
Figure 7 shows the mean Doppler frequency shift as it varies slightly over time due to simulated noise added to the Doppler samples. Although the average remains close to 7.73 Hz, it fluctuates within a narrow band from approximately 7.6 Hz to 7.9 Hz. These slight deviations reflect real-world imperfections and environmental influences on radar measurements. Tracking this average Doppler shift helps estimate target speed more accurately and indicates whether the motion is consistent or experiencing subtle changes. Such analysis is crucial in real-time gesture or motion tracking systems.
Figure 7. Mean Doppler Frequency Shift over Time.
4.5. Optimization Performance: Accuracy Comparison
Figure 8. Optimization Accuracy Comparison.
Figure 8 presents the accuracy of the SVM and KNN classifiers across varying values of the BoxConstraint parameter for SVM and the number of neighbors K for KNN. The SVM model shows steadily increasing accuracy, reaching its peak of 86% at BoxConstraint = 10, indicating that a higher BoxConstraint leads to better classification on this dataset. In contrast, the KNN model achieves its highest accuracy of 82% at k = 1 but then gradually decreases to 76% at k = 10, suggesting sensitivity to the number of neighbors. This plot confirms that the SVM classifier offers more robust and scalable performance as its parameter increases.
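The sweep in Figure 8 can be reproduced in spirit with scikit-learn on synthetic stand-in features (the paper's radar dataset is not public); MATLAB's BoxConstraint corresponds to the C parameter of sklearn's SVC.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for the extracted radar features (3 features per gesture).
X, y = make_classification(n_samples=400, n_features=3, n_informative=3,
                           n_redundant=0, random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

for C in (0.1, 1, 10):
    acc = SVC(kernel="rbf", C=C).fit(Xtr, ytr).score(Xte, yte)
    print(f"SVM  C={C}:  accuracy={acc:.2f}")
for k in (1, 5, 10):
    acc = KNeighborsClassifier(n_neighbors=k).fit(Xtr, ytr).score(Xte, yte)
    print(f"KNN  k={k}:  accuracy={acc:.2f}")
```

The exact numbers depend on the data, but the same loop structure yields the accuracy-versus-parameter curves plotted in Figure 8.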
4.6. Optimization Performance: F1-Score Comparison
Figure 9 displays the F1 scores for the SVM and KNN models, again tested over increasing values of their respective parameters. The SVM model begins at a lower score of approximately 0.78 but climbs steadily to reach 0.89 at BoxConstraint = 10, reflecting a strong balance of precision and recall. Meanwhile, the KNN classifier starts strong at 0.84 for k = 1, but its F1 score declines to about 0.73 by k = 10. This shows that while KNN performs well at low neighbor counts, its classification consistency drops with more neighbors. In contrast, SVM maintains and improves F1 performance as the complexity parameter increases.
Figure 9. Optimization Performance F1 Score Comparison.
5. Conclusions
This study demonstrates the effectiveness of radar-based gesture recognition as a robust alternative to traditional vision-based systems, particularly in challenging environments with low visibility or occlusions. By employing advanced signal processing techniques, including bandpass filtering, normalization, and feature extraction (signal energy, mean Doppler shift, and spectral centroid), we successfully enhanced the discriminative quality of radar data for gesture classification. The optimized Support Vector Machine (SVM) classifier, utilizing a radial basis function (RBF) kernel, achieved an accuracy of 86% and an F1-score of 0.89, outperforming the K-nearest neighbors (KNN) approach, which peaked at 82% accuracy but degraded with increasing neighbors.
The findings align with prior research, such as 97.3% detection accuracy in crowd tracking and 92.6% accuracy in digit recognition using deep learning, reinforcing radar’s potential in fine-grained gesture interpretation. Key advantages include privacy preservation (no optical recording), low power consumption, and real-time processing capabilities.
Abbreviations

SVM

Support Vector Machine

RBF

Radial Basis Function

KNN

K-Nearest Neighbors

IR-UWB

Impulse Radio Ultra-Wideband

GPR

Ground-penetrating Radar

CNN

Convolutional Neural Network

SHARAD

SHAllow RADar

FMCW

Frequency-modulated Continuous Wave

PCA

Principal Component Analysis

LO

Low-frequency Oscillator

PA

Power Amplifier

ADC

Analogue-to-Digital Converter

Tx

Transmitter

Rx

Receiver

Author Contributions
Friday Oodee Philip-Kpae: Conceptualization, Formal Analysis, Methodology, Project administration, Resources, Supervision, Writing - original draft, Writing - review & editing
Nwazor Nkolika: Conceptualization, Formal Analysis, Methodology, Project administration, Resources, Software, Validation, Writing - original draft, Writing - review & editing
Ogbondamati Lloyd Endurance: Conceptualization, Formal Analysis, Investigation, Methodology, Project administration, Software, Supervision, Writing - original draft, Writing - review & editing
Nwinuaka, Berebari Jude: Conceptualization, Formal Analysis, Methodology, Resources, Writing - original draft, Writing - review & editing
Funding
The authors received no funding from any external source.
Conflicts of Interest
The authors declare no conflicts of interest.
Acknowledgments
The authors are grateful to Rivers State University and the University of Port Harcourt, both in Rivers State, for providing the platform through which they gained the experience that made this research possible.
References
[1] J. Lien et al., "Soli: Ubiquitous gesture sensing with millimeter wave radar," ACM Trans. Graph., vol. 35, no. 4, pp. 1-19, 2016.
[2] C. Liu, Y. Li, D. Ao, and H. Tian, "Spectrum-based hand gesture recognition using millimeter-wave radar parameter measurements," IEEE Access, vol. 7, pp. 79147-79158, 2019.
[3] E. Miller et al., "RadSense: Enabling one hand and no hands interaction for sterile manipulation of medical images using Doppler radar," Smart Health, vol. 15, Art. no. 100089, 2020.
[4] S. Mitra and T. Acharya, "Gesture recognition: A survey," IEEE Trans. Syst., Man, Cybern. C, vol. 37, no. 3, pp. 311-324, 2007.
[5] J. A. Nanzer, "A review of microwave wireless techniques for human presence detection and classification," IEEE Trans. Microw. Theory Techn., vol. 65, no. 5, pp. 1780-1794, 2017.
[6] S. Pisa et al., "A double-sideband continuous-wave radar sensor for carotid wall movement detection," IEEE Sensors J., vol. 18, no. 20, pp. 8162-8171, 2018.
[7] N. E. Putzig et al., "Three-dimensional radar imaging of structures and craters in the Martian polar caps," Icarus, vol. 308, pp. 138-147, 2018.
[8] S. S. Rautaray and A. Agrawal, "Vision based hand gesture recognition for human computer interaction: A survey," Artif. Intell. Rev., vol. 43, no. 1, pp. 1-54, 2015.
[9] A. Santra, R. V. Ulaganathan, and T. Finke, "Short-range millimetric-wave radar system for occupancy sensing application," IEEE Sensors Lett., vol. 2, no. 1, pp. 1-4, 2018.
[10] S. Skaria, A. Al-Hourani, M. Lech, and R. J. Evans, "Hand-gesture recognition using two-antenna Doppler radar with deep convolutional neural networks," IEEE Sensors J., vol. 19, no. 8, pp. 3041-3048, 2019.
[11] N. T. P. Van et al., "Microwave radar sensing systems for search and rescue purposes," Sensors, vol. 19, no. 13, Art. no. 2879, 2019.
[12] J. P. Wachs et al., "A gesture-based tool for sterile browsing of radiology images," J. Amer. Med. Inform. Assoc., vol. 15, no. 3, pp. 321-323, 2008.
[13] Y. Wang, S. Wang, M. Zhou, Q. Jiang, and Z. Tian, "TS-I3D based hand gesture recognition method with radar sensor," IEEE Access, vol. 7, pp. 22902-22913, 2019.
[14] H.-S. Yeo, B.-G. Lee, and H. Lim, "Hand tracking and gesture recognition system for human-computer interaction using low-cost hardware," Multimedia Tools Appl., vol. 74, no. 7, pp. 2687-2715, 2015.
[15] Z. Zhang, Z. Tian, and M. Zhou, "Latern: Dynamic continuous hand gesture recognition using FMCW radar sensor," IEEE Sensors J., vol. 18, no. 8, pp. 3278-3289, 2018.
[16] S. Ahmed, K. D. Kallu, S. Ahmed, and S. H. Cho, "Hand gestures recognition using radar sensors for human-computer interaction: A review," Remote Sensing, vol. 13, no. 3, Art. no. 527, 2021.