Advancing Accessibility with AI: A Deep Learning Model for Recognizing Finnish Sign Language
Van Eynde, Michiel (2024)
All rights reserved. This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of this publication is
https://urn.fi/URN:NBN:fi:amk-202501151371
Abstract
The study addressed the challenge of translating Finnish Sign Language into written text through an artificial intelligence (AI) model, aiming to enhance communication accessibility. The primary goal was to develop a real-time hand gesture recognition system that could accurately interpret sign language gestures
using machine learning techniques. Key tasks included designing a robust detection system to capture hand
movements, developing a translation model, and optimizing the system for minimal processing delay.
The implementation involved using a convolutional neural network (CNN) based on the VGG16 architecture, leveraging transfer learning to adapt pre-trained weights for Finnish Sign Language recognition. Hand
gestures were captured and processed using OpenCV and the CVzone Hand Tracking module, with preprocessing ensuring a uniform structure and accuracy in real-time gesture classification. A dedicated dataset of
Finnish Sign Language images was created, annotated, and used to train the model.
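The transfer-learning setup described above can be sketched in Keras roughly as follows. The class count, head layers, and hyperparameters here are illustrative assumptions, not the thesis's actual configuration, and in practice `weights="imagenet"` would be passed so the pre-trained convolutional weights are reused:

```python
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

# Assumption: one class per static letter of the Finnish manual alphabet.
NUM_CLASSES = 24

# Load VGG16 without its classification head; weights="imagenet" would be
# used in practice (weights=None here avoids the large download).
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional layers

# Attach a small trainable classification head for the new gesture classes.
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Freezing the base network means only the new head is trained, which is what makes transfer learning feasible on a small, purpose-built gesture dataset.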
Results showed high accuracy in recognizing static hand gestures from the Finnish alphabet, indicating that the approach met the project's aims. Preprocessing and real-time optimization contributed significantly to the model's performance, while challenges such as mislabeling due to dataset constraints highlighted the need for a larger and more diverse dataset.
In conclusion, this project established a foundation for a practical, accessible tool to bridge communication barriers. Further development could extend the model to dynamic gestures and facial recognition, enabling broader applications in real-time sign language translation.