
MINI PROJECT

Real-Time Sign Language Detection


[An application that converts sign language into speech and text]

TEAM 13:
11. ANGELINE LIZBETH SHAJI
59. SWETHA C
09. AMISHA RAJ T
55. SANATH KRISHNAN P
07. ALAN SHAJU

INTRODUCTION:
In today's rapidly globalizing world, inclusivity and accessibility have become
foundational pillars of modern education and communication. As society becomes
increasingly interconnected, the need to bridge communication gaps among diverse
groups has never been more critical. This is particularly true for individuals who rely
on sign language as their primary means of communication. Recognizing this pressing
need, our team has developed an innovative application for real-time Sign Language
gesture recognition and translation into text and speech. This project represents a
significant step forward in creating a more inclusive environment for the deaf and
hard-of-hearing community.
The application leverages state-of-the-art technologies in computer vision,
specifically a Convolutional Neural Network (CNN) model, to ensure accurate and
efficient translation of sign language gestures. Designed with user-centric principles,
the platform supports real-time interaction, enabling seamless communication
between sign language users and those unfamiliar with it. By converting gestures
into both textual and spoken outputs, the application transcends traditional
communication barriers, fostering greater inclusivity in academic, professional, and
social settings.
Our project aligns with global initiatives such as the United Nations Sustainable
Development Goals (SDGs), particularly SDG 4 (Quality Education) and SDG 10
(Reduced Inequalities). By providing a tool that enhances accessibility to education
and communication, we directly support SDG 4, ensuring that individuals with
hearing impairments can participate fully in learning environments. Furthermore, the
application addresses SDG 10 by reducing inequalities stemming from language
barriers, empowering individuals and promoting equal opportunities for all.

ABSTRACT
Project Title: Real-Time Sign Language Detection and Conversion to Text and Speech

Objective: This system is an advanced application for the real-time translation of sign
language hand gestures into text and speech, addressing the critical communication
barriers faced by the deaf and hard-of-hearing community. Key modules include user
profile and authentication, gesture recognition powered by a Convolutional Neural
Network (CNN) model, real-time text and speech translation for recognized gestures,
and a feature to add custom gestures for new words to enhance flexibility and
adaptability. Additionally, the application supports a user-friendly interface designed
for seamless interaction, making it accessible for a wide range of users, including
educators, students, and professionals. By bridging the gap between sign language
users and non-signers, this system contributes significantly to building a more
inclusive and equitable society.
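
To make the recognition module concrete, the following is a minimal Keras sketch of a CNN gesture classifier of the kind described above. The input shape, layer sizes, and number of classes are illustrative assumptions; the report does not specify the actual architecture.

    # Minimal CNN sketch for static-gesture classification (illustrative only).
    import tensorflow as tf
    from tensorflow.keras import layers, models

    NUM_CLASSES = 26  # assumption: one class per fingerspelling letter

    model = models.Sequential([
        layers.Input(shape=(64, 64, 1)),          # assumption: 64x64 grayscale hand crops
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                      # regularization to limit overfitting
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

The softmax output gives a probability per gesture class; the highest-probability label is what the application would display and speak.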

Functional Requirements
• Recognize predefined hand gestures.
• Allow users to add custom gestures.
• Accept video feed from a camera.
• Display recognized gesture labels or actions (see the capture-loop sketch after this list).
• Convert gestures to audio/text (e.g., for accessibility).
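
The sketch below illustrates the camera-feed and label-display requirements, assuming the default webcam. predict_gesture is a hypothetical placeholder for the CNN inference step, not a function named in the report.

    # Minimal OpenCV capture-and-display loop (sketch, not the project's actual code).
    import cv2

    def predict_gesture(frame):
        # Hypothetical placeholder: preprocess the frame and run the trained CNN here.
        return "HELLO"

    cap = cv2.VideoCapture(0)                     # default webcam
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        label = predict_gesture(frame)
        # Overlay the recognized label on the live feed.
        cv2.putText(frame, label, (10, 40), cv2.FONT_HERSHEY_SIMPLEX,
                    1.2, (0, 255, 0), 2)
        cv2.imshow("Sign Language Detection", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):     # press 'q' to quit
            break
    cap.release()
    cv2.destroyAllWindows()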

Non-functional Requirements
• Instant Gesture Recognition: The system should recognize hand gestures and
translate them into text or speech in real time with minimal lag, processing camera
input and producing output (translated words or phrases) almost instantly (a
latency-check sketch follows this list).
• Efficient Data Flow: The video feed from the user’s camera should be captured,
processed, and translated with minimal delay. Ensure smooth data flow between the
camera capture, gesture detection, and translation modules without bottlenecks.
• Low Resource Consumption: Ensure the system consumes minimal computing
resources (e.g., CPU, RAM) to allow smooth performance without delays. If the app is
intended for mobile, it should be optimized to conserve battery life and network
bandwidth, particularly if cloud-based processing is involved.
• Simple Setup Process: The system should guide users through an easy setup
process, allowing them to quickly start translating sign language gestures. This
includes simple steps like selecting input devices (camera), configuring recognition
settings (e.g., sensitivity, accuracy), and testing the system with example gestures.
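
As a rough way to verify the real-time requirement, per-frame latency can be measured around the end-to-end pipeline. The 50 ms budget below (roughly 20 frames per second) is an assumed target, not a figure from the report, and process_frame is a hypothetical pipeline call.

    # Sketch: warn when a frame exceeds an assumed latency budget.
    import time

    def timed_process(process_frame, frame, budget_ms=50):
        start = time.perf_counter()
        result = process_frame(frame)             # capture -> detect -> translate
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > budget_ms:
            print(f"Warning: frame took {elapsed_ms:.1f} ms (budget {budget_ms} ms)")
        return result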

Software Requirements
• Frontend: HTML, CSS, JS, Tkinter GUI.
• Backend: Python, which offers one of the widest library collections available.
Technical feasibility is frequently the most difficult area encountered at this stage,
and Python's mature computer-vision and deep-learning libraries make this application
technically feasible.
• Operating System: Windows 8 and Above
• IDE: Visual Studio Code
• Programming Language: Python 3.9
• Python libraries: OpenCV, NumPy, Keras, MediaPipe, TensorFlow (a landmark-extraction sketch follows this list).
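
Since MediaPipe appears in the library list, one plausible arrangement (an assumption; the report does not say whether the CNN consumes raw pixels or landmarks) is to extract the 21 hand landmarks per frame and feed them to the classifier:

    # Sketch: extract a flat 63-value hand-landmark feature vector with MediaPipe.
    import cv2
    import mediapipe as mp
    import numpy as np

    hands = mp.solutions.hands.Hands(max_num_hands=1, min_detection_confidence=0.5)

    def landmarks_from_frame(frame_bgr):
        # Return a flat (63,) array of x, y, z coords for 21 landmarks, or None.
        rgb = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
        result = hands.process(rgb)
        if not result.multi_hand_landmarks:
            return None
        points = result.multi_hand_landmarks[0].landmark
        return np.array([[p.x, p.y, p.z] for p in points]).flatten()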

Hardware Requirements
• Webcam
