
SIGNBRIDGE- AUDIO TO SIGN LANGUAGE TRANSLATOR USING NLP

Project Report

On

SIGNBRIDGE- AUDIO TO SIGN LANGUAGE TRANSLATOR USING NLP
Submitted in partial fulfilment of the requirements for the award of the degree of

Bachelor of Technology
in

COMPUTER SCIENCE AND ENGINEERING (INTERNET OF THINGS)

By
CH. GAYATHRI (21N81A6904)

T. NIKITHA (21N81A6918)

B. RITHEESH REDDY (21N81A6922)

V. CHARAN TEJA (21N81A6926)

Under the guidance of

Mr. J. NARESH KUMAR, Assistant Professor

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


(CYBER SECURITY)
SPHOORTHY ENGINEERING COLLEGE
(Approved by AICTE and Affiliated to JNTUH)
Nadargul Village, Saroornagar Mandal, Hyderabad, Telangana - 501510.
Academic Year: 2024-2025

SPHOORTHY ENGINEERING COLLEGE


DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING


(CYBER SECURITY)

CERTIFICATE

The project entitled “SIGNBRIDGE- AUDIO TO SIGN LANGUAGE TRANSLATOR USING NLP” that has been submitted by CH. GAYATHRI (21N81A6904), T. NIKITHA (21N81A6918), B. RITHEESH REDDY (21N81A6922), and V. CHARAN TEJA (21N81A6926) in partial fulfilment of the award of Bachelor of Technology in Computer Science and Engineering (Cyber Security) to Jawaharlal Nehru Technological University Hyderabad is a record of bonafide work carried out under our guidance and supervision. The results embodied in this project have not been submitted to any other university or institute for the award of any degree. In my opinion, this report is of the standard required for the degree of Bachelor of Technology.

INTERNAL SUPERVISOR:              HOD:                              PRINCIPAL:
Mr. J. NARESH KUMAR,              Mr. G. Rakesh Reddy,              Dr. V. S. Giridhar Akula,
Assistant Professor               M.Tech, (Ph.D)                    M.Tech, Ph.D (CSE)

EXTERNAL EXAMINER:


DECLARATION

We, Ch. Gayathri (21N81A6904), T. Nikitha (21N81A6918), B. Ritheesh (21N81A6922), and V. Charan Teja (21N81A6926), students of the Fourth Year, Second Semester B.Tech in the Dept. of CSE (Cyber Security), Sphoorthy Engineering College, Hyderabad, hereby declare that the Project Stage-II titled “SIGNBRIDGE- AUDIO TO SIGN LANGUAGE TRANSLATOR USING NLP” has been carried out by us and submitted in partial fulfilment for the award of the degree of Bachelor of Technology in the Dept. of CSE (Cyber Security), Sphoorthy Engineering College, Hyderabad, during the academic year 2024-2025.

Date:

Place:

CH. GAYATHRI (21N81A6904)

T. NIKITHA (21N81A6918)

B. RITHEESH REDDY (21N81A6922)

V. CHARAN TEJA (21N81A6926)


ACKNOWLEDGEMENT

It is a great pleasure for us to acknowledge the assistance and support of many individuals who have
been responsible for the successful completion of this Project Stage-II.

First, we take this opportunity to express our sincere gratitude to the Dept. of CSE (Cyber Security), Sphoorthy Engineering College, Hyderabad, for providing us with a great opportunity to pursue our Bachelor’s degree in this institution.

We would like to thank Dr. V. S. Giridhar Akula, Principal, Sphoorthy Engineering College,
Hyderabad, for his constant encouragement and expert advice. It is a matter of immense pleasure to express
our sincere thanks to Mr. G. Rakesh Reddy, Head of the Department, Dept. of CSE (Cyber Security),
Sphoorthy Engineering College, Hyderabad, for providing the right academic guidance that made our task
possible.

We would like to thank our guide Mr. J. NARESH KUMAR, Assistant Professor, Dept. of CSE
(Cyber Security), Sphoorthy Engineering College, Hyderabad, for sparing his valuable time to extend help
in every step of our Major Project, which paved the way for smooth progress and the fruitful culmination of
the project.

We would like to thank our Project Coordinator, Mrs. P. Sandhya Rani, Assistant Professor, and all the staff members of the Dept. of CSE (Cyber Security), Sphoorthy Engineering College, Hyderabad, for their support.

We are also grateful to our family and friends who provided us with every requirement throughout
the course. We would like to thank one and all who directly or indirectly helped us in the Major Project.

Names of the students

1. Ch. Gayathri

2. T. Nikitha

3. B. Ritheesh Reddy

4. V. Charan Teja


Table of Contents

TABLE OF CONTENTS. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i-ii

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii-iv

Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . v

ABSTRACT . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . .

CHAPTER 1 Introduction . . . . . . . . . . . . . . . . . . . . . . . 1
1.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Background and Motivation . . . . . . . . . . . . . . . . . . . . . . 2
1.3 Objectives of the Project Work . . . . . . . . . . . . . . . . . . . 5
CHAPTER 2 Literature Survey . . . . . . . . . . . . . . . . . . . . 8
2.1 Audio to Sign Language Translator . . . . . . . . . . . . . . . . . 8
2.1.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.1.2 Research Gap . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.2 Sign Language Translation System Using Neural Networks . . . . 9
2.2.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.2.2 Research Gap . . . . . . . . . . . . . . . . . . . . . . . . . . 9
2.3 Serious Game for Sign Language . . . . . . . . . . . . . . . . . . . 10
2.3.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 10
2.3.2 Research Gap . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4 Real-Time Sign Language Translation System . . . . . . . . . . . 11
2.4.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.4.2 Research Gap . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5 Enhancing Sign Language Translation with Machine Learning . . 12
2.5.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.5.2 Research Gap . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6 Multimodal Sign Language Translation . . . . . . . . . . . . . . . 13
2.6.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . 13
2.6.2 Research Gap . . . . . . . . . . . . . . . . . . . . . . . . . . 14
CHAPTER 3 SYSTEM ANALYSIS . . . . . . . . . . . . . . . . . 15
3.1 Existing System . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.1.1 The ASL Workbench . . . . . . . . . . . . . . . . . . . . . 15


3.1.2 ViSiCAST Translator..................................................................16


3.2 Proposed System..........................................................................................17
3.2.1 INGIT..............................................................................................18
3.3 SOFTWARE AND HARDWARE REQUIREMENTS.........20
3.3.1 Software prerequisites....................................................................20
3.3.2 Hardware Requirements.................................................................20
3.4 FUNCTIONAL REQUIREMENTS..................................................20
3.5 NON-FUNCTIONAL REQUIREMENTS........................................22
CHAPTER 4 SYSTEM DESIGN.........................................................................23
4.1 Architecture of the System.........................................................................23
4.2 Work Flow....................................................................................................25
CHAPTER 5 METHODOLOGY.........................................................................30
5.1 Requirements and Analysis.........................................................................30
5.2 Design and Development............................................................................30
5.3 Design of Algorithms..................................................................................31
5.4 The Main Features......................................................................................31
CHAPTER 6 SPECIFICATIONS AND IMPLEMENTATION.........................33
6.1 SPECIFICATIONS...................................................................................33
6.1.1 Key Functions Tools.....................................................................33
6.1.2 Languages..........................................................................................35
6.2 IMPLEMENTATION..............................................................................37
6.2.1 Source Code.....................................................................................37
CHAPTER 7 RESULTS......................................................................................45
7.0.1 Accuracy:...........................................................................................46
CHAPTER 8 CONCLUSION.............................................................................50
CHAPTER 9 REFERENCES.............................................................................51


List of Figures

1.1 Structure of HamNoSys...............................................................................4


1.2 HamNoSys symbols and their descriptions...............................................5
1.3 Avatar..............................................................................................................7
1.4 Different Avatar..............................................................................................7

3.1 Architecture of Sign Synth System.........................................................15


3.2 Architecture of ASL Workbench.............................................................16
3.3 Architecture of ViSiCAST System.......................................................17
3.4 Architecture of Text to ISL Machine Translation...............................18
3.5 Structure of TESSA System...................................................................18
3.6 Architecture of INGIT................................................................................19

4.1 System architecture......................................................................................23


4.2 Output............................................................................................................24
4.3 Grammar Rules...........................................................................................25
4.4 Grammar Rules...........................................................................................25
4.5 Examples of Grammar...............................................................................26
4.6 Workflow.......................................................................................................28
4.7 Use case Diagram........................................................................................28
4.8 Class Diagram.............................................................................................28
4.9 Activity Diagram........................................................................................29

7.1 Home Page....................................................................................................45


7.2 Login Portal.................................................................................................45
7.3 Configuration for Avatar............................................................................46
7.4 Configuration for Avatar............................................................................46
7.5 OUTPUT 1................................................................................................47
7.6 OUTPUT 2................................................................................................47
7.7 OUTPUT 3................................................................................................48
7.8 OUTPUT 4................................................................................................48
7.9 OUTPUT 5................................................................................................49

9.1 Certificate of Presentation...........................................................................53


9.2 Certificate of Presentation...........................................................................54


9.3 Certificate of Presentation...........................................................................55
9.4 Paper Published............................................................................................56
9.5 Plagiarism Report........................................................................................57


Abbreviations

Abbreviation Description
SL Sign language

ASL American Sign Language

BSL British Sign Language

ISL Indian Sign Language

SiGML Signing Gesture Markup Language

HamNoSys Hamburg Notation System

NLP Natural Language Processing

XML Extensible Markup Language

RNN Recurrent Neural Network

DTW Dynamic Time Warping

CNNs Convolutional Neural Networks

LFG Lexical Functional Grammar

CMU Carnegie Mellon University

DRS Discourse Representation Structure

HDPSG Head-Driven Phrase Structure Grammar

FCG Fluid Construction Grammar

PDA Personal Digital Assistant

SVO Subject-Verb-Object

HTML5 HyperText Markup Language 5

CSS Cascading Style Sheets


Abstract

The absence of communication options between Deaf and hearing people creates a real social disadvantage in obtaining services that are at best limited and sometimes non-existent. Sign language uses body language and manual gestures to convey concepts freely, as opposed to acoustically conveyed sound patterns. Those who are mute, those who can hear but cannot speak, and hearing people may all use the application to connect with others who are hard of hearing. By automatically converting spoken words into hand gestures, this project creates a web-based interface that allows hearing-impaired individuals to communicate with hearing people in real time through sign language interpretation. The system works in two major phases. First, speech-to-text technology transcribes oral input into textual output. The text is then syntactically parsed against sign-language grammar rules using Natural Language Processing (NLP) algorithms built on the Natural Language Toolkit (NLTK). The last phase translates the parsed text into sign language gestures, involving hand shapes, orientation, and body movements, to convey the visual meaning of the message. By using Machine Learning (ML) to continuously improve accuracy, this system could greatly reduce the communication barriers experienced by people with hearing loss and deafness, enhance quality of life, and foster a more inclusive society that includes the deaf community in our daily interactions.

Keywords: Sign language, machine learning, hand gestures, speech-to-text, Natural Language Toolkit (NLTK), Natural Language Processing (NLP), Indian Sign Language.


CHAPTER 1

Introduction

1.1 Introduction
Communication is key to socializing, learning, and gaining access to important services. However, the limited number of hearing people who know sign language makes everyday life challenging for deaf individuals. Through sign language, the most common form of communication among the deaf, many people preserve their feeling of culture and community with other members; it employs hand forms, gestures, facial expressions, and body movement orientation to transmit meaning. Even though communication through sign language is very important, most of the hearing world cannot understand it, which creates a huge social gap rather than helping deaf people integrate into society. New web technologies, new machine learning capabilities, and burgeoning NLP techniques push the envelope on such involved problems. This project aims to create an online service that converts spoken English into sign language in order to let hearing (but non-signing) individuals and deaf people communicate. The software employs machine learning models to convert words into signs, together with speech-to-text APIs and natural language processing (NLP) to understand language. The system functions in two major phases. The first phase employs advanced voice recognition algorithms to record and translate audio input into text. The next step pre-processes that text with NLP techniques, built using the Natural Language Toolkit (NLTK), to understand sentence structure and convert it into the equivalent signs. Ultimately, the system provides real-time sign language output, indicating hand movements, orientation, and body alignment for the spoken words. Over time, the technology becomes more accurate and acts as


a communication bridge between deaf individuals and the hearing community in an effective manner. The unique real-time translation feature of this project promises a path-breaking step toward empowering the deaf community in communication from their perspective. Furthermore, it enhances community engagement and helps combat the isolation many Deaf individuals feel daily.

1.2 Background and Motivation

Sign language (SL), a naturally occurring visual-spatial language, uses body language, hand forms, and orientation to convey meaning through three-dimensional linguistic utterances as opposed to sound. Exchanges are made through motions of the hands and arms, facial expressions, and the upper body. Indian Sign Language developed among India's deaf and hard-of-hearing population. Naturally, different deaf populations throughout the world use different languages. Just as English, French, Urdu, and many other spoken languages exist, people with hearing loss around the world use a variety of sign languages and expressions. People use American Sign Language (ASL), British Sign Language (BSL), and Indian Sign Language (ISL) to express themselves and interact with one another, and numerous sign languages, including ASL and BSL, already have interactive systems built for them. There are around 5.07 million hearing-impaired persons in India. Of them, around 50 percent are between the ages of 20 and 60, while over 30 percent are under the age of 20. These individuals are often unable to communicate adequately through speech, so they usually use sign language to communicate with others. Sign languages are rarely recognized, or entirely rejected, outside the community that uses them, partly because their syntax and organization are not widely understood. Studies on American Sign Language indicate that sign language is a whole language with a unique syntax, grammar, and other linguistic features. The same is true for other sign languages, as demonstrated by work on Indian Sign Language. Research on ISL began in 1978, and it has since been shown that ISL is a full natural


language with its own syntax and grammar. Hearing-impaired people may find it especially difficult to communicate in public places such as train stations, bus stops, banks, and hospitals, because a hearing person may not understand the sign language used by the deaf person and, conversely, may not know sign language at all. In order to promote communication between the deaf and non-deaf populations, language translation is crucial. Six percent of India's population, or 63 million individuals, suffer significant hearing loss, according to the 2011 census. Of these people, 76 to 89 percent are Indian Deaf who cannot speak or write any language. One of the following might be the cause of the low literacy rate:

• Insufficient interpreters for Sign Language.

• The ISL tool is not available.

• Insufficient studies on ISL.

The deaf community finds it challenging to communicate in public settings including banks, hospitals, and railway stations because of this barrier. A technology that converts text to Indian Sign Language and vice versa is required to improve their communication with the outside world; such systems will improve the community's standard of living. There is still much to learn about sign languages, since they have not been explored as thoroughly as spoken languages.

The Role of HamNoSys in Sign Language Generation:


There are about 200 mostly iconic characters in the Stokoe-based notation system known as HamNoSys. These HamNoSys symbols are used to represent the hands, hand placement, shape, and orientation in relation to other body parts. Approximately three values are assigned to these parameters when writing signs. The symmetry operator offered by HamNoSys allows for a compact representation of two-handed signs. Non-manual phonological representation is possible by substituting the symbols representing body parts, such as the head, for hand graphemes; however, it can be difficult to depict facial expressions like puffed cheeks or raised eyebrows. Figure 1.1 shows that the parameters of a sign are stated as follows: symmetry operator, non-manual components, hand shape, hand position, hand location, and hand movement. A single HamNoSys string describes a sign's initial posture (hand shape, orientation, placement, and non-manual components) as well as the actions that change this posture either simultaneously or sequentially. For two-handed signs, the initial posture notation is preceded by the symmetry operator, which describes how the description of one hand is carried over to the other. As a result of its versatility, HamNoSys may be used to describe signs of any language, promoting linguistic independence: it creates visual phonetics of signs that are independent of any particular language. Animated signing avatars generate animation using the Signing Gesture Markup Language (SiGML), which is based on HamNoSys. The Audio to Sign Language Translator is a web-based program created for those who are hard of hearing or deaf; it converts English audio into Indian Sign Language. Simple English phrases are fed into the system, which produces an ISL-gloss, from which a Hamburg Notation System (HamNoSys) representation may then be created. The sign synthesis module receives signing instructions from the HamNoSys representation and creates an animated ISL representation for the user. ISL syntax is represented via dependency trees.

Figure 1.1: Structure of HamNoSys


Figure 1.2: HamNoSys symbols and their descriptions


Indian Sign Language Grammar:


Indian Sign Language, like other languages, has its own unique grammar. It does not depend on whether someone speaks Hindi or English, and it is not a manually represented form of spoken Hindi or English. It has unique and exceptional characteristics, such as:

• Every number is represented with the hand gesture suited to it. For instance, 45 is signed as the sign for four followed by the sign for five.

• The signs for ”male/man” and ”female/woman” come before the signs for familial ties.

• Interrogative phrases with terms like WHAT, WHERE, etc. are expressed by putting the question word at the end of the sentence.

• ISL includes non-manual gestures such as mouth patterns, motion, facial expression, body posture, head position, and eye gaze.

• Unlike English, ISL employs a Subject-Object-Verb word order.

1.3 Objectives of the Project Work

This project aims to enhance communication for those with hearing impairments by translating English into Indian Sign Language using natural language processing (NLP). The system covers three translation modes, described below: audio-to-text, audio-to-sign, and text-to-sign.


• Audio to Text: The first step is converting spoken English (audio) into written text using speech recognition technology. This technology employs algorithms and machine learning models to transcribe spoken words accurately. To recognize and separate words and phrases, the audio input is first captured and subsequently processed, using sophisticated models such as IBM's Watson Speech to Text or Google's Speech-to-Text. The transcribed text provides a visual representation of the spoken language, allowing individuals with hearing disabilities to understand the content through reading. This step is crucial as it bridges the gap between audio communication and text, forming the foundation for further translation into sign language. A minimal sketch of this step follows.
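The snippet below is a hedged sketch of this step using the open-source SpeechRecognition Python package with Google's free web recognizer; the microphone handling and error recovery are simplified assumptions, and a production system would use a full Speech-to-Text service as noted above.

# Minimal audio-to-text sketch (pip install SpeechRecognition pyaudio).
import speech_recognition as sr

def capture_speech_as_text() -> str:
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:
        # Calibrate against ambient noise, then record one utterance.
        recognizer.adjust_for_ambient_noise(source, duration=0.5)
        audio = recognizer.listen(source)
    try:
        # Sends the audio to Google's free web speech API.
        return recognizer.recognize_google(audio, language="en-IN")
    except sr.UnknownValueError:
        return ""  # Speech was unintelligible.

if __name__ == "__main__":
    print(capture_speech_as_text())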

• Audio to Sign: Converting audio to sign language involves two steps: first, audio is transformed into text by speech recognition, and then the text is translated into sign language. After converting speech to text, NLP techniques parse and understand the text's context, syntax, and semantics. The parsed text is then mapped to corresponding ISL signs using a comprehensive database of ISL signs and their English equivalents. The system must account for grammatical differences between English and ISL to ensure accuracy. The final output is displayed through animated avatars or videos of human sign language interpreters, providing a visual communication method for individuals with hearing disabilities.

• Text to Sign: NLP is used to convert textual content into ISL while preserving meaning. The system identifies key linguistic features such as nouns, verbs, and sentence structures. After parsing the text, it translates the content into ISL signs, considering the unique grammar and syntax of sign language. This process is complex due to structural differences between the languages; for example, ISL often uses a different word order and may omit words with no signed equivalent. The output is presented through animated sign language avatars or pre-recorded videos of sign language interpreters, ensuring that written content is accessible in a visual format and enhancing communication for individuals with hearing disabilities. An illustrative sketch of the text-analysis step follows.
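This hedged sketch uses NLTK to tokenize and part-of-speech-tag a sentence before mapping words to glosses; the tiny ISL_GLOSS dictionary is a hypothetical stand-in for the comprehensive sign database described above.

# Hypothetical text-to-gloss sketch using NLTK (pip install nltk).
import nltk
# One-time downloads:
# nltk.download("punkt"); nltk.download("averaged_perceptron_tagger")

ISL_GLOSS = {"you": "YOU", "name": "NAME", "what": "WHAT"}  # toy lexicon

def text_to_gloss(sentence):
    tokens = nltk.word_tokenize(sentence.lower())
    tagged = nltk.pos_tag(tokens)
    # Drop articles, linking verbs, and punctuation, which ISL omits.
    content = [w for w, tag in tagged if tag not in ("DT", "VBZ", "VBP", ".")]
    return [ISL_GLOSS.get(w, w.upper()) for w in content]

print(text_to_gloss("What is your name?"))  # word-order rules are applied later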


• Avatar: Avatars are described as ”virtual bodies” or ”digitally created humanoids.” They receive XML or SiGML text as input and generate animation appropriate to the content. The avatar receives a series of animation frame definitions as input; these describe the avatar's static poses, each with a time stamp indicating the moment at which the avatar will be positioned in that pose. Rendering software creates the sign animation according to these predetermined frame definitions as the avatar moves through the sequence of poses. Several avatars have been created for the purpose of generating sign animations; as seen in Figure 1.4, they are named ”Anna,” ”Marc,” and ”Francoise.”

Figure 1.3: Avatar

Figure 1.4: Different Avatar


CHAPTER 2

Literature Survey

To establish a solid foundation for the Audio to Sign Language Translator project, we conducted a comprehensive literature survey, analyzing key research papers focused on sign language translation. These studies shed light on current approaches, difficulties, and developments in speech-to-sign translation, particularly with regard to creating precise, real-time, and contextually appropriate sign language representations.

2.1 Audio to Sign Language Translator


2.1.1 Methodology
In the research, a system that combines voice recognition and Natural
Language Processing (NLP) techniques is described for converting spoken
English (audio) into Indian Sign Language (ISL). The process initiates
with the capture of audio input, which is then transcribed into text utilizing speech
recognition technology. Subsequently, the transcribed text undergoes parsing to
comprehend its syntax and semantics through NLP methods. Following this,
the parsed text is correlated with corresponding ISL signs utilizing a
comprehensive database housing ISL signs and their respective English text
representations. This intricate process facilitates the transformation of spoken
language into a visual form accessible to individuals proficient in ISL, thus
enhancing communication accessibility for individuals with hearing disabilities.

2.1.2 Research Gap


The primary research gap identified in this study is the limited focus on real-time processing and scalability within the developed system. Real-time processing, essential for swiftly translating spoken English into Indian Sign Language (ISL), is critical for seamless communication across diverse


contexts. However, the research indicates a lack of sufficient attention to this aspect, potentially hindering the system's utility, particularly in scenarios where prompt communication is imperative. Additionally, the system's ability to handle different dialects and regional variations of ISL is not extensively addressed, posing a challenge in accurately interpreting and conveying messages across diverse linguistic settings. This disparity emphasizes the necessity of an all-encompassing strategy that gives equal weight to scalability and real-time processing in order to improve accessibility and inclusion in sign language communication systems.

2.2 Sign Language Translation System Using Neural Networks

2.2.1 Methodology
The paper introduces a sign language translation system utilizing neural networks, primarily focusing on converting text into sign language. Trained on a substantial dataset of sign language videos, the system harnesses a recurrent neural network (RNN) for sequence prediction. By capturing the contextual and sequential features of language, the RNN architecture makes it easier to translate complex sentences into sign language expressions. The procedure ultimately produces animated avatars that visually depict the correct Indian Sign Language (ISL) signs. These avatars serve as the final output, effectively communicating the translated text in ISL format and enhancing accessibility for individuals with hearing disabilities.

2.2.2 Research Gap


The study emphasizes the necessity for significant computational resources, which
may serve as a barrier to widespread adoption of the developed system. This
requirement suggests that users may need access to powerful hardware or computing
infrastructure to effectively utilize the system, potentially limiting


its accessibility, particularly in resource-constrained environments. Moreover, the research underscores that the system's real-time performance requires improvement to enhance its practicality for everyday use. Real-time performance is crucial for ensuring seamless and timely communication, especially in dynamic contexts such as conversations or interactive settings. Addressing these issues would not only enhance the system's usability but also broaden its applicability across diverse user groups and usage scenarios.

2.3 Serious Game for Sign Language

2.3.1 Methodology
An algorithm for hand gesture detection using the Dynamic Time Warping
(DTW) approach is presented in this research. There are three key components to
the system architecture:

1. Identifying hand and facial areas in real time: This module focuses on
identifying and delineating the facial and hand regions within the input video stream,
enabling subsequent analysis and processing.

2. Trajectory tracking of hands: Once the hand regions are detected, the
system tracks their trajectory in terms of direction and distance from the
center of the frame. This tracking mechanism facilitates the understanding of
hand movements and gestures.

3. Gesture recognition: Leveraging the trajectory information obtained, the


system proceeds to recognize specific gestures performed by the user’s hands.
This step involves comparing the observed hand movements with predefined
gesture patterns to identify and classify the executed gestures accurately.

Moreover, the proposed approach is integrated into a serious game platform designed to teach users sign language. By embedding the gesture recognition algorithm within a gamified learning environment, the system aims to enhance user engagement and proficiency in sign language acquisition. This all-encompassing strategy not only makes gesture identification easier but also gives users an engaging and dynamic learning environment, advancing the larger objective of accessibility and sign language instruction.
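For reference, the following is a minimal sketch of the Dynamic Time Warping distance used for trajectory matching; it compares two hand trajectories given as sequences of 2-D points and is an illustrative implementation, not the code of the surveyed paper.

# Classic O(n*m) Dynamic Time Warping between two point sequences.
import math

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(a[i - 1], b[j - 1])  # Euclidean point distance
            # Extend the cheapest of the three admissible alignments.
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# Two toy hand trajectories with similar shape but different sampling.
path_a = [(0, 0), (1, 1), (2, 2), (3, 3)]
path_b = [(0, 0), (1, 1), (1.5, 1.5), (2, 2), (3, 3)]
print(dtw_distance(path_a, path_b))  # small value => similar gestures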


2.3.2 Research Gap


The main limitation identified in the study is the system's focus on static gestures, which overlooks the intricacies of dynamic and complex sentence structures inherent in sign language. Sign languages, like any other language, are rich in expression and encompass various dynamic movements and grammatical features essential for conveying meaning comprehensively. By primarily concentrating on static gestures, the system fails to capture the full richness and nuances of sign language communication, thereby limiting its effectiveness in facilitating meaningful interactions. Consequently, this restriction impedes the system's applicability for comprehensive sign language communication, as it may struggle to accurately convey the subtleties embedded within dynamic signing expressions and complex linguistic constructs. Addressing this limitation would require expanding the system's capabilities to encompass dynamic gestures and embrace the complexity of sign language grammar, thus enhancing its efficacy in facilitating more authentic and inclusive sign language communication experiences.

2.4 Real-Time Sign Language Translation System

2.4.1 Methodology
This study’s real-time sign language translation system relies on deep learning
models, namely, convolutional neural networks (CNNs), to identify movements.
CNNs excel in identifying patterns in visual data and are well- suited for image-
related tasks, making them a viable candidate for deciphering sign language. The
technology uses NLP to convert text into sign language. Using CNNs for
visual input and NLP for textual input, the system can reliably analyze spoken or
written language and transform it into sign language motions in real time.


2.4.2 Research Gap


The research highlights a significant gap concerning the lack of emphasis on user customization and adaptability to different sign language dialects within the described system. Sign languages exhibit substantial variations in signs, gestures, and grammatical structures across different regions and cultural contexts. However, the current system does not adequately address these variations, potentially limiting its usability and effectiveness in diverse linguistic and cultural settings. If user customization and dialectal adaptation are neglected, the system may fail to accommodate each user's particular requirements and preferences and may have trouble correctly deciphering and communicating messages in regional variations of sign language. Addressing this research gap would require incorporating mechanisms for user customization and accommodating dialectal variations within the system, thereby enhancing its inclusivity and usability across various linguistic and cultural contexts.

2.5 Enhancing Sign Language Translation with Machine Learning

2.5.1 Methodology
The study presents an approach that increases the accuracy of sign language translation from textual input by using machine learning algorithms. It involves integrating a substantial dataset of sign language videos, which serves as the foundation for training a model capable of accurately mapping text to corresponding sign language gestures. The methodology combines both supervised and unsupervised learning techniques to enhance the system's performance. By leveraging labeled data for supervised learning and exploring patterns within the data through unsupervised learning, the system aims to refine its translation capabilities and improve accuracy. With the ultimate objective of more effective communication accessibility for people with hearing difficulties, this thorough approach highlights the paper's attempt to advance sign language translation using machine learning.


2.5.2 Research Gap


The primary research gap identified in this study revolves around limited real-world testing and a lack of comprehensive user feedback. This gap implies that the effectiveness of the system in practical scenarios has not been thoroughly validated, and potential enhancements based on user experiences remain unexplored. Without extensive testing in real-world environments and solicitation of user feedback, the system's performance and usability in actual usage contexts remain uncertain. Addressing this research gap would necessitate conducting rigorous real-world testing and actively seeking user feedback to evaluate the system's effectiveness, identify areas for improvement, and refine the user experience. Iteratively refining the system to better suit the requirements and preferences of its target users, through the integration of user input and insights from real-world usage, would increase its usefulness and impact in promoting communication accessibility and sign language translation.

2.6 Multimodal Sign Language Translation

2.6.1 Methodology
The paper presents a methodology for a multimodal sign language translation system that integrates audio, text, and visual data to improve translation accuracy. Among the technologies used by the system are computer vision techniques for visual gesture interpretation, speech recognition for audio-to-text conversion, and Natural Language Processing (NLP) methods for textual data analysis and processing. The goal of the system is to improve the precision and dependability of sign language translation by utilizing data from many modalities. This thorough methodology enables the system to record and interpret verbal and gestural cues from several sources, producing translations that are more reliable and accurate. Integrating these modalities improves communication accessibility for those with hearing difficulties by converting spoken or written material into sign language more seamlessly and effectively.


2.6.2 Research Gap


The research identifies two significant gaps in the study of the multimodal sign language translation system. Firstly, it underscores the high computational demands of the system, which could pose a barrier to accessibility: the extensive computational resources required may limit access, particularly in environments with limited computing capabilities or for users without high-performance computing infrastructure. Additionally, the research highlights a lack of accessibility features catering to diverse user groups. This limitation implies that the system may not adequately address the specific needs and preferences of users from different demographic backgrounds, potentially hindering its usability and effectiveness across varied populations. Addressing these gaps would involve optimizing the system's computational efficiency and incorporating accessibility features tailored to the requirements of diverse user groups, thus enhancing its inclusivity and usability for individuals with hearing disabilities.


CHAPTER 3

SYSTEM ANALYSIS

3.1 Existing System

In India, there is a lack of good models for translating text to sign language, despite sign language's widespread usage among hearing- and speech-impaired individuals. Oral communication lacks adequate and effective audiovisual support. There has been minimal attempt to computerize ISL, despite significant breakthroughs in computer recognition of the sign languages of other countries. Few developed systems exist for Indian Sign Language, while much research has focused on American or British Sign Language.

The majority of the systems' underlying architectures are built on one of three approaches:

1. Direct translation converts terms from one language to another. The eventual result may differ from what was intended.

2. Statistical machine translation necessitates a sizable parallel corpus, which sign language does not have easy access to.

3. Grammar rules are used in a transfer-based architecture to provide proper translations between language systems.

Figure 3.1: Architecture of Sign Synth System

3.1.1 The ASL Workbench


ASL workbench It’s a computer translation system that transforms text into
American Sign Language. It examines the input text up to the f-structure
DEPT. Of CSE (CYBER SECURITY), SPHN, HYD
SIGNBRIDGE- AUDIO TO SIGN LANGUAGE TRANSLATOR USING NLP

using the LFG method. Some syntactic features from the input text are
abstracted by its representation, which replaces the text’s linguistic properties.
The architectural design of the ASL workstation is shown in Figure 2.4. The
Workbench system included rules for translating English f-structures into
ASL. Both the producing and transfer modules are operational. The
translation module accepts an LFG f structure in English as input. It is
transformed into an ASL f-structure by lexical selection and structural
correspondence. The ASL f-structure is input into the generating module.
According to the text, the generating module generates the c-structure and p-
structure in American Sign Language. If the lexical element is a noun, ASL
Workbench will fingerspell the term.

Figure 3.2: Architecture of ASL Workbench

The translator can generate an item in the ASL lexicon that corresponds to
the term, but translation fails if the element is not a noun. Additionally, if
required, it can retry the translation and establish an entry in the transfer lexicon.

3.1.2 ViSiCAST Translator


ViSiCAST, an English-to-BSL translation system, was created by Safar and Marshall (2001). It analyzes English using a semantic level of representation for BSL generation, and it involves researching the use of various technologies for sign language transmission. Figure 3.3 displays the ViSiCAST architecture. The method is easy to use: after entering English content into the system, the user can modify it to suit their needs. The CMU Link Grammar parser is then used to parse the entered text at the syntactic level. This parser creates a Discourse Representation Structure (DRS) as an intermediate semantic representation. Based on this representation, Head-Driven Phrase Structure Grammar defines the morphology and syntax of sign creation. Signs can be modified here and are shown as HamNoSys.

Figure 3.3: Architecture of ViSiCAST System

Following this, signing symbols are connected to SiGML, which expresses signing notations in XML format. 3D rendering software can readily interpret SiGML and plays the corresponding animation in response to the text input.

3.2 Proposed System

An automated system for translating text to ISL: The proposed system accepts English text as input and generates the signs that correspond to the content. The figure displays the system's architecture, which comprises four important components: input text preprocessing and parsing, LFG f-structure representation, transfer grammar rules with ISL sentence generation, and ISL synthesis. The parser accepts a basic English sentence as input; a simple sentence is one that contains only one primary verb. After processing the text, Minipar generates a dependency tree. Prior to processing, a phrase lookup table with around 350 phrases and temporal expressions is generated, and an English morphological analyzer helps identify the various forms of nouns. The LFG functional structure (f-structure) encodes the grammatical relations of the input sentence, including its syntactic and functional representation. This data is displayed as a collection of attribute-value pairs, where attributes represent the names of grammatical symbols and values represent their characteristics. The generation module applies transfer grammar rules to the source phrase to create the target structure. During the generation phase, two key processes occur: lexical selection and word order correspondence. Lexical selection is based on a bilingual lexicon (English-ISL); for example, the English word ”BREAKFAST” becomes ”MORNING FOOD” in ISL. ISL follows the subject-object-verb (SOV) word order. A sketch of the lexical-selection step is given after Figure 3.4.

Figure 3.4: Architecture of Text to ISL Machine Translation.
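As a hedged illustration of the lexical-selection step, the sketch below maps English words to ISL glosses through a tiny bilingual dictionary and falls back to letter-by-letter fingerspelling for unknown words; the dictionary entries are hypothetical examples, not the system's actual lexicon.

# Minimal lexical-selection sketch with a hypothetical English-ISL lexicon.
BILINGUAL_LEXICON = {
    "breakfast": ["MORNING", "FOOD"],  # example cited in the text above
    "i": ["ME"],
    "eat": ["EAT"],
}

def select_signs(words):
    glosses = []
    for word in words:
        entry = BILINGUAL_LEXICON.get(word.lower())
        if entry:
            glosses.extend(entry)
        else:
            # Unknown word: fall back to fingerspelling it letter by letter.
            glosses.extend(list(word.upper()))
    return glosses

print(select_signs(["I", "eat", "breakfast"]))  # ['ME', 'EAT', 'MORNING', 'FOOD']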

These approaches focus on analyzing individual words in the source-language string rather than analyzing the original syntax. In general, the word order in the sign language is similar to that of the English text, but the sequence of words may change when translating from English to Indian Sign Language. The system requires an advanced understanding of both English and the target sign language. This approach is used in systems such as TESSA and the SignSynth project.

Figure 3.5: Structure of TESSA System

3.2.1 INGIT
INGIT operates on a transcribed spoken Hindi text string. In FCG, a domain-specific Hindi construction grammar converts the input into a thin semantic structure, which is subsequently used for ellipsis resolution to create a saturated semantic structure. The ISL generator creates the proper ISL-tag structure for each type of utterance (statements, questions, negations, etc.), and a HamNoSys converter creates the graphical simulation. The system thus transforms Hindi strings into Indian Sign Language using cross-modal translation.


The domain is the language spoken at Indian Railways reservation counters: the technology turns the reservation clerk's input into ISL, which can then be presented to ISL users. The sign language grammar was built with Fluid Construction Grammar (FCG). To validate the method, a small corpus was collected over six days, based on customer interactions at a computerized reservation counter; the assessed exchanges contained 230 words, many of them repeated.

The vocabulary had 90 words, including ten verbs, nine time-related terms,
twelve domain-specific words (e.g., tickets), 15 numbers, 12 month names, 4
cities, 4 trains, and digit particles. The INGIT system is made up of three
key modules:

1. The input parser

2. The module for ellipsis resolution

3. The ISL generator, which combines an ISL vocabulary with a domain-bound English-to-ISL translation model.

Figure 3.6: Architecture of INGIT


3.3 SOFTWARE AND HARDWARE REQUIREMENTS

3.3.1 Software Prerequisites

1. Programming language: Python 3.8

2. Libraries and Frameworks:

• PyTorch: For model training and inference.

• NLTK: For preprocessing the text.

• YouTube Data API: For extracting video metadata and captions.

• Speech-to-Text APIs: For generating text from the audio.

• Pegasus: Summarization model.

• Django: For backend API and website development.

• HTML/CSS: For creating the frontend interface.

3. IDE/Editor: Visual Studio Code, Google Colab.

4. Operating System: Windows 10, macOS.

3.3.2 Hardware Requirements

1. RAM: 4 GB

2. Processor: Intel i3 or above

3. Hard Disk: 40 GB or above

3.4 FUNCTIONAL REQUIREMENTS


These define the core features and capabilities of the system.

1. Speech Recognition:

• The system should capture and transcribe spoken audio into text accurately.

• Support multiple languages (if required).

2. Natural Language Processing (NLP):

• The system should process transcribed text to understand context and intent.

• It should map spoken words to corresponding sign language gestures.

3. Sign Language Avatar Rendering:

• The system should generate and animate an avatar performing sign language gestures.

• The avatar should be capable of expressing facial expressions and hand movements accurately.

4. User Interface (UI):

• The system should provide an interactive UI for users to input audio and view sign language translations.

• It should have playback controls for pausing, rewinding, or replaying the sign language video.

5. Real-Time Processing:

• The system should process speech and display sign language animation with minimal delay.

6. Customization Options:

• Users should be able to change the avatar's appearance or adjust sign language speed.

7. Error Handling and Feedback:

• The system should handle misrecognized words and allow corrections.

• Users should be able to provide feedback for improving translations.


3.5 NON-FUNCTIONAL REQUIREMENTS


These establish the system’s performance and quality characteristics.

1. Performance: The system should process speech and generate sign language animations within 1-2 seconds.

2. Scalability: The architecture should support additional languages and new sign gestures in future updates.

3. Accuracy: The speech-to-text model should have at least 90 percent accuracy, and the avatar gestures should conform to standard sign language movements.

4. Usability: The UI should be intuitive and accessible to all users, including those with disabilities.

5. Security: The system should ensure user data privacy and encryption for stored data.

6. Compatibility: The application should work across different platforms (Web, Mobile, Desktop).

7. Maintainability: The system should be easy to update and improve over time with new sign language datasets.

8. Reliability: The system should function 99.9 percent of the time without crashes or downtime.


CHAPTER 4

SYSTEM DESIGN

4.1 Architecture of the System

These modules make up the system:

• Audio-to-Text Converter

• Input Parser

• ISL Generator

• Graphics Generator

Together they support three translation modes: Audio-to-Text, Audio-to-Sign, and Text-to-Sign.

Figure 4.1: System architecture

1. Audio-to-Text Translator: The module accepts input from any PDA's built-in or external microphone and utilizes Google Cloud Speech to transform audio into text. Input in any supported language is possible; the result is an English-language text string. The module is a pre-built Python script that also adds appropriate punctuation to the supplied text.


2. Input Parser: The input parser tokenizes the paragraph into sentences, and each sentence is tokenized into words using NLP and ML techniques. This module's output is a collection of tokens for every line. Tokenization is the process of breaking a character sequence in a given document unit into smaller units, or tokens, while perhaps also discarding specific characters such as punctuation. An illustration of tokenization follows: "Friends, Romans, and countrymen, please listen to me."

Output:

Figure 4.2: Output

Tokenizer: A tokenizer is a tool that separates text into a series of ”word”-equivalent tokens. A minimal NLTK sketch is shown below.
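As a hedged sketch of this module, the snippet below uses NLTK's standard sentence and word tokenizers on an illustrative paragraph.

# Sentence and word tokenization with NLTK (one-time: nltk.download("punkt")).
from nltk.tokenize import sent_tokenize, word_tokenize

paragraph = "Friends, Romans, and countrymen, please listen to me. I come in peace."

for sentence in sent_tokenize(paragraph):
    tokens = word_tokenize(sentence)
    # Optionally discard punctuation tokens, as described above.
    words = [t for t in tokens if t.isalnum()]
    print(words)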

3. ISL Generator: The ISL Generator module converts English-grammar input into ISL-grammar output. This is performed by transfer-based translation.

4. Transfer-based Translation: The system accepts text input and analyzes both its syntactic and semantic elements; a sign language translation is then produced. The system turns the source language into an abstract representation and then applies linguistic rules to translate it into the target language. Since a unique set of rules is applied to read the information from the source language and create a semantic or syntactic structure in the target language, this is sometimes referred to as ”rule-based translation.” Our approach feeds the raw text to a phrase-structure parser. A natural language parser is an application that identifies sentence structure, including the word groups that constitute ”phrases” and the subject or object of a verb.


Probabilistic parsers use knowledge learned from hand-parsed sentences to produce the most probable analysis of new sentences. Our setup includes efficient PCFG and lexicalized dependency parsers, as well as the lexicalized Stanford parser, which is based on probabilistic natural language parsing. After converting the source language into phrase structure trees, we apply ISL grammar rules to modify each tree's structure to match ISL grammar and structure.
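
For illustration, probabilistic phrase-structure parsing can be sketched with NLTK's ViterbiParser on a toy PCFG; the project itself uses the Stanford parser, so the grammar below is only a stand-in:

import nltk

toy_grammar = nltk.PCFG.fromstring("""
    S  -> NP VP    [1.0]
    NP -> 'birds'  [0.5] | 'worms' [0.5]
    VP -> V NP     [1.0]
    V  -> 'eat'    [1.0]
""")
parser = nltk.ViterbiParser(toy_grammar)
for tree in parser.parse("birds eat worms".split()):
    tree.pretty_print()    # phrase structure tree, annotated with its probability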

Grammatical Guidelines for Converting English Sentences to ISL Sentences: Translating across languages with different grammar rules is difficult, and the complexity increases significantly when sign language is the target language and a spoken language is the source. When translating English to Indian Sign Language, it is therefore important to compare the grammars of the two languages.

Figure 4.3: Grammar Rules

Figure 4.4: Grammar Rules

4.2 Workflow

Reordering of Words of an English Sentence


Figure 4.5: Examples of Grammar

In this module, step 1 modifies the SVO (Subject-Verb-Object) structure of English into the SOV (Subject-Object-Verb) structure of ISL. In the next step, we look for interrogative sentences, i.e., those with WH-questions.

Handling WH-Interrogatives: In ISL, interrogative indicators such as who, what, when, and why always appear in the last part of a sentence. For example, "What is her birthday?" in English becomes "Her birthday + question" in ISL. To achieve this, we parse the text's phrase structure tree, remove the node that contains the WH-interrogative term, and re-attach that node as the final child of the root, as the sketch below illustrates.
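
A simplified, token-level sketch of the WH rule (the real system applies the same transformation on the phrase structure tree):

WH_WORDS = {"who", "whom", "whose", "what", "when", "where", "why", "how"}

def move_wh_to_end(tokens):
    """Move interrogative indicators to the end, as ISL grammar requires."""
    rest = [t for t in tokens if t.lower() not in WH_WORDS]
    wh = [t for t in tokens if t.lower() in WH_WORDS]
    return rest + wh

print(move_wh_to_end(["What", "is", "her", "birthday"]))
# ['is', 'her', 'birthday', 'What']   (the Eliminator later drops 'is')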

The Eliminator
As per ISL conventions, we never use the linking verbs were/was/is/am/are, nor articles (a, an, some, the). We call these terms StopWords. Although quite prevalent in the source language, these terms carry no meaning in the target language and are not part of its lexicon, so they should be removed. In this part of the module, we remove stop words from the reordered tokens, as sketched below.
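
A minimal sketch of the Eliminator, restricting the stop-word set to the linking verbs and articles named above:

STOP_WORDS = {"a", "an", "some", "the", "is", "am", "are", "was", "were"}

def eliminate(tokens):
    """Drop tokens that have no counterpart in the ISL lexicon."""
    return [t for t in tokens if t.lower() not in STOP_WORDS]

print(eliminate(["is", "her", "birthday", "What"]))
# ['her', 'birthday', 'What']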

Stemming and Lemmatization
In ISL, term endings, suffixes, and gerund (-ing) forms are never used; all words appear in their base form. In this module step, we therefore transform each token into its root form using stemming and lemmatization. For grammatical reasons, documents in the source language use different forms of a word, such as organize, organizes, and organizing. Other word families, such as democracy, democratic, and democratization, are derivationally related and share similar meanings. The goal of stemming and lemmatization is to reduce these inflectional, and sometimes derivationally related, forms to a single base form. For example:


am, are, is => be
car, cars, car's, cars' => car

Applying this mapping to a sentence gives, for example: "the boy's cars are different colors" => "the boy car be different color". Stemming is a heuristic process that chops off derivational affixes from words, achieving the correct result most of the time. Lemmatization properly uses vocabulary and morphological analysis to return a word's base or dictionary form, removing only inflectional endings. When confronted with the token "saw", lemmatization attempts to return either "see" or "saw" depending on whether the token is used as a verb or a noun, whereas stemming might return only "s".
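
The contrast can be reproduced with NLTK's Porter stemmer and WordNet lemmatizer; note that the Porter stemmer, unlike the hypothetical stemmer in the example above, happens to leave "saw" unchanged:

import nltk
nltk.download('wordnet', quiet=True)
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

print(stemmer.stem("cars"))                   # 'car'  (heuristic suffix stripping)
print(stemmer.stem("saw"))                    # 'saw'  (no suffix rule applies)
print(lemmatizer.lemmatize("saw", pos="v"))   # 'see'  (verb reading)
print(lemmatizer.lemmatize("saw", pos="n"))   # 'saw'  (noun reading)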

Synonym Generation
Since we have a limited dictionary, not every word present in the source language is present in the ISL dictionary. Every token in the string falls into one of two cases:

1. It has a corresponding SiGML file.

2. It does not have a corresponding SiGML file.

In case 2, we use WordNet to find synonyms of the word. For each candidate synonym, we check case 1, i.e., whether the synonym exists in the current database; if it does, that synonym replaces the word. A sketch follows.
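
A sketch of the lookup, assuming the ISL dictionary is available as a simple set of words that have SiGML files:

import nltk
nltk.download('wordnet', quiet=True)
from nltk.corpus import wordnet

def best_known_word(word, dictionary):
    """Return the word itself, a known synonym, or None (fingerspell)."""
    if word in dictionary:                      # case 1: SiGML file exists
        return word
    for synset in wordnet.synsets(word):        # case 2: try WordNet synonyms
        for lemma in synset.lemma_names():
            if lemma in dictionary:
                return lemma
    return None                                 # fall back to fingerspelling

print(best_known_word("automobile", {"car", "bird"}))   # -> 'car'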

The Graphics Generator

This module accepts an ISL text string as input; the graphics generator maps each text token to the database. A matching SiGML file is present for every word in the database: for instance, the SiGML file for the word "bird" is bird.sigml. Unmapped words are presented as a sequence of per-letter SiGML files from the alphabet, for example: Rahul => R.sigml + a.sigml + h.sigml + u.sigml + l.sigml. This mapping is sketched below.

The avatar receives these SiGML files as input and uses them to produce animated signing.
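
A sketch of the mapping with its fingerspelling fallback; the sigml/ directory name is an assumption:

import os

SIGML_DIR = "sigml"    # assumed location of the word-level .sigml files

def sigml_files(word):
    """Return the SiGML file(s) to play for a word."""
    path = os.path.join(SIGML_DIR, word.lower() + ".sigml")
    if os.path.exists(path):
        return [path]                            # whole-word sign
    # unmapped word: spell it out letter by letter
    return [os.path.join(SIGML_DIR, c + ".sigml") for c in word.lower()]

print(sigml_files("Rahul"))
# ['sigml/r.sigml', 'sigml/a.sigml', 'sigml/h.sigml', 'sigml/u.sigml', 'sigml/l.sigml']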


Figure 4.6: Workflow

Figure 4.7: Use case Diagram

Figure 4.8: Class Diagram


Figure 4.9: Activity Diagram


CHAPTER 5

METHODOLOGY

5.1 Requirements and Analysis

Translating from one language to another requires a bilingual dictionary, so we create a dictionary containing English terms and their Indian Sign Language equivalents. The equivalents of the English words may be pictures, videos, or coded sign language text (gloss). Each strategy has its own advantages and disadvantages, but the video strategy is the most appropriate since it outperforms the others: videos offer more realistic material than images or coded text. If time permits, we would also like to build a system based on synthetic animations, since an avatar-based approach uses less memory, is easily replicated, supports translation, and is readily customizable.

5.2 Design and Development

The suggested translation method takes audio as input, converts it into English text, and then uses a parse tree structure to translate it into ISL grammar. Machine translation is used to transform the English text into ISL-based grammar: recognized rules, together with bilingual guidelines, turn the English input into a semi-structured parse tree that conforms to ISL grammatical standards. This transformed tree is then converted back to text, yielding the ISL output.

1. Audio to Text: This involves converting spoken English (audio) into written text using speech recognition technology. It allows individuals with hearing disabilities to understand spoken language by providing a visual representation of the spoken words in text form.

2. Audio to Sign: Converting audio into sign language bridges communication gaps by providing a direct visual representation for individuals reliant on sign language.

3. Text to Sign: Translating text into sign language facilitates communication accessibility by visually representing written content for individuals relying on sign language.

5.3 Design of Algorithms

There are five modules in the system:

• An English parser for processing text written in English.

• A module for rearranging sentences according to ISL grammatical standards.

• An Eliminator for stop word removal.

• A video-conversion module.

• Stemming to find each word's root form, and synonym lookup for terms that are not in the dictionary.

The system receives written English text as input, parses it, and uses the grammatical representation to generate a phrase structure. Since ISL follows the Subject-Object-Verb structure, along with distinct patterns for negative and interrogative sentences, reordering is then done to satisfy the grammatical requirements of ISL. After that, unnecessary words are eliminated, since ISL only employs words that carry meaning and does not use linking verbs, articles, or other helper terms. The lemmatization module receives the output and reduces each word to its base form. Words that are not found in the dictionary are replaced by their synonyms. A condensed sketch of this pipeline is given below.
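
A condensed, self-contained sketch of the pipeline, using the simplified per-module rules from Chapter 4 (the full system works on parse trees rather than flat token lists):

import nltk
nltk.download('punkt', quiet=True)
nltk.download('wordnet', quiet=True)
from nltk.stem import WordNetLemmatizer

WH = {"who", "what", "when", "where", "why", "how"}
STOP = {"a", "an", "some", "the", "is", "am", "are", "was", "were"}
lemmatizer = WordNetLemmatizer()

def english_to_isl(sentence):
    tokens = [t for t in nltk.word_tokenize(sentence) if t.isalpha()]   # parse
    tokens = ([t for t in tokens if t.lower() not in WH]
              + [t for t in tokens if t.lower() in WH])                 # reorder
    tokens = [t for t in tokens if t.lower() not in STOP]               # eliminate
    return [lemmatizer.lemmatize(t.lower(), pos="v") for t in tokens]   # root forms

print(english_to_isl("What is her birthday?"))
# ['her', 'birthday', 'what']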

5.4 The Main Features

• Speech to sign language conversion.

• Text is used as the bridge between the two.

• Web-based interface (no downloads required).

• Text tokenization based on ISL rules.

• Client/server architecture.

• Machine learning: makes use of NLP tools.

• Completely responsive: the site adapts to desktop, tablet, and mobile screens.

• Simple and intuitive to use.


CHAPTER 6

SPECIFICATIONS AND IMPLEMENTATION

6.1 SPECIFICATIONS

6.1.1 Key Tools

Front-end
• SiGML URL App

The SiGML URL App is a web-based application used for interpreting and displaying sign language gestures. It converts structured SiGML (Signing Gesture Markup Language) files into visual hand gestures and animations, making it useful for applications in sign language recognition, communication, and education.

– SiGML is an XML-based representation specifically designed for sign language processing.

– The app enables interaction between users and sign language models, ensuring smooth gesture rendering.

– It can be integrated with motion tracking tools or AI-based hand recognition models for enhanced usability.

Back-end
• Open Source Computer Vision Library (OpenCV)

An industry-leading library of computer vision packages, OpenCV offers a range of features for image processing, video analysis, and machine learning.

Key Features:

– Image and video capture from webcams or external sources.

– Real-time processing of sign language gestures and hand movement tracking.

– Pre-trained models for detecting hand gestures, facial expressions, and body posture.

– Suitable for sophisticated recognition tasks with deep learning frameworks like PyTorch, Keras, and TensorFlow.

Use in this project:

– OpenCV is used to process images and recognize sign language gestures.

– It helps identify hand shapes and movements captured via a webcam or a video feed.

– Image thresholding, contour detection, and keypoint recognition techniques are used for effective gesture classification (a minimal sketch follows this list).
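
A minimal sketch of these techniques (frame capture, thresholding, contour detection); the webcam index and threshold value are illustrative:

import cv2

cap = cv2.VideoCapture(0)                  # default webcam
ok, frame = cap.read()                     # grab a single frame
cap.release()
if ok:
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 127, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    print("detected", len(contours), "contour(s) in the frame")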

• phpMyAdmin

phpMyAdmin is a web-based application that provides a graphical user interface for MySQL and MariaDB database administration.

Key Features:

– Allows users to create, modify, and delete databases and tables.

– Provides an easy-to-use SQL editor for executing queries.

– Facilitates data import/export operations in various formats like CSV, JSON, and XML.

– Supports user authentication and access control to manage different user roles.

Use in this project:

– Stores user-related data, such as authentication details, sign language model outputs, and user activity logs.

– Maintains records of gesture mappings and their corresponding meanings in sign language (see the sketch after this list).
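
A sketch of how a gesture-mapping record might be stored from Python; the connection credentials and the gesture_map table are illustrative assumptions:

import mysql.connector

conn = mysql.connector.connect(
    host="localhost", user="signbridge",
    password="secret", database="signbridge",
)
cur = conn.cursor()
cur.execute(
    "INSERT INTO gesture_map (word, sigml_file) VALUES (%s, %s)",
    ("bird", "bird.sigml"),              # a word and its SiGML animation file
)
conn.commit()
conn.close()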


6.1.2 Languages

Front-end

• HyperText Markup Language 5 (HTML5)

HTML5, the most recent version of HTML, is used to structure web content. It introduces several improvements over previous versions, including:

– Native support for multimedia elements (<video>, <audio>).

– Improved semantics and accessibility with elements like <article>, <section>, and <nav>.

– Enhanced form validation and input elements.

– Works seamlessly with CSS3 and JavaScript to create dynamic web pages.

Use in this project:

– Creates the structure for the sign language recognition system's web interface.

– Embeds videos, animations, and forms for user interaction.

– Facilitates the display of real-time sign language interpretation results.

• Cascading Style Sheets (CSS)

CSS improves the layout and design of web applications.

Key Features:

– Provides responsive design capabilities using media queries.

– Supports animations and transitions for an engaging user experience.

– Allows customization of fonts, colors, and layouts to improve accessibility.

Use in this project:

– Styles the front-end interface, ensuring a visually appealing and user-friendly experience.

– Adjusts page layouts dynamically for different screen sizes.

– Enhances the presentation of sign language interpretations and real-time feedback.

• SiGML (Signing Gesture Markup Language)

Sign language motions may be encoded using SiGML, an XML-based language. Avatars or animated hand models can be used for visualization, and SiGML enables the systematic representation of sign motions.

Use in this project:

– Provides a standard format for representing sign language gestures.

– Ensures compatibility with sign language animation software.

– Helps translate textual input into sign language representations for better accessibility.

Back-end
• Python 3

Python 3 is a popular, flexible, high-level programming language for web development, AI, and machine learning.

Key Features:

– Simple and readable syntax, making it ideal for rapid development.

– Extensive libraries for machine learning (TensorFlow, Keras, PyTorch), data processing (Pandas, NumPy), and web development (Flask, Django).

– Strong support for automation and scripting.

Use in this project:

– Image Processing: OpenCV (the cv2 module) is used for detecting and analyzing hand gestures.

– Machine Learning Models: Python integrates with TensorFlow/Keras to train and deploy gesture recognition models.


– Database Management: Python interacts with the phpMyAdmin/MySQL database to store and retrieve data efficiently.

– API Handling: Python-based RESTful APIs handle communication between the front-end and back-end services.

– Web Frameworks: Django is used to build and deploy the web application.

By seamlessly integrating these technologies and programming languages, the system processes sign language through real-time image and video analysis, interprets gestures using machine learning models, and ensures smooth interaction between users and the platform. The back-end manages and stores the relevant data in a structured database, facilitating quick retrieval and processing of sign language representations. Additionally, the intuitive and responsive user interface enhances accessibility, providing users with real-time feedback and seamless interaction with the sign language recognition system. This combination of technologies ensures high accuracy, scalability, and an optimized user experience.

6.2 IMPLEMENTATION

6.2.1 Source Code

views.py

# Requires the NLTK data packages: punkt, averaged_perceptron_tagger,
# stopwords, and wordnet.
from django.shortcuts import render, redirect
from django.contrib.auth import login, logout
from django.contrib.auth.decorators import login_required
from django.contrib.auth.forms import UserCreationForm, AuthenticationForm
from django.contrib.staticfiles import finders

import nltk
from nltk.corpus import stopwords
from nltk.stem import WordNetLemmatizer
from nltk.tokenize import word_tokenize


def home_view(request):
    return render(request, 'homepage.html')


def about_view(request):
    return render(request, 'about.html')


def contact_view(request):
    return render(request, 'contact.html')


@login_required(login_url="loginto")
def animation_view(request):
    if request.method == 'POST':
        text = request.POST.get('sen').lower()
        words = word_tokenize(text)
        tagged = nltk.pos_tag(words)

        # Count tense indicators from the part-of-speech tags.
        tenses = {
            "future": len([w for w in tagged if w[1] == "MD"]),
            "present": len([w for w in tagged if w[1] in ("VBP", "VBZ", "VBG")]),
            "past": len([w for w in tagged if w[1] in ("VBD", "VBN")]),
            "present_continuous": len([w for w in tagged if w[1] == "VBG"]),
        }

        # Remove stop words and lemmatize the remaining tokens: ISL keeps
        # only meaningful words, in their base form.
        stop_words = set(stopwords.words('english'))
        lemmatizer = WordNetLemmatizer()
        filtered_text = []
        for w, p in zip(words, tagged):
            if w in stop_words:
                continue
            if p[1] in ('VBG', 'VBD', 'VBZ', 'VBN', 'NN'):
                filtered_text.append(lemmatizer.lemmatize(w, pos='v'))
            elif p[1] in ('JJ', 'JJR', 'JJS', 'RBR', 'RBS'):
                filtered_text.append(lemmatizer.lemmatize(w, pos='a'))
            else:
                filtered_text.append(lemmatizer.lemmatize(w))

        # ISL uses "Me" in place of "I" (the text was lowercased above).
        words = ['Me' if w == 'i' else w for w in filtered_text]

        # Prepend a tense marker word according to the dominant tense.
        probable_tense = max(tenses, key=tenses.get)
        if probable_tense == "past" and tenses["past"] >= 1:
            words = ["Before"] + words
        elif probable_tense == "future" and tenses["future"] >= 1:
            if "Will" not in words:
                words = ["Will"] + words
        elif probable_tense == "present" and tenses["present_continuous"] >= 1:
            words = ["Now"] + words

        # Keep words that have an animation clip; spell out the rest
        # letter by letter (fingerspelling fallback).
        filtered_text = []
        for w in words:
            if finders.find(w + ".mp4"):
                filtered_text.append(w)
            else:
                filtered_text.extend(w)      # each character separately
        words = filtered_text

        return render(request, 'animations.html', {'words': words, 'text': text})
    return render(request, 'animations.html')


def signup_view(request):
    if request.method == 'POST':
        form = UserCreationForm(request.POST)
        if form.is_valid():
            user = form.save()
            login(request, user)             # log the new user in
            return redirect('animations')
    else:
        form = UserCreationForm()
    return render(request, 'signup.html', {'form': form})


def login_view(request):
    if request.method == 'POST':
        form = AuthenticationForm(data=request.POST)
        if form.is_valid():
            user = form.get_user()
            login(request, user)
            if 'next' in request.POST:
                return redirect(request.POST.get('next'))
            return redirect('animations')
    else:
        form = AuthenticationForm()
    return render(request, 'loginto.html', {'form': form})


def logout_view(request):
    logout(request)
    return redirect("home")


animations.html

{% extends 'base.html' %}
{% load static %}
{% block content %}
<div class="split left">
  <h2 align="center">Enter Text or Use Mic</h2>
  <br>
  <form action="" method="post" align="left">
    {% csrf_token %}
    <input type="text" name="sen" class="mytext" id="speechToText" placeholder="">
    <button type="button" name="button" class="mic" onclick="record()">
      <img src="{% static 'mic3.png' %}" height="32px" width="38px"/>
    </button>
    &nbsp;&nbsp;&nbsp;&nbsp;
    <input type="submit" name="submit" class="submit">
  </form>
  <br>
  <table cellspacing="20px">
    <tr>
      <td class="td">The text that you entered is:</td>
      <td class="td">{{ text }}</td>
    </tr>
    <tr>
      <td class="td">Key words in the sentence:</td>
      <td class="td">
        <ul class="td" id="list" align="center">
          {% for word in words %}
          <li id="{{ forloop.counter }}" style="margin-right: 8px">{{ word }}</li>
          {% endfor %}
        </ul>
      </td>
    </tr>
  </table>
</div>
<div class="split right">
  <h2 align="center">Sign Language Animation</h2>
  <div style="text-align:center">
    <button class="submit" onclick="playPause()">Play/Pause</button>
    <video id="videoPlayer" width="600" height="350" preload="auto" autoplay>
      <source src="" type="video/mp4">
      Your browser does not support HTML5 video.
    </video>
  </div>
</div>
<script>
  // webkitSpeechRecognition API for speech-to-text conversion
  function record() {
    var recognition = new webkitSpeechRecognition();
    recognition.lang = 'en-IN';
    recognition.onresult = function(event) {
      document.getElementById('speechToText').value = event.results[0][0].transcript;
    };
    recognition.start();
  }

  // Build the playlist of word/letter clips from the keyword list.
  var items = document.getElementById("list").getElementsByTagName("li");
  var videoSource = [];
  for (var j = 0; j < items.length; j++) {
    videoSource[j] = "/static/" + items[j].innerHTML + ".mp4";
  }
  var i = 0;
  var videoCount = videoSource.length;

  function videoPlay(videoNum) {
    // Highlight the word currently being signed.
    items[videoNum].style.color = "#09edc7";
    items[videoNum].style.fontSize = "xx-large";
    var player = document.getElementById("videoPlayer");
    player.setAttribute("src", videoSource[videoNum]);
    player.load();
    player.play();
    player.addEventListener('ended', myHandler, false);
  }

  function myHandler() {
    // Un-highlight the finished word, then advance to the next clip.
    items[i].style.color = "#feda6a";
    items[i].style.fontSize = "20px";
    i++;
    if (i == videoCount) {
      document.getElementById("videoPlayer").pause();
    } else {
      videoPlay(i);
    }
  }

  function play() {
    i = 0;
    videoPlay(0);
  }

  if (videoCount > 0) {
    videoPlay(0);   // start signing automatically once the page loads
  }

  function playPause() {
    var player = document.getElementById("videoPlayer");
    if (player.paused) {
      play();
    } else {
      player.pause();
    }
  }
</script>
{% endblock %}

loginto.html

{% extends 'base.html' %}
{% block content %}
<div class="form-style">
  <h1>Log in</h1>
  <form class="site-form" action="." method="post">
    {% csrf_token %}
    {{ form }}
    {% if request.GET.next %}
    <input type="hidden" name="next" value="{{ request.GET.next }}">
    {% endif %}
    <input class="submit" type="submit" value="Log in">
  </form>
</div>
{% endblock %}

signin.html

{% extends 'base.html' %}
{% block content %}
<div class="form-style">
  <h1>Sign Up</h1>
  <form class="site-form" action="." method="post">
    {% csrf_token %}
    {{ form }}
    <br><br>
    <input class="submit" type="submit" value="Sign Up">
  </form>
</div>
<script type="text/javascript">
  // Hide the default help text rendered alongside the form fields.
  document.getElementsByTagName("span")[0].innerHTML = "";
</script>
{% endblock %}


CHAPTER 7

RESULTS

Figure 7.1: Home Page

Figure 7.2: Login Portal


Figure 7.3: Configuration for Avatar

Figure 7.4: Configuration for Avatar

7.0.1 Accuracy:
To gauge the system's correctness, we supplied ten sample test cases, which the system converted into ISL grammar. The outcomes were checked against the established grammar rules. Some example test cases are shown below:


Figure 7.5: OUTPUT 1

Figure 7.6: OUTPUT 2


Figure 7.7: OUTPUT 3

Figure 7.8: OUTPUT 4


Figure 7.9: OUTPUT 5


CHAPTER 8

CONCLUSION

This report proposes an effective approach for translating English audio into ISL. Because sign languages such as BSL and ASL have their own distinct grammars, rule-based methods combined with syntactic and semantic analysis can be used to generate correct translations for them. ISL, however, lacks standardized grammatical rules that match English text, which makes syntactic and semantic analysis problematic and precise translation of English content challenging. In ISL, facial expressions convey negation and interrogation: while the ISL animation of a verb clause plays, the expression changes to signify that the statement is being questioned or negated. The system has not yet fully incorporated this functionality. Poor-quality animation is currently a major limitation of the visual rendering: non-manual features are few, signing variation is limited, and real-time rendering remains a challenge.


CHAPTER 9

REFERENCES

[1] Akshay Kishore, Akshita Chauhan, Pooja Verma, Shivam Veraksatra, "Voice to Sign Language Converter", Meerut Institute of Engineering and Technology, Meerut, Uttar Pradesh, India. International Journal of Emerging Technology in Computer Science and Electronics, April 2020.

[2] "INGIT: Limited Domain Formulaic Translation from Hindi Strings to Indian Sign Language", Indian Institute of Technology, Kanpur.

[3] "Domain Bounded English to Indian Sign Language Translation Model", CSE, Sharda University, Noida, June 2022.

[4] "HamNoSys to SiGML Conversion System for Sign Language Automation", Multiconference on Information Processing, December 2023.

[5] Prof. Abhishek Mehta, Dr. Kamini Solanki, Prof. Trupti Rathod, "Automatic Real-Time Voice to Sign Language Conversion for Deaf and Dumb People", Gujarat Technological University, Bardoli, Gujarat, India, 2021.

[6] Taner Arsan and Oğuz Ülgen, "Sign Language Converter", Kadir Has University, Istanbul, Turkey, 2015.

[7] International Journal of Emerging Technology in Computer Science and Electronics (IJETCSE), ISSN: 0976-1353, Volume 21, Issue 4, April 2020.

[8] T. Hanke, "HamNoSys – Representing Sign Language Data in Language Resources and Language Processing Contexts", University of Hamburg, Binderstraße 34, 20146 Hamburg, Germany, 2004.

[9] Youhao Yu, "Research on Speech Recognition Technology and Its Application", IEEE, 2012.

[10] Ashok Kumar Sahoo, Gouri Sankar Mishra, Pervez Ahmed, "A Proposed Framework for Indian Sign Language Recognition", International Journal of Computer Applications, October 2012.

[11] Mohammed Elmahgiubi, Mohamed Ennajar, Nabil Drawil, Mohamed Samir Elbuni, "Sign Language Translator and Gesture Recognition", IEEE, December 2015.

[12] Deepali Yewale, Shweta Jadhav, Sayali Mahale, Ruchika Bhor, "Sign to Speech Converter", ETC Engineering, AISSMS IOIT, Pune, December 2020.

[13] Yogeshwar L. Rokade, Prashant M. Jadav, "Indian Sign Language Recognition System", International Journal of Engineering and Technology (IJET), July 2017.

[14] Anand Ballabh, Dr. Umesh Chandra Jaiswal, "A Study of Machine Translation Methods and Their Challenges", 2015.

[15] M. Mahesh, Arvind Jayaprakash, M. Geetha, "Sign Language Translator for Mobile Platforms", IEEE, September 2017.

[16] D. Bhavani, K. F. K. Reddy, S. M. Ananthula, S. Soundararajan, H. Shanmugasundaram and V. Kukreja, "Flask Powered Heart Health Predictor Using Machine Learning Algorithms", 2024 IEEE International Conference on Contemporary Computing and Communications (InC4), Bangalore, India, 2024, doi: 10.1109/InC460750.2024.10649211.
