PROJECT TITLE :
GENERATING QUESTIONS FROM TEXT
IN THREE DIFFICULTY LEVELS
GROUP MEMBERS
AADIL KAMARUDHEEN : 1
AMAN NAJEEB K P : 6
SHIJAS K V : 55
AYMAN ABBAS MUNDOL : 14
GUIDE: Ms. SURYA C, Asst. Professor (ADS)
GROUP NO: 10
CONTENTS
1) Introduction
2) Related Works
3) Objectives
4) Proposed System
5) Flow Chart
6) Model
7) Input and Output
8) Result Analysis
9) Conclusion
10) References
INTRODUCTION
Automate question generation from text at different difficulty levels.
Provide questions at various difficulty levels to support educators, learners, and content creators.
Streamline question creation, saving time for educators and enriching study resources for learners.
RELATED WORKS
[1] Automated exam question generator using genetic algorithm
Tengku Nurulhuda Tengku Abd Rahim; Zalilah Abd Aziz; Rose Hafsah Ab Rauf; Noratikah Shamsudin
2021 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), IEEE, 2021
Summary: Introduces an automated exam question generator that uses a Genetic Algorithm to produce high-quality multiple-choice questions covering the six levels of Bloom's Taxonomy.

[2] Neural Question Generation from Text: A Preliminary Study
Qingyu Zhou, Nan Yang, Furu Wei, Chuanqi Tan, Hangbo Bao & Ming Zhou
NLPCC 2020: Natural Language Processing and Chinese Computing, pp. 662–671
Summary: Introduces a novel approach to automatic question generation using neural encoder–decoder models.

[3] Design of auto-generating examination paper algorithm based on hybrid genetic algorithm
Chuanhong Zhou; Lianghuang Lin; Pujia Shuai
2021 IEEE 3rd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), IEEE, 2021
Summary: Addresses the shortcomings of current item-bank systems in educational testing and proposes an auto-generating examination paper algorithm based on a hybrid genetic algorithm to enhance objectivity and reduce subjectivity.
OBJECTIVES
Develop a system to generate questions from text at varying difficulty levels.
Implement natural language processing and machine learning algorithms for
accurate question formulation.
Provide users with personalized learning experiences through adaptive question
difficulty settings.
PROPOSED SYSTEM
Data Collection and Preprocessing:
Collect text passages from diverse sources and preprocess them (tokenization,
stopword removal, etc.) for analysis.
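The preprocessing step described above can be sketched in plain Python. This is illustrative only: the stopword list here is a tiny stand-in for a full list such as NLTK's English stopwords.

```python
import re

# Illustrative stopword list only; a real system would use a fuller
# list (e.g. NLTK's English stopwords).
STOPWORDS = {"the", "a", "an", "is", "are", "of", "to", "and", "in", "for"}

def preprocess(text):
    """Lowercase the text, tokenize on word characters, drop stopwords."""
    tokens = re.findall(r"[a-z0-9']+", text.lower())
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("The cell is the basic unit of life.")
# tokens == ['cell', 'basic', 'unit', 'life']
```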
Feature Extraction and Model Training:
Use advanced NLP techniques (TF-IDF, word embeddings) to extract features
from text.
Train a machine learning model (e.g., LSTM, Transformer) to predict question
difficulty levels based on text features.
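A minimal TF-IDF computation, as a sketch of the feature-extraction step above. A production pipeline would more likely use a library such as scikit-learn's TfidfVectorizer, which also smooths the idf term.

```python
import math
from collections import Counter

def tf_idf(docs):
    """Return one {term: weight} dict per tokenized document.

    Sketch only: scikit-learn's TfidfVectorizer would normally be
    used instead, and it smooths idf to avoid zero weights.
    """
    n = len(docs)
    df = Counter()                      # in how many docs each term appears
    for doc in docs:
        df.update(set(doc))
    weights = []
    for doc in docs:
        tf = Counter(doc)
        weights.append({
            term: (count / len(doc)) * math.log(n / df[term])
            for term, count in tf.items()
        })
    return weights

weights = tf_idf([["cell", "unit", "life"], ["cell", "membrane"]])
# "cell" appears in every document, so its weight is 0;
# "life" is distinctive, so its weight is positive.
```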
Real-time Question Generation:
Deploy the trained model in a real-time environment to generate questions
from input text at varying difficulty levels.
Continuous Optimization and Evaluation:
Continuously optimize the model's performance through fine-tuning and
evaluation using metrics like accuracy and diversity of generated questions.
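One common way to quantify the "diversity of generated questions" mentioned above is a distinct-n ratio. The metric choice here is an assumption for illustration; the project does not specify how diversity is measured.

```python
def distinct_n(questions, n=2):
    """Fraction of unique n-grams across the generated questions.

    distinct-n is one common diversity proxy; the slide does not
    specify a diversity metric, so this choice is an assumption.
    """
    ngrams = []
    for q in questions:
        toks = q.split()
        ngrams += [tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / len(ngrams) if ngrams else 0.0

diversity = distinct_n(["what is a cell", "what is dna"], n=2)
# 4 unique bigrams out of 5 total -> 0.8
```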
FLOW DIAGRAM
Data Collection & Preprocessing → Feature Extraction → Model Training → Fine-tuning and Evaluation → Optimization → Deployment
MODEL USED
We leverage the T5 model, introduced by Google.
T5 reframes all NLP tasks into a unified text-to-text format in which the input and output are always text strings.
This text-to-text framework allows the same model, loss function, and hyperparameters to be used for any NLP task, including machine translation, document summarization, question answering, and classification.
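The text-to-text framing can be illustrated by how an input string might be built for the model. The "generate question:" prefix and the difficulty tag are assumptions about how this project might condition T5; they are not among T5's published task prefixes. In practice the string would be tokenized and passed to a pretrained model such as Hugging Face's T5ForConditionalGeneration.

```python
def build_t5_input(context, difficulty):
    """Frame question generation as a text-to-text task, T5-style.

    The "generate question:" prefix and the difficulty tag are
    hypothetical conditioning choices, not published T5 prefixes.
    The returned string would be tokenized and fed to a pretrained
    model (e.g. T5ForConditionalGeneration.generate) to produce the
    output question, also as a text string.
    """
    return f"generate question: difficulty: {difficulty} context: {context}"

prompt = build_t5_input("Water boils at 100 degrees Celsius.", "easy")
```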
INPUT/OUTPUT
User interface: an input text field for the source passage, a control for setting the difficulty level, and an output area showing the generated questions.
RESULT ANALYSIS
Accuracy: The system accurately generates relevant questions from text
inputs, ensuring that the questions align with the content's context and
intent
Responsiveness: The system promptly generates questions, facilitating
efficient extraction of key information and seamless integration into
educational or analytical processes.
Robustness: The system handles diverse text sources resiliently, including
varying lengths and styles, while maintaining consistent question
generation quality.
Ease of Use: The system provides a user-friendly interface, streamlining
the question generation process with minimal input or adjustments
required from users, making it accessible across different proficiency
levels and scenarios.
Accuracy measures grammatical correctness, syntax validity, and semantic meaning
of generated questions.
Precision gauges the ratio of relevant questions to all generated ones, particularly in
a specified topic.
Recall assesses the comprehensiveness or coverage of relevant questions.
F1 Score balances precision and recall, offering a holistic evaluation of question
quality and relevance.
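The precision, recall, and F1 definitions above can be computed over sets of questions as follows. This is a sketch: it assumes exact matching between generated and reference questions, which real evaluations usually relax.

```python
def precision_recall_f1(generated, relevant):
    """Precision, recall and F1 over sets of questions.

    Sketch only: assumes exact string matching between generated
    and reference questions.
    """
    generated, relevant = set(generated), set(relevant)
    tp = len(generated & relevant)          # relevant questions we produced
    precision = tp / len(generated) if generated else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1({"q1", "q2", "q3", "q4"}, {"q1", "q2", "q5"})
# p = 2/4 = 0.5, r = 2/3, f = 4/7
```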
BLEU (Bilingual Evaluation Understudy) Score is a metric used to evaluate the quality of generated text by comparing it to a set of reference texts. It measures how similar the generated text is to the references.
Graphical Representation: The graph shows bars representing the BLEU scores for
each generated question. Each question is labeled on the x-axis (e.g., Q1, Q2, Q3),
and the BLEU scores (ranging from 0 to 1) are shown on the y-axis.
Interpreting the Graph: Higher bars indicate higher BLEU scores, meaning the generated questions are more similar to the reference questions. Lower bars, or scores closer to 0, indicate less similarity between the generated and reference questions.
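A simplified BLEU computation makes the metric concrete: unigram clipped precision times a brevity penalty. Full BLEU (e.g. NLTK's sentence_bleu) averages 1- to 4-gram precisions; this sketch keeps only the unigram term for clarity.

```python
import math
from collections import Counter

def bleu1(candidate, reference):
    """Unigram BLEU: clipped precision times a brevity penalty.

    Sketch only; full BLEU (e.g. nltk's sentence_bleu) averages
    1- to 4-gram precisions.
    """
    cand, ref = candidate.split(), reference.split()
    ref_counts = Counter(ref)
    # Clip each candidate word's count at its count in the reference.
    clipped = sum(min(c, ref_counts[w]) for w, c in Counter(cand).items())
    precision = clipped / len(cand) if cand else 0.0
    # Penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * precision

score = bleu1("what is a cell", "what is a cell")
# identical candidate and reference -> 1.0
```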
CONCLUSION
In conclusion, our project revolutionizes question generation, offering personalized learning through varying difficulty levels.
Leveraging advanced NLP and machine learning, our system creates challenging questions from text passages.
Real-time generation delivers questions instantly from input text, enhancing educational outcomes.
Further optimization will make our model a game-changer in educational technology, empowering effective teaching and learning.
REFERENCES:
[1] T. N. Tengku Abd Rahim, Z. Abd Aziz, R. H. Ab Rauf, and N. Shamsudin, "Automated exam question generator using genetic algorithm," 2021 IEEE Conference on e-Learning, e-Management and e-Services (IC3e), IEEE, 2021.
[2] Q. Zhou, N. Yang, F. Wei, C. Tan, H. Bao, and M. Zhou, "Neural Question Generation from Text: A Preliminary Study," NLPCC 2020: Natural Language Processing and Chinese Computing, pp. 662–671.
[3] C. Zhou, L. Lin, and P. Shuai, "Design of auto-generating examination paper algorithm based on hybrid genetic algorithm," 2021 IEEE 3rd International Conference on Cloud Computing and Big Data Analysis (ICCCBDA), IEEE, 2021.
THANK YOU!