STUDENT ATTENDANCE SYSTEM BASED ON THE
FACE RECOGNITION OF WEBCAM’S IMAGE OF THE
CLASSROOM.
MINI PROJECT REPORT
SUBMITTED BY:
CS KOUSHIK (100520733087)
SAMREEN (100520733099)
THUPPARI PRANAY (100520733103)
In partial fulfillment for the award of the degree
of
BACHELOR OF ENGINEERING
IN
Computer Science and Engineering
University College of Engineering
OSMANIA UNIVERSITY: HYDERABAD-500007
AUG-2023.
Department of Computer Science and Engineering,
University College of Engineering,
Osmania University.
CERTIFICATE:
This is to certify that this is the bonafide work of CS. KOUSHIK, SAMREEN and
THUPPARI PRANAY, bearing roll numbers 100520733087, 100520733099 and
100520733103 respectively, for their mini project, in partial fulfillment of the
Bachelor of Engineering degree offered by the Department of Computer Science and
Engineering, University College of Engineering, Osmania University.
Project Guide Head of the Department
Dr. L.K.Suresh Kumar P.V.Sudha
(Asst. Professor) (Professor)
DEPT. OF CSE, UCEOU. DEPT. OF CSE, UCEOU.
STUDENT DECLARATION
We declare that the work reported in the project report entitled “ Student Attendance System Based
On The Face Recognition Of Webcam’s Image Of The Classroom” submitted by CS. Koushik,
Samreen, Thuppari Pranay is a record of the work done by us in the DEPARTMENT OF
COMPUTER SCIENCE AND ENGINEERING, UNIVERSITY COLLEGE OF ENGINEERING,
OSMANIA UNIVERSITY. No part of the report is copied from books/ journals/internet and
wherever referred, the same has been duly acknowledged in the text. The reported data is based on
the work done entirely by us and not copied from any other source or submitted to any other
Institute or University for the award of a degree or diploma.
SIGNATURES
C.S. Koushik :
Samreen :
Thuppari Pranay :
ACKNOWLEDGEMENT
It is our privilege and pleasure to express our profound sense of respect, gratitude and
indebtedness to our guide Dr. L.K. SURESH KUMAR, Assistant Professor, Department of
Computer Science and Engineering, UCEOU, for his inspiration, guidance, cogent discussion,
constructive criticisms, and encouragement throughout this dissertation work.
We also thank P.V. SUDHA, Head of the Department of Computer Science and Engineering, for her
support and for making the Department's resources available to us students. We would
also like to extend our thanks to the entire faculty of the Department of Computer Science and
Engineering, University College of Engineering, Osmania University, who encouraged us
throughout the course of our Bachelor's degree and allowed us to use the many resources present
in the department. Our sincere thanks to our parents and friends for their valuable suggestions,
moral strength and support for the completion of our project.
TABLE OF CONTENTS
CERTIFICATE
STUDENT DECLARATION
ACKNOWLEDGEMENT
1. INTRODUCTION
2. AIMS AND OBJECTIVES
3. REVIEW OF LITERATURE
4. METHODOLOGY (FORMULATION/ALGORITHM)
4.1 SYSTEM DESIGN
4.2 TECHNOLOGIES USED
5. IMPLEMENTATION
5.1 ENTRY NEW USER CREDENTIALS
5.2 FACE SCANNING
5.3 FACE DETECTION
5.4 MARK THE ATTENDANCE
5.5 SMS NOTIFIER
6. SOURCE CODE WITH ADEQUATE COMMENTS
7. CONCLUSION
8. FUTURE SCOPE
9. REFERENCES
1. INTRODUCTION
In today's networked world, the need to maintain the security of information or physical
property is becoming both increasingly important and increasingly difficult. From time to
time we hear about crimes of credit card fraud, computer break-ins by hackers, or
security breaches in a company or government building.
In most of these crimes, the criminals take advantage of a fundamental flaw in
conventional access control systems: the systems do not grant access by "who we are",
but by "what we have", such as ID cards, keys, passwords, PINs, or a mother's
maiden name.
None of these means really defines us. Recently, technology has become available to allow
verification of "true" individual identity. This technology is based on a field called
"biometrics".
Biometric access control systems are automated methods of verifying or recognizing the identity
of a living person on the basis of some physiological characteristic, such as fingerprints
or facial features, or some aspect of the person's behavior, like his/her handwriting style
or keystroke patterns. Since biometric systems identify a person by biological
characteristics, they are difficult to forge. Face recognition is one of the few biometric
methods that possess the merits of both high accuracy and low intrusiveness: it has the
accuracy of a physiological approach without being intrusive. For this reason, since the
early 70's (Kelly, 1970), face recognition has drawn the attention of researchers in fields
from security, psychology, and image processing to computer vision.
2. AIM OF THE PROJECT
The main aim of the "STUDENT ATTENDANCE SYSTEM BASED ON FACE
RECOGNITION" is to develop an intelligent system capable of detecting students' faces,
marking their attendance, and notifying parents of their child's attendance by sending
messages. The system leverages computer vision algorithms and machine learning
techniques to achieve accurate and efficient face detection and recognition.
OBJECTIVES OF THE PROJECT
Data Collection and Preprocessing:
• Gather a diverse dataset of facial images, ensuring variability in lighting conditions,
facial expressions, poses, and demographics.
User Interface (UI):
• Design and implement a user-friendly interface for users to interact with the face
detection and recognition system.
• Allow users to upload their images to mark attendance.
• Allow users to notify parents about their child's attendance.
Testing and Deployment:
● Implement and evaluate the KNN algorithm to classify facial feature vectors
based on their similarity.
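The KNN classification step named in this objective can be illustrated with a minimal sketch. The data below is synthetic (random vectors standing in for flattened 50x50 face images) and the `name_roll` labels are hypothetical; the real system trains on images captured from the webcam, as shown in the source code section.

```python
# Illustrative sketch (not the project code): classifying flattened face
# vectors with K-Nearest Neighbours. The two clusters stand in for face
# images of two different students.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
# Two synthetic "students": clusters of 50x50 grayscale faces flattened to 2500-d
faces_a = rng.normal(loc=0.2, scale=0.05, size=(10, 2500))
faces_b = rng.normal(loc=0.8, scale=0.05, size=(10, 2500))
X = np.vstack([faces_a, faces_b])
y = ["alice_01"] * 10 + ["bob_02"] * 10   # hypothetical name_roll labels

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X, y)

# A new face close to cluster A should be recognised as alice_01
probe = rng.normal(loc=0.2, scale=0.05, size=(1, 2500))
print(knn.predict(probe)[0])
```

The same `fit`/`predict` pattern appears in the implementation's `train_model` and `identify_face` functions.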
By achieving these objectives, the Student Attendance System based on Face Recognition
will deliver a powerful tool capable of effectively detecting individual students' faces,
marking their attendance, and notifying their parents.
3. REVIEW OF LITERATURE
Face recognition is one of the few biometric methods that possess the merits of both high
accuracy and low intrusiveness. It has the accuracy of a physiological approach without being
intrusive. Over the past 30 years, many researchers have proposed different face recognition
techniques, motivated by the increasing number of real-world applications requiring the
recognition of human faces. Several problems make automatic face recognition
a very difficult task: the face images of a person input to the database are
usually acquired under different conditions, so an automatic face recognition system
must cope with numerous variations of images of the same face due to changes in
parameters such as
1. Pose
2. Illumination
3. Expression
4. Motion
5. Facial hair
6. Glasses
7. Background of image
Face recognition technology is now advanced enough to be applied in many commercial
applications such as personal identification, security systems, image and film processing,
psychology, human-computer interaction, entertainment systems, smart cards, law enforcement,
surveillance and so on. Face recognition can be performed on both still images and video
sequences, and video-based recognition has its origin in still-image face recognition. The
different approaches to face recognition for still images can be categorized into three main groups:
1. Holistic approach
2. Feature-based approach
3. Hybrid approach
1. Holistic Approach :- In the holistic (global feature) approach, the whole face region is
taken into account as input data to the face recognition system. Examples of holistic
methods are eigenfaces (the most widely used method for face recognition), probabilistic
eigenfaces, fisherfaces, support vector machines, nearest feature lines (NFL) and
independent component analysis approaches. They are all based on principal
component analysis (PCA) techniques, which can be used to project a dataset into a lower
dimension while retaining the characteristics of the dataset.
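The PCA dimensionality reduction behind eigenfaces can be sketched as follows. This is an illustrative example, not part of the project: the "faces" are random vectors, and the component count of 10 is an arbitrary choice.

```python
# Minimal eigenfaces-style sketch: PCA projects flattened face images into
# a low-dimensional subspace while retaining most of the variance.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)
# 20 synthetic "face images" of 50x50 pixels, flattened to 2500-d vectors
faces = rng.normal(size=(20, 2500))

pca = PCA(n_components=10)          # keep only 10 "eigenface" directions
projected = pca.fit_transform(faces)
print(projected.shape)              # each face is now a 10-d vector

# Reconstruction from the subspace approximates the original image
reconstructed = pca.inverse_transform(projected)
print(reconstructed.shape)
```

Recognition in eigenface systems then compares faces by distance in the low-dimensional projected space rather than in pixel space.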
2. Feature-based approach :- In feature-based (local feature) approaches, facial features
such as the nose and eyes are segmented and then used as input data for a structural
classifier. Pure geometry, dynamic link architecture (DLA), and hidden Markov model methods
belong to this category. One of the most successful of these systems is the Elastic Bunch Graph
Matching (EBGM) system [40],[41], which is based on DLA.
Wavelets, especially Gabor wavelets, play a building block role for facial representation in
these graph matching methods. A typical local feature representation consists of wavelet
coefficients for different scales and rotations based on fixed wavelet bases. These locally
estimated wavelet coefficients are robust to illumination change, translation, distortion,
rotation and scaling. The grid is appropriately positioned over the image and is stored with each
grid point's locally determined jet, as shown in Figure 2(a), and serves to represent the pattern
classes. Recognition of a new image takes place by transforming the image into the grid of jets
and matching all stored model graphs to the image. Conformation of the DLA is
done by establishing and dynamically modifying links between vertices in the model
domain.
3. Hybrid approach :- The idea of this method comes from how the human visual system
perceives both holistic and local features. The key factors that influence the
performance of a hybrid approach are how to determine which features should be
combined and how to combine them, so as to preserve their advantages and avoid their
disadvantages at the same time.
These problems are closely related to multiple classifier systems (MCS) and
ensemble learning in the field of machine learning. Unfortunately, even in these
fields, these problems remain unsolved. In spite of this, the numerous efforts made in
these fields do provide some insights into solving these problems, and these
lessons can be used as guidelines in designing a hybrid face recognition system.
A hybrid approach that uses both holistic and local information for recognition may be
an effective way to reduce the complexity of classifiers and improve their
generalization capability.
4. METHODOLOGY (FORMULATION/ALGORITHM)
4.1 SYSTEM DESIGN
A thorough survey has revealed that various methods, and combinations of these methods,
can be applied in the development of a new face recognition system. Among the many possible
approaches, we have decided to use a combination of knowledge-based methods for the face
detection part and a neural network approach for the face recognition part. The main
reasons for this selection are their smooth applicability and reliability. Our face recognition
system approach is given in the figure below.
4.1.1. Input Part
The input part is a prerequisite for the face recognition system. The image acquisition
operation is performed in this part: live captured images are converted to digital data for
performing image-processing computations. These captured images are then sent to the face
detection algorithm.
4.1.2. Face Detection Part
Face detection performs the locating and extracting of face images for the face recognition
system. The face detection algorithm is given in the figure below. Our experiments reveal
that skin segmentation, as a first step of face detection, reduces the computational time of
searching the whole image: once segmentation is applied, only the segmented regions are
searched to determine whether they include any face.
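The skin segmentation step can be sketched as a per-pixel rule. The report cites an RGB skin-color description [4] without listing its thresholds, so the rule below is a commonly used RGB skin heuristic standing in as an assumption.

```python
# Hedged sketch of RGB skin segmentation (thresholds are assumed, not
# taken from the report or from [4]).
import numpy as np

def skin_mask(img_rgb):
    """Return a boolean mask of skin-like pixels for an RGB image array."""
    r = img_rgb[..., 0].astype(int)
    g = img_rgb[..., 1].astype(int)
    b = img_rgb[..., 2].astype(int)
    spread = img_rgb.max(axis=-1).astype(int) - img_rgb.min(axis=-1).astype(int)
    return ((r > 95) & (g > 40) & (b > 20) & (spread > 15) &
            (abs(r - g) > 15) & (r > g) & (r > b))

# A skin-toned pixel passes the rule, a blue pixel does not
img = np.array([[[200, 120, 90], [30, 60, 200]]], dtype=np.uint8)
print(skin_mask(img))
```

Face search then only needs to visit connected regions of the mask rather than every window of the full image, which is the computational saving described above.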
For this reason, skin segmentation is applied as the first step of the detection part. The RGB
color space is used to describe skin-like color [4]. The white balance of images differs due to
changes in the lighting conditions of the environment while acquiring the image; this situation
causes non-skin objects to be classified as skin objects.
Therefore, the white balance of the acquired image should be corrected before segmenting it
[18]. Results of segmentation on the original image and on the white-balance-corrected image
are given in Figures 4 and 5.
After an AND operation is applied on the two segmented images, some morphological
operations are applied to the final skin image to search for face candidates: noisy small
regions are eliminated and closing operations are performed. Then, face candidates are chosen
using two conditions: the ratio of the candidate's bounding box, and the covering of some gaps
inside the candidate region. The ratio of the bounding box should lie between 0.3 and 1.5.
Based on these conditions, face candidates are extracted from the input image with a bounding
box modified from the original one. The height of the bounding box is set to 1.28 times the
width of the bounding box, because chest and neck parts will be eliminated if the candidate
includes them. This modification value has been determined experimentally. These face
candidates are then sent to the facial feature extraction part to validate the candidates.
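The two candidate conditions above can be sketched as a small helper. This is an illustrative function, not the project's code; it assumes the ratio is width/height (the report does not say which), and it only ever shrinks the box when applying the 1.28 factor.

```python
# Hedged sketch of candidate filtering: reject boxes whose width/height
# ratio falls outside [0.3, 1.5], then cap the height at 1.28x the width
# to trim neck and chest regions.
def filter_and_adjust(x, y, w, h):
    """Return an adjusted (x, y, w, h) box, or None if the ratio fails."""
    ratio = w / h
    if not (0.3 <= ratio <= 1.5):
        return None
    new_h = int(1.28 * w)          # experimentally determined factor (report)
    return (x, y, w, min(new_h, h))

print(filter_and_adjust(10, 10, 100, 160))  # kept, height trimmed to 128
print(filter_and_adjust(10, 10, 100, 400))  # ratio 0.25: rejected
```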
For final verification of the candidate and face image extraction, a facial feature extraction
process is applied. Facial features are among the most significant features of a face:
eyebrows, eyes, mouth, nose, nose tip, cheeks, etc. The property used to extract the eyes and
mouth is that the two eyes and the mouth form an isosceles triangle, with the distance between
the eyes equal to the distance from the midpoint of the eyes to the mouth [2]. A Laplacian of
Gaussian (LoG) filter and some other filtering operations are performed to extract the facial
features of a face candidate.
4.2 TECHNOLOGIES USED
FRONT END
• HTML
• CSS
BACKEND
• PYTHON
• OPEN CV
• KNN ALGORITHM
API’S USED
• SMS CHEF CODE
5. IMPLEMENTATION:
USER INTERFACE
5.1 ENTRY NEW USER CREDENTIALS
5.2 FACE SCANNING
Sample images taken by our Detector Program
5.3 FACE DETECTION
5.4 MARK THE ATTENDANCE
5.5 SMS NOTIFIER
6. SOURCE CODE
import cv2
import os
from flask import Flask, request, render_template
from datetime import date
from datetime import datetime
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
import pandas as pd
import joblib

#### Defining Flask App
app = Flask(__name__)

#### Saving today's date in 2 different formats
datetoday = date.today().strftime("%m_%d_%y")
datetoday2 = date.today().strftime("%d-%B-%Y")

#### Initializing VideoCapture object to access the webcam
face_detector = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')
try:
    cap = cv2.VideoCapture(1)
except:
    cap = cv2.VideoCapture(0)

#### If these directories don't exist, create them
if not os.path.isdir('Attendance'):
    os.makedirs('Attendance')
if not os.path.isdir('static'):
    os.makedirs('static')
if not os.path.isdir('static/faces'):
    os.makedirs('static/faces')
if f'Attendance-{datetoday}.csv' not in os.listdir('Attendance'):
    with open(f'Attendance/Attendance-{datetoday}.csv', 'w') as f:
        f.write('Name,Roll,Time')
#### Get the number of total registered users
def totalreg():
    return len(os.listdir('static/faces'))

#### Extract the faces from an image
def extract_faces(img):
    if img is not None and len(img) != 0:
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        face_points = face_detector.detectMultiScale(gray, 1.3, 5)
        return face_points
    else:
        return []

#### Identify a face using the trained ML model
def identify_face(facearray):
    model = joblib.load('static/face_recognition_model.pkl')
    return model.predict(facearray)

#### Train the model on all the faces available in the faces folder
def train_model():
    faces = []
    labels = []
    userlist = os.listdir('static/faces')
    for user in userlist:
        for imgname in os.listdir(f'static/faces/{user}'):
            img = cv2.imread(f'static/faces/{user}/{imgname}')
            resized_face = cv2.resize(img, (50, 50))
            faces.append(resized_face.ravel())
            labels.append(user)
    faces = np.array(faces)
    knn = KNeighborsClassifier(n_neighbors=5)
    knn.fit(faces, labels)
    joblib.dump(knn, 'static/face_recognition_model.pkl')

#### Extract info from today's attendance file in the Attendance folder
def extract_attendance():
    df = pd.read_csv(f'Attendance/Attendance-{datetoday}.csv')
    names = df['Name']
    rolls = df['Roll']
    times = df['Time']
    l = len(df)
    return names, rolls, times, l

#### Add attendance for a specific user
def add_attendance(name):
    username = name.split('_')[0]
    userid = name.split('_')[1]
    current_time = datetime.now().strftime("%H:%M:%S")
    df = pd.read_csv(f'Attendance/Attendance-{datetoday}.csv')
    if int(userid) not in list(df['Roll']):
        with open(f'Attendance/Attendance-{datetoday}.csv', 'a') as f:
            f.write(f'\n{username},{userid},{current_time}')
################## ROUTING FUNCTIONS #########################

#### Our main page
@app.route('/')
def home():
    names, rolls, times, l = extract_attendance()
    return render_template('home.html', names=names, rolls=rolls, times=times,
                           l=l, totalreg=totalreg(), datetoday2=datetoday2)

#### This function runs when we click the Take Attendance button
@app.route('/start', methods=['GET'])
def start():
    if 'face_recognition_model.pkl' not in os.listdir('static'):
        return render_template('home.html', totalreg=totalreg(), datetoday2=datetoday2,
                               mess='There is no trained model in the static folder. '
                                    'Please add a new face to continue.')
    cap = cv2.VideoCapture(0)
    ret = True
    while ret:
        ret, frame = cap.read()
        if len(extract_faces(frame)) > 0:
            (x, y, w, h) = extract_faces(frame)[0]
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 20), 2)
            face = cv2.resize(frame[y:y + h, x:x + w], (50, 50))
            identified_person = identify_face(face.reshape(1, -1))[0]
            add_attendance(identified_person)
            cv2.putText(frame, f'{identified_person}', (30, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 20), 2, cv2.LINE_AA)
        cv2.imshow('Attendance', frame)
        if cv2.waitKey(1) == 27:    # Esc key stops the capture loop
            break
    cap.release()
    cv2.destroyAllWindows()
    names, rolls, times, l = extract_attendance()
    return render_template('home.html', names=names, rolls=rolls, times=times,
                           l=l, totalreg=totalreg(), datetoday2=datetoday2)

#### This function runs when we add a new user
@app.route('/add', methods=['GET', 'POST'])
def add():
    newusername = request.form['newusername']
    newuserid = request.form['newuserid']
    userimagefolder = 'static/faces/' + newusername + '_' + str(newuserid)
    if not os.path.isdir(userimagefolder):
        os.makedirs(userimagefolder)
    cap = cv2.VideoCapture(0)
    i, j = 0, 0
    while 1:
        _, frame = cap.read()
        faces = extract_faces(frame)
        for (x, y, w, h) in faces:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 20), 2)
            cv2.putText(frame, f'Images Captured: {i}/50', (30, 30),
                        cv2.FONT_HERSHEY_SIMPLEX, 1, (255, 0, 20), 2, cv2.LINE_AA)
            if j % 10 == 0:    # save every 10th frame, up to 50 images
                name = newusername + '_' + str(i) + '.jpg'
                cv2.imwrite(userimagefolder + '/' + name, frame[y:y + h, x:x + w])
                i += 1
            j += 1
        if j == 500:
            break
        cv2.imshow('Adding new User', frame)
        if cv2.waitKey(1) == 27:
            break
    cap.release()
    cv2.destroyAllWindows()
    print('Training Model')
    train_model()
    names, rolls, times, l = extract_attendance()
    return render_template('home.html', names=names, rolls=rolls, times=times,
                           l=l, totalreg=totalreg(), datetoday2=datetoday2)

#### Our main function which runs the Flask App
if __name__ == '__main__':
    app.run(debug=True)
SOURCE CODE FOR SMS NOTIFIER:
import requests

apisecret = "23c5d1b1e743d1ca84d43eb92af681dff5e2dcf3"
deviceId = "00000000-0000-0000-bd8f-1d0f920d4268"
phone = "+919347325894"
message = "Hello student! Your attendance is marked."

payload = {
    "secret": apisecret,
    "mode": "devices",
    "device": deviceId,
    "sim": 1,
    "priority": 1,
    "phone": phone,
    "message": message,
}

r = requests.post(url="https://www.cloud.smschef.com/api/send/sms", params=payload)
result = r.json()
print(result)
7. CONCLUSION
In conclusion, the development of the "Student Attendance System Based On The Face
Recognition Of Webcam's Image Of The Classroom" web app has proven to be a significant
achievement, providing an innovative and efficient solution for detecting students' faces,
marking their attendance by facial recognition, and notifying parents by sending attendance
messages. Through the integration of computer vision techniques and machine learning
algorithms, the app successfully reads and displays the video stream, captures images, detects
faces, flattens the largest face image, and saves it in a NumPy array, keeping parents aware of
their child's attendance.
8. FUTURE SCOPE
• Display the attendance in the form of graphs such as pie charts / bar graphs.
• Detect a large number of students' faces and store the data.
• Notify parents of their child's monthly and semester-wise attendance.
STUDENT ATTENDANCE CHART:
➢ It notifies parents of the percentage of their student's attendance.
➢ If the attendance is >75%, the student is regular and eligible to write exams.
➢ If the attendance is <75% and >60%, the student is out of danger.
➢ If the attendance is <60%, the student is in the danger zone and is irregular at the
institute.
➢ All these data are sent to the parents so that they can be aware of their child's attendance
and take proper steps to increase it.
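The banding described in the chart above can be sketched as a small helper. This is a hypothetical function (the chart does not say which band the exact boundary values 75% and 60% fall into; this sketch places them in the lower band).

```python
# Sketch of the attendance banding described in the chart above.
def attendance_status(percent):
    """Map an attendance percentage to the report's three bands."""
    if percent > 75:
        return "regular - eligible for exams"
    if percent > 60:
        return "out of danger"
    return "danger - irregular"

print(attendance_status(80))
print(attendance_status(70))
print(attendance_status(50))
```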
9. REFERENCES
1. L. Zhi-fang, Y. Zhi-sheng, A. K. Jain and W. Yun-qiong, 2003, "Face Detection And Facial
Feature Extraction In Color Image", Proc. Fifth International Conference on Computational
Intelligence and Multimedia Applications (ICCIMA'03), pp. 126-130, Xi'an, China.
2. C. Lin, 2005, "Face Detection By Color And Multilayer Feedforward Neural Network",
Proc. 2005 IEEE International Conference on Information Acquisition, pp. 518-523, Hong Kong
and Macau, China.
3. S. Kherchaoui and A. Houacine, 2010, "Face Detection Based On A Model Of The Skin
Color With Constraints And Template Matching", Proc. 2010 International Conference on
Machine and Web Intelligence, pp. 469-472, Algiers, Algeria.
4. Design of a Face Recognition System. Available from:
https://www.researchgate.net/publication/262875649_Design_of_a_Face_Recognition_System.