Fake Job Prediction with ML Algorithms
BACHELOR OF TECHNOLOGY
IN
COMPUTER SCIENCE AND ENGINEERING (AI & ML)
BY
G. NANDINI 21PT5A6602
CERTIFICATE
This is to certify that the major project entitled “FAKE JOB PREDICTION USING MACHINE LEARNING ALGORITHMS”, being submitted by G. NANDINI (21PT5A6602) in partial fulfilment of the requirements for the award of the degree of B.Tech in Computer Science and Engineering (AI & ML), Avanthi’s Scientific Technological and Research Academy, Hyderabad, is a record of bonafide work carried out by her under my guidance. The results presented in this major project work have been verified and found to be satisfactory. The results embodied in this project work have not been submitted to any other university for the award of any other degree.
DECLARATION
I hereby declare that the results embodied in this dissertation entitled “FAKE JOB PREDICTION USING MACHINE LEARNING ALGORITHMS” were obtained by me during the year 2023-2024 in partial fulfilment of the requirements for the award of B.Tech in Computer Science and Engineering (AI & ML) from Avanthi’s Scientific Technological and Research Academy. I have not submitted the same to any other university or organization for the award of any other degree.
G. NANDINI 21PT5A6602
ACKNOWLEDGEMENT
This is an acknowledgement of the intensive drive and technical competence of the many individuals who have contributed to the success of our major project work.
We are grateful to the Chairman of Avanthi Group of Institutions, Sri M. SRINIVASA RAO, for granting us permission to undergo practical training through the development of this project in college.
Our sincere thanks to the Principal, Dr. G. RAMACHANDRA REDDY, Avanthi’s Scientific Technological and Research Academy, and to all the faculty members.
We would like to express our gratitude to the Head of the Department, Dr. N. V. RAMANA REDDY, HOD-CSE, Associate Professor, for his valuable suggestions during the course of our major project work.
We are immensely thankful to our B.Tech Project Coordinator, S. RAJENDER, Assistant Professor, Department of Computer Science and Engineering, for his support, which helped us complete this major project successfully.
We are immensely thankful to our internal guide, Dr. N. V. RAMANA REDDY, Associate Professor, Department of CSE, for his valuable guidance and suggestions at each and every stage of this work, which helped us complete this major project successfully.
We are thankful to one and all who cooperated with us to complete our major project work successfully.
G. NANDINI 21PT5A6602
G. NANDINI 21PT5A6602
ABSTRACT
To avoid fraudulent job postings on the internet, an automated tool based on machine learning classification techniques is proposed in this work. Different classifiers are used to check for fraudulent postings on the web, and the results of those classifiers are compared to identify the best employment scam detection model. This helps in detecting fake job posts among an enormous number of postings. Two major types of classifiers, single classifiers and ensemble classifiers, are considered for fraudulent job post detection. Experimental results indicate that ensemble classifiers detect scams better than single classifiers. Naive Bayes is a statistical classification method based on Bayes' Theorem that assumes the impact of a specific feature on a class is unrelated to the impact of other features. It is a quick, accurate, and dependable approach that performs well on large datasets. The SGD classifier, on the other hand, is an effective method for fitting linear classifiers and regressors under convex loss functions, such as Support Vector Machines and Logistic Regression. It has gained significant attention in large-scale learning due to its ability to handle text categorization and natural language processing problems with ease.
INDEX
CHAPTER   CONTENTS                                    Page No
o CERTIFICATE
o DECLARATION
o ACKNOWLEDGEMENT
o ABSTRACT
1. INTRODUCTION
   1.1 Problem Statement                              3
   1.2 Purpose                                        4
   1.3 Scope                                          5
   1.4 Objectives                                     6
2. LITERATURE SURVEY                                  8
3. SYSTEM ANALYSIS
   3.1 Existing System                                18
   3.2 Disadvantages of Existing System               19
   3.3 Proposed System                                15
   3.4 Advantages of Proposed System                  16
4. SYSTEM REQUIREMENTS
   4.1 Functional Requirements                        18
   4.2 Non-Functional Requirements                    19
      4.2.1 Hardware Requirements                     22
      4.2.2 Software Requirements                     22
5. SOFTWARE ENVIRONMENT
   5.1 Python                                         23
6. SYSTEM ARCHITECTURE
   6.1 System Architecture                            36
   6.2 UML Diagrams                                   37
      6.2.1 Use Case Diagram                          39
      6.2.2 Class Diagram                             40
      6.2.3 Sequence Diagram                          42
      6.2.4 Activity Diagram                          44
      6.2.5 Deployment Diagram                        46
      6.2.6 Data Flow Diagram                         46
7. MODULES
   7.1 Modules                                        47
   7.2 Description of Modules                         48
8. IMPLEMENTATION
   8.1 Source Code                                    50
9. SCREENSHOTS                                        62
10. TESTING
11. CONCLUSION
   11.1 Conclusion                                    75
   11.2 Future Scope                                  76
12. REFERENCES/BIBLIOGRAPHY
13. APPENDICES
   Appendix-A
   Appendix-B
15. PO ATTAINMENT
CHAPTER 1
1. INTRODUCTION
Employment scam is one of the serious issues addressed in recent times in the domain of Online Recruitment Fraud (ORF). Nowadays, many companies prefer to post their vacancies online so that job-seekers can access them easily and in a timely manner. However, this channel may be exploited by fraudsters, who offer employment to job-seekers in order to take money from them.
Fraudulent job advertisements can also be posted against a reputed company in order to damage its credibility. The detection of such fraudulent job posts has therefore drawn considerable attention, with the goal of obtaining an automated tool that identifies fake jobs and reports them to people so that they can avoid applying for such jobs.
For this purpose, a machine learning approach is applied which employs several classification algorithms to recognize fake posts. In this case, a classification tool isolates fake job posts from a larger set of job advertisements and alerts the user. To address the problem of identifying scams in job postings, supervised learning algorithms in the form of classification techniques are considered initially.
A. Single Classifier based Prediction
Classifiers are trained to predict the unknown test cases. The following classifiers are used while detecting fake job posts.
The Naive Bayes classifier is a supervised classification tool that exploits the concept of Bayes' Theorem of conditional probability. The decision made by this classifier is quite effective in practice even if its probability estimates are inaccurate. This classifier obtains very promising results when the features are independent of each other.
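As a minimal illustration (assuming scikit-learn is available; the snippets and labels below are invented toy data, not the project dataset), a Naive Bayes text classifier can be sketched as follows:

# Minimal sketch: Naive Bayes on toy job-post snippets (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

posts = [
    "earn money fast work from home no experience",   # fake
    "software engineer full time onsite benefits",    # real
    "instant hire send registration fee today",       # fake
    "data analyst bachelor degree required",          # real
]
labels = [1, 0, 1, 0]  # 1 = fraudulent, 0 = legitimate

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(posts)        # word-count features

model = MultinomialNB()
model.fit(X, labels)                       # estimates P(word | class) and P(class)

test = vectorizer.transform(["work from home instant hire"])
print(model.predict(test))                 # likely [1]: flagged as fraudulent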
K-Nearest Neighbour classifiers, often known as lazy learners, identify objects based on the closest proximity of training examples in the feature space. The classifier considers k objects as the nearest neighbours while determining the class. The main challenge of this classification technique lies in choosing an appropriate value of k.
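A small sketch of the role of k, on synthetic two-dimensional points (purely for illustration):

# Minimal sketch: K-Nearest Neighbour classification and the effect of k.
from sklearn.neighbors import KNeighborsClassifier

X = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]]
y = [0, 0, 0, 1, 1, 1]               # two well-separated toy classes

for k in (1, 3, 5):
    knn = KNeighborsClassifier(n_neighbors=k)
    knn.fit(X, y)                    # "lazy" learner: it just stores the data
    print(k, knn.predict([[4, 4]]))  # class decided by the k closest points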
A Decision Tree (DT) is a classifier that exemplifies the use of a tree-like structure for gaining knowledge on classification. Each target class is denoted as a leaf node of the DT, and the non-leaf nodes of the DT serve as decision nodes that indicate certain tests. The outcomes of those tests are identified by the branches of that decision node. Starting from the root, the tree is traversed until a leaf node is reached; this is how a classification result is obtained from a decision tree. Decision tree learning is an approach that has been applied to spam filtering, and it can be useful for forecasting the goal based on some criterion by implementing and training this model.
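The root-to-leaf traversal can be made visible with scikit-learn's export_text; the feature names below are hypothetical toy encodings, not the project's actual features:

# Minimal sketch: a decision tree learned from toy data, with its rules printed.
from sklearn.tree import DecisionTreeClassifier, export_text

# columns: [has_company_logo, has_company_profile]  (toy binary encoding)
X = [[1, 1], [1, 0], [0, 1], [0, 0], [0, 0], [1, 1]]
y = [0, 0, 0, 1, 1, 0]   # 1 = fraudulent

tree = DecisionTreeClassifier(max_depth=2, random_state=0)
tree.fit(X, y)

# Each printed path is one root-to-leaf traversal, i.e. one decision rule.
print(export_text(tree, feature_names=["has_company_logo", "has_company_profile"]))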
1.1 Problem Statement
1. The problem statement involves predicting fake job listings using machine learning algorithms. This is an essential task in an era of increasing online job scams and fraudulent activities. Machine learning algorithms can analyze patterns and identify anomalies in large datasets, making them well suited for this application.
2. First, let’s discuss the data collection process. Data for this problem can be obtained
from various sources such as job listing websites, social media platforms, and
government databases. The data should include features like job title, company
name, location, salary range, description, and other relevant information. A
significant portion of the dataset should consist of genuine job listings to serve as a
baseline for comparison.
3. Next, we need to preprocess the data by cleaning it and transforming it into a format suitable for machine learning models. This may involve removing irrelevant features, handling missing values, and encoding categorical variables (see the sketch after this list).
4. Once the data is preprocessed, we can apply various machine learning algorithms
to build our predictive model. Some popular choices include Naive Bayes
Classifier, Support Vector Machines (SVM), Decision Trees, Random Forests, and
Neural Networks. These algorithms can be trained on the labeled dataset to learn
patterns that distinguish fake job listings from genuine ones based on their features.
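The four steps above can be sketched end to end as follows. This is a minimal sketch, assuming an EMSCAD-style layout with 'description' and 'fraudulent' columns; the toy rows stand in for a real dataset:

# Minimal end-to-end sketch of steps 2-4: data, cleaning, vectorizing, training.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Toy stand-in for an EMSCAD-style dataset ('description', 'fraudulent').
df = pd.DataFrame({
    "description": [
        "Earn money fast, work from home, no experience required!",
        "Seeking a software engineer with 3+ years of Python experience.",
        "Instant hire!! Send a registration fee to secure the position.",
        "Data analyst role; bachelor's degree in statistics required.",
    ] * 10,                       # repeated so the split has enough rows
    "fraudulent": [1, 0, 1, 0] * 10,
})
df["description"] = df["description"].fillna("")   # step 3: handle missing text

X_train, X_test, y_train, y_test = train_test_split(
    df["description"], df["fraudulent"], test_size=0.3, random_state=0)

tf = TfidfVectorizer(stop_words="english")         # step 3: encode text numerically
clf = LogisticRegression(max_iter=1000)            # step 4: one candidate model
clf.fit(tf.fit_transform(X_train), y_train)
print(clf.score(tf.transform(X_test), y_test))     # held-out accuracy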
1.2 Purpose
1. Fake job prediction using machine learning algorithms serves the critical purpose
of detecting and preventing fraudulent activities in the recruitment process. By
leveraging advanced technologies like natural language processing
(NLP) and supervised learning algorithms, organizations can identify false job
postings and alert job seekers to avoid falling victim to scams.
4. Mitigating Economic Impact: The prevalence of fake job postings not only harms individual job seekers but also has broader economic implications. By leveraging machine learning algorithms to predict and prevent employment fraud, organizations can help reduce instances of financial loss, unemployment due to deceptive practices, and economic stress caused by falling victim to scams. This proactive approach can contribute to a more secure and trustworthy job market environment.
5. Empowering Job Seekers: Ultimately, the purpose of fake job prediction using machine learning algorithms is to empower job seekers with reliable tools and insights that enable them to make informed decisions when applying for positions online. By leveraging technology to identify false advertising and untrustworthy employers, these algorithms play a crucial role in protecting individuals from falling prey to fraudulent schemes while seeking employment opportunities.
1.3 Scope
5. Ethical Considerations: While the scope of fake job prediction using machine learning algorithms is extensive, it is essential to address ethical considerations such as data privacy, bias mitigation, transparency in decision-making processes, and ensuring fair treatment of individuals from diverse backgrounds. Implementing ethical guidelines in the development and deployment of these predictive models is crucial to maintaining trust and credibility.
1.4 Objectives
6. Model Training: Once the data is prepared, machine learning models need to be
trained using historical datasets that contain information about past employment
outcomes. By running algorithms like random forest regression on this training
data, the models learn to detect patterns and make predictions based on new input.
7. Outcome Prediction: After training the models, they can be used to predict future
job outcomes for students based on their academic performance and extracurricular
involvement. These predictions can help career centers provide targeted support and
guidance to individuals seeking employment opportunities.
2. LITERATURE SURVEY
TITLE: An Intelligent Model for Online Recruitment Fraud Detection
ABSTRACT: This study attempts to prevent the loss of privacy and money for individuals and organizations by creating a reliable model which can detect fraud in online recruitment environments. This research presents a major contribution in the form of a reliable detection model using an ensemble approach based on the Random Forest classifier to detect Online Recruitment Fraud (ORF). The detection of Online Recruitment Fraud is distinguished from other types of electronic fraud detection by its recency and the scarcity of studies on the concept. The researchers proposed the detection model to achieve the objectives of this study. For feature selection, the support vector machine method is used, and for classification and detection, an ensemble classifier using Random Forest is employed. A freely available dataset called the Employment Scam Aegean Dataset (EMSCAD) is used to apply the model. A pre-processing step was applied before feature selection and classification. The results showed an accuracy of 97.41%. Further, the findings showed that the main features and important factors for detection include the company profile feature, the company logo feature, and the industry feature.
TITLE: An Empirical Study of the Naive Bayes Classifier
ABSTRACT: The naive Bayes classifier greatly simplifies learning by assuming that features are independent given the class. Although independence is generally a poor assumption, in practice naive Bayes often competes well with more sophisticated classifiers. Our broad goal is to understand the data characteristics which affect the performance of naive Bayes. Our approach uses Monte Carlo simulations that allow a systematic study of classification accuracy for several classes of randomly generated problems. We analyze the impact of the distribution entropy on the classification error, showing that low-entropy feature distributions yield good performance of naive Bayes. We also demonstrate that naive Bayes works well for certain nearly-functional feature dependencies, thus reaching its best performance in two opposite cases: completely independent features (as expected) and functionally dependent features (which is surprising). Another surprising result is that the accuracy of naive Bayes is not directly correlated with the degree of feature dependencies measured as the class-conditional mutual information between the features. Instead, a better predictor of naive Bayes accuracy is the amount of information about the class that is lost because of the independence assumption.
ABSTRACT: It is necessary to analyze this large amount of data and extract useful knowledge from it. The process of extracting useful knowledge from huge sets of incomplete, noisy, fuzzy and random data is called data mining. The decision tree classification technique is one of the most popular data mining techniques. In decision trees, a divide-and-conquer technique is used as the basic learning strategy. A decision tree is a structure that includes a root node, branches, and leaf nodes. Each internal node denotes a test on an attribute, each branch denotes the outcome of a test, and each leaf node holds a class label. The topmost node in the tree is the root node. This paper focuses on the various decision tree algorithms (ID3, C4.5, CART), their characteristics, challenges, advantages and disadvantages.
TITLE: Machine learning for email spam filtering: review, approaches and open research problems
ABSTRACT: The upsurge in the volume of unwanted emails called spam has created an intense need for the development of more dependable and robust antispam filters. Recent machine learning methods are being used to successfully detect and filter spam emails. We present a systematic review of some of the popular machine learning based email spam filtering approaches. Our review covers a survey of the important concepts, attempts, efficiency, and the research trend in spam filtering. The preliminary discussion in the study background examines the applications of machine learning techniques to the email spam filtering processes of leading internet service providers (ISPs) like Gmail, Yahoo and Outlook. The general email spam filtering process, and the various efforts by different researchers in combating spam through the use of machine learning techniques, are also discussed. Our review compares the strengths and drawbacks of existing machine learning approaches and the open research problems in spam filtering.
TITLE: ST4_Method_Random_Forest
ABSTRACT: The review covers (1) the analysis of the general characteristics of the studies, such as geographical distribution, frequency of the papers over time, journals, application domains, and remote sensing software packages used in the case studies. The challenges, recommendations, and potential directions for future research are also discussed in detail. Moreover, a summary of the results is provided to aid researchers in customizing their efforts in order to achieve the most accurate results for their thematic applications.
3. SYSTEM ANALYSIS
In the realm of cybersecurity and fraud detection, the use of machine learning
algorithms has gained significant traction in predicting and preventing fake job
postings. By analyzing various features and patterns within job listings, machine
learning models can be trained to distinguish between legitimate and fraudulent job
advertisements.
People often post reviews in online forums regarding the products they purchase. These may guide other purchasers in choosing their products. In this context, spammers can manipulate reviews for profit, and hence it is required to develop techniques that detect such spam reviews. This can be implemented by extracting features from the reviews using Natural Language Processing (NLP). Next, machine learning techniques are applied to these features. Lexicon-based approaches, which use a dictionary or corpus to eliminate spam reviews, may be one alternative to machine learning techniques.
Unwanted bulk mails, which belong to the category of spam emails, often arrive in user mailboxes. This may lead to an unavoidable storage crisis as well as bandwidth consumption. To eradicate this problem, Gmail, Yahoo Mail and Outlook service providers incorporate spam filters using neural networks. While addressing the problem of email spam detection, content-based filtering, case-based filtering, heuristic-based filtering, memory- or instance-based filtering, and adaptive spam filtering approaches are taken into consideration.
Fake news in social media is characterized by malicious user accounts and echo chamber effects. The fundamental study of fake news detection relies on three perspectives: how fake news is written, how fake news spreads, and how a user is related to fake news. Features related to news content and social context are extracted, and machine learning models are employed to recognize fake news.
2. Unbalanced dataset: The job posting dataset is highly unbalanced, with 9868 real jobs and only 725 fraudulent jobs. This imbalance can negatively impact the performance of machine learning models, as they may not be able to effectively learn patterns from the minority class (fraudulent jobs); a common mitigation is shown in the sketch after this list.
3. Extra monitoring and entry-level jobs: Fake job postings often target entry-level positions or require extra monitoring activities, making it challenging for machine learning models to accurately identify these types of fraudulent postings. Younger individuals are also more susceptible to falling victim to these scams.
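As mentioned in point 2 above, one common mitigation for such an imbalance is class weighting, which penalizes mistakes on the rare fraudulent class more heavily. A minimal sketch (the counts mirror the figures above; the model choice is illustrative):

# Minimal sketch: class weighting for the 9868-vs-725 imbalance.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.utils.class_weight import compute_class_weight

y = np.array([0] * 9868 + [1] * 725)     # real vs fraudulent counts
weights = compute_class_weight("balanced", classes=np.array([0, 1]), y=y)
print(dict(zip([0, 1], weights)))        # the minority class is weighted up

# In practice, the weighting is usually requested directly from the model:
clf = LogisticRegression(class_weight="balanced", max_iter=1000)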
The target of this study is to detect whether a job post is fraudulent or not. Identifying and eliminating these fake job advertisements will help job seekers concentrate on legitimate job posts only. In this context, a dataset from Kaggle is employed that provides information regarding a job that may or may not be suspicious.
A. Implementation of Classifiers
In this framework, classifiers are trained using appropriate parameters. For maximizing the performance of these models, the default parameters may not be sufficient. Adjusting these parameters enhances the reliability of the model, which may then be regarded as the optimised one for identifying fake job posts and isolating them from job seekers.
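A minimal sketch of such parameter adjustment via grid search; the synthetic data and the grid values are illustrative, not the tuned values used in this project:

# Minimal sketch: hyperparameter adjustment with cross-validated grid search.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the vectorized job-post features (illustrative only).
X, y = make_classification(n_samples=300, n_features=20, weights=[0.9],
                           random_state=0)

param_grid = {
    "n_estimators": [50, 100],
    "max_depth": [None, 10],
    "min_samples_leaf": [1, 5],
}
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid,
    scoring="f1",   # more informative than accuracy on imbalanced data
    cv=3,
)
search.fit(X, y)
print(search.best_params_)   # the parameter combination with the best CV score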
For evaluating model performance, accuracy is considered first. However, accuracy may not be a sufficient metric, since it does not consider wrongly predicted cases. If a fake post is treated as a true one, it creates a significant problem. Hence, it is necessary to consider false positive and false negative cases that account for misclassification. To measure this, precision and recall must also be considered.
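A small worked example of why accuracy alone can mislead on skewed data (the numbers are toy values, chosen only for illustration):

# Minimal sketch: accuracy vs precision/recall on an imbalanced toy case.
from sklearn.metrics import accuracy_score, precision_score, recall_score

y_true = [0] * 95 + [1] * 5          # 5 fraudulent posts out of 100
y_pred = [0] * 100                   # a model that never flags anything

print(accuracy_score(y_true, y_pred))                     # 0.95, looks good
print(precision_score(y_true, y_pred, zero_division=0))   # 0.0
print(recall_score(y_true, y_pred))                       # 0.0: every scam missed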
3. Scalability: Machine learning models can scale to handle large volumes of job
postings, making them suitable for platforms with a high frequency of new
listings. This scalability ensures that all incoming job postings are screened
effectively without overwhelming human resources.
4. SYSTEM REQUIREMENTS
Fake job listings have become a significant issue in the job market, leading to increased interest
in using machine learning algorithms to predict and prevent such fraudulent activities.
Functional requirements for building a system that can accurately predict fake job listings using
machine learning algorithms can be outlined as follows:
3. Feature Extraction: The next step is to extract relevant features from the preprocessed data that can be used as inputs for machine learning algorithms. These features may include the presence of certain keywords or phrases that are commonly associated with fake job listings (e.g., “work from home,” “no experience required,” “instant hire”), the use of specific email domains or phone numbers, or inconsistencies in the listing’s formatting or content; a keyword-feature sketch follows this list.
4. Model Selection: Various machine learning algorithms can be applied, such as Naive Bayes, Support Vector Machines (SVM), Decision Trees, Random Forests, and Neural Networks. The choice of algorithm will depend on factors such as the size and complexity of the dataset, the desired level of accuracy, and the computational resources available.
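A minimal sketch of hand-crafted keyword features as described in step 3; the phrase list echoes the suspicious phrases mentioned above and is illustrative, not exhaustive:

# Minimal sketch: simple keyword features for one job listing.
SUSPICIOUS_PHRASES = ["work from home", "no experience required", "instant hire"]

def keyword_features(listing_text):
    """Return simple binary and count features for one job listing."""
    text = listing_text.lower()
    features = {f"has:{p}": int(p in text) for p in SUSPICIOUS_PHRASES}
    features["suspicious_count"] = sum(features.values())
    return features

print(keyword_features("Instant hire!! Work from home, no experience required."))
# {'has:work from home': 1, 'has:no experience required': 1,
#  'has:instant hire': 1, 'suspicious_count': 3}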
1. Performance:
2. Reliability:
3. Security:
• Data Privacy: Ensure that sensitive information used for training and
prediction is protected from unauthorized access.
4. Maintainability:
5. Usability:
6. Scalability:
5. SOFTWARE ENVIRONMENT
5.1 PYTHON
1) High-level
2) General-purpose
Python is a general-purpose language. It means that you can use Python in various
domains including:
• Web applications
• Big data applications
• Testing
• Automation
• Data science, machine learning, and AI
• Desktop software
• Mobile apps
This is unlike a domain-specific language such as SQL, which can be used only for querying data from relational databases.
To execute the source code, you need to convert it to the machine language that the computer can understand. The Python interpreter turns the source code into machine code line by line, one at a time, as the Python program executes. Compiled languages like Java and C# use a compiler that compiles the whole source code before the program executes.
First, download the latest version of Python from the download page.
In the setup window, you need to check the Add Python 3.8 to PATH checkbox and click Install Now to begin the installation.
To verify the installation, open the Run window, type cmd, and press Enter. In the Command Prompt, type the python command; if the output shows the Python version, you've successfully installed Python on your computer. If the command is not recognized, you likely didn't check the Add Python 3.8 to PATH checkbox when you installed Python.
It's recommended to install Python on macOS using the official installer from the Python download page.
Before installing Python 3 on your Linux distribution, check whether Python 3 is already installed by running the following command from the terminal:
python3 --version
If you see a response with the version of Python, then your computer already has
Python 3 installed. Otherwise, you can install Python 3 using a package management
system.
For example, you can install Python 3.10 on Ubuntu using apt:
sudo apt install python3.10
To install a newer version, replace 3.10 with that version.
Visual Studio Code is a lightweight source code editor, often called VS Code. VS Code runs on your desktop and is available for Windows, macOS, and Linux. VS Code comes with many features such as IntelliSense, code editing, and extensions that allow you to edit Python source code effectively. The best part is that VS Code is open-source and free. Besides the desktop version, VS Code also has a browser version that you can use directly in your web browser without installing it.
First, navigate to the VS Code official website and download VS Code for your platform (Windows, macOS, or Linux).
Once the installation completes, you can launch the VS Code application.
To make VS Code work with Python, you need to install the Python extension from the Visual Studio Marketplace.
Then create a new app.py file, enter the following code, and save the file:
print('Hello, World!')
The print() is a built-in function that displays a message on the screen. In this example, it'll show the message 'Hello, World!'.
What is a function
When you sum two numbers, that’s a function. And when you multiply two numbers,
that’s also a function.
Each function takes your inputs, applies some rules, and returns a result.
In the above example, the print() is a function. It accepts a string and shows it on the
screen.
Python has many built-in functions, like the print() function, that you can use out of the box in your program.
In addition, Python allows you to define your own functions, which you'll learn how to do later.
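As a small preview, here is what defining your own function looks like:

# A user-defined function: takes an input, applies a rule, returns a result.
def greet(name):
    return 'Hello, ' + name + '!'

print(greet('World'))   # displays: Hello, World!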
To execute the app.py file, first launch the Command Prompt on Windows or Terminal on macOS or Linux.
After that, type the following command to execute the app.py file. On Windows:
python app.py
On macOS or Linux:
python3 app.py
You should see the following output:
Hello, World!
If you use VS Code, you can also launch the Terminal within VS Code by pressing Ctrl+` (backtick). Typically, the backtick key (`) is located under the Esc key on the keyboard.
Python IDLE is the Integrated Development Environment (IDE) that comes with the Python distribution by default. Python IDLE is also known as an interactive interpreter. In short, Python IDLE helps you experiment with Python quickly in a trial-and-error manner.
After launching Python IDLE, you can enter Python code after the >>> prompt and press Enter to execute it. For example, type print('Hello, World!') and press Enter; you'll see the message Hello, World! immediately on the screen.
If you've been working in other programming languages such as Java, C#, or C/C++, you know that these languages use semicolons (;) to separate statements. However, Python uses whitespace and indentation to construct the code structure. Consider the short example below; the meaning of the code isn't important here, so pay attention to the code structure instead. At the end of each line there is no semicolon to terminate the statement, and the code uses indentation to define its structure.
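A short example (any small loop works; this one is purely illustrative):

# Indentation, not semicolons or braces, defines the code structure:
numbers = [1, 2, 3]
for n in numbers:
    if n % 2 == 0:
        print(n, 'is even')    # two levels of indentation deep
    else:
        print(n, 'is odd')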
6. SYSTEM ARCHITECTURE
The UML consists of two major components: a meta-model and a notation. In the future, some form of method or process may also be added to, or associated with, UML.
The Unified Modeling Language is a standard language for specifying, visualizing, constructing and documenting the artifacts of software systems, as well as for business modeling and other non-software systems. The UML represents a collection of best engineering practices that have proven successful in the modeling of large and complex systems. The UML is a very important part of developing object-oriented software and the software development process. The UML uses mostly graphical notations to express the design of software projects.
GOALS:
1. Provide users a ready-to-use, expressive visual modeling language so that they can develop and exchange meaningful models.
6.2.1 Use Case Diagram
A use case diagram presents a graphical overview of the functionality provided by a system in terms of actors, their goals (represented as use cases), and any dependencies between those use cases. The main purpose of a use case diagram is to show what system functions are performed for which actor. The roles of the actors in the system can be depicted.
6.2.5 Deployment Diagram
The deployment diagram maps the software architecture created in design to the physical system architecture that executes it. In distributed systems, it models the distribution of the software across the physical nodes. The software systems are manifested using various artifacts, which are then mapped to the execution environment that is going to execute the software, such as nodes. Many nodes are involved in a deployment diagram; hence, the relations between them are represented using communication paths.
6.2.6 Data Flow Diagram
• The DFD is also called a bubble chart. It is a simple graphical formalism that can be used to represent a system in terms of the input data to the system, the various processing carried out on this data, and the output data generated by the system.
• The data flow diagram (DFD) is one of the most important modelling tools. It is used to model the system components. These components are the system processes, the data used by the processes, the external entities that interact with the system, and the information flows in the system.
• The DFD shows how information moves through the system and how it is modified by a series of transformations. It is a graphical technique that depicts information flow and the transformations that are applied as data moves from input to output.
• A DFD may be used to represent a system at any level of abstraction. A DFD may be partitioned into levels that represent increasing information flow and functional detail.
7. MODULES
Fake job listings have become a significant problem in the job market, leading to an
increased interest in using machine learning algorithms to predict and prevent such
fraudulent activities. In this context, several modules can be employed to build an
effective fake job prediction system.
7.1 MODULES
User:-
❖ Register
❖ Login
❖ Predict
The admin module provides a central hub for managing the system and its users. Here's
a breakdown of its functionalities:
User Management:
• User List: View a list of all registered users, including usernames, emails, and
potentially registration dates.
• User Details: Access detailed information about specific users, allowing for
further investigation if needed.
• User Management Actions: This might include functionalities like user
account activation/deactivation, or even deletion in case of suspicious activity.
System Management:
Additional Considerations:
• Security: The admin module should have robust security measures in place to
prevent unauthorized access. This might involve features like:
o Secure login with strong password requirements.
o User roles with different access levels.
o Activity logs to track user actions.
• Reporting: The admin might have access to generate reports on various aspects
of the system, such as:
o Number of flagged job postings.
o User activity statistics.
o Model performance over time.
def user_predict(request):
    # Collect the job-posting fields from the form and store them for prediction.
    user_id = request.session['user_id']
    user = UserModel.objects.get(user_id=user_id)
    if request.method == 'POST':
        title = request.POST.get("jobtitle")
        location = request.POST.get("location")
        department = request.POST.get("department")
        salary_range = request.POST.get("salary_range")
        company_profile = request.POST.get("Company_Profile")
        description = request.POST.get("decription")
        requirements = request.POST.get("requirements")
        benefits = request.POST.get("benefits")
        req_experience = request.POST.get("required_experience")
        req_education = request.POST.get("required_education")
        industry = request.POST.get("industry")
        function = request.POST.get("function")
        emp_type = request.POST.get("employment_type")
        job = JobModel.objects.create(
            job_title=title, job_location=location, job_dept=department,
            job_com_profile=company_profile, job_description=description,
            job_requirement=requirements, job_benefits=benefits,
            job_req_experience=req_experience, job_req_education=req_education,
            job_industry=industry, job_function=function,
            job_salary_range=salary_range, job_emp_type=emp_type,
            user_url=user)
        if job:
            messages.success(request, 'Successfully entered job data')
            return redirect('user_result', id=job.job_id)
        else:
            messages.error(request, 'Invalid data')
            return redirect('user_predict')
    return render(request, 'user/user-predict.html')
def user_profile(req):
    # Update the logged-in user's profile details (and photo, if one is uploaded).
    user_id = req.session['user_id']
    user = UserModel.objects.get(user_id=user_id)
    if req.method == 'POST':
        username = req.POST.get("user_username")
        email = req.POST.get("user_email")
        contact = req.POST.get("user_contact")
        password = req.POST.get("user_password")
        user.user_username = username
        user.user_contact = contact
        user.user_password = password
        if len(req.FILES) != 0:
            user.user_image = req.FILES["image"]
        user.save()
        messages.success(req, 'Updated successfully')
        return redirect('user_profile')
    return render(req, 'user/user-profile.html', {'user': user})
def user_result(request, id):
    # Load the stored job posting, vectorize its text, and predict with the
    # saved Random Forest model (0 = real, 1 = fraudulent).
    user_id = request.session['user_id']
    user = UserModel.objects.get(user_id=user_id)
    predict = JobModel.objects.get(pk=id)
    X_test = [predict.job_title + predict.job_location + predict.job_dept +
              predict.job_com_profile + predict.job_description +
              predict.job_requirement + predict.job_benefits +
              predict.job_req_education + predict.job_req_experience +
              predict.job_industry + predict.job_function]
    import joblib
    # Load the fitted TF-IDF vectorizer and transform the concatenated text.
    with open('job_vc_rf.pkl', 'rb') as f:
        vc = joblib.load(f)
    X_test1 = vc.transform(X_test)
    # Load the trained Random Forest model and predict.
    with open('job_rf.pkl', 'rb') as f:
        rfmodel = joblib.load(f)
    y_pred = rfmodel.predict(X_test1)
    predict.job_status = y_pred[0]
    predict.save()
    messages.success(request, 'Predicted successfully')
    return render(request, 'user/user-result.html', {'job': predict})
import pandas as pd
import random
import missingno
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.base import TransformerMixin
from sklearn.metrics import (accuracy_score, classification_report,
                             confusion_matrix, f1_score, recall_score,
                             precision_score)
from wordcloud import WordCloud
import spacy
from spacy.lang.en.stop_words import STOP_WORDS
from spacy.lang.en import English
from sklearn.svm import SVC
def admin_algocomp(request):
    # Gather the stored metrics of all four algorithms and render a comparison.
    try:
        dt = Dataset.objects.filter(dt_algo='DecisionTreeClassifier').first()
        dt_ac = dt.dt_Accuracy * 100
        dt_pr = dt.dt_Precision * 100
        dt_re = dt.dt_Recall * 100
        dt_fs = dt.dt_F1_Score * 100
        lr = Dataset.objects.filter(lr_algo='Logistic Regression').first()
        lr_ac = lr.lr_Accuracy * 100
        lr_pr = lr.lr_Precision * 100
        lr_re = lr.lr_Recall * 100
        lr_fs = lr.lr_F1_Score * 100
        nb = Dataset.objects.filter(nb_algo='Naive-Bayes').first()
        nb_ac = nb.nb_Accuracy * 100
        nb_pr = nb.nb_Precision * 100
        nb_re = nb.nb_Recall * 100
        nb_fs = nb.nb_F1_Score * 100
        rf = Dataset.objects.filter(rf_algo='RandomForestClassifier').first()
        rf_ac = rf.rf_Accuracy * 100
        rf_pr = rf.rf_Precision * 100
        rf_re = rf.rf_Recall * 100
        rf_fs = rf.rf_F1_Score * 100
        context = {
            'lr_ac': lr_ac, 'lr_pr': lr_pr, 'lr_re': lr_re, 'lr_fs': lr_fs,
            'nb_ac': nb_ac, 'nb_pr': nb_pr, 'nb_re': nb_re, 'nb_fs': nb_fs,
            'dt_ac': dt_ac, 'dt_pr': dt_pr, 'dt_re': dt_re, 'dt_fs': dt_fs,
            'rf_ac': rf_ac, 'rf_pr': rf_pr, 'rf_re': rf_re, 'rf_fs': rf_fs,
        }
        return render(request, 'admin/admin-algocomp.html', context)
    except Exception:
        messages.warning(request, 'Run all 4 algorithms to compare values')
        return redirect('admin_view')
def admin_allusers(request):
    user = UserModel.objects.filter(user_status='accepted').order_by('user_id')
    return render(request, 'admin/admin-allusers.html', {'user': user})

def admin_dectree(request):
    data = Dataset.objects.all().order_by('-data_id').first()
    return render(request, 'admin/admin-dectree.html', {'data': data})

def admin_lr(request):
    data = Dataset.objects.all().order_by('-data_id').first()
    return render(request, 'admin/admin-lr.html', {'data': data})

def admin_nb(request):
    data = Dataset.objects.all().order_by('-data_id').first()
    return render(request, 'admin/admin-nb.html', {'data': data})

def admin_pendingusers(request):
    items = UserModel.objects.filter(user_status='pending').order_by('-user_id')
    return render(request, 'admin/admin-pendingusers.html', {'items': items})

def admin_randfor(request):
    data = Dataset.objects.all().order_by('-data_id').first()
    return render(request, 'admin/admin-randfor.html', {'data': data})

def admin_upload(request):
    # Accept an uploaded CSV dataset and store it as a new Dataset record.
    if request.method == 'POST':
        dataset = request.FILES['dataset']
        Dataset.objects.create(data_set=dataset)
        return redirect('admin_view')
    return render(request, 'admin/admin-upload.html')

def admin_view(request):
    # Render the most recently uploaded dataset as an HTML table.
    data = Dataset.objects.all().order_by('-data_id').first()
    file = str(data.data_set)
    df = pd.read_csv(f'./media/{file}')
    table = df.to_html(table_id='data_table')
    return render(request, 'admin/admin-view.html', {'i': data, 't': table})

def accept_user(request, id):
    accept = get_object_or_404(UserModel, user_id=id)
    accept.user_status = "accepted"
    accept.save(update_fields=["user_status"])
    return redirect('admin_pendingusers')

def decline_user(request, id):
    decline = get_object_or_404(UserModel, user_id=id)
    decline.user_status = "declined"
    decline.save(update_fields=["user_status"])
    return redirect('admin_pendingusers')
def RandomForest(request):
    # Train a Random Forest on the job titles, persist the TF-IDF vectorizer
    # and the model, store the metrics, and render the results page.
    data = Dataset.objects.all().order_by('-data_id').first()
    filename = str(data.data_set)
    df = pd.read_csv(f'./media/{filename}')
    x = df['title']
    y = df['fraudulent']
    from sklearn.model_selection import train_test_split
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
    # Converting text into numbers
    from sklearn.feature_extraction.text import TfidfVectorizer
    tf = TfidfVectorizer()
    x_train1 = tf.fit_transform(x_train)
    x_test1 = tf.transform(x_test)
    import joblib
    with open('job_vc_rf.pkl', 'wb') as f:
        joblib.dump(tf, f)
    # Machine learning
    from sklearn.ensemble import RandomForestClassifier
    model_name = RandomForestClassifier()
    model_name.fit(x_train1, y_train)
    prediction = model_name.predict(x_test1)
    with open('job_rf.pkl', 'wb') as f:
        joblib.dump(model_name, f)
    # sklearn metric functions take the true labels first, then the predictions.
    Accuracy = accuracy_score(y_test, prediction)
    Precision = precision_score(y_test, prediction, average='macro')
    Recall = recall_score(y_test, prediction, average='macro')
    F1_Score = f1_score(y_test, prediction, average='macro')
    data.rf_Accuracy = Accuracy
    data.rf_Precision = Precision
    data.rf_Recall = Recall
    data.rf_F1_Score = F1_Score
    data.save()
    data = Dataset.objects.filter(rf_algo='RandomForestClassifier').order_by('-data_id').first()
    return render(request, 'admin/admin-randfor.html', {'data': data})
def LogisticRegression(request):
    # Train a Logistic Regression model on the job titles, store the metrics,
    # and render the results page.
    data = Dataset.objects.all().order_by('-data_id').first()
    filename = str(data.data_set)
    df = pd.read_csv(f'./media/{filename}')
    x = df['title']
    y = df['fraudulent']
    from sklearn.model_selection import train_test_split
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
    # Converting text into numbers
    from sklearn.feature_extraction.text import TfidfVectorizer
    tf = TfidfVectorizer()
    x_train1 = tf.fit_transform(x_train)
    x_test1 = tf.transform(x_test)
    # Machine learning (the local import shadows this view's name inside it)
    from sklearn.linear_model import LogisticRegression
    model_name = LogisticRegression()
    model_name.fit(x_train1, y_train)
    prediction = model_name.predict(x_test1)
    # sklearn metric functions take the true labels first, then the predictions.
    Accuracy = accuracy_score(y_test, prediction)
    Precision = precision_score(y_test, prediction, average='macro')
    Recall = recall_score(y_test, prediction, average='macro')
    F1_Score = f1_score(y_test, prediction, average='macro')
    data.lr_Accuracy = Accuracy
    data.lr_Precision = Precision
    data.lr_Recall = Recall
    data.lr_F1_Score = F1_Score
    data.save()
    return render(request, 'admin/admin-lr.html', {'data': data})
def navie_bayes(request):
    # Train a Multinomial Naive Bayes model on the job titles, store the
    # metrics, and render the results page.
    data = Dataset.objects.all().order_by('-data_id').first()
    filename = str(data.data_set)
    df = pd.read_csv(f'./media/{filename}')
    x = df['title']
    y = df['fraudulent']
    from sklearn.model_selection import train_test_split
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
    # Converting text into numbers
    from sklearn.feature_extraction.text import TfidfVectorizer
    tf = TfidfVectorizer()
    x_train1 = tf.fit_transform(x_train)
    x_test1 = tf.transform(x_test)
    # Machine learning
    from sklearn.naive_bayes import MultinomialNB
    model_name = MultinomialNB()
    model_name.fit(x_train1, y_train)
    prediction = model_name.predict(x_test1)
    # sklearn metric functions take the true labels first, then the predictions.
    Accuracy = accuracy_score(y_test, prediction)
    Precision = precision_score(y_test, prediction, average='macro')
    Recall = recall_score(y_test, prediction, average='macro')
    F1_Score = f1_score(y_test, prediction, average='macro')
    data.nb_Accuracy = Accuracy
    data.nb_Precision = Precision
    data.nb_Recall = Recall
    data.nb_F1_Score = F1_Score
    data.save()
    data = Dataset.objects.filter(nb_algo='Naive-Bayes').order_by('-data_id').first()
    return render(request, 'admin/admin-nb.html', {'data': data})
def DecisionTree(request):
    # Train a Decision Tree on the job titles, store the metrics, and render
    # the results page.
    data = Dataset.objects.all().order_by('-data_id').first()
    filename = str(data.data_set)
    df = pd.read_csv(f'./media/{filename}')
    x = df['title']
    y = df['fraudulent']
    from sklearn.model_selection import train_test_split
    x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.3, random_state=0)
    # Converting text into numbers
    from sklearn.feature_extraction.text import TfidfVectorizer
    tf = TfidfVectorizer()
    x_train1 = tf.fit_transform(x_train)
    x_test1 = tf.transform(x_test)
    # Machine learning
    from sklearn.tree import DecisionTreeClassifier
    model_name = DecisionTreeClassifier()
    model_name.fit(x_train1, y_train)
    prediction = model_name.predict(x_test1)
    # sklearn metric functions take the true labels first, then the predictions.
    Accuracy = accuracy_score(y_test, prediction)
    Precision = precision_score(y_test, prediction, average='macro')
    Recall = recall_score(y_test, prediction, average='macro')
    F1_Score = f1_score(y_test, prediction, average='macro')
    data.dt_Accuracy = Accuracy
    data.dt_Precision = Precision
    data.dt_Recall = Recall
    data.dt_F1_Score = F1_Score
    data.save()
    data = Dataset.objects.filter(dt_algo='DecisionTreeClassifier').order_by('-data_id').first()
    return render(request, 'admin/admin-dectree.html', {'data': data})
def button(request, id):
    # Re-run the saved vectorizer and model on a stored job posting, then
    # redirect to the result page (which recomputes and stores the status).
    predict = JobModel.objects.get(pk=id)
    X_test = [predict.job_title + predict.job_location + predict.job_dept +
              predict.job_com_profile + predict.job_description +
              predict.job_requirement + predict.job_benefits +
              predict.job_req_education + predict.job_req_experience +
              predict.job_industry + predict.job_function]
    import joblib
    with open('job_vc_rf.pkl', 'rb') as f:
        vc = joblib.load(f)
    X_test1 = vc.transform(X_test)
    with open('job_rf.pkl', 'rb') as f:
        rfmodel = joblib.load(f)
    y_pred = rfmodel.predict(X_test1)
    return redirect('user_result', id=id)
9. SCREENSHOTS

10. TESTING
b) Gorilla Testing
Gorilla testing is a technique in which the tester and/or developer tests a module of the application thoroughly in all aspects. Gorilla testing is done to check how robust your application is. For example, suppose the tester is testing a pet insurance company's website, which provides the services of buying an insurance policy, a tag for the pet, and lifetime membership. The tester can focus on any one module, say the insurance policy module, and test it thoroughly with positive and negative test scenarios.
Integration Testing
The purpose of this type of testing is to find defects in the interfaces, communication, and data flow among modules. A top-down or bottom-up approach is used while integrating modules into the whole system. This type of testing is done on integrating modules of a system or between systems. For example, a user is buying a flight ticket from an airline website. Users can see flight details and payment information while buying a ticket, but flight details and payment processing are two different systems. Integration testing should be done while integrating the airline website and the payment processing system.
Black Box Testing
Black-box testing techniques focus only on the inputs and outputs of the test objects, without knowledge of their internal structure.
c) Smoke Testing
Smoke testing is performed to verify that the basic and critical functionality of the system under test is working fine at a very high level. Whenever a new build is provided by the development team, the software testing team validates the build and ensures that no major issue exists. The testing team ensures that the build is stable, and a detailed level of testing is carried out further. For example, a tester is testing a pet insurance website. Buying an insurance policy, adding another pet, and providing quotes are all basic and critical functionality of the application. Smoke testing for this website verifies that all these functionalities are working fine before doing any in-depth testing.
d) Sanity Testing
Sanity testing is performed on a system to verify that newly added functionality or bug fixes are working fine. Sanity testing is done on a stable build; it is a subset of the regression test. For example, a tester is testing a pet insurance website. There is a change in the discount for buying a policy for a second pet. Then sanity testing is performed only on the buying-insurance-policy module.
f) Monkey Testing
Monkey testing is carried out by a tester who assumes that if a monkey were to use the application, random input and values would be entered without any knowledge or understanding of the application. The objective of monkey testing is to check whether an application or system crashes when random input values/data are provided. Monkey testing is performed randomly, no test cases are scripted, and it is not necessary to be aware of the full functionality of the system.
a) Alpha Testing
Alpha testing is a type of acceptance testing performed by a team in the organization to find as many defects as possible before releasing the software to customers. For example, the pet insurance website is under UAT. The UAT team will run real-time scenarios such as buying an insurance policy, buying an annual membership, changing the address, and transferring ownership of the pet, in the same way the user uses the real website. The team can use test credit card information to process payment-related scenarios.
b) Beta Testing
Beta testing is a type of software testing which is carried out by the clients/customers. It is performed in the real environment before releasing the product to the market for the actual end-users. Beta testing is carried out to ensure that there are no major failures in the software or product and that it satisfies the business requirements from an end-user perspective. Beta testing is successful when the customer accepts the software. Usually, this testing is done by the end-users, and it is the final testing done before releasing the application for commercial purposes. Usually, the beta version of the software or product released is limited to a certain number of users in a specific area. The end-users use the software and share their feedback with the company, which then takes the necessary action before releasing the software worldwide.
a) Penetration Testing
Penetration testing or pen testing is a type of security testing performed as an authorized cyberattack on the system to find out the weak points of the system in terms of security. Pen testing is performed by outside contractors, generally known as ethical hackers; that is why it is also known as ethical hacking. Contractors perform different operations like SQL injection, URL manipulation, privilege elevation, and session expiry, and provide reports to the organization.
Note: Do not perform pen testing on your own laptop/computer. Always take written permission before doing pen tests.
a) Load testing
Load testing is testing of an application’s stability and response time by applying load,
which is equal to or less than the designed number of users for an application. For
example, your application handles 100 users at a time with a response time of 3 seconds,
then load testing can be done by applying a load of the maximum of 100 or less than
100 users. The goal is to verify that the application is responding within 3 seconds for
all the users.
b) Stress Testing
Stress testing is testing an application's stability and response time by applying a load that is more than the designed number of users for the application. For example, if your application handles 1000 users at a time with a response time of 4 seconds, then stress testing can be done by applying a load of more than 1000 users. Test the application with 1100, 1200, and 1300 users and note the response time. The goal is to verify the stability of the application under stress.
c) Scalability Testing
Scalability testing is testing an application's stability and response time by applying a load that is more than the designed number of users for the application. For example, if your application handles 1000 users at a time with a response time of 2 seconds, then scalability testing can be done by applying a load of more than 1000 users and gradually increasing the number of users to find out exactly where the application starts to fail.
d) Usability Testing
Suppose a tester is performing usability testing of a stock-market mobile app. Testers can check scenarios such as whether the mobile app is easy to operate with one hand, whether the scroll bar is vertical, whether the background colour of the app is black, and whether the prices of stocks are displayed in red or green. The main idea of usability testing of this kind of app is that as soon as the user opens the app, the user should get a glance at the market.
a) Exploratory Testing
Exploratory testing is informal testing performed by the testing team. The objective of this testing is to explore the application and look for defects that exist in it. Testers use their knowledge of the business domain to test the application, and test charters are used to guide the exploratory testing.
c) Accessibility Testing
The aim of accessibility testing is to determine whether the software or application is accessible to disabled people. Here, disability covers deafness, colour blindness, cognitive disabilities, blindness, old age, and other disabled groups. Various checks are performed, such as font size for the visually impaired, and colour and contrast for colour blindness.
11. CONCLUSION
11.1 CONCLUSION
Employment scam detection will guide job-seekers to receive only legitimate offers from companies. To tackle employment scam detection, several machine learning algorithms are proposed as countermeasures in this work. A supervised mechanism is used to exemplify the use of several classifiers for employment scam detection. Experimental results indicate that the Random Forest classifier outperforms its peer classification tools. The proposed approach achieved an accuracy of 98.27%, which is much higher than that of the existing methods.
Fake job prediction using machine learning algorithms is a crucial application that can
significantly impact the detection and prevention of employment fraud. Through the
utilization of advanced techniques such as Natural Language Processing (NLP) and
classification algorithms like Naive Bayes and Stochastic Gradient Descent (SGD)
classifiers, it is possible to develop effective models that can distinguish between
legitimate job postings and fraudulent ones. By analyzing various features of job
postings such as text content, title, location, profile information, and character count,
these machine learning models can accurately predict the likelihood of a job
advertisement being fake.
The combination of NLP and classification algorithms in the final model provides a
robust framework for identifying false job postings and alerting applicants to potential
scams, thereby enhancing overall job market security and trustworthiness.
11.2 FUTURE SCOPE
Current State of Fake Job Prediction: Fake job postings have been a persistent issue in the online recruitment space, leading to various fraudulent activities and scams targeting job seekers. The use of machine learning algorithms, such as Naive Bayes and Stochastic Gradient Descent (SGD) classifiers, has shown promise in detecting and predicting fake job postings by analyzing textual data from job listings.
Incorporating Multimodal Data Analysis: The future scope for fake job prediction
using machine learning algorithms could involve incorporating multimodal data
analysis, which combines textual information with other modalities like images or
videos associated with job listings. By leveraging a combination of text and visual data,
machine learning models can gain a more comprehensive understanding of the context
and authenticity of job postings.
12. REFERENCES/BIBLIOGRAPHY
1. Bandar Alghamdi, Fahad Alharby, “An Intelligent Model for Online Recruitment Fraud Detection”, Journal of Information Security, 2019, pp. 155-176.
2. Tao Jiang, Jian Ping Li, Amin Ul Haq, Abdus Saboor, and Amjad Ali, “A Novel Stacking Approach for Accurate Detection of Fake News”, IEEE Access, Vol. 9, 2021, pp. 22626-22639.
3. Karri Sai Suresh Reddy, Karri Lakshmana Reddy, “Fake Job Recruitment Detection”, JETIR, Vol. 8, August 2021, pp. d443-d448.
4. Tulus Suryanto, Robbi Rahim, Ansari Saleh Ahmar, “Employee Recruitment Fraud Prevention with the Implementation of Decision Support System”, Journal of Physics: Conference Series, 2018, pp. 1-11.
9254.
6. Lal, Sangeeta, Rishabh Jiaswal, Neetu Sardana, Ayushi Verma, Amanpreet Kaur, and Rahul Mourya, “ORFDetector: Ensemble Learning Based Online Recruitment Fraud Detection”, 2019 Twelfth International Conference on Contemporary Computing (IC3), IEEE, 2019, pp. 1-5.
IEEE Transactions on Systems, Man, and Cybernetics: Systems, Vol. 49, 2019, pp. 1-
20
10. Sokratis Vidros, Constantinos Kolias, Georgios Kambourakis and Leman Akoglu, “Automatic Detection of Online Recruitment Frauds: Characteristics, Methods, and a Public Dataset”, Future Internet, 2017, pp. 2-19.
11. Shu, Kai, Amy Sliva, Suhang Wang, Jiliang Tang, and Huan Liu, “Fake News Detection on Social Media: A Data Mining Perspective”, ACM SIGKDD Explorations Newsletter, Vol. 19, No. 1, 2017, pp. 22-36.
12. Devsmit Ranparia, Shaily Kumari, Ashish Sahani, “Fake Job Prediction using Sequential Network”, IEEE 15th International Conference on Industrial and Information Systems (ICIIS), 2020, pp. 339-343.
13. Syed Mahbub, Eric Pardede, “Using Contextual Features for Online Recruitment Fraud Detection”, 27th International Conference on Information Systems Development, 2018.
14. Najma Imtiaz Ali, Suhaila Samsuri, Muhamad Sadry, Imtiaz Ali Brohi, Asadullah Shah, “Online Shopping Satisfaction in Malaysia: A Framework for Security, Trust and Cybercrime”, 6th International Conference on Information and Communication Technology for The Muslim World, 2016, pp. 194-198.
16. Sultana Umme Habiba, Md. Khairul Islam, Farzana Tasnim, “A Comparative Study on Fake Job Post Prediction Using Different Data Mining Techniques”, 2nd International Conference on Robotics, Electrical and Signal Processing Techniques (ICREST), 2021.
17. Sarvesh Tanwar, Thomas Paul, Kanwarpreet Singh, Mannat Joshi, Ajay Rana, “Classification and Impact of Cyber Threats in India: A Review”, 8th International Conference on Reliability, Infocom Technologies and Optimization (Trends and Future Directions) (ICRITO), 2020, pp. 129-135.
18. Veena, K., and P. Visu, “Detection of Cybercrime: An Approach Using the Lie Detection Technique and Methods to Solve It”, 2016 International Conference on Information Communication and Embedded Systems (ICICES), IEEE, 2016, pp. 1-6.
LIST OF FIGURES
S No Figure No Figure Name Page No
LIST OF ABBREVIATIONS
UI User Interface
13. APPENDICES
APPENDIX-B
PO2: Problem analysis: Identify, formulate, review research literature, and analyze
complex engineering problems reaching substantiated conclusions using first principles
of mathematics, natural sciences, and engineering sciences.
PO5: Modern tool usage: Create, select, and apply appropriate techniques, resources,
and modern engineering and IT tools including prediction and modeling to complex
engineering activities with an understanding of the limitations.
PO6: The engineer and society: Apply reasoning informed by the contextual knowledge to assess societal, health, safety, legal and cultural issues and the consequent responsibilities relevant to professional engineering practice.
PO12: Life-long learning: Recognize the need for, and have the preparation and ability to engage in, independent and life-long learning in the broadest context of technological change.
PSO1: Design, implement, test and evaluate a computer system, or algorithm to meet
desired needs and to solve a computational problem.
PSO2: Ability to analyze, design and implement hardware and software components.
15. PO ATTAINMENT
AVANTHI'S SCIENTIFIC TECHNOLOGICAL AND RESEARCH ACADEMY