
MAJOR PROJECT REPORT

ON

“ARTIFICIAL INTELLIGENCE REDEFINING PEOPLE’S LIFE”

SUBMITTED IN PARTIAL FULFILLMENT FOR THE AWARD OF THE


DEGREE OF BACHELOR OF COMMERCE
2022-2025

UNDER THE GUIDANCE OF


Ms. Aanchal Dabas
SUBMITTED BY:
Dhruv Aggarwal
Roll No. 03814988822

Maharaja Surajmal Institute

C-4, Janakpuri, New Delhi - 110058

Affiliated to


Guru Gobind Singh Indraprastha University, Delhi
TABLE OF CONTENTS
Student Undertaking

Certificate from Faculty Guide

Acknowledgement

Objective

Introduction

Literature Review

Research Methodology

Analysis and Interpretation

Conclusion

Limitation of the Study

Bibliography

Annexure

STUDENT UNDERTAKING

This is to certify that I have completed the project titled “ARTIFICIAL


INTELLIGENCE REDEFINING PEOPLE’S LIFE” in Maharaja Surajmal
Institute under the guidance of Ms. Aanchal Dabas in partial fulfilment of the
requirement for the award of the degree of Bachelor of Commerce at Maharaja
Surajmal Institute, New Delhi. This is an original piece of work and has not been
submitted elsewhere.

Dhruv Aggarwal

ACKNOWLEDGEMENT

It is a matter of great satisfaction and pleasure to present this project report on


“ARTIFICIAL INTELLIGENCE REDEFINING PEOPLE’S LIFE”. I take this
opportunity to thank all my faculty for their encouragement and able guidance at every stage
of the project.
I am grateful to Ms. Aanchal Dabas, my project guide, for sparing her precious time, guiding
me, and giving valuable suggestions for compiling the project report. I express
my heartfelt gratitude to all those people who have helped me directly or indirectly in
completing this project.

Dhruv Aggarwal

OBJECTIVE

1. To analyse the role of artificial intelligence in redefining people’s lives.

2. To examine how artificial intelligence is making people’s lives easier.
3. To assess how it is affecting people’s lives and what its impacts are.

CHAPTER -1

INTRODUCTION

Initially, technologists delivered terrifying dystopian warnings about the ability of
technology, especially artificial intelligence (AI), to eliminate jobs.

Then came a sort of retraction, with a slew of reassuring statements meant to reduce the alarm.

Now, the conversation appears to be settling on a more balanced narrative: when the robots arrive, they will bring neither doomsday nor utopia, but both benefits and stress.

To begin with, most observers now believe that the recent past may easily foreshadow the
near future by reminding us that automation created more employment than it killed in the
last 30 years, and so presents enormous promise.

In this sense, automation and artificial intelligence (AI) are rapidly emerging as potential
sources of the productivity improvements that the US economy so desperately needs. As a
result, automation has the potential to boost the economy in the coming years and improve
prosperity at an uncertain time.

At the same time, many analysts are now predicting substantial upheaval. Even if the
absolute number of jobs remains stable, the widespread opinion now is that tough
times are ahead in the labour market, with very substantial dislocations for many
employees. All of this highlights how muddled, contentious, and unpredictable our
understanding of how automation and comparable technologies will be implemented in the
coming years remains.

This is where this report comes in. The pages that follow develop both backward- and
forward-looking analyses of how the advent of digital technologies is already reorienting
labour markets and may continue to do so—along with a policy agenda for national and
state-local response—to fill in some of the gaps and provide an overview of automation
trends both in the recent past and in the near future.

The investigation begins with the definition and conceptualization of automation. The
research then uses well-established occupation-based statistical methodologies to quantify
and map the varied observed and expected effects of automation on employment growth and
change from 1980 to 2016 and 2016 to 2030. The emphasis is on areas of possible
occupational stress rather than net employment totals throughout. Furthermore, special
emphasis is paid to delving underneath the national top-line data to investigate industrial,
regional, and demographic differences. Finally, the paper offers policymakers at the national
and state-local levels a framework and suggestions.

In light of these conversations, the current report can be seen as a source of both comfort
and concern, for the near future may well be reminiscent of the recent past.

It tells a lot, for example, that the widespread adoption of automation in the form of
pervasive digitalization in the years since 1980 has resulted in a minor rise in job availability
rather than mass unemployment. If repeated in the future, such gains would be an essential
response to excessive fear.

However, a future that resembles the recent past is not always a source of optimism. The
first phase of digital automation, as this retrospective research reveals, was characterised by
catastrophic upheaval, particularly the "hollowing out" of the labour market, with
employment and income increases concentrated at the high and low ends of the skill
distribution. It is consequently unsettling that our forward-looking study predicts more of
the same in the next decade or so. Accordingly, it advocates for greater urgency and stronger
protections, such as increased lifelong learning opportunities, enhanced labour market
transitions, and more supportive initiatives to address the specific and regional problems of
the vulnerable.

In that sense, the following assessment may not inspire fear, but it does demand attention
and action, as well as the assurance that automation can be beneficial even when it is
disruptive.

Artificial intelligence has significantly improved people's lives in a variety of ways, and
people are no longer the same as they were before AI was introduced. As previously stated,
AI implementation has resulted in time savings, which has resulted in increased output from
businesses and day-to-day human activities. Moreover, AI has led to reduced human effort,
computerised methods, and automated transportation systems and participation in hazardous
jobs.

Evidently, AI has had a significant impact on people's lives and has done wonders to aid in
the automation of almost all of their activities. Many of these activities previously required a significant
amount of time and manual labour to complete. The automation of these processes by AI
will significantly contribute to the actual activities of people and industries, allowing them
to progress.

We can identify two kinds of AI based on their functions and capacities. The first type of
artificial intelligence is weak AI, also known as narrow AI, which is meant to do a specific
task, such as facial recognition, an internet search via Siri, or driving an autonomous car. Many present
systems that claim to employ "AI" are most likely weak AI focused on a tightly defined
specialised function. Although this weak AI appears to be beneficial to human existence,
some believe it might be harmful since it could disrupt the electric grid or threaten nuclear
power plants if it malfunctions.

The long-term goal of many researchers is to create strong AI or artificial
general intelligence (AGI), the hypothetical intelligence of a machine that can
understand or learn any intellectual task that a human being can, thereby helping humans
to solve the problems they face. While narrow AI may surpass humans in tasks such as chess
or problem solving, its impact is currently limited. AGI, on the other hand, has the potential
to outperform humans in practically every cognitive endeavour.

Strong AI is a distinct vision of AI in that it may be taught to operate like a human mind, to
be intelligent in whatever task it is given, and even to have perception, beliefs, and other
cognitive abilities that are generally attributed solely to humans.

What are the advantages and disadvantages of artificial intelligence?

Artificial neural networks and deep learning artificial intelligence technologies are quickly
evolving, primarily because AI processes large amounts of data much faster and makes
predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher,
AI applications that use machine learning can take that data and quickly turn it into
actionable information. As of this writing, the primary disadvantage of using AI is that it is
expensive to process the large amounts of data that AI programming requires.

Advantages

• Good at detail-oriented jobs;

• Reduced time for data-heavy tasks;

• Delivers consistent results; and

• AI-powered virtual agents are always available.


Disadvantages

• Expensive;

• Requires deep technical expertise;

• Limited supply of qualified workers to build AI tools;

• Only knows what it's been shown; and

• Lack of ability to generalize from one task to another.

Strong AI vs. weak AI

AI can be categorized as either weak or strong.

• Weak AI, also known as narrow AI, is an AI system that is designed and trained to complete a specific task. Industrial robots and virtual personal assistants, such as Apple's Siri, use weak AI.

• Strong AI, also known as artificial general intelligence (AGI), describes programming that can replicate the cognitive abilities of the human brain. When presented with an unfamiliar task, a strong AI system can use fuzzy logic to apply knowledge from one domain to another and find a solution autonomously. In theory, a strong AI program should be able to pass both a Turing Test and the Chinese room test.

What are the 4 types of artificial intelligence?

Arend Hintze, an assistant professor of integrative biology and computer science and
engineering at Michigan State University, explained in a 2016 article that AI can be
categorized into four types, beginning with the task-specific intelligent systems in wide use
today and progressing to sentient systems, which do not yet exist. The categories are as
follows:

• Type 1: Reactive machines. These AI systems have no memory and are task specific. An example is Deep Blue, the IBM chess program that beat Garry Kasparov in the 1990s. Deep Blue can identify pieces on the chessboard and make predictions, but because it has no memory, it cannot use past experiences to inform future ones.

• Type 2: Limited memory. These AI systems have memory, so they can use past experiences to inform future decisions. Some of the decision-making functions in self-driving cars are designed this way.

• Type 3: Theory of mind. Theory of mind is a psychology term. When applied to AI, it means that the system would have the social intelligence to understand emotions. This type of AI will be able to infer human intentions and predict behaviour, a necessary skill for AI systems to become integral members of human teams.

• Type 4: Self-awareness. In this category, AI systems have a sense of self, which gives them consciousness. Machines with self-awareness understand their own current state. This type of AI does not yet exist.

CHAPTER – 2

REVIEW OF LITERATURE

1.1. The challenge of the AI field

This study arose from the issues that AI poses in light of the global rise and expansion of
information technology, which has characterised business- and non-business organisational
development.

The importance of AI research is motivated by two factors: (i) to provide new entrants into
the AI field with an awareness of the fundamental structure of the AI literature (as such, the
literature presented here answers the popular question, "why should I study AI?"); and (ii) the
spike in AI interest, which has resulted in greater interest and massive expenditure on AI
facilities.

Researchers from all disciplines who are interested in their subject want to be informed of
the work of others in their field and share the information they have gained over the years.
By sharing AI information, new methodologies and ideas may be created, resulting in a
better understanding of the area. To that end, this paper was produced for AI researchers so
that they may continue their efforts to develop this area of specialisation through freshly
developed ideas. As a result, they would be able to advance the frontier of knowledge in AI.

This article gives a quick introduction to several major aspects of Artificial Intelligence in
the section that follows. This is to acquaint readers with the vast range of issues covered by AI. A
full overview of the literature on the various types of artificial intelligence is offered in
another section. The study presents some critical problems with major research
consequences for individuals interested in doing artificial intelligence research. These
questions, if effectively handled, will resolve several outstanding technical and non-
technical concerns that have persisted from the previous decade to the present.

1.2. An overview of the AI field

The domains of artificial intelligence are broadly categorised into sixteen groups.
Reasoning, programming, artificial life, belief revision, data mining, distributed AI, expert
systems, genetic algorithms, systems, knowledge representation, machine learning, natural
language understanding, neural networks, theorem proving, constraint satisfaction, and
theory of computation are examples of these. Because many readers of this article may
require a high-level overview of the AI field, the author has used a flow diagram to
demonstrate the overall structure of this study, as well as the link between the many branches
of AI, as shown in Figure 1. The following is a quick summary of some of the most important
fields of artificial intelligence.

1.2.1. Reasoning

The first significant area addressed here is reasoning. Case-based, non-monotonic, model-based,
qualitative, automated, spatial, temporal, and common-sense reasoning research has
progressed.

Case-based reasoning (CBR) is briefly discussed as an illustration. The fundamental source


of information in CBR is a series of cases kept in a case base. Cases, rather than broad
principles, provide particular experience in a problem-solving domain.

The case-based reasoning cycle describes the major tasks involved in addressing issues with
cases. This cycle comprises four steps: retrieve, reuse, revise, and retain.

First, the newly solved problem must be officially represented as a case (new case).

The case base is then searched for a case that is comparable to the current situation.

The solution from the retrieved case is reused to solve the new problem, and a new solution
is obtained and provided to the user, who may check and perhaps amend the result. The
amended case (or the knowledge obtained via the case-based issue resolution process) is
then saved for future use.
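To make the cycle concrete, the following minimal Python sketch walks through retrieve, reuse, revise, and retain on a toy case base; the case format, the similarity measure, and the invented fault-diagnosis domain are assumptions made purely for illustration and are not taken from the literature reviewed here.

```python
# Toy case-based reasoning (CBR) cycle: retrieve, reuse, revise, retain.
# The case format, similarity measure, and domain are illustrative assumptions.

def similarity(case_problem, new_problem):
    """Count matching attribute values between two problem descriptions."""
    return sum(1 for k in case_problem
               if k in new_problem and case_problem[k] == new_problem[k])

def retrieve(case_base, new_problem):
    """Retrieve the stored case most similar to the new problem."""
    return max(case_base, key=lambda case: similarity(case["problem"], new_problem))

def reuse(retrieved_case):
    """Reuse the retrieved case's solution as the first proposal."""
    return retrieved_case["solution"]

def revise(proposed_solution, feedback=None):
    """Let the user (or a test) amend the proposed solution if needed."""
    return feedback if feedback is not None else proposed_solution

def retain(case_base, new_problem, confirmed_solution):
    """Store the newly solved problem as a case for future reuse."""
    case_base.append({"problem": new_problem, "solution": confirmed_solution})

# Example: diagnosing a simple machine fault from observed symptoms.
case_base = [
    {"problem": {"noise": "grinding", "light": "off"}, "solution": "replace bearing"},
    {"problem": {"noise": "none", "light": "blinking"}, "solution": "reset controller"},
]
new_problem = {"noise": "grinding", "light": "off", "smell": "burning"}
best = retrieve(case_base, new_problem)
solution = revise(reuse(best))
retain(case_base, new_problem, solution)
print(solution)  # -> "replace bearing"
```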

1.2.2. Genetic algorithm

Genetic algorithms (GA) are the second main field of AI covered here. A GA is a search
method based on natural selection and the mechanics of natural genetics. It is an iterative process
that keeps a population of structures that are potential answers to certain domain problems.
The structures in the present population are scored for their usefulness as solutions during
each generation, and on the basis of these ratings, a new population of candidate structures
is produced using genetic operators such as reproduction, crossover, and mutation.
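A minimal Python sketch of this generational loop (score, select, cross over, mutate) is shown below on a toy bit-string problem; the fitness function, population size, and rates are assumptions chosen only for illustration.

```python
import random

# Toy genetic algorithm: evolve a 20-bit string towards all 1s.
# Fitness function, population size, and rates are illustrative assumptions.
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 20, 30, 50, 0.02

def fitness(genome):
    """Score a candidate structure (here: the number of 1-bits)."""
    return sum(genome)

def select(population):
    """Fitness-proportionate (roulette-wheel) selection of one parent."""
    return random.choices(population, weights=[fitness(g) + 1 for g in population])[0]

def crossover(parent_a, parent_b):
    """Single-point crossover producing one child."""
    point = random.randint(1, GENOME_LEN - 1)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    # Breed a new population from fitness-weighted parents of the current one.
    population = [mutate(crossover(select(population), select(population)))
                  for _ in range(POP_SIZE)]

print("Best fitness found:", max(fitness(g) for g in population))
```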

1.2.3. Expert system

The expert system is the third feature of AI addressed here. An expert system is a piece of
computer software that can solve a certain set of issues utilising knowledge and reasoning
approaches traditionally associated with a human expert. It may also be defined as a
computer system that performs at or near the level of a human specialist in a certain field of
endeavour.
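As a hedged illustration of the if-then style of knowledge such systems encode, the short Python sketch below forward-chains over a handful of made-up rules; the plant-care domain, the rule set, and the function name are assumptions for illustration only, not a description of any particular expert system.

```python
# Minimal forward-chaining "expert system": if-then rules applied to a set of facts.
# The domain (simple plant-care advice) and the rules are illustrative assumptions.
rules = [
    ({"leaves_yellow", "soil_wet"}, "overwatering"),
    ({"leaves_yellow", "soil_dry"}, "underwatering"),
    ({"overwatering"}, "advice: water less frequently"),
    ({"underwatering"}, "advice: water more frequently"),
]

def infer(initial_facts):
    """Apply rules repeatedly until no new conclusions can be drawn."""
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"leaves_yellow", "soil_wet"}))
# -> includes "overwatering" and "advice: water less frequently"
```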

1.2.4. Natural language understanding

Natural language generation (NLG) systems are computer software systems that generate texts in English and other human
languages, frequently from non-linguistic input data. NLG systems, like other AI systems,
require a large quantity of information that is difficult to get. In general, these issues were
caused by the complexity, novelty, and little understood nature of the tasks done by the
computers, and were exacerbated by the fact that people write so differently.
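A minimal sketch of the idea of generating text from non-linguistic input data is given below; the weather record and the single template are invented for illustration, and real NLG systems are of course far more elaborate than template filling.

```python
# Simplest possible NLG: fill an English template from structured (non-linguistic) data.
# The data record and the template are illustrative assumptions.
record = {"city": "Delhi", "temp_c": 31, "condition": "partly cloudy"}

def generate_report(data):
    """Turn one structured record into a single English sentence."""
    return (f"In {data['city']} it is currently {data['condition']} "
            f"with a temperature of {data['temp_c']} degrees Celsius.")

print(generate_report(record))
# -> In Delhi it is currently partly cloudy with a temperature of 31 degrees Celsius.
```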

1.2.5. Knowledge representation (KR)

Knowledge bases are used to represent application domains and to make stored knowledge
more accessible. Initially, KR research focused on formalisms that are often geared to deal
with a small knowledge base yet provide powerful reasoning services and are extremely
expressive.

2. The Artificial Intelligence Literature

2.1. Reasoning in artificial intelligence

There is a wealth of literature on the theory and practice of reasoning in artificial intelligence.
Researchers have worked on: (i) developing axioms that provide sound and complete
axiomatization for logics of reasoning; (ii) the theoretical properties of the algorithms used for
qualitative temporal reasoning; (iii) what is relevant to a given problem of reasoning; and
(iv) qualitative reasoning methods. One author axiomatized causal models defined by Pearl
in terms of a set of equations.

Axiomatisations are offered for three successively more general types of causal models: (i)
recursive theories; (ii) theories with unique solutions to equations; and (iii) arbitrary
theories. It is demonstrated that in order to reason about causality in the most general third
class, the language employed by Galles and Pearl must be extended. Furthermore, the
complexity of the decision methods is defined for all of the languages and model classes
evaluated.

The concept of reasoning in Artificial Intelligence has been discussed in several general
areas, including reasoning complexity, reasoning about minimal belief, axiomatizing,
sampling algorithm, conditional plausibility, efficient methods, logic and consistency, fuzzy
description logics, backbone fragility, diagnosis, independence, domain filtering, and fusion.
The literature on reasoning complexity focuses on spatial congruence and expressive
description logics. Cristani (1999) introduces a novel algebra for reasoning about spatial
congruence, demonstrating that the satisfiability problem in the spatial algebra MC-4 is NP-complete,
and presents a complete classification of tractability in the algebra, based on the
individuation of three maximal tractable subclasses, one of which contains the basic
relations.

Tobies (2000) investigates the complexities of combining the description logics ALCQ and
ALCQI with a terminological formalism based on cardinality constraints on concepts. These
combinations may be readily embedded into C2, the two-variable fragment of predicate logic
with counting quantifiers, from which decidability follows.

Cheng and Druzdzel (2000) create a method for evidential reasoning in large Bayesian
networks in another paper. Their adaptive importance sampling method, AIS-BN, displays
promising convergence rates even under harsh conditions. It appears to routinely outperform
existing sampling algorithms and is a superior alternative to stochastic sampling methods,
which have been shown to perform badly in evidential reasoning with highly unlikely evidence.

Halpern (2001) covers the topic of conditional plausibility thoroughly. Halpern defines
algebraic conditional plausibility measures as a generic concept. Bayesian networks are used
to describe algebraic conditional plausibility measures.

Renz and Nebel (2001) investigate the theoretical features of qualitative spatial reasoning in
the RCC8 framework when it comes to efficiency approaches. They show that an orthogonal
combination of heuristic approaches may solve practically all apparently difficult situations
in the phase transition area up to a certain size in an acceptable amount of time.

Rosati (1999) conceptualises the minimal belief and negation as failure (MBNF)
propositional fragment presented by Lifschitz. The notion may be thought of as a
unifying framework for a number of non-monotonic formalisms, such as default logic,
autoepistemic logic, circumscription, epistemic queries, and logic programming. Soft
computing theory is widely used in the reasoning literature. Straccia (2001) conducted one
such study on reasoning inside fuzzy description logics. The study describes a fuzzy
extension of ALC that combines Zadeh's fuzzy logic with a traditional DL. The work
contributes to the management of structured knowledge, with suitable syntax, semantics,
and properties, together with a constraint propagation calculus for reasoning.

Backbone fragility and the local search cost peak are introduced by Singer et al. (2000).

The authors present a temporal model for reasoning in temporal settings on disjunctive
metric restrictions on intervals and time points. This temporal model is made up of a labelled
temporal algebra and the reasoning algorithms that go with it. Although several advances
have been made, the computing cost of reasoning methods is exponential in relation to the
underlying issue complexity.

Console et al. (2003) expand the technique to cope with temporal information in diagnosis.
They establish the concept of the temporal decision tree, which is meant to make use of
relevant information as soon as it is collected, and offer a method for constructing such
trees from a model-based reasoning system. Lang et al. (2003) began an important
investigation that analyses independence. There are two types of independence discussed: syntactic
independence and semantic independence. They also discuss the problem of forgetting,
which is the process of extracting only the information from a knowledge base that is
relevant to a set of queries formed from a subset of the alphabet.

Still in the reasoning literature, Debruyne and Bessiere (2001) concentrate on local
consistencies that are stronger than arc consistency without modifying the topology of the
network, i.e., merely eliminating inconsistent values from domains. They compared them
conceptually and empirically, taking into account pruning efficiency and the time necessary
to enforce them.

Baader et al. (2002) addressed the notion of fusion. The authors generalise the results of
decidability transfer from standard modal logics to a broad class of description logics. They
offer abstract description systems, which encompass various description logics uniformly
and may be considered as a common generalisation of description and modal logics, and
demonstrate the transfer result in this generic context.

Halpern and Pucella (2002) provide a propositional logic to reason about event uncertainty,
where the uncertainty is characterised by a set of probability measures allocating an interval
of likelihood to each occurrence. They provide a sound and thorough axiomatisation of the
logic and demonstrate that the satisfiability issue is NP-complete, making it no more difficult
than satisfiability in propositional logic.

Consistency is a significant research field in reasoning. Wray and Laird (2003) demonstrate
how the combination of a hierarchy and persistent claims of knowledge may make
preserving logical consistency in stated knowledge problematic. They investigate the
negative consequences of persistent assumptions in reasoning and provide fresh potential
remedies.

Younes and Simmons (2003) provide a modification of the additive heuristic for plan space
planning that accounts for the possible reuse of existing actions in a plan. They also suggest
a number of novel flaw selection strategies and demonstrate how these may help POCL
planners solve more problems than was previously achievable. VHPOP also aids in
planning with durative activities by combining typical temporal constraint reasoning
approaches.

John McCarthy, a professor emeritus of computer science at Stanford, the man who coined
the term "artificial intelligence" and subsequently went on to define the field for more than
five decades, died suddenly at his home in Stanford in the early morning Monday, Oct. 24.
He was 84.

McCarthy was a giant in the field of computer science and a seminal figure in the field of
artificial intelligence. While at Dartmouth in 1955, McCarthy authored a proposal for a two-
month, 10-person summer research conference on "artificial intelligence" – the first use of
the term in publication.

In proposing the conference, McCarthy wrote, "The study is to proceed on the basis of the
conjecture that every aspect of learning or any other feature of intelligence can in principle
be so precisely described that a machine can be made to simulate it." The subsequent
conference is considered a watershed moment in computer science.

In 1958, McCarthy invented the computer programming language LISP, the second oldest
programming language after FORTRAN. LISP is still used today and is the programming
language of choice for artificial intelligence.

He also developed the concept of computer time-sharing in the late 1950s and early 1960s,
an advance that greatly improved the efficiency of distributed computing and predated the
era of cloud computing by decades.

"A bunch of people decided that time-sharing was clearly the way to work with a computer,
but nobody could figure out how to make it work for general purpose computing – nobody
except John," said Les Earnest, a senior research scientist emeritus at Stanford and an early
collaborator at the Stanford Artificial Intelligence Laboratory (SAIL) with McCarthy.

'Programs with Common Sense'

In 1960, McCarthy authored a paper titled, "Programs with Common Sense," laying out the
principles of his programming philosophy and describing "a system which is to evolve
intelligence of human order."

McCarthy garnered attention in 1966 by hosting a series of four simultaneous computer


chess matches carried out via telegraph against rivals in Russia. The matches, played with
two pieces per side, lasted several months. McCarthy lost two of the matches and drew two.
"They clobbered us," recalled Earnest.

Chess and other board games, McCarthy would later say, were the "Drosophila of artificial
intelligence," a reference to the scientific name for fruit flies that are similarly important in
the study of genetics.

McCarthy would later develop the first "hand-eye" computer system in which a computer
was able to see real 3D blocks via a video camera and control a robotic arm to complete
simple stacking and arrangement exercises.

Professionally, McCarthy was known for intense focus, a quality easily misunderstood.
"John was very focused on what he was working on at all times. If you engaged him in a
topic he was not interested in, he would sometimes turn away without saying a thing," said
Earnest. "It was his way of staying on focus."

Ed Feigenbaum, professor emeritus of computer science at Stanford and a colleague


recruited by McCarthy in the 1960s, recalled a softer side: "He could be blunt, but John was
always kind and generous with his time, especially with students, and he was sharp until the
end. He was always focused on the future. Always inventing, inventing, inventing. That was
John."

One project that McCarthy returned to near the end of his life was a paper he had written in
the early 1970s exploring the practicality of interstellar travel. He wrote: "We show that
interstellar travel is entirely feasible with only small improvements in present technology
provided travel times of several hundred to several thousand years are accepted."

"John's motivation for writing it in the first place was annoyance with what he claimed were
faulty [critical] analyses published at the time. He resurrected it out of frustration that no
one had corrected these analyses over the intervening decades," wrote Bill Gosper, a
colleague from SAIL, in an email.

CHAPTER – 3

RESEARCH METHODOLOGY

RESEARCH METHODOLOGY

Research in common parlance refers to a search for knowledge: a scientific and systematic
search for pertinent information on a specified topic. Once the objective is identified, the
next step is to collect data relevant to the identified problem and to analyse the collected
data in order to find out the underlying reasons for the problem.

There are two types of data, namely:

1. Primary Data

2. Secondary Data

Research Design

Research design is defined as a framework of methods and techniques chosen by a researcher


to combine various components of research in a reasonably logical manner so that the
research problem is efficiently handled. It provides insights about “how” to conduct research
using a particular methodology. Every researcher has a list of research questions which need
to be assessed – this can be done with research design.

• Primary Data

Primary data is original in nature and is collected first hand. Primary data is
information that you collect specifically for the purpose of your research project. An
advantage of primary data is that it is specifically tailored to your research needs. A
disadvantage is that it is expensive to obtain.

• Questionnaire Method
• Personal Interview Method

• Secondary Data
Secondary data is data that the investigator does not collect originally for the research
enquiry, but that has already been collected and is available in published or unpublished
form.

Use of secondary data in a research enquiry saves time, finance and labor. However, some
people doubt the accuracy of secondary data. If reliable and suitable secondary data is
available, there is no harm in using secondary data for any research enquiry.

Most research requires the collection of primary data, and this is what students concentrate
on. Unfortunately, many dissertations do not include secondary data in their findings section
although it is perfectly acceptable to do so, providing it has been analyzed.

It is always a good idea to use data collected by someone else if it exists; it may be on a
much larger scale and could contribute to the findings considerably.

SOURCE OF DATA COLLECTION

The methodology adopted for this project consists of extensive research on the topic
‘ARTIFICIAL INTELLIGENCE’ on the World Wide Web, with the help of an internet
connection.

CHAPTER – 4

DATA ANALYSIS AND


INTERPRETATION

4.1 PRIMARY DATA

Data was collected primarily through a Google Form:

AGE:

Figure 4.1

Most respondents were seen to be in the 18-19 age group (22 each). Hence, the target
audience mostly falls under Generation Z.

GENDER:

Figure 4.2

Respondents were mainly male, so the responses are likely to lean more towards a
masculine point of view.

Q1 What is the full form of “AI”?

Table 4.1

What is the full form of “AI”? Public response (out of 40 people asked)

Artificially Intelligent 0
Artificial Intelligence 39
Artificially Intelligence 1
Advanced Intelligence. 0

Figure 4.3

INTERPRETATION

It can be seen from the chart that around 97.5% of respondents know the full form of AI,
while 2.5% do not, which is somewhat disappointing.
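All the percentages quoted in these interpretations are simply each option's response count divided by the 40 respondents; a small Python sketch of the calculation, using the counts from Table 4.1, is shown below (the variable names are illustrative only).

```python
# Convert raw response counts (out of 40 respondents) into the percentages
# quoted in the interpretations, using Table 4.1 as the example.
responses = {
    "Artificially Intelligent": 0,
    "Artificial Intelligence": 39,
    "Artificially Intelligence": 1,
    "Advanced Intelligence": 0,
}
total = sum(responses.values())  # 40 respondents

for option, count in responses.items():
    print(f"{option}: {count / total * 100:.1f}%")
# Artificial Intelligence: 97.5%; Artificially Intelligence: 2.5%; others 0.0%
```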

Q2 According to you what does AI do?

Table 4.2

According to you what does AI do? Public response (out of 40 people asked)
makes life easier 6
makes work easier 3
helps in factory 3
all of the above 28

Figure 4.4

INTERPRETATION

It can be seen from the chart that 70% of respondents agree with all of the above, 15% think
AI only makes life easier, and the remaining 15% think AI makes work easier or helps in factories.

Q3 Do you use AI in your daily life?

Table 4.3

Do you use AI in your daily life? Public response (out of 40 people asked)

Yes 27
No 7
Maybe 6

Figure 4.5

INTERPRETATION

It is seen from the chart that 67.5% use AI in their daily life, 17.5% do not use AI in their daily
life, and 15% are not sure whether they use AI in their daily life.

Q4 According to you will AI steal your jobs?

Table 4.4

According to you will AI steal your jobs? Public response (out of 40 people asked)

Yes 7
No 11
Maybe 22

Figure 4.6

INTERPRETATION

It is seen from the chart that 17.5% think AI will steal jobs in the future, 27.5% think AI will
not steal jobs in the future, and 55% think AI might steal jobs in the future.

Q5 When you read about high profile individuals giving dire warnings about AI, what
best describes your response?

Table 4.5

When you read about high profile individuals giving dire warnings about AI, what best describes your response? Public response (out of 40 people asked)

It worries me 5
It worries me, but I'm optimistic we can handle the issues as they come 26
It doesn't worry me 7
I don't know what to think 2

Figure 4.7

INTERPRETATION

It is seen from the chart that 12.5% of people are worried about AI; 65% are worried but
optimistic that the issues can be handled as they come; 17.5% are not worried about AI at
all; and 5% do not know what to think.

Q6 According to you over the next ten years, will AI :

Table 4.6

According to you over the next ten years, will AI: Public response (out of 40 people asked)

Cause massive unemployment 7
Be somewhat disruptive to employment 19
Not noticeably impact overall employment 10
Create more jobs than it destroys 4

Figure 4.8

INTERPRETATION

It is seen from the chart that 17.5% of people think AI might cause massive unemployment,
47.5% think it might be somewhat disruptive to employment, 25% think there will be no
noticeable impact on overall employment, and 10% think AI will create more jobs than it destroys.

Q7 According to you what you think about your job in future now?

Table 4.7

According to you what you think about your job in future now? Public response (out of 40 people asked)

Will be replaced by AI during your career 9
Will not be replaced by AI before your retirement 20
Can never be replaced 11

Figure 4.9

INTERPRETATION
It is seen from the chart that 22.5% of people think their job will be replaced by AI during
their career, 50% think it will not be replaced by AI before their retirement, and 27.5% think
it can never be replaced.

Q8 How much will AI change our industry in the next five years?

Table 4.8

How much will AI change our industry in the next five years? (rating on a scale of 1 to 5) Public response (out of 40 people asked)

1 0
2 4
3 9
4 13
5 14

Figure 4.10

INTERPRETATION

It is seen from the chart that 35% of people gave the highest rating of 5, strongly agreeing
that AI will change our industry in the next five years.

Q9 Overall, do you think AI will be:

Table 4.9

Overall, do you think AI will be: Public response (out of 40 people asked)
A force for good 30
A force for evil 10

Figure 4.11

INTERPRETATION

It is seen from the chart that 75% of people think AI will be a force for good and 25% think
AI will be a force for evil.

CHAPTER – 5

CONCLUSION

At first, technologists delivered terrifying dystopian warnings about the capacity of technology, especially artificial intelligence (AI), to eliminate jobs.

Then came a sort of retraction, with a slew of reassuring statements meant to reduce the alarm.

Now, the conversation appears to be settling on a more balanced narrative: when the robots arrive, they will bring neither doomsday nor utopia, but both benefits and stress.

To begin with, most observers now believe that the recent past may easily foreshadow the near future, reminding us that automation created more employment than it destroyed over the last 30 years and so holds enormous promise.

In this sense, automation and artificial intelligence (AI) are rapidly emerging as potential sources of the productivity improvements that the US economy so desperately needs. As a result, automation has the potential to boost the economy in the coming years and improve prosperity at an uncertain time.

At the same time, many analysts are now anticipating significant upheaval. Even if the absolute number of jobs remains steady, the widespread opinion now is that difficult times are ahead in the labour market, causing very substantial dislocations for many employees. All of this highlights how muddled, contentious, and unpredictable our understanding of how automation and comparable technologies will be implemented in the coming years remains.

This is where this report comes in. The preceding pages developed both backward- and forward-looking analyses of how the advent of digital technologies is already reorienting labour markets and may continue to do so, along with a policy agenda for national and state-local response, in order to fill in some of the gaps and provide an overview of automation trends both in the recent past and in the near future.

The investigation began with the definition and conceptualization of automation. The research then used well-established occupation-based statistical methodologies to quantify and map the varied observed and expected effects of automation on employment growth and change from 1980 to 2016 and from 2016 to 2030. The emphasis throughout was on areas of possible occupational stress rather than on net employment totals. Furthermore, special attention was paid to digging beneath the national top-line data to examine industrial, regional, and demographic differences. Finally, the report offers policymakers at the national and state-local levels a framework and suggestions.

In light of these discussions, the present report can be seen as a source of both comfort and concern, for the near future may well be reminiscent of the recent past.

It says a great deal, for instance, that the broad adoption of automation in the form of pervasive digitalization in the years since 1980 has resulted in a minor rise in job availability rather than mass unemployment. If repeated in the future, such gains would be an essential answer to excessive fear.

However, a future that resembles the recent past is not always a source of optimism. The first phase of digital automation, as the retrospective research reveals, was characterised by significant upheaval, particularly the "hollowing out" of the labour market, with employment and income gains concentrated at the high and low ends of the skill distribution. It is consequently unsettling that the forward-looking analysis predicts more of the same in the next decade or so. Accordingly, the report advocates for greater urgency and stronger protections, such as expanded lifelong learning opportunities, improved labour market transitions, and more supportive initiatives to address the specific and regional problems of the vulnerable.

In that sense, this assessment may not inspire fear, but it does demand attention and action, as well as the assurance that automation can be beneficial even when it is disruptive.

Artificial intelligence has significantly improved people's lives in a variety of ways, and people's lives are no longer the same as they were before AI was introduced. As previously stated, AI implementation has resulted in time savings, which has led to increased output from businesses and from day-to-day human activities. Moreover, AI has led to reduced human effort, computerised methods, automated transportation systems, and participation in hazardous jobs.

Evidently, AI has had a significant impact on people's lives and has done wonders to aid in the automation of almost all of their activities. Many of these activities previously required a significant amount of time and manual labour to complete. Their automation by AI will contribute significantly to the real activities of people and industries, allowing them to progress.

We can identify two kinds of AI based on their functions and capacities. The first type is weak AI, also known as narrow AI, which is intended to perform a particular task, such as facial recognition, internet search, or driving a car. Many present systems that claim to use "AI" are most likely weak AI focused on a narrowly defined specialised function. Although weak AI appears to be beneficial to human life, some believe it could be harmful, since it could disrupt the electric grid or threaten nuclear power plants if it malfunctions.

The long-term goal of many researchers is to create strong AI or artificial general intelligence (AGI), the hypothetical intelligence of a machine that can understand or learn any intellectual task that a human being can, thereby helping humans to solve the problems they face. While narrow AI may surpass humans in tasks such as chess or problem solving, its impact is currently limited. AGI, on the other hand, has the potential to outperform humans in practically every cognitive endeavour.

Strong AI is a distinct vision of AI in that it may be taught to operate like a human mind, to be intelligent in whatever task it is given, and even to have perception, beliefs, and other cognitive abilities that are generally attributed solely to humans.

Artificial neural networks and deep learning AI technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than is humanly possible.

Returning to the challenge of the AI field, this study arose from the issues that AI poses in light of the global rise and expansion of information technology, which has characterised business and non-business organisational development.

The importance of AI research is motivated by two factors: (i) to provide new entrants into the AI field with an awareness of the fundamental structure of the AI literature (the literature presented here thus answers the popular question, "why should I study AI?"); and (ii) the spike in AI interest, which has resulted in greater interest and massive expenditure on AI facilities.

Researchers from all disciplines who are interested in their subject want to be informed of the work of others in their field and to share the information they have gained over the years. By sharing AI information, new methodologies and ideas may be created, resulting in a better understanding of the area. To that end, this report was produced for AI researchers so that they may continue their efforts to develop this area of specialisation through newly developed ideas. As a result, they would be able to advance the frontier of knowledge in AI.

This report gave a quick introduction to several major aspects of artificial intelligence in order to acquaint readers with the vast range of issues covered by AI. A fuller overview of the literature on the various branches of artificial intelligence was offered in the literature review. The study also raised some critical questions with major research consequences for those interested in doing artificial intelligence research; these questions, if effectively handled, will resolve several outstanding technical and non-technical concerns that have persisted from the previous decade to the present.

The domains of artificial intelligence are broadly categorised into sixteen groups: reasoning, programming, artificial life, belief revision, data mining, distributed AI, expert systems, genetic algorithms, systems, knowledge representation, machine learning, natural language understanding, neural networks, theorem proving, constraint satisfaction, and theory of computation. Because many readers may require a high-level overview of the AI field, a flow diagram was used to demonstrate the overall structure of the study, as well as the links between the many branches of AI, as shown in Figure 1, and a quick summary of some of the most important fields of artificial intelligence was given.

The first significant area addressed was reasoning. Research on case-based, non-monotonic, model-based, qualitative, automated, spatial, temporal, and common-sense reasoning has progressed.

Case-based reasoning (CBR) was briefly discussed as an illustration. The fundamental source of information in CBR is a collection of cases kept in a case base. Cases, rather than broad principles, provide specific experience in a problem-solving domain.

The case-based reasoning cycle describes the major tasks involved in solving problems with cases, and comprises four steps: retrieve, reuse, revise, and retain.

First, the newly posed problem must be formally represented as a case (new case). The case base is then searched for a case that is comparable to the current situation.

The solution from the retrieved case is reused to solve the new problem, and a new solution is obtained and provided to the user, who may check and perhaps amend the result. The amended case (or the knowledge obtained through the case-based problem-solving process) is then saved for future use.

Genetic algorithms (GA) are the second main field of AI covered here. A GA is a search method based on natural selection and the mechanics of natural genetics. It is an iterative process that keeps a population of structures that are candidate solutions to particular domain problems. The structures in the current population are scored for their usefulness as solutions during each generation, and on the basis of these ratings, a new population of candidate structures is produced using genetic operators such as reproduction, crossover, and mutation.

LIMITATIONS OF THE STUDY
While conducting a study, researchers always come across some limitations. Some of the limitations faced during this research are:

• The study relies only on primary data, so it could not provide a very good representation of the real-world scenario.

• Because the topic is so vast, analysing and interpreting the data was extremely tiring and difficult.

• Research on this topic needs to be very extensive and thorough, and is time-consuming, since there is a huge amount of data available.

CHAPTER – 6

BIBLIOGRAPHY

BIBLIOGRAPHY
 [Link]
A1&dq=artificial+intelligence+research+paper&ots=Km4BrTZN1W&sig=Zupw1
GtOl6Xqu0Hon2BU-
I4AhVU&redir_esc=y#v=onepage&q=artificial%20intelligence%20research%20p
aper&f=false

 [Link]
e+research+paper&btnG=

 [Link]

 [Link]
CzsYy71DhrXTFMMcSq7slTESpiBTtUQ:1657648584359&source=lnms&tbm=is
ch&sa=X&ved=2ahUKEwilx6GY9vP4AhV42TgGHbUPBSoQ_AUoAnoECAIQB
A&biw=1536&bih=746&dpr=1.25

CHAPTER – 7

ANNEXURE

ANNEXURE

