AIH UNIT 1 Complete

The document discusses the advancements of Artificial Intelligence (AI) and Machine Learning (ML) in healthcare, highlighting their roles in disease prediction, diagnosis, and improving efficiency in medical processes. Various applications of AI and ML, such as personalized medicine, medical imaging, and drug discovery, are explored, showcasing their potential to transform healthcare practices. Despite skepticism regarding their practical application, the integration of AI and ML in healthcare is rapidly increasing, promising significant improvements in patient care and operational efficiency.


AI in Healthcare

UNIT-1 Introduction to Artificial Intelligence and Machine Learning


Recent advancements in Artificial Intelligence (AI) and Machine Learning (ML) technology have brought substantial strides in predicting and identifying health emergencies, disease populations, disease states, and immune responses, among others. Although scepticism remains regarding the practical application and interpretation of results from ML-based approaches in healthcare settings, the adoption of these approaches is increasing at a rapid pace.

Machine learning has been used in various applications, ranging from security services through face detection to increasing efficiency and decreasing risk in public transportation, and recently in various aspects of healthcare and biotechnology. Artificial intelligence and machine learning have brought significant changes to business processes and transformed day-to-day life, and comparable transformations are anticipated in healthcare and medicine. Recent advancements in this area have displayed incredible progress and opportunity to disburden physicians and improve accuracy, prediction, and quality of care. Current machine learning advancements in healthcare have primarily played a supportive role in a physician or analyst's ability to fulfill their roles, identify healthcare trends, and develop disease prediction models. In large medical organizations, machine learning-based approaches have also been implemented to achieve increased efficiency in the organization of electronic health records, identification of irregularities in blood samples, organs, and bones using medical imaging and monitoring, as well as in robot-assisted surgeries. Machine learning applications have recently enabled the acceleration of testing and hospital response in the battle against COVID-19. Hospitals have been able to organize, share, and track patients, beds, rooms, ventilators, EHRs, and even staff during the pandemic using a deep learning system by GE called the Clinical Command Center.

Many new developments emerge as the field of healthcare grows into the new world of technology. Artificial intelligence and machine learning-based approaches and applications are vital for the field's progression, offering faster diagnosis, improved accuracy, and greater simplicity.

Although the terms machine learning, deep learning, and artificial intelligence are typically used interchangeably, they represent different sets of algorithms and learning processes. Artificial Intelligence (AI) is the umbrella term for any computerized intelligence that learns and imitates human intelligence. AI is best known for autonomous machines such as robots and self-driving cars, but it also permeates everyday applications, such as
personalized advertisements and web searches. In recent years, AI development and application have made incredible strides and have been applied to many areas thanks to higher levels of decision-making, accuracy, problem-solving capability, and computational power. In the development of nearly all AI algorithms, the data obtained is split into two groups, a training set and a test set, to ensure reliable learning, representative populations, and unbiased predictions. As the name suggests, the training set is used for algorithm training and includes sets of characterizing data points (features) and, in the case of supervised learning, corresponding predictions. The test set is new to the algorithm and is used solely to test the algorithm's abilities; this measure is taken to eliminate the bias introduced by evaluating an algorithm on its own training data. Once an algorithm passes through the training and testing phases with acceptable results, it can be implemented in healthcare settings. The application of AI is broad and has many applied sub-regions; here, we provide an overview of machine learning and deep learning, two of the several sub-regions of AI.
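The train/test split described above can be sketched in a few lines. This is an illustrative, self-contained version; real projects typically rely on a library routine such as scikit-learn's `train_test_split`:

```python
import random

def train_test_split(records, labels, test_frac=0.2, seed=42):
    """Shuffle and split paired (features, label) data so that the
    test set is never seen during training."""
    idx = list(range(len(records)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(idx) * test_frac)
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    X_train = [records[i] for i in train_idx]
    y_train = [labels[i] for i in train_idx]
    X_test = [records[i] for i in test_idx]
    y_test = [labels[i] for i in test_idx]
    return X_train, y_train, X_test, y_test
```

Because the test records are held out entirely, the measured performance estimates how the model would behave on new patients rather than on cases it has memorized.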

AI IN HEALTHCARE
Types of AI in health care
AI is an umbrella term covering a variety of distinct but interrelated processes. Some of the most
common forms of AI used within health care include:
- Machine learning (ML): Training algorithms on data sets, such as health records, to create models capable of tasks like categorizing information or predicting outcomes.
- Deep learning: A subset of machine learning that involves greater volumes of data, longer training times, and more layers of ML algorithms, producing neural networks capable of more complex tasks.
- Natural language processing (NLP): The use of ML to understand human language, whether verbal or written. In health care, NLP can help interpret documentation, notes, reports, and published research.
- Robotic process automation (RPA): The use of AI in computer programs to automate administrative and clinical workflows. Some health care organizations use RPA to improve the patient experience and the daily function of their facilities.
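As a toy illustration of the NLP idea above — not a real clinical NLP system, which would use trained language models and a full drug ontology — a keyword-based extractor over free-text notes might look like this (the `MEDS` vocabulary here is a hypothetical, hand-made list):

```python
import re

# Hypothetical toy vocabulary; a real system would use a drug ontology.
MEDS = {"aspirin", "metformin", "lisinopril"}

def extract_meds(note):
    """Pull known medication mentions out of a free-text clinical note."""
    tokens = re.findall(r"[a-z]+", note.lower())
    return sorted(set(tokens) & MEDS)
```

Even this crude matching hints at why NLP is useful: structured facts (which drugs a patient takes) can be recovered from unstructured documentation.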


Type of
Healthcare Machine Applied or
Description References
Area Learning Experiment
Model

Using EHRs for predicting Liang et al.


EHRs SVM, DT Applied
diagnoses 2014 [26]

Predicting post-stroke
Ge et al., 2019
- RNN pneumonia using deep neural Experiment
[35]
network approaches

Deep EHR: Chronic Disease Liu, Zhang &


LSTM,
- Prediction Using Medical Experiment Razavian 2018
CNN
Notes [40]

SRML-Mortality Predictor:
A hybrid machine learning
framework to predict mortality Ahmad et al.,
- ML Experiment
in paralytic ileus patients using 2020 [41]
Electronic Health Records
(EHRs)

Dermatologist-level
Medical Esteva et al.
CNN classification of skin cancer Experiment
Imaging 2017 [7]
with deep neural networks

Chexnet: Radiologist-Level Rajpurkar et al.,


- CNN Pneumonia Detection on Chest Applied 2017; Tsai &
X-Rays with Deep-Learning Tao, 2019 [8]

International evaluation of an
McKinney et al.
- CNN AI system for breast cancer Experiment
2020 [49]
screening

Deep-learning algorithm
predicts diabetic retinopathy Arcadu et al.
- Deep CNN Experiment
progression in individual 2019 [56]
patients

Structural MRI classification


for Alzheimer's disease Faturrahman et
- DBN Experiment
detection using deep belief al., 2017 [37]
network

Machine learning approaches


for integrating clinical and
Decision Patel et al.,
- imaging features in late-life Experiment
tree 2015 [27]
depression classification and
response prediction

3
AI in Healthcare
UNIT-1 Introduction to Artificial Intelligence and Machine Learning

Type of
Healthcare Machine Applied or
Description References
Area Learning Experiment
Model

Application of machine
Genetic
learning models to predict Tang et al. 2017
Engineering RT Experiment
tacrolimus stable dose in renal [10]
& Genomics
transplant recipients

Artificial intelligence predicts


the immunogenic landscape of
Malone et al.
- ML SARS-CoV-2 leading to Applied
2020 [15]
universal blueprints for
vaccine designs

Off-target predictions in
Deep CNN, Lin & Wong
- CRISPR-Cas9 gene editing Applied
Deep FFs 2018 [76]
using deep learning

DeepHF: Optimized CRISPR


guide RNA design for two Wang et al.,
- RNNs Applied
high-fidelity Cas9 variants by 2019 [85]
deep learning

CUNE: Unlocking HDR-


mediated nucleotide editing by
Random O’Brien et al.,
- identifying high- efficiency Applied
Forest 2019 [86]
target sites using machine
learning

ToxDL: deep learning using


primary structure and domain Pan et al., 2020
- CNNs Applied
embeddings for assessing [87]
protein toxicity


Application of AI and ML
1. Identifying Diseases and Diagnosis

Machine learning is employed in healthcare for the identification and diagnosis of diseases that may be challenging to detect during early stages. For example, IBM Watson Genomics utilizes cognitive computing and genome-based tumor sequencing to swiftly and accurately diagnose diseases.
2. Drug Discovery and Manufacturing

Machine learning is very useful in finding new medicines early on. It helps in the research and development of new technologies for studying genes and precision medicine, which can lead to new ways of treating diseases with many causes.

An example is Microsoft's Project Hanover, which uses ML across several projects. One of them is creating AI technology for treating cancer; the team is also working on personalizing drug combinations for a specific type of leukemia called AML (Acute Myeloid Leukemia).
3. Medical Imaging Diagnosis

ML is used to analyze medical images through a technology called computer vision. ML helps the computer understand and analyze images such as X-rays, CT scans, or MRIs, making it easier for doctors to identify and understand medical issues and thus enhancing the accuracy of medical diagnoses based on these images.

For example, Microsoft's InnerEye initiative uses ML to analyze images, especially in the medical field.
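The computer-vision building block behind such systems is convolution: a small kernel slides over the image and responds to local patterns such as edges. Below is a minimal NumPy sketch run on a synthetic 6x6 "scan" with a bright patch standing in for a lesion; a real system like InnerEye uses many learned kernels stacked into a deep network, so treat this only as an illustration of the operation:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2-D convolution -- the core operation a CNN layer
    applies to an image such as an X-ray."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Synthetic 6x6 "scan" with a bright 2x2 region standing in for a lesion
scan = np.zeros((6, 6))
scan[2:4, 2:4] = 1.0

# A simple edge-detecting kernel responds at the lesion boundary
kernel = np.array([[1.0, -1.0], [-1.0, 1.0]])
response = convolve2d(scan, kernel)
```

The filter output is zero over uniform regions and nonzero at the lesion's edges, which is exactly the kind of local evidence a trained CNN aggregates into a diagnosis.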
4. Personalized Medicine

ML makes personalized medicine more effective: treatments are designed based on each person's unique health information and predictions about their health, using smart technology to create customized medical care for each individual. The personalized medicine market, which includes AI-driven approaches, is projected to exceed $3.5 billion by 2025.

For example, IBM Watson Oncology uses patient medical history to generate multiple personalized treatment options, showing the potential of ML in individualized healthcare.
5. Machine Learning-Based Behavioral Modification

ML as a tool can study and understand our habits, even the ones we might not consciously think about, such as how much we move, what we eat, or our daily routines. With this understanding, machine learning can suggest changes to these habits to promote better health and prevent potential medical issues.

For example, Somatix, a data analytics company, has created a smart app using machine learning. The app recognizes the movements you make in everyday life and helps you make positive changes in your behavior for better health.

6. Smart Health Records

Keeping health records updated can be a lot of work. Even with technology helping with data entry, many tasks still take a long time. ML makes things easier, saving time, effort, and money. Smart methods for sorting documents and reading text are becoming more popular; examples include Google's Cloud Vision API and MATLAB's ML-based handwriting recognition.

For example, MIT is at the forefront of developing smart health records that use machine learning tools to help with diagnosis and suggest treatments.
7. Clinical Trial and Research

ML can be very helpful in clinical trials and research. Clinical trials in the pharmaceutical industry often take a long time and cost a lot of money. By using ML and predictive analytics, researchers can quickly find potential candidates for these trials from various sources such as previous doctor visits and social media. ML is also used to monitor participants in real time, access data instantly, determine the best sample size, and reduce errors in electronic records.

A study by IQVIA Research & Development Solutions suggests that using AI and ML in clinical trials could save the industry substantial money by 2025. The report highlights that using AI just for finding suitable patients could cut around 30 percent of clinical trial costs, saving about $18 billion each year.

Additionally, AI and ML's role in making trials more efficient could speed up the introduction of new drugs, benefiting both pharmaceutical companies and patients looking for new treatment options.
8. Crowdsourced Data Collection

ML is utilized in analyzing crowdsourced health data to gain insights into diseases and contribute to medical research. Crowdsourcing in the medical field means collecting a large amount of information from people who willingly share their health data. This approach is gaining popularity and is expected to significantly impact how we understand and practice medicine in the future.

For example, Apple's ResearchKit lets users access apps that use ML and facial recognition to diagnose and treat conditions like Asperger's and Parkinson's disease.
9. Better Radiotherapy

Machine learning in healthcare, particularly in radiology, is highly valuable. In medical image analysis, many factors can be present simultaneously; lesions, cancer foci, and other elements are challenging to model with explicit equations. ML algorithms make diagnosis and variable identification easier by learning from many samples. A common application is categorizing objects such as lesions into groups like normal or abnormal, lesion or non-lesion.

For example, Google's DeepMind Health is assisting UCLH researchers in creating algorithms that distinguish between healthy and cancerous tissue, which can help improve radiation treatment.

10. Outbreak Prediction

ML is used to monitor and predict epidemics worldwide. Scientists use vast amounts of data from satellites, social media, websites, etc. Artificial neural networks (ANNs) organize this data and predict outbreaks of diseases such as malaria or severe infectious diseases. This forecasting is particularly beneficial in developing countries with limited medical and educational resources.

For example, ProMED-mail, an Internet-based reporting platform, uses ML to monitor evolving diseases and provides outbreak reports in real time.
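At its core, the ANN forecasting idea is a stack of weighted sums and nonlinearities. The following one-hidden-layer forward pass is illustrative only: real outbreak models are trained on large surveillance and satellite datasets, and the input feature names here (case counts, rainfall, travel volume) are assumptions for the sketch:

```python
import numpy as np

def ann_forward(x, W1, b1, W2, b2):
    """Tiny feed-forward network: input features (e.g. case counts,
    rainfall, travel volume) -> outbreak-risk score in (0, 1)."""
    h = np.maximum(0.0, W1 @ x + b1)      # ReLU hidden layer
    z = W2 @ h + b2                        # linear output layer
    return 1.0 / (1.0 + np.exp(-z))        # sigmoid squashes to a risk
```

Training consists of adjusting W1, b1, W2, b2 so that the risk score matches historical outbreak outcomes; the forward pass itself stays this simple.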

With ML being widely applied in healthcare, the demand for ML experts is rising. Hence, if you plan to switch to or start a career in ML, consider enrolling in comprehensive AI/ML training. This will help you better understand AI and ML, giving you hands-on experience and practical knowledge in this quickly evolving field.

Electronic Health Records

Electronic Health Records (EHRs), originally known as clinical information systems, were first
introduced by Lockheed in the 1960s. Since then, the systems have been reconstructed many
times to create an industry-wide standard system. In 2009, the US federal government invested
billions in promoting EHR implementation in all practices in an effort to improve the quality
and efficiency of the work; this ultimately resulted in nearly 87 percent of office-based
practices nationwide implementing EHRs in their systems by 2015. Big data collected from
EHR systems with structured feature data have been instrumental in deep learning applications,
including medication refills and using patient history for predicting diagnoses. This has resulted
in significant improvement in data organization, accessibility, and quality of care and has
helped physicians with diagnoses and treatments. The standardization of features across
datasets has also allowed for increased access to health records for research purposes.

Considering the vital role that prediction plays in providing treatment, scientists have
developed deep learning models for the diagnosis and prediction of clinical conditions using
EHRs. In a recent research study, Liu, Zhang, and Razavian developed a deep learning algorithm using LSTM networks and CNNs (supervised deep learning) to predict the onset of diseases such as heart failure, kidney failure, and stroke. Unlike other
prediction models, this algorithm used both structured data obtained from EHR and
unstructured data contained in progress and diagnosis notes. As explained by Liu and
colleagues, the inclusion of unstructured data within the model resulted in significant
improvements in all the baseline accuracy measures, further indicating the versatility and
robustness of such algorithms. In another research study using deep neural network approaches,
Ge and colleagues built a model to predict post-stroke pneumonia within 7 and 14-day periods.
The model returned an Area under the ROC curve (AUC, a measure of model performance by
combining sensitivity and specificity of a model) value of 92.8 percent for the 7-day predictions
and 90.5 percent for the 14-day predictions, providing a highly accurate model predicting
pneumonia following a stroke. In addition, several ML-based models have also been implemented to predict mortality in ICU patients. In one such model, Ahmad and colleagues showed a great ability to predict mortality in patients with paralytic ileus (PI, an incomplete blockage of the intestine that prohibits the passage of food, eventually leading to a build-up and complete blockage of the intestines) using EHRs. The algorithm, named the Statistically Robust Machine Learning-based Mortality Predictor (SRML-Mortality Predictor), showed an 81.30% accuracy rate in predicting mortality in PI patients. Providing patients and
practitioners with predicted mortality, through the use of EHR prediction algorithms, can allow
them to make more educated clinical treatment decisions.
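The AUC metric cited for these models has a simple probabilistic reading: it is the chance that the model scores a randomly chosen positive case above a randomly chosen negative one (ties count half). A direct, O(n²), illustrative implementation:

```python
def auc_score(labels, scores):
    """AUC = probability that a randomly chosen positive case is
    ranked above a randomly chosen negative case (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

An AUC of 0.928, as reported for the 7-day post-stroke pneumonia model, therefore means the model ranks a true pneumonia case above a non-case about 93% of the time; 0.5 would be chance level.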

Medical Imaging

Given the digital nature of data and the presence of structured data formats such as DICOM
(Digital Imaging and Communications in Medicine), medical imaging has seen significant
strides with the implementation of machine learning-based approaches to several imaging
modalities, including Computed Tomography (CT), Magnetic Resonance Imaging (MRI), X-
Ray, Positron Emission Tomography (PET), Ultrasound, and more. Several ML-based models
have been developed to identify tumors, lesions, fractures, and tears.

In a recent study, McKinney and colleagues have implemented a deep learning algorithm to
detect tumors based on Mammograms in earlier stages of growth. In comparison to traditional
screening techniques used to identify tumors, these deep learning-based screen techniques
allow for the identification and location of tumors in earlier stages of breast cancer, allowing
for a better rate of resection. In a direct comparison, the deep learning-based approach outperformed experienced radiologists by 11.5% in AUC score. Several other studies have
also implemented ML-based approaches for breast cancer detection with variable success,
including models by Wang and colleagues, Amrane and colleagues, and Ahmad and
colleagues.

Similarly, in a recent study, Esteva and colleagues used a CNN (supervised deep learning) to classify 2,032 different skin diseases using dermoscopic images. An objective comparison of
CNN classification with that of 21 board-certified dermatologists resulted in “on par”
performance, further confirming the veracity of the results. When implemented in conjunction
with the average consumer mobile platform, this approach can result in ease of use and early
diagnosis. In parallel, studies have also implemented ML-based approaches to quantify the
progression of retinal diseases. In one such study, Arcadu and colleagues applied a deep
learning CNN to detect the aneurysms that cause vision loss due to the progression of Diabetic
Retinopathy (DR). The CNN was also able to detect small and low contrast microaneurysms,
although it was not explicitly designed to accomplish that task. Diabetic retinopathy is a common eye condition, affecting around 60 percent of type 1 diabetes patients, yet it is difficult to detect in its preliminary stages. Early prediction obtained using a CNN approach
has the potential to prevent and delay irreversible damage to patients' vision. X-rays have been
used for decades to identify abnormalities in the chest cavity and lung disease, though an in-depth, careful examination by a trained radiologist is often required. In a recent study,
Rajpurkar and colleagues conducted a retrospective study to explore the capacities of a 121-
layer convolutional neural network to examine a collection of chest x-rays with various thoracic
diseases and identify irregularities in an attempt to mimic the detection by trained radiologists
[8]. In this comparison, the CNN achieved an identification accuracy of 81%, 2% higher than that of the radiologists. Although applied retrospectively, this study,
along with CNNs developed by Tsai and Tao, Asif and colleagues, Liang and colleagues, and
Lee and colleagues, indicates incredible support that these approaches can provide in
examining and diagnosing illnesses, further reducing the burden on healthcare professionals.

ML-based approaches have also been implemented to predict and diagnose disease progression
of neurodegenerative diseases, including Alzheimer's disease, Parkinson's disease, serious
mental disorders including Psychosis, depression, PTSD, and developmental disorders,
including autism and ADHD. In one such study, Faturrahman and colleagues presented a higher-level model using DBNs (unsupervised learning) for predicting Alzheimer's Disease (AD) progression using structural MRI images, resulting in 91.76% accuracy, 90.59%
implement strategies to delay the symptoms and degeneration. Using decision tree models and
feature-rich data sets consisting of functional MRI, cognitive behavior scores, and age, Patel
and colleagues developed a model to predict the diagnosis and treatment response for
depression. The model scored 87.27% accuracy for diagnosis and 89.47% accuracy for
treatment response. This predictive diagnosis can help identify patients with depression and
develop personalized treatment plans based on their responses. With the current ML
applications in medical imaging, it is evident that its use has valuable implications for
advancing the medical field due to its pronounced advantages in accuracy, classification,
sensitivity, and specificity in prediction and diagnoses.
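The accuracy, sensitivity, and specificity figures quoted throughout this section come straight from a model's confusion matrix. A minimal helper makes the definitions concrete:

```python
def confusion_metrics(y_true, y_pred):
    """Accuracy, sensitivity (recall on positive cases) and
    specificity (recall on negative cases) for binary labels."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    accuracy = (tp + tn) / len(y_true)
    sensitivity = tp / (tp + fn)   # fraction of true cases caught
    specificity = tn / (tn + fp)   # fraction of healthy correctly cleared
    return accuracy, sensitivity, specificity
```

Reporting sensitivity and specificity separately matters clinically: a screening model can have high accuracy on an imbalanced population while still missing most true cases.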

Genetic Engineering and Genomics

The discovery of the adaptive DNA system known as CRISPR (Clustered Regularly Interspaced Short Palindromic Repeats) has revolutionized the field of genetic engineering. The exploration of "programmable endonucleases" has simplified genetic modification and diagnosis while dropping the cost of the procedure dramatically. The recent application of CRISPR with Cas (CRISPR-associated) proteins, such as Cas9 and Cas13a, has transformed genetic editing, though the
tool is not perfect. Recently, several machine learning techniques for predicting off-target
mutations in Cas9 gene editing have emerged. A new program developed by Jiecong Lin and
Ka-Chun Wong has improved the quality of these machine learning predictions by using deep
CNNs (AUC score: 97.2%) and deep FFs (AUC score: 97%). Considering the space for error
and off-target mutations using the Cas9 tool, scientists are using Cas9 for developing activity
predictors and more reliable Cas9 variants to reduce error. These models include higher
accuracy and fidelity Cas9 variants, hyper-accurate Cas9 variants, and guide RNA design tools
using deep learning.

Outside of CRISPR gene editing, O'Brien and colleagues have developed a service to provide
efficiency in nucleotide editing using random forest algorithms (supervised learning) to
investigate how different nucleotide compositions influence the HDR (homology-directed
repair) efficiency. They developed the Computational Universal Nucleotide Editor (CUNE),
used to find the most efficient method to identify a precise location to enter a specific point
mutation and predict HDR efficiency. Additionally, Pan and colleagues have developed a
model for prediction in gene editing named ToxDL that uses a CNN approach to predict protein
toxicity in-vivo using only the sequence data. Another branch of genetic engineering,
pharmacogenomics, has also made significant strides in the use of AI and machine learning to
determine stable doses of medications that have become popular. In one such study, Tang and
colleagues implemented an ML-based approach to determine a stable Tacrolimus dose (the
immunosuppressive drug) for patients who received a renal transplant to reduce the risk of
acute rejection. The use of machine learning in pharmacogenomics has recently been applied
in psychiatry, oncology, bariatrics, and neurology.
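Models like CUNE operate on numeric features derived from the target-site sequence. The sketch below shows the simplest such featurization (per-base fractions plus GC content); the actual CUNE feature set is richer, so this is only a hedged illustration of the idea:

```python
from collections import Counter

def composition_features(seq):
    """Fraction of A/C/G/T plus GC content -- simple
    nucleotide-composition features a model such as a random
    forest could use to score candidate target sites."""
    seq = seq.upper()
    counts = Counter(seq)
    n = len(seq)
    feats = {base: counts.get(base, 0) / n for base in "ACGT"}
    feats["GC"] = feats["G"] + feats["C"]
    return feats
```

A random forest trained on such feature vectors, paired with measured editing efficiencies, can then rank candidate sites by predicted HDR efficiency.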

Machine learning applications of genetic engineering have also been instrumental in the fight
against COVID-19. In a recent study, Malone and colleagues utilized software based on
machine learning algorithms to “predict which antigens have the required features of HLA-
binding, processing, presentation to the cell surface, and the potential to be recognized by T
cells to be good clinical targets for immunotherapy". The use of immunogenicity predictions from this software, along with the presentation of antigen to infected host-cells, allowed the
team to successfully profile the “entire SARS-CoV2 proteome” as well as epitope hotspots.
These discoveries help predict blueprints for designing universal vaccines against the virus that
can be adapted across the global population.

Operationalizing Consumerism using AI

Technology has often impacted and changed consumer behaviour. The rise of e-commerce
platforms and online shopping has transformed how consumers purchase products, compare
prices, check reviews, and access a broader range of options worldwide from the comfort of
their homes. Social media platforms are becoming influential in how consumers engage and
communicate with brands, while voice assistants and smart speakers are changing how
consumers interact with technology using voice commands for online shopping, product
recommendations, or even ordering services. Consumers have repeatedly adapted their behaviour, integrating new technologies into their interactions with brands.

Generative AI tools, including OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot, have emerged as leading technologies that shape consumer engagement and have the
potential to influence consumer behaviour profoundly. These powerful tools enable businesses
to provide personalized recommendations, enhance customer support, and facilitate interactive
shopping experiences, empowering them to influence consumers' decision-making processes,
preferences, and overall brand engagement. Research in this area has already begun, with
multidisciplinary investigations and domain-specific studies focusing on education, retail, and
banking revealing significant changes in consumer behavior resulting from generative AI tools.

As the impending shift in consumer behavior looms, deepening our theoretical understanding
and developing effective strategies to manage and adapt to this transformative wave becomes
critical. Exploring optimal approaches to navigate this evolving landscape will allow
businesses to fully leverage the power of generative AI tools, leading to the effective
management and optimization of consumer behavior. By embracing this technology and its
potential impact, businesses can stay ahead in the dynamic marketplace and provide enhanced
experiences that align with consumers' changing expectations and preferences.

In response to the growing demand for a theoretical understanding of consumer behavior regarding emerging technologies like generative AI tools, this topic pursues four primary objectives. Firstly, it aims to postulate the potential impact of generative AI tools on consumer
behavior. Secondly, it seeks to identify the implications for research, practice, and policy
stemming from these changes in consumer behavior. Thirdly, it explores strategies for brands
to effectively manage and prepare for the evolving nature of consumer interactions,
engagements, and consumption in the context of generative AI. Lastly, it presents a
comprehensive research agenda that outlines relevant areas for further exploration in this
domain.

By addressing these objectives, it contributes to the broader understanding of the implications and opportunities that arise from the intersection of generative AI and consumer behavior, lays the foundation for future research endeavors, and guides academics and industry professionals seeking to navigate the changing landscape of consumer behavior influenced by
technologies. Ultimately, this article aims to foster a comprehensive understanding of the
challenges and possibilities of generative AI tools and provides valuable insights for future
advancements in theory, practice, and policy in consumer behavior.

CHANGING CONSUMER BEHAVIOUR

This section sheds light on the anticipated changes in consumer behaviors resulting from the
adoption of generative AI technology. It explores how consumers and brands are expected to
utilize this technology in their interactions. Drawing upon existing literature and practices, this
section provides valuable insights into the potential impact of generative AI on consumer
behaviors. By examining the future landscape, this section aims to identify the transformative
ways consumers will engage with brands, products, and services. It explores how generative
AI tools will shape consumer decision-making processes, preferences, and overall brand
interaction.

1 Recommendations

In the ever-evolving landscape of consumer behaviour, individuals actively seek alternative sources of information that extend beyond traditional avenues such as social media and online
reviews. As a result, consumers will increasingly turn to generative AI tools to seek out brand,
product, or service information and recommendations. This shift highlights the growing
reliance on AI-driven systems to provide personalized insights. Consumers are now seeking
answers directly from generative AI, leveraging its capabilities to access relevant and tailored
information. Engaging with generative AI allows consumers to make more informed decisions
about their desired brands or offerings. However, it is essential to acknowledge that the
recommendations provided by generative AI are based on the data used for training and,
therefore, may have limitations or be influenced by outdated information. Nonetheless,
consumers will be better informed as generative AI tools become more advanced and refined.
As a result, brands must recognize the significance of this shift and adapt their strategies to
effectively meet the changing needs of consumers who are now turning to AI tools for
information and recommendations about brands, products, and services.

2 Content creation

The implications of generative AI extend beyond text generation to include image generation,
which has enormous potential to transform content creation. With the advent of AI-powered
image generation, consumers can now create visual content that was once limited to
professional designers or artists. This democratization of content creation empowers
individuals to express their creativity and share their unique perspectives across various
platforms. For instance, in social media, consumers can utilize generative AI tools to generate
visually appealing graphics, illustrations, or even memes to enhance their posts and captivate
their audience. They can leverage AI-generated images to augment their storytelling, create
visually stunning collages, or experiment with artistic filters and effects. This opens up new
avenues for self-expression and enables consumers to participate actively in content creation.

Moreover, generative AI enables consumers to create text and high-quality images for reviews.
For instance, instead of relying solely on text-based feedback, consumers can generate
accompanying visuals that provide additional context or showcase the product in action,
enhancing the overall consumer experience and providing valuable insights to other potential
buyers. The implications of generative AI for content creation are vast, as it empowers
consumers to become creators themselves and contribute their unique perspectives to the digital
landscape. Using generative AI tools, consumers can unleash their creativity, enhance their
social media presence, and enrich user-generated content with visually compelling images.

3 Virtual assistant

By leveraging the capabilities of generative AI, virtual shopping assistants transform how
consumers engage with products and make purchasing decisions, offering personalized
recommendations, and revolutionizing virtual shopping assistance, thereby shaping consumer
behaviour through information processing and decision-making support. One example is the
integration of generative AI-powered chatbots within e-commerce platforms, which can
engage with consumers in real-time, helping them navigate vast product catalogues, understand
their preferences, and provide tailored recommendations based on these preferences. By
offering personalized product suggestions, virtual shopping assistants powered by generative
AI enhance the overall shopping experience and guide consumers toward products that best
align with their requirements.

Furthermore, generative AI can shape consumer behavior by allowing consumers to try on
products virtually. Through augmented reality (AR) or virtual reality (VR) technologies, virtual
shopping assistants can simulate trying on clothes, accessories, or home decor items. By
generating virtual representations of products on the consumer's body or in their living space,
generative AI enhances the visualization and evaluation process, enabling consumers to make
more confident purchasing decisions.

The implications of generative AI for virtual shopping assistance are vast, as it revolutionizes
how consumers explore, evaluate, and decide on products. By leveraging generative AI's
capabilities, virtual shopping assistants offer personalized recommendations, facilitate product
search and evaluation, and even provide virtual try-on experiences, all of which contribute to
shaping consumer behavior through information processing and decision-making support.

4 Shaping global trends and consumer choices

Generative AI has significant implications for understanding global trends and their impact on
consumer choices, both at local and international levels. It enables individuals to gain insights
into diverse cultures, lifestyles, and consumption patterns through its engagement with users,
ultimately shaping their perceptions and behaviors. One example is the use of generative AI
for language learning and translation. Language barriers often limit individuals' access to
information and global trends. However, generative AI-powered language tools can facilitate
real-time translation of signs, brand messages, and other textual content, giving users a
contextualized understanding of the information and allowing individuals to learn about new
cultures, consume global content, and make more informed choices.

Generative AI analyses vast amounts of data from various sources, enabling it to understand
global trends by identifying emerging trends, consumer preferences, and market dynamics
across different regions. This is achieved through its algorithms, which process and extract
patterns from global datasets. Generative AI's analysis of global trends helps businesses and
consumers align their choices, lifestyles, and consumption patterns with the evolving
international landscape. It also shapes consumer behavior through personalized
recommendations and content based on individual preferences and global trends. Generative
AI facilitates language learning, translation, and access to contextualized information through
user engagement.

5 Transforming purchase behaviour

Generative AI has profound implications for purchase behavior, as it empowers consumers
with useful, relevant, real-time, contextual, and personalized information that transforms how
they engage with friends, social media influencers, and consumers from different parts of the
world. By leveraging the vast amount of data available, generative AI enables consumers to
become more educated, seek extensive information before making purchasing decisions, and
generate data that contributes to these generative AI tools' continuous learning and
improvement.

One relevant example of the impact of generative AI on purchase behavior is the ability to
provide real-time product recommendations with contextual information such as browsing
history, previous purchases, and demographic information. This level of personalization
increases the likelihood of finding products that meet a consumer's expectations. Furthermore,
generative AI can shape purchase behavior by facilitating access to reviews and ratings from
various sources. Instead of relying solely on a limited number of reviews or the opinions of a
few influencers, consumers can leverage generative AI tools to gather information from various
sources and perspectives. This enables them to make more informed decisions, considering a
diverse range of opinions, and experiences before making a purchase.

Operationalizing a New Supply Chain

Generative AI for supply chain refers to the application of advanced artificial intelligence
systems within the supply chain management industry.
Unlike traditional AI, which analyzes input to produce a predetermined output, generative AI
can identify novel patterns and trends within data, anticipate unforeseen scenarios, and
propose solutions that haven’t been explicitly programmed.
Moreover, generative AI can create new and original content, predictions, and data-driven
strategies.
For example, in the supply chain sector, generative AI can simulate complex
logistics networks to predict the outcomes of various strategies under different conditions.
Generative AI for supply chains can generate demand forecasts, optimize routing, and
automate inventory management.
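By way of contrast with such AI-driven forecasting, a classical baseline like simple exponential smoothing fits in a few lines of Python; the weekly demand figures below are made up purely for illustration:

```python
# Simple exponential smoothing: a classical demand-forecasting baseline.
# The demand series and smoothing factor are hypothetical.
demand = [100, 120, 110, 130, 125, 140]   # units sold per week
alpha = 0.5                                # smoothing factor

forecast = demand[0]
for d in demand:
    # Each new forecast blends the latest observation with the old forecast.
    forecast = alpha * d + (1 - alpha) * forecast

print(forecast)  # → 131.25
```

Any AI-based forecaster would be expected to beat this kind of baseline before being adopted.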

Transforming Supply Chain Operations Using AI


Steps to Implement Generative AI in Supply Chain

Step 1: Assess Current Supply Chain Challenges

To successfully integrate Generative AI, begin by understanding your supply chain’s pain
points. Identifying bottlenecks, inefficiencies, or recurring issues will help determine where AI
can add value.

• Pinpoint specific challenges, like poor demand forecasting or inventory bottlenecks.
• Gather input from stakeholders to get a comprehensive view.
• Analyze existing data to back up identified challenges.

Step 2: Define Clear Objectives

Set measurable and realistic goals for AI integration. Knowing precisely what you want to
achieve will guide your AI implementation strategy and ensure it aligns with business
priorities.

• Define key metrics like cost reduction, improved forecasting accuracy, or faster deliveries.
• Prioritize objectives based on potential ROI.
• Ensure objectives are time-bound and achievable.

Step 3: Data Collection and Preparation

Data is the foundation of any AI initiative. Collect, clean, and prepare data from all parts of
your supply chain to train AI models effectively.

• Gather historical data from inventory, logistics, demand forecasting, etc.
• Standardize the data to ensure consistency.
• Address data quality issues like gaps, duplicates, or inaccuracies.

Step 4: Choose the Right AI Tools and Platforms

Choose AI tools that are well-suited for your supply chain needs. Consider factors such as
scalability, ease of integration, and alignment with your existing infrastructure.

• Evaluate available Generative AI platforms like AWS, Azure, or custom solutions.
• Check compatibility with existing ERP and CRM systems.
• Choose tools that can scale as per future requirements.

Step 5: Develop Custom AI Models

Developing AI models tailored to your unique supply chain challenges ensures optimal
performance. These models could be aimed at optimizing various aspects, such as demand or
route planning.

• Collaborate with data scientists to create models specific to identified challenges.
• Test models iteratively to validate their effectiveness.
• Ensure models are easy to adjust based on business changes.

Step 6: Pilot Implementation

Before rolling out AI at full scale, run a pilot project. A controlled test helps validate
assumptions and provides a practical view of the expected impact.

• Select a small section of the supply chain for the pilot.
• Measure the impact using the predefined objectives.
• Gather feedback from teams to refine the AI implementation.

Step 7: Train Employees and Integrate Systems

Properly training your workforce is crucial to success. Employees must be familiar with how
to interact with the AI, ensuring smoother implementation and greater efficiency gains.

• Conduct training workshops to explain AI benefits and usage.
• Set up help desks or resources to support employees during integration.
• Ensure the AI integrates well with current supply chain software.

Step 8: Monitor and Optimize

Once implemented, it’s essential to continuously monitor AI models for effectiveness. Use
performance metrics to track success and areas needing adjustment.

• Keep an eye on KPIs like forecast accuracy and lead time reduction.
• Make necessary adjustments based on insights gained.
• Introduce continuous learning for AI models to adapt to dynamic scenarios.

Step 9: Scale Across the Supply Chain

Once the pilot has shown success, expand AI adoption to other areas. Gradual scaling allows
for effective management of issues that might arise in a wider context.

• Expand AI implementation to additional supply chain functions.
• Ensure that scaling doesn’t impact existing performance.
• Upgrade infrastructure if needed for wider adoption.

Step 10: Review and Innovate

To maintain an edge, continuous review is necessary. Generative AI should be treated as an
ongoing process rather than a one-time implementation.

• Schedule regular reviews to track performance against goals.
• Look out for emerging AI technologies to stay competitive.
• Keep improving models to meet new supply chain demands.

Benefits of Generative AI in Supply Chain

1. Enhanced Demand Forecasting

• Better Predictions: Generative AI can analyze historical and real-time data to predict customer
demand with greater accuracy.
• Adaptability: It can quickly adjust to changing market trends or sudden shifts, ensuring
companies stay ahead.

2. Inventory Optimization

• Reduced Overstock and Stockouts: AI dynamically balances inventory levels, ensuring just
the right amount of stock is available to meet demand without overburdening warehouses.
• Efficient Resource Allocation: It minimizes excess storage costs and avoids disruptions
caused by stockouts.

3. Supplier Management

• Risk Assessment: AI continuously evaluates suppliers based on risk metrics, reliability, and
performance, helping companies select the most dependable partners.
• Proactive Issue Management: AI identifies potential issues with suppliers in advance,
allowing businesses to prevent disruptions.

4. Logistics and Transportation Efficiency

• Route Optimization: Generative AI finds the most efficient routes for transportation, cutting
down delivery times and costs.
• Reduced Fuel Consumption: Efficient planning directly translates to lower transportation
costs and reduced carbon emissions.

5. Improved Decision-Making

• Data-Driven Insights: Generative AI processes vast datasets, providing actionable insights for
better decision-making at every stage of the supply chain.
• Scenario Analysis: AI can simulate different supply chain scenarios, helping leaders make
informed decisions under uncertain conditions.

6. Cost Reduction

• Lower Operational Costs: Automation and optimized logistics lead to substantial cost savings
across various aspects of the supply chain.
• Reduced Human Intervention: AI-powered automation minimizes the need for manual tasks,
cutting down labor expenses.

7. Faster Product Development

• Innovative Idea Generation: Generative AI can assist in developing new product designs by
analyzing market trends and customer preferences.
• Accelerated Prototyping: AI helps create rapid prototypes, bringing products to market faster
than traditional methods.

8. Increased Supply Chain Visibility

• Real-Time Monitoring: Generative AI provides real-time tracking of inventory, logistics, and
supplier activities, enhancing end-to-end visibility.
• Quick Problem Resolution: Issues can be flagged in real time, allowing for faster resolutions
and minimizing disruptions.

9. Enhanced Customer Satisfaction

• On-Time Deliveries: Optimized logistics ensure timely deliveries, resulting in happier
customers and better brand loyalty.
• Personalized Offerings: AI analyzes customer preferences, enabling companies to offer
personalized services and products.

10. Risk Mitigation

• Predictive Insights: Generative AI anticipates potential risks, from supply chain disruptions
to market changes, allowing proactive risk management.
• Business Continuity: AI ensures supply chain resilience by quickly adapting to unexpected
disruptions, keeping operations running smoothly.

Machine Learning:
• Machine learning is a growing technology that enables computers to learn
automatically from past data.
• Machine learning uses various algorithms to build mathematical models and
make predictions using historical data or information.
• Currently, it is being used for various tasks such as image recognition, speech
recognition, email filtering, Facebook auto-tagging, recommender systems, and many
more.
• The term machine learning was first introduced by Arthur Samuel in 1959. We can
define it in a summarized way as:
• Machine learning enables a machine to automatically learn from data, improve
performance from experience, and predict things without being explicitly
programmed.
Types of Machine Learning Systems


1. Supervised Machine Learning: As its name suggests, supervised machine learning is
based on supervision.
• In the supervised learning technique, we train the machines using a
"labelled" dataset, and based on this training, the machine predicts the output.
• The main goal of the supervised learning technique is to map the input variable (x)
to the output variable (y). Some real-world applications of supervised learning are
risk assessment, fraud detection, spam filtering, etc.
Categories of Supervised Machine Learning:
Supervised machine learning can be classified into two types of problems, which are
given below:
• Classification
• Regression
Classification: Classification algorithms are used to solve classification problems
in which the output variable is categorical, such as "Yes" or "No", "Male" or "Female",
"Red" or "Blue", etc.
• The classification algorithms predict the categories present in the dataset.
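As a minimal illustration of classification, here is a nearest-neighbour sketch in pure Python; the four labelled points are invented for the example:

```python
from math import dist

# A 1-nearest-neighbour classifier: predict the label of the closest
# training example. The tiny labelled dataset is hypothetical.
train = [
    ((1.0, 1.0), "Red"),
    ((1.2, 0.8), "Red"),
    ((5.0, 5.0), "Blue"),
    ((4.8, 5.2), "Blue"),
]

def predict(point):
    """Return the label of the training example nearest to `point`."""
    features, label = min(train, key=lambda ex: dist(ex[0], point))
    return label

print(predict((1.1, 0.9)))  # → Red
print(predict((5.1, 4.9)))  # → Blue
```

The "supervision" is exactly the labels attached to each training point: the model maps a new input x to an output label y learned from them.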
Regression:
• Regression algorithms are used to solve regression problems in which the output
variable is continuous and the goal is to model the relationship between the input and
output variables.
• These are used to predict continuous output variables, such as market trends, weather
prediction, etc.
Some popular Regression algorithms are given below:
• Simple Linear Regression Algorithm
• Multivariate Regression Algorithm
• Decision Tree Algorithm
• Lasso Regression
Advantages:
• Since supervised learning works with labelled datasets, we have an exact idea
about the classes of objects.
• These algorithms are helpful in predicting the output on the basis of prior experience.
Disadvantages:
• These algorithms struggle with very complex tasks.
• The model may predict the wrong output if the test data differs from the training data.
• Training these algorithms can require a lot of computational time.

Unsupervised Machine Learning:
• Unsupervised learning is different from the supervised learning technique; as its name
suggests, there is no need for supervision.
• In unsupervised machine learning, the machine is trained using an unlabeled
dataset, and the machine predicts the output without any supervision.
• The main aim of the unsupervised learning algorithm is to group or categorize the
unsorted dataset according to similarities, patterns, and differences.
• Machines are instructed to find the hidden patterns in the input dataset.
Categories of Unsupervised Machine Learning:
1) Clustering:
• The clustering technique is used when we want to find the inherent groups in the
data.
• It is a way to group objects into clusters such that objects with the most
similarities remain in one group and have few or no similarities with the objects of
other groups.
• An example of clustering is grouping customers by their purchasing
behavior. Some popular unsupervised algorithms are given below:
• K-Means Clustering algorithm
• Mean-shift algorithm
• DBSCAN Algorithm
• Principal Component Analysis (strictly a dimensionality-reduction technique)
• Independent Component Analysis (likewise used for feature extraction rather than clustering itself)
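The assignment-and-update loop at the heart of K-Means can be sketched in a few lines of pure Python; the one-dimensional points and starting centroids are chosen purely for illustration:

```python
# A bare-bones K-Means sketch (1-D data, k = 2). Real implementations
# handle higher dimensions, empty clusters, and convergence checks.
points = [1.0, 1.5, 2.0, 10.0, 11.0, 12.0]
centroids = [1.0, 10.0]  # initial guesses

for _ in range(10):  # a few refinement iterations
    # Assignment step: attach each point to its nearest centroid.
    clusters = [[], []]
    for p in points:
        idx = min(range(2), key=lambda i: abs(p - centroids[i]))
        clusters[idx].append(p)
    # Update step: move each centroid to its cluster's mean.
    centroids = [sum(c) / len(c) for c in clusters]

print(centroids)  # → [1.5, 11.0]
```

After a couple of iterations the centroids settle on the means of the two natural groups, with no labels ever being supplied.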
2) Association:
• Association rule learning is an unsupervised learning technique that finds
interesting relations among variables within a large dataset.
• The main aim of this learning algorithm is to find the dependency of one data item on
another and map the variables accordingly, so that the discovered rules can be exploited,
for example, to maximize profit in market basket analysis.
• Some popular algorithms of association rule learning are the Apriori algorithm, Eclat,
and the FP-growth algorithm.
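The support counting that algorithms such as Apriori build on can be illustrated with a toy market-basket example; the transactions below are made up:

```python
from itertools import combinations
from collections import Counter

# Support counting, the core of association rule mining:
# support(X) = fraction of transactions containing itemset X.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk", "butter"},
]

pair_counts = Counter()
for t in transactions:
    for pair in combinations(sorted(t), 2):
        pair_counts[pair] += 1

support = {p: c / len(transactions) for p, c in pair_counts.items()}
print(support[("bread", "milk")])  # → 0.5
```

Apriori prunes the search by keeping only itemsets whose support exceeds a chosen threshold before forming rules such as "bread → milk".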

Advantages:
• These algorithms can be used for more complicated tasks than the supervised ones,
because they work on unlabeled datasets.
• Unsupervised algorithms are preferable for various tasks, as obtaining an unlabeled
dataset is easier than obtaining a labelled one.
Disadvantages:
• The output of an unsupervised algorithm can be less accurate, as the dataset is not
labelled and the algorithms are not trained with the exact output in advance.
• Working with unsupervised learning is more difficult, as it uses unlabeled
datasets that do not map to a known output.
Semi-Supervised Learning:
• Semi-supervised learning is a type of machine learning algorithm that lies between
supervised and unsupervised machine learning.
• It represents the intermediate ground between supervised learning (with labelled
training data) and unsupervised learning (with no labelled training data), and uses a
combination of labelled and unlabeled datasets during the training period.
• The concept of semi-supervised learning was introduced to overcome the drawbacks
of supervised and unsupervised learning algorithms.
• We can picture these algorithms with an example: a student learning a concept under
the supervision of an instructor corresponds to supervised learning, while a student
analyzing the same concept on their own, without any help from the instructor,
corresponds to unsupervised learning.
• Under semi-supervised learning, the student first studies the concept under the
guidance of an instructor at college and then revises it on their own.
Advantages:
• The algorithms are simple and easy to understand.
• It is highly efficient.
• It addresses the drawbacks of supervised and unsupervised learning algorithms.
Disadvantages:
• Iteration results may not be stable.
• These algorithms cannot be applied to network-level data.
• Accuracy can be low.
Reinforcement Learning:
• Reinforcement learning works on a feedback-based process, in which an AI agent (a
software component) automatically explores its surroundings by trial and error:
taking actions, learning from experience, and improving its performance.
• The agent is rewarded for each good action and penalized for each bad action; hence
the goal of a reinforcement learning agent is to maximize the rewards.
• In reinforcement learning there is no labelled data as in supervised learning; agents
learn from their experiences only.
• The reinforcement learning process is similar to how a human learns; for example, a
child learns various things through experience in day-to-day life.
• An example of reinforcement learning is playing a game, where the game is the
environment, the moves of the agent at each step define the states, and the goal of the
agent is to get a high score.
• The agent receives feedback in terms of rewards and punishments.
• Due to the way it works, reinforcement learning is employed in different fields such
as game theory, operations research, information theory, and multi-agent systems.
Categories of Reinforcement Learning:
Positive Reinforcement Learning: Positive reinforcement increases the tendency that
the required behavior will occur again by adding something desirable. It strengthens
the agent's behavior and impacts it positively.
Negative Reinforcement Learning: Negative reinforcement works in the opposite
direction: it increases the tendency that a specific behavior will occur again by removing
or avoiding a negative condition.
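The reward-and-penalty loop described above can be sketched with tabular Q-learning on a hypothetical five-state corridor, where the agent starts at one end and is rewarded only for reaching the other; all parameter values are illustrative, not prescriptive:

```python
import random

# Tabular Q-learning on a 5-state corridor (states 0..4, goal = 4).
random.seed(0)
n_states, actions = 5, [-1, +1]          # move left or right
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2    # learning rate, discount, exploration

for _ in range(200):                      # episodes
    s = 0
    while s != 4:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda a: Q[(s, a)])
        s2 = min(max(s + a, 0), 4)       # environment transition (walls at 0 and 4)
        r = 1.0 if s2 == 4 else 0.0      # reward only at the goal
        # Q-learning update: nudge Q toward reward + discounted future value.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
        s = s2

# After training, the value of moving right from state 3 approaches 1.
print(round(Q[(3, 1)], 2))
```

With enough episodes the greedy policy moves right in every state, purely from the reward signal: no labelled examples are ever given.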

Bayesian classifiers
Introduction
● Bayesian classifiers are statistical classifiers based on Bayes' theorem
● Bayesian classifiers can predict class membership probabilities, i.e. the probability that a
given tuple belongs to a particular class.
● They use labelled training data to build a model and then use this model to classify new data.

Let’s understand with an example: will it rain tomorrow?

There are only two possible events for this question:
A: It is going to rain tomorrow.
B: It will not rain tomorrow.
If you think intuitively:
● It is either going to rain tomorrow or it is not going to rain tomorrow.
● So, naively, there is a 50% chance of rain tomorrow. Correct?
Bayes' theorem argues that the probability of an event taking place changes if there is
information available about a related event.
● This means that if you recall the weather conditions for the last week, and you
remember that it has actually rained every single day, your answer will no longer be 50%.
● The Bayesian approach provides a way of explaining how you should change your existing
beliefs in the light of new evidence.
● Bayes' rule’s emphasis on prior probability makes it well suited to a wide
range of scenarios.
What is Bayes' Theorem?

● Bayes' theorem, named after the 18th-century British mathematician Thomas Bayes, is a
mathematical formula for determining conditional probability:
P(A|B) = P(B|A) × P(A) / P(B)
where P(A|B) is the probability of A given that B has occurred, P(B|A) is the probability of B
given A, and P(A) and P(B) are the prior probabilities of A and B.
● The theorem provides a way to revise existing predictions or theories given new or additional
evidence.
● In finance, Bayes' theorem can be used to rate the risk of lending money to potential
borrowers.
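As a worked example of revising a belief with Bayes' theorem, consider a hypothetical diagnostic test; all the probabilities below are invented for illustration:

```python
# Bayes' theorem with hypothetical numbers for a diagnostic test.
# Prior: 1% of patients have the disease. The test detects 95% of true
# cases (sensitivity) but also flags 5% of healthy patients.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05

# Law of total probability: overall chance of a positive test result.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

# Posterior: probability of disease given a positive test.
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # → 0.161
```

Even with a highly sensitive test, the low prior means a positive result raises the probability of disease to only about 16%, which is exactly why Bayes' emphasis on prior probability matters.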
What are Bayesian Classifiers?
● Statistical classifiers.
● Predict class membership probabilities, such as the probability that a given tuple belongs to
a particular class.
● Based on Bayes’ theorem
● Exhibits high accuracy and speed when applied to large databases.
DECISION TREE
A decision tree is a flowchart-like structure used to make decisions or predictions. It consists
of nodes representing decisions or tests on attributes, branches representing the outcome of
these decisions, and leaf nodes representing final outcomes or predictions. Each internal node
corresponds to a test on an attribute, each branch corresponds to the result of the test, and
each leaf node corresponds to a class label or a continuous value.
Structure of a Decision Tree
1. Root Node: Represents the entire dataset and the initial decision to be made.
2. Internal Nodes: Represent decisions or tests on attributes. Each internal node has one or
more branches.
3. Branches: Represent the outcome of a decision or test, leading to another node.
4. Leaf Nodes: Represent the final decision or prediction. No further splits occur at these
nodes.
How Decision Trees Work?
The process of creating a decision tree involves:
1. Selecting the Best Attribute: Using a metric like Gini impurity, entropy, or information
gain, the best attribute to split the data is selected.
2. Splitting the Dataset: The dataset is split into subsets based on the selected attribute.
3. Repeating the Process: The process is repeated recursively for each subset, creating a
new internal node or leaf node until a stopping criterion is met (e.g., all instances in a
node belong to the same class or a predefined depth is reached).
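The attribute-selection step can be made concrete by computing entropy and information gain for a toy dataset; the outlook → play records below are hypothetical:

```python
from math import log2
from collections import Counter

# Entropy and information gain, the metrics used to pick the best
# attribute for a decision-tree split. Toy (outlook, play?) records.
data = [
    ("sunny", "no"), ("sunny", "no"), ("overcast", "yes"),
    ("rain", "yes"), ("rain", "yes"), ("rain", "no"),
    ("overcast", "yes"), ("sunny", "no"),
]

def entropy(labels):
    """Shannon entropy of a list of class labels."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

labels = [y for _, y in data]
base = entropy(labels)  # 4 yes / 4 no → 1.0 bit

# Information gain = entropy before split - weighted entropy after split.
gain = base
for value in {x for x, _ in data}:
    subset = [y for x, y in data if x == value]
    gain -= len(subset) / len(data) * entropy(subset)

print(round(gain, 3))  # → 0.656
```

The attribute with the highest information gain (here, the only attribute) becomes the test at the node, and the process recurses on each branch.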
Advantages of Decision Trees
• Simplicity and Interpretability: Decision trees are easy to understand and interpret. The
visual representation closely mirrors human decision-making processes.
• Versatility: Can be used for both classification and regression tasks.
• No Need for Feature Scaling: Decision trees do not require normalization or scaling of
the data.
• Handles Non-linear Relationships: Capable of capturing non-linear relationships
between features and target variables.
Disadvantages of Decision Trees
• Overfitting: Decision trees can easily overfit the training data, especially if they are
deep with many nodes.
• Instability: Small variations in the data can result in a completely different tree being
generated.
• Bias towards Features with More Levels: Features with more levels can dominate the
tree structure.

Regression in Machine Learning
It is a supervised machine learning technique used to predict the value of the dependent
variable for new, unseen data. It models the relationship between the input features and the
target variable, allowing for the estimation or prediction of numerical values.
Regression analysis is used when the output variable is a real or continuous value, such
as “salary” or “weight”. Many different models can be used; the simplest is linear
regression, which tries to fit the data with the best hyperplane passing through the points.
Terminologies Related to Regression Analysis:
• Response Variable: The primary factor to predict or understand in regression, also
known as the dependent variable or target variable.
• Predictor Variable: Factors influencing the response variable, used to predict its values;
also called independent variables.
• Outliers: Observations with significantly low or high values compared to others, which
can distort results and are often removed or handled specially.
• Multicollinearity: High correlation among independent variables, which can complicate
the ranking of influential variables.
• Underfitting and Overfitting: Overfitting occurs when an algorithm performs well on
training data but poorly on test data, while underfitting indicates poor performance on both
datasets.
Regression Types
The main types of regression are:
• Simple Regression
o Used to predict a continuous dependent variable based on a single independent
variable.
o Simple linear regression should be used when there is only a single
independent variable.
• Multiple Regression
o Used to predict a continuous dependent variable based on multiple
independent variables.
o Multiple linear regression should be used when there are multiple independent
variables.
• Nonlinear Regression
o The relationship between the dependent variable and independent variable(s)
follows a nonlinear pattern.
o Provides flexibility in modeling a wide range of functional forms.
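The simplest case, simple linear regression, can be fitted in closed form with ordinary least squares; the data points below are invented so that the fit is exact:

```python
# Ordinary least squares for simple linear regression (y = a + b*x),
# in pure Python. The hypothetical data lie exactly on y = 1 + 2x.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Closed-form slope: covariance(x, y) / variance(x).
b = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
# Intercept: the fitted line passes through the point of means.
a = mean_y - b * mean_x

print(a, b)  # → 1.0 2.0
```

Multiple regression generalizes the same idea to several predictors, where the closed form is usually expressed with the normal equations in matrix form.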
