Leveraging Generative AI
Ashling Partners | Solutions Engineering | Alp Uguray, 4x UiPath MVP
Introductions
Alp Uguray
• Senior Solutions Engineer at Ashling Partners
• 4x UiPath MVP Award
• Host & Creator of the Masters of Automation Podcast (https://themasters.ai)
Innovation Ambition Matrix
Axes: WHERE TO PLAY (markets & customers) and HOW TO WIN (products & assets)
• CORE: already doing it today. Use existing products & assets to serve existing markets & customers.
• ADJACENT: not doing today, but plugs right into what we are doing today. Add incremental products & assets to enter adjacent markets and serve adjacent customers.
• TRANSFORMATIONAL: a large market opportunity identified, but very different from what we are doing today. Develop new products & assets to create new markets and target new customers.
Importance of Scenario Planning
Driven by productivity gains and improved customer and employee experiences, the dominance of conversational AI depends on a few different outcomes of its adoption.
Good ones (utopic use)
• Productivity gains from AI-driven rather than manual execution
• Augmentation of task execution through human-in-the-loop (HITL) suggestions and recommendations
Not so good ones (most likely)
• Job displacement and job rewrites
• Digital misuse
• Digital divide
• Increased vulnerability to cyberattacks
Worst ones (cautious view)
• Data privacy erosion
• Fake content and IP law challenges
• Failure of regulation
• LLMs dominate communication channels: you no longer know who you are speaking with, amid widespread adoption of personalized faces, voices, and text
Some Guiding Principles in Adoption
Focus on realistic applications that complement existing business capabilities.
• Prioritize applications by ease of implementation and risk level, gradually moving toward more complex and valuable ones. A key example is using generative AI for knowledge management, which can deliver immediate value across business functions.
Avoid a perfectionist attitude toward developing AI applications; it can trap you in the proof-of-concept phase without ever delivering value.
• Take an iterative product development approach: build applications that solve specific customer or employee problems, then continuously adjust them based on feedback until they are ready to scale. This keeps the effort purposeful and helps transform industry standards.
Ensure that AI adoption doesn't compromise the organization's data and intellectual property security, customer data security, brand credibility, or legal protections.
• Leaders from operations, technology and data teams, and the legal department should collaborate to create guardrails that empower the organization without hindering it.
What's prompt engineering?
Prompt engineering is the 'art' of optimizing natural language for an LLM. Effective prompts give the LLM the relevant context and detail, improving the accuracy and relevance of the response.
The quality of prompts directly affects the output of the model. Effective prompts help the model understand your request and generate appropriate responses, even in complex or ambiguous scenarios.
Tips / Tricks (each is sketched in the example below):
• Zero-shot learning: the model has never seen your data, but makes inferences based on its general understanding
• CoT (chain-of-thought) reasoning: 'break it down, step by step'
• Providing relevant context: 'I am ...' or 'you are ...'
• Ordered instructions: first do 'xyz', then do 'xyz', finally ...
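A minimal sketch of how these tips translate into literal prompt text. The tasks, roles, and numbers below are invented for illustration and are not from the deck:

```python
# Zero-shot: no examples, the model infers from the instruction alone.
zero_shot = (
    "Classify the sentiment of this review as positive, negative, or neutral:\n"
    "'The onboarding took weeks and nobody answered my tickets.'"
)

# Chain-of-thought: ask the model to break the problem down, step by step.
cot = (
    "A customer orders 3 licenses at $40 each and gets a 10% volume discount. "
    "What do they pay? Let's break it down, step by step."
)

# Role context ('you are ...') plus ordered instructions (first ..., then ..., finally ...).
role_and_steps = (
    "You are a solutions engineer summarizing automation opportunities.\n"
    "First, list the manual steps in the process below. "
    "Then, mark which steps are rule-based. "
    "Finally, recommend which steps to automate first.\n\n"
    "Process: invoices arrive by email, are keyed into the ERP, and are matched to POs by hand."
)

print(zero_shot, cot, role_and_steps, sep="\n\n")
```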
Shot Learnings
Zero-shot learning
This is a problem setup in machine learning where the model is asked to classify data it has never seen during training. In other words, the model is expected to infer classes that were not part of its training data, typically by leveraging high-level abstractions and understandings learned during training to make accurate predictions on the unseen classes. Zero-shot learning is especially important in settings where it is costly or time-consuming to collect large labeled datasets for every possible class.
Few-shot learning
Few-shot learning refers to a model's ability to generalize well from a small number of examples, often just one or two, hence the terms "one-shot" or "two-shot" learning. Traditional machine learning models are often trained on large amounts of data; in few-shot learning, the idea is to design models that can extract useful information from a handful of examples and still make accurate predictions, much as humans can often learn a concept from just a few examples.
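A hypothetical pair of prompts contrasting the two setups on the same task; the routing labels and tickets are invented for illustration:

```python
# Zero-shot: the model has seen no labeled examples of this routing task.
zero_shot = (
    "Route this support ticket to 'billing', 'technical', or 'sales':\n"
    "'I was charged twice for my subscription this month.'"
)

# Few-shot: a handful of labeled examples precede the new case.
few_shot = (
    "Route each support ticket to 'billing', 'technical', or 'sales'.\n"
    "Ticket: 'The export to PDF button crashes the app.' -> technical\n"
    "Ticket: 'Can I get a quote for 50 more seats?' -> sales\n"
    "Ticket: 'I was charged twice for my subscription this month.' ->"
)

print(zero_shot)
print(few_shot)
```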
Some considerations
Data privacy and security:
• Avoid using real customer data or any personally identifiable information (PII).
• Use anonymized or synthetic data sets whenever possible (see the redaction sketch after this list).
• Ensure data storage and transfer follow best practices and comply with relevant regulations, such as GDPR or HIPAA.
"Hallucinations" and bias: ChatGPT can make things up.
• Be aware of potential biases in data sets and algorithms, which could lead to unfair or discriminatory outcomes.
• Use techniques such as data pre-processing or algorithmic adjustments to minimize the impact of biases.
Responsible use of AI:
• Ensure that your solution aligns with ethical principles and responsible AI guidelines.
• Avoid applications that could be harmful, discriminatory, or promote misinformation.
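A minimal, hypothetical illustration of the anonymization point above: strip obvious PII from text before it is sent to an external model. The patterns are simplistic examples, not a complete or compliant PII solution:

```python
import re

# Simplistic placeholder patterns; a real deployment would use a vetted PII tool.
PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
}

def redact(text: str) -> str:
    # Replace each matched pattern with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = re.sub(pattern, f"[{label}]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or 555-123-4567."))
# -> "Contact Jane at [EMAIL] or [PHONE]."
```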
How to get the best out of AGIs
• Reinforcement Learning
• Prompt Engineering
• Chain of Thought
RLHF
Reinforcement learning from human feedback further aligns models.
(Diagram from the OpenAI ChatGPT announcement.)
Prompting with the “format trick”
“Use this format:” is all you need.
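A small sketch of what such a prompt might look like; the extraction fields and the meeting note are made up for illustration and are not from the original slide:

```python
# Describe the task, then pin down the output shape with a literal "Use this format:" block.
prompt = (
    "Extract the key details from the meeting note below.\n\n"
    "Use this format:\n"
    "Attendees: <comma-separated names>\n"
    "Decision: <one sentence>\n"
    "Action items:\n"
    "- <owner>: <task>\n\n"
    "Note: Alp and Jordan agreed to pilot the invoice bot next sprint; "
    "Jordan will prepare test data and Alp will configure the workflow."
)
print(prompt)
```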
Specifying tasks using code prompts
Prompting through partial code.
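A hypothetical example of the idea: hand the model a function signature and docstring and let it complete the body. The function name and fields are invented:

```python
# The prompt ends mid-definition, so the model's natural continuation is the implementation.
prompt = '''Complete this Python function.

def total_invoice_amount(line_items):
    """Sum 'quantity' * 'unit_price' for each dict in line_items,
    skipping items where 'status' == 'void'."""
'''
print(prompt)
```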
Specifying tasks using code prompts
Prompting with imaginary variables.
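A hypothetical sketch of this pattern: the prompt refers to variables and helper functions that are never defined, so the model infers their meaning and continues the code. All names here are invented:

```python
# The comments describe what the imaginary variables hold; the model fills in the rest.
prompt = '''# customer_email contains a raw email from a customer
# sentiment is "positive", "negative", or "neutral"
sentiment = classify_sentiment(customer_email)
# Based on sentiment, draft_reply holds a short, polite response
draft_reply ='''
print(prompt)
```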
Using an external interpreter to overcome model limitations in conversational Q&A.
“You are GPT-3”
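One way to read this pattern, sketched under assumptions: ask the model to reply with runnable code rather than prose, then execute that code in a real interpreter to get an exact answer. `ask_model` is a placeholder, not a real API, and `eval` should only ever be run on trusted output:

```python
def ask_model(prompt: str) -> str:
    # Stand-in for an LLM call: imagine the model returning a small Python expression.
    return "(17 * 23) + 4"

question = "What is 17 times 23, plus 4?"
prompt = (
    "You are a careful assistant. Reply ONLY with a single Python expression "
    f"that computes the answer to: {question}"
)

expression = ask_model(prompt)
answer = eval(expression)  # run the model's code in a real interpreter (trusted output only)
print(answer)  # 395
```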
Chain-of-thought prompting
Figure 1 from Jason Wei et al. (2022).
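A hedged sketch of the idea behind the figure: the few-shot exemplar spells out its reasoning, so the model imitates the step-by-step style on the new question. The word problems below are invented, not taken from the paper:

```python
# Few-shot chain-of-thought: the worked example includes its reasoning before the answer.
prompt = (
    "Q: A warehouse has 5 pallets of 12 boxes each and ships 20 boxes. How many boxes remain?\n"
    "A: 5 pallets of 12 boxes is 5 * 12 = 60 boxes. After shipping 20, 60 - 20 = 40 boxes remain. "
    "The answer is 40.\n\n"
    "Q: A team automates 3 processes per month for 4 months, then retires 2 automations. "
    "How many automations are running?\n"
    "A:"
)
print(prompt)
```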
Zero-shot chain-of-thought
Figure 1 from Takeshi Kojima et al. (2022).
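A minimal sketch of the paper's core trick: no exemplars at all, just the trigger phrase appended to the question. The question itself is invented for illustration:

```python
# Zero-shot chain-of-thought: "Let's think step by step." elicits the reasoning without examples.
question = (
    "A process takes 6 minutes manually and 45 seconds when automated. "
    "It runs 200 times a week. How many minutes are saved per week?"
)
prompt = question + "\nA: Let's think step by step."
print(prompt)
```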
Zero-shot chain-of-thought
Figure 2 from Takeshi Kojima et al. (2022).
Self-consistency and consensus
Figure 1 from Xuezhi Wang et al. (2022).
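A rough sketch of the idea, with a stubbed-out model call: sample several chain-of-thought completions for the same question and keep the majority final answer. `sample_model` and its canned completions are placeholders, not a real API:

```python
import collections
import re

def sample_model(prompt: str, n: int) -> list[str]:
    # Stand-in for n temperature > 0 samples from an LLM.
    return [
        "60 - 20 = 40. The answer is 40.",
        "5 * 12 = 60, minus 20 shipped leaves 40. The answer is 40.",
        "There are 60 boxes, 20 ship, so 30 remain. The answer is 30.",  # a wrong sample
    ][:n]

def final_answer(completion: str) -> str:
    # Take the last number in the completion as the final answer.
    return re.findall(r"\d+", completion)[-1]

samples = sample_model("How many boxes remain? Let's think step by step.", n=3)
votes = collections.Counter(final_answer(s) for s in samples)
print(votes.most_common(1)[0][0])  # majority answer: '40'
```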
Q&A
