This report discusses the rise of AI technologies, particularly NVIDIA's advancements in GPU computing and the implications for investment opportunities in the AI sector. It highlights key companies benefiting from the AI boom, including NVIDIA, AMD, Micron, Marvell, and UiPath, while also explaining the mechanics of ChatGPT and its evolution.

Contents

Overview

What's the BIG AI Deal?

The Genie is Running Wild and Granting Unlimited Wishes

Bottom Line

What To Do Next

Disclaimer
Overview

The first edition of this report was launched in March of 2023 when the rumblings
about the "chatterbot that ate the internet" were just beginning. NVIDIA (NVDA) was
my #1 pick and it was still trading under $250.

Then, in late May, Jensen Huang revealed the new DGX GH200. This large-memory
AI supercomputer uses NVIDIA NVLink to combine up to 256 NVIDIA GH200 Grace
Hopper Superchips into a single data-center-sized GPU.

Google Cloud, Meta Platforms, and Microsoft were announced as the first
customers already in line. This lit a fire under the other 1,997 enterprises in the
Global 2000: it was time to build their own LLMs (large language models)—in
essence, their own internal ChatGPTs and "generative AI" systems.

From there, sales of NVIDIA GPUs started to soar. Not only did NVIDIA benefit
greatly, but let's look at how our other picks fared:

#2 Synopsys (SNPS): $370 to $600

#3 Alphabet (GOOGL): $105 to $150

#4 Splunk (SPLK): $90 to $155 (acquired by Cisco)

#5 UiPath (PATH): My "sleeper" pick was volatile between $14 and $17 last
spring but is now building some footing between $21 and $24.

While many wring their hands about the "AI Bubble," we know differently. NVIDIA just
released its latest DGX SuperPOD, built on the Grace Blackwell GB200 Superchip,
which is faster and more energy efficient by orders of magnitude.

The demand for GPU-driven architectures is still exploding, as Microsoft, OpenAI,
and Meta Platforms are looking to buy millions of them over the next few years.

Investing In and Around ChatGPT

In this second edition, we'll review the mechanics of ChatGPT and then pick some
more stocks to build wealth around the megatrends.
ChatGPT uses a core capability of modern computing known as natural
language processing (NLP).

Yes, like a search engine, it can surface information, facts, and references from the
enormous store of text it has already been "fed." But it adds the supercharged
capability of conversational explanations that appear intelligent, because it has
been "trained" on how humans write, read, and formulate collections of ideas and
facts into sentences, paragraphs, and arguments. Here, training has the specific
definition used by machine learning engineers.

It's still built around "brute force" computation speed, like IBM's Deep Blue harnessed
to beat chess grandmaster Garry Kasparov in 1997. But with GPU-driven, massively
parallel architectures, layers of machine learning and deep learning functionality
have been added, using neural nets that make it orders of magnitude faster,
"smarter," and more adaptive.
What's the BIG AI Deal?

What makes ChatGPT unique in the evolution of NLP is its size and computing
power. It is a next-gen NLP called a Large Language Model (LLM) because it was
trained on enormous amounts of language data. Then, with its neural nets (billions
of connected nodes whose weighted links act like probability-based decision rules),
it can consistently predict and/or formulate the next word of a human
conversational sentence, article, or research paper.
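At its core, that "predict the next word" step can be sketched in a few lines. This is a toy illustration, not how any real LLM is implemented: the phrases and probabilities below are invented for the example, while a real model computes such probabilities with billions of learned parameters.

```python
# Toy sketch of an LLM's basic move: given the words so far, pick the
# most probable next word. The probability table is invented for
# illustration only.
next_word_probs = {
    "the cat sat on the": {"mat": 0.62, "sofa": 0.21, "roof": 0.17},
    "the cat sat on the mat": {".": 0.88, "and": 0.12},
}

def predict_next(context: str) -> str:
    """Return the highest-probability next word for a known context."""
    probs = next_word_probs[context]
    return max(probs, key=probs.get)

sentence = "the cat sat on the"
word = predict_next(sentence)       # "mat"
print(f"{sentence} {word}")         # the cat sat on the mat
```

Repeating that step word after word is what produces whole sentences, paragraphs, and arguments.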

A type of machine learning technique called Reinforcement Learning from Human
Feedback (RLHF) was used to train ChatGPT. OpenAI (the group that created
ChatGPT) employs human AI trainers who interact with the model by assuming the
roles of both user and chatbot.

According to an article on Emeritus.org, a provider of online AI/ML courses (from
February) . . .

Trainers "compare the tone and accuracy of answers given by ChatGPT to
human replies and rank the quality of responses to reinforce human-like
conversation approaches."
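That ranking step can be pictured with a tiny sketch. This is only a miniature analogue, not OpenAI's actual pipeline: the responses and scoring rule below are invented for illustration, whereas real RLHF fits a separate reward model from many such rankings and uses it to update the LLM.

```python
# Toy sketch of the ranking step in RLHF: human trainers order several
# model responses from best to worst, and those rankings become reward
# signals that reinforce human-like answers.
responses = [
    "The capital of France is Paris.",   # accurate, helpful tone
    "Paris, obviously. Why ask?",        # accurate, poor tone
    "I think it might be Lyon.",         # inaccurate
]

# A trainer's ranking: index of the best response first.
human_ranking = [0, 1, 2]

# Convert rank positions into simple reward scores (best = highest).
rewards = {idx: len(human_ranking) - pos
           for pos, idx in enumerate(human_ranking)}
print(rewards)   # {0: 3, 1: 2, 2: 1}
```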

More on this aspect after some background color.

ChatGPT was launched as a free research prototype in November 2022 by OpenAI,
the consortium research laboratory powered by Microsoft and the world's 5th most
powerful supercomputer.

From Wikipedia:

The organization was founded in San Francisco in 2015 by Sam Altman, Reid
Hoffman, Jessica Livingston, Elon Musk, Ilya Sutskever, Peter Thiel, Olivier
Grabias and others, who collectively pledged US$1 billion. Musk resigned from
the board in 2018 but remained a donor. Microsoft provided OpenAI LP with a
$1 billion investment in 2019 and a second multi-year investment in January
2023, reported to be $10 billion.

The "GPT" of this machine learning NLP model stands for Generative Pre-trained
Transformer (the original ChatGPT debuted on generation 3.5 of the model).
From the OpenAI homepage . . .

We've trained a model called ChatGPT which interacts in a conversational way.
The dialogue format makes it possible for ChatGPT to answer followup
questions, admit its mistakes, challenge incorrect premises, and reject
inappropriate requests.

If we break down each part of the GPT acronym, we can remember and hopefully
comprehend what ChatGPT is, how it's designed, and what it can do.

Here is how I described last year what the G, P, and T stand for to create the greatest
"sentence-finisher" the world has ever known . . .

Generative: This is the Reinforcement Learning from Human Feedback (RLHF)
part of the supercomputer, where the model continuously draws on its years of
development and training to provide better and better answers.

Pretraining: This is really the first step in the education of "the machine." It's
simply the databases of knowledge and information the machine has access
to. So it's already "gone to college" in the sense that it has "read" a lot of books
and incorporated NLP structures and patterns of human communication. And
thus when ChatGPT encounters new information, articles, or perspectives, it
has a context to explain them in light of previous knowledge and coherent
conversation standards.
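The simplest possible analogue of that pretraining step is counting which word tends to follow which in a corpus. This is only an illustration under a huge simplification: real pretraining fits billions of parameters over terabytes of text, while the bigram counter below "reads" a single invented sentence.

```python
from collections import Counter, defaultdict

# Toy sketch of "pretraining": read a corpus and learn which word tends
# to follow which. The corpus here is invented for illustration.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

# After "reading," the model has context for what usually comes next.
print(follows["sat"].most_common(1))   # [('on', 2)]
```

Scale that idea up from word pairs to long contexts, and from counting to learned neural-net weights, and you have the "P" in GPT.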

Transformer: This is the neural-net architecture, with billions of weighted
connections and memory nodes that act like linked, decision-tree probability
rules to choose and prioritize words and phrases. The analysis starts with the
user query and "transforms" it into the relevant answers it finds and assembles.

As noted before, advanced machine reinforcement learning occurs here, where
the engine can recall context from texts it has read and answers it has given,
in a continuous improvement cycle.

Can ChatGPT Improve My Simple Definitions?

I asked ChatGPT-3.5 to give me its own short definitions of its namesake, and here's
what it delivered in about 3.5 seconds . . .
Generative: The term "generative" indicates that the model can generate new
text based on the input it receives. Generative models can produce coherent
and contextually relevant sequences of text, such as sentences or paragraphs.
This ability is crucial for applications such as chatbots, text completion, and
text summarization, where the model needs to produce meaningful and
human-like responses.

Pre-trained: The "pre-trained" aspect of the model refers to its training
methodology. Before being fine-tuned for a specific task or dataset, the model
is pre-trained on a large corpus of text data. During this pre-training phase, the
model learns general language representations and patterns from the text data
without being explicitly trained on any specific task. This pre-training helps the
model develop a broad understanding of language, which can be fine-tuned for
specific applications.

Transformer: The "Transformer" is a specific architecture used in the model,
introduced in a 2017 paper by Vaswani et al. The Transformer architecture is
designed for processing sequential data, such as text, and is based on self-
attention mechanisms. These mechanisms allow the model to weigh the
importance of different words in a sequence when generating or processing
text, enabling it to capture long-range dependencies and contextual
information more effectively than previous architectures like recurrent neural
networks (RNNs).
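That self-attention mechanism fits in a few lines of NumPy. This is a minimal single-head sketch of the mechanism from the 2017 Vaswani et al. paper, with toy sizes: the three "word" vectors and the projection matrices are random stand-ins, whereas a real model learns those matrices during training and stacks many attention heads and layers.

```python
import numpy as np

# Minimal single-head self-attention over a 3-word toy sequence.
rng = np.random.default_rng(0)
d = 4                                    # embedding size (toy)
x = rng.normal(size=(3, d))              # 3 word vectors in a sequence
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # stand-ins for learned weights

q, k, v = x @ Wq, x @ Wk, x @ Wv         # queries, keys, values
scores = q @ k.T / np.sqrt(d)            # how strongly each word attends to each other word
weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)  # row-wise softmax
output = weights @ v                     # context-aware representation of each word

print(weights.round(2))                  # each row sums to 1.0
```

The attention weights are what let word 3 "look back" at word 1 directly, which is how Transformers capture long-range dependencies that RNNs struggle with.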

Where is All This Going?

I also asked Chatty about her/his/its future . . .

Me: Please compare the capabilities of your new versions, including ChatGPT
4 and 5

ChatGPT: As of my last update, I can provide a general comparison based on
the evolution of the GPT series up to GPT-3.5. Remember that specifics about
ChatGPT 4 and 5 would require more current information.

GPT-4:

Size and Scale: GPT-4 would likely have a larger model size than GPT-3.5, with
more parameters and potentially increased computational requirements.
Capabilities: It would likely exhibit improvements in natural language
understanding and generation, better context retention, and more coherent
responses.

Training Data: GPT-4 would likely be trained on a more extensive and diverse
dataset, potentially leading to better performance across a broader range of
tasks and topics.

Fine-Tuning: Like its predecessors, GPT-4 is suitable for fine-tuning on specific
tasks, improving performance on tasks such as question answering, text
summarization, and language translation.

GPT-5:

Further Size and Scale: GPT-5 will likely continue the trend of increasing model
size and complexity, pushing the boundaries of what is currently feasible
regarding computational resources.

Advanced Capabilities: It might introduce new capabilities, such as better
handling of ambiguity, improved context understanding, and more nuanced
responses.

Specialized Domains: GPT-5 might offer better support for specialized
domains or industries, allowing the generation of more accurate and relevant
content in specific fields.

Ethical Considerations: With increasing awareness of ethical concerns
surrounding AI, GPT-5 might incorporate features to address issues like bias,
misinformation, and harmful content generation.
The Genie is Running Wild and Granting Unlimited Wishes

I am a technology optimist and believe we'll figure out the ethical issues along the
way, just like we are doing with smartphone addiction in youth. I asked Chatty about
the future so you can see where this is going because it cannot be stopped, and
corporations will continue to invest heavily in their smart LLMs.

Ray Kurzweil's prediction of "singularity," when AI equals and surpasses human
intelligence, is constantly getting closer. But as I always remind people, AI still can't
be as insatiably curious—or learn jokes as fast—as your favorite 4-year-old. So, while
you learn to use AI's power tools in your own life and work, let's keep investing
alongside this revolution. Here are my top 5 picks right now . . .

#1 NVIDIA (NVDA): CEO Jensen Huang has told us for years that "accelerated" GPU
computing would change every industry and scientific endeavor. And his company
will be right there leading the charge. Must own on its rise to $250 billion in sales
and a $3 trillion market cap in the next 2-3 years.

#2 Advanced Micro Devices (AMD): Number 2 in the world for GPUs and it trades at
a big discount to NVDA. Lisa Su has her finger on the pulse of advanced
technologies and will be riding the revolution in 2nd place for many years.

#3 Micron Technology (MU): All these supercomputers and their applications are
going to need lots of advanced memory. And Wall Street finally woke up to that as
Micron's sales estimates are soaring 55% this year to $24 billion and 45% next year
to $35 billion.

#4 Marvell Technology (MRVL): All these supercomputers and datacenters will need
specialty networking equipment and semiconductors. Seasoned semiconductor
analyst from Bank of America, Vivek Arya, recently noted that with the accelerator
market doubling to $200 billion over the next three years, it could create "volatile but
fruitful opportunities" among the second-tier vendors, including Marvell -- #2 to
Broadcom in AI networking, and a new threat in custom chips.
#5 UiPath (PATH): I'm sticking with the "sleeper" stock of Robotic Process
Automation because it delivers unique solutions. It's a buy near $20, and I
believe it will eventually be acquired.
Bottom Line

Finally, let's remember that AI is not quite "there" yet in terms of the AGI singularity
(when AI becomes superior to human intelligence).

The best way to think about human intelligence is to imagine a four-year-old and
how they learn language, thinking, laughter, daydreaming, dancing, bicycle riding, and
emotional interactions with their family, friends, and pets.

Alison Gopnik, a pioneering researcher in child development and learning at UC
Berkeley, also happens to enjoy playing with AI models. She was profiled and
interviewed in the immensely valuable 2019 book Possible Minds: 25 Ways of
Looking at AI, because she writes and speaks frequently on the comparisons and
contrasts between child learning and AI. She says . . .

"The most interesting thing about babies is that they are so enormously interested; the
most wonderful thing about them is their infinite capacity for wonder."

That "curiosity engine" may be the ultimate missing ingredient for AI.

By Kevin Cook - Zacks Senior Stock Strategist


What To Do Next

In addition to the information included in this special report, you can move yourself
way ahead of the crowd in any market environment with the following items.

1.) As part of this free report, you will now receive our free daily e-newsletter, Profit
from the Pros.

➤Each morning, Executive Vice President Kevin Matras will summarize the
market, what it means for investors and what to do next. Plus, you get links to
articles featuring some of our top stock, ETF, and mutual fund
recommendations.

Be sure to look for it in your email inbox before the market opens every day.

2.) You should bookmark our homepage to take advantage of one of the most
complete investment websites around.

➤Go there now: www.zacks.com

3.) Even better, get all Zacks' private buys and sells through our Zacks Ultimate
program.

➤Watch our trades and market insights from all of Zacks' private portfolios from
growth to value to momentum stocks, from insider moves to big earnings
surprises, from options to ETFs, even trades EVP Kevin Matras makes in his
personal account, and much more. This see-everything arrangement is unique:
one month, one dollar, not one cent of further obligation.

Learn more about the Zacks Ultimate now »


Disclaimer

Copyright 2025 Zacks Investment Research

This page has not been authorized, sponsored, or otherwise approved or endorsed by the companies
represented herein. Each of the company names represented herein are trademarks of Nvidia Corporation,
Advanced Micro Devices, Inc., Micron Technology, Inc., Marvell Technology, Inc., and UiPath, Inc. Front cover
source: Shutterstock

Zacks.com provides investment resources and informs you of these resources, which you may choose to use in
making your own investment decisions. Zacks is providing information on this resource to you subject to the
Zacks "Terms of Service". www.zacks.com/terms_of_service

Past performance is no guarantee of future results. Inherent in any investment is the potential for loss. This
material is being provided for informational purposes only and nothing herein constitutes investment, legal,
accounting or tax advice, or a recommendation to buy, sell or hold a security. No recommendation or advice is
being given as to whether any investment is suitable for a particular investor. It should not be assumed that any
investments in securities, companies, sectors or markets identified and described were or will be profitable. All
information is current as of the date of herein and is subject to change without notice. Any views or opinions
expressed may not reflect those of the firm as a whole. Zacks Investment Research is not a licensed securities
dealer, broker or U.S. investment adviser or investment bank.

The Zacks #1 Rank Performance covers the period beginning on January 1, 1988 through December 30, 2024.
The performance is the equal weighted performance of a hypothetical portfolio consisting of stocks with a
Zacks Rank of #1 that was rebalanced monthly from January 1988 through December 2013 and weekly from
12/31/13 through Monday's open on December 30, 2024. For each stock with a Zacks Rank #1 at the beginning
of the month, the total return during the month was calculated as the % change in the price of the stock from the
closing price of the prior month to the closing price of the current month plus any dividends received during the
month. The monthly individual stock returns were then averaged to determine the portfolio return for the month.
For each stock with a Zacks Rank #1 at the beginning of the week, the total return during the week was
calculated as the % change in the price of the stock from the opening price for the week to the opening price of
the next week plus any dividends received during the week. The weekly individual stock returns were then
averaged to determine the portfolio return for the week. If no month-end price or week end open price was
available for a stock, it was not included in the portfolio return for the month or the week. The monthly and
weekly returns were compounded to arrive at the annual returns. The annualized return is the annual return that,
had it been achieved in each year or portion of a year, would have compounded to create the total return over the
full time period. These returns are based on the list of Zacks Rank #1 Stocks that was available to clients of
Zacks as of the beginning of the month, when returns were calculated monthly, or as of the beginning of the
week when returns were calculated weekly. These returns are higher than the returns an investor could achieve
investing real money in a portfolio of Zacks Rank #1 stocks because the returns of the hypothetical Zacks Rank
#1 portfolio exclude a number of costs, including commissions incurred for trading, the average bid ask spread,
the price impact of the trading and, prior to 2013, in those months when the end of the month fell on Friday,
Saturday or Sunday, the overnight return from the month end close to the open on the next trading day. The S&P
500 is an unmanaged index. Visit www.zacks.com/performance_disclosure for information about the
performance numbers displayed above.
