ASSIGNMENT-02
“SHRI VAISHNAV INSTITUTE OF INFORMATION
TECHNOLOGY”
“COMPUTER-SCIENCE DEPARTMENT”
SUBJECT: - Quantum Computing
CLASS: - CS-AI(IBM)
SUBMITTED TO: Ms. Ramanpreet Mam
SUBMITTED BY: Pranjali Saxena (18100BTCSAII02853)
Q1.) Explain the types of Transistors and their working?
A transistor is a semiconductor device used either to amplify signals or to act as an electrically controlled switch. It is a three-terminal device: a small current or voltage at one terminal (or lead) controls a large flow of current between the other two terminals (leads).
A bipolar transistor is a type of transistor that uses both electrons and holes as charge carriers.
Two types of bipolar transistor are manufactured: NPN and PNP.
A field-effect transistor is a unipolar device constructed with no PN junction in the main
current-carrying path. Also, two types of field-effect transistor are manufactured: N-channel
and P-channel.
NPN Transistor
NPN is one of the two types of Bipolar Junction Transistor (BJT). The NPN transistor consists of two n-type semiconductor regions separated by a thin layer of p-type semiconductor. Here, the majority charge carriers are electrons, while holes are the minority charge carriers. The flow of electrons from emitter to collector is controlled by the current flowing into the base terminal.
A small current at the base terminal causes a large current to flow from emitter to collector. The NPN transistor is the more commonly used bipolar transistor today, because the mobility of electrons is greater than the mobility of holes. The standard equation relating the currents flowing in the transistor is
IE = IB + IC
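For example, taking purely illustrative values of IB = 20 µA (0.02 mA) and IC = 2 mA, the emitter current would be IE = IB + IC = 0.02 mA + 2 mA = 2.02 mA, so IE is only slightly larger than IC.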
The symbols and structure for NPN transistors are given below.
The construction and terminal voltages for a bipolar NPN transistor are shown above. The voltage between the Base and Emitter (VBE) is positive at the Base and negative at the Emitter because, for an NPN transistor, the Base terminal is always positive with respect to the Emitter. Also, the Collector supply voltage is positive with respect to the Emitter (VCE). So, for a bipolar NPN transistor to conduct, the Collector is always more positive with respect to both the Base and the Emitter.
PNP Transistor
The PNP is the other type of Bipolar Junction Transistor (BJT). A PNP transistor consists of two p-type semiconductor regions separated by a thin layer of n-type semiconductor. The majority charge carriers in a PNP transistor are holes, while electrons are the minority charge carriers. The arrow on the emitter terminal of the transistor symbol indicates the direction of conventional current. In a PNP transistor, the current flows from Emitter to Collector.
The PNP transistor is ON when the base terminal is pulled LOW with respect to the emitter. The symbol and structure for the PNP transistor are shown below.
FET (Field Effect Transistor)
The Field-Effect Transistor (FET) is another major type of transistor. Like the BJT, the FET has three terminals: Gate (G), Drain (D) and Source (S). Field Effect Transistors are classified into Junction Field Effect Transistors (JFET) and Insulated Gate Field Effect Transistors (IG-FET), also known as Metal Oxide Semiconductor Field Effect Transistors (MOSFET).
For the connections in the circuit, a fourth terminal called the Body (or Substrate) is also considered. The FET controls the size and shape of a conducting channel between Source and Drain, which is created by the voltage applied at the Gate.
Field Effect Transistors are unipolar devices, as they require only the majority charge carriers to operate (unlike BJTs, which are bipolar devices).
Q2.) Classify computing algorithms based on how their complexity grows as the input size increases?
Constant Time Complexity: O(1)
When time complexity is constant (notated as “O(1)”), the size of the input (n) doesn’t matter.
Algorithms with Constant Time Complexity take a constant amount of time to run,
independently of the size of n. They don’t change their run-time in response to the input data,
which makes them the fastest algorithms out there.
For example, you'd use an algorithm with constant time complexity if you wanted to know whether a number is odd or even. No matter if the number is 1 or 9 billion (the input "n"), the algorithm performs the same operation only once and returns the result.
Likewise, if you wanted to print a phrase like the classic "Hello World" once, you'd run that with constant time complexity too, since the number of operations (in this case 1) remains the same for this or any other phrase, no matter which operating system or machine configuration you are using.
To remain constant, these algorithms shouldn’t contain loops, recursions or calls to any other
non-constant time function. For constant time algorithms, run-time doesn’t increase: the order
of magnitude is always 1.
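As a small illustration (a minimal Python sketch with illustrative names, not part of the question), the odd/even check mentioned above takes the same single step no matter how large n is:

    def is_even(n):
        # A single modulo operation, whether n is 7 or 9 billion: O(1)
        return n % 2 == 0

    print(is_even(7))              # False
    print(is_even(9_000_000_000))  # True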
Linear Time Complexity: O(n)
When time complexity grows in direct proportion to the size of the input, you are facing
Linear Time Complexity, or O(n). Algorithms with this time complexity will process the input
(n) in “n” number of operations. This means that as the input grows, the algorithm takes
proportionally longer to complete.
These are the types of situations where you have to look at every item in a list to accomplish a task (e.g. find the maximum or minimum value). You can also think about everyday tasks like reading a book or finding a CD (remember them?) in a CD stack: if all the data has to be examined, then the larger the input size, the higher the number of operations.
Linear running time algorithms are very common, and they relate to the fact that the algorithm
visits every element from the input.
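For example, here is a minimal Python sketch (the function name is chosen only for illustration) of finding the maximum value in a list; it must visit every one of the n elements once, so its running time grows linearly:

    def find_max(values):
        # Visits each of the n elements exactly once: O(n)
        maximum = values[0]
        for value in values:
            if value > maximum:
                maximum = value
        return maximum

    print(find_max([3, 41, 7, 19]))  # 41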
Logarithmic Time Complexity: O(log n)
Algorithms with this complexity make computation amazingly fast. An algorithm is said to run in logarithmic time if its execution time is proportional to the logarithm of the input size. This means that the running time grows very slowly as the input grows: doubling the size of the input "n" adds only a constant number of extra steps, rather than doubling the work.
These types of algorithms never have to go through all of the input, since they usually work by
discarding large chunks of unexamined input with each step. This time complexity is generally
associated with algorithms that divide problems in half every time, which is a concept known
as “Divide and Conquer”. Divide and Conquer algorithms solve problems using the
following steps:
1. They divide the given problem into sub-problems of the same type.
2. They recursively solve these sub-problems.
3. They appropriately combine the sub-answers to answer the given problem.
Consider this example: let’s say that you want to look for a word in a dictionary that has every
word sorted alphabetically. There are at least two algorithms to do that:
Algorithm A:
Starts at the beginning of the book and goes in order until it finds the word you are
looking for.
Algorithm B:
Opens the book in the middle and checks the first word on it.
If the word that you are looking for is alphabetically bigger, then it looks in the right
half. Otherwise, it looks in the left half.
While algorithm A goes word by word O(n), algorithm B splits the problem in half on each
iteration O(log n), achieving the same result in a much more efficient way.
Logarithmic time algorithms (O(log n)) are the second quickest ones after constant time
algorithms (O(1)).
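Algorithm B is essentially binary search. A minimal Python sketch (assuming the word list is already sorted alphabetically; the names are illustrative) could look like this:

    def binary_search(sorted_words, target):
        # Halves the search range on every iteration: O(log n)
        low, high = 0, len(sorted_words) - 1
        while low <= high:
            mid = (low + high) // 2
            if sorted_words[mid] == target:
                return mid
            elif sorted_words[mid] < target:
                low = mid + 1   # look in the right half
            else:
                high = mid - 1  # look in the left half
        return -1               # word not found

    words = ["apple", "banana", "cherry", "mango", "peach"]
    print(binary_search(words, "mango"))  # 3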
Quadratic Time Complexity: O(n²)
In this type of algorithm, the running time grows in direct proportion to the square of the size of the input (like linear, but squared).
In most scenarios and particularly for large data sets, algorithms with quadratic time
complexities take a lot of time to execute and should be avoided.
Nested for loops run in quadratic time, because you're running a linear operation within another linear operation, or n*n, which equals n².
If you face these types of algorithms, you’ll either need a lot of resources and time, or you’ll
need to come up with a better algorithm.
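For instance, a minimal Python sketch that compares every pair of elements (here, to check for duplicates; the function is illustrative only) runs a linear loop inside another linear loop, giving O(n²):

    def has_duplicates(values):
        # Two nested loops over n elements: roughly n * n comparisons, O(n^2)
        for i in range(len(values)):
            for j in range(i + 1, len(values)):
                if values[i] == values[j]:
                    return True
        return False

    print(has_duplicates([1, 5, 3, 5]))  # True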
Exponential Time Complexity: O(2^n)
In exponential time algorithms, the growth rate doubles with each addition to the input (n),
often iterating through all subsets of the input elements. Any time an input unit increases by 1,
it causes you to double the number of operations performed. This doesn’t sound good, right?
Algorithms with this time complexity are usually used in situations where you don’t know that
much about the best solution, and you have to try every possible combination or permutation
on the data.
Exponential time complexity is usually seen in Brute-Force algorithms. These algorithms blindly iterate through an entire domain of possible solutions in search of one or more that satisfy a condition. They try to find the correct solution by simply trying every possible candidate until they happen to find the correct one. This is obviously not an optimal way of performing a task, since it leads to very poor time complexity. Brute-Force algorithms are used in cryptography as attack methods to defeat password protection by trying string after string until they find the correct password that unlocks the system.
As in quadratic time complexity, you should avoid algorithms with exponential running times
since they don’t scale well.
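As a rough illustration (a minimal Python sketch, not a real attack), enumerating every subset of an n-element input, which is the kind of exhaustive search brute force relies on, produces 2^n candidates, so adding one element doubles the work:

    from itertools import combinations

    def all_subsets(items):
        # An n-element list has 2^n subsets, so this loop does O(2^n) work
        subsets = []
        for size in range(len(items) + 1):
            for subset in combinations(items, size):
                subsets.append(subset)
        return subsets

    print(len(all_subsets([1, 2, 3])))  # 8 = 2^3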
The better the time complexity of an algorithm is, the faster the algorithm will carry out the
work in practice.
Q3.) Explain the role of quantum computing in Healthcare and Life Sciences?
Healthcare is a sector that holds a lot of potential for integrating quantum computing. Quantum computing is no longer just a scientific curiosity; the healthcare industry is adopting it to support patient-centric care. Quantum computing's ability to compute at scale will enable clinicians to incorporate a huge number of cross-functional data sets into their patients' risk factors. It will also allow clinical trial participants to be selected using more reference points, ensuring a better fit between protocol and patient. With accuracy and speed of diagnosis and treatment becoming a necessity for quality care, quantum computing holds the potential for unprecedented processing power and speed.
Improved Imaging Solutions
Quantum imaging machines can produce extremely accurate images that allow visualization of single molecules. Machine learning algorithms and quantum computing together can help a physician evaluate the results of treatment. Conventional MRI machines can only recognize areas of light and dark, leaving the radiologist to evaluate the issues, but quantum imaging solutions can distinguish between tissue types, which allows more accurate and precise imaging.
Diagnosis
When it comes to disease diagnosis and monitoring, quantum computing is not far behind. Cancer patients usually undergo chemotherapy and often do not know for months whether the treatment is working. However, thanks to advancements in quantum computing, that is changing. Quantum computing will enable doctors to compare far more data, and all permutations of that data, in parallel, to find the patterns that best describe it.
Data Management
The quantum computer supports a key enabler for the health sciences: artificial intelligence. Among its applications, we find big data combined with AI and quantum computing. This cutting-edge technology provides the ability to record, sort, and analyse massive amounts of complex data and find patterns in them. This capability is invaluable in healthcare, and quantum computing can help streamline many health processes.
Radiation Therapy
Radiation beams are used to destroy affected cells or to stop them from multiplying. Reducing damage to the surrounding healthy cells is a major challenge of radiation therapy. Arriving at an optimal radiation therapy plan requires numerous simulations. With quantum computers, the possibilities to be considered in each simulation can be explored easily and quickly, allowing physicians to determine the best therapy plan faster.
Q4.) What are the 3 phases of Quantum Progress? Explain each of them.
The NISQ Era
The next three to five years are expected to be characterized by so-called NISQ (Noisy
Intermediate-Scale Quantum) devices, which are increasingly capable of performing
useful, discrete functions but are characterized by high error rates that limit functionality.
One area in which digital computers will retain advantage for some time is accuracy: they
experience fewer than one error in 10^24 operations at the bit level, while today's qubits
destabilize much too quickly for the kinds of calculations necessary for quantum-
advantaged molecular simulation or portfolio optimization. Experts believe that error
correction will remain quantum computing’s biggest challenge for the better part of a
decade. That said, research underway at multiple major companies and start-ups, among
them IBM, Google, and Rigetti, has led to a series of technological breakthroughs in error
mitigation techniques to maximize the usefulness of NISQ-era devices. These efforts
increase the chances that the near to medium term will see the development of medium-
sized, if still error-prone, quantum computers that can be used to produce the first
quantum-advantaged experimental discoveries in simulation and combinatorial
optimization.
Broad Quantum Advantage
In 10 to 20 years, the period that will witness broad quantum advantage, quantum
computers are expected to achieve superior performance in tasks of genuine industrial
significance. This will provide step-change improvements over the speed, cost, or quality
of a binary machine. But it will require overcoming significant technical hurdles in error
correction and other areas, as well as continuing increases in the power and reliability of
quantum processors. Quantum advantage has major implications. Consider the case of
chemicals R&D. If quantum simulation enables researchers to model interactions among materials as they grow in size, without the coarse, distorting heuristic techniques used today, companies will be able to reduce, or even eliminate, expensive and lengthy lab processes such as in situ testing. Already, companies such as Zapata Computing are
betting that quantum-advantaged molecular simulation will drive not only significant cost
savings but the development of better products that reach the market sooner. The story is
similar for automakers, airplane manufacturers, and others whose products are, or could
be, designed according to computational fluid dynamics. These simulations are currently
hindered by the inability of classical computers to model fluid behaviour on large surfaces
(or at least to do so in practical amounts of time), necessitating expensive and laborious
physical prototyping of components. Airbus, among others, is betting on quantum
computing to produce a solution. The company launched a challenge in 2019 “to assess
how [quantum computing] could be included or even replace other high-performance
computational tools that, today, form the cornerstone of aircraft design.”
Full-Scale Fault Tolerance
The third phase is still decades away. Achieving full-scale fault tolerance will require
makers of quantum technology to overcome additional technical constraints, including
problems related to scale and stability. But once they arrive, we expect fault-tolerant
quantum computers to affect a broad array of industries. They have the potential to vastly
reduce trial and error and improve automation in the specialty-chemicals market, enable
tail-event defensive trading and risk-driven high-frequency trading strategies in finance,
and even promote in silico drug discovery, which has major implications for personalized
medicine. With all this promise, it’s little surprise that the value creation numbers get very
big over time. In the industries we analysed, we foresee quantum computing leading to
incremental operating income of $450 billion to $850 billion by 2050 (with a nearly even
split between incremental annual revenue streams and recurring cost efficiencies). While
that’s a big carrot, it comes at the end of a long stick. More important for today’s decision
makers is understanding the potential ramifications in their industries: what problems
quantum computers will solve, where and how the value will be realized, and how they
can put their organizations on the path to value ahead of the competition.