
Python Practice Questions

This document contains 50 Python practice questions, ranging from easy to tough levels,
suitable for freshers and interview preparation. Questions are organized by topic and
include code examples and explanations.

Table of Contents
1. Basic Concepts and Data Structures
◦ Easy
◦ Moderate
◦ Tough
2. Functions and Control Flow
◦ Easy
◦ Moderate
◦ Tough
3. Object-Oriented Programming (OOP)
◦ Easy
◦ Moderate
◦ Tough
4. Advanced Topics and Interview Scenarios
◦ Easy
◦ Moderate
◦ Tough

1. Basic Concepts and Data Structures

Easy

Question 1: Variables and Data Types

What are variables in Python, and how do you declare them? Provide an example
demonstrating different data types (integer, float, string, boolean).

Explanation:

Variables are named storage locations that hold data. In Python, you don't need to
explicitly declare the data type; it's inferred when you assign a value. Python supports
various built-in data types.
Code Example:

# Integer
age = 30
print(f"Age: {age}, Type: {type(age)}")

# Float
price = 19.99
print(f"Price: {price}, Type: {type(price)}")

# String
name = "Alice"
print(f"Name: {name}, Type: {type(name)}")

# Boolean
is_student = True
print(f"Is Student: {is_student}, Type: {type(is_student)}")

Question 2: Basic Operators

Explain the difference between arithmetic, comparison, and logical operators in Python.
Provide an example for each.

Explanation:

• Arithmetic Operators: Perform mathematical calculations (e.g., +, -, *, /, %, //, **).
• Comparison Operators: Compare two values and return a boolean result (e.g., ==, !=, <, >, <=, >=).
• Logical Operators: Combine conditional statements (e.g., and, or, not).

Code Example:

# Arithmetic Operators
a = 10
b = 3
print(f"a + b = {a + b}")
print(f"a * b = {a * b}")
print(f"a / b = {a / b}")    # true division
print(f"a // b = {a // b}")  # floor division
print(f"a % b = {a % b}")    # modulus
print(f"a ** b = {a ** b}")  # exponentiation

# Comparison Operators
x = 5
y = 10
print(f"x == y: {x == y}")
print(f"x < y: {x < y}")

# Logical Operators
p = True
q = False
print(f"p and q: {p and q}")
print(f"not q: {not q}")

Question 3: Lists

What is a Python list? How do you create a list, access its elements, and add/remove
elements from it? Give examples.

Explanation:

A list is a mutable, ordered sequence of elements. Lists are defined by enclosing elements in square brackets [], separated by commas.

Code Example:

# Create a list
my_list = [1, 2, 3, "apple", "banana"]
print(f"Original list: {my_list}")

# Access elements
print(f"First element: {my_list[0]}")
print(f"Last element: {my_list[-1]}")

# Add elements
my_list.append("cherry")
print(f"List after append: {my_list}")
my_list.insert(1, "orange")
print(f"List after insert: {my_list}")

# Remove elements
my_list.remove(3)
print(f"List after remove: {my_list}")
popped_element = my_list.pop()
print(f"List after pop: {my_list}, Popped: {popped_element}")

Question 4: Tuples

Describe Python tuples. How do they differ from lists? Provide an example of creating
and accessing elements in a tuple.

Explanation:

A tuple is an immutable, ordered sequence of elements. Tuples are defined by enclosing elements in parentheses (), separated by commas. The key difference from lists is that tuples cannot be changed after creation.

Code Example:
# Create a tuple
my_tuple = (10, 20, "hello", 30.5)
print(f"Original tuple: {my_tuple}")

# Access elements
print(f"First element: {my_tuple[0]}")
print(f"Third element: {my_tuple[2]}")

# Attempting to modify a tuple will raise an error
# my_tuple[0] = 5  # This would cause a TypeError

Question 5: Dictionaries

What is a dictionary in Python? How do you create one, add/access key-value pairs, and
iterate through it? Illustrate with an example.

Explanation:

A dictionary is a collection of key-value pairs (insertion-ordered since Python 3.7). Each key must be unique and immutable. Dictionaries are defined by enclosing key-value pairs in curly braces {}, with a colon : separating each key from its value.

Code Example:

# Create a dictionary
my_dict = {"name": "Bob", "age": 25, "city": "New York"}
print(f"Original dictionary: {my_dict}")

# Access values
print(f"Name: {my_dict['name']}")

# Add/modify key-value pairs
my_dict["email"] = "[email protected]"
my_dict["age"] = 26
print(f"Dictionary after modification: {my_dict}")

# Iterate through the dictionary
print("\nIterating through dictionary:")
for key, value in my_dict.items():
    print(f"{key}: {value}")

Moderate

Question 6: String Manipulation


Given a string, write a Python program to count the occurrences of each character in the
string. Ignore case sensitivity. For example, 'Programming' should treat 'P' and 'p' as the
same character.

Explanation:

This problem can be solved using a dictionary to store character counts. We iterate
through the string, convert each character to lowercase, and update its count in the
dictionary.

Code Example:

def count_characters(input_string):
    char_counts = {}
    for char in input_string:
        char_lower = char.lower()
        char_counts[char_lower] = char_counts.get(char_lower, 0) + 1
    return char_counts

text = "Programming"
counts = count_characters(text)
print(f"Character counts for '{text}': {counts}")

text2 = "Hello World"
counts2 = count_characters(text2)
print(f"Character counts for '{text2}': {counts2}")

Question 7: List Comprehension

Explain list comprehension in Python and provide an example to create a new list
containing only the even numbers from an existing list.

Explanation:

List comprehension offers a concise way to create lists. It consists of brackets containing
an expression followed by a for clause, then zero or more for or if clauses. The
result is a new list resulting from evaluating the expression in the context of the for
and if clauses which follow it.

Code Example:

numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
even_numbers = [num for num in numbers if num % 2 == 0]
print(f"Original numbers: {numbers}")
print(f"Even numbers using list comprehension: {even_numbers}")

# Another example: squares of numbers
squares = [x**2 for x in range(1, 6)]
print(f"Squares of numbers: {squares}")

Question 8: Dictionary Operations

Write a Python program to merge two dictionaries. If a key exists in both dictionaries, the
value from the second dictionary should be used. Demonstrate with an example.

Explanation:

There are several ways to merge dictionaries in Python. One common way is the update() method, which adds key-value pairs from another dictionary. If a key exists in both, the value from the dictionary passed to update() overwrites the existing one.

Code Example:

dict1 = {"a": 1, "b": 2, "c": 3}
dict2 = {"c": 4, "d": 5, "e": 6}

# Method 1: Using update()
merged_dict_update = dict1.copy()  # Copy to avoid modifying dict1 directly
merged_dict_update.update(dict2)
print(f"Merged dictionary (update method): {merged_dict_update}")

# Method 2: Using ** unpacking (Python 3.5+)
merged_dict_unpacking = {**dict1, **dict2}
print(f"Merged dictionary (** operator): {merged_dict_unpacking}")

# Method 3: Using the | operator (Python 3.9+)
dict3 = {"f": 7, "g": 8}
dict4 = {"g": 9, "h": 10}
merged_dict_union = dict3 | dict4
print(f"Merged dictionary (| operator): {merged_dict_union}")

Question 9: Sets

What are sets in Python? How do you create a set, add/remove elements, and perform
set operations like union, intersection, and difference? Provide examples.

Explanation:
A set is an unordered collection of unique elements. Sets are useful for mathematical set
operations like union, intersection, difference, and for efficiently checking for
membership.

Code Example:

# Create a set
my_set = {1, 2, 3, 4, 4, 5}  # The duplicate 4 is automatically removed
print(f"Original set: {my_set}")

# Add elements
my_set.add(6)
print(f"Set after adding 6: {my_set}")

# Remove elements
my_set.remove(3)
print(f"Set after removing 3: {my_set}")

# Set operations
set_a = {1, 2, 3, 4}
set_b = {3, 4, 5, 6}

print(f"Set A: {set_a}")
print(f"Set B: {set_b}")

print(f"Union (A | B): {set_a | set_b}")
print(f"Intersection (A & B): {set_a & set_b}")
print(f"Difference (A - B): {set_a - set_b}")
print(f"Symmetric Difference (A ^ B): {set_a ^ set_b}")

Question 10: Immutability of Tuples

Explain the concept of immutability with respect to tuples in Python. What happens if
you try to modify an element of a tuple? Provide a code example to illustrate.

Explanation:

Immutability means that once an object is created, its state cannot be modified. Tuples
are immutable, meaning you cannot change, add, or remove elements after the tuple
has been created. If you try to modify an element, Python will raise a TypeError .

Code Example:

my_tuple = (1, 2, [3, 4])
print(f"Original tuple: {my_tuple}")

# Attempting to modify a direct element of the tuple raises an error
try:
    my_tuple[0] = 10
except TypeError as e:
    print(f"Error: {e} - Cannot modify immutable tuple element")

# However, if a mutable object (like a list) is inside a tuple,
# its contents can still be modified
print(f"Before modifying inner list: {my_tuple}")
my_tuple[2].append(5)
print(f"After modifying inner list: {my_tuple}")

# This works because the tuple stores a reference to the list;
# the reference itself never changes, only the contents of the
# list object that the tuple refers to.

Tough

Question 11: Deep Copy vs. Shallow Copy

Explain the difference between a shallow copy and a deep copy in Python, especially
when dealing with nested data structures. Provide code examples to illustrate when
each might be appropriate and the potential pitfalls of using the wrong one.

Explanation:

• Shallow Copy: Creates a new compound object but does not create copies of
nested objects. It only copies the references of the nested objects. Changes to
nested objects in the copy will affect the original, and vice-versa.
• Deep Copy: Creates a new compound object and then recursively inserts copies of
the objects found in the original. Changes to nested objects in the copy will not
affect the original.

Code Example:

import copy

original_list = [1, 2, [3, 4]]

# Shallow copy
shallow_copied_list = list(original_list)  # or original_list.copy()
shallow_copied_list[0] = 100
shallow_copied_list[2][0] = 300

print(f"Original list after shallow copy modification: {original_list}")
print(f"Shallow copied list: {shallow_copied_list}")
# Notice how original_list[2][0] also changed because the inner
# list is a shared reference

# Deep copy
deep_copied_list = copy.deepcopy(original_list)
deep_copied_list[0] = 1000
deep_copied_list[2][0] = 3000

print(f"Original list after deep copy modification: {original_list}")
print(f"Deep copied list: {deep_copied_list}")
# Notice how original_list remains unchanged

Question 12: Hashing and Immutability in Dictionaries/Sets

Discuss the role of hashing and immutability in the context of Python dictionaries and
sets. Why can only immutable objects be used as keys in dictionaries and elements in
sets? Provide an example of a common mistake related to this concept.

Explanation:

Dictionaries and sets in Python are implemented using hash tables. For these data
structures to work correctly, the keys (for dictionaries) and elements (for sets) must be
hashable. A hashable object has a hash value that never changes during its lifetime (it is
immutable) and can be compared to other objects. Immutable types like numbers,
strings, and tuples are hashable. Mutable types like lists and dictionaries are not
hashable because their content can change, which would alter their hash value.

If a mutable object were used as a key or set element, its hash value could change after
it's added, making it impossible to find or retrieve it later.

Code Example:

# Immutable types as dictionary keys (works)
my_dict = {"name": "Alice", 1: "one", (1, 2): "tuple"}
print(f"Dictionary with immutable keys: {my_dict}")

# Mutable types as dictionary keys (raises TypeError)
try:
    my_dict_error = {[1, 2]: "list as key"}
except TypeError as e:
    print(f"Error: {e} - Cannot use a mutable list as a dictionary key")

# Immutable types as set elements (works)
my_set = {1, "hello", (3, 4)}
print(f"Set with immutable elements: {my_set}")

# Mutable types as set elements (raises TypeError)
try:
    my_set_error = {[1, 2], 3}
except TypeError as e:
    print(f"Error: {e} - Cannot use a mutable list as a set element")

# Common mistake: trying to use a list as a key. If this were allowed
# and the list were later modified, its hash would change and the
# entry could no longer be found.
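A common workaround when list-like data must serve as a dictionary key is to convert it to an immutable tuple first; a minimal sketch:

```python
# Convert the mutable list to an immutable tuple so it can be hashed
coords = [1, 2]
lookup = {tuple(coords): "point A"}
print(lookup[(1, 2)])  # the tuple key is hashable and retrievable
```

This works because the tuple's hash is computed from its (immutable) contents, so it remains stable for the lifetime of the dictionary entry.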

Question 13: Memory Management and Reference Counting

Explain Python's memory management, specifically focusing on reference counting and garbage collection. How does Python handle circular references, and what is the role of the gc module? Provide a conceptual example.

Explanation:

Python uses a private heap to manage memory. All Python objects and data structures
are located in a private heap. The interpreter manages this private heap. Python's
memory management primarily relies on reference counting. Each object has a
reference count, which increments when a new reference points to it and decrements
when a reference is removed. When the reference count drops to zero, the object's
memory is deallocated.

However, reference counting alone cannot handle circular references (e.g., Object A
refers to Object B, and Object B refers to Object A, but no external references point to
either). In such cases, the reference counts never drop to zero, leading to memory leaks.
Python addresses this with a cyclic garbage collector (part of the gc module) that
periodically detects and reclaims these unreachable cycles.

Code Example (Conceptual):

import gc

class Node:
    def __init__(self, value):
        self.value = value
        self.next = None

# Create a circular reference
a = Node("A")
b = Node("B")
a.next = b
b.next = a

# At this point, 'a' and 'b' have reference counts > 0.
# Delete our direct references to 'a' and 'b':
del a
del b

# Now the two nodes are only referenced by each other, forming a cycle.
# Their reference counts won't drop to zero because of the cycle.
# The cyclic garbage collector will eventually detect and clean this up.

# You can manually trigger garbage collection for demonstration
# (not usually needed in production)
gc.collect()
print("Garbage collection performed.")

# The 'gc' module allows inspection and control of the garbage collector.
# For example, gc.get_threshold() returns the collection thresholds.
print(f"Garbage collector thresholds: {gc.get_threshold()}")

Question 14: Custom Data Structures - Linked List Node

Design a basic Node class for a singly linked list. The Node should have a data
attribute and a next attribute (pointing to the next node or None ). Implement a
method to print the data of the current node. This question tests understanding of
object attributes and basic data structure building blocks.

Explanation:

A linked list is a linear data structure where elements are not stored at contiguous
memory locations. Instead, each element (node) stores a reference (or pointer) to the
next element in the sequence. The Node class is the fundamental building block of a
linked list.

Code Example:

class Node:
    def __init__(self, data):
        self.data = data  # Data stored in the node
        self.next = None  # Reference to the next node, initially None

    def print_node_data(self):
        print(f"Node Data: {self.data}")

# Create nodes
node1 = Node(10)
node2 = Node(20)
node3 = Node(30)

# Link nodes
node1.next = node2
node2.next = node3

# Access and print data
node1.print_node_data()
node1.next.print_node_data()  # Access data of the second node
node2.print_node_data()
node3.print_node_data()

# Traverse the linked list
current = node1
while current:
    print(current.data, end=" -> ")
    current = current.next
print("None")

Question 15: Abstract Data Types (ADTs) vs. Data Structures

Explain the difference between an Abstract Data Type (ADT) and a Data Structure.
Provide examples of common ADTs and their corresponding data structure
implementations in Python.

Explanation:

• Abstract Data Type (ADT): A logical description of what a data type does, without specifying how it's implemented. It defines a set of operations and their behavior. It's a conceptual model, focusing on the operations that can be performed on the data.
• Data Structure: A concrete implementation of an ADT. It's a physical representation of how data is organized and stored in memory, along with the algorithms used to manipulate that data.

Examples:

• List → Python list (dynamic array)
• Stack → Python list (using append() and pop())
• Queue → collections.deque (or a Python list, less efficiently)
• Dictionary → Python dict (hash table)
• Set → Python set (hash table)

Code Example (Stack ADT using Python List):

class Stack:
    def __init__(self):
        self._items = []

    def push(self, item):
        self._items.append(item)

    def pop(self):
        if not self.is_empty():
            return self._items.pop()
        return None  # Or raise an error

    def peek(self):
        if not self.is_empty():
            return self._items[-1]
        return None

    def is_empty(self):
        return len(self._items) == 0

    def size(self):
        return len(self._items)

# Usage
s = Stack()
s.push(1)
s.push(2)
print(f"Stack: {s._items}")
print(f"Peek: {s.peek()}")
print(f"Pop: {s.pop()}")
print(f"Stack: {s._items}")
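The Queue ADT from the table above can be implemented the same way with collections.deque; a minimal sketch (the class and method names are illustrative):

```python
from collections import deque

class Queue:
    def __init__(self):
        self._items = deque()

    def enqueue(self, item):
        self._items.append(item)  # add at the back

    def dequeue(self):
        if self._items:
            return self._items.popleft()  # remove from the front in O(1)
        return None  # empty queue

    def is_empty(self):
        return len(self._items) == 0

# Usage
q = Queue()
q.enqueue("a")
q.enqueue("b")
print(q.dequeue())  # "a" comes out first (FIFO)
```

deque is preferred over a plain list here because list.pop(0) shifts every remaining element and costs O(n), while deque.popleft() is O(1).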

Question 16: Generators and Iterators

Explain the concepts of generators and iterators in Python. How do they differ, and when
would you use one over the other? Provide a simple example of a generator function.
Explanation:

• Iterator: An object that implements the iterator protocol, which consists of the
__iter__() and __next__() methods. Iterators allow you to traverse through
a sequence of data one element at a time, without loading the entire sequence into
memory.
• Generator: A simple way to create iterators. A generator function is a function that
contains at least one yield statement. When called, it returns a generator object
(an iterator). Generators are memory-efficient because they produce items one by
one, only when requested, rather than building a complete list in memory.

Differences:

• Implementation: Iterators are typically implemented as classes with __iter__ and __next__ methods. Generators are implemented as functions using the yield keyword.
• Memory: Generators are more memory-efficient for large sequences as they generate values on the fly.
• State: Generators automatically manage their state between calls to next().

When to use:

• Use iterators when you need more complex iteration logic or when building
custom iterable classes.
• Use generators for simple, memory-efficient iteration, especially when dealing
with potentially infinite sequences or large datasets.

Code Example (Generator):

def fibonacci_sequence(n):
    a, b = 0, 1
    for _ in range(n):
        yield a
        a, b = b, a + b

# Using the generator
fib_gen = fibonacci_sequence(10)
for num in fib_gen:
    print(num, end=" ")
print()

# Another example: simple counter generator
def count_up_to(max_val):
    count = 0
    while count <= max_val:
        yield count
        count += 1

counter = count_up_to(5)
print(list(counter))
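For contrast with the generator version, here is the same counter written as a class-based iterator implementing the __iter__/__next__ protocol described above (a sketch; the class name CountUpTo is illustrative):

```python
class CountUpTo:
    """Class-based iterator equivalent of the count_up_to generator."""
    def __init__(self, max_val):
        self.max_val = max_val
        self.count = 0

    def __iter__(self):
        return self  # an iterator returns itself

    def __next__(self):
        if self.count > self.max_val:
            raise StopIteration  # signals the end of iteration
        value = self.count
        self.count += 1
        return value

print(list(CountUpTo(5)))  # [0, 1, 2, 3, 4, 5]
```

Note how the class must track its own state (self.count) and raise StopIteration explicitly; the generator gets both behaviors for free from yield.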

Question 17: Decorators

What are decorators in Python? Explain their purpose and provide a simple example of
how to create and use a decorator to log function calls.

Explanation:

A decorator is a design pattern in Python that allows a user to add new functionality to
an existing object without modifying its structure. Decorators are essentially functions
that take another function as an argument, extend its behavior, and return the extended
function. They are often used for logging, timing, authentication, and caching.

Code Example:

def log_function_call(func):
    def wrapper(*args, **kwargs):
        print(f"Calling function: {func.__name__} with args: {args}, kwargs: {kwargs}")
        result = func(*args, **kwargs)
        print(f"Function {func.__name__} finished. Result: {result}")
        return result
    return wrapper

@log_function_call
def add(a, b):
    return a + b

@log_function_call
def multiply(x, y):
    return x * y

add(5, 3)
multiply(4, 6)
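One caveat with a wrapper like the one above: it replaces the decorated function's metadata, so add.__name__ becomes "wrapper". The standard-library functools.wraps decorator preserves it; a minimal sketch:

```python
import functools

def log_function_call(func):
    @functools.wraps(func)  # copies __name__, __doc__, etc. onto the wrapper
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_function_call
def add(a, b):
    """Return the sum of a and b."""
    return a + b

print(add.__name__)  # "add", not "wrapper"
print(add.__doc__)
```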

Question 18: Context Managers (with statement)

Explain the concept of context managers in Python and the with statement. Why are
they useful, and how can you create your own custom context manager? Provide an
example using a file operation and a custom context manager.

Explanation:
Context managers in Python are objects that define the runtime context for a block of
code. They are typically used to set up a resource (like opening a file or acquiring a lock)
and ensure that the resource is properly cleaned up (like closing the file or releasing the
lock) after the block of code is executed, even if errors occur. The with statement is
used to ensure that a context manager's __enter__ and __exit__ methods are
called correctly.

Why useful:

• Resource Management: Guarantees proper acquisition and release of resources.
• Error Handling: Ensures cleanup even if exceptions occur within the with block.
• Readability: Makes code cleaner and easier to understand by abstracting setup/teardown logic.

Creating Custom Context Managers:

1. Class-based: Implement __enter__ (returns the resource) and __exit__ (handles cleanup) methods.
2. Function-based (using the contextlib.contextmanager decorator): Use a generator function with yield to define the setup and teardown logic.

Code Example:

# Example 1: Using a file as a context manager
with open("example.txt", "w") as f:
    f.write("Hello, context manager!\n")
print("File written and closed automatically.")

# Example 2: Custom class-based context manager
class MyContextManager:
    def __init__(self, name):
        self.name = name

    def __enter__(self):
        print(f"Entering context: {self.name}")
        return self.name

    def __exit__(self, exc_type, exc_val, exc_tb):
        print(f"Exiting context: {self.name}")
        if exc_type:
            print(f"An exception occurred: {exc_type}, {exc_val}")
        return False  # False propagates the exception; True would suppress it

with MyContextManager("Resource A") as resource:
    print(f"Inside context with resource: {resource}")
print("\n")

try:
    with MyContextManager("Resource B") as resource:
        print(f"Inside context with resource: {resource}")
        raise ValueError("Something went wrong!")
except ValueError:
    print("Exception propagated out of the with block.")
print("\n")

# Example 3: Custom function-based context manager using contextlib
from contextlib import contextmanager

@contextmanager
def managed_resource(name):
    print(f"Acquiring resource: {name}")
    yield name  # This is where the 'with' block executes
    print(f"Releasing resource: {name}")

with managed_resource("Database Connection") as db_conn:
    print(f"Using database connection: {db_conn}")

Question 19: Metaclasses

What are metaclasses in Python? Explain their role in object creation and provide a
simple example of a custom metaclass that automatically adds a creation_time
attribute to all instances of classes it manages.

Explanation:

In Python, a metaclass is the class of a class. Just as an ordinary class defines the
behavior of its instances, a metaclass defines the behavior of classes. When you create a
class, Python uses a metaclass (by default, type ) to construct that class object.
Metaclasses allow you to hook into the class creation process and modify or customize
how classes are created.

Role in Object Creation:

1. When you define a class, Python first looks for a metaclass keyword argument in the class definition (e.g., class MyClass(metaclass=MyMeta):).
2. If found, it uses that metaclass to create the class object.
3. If not found, it uses the metaclass of the base class (or type if there is no base class).

Code Example:

import datetime

class MyMeta(type):
    def __new__(mcs, name, bases, dct):
        # mcs: the metaclass itself (MyMeta)
        # name: name of the class being created (e.g., 'MyClass')
        # bases: tuple of base classes
        # dct: dictionary of attributes and methods for the new class

        # Add a new attribute to the class dictionary
        dct['creation_time'] = datetime.datetime.now()

        # Create the class object using the parent metaclass (type)
        return super().__new__(mcs, name, bases, dct)

class MyClass(metaclass=MyMeta):
    def __init__(self, value):
        self.value = value

    def display(self):
        print(f"Value: {self.value}, Created at: {self.creation_time}")

class AnotherClass(metaclass=MyMeta):
    def __init__(self, data):
        self.data = data

    def show_data(self):
        print(f"Data: {self.data}, Created at: {self.creation_time}")

obj1 = MyClass(10)
obj1.display()

obj2 = AnotherClass("Hello")
obj2.show_data()

# Verify that creation_time is a class attribute
print(f"MyClass creation time (from class): {MyClass.creation_time}")

Question 20: Abstract Base Classes (ABCs)

What are Abstract Base Classes (ABCs) in Python? How do they differ from regular
classes, and when would you use them? Provide an example using the abc module to
define an abstract method.

Explanation:
Abstract Base Classes (ABCs) in Python provide a way to define interfaces. They are
classes that cannot be instantiated directly and are meant to be subclassed. ABCs can
contain abstract methods (methods declared but not implemented in the ABC itself),
which must be implemented by any concrete subclass.

Differences from Regular Classes:

• Instantiation: You cannot create an instance of an ABC directly. You must subclass
it and implement all its abstract methods.
• Abstract Methods: ABCs can declare abstract methods using the
@abstractmethod decorator from the abc module. Subclasses are forced to
implement these methods.

When to use:

• Defining Interfaces: To enforce that certain methods are implemented by subclasses, ensuring a consistent API.
• Polymorphism: To allow different concrete classes to be treated uniformly if they adhere to the same interface.
• Preventing Instantiation: To prevent users from creating objects of a base class that is not meant to be used directly.

Code Example:

from abc import ABC, abstractmethod

class Shape(ABC):
    @abstractmethod
    def area(self):
        pass

    @abstractmethod
    def perimeter(self):
        pass

    def describe(self):
        return "This is a shape."

class Circle(Shape):
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14159 * self.radius ** 2

    def perimeter(self):
        return 2 * 3.14159 * self.radius

class Rectangle(Shape):
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

    def perimeter(self):
        return 2 * (self.width + self.height)

# Attempting to instantiate Shape raises an error
try:
    s = Shape()
except TypeError as e:
    print(f"Error: {e} - Cannot instantiate abstract class Shape")

circle = Circle(5)
print(f"Circle area: {circle.area()}")
print(f"Circle perimeter: {circle.perimeter()}")
print(f"Circle description: {circle.describe()}")

rectangle = Rectangle(4, 6)
print(f"Rectangle area: {rectangle.area()}")
print(f"Rectangle perimeter: {rectangle.perimeter()}")
print(f"Rectangle description: {rectangle.describe()}")

2. Functions and Control Flow

Easy

Question 21: Basic Function Definition

What is a function in Python, and how do you define a simple function that takes two
numbers as arguments and returns their sum? Call the function with example values.

Explanation:

A function is a block of organized, reusable code used to perform a single, related action. Functions provide better modularity for your application and a high degree of code reuse. You define a function using the def keyword.

Code Example:
def add_numbers(a, b):
    """This function takes two numbers and returns their sum."""
    return a + b

# Call the function
result = add_numbers(5, 7)
print(f"The sum is: {result}")

result2 = add_numbers(10.5, 20.3)
print(f"The sum is: {result2}")

Question 22: if-elif-else Statement

Write a Python program that takes a numerical score as input and prints a letter grade based on the following criteria:
• 90-100: A
• 80-89: B
• 70-79: C
• 60-69: D
• Below 60: F

Explanation:

The if-elif-else statement is used for conditional execution. It allows your program
to make decisions based on whether certain conditions are true or false.

Code Example:

def get_grade(score):
    if score >= 90:
        return "A"
    elif score >= 80:
        return "B"
    elif score >= 70:
        return "C"
    elif score >= 60:
        return "D"
    else:
        return "F"

print(f"Score 95: {get_grade(95)}")
print(f"Score 82: {get_grade(82)}")
print(f"Score 70: {get_grade(70)}")
print(f"Score 65: {get_grade(65)}")
print(f"Score 50: {get_grade(50)}")

Question 23: for Loop

Use a for loop to iterate through a list of names and print each name. Also,
demonstrate how to use range() with a for loop to print numbers from 1 to 5.

Explanation:
The for loop is used for iterating over a sequence (that is, a list, tuple, dictionary, set,
or string). The range() function generates a sequence of numbers.

Code Example:

names = ["Alice", "Bob", "Charlie"]

print("Iterating through names:")
for name in names:
    print(name)

print("\nPrinting numbers from 1 to 5:")
for i in range(1, 6):
    print(i)

Question 24: while Loop

Write a Python program using a while loop to print numbers from 10 down to 1.
Include a condition to stop the loop when the number becomes 0.

Explanation:

The while loop repeatedly executes a block of statements as long as a given condition
is true. It is useful when you don't know in advance how many times the loop will run.

Code Example:

count = 10
print("Counting down:")
while count >= 1:
    print(count)
    count -= 1

# Example with break
num = 1
print("\nCounting up with break:")
while True:
    print(num)
    if num == 5:
        break
    num += 1

Question 25: Function Arguments (Positional and Keyword)

Explain the difference between positional arguments and keyword arguments in Python
functions. Provide an example function call demonstrating both.
Explanation:

• Positional Arguments: Arguments whose positions (order) matter. The values are
assigned to parameters based on their order.
• Keyword Arguments: Arguments whose names are explicitly passed along with
their values. The order does not matter.

Code Example:

def greet(name, message):
    print(f"Hello {name}, {message}")

# Positional arguments
greet("Alice", "how are you?")

# Keyword arguments
greet(message="Good morning!", name="Bob")

# Mixing positional and keyword arguments (positional must come first)
greet("Charlie", message="Nice to meet you!")

Moderate

Question 26: Default Arguments and Variable-Length Arguments ( *args , **kwargs )

Explain default arguments and variable-length arguments ( *args and **kwargs ) in Python functions. Provide an example function that uses all three concepts.

Explanation:

• Default Arguments: Allow you to provide a default value for a parameter. If the
caller doesn't provide a value for that parameter, the default value is used.
• *args (Arbitrary Positional Arguments): Allows a function to accept an arbitrary
number of positional arguments. These arguments are packed into a tuple.
• **kwargs (Arbitrary Keyword Arguments): Allows a function to accept an
arbitrary number of keyword arguments. These arguments are packed into a
dictionary.

Code Example:

def configure_settings(setting1, setting2="default_value", *options, **metadata):
    print(f"Setting 1: {setting1}")
    print(f"Setting 2: {setting2}")
    print(f"Options (args): {options}")
    print(f"Metadata (kwargs): {metadata}")

# Example calls
configure_settings("value_A")
configure_settings("value_B", "custom_value_C")
configure_settings("value_D", "value_E", 1, 2, 3, key1="val1", key2="val2")
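The same `*` and `**` syntax also works in the other direction: unpacking a sequence or dictionary into individual arguments at the call site. A minimal sketch (the `describe` function is made up for illustration):

```python
def describe(name, role, city):
    return f"{name} is a {role} in {city}"

args = ["Alice", "developer", "Berlin"]
kwargs = {"name": "Bob", "role": "designer", "city": "Paris"}

# * unpacks a list/tuple into positional arguments
print(describe(*args))     # Alice is a developer in Berlin

# ** unpacks a dict into keyword arguments
print(describe(**kwargs))  # Bob is a designer in Paris
```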

Question 27: Lambda Functions

What are lambda functions in Python? When would you use them, and what are their
limitations? Provide an example of using a lambda function with map() or filter() .

Explanation:

Lambda functions (also called anonymous functions) are small, single-expression
functions that don't have a name. They are defined using the lambda keyword. They
are typically used for short, throwaway functions that are not going to be reused.

Limitations:

• Can only contain a single expression.
• Cannot contain statements (like if , for , while , return ).
• Cannot have multiple lines of code.

Code Example:

# Example with filter()
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
even_numbers = list(filter(lambda x: x % 2 == 0, numbers))
print(f"Even numbers: {even_numbers}")

# Example with map()
squared_numbers = list(map(lambda x: x**2, numbers))
print(f"Squared numbers: {squared_numbers}")

# Simple lambda function
add = lambda a, b: a + b
print(f"Sum using lambda: {add(10, 20)}")
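Another common home for a lambda is the `key` argument of `sorted()`, where a tiny throwaway function is exactly what's needed:

```python
# Sort a list of (name, score) pairs by score using a lambda as the key
scores = [("Alice", 88), ("Bob", 95), ("Charlie", 72)]

by_score = sorted(scores, key=lambda pair: pair[1])
print(by_score)  # [('Charlie', 72), ('Alice', 88), ('Bob', 95)]

# reverse=True for descending order
top_first = sorted(scores, key=lambda pair: pair[1], reverse=True)
print(top_first[0])  # ('Bob', 95)
```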

Question 28: Scope (LEGB Rule)

Explain the LEGB rule in Python for variable scope. Provide an example demonstrating
each level of scope.
Explanation:

The LEGB rule defines the order in which Python resolves names (variables, functions,
classes, etc.) in your code. It stands for:

• Local: Names assigned within a function (or class method).
• Enclosing function locals: Names in the local scope of any enclosing functions (nested functions).
• Global: Names assigned at the top-level of a module (file).
• Built-in: Names pre-assigned in the built-in module (e.g., print , len , str ).

Python searches for a name in this order. If it doesn't find the name in one scope, it
moves to the next.

Code Example:

x = "global"

def outer_function():
    x = "enclosing"

    def inner_function():
        x = "local"
        print(f"Inside inner_function: {x}")  # Local scope

    inner_function()
    print(f"Inside outer_function: {x}")  # Enclosing scope

outer_function()
print(f"Outside all functions: {x}")  # Global scope

# Built-in example
# print is a built-in function
print("Hello")
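The fallback through the scopes can be made visible by removing the inner assignment: when a nested function does not define the name itself, the lookup moves outward until it finds one. A small sketch:

```python
x = "global"

def outer():
    x = "enclosing"
    def inner():
        # No local 'x' here, so Python falls back to the enclosing scope
        return x
    return inner()

print(outer())  # enclosing

def lonely():
    # No local or enclosing 'x', so the global one is found
    return x

print(lonely())  # global
```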

Question 29: Error Handling ( try-except-finally )

Explain Python's error handling mechanism using try , except , and finally blocks.
Write a program that attempts to divide two numbers, handles a ZeroDivisionError ,
and always prints a cleanup message.

Explanation:

• try block: Contains the code that might raise an exception.
• except block: Catches and handles specific exceptions that occur in the try block.
• else block: Runs only if the try block completed without raising an exception.
• finally block: Contains code that will always be executed, regardless of whether an exception occurred or was handled. It's typically used for cleanup operations.

Code Example:

def safe_divide(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        print("Error: Cannot divide by zero!")
        return None
    except TypeError:
        print("Error: Invalid input types for division!")
        return None
    else:
        # This block executes only if no exception occurred in the try block
        print("Division successful!")
        return result
    finally:
        # This block always executes
        print("Division attempt finished.")

print(f"Result 1: {safe_divide(10, 2)}")
print("\n")
print(f"Result 2: {safe_divide(10, 0)}")
print("\n")
print(f"Result 3: {safe_divide(10, 'a')}")

Question 30: Recursion

What is recursion in programming? Write a recursive Python function to calculate the
factorial of a non-negative integer. Explain the base case and recursive step.

Explanation:

Recursion is a programming technique where a function calls itself to solve a problem. A
recursive function must have:

• Base Case: A condition that stops the recursion, preventing an infinite loop. This is
the simplest form of the problem that can be solved directly.
• Recursive Step: The part where the function calls itself with a modified input,
moving closer to the base case.

Code Example:

def factorial(n):
    # Base case: factorial of 0 or 1 is 1
    if n == 0 or n == 1:
        return 1
    # Recursive step: n! = n * (n-1)!
    else:
        return n * factorial(n - 1)

print(f"Factorial of 0: {factorial(0)}")
print(f"Factorial of 1: {factorial(1)}")
print(f"Factorial of 5: {factorial(5)}")  # 5 * 4 * 3 * 2 * 1 = 120
print(f"Factorial of 7: {factorial(7)}")
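One caveat worth knowing: CPython guards against runaway recursion with a depth limit (1000 frames by default), so a missing or unreachable base case raises RecursionError rather than looping forever:

```python
import sys

print(sys.getrecursionlimit())  # Default limit in CPython is 1000

def no_base_case(n):
    # Missing base case: recursion never stops on its own
    return no_base_case(n + 1)

try:
    no_base_case(0)
except RecursionError:
    print("RecursionError: maximum recursion depth exceeded")
```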

Tough

Question 31: Closures and Nonlocal Keyword

Explain the concept of closures in Python. How does the nonlocal keyword relate to
closures, and when would you use it? Provide an example demonstrating a closure and
the use of nonlocal .

Explanation:

A closure is a nested function that remembers and has access to variables from its
enclosing scope, even after the enclosing function has finished execution. This allows
the inner function to operate on data that is not part of its own local scope or the
global scope.

The nonlocal keyword is used in nested functions to indicate that a variable refers to a
variable in an enclosing scope (not global) and should not be treated as a new local
variable. It allows you to modify variables in an outer (but non-global) scope from within
an inner function.

When to use nonlocal :

• When you need to modify a variable in an enclosing function's scope from within a
nested function.
• To avoid creating a new local variable with the same name, which would shadow
the outer variable.

Code Example:

def outer_function(x):
    count = 0  # This is the variable the closure will access and modify

    def inner_function():
        nonlocal count  # 'count' refers to the 'count' in outer_function's scope
        count += 1
        print(f"Inner function called {count} times. x is {x}")
        return count
    return inner_function

# Create closures
counter1 = outer_function(10)
counter2 = outer_function(20)

counter1()  # Output: Inner function called 1 times. x is 10
counter1()  # Output: Inner function called 2 times. x is 10
counter2()  # Output: Inner function called 1 times. x is 20
counter1()  # Output: Inner function called 3 times. x is 10
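The captured variables are stored in "cells" attached to the inner function's `__closure__` attribute, so you can inspect what a closure remembers (the `make_adder` helper here is just an illustration):

```python
def make_adder(n):
    def adder(x):
        return x + n  # 'n' is captured from the enclosing scope
    return adder

add5 = make_adder(5)

# The names of the captured variables, and the cells holding their values
print(add5.__code__.co_freevars)          # ('n',)
print(add5.__closure__[0].cell_contents)  # 5
print(add5(10))                           # 15
```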

Question 32: Generators and yield from

Beyond basic generators, explain the purpose and usage of the yield from expression
in Python. When is it particularly useful, and how does it simplify generator delegation?
Provide an example.

Explanation:

The yield from expression (introduced in Python 3.3) is used to delegate control to a
subgenerator or any iterable. It effectively allows a generator to yield all values from
another iterable without explicitly looping over it. This simplifies the code for chaining
generators and handling complex iteration logic.

When yield from is useful:

• Delegating to Subgenerators: When you have a main generator that needs to produce values from one or more subgenerators.
• Simplifying Chaining: It makes the code cleaner and more readable compared to manually iterating and yielding from a subgenerator.
• Propagating return values: It correctly handles return values from subgenerators (which become the value of the yield from expression).

Code Example:

def subgenerator():
    yield "A"
    yield "B"
    return "Subgenerator finished"

def main_generator():
    print("Starting main generator")
    result = yield from subgenerator()
    print(f"Subgenerator returned: {result}")
    yield "C"
    print("Main generator finished")

# Using the main generator
gen = main_generator()

try:
    print(next(gen))  # Output: Starting main generator, A
    print(next(gen))  # Output: B
    print(next(gen))  # Output: Subgenerator returned: Subgenerator finished, C
    print(next(gen))  # Raises StopIteration
except StopIteration:
    print("Iteration finished.")

print("\nAnother example with multiple subgenerators:")

def greet_names(names):
    for name in names:
        yield f"Hello, {name}"

def farewell_names(names):
    for name in names:
        yield f"Goodbye, {name}"

def conversation_flow():
    yield from greet_names(["Alice", "Bob"])
    yield "--- Transition ---"
    yield from farewell_names(["Charlie", "David"])

conv_gen = conversation_flow()
for msg in conv_gen:
    print(msg)

Question 33: Coroutines and async / await (Basic)

Introduce the concept of coroutines in Python and how they enable asynchronous
programming. Explain the basic usage of async and await keywords. Provide a
simple example of an asynchronous function.
Explanation:

• Coroutines: In Python, coroutines are special functions that can pause their
execution and resume later. They are the building blocks of asynchronous
programming, allowing programs to perform multiple tasks concurrently without
blocking the main execution thread.
• async keyword: Used to define a coroutine function. An async def function is
a coroutine.
• await keyword: Used inside an async function to pause its execution until an
awaitable (like another coroutine, a Task, or a Future) completes. This allows the
program to switch to other tasks while waiting.

Why Asynchronous Programming?

It's particularly useful for I/O-bound operations (network requests, file operations,
database queries) where a program spends a lot of time waiting for external resources.
Asynchronous programming allows the program to do other work during these waiting
periods, improving efficiency and responsiveness.

Code Example:

import asyncio

async def say_hello(name, delay):
    await asyncio.sleep(delay)  # Simulate an I/O bound operation
    print(f"Hello, {name} after {delay} seconds!")

async def main():
    print("Starting main asynchronous program...")
    # Schedule coroutines to run concurrently
    task1 = asyncio.create_task(say_hello("Alice", 2))
    task2 = asyncio.create_task(say_hello("Bob", 1))

    # Await the completion of the tasks
    await task1
    await task2
    print("All greetings done.")

# Run the main asynchronous function
if __name__ == "__main__":
    asyncio.run(main())
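The payoff of running tasks concurrently is that total wall time is roughly the longest delay, not the sum. A small sketch using `asyncio.gather` and a timer (the `fetch` coroutine is hypothetical):

```python
import asyncio
import time

async def fetch(label, delay):
    await asyncio.sleep(delay)  # Stand-in for a network call
    return label

async def main():
    start = time.perf_counter()
    # gather() runs the coroutines concurrently and returns results in order
    results = await asyncio.gather(fetch("A", 1.0), fetch("B", 1.0))
    elapsed = time.perf_counter() - start
    print(results)  # ['A', 'B']
    print(f"Elapsed: {elapsed:.1f}s")  # ~1s total, not 2s: the waits overlap

asyncio.run(main())
```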

Question 34: First-Class Functions and Higher-Order Functions

Explain what it means for functions to be first-class citizens in Python. Define higher-order
functions and provide an example of a higher-order function that takes another function
as an argument.

Explanation:

In Python, functions are first-class citizens, meaning they can be treated like any other
variable or object. This implies:

1. Assigned to variables: Functions can be assigned to variables.
2. Passed as arguments: Functions can be passed as arguments to other functions.
3. Returned from functions: Functions can be returned as values from other functions.
4. Stored in data structures: Functions can be stored in lists, dictionaries, etc.

A higher-order function is a function that either takes one or more functions as arguments or returns a function as its result (or both).

Code Example:

def apply_operation(operation, x, y):
    """A higher-order function that applies an operation to two numbers."""
    return operation(x, y)

def add(a, b):
    return a + b

def subtract(a, b):
    return a - b

# Pass functions as arguments
result_add = apply_operation(add, 10, 5)
print(f"Addition result: {result_add}")

result_subtract = apply_operation(subtract, 10, 5)
print(f"Subtraction result: {result_subtract}")

# Example of a function returning a function (closure)
def make_multiplier(factor):
    def multiplier(number):
        return number * factor
    return multiplier

double = make_multiplier(2)
triple = make_multiplier(3)

print(f"Double 5: {double(5)}")
print(f"Triple 5: {triple(5)}")
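"Stored in data structures" is also practical: a dictionary of functions makes a simple dispatch table, a common alternative to long if/elif chains. A sketch (the `calculate` helper is invented for this example):

```python
import operator

# Map operation names to functions; functions are values like any other
operations = {
    "add": operator.add,
    "sub": operator.sub,
    "mul": operator.mul,
}

def calculate(op_name, x, y):
    op = operations[op_name]  # Look the function up like any other value
    return op(x, y)

print(calculate("add", 10, 5))  # 15
print(calculate("mul", 10, 5))  # 50
```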
Question 35: Function Annotations and Type Hinting

Explain the purpose of function annotations and type hinting in Python. How do they
improve code readability and maintainability, and what are their limitations? Provide an
example using type hints for function parameters and return values.

Explanation:

Function annotations (or type hints) are a way to indicate the expected types of
function parameters and return values. Function annotations were introduced in Python 3.0
(PEP 3107), and their type-hinting semantics were standardized in Python 3.5 (PEP 484).
They are primarily used for:

• Improved Readability: Makes the code easier to understand by explicitly stating the expected types.
• Better Maintainability: Helps in catching type-related bugs early, especially in larger codebases.
• Tooling Support: IDEs, linters (like MyPy), and other static analysis tools can use type hints to provide better autocompletion, error checking, and refactoring capabilities.

Limitations:

• Not Enforced at Runtime: Python does not enforce type hints at runtime. They are
merely suggestions or metadata. If you pass an incorrect type, the program will still
run (and likely fail later if the operation is incompatible).
• Optional: Using type hints is optional, and not all Python codebases adopt them.

Code Example:

def add_numbers_typed(a: int, b: int) -> int:
    """Adds two integers and returns an integer."""
    return a + b

def greet_user(name: str, age: int) -> str:
    """Greets a user with their name and age."""
    return f"Hello, {name}! You are {age} years old."

# Valid calls
print(add_numbers_typed(10, 20))
print(greet_user("Alice", 30))

# Python will not raise an error here, but a type checker would flag it
# print(add_numbers_typed(10, "20"))

from typing import List, Dict, Union, Optional

def process_data(data: List[int]) -> float:
    """Calculates the average of a list of integers."""
    if not data:
        return 0.0
    return sum(data) / len(data)

def get_user_info(user_id: int) -> Optional[Dict[str, Union[str, int]]]:
    """Returns user info as a dictionary, or None if not found."""
    users = {
        1: {"name": "Bob", "age": 25},
        2: {"name": "Charlie", "age": 30}
    }
    return users.get(user_id)

print(f"Average of [1,2,3]: {process_data([1, 2, 3])}")
print(f"User 1 info: {get_user_info(1)}")
print(f"User 3 info: {get_user_info(3)}")
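The "not enforced at runtime" point can be seen directly: hints are stored as metadata in the function's `__annotations__` dictionary, and Python never checks them when the function is called (the `scale` function is just an illustration):

```python
def scale(value: float, factor: int = 2) -> float:
    return value * factor

# Hints live in __annotations__; nothing validates them at call time
print(scale.__annotations__)
# {'value': <class 'float'>, 'factor': <class 'int'>, 'return': <class 'float'>}

# No runtime enforcement: passing a str "works" because str * int is valid
print(scale("ab"))  # abab
```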

3. Object-Oriented Programming (OOP)

Easy

Question 36: Classes and Objects

What are classes and objects in Python? How do you define a simple class Dog with a
name attribute and a bark() method? Create an object of this class and call its
method.

Explanation:

• Class: A blueprint or a template for creating objects. It defines a set of attributes (data) and methods (functions) that the objects created from the class will have.
• Object: An instance of a class. When a class is defined, no memory is allocated until an object is created from it.

Code Example:

class Dog:
    # The constructor method
    def __init__(self, name):
        self.name = name  # Attribute

    # A method
    def bark(self):
        print(f"{self.name} says Woof!")

# Create an object (instance) of the Dog class
my_dog = Dog("Buddy")

# Access attribute
print(f"My dog's name is {my_dog.name}")

# Call method
my_dog.bark()

another_dog = Dog("Lucy")
another_dog.bark()

Question 37: __init__ Method

Explain the purpose of the __init__ method in Python classes. When is it called, and
what is its role in object initialization? Provide an example.

Explanation:

The __init__ method is a special method in Python classes, often referred to as the
constructor. It is automatically called when a new object (instance) of the class is
created. Its primary purpose is to initialize the attributes of the newly created object.

• self is a convention for the first parameter, representing the instance of the class
itself.
• It allows you to pass arguments when creating an object to set its initial state.

Code Example:

class Car:
    def __init__(self, make, model, year):
        self.make = make
        self.model = model
        self.year = year
        self.is_running = False  # Default attribute

    def start_engine(self):
        self.is_running = True
        print(f"The {self.year} {self.make} {self.model} engine started.")

# Create Car objects, __init__ is called automatically
car1 = Car("Toyota", "Camry", 2020)
car2 = Car("Honda", "Civic", 2022)

print(f"Car 1: {car1.make} {car1.model}")
print(f"Car 2: {car2.make} {car2.model}")
car1.start_engine()
print(f"Is Car 1 running? {car1.is_running}")

Question 38: Instance Variables vs. Class Variables

What is the difference between instance variables and class variables in Python? When
would you use each, and how are they defined? Illustrate with an example.

Explanation:

• Instance Variables: Belong to a specific instance (object) of a class. Each object has its own copy of instance variables. They are defined inside methods (usually __init__ ) using self.variable_name .
• Class Variables: Belong to the class itself and are shared by all instances of that class. They are defined directly inside the class, outside of any method.

When to use:

• Instance Variables: For data that is unique to each object (e.g., name , age ,
balance ).
• Class Variables: For data that is common to all objects of a class (e.g.,
company_name , species , count_of_objects ).

Code Example:

class Student:
    school_name = "ABC High School"  # Class variable

    def __init__(self, name, student_id):
        self.name = name  # Instance variable
        self.student_id = student_id  # Instance variable

    def display_info(self):
        print(f"Name: {self.name}, ID: {self.student_id}, School: {Student.school_name}")

# Create instances
s1 = Student("Alice", "S001")
s2 = Student("Bob", "S002")

s1.display_info()
s2.display_info()

# Accessing class variable via class name
print(f"\nSchool name (via class): {Student.school_name}")
# Accessing class variable via instance (though not recommended for modification)
print(f"School name (via s1): {s1.school_name}")

# Modifying class variable affects all instances
Student.school_name = "XYZ Academy"
s1.display_info()
s2.display_info()
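A classic pitfall behind the "not recommended for modification" comment: assigning through an instance does not change the class variable; it creates a new instance attribute that shadows it for that object only. A minimal sketch:

```python
class Counter:
    total = 0  # Class variable shared by all instances

c1 = Counter()
c2 = Counter()

# This does NOT modify Counter.total; it creates c1.total,
# an instance attribute that shadows the class variable on c1
c1.total = 99

print(Counter.total)  # 0  - class variable unchanged
print(c1.total)       # 99 - instance attribute shadows the class variable
print(c2.total)       # 0  - still reads the class variable
```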

Question 39: Inheritance (Single Inheritance)

Explain the concept of inheritance in Python. Create a Vehicle base class and a Car
derived class that inherits from Vehicle . Demonstrate method overriding.

Explanation:

Inheritance is a fundamental concept in OOP that allows a class (child/derived class) to
inherit attributes and methods from another class (parent/base class). This promotes
code reusability and establishes an "is-a" relationship (e.g., a Car is a Vehicle).

• Method Overriding: When a subclass provides its own implementation of a method that is already defined in its superclass.

Code Example:

class Vehicle:
    def __init__(self, brand, model):
        self.brand = brand
        self.model = model

    def display_info(self):
        print(f"Brand: {self.brand}, Model: {self.model}")

    def start_engine(self):
        print("Vehicle engine started.")

class Car(Vehicle):
    def __init__(self, brand, model, num_doors):
        super().__init__(brand, model)  # Call the parent class constructor
        self.num_doors = num_doors

    # Method Overriding
    def start_engine(self):
        print(f"The {self.brand} {self.model} car engine started with a roar!")

    def display_car_info(self):
        self.display_info()  # Call parent method
        print(f"Number of doors: {self.num_doors}")

# Create objects
vehicle = Vehicle("Generic", "Transport")
vehicle.display_info()
vehicle.start_engine()

print("\n")

car = Car("Toyota", "Corolla", 4)
car.display_car_info()
car.start_engine()  # Calls the overridden method in Car class

Question 40: Polymorphism

Explain polymorphism in Python with a simple example. How does it allow objects of
different classes to be treated uniformly?

Explanation:

Polymorphism (meaning "many forms") is an OOP concept that allows objects of
different classes to be treated as objects of a common type. In Python, polymorphism is
achieved through method overriding and duck typing: if an object walks like a duck and
quacks like a duck, then it's a duck, regardless of its actual class.

Code Example:

class Dog:
    def speak(self):
        return "Woof!"

class Cat:
    def speak(self):
        return "Meow!"

class Duck:
    def speak(self):
        return "Quack!"

def make_sound(animal):
    print(animal.speak())

# Create objects of different classes
dog = Dog()
cat = Cat()
duck = Duck()

# Call the same function with different objects
make_sound(dog)
make_sound(cat)
make_sound(duck)

# Another example: common interface for different shapes
class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return 3.14 * self.radius * self.radius

class Rectangle:
    def __init__(self, width, height):
        self.width = width
        self.height = height

    def area(self):
        return self.width * self.height

shapes = [Circle(5), Rectangle(4, 6)]

for shape in shapes:
    print(f"Area: {shape.area()}")

Moderate

Question 41: Encapsulation and Access Modifiers

Explain the concept of encapsulation in Python OOP. How are "private" and "protected"
members typically indicated in Python, and what are the conventions around accessing
them? Provide an example.

Explanation:

Encapsulation is the bundling of data (attributes) and methods that operate on the data
into a single unit (a class). It restricts direct access to some of an object's components,
preventing accidental modification and promoting data integrity. In Python,
encapsulation is achieved by convention, as there are no strict "private" or "protected"
keywords like in some other languages.

• "Protected" members (single underscore _ ): Attributes or methods prefixed with a single underscore (e.g., _attribute , _method ) are conventionally considered "protected." This is a hint to other developers that these members are intended for internal use within the class or its subclasses and should not be directly accessed from outside.
• "Private" members (double underscore __ ): Attributes or methods prefixed with a double underscore (e.g., __attribute , __method ) trigger Python's name mangling. This means the interpreter renames these attributes to _ClassName__attribute to make them harder to access directly from outside the class. While not truly private (they can still be accessed if you know the mangled name), it's a stronger convention against external access.

Code Example:

class BankAccount:
    def __init__(self, initial_balance):
        self.__balance = initial_balance  # "Private" via name mangling
        self._account_number = "12345"  # Protected by convention

    def deposit(self, amount):
        if amount > 0:
            self.__balance += amount
            print(f"Deposited {amount}. New balance: {self.__balance}")
        else:
            print("Deposit amount must be positive.")

    def withdraw(self, amount):
        if 0 < amount <= self.__balance:
            self.__balance -= amount
            print(f"Withdrew {amount}. New balance: {self.__balance}")
        else:
            print("Invalid withdrawal amount or insufficient funds.")

    def get_balance(self):
        return self.__balance

# Create an account
account = BankAccount(1000)

# Accessing public method
account.deposit(500)
account.withdraw(200)

# Attempting to access "private" members directly (discouraged)
# print(account.__balance)  # This would raise an AttributeError
print(account._BankAccount__balance)  # Accessing mangled name (possible but bad practice)
print(account._account_number)  # Accessing protected member (possible but discouraged)
print(f"Current balance: {account.get_balance()}")
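The idiomatic Python way to guard attribute access is a property: callers keep plain attribute syntax while the class runs validation on assignment. A minimal sketch (the `Temperature` class is invented for illustration):

```python
class Temperature:
    def __init__(self, celsius):
        self._celsius = celsius  # Conventionally internal storage

    @property
    def celsius(self):
        return self._celsius

    @celsius.setter
    def celsius(self, value):
        if value < -273.15:
            raise ValueError("Temperature below absolute zero!")
        self._celsius = value

t = Temperature(25)
t.celsius = 30       # Looks like plain attribute access, but runs the setter
print(t.celsius)     # 30
try:
    t.celsius = -300  # Rejected by the setter
except ValueError as e:
    print(e)
```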

Question 42: Multiple Inheritance and Method Resolution Order (MRO)

Explain multiple inheritance in Python and the challenges it can present (e.g., the
"diamond problem"). How does Python resolve method calls in the presence of multiple
inheritance, and what is the Method Resolution Order (MRO)? Provide a simple example
demonstrating MRO.

Explanation:

Multiple inheritance allows a class to inherit from multiple parent classes, combining
their functionalities. While powerful, it can lead to complexities, especially the "diamond
problem," where a class inherits from two classes that have a common ancestor, leading
to ambiguity in method resolution.

Python resolves method calls in multiple inheritance using the Method Resolution
Order (MRO). MRO defines the order in which base classes are searched for a method or
attribute. Python 2 used a depth-first, then left-to-right approach, but Python 3 (and
new-style classes in Python 2) uses the C3 linearization algorithm, which ensures a
consistent and predictable order.

You can inspect the MRO of a class using ClassName.__mro__ or help(ClassName) .

Code Example:

class A:
    def method(self):
        print("Method from A")

class B(A):
    def method(self):
        print("Method from B")

class C(A):
    def method(self):
        print("Method from C")

class D(B, C):
    # If D doesn't define method, which one is called?
    pass

class E(C, B):
    pass

obj_d = D()
obj_d.method()  # Output will depend on MRO

obj_e = E()
obj_e.method()

print("\nMRO for D:")
print(D.__mro__)

print("\nMRO for E:")
print(E.__mro__)

# Example of using super() with MRO
class X:
    def greet(self):
        print("Hello from X")

class Y(X):
    def greet(self):
        print("Hello from Y")
        super().greet()

class Z(X):
    def greet(self):
        print("Hello from Z")
        super().greet()

class W(Y, Z):
    def greet(self):
        print("Hello from W")
        super().greet()

obj_w = W()
obj_w.greet()
print("\nMRO for W:")
print(W.__mro__)

Question 43: Class Methods, Static Methods, and Instance Methods

Explain the differences between instance methods, class methods, and static methods in
Python. When would you use each type of method? Provide an example demonstrating
all three.

Explanation:

• Instance Methods: The most common type of method. They operate on an instance of the class and have access to the instance's data through the self parameter. They are defined without any special decorators.
• Class Methods: Operate on the class itself, not a specific instance. They receive the class as the first argument (conventionally named cls ). They are defined using the @classmethod decorator. Useful for factory methods or methods that operate on class-level data.
• Static Methods: Do not operate on the instance or the class. They are essentially regular functions that happen to be defined within a class. They don't receive self or cls as their first argument. They are defined using the @staticmethod decorator. Useful for utility functions that logically belong to the class but don't need access to its state.

Code Example:

class MyClass:
    class_variable = 0

    def __init__(self, instance_value):
        self.instance_value = instance_value

    # Instance Method
    def instance_method(self):
        print(f"Instance method called. Instance value: {self.instance_value}")
        # Modify the class variable via the class; 'self.class_variable += 1'
        # would instead create an instance attribute that shadows it
        MyClass.class_variable += 1

    # Class Method
    @classmethod
    def class_method(cls):
        print(f"Class method called. Class variable: {cls.class_variable}")
        cls.class_variable += 10  # Can modify class variable

    # Static Method
    @staticmethod
    def static_method(x, y):
        print(f"Static method called. Sum: {x + y}")
        # Cannot access instance_value or class_variable directly

# Create instances
obj1 = MyClass(100)
obj2 = MyClass(200)

# Call instance methods
obj1.instance_method()
obj2.instance_method()

# Call class methods
MyClass.class_method()
obj1.class_method()  # Can be called via instance, but operates on class

# Call static methods
MyClass.static_method(5, 7)
obj1.static_method(10, 20)

print(f"\nFinal class_variable value: {MyClass.class_variable}")
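Class methods are commonly used as alternative constructors ("factory methods"): because they receive `cls`, they also build the right type for subclasses. A sketch (the `Date` class and its `from_iso` method are invented for this example):

```python
class Date:
    def __init__(self, year, month, day):
        self.year, self.month, self.day = year, month, day

    @classmethod
    def from_iso(cls, text):
        """Build a Date from a 'YYYY-MM-DD' string."""
        year, month, day = map(int, text.split("-"))
        return cls(year, month, day)  # cls, so subclasses get subclass instances

d = Date.from_iso("2024-07-15")
print(d.year, d.month, d.day)  # 2024 7 15
```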

Question 44: Operator Overloading

What is operator overloading in Python? How can you implement it to define custom
behavior for operators (e.g., + , - , == ) when applied to objects of your custom class?
Provide an example where you overload the + operator for a Vector class.

Explanation:

Operator overloading allows you to define how standard Python operators (like + , - ,
* , == , etc.) behave when applied to instances of your custom classes. This makes your
custom objects behave more like built-in types and improves code readability and
intuitiveness.

To overload an operator, you implement special methods (also known as "dunder methods" or "magic methods") in your class. For example:

• __add__(self, other) for +
• __sub__(self, other) for -
• __eq__(self, other) for ==
• __len__(self) for len()
• __str__(self) for str() or print()

Code Example:

class Vector:
    def __init__(self, x, y):
        self.x = x
        self.y = y

    # Overload the + operator
    def __add__(self, other):
        if isinstance(other, Vector):
            return Vector(self.x + other.x, self.y + other.y)
        else:
            raise TypeError("Can only add a Vector object")

    # Overload the - operator
    def __sub__(self, other):
        if isinstance(other, Vector):
            return Vector(self.x - other.x, self.y - other.y)
        else:
            raise TypeError("Can only subtract a Vector object")

    # Overload the == operator
    def __eq__(self, other):
        if isinstance(other, Vector):
            return self.x == other.x and self.y == other.y
        return False

    # String representation for printing
    def __str__(self):
        return f"Vector({self.x}, {self.y})"

# Create Vector objects
v1 = Vector(2, 3)
v2 = Vector(1, 5)
v3 = Vector(2, 3)

# Use overloaded operators
v_sum = v1 + v2
print(f"v1 + v2 = {v_sum}")

v_diff = v1 - v2
print(f"v1 - v2 = {v_diff}")

print(f"v1 == v2: {v1 == v2}")
print(f"v1 == v3: {v1 == v3}")

# This would raise a TypeError
try:
    v1 + 5
except TypeError as e:
    print(f"Error: {e}")

Question 45: Abstract Base Classes (ABCs) and Interfaces

Revisit Abstract Base Classes (ABCs) from the perspective of defining interfaces. How do
ABCs help in creating a contract for subclasses, and why is this important for larger
projects? Provide an example of an ABC defining a PaymentProcessor interface.

Explanation:

As discussed earlier, ABCs (using the abc module) are crucial for defining interfaces in
Python. An interface specifies a set of methods that a class must implement without
providing the implementation details. By inheriting from an ABC with abstract methods,
a subclass is contractually obligated to implement those methods. If it fails to do so,
Python will raise a TypeError when you try to instantiate the subclass.
Importance for Larger Projects:

• Enforces Consistency: Ensures that all classes implementing a particular interface adhere to a common structure and behavior.
• Improved Maintainability: Makes it easier to understand and extend code, as you know what methods to expect from any class implementing a given interface.
• Facilitates Polymorphism: Allows you to write code that operates on objects of different types, as long as they implement the required interface.
• Early Error Detection: Catches missing method implementations at instantiation time rather than only when the missing method is eventually called.

Code Example:

from abc import ABC, abstractmethod

class PaymentProcessor(ABC):
    @abstractmethod
    def process_payment(self, amount: float, currency: str) -> bool:
        """Processes a payment of a given amount and currency."""
        pass

    @abstractmethod
    def refund_payment(self, transaction_id: str) -> bool:
        """Refunds a payment given a transaction ID."""
        pass

    def get_supported_currencies(self) -> list[str]:
        """Returns a list of currencies supported by the processor (optional concrete method)."""
        return ["USD", "EUR"]

class CreditCardProcessor(PaymentProcessor):
    def process_payment(self, amount: float, currency: str) -> bool:
        print(f"Processing credit card payment of {amount} {currency}")
        # Simulate payment processing logic
        return True

    def refund_payment(self, transaction_id: str) -> bool:
        print(f"Refunding credit card transaction: {transaction_id}")
        # Simulate refund logic
        return True

class PayPalProcessor(PaymentProcessor):
    def process_payment(self, amount: float, currency: str) -> bool:
        print(f"Processing PayPal payment of {amount} {currency}")
        # Simulate payment processing logic
        return True

    # This class intentionally omits refund_payment to show the error
    # def refund_payment(self, transaction_id: str) -> bool:
    #     print(f"Refunding PayPal transaction: {transaction_id}")
    #     return True

# Attempt to instantiate an incomplete PayPalProcessor
try:
    paypal_processor = PayPalProcessor()  # Raises TypeError: refund_payment is missing
except TypeError as e:
    print(f"Error instantiating PayPalProcessor: {e}")

credit_card_proc = CreditCardProcessor()
credit_card_proc.process_payment(100.50, "USD")
credit_card_proc.refund_payment("CC_TRANS_123")
print(f"Supported currencies: {credit_card_proc.get_supported_currencies()}")

# If PayPalProcessor implemented all abstract methods:
# paypal_processor = PayPalProcessor()
# paypal_processor.process_payment(50.00, "EUR")

Tough

Question 46: Descriptors

What are descriptors in Python? Explain their purpose and how they work using the
__get__ , __set__ , and __delete__ methods. Provide an example of a descriptor
that enforces type checking for an attribute.

Explanation:

Descriptors are objects that implement at least one of the descriptor protocol methods:
__get__ , __set__ , or __delete__ . They allow you to customize how attributes are
accessed, assigned, or deleted on an object. Descriptors are a powerful mechanism
behind many Python features, including properties, methods, and classmethod /
staticmethod .

• __get__(self, instance, owner) : Called when the attribute is accessed.


instance is the instance of the object the attribute was accessed through, and
owner is the class of that instance.
• __set__(self, instance, value) : Called when the attribute is assigned a
new value.
• __delete__(self, instance) : Called when the attribute is deleted.

Purpose:

• Reusability: Define attribute behavior once and reuse it across multiple classes.
• Validation: Enforce constraints (e.g., type checking, value ranges) on attribute
assignments.
• Computed Attributes: Create attributes whose values are computed dynamically.

Code Example (Type-checking Descriptor):

class TypeChecked:
    def __init__(self, expected_type):
        self.expected_type = expected_type
        self._name = None  # Stores the attribute name assigned by the class

    def __set_name__(self, owner, name):
        # Called automatically by Python when the descriptor is
        # assigned to a class attribute
        self._name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self  # Return the descriptor itself if accessed via the class
        return instance.__dict__.get(self._name, None)

    def __set__(self, instance, value):
        if not isinstance(value, self.expected_type):
            raise TypeError(
                f"Expected {self._name} to be {self.expected_type.__name__}, "
                f"got {type(value).__name__}"
            )
        instance.__dict__[self._name] = value

    def __delete__(self, instance):
        del instance.__dict__[self._name]

class Person:
    name = TypeChecked(str)  # Using the descriptor
    age = TypeChecked(int)

    def __init__(self, name, age):
        self.name = name  # Triggers the descriptor's __set__
        self.age = age    # Triggers the descriptor's __set__

p = Person("Alice", 30)
print(f"Name: {p.name}, Age: {p.age}")

try:
    p.age = "thirty"  # Raises TypeError
except TypeError as e:
    print(f"Error: {e}")

try:
    p.name = 123  # Raises TypeError
except TypeError as e:
    print(f"Error: {e}")

# Accessing via the class returns the descriptor itself
print(f"Person.name: {Person.name}")

Question 47: Metaclasses vs. Decorators for Class Customization

Compare and contrast metaclasses and class decorators as mechanisms for customizing
class creation and behavior in Python. When would you choose one over the other?
Provide a scenario where a metaclass is more appropriate than a class decorator.

Explanation:

Both metaclasses and class decorators allow you to modify or enhance classes, but they
operate at different levels and have different use cases.

• Mechanism: A metaclass is the "factory" that creates the class object; it
controls class creation. A class decorator is a function that takes a class
object and returns a modified (or new) class object.
• When applied: A metaclass runs during class definition, before the class
object is fully formed. A class decorator runs after the class has been fully
defined.
• Power/control: A metaclass is more powerful; it can intercept and modify the
class dictionary ( dct ) before the class is even created, and can enforce
rules across an entire hierarchy. A class decorator is less powerful; it
operates on the already-created class object and cannot directly influence how
the class is initially constructed.
• Complexity: Metaclasses are more complex to write and understand. Class
decorators are generally simpler to write and apply.
• Use cases: Metaclasses suit frameworks, ORMs, enforcing API contracts,
automatic registration of classes, and complex class transformations. Class
decorators suit adding logging, caching, or simple modifications to a single
class or a few classes.

When to choose a Metaclass:

• When you need to modify the class before it is fully created (e.g., adding attributes
based on other attributes, enforcing a specific class structure).
• When you need to control the creation of all classes that inherit from a certain base
class or that use a specific metaclass.
• When you need to automatically register classes (e.g., a plugin system).

Scenario where Metaclass is more appropriate:

Imagine building a framework where all models must have a _table_name attribute
derived from their class name, and you want to automatically register these models in a
central registry. A metaclass can intercept the class creation, set _table_name , and
register the class, ensuring consistency across all models.

Code Example (Conceptual Metaclass for Model Registration):

class ModelRegistry(type):
    _models = {}

    def __new__(mcs, name, bases, dct):
        cls = super().__new__(mcs, name, bases, dct)
        if name != 'BaseModel':  # Don't register the base model itself
            mcs._models[name.lower()] = cls
            print(f"Registered model: {name.lower()}")
        return cls

class BaseModel(metaclass=ModelRegistry):
    pass

class User(BaseModel):
    def __init__(self, username):
        self.username = username

class Product(BaseModel):
    def __init__(self, product_name):
        self.product_name = product_name

print(f"\nAll registered models: {ModelRegistry._models}")

# A class decorator would be applied *after* User and Product are created,
# making it harder to intercept the initial creation and registration
# process in a centralized, enforced manner.

Question 48: Mixins

What are mixins in Python, and how do they differ from regular inheritance? When would
you use mixins, and what are their advantages? Provide an example of a mixin class that
adds logging capabilities to other classes.

Explanation:

Mixins are a design pattern in Python (and other languages) that allows you to inject
specific functionalities into a class without using traditional multiple inheritance for a
strict "is-a" relationship. A mixin class is typically not meant to be instantiated on its
own; instead, its methods are "mixed in" with other classes to provide additional
behavior.

Differences from Regular Inheritance:

• Purpose: Regular inheritance ( is-a ) is for establishing a hierarchical relationship


where a subclass is a type of its superclass. Mixins ( has-a or can-do ) are for
adding specific, orthogonal functionalities.
• Instantiation: Base classes in regular inheritance are often meant to be
instantiated. Mixins are generally not.
• Method Resolution Order (MRO): When using mixins with multiple inheritance,
their position in the MRO is crucial to ensure their methods are called correctly.

Advantages of Mixins:

• Code Reusability: Share common functionalities across unrelated classes.


• Modularity: Keep concerns separated; each mixin focuses on a single feature.
• Flexibility: Easily combine different behaviors by mixing in various mixins.
• Avoids Deep Hierarchies: Prevents the creation of overly complex inheritance
trees.
Code Example (Logging Mixin):

import datetime

class LoggingMixin:
    def log_message(self, message):
        timestamp = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        print(f"[{timestamp}] [{self.__class__.__name__}] {message}")

class MyApplication(LoggingMixin):
    def __init__(self, name):
        self.name = name
        self.log_message(f"Application '{self.name}' initialized.")

    def run(self):
        self.log_message(f"Application '{self.name}' is running.")
        # ... application logic ...
        self.log_message(f"Application '{self.name}' finished.")

class DataProcessor(LoggingMixin):
    def __init__(self, data_source):
        self.data_source = data_source
        self.log_message(f"DataProcessor initialized with source: {self.data_source}")

    def process(self):
        self.log_message("Starting data processing.")
        # ... data processing logic ...
        self.log_message("Data processing completed.")

app = MyApplication("Web Server")
app.run()

print("\n")

dp = DataProcessor("Database")
dp.process()
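The explanation above notes that a mixin's position in the MRO is crucial. A minimal sketch (class names are illustrative) showing that listing the mixin first makes its method run before the base class's, and that super() lets the mixin extend rather than replace that method:

```python
class Base:
    def save(self):
        return ["Base.save"]

class AuditMixin:
    def save(self):
        # Extend, rather than replace, the next save() in the MRO
        return ["AuditMixin.save"] + super().save()

# Mixin listed first, so its save() runs before Base's
class Record(AuditMixin, Base):
    pass

print([c.__name__ for c in Record.__mro__])
# ['Record', 'AuditMixin', 'Base', 'object']
print(Record().save())
# ['AuditMixin.save', 'Base.save']
```

Had Record been declared as `class Record(Base, AuditMixin)`, Base.save would be found first and the mixin's behavior would never run.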

Question 49: __slots__

What is __slots__ in Python classes? Explain its purpose, advantages, and


disadvantages. Provide an example demonstrating its use and the memory savings it can
offer.
Explanation:

__slots__ is a special attribute that you can define in a Python class to explicitly
declare a fixed set of attributes for instances of that class. When __slots__ is used,
Python does not create an instance dictionary ( __dict__ ) for each object, which can
lead to significant memory savings, especially for classes with many instances.

Purpose:

• Memory Optimization: Reduces the memory footprint of objects by preventing


the creation of __dict__ for each instance.
• Faster Attribute Access: Can sometimes lead to faster attribute access because
Python doesn't need to look up attributes in a dictionary.

Advantages:

• Reduced Memory Consumption: The primary benefit, crucial for applications with
a large number of small objects.
• Potentially Faster Attribute Access: Direct access to attributes can be quicker
than dictionary lookups.

Disadvantages:

• Cannot Add New Attributes Dynamically: You cannot add new attributes to an
instance after it's created if __slots__ is defined (unless you explicitly include
__dict__ in __slots__ ).
• No __dict__ for Instances: Instances will not have a __dict__ attribute, which
means you can't use vars() on them or dynamically add attributes.
• No Multiple Inheritance with __slots__ Conflicts: If a class inherits from
multiple classes that all define __slots__ , and those __slots__ define the
same attribute names, it can lead to TypeError .
• Requires Careful Planning: You need to know all the attributes your class will
have upfront.

Code Example:

import sys

class PointWithoutSlots:
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointWithSlots:
    __slots__ = ("x", "y")  # Define allowed attributes

    def __init__(self, x, y):
        self.x = x
        self.y = y

# Create instances
p1 = PointWithoutSlots(1, 2)
p2 = PointWithSlots(1, 2)

# Check memory usage (approximate)
print(f"Size of PointWithoutSlots instance: {sys.getsizeof(p1)} bytes")
print(f"Size of PointWithSlots instance: {sys.getsizeof(p2)} bytes")

# Demonstrate __dict__ presence
print(f"PointWithoutSlots has __dict__: {hasattr(p1, '__dict__')}")
print(f"PointWithSlots has __dict__: {hasattr(p2, '__dict__')}")

# Attempting to add a new attribute to PointWithSlots fails
try:
    p2.z = 3
except AttributeError as e:
    print(f"Error adding new attribute to PointWithSlots: {e}")

# If you need __dict__ while using __slots__ (rare, but possible):
class PointWithSlotsAndDict:
    __slots__ = ("x", "y", "__dict__")

    def __init__(self, x, y):
        self.x = x
        self.y = y

p3 = PointWithSlotsAndDict(3, 4)
p3.z = 5  # Now this works
print(f"PointWithSlotsAndDict has __dict__: {hasattr(p3, '__dict__')}")
print(f"Size of PointWithSlotsAndDict instance: {sys.getsizeof(p3)} bytes")

Question 50: super() Function

Explain the super() function in Python. How does it work with Method Resolution
Order (MRO) to enable cooperative multiple inheritance? Provide an example
demonstrating super() in a multiple inheritance scenario.

Explanation:
The super() function in Python provides a way to call a method from a parent or
sibling class in the Method Resolution Order (MRO). It is primarily used to ensure that
inherited methods are called correctly, especially in complex inheritance hierarchies
involving multiple inheritance.

How it works with MRO:

super() does not refer to the parent class directly. Instead, it returns a proxy object
that delegates method calls to the next class in the MRO. This cooperative behavior is
crucial for multiple inheritance, as it ensures that each class in the inheritance chain gets
a chance to execute its version of a method, without explicitly naming the parent
classes.

Advantages:

• Cooperative Inheritance: Enables methods in a complex hierarchy to work


together without hardcoding parent class names.
• Maintainability: Makes code more robust to changes in the inheritance hierarchy.
• Readability: Simplifies method calls in inheritance.

Code Example (Cooperative Multiple Inheritance with super() ):

class A:
    def __init__(self):
        print("Initializing A")
        super().__init__()  # Call next in MRO

class B(A):
    def __init__(self):
        print("Initializing B")
        super().__init__()  # Call next in MRO

class C(A):
    def __init__(self):
        print("Initializing C")
        super().__init__()  # Call next in MRO

class D(B, C):
    def __init__(self):
        print("Initializing D")
        super().__init__()  # Call next in MRO

# Create an instance of D
d_obj = D()

print("\nMRO for D:")
print(D.__mro__)
# Output order: Initializing D, Initializing B, Initializing C, Initializing A
# This demonstrates how super() traverses the MRO, ensuring each
# __init__ is called exactly once.

# Another example with a common method
class Base:
    def greet(self):
        print("Hello from Base")

class Mixin1(Base):
    def greet(self):
        print("Hello from Mixin1")
        super().greet()

class Mixin2(Base):
    def greet(self):
        print("Hello from Mixin2")
        super().greet()

class MyClass(Mixin1, Mixin2):
    def greet(self):
        print("Hello from MyClass")
        super().greet()

my_obj = MyClass()
my_obj.greet()

print("\nMRO for MyClass:")
print(MyClass.__mro__)

4. Advanced Topics and Interview Scenarios

Easy

Question 51: Virtual Environments

What is a Python virtual environment, and why is it important for development? How do
you create and activate a virtual environment using venv ?

Explanation:

A Python virtual environment is a self-contained directory that contains a Python


interpreter and a set of installed packages. It allows you to manage dependencies for
different projects independently, preventing conflicts between package versions. This is
crucial for maintaining a clean and organized development workflow.
Why it's important:

• Dependency Isolation: Each project can have its own set of libraries and versions
without affecting other projects or the global Python installation.
• Reproducibility: Ensures that your project works consistently across different
machines and environments.
• Cleanliness: Prevents cluttering your global Python environment with project-
specific packages.

Code Example (Shell Commands):

# 1. Create a virtual environment (e.g., named 'myenv')
python3 -m venv myenv

# 2. Activate the virtual environment
# On macOS/Linux:
source myenv/bin/activate

# On Windows (Command Prompt):
# myenv\Scripts\activate.bat

# On Windows (PowerShell):
# myenv\Scripts\Activate.ps1

# 3. Once activated, your terminal prompt will usually show the environment name
# (myenv) user@host:~/my_project$

# 4. Install packages within the virtual environment
pip install requests

# 5. Deactivate the virtual environment when done
deactivate

Question 52: pip and requirements.txt

Explain the role of pip in Python development. How do you use pip to install
packages, and how can requirements.txt be used to manage project dependencies?

Explanation:

pip (Pip Installs Packages) is the standard package-management system used to install
and manage software packages written in Python. It connects to the Python Package
Index (PyPI), a repository of Python software.

requirements.txt is a plain text file that lists all the Python packages and their
versions required for a specific project. It allows for easy installation of all necessary
dependencies, ensuring that everyone working on the project uses the same versions of
libraries.

Code Example (Shell Commands):

# Install a single package
pip install numpy

# Install a specific version of a package
pip install pandas==1.3.5

# Install multiple packages
pip install requests beautifulsoup4

# Generate a requirements.txt file from your current environment
pip freeze > requirements.txt

# Example requirements.txt content:
# numpy==1.21.5
# pandas==1.3.5
# requests==2.27.1

# Install all packages listed in requirements.txt
pip install -r requirements.txt

# Uninstall a package
pip uninstall numpy

Moderate

Question 53: Concurrency vs. Parallelism

Explain the difference between concurrency and parallelism in Python. Discuss how
Python handles each, mentioning the Global Interpreter Lock (GIL) and its implications.
When would you choose one over the other?

Explanation:

• Concurrency: Deals with handling multiple tasks at the same time (or appearing to
do so). It's about structuring a program so that it can make progress on multiple
tasks simultaneously, even if only one task is truly executing at any given moment.
Think of a chef juggling multiple cooking tasks in one kitchen.
• Parallelism: Deals with executing multiple tasks simultaneously. It requires
multiple processing units (CPU cores) to truly run different parts of a program at
the exact same time. Think of multiple chefs working in separate kitchens.

Python and the GIL:


Python has a Global Interpreter Lock (GIL), which is a mutex that protects access to
Python objects, preventing multiple native threads from executing Python bytecodes at
once. This means that even on multi-core processors, only one thread can execute
Python bytecode at any given time.

• Implications for Parallelism: The GIL effectively prevents true CPU-bound


parallelism in multi-threaded Python programs. If your task is CPU-bound (e.g.,
heavy computations), using multiple threads won't speed it up; in fact, context
switching overhead might even slow it down.
• Implications for Concurrency: The GIL does not prevent concurrency. When a
Python thread performs an I/O operation (like reading from a network or disk), it
releases the GIL, allowing other threads to run. This makes Python threads suitable
for I/O-bound concurrent tasks.

When to choose:

• Concurrency (with threads): Ideal for I/O-bound tasks (web scraping, network
requests, file operations) where threads spend most of their time waiting for
external resources. The GIL is released during these waits.
• Parallelism (with multiprocessing): Essential for CPU-bound tasks (heavy
computations, data processing) where you need to utilize multiple CPU cores. The
multiprocessing module creates separate processes, each with its own Python
interpreter and memory space, thus bypassing the GIL.

Code Example (Conceptual):

import threading
import multiprocessing
import time

def cpu_bound_task(n):
    # Simulate a CPU-bound task
    sum_val = 0
    for i in range(n):
        sum_val += i * i
    return sum_val

def io_bound_task(delay):
    # Simulate an I/O-bound task
    time.sleep(delay)
    return f"Slept for {delay} seconds"

if __name__ == "__main__":  # Guard required for multiprocessing on spawn-based platforms
    # --- Concurrency with Threads (for I/O-bound) ---
    print("\n--- Threading for I/O-bound tasks ---")
    start_time = time.time()
    threads = []
    for _ in range(3):
        t = threading.Thread(target=io_bound_task, args=(1,))
        threads.append(t)
        t.start()

    for t in threads:
        t.join()
    end_time = time.time()
    print(f"Threading took: {end_time - start_time:.2f} seconds")

    # --- Parallelism with Multiprocessing (for CPU-bound) ---
    print("\n--- Multiprocessing for CPU-bound tasks ---")
    start_time = time.time()
    processes = []
    # Using a smaller N for a quick demo; increase for a real CPU-bound test
    N = 10**6
    for _ in range(3):
        p = multiprocessing.Process(target=cpu_bound_task, args=(N,))
        processes.append(p)
        p.start()

    for p in processes:
        p.join()
    end_time = time.time()
    print(f"Multiprocessing took: {end_time - start_time:.2f} seconds")

    # Note: for actual performance measurement, run these examples multiple
    # times and with larger N values for the CPU-bound tasks.
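The same I/O-bound pattern is often written with concurrent.futures, which gives threads and processes a uniform pool API. A minimal sketch (the task function is redefined here so the snippet is self-contained); because the GIL is released while each thread sleeps, the three waits overlap and the wall time stays near one wait, not three:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def io_bound_task(delay):
    time.sleep(delay)  # GIL is released while waiting, so threads overlap
    return f"Slept {delay}s"

start = time.time()
with ThreadPoolExecutor(max_workers=3) as pool:
    # map() submits all three tasks and yields results in order
    results = list(pool.map(io_bound_task, [0.2, 0.2, 0.2]))
elapsed = time.time() - start

print(results)
print(f"Elapsed: {elapsed:.2f}s")  # close to 0.2s, not 0.6s

# For CPU-bound work, swap in ProcessPoolExecutor (inside an
# `if __name__ == "__main__":` guard) to bypass the GIL.
```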

Question 54: Iterators and Iterables

Deep dive into the concepts of iterators and iterables in Python. How do they work
together to enable iteration? Explain the __iter__ and __next__ methods and
provide an example of creating a custom iterable class.

Explanation:

• Iterable: An object that can be iterated over (e.g., a list, tuple, string, dictionary, or
a custom object that implements the __iter__ method). When you use a for
loop, list() , tuple() , etc., on an object, you are using its iterable nature.
• Iterator: An object that represents a stream of data. It implements two methods:
◦ __iter__(self) : Returns the iterator object itself. This allows iterators to
be used in for loops.
◦ __next__(self) : Returns the next item from the iteration. If there are no
more items, it must raise the StopIteration exception.

How they work together:

When you use a for loop on an iterable, Python internally calls iter() on the iterable
to get an iterator. Then, for each iteration, it calls next() on the iterator to get the next
item until StopIteration is raised.

Code Example (Custom Iterable):

class MyRange:
    def __init__(self, start, end):
        self.start = start
        self.end = end

    def __iter__(self):
        # __iter__ returns an iterator object. In this case, self is the iterator.
        return self

    def __next__(self):
        if self.start < self.end:
            current = self.start
            self.start += 1
            return current
        else:
            raise StopIteration

# Using the custom iterable
my_numbers = MyRange(1, 5)
for num in my_numbers:
    print(num)

print("\nConverting to list:")
my_numbers_list = list(MyRange(10, 15))
print(my_numbers_list)

# Manual iteration
print("\nManual iteration:")
manual_iter = iter(MyRange(20, 23))
print(next(manual_iter))
print(next(manual_iter))
print(next(manual_iter))
try:
    print(next(manual_iter))
except StopIteration:
    print("StopIteration caught.")
Question 55: Decorators with Arguments

Extend your understanding of decorators by explaining how to create a decorator that


accepts arguments. Provide an example of a decorator that takes a
permission_level argument and checks if a user has the required permission before
executing a function.

Explanation:

Creating a decorator that accepts arguments involves an extra layer of nesting. The outer
function is the decorator factory, which takes the arguments for the decorator and
returns the actual decorator function. This actual decorator function then takes the
function to be decorated as its argument.

Structure:

def decorator_factory(arg1, arg2, ...):
    def decorator(func):
        def wrapper(*args, **kwargs):
            # Logic using arg1, arg2, ... and func
            return func(*args, **kwargs)
        return wrapper
    return decorator

Code Example:

def requires_permission(permission_level):
    def decorator(func):
        def wrapper(user, *args, **kwargs):
            if user.permission >= permission_level:
                print(f"User {user.name} has sufficient permission "
                      f"({user.permission}) for {func.__name__}.")
                return func(user, *args, **kwargs)
            else:
                print(f"Permission Denied: User {user.name} (level "
                      f"{user.permission}) needs level {permission_level} "
                      f"for {func.__name__}.")
                return None
        return wrapper
    return decorator

class User:
    def __init__(self, name, permission):
        self.name = name
        self.permission = permission

@requires_permission(permission_level=5)
def delete_sensitive_data(user, data_id):
    print(f"Deleting sensitive data {data_id} for user {user.name}.")
    return True

@requires_permission(permission_level=3)
def view_report(user, report_name):
    print(f"Viewing report {report_name} for user {user.name}.")
    return f"Report data for {report_name}"

admin_user = User("Admin", 10)
regular_user = User("Regular", 4)
guest_user = User("Guest", 1)

delete_sensitive_data(admin_user, "DB_RECORD_123")
delete_sensitive_data(regular_user, "DB_RECORD_456")

view_report(admin_user, "Sales_Q1")
view_report(regular_user, "Sales_Q1")
view_report(guest_user, "Sales_Q1")
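One refinement worth knowing: a plain wrapper hides the decorated function's metadata (its __name__ becomes "wrapper", its docstring disappears). functools.wraps fixes this. A minimal sketch of the same decorator with the metadata preserved (print statements dropped for brevity):

```python
import functools

def requires_permission(permission_level):
    def decorator(func):
        @functools.wraps(func)  # copies __name__, __doc__, etc. onto wrapper
        def wrapper(user, *args, **kwargs):
            if user.permission >= permission_level:
                return func(user, *args, **kwargs)
            return None
        return wrapper
    return decorator

class User:
    def __init__(self, name, permission):
        self.name, self.permission = name, permission

@requires_permission(permission_level=3)
def view_report(user, report_name):
    """Return report data."""
    return f"Report data for {report_name}"

print(view_report.__name__)  # 'view_report', not 'wrapper'
print(view_report.__doc__)   # 'Return report data.'
```

Without @functools.wraps, debugging tools, help(), and stacked decorators all see the anonymous wrapper instead of the real function.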

Question 56: Metaclasses for API Enforcement

Building on the concept of metaclasses, demonstrate how to use a metaclass to enforce


that all subclasses of a particular base class implement a specific set of methods. This is
a common pattern in framework design.

Explanation:

Metaclasses are ideal for enforcing API contracts at the class creation level. By defining a
metaclass that checks for the presence of required methods in any class that uses it, you
can ensure that developers adhere to your framework's design principles. If a subclass
fails to implement a required method, the metaclass can raise an error during class
definition, preventing runtime issues.

Code Example:

class PluginMeta(type):
    def __new__(mcs, name, bases, dct):
        # Skip the check for the abstract base class itself
        if name != 'BasePlugin':
            required_methods = ['load', 'execute', 'unload']
            for method in required_methods:
                if method not in dct:
                    raise TypeError(
                        f"Class {name} must implement abstract method '{method}'"
                    )
        return super().__new__(mcs, name, bases, dct)

class BasePlugin(metaclass=PluginMeta):
    # This class serves as an interface/base for plugins
    pass

class MyConcretePlugin(BasePlugin):
    def load(self):
        print("MyConcretePlugin loaded.")

    def execute(self):
        print("MyConcretePlugin executing.")

    def unload(self):
        print("MyConcretePlugin unloaded.")

# This class is created successfully
plugin = MyConcretePlugin()
plugin.load()
plugin.execute()
plugin.unload()

# This raises a TypeError during class *definition*, because the
# execute and unload methods are missing
try:
    class IncompletePlugin(BasePlugin):
        def load(self):
            print("IncompletePlugin loaded.")
except TypeError as e:
    print(f"\nError creating IncompletePlugin: {e}")

Question 57: functools.partial

Explain the functools.partial function in Python. When is it useful, and how does it
help in creating new functions with some arguments pre-filled? Provide an example.

Explanation:

functools.partial is a higher-order function that allows you to create a new


function with some of the arguments of an existing function already filled in. This is
known as "partial function application" or "currying" (though not strictly currying in the
functional programming sense).
When it is useful:

• Reducing Repetition: When you frequently call a function with the same initial
arguments.
• Adapting Functions: To create a new function that matches a required signature
(e.g., for callbacks or event handlers).
• Simplifying APIs: To provide a simpler interface to a more complex function.

Code Example:

from functools import partial

def multiply(x, y):
    return x * y

# Create a new function 'double' that always multiplies by 2
double = partial(multiply, 2)
print(f"Double 5: {double(5)}")
print(f"Double 10: {double(10)}")

# Create a new function 'power_of_3' that always raises to the power of 3
def power(base, exponent):
    return base ** exponent

power_of_3 = partial(power, exponent=3)
print(f"2 to the power of 3: {power_of_3(2)}")
print(f"5 to the power of 3: {power_of_3(5)}")

# Example with a more complex function
def send_email(to_address, subject, body, sender="noreply@example.com", cc=None):
    print(f"Sending email from {sender} to {to_address} (CC: {cc or 'None'})")
    print(f"Subject: {subject}")
    print(f"Body: {body}")

# Create a partial function for sending marketing emails
send_marketing_email = partial(send_email, sender="marketing@example.com",
                               subject="Special Offer!")

send_marketing_email("customer1@example.com", body="Buy now!")
send_marketing_email("customer2@example.com", body="Limited time deal!",
                     cc="manager@example.com")

Tough

Question 58: Concurrency with asyncio and Event Loop


Delve deeper into Python's asyncio library. Explain the concept of an event loop and
how asyncio uses it to manage concurrent operations. Provide a more complex
example involving multiple asynchronous tasks and demonstrate how to run them
concurrently.

Explanation:

asyncio is Python's library for writing concurrent code using the async / await
syntax. At its core is the event loop, which is a central coordinator that manages and
executes asynchronous tasks. The event loop continuously monitors for events (e.g., I/O
completion, timers) and dispatches them to the appropriate coroutines.

How asyncio works:

1. Coroutines ( async def ): Functions defined with async def are coroutines.
They are not executed immediately when called; instead, they return a coroutine
object.
2. await : When an await expression is encountered within a coroutine, the
coroutine pauses its execution and yields control back to the event loop. This
allows the event loop to run other tasks while the awaited operation (e.g., network
request, asyncio.sleep ) is pending.
3. Event Loop: The event loop takes care of scheduling and running coroutines.
When an awaited operation completes, the event loop resumes the paused
coroutine.
4. Tasks: Coroutines are typically wrapped in asyncio.Task objects (e.g., using
asyncio.create_task() ) to schedule them for execution on the event loop.

Code Example:

import asyncio
import time

async def fetch_data(url):
    print(f"Starting to fetch data from {url}...")
    await asyncio.sleep(2)  # Simulate network delay
    print(f"Finished fetching data from {url}.")
    return f"Data from {url}"

async def process_item(item_id, delay):
    print(f"Processing item {item_id}...")
    await asyncio.sleep(delay)  # Simulate processing time
    print(f"Finished processing item {item_id}.")
    return f"Processed item {item_id}"

async def main():
    start_time = time.time()
    print("Main program started.")

    # Create tasks for concurrent execution
    task1 = asyncio.create_task(fetch_data("http://api.example.com/data1"))
    task2 = asyncio.create_task(process_item(1, 3))
    task3 = asyncio.create_task(fetch_data("http://api.example.com/data2"))

    # Await the tasks; they run concurrently on the event loop
    results = await asyncio.gather(task1, task2, task3)

    print("Main program finished.")
    end_time = time.time()
    print(f"Total execution time: {end_time - start_time:.2f} seconds")
    print(f"Results: {results}")

if __name__ == "__main__":
    asyncio.run(main())

Question 59: Custom Iterators and Generators for Large Datasets

Consider a scenario where you need to process a very large dataset (e.g., a multi-
gigabyte log file) that cannot fit into memory. Design a custom iterator or generator that
can read and process this file line by line, yielding relevant data without loading the
entire file. Discuss the memory efficiency benefits.

Explanation:

When dealing with large datasets, loading the entire dataset into memory can lead to
MemoryError . Iterators and generators provide a memory-efficient solution by
processing data in chunks or line by line, yielding one item at a time. This means only a
small portion of the data is in memory at any given moment.

Memory Efficiency Benefits:

• Reduced RAM Usage: Prevents MemoryError by not loading the entire dataset.
• Lazy Evaluation: Data is processed only when requested, saving resources.
• Scalability: Can handle arbitrarily large files or data streams.

Code Example (Log File Processor Generator):

def log_file_processor(filepath, keyword=None):

"""A generator that reads a log file line by line and yields
lines matching a keyword."""
print(f"Opening file: {filepath}")
with open(filepath, "r") as f:
for line_num, line in enumerate(f, 1):
if keyword is None or keyword.lower() in
line.lower():
yield f"Line {line_num}: {line.strip()}"
print(f"Finished processing file: {filepath}")

# Create a dummy large log file for demonstration


with open("large_log.txt", "w") as f:
for i in range(1, 10000):
f.write(f"Log entry {i}: This is some information.\n")
f.write("Log entry 10000: ERROR occurred in module X.\n")
for i in range(10001, 20000):
f.write(f"Log entry {i}: Another information line.\n")

print("\n--- Processing log file without keyword ---")

# Process the log file line by line without loading it all
count = 0
for entry in log_file_processor("large_log.txt"):
    # print(entry)  # Uncomment to see each line
    count += 1
    if count >= 5:  # Limit output for demo
        break
print(f"Processed the first {count} lines; the rest of the file was never read into memory.")

print("\n--- Processing log file for ERROR keyword ---")

error_count = 0
for error_line in log_file_processor("large_log.txt", keyword="ERROR"):
    print(error_line)
    error_count += 1
print(f"Found {error_count} error lines.")

# Clean up the dummy file
import os
os.remove("large_log.txt")
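Generators also compose into pipelines, where each stage lazily consumes the one before it, so the whole chain still holds only one item in memory at a time. The sketch below illustrates the idea with hypothetical stage names; in practice the first stage would read from a file object rather than a list.

```python
def read_lines(lines):
    """Stage 1: yield raw lines (here from a list; could be an open file)."""
    for line in lines:
        yield line.rstrip("\n")

def parse_entries(lines):
    """Stage 2: split 'LEVEL: message' lines into (level, message) tuples."""
    for line in lines:
        level, _, message = line.partition(": ")
        yield level, message

def only_errors(entries):
    """Stage 3: keep ERROR entries only."""
    for level, message in entries:
        if level == "ERROR":
            yield message

raw = ["INFO: started", "ERROR: disk full", "INFO: retrying", "ERROR: timeout"]
# Each item flows through all three stages one at a time
errors = list(only_errors(parse_entries(read_lines(raw))))
print(errors)  # ['disk full', 'timeout']
```

Because every stage is a generator, replacing `raw` with a multi-gigabyte file handle changes nothing about the memory profile.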

Question 60: Advanced Decorator Patterns (Class-based Decorators)

Beyond function-based decorators, explain how to create a class-based decorator. When
would a class-based decorator be preferred over a function-based one? Provide an
example of a class-based decorator that measures the execution time of a function.

Explanation:

A class-based decorator is a class whose instances act as decorators. It typically
implements the __init__ method to receive the function to be decorated and the
__call__ method to make instances of the class callable (like a function). The
__call__ method contains the logic that wraps the original function.

When to prefer class-based decorators:

• Maintaining State: If your decorator needs to maintain state across multiple calls
to the decorated function or across multiple decorated functions (e.g., a counter, a
cache).
• More Complex Logic: When the decorator logic is more involved and benefits from
the structure and organization of a class (e.g., multiple helper methods).
• Inheritance: If you want to create a hierarchy of decorators.

Code Example (Class-based Timer Decorator):

import time
import functools

class Timer:
    def __init__(self, func):
        self.func = func
        self.total_time = 0
        functools.update_wrapper(self, func)  # preserve the function's name and docstring

    def __call__(self, *args, **kwargs):
        start_time = time.perf_counter()  # perf_counter is preferred for timing
        result = self.func(*args, **kwargs)
        elapsed_time = time.perf_counter() - start_time
        self.total_time += elapsed_time
        print(f"Function '{self.func.__name__}' took {elapsed_time:.4f} seconds. "
              f"Total time for this decorator: {self.total_time:.4f} seconds")
        return result

@Timer
def long_running_function(n):
    sum_val = 0
    for i in range(n):
        sum_val += i * i
    return sum_val

@Timer
def another_task(delay):
    time.sleep(delay)
    return "Task completed"

# Call the decorated functions
long_running_function(10**6)
long_running_function(5 * 10**5)

another_task(1)
another_task(0.5)

# Accessing state of the decorator instance (if needed)
print(f"Total time for long_running_function: {long_running_function.total_time:.4f} seconds")
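Class-based decorators can also accept configuration arguments. In that variant, __init__ receives the arguments and __call__ receives the function and returns a wrapper. A minimal sketch follows; the Repeat name is hypothetical.

```python
import functools

class Repeat:
    """Decorator with an argument: call the wrapped function `times` times."""
    def __init__(self, times):
        self.times = times  # __init__ receives the decorator argument

    def __call__(self, func):
        # __call__ receives the function and returns the wrapper
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            result = None
            for _ in range(self.times):
                result = func(*args, **kwargs)
            return result
        return wrapper

@Repeat(times=3)
def greet(name):
    print(f"Hello, {name}!")

greet("Ada")  # prints the greeting three times
```

Note the shift in responsibilities compared with the Timer above: when the decorator takes arguments, the function arrives in __call__, not __init__.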

Question 61: Descriptors for Data Validation and Transformation

Expand on descriptors by creating a descriptor that not only validates the type of an
attribute but also transforms its value (e.g., stripping whitespace from strings, ensuring
numbers are positive). Demonstrate how this can be used to create robust data models.

Explanation:

Descriptors are powerful for centralizing attribute logic, including validation and
transformation. By encapsulating this logic within a descriptor, you can reuse it across
multiple attributes and classes, ensuring consistent behavior and reducing boilerplate
code. This promotes the DRY (Don't Repeat Yourself) principle.

Code Example (Validated and Transformed Descriptor):

class ValidatedString:
    def __init__(self, min_length=0, strip_whitespace=True):
        self.min_length = min_length
        self.strip_whitespace = strip_whitespace
        self._name = None

    def __set_name__(self, owner, name):
        self._name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self._name, None)

    def __set__(self, instance, value):
        if not isinstance(value, str):
            raise TypeError(f"Expected {self._name} to be a string, "
                            f"got {type(value).__name__}")
        if self.strip_whitespace:
            value = value.strip()
        if len(value) < self.min_length:
            raise ValueError(f"Expected {self._name} to have at least "
                             f"{self.min_length} characters, got {len(value)}")
        instance.__dict__[self._name] = value
class PositiveNumber:
    def __init__(self):
        self._name = None

    def __set_name__(self, owner, name):
        self._name = name

    def __get__(self, instance, owner):
        if instance is None:
            return self
        return instance.__dict__.get(self._name, None)

    def __set__(self, instance, value):
        if not isinstance(value, (int, float)):
            raise TypeError(f"Expected {self._name} to be a number, "
                            f"got {type(value).__name__}")
        if value <= 0:
            raise ValueError(f"Expected {self._name} to be a positive number, "
                             f"got {value}")
        instance.__dict__[self._name] = value

class Product:
    name = ValidatedString(min_length=3, strip_whitespace=True)
    price = PositiveNumber()
    # Allow empty descriptions and preserve whitespace
    description = ValidatedString(min_length=0, strip_whitespace=False)

    def __init__(self, name, price, description):
        self.name = name
        self.price = price
        self.description = description

# Valid product
p1 = Product(" Laptop ", 1200.50, "A powerful laptop for everyday use.")
print(f"Product 1: Name='{p1.name}', Price={p1.price}, Description='{p1.description}'")

# Invalid name (too short)
try:
    Product("AB", 100, "short name")
except ValueError as e:
    print(f"Error: {e}")

# Invalid price (negative)
try:
    Product("Tablet", -50, "negative price")
except ValueError as e:
    print(f"Error: {e}")

# Invalid type for name
try:
    Product(123, 100, "invalid type")
except TypeError as e:
    print(f"Error: {e}")
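Because __set_name__ tells each descriptor which attribute it manages, the same descriptor class can be dropped into any model without repeating validation code. The self-contained sketch below makes the same point with a compact descriptor; the Stripped and Employee names are hypothetical.

```python
class Stripped:
    """Minimal descriptor: validates strings and stores them stripped."""
    def __set_name__(self, owner, name):
        self._name = name  # learns its attribute name automatically

    def __get__(self, instance, owner):
        return self if instance is None else instance.__dict__.get(self._name)

    def __set__(self, instance, value):
        if not isinstance(value, str):
            raise TypeError(f"{self._name} must be a string")
        instance.__dict__[self._name] = value.strip()

class Employee:
    # The same descriptor class manages two attributes: no duplicated logic
    first_name = Stripped()
    last_name = Stripped()

    def __init__(self, first_name, last_name):
        self.first_name = first_name
        self.last_name = last_name

e = Employee("  Grace  ", " Hopper ")
print(e.first_name, e.last_name)  # Grace Hopper
```

This is exactly how the Product class above reuses ValidatedString for both name and description: one descriptor class, many attributes.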

Question 62: __slots__ and Inheritance

Discuss the complexities and considerations when using __slots__ in an inheritance
hierarchy. What happens if a child class defines __slots__ but its parent does not, or
vice versa? Provide examples to illustrate these scenarios and potential pitfalls.

Explanation:

Using __slots__ in inheritance can be tricky and requires careful understanding of
how Python handles attribute storage. The behavior depends on whether the parent
class, child class, or both define __slots__ .

Scenarios:

1. Child defines __slots__ , Parent does NOT:

◦ Attributes listed in the child's __slots__ are stored in slots rather than in a
dictionary.
◦ However, because the parent does not use __slots__ , instances still inherit a
__dict__ , so arbitrary extra attributes can still be added.
◦ Pitfall: since the instance keeps a __dict__ anyway, the child's __slots__
yields little or no memory saving; every class in the hierarchy must define
__slots__ to get the full benefit.

2. Parent defines __slots__ , Child does NOT:

◦ Attributes named in the parent's __slots__ continue to be stored in slots.
◦ The child class instances additionally get a __dict__ , which stores any new
attributes defined in the child.

3. Both Parent and Child define __slots__ :

◦ The child's __slots__ should list only the new attribute names; slots are
inherited and combined down the chain, so repeating a parent's slot name
wastes space and shadows the parent's slot.
◦ Pitfall: with multiple inheritance, if more than one base class defines a
non-empty __slots__ layout, creating the subclass raises TypeError
("multiple bases have instance lay-out conflict"), because Python cannot
merge two independent slot layouts.
Code Example:

import sys

# Scenario 1: Child defines __slots__, Parent does NOT
class ParentNoSlots:
    def __init__(self, p_attr):
        self.p_attr = p_attr

class ChildWithSlots(ParentNoSlots):
    __slots__ = ("c_attr",)

    def __init__(self, p_attr, c_attr):
        super().__init__(p_attr)
        self.c_attr = c_attr

c1 = ChildWithSlots("parent_val", "child_val")
print(f"c1.p_attr: {c1.p_attr}")
print(f"c1.c_attr: {c1.c_attr}")
print(f"c1 has __dict__: {hasattr(c1, '__dict__')}")  # True, because the parent lacks __slots__
print(f"Size of ChildWithSlots instance: {sys.getsizeof(c1)} bytes")

# Scenario 2: Parent defines __slots__, Child does NOT
class ParentWithSlots:
    __slots__ = ("p_attr",)

    def __init__(self, p_attr):
        self.p_attr = p_attr

class ChildNoSlots(ParentWithSlots):
    def __init__(self, p_attr, c_attr):
        super().__init__(p_attr)
        self.c_attr = c_attr

c2 = ChildNoSlots("parent_val", "child_val")
print(f"\nc2.p_attr: {c2.p_attr}")
print(f"c2.c_attr: {c2.c_attr}")
print(f"c2 has __dict__: {hasattr(c2, '__dict__')}")  # True, because the child adds new attributes
print(f"Size of ChildNoSlots instance: {sys.getsizeof(c2)} bytes")

# Scenario 3: Both parent and child define __slots__ (and a potential conflict)
class Grandparent:
    __slots__ = ("gp_attr",)

    def __init__(self, gp_attr):
        self.gp_attr = gp_attr

class ParentA(Grandparent):
    __slots__ = ("a_attr",)

    def __init__(self, gp_attr, a_attr):
        super().__init__(gp_attr)
        self.a_attr = a_attr

class ParentB(Grandparent):
    __slots__ = ("b_attr",)

    def __init__(self, gp_attr, b_attr):
        super().__init__(gp_attr)
        self.b_attr = b_attr

# Single inheritance works fine: slots are combined down the chain
class ChildBothSlots(ParentA):
    __slots__ = ("c_attr",)

    def __init__(self, gp_attr, a_attr, c_attr):
        super().__init__(gp_attr, a_attr)
        self.c_attr = c_attr

c3 = ChildBothSlots("gp", "a", "c")
print(f"\nc3.gp_attr: {c3.gp_attr}")
print(f"c3.a_attr: {c3.a_attr}")
print(f"c3.c_attr: {c3.c_attr}")

# This raises a TypeError: ParentA and ParentB each contribute their own
# non-empty slot layout, and Python cannot merge the two in one subclass.
try:
    class DiamondChild(ParentA, ParentB):
        __slots__ = ("d_attr",)

        def __init__(self, gp_attr, a_attr, b_attr, d_attr):
            super().__init__(gp_attr, a_attr)        # Calls ParentA.__init__
            ParentB.__init__(self, gp_attr, b_attr)  # Explicitly call ParentB.__init__
            self.d_attr = d_attr
except TypeError as e:
    print(f"\nError with DiamondChild due to __slots__ conflict: {e}")

# To avoid this, leave __slots__ empty in common ancestors, define
# __slots__ only in the most derived class, or include __dict__.
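The payoff of a fully slotted hierarchy is that instances carry no per-instance __dict__ at all, which can be checked directly. A minimal comparison sketch (the class names below are hypothetical):

```python
import sys

class PointDict:
    """Ordinary class: every instance carries a __dict__."""
    def __init__(self, x, y):
        self.x = x
        self.y = y

class PointSlots:
    """Slotted class: attributes live in fixed slots, no __dict__."""
    __slots__ = ("x", "y")

    def __init__(self, x, y):
        self.x = x
        self.y = y

pd, ps = PointDict(1, 2), PointSlots(1, 2)
print(hasattr(pd, "__dict__"))  # True
print(hasattr(ps, "__dict__"))  # False
# The slotted instance needs no attribute dictionary at all
print(sys.getsizeof(pd) + sys.getsizeof(pd.__dict__), "vs", sys.getsizeof(ps))
```

For programs that create millions of small instances, the per-instance savings add up quickly, which is the main practical motivation for __slots__.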

Question 63: __call__ Method and Callable Objects

Explain the __call__ method in Python. How does it make an object callable like a
function? Provide an example of a class whose instances can be called directly, and
discuss scenarios where this pattern is useful.

Explanation:

The __call__ method is a special method in Python that allows an instance of a class
to be called as if it were a function. If a class implements __call__ , then its objects can
be invoked using the () operator.

How it works:

When you call an object like obj() , Python internally translates this into
obj.__call__() . This makes instances of the class behave like functions.

Scenarios where this pattern is useful:

• Stateful Callables: When you need a function-like object that maintains internal
state across calls (e.g., a counter, a memoization cache).
• Decorators (Class-based): As seen in Question 60, class-based decorators use
__call__ to wrap the decorated function.
• Function Factories: When you want to create functions dynamically with specific
configurations.
• Simulating Closures: When you need a more structured way to achieve what
closures do, especially if the logic is complex.

Code Example:

class Multiplier:
    def __init__(self, factor):
        self.factor = factor

    def __call__(self, number):
        return number * self.factor

# Create instances of Multiplier
double = Multiplier(2)
triple = Multiplier(3)

# Call the instances like functions
print(f"Double 5: {double(5)}")
print(f"Triple 10: {triple(10)}")

# Example: a simple counter
class Counter:
    def __init__(self):
        self.count = 0

    def __call__(self):
        self.count += 1
        return self.count

my_counter = Counter()
print(f"Count: {my_counter()}")  # 1
print(f"Count: {my_counter()}")  # 2
print(f"Count: {my_counter()}")  # 3
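The "stateful callable" scenario above extends naturally to a memoization cache: the instance keeps its cache dictionary between calls, so repeated calls with the same arguments skip the computation. A sketch using a hypothetical Memoize class:

```python
class Memoize:
    """Callable object that caches results of an expensive function."""
    def __init__(self, func):
        self.func = func
        self.cache = {}

    def __call__(self, *args):
        # Only compute on a cache miss; hashable args are required
        if args not in self.cache:
            self.cache[args] = self.func(*args)
        return self.cache[args]

@Memoize
def slow_square(n):
    print(f"computing {n}**2")  # printed only on a cache miss
    return n * n

print(slow_square(4))  # computes, prints 16
print(slow_square(4))  # cache hit: no "computing" line, prints 16
```

Unlike a closure-based cache, the state here is inspectable from outside (e.g. slow_square.cache), which can help with debugging and testing.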

Question 64: Weak References

Explain the concept of weak references in Python. When are they useful, and how do
they differ from strong references? Provide an example using the weakref module.

Explanation:

In Python, objects are typically managed by strong references. When an object has at
least one strong reference pointing to it, it cannot be garbage collected. Its memory will
only be reclaimed when all strong references to it are removed.

A weak reference is a reference to an object that does not prevent the object from being
garbage collected. If the only references to an object are weak references, the object can
be reclaimed by the garbage collector. Once the object is reclaimed, the weak reference
becomes invalid.

When are they useful?

• Caching: To implement caches where you want to store objects temporarily
without preventing them from being garbage collected if they are no longer
strongly referenced elsewhere.
• Observer Pattern: To allow observers to register for notifications from a subject
without creating strong references that would prevent the subject from being
garbage collected.
• Large Data Structures: To manage large, potentially transient data structures
without holding onto memory unnecessarily.

Code Example:

import weakref

class MyObject:
def __init__(self, name):
self.name = name
print(f"MyObject {self.name} created.")

def __del__(self):
print(f"MyObject {self.name} deleted.")

# Strong reference
obj_strong = MyObject("Strong")

# Weak reference
obj_weak = weakref.ref(obj_strong)

print(f"Weak reference target (before del): {obj_weak()}")

del obj_strong  # Remove the strong reference

# Now the object can be garbage collected, and the weak reference returns None
print(f"Weak reference target (after del): {obj_weak()}")

# Example with a WeakValueDictionary (a common use case)
print("\n--- WeakValueDictionary Example ---")
cache = weakref.WeakValueDictionary()

class ExpensiveObject:
    def __init__(self, id):
        self.id = id
        print(f"ExpensiveObject {self.id} created.")

    def __del__(self):
        print(f"ExpensiveObject {self.id} deleted.")

def get_expensive_object(obj_id):
    obj = cache.get(obj_id)
    if obj is None:
        print(f"Creating new ExpensiveObject {obj_id}")
        obj = ExpensiveObject(obj_id)
        # Keep a strong reference in `obj` while inserting: assigning the
        # temporary directly into the WeakValueDictionary would let it be
        # collected immediately, and a later cache lookup would fail.
        cache[obj_id] = obj
    else:
        print(f"Retrieving ExpensiveObject {obj_id} from cache")
    return obj

obj_a = get_expensive_object("A")
obj_b = get_expensive_object("B")
obj_a_again = get_expensive_object("A")

print("Cache content before deleting strong references:", list(cache.keys()))

del obj_a
del obj_b
del obj_a_again

# Give the garbage collector a nudge (CPython's reference counting
# usually reclaims the objects immediately)
import gc
gc.collect()

print("Cache content after deleting strong references:", list(cache.keys()))
# Objects A and B have now disappeared from the cache
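A related tool is weakref.finalize, which registers a callback to run when an object is garbage collected; it is generally more reliable than writing a __del__ method, since it runs even at interpreter shutdown. A short sketch:

```python
import weakref

class Resource:
    def __init__(self, name):
        self.name = name

def cleanup(name):
    # The callback must not hold a reference to the object itself,
    # or the object would never be collected.
    print(f"Cleaning up resource {name}")

r = Resource("db-connection")
finalizer = weakref.finalize(r, cleanup, r.name)
print(f"Finalizer alive: {finalizer.alive}")  # True

del r  # in CPython the finalizer fires here and prints the cleanup message
print(f"Finalizer alive: {finalizer.alive}")  # False
```

Passing r.name (a plain string) rather than r as the callback argument is deliberate: it keeps the finalizer from pinning the object in memory.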

Question 65: Python Packaging and Distribution

Explain the process of packaging a Python project for distribution using setuptools
and PyPI . What are the key files involved ( setup.py , pyproject.toml ,
MANIFEST.in ), and what role do they play? Outline the steps to create a distributable
package and upload it to PyPI.

Explanation:

Packaging a Python project involves creating a distributable archive (like a wheel or
source distribution) that can be easily installed by others using pip . The primary tools
for this are setuptools (for defining project metadata and build instructions) and
PyPI (the Python Package Index, a central repository for Python packages).

Key Files:

• setup.py (Traditional): A Python script that contains metadata about your
project (name, version, description, dependencies) and instructions for
setuptools on how to build and install your package. While still widely used,
pyproject.toml is becoming the preferred modern approach.
• pyproject.toml (Modern): A configuration file (PEP 518, PEP 621) that specifies
build system requirements and project metadata. It's becoming the standard for
defining Python projects and their build backends.
• MANIFEST.in : A file that specifies non-Python files (e.g., data files,
documentation, static assets) that should be included in your source distribution.
By default, setuptools only includes Python source files.

Steps to Package and Distribute:

1. Project Structure: Organize your project with a clear directory structure (e.g.,
my_package/ , tests/ , README.md , LICENSE ).

2. pyproject.toml (or setup.py ): Create this file at the root of your project to
define metadata and dependencies. Example pyproject.toml (simplified):

[project]
name = "my-awesome-package"
version = "0.1.0"
description = "A short description of my package"
readme = "README.md"
requires-python = ">=3.8"
dependencies = [
    "requests",
    "numpy>=1.20.0",
]

[build-system]
requires = ["setuptools>=61.0"]
build-backend = "setuptools.build_meta"

3. MANIFEST.in (if needed): If you have non-Python files to include, create this
file. Example MANIFEST.in :

include README.md
recursive-include my_package/data *.json

4. Install Build Tools: Ensure you have build and twine installed:

pip install build twine

5. Build Distribution Archives: Navigate to your project root and run:

python -m build

This creates a dist/ directory containing .whl (wheel) and .tar.gz (source
distribution) files.

6. Upload to PyPI (or TestPyPI):

◦ Register on PyPI/TestPyPI: Create an account on TestPyPI
(https://test.pypi.org/) for testing or PyPI (https://pypi.org/) for production.
◦ Upload: Use twine to upload your distributions:

twine upload --repository testpypi dist/*   # For TestPyPI
twine upload dist/*                         # For the official PyPI

You will be prompted for your PyPI/TestPyPI username and password (or an API
token).

Code Example (Conceptual setup.py for older projects):

# setup.py (for older projects or specific needs)
from setuptools import setup, find_packages

setup(
    name="my-awesome-package",
    version="0.1.0",
    packages=find_packages(),
    install_requires=[
        "requests",
        "numpy>=1.20.0",
    ],
    # Metadata for upload to PyPI
    author="Your Name",
    author_email="[email protected]",
    description="A short description of my package",
    long_description=open("README.md").read(),
    long_description_content_type="text/markdown",
    url="https://github.com/yourusername/my-awesome-package",
    classifiers=[
        "Programming Language :: Python :: 3",
        "License :: OSI Approved :: MIT License",
        "Operating System :: OS Independent",
    ],
    python_requires=">=3.8",
)
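Beyond the minimal metadata shown above, pyproject.toml can also declare optional dependency groups and console-script entry points. The fragment below is a hedged sketch with hypothetical package and module names, meant to extend (not replace) the example above:

```toml
[project.optional-dependencies]
# Installed with: pip install "my-awesome-package[dev]"
dev = ["pytest>=7.0", "ruff"]

[project.scripts]
# Creates a `my-tool` command that invokes my_package.cli:main
my-tool = "my_package.cli:main"
```

Entry points are generated at install time, so users get a working command-line tool without needing to know where the package's modules live.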
