Lab Programs

The document outlines various applications of AI models for text, image, music, and code generation. It includes code snippets for using OpenAI's GPT-2 for text generation, DALL-E for image synthesis, Magenta for music composition, and OpenAI Codex for code generation. Each section provides details on how to implement these models using Python libraries and frameworks.


Text Generation with GPT-2: Experiment with OpenAI’s GPT-2 model for generating diverse and coherent text based on prompts.

import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

def generate_text(prompt, model_name="gpt2", max_length=100, temperature=0.7,
                  top_k=50, top_p=0.9):
    """
    Generates text using a GPT-2 model based on a given prompt.

    Parameters:
        prompt (str): The input text to guide generation.
        model_name (str): The GPT-2 model variant to use (e.g., 'gpt2', 'gpt2-medium').
        max_length (int): The maximum length of the generated text.
        temperature (float): Sampling temperature (higher = more random text).
        top_k (int): The number of highest-probability tokens to consider.
        top_p (float): The cumulative probability for nucleus sampling.

    Returns:
        str: The generated text.
    """
    # Load tokenizer and model
    tokenizer = GPT2Tokenizer.from_pretrained(model_name)
    model = GPT2LMHeadModel.from_pretrained(model_name)

    # Encode the input prompt
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Generate text
    with torch.no_grad():
        output = model.generate(
            input_ids,
            max_length=max_length,
            temperature=temperature,
            top_k=top_k,
            top_p=top_p,
            do_sample=True  # Enables sampling for diverse outputs
        )

    # Decode and return the generated text
    return tokenizer.decode(output[0], skip_special_tokens=True)

if __name__ == "__main__":
    prompt_text = input("Enter a prompt: ")
    generated_text = generate_text(prompt_text)
    print("\nGenerated Text:\n", generated_text)

Image Synthesis using DALL-E: Dive into image generation with OpenAI’s DALL-E, creating unique and imaginative visuals from textual descriptions. (The script below uses the open Stable Diffusion model through the diffusers library as a freely available stand-in for DALL-E.)
from diffusers import StableDiffusionPipeline
import torch
from PIL import Image

def generate_image(prompt, model_id="CompVis/stable-diffusion-v1-4"):
    # Load the Stable Diffusion model
    pipe = StableDiffusionPipeline.from_pretrained(model_id)
    pipe.to(torch.device("cuda" if torch.cuda.is_available() else "cpu"))

    # Generate the image
    image = pipe(prompt, guidance_scale=7.5).images[0]

    # Display the image
    image.show()
    return image

# Run the function in a Jupyter Notebook
def run_image_generation():
    prompt_text = input("Enter a description for the image: ")
    image = generate_image(prompt_text)

    # Save the generated image
    image.save("generated_image.png")
    print("Image saved as generated_image.png")

# Call the function to allow execution in a Jupyter Notebook
run_image_generation()
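
For faster GPU runs and repeatable results, the same pipeline can be loaded in half precision and driven with a fixed random seed. This is a sketch under the assumption that a CUDA device is available; the example prompt and output filename are illustrative.

import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
pipe.to("cuda")

# A fixed seed makes the same prompt reproduce the same image
generator = torch.Generator("cuda").manual_seed(42)
image = pipe("a watercolor painting of a lighthouse",
             guidance_scale=7.5, generator=generator).images[0]
image.save("seeded_image.png")
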
Music Composition with Magenta: Explore Magenta, a project by Google, to generate music compositions using machine learning techniques. (The script below uses the pretty_midi library to compose a simple random melody as a lightweight stand-in for a full Magenta model.)

import pretty_midi
import numpy as np

def create_simple_midi():
    """
    Generates a simple AI-composed MIDI melody and saves it as 'generated_music.mid'.
    """
    # Create a PrettyMIDI object
    midi = pretty_midi.PrettyMIDI()

    # Create an instrument (Acoustic Grand Piano)
    instrument = pretty_midi.Instrument(program=0)  # 0 = Piano

    # Generate a simple melody from random notes
    for i in range(10):  # 10 notes
        pitch = np.random.randint(60, 72)  # Random pitch between middle C and B
        start_time = i * 0.5               # Notes start every 0.5 seconds
        end_time = start_time + 0.5        # Each note lasts 0.5 seconds
        note = pretty_midi.Note(velocity=100, pitch=pitch,
                                start=start_time, end=end_time)
        instrument.notes.append(note)

    # Add the instrument to the MIDI file
    midi.instruments.append(instrument)

    # Save the generated MIDI file
    midi.write("generated_music.mid")
    print("🎶 Music generated and saved as 'generated_music.mid' 🎶")

# Run the function to generate music
create_simple_midi()
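
To hear the result without a separate MIDI player, pretty_midi can render the file to raw audio using simple sine waves. A minimal sketch, assuming scipy is also installed; the WAV filename is illustrative.

import numpy as np
import pretty_midi
from scipy.io import wavfile

# Load the generated MIDI file and render it to audio
midi = pretty_midi.PrettyMIDI("generated_music.mid")
audio = midi.synthesize(fs=44100)  # sine-wave rendering at 44.1 kHz

# Scale to 16-bit PCM and save as a WAV file
audio = (audio / np.max(np.abs(audio)) * 32767).astype(np.int16)
wavfile.write("generated_music.wav", 44100, audio)
print("Audio saved as 'generated_music.wav'")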

Code Generation with OpenAI Codex: Try your hand at code generation using OpenAI Codex, which is proficient in understanding and generating programming code. (This exercise uses the Google Gemini API instead: generate an API key at https://aistudio.google.com/prompts/new_chat and install the required package with pip install google-generativeai.)

import google.generativeai as genai
import os

# Paste your API key from Google AI Studio here
api = "paste your api from google gemini"

# Make the key available as an environment variable
os.environ['API_KEY'] = api

# Configure the Generative AI client and pick a model
genai.configure(api_key=os.environ['API_KEY'])
model = genai.GenerativeModel('gemini-1.5-pro-latest')

# Ask the user for a prompt
user_prompt = input("Enter your prompt to create a program: ")

# Generate and print the response
response = model.generate_content(user_prompt)
print("\nGenerated Response:\n")
print(response.text)
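
For code-generation prompts, lowering the sampling temperature usually gives more predictable output. The same request can optionally pass a GenerationConfig; this is a sketch reusing the model object configured above, and the parameter values are illustrative.

# Optional: tighten sampling for more deterministic code output
config = genai.types.GenerationConfig(temperature=0.2, max_output_tokens=1024)
response = model.generate_content(user_prompt, generation_config=config)
print(response.text)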
