SambaNova AI Starter Kits

Overview

SambaNova AI Starter Kits are a collection of open-source examples and guides designed to facilitate the deployment of AI-driven use cases for both developers and enterprises.

To run these examples, you can obtain a free API key using SambaCloud. Alternatively, if you are a current SambaNova customer, you can deploy your models using SambaStack or SambaManaged. Most of the code examples are written in Python, although the concepts can be applied to any programming language.

Questions? Just message us on SambaNova Community or open an issue on GitHub. We're happy to help live!

Available AI Starter Kits

The table below lists the available kits, which are grouped into four categories: 1) Data Ingestion & Preparation, 2) Model Development & Optimization, 3) Intelligent Information Retrieval, and 4) Advanced AI Capabilities.

For functionality related to third-party integrations, see the list in our Integrations Repository and Integrations Docs.

Name | Kit Description | Category
Data Extraction | Series of notebooks that demonstrate methods for extracting text from documents in different input formats. | Data Ingestion & Preparation
End to End Fine-tuning | Recipe to upload and train a Language Model (LLM) using your own data on the SambaStudio platform. | Data Ingestion & Preparation
Enterprise Knowledge Retrieval | Sample implementation of the semantic search workflow using the SambaNova platform to get answers to questions about your documents. Includes a runnable demo. | Intelligent Information Retrieval
Multimodal Knowledge Retriever | Sample implementation of the semantic search workflow leveraging the SambaNova platform to get answers to questions about your documents using text, tables, and images. Includes a runnable demo. | Intelligent Information Retrieval
RAG Evaluation Kit | A tool for evaluating the performance of LLM APIs using the RAG Evaluation methodology. | Intelligent Information Retrieval
Search Assistant | Sample implementation of the semantic search workflow built using the SambaNova platform to get answers to your questions using search engine snippets and crawled website content as sources. Includes a runnable demo. | Intelligent Information Retrieval
Benchmarking | This kit evaluates the performance of multiple LLM models hosted in SambaStudio. It offers various performance metrics and configuration options. Users can also see these metrics within a chat interface. | Advanced AI Capabilities
Financial Assistant | This app demonstrates the capabilities of LLMs in extracting and analyzing financial data using function calling, web scraping, and RAG. | Advanced AI Capabilities
Function Calling | Example of a tool-calling implementation and a generic function-calling module that can be used inside your application workflows. | Advanced AI Capabilities
Custom Chat Templates | Complete workflow of how modern chat models format, send, and interpret conversations, from Jinja chat templates to Completions API invocation and tool-call parsing. | Advanced AI Capabilities

Getting Started

Go to the SambaNova Quickstart Guide if it's your first time using the AI Starter Kits and you want to try out simple examples.

Getting a SambaNova API key and setting your generative models

Currently, there are two ways to obtain an API key from SambaNova. You can get a free API key using SambaCloud. Alternatively, if you are a current SambaNova customer, you can deploy your models using SambaStack or SambaManaged.

Use SambaCloud (Option 1)

To generate an API key, go to the API section of the SambaCloud portal.

To integrate SambaCloud LLMs with this AI starter kit, configure the environment variables in the ai-starter-kit/.env file, using .env-example as a template:

  • Enter the SambaCloud API key in the ai-starter-kit/.env file, for example:
SAMBANOVA_API_KEY = "your-sambanova-api-key"
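
With the key in place, any OpenAI-compatible client can reach SambaCloud. The snippet below is a minimal sketch, assuming the openai Python package is installed and that the base URL and model name shown (https://api.sambanova.ai/v1 and Meta-Llama-3.3-70B-Instruct) are available on your account; check the SambaCloud documentation for current values:

  import os
  from openai import OpenAI

  # Uses the key configured in ai-starter-kit/.env (export it or load the file first)
  client = OpenAI(
      api_key=os.environ["SAMBANOVA_API_KEY"],
      base_url="https://api.sambanova.ai/v1",  # SambaCloud OpenAI-compatible endpoint
  )

  response = client.chat.completions.create(
      model="Meta-Llama-3.3-70B-Instruct",  # example model name; use one you have access to
      messages=[{"role": "user", "content": "Say hello from SambaCloud."}],
  )
  print(response.choices[0].message.content)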

Use SambaStack/SambaManaged (Option 2)

Begin by deploying your LLM of choice (e.g., Llama 3.3 70B) to an endpoint for inference in SambaStack/SambaManaged.

To integrate your LLM deployed on SambaStack/SambaManaged with this AI starter kit, configure the environment variables in the ai-starter-kit/.env file, using .env-example as a template:

  • Set your SambaStack/SambaManaged variables, for example:
SAMBANOVA_API_KEY = "your-sambanova-api-key"
SAMBANOVA_API_BASE = "your-sambanova-base-url"
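
The call pattern is the same as in the SambaCloud example above, except the client points at your own endpoint. A minimal sketch, assuming python-dotenv and openai are installed and your SambaStack/SambaManaged endpoint exposes an OpenAI-compatible chat completions API:

  import os
  from dotenv import load_dotenv
  from openai import OpenAI

  load_dotenv()  # reads ai-starter-kit/.env from the current directory

  client = OpenAI(
      api_key=os.environ["SAMBANOVA_API_KEY"],
      base_url=os.environ["SAMBANOVA_API_BASE"],  # your SambaStack/SambaManaged URL
  )

  response = client.chat.completions.create(
      model="Meta-Llama-3.3-70B-Instruct",  # example; use the model you actually deployed
      messages=[{"role": "user", "content": "Hello from my deployed endpoint."}],
  )
  print(response.choices[0].message.content)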

Run the desired starter kit

Go to the README.md of the starter kit you want to use and follow the instructions. See Available AI Starter Kits.

Additional information

Setting up your virtual environment

There are two approaches to setting up your virtual environment for the AI Starter Kits:

  1. Individual Kit Setup (Traditional Method)
  2. Base Environment Setup

1. Individual Kit Setup

Each starter kit (see table above) has its own README.md and requirements.txt file. You can set up a separate virtual environment for each kit by following the instructions in their respective directories. This method is suitable if you're only interested in running a single kit or prefer isolated environments for each project.

To use this method:

  1. Navigate to the specific kit's directory
  2. Create a virtual environment
  3. Install the requirements
  4. Follow the kit-specific instructions

2. Base Environment Setup

For users who plan to work with multiple kits or prefer a unified development environment, we recommend setting up a base environment. This approach uses a Makefile to automate the setup of a consistent Python environment that works across all kits.

Benefits of the base environment approach:

  • Consistent Python version across all kits
  • Centralized dependency management
  • Simplified setup process
  • Easier switching between different kits

Prerequisites

  • pyenv: The Makefile will attempt to install pyenv if it's not already installed.
  • Docker: (Optional) If you want to use the Docker-based setup, ensure Docker is installed on your system.

What the Base Setup Does

  1. Installs pyenv and Poetry if they are not already installed.
  2. Sets up a Python virtual environment using a specified Python version (default is 3.11.3).
  3. Installs all necessary dependencies for the base environment.
  4. Sets up the parsing service required by some kits.
  5. Installs system dependencies like Tesseract OCR and Poppler.
  6. Provides Docker-based setup options for consistent environments across different systems.

Setting Up the Base Environment

  1. Install and Set Up the Base Environment:
make all

This command will set up the base ai-starter-kit environment, including installing all necessary tools and dependencies.

  2. Activate the Base Environment:
source .venv/bin/activate
  3. Navigate to Your Chosen Starter Kit:
cd path/to/starter_kit

Within the starter kit, there will be instructions on how to start it. You can skip the virtual environment creation step in the kit's README.md, as it has already been done here.

Parsing Service Management

For certain kits, we use a standard parsing service. By default, it's started automatically with the base environment. To work with this service in isolation, follow the steps in this section.

  • Start Parsing Service:
make start-parsing-service
  • Stop Parsing Service:
make stop-parsing-service
  • Check Parsing Service Status:
make parsing-status
  • View Parsing Service Logs:
make parsing-log

Docker-based Setup

To use the Docker-based setup:

  1. Ensure Docker is installed on your system.
  2. Build the Docker image:
make docker-build
  3. Run a specific kit in the Docker container:
make docker-run-kit KIT=<kit_name>

Replace <kit_name> with the name of the starter kit you want to run (e.g., function_calling).

  4. To open a shell in the Docker container:
make docker-shell

Cleanup

To clean up all virtual environments created by the Makefile and stop the parsing service, run the following command:

make clean

This command removes all virtual environments created with the Makefile, stops the parsing service, and cleans up any temporary files.

Troubleshooting

If you encounter issues while setting up or running the AI Starter Kit, here are some common problems and their solutions:

Python version issues

If you're having problems with Python versions:

  1. Ensure you have pyenv installed: make ensure-pyenv
  2. Install the required Python versions: make install-python-versions
  3. If issues persist, check your system's Python installation and PATH settings.

Dependency conflicts

If you're experiencing dependency conflicts:

  1. Try cleaning your environment: make clean
  2. Update the lock file: poetry lock --no-update
  3. Reinstall dependencies: make install

pikepdf installation issues

If you encounter an error while installing pikepdf, such as:

ERROR: Failed building wheel for pikepdf
Failed to build pikepdf

This is likely due to a missing qpdf dependency. The Makefile should automatically install qpdf for you, but if you're still encountering issues:

  1. Ensure you have proper permissions to install system packages.
  2. If you're on macOS, you can manually install qpdf using Homebrew:
    brew install qpdf
  3. On Linux, you can install it using your package manager, e.g., on Ubuntu:
    sudo apt-get update && sudo apt-get install -y qpdf
    
  4. After installing qpdf, try running make install again.

If you continue to face issues, please ensure your system meets all the requirements for building pikepdf and consider checking the pikepdf documentation for more detailed installation instructions.

Parsing service issues

If the parsing service isn't starting or is behaving unexpectedly:

  1. Check its status: make parsing-status
  2. View its logs: make parsing-log
  3. Try stopping and restarting it: make stop-parsing-service followed by make start-parsing-service

System Dependencies Issues

If you encounter issues related to Tesseract OCR or Poppler:

  1. Ensure the Makefile has successfully installed these dependencies.
  2. On macOS, you can manually install them using Homebrew:
  brew install tesseract poppler
  3. On Linux (Ubuntu/Debian), you can install them manually:
  sudo apt-get update && sudo apt-get install -y tesseract-ocr poppler-utils
  4. On Windows, you may need to install these dependencies manually and ensure they are in your system PATH.

Docker-related Issues

If you're using the Docker-based setup and encounter issues:

  1. Ensure Docker is properly installed and running on your system.
  2. Try rebuilding the Docker image: make docker-build
  3. Check Docker logs for any error messages.
  4. Ensure your firewall or antivirus is not blocking Docker operations.

General troubleshooting steps

  1. Ensure all prerequisites (Python, pyenv, Poetry) are correctly installed.
  2. Try cleaning and rebuilding the environment: make clean all
  3. Check for any error messages in the console output and address them specifically.
  4. Ensure your .env file is correctly set up in the ai-starter-kit root with all necessary environment variables (see the quick check below).
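
If you're not sure whether the .env file is being picked up, the quick check below (a throwaway sketch, assuming python-dotenv is installed) prints which of the expected variables are visible; run it from the ai-starter-kit root:

  import os
  from dotenv import load_dotenv

  load_dotenv()  # looks for .env in the current working directory
  # SAMBANOVA_API_BASE is only required for SambaStack/SambaManaged deployments
  for var in ("SAMBANOVA_API_KEY", "SAMBANOVA_API_BASE"):
      print(f"{var}: {'set' if os.getenv(var) else 'missing'}")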

If you continue to experience issues, please open an issue with details about your environment, the full error message, and steps to reproduce the problem.

Important Notes for Users

  • Ensure you have sufficient permissions to install software on your system.
  • The setup process may take several minutes, especially when installing Python versions or large dependencies.
  • If you encounter any issues during setup, check the error messages and ensure your system meets all prerequisites.
  • Always activate the base environment before navigating to and running a specific starter kit.
  • Some kits may require additional setup steps. Always refer to the specific README of the kit you're using.

API Reference

  • Find more information about SambaCloud and SambaStack here.

Note: These AI Starter Kit code samples are provided "as-is," and are not production-ready or supported code. Bugfix/support will be on a best-effort basis only. Code may use third-party open-source software. You are responsible for performing due diligence per your organization's policies for use in your applications.
