Browse free open source Python Graphics Software and projects below. Use the toggles on the left to filter open source Python Graphics Software by OS, license, language, programming language, and project status.

  • 1
    MESHROOM

    3D reconstruction software

    Photogrammetry is the science of making measurements from photographs. It infers the geometry of a scene from a set of unordered photographs or videos. Photography is the projection of a 3D scene onto a 2D plane, losing depth information; the goal of photogrammetry is to reverse this process. The dense model of the scene is produced by chaining two computer-vision pipelines, “Structure-from-Motion” (SfM) and “Multi-View Stereo” (MVS). Features include fusion of multi-bracketed LDR images into HDR, alignment of panorama images, and support for fisheye optics, with automatic estimation of the fisheye circle or manual editing. Meshroom can take advantage of motorized-head files, is easy to integrate into your render-farm system, and lets you add specific rules to select the most suitable machines based on the CPU, RAM, and GPU requirements of each node.
    Downloads: 141 This Week
    Last Update:
    See Project
  • 2
    Lama Cleaner

    Image inpainting tool powered by SOTA AI Model

    Lama Cleaner is a free, open-source, and fully self-hostable image inpainting tool powered by state-of-the-art (SOTA) AI models. You can use it to remove unwanted objects, defects, or people from your pictures, or to erase and replace (powered by Stable Diffusion) anything in them. Many AIGC creators use Lama Cleaner to clean up their work. It is completely free and open source, fully self-hosted, and supports CPU & GPU. Features include a Windows 1-click installer, a classical image inpainting algorithm powered by cv2, multiple SOTA AI models with various inpainting strategies, the option to run as a desktop application, and interactive segmentation of any object.
    Downloads: 51 This Week
    Last Update:
    See Project
  • 3
    GIMP ML

    AI for GNU Image Manipulation Program

    This repository introduces GIMP3-ML, a set of Python plugins for the widely popular GNU Image Manipulation Program (GIMP). It brings recent advances in computer vision to the conventional image-editing pipeline. Deep learning applications such as monocular depth estimation, semantic segmentation, mask generative adversarial networks, image super-resolution, denoising, and colorization have been incorporated into GIMP through Python-based plugins. Additionally, operations on images such as edge detection and color clustering have been added. GIMP-ML relies on standard Python packages such as numpy, scikit-image, pillow, pytorch, opencv, and scipy. In addition, GIMP-ML aims to bring the benefits of deep learning networks used for computer vision tasks to routine image processing workflows.
    Downloads: 13 This Week
    Last Update:
    See Project
  • 4
    Dream Textures

    Stable Diffusion built-in to Blender

    Create textures, concept art, background assets, and more with a simple text prompt. Use the 'Seamless' option to create textures that tile perfectly with no visible seam, and texture entire models and scenes with 'Project Dream Texture' and depth to image. Re-style animations with the Cycles render pass. Run the models on your own machine to iterate without slowdowns from a service. Learn how to use the various configuration options to get exactly what you're looking for. Inpaint to fix up images and convert existing textures into seamless ones automatically. Outpaint to increase the size of an image by extending it in any direction. Perform style transfer and create novel animations with Stable Diffusion as a post-processing step. Dream Textures has been tested with CUDA and Apple Silicon GPUs; over 4 GB of VRAM is recommended.
    Downloads: 5 This Week
    Last Update:
    See Project
  • 5
    Animated Drawings

    Code to accompany "A Method for Animating Children's Drawings"

    AnimatedDrawings is a framework that converts user sketches or line drawings into fully animated 2D motion sequences using learned motion priors. The idea is that you draw a simple static figure (stick figure, silhouette, or contour lines), and the system produces plausible skeletal motion (walking, jumping, dancing) that adheres to the drawn shape constraints. The architecture separates shape embedding (to understand user-drawn geometry) from motion embedding / generation (to produce temporally coherent movement). Users can provide rough keyframes or control constraints (pose anchors), and the system fills intermediate frames with fluid animation. The repository includes demonstration apps and notebooks where you can upload or draw shapes and watch animations play. Because the approach is data-driven, it generalizes to new drawings even with varying proportions or stylizations.
    Downloads: 4 This Week
    Last Update:
    See Project
  • 6
    Mesh R-CNN

    code for Mesh R-CNN, ICCV 2019

    Mesh R-CNN is a 3D reconstruction and object understanding framework developed by Facebook Research that extends Mask R-CNN into the 3D domain. Built on top of Detectron2 and PyTorch3D, Mesh R-CNN enables end-to-end 3D mesh prediction directly from single RGB images. The model learns to detect, segment, and reconstruct detailed 3D mesh representations of objects in natural images, bridging the gap between 2D perception and 3D understanding. Unlike voxel-based or point-based approaches, Mesh R-CNN uses a differentiable mesh representation, allowing it to efficiently refine surface geometry while maintaining high spatial detail. The system combines 2D detection from Mask R-CNN with 3D reasoning modules that output full mesh reconstructions aligned with the input image. It has been evaluated on datasets such as Pix3D, where it demonstrates state-of-the-art performance in reconstructing real-world object geometry.
    Downloads: 3 This Week
    Last Update:
    See Project
  • 7
    DALL-E 2 - Pytorch

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis

    Implementation of DALL-E 2, OpenAI's updated text-to-image synthesis neural network, in PyTorch. The main novelty seems to be an extra layer of indirection with the prior network (whether an autoregressive transformer or a diffusion network), which predicts an image embedding from the text embedding produced by CLIP. Specifically, this repository only builds out the diffusion prior network, as it is the best-performing variant (though it incidentally involves a causal transformer as the denoising network). Training DALL-E 2 is a three-step process, with the training of CLIP being the most important. To train CLIP, you can either use the x-clip package or join the LAION Discord, where many replication efforts are already underway. Then you will need to train the decoder, which learns to generate images conditioned on the image embedding coming from the trained CLIP.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 8
    Kornia

    Open Source Differentiable Computer Vision Library

    Kornia is a differentiable computer vision library for PyTorch. It consists of a set of routines and differentiable modules to solve generic computer vision problems. At its core, the package uses PyTorch as its main backend, both for efficiency and to take advantage of reverse-mode auto-differentiation to define and compute the gradients of complex functions. Inspired by existing packages, the library is composed of a set of modules containing operators that can be inserted into neural networks to train models to perform image transformations, epipolar geometry, depth estimation, and low-level image processing such as filtering and edge detection, all operating directly on tensors. Kornia fills the gap between classical and deep computer vision by implementing standard and advanced vision algorithms for AI, and its libraries and initiatives are driven by community needs. A brief usage sketch follows this entry.
    Downloads: 2 This Week
    Last Update:
    See Project
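    The following is a minimal sketch of the kind of tensor-level, differentiable operations Kornia provides; the random image tensor and filter parameters are illustrative only, not taken from the project page.

        import torch
        import kornia

        # A batch of one 3-channel image with values in [0, 1] (random data for illustration).
        img = torch.rand(1, 3, 64, 64, requires_grad=True)

        # Differentiable Gaussian blur; gradients can flow back to `img`.
        blurred = kornia.filters.gaussian_blur2d(img, kernel_size=(5, 5), sigma=(1.5, 1.5))

        # Differentiable grayscale conversion followed by Sobel edge detection.
        edges = kornia.filters.sobel(kornia.color.rgb_to_grayscale(img))

        # Because everything is a PyTorch op, a loss on the results backpropagates to the input.
        (blurred.mean() + edges.mean()).backward()
        print(img.grad.shape)  # torch.Size([1, 3, 64, 64])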
  • 9
    MMDeploy

    OpenMMLab Model Deployment Framework

    MMDeploy is an open-source deep learning model deployment toolset and part of the OpenMMLab project. Models can be exported to and run on several backends, with more to be supported over time. All kinds of modules in the SDK can be extended, such as Transform for image processing, Net for neural-network inference, and Module for post-processing. Install and build your target backend; ONNX Runtime, for example, is a cross-platform inference and training accelerator compatible with many popular ML/DNN frameworks. Please read getting_started for the basic usage of MMDeploy.
    Downloads: 2 This Week
    Last Update:
    See Project
  • 10
    scikit-image

    Image processing in Python

    scikit-image is a collection of algorithms for image processing. It is available free of charge and free of restriction. We pride ourselves on high-quality, peer-reviewed code written by an active community of volunteers. scikit-image builds on scipy.ndimage to provide a versatile set of image processing routines in Python; a brief usage sketch follows this entry. This library is developed by its community, and contributions are most welcome! Read about our mission, vision, and values and how we govern the project. Major proposals to the project are documented in SKIPs. The scikit-image community consists of anyone using or working with the project in any way. A community member can become a contributor by interacting directly with the project in concrete ways.
    Downloads: 2 This Week
    Last Update:
    See Project
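    The following is a minimal sketch of a typical scikit-image call chain, using one of the library's bundled sample images so no external files are needed.

        from skimage import data, filters

        # Load a bundled grayscale sample image.
        image = data.camera()

        # Edge detection and automatic thresholding, two routines built on scipy.ndimage.
        edges = filters.sobel(image)
        binary = image > filters.threshold_otsu(image)

        print(image.shape, edges.dtype, binary.mean())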
  • 11
    AnimateDiff

    Plug-n-play module turning text-to-image models into animation

    AnimateDiff is an open-source project designed to enhance text-to-image diffusion models by adding animation capabilities. It allows users to turn static images generated by popular text-to-image models into animated sequences without requiring additional model training. This plug-and-play tool is compatible with a wide range of community models and facilitates the generation of animation directly from pre-existing text-to-image models. It supports various configurations to create animations with different visual styles, providing flexibility and ease of use for developers and artists interested in exploring dynamic, AI-generated animations.
    Downloads: 18 This Week
    Last Update:
    See Project
  • 12
    Warlock-Studio

    Suite with Real-ESRGAN, BSRGAN, RealESRNet, IRCNN, GFPGAN & RIFE.

    v5.1.1. Warlock-Studio is a Windows application that uses the Real-ESRGAN, BSRGAN, IRCNN, GFPGAN, RealESRNet, RealESRAnime, and RIFE artificial-intelligence models to upscale images and videos, restore faces, interpolate frames, and reduce noise. The application supports GPU acceleration (including multi-GPU setups) and offers batch processing for large workloads. It includes drag-and-drop handling for single or multiple files, optional pre-resize functions, and an automatic tiling system designed to overcome GPU VRAM limitations.
    Downloads: 27 This Week
    Last Update:
    See Project
  • 13
    Image Super-Resolution (ISR)

    Super-scale your images and run experiments with Residual Dense Networks

    The goal of this project is to upscale and improve the quality of low-resolution images. It contains Keras implementations of different Residual Dense Networks for Single Image Super-Resolution (ISR), as well as scripts to train these networks using content and adversarial loss components. Docker scripts and Google Colab notebooks are available to carry out training and prediction, and additional scripts facilitate training on the cloud with AWS and nvidia-docker with only a few commands. When training your own model, start with only the PSNR loss (50+ epochs, depending on the dataset) and only then introduce the GAN and feature losses; this is controlled by the loss-weights argument. Pretrained weights are available directly when creating the model object; a brief usage sketch follows this entry. ISR is compatible with Python 3.6 and is distributed under the Apache 2.0 license.
    Downloads: 1 This Week
    Last Update:
    See Project
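    The following is a brief usage sketch along the lines of the project's README; the ISR.models.RDN import path, the 'psnr-small' pretrained-weight name, and the file names are assumptions to be checked against the project's documentation.

        import numpy as np
        from PIL import Image
        from ISR.models import RDN  # import path assumed from the project's README

        # Load a low-resolution image as a NumPy array (file name hypothetical).
        lr_img = np.array(Image.open("low_res.jpg"))

        # 'psnr-small' is assumed to be one of the pretrained weight sets the project
        # distributes; the weights are downloaded on first use.
        model = RDN(weights="psnr-small")

        # Run super-resolution and save the upscaled result.
        sr_img = model.predict(lr_img)
        Image.fromarray(sr_img).save("high_res.jpg")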
  • 14
    audio-diffusion-pytorch

    Audio generation using diffusion models, in PyTorch

    A fully featured audio diffusion library for PyTorch. It includes models for unconditional audio generation, text-conditional audio generation, diffusion autoencoding, upsampling, and vocoding. The provided models are waveform-based; however, the U-Net (built using a-unet), DiffusionModel, diffusion method, and diffusion samplers are all generic to any dimension and highly customizable to work on other formats. Note: no pre-trained models are provided here; this library is meant for research purposes.
    Downloads: 1 This Week
    Last Update:
    See Project
  • 15
    Linux-Intelligent-Ocr-Solution

    Easy-OCR solution and Tesseract trainer for GNU/Linux

    Linux-Intelligent-Ocr-Solution (Lios) is free and open-source software for converting print into text using either a scanner or a camera. It can also produce text from scanned images obtained from other sources such as a PDF, an image, a folder of images, or a screenshot. The program is fully accessible to visually impaired users, and a Tesseract trainer GUI is also shipped with the package; a minimal OCR sketch follows this entry. Forum: https://groups.google.com/forum/#!forum/lios Video tutorial: https://www.youtube.com/playlist?list=PLn29o8rxtRe1zS1r2-yGm1DNMOZCgdU0i Tesseract training tutorial (beta): https://www.youtube.com/watch?v=qLpCld4cdtk Source code: GitHub: https://github.com/Nalin-x-Linux/lios-3 Gitlab: https://gitlab.com/Nalin-x-Linux/lios-3 A user guide is available on the download page.
    Downloads: 7 This Week
    Last Update:
    See Project
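    Lios itself is a GUI application that drives the Tesseract engine; purely for illustration, the underlying OCR step looks like the sketch below, which uses the separate pytesseract wrapper rather than Lios, with a hypothetical input file name.

        from PIL import Image
        import pytesseract  # a separate Python wrapper around the same Tesseract engine Lios uses

        # Run Tesseract OCR on a scanned page and print the recognized text.
        scan = Image.open("scanned_page.png")  # hypothetical input file
        text = pytesseract.image_to_string(scan, lang="eng")
        print(text)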
  • 16
    SMILI

    Scientific Visualisation Made Easy

    The Simple Medical Imaging Library Interface (SMILI), pronounced 'smilie', is an open-source, lightweight, and easy-to-use medical imaging viewer and library for all major operating systems. The main sMILX application features viewing of n-D images, vector images, and DICOMs, anonymization, shape analysis, and models/surfaces, with easy drag-and-drop functions. It also features a number of standard processing algorithms for smoothing, thresholding, masking, etc. of images and models, both with graphical user interfaces and via the command line. See our YouTube channel, linked from the homepage, for tutorial videos. The applications are all built on a uniform user-interface framework that provides a very high-level (Qt) interface to powerful image-processing and scientific-visualisation algorithms from the Insight Toolkit (ITK) and the Visualisation Toolkit (VTK). The framework allows one to build stand-alone medical imaging applications quickly and easily.
    Downloads: 10 This Week
    Last Update:
    See Project
  • 17
    RNNLIB is a recurrent neural network library for sequence learning problems. Applicable to most types of spatiotemporal data, it has proven particularly effective for speech and handwriting recognition. Full installation and usage instructions are given at http://sourceforge.net/p/rnnl/wiki/Home/
    Downloads: 1 This Week
    Last Update:
    See Project
  • 18
    AiHound

    AI powered image classification for nudity and documents / id-cards

    AiHound is designed to run from a USB pen drive or any other kind of removable, writable media. The program checks all Office documents, images, and videos, classifying the images into various categories. It can currently recognize nudity/pornography as well as scanned or photographed documents, ID cards, and credit cards. I am working on a model that also recognizes various types of drugs in images.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 19
    Exposure

    Learning infinite-resolution image processing with GAN and RL

    Learning infinite-resolution image processing with GAN and RL from unpaired image datasets, using a differentiable photo editing model (ACM Transactions on Graphics, presented at SIGGRAPH 2018). Exposure was originally designed for RAW photos, which assume 12+ bit color depth and a linear "RGB" color space (or whatever we get after demosaicing). JPEG and PNG images typically have only 8-bit color depth (except 16-bit PNGs), and the lack of information (dynamic range/activation resolution) may lead to suboptimal results such as posterization. Moreover, JPEGs and most PNGs assume an sRGB color space, which applies a roughly 1/2.2 gamma correction, making the data distribution different from the training images (which are linear); a small numeric sketch of this conversion follows this entry. Exposure is just a prototype (proof of concept) of our latest research, and a lot of engineering effort is definitely required to make it suitable for a real product.
    Downloads: 0 This Week
    Last Update:
    See Project
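    To illustrate the gamma point above (this is not Exposure's own code), converting an 8-bit sRGB value to approximately linear light before feeding data to a model trained on linear images looks like the following sketch.

        import numpy as np

        def srgb_to_linear_approx(img_uint8: np.ndarray) -> np.ndarray:
            """Approximate sRGB -> linear conversion using the ~1/2.2 gamma mentioned above.

            The exact sRGB transfer function also has a small linear toe segment;
            the plain power law is a common approximation.
            """
            normalized = img_uint8.astype(np.float32) / 255.0
            return normalized ** 2.2

        # Example: mid-gray 128/255 in sRGB is only about 0.22 in linear light, not 0.5.
        print(srgb_to_linear_approx(np.array([128], dtype=np.uint8)))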
  • 20
    FrankMocap

    A Strong and Easy-to-use Single View 3D Hand+Body Pose Estimator

    FrankMocap is a monocular 3D human capture system that estimates body, hand, and optionally face pose from a single RGB image or video. It regresses parametric human models (e.g., SMPL/SMPL-X) directly, producing temporally stable meshes and joint angles suitable for animation or analytics. The pipeline couples a robust 2D keypoint detector with 3D mesh regression networks and priors that keep results anatomically plausible. It can run frame-by-frame or with temporal smoothing, and includes demo apps for live webcam capture as well as batch processing. Outputs include textured meshes, joint locations, and model parameters that can be exported to common DCC tools and game engines. The codebase offers pretrained models, clear inference scripts, and utilities to visualize results, making single-camera motion capture approachable on commodity hardware. Researchers and creators use it for motion studies, AR/VR prototyping, character animation, and human-in-the-loop editing.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 21
    Gamera is a framework for the creation of structured document analysis applications by domain experts. It combines a programming library with GUI tools for the training and interactive development of recognition systems.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 22
    Gaze at the landscape is a wallpaper switcher that uses on-line sources of pictures to provide a delightful desktop environment.
    Downloads: 0 This Week
    Last Update:
    See Project
  • 23
    House3D

    A Realistic and Rich 3D Environment

    House3D is a large-scale virtual 3D simulation environment designed to support research in embodied AI, reinforcement learning, and vision-language navigation. It provides more than 45,000 richly annotated indoor scenes sourced from the SUNCG dataset, covering diverse architectural layouts such as studios, multi-floor homes, and spaces with detailed furnishings and room types. Each environment includes fully labeled 3D objects, allowing agents to perceive and interact with their surroundings through multiple sensory modalities including RGB images, depth maps, semantic segmentation masks, and top-down maps. The simulator is optimized for high-performance rendering, achieving thousands of frames per second to enable efficient large-scale training of RL agents. House3D has served as the foundation for several influential research projects such as RoomNav (for concept-based navigation) and Embodied Question Answering (EQA).
    Downloads: 0 This Week
    Last Update:
    See Project
  • 24
    MMEditing

    MMEditing is a low-level vision toolbox based on PyTorch

    MMEditing is an open-source toolbox for low-level vision based on PyTorch, supporting super-resolution, inpainting, matting, video interpolation, and other tasks. We decompose the editing framework into different components, so one can easily construct a customized editor framework by combining different modules. The toolbox directly supports popular and contemporary inpainting, matting, super-resolution, and generation tasks, and provides state-of-the-art methods for each; a brief inference sketch follows this entry. Note that MMSR has been merged into this repo as a part of MMEditing. With elaborate designs of the new framework and careful implementations, we hope MMEditing provides a better experience. When installing PyTorch in Step 2, you need to specify the version of CUDA; if you are not clear on which to choose, follow our recommendations.
    Downloads: 0 This Week
    Last Update:
    See Project
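    The following is a brief inference sketch in the spirit of MMEditing's high-level Python API; the init_model/restoration_inference/tensor2img helpers follow older MMEditing (0.x) demo scripts, and the config and checkpoint paths are hypothetical, so check the repository's documentation for the exact names.

        import mmcv
        from mmedit.apis import init_model, restoration_inference  # helpers assumed from MMEditing 0.x
        from mmedit.core import tensor2img

        # Hypothetical super-resolution config and checkpoint paths.
        config = "configs/restorers/esrgan/esrgan_x4c64b23g32_g1_400k_div2k.py"
        checkpoint = "checkpoints/esrgan_x4.pth"

        # Build the model, restore a low-resolution image, and save the result.
        model = init_model(config, checkpoint, device="cuda:0")
        output = restoration_inference(model, "low_res.png")
        mmcv.imwrite(tensor2img(output), "restored.png")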
  • 25
    A Maya plug-in for importing Massive simulations.
    Downloads: 0 This Week
    Last Update:
    See Project