Use Docker Model Runner

Requires: Docker Compose 2.35.0 or later, and Docker Desktop 4.41 or later

Docker Model Runner can be integrated with Docker Compose to run AI models as part of your multi-container applications.
This lets you define and run AI-powered applications alongside your other services.

Provider services

Compose introduces a new service type called provider that allows you to declare platform capabilities required by your application. For AI models, you can use the model type to declare model dependencies.

Here's an example of how to define a model provider:

services:
  chat:
    image: my-chat-app
    depends_on:
      - ai_runner

  ai_runner:
    provider:
      type: model
      options:
        model: ai/smollm2

Notice the dedicated provider attribute in the ai_runner service.
This attribute specifies that the service is a model provider and lets you define options such as the name of the model to be used.

There is also a depends_on attribute in the chat service.
This attribute specifies that the chat service depends on the ai_runner service.
This means the ai_runner service starts before the chat service, so that model information can be injected into it.

How it works

During the docker compose up process, Docker Model Runner automatically pulls and runs the specified model.
It also sends Compose the model tag name and the URL to access the model runner.

This information is then passed to services that declare a dependency on the model provider.
In the example above, the chat service receives two environment variables prefixed with the service name:

  • AI_RUNNER_URL with the URL to access the model runner
  • AI_RUNNER_MODEL with the model name, which can be passed along with the URL to request the model

This lets the chat service interact with the model and use it for its own purposes.
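
For example, the chat service could read these variables and call the model directly. The following is a minimal sketch in Python, assuming the injected URL is the base path of an OpenAI-compatible API (which Docker Model Runner exposes) and using the requests library; the prompt and the exact path handling are illustrative:

import os
import requests

# Injected by Compose, with names derived from the provider
# service name (ai_runner in the example above).
base_url = os.environ["AI_RUNNER_URL"].rstrip("/")
model = os.environ["AI_RUNNER_MODEL"]

# Assumes base_url is the root of an OpenAI-compatible API,
# so the chat completions path can be appended to it.
response = requests.post(
    f"{base_url}/chat/completions",
    json={
        "model": model,
        "messages": [
            {"role": "user", "content": "Say hello in one sentence."},
        ],
    },
    timeout=120,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])

Because the connection details arrive as plain environment variables, the same pattern works in whatever language or framework the chat service uses.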
