Building AI-powered apps with Firebase AI Logic

Firebase’s mission is to help you transform your apps with the power of generative AI. We’re continuing to invest in robust frameworks, SDKs, and tooling to help streamline integrating AI experiences in your apps.

Last year, we introduced Vertex AI in Firebase, enabling mobile and web apps to access the Vertex AI Gemini API on the client side. We also launched Genkit, designed for leveraging Gemini and other models in server-side scenarios. Together, these offerings help you build robust AI-powered solutions.

Meet Firebase AI Logic

Today, we’re announcing the evolution of Vertex AI in Firebase to Firebase AI Logic.

Easily integrate generative AI into your apps, either directly via client-side access without setting up a backend, or through Genkit for robust server-side implementations

As part of this evolution, we’re also launching new features for both Firebase AI Logic client integration and the Genkit framework:

  • On the client side: You now have access to the Gemini Developer API, hybrid and on-device inference capabilities, support for Unity, image generation and editing with Gemini, and enhanced observability in AI monitoring dashboards.
  • On the server side: For Node.js, Genkit now supports dynamic model lookup, so you can use the latest Gemini models without updating packages.

For more details, keep reading below.

Build client-side integrations directly in your apps

Firebase AI Logic client-side integrations enable a variety of rich AI-powered scenarios without needing a custom backend.

Thousands of apps in production today use these client-side integrations, such as:

  • Meal Planner: a meal planning and shopping list management app
  • Life: an AI-powered diary assistant
  • Waveful: a social media app for creators

We’re launching multiple new features to provide more flexibility with Gemini integrations and expanded platform support.

Access to the Gemini Developer API

You now have the choice to use the Gemini Developer API, in addition to the long-available Vertex AI Gemini API. Now, when you access Google’s generative AI models directly from your apps, you can choose the API provider that’s the best fit for your specific requirements:

  • The Gemini Developer API is the easiest way to get started with generative AI in your app. It has a no-cost tier to let you prototype quickly without friction and upgrade to the paid tier when you’re ready to launch to production. This functionality is released in Preview.
  • The Vertex AI Gemini API remains the go-to for apps needing enterprise-grade features. When availability, scalability, governance, and robust performance are critical for production apps, Vertex AI is built for those demands. Support for the Vertex AI Gemini API is already available in GA.

You can start with the Gemini Developer API and, if your needs grow, transition smoothly to the Vertex AI Gemini API. Switching between them is designed to be seamless, often requiring just a change in how you initialize the service.

A call to the Gemini model via the Gemini Developer API
import androidx.compose.runtime.Composable
import androidx.compose.runtime.LaunchedEffect
import androidx.compose.runtime.getValue
import androidx.compose.runtime.mutableStateOf
import androidx.compose.runtime.remember
import androidx.compose.runtime.setValue
import com.google.firebase.Firebase
import com.google.firebase.ai.ai

@Composable
fun MagicBackpackGenerator() {
  var output by remember { mutableStateOf("") }

  LaunchedEffect(Unit) {
    // The Gemini Developer API is the default backend
    val model = Firebase.ai.generativeModel("gemini-2.0-flash")
    // To use the Vertex AI Gemini API instead:
    // val model = Firebase.ai(backend = GenerativeBackend.vertexAI())
    //     .generativeModel("gemini-2.0-flash")

    val response = model.generateContent("Write a story about a magic backpack")
    output = response.text.orEmpty()
  }
  MagicBackpackUI(output)
}

More control and insights in AI monitoring dashboards

Understanding how your AI features are performing is crucial. The AI monitoring dashboards that were already available for Genkit now also support requests made through the Firebase AI Logic client SDKs. You’ll get valuable insights into usage patterns, performance metrics, and debugging information for your Gemini API calls - helping you create a better experience for your users.

Bringing generative AI to more platforms: Unity support!

Many of you in the game development community have asked for Unity support. We’re thrilled to announce the Preview of the Firebase AI Logic SDK for Unity so that you can integrate generative AI into games and Android XR experiences.

Single usage of the new SDK for Unity
using Firebase;
using Firebase.AI;
using UnityEngine;

var firebaseAI = FirebaseAI.GetInstance(FirebaseAI.Backend.VertexAI());

var model = firebaseAI.GetGenerativeModel(modelName: "gemini-2.0-flash");
var response = await model.GenerateContentAsync("Write a story about a magic backpack.");
UnityEngine.Debug.Log(response.Text ?? "No text in response.");

This SDK has feature parity with the other Firebase AI Logic SDKs, so you can generate text and images, and even build conversational AI experiences using the bidirectional streaming Gemini Live API.

Image generation capabilities from Gemini

Since earlier this year, you’ve been able to access Imagen models client-side for high-quality text-to-image generation. Now you also have client-side access to image generation and editing through the Gemini models. With Gemini’s multimodal capabilities, you can build more complex image interactions.

Generating and editing an image using Gemini model in a chat experience
import 'package:flutter/material.dart';
import 'package:firebase_ai/firebase_ai.dart';
...

final model = FirebaseAI.googleAI().generativeModel(
  model: 'gemini-2.0-flash-exp',
  // Ask the model to respond with both text and images
  generationConfig: GenerationConfig(
    responseModalities: [ResponseModalities.text, ResponseModalities.image],
  ),
);
final chat = model.startChat();

Future<Image> _sendImageRequest(String prompt) async {
  final response = await chat.sendMessage(Content.text(prompt));
  return _extractImageBytes(response);
}

final sunsetImage = await _sendImageRequest(
  "Generate a beautiful sunset image over mountains with orange and purple sky",
);
final sunriseImage = await _sendImageRequest(
  "Now transform that sunset into a beautiful sunrise with yellow and pink colors",
);

// Display images in your UI

Hybrid on-device / in-cloud model inference for Gemini Nano on Chrome

While accessing cloud models is essential for many tasks, running AI directly in the browser offers significant advantages, such as improved privacy and support for offline scenarios. To give you the flexibility of on-device AI using Gemini Nano for web apps running on Chrome, we’re launching experimental support for hybrid inference through Firebase AI Logic.

How does it work? When your web app is running on a desktop Chrome browser (capable of running Gemini Nano locally), the SDK will attempt to run inference directly on the device. If on-device inference isn’t possible, the SDK automatically and seamlessly falls back to a cloud-hosted Gemini model. To enable this “hybrid” approach, you need to set mode: "prefer_on_device" when you instantiate your model.
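The routing described above can be pictured with a small sketch in plain TypeScript. This is not the SDK’s actual implementation - the `onDevice` and `cloud` objects are hypothetical stand-ins for Gemini Nano in Chrome and a cloud-hosted Gemini model, and the real SDK handles all of this for you:

```typescript
// Sketch of hybrid inference routing, assuming hypothetical backend objects.
type InferenceMode = "prefer_on_device" | "only_on_device" | "only_in_cloud";

interface Backend {
  run: (prompt: string) => Promise<string>;
}

async function generate(
  prompt: string,
  mode: InferenceMode,
  onDevice: Backend & { available: () => boolean },
  cloud: Backend
): Promise<string> {
  switch (mode) {
    case "only_on_device":
      // No fallback: fail if the browser can't run the model locally
      if (!onDevice.available()) throw new Error("On-device model unavailable");
      return onDevice.run(prompt);
    case "only_in_cloud":
      return cloud.run(prompt);
    case "prefer_on_device":
      // Try local inference first; fall back to the cloud automatically
      return onDevice.available() ? onDevice.run(prompt) : cloud.run(prompt);
    default:
      throw new Error(`Unknown mode: ${mode}`);
  }
}
```

The point of `prefer_on_device` is that your app code calls one function and never has to branch on device capability itself.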

Build server-side AI experiences with Genkit

While Firebase AI Logic client SDKs excel at simplifying client-side model access, many AI applications require server-side flexibility and integration. For these scenarios, we offer Genkit, our powerful open-source framework for server-side development in Firebase AI Logic.

Genkit provides you with robust AI capabilities and dedicated tooling for JavaScript (Node.js), Python, and Go. In case you aren’t yet familiar with Genkit, here are some key features:

  • Broad model support: Connect seamlessly to models from Google, OpenAI, Anthropic, Ollama, and more through a unified, consistent interface.
  • Streamlined AI development: Implement powerful capabilities including structured output generation, automatic tool calling, RAG, human-in-the-loop workflows, and agentic systems.
  • Developer experience first: Leverage the Genkit CLI and Developer UI for rapid local testing, debugging with comprehensive tracing, and robust evaluation of AI features.
  • Privacy and control: Protect sensitive prompts and maintain full control by keeping logic and data handling on your server.
  • Deployment anywhere: Deploy to Cloud Functions for Firebase, Firebase App Hosting, Cloud Run, or any environment supporting the runtime of your choice.

You can also quickly prototype apps powered by Genkit using Firebase Studio - create any app that has an AI feature and switch to code view to see exactly how Genkit works in an app.

Expanded language support

Genkit is expanding its language support to include Go (Beta) and Python (Alpha). The beta release for Go brings improved API stability and new capabilities as it rapidly approaches feature parity with our production-ready JavaScript SDK. Genkit for Python is now available in Alpha, providing early access for experimentation and community feedback as we shape its future direction.

Genkit JS and Developer UI updates

With the rapid improvements and releases of new models, you can now more easily use the latest models and their features without needing to update any packages. This capability is already available for Google AI, Vertex AI, and Ollama plugins for Genkit JS (v1.8+), and it’s coming to Genkit’s other languages and most popular third-party model providers soon.

Selecting a specific Gemini model version in a string
import { genkit } from 'genkit';
import { googleAI } from '@genkit-ai/googleai';

const ai = genkit({ plugins: [googleAI()] });

const { text } = await ai.generate({
  prompt: 'Hello, Gemini!',
  // Reference any model version by its string name - no package update needed
  model: googleAI.model('gemini-2.5-flash-preview-04-17', {
    thinkingConfig: {
      thinkingBudget: 1024,
      includeThoughts: true,
    },
  }),
});

These plugin updates also enhance the Genkit Developer UI experience. The latest Gemini models and any locally installed open source models automatically appear in the model and prompt runners, making exploration and comparison effortless.

Additionally, Developer UI runners now surface model-specific parameters so you can easily explore powerful features like code execution and multimodal responses.

Share feedback!

Your feedback is incredibly important – it genuinely shapes how we evolve Firebase. Let us know what you think, what works well, and what you’d like to see next!

You can reach us on these channels:

  • Leave feedback on any of the channels listed on our feedback page, like Firebase’s UserVoice or our GitHub repos.
  • For Genkit, file issues for problems or new features on our GitHub, or visit our Discord to interact directly with our team and show us what you’re building.

Happy building!