# Jolt AI

APRIL 9, 2025

Scaling Code Understanding: How Jolt AI Leverages the Gemini API
================================================================

Yev Spektor, CEO
Vishal Dharmadhikari, Product Solutions Engineer

Developers working with sprawling, production-scale codebases know the pain. Understanding context, finding relevant files, and making changes can feel like navigating a labyrinth. [Jolt AI](https://www.usejolt.ai/) is tackling this head-on with a codegen and chat tool designed specifically for real-world codebases of 100K+ lines. Their secret weapon for delivering both speed and accuracy? The Gemini API, particularly Gemini 2.0 Flash.

Jolt AI's mission is to enable developers to instantly understand and contribute to any codebase. Many of today's tools struggle with large, existing codebases and require users to manually select context files, which is tedious and impractical. Jolt AI instead uses a novel semantic search that accurately and automatically identifies the relevant context files. It's a game-changer for feature development, bug fixing, onboarding, and more.

The challenge for Jolt AI was finding a model that could power their search pipeline with the right blend of speed, consistency, and code understanding. "We were looking to speed up 3 AI-backed steps in our code search pipeline," explains Yev Spektor, CEO of Jolt AI.
"Each step requires an understanding of various programming languages, frameworks, user code, and user intent."

Gemini 2.0 Flash: Delivering Speed and Enhanced Code Understanding
------------------------------------------------------------------

Enter Gemini 2.0 Flash. For Jolt AI, this model delivered the performance leap they were seeking. "After some prompt tuning, we were able to get more consistent, higher-quality output with Gemini 2.0 Flash than we had with a slower, larger model from another provider," Spektor notes.

How is Jolt AI using Gemini 2.0 Flash? It powers several crucial steps in their code search pipeline, providing the speed and accuracy needed to navigate and understand massive repositories. While the exact details are their "secret sauce," the impact is clear: Gemini 2.0 Flash enables Jolt AI to quickly surface the right information within complex codebases.

Switching to the Gemini API was remarkably efficient. "A couple hours to get the SDK implemented, and 2 days for prompt tuning and testing," reports Spektor. The team also used Google AI Studio for prompt ideation and tuning, streamlining the development process.

The Results: Faster, Higher Quality, and More Cost-Effective
------------------------------------------------------------

The move to Gemini 2.0 Flash has yielded impressive results for Jolt AI:

- **70-80% reduction in response times:** The AI-backed steps in their search pipeline are significantly faster.
- **Higher-quality, more consistent answers:** Users receive better results more than twice as fast.
- **80% lower costs:** The migrated AI workloads are now significantly more cost-effective.

"We are getting higher-quality answers to our users more than twice as quickly," Spektor emphasizes.
This combination of speed, quality, and cost savings underscores the power of Gemini 2.0 Flash for performance-critical applications.

Future Focus and Developer Insights
-----------------------------------

Jolt AI is actively expanding its IDE support with an upcoming JetBrains plugin and exploring API accessibility. Spektor is excited about the broader potential of Jolt AI across enterprises, from aiding developers and engineering leaders to supporting customer support teams and enabling automated AI code pipelines.

Reflecting on their journey with the Gemini API, Spektor offers this advice to fellow developers:

"Gemini 2.0 Flash is more capable than you think; don't sleep on it. It's very good at recall, much better than some slower, more expensive models." He also encourages developers to explore the latest models in the Gemini family: "The new generation, Gemini 2.0 Flash and Gemini 2.5 Pro, need to be looked at. Gemini 2.0 Flash has made our product over twice as fast while increasing the quality of responses. The new models are a major step function."

Jolt AI's success story highlights how the speed and capability of Gemini 2.0 Flash can significantly enhance AI-powered developer tools, especially those dealing with the complexities of large codebases.

Ready to build? Explore the [Gemini API documentation](https://ai.google.dev/gemini-api) and get started with [Google AI Studio](https://aistudio.google.com/) today.
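To make the "context file selection" idea concrete, here is a minimal sketch of the kind of prompt a code-search step might send to Gemini 2.0 Flash. This is an illustration only, not Jolt AI's actual pipeline (their details are not public): the function name, prompt wording, and file paths are all hypothetical assumptions.

```python
# Hypothetical sketch: ask a model to pick the context files relevant
# to a user's request. Prompt format and names are illustrative only.

def build_context_ranking_prompt(query: str, candidate_files: list[str]) -> str:
    """Format a prompt asking the model to select relevant files."""
    file_list = "\n".join(f"- {path}" for path in candidate_files)
    return (
        "You are helping locate the files relevant to a code change.\n"
        f"User request: {query}\n"
        "Candidate files:\n"
        f"{file_list}\n"
        "Reply with only the relevant paths, one per line."
    )

prompt = build_context_ranking_prompt(
    "Fix the retry logic in the HTTP client",
    ["src/http/client.py", "src/ui/theme.css", "src/http/retry.py"],
)

# The prompt would then be sent via the google-genai SDK, e.g.:
#   from google import genai
#   client = genai.Client()  # reads GEMINI_API_KEY from the environment
#   response = client.models.generate_content(
#       model="gemini-2.0-flash", contents=prompt)
print(prompt)
```

The model's reply (a short list of paths) can then be parsed line by line and used to load only the relevant files into context, which is what keeps a pipeline like this fast on 100K+ line repositories.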
Related case studies
--------------------

- [Optimal AI: Optimal AI Uses the Gemini API to Cut Code Review Times by 50%](/showcase/optimalai)
- [Langbase: High-throughput, low-cost AI agents with Gemini Flash on Langbase](/showcase/langbase)
- [CalCam: Fast, accurate nutritional analysis with CalCam and Gemini 2.0 Flash](/showcase/calcam)