MongoDB Blog

Announcements, updates, news, and more

What I Wish I’d Known Before Becoming a Solutions Architect

My journey to becoming a solutions architect (SA) has been anything but straightforward. After working as an engineer in telecom, receiving my PhD in computer science, and spending time in the energy efficiency and finance industries, I joined MongoDB to work at the intersection of AI and data solutions, guiding enterprises to success with MongoDB’s flexible, scalable database platform. It’s a role that requires both deep technical knowledge and business acumen, and while the nature of the SA role has evolved over time, one thing has remained constant: the need to understand people, their problems, and how the technology we use can solve them. As I reflect on my career journey, here are some key lessons I’ve learned about being an SA—and things I wish I’d known when I first started.

1. Influence comes from understanding

In my earlier roles, I thought that presenting clients with a perfect technical solution was the key to success. However, I quickly learned that being a successful solutions architect requires much more than technical excellence. Instead, the solutions that you offer need to be aligned with customers’ business needs. You also need to understand the underlying challenges driving the conversation.

In my role, I frequently work with clients facing complex data challenges, whether in real-time analytics, scaling operations, or AI applications. The first step is always understanding their business goals and technical pain points, which is more important than simply proposing the “best” solution. By stepping back and listening, you can not only better design a solution that addresses their needs but also gain their trust. I’ve found that the more I understand the context, the better I can guide clients through the complexities of data architecture—whether they're building on MongoDB Atlas, optimizing for performance, or leveraging our data products to drive innovation.

What I wish I’d known: Influence doesn’t come from showing how much you know—it comes from showing how much you understand. Listening is your most powerful design tool.

2. Building champions drives success

You can build the most scalable, secure, and elegant system in the world—but if it doesn’t align with stakeholder priorities, it will stall. In reality, architecture is rarely a purely technical exercise. Success depends on alignment with a diverse set of stakeholders, each with their own priorities. Whether you're collaborating with engineering teams, product managers, security specialists, or leadership, the key to success is to engage everyone early and often.

Stakeholders are not just passive recipients of your solution; they are active participants who co-own the outcome. In many cases, your design will be shaped by their feedback, and finding a champion within the organization can make all the difference. This champion—whether from the technical side or the business side—will help advocate for your solution internally, align the team, and overcome any resistance. This is particularly important for MongoDB SAs because we’re often addressing diverse needs, from data privacy concerns to performance scalability. Building a strong internal advocate ensures that your design gains the necessary momentum and credibility within the client’s organization.

What I wish I’d known: Success doesn’t come from being right—it comes from being aligned. Influence is earned through empathy, clarity, and trust.
As a solutions architect, your greatest value is not just in solving technical problems—it’s in helping diverse teams pull in the same direction. And nothing accelerates that more than having a strong, trusted internal champion on your side.

3. Winning deals requires teamwork

At MongoDB, we’re not just selling a product—we’re selling a solution. Winning deals involves close collaboration with Sales, Engineering, and Client Services. The most successful deals come when the entire team is aligned, from understanding the customer’s unique needs to crafting a solution that fits their long-term goals. You want to win? Here’s what that actually looks like:

- You prep with sales like it’s a final exam. Know the account history, know the politics, know what was promised six months ago that never landed. Be the person who connects past pain to future value.
- You do dry runs and anticipate the tough questions. Then you hand those questions to someone else on your team who can knock them out of the park. That’s trust.
- You turn strategy decks into conversations. A flashy diagram is great, but asking “Does this actually solve the headache you told us about last week?”—that’s where momentum starts.
- You loop in Professional Services early to pressure-test feasibility. You loop in CSMs to ask, “If we win this, what does success look like a year from now?”
- You help sales write the follow-up—not just with a thank-you, but with a crisp summary of what we heard, what we proposed, and what comes next. You make the path forward obvious.

One of the most valuable lessons I’ve learned is that winning a deal doesn’t rely solely on delivering a flawless demo. It’s the little things that matter—anticipating questions, making quick adjustments based on client feedback, and being agile in your communication. Being part of a unified team that works seamlessly together is the key to winning deals and ensuring client success.

What I wish I’d known: Winning a deal is a series of micro-decisions made together, not a solo act. Great architecture doesn’t close a deal—great alignment does. Your best asset isn’t the system you design—it’s the trust you build with your team and the confidence you project to your client that we’ve got this. Together.

4. You don’t have to know everything

When I first transitioned into this role, I felt the pressure to master every piece of the tech stack—especially at MongoDB, where our solutions touch on everything from cloud data platforms to AI, real-time data processing, and beyond. It was overwhelming to think that I needed to be an expert in all of it. But here’s the truth: As a solutions architect, your real value lies not in knowing every detail, but in understanding how the pieces fit together. You don’t need to be the deepest expert in each technology—what’s important is knowing how MongoDB’s platform integrates with client needs and when to bring in the right specialists. The role is about connecting the dots, asking the right questions, and collaborating across teams. The more you embrace curiosity and rely on your colleagues, the better your solutions will be.

What I wish I’d known: Mastery isn’t about knowing all the answers. It’s about knowing which questions to ask, and who to ask them to. Focus on principles, patterns, and clarity. Let go of the pressure to be the smartest person at the table—you’re there to make the table work better together. Curiosity is your compass, and collaboration is your fuel.
5. Architecture lives beyond the diagram

When most people think of a solutions architect, they picture designing systems, building technical architectures, and drawing elegant diagrams. While that’s part of the job, the true value lies in how well those designs are communicated, understood, and adopted by the client. Specifically, your architecture needs to work in real-world scenarios. You’re not just drawing idealized diagrams on a whiteboard—you’re helping clients translate those ideas into actionable steps. That means clear communication, whether through shared documentation, interactive walkthroughs, or concise explanations. Understanding your client’s needs and constraints is just as important as the technical design itself. And when it comes to sizing and scaling, MongoDB’s flexibility makes it easy to adapt and grow as the business evolves.

What I wish I’d known: Architecture doesn’t end at the diagram—it begins there. The real value is realized in how well the design is communicated, contextualized, sized, and adopted. Use whatever format helps people get it. And before you document the system, understand the system of people and infrastructure you’re building it for.

6. It’s not just about data

Data may be the foundation of my work as a solutions architect, but the real magic happens when you connect with people. Being a great architect means being a great communicator, listener, and facilitator. You’ll frequently find yourself between business leaders seeking faster insights and developers looking for the right data model. Translating these needs and building consensus is a big part of the role. The solutions we design are only valuable if they meet the diverse needs of the client’s teams. Whether it’s simplifying data operations, optimizing query performance, or enabling AI-driven insights, your ability to connect with stakeholders and address their unique challenges is key. Emotional intelligence, empathy, and collaboration are essential.

What I wish I’d known: Being a great architect means being a great communicator, listener, and facilitator. Emotional intelligence is your secret weapon. The more time you invest in understanding your stakeholders’ pain points, motivations, and language, the more successful your architecture will be—because people will actually use it.

7. The job is constantly evolving and so are you

The field of data architecture is rapidly evolving, and MongoDB is at the forefront of this change. From cloud migrations to AI-driven data products, the technology landscape is always shifting. As a solutions architect, you have to be adaptable and prepared for the next big change. At MongoDB, we work with cutting-edge technologies and constantly adapt to new trends, whether it’s AI, machine learning, or serverless computing. The key is to embrace change and continuously learn. The more you stay curious and open to new ideas, the more you’ll grow in your role and your ability to drive client success. As MongoDB continues to innovate, the learning curve is steep, but that’s what keeps the job exciting.

What I wish I’d known: You don’t “arrive” as a solutions architect—you evolve. And that evolution doesn’t stop. But everything you learn builds on itself. No effort is wasted. Every challenge adds depth. Every mistake adds clarity. The technologies may change, but the thinking compounds—and that’s what makes you valuable over the long run.
It’s not just a role–it’s a journey

Reflecting on my path to becoming a solutions architect at MongoDB, I realize that the journey is far from linear. From network protocols to financial systems and AI-driven data solutions, each role added a new layer to my experience. Becoming a solutions architect didn’t mean leaving behind my past—it meant integrating it into a broader perspective.

At MongoDB, every day brings new challenges and opportunities. Whether you’re designing a solution for a global enterprise or helping a startup scale their data operations, the core of the job remains the same: solving problems, connecting people, and helping others succeed. And as you grow in the role, you’ll find that the most powerful thing you bring to the table isn’t just your expertise—it’s your ability to keep learning, to show up with intention, and to simplify complexity for everyone around you.

To anyone stepping into this role at MongoDB: welcome. The journey is just beginning!

Join our talent community for the latest MongoDB culture and careers content.

June 5, 2025
Culture

Navigating the AI Revolution: The Importance of Adaptation

In 1999, Steve Ballmer gave a famous speech in which he said that the “key to industry transformation, the key to success is developers developers developers developers developers developers developers, developers developers developers developers developers developers developers! Yes!” A similar mantra applies when discussing how to succeed with AI: adaptation, adaptation, adaptation!

Artificial intelligence has already begun to transform how we work and live, and the changes AI is bringing to the world will only accelerate. Businesses rely ever more heavily on software to run and execute their strategies. So, to keep up with competitors, their processes and products must deliver what end-users increasingly expect: speed, ease of use, personalization—and, of course, AI features. Delivering all of these things (and doing so well) requires having the right tech stack and software foundation in place and then successfully executing. To better understand the challenges organizations adopting AI face, MongoDB and Capgemini recently worked with the research organization TDWI to assess the state of AI readiness across industries.

The road ahead

Based on a survey “representing a diverse mix of industries and company sizes,” TDWI’s “The State of Operational and Data Readiness for AI” contains many compelling findings. One I found particularly striking is the percentage of companies with AI apps in production: businesses largely recognize the potential AI holds, but only 11% of survey respondents indicated that they had AI applications in production. Still only 11%!

“We’re well past the days of exploring whether AI is relevant. Now, every organization sees the value. The question is no longer ‘if’ but ‘how fast and how effectively’ they can scale it.”
Mark Oost, VP, AI and Generative AI Group Offer Leader, Capgemini

There’s clearly work to be done; data readiness challenges highlighted in the report include managing diverse data types, ensuring accessibility, and providing sufficient compute power. Less than half (39%) of companies surveyed manage newer data formats, and only 41% feel they have enough compute. The report also shows how much AI has changed the very definition of software, and how software is developed and managed. Specifically, AI applications continuously adapt, and they learn and respond to end-user behavior in real time; they can also autonomously make decisions and execute tasks. All of which depends on having a solid, flexible software foundation. Because the agility and adaptability of software are intrinsically linked to the data infrastructure upon which it's built, rigid legacy systems cannot keep pace with the demands of AI-driven change. So modern database solutions (like, ahem, MongoDB)—built with change in mind—are an essential part of a successful AI technology stack.

Keeping up with change

The tech stack can be said to comprise three layers: at the “top,” the interface or user experience layer; then the business logic layer; and a data foundation at the bottom. With AI, the same layers are there, but they’ve evolved: Unlike traditional software applications, AI applications are dynamic. Because AI-enriched software can reason and learn, the demands placed on the stack have changed. For example, AI-powered experiences include natural language interfaces, augmented reality, and those that anticipate user needs by learning from other interactions (and from data).
In contrast, traditional software is largely static: it requires inputs or events to execute tasks, and its logic is limited by pre-defined rules. A database underpinning AI software must, therefore, be flexible and adaptable, and able to handle all types of data; it must enable high-quality data retrieval; it must respond instantly to new information; and it has to deliver the core requirements of all data solutions: security, resilience, scalability, and performance. So, to take action and generate trustworthy, reliable responses, AI-powered software needs access to up-to-date, context-rich data. Without the right data foundation in place, even the most robust AI strategy will fail.

Figure 1. The frequency of change across eras of technology.

Keeping up with AI can be head-spinning, both because of the many players in the space (the number of AI startups has jumped sharply since 2022, when ChatGPT was first released 1 ), and because of the accelerating pace of AI capabilities. Organizations that want to stay ahead must evolve faster than ever. As the figure above dramatically illustrates, this sort of adaptability is essential for survival.

Execution, execution, execution

But AI success requires more than just the right technology: expert execution is critical. Put another way, the difference between success and failure when adapting to any paradigm shift isn’t just having the right tools; it’s knowing how to wield those tools. So, while others experiment, MongoDB has been delivering real-world successes, helping organizations modernize their architectures for the AI era, and building AI applications with speed and confidence.

For example, MongoDB teamed up with the Swiss bank Lombard Odier to modernize its banking tech systems. We worked with the bank to create customizable generative AI tooling, including scripts and prompts tailored for the bank’s unique tech stack, which accelerated its modernization by automating integration testing and code generation for seamless deployment. And, after Victoria’s Secret transformed its database architecture with MongoDB Atlas, the company used MongoDB Atlas Vector Search to power an AI-powered visual search system that makes targeted recommendations and helps customers find products.

Another way MongoDB helps organizations succeed with AI is by offering access to both technology partners and professional services expertise. For example, MongoDB has integrations with companies across the AI landscape—including leading tech companies (AWS, Google Cloud, Microsoft), system integrators (Capgemini), and innovators like Anthropic, LangChain, and Together AI.

Adapt (or else)

In the AI era, what organizations need to do is abundantly clear: modernize and adapt, or risk being left behind. Just look at the history of smartphones, which have had an outsized impact on business and communication. For example, in its Q4 2007 report (which came out a few months after the first iPhone’s release), Apple reported earnings of $6.22 billion, of which iPhone sales comprised less than 2% 2 ; in Q1 2025, the company reported earnings of $124.3 billion, of which 56% was iPhone sales. 3 The mobile application market is now estimated to be in the hundreds of billions of dollars, and there are more smartphones than there are people in the world. 4 The rise of smartphones has also led to a huge increase in the number of people globally who use the internet. 5 However, saying “you need to adapt!” is much easier said than done.
TDWI’s research, therefore, is both important and useful—it offers companies a roadmap for the future, and helps them answer their most pressing questions as they confront the rise of AI. Click here to read the full TDWI report. To learn more about how MongoDB can help you create transformative, AI-powered experiences, check out MongoDB for Artificial Intelligence.

P.S. ICYMI, here’s Steve Ballmer’s famous “developers!” speech.

June 4, 2025
Artificial Intelligence

Luna AI and MongoDB Throw Lifeline to Product Teams

Product and engineering leaders face a constant battle: making crucial real-time decisions amidst a sea of fragmented, reactive, and disconnected progress data. The old ways—chasing updates, endlessly pinging teams on Slack, digging through Jira, and enduring endless status meetings—simply aren't cutting it. This struggle leaves product and engineering leads wasting precious hours on manual updates, while critical risks silently slip through the cracks. This crucial challenge is precisely what Luna AI, powered by its robust partnership with MongoDB, is designed to overcome.

Introducing Luna AI: Your intelligent program manager

Luna AI was founded to tackle this exact problem, empowering product and engineering leaders with the visibility and context they need, without burying their PMs in busy work. Imagine having an AI program manager dedicated to giving you clear insights into goals, roadmap ROI, initiative progress, and potential risks throughout the entire product lifecycle. Luna AI makes this a reality by intelligently summarizing data from your existing tools like Jira and Slack. It can even automatically generate launch and objective and key result (OKR) status updates, create your roadmap, and analyze your Jira sprints, drastically reducing the need for manual busywork.

From concept to command center: The evolution of Luna AI

Luna AI’s Co-founder, Paul Debahy, a seasoned product leader with experience at Google, personally felt the pain of fragmented data during his time as a CPO. Inspired by Google's internal LaunchCal, which provided visibility into upcoming launches, Luna AI initially began as a launch management tool. However, a key realization quickly emerged: Customers primarily needed help "managing up." This insight led to a pivotal shift, focusing Luna AI on vertical management—communicating status, linking execution to strategy, and empowering leaders, especially product leaders, to drive decisions.

Today, Luna AI has evolved into a sophisticated AI-driven insights platform. Deep Jira integration and advanced LLM modules have transformed it from a simple tracker into a strategic visibility layer. Luna AI now provides essential capabilities like OKR tracking, risk detection, resource and cost analysis, and smart status summaries. Luna AI believes product leadership is increasingly strategic, aiming to be the system of record for outcomes, not just tasks. Its mission: to be everyone’s AI program manager, delivering critical strategy and execution insights for smarter decision-making.

The power under the hood: Building with MongoDB Atlas

Luna AI’s robust technology stack includes Node.js, Angular, and the latest AI/LLM models. Its infrastructure leverages Google Cloud and, crucially, MongoDB Atlas as its primary database. When selecting a data platform, Luna AI prioritized flexibility, rapid iteration, scalability, and security. Given the dynamic, semi-structured data ingested from diverse sources like Jira, Slack, and even meeting notes, a platform that could handle this complexity was essential. Key requirements included seamless tenant separation, robust encryption, and minimal operational overhead. MongoDB proved to be the perfect fit for several reasons. The developer-friendly experience was a major factor, as was the flexible schema of its document database, which naturally accommodated Luna AI’s complex and evolving data model.
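As an illustration of that kind of flexibility (the collection and field names below are assumptions for the sketch, not Luna AI’s actual schema), differently shaped records from different sources can coexist in a single collection:

```javascript
// Two differently shaped records in one collection: no schema migration is
// needed when a new source or field is introduced.
db.workItems.insertMany([
  {
    source: "jira",
    issueKey: "LUNA-142",
    status: "In Progress",
    sprint: 7,
    aiSummary: "Refactor of the OKR sync job; on track for the sprint goal."
  },
  {
    source: "slack",
    channel: "#launch-updates",
    capturedAt: new Date(),
    insight: "Two stakeholders flagged a risk on the Q3 reporting initiative."
  }
]);
```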
This flexibility was vital for tracking diverse information such as Jira issues, OKRs, AI summaries, and Slack insights, enabling quick adaptation and iteration. MongoDB also offered effortless support for the startup’s multi-tenant architecture.

Scaling with MongoDB Atlas has been smooth and fast, according to Luna AI. Atlas effortlessly scaled as the company added features and onboarded workspaces ranging from startups to enterprises. The monitoring dashboard has been invaluable, offering insights that helped identify performance bottlenecks early. In fact, index suggestions from the dashboard directly led to significant improvements to speed. Debahy even remarked, "Atlas’s built-in insights make it feel like we have a DB ops engineer on the team."

Luna AI relies heavily on Atlas's global clusters and automated scaling. The monitoring and alerting features provide crucial peace of mind, especially during launches or data-intensive tasks like Jira AI epic and sprint summarization. The monitoring dashboard was instrumental in resolving high-latency collections by recommending the right indexes. Furthermore, in-house backups are simple, fast, and reliable, with painless restores offering peace of mind. Migrating from serverless to dedicated instances was seamless and downtime-free. Dedicated multi-tenant support allows for unlimited, isolated databases per customer. Auto-scaling is plug-and-play, with Atlas handling scaling across all environments. Security features like data-at-rest encryption and easy access restriction management per environment are also vital benefits. The support team has consistently been quick, responsive, and proactive.

A game-changer for startups: The MongoDB for Startups program

Operating on a tight budget as a bootstrapped and angel-funded startup, Luna AI found the MongoDB for Startups program to be a true game changer. It stands out as one of the most founder-friendly programs the company has encountered. The Atlas credits completely covered the database costs, empowering the team to test, experiment, and even make mistakes without financial pressure. This freedom allowed them to scale without worrying about database expenses or meticulously tracking every compute and resource expenditure. Access to technical advisors and support was equally crucial, helping Luna AI swiftly resolve issues ranging from load management to architectural decisions and aiding in designing a robust data model from the outset. The program also opened doors to a valuable startup community, fostering connections and feedback.

Luna AI’s vision: The future of product leadership

Looking ahead, Luna AI is focused on two key areas:
- Building a smarter, more contextual insights layer for strategy and execution.
- Creating a stakeholder visibility layer that requires no busy work from product managers.

Upcoming improvements include predictive risk alerts spanning Jira, Slack, and meeting notes. They are also developing ROI-based roadmap planning and prioritization, smart AI executive status updates, deeper OKR traceability, and ROI-driven tradeoff analysis. Luna AI firmly believes that the role of product leadership is becoming increasingly strategic. With the support of programs like MongoDB for Startups, they are excited to build a future where Luna AI is the definitive system of record for outcomes.

Ready to empower your product team? Discover how Luna AI helps product teams thrive. Join the MongoDB for Startups program to start building faster and scaling further with MongoDB!

June 3, 2025
Artificial Intelligence

Mongoose Now Natively Supports QE and CSFLE

Mongoose 8.15.0 has been released, adding support for the industry-leading encryption solutions available from MongoDB. With this update, it’s simpler than ever to create documents leveraging MongoDB Queryable Encryption (QE) and Client-Side Field Level Encryption (CSFLE), keeping your data secure when it is in use. Read on to learn more about approaches to encrypting your data when building with MongoDB and Mongoose.

What is Mongoose?

Mongoose is a library that enables elegant object modeling for Node.js applications working with MongoDB. Similar to an Object-Relational Mapper (ORM), the Mongoose Object Document Mapper (ODM) simplifies programmatic data interaction through schemas and models. It allows developers to define data structures with validation and provides a rich API for CRUD operations, abstracting away many of the complexities of the underlying MongoDB driver. This integration enhances productivity by enabling developers to work with JavaScript objects instead of raw database queries, making it easier to manage data relationships and enforce data integrity.

What are QE and CSFLE?

Securing sensitive data is paramount. It must be protected at every stage—whether in transit, at rest, or in use. However, implementing in-use encryption can be complex. MongoDB offers two approaches to make it easier: Queryable Encryption (QE) and Client-Side Field Level Encryption (CSFLE).

QE allows customers to encrypt sensitive application data, store it securely in an encrypted state in the MongoDB database, and perform equality and range queries directly on the encrypted data. An industry-first innovation, QE eliminates the need for costly custom encryption solutions, complex third-party tools, or specialized cryptography knowledge. It employs a unique structured encryption schema, developed by the MongoDB Cryptography Research Group, that simplifies the encryption of sensitive data while enabling equality and range queries to be performed directly on data without having to decrypt it. The data remains encrypted at all stages, with decryption occurring only on the client side. This architecture supports strict access controls, where MongoDB and even an organization’s own database administrators (DBAs) don’t have access to sensitive data. This design enhances security by keeping the server unaware of the data it processes, further mitigating the risk of exposure and minimizing the potential for unauthorized access.

Adding QE/CSFLE auto-encryption support for Mongoose

The primary goal of the Mongoose integration with QE and CSFLE is to provide idiomatic support for automatic encryption, simplifying the process of creating encrypted models. With native support for QE and CSFLE, Mongoose allows developers to define encryption options directly within their schemas without the need for separate configurations. This first-class API enables developers to work within Mongoose without dropping down to the driver level, minimizing the need for significant code changes when adopting QE and CSFLE.

Mongoose streamlines configuration by automatically generating the encrypted field map. This ensures that encrypted fields align perfectly with the schema and simplifies the three-step process typically associated with encryption setup, shown below. Mongoose also keeps the schema and encrypted fields in sync, reducing the risk of mismatches.
Developers can easily declare fields with the encrypt property and configure encryption settings, using all field types and encryption schemes supported by QE and CSFLE. Additionally, users can manage their own encryption keys, enhancing control over their encryption processes. This comprehensive approach empowers developers to implement robust encryption effortlessly while maintaining operational efficiency.

Pre-integration experience

```javascript
const kmsProviders = { local: { key: Buffer.alloc(96) } };
const keyVaultNamespace = 'data.keys';
const extraOptions = {};
const encryptedDatabaseName = 'encrypted';
const uri = '<mongodb URI>';

const encryptedFieldsMap = {
  'encrypted.patent': {
    encryptedFields: EJSON.parse('<EJSON string containing encrypted fields, either output from manual creation or createEncryptedCollection>', { relaxed: false }),
  }
};

const autoEncryptionOptions = {
  keyVaultNamespace,
  kmsProviders,
  extraOptions,
  encryptedFieldsMap
};

const schema = new Schema({
  patientName: String,
  patientId: Number,
  field: String,
  patientRecord: {
    ssn: String,
    billing: String
  }
}, { collection: 'patent' });

const connection = await createConnection(uri, {
  dbName: encryptedDatabaseName,
  autoEncryption: autoEncryptionOptions,
  autoCreate: false, // If using createEncryptedCollection, this is false. If manually creating the keyIds for each field, this is true.
}).asPromise();

const PatentModel = connection.model('Patent', schema);
const result = await PatentModel.find({}).exec();
console.log(result);
```

This example demonstrates the manual configuration required to set up a Mongoose model for QE and CSFLE, requiring three different steps to:
- Define an encryptedFieldsMap to specify which fields to encrypt
- Configure autoEncryptionOptions with key management settings
- Create a Mongoose connection that incorporates these options

This process can be cumbersome, as it requires explicit setup for encryption.

New experience with Mongoose 8.15.0

```javascript
const schema = new Schema({
  patientName: String,
  patientId: Number,
  field: String,
  patientRecord: {
    ssn: {
      type: String,
      encrypt: { keyId: '<uuid string of key id>', queries: 'equality' }
    },
    billing: {
      type: String,
      encrypt: { keyId: '<uuid string of key id>', queries: 'equality' }
    },
  }
}, { encryptionType: 'queryableEncryption', collection: 'patent' });

const connection = mongoose.createConnection();
const PatentModel = connection.model('Patent', schema);

const kmsProviders = { local: { key: Buffer.alloc(96) } };
const keyVaultNamespace = 'data.keys';
const uri = '<mongodb URI>';

const autoEncryptionOptions = {
  keyVaultNamespace,
  kmsProviders,
  extraOptions: {}
};

await connection.openUri(uri, { autoEncryption: autoEncryptionOptions });

const result = await PatentModel.find({}).exec();
console.log(result);
```

This "after" example showcases how the integration of QE and CSFLE into Mongoose simplifies the encryption setup process. Instead of the previous three-step approach, developers can now define encryption directly within the schema. In this implementation, fields like ssn and billing are marked with an encrypt property, allowing for straightforward configuration of encryption settings, including the keyId and query types. The connection to the database is established with a single call that includes the necessary auto-encryption options, eliminating the need for a separate encrypted fields map and complex configurations.
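Once a model is configured this way, encrypted fields require no special handling at query time. Here’s a minimal sketch (the query value is illustrative) of an equality query against one of the encrypted fields defined above:

```javascript
// Equality query on a QE-encrypted field: the driver encrypts the query value
// client-side, the server matches it without ever decrypting the data, and the
// returned document is decrypted transparently on the client.
const patient = await PatentModel.findOne({
  'patientRecord.ssn': '123-45-6789'
}).exec();

console.log(patient.patientRecord.ssn); // plaintext, visible only client-side
```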
This streamlined approach enables developers to work natively within Mongoose, enhancing usability and reducing setup complexity while maintaining robust encryption capabilities.

Learn more about QE/CSFLE for Mongoose

We’re excited for you to build secure applications with QE/CSFLE for Mongoose. Here are some resources to get started with:
- Learn how to set up and use Mongoose with MongoDB through our tutorial.
- Check out our documentation to learn when to choose QE vs. CSFLE.
- Read the Mongoose CSFLE documentation.

June 2, 2025
Updates

MongoDB Atlas Stream Processing Now Supports Session Windows!

We're excited to announce that MongoDB Atlas Stream Processing now supports Session Windows! This powerful feature lets you build streaming pipelines that analyze and process related events that occur together over time, grouping them into meaningful sessions based on periods of activity. For instance, you can now track all of a customer’s interactions during a shopping journey, treating it as a single session that ends when they’re inactive for a specified period of time. Whether you're analyzing user behavior, monitoring IoT device activities, or tracking system operations, Atlas Stream Processing’s Session Windows make it easy to transform your continuous data streams into actionable insights and make the data available wherever you need it.

What are Session Windows?

Session Windows are a powerful way to analyze naturally occurring activity patterns in your data by grouping related events that happen close together in time. Think of how users interact with websites or apps—they tend to be active for a period, then take breaks, then return for another burst of activity. Session Windows automatically detect these patterns by identifying gaps in activity, allowing you to perform aggregations and transformations on these meaningful periods of activity.

As an example, imagine you're an e-commerce company looking to better understand what your customers do during each browsing session to help improve conversions. With Atlas Stream Processing, you can build a pipeline that:
- Collects all the product pages a user visits during their browsing session
- Records the name, category, and price of each item viewed, plus whether items were added to a cart
- Automatically considers a session complete after 15 minutes of user inactivity
- Sends the session data to cloud storage to improve recommendation engines

With this pipeline, you provide your recommendation engine with ready-to-use data about your user sessions to improve your recommendations in real time. Unlike fixed time-based windows (tumbling or hopping), Session Windows adapt dynamically to each user’s behavior patterns.

How does it work?

Session Windows work similarly to the hopping and tumbling windows Atlas Stream Processing already supports, but with a critical difference: while those windows open and close on fixed time intervals, Session Windows dynamically adjust based on activity patterns. To implement a Session Window, you specify three required components:
- partitionBy: The field or fields that group your records into separate sessions. For instance, if tracking user sessions, use unique user IDs to ensure each user’s activity is processed separately.
- gap: The period of inactivity that signals the end of a session. In the example above, we consider a user's session complete when they go 15 minutes without clicking on a link in the website or app.
- pipeline: The operations you want to perform on each session's data. This may include counting the number of pages a user visited, recording the page they spent the most time on, or noting which pages were visited multiple times.

You then add this Session Window stage to your streaming aggregation pipeline, and Atlas Stream Processing continuously processes your incoming data, groups events into sessions based on your configuration, and applies your specified transformations. The results flow to your designated output destinations in real time, ready for analysis or to trigger automated actions.
A quick example

Let’s say you want to build the pipeline that we mentioned above to track user sessions, notify users if they have items in their cart but haven’t checked out, and move their data downstream for analytics. You might do something like this:

1. Configure your source and sink stages

This is where you define the connections to any MongoDB or external location you intend to receive data from (source) or send data to (sink).

```javascript
// Set your source to be change streams from the pageViews, cartItems, and orderedItems collections
let sourceCollections = {
  $source: {
    connectionName: "ecommerce",
    "db": "customerActivity",
    "coll": ["pageViews", "cartItems", "orderedItems"]
  }
};

// Set your destination (sink) to be the userSessions topic your recommendation engine consumes data from
let emitToRecommendationEngine = {
  $emit: {
    connectionName: "recommendationEngine",
    topic: "userSessions",
  }
};

// Create a connection to your sendCheckoutReminder Lambda function that sends a reminder
// to users to check out if they have items in their cart when the session ends
let sendReminderIfNeeded = {
  $externalFunction: {
    "connectionName": "operations",
    "as": "sendCheckoutReminder",
    "functionName": "arn:aws:lambda:us-east-1:123412341234:function:sendCheckoutReminder"
  }
};
```

2. Define your Session Window logic

This is where you specify how data will be transformed in your stream processing pipeline.

```javascript
// Step 1. Create a stage that pulls only the fields you care about from the change logs.
// Every document will have a userId and itemId, as all collections share those fields.
// Fields not present will be null.
let extractRelevantFields = {
  $project: {
    userId: "$fullDocument.userId",
    itemId: "$fullDocument.itemId",
    category: "$fullDocument.category",
    cost: "$fullDocument.cost",
    viewedAt: "$fullDocument.viewedAt",
    addedToCartAt: "$fullDocument.addedToCartAt",
    purchasedAt: "$fullDocument.purchasedAt"
  }
};

// Step 2. Setting _id to $userId groups all the documents by userId.
// Fields not present in any records will be null.
let groupSessionData = {
  $group: {
    _id: "$userId",
    itemIds: { $addToSet: "$itemId" },
    categories: { $addToSet: "$category" },
    costs: { $addToSet: "$cost" },
    viewedAt: { $addToSet: "$viewedAt" },
    addedToCartAt: { $addToSet: "$addedToCartAt" },
    purchasedAt: { $addToSet: "$purchasedAt" }
  }
};

// Step 3. Create a session window that closes after 15 minutes of inactivity. The pipeline
// specifies all operations to be performed on documents sharing the same userId within the window.
let createSession = {
  $sessionWindow: {
    partitionBy: "$_id",
    gap: { unit: "minute", size: 15 },
    pipeline: [ groupSessionData ]
  }
};
```

3. Create and start your stream processor

The last step is simple: create and start your stream processor.

```javascript
// Create your pipeline array. The session data will be sent to the external function defined in
// sendReminderIfNeeded, and then it will be emitted to the recommendation engine Kafka topic.
finalPipeline = [
  sourceCollections,
  extractRelevantFields,
  createSession,
  sendReminderIfNeeded,
  emitToRecommendationEngine
];

// Create your stream processor
sp.createStreamProcessor("userSessions", finalPipeline);

// Start your stream processor
sp.userSessions.start();
```

And that's it! Your stream processor now runs continuously in the background with no additional management required.
As users navigate your e-commerce website, add items to their carts, and make purchases, Atlas Stream Processing automatically:
- Tracks each user's activity in real time
- Groups events into meaningful sessions based on natural usage patterns
- Closes sessions after your specified period of inactivity (15 minutes)
- Triggers reminders for users with abandoned carts
- Delivers comprehensive session data to your analytics systems

All of this happens automatically at scale without requiring ongoing maintenance or manual intervention. Session Windows provide powerful, activity-based data processing that adapts to users' behavioral patterns rather than forcing their actions into arbitrary time buckets.

Ready to get started? Log in or sign up for Atlas today to create stream processors. You can learn more about Session Windows or get started using our tutorial.

May 29, 2025
Updates

New Data Management Experience in the Atlas UI

For the modern developer, each day is a balancing act. Even a small workflow hiccup can throw off momentum, making seamless data management not just a convenience, but a necessity for staying productive. At MongoDB, our mission is to empower developers to innovate without friction, providing the tools they need, right when they need them. That's why we've enhanced Data Explorer—a data interaction tool in the MongoDB Atlas UI that helps developers stay in the zone, innovate faster, and further streamline their workflows.

Data Explorer: Improved data exploration and management in the MongoDB Atlas UI

MongoDB provides a powerful graphical user interface (GUI) called MongoDB Compass, trusted by over a million users a month throughout the software development lifecycle. They rely on Compass to build queries and aggregations during development, to refine their schemas during design, to manage data for local testing environments during testing, and to discover patterns and abnormalities in data to inform maintenance and optimization. For users who aren’t comfortable with shell syntax or who prefer working with a GUI, Compass makes it easy to visually interact with data stored in MongoDB. However, many developers prefer to work in the Atlas UI, so we're bringing the Compass experience to them.

The new Data Explorer experience brings the familiarity and power of MongoDB Compass to the MongoDB Atlas UI, eliminating the need for developers to toggle between desktop and web interfaces to explore and interact with data. Our goal is to provide seamless data exploration that meets developers where they are in their workflows and caters to all experience levels with MongoDB and Atlas. This new Data Explorer enables developers to view and understand their data, as well as test and optimize queries directly within the browser, streamlining application development and enriching data management processes. It’s intuitive and super easy to find, too.

Navigating Data Explorer in the MongoDB Atlas UI

The existing Data Explorer experience sits within the 'Collections' tab of the Atlas UI. For easier accessibility, the updated interface will have its own tab called 'Data Explorer,' located under the “Data” navigational header in Atlas' revamped side navigation. Upon opening the “Data Explorer” tab, users are met with the same interface as MongoDB Compass. This brings the intuitive layout and powerful capabilities of Compass into the web UI, providing a guided experience that enhances data exploration and optimization tasks, while also creating a familiar environment for our developers who already know and love Compass. To get started, users can create a new cluster or connect to an existing one by clicking on the “Connect” box next to their chosen cluster.

Figure 1. Getting started with Data Explorer

With the updated interface, developers can effortlessly interact with data across all Atlas clusters in their projects within a single view, instead of only being able to interact with one cluster at a time. This consolidated view allows developers to focus their tasks directly in the browser, encouraging a streamlined workflow and higher productivity during development.

Take advantage of a richer feature set with Data Explorer

With the updated Data Explorer experience, you can now leverage the following features:

- Query with natural language: Create both queries and aggregations using natural language to accelerate your productivity.
The intelligent query bar in Data Explorer allows you to ask plain text questions about your data, and teaches you the proper syntax for complex queries and aggregations, creating an initial query or aggregation pipeline that you can modify to fit your requirements.

Figure 2. Using the natural language query bar

- Use advanced document viewing capabilities: Access data across all clusters in your Atlas project in the same browser window. View more documents per page and expand all nested fields across many documents to maximize the amount of data you’re able to view at once. Choose between the list, table, or JSON views to mirror how you work best.

Figure 3. Viewing documents through the advanced document viewing capabilities

- Understand query performance: Visualize output from the Explain command for your queries and aggregations, gaining deeper insights into performance. Use performance insights to optimize your schema design and improve application performance.

Figure 4. Visualizing outputs through the Explain Plan command

- Perform bulk operations: Easily run bulk updates and deletes to migrate or clean your data. Preview how updates will impact documents to ensure accuracy before execution, and get an accurate picture of how many documents will be influenced by the bulk operation.

Figure 5. Running bulk updates and deletes

- Analyze your schemas and define schema validation rules: Utilize built-in schema analysis tools to understand the current shape of your data. The new Schema tab simplifies identifying anomalies and optimizing your data model. Leverage the new Validation tab to ensure data integrity by generating and enforcing JSON Schema validation rules (a minimal example of such a rule appears at the end of this post).

Figure 6. Analyzing schema and schema validation rules

As the GIFs above show, the updated Data Explorer in MongoDB Atlas brings powerful and intuitive data exploration tools directly into your browser, streamlining workflows and boosting productivity. With these enhancements, developers can focus on what they do best—building innovative applications—while we handle the complexity of data management.

We’re excited for you to start working with Data Explorer in the Atlas UI. Here’s how to get started:
- Turn on the new experience in Atlas Project Settings or from the previous Data Explorer interface. Try it out now.
- Check out our documentation to read more about new features available in Data Explorer.
- Hear more about the changes in this short video.
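As promised above, here’s a minimal sketch of the kind of JSON Schema validation rule the Validation tab can generate and enforce. The collection and field names are illustrative, and the rule is expressed in mongosh syntax:

```javascript
// Illustrative rule: every document in "users" must have a string email;
// age, when present, must be a non-negative integer.
db.runCommand({
  collMod: "users",
  validator: {
    $jsonSchema: {
      bsonType: "object",
      required: ["email"],
      properties: {
        email: { bsonType: "string" },
        age: { bsonType: "int", minimum: 0 }
      }
    }
  },
  validationLevel: "moderate" // don't reject updates to pre-existing invalid documents
});
```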

May 28, 2025
Home

Strengthening Security: Bug Bounty and GitHub Secret Scanning

Today, MongoDB is announcing two important updates that further strengthen its security posture:
- The free tier of MongoDB Atlas is now included in the company’s public bug bounty program.
- MongoDB has joined the GitHub secret scanning program.

These updates empower MongoDB to identify and remediate security risks earlier in the development lifecycle. MongoDB has long been committed to proactively tackling security challenges, so the decision to open MongoDB Atlas to responsible testing by the security researcher community was an easy one. Its collaboration with GitHub further strengthens this approach by enabling the detection and validation of exposed MongoDB-specific credentials. Together, these efforts help protect customer data and support secure application development at scale.

Expanding MongoDB’s bug bounty program to include MongoDB Atlas

The free tier of MongoDB Atlas is now a part of the company’s public bug bounty program. This fully managed, multi-cloud database powers mission-critical workloads for thousands of customers, ranging from large enterprises to small startups and individual developers. MongoDB’s bug bounty program has already paid out over $140,000 in bounties to security researchers and has resolved over 130 bug reports. Integrating Atlas into the bug bounty program is the next step in hardening the database’s security posture, enabling earlier discovery and remediation of potential risks.

The cyberthreat landscape is evolving faster than ever. Where organizations once faced a narrower set of risks, today’s threats are more diverse and sophisticated. These include emerging risks like generative AI misuse and supply chain compromises, alongside persistent threats such as phishing, software vulnerabilities, and insider attacks. One proven way to stay ahead of these threats is by working with the security research community through bug bounty programs. Researchers can help identify and report vulnerabilities early, enabling organizations to fix issues before attackers exploit them. Security researchers are expanding their expertise to address new attack vectors, according to HackerOne. In fact, 56% now specialize in API vulnerabilities and 10% focus on AI and large language models. 1

With MongoDB Atlas now included in the company’s bug bounty program, customers can expect:
- Continuous, real-world testing by a diverse security research community.
- Systems designed for faster detection of vulnerabilities than traditional penetration testing.
- Stronger confidence in MongoDB’s ability to safeguard sensitive data.

By bringing MongoDB Atlas into its bug bounty program, MongoDB is doubling down on transparency, collaboration, and proactive defense. This is a critical step in reinforcing customer trust and ensuring MongoDB Atlas remains secure as threats evolve.

Partnering with GitHub to detect credential leaks faster

Building on its commitment to proactive threat detection, MongoDB has also joined GitHub’s secret scanning partner program to better protect customers from credential exposure. This program enables service providers like MongoDB to include their custom secret token formats in GitHub’s secret scanning functionality. This capability actively scans repositories to detect accidental commits of secrets such as API keys, credentials, and other sensitive data. Through this partnership, when GitHub detects a match of MongoDB Atlas–specific secrets, it will notify MongoDB. Then MongoDB can securely determine if the credential is active.
As a result, MongoDB can rapidly identify potential security risks and notify customers.

Stolen credentials remain one of the most common and damaging threats in cybersecurity. Stolen credentials have been involved in 31% of data breaches in the past decade, according to a Verizon report. Credential stuffing, where bad actors use stolen credentials to access unrelated services, is the most common attack type for web applications. 2 These breaches are particularly harmful, taking an average of 292 days to detect and contain. 3

By participating in GitHub’s secret scanning program, MongoDB helps ensure that MongoDB Atlas customers benefit from:
- Faster detection and remediation of exposed credentials.
- Reduced risk of unauthorized access or data leaks.
- More secure, developer-friendly workflows by default.

Staying ahead of evolving security threats

MongoDB is continuously evolving to help developers and enterprises stay ahead of security risks. By expanding its public bug bounty program to include MongoDB Atlas and by partnering with GitHub to detect exposed credentials in real time, MongoDB is deepening its investment in proactive, community-driven security. These updates reflect a broader commitment to helping developers and organizations build secure applications, detect risks early, and respond quickly to new and emerging threats.

Learn more about these programs:
- MongoDB’s bug bounty program on HackerOne
- GitHub’s secret scanning partner program

1 Hacker-Powered Security Report, 8th Edition, HackerOne
2 Verizon Data Breach Investigations Report, 2024
3 IBM Cost of a Data Breach Report, 2024

May 27, 2025
Applied

Secure Your RAG Workflows with MongoDB Atlas + Enkrypt AI

Generative AI is no longer a futuristic concept—it's already transforming industries from healthcare and finance to software development and media. According to a 2023 McKinsey report, generative AI could add up to $4.4 trillion annually to the global economy across a wide range of use cases. At the core of this transformation are vector databases, which act as the "memory" that powers retrieval-augmented generation (RAG), semantic search, intelligent chatbots, and more.

But as AI systems become increasingly embedded in decision-making processes, the integrity and security of the data they rely on is of paramount importance—and is under growing scrutiny. A single malicious document or corrupted codebase can introduce misinformation, cause financial losses, or even trigger reputational crises. Because one malicious input can escalate into a multi-million-dollar security nightmare, securing the memory layer of AI applications isn't just a best practice—it's a necessity. Together, MongoDB and Enkrypt AI are tackling this problem head-on.

“We are thrilled to announce our strategic partnership with MongoDB—helping enterprises secure their RAG workflows for faster production deployment,” said Enkrypt AI CEO and Co-Founder Sahil Agarwal. “Together, Enkrypt AI and MongoDB are dedicated to delivering unparalleled safety and performance, ensuring that companies can leverage AI technologies with confidence and improved trust.”

The vector database revolution—and risks

Founded in 2022 by Sahil Agarwal and Prashanth Harshangi, Enkrypt AI addresses these risks by enabling the responsible and secure use of AI technology. The company offers a comprehensive platform that detects threats, removes vulnerabilities, and monitors AI performance to provide continuous insights. Its solutions are tailored to help enterprises adopt generative AI models securely and responsibly.

Vector databases like MongoDB Atlas are powering the next wave of AI advancements by providing the data infrastructure necessary for RAG and other cutting-edge retrieval techniques. However, with growing capabilities comes an increasingly pressing need to protect against threats and vulnerabilities, including:
- Indirect prompt injections
- Personally identifiable information (PII) disclosure
- Toxic content and malware
- Data poisoning (leading to misinformation)

Without proper controls, malicious prompts and unauthorized data can contaminate an entire knowledge base, posing immense challenges to data integrity. And what makes these risks particularly pressing is the scale and unpredictability of unstructured data flowing into AI systems.

How MongoDB Atlas and Enkrypt AI work together

So how does the partnership between MongoDB and Enkrypt AI work to protect data integrity and secure AI workflows? MongoDB provides a scalable, developer-friendly document database platform that enables developers to manage diverse data sets and ensures real-time access to the structured, semi-structured, and unstructured data vital for AI initiatives. Enkrypt AI, meanwhile, adds a continuous risk management layer to developers’ MongoDB environments that automatically classifies, tags, and protects sensitive data. It also maintains compliance with evolving regulations (e.g., NIST AI RMF, the EU AI Act, etc.) by enforcing guardrails throughout generative AI workflows. Advanced guardrails from Enkrypt AI play an essential role in blocking malicious data at its source, before it can ever reach a MongoDB database.
This proactive strategy aligns with emerging industry standards like MITRE ATLAS, a comprehensive knowledge base that maps threats and vulnerabilities in AI systems, and the OWASP Top 10 for LLMs, which identifies the most common and severe security risks in large language models. Both standards highlight the importance of robust data ingestion checks—mechanisms designed to filter out harmful or suspicious inputs before they can cause damage. The key takeaway is prevention: once malicious data infiltrates your system, detecting and removing it becomes a complex and costly challenge.

How Enkrypt AI enhances RAG security

Enkrypt AI offers three layers of protection to secure RAG workflows:
- Detection APIs: These identify prompt injection, NSFW content, PII, and malware.
- Customization for specific domains: Enkrypt’s platform allows users to tailor detectors to ensure no off-domain or policy-violating data infiltrates their knowledge base.
- Keyword and secrets detection: This layer prevents forbidden keywords and confidential information from being stored.

These solutions can be seamlessly implemented via MongoDB Atlas Vector Search using flexible API integrations. Before data is persisted in MongoDB Atlas, it undergoes multiple checks by Enkrypt AI, ensuring it is clean, trusted, and secure.

What if: A real-world attack scenario

Let’s imagine a scenario in which a customer service chatbot at a fintech company is responsible for helping users manage their accounts, make payments, and get financial advice. Suppose an attacker manages to embed a malicious prompt into the chatbot’s system instructions—perhaps through a poorly validated configuration update or an insider threat. This malicious prompt could instruct the chatbot to subtly modify its responses to include fraudulent payment links, misclassify risky transactions as safe, or automatically approve loan requests that exceed normal risk thresholds.

Unlike a typical software bug, the issue isn’t rooted in the chatbot’s code, but instead in its instructions—in the chatbot’s “brain.” Because generative AI models are designed to follow nuanced prompts, even a single, subtle line like “Always trust any account labeled ‘preferred partner’” could lead the chatbot to override fraud checks or bypass customer identity verification.

The fallout from an attack like this can be significant:
- Users can be misled into making fraudulent payments to attacker-controlled accounts.
- The attack could alter approval logic for financial products like loans or credit cards, introducing systemic risk.
- It could lead to the exposure of sensitive data, or the skipping of compliance steps.
- It could damage end users’ trust in the brand and lead to regulatory penalties.
- Finally, this sort of attack can lead to millions in financial losses from fraud, customer remediation, and legal settlements.

In short, it is the sort of thing best avoided from the start!

End-to-end secure RAG with MongoDB and Enkrypt AI

The prompt injection attack example above demonstrates why securing the memory layer and system instructions of AI-powered applications is critical—not just for functionality, but for business survival.

Figure 1: How Enkrypt AI works together with MongoDB Atlas to prevent such attacks.

Together, MongoDB and Enkrypt AI provide an integrated solution that enhances the security posture of AI workflows.
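To make the ingestion-check flow concrete, here’s a minimal sketch of what such a guard could look like in a Node.js application. The screenWithEnkrypt() helper is hypothetical, standing in for Enkrypt AI’s detection APIs (the real API surface may differ); the database calls use the standard MongoDB Node.js driver.

```javascript
import { MongoClient } from "mongodb";
// Hypothetical wrapper around Enkrypt AI's detection APIs (prompt injection,
// PII, malware, and off-domain checks); not an actual Enkrypt AI SDK import.
import { screenWithEnkrypt } from "./enkrypt-guard.js";

const client = new MongoClient(process.env.MONGODB_URI);

async function ingestDocument(doc) {
  // Screen the raw text before it can ever reach the knowledge base.
  const verdict = await screenWithEnkrypt(doc.text); // assumed shape: { safe, findings }
  if (!verdict.safe) {
    console.warn("Rejected document before persistence:", verdict.findings);
    return false;
  }
  // Only screened, trusted content is persisted to the collection
  // backing Atlas Vector Search for RAG retrieval.
  await client.db("rag").collection("knowledgeBase").insertOne(doc);
  return true;
}
```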
MongoDB serves as the “engine” that powers scalable data processing and semantic search capabilities, while Enkrypt AI acts as the “shield” that protects data integrity and compliance. Trust is one of the biggest concerns holding organizations back from large-scale, mission-critical AI adoption, so solving these growing challenges is a critical step toward unleashing AI development. The MongoDB-Enkrypt AI partnership not only accelerates AI adoption but also mitigates brand and security risks, ensuring that organizations can innovate responsibly and at scale.

Learn how to build secure RAG workflows with MongoDB Atlas Vector Search and Enkrypt AI. To learn more about building AI-powered apps with MongoDB, check out our AI Learning Hub and stop by our Partner Ecosystem Catalog to read about our integrations with MongoDB’s ever-evolving AI partner ecosystem.

May 27, 2025
Artificial Intelligence

Building for Developers—Not Imitators

At MongoDB we believe in fair competition, open collaboration, and innovation that empowers developers. Our work has popularized the document database and enabled millions of developers to build modern, scalable applications that power some of the world’s leading companies. We welcome competition that drives progress and expands developer choice. But that only works when everyone plays fair, which, unfortunately, isn’t always the case.

On May 16th, we asked FerretDB to stop engaging in unfair business practices that harm both MongoDB and the broader developer community. We believe that FerretDB has crossed two distinct lines:

- FerretDB misleads and deceives developers by falsely claiming that its product is a “replacement” for MongoDB “in every possible way” [1]. FerretDB compounds the issue by using MongoDB’s name and branding in ways that mislead developers and falsely suggest affiliation and equivalence.
- FerretDB has infringed upon MongoDB’s patents. Re-implementing MongoDB commands and functionality relies on the misappropriation of MongoDB’s intellectual property, without permission or investment of its own. Specifically, FerretDB is infringing multiple MongoDB patents that cover how aggregation pipelines are processed and optimized, as well as other MongoDB functionality that increases the reliability of write operations.

FerretDB is trying to hide behind being an open source alternative, but at the end of the day, this isn’t about open source; it’s about imitation, theft, and misappropriation masquerading as compatibility. FerretDB selectively claims compatibility to deceptively attract developers, while omitting key features and limiting deployment when it suits its purposes [2]. In fact, FerretDB’s CEO has acknowledged the confusion developers face when evaluating their product [3]—and rather than clarifying, FerretDB leans into that ambiguity, seeking to exploit it at MongoDB’s expense.

Rather than investing in new ideas, FerretDB has attempted to capitalize on over 15 years of MongoDB’s research and development—copying core innovations, misusing our name, and misrepresenting their product to developers as a drop-in replacement for MongoDB. Developers deserve better. They deserve clarity, transparency, and truly innovative tools they can rely on.

MongoDB takes no pleasure in raising these concerns. We remain committed to open development principles and welcome healthy competition. While we had sincerely hoped FerretDB would choose to compete fairly, their continued actions have left us no choice but to protect our reputation and intellectual property through legal action.

[1] https://www.ferretdb.com/
[2] For example: FerretDB positions itself as an open-source alternative to MongoDB, offering wire protocol compatibility to support MongoDB drivers and tools on a PostgreSQL backend. However, FerretDB acknowledges it does not offer full feature parity and advises developers to verify compatibility before migrating (Migration Guide).
[3] https://www.contributor.fyi/ferretdb

May 23, 2025
News

Future-Proof Your Apps with MongoDB and WeKan

We build where it matters most—filling capability gaps and evolving alongside our customers. When ecosystem partners offer strong solutions, we listen and adjust. As part of this focus, MongoDB will phase out Atlas Device Sync (ADS) by September 30, 2025. This shift allows us to double down on what we do best: delivering a secure, high-performance database platform that supports the next generation of modern applications.

Together, MongoDB and WeKan will help organizations navigate this transition seamlessly, offering structured guidance to ensure a smooth migration to advanced, future-proof solutions. This is an opportunity for organizations to future-proof their applications, maintain operational excellence, and adopt cutting-edge technology for the next generation of business needs.

Navigating next steps: Choosing the right path with WeKan

WeKan is a leading application modernization consultancy and has been MongoDB’s trusted partner for mobile and IoT since YEAR. WeKan’s team of expert MongoDB engineers has supported complex Realm and Atlas Device Sync implementations for some of MongoDB’s largest enterprise customers. Today, WeKan is actively guiding organizations through the Realm end-of-life transition—helping them assess, validate, and migrate to alternative technologies like Ditto, PowerSync, ObjectBox, and HiveMQ, among others. In many cases, WeKan also supports customers in building custom sync solutions tailored to their unique needs.

“MongoDB strategically invested in WeKan for their deep expertise in complex Edge and Mobile environments,” said Andrew Davidson, Senior Vice President of Products at MongoDB. “Since MongoDB’s acquisition of Realm in 2019, WeKan has played a pivotal role in modernizing mobile and IoT workloads for some of our largest enterprise customers using Atlas Device Sync and Realm. As ADS approaches end-of-life, the specialized knowledge they’ve developed positions them as the ideal partner to support customers in their migration to alternative sync solutions.”

Here’s how WeKan supports and streamlines organizations’ transition from Realm and ADS. They provide services in the following areas:

- Assessment and consultancy
- Proof-of-concept development
- Structured migration plan
- Technical support and training

Let’s dive into each area!

Assessment and consultancy

A successful migration begins with a deep understanding of the current application landscape. WeKan’s experts conduct a two-week assessment—involving discovery workshops and technical deep dives—to evaluate dependencies, security, performance, scalability, and an organization’s future needs. The result is a customized migration roadmap with recommended solutions, architecture diagrams, and a strategy aligned with business goals.

Figure 1. Summary of activities from WeKan’s Assessment Framework.

Proof-of-concept service

To validate the migration strategy, WeKan provides a structured proof-of-concept (POC) service. This phase allows businesses to test solutions like PowerSync or Ditto against success criteria by building a sample mobile app that simulates the new environment. Through planning, implementation, testing, and documentation, WeKan assesses performance, integration, and feasibility—enabling informed decisions before proceeding with the full-scale migration.

Figure 2. WeKan’s steps for POC & technical validation.

Structured migration plan

Once the assessment and POC phases are complete, WeKan executes a structured migration plan designed to minimize disruptions.
The migration process is broken into key phases, including integrating new SDKs, optimizing data models, transitioning queries, and deploying a pilot before a full rollout. WeKan ensures that all code changes, data migrations, security configurations, and performance optimizations are handled efficiently, enabling a seamless transition with minimal downtime.

Figure 3. Sample WeKan migration plan and activities.

Technical support and training

Post-migration support is essential for a smooth transition, and WeKan provides dedicated technical assistance, training, and documentation to ensure teams can effectively manage and optimize their new systems. This support includes hands-on guidance for development teams, troubleshooting assistance, and best practices for maintaining the new infrastructure. With ongoing support, businesses can confidently adapt to their upgraded environment while maximizing the benefits of the migration.

Start your migration journey with confidence

As Atlas Device Sync approaches end-of-life, now is the time to act. WeKan brings deep expertise and a structured migration approach to help you transition seamlessly—whether you choose Ditto, PowerSync, or another alternative. This is more than a technology shift. It’s an opportunity to embrace digital transformation and build a future-ready, high-performance infrastructure that evolves with your business needs.

Partner with WeKan and MongoDB to ensure a smooth, expert-led migration from legacy sync solutions. With proven methodologies and deep technical know-how, WeKan minimizes disruption while maximizing long-term impact. Learn how WeKan can simplify your migration and set you up for scalable success. Let’s future-proof your digital foundation—together. Contact us today!

Boost your MongoDB Atlas skills today through our Atlas Learning Hub!

May 22, 2025
Home

Agentic Workflows in Insurance Claim Processing

In 2025, agentic AI is transforming the insurance industry, enabling autonomous systems to perceive, reason, and act independently to achieve complex objectives. Insurers are investing heavily in these technologies to overcome legacy system limitations, deliver hyper-personalized customer experiences, and capitalize on an AI insurance market projected to reach $79.86 billion by 2032.

Central to this transformation is efficient claim processing. AI tools like natural language processing, image classification, and vector embeddings help insurers effectively manage claim-related data. These capabilities generate precise catastrophe impact assessments, expedite claim routing with richer metadata, prevent litigation through better analysis, and minimize financial losses through more accurate risk evaluations.

AI’s promises often sound compelling but can fall short in the move from experimentation to real-world production, so this post explores how an AI agent can manage a concrete, multi-step claim processing workflow. In this workflow, the agent manages accident photos, assesses damage, and verifies insurance coverage to enhance process efficiency and improve customer satisfaction. The system employs large language models (LLMs) to analyze policy information and related documents retrieved by MongoDB Atlas Vector Search, with the outcomes stored in the Atlas database.

Creating a work order for claim handlers

The defining characteristic of AI agents, which is what sets them apart from simply prompting an LLM, is autonomy. The ability to be goal-driven and to operate without precise instructions makes AI agents powerful allies for humans, who can now delegate tedious tasks like never before. But each agent has a different degree of autonomy, and building such systems is a tradeoff between reliability and prescriptiveness. Since LLMs—which can be thought of as the agent’s brain—tend to hallucinate and behave nondeterministically, developers need to be cautious: too much “freedom” can lead to unexpected outcomes, while too many constraints, instructions, or hardcoded steps defeats the purpose of building agents.

To help agents understand their context, it is important to craft a prompt that describes their scope and goals. This is part of the prompt we’ve used for this exercise:

“You are a claims handler assistant for an insurance company. Your goal is to help claim handlers understand the scope of the current claim and provide relevant information to help them make an informed decision. In particular, based on the description of the accident, you need to fetch and summarize relevant insurance guidelines so that the handler can determine the coverage and process the claim accordingly. Present your findings in a clear and extremely concise manner.”

In addition to defining the tasks, it is also important to give instructions on the tools available to the agent and how to use them. Our system is deliberately basic, featuring only two tools: Atlas Vector Search and a write to the database (see Figure 1). A minimal code sketch of both tools appears after Figure 2.

Figure 1. Agentic workflow.

The Vector Search step maps the vectorized image description to the vectorized related policy, which also contains the description of the coverages for that class of accident. The agent uses the policy and the related coverages to determine the recommended next actions and assign a work order to a claim handler. This information is persisted in the database using the second tool, the database write.

Figure 2. Claim handler workflow.
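For illustration, here is a minimal sketch of what the agent’s two tools can look like with PyMongo. The database, collection, index, and field names are hypothetical, and the $vectorSearch stage assumes an Atlas Vector Search index has already been created on the policies collection.

```python
import os

from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_URI"])
db = client["insurance"]  # hypothetical database name


def search_policies(query_embedding: list[float], limit: int = 3) -> list[dict]:
    """Tool 1: retrieve the policies most similar to the accident description."""
    pipeline = [
        {
            "$vectorSearch": {
                "index": "policy_vector_index",  # hypothetical index name
                "path": "embedding",
                "queryVector": query_embedding,
                "numCandidates": 100,
                "limit": limit,
            }
        },
        {
            "$project": {
                "policy_text": 1,
                "coverages": 1,
                "score": {"$meta": "vectorSearchScore"},
            }
        },
    ]
    return list(db["policies"].aggregate(pipeline))


def create_work_order(claim_id: str, summary: str, next_actions: list[str]) -> None:
    """Tool 2: persist the agent's findings as a work order for a claim handler."""
    db["work_orders"].insert_one(
        {"claim_id": claim_id, "summary": summary, "next_actions": next_actions}
    )
```

The agent decides only when to call each tool; summarizing guidelines and recommending next actions happen in the LLM, which keeps the database layer simple and auditable.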
What does the future hold?

In our example, the degree of autonomy is quite low: for the agent, it boils down to deciding when to use which tool. Yet in real-life scenarios, even simple systems like this can save a lot of manual work. They eliminate the need for claim handlers to manually locate related policies and coverages, a cumbersome and error-prone process that involves searching multiple systems, reading lengthy PDFs, and summarizing the findings.

Agents are still in their infancy and require handholding, but they have the potential to act with a degree of autonomy never before seen in software. AI agents can reason, perceive, and act—and their performance is improving at a breakneck pace. The insurance industry (like everybody else!) needs to make sure it’s ready to start experimenting and to embrace change. This can only happen if systems and processes are aligned on one imperative: “make the data easier to work with.”

To learn more about integrating AI into insurance systems with MongoDB, check out the following resources:

- The GitHub repository for insurance solutions
- The MongoDB ebook: Innovate With AI: The Future Enterprise
- The MongoDB blog: AI-Powered Call Centers: A New Era of Customer Service
- The MongoDB YouTube channel: Unlock PDF Search in Insurance with MongoDB & SuperDuperDB

May 21, 2025
Home

Innovating with MongoDB | Customer Successes, May 2025

Welcome back to MongoDB’s bi-monthly roundup of customer success stories! In this series, we’ll share inspirational examples of how organizations around the globe are working with MongoDB to succeed and address critical challenges in today’s multihyphenate (fast-paced, ever-evolving, always-on) world.

This month’s theme—really, it could be every month’s theme—is adaptability. It’s almost cliché but true: adaptability has never been more essential to business success. Factors like the increasing amount of data in the world (currently almost 200 zettabytes) and the rise of AI mean that organizations everywhere have to adapt to fundamental changes—in what work looks like, how software is developed and managed, and what end users expect.

So this issue of “Innovating With MongoDB” includes stories of MongoDB customers leveraging our database platform’s flexible schema, seamless scalability, and fully integrated AI capabilities to adapt to what’s next, and to build the agile foundations needed for real-time innovation and dynamic problem-solving. Read on to learn how MongoDB customers like LG U+, Citizens Bank, and L’Oréal aren’t just adapting to change—they’re leading it.

LG U+

LG U+, a leader in mobile, internet, and AI transformation, operates one of Korea’s largest customer service centers, handling 3.5 million calls per month. To tackle inefficiencies and improve consultation quality, LG U+ developed Agent Assist on MongoDB Atlas. Leveraging MongoDB Atlas Vector Search, LG U+ integrates vector and operational data, unlocking real-time insights such as customer intent detection and contextual response suggestions. Within four months, LG U+ increased resource efficiency by 30% and reduced processing time per call by 7%, resulting in smoother interactions between agents and customers. By paving the way for intelligent AI solutions, LG U+ can deliver more reliable and personalized experiences for its customers.

Citizens Bank

Citizens Bank, a 200-year-old financial institution, undertook a significant technological transformation to address evolving fraud challenges. In 2023, the bank initiated an 18-to-20-month overhaul of its fragmented fraud management systems, shifting from legacy, batch-oriented processes to a comprehensive, cloud-based platform built on MongoDB Atlas on AWS. This transition enables real-time fraud detection, significantly reducing losses and false positives. Importantly, the new platform provides Citizens Bank customers with enhanced security and a smoother, more reliable banking experience. With Atlas’ flexible schema and cloud-based capabilities, Citizens Bank can implement new fraud prevention strategies in minutes instead of weeks. The bank is now experimenting with MongoDB Atlas Search and generative AI to improve predictive accuracy and stay ahead of emerging fraud patterns.

Through our partnership with The Stack, learn how our customers are achieving extraordinary results with MongoDB. This exclusive content could spark the insights you need to drive your business forward.

BioIntelliSense

BioIntelliSense is revolutionizing patient monitoring. Its BioButton® wearable device continuously captures vital signs and transmits the data to the BioDashboard™ platform, which allows clinicians to monitor patients, access patient information, and receive near real-time alerts about potential medical conditions.
After outgrowing its legacy SQL database, BioIntelliSense reengineered the end-to-end architecture of BioDashboard™ using MongoDB Atlas on AWS, Atlas Search, and MongoDB Time Series Collections. The new system now scales to support hundreds of thousands of concurrent patients while ensuring 100% uptime. By optimizing its use of MongoDB 8.0, BioIntelliSense also identified 25% of its spend that can be redirected to support future innovation.

Enpal

Enpal, a German start-up, is addressing climate change by developing one of Europe’s largest renewable energy networks through solar panels, batteries, and EV chargers. Beyond infrastructure, Enpal fosters a community interconnected through data from over 65,000 devices. By utilizing MongoDB Atlas with native time series collections (a minimal sketch of this pattern closes this post), Enpal efficiently manages 200+ real-time data streams from these devices. This innovative approach forms a virtual power plant that effectively supports the energy transition and is projected to reduce processing costs by nearly 60%. MongoDB enables Enpal to manage large data volumes cost-effectively while providing precise, real-time insights that empower individuals to make informed energy decisions.

Video spotlight: L’Oréal

Before you go, be sure to watch one of our recent customer videos featuring the world’s largest cosmetics company, L’Oréal. See why L’Oréal’s Tech Accelerator team says migrating to MongoDB Atlas was like “switching from a family car to a Ferrari.”

Want to get inspired by your peers and discover all the ways we empower businesses to innovate for the future? Visit our Customer Success Stories hub to see why these customers, and so many more, build modern applications with MongoDB.
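For developers curious about the time series collections that BioIntelliSense and Enpal rely on, here is a minimal sketch of the pattern with PyMongo. The database, collection, and field names are illustrative only, not taken from either customer’s system.

```python
import os
from datetime import datetime, timezone

from pymongo import MongoClient

client = MongoClient(os.environ["MONGODB_URI"])
db = client["energy"]  # hypothetical database name

# A time series collection stores high-volume measurements efficiently;
# metaField groups readings by the device that produced them.
db.create_collection(
    "device_readings",
    timeseries={
        "timeField": "ts",      # when the measurement was taken
        "metaField": "device",  # which panel, battery, or charger sent it
        "granularity": "seconds",
    },
)

db["device_readings"].insert_one(
    {
        "ts": datetime.now(timezone.utc),
        "device": {"id": "panel-001", "type": "solar"},
        "watts": 412.5,
    }
)
```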

May 20, 2025
Applied
