🎉 Big news from #SnowflakeSummit! We're excited to announce support for Snowflake Semantic Model Sharing with Bobsled. Semantic models are a key part of building AI-ready data products that are discoverable and consumable by agents like Snowflake's Cortex Analyst.
What this means for data product teams:
✅ Make data products AI-ready → Enable customers to query data using natural language directly within Cortex Agents
✅ Share data with business context → Easily share and access data along with semantic models while maintaining governance and version control
✅ Speed up onboarding → Eliminate the time and effort teams spend creating semantic models while ensuring AI systems generate accurate responses
🚀 Bobsled was featured as a launch partner alongside our customers Cotality and Deutsche Börse.
#SnowflakeSummit #DataProducts #AI #SemanticModels
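For readers new to semantic models, the sketch below (plain Python, with invented table and field names) illustrates the kind of business context a semantic model layers on top of a table so that an agent such as Cortex Analyst can map a natural-language question to the right columns and aggregations. Snowflake's actual semantic model specification is YAML-based and more detailed; treat this as an assumption-laden illustration, not the real format.

```python
# Hypothetical illustration only: the shape of business context a semantic
# model adds on top of a raw table. Names and fields are made up; Snowflake's
# actual semantic model spec differs in structure and detail.
revenue_semantic_model = {
    "name": "subscription_revenue",
    "description": "Monthly recurring revenue by customer and region.",
    "base_table": "ANALYTICS.FINANCE.SUBSCRIPTIONS",  # assumed table name
    "dimensions": [
        {"name": "region", "expr": "REGION", "synonyms": ["geo", "market"]},
        {"name": "customer", "expr": "CUSTOMER_NAME", "synonyms": ["account", "client"]},
    ],
    "time_dimensions": [
        {"name": "month", "expr": "BILLING_MONTH", "granularity": "month"},
    ],
    "measures": [
        {"name": "mrr", "expr": "SUM(MRR_USD)", "synonyms": ["monthly recurring revenue"]},
    ],
}

# With synonyms and expressions like these, an agent can translate a question
# such as "What was MRR in EMEA last month?" into a governed SQL query.
```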
Bobsled
Software Development
Los Angeles, CA 2,836 followers
Share data into another team's cloud data lake or warehouse without leaving your own.
About us
Bobsled is a cross-cloud data sharing platform that makes it painless to share data between any data lake or warehouse. We enable product and data teams to share data products directly into a customer or partner’s preferred analytical environment without ever leaving their own.
- Website: https://www.bobsled.co/
- Industry: Software Development
- Company size: 11-50 employees
- Headquarters: Los Angeles, CA
- Type: Privately Held
- Founded: 2021
Products
Locations
- Primary: Los Angeles, CA, US
- London, GB
- Berlin, DE
Updates
-
Bobsled reposted this
Today, we’re announcing Data Products in Bobsled. Now, customer-facing teams can create and customize products on-demand, without burying engineering in a sprawling mess of costly pipelines and ungoverned products.
With data products in Bobsled, teams can:
- Customize products using a no-code editor or AI-assisted SQL engine
- Track permissions, usage, and lineage from a single pane of glass
- Integrate fulfilment into existing apps and workflows
Customer-facing teams get an easy-to-use interface to build and customize data products on-demand. Engineering teams get a unified control plane to govern every permutation of every product across every customer. This is data productization at scale. ✨
Link to the launch post in the comments
-
Two years ago, we launched Bobsled to the public. The vision was simple: build a single platform that enables teams to make data products instantly accessible, anywhere. No pipelines to build. No accounts to manage. One feed operating at petabyte-scale.
Today, Bobsled powers a wide swath of the global data supply chain. The biggest data companies in the world run on Bobsled. ZoomInfo, Cotality, Dun & Bradstreet, GlobalData Plc, Deutsche Börse and many others trust Bobsled to power their data feeds.
We could not be more excited for the next two years. Over the next few weeks, we will be launching a new generation of features that will help our customers build and distribute cloud-native, AI-ready data products at a fraction of the cost of traditional feeds. Sign up using the link in the comments to get an early tour of what’s coming.
#datasharing #datafeeds #cloud #ai #dataproducts
-
What does it actually mean to make your data products AI-ready? It’s the question we heard over and over at Google Cloud Next earlier this month. Our team was on the ground, meeting with product and engineering leaders across the data ecosystem, and the message was clear: AI isn’t coming. It’s here. And data providers need a strategy.
Here are three key takeaways for data and analytics companies:
1. Google’s AI stack is the real deal. This isn’t just AI-washing. Google has built a vertically integrated stack, from infrastructure (TPUs) to foundation models (Gemini) to application and agent layers (Vertex AI, BigQuery ML), all within a developer-first ecosystem. Why it matters: AI adoption is happening inside cloud platforms. That’s where your customers are building, and where the next generation of data use cases is taking shape.
2. AI is a double-edged sword for data providers. AI opens massive opportunities, but it also creates existential risks for data and analytics companies. Protecting IP: most data companies we speak with are doubling down on IP protection, and rightly so. Contracts, compliance, and business models depend on it. Disruption risk: AI lowers the barrier to data aggregation. Startups are using public data and foundation models to recreate datasets and challenge incumbents. The move: focus on lower-risk areas and start experimenting. The cost of doing nothing is rising fast.
3. Cloud is now table stakes, and the launchpad for AI. In our latest State of Data Feeds survey, 73% of providers said they now offer cloud delivery. But there’s a bigger shift: cloud used to be about infrastructure; now it’s about market access. If your product doesn’t live there, it’s increasingly outside the buyer journey. And AI only deepens this reality. These platforms are embedding AI into the tools your customers use, from semantic search to copilots. If your data isn’t cloud-delivered, it won’t be AI-usable.
Interested in learning more about building data products in the AI era? Sign up for our roundtable in a few weeks. (Link in comments.)
-
Are data marketplaces worth it? A few years ago, every platform was launching one. Today, many data leaders are asking: was the hype worth it? As one executive told Bobsled CEO Jake Graham at a recent roundtable: “We listed on a few marketplaces and after three years, made $3.18. It just didn’t work.”
The skeptics aren’t wrong, but they’re not entirely right either. Marketplaces aren’t magical demand-gen machines. But used strategically, they can be powerful levers to scale a data business. Marketplaces are a door, not a destination. They open up:
• Co-selling with cloud providers
• Access to new personas
• New paths for trials and discovery
But to make them work, data companies need a broader shift in how they approach discovery. We spoke with 50+ PMs about how they’re improving discoverability. One theme kept coming up: getting products into the hands of actual users faster. Here are the five things they’re doing differently (a minimal sketch of the first follows below):
1. Manage data like a product. Clear metadata, schema, and documentation. No mystery.
2. Show real use cases. Include example queries, notebooks, and visualizations.
3. Publish everywhere. Your site, cloud platforms, vertical exchanges: go wide.
4. Offer low-friction trials. Fast access to real data in their own environment, with no weeks of contracting.
5. Package smartly. Slice by use case, persona, or region. Make it obvious who it’s for.
Curious to go deeper on data discovery in the AI era? Join us later this month for our AMA with product and tech leaders from across the data ecosystem. Sign up in comments 👇
#datamarketplace #daas #datasharing #ai
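As a concrete illustration of the first point, here is a minimal, hypothetical sketch of the kind of manifest a provider might publish alongside a dataset: clear metadata, an explicit schema, and a ready-to-run example query. All names and fields are invented for illustration; this is not a Bobsled or marketplace specification.

```python
from dataclasses import dataclass

# Hypothetical data product manifest: the metadata, schema, and example
# usage a buyer needs to evaluate a dataset quickly. Field names are
# illustrative only.

@dataclass
class Column:
    name: str
    dtype: str
    description: str

@dataclass
class DataProductManifest:
    name: str
    description: str
    refresh_cadence: str          # e.g. "daily", "hourly"
    intended_personas: list[str]  # who the product is sliced for
    schema: list[Column]
    example_query: str            # a ready-to-run starting point for trials

manifest = DataProductManifest(
    name="retail_foot_traffic_us",
    description="Daily store-level foot traffic estimates for US retailers.",
    refresh_cadence="daily",
    intended_personas=["quant researcher", "retail analyst"],
    schema=[
        Column("store_id", "STRING", "Stable identifier for the store"),
        Column("visit_date", "DATE", "Calendar date of the observation"),
        Column("visits", "INTEGER", "Estimated unique visits"),
    ],
    example_query=(
        "SELECT visit_date, SUM(visits) AS total_visits "
        "FROM retail_foot_traffic_us GROUP BY visit_date ORDER BY visit_date"
    ),
)
```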
-
Will FTP ever die? We asked 50+ PM and engineering leaders at data companies. Here’s what we’ve found (so far):
FTP runs deep: it still powers 60–80% of all data feeds. (Because let’s be honest: a pipeline that works doesn’t get touched.)
But that’s changing, fast:
☁️ New customers want cloud, and they’re making it a requirement for new products
🛠️ Teams are building pipelines with flexibility, scale, and customization in mind
📅 75% plan to invest significantly in modernization over the next 12 months
Want to know:
• How they’re making the shift
• Which destinations they’re prioritizing
• How they’re managing cloud costs
👉 Take our 10-minute State of Data Feeds survey to get early access to the results. (Link in comments)
🎁 Bonus: We’re offering $250 honorariums for in-depth interviews, available to the first 10 PMs or engineers who opt in at the end of the survey.
-
Bobsled reposted this
I'll be at #AWS Re:Invent this week with Jake Graham, spreading the gospel of true zero copy sharing + low-cost custom data product fulfillment. Are you going to be there? DM me!
-
Bobsled reposted this
Thank you Jake Graham for inviting me to participate on behalf of BMLL in a fascinating discussion alongside Jill Deuel in the Bobsled Webinar - Beyond FTP: Building and scaling data feeds in the cloud era https://lnkd.in/eftNdwAK. Capital Markets participants are increasingly focused on using cloud-based financial data feeds to rapidly access the market data they need at a lower total cost of ownership. It's exciting to be at the forefront of this rapidly changing industry! #HistoricalDataDoneProperly
-
Bobsled reposted this
Don't miss our upcoming panel discussion 'Beyond FTP: Building and scaling data feeds in the cloud era'. Hosted by Bobsled, this session will feature insights from top industry leaders, including CEO Jake Graham, Jill Deuel from LinkUp and Alejandro Gomez from BMLL. You will learn strategies to: 👇
• Manage cloud costs for data feed fulfilment
• Prioritise support for new destinations
• Automate fulfilment processes to minimise engineering support
Register now to join Neudata and Bobsled on 14 November at 3pm GMT/10am ET: https://lnkd.in/eVW-WCdM
#webinar #data #financialdata #cloud