#OpenAI 🤝 #Databricks ✨ Get governed access to models like GPT-5 via SQL, API, and Agent Bricks for best-in-class 💡 #Reasoning: Healthcare decisions, financial risk analysis, contract review, logistics planning 📚 #Productivity: Summarization, business writing, knowledge management 💻 #Coding: Debugging large systems, modernizing legacy applications, generating production-ready code
Gilbert Schuppe’s Post
Amazing update for Snowflake engineers: this is a must-read. Snowflake just rolled out MCP Server in Cortex (Public Preview) along with the snowflake-labs-mcp v1.3.3 PyPI release. This update lets AI copilots run SQL with RBAC guardrails, consume semantic views natively, and return structured JSON outputs you can plug directly into workflows. For engineers, this means copilots move beyond experiments into safe, governed production use. What’s even bigger: Snowflake’s latest updates extend MCP to Cortex Analyst for natural-language SQL, add multi-tenant governance controls, and enable copilots to query semantic models and metadata without fragile workarounds. Together, these updates lay the foundation for auditable, enterprise-ready AI copilots inside Snowflake. Would you enable MCP copilots in production today, or keep them sandboxed until proven? 💬 Comment below with how you’d test MCP copilots first - sandbox, staging, or straight to prod. #Snowflake #MCP #AI #DataEngineering #Cortex #SnowflakeMCP #SnowflakeUpdate #SnowflakeEngineer #MCPcopilot #DataCouch
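To make the "RBAC guardrails + structured JSON" idea concrete, here is a minimal, purely illustrative sketch. The names (`guard`, `run_guarded`, the allow-lists) are hypothetical — the real MCP server enforces RBAC inside Snowflake itself — but the shape of the output is the point: a copilot-issued statement is vetted, then the result comes back as structured JSON a workflow can consume.

```python
import json
import re

# Hypothetical guardrail sketch (not the actual Snowflake MCP API):
# allow only read-style verbs against an approved set of schemas,
# and always return a structured JSON envelope.
ALLOWED_VERBS = {"SELECT", "SHOW", "DESCRIBE"}
ALLOWED_SCHEMAS = {"ANALYTICS", "SEMANTIC"}

def guard(sql: str) -> bool:
    verb = sql.strip().split(None, 1)[0].upper()
    if verb not in ALLOWED_VERBS:
        return False
    schemas = set(re.findall(r"\bFROM\s+(\w+)\.", sql, re.IGNORECASE))
    return all(s.upper() in ALLOWED_SCHEMAS for s in schemas)

def run_guarded(sql: str) -> str:
    if not guard(sql):
        return json.dumps({"status": "rejected", "reason": "blocked by guardrail"})
    rows = [{"region": "EMEA", "revenue": 1.2}]  # placeholder for a real result set
    return json.dumps({"status": "ok", "rows": rows})

print(run_guarded("SELECT region FROM ANALYTICS.sales"))
print(run_guarded("DROP TABLE ANALYTICS.sales"))
```

Because every response is the same JSON envelope whether the statement ran or was blocked, downstream workflow steps can branch on `status` instead of parsing error strings.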
Piggy-backing off the OpenAI AgentKit announcement: I'm super bullish on workflow automation. Whether we call this an "agent" or something else, the idea that most work in the enterprise can be automated as part of a workflow is a core part of our vision. This is a great opportunity for me to reintroduce our Workflow product: → 𝗖𝗼𝗻𝗻𝗲𝗰𝘁 𝘁𝗼 𝗮𝗻𝘆 𝗱𝗮𝘁𝗮 𝘀𝗼𝘂𝗿𝗰𝗲: Google Sheets, Airtable, Snowflake, Postgres... → 𝗨𝘀𝗲 𝗼𝘂𝗿 𝗔𝗜 𝗔𝗻𝗮𝗹𝘆𝘀𝘁 𝗔𝗴𝗲𝗻𝘁 𝘁𝗼 𝗯𝘂𝗶𝗹𝗱 𝘁𝗵𝗲 𝘄𝗼𝗿𝗸𝗳𝗹𝗼𝘄: Ask AI to build out anything you want around that data. You can slice the data, create charts, perform advanced analysis. The blocks are all SQL, Python or no-code, so you can vibe your way through a workflow or get into the weeds. → 𝗦𝗲𝗻𝗱 𝘆𝗼𝘂𝗿 𝗶𝗻𝘀𝗶𝗴𝗵𝘁𝘀 𝗿𝗶𝗴𝗵𝘁 𝘁𝗼 𝘄𝗵𝗲𝗿𝗲 𝘆𝗼𝘂𝗿 𝘁𝗲𝗮𝗺 𝘄𝗼𝗿𝗸𝘀: Push AI-generated summaries, tables or charts to Slack, Google Sheets, email or anywhere else. I've been saying this for a while: there are more efficient and effective ways to distribute insights to help drive the growth of your business than dashboards. If you want to try out a workflow designed specifically for insight discovery and distribution, and you just want to talk to the AI, you can take it for a spin in minutes!
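The three steps above (connect → analyze → push) can be sketched in a few lines. Everything here is illustrative stand-in code, not the product's actual API: `fetch_rows` plays the role of a source connector block, `analyze` is a Python analysis block, and `to_slack_payload` shapes the result for delivery.

```python
from collections import defaultdict

def fetch_rows():
    # Stand-in for a Google Sheets / Postgres connector block.
    return [
        {"channel": "web", "signups": 120},
        {"channel": "ads", "signups": 80},
        {"channel": "web", "signups": 40},
    ]

def analyze(rows):
    # Python analysis block: aggregate signups per channel.
    totals = defaultdict(int)
    for r in rows:
        totals[r["channel"]] += r["signups"]
    return dict(totals)

def to_slack_payload(totals):
    # Delivery block: format the aggregate as a Slack-style message payload.
    lines = [f"*{k}*: {v} signups" for k, v in sorted(totals.items())]
    return {"text": "Weekly signups\n" + "\n".join(lines)}

payload = to_slack_payload(analyze(fetch_rows()))
print(payload["text"])
```

The insight lands where the team already works, instead of waiting on a dashboard nobody opens.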
🔍 Insights from building RAG systems for enterprise clients. Here’s what really matters beyond the hype and tutorials: 1. Garbage in, garbage out: Enterprise docs aren't clean - they're scans, old PDFs, and OCR nightmares. You need systems that can handle messy, low-quality documents. 2. Metadata beats embeddings: Domain-specific metadata schemas drastically outperform generic semantic embeddings. Invest in metadata design early. 3. Semantic search alone isn’t enough: Specialized domains have tons of acronyms, need exact matches, and have complex cross-references. You need semantic + keyword + graph search working together. 4. Tables hold key insights: Ignoring structured tabular data means losing vital business context. Build dedicated pipelines specifically for tables. 5. Infrastructure wins deals: Real deployments need to handle multiple users at once, meet strict security requirements, and manage resources efficiently. These boring basics matter more than fancy features. ✅ Bottom Line: Enterprise RAG is an engineering challenge. Success comes from handling messy documents, smart metadata, hybrid retrieval, structured data, and rock-solid infrastructure. --- More details in the first comment. #GenAI #RAG #EnterpriseAI #LLMs
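Point 3 (hybrid retrieval) is easy to sketch: blend a semantic similarity score with an exact-keyword score so acronyms and exact terms still rank. This is a toy illustration — the two-dimensional "embeddings" are hand-made stand-ins for real model outputs, and production systems would use BM25 rather than raw term overlap.

```python
import math

# Toy corpus: (text, hand-made stand-in embedding) per document.
DOCS = {
    "d1": ("SLA terms for enterprise support", [0.9, 0.1]),
    "d2": ("quarterly revenue summary",        [0.1, 0.9]),
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

def hybrid_score(query_text, query_vec, doc_text, doc_vec, alpha=0.5):
    # Blend semantic similarity with an exact-match keyword share,
    # so acronyms like "SLA" hit even when embeddings are ambiguous.
    q_terms = set(query_text.lower().split())
    d_terms = set(doc_text.lower().split())
    keyword = len(q_terms & d_terms) / max(len(q_terms), 1)
    return alpha * cosine(query_vec, doc_vec) + (1 - alpha) * keyword

scores = {
    doc_id: hybrid_score("SLA support", [0.8, 0.2], text, vec)
    for doc_id, (text, vec) in DOCS.items()
}
best = max(scores, key=scores.get)
print(best)  # the SLA document wins, carried by the keyword component
```

The `alpha` weight is the knob worth tuning per domain: acronym-heavy corpora usually want more keyword weight, narrative corpora more semantic weight.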
New version of MLflow - 3.4.0 - has been released with lots of new goodies: New Metrics, MCP, Judges & More Key Highlights: • 📊 OpenTelemetry Metrics Export: span‑level stats in OT metrics • 🤖 MCP Server Integration: AI assistants now talk to MLflow • 🧑⚖️ Custom Judges API: Build domain‑specific LLM evaluators • 📈 Correlations Backend: Store & compute metric correlations via NPMI • 🗂️ Evaluation Datasets: Track eval data in experiments • 🔗 Databricks Backend: MLflow server can use Databricks storage • 🤖 Claude Autologging: Auto‑trace Claude AI calls • 🌊 Strands Agent Tracing: Full agent workflow instrumentation • 🧪 Experiment Types in UI: Separate classic ML and GenAI experiments MLflow 3.4.0 brings a suite of features that tighten the feedback loop between data scientists and engineers. The OpenTelemetry metrics export gives you end‑to‑end visibility into each span’s performance, while the new MCP server lets LLM‑based assistants query and record runs directly in the tracking store. Custom judges let you author domain‑specific LLM evaluators, and the correlations backend now stores NPMI scores so you can compare metrics across experiments. Versioned evaluation datasets keep all your test data tied to the run that produced it, ensuring reproducibility. The Databricks backend unlocks native Databricks integration for the MLflow server, and auto‑logging for Claude interactions means conversations are captured without manual instrumentation. Strands Agent tracing adds end‑to‑end monitoring for autonomous workflows, and the UI now supports experiment types to keep classic ML/DL work separate from GenAI projects. Full release notes - https://lnkd.in/dtuwVzPk #MLflow #OpenTelemetry #Databricks
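For readers unfamiliar with NPMI (the statistic the new correlations backend is described as storing), here is the formula in a few lines. This is a generic sketch of normalized pointwise mutual information between two binary events (e.g. "metric A above threshold" and "metric B above threshold" across runs), not MLflow's internal implementation.

```python
import math

def npmi(p_x: float, p_y: float, p_xy: float) -> float:
    """Normalized pointwise mutual information, bounded in [-1, 1]."""
    if p_xy == 0:
        return -1.0  # the events never co-occur
    pmi = math.log(p_xy / (p_x * p_y))
    return pmi / (-math.log(p_xy))  # normalization maps PMI into [-1, 1]

# Perfect co-occurrence normalizes to 1; independence to 0.
print(round(npmi(0.5, 0.5, 0.5), 3))   # 1.0
print(round(npmi(0.5, 0.5, 0.25), 3))  # 0.0
```

The normalization is what makes NPMI useful for comparing correlations across experiments with very different base rates, since raw PMI is unbounded.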
🔹"Speed or adaptability, which one would you bet on when every millisecond counts?" 🔹 We often think of data structures as abstract textbook concepts. But in reality, the choice between something like a Splay Tree and an LRU Cache can ripple all the way into the performance of large-scale machine learning systems. 🔹 In ML pipelines, efficiency isn’t just about model accuracy, it’s about how fast and intelligently we can move data. Splay Trees adapt to access patterns, bringing frequently used elements closer. This makes them powerful in scenarios where data access is skewed or unpredictable. But the trade-off? Rotations on reads and concurrency challenges can introduce latency under heavy load. LRU Caches, on the other hand, guarantee constant-time lookups and predictable performance. They shine in distributed ML systems where parallelization and cache-friendliness are critical. Yet, they come with metadata overhead and a rigid eviction policy that may not always align with dynamic learning workloads. 👉 In practice, this trade-off shows up in feature stores, parameter servers, and memory-bound training loops. Choosing the wrong structure can mean the difference between a system that scales gracefully and one that bottlenecks under real-world traffic. So the real question isn’t which is better universally, it’s which aligns with the workload, access patterns, and scaling strategy of your ML system. #MachineLearning #DataStructures #SystemDesign #AIEngineering #MLPipelines #SoftwareArchitecture #TechLeadership #PerformanceEngineering #SplayTree #LRUCache #ScalableAI
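The constant-time LRU side of that trade-off fits in a few lines. This is a minimal sketch using Python's `OrderedDict`: `get` and `put` are O(1), and recency is tracked by moving touched keys to the end, which is exactly the rigid-but-predictable eviction policy the post describes.

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: O(1) get/put, least-recently-used eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data: OrderedDict = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # mark as most recently used
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")         # touching "a" makes it most recent
cache.put("c", 3)      # capacity exceeded: "b" is evicted
print(cache.get("b"))  # None
print(cache.get("a"))  # 1
```

A splay tree would instead restructure itself on every read, which is precisely where the concurrency and read-latency costs mentioned above come from.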
🚀 End-to-End Data Science Project | MLOps I’m excited to share my latest End-to-End ML pipeline project — designed with modularity, reproducibility, and MLOps best practices in mind. 🔗 Repo: https://lnkd.in/gbaY5g-i ➥ Workflows – ML Pipeline 1️⃣ Data Ingestion 📥 2️⃣ Data Validation ✅ 3️⃣ Data Transformation 🔧 (Feature Engineering & Preprocessing) 4️⃣ Model Training 🤖 5️⃣ Model Evaluation 📊 (MLflow + DagsHub for experiment tracking) ➥ Development Workflows 📝 Update config.yaml | schema.yaml | params.yaml 🧩 Modular components for each ML step 🔗 Pipelines orchestrated via main.py 📜 Logs & Tracking for reproducibility & monitoring ➥ Project Structure Highlights 📁 config/ → Central configs (paths, hyperparameters) 📁 experiment/ → Experiment runs (metrics, artifacts, models) 📁 Logs/ → Runtime logs for debugging & monitoring 📁 src/DataScience/ → Core ML codebase • 🧩 components/ → Data ingestion, preprocessing, training, evaluation • ⚙️ config/ → YAML loaders & helpers • 📏 constants/ → Global constants • 🗂️ entity/ → Data entities & schemas • 🔗 pipeline/ → End-to-end orchestration • 🛠️ utils/ → Logging, metrics, helpers 🐳 Dockerfile → Containerization for reproducibility 🎚️ params.yaml → Hyperparameters 📝 schema.yaml → Data schema & validation rules 💡 This project demonstrates how Data Science + MLOps practices can be combined to build robust, scalable, and reproducible ML pipelines. I’d love to hear feedback and suggestions from the community! #MLOps #MachineLearning #MLflow #DVC #DagsHub #DataEngineering #EndToEndML
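The "modular components orchestrated via main.py" pattern described above can be sketched like this. Class and stage names here are illustrative, not the repo's actual code: each stage exposes a `run()` step, and the orchestrator threads an artifact through the stages while logging progress, which is what makes runs reproducible and debuggable.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("pipeline")

class Stage:
    name = "stage"
    def run(self, artifact):
        raise NotImplementedError

class DataIngestion(Stage):
    name = "data_ingestion"
    def run(self, artifact):
        return {"rows": [1, 2, 3]}  # stand-in for reading raw data

class DataValidation(Stage):
    name = "data_validation"
    def run(self, artifact):
        assert artifact["rows"], "ingested data is empty"  # schema-style check
        return artifact

class ModelTraining(Stage):
    name = "model_training"
    def run(self, artifact):
        artifact["model"] = sum(artifact["rows"]) / len(artifact["rows"])  # toy "model"
        return artifact

def run_pipeline(stages):
    # main.py-style orchestration: run stages in order, log each transition.
    artifact = None
    for stage in stages:
        log.info(">>> stage %s started", stage.name)
        artifact = stage.run(artifact)
        log.info("<<< stage %s completed", stage.name)
    return artifact

result = run_pipeline([DataIngestion(), DataValidation(), ModelTraining()])
print(result["model"])
```

Keeping each stage behind the same interface is what lets configs in `params.yaml` or `schema.yaml` swap behavior without touching the orchestrator.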
🚀 Just Built an AI-Powered Book Generation System That Actually Works! After diving into workflow automation, I created an end-to-end system that transforms a simple book title into a full-length, professionally structured book using n8n, Supabase, and OpenAI GPT-4. 🎯 What It Does: ✅ Turns title + notes into a detailed outline ✅ Generates chapters with context awareness ✅ Adds human-in-the-loop review checkpoints ✅ Compiles everything into a polished DOCX ✅ Sends notifications at key milestones 🛠️ Tech Stack: n8n (orchestration) | Supabase (data) | OpenAI GPT-4 (content) 🔄 Smart Features: Context chaining across chapters Conditional branching for reviews Modular design for any content type Error handling + retry logic 📊 The Results: From a title like “The Future of AI in Business” → a 10-chapter, 30,000+ word book with proper flow and structure. 💡 Why It Matters: This isn’t just book writing automation — it’s a blueprint for content marketing, research reports, documentation, and knowledge bases. Scalable, auditable, and customizable. #WorkflowAutomation #AIAutomation #n8n #OpenAI #Supabase #TechInnovation
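The "error handling + retry logic" feature is worth spelling out, since LLM calls fail transiently all the time. Here is a generic exponential-backoff sketch (the flaky function is a stand-in for a real chapter-generation API call, not the actual n8n workflow):

```python
import time

def with_retries(fn, attempts=3, base_delay=0.01):
    """Call fn, retrying with exponential backoff on failure."""
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception:
            if attempt == attempts:
                raise  # out of retries: surface the error
            time.sleep(base_delay * 2 ** (attempt - 1))  # back off before retrying

calls = {"n": 0}
def flaky_generate_chapter():
    # Stand-in for an LLM call that fails twice, then succeeds.
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "Chapter 1: ..."

print(with_retries(flaky_generate_chapter))  # succeeds on the third attempt
```

In a visual tool like n8n the same idea lives in the node's retry settings; the point is that each chapter step should be individually retryable so one flaky call doesn't scrap a 30,000-word run.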
Join Alex Robertson's Huddle: Building Better Data Foundations with Engineering Best Practices Explore together how software engineering principles such as version control, CI/CD pipelines, testing frameworks, and agile methodologies can elevate the effectiveness of data engineering, governance, data science, BI, and AI efforts. Whether you’re building data platforms, deploying machine learning models, or managing enterprise analytics, this huddle will surface actionable insights to help your teams deliver with more speed, quality, and confidence. Don't miss the conversation: https://lnkd.in/g28Z93_W #dataXchange #engineering #AI #testing #datascience
I'll be speaking with Reid Havens 🧙🏻♂️ and a surprise guest (hint, it's Greg Baldini) this Friday about why "context is king" with AI, and what that means in practical terms. It's not a presentation, I'm just going to show some stuff and chat. I'll talk about: 1. What is "context" and why it is so important. 2. Ways to manage context yourself. 3. Ways to enable an agent to get their own context. 4. Why MCP is cool... but why I also don't use them in practice that much, at all... and what I use, instead. 5. Why I prefer and use agentic CLI tools over tools like GitHub Copilot agent mode in VS Code. 6. Why I favor the tools from Anthropic... particularly Claude Code. 7. Why you should be extra skeptical of all the voices and noise trying to capture your attention and your money. Being skeptical and critical of AI will help you get better at using it.
🚀 Communication Is the New Code: Beyond Copy/Paste with GPTs (feat. Kurt Buhler) 👤 Guest: Kurt Buhler, renowned data nerd and prolific creator behind Data Goblins (+ special surprise guest) 📅 Date/Time: Fri, Oct 17 • 8:30 AM PT 🔗 Watch live: https://lnkd.in/gPg8JMjZ In this live session, we’ll dig into how AI assistants are reshaping Power BI development, moving from “prompt & paste” to real collaboration. We will: ✅ Put Claude Desktop + MCP servers to work on real models (insights, optimizations, DAX debugging) ✅ Explore GitHub Copilot for Power BI for DAX, M, and doc generation ✅ Demo advanced AI workflows: automated measures → intelligent modeling suggestions ✅ Weigh pros/cons honestly: productivity gains vs. pitfalls & over-reliance ✅ Share practical integration strategies for day-to-day dev Don’t miss this one! Come for the demos, stay for the tactics you can use Monday morning. #PowerBI #Fabric #DAX #AIAssistants #GitHubCopilot #Claude #MCP #DataModeling #BusinessIntelligence #Analytics #DataGoblin
A few weeks back, we were building a RAG pipeline for a client. Everything was going perfectly until we hit the real wall: their knowledge was stored in messy PDFs, Word files, and PowerPoints. Our LLM was ready. The embeddings were working beautifully. But the data wasn’t. Extracting clean text, tables, and structure from thousands of documents felt almost impossible. Every parser we tried failed: some missed key sections, others broke layouts or mixed up tables. That’s when we came across Docling, IBM’s open-source framework that quietly does something brilliant: it converts any document (PDF, DOCX, PPTX) into structured, LLM-ready data. Docling didn’t just pull out text - it preserved hierarchy, layout, and metadata, giving our RAG results more accuracy and context than we expected. This experience reminded me that the real power of AI doesn’t start with the model; it starts with clean, structured data. Before you make your LLM smart, make your documents readable. #AI #RAG #IBM #Docling #LLM #DataEngineering #GenerativeAI