📣 R2 Feature Spotlight Week is here! This week, we're highlighting new features from our R2 release. We're thrilled to announce that two of our most anticipated Extend Developer Copilot features, WQL Generation and PMD Scripting Generation, are now generally available for all Extend Pro subscribers! 🚀

What's New with Copilot?

🔹 WQL Generation: Transform natural language into complex Workday Query Language (WQL) statements directly within the Generate Components interface. This accelerates development by simplifying dynamic data retrieval.

🔹 PMD Scripting Generation: Automate the creation of validation scripts, reusable script functions, and code snippets. Reduce manual coding and focus on higher-level logic to build more robust applications, faster.

But that's not all... We're also officially launching the new WQL Query Component! This powerful component streamlines your development workflow by allowing you to:

✔️ Simplify inbound endpoints for complex WQL API queries.
✔️ Reuse WQL queries across multiple PMDs.
✔️ Easily pass query parameters.
✔️ Eliminate manual URL encoding.

By automating these complex tasks, we're empowering you to shift your focus from routine coding to true innovation. Hear all about it from Product Manager Kristen Pastor. 📣

Ready to build better apps, faster? Dive in and explore these new capabilities today!
Copilot for Workday development, really interesting! The video shows how Copilot can ease Workday developers' lives by generating code and pages for business requirements. #Workday #Copilot #WorkdayDevelopment #2025R2
Want to master RAG from zero to hero? This comprehensive repository is your complete roadmap.

Building production-ready Retrieval-Augmented Generation systems doesn't have to be overwhelming. This repo breaks down every critical component with hands-on notebooks you can run today.

What you'll master:

Query Construction: Transform natural language into SQL, Cypher, or vector queries. No more manual database wrestling.

Query Translation: Learn multi-query decomposition, RAG-Fusion, and hypothetical document generation to maximize retrieval accuracy.

Smart Routing: Dynamically route queries to the right database and embed context for laser-focused answers.

Advanced Retrieval: Implement Re-Rank, RankGPT, CRAG, and real-time external data integration to surface the most relevant information.

Intelligent Indexing: Master multi-representation embeddings, RAPTOR hierarchical summarization, and ColBERT for lightning-fast search.

Refined Generation: Use Self-RAG and RRR to create iterative reasoning loops that get smarter with each query.

Each notebook is production-ready with real code examples, from beginner concepts to advanced multi-querying patterns. Whether you're building your first RAG app or scaling to enterprise, this is your definitive guide.

GitHub repo link: https://dub.sh/ZYB32Ku

Drop a comment if you're working on RAG applications or want to discuss implementation strategies!
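For a taste of what RAG-Fusion's merging step looks like, here is a minimal, self-contained TypeScript sketch of reciprocal rank fusion (RRF), the technique commonly used to merge ranked lists from multiple generated queries. The document IDs and the constant k = 60 are illustrative, not taken from the repo.

```typescript
// Reciprocal rank fusion: merge several ranked result lists into one.
// Each document scores sum(1 / (k + rank)) across the lists it appears in.
function reciprocalRankFusion(rankings: string[][], k = 60): string[] {
  const scores = new Map<string, number>();
  for (const ranking of rankings) {
    ranking.forEach((docId, index) => {
      const rank = index + 1; // ranks are 1-based
      scores.set(docId, (scores.get(docId) ?? 0) + 1 / (k + rank));
    });
  }
  // Sort documents by fused score, highest first.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}

// Example: three sub-queries returned overlapping result lists.
const fused = reciprocalRankFusion([
  ["doc-a", "doc-b", "doc-c"],
  ["doc-b", "doc-a", "doc-d"],
  ["doc-c", "doc-b", "doc-e"],
]);
console.log(fused); // doc-b ranks first: it appears high in all three lists
```

Documents that show up consistently across sub-queries bubble to the top even if no single list ranked them first, which is exactly why the fusion step improves retrieval accuracy.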
In most large systems, background processes start life as tactical fixes: a quick script to sync data, clean records, or trigger downstream updates. Over time, they multiply into hundreds of independent jobs, each with its own repo, scheduler, and dependency graph.

We've been experimenting with collapsing that complexity into a rules engine: a single platform that evaluates conditional logic across data streams and schedules. At its core:

- React UI for managing jobs and rules, backed by a Node.js service using json-rules-engine.
- Rules are expressed as declarative JSON: each job defines a trigger (schedule or event), a scope (dataset or API feed), and an action (e.g. API update, notification, DB write).
- The backend abstracts data ingestion via connectors (REST, ODS feeds, message queues) and persists rules, execution logs, and metadata.

Architecturally, it's fully stateless. Every execution cycle ingests data, evaluates rules, and emits events. Scaling horizontally is just a matter of spawning more evaluators off a queue.

The idea isn't new, but applying it to enterprise operations at scale turns "scripting chaos" into declarative infrastructure. Once logic is externalised, you can version it, test it, and see it: something traditional background jobs rarely offer.
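To make the declarative-JSON idea concrete, here is a minimal TypeScript sketch using json-rules-engine. The rule, fact names, and event type are hypothetical, and the API shown assumes a recent (v6-style) version of the library; this illustrates the pattern, not the authors' actual platform.

```typescript
import { Engine } from "json-rules-engine";

async function main() {
  const engine = new Engine();

  // A declarative rule: if a record is stale and still active, emit an event.
  // The downstream action (API update, notification, DB write) is whatever
  // handler you attach to the emitted event type.
  engine.addRule({
    conditions: {
      all: [
        { fact: "daysSinceLastSync", operator: "greaterThan", value: 30 },
        { fact: "status", operator: "equal", value: "active" },
      ],
    },
    event: {
      type: "resync-record",
      params: { reason: "stale active record" },
    },
  });

  // Facts would normally arrive via a connector (REST, queue, ODS feed).
  const facts = { daysSinceLastSync: 45, status: "active" };

  const { events } = await engine.run(facts);
  for (const event of events) {
    console.log(`action triggered: ${event.type}`, event.params);
  }
}

main().catch(console.error);
```

Because the rule is plain JSON, it can be stored in a database, diffed in code review, and unit-tested with synthetic facts, which is the "version it, test it, see it" payoff described above.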
Bro, write code smartly, not manually 😎 Let components observe, not depend 👑

👩💻 Person 1: Bro, every time there's a new update in our backend service, I'm manually calling all the modules that depend on it. Something like:

analytics.update(data);
dashboard.update(data);
logger.update(data);

It's getting out of hand 😩

👨💻 Person 2: Ahh, classic. You're playing notification manager instead of coding features 😅 Why not use the Observer Pattern? Just have one Publisher notify everyone who's interested, no manual calls:

for (Observer obs : observers) { obs.update(data); }

One loop, infinite peace ✌️

👩💻 Person 1: So like... each module just listens for updates?

👨💻 Person 2: Exactly! They'll implement something like:

public interface Observer { void update(Object data); }

And your main service just becomes the Subject, maintaining who's subscribed:

public void registerObserver(Observer o) { observers.add(o); }

No hard dependencies. Total decoupling 💡

👩💻 Person 1: Ohhh, I get it. So if tomorrow I add a new NotificationService, I don't have to change the core logic, just register it.

👨💻 Person 2: Exactly! Plug-and-play modules. It's like saying: "I'll emit the event; whoever cares, listen up."

👩💻 Person 1: So basically… Publisher + Subscribers = Observer Pattern.
registerObserver() when you care
removeObserver() when you don't
notifyObservers() when data changes

👨💻 Person 2: Boom 💥 Scalable, modular, and no spaghetti code.

💡 Takeaway: Stop hardcoding dependencies. Let your components observe, not depend. No more tight coupling, total modular vibe 🔥

#ObserverPattern #TechSimplified
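Putting the fragments from the conversation together, here is a compact, runnable sketch of the same pattern. It is written in TypeScript rather than Java purely for brevity; the structure, a Subject maintaining a subscriber list and Observers implementing a single update method, is identical.

```typescript
// Observer pattern: the Subject emits, subscribers listen. No hard wiring.
interface Observer<T> {
  update(data: T): void;
}

class Subject<T> {
  private observers: Observer<T>[] = [];

  registerObserver(o: Observer<T>): void {
    this.observers.push(o);
  }

  removeObserver(o: Observer<T>): void {
    this.observers = this.observers.filter((obs) => obs !== o);
  }

  notifyObservers(data: T): void {
    // One loop, infinite peace: every subscriber gets the update.
    for (const obs of this.observers) obs.update(data);
  }
}

// Concrete observers: plug-and-play modules.
const analytics: Observer<string> = { update: (d) => console.log("analytics:", d) };
const dashboard: Observer<string> = { update: (d) => console.log("dashboard:", d) };

const backend = new Subject<string>();
backend.registerObserver(analytics);
backend.registerObserver(dashboard);

// Adding a NotificationService later? Just register it; core logic untouched.
backend.notifyObservers("new order #42");
```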
Who says you have to be an engineer to ship full-stack? Today I took an idea from a sketch to a working flow you can press.

I started with the spine: a lean Postgres schema that maps branch → code → family → roles, enforced with real foreign keys. I wrote a server-side function that takes two inputs and returns ranked options. Then I wired a minimal form with multiple controls that call the same contract and render results.

The wall showed up as silence. No errors. No output. It was not a UI problem. It was data truth. A mislabeled branch value broke the join, so the function returned no rows. I normalized the labels, reapplied the foreign key, made the seed idempotent, and reran my named test (USMC / 3531). Click. Same inputs. Same outputs. Deterministic and repeatable.

What I learned today:
- Start with the data model. Most "frontend" bugs are structure problems.
- Keep matching logic at the source (DB or RPC) so there is one truth.
- Make seeds and migrations safe to re-run. Confidence should be one command away.
- Prove one end-to-end scenario before adding a second.

Full-stack is a loop: data → logic → UI → test. Learn the loop and you can ship. Do not let "I am not an engineer" be a limit. Most of the time you can do the thing. It is usually a belief telling you that you cannot.
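An idempotent seed is easier to show than to describe. Here is a minimal TypeScript sketch using the node-postgres (pg) client; the table and column names are invented for illustration, not the author's actual schema. Postgres's ON CONFLICT clause is what makes the re-run safe.

```typescript
import { Pool } from "pg";

// Connection settings come from the standard PG* environment variables.
const pool = new Pool();

// Seed rows: branch -> code mappings (hypothetical data).
const seedRows: Array<[string, string]> = [
  ["USMC", "3531"],
  ["USMC", "0311"],
  ["USA", "11B"],
];

async function seed(): Promise<void> {
  for (const [branch, code] of seedRows) {
    // ON CONFLICT DO NOTHING makes this safe to re-run: existing rows
    // are left alone, so the seed is idempotent by construction.
    await pool.query(
      `INSERT INTO branch_codes (branch, code)
       VALUES ($1, $2)
       ON CONFLICT (branch, code) DO NOTHING`,
      [branch, code]
    );
  }
}

seed()
  .then(() => console.log("seed complete (safe to run again)"))
  .catch(console.error)
  .finally(() => pool.end());
```

Run it twice and you get the same database state both times, which is exactly the "confidence one command away" property the post describes.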
From idea 💡 to prototype 🚀 in minutes with Traversal:
1️⃣ Define your data in JSON
2️⃣ Query instantly with GraphQL
3️⃣ Update in real-time, no migrations, no stress

Build faster. Ship smarter.

#DevHacks #GraphQL #WebDevelopment #CodingTips #BoldMines
📦 The Day I Learned That "One Schema Change" Can Bring Everything Down

It started like any other sprint. All I had to do was add a new column, just one tiny column, to a data feed. No big deal, right?

Well… that "small change" rippled through five downstream jobs, two dashboards, and a machine-learning model that really didn't enjoy surprise columns. 😅

Here's what that incident taught me 👇

🔹 Schema evolution isn't a technical detail, it's a design principle. Managing changes gracefully is as important as building the pipeline itself.
🔹 Contracts matter. If producers and consumers don't agree on schemas, you're building on quicksand.
🔹 Version your schemas like you version your code. Avro, Protobuf, and schema registries exist for a reason.
🔹 Automate validations before deployment; no one wants to debug silently dropped columns at 3 a.m.

Since then, I treat schemas as first-class citizens in every data system I design. A "tiny change" isn't tiny when it touches every layer of the stack.

🧠 Lesson learned: stability isn't about avoiding change, it's about handling change well.

#DataEngineering #SchemaEvolution #DataPipelines #BigData #Kafka #Airflow #DataArchitecture #CloudComputing #ResilienceEngineering #LearningInPublic #ModernDataStack
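As a toy illustration of the "contracts matter" point, here is a self-contained TypeScript sketch of a backward-compatibility check, in the spirit of what Avro or a schema registry does for you. The schema shape is invented for the example; real registries enforce much richer rules.

```typescript
// A deliberately simplified schema: named fields with optional defaults.
interface Field {
  name: string;
  type: string;
  default?: unknown;
}
interface Schema {
  fields: Field[];
}

// Backward compatible = consumers on the old schema can still read new data:
// no field may be removed, and every added field needs a default value.
function backwardCompatibilityProblems(oldSchema: Schema, newSchema: Schema): string[] {
  const problems: string[] = [];
  const newFields = new Map(newSchema.fields.map((f) => [f.name, f]));
  const oldNames = new Set(oldSchema.fields.map((f) => f.name));

  for (const f of oldSchema.fields) {
    if (!newFields.has(f.name)) problems.push(`removed field: ${f.name}`);
  }
  for (const f of newSchema.fields) {
    if (!oldNames.has(f.name) && f.default === undefined) {
      problems.push(`new field without default: ${f.name}`);
    }
  }
  return problems; // empty means the change is safe for old consumers
}

const v1: Schema = { fields: [{ name: "id", type: "string" }] };
const v2: Schema = {
  fields: [
    { name: "id", type: "string" },
    { name: "region", type: "string" }, // the "one tiny column"... no default!
  ],
};
console.log(backwardCompatibilityProblems(v1, v2));
// => ["new field without default: region"]
```

Running a check like this in CI is the "automate validations before deployment" step: the risky column is flagged at build time instead of at 3 a.m.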
GraphQL is an open-source query language for APIs and a server-side runtime for executing those queries. It provides a strongly typed schema that defines your data and the relationships between types, making APIs more flexible and predictable.
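As a small illustration of that schema-first idea, here is a minimal TypeScript example using the reference graphql npm package (the call signature shown assumes graphql v16). The schema and resolver are toy examples.

```typescript
import { buildSchema, graphql } from "graphql";

// The strongly typed schema: every field and relationship declared up front.
const schema = buildSchema(`
  type Author {
    name: String
    books: [String]
  }
  type Query {
    author(name: String!): Author
  }
`);

// A toy resolver backed by an in-memory map instead of a real database.
const authors: Record<string, { name: string; books: string[] }> = {
  Tolkien: { name: "Tolkien", books: ["The Hobbit"] },
};

const rootValue = {
  author: ({ name }: { name: string }) => authors[name],
};

graphql({
  schema,
  source: `{ author(name: "Tolkien") { name books } }`,
  rootValue,
}).then((result) => console.log(JSON.stringify(result.data)));
// => {"author":{"name":"Tolkien","books":["The Hobbit"]}}
```

The predictability comes from the type system: a query asking for a field the schema does not declare is rejected before any resolver runs.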
How does Node.js pipe manage data flow?

In Node.js, the pipe method is a powerful feature that manages data flow between streams effectively. Here's how it works:

Stream Connection: The pipe method connects a readable stream (like a file or network resource) directly to a writable stream (like an HTTP response), allowing data to flow seamlessly from one to the other without intermediate storage.

Backpressure Management: One of the key benefits of using pipe is its automatic handling of backpressure. When the writable stream cannot keep up with the readable stream's data flow, pipe pauses the readable stream, preventing more data from being read until the writable stream is ready to accept it. This ensures that your application does not overload memory by trying to process more data than it can handle at a time.

Simplified Code: By using pipe, you eliminate the need to manually handle data events and buffering, streamlining your code. It simplifies tasks such as file transfers and real-time data processing.

Event Listening: Readable streams emit specific events, such as "data" when new data is available and "end" when there is no more data to read. The pipe method takes care of listening to these events behind the scenes, letting you focus on the data-processing logic instead.

Overall, the pipe method enhances performance and reduces memory usage by efficiently managing continuous data flow in Node.js applications. It's a key feature that makes streams very powerful for handling data in real-time applications.

#nodejs #pipe
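Here is a minimal sketch of pipe in action: a gzip file copy. The file names are placeholders. One caveat worth knowing: plain .pipe() does not forward errors between streams, which is why each stage below gets its own error handler (the newer stream.pipeline helper handles this for you).

```typescript
import { createReadStream, createWriteStream } from "node:fs";
import { createGzip } from "node:zlib";

// Chain: file -> gzip transform -> file. pipe() handles backpressure:
// if the disk write lags, the read stream is paused automatically.
createReadStream("input.log")
  .on("error", (err) => console.error("read failed:", err))
  .pipe(createGzip())
  .pipe(createWriteStream("input.log.gz"))
  .on("error", (err) => console.error("write failed:", err))
  .on("finish", () => console.log("compressed without buffering the whole file"));
```

However large input.log is, memory stays roughly constant: only a small chunk is in flight at any moment, exactly the behavior described above.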
Fluent Bit fundamentals: everything starts with the event, and the plugins that make it useful. 👏 👏 👏

In v1.9+, an event's metadata expands beyond just the Tag, sitting alongside the other two core elements: the Timestamp and the Record (logs, metrics, traces). This primer sets up the routing, processing, and extensibility you'll use everywhere.

👉 Read the next article in the series: https://bit.ly/4mWQ7kj
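To picture the event model the post describes, here is an illustrative TypeScript shape: a Tag for routing, a Timestamp, and a Record payload. This is a mental model only, not Fluent Bit's internal representation or its API.

```typescript
// Illustrative only: the three core elements of a Fluent Bit event.
interface FluentBitEvent {
  tag: string;                      // drives routing to outputs
  timestamp: number;                // seconds since the epoch
  record: Record<string, unknown>;  // the payload: a log line, metric, etc.
}

const example: FluentBitEvent = {
  tag: "app.access",
  timestamp: Date.now() / 1000,
  record: { method: "GET", path: "/health", status: 200 },
};

// A router matches on the tag, e.g. a rule for "app.*" sends this event
// to a given output plugin; processors can rewrite the record en route.
console.log(`route '${example.tag}' at ${example.timestamp}`, example.record);
```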
Workday developers: speed matters… but so does focus. Taking away the repetitive coding work means developers can spend their time where it counts, building smarter apps, faster. That's the real win here.