Free code + simple instructions to build your own API! This is a simple API to request query fanout data for your keywords. Whether you're building automated content brief workflows or just want to experiment with building your own API, this is an easy one to deploy. In the README, I've included both GUI and CLI instructions for setting it up. Have fun building! https://lnkd.in/gxCjVN5y
💧 Node.js Streams — Handling Data the Smart Way

If you’ve ever worked with large files or continuous data (like video, logs, or network responses), you’ve probably hit a performance wall using fs.readFile() or fs.writeFile(). That’s where Streams come in — they let you process data piece by piece instead of loading it all at once into memory. Think of it like sipping water from a bottle instead of trying to gulp it all in one go.

🔄 What Are Streams in Node.js?
Streams are built-in Node.js interfaces for efficient, chunk-by-chunk data handling. There are 4 main types:
1️⃣ Readable – read data from a source (e.g., file, HTTP request body)
2️⃣ Writable – write data to a destination (e.g., file, HTTP response)
3️⃣ Duplex – can both read & write (e.g., TCP socket, WebSocket)
4️⃣ Transform – Duplex + the ability to modify data on the fly (e.g., compression, encryption)

Backpressure, simplified: backpressure happens when the destination (Writable) can’t process data as fast as the source (Readable) sends it. Node.js handles this automatically when you use .pipe(), but you can manage it manually with .pause() and .resume() if needed.

🔌 Real-World Analogy: Imagine a water flow system:
🚰 Water tank = Readable stream
➡️ Pipe = .pipe()
💧 Filter = Transform stream
🪣 Bucket = Writable stream
Each component plays a role in making sure data flows smoothly and efficiently.

🔄 Duplex vs Transform
A Duplex stream is like a walkie-talkie — it can send and receive, but both channels are independent. A Transform stream is like a translator — it reads, modifies, and outputs automatically.
✅ Duplex → for two-way communication (like sockets)
✅ Transform → for “input → process → output” (like gzip, encryption)

💡 When to Use Streams
- Handling large files (>100MB)
- Real-time data (logs, video, network responses)
- Piping multiple transformations together
- Avoiding memory overload from reading full files

📄 I’ve prepared a complete document and runnable code examples on Node.js Streams — covering Readable, Writable, Duplex, and Transform streams with practical analogies.
📁 GitHub repo with runnable examples + my full write-up: https://lnkd.in/g8ErAxJq
📄 Direct link to the doc I prepared: https://lnkd.in/gA98M5gv

🚀 Takeaway: Streams make Node.js truly powerful for handling real-world data. Once you understand how .pipe() and backpressure work, you unlock a whole new level of performance.

#Nodejs #Streams #BackendDevelopment #JavaScript #AsyncProgramming #WebDevelopment #Performance #DevJourney
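To make the flow concrete, here is a minimal sketch (not taken from the linked repo; the file names are placeholders) that pipes a large file through a gzip Transform into a new file, using stream.pipeline() so errors and backpressure are handled for you:

// Minimal sketch: compress a large log file chunk by chunk.
// pipeline() wires Readable -> Transform -> Writable together and
// propagates errors and backpressure automatically.
const fs = require('fs');
const zlib = require('zlib');
const { pipeline } = require('stream');

pipeline(
  fs.createReadStream('access.log'),     // Readable: the source file (placeholder name)
  zlib.createGzip(),                     // Transform: compress on the fly
  fs.createWriteStream('access.log.gz'), // Writable: the destination
  (err) => {
    if (err) {
      console.error('Pipeline failed:', err);
    } else {
      console.log('Pipeline succeeded.');
    }
  }
);

The same shape works for any Readable → Transform → Writable chain; swap zlib.createGzip() for a crypto cipher stream and you get encryption instead of compression.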
Customer Churn Prediction
- Built using Python and Claude Code
- Backend deployment: Render
- Frontend deployment: Streamlit

✨ Machine Learning Pipeline: Trained a Logistic Regression model on 440K+ customer records
🚀 REST API: FastAPI backend with individual & batch prediction endpoints, complete with interactive Swagger documentation
🌐 Interactive Web App: Streamlit frontend where users can input customer data or upload CSV files for bulk predictions
🐳 Production-Ready Deployment: Containerized with Docker, automated CI/CD via GitHub Actions, deployed on Render & Streamlit Cloud

📊 Key Features:
• Real-time single-customer churn predictions
• Batch processing via CSV upload
• Risk categorization (Low/Medium/High)
• RESTful API with comprehensive documentation
• Responsive web interface
• Automated testing & deployment pipeline

🛠️ Tech Stack:
ML & Data: scikit-learn, pandas, numpy
Backend: FastAPI, uvicorn, pydantic
Frontend: Streamlit
DevOps: Docker, GitHub Actions, Render, Streamlit Cloud
Data Processing: 11 customer features including demographics, usage patterns, and service interactions

🎯 Business Impact: This solution enables businesses to:
• Identify at-risk customers proactively
• Optimize retention campaign targeting
• Allocate resources more effectively
• Gain insights into churn drivers

Try it: https://lnkd.in/e5z_w4cj
GitHub: https://lnkd.in/eYwvvQPJ
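As a quick illustration of how a client might call an API like this one, here is a hedged JavaScript sketch; the host, the /predict path, and the feature names are assumptions for illustration rather than details from the linked repo, so check the Swagger docs for the real schema:

// Hypothetical single-customer prediction request.
// Endpoint path and field names are placeholders; the deployed
// API's interactive Swagger page documents the actual schema.
const customer = {
  tenure_months: 14,
  monthly_charges: 74.5,
  support_calls: 3,
};

fetch('https://example-churn-api.onrender.com/predict', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify(customer),
})
  .then((res) => res.json())
  .then((result) => console.log('Churn prediction:', result))
  .catch((err) => console.error('Request failed:', err));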
Moving into phase 3 with LangChain & Agents after working through the individual components (loaders, chunkers, embeddings, vector DBs, retrievers, LLM) and documenting my experimental project.

1. LangChain — Orchestration for LLM + Retrieval
Why: All the bits we built earlier (retriever, vector DB, LLM) need glue. LangChain gives you that — a framework to chain, manage, and swap components without rewriting everything.
How:
- Use LangChain’s RetrievalQA or ConversationalRetrievalChain to connect retriever + LLM with prompt templates.
- Define chain types: stuff, map_reduce, refine.
- Use middleware, callbacks, and agent loops to extend logic.
Optimization tips:
- Modularize so you can swap the embedding model or retriever easily.
- Use caching / memoization inside chain runs to reduce redundant work.
- Monitor chain timing & logs to spot bottlenecks.

2. Agents & Tools — Making Systems Interactive
Why: If you want your system not just to answer but to take actions (e.g., fetch web data, call APIs, write to a spreadsheet), agents + tool integrations are the next step.
How:
- LangChain “Agents” let you define a set of tools (e.g., search engine, database queries) that the agent can choose among.
- The agent uses LLM reasoning to pick a tool, call it, get the result, and feed it back into the chain.
- You can build multi-step reasoning flows (the agent can call tool A, then B, then answer). A generic sketch of this loop is below.
Optimization tips:
- Limit the tool set (too many choices confuse the agent).
- Use prompt templates to govern tool usage (rules, boundaries).
- Rate-limit or sandbox tools to avoid abuse or endless loops.

Experimental project: I wanted something practical and useful for me, so I built DataSight, a personal knowledge explorer powered by LangChain + RAG + Agents.
What it does:
- Upload PDFs or text files (research papers, internal docs).
- Ask natural questions like “Summarize the 2023 revenue section” or “Compare product A and B.”
- The system retrieves, reasons, and answers — with citations.
- If data is missing, the agent can call a web-search tool to fill the gap.
Tech stack: Python 3.10+, LangChain, FAISS (local vector DB), SentenceTransformers (all-MiniLM-L6-v2), OpenAI or HuggingFace LLM, DuckDuckGo Search API for web fallback, dotenv + logging.

I’ve uploaded the code on GitHub: https://lnkd.in/dAEcMSDu
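Here is the agent loop from the post boiled down to a framework-agnostic sketch, written in JavaScript rather than the project's Python stack, with callLLM() and the tool bodies as stand-ins; every name here is hypothetical rather than LangChain's actual API:

// Hypothetical agent loop: the LLM picks a tool, we run it, feed the
// observation back into the context, and stop at a final answer.
const tools = {
  search: async (query) => `Top web results for "${query}" ...`,
  database: async (query) => `Rows matching "${query}" ...`,
};

async function callLLM(prompt) {
  // Placeholder for a real model call (OpenAI, HuggingFace, etc.).
  return { action: 'final_answer', input: `Answer based on ${prompt.length} chars of context.` };
}

async function runAgent(question, maxSteps = 5) {
  let context = `Question: ${question}\n`;
  for (let step = 0; step < maxSteps; step++) {
    const decision = await callLLM(context); // e.g. { action: 'search', input: '...' }
    if (decision.action === 'final_answer') return decision.input;
    const tool = tools[decision.action];
    if (!tool) throw new Error(`Unknown tool: ${decision.action}`);
    const observation = await tool(decision.input);
    context += `Tool ${decision.action} returned: ${observation}\n`;
  }
  return 'Stopped after max steps without a final answer.';
}

runAgent('Compare product A and B').then(console.log);

The maxSteps cap is the same guardrail as the "avoid endless loops" tip above: without it, a confused agent can bounce between tools forever.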
I’m currently working on API integration for the Finance Dashboard in our project. At first, I started using useEffect to fetch the data. It was what I was used to, and it worked fine. But along the way, my teammate suggested I try TanStack Query, and honestly, it’s making a big difference. With TanStack Query, I don’t have to manually handle loading, error states, or refetching; everything just works smoothly. It also caches the data and automatically updates it when needed.

Here’s a quick comparison 👇

Using useEffect:

useEffect(() => {
  const fetchData = async () => {
    setLoading(true);
    try {
      const res = await fetch("/api/finance");
      const data = await res.json();
      setFinanceData(data);
    } catch (error) {
      console.error(error);
    } finally {
      setLoading(false);
    }
  };
  fetchData();
}, []);

Using TanStack Query:

const { data, isLoading, error } = useQuery({
  queryKey: ["financeData"],
  queryFn: async () => {
    const res = await fetch("/api/finance");
    return res.json();
  },
});

The second approach is cleaner, faster, and easier to maintain, especially for dashboards that constantly need fresh data. It’s interesting to see how one suggestion can change how you approach API handling in React. Always open to learning and improving!

#FrontendDevelopment #ReactJS #TanStackQuery #LearningInPublic #WebDevelopment
In Next.js, ISR (Incremental Static Regeneration) and SSR (Server-Side Rendering) are two pre-rendering methods. ISR generates pages at build time and updates them in the background based on a set revalidation period. This allows fast loading with occasional content refresh, making it ideal for blogs, product pages, and semi-static content. SSR, on the other hand, generates pages on every request at runtime, ensuring data is always fresh but with higher server load and slower response times. It is best suited for dashboards, user-specific data, or frequently changing content.
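A minimal pages-router sketch of the difference; the file names, URLs, and the 60-second window are placeholders, and each function lives in its own page because Next.js doesn't allow both in one file:

// pages/products.js (ISR): built once, then re-generated in the
// background at most every 60 seconds after a request comes in.
export async function getStaticProps() {
  const res = await fetch('https://example.com/api/products');
  const products = await res.json();
  return { props: { products }, revalidate: 60 };
}

// pages/dashboard.js (SSR): runs on the server for every request,
// so data is always fresh but each request pays the rendering cost.
export async function getServerSideProps() {
  const res = await fetch('https://example.com/api/dashboard');
  const dashboard = await res.json();
  return { props: { dashboard } };
}

// (Each page would also default-export its React component, omitted here.)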
I thought Supabase edge functions would be plug-and-play. Five hours later, I was still buried in errors, had burned through a bunch of API credits, and didn’t even have a working deploy. So I switched things up and gave OpenAI’s Codex a shot. I synced my project to GitHub, gave Codex access, and opened the Copilot workspace. What happened next felt kind of unreal. Codex opened a terminal, figured out what was missing, fixed broken parts, installed packages I hadn’t caught, and even flagged some permission issues I completely overlooked. It just kept going until the errors were gone. No more hunting through Stack Overflow posts. No more guessing what might work. Once Codex was finished, I reviewed the diff, opened a pull request, and merged clean code into the main branch. Honestly, I’m still kind of amazed by how smooth that was. If you're working on backend setups and hitting roadblocks, especially with newer tools like Supabase or edge deployments, Codex might be the quiet teammate you didn’t realize you needed. Anyone else using AI tools in their dev workflow to debug? I’d love to hear what’s been working for you.
Types of RAG - Most people think RAG is “fetch top-k, paste, pray.” In reality, it’s a toolbox—pick the right tool for the question.
- Classic RAG: Fetch a few docs and answer. Great for FAQs and policy lookups.
- Query-Expansion: Rephrase the question first, then search. Helps with vague or slangy asks.
- Hybrid Search (vectors + keywords): Combine meaning + exact terms. Best default for messy corpora.
- Rerank-First: Over-retrieve 50–200, keep the top 5–10. Cuts noise in big wikis.
- Multi-Hop: Break one hard question into steps, retrieve per step. Explains “why/how” across docs.
- Agentic / ReAct: Plan → retrieve → reason → (maybe) retrieve again. For troubleshooting and multi-step tasks.
- SQL / Data RAG: Generate safe queries and cite numbers. For metrics, KPIs, dashboards.
- Graph-Aware: Follow entities and relationships, not just text. Great for provenance and timelines.
- Self-Checking (Self-RAG): If evidence is weak, re-fetch. Improves correctness with citations.
- Memory-Augmented: Pull user/project context too. Personalized copilots and account-specific help.
- Streaming / Live-Data: Read fresh feeds with short caches. Incidents, markets, ops, news.
- Multimodal: Retrieve images, code, diagrams alongside text. Design reviews and code search.
Rule of thumb: Start with Hybrid + Rerank, add Query-Expansion for vague queries, use Multi-Hop/Agentic for complex tasks, SQL for numbers, Streaming for freshness.
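To make the "start with Hybrid + Rerank" default concrete, here is a hedged JavaScript sketch that fuses a keyword-ranked list and a vector-ranked list with reciprocal rank fusion (one common way to combine the two; the doc IDs and the k constant are placeholders), then keeps a small top slice for reranking:

// Hypothetical hybrid-search fusion. Each input is an array of doc IDs
// already ranked by its own retriever (keyword/BM25 vs. embeddings).
// Reciprocal rank fusion scores each doc by 1 / (k + rank) per list.
function reciprocalRankFusion(rankedLists, k = 60) {
  const scores = new Map();
  for (const list of rankedLists) {
    list.forEach((docId, rank) => {
      const prev = scores.get(docId) ?? 0;
      scores.set(docId, prev + 1 / (k + rank + 1));
    });
  }
  // Sort by fused score, highest first, and return just the IDs.
  return [...scores.entries()]
    .sort((a, b) => b[1] - a[1])
    .map(([docId]) => docId);
}

// Over-retrieve from both retrievers, fuse, then keep a small top slice
// for the reranker / LLM (the "Rerank-First" idea).
const keywordHits = ['doc7', 'doc2', 'doc9', 'doc4'];
const vectorHits = ['doc2', 'doc5', 'doc7', 'doc1'];
const fused = reciprocalRankFusion([keywordHits, vectorHits]);
console.log(fused.slice(0, 3)); // e.g. ['doc2', 'doc7', ...]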
🚨 Client Refuses From His Own Words! You won’t believe what happened recently… A client approached me with a project: a DCF Analyzer tool that analyzes Excel sheets, detects formulas and errors, and extracts data accurately from multiple sheets. The front-end was already made, so my job was to:
- Add Google Authentication (with JWT sessions & protected routes)
- Integrate Supabase for backend & file uploads
- Allow each user to upload up to 5 files/day
- Ensure accurate Excel extraction & analysis
- And later… build a landing page, all for $150 💻

I agreed, not for the money, but because I believed in building long-term client trust. So I worked hard, tested everything locally and on Lovable, debugged every error, implemented authentication properly, and finally pushed the code.

Then came the twist… 😅 After a week, the client said: “Nothing has changed, I’ll only pay $50. My time is precious.” I was speechless for a moment. 😶 Still, I calmly rechecked the entire project, spent 3 more days testing again… and everything worked flawlessly.

Finally, during a meeting, the truth came out. 👉 The client had been opening the wrong URL the entire time! And the best part? I had already told him five times earlier that his URL was incorrect! So yes, the real bug wasn’t in the code; it was in communication.

💡 Moral of the story: Don’t undersell your worth. I charged $150 for a project easily worth $300+. Be kind, but don’t let kindness undervalue your effort. Because sometimes the code runs perfectly, but the understanding crashes. 😄

P.S. To all developers and freelancers out there: test your code twice, but also test your client’s attention. It might save you days of unnecessary debugging! 😅

Don’t forget to follow 💫❤️

#DeveloperLife #FreelancerJourney #ClientStories #WebDevelopment #MERNStack #Supabase #Lovable #SoftwareEngineer #MoralStory #FreelanceTips
I've recently learned about the new useActionState hook introduced in React 19. Managing async form submissions, validations, and UI feedback used to mean juggling multiple useState hooks, onSubmit handlers, and manual resets. Now? It’s all declarative.

Before:

const [comment, setComment] = useState('');
const [loading, setLoading] = useState(false);
const [success, setSuccess] = useState(false);

const handleSubmit = async (e) => {
  e.preventDefault();
  setLoading(true);
  await postComment(comment);
  setLoading(false);
  setSuccess(true);
};

return (
  <form onSubmit={handleSubmit}>
    <input value={comment} onChange={(e) => setComment(e.target.value)} name="comment" />
    <button type="submit" disabled={loading}>Post</button>
    {success && <p>Comment posted!</p>}
  </form>
);

Now:

const [state, formAction] = useActionState<{ success: boolean }, FormData>(
  async (prevState, formData) => {
    const comment = formData.get('comment') as string;
    await postComment(comment);
    return { success: true };
  },
  { success: false }
);

return (
  <form action={formAction}>
    <input name="comment" />
    <button type="submit">Post</button>
    {state.success && <p>Comment posted!</p>}
  </form>
);

The hook automatically manages:
✅ Pending states during submission
✅ Error handling and state updates
✅ Form data serialization
✅ Server action integration

But here's what I'm curious about:
🤔 Are you already using useActionState in production? How's your experience?
🤔 What edge cases have you encountered that I should be aware of?
🤔 Is this the future of form handling, or are there scenarios where the traditional approach still wins?

Would love to hear from developers who've been using this in real projects. What am I missing?

#React19 #useActionState #FrontendDevelopment #WebDev #ReactJS #NextJS #DeveloperLearning #CleanCode
Automate Notion updates with a ready-to-import n8n template — build a reliable, searchable pipeline from incoming data to Notion and logs.

What this workflow does:
- Triggers on a POST to a webhook and ingests payloads
- Splits long text into chunks and generates OpenAI embeddings
- Stores vectors in Supabase and queries them for retrieval
- Uses window memory + a RAG agent (Anthropic) to produce context-aware responses
- Appends results to Google Sheets for auditable logs
- Sends Slack alerts on errors

Why use it:
- Eliminate manual Notion edits and keep a clear audit trail
- Make content semantically searchable and retrievable
- Combine vector search with LLM reasoning for more accurate outputs

Quick prerequisites: n8n instance, OpenAI API key, Supabase project, Google Sheets OAuth, Slack credentials (Anthropic key optional).

Want to try it? Check the template link in the first comment below to import it directly into n8n and get started.

#n8n #Notion #NotionAPI #WorkflowAutomation #APIAutomation #AIAutomation #OpenAI #Embeddings #RAG #RetrievalAugmentedGeneration #VectorSearch #Supabase #VectorDatabase #GoogleSheets #Slack #Webhook #LowCode #DataAutomation #LLM #AIAgents
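If you're curious what triggering a workflow like this looks like from the outside, here is a hedged sketch of the initial POST; the webhook URL and payload fields are placeholders, since the real URL comes from your own n8n instance and the template's webhook node:

// Hypothetical trigger call. n8n generates the real webhook URL when the
// workflow is activated; the payload shape is whatever your downstream
// nodes expect to read.
fetch('https://your-n8n-instance.example.com/webhook/notion-sync', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    title: 'Weekly metrics summary',
    content: 'Long text to be chunked, embedded, and stored...',
    source: 'crm-export',
  }),
})
  .then((res) => res.text())
  .then((ack) => console.log('Workflow accepted:', ack))
  .catch((err) => console.error('Trigger failed:', err));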