Escape Codebase Hell in Vibe Coding: 5 Fixes for Messy AI-Generated Code

OVERVIEW 🔍
Vibe coding with tools like v0 is fast. But 40%+ of users end up in codebase hell. Here's how to keep things clean.

____________________________________________________

WHY CODEBASE HELL HAPPENS ⚠️
AI coding tools are built for speed, not structure.
- Duplicated components: six versions of the same button
- Context loss: new code ignores old patterns
- Hallucinations: props or CSS classes that don't exist
👉 Great for quick UI prototypes in Lovable, but not scalable unless cleaned up.

____________________________________________________

TREAT BUILDS AS THROWAWAY SCAFFOLDING 🚀
Do not get attached to v1.
- Use it to validate ideas and UI in tools like Replit
- Then restart with structure baked in
🛠️ Tool Tip: Use GitHub plus create-react-app or Next.js starter templates as a clean slate for v2.

____________________________________________________

LOCK IN A CORE COMPONENT LIBRARY 🧩
Stop duplication before it starts.
- Define the essentials: buttons, forms, modals, cards
- Keep them in /components or even a separate repo
🛠️ Tool Tip:
- Use Storybook to visualise and standardise components
- Store them in a private npm package (e.g. via Verdaccio) for reuse across projects

____________________________________________________

AUTOMATE CLEANUP WITH TOOLS AND AI 🧹
Every generation cycle should have a cleanup cycle.
🛠️ Tool Tip:
- Run npx prettier --write . for consistent formatting
- Run npx eslint . for linting
- Use depcheck (npx depcheck) to find unused dependencies
- In Claude, prompt: "Scan my repo. Find duplicate components and unused CSS. Suggest consolidations."

____________________________________________________

BUILD REPO HYGIENE RITUALS 📑
Small habits prevent big messes.
🛠️ Tool Tip:
- Use Husky + lint-staged to auto-run Prettier/ESLint before every commit
- Add a README.md with clear prop types and naming conventions
- Use Bit.dev or Plasmo for sharable, versioned components

____________________________________________________

BOTTOM LINE 📌
Vibe coding tools like Replit, Cursor, Claude, Lovable, and v0 are not broken. The mistake is using prototypes as production. Speed is your ally only if you pair it with structure.

Prototype fast. Reset smart. Maintain clean. That is how you escape codebase hell.

#lovable #replit #vibecodingbugs #vibecoding #aicode #debugging #cursor #claudecode #buildinpublic
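The Husky + lint-staged ritual above needs only a few lines of configuration. A minimal sketch, assuming Husky v9+ and a standard npm project (the file globs are illustrative, adjust them to your stack):

```json
{
  "scripts": {
    "prepare": "husky"
  },
  "lint-staged": {
    "*.{js,jsx,ts,tsx}": [
      "prettier --write",
      "eslint --fix"
    ]
  }
}
```

With `husky` and `lint-staged` installed as dev dependencies, run `npx husky init` once and put `npx lint-staged` in `.husky/pre-commit`; staged files are then formatted and linted automatically before every commit.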
-
A Unison vibe coding adventure: the other day I needed a function that found the longest consecutive run in a sorted ‘Map Int a’, starting from its minimum key. There is a logarithmic-time solution in a few lines, but I decided to try vibing it. I asked Claude Code for an implementation that ran in sub-linear time, thinking it'd be no sweat. It has a shallow but encyclopedic knowledge of algorithms and data structures.

Claude got the spec for the function (I had it interview me) and had no trouble producing an implementation that typechecked and passed the tests I approved, but the implementation was complicated and still linear. So this is a problem: randomly ignoring instructions, producing a linear-time algorithm when that wasn't what was requested. You have to be on guard for this, and an absolute expert on the task to notice.

I was more emphatic: make it logarithmic. I gave some hints, and Claude responded by making the implementation even more complicated. At this point the function was like a hundred lines of dense code, somehow, and had something like an inlined binary search implementation? My eyes glazed over scanning through it. But I noticed it wasn't properly using the structure of the ordered map, despite having digested Unison's Map documentation and read the source of various functions that work with maps and use the tree structure correctly. (Unison's MCP server makes this easy.) By the way, it is a bog-standard size-balanced tree with the approximate median key at the root.

Again a problem: writing terrible and overly complicated code and failing to check for understanding of the things being used. But it produced something that typechecked and passed basic tests! However, I told it I wasn't going to review it. I said the implementation should be under 10 lines and should make use of the pigeonhole principle, which I illustrated with an example, basically spoonfeeding the sketch of an implementation.
Claude Code was like "you're absolutely right" and then came up with a reasonable implementation. Just a few lines. Finally. But even though it purported to understand my "brilliant insight" about using the pigeonhole principle, in reviewing its implementation I saw it wasn't actually using the principle properly, so its implementation did more work than necessary. I asked it if a certain recursive call was necessary, and by golly, "you're absolutely right": Claude realized it wasn't and fixed it.

At this point, I was happy with the implementation. The tests were already fine. I asked it to write the docs, which it did very quickly, including live examples. I made a couple of tweaks. Commit.

Overall, deeply negative productivity for the actual implementation. Having its assistance with the docs and tests was a modest boost for me. Overall: net negative. This is where the tech is, folks.
-
If I told you that during coding sessions I need you to review the code you're writing and fix it as you go, would you be able to accomplish this?

Yes, absolutely. I can definitely adopt that approach. While I don't execute code in a traditional sense, I can simulate a "code review as we go" process by:

1. **Writing code incrementally:** Instead of providing a large block of code at once, I can build it step by step, explaining the logic for each section.
2. **Explicitly checking for common issues:** After writing a few lines or a function, I can pause to analyze it for potential problems like:
   * **Logical errors:** Does the algorithm make sense?
   * **Syntax errors:** Are there any typos or language-specific mistakes?
   * **Edge cases:** What happens with empty inputs, boundary values, or unexpected data?
   * **Best practices:** Is the code readable and efficient, and does it follow standard conventions for the language?
3. **Stating assumptions:** I'll clarify the assumptions I'm making so you can correct them early.
4. **Offering alternatives:** If I see a potential improvement, I can suggest it and explain the trade-offs.

**In practice, our interaction would look like this:**

**You:** "Write a Python function to find the maximum number in a list."

**Me (in review mode):** "Okay, let's start with a straightforward approach. I'll write a function skeleton and then check it.

```python
def find_max(numbers):
    # First, we should check if the list is empty.
    if not numbers:
        return None  # Handling this edge case first.
    # Now, the core logic: initialize a variable with the first element.
    max_num = numbers[0]
```

* **Review Point:** So far, we've handled the empty-list case to avoid errors. The initialization is correct. Let's continue.

```python
    # Iterate through the rest of the list.
    for num in numbers[1:]:
        # Compare each number to the current max.
        if num > max_num:
            max_num = num
    return max_num
```

* **Review Point:** This loop logic is sound. It starts from the second element (index 1) since we already used the first. It efficiently updates `max_num` when a larger number is found. This function should work correctly for lists of positive, negative, or mixed numbers.
* **Potential Improvement:** We could also use the built-in `max()` function, but the purpose here was to implement the logic. This implementation is clear and efficient (O(n) time complexity)."

---

**So, to answer directly: yes, I can absolutely do that.** Just let me know you'd like me to work in this "review-as-we-go" mode at the start of a coding session. Shall we try it with a specific task?

--DeepSeek does better than Claude AI. DeepSeek AI vs Claude AI
-
LLMs are amazing coding copilots, but their "hallucinations" can lead to buggy code or incorrect logic! 🐛 Did you know much of the fix lies in 𝐡𝐨𝐰 𝐲𝐨𝐮 𝐩𝐫𝐨𝐦𝐩𝐭 𝐭𝐡𝐞𝐦? 💡

Here are 5 quick strategies for smarter prompts to get more accurate, reliable coding assistance from LLMs:

1. 𝐁𝐞 𝐒𝐩𝐞𝐜𝐢𝐟𝐢𝐜! 🎯
- 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲: Constrain the LLM to only the info you provide or a very clear scope.
- 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝐏𝐫𝐨𝐦𝐩𝐭: "Given this Python function: def calculate_area(length, width): return length * width. Only add a docstring that explains its purpose, arguments, and return value. Do not modify the function's logic."
- 𝐑𝐞𝐬𝐮𝐥𝐭: 📄➡️📝 Precise, relevant code additions.

2. 𝐆𝐢𝐯𝐞 𝐂𝐨𝐧𝐭𝐞𝐱𝐭! 📚
- 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲: Provide all necessary background code, dependencies, or error messages upfront.
- 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝐏𝐫𝐨𝐦𝐩𝐭: "I'm using React and Redux Toolkit. Here's my slice.js: [paste slice code]. My component [paste component code] is not dispatching the action correctly. What's wrong?"
- 𝐑𝐞𝐬𝐮𝐥𝐭: 📊📈 Debugging grounded in your actual project.

3. 𝐒𝐡𝐨𝐰 𝐄𝐱𝐚𝐦𝐩𝐥𝐞𝐬! ✅
- 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲: Guide the LLM with a template or a few correct usage patterns.
- 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝐏𝐫𝐨𝐦𝐩𝐭: "I need a TypeScript interface for a user. Example: { id: number, name: string }. Now, create an interface for a 'Product' with id, name, price, and category fields, using this style."
- 𝐑𝐞𝐬𝐮𝐥𝐭: 🧑💻➡️✔️ Consistent, accurate code structures.

4. 𝐂𝐡𝐚𝐢𝐧 𝐨𝐟 𝐓𝐡𝐨𝐮𝐠𝐡𝐭! 🤔➡️📝
- 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲: Ask it to "think step-by-step" before providing the solution.
- 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝐏𝐫𝐨𝐦𝐩𝐭: "First, identify potential performance bottlenecks in this SQL query: [paste query]. Second, suggest specific indexing strategies. Finally, rewrite the query with the suggested improvements. Explain each step."
- 𝐑𝐞𝐬𝐮𝐥𝐭: 🧩✨ Logical, verifiable optimization steps.

5. 𝐒𝐞𝐥𝐟-𝐂𝐨𝐫𝐫𝐞𝐜𝐭𝐢𝐨𝐧! 🔄
- 𝐒𝐭𝐫𝐚𝐭𝐞𝐠𝐲: Have the LLM review its own initial code or explanation.
- 𝐄𝐱𝐚𝐦𝐩𝐥𝐞 𝐏𝐫𝐨𝐦𝐩𝐭: "Review the JavaScript code you just provided. Are there any edge cases not handled? Is it the most efficient way to achieve the task? Please provide alternative solutions or improvements if applicable."
- 𝐑𝐞𝐬𝐮𝐥𝐭: 🧐✍️ Refined, more robust code solutions.

Mastering these tricks makes your LLMs less of a code fantasist 🤥 and more of a reliable programming partner! ✨ What's your best prompt engineering tip for coding? Share below! 👇

#LLMs #PromptEngineering #AI #GenerativeAI #CodingHelp #SoftwareDevelopment #TechTips #NoHallucinations #CodingAssistant
-
Metaprogramming: Writing Code That Writes Code

Ever wished your code could be smarter, more flexible, or even write parts of itself? That's where metaprogramming comes in! It's a powerful concept in software development that lets programs treat other programs (or themselves) as their data. Sounds complex, but it's incredibly useful for building dynamic, adaptable software.

WHAT IS METAPROGRAMMING?
-------------------------
At its core, metaprogramming is about creating code that can read, generate, or transform other code. Instead of just solving a problem, you're writing code that solves *how* to solve problems, or how to generate the code that solves them.

Think of it like a robot factory. Instead of building cars directly, you build robots that build cars. Metaprogramming builds tools that build code.

WHY IS IT USEFUL?
-----------------
- Automation: Reduces repetitive coding tasks. If you have patterns that repeat in your code, metaprogramming can automate their creation.
- Flexibility: Allows your programs to adapt at runtime. They can modify their behavior or even generate new functionality based on changing conditions.
- Domain-Specific Languages (DSLs): Helps create mini-languages tailored for specific tasks, making code more readable and expressive for that domain.
- Frameworks and Libraries: Many powerful frameworks use metaprogramming to provide features like automatic object mapping or routing.

EXAMPLES IN ACTION (SIMPLIFIED)
-------------------------------
You might have seen metaprogramming in action without realizing it:
- Decorators (Python, TypeScript): Functions that modify other functions. `@staticmethod` or `@login_required` are common examples.
- Macros (C++, Rust, Lisp): These allow you to define new syntax or expand code before compilation.
- Code Generation: Tools that generate boilerplate code (like getters/setters in Java or API client code) from a definition.
- Reflection (Java, C#): The ability of a program to examine and modify its own structure and behavior at runtime.

CONCLUSION
----------
Metaprogramming might seem advanced, but understanding its principles can unlock new ways to write more efficient, flexible, and powerful software. It's a key tool for developers looking to move beyond just writing code, to writing code that can manage and extend itself.
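To make the decorator flavor of metaprogramming concrete, here is a minimal Python sketch (the `log_calls` and `add` names are illustrative). The decorator is itself ordinary code that takes a function and returns a new, modified function:

```python
import functools

def log_calls(func):
    """A decorator: a function that wraps another function with extra behavior."""
    @functools.wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        print(f"Calling {func.__name__} with args {args}")
        return func(*args, **kwargs)
    return wrapper

@log_calls  # equivalent to: add = log_calls(add)
def add(a, b):
    return a + b

result = add(2, 3)  # logs the call, then returns 5
```

Nothing about `add`'s own logic changed; the decorator generated a new function around it at definition time, which is exactly the "code that transforms code" idea described above.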
-
With AI coding assistants, the risk of drifting away from good design patterns and architectural principles is larger than ever. This means that for production code, a reliable automated quality-assurance pipeline is essential. The pipeline should not only check your code; it should also make sure that core architectural principles do not drift over time, which is tricky with traditional static-analysis tools (like linters and type checkers) and also tricky to check with tests.

I have found a new type of linter useful for this. For Python, an example is import-linter: https://lnkd.in/dd5_65sM. It will make sure that the layers of the application are kept pure, and help ensure that architectural principles like "clean architecture" are upheld, even as team members change and time passes. Import-linter also makes it easier for coding assistants to make changes without corrupting the design principles, since the assistant will see the QA tool failing and fix the issues.

Another mechanism I have been considering for upholding architectural principles over time is using an LLM to review a list of policies against the code. I am working on an open-source tool for this that I call 'NormyFormy'. Given a list of policies, NormyFormy outputs how well the architectural/design principles of each policy are kept, as a score from 1 to 5, along with useful comments on how to improve the code.

I created a quick prototype of NormyFormy by copying some of the code from Kasper Junge's copcon package, for doing context engineering of the code repo content. See my prototype in the link below, and let me know if there is an appetite for me developing it further: https://lnkd.in/d-uQtYyu
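For readers who have not used import-linter, a minimal sketch of its configuration (package names like `myapp` and the layer split are illustrative, not from the post). A "layers" contract says higher layers may import lower ones but never the reverse, and the `lint-imports` command fails the build when that rule is broken:

```ini
[importlinter]
root_package = myapp

[importlinter:contract:layers]
name = Clean architecture layering
type = layers
layers =
    myapp.presentation
    myapp.application
    myapp.domain
```

Run `lint-imports` in CI (or as a pre-commit hook) so that an AI assistant adding, say, a `myapp.domain` → `myapp.presentation` import gets an immediate failure to react to.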
-
𝗖𝗹𝗮𝘂𝗱𝗲 𝗔𝗴𝗲𝗻𝘁 𝗦𝗗𝗞: 𝗧𝗵𝗲 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 𝗧𝗿𝗮𝗱𝗲-𝗼𝗳𝗳 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿𝘀 𝗡𝗲𝗲𝗱 𝘁𝗼 𝗨𝗻𝗱𝗲𝗿𝘀𝘁𝗮𝗻𝗱

Anthropic just released their Python and TypeScript Agent SDK alongside Claude 4.5 Sonnet. Beyond the convenience features, there's an architectural reality developers should understand.

𝗪𝗵𝗮𝘁 𝗬𝗼𝘂'𝗿𝗲 𝗔𝗰𝘁𝘂𝗮𝗹𝗹𝘆 𝗕𝘂𝗶𝗹𝗱𝗶𝗻𝗴 𝗢𝗻
The SDK is a wrapper around the Claude Code CLI. Your Python app makes subprocess calls to a Node.js runtime where the actual agent logic lives. This means:
• System prompts and agent behavior live in Anthropic's closed runtime
• Your Python app's behavior is controlled by whichever version of the Claude Code CLI (a Node.js binary) is installed globally
• Agent behavior can change with CLI updates outside your control
• Debugging crosses process boundaries into proprietary code

𝗧𝗵𝗲 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝗶𝗰 𝗖𝗼𝗻𝘁𝗲𝘅𝘁
This reflects a broader industry shift: AI providers moving from selling tokens to selling integrated products. As inference commoditizes, companies naturally move up the stack. The SDK offers real convenience: pre-built abstractions, supposedly faster shipping, and managed complexity. But it comes with architectural commitments.

𝗪𝗵𝗲𝗻 𝗜𝘁 𝗠𝗮𝗸𝗲𝘀 𝗦𝗲𝗻𝘀𝗲
• You're comfortable with API-first tools (𝗺𝗲𝗺𝗼𝗿𝘆, 𝘁𝗲𝘅𝘁 𝗲𝗱𝗶𝘁𝗼𝗿) you don't control (and pay for per use)
• You trust Anthropic's roadmap
• Speed-to-market is the priority

𝗪𝗵𝗲𝗻 𝘁𝗼 𝗧𝗵𝗶𝗻𝗸 𝗧𝘄𝗶𝗰𝗲
• You need transparency for compliance or debugging
• Performance optimization at the agent layer matters
• You're building differentiated agent logic requiring granular control

Consider alternatives: LangChain/LlamaIndex for more control, Google's ADK, or building directly on Claude's API, with maximum transparency but no batteries included.

𝗧𝗵𝗲 𝗥𝗲𝗮𝗹 𝗤𝘂𝗲𝘀𝘁𝗶𝗼𝗻
Is this a step toward Anthropic eventually open-sourcing their Claude Code agent, giving developers both convenience and transparency? Or are we watching AI providers build closed, vertically integrated stacks where understanding the foundational layer becomes proprietary knowledge?

𝗪𝗵𝗮𝘁'𝘀 𝘆𝗼𝘂𝗿 𝘁𝗮𝗸𝗲? Are you prioritizing speed-to-market or architectural control in your AI projects?

This isn't about good or bad technology. It's about understanding the trade-offs: convenience versus control, managed complexity versus architectural flexibility. Make the choice that fits your requirements, just make it with 𝘄𝗶𝗱𝗲 𝗲𝘆𝗲𝘀 𝗼𝗽𝗲𝗻.

Engineering post: https://lnkd.in/d99znHnx

#SoftwareArchitecture #AI #Claude #Anthropic #DeveloperTools
-
Object-Oriented Programming (OOP)

OOP is a programming paradigm that organizes code into objects, which represent real-world entities. Each object contains:
- Fields (Attributes / Data Members)
- Methods (Functions / Behaviors)

✅ OOP makes code reusable, scalable, and maintainable. It is built on 4 fundamental principles 👇

🔑 The Four Pillars of OOP

🏦 1. Encapsulation
📌 Definition: Binding fields and methods into a single unit (class) and restricting direct access using access modifiers (private, public, protected).
💻 Code Example (C#):

```csharp
public class BankAccount
{
    private decimal balance;
    public void Deposit(decimal amount) => balance += amount;
    public decimal GetBalance() => balance;
}
```

🎯 Practical Example: An ATM – you only access your money through deposit/withdraw methods, not by directly opening the bank's database.

🎭 2. Abstraction
📌 Definition: Exposing only essential details while hiding complex implementation using abstract classes or interfaces.
💻 Code Example:

```csharp
public abstract class Vehicle
{
    public abstract void Start();
}

public class Car : Vehicle
{
    public override void Start() => Console.WriteLine("Car started!");
}
```

🎯 Practical Example: You press "start" in a car, but you don't see the internal combustion process.

🐶 3. Inheritance
📌 Definition: A mechanism where a child class inherits fields and methods from a parent class, enabling code reusability.
💻 Code Example:

```csharp
public class Animal
{
    public void Eat() => Console.WriteLine("Eating...");
}

public class Dog : Animal
{
    public void Bark() => Console.WriteLine("Barking...");
}
```

🎯 Practical Example: A Dog is an Animal – it inherits the ability to eat but adds barking.

🔄 4. Polymorphism
📌 Definition: The ability of one method to have multiple implementations, either by overloading or overriding.
💻 Code Example:

```csharp
public class Shape
{
    public virtual void Draw() => Console.WriteLine("Drawing shape");
}

public class Circle : Shape
{
    public override void Draw() => Console.WriteLine("Drawing circle");
}
```

🎯 Practical Example: The action "Speak" – humans talk, dogs bark, cats meow → same action, different behaviors.

✨ Summary
- Encapsulation → Bundle data & restrict access
- Abstraction → Hide complexity, show essentials
- Inheritance → Reuse and extend existing code
- Polymorphism → Same action, different forms

👉 Together, these 4 pillars form the backbone of modern programming.

⚡ Question: Which OOP principle do you use the most in your projects? 🤔
-
Today marks one year since I started vibe-coding. If I were just starting out again, here's my simplest advice for anyone new to coding and design:

1. Master the Basics: Start with the fundamentals. Learn HTML, CSS, and get a feel for a little JavaScript and Python.
2. Use Templates: Don't start projects from scratch. Find a solid GitHub template and build on it. It saves loads of setup time.
3. Try Spec-Driven Design: Check out GitHub's Spec-Driven Design to help you plan and structure your work before you code. Learn more here: https://spec.github.com/
4. Context Is Everything: Make sure you always have your key information handy. I keep a "Claude.md" file with my main notes, and I pair this with MCP tools so my AI assistant can reference exactly what I need, making my workflow faster and my results more relevant. Learn more about this setup in "How to turn Claude Code into a domain-specific coding agent": https://lnkd.in/gSVpdFsq
5. Expand Your Mind: Build your skills in product management and critical thinking. These will help you solve real problems, not just code.

If you're curious about vibe-coding or design, feel free to message me!
-
The Evolution of Development: From "Vibes" to True Engineering with AI Guardrails

Andy Rea's article on the GitGuardian blog provides a comprehensive blueprint for the future of software engineering. It deconstructs the concept of "vibe coding" and demonstrates how to implement automated guardrails that let developers focus on creativity and complex problem-solving, while AI and automation ensure code quality, security, and robustness.

The example project, Acronym Creator, serves as a practical implementation guide. It establishes a complete automation pipeline operating on two critical fronts.

Local Defense (Pre-commit Hooks): the first barrier, running on the developer's machine:
- GitGuardian ggshield: Scans staged files for secrets, including API keys and passwords, before committing, preventing potential leaks
- Black: Provides automatic code formatting, eliminating style debates in code reviews
- Flake8: Performs linting to identify errors and enforce PEP 8 standards
- Pytest with Coverage.py: Executes tests and enforces an 80% test-coverage requirement for successful commits

CI/CD Defense (GitHub Actions): the second barrier, operating in the controlled CI environment:
- GitGuardian repository scan: Analyzes the complete repository history, crucial for detecting secrets that were committed and later removed but remain in Git history
- SonarCloud: Conducts deep code quality and security analysis
- Semgrep: Performs Static Application Security Testing (SAST) to detect security vulnerabilities covered by OWASP standards
- semantic-release: Automatically generates versioning and a CHANGELOG based on conventional commit messages

The critical insight: when AI agents such as Claude Code, Copilot, or Cursor encounter guardrail errors, they don't halt progress. Instead, they use these failures as feedback to iterate and correct the code autonomously, establishing a continuous improvement cycle. This approach isn't about replacing developers but augmenting their capabilities.
It allows professionals to focus on what truly matters: architecture, system design, complex trade-offs, and business logic implementation. The AI functions as an advanced junior engineer handling critical but repetitive tasks, while human developers evolve into architects and strategists. The era of autonomous AI coding agents has already begun. Adopting this technology safely and at scale requires implementing robust guardrails. Explore the complete example project and use it as a template: https://lnkd.in/dXUjjeih
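The local pre-commit layer described above can be sketched as a `.pre-commit-config.yaml`. This is an illustrative sketch, not the Acronym Creator project's actual file, and the `rev` values are placeholders to replace with current releases:

```yaml
repos:
  - repo: https://github.com/gitguardian/ggshield
    rev: v1.0.0  # placeholder: pin a current release
    hooks:
      - id: ggshield   # scan staged changes for secrets
  - repo: https://github.com/psf/black
    rev: 24.0.0  # placeholder
    hooks:
      - id: black      # auto-format Python code
  - repo: https://github.com/pycqa/flake8
    rev: 7.0.0  # placeholder
    hooks:
      - id: flake8     # lint for errors and PEP 8 violations
```

With `pre-commit` installed (`pip install pre-commit`, then `pre-commit install`), these hooks run on every `git commit`; a coverage gate like the 80% pytest requirement would typically be enforced as an additional local hook or in CI.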
-
It's Day 265 of My 365-Day Coding Journey: 🚀 Merging k Sorted Lists on LeetCode! 🚀

Struggling with linked lists and efficiency? This classic problem will test both your coding skills and your problem-solving mindset! 💡

🔍 The Problem
Given an array of k sorted linked lists, the task is to merge them into one single sorted linked list. For example, given [[1->4->5], [1->3->4], [2->6]], the output should be 1->1->2->3->4->4->5->6. This problem is a great exercise in efficiency and understanding data-structure merging! 🔄

💡 The Approach
I tackled this with a Collect + Sort strategy:
1. Collect all the elements: I iterated through all k linked lists and extracted every node's value into a single array.
2. Sort: Once all values were collected, I used a standard sorting algorithm to sort the array.
3. Reconstruct the sorted list: Finally, I rebuilt the sorted linked list from the sorted array. 🔧

📝 Key Takeaways
- Simplicity over complexity: While not the most efficient solution for very large k (a min-heap would be faster), the Collect + Sort approach is simple, effective, and easy to implement.
- Think of it as a transformation: We convert the complex problem of merging k lists into a simpler one, sorting a flat array, and then convert it back.
- Efficiency matters: Understanding the complexities of sorting algorithms directly impacts performance, especially with large datasets. ⚡

💬 Challenge for you!
What strategies do you use to merge multiple sorted data structures? Do you prefer min-heaps, or have you explored other methods? Let's discuss in the comments! 👇

🎥 Check Out My Video Walkthrough
I dive deeper into the solution in my latest video: https://lnkd.in/g2R7rPmq

🤝 Join the Conversation
Tackling data structures and algorithms? Let's connect! It's always great to share insights and grow together.
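The three Collect + Sort steps above can be sketched in Python (the `ListNode` class mirrors LeetCode's usual definition; function names are my own):

```python
class ListNode:
    def __init__(self, val=0, next=None):
        self.val = val
        self.next = next

def merge_k_lists(lists):
    # Step 1: collect every value from all k lists into one flat array.
    values = []
    for head in lists:
        node = head
        while node:
            values.append(node.val)
            node = node.next

    # Step 2: sort the flat array (O(N log N) for N total nodes).
    values.sort()

    # Step 3: reconstruct a single sorted linked list from the array.
    dummy = ListNode()
    tail = dummy
    for v in values:
        tail.next = ListNode(v)
        tail = tail.next
    return dummy.next
```

As the post notes, a min-heap of the k current heads would bring this down to O(N log k), but the flat-array version is hard to beat for clarity.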
🌱 #CodingJourney #WebDevelopment #DSA #LinkedLists #LeetCode #JavaScript #ProblemSolving #TechCommunity #LearningEveryDay #GrowthMindset #DeveloperLife #CodeNewbies #365DaysOfCode