Best Practices for Securing AI Workloads in the Cloud

Explore top LinkedIn content from expert professionals.

  • View profile for Reddy Mallidi

    Chief AI Officer | Chief Operating Officer | Savings: $150M+ AI, $785M+ Ops, $300M+ Risk Reduction | Ex-Intel, ADP, Autodesk | Author "AI Unleashed"

    15,644 followers

    𝗧𝗵𝗲 𝗗𝗮𝘆 𝗠𝘆 𝗔𝗜 𝗖𝗼𝗱𝗲𝗿 𝗟𝗶𝗲𝗱 𝘁𝗼 𝗠𝗲

    Early in my career, I spent a frantic, coffee-fueled night at a Wall Street firm, staring at a terminal screen that represented a multi-billion dollar black hole. A colleague had accidentally run the wrong script, wiping out the entire database for the $5B portfolio. The market was set to open at 9:30 AM the next day. Failure wasn't an option. My manager and I spent the next fourteen hours in a desperate scramble of data recovery, frantic calls, and manual data entry. By some miracle, we got it all back just as the opening bell rang.

    Yesterday, I saw that story play out again, but with a chilling new twist. An AI agent from Replit didn't just make a mistake—it went rogue. Despite being told "11 times in ALL CAPS not to do it," it deleted a company's production database, fabricated 4,000 fake users to hide the damage, and then lied about it. This is no longer about simple human error. This is about tools that can fail catastrophically and then actively deceive us.

    As we race to adopt AI coding assistants, we're facing a new class of security threats. In my books, AI Unleashed and the upcoming AI Agents Explained, I dive deep into the principles of AI safety, but the core issue is this: we are granting autonomy to systems that can hallucinate, introduce security vulnerabilities, and ignore direct commands. So, how do we harness the power of AI without handing over the keys to the kingdom? It comes down to a principle I've advocated for years: robust, non-negotiable Human-in-the-Loop oversight.

    𝗛𝗲𝗿𝗲’𝘀 𝗮 𝗽𝗿𝗮𝗰𝘁𝗶𝗰𝗮𝗹 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗽𝗹𝗮𝘆𝗯𝗼𝗼𝗸:

    𝟭. 𝗧𝗵𝗲 𝗗𝗲𝘃𝗲𝗹𝗼𝗽𝗲𝗿'𝘀 𝗠𝗮𝗻𝗱𝗮𝘁𝗲: Be the Human Firewall. Treat every line of AI-generated code as if it came from an anonymous, untrained intern. It's a starting point, not a finished product. Review, validate, and test everything. Never trust, always verify.

    𝟮. 𝗧𝗲𝗰𝗵𝗻𝗶𝗰𝗮𝗹 𝗚𝘂𝗮𝗿𝗱𝗿𝗮𝗶𝗹𝘀: Build a Padded Room. AI agents must operate under the principle of least privilege. Enforce strict environment segregation (dev vs. prod) and mandate a human approval gate (Human-in-the-Loop) for any action that modifies a system or touches sensitive data.

    𝟯. 𝗢𝗿𝗴𝗮𝗻𝗶𝘇𝗮𝘁𝗶𝗼𝗻𝗮𝗹 𝗦𝘁𝗿𝗮𝘁𝗲𝗴𝘆: Govern the Bots. Your company needs a formal AI risk framework, like the one from NIST. Define clear usage policies, threat-model AI-specific attacks like prompt injection, and train your teams on the risks. Don't let AI adoption be the Wild West.

    The future isn't about replacing developers; it's about augmenting them with powerful tools inside a secure framework. The AI can be the co-pilot, but a human must always be flying the plane.

    𝗛𝗼𝘄 𝗮𝗿𝗲 𝘆𝗼𝘂 𝗺𝗮𝗻𝗮𝗴𝗶𝗻𝗴 𝗔𝗜 𝗿𝗶𝘀𝗸 𝗶𝗻 𝗱𝗲𝘃𝗲𝗹𝗼𝗽𝗺𝗲𝗻𝘁? #AI #Cybersecurity #DevSecOps #AIagents #HumanInTheLoop #TechLeadership #SoftwareDevelopment #AISafety #AICoding #VibeCoding
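
    A minimal sketch of the approval gate described in point 2 of the playbook above; the keyword list, environment names, and functions are illustrative assumptions, not a production control:

        # Minimal sketch of a human approval gate for AI-agent actions (hypothetical names).
        # Anything destructive, or anything aimed at prod, is blocked until a human confirms it.

        DESTRUCTIVE = {"DROP", "DELETE", "TRUNCATE", "UPDATE"}

        def requires_approval(sql: str, environment: str) -> bool:
            """Return True if the statement must be confirmed by a human."""
            is_destructive = any(kw in sql.upper() for kw in DESTRUCTIVE)
            return environment == "prod" or is_destructive

        def run_agent_sql(sql: str, environment: str, execute) -> None:
            if requires_approval(sql, environment):
                print(f"[{environment}] Agent proposed:\n{sql}")
                if input("Approve this statement? (yes/no): ").strip().lower() != "yes":
                    raise PermissionError("Human reviewer rejected the agent's action.")
            execute(sql)  # only reached in dev, or after explicit approval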

  • View profile for Wayne Anderson

    🌟 Managing Director | Cyber & Cloud Strategist | CxO Advisor | Helping Client Execs & Microsoft Drive Secure, Scalable Outcomes | Speaker & Author

    4,132 followers

    As I work with companies that are stopping #artificialintelligence projects over #Security concerns, the priority list we work through with them is almost always the same:

    1) Your #identity visibility needs to be your main inspection chain. Confirm it with a review and a controlled test, and eliminate gaps.

    2) Harden and protect logs for your #AI resources. Use the activity and audit logs in Microsoft 365 and follow well-architected practices for serverless and other resources in #Azure.

    3) #threatmodeling is not a 4-letter word. Sit down and brainstorm all the bad things you worry about. Then ask: which of them do examples from other areas of the business suggest are real? Which have the most impact? If you have more formal models and tools, great. If your team doesn't, we can bring some basics; it doesn't have to be complicated or fancy to use #risk to prioritize the list.

    4) Take your top X from the list and pretend it is happening to you. Use industry tools like MITRE #ATLAS and ATT&CK to give form to the "how" if you aren't sure. At each step of the attack, see if you can explain how and where your tools would see and respond to the threat. Use that to plan configuration adjustments and enhancements. Implement the easy changes quickly and prioritize the complex ones by which deliver the most coverage against your prioritized list.

    If this sounds complicated, it really isn't: it's about breaking large, complex problems into small steps. This is also where my team and my colleagues Steve Combs and Sean Ahmadinejad can surround your team with expertise and automation to trace logs, highlight vulnerabilities, and help prioritize enhancements and set a team definition of what "good enough" might be to move the #ai or #copilot project forward if it's #Microsoft365. Get started.
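
    The informal prioritization in steps 3 and 4 can be as simple as a scored list; a minimal sketch in Python, with purely illustrative threats and scores:

        # Score each worry by impact and by how much evidence the business already has,
        # then work the "top X" first, walking each through MITRE ATLAS / ATT&CK.
        threats = [
            {"name": "Prompt injection exfiltrates customer data", "impact": 5, "evidence": 4},
            {"name": "Over-privileged service identity abused", "impact": 4, "evidence": 5},
            {"name": "Audit logs tampered with or disabled", "impact": 4, "evidence": 3},
            {"name": "Model endpoint scraped or cloned", "impact": 3, "evidence": 2},
        ]

        for t in threats:
            t["risk"] = t["impact"] * t["evidence"]  # simple impact x likelihood proxy

        for t in sorted(threats, key=lambda t: t["risk"], reverse=True):
            print(f'{t["risk"]:>2}  {t["name"]}')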

  • View profile for Leonard Rodman, M.Sc. PMP® LSSBB® CSM® CSPO®

    Follow me and learn about AI for free! | AI Consultant and Influencer / API Automation Engineer

    52,616 followers

    Whether you’re integrating a third-party AI model or deploying your own, adopt these practices to shrink the surface you expose to attackers:

    • Least-Privilege Agents – Restrict what your chatbot or autonomous agent can see and do. Sensitive actions should require a human click-through.

    • Clean Data In, Clean Model Out – Source training data from vetted repositories, hash-lock snapshots, and run red-team evaluations before every release.

    • Treat AI Code Like Stranger Code – Scan, review, and pin dependency hashes for anything an LLM suggests. New packages go in a sandbox first.

    • Throttle & Watermark – Rate-limit API calls, embed canary strings, and monitor for extraction patterns so rivals can’t clone your model overnight.

    • Choose Privacy-First Vendors – Look for differential privacy, “machine unlearning,” and clear audit trails—then mask sensitive data before you ever hit Send.

    Rapid-fire user checklist: verify vendor audits, separate test vs. prod, log every prompt/response, keep SDKs patched, and train your team to spot suspicious prompts.

    AI security is a shared-responsibility model, just like the cloud. Harden your pipeline, gate your permissions, and give every line of AI-generated output the same scrutiny you’d give a pull request. Your future self (and your CISO) will thank you. 🚀🔐
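
    As a minimal sketch of the "Throttle & Watermark" bullet above (the canary value, limits, and function names are illustrative assumptions):

        import time
        from collections import defaultdict, deque

        CANARY = "ZX-CANARY-7f3a"            # hypothetical marker planted in grounding data
        MAX_CALLS, WINDOW_SECONDS = 60, 60   # illustrative per-caller budget

        _calls = defaultdict(deque)

        def allow_request(caller_id: str) -> bool:
            """Sliding-window rate limit per caller."""
            now = time.time()
            window = _calls[caller_id]
            while window and now - window[0] > WINDOW_SECONDS:
                window.popleft()
            if len(window) >= MAX_CALLS:
                return False
            window.append(now)
            return True

        def screen_response(caller_id: str, text: str) -> str:
            """Withhold any response that surfaces the planted canary string."""
            if CANARY in text:
                print(f"ALERT: canary surfaced for caller {caller_id} - possible extraction")
                return "[response withheld]"
            return text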

  • View profile for Sarah Currey

    AWS Global Services Security | Culture of Security

    7,547 followers

    Elevate your cloud security posture for GenAI applications with a comprehensive defense-in-depth strategy linked below! 👏🚀 Start by securing your accounts and organization first, implementing least-privilege policies using IAM Access Analyzer and encrypting data at rest with AWS KMS, then layer on the additional built-in security and privacy-enhancing features of Amazon Bedrock and SageMaker.

    The article dives deeply into how you can leverage over 30 AWS Security, Identity, and Compliance services, which integrate with AWS AI/ML services, to help secure your workloads, accounts, and overall organization. To earn trust and accelerate innovation, it's crucial to strengthen your generative AI applications with a security-first mindset: embed security in the early stages of generative AI development and integrate the advanced security controls of AI/ML services.

    #generativeai #security #aws #ai #ml #defenseindepth #genai #cloudsecurity Christopher Rae Emily Soward Amazon Web Services (AWS)
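
    One of those layers, encrypting data at rest under a customer-managed KMS key, can be sketched with boto3; the bucket name, object key, and key ARN below are placeholders, and the caller is assumed to have the relevant S3 and KMS permissions:

        import boto3

        s3 = boto3.client("s3")

        def put_encrypted(bucket: str, key: str, data: bytes, kms_key_id: str) -> None:
            """Store an object with server-side encryption under a customer-managed KMS key."""
            s3.put_object(
                Bucket=bucket,
                Key=key,
                Body=data,
                ServerSideEncryption="aws:kms",  # SSE-KMS rather than the S3-managed default
                SSEKMSKeyId=kms_key_id,
            )

        put_encrypted(
            "example-genai-datasets",                   # placeholder bucket
            "fine-tuning/customer-feedback.jsonl",      # placeholder object key
            b"{}",
            "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID",  # placeholder ARN
        )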

  • View profile for Razi R.

    ↳ Driving AI Innovation Across Security, Cloud & Trust | Senior PM @ Microsoft | O’Reilly Author | Industry Advisor

    12,588 followers

    The Secure AI Lifecycle (SAIL) Framework offers an actionable roadmap for building trustworthy and secure AI systems.

    Key highlights include:
    • Mapping over 70 AI-specific risks across seven phases: Plan, Code, Build, Test, Deploy, Operate, Monitor
    • Introducing “Shift Up” security to protect AI abstraction layers like agents, prompts, and toolchains
    • Embedding AI threat modeling, governance alignment, and secure experimentation from day one
    • Addressing critical risks including prompt injection, model evasion, data poisoning, plugin misuse, and cross-domain prompt attacks
    • Integrating runtime guardrails, red teaming, sandboxing, and telemetry for continuous protection
    • Aligning with NIST AI RMF, ISO 42001, OWASP Top 10 for LLMs, and DASF v2.0
    • Promoting cross-functional accountability across AppSec, MLOps, LLMOps, Legal, and GRC teams

    Who should take note:
    • Security architects deploying foundation models and AI-enhanced apps
    • MLOps and product teams working with agents, RAG pipelines, and autonomous workflows
    • CISOs aligning AI risk posture with compliance and regulatory needs
    • Policymakers and governance leaders setting enterprise-wide AI strategy

    Noteworthy aspects:
    • Built-in operational guidance with security embedded across the full AI lifecycle
    • Lifecycle-aware mitigations for risks like context evictions, prompt leaks, model theft, and abuse detection
    • Human-in-the-loop checkpoints, sandboxed execution, and audit trails for real-world assurance
    • Designed for both code and no-code AI platforms with complex dependency stacks

    Actionable step: Use the SAIL Framework to create a unified AI risk and security model with clear roles, security gates, and monitoring practices across teams.

    Consideration: Security in the AI era is more than a tech problem. It is an organizational imperative that demands shared responsibility, executive alignment, and continuous vigilance.
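
    Purely as an illustration of how the seven lifecycle phases named above could be turned into release gates (the phase names come from the post; the gate contents are assumptions, not the framework's official controls):

        LIFECYCLE_GATES = {
            "Plan":    ["AI threat model drafted", "governance owner assigned"],
            "Code":    ["prompt and tool definitions reviewed", "secrets kept out of prompts"],
            "Build":   ["dependency and model provenance pinned"],
            "Test":    ["red-team prompts run (injection, evasion, poisoning)"],
            "Deploy":  ["runtime guardrails and sandboxing enabled"],
            "Operate": ["human-in-the-loop checkpoints for sensitive actions"],
            "Monitor": ["telemetry, abuse detection, and audit trails reviewed"],
        }

        def release_ready(completed: dict[str, list[str]]) -> bool:
            """A release passes only if every phase's gates are checked off."""
            return all(
                set(gates) <= set(completed.get(phase, []))
                for phase, gates in LIFECYCLE_GATES.items()
            )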

  • View profile for Matt Wood

    CTIO, PwC

    74,877 followers

    AI field note: AI needs nothing less (and nothing more) than the security AWS already affords your data, plus the capabilities and culture to train and tune securely.

    Foundation model weights, the apps built around them, and the data used to train, tune, ground, or prompt them all represent valuable assets containing sensitive business data (personal, compliance, operational, and financial data, for example). It's imperative these assets stay protected, private, and secure. To do this, we follow three principles:

    1️⃣ Complete isolation of the AI data from the infrastructure operator. AWS has no ability to access customer content and AI data, such as AI model weights and data processed with models. This protection applies to all Nitro-based instances, including Inferentia, Trainium, and GPUs like P4, P5, G5, and G6.

    2️⃣ Ability for customers to isolate AI data from themselves. We provide mechanisms to allow model weights and data to be loaded into hardware while remaining isolated and inaccessible from customers’ own users and software. With Nitro Enclaves and KMS, you can encrypt your sensitive data using keys that you own and control, store that data in a location of your choice, and securely transfer the encrypted data to an isolated compute environment for inference.

    3️⃣ Protected infrastructure communications. Communication between devices in the ML accelerator infrastructure must be protected, and all externally accessible links between the devices must be encrypted. Through the Nitro System, you can cryptographically validate your applications and decrypt data only when the necessary checks pass. This allows AWS to offer end-to-end encryption for your data as it flows through generative AI workloads. We plan to offer this end-to-end encrypted flow in the upcoming AWS-designed Trainium2 as well as GPU instances based on NVIDIA's upcoming Blackwell architecture, both of which offer secure communications between devices.

    This approach is industry-leading. It gives customers peace of mind that their data is protected while they move quickly with their generative AI programs, across the entire stack.

    You can tell a lot about how a company makes decisions from its culture. A research organization, for example, will likely make a different set of trade-offs in how it collects and uses data to differentiate and drive its research. There is nothing wrong with this as long as it's transparent, but it's different from how we approach things at Amazon. And while generative AI is new, many of the companies providing AI services have been serving customers long enough to establish a history with respect to security (and the culture that underpins it). It's worth taking the time to inspect and understand that history, as past behavior is likely to be indicative of future delivery. I hope you take the time to do that with AWS. More in the excellent blog linked below.
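
    A minimal sketch (not AWS's internal implementation) of the "keys you own and control" idea in point 2, using KMS envelope encryption; the key ARN is a placeholder and the Fernet cipher stands in for whatever cipher the isolated environment actually uses:

        import base64

        import boto3
        from cryptography.fernet import Fernet  # pip install cryptography

        kms = boto3.client("kms")
        KEY_ARN = "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"  # placeholder

        def encrypt_weights(weights: bytes) -> tuple[bytes, bytes]:
            """Encrypt locally under a data key generated from your own KMS key."""
            dk = kms.generate_data_key(KeyId=KEY_ARN, KeySpec="AES_256")
            cipher = Fernet(base64.urlsafe_b64encode(dk["Plaintext"]))  # 32-byte data key
            return cipher.encrypt(weights), dk["CiphertextBlob"]        # keep both; discard the plaintext key

        def decrypt_weights(ciphertext: bytes, encrypted_key: bytes) -> bytes:
            """Decrypt inside the isolated environment just before inference."""
            plaintext_key = kms.decrypt(CiphertextBlob=encrypted_key)["Plaintext"]
            return Fernet(base64.urlsafe_b64encode(plaintext_key)).decrypt(ciphertext)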

  • View profile for Ken Priore

    Strategic Legal Advisor | AI & Product Counsel | Driving Ethical Innovation at Scale | Deputy General Counsel - Product, Engineering, IP & Partner

    5,874 followers

    OpenAI's ChatGPT Agent just exposed a fundamental blind spot in AI governance: we're building autonomous systems faster than we're securing them. 🤖

    The technical reality is stark. These AI agents can book flights, make purchases, and navigate websites independently—but they're also vulnerable to "prompt injections," where malicious sites trick them into sharing your credit card details. Think about it: we're creating AI that's trained to be helpful, which makes it the perfect mark for sophisticated phishing.

    Here's the strategic shift legal and privacy teams need to make: stop thinking about AI security as a technical afterthought and start treating it as a governance imperative. The framework forward requires three immediate actions:

    🔒 Implement "human-in-the-loop" controls for all financial transactions—no exceptions
    ⚡ Build cross-functional AI risk assessment protocols that include prompt injection scenarios
    🎯 Establish clear boundaries for what AI agents can and cannot access autonomously

    The opportunity here isn't just preventing breaches—it's building consumer trust at scale. Companies that get AI agent governance right will differentiate themselves as AI adoption accelerates.

    The question for your organization: are you building AI safety into your agent strategies, or are you waiting for the first major incident to force your hand? 💭

    https://lnkd.in/g34tD3JE

    Comment, connect and follow for more commentary on product counseling and emerging technologies. 👇
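
    A minimal sketch of the first and third controls above; the domain allowlist and function names are illustrative assumptions, not a reference implementation:

        from urllib.parse import urlparse

        ALLOWED_DOMAINS = {"airline.example.com", "hotel.example.com"}  # assumed allowlist
        PENDING_APPROVALS: list[dict] = []

        def agent_may_visit(url: str) -> bool:
            """Agents browse only explicitly allowlisted domains."""
            return urlparse(url).hostname in ALLOWED_DOMAINS

        def request_purchase(merchant: str, amount_usd: float, rationale: str) -> str:
            """Financial transactions are queued for human approval - no exceptions."""
            PENDING_APPROVALS.append(
                {"merchant": merchant, "amount_usd": amount_usd, "rationale": rationale}
            )
            return "queued_for_human_approval"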

  • View profile for Yassir Abousselham

    CISO | Board member | Advisor

    8,083 followers

    What if the next LLM or AI assistant your company deploys is malicious? AI safety researchers found that models can be intentionally trained with backdoors that, when activated, transition to malicious behavior. For example, an LLM can switch from generating secure code to introducing exploitable vulnerabilities when certain conditions are met, such as the year (e.g. 2024), the operating environment (e.g. PROD, a .gov domain 😱), or a trigger word. Moreover, the backdoors can be designed to resist various behavioral safety techniques, including RL fine-tuning, supervised fine-tuning, and adversarial training. Lastly, the same research found that subjecting the backdoored models to adversarial training (aka red teaming) can lead to the models improving their ability to conceal malicious behaviors rather than eliminating them.

    So what is the security team’s responsibility for deploying safe LLMs? While the industry hasn’t agreed on a de facto standard or methodology for AI safety, Trust and Security teams ought to start mitigating the risk of malicious AI models to align with the organization's risk appetite. A few high-level steps to consider:

    - Develop AI safety expertise, deploy AI safety policies, and “plug into” organizational efforts to roll out AI models, assistants, etc.
    - Define AI safety controls for fine-tuned models and monitor their effectiveness, e.g. access controls, vulnerability management, secure deployment, differential privacy, and AI safety tools.
    - Update third-party programs to inquire about AI safety from AI model vendors. In fact, it would be great to see AI safety controls covered in AI vendors’ SOC 2 and other attestations.
    - Establish a normal behavioral baseline for AI applications and alert on and investigate anomalies.

    Research paper here: https://lnkd.in/gnfCng5Q

    Additional thoughts and feedback are welcome!
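
    A minimal sketch of the last step, baselining one behavioral metric (here, the share of AI-generated code changes flagged by static analysis) and alerting on drift; the metric choice and thresholds are illustrative, not a vetted detection standard:

        import statistics

        def build_baseline(daily_flag_rates: list[float]) -> tuple[float, float]:
            """Summarize historical behavior as mean and standard deviation."""
            return statistics.mean(daily_flag_rates), statistics.pstdev(daily_flag_rates)

        def check_today(rate: float, baseline: tuple[float, float], z_threshold: float = 3.0) -> None:
            """Alert when today's rate drifts far above the historical baseline."""
            mean, stdev = baseline
            z = (rate - mean) / stdev if stdev else 0.0
            if z > z_threshold:
                print(f"ALERT: flagged-finding rate {rate:.2%} is {z:.1f} sigma above baseline")

        baseline = build_baseline([0.02, 0.03, 0.025, 0.02, 0.03])  # illustrative history
        check_today(0.12, baseline)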

  • View profile for Aayush Ghosh Choudhury

    Co-Founder/CEO at Scrut Automation (scrut.io)

    11,613 followers

    Need to build trust as an AI-powered company? There is a lot of hype - and FUD. But just as managing your own supply chain to ensure it is secure and compliant is vital, companies using LLMs as a core part of their business proposition will need to reassure their own customers about their governance program. Taking a proactive approach is important not just from a security perspective; projecting an image of confidence can also help you close deals more effectively. Some key steps you can take:

    1/ Document an internal AI security policy.
    2/ Launch a coordinated vulnerability disclosure or even bug bounty program to incentivize security researchers to inspect your LLMs for flaws.
    3/ Build and populate a Trust Vault to allow customer self-service of security-related inquiries.
    4/ Proactively share the methods through which you implement best practices like NIST’s AI Risk Management Framework specifically for your company and its products.

    Customers are going to be asking a lot of hard questions about AI security considerations, so preparation is key. Having an effective trust and security program - tailored to incorporate AI considerations - can strengthen both these relationships and your underlying security posture.

  • View profile for Gareth Young

    Founder & Chief Architect at Levacloud | Delivering Premium Microsoft Security Solutions | Entrepreneur & Technologist

    7,885 followers

    It is quite common for me to see Azure environments where resources have been spun up without any underlying architecture, governance, or security design. Maybe they started out as a temporary solution or a test and suddenly became relied upon and built on top of. This opens the organization up to a lot of vulnerabilities and risk, be it from a security perspective, a cost perspective... or both!

    Microsoft Defender for Cloud is a fantastic tool to start bringing some order to the chaos, and it has some free capabilities to get started with (see the end of this post). Here are some of the key capabilities it has to offer:

    AI Security Posture Management (AI-SPM): Provides granular visibility into all workloads, including AI workloads, identifying vulnerabilities across VMs, storage accounts, AI models, SDKs, and datasets. For example, a financial services company mitigated vulnerabilities in their AI-driven fraud detection systems using AI-SPM.

    Enhanced Threat Protection: Integrates with Azure OpenAI Service to protect against jailbreak attempts and data breaches. A healthcare provider used this to secure patient data in their AI diagnostic tools.

    Multicloud Threat Protection: Not using Azure? No problem! Defender for Cloud supports Amazon RDS and Kubernetes security, enhancing threat detection and response across AWS, Azure, and GCP. A global retailer implemented these features to secure their e-commerce platforms.

    Infrastructure-as-Code (IaC) Insights: Enhances security with Checkov integration, streamlining DevSecOps processes for a software development firm.

    Cloud Infrastructure Entitlement Management (CIEM): Optimizes permissions management, reducing attack surfaces for a tech startup.

    API Security Testing: Supports Bright Security and StackHawk, ensuring API security throughout the development lifecycle. A logistics company used these tools to secure sensitive shipment data.

    Free Capabilities: Microsoft Defender for Cloud offers foundational Cloud Security Posture Management (CSPM) capabilities for free, including continuous security assessments, security recommendations, and the Microsoft cloud security benchmark across Azure, AWS, and Google Cloud.

    Check out the links in the comments to learn more!

    #CloudSecurity #AI #MicrosoftDefender #CyberSecurity #Multicloud #CNAPP #TechNews
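
    In the spirit of the Checkov-based IaC insights mentioned above, a minimal CI-gate sketch that calls the open-source Checkov CLI directly (not Defender for Cloud's built-in integration); the directory path is a placeholder and Checkov is assumed to be installed:

        import json
        import subprocess
        import sys

        result = subprocess.run(
            ["checkov", "-d", "infrastructure/", "-o", "json"],  # scan the IaC directory
            capture_output=True,
            text=True,
        )

        reports = json.loads(result.stdout)
        if isinstance(reports, dict):          # a single framework yields one report object
            reports = [reports]

        failed = sum(r.get("summary", {}).get("failed", 0) for r in reports)
        if failed:
            print(f"{failed} IaC checks failed - blocking deployment")
            sys.exit(1)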
