GitLab CISO Josh Lemos breaks down how we approach AI governance, emphasizing the importance of working with vendors to ensure security requirements are baked into AI tools from the start. GitLab’s AI Transparency Center also helps customers understand exactly which model powers each feature so they can make informed decisions about how Duo accesses data. Key takeaway: AI governance starts with understanding how AI will shape risk and your attack surface. Then, work in partnership with vendors to build the right security assurances for your organization and customers.
-
Today we’re launching reporting that gives security and AI governance teams real visibility into AI use. Questions we hear all the time, and that you can now answer: 1. Who’s using these tools, and for what? 2. Which AI tools are being used outside the official enterprise stack? 3. Is sensitive data leaving the building? If you’re fed up with your SASE being unable to answer these, DM me to see what it looks like in action.
-
96% of IT leaders plan to expand their use of AI agents in the next 12 months. CSO Grant Bourzikas shares a three-step plan to establish security and governance before wide-scale agent deployment.
-
When Replit’s AI assistant wiped a live database, it wasn’t just an “AI bug”: it exposed deeper governance gaps, including blind trust, missing immutable audit trails, and unclear accountability. In this piece, we explore these systemic lessons and outline how Codenotary’s Guardian Agentic Center, leveraging MCP, command whitelisting, and certified immutable logs, wraps AI automation in verifiable security controls. Read the full analysis: https://lnkd.in/dvVufUHX
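Command whitelisting of the kind described above can be sketched generically. This is an illustrative allowlist gate under assumed rules, not Codenotary's actual implementation; the `ALLOWED_COMMANDS` and `BLOCKED_TOKENS` sets are hypothetical examples:

```python
import shlex

# Hypothetical allowlist: only these base commands may run unattended.
ALLOWED_COMMANDS = {"ls", "cat", "grep", "kubectl"}
# Destructive verbs that always require human approval, even for allowed tools.
BLOCKED_TOKENS = {"rm", "drop", "delete", "truncate"}

def is_permitted(command: str) -> bool:
    """Return True only if an agent-issued shell command passes the allowlist."""
    tokens = shlex.split(command.lower())
    if not tokens or tokens[0] not in ALLOWED_COMMANDS:
        return False
    # Reject any allowed tool invoked with a destructive sub-command.
    return not any(tok in BLOCKED_TOKENS for tok in tokens)
```

The point of the deny-by-default structure is that an agent hallucinating a new command fails closed rather than open.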
-
🌐 AI is no longer a future trend — it’s embedded in how organizations innovate, compete, and scale. But with AI workloads come 𝗻𝗲𝘄, 𝗼𝗳𝘁𝗲𝗻 𝗼𝘃𝗲𝗿𝗹𝗼𝗼𝗸𝗲𝗱 𝘀𝗲𝗰𝘂𝗿𝗶𝘁𝘆 𝗿𝗶𝘀𝗸𝘀: sensitive data flows, API sprawl, and complex models running across cloud-native environments. 💡 Sysdig's white paper, 𝘚𝘦𝘤𝘶𝘳𝘪𝘯𝘨 𝘈𝘐: 𝘕𝘢𝘷𝘪𝘨𝘢𝘵𝘪𝘯𝘨 𝘢 𝘕𝘦𝘸 𝘍𝘳𝘰𝘯𝘵𝘪𝘦𝘳 𝘰𝘧 𝘚𝘦𝘤𝘶𝘳𝘪𝘵𝘺 𝘙𝘪𝘴𝘬, gives security leaders a practical framework to evaluate their programs and close gaps that traditional defenses miss. Check it out 🔗: https://okt.to/6PhxnY
-
As AI adoption accelerates, ensuring security, governance, and responsible practices is more important than ever. The OWASP AI Maturity Assessment (AIMA) provides a structured way for organizations to evaluate and improve their AI readiness across governance, design, implementation, and operations.
-
AI agents are starting to act in ways that test our assumptions. Recent incidents show they make mistakes (some destructive) like deleting production databases or misapplying logic across workflows. As organizations build more agentic AI into operations, the existing security playbooks don't map cleanly onto these risks. For security leaders, the push is toward resilient design, tighter permissions, full lifecycle visibility, and rollback plans that cover agent behaviors. The cost of letting an AI agent run unchecked is risk multiplied. #AIThreats #AgenticAI #CyberResilience
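A rollback plan that covers agent behaviors can be sketched as a guarded action wrapper with an audit trail. This is a minimal illustration, assuming each action registers its own undo handler; the `GuardedAction` and `AgentRuntime` names are hypothetical:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GuardedAction:
    """An agent action paired with a compensating rollback handler."""
    name: str
    run: Callable[[], None]
    rollback: Callable[[], None]

@dataclass
class AgentRuntime:
    audit_log: list = field(default_factory=list)  # full lifecycle visibility
    completed: list = field(default_factory=list)

    def execute(self, action: GuardedAction) -> bool:
        """Run one action; on failure, roll back everything done so far."""
        self.audit_log.append(f"attempt:{action.name}")
        try:
            action.run()
        except Exception as exc:
            self.audit_log.append(f"failed:{action.name}:{exc}")
            self.undo_all()
            return False
        self.completed.append(action)
        self.audit_log.append(f"done:{action.name}")
        return True

    def undo_all(self) -> None:
        # Undo in reverse order so later actions are rolled back first.
        for action in reversed(self.completed):
            action.rollback()
            self.audit_log.append(f"rolled_back:{action.name}")
        self.completed.clear()
```

The audit log and compensating handlers are exactly the controls the post argues for: every agent step is recorded, and a destructive mistake triggers an ordered rollback instead of silently persisting.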
-
AI is like letting a powerful horse out of the barn—if you don’t have the right reins, it’s gonna take you places you don’t wanna go. That’s exactly what’s happening in businesses today. Everybody’s excited about what AI can do, but not enough folks are stopping to ask, ‘How do we secure it?’ Because without strong security, AI can expose your data, your people, and your business faster than you can blink. Let’s go Ian Swanson and Hoseb Dermanilian! Love this team! Ana Cymerman, Matthew Meabon, Todd Horne! Let’s go Palo Alto Networks!