• Spearheaded 20+ AI feature development projects for Meta’s LLM and GenAI teams, embedding Responsible AI and security-first principles. Secured third-party data integrations and aligned development cycles with risk-informed product roadmaps to enable safe, compliant, and accelerated AI product launches.
• Led the integration of Responsible AI principles into technical program management by introducing AI governance frameworks, incorporating security controls into development pipelines, and coordinating across security, privacy, and product teams to enforce ethical, regulatory, and risk mitigation standards throughout the AI lifecycle.
• Initiated vulnerability risk assessments for high-impact AI systems, implementing adversarial testing, model robustness evaluations, and security stress tests aligned with CIS Controls to proactively uncover exploit vectors, reduce attack surface, and enhance AI system resilience.
• Designed and implemented a scalable AI testing framework for localizing LLMs, embedding security-focused compliance automation, integrity monitoring, and input validation layers, resulting in a 15% reduction in data errors, a 27% boost in forecast accuracy, and significantly improved security readiness for launches.
• Partnered with cross-functional teams to drive AI product delivery, managing regulatory and security compliance workflows aligned with Responsible AI and threat mitigation practices, enhancing trust, traceability, and secure UX.
• Developed and deployed a third-party (3P) testing and risk tracking system to monitor external data and AI tooling pipelines, achieving $50M in operational savings while improving data quality and reinforcing AI compliance, auditability, and external risk governance.
• Optimized AI product velocity and security by operationalizing secure data sourcing standards, enforcing model security reviews, and embedding security and compliance checkpoints into cross-team development processes—ensuring safe scaling of AI initiatives across the enterprise.