AI Innovation Security Researcher
Posted 2025-08-23
Remote, USA
Full-Time
Immediate Start
<b>Description</b><br>
<p><strong>About the Role</strong></p>
<p>We’re seeking an <strong>AI Innovation Security Researcher</strong> to serve as the critical link between our AI development team and our security experts. In this role, you will:</p>
<ul>
<li>Translate real-world security challenges into AI-driven solutions</li>
<li>Shape prompt strategies and model workflows for security use cases</li>
<li>Contribute to AI system development: help architect, prototype, and iterate on models and pipelines</li>
<li>Design and execute rigorous benchmarks to evaluate the performance of security-focused AI tools</li>
</ul>
<p>Your work will power capabilities such as automated exploitability checks for SAST/SCA findings, AI-guided remediation of container vulnerabilities (e.g., Dockerfile misconfigurations, unsafe downloads), and detection and analysis of data leaks. You’ll also help amplify our thought leadership by authoring blog posts and delivering conference talks on cutting-edge AI-security topics.</p>
<p><strong>Key Responsibilities</strong></p>
<p><strong>Research & Benchmarking</strong></p>
<ul>
<li>Define evaluation frameworks for AI models tackling security tasks</li>
<li>Build test suites for exploitability analysis (e.g., proof-of-concept generation, severity scoring)</li>
<li>Measure and report on model accuracy, false-positive/false-negative rates, and robustness (a minimal sketch of such a harness appears at the end of this posting)</li>
</ul>
<p><strong>AI Collaboration & Development</strong></p>
<ul>
<li>Work with ML engineers to craft and refine prompt templates for security scenarios</li>
<li>Contribute to model architecture design, fine-tuning, and deployment workflows</li>
<li>Investigate model behaviors, iterate on training data, and integrate new AI architectures as needed</li>
</ul>
<p><strong>Security Expertise & Tooling</strong></p>
<ul>
<li>Apply deep knowledge of static application security testing (SAST) and software composition analysis (SCA)</li>
<li>Analyze container build pipelines to identify vulnerability origins and remediation paths</li>
<li>Leverage vulnerability databases (CVE, NVD), threat modeling, and risk assessment techniques</li>
</ul>
<p><strong>Content Creation & Evangelism</strong></p>
<ul>
<li>Write technical blog posts, whitepapers, and documentation on AI-driven security solutions</li>
<li>Present findings at internal brown-bags and external conferences</li>
<li>Mentor teammates on AI security best practices</li>
</ul>
<b>Requirements</b><br>
<ul>
<li>Bachelor’s or Master’s degree in Computer Science, Cybersecurity, AI/ML, or a related field</li>
<li>3+ years in security research or application security engineering</li>
<li>Hands-on experience with LLMs (e.g., GPT, PaLM), prompt engineering, or fine-tuning workflows</li>
<li>Proficiency in Python</li>
<li>Deep understanding of SAST/SCA tools (e.g., SonarQube, Snyk) and their outputs</li>
<li>Familiarity with container security tooling (Docker, Kubernetes, Trivy)</li>
<li>Strong data analysis skills for evaluating model outputs and security telemetry</li>
<li>Excellent written and verbal communication; ability to distill complex topics for diverse audiences</li>
<li>Collaborative mindset; experience working across research, engineering, and security teams</li>
</ul>
<p><strong>Preferred Qualifications</strong></p>
<ul>
<li>Experience in AI/ML system development: model training, fine-tuning, and production deployment</li>
<li>Publications or presentations in AI, security, or DevSecOps venues</li>
<li>Prior work developing open-source security tools or frameworks</li>
<li>Experience with cloud security services (AWS/Azure/GCP) and infrastructure-as-code scanning</li>
<li>Familiarity with CI/CD pipelines and MLOps tooling</li>
</ul>
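<p><strong>Illustrative Sketch</strong></p>
<p>To give a flavor of the benchmarking work referenced above, here is a minimal Python sketch of a harness that scores a model’s exploitability verdicts against manually triaged ground truth and reports the metrics named in this posting. The <code>Finding</code> record and <code>evaluate</code> function are hypothetical names invented for this illustration, not an existing internal API.</p>
<pre><code>from dataclasses import dataclass

@dataclass
class Finding:
    """One SAST/SCA finding with a ground-truth label and a model verdict.
    This record type is hypothetical, for illustration only."""
    finding_id: str
    exploitable: bool    # ground truth from manual triage
    model_verdict: bool  # model's exploitability prediction

def evaluate(findings: list[Finding]) -> dict[str, float]:
    """Compute accuracy and false-positive/false-negative rates."""
    tp = sum(f.exploitable and f.model_verdict for f in findings)
    tn = sum(not f.exploitable and not f.model_verdict for f in findings)
    fp = sum(not f.exploitable and f.model_verdict for f in findings)
    fn = sum(f.exploitable and not f.model_verdict for f in findings)
    return {
        "accuracy": (tp + tn) / len(findings),
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,
        "false_negative_rate": fn / (fn + tp) if (fn + tp) else 0.0,
    }

if __name__ == "__main__":
    # Toy labeled sample; real suites would draw from triaged scanner output.
    sample = [
        Finding("CVE-2024-0001", exploitable=True, model_verdict=True),
        Finding("CVE-2024-0002", exploitable=False, model_verdict=True),
        Finding("CVE-2024-0003", exploitable=False, model_verdict=False),
    ]
    print(evaluate(sample))
</code></pre>
<p>In practice, the role extends this kind of harness with proof-of-concept generation checks, severity-score comparisons, and robustness tests; the sketch only shows the core metric computation.</p>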