About the Role
Senior AI Cybersecurity Engineer
Category: Engineering
Req ID: 972
Date: Mar 26, 2026
Location:
Leawood, KS, US, 66211 / Remote, US
We Impact Lives Through Purpose-Driven Work in a People-First Culture
Ascend Learning, a leading healthcare and learning technology company, connects a powerful portfolio of brands with students, educators, and employers through outcomes-based, data-driven solutions across the lifecycle of learning. From testing to certification, Ascend Learning products are used by physicians, emergency medical professionals, nurses, allied health professionals, certified personal trainers, financial advisors, skilled trades professionals, and insurance brokers.
Headquartered in Burlington, MA, with additional office locations and hybrid and remote workers in cities across the U.S., Ascend Learning was recognized by Newsweek and Plant-A Insights Group as one of America’s 2025 Greatest Workplaces as well as America’s Best Places to work for Mental Well-Being for 2025.
We're always looking for talented, passionate professionals to join us in our mission to help change lives. If this sounds like an environment where you'd thrive, read on to learn more.
WHAT YOU'LL DO
The Senior AI Cybersecurity Engineer will apply combined AI and cybersecurity expertise to protect our customer-facing AI products, in-house-developed AI systems, and vendor-provided AI tools. This position will identify rapidly evolving AI attack vectors and vulnerabilities, and will evaluate and recommend tools, practices, and frameworks for mitigating these risks. This position will also be responsible for developing an AI security testing program and for hands-on execution of red-team tests, and will partner with AI engineering, cybersecurity, and DevSecOps teams to integrate AI security into the broader enterprise security program.
WHERE YOU’LL WORK
This position will work a hybrid schedule from our office location in Leawood, KS. Fully remote work within the U.S. will also be considered.
HOW YOU’LL SPEND YOUR TIME
Perform threat modeling, risk assessments, and vulnerability assessments of AI systems, and recommend remediation strategies and solutions
Assess and secure AI/ML models against adversarial attacks (e.g., model inversion, poisoning, evasion)
Stay current on emerging AI threats, attack vectors, and adversarial ML research
Design and execute controlled adversarial attacks (prompt injection, input/output evaluation, data exfiltration, misinformation generation, etc.)
Develop reusable test repositories, scripts, and automation
Evaluate and recommend security tools and platforms for AI model monitoring, testing, and attack detection
Contribute to enterprise AI security strategy by bringing forward new practices, frameworks, and tooling
WHAT YOU'LL NEED
High school diploma or GED required. Bachelor's degree in Information Systems, Information Technology, Computer Science, Engineering, or equivalent work experience preferred.
6+ years’ experience in both application penetration testing and red teaming of cloud and on-premises environments.
3+ years’ experience programming in Python
Experience with scripting for automation and exploit development
Strong knowledge of ML/GenAI fundamentals (LLMs, embeddings, diffusion models) and adversarial ML techniques (model extraction, poisoning, prompt injection)
Familiarity with AI security frameworks: NIST AI RMF, MITRE ATLAS, or OWASP Top 10 for LLMs
Excellent communication skills that facilitate effective teamwork with Cybersecurity, AI Engineering and other technology teams
Ability to translate complex security requirements into practical solutions, advocate for security best practices, and build security awareness across technical and non-technical audiences
Experience with enterprise AI platforms (Microsoft Copilot, Google Workspace AI, Slack AI)
Strong background in DevSecOps and CI/CD security integration
Certifications: OSCP, CEH, or other relevant certifications preferred
Knowledge of data privacy regulations (GDPR, CCPA, SOC 2, ISO 27001)