/Adversarial Testers

Adversarial Testers

PolandRemoteplvia direct
// Job Type
Full Time
// Salary
USD 400 - 500/day
// Salary Range
400–500 USD / day
// Posted
2 months ago
// Seniority
mid
// Work Mode
remote

About the Role

Job title: Adversarial Tester (Prompt Injection Specialist) Job type: Contract  Contract Length: 2 months + extensions Rate: 400-500 per day Outside IR35 Role Location:  EU and US Fully Remote (Must be wiling to work US East Coast hours) Role and Responsibilities:  The Prompt Injection Specialist will design and execute structured adversarial prompt testing against a LLM chatbot. The focus is exclusively on prompt-layer vulnerabilities: jailbreaks, direct and indirect prompt injection, instruction override, and boundary attacks. This is not a cybersecurity or infrastructure penetration testing role. Job Requirements:  Hands-on experience with LLM prompt injection, jailbreaking, and adversarial prompt design Strong understanding of chatbot architectures, system prompt structures, and guardrail mechanisms Familiarity with OWASP LLM Top 10, MITRE ATLAS, and relevant adversarial ML frameworks Experience designing structured prompt test sets with coverage metrics Ability to define failure taxonomies and severity classification for prompt-layer attacks Proficiency with common LLM APIs and chat interfaces (OpenAI, Anthropic, Azure OpenAI, or equivalent) Accessibility Statement:  Read and apply for this role in the way that works for you by using our Recite Me assistive technology tool. Click the circle at the bottom right side of the screen and select your preferences.  We make an active choice to be inclusive towards everyone every day. Please let us know if you require any accessibility adjustments through the application or interview process. Our Commitment to Diversity, Equity, and Inclusion:  Signify’s mission is to empower every person, regardless of their background or circumstances, with an equitable chance to achieve the careers they deserve. Building a diverse future, one placement at a time. Check out our DE&I page here

Tech Stack

LLM prompt injectionjailbreakingadversarial prompt designchatbot architecturessystem prompt structuresguardrail mechanismsOWASP LLM Top 10MITRE ATLASadversarial ML frameworksstructured prompt test setscoverage metricsfailure taxonomiesseverity classificationLLM APIschat interfacesOpenAIAnthropicAzure OpenAI

Interested in this job?

Login to Apply

Use our AI to tailor your resume for this Adversarial Testers position at Signify Technology.