About the Role
This role offers the chance to join a growing data intelligence organisation and take ownership of production AI systems. You will work with modern LLM technologies, build automated intelligence tools, and contribute to the development of future client-facing AI products.
The Company
They are a data insights organisation that builds KPI prediction models for well-known consumer brands. Their platform gives investors an early view of company performance ahead of earnings. As AI becomes central to their strategy, they are expanding their engineering team to build scalable agentic and RAG-powered systems that support both internal teams and customers.
The Role
* Build end-to-end RAG pipelines covering ingestion, embeddings, vector search and LLM reasoning.
* Develop agentic workflows using frameworks such as LangChain, CrewAI or ADK.
* Create LLM-powered microservices using Python, FastAPI or Flask.
* Build internal AI tools, assistants and automated workflows.
* Deploy AI systems on AWS, including Bedrock, Lambda, ECS or EKS, API Gateway and S3.
* Implement CI/CD pipelines and ensure monitoring, observability and reliability across deployed systems.
Your Skills and Experience
* Strong commercial experience with Python and production engineering practices.
* Practical experience building RAG systems and agentic architectures.
* Hands-on knowledge of vector databases such as Pinecone, Weaviate or Qdrant.
* Experience deploying LLM or AI applications on AWS.
* Ability to design, operate and maintain AI systems independently.
* Comfortable working in evolving environments and taking ownership of technical solutions.
What They Offer
* Salary of £75,000 to £95,000 depending on experience.
* Equity package.
* Annual performance bonus.
* Opportunities to influence AI strategy and contribute to future client-facing products.
Tech Stack
Python, RAG, LLMs, FastAPI, AWS, LangChain, CrewAI, vector databases