About the Role
Must have:
Cloud, Docker, GenAI, Kubernetes, LLMs, Machine Learning, Python, SQL, Vector Database
Employment type:
B2B
Operating mode:
Remote
Location:
Poland
About deepsense.ai
At deepsense.ai, we don’t just build AI solutions – we shape how companies around the world use them.
By joining us, you’ll:
Work with partners like OpenAI, NVIDIA, Anyscale, LangChain, Crusoe, and ElevenLabs.
Explore and apply the newest tech: LLMs & RAG, MLOps, Edge Solutions, Computer Vision, Predictive Analytics.
Tackle challenges in software & tech, pharma & healthcare, manufacturing, retail, telecoms & media.
Contribute to open-source projects – just take a look at our latest solution, ragbits, an agentic RAG framework with over 1.6k stars on GitHub.
And the best part of working at deepsense.ai?
Spread your wings with clear career paths, technical or leadership.
Collaborate with 100+ AI experts drawing on 15+ years of applied AI experience, including PhD-level researchers with academic backgrounds.
Tap into domain expertise and knowledge sharing whenever you need it.
Your Role as Machine Learning Engineer
What makes this role stand out:
You’re closest to the models and the AI itself: building ML/LLM pipelines, and integrating and optimizing models.
You have a background in deep learning, NLP, or CV, and today you’re hands-on with GenAI and LLMs.
You know techniques like fine-tuning, prompting, quantization, and LoRA, and you understand how models work and how to adapt them for production.
Why it’s worth it:
You’ll dive into the hottest areas of AI: LLMs, agentic frameworks, RAG, inference optimization, and fine-tuning.
Projects aren’t just PoCs: the models you build go into production and reach real users.
You won’t be boxed into “just ML” – you’ll collaborate with Data Scientists, Software Engineers, and MLOps engineers to deliver end-to-end solutions.
A few project examples:
Training multimodal LLMs for drug discovery.
Building AI voicebots that double conversion rates.
Creating a GenAI solution for a leading US legal company together with the OpenAI team.
Running GenAI on edge devices with cloud-level performance.
All of this in a setup that feels like an AI-driven software house: remote-first, flexible, and packed with specialists who are open to sharing knowledge and experimenting with the newest tech.
The ideal candidate:
Has 4–5+ years of experience in ML engineering and working with models in production environments.
Brings hands-on expertise with Large Language Models (LLMs) and Generative AI, including integration and inference optimization (latency, cost, scalability).
Is familiar with frameworks and tools for building and orchestrating LLM pipelines (LangChain, LlamaIndex) and with patterns such as RAG and agent frameworks.
Can design and implement end-to-end ML/LLM pipelines, from data preparation and training/fine-tuning to production-grade APIs.
Has experience with cloud platforms (AWS, GCP, Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, Azure ML).
Has worked with SQL, NoSQL, and vector databases (Pinecone, FAISS, Weaviate).
Is fluent in Python and experienced with ML frameworks (PyTorch, TensorFlow, Hugging Face).
Knows how to deploy and monitor models (MLOps: CI/CD for models, logging, observability, quality monitoring).
Communicates clearly and can collaborate effectively with both Data Scientists and product/client teams.
Bonus: experience in prompt engineering and building simple AI user interfaces (Streamlit, Gradio).
We Offer
Impactful AI projects
Tackle industry-grade challenges: from LLMs for drug discovery to GenAI on edge devices, AI voicebots, and open-source initiatives with global reach.
Collaborate directly with our partners to get early access to tools before public release, test them in production, and bring know-how from AI leaders into our projects.
Contribute to open-source initiatives like ragbits (1.6k+ stars on GitHub), adopted and appreciated by the ML community.
Growth & Knowledge Sharing
Join AI specialists who share expertise through Tech Talks, workshops, and internal trainings.
Present your work at conferences, run experiments, and stay ahead of the curve.
Choose your own career path and get support for your development.
Flexibility & Culture
Work fully remote, from one of our two offices (Warsaw, Bydgoszcz), or from coworking spaces in Poznań, Łódź, Wrocław, and Gdańsk.
Enjoy flexible working hours.
Benefit from a culture that prevents burnout and supports balance in daily work.
Start with onboarding – from day one you are matched with a buddy.
Get high-end equipment (laptops, dual monitors, pro peripherals).
Access a premium AI development suite: OpenAI ChatGPT, Claude, Gemini Advanced, GitHub Copilot, Cursor AI IDE, Claude Code, NotebookLM, plus the latest emerging AI tools to support your daily work.
Work in agile teams with fast decision-making and space to try out your ideas.
Basic Benefits
Get private medical care and a Multisport card.
Use our company library, attend onsite English lessons, and benefit from a dedicated training budget.
Join free team lunches and enjoy fresh fruit & snacks.
Take part in team-building activities and holiday celebrations.
Tech Stack
Python, Machine Learning, Docker, Kubernetes, Cloud, LLMs, SQL, Vector Database, PyTorch, TensorFlow, Hugging Face, MLOps, GenAI