Data Engineer and Data Science Jobs: 330+ Offers

Find data engineering, data science, and AI job offers. Machine learning, analytics, and big data.

Data Scientist

P&G · WARSAW, Poland

Job Location
WARSAW DOWNTOWN OFFICE

Job Description
We are looking for a passionate Data Scientist who is eager to make an impact on their career and take on a new challenge!

The P&G Scaled Data Science Hub in Warsaw is growing dynamically and making an impact across most of the Company's key algorithmic investments. We work on problems and algorithmic products related to Retail, Media, Digital Commerce, R&D, Supply Chain, Productivity, Knowledge Management, and more.

In your role, you will take ownership of a particular business space with algorithmic model needs. We work across traditional machine learning, deep learning, GenAI, optimization models, and statistical models, adopting methods as the problem requires. To do your job well, you will need to build a proven business understanding of your problem space.

In the Warsaw Hub we specialize in developing and bringing algorithms to scale, always keeping impact, value, resilience, and reliability in mind. Our work affects P&G's 100k+ employees and billions of consumers around the world.

You will join a world-class company improving the lives of billions of consumers by applying data science. You'll be welcomed into a 200+ global team of Data Scientists across the US, Panama, Switzerland, Belgium, Poland, Singapore, India, and China, where you can network and learn from scientists with diverse backgrounds and seniority levels. On top of that, in our Warsaw Hub we do plenty of knowledge sharing on a weekly basis and have a unique job setup that will let you experience data science beyond your own key scope of responsibilities.

The best-in-class P&G AI Factory platform you will use is taught as a case study at Harvard Business School, and we continuously improve our developer experience thanks to our strong AI Engineering team.

With us you will:
- Partner with product teams and business leaders to fully understand the problem, and with AI and data engineering teams to automate and deploy your models into applications
- Develop your own insights to steer the algorithmic evolution roadmap in your area of ownership
- Analyze and model big datasets, e.g. translating 1.5 TB of daily consumer touchpoints and 500 million consumers' behaviors into actionable recommendations
- Answer business questions and propose solutions to business problems by applying machine learning techniques and algorithms
- Deepen your understanding of machine learning, statistics, optimization, GenAI, and other advanced analytical models, and how to apply them to real-world problems
- Learn what it takes to build and maintain a resilient algorithmic pipeline that passes the test of time, and what it takes for an algorithm to materialize its impact and value
- Write production-grade code applying best practices

Job Qualifications
- At least a Master's in a quantitative degree (Statistics, Operations Research, Systems Engineering, Computer Science, Applied Math, Economics), or a Bachelor's/Engineer's degree with consecutive data science experience
- At least 2 years' experience with production-grade Data Science / algorithmically enabled applications
- Knowledge of cloud infrastructure: Microsoft Azure, Google Cloud Platform, Kubernetes
- Solid code-writing capabilities; Python and Spark are preferred
- Strong technical and analytical skills (SQL, optimization, simulation, predictive modeling, etc.)
- Proven success in leadership, problem solving, and prioritizing
- Strong collaboration skills and comfort working across teams

Nice to have:
- Experience with the Big Data ecosystem: Databricks, Spark, BigQuery
- Basic understanding of Business Intelligence tools such as Power BI or Tableau
- Experience with Agile DevOps, GitHub, Jira, Confluence

What we offer
- Work in an international Data Science team with global responsibilities (with a large part of the engineering and product teams located in Warsaw)
- A long-term career with development and growth opportunities
- Competitive salary and benefits program (private health care, life insurance, P&G stock options, saving plans, lunch subsidy, sport cards, in-office fitness center)
- Relevant trainings, certifications, and conference participation
- Internal coaching programs and training
- Flexible working arrangements

Who we are
P&G was founded over 180 years ago as a simple soap and candle company. Today, we're the world's largest consumer goods company and home to iconic, trusted brands that make life a little bit easier in small but meaningful ways. We've spanned three centuries thanks to three simple ideas: leadership, innovation, and citizenship. The insight, innovation, and passion of hardworking teams have helped us grow into a global company that is governed responsibly and ethically, that is open and transparent, and that supports good causes and protects the environment. This is a place where you can be proud to work and do something that matters.

We are committed to providing you with equal opportunities in employment! We value diversity and do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.

At Procter & Gamble, we embrace a hybrid work model that combines the flexibility of remote work with the collaborative benefits of in-office engagement. Employees can work from home two days a week while spending time in the office to foster teamwork and enhance communication.

At P&G #weseeequal
We are an equal opportunity employer and value diversity at our company. At P&G we strive to build a culture where everyone feels welcome, included, and able to bring their full selves to work.

We ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process. Please visit https://faq.pgwebtools.com/disability-accommodation-request/?fl_builder if you require an accommodation during the application process. Please wait to hear back from us regarding your accommodation before proceeding with the online assessment; we thank you in advance for your patience.

Kindly be advised that at P&G, employment is extended exclusively on the basis of "Umowa o Pracę" (full-time employment contract). Apply only if you agree to these conditions.

Job Schedule: Full time
Job Number: R000144419
Job Segmentation: Experienced Professionals
Starting Pay / Salary Range:

Full Time · Remote · direct · Data & AI
Salary not disclosed · 1 week ago

AI Engineer

IFS · Poland

We are seeking a hands-on AI Engineer to design, build, and operate internal AI-powered solutions that significantly improve productivity across IFS. This is an end-to-end engineering role responsible for delivering user-facing AI applications, backend AI services, enterprise copilots, and workflow automation. The role combines frontend development, AI engineering, and cloud platform integration, primarily within the Microsoft ecosystem, while remaining open to other modern development platforms and cloud technologies when appropriate. You will work closely with stakeholders to translate business needs into secure, scalable, and maintainable AI solutions aligned with enterprise architecture, governance, and security standards. This is not a data science role: it is an engineering role focused on building production-grade AI-powered applications.

Key Responsibilities

1. AI Solution Engineering
- Translate business requirements into AI-driven technical designs and implementation plans.
- Design and implement applications that leverage large language models, embeddings, and retrieval-augmented generation.
- Engineer prompt strategies, grounding mechanisms, safety controls, and evaluation methods.
- Integrate AI capabilities into enterprise systems and workflows.

2. Frontend Development
- Design and develop modern, responsive frontend applications using React and TypeScript.
- Build internal AI portals, chat interfaces, dashboards, admin panels, and configuration screens.
- Implement advanced AI UX patterns including streaming responses, citations, feedback capture, and role-based controls.
- Integrate frontends securely with backend APIs and enterprise authentication mechanisms.
- Ensure accessibility, performance, usability, and maintainability standards.

3. Backend & AI Services
- Develop backend services and APIs using Azure services or other appropriate cloud platforms.
- Integrate with Azure OpenAI, Azure AI Search (including vector search), or equivalent AI services.
- Design secure RESTful APIs exposing AI capabilities to internal consumers.
- Implement authentication and authorization standards such as OAuth2, OIDC, and managed identities.
- Ensure monitoring, telemetry, logging, and operational readiness.

4. Copilot & Microsoft Ecosystem Integration
- Design and build enterprise copilots using Copilot Studio.
- Integrate copilots with Microsoft 365 services such as Teams and SharePoint.
- Configure connectors, plugins, and grounding strategies aligned with governance requirements.
- Manage lifecycle, security, and compliance considerations for copilot solutions.

5. Automation & Productivity Enablement
- Build workflow automations using Power Automate, Azure Logic Apps, Power Platform, or equivalent tools.
- Design AI-driven process automation for internal productivity use cases.
- Integrate AI solutions into enterprise systems through APIs and orchestration layers.

6. DevOps, Governance & Continuous Improvement
- Implement CI/CD pipelines using Azure DevOps, GitHub Actions, or equivalent tooling.
- Containerize applications using Docker when appropriate.
- Apply GenAI operational practices including prompt versioning, evaluation, monitoring, and incident management.
- Maintain architecture documentation, design records, and operational procedures.
- Ensure compliance with IT security standards and architectural frameworks.

Qualifications

Technical Skills

AI & LLM Engineering
- Strong understanding of large language models, embeddings, vector search, and retrieval-augmented generation.
- Practical experience integrating generative AI APIs into enterprise applications.
- Experience implementing prompt engineering, grounding techniques, and AI evaluation strategies.
- Familiarity with vector-capable databases or search platforms.

Frontend Development
- Strong experience with React and TypeScript in production environments.
- Experience with modern frontend tooling such as Next.js or similar frameworks.
- Solid understanding of component architecture, state management, API integration, and UI performance.
- Experience implementing secure authentication flows in frontend applications.

Backend & Cloud Engineering
- Proficiency in Python, C#/.NET, or TypeScript/Node.js.
- Experience designing and building secure REST APIs and microservices.
- Strong knowledge of cloud-native architectures and service-oriented design.
- Experience with Azure cloud services is highly desirable; experience with other cloud platforms or development ecosystems such as AWS or GCP is also valued.

Microsoft Ecosystem & Automation
- Hands-on experience with Copilot Studio.
- Experience with Power Platform, Power Automate, Azure Logic Apps, or similar automation tools.
- Familiarity with Microsoft 365 integrations such as Teams and SharePoint.
- Strong scripting and automation skills using PowerShell, Bash, or JavaScript.

DevOps & Operational Practices
- Experience with CI/CD pipelines.
- Familiarity with containerization technologies such as Docker.
- Understanding of monitoring, logging, and observability best practices.
- Strong documentation discipline and engineering rigor.

Experience Requirements
- 3 to 5 years of professional software engineering experience.
- At least 1 to 2 years delivering generative AI solutions in a cloud environment.
- Proven experience building frontend applications for enterprise or internal platforms.
- Experience integrating LLM capabilities into production systems.
- Demonstrated ability to design secure, scalable, and maintainable applications.
- Experience working in cross-functional, distributed teams.

Education
- Bachelor's degree in Computer Science, Software Engineering, Information Technology, or a related field, or equivalent practical experience.
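The listing's core pattern, retrieval-augmented generation, can be sketched independently of any vendor stack: embed documents, rank them by similarity to the query, and ground the prompt in the top hits. A minimal, library-free illustration using toy bag-of-words vectors (all names and the scoring are invented for the example; this is not IFS's implementation):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding'; real systems use dense model vectors."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents against the query and keep the top-k as grounding context."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Compose an LLM prompt grounded in the retrieved passages."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Expense reports are filed in the finance portal.",
    "Vacation requests go through the HR system.",
    "The cafeteria opens at 8 am.",
]
print(build_prompt("How do I file an expense report?", docs))
```

In a production copilot, `embed` would call an embedding model and `retrieve` a vector index; the grounding-and-compose step stays conceptually the same.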

Full Time · direct · Data & AI
Salary not disclosed · 2 months ago

Data Analyst & Systems Manager (m/f/d)

Westwing Group SE · Warsaw, Poland

As a Data Analyst & Systems Manager, you will be part of our Finance Analytics & Systems (FA&S) team, transforming complex financial data into clear insights. By building intuitive dashboards and robust analytical models and leveraging AI-supported analytics, you will help uncover patterns across financial processes and systems that drive smarter, data-driven decisions. This position is permanent (unlimited contract) and based in Warsaw.

WHAT YOU'LL DO
- Be a key contributor to our Finance team, supporting the delivery of a world-class e-commerce experience and enabling our journey towards growth and expansion
- Support the creation and continuous improvement of state-of-the-art KPI dashboards and reporting tools for our FP&A and Accounting teams, helping them easily track performance and identify trends
- Perform data analyses and data modeling, turning complex datasets into clear, actionable insights that support decision-making
- Leverage AI-driven tools and automation (e.g. for data exploration, anomaly detection, forecasting, or reporting efficiency) to enhance analytical workflows and generate faster, more impactful insights
- Translate analytical findings into clear recommendations and provide valuable input to the leadership team

YOU COME WITH
- A Bachelor's degree in Economics, Mathematics, Statistics, or a related field, combined with at least three years of relevant professional experience in a comparable analytical or data-focused role
- Strong skills in Excel and SQL, with strong proficiency in Power BI (a strict must-have requirement); experience with Tableau or other comparable data visualisation tools is an advantage
- Experience with OneLake / Data Lake and Snowflake is considered a strong plus; additional programming skills (e.g. Python, VBA) are also an advantage
- A passion for solving analytical problems using quantitative approaches and for turning data into meaningful, business-relevant recommendations
- The ability to thrive in a fast-paced environment, balancing precision with pragmatism to deliver results efficiently
- Fluency in English
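The anomaly detection mentioned among the AI-supported analytics duties reduces, in its simplest form, to flagging values that sit far from a series' mean. A stdlib-only z-score sketch, purely illustrative of the idea rather than Westwing's tooling (the threshold and data are invented):

```python
from statistics import mean, stdev

def flag_anomalies(values: list[float], threshold: float = 3.0) -> list[int]:
    """Return indices of values whose absolute z-score exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # a constant series has no outliers
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]

# Daily revenue with one obvious outlier at index 5.
revenue = [100.0, 102.0, 98.0, 101.0, 99.0, 500.0, 100.0, 97.0]
print(flag_anomalies(revenue, threshold=2.0))  # only the 500.0 entry stands out
```

Real financial pipelines would compute the statistics over a rolling window and account for seasonality, but the flagging logic is the same.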

Full Time · direct · Data & AI
Salary not disclosed · 3 months ago

About Pathway
At Pathway we are shaking the foundations of artificial intelligence by introducing the world's first post-transformer model that adapts and thinks just like humans. Our breakthrough architecture outperforms the Transformer and gives the enterprise full visibility into how the model works. Combining the foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization and toward truly contextualized, experience-driven intelligence. We are trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder & CEO Zuzanna Stamirowska, a complexity scientist who assembled a team of AI pioneers, including CTO Jan Chorowski, who was the first person to apply Attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, as well as CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20. The company is backed by leading investors and advisors, including Lukasz Kaiser, co-author of the Transformer ("the T" in ChatGPT) and a key researcher behind OpenAI's reasoning models. Pathway is headquartered in Palo Alto, California.

The Opportunity
This is an R&D position in attention-based models. We are currently searching for 1 or 2 R&D Engineers with a strong track record in machine learning research. This is an extremely ambitious foundational project. There is a flexible GPU budget associated with this specific project, guaranteed to be in the 7-digit range at minimum.

You Will
- Perform (distributed) model training.
- Help improve and adapt model architectures based on experiment results.
- Design new tasks and experiments.
- Optionally: oversee activities of team members involved in data preparation.
The results of your work will play a crucial role in the success of the project.

Cover letter
It's always a pleasure to say hi! If you could leave us 2-3 lines, we'd really appreciate it.

You are expected to meet at least one of the following criteria:
- You have published at least one paper at NeurIPS, ICLR, or ICML, where you were the lead author or made significant conceptual and code contributions.
- You have significantly contributed to an LLM training effort that became newsworthy (topped a Hugging Face benchmark, a best-in-class model, etc.), preferably using multiple GPUs.
- You have spent at least 6 months working in a leading machine learning research center (e.g. Google Brain / DeepMind, Apple, Meta, Anthropic, Nvidia, MILA).
- You were an ICPC World Finalist, or an IOI, IMO, or IPhO medalist in high school.

You Are
- A deep learning researcher with a track record in language models and/or RL (candidates with a Vision or Robotics ML background are also welcome to apply).
- Interested in improving foundational architectures and creating new benchmarks.
- Experienced at hands-on experiments and model training (PyTorch, JAX, or TensorFlow).
- Familiar with GPU architecture, memory design, and communication.
- Familiar with graph algorithms.
- Familiar with model monitoring, git, build systems, and CI/CD.
- Respectful of others.
- Fluent in English.

Bonus Points
- Knowledge of approaches used in distributed training.
- Familiarity with Triton.
- A successful track record in algorithms and data science contests.
- A code portfolio you can show.

Why You Should Apply
- Join an intellectually stimulating work environment.
- Be a pioneer: you get to work on a new type of "Live AI" challenge around long sequences and changing data.
- Be part of an early-stage AI startup that believes in impactful research and foundational changes.

Type of contract: full-time, permanent. Preferred joining date: immediate. The positions are open until filled; please apply immediately. Compensation: six-digit annual salary based on profile and location, plus an Employee Stock Option Plan. Location: remote work, with the possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, UK, United States, and Canada will be considered. If you meet our broad requirements but are missing some experience, don't hesitate to reach out to us.
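The distributed model training this role centers on rests, in the data-parallel case, on one small idea: each worker computes gradients on its own shard, the gradients are averaged (an all-reduce), and every replica applies the same update. A toy, framework-free sketch of that synchronization step (real training uses PyTorch/JAX collectives, not Python lists):

```python
def allreduce_mean(worker_grads: list[list[float]]) -> list[float]:
    """Average gradients element-wise across workers (the data-parallel sync step)."""
    n = len(worker_grads)
    return [sum(g[i] for g in worker_grads) / n for i in range(len(worker_grads[0]))]

def sgd_step(params: list[float], grads: list[float], lr: float = 0.1) -> list[float]:
    """Every worker applies the same averaged gradient, keeping replicas identical."""
    return [p - lr * g for p, g in zip(params, grads)]

# Two workers computed gradients on different data shards.
grads = [[0.2, -0.4], [0.6, 0.0]]
avg = allreduce_mean(grads)         # approximately [0.4, -0.2]
params = sgd_step([1.0, 1.0], avg)  # approximately [0.96, 1.02]
print(avg, params)
```

Everything beyond this (ring all-reduce, gradient sharding, overlap with compute) is an optimization of the same averaging step.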

Full Time · Remote · direct · Data & AI
Salary not disclosed · 3 months ago

Data Scientist (Warsaw)

Yosh.AI · Poland

Join Yosh.AI, a Google Premier Partner and leading GenAI solution provider for Customer Experience in Europe, working with companies like Zalando, Orange, Metro, CCC, Medicover, eSky and many more across Poland and Europe. We are on the lookout for talented individuals to contribute to our mission of transforming customer experiences in the retail, banking, and insurance sectors on a global scale. As Google Partner of the Year 2024, we collaborate on numerous cutting-edge and R&D projects, setting industry standards in AI applications. If you're ready to make a significant impact and work with a team of passionate experts within our team and Google, Yosh.AI is your destination. Apply now to be a part of our award-winning journey and help us drive the AI revolution.

Required Experience:
- A minimum of 3 years of experience in Data Science, with a keen interest or direct experience in Generative AI
- A minimum of 2 years of experience in NLP tasks
- Proficiency in PyTorch or another deep learning library
- Expert programming skills in Python
- Familiarity with the NumPy/Pandas libraries
- A solid understanding of machine learning concepts
- Knowledge of version control systems: git
- Demonstrated interest in and ability to work with advanced AI technologies, specifically in the field of Generative AI

Highly Desirable:
- Knowledge of technologies related to generative GPT models and the LangChain framework
- Proficiency in cloud solutions (AWS or GCP)
- Knowledge of Docker components and API deployment

We offer:
- Opportunity for professional development in the area of GenAI
- Private medical insurance
- Multisport card
- Google certification paths
- Hybrid or remote location: our office is located in the center of Warsaw
- Cooperation with a great team of energetic and open-minded people

If you are fascinated by the potential of Generative AI and eager to work at the intersection of innovation and practical application, alongside a team of experts and in strong alliance with the Google team, we look forward to receiving your application. This position is not just a job but a chance to shape the future of AI technologies. If this sounds like something just for you, please send your CV to: careers@yosh.ai.

By sending your application documents, you consent to the processing of your personal data contained in these documents by SHOPAI Sp. z o.o. (SHOPAI), ul. Walentego Roździeńskiego 2A, 41-946 Piekary Śląskie, for the purpose of participating in the recruitment process. You have the right to withdraw your consent at any time by submitting a withdrawal notice via email to careers@yosh.ai or by post to the Administrator's office address: Yosh.AI, ul. Foksal 16, 00-372 Warsaw. The withdrawal of consent does not affect the lawfulness of the processing performed on the basis of consent before its withdrawal. I consent to the processing of my personal data for the purpose of future recruitment. I am aware that I may withdraw this consent at any time by sending a message to the email address indicated above. The withdrawal of consent does not affect the lawfulness of the processing carried out prior to the exercise of this right. (Voluntary consent.)

Full Time · direct · Data & AI
Salary not disclosed · 3 weeks ago

Data Scientist

GFT Technologies SE · Kraków, PL

Date: Mar 24, 2026
Location: Kraków, PL, 30-302
Working place: Hybrid
Type of contract: employment contract
Salary range: 14,580 - 22,350 PLN gross

What will you do?
You will join a global initiative focused on building an AI-powered tool used by teams across the bank worldwide. The platform leverages modern technologies, including large language models (LLMs), to streamline everyday work, automate repetitive tasks, and support better decision-making. The project combines strong engineering practices with real business impact, delivering tools that improve efficiency, consistency, and collaboration at scale. It is a long-term, strategic programme developed by international teams and used by thousands of internal users.

Your tasks
- Develop and deploy machine learning models using Python (Pandas, Scikit-learn, TensorFlow)
- Build data pipelines and automate workflows
- Implement model monitoring, versioning, and CI/CD for ML
- Work with AWS and cloud-based ML solutions
- Analyze and visualize data for actionable insights
- Collaborate with engineering teams to productionize models

Your skills
- Strong experience in Python and relevant ML libraries, including Pandas, Scikit-learn, and TensorFlow
- Proven ability to develop, train, and deploy machine learning models
- Experience in building data pipelines and automating workflows
- Knowledge of model monitoring, versioning, and CI/CD practices for ML projects
- Familiarity with cloud-based ML solutions, preferably AWS
- Strong skills in data analysis and visualization to extract actionable insights
- Ability to collaborate with engineering teams to productionize ML models
- Excellent problem-solving skills and a results-oriented mindset

We offer you
- Contract of employment
- Hybrid work: 2 days a week in our/our client's office
- Working in a highly experienced and dedicated team
- Benefit package that can be tailored to your personal needs (private medical coverage, sport & recreation package, lunch subsidy, life insurance, etc.)
- Online training and certifications fit for your career path
- Access to the e-learning platform Mindgram, a holistic mental health and wellbeing platform
- Work From Anywhere (WFA): the temporary option to work remotely outside of Poland for up to 140 days per year (including Italy, Spain, the UK, Germany, Portugal, and Bulgaria)
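Model monitoring, one of the tasks listed above, often starts as something very small: track a rolling quality metric on live predictions and alert when it degrades past a tolerance. A hedged, stdlib-only sketch of that idea (class name, thresholds, and data are invented for illustration):

```python
from collections import deque

class AccuracyMonitor:
    """Rolling-window accuracy tracker with a simple degradation alert."""

    def __init__(self, baseline: float, window: int = 100, tolerance: float = 0.05):
        self.baseline = baseline    # accuracy measured at deployment time
        self.tolerance = tolerance  # allowed drop before alerting
        self.hits: deque = deque(maxlen=window)

    def record(self, prediction, actual) -> None:
        self.hits.append(prediction == actual)

    def accuracy(self) -> float:
        return sum(self.hits) / len(self.hits) if self.hits else 1.0

    def degraded(self) -> bool:
        """True once rolling accuracy falls below baseline minus tolerance."""
        return self.accuracy() < self.baseline - self.tolerance

monitor = AccuracyMonitor(baseline=0.95, window=10)
for pred, actual in [(1, 1), (0, 0), (1, 0), (0, 1), (1, 0)]:
    monitor.record(pred, actual)
print(monitor.accuracy(), monitor.degraded())
```

In production this check would hang off the serving path and feed an alerting system; label delay and class balance complicate it, but the shape is the same.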

Full Time · Remote · direct · Data & AI
PLN 14,580 - 22,350/month · 1 month ago

Data Analyst

B2B Network · Poland

Data Analyst working on multiple tasks supporting a large programme in Financial Crime Prevention: data profiling, data design, data mapping, and complex data analysis.

Must-have knowledge and experience:
- Proven experience in data management, data quality, or a related role, preferably within the financial services industry
- Experience with advanced SQL, Python, R, Java and Scala, and Big Data
- Familiarity with data taxonomy and the ability to build small data models
- Experience with Power BI to produce reports
- Excellent stakeholder management is mandatory, as you will be working closely with multiple business areas and key business leads across IT, Architecture, Financial Crime, Compliance, Legal, and the Business in order to implement changes
- Experience within financial crime (AML/CTF, KYC, Transaction Monitoring, and Sanctions)
- Professional, organised, and structured in approach, with work produced to a high standard, always striving to find the solution that is best for the bank
- Prepared for a fast-paced project with changing priorities
- Excellent analytical and problem-solving skills, strong attention to detail, and a commitment to data accuracy
- Ability to communicate effectively with both technical and non-technical audiences
- Experience with data governance frameworks and methodologies is a plus
- Experience with data migration and integration projects is a plus
- Ability to work independently and as part of a team
- Fluency in English (speaking and writing) is a requirement
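The data profiling named in the listing usually begins with per-column completeness and distinct-value counts over a source extract. A stdlib-only sketch of such a profile (column names and data are made up for the example):

```python
import csv
import io

def profile(rows: list) -> dict:
    """Per-column null rate and distinct-value count for a list of records."""
    report = {}
    for col in rows[0].keys():
        values = [r[col] for r in rows]
        nulls = sum(1 for v in values if v in ("", None))
        report[col] = {
            "null_rate": nulls / len(values),   # share of empty values
            "distinct": len(set(values)),       # cardinality, incl. the empty marker
        }
    return report

# A toy extract in the shape a screening system might export.
raw = "customer_id,country,risk_flag\nC1,PL,high\nC2,,low\nC3,PL,\nC4,DE,low\n"
rows = list(csv.DictReader(io.StringIO(raw)))
print(profile(rows))
```

On a real programme this would run in SQL or a profiling tool over millions of rows, but the metrics (completeness, cardinality) are the same ones that feed data-quality rules.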

Full Time · direct · Data & AI
Salary not disclosed · 2 months ago

ITSelecta Talent Solutions, based in Krakow, Poland, is a specialist recruitment agency with a multicultural team fluent in various languages. Serving Poland and Central Europe, the agency focuses on recruiting top Polish developers and other talent. Their expert recruiters and business developers are committed to addressing IT challenges, offering tailored recruitment services to build outstanding development teams for specific client needs.

For our client, an international company, we are looking for an Oracle Data Engineer and Modeler. In this role, you will design, develop, and maintain components of the Global Data Platform operating in a multicloud environment (Oracle Cloud Infrastructure and Azure). You will work on integrating data sources into Oracle Autonomous Database, building unified data structures, and developing data transformation processes. You will also expand your skills in PL/SQL, OCI, Oracle Integration, and tools interoperating with Oracle Database.

As an Oracle Data Engineer and Modeler, you will:
- Maintain and develop the Oracle Autonomous Database as a data bus between integrated systems.
- Design data structures and expand automated data unification processes with high performance and accuracy.
- Create and maintain data dictionaries and metadata repositories.
- Ensure data model accuracy, completeness, and proper documentation.
- Develop and extend integrations in Oracle Integration Gen 3 and other tools.
- Collaborate with data architects, governance analysts, and business stakeholders.
- Cooperate with Azure specialists to provide data for the Azure analytics layer.

Requirements
The ideal candidate:
- Has experience with Oracle databases (preferably cloud-based).
- Has experience creating and implementing data models.
- Understands ERP/CRM data structures and can map them to target data models.
- Has strong SQL and PL/SQL skills.
- Has experience with data/application integration projects.
- Is proactive, communicative, and a strong team player.

Nice to have:
- Knowledge of NetSuite, SAP S/4HANA, Dynamics 365, Sage, Salesforce.
- Familiarity with Oracle Cloud Infrastructure.
- Experience building UI solutions on Oracle DB (e.g., APEX).
- Basic MS SQL Server experience.
- Experience with data warehouses / BI systems.
- Knowledge of Azure Cloud technologies.

Full Time · direct · Data & AI
Salary not disclosed · 3 months ago

Senior Data Analyst

B2Bnetwork · Poland

Detailed description of work tasks to be carried out:
- Perform IT analysis, investigative data analysis, system design, and data modelling to support development teams.
- Take a lead or facilitating role to drive functional and non-functional requirements and analysis work involving multiple stakeholders, to design sustainable and value-adding solutions.
- Ensure alignment and transparency of requirements by acting as a bridge between IT and business teams.
- Build a contextual understanding of business processes on one hand and functional/technical know-how of our solutions on the other.
- Build an understanding of the data model, its consumption by development teams, and whether the data fulfils business requirements in our IIS area.

Must-have knowledge and experience. You:
- Have an analytical way of thinking
- Are very good at interpersonal communication and have a good command of English
- Have a passion for data analysis, problem solving, and requirements elicitation
- Have a strong business and technical background
- Are familiar with agile ways of working and willing to work in cross-border, cross-cultural, virtual teams
- Are proactive, structured, detail-oriented, and quality-driven
- Are able and eager to develop in your role, as assignments can vary over time
- Are cooperative as a team player as well as able to work independently

Your experience and background:
- Experienced in relational database design and querying, including optimization
- Familiar with Big Data platforms (querying, optimization, loading and retrieving data) and at least some of the following technologies: Hadoop, Hive, Impala, pySpark, HQL
- Experienced with JSON/AVRO data formats
- Experienced with the use and expansion of a standardized data model (e.g. a Canonical Data Model)
- Experienced in defining and forming business and technical requirements into an understandable format
- Experienced with at least one of the following programming languages: Python, R, Java
- Have used Jira and Bitbucket for project tracking and version control
- Experienced in requirement analysis and refinement in data-related areas
- Able to collaborate with development teams (developers, architects, IT analysts)

Nice-to-have knowledge and experience:
- Finance background and/or banking experience preferred
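Working with a Canonical Data Model, as the experience list above mentions, mostly means mapping each source system's fields onto one agreed schema. A small illustrative sketch (all field names are invented for the example):

```python
def to_canonical(record: dict, mapping: dict) -> dict:
    """Rename source-system fields to canonical ones, dropping unmapped fields."""
    return {canon: record[src] for src, canon in mapping.items() if src in record}

# Two source systems describe the same customer with different field names.
core_banking = {"cust_no": "C-1", "cntry": "PL"}
crm = {"customerId": "C-1", "country_code": "PL", "notes": "vip"}

canonical_a = to_canonical(core_banking, {"cust_no": "customer_id", "cntry": "country"})
canonical_b = to_canonical(crm, {"customerId": "customer_id", "country_code": "country"})
print(canonical_a == canonical_b)  # both collapse to the same canonical record
```

Real canonical models also pin down types, code lists, and mandatory fields; the analyst's job is largely maintaining and refining these mappings with the development teams.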

Full Time · direct · Data & AI
Salary not disclosed · 1 month ago

Join Tether and Shape the Future of Digital Finance

At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction.

Innovate with Tether

Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT, relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning:

Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities.

Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET, our flagship app that redefines secure and private data sharing.

Tether Education: Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity.

Tether Evolution: At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways.

Why Join Us?

Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards.
We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future?

About the job

At Tether, we are at the forefront of integrating artificial intelligence with brain-computer interface technologies. Our projects leverage deep learning, generative models, and representation learning to decode and interpret brain activity. With our foundation in both established and cutting-edge AI infrastructure, our mission is to bridge the gap between AI and neuroimaging, driving innovation that’s not only revolutionary but also accessible, transparent, and privacy-focused.

We are looking for a motivated and skilled machine learning engineer to join our dynamic Brain & AI team. This role focuses on developing AI models that enhance our understanding of neural mechanisms by building encoding and decoding models, and on applying this knowledge to real-world applications such as brain-computer interfaces. You will be instrumental in pushing the boundaries of what’s possible in AI and neuroscience, helping to solve some of the most complex and fascinating challenges in the field today.

Responsibilities

Develop and evaluate scalable deep learning algorithms that are central to our brain decoding initiatives.
Collaborate closely with data scientists to pioneer research in generative modeling and representation learning.
Identify bottlenecks in data processing pipelines and devise effective solutions, improving performance and reliability.
Maintain high standards of code quality, organization, and automation across all projects.
Adapt machine learning and neural network algorithms to optimize performance in various computing environments, including distributed clusters and GPUs.
Write and revise papers, participate in conferences, and communicate and disseminate results.
Basic Qualifications:

Degree in Computer Science, Statistics, Informatics, Physics, Math, Neuroscience or another quantitative field.
3+ years of experience working in industry or research.
Strong programming skills in Python, with experience in developing machine learning algorithms or infrastructure using Python and PyTorch.
Experience in deep learning techniques such as supervised, semi-supervised, self-supervised learning, and/or generative modeling.
Strong scientific background and ability to formulate and test novel hypotheses with proper experiments, draw conclusions and support claims.
Proficient in managing unstructured datasets, with strong analytical skills.
Demonstrated project management and organizational skills.
Proven ability to support and collaborate with cross-functional teams in a dynamic environment.

Preferred Qualifications:

PhD and research experience in Computer Science, Statistics, Informatics, Physics, Math, Neuroscience or another quantitative field.
Scientific publications in top-tier AI and neuroscience conferences (NeurIPS, ICLR, ICML, AAAI, CVPR, Cosyne, SFN, CNN, etc.) or peer-reviewed journals.
Familiarity with deep learning libraries such as PyTorch and the Hugging Face stack (Transformers, Accelerate, Diffusers).
Hands-on experience in training and fine-tuning generative models like diffusion models or large language models such as GPT and LLaMA.
Experience with data and model visualization tools.
Experience with non-invasive neural data (fMRI, EEG, MEG) or invasive neural recordings (ECoG, MEA, etc.).

Important information for candidates

Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles:

Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page: https://tether.recruitee.com/
Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles.
If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website.
Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms.
Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io.
We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately.
When in doubt, feel free to reach out through our official website.

Full Time · Remote · direct · Data & AI
Salary not disclosed · 2 months ago

Lead/Senior Data Scientist
Poland
Apply now

Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.

Our team delivers business solutions with Machine Learning and Data Science, often turning them into scalable platforms, including non-trivial and innovative solutions in the space of Forecasting and Customer Analytics that leverage cutting-edge Causality frameworks. Example projects in this area: Next Best Offer/Action, propensity modeling, churn modeling, forecasting of demand and sales, revenue growth management, etc. This role requires close collaboration with business stakeholders, and the ability to understand their problems and translate them into machine learning ones.

Tasks:

Work on end-to-end classification and forecasting use cases: problem framing, data preparation, model development, evaluation and basic deployment support (e.g. demand forecasting, churn prediction).
Explore and clean data; perform EDA to understand data and flag data quality issues.
Engineer features for tabular and time-series data.
Train, validate, and tune standard ML models (e.g. logistic regression, tree-based models, gradient boosting, simple neural nets, classical time-series models).
Evaluate models with appropriate metrics that have impact on business KPIs.
Build clear visualizations and concise reports to present model results and insights to business stakeholders.
Collaborate with data engineers and AI engineers to bring models into production (batch scoring, APIs, model monitoring, dashboards).
Document data sources, modeling assumptions, and experiment results in a reproducible way (notebooks, reports, wikis).
Business understanding: translating problems into technical goals by defining success metrics, auditing data feasibility, and aligning stakeholder expectations.
Pre-sales activities (at senior consultant level).

Requirements:

Commercial experience with various classical data science and Machine Learning (ML) models (e.g. decision trees, ensemble-based tree models, linear regression, etc.).
Solid knowledge of customer analytics concepts or advanced forecasting.
Model hyperparameter tuning.
Model validation frameworks.
Experience with business requirements gathering and transforming them into a technical plan, data processing, feature engineering, and model evaluation.
Previous experience in an analytical role supporting business will be a plus.
Fluency in Python, basic working knowledge of SQL.
Knowledge of specific DS/ML libraries.
Solid experience in one of the cloud computing platforms (Databricks, GCP or Azure).

What Will Set You Apart:

Understanding of causal machine learning.
Experience in working with big data and distributed environments would be a plus.
Commercial experience proven by multiple successful projects in the area of forecasting would be a big plus.
Experience with OOP in Python.
Experience with MLOps.
Familiarity with other languages (R, Scala) would be a plus.

General:

Basic computer programming skills and familiarity with programming concepts.
Strong business acumen.
Experience with deep learning, reinforcement learning or other advanced modeling concepts in classical Data Science problems.
Ability to come up with creative solutions to address customer problems.

Missing one or two of these qualifications? We still want to hear from you! If you bring a positive mindset, we'll provide an environment where you feel valued and empowered to learn and grow.
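The validation requirements above (model validation frameworks, hyperparameter tuning) rest on splitting data correctly before measuring anything. A minimal hand-rolled k-fold cross-validation sketch, with a toy "model" that just predicts the training mean; the data and names are illustrative:

```python
# Minimal k-fold cross-validation: each sample is used for validation
# exactly once, and never appears in its own training fold.
def kfold_indices(n, k):
    """Yield (train_idx, val_idx) pairs covering all n samples."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

y = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0]
errors = []
for train, val in kfold_indices(len(y), 3):
    mean = sum(y[i] for i in train) / len(train)   # "fit" on the training fold
    errors += [abs(y[i] - mean) for i in val]      # "validate" on the held-out fold
mae = sum(errors) / len(errors)                    # cross-validated MAE
```

Libraries such as scikit-learn provide this (and stratified/time-series variants) ready-made; the point of the sketch is only the discipline of never scoring a model on data it was fit on.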

Full Time · direct · Data & AI
Salary not disclosed · 1 month ago

About Pathway

Pathway is shaking the foundations of artificial intelligence by introducing the world’s first post-transformer model that adapts and thinks just like humans. Pathway’s breakthrough architecture (BDH) outperforms the Transformer and provides the enterprise with full visibility into how the model works. Combining the foundational model with the fastest data processing engine on the market, Pathway enables enterprises to move beyond incremental optimization and toward truly contextualized, experience-driven intelligence. The company is trusted by organizations such as NATO, La Poste, and Formula 1 racing teams.

Pathway is led by co-founder & CEO Zuzanna Stamirowska, a complexity scientist who created a team of AI pioneers, including CTO Jan Chorowski, who was the first person to apply Attention to speech and worked with Nobel laureate Geoff Hinton at Google Brain, as well as CSO Adrian Kosowski, a leading computer scientist and quantum physicist who obtained his PhD at the age of 20. The company is backed by leading investors and advisors, including TQ Ventures and Lukasz Kaiser, co-author of the Transformer (“the T” in ChatGPT) and a key researcher behind OpenAI’s reasoning models. Pathway is headquartered in Palo Alto, California.

The opportunity

We are currently searching for a Machine Learning DevOps engineer with experience in cloud and compute cluster management, scaling infrastructures, and Linux administration. Our development, ML training, and production environment is in the cloud, using several major cloud providers. We need support in managing and automating the processes, and scaling the infrastructure to growing team and production needs.

You will:

Optimize infrastructure for ML training and inference (e.g., GPUs, distributed compute).
Automate and maintain ML/LLM pipelines (data ingestion, training, validation, deployment).
Manage model versioning, reproducibility, and traceability.
Work with terabyte-scale datasets.
Implement ML-centric CI/CD practices.
Monitor model performance and data drift in production.
Collaborate with machine learning engineers, software engineers, and platform teams.

The role focuses on operationalizing machine learning models, ensuring scalability, reliability, and automation across the ML lifecycle.

What We Are Looking For

Very good familiarity with Linux, shell scripts, and cluster configuration scripts as the basic work tools.
Proficiency in workload management, containerization and orchestration (Slurm, Docker, Kubernetes).
Solid grasp of CI/CD tools and workflows (GitHub Actions, Jenkins, GitLab CI, etc.).
Cloud infrastructure knowledge (AWS, GCP, Azure), especially around ML services (e.g., SageMaker HyperPod, Vertex AI).
Familiarity with monitoring/logging tools (Grafana, CloudWatch, Prometheus, Loki).
Experience with infrastructure as code (Terraform, CloudFormation, cluster-toolkit).
Experience with ML pipeline orchestration tools (e.g., MLflow, Kubeflow, Airflow, Metaflow).
Programming skills in Python (with exposure to ML libraries like TensorFlow, PyTorch).
Experience with cluster, systems, and network administration.
Willingness to learn.

This position holds a minimum requirement of a BSc in Computer Science or Information Technology.

We will generally favor candidates who have undertaken ambitious efforts in the past. For example, if you have made an accepted contribution to the Linux kernel, won an important bug bounty, supported an academic grid/cluster computing team in a scaling effort, or even won a sports championship, make sure to mention this in your application!

Why You Should Apply

Intellectually stimulating work environment. Be a pioneer: you get to work with real-time data processing & AI.
Work in one of the hottest AI startups, with exciting career prospects.
Team members are distributed across the world.
Responsibilities and the ability to make a significant contribution to the company’s success.
Inclusive workplace culture.

Further details

Type of contract: Permanent employment contract.
Preferable joining date: Immediate.
Compensation: based on profile and location.
Location: Remote work. Possibility to work or meet with other team members in one of our offices: Palo Alto, CA; Paris, France; or Wroclaw, Poland. Candidates based anywhere in the EU, United States, and Canada will be considered.
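As one hedged illustration of the ML-centric CI/CD work this role describes, a minimal GitHub Actions workflow might run tests, smoke-train on a small sample, and archive the resulting checkpoint for traceability. The job names, file paths, and the `train.py` script are assumptions for the sketch, not Pathway's actual setup:

```yaml
# Hypothetical CI workflow for an ML repo: unit tests, a short smoke
# training run, and artifact upload for model traceability.
name: ml-pipeline
on:
  push:
    branches: [main]
jobs:
  test-and-train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.11"
      - name: Run unit tests
        run: |
          pip install -r requirements.txt
          pytest tests/
      - name: Smoke-train on a small data sample
        run: python train.py --config configs/smoke.yaml --max-steps 10
      - name: Record model checkpoint for traceability
        uses: actions/upload-artifact@v4
        with:
          name: model-checkpoint
          path: checkpoints/
```

Real setups add GPU runners, data/model versioning hooks, and deployment gates, but the shape (test, train cheaply, record the artifact) stays the same.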

Full Time · Remote · direct · DevOps
Salary not disclosed · 6 months ago

Senior Data Engineer (Snowflake) At LeverX, we have had the privilege of delivering over 1,500 projects for various clients. With 20+ years in the market, our team of 2,200+ is strong, reliable, and always evolving: learning, growing, and striving for excellence. We are looking for a Senior Data Engineer (Snowflake) to join us. Let’s see if we are a good fit for each other! WHAT WE OFFER: Projects in different domains: healthcare, manufacturing, e-commerce, fintech, etc. Projects for every taste: Startup products, enterprise solutions, research & development initiatives, and projects at the crossroads of SAP and the latest web technologies. Global clients based in Europe and the US, including Fortune 500 companies. Employment security: We hire for our team, not just a specific project. If your project ends, we will find you a new one. Healthy work atmosphere: On average, our employees stay with the company for 4+ years. Market-based compensation and regular performance reviews. Internal expert communities and courses. Perks to support your growth and well-being. REQUIRED SKILLS: 5+ years of hands-on experience in Data Engineering. Strong expertise in Snowflake, including architecture, advanced SQL, Snowpark (Python), and data governance (RBAC, masking policies). Solid experience with dbt (models, tests, snapshots, incremental strategies, CI/CD). Experience with data orchestration tools (Airflow, Dagster, Prefect) or Snowflake Tasks and Streams. Proven ability to design and maintain scalable Data Warehouse or Data Lakehouse architectures. Experience optimizing costs by managing Snowflake credits, warehouse configurations, and storage usage. Strong Python skills for data transformation, automation, and scripting. English B2+. NICE-TO-HAVE SKILLS: SnowPro Core or SnowPro Advanced Architect certifications. Experience building production-grade dashboards in BI tools (Tableau, Looker, Superset, Metabase). 
Experience with Cloud Platforms (AWS/Azure/GCP) regarding IAM, S3/Blob Storage networking. RESPONSIBILITIES: Design and build scalable data pipelines in Snowflake using SQL, Snowpark (Python), and Stored Procedures. Own Snowflake security and governance, including RBAC, dynamic data masking, row access policies, and object tagging. Develop and optimize data ingestion using Snowpipe, COPY INTO, and external stages. Optimize performance through virtual warehouse sizing, query profiling, clustering, and search optimization. Implement data transformations and semantic layers using dbt, applying medallion or Data Vault modeling patterns. Build reliable orchestration and CI/CD for data pipelines using Snowflake Tasks & Streams, external orchestrators, and infrastructure as code. Collaborate with and mentor engineers, partner with analytics and ML teams, and drive continuous improvements across the data platform.
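The incremental-load work described above (Snowpipe ingestion, dbt incremental strategies, Tasks & Streams) ultimately rests on the upsert/merge pattern that Snowflake's MERGE and dbt automate at warehouse scale. The logic can be sketched locally with SQLite's ON CONFLICT upsert; the table and column names here are invented for illustration:

```python
import sqlite3

# Illustrative sketch of an incremental "merge" load: insert new keys,
# overwrite existing ones with the latest values.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dim_customer (id INTEGER PRIMARY KEY, name TEXT, updated_at TEXT)"
)
conn.execute("INSERT INTO dim_customer VALUES (1, 'Alice', '2024-01-01')")

def merge_batch(conn, rows):
    """Upsert a batch of (id, name, updated_at) rows."""
    conn.executemany(
        """INSERT INTO dim_customer (id, name, updated_at) VALUES (?, ?, ?)
           ON CONFLICT(id) DO UPDATE SET
             name = excluded.name, updated_at = excluded.updated_at""",
        rows,
    )

# One incoming micro-batch: an update to id=1 and a brand-new id=2.
merge_batch(conn, [(1, "Alice B.", "2024-02-01"), (2, "Bob", "2024-02-01")])
result = dict(conn.execute("SELECT id, name FROM dim_customer").fetchall())
print(result)  # {1: 'Alice B.', 2: 'Bob'}
```

In Snowflake the same intent is usually expressed as a MERGE statement or a dbt incremental model with a `unique_key`; the warehouse then handles clustering and micro-partition rewrites.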

Full Time · direct · Data & AI
Salary not disclosed · 1 month ago

Lead/Senior Data Scientist (Generative AI)
Poland
Apply now

Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.

We are looking for a Data Scientist to join the Lingaro DS&AI Competency Center AI team, to help our clients deliver value by building end-to-end Talk-To-Data (TTD) solutions powered by LLMs & GenAI models. The Lingaro AI team focuses on novel aspects of Generative AI and LLMs with a range of applications, i.e. RAG, summarization, multi-agent workflows and model fine-tuning. Our deep, PhD-level expertise in GenAI, NLP and Computer Vision makes us a great team to join if you are willing to challenge yourself to new heights. We are looking for a long-term engagement in the space of GenAI and related aspects.

Tasks:

Build end-to-end GenAI applications such as chatbots, voicebots, and Talk-to-Data systems, where you will help build: data ingestion, the retrieval layer, orchestration (e.g. LangChain/LlamaIndex/LangGraph), the API/backend, and a simple UI where needed.
Design and implement RAG pipelines with vector databases, hybrid search, rerankers, query transformation, and evaluation frameworks for relevance and robustness.
Perform model selection, prompting strategies, and fine-tuning (LoRA/QLoRA/SFT) for text, code, and multimodal models, including guardrails, output evaluation and A/B testing.
Design, integrate and optimize LLM interaction with external tools, APIs, and data sources in a standardized and scalable way by using Model Context Protocol (MCP) connectors.
Business understanding: translating problems into technical goals by defining success metrics, auditing data feasibility, and aligning stakeholder expectations.
Supporting project delivery.
Supporting pre-sales activities and initiatives.

What We're Looking For:

Familiarity with the theory behind various deep learning concepts.
Experience with Machine Learning (ML), especially in Generative AI (LLM/LMM), with a focus on Natural Language Processing (NLP) or multimodal models.
Experience with business requirements gathering and transforming them into a technical plan, data processing, feature engineering, model evaluation, hypothesis testing and model deployment.
Fluency in Python and object-oriented programming, working knowledge of SQL and vector databases.
Strong tech stack in the Azure or GCP cloud.
Knowledge of specific Deep Learning and GenAI libraries, such as NumPy, PyTorch, HuggingFace, LangChain, LangGraph, and GenAI APIs, i.e. OpenAI/Gemini.
Hands-on experience designing or operating MCP servers/clients for LLM agents.

What Will Set You Apart:

Experience in working with Databricks will be a plus.
Commercial experience proven by multiple successful projects in Generative AI, Natural Language Processing or Computer Vision will be a big plus.

General:

Experience with microservice architectures.
Experience with code repositories and code assistants.
Strong business acumen.
Ability to come up with creative solutions to address customer problems.
Ability to lead juniors in a project team.

Missing one or two of these qualifications? We still want to hear from you! If you bring a positive mindset, we'll provide an environment where you feel valued and empowered to learn and grow.

We offer:

Stable employment. On the market since 2008, 1500+ talents currently on board in 7 global sites.
“Office as an option” model. You can choose to work remotely or in the office.
Workation. Enjoy working from inspiring locations in line with our workation policy.
Great Place to Work® certified employer. Flexibility regarding working hours and your preferred form of contract.   Comprehensive online onboarding program with a “Buddy” from day 1.    Cooperation with top-tier engineers and experts.   Unlimited access to the Udemy learning platform from day 1. Certificate training programs. Lingarians earn 500+ technology certificates yearly.  Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, 110+ training opportunities yearly. Grow as we grow as a company. 76% of our managers are internal promotions.   A diverse, inclusive, and values-driven community.    Autonomy to choose the way you work. We trust your ideas.   Create our community together. Refer your friends to receive bonuses.   Activities to support your well-being and health. Plenty of opportunities to donate to charities and support the environment.   Modern office equipment. Purchased for you or available to borrow, depending on your location.
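At the core of the RAG pipelines this posting mentions is a retrieval step: embed documents, score them against a query vector, and return the top hits for the LLM to ground its answer in. A toy sketch of that step, with tiny hand-made vectors standing in for a real embedding model and everything else (document names, dimensions) invented for illustration:

```python
import math

# Toy RAG retrieval: rank documents by cosine similarity to a query vector.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Pretend embeddings; a real system would call an embedding model and
# store these in a vector database.
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.8, 0.2],
    "privacy notice": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=1):
    ranked = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return ranked[:k]

top = retrieve([1.0, 0.0, 0.1])  # a query vector "near" the refund-policy doc
print(top)  # ['refund policy']
```

Production pipelines replace the dictionary with a vector database, add hybrid (keyword + vector) search and a reranker, and pass the retrieved text into the prompt; the scoring idea is unchanged.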

Full Time · Remote · direct · Data & AI
Salary not disclosed · 1 month ago

Support Data Engineer

CLOUDFIDE SPÓŁKA Z OGRANICZONĄ ODPOWIEDZIALNOŚCIĄ · Remote

This is a support-focused, hands-on internship where you’ll assist engineers in monitoring, maintaining, and improving cloud-based data pipelines and environments. Expect training, mentorship, and meaningful responsibilities - all tailored to help you build the foundations of a future Data Engineer career. Your Responsibilities Supporting engineering teams with day-to-day technical and operational tasks. Assisting with monitoring, debugging, and validating cloud data pipelines. Using Python and SQL for data processing, cleaning, and analysis. Helping document workflows and implement small improvements. Collaborating with mid and senior engineers to understand requirements and deliver simple solutions. Learning modern tools such as Azure, Databricks, Spark, orchestration, and CI/CD. Gradually taking on more advanced tasks as your skills grow. What We’re Looking For ⚠️ Mandatory requirement: Active student status (IT-related studies preferred). No commercial experience required. Working knowledge of Python - ability to write basic scripts and perform data transformations. Working knowledge of SQL - comfortable with queries, joins, aggregations. Student in Computer Science, Data Science, Applied Mathematics, Informatics, or a similar field. Interest in cloud technologies, data engineering, or Big Data systems. Eagerness to learn, ask questions, and grow in a fast-paced tech environment. Analytical mindset and problem-solving attitude. Nice to Have Participation in student tech groups, hackathons, coding communities, or research circles. Any exposure to Azure, Databricks, Spark, or ETL concepts (e.g., from university projects). Familiarity with scripting, version control, or cloud basics.
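The Python and SQL tasks this internship describes often reduce to cleaning raw records and aggregating per key. A plain-Python sketch of that (the records are invented for illustration), doing the equivalent of a SQL GROUP BY with AVG:

```python
# Raw records as they might arrive from a pipeline: values are strings,
# and one row is dirty (missing value).
records = [
    {"sensor": "a", "value": "10"},
    {"sensor": "a", "value": "12"},
    {"sensor": "b", "value": "7"},
    {"sensor": "b", "value": None},  # dirty row to drop
]

# Clean: drop rows with missing values, cast strings to numbers.
clean = [{**r, "value": float(r["value"])} for r in records if r["value"] is not None]

# Aggregate: average value per sensor (like SELECT sensor, AVG(value) ... GROUP BY sensor).
totals = {}
for r in clean:
    s, n = totals.get(r["sensor"], (0.0, 0))
    totals[r["sensor"]] = (s + r["value"], n + 1)
averages = {k: s / n for k, (s, n) in totals.items()}
print(averages)  # {'a': 11.0, 'b': 7.0}
```

On the job the same operation would usually be one SQL query or a pandas `groupby`; writing it out by hand is a good way to check you understand what the query is doing.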

Full Time · Remote · direct · Data & AI
Salary not disclosed · 2 months ago

Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career at Flex offers the opportunity to make a difference and invest in your growth in a respectful, inclusive, and collaborative environment. If you are excited about a role but don't meet every bullet point, we encourage you to apply and join us to create the extraordinary. Job Summary Job Description We’re looking to add a new Junior Supply Chain Data Scientist (Fixed term-1 year) employee to be based in Tczew, Poland. What a typical day looks like: Analyze supply chain data to identify trends and support decision-making Create clear visualizations and reports to explain insights to business teams Help translate business questions into data-driven analyses Support building and testing simple predictive models for forecasting and optimization Work with large datasets to extract useful information and improve processes Collaborate with developers and engineers to implement data solutions Prepare presentations and share findings with stakeholders The experience/qualification we’re looking to add to our team: Bachelor/Master of Science degree in Data Science/ML/Mathematics/Supply Chain/Operations Research Demonstrated knowledge of applied statistical and machine learning methods Strong knowledge of R and/or Python is required Experience/knowledge with SQL Good Polish & English language skills Eligibility to work in Poland (EU) Location: Tczew It is a hybrid role where the contract might potentially be prolonged and made permanent after 1 year. TK43 Job Category Operational Excellence Required Skills: Optional Skills: Flex is an Equal Opportunity Employer and employment selection decisions are based on merit, qualifications, and abilities. 
We do not discriminate based on: age, race, religion, color, sex, national origin, marital status, sexual orientation, gender identity, veteran status, disability, pregnancy status, or any other status protected by law. We're happy to provide reasonable accommodations to those with a disability for assistance in the application process. Please email accessibility@flex.com and we'll discuss your specific situation and next steps (NOTE: this email does not accept or consider resumes or applications. This is only for disability assistance. To be considered for a position at Flex, you must complete the application process first). Through the collective strength of 140,000 team members across 30 countries and responsible, sustainable operations, Flex is the diversified manufacturing partner of choice that helps market-leading brands design, build and deliver innovative products that improve the world. A career with Flex offers the opportunity to make a difference, invest in your growth, and build great products for our customers that improve people’s lives. Together, let’s create the extraordinary! Join our talent community to learn more about job opportunities, company news and events.

Full Time · Remote · direct · Data & AI
Salary not disclosed · 1 month ago

Senior Data Engineering Architect
Poland
Apply now

Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.

Tasks:

Collaborate with stakeholders to understand business requirements and translate them into data engineering solutions.
Design and oversee the overall data architecture and infrastructure, ensuring scalability, performance, security, maintainability, and adherence to industry best practices.
Define data models and data schemas to meet business needs, considering factors such as data volume, velocity, variety, and veracity.
Select and integrate appropriate data technologies and tools, such as databases, data lakes, data warehouses, and big data frameworks, to support data processing and analysis.
Create scalable and efficient data processing frameworks, including ETL (Extract, Transform, Load) processes, data pipelines, and data integration solutions.
Ensure that data engineering solutions align with the organization's long-term data strategy and goals.
Evaluate and recommend data governance strategies and practices, including data privacy, security, and compliance measures.
Collaborate with data scientists, analysts, and other stakeholders to define data requirements and enable effective data analysis and reporting.
Provide technical guidance and expertise to data engineering teams, promoting best practices and ensuring high-quality deliverables.
Support the team throughout the implementation process, answering questions and addressing issues as they arise.
Oversee the implementation of the solution, ensuring that it is implemented according to the design documents and technical specifications.
Stay updated with emerging trends and technologies in data engineering, recommending and implementing innovative solutions as appropriate.
Conduct performance analysis and optimization of data engineering systems, identifying and resolving bottlenecks and inefficiencies.
Ensure data quality and integrity throughout the data engineering processes, implementing appropriate validation and monitoring mechanisms.
Collaborate with cross-functional teams to integrate data engineering solutions with other systems and applications.
Participate in project planning and estimation, providing technical insights and recommendations.
Document data architecture, infrastructure, and design decisions, ensuring clear and up-to-date documentation for implementation, reference and knowledge sharing.

Requirements:

Proven work experience as a Data Engineering Architect or a similar role and strong experience in the Data & Analytics area.
Strong understanding of data engineering concepts, including data modeling, ETL processes, data pipelines, and data governance.
Expertise in designing and implementing scalable and efficient data processing frameworks.
In-depth knowledge of various data technologies and tools, such as relational databases, NoSQL databases, data lakes, data warehouses, and big data frameworks (e.g., Hadoop, Spark).
Experience in selecting and integrating appropriate technologies to meet business requirements and long-term data strategy.
Ability to work closely with stakeholders to understand business needs and translate them into data engineering solutions.
Strong analytical and problem-solving skills, with the ability to identify and address complex data engineering challenges.
Proficiency in Python, PySpark, SQL.
Familiarity with cloud platforms and services, such as AWS, GCP, or Azure, and experience in designing and implementing data solutions in a cloud environment.
Knowledge of data governance principles and best practices, including data privacy and security regulations.
Excellent communication and collaboration skills, with the ability to effectively communicate technical concepts to non-technical stakeholders.
Experience in leading and mentoring data engineering teams, providing guidance and technical expertise.
Familiarity with agile methodologies and experience in working in agile development environments.
Continuous learning mindset, staying updated with the latest advancements and trends in data engineering and related technologies.
Strong project management skills, with the ability to prioritize tasks, manage timelines, and deliver high-quality results within designated deadlines.
Bachelor's degree in Computer Science, Information Technology, or a related field. A Master's degree may be preferred.
Strong understanding of distributed computing principles, including parallel processing, data partitioning, and fault-tolerance.
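Since the role centers on ETL processes and data pipelines, here is a deliberately minimal extract-transform-load sketch. The CSV source and the in-memory "warehouse" list are stand-ins for real source systems and targets; everything named here is invented for the example:

```python
import csv
import io

# Extract -> Transform (validate, type-cast, drop bad rows) -> Load.
raw_csv = "order_id,amount_eur\n1,10.50\n2,3.20\n3,\n"  # row 3 is incomplete

def extract(text):
    """Read rows from the source (here, a CSV string)."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Type-cast fields and drop rows failing the quality check."""
    out = []
    for r in rows:
        if r["amount_eur"]:  # simple data-quality rule: amount must be present
            out.append({"order_id": int(r["order_id"]),
                        "amount_eur": float(r["amount_eur"])})
    return out

def load(rows, target):
    """Append to the target; a real loader would write to a warehouse."""
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract(raw_csv)), warehouse)
print(loaded, warehouse[0])  # 2 {'order_id': 1, 'amount_eur': 10.5}
```

At architecture level the same three stages get distributed (Spark), orchestrated (Airflow), and monitored, but the validate-and-cast discipline in `transform` is where data quality is won or lost.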

Full Time · direct · Data & AI
Salary not disclosed · 1 month ago

Data Scientist

deepsense.ai Sp. z o.o. · Poland

Must have: Classical ML, Python, SQL, Visualisation
Employment type: B2B
Operating mode: Remote
Location: Poland

Your Role as Data Scientist

What makes this role stand out:
- You're closest to the data and analytics, from exploration and cleaning, through modeling, to generating business insights.
- Your tools include classical and non-linear ML models (regression, XGBoost, LightGBM, CatBoost) and libraries such as pandas and scikit-learn.
- You create analyses and visualizations that directly support business decisions.

Why it's worth it:
- You have a real impact on client strategies and decisions; your models and analyses don't end up in a drawer.
- Projects are diverse, from EDA and feature engineering, to predictive modeling, time-series analysis, data visualization, and dashboard development.
- You get room to grow, whether into deeper data analytics and business consulting, or towards AI engineering (working closely with MLEs and SEs).

A few project examples:
- Training multimodal LLMs for drug discovery.
- Building AI voicebots that double conversion rates.
- Creating a GenAI solution for a leading US legal company together with the OpenAI team.
- Running GenAI on edge devices with cloud-level performance.

All of this in a setup that feels like an AI-driven software house: remote-first, flexible, and packed with specialists who are open to sharing knowledge and experimenting with the newest tech.

The ideal candidate:
- Has a minimum of 4 years of experience in Data Science, delivering end-to-end, data-driven solutions.
- Is proficient in classical ML techniques (linear regression, feature selection methods, predictive modeling, time series) as well as non-linear methods (gradient boosting, random forest, XGBoost, LightGBM, CatBoost).
- Programs fluently in Python and uses libraries such as NumPy, pandas, scikit-learn.
- Can effectively manage and analyze data using SQL.
- Creates clear and engaging visualizations (matplotlib, seaborn, plotly).
- Can translate data into actionable business insights and recommendations.
- Stands out with strong communication skills and the ability to explain complex concepts in a simple way.
- Bonus points for experience in data engineering and cloud platforms (AWS, GCP, Azure), as well as knowledge of dashboarding tools (Tableau, Power BI, Dash) and experience in NLP or leveraging LLMs/Generative AI.
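The workflow this listing describes — feature engineering with pandas, then a gradient-boosted model from scikit-learn — can be sketched in a few lines. The dataset, column names, and the price/promo relationship below are entirely made up for illustration; they are not part of the listing.

```python
# Minimal sketch of a classical-ML workflow: tabular features in pandas,
# gradient boosting with scikit-learn, evaluation on a held-out split.
# All data here is synthetic and illustrative.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "price": rng.uniform(5, 50, 500),       # hypothetical unit price
    "promo": rng.integers(0, 2, 500),       # hypothetical promo flag
})
# Target with a known relationship plus noise, so the model has signal to find.
df["units_sold"] = 100 - 1.5 * df["price"] + 20 * df["promo"] + rng.normal(0, 5, 500)

X_train, X_test, y_train, y_test = train_test_split(
    df[["price", "promo"]], df["units_sold"], random_state=0
)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # R^2 on held-out data
```

In a real engagement the modeling step is the small part; most of the effort goes into the exploration, cleaning, and feature work the listing puts first.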

Full Time · Remote · direct · Data & AI
Salary not disclosed · 1 month ago

Machine Learning Engineer

deepsense.ai Sp. z o.o. · Poland

Must have: Cloud, Docker, GenAI, Kubernetes, LLMs, Machine Learning, Python, SQL, Vector Database
Employment type: B2B
Operating mode: Remote
Location: Poland

About deepsense.ai

At deepsense.ai, we won't just build AI solutions – we'll shape how companies around the world use them. By joining us, you'll:
- Work with partners like OpenAI, NVIDIA, Anyscale, LangChain, Crusoe, and ElevenLabs.
- Explore and apply the newest tech: LLMs & RAG, MLOps, Edge Solutions, Computer Vision, Predictive Analytics.
- Tackle challenges in software & tech, pharma & healthcare, manufacturing, retail, telecoms & media.
- Contribute to open-source projects – just take a look at our latest solution, ragbits, an agentic RAG framework with over 1.6k stars on GitHub.

And the best part of working at deepsense.ai?
- Spread your wings with clear career paths, technical or leadership.
- Collaborate with 100+ AI experts with 15+ years of applied AI experience, as well as PhD-level researchers with academic backgrounds.
- Tap into domain expertise and knowledge sharing whenever you need it.

Your Role as Machine Learning Engineer

What makes this role stand out:
- You're closest to the models and AI itself, building ML/LLM pipelines, integrating, and optimizing models.
- You have a background in deep learning, NLP, or CV, and today you're hands-on with GenAI and LLMs.
- You know techniques like fine-tuning, prompting, quantization, and LoRA, and you understand how models work and how to adapt them for production.

Why it's worth it:
- You'll dive into the hottest areas of AI: LLMs, agentic frameworks, RAG, inference optimization, and fine-tuning.
- Projects aren't just PoCs; the models you build go into production and reach real users.
- You won't be boxed into "just ML": you'll collaborate with Data Scientists, Software Engineers, and MLOps to deliver end-to-end solutions.

A few project examples:
- Training multimodal LLMs for drug discovery.
- Building AI voicebots that double conversion rates.
- Creating a GenAI solution for a leading US legal company together with the OpenAI team.
- Running GenAI on edge devices with cloud-level performance.

All of this in a setup that feels like an AI-driven software house: remote-first, flexible, and packed with specialists who are open to sharing knowledge and experimenting with the newest tech.

The ideal candidate:
- Has 4–5+ years of experience in ML engineering and working with models in production environments.
- Brings hands-on expertise with Large Language Models (LLMs) and Generative AI, including integration and inference optimization (latency, cost, scalability).
- Is familiar with frameworks and tools for building and orchestrating LLM pipelines (LangChain, LlamaIndex, RAG, agent frameworks).
- Can design and implement end-to-end ML/LLM pipelines, from data preparation and training/fine-tuning to production-grade APIs.
- Has experience with cloud platforms (AWS, GCP, Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, Azure ML).
- Has worked with SQL, NoSQL, and vector databases (Pinecone, FAISS, Weaviate).
- Is fluent in Python and experienced with ML frameworks (PyTorch, TensorFlow, Hugging Face).
- Knows how to deploy and monitor models (MLOps: CI/CD for models, logging, observability, quality monitoring).
- Communicates clearly and can collaborate effectively with both Data Scientists and product/client teams.
- Bonus: experience in prompt engineering and building simple AI user interfaces (Streamlit, Gradio).

We Offer

Impactful AI projects
- Tackle industry-grade challenges: from LLMs for drug discovery to GenAI on edge devices, AI voicebots, and open-source initiatives with global reach.
- Collaborate directly with our partners for early access to tools before public release, testing in production, and bringing know-how from AI leaders into our projects.
- Contribute to open-source initiatives like ragbits (1.6k+ stars on GitHub), adopted and appreciated by the ML community.

Growth & Knowledge Sharing
- Join AI specialists who share expertise through Tech Talks, workshops, and internal trainings.
- Present your work at conferences, run experiments, and stay ahead of the curve.
- Choose your own career path and get support for your development.

Flexibility & Culture
- Work fully remote, from one of our two offices (Warsaw, Bydgoszcz), or from coworking spaces in Poznań, Łódź, Wrocław, and Gdańsk.
- Enjoy flexible working hours.
- Benefit from a culture that prevents burnout and supports balance in daily work.
- Start with onboarding: from day one you are matched with a buddy.
- Get high-end equipment (laptops, dual monitors, pro peripherals).
- Access a premium AI development suite: OpenAI ChatGPT, Claude, Gemini Advanced, GitHub Copilot, Cursor AI IDE, Claude Code, NotebookLM, plus the latest emerging AI tools to support your daily work.
- Work in agile teams with fast decision-making and space to try out your ideas.

Basic Benefits
- Get private medical care and a Multisport card.
- Use our company library, attend onsite English lessons, and benefit from a dedicated training budget.
- Join free team lunches and enjoy fresh fruit & snacks.
- Take part in team-building activities and holiday celebrations.

Full Time · Remote · direct · Data & AI
Salary not disclosed · 1 month ago

Data Engineer With AWS

Lingaro · Poland

Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.

We are looking for a Lead Engineer to build and optimize cloud data platforms on AWS for a client in the Construction & Manufacturing industry. You'll be hands-on in developing pipelines, data models, and analytics-ready datasets, working closely with architects, analysts, and client stakeholders to deliver reliable, well-governed data solutions. The project involves direct work with the business to develop event-based and batch processing for ML and advanced analytics use cases, and data sharing within the business.

Tasks:
- Design and implement scalable batch data pipelines on AWS using AWS Glue.
- Develop and optimize data transformations using Apache Spark and SQL, ensuring performance and maintainability.
- Design efficient data models for analytical and reporting use cases (data lakes, data warehouses).
- Ensure data reliability, quality, monitoring, and performance in distributed environments.
- Engage directly with the business to understand requirements and translate them into data engineering solutions.
- Work independently while maintaining effective communication with stakeholders.

What We're Looking For:
- 4–6 years of experience as a Data Engineer.
- Strong SQL and Python skills (production-grade code).
- Hands-on experience with AWS (Glue/S3/Lambda/Redshift).
- Knowledge of data management principles and best practices, including data governance, data quality, and data integration.
- Practical experience in managing CI/CD pipelines (GitLab) and IaC (Terraform).
- Excellent problem-solving and analytical skills, with the ability to identify and resolve complex data engineering issues.
- Experience designing ETL/ELT pipelines.
- Experience working in an Agile environment.
- Experience with Power BI is a nice bonus.
- English C1.

What Will Set You Apart:
- Proactiveness.
- Challenger mindset.
- Strong communication skills.
- High ownership of assigned tasks.

Full Time · direct · Data & AI
Salary not disclosed · 1 month ago

Regular Data Engineer/ I

Inetum Polska · Poland - Warsaw

We are seeking a Regular Data Engineer to join our team. You will be responsible for managing critical ETL processes to acquire data from various sources, analyze it, and deliver accurate reports to clients on a regular basis. This role requires a commitment to 24/7 service availability, including on-call duties, to ensure timely and reliable delivery of reports with high confidence in the accuracy of figures.

Main Responsibilities:
- Monitoring Checks: Perform regular monitoring of data processes to ensure smooth operation and identify potential issues.
- Incident Support and Solving: Provide prompt support for incidents, troubleshoot problems, and implement solutions to minimize downtime.
- Continuous Improvement: Identify opportunities for improving the production environment and implement changes to enhance efficiency and reliability.
- Failure Management: Proactively avoid and capture failures in data processing to ensure uninterrupted service.
- Communication & Documentation: Maintain clear and effective communication with stakeholders and document processes, incidents, and solutions.
- KPI Monitoring: Monitor key performance indicators, including data set availability percentages, to ensure service quality.
- Ticket Management: Manage tickets by creating, assigning, following up, and resolving issues in a timely manner.
- Knowledge Base Management: Develop and maintain a comprehensive knowledge base to support ongoing operations and incident resolution.

Project Technology:
- SQL Server
- Azure Data Lake
- Azure Data Factory
- Snowflake
- Power BI
- T-SQL
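The "data set availability percentage" KPI mentioned above is, in essence, the share of monitoring runs in which a dataset was delivered successfully. A minimal sketch, with hypothetical dataset names and check results (none of these come from the listing):

```python
# Sketch: compute per-dataset availability percentages from monitoring
# check results. Input is an iterable of (dataset_name, ok) pairs, one
# per monitored run; all names and values below are illustrative.
from collections import defaultdict

def availability(checks):
    """Return {dataset: percentage of runs where the check passed}."""
    totals = defaultdict(int)
    ok_counts = defaultdict(int)
    for dataset, ok in checks:
        totals[dataset] += 1
        if ok:
            ok_counts[dataset] += 1
    return {d: 100.0 * ok_counts[d] / totals[d] for d in totals}

checks = [
    ("daily_sales", True),
    ("daily_sales", True),
    ("daily_sales", False),  # one missed delivery
    ("stock_snapshot", True),
]
kpis = availability(checks)
# daily_sales was available in 2 of 3 runs, stock_snapshot in 1 of 1
```

In the role itself these figures would come from pipeline run logs (e.g., Azure Data Factory run history) rather than a hard-coded list, and would feed the Power BI reporting the posting mentions.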

Full Time · direct · Data & AI
Salary not disclosed · 2 weeks ago

What You'll Do
- Design, build, and maintain scalable data pipelines, primarily within Google Cloud Platform.
- Develop integration and transformation workflows between cloud data services and on-prem Oracle databases.
- Work closely with trading, risk, and analytics teams to understand data requirements and deliver real-time and batch data solutions.
- Optimise and monitor performance of data systems to support latency-sensitive trading applications.
- Collaborate with cross-functional teams using Agile/Scrum methodologies to deliver business-critical data projects, and provide technical assistance to the team.
- Ensure robust data governance, lineage, and compliance (including MiFID II, FCA, and other regulatory standards).
- Automate data workflows using Terraform, CI/CD pipelines, and containerisation tools (Docker/Kubernetes).

What You Bring
- Strong experience with Google Cloud Platform (GCP), e.g. BigQuery, Dataplex, dbt, Pub/Sub, Dataflow.
- Expertise in Oracle SQL, PL/SQL, and working with complex stored procedures and large datasets.
- Proficiency in programming languages such as Python, Java, or Scala.
- Experience with CI/CD tooling (e.g. Jenkins, GitLab) and container orchestration (Kubernetes).
- Experience working with Infrastructure as Code, preferably Terraform, to manage cloud data infrastructure.
- Deep understanding of data modelling, data warehousing, and ETL/ELT design patterns.
- Demonstrated team leadership experience.
- Familiarity with Agile development practices (Scrum, Kanban, Jira).
- Exposure to financial markets, trading systems, or related high-performance environments is a strong plus.

Nice to Have
- GCP certification (e.g., Professional Data Engineer or similar) will be considered a strong advantage.
- Experience with Amazon Web Services (AWS).
- Knowledge of regulatory reporting, market data, or trade surveillance systems.
- Experience with Apache Airflow, dbt, or similar orchestration tools.
- Understanding of data security practices and compliance frameworks.

Full Time · direct · Data & AI
Salary not disclosed · 2 months ago

Fullstack Senior Data Scientist · Poland

Growth through diversity, equity, and inclusion. As an ethical business, we do what is right — including ensuring equal opportunities and fostering a safe, respectful workplace for each of us. We believe diversity fuels both personal and business growth. We're committed to building an inclusive community where all our people thrive regardless of their backgrounds, identities, or other personal characteristics.

We are looking for a skilled and experienced Fullstack Data Scientist with solid expertise in Generative AI (GenAI) to lead projects focused on building and implementing advanced systems based on Large Language Models (LLMs), chatbots, AI agents, and Retrieval-Augmented Generation (RAG) mechanisms. As a Senior Data Scientist, you will be responsible for designing, implementing, and optimizing GenAI solutions, as well as mentoring teams. Your knowledge and experience will be crucial in making architectural decisions, selecting technologies, and implementing best practices in AI-driven development.

Tasks:
- Lead discovery and solution design for GenAI use cases, translating business problems into concrete architectures (LLM selection, RAG, fine-tuning, agents, guardrails).
- Build end-to-end GenAI applications: data ingestion, retrieval layer, orchestration (e.g. LangChain/LlamaIndex/LangGraph), API/backend, and a simple UI where needed.
- Design and implement RAG pipelines with vector databases, hybrid search, rerankers, query transformation, and evaluation frameworks for relevance and robustness.
- Perform model selection, prompting strategies, and fine-tuning (LoRA/QLoRA/SFT) for text, code, and multimodal models, including evaluation and A/B testing.
- Implement safety, compliance, and governance controls (input/output filters, PII handling, audit logs, human-in-the-loop review where required).
- Collaborate with data engineers, product owners, and full-stack developers on scalable architectures, SLAs, and integration with existing enterprise systems.
- Gather technical requirements and estimate planned work.
- Mentor other data scientists/engineers in GenAI patterns, code quality, and best practices; contribute to internal libraries, templates, and reusable components.
- Stay current with the GenAI landscape (new open and hosted models, agentic frameworks, evaluation techniques) and perform targeted PoCs to validate them.

What We're Looking For:
- 6+ years of experience in Data Science/AI engineering.
- At least 4 years of experience in production-ready Python AI-related code development.
- At least 2 years of experience in production-ready LLM-related code development, preferably based on the Retrieval-Augmented Generation (RAG) concept.
- Strong analytical and problem-solving skills with the ability to optimize AI solutions for diverse applications.
- Strong knowledge and experience in Generative AI, including LLMs, chatbots, AI agents, and RAG mechanisms.
- Deep understanding of LLM evaluators, validators, and guardrails.
- Hands-on experience with one or more GenAI frameworks: LangChain, LlamaIndex, LangGraph, or similar orchestration stacks.
- Hands-on experience designing or operating MCP servers/clients for LLM agents.
- Strong Python skills, including production-grade code, packaging, and testing for data/ML services.
- Solid understanding of ML/AI concepts: types of algorithms, machine learning frameworks, model efficiency metrics, model lifecycle, AI architectures.
- Proven ability to collaborate effectively across technical and non-technical teams.
- Familiarity with cloud environments such as Azure (preferred), GCP, or AWS, including AI-related managed services.
- Familiarity with CI/CD, testing, and containerized deployments.
- Excellent communication skills in English, with the ability to convey complex technical concepts to various audiences.
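The retrieval step at the heart of the RAG pipelines this listing describes can be reduced to a tiny sketch: rank documents against a query, then splice the best match into the prompt. This toy version scores bag-of-words vectors with cosine similarity; a production pipeline of the kind the posting describes would use embeddings, a vector database, hybrid search, and rerankers instead. All documents and names below are invented for illustration.

```python
# Minimal, illustrative retrieval step of a RAG pipeline: cosine
# similarity over bag-of-words token counts, top hit goes into the prompt.
import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q = Counter(query.lower().split())
    ranked = sorted(docs, key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return ranked[:k]

docs = [
    "invoices are stored in the billing warehouse",
    "the legal team reviews contracts for compliance",
]
context = retrieve("which team reviews contracts", docs)[0]
prompt = (f"Answer using only this context:\n{context}\n\n"
          f"Q: which team reviews contracts")
# `prompt` would then be sent to an LLM; generation is out of scope here.
```

The evaluation frameworks the listing mentions would measure exactly this step (did the right document land in the context?) before any answer quality is judged.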

Full Time · direct · Full Stack
Salary not disclosed · 1 month ago