MLOps Engineer

Cognism

Croatia
Posted on Jul 17, 2025

WHO WE ARE

Cognism is the leading provider of European B2B data and sales intelligence. Ambitious businesses of every size use our platform to discover, connect, and engage with qualified decision-makers faster and close more deals. Headquartered in London with global offices, Cognism’s contact data and contextual signals are trusted by thousands of revenue teams to eliminate the guesswork from prospecting.

OUR WORK MODEL

Hybrid: This is a hybrid role, requiring you to work from our office in Croatia 2–3 days per week, with flexibility to work remotely on the other days.

YOUR ROLE

Cognism is actively seeking an outstanding MLOps Engineer to join our growing Data team. This is primarily a hands-on engineering and MLOps position, reporting directly to the Engineering Manager in the Data team. The MLOps function at Cognism is entrusted with optimizing and improving the quality of ML services and products: advising on and enforcing best practices within the Data Science team, and providing the tooling and platforms that ultimately result in more reliable, maintainable, scalable, and faster machine learning workflows. The successful candidate will be at the forefront of our MLOps initiatives, especially during the implementation of our machine learning platform and best practices.

As an MLOps Engineer, you will play a critical role in bridging the gap between machine learning development and robust production systems. You will collaborate closely with Data Scientists, ML Engineers, and DevOps teams to streamline the deployment, monitoring, and scaling of ML models. Your focus will be on building reliable, automated pipelines and infrastructure that ensure models are delivered to production efficiently, securely, and at scale.

Key Responsibilities
  • Build and manage automation pipelines to operationalize the ML platform, model training, and model deployment
  • Design and implement secure, reliable, scalable, and maintainable architectures, services, and pipelines on the AWS cloud
  • Contribute to MLOps best practices within the Data Science team
  • Act as a bridge between AI, Engineering, and DevSecOps for ML deployment, monitoring, and maintenance
  • Communicate and work closely with a team of Data Scientists to provide tooling and integrate AI/ML models into larger systems and applications
  • Monitor and maintain production-critical ML services and workloads
Your Experience

Required
  • Strong understanding of cloud architecture and service fundamentals (AWS preferred; GCP or MS Azure also considered)
  • Good understanding of modern MLOps best practices
  • Good understanding of Machine Learning fundamentals
  • Good understanding of Data Engineering fundamentals
  • Experience with Infrastructure as Code (IaC) tools like Terraform, CDK or similar
  • Experience with CI/CD pipelines (GitHub Actions, Circle CI or similar)
  • Basic understanding of networking and security practices in the cloud
  • Experience with containerization (Docker, AWS ECS, Kubernetes, or similar)
  • Proficiency reading and writing Python code
  • Experience deploying and monitoring machine learning models in production on the cloud
  • Fluency in English, good communication skills, and the ability to work in a team
  • Enthusiasm for learning and exploring modern MLOps solutions
Ideal
  • 3+ years in an MLOps, Machine Learning Engineer, or DevOps role
  • Ability to design and implement cloud solutions and build MLOps pipelines (AWS, MS Azure, or GCP) following best practices
  • Good understanding of software development principles and DevOps methodologies
  • Experience and understanding of MLOps concepts:
    • Experiment Tracking
    • Model Registry & Versioning
    • Model & Data Drift Monitoring
  • Experience working with GPU-based computational frameworks and architectures in the cloud (AWS, GCP, etc.)
  • Knowledge of MLOps and DevOps tools:
    • MLflow
    • Kubeflow, Metaflow, Airflow or similar
    • Visualisation tools – Grafana, QuickSight or similar
    • Monitoring tools – Coralogix or GrafanaCloud or similar
    • ELK stack (Elasticsearch, Logstash, Kibana)
  • Experience working in big data domains (10M+ scales)
  • Experience with streaming and batch-processing frameworks
Nice to have
  • Experience with MLOps Platforms (SageMaker, VertexAI, Databricks, or other)
  • Experience reading and writing in Scala
  • Knowledge of frameworks such as scikit-learn, Keras, PyTorch, TensorFlow, etc.
  • Experience with SQL and NoSQL databases and data lakehouses

WHY COGNISM

At Cognism, we’re not just building a company - we’re building an inclusive community of brilliant, diverse people who support, challenge, and inspire each other every day. If you’re looking for a place where your work truly makes an impact, you’re in the right spot!

Our values aren’t just words on a page—they guide how we work, how we treat each other, and how we grow together. They shape our culture, drive our success, and ensure that everyone feels valued, heard, and empowered to do their best work.

Here’s what we stand for:

We Are Nice! We treat each other with respect and kindness (because life’s too short for anything else).
🤝 We Are Collaborative. We’re in this together—great things happen when we work as one.
💡 We Are Solution-Focused. Every challenge is just an opportunity in disguise.
💙 We Are Understanding. We empower and support each other to do our best work.
🏆 We Celebrate Individual Contributors. Every role matters, and so do you!

At Cognism, we are committed to fostering an inclusive, diverse, and supportive workplace. Our values—Being Nice, Collaborative, Solution-Focused, and Understanding—guide everything we do, and we celebrate Individual Contributors. We welcome applications from individuals typically underrepresented in tech, so if this role excites you but you’re unsure if you meet every requirement, we encourage you to apply!