We are looking for a talented and ambitious Rust Backend Engineer to help design and build the core infrastructure for our cloud-based AI deployment platform. In this role, you will develop the scalable, high-performance backend services that power the deployment, orchestration, and serving of AI models at scale, enabling cutting-edge AI applications to move seamlessly from development to production.
Key Responsibilities:
Design, build, and maintain cloud-native backend services in Rust that support AI model deployment, inference, lifecycle management, and monitoring.
Develop robust APIs and orchestration components to manage deployment workflows.
Architect low-latency, high-throughput inference services optimized for real-time AI applications running in the cloud.
Integrate backend systems with model registries, data processing pipelines, and monitoring tools to provide a seamless end-to-end AI lifecycle.
Collaborate closely with AI researchers, DevOps, and hardware teams to co-design scalable deployment pipelines tailored for cloud environments.
Implement observability and telemetry systems to monitor model performance, availability, and cost efficiency.
Work with both SQL and NoSQL databases to manage model metadata, logs, and large-scale inference results.
Deploy, scale, and operate workloads on Kubernetes, ensuring high reliability, elasticity, and performance across cloud environments.
Continuously evaluate and adopt new technologies in Rust, backend systems, and AI infrastructure to keep the platform state-of-the-art.
Qualifications:
BSc or MSc in Computer Science, Software Engineering, or a related field.
5+ years of proven experience in backend development, ideally building platforms, distributed systems, or AI/ML infrastructure.
Strong proficiency in Rust, with experience building production-grade services.
Experience developing backend services in Python is a strong plus.
Solid understanding of API design, microservices architecture, distributed systems, and event-driven backends.
Hands-on experience with databases (SQL and NoSQL) and scalable data storage solutions.
Familiarity with cloud platforms (AWS, GCP, or Azure) and infrastructure-as-code practices.
Experience with Kubernetes and container orchestration in production cloud environments is a strong plus.
Experience building CI/CD pipelines for ML models or integrating with MLOps platforms (e.g., MLflow, Seldon, Kubeflow) is a plus.
Knowledge of AI model deployment workflows and integrating inference systems with backend services is a plus.
Experience in a hardware/software co-design environment is a plus.
Fluent in English; additional European languages (German, Dutch, Spanish, French, or Italian) are a plus.
Soft Skills:
Self-driven and capable of working independently as well as within a highly collaborative team.
Excellent system design and problem-solving skills, with a focus on reliability, scalability, and maintainability.
Strong communication and collaboration skills across multidisciplinary teams.
Eager to mentor junior engineers and contribute to a culture of technical excellence and learning.
What We Offer:
The opportunity to build a cloud AI deployment platform that will power next-generation AI systems.
A collaborative, innovation-driven environment with significant autonomy and ownership.
Hybrid work model with flexible scheduling.
A chance to join one of Europe’s most ambitious companies at the intersection of AI and silicon engineering.
Position based in Ghent, Belgium, with occasional travel to Barcelona, Spain.
We’re looking for exceptional engineers ready to shape the future of AI infrastructure. If building scalable, cloud-native AI deployment platforms excites you, we’d love to meet you.
At Openchip & Software Technologies S.L., we believe a diverse and inclusive team is the key to groundbreaking ideas. We foster a work environment where everyone feels valued, respected, and empowered to reach their full potential—regardless of race, gender, ethnicity, sexual orientation, or gender identity.