Middle Machine Learning Engineer (Backend) – GenAI Project
Varex AI

About Varex
At Varex, we’re pioneering enterprise communication through our state-of-the-art Generative AI Conversational Platform. By fusing advanced LLM technology with scalable backend architecture, we empower businesses to deliver smarter, faster, and more personalized interactions. We’re expanding our GenAI team and seeking a Middle Machine Learning Engineer (Backend) with deep LLM expertise to drive fine-tuning, prompt optimization, and novel algorithm development.
Responsibilities
LLM Fine-Tuning & Customization: Design and implement end-to-end fine-tuning workflows for large language models (e.g., GPT, LLaMA); a minimal sketch follows this list.
Prompt Engineering & Optimization: Develop, benchmark, and iterate on system prompts.
Innovative Algorithm Development: Research and prototype novel algorithms or architectures (e.g., retrieval-augmented generation, chained reasoning) to push the boundaries of GenAI capabilities.
MLOps & Lifecycle Management: Implement reproducible training, evaluation, and deployment pipelines.
Knowledge Sharing & Research: Stay current with LLM research.
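For illustration, a minimal sketch of the kind of fine-tuning workflow this role involves, using Hugging Face Transformers and Datasets; the model name, data file, and hyperparameters are placeholders, not project specifics.

```python
# Minimal causal-LM fine-tuning sketch (illustrative only; the model,
# data file, and hyperparameters below are placeholders).
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Placeholder instruction dataset with a "text" column.
dataset = load_dataset("json", data_files="train.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="out",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=2e-5,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("out/final")
```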
Requirements
Core Skills:
Exceptional problem-solving abilities and strong analytical skills.
Ability to explain your thoughts and complex ideas clearly.
We ask that you apply only if you are confident in your ability to demonstrate these core skills.
Core Competencies:
1–4 years of ML-focused backend engineering or similar experience with demonstrable work on LLM projects.
Proficient in Python, with a strong command of asynchronous programming, multithreading, and software design best practices.
Understanding of transformer architectures, fine-tuning techniques, and evaluation metrics.
Experience implementing and optimizing prompt engineering strategies.
Proven track record of developing and deploying RESTful or gRPC services (e.g., FastAPI, Flask); a minimal service sketch follows this list.
Familiarity with ML frameworks (e.g., PyTorch, TensorFlow, JAX) and LLM-specific toolkits (e.g., Hugging Face Transformers).
Comfortable working with relational (PostgreSQL) and NoSQL (Redis, MongoDB) databases, as well as vector databases for embedding-based retrieval.
Version control (Git) fluency and experience building CI/CD pipelines for both application code and ML workflows.
Strong foundation in the mathematics essential for machine learning.
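For context, a minimal sketch of the kind of async inference service implied above, built with FastAPI and Pydantic; the endpoint path and the generate_reply helper are hypothetical placeholders, not part of Varex's actual stack.

```python
# Minimal async LLM-serving sketch with FastAPI (illustrative only).
# `generate_reply` is a hypothetical stand-in for a real model client.
import asyncio

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="chat-backend-sketch")

class ChatRequest(BaseModel):
    system_prompt: str
    user_message: str

class ChatResponse(BaseModel):
    reply: str

async def generate_reply(system_prompt: str, user_message: str) -> str:
    # Placeholder for a non-blocking call to an LLM inference backend.
    await asyncio.sleep(0)
    return f"[echo] {user_message}"

@app.post("/v1/chat", response_model=ChatResponse)
async def chat(req: ChatRequest) -> ChatResponse:
    reply = await generate_reply(req.system_prompt, req.user_message)
    return ChatResponse(reply=reply)

# Run locally with, e.g.: uvicorn app:app --reload
```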
Nice to Have:
Familiarity with RAG architectures, vector search engines (Pinecone, Milvus), LangChain, or LlamaIndex; a minimal retrieval sketch follows this list.
Experience with AWS, GCP, or Azure, particularly services like SageMaker, Vertex AI, or Azure ML.
Experience with containerization (Docker) and orchestration (Kubernetes, Helm).
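For illustration, a minimal in-memory sketch of the embedding-based retrieval step behind a RAG flow; it uses sentence-transformers and NumPy in place of a managed vector search engine such as Pinecone or Milvus, and the documents and model name are placeholders.

```python
# Minimal embedding-based retrieval sketch for RAG (illustrative only).
# A production system would use a vector database (e.g., Pinecone, Milvus)
# instead of this in-memory NumPy index.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder embedding model

documents = [
    "Resetting a password requires a verified email address.",
    "Enterprise plans include a dedicated support channel.",
    "API rate limits are configurable per workspace.",
]

# Precompute normalized document embeddings (cosine similarity via dot product).
doc_embeddings = encoder.encode(documents, normalize_embeddings=True)

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Return the top_k documents most similar to the query."""
    query_embedding = encoder.encode([query], normalize_embeddings=True)[0]
    scores = doc_embeddings @ query_embedding
    best = np.argsort(scores)[::-1][:top_k]
    return [documents[i] for i in best]

# The retrieved snippets would then be inserted into the LLM prompt as context.
print(retrieve("How do I reset my password?"))
```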
What’s in it for You?
Deep LLM Engagement: From research and fine-tuning to deployment and monitoring, your work will impact millions of end-users.
Innovative Environment: Join a dynamic startup culture that rewards creativity, rapid experimentation, and thought leadership in generative AI.
Ready to reshape the future of AI-driven communication?