Senior Software Engineer, AI Inference
Company: Red Hat
Location: Boston
Posted on: April 1, 2026
Job Description:

Job Summary

At Red Hat, we believe the future of AI is open, and we are on a mission to bring the power of open-source LLMs and vLLM to every enterprise. The Red Hat AI Inference team accelerates AI for the enterprise and brings operational simplicity to GenAI deployments. As leading developers and maintainers of the vLLM project, and inventors of state-of-the-art techniques for model compression, our team provides a stable platform for enterprises to build, optimize, and scale LLM deployments.

We are seeking an experienced Senior Software Engineer to build and release the Red Hat AI Inference Server. You will own the full lifecycle, from compiling vLLM wheels across multiple hardware backends and architectures, to packaging enterprise-grade container images, managing multi-cloud infrastructure, and validating LLM accuracy and performance across a growing matrix of models and hardware. You will build and ship a product that runs on some of the most powerful AI hardware in production today, working across the full stack, from C++/CUDA kernel compilation to Kubernetes-orchestrated model serving on OpenShift. If you want to work at the intersection of systems engineering, release engineering, and AI infrastructure on one of the most popular open-source projects on GitHub, this is the role for you. Join us in shaping the future of AI!

What you will do

- Build and release vLLM wheels across multiple hardware backends and CPU architectures, managing complex native dependency chains including PyTorch, Triton, and other accelerator-specific libraries
- Design and maintain CI/CD pipelines spanning multiple platforms, including GitHub Actions, GitLab CI, and Buildkite, for build, test, and release workflows
- Manage and scale multi-cloud GPU infrastructure using Terraform and Ansible, including both bare-metal and Kubernetes-based compute runners
- Own the model validation pipeline, orchestrating accuracy evaluation, performance benchmarking, tool-calling validation, and smoke testing across dozens of LLMs on both bare metal and OpenShift
- Develop and maintain the Python tooling and automation that powers the build, packaging, validation, and release processes
- Drive adoption of agentic AI and intelligent automation to streamline engineering workflows, accelerate debugging, and reduce toil across the team

What you will bring

- 5 years of software engineering experience with significant depth in build systems, release engineering, or infrastructure
- Strong Python development skills with experience building well-tested, maintainable tooling and automation
- Hands-on experience building and packaging Python projects with native compiled extensions, including familiarity with C++ and CUDA build toolchains, wheel packaging, and multi-architecture builds
- Deep familiarity with container ecosystems, including Dockerfiles and Containerfiles, image registries, and container build pipelines
- Understanding of LLM evaluation methodology, including accuracy benchmarks such as MMLU, GSM8K, and HellaSwag, as well as inference performance metrics like throughput and latency
- Experience with CI/CD platforms such as GitHub Actions, GitLab CI, Tekton, or Buildkite
- Solid understanding of release engineering practices, including reproducible builds, artifact management, dependency pinning, and security scanning
- Experience with infrastructure-as-code tools such as Terraform and Ansible, and managing cloud resources at scale
- Working knowledge of Kubernetes and/or OpenShift for deploying and testing workloads
- Enthusiasm for applying LLM-based agents and AI-assisted tools to automate engineering workflows, with a track record of identifying repetitive processes and replacing them with intelligent automation
- Excellent communication skills, capable of interacting effectively with both technical and non-technical team members
- A Bachelor's or Master's degree in computer science, computer engineering, or a related field; a Ph.D. in an ML-related domain is a significant advantage

The following is considered a plus:

- Contributions to upstream open-source projects, particularly vLLM, PyTorch, or other AI/ML infrastructure
- Experience with GPU-accelerated workloads and building software for heterogeneous hardware
- Familiarity with LLM inference serving, model optimization, quantization techniques, or evaluation frameworks
- Proficiency in C

The salary range for this position is $133,650.00 - $220,680.00. The actual offer will be based on your qualifications.

Pay Transparency

Red Hat determines compensation based on several factors, including but not limited to job location, experience, applicable skills and training, external market value, and internal pay equity. Annual salary is one component of Red Hat’s compensation package. This position may also be eligible for bonus, commission, and/or equity. For positions with Remote-US locations, the actual salary range for the position may differ based on location but will be commensurate with job duties and relevant work experience.

About Red Hat

Red Hat
is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40 countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact.

Benefits

- Comprehensive medical, dental, and vision coverage
- Flexible Spending Account - healthcare and dependent care
- Health Savings Account - high deductible medical plan
- Retirement 401(k) with employer match
- Paid time off and holidays
- Paid parental leave plans for all new parents
- Leave benefits including disability, paid family medical leave, and paid military leave
- Additional benefits including employee stock purchase plan, family planning reimbursement, tuition reimbursement, transportation expense account, employee assistance program, and more!

Note: These benefits are only applicable to full-time, permanent associates at Red Hat located in the United States.
Inclusion at Red Hat

Red Hat’s culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village.

Equal Opportunity Policy (EEO)

Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law.

Red Hat does not seek or accept unsolicited resumes or CVs from recruitment agencies. We are not responsible for, and will not pay, any fees, commissions, or any other payment related to unsolicited resumes or CVs, except as required in a written contract between Red Hat and the recruitment agency or party requesting payment of a fee.

Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com. General inquiries, such as those regarding the status of a job application, will not receive a reply.