Forward Deployed Engineer – AI Infrastructure
About the job
FriendliAI is seeking a Forward Deployed Engineer (FDE) to help enterprises deploy, scale, and operate generative and agentic AI workloads on FriendliAI infrastructure. You will work directly with customers to design and implement production-grade applications using our products: Serverless Endpoints, Dedicated Endpoints, and Friendli Container.
Friendli Container is our service that lets customers download our inference engine as Docker images and deploy it in the environment of their choice, such as a private cloud or on-premises. It can also be deployed directly to AWS EKS clusters via our EKS add-on.
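To make the deployment model concrete, here is a minimal sketch, not an official FriendliAI example, of the kind of readiness check an FDE might script against a Friendli Container running in a Kubernetes cluster such as EKS. The namespace and label selector are hypothetical placeholders.

```python
# Minimal sketch: verify that pods of a (hypothetical) Friendli Container
# deployment are Ready. Namespace and label selector are illustrative
# placeholders, not official values.
from kubernetes import client, config

NAMESPACE = "friendli"                      # hypothetical namespace
LABEL_SELECTOR = "app=friendli-container"   # hypothetical label

def main() -> None:
    config.load_kube_config()  # uses your current kubeconfig context (e.g., an EKS cluster)
    v1 = client.CoreV1Api()
    pods = v1.list_namespaced_pod(NAMESPACE, label_selector=LABEL_SELECTOR)
    for pod in pods.items:
        statuses = pod.status.container_statuses or []
        ready = bool(statuses) and all(cs.ready for cs in statuses)
        print(f"{pod.metadata.name}: {'Ready' if ready else 'NotReady'}")

if __name__ == "__main__":
    main()
```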
You will work directly on our customers’ projects, collaborating with their engineering teams to solve AI inference challenges such as scaling, orchestration, and monitoring. This is a hands-on, customer-embedded role: if you have worked in DevOps, platform engineering, or SRE for AI applications, it is an ideal fit.
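As one example of the monitoring work described above, the sketch below times a streaming chat-completion request to estimate time to first token, assuming an OpenAI-compatible endpoint. The base URL, model name, and token here are hypothetical placeholders, not FriendliAI-specific values.

```python
# Minimal latency probe: measure time to first streamed chunk from an
# OpenAI-compatible chat endpoint. URL, model, and token are placeholders.
import os
import time
import requests

BASE_URL = os.environ.get("INFERENCE_BASE_URL", "https://example.invalid/v1")  # hypothetical
MODEL = os.environ.get("INFERENCE_MODEL", "example-llm")                       # hypothetical
TOKEN = os.environ.get("INFERENCE_TOKEN", "")

def time_to_first_token(prompt: str) -> float:
    start = time.monotonic()
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        headers={"Authorization": f"Bearer {TOKEN}"},
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "stream": True,  # stream so the first chunk is observable
        },
        stream=True,
        timeout=60,
    )
    resp.raise_for_status()
    for line in resp.iter_lines():
        if line:  # first non-empty SSE line marks the first token chunk
            return time.monotonic() - start
    raise RuntimeError("stream ended without any data")

if __name__ == "__main__":
    print(f"TTFT: {time_to_first_token('Hello!') * 1000:.1f} ms")
```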
Key Responsibilities
Design and implement large-scale deployment architectures for LLM and multimodal inference
Deploy and manage containerized workloads across Kubernetes clusters
Diagnose production issues, such as performance bottlenecks, and implement mitigations and fixes as needed
Collaborate with customers’ DevOps teams to integrate FriendliAI’s infrastructure into their CI/CD workflows
Develop scripts, Helm charts, and Terraform modules that streamline repeated deployments (see the sketch after this list)
Contribute field insights to shape our platform reliability, observability, and scaling strategies
Lead workshops, technical sessions, or webinars to help customers master infrastructure best practices
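Below is a minimal sketch of the kind of deployment helper mentioned above: a Python wrapper that drives `helm upgrade --install` with per-environment values files. The release name, chart path, and file layout are hypothetical placeholders.

```python
# Minimal deployment-helper sketch: apply a Helm release for a given
# environment. Release, chart path, and values files are hypothetical.
import argparse
import subprocess

def deploy(env: str) -> None:
    cmd = [
        "helm", "upgrade", "--install",
        "friendli-inference",            # hypothetical release name
        "./charts/inference",            # hypothetical chart path
        "-f", f"values/{env}.yaml",      # per-environment overrides
        "--namespace", "friendli",
        "--create-namespace",
        "--wait",                        # block until resources are Ready
    ]
    subprocess.run(cmd, check=True)  # raises CalledProcessError on failure

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Deploy the inference Helm release")
    parser.add_argument("env", choices=["dev", "staging", "prod"])
    deploy(parser.parse_args().env)
```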
Qualifications
3+ years of experience in cloud infrastructure, DevOps, or reliability engineering
Bachelor’s or Master’s degree in Computer Science, Computer Engineering, Electrical Engineering, or equivalent
Proficiency with Kubernetes, Docker, Terraform, and Helm
Strong foundation in distributed systems, networking, and performance tuning
Experience with GPU-based computing and generative AI model serving workloads
Strong technical background in backend systems or AI tooling
Experience operating workloads on AWS, GCP, or OCI
Excellent problem-solving and debugging skills in real-world environments
Preferred Experience
Experience deploying large models (LLMs, diffusion models) on GPUs or clusters
Familiarity with inference frameworks (Triton, vLLM, TensorRT, DeepSpeed-Inference)
Familiarity with observability stacks (Prometheus, Grafana, Loki, ELK, OTEL)
Understanding of networking security and compliance frameworks (e.g., SOC 2)
Experience supporting on-prem or hybrid-cloud deployments
Benefits
A front-row seat to the generative AI infrastructure revolution
Competitive compensation and benefits package
Daily lunch and dinner provided; unlimited snacks and beverages
Health check-up and top-tier hardware support
Flexible working hours and a highly collaborative environment
About us
FriendliAI is building the next-generation AI inference platform that accelerates the deployment of large language and multimodal models with unmatched performance and efficiency. Our infrastructure powers high-throughput, low-latency workloads for global organizations and integrates directly with Hugging Face, providing instant access to over 510,000 open-source models. We are on a mission to deliver the world’s best platform for AI inference.