Job Description:
1. Role Highlights:
- Participate in building AI workflow orchestration and automation platforms from scratch (0→1) or scaling existing ones (1→N), covering RAG, tool invocation, Agent collaboration, and asynchronous orchestration.
- Implement Diffy or similar "shadow traffic replay + response diff comparison" solutions for model/version regression testing and canary releases.
- Collaborate with multiple business lines (IM, customer service, marketing automation, data processing, etc.) to deliver real-world, production-grade closed-loop solutions.
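The "shadow traffic replay + response diff comparison" idea above can be sketched in a few lines: the same request is sent to the current (primary) and candidate handlers, and their responses are compared field by field, ignoring fields that legitimately differ. A minimal illustration with hypothetical stand-in handlers, not tied to Diffy's actual API:

```python
def shadow_diff(request, primary, candidate, ignore_fields=()):
    """Send the same request to the primary and candidate handlers
    and report field-level differences between their dict responses."""
    primary_resp = primary(request)
    candidate_resp = candidate(request)
    diffs = {}
    for key in set(primary_resp) | set(candidate_resp):
        if key in ignore_fields:
            continue  # e.g. timestamps or latencies that always differ
        if primary_resp.get(key) != candidate_resp.get(key):
            diffs[key] = (primary_resp.get(key), candidate_resp.get(key))
    return diffs

# Toy handlers standing in for the old and new model versions.
v1 = lambda req: {"answer": "42", "latency_ms": 120}
v2 = lambda req: {"answer": "42", "latency_ms": 95}

print(shadow_diff({"q": "example"}, v1, v2, ignore_fields=("latency_ms",)))
# With latency ignored, the responses match: {}
```

In a real deployment the candidate receives only mirrored (shadow) traffic and its responses never reach users, which is what makes this safe for canary-style regression testing.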
2. Key Responsibilities:
- AI/LLM Workflow Orchestration: Design and implement multi-step reasoning, Agent collaboration, tool invocation (Tool-Calling/Function-Calling), asynchronous task queues, and compensation mechanisms.
- RAG Optimization: Build and refine Retrieval-Augmented Generation pipelines, including data ingestion, chunking, vectorization, recall/reranking, context compression, caching, and cost reduction.
- Evaluation & Quality Assurance: Establish automated evaluation and alignment frameworks (benchmark sets, Ragas/G-Eval/custom metrics), integrate A/B testing, and enable real-time monitoring.
- Engineering & Observability: Develop model/prompt versioning, feature/data versioning, experiment tracking (MLflow/W&B), and audit logs.
- Platform Integration: Expose workflows via API/SDK/microservices; integrate with business backends (Go/PHP/Node), queues (Kafka/RabbitMQ), storage (Postgres/Redis/object storage), and vector databases (Milvus/Qdrant/pgvector).
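The recall stage of the RAG pipeline described above can be illustrated with a toy retriever. This sketch uses a bag-of-words "embedding" and cosine similarity purely as a stand-in for a real embedding model and vector database; the chunk texts are invented examples:

```python
import math
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' standing in for a real model."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=2):
    """Recall stage: rank stored chunks by similarity to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

chunks = [
    "Refund requests are processed within 5 business days.",
    "Our office is closed on public holidays.",
    "Refunds require the original receipt.",
]
print(retrieve("how do refunds work", chunks))
```

A production pipeline would replace `embed` with a real model, store vectors in Milvus/Qdrant/pgvector, and follow recall with a reranking and context-compression pass before the generation step.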
Job Requirements:
3. Must-Have Qualifications:
- 3+ years of backend or data/platform engineering experience, with 1–2 years of hands-on LLM/generative AI project experience.
- Proficiency in LLM application engineering: prompt engineering, function/tool calling, dialogue state management, memory, structured output, alignment, and evaluation.
- Familiarity with at least one orchestration framework: LangChain/LangGraph, LlamaIndex, Temporal/Prefect/Airflow, or custom DAG/state machine/compensation solutions.
- End-to-end RAG experience (data cleaning → vectorization → recall → reranking → evaluation); knowledge of Milvus/Qdrant/pgvector.
- Experience with Diffy or equivalent traffic replay/diff comparison tools (e.g., shadow traffic, record/replay, regression output analysis, canary releases).
- Strong engineering skills: Docker, CI/CD, Git workflows, logging/metrics (OpenTelemetry/Prometheus/Grafana).
- Proficiency in at least one core language (Go/Python/TypeScript) and ability to write reliable services/tests.
- Excellent remote collaboration and documentation skills, with a metrics-driven approach.
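The function/tool-calling requirement above boils down to a dispatch loop: the model emits a structured (JSON) tool call, and the orchestrator routes it to a registered function. A minimal sketch; the registry decorator and the `get_order_status` stub are hypothetical, not any framework's actual API:

```python
import json

TOOLS = {}

def tool(fn):
    """Register a function so the dispatcher can invoke it by name."""
    TOOLS[fn.__name__] = fn
    return fn

@tool
def get_order_status(order_id: str) -> str:
    # Stub standing in for a real backend lookup.
    return f"order {order_id}: shipped"

def dispatch(tool_call_json: str) -> str:
    """Parse a model-emitted tool call and invoke the named tool."""
    call = json.loads(tool_call_json)
    fn = TOOLS.get(call["name"])
    if fn is None:
        raise ValueError(f"unknown tool: {call['name']}")
    return fn(**call["arguments"])

print(dispatch('{"name": "get_order_status", "arguments": {"order_id": "A-17"}}'))
# → order A-17: shipped
```

Frameworks such as LangChain/LangGraph wrap this same pattern with schema validation, retries, and state management, but the core contract (name + arguments in, result out) is the same.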
4. Nice-to-Have Skills:
- Deep Diffy experience (or integrating with API gateways for shadow traffic/routing/comparison).
- LLMOps/evaluation platform experience (Arize Phoenix, Evidently, PromptLayer, OpenAI Evals, Ragas).
- Practical Agent framework implementations (LangGraph, AutoGen/CrewAI, GraphRAG, tool ecosystems).
- Security/compliance knowledge (anonymization, PDPA/GDPR, moderation tools like Llama Guard).
- Domain expertise in IM/customer service/marketing automation or multilingual scenarios (Chinese/English/Vietnamese).
- Cost optimization techniques: caching, retrieval compression, model routing/multi-provider switching.
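The caching technique mentioned above can be sketched as an exact-match response cache: identical prompts skip the model call entirely. This is a simplified illustration (real systems typically add TTLs, semantic keys, or per-tenant scoping), and `fake_model` is a stand-in for an expensive API call:

```python
import hashlib

def cached_llm(call_model):
    """Exact-match response cache around a model call."""
    cache = {}
    calls = {"count": 0}  # track how often the underlying model is hit
    def wrapper(prompt):
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key not in cache:
            calls["count"] += 1
            cache[key] = call_model(prompt)
        return cache[key]
    wrapper.calls = calls
    return wrapper

@cached_llm
def fake_model(prompt):
    return prompt.upper()  # stand-in for an expensive provider call

fake_model("hello")
fake_model("hello")
print(fake_model.calls["count"])  # → 1 (second call served from cache)
```

Model routing extends the same idea: a cheap model handles easy requests and only cache misses or hard cases are escalated to a larger, costlier provider.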
5. Tech Stack (Preferred):
- Orchestration: LangChain/LangGraph, LlamaIndex, Temporal/Prefect/Airflow
- Models & Evaluation: OpenAI/Anthropic/Google, vLLM/Ollama, Ragas, G-Eval, MLflow/W&B
- Retrieval: Milvus, Qdrant, pgvector, Elasticsearch, rerankers (BGE/multilingual-E5)
- Infrastructure: Go/Python/TypeScript, gRPC/REST, Redis, Postgres, Kafka/RabbitMQ, Docker/K8s
- Observability: OpenTelemetry, Prometheus, Grafana, ELK/ClickHouse
- Diff Tools: Twitter Diffy or equivalent shadow traffic/replay systems
Benefits:
- Competitive salary and career growth opportunities
- Collaborative team environment
- Fully remote work flexibility
- Cutting-edge projects with real-world impact
- Continuous learning and professional development support


