
Designs and operates data pipelines, datasets, and evaluation workflows for AI systems. Collaborates with AI/ML engineers, product engineers, and analytics stakeholders to keep those systems reliable, reproducible, and measurable. Develops curated datasets and feature tables, and establishes validation checks, lineage, and clear ownership for them. Supports knowledge ingestion for RAG, including document processing, chunking, metadata enrichment, indexing/backfills, and freshness monitoring. Implements and operates evaluation data workflows, including golden sets, labeling support, drift checks, regression reporting, and dataset versioning. Improves pipeline performance and cost efficiency through incremental processing, partitioning, and resource tuning.
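For illustration, a validation check on a curated table can be as small as a function that asserts the table's contract. The sketch below is pandas-based; the column names (user_id, updated_at) and the one-day freshness window are hypothetical placeholders, not a prescribed standard:

```python
import pandas as pd

def validate_feature_table(df: pd.DataFrame) -> list[str]:
    """Return human-readable validation failures for a curated feature table.

    Column names and the one-day freshness window are placeholders; real
    checks would follow the dataset's ownership contract.
    """
    failures = []

    # Primary-key uniqueness: each row should describe exactly one entity.
    if df["user_id"].duplicated().any():
        failures.append("duplicate user_id values found")

    # Completeness: required columns must not contain nulls.
    for col in ("user_id", "updated_at"):
        if df[col].isna().any():
            failures.append(f"null values in required column {col!r}")

    # Freshness: the newest row should be recent enough to trust downstream.
    age = pd.Timestamp.now(tz="UTC") - pd.to_datetime(df["updated_at"], utc=True).max()
    if age > pd.Timedelta(days=1):
        failures.append(f"table is stale: newest row is {age} old")

    return failures
```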
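A minimal sketch of the chunking and metadata-enrichment step in RAG ingestion, assuming simple character-based splitting; the metadata field names (source_uri, char_range, content_sha1) are illustrative, and the content hash is one possible way to let index backfills skip unchanged chunks:

```python
from dataclasses import dataclass, field
import hashlib

@dataclass
class Chunk:
    text: str
    metadata: dict = field(default_factory=dict)

def chunk_document(text: str, source_uri: str,
                   max_chars: int = 800, overlap: int = 100) -> list[Chunk]:
    """Split a document into overlapping chunks and attach retrieval metadata."""
    chunks = []
    step = max_chars - overlap
    for start in range(0, len(text), step):
        piece = text[start:start + max_chars]
        if not piece.strip():
            continue
        chunks.append(Chunk(
            text=piece,
            metadata={
                "source_uri": source_uri,                   # provenance for lineage
                "char_range": (start, start + len(piece)),  # position in the source
                # A content hash lets backfills skip chunks that have not changed.
                "content_sha1": hashlib.sha1(piece.encode("utf-8")).hexdigest(),
            },
        ))
    return chunks
```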
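Likewise, a golden-set regression check reduces to scoring current predictions against stored expectations and comparing with a baseline; exact-match scoring and the tolerance value here are stand-ins for whatever metric and threshold the team actually tracks:

```python
def regression_report(golden: dict[str, str], predictions: dict[str, str],
                      baseline_accuracy: float, tolerance: float = 0.02) -> dict:
    """Score predictions against a golden set and flag regressions vs. a baseline."""
    # Exact-match scoring is a placeholder for the team's real evaluation metric.
    scored = [predictions.get(ex_id) == expected for ex_id, expected in golden.items()]
    accuracy = sum(scored) / len(scored)
    return {
        "accuracy": accuracy,
        "baseline_accuracy": baseline_accuracy,
        # Only flag a regression when the drop exceeds the agreed tolerance.
        "regressed": accuracy < baseline_accuracy - tolerance,
    }

# Example: 2 of 3 golden answers match against a 0.9 baseline -> regression flagged.
report = regression_report(
    golden={"q1": "Paris", "q2": "4", "q3": "blue"},
    predictions={"q1": "Paris", "q2": "4", "q3": "green"},
    baseline_accuracy=0.9,
)
print(report["accuracy"], report["regressed"])
```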
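Finally, incremental processing typically comes down to tracking a watermark and reading only the partitions added since; the date-keyed daily partition scheme below is an assumption for illustration:

```python
from datetime import date, timedelta

def partitions_to_process(last_processed: date, today: date) -> list[str]:
    """List daily partition keys newer than the last processed watermark."""
    days_behind = (today - last_processed).days
    return [
        (last_processed + timedelta(days=offset)).isoformat()
        for offset in range(1, days_behind + 1)
    ]

# Example: after a two-day gap, only the two missing partitions are read,
# instead of reprocessing the full table.
print(partitions_to_process(date(2024, 5, 1), date(2024, 5, 3)))
# -> ['2024-05-02', '2024-05-03']
```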