Data Platform Engineer
Build India’s sovereign AI stack for a billion people and shape the future of technology


Job Summary
We’re seeking a skilled Data Platform Engineer to build scalable tools, platforms, and pipelines tailored for processing large-scale, multilingual, multimodal datasets critical for foundational AI models.
In this role, you will build scalable data pipelines to ingest, transform, and prepare data from diverse sources—text, speech, images, and video—making it ready for Generative AI model training. Your work will involve developing and managing the underlying platform while addressing challenges like governance, security, observability, lineage, and scalability. The outcomes of your work will include efficient tools for data processing, a reliable data platform, and high-quality datasets tailored to the evolving needs of large-scale AI and LLM training.
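As a rough illustration of the kind of pipeline work described above, the sketch below ingests raw text, filters and deduplicates it, and writes a training-ready dataset with PySpark. It is only an example: the paths, the 200-character length cutoff, and the assumption that each record carries `text` and `language` fields are hypothetical, not a description of this role's actual stack.

```python
# Minimal illustrative sketch (hypothetical paths and fields), not the team's stack:
# ingest raw JSONL text, normalize it, drop very short and duplicate documents,
# and write a training-ready Parquet dataset partitioned by language.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("text-prep-sketch").getOrCreate()

# Assumes each JSON record has "text" and "language" fields (hypothetical schema).
raw = spark.read.json("s3://example-bucket/raw/text/*.jsonl")

prepared = (
    raw
    .withColumn("text", F.trim(F.col("text")))            # basic whitespace normalization
    .filter(F.length("text") > 200)                        # drop very short documents
    .withColumn("doc_hash", F.sha2(F.col("text"), 256))    # content hash for exact dedup
    .dropDuplicates(["doc_hash"])                          # remove exact duplicates
)

prepared.write.mode("overwrite").partitionBy("language").parquet(
    "s3://example-bucket/prepared/text/"
)
```

In practice, the same shape of job extends to speech, image, and video preprocessing, with GPU-accelerated stages where they pay off.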
Key Responsibilities
- Design and Build Scalable Platforms: Develop distributed infrastructure for ingesting, processing, and transforming diverse datasets (text, speech, images, video) at terabyte to petabyte scale.
- Develop Robust Data Pipelines: Create reliable, scalable pipelines to prepare datasets for Generative AI and LLM training.
- Implement Governance and Observability: Build frameworks for data lineage, monitoring, and access control to ensure data quality and operational reliability.
- Optimize Performance and Cost: Enhance platform performance and resource utilization using cost-effective strategies, including GPU-accelerated preprocessing.
- Collaborate and Innovate: Work closely with researchers and ML engineers to adapt platforms and data pipelines to evolving LLM requirements, addressing various data challenges.
- Drive Innovation: Stay updated on emerging tools, frameworks, and best practices to implement cutting-edge solutions for large-scale dataset creation.
Minimum Qualifications and Experience
- Bachelor’s or Master’s degree in Computer Science, Data Engineering, or related fields with 2+ years of experience in engineering roles, demonstrating strong foundations in software development, systems engineering, or related disciplines.
Required Expertise
- Hands-on experience in developing large-scale, distributed data pipelines and platforms, preferably in high-performance AI or ML environments.
- Expertise in managing unstructured data (text, speech, or multimodal datasets) for high-performance use cases, ideally in the context of LLM/AI datasets.
- Understanding of challenges in scalable data engineering, including ingestion, transformation, and storage optimization for large-scale accelerated workflows.
- Proficiency in distributed systems and frameworks (e.g., Kafka, Ray, PySpark) for scalable data workflows.
- Exposure to end-to-end data lifecycle management, including DataOps.
- Strong programming skills in Python, Scala, or Go, with a focus on high-performance pipeline development.
- Experience with building and optimizing data pipelines, including ETL processes, data modeling, and integration into scalable workflows.
- Expertise in data scraping, crawling frameworks, and modern dataset development techniques such as synthetic data generation.
- Experience with cloud platforms (AWS, GCP, Azure) and container orchestration (Docker, Kubernetes).
- Deep understanding of data platform design, including data architecture, metadata tracking, data lineage, observability, monitoring, and scalability best practices.
- Familiarity with Infrastructure-as-Code tools (e.g., Terraform, CloudFormation), CI/CD pipelines, relational/NoSQL databases, and GPU-accelerated workflows.
- Familiarity with visualization and monitoring tools for lifecycle management and pipeline performance tracking.
