Sarvam DevTools Engineer – Harness Layer

Sarvam

CTC: 50L - 70L
Experience: 3 - 6 yrs
Location: BLR

Sarvam.ai is building foundational AI for India from India!

  • Building India’s sovereign Large Language Models (LLMs) and AI infrastructure.
  • Focused on multilingual AI systems for Indian languages and enterprise use cases.
  • Empowering developers and enterprises with localized, culturally aligned AI.

You'll be a good fit if you have

  • Expertise in building SDKs, CLIs, or IDE extensions that improve developer productivity and adoption.
  • Experience creating debugging and observability tools to capture invisible failures and make complex agent systems traceable.
  • Proven track record of designing developer-first frameworks that optimize workflows with context-aware suggestions and evaluation feedback.
  • Strong empathy for developers and the ability to transform agent chaos into structured, reliable, and delightful development experiences.
  • Experience implementing time-travel debugging or similar capabilities that allow developers to replay and inspect complex runs.
  • Familiarity with evaluation frameworks for performance metrics and regression testing.
  • Ability to design observability and telemetry pipelines that provide deep insights into failures.
  • Experience with developer-first SaaS, open-source devtools, or observability tooling.
  • Strong background in capturing, visualizing, and debugging failures in complex systems.
  • A proven ability to optimize authoring workflows and reduce friction through ergonomic tooling.

Key Responsibilities

  • Design and Build Developer Tools including SDKs, CLIs, and IDE extensions to improve productivity and adoption.
  • Develop Debugging and Observability Features that make complex agent systems traceable and failures visible.
  • Create Developer-First Frameworks with context-aware suggestions, evaluation feedback, and workflow optimizations.
  • Implement Advanced Debugging Capabilities such as time-travel debugging and replay mechanisms for complex runs.
  • Design and Maintain Evaluation Frameworks to measure performance, detect regressions, and ensure reliability.
  • Build Telemetry and Observability Pipelines that provide deep insights into failures and runtime behavior.
  • Collaborate with Developers to deeply understand pain points and transform agent workflows into reliable, structured, and enjoyable experiences.
  • Contribute to Open-Source and SaaS Devtools by shipping ergonomic, developer-first features that reduce friction.
  • Visualize and Debug Complex Failures through intuitive interfaces and tooling.
  • Continuously Improve Authoring Workflows to streamline developer experience and accelerate agent development.

Team Round1's take

  • Selected by the Government of India under the IndiaAI Mission to develop sovereign large language models.
  • Committed to open-sourcing its AI models for wider public and enterprise use.
  • Combining deep AI research with large-scale Indian language datasets for culturally relevant intelligence.