Remote Job Description

As a Software Engineer on the Hot Storage team, you will help shape the technical vision of how we store and retrieve our data. This includes taking ownership of our indexed data services (e.g., indexing events, storing them in datastores, and making them available for querying). To do this, we build distributed, high-throughput, low-latency systems with a strong focus on availability, resilience, and durability.

The Hot Storage team consistently indexes data into our datastores (hundreds of gigabytes of new data per day) and keeps that data available for queries. The team builds robust, stateful distributed systems and ensures the data is always available and up to date, even when volumes vary widely.

You will:
  • Write code (in Go and Java) for new and existing services to scale out our event platform offering
  • Contribute to the architecture and design of our indexed data services
  • Debug and solve challenging cross-systems issues in production
  • Help improve our engineering tooling and practices

About Datadog:

We're on a mission to build the best platform in the world for engineers to understand and scale their systems, applications, and teams. We operate at high scale—trillions of data points per day—providing always-on alerting, metrics visualization, logs, and application tracing for tens of thousands of companies. Our engineering culture values pragmatism, honesty, and simplicity to solve hard problems the right way.

Requirements:
  • You have been building applications for 2+ years and know the systems you’ve worked on from top to bottom
  • You have backend programming experience
  • You have architected, built, and operated distributed systems to solve problems at high scale
  • You have a BS/MS/PhD in a scientific field or equivalent experience
  • You want to work in a fast-paced, high-growth startup environment that respects its engineers and customers

Bonus points:
  • You've worked at high scale with systems like Akka, Redis, or Kafka
  • You’ve written your own data pipelines before
  • You have a strong background in statistics
  • You have significant experience with Go or a JVM-based language

This is a remote position.