Measuring developer experience – Focus on what’s important

Dave North | Last updated on August 30, 2024 | 6 minute read

At Rewind, we develop and operate our service using a DevOps methodology and culture – the team writing the service is the same team responsible for operating the service. We try to empower our team with a “you built it, you run it” mentality. However, we do have a CloudOps team whose mission is:

Provide foundations that underpin Rewind’s stability, enable teams to ship with confidence, faster and more efficiently than everyone else.

Essentially, the CloudOps team focuses on infrastructure (including scaling), overall CI/CD strategy with baseline implementations, and tooling to enable our developers to ship as fast as possible. 

Alas, the CloudOps team is small (but mighty). How do we know where we need to focus our very limited time and people? What tooling are we missing? What needs improvement?

As Danny Thomas points out in this outstanding Developer Productivity Engineering interview, it’s not obvious as a platform team where you need to spend your time:

  • Developers will generally put up with a lot of ‘pain’ before raising a hand to say something needs improving. A task or process that takes a long time is just “the way it is”.
  • Developers are resourceful and will come up with workarounds to blockers, but those workarounds may not be efficient. A purpose-built tool could often solve the problem better.
  • It’s notoriously hard to measure “developer productivity” in a way that points at specific items to improve. Things like DORA metrics are a good start, but they are very high level.

Danny talks about instrumenting everything so that you can objectively find inefficient processes. We’ve done this in some areas at Rewind: we measure metrics around deployments, tracking the time from PR merge to production and the failure rate of deployments. A small custom tool obtains these metrics for us from GitHub Actions (our CI/CD tooling). But what about those areas that are harder to instrument?
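Before getting to those, here’s roughly what the easy-to-instrument side can look like. The sketch below is not our actual tool; the OWNER/REPO values are placeholders and it assumes a production deploy runs from a workflow file named deploy.yml. It pulls recent runs of that workflow from the GitHub REST API, computes a deployment failure rate, and derives a merge-to-production lead time by matching each successful deploy’s commit back to its merged pull request.

  import os
  from datetime import datetime

  import requests

  OWNER, REPO = "example-org", "example-service"  # placeholders, not real repos
  API = f"https://api.github.com/repos/{OWNER}/{REPO}"
  HEADERS = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}

  def iso(ts: str) -> datetime:
      # GitHub timestamps look like 2024-08-30T12:34:56Z
      return datetime.fromisoformat(ts.replace("Z", "+00:00"))

  # Recent runs of the (assumed) production deploy workflow
  resp = requests.get(f"{API}/actions/workflows/deploy.yml/runs",
                      headers=HEADERS, params={"per_page": 50}, timeout=30)
  runs = resp.json()["workflow_runs"]
  finished = [r for r in runs if r["conclusion"] in ("success", "failure")]

  failure_rate = sum(r["conclusion"] == "failure" for r in finished) / max(len(finished), 1)

  # Lead time: earliest PR merge behind the deployed commit -> deploy finished
  lead_times = []
  for run in finished:
      if run["conclusion"] != "success":
          continue
      prs = requests.get(f"{API}/commits/{run['head_sha']}/pulls",
                         headers=HEADERS, timeout=30).json()
      merged = [iso(p["merged_at"]) for p in prs if p.get("merged_at")]
      if merged:
          lead_times.append((iso(run["updated_at"]) - min(merged)).total_seconds())

  print(f"deploy failure rate: {failure_rate:.0%}")
  if lead_times:
      median = sorted(lead_times)[len(lead_times) // 2]
      print(f"median merge-to-production: {median / 60:.1f} minutes")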

Net promoter score

Net Promoter Scores (NPS) have been used to measure customer loyalty for decades. We’ve probably all answered this question in a vendor survey at some point:

On a scale of 0-10, how likely is it that you would recommend [company name] to your friends, family, or business associates?

The answer to this simple question is all that’s needed to calculate an NPS score:

Respondents who give you a 6 or below are called Detractors, those who give a score of 7 or 8 are called Passives, and those who give a 9 or 10 are Promoters. Your Net Promoter Score is then the percentage of Promoters minus the percentage of Detractors.
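That arithmetic is simple enough to show in a few lines (the scores below are made up for illustration):

  def nps(scores: list[int]) -> float:
      """Percentage of promoters (9-10) minus percentage of detractors (0-6)."""
      promoters = sum(s >= 9 for s in scores)
      detractors = sum(s <= 6 for s in scores)
      return 100.0 * (promoters - detractors) / len(scores)

  print(nps([9, 10, 7, 8, 9, 6, 10, 9, 5, 8, 9, 10]))  # 7 promoters, 2 detractors of 12 -> ~41.7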

What is a good NPS score overall? The creators of the NPS metric, Bain & Company, say that an NPS above 0 is good, above 20 is great, above 50 is amazing, and anywhere above 80 puts you in the top percentile.

NPS can give you a good macro-level view of how you are doing with respect to customer satisfaction. Combined with a way for customers to provide specific feedback, it can be a good measure of how you are delivering from the customer’s viewpoint.

NPS and measuring developer experience

Now that we know what NPS is, how does it help us measure developer experience and prioritize which areas need investment? There are many articles written about crafting surveys to measure Developer Experience (DX), which I boiled down to the following key points:

  • Limit the number of questions you ask. Companies are always sending out surveys asking for feedback and people can get survey fatigue.
  • Provide a way for people to give open-ended feedback. The 0-10 scale is all that is needed to compute an NPS score, but the specific feedback is the gold you are looking for.
  • Make sure you’re always asking the same questions each iteration of the survey. You can’t measure an improvement if the questions keep changing.
  • Use tooling that makes it easy for people to complete the survey. At Rewind, we already use OfficeVibe for employee engagement and this tool makes it easy to tag on a periodic custom survey. But even a simple Google form will work. Make it easy for your target audience to fill it in and provide the information you need.

Rewind’s NPS survey

After much research and review among our team, we ended up with a four-question survey which we send out every quarter to the engineering team. We’ve only run three iterations of it so far, but the feedback has been incredibly helpful in giving us areas to focus on with quarterly OKRs (Objectives and Key Results). The overall result wasn’t a big surprise to us, but some of the specific feedback has been fantastic in allowing us to zero in on targeted improvements that have a big impact.

For example, we received feedback that if we switched the logging for a particular area of our product to JSON, it would make querying the logs significantly easier in the log tooling we use. So the CloudOps team spent some time changing the logger across many of our services to JSON, which makes developers’ lives easier. This wasn’t even on our radar until the survey surfaced it.
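For anyone curious what that kind of change looks like, here is a minimal, generic sketch using only Python’s standard library (Rewind’s actual services and logger are different). The point is simply that each log line becomes a JSON object with queryable fields instead of free text:

  import json
  import logging

  class JsonFormatter(logging.Formatter):
      def format(self, record: logging.LogRecord) -> str:
          # One JSON object per line -> log tooling can filter on fields
          return json.dumps({
              "timestamp": self.formatTime(record),
              "level": record.levelname,
              "logger": record.name,
              "message": record.getMessage(),
          })

  handler = logging.StreamHandler()
  handler.setFormatter(JsonFormatter())
  logging.basicConfig(level=logging.INFO, handlers=[handler])

  logging.getLogger("billing").info("invoice generated")  # "billing" is a made-up service name
  # {"timestamp": "...", "level": "INFO", "logger": "billing", "message": "invoice generated"}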

So what does our survey ask? We limited it to four questions (each with an opportunity to add free-form comments):

  1. On a scale of 0-10, how happy are you with the CloudOps developed tooling (toolA, toolB, toolC, etc.)?
  2. On a scale of 0-10, how happy are you with the local development experience?
  3. On a scale of 0-10, how happy are you with the CI/CD process?
  4. On a scale of 0-10, how happy are you with the observability of the system (logs, metrics, APM, alarms, etc.)?

Your questions may vary depending on the overall responsibility of your team but we distilled everything from our team’s mission statement into these four pillars.

We run the survey for a few weeks, piggybacking on our regular OfficeVibe surveys, and then collate the results and compute an NPS score for each of the questions above. The question with the lowest score generally gets the most attention in our planning cycles and OKR setting. That doesn’t mean we won’t address some tactical items in other areas, but as an area of focus, we want to spend 80% of our time on the lowest-scoring pillar.
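The collation step is mostly mechanical. A rough sketch, with invented responses keyed by our four pillars and the nps() helper from earlier repeated so it stands alone:

  def nps(scores):
      return 100.0 * (sum(s >= 9 for s in scores) - sum(s <= 6 for s in scores)) / len(scores)

  # Invented responses for each survey pillar
  responses = {
      "CloudOps tooling": [9, 8, 10, 7, 9, 6],
      "local development": [6, 7, 5, 8, 6, 9],
      "CI/CD": [8, 9, 9, 7, 10, 8],
      "observability": [7, 6, 8, 9, 5, 7],
  }

  # Lowest NPS first -- that pillar gets the bulk of next quarter's OKR focus
  for pillar in sorted(responses, key=lambda p: nps(responses[p])):
      print(f"{pillar:20s} NPS {nps(responses[pillar]):6.1f}")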

Wrap up

We view this as just the start of the journey to improving developer experience. We have also started to spend time measuring adoption of our tools and of the features we add to them. Awareness of the work we do and the tools we develop, and just generally being open to feedback and suggestions, is something we are continually working on. We’ve run workshops, given demos, set up support Slack channels, and generally tried to stay open to suggestions for areas that need improvement.

At the end of the day, the CloudOps team’s overall goal is to make our devs’ work lives better. So who better to ask than the devs we’re trying to keep happy? An investment in your company’s overall DX produces better code faster and increases your long-term retention of great talent.

Speaking of great talent, we’re always looking for clever minds to think up new ways to solve classic problems. Check out our open positions below, or learn more about working at Rewind.


Dave North
Dave North has been a versatile member of the Ottawa technology sector for more than 25 years. Dave is currently working at Rewind, leading the technical operations group. Prior to Rewind, Dave was a long time member of Signiant, holding many roles in the organization including sales engineer, professional services, technical support manager, product owner, and DevOps director. A proven leader and innovator, Dave holds 5 US patents and helped drive Signiant's move to a cloud SaaS business model with the award-winning Media Shuttle project. Prior to Signiant, Dave held several roles at Nortel, Bay Networks, and ISOTRO Network Management working on the NetID product suite. Dave is fanatical about cloud computing, automation, gadgets and Formula 1 racing.