Career Development

Career development often implies performance surveys. However, there is one rule of the performance survey in Hygge companies: you don’t talk about the performance survey. Besides being a stressful experience for the employee being reviewed, filling out performance surveys is often an uncomfortable process for the reviewers and the managers who evaluate the results. Performance reviews often suffer from reviewer bias and can shift the employee’s focus towards gaming the metrics instead of getting the job done. There are other ways to package the performance survey that address these problems without taking anything away from the value of the collected feedback. Challenge your imagination and use less threatening names that target employee growth, team integration, and well-being: Strengths and Opportunities Assessment, Team Member Focus Survey, Integration and Growth Check-up, or simply Hygge Work Questions.

Metrics

It might be tempting to measure things that are easy to collect, quantify, and compare across employees: the number of issues closed, lines of code written, mean time to resolve an issue, or peer review scores. Unfortunately, most of these seemingly obvious productivity metrics are easy to manipulate. There’s no one-size-fits-all solution to this problem. To be viable, any alternative to these scoring mechanisms needs to be rooted in science and embody the values your organization truly cares about.

A good place to look for performance metrics is your hiring process. A well-thought-out hiring pipeline probably already includes metrics that let your organization compare potential hires’ competencies. These metrics are usually much more robust to manipulation than internal data or colleagues’ unstructured feedback, precisely because neither is available during the selection process. If you trust these metrics to bring new people on board, chances are they will also work as performance metrics inside the organization.

The resulting metrics also need to be consistent and valuable records across the whole employee lifecycle, from onboarding to termination. The rate of change in these metrics is often a stronger signal than the metrics themselves, making it a great growth indicator. The further back you can trace these signals, the better you’ll understand your employees and your organization.
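As a minimal sketch of this idea (the names, scores, and 1–5 scale below are hypothetical assumptions), tracking per-period deltas can surface a trajectory that the latest snapshot alone would hide:

    # Minimal sketch: the employee names and scores are hypothetical, and
    # the 1-5 scale is an assumption. The point is that the trend across
    # review periods, not the latest snapshot, is the growth signal.

    def deltas(scores: list[float]) -> list[float]:
        """Per-period change for a metric tracked over review cycles."""
        return [round(b - a, 2) for a, b in zip(scores, scores[1:])]

    # Two employees with the same latest score but opposite trajectories.
    history = {
        "alice": [2.0, 2.5, 3.2, 4.0],  # steady growth since onboarding
        "bob":   [4.5, 4.3, 4.1, 4.0],  # slow decline from a strong start
    }

    for name, scores in history.items():
        print(f"{name}: latest={scores[-1]}, trend={deltas(scores)}")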

Wording the Questions

The way you formulate questions can make all the difference.

  • Strengths: Building on strengths is always a good strategy, as it empowers people and boosts self-efficacy. Besides, by shifting the calibration focus towards the positive side of the spectrum, you make the process more comfortable for everyone involved. This also reduces the impact of the reviewer’s writing style and of the tendency to be nice, both of which you would have to untangle if you started with areas for improvement.
  • Problem areas: Shift the blame for potential negative remarks away from the employee and the reviewer. Formulate questions so that they blame the organization and the process. Instead of asking about the employee’s failures, identify problems through questions about what the organization can do to help the person achieve excellence.
  • Validate inputs: Use questions that are hard to answer with vagueness, untruthfulness, or mere politeness. Remove the ability to sugar-coat feedback with words and to score things with numbers, as both can do more harm than good. Instead, ask reviewers to list links, quotes, names, dates, and code lines (see the sketch after this list).
  • Control questions: While not necessarily connected to specific metrics, these questions are designed to ensure consistency in the reviewer’s answers. Praising an employee while being unable to come up with a project to collaborate on with them is a strong signal that a significant degree of calibration is needed when evaluating the reviewer’s answers.
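One way to enforce the validation idea above (a minimal sketch; the evidence patterns are illustrative assumptions, not an exhaustive list) is to reject any answer that cites nothing concrete:

    import re

    # Minimal sketch, assuming answers arrive as plain strings. An answer
    # counts as evidenced only if it cites something concrete: a link,
    # an ISO date, or an issue/PR number. The patterns are illustrative.
    EVIDENCE_PATTERNS = [
        re.compile(r"https?://\S+"),           # links to issues, PRs, docs
        re.compile(r"\b\d{4}-\d{2}-\d{2}\b"),  # ISO dates
        re.compile(r"#\d+"),                   # issue/PR numbers
    ]

    def has_evidence(answer: str) -> bool:
        """Reject vague or purely polite answers that cite nothing concrete."""
        return any(p.search(answer) for p in EVIDENCE_PATTERNS)

    assert not has_evidence("Great colleague, always helpful!")
    assert has_evidence("Drove the migration in #482, merged 2024-03-18.")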

Example Metrics and Questions

  • Critical Thinking (I was told vs I did research): Links to work that demonstrates creativity, a data-driven approach, long-term planning, consideration of multiple options, challenging common wisdom, etc.
  • Self-determination (NPC vs Player): List of issues where the person showed initiative by opening an issue for themselves, volunteering for someone else’s issue, or contributing their own opinion to a discussion.
  • Asynchronous teamwork (Email vs GitHub): List of issues the person participated in that required a high degree of cooperation with others: splitting difficult tasks into smaller issues and communicating through technical requirements, diagrams, and pull requests.
  • Reliability (Babysitting vs Fire-and-forget): Links to significant issues/milestones the person closed independently or saw through to completion.
  • Relevant tech skills (Junior vs Guru): What technical areas does the person excel in? Where can they make the biggest difference?
  • Control questions: How can we help the person excel? (developing their tech skills, team integration, choosing direction, time management, etc.) What future projects are you most interested in collaborating with them on?
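One possible way to keep such questions consistent across review cycles (the structure below is an illustrative assumption, not a prescribed schema) is to store the survey as data rather than prose:

    # Illustrative sketch: keeping the survey template as data means every
    # review cycle asks the same questions, so answers stay comparable
    # across the whole employee lifecycle.
    SURVEY = [
        {
            "metric": "critical-thinking",   # "I was told" vs "I did research"
            "question": "Links to work demonstrating creativity, a data-driven "
                        "approach, or challenging common wisdom.",
            "requires_evidence": True,       # checked with has_evidence()
        },
        {
            "metric": "self-determination",  # "NPC" vs "Player"
            "question": "Issues where the person opened work for themselves or "
                        "volunteered for someone else's issue.",
            "requires_evidence": True,
        },
        {
            "metric": "control",
            "question": "What future projects are you most interested in "
                        "collaborating with them on?",
            "requires_evidence": False,      # control questions need no links
        },
    ]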

Evaluating Results

When reviewing the survey results, keep in mind that everyone’s language is calibrated differently. Some people are straight shooters and will state their opinion plainly; others will sugarcoat problems with praise. Be mindful of these communication differences when evaluating the reviewer’s answers, and pay attention to:

  • Turnaround time: You can tell a lot about an employee’s performance even before you see the answers. The time it took the reviewers to fill out the survey, or the number of reminders it required, often speaks for itself (see the sketch after this list). Anyone who cares about working with the person being reviewed will probably bump the review up on their list of priorities. Missing answers, or the lack of feedback altogether, is feedback in itself.
  • Okay reviews: A good performance survey should be worded in a way that makes anything short of a glowing review a red flag.
  • 2nd opinion: It might be hard to evaluate technical feedback, so bring in a third party to help interpret the answers, preferably someone who is not involved in working with either the reviewer or the person being reviewed.
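As a final sketch (the field names and the three-day threshold are hypothetical assumptions), the turnaround signal can be captured alongside the answers so it is never lost:

    from datetime import date

    # Hypothetical sketch: record the meta-signals next to the answers,
    # since turnaround time and reminder counts are feedback in themselves.
    # The three-day threshold is an arbitrary assumption.
    def review_signal(sent: date, returned: date | None, reminders: int) -> str:
        if returned is None:
            return "no response: feedback in itself"
        days = (returned - sent).days
        if days <= 3 and reminders == 0:
            return "prompt: the reviewer prioritized this person"
        return f"{days} days, {reminders} reminder(s): worth a closer look"

    print(review_signal(date(2024, 5, 1), date(2024, 5, 2), 0))
    print(review_signal(date(2024, 5, 1), None, 3))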