How to Run Employee Performance Reviews in a Home Service Business
Annual reviews are too infrequent to change behavior. No reviews at all is common in the trades and produces no accountability. A 20-minute quarterly framework tied directly to CRM data closes the gap.
Key takeaways
- Annual reviews are too infrequent to change behavior. Quarterly reviews of 20 minutes per tech cost only slightly more total time per year and deliver four coaching touchpoints instead of one.
- The 4 metrics for every field tech review: revenue per day, close rate (where applicable), callback rate, and review count.
- Connect compensation review directly to performance metrics, not calendar year. A tech hitting $1,300/day consistently should not wait until January for a rate adjustment.
Most home service owners either do annual reviews or no reviews at all, and both approaches produce the same outcome: techs who do not know where they stand.
Annual reviews are too infrequent. Behavior reviewed once a year is too far from the actual events to change. A tech who had a high callback rate in March does not benefit from hearing about it in December. No reviews at all is the more common pattern in trades businesses and produces no accountability structure and no development path.
The alternative is a quarterly review that takes 20 minutes per tech and runs entirely off data that already exists in your CRM. The metrics framework that feeds it is in technician performance metrics for home services.
Why Quarterly Beats Annual
The argument for annual reviews is usually "we don't have time." The arithmetic does not support it.
A 20-minute quarterly review per tech adds up to 80 minutes per tech per year. For a 5-tech team, that is 400 minutes per year, or about 6.7 hours. An annual review done properly, with documentation and goal-setting, takes at least 45 minutes. Switching to quarterly costs roughly 35 extra minutes per tech per year, and in exchange you get four coaching touchpoints instead of one.
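The time math can be sketched in a few lines as a sanity check. The minute figures are the article's illustrative numbers, not measured data:

```python
# Time cost of quarterly vs annual reviews (illustrative figures).
QUARTERLY_MIN = 20   # one quarterly review per tech
ANNUAL_MIN = 45      # one annual review done properly
TECHS = 5

quarterly_per_tech = QUARTERLY_MIN * 4   # 80 min/tech/year
annual_per_tech = ANNUAL_MIN * 1         # 45 min/tech/year
extra_per_tech = quarterly_per_tech - annual_per_tech  # 35 min more

team_total_hours = quarterly_per_tech * TECHS / 60     # ~6.7 hours/year
print(f"Quarterly cadence: {quarterly_per_tech} min/tech/year, "
      f"{team_total_hours:.1f} team hours, "
      f"{extra_per_tech} min/tech/year more than annual")
```

The extra cost is about half an hour per tech per year; the return is four coaching conversations instead of one.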
The behavioral reason matters more than the time math. Quarterly reviews mean that when a tech has a strong month in February, they hear about it in March, not in December. When callback rates spike after a new equipment line is added to the service mix, the conversation happens at the 90-day mark when the pattern is fresh and correctable, not a year later when it is baked in.
The review data exists whether or not you use it. Your CRM has revenue by tech, job count by tech, and if you have callback tracking set up, return visit rates by tech. The quarterly review is just the mechanism to use it.
Text Clint: "what are the key metrics for each tech over the last 90 days?"
The 4 Metrics for Every Tech Review
These four numbers tell the complete story of a field tech's performance:
Revenue per day. Total job revenue attributed to the tech divided by the number of days they worked in the period. This is the single highest-signal number for a service tech. It captures close rate, average ticket, job volume, and time efficiency in one figure. The team average gives you a baseline. Individual variation tells you where coaching is needed.
Close rate (where applicable). For businesses where techs present estimates or options (HVAC replacement, restoration bids, specialty repair), close rate is the percentage of estimates presented that the customer accepted. Pair it with average estimate value: a tech with a 90% close rate on $200 repairs is not the same as a tech with a 70% close rate on $4,000 replacements.
Callback rate. The percentage of jobs that resulted in a return visit within 14-30 days for the same complaint. A callback rate materially above the team average points to a diagnostic or completion issue. See how to track first call resolution for setup instructions in each major CRM, and how to reduce callbacks in a field service business for the playbook to bring the number down.
Review count. Total verified customer reviews attributed to the tech in the period, where your CRM or review platform allows tech-level attribution. For businesses using CallRail or a review request system that tags the tech who ran the job, this is trackable. A tech who generates 3 reviews per month from 60 jobs is leaving reviews on the table. A tech who generates 8 reviews from the same volume is creating a visible reputation asset.
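For owners who prefer to compute the four metrics from a CRM export rather than by hand, here is a minimal sketch. The field names (tech, date, revenue, estimate, closed, callback, review) are hypothetical placeholders; map them to whatever your CRM actually exports:

```python
from datetime import date

# Hypothetical job records as they might come out of a CRM export.
jobs = [
    {"tech": "A", "date": date(2024, 1, 8), "revenue": 950,
     "estimate": True, "closed": True, "callback": False, "review": True},
    {"tech": "A", "date": date(2024, 1, 9), "revenue": 420,
     "estimate": True, "closed": False, "callback": True, "review": False},
    {"tech": "A", "date": date(2024, 1, 9), "revenue": 780,
     "estimate": False, "closed": False, "callback": False, "review": False},
]

def tech_metrics(jobs, tech):
    mine = [j for j in jobs if j["tech"] == tech]
    days_worked = len({j["date"] for j in mine})  # distinct working days
    estimates = [j for j in mine if j["estimate"]]
    return {
        # total revenue / distinct days worked in the period
        "revenue_per_day": sum(j["revenue"] for j in mine) / days_worked,
        # accepted estimates / estimates presented (None if no estimates)
        "close_rate": (sum(j["closed"] for j in estimates) / len(estimates)
                       if estimates else None),
        # jobs that produced a return visit / total jobs
        "callback_rate": sum(j["callback"] for j in mine) / len(mine),
        # verified reviews attributed to this tech
        "review_count": sum(j["review"] for j in mine),
    }

m = tech_metrics(jobs, "A")
```

With the sample data above, tech A works 2 distinct days, so $2,150 in revenue yields $1,075/day, with a 50% close rate on two estimates. The same function runs per tech to produce the review sheet.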
Text Clint: "what is the revenue per day for each tech this quarter, and how does each compare to last quarter?"
The 3 Questions
After the metrics review, three open questions from the same conversation produce more improvement than any directive:
"What job type are you most confident on right now?"
This tells you where the tech is strongest and opens the door to a conversation about routing more of those jobs to them in the next period. It also tells you which job types they are implicitly less confident on, which is coaching information you cannot get from the numbers alone.
"What is the one thing that would help you get more done per day?"
Listen for: dispatch sequencing issues, parts that are not on the truck when they should be, customer communication problems at the close of a job, and administrative friction. Most of the answers are fixable operational items, not skill gaps. A tech who is losing 45 minutes per day on a fixable dispatch issue is costing the business real revenue, and they may not say anything unless you ask.
"What is one thing we should stop doing?"
This is the most important question and the most often skipped. It gives the tech a genuine voice in operations. The answer surfaces waste, policy conflicts, and administrative friction that you cannot see from the owner's chair. You will not act on every answer. But asking consistently builds a culture where the team tells you about problems before they become expensive.
Document the answers to all three questions after the review. They become the baseline for the next quarter's conversation.
Text Clint: "what job types did each tech work most in the last 90 days, and what was their callback rate on each?"
Setting the 90-Day Target
Every review ends with one specific, measurable 90-day target.
Specific means: not "do better on average ticket." Specific is: "your average ticket is $740, the team average is $910. In the next 90 days, the goal is to present the full repair or replacement option on every diagnostic call, not just the minimum repair. We'll look at your ticket average at the next review."
The target connects the metric gap to a specific behavior change, not a vague improvement. If the tech's callback rate is the gap, the target is: "run through the diagnostic checklist on every refrigerant-side call, confirm the repair is complete before leaving the job, and communicate the test results to the customer at close. 90-day callback rate target: below 12%."
The tech sets the target with you, not for you. If you set the target unilaterally and the tech has no ownership of it, they will not work toward it between reviews.
If the review reveals a skill gap (a specific equipment type or failure mode the tech is not confident on), the 90-day target includes a training component: certification course, ride-along with a senior tech on that job type, or manufacturer training on the equipment line.
Text Clint: "what is the gap between our highest-revenue tech and our lowest-revenue tech per day over the last 90 days?"
Connecting Performance to Compensation
Annual compensation reviews on a fixed calendar are a retention problem.
A tech who hits $1,300/day consistently and has a callback rate below 8% is generating a clear return on their compensation. If they have to wait until January for the conversation, you have 6-8 months of their performance going unacknowledged. That is 6-8 months of passive recruitment window for competitors.
The compensation review cadence should follow performance, not the calendar:
- Any tech who sustains a revenue-per-day or close-rate improvement above a pre-set threshold for 90 consecutive days triggers a compensation conversation.
- Any tech who earns a new certification (NATE, EPA, manufacturer-specific) gets an immediate rate adjustment tied to the certification value.
- Annual rate adjustments for inflation and market movement happen on January 1 for all techs, independent of the performance trigger reviews.
Set the performance thresholds explicitly in the tech's 90-day targets so they know exactly what produces a compensation conversation. "If you sustain $1,200/day average for 90 days, we have a rate conversation" is a clear incentive. "We'll see how it goes" is not.
The math on tech retention justifies the investment. Recruiting and onboarding a replacement service tech costs $3,000-$8,000 in direct cost, plus 60-90 days of reduced productivity from the new hire. A $2/hour raise for a tech who stays costs roughly $4,000/year. The retention cost is almost always lower than the replacement cost. See how to build a tech bonus plan for home service for the compensation structure detail.
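The retention math above, sketched explicitly. Dollar figures are the article's illustrative ranges, not industry benchmarks; the hours assume a standard full-time year:

```python
# Cost of a raise vs cost of replacing a tech (illustrative figures).
RAISE_PER_HOUR = 2
HOURS_PER_YEAR = 2080            # full-time: 40 h/week * 52 weeks
raise_cost = RAISE_PER_HOUR * HOURS_PER_YEAR   # ~$4,160/year

replacement_direct = (3000, 8000)  # recruiting + onboarding range
print(f"Raise: ${raise_cost:,}/yr vs replacement: "
      f"${replacement_direct[0]:,}-${replacement_direct[1]:,} direct cost")
```

Even at the low end of the replacement range, the direct cost approaches a full year of the raise, before counting 60-90 days of reduced productivity from the new hire.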
How Clint Returns the Review Data
Text "what are the key metrics for each tech over the last 90 days?" and Clint returns revenue per day, job count, average ticket, and callback rate for each tech in your CRM, sorted by the metric of your choice.
That data is the starting point for every review in this framework. The 20-minute quarterly review becomes possible when you are not spending 15 of those minutes pulling the numbers manually from two different reports and a spreadsheet.
Frequently Asked Questions
4 questions home service owners actually ask about this.
1. How do I handle a tech who reacts defensively to performance data?
Present the data as a team baseline, not a verdict. "Here is where the team is, here is where you are, here is the gap" is easier to receive than "your numbers are low." The context removes the judgment. Then move directly to the three questions. A tech who is defensive about numbers is almost always willing to answer "what would help you get more done per day?"
2. What if I don't have revenue-per-tech data in my CRM?
Most major field service CRMs (Jobber, Housecall Pro, ServiceTitan, Workiz) have revenue attribution by assigned tech on completed jobs. If yours does not, check whether jobs have an "assigned to" field that can be used in a report filter. If attribution is genuinely unavailable, start there as a data setup task before the next review cycle.
3. How do I run reviews for office staff or dispatchers?
The framework translates. Replace revenue per day with calls handled per day or quotes dispatched per day. Replace callback rate with dispatch accuracy (jobs dispatched to the right tech on the first dispatch vs. rerouted). The three questions apply without modification. The 90-day target works the same way.
4. Should I show each tech the other techs' numbers?
Team average is fine to share and useful for context. Individual tech numbers for other team members should stay private unless the team culture is explicitly built around public performance boards (some HVAC companies run leaderboards by choice). Sharing one tech's numbers without their knowledge creates trust problems that are harder to fix than the performance gap.
See Clint in action
Clint is the pre-built AI for home service shops. Connect your CRM, email, and phone system in minutes and the agents run on your real data.