Set up an add-on suggestion tracker for service shops to log suggestions and purchases, compare staff results, and focus on add-ons that actually sell.

Add-ons can feel like they’re selling because you hear a few “yes” answers each day, see extra items on tickets, and remember the wins. But memory is selective. A busy week with a handful of strong saves can hide the fact that most suggestions never became a purchase.
The biggest gap is simple: many shops only record what was bought, not what was offered. If a technician suggests a premium filter, a screen protector, a tire sealant, or an extended warranty and the customer says no, that moment usually disappears. Later, when you review sales, you can’t tell whether an add-on underperformed because it was rarely suggested, suggested inconsistently, explained poorly, or just not wanted.
That’s why tracking suggested vs. bought changes the conversation. It separates two different questions that often get mixed together:

- Are we actually offering this add-on?
- When we offer it, do customers say yes?
Without that split, you can reward the wrong behavior or blame the wrong product.
When suggestions aren’t recorded, a few predictable problems show up:

- Pitch rate “feels” high, but attach rate stays low.
- Everyone has a different opinion on what sells.
- Promotions run, but results are arguable.
- Training happens, but you can’t see if behavior changed.
- One person “seems great at upsells,” but the numbers never back it up.
When you track suggested vs. bought, you get plain answers. You can see which add-ons are mentioned consistently, which ones convert when they’re mentioned, and which ones stay dead even with lots of pitches. You also find quick wins: an add-on that converts well, but is only suggested on a small share of tickets.
A simple example: an add-on that half of customers buy when they hear about it, but that is only offered on one ticket in ten, isn’t a weak seller. It’s an under-offered one.
That clarity is what turns “I think this sells” into “I know what sells, and why.”
An add-on suggestion tracker only works if everyone in the shop uses the same definitions. Keep it simple and you’ll trust the numbers. Let it get fuzzy and you’ll spend meetings arguing about the data instead of using it.
Start by defining what counts as a suggested add-on. A suggestion is a clear, customer-facing offer, not a thought in someone’s head and not a line item quietly added later. “Do you want tire shine today for $8?” counts. Thinking about offering it, casually mentioning it without an offer, or relying on a flyer on the counter doesn’t.
Next, define what counts as a purchased add-on. The cleanest rule is: purchased means it was paid on the same ticket (or, if your system splits tickets, during the same visit). Don’t count “they came back next week and bought it” as a win for that suggestion, or your attach rate will look better than it really is.
To keep the team aligned, use one simple unit: one add-on, one suggestion, one outcome. If the same add-on is suggested twice on the same visit, decide the rule up front (most shops log it once). If two different add-ons are suggested, log them separately.
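If you clean entries in a spreadsheet or script later, that rule is easy to enforce automatically. Here’s a minimal Python sketch; the row shape (ticket, addon, bought) is an illustration, not a required format.

```python
def one_row_per_addon(log):
    """Keep one row per (ticket, add-on) pair: if the same add-on was suggested
    twice on a visit it counts once, and a 'bought' row wins over a declined one."""
    best = {}
    for row in log:
        key = (row["ticket"], row["addon"])
        if key not in best or row["bought"]:
            best[key] = row
    return list(best.values())
```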
From those definitions, three shop-friendly metrics fall out naturally:

- Suggestion rate: the share of eligible tickets where the add-on was offered.
- Attach rate: the share of suggestions that turned into a purchase.
- Add-on revenue per ticket: total add-on revenue divided by total tickets.
Example: A detail shop runs 100 tickets in a week. Staff suggested “interior protectant” on 40 tickets, and it was bought on 10 of those. Suggestion rate is 40% (40 of 100 tickets). Attach rate is 25% (10 of 40 suggestions). Add-on revenue per ticket is simply the revenue from those 10 sales divided by all 100 tickets.
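If you want to see the arithmetic end to end, here’s a rough Python sketch using the same numbers. The row fields and the $25 protectant price are illustrative assumptions, not part of the example above.

```python
def addon_metrics(log, addon, total_tickets):
    """Suggestion rate, attach rate, and add-on revenue per ticket for one add-on."""
    rows = [r for r in log if r["addon"] == addon]
    suggested = len(rows)
    bought_rows = [r for r in rows if r["bought"]]
    revenue = sum(r["price"] for r in bought_rows)
    return {
        "suggestion_rate": suggested / total_tickets,                        # offered on what share of tickets
        "attach_rate": len(bought_rows) / suggested if suggested else 0.0,   # converts when offered
        "revenue_per_ticket": revenue / total_tickets,                       # impact on the average sale
    }

# The detail-shop example: 100 tickets, 40 suggestions, 10 purchases at an assumed $25 each.
example_log = (
    [{"addon": "Interior Protectant", "bought": True,  "price": 25.0}] * 10
    + [{"addon": "Interior Protectant", "bought": False, "price": 25.0}] * 30
)
print(addon_metrics(example_log, "Interior Protectant", total_tickets=100))
# -> suggestion_rate 0.40, attach_rate 0.25, revenue_per_ticket 2.50
```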
If you can’t explain your definitions in one minute to a new hire, they’re too complicated.
Start smaller than you think. Tracking works best when staff can choose quickly in the moment. If you try to track every item you could sell, people skip the log, pick random names, or dump everything into “Other.” The data turns into noise.
A good starting range is 10 to 30 add-ons that are offered often, easy to say yes to, and tied to a clear customer problem. Keep “maybe one day” items out until logging is consistent.
When choosing what goes on the list, look for add-ons that:

- solve a clear, common customer problem
- are easy to say yes to on the spot (low price, quick decision)
- come up often in your normal services
- can be delivered during the same visit without disrupting the job
Naming is where many trackers fall apart. If one person logs “Protector,” another logs “Screen guard,” and a third logs “iPhone 14 protector,” your reporting splits into three buckets.
Pick one naming pattern and stick to it. A practical rule is Category + Variant + Key detail. Group similar items so you can compare fairly, then capture differences as variants instead of creating brand-new add-ons.
Example (phone repair counter): use “Screen Protector” as the category, and log the size or model as a variant. You can answer “Do screen protectors sell when suggested?” without drowning in hundreds of device names.
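If the log lives in a spreadsheet or a small script, one way to keep names locked is to validate every entry against a fixed menu. A minimal sketch; the menu contents and function name are made up for illustration.

```python
# Locked add-on menu: Category -> allowed variants.
ADDON_MENU = {
    "Screen Protector": {"6.1 inch", "6.7 inch", "tablet"},
    "Phone Case": {"slim", "rugged"},
    "Setup Help": {"standard"},
}

def validate_addon(category, variant):
    """Reject anything outside the locked menu so reports don't split across spellings."""
    if category not in ADDON_MENU:
        raise ValueError(f"Unknown add-on {category!r}; allowed: {sorted(ADDON_MENU)}")
    if variant not in ADDON_MENU[category]:
        raise ValueError(f"Unknown variant {variant!r} for {category}")
    return category, variant
```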
Seasonal items should be flagged. “Holiday gift wrap” or a summer-only check can spike for a few weeks and distort your long-term picture. Mark them as Seasonal so you can filter them out when you evaluate year-round performance.
Finally, don’t track only what sold. Add a simple price and margin field (even estimated). Popularity isn’t profit.
A tracker only works if people can fill it out fast, every time. Aim for a small set of fields that answers one question: what was suggested, and did it sell?
Start with the minimum:

- date (or shift)
- staff member
- service type
- add-on suggested
- bought? (yes/no)
That’s enough to see who suggests what and what converts.
If you can add a bit more without slowing people down, a few extras make the data more useful: quantity (when multiples can be sold), discount (so you can see if it only sells on markdown), and an optional “reason declined.” Keep decline reasons short and standardized: price, not needed, already has one, wants to think.
Speed beats detail. Use dropdowns for staff, service type, and add-on names. Make “Bought?” a single tap. If you allow notes, limit them to a few words.
If the form takes longer than 10 to 15 seconds, people will skip it or rush it.
Don’t store customer names, phone numbers, license plates, or full addresses in this tracker. You don’t need them to measure upsells, and they add risk. If you must tie entries back to a ticket, use a receipt or order number only.
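Put together, each entry stays small. Here’s one possible record shape as a Python sketch; the field names simply mirror the fields above and aren’t a required schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SuggestionEntry:
    date: str                  # date or shift label, e.g. "2024-05-03 AM"
    staff: str                 # picked from a dropdown, not typed free-form
    service_type: str          # e.g. "phone repair", "oil change"
    addon: str                 # canonical name from the locked menu
    bought: bool               # single tap: yes / no
    # Optional extras; drop them if they slow logging past 10-15 seconds.
    quantity: int = 1
    discount: float = 0.0
    reason_declined: Optional[str] = None  # "price", "not needed", "already has one", "wants to think"
    ticket_ref: Optional[str] = None       # receipt or order number only; no customer personal data
```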
The fastest way to make tracking work is to keep it boring: same add-on names, same moment of logging, same rule for what counts as “suggested.” Do that, and the numbers stay clean.
A rollout that fits most flows:

1. Lock the add-on menu and the names before day one.
2. Decide the exact moment when logging happens, and keep it the same for everyone.
3. Walk the team through a few practice tickets so edge cases are settled in advance.
4. Run a 2 to 4 week baseline without changing scripts, prices, or displays.
5. Review the baseline, then start adjusting.
Logging location matters more than it seems. If you log only at checkout, you can miss suggestions made during the service. If you log after service, you might forget details. Many shops do best logging at the moment the customer decides.
For training, use tickets that force clear choices:

- the customer says no (it still counts as a suggestion and gets logged)
- two different add-ons are offered on one visit (log them separately)
- the same add-on is offered twice in one visit (log it once)
- an item is added to the ticket without a clear offer (don’t log it as suggested)
After the baseline, adjust one thing at a time. If you change everything at once, you won’t know what caused the shift.
A tracker only helps if you review it on a schedule. The goal is simple: catch logging problems early, then turn the numbers into coaching and merchandising decisions.
Start with a 2-minute daily spot check:

- Does every ticket from today have a log entry?
- Do all entries use names from the locked menu, with nothing dumped into “Other”?
- Is “Bought?” filled in on every entry?
Once a week, run the same small set of reports so trends are obvious:

- suggestion rate per add-on
- attach rate per add-on and per staff member
- add-on revenue per ticket
Add-ons sell differently depending on the job. Break results by service type so you see clean matches, like “screen protector” with “phone repair” or “deep conditioning” with “hair color.” When an add-on wins in one service and loses in another, that’s normal and useful.
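If every logged row carries a service type, that breakdown is a simple grouped count. A sketch, assuming the same illustrative row shape as earlier:

```python
from collections import defaultdict

def attach_by_service(log):
    """Attach rate per (service type, add-on), so 'screen protector on phone repairs'
    is judged separately from 'screen protector on activations'."""
    counts = defaultdict(lambda: [0, 0])  # [suggested, bought]
    for row in log:
        key = (row["service_type"], row["addon"])
        counts[key][0] += 1
        counts[key][1] += 1 if row["bought"] else 0
    return {key: bought / suggested for key, (suggested, bought) in counts.items()}
```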
A realistic weekly read might sound like: “Protective case is suggested 90 times and bought 18 times (20% attach), but profit is low. Express diagnostic is suggested only 25 times and bought 15 times (60% attach), and it’s the top profit driver.” That tells you what to push more often and what to stop treating as a headline item.
Picture a small phone repair shop that wants to stop guessing which add-ons actually sell. They track three add-ons on every repair ticket: a phone case, a screen protector, and “setup help” (moving data, setting up email, and basic settings).
For two weeks, the counter staff logs two things for each add-on: was it suggested, and was it purchased. They also note the repair type, because a cracked-screen customer behaves differently than a battery-swap customer.
Here’s what a simple rollup could look like after 2 weeks (84 repair tickets):
| Add-on | Times suggested | Times bought | Buy rate when suggested |
|---|---|---|---|
| Screen protector | 78 | 29 | 37% |
| Phone case | 80 | 12 | 15% |
| Setup help | 40 | 18 | 45% |
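A table like this is just a grouped count of the raw entries. Here’s a sketch of that rollup, assuming one logged row per suggestion with illustrative field names:

```python
from collections import defaultdict

def rollup(log):
    """Times suggested, times bought, and buy rate when suggested, per add-on."""
    stats = defaultdict(lambda: {"suggested": 0, "bought": 0})
    for row in log:
        s = stats[row["addon"]]
        s["suggested"] += 1
        if row["bought"]:
            s["bought"] += 1
    for addon, s in sorted(stats.items()):
        rate = s["bought"] / s["suggested"]
        print(f"{addon:<18} suggested={s['suggested']:>3}  bought={s['bought']:>3}  buy_rate={rate:.0%}")
```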
A few things jump out. The team suggests cases almost as often as protectors, but cases convert much worse. Setup help converts best, but it’s only suggested about half the time, usually when the customer asks questions first.
They make one small script change for setup help. Instead of “Do you want setup help?” they try: “Do you want us to move your data and set up your apps while we fix the phone? It usually saves about 30 minutes at home.” Same offer, clearer outcome.
Over the next few days, suggestions rise because the wording feels natural to say. The buy rate stays strong because customers understand what they get. Average ticket goes up without staff getting more aggressive.
Now the harder call: what should they stop suggesting? They don’t drop cases immediately. They split results by repair type and see cases sell mainly to “new phone setup” customers, not repair customers. So they change the rule: suggest cases only on activations and setup jobs. For repair tickets, they keep the protector suggestion (high volume, decent conversion) and keep setup help for customers who look rushed or ask timing questions.
That’s the point of a sales suggestion log: it turns opinions into patterns you can act on.
Tracking only helps if the data is consistent. Most shops don’t fail because the idea is bad. They fail because logging habits drift, and reports start lying.
Here are five mistakes that ruin the tracker:

1. Logging the same add-on under different names, so results split across buckets.
2. Recording purchases but skipping the suggestions that were declined.
3. Letting the add-on list grow so long that staff skip logging or dump entries into “Other.”
4. Changing prices, bundles, or scripts mid-stream without noting the date.
5. Comparing staff on raw sales counts instead of attach rate, with no context for unusual weeks.
A common example: a shop tracks “wiper blades.” One person logs “wipers,” another “front wipers,” and a third “wiper install.” The report shows each item sells poorly, so the manager removes it from the script. In reality, wipers sold fine, but the data was split across names.
Simple fixes work: limit add-ons to a short, fixed menu and lock the names. If you change a price or bundle, record the effective date. When comparing staff, use attach rate and add context notes for unusual weeks (new trainee, promotion, weather spike).
Before you change scripts, reorder displays, or set new spiffs, make sure your data is clean enough to trust. Small logging gaps can flip rankings.
Check these basics (use last week’s tickets, or the last 2 to 4 weeks if you’re lower volume):

- Coverage: most tickets have a log entry, not just the memorable ones.
- Consistency: every entry uses a name from the locked menu.
- Sample size: each add-on you’re about to judge has enough total suggestions to be meaningful.
If any item fails, treat your numbers as a draft. Tighten the rules, do a quick staff reminder, and keep collecting.
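If the log lives somewhere a script can read, these checks are easy to automate. A minimal sketch; the 20-suggestion threshold is an assumption you should tune to your own volume:

```python
def data_health(log, all_ticket_ids, menu, min_suggestions=20):
    """Coverage, naming consistency, and sample size before trusting the numbers."""
    logged_tickets = {row["ticket"] for row in log}
    coverage = len(logged_tickets & set(all_ticket_ids)) / len(all_ticket_ids)
    off_menu = sorted({row["addon"] for row in log} - set(menu))
    thin = [a for a in menu if sum(row["addon"] == a for row in log) < min_suggestions]
    return {
        "coverage": coverage,            # share of tickets with at least one log entry
        "off_menu_names": off_menu,      # entries that don't match the locked menu
        "too_few_suggestions": thin,     # add-ons you shouldn't judge yet
    }
```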
Once you have a few weeks of data, improve faster by narrowing your focus. Pick 1 to 2 add-ons to work on for the next month. If you try to “fix” ten at once, the message gets diluted and results bounce around.
Choose add-ons that solve a common customer problem and are easy to explain in one sentence. Turn each into one repeatable line your team can say the same way every time. Consistency matters because your add-on suggestion tracker should reflect the offer, not random wording.
Set one simple goal and review it weekly. Attach rate is a good starter: out of 100 eligible tickets, how many bought the add-on? Keep the target realistic and focus on steady improvement.
A lightweight routine:

- Pick 1 to 2 focus add-ons for the month.
- Give each one a single scripted line everyone says the same way.
- Check attach rate at the same time every week.
- Change one thing at a time, and note the date you changed it.
If you outgrow spreadsheets, a small internal app can enforce naming, required fields, and consistent weekly reporting. If your team prefers building tools through a chat interface, Koder.ai is one option to quickly create a simple tracker app from the same fields, with the ability to export source code and deploy when you’re ready.
Keep the promise to your staff simple: fewer add-ons, clearer scripts, and one weekly check-in. That’s how the numbers turn into a habit, and the habit turns into extra sales you can actually prove.
Track both because sales reports only show purchases, not the offers that failed. When you log suggested and bought separately, you can tell whether an add-on is weak because nobody mentions it or because customers reject it when they hear it.
Use one simple rule: a suggestion counts only when a staff member clearly offers the add-on to the customer and the customer can say yes or no. A vague mention, a poster on the wall, or silently adding a line item should not count as suggested.
Count it as purchased only if it’s paid on the same ticket or during the same visit. Keeping the window tight prevents inflated attach rates and makes week-to-week comparisons trustworthy.
Start with a small menu, usually 10 to 30 add-ons you offer often and can deliver easily. If the list gets too long, staff will skip logging or pick inconsistent names, and the data becomes hard to use.
Use a single standard naming pattern and lock it so everyone logs the same way. A practical format is Category plus Variant plus one key detail, so you can group results without creating a new name for every tiny difference.
Keep it minimal: date or shift, staff member, service type, add-on suggested, and bought yes or no. That set is enough to see suggestion rate, attach rate, and who is actually offering what.
Make it fast with dropdowns and a single-tap yes or no for “Bought?”. If logging takes more than about 10 to 15 seconds, people will delay it, forget details, or stop doing it consistently.
Start with suggestion rate, attach rate, and add-on revenue per ticket. Suggestion rate shows whether the team is bringing it up, attach rate shows how often it converts when offered, and revenue per ticket shows the overall impact on your average sale.
Check coverage and consistency before you change scripts or pricing. If many tickets have no log, names are inconsistent, or the add-on has very few total suggestions, treat the results as a draft and tighten the process first.
Move beyond a spreadsheet when you need enforced naming, required fields, and automatic weekly reporting. Teams often start in a spreadsheet, then switch to a simple internal app once habits stick; a tool like Koder.ai can help you build a basic tracker app quickly from the same fields and workflow.