How Sales Ops Can Automate SDR Monthly Performance Review Decks
Sales Ops can automate SDR monthly performance review decks by routing call analytics and meeting-booked metrics directly into PPTAutomate. The system processes the performance CSV or JSON data and maps individual rep statistics into a standardized .pptx template, generating dozens of personalized review presentations in seconds.
Monthly performance reviews for SDR teams generate a predictable operational bottleneck. The Sales Ops analyst downloads call data from Outreach or Salesloft, pulls pipeline contribution metrics from the CRM, exports the numbers into a spreadsheet, and builds individual slide decks for each rep. For a team of forty SDRs, this means forty separate presentations — each requiring the analyst to filter the master dataset to one rep, copy the relevant metrics, paste them into the template, format the numbers, and save the file. The process takes days, produces identical-format decks that differ only in the data, and monopolizes an analyst who could be doing strategic work.
Designing the Performance Review Template
The template design determines both the analytical value of the review conversation and the automation complexity. A well-designed template presents performance data in a narrative structure that guides the manager through the coaching conversation rather than simply displaying numbers.
Design the template with five sections:
Performance Summary — a single slide showing the SDR's name, reporting period, quota attainment percentage, and an overall performance indicator (On Track, Needs Improvement, Exceeding). This slide provides the conversation anchor. The manager opens the deck, sees the headline metric, and immediately knows the direction of the conversation.
Activity Metrics — calls made, emails sent, social touches, total activities. Present these as both raw numbers and trend lines showing month-over-month change. A rep who made 450 calls this month is performing differently depending on whether they made 500 last month (declining) or 350 last month (improving). The trend context transforms a static number into an actionable insight.
Conversion Funnel — the progression from activities to connections to meetings booked to opportunities created. Display conversion rates at each stage. This section reveals where in the funnel the SDR's performance breaks down. A rep with high activity volume but low connection rates may need talk track coaching. A rep with high connection rates but low meeting bookings may need better qualification training.
Pipeline Contribution — the revenue value of opportunities the SDR sourced or influenced during the period. Include deal names, values, stages, and the AE who owns each opportunity. This section connects SDR activity to revenue outcomes, which is the ultimate measure of SDR effectiveness.
Benchmarks and Goals — the SDR's metrics displayed alongside team averages, top performer benchmarks, and quota targets. This comparative view shows where the rep stands relative to peers and what specific improvements would move them toward quota.
Each section contains placeholders that map to a specific JSON path in the per-rep data object. The template is built once and reused for every SDR — the data injection creates the personalization.
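As a concrete illustration (the brace-style token syntax below is an assumption, not necessarily PPTAutomate's documented placeholder format), the section-to-data mapping for a single rep object might look like this Python dictionary:

# Illustrative mapping from template placeholder tokens to JSON paths in a
# single rep object. Token names and the double-brace syntax are hypothetical;
# match them to the placeholder convention your template engine actually uses.
PLACEHOLDER_MAP = {
    "{{rep.name}}":              "reps[i].name",
    "{{rep.status}}":            "reps[i].status",
    "{{rep.quotaAttainment}}":   "reps[i].quotaAttainment",
    "{{activities.calls}}":      "reps[i].activities.calls",
    "{{funnel.meetingRate}}":    "reps[i].funnel.meetingRate",
    "{{pipeline.totalValue}}":   "reps[i].pipeline.totalValue",
    "{{benchmarks.teamAvgCalls}}": "reps[i].benchmarks.teamAvgCalls",
}

Keeping this mapping explicit, whether in the template configuration or a reference document, makes it obvious which placeholder breaks when a field name changes in the export.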
Exporting and Structuring the Performance Data
The performance data originates from multiple systems, and the automation quality depends on producing a clean, consistent JSON export that matches the template placeholders exactly.
Outreach or Salesloft provides activity-level data: calls made, emails sent, sequences started, replies received, meetings booked. Export the monthly activity report filtered by the reporting period. The export should produce per-rep totals, not individual activity records — the review deck needs aggregate metrics, not a call log.
The CRM (Salesforce or HubSpot) provides pipeline contribution: opportunities sourced by each SDR, the value and stage of each opportunity, and the conversion from meeting to opportunity. Pull this data via a SOQL query or report export, filtered by the SDR's name and the reporting period.
Performance analytics platforms (Gong, Chorus, or internal BI tools) provide qualitative metrics: average call duration, talk-to-listen ratio, and conversation scores. These metrics add depth to the coaching conversation and differentiate the review from a simple activity report.
Combine these data sources into a structured JSON payload with one object per SDR:
{
  "period": "2026-04",
  "team": "Enterprise SDR Team",
  "reps": [
    {
      "name": "Jordan Martinez",
      "quotaAttainment": 0.92,
      "status": "On Track",
      "activities": {
        "calls": 467,
        "emails": 312,
        "socialTouches": 89,
        "callsTrend": 0.08,
        "emailsTrend": -0.03
      },
      "funnel": {
        "connections": 145,
        "meetings": 28,
        "opportunities": 11,
        "connectionRate": 0.31,
        "meetingRate": 0.19,
        "opportunityRate": 0.39
      },
      "pipeline": {
        "totalValue": 385000,
        "deals": [
          { "name": "Widget Co - Enterprise", "value": 120000, "stage": "Discovery" },
          { "name": "DataCorp - Team Plan", "value": 85000, "stage": "Negotiation" }
        ]
      },
      "benchmarks": {
        "teamAvgCalls": 420,
        "topPerformerCalls": 510,
        "quotaTarget": 12
      }
    }
  ]
}
The reps array is the key to batch processing. PPTAutomate iterates through this array and generates one complete performance review deck per rep. The structure must be consistent across all reps — every rep object should contain the same fields, even if some values are zero or null.
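A minimal aggregation sketch in Python, assuming the two exports have been saved as CSVs with illustrative column names (rep_name, connections, sdr_name, amount, and so on); adapt the headers to your actual exports. The conversion-rate denominators (connections over calls, meetings over connections, opportunities over meetings) are inferred from the sample payload above, and the trend and benchmark fields are omitted for brevity.

import csv
from collections import defaultdict

def load_activity(path):
    """Per-rep activity totals from the Outreach/Salesloft monthly export."""
    totals = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            totals[row["rep_name"]] = {
                "calls": int(row["calls"]),
                "emails": int(row["emails"]),
                "socialTouches": int(row["social_touches"]),
                "connections": int(row["connections"]),
                "meetings": int(row["meetings_booked"]),
            }
    return totals

def load_deals(path):
    """Per-rep sourced deals from the CRM report export."""
    deals = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            deals[row["sdr_name"]].append({
                "name": row["opportunity_name"],
                "value": float(row["amount"]),
                "stage": row["stage"],
            })
    return deals

def funnel_metrics(calls, connections, meetings, opportunities):
    """Stage counts plus stage-over-stage conversion rates."""
    return {
        "connections": connections,
        "meetings": meetings,
        "opportunities": opportunities,
        "connectionRate": round(connections / calls, 2) if calls else 0.0,
        "meetingRate": round(meetings / connections, 2) if connections else 0.0,
        "opportunityRate": round(opportunities / meetings, 2) if meetings else 0.0,
    }

def build_payload(period, team, activity, deals):
    """One object per SDR, with identical fields for every rep (zeros, not gaps)."""
    reps = []
    for name, acts in sorted(activity.items()):
        rep_deals = deals.get(name, [])
        reps.append({
            "name": name,
            "activities": {
                "calls": acts["calls"],
                "emails": acts["emails"],
                "socialTouches": acts["socialTouches"],
            },
            "funnel": funnel_metrics(
                acts["calls"], acts["connections"], acts["meetings"], len(rep_deals)
            ),
            "pipeline": {
                "totalValue": sum(d["value"] for d in rep_deals),
                "deals": rep_deals,
            },
        })
    return {"period": period, "team": team, "reps": reps}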
Running Batch Generation for the Full Team
Batch generation transforms the per-rep JSON array into a set of individual .pptx files. The process is a single API call that produces the entire team's review decks simultaneously.
Send the structured JSON payload to PPTAutomate's batch endpoint. The engine reads the reps array, iterates through each element, and for each rep: clones the performance review template, maps the rep's data to the template placeholders, generates tables and charts from the rep-specific arrays, and saves the output as {rep.name}-performance-review-{period}.pptx.
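A submission sketch using the Python requests library. The endpoint URL, auth header, and request field names below are assumptions standing in for PPTAutomate's actual API contract; only the output filename pattern comes from the description above.

import json
import requests

with open("sdr-review-2026-04.json") as f:
    payload = json.load(f)

response = requests.post(
    "https://api.pptautomate.example/v1/batch/generate",  # hypothetical endpoint
    headers={"Authorization": "Bearer YOUR_API_KEY"},     # hypothetical auth scheme
    json={
        "templateId": "sdr-monthly-review",               # hypothetical template ID
        "data": payload,
        "iterateOver": "reps",                            # one deck per array element
        "outputName": "{rep.name}-performance-review-{period}.pptx",
    },
    timeout=120,
)
response.raise_for_status()
print(response.json())  # e.g. a job ID or a list of generated file URLs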
The generation handles per-rep variability automatically. One SDR sourced two deals worth $205,000; another sourced seven deals worth $680,000. The pipeline section generates the correct number of table rows for each rep's deal array without requiring template adjustments. The same applies to optional content: if the template includes a section for management risk flags, conditional visibility rules hide it for reps with no flags and display it with the appropriate number of entries for reps who have them.
Output routing delivers each deck to the correct manager. Configure the batch endpoint to group output files by manager and deliver each manager's set of review decks to their designated folder or email distribution list. The SDR manager of a ten-person team receives ten individual decks in a single delivery, ready for that week's one-on-one meetings.
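If the batch endpoint does not handle routing natively, a small grouping step can organize the output before delivery. This sketch assumes a manager field has been added to each rep object during aggregation (the sample payload above does not include one):

from collections import defaultdict

def decks_by_manager(payload):
    """Map each manager to the deck filenames for their direct reports."""
    groups = defaultdict(list)
    for rep in payload["reps"]:
        filename = f"{rep['name']}-performance-review-{payload['period']}.pptx"
        groups[rep["manager"]].append(filename)  # "manager" is a hypothetical field
    return dict(groups)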
The time savings scale linearly with team size. A forty-person SDR team that previously required three days of analyst time for monthly review compilation now receives all forty decks within minutes of the data export completing. The Sales Ops analyst's role shifts from data compilation to data quality assurance — reviewing the aggregation script's output for accuracy rather than manually copying numbers between systems.
Validating Per-Rep Accuracy and Data Isolation
The critical validation for batch-generated review decks is data isolation: each rep's deck must contain only that rep's data. Cross-contamination — where Rep A's call volume appears in Rep B's deck — undermines the entire system's credibility and can cause real harm in performance discussions.
Validate data isolation by opening three randomly selected decks from the batch and cross-referencing every metric against the source JSON. For each deck, confirm that the rep's name on the title slide matches the rep object in the JSON array, that the activity metrics match that rep's activities block, and that the pipeline deals listed belong to that rep's pipeline.deals array.
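The spot check can be scripted with the python-pptx library: extract every text frame and table cell from a generated deck, then confirm the intended rep's name appears and no other rep's name does. This catches the array-index cross-contamination described below; the filename pattern from the batch step is assumed.

from pptx import Presentation  # pip install python-pptx

def deck_text(path):
    """Concatenate all text frames and table cells across every slide."""
    prs = Presentation(path)
    chunks = []
    for slide in prs.slides:
        for shape in slide.shapes:
            if shape.has_text_frame:
                chunks.append(shape.text_frame.text)
            if shape.has_table:
                for row in shape.table.rows:
                    for cell in row.cells:
                        chunks.append(cell.text)
    return "\n".join(chunks)

def check_isolation(path, rep_name, all_rep_names):
    """Fail loudly if the deck omits its own rep or mentions any other rep."""
    text = deck_text(path)
    assert rep_name in text, f"{path}: expected {rep_name} on the title slide"
    for other in all_rep_names:
        if other != rep_name and other in text:
            raise AssertionError(f"{path}: found {other} in {rep_name}'s deck")

Note that values rendered inside charts are stored separately in the .pptx and are not covered by this text extraction, so chart data still warrants a visual spot check.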
Pay particular attention to the benchmark comparison section. The team averages should be identical across all decks (they reflect team-level data, not individual data), while the individual metrics should differ. If two different reps show identical call volumes, verify against the source data — it's possible but statistically unlikely, and more often indicates a mapping error where both decks pull from the same array index.
Test the edge cases that batch generation must handle: a new hire with zero activity data (should produce a deck with zero values, not crash the generation), a departing rep who should be excluded from the batch (handled by filtering the source data before submission), and a rep whose pipeline contribution is entirely from deals that closed before the reporting period (should show zero pipeline contribution for the current period, not historical cumulative data).
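A pre-submission filter covering the first two edge cases; the departed list is assumed to come from whatever roster source Sales Ops maintains, and the third case (excluding pre-period deals) belongs in the CRM query filter rather than here.

EMPTY_ACTIVITIES = {"calls": 0, "emails": 0, "socialTouches": 0}
EMPTY_FUNNEL = {"connections": 0, "meetings": 0, "opportunities": 0,
                "connectionRate": 0.0, "meetingRate": 0.0, "opportunityRate": 0.0}

def prepare_reps(payload, departed):
    """Drop departed reps and zero-fill missing blocks for new hires."""
    reps = [r for r in payload["reps"] if r["name"] not in departed]
    for rep in reps:
        rep.setdefault("activities", dict(EMPTY_ACTIVITIES))
        rep.setdefault("funnel", dict(EMPTY_FUNNEL))
        rep.setdefault("pipeline", {"totalValue": 0, "deals": []})
    payload["reps"] = reps
    return payload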
After validating the first batch run, establish a lightweight QA process for subsequent months. Spot-check three decks per batch rather than reviewing all forty. The deterministic nature of the automation means that if the mapping works correctly for three reps, it works correctly for all of them — the only variable is the data, not the generation logic.