Automation Selection Scoring Matrix

A structured methodology for evaluating and selecting logistics automation solutions — AS/RS, AGV/AMR fleets, WMS, conveyor/sortation, and integrated systems — against a weighted set of criteria. Separates objective comparison from political pressure and produces a defensible, auditable decision.

System architecture must be defined before equipment selection. Evaluating AS/RS cranes vs. shuttle systems vs. AMRs without first defining SKU structure, throughput requirements, building constraints, and software integration needs is a purchasing decision, not an engineering decision.

Step 1: Define Requirements Before Evaluating Vendors

The requirements document precedes the RFP. It specifies:

  • Throughput requirements (units/hour by period: average, design day, peak)
  • SKU profile (count, velocity distribution, dimensions, weight range)
  • Order profile (lines per order, order mix, B2B vs. B2C, SLA targets)
  • Building constraints (clear height, column spacing, floor spec, power capacity, footprint)
  • Integration requirements (WMS/WCS/ERP/TMS ecosystem, protocols required)
  • Growth horizon (volume and SKU growth rate for 5- and 10-year scenarios)
  • Budget parameters (CapEx ceiling, OpEx constraints, payback target)
  • Operational constraints (labor availability, maintenance capability, uptime requirement)

A requirements document that doesn’t exist before the RFP will be written implicitly by the vendors responding — which means each vendor defines requirements in the way that favors their solution.
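
As a sketch of what such a requirements document can look like in machine-readable form, the hypothetical Python structure below captures the fields listed above. All field names and example values are illustrative, not prescriptive.

```python
from dataclasses import dataclass

@dataclass
class RequirementsDocument:
    """Illustrative requirements record; field names and example values are hypothetical."""
    throughput_uph: dict         # e.g. {"average": 1800, "design_day": 2600, "peak": 3200}
    sku_count: int               # active SKUs; velocity and dimension detail kept separately
    order_profile: dict          # e.g. {"lines_per_order": 2.4, "b2c_share": 0.7, "sla_hours": 24}
    clear_height_m: float        # building constraint
    floor_spec: str              # e.g. "FF50"
    power_capacity_kva: float
    required_integrations: list  # e.g. ["WMS", "WCS", "ERP", "TMS"]
    growth_cagr_volume: float    # for 5- and 10-year scenarios
    capex_ceiling: float
    payback_target_years: float
    uptime_requirement: float    # e.g. 0.98
```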

Step 2: Apply Knockout Criteria

Knockout criteria eliminate candidates regardless of weighted score. Failure on a single knockout means disqualification.

Common knockouts for logistics automation:

| Knockout criterion | What triggers elimination |
| --- | --- |
| Throughput floor | System cannot demonstrate capacity at required peak units/hour |
| Integration protocol | System cannot integrate with existing WMS via API or standard interface within project timeline |
| Floor requirement | System requires floor spec beyond what the building can achieve (FF100+ on FF50 floor) |
| Clear height | System requires clearances that do not exist and cannot be created |
| Financial stability | Vendor cannot provide audited financials or shows distress signals (layoffs, customer attrition) |
| Reference availability | Vendor cannot provide 3+ customer references at comparable scale within the last 3 years |
| Uptime SLA | Vendor cannot contractually commit to ≥98% system availability with defined remedies |

Knockout criteria should be published in the RFP so vendors can self-screen. Discovering a knockout at the demo stage wastes everyone’s time.
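
A minimal sketch of how a knockout screen can be applied before any weighted scoring. The threshold values and vendor fields below are illustrative assumptions, not requirements from any specific RFP.

```python
# Knockout screen: a vendor failing any single criterion is disqualified outright.
# All thresholds here are hypothetical (peak 3,200 uph, FF50 floor, 10.5 m clear height).

KNOCKOUTS = {
    "throughput": lambda v: v["demonstrated_peak_uph"] >= 3200,       # required peak units/hour
    "integration": lambda v: v["wms_integration_feasible"],           # API/standard interface in timeline
    "floor_spec": lambda v: v["required_floor_ff"] <= 50,             # building floor is FF50
    "clear_height": lambda v: v["required_clear_height_m"] <= 10.5,   # building clear height in metres
    "financials": lambda v: v["audited_financials_provided"],
    "references": lambda v: v["comparable_references_3yr"] >= 3,
    "uptime_sla": lambda v: v["contractual_availability"] >= 0.98,
}

def apply_knockouts(vendor: dict) -> list[str]:
    """Return the knockout criteria this vendor fails (empty list = passes the screen)."""
    return [name for name, passes in KNOCKOUTS.items() if not passes(vendor)]

vendor = {
    "demonstrated_peak_uph": 3400,
    "wms_integration_feasible": True,
    "required_floor_ff": 100,          # needs FF100 on an FF50 floor -> knockout
    "required_clear_height_m": 9.8,
    "audited_financials_provided": True,
    "comparable_references_3yr": 4,
    "contractual_availability": 0.985,
}

print(apply_knockouts(vendor))   # ['floor_spec'] -> disqualified regardless of weighted score
```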

Step 3: Define Evaluation Criteria and Weights

Organize criteria into six categories. Weights should reflect the client’s specific priority context — the table below shows a balanced default with adjustments for three common priority profiles.

Default weight allocation:

| Category | Default | Cost-constrained | High-throughput | Integration-heavy |
| --- | --- | --- | --- | --- |
| Functional performance | 25% | 20% | 35% | 20% |
| Total cost of ownership | 20% | 35% | 15% | 20% |
| Integration capability | 20% | 15% | 20% | 30% |
| Vendor / integrator quality | 15% | 10% | 15% | 15% |
| Implementation risk | 12% | 12% | 10% | 10% |
| Scalability / future-fit | 8% | 8% | 5% | 5% |
| Total | 100% | 100% | 100% | 100% |
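
The weight profiles above translate directly into a lookup table. The sketch below (profile names and keys are illustrative) also adds the one check that catches the most common spreadsheet error: a profile whose weights do not sum to 100%.

```python
# Weight profiles from the table above, expressed as fractions, with a consistency check.

WEIGHT_PROFILES = {
    "default":           {"functional": 0.25, "tco": 0.20, "integration": 0.20,
                          "vendor": 0.15, "risk": 0.12, "scalability": 0.08},
    "cost_constrained":  {"functional": 0.20, "tco": 0.35, "integration": 0.15,
                          "vendor": 0.10, "risk": 0.12, "scalability": 0.08},
    "high_throughput":   {"functional": 0.35, "tco": 0.15, "integration": 0.20,
                          "vendor": 0.15, "risk": 0.10, "scalability": 0.05},
    "integration_heavy": {"functional": 0.20, "tco": 0.20, "integration": 0.30,
                          "vendor": 0.15, "risk": 0.10, "scalability": 0.05},
}

for name, weights in WEIGHT_PROFILES.items():
    total = sum(weights.values())
    assert abs(total - 1.0) < 1e-9, f"{name} weights sum to {total:.2f}, not 1.00"
```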

Step 4: Define Sub-Criteria Within Each Category

Functional Performance (25% default)

| Sub-criterion | What to measure | Score guidance |
| --- | --- | --- |
| Peak throughput capacity | Units/hour at design peak | 10 = exceeds requirement by ≥20%; 1 = fails to meet requirement |
| Accuracy / quality | Pick accuracy SLA; error rate | 10 = 99.9%+; 5 = 99.5%; 1 = <99% |
| Uptime / availability | Contractual system availability | 10 = ≥99.5%; 7 = ≥98%; 3 = <97% |
| Flexibility | Ability to handle SKU variation, order mix change | Qualitative; scenario-tested |
| Ergonomics | Operator interface design; port height; exception handling ease | Qualitative; site visit |
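
Where the rubric gives numeric anchors, it can be encoded so every evaluator applies the same mapping. The functions below are an illustrative sketch: the linear interpolation between throughput anchors and the 5-point score for the 97–98% availability band are assumptions, not part of the table above.

```python
# Illustrative rubric functions mirroring the score guidance in the table above.

def score_throughput(demonstrated_uph: float, required_uph: float) -> float:
    """10 if capacity exceeds requirement by >=20%, 1 if it fails to meet it, linear in between (assumption)."""
    ratio = demonstrated_uph / required_uph
    if ratio < 1.0:
        return 1.0
    if ratio >= 1.2:
        return 10.0
    return 1.0 + (ratio - 1.0) / 0.2 * 9.0   # linear between the 1 and 10 anchors

def score_availability(contractual_availability: float) -> float:
    """Band scoring per the table: 10 = >=99.5%, 7 = >=98%, 3 = below 97%."""
    if contractual_availability >= 0.995:
        return 10.0
    if contractual_availability >= 0.98:
        return 7.0
    if contractual_availability >= 0.97:
        return 5.0   # assumption: the table leaves the 97-98% band implicit
    return 3.0

print(score_throughput(3400, 3200))   # ~3.8 on the 1-10 scale
print(score_availability(0.985))      # 7.0
```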

Total Cost of Ownership (20% default)

TCO must be calculated over a consistent horizon (typically 10 years) and include all cost layers:

| Cost layer | Included items |
| --- | --- |
| CapEx | Equipment, installation, infrastructure modifications, IT hardware |
| Integration labor | WMS/WCS/ERP integration development; commissioning |
| Training | Initial and ongoing operator and maintenance training |
| Annual maintenance | Preventive maintenance contracts; spare parts |
| Software licensing | Annual WCS/WMS fees; support contracts |
| Labor delta | Labor cost change (positive or negative) vs. current state |
| End-of-life | Decommissioning cost; module upgrade paths |

Score each vendor’s TCO on a relative basis: Score = (Best vendor TCO / This vendor TCO) × 10
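
A simplified sketch of the 10-year TCO roll-up and the relative scoring formula above. The cost figures are placeholders and the model is undiscounted; a real evaluation would add discounting and validate the labor-delta sign convention.

```python
# 10-year TCO roll-up (undiscounted sketch) and relative TCO scoring.

def ten_year_tco(costs: dict, horizon_years: int = 10) -> float:
    """Sum one-time layers plus recurring layers over the horizon; labor_delta may be negative (savings)."""
    one_time = costs["capex"] + costs["integration_labor"] + costs["initial_training"] + costs["end_of_life"]
    recurring = (costs["annual_maintenance"] + costs["software_licensing"]
                 + costs["ongoing_training"] + costs["labor_delta"]) * horizon_years
    return one_time + recurring

vendors = {   # placeholder cost models for two hypothetical vendors
    "A": ten_year_tco({"capex": 5_200_000, "integration_labor": 600_000, "initial_training": 80_000,
                       "end_of_life": 150_000, "annual_maintenance": 220_000, "software_licensing": 90_000,
                       "ongoing_training": 20_000, "labor_delta": -400_000}),
    "B": ten_year_tco({"capex": 4_100_000, "integration_labor": 900_000, "initial_training": 60_000,
                       "end_of_life": 120_000, "annual_maintenance": 260_000, "software_licensing": 110_000,
                       "ongoing_training": 25_000, "labor_delta": -350_000}),
}

best = min(vendors.values())
tco_scores = {name: round(best / tco * 10, 1) for name, tco in vendors.items()}
print(tco_scores)   # the lowest-TCO vendor scores 10.0; the others scale down proportionally
```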

Integration Capability (20% default)

| Sub-criterion | What to measure |
| --- | --- |
| WMS compatibility | Native integrations with the client’s WMS; custom development required |
| API / EDI maturity | REST API availability; EDI protocol support; documented API |
| VDA 5050 (AGV/AMR) | Compliance with VDA 5050 v2.0 for multi-vendor fleet management |
| ERP connectivity | SAP/Oracle/NetSuite connectors available |
| Real-time data | Throughput visibility, exception alerts, inventory accuracy in real time |
| Change management capability | How configuration changes are made post-go-live; WCS upgrade complexity |

Vendor / Integrator Quality (15% default)

| Sub-criterion | What to measure |
| --- | --- |
| Financial stability | Revenue trend, profitability, ownership structure, credit rating |
| Reference quality | Comparable implementations in same industry/scale; willingness to host site visits |
| Support SLA | Response time commitments; local vs. remote support; 24/7 availability |
| Lifecycle commitment | Parts availability guarantee (years); upgrade path roadmap |
| Implementation track record | On-time/on-budget history; cited failures and how they were resolved |

Warehouse logistics automation is a 10- to 20-year operational commitment. The true measure of a systems integration partner is how effectively they support, sustain, and enhance the system over that lifecycle.

Implementation Risk (12% default)

| Sub-criterion | What to measure |
| --- | --- |
| Go-live timeline | Calendar months from contract to live operation; comparable site benchmarks |
| Parallel operation | Ability to run old and new systems simultaneously during cutover |
| Change management support | Training program structure; operator qualification process |
| Risk mitigation plan | FAT/SAT protocol; contingency plans for critical path delays |
| Software maturity | Number of live sites on current software version |

Scalability / Future-Fit (8% default)

| Sub-criterion | What to measure |
| --- | --- |
| Volume scalability | How capacity is added (modular adds vs. major investment) |
| SKU range expansion | Ability to handle new product types, dimensions, or weights |
| Software extensibility | Ability to add new carrier integrations, customer portals, analytics |
| AI / ML roadmap | Vendor investment in optimization algorithms, predictive maintenance |

Step 5: Score Each Vendor

Score each vendor on a 1–10 scale, applying the same scoring rubric consistently across all evaluators.

Bias reduction practices:

  • Use multiple independent scorers; average results
  • Score all vendors on one criterion before moving to the next (not all criteria for one vendor before moving to the next)
  • Require written justification for every score below 4 or above 8
  • Document evidence source for each score (demo date, reference call date, document reference)
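
The panel-scoring practices above reduce to a few lines of code: average each criterion across independent scorers and flag any individual score below 4 or above 8 that requires written justification. Scorer names and values below are illustrative.

```python
# Average independent scorer panels per criterion and flag scores needing written justification.
from statistics import mean

panel_scores = {            # criterion -> {scorer: score}; illustrative values
    "functional":  {"ops_lead": 8, "engineer": 9, "it_lead": 8},
    "integration": {"ops_lead": 6, "engineer": 7, "it_lead": 3},   # it_lead must justify the 3
}

for criterion, by_scorer in panel_scores.items():
    avg = mean(by_scorer.values())
    needs_note = [s for s, v in by_scorer.items() if v < 4 or v > 8]
    print(f"{criterion}: avg {avg:.1f}; written justification required from {needs_note or 'no one'}")
```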

Step 6: Calculate Weighted Scores

Weighted score = Σ (criterion weight × criterion score)

| Category | Vendor A | Vendor B | Vendor C |
| --- | --- | --- | --- |
| Functional performance (25%) | 8.2 × 0.25 = 2.05 | 7.5 × 0.25 = 1.88 | 9.0 × 0.25 = 2.25 |
| TCO (20%) | 7.0 × 0.20 = 1.40 | 9.0 × 0.20 = 1.80 | 6.5 × 0.20 = 1.30 |
| Integration (20%) | 8.5 × 0.20 = 1.70 | 6.0 × 0.20 = 1.20 | 8.0 × 0.20 = 1.60 |
| Vendor quality (15%) | 7.0 × 0.15 = 1.05 | 8.0 × 0.15 = 1.20 | 7.5 × 0.15 = 1.13 |
| Implementation risk (12%) | 7.5 × 0.12 = 0.90 | 8.5 × 0.12 = 1.02 | 6.0 × 0.12 = 0.72 |
| Scalability (8%) | 6.0 × 0.08 = 0.48 | 7.0 × 0.08 = 0.56 | 9.0 × 0.08 = 0.72 |
| Total | 7.58 | 7.66 | 7.72 |
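
The worked example reduces to a single weighted sum per vendor. The sketch below reproduces it with the category scores from the table; totals may differ from the table in the last digit because the table rounds each weighted product before summing.

```python
# Weighted score = sum(weight * score) per category, using the worked-example scores.

WEIGHTS = {"functional": 0.25, "tco": 0.20, "integration": 0.20,
           "vendor": 0.15, "risk": 0.12, "scalability": 0.08}

SCORES = {
    "Vendor A": {"functional": 8.2, "tco": 7.0, "integration": 8.5, "vendor": 7.0, "risk": 7.5, "scalability": 6.0},
    "Vendor B": {"functional": 7.5, "tco": 9.0, "integration": 6.0, "vendor": 8.0, "risk": 8.5, "scalability": 7.0},
    "Vendor C": {"functional": 9.0, "tco": 6.5, "integration": 8.0, "vendor": 7.5, "risk": 6.0, "scalability": 9.0},
}

def weighted_total(scores: dict, weights: dict) -> float:
    return sum(weights[c] * scores[c] for c in weights)

for vendor, scores in SCORES.items():
    print(f"{vendor}: {weighted_total(scores, WEIGHTS):.2f}")
# All three totals cluster within ~0.15 points -> not decisive on its own; proceed to sensitivity analysis.
```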

When scores cluster within 0.5 points, the scoring is not decisive — move to sensitivity analysis.

Step 7: Run Sensitivity Analysis

Test whether the preferred vendor changes when weights shift. If Vendor A would win under “cost-constrained” weights but Vendor C wins under “high-throughput” weights, the weight assignment is a decision, not a calculation.

Scenarios to test:

  • Shift TCO weight from 20% → 35% (cost-constrained client)
  • Shift functional performance weight from 25% → 35% (high-throughput client)
  • Double the integration weight (heavily customized IT environment)

If the same vendor wins across all scenarios: confident selection. If the winner changes: surface the trade-off to the steering committee as an explicit decision point.
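
A sensitivity pass is the same calculation repeated across the weight profiles from Step 3. The sketch below reuses the illustrative scores from the worked example; what matters is whether the printed winner changes between profiles, not the absolute totals.

```python
# Recompute weighted totals under each weight profile and report the winner per scenario.

WEIGHT_PROFILES = {
    "default":           {"functional": 0.25, "tco": 0.20, "integration": 0.20, "vendor": 0.15, "risk": 0.12, "scalability": 0.08},
    "cost_constrained":  {"functional": 0.20, "tco": 0.35, "integration": 0.15, "vendor": 0.10, "risk": 0.12, "scalability": 0.08},
    "high_throughput":   {"functional": 0.35, "tco": 0.15, "integration": 0.20, "vendor": 0.15, "risk": 0.10, "scalability": 0.05},
    "integration_heavy": {"functional": 0.20, "tco": 0.20, "integration": 0.30, "vendor": 0.15, "risk": 0.10, "scalability": 0.05},
}

SCORES = {
    "Vendor A": {"functional": 8.2, "tco": 7.0, "integration": 8.5, "vendor": 7.0, "risk": 7.5, "scalability": 6.0},
    "Vendor B": {"functional": 7.5, "tco": 9.0, "integration": 6.0, "vendor": 8.0, "risk": 8.5, "scalability": 7.0},
    "Vendor C": {"functional": 9.0, "tco": 6.5, "integration": 8.0, "vendor": 7.5, "risk": 6.0, "scalability": 9.0},
}

for profile, weights in WEIGHT_PROFILES.items():
    totals = {v: sum(weights[c] * s[c] for c in weights) for v, s in SCORES.items()}
    winner = max(totals, key=totals.get)
    print(f"{profile}: winner {winner} ({totals[winner]:.2f})")
# If the winner changes across profiles, escalate the weight choice to the steering committee as an explicit decision.
```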

Step 8: Validate with References and Scenarios

Reference checks (structured, not casual):

  • Ask references specifically about go-live experience, not steady-state operation
  • “What problems arose during implementation and how were they resolved?”
  • “What would you do differently if you were selecting again?”
  • “What is the vendor’s response time when you have a critical system issue?”

Scenario-based evaluation: Require vendors to demonstrate system behavior in exception scenarios, not just nominal operation:

  • What happens when a tote arrives at the wrong port?
  • How does the system handle a wave release during a mid-shift system update?
  • What is the manual override procedure when the WCS is unavailable?

Feature coverage alone doesn’t separate vendors. Scenario-driven evaluation does.

Technology Selection Matrix

Use this matrix before scoring individual vendors to confirm which technology category fits the use case.

| Factor | AS/RS Stacker Crane | Shuttle System | AMR / AGV Fleet |
| --- | --- | --- | --- |
| Storage density | High | Very high | Low–medium |
| Peak throughput | Medium | Very high | Medium |
| Scalability | Low (major investment to add aisles) | High (add shuttles/aisles modularly) | High (add robots) |
| System complexity | Low | Medium–high | Medium |
| Maintenance friendliness | High | Medium | Medium |
| Infrastructure requirements | Very high (floor, power, clear height) | High | Low–medium |
| SKU flexibility | Medium | High | High |
| Best fit | Single SKU, high volume, stable demand | High SKU count, high throughput, variable orders | Mixed task, flexible operation, lower capital |

Source: warehouserack.cn Automated Warehouse Equipment Selection Guide

Common Selection Errors

| Error | Effect | Prevention |
| --- | --- | --- |
| Feature coverage as primary criterion | All WMS/WCS vendors pass; no differentiation | Use scenario-driven evaluation and exception testing |
| Weighting by committee without debate | Weights reflect politics, not priorities | Facilitate an explicit weight-setting session with stakeholders |
| TCO limited to hardware cost | True cost is 2–4× hardware alone | Use a full 10-year TCO model with all cost layers |
| No knockout criteria defined | Disqualifying weaknesses discovered late | Publish knockout criteria in the RFP; enforce them |
| Single scorer | Individual bias drives the decision | Multi-scorer panel; average results; document disagreements |
| References only from the vendor-provided list | Survivor bias; only success stories | Independently verify references and seek additional ones beyond the vendor’s list |

Integration with the Consulting Engagement

This scoring matrix is typically produced in Phase 4 (Detailed Assessment) of a consulting engagement. Inputs come from:

  • Phase 2 (Data Collection): throughput requirements, SKU profile, building specs
  • Phase 3 (Framework Options): technology categories already narrowed to 2–3 viable types
  • RFP responses and vendor demos (during Phase 4)

The matrix output feeds Phase 5 (Assessment Conclusion) as the recommendation evidence base. See Supply Chain Consulting Engagement Process.
