Automation Selection Scoring Matrix
A structured methodology for evaluating and selecting logistics automation solutions — AS/RS, AGV/AMR fleets, WMS, conveyor/sortation, and integrated systems — against a weighted set of criteria. Separates objective comparison from political pressure and produces a defensible, auditable decision.
System architecture must be defined before equipment selection. Evaluating AS/RS cranes vs. shuttle systems vs. AMRs without first defining SKU structure, throughput requirements, building constraints, and software integration needs is a purchasing decision, not an engineering decision.
The Eight-Step Methodology
Step 1: Define Requirements Before Evaluating Vendors
The requirements document precedes the RFP. It specifies:
- Throughput requirements (units/hour by period: average, design day, peak)
- SKU profile (count, velocity distribution, dimensions, weight range)
- Order profile (lines per order, order mix, B2B vs. B2C, SLA targets)
- Building constraints (clear height, column spacing, floor spec, power capacity, footprint)
- Integration requirements (WMS/WCS/ERP/TMS ecosystem, protocols required)
- Growth horizon (volume and SKU growth rate for 5- and 10-year scenarios)
- Budget parameters (CapEx ceiling, OpEx constraints, payback target)
- Operational constraints (labor availability, maintenance capability, uptime requirement)
A requirements document that doesn’t exist before the RFP will be written implicitly by the vendors responding — which means each vendor defines requirements in the way that favors their solution.
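One way to pin the baseline down, sketched below with hypothetical field names and values, is to capture the requirements document as structured data that the RFP and the later scoring steps can reference.

```python
from dataclasses import dataclass

@dataclass
class AutomationRequirements:
    """Requirements baseline locked down before the RFP (illustrative fields only)."""
    throughput_avg_uph: int        # average units/hour
    throughput_peak_uph: int       # design-day peak units/hour
    sku_count: int
    clear_height_m: float          # building clear height
    wms: str                       # WMS the new system must integrate with
    capex_ceiling: float           # budget ceiling, one currency throughout
    payback_target_years: float
    uptime_requirement: float      # e.g. 0.98 = 98% availability
    growth_horizon_years: int = 10

# Hypothetical example values, not project data
reqs = AutomationRequirements(
    throughput_avg_uph=2_400, throughput_peak_uph=4_000, sku_count=18_000,
    clear_height_m=11.5, wms="ExistingWMS", capex_ceiling=6_500_000,
    payback_target_years=5.0, uptime_requirement=0.98,
)
```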
Step 2: Identify Knockout Criteria
Knockout criteria eliminate candidates regardless of weighted score. Failure on a single knockout = disqualification.
Common knockouts for logistics automation:
| Knockout criterion | What triggers elimination |
|---|---|
| Throughput floor | System cannot demonstrate capacity at required peak units/hour |
| Integration protocol | System cannot integrate with existing WMS via API or standard interface within project timeline |
| Floor requirement | System requires floor spec beyond what the building can achieve (FF100+ on FF50 floor) |
| Clear height | System requires clearances that do not exist and cannot be created |
| Financial stability | Vendor cannot provide audited financials or shows distress signals (layoffs, customer attrition) |
| Reference availability | Vendor cannot provide 3+ customer references at comparable scale within the last 3 years |
| Uptime SLA | Vendor cannot contractually commit to ≥98% system availability with defined remedies |
Knockout criteria should be published in the RFP so vendors can self-screen. Discovering a knockout at the demo stage wastes everyone’s time.
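A simple pass/fail screen keeps knockouts mechanically separate from the weighted scoring that follows; the criterion keys and vendor flags below are hypothetical, not a fixed schema.

```python
# Knockout screen: one failed criterion disqualifies a vendor outright,
# regardless of how well it would score on the weighted matrix.
KNOCKOUTS = [
    "meets_peak_throughput", "integrates_with_wms", "floor_spec_achievable",
    "clear_height_ok", "audited_financials", "three_comparable_references",
    "uptime_sla_98_pct",
]

def knockout_screen(vendors: dict[str, dict[str, bool]]) -> dict[str, list[str]]:
    """Return the knockout criteria each vendor fails (empty list = passes)."""
    return {name: [k for k in KNOCKOUTS if not flags.get(k, False)]
            for name, flags in vendors.items()}

# Hypothetical RFP responses
responses = {
    "Vendor A": dict.fromkeys(KNOCKOUTS, True),
    "Vendor B": {**dict.fromkeys(KNOCKOUTS, True), "uptime_sla_98_pct": False},
}
for name, failed in knockout_screen(responses).items():
    print(name, "disqualified on:" if failed else "passes all knockouts", failed or "")
```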
Step 3: Define Evaluation Criteria and Weights
Organize criteria into six categories. Weights should reflect the client’s specific priority context — the table below shows a balanced default with adjustments for three common priority profiles.
Default weight allocation:
| Category | Default | Cost-constrained | High-throughput | Integration-heavy |
|---|---|---|---|---|
| Functional performance | 25% | 20% | 35% | 20% |
| Total cost of ownership | 20% | 35% | 15% | 20% |
| Integration capability | 20% | 15% | 20% | 30% |
| Vendor / integrator quality | 15% | 10% | 15% | 15% |
| Implementation risk | 12% | 12% | 10% | 10% |
| Scalability / future-fit | 8% | 8% | 5% | 5% |
| Total | 100% | 100% | 100% | 100% |
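The profiles above translate directly into a small lookup that can be validated before scoring starts; this sketch simply restates the table and checks that each profile still sums to 100% after any client-specific adjustment.

```python
WEIGHT_PROFILES = {
    "default":           {"functional": 0.25, "tco": 0.20, "integration": 0.20,
                          "vendor_quality": 0.15, "implementation_risk": 0.12, "scalability": 0.08},
    "cost_constrained":  {"functional": 0.20, "tco": 0.35, "integration": 0.15,
                          "vendor_quality": 0.10, "implementation_risk": 0.12, "scalability": 0.08},
    "high_throughput":   {"functional": 0.35, "tco": 0.15, "integration": 0.20,
                          "vendor_quality": 0.15, "implementation_risk": 0.10, "scalability": 0.05},
    "integration_heavy": {"functional": 0.20, "tco": 0.20, "integration": 0.30,
                          "vendor_quality": 0.15, "implementation_risk": 0.10, "scalability": 0.05},
}

# Catch weight sets that drift away from 100% when a client adjusts them
for profile, weights in WEIGHT_PROFILES.items():
    assert abs(sum(weights.values()) - 1.0) < 1e-9, f"{profile} weights do not sum to 100%"
```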
Step 4: Define Sub-Criteria Within Each Category
Functional Performance (25% default)
| Sub-criterion | What to measure | Score guidance |
|---|---|---|
| Peak throughput capacity | Units/hour at design peak | 10 = exceeds requirement by ≥20%; 1 = fails to meet requirement |
| Accuracy / quality | Pick accuracy SLA; error rate | 10 = 99.9%+; 5 = 99.5%; 1 = <99% |
| Uptime / availability | Contractual system availability | 10 = ≥99.5%; 7 = ≥98%; 3 = <97% |
| Flexibility | Ability to handle SKU variation, order mix change | Qualitative; scenario-tested |
| Ergonomics | Operator interface design; port height; exception handling ease | Qualitative; site visit |
Total Cost of Ownership (20% default)
TCO must be calculated over a consistent horizon (typically 10 years) and include all cost layers:
| Cost layer | Included items |
|---|---|
| CapEx | Equipment, installation, infrastructure modifications, IT hardware |
| Integration labor | WMS/WCS/ERP integration development; commissioning |
| Training | Initial and ongoing operator and maintenance training |
| Annual maintenance | Preventive maintenance contracts; spare parts |
| Software licensing | Annual WCS/WMS fees; support contracts |
| Labor delta | Labor cost change (positive or negative) vs. current state |
| End-of-life | Decommissioning cost, module upgrade paths |
Score each vendor’s TCO on a relative basis: Score = (Best vendor TCO / This vendor TCO) × 10
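A minimal sketch of that relative score, assuming each vendor's 10-year TCO has already been rolled up from the cost layers above (the cost figures are hypothetical):

```python
def tco_score(tco_by_vendor: dict[str, float]) -> dict[str, float]:
    """Relative TCO score: (best vendor TCO / this vendor TCO) x 10, so the lowest TCO scores 10."""
    best = min(tco_by_vendor.values())
    return {vendor: round(best / tco * 10, 1) for vendor, tco in tco_by_vendor.items()}

# Hypothetical 10-year totals (CapEx + integration + training + maintenance
# + licensing + labor delta + end-of-life), same horizon and currency for all
print(tco_score({"Vendor A": 11_400_000, "Vendor B": 9_100_000, "Vendor C": 12_600_000}))
# Lowest-cost Vendor B scores 10.0; the other two scale down proportionally
```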
Integration Capability (20% default)
| Sub-criterion | What to measure |
|---|---|
| WMS compatibility | Native integrations with the client’s WMS; custom development required |
| API / EDI maturity | REST API availability; EDI protocol support; documented API |
| VDA 5050 (AGV/AMR) | Compliance with VDA 5050 v2.0 for multi-vendor fleet management |
| ERP connectivity | SAP/Oracle/NetSuite connectors available |
| Real-time data | Throughput visibility, exception alerts, inventory accuracy in real time |
| Change management capability | How configuration changes are made post-go-live; WCS upgrade complexity |
Vendor / Integrator Quality (15% default)
| Sub-criterion | What to measure |
|---|---|
| Financial stability | Revenue trend, profitability, ownership structure, credit rating |
| Reference quality | Comparable implementations in same industry/scale; willingness to do site visits |
| Support SLA | Response time commitments; local vs. remote support; 24/7 availability |
| Lifecycle commitment | Parts availability guarantee (years); upgrade path roadmap |
| Implementation track record | On-time/on-budget history; cited failures and how they were resolved |
Warehouse logistics automation is an operational commitment of 10 to 20 years. The true measure of a systems integration partner is how well they support, maintain, and enhance the system over that lifecycle.
Implementation Risk (12% default)
| Sub-criterion | What to measure |
|---|---|
| Go-live timeline | Calendar months from contract to live operation; comparable site benchmarks |
| Parallel operation | Ability to run old and new systems simultaneously during cutover |
| Change management support | Training program structure; operator qualification process |
| Risk mitigation plan | FAT/SAT protocol; contingency plans for critical path delays |
| Software maturity | Number of live sites on current software version |
Scalability / Future-Fit (8% default)
| Sub-criterion | What to measure |
|---|---|
| Volume scalability | How capacity is added (modular adds vs. major investment) |
| SKU range expansion | Ability to handle new product types, dimensions, or weights |
| Software extensibility | Ability to add new carrier integrations, customer portals, analytics |
| AI / ML roadmap | Vendor investment in optimization algorithms, predictive maintenance |
Step 5: Score Each Vendor
Use a 1–10 scale. Apply the same scoring rubric consistently across all evaluators.
Bias reduction practices:
- Use multiple independent scorers; average results
- Score all vendors on one criterion before moving to the next criterion (rather than scoring all criteria for one vendor at a time)
- Require written justification for every score below 4 or above 8
- Document evidence source for each score (demo date, reference call date, document reference)
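A consolidation step along these lines, with hypothetical evaluator names and scores, averages the independent scores and flags any that require the written justification described above.

```python
from statistics import mean

def consolidate(panel_scores: dict[str, float]) -> tuple[float, list[str]]:
    """Average independent evaluator scores; flag scores below 4 or above 8 for written justification."""
    flagged = [evaluator for evaluator, score in panel_scores.items() if score < 4 or score > 8]
    return round(mean(panel_scores.values()), 2), flagged

# Hypothetical panel scoring one vendor on one criterion
average, needs_justification = consolidate({"evaluator_1": 7.0, "evaluator_2": 8.5, "evaluator_3": 6.5})
print(average, needs_justification)   # 7.33 -- evaluator_2 must document the score above 8
```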
Step 6: Calculate Weighted Scores
Weighted score = Σ (criterion weight × criterion score)
| Category | Vendor A | Vendor B | Vendor C |
|---|---|---|---|
| Functional performance (25%) | 8.2 × 0.25 = 2.05 | 7.5 × 0.25 = 1.88 | 9.0 × 0.25 = 2.25 |
| TCO (20%) | 7.0 × 0.20 = 1.40 | 9.0 × 0.20 = 1.80 | 6.5 × 0.20 = 1.30 |
| Integration (20%) | 8.5 × 0.20 = 1.70 | 6.0 × 0.20 = 1.20 | 8.0 × 0.20 = 1.60 |
| Vendor quality (15%) | 7.0 × 0.15 = 1.05 | 8.0 × 0.15 = 1.20 | 7.5 × 0.15 = 1.13 |
| Implementation risk (12%) | 7.5 × 0.12 = 0.90 | 8.5 × 0.12 = 1.02 | 6.0 × 0.12 = 0.72 |
| Scalability (8%) | 6.0 × 0.08 = 0.48 | 7.0 × 0.08 = 0.56 | 9.0 × 0.08 = 0.72 |
| Total | 7.58 | 7.66 | 7.72 |
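The totals in this table can be reproduced with a short script; each per-category term is rounded to two decimals (half up) exactly as shown, and all scores are the same hypothetical worked example.

```python
from decimal import Decimal, ROUND_HALF_UP

WEIGHTS = {"functional": 0.25, "tco": 0.20, "integration": 0.20,
           "vendor_quality": 0.15, "implementation_risk": 0.12, "scalability": 0.08}

SCORES = {
    "Vendor A": {"functional": 8.2, "tco": 7.0, "integration": 8.5,
                 "vendor_quality": 7.0, "implementation_risk": 7.5, "scalability": 6.0},
    "Vendor B": {"functional": 7.5, "tco": 9.0, "integration": 6.0,
                 "vendor_quality": 8.0, "implementation_risk": 8.5, "scalability": 7.0},
    "Vendor C": {"functional": 9.0, "tco": 6.5, "integration": 8.0,
                 "vendor_quality": 7.5, "implementation_risk": 6.0, "scalability": 9.0},
}

def weighted_total(scores: dict[str, float]) -> Decimal:
    """Sum of (weight x category score), each term rounded to 2 decimals as in the table."""
    total = Decimal("0")
    for category, weight in WEIGHTS.items():
        term = Decimal(str(weight)) * Decimal(str(scores[category]))
        total += term.quantize(Decimal("0.01"), ROUND_HALF_UP)
    return total

for vendor, category_scores in SCORES.items():
    print(vendor, weighted_total(category_scores))   # 7.58, 7.66, 7.72
```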
When scores cluster within 0.5 points, the scoring is not decisive — move to sensitivity analysis.
Step 7: Sensitivity Analysis
Test whether the preferred vendor changes when weights shift. If Vendor A would win under “cost-constrained” weights but Vendor C wins under “high-throughput” weights, the weight assignment is a decision, not a calculation.
Scenarios to test:
- Shift TCO weight from 20% → 35% (cost-constrained client)
- Shift functional performance weight from 25% → 35% (high-throughput client)
- Double the integration weight (heavily customized IT environment)
If the same vendor wins across all scenarios: confident selection. If the winner changes: surface the trade-off to the steering committee as an explicit decision point.
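Sensitivity testing amounts to re-ranking the same scores under each weight profile. The sketch below reuses the hypothetical Step 6 scores and the weight profiles from Step 3.

```python
SCORES = {  # category scores from the Step 6 worked example (hypothetical)
    "Vendor A": {"functional": 8.2, "tco": 7.0, "integration": 8.5,
                 "vendor_quality": 7.0, "implementation_risk": 7.5, "scalability": 6.0},
    "Vendor B": {"functional": 7.5, "tco": 9.0, "integration": 6.0,
                 "vendor_quality": 8.0, "implementation_risk": 8.5, "scalability": 7.0},
    "Vendor C": {"functional": 9.0, "tco": 6.5, "integration": 8.0,
                 "vendor_quality": 7.5, "implementation_risk": 6.0, "scalability": 9.0},
}
PROFILES = {  # weight profiles from Step 3
    "default":          {"functional": 0.25, "tco": 0.20, "integration": 0.20,
                         "vendor_quality": 0.15, "implementation_risk": 0.12, "scalability": 0.08},
    "cost_constrained": {"functional": 0.20, "tco": 0.35, "integration": 0.15,
                         "vendor_quality": 0.10, "implementation_risk": 0.12, "scalability": 0.08},
    "high_throughput":  {"functional": 0.35, "tco": 0.15, "integration": 0.20,
                         "vendor_quality": 0.15, "implementation_risk": 0.10, "scalability": 0.05},
}

def ranking(weights: dict[str, float]) -> list[tuple[str, float]]:
    totals = {v: round(sum(weights[c] * s[c] for c in weights), 2) for v, s in SCORES.items()}
    return sorted(totals.items(), key=lambda item: item[1], reverse=True)

for profile, weights in PROFILES.items():
    print(profile, ranking(weights))
# With these example scores, Vendor C leads under the default and high-throughput
# profiles but Vendor B wins when TCO is weighted at 35% -- a trade-off to surface
# to the steering committee rather than bury in the weight assignment.
```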
Step 8: Validate with References and Scenarios
Reference checks (structured, not casual):
- Ask references specifically about go-live experience, not steady-state operation
- “What problems arose during implementation and how were they resolved?”
- “What would you do differently if you were selecting again?”
- “What is the vendor’s response time when you have a critical system issue?”
Scenario-based evaluation: Require vendors to demonstrate system behavior in exception scenarios, not just nominal operation:
- What happens when a tote arrives at the wrong port?
- How does the system handle a wave release during a mid-shift system update?
- What is the manual override procedure when the WCS is unavailable?
Feature coverage alone doesn’t separate vendors. Scenario-driven evaluation does.
Technology-Type Comparison Matrix
Use before scoring individual vendors to confirm which technology category fits the use case.
| Factor | AS/RS Stacker Crane | Shuttle System | AMR / AGV Fleet |
|---|---|---|---|
| Storage density | High | Very high | Low–medium |
| Peak throughput | Medium | Very high | Medium |
| Scalability | Low (major investment to add aisles) | High (add shuttles/aisles modularly) | High (add robots) |
| System complexity | Low | Medium–high | Medium |
| Maintenance friendliness | High | Medium | Medium |
| Infrastructure requirements | Very high (floor, power, clear height) | High | Low–medium |
| SKU flexibility | Medium | High | High |
| Best fit | Single SKU, high volume, stable demand | High SKU count, high throughput, variable orders | Mixed task, flexible operation, lower capital |
Source: warehouserack.cn Automated Warehouse Equipment Selection Guide
Common Scoring Errors
| Error | Effect | Prevention |
|---|---|---|
| Feature coverage as primary criterion | All WMS/WCS vendors pass; no differentiation | Use scenario-driven evaluation and exception testing |
| Weighting by committee without debate | Weights reflect politics, not priorities | Facilitate explicit weight-setting session with stakeholders |
| TCO limited to hardware cost | True cost is 2–4× hardware alone | Use full 10-year TCO model with all cost layers |
| No knockout criteria defined | Disqualifying weaknesses discovered late | Publish knockout criteria in RFP; enforce them |
| Single scorer | Individual bias drives decision | Multi-scorer panel; average results; document disagreements |
| Vendor references only from vendor-provided list | Survivor bias; only success stories | Independently verify provided references and seek additional ones outside the supplied list |
Integration with the Consulting Engagement
This scoring matrix is typically produced in Phase 4 (Detailed Assessment) of a consulting engagement. Inputs come from:
- Phase 2 (Data Collection): throughput requirements, SKU profile, building specs
- Phase 3 (Framework Options): technology categories already narrowed to 2–3 viable types
- RFP responses and vendor demos (during Phase 4)
The matrix output feeds Phase 5 (Assessment Conclusion) as the recommendation evidence base. See Supply Chain Consulting Engagement Process.
See Also
- Automation TCO and Sensitivity Analysis — full 11-phase selection lifecycle, vendor landscape (WMS/WES/WCS/AMR/AS/RS), scripted demo protocol, 12 commercial negotiation levers