QoS problems are insidious. The network is “up,” ping times look fine, and yet your Webex calls sound like the other person is talking through a fan, and video meetings freeze every 30 seconds. Nine times out of ten the root cause isn’t bandwidth — it’s the absence of a coherent Quality of Service policy. I’ve seen this exact scenario play out across enterprise networks, branch offices, and even service provider handoffs. The fix is always the same: implement a proper MQC QoS policy that actually matches and prioritizes your real-time traffic.
This guide is the reference I wish I’d had years ago. We’ll cover the full stack: DSCP markings, classification with NBAR2 and ACLs, Low-Latency Queuing (LLQ) for voice, Class-Based Weighted Fair Queuing (CBWFQ) for video and critical data, and the show commands you need to confirm everything is functioning. All examples use Cisco IOS and IOS-XE syntax — tested on Catalyst 9000 and ISR 4000 series routers. If you’re not sure which IOS variant you’re running, check out our Cisco IOS vs IOS-XE vs IOS-XR comparison first.
Why QoS Matters More Than Bandwidth
It’s tempting to solve voice/video problems by throwing more bandwidth at them. Sometimes that works, but it masks the real issue. Real-time traffic — voice, video conferencing, trading platforms — is uniquely sensitive to three things:
- Latency: One-way delay should stay under 150ms for voice (ITU G.114). Beyond that, conversations feel unnatural.
- Jitter: Variation in packet arrival times. A de-jitter buffer can compensate up to ~30ms, but beyond that, packets get dropped at the codec level.
- Packet loss: Even 1–2% loss causes noticeable audio degradation with most codecs. G.711 is especially sensitive.
A congested interface treats all traffic equally. A large FTP transfer or backup job can fill your egress queue and introduce hundreds of milliseconds of delay for your VoIP packets waiting their turn. QoS solves this by giving voice its own dedicated queue that skips the line.
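As a rough illustration, the three thresholds above can be encoded as a simple health gate. This is a sketch, not a Cisco tool: the `voice_path_ok` helper and its limits come straight from the bullets in this section.

```python
# Sketch: gate a voice path on the three impairment thresholds described above.
# Limits (150 ms one-way delay, ~30 ms jitter, ~1% loss) are the article's
# guidelines; tune them for your codec and SLA.

def voice_path_ok(one_way_delay_ms: float, jitter_ms: float, loss_pct: float) -> list:
    """Return a list of threshold violations (empty list means voice-clean)."""
    problems = []
    if one_way_delay_ms > 150:
        problems.append(f"latency {one_way_delay_ms} ms exceeds 150 ms (ITU-T G.114)")
    if jitter_ms > 30:
        problems.append(f"jitter {jitter_ms} ms exceeds the ~30 ms de-jitter buffer budget")
    if loss_pct > 1.0:
        problems.append(f"loss {loss_pct}% exceeds the ~1% codec tolerance")
    return problems

assert voice_path_ok(120, 12, 0.2) == []       # healthy path
assert len(voice_path_ok(180, 40, 2.5)) == 3   # all three thresholds blown
```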
The MQC Framework: Class Maps, Policy Maps, Service Policies
Cisco’s Modular QoS CLI (MQC) has three building blocks:
- Class Map: Defines what traffic to match (DSCP values, ACLs, NBAR2 protocol, etc.)
- Policy Map: Defines what to do with matched traffic (queue, shape, police, mark)
- Service Policy: Applies the policy map to an interface in a direction (input or output)
This separation is powerful — you can reuse the same class maps across multiple policy maps, and the same policy map across many interfaces.
Step 1: DSCP — Your Traffic Marking Strategy
Before you can queue traffic intelligently, you need to mark it. DSCP (Differentiated Services Code Point) is the standard 6-bit marking in the IP header. The key values you’ll use most:
DSCP Value   PHB Name        Use Case
──────────── ─────────────── ─────────────────────────────────────────
EF (46)      Expedited Fwd   VoIP bearer (RTP audio streams)
CS5 (40)     Class Selector  VoIP signaling (SIP, H.323, SCCP)
AF41 (34)    Assured Fwd     Interactive video (Webex, Teams video)
AF31 (26)    Assured Fwd     Streaming video (legacy call signaling)
CS3 (24)     Class Selector  Network management (SNMP, syslog, NTP)
AF21 (18)    Assured Fwd     Critical data, transactional apps
CS1 (8)      Scavenger       Bulk data, backups, torrents
BE (0)       Best Effort     Everything else
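For scripting and packet-capture work, the same plan can live in code. A minimal sketch (the `DSCP` dict and `dscp_to_tos` helper are illustrative): in a capture, the 6-bit DSCP appears shifted left two bits into the old ToS byte.

```python
# Sketch: the marking plan above as a lookup table. DSCP occupies the top
# 6 bits of the ToS byte, so EF (46) shows up as ToS 0xB8 (184) in captures.

DSCP = {
    "ef": 46, "cs5": 40, "af41": 34, "af31": 26,
    "cs3": 24, "af21": 18, "cs1": 8, "default": 0,
}

def dscp_to_tos(dscp: int) -> int:
    """Shift the 6-bit DSCP value into the 8-bit ToS byte (ECN bits zero)."""
    return dscp << 2

assert dscp_to_tos(DSCP["ef"]) == 0xB8     # EF  -> ToS 184
assert dscp_to_tos(DSCP["af41"]) == 0x88   # AF41 -> ToS 136
```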
The key principle: mark as close to the source as possible, but trust only what you control. Re-mark untrusted traffic at your network edge (CPE or distribution switch ingress).
Marking at the Edge with an Input Policy
If your phones set EF themselves, trust them. If not — or if you’re dealing with a third-party UC platform — re-mark on ingress:
! Match SIP signaling and RTP voice traffic by ACL or NBAR2
ip access-list extended ACL-VOIP-SIGNAL
permit tcp any any eq 5060
permit tcp any any eq 5061
permit udp any any eq 5060
class-map match-any CM-VOICE-SIGNAL
match access-group name ACL-VOIP-SIGNAL
match protocol sip
class-map match-any CM-VOICE-BEARER
match protocol rtp audio
match dscp ef
class-map match-any CM-VIDEO
match protocol webex-meeting
match protocol ms-lync-video
match dscp af41
! Input policy: re-mark everything on ingress from untrusted segment
policy-map PM-INGRESS-MARKING
class CM-VOICE-BEARER
set dscp ef
class CM-VOICE-SIGNAL
set dscp cs5
class CM-VIDEO
set dscp af41
class class-default
set dscp default
Apply to the ingress interface toward your UC platform or WAN handoff:
interface GigabitEthernet0/0/1
description WAN-HANDOFF
service-policy input PM-INGRESS-MARKING
Step 2: Classification for the Output Queue Policy
Your output (egress) policy is where you actually control queuing behavior. You need class maps that match the DSCP values you just set (or that your infrastructure already marks correctly):
! Voice bearer — EF
class-map match-any CM-OUT-VOICE
match dscp ef
! Signaling and network management — CS5, CS3
class-map match-any CM-OUT-SIGNAL
match dscp cs5
match dscp cs3
! Interactive video — AF41, AF42, AF43
class-map match-any CM-OUT-VIDEO
match dscp af41
match dscp af42
match dscp af43
! Critical data — AF31, AF21
class-map match-any CM-OUT-CRITICAL
match dscp af31
match dscp af21
! Scavenger / bulk — CS1
class-map match-any CM-OUT-SCAVENGER
match dscp cs1
Step 3: The Output Policy Map — LLQ + CBWFQ
This is where the magic happens. The output policy combines:
- LLQ (Low-Latency Queuing) via the priority keyword — gives voice a strict-priority queue. Packets in this class skip all others.
- CBWFQ via the bandwidth or bandwidth percent keywords — guarantees minimum bandwidth shares for other traffic classes during congestion.
- class-default gets whatever's left.
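To make the percentages concrete, here is a rough sketch of what the shares used in this guide's PM-EGRESS-QOS policy guarantee at a given link speed. The `POLICY` dict mirrors the policy map; `guaranteed_kbps` is an illustrative helper, not a Cisco API.

```python
# Sketch: translate the percent-based policy below into per-class kbps
# guarantees. Class names and shares mirror PM-EGRESS-QOS in this article.

POLICY = {                     # class -> guaranteed share of the link (percent)
    "CM-OUT-VOICE": 30,        # LLQ: strict priority, capped at 30%
    "CM-OUT-SIGNAL": 5,
    "CM-OUT-VIDEO": 25,
    "CM-OUT-CRITICAL": 20,
    "CM-OUT-SCAVENGER": 5,
    "class-default": 15,
}

def guaranteed_kbps(link_kbps: int) -> dict:
    """Translate percent shares into per-class kbps guarantees."""
    assert sum(POLICY.values()) <= 100, "percent allocations exceed the link"
    return {cls: link_kbps * pct // 100 for cls, pct in POLICY.items()}

shares = guaranteed_kbps(100_000)          # a 100 Mbps link
assert shares["CM-OUT-VOICE"] == 30_000    # 30 Mbps priority queue
assert shares["class-default"] == 15_000
```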
policy-map PM-EGRESS-QOS
!
! ── VOICE: Strict-priority LLQ ──
! Never exceed 30% of link for voice bearer to prevent starvation
class CM-OUT-VOICE
priority percent 30
! Optional: explicit policer caps voice at all times, so it can never
! starve other classes (priority percent already implies a policer
! during congestion; check your platform allows combining the two)
police rate percent 30
conform-action transmit
exceed-action drop
!
! ── SIGNALING: Small bandwidth guarantee ──
class CM-OUT-SIGNAL
bandwidth percent 5
!
! ── VIDEO: Guaranteed bandwidth, WRED for drop optimization ──
class CM-OUT-VIDEO
bandwidth percent 25
random-detect dscp-based
!
! ── CRITICAL DATA ──
class CM-OUT-CRITICAL
bandwidth percent 20
random-detect dscp-based
!
! ── SCAVENGER: minimal guarantee (bandwidth sets a floor, not a cap;
! add a policer if you need a hard limit; tail drop is the MQC default) ──
class CM-OUT-SCAVENGER
bandwidth percent 5
!
! ── BEST EFFORT / DEFAULT: Gets remaining bandwidth ──
class class-default
bandwidth percent 15
random-detect
!
Apply to the egress WAN or uplink interface:
interface GigabitEthernet0/0/1
description WAN-HANDOFF
service-policy output PM-EGRESS-QOS
Important sizing note: Keep your LLQ (voice priority class) at or below 30–33% of total link bandwidth. If you over-provision it, CBWFQ classes can starve during sustained voice bursts. Also note: IOS-XE on Catalyst 9000 uses the same MQC syntax, but hardware queues on the switch ASIC may limit you to a fixed number of queues — always check your platform’s QoS datasheet.
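That sizing rule is easy to wire into a pre-deployment check. A minimal sketch, assuming a hypothetical `check_llq` helper that enforces the 30-33% guideline and total-allocation sanity:

```python
# Sketch: pre-deployment sanity check for the LLQ sizing rule above.

def check_llq(priority_pct: int, cbwfq_pcts: list, max_llq_pct: int = 33) -> None:
    """Raise if the LLQ share breaks the 30-33% rule or the total oversubscribes."""
    if priority_pct > max_llq_pct:
        raise ValueError(
            f"LLQ at {priority_pct}% exceeds {max_llq_pct}%: "
            "CBWFQ classes can starve during sustained voice bursts"
        )
    total = priority_pct + sum(cbwfq_pcts)
    if total > 100:
        raise ValueError(f"total allocation {total}% exceeds the link")

check_llq(30, [5, 25, 20, 5, 15])    # the policy in this guide passes
# check_llq(60, [5, 25, 20, 5, 15])  # would raise: LLQ over-provisioned
```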
NBAR2: Classification Without ACLs
NBAR2 (Next-Generation Network-Based Application Recognition) can identify application traffic by deep packet inspection without requiring you to maintain ACL entries for every port and protocol. On IOS-XE 16.x+:
! Enable NBAR2 protocol discovery (traffic statistics) on the interface;
! a class-map "match protocol" statement activates NBAR2 classification on its own
interface GigabitEthernet0/0/1
ip nbar protocol-discovery
! Now you can match directly in class-maps:
class-map match-any CM-COLLAB
match protocol webex-meeting
match protocol cisco-jabber
match protocol ms-teams
match protocol zoom
class-map match-any CM-BULK
match protocol bittorrent
match protocol dropbox
match protocol onedrive-sync
NBAR2 protocol packs are updated independently of IOS-XE. Check your current pack:
Router# show ip nbar protocol-pack active
Protocol Pack Name: Advanced
Protocol Pack Version: 44.0.0
Protocol Pack Compiled for IOS XE Version: 17.6
Update via ip nbar protocol-pack bootflash:pp-adv-xe.17.06.00.pack after downloading from Cisco.com.
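For fleet audits, the pack version can be scraped from that show output. A small sketch, where `pack_version` is an illustrative helper and `SAMPLE` reproduces the example output above:

```python
# Sketch: parse "show ip nbar protocol-pack active" output so an automation
# job can alert on stale protocol packs.
import re

SAMPLE = """\
Protocol Pack Name: Advanced
Protocol Pack Version: 44.0.0
Protocol Pack Compiled for IOS XE Version: 17.6
"""

def pack_version(show_output: str) -> tuple:
    """Extract the protocol pack version as a comparable tuple of ints."""
    m = re.search(r"Protocol Pack Version:\s+([\d.]+)", show_output)
    if not m:
        raise ValueError("no protocol pack version found in output")
    return tuple(int(part) for part in m.group(1).split("."))

assert pack_version(SAMPLE) == (44, 0, 0)
```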
Verification: Show Commands You Actually Need
This section is worth bookmarking on its own. These are the commands I run every time I troubleshoot a QoS issue.
Check the Policy is Applied and Matching
Router# show policy-map interface GigabitEthernet0/0/1
GigabitEthernet0/0/1
Service-policy output: PM-EGRESS-QOS
Class-map: CM-OUT-VOICE (match-any)
0 packets, 0 bytes
5 minute offered rate 0000 bps, drop rate 0000 bps
Match: dscp ef (46)
Priority: 30% (30000 kbps), burst bytes 750000, b/w exceed drops: 0
Class-map: CM-OUT-VIDEO (match-any)
14523 packets, 18234512 bytes
5 minute offered rate 2847000 bps, drop rate 0000 bps
Match: dscp af41 (34)
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 0/0/0
(pkts output/bytes output) 14523/18234512
bandwidth 25% (25000 kbps)
Class-map: class-default (match-any)
45821 packets, 32145678 bytes
5 minute offered rate 4123000 bps, drop rate 0000 bps
Match: any
Queueing
queue limit 64 packets
(queue depth/total drops/no-buffer drops) 3/0/0
(pkts output/bytes output) 45821/32145678
bandwidth 15% (15000 kbps)
Key things to look for: drop rate on your voice class (should be 0), queue depth persistently above 0 (indicates sustained congestion), and b/w exceed drops on the LLQ (means you’ve over-provisioned voice — police is kicking in).
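Those counters are also easy to scrape programmatically, for instance from output collected with Netmiko. A rough sketch; `class_drop_rates` is a hypothetical helper keyed to the "Class-map:" and "drop rate" lines in the sample output above:

```python
# Sketch: map class-map name -> drop rate (bps) from show policy-map interface
# text, so a monitoring job can alert when the voice class starts dropping.
import re

def class_drop_rates(show_output: str) -> dict:
    """Map class-map name to its most recent 'drop rate' value in bps."""
    rates, current = {}, None
    for line in show_output.splitlines():
        cls = re.search(r"Class-map:\s+(\S+)", line)
        if cls:
            current = cls.group(1)
            continue
        drop = re.search(r"drop rate (\d+) bps", line)
        if drop and current:
            rates[current] = int(drop.group(1))
    return rates

sample = """\
    Class-map: CM-OUT-VOICE (match-any)
      5 minute offered rate 0000 bps, drop rate 0000 bps
    Class-map: class-default (match-any)
      5 minute offered rate 4123000 bps, drop rate 0000 bps
"""
assert class_drop_rates(sample) == {"CM-OUT-VOICE": 0, "class-default": 0}
```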
Check DSCP Markings on Ingress
! Run this on the ingress interface to see what DSCP values are actually arriving
Router# show policy-map interface GigabitEthernet0/0/0 input
! Check class-map match stats
Router# show class-map CM-OUT-VOICE
Class Map match-any CM-OUT-VOICE (id 3)
Match: dscp ef (46)
Verify NBAR2 Protocol Detection
Router# show ip nbar protocol-discovery interface GigabitEthernet0/0/1 stats byte-count top-n 10
GigabitEthernet0/0/1
Input Output
---------------------- ----------------------
Protocol Byte Count Bit Rate Byte Count Bit Rate
---------------------- ------------ ---------- ------------ ----------
webex-meeting 14523112 2847000 12341890 2340000
ms-teams 8234100 1203000 7891234 1102000
http 45231000 6234000 38912000 5123000
unknown 2345678 312000 1923456 234000
If you see a large unknown category, your NBAR2 protocol pack may be outdated, or traffic is encrypted in a way that prevents layer-7 identification — fall back to DSCP-based classification.
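A quick way to track this over time is to compute the unknown share from the byte counts. In this sketch the input is a plain dict standing in for parsed show output (in practice you would scrape the CLI or pull the counters via SNMP); `unknown_share` is an illustrative helper.

```python
# Sketch: fraction of traffic NBAR2 left as "unknown", from input byte counts.

def unknown_share(byte_counts: dict) -> float:
    """Fraction of total bytes classified as 'unknown' (0.0 to 1.0)."""
    total = sum(byte_counts.values())
    return byte_counts.get("unknown", 0) / total if total else 0.0

counts = {  # input byte counts from the sample output above
    "webex-meeting": 14523112,
    "ms-teams": 8234100,
    "http": 45231000,
    "unknown": 2345678,
}
assert unknown_share(counts) < 0.05  # under 5%: pack coverage looks healthy
```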
Queue Depth and Drop Statistics
! Check for tail drops or WRED drops in your data classes (classic IOS shown;
! on IOS-XE platforms, per-class queue stats live in show policy-map interface)
Router# show queueing interface GigabitEthernet0/0/1
Interface GigabitEthernet0/0/1 queueing strategy: Class-based queueing
Output queue: 0/40 (size/max)
Conversations 0/4/256 active/max active/max total
Reserved Conversations 2/2 (allocated/max allocated)
Available Bandwidth 98750 kilobits/sec
Common QoS Mistakes (and How to Avoid Them)
1. Forgetting to Apply QoS at Every Hop
QoS only works if every device in the path honors the markings. A voice packet marked EF that hits an unconfigured switch in the middle of your campus will get treated as best-effort. Audit your entire traffic path — especially campus distribution and core switches.
2. Trusting End-User Markings
A laptop can mark its own traffic as EF. Don’t trust DSCP markings from endpoints you don’t control. Apply a re-marking input policy at the access layer to reset markings from user devices, then re-mark based on traffic type.
3. QoS on the Wrong Interface Direction
Output QoS (egress) is where congestion happens and where queuing matters. Input QoS (ingress) is for marking/policing. If you apply your LLQ policy to an input service-policy, the priority keyword won’t do what you think it does — and IOS-XE will warn you.
4. Over-Provisioning the LLQ
Setting priority percent 60 means your CBWFQ classes can share only the remaining 40% while voice saturates its queue. On a 100 Mbps WAN link that may be tolerable, but on a 10 Mbps MPLS circuit you can starve your ERP and CRM traffic. Keep LLQ at 30% or less and police it.
5. Not Testing Under Load
QoS policy only activates during congestion. You cannot verify it’s working correctly on an idle link. Use a traffic generator (even iperf3 saturating the link) and make a VoIP call simultaneously — check show policy-map interface while the link is loaded.
Automating QoS Deployment Across Your Fleet
Pushing the same QoS policy across 50 branch routers manually is error-prone. We covered exactly this use case in our guide to Network Automation with Python using Netmiko, NAPALM, and Nornir — a Nornir task to deploy and verify MQC configs across a fleet of IOS-XE devices takes about 20 lines of Python.
For Ansible users, the cisco.ios.ios_config module handles MQC deployment cleanly:
- name: Deploy QoS policy map
  cisco.ios.ios_config:
    lines:
      - policy-map PM-EGRESS-QOS
      - " class CM-OUT-VOICE"
      - " priority percent 30"
      - " class CM-OUT-VIDEO"
      - " bandwidth percent 25"
    match: line
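If you prefer Netmiko or raw CLI pushes over Ansible, it helps to render the policy as plain config lines first. A sketch with a hypothetical `render_egress_policy` helper; the resulting list can be handed to Netmiko's send_config_set().

```python
# Sketch: render an MQC egress policy as config lines for a CLI push.
# Class names mirror this article's policy; adjust for your own design.

def render_egress_policy(name: str, llq_pct: int, cbwfq: dict) -> list:
    """Build MQC config lines: one LLQ voice class plus CBWFQ data classes."""
    lines = [
        f"policy-map {name}",
        " class CM-OUT-VOICE",
        f"  priority percent {llq_pct}",
    ]
    for cls, pct in cbwfq.items():
        lines.append(f" class {cls}")
        lines.append(f"  bandwidth percent {pct}")
    return lines

cfg = render_egress_policy("PM-EGRESS-QOS", 30,
                           {"CM-OUT-VIDEO": 25, "CM-OUT-CRITICAL": 20})
assert cfg[0] == "policy-map PM-EGRESS-QOS"
assert "  priority percent 30" in cfg
```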
Quick Reference: QoS Verification Checklist
Before closing a QoS change ticket, run through this:
- show policy-map interface [int] output — confirm the policy is applied and class maps are matching packets
- show policy-map interface [int] output during a call — verify the voice class shows a zero drop rate
- show ip nbar protocol-discovery interface [int] — confirm NBAR2 is detecting your UC traffic
- show queueing interface [int] — check for unexpected tail drops in data classes
- Run an MOS test (Cisco IP SLA or external tool) — target MOS > 4.0 for G.711
- Verify round-trip latency for voice stays under 300ms during peak load
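For the MOS line item, the ITU-T G.107 E-model defines a standard mapping from the transmission rating factor R to an estimated MOS, useful when interpreting test results. A simplified sketch (computing R itself from delay, jitter, and loss is codec-specific and more involved):

```python
# Sketch: the ITU-T G.107 E-model conversion from R-factor to estimated MOS.
# A clean G.711 path rates around R = 93.2, roughly MOS 4.4 (the practical max).

def r_to_mos(r: float) -> float:
    """Map transmission rating factor R (0-100) to estimated MOS (1.0-4.5)."""
    if r <= 0:
        return 1.0
    if r >= 100:
        return 4.5
    return 1 + 0.035 * r + 7e-6 * r * (r - 60) * (100 - r)

assert 4.3 < r_to_mos(93.2) < 4.5   # clean G.711 path
assert r_to_mos(50) < 4.0           # degraded path misses the checklist target
```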
BGP and routing handle getting packets to the right destination — but it’s QoS that determines whether those packets arrive in time to be useful for real-time applications. For more on how your traffic is actually being routed before it even hits your QoS policy, our Understanding BGP guide covers the protocol fundamentals that every enterprise network engineer needs to know.
Wrapping Up
A well-implemented QoS policy isn’t complicated once you internalize the MQC framework. The key points: mark traffic early and consistently with DSCP, use LLQ for voice bearer (never more than 30% of link), CBWFQ for video and critical data, and scavenger-class everything you want to keep from interfering. Verify under load — not on an idle link.
The show policy-map interface output is your best friend for ongoing QoS health checks. Set up a monitoring job to alert on drop rates in your voice and video classes — if those go above zero under normal conditions, your policy needs tuning or your link is legitimately saturated.