Fix Exchange ECP OU Picker Bug — PowerShell Automation
The Exchange Control Panel OU picker breaks after every cumulative update in environments with 500+ Organizational Units, forcing admins to manually reapply the same web.config fix on every Mailbox server, every quarter, forever. A scheduled PowerShell script ends the cycle, and the deeper lesson behind it applies to far more than Exchange.

The 500 OU Bug Microsoft Won't Fix — And the PowerShell Script That Does
Why the same Exchange admins who manually reapply web.config patches after every cumulative update are the ones who shouldn't be running anything in the cloud.
The bug nobody talks about
If you administer Exchange Server in an environment with more than 500 Organizational Units, you already know this one. You open the Exchange Control Panel, click into the OU picker, and stare at a blank dropdown. Or a partial list. Or a list that mysteriously truncates at 500 entries with no indication that anything is missing.
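Not sure whether your directory is past the threshold? Assuming the ActiveDirectory RSAT module is available on a management box, a quick count settles it:

```powershell
# Count the OUs in the domain; anything north of ~500 puts you in bug territory.
# Requires the ActiveDirectory RSAT module.
Import-Module ActiveDirectory
(Get-ADOrganizationalUnit -Filter *).Count
```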
The fix has been documented for years. You crack open web.config on every Mailbox-role server, find the GetListDefaultResultSize key, bump the limit, restart MSExchangeECPAppPool, and move on with your life.
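For reference, the manual routine looks roughly like this; the path and the 5000 value are the typical ones from the documented workaround, so verify both against your own install before touching anything:

```powershell
# Rough shape of the manual fix (typical path and value; verify for your environment).
notepad (Join-Path $env:ExchangeInstallPath 'ClientAccess\ecp\web.config')
# ...add under <appSettings>:  <add key="GetListDefaultResultSize" value="5000" />

# Then recycle only the ECP app pool so the new limit takes effect.
Import-Module WebAdministration
Restart-WebAppPool -Name 'MSExchangeECPAppPool'
```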
Then Microsoft ships a cumulative update. The web.config gets overwritten. The picker breaks again. You reapply the fix. Across however many servers you run.
Repeat every quarter. Forever.
This is not a Microsoft problem in the sense that Microsoft will ever fix it. It's a Microsoft problem in the sense that Microsoft has decided this is your problem now.
Why manual reapplication is a silent disaster
The temptation is to dismiss this as a minor annoyance. "It's a five-minute fix, who cares?"
Here's who cares: anyone who has watched a junior admin forget to reapply the patch after a 2 AM CU rollout, anyone who has logged into ECP three weeks later and wondered why the OU list is suddenly empty, anyone who has had to explain to a directory team that the helpdesk can't provision a mailbox because the picker is broken again.
Five minutes per server per CU sounds small. Across a real environment — say five Mailbox servers, four CUs a year, plus the inevitable security updates — you're looking at hours of repetitive, error-prone, easily-forgotten work that exists only because Microsoft's installer doesn't preserve a customer-side configuration value.
The cost isn't the time. The cost is the uncertainty: every CU now carries an unspoken question, "Did anyone remember to reapply the fix?" And that uncertainty has a way of surfacing at the worst possible moment.
What the script does
The fix isn't complicated. The discipline of running it consistently, on every server, after every update, without forgetting — that's the hard part. So you stop relying on humans for it.
The PowerShell script that lives on GitHub and has been hardened across real Exchange 2013–2025 deployments does four things, in this order (a condensed sketch follows the list):
- Backs up the existing web.config. Timestamped, idempotent, never overwrites a prior backup. If anything goes wrong, the rollback is one copy command.
- Checks the current GetListDefaultResultSize value. If it's already correct, the script exits clean. No churn, no app pool restart, no log noise. This matters because you're going to schedule it.
- Applies the modification only if needed. Targeted XML edit, not a blanket file replace. Other customizations on the box stay intact.
- Restarts MSExchangeECPAppPool. Only the ECP app pool. The rest of Exchange keeps serving mail.
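For concreteness, here is a minimal sketch of that core loop. It assumes the key lives under <appSettings> in the ClientAccess ecp web.config and that 5000 is the value you want; the published script adds logging and error handling, so treat this as illustrative rather than a drop-in replacement:

```powershell
# Minimal sketch of the correction loop (typical path and value; verify for your install).
$webConfig   = Join-Path $env:ExchangeInstallPath 'ClientAccess\ecp\web.config'
$keyName     = 'GetListDefaultResultSize'
$targetValue = '5000'

# 1. One backup per day, skipped if it already exists, so prior backups are never overwritten.
$backup = "$webConfig.$(Get-Date -Format 'yyyyMMdd').bak"
if (-not (Test-Path -Path $backup)) { Copy-Item -Path $webConfig -Destination $backup }

# 2. Check the current value; exit clean if nothing needs to change.
[xml]$xml = Get-Content -Path $webConfig -Raw
$node = $xml.configuration.appSettings.add | Where-Object { $_.key -eq $keyName }
if ($node -and $node.value -eq $targetValue) {
    Write-Output "$keyName already set to $targetValue; nothing to do."
    return
}

# 3. Targeted edit: update the existing node or create it, leaving the rest of the file alone.
if ($node) {
    $node.value = $targetValue
} else {
    $new = $xml.CreateElement('add')
    $new.SetAttribute('key', $keyName)
    $new.SetAttribute('value', $targetValue)
    $xml.configuration.appSettings.AppendChild($new) | Out-Null
}
$xml.Save($webConfig)

# 4. Recycle only the ECP app pool; the rest of Exchange keeps serving mail.
Import-Module WebAdministration
Restart-WebAppPool -Name 'MSExchangeECPAppPool'
```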
Deployment is a scheduled task running as SYSTEM with highest privileges, triggered daily or post-reboot. The whole thing fits in a few hundred lines of PowerShell and a single Task Scheduler XML you can import on every Mailbox server.
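If you'd rather script the registration than import the XML, the equivalent task can be created with the ScheduledTasks cmdlets; the task name, run time, and script path below are illustrative:

```powershell
# Register the correction loop as a SYSTEM task with highest privileges.
$action    = New-ScheduledTaskAction -Execute 'powershell.exe' `
    -Argument '-NoProfile -ExecutionPolicy Bypass -File "C:\Scripts\Fix-EcpOuPicker.ps1"'
$triggers  = @(
    (New-ScheduledTaskTrigger -Daily -At 3am),   # daily correction pass
    (New-ScheduledTaskTrigger -AtStartup)        # catches the reboot after a CU
)
$principal = New-ScheduledTaskPrincipal -UserId 'SYSTEM' -LogonType ServiceAccount -RunLevel Highest

Register-ScheduledTask -TaskName 'Fix-ECP-OU-Picker' -Action $action `
    -Trigger $triggers -Principal $principal -Force
```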
That's it. That's the whole solution. The reason it works isn't cleverness — it's that it removes the human from the loop on a task humans were never going to do reliably.
Why "just remember to do it" doesn't scale
Every operations team eventually learns this lesson the hard way: any maintenance step that depends on a human remembering to do something after another event will, given enough time, fail to happen.
The reason isn't laziness. It's that humans are bad at executing the same procedure identically across many systems on a schedule defined by someone else's release cadence. Computers are extremely good at this. The entire premise of automation is to move tasks from the first category to the second.
The Exchange OU fix is a textbook case. It's deterministic. It's repeatable. It has clear preconditions and a clear postcondition. It produces no business value when performed — its only value is preventing a failure mode. This is exactly the kind of work that should never touch a runbook a human reads at 2 AM.
The broader pattern
Step back from Exchange for a second.
The reason this fix exists, and the reason it has to keep being reapplied, is that a vendor made a decision about the default behavior of their software that doesn't match how your environment actually works. You worked around it. The vendor's update process doesn't know about your workaround. So your workaround dies on every update, and you have to revive it.
This pattern is everywhere in enterprise infrastructure. SQL Server settings that get reset. IIS configurations that drift. Group Policy values that get clobbered by feature updates. Cloud APIs that change schemas with two weeks' notice. Every time, the answer is the same: build a small, idempotent, scheduled correction loop that keeps reality matching your intent.
It's also the same instinct that drives the on-premise AI movement. When you run a language model through a cloud API, you're betting that the vendor's defaults — pricing, model versions, rate limits, data handling — will keep matching your needs forever. They won't. They'll change, and your workarounds will break, and you'll be back to writing correction loops against someone else's release schedule. Except now the "correction loop" is "renegotiate your contract" or "rewrite your prompts because they deprecated the model."
The Exchange admins who run their own infrastructure already understand this. They run on-prem because they want the configuration surface to be theirs. They write PowerShell to defend that surface from vendor drift. They are, philosophically, the same people who should be running AI on their own GPUs instead of paying per token to a black box that ships a new model every six weeks.
The minimum viable automation mindset
If you take one thing from this article, take this: any recurring manual fix is a script you haven't written yet.
The Exchange OU picker patch is one example. The 500 OU bug is one bug. The pattern — vendor update overwrites customer setting, customer reapplies setting, vendor ships another update — is a thousand bugs. The remediation in every case is the same shape:
- Detect the drift.
- Back up the current state.
- Apply the fix idempotently.
- Restart only what needs restarting.
- Run it on a schedule that's faster than the rate of drift (a generic template follows below).
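Stripped of the Exchange specifics, that shape is small enough to keep as a template. The sketch below is illustrative, not the published script; every name in it is made up for the example:

```powershell
# Generic drift-correction template: each step is a scriptblock you supply for the
# setting being defended. Names and structure are illustrative only.
function Invoke-DriftCorrection {
    param(
        [scriptblock]$Test,      # detect drift: returns $true when reality matches intent
        [scriptblock]$Backup,    # capture current state before touching anything
        [scriptblock]$Apply,     # idempotent fix
        [scriptblock]$Restart    # restart only what needs restarting
    )
    if (& $Test) { return 'compliant' }   # nothing drifted: exit with no side effects
    & $Backup
    & $Apply
    if (-not (& $Test)) { throw 'Fix applied but drift still detected; stopping.' }
    & $Restart
    return 'remediated'
}
```

Schedule one call per setting you care about, at an interval shorter than your vendors' update cadence, and the checklist stops being mental.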
Once you have one of these in production, the second one is easy. The third one is a copy-paste. By the time you have ten of them, you've quietly built an internal reliability platform that costs nothing, never sleeps, and stops being a thing anyone has to remember.
That's the version of IT operations worth running. Not the one where your weekend depends on whether someone reapplied a config setting.
Where this leads
The Exchange fix is freely available. Deploy it, schedule it, forget about it. Your future self, the one who would otherwise be reapplying web.config edits at midnight after a CU rollout, will be grateful.
But the larger question is worth sitting with. How many other quiet, recurring, vendor-imposed maintenance tasks does your environment carry? How many of them are on someone's mental checklist instead of in a scheduled task? How much of your operational reliability depends on people remembering things?
Every one of those is a script waiting to be written. And every script you write is one less thing the cloud salesperson can use to convince you that running your own infrastructure is too hard.
It isn't too hard. It's just a habit of automation that most teams haven't built yet.
InnovaTek Solutions builds and operates private, on-premise AI and data infrastructure for businesses that want to own their stack. If you're running Exchange on-prem, you already understand why. The same approach — your hardware, your data, your control — applies to AI. Talk to us about installing private LLMs, image recognition, and data pipelines that run on your servers and stay there.