Hi, all.
We have two sites with four Exchange 2013 mailbox servers in a DAG (two at each site), six databases that replicate to all DAG members, and about 700 mailboxes.
Yesterday we applied a retention policy to "Delete and Allow Recovery" all email over 3 years old. Today all four mailbox servers show 9-12 instances of ParserServer.exe and 4 instances of NodeRunner.exe, pinning the processors at 100%. Each server has 4 vCPUs and 16 GB of memory, running as a VM on a beefy physical host. We are assuming this is the system applying the retention policy tags to each mail item in each user's mailbox.
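For context, the policy was set up with Exchange Management Shell commands roughly like the following (the tag and policy names here are placeholders, not our real names):

```powershell
# Sketch of the kind of commands used to build and apply the policy.
# A default tag (-Type All) that deletes-and-allows-recovery after 3 years (1095 days):
New-RetentionPolicyTag "Delete-3yr" -Type All -RetentionEnabled $true `
    -AgeLimitForRetention 1095 -RetentionAction DeleteAndAllowRecovery

# Wrap the tag in a policy:
New-RetentionPolicy "3yr-Delete-Policy" -RetentionPolicyTagLinks "Delete-3yr"

# Stamp the policy on every mailbox:
Get-Mailbox -ResultSize Unlimited |
    Set-Mailbox -RetentionPolicy "3yr-Delete-Policy"
```

The Managed Folder Assistant then does the actual tagging and deletion on its own schedule.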
Users are complaining of slow Outlook performance. With about 1.3 TB of total data, it may be days before the retention policy is completely applied.
How can I throttle back the application of the retention policy to free up resources on the servers? Can I limit the number of instances of these processes, or cap the CPU cycles they are allowed to consume?
Also, how do these internal mechanisms work? For example, does Exchange first stamp the policy on each mailbox, then go through and tag each mail item, then make a second pass to perform the deletions? Is there any way to view progress?
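So far the only visibility I've found is per-mailbox, along these lines (assuming the ELC properties in the MRM diagnostic log reflect this run; "dan" is a placeholder mailbox):

```powershell
# Sketch: inspect the Managed Folder Assistant's most recent pass over one mailbox.
# The MailboxLog is XML; the Elc* properties describe the last assistant run.
$log = Export-MailboxDiagnosticLogs -Identity "dan" -ExtendedProperties
$xml = [xml]$log.MailboxLog
$xml.Properties.MailboxTable.Property | Where-Object { $_.Name -like "Elc*" }

# Items already soft-deleted by the policy accumulate in Recoverable Items:
Get-MailboxFolderStatistics -Identity "dan" -FolderScope RecoverableItems |
    Select-Object Name, ItemsInFolder, FolderAndSubfolderSize
```

But that only shows one mailbox at a time — is there an org-wide view of how far along the assistant is?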
We need to tighten this policy to 2-year retention next month. Will the entire process repeat, forcing a crawl of every mail item once again and killing server performance, or is there some efficiency when it runs a second time? Any insight into the back-end processes would be very helpful.
Thanks!
Dan