Server 2012 R2's fantabulous dedup feature is one of my favourites.
But the logs showed a mysterious "Not enough storage" error, which was definitely not true: the volume had plenty of free space.
In the end the error message turned out to be misleading. The real cause was too little memory at night, when the server tried to dedup its storage, due to a rare combination of other concurrently running VMs and Dynamic Memory. So it didn't happen often, and the errors escaped my attention.
PowerShell gave me this:
LastOptimizationTime : 24.06.2014 01:26:49
LastOptimizationResult : 0x8007000E
LastOptimizationResultMessage : Not enough storage is available to complete this operation.
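For reference, this is the kind of status you get from the deduplication cmdlets; a quick way to pull just those fields (the volume letter here is only an example):

```powershell
# Show the result of the last optimization job for a deduplicated volume.
Get-DedupStatus -Volume "E:" |
    Format-List LastOptimizationTime, LastOptimizationResult, LastOptimizationResultMessage
```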
Raising the VM's minimum memory and scaling up its memory weight (priority) solved the problem.
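As a rough sketch of that fix on the Hyper-V side: for a dynamic-memory VM you can raise the floor and the weight with `Set-VMMemory`. The VM name and all the byte values below are placeholders, not recommendations; pick numbers that fit your host.

```powershell
# Hedged example: give a dynamic-memory VM a higher RAM floor and more weight
# so the host does not starve it at night. "FILESERVER" is a placeholder name.
Set-VMMemory -VMName "FILESERVER" `
    -DynamicMemoryEnabled $true `
    -MinimumBytes 2GB `
    -StartupBytes 2GB `
    -MaximumBytes 8GB `
    -Priority 80
```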
Especially with many terabytes of storage, memory becomes a critical factor.
As for how much RAM you need per amount of deduplicated data, I have read different recommendations.
Microsoft says "at least 1GB" of memory. With less than that, the error message above appears and fsdmhost won't start.
So 1GB may be suitable for this kind of "offline" optimization, but with the new feature of optimizing partially written files, dedup has moved close to nearline dedup and should get a bit more RAM.
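If you want to control how much RAM a dedup job is allowed to grab rather than relying on the defaults, `Start-DedupJob` takes a `-Memory` parameter (the maximum percentage of system RAM the job may use). Again, the volume letter and percentage are just example values:

```powershell
# Kick off an optimization job with an explicit memory cap of 50% of system RAM.
Start-DedupJob -Type Optimization -Volume "E:" -Memory 50
```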
The official ZFS recommendation, for example, is 5GB of RAM per 1TB, but I found that it depends heavily on what types of files you store.
Many small files will produce a lot of different hashes, which results in high memory consumption.
For example: I had a file server filled with heaps of Microsoft ISOs (1.5TB – typical admin stuff, right? 😉 ). According to the ZFS dedup calculation I needed roughly 1.7GB of RAM, which is way below the official recommendation. The (ZFS) file server could get up to 12GB of statically assigned RAM, but I could see that the whole system only used about 2.5GB.
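The back-of-envelope math behind such estimates can be sketched like this. The ~320 bytes per unique block is a commonly cited ballpark for a ZFS dedup table (DDT) entry, and the 128KiB record size is an assumption; the function name is mine. Note this is a worst case where every block is unique: large, partly duplicate ISOs share DDT entries, which is why my real-world figure came out lower than the worst case.

```python
# Rough worst-case estimate of dedup table RAM for a given amount of data.
# Assumptions: 128 KiB records, ~320 bytes of RAM per unique block (ZFS DDT
# ballpark), and every block being unique (no duplicates at all).

def ddt_ram_bytes(data_bytes, record_size=128 * 1024, bytes_per_entry=320):
    """Worst-case DDT RAM: one entry per block, all blocks unique."""
    blocks = data_bytes // record_size
    return blocks * bytes_per_entry

TIB = 2**40
estimate = ddt_ram_bytes(int(1.5 * TIB))
print(f"{estimate / 2**30:.2f} GiB")  # prints "3.75 GiB"
```

With real data the actual footprint sits below this ceiling, since duplicate blocks collapse into a single table entry.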
Long story short: memory is the key.