Our Ops Manager server has been nagging us about insufficient free memory of late, and it eventually started crashing from resource exhaustion. After doubling the available system RAM, the crashes stopped (hurray!), but the complaints about insufficient free memory did not (boo!).
At issue, I believe, is aggressive SQL query and data caching. Database administrators are no doubt aware of SQL Server’s aggressive use of free memory for cache. On a dedicated database server this is a good thing, but on a system that has other work to do, the cache grabbing can cause resource contention.
I decided to limit the memory available to SQL Server to a large but more reasonable amount. This gives OpsMgr more flexibility in the event that it needs more RAM. A little T-SQL is required to make the change happen:
--Reconfigure SQL to display advanced control options:
USE master;
GO
EXEC sp_configure 'show advanced options', '1';
RECONFIGURE;
GO
--Display current SQL configuration options, including "max server memory (MB)":
EXEC sp_configure;
--Set "max server memory" to 9728 MB, or 9.5 GB:
EXEC sp_configure 'max server memory', 9728;
RECONFIGURE;
GO
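To confirm the new cap actually took effect, you can query the sys.configurations catalog view (a sketch; this assumes the change has been applied with RECONFIGURE, so value and value_in_use should both read 9728):

```sql
--Check both the configured value and the value currently in use:
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = 'max server memory (MB)';
```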
The final fix is to set an override on the server free-memory alerting threshold so that the OpsMgr server is allowed to run at about 90% memory utilization… when you have 16 GB of RAM, 90% is not a bad place to be.
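As a quick sanity check on the arithmetic behind that override (illustrative values only, matching the numbers above):

```python
# Memory budget for the OpsMgr server, using the figures from this post.
total_gb = 16.0                   # physical RAM in the server
sql_cap_mb = 9728                 # "max server memory" set via sp_configure
sql_cap_gb = sql_cap_mb / 1024    # SQL Server's cap in GB

# At a 90% utilization threshold, alerts fire only below ~10% free RAM.
free_floor_gb = total_gb * 0.10

print(sql_cap_gb)                 # 9.5 GB reserved (at most) for SQL Server
print(total_gb - sql_cap_gb)      # 6.5 GB left for the OS and OpsMgr itself
print(free_floor_gb)              # 1.6 GB free before an alert would trigger
```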
Official MS Documentation here:
(Note: the official name of the option is the plural "show advanced options"; sp_configure will accept an unambiguous partial match, but the full name is clearer. Also, stored procedures should be called using EXEC… it just looks cleaner.)
General usage of sp_configure, as used in the previous link: