I'm trying to set up some I/O monitoring sets using the "avg disk queue length" counter object.
I'm confused as to how Kaseya is interpreting the values.
So it's set to collect over 5 and alert at 50 (default settings, except the collection threshold, which I changed from 20 because no data was being collected).
The values it's getting from the servers it's applied to are in the 0.01-0.02 range.
This confuses me because my reading suggests that a queue length above 2 sustained for over 5 minutes is considered bad. So why are the default alert thresholds so high?
Also, if my collection threshold is set to 5 (also too high), how am I getting any data from the server?
Both of these lead me to believe the value Kaseya is using doesn't exactly match the actual queue length.
Can anybody point me in the right direction here?
Firstly, the default thresholds were written nearly 15 years ago, when XP ruled the roost, and have never been updated. This is why they're so out of date.
Secondly, a reading of 0.0x is pretty normal these days (especially with SSD drives now common), so nothing to worry about there. Use Windows Resource Monitor (Disk tab, Storage section) to observe the queue length reading independently. K is probably reading the value correctly.
The way monitoring works is on the basis of not sending data back to the K server unless necessary. The agent on the monitored machine is programmed with the thresholds and monitors them (presumably in real time, or at least very frequently). If the value read is under the collect threshold, nothing happens (all is well), and the K server logs no readings (because they're normal, hence deemed not worth collecting).
Only when the monitored value exceeds the collect threshold does the data get sent back to the K server for logging and a potential alert. The K server's alerts are based on the logged data exceeding the alert threshold you've set.
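To make the two-threshold flow concrete, here's a minimal sketch in Python. This is purely illustrative of the behaviour described above, not Kaseya's actual agent or server code; the function names, threshold values, and per-sample filtering model are all assumptions for the example.

```python
# Illustrative model of the described two-threshold flow (NOT Kaseya code):
# the agent sends a reading to the server only when it exceeds the collect
# threshold; the server alerts only on logged readings that exceed the
# alert threshold.

COLLECT_THRESHOLD = 1.5   # agent-side: readings at or below this are dropped
ALERT_THRESHOLD = 2.0     # server-side: logged readings above this alert

def agent_filter(samples, collect_threshold=COLLECT_THRESHOLD):
    """Return only the samples the agent would send back for logging."""
    return [s for s in samples if s > collect_threshold]

def server_alerts(logged, alert_threshold=ALERT_THRESHOLD):
    """Return the logged readings that would trigger an alert."""
    return [s for s in logged if s > alert_threshold]

# A quiet disk (queue length 0.01-0.02) sends nothing at all,
# which is why the K server shows no data with a high collect threshold:
quiet = [0.01, 0.02, 0.015]
print(agent_filter(quiet))        # []

# A busy spell gets collected, and only the worst readings alert:
busy = [0.4, 1.8, 2.6, 1.2]
collected = agent_filter(busy)    # [1.8, 2.6]
print(server_alerts(collected))   # [2.6]
```

This also explains the original question: with readings around 0.01-0.02 and a collect threshold of 5, nothing should ever reach the server.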
If you expect K to keep a 24/7 log of the data, set the collect threshold to 0. Note this will put load on the K server and noticeably increase the size of the K server's SQL database (all that data has to be stored somewhere). Use with caution.
Consider using KNM to monitor this rather than classic monitoring; it's much more efficient and puts less load on the K server.
Thanks for the response!
So if I read this correctly, I can set the collection threshold to 1.5 and the alarm threshold to 2, and this will function as an alert for high disk load?
Should work. Some trial and error may be required: since the minimum unit of measure for collection is minutes, I read that as the queue needing to stay above 1.5 for at least 1 minute to trigger collection.
Disk I/O is usually a lot more dynamic than that, so only a sustained load over a long period is going to alert. High loads for short bursts won't trigger, I don't think, so you'll have to test it.
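The "sustained for at least a minute" reading can be sketched like this. It assumes a hypothetical one-second sample rate and a simple consecutive-sample window; Kaseya's agent may well average or sample differently, so treat this only as an illustration of why short spikes wouldn't trigger collection.

```python
# Sketch of the "sustained above threshold for a full window" interpretation.
# Assumes one sample per second and a 60-sample (1 minute) window -- these
# are illustrative assumptions, not Kaseya agent internals.

def sustained_above(samples, threshold, window=60):
    """True if any `window` consecutive samples all exceed `threshold`."""
    run = 0
    for s in samples:
        run = run + 1 if s > threshold else 0
        if run >= window:
            return True
    return False

# 90 seconds of continuous heavy queueing: would trigger collection.
heavy = [2.5] * 90
print(sustained_above(heavy, 1.5))   # True

# Repeated short spikes that drop back each time: never triggers,
# even though the disk is clearly busy in bursts.
spiky = ([3.0] * 10 + [0.1] * 5) * 10
print(sustained_above(spiky, 1.5))   # False
```

Under this model, bursty disk I/O slips through, which is why testing against a real workload is worthwhile.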