Ever since converting our VSA instance from on-prem to cloud/SaaS, our email alerts for out-of-date Kaspersky antivirus definitions have gotten completely out of hand. We'll sometimes get 25 or more emails for the same machine being out of date, sometimes even more than that, and I can't put a finger on why. This never happened with our on-prem setup, so why it would suddenly start now is beyond me. I put in a helpdesk ticket for this issue some time ago, and through many back-and-forths with their engineers, things have gone nowhere.
Any ideas as to what could be going on here?
The core problem is a chicken-and-egg situation: the monitor fires before the updates run. On-prem, we used to get 30-40 alerts per 1,000 endpoints each day; after switching to Smart Monitors, that dropped to 0.5-1 per 1,000 endpoints per week. If the monitor checks 6x daily but updates are scheduled only once daily, you can see how many false alerts that generates.
When we chased this down, we found that in the time it took to dispatch the ticket to a tech, the updates had already completed, and most tickets were closed NTF (no trouble found).
Definitions being outdated is not a high-priority event unless it persists. If you configure the monitor to alarm only after 3 failed checks in 48 hours, you'll get a reasonable alert volume. Our Smart Monitor for AV pulls the status from MS Security Center, then reports if the "wrong" AV is installed, or if AV is missing, suspended, or disabled. For a Definitions Outdated status, it invokes the command to force an update and flags the condition, checking again every 12 hours over 48 hours (forcing updates each time if necessary). The self-heal action eliminates nearly all of the alerts, and the few that do come in indicate agents that are actually having update problems.
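To make the flow concrete, here's a minimal sketch of that check/self-heal/re-check loop in Python. The `check_outdated` and `force_update` callables are hypothetical stand-ins for whatever agent procedures your RMM exposes (they are not Kaseya API names), and each loop iteration represents one 12-hour cycle:

```python
def monitor_definitions(check_outdated, force_update, max_checks=3):
    """Alert only if AV definitions stay outdated across max_checks
    consecutive checks (the 3-checks-over-48-hours pattern above).

    check_outdated: callable returning True if definitions are stale.
    force_update:   callable that kicks off a definition update (self-heal).
    Returns True if an alert should fire, False if the condition cleared.
    """
    strikes = 0
    for _ in range(max_checks):
        if check_outdated():
            strikes += 1
            force_update()   # self-heal before the next check
        else:
            return False     # definitions caught up; clear the condition
    return strikes == max_checks  # outdated on every check -> real problem
```

In the common false-alarm case, the forced update (or the normally scheduled one) lands before the next check, `check_outdated` returns False, and no alert is ever sent; only agents that fail all three self-heal attempts generate a ticket.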
Hey, thanks for this info. Sounds like this could be quite beneficial in our efforts to get a handle on all these alerts. How would I go about getting something like this set up?