Just thought I'd share these links, since Kaseya doesn't seem to be actively announcing it.
Thank you for this. I posted the patch information earlier today:
Ahh, yes I missed that, I checked all the other forums with posts from today but must have skipped that one. It would have been nice to receive a call from our account rep, or anyone at Kaseya, as this seems like a very serious issue. I also don't see it on the Kaseya Twitter feed. Just seems like something that should be gotten out to the community as quickly as possible.
Is there someone we can talk to in order to gain a better understanding of how this exploit worked, what the true exposure might be in terms of access to customer systems, and what we could put in place going forward to protect against this type of exploit, should another one be found?
I second this. It would be nice to know the complete scope of what was affected and what we need to look for on clients' machines. Apparently this also happened back in 2014?
Cesar Perez Oscar Romero
In the "XMR Detection and Removal Procedures" (helpdesk.kaseya.com/.../360000346651)
In Step 4 - I had to do an additional step, which was deleting the criteria “System Affected” that appeared against another custom field we had and replacing it with an *. If you don't do this, the view will never show that any machines are affected!!
PS. It would have been nice if I could have just commented on the KB article instead :-)
Thank you for pointing this out. I have just corrected this in the article.
I can't get the XMR_Log.xml to import, I get "Exception has been thrown by the target of an invocation. There is no DataField named 'importName' in the DataRecord." (We're using a lot of Custom Fields, I don't really want to use up one more if I don't have to.)
Maybe it'll behave after I patch the system tonight.
You can use the AP log and reports to avoid the custom field portion. Being as my server does not have too many agents, I created an email if detected and/or found.
None found btw.
I am sure that Kaseya will probably not want to disclose the exact nature of the vulnerability (though I think the fix has something to do with the StripHeaders module for IIS?). However, is it possible to provide assurances as to what impact this might have had?
For example how can you be sure that the XMR cryptominer is the only thing deployed (as this is the only thing being detected by the published scripts) and that some other backdoor to data or agents is not present?
I'm sure a compromised VSA is among our worst nightmares so some details would be appreciated.
I agree. The procedure is searching for one known malicious actor exploiting the vulnerability. Surely we should also be checking the IIS logs, or the Kaseya logs themselves, in case someone else is actively exploiting the vulnerability — if only we knew where to look.
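To make that log sweep concrete, here's a rough Python sketch of the kind of check I mean. Kaseya hasn't published any indicators for this vulnerability, so the log filename pattern and the "suspicious" request patterns below are generic placeholders, not real IoCs — substitute whatever Kaseya eventually tells us to look for:

```python
import re
from pathlib import Path

# Placeholder patterns only -- Kaseya has not published indicators for this
# vulnerability. These are generic examples of hostile-looking requests.
SUSPICIOUS = [
    re.compile(r"\.\./"),            # path traversal attempts
    re.compile(r"(?i)xp_cmdshell"),  # SQL injection probing
    re.compile(r"(?i)<script"),      # reflected XSS probing
]

def scan_iis_logs(log_dir):
    """Yield (filename, line_no, line) for every IIS log line that
    matches one of the suspicious patterns. 'u_ex*.log' is the default
    IIS W3C log naming convention."""
    for log_file in sorted(Path(log_dir).glob("u_ex*.log")):
        with open(log_file, errors="replace") as fh:
            for line_no, line in enumerate(fh, 1):
                if any(p.search(line) for p in SUSPICIOUS):
                    yield log_file.name, line_no, line.rstrip()

# Usage (on the VSA host, path is an assumption):
#   for name, no, line in scan_iis_logs(r"C:\inetpub\logs\LogFiles\W3SVC1"):
#       print(f"{name}:{no}: {line}")
```

Obviously this only catches what you already know to grep for, which is exactly why a description of the vulnerability itself would help.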
The procedure itself is odd. I don't know if there is a genuine use case for the registry key it is searching for, but checking for one key and assuming six exist?
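On that point: rather than testing one key and inferring the rest, a detection procedure can just check each suspect key individually. A minimal sketch — the key names are made-up placeholders, and the lookup is injected as a function (on a real Windows host it would wrap `winreg.OpenKey`) so the logic itself can be tested anywhere:

```python
# Placeholder key names -- NOT the actual indicators from the KB article.
SUSPECT_KEYS = [
    r"HKLM\SOFTWARE\WOW6432Node\Example\Miner1",
    r"HKLM\SOFTWARE\WOW6432Node\Example\Miner2",
]

def affected_keys(key_exists, keys=SUSPECT_KEYS):
    """Return every suspect key that actually exists, instead of
    checking one key and assuming the others are present too.
    `key_exists` is any callable taking a key path and returning bool;
    on Windows it would wrap winreg.OpenKey in a try/except."""
    return [k for k in keys if key_exists(k)]
```

That way the procedure reports exactly which indicators were found, instead of an all-or-nothing guess.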
I don't think it was StripHeaders (github.com/.../StripHeaders). They likely added this to make it harder for future attacks because attackers won't be told what versions of software we're running.
If you're concerned that your VSA is still compromised, you could restore to a point before this attack was known and then re-install the patch, or restore as a different instance, re-install the patch, and then compare the two systems.
In any event, I would expect our antimalware software and other deployed countermeasures for evil to catch these crypto miners even if they had been installed. Take a look: www.virustotal.com
The executable was compiled 1/12/2018, and was first submitted 1/24. Many (not all!) antivirus programs correctly detect that deployed crypto miner as evil. Kaspersky detects it.
The procedure also doesn't nest its ELSE/IF quite right, from what I'm seeing. There's an IF, another IF, and if the second IF doesn't trigger then we get the ELSE. The correct form would be the first IF, then the second IF inside the first IF's ELSE, and so on.
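In Python terms (the agent procedure language itself doesn't matter here, and the check names are placeholders for the procedure's registry tests), the difference between the flattened structure and the properly nested one looks like this:

```python
def flattened(first_check, second_check):
    """What the procedure appears to do: the ELSE pairs only with the
    second IF, so 'clean' gets reported whenever the second check
    misses, even if the first check already hit."""
    results = []
    if first_check:
        results.append("first check hit")
    if second_check:
        results.append("second check hit")
    else:
        results.append("clean")  # wrongly logged even when first_check is True
    return results

def nested(first_check, second_check):
    """The correct form: each IF sits inside the previous IF's ELSE,
    so exactly one branch fires and 'clean' means every check missed."""
    if first_check:
        return ["first check hit"]
    elif second_check:
        return ["second check hit"]
    else:
        return ["clean"]
```

With the flattened version, a machine that trips the first check but not the second gets logged as both hit and clean, which is exactly the ambiguity the nested form avoids.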
Of course it also defines the agent temp directory variable even though #vAgentConfiguration.agentTempDir# has been right there in our toolbox for many years now, and the procedure itself never writes to the working directory anyway.
And while I'm kibitzing, the procedure log entry should be something actually specific to the task. 'System affected', great, when I'm going through logs later I'm sure going to want to know "affected by WHAT, again?" Unique search strings are your friend when you're creating log data.
I'm tempted to build my own procedure instead of relying on this one.
Oh hey: There's a good write-up over at Medium on this situation.
Note the updated registry keys to check for. This'll be a fun procedure rewrite, oh yes...
I think the IF/ELSE is correct, and I just assumed they cut and pasted an existing procedure, which is why the agentDir is still defined in there. But they are certainly not the best example of writing their own procedures; I seem to recall they have a dedicated procedure-writing team :-$
Good find with the link, however I'm more concerned about the vulnerability itself than the exploit/payload that has been discovered. While others may have magic endpoint security to detect any payload, your link demonstrates that this particular exploit has been modified to bypass such checking, so checking the endpoints seems futile. It's the VSA that has been compromised, and the VSA should contain logs or clues that will indicate any compromise, be it SQL injection, code injection, cross-site scripting, etc. Having a description of the vulnerability rather than the payload will allow us to check our own VSAs for any compromise, regardless of payload.
"Having a description of the vulnerability rather than the payload"
I agree. This is needed.