I happened to notice that a web browser connection made to the agent check-in port of my Kserver gets internally forwarded by Kaseya to the VSA IIS web server. This allows access to my Kserver login page via the agent port, even when my upstream network firewall is blocking web ports 80/443.
My firewall does not do the "deep inspection" required to differentiate HTTP/HTTPS traffic from agent traffic, so I cannot selectively block; my agent port is either open or it isn't.
That's a nasty bug, and one Kaseya must resolve. I doubt there is anything you can do without a suitable firewall to filter the traffic.
I went to look for the feature request to upvote it but I couldn't find it. You should post a link to it.
Feature Request Link:
There are a couple things you can do to mitigate this risk and reduce the attack surface for the agent port.
1. In R9.3 you are able to configure the Edge Services to require secure traffic. This is done by editing KaseyaEdgeServices.config and setting "RedirectHttpToHttps" to "true", which redirects all HTTP traffic to HTTPS and ensures every connection to the port goes over a secure transport.
2. You can enforce two-factor authentication, requiring anyone who accesses the login page to prove their identity by entering their AuthAnvil MFA credentials at time of login.
3. In R9.3 the Edge Services honor the traditional "X-Forwarded-For" header in HTTPS streams. If the upstream firewall performing routing and/or NAT sets the header, it is forwarded on to IIS, so you can use the "IP & Domain Restrictions" feature in IIS to restrict access to trusted hosts only. This isn't fail-safe, especially if the client networks use dynamic addresses, and it would still allow machines where agents are installed to talk to the port, which is why you would fall back to #2 if that is an issue.
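For item 1 above, only the key name and value come from the steps described; the surrounding layout below is an assumption (a typical .NET appSettings-style section) and the actual structure of KaseyaEdgeServices.config may differ:

```xml
<!-- KaseyaEdgeServices.config: layout assumed for illustration; only the
     "RedirectHttpToHttps" key and its "true" value come from the guidance above. -->
<appSettings>
  <add key="RedirectHttpToHttps" value="true" />
</appSettings>
```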
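For item 3, the IIS side of that restriction lives in the `system.webServer/security/ipSecurity` section; a sketch assuming a trusted management subnet of 203.0.113.0/24 (an example range). The `enableProxyMode` attribute (IIS 8.0+) tells IIS to evaluate the X-Forwarded-For header rather than only the immediate peer address:

```xml
<!-- Example only: allow a single trusted subnet, deny everything unlisted.
     203.0.113.0/24 is a placeholder range; substitute your own networks. -->
<security>
  <ipSecurity allowUnlisted="false" enableProxyMode="true">
    <add ipAddress="203.0.113.0" subnetMask="255.255.255.0" allowed="true" />
  </ipSecurity>
</security>
```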
We are looking to enhance this in the future with more intelligent filtering. At this point, however, that isn't practical without new capabilities in the Edge Services to maintain agent state and tell an agent apart from a browser, something that could normally be spoofed anyway in an HTTP stream. We have already addressed this in the new endpoint fabric in R9.3 by using PKI with certificate trust for the channel communications: untrusted devices cannot communicate over the endpoint fabric system. However, we could not backport that to the old agent technology, since we have to deal with multiple versions of older agents that may still need to communicate over that port.
This is not cool... We do not allow VSA access via the internet, only through VPN. Is this a bug, or is it by design?
Thanks for your reply, I have a couple of comments in response, while also acknowledging that the decisions in the past were not yours!
The fundamental issue here is a design flaw. Mitigation strategies are all well and good, but they do nothing other than mask the design flaw. It is reasonable for someone blocking TCP:443 at the firewall to assume the site is therefore inaccessible from the internet. Instead, we find there are three ports potentially redirecting to the same service (TCP:443, the primary KServer port, and the secondary KServer port), two of which cannot be blocked.
Good security design aims to reduce the attack surface, which hasn't happened here. A flaw in the authentication system would now have no mitigation strategies, as it is unlikely most Kaseya clients know the IP addresses of all connecting clients. Consider for a moment that there is a flaw in the authentication that is easy to exploit. An attacker able to successfully log in could run arbitrary code on EVERY computer Kaseya is managing. This is potentially enough to put the compromised company out of business. With the current inability to restrict access to the authentication services at all, every client machine of every Kaseya client is now vulnerable. Do we seriously believe, given recent IoT-based DDoS attacks, that millions of Kaseya client PCs would not be an attractive target?
In specific response to your mitigation strategies:
1) Using SSL doesn't prevent any attack. All it does is add computational load for an attacker, and if an attacker had an exploit, that load would be so trivial it doesn't bear mentioning. If, for example, a brute-force attack were performed without TLS, a passive observer would also be able to see the password; so enforcing TLS is not a mitigation in its own right, just slightly less terrible.
2) Two-factor authentication doesn't necessarily prevent an exploit. Only making the service remotely inaccessible prevents an exploit or weakness from being reached.
I would also be interested to know whether the new fabric uses public PKI or a Kaseya-managed one. Both have weaknesses: the public model is widely understood to be flawed, and a Kaseya-implemented PKI raises questions about client-trusted certificates (i.e. is it installed in the PC certificate store?). Is there any more detailed information about this?
I would also be interested to know if any other users have a firewall recommendation that removes this HTTP(S) issue externally to Kaseya?
HTTP(S) responses on port 5721: bug. Period. 5721 is for classic agent communication, not web traffic.
I may just fire up some old versions and check when this one crept in as it sure is weird.
There should also be more options for 2FA than the paid Kaseya service; Authy, Google Authenticator, etc. all work perfectly fine.
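For reference, the TOTP scheme that Google Authenticator and Authy implement is an open standard (RFC 6238 over RFC 4226), and it is small enough to sketch. The generator below is purely illustrative; it is not Kaseya's or AuthAnvil's implementation:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, timestep: int = 30, digits: int = 6, at=None) -> str:
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret."""
    key = base64.b32decode(secret_b32.upper())
    # Counter = number of whole timesteps since the Unix epoch.
    counter = int((time.time() if at is None else at) // timestep)
    msg = struct.pack(">Q", counter)  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Any RFC-compliant authenticator app will produce the same codes from the same shared secret, which is why supporting the standard would cover Authy, Google Authenticator, and the rest at once.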
We hear ya. Just to be clear, accessing HTTP over 5721 has been available for YEARS. Even before the KAF was implemented in R7 or Edge Fabric in v9.3. In fact I'm told agent procedures have had ways to "curl" content for almost a decade over this port.
In any case, I agree with you that this isn't acceptable for customers with hardened VSA environments. So today I escalated an Engineering Change Order to implement a new solution to reduce the attack surface on the agent port. After conducting a threat model on the Edge and Endpoint components, we came up with a solution we think will meet everyone's needs.
What we have implemented is an HTTP module for IIS called "VSA Access Control Sentry" that intelligently inspects incoming HTTP traffic on the agent port and decides whether it should be authorized. As an example, there are API calls in the endpoint fabric that need to use this port. General browsing, however, is not accepted, authenticated or not.
The code was completed today and submitted to QA for review. Once they ship the v18.104.22.168 patch they will run all our automated tests against the system to make sure we don't regress and introduce other problems. Once there is sign-off, we will provide a zip file to support that you can then request. The zip includes the security DLL, a PowerShell script to install the DLL into the right directories, and a README with details on a configuration section you need to add to web.config manually. Then, after running "iisreset", IIS will pick up the changes and immediately provide the protection you need.
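This is no substitute for the README, but registering a managed HTTP module in web.config generally looks like the sketch below. The module and type names here are hypothetical placeholders; the real section ships with the zip:

```xml
<!-- Hypothetical sketch only: the actual module name, type, and attributes
     come from the README included in the zip from support. -->
<system.webServer>
  <modules>
    <add name="VSAAccessControlSentry" type="Kaseya.Security.VSAAccessControlSentry" />
  </modules>
</system.webServer>
```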
We will add this into the product installer for v9.4. Until then, anyone that is following hardening guidance for Internet facing servers will be able to apply this manually to get the protection requested.
NOTE: If you allow 443 to the Internet already, there is no need to apply this safeguard as you already allow people to access the VSA web pages. However, if you are limiting the attack surface to just 5721 for agent communications this module will prevent general HTTP/S traffic to the VSA.
We hope this will meet with your expectations. We take security seriously at Kaseya, and any time we can add more safeguards to further protect our services, we will consider it. We should have this available to support sometime next week. If anyone wishes to get early access to test in a non-production environment and provide feedback, let me know and we can see about arranging that if required.
Thanks for all the feedback. We hope this solution meets your needs.
I'm sure you are correct that this problem has existed for years, which of course isn't a defence, but we all need a reality check here: it would seem that nobody has exploited this in the past.
Let us all realise that this was not a coding error, but a design flaw, and users should be worried. A flaw this large does raise questions about the overall security architecture, and clearly as a company, past AND/OR present, security was not a core component.
Kaseya has a long history of mediocrity. Users are frequently complaining, deadlines are usually missed, and support responses and response times have been less than ideal. I notice that Kaseya was voted the best MSP RMM software recently, which I think tells us more about competing products than the superiority of Kaseya.
In this particular instance, however, I think we can be happy with the response. It's easy to remain inflamed, to see the flaw and the design issues, and conclude that everything is broken. But when the CTO has personally listened to the complaint, initiated a fix process, provided an out-of-band temporary fix for those who want it, AND a permanent fix will ship in the next update for all clients, we have to be at least a little thankful. Yes, Kaseya should be taken to task, and it's true this is a flagging product with a long way to go, but we at least have a CTO who listens, and in this case helped enormously. We should all remember to judge each case on its merits; this time Kaseya performed well.
My sole remaining concern is that this flaw is now public knowledge. I would highly recommend that when the patch is available, an email is sent to each client explaining that they will need to patch if this affects them. It is only a small matter of time until someone writes an exploit for this that is executed before the next major patch. It would be a shame if communication let Kaseya down on this issue when the technical and support arms did a good job. We have to remember as MSPs that we are all high-value targets for attackers, and Kaseya is a supremely valuable target that needs to act accordingly.
Thank you so much for your last response indicating "We hear ya". I have a non-production test environment ready and would love to get early access to the zip if possible as this issue is currently holding up a 6.5 to 9.3 migration for me.