We've had a service ticket open for over two weeks. Our VSA is SaaS and it has been barely usable for over 3 weeks. Extremely slow to accomplish anything.
Has anyone seen the same issue, and if so, what explanations or solutions have you been given?
Yes, we have been VERY slow too. I opened a ticket a couple of days ago because it became absolutely unbearable to wait for 45 seconds for a "Quick View" window to open when hovering over an agent icon (an oxymoron for sure). I just heard back from support saying that they "logged in yesterday and today and did not see any performance issue." I was hoping for some information about logging that could/should be happening to monitor performance issues. :(
We had the same slowness issue. Yes, 30-45 seconds for just about anything. I opened a ticket, and by some miracle it started to speed up right when they replied saying it seemed fine to them.
Very slow for us yesterday. It took me almost 20 minutes to log into VSA because the AuthAnvil 2FA was not working well and would time out before the app received the approval request. Then VSA ....
This is absolutely not something we want you settling for. If you are seeing latency in your SaaS tenant, I want to know which tenants and when it happens, if possible, and I want to make sure it's addressed and taken seriously.
Please send me an email with something like "SaaS is Slow" in the subject line. Let me know the name of the SaaS server you're using (e.g. https://na1vsa105.kaseya.net/), any time ranges where you know performance was bad or when it improved, how many agents you have... and any other detail you feel is helpful. We can correlate this with the telemetry we collect and figure out what is causing this.
I don't want to look at this as a one-time incident that gets closed as soon as we hit a period when the server starts performing well. I want to match up our monitoring data with when you experience the issue so we can begin to understand from our monitoring what thresholds represent a state where user experience is being impacted... then I want to fix it.
Email address below...
Kirk, thank you for responding. We will send you our details and ticket information via email.
Anyone else getting any traction with the slowness issue on SaaS? We are at a crawl and can barely use the VSA. It's really hurting our ability to support clients during the COVID-19 pandemic.
Europe (DK) on saas45: some trouble connecting today, but after reloading, no problem.
Just wanted to jump in and provide an update. I don't want the silence to be interpreted as inaction.
As those on this thread have observed, we have been experiencing intermittent issues (sometimes seriously impacting service availability) with some of our SaaS servers.
Please know we are aware and working hard on the issues. Kaseya has assembled a team representing all internal departments involved in service availability, with the authority necessary to facilitate appropriate and necessary change. We are dedicated to resolving these issues. This team is meeting every day to review effects and action new changes, and it has oversight of the field engineering teams working on the resolutions.
The challenge is, it's not one issue, and it's not always the same issue on all servers. Things have definitely been impacted by the usage pattern shift resulting from the entire world moving to "work from home" over the course of this month. As I am sure many of you have experienced, the effect of this global pandemic has extended to the Internet in ways we didn't imagine, and specifically the impact on how our systems operate has been incredible to observe. These are truly extraordinary times.
We have observed global changes/effects in bandwidth usage patterns, file distributions, scheduled events, LAN Cache availability, P2P/STUN connections, new agents being deployed, entire businesses (even cities) seemingly going offline as new areas light up, the number of admins online, RC sessions, usage patterns, events being collected, ticket volumes, script executions, AV deployments, VPN deployment and usage... just to name a few of the most obvious.
We have been identifying issues and opportunities by the hour, then resolving, implementing, testing, and measuring the effects in real time. We have moved some servers around to different hosting centers, balanced host/guest configuration, and identified user misconfigurations... like instances where clients are running "Update List by Scan" on a recurring basis (DON'T DO THIS! This process generates a huge volume of DB inserts and can adversely affect a server with just a few agents). We have added indexes, truncated tables, optimized long-running queries... as well as many other things. We are seeing improvements everywhere, but we are still seeing spikes that I am certain you are all feeling. Please bear with us; we are working around the clock to get everything running smoothly.
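For anyone curious why adding an index is one of the go-to fixes mentioned above, here is a minimal, generic sketch of the effect. This uses Python's built-in sqlite3 with a hypothetical `events` table; the table and column names are purely illustrative and have nothing to do with Kaseya's actual schema.

```python
import sqlite3
import time

# Hypothetical high-volume event log; names are illustrative only,
# NOT Kaseya's actual schema.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (agent_id INTEGER, created_at TEXT, payload TEXT)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [(i % 500, f"2020-03-{(i % 28) + 1:02d}", "x") for i in range(200_000)],
)

def timed_count(agent_id):
    """Count one agent's events and report how long the query took."""
    start = time.perf_counter()
    (n,) = conn.execute(
        "SELECT COUNT(*) FROM events WHERE agent_id = ?", (agent_id,)
    ).fetchone()
    return n, time.perf_counter() - start

n_before, t_before = timed_count(42)   # no index: full table scan
conn.execute("CREATE INDEX idx_events_agent ON events (agent_id)")
n_after, t_after = timed_count(42)     # with index: direct lookup

assert n_before == n_after             # same answer, cheaper query plan
print(f"scan: {t_before:.4f}s  indexed: {t_after:.4f}s")
```

On a table this size the indexed lookup is typically far faster than the scan, which is why a missing index on a hot query can make a whole server feel sluggish. The same reasoning explains why the recurring "Update List by Scan" pattern hurts: it is the write-side equivalent, flooding the database with inserts.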
We are also improving our telemetry to call out issues trending toward an impact, so we can respond ahead of service interruptions.
If any of you have any projects that might be a heavier lift than normal due to current events, feel free to message me, engage our services teams, or start a dialog here on the forums... there are a lot of very smart folks here who may be able to help minimize the effort and impact. I realize these are extraordinary times, and I am sure all of you qualify as "essential business" in your respective areas... we are committed to making sure you have the tools and resources to care for your customers and communities during this global crisis. We are all "first responders" in a critical aspect of our global economy, which has never in history been so immediately disrupted as what we are going through today.
Also, many of you have emailed me when issues are happening. There is enough volume that I have decided to spend the time reacting rather than responding, so please don't take it as ignoring you. If you DO need to reach me personally, my cell is in my sig line in any email I have ever sent. Text or call, and leave a message if I don't answer. I trust you all not to force me to change that; my kids just memorized it :-)
Thanks for understanding (or at least tolerating), I won't quit till things are running as they should. Please, stay safe, be well and stay with me.
Global Product Enablement
Kirk, we've seen improvements recently, thank you for your leadership in getting this addressed. To everyone, please stay safe.
Thanks for the update rattrap!
Keep in mind, even if we fix our systems, there are factors out of our control. Thought this might be an interesting read and something to help educate your customers. This will eventually affect everyone. I have read other articles about many cloud systems buckling under the load, with Azure seemingly hard down from some countries. Might be an interesting thread to start so we can share stories, if anyone is so inclined :-).
I spoke with a migration tech today about migrating from on-prem to SaaS. I brought up this very issue, and he was VERY aware of the slowdown issue on SaaS. According to the tech, there is no real good solution currently. He did allude to some of the things Kirk mentioned above.
This is a huge red flag for me, and we will probably hold off on migrating to SaaS until this is sorted.
It is good to see Executive Leadership addressing these issues as well. A+ for effort.
Any particular reason for the migration?