I can agree that Live Connect is a little bit faster. But really, the rest is a lot slower than it was in 6.2. I thought 6.3 would be an update that made the interface a bit faster.
1. First of all, when I'm selecting a procedure it takes about 10 seconds before I can see any machines. For the first 10 seconds it just says "No records found."
2. "Loading agent status" after changing page in the navigation. It sometimes takes around 10 seconds before the agents appear.
3. Audit is much slower now, whether applying an audit or trying to get some information from a computer.
Overall, I'm getting the feeling that it's much slower. What do you think?
I haven't updated yet, but would love to hear from those who have: does it seem slower?
The interface is slower for us too.
If I check our KServer, the CPU usage has gone through the roof - it used to average 15% on 6.2, and now on 6.3 it is 60%. And oddly, even though SQL has 12 GB assigned to it, it is only using ~200 MB, whereas it used to use over 10 GB. Anyone else experiencing this?
The same thing here. Our Kaseya database server CPU usage is much higher after updating to 6.3. And really, it's bad.
I've been troubleshooting our upgrade to 6.3 for about a week now.
We have a separate K-Server and database server in order to get the most out of performance, everything configured within the recommended resource thresholds (i.e. about 8.5 MB per agent) and per Kaseya's best-practice instructions.
The troubles started about 2 days after the upgrade, when Service Desk tickets would take a few minutes to get through the processing steps, which is normally instant. When I checked the database server, the CPU load had scaled out of proportion, while the K-Server was almost idling as usual. I called our local Kaseya support, and all they could see was a lot of scheduled procedures, but that's nothing new compared to before the upgrade.
I've now followed countless instructions, both from Kaseya and public gurus, in order to optimise the database usage. I've done everything from index rebuilds and defragmentation of the DB to spreading out script deployment on the agents. Unfortunately, I can't tell which steps actually helped, but after a few days of hard work I believe the worst storm is over.
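For anyone wanting to run the same kind of DB maintenance, this is roughly the fragmentation check I mean (standard SQL Server DMVs, run in the ksubscribers database; the commented-out rebuild uses placeholder table/index names, not actual Kaseya objects):

```sql
-- Run in the context of the ksubscribers database.
-- Lists indexes with more than 30% fragmentation, worst first.
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID('ksubscribers'), NULL, NULL, NULL, 'LIMITED') ips
JOIN sys.indexes i
  ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC;

-- Then rebuild the worst offenders, e.g. (placeholder names):
-- ALTER INDEX [PK_SomeTable] ON [dbo].[SomeTable] REBUILD;
```

The usual rule of thumb is REORGANIZE between roughly 10% and 30% fragmentation and REBUILD above that, but check with Kaseya support before doing this on a production VSA database.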
The fact is we never had these problems with 6.2. Sure, like Fredric said, the GUI was anything but fast, but I honestly can't see much improvement in 6.3 in terms of sheer speed.
Talking about speed, working with agent procedures has gone from pretty slow to almost unbearable. When I click the "Edit procedure" button, the new interface takes 22 seconds to load a 3-line procedure; that's not OK. If the CPU/DB usage peaked during this action, I would understand why it took so long, but during these 22 seconds the servers don't even flinch.
Let's hope Kaseya works day and night to resolve the issues and bring us that 400% speed improvement they promised us.
We have been having these issues since the 9th of January 2013, about 2 days after we upgraded. Kaseya support has, as far as I know, 4 similar cases (ours included) where the TempDB is the issue. It is the heavy I/O on the TempDB, caused by a query (I don't know which one, but it's something from the IIS application pool called VSAPRES).
We have 1k+ agents, divided over more than 100 subnets (retail stores), which we wake up every morning at 06:00 and run the backup/logoff/shutdown procedures on at 23:00.
Since those are the only times we use real resources, and the problem persists the whole day long, our procedures cannot be the cause.
Our case number is CS143130. They have been investigating for more than 2 weeks now, and we had to limit our KServer to 100 IOPS or else it would harm our other virtualized servers.
It's good to hear we're not alone Ferry.
When I look at the I/O statistics for TempDB, it's pretty constant at 10,000 MB... seems a bit excessive when ksubscribers averages about 1,000.
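For anyone who wants to compare TempDB I/O on their own server, a query against the standard file-stats DMV (figures are cumulative since the last SQL Server restart):

```sql
-- Per-file I/O for tempdb since the last SQL Server restart.
SELECT DB_NAME(vfs.database_id)              AS db,
       mf.name                               AS file_name,
       vfs.num_of_reads,
       vfs.num_of_writes,
       vfs.num_of_bytes_read    / 1048576    AS mb_read,
       vfs.num_of_bytes_written / 1048576    AS mb_written,
       vfs.io_stall_read_ms,                 -- time spent waiting on reads
       vfs.io_stall_write_ms                 -- time spent waiting on writes
FROM sys.dm_io_virtual_file_stats(DB_ID('tempdb'), NULL) vfs
JOIN sys.master_files mf
  ON vfs.database_id = mf.database_id AND vfs.file_id = mf.file_id;
```

High io_stall values relative to the number of operations point at the storage layer rather than the query workload.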
We've done a fair bit of checking on performance, and it still looks like we have to buy a new machine, which I don't like because I think a lot of this is poor programming.
- We run 2 servers in the configuration. The app server is almost always idle. SQL Server runs typically between 60% and 80% and sometimes nails 100% and stays there. Yes, this is certainly worse than it was on 6.2.
- If we reset SQL, the CPU will max out while it catches up on event logs and monitoring, but it will certainly work better after it catches up. Then after 1-2 days it slows down again. The slowdown after a few days tells me they have something leaking.
- This happens regardless of what is happening with scripts.
- We have a report that shows the number of event log entries for the last hour for the top 30 machines. We only capture errors and warnings. If we see a machine with more than 100 entries, there are some performance issues. If we see a machine with more than 400 entries, it nearly kills the SQL database.
- TempDB is the largest issue in SQL.
- We know KAM creates an SQL procedure that takes several minutes to run and always shows at the top of the list if you dig into the statistics of Kaseya. We leave KAM disabled for this reason.
- @Eddy - If you are seeing only about 200 MB used for SQL, this is probably AWE. It's actually a fine way to run it normally. You need to go into the Windows performance counters and look at the SQL Server Memory Manager counters (Target Server Memory and Total Server Memory, I believe they are called). These show what SQL is actually utilizing. Task Manager shows what is available to other processes in this case, assuming SQL will release that memory if required. The performance counters will show you, for that size of database, what the target memory will be (typically equal to the size of the database) and how much is utilized. If the amount utilized is less than the target, then you have a memory shortage.
- Looking at the SQL procedure execution times, we see a number of SQL procedures taking a significant amount of time to run. I wish Kaseya would put some focus on these procedures and give us a baseline for what they should be and what they do, so that we could understand whether we have problems in our database.
- We do see KLC connecting a little faster than 6.2, as long as everything is not overloaded.
- We regularly get messages that our KServer is overloaded.
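For anyone wanting to check the same two things (SQL memory vs. target, and which procedures eat the CPU), a couple of standard DMV queries; note that `sys.dm_exec_procedure_stats` needs SQL Server 2008 or later, and its figures reset whenever a plan leaves the cache:

```sql
-- Target vs. actual SQL Server memory (Memory Manager counters).
-- If Total is well below Target, SQL wants more memory than it has.
SELECT counter_name, cntr_value / 1024 AS value_mb
FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)');

-- Top 10 cached stored procedures by total CPU time.
SELECT TOP 10
       OBJECT_NAME(ps.object_id, ps.database_id)          AS proc_name,
       ps.execution_count,
       ps.total_worker_time / 1000                        AS total_cpu_ms,
       ps.total_elapsed_time / ps.execution_count / 1000  AS avg_elapsed_ms
FROM sys.dm_exec_procedure_stats ps
ORDER BY ps.total_worker_time DESC;
```

The second query is also a quick way to confirm claims like the KAM procedure sitting at the top of the list.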
Please keep us updated, because we are hoping to upgrade and we certainly do not need performance problems. How many agents are you servicing? We are over 10,000, so if you have problems with a lot fewer agents, this is concerning for us.
Do any of you monitor the CPU usage on your database/Kaseya server? If so, could you possibly post your usage? Here is the performance on ours for the past 7 days.
I've been very pleased with the performance gains from 6.2 to 6.3. I had a very frank discussion about Kaseya with my Service Desk Engineers just last night, and noted that they unanimously agreed that:
> The interface is "zippier"
> The VNC-based Remote Control connects much faster
> Live Connect is "solid"
That said, how something "feels" is often subjective, so to quantify it, here's what we've got (bearing in mind that we've sized this according to the System Requirements listed here: http://help.kaseya.com/WebHelp/EN/VSA/6030000/Reqs/K2-System-Requirements63.htm)
Agents Supported: 4,500 and counting
Modules In Use: See below...
Both Frontend & Database Servers:
> Windows Server 2012
> Virtualized using VMware ESXi 5.1
> Storage is on a NetApp 3210, communicating via fiber & NFS
Frontend Server
> 12 GB RAM
> 4 Cores (1 socket, 4 cores/socket)
Database Server
> 24 GB RAM
> 16 Cores (4 sockets, 4 cores/socket)
> Database, TempDB, OS and SQL backups on separate volumes
I think one of the big keys here is the reservations that we have in place with our virtual infrastructure. I watched and took many notes on the Performance Tech Jams (Part 1: http://community.kaseya.com/resources/m/techjams/72733.aspx, Part 2: http://community.kaseya.com/resources/m/techjams/79493.aspx) as well as the Scaling Kaseya Servers Tech Jam (http://community.kaseya.com/resources/m/techjams/74666.aspx), and worked extensively with our Project Engineers to make sure our infrastructure could handle the requirements of this system (which is obviously business critical for us).
I should also note that the "Modules In Use" figure I'd mentioned above doesn't do justice to the depth at which I'm utilizing these modules... our ScriptThenElse table is nothing to shake a stick at, and any managed machine might have as many as 20-25 different Policies applied to it--not to mention 3-6 Patch Policies applied to each, and over 50 Views used for Policy Management alone... so we're definitely giving the system a workout.
Anyway, on to the pretty pictures... these are from today (Friday) when we've had a "full house" of Field Service Techs, Service Desk Engineers, Project Engineers, etc. and all of our clients were "in business." Also of note: many of our clients' workstations patched this morning as well.
Total CPU Utilization On Database Server:
Disk Latency On Database Server:
Memory Utilization On Database Server:
Total CPU Utilization On Application Server:
Disk Latency On Application Server:
Memory Utilization On Application Server:
So, hope that helps offer a baseline for what we're seeing in the "real world" with around 4,500 Agents.
There are many other threads talking about the benefits of 6.3 (I love the new multi-tabbed Agent Procedure editor ), but as this is a performance-related thread, I'll just leave it here by saying that we're quite pleased with the results of the performance enhancements that the Kaseya Devs have been working on... we definitely feel it. Keep it up!
Thanks @ispire - we aren't using AWE, but have SQL set to use a minimum of 8 GB and a maximum of 12 GB.
We have been having our drive fill up with SQL error logs too, getting the following error over 100 times a second!
An exception occurred while enqueueing a message in the target queue. Error: 15517, State: 1. Cannot execute as the database principal because the principal "dbo" does not exist, this type of principal cannot be impersonated, or you do not have permission.
I think this is causing the problem!!! Now to work out how to fix it. Is anyone else seeing this?
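A quick way to check whether you have the condition behind error 15517 (the database owner SID not mapping to a valid login) is to query the standard catalog views:

```sql
-- If mapped_owner comes back NULL, the owner SID of ksubscribers
-- doesn't map to any server login - the classic cause of error 15517.
SELECT d.name,
       SUSER_SNAME(d.owner_sid) AS mapped_owner
FROM sys.databases d
WHERE d.name = 'ksubscribers';
```

This typically happens after a database is restored or attached on a different server, where the original owner's login doesn't exist.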
I'm actually surprised that Kaseya is only marginally slower at the moment given this; hopefully it will be much faster if I can stop the error.
Running the following has fixed SQL Error 15517 being logged hundreds of times a second:
ALTER DATABASE [ksubscribers] SET TRUSTWORTHY ON;
ALTER AUTHORIZATION ON DATABASE::[ksubscribers] TO sa;
Since I fixed the 15517 error, CPU has dropped from averaging 60% to 20%, and everything seems to be working much faster :)
Big test will be on Monday morning when more agents are online and more techs are working
Thanks for posting your experiences and the SQL fix. I am curious, did you come up with that on your own, or did it come from Kaseya Support?