Kaseya Community

Splitting K2 server to two servers: DB back end and Web front end...


Has anyone run into this at all? Our organization is currently at about 2,000 agents, and we will most likely need to split the web services and DB processing onto separate servers. I wanted to see if anyone had done this and could warn me of any hiccups that might show up. Thanks in advance, all!

All Replies
  • We run two, and wouldn't do it any other way. We have 6100+ agents and since the beginning we've done the two server setup with little problem.

    Biggest question I have is: are you looking at bringing up a new set of servers to run side by side, or simply splitting one role off onto a new server?

    If one new server, I personally would think it easier to move the front end and leave SQL where it lies. Run the Kaseya setup on the new front end and make your DNS change to point agent check-in at the new server. The only part you might need to verify is the database location setting, if you referenced it by something other than the machine name.

  • Good answer by Todd.

    We currently have around 2,500 agents running on one server (SQL is on the same server too), and K runs fine. So looking at your agent count, I'd ask: why split between two servers?

  • Your server may be beefier (is that a word?) than his. The second server he'll use to split off SQL may just be lying around, and therefore wouldn't be a hardware expense to get better performance. Rather than spending money on a new box or an upgrade, use what you have available and split resources between two (or more) moderately powered boxes. The end result should be the same, not counting the shorter mean time to hardware failure you accept by using older equipment.

  • I am currently using a Dell PE R710, configured pretty well (it cost me around $6K). So I hope he would take the same approach. If not, why?

  • Should have done this a LONG time ago. :)

    We switched ours after 300 agents and haven't regretted it. It's simple: move the SQL DB over to the new server, log in, change the DB server location in K, and you're done. Kaseya actually has a document to help with this.

  • I run two servers with dual hex-core 2.4GHz Xeon processors, 64GB RAM, and SSD drives, connected via 10Gb NICs, and the performance is just OK. I finally gave up complaining and just accept the fact that it will be slow.

  • Oh, I have 2K agents.

  • We have 16K+ agents, and we have been split between two servers for a few years now. Recently we were experiencing major slowdowns and problems with KRC not relaying and connecting properly. We run a highly object-oriented set of procedures that results in the core routines being called upwards of 1,400 times per hour.

    The solution was to performance-tune the SQL server by adding additional files to tempdb and making sure the databases were running on drives that support very high IOPS.

    If tuning the SQL server could restore our instance to fast, productive operation, I'm sure it would help others in similar situations.
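
    For anyone wanting to try the tempdb change described above, a minimal T-SQL sketch looks like the following. The file names, drive letter, and sizes are placeholders, not the poster's actual configuration; a common rule of thumb is one tempdb data file per CPU core, up to eight, all the same size.

    ```sql
    -- Hypothetical example: add extra, equally sized tempdb data files so
    -- allocation contention is spread across files. Paths and sizes are
    -- placeholders; put them on the high-IOPS volume.
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdev2.ndf',
                  SIZE = 4GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev3, FILENAME = 'T:\TempDB\tempdev3.ndf',
                  SIZE = 4GB, FILEGROWTH = 512MB);
    ALTER DATABASE tempdb
        ADD FILE (NAME = tempdev4, FILENAME = 'T:\TempDB\tempdev4.ndf',
                  SIZE = 4GB, FILEGROWTH = 512MB);
    ```

    Keeping every file the same size matters, because SQL Server's proportional-fill algorithm only balances allocations across files of equal size.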

  • jeez this thread is 7 years old...

  • Since this thread has already been necro'd...

    We have split VSA/SQL in prod - 4-core/12G RAM for VSA, 16G for SQL. 3200 agents and no performance issues. Both systems are on Hyper-V with an HP MSA 2000 SAN.

    Using SSDs does not guarantee performance. We had a NAS with SSDs hosting our original server environment that I replaced with a SAN. Both used the 10G iSCSI protocol, but the SAN used a combination of 7.2K and 15K RPM drives. Everyone in the office was amazed by the performance gains. The issue, simply, was that the SSD drives were SATA and the rotating drives were SAS. SAS supports a much deeper command queue, so the controller can keep many more outstanding commands in flight across all the SAS drives in an array at once, while SATA drives fall far behind under the same load. SATA SSDs are $150-200, while SAS are $800 and up. Not sure why, because the price difference between SAS and SATA rotating media is pennies.

    I also find that many MSPs don't distribute their data across multiple volumes. Even recently, we worked with an MSP that built a new VSA with a single 600GB C: drive and installed everything there. Their SQL server was likewise configured. Not a good choice if you expect any kind of performance. We have 6 disk volumes on our VSA and 4 on our SQL server to distribute the I/O load effectively. You can get a detailed document on how we configure this from our website on the Downloads/Tech Briefs page (www.mspbuilder.com). We studied the I/O patterns on the VSA to determine where the heaviest activity was and assigned those paths to mounted volumes. As for SQL, best practice is to always place Data and Logs on separate disks; we use C:\SQL as a root with mounted volumes for Data, TXLogs, and Logs.
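
    As a rough sketch of that separation (the paths follow the C:\SQL layout described above, but the database name and sizes are illustrative placeholders, not the actual configuration from the tech brief):

    ```sql
    -- Hypothetical sketch: data file and transaction log placed on separate
    -- mounted volumes under a C:\SQL root, so their I/O patterns don't
    -- compete on one spindle set. Names and sizes are placeholders.
    CREATE DATABASE example_db
    ON PRIMARY
        (NAME = example_db_data,
         FILENAME = 'C:\SQL\Data\example_db.mdf',
         SIZE = 10GB, FILEGROWTH = 1GB)
    LOG ON
        (NAME = example_db_log,
         FILENAME = 'C:\SQL\TXLogs\example_db.ldf',
         SIZE = 2GB, FILEGROWTH = 512MB);
    ```

    The point of the split is that data files see mostly random reads while the transaction log is written sequentially, so each volume can be provisioned for its own access pattern.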


  • Glenn, your website just came up flagged with malware by BitDefender's Traffic Light.

    Thought you should know.

  • Thanks - I've reached out to Bitdefender to identify the reason. This started just this past week.