What is the recommended installation setup for VSA 220.127.116.11? The docs haven't been updated in over a year, and we are running into several problems during the installation process.
Windows Server 2012 R2? Windows Server 2016?
SQL Server 2012 Express? SQL Server 2014 Express? SQL Server 2016 Express?
If it helps, we are on Server 2016 and SQL Server 2016 Enterprise. Works great.
If you're building a VSA/SQL on-prem, be sure to review our VSA Performance document on the Downloads/Documents page. One of our larger MSP clients (3,600 agents) runs Server 2016 for VSA and Server 2016 Core for SQL. Those machines have about 2/3 of the recommended CPU and RAM yet still perform very well, which leaves plenty of headroom to scale. They run on Hyper-V Server 2016.
I tried SQL Express 2014, 2016, and 2016R2 on a German Windows Server 2016 and always get the same error.
All firewalls and virus detection are disabled, and the datacenter firewalls do not block any traffic from the machine.
2019-05-22 10:09:48Z: Error starting service: Kaseya Utility Service
Message: The service did not respond to the start or control request in a timely fashion
Message: Cannot start service Kaseya Utility Service on computer '.'
Stack Trace: at System.ServiceProcess.ServiceController.Start(String args)
at KaseyaInstaller.Common.Service.StartService(String serviceName, Int32 timeoutMilliseconds)
2019-05-22 10:09:48Z: Service start exception, not retrying: Kaseya Utility Service
2019-05-22 10:09:48Z: Starting service: KaseyaDirectoryIntegrationService
2019-05-22 10:09:48Z: Started service: KaseyaDirectoryIntegrationService
2019-05-22 10:09:48Z: Error downloading.
Message: The process cannot access the file "C:\Kaseya\WebPages\themes\default\images\logo.gif" because it is being used by another process
Stack Trace: at System.IO.__Error.WinIOError(Int32 errorCode, String maybeFullPath)
at System.IO.File.InternalCopy(String sourceFileName, String destFileName, Boolean overwrite, Boolean checkHost)
at KaseyaInstaller.WizardPages.Helpers.InstallProgressHelper.UpdateClassicTopbarLogo(String themesDir)
at KaseyaInstaller.WizardPages.InstallProgress.InstallProgress_Shown(Object sender, EventArgs e)
2019-05-22 10:09:48Z: WizardHelper.OnLoad for Form ExceptionDialog.
Any suggestions as to what is missing?
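When a long installer log only shows a couple of interesting lines, it can help to pull out just the error entries before posting. A minimal sketch that does this for the log format shown in the excerpt above (the timestamp prefix and the "Error"/"exception" keywords are assumptions based on that excerpt; other installer versions may format lines differently):

```python
import re

# Assumed line format, taken from the excerpt above:
#   "YYYY-MM-DD HH:MM:SSZ: message"
LOG_LINE = re.compile(r"^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}Z): (.*)$")

def extract_errors(log_text):
    """Return (timestamp, message) pairs for lines that report an error."""
    errors = []
    for line in log_text.splitlines():
        m = LOG_LINE.match(line)
        if m and ("Error" in m.group(2) or "exception" in m.group(2)):
            errors.append((m.group(1), m.group(2)))
    return errors

# Sample input using lines from the log above.
sample = """\
2019-05-22 10:09:48Z: Error starting service: Kaseya Utility Service
2019-05-22 10:09:48Z: Starting service: KaseyaDirectoryIntegrationService
2019-05-22 10:09:48Z: Error downloading.
"""

for ts, msg in extract_errors(sample):
    print(ts, msg)
```

For the "file is being used by another process" error on logo.gif, the usual next step is to find out which process holds the file open (for example with a Sysinternals tool) before re-running the installer.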
We have an on-prem VSA (hosted in Azure) with ~7,000 agents, and I do plan on implementing what you talked about in the document. However, I have a question about spanning the page file across those extra drives.
Currently the page file is set to a small 400 GB "temp" drive that was provided along with the VM we chose within Azure. It does not show up as a data disk for the VM when you look at the VM properties within Azure. It is defined as a "Virtual HD ATA Device" in Device Manager.
For an added performance boost, I was thinking of spanning the OS paging file across the temp drive and the other drives we would be adding per your documentation. Our current configuration is a C drive for the OS, an E drive for Kaseya, and the D drive for temp storage. Another option would be to add two more small drives (a total of 3 with the E drive) dedicated to the paging file. I would still implement your suggestions as well. This is how they show up in Device Manager:
C (Virtual HD ATA Device)
Location 0 (Channel 0, Target 0, Lun 0)
D (Virtual HD ATA Device)
Location 1 (Channel 0, Target 1, Lun 0)
E (Microsoft Virtual Disk)
Bus Number 0, Target Id 0, LUN 0
If you're referring to our document, then no - we have a single 4 GB volume for paging, with a 3172 MB pagefile (fixed size). I would not arbitrarily add paging space/volumes without a PerfMon data collection over 5-7 days to track the actual requirements, and never place a PF on a data drive.
If your pagefile is set to "automatic", I'd remove it, reboot, defrag the drive, then recreate it with a fixed size. In some hosting environments the PF disk is recreated at each boot, so no defrag would be needed. I did some analysis for a financial company whose application runs took 15-16 days. Just by moving the PF to a dedicated drive with a fixed size (and thus eliminating over 40K fragments!), we shaved two days off the processing time. Based on PerfMon data, we also reduced the PF on that system from 64 GB to 8 GB (with 512 GB of RAM). More isn't always better. :)
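To make the "size it from PerfMon, don't guess" point concrete, here is one way to turn the peak pagefile usage observed over a 5-7 day collection into a fixed size. This is only an illustrative sketch: the 1.5x headroom factor is my assumption for the example, not a Kaseya or Microsoft recommendation, and the real number should come from your own PerfMon data:

```python
def fixed_pagefile_mb(peak_pagefile_usage_mb, headroom=1.5):
    """Suggest a fixed pagefile size (initial == maximum) in MB,
    given the peak pagefile usage seen in a multi-day PerfMon collection.
    The 1.5x headroom factor is an illustrative assumption only."""
    size = int(peak_pagefile_usage_mb * headroom)
    # Round up to the nearest 4 MB so the size aligns cleanly.
    return size + (-size) % 4

# Example: a 2100 MB observed peak suggests roughly a 3152 MB fixed pagefile.
print(fixed_pagefile_mb(2100))
```

Setting initial and maximum to the same value is what keeps the file from growing and fragmenting between reboots, which is the effect described above.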
There is some canned reporting available by default for the VM in Azure. I will take a look.