Does anyone know which modules (via Add/Remove Programs) can be removed from Kaseya On-Premise VSA?
We don't use Time Tracking, Desktop Policy & Migration, Service Desk, Software Management, or Enterprise Mobility Management.
I think I read that Service Desk and EMM can be removed, but I'm looking for some input.
Trying to remove anything that uses resources on the server to improve performance for our 13,000+ endpoints.
I don't remember which ones now, but I had removed a bunch of them that we didn't use, then had to add them back the next time I applied patches because the patch process wouldn't let me proceed otherwise; it considered them "core" modules. My advice is to remove whichever ones you think you can go without, then run the patch process to find out which ones you have to add back :)
Anyone have any experience with this? Seems like we have a lot installed that we don't use, and I'm trying to remove unnecessary clutter from the VSA as well as items that are using resources on the server.
Since you're on-prem, you have the ability to optimize your platform performance. I'll assume you already have a split platform with a separate SQL host, but there are things you can do to your VSA to improve performance. One of the things we did as our customer's platforms started to grow was to analyze the systems. We found that several parts of the system had lots of Disk I/O. These customers had single, large drives for the VSA, and in some cases, it was all on the C: drive!
One of the things we did for them to help performance was to add a dedicated paging disk. We determined the paging requirements, added a disk 12-15% larger than required, then defined a fixed-size paging file on that drive, leaving a minimum of 10% free (anything less results in Event Log warnings). We then set the pagefile on the C: drive to be just large enough to hold a mini-dump in the event of a crash. Separating paging from the O/S disk, using fixed-size page files, and using appropriately sized pagefiles has been very helpful in optimizing performance.
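As a rough PowerShell sketch of that pagefile setup - the P: drive letter and the sizes here are illustrative examples, not Kaseya-specific numbers:

```powershell
# Turn off automatic pagefile management so fixed sizes stick
Get-CimInstance Win32_ComputerSystem |
    Set-CimInstance -Property @{ AutomaticManagedPagefile = $false }

# Fixed-size pagefile on the dedicated paging volume (P: is an example).
# Size the volume 12-15% larger than this so at least 10% stays free.
New-CimInstance -ClassName Win32_PageFileSetting -Property @{
    Name        = 'P:\pagefile.sys'
    InitialSize = [UInt32]32768   # MB; initial = maximum gives a fixed size
    MaximumSize = [UInt32]32768
}

# Shrink the C: pagefile to just enough to capture a small memory dump
Get-CimInstance Win32_PageFileSetting -Filter "Name='C:\\pagefile.sys'" |
    Set-CimInstance -Property @{ InitialSize = [UInt32]800; MaximumSize = [UInt32]800 }

# Restart-Computer   # changes take effect after a reboot
```

The fixed size is the important part: initial = maximum prevents the pagefile from growing/fragmenting under load.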
Another thing that we did was to add more disk volumes to the VSA. If the VSA was on C:\Program Files, we started by shutting everything down and renaming the Kaseya folder. We created a new Kaseya folder, set permissions, and mounted a volume there - again, setting permissions on the volume itself. We created a new directory structure and mounted volumes within the Kaseya folder structure wherever our performance analysis showed high levels of I/O. We then copied the renamed Kaseya folder contents to the new folder structure and restarted everything. This was about an hour or so of downtime, but the performance gains were significant. For systems that are already installed on a different disk (like D:\Kaseya), we simply temporarily mount new volumes, move the folder data into the new volumes, and then unmount/re-mount on the now-empty folder.
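The mount-and-copy step might look something like this in PowerShell. The folder path, disk/partition numbers, and service filter are placeholders you'd adapt from your own I/O analysis:

```powershell
# Stop the VSA (Kaseya services + IIS) before touching the folders
Get-Service -DisplayName '*Kaseya*' | Stop-Service
iisreset /stop

# Mount a new, dedicated volume directly into a high-I/O folder
# (disk/partition numbers are examples; the path is hypothetical)
$target = 'D:\Kaseya\WebPages\ManagedFiles'
New-Item -ItemType Directory -Path $target -Force | Out-Null
Add-PartitionAccessPath -DiskNumber 3 -PartitionNumber 2 -AccessPath $target

# Copy the renamed folder's contents into the mounted volume, preserving ACLs
robocopy 'D:\Kaseya.old\WebPages\ManagedFiles' $target /E /COPYALL /R:1 /W:1

iisreset /start
Get-Service -DisplayName '*Kaseya*' | Start-Service
```

Mounting the volume into the folder (rather than a new drive letter) means nothing in the VSA's configuration has to change.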
This entire process is documented in a "Tech Brief - Optimum VSA Storage Layout" document that is available in the Products/Downloads/Documents section of our website. It references the most common high-load folders that will benefit from separate volumes, but some performance analysis will easily identify whether other modules would benefit from this treatment. Be sure to use distinct disk volumes and not partitions - disk volumes have a 1:1 relationship between physical and logical I/O buffering and caching, while partitions have a many:1 relationship to the physical I/O, which re-creates the very bottleneck you're trying to eliminate. If you're using local storage, this will mean creating individual LUNs at the disk controller to be effective. In a virtual environment, deploy storage from different volumes. The document also covers how to separate the data, log, and TX log volumes on the SQL server to obtain similar benefits on the back-end.
Avoid SATA disks at all costs for local storage, even SSD. We replaced an array of SATA SSDs with SAS 7.2K rotating media (an array of 8TB disks, presented to Hyper-V as many 1.5TB LUNs). The improvement in performance was startling. SATA disks don't provide multiple command queuing or overlapped seeks, while SAS disks usually support dual data ports, command queuing, and command optimization to deliver data more effectively. The cost difference between SAS and SATA is minimal, especially for rotating media. SAS SSD still seems to carry a premium for some reason, but given our experience, it may not be worth it. We did deploy some 15K RAID-10 storage for the SQL server, but the primary storage is SAS 7.2K RAID 6. They're using an HP 10G iSCSI SAN for the Hyper-V cluster environment in that case.
If you're not using the modules, the load they present would likely be minimal. If I were that concerned with it, I might stop/disable the related services and then remove those features from all the User Roles so they can't be accessed.
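A quick sketch of that service approach in PowerShell. The display-name filter is a guess, so inventory the services first and disable only what you've confirmed belongs to an unused module:

```powershell
# Inventory the Kaseya-related services first
Get-Service -DisplayName '*Kaseya*' | Sort-Object DisplayName |
    Format-Table Name, DisplayName, Status, StartType

# Then stop and disable one you've confirmed belongs to an unused module
# ('SomeUnusedModuleService' is a placeholder, not a real service name)
Stop-Service -Name 'SomeUnusedModuleService'
Set-Service  -Name 'SomeUnusedModuleService' -StartupType Disabled
```

Disabled (rather than Manual) startup keeps a patch cycle or reboot from quietly bringing the service back.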