How many sites can offsite to a single offsite server? Should increased RAM/CPU performance be considered? How much bandwidth is needed per site? Is there a rule of thumb for offsite servers similar to what Kaseya has documented for regular Kservers? Ex: Suggested Kaseya Server Configuration for Managing up to 1,000 Machines
Offsite server is a pretty useful tool, but it's got some serious limitations - if you tell me what your use case is I can give you more specific thoughts, but here are some general observations from using it over the years:
1. As for connections, I don't have any problems with "many" machines offsiting to a single offsite server (20-30 workstations, not a problem). I don't have a use case where I've ever wanted to push more than that, so I don't know if you can do 100, 1000, etc. However...
2. Bandwidth is what it is... so if you were thinking of pushing 100 local servers to a single offsite, that offsite server would probably run out of bandwidth - unless it is on a LAN, or at a colo with a really fat pipe :)
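To put some rough numbers on that, here's a back-of-the-napkin calculation you can run - every input (sites, GB changed per day, backup window) is a made-up assumption, so plug in your own:

```python
# Rough bandwidth sanity check for an offsite server.
# All inputs are assumptions - substitute your own numbers.

def required_mbps(sites, gb_per_site_per_day, window_hours):
    """Sustained throughput needed to replicate each day's changes
    within the nightly backup window."""
    total_bits = sites * gb_per_site_per_day * 8 * 1000**3  # GB -> bits (decimal)
    seconds = window_hours * 3600
    return total_bits / seconds / 1e6  # -> megabits per second

# e.g. 30 workstations pushing ~2 GB of incrementals each,
# with an 8-hour overnight window:
print(round(required_mbps(30, 2, 8), 1))  # -> 16.7 Mbps sustained
```

If the answer comes out anywhere near the offsite server's actual upstream/downstream, you're going to fall behind - which is why 100 local servers pushing to one box over a typical WAN link doesn't work.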
3. One serious limitation is file count. If you're using it for the "default" purpose of pushing Acronis-generated .TIB files, this usually isn't an issue; that said, it will replicate just about anything you point it at! I've seen a practical limit of maybe 100,000 files before it gets so slow it's unusable. I think the reason is that the local and offsite sides each have to catalog what they have and then compare those catalogs, and that process is built for "dozens to hundreds" of files... when you throw "millions" (or anywhere close) at it, the cataloging just never catches up.
4. Another serious limitation is "max file size". This one is trickier, but files up to maybe 20-40GB seem to transfer reliably; somewhere above that size something bogs down and the transfer just isn't dependable. 100GB files will sometimes replicate with no problems, but other times the processes on both ends "choke" and need to be restarted, after which they pick up and begin replicating again.
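Before pointing the offsite tool at a directory, it's easy to sanity-check it against both of those limits with a quick script. The thresholds below are just the rough numbers from my own experience (100,000 files, ~40GB per file), not anything Kaseya documents:

```python
import os

FILE_COUNT_LIMIT = 100_000  # rough practical catalog limit, from experience
MAX_FILE_GB = 40            # above this, transfers start getting flaky

def check_offsite_candidate(path):
    """Walk a directory and flag anything likely to choke replication."""
    count, oversized = 0, []
    for root, _dirs, files in os.walk(path):
        for name in files:
            count += 1
            full = os.path.join(root, name)
            try:
                size_gb = os.path.getsize(full) / 1024**3
            except OSError:
                continue  # file vanished mid-walk; skip it
            if size_gb > MAX_FILE_GB:
                oversized.append((full, round(size_gb, 1)))
    return count, oversized

count, oversized = check_offsite_candidate(r"c:\backups")
if count > FILE_COUNT_LIMIT:
    print(f"{count} files - cataloging will likely never catch up")
for path, gb in oversized:
    print(f"{path} is {gb} GB - may choke mid-transfer")
```

Run it against the local folder you're about to replicate; if either check trips, restructure the backup job before blaming the offsite server.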
So, in short, the "golden" use case could be something like this:
30 workstations with Acronis, doing file backup of "c:\users" to "c:\backups". Each workstation set as a local server, pointing at a "centralized" offsite server. You'd expect those .TIB files to be a few dozen (base + incremental) per workstation, and unless someone's got a huge iTunes library on their workstation, just a few gig apiece, all of which should replicate swimmingly onto the offsite server.
If you're already using an offsite server and it's "choking", put some monitor sets on to watch disk queue length and disk time on the disk where you're dumping files; the offsite servers I've worked with often get I/O bound because they do so much reading and writing, and the more local servers pushing to a given offsite, the more "random" that disk activity is going to be. For sure, offsiting to a USB2 drive on a workstation (for example) is going to have the most serious I/O and performance consequences.
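The alert logic behind such a monitor set is simple enough to sketch. A sustained "Avg. Disk Queue Length" above about 2 per spindle is the usual rule of thumb for an I/O-bound disk; the threshold and the sample values below are illustrative assumptions, not anything measured:

```python
# Sketch of the monitor-set logic: flag the disk as I/O bound when the
# average disk queue length stays above a threshold for most samples.
# Threshold of ~2 per spindle is the common rule of thumb; the sample
# values below are made up for illustration.

def io_bound(samples, threshold=2.0, min_fraction=0.8):
    """True if >= min_fraction of samples exceed the queue-length threshold."""
    over = sum(1 for s in samples if s > threshold)
    return over / len(samples) >= min_fraction

healthy = [0.1, 0.3, 0.8, 0.2, 0.5]    # mostly idle disk
thrashing = [4.2, 6.8, 3.1, 5.5, 7.0]  # many local servers pushing at once

print(io_bound(healthy))    # False
print(io_bound(thrashing))  # True
```

The `min_fraction` guard is there so a single burst doesn't page you; you only care when the queue stays deep, which is exactly the pattern several local servers pushing at once will produce.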