Kaseya Community

Sync Directory?


So I have a network of hundreds of machines in remote locations (growing at a rapid pace), and one of the challenges I'm trying to solve is how to lower my bandwidth usage to remote locations.

Today, I have a procedure that basically does the following:

  • Delete Destination Directory
  • Write Directory (Source on Kaseya Server) -> Destination Directory

What I really want to do is not to have to Delete the Destination Directory, but rather "sync" the destination directory (only copy new directories/files, delete directories/files that don't exist on the source) which would GREATLY reduce my bandwidth needs (in essence rsync for Kaseya).  These machines are all in remote locations (behind 3G modems as a matter of fact), so Kaseya is the only access method I have to them.
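For reference, the sync behavior being asked for (copy only new/changed files, delete anything that no longer exists on the source) can be sketched in a few lines of Python. This is a minimal local illustration of the logic, not a Kaseya feature - the paths and the size-plus-mtime comparison rule are assumptions:

```python
import shutil
from pathlib import Path

def mirror(src: Path, dst: Path) -> None:
    """One-way mirror: copy new/changed files from src to dst,
    then delete anything in dst that no longer exists in src."""
    dst.mkdir(parents=True, exist_ok=True)
    # Copy files that are new or changed (compare size + modification time).
    for s in src.rglob("*"):
        d = dst / s.relative_to(src)
        if s.is_dir():
            d.mkdir(parents=True, exist_ok=True)
        elif (not d.exists()
              or d.stat().st_size != s.stat().st_size
              or d.stat().st_mtime < s.stat().st_mtime):
            d.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(s, d)  # copy2 keeps mtime, so re-runs skip unchanged files
    # Delete files/dirs on the destination that are gone from the source.
    for d in sorted(dst.rglob("*"), reverse=True):  # children before parents
        s = src / d.relative_to(dst)
        if not s.exists():
            if d.is_dir():
                d.rmdir()
            else:
                d.unlink()
```

Only the changed bytes move, which is the whole point over a delete-and-rewrite of the full directory.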

I've done a good bit of searching, and haven't really found an answer.

Thanks in advance.

All Replies
  • With a few caveats, I think Kaseya has built-in functionality that will do exactly what you want!

    Caveat 1: the functionality is in the BU*DR module - so you'd have to install that.  I don't think you actually need BU*DR licensing to get it to work; it definitely doesn't burn a license to set it up.

    Caveat 2: the directory structuring is a bit limited - at the "receiving" end the actual files being synched up are going to end up in a distant sub-directory; but of course you can just share it at that point and the end-users won't have to see it.  But if you need multiple sources, you won't be able to have them "dump" into a common pool.

    Caveat 3: one receiver can receive from multiple senders.  But one sender can only send one directory to one receiver.  This is something you'll have to work around.

    Caveat 4: you have to be able to turn on an inbound TCP port at the receiving end that "passes through" to something inside.  Probably not a big deal if you are controlling your own firewall/router.

    So here's how you set it up:

    Assumptions for this example:

    \\hqserver\share has all the files that you want everyone to be able to get to a copy of

    \\hqserver\share is actually located at f:\share on hqserver

    It will be accessible at \\locationxserver\share at each location.

    Location IP address is 1.2.3.x (just an example, I know it isn't valid!)

    So, therefore, at location 1, the public IP is 1.2.3.1 and the server name is \\location1server

    \\hqserver has a Kaseya GUID of 123456789

    Step 1: for each location, configure an "Offsite Server" (location 1 shown)

    * Backup => Offsite Servers

    * Click "Select Machine ID" link and choose location1server

    * Enter 1.2.3.1 in Name/IP (this needs to be resolvable by whatever is sending files.  If you have a dynamic IP, as you sometimes do with a 3G modem, use a dyndns-type service and put the DNS name here.  But for the sake of this example, it's 1.2.3.1)

    * Choose a port you can open.  I like 12345 since it's generally unused, easy to remember, and not usually ISP-blocked.  But whatever's clever - it's going to listen on location1server, and you'll need to pass it through the firewall inbound.

    * For "Full path..." put in C:\receive or something along those lines - anywhere you've got space to accept all the files you're replicating.

    * Bonus: you can "seed" if you want, but if so, we'll do it later.

    * We'll also create the share later, and you'll see why!

    * On your firewall, open TCP 12345 inbound from the public IP address and pass it through to location1server

    Step 2: Set up "Local" server... one for each site.  This is where Caveat 3 comes into play... if you have 100 sites this is going to get cumbersome fast.

    * Backup => Local Servers

    * "Select Machine ID" and choose hqserver

    * Choose location1server in the dropdown

    * Bonus: yes you can limit bandwidth!  or just let it run full-blast :)

    * In "Full path..." box enter the path - in this example f:\share (see assumptions)

    Step 3: wait, then finish configuring

    * If you've done everything right, you should see a directory structure appear on location1server

    * It'll look like c:\receive\123456789 has a couple of files of "stuff" that's been replicated

    * There will also be c:\receive\123456789=hqserver and c:\receive\hqserver=123456789 directories; they will be empty - this is normal

    * If you want to "seed" via USB hard drive or something - don't just start copying files!!!

       * Go to Backup => Schedule Transfer and turn off transferring for the local server!

       * Copy the files at the remote side

       * Restart the schedule

    * Now create your share: share out c:\receive\123456789 as \\location1server\share

    Notes:

    1. Caveat 3 - if you have a ton of remote sites you'll need a separate "local server" for each one.  This could get to be a problem, since you can only define one "local server" per machine ID in Kaseya!  A mess of low-resource VMs would be a way around this, but a pretty ugly way.

    2. It really is a mirror operation - if you need to be able to make changes on the location1server side, this is not the solution for you.  Anything you change will get overwritten.

    3. Also - anything that gets removed from hqserver will get deleted pretty quickly at the location1server side as well.

    4. For directories with either massive file counts (over 10,000) or massive files (over 50 GB), this method may not work very well.

  • Yeah, I need to make this work with thousands of remote locations.  We're in the middle of a massive deployment (100+ locations per week), with the final count being over 2,000 locations (we're already over 400).  So I don't think this solution would work very well.  Plus, I don't control the routers or the network they're on (the machines sit behind a 3G router, and the 3G routers are connected to a private network handled by the carrier), so that's a no-go as well.

  • Yeah I was afraid of that.

    Next suggestion would be to set up an Agent Procedure to do automated file transfer from a central source; the most rudimentary form would be an FTP site at your central location, and a very cleverly crafted ftp command to basically do a sync.  Get the file list, compare names/dates/sizes, delete changed files, download new versions.  

    Probably the most important change from what you're doing now is twofold:

    1. Don't just delete everything and redownload - ouch!

    2. Don't run all the bandwidth through your kserver, at 10 sites it might not be bad, but at 100 you'll notice, and at 2000+ your kserver will be choking if it even manages to handle the load.

    I recommend finding a VBScript wizard to write a VBS that does the sync, which you then drop into your agent procedure.

    Heh, or use something like FTPbox: http://ftpbox.org/faq/
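    To make the FTP idea concrete, here's a rough Python sketch of the compare-and-mirror loop.  The host, credentials, and paths are placeholders, it assumes the FTP server supports the MLSD listing command, and only a flat directory is shown for brevity (recurse on `type == "dir"` entries for trees):

```python
import datetime as dt
from ftplib import FTP
from pathlib import Path

def needs_download(size: int, modify: str, target: Path) -> bool:
    """Fetch if the local copy is missing, a different size, or older
    than the remote MLSD 'modify' fact (YYYYMMDDHHMMSS, UTC)."""
    if not target.exists():
        return True
    rtime = dt.datetime.strptime(modify[:14], "%Y%m%d%H%M%S") \
        .replace(tzinfo=dt.timezone.utc).timestamp()
    return target.stat().st_size != size or target.stat().st_mtime < rtime

def ftp_mirror(host: str, user: str, password: str,
               remote_dir: str, local_dir: str) -> None:
    """One-way mirror of a single remote FTP directory down to local_dir:
    download new/changed files, delete local files gone from the server."""
    local = Path(local_dir)
    local.mkdir(parents=True, exist_ok=True)
    with FTP(host) as ftp:
        ftp.login(user, password)
        ftp.cwd(remote_dir)
        remote = {name: facts for name, facts in ftp.mlsd()
                  if facts.get("type") == "file"}
        # Download only what's new or changed.
        for name, facts in remote.items():
            target = local / name
            if needs_download(int(facts["size"]), facts["modify"], target):
                with target.open("wb") as f:
                    ftp.retrbinary(f"RETR {name}", f.write)
        # The "delete what's gone from the source" half of the sync.
        for p in local.iterdir():
            if p.is_file() and p.name not in remote:
                p.unlink()
```

    Since the comparison runs on the remote agent against a central FTP site, the kserver never touches the file bytes - which addresses point 2 above.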

  • One of the reasons I got Kaseya was for its robust ability to handle and resume file transfers (this is over 3G).  Its Write Directory procedure works great, and has already saved me a ton of time (what used to take weeks of retrying over and over and over is now a set-it-and-forget-it operation).  I just never imagined that there wouldn't be a checkbox right there on the Write Directory procedure step saying something like "Delete files/directories that don't exist on source."  I can work around it for now by keeping track of changes by hand and adding a step to delete each file/directory we delete on the source.  But what a painful way to manage it.
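    One way to automate that bookkeeping instead of tracking deletions by hand: keep a snapshot of the source tree from the last run and diff it against the current tree to generate the delete list.  A Python sketch - how you persist the snapshot between runs and feed the list into the procedure's delete steps is up to you:

```python
from pathlib import Path

def snapshot(root: Path) -> set:
    """Relative path of every file and directory under root."""
    return {p.relative_to(root).as_posix() for p in root.rglob("*")}

def deletions(previous: set, current: set) -> list:
    """Paths that existed last run but are gone now - each one becomes a
    delete step in the procedure.  Deepest paths first, so files are
    deleted before their parent directories."""
    return sorted(previous - current, key=lambda p: (-p.count("/"), p))
```

    Run `snapshot()` on the source before each transfer, diff against the saved copy, and emit one delete step per entry in `deletions()`.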

  • Hi @grumple, have you tried using the 'Distribute' function under the Agent Procedures module?

    This works the same as the 'write file' agent procedure; however, it doesn't need to be scheduled, and it only writes down the differences to the target.  It works like that 'rsync' you were looking for.  It runs every time there is a full check-in on a machine, so if you aren't running any other agent procedures regularly on the machines, you may need to schedule something generic like 'Force Check-in' under the Agent Procedure samples to kick the process off.  It also supports the 'Bandwidth Controls' under the Agent->Check-in Control function to lower the individual machine bandwidth.  That isn't much help across a site with lots of machines, but it's good when there are only a few machines on each site, just an FYI.

    6.3, which is not far off now, has a new 'LAN Cache' function that allows files to be dropped locally once and then sent out to each local machine, which will finally solve this for you - so keep an eye out for that feature once you get an upgrade to 6.3.



    [edited by: Ray Barber at 4:57 PM (GMT -7) on 4 Sep 2012] (added extra info)
  • Thanks Ray, that's actually a cool feature, but it only works one file at a time?  I can't just add a full directory?