I will be using a script to download a file (getURL) over and over again to a machine. The file name will never change, but the file will be updated from time to time. If LAN Cache is enabled, the latest copy of the file will never be downloaded because the file name already exists in LAN Cache and so the cached, old copy will be pulled.
I'm looking for a way to either pre-clear or bypass LAN Cache for this scripted download so that I can successfully download the latest version of the file.
The script will often not be run against the LAN Cache server, so I can't just clear a local file path.
My environment contains 60+ different networks so creating one-off scripts to run against each LAN Cache server is not a realistic option.
Any ideas? :)
In a URL, the ? symbol is special. Anything after a ? is treated as a parameter for the web server to read, so
http://xyz.com/foo.zip and http://xyz.com/foo.zip?random will always return the same file; however, the presence of the random query string makes the URL unique each time it's referenced (assuming you use a new random each time), so browsers won't cache it. A good source of unique randomness is the current datetimestamp, which will never repeat.
Try it... e.g. http://www.microsoft.com?4386438906 loads the regular MS homepage, no problems.
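The trick above can be sketched in a few lines of Python — a minimal illustration, not Kaseya's implementation; the URL is just the example from this thread, and a millisecond timestamp is one choice of unique value:

```python
import time
import urllib.parse

def cache_busted(url: str) -> str:
    """Append a throwaway query parameter (here, the current Unix
    timestamp in milliseconds) so each request uses a unique URL and
    caches along the way treat it as a brand-new resource."""
    # Use '&' if the URL already carries a query string, '?' otherwise.
    sep = "&" if urllib.parse.urlparse(url).query else "?"
    return f"{url}{sep}{int(time.time() * 1000)}"

print(cache_busted("http://xyz.com/foo.zip"))
```

The server ignores the extra parameter and returns the same file, but the cache key changes on every call.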
My understanding is that LAN Cache will update if a different version of a file is found on the server. Did you test this and find otherwise?
Yes, I can say for certain that the cache is not updated with the new version of the same file. It has caused me many headaches during script testing because things don't work as expected and I keep forgetting that LAN Cache is still serving up the old file.
Perhaps there is a behavioral difference between getFile and getURL. I prefer to use getURL to keep the traffic off of my internal network.
Hmm. Just a thought, but a trick I've used in the past in webpages. If you are using getURL, would it be possible to script the "url" to include a random number at the end, then rename the file to what it needs to be?
I've used that in the past to force image refreshes. So if the image is stored at www.mysite.com/image.jpg, instead I call www.mysite.com/image.jpg?random, so that every time you pull the image the URL changes even though the file is exactly the same.
Thanks Jonathan, that's a good thought. But if the source file is always named differently, I won't be able to process it with my script. You've solved the download piece, but I still need to work with the file afterwards (specifically I would need to unzip it).
Hrm. I suppose I could try a wildcard in the filename for the unzip command. If that works in a Kaseya script, that would do it.
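If the downloaded zip does end up with a randomized name, the unzip step could locate it by wildcard. A rough Python sketch of that idea (the file names and pattern are made up for illustration; it assumes exactly one file matches):

```python
import glob
import zipfile

def unzip_by_wildcard(pattern: str, dest: str) -> str:
    """Find the single zip matching the wildcard pattern and extract
    it into dest. Returns the path of the zip that was extracted."""
    matches = glob.glob(pattern)
    if len(matches) != 1:
        # Zero matches means the download failed; several means
        # leftovers from earlier runs weren't cleaned up.
        raise FileNotFoundError(
            f"expected exactly one match for {pattern}, got {matches}")
    with zipfile.ZipFile(matches[0]) as zf:
        zf.extractall(dest)
    return matches[0]
```

For example, `unzip_by_wildcard("hereismyfile*.zip", "extracted")` would pick up whatever randomized name the download produced.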
I think #global:rand# can be used across multiple scripts (I think I read it works that way but have never tried it). The problem is that I would need to run the other script on a different machine.
Script 1: Runs on source server. Generate and upload the file using random number for name.
Script 2: Runs on destination server. Download and process the file.
I don't know how to tie these two scripts together in an automated fashion. If I call another procedure, it runs on the same machine...
If you are using getUrl then you would use the random naming on the "URL" portion of it, but then you could use a "standard" file name in the "Enter the filename in which to store the response" which would be the file you work with later...
Oops right after I posted this I re-read your description...
Couldn't Script 1 just use a standard name and *not* a random number? Then Script 2 uses the random number on the "URL" but a standard name for the file.
So for example say script one generates a file and uploads it to http://myserver.net/uploads/hereismyfile.zip
Then in Script 2 I would use getUrl("http://myserver.net/uploads/hereismyfile.zip?#global:rand#", "hereismyfile.zip", "Continue Immediately", "All Operating Systems", "Halt on Fail");
Then in the further processing you are always working with 'hereismyfile.zip'...
Or perhaps another option... Are machine one and machine two both on the same network?
If so have you looked at "writeFileFromAgent" ? Maybe instead of uploading and downloading, you simply move the file from one agent machine to another?
Regarding the first option: I may not be following. If the file has been uploaded as hereismyfile.zip, how would the URL (file) host get the #global:rand# from Kaseya for it to match up with? If both sides don't use the same #global:rand# then Kaseya won't be able to successfully getURL. I would need to be generating the URL with the same Kaseya script for that to work I think.
Regarding the second option: I've had mixed results from past experiments with writeFileFromAgent, but unfortunately this would be from a different network anyhow.
Thanks for the ideas. Even if they don't work out, they help to generate more ideas.
Have you tried using the variable PatchConfiguration.LanCacheUNCPath.
You may be able to delete the existing file using this?
Thanks Craig! This resolves the issue. LAN Cache is bypassed when ?whatever is used after the file name. Doesn't seem like it even needs to be random. Very easy solution!
Thanks Eddy. I believe that would go to a different folder from where getURL files are kept. In my case that variable doesn't seem to represent anything. I think it's because I have all machines set to download from Internet for patching. All good though, I was able to get this working using Craig's information above.