I've noticed that large patches (especially service packs) are constantly failing due to timeouts on some of my endpoints.

These endpoints are primarily connected over slow 'mobile broadband' links. I think the failure is because each retry downloads the patch again from byte 0 instead of attempting a reget (resumed download).
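For what it's worth, HTTP already supports this via Range requests, so a quick way to check whether a given patch server could support resuming is something like the following (the URL and byte offset are placeholders):

    curl -s -D - -o /dev/null -H "Range: bytes=1048576-" https://example.com/servicepack.exe

A server that supports resume answers "HTTP/1.1 206 Partial Content" and sends only the remaining bytes; one that doesn't answers 200 and sends the whole file from byte 0 again, which is what my endpoints appear to be hitting.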

I notice that cURL has the following switch; perhaps Kaseya should look at implementing something similar, especially for 'download from internet' machines.

-C/--continue-at
Continue/Resume a previous file transfer at the given offset. The given offset is the exact number of bytes that will be skipped, counted from the beginning of the source file, before it is transferred to the destination. If used with uploads, the FTP server command SIZE will not be used by curl.
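With an explicit offset, that looks like this (again, placeholder URL and offset):

    curl -C 1048576 -O https://example.com/servicepack.exe

Over HTTP this turns into a "Range: bytes=1048576-" request, so only the missing tail of the file has to cross the slow link.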

Use "-C -" to tell curl to automatically find out where/how to resume the transfer. It then uses the given output/input files to figure that out.