Seeding Backup Copies
This article covers the options for initially seeding backup copy files to the cloud repository.
Uploading extremely large backups to the cloud can involve unique challenges, and the variables involved may mean multiple attempts with different methods are needed to ingest them into cloud storage.
Hosted Network will assist in any way we can to make the process as smooth as possible; however, please be prepared to work with us through such challenges if required.
Over the internet
By default, a new backup copy job will send the entire backup image to the cloud repository over the internet. Once the initial base image is uploaded, subsequent incremental backups are generally uploaded much faster; however, depending on the retention settings used in the source backup job, regular large uploads may still take place.
In most cases this is not an issue unless bandwidth on the source side is very limited; however, for extremely large backups or heavily constrained links this method may not be viable. It is important to be aware of the following:
Uploading large backup copies can take considerable time, especially if bandwidth is throttled on the source side to reduce disruption to the on-prem network (a rough estimate of upload time follows this list)
The longer a copy job runs, the higher the likelihood that something will go wrong (on either the source or destination side) and disrupt the job. This can often result in the job needing to start over from scratch
The upload speed available into the cloud repository may vary depending on the time of day and network load; it is not guaranteed to be consistent or to utilise the full upload capacity of the source connection
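As a back-of-the-envelope way to gauge whether seeding over the internet is practical, the sketch below estimates how long an initial full upload might take at a given throttled rate. The function name, the 80% efficiency factor and the 5 TB / 100 Mbps example are illustrative assumptions rather than Hosted Network figures.

```python
# Back-of-the-envelope estimate of how long an initial full backup upload
# may take over a throttled internet link. All figures are illustrative
# assumptions, not measured Hosted Network throughput.

def upload_hours(backup_size_tb: float, upload_mbps: float, efficiency: float = 0.8) -> float:
    """Estimate upload time in hours for a backup image of the given size.

    'efficiency' allows for protocol overhead and variable throughput
    (assumed at 80% here; real-world figures will differ).
    """
    size_megabits = backup_size_tb * 1000 * 1000 * 8  # TB -> megabits (decimal units)
    effective_mbps = upload_mbps * efficiency
    return size_megabits / effective_mbps / 3600

# Example: a 5 TB full backup over a link throttled to 100 Mbps upload
print(f"{upload_hours(5, 100):.0f} hours")  # ~139 hours, close to 6 days
```

If the resulting figure runs into many days or weeks, the physical seeding option described below is usually worth considering.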
Physical Storage (e.g. NAS or USB drive)
In cases where seeding over the internet is not feasible, backup copies can be seeded manually by copying the backup files from the on-prem backup storage to portable storage such as a NAS or USB drive.
This storage can then be shipped to Hosted Network, where the data is manually copied onto our backup storage and attached to the appropriate tenancy. Once the initial bulk data is present on our storage, incremental backups can then run as normal, removing the need to copy the full backup chain over the internet.
When using this method, it is important to be aware of the following:
Manually copying data onto and off the portable storage at each end can take considerable time and introduce significant delays before the solution is ready to use
For environments with a large amount of data churn (a lot of data changing every day), the delays involved in this method can mean that by the time the seeded data is available on the cloud repository, a considerable amount of new data still needs to be sent over the internet to bring the backups fully up to date
e.g. If you seed 10 TB of data via NAS and it takes 1 week to copy to the storage, another week to ship the NAS and connect it in our datacentre, and 1 more week to copy the data onto the cloud repository, there will be 3 weeks of further backup data to send over the internet before the backup copy job is aligned. In some cases this could be multiple TB of extra data and multiple full weekly backup images (the sketch after this list illustrates the arithmetic)
Additional costs are generally involved in seeding with this method, as a Hosted Network engineer is required to assist with the physical process of copying the data
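To put rough figures against the example above, the short sketch below estimates how much catch-up data accumulates while a seed is being copied, shipped and imported. The daily churn rate and the three-week turnaround are illustrative assumptions, not measurements from any particular environment.

```python
# Rough estimate of the "catch-up" data that still has to go over the internet
# after a physical seed, driven by daily change rate and total turnaround time.
# The churn rate and turnaround below are illustrative assumptions only.

def catchup_tb(daily_churn_tb: float, turnaround_days: int) -> float:
    """Data (in TB) written at the source while the seed is being copied,
    shipped and imported, which must still be uploaded over the internet."""
    return daily_churn_tb * turnaround_days

# Example loosely matching the scenario above: ~200 GB changing per day,
# with a 3-week (21-day) seeding turnaround
print(f"{catchup_tb(0.2, 21):.1f} TB still to upload")  # ~4.2 TB
```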