I just noticed https://lemmy.ml/u/giloronfoo@beehaw.org had proposed the same thing, but here it is again with more words ;).
I would propose you manually split your data into logically separate parts, so that you could fit, say, 0.8 TB on one drive, 0.4 TB on another, and maybe two 0.2 TB sets on a third. Then you'd have a script that uses a traditional backup approach with a modern backup app to back up the particular data set for whichever disk is currently attached to the system (a sketch follows below). This painlessly gets you modern "infinite increments" backups, where older versions of your data are kept without running full and incremental backups separately. You should also write a check script to make sure no important data is left out of the backups and that nothing is backed up twice (except for data you want backed up twice?).
For example, you could have a physical drive with a sticker reading "photos and music" on it, used to back up your ~/Photos and ~/Music.
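A minimal sketch of what that per-drive script could look like. Everything concrete here is an assumption for illustration: the drive labels, the /media/backup/&lt;label&gt; mount point scheme, and the choice of restic as the backup app.

```bash
#!/usr/bin/env bash
# Back up whichever labeled backup drive is currently attached.
set -euo pipefail

# Drive label (the sticker) -> directories that drive backs up.
# Values are space-separated, so this sketch assumes no spaces in paths.
declare -A DRIVE_SETS=(
  ["photos-and-music"]="$HOME/Photos $HOME/Music"
  ["documents"]="$HOME/Documents"
)

for label in "${!DRIVE_SETS[@]}"; do
  mount="/media/backup/$label"
  if [[ -d "$mount" ]]; then
    # restic stores deduplicated snapshots, which is what gives you
    # "infinite increments" without separate full/incremental runs.
    # (Repository password handling is left out of the sketch.)
    restic --repo "$mount/restic-repo" backup ${DRIVE_SETS[$label]}
    exit 0
  fi
done

echo "No known backup drive attached." >&2
exit 1
```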
At some point one of those splits might grow too large to fit on its allocated drive, which means additional manual maintenance. Apply foresight to avoid these situations :).
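The companion check script could look something like this: it verifies every important directory is assigned to exactly one drive set (nothing forgotten, nothing accidentally duplicated) and warns when a set nears its drive's capacity, which is the foresight part. The directory list and capacity numbers are placeholders.

```bash
#!/usr/bin/env bash
# Sanity-check the drive sets: full coverage, no overlap, enough headroom.
set -u

declare -A DRIVE_SETS=(
  ["photos-and-music"]="$HOME/Photos $HOME/Music"
  ["documents"]="$HOME/Documents"
)
# Everything that must be covered by some drive set (placeholder list).
IMPORTANT_DIRS=("$HOME/Photos" "$HOME/Music" "$HOME/Documents" "$HOME/Projects")
# Usable capacity per drive in GB (placeholder numbers).
declare -A DRIVE_CAPACITY_GB=( ["photos-and-music"]=800 ["documents"]=400 )

# One assigned directory per line, across all sets.
all_assigned=$(printf '%s\n' ${DRIVE_SETS[@]})

# Coverage: every important dir appears in some set.
for dir in "${IMPORTANT_DIRS[@]}"; do
  grep -qxF "$dir" <<<"$all_assigned" || echo "NOT backed up anywhere: $dir"
done

# Overlap: no dir assigned to two sets (unless you want it twice).
sort <<<"$all_assigned" | uniq -d | sed 's/^/Backed up more than once: /'

# Foresight: warn when a set exceeds 90% of its drive's capacity.
for label in "${!DRIVE_SETS[@]}"; do
  used_gb=$(du -csBG ${DRIVE_SETS[$label]} 2>/dev/null | tail -n 1 | cut -f1 | tr -d 'G')
  if (( used_gb * 10 > DRIVE_CAPACITY_GB[$label] * 9 )); then
    echo "$label: ${used_gb} GB used of ${DRIVE_CAPACITY_GB[$label]} GB, nearly full"
  fi
done
```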
If that kind of separation is not possible, then I guess tar + multi-volume splitting is one option, as suggested elsewhere.
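If you go that route, GNU tar's multi-volume mode looks roughly like this; the volume size and paths are placeholders for whatever fits your disks.

```bash
# GNU tar multi-volume mode: -c create, -M split across volumes,
# -L is the per-volume size (accepts suffixes; here ~1.8 TB per disk).
# When a volume fills up, tar pauses and asks for the next one, so you
# can swap drives; --info-script/-F can script the swap instead.
tar -cM -L 1800G -f /media/backup/data.tar ~/Photos ~/Music ~/Videos
```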