forked from ungleich-public/ccollect
Compare commits: ccollect-0 ... master (670 commits)
Author | SHA1 | Date |
---|---|---|
skybeam | 1cc921ad86 | |
skybeam | ebded3049a | |
skybeam | cdd34a3416 | |
Darko Poljak | b50b3f64dc | |
Darko Poljak | 40dbfbd3a3 | |
Darko Poljak | 7d298d2b51 | |
poljakowski | 3687267dd7 | |
Jun Futagawa | 2ca7598593 | |
Darko Poljak | 08cb857664 | |
Darko Poljak | 309d8dc773 | |
poljakowski | fabdefad82 | |
Steffen Zieger | 616b1d9e3e | |
Darko Poljak | 7a7dec7751 | |
Darko Poljak | 28dec3694a | |
poljakowski | 59b50e7f4b | |
Steffen Zieger | a261ef841e | |
Darko Poljak | 109b70ea76 | |
Darko Poljak | 5341de86fb | |
Darko Poljak | 987277f1cf | |
Darko Poljak | 589fed6107 | |
Darko Poljak | 61ab45fc65 | |
Darko Poljak | 6c24e8a7d3 | |
Darko Poljak | 42bd1afb09 | |
Darko Poljak | 9ed5912461 | |
Darko Poljak | 5ce3fddf62 | |
Darko Poljak | 8f5d9b2c97 | |
Darko Poljak | 401dd4fa8e | |
Darko Poljak | f818f011e3 | |
poljakowski | c9eef21e43 | |
Darko Poljak | a5e565b5d6 | |
Darko Poljak | 2cefdaa1a5 | |
Darko Poljak | 74e3b26790 | |
poljakowski | dcc72aebf7 | |
Darko Poljak | de720ecfe9 | |
Darko Poljak | e44dede92f | |
Darko Poljak | 7701bdb0a8 | |
Darko Poljak | c39205d308 | |
Darko Poljak | 2788de47b8 | |
Darko Poljak | 1e18e71b9d | |
Darko Poljak | 51dcf4a02f | |
Darko Poljak | 702cdf931e | |
Darko Poljak | bfb3c6338c | |
Darko Poljak | 30abef474d | |
Darko Poljak | ca6d06c2c3 | |
Darko Poljak | 1628ce58c7 | |
Darko Poljak | 10dcf076a9 | |
poljakowski | 086c95f98d | |
Darko Poljak | 2725a1ced4 | |
Darko Poljak | 835e21c56c | |
Darko Poljak | 71eabe2f23 | |
Darko Poljak | 5c1bf8a8de | |
Darko Poljak | a63e16efc5 | |
Darko Poljak | b47a828af0 | |
Darko Poljak | 420dc3fe7f | |
Darko Poljak | 51f468182f | |
Darko Poljak | eeccc0b260 | |
Darko Poljak | fc0b86005c | |
Jun Futagawa | bd0fe05003 | |
Darko Poljak | 890b166a43 | |
Darko Poljak | e504d1f42b | |
Darko Poljak | b0f1317713 | |
Darko Poljak | 04bf9aff39 | |
Darko Poljak | 07c925de5d | |
Darko Poljak | 89a82ba55e | |
Darko Poljak | 12b6b2cf28 | |
Darko Poljak | fbe17cae44 | |
Darko Poljak | 6dca5c638d | |
Nico Schottelius | fe911dfcaa | |
Darko Poljak | 3049849ea6 | |
Darko Poljak | a18a00e773 | |
Darko Poljak | 01c36fc699 | |
Darko Poljak | 1df57c8154 | |
Darko Poljak | 902a7d667e | |
Darko Poljak | 8fbb7ddf27 | |
Darko Poljak | 86d5628577 | |
testing_rouxdo | 5356370233 | |
Nico Schottelius | 9d8a8a5a15 | |
testing_rouxdo | 977c7e9c1f | |
Nico Schottelius | e2ca223432 | |
Nico Schottelius | ca45e8429b | |
Nico Schottelius | 20abe4f86b | |
Jiri Pinkava | 10d4942912 | |
Nico Schottelius | b5eede90c6 | |
Nico Schottelius | dc67c929cf | |
Nico Schottelius | e0d39084c6 | |
Nico Schottelius | e392792e1e | |
Nico Schottelius | a729e05132 | |
Nico Schottelius | 1ee71a9dfb | |
Nico Schottelius | 61f715515f | |
Nico Schottelius | d67b35da2c | |
Nico Schottelius | 349a4845c0 | |
Nico Schottelius | ceb2f31e98 | |
Nico Schottelius | f7f6b4d885 | |
Nico Schottelius | bdd6b15397 | |
Nico Schottelius | dcb2b60c41 | |
Nico Schottelius | 9bd09f24a2 | |
Nico Schottelius | d04972026f | |
Nico Schottelius | 7b65687da5 | |
Nico Schottelius | aaf43af0d9 | |
Nico Schottelius | 4675bf864c | |
Nico Schottelius | 05de81e0f0 | |
Nico Schottelius | 51db5e1204 | |
Nico Schottelius | 8c376a31f5 | |
Nico Schottelius | c7d35464ae | |
Nico Schottelius | 758e5a9059 | |
Nico Schottelius | 410cf58067 | |
Nico Schottelius | a949a9e8e7 | |
Nico Schottelius | 5066f417a9 | |
Nico Schottelius | 093de8b0a1 | |
Nico Schottelius | 0d5b2992c0 | |
Nico Schottelius | d2cd0c48f3 | |
Nico Schottelius | e39e53d0fb | |
Nico Schottelius | 9cb8b99353 | |
Nico Schottelius | c10b46111b | |
Nico Schottelius | 540d860e28 | |
Nico Schottelius | cbf34deade | |
Nico Schottelius | 073b1138b7 | |
Nico Schottelius | 9819c718b1 | |
Nico Schottelius | 64824cb3b1 | |
Nico Schottelius | ac7c703ff0 | |
Nico Schottelius | b13ed10eaf | |
Nico Schottelius | 43bba003b2 | |
Nico Schottelius | a8c34581ea | |
Nico Schottelius | fdb68e1ade | |
Nico Schottelius | 63686c3598 | |
Nico Schottelius | 72034cb042 | |
Nico Schottelius | e47fb78603 | |
Nico Schottelius | 86992b9787 | |
Nico Schottelius | ca9106054b | |
Nico Schottelius | 44442a09c9 | |
Nico Schottelius | 23c395bcbd | |
Nico Schottelius | 9c47412991 | |
Nico Schottelius | 0b8e6409cf | |
Nico Schottelius | f630bef3b5 | |
Nico Schottelius | 545158b56f | |
Nico Schottelius | f98853379e | |
Nico Schottelius | ccf86defaf | |
Nico Schottelius | 49cb1f92ee | |
Nico Schottelius | e9c02b8e2d | |
Nico Schottelius | 7de72e5e8d | |
Nico Schottelius | 564ef0bd87 | |
Nico Schottelius | c2226f9134 | |
Nico Schottelius | 0f7891de8d | |
Nico Schottelius | a48fe6d41b | |
Nico Schottelius | d79c2b0a28 | |
Nico Schottelius | e508ef052f | |
Nico Schottelius | 8ae649c761 | |
Nico Schottelius | 5d0a3c73d2 | |
Nico Schottelius | e18c9fa94d | |
Nico Schottelius | 49ef5871bc | |
Nico Schottelius | afe732a69f | |
Nico Schottelius | ec61905fc4 | |
Nico Schottelius | 7e155f4219 | |
Nico Schottelius | 8b01949f4b | |
Nico Schottelius | 6a8ff3f1d2 | |
Nico Schottelius | e1ccba4f57 | |
Nico Schottelius | 7d1669827a | |
Nico Schottelius | c314f284a2 | |
Nico Schottelius | 375f9ebafe | |
Nico Schottelius | cf62a0fada | |
Nico Schottelius | e6d89d57fc | |
Patrick Drolet | e4dea56e49 | |
Nico Schottelius | 59c8941373 | |
Nico Schottelius | 9e801582d5 | |
Nico Schottelius | 435f2140da | |
Nico Schottelius | 145c6de2fb | |
Nico Schottelius | 1b591a040c | |
Nico Schottelius | 2a6ab4c125 | |
Nico Schottelius | 39e8eb4c94 | |
Nico Schottelius | 9d65e1b1bf | |
Nico Schottelius | ef4291c722 | |
Nico Schottelius | 086af1497c | |
Nico Schottelius | 8a70e30d97 | |
Nico Schottelius | 4865f3c8c6 | |
Patrick Drolet | 3d92f30574 | |
Nico Schottelius | 67aead1db2 | |
Patrick Drolet | 4af94d9e71 | |
Nico Schottelius | 84c732bfc0 | |
Nico Schottelius | ce6157efee | |
Nico Schottelius | 08331387b7 | |
Nico Schottelius | 262ceabca3 | |
Nico Schottelius | 3d571e915a | |
Nico Schottelius | 1c8a0808a6 | |
Nico Schottelius | d7c4834dce | |
Nico Schottelius | e8a977720f | |
Nico Schottelius | 229e251482 | |
Nico Schottelius | 422b220494 | |
Nico Schottelius | 36f413173a | |
Nico Schottelius | 9d94beec68 | |
Nico Schottelius | 23b2fcee08 | |
Nico Schottelius | 25d8a2e2fb | |
Nico Schottelius | 9eba4e8b8e | |
Nico Schottelius | 5d41dea79d | |
Nikita Koshikov | 93b56025fa | |
Nico Schottelius | 59f880ea86 | |
Nico Schottelius | 1f84f87888 | |
Nico Schottelius | f549334226 | |
Nico Schottelius | 26f4ae777b | |
Nico Schottelius | 8cc833a7b1 | |
Nico Schottelius | f17023255c | |
Nico Schottelius | 0f7a6a88ef | |
Nico Schottelius | b749a05473 | |
Nico Schottelius | a08580fe7e | |
Nico Schottelius | 3abea41ffa | |
Nico Schottelius | 56879ed9fb | |
Nico Schottelius | cf1459251e | |
Nico Schottelius | 48e181674a | |
Nico Schottelius | 02670a813c | |
Nico Schottelius | 3231acf525 | |
Nico Schottelius | 3431646fba | |
Nico Schottelius | ab74059c77 | |
Nico Schottelius | aeb3ff6d89 | |
Nico Schottelius | 852155a4db | |
Nico Schottelius | 4696590a73 | |
Nico Schottelius | ba11374c6f | |
Nico Schottelius | 4b560f64f4 | |
Nico Schottelius | 50dcd80b85 | |
Nico Schottelius | 4ba0dab260 | |
Nico Schottelius | e6a0300b9b | |
Nico Schottelius | 9aa111d21b | |
Nico Schottelius | 8a56d41ebc | |
Nico Schottelius | 87e15be561 | |
Nico Schottelius | 77ea2b513f | |
Nico Schottelius | 4e3c5922ee | |
Nico Schottelius | 428670b4e7 | |
Nico Schottelius | c2bc225dc0 | |
Nico Schottelius | 483cfee90c | |
Nico Schottelius | cbf1b7cf0e | |
Nico Schottelius | b014c00d24 | |
Nico Schottelius | d61c9625f4 | |
Nico Schottelius | 65c34deb43 | |
Nico Schottelius | 2b890b0316 | |
Nico Schottelius | 1b1e0ebc8b | |
Nico Schottelius | e136b132e6 | |
Nico Schottelius | b44fdb6107 | |
Nico Schottelius | e390c62072 | |
Nico Schottelius | ef641b5e31 | |
Nico Schottelius | c9472c5dff | |
Nico Schottelius | ed30a4d25b | |
Nico Schottelius | 8a87e7effa | |
Nico Schottelius | f5e1920a15 | |
Nico Schottelius | 8491a54b0d | |
Nico Schottelius | debdd9d004 | |
Nico Schottelius | 37dcda8e3b | |
Nico Schottelius | 3ea39547a7 | |
Nico Schottelius | 017b80f59b | |
Nico Schottelius | 19bc94a756 | |
Nico Schottelius | 8423fa136f | |
Nico Schottelius | 5da5506c65 | |
Nico Schottelius | 31ef31801e | |
Nico Schottelius | 09ed55a17e | |
Nico Schottelius | a9aad1ed8f | |
Nico Schottelius | 65a7badd4d | |
Nico Schottelius | bd1e365ca0 | |
Nico Schottelius | ca1231a576 | |
Nico Schottelius | de6a7893fc | |
Nico Schottelius | 194148b5b3 | |
jll2 | 6fd22b6416 | |
jll2 | 72830a4647 | |
jll2 | 76e6094247 | |
jll2 | 0b064e0565 | |
jll2 | dd7a047408 | |
jll2 | 010449bafa | |
jll2 | 97df2c14de | |
jll2 | 923350907d | |
jll2 | 544a7d269e | |
jll2 | cd643f1c0b | |
jll2 | 64b5ae8b03 | |
jll2 | 142fd24fc8 | |
jll2 | 5477b39a25 | |
Nico Schottelius | c9439be432 | |
Nico Schottelius | 2b28567588 | |
Nico Schottelius | cbff479c65 | |
jll2 | d6ea94c6dc | |
Nico Schottelius | 4db6b78a13 | |
Nico Schottelius | 10d420614c | |
jll2 | ea16af51b2 | |
jll2 | a4c61e7b68 | |
jll2 | 192b55b98d | |
jll2 | 122982b0b9 | |
jll2 | f2aef9d4dd | |
Nico Schottelius | b121e545f7 | |
Nico Schottelius | f4f9564bde | |
Nico Schottelius | 6595fe7b97 | |
Nico Schottelius | 2b31f8f229 | |
Nico Schottelius | 02264020f5 | |
Nico Schottelius | 382c159b41 | |
Nico Schottelius | ba538ea623 | |
Nico Schottelius | 62e8190a94 | |
Nico Schottelius | bce57a1ac1 | |
John Lawless | a030a98982 | |
John Lawless | ae23a04925 | |
Nico Schottelius | 8cc0f04874 | |
Nico Schottelius | 27c838163a | |
Nico Schottelius | 38ca0a1546 | |
Tonnerre Lombard | 6de3c9877c | |
Nico Schottelius | 1943bfd244 | |
Nico Schottelius | bf22075407 | |
Nico Schottelius | 00c1303fb2 | |
Nico Schottelius | 0516749a0c | |
Nico Schottelius | c133ba5df9 | |
Nico Schottelius | af242905af | |
Nico Schottelius | b3ad86f270 | |
Nico Schottelius | 337fec115b | |
Nico Schottelius | e5e1cc865a | |
Nico Schottelius | bfcc1ebfc4 | |
Nico Schottelius | b8b0ca107a | |
Nico Schottelius | 582018adbb | |
Nico Schottelius | d7ec63052a | |
Nico Schottelius | 4f088f84c3 | |
Nico Schottelius | c704d7d9b8 | |
Nico Schottelius | 05544bf02f | |
Nico Schottelius | 218f846479 | |
Nico Schottelius | c5545e3c45 | |
Nico Schottelius | ba61d0b6ce | |
Nico Schottelius | 1a8752814f | |
Nico Schottelius | 7cc669ba0a | |
Nico Schottelius | 45d8560110 | |
Nico Schottelius | 26b8df4825 | |
Nico Schottelius | cfe5433e7a | |
Nico Schottelius | 6af6c8d229 | |
Nico Schottelius | ca408a22cc | |
Jeroen Bruijning | 5809571ca0 | |
Nico Schottelius | 40cef5f7a4 | |
Nico Schottelius | fef686b449 | |
Nico Schottelius | 5caad132b5 | |
Nico Schottelius | 8f2af0e466 | |
Nico Schottelius | a9dd91ffef | |
Nico Schottelius | cdebda1c32 | |
Nico Schottelius | 085ba48497 | |
Nico Schottelius | d5c7b57b09 | |
Nico Schottelius | 5775bdb28d | |
Nico Schottelius | c01e6b9a16 | |
Nico Schottelius | 2dc80fa971 | |
Nico Schottelius | ca5c9fc5fd | |
Nico Schottelius | 2627af97ad | |
Nico Schottelius | 741650f926 | |
Nico Schottelius | f096e412ea | |
Nico Schottelius | e8fc763b5c | |
Nico Schottelius | 6ea6e23df0 | |
Lucky | de16d9556a | |
Nico Schottelius | c60e2870c4 | |
Nico Schottelius | 0c2f5df283 | |
Nico Schottelius | 7b76e1ee65 | |
Nico Schottelius | 842e668c4c | |
Nico Schottelius | 96171314d7 | |
Nico Schottelius | 4e47c881fb | |
Nico Schottelius | 8ace2a6520 | |
Nico Schottelius | 0d5cc4ddbc | |
Nico Schottelius | 036575df9d | |
Nico Schottelius | 20a549af71 | |
Nico Schottelius | 0c141187bf | |
Nico Schottelius | 5a7d12d254 | |
Nico Schottelius | 8597f011f9 | |
Nico Schottelius | e6fe61aa0e | |
Nico Schottelius | f8e36a89a8 | |
Nico Schottelius | d15a13dac8 | |
Nico Schottelius | 9a314d370e | |
Nico Schottelius | f99223d9bb | |
Nico Schottelius | 9de2d6d211 | |
Nico Schottelius | 6fe776d7a7 | |
Nico Schottelius | 49ee2a24a3 | |
Nico Schottelius | 89b4c993a5 | |
Nico Schottelius | 09876ee932 | |
Nico Schottelius | d617cbcb26 | |
Nico Schottelius | 4487f89fb1 | |
Nico Schottelius | 65c329e892 | |
Nico Schottelius | 95e33b844d | |
Nico Schottelius | 4519ac85d6 | |
Nico Schottelius | 439a38533a | |
Nico Schottelius | 18c20b6315 | |
Nico Schottelius | 9c67edbf5b | |
Nico Schottelius | 33471a3c24 | |
Nico Schottelius | 504b9c7143 | |
Nico Schottelius | de11135bfb | |
Nico Schottelius | 332ce704f6 | |
Nico Schottelius | 16c2efa1b8 | |
Nico Schottelius | 213c15297c | |
Nico Schottelius | e87f4e9a92 | |
Nico Schottelius | 3819316fa2 | |
Nico Schottelius | f91c51d5fb | |
Nico Schottelius | c6b49bc694 | |
Nico Schottelius | d44c3cabc5 | |
Nico Schottelius | bb08cd3208 | |
Nico Schottelius | 55edd60758 | |
Nico Schottelius | 1c3cd057be | |
Nico Schottelius | eef2d55c61 | |
Nico Schottelius | 923b3a387a | |
Nico Schottelius | 6cbfb4b807 | |
Nico Schottelius | c49383a9df | |
Nico Schottelius | 42e2c7e95d | |
Nico Schottelius | 5f11ffcaaf | |
Nico Schottelius | 4d4ba36e68 | |
Nico Schottelius | be207d7520 | |
Nico Schottelius | 57a9cf489e | |
Nico Schottelius | ab482026f2 | |
Nico Schottelius | f68a68a1e2 | |
Nico Schottelius | c516175669 | |
Nico Schottelius | a4cc7c0779 | |
Nico Schottelius | c2cbd983c8 | |
Nico Schottelius | 5acca63d53 | |
Nico Schottelius | feea0661a7 | |
Nico Schottelius | 327e901527 | |
Nico Schottelius | c7ae7fc322 | |
Nico Schottelius | 7511d3f783 | |
Nico Schottelius | dcf92a1788 | |
Nico Schottelius | 2cbfdf5bd5 | |
Nico Schottelius | b007d346eb | |
Nico Schottelius | 916e0cb2ff | |
Nico Schottelius | e4969a390a | |
Nico Schottelius | 03a55f1b1f | |
Nico Schottelius | c0c19d2598 | |
Nico Schottelius | 2a45844e1c | |
Nico Schottelius | 69ae9076e3 | |
Nico Schottelius | fe2d9d2fe5 | |
Nico Schottelius | 8bce8d8620 | |
Nico Schottelius | 95b4b66a99 | |
Nico Schottelius | fd6a50a36b | |
Nico Schottelius | 2c80eab8e1 | |
Nico Schottelius | 6d01706aea | |
Nico Schottelius | 2918f4a1e5 | |
Nico Schottelius | e71146dbbf | |
Nico Schottelius | 8490b77acf | |
Nico Schottelius | 7558d88a3d | |
Nico Schottelius | 6bd8393d3d | |
Nico Schottelius | 96d5ab1b3b | |
Nico Schottelius | caa6e9f023 | |
Nico Schottelius | 7d5c1e0077 | |
Nico Schottelius | d61f7e1fda | |
Nico Schottelius | 52c2132eb3 | |
Nico Schottelius | 194787e2d2 | |
Nico Schottelius | bb4a870852 | |
Nico Schottelius | e5010b0d06 | |
Nico Schottelius | 0161529e1c | |
Nico Schottelius | d442277d68 | |
Nico Schottelius | 760ddd2073 | |
Nico Schottelius | f609fbc25c | |
Nico Schottelius | 2c38ea503f | |
Nico Schottelius | 73501e8963 | |
Nico Schottelius | e8044b6f79 | |
Nico Schottelius | abb408a9b2 | |
Nico Schottelius | 52095b7cf0 | |
Nico Schottelius | 31b90624c8 | |
Nico Schottelius | bea4dea52b | |
Nico Schottelius | 30013fb4a6 | |
Nico Schottelius | 845cc5f0b5 | |
Nico Schottelius | 22f83a0238 | |
Nico Schottelius | d05b4dc3a8 | |
Nico Schottelius | e68e536033 | |
Nico Schottelius | 326dbc52e3 | |
Nico Schottelius | db5bd32b3b | |
Nico Schottelius | 967613b4c5 | |
Nico Schottelius | 39c4e83fbc | |
Nico Schottelius | 951a88c736 | |
Nico Schottelius | d87d1616e6 | |
Nico Schottelius | 88c295a5b7 | |
Nico Schottelius | aaf29c92b5 | |
Nico Schottelius | 27d9e2e429 | |
Nico Schottelius | 43ca90a2f2 | |
Nico Schottelius | 8f65880db5 | |
Nico Schottelius | dc28b25e3c | |
Nico Schottelius | e2a143e0b6 | |
Nico Schottelius | a35e31e86c | |
Nico Schottelius | 86960388df | |
Nico Schottelius | 11a2b5a6ba | |
Nico Schottelius | 45c5c29f77 | |
Nico Schottelius | 8f6a942ae9 | |
Nico Schottelius | 799c24faf3 | |
Nico Schottelius | 624e5b419b | |
Nico Schottelius | ca6c8f60f8 | |
Nico Schottelius | 6054ee5369 | |
Nico Schottelius | 3da1d2a02c | |
Nico Schottelius | bfc637bdb2 | |
Nico Schottelius | 97c56b247e | |
Nico Schottelius | 23bc864626 | |
Nico Schottelius | 6a891bc2b8 | |
Nico Schottelius | 5bf9fcb64b | |
Nico Schottelius | 0962090138 | |
Nico Schottelius | 2304d6fe0e | |
Nico Schottelius | e0355023ff | |
Nico Schottelius | 560562fbf9 | |
Nico Schottelius | b267ae4cc8 | |
Nico Schottelius | d7b7bab90e | |
Nico Schottelius | 06a9447742 | |
Nico Schottelius | 54538da003 | |
Nico Schottelius | 70c62e2514 | |
Nico Schottelius | e92bd762d4 | |
Nico Schottelius | ed91b488da | |
Nico Schottelius | 63a683941f | |
Nico Schottelius | c042479097 | |
Nico Schottelius | 08e864ee0d | |
Nico Schottelius | dc7d5d614b | |
Nico Schottelius | 30352b83a8 | |
Nico Schottelius | d08ac69af6 | |
Nico Schottelius | 965c3e37f7 | |
Nico Schottelius | dc5aec71ed | |
Nico Schottelius | fa695bdebf | |
Nico Schottelius | d0a417c3bd | |
Nico Schottelius | 4b77c5ba85 | |
Nico Schottelius | 4c2737cb5b | |
Nico Schottelius | e3555ed018 | |
Nico Schottelius | 78c3dc1bd7 | |
Nico Schottelius | 500bfc55e4 | |
Nico Schottelius | e87a7a5db1 | |
Nico Schottelius | a92d28db16 | |
Nico Schottelius | d0a6190203 | |
Nico Schottelius | 3af14e5eac | |
Nico Schottelius | 2138967b19 | |
Nico Schottelius | a89cb0ce38 | |
Nico Schottelius | 8b99b05dfd | |
Nico Schottelius | fb6749098f | |
Nico Schottelius | 242d3e62b9 | |
Nico Schottelius | 10870c4ce1 | |
Nico Schottelius | ca8918d2c0 | |
Nico Schottelius | 1d68408541 | |
Nico Schottelius | 13b47585a8 | |
Nico Schottelius | c3312ea54d | |
Nico Schottelius | 7678d4fb3f | |
Nico Schottelius | c291d4a8f5 | |
Nico Schottelius | 26d61c8c2c | |
Nico Schottelius | f1d1d2b048 | |
Nico Schottelius | e2706e6403 | |
Nico Schottelius | 6ca7848cf7 | |
Nico Schottelius | 3de1482bf3 | |
Nico Schottelius | 7f204fda81 | |
Nico Schottelius | 17fde8943d | |
Nico Schottelius | 21956a3362 | |
Nico Schottelius | 9bc8d266f2 | |
Nico Schottelius | afbab56496 | |
Nico Schottelius | 9ec054053f | |
Nico Schottelius | 19d84f65da | |
Nico Schottelius | 5d1e7f1efb | |
Nico Schottelius | 775a7e89a9 | |
Nico Schottelius | b662b32457 | |
Nico Schottelius | 30f9344cf8 | |
Nico Schottelius | 1bd9e89689 | |
Nico Schottelius | ed452354cd | |
Nico Schottelius | 6e24829a68 | |
Nico Schottelius | cd21d9c072 | |
Nico Schottelius | d95861c81c | |
Nico Schottelius | 7c53c28c5d | |
Nico Schottelius | 07cb3473db | |
Nico Schottelius | 44c80899fe | |
Nico Schottelius | 2db6d79735 | |
Nico Schottelius | e917546ad1 | |
Nico Schottelius | e9bcdc72f0 | |
Nico Schottelius | 7e27ca0f3d | |
Nico Schottelius | 9c2b38bd17 | |
Nico Schottelius | be9d93c997 | |
Nico Schottelius | b9314edb78 | |
Nico Schottelius | 5c74195f20 | |
Nico Schottelius | e4c63dec6c | |
Nico Schottelius | b78a48917e | |
Nico Schottelius | 50c75ccf43 | |
Nico Schottelius | 8b2bda8c40 | |
Nico Schottelius | bf4433ad63 | |
Nico Schottelius | 735353bd05 | |
Nico Schottelius | b31989c476 | |
Nico Schottelius | 952adc32f2 | |
Nico Schottelius | caa11c5626 | |
Nico Schottelius | feaf5c1413 | |
Nico Schottelius | c7e4bb72a1 | |
Nico Schottelius | 82a6e8df8d | |
Nico Schottelius | 141f9bd535 | |
Nico Schottelius | 605408d619 | |
Nico Schottelius | 8698e47a59 | |
Nico Schottelius | 5f6a314749 | |
Nico Schottelius | 1722576c3e | |
Nico Schottelius | 5dfdfa0af0 | |
Nico Schottelius | 906ff53b23 | |
Nico Schottelius | e759b04a92 | |
Nico Schottelius | 41669d794b | |
Nico Schottelius | 5cf8cb74bc | |
Nico Schottelius | 98c65ccf5a | |
Nico Schottelius | 673d46ce39 | |
Nico Schottelius | 635f32af22 | |
Nico Schottelius | a288efd671 | |
Nico Schottelius | e64e2655de | |
Nico Schottelius | 3ba0900930 | |
Nico Schottelius | 817ca9c9c6 | |
Nico Schottelius | 13585a8c3b | |
Nico Schottelius | 5bfa439ae5 | |
Nico Schottelius | 5b62a92aad | |
Nico Schottelius | 79445b333b | |
Nico Schottelius | c19dfcca1d | |
Nico Schottelius | d2369c1963 | |
Nico Schottelius | 342b5bf488 | |
Nico Schottelius | e6614294c2 | |
Nico Schottelius | 1e1000f193 | |
Nico Schottelius | 05f246d586 | |
Nico Schottelius | 0715932a44 | |
Nico Schottelius | ec6bb69c75 | |
Nico Schottelius | 2060baa71a | |
Lina Boelling | bb7dd035b5 | |
Nico Schottelius | 8c523dd0cf | |
Nico Schottelius | 0e07585772 | |
Nico Schottelius | c313ac8f49 | |
Nico Schottelius | 165a7a93a5 | |
Nico Schottelius | 22465abdb5 | |
Nico Schottelius | a06a5afebf | |
Nico Schottelius | df9cba0502 | |
Nico Schottelius | eae359da4e | |
Nico Schottelius | 282b9d538f | |
Nico Schottelius | 466220823b | |
Nico Schottelius | 57cba53e16 | |
Nico Schottelius | 061b1b4ae2 | |
Nico Schottelius | 8fd52561c1 | |
Nico Schottelius | a524a9e7a3 | |
Nico Schottelius | 85bc0f7782 | |
Nico Schottelius | c6d18d4699 | |
Nico Schottelius | 3e33691460 | |
Nico Schottelius | 7ab5ace250 | |
Nico Schottelius | 1c931dce8a | |
Nico Schottelius | 1a233ebd85 | |
Nico Schottelius | 5ef6063fad | |
Nico Schottelius | 4b0298101a | |
Nico Schottelius | c75c6d27c3 | |
Nico Schottelius | 2c545d8495 | |
Nico Schottelius | f7100e57ff | |
Nico Schottelius | 3372d02ce5 | |
Nico Schottelius | d322792bd4 | |
Nico Schottelius | 0e5041f833 | |
Nico Schottelius | a58ba2e1cf | |
Nico Schottelius | 26edc63c02 | |
Nico Schottelius | 34003efa29 | |
Nico Schottelius | 0080ba2a06 | |
Nico Schottelius | 80e98e76bf | |
Nico Schottelius | d327a1bf64 | |
Nico Schottelius | 3dd22f69c1 | |
Nico Schottelius | 44832e13a8 | |
Nico Schottelius | 8064bebb60 | |
Nico Schottelius | 81df2e5a21 | |
Nico Schottelius | 83e8262e47 | |
Nico Schottelius | 50634eeac0 | |
Nico Schottelius | ef49d107f7 | |
Nico Schottelius | fd541c8ea4 | |
Nico Schottelius | 1c7f824c19 | |
Nico Schottelius | 3081199b56 | |
Nico Schottelius | 5abd4678b0 | |
Nico Schottelius | 63cbbb4abf | |
Nico Schottelius | e95a2ea208 | |
Nico Schottelius | 98ad607421 | |
Nico Schottelius | ba60e59a97 | |
Nico Schottelius | 7951c72189 | |
Nico Schottelius | 465f691f60 | |
Nico Schottelius | fb2c76e337 | |
Nico Schottelius | 65875488e9 | |
Nico Schottelius | 04f9b36c4b | |
Nico Schottelius | 2c459fe12a | |
Nico Schottelius | 058680214b | |
Nico Schottelius | d2019cab0d | |
Nico Schottelius | 05e392e5c1 | |
Nico Schottelius | 517c34880a | |
Nico Schottelius | fdcf03960a | |
Nico Schottelius | da28d984c8 | |
Nico Schottelius | acc5db288b | |
Nico Schottelius | 1d5f54c6a5 | |
Nico Schottelius | faa8a98e65 | |
Nico Schottelius | 472c86d6a2 | |
Nico Schottelius | 482913f86d | |
Nico Schottelius | a28fae4a0b | |
Nico Schottelius | 27d1786af1 | |
Nico Schottelius | 740f14d426 | |
Nico Schottelius | 0109d300de | |
Nico Schottelius | 68c83cf50b | |
Nico Schottelius | decae37d8d | |
Nico Schottelius | 3b0d907196 | |
Nico Schottelius | 044df01c08 | |
Nico Schottelius | e42dac6943 | |
Nico Schottelius | 699d3463ac |
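A listing like the table above can be regenerated locally with `git log`; a minimal sketch, assuming the fork is cloned with both the `ccollect-0` tag and `master` available:

```shell
# "author | abbreviated SHA1" mirrors the table's columns.
git log --format='%an | %h' ccollect-0..master

# Count the commits in the range; the compare view reports 670 for it.
git rev-list --count ccollect-0..master
```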
@@ -1,2 +1,17 @@
conf/sources/*/destination/*
doc/old
doc/*.html
doc/*.htm
doc/*.docbook
doc/*.texi
doc/man/*.html
doc/man/*.htm
doc/man/*.texi
doc/man/*.man
.*.swp
doc/man/*.[0-9]
doc/*.xml
doc/*/*.xml
*.texi
*.fo
*.lock
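Ignore patterns like the ones in this hunk can be sanity-checked with `git check-ignore`, which reports the `.gitignore` line that matches a given path; a quick sketch (the file names are made up for illustration):

```shell
# -v prints "file:line:pattern<TAB>path" for each matching path,
# so you can see exactly which pattern caught which file.
git check-ignore -v doc/man/example.html .foo.swp build.lock
```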
@@ -0,0 +1,12 @@
+stages:
+  - test
+
+unit_tests:
+  stage: test
+  script:
+    - make test
+
+shellcheck:
+  stage: test
+  script:
+    - make shellcheck
848
COPYING
848
COPYING
|
@ -1,285 +1,626 @@
|
|||
GNU GENERAL PUBLIC LICENSE
|
||||
Version 2, June 1991
|
||||
GNU GENERAL PUBLIC LICENSE
|
||||
Version 3, 29 June 2007
|
||||
|
||||
Copyright (C) 1989, 1991 Free Software Foundation, Inc.
|
||||
51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
|
||||
Everyone is permitted to copy and distribute verbatim copies
|
||||
of this license document, but changing it is not allowed.
|
||||
|
||||
Preamble
|
||||
Preamble
|
||||
|
||||
The licenses for most software are designed to take away your
|
||||
freedom to share and change it. By contrast, the GNU General Public
|
||||
License is intended to guarantee your freedom to share and change free
|
||||
software--to make sure the software is free for all its users. This
|
||||
General Public License applies to most of the Free Software
|
||||
Foundation's software and to any other program whose authors commit to
|
||||
using it. (Some other Free Software Foundation software is covered by
|
||||
the GNU Library General Public License instead.) You can apply it to
|
||||
The GNU General Public License is a free, copyleft license for
|
||||
software and other kinds of works.
|
||||
|
||||
The licenses for most software and other practical works are designed
|
||||
to take away your freedom to share and change the works. By contrast,
|
||||
the GNU General Public License is intended to guarantee your freedom to
|
||||
share and change all versions of a program--to make sure it remains free
|
||||
software for all its users. We, the Free Software Foundation, use the
|
||||
GNU General Public License for most of our software; it applies also to
|
||||
any other work released this way by its authors. You can apply it to
|
||||
your programs, too.
|
||||
|
||||
When we speak of free software, we are referring to freedom, not
|
||||
price. Our General Public Licenses are designed to make sure that you
|
||||
have the freedom to distribute copies of free software (and charge for
|
||||
this service if you wish), that you receive source code or can get it
|
||||
if you want it, that you can change the software or use pieces of it
|
||||
in new free programs; and that you know you can do these things.
|
||||
them if you wish), that you receive source code or can get it if you
|
||||
want it, that you can change the software or use pieces of it in new
|
||||
free programs, and that you know you can do these things.
|
||||
|
||||
To protect your rights, we need to make restrictions that forbid
|
||||
anyone to deny you these rights or to ask you to surrender the rights.
|
||||
These restrictions translate to certain responsibilities for you if you
|
||||
distribute copies of the software, or if you modify it.
|
||||
To protect your rights, we need to prevent others from denying you
|
||||
these rights or asking you to surrender the rights. Therefore, you have
|
||||
certain responsibilities if you distribute copies of the software, or if
|
||||
you modify it: responsibilities to respect the freedom of others.
|
||||
|
||||
For example, if you distribute copies of such a program, whether
|
||||
gratis or for a fee, you must give the recipients all the rights that
|
||||
you have. You must make sure that they, too, receive or can get the
|
||||
source code. And you must show them these terms so they know their
|
||||
rights.
|
||||
gratis or for a fee, you must pass on to the recipients the same
|
||||
freedoms that you received. You must make sure that they, too, receive
|
||||
or can get the source code. And you must show them these terms so they
|
||||
know their rights.
|
||||
|
||||
We protect your rights with two steps: (1) copyright the software, and
|
||||
(2) offer you this license which gives you legal permission to copy,
|
||||
distribute and/or modify the software.
|
||||
Developers that use the GNU GPL protect your rights with two steps:
|
||||
(1) assert copyright on the software, and (2) offer you this License
|
||||
giving you legal permission to copy, distribute and/or modify it.
|
||||
|
||||
Also, for each author's protection and ours, we want to make certain
|
||||
that everyone understands that there is no warranty for this free
|
||||
software. If the software is modified by someone else and passed on, we
|
||||
want its recipients to know that what they have is not the original, so
|
||||
that any problems introduced by others will not reflect on the original
|
||||
authors' reputations.
|
||||
For the developers' and authors' protection, the GPL clearly explains
|
||||
that there is no warranty for this free software. For both users' and
|
||||
authors' sake, the GPL requires that modified versions be marked as
|
||||
changed, so that their problems will not be attributed erroneously to
|
||||
authors of previous versions.
|
||||
|
||||
Finally, any free program is threatened constantly by software
|
||||
patents. We wish to avoid the danger that redistributors of a free
|
||||
program will individually obtain patent licenses, in effect making the
|
||||
program proprietary. To prevent this, we have made it clear that any
|
||||
patent must be licensed for everyone's free use or not licensed at all.
|
||||
Some devices are designed to deny users access to install or run
|
||||
modified versions of the software inside them, although the manufacturer
|
||||
can do so. This is fundamentally incompatible with the aim of
|
||||
protecting users' freedom to change the software. The systematic
|
||||
pattern of such abuse occurs in the area of products for individuals to
|
||||
use, which is precisely where it is most unacceptable. Therefore, we
|
||||
have designed this version of the GPL to prohibit the practice for those
|
||||
products. If such problems arise substantially in other domains, we
|
||||
stand ready to extend this provision to those domains in future versions
|
||||
of the GPL, as needed to protect the freedom of users.
|
||||
|
||||
Finally, every program is threatened constantly by software patents.
|
||||
States should not allow patents to restrict development and use of
|
||||
software on general-purpose computers, but in those that do, we wish to
|
||||
avoid the special danger that patents applied to a free program could
|
||||
make it effectively proprietary. To prevent this, the GPL assures that
|
||||
patents cannot be used to render the program non-free.
|
||||
|
||||
The precise terms and conditions for copying, distribution and
|
||||
modification follow.
|
||||
|
||||
GNU GENERAL PUBLIC LICENSE
|
||||
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
|
||||
|
||||
0. This License applies to any program or other work which contains
|
||||
a notice placed by the copyright holder saying it may be distributed
|
||||
under the terms of this General Public License. The "Program", below,
|
||||
refers to any such program or work, and a "work based on the Program"
|
||||
means either the Program or any derivative work under copyright law:
|
||||
that is to say, a work containing the Program or a portion of it,
|
||||
either verbatim or with modifications and/or translated into another
|
||||
language. (Hereinafter, translation is included without limitation in
|
||||
the term "modification".) Each licensee is addressed as "you".
|
||||
TERMS AND CONDITIONS
|
||||
|
||||
Activities other than copying, distribution and modification are not
|
||||
covered by this License; they are outside its scope. The act of
|
||||
running the Program is not restricted, and the output from the Program
|
||||
is covered only if its contents constitute a work based on the
|
||||
Program (independent of having been made by running the Program).
|
||||
Whether that is true depends on what the Program does.
|
||||
0. Definitions.

"This License" refers to version 3 of the GNU General Public License.

"Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

"The Program" refers to any copyrightable work licensed under this
License. Each licensee is addressed as "you". "Licensees" and
"recipients" may be individuals or organizations.

To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

A "covered work" means either the unmodified Program or a work based
on the Program.
To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy. Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

1. Source Code.
The "source code" for a work means the preferred form of the work
for making modifications to it. "Object code" means any non-source
form of a work.

A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities. However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

The Corresponding Source for a work in source code form is that
same work.
2. Basic Permissions.

All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met. This License explicitly affirms your unlimited
permission to run the unmodified Program. The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force. You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
3. Protecting Users' Legal Rights From Anti-Circumvention Law.

No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
4. Conveying Verbatim Copies.

You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
5. Conveying Modified Source Versions.

You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.

b) The work must carry prominent notices stating that it is
released under this License and any conditions added under section
7. This requirement modifies the requirement in section 4 to
"keep intact all notices".

c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.

d) If the work has interactive user interfaces, each must display
Appropriate Legal Notices; however, if the Program has interactive
interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.

A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
6. Conveying Non-Source Forms.

You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

a) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by the
Corresponding Source fixed on a durable physical medium
customarily used for software interchange.

b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.

c) Convey individual copies of the object code with a copy of the
written offer to provide the Corresponding Source. This
alternative is allowed only occasionally and noncommercially, and
only if you received the object code with such an offer, in accord
with subsection 6b.

d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.

e) Convey the object code using peer-to-peer transmission, provided
you inform other peers where the object code and Corresponding
Source of the work are being offered to the general public at no
charge under subsection 6d.

A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.

"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or

b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or

c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or

d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or

e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or

f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.

All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.

You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.

You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.

Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.

An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.

You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.

A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".

A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.

Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.

In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.

If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.

If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.

A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.

Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot
|
||||
distribute so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you
|
||||
may not distribute the Program at all. For example, if a patent
|
||||
license would not permit royalty-free redistribution of the Program by
|
||||
all those who receive copies directly or indirectly through you, then
|
||||
the only way you could satisfy both it and this License would be to
|
||||
refrain entirely from distribution of the Program.
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
If any portion of this section is held invalid or unenforceable under
|
||||
any particular circumstance, the balance of the section is intended to
|
||||
apply and the section as a whole is intended to apply in other
|
||||
circumstances.
|
||||
13. Use with the GNU Affero General Public License.
|
||||
|
||||
It is not the purpose of this section to induce you to infringe any
|
||||
patents or other property right claims or to contest validity of any
|
||||
such claims; this section has the sole purpose of protecting the
|
||||
integrity of the free software distribution system, which is
|
||||
implemented by public license practices. Many people have made
|
||||
generous contributions to the wide range of software distributed
|
||||
through that system in reliance on consistent application of that
|
||||
system; it is up to the author/donor to decide if he or she is willing
|
||||
to distribute software through any other system and a licensee cannot
|
||||
impose that choice.
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
This section is intended to make thoroughly clear what is believed to
|
||||
be a consequence of the rest of this License.
|
||||
|
||||
8. If the distribution and/or use of the Program is restricted in
|
||||
certain countries either by patents or by copyrighted interfaces, the
|
||||
original copyright holder who places the Program under this License
|
||||
may add an explicit geographical distribution limitation excluding
|
||||
those countries, so that distribution is permitted only in or among
|
||||
countries not thus excluded. In such case, this License incorporates
|
||||
the limitation as if written in the body of this License.
|
||||
14. Revised Versions of this License.
|
||||
|
||||
9. The Free Software Foundation may publish revised and/or new versions
|
||||
of the General Public License from time to time. Such new versions will
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the Program
|
||||
specifies a version number of this License which applies to it and "any
|
||||
later version", you have the option of following the terms and conditions
|
||||
either of that version or of any later version published by the Free
|
||||
Software Foundation. If the Program does not specify a version number of
|
||||
this License, you may choose any version ever published by the Free Software
|
||||
Foundation.
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
10. If you wish to incorporate parts of the Program into other free
|
||||
programs whose distribution conditions are different, write to the author
|
||||
to ask for permission. For software which is copyrighted by the Free
|
||||
Software Foundation, write to the Free Software Foundation; we sometimes
|
||||
make exceptions for this. Our decision will be guided by the two goals
|
||||
of preserving the free status of all derivatives of our free software and
|
||||
of promoting the sharing and reuse of software generally.
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
NO WARRANTY
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
11. BECAUSE THE PROGRAM IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY
|
||||
FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN
|
||||
OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES
|
||||
PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED
|
||||
OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
|
||||
MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS
|
||||
TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE
|
||||
PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING,
|
||||
REPAIR OR CORRECTION.
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
12. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR
|
||||
REDISTRIBUTE THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES,
|
||||
INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING
|
||||
OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED
|
||||
TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY
|
||||
YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER
|
||||
PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE
|
||||
POSSIBILITY OF SUCH DAMAGES.
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
|
@ -287,15 +628,15 @@ free software which everyone can redistribute and change under these terms.
|
|||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
convey the exclusion of warranty; and each file should have at least
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software; you can redistribute it and/or modify
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation; either version 2 of the License, or
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
|
@ -304,37 +645,30 @@ the "copyright" line and a pointer to where the full notice is found.
|
|||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program; if not, write to the Free Software
|
||||
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
|
||||
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program is interactive, make it output a short notice like this
|
||||
when it starts in an interactive mode:
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
Gnomovision version 69, Copyright (C) year name of author
|
||||
Gnomovision comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
<program> Copyright (C) <year> <name of author>
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, the commands you use may
|
||||
be called something other than `show w' and `show c'; they could even be
|
||||
mouse-clicks or menu items--whatever suits your program.
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or your
|
||||
school, if any, to sign a "copyright disclaimer" for the program, if
|
||||
necessary. Here is a sample; alter the names:
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<http://www.gnu.org/licenses/>.
|
||||
|
||||
Yoyodyne, Inc., hereby disclaims all copyright interest in the program
|
||||
`Gnomovision' (which makes passes at compilers) written by James Hacker.
|
||||
|
||||
<signature of Ty Coon>, 1 April 1989
|
||||
Ty Coon, President of Vice
|
||||
|
||||
This General Public License does not permit incorporating your program into
|
||||
proprietary programs. If your program is a subroutine library, you may
|
||||
consider it more useful to permit linking proprietary applications with the
|
||||
library. If this is what you want to do, use the GNU Library General
|
||||
Public License instead of this License.
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
|
||||
|
|
|
@@ -0,0 +1,16 @@
Thanks go to the following people (sorted alphabetically):

* Alexey Maximov
  - for finding return-value and shell limitation bugs
* #cLinux IRC channel on irc.freenode.org
  - for testing and debugging (those I mean should know ;-)
* Daniel Aubry
  - for reporting many hints
* Jens-Christoph Brendel
  - added the automatic backup manager (contrib/jbrendel-autobackup)
* John Lawless
  - a lot of patches and some very interesting discussions
* Markus Meier
  - for finding a really simple solution for choosing the right backup to
    clone from: make it independent of the interval, simply choose the last
    one created
249
Makefile
@@ -1,53 +1,250 @@
#
# ccollect
# Nico Schottelius, Fri Jan 13 12:13:08 CET 2006
# 2006-2008 Nico Schottelius (nico-ccollect at schottelius.org)
#
# This file is part of ccollect.
#
# ccollect is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ccollect is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ccollect. If not, see <http://www.gnu.org/licenses/>.
#
# Initially written on Fri Jan 13 12:13:08 CET 2006
#
# FIXME: add prefix-support?
#

INSTALL=install
CCOLLECT=ccollect.sh
CCOLLECT_SOURCE=ccollect
CCOLLECT_DEST=ccollect
LN=ln -sf
ASCIIDOC=asciidoc
DOCBOOKTOTEXI=docbook2x-texi
DOCBOOKTOMAN=docbook2x-man
XSLTPROC=xsltproc
XSL=/usr/local/share/xsl/docbook/html/docbook.xsl
A2X=a2x

prefix=/usr/packages/ccollect-0.2
bindir=$(prefix)/bin
destination=$(bindir)/$(CCOLLECT)
prefix=/usr/packages/ccollect-git
bindir=${prefix}/bin
destination=${bindir}/${CCOLLECT_DEST}

mandest=${prefix}/man/man1
manlink=/usr/local/man/man1

path_dir=/usr/local/bin
path_destination=$(path_dir)/$(CCOLLECT)
path_destination=${path_dir}/${CCOLLECT_DEST}
docs_archive_name=docs.tar

# where to publish
host=creme.schottelius.org
dir=www/org/schottelius/linux/ccollect
docdir=$(dir)/doc
#
# Asciidoc will be used to generate other formats later
#
MANDOCS = doc/man/ccollect.text \
	doc/man/ccollect_add_source.text \
	doc/man/ccollect_analyse_logs.text \
	doc/man/ccollect_delete_source.text \
	doc/man/ccollect_logwrapper.text \
	doc/man/ccollect_list_intervals.text

DOCS = ${MANDOCS} doc/ccollect.text

#
# Documentation
#
HTMLDOCS = ${DOCS:.text=.html}
DBHTMLDOCS = ${DOCS:.text=.htm}

# texi is broken currently, don't know why xslt things complain yet
TEXIDOCS = ${DOCS:.text=.texi}
TEXIDOCS =

# fop fails here, so disable it for now
PDFDOCS = ${DOCS:.text=.pdf}
PDFDOCS =

MANPDOCS = ${MANDOCS:.text=.1}

DOCBDOCS = ${DOCS:.text=.docbook}

DOC_ALL = ${HTMLDOCS} ${DBHTMLDOCS} ${TEXIDOCS} ${MANPDOCS} ${PDFDOCS}

TEST_LOG_FILE = /tmp/ccollect/ccollect.log

#
# End user targets
#
all:
	@echo "Nothing to make, make install."
	@echo "----------- ccollect make targets --------------"
	@echo "documentation: generate HTML, Texinfo and manpage"
	@echo "html: only generate HTML"
	@echo "info: only generate Texinfo"
	@echo "man: only generate manpage{s}"
	@echo "install: install ccollect to ${prefix}"
	@echo "shellcheck: shellcheck ccollect script"
	@echo "test: run unit tests"

install: install-script install-link
html: ${HTMLDOCS}
htm: ${DBHTMLDOCS}
info: ${TEXIDOCS}
man: ${MANPDOCS}
pdf: ${PDFDOCS}
documentation: ${DOC_ALL}

install: install-link install-manlink

install-link: install-script
	$(LN) $(destination) $(path_destination)
	${LN} ${destination} ${path_destination}

install-script:
	$(INSTALL) -D -m 0755 -s $(CCOLLECT) $(destination)
	${INSTALL} -D -m 0755 ${CCOLLECT_SOURCE} ${destination}

install-man: man
	${INSTALL} -d -m 0755 ${mandest}
	${INSTALL} -D -m 0644 doc/man/*.1 ${mandest}

install-manlink: install-man
	${INSTALL} -d -m 0755 ${manlink}
	for man in ${mandest}/*; do ${LN} $$man ${manlink}; done

#
# Tools
#
TOOLS2=ccollect_add_source
TOOLS2 += ccollect_analyse_logs

TOOLS=ccollect_add_source \
	ccollect_analyse_logs \
	ccollect_delete_source \
	ccollect_list_intervals \
	ccollect_logwrapper \
	ccollect_list_intervals

# Stick to posix
TOOLSMAN1 = $(TOOLS:ccollect=doc/man/ccollect)
TOOLSMAN = $(TOOLSMAN1:=.text)

TOOLSFP = $(subst ccollect,tools/ccollect,$(TOOLS))

## FIXME: posix make: shell? =>

t2:
	echo $(TOOLS) - $(TOOLSFP)
	echo $(TOOLSMAN)
	echo $(TOOLSFP)


# docbook gets .htm, asciidoc directly .html
%.htm: %.docbook
	${XSLTPROC} -o $@ ${XSL} $<

%.html: %.text %.docbook
	${ASCIIDOC} -n -o $@ $<

%.html: %.text
	${ASCIIDOC} -n -o $@ $<

%.docbook: %.text
	${ASCIIDOC} -n -b docbook -o $@ $<

%.texi: %.docbook
	${DOCBOOKTOTEXI} --to-stdout $< > $@

#%.mandocbook: %.text
#	${ASCIIDOC} -b docbook -d manpage -o $@ $<

#%.man: %.mandocbook
#	${DOCBOOKTOMAN} --to-stdout $< > $@

#%.man: %.text
%.1: %.text
	${A2X} -f manpage $<

%.pdf: %.text
	${A2X} -f pdf $<

documentation:
	@echo "Generating HTML-documentation"
	@asciidoc -n -o doc/ccollect.html doc/ccollect.text

#
# Developer targets
#
update:
	@cg-update creme

push-work:
	@cg-push creme
	@cg-push main
pub:
	git push

publish-doc: documentation
	@chmod a+r doc/ccollect.html
	@scp doc/ccollect.html doc/ccollect.text $(host):$(docdir)
	@chmod a+r ${DOCS} ${DOC_ALL}
	@tar cf ${docs_archive_name} ${DOCS} ${DOC_ALL}
	@echo "Documentation files are in ${docs_archive_name}"

#
# Distribution
#
clean:
	rm -f ${DOC_ALL}
	rm -f doc/man/*.[0-9] doc/man/*.xml doc/*.fo doc/man/*.fo

distclean: clean
	rm -f ${DOCBDOCS}

#
# Be nice with the users and generate documentation for them
#
dist: distclean documentation

/tmp/ccollect:
	mkdir -p /tmp/ccollect

shellcheck: ./ccollect
	shellcheck -s sh -f gcc -x ./ccollect

test-nico: $(CCOLLECT_SOURCE) /tmp/ccollect
	cd ./conf/sources/; for s in *; do CCOLLECT_CONF=../ ../../ccollect daily "$$s"; done
	touch /tmp/ccollect/$$(ls /tmp/ccollect | head -n1).ccollect-marker
	CCOLLECT_CONF=./conf ./ccollect -a daily
	touch /tmp/ccollect/$$(ls /tmp/ccollect | head -n1).ccollect-marker
	CCOLLECT_CONF=./conf ./ccollect -a -p daily

test-dir-source:
	mkdir -p /tmp/ccollect/source
	cp -R -f ./* /tmp/ccollect/source

test-dir-destination:
	mkdir -p /tmp/ccollect/backup

test-dir-destination-chint:
	mkdir -p /tmp/ccollect/backup-chint

test-fixed-intervals: $(CCOLLECT_SOURCE) test-dir-source test-dir-destination test-dir-destination-chint
	for s in ./test/conf/sources/*; do \
		CCOLLECT_CONF=./test/conf ./ccollect -l ${TEST_LOG_FILE} daily "$$(basename "$$s")"; \
		test "$$(ls -1 /tmp/ccollect/backup | wc -l)" -gt "0" || { cat ${TEST_LOG_FILE}; exit 1; }; \
	done
	CCOLLECT_CONF=./test/conf ./ccollect -l ${TEST_LOG_FILE} -a -v daily
	test "$$(ls -1 /tmp/ccollect/backup | wc -l)" -gt "0" || { cat ${TEST_LOG_FILE}; exit 1; }
	CCOLLECT_CONF=./test/conf ./ccollect -l ${TEST_LOG_FILE} -a -p daily
	test "$$(ls -1 /tmp/ccollect/backup | wc -l)" -gt "0" || { cat ${TEST_LOG_FILE}; exit 1; }
	@printf "\nFixed intervals test ended successfully\n"

test-interval-changing: $(CCOLLECT_SOURCE) test-dir-source test-dir-destination-chint
	rm -rf /tmp/ccollect/backup-chint/*
	test "$$(ls -1 /tmp/ccollect/backup-chint | wc -l)" -eq "0" || { cat ${TEST_LOG_FILE}; exit 1; }
	printf "3" > ./test/conf/sources/local-with-interval/intervals/daily
	for x in 1 2 3 4 5; do CCOLLECT_CONF=./test/conf ./ccollect -l ${TEST_LOG_FILE} daily local-with-interval; done
	test "$$(ls -1 /tmp/ccollect/backup-chint | wc -l)" -eq "4" || { cat ${TEST_LOG_FILE}; exit 1; }
	printf "5" > ./test/conf/sources/local-with-interval/intervals/daily
	for x in 1 2 3 4 5 6 7; do CCOLLECT_CONF=./test/conf ./ccollect -l ${TEST_LOG_FILE} daily local-with-interval; done
	test "$$(ls -1 /tmp/ccollect/backup-chint | wc -l)" -eq "6" || { cat ${TEST_LOG_FILE}; exit 1; }
	printf "4" > ./test/conf/sources/local-with-interval/intervals/daily
	for x in 1 2 3 4 5 6; do CCOLLECT_CONF=./test/conf ./ccollect -l ${TEST_LOG_FILE} daily local-with-interval; done
	test "$$(ls -1 /tmp/ccollect/backup-chint | wc -l)" -eq "5" || { cat ${TEST_LOG_FILE}; exit 1; }
	printf "3" > ./test/conf/sources/local-with-interval/intervals/daily
	@printf "\nInterval changing test ended successfully\n"

test: test-fixed-intervals test-interval-changing
	test -f "${TEST_LOG_FILE}"
	@printf "\nTests ended successfully\n"
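The `${DOCS:.text=.html}` assignments above use make's suffix substitution: every word in the variable ending in `.text` gets that suffix swapped for the new one. A quick self-contained way to see the effect (the file names and the `/tmp` path are made up for the demo):

```shell
# Write a throwaway makefile demonstrating ${VAR:.old=.new} substitution.
cat > /tmp/subst-demo.mk <<'EOF'
DOCS = doc/a.text doc/man/b.text
HTMLDOCS = ${DOCS:.text=.html}
all:
	@echo ${HTMLDOCS}
EOF
make -f /tmp/subst-demo.mk
# prints: doc/a.html doc/man/b.html
```

This is the same mechanism that derives `HTMLDOCS`, `DBHTMLDOCS`, `MANPDOCS`, and friends from the single `DOCS` list.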
62
README
@@ -2,22 +2,64 @@
ccollect.sh, Nico Schottelius, 2005-12-06
--------------------------------------------------------------------------------

ccollect backs up data from local or remote hosts to your local hard disk.
ccollect backs up (local or remote) data to local or remote destinations.

You can retrieve the latest version of ccollect at [0].

doc/ccollect.text       Manual in text format
doc/ccollect.html       Manual in xhtml

ccollect was inspired by rsnapshot [1], which had some problems:
- configuration parameters had to be TAB separated
- you could not specify exclude lists differently for every source
ccollect was inspired by rsnapshot [1], which has some problems:
- configuration parameters have to be TAB separated
- you cannot specify per-source exclude lists
- no per-source pre/post execution support
- no parallel execution
- does unnecessary moving of backup directories
- I didn't like the configuration at all, so I used the cconfig style [2].

Please use tools/report_success.sh to report success, if you are successfully
using ccollect.

[0]: ccollect: http://linux.schottelius.org/ccollect/
Have a look at doc/HACKING, if you plan to change ccollect.

A small attempt to visualize the differences in a table:

+---------------+-------------------------------------------------------------+
| What?         | rsnapshot                    | ccollect                     |
+---------------+-------------------------------------------------------------+
| Configuration | tab separated, needs         | plain cconfig-style          |
|               | parsing                      |                              |
+---------------+-------------------------------------------------------------+
| Per source    |                              |                              |
| post-/pre-    | no                           | yes                          |
| execution     |                              |                              |
+---------------+-------------------------------------------------------------+
| Per source    |                              |                              |
| exclude lists | no                           | yes                          |
+---------------+-------------------------------------------------------------+
| Parallel      |                              |                              |
| execution     |                              |                              |
| of multiple   | no                           | yes                          |
| backups       |                              |                              |
+---------------+-------------------------------------------------------------+
| Programming   | perl                         | sh                           |
| language      |                              | (posix compatible)           |
+---------------+-------------------------------------------------------------+
| Lines of code | 6772 (5353 w/o comments,     | 546 (375 w/o comments,       |
| (2006-10-25)  | 4794 w/o empty lines)        | 288 w/o empty lines)         |
+---------------+-------------------------------------------------------------+
| Lines of code | 7269 (6778 w/o comments,     | 587 (397 w/o comments,       |
| (2009-07-23)  | 6139 w/o empty lines)        | 315 w/o empty lines)         |
+---------------+-------------------------------------------------------------+
| Age           | Available since 2002/2003    | Written at 2005-11-14        |
+---------------+-------------------------------------------------------------+

Included documentation:

doc/ccollect.text       Manual in text format
doc/ccollect.html       Manual in xhtml (generated)

doc/man/ccollect.text   Manpage in text format
doc/man/ccollect.man    Manpage in manpage format (generated)

--------------------------------------------------------------------------------
[0]: ccollect: http://www.nico.schottelius.org/software/ccollect/
[1]: rsnapshot: http://www.rsnapshot.org/
[2]: cconfig: http://nico.schotteli.us/papers/linux/cconfig/
@@ -0,0 +1,26 @@
Note:
Two sources backing up to one destination does not work, because the
two sources link from different directories.

Listing:
.../* does not work in some cases (depending on the shell?)

Isup-check: optional!

line 318/check no_xxx => correct test?

REMOVE ALL PCMD code

Backup to remote can be done via ssh tunnel!


% remote host: allow the backup host to access our sshd
ssh -R4242:localhost:22 backupserver "ccollect interval backupremotehost"

remove $destination (==ddir)

--------------------------------------------------------------------------------

Remote backups:

ccollect_backup_to $host $remote_port $source $interval
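The tunnel note above can be made slightly more concrete. This is a hedged sketch only: `backupserver`, `backupremotehost`, and port 4242 are the placeholder names from the note, not a tested configuration.

```shell
# On the host to be backed up: forward port 4242 on the backup server
# back to our local sshd (port 22), then trigger the backup over there.
ssh -R 4242:localhost:22 backupserver \
    "ccollect.sh daily backupremotehost"
# The source "backupremotehost" on the backup server would then point its
# ssh/rsync transport at localhost:4242, reaching back through the tunnel.
```

The reverse forwarding (`-R`) lets a backup server that cannot reach the client directly (e.g. behind NAT) still pull data from it for the duration of the session.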
@ -0,0 +1,937 @@
|
|||
#!/bin/sh
#
# 2005-2013 Nico Schottelius (nico-ccollect at schottelius.org)
# 2016-2019 Darko Poljak (darko.poljak at gmail.com)
#
# This file is part of ccollect.
#
# ccollect is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ccollect is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ccollect. If not, see <http://www.gnu.org/licenses/>.
#
# Initially written for SyGroup (www.sygroup.ch)
# Date: Mon Nov 14 11:45:11 CET 2005

# Error upon expanding unset variables:
set -u

#
# Standard variables (stolen from cconf)
#
__mydir="${0%/*}"
__abs_mydir="$(cd "$__mydir" && pwd -P)"
__myname=${0##*/}

#
# where to find our configuration and temporary file
#
CCOLLECT_CONF="${CCOLLECT_CONF:-/etc/ccollect}"
CSOURCES="${CCOLLECT_CONF}/sources"
CDEFAULTS="${CCOLLECT_CONF}/defaults"
CPREEXEC="${CDEFAULTS}/pre_exec"
CPOSTEXEC="${CDEFAULTS}/post_exec"
CMARKER=".ccollect-marker"

TMP="$(mktemp "/tmp/${__myname}.XXXXXX")"
export TMP
CONTROL_PIPE="/tmp/${__myname}-control-pipe"

VERSION="2.10"
RELEASE="2020-08-26"
HALF_VERSION="ccollect ${VERSION}"
FULL_VERSION="ccollect ${VERSION} (${RELEASE})"

#
# CDATE: how we use it for naming of the archives
# DDATE: how the user should see it in our output (DISPLAY)
#
CDATE="date +%Y%m%d-%H%M"
DDATE="date +%Y-%m-%d-%H:%M:%S"
SDATE="date +%s"

#
# LOCKING: use flock if available, otherwise mkdir
# Locking is done for each source so that only one instance per source
# can run.
#
# Use CCOLLECT_CONF directory for lock files.
# This directory can be set arbitrarily so that it is writable for the user
# executing ccollect.
LOCKDIR="${CCOLLECT_CONF}"
# printf pattern: ccollect_<source>.lock
LOCKFILE_PATTERN="ccollect_%s.lock"
LOCKFD=4

#
# locking functions using flock
#
lock_flock()
{
    # $1 = source to backup
    # shellcheck disable=SC2059
    lockfile="${LOCKDIR}/$(printf "${LOCKFILE_PATTERN}" "$1")"
    eval "exec ${LOCKFD}> '${lockfile}'"

    flock -n ${LOCKFD} && return 0 || return 1
}

unlock_flock()
{
    # $1 = source to backup
    # shellcheck disable=SC2059
    lockfile="${LOCKDIR}/$(printf "${LOCKFILE_PATTERN}" "$1")"
    eval "exec ${LOCKFD}>&-"
    rm -f "${lockfile}"
}

#
# locking functions using mkdir (mkdir is atomic)
#
lock_mkdir()
{
    # $1 = source to backup
    # shellcheck disable=SC2059
    lockfile="${LOCKDIR}/$(printf "${LOCKFILE_PATTERN}" "$1")"

    mkdir "${lockfile}" && return 0 || return 1
}

unlock_mkdir()
{
    # $1 = source to backup
    # shellcheck disable=SC2059
    lockfile="${LOCKDIR}/$(printf "${LOCKFILE_PATTERN}" "$1")"

    rmdir "${lockfile}"
}

#
# determine locking tool: flock or mkdir
#
if command -v flock > /dev/null 2>&1
then
    lockf="lock_flock"
    unlockf="unlock_flock"
else
    lockf="lock_mkdir"
    unlockf="unlock_mkdir"
fi
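The mkdir fallback above works as a mutex because mkdir(2) is atomic: of two processes racing to create the same directory, exactly one succeeds. A minimal standalone sketch of that behaviour (the lock path is illustrative, not taken from ccollect):

```shell
#!/bin/sh
# mkdir as a mutex: the second attempt to create the same
# directory fails, so no flock is needed.
lock="${TMPDIR:-/tmp}/ccollect-mkdir-demo.lock"
rm -rf "${lock}"

if mkdir "${lock}" 2>/dev/null; then
    echo "first: acquired"
fi
if mkdir "${lock}" 2>/dev/null; then
    echo "second: acquired"
else
    echo "second: busy"
fi
rmdir "${lock}"
```

flock is preferred when available because the lock dies with the process, whereas a stale mkdir lock survives a crash and must be removed by hand.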

#
# unset values
#
PARALLEL=""
MAX_JOBS=""
USE_ALL=""
LOGFILE=""
SYSLOG=""
# e - only errors, a - all output
LOGLEVEL="a"
LOGONLYERRORS=""

#
# catch signals
#
TRAPFUNC="rm -f \"${TMP}\""
# shellcheck disable=SC2064
trap "${TRAPFUNC}" 1 2 15

#
# Functions
#

# check if we are running interactive or non-interactive
# see: http://www.tldp.org/LDP/abs/html/intandnonint.html
_is_interactive()
{
    [ -t 0 ] || [ -p /dev/stdin ]
}
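The `[ -t 0 ]` test above is true only when stdin is a terminal, which is how ccollect distinguishes an interactive run from cron or a pipeline. A quick sketch of the distinction (standalone, not part of ccollect):

```shell
#!/bin/sh
# [ -t 0 ] checks whether fd 0 (stdin) is attached to a tty.
# Redirecting stdin (as cron does) makes the test false.
if [ -t 0 ]; then
    echo "interactive"
else
    echo "non-interactive"
fi
```

Run normally from a terminal it prints "interactive"; run as `demo.sh </dev/null` it prints "non-interactive", which is why non-interactive ccollect runs default to syslog logging.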

#
# ssh-"feature": we cannot do '... read ...; ssh ...; < file',
# because ssh reads stdin! -n does not work -> does not ask for password
# Also allow deletion for files without the given suffix
#
delete_from_file()
{
    file="$1"; shift
    suffix="" # It will be set, if deleting incomplete backups.
    [ $# -eq 1 ] && suffix="$1" && shift
    # dirs for deletion will be moved to this trash dir inside destination dir
    # - for fast mv operation
    trash="$(mktemp -d ".trash.XXXXXX")"
    while read -r to_remove; do
        mv "${to_remove}" "${trash}" ||
            _exit_err "Moving ${to_remove} to ${trash} failed."
        set -- "$@" "${to_remove}"
        if [ "${suffix}" ]; then
            to_remove_no_suffix="$(echo "${to_remove}" | sed "s/$suffix\$//")"
            mv "${to_remove_no_suffix}" "${trash}" ||
                _exit_err "Moving ${to_remove_no_suffix} to ${trash} failed."
            set -- "$@" "${to_remove_no_suffix}"
        fi
    done < "${file}"
    _techo "Removing $* in ${trash}..."
    empty_dir=".empty-dir"
    mkdir "${empty_dir}" || _exit_err "Empty directory ${empty_dir} cannot be created."
    [ "${VVERBOSE}" ] && echo "Starting: rsync -a --delete ${empty_dir} ${trash}"
    # rsync needs ending slash for directory content
    rsync -a --delete "${empty_dir}/" "${trash}/" || _exit_err "Removing $* failed."
    rmdir "${trash}" || _exit_err "Removing ${trash} directory failed"
    rmdir "${empty_dir}" || _exit_err "Removing ${empty_dir} directory failed"
    _techo "Removing $* in ${trash} finished."
}

display_version()
{
    echo "${FULL_VERSION}"
    exit 0
}

usage()
{
    cat << eof
${__myname}: [args] <interval name> <sources to backup>

ccollect creates (pseudo) incremental backups

-h, --help:              Show this help screen
-a, --all:               Backup all sources specified in ${CSOURCES}
-e, --errors:            Log only errors
-j [max], --jobs [max]   Specifies the number of jobs to run simultaneously.
                         If max is not specified then parallelise all jobs.
-l FILE, --logfile FILE  Log to specified file
-p, --parallel:          Parallelise backup processes (deprecated from 2.0)
-s, --syslog:            Log to syslog with tag ccollect
-v, --verbose:           Be very verbose (uses set -x)
-V, --version:           Print version information

This is version ${VERSION} released on ${RELEASE}.

Retrieve latest ccollect at http://www.nico.schottelius.org/software/ccollect/
eof
    exit 0
}

# locking functions
lock()
{
    "${lockf}" "$@" || _exit_err \
        "Only one instance of ${__myname} for source \"$1\" can run at one time."
}

unlock()
{
    "${unlockf}" "$@"
}

# time displaying echo
# stdout version
_techo_stdout()
{
    echo "$(${DDATE}): $*"
}

# syslog version
_techo_syslog()
{
    logger -t ccollect "$@"
}

# specified file version
_techo_file()
{
    _techo_stdout "$@" >> "${LOGFILE}"
}

# determine _techo version before parsing options
if _is_interactive
then
    _techof="_techo_stdout"
else
    _techof="_techo_syslog"
fi

# _techo with determined _techo version
_techo()
{
    if [ "${LOGLEVEL}" = "a" ]
    then
        # name is exported before calling this function
        # shellcheck disable=SC2154
        set -- ${name:+"[${name}]"} "$@"
        "${_techof}" "$@"
    fi
}

_techo_err()
{
    _techo "Error: $*"
}

_exit_err()
{
    _techo_err "$@"
    rm -f "${TMP}"
    exit 1
}

#
# Parse options
#
while [ "$#" -ge 1 ]; do
    case "$1" in
        -a|--all)
            USE_ALL=1
            ;;
        -p|--parallel)
            _techo "Warning: -p, --parallel option is deprecated," \
                "use -j, --jobs instead."
            PARALLEL=1
            MAX_JOBS=""
            ;;
        -j|--jobs)
            PARALLEL=1
            if [ "$#" -ge 2 ]
            then
                case "$2" in
                    -*)
                        ;;
                    *)
                        MAX_JOBS=$2
                        shift
                        ;;
                esac
            fi
            ;;
        -e|--errors)
            LOGONLYERRORS="1"
            ;;
        -l|--logfile)
            if [ "$#" -ge 2 ]
            then
                case "$2" in
                    -*)
                        _exit_err "Missing log file"
                        ;;
                    *)
                        LOGFILE="$2"
                        shift
                        ;;
                esac
            else
                _exit_err "Missing log file"
            fi
            ;;
        -s|--syslog)
            SYSLOG="1"
            ;;
        -v|--verbose)
            set -x
            ;;
        -V|--version)
            display_version
            ;;
        --)
            # ignore the -- itself
            shift
            break
            ;;
        -h|--help|-*)
            usage
            ;;
        *)
            break
            ;;
    esac
    shift
done

# determine _techo version and logging level after parsing options
if [ "${LOGFILE}" ]
then
    _techof="_techo_file"
    LOGLEVEL="a"
elif _is_interactive
then
    if [ "${SYSLOG}" ]
    then
        _techof="_techo_syslog"
        LOGLEVEL="a"
    else
        _techof="_techo_stdout"
        LOGLEVEL="e"
    fi
else
    _techof="_techo_syslog"
    LOGLEVEL="a"
fi

if [ "${LOGFILE}" ] || [ "${SYSLOG}" ]
then
    if [ "${LOGONLYERRORS}" ]
    then
        LOGLEVEL="e"
    fi
fi

# check that MAX_JOBS is a natural number > 0
# empty string means run all in parallel
if ! echo "${MAX_JOBS}" | grep -q -E '^[1-9][0-9]*$|^$'
then
    _exit_err "Invalid max jobs value \"${MAX_JOBS}\""
fi

#
# Setup interval
#
if [ $# -ge 1 ]; then
    export INTERVAL="$1"
    shift
else
    usage
fi

#
# Check for configuration directory
#
[ -d "${CCOLLECT_CONF}" ] || _exit_err "No configuration found in " \
    "\"${CCOLLECT_CONF}\" (is \$CCOLLECT_CONF properly set?)"

#
# Create (portable!) source "array"
#
export no_sources=0

if [ "${USE_ALL}" = 1 ]; then
    #
    # Get sources from source configuration
    #
    ( cd "${CSOURCES}" && ls -1 > "${TMP}" ) || \
        _exit_err "Listing of sources failed. Aborting."

    while read -r tmp; do
        eval export "source_${no_sources}=\"${tmp}\""
        no_sources=$((no_sources + 1))
    done < "${TMP}"
else
    #
    # Get sources from command line
    #
    while [ "$#" -ge 1 ]; do
        eval "arg=\"\$1\""
        shift

        # arg is assigned in the eval above
        # shellcheck disable=SC2154
        eval export "source_${no_sources}=\"${arg}\""
        no_sources="$((no_sources + 1))"
    done
fi

#
# Need at least ONE source to backup
#
if [ "${no_sources}" -lt 1 ]; then
    usage
else
    _techo "${HALF_VERSION}: Beginning backup using interval ${INTERVAL}"
fi

#
# Look for pre-exec command (general)
#
if [ -x "${CPREEXEC}" ]; then
    _techo "Executing ${CPREEXEC} ..."
    "${CPREEXEC}"; ret=$?
    _techo "Finished ${CPREEXEC} (return code: ${ret})."

    [ "${ret}" -eq 0 ] || _exit_err "${CPREEXEC} failed. Aborting"
fi

################################################################################
#
# Let's do the backup - here begins the real stuff
#

# in PARALLEL mode:
# * create control pipe
# * determine number of jobs to start at once
if [ "${PARALLEL}" ]; then
    mkfifo "${CONTROL_PIPE}"
    # fd 5 is tied to control pipe
    eval "exec 5<>'${CONTROL_PIPE}'"
    TRAPFUNC="${TRAPFUNC}; rm -f \"${CONTROL_PIPE}\""
    # shellcheck disable=SC2064
    trap "${TRAPFUNC}" 0 1 2 15

    # determine how many parallel jobs to prestart
    if [ "${MAX_JOBS}" ]
    then
        if [ "${MAX_JOBS}" -le "${no_sources}" ]
        then
            prestart="${MAX_JOBS}"
        else
            prestart="${no_sources}"
        fi
    else
        prestart=0
    fi
fi
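The control pipe set up above acts as a counting semaphore: up to MAX_JOBS children are prestarted, each writes a newline token back into the pipe when it finishes, and the main loop reads one token before launching the next child. A condensed standalone sketch of that mechanism (pipe path and job list are illustrative):

```shell
#!/bin/sh
# FIFO as a counting semaphore for job throttling.
set -e
pipe="$(mktemp -u)"
mkfifo "${pipe}"
exec 5<>"${pipe}"      # fd 5 keeps the pipe open both ways
rm -f "${pipe}"

max=2
i=0
while [ "${i}" -lt "${max}" ]; do
    printf '\n' >&5    # one token per allowed concurrent job
    i=$((i + 1))
done

for job in a b c d; do
    read -r _ <&5                       # block until a slot is free
    { sleep 0; printf '\n' >&5; } &     # job body returns its token
done
wait
echo "all jobs done"
```

Opening the FIFO read-write on one descriptor avoids the usual open-blocks-until-peer behaviour of FIFOs, so the tokens can be written before any reader exists.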

source_no=0
while [ "${source_no}" -lt "${no_sources}" ]; do
    #
    # Get current source
    #
    eval export name=\"\$source_${source_no}\"
    source_no=$((source_no + 1))

    #
    # Start ourself, if we want parallel execution
    #
    if [ "${PARALLEL}" ]; then
        if [ ! "${MAX_JOBS}" ]
        then
            # run all in parallel
            "$0" "${INTERVAL}" "${name}" &
            continue
        elif [ "${prestart}" -gt 0 ]
        then
            # run prestart child if pending
            { "$0" "${INTERVAL}" "${name}"; printf '\n' >&5; } &
            prestart=$((prestart - 1))
            continue
        else
            # each time a child finishes we get a line from the pipe
            # and then launch another child
            while read -r line
            do
                { "$0" "${INTERVAL}" "${name}"; printf '\n' >&5; } &
                # get out of this loop so we can continue with the main loop
                # for the next source
                break
            done <&5
            continue
        fi
    fi

    #
    # Start subshell for easy log editing
    #
    (
        backup="${CSOURCES}/${name}"
        c_source="${backup}/source"
        c_dest="${backup}/destination"
        c_pre_exec="${backup}/pre_exec"
        c_post_exec="${backup}/post_exec"

        #
        # Stderr to stdout, so we can produce nice logs
        #
        exec 2>&1

        #
        # Record start of backup: internal and for the user
        #
        begin_s="$(${SDATE})"
        _techo "Beginning to backup"

        #
        # Standard configuration checks
        #
        if [ ! -e "${backup}" ]; then
            _exit_err "Source \"${backup}\" does not exist."
        fi

        #
        # Configuration _must_ be a directory (cconfig style)
        #
        if [ ! -d "${backup}" ]; then
            _exit_err "\"${backup}\" is not a cconfig-directory. Skipping."
        fi

        #
        # Acquire lock for source. If the lock cannot be acquired, lock will
        # exit with an error message.
        #
        lock "${name}"

        # redefine trap to also unlock (rm lockfile)
        TRAPFUNC="${TRAPFUNC}; unlock \"${name}\""
        # shellcheck disable=SC2064
        trap "${TRAPFUNC}" 1 2 15

        #
        # First execute pre_exec, which may generate destination or other parameters
        #
        if [ -x "${c_pre_exec}" ]; then
            _techo "Executing ${c_pre_exec} ..."
            "${c_pre_exec}"; ret="$?"
            _techo "Finished ${c_pre_exec} (return code ${ret})."

            [ "${ret}" -eq 0 ] || _exit_err "${c_pre_exec} failed. Skipping."
        fi

        #
        # Read source configuration
        #
        for opt in verbose very_verbose summary exclude rsync_options \
                delete_incomplete rsync_failure_codes \
                mtime quiet_if_down ; do
            if [ -f "${backup}/${opt}" ] || [ -f "${backup}/no_${opt}" ]; then
                eval "c_$opt=\"${backup}/$opt\""
            else
                eval "c_$opt=\"${CDEFAULTS}/$opt\""
            fi
        done

        #
        # Interval definition: First try source specific, fallback to default
        #
        c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"

        if [ -z "${c_interval}" ]; then
            c_interval="$(cat "${CDEFAULTS}/intervals/${INTERVAL}" 2>/dev/null)"

            if [ -z "${c_interval}" ]; then
                _exit_err "No definition for interval \"${INTERVAL}\" found. Skipping."
            fi
        fi

        #
        # Sort by ctime (default) or mtime (configuration option)
        #
        # variable is assigned using eval
        # shellcheck disable=SC2154
        if [ -f "${c_mtime}" ] ; then
            TSORT="t"
        else
            TSORT="tc"
        fi

        #
        # Source configuration checks
        #
        if [ ! -f "${c_source}" ]; then
            _exit_err "Source description \"${c_source}\" is not a file. Skipping."
        else
            source=$(cat "${c_source}"); ret="$?"
            if [ "${ret}" -ne 0 ]; then
                _exit_err "Source ${c_source} is not readable. Skipping."
            fi
        fi

        #
        # Destination is a path
        #
        if [ ! -f "${c_dest}" ]; then
            _exit_err "Destination ${c_dest} is not a file. Skipping."
        else
            ddir="$(cat "${c_dest}")"; ret="$?"
            if [ "${ret}" -ne 0 ]; then
                _exit_err "Destination ${c_dest} is not readable. Skipping."
            fi
        fi

        #
        # Parameters: ccollect defaults, configuration options, user options
        #

        #
        # Rsync standard options (archive will be added after is-up-check)
        #
        set -- "$@" "--delete" "--numeric-ids" "--relative" \
            "--delete-excluded" "--sparse"

        #
        # Exclude list
        #
        # variable is assigned using eval
        # shellcheck disable=SC2154
        if [ -f "${c_exclude}" ]; then
            set -- "$@" "--exclude-from=${c_exclude}"
        fi

        #
        # Output a summary
        #
        # variable is assigned using eval
        # shellcheck disable=SC2154
        if [ -f "${c_summary}" ]; then
            set -- "$@" "--stats"
        fi

        #
        # Verbosity for rsync, rm, and mkdir
        #
        VVERBOSE=""
        # variable is assigned using eval
        # shellcheck disable=SC2154
        if [ -f "${c_very_verbose}" ]; then
            set -- "$@" "-vv"
            VVERBOSE="-v"
        elif [ -f "${c_verbose}" ]; then
            set -- "$@" "-v"
        fi

        #
        # Extra options for rsync provided by the user
        #
        # variable is assigned using eval
        # shellcheck disable=SC2154
        if [ -f "${c_rsync_options}" ]; then
            while read -r line; do
                # Trim line.
                ln=$(echo "${line}" | awk '{$1=$1;print;}')
                # Only if ln is a non-zero-length string.
                #
                # If ln is empty then rsync '' DEST evaluates
                # to transferring the current directory to DEST, which
                # with specific options would destroy DEST content.
                if [ -n "${ln}" ]
                then
                    set -- "$@" "${ln}"
                fi
            done < "${c_rsync_options}"
        fi

        #
        # Check: source is up and accepting connections (before deleting old backups!)
        #
        if ! rsync "$@" "${source}" >/dev/null 2>"${TMP}" ; then
            # variable is assigned using eval
            # shellcheck disable=SC2154
            if [ ! -f "${c_quiet_if_down}" ]; then
                cat "${TMP}"
            fi
            _exit_err "Source ${source} is not readable. Skipping."
        fi

        #
        # Add --archive for real backup (looks nice in front)
        #
        set -- "--archive" "$@"

        #
        # Check: destination exists?
        #
        cd "${ddir}" || _exit_err "Cannot change to ${ddir}. Skipping."

        #
        # Check incomplete backups (needs echo to remove newlines)
        #
        # shellcheck disable=SC2010
        ls -1 | grep "${CMARKER}\$" > "${TMP}"; ret=$?

        if [ "$ret" -eq 0 ]; then
            _techo "Incomplete backups: $(cat "${TMP}")"
            # variable is assigned using eval
            # shellcheck disable=SC2154
            if [ -f "${c_delete_incomplete}" ]; then
                delete_from_file "${TMP}" "${CMARKER}" &
            fi
        fi

        #
        # Include current time in name, not the time when we began to remove above
        #
        destination_name="${INTERVAL}.$(${CDATE}).$$-${source_no}"
        export destination_name
        destination_dir="${ddir}/${destination_name}"
        export destination_dir

        #
        # Check: maximum number of backups is reached?
        #
        # shellcheck disable=SC2010
        count="$(ls -1 | grep -c "^${INTERVAL}\\.")"

        _techo "Existing backups: ${count} Total keeping backups: ${c_interval}"

        if [ "${count}" -ge "${c_interval}" ]; then
            # Use the oldest directory as the new backup destination directory.
            # It need not be deleted, rsync will sync its content.
            # shellcheck disable=SC2010
            oldest_bak=$(ls -${TSORT}1r | grep "^${INTERVAL}\\." | head -n 1 || \
                _exit_err "Listing oldest backup failed")
            _techo "Using ${oldest_bak} for destination dir ${destination_dir}"
            if mv "${oldest_bak}" "${destination_dir}" 2>/dev/null; then
                # Touch dest dir so it is not sorted wrong in listings below.
                ls_rm_exclude=$(basename "${destination_dir}")

                # We have something to remove only if count > interval.
                remove="$((count - c_interval))"
            else
                _techo_err "Renaming oldest backup ${oldest_bak} to ${destination_dir} failed, removing it."
                remove="$((count - c_interval + 1))"
                ls_rm_exclude=""
            fi
            if [ "${remove}" -gt 0 ]; then
                _techo "Removing ${remove} backup(s)..."

                if [ -z "${ls_rm_exclude}" -o ${c_interval} -eq 0 ]; then
                    # shellcheck disable=SC2010
                    ls -${TSORT}1r | grep "^${INTERVAL}\\." | head -n "${remove}" > "${TMP}" || \
                        _exit_err "Listing old backups failed"
                else
                    # shellcheck disable=SC2010
                    ls -${TSORT}1r | grep -v "${ls_rm_exclude}" | grep "^${INTERVAL}\\." | head -n "${remove}" > "${TMP}" || \
                        _exit_err "Listing old backups failed"
                fi

                delete_from_file "${TMP}" &
            fi
        fi

        #
        # Skip backup of this source if interval is zero.
        #
        if [ ${c_interval} -eq 0 ]; then
            _techo "Skipping backup for this interval."
            exit 0
        fi

        #
        # Check for backup directory to clone from: Always clone from the latest one!
        # Exclude destination_dir from the listing; it may be the touched, reused
        # and renamed oldest existing destination directory.
        #
        dest_dir_name=$(basename "${destination_dir}")
        # shellcheck disable=SC2010
        last_dir="$(ls -${TSORT}p1 | grep '/$' | grep -v "${dest_dir_name}" | head -n 1)" || \
            _exit_err "Failed to list contents of ${ddir}."

        #
        # Clone from old backup, if existing
        #
        if [ "${last_dir}" ]; then
            set -- "$@" "--link-dest=${ddir}/${last_dir}"
            _techo "Hard linking from ${last_dir}"
        fi

        #
        # Mark backup running and go back to original directory
        #
        touch "${destination_dir}${CMARKER}"
        cd "${__abs_mydir}" || _exit_err "Cannot go back to ${__abs_mydir}."

        #
        # the rsync part
        #
        _techo "Transferring files..."
        rsync "$@" "${source}" "${destination_dir}"; ret=$?
        _techo "Finished backup (rsync return code: $ret)."

        #
        # export rsync return code, might be useful in post_exec
        #
        export rsync_return_code=$ret

        #
        # Set modification time (mtime) to current time, if sorting by mtime is enabled
        #
        [ -f "$c_mtime" ] && touch "${destination_dir}"

        #
        # Check if rsync exit code indicates failure.
        #
        fail=""
        # variable is assigned using eval
        # shellcheck disable=SC2154
        if [ -f "$c_rsync_failure_codes" ]; then
            while read -r code ; do
                if [ "$ret" = "$code" ]; then
                    fail=1
                fi
            done <"${c_rsync_failure_codes}"
        fi

        #
        # Remove marking here unless rsync failed.
        #
        if [ -z "$fail" ]; then
            rm "${destination_dir}${CMARKER}" || \
                _exit_err "Removing ${destination_dir}${CMARKER} failed."
            if [ "${ret}" -ne 0 ]; then
                _techo "Warning: rsync exited non-zero, the backup may be broken (see rsync errors)."
            fi
        else
            _techo "Warning: rsync failed with return code $ret."
        fi

        #
        # Create symlink to newest backup
        #
        # shellcheck disable=SC2010
        latest_dir="$(ls -${TSORT}p1 "${ddir}" | grep '/$' | head -n 1)" || \
            _exit_err "Failed to list content of ${ddir}."

        ln -snf "${ddir}/${latest_dir}" "${ddir}/current" || \
            _exit_err "Failed to create 'current' symlink."

        #
        # post_exec
        #
        if [ -x "${c_post_exec}" ]; then
            _techo "Executing ${c_post_exec} ..."
            "${c_post_exec}"; ret=$?
            _techo "Finished ${c_post_exec}."

            if [ "${ret}" -ne 0 ]; then
                _exit_err "${c_post_exec} failed."
            fi
        fi

        #
        # Time calculation
        #
        end_s="$(${SDATE})"
        full_seconds="$((end_s - begin_s))"
        hours="$((full_seconds / 3600))"
        minutes="$(((full_seconds % 3600) / 60))"
        seconds="$((full_seconds % 60))"

        _techo "Backup lasted: ${hours}:${minutes}:${seconds} (h:m:s)"
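The elapsed-time split above is plain integer arithmetic on epoch seconds. A worked instance with a fixed input, to make the divisor/modulo steps concrete:

```shell
#!/bin/sh
# Split 3725 elapsed seconds (1h 2m 5s) into h:m:s, as the script does.
full_seconds=3725
hours="$((full_seconds / 3600))"
minutes="$(((full_seconds % 3600) / 60))"
seconds="$((full_seconds % 60))"
echo "${hours}:${minutes}:${seconds}"   # → 1:2:5
```

The fields are not zero-padded, so short components print as single digits, matching the log line above.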

        unlock "${name}"

        # wait for children (doing delete_from_file) if any still running
        wait
    ) || exit
done

#
# Be a good parent and wait for our children, if they are running wild in parallel.
# After all children are finished, remove the control pipe.
#
if [ "${PARALLEL}" ]; then
    _techo "Waiting for children to complete..."
    wait
    rm -f "${CONTROL_PIPE}"
fi

#
# Look for post-exec command (general)
#
if [ -x "${CPOSTEXEC}" ]; then
    _techo "Executing ${CPOSTEXEC} ..."
    "${CPOSTEXEC}"; ret=$?
    _techo "Finished ${CPOSTEXEC} (return code: ${ret})."

    if [ "${ret}" -ne 0 ]; then
        _techo "${CPOSTEXEC} failed."
    fi
fi

rm -f "${TMP}"
_techo "Finished"
ccollect.sh
@@ -1,415 +0,0 @@
#!/bin/sh
# Nico Schottelius
# written for SyGroup (www.sygroup.ch)
# Date: Mon Nov 14 11:45:11 CET 2005
# Last Modified: (See ls -l or git)

#
# where to find our configuration and temporary file
#
CCOLLECT_CONF=${CCOLLECT_CONF:-/etc/ccollect}
CSOURCES=$CCOLLECT_CONF/sources
CDEFAULTS=$CCOLLECT_CONF/defaults
CPREEXEC="$CDEFAULTS/pre_exec"
CPOSTEXEC="$CDEFAULTS/post_exec"

TMP=$(mktemp /tmp/$(basename $0).XXXXXX)
VERSION=0.3.2
RELEASE="2006-CHANGE-IT-THIS-TIME-NICO"
HALF_VERSION="ccollect $VERSION"
FULL_VERSION="ccollect $VERSION ($RELEASE)"

#
# unset parallel execution
#
PARALLEL=""


#
# catch signals
#
trap "rm -f \"$TMP\"" 1 2 15


add_name()
{
    sed "s/^/\[$name\] /"
}

#
# Tell how to use us
#
usage()
{
    echo "$(basename $0): <intervall name> [args] <sources to backup>"
    echo ""
    echo "   ccollect creates (pseudo) incremental backups"
    echo ""
    echo "   -h, --help:      Show this help screen"
    echo "   -p, --parallel:  Parallelise backup processes"
    echo "   -a, --all:       Backup all sources specified in $CSOURCES"
    echo "   -v, --verbose:   Be very verbose (uses set -x)."
    echo ""
    echo ""
    echo "   On 2005-12-05 ccollect was written by Nico Schottelius."
    echo ""
    echo "   This is version $VERSION, released at ${RELEASE}."
    echo ""
    echo "   Retrieve latest ccollect at http://linux.schottelius.org/ccollect/"
    exit 0
}

#
# need at least intervall and one source or --all
#
if [ $# -lt 2 ]; then
    usage
fi

#
# check for configuraton directory
#
if [ ! -d "$CCOLLECT_CONF" ]; then
    echo "No configuration found in \"$CCOLLECT_CONF\"" \
        " (set \$CCOLLECT_CONF corectly?)"
    exit 1
fi

#
# Filter arguments
#
INTERVALL=$1; shift
i=1
no_shares=0

while [ $i -le $# ]; do
    eval arg=\$$i

    if [ "$NO_MORE_ARGS" = 1 ]; then
        eval share_${no_shares}=\"$arg\"
        no_shares=$(($no_shares+1))
    else
        case $arg in
            -a|--all)
                ALL=1
                ;;
            -v|--verbose)
                VERBOSE=1
                ;;
            -p|--parallel)
                PARALLEL="1"
                ;;
            -h|--help)
                usage
                ;;
            --)
                NO_MORE_ARGS=1
                ;;
            *)
                eval share_${no_shares}=\"$arg\"
                no_shares=$(($no_shares+1))
                ;;
        esac
    fi

    i=$(($i+1))
done

#
# be really really really verbose
#
if [ "$VERBOSE" = 1 ]; then
    set -x
fi

#
# Look, if we should take ALL sources
#
if [ "$ALL" = 1 ]; then
    # reset everything specified before
    no_shares=0

    #
    # get entries from sources
    #
    cwd=$(pwd)
    cd "$CSOURCES";
    ls > "$TMP"

    while read tmp; do
        eval share_${no_shares}=\"$tmp\"
        no_shares=$(($no_shares+1))
    done < "$TMP"
fi

#
# Need at least ONE source to backup
#
if [ "$no_shares" -lt 1 ]; then
    usage
else
    echo "==> $HALF_VERSION: Beginning backup using intervall $INTERVALL <=="
fi

#
# check default configuration
#

D_FILE_INTERVALL="$CDEFAULTS/intervalls/$INTERVALL"
D_INTERVALL=$(cat $D_FILE_INTERVALL 2>/dev/null)

#
# Look for pre-exec command (general)
#
if [ -x "$CPREEXEC" ]; then
    echo "Executing $CPREEXEC ..."
    "$CPREEXEC"
    echo "Finished ${CPREEXEC}."
fi

#
# Let's do the backup
#
i=0
while [ "$i" -lt "$no_shares" ]; do

    #
    # Get current share
    #
    eval name=\$share_${i}
    i=$(($i+1))

    export name

    #
    # start ourself, if we want parallel execution
    #
    if [ "$PARALLEL" ]; then
        $0 "$INTERVALL" "$name" &
        continue
    fi

    #
    # Start subshell for easy log editing
    #
    (
        #
        # Stderr to stdout, so we can produce nice logs
        #
        exec 2>&1

        #
        # Standard locations
        #
        backup="$CSOURCES/$name"
        c_source="$backup/source"
        c_dest="$backup/destination"
        c_exclude="$backup/exclude"
        c_verbose="$backup/verbose"
        c_vverbose="$backup/very_verbose"
        c_rsync_extra="$backup/rsync_options"
        c_summary="$backup/summary"

        c_pre_exec="$backup/pre_exec"
        c_post_exec="$backup/post_exec"

        begin=$(date)
        begin_s=$(date +%s)

        echo "$begin Beginning to backup"

        #
        # Standard configuration checks
        #
        if [ ! -e "$backup" ]; then
            echo "Source does not exist."
            exit 1
        fi
        if [ ! -d "$backup" ]; then
            echo "\"$name\" is not a cconfig-directory. Skipping."
            exit 1
        fi

        #
        # intervall definition: First try source specific, fallback to default
        #
        c_intervall="$(cat "$backup/intervalls/$INTERVALL" 2>/dev/null)"

        if [ -z "$c_intervall" ]; then
            c_intervall=$D_INTERVALL

            if [ -z "$c_intervall" ]; then
                echo "Default and source specific intervall missing. Skipping."
                exit 1
            fi
        fi

        #
        # unset possible options
        #
        EXCLUDE=""
        RSYNC_EXTRA=""
        SUMMARY=""
        VERBOSE=""
        VVERBOSE=""

        #
        # next configuration checks
        #
        if [ ! -f "$c_source" ]; then
            echo "Source description $c_source is not a file. Skipping."
            exit 1
        else
            source=$(cat "$c_source")
            if [ $? -ne 0 ]; then
                echo "Skipping: Source $c_source is not readable"
                exit 1
            fi
        fi

        if [ ! -d "$c_dest" ]; then
            echo "Destination $c_dest does not link to a directory. Skipping"
            exit 1
        fi

        #
        # pre_exec
        #
        if [ -x "$c_pre_exec" ]; then
            echo "Executing $c_pre_exec ..."
            $c_pre_exec
            echo "Finished ${c_pre_exec}."
        fi

        # exclude
        if [ -f "$c_exclude" ]; then
            EXCLUDE="--exclude-from=$c_exclude"
        fi

        # extra options for rsync
        if [ -f "$c_rsync_extra" ]; then
            RSYNC_EXTRA="$(cat "$c_rsync_extra")"
        fi

        # verbosity for rsync
        if [ -f "$c_verbose" ]; then
            VERBOSE="-v"
        fi

        # Output a summary
        if [ -f "$c_summary" ]; then
            SUMMARY="--stats"
        fi

        # MORE verbosity, includes standard verbosity
        if [ -f "$c_vverbose" ]; then
            VERBOSE="-v"
            VVERBOSE="-v"
        fi

        #
        # check if maximum number of backups is reached, if so remove
        #

        # the created directories are named $INTERVALL.$DA
        count=$(ls -d "$c_dest/${INTERVALL}."?* 2>/dev/null | wc -l)
        echo "Currently $count backup(s) exist, total keeping $c_intervall backup(s)."

        if [ "$count" -ge "$c_intervall" ]; then
            substract=$(echo $c_intervall - 1 | bc)
            remove=$(echo $count - $substract | bc)
            echo "Removing $remove backup(s)..."

            ls -d "$c_dest/${INTERVALL}."?* | sort -n | head -n $remove > "$TMP"
            while read to_remove; do
                dir="$to_remove"
                echo "Removing $dir ..."
                rm $VVERBOSE -rf "$dir"
            done < "$TMP"
        fi

        #
        # clone the old directory with hardlinks
        #

        destination_date=$(date +%Y-%m-%d-%H:%M)
        destination_dir="$c_dest/${INTERVALL}.${destination_date}.$$"

        last_dir=$(ls -d "$c_dest/${INTERVALL}."?* 2>/dev/null | sort -n | tail -n 1)

        # give some info
        echo "Beginning to backup, this may take some time..."

        # only copy if a directory exists
        if [ "$last_dir" ]; then
            echo "Hard linking..."
            cp -al $VVERBOSE "$last_dir" "$destination_dir"
        else
            echo "Creating $destination_dir"
            mkdir $VVERBOSE "$destination_dir"
        fi

        if [ $? -ne 0 ]; then
            echo "Creating/cloning backup directory failed. Skipping backup."
            exit 1
        fi

        #
        # the rsync part
        # options partly stolen from rsnapshot
        #

        echo "Transferring files..."

        rsync -a $VERBOSE $RSYNC_EXTRA $EXCLUDE $SUMMARY \
            --delete --numeric-ids --relative --delete-excluded \
            "$source" "$destination_dir"

        if [ "$?" -ne 0 ]; then
|
||||
echo "rsync reported an error. The backup may be broken (see rsync errors)"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
echo "$(date) Successfully finished backup"
|
||||
|
||||
#
|
||||
# post_exec
|
||||
#
|
||||
if [ -x "$c_post_exec" ]; then
|
||||
echo "$(date) Executing $c_post_exec ..."
|
||||
"$c_post_exec"
|
||||
echo "$(date) Finished ${c_post_exec}."
|
||||
fi
|
||||
|
||||
end_s=$(date +%s)
|
||||
|
||||
full_seconds=$(echo "$end_s - $begin_s" | bc -l)
|
||||
hours=$(echo $full_seconds / 3600 | bc)
|
||||
seconds=$(echo "$full_seconds - ($hours * 3600)" | bc)
|
||||
minutes=$(echo $seconds / 60 | bc)
|
||||
seconds=$(echo "$seconds - ($minutes * 60)" | bc)
|
||||
|
||||
echo "Backup lasted: ${hours}:$minutes:$seconds (h:m:s)"
|
||||
|
||||
) | add_name
|
||||
done
|
||||
|
||||
#
|
||||
# Be a good parent and wait for our children, if they are running wild parallel
|
||||
#
|
||||
if [ "$PARALLEL" ]; then
|
||||
echo "Waiting for child jobs to complete..."
|
||||
wait
|
||||
fi
|
||||
|
||||
#
|
||||
# Look for post-exec command (general)
|
||||
#
|
||||
if [ -x "$CPOSTEXEC" ]; then
|
||||
echo "Executing $CPOSTEXEC ..."
|
||||
"$CPOSTEXEC"
|
||||
echo "Finished ${CPOSTEXEC}."
|
||||
fi
|
||||
|
||||
rm -f "$TMP"
|
||||
echo "==> Finished $WE <=="
|
|
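The core trick of the backup step above is to clone the newest existing snapshot with `cp -al` so that every unchanged file costs only a hardlink, and then let `rsync --delete` bring the clone up to date. The cloning step can be isolated into a few lines; this is a sketch under names of my own choosing (`clone_last_snapshot` is not part of ccollect), and like ccollect it assumes GNU `cp`, which supports `-l`:

```shell
# Sketch of the snapshot-cloning step: hardlink-copy the newest existing
# snapshot if there is one, otherwise start from an empty directory.
# (Illustrative only; the function name is not from ccollect itself.)
clone_last_snapshot() {
    dest_root=$1    # directory holding all snapshots
    new_snap=$2     # name of the snapshot to create
    # newest snapshot sorts last, as with the $INTERVALL.$date naming above
    last=$(ls -d "$dest_root"/*/ 2>/dev/null | sort | tail -n 1)
    if [ -n "$last" ]; then
        cp -al "$last" "$dest_root/$new_snap"   # files share inodes with $last
    else
        mkdir "$dest_root/$new_snap"
    fi
}
```

After the clone, a `rsync -a --delete source/ "$dest_root/$new_snap"` would rewrite only the changed files; everything untouched keeps sharing its inode with the previous snapshot, which is why a "full" backup costs little extra space.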
@ -0,0 +1,3 @@
This is my personal test configuration.
It may serve as an example for you, although it is pretty
unsorted and highly chaotic.
@ -0,0 +1 @@
25
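The `25` above is an intervall file: it tells ccollect to keep at most 25 backups of that interval. The pruning code in ccollect.sh computes the number of old backups to delete as count - (intervall - 1), shelling out to `bc`; the same arithmetic can be done with POSIX shell arithmetic alone. A minimal sketch (the function name is my own, not from the repository):

```shell
# How many old backups must go so that, after the new backup is created,
# exactly $2 backups remain on disk. Mirrors ccollect's count-(intervall-1).
backups_to_remove() {
    count=$1        # backups currently on disk
    intervall=$2    # backups to keep (e.g. the 25 above)
    remove=$(( count - (intervall - 1) ))
    # never negative: nothing to remove while still under the limit
    [ "$remove" -lt 0 ] && remove=0
    echo "$remove"
}

backups_to_remove 27 25   # prints 3: removing 3 leaves 24, plus the new one = 25
```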
@ -1,5 +1,5 @@
#!/bin/cat

######################################################################
If you see this content, post_exec was executed.
If you see this content, post_exec was executed. (general post_exec)
######################################################################
@ -1,3 +1,4 @@
#!/bin/cat

If you see this content, pre_exec was executed.
(general pre_exec, not source dependent)
@ -0,0 +1 @@
/test

@ -0,0 +1 @@
:/

@ -0,0 +1 @@
root@

@ -1 +0,0 @@
/home/nico/backupdir

@ -1 +0,0 @@
/home/nico/vpn

@ -0,0 +1 @@
/tmp/ccollect

@ -0,0 +1 @@
.git

@ -0,0 +1 @@
/home/users/nico/bin

@ -0,0 +1 @@
This is based on a production example I use for my notebook.

@ -0,0 +1 @@
/tmp/ccollect

@ -0,0 +1 @@
/home/server/raid

@ -0,0 +1 @@
localhost:/home/users/nico/bin

@ -0,0 +1 @@
/tmp/ccollect

@ -0,0 +1 @@
.git

@ -0,0 +1 @@
/home/users/nico/bin

@ -0,0 +1 @@
/tmp/ccollect

@ -0,0 +1 @@
.git

@ -0,0 +1 @@
/home/users/nico/bin

@ -0,0 +1 @@
/tmp/ccollect

@ -0,0 +1 @@
30

@ -0,0 +1 @@
/home/users/nico/bin

@ -0,0 +1 @@
.git

@ -0,0 +1 @@
/home/users/nico/bin

@ -1 +0,0 @@
manage

@ -1 +0,0 @@
nico@creme.schottelius.org:bin

@ -1 +0,0 @@
/home/nico/backupdir/testsource1

@ -1 +0,0 @@
/home/nico/bilder

@ -1 +0,0 @@
/home/nico/backupdir

@ -1,3 +0,0 @@
openvpn-2.0.1.tar.gz
nicht_reinnehmen
etwas mit leerzeichenli

@ -1 +0,0 @@
20

@ -1 +0,0 @@
/home/nico/vpn

@ -0,0 +1 @@
This is based on a production example I use for my notebook.

@ -0,0 +1 @@
/tmp/ccollect

@ -0,0 +1 @@
/home/server/raid

@ -0,0 +1 @@
/home/users/nico/bin

@ -1 +0,0 @@
/home/nico/backupdir/vpn

@ -1 +0,0 @@
/home/nico/vpn/

@ -1 +0,0 @@
/tmp

@ -0,0 +1 @@
/tmp/ccollect

@ -1 +1 @@
/home/user/nico/oeffentlich/computer/projekte/ccollect-0.3/
/home/users/nico/bin

@ -0,0 +1,3 @@
This directory contains patches or programs contributed by others
which are either not yet integrated into ccollect or may be kept
separated generally.
@ -0,0 +1,79 @@
Summary: (pseudo) incremental backup with different exclude lists using hardlinks and rsync
Name: ccollect
Version: 2.3
Release: 0
URL: http://www.nico.schottelius.org/software/ccollect
Source0: http://www.nico.schottelius.org/software/ccollect/%{name}-%{version}.tar.bz2

License: GPL-3
Group: Applications/System
Vendor: Nico Schottelius <nico-ccollect@schottelius.org>
BuildRoot: %{_tmppath}/%{name}-%(id -un)
BuildArch: noarch
Requires: rsync

%description
ccollect backs up data from local and remote hosts to your local hard disk.
Although ccollect creates full backups, it requires very little space on the backup medium, because ccollect uses hardlinks to create an initial copy of the last backup.
Only the inodes used by the hardlinks and the changed files need additional space.

%prep
%setup -q

%install
rm -rf $RPM_BUILD_ROOT

# Installing main ccollect and /etc directory
%__install -d 755 %buildroot%_bindir
%__install -d 755 %buildroot%_sysconfdir/%name
%__install -m 755 ccollect %buildroot%_bindir/

# bin files from tools directory
for t in $(ls tools/ccollect_*) ; do
    %__install -m 755 ${t} %buildroot%_bindir/
done

# Configuration examples and docs
%__install -d 755 %buildroot%_datadir/doc/%name-%version/examples

%__install -m 644 README %buildroot%_datadir/doc/%name-%version
%__install -m 644 COPYING %buildroot%_datadir/doc/%name-%version
%__install -m 644 CREDITS %buildroot%_datadir/doc/%name-%version
%__install -m 644 conf/README %buildroot%_datadir/doc/%name-%version/examples
%__cp -pr conf/defaults %buildroot%_datadir/doc/%name-%version/examples/
%__cp -pr conf/sources %buildroot%_datadir/doc/%name-%version/examples/

# Additional documentation and some config tools
%__install -d 755 %buildroot%_datadir/%name/tools
%__install -m 755 tools/called_from_remote_pre_exec %buildroot%_datadir/%name/tools
%__cp -pr tools/config-pre-* %buildroot%_datadir/%name/tools
%__install -m 755 tools/report_success %buildroot%_datadir/%name/tools

%clean
rm -rf $RPM_BUILD_ROOT

%files
%defattr(-,root,root)
%_bindir/ccollect*
%_datadir/doc/%name-%version
%_datadir/%name/tools
%docdir %_datadir/doc/%name-%version
%dir %_sysconfdir/%name

%changelog
* Thu Aug 20 2009 Nico Schottelius <nico-ccollect@schottelius.org> 0.8
- Introduce consistent time sorting (John Lawless)
- Check for source connectivity before trying backup (John Lawless)
- Defensive programming patch (John Lawless)
- Some code cleanups (argument parsing, usage) (Nico Schottelius)
- Only consider directories as sources when using -a (Nico Schottelius)
- Fix general parsing problem with -a (Nico Schottelius)
- Fix potential bug when using remote_host, delete_incomplete and ssh (Nico Schottelius)
- Improve removal performance: minimised number of 'rm' calls (Nico Schottelius)
- Support sorting by mtime (John Lawless)
- Improve option handling (John Lawless)
- Add support for quiet operation for dead devices (quiet_if_down) (John Lawless)
- Add smart option parsing, including support for default values (John Lawless)
- Updated and cleaned up documentation (Nico Schottelius)
- Fixed bug "removal of current directory" in ccollect_delete_source.sh (Found by Günter Stöhr, fixed by Nico Schottelius)
@ -0,0 +1,47 @@
[Almost complete copy of an e-mail from Patrick Drolet]

Hello again,

I have created a script to better manage the backups, since my
upload/download ratio and my bandwidth are limited by my ISP, and my hard
disk space is also somewhat limited. The script is called
"ccollect_mgr.sh".

It provides the following features:

1) Determine the interval (daily/weekly/monthly)

   a. Define when you want weekly and monthly backups. It takes care of
      the rest

2) Perform the backups using ccollect

3) Copy the ccollect log output to the first backup of the set

   a. Keeping the detailed log of each backup is always handy!

4) Build a periodic report and include the real amount of disk used

   a. Computes the real amount of disk used (eg: no double counting of
      hard links)

   b. Shows the actual amount of data transferred

5) Send an email if there have been errors or warnings

6) Send a periodic email to show transfer size, real backup size, etc.

   a. Weekly reports are nice!

[...]

- rdu (real du), which computes the real amount of disk used (no
  double/triple counting hard links), same code as in ccollect_mgr.sh.

- S60ccollect_example, an example script to put in etc/init.d to
  add ccollect_mgr to the crontab
@ -0,0 +1,21 @@
#!/bin/sh

# Standard Linux: put in /etc/init.d
# Busybox: put in /opt/etc/init.d

# Add ccollect_mgr job to crontab
# Syntax reminder from crontab:
# minute        0-59
# hour          0-23
# day of month  1-31
# month         1-12 (or names, see below)
# day of week   0-7 (0 or 7 is Sun, or use names)

crontab -l | grep -v ccollect_mgr > /tmp/crontab.tmp

# Backup every day at 1 am.
echo "00 01 * * * /usr/local/sbin/ccollect_mgr.sh -from nas@myemail.net -to me@myemail.net -server relay_or_smtp_server NAS > /usr/local/var/log/ccollect.cron &" >> /tmp/crontab.tmp

crontab /tmp/crontab.tmp
rm /tmp/crontab.tmp
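The `crontab -l | grep -v … ; echo … ; crontab` dance above is the usual idempotent way to update a crontab: filtering on a marker string first means running the script twice never produces duplicate entries. The pattern in isolation, applied to a plain file so it can be exercised without touching a real crontab (the function name is my own, not from the repository):

```shell
# Replace-or-append a line identified by a marker string, so repeated runs
# leave exactly one copy -- the same pattern the init script uses on crontab.
set_marked_line() {
    file=$1; marker=$2; line=$3
    tmp="${file}.tmp.$$"
    # grep -v exits non-zero when every line matches; that is fine here
    grep -v "$marker" "$file" > "$tmp" 2>/dev/null || true
    echo "$line" >> "$tmp"
    mv "$tmp" "$file"
}
```

For the real thing you would feed it `crontab -l` output and pipe the result back into `crontab`, exactly as the init script does.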
@ -0,0 +1,542 @@
#!/bin/sh
#
# ----------------------------------------------------------------------------
# Last update: 2009-12-11
# By : pdrolet (ccollect_mgr@drolet.name)
# ----------------------------------------------------------------------------
# Job manager to the ccollect utilities
# (ccollect written by Nico Schottelius)
#
# Provides the following features
# 1) Determine the interval (daily/weekly/monthly)
# 2) Check the estimated file transfer size
# 3) Perform the backups using ccollect
# 4) Copy the ccollect log to the first backup of the set
# 5) Build a periodic report and include the real amount of disk used
# 6) Send an email if there have been errors or warnings
# 7) Send a periodic email to show transfer size, real backup size, etc
# ----------------------------------------------------------------------------
#
# This script was written primarily to gain better visibility of backups in
# an environment where data transfer is limited and so is bandwidth
# (eg: going through an ISP). The primary targets of this script were a
# DNS323 and a QNAP T209 (eg: Busybox devices and not standard Linux devices)
# but it should run on any POSIX compliant device.
#
# Note: This is one of my first scripts in over a decade... don't use this as a
# reference (but take a look at ccollect.sh... very well written!)
# ----------------------------------------------------------------------------
#
# -------------------------------------------
# TO MAKE THIS SCRIPT RUN ON A BUSYBOX DEVICE
# -------------------------------------------
# - You may need to install Optware and the following packages:
#   - findutils (to get a find utility which supports printf)
#   - procps (to get a ps utility that is standard)
#   - mini-sendmail (this is what I used to send emails... you could easily
#     modify this to use sendmail, mutt, putmail, etc...).
# - On DNS323 only: Your Busybox is very limited. For details, see
#   http://wiki.dns323.info/howto:ffp#shells. You need to redirect /bin/sh
#   to the Busybox provided with ffp (Fun Plug). To do this, type:
#   ln -fs /ffp/bin/sh /bin/sh
#
# --------------------------------------------------
# TO MAKE THIS SCRIPT RUN ON A STANDARD LINUX DEVICE
# --------------------------------------------------
# - You will need to install mini_sendmail or rewrite the send_email routine.
#
# ----------------------------------------------------------------------------

# Send warning if the worst case data transfer will be larger than (in MB)...
warning_transfer_size=1024
abort_transfer_size=5120

# Define paths and default file names
ADD_TO_PATH="/opt/bin:/opt/sbin:/usr/local/bin:/usr/local/sbin"
CCOLLECT="ccollect.sh"
CCOLLECT_CONF="/usr/local/etc/ccollect"

PS="/opt/bin/ps"
FIND="/opt/bin/find"

TEMP_LOG="${CCOLLECT_CONF}"/log.$$
per_report="${CCOLLECT_CONF}/periodic_report.log"
tmp_report="/tmp/ccollect.$$"
tmp_mgr="/tmp/ccollect_mgr.$$"
tmp_email="/tmp/email.$$"

backups_not_found=""

# Sub routines...

send_email()
{
    # Send a simple email using mini-sendmail.

    msg_body_file="$1"
    shift

    # ------------------------------
    # Quit if we can't send an email
    # ------------------------------
    if [ "${to}" = "" ] || [ "${mail_server}" = "" ]; then
        echo "Missing mail server or destination email. No email sent with subject: $@"
        exit 1
    fi

    echo from: "${from}" > "${tmp_email}"
    echo subject: "$@" >> "${tmp_email}"
    echo to: "${to}" >> "${tmp_email}"
    echo cc: >> "${tmp_email}"
    echo bcc: >> "${tmp_email}"
    echo "" >> "${tmp_email}"
    echo "" >> "${tmp_email}"
    cat "${msg_body_file}" >> "${tmp_email}"
    echo "" >> "${tmp_email}"

    echo ""
    echo Sending email to ${to} to report the following:
    echo -----------------------------------------------
    cat "${tmp_email}"
    cat "${tmp_email}" | mini_sendmail -f"${from}" -s"${mail_server}" "${to}"
    rm "${tmp_email}"
}

remove_source()
{
    remove_no=$1
    eval echo Removing backup \"\$source_$1\"

    no_sources="$(( ${no_sources} - 1 ))"
    while [ "${remove_no}" -lt "${no_sources}" ]; do
        eval source_${remove_no}=\"\$source_$(( ${remove_no} + 1))\"
        eval ddir_${remove_no}=\"\$ddir_$(( ${remove_no} + 1))\"
        remove_no=$(( ${remove_no} + 1 ))
    done
}

compute_rdu()
{
    kdivider=1
    find_options=""

    while [ "$#" -ge 1 ]; do
        case "$1" in
            -m)
                kdivider=1024
                ;;
            -g)
                kdivider=1048576
                ;;
            *)
                break
                ;;
        esac
        shift
    done

    if [ "$#" -eq 0 ]; then
        rdu=0
        return 1
    fi

    # ------------------------------------------------------------------------------------------------------
    # Compute the real disk usage (eg: hard links to files outside the backup set don't count)
    # ------------------------------------------------------------------------------------------------------
    # 1) Find selected files and list link count, inodes, file type and size
    # 2) Sort (sorts on inodes since link count is constant per inode)
    # 3) Merge duplicates using uniq
    #    (result is occurrence count, link count, inode, file type and size)
    # 4) Use awk to sum up the file size of each inode when the occurrence count
    #    and link count are the same. Use %k for size since awk's printf is 32 bits
    # 5) Present the result with additional dividers based on command line parameters
    #

    rdu=$(( ( `"${FIND}" "$@" -printf '%n %i %y %k \n' \
        | sort -n \
        | uniq -c \
        | awk '{ if (( $1 == $2 ) || ($4 == "d")) { sum += $5; } } END { printf "%u\n",(sum); }'` \
        + ${kdivider} - 1 ) / ${kdivider} ))
}

check_running_backups()
{
    # Check if a backup is already ongoing. If so, skip and send email
    # Don't use the ccollect marker, as it does not indicate whether the backup is still running

    source_no=0
    while [ "${source_no}" -lt "${no_sources}" ]; do
        eval backup=\"\$source_${source_no}\"

        PID=$$
        "${PS}" -e -o pid,ppid,args 2> /dev/null \
            | grep -v -e grep -e "${PID}.*ccollect.*${backup}" \
            | grep "ccollect.*${backup}" > "${tmp_mgr}" 2> /dev/null
        running_proc=`cat "${tmp_mgr}" | wc -l`

        if [ ${running_proc} -gt 0 ]; then
            # Remove backup from list
            running_backups="${running_backups}${backup} "

            echo "Process already running:"
            cat "${tmp_mgr}"

            remove_source ${source_no}
        else
            source_no=$(( ${source_no} + 1 ))
        fi
        rm "${tmp_mgr}"
    done

    if [ "${running_backups}" != "" ]; then
        echo "skipping ccollect backups already running: ${running_backups}" | tee "${tmp_report}"
        send_email "${tmp_report}" "WARNING - skipping ccollect backups already running: ${running_backups}"
        rm "${tmp_report}"
    fi
}

find_interval()
{
    # ----------------------------------------------------
    # Find interval for ccollect backup.
    # optional parameters:
    #  - Day of the week to do weekly backups
    #  - Do monthly instead of weekly on the Nth week
    # ----------------------------------------------------

    weekly_backup="$1"
    monthly_backup="$2"

    weekday=`date "+%w"`
    if [ ${weekday} -eq ${weekly_backup} ]; then
        dom=`date "+%e"`
        weeknum=$(( ( ${dom} / 7 ) + 1 ))
        if [ "${weeknum}" -eq "${monthly_backup}" ]; then
            interval=monthly
        else
            interval=weekly
        fi
    else
        interval=daily
    fi
}

precheck_transfer_size()
{
    # Check the estimated (worst case) transfer size and send email if larger than a certain size
    # Abort backup if total transfer is larger than the maximum limit (ex: an error somewhere
    # requires a full backup instead of an incremental one, which could blow the quota with the ISP)
    #
    # Be nice and add error checking one day...

    source_no=0
    while [ "${source_no}" -lt "${no_sources}" ]; do
        eval backup=\"\$source_${source_no}\"
        eval ddir=\"\$ddir_${source_no}\"

        last_dir="$(ls -tcp1 "${ddir}" | grep '/$' | head -n 1)"
        sdir="$(cat "${CCOLLECT_CONF}"/sources/"${backup}"/source)"; ret="$?"
        if [ -f "${CCOLLECT_CONF}"/sources/"${backup}"/exclude ]; then
            exclude="--exclude-from=${CCOLLECT_CONF}/sources/${backup}/exclude";
        else
            exclude=""
        fi
        rsync_options=""
        if [ -f "${CCOLLECT_CONF}"/sources/"${backup}"/rsync_options ]; then
            while read line; do
                rsync_options="${rsync_options} ${line}"
            done < ${CCOLLECT_CONF}/sources/${backup}/rsync_options
        fi

        rsync -n -a --delete --stats ${rsync_options} "${exclude}" "${sdir}" "${ddir}/${last_dir}" > "${tmp_report}"

        tx_rx=`cat "${tmp_report}" | grep "Total transferred file size" | \
            awk '{ { tx += $5 } } END { printf "%u",(((tx)+1024*1024-1)/1024/1024); }'`
        total_xfer=$(( ${total_xfer} + ${tx_rx} ))

        source_no=$(( ${source_no} + 1 ))
    done

    echo "Transfer estimation for${ccollect_backups}: ${total_xfer} MB"

    if [ ${total_xfer} -gt ${abort_transfer_size} ]; then
        # --------------------------------------------------
        # Send an error if transfer is larger than max limit
        # --------------------------------------------------
        # Useful to detect potential issues when there is a transfer quota (ex: with ISP)

        echo "Data transfer larger than ${abort_transfer_size} MB is expected for${ccollect_backups}" >> "${tmp_report}"
        echo "** BACKUP ABORTED **" >> "${tmp_report}"

        send_email "${tmp_report}" "ERROR: aborted ccollect for${ccollect_backups} -- Estimated Tx+Rx: ${total_xfer} MB"
        rm "${tmp_report}"
        exit 1
    elif [ ${total_xfer} -gt ${warning_transfer_size} ]; then
        # --------------------------------------------------
        # Send a warning if transfer is expected to be large
        # --------------------------------------------------
        # Useful to detect potential issues when there is a transfer quota (ex: with ISP)

        echo "Data transfer larger than ${warning_transfer_size} MB is expected for${ccollect_backups}" > "${tmp_report}"

        send_email "${tmp_report}" "WARNING ccollect for${ccollect_backups} -- Estimated Tx+Rx: ${total_xfer} MB"
        rm "${tmp_report}"
    fi
}

build_backup_dir_list()
{
    source_no=0
    while [ "${source_no}" -lt "${no_sources}" ]; do
        eval backup=\"\$source_${source_no}\"
        eval ddir=\"\$ddir_${source_no}\"

        backup_dir="`cat "${TEMP_LOG}" \
            | grep "\[${backup}\] .*: Creating.* ${ddir}" \
            | head -n 1 \
            | sed 's/[^\/]*\//\//; s/ \.\.\.//'`"

        if [ ! -d "${backup_dir}" ]; then
            backups_not_found="${backups_not_found}\"${backup}\" "
            echo -n "Backup directory for \"${backup}\" not found. "
            remove_source "${source_no}"
        else
            eval export backup_dir_list_${source_no}="${backup_dir}"
            # eval echo Backup Dir List: \"\$backup_dir_list_${source_no}\"
            source_no=$(( ${source_no} + 1 ))
        fi
    done
}

move_log()
{
    if [ "${no_sources}" -gt 0 ]; then
        eval log_file=\"\$backup_dir_list_0\"/ccollect.log
        mv "${TEMP_LOG}" "${log_file}"
        echo New Log Location: "${log_file}"
    else
        echo "WARNING: none of the backup sets has been created"
        log_file="${TEMP_LOG}"
    fi
}

send_report()
{
    # Analyze log for periodic report and for error status report
    cat "${log_file}" | ccollect_analyse_logs.sh iwe > "${tmp_report}"

    # -------------------------
    # Build the periodic report
    # -------------------------
    # Compute the total number of MB sent and received for all the backup sets
    tx_rx=`cat "${tmp_report}" | \
        grep 'sent [[:digit:]]* bytes received [0-9]* bytes' | \
        awk '{ { tx += $3 } { rx += $6} } END \
        { printf "%u",(((tx+rx)+(1024*1024)-1)/1024/1024); }'`
    current_date=`date +'20%y/%m/%d %Hh%M -- '`

    # ------------------------------------------
    # Get the real disk usage for the backup set
    # ------------------------------------------
    total_rdu=0
    source_no=0
    while [ "${source_no}" -lt "${no_sources}" ]; do
        eval backup_dir=\"\$backup_dir_list_${source_no}\"
        compute_rdu -m "${backup_dir}"
        total_rdu=$(( ${total_rdu} + ${rdu} ))
        source_no=$(( ${source_no} + 1 ))
    done

    # ---------------------------------------------------------
    # Get the disk usage for all backups of each backup set...
    # ** BE PATIENT!!! **
    # ---------------------------------------------------------
    historical_rdu=0
    source_no=0
    while [ "${source_no}" -lt "${no_sources}" ]; do
        eval backup_dir=\"\$ddir_${source_no}\"
        compute_rdu -m "${backup_dir}"
        historical_rdu=$(( ${historical_rdu} + ${rdu} ))
        source_no=$(( ${source_no} + 1 ))
    done

    historical_rdu=$(( (${historical_rdu}+1023) / 1024 ))

    if [ "${no_sources}" -gt 0 ]; then
        ccollect_backups=""
    else
        ccollect_backups="(none performed) "
    fi

    source_no=0
    while [ "${source_no}" -lt "${no_sources}" ]; do
        eval backup=\"\$source_${source_no}\"
        ccollect_backups="${ccollect_backups}\"${backup}\" "
        source_no=$(( ${source_no} + 1 ))
    done

    echo ${current_date} Tx+Rx: ${tx_rx} MB -- \
        Disk Usage: ${total_rdu} MB -- \
        Backup set \(${interval}\):${ccollect_backups} -- \
        Historical backups usage: ${historical_rdu} GB >> "${per_report}"
    echo "Total Data Transfer: ${tx_rx} MB -- Total Disk Usage: ${total_rdu} MB -- Total Historical backups usage: ${historical_rdu} GB"

    # ----------------------------------------
    # Send a status email if there is an error
    # ----------------------------------------
    ccollect_we=`cat "${log_file}" | ccollect_analyse_logs.sh we | wc -l`
    if [ ${ccollect_we} -ge 1 ]; then
        send_email "${tmp_report}" "ERROR ccollect for${ccollect_backups} -- Tx+Rx: ${tx_rx} MB"
    fi

    # --------------------
    # Send periodic report
    # --------------------
    if [ ${report_interval} = ${interval} ] || [ ${interval} = "monthly" ]; then

        # Make reporting atomic to handle concurrent ccollect_mgr instances
        mv "${per_report}" "${per_report}".$$
        cat "${per_report}".$$ >> "${per_report}".history

        # Calculate total amount of bytes sent and received
        tx_rx=`cat "${per_report}".$$ | \
            awk '{ { transfer += $5 } } END \
            { printf "%u",(transfer); }'`

        # Send email
        send_email "${per_report}.$$" "${report_interval} ccollect status for${ccollect_backups} -- Tx+Rx: ${tx_rx} MB"
        rm "${per_report}.$$"
    fi

    rm "${tmp_report}"
}

# ------------------------------------------------
# Add to PATH in case we're launching from crontab
# ------------------------------------------------

PATH="${ADD_TO_PATH}:${PATH}"

# --------------
# Default Values
# --------------

# Set on which interval status emails are sent (daily, weekly, monthly)
report_interval=weekly

# Set day of the week for weekly backups. Default is Monday
# 0=Sun, 1=Mon, 2=Tue, 3=Wed, 4=Thu, 5=Fri, 6=Sat
weekly_backup=1

# Set the monthly backup interval. Default is the 4th Monday of every month
monthly_backup=4

# ---------------------------------
# Parse command line
# ---------------------------------

show_help=0
export no_sources=0

while [ "$#" -ge 1 ]; do
    case "$1" in
        -help)
            show_help=1
            ;;
        -from)
            from="$2"
            shift
            ;;
        -to)
            to="$2"
            shift
            ;;
        -server|mail_server)
            mail_server="$2"
            shift
            ;;
        -weekly)
            weekly_backup="$2"
            shift
            ;;
        -monthly)
            monthly_backup="$2"
            shift
            ;;
        -warning_size)
            warning_transfer_size="$2"
            shift
            ;;
        -abort_size)
            abort_transfer_size="$2"
            shift
            ;;
        -report)
            report_interval="$2"
            shift
            ;;
        -*)
            ccollect_options="${ccollect_options}$1 "
            ;;
        daily|weekly|monthly)
            ;;
        *)
            eval backup=\"\$1\"
            ddir="$(cat "${CCOLLECT_CONF}"/sources/"${backup}"/destination)"; ret="$?"
            if [ "${ret}" -ne 0 ]; then
                echo "Destination ${CCOLLECT_CONF}/sources/${backup}/destination is not readable... Skipping."
            else
                ccollect_backups="${ccollect_backups} \"$1\""
                eval export source_${no_sources}=\"\$1\"
                eval export ddir_${no_sources}="${ddir}"
                # eval echo Adding source \"\$source_${no_sources}\" -- \"\$ddir_${no_sources}\"
                no_sources="$(( ${no_sources} + 1 ))"
            fi
            ;;
    esac
    shift
done

if [ "${no_sources}" -lt 1 ] || [ ${show_help} -eq 1 ]; then
    echo ""
    echo "$0: Syntax"
    echo "  -help                This help"
    echo "  -from <email>        From email address (ex.: -from nas@home.com)"
    echo "  -to <email>          Send email to this address (ex.: -to me@home.com)"
    echo "  -server <smtp_addr>  SMTP server used for sending emails"
    echo "  -weekly <day#>       Define which day of the week is the weekly backup"
    echo "                       Default is ${weekly_backup}. Sunday = 0, Saturday = 6"
    echo "  -monthly <week#>     Define on which week # is the monthly backup"
    echo "                       Default is ${monthly_backup}. Value = 1 to 5"
    echo "  -report <interval>   Frequency of report email (daily, weekly or monthly)"
    echo "                       Default is ${report_interval}"
    echo "  -warning_size <MB>   Send a warning email if the transfer size exceeds this"
    echo "                       Default is ${warning_transfer_size} MB"
    echo "  -abort_size <MB>     Abort and send an error email if the transfer size exceeds this"
    echo "                       Default is ${abort_transfer_size} MB"
    echo ""
    echo "  other parameters are transferred to ccollect"
    echo ""
    exit 0
fi

#echo Backup sets:"${ccollect_backups}"
check_running_backups

if [ "${no_sources}" -lt 1 ]; then
    echo "No backup sets are reachable"
    exit 1
fi

find_interval ${weekly_backup} ${monthly_backup}
echo Interval: ${interval}

precheck_transfer_size

"${CCOLLECT}" ${ccollect_options} ${interval} ${ccollect_backups} | tee "${TEMP_LOG}"

build_backup_dir_list
move_log

send_report
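The `find | sort | uniq -c | awk` pipeline inside `compute_rdu` is worth seeing in isolation: an inode is charged only when every one of its links lies inside the scanned tree (the occurrence count from `uniq -c` equals the link count from `%n`), so hardlinked snapshots are not double-counted. A self-contained check of that property (GNU find with `-printf` assumed, as in the script itself; the wrapper name is mine):

```shell
# Real disk usage in KB of a tree, counting each in-tree inode once --
# the same pipeline ccollect_mgr.sh/rdu use. Adding a second hard link
# to a file inside the tree must therefore not change the result.
real_usage_kb() {
    find "$1" -printf '%n %i %y %k \n' \
        | sort -n \
        | uniq -c \
        | awk '{ if (($1 == $2) || ($4 == "d")) sum += $5 } END { printf "%u\n", sum }'
}

tmp=$(mktemp -d)
echo "some data" > "$tmp/a"
before=$(real_usage_kb "$tmp")
ln "$tmp/a" "$tmp/b"          # second link, same inode, no new data
after=$(real_usage_kb "$tmp")
echo "$before $after"
rm -rf "$tmp"
```

A plain `du`-style sum would count the file twice after the `ln`; here `before` and `after` are equal, which is exactly why the periodic report can show the "real" size of hardlink-based backup sets.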
@ -0,0 +1,65 @@
#!/bin/sh
#
# -------------------------------------------------------------
# Get the real disk usage for a group of selected files
#
# This script counts the size of the listed files and
# directories, but excludes files that have hard links referenced
# outside the list.
#
# The underlying objective of this script is to report the
# real amount of disk used by backup solutions that make heavy
# use of hard links to save disk space on identical files (I use
# ccollect, but this likely works with rsnapshot too)
# -------------------------------------------------------------
# 20091002 - initial release - pdrolet (rdu@drolet.name)

# --------------------
# Parse options
# --------------------
# Known problem:
# - Command line cannot take a directory with a space in it
#
kdivider=1
find_options=""
while [ "$#" -ge 1 ]; do
case "$1" in
-m)
kdivider=1024
;;
-g)
kdivider=1048576
;;
-h|--help)
echo
echo $0: \<directories\> \[options below and any \"find\" options\]
echo \ \ -m: result in megabytes \(rounded up\)
echo \ \ -g: result in gigabytes \(rounded up\)
echo \ \ -h: this help
echo
exit 0
;;
*)
find_options="${find_options} $1"
;;
esac
shift
done

# ------------------------------------------------------------------------------------------------------
# Compute the size
# ------------------------------------------------------------------------------------------------------
# 1) Find selected files and list link count, inode, file type and size
# 2) Sort (sorts on inodes, since the link count is constant per inode)
# 3) Merge duplicates using uniq
# (result is occurrence count, link count, inode, file type and size)
# 4) Use awk to sum up the file size of each inode when the occurrence count
# and link count are the same. Use %k for size since awk's printf is 32 bits
# 5) Present the result with additional dividers based on command line parameters
#
echo $((( `find ${find_options} -printf '%n %i %y %k \n' \
| sort -n \
| uniq -c \
| awk '{ if (( $1 == $2 ) || ($4 == "d")) { sum += $5; } } END { printf "%u\n",(sum); }'` \
+ ${kdivider} -1 ) / ${kdivider} ))
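The pipeline above can be hard to follow inline. A standalone sketch of the same counting idea (the helper names `real_usage_kib` and `kib_to_mib` are illustrative, not part of this repository): an inode is billed once, and only when every one of its hard links lies inside the scanned tree; directories are always billed; the final `$(( (x + k - 1) / k ))` is the usual round-up division.

```shell
# Illustrative helpers restating the script's core pipeline (GNU find assumed).
real_usage_kib() {
    # %n = link count, %i = inode, %y = type, %k = size in KiB
    find "$1" -printf '%n %i %y %k \n' \
        | sort -n \
        | uniq -c \
        | awk '{ if (($1 == $2) || ($4 == "d")) { sum += $5 } }
               END { printf "%u\n", sum }'
}

# Round KiB up to MiB, the same way the script applies kdivider:
kib_to_mib() { echo $(( ($1 + 1023) / 1024 )); }
```

With two hard links to one file inside the scanned directory, the file's size is counted exactly once; a third link living outside the directory would exclude it entirely.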
@ -0,0 +1 @@
/var/cache/apt/archives/*
@ -0,0 +1,22 @@
#!/bin/bash

function mkbackup {
find /etc/ccollect/logwrapper/destination -type f -atime +2 -exec sudo rm {} \;
/home/jcb/bm.pl &
}

mkdir -p /media/backupdisk
grep backupdisk /etc/mtab &> /dev/null

if [ $? -eq 0 ]
then
mkbackup
else
mount /media/backupdisk
if [ $? -eq 0 ]
then
mkbackup
else
echo "Error mounting backup disk"
fi
fi
@ -0,0 +1,242 @@
#!/usr/bin/perl

###############################
#
# Jens-Christoph Brendel, 2009
# licensed under GPL3 NO WARRANTY
#
###############################

use Date::Calc qw(:all);
use strict;
use warnings;

#
#!!!!!!!!!!!!!!!!! you need to customize these settings !!!!!!!!!!!!!!!!!!!!
#
my $backupdir = "/media/backupdisk";
my $logwrapper = "/home/jcb/ccollect/tools/ccollect_logwrapper.sh";

#!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!

# +------------------------------------------------------------------------+
# |                                                                        |
# |                   V A R I A B L E S                                    |
# |                                                                        |
# +------------------------------------------------------------------------+
#

# get the current date
#
my ($sek, $min, $hour, $day, $month, $year) = localtime();

my $curr_year = $year + 1900;
my $curr_month = $month +1;
my ($curr_week,$cur_year) = Week_of_Year($curr_year,$curr_month,$day);

# initialize some variables
#
my %most_recent_daily = (
'age' => 9999,
'file' => ''
);

my %most_recent_weekly = (
'age' => 9999,
'file' => ''
);

my %most_recent_monthly = (
'age' => 9999,
'file' => ''
);

# prepare the output formatting
#
#---------------------------------------------------------------------------
my ($msg1, $msg2, $msg3, $msg4);

format =
@<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
$msg1
@<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< @<<<<<<<<<<<<<<<<<
$msg2, $msg3

@||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||||
$msg4
.

my @months = (' ','January', 'February', 'March', 'April',
'May', 'June', 'July', 'August',
'September', 'October', 'November',
'December');

# +------------------------------------------------------------------------+
# |                                                                        |
# |                   P r o c e d u r e s                                  |
# |                                                                        |
# +------------------------------------------------------------------------+
#

# PURPOSE: extract the date from the file name
# PARAMETER VALUE: file name
# RETURN VALUE: reference to a hash containing year, month, day
#
sub decodeDate {
my $file = shift;
$file =~ /^(daily|weekly|monthly)\.(\d+)-.*/;
my %date = (
'y' => substr($2,0,4),
'm' => substr($2,4,2),
'd' => substr($2,6,2)
);
return \%date;
}

# PURPOSE: calculate the file age in days
# PARAMETER VALUE: name of a ccollect backup file
# RETURN VALUE: age in days
#
sub AgeInDays {
my $file = shift;
my $date=decodeDate($file);
my $ageindays = Delta_Days($$date{'y'}, $$date{'m'}, $$date{'d'}, $curr_year, $curr_month, $day);
return $ageindays;
}

# PURPOSE: calculate the file age in number of weeks
# PARAMETER VALUE: name of a ccollect backup file
# RETURN VALUE: age in weeks
#
sub AgeInWeeks {
my($y,$m,$d);

my $file = shift;
my $date = decodeDate($file);
my ($weeknr,$yr) = Week_of_Year($$date{'y'}, $$date{'m'}, $$date{'d'});
my $ageinweeks = $curr_week - $weeknr;
return $ageinweeks;
}

# PURPOSE: calculate the file age in number of months
# PARAMETER VALUE: name of a ccollect backup file
# RETURN VALUE: age in months
#
sub AgeInMonths {
my $ageinmonths;
my $file = shift;
my $date = decodeDate($file);
if ($curr_year == $$date{'y'}) {
$ageinmonths = $curr_month - $$date{'m'};
} else {
$ageinmonths = $curr_month + (12-$$date{'m'}) + ($curr_year-$$date{'y'}-1)*12;
}
return $ageinmonths;
}

# +------------------------------------------------------------------------+
# |                                                                        |
# |                   M A I N                                              |
# |                                                                        |
# +------------------------------------------------------------------------+
#

#
# find the most recent daily, weekly and monthly backup file
#

opendir(DIRH, $backupdir) or die "Can't open $backupdir \n";

my @files = readdir(DIRH);

die "Target directory is empty \n" if ( $#files <= 1 );

foreach my $file (@files) {

next if $file eq "." or $file eq "..";

SWITCH: {
if ($file =~ /^daily/) {
my $curr_age=AgeInDays($file);
if ($curr_age<$most_recent_daily{'age'}) {
$most_recent_daily{'age'} =$curr_age;
$most_recent_daily{'file'}= $file;
}
last SWITCH;
}

if ($file =~ /^weekly/) {
my $curr_week_age = AgeInWeeks($file);
if ($curr_week_age<$most_recent_weekly{'age'}) {
$most_recent_weekly{'age'} =$curr_week_age;
$most_recent_weekly{'file'}=$file;
}
last SWITCH;
}

if ($file =~ /^monthly/) {
my $curr_month_age=AgeInMonths($file);
if ($curr_month_age < $most_recent_monthly{'age'}) {
$most_recent_monthly{'age'} =$curr_month_age;
$most_recent_monthly{'file'}=$file;
}
last SWITCH;
}
print "\n\n unknown file $file \n\n";
}
}

printf("\nBackup Manager started: %02u.%02u. %u, week %02u\n\n", $day, $curr_month, $curr_year, $curr_week);

#
# compare the most recent daily, weekly and monthly backup file
# and decide if it's necessary to start a new backup process in
# each category
#

if ($most_recent_monthly{'age'} == 0) {
$msg1="The most recent monthly backup";
$msg2="$most_recent_monthly{'file'} from $months[$curr_month - $most_recent_monthly{'age'}]";
$msg3="is still valid.";
$msg4="";
write;
} else {
$msg1="The most recent monthly backup";
$msg2="$most_recent_monthly{'file'} from $months[$curr_month - $most_recent_monthly{'age'}]";
$msg3="is out-dated.";
$msg4="Starting new monthly backup.";
write;
exec "sudo $logwrapper monthly FULL";
exit;
}

if ($most_recent_weekly{'age'} == 0) {
$msg1="The most recent weekly backup";
$msg2="$most_recent_weekly{'file'} from week nr: $curr_week-$most_recent_weekly{'age'}";
$msg3="is still valid.";
$msg4="";
write;
} else {
$msg1="The most recent weekly backup";
$msg2="$most_recent_weekly{'file'} from week nr: $curr_week-$most_recent_weekly{'age'}";
$msg3="is out-dated.";
$msg4="Starting new weekly backup.";
write;
exec "sudo $logwrapper weekly FULL";
exit;
}

if ($most_recent_daily{'age'} == 0 ) {
$msg1=" The most recent daily backup";
$msg2="$most_recent_daily{'file'}";
$msg3="is still valid.";
$msg4="";
write;
} else {
$msg1="The most recent daily backup";
$msg2="$most_recent_daily{'file'}";
$msg3="is out-dated.";
$msg4="Starting new daily backup.";
write;
exec "sudo $logwrapper daily FULL";
}
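decodeDate above relies on ccollect's snapshot naming scheme, `<interval>.<YYYYMMDD-HHMM>.<pid>`. For reference, the same extraction can be done in plain shell with parameter expansion (the sample name below is made up for illustration):

```shell
# Pull year/month/day out of a ccollect snapshot name (sample name is
# hypothetical, chosen only to demonstrate the pattern).
name="daily.20091002-1200.4242"
stamp=${name#*.}        # strip the interval prefix -> 20091002-1200.4242
stamp=${stamp%%-*}      # keep the date part        -> 20091002
y=${stamp%????}         # first four digits         -> 2009
md=${stamp#????}        # remaining four digits     -> 1002
m=${md%??}; d=${md#??}  # split month and day       -> 10 / 02
echo "$y $m $d"
```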
@ -0,0 +1,3 @@
- Line 126/127 (my $ageinmonths;) is duplicated; delete one of them.
- The very last line needs a closing curly brace
"}", which got lost somewhere.
@ -0,0 +1,15 @@
Hello Nico,

I have attached three more patches for ccollect. Each patch
has comments explaining its motivation.

All of these patches work for me (but I continue to test
them). I would be interested in your opinion on, for example, the
general approach used in i.patch, which changes the way options are
handled. I think it is a big improvement. If, however, you wanted
the code to go in a different direction, let me know before we
diverge too far.

Regards,

John
@ -0,0 +1,683 @@
|
|||
#!/bin/sh
|
||||
#
|
||||
# 2005-2009 Nico Schottelius (nico-ccollect at schottelius.org)
|
||||
#
|
||||
# This file is part of ccollect.
|
||||
#
|
||||
# ccollect is free software: you can redistribute it and/or modify
|
||||
# it under the terms of the GNU General Public License as published by
|
||||
# the Free Software Foundation, either version 3 of the License, or
|
||||
# (at your option) any later version.
|
||||
#
|
||||
# ccollect is distributed in the hope that it will be useful,
|
||||
# but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
# GNU General Public License for more details.
|
||||
#
|
||||
# You should have received a copy of the GNU General Public License
|
||||
# along with ccollect. If not, see <http://www.gnu.org/licenses/>.
|
||||
#
|
||||
# Initially written for SyGroup (www.sygroup.ch)
|
||||
# Date: Mon Nov 14 11:45:11 CET 2005
|
||||
|
||||
#
|
||||
# Standard variables (stolen from cconf)
|
||||
#
|
||||
__pwd="$(pwd -P)"
|
||||
__mydir="${0%/*}"; __abs_mydir="$(cd "$__mydir" && pwd -P)"
|
||||
__myname=${0##*/}; __abs_myname="$__abs_mydir/$__myname"
|
||||
|
||||
#
|
||||
# where to find our configuration and temporary file
|
||||
#
|
||||
CCOLLECT_CONF=${CCOLLECT_CONF:-/etc/ccollect}
|
||||
CSOURCES=${CCOLLECT_CONF}/sources
|
||||
CDEFAULTS=${CCOLLECT_CONF}/defaults
|
||||
CPREEXEC="${CDEFAULTS}/pre_exec"
|
||||
CPOSTEXEC="${CDEFAULTS}/post_exec"
|
||||
|
||||
TMP=$(mktemp "/tmp/${__myname}.XXXXXX")
|
||||
VERSION=0.7.1
|
||||
RELEASE="2009-02-02"
|
||||
HALF_VERSION="ccollect ${VERSION}"
|
||||
FULL_VERSION="ccollect ${VERSION} (${RELEASE})"
|
||||
|
||||
#TSORT="tc" ; NEWER="cnewer"
|
||||
TSORT="t" ; NEWER="newer"
|
||||
|
||||
#
|
||||
# CDATE: how we use it for naming of the archives
|
||||
# DDATE: how the user should see it in our output (DISPLAY)
|
||||
#
|
||||
CDATE="date +%Y%m%d-%H%M"
|
||||
DDATE="date +%Y-%m-%d-%H:%M:%S"
|
||||
|
||||
#
|
||||
# unset parallel execution
|
||||
#
|
||||
PARALLEL=""
|
||||
|
||||
#
|
||||
# catch signals
|
||||
#
|
||||
trap "rm -f \"${TMP}\"" 1 2 15
|
||||
|
||||
#
|
||||
# Functions
|
||||
#
|
||||
|
||||
# time displaying echo
|
||||
_techo()
|
||||
{
|
||||
echo "$(${DDATE}): $@"
|
||||
}
|
||||
|
||||
# exit on error
|
||||
_exit_err()
|
||||
{
|
||||
_techo "$@"
|
||||
rm -f "${TMP}"
|
||||
exit 1
|
||||
}
|
||||
|
||||
add_name()
|
||||
{
|
||||
awk "{ print \"[${name}] \" \$0 }"
|
||||
}
|
||||
|
||||
pcmd()
|
||||
{
|
||||
if [ "$remote_host" ]; then
|
||||
ssh "$remote_host" "$@"
|
||||
else
|
||||
"$@"
|
||||
fi
|
||||
}
|
||||
|
||||
#
|
||||
# Version
|
||||
#
|
||||
display_version()
|
||||
{
|
||||
echo "${FULL_VERSION}"
|
||||
exit 0
|
||||
}
|
||||
|
||||
#
|
||||
# Tell how to use us
|
||||
#
|
||||
usage()
|
||||
{
|
||||
echo "${__myname}: <interval name> [args] <sources to backup>"
|
||||
echo ""
|
||||
echo " ccollect creates (pseudo) incremental backups"
|
||||
echo ""
|
||||
echo " -h, --help: Show this help screen"
|
||||
echo " -p, --parallel: Parallelise backup processes"
|
||||
echo " -a, --all: Backup all sources specified in ${CSOURCES}"
|
||||
echo " -v, --verbose: Be very verbose (uses set -x)"
|
||||
echo " -V, --version: Print version information"
|
||||
echo ""
|
||||
echo " This is version ${VERSION}, released on ${RELEASE}"
|
||||
echo " (the first version was written on 2005-12-05 by Nico Schottelius)."
|
||||
echo ""
|
||||
echo " Retrieve latest ccollect at http://unix.schottelius.org/ccollect/"
|
||||
exit 0
|
||||
}
|
||||
|
||||
#
|
||||
# Select interval if AUTO
|
||||
#
|
||||
# For this to work nicely, you have to choose interval names that sort nicely
|
||||
# such as int1, int2, int3 or a_daily, b_weekly, c_monthly, etc.
|
||||
#
|
||||
auto_interval()
|
||||
{
|
||||
if [ -d "${backup}/intervals" -a -n "$(ls "${backup}/intervals" 2>/dev/null)" ] ; then
|
||||
intervals_dir="${backup}/intervals"
|
||||
elif [ -d "${CDEFAULTS}/intervals" -a -n "$(ls "${CDEFAULTS}/intervals" 2>/dev/null)" ] ; then
|
||||
intervals_dir="${CDEFAULTS}/intervals"
|
||||
else
|
||||
_exit_err "No intervals are defined. Skipping."
|
||||
fi
|
||||
echo intervals_dir=${intervals_dir}
|
||||
|
||||
trial_interval="$(ls -1r "${intervals_dir}/" | head -n 1)" || \
|
||||
_exit_err "Failed to list contents of ${intervals_dir}/."
|
||||
_techo "Considering interval ${trial_interval}"
|
||||
most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${trial_interval}.*/$" | head -n 1)" || \
|
||||
_exit_err "Failed to list contents of ${ddir}/."
|
||||
_techo " Most recent ${trial_interval}: '${most_recent}'"
|
||||
if [ -n "${most_recent}" ]; then
|
||||
no_intervals="$(ls -1 "${intervals_dir}/" | wc -l)"
|
||||
n=1
|
||||
while [ "${n}" -le "${no_intervals}" ]; do
|
||||
trial_interval="$(ls -p1 "${intervals_dir}/" | tail -n+${n} | head -n 1)"
|
||||
_techo "Considering interval '${trial_interval}'"
|
||||
c_interval="$(cat "${intervals_dir}/${trial_interval}" 2>/dev/null)"
|
||||
m=$((${n}+1))
|
||||
set -- "${ddir}" -maxdepth 1
|
||||
while [ "${m}" -le "${no_intervals}" ]; do
|
||||
interval_m="$(ls -1 "${intervals_dir}/" | tail -n+${m} | head -n 1)"
|
||||
most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${interval_m}\..*/$" | head -n 1)"
|
||||
_techo " Most recent ${interval_m}: '${most_recent}'"
|
||||
if [ -n "${most_recent}" ] ; then
|
||||
set -- "$@" -$NEWER "${ddir}/${most_recent}"
|
||||
fi
|
||||
m=$((${m}+1))
|
||||
done
|
||||
count=$(pcmd find "$@" -iname "${trial_interval}*" | wc -l)
|
||||
_techo " Found $count more recent backups of ${trial_interval} (limit: ${c_interval})"
|
||||
if [ "$count" -lt "${c_interval}" ] ; then
|
||||
break
|
||||
fi
|
||||
n=$((${n}+1))
|
||||
done
|
||||
fi
|
||||
export INTERVAL="${trial_interval}"
|
||||
D_FILE_INTERVAL="${intervals_dir}/${INTERVAL}"
|
||||
D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
|
||||
}
|
||||
|
||||
#
|
||||
# need at least interval and one source or --all
|
||||
#
|
||||
if [ $# -lt 2 ]; then
|
||||
if [ "$1" = "-V" -o "$1" = "--version" ]; then
|
||||
display_version
|
||||
else
|
||||
usage
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# check for configuration directory
|
||||
#
|
||||
[ -d "${CCOLLECT_CONF}" ] || _exit_err "No configuration found in " \
|
||||
"\"${CCOLLECT_CONF}\" (is \$CCOLLECT_CONF properly set?)"
|
||||
|
||||
#
|
||||
# Filter arguments
|
||||
#
|
||||
export INTERVAL="$1"; shift
|
||||
i=1
|
||||
no_sources=0
|
||||
|
||||
#
|
||||
# Create source "array"
|
||||
#
|
||||
while [ "$#" -ge 1 ]; do
|
||||
eval arg=\"\$1\"; shift
|
||||
|
||||
if [ "${NO_MORE_ARGS}" = 1 ]; then
|
||||
eval source_${no_sources}=\"${arg}\"
|
||||
no_sources=$((${no_sources}+1))
|
||||
|
||||
# make variable available for subscripts
|
||||
eval export source_${no_sources}
|
||||
else
|
||||
case "${arg}" in
|
||||
-a|--all)
|
||||
ALL=1
|
||||
;;
|
||||
-v|--verbose)
|
||||
VERBOSE=1
|
||||
;;
|
||||
-p|--parallel)
|
||||
PARALLEL=1
|
||||
;;
|
||||
-h|--help)
|
||||
usage
|
||||
;;
|
||||
--)
|
||||
NO_MORE_ARGS=1
|
||||
;;
|
||||
*)
|
||||
eval source_${no_sources}=\"$arg\"
|
||||
no_sources=$(($no_sources+1))
|
||||
;;
|
||||
esac
|
||||
fi
|
||||
|
||||
i=$(($i+1))
|
||||
done
|
||||
|
||||
# also export number of sources
|
||||
export no_sources
|
||||
|
||||
#
|
||||
# be really, really, really verbose
|
||||
#
|
||||
if [ "${VERBOSE}" = 1 ]; then
|
||||
set -x
|
||||
fi
|
||||
|
||||
#
|
||||
# Look, if we should take ALL sources
|
||||
#
|
||||
if [ "${ALL}" = 1 ]; then
|
||||
# reset everything specified before
|
||||
no_sources=0
|
||||
|
||||
#
|
||||
# get entries from sources
|
||||
#
|
||||
cwd=$(pwd -P)
|
||||
( cd "${CSOURCES}" && ls > "${TMP}" ); ret=$?
|
||||
|
||||
[ "${ret}" -eq 0 ] || _exit_err "Listing of sources failed. Aborting."
|
||||
|
||||
while read tmp; do
|
||||
eval source_${no_sources}=\"${tmp}\"
|
||||
no_sources=$((${no_sources}+1))
|
||||
done < "${TMP}"
|
||||
fi
|
||||
|
||||
#
|
||||
# Need at least ONE source to backup
|
||||
#
|
||||
if [ "${no_sources}" -lt 1 ]; then
|
||||
usage
|
||||
else
|
||||
_techo "${HALF_VERSION}: Beginning backup using interval ${INTERVAL}"
|
||||
fi
|
||||
|
||||
#
|
||||
# Look for pre-exec command (general)
|
||||
#
|
||||
if [ -x "${CPREEXEC}" ]; then
|
||||
_techo "Executing ${CPREEXEC} ..."
|
||||
"${CPREEXEC}"; ret=$?
|
||||
_techo "Finished ${CPREEXEC} (return code: ${ret})."
|
||||
|
||||
[ "${ret}" -eq 0 ] || _exit_err "${CPREEXEC} failed. Aborting"
|
||||
fi
|
||||
|
||||
#
|
||||
# check default configuration
|
||||
#
|
||||
|
||||
D_FILE_INTERVAL="${CDEFAULTS}/intervals/${INTERVAL}"
|
||||
D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
|
||||
|
||||
|
||||
#
|
||||
# Let's do the backup
|
||||
#
|
||||
i=0
|
||||
while [ "${i}" -lt "${no_sources}" ]; do
|
||||
|
||||
#
|
||||
# Get current source
|
||||
#
|
||||
eval name=\"\$source_${i}\"
|
||||
i=$((${i}+1))
|
||||
|
||||
export name
|
||||
|
||||
#
|
||||
# start ourself, if we want parallel execution
|
||||
#
|
||||
if [ "${PARALLEL}" ]; then
|
||||
"$0" "${INTERVAL}" "${name}" &
|
||||
continue
|
||||
fi
|
||||
|
||||
#
|
||||
# Start subshell for easy log editing
|
||||
#
|
||||
(
|
||||
#
|
||||
# Stderr to stdout, so we can produce nice logs
|
||||
#
|
||||
exec 2>&1
|
||||
|
||||
#
|
||||
# Configuration
|
||||
#
|
||||
backup="${CSOURCES}/${name}"
|
||||
c_source="${backup}/source"
|
||||
c_dest="${backup}/destination"
|
||||
c_exclude="${backup}/exclude"
|
||||
c_verbose="${backup}/verbose"
|
||||
c_vverbose="${backup}/very_verbose"
|
||||
c_rsync_extra="${backup}/rsync_options"
|
||||
c_summary="${backup}/summary"
|
||||
c_pre_exec="${backup}/pre_exec"
|
||||
c_post_exec="${backup}/post_exec"
|
||||
f_incomplete="delete_incomplete"
|
||||
c_incomplete="${backup}/${f_incomplete}"
|
||||
c_remote_host="${backup}/remote_host"
|
||||
|
||||
#
|
||||
# Marking backups: If we abort it's not removed => Backup is broken
|
||||
#
|
||||
c_marker=".ccollect-marker"
|
||||
|
||||
#
|
||||
# Times
|
||||
#
|
||||
begin_s=$(date +%s)
|
||||
|
||||
#
|
||||
# unset possible options
|
||||
#
|
||||
EXCLUDE=""
|
||||
RSYNC_EXTRA=""
|
||||
SUMMARY=""
|
||||
VERBOSE=""
|
||||
VVERBOSE=""
|
||||
DELETE_INCOMPLETE=""
|
||||
|
||||
_techo "Beginning to backup"
|
||||
|
||||
#
|
||||
# Standard configuration checks
|
||||
#
|
||||
if [ ! -e "${backup}" ]; then
|
||||
_exit_err "Source does not exist."
|
||||
fi
|
||||
|
||||
#
|
||||
# configuration _must_ be a directory
|
||||
#
|
||||
if [ ! -d "${backup}" ]; then
|
||||
_exit_err "\"${name}\" is not a cconfig-directory. Skipping."
|
||||
fi
|
||||
|
||||
#
|
||||
# first execute pre_exec, which may generate destination or other
|
||||
# parameters
|
||||
#
|
||||
if [ -x "${c_pre_exec}" ]; then
|
||||
_techo "Executing ${c_pre_exec} ..."
|
||||
"${c_pre_exec}"; ret="$?"
|
||||
_techo "Finished ${c_pre_exec} (return code ${ret})."
|
||||
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "${c_pre_exec} failed. Skipping."
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# Destination is a path
|
||||
#
|
||||
if [ ! -f "${c_dest}" ]; then
|
||||
_exit_err "Destination ${c_dest} is not a file. Skipping."
|
||||
else
|
||||
ddir=$(cat "${c_dest}"); ret="$?"
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "Destination ${c_dest} is not readable. Skipping."
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# interval definition: First try source specific, fallback to default
|
||||
#
|
||||
if [ "${INTERVAL}" = "AUTO" ] ; then
|
||||
auto_interval
|
||||
_techo "Selected interval: '$INTERVAL'"
|
||||
fi
|
||||
c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"
|
||||
|
||||
if [ -z "${c_interval}" ]; then
|
||||
c_interval="${D_INTERVAL}"
|
||||
|
||||
if [ -z "${c_interval}" ]; then
|
||||
_exit_err "No definition for interval \"${INTERVAL}\" found. Skipping."
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# Source checks
|
||||
#
|
||||
if [ ! -f "${c_source}" ]; then
|
||||
_exit_err "Source description \"${c_source}\" is not a file. Skipping."
|
||||
else
|
||||
source=$(cat "${c_source}"); ret="$?"
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "Source ${c_source} is not readable. Skipping."
|
||||
fi
|
||||
fi
|
||||
# Verify source is up and accepting connections before deleting any old backups
|
||||
rsync "$source" >/dev/null || _exit_err "Source ${source} is not readable. Skipping."
|
||||
|
||||
#
|
||||
# do we backup to a remote host? then set pre-cmd
|
||||
#
|
||||
if [ -f "${c_remote_host}" ]; then
|
||||
# adjust ls and co
|
||||
remote_host=$(cat "${c_remote_host}"); ret="$?"
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "Remote host file ${c_remote_host} exists, but is not readable. Skipping."
|
||||
fi
|
||||
destination="${remote_host}:${ddir}"
|
||||
else
|
||||
remote_host=""
|
||||
destination="${ddir}"
|
||||
fi
|
||||
export remote_host
|
||||
|
||||
#
|
||||
# check for existence / use real name
|
||||
#
|
||||
( pcmd cd "$ddir" ) || _exit_err "Cannot change to ${ddir}. Skipping."
|
||||
|
||||
|
||||
#
|
||||
# Check whether to delete incomplete backups
|
||||
#
|
||||
if [ -f "${c_incomplete}" -o -f "${CDEFAULTS}/${f_incomplete}" ]; then
|
||||
DELETE_INCOMPLETE="yes"
|
||||
fi
|
||||
|
||||
# NEW method as of 0.6:
|
||||
# - insert ccollect default parameters
|
||||
# - insert options
|
||||
# - insert user options
|
||||
|
||||
#
|
||||
# rsync standard options
|
||||
#
|
||||
|
||||
set -- "$@" "--archive" "--delete" "--numeric-ids" "--relative" \
|
||||
"--delete-excluded" "--sparse"
|
||||
|
||||
#
|
||||
# exclude list
|
||||
#
|
||||
if [ -f "${c_exclude}" ]; then
|
||||
set -- "$@" "--exclude-from=${c_exclude}"
|
||||
fi
|
||||
|
||||
#
|
||||
# Output a summary
|
||||
#
|
||||
if [ -f "${c_summary}" ]; then
|
||||
set -- "$@" "--stats"
|
||||
fi
|
||||
|
||||
#
|
||||
# Verbosity for rsync
|
||||
#
|
||||
if [ -f "${c_vverbose}" ]; then
|
||||
set -- "$@" "-vv"
|
||||
elif [ -f "${c_verbose}" ]; then
|
||||
set -- "$@" "-v"
|
||||
fi
|
||||
|
||||
#
|
||||
# extra options for rsync provided by the user
|
||||
#
|
||||
if [ -f "${c_rsync_extra}" ]; then
|
||||
while read line; do
|
||||
set -- "$@" "$line"
|
||||
done < "${c_rsync_extra}"
|
||||
fi
|
||||
|
||||
#
|
||||
# Check for incomplete backups
|
||||
#
|
||||
pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" > "${TMP}" 2>/dev/null
|
||||
|
||||
i=0
|
||||
while read incomplete; do
|
||||
eval incomplete_$i=\"$(echo ${incomplete} | sed "s/\\.${c_marker}\$//")\"
|
||||
i=$(($i+1))
|
||||
done < "${TMP}"
|
||||
|
||||
j=0
|
||||
while [ "$j" -lt "$i" ]; do
|
||||
eval realincomplete=\"\$incomplete_$j\"
|
||||
_techo "Incomplete backup: ${realincomplete}"
|
||||
if [ "${DELETE_INCOMPLETE}" = "yes" ]; then
|
||||
_techo "Deleting ${realincomplete} ..."
|
||||
pcmd rm $VVERBOSE -rf "${ddir}/${realincomplete}" || \
|
||||
_exit_err "Removing ${realincomplete} failed."
|
||||
fi
|
||||
j=$(($j+1))
|
||||
done
|
||||
|
||||
#
|
||||
# check if maximum number of backups is reached, if so remove
|
||||
# use grep and ls -p so we only look at directories
|
||||
#
|
||||
count="$(pcmd ls -p1 "${ddir}" | grep "^${INTERVAL}\..*/\$" | wc -l \
|
||||
| sed 's/^ *//g')" || _exit_err "Counting backups failed"
|
||||
|
||||
_techo "Existing backups: ${count} Total keeping backups: ${c_interval}"
|
||||
|
||||
if [ "${count}" -ge "${c_interval}" ]; then
|
||||
subtract=$((${c_interval} - 1))
remove=$((${count} - ${subtract}))
|
||||
_techo "Removing ${remove} backup(s)..."
|
||||
|
||||
pcmd ls -${TSORT}p1r "$ddir" | grep "^${INTERVAL}\..*/\$" | \
|
||||
head -n "${remove}" > "${TMP}" || \
|
||||
_exit_err "Listing old backups failed"
|
||||
|
||||
i=0
|
||||
while read to_remove; do
|
||||
eval remove_$i=\"${to_remove}\"
|
||||
i=$(($i+1))
|
||||
done < "${TMP}"
|
||||
|
||||
j=0
|
||||
while [ "$j" -lt "$i" ]; do
|
||||
eval to_remove=\"\$remove_$j\"
|
||||
_techo "Removing ${to_remove} ..."
|
||||
pcmd rm ${VVERBOSE} -rf "${ddir}/${to_remove}" || \
|
||||
_exit_err "Removing ${to_remove} failed."
|
||||
j=$(($j+1))
|
||||
done
|
||||
fi
|
||||
|
||||
|
||||
#
|
||||
# Check for backup directory to clone from: Always clone from the latest one!
|
||||
#
|
||||
# Depending on your file system, you may want to sort on:
|
||||
# 1. mtime (modification time) with TSORT=t, or
|
||||
# 2. ctime (last change time, usually) with TSORT=tc
|
||||
last_dir="$(pcmd ls -${TSORT}p1 "${ddir}" | grep '/$' | head -n 1)" || \
|
||||
_exit_err "Failed to list contents of ${ddir}."
|
||||
|
||||
#
|
||||
# clone from old backup, if existing
|
||||
#
|
||||
if [ "${last_dir}" ]; then
|
||||
set -- "$@" "--link-dest=${ddir}/${last_dir}"
|
||||
_techo "Hard linking from ${last_dir}"
|
||||
fi
|
||||
|
||||
|
||||
# set time when we really begin to backup, not when we began to remove above
|
||||
destination_date=$(${CDATE})
|
||||
destination_dir="${ddir}/${INTERVAL}.${destination_date}.$$"
|
||||
destination_full="${destination}/${INTERVAL}.${destination_date}.$$"
|
||||
|
||||
# give some info
|
||||
_techo "Beginning to backup, this may take some time..."
|
||||
|
||||
_techo "Creating ${destination_dir} ..."
|
||||
pcmd mkdir ${VVERBOSE} "${destination_dir}" || \
|
||||
_exit_err "Creating ${destination_dir} failed. Skipping."
|
||||
|
||||
#
|
||||
# added marking in 0.6 (and remove it, if successful later)
|
||||
#
|
||||
pcmd touch "${destination_dir}.${c_marker}"
|
||||
|
||||
#
|
||||
# the rsync part
|
||||
#
|
||||
_techo "Transferring files..."
|
||||
rsync "$@" "${source}" "${destination_full}"; ret=$?
|
||||
# Correct the modification time:
|
||||
pcmd touch "${destination_dir}"
|
||||
|
||||
#
|
||||
# remove marking here
|
||||
#
|
||||
if [ "$ret" -ne 12 ] ; then
|
||||
pcmd rm "${destination_dir}.${c_marker}" || \
|
||||
_exit_err "Removing ${destination_dir}/${c_marker} failed."
|
||||
fi
|
||||
|
||||
_techo "Finished backup (rsync return code: $ret)."
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_techo "Warning: rsync exited non-zero, the backup may be broken (see rsync errors)."
|
||||
fi
|
||||
|
||||
#
|
||||
# post_exec
|
||||
#
|
||||
if [ -x "${c_post_exec}" ]; then
|
||||
_techo "Executing ${c_post_exec} ..."
|
||||
"${c_post_exec}"; ret=$?
|
||||
_techo "Finished ${c_post_exec}."
|
||||
|
||||
if [ ${ret} -ne 0 ]; then
|
||||
_exit_err "${c_post_exec} failed."
|
||||
fi
|
||||
fi
|
||||
|
||||
# Calculation
|
||||
end_s=$(date +%s)
|
||||
|
||||
full_seconds=$((${end_s} - ${begin_s}))
|
||||
hours=$((${full_seconds} / 3600))
|
||||
seconds=$((${full_seconds} - (${hours} * 3600)))
|
||||
minutes=$((${seconds} / 60))
|
||||
seconds=$((${seconds} - (${minutes} * 60)))
|
||||
|
||||
_techo "Backup lasted: ${hours}:${minutes}:${seconds} (h:m:s)"
|
||||
|
||||
) | add_name
|
||||
done
|
||||
|
||||
#
|
||||
# Be a good parent and wait for our children, if they are running wild parallel
|
||||
#
|
||||
if [ "${PARALLEL}" ]; then
|
||||
_techo "Waiting for children to complete..."
|
||||
wait
|
||||
fi
|
||||
|
||||
#
|
||||
# Look for post-exec command (general)
|
||||
#
|
||||
if [ -x "${CPOSTEXEC}" ]; then
|
||||
_techo "Executing ${CPOSTEXEC} ..."
|
||||
"${CPOSTEXEC}"; ret=$?
|
||||
_techo "Finished ${CPOSTEXEC} (return code: ${ret})."
|
||||
|
||||
if [ ${ret} -ne 0 ]; then
|
||||
_techo "${CPOSTEXEC} failed."
|
||||
fi
|
||||
fi
|
||||
|
||||
rm -f "${TMP}"
|
||||
_techo "Finished ${__myname}"
|
||||
|
||||
# vim: set shiftwidth=3 tabstop=3 expandtab :
|
|
@ -0,0 +1,663 @@
|
|||
#!/bin/sh
#
# 2005-2009 Nico Schottelius (nico-ccollect at schottelius.org)
#
# This file is part of ccollect.
#
# ccollect is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ccollect is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ccollect. If not, see <http://www.gnu.org/licenses/>.
#
# Initially written for SyGroup (www.sygroup.ch)
# Date: Mon Nov 14 11:45:11 CET 2005

#
# Standard variables (stolen from cconf)
#
__pwd="$(pwd -P)"
__mydir="${0%/*}"; __abs_mydir="$(cd "$__mydir" && pwd -P)"
__myname=${0##*/}; __abs_myname="$__abs_mydir/$__myname"

#
# where to find our configuration and temporary file
#
CCOLLECT_CONF=${CCOLLECT_CONF:-/etc/ccollect}
CSOURCES=${CCOLLECT_CONF}/sources
CDEFAULTS=${CCOLLECT_CONF}/defaults
CPREEXEC="${CDEFAULTS}/pre_exec"
CPOSTEXEC="${CDEFAULTS}/post_exec"

TMP=$(mktemp "/tmp/${__myname}.XXXXXX")
VERSION=0.7.1
RELEASE="2009-02-02"
HALF_VERSION="ccollect ${VERSION}"
FULL_VERSION="ccollect ${VERSION} (${RELEASE})"

#TSORT="tc" ; NEWER="cnewer"
TSORT="t" ; NEWER="newer"

#
# CDATE: how we use it for naming of the archives
# DDATE: how the user should see it in our output (DISPLAY)
#
CDATE="date +%Y%m%d-%H%M"
DDATE="date +%Y-%m-%d-%H:%M:%S"

#
# unset parallel execution
#
PARALLEL=""

#
# catch signals
#
trap "rm -f \"${TMP}\"" 1 2 15

#
# Functions
#

# time displaying echo
_techo()
{
echo "$(${DDATE}): $@"
}

# exit on error
_exit_err()
{
_techo "$@"
rm -f "${TMP}"
exit 1
}

add_name()
{
awk "{ print \"[${name}] \" \$0 }"
}

pcmd()
{
if [ "$remote_host" ]; then
ssh "$remote_host" "$@"
else
"$@"
fi
}

#
# Version
#
display_version()
{
echo "${FULL_VERSION}"
exit 0
}

#
# Tell how to use us
#
usage()
{
echo "${__myname}: <interval name> [args] <sources to backup>"
echo ""
echo " ccollect creates (pseudo) incremental backups"
echo ""
echo " -h, --help: Show this help screen"
echo " -p, --parallel: Parallelise backup processes"
echo " -a, --all: Backup all sources specified in ${CSOURCES}"
echo " -v, --verbose: Be very verbose (uses set -x)"
echo " -V, --version: Print version information"
echo ""
echo " This is version ${VERSION}, released on ${RELEASE}"
echo " (the first version was written on 2005-12-05 by Nico Schottelius)."
echo ""
echo " Retrieve latest ccollect at http://unix.schottelius.org/ccollect/"
exit 0
}

#
# Select interval if AUTO
#
# For this to work nicely, you have to choose interval names that sort nicely
# such as int1, int2, int3 or a_daily, b_weekly, c_monthly, etc.
#
auto_interval()
{
if [ -d "${backup}/intervals" -a -n "$(ls "${backup}/intervals" 2>/dev/null)" ] ; then
intervals_dir="${backup}/intervals"
elif [ -d "${CDEFAULTS}/intervals" -a -n "$(ls "${CDEFAULTS}/intervals" 2>/dev/null)" ] ; then
intervals_dir="${CDEFAULTS}/intervals"
else
_exit_err "No intervals are defined. Skipping."
fi
echo intervals_dir=${intervals_dir}

trial_interval="$(ls -1r "${intervals_dir}/" | head -n 1)" || \
_exit_err "Failed to list contents of ${intervals_dir}/."
_techo "Considering interval ${trial_interval}"
most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${trial_interval}.*/$" | head -n 1)" || \
_exit_err "Failed to list contents of ${ddir}/."
_techo " Most recent ${trial_interval}: '${most_recent}'"
if [ -n "${most_recent}" ]; then
no_intervals="$(ls -1 "${intervals_dir}/" | wc -l)"
n=1
while [ "${n}" -le "${no_intervals}" ]; do
trial_interval="$(ls -p1 "${intervals_dir}/" | tail -n+${n} | head -n 1)"
_techo "Considering interval '${trial_interval}'"
c_interval="$(cat "${intervals_dir}/${trial_interval}" 2>/dev/null)"
m=$((${n}+1))
set -- "${ddir}" -maxdepth 1
while [ "${m}" -le "${no_intervals}" ]; do
interval_m="$(ls -1 "${intervals_dir}/" | tail -n+${m} | head -n 1)"
most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${interval_m}\..*/$" | head -n 1)"
_techo " Most recent ${interval_m}: '${most_recent}'"
if [ -n "${most_recent}" ] ; then
set -- "$@" -$NEWER "${ddir}/${most_recent}"
fi
m=$((${m}+1))
done
count=$(pcmd find "$@" -iname "${trial_interval}*" | wc -l)
_techo " Found $count more recent backups of ${trial_interval} (limit: ${c_interval})"
if [ "$count" -lt "${c_interval}" ] ; then
break
fi
n=$((${n}+1))
done
fi
export INTERVAL="${trial_interval}"
D_FILE_INTERVAL="${intervals_dir}/${INTERVAL}"
D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
}

#
# need at least interval and one source or --all
#
if [ $# -lt 2 ]; then
if [ "$1" = "-V" -o "$1" = "--version" ]; then
display_version
else
usage
fi
fi

#
# check for configuration directory
#
[ -d "${CCOLLECT_CONF}" ] || _exit_err "No configuration found in " \
"\"${CCOLLECT_CONF}\" (is \$CCOLLECT_CONF properly set?)"

#
# Filter arguments
#
export INTERVAL="$1"; shift
i=1
no_sources=0

#
# Create source "array"
#
while [ "$#" -ge 1 ]; do
eval arg=\"\$1\"; shift

if [ "${NO_MORE_ARGS}" = 1 ]; then
eval source_${no_sources}=\"${arg}\"
no_sources=$((${no_sources}+1))

# make variable available for subscripts
eval export source_${no_sources}
else
case "${arg}" in
-a|--all)
ALL=1
;;
-v|--verbose)
VERBOSE=1
;;
-p|--parallel)
PARALLEL=1
;;
-h|--help)
usage
;;
--)
NO_MORE_ARGS=1
;;
*)
eval source_${no_sources}=\"$arg\"
no_sources=$(($no_sources+1))
;;
esac
fi

i=$(($i+1))
done

# also export number of sources
export no_sources

#
# be really, really, really verbose
#
if [ "${VERBOSE}" = 1 ]; then
set -x
fi

#
# Look, if we should take ALL sources
#
if [ "${ALL}" = 1 ]; then
# reset everything specified before
no_sources=0

#
# get entries from sources
#
cwd=$(pwd -P)
( cd "${CSOURCES}" && ls > "${TMP}" ); ret=$?

[ "${ret}" -eq 0 ] || _exit_err "Listing of sources failed. Aborting."

while read tmp; do
eval source_${no_sources}=\"${tmp}\"
no_sources=$((${no_sources}+1))
done < "${TMP}"
fi

#
# Need at least ONE source to backup
#
if [ "${no_sources}" -lt 1 ]; then
usage
else
_techo "${HALF_VERSION}: Beginning backup using interval ${INTERVAL}"
fi

#
# Look for pre-exec command (general)
#
if [ -x "${CPREEXEC}" ]; then
_techo "Executing ${CPREEXEC} ..."
"${CPREEXEC}"; ret=$?
_techo "Finished ${CPREEXEC} (return code: ${ret})."

[ "${ret}" -eq 0 ] || _exit_err "${CPREEXEC} failed. Aborting"
fi

#
# check default configuration
#

D_FILE_INTERVAL="${CDEFAULTS}/intervals/${INTERVAL}"
D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)


#
# Let's do the backup
#
i=0
while [ "${i}" -lt "${no_sources}" ]; do

#
# Get current source
#
eval name=\"\$source_${i}\"
i=$((${i}+1))

export name

#
# start ourself, if we want parallel execution
#
if [ "${PARALLEL}" ]; then
"$0" "${INTERVAL}" "${name}" &
continue
fi

#
# Start subshell for easy log editing
#
(
#
# Stderr to stdout, so we can produce nice logs
#
exec 2>&1

#
# Configuration
#
backup="${CSOURCES}/${name}"
c_source="${backup}/source"
c_dest="${backup}/destination"
c_pre_exec="${backup}/pre_exec"
c_post_exec="${backup}/post_exec"
for opt in exclude verbose very_verbose rsync_options summary delete_incomplete remote_host ; do
if [ -f "${backup}/$opt" -o -f "${backup}/no_$opt" ]; then
eval c_$opt=\"${backup}/$opt\"
else
eval c_$opt=\"${CDEFAULTS}/$opt\"
fi
done

#
# Marking backups: If we abort it's not removed => Backup is broken
#
c_marker=".ccollect-marker"

#
# Times
#
begin_s=$(date +%s)

#
# unset possible options
#
VERBOSE=""
VVERBOSE=""

_techo "Beginning to backup"

#
# Standard configuration checks
#
if [ ! -e "${backup}" ]; then
_exit_err "Source does not exist."
fi

#
# configuration _must_ be a directory
#
if [ ! -d "${backup}" ]; then
_exit_err "\"${name}\" is not a cconfig-directory. Skipping."
fi

#
# first execute pre_exec, which may generate destination or other
# parameters
#
if [ -x "${c_pre_exec}" ]; then
_techo "Executing ${c_pre_exec} ..."
"${c_pre_exec}"; ret="$?"
_techo "Finished ${c_pre_exec} (return code ${ret})."

if [ "${ret}" -ne 0 ]; then
_exit_err "${c_pre_exec} failed. Skipping."
fi
fi

#
# Destination is a path
#
if [ ! -f "${c_dest}" ]; then
_exit_err "Destination ${c_dest} is not a file. Skipping."
else
ddir=$(cat "${c_dest}"); ret="$?"
if [ "${ret}" -ne 0 ]; then
_exit_err "Destination ${c_dest} is not readable. Skipping."
fi
fi

#
# interval definition: First try source specific, fallback to default
#
if [ "${INTERVAL}" = "AUTO" ] ; then
auto_interval
_techo "Selected interval: '$INTERVAL'"
fi
c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"

if [ -z "${c_interval}" ]; then
c_interval="${D_INTERVAL}"

if [ -z "${c_interval}" ]; then
_exit_err "No definition for interval \"${INTERVAL}\" found. Skipping."
fi
fi

#
# Source checks
#
if [ ! -f "${c_source}" ]; then
_exit_err "Source description \"${c_source}\" is not a file. Skipping."
else
source=$(cat "${c_source}"); ret="$?"
if [ "${ret}" -ne 0 ]; then
_exit_err "Source ${c_source} is not readable. Skipping."
fi
fi
# Verify source is up and accepting connections before deleting any old backups
rsync "$source" >/dev/null || _exit_err "Source ${source} is not readable. Skipping."

#
# do we backup to a remote host? then set pre-cmd
#
if [ -f "${c_remote_host}" ]; then
# adjust ls and co
remote_host=$(cat "${c_remote_host}"); ret="$?"
if [ "${ret}" -ne 0 ]; then
_exit_err "Remote host file ${c_remote_host} exists, but is not readable. Skipping."
fi
destination="${remote_host}:${ddir}"
else
remote_host=""
destination="${ddir}"
fi
export remote_host

#
# check for existence / use real name
#
( pcmd cd "$ddir" ) || _exit_err "Cannot change to ${ddir}. Skipping."


# NEW method as of 0.6:
# - insert ccollect default parameters
# - insert options
# - insert user options

#
# rsync standard options
#

set -- "$@" "--archive" "--delete" "--numeric-ids" "--relative" \
"--delete-excluded" "--sparse"

#
# exclude list
#
if [ -f "${c_exclude}" ]; then
set -- "$@" "--exclude-from=${c_exclude}"
fi

#
# Output a summary
#
if [ -f "${c_summary}" ]; then
set -- "$@" "--stats"
fi

#
# Verbosity for rsync
#
if [ -f "${c_very_verbose}" ]; then
set -- "$@" "-vv"
elif [ -f "${c_verbose}" ]; then
set -- "$@" "-v"
fi

#
# extra options for rsync provided by the user
#
if [ -f "${c_rsync_options}" ]; then
while read line; do
set -- "$@" "$line"
done < "${c_rsync_options}"
fi

#
# Check for incomplete backups
#
pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" 2>/dev/null | while read marker; do
incomplete="$(echo ${marker} | sed "s/\\.${c_marker}\$//")"
_techo "Incomplete backup: ${incomplete}"
if [ -f "${c_delete_incomplete}" ]; then
_techo "Deleting ${incomplete} ..."
pcmd rm $VVERBOSE -rf "${incomplete}" || \
_exit_err "Removing ${incomplete} failed."
pcmd rm $VVERBOSE -f "${marker}" || \
_exit_err "Removing ${marker} failed."
fi
done

#
# check if maximum number of backups is reached, if so remove
# use grep and ls -p so we only look at directories
#
count="$(pcmd ls -p1 "${ddir}" | grep "^${INTERVAL}\..*/\$" | wc -l \
| sed 's/^ *//g')" || _exit_err "Counting backups failed"

_techo "Existing backups: ${count} Total keeping backups: ${c_interval}"

if [ "${count}" -ge "${c_interval}" ]; then
substract=$((${c_interval} - 1))
remove=$((${count} - ${substract}))
_techo "Removing ${remove} backup(s)..."

pcmd ls -${TSORT}p1r "$ddir" | grep "^${INTERVAL}\..*/\$" | \
head -n "${remove}" > "${TMP}" || \
_exit_err "Listing old backups failed"

i=0
while read to_remove; do
eval remove_$i=\"${to_remove}\"
i=$(($i+1))
done < "${TMP}"

j=0
while [ "$j" -lt "$i" ]; do
eval to_remove=\"\$remove_$j\"
_techo "Removing ${to_remove} ..."
pcmd rm ${VVERBOSE} -rf "${ddir}/${to_remove}" || \
_exit_err "Removing ${to_remove} failed."
j=$(($j+1))
done
fi


#
# Check for backup directory to clone from: Always clone from the latest one!
#
# Depending on your file system, you may want to sort on:
# 1. mtime (modification time) with TSORT=t, or
# 2. ctime (last change time, usually) with TSORT=tc
last_dir="$(pcmd ls -${TSORT}p1 "${ddir}" | grep '/$' | head -n 1)" || \
_exit_err "Failed to list contents of ${ddir}."

#
# clone from old backup, if existing
#
if [ "${last_dir}" ]; then
set -- "$@" "--link-dest=${ddir}/${last_dir}"
_techo "Hard linking from ${last_dir}"
fi


# set time when we really begin to backup, not when we began to remove above
destination_date=$(${CDATE})
destination_dir="${ddir}/${INTERVAL}.${destination_date}.$$"
destination_full="${destination}/${INTERVAL}.${destination_date}.$$"

# give some info
_techo "Beginning to backup, this may take some time..."

_techo "Creating ${destination_dir} ..."
pcmd mkdir ${VVERBOSE} "${destination_dir}" || \
_exit_err "Creating ${destination_dir} failed. Skipping."

#
# added marking in 0.6 (and remove it, if successful later)
#
pcmd touch "${destination_dir}.${c_marker}"

#
# the rsync part
#
_techo "Transferring files..."
rsync "$@" "${source}" "${destination_full}"; ret=$?
# Correct the modification time:
pcmd touch "${destination_dir}"

#
# remove marking here
#
if [ "$ret" -ne 12 ] ; then
pcmd rm "${destination_dir}.${c_marker}" || \
_exit_err "Removing ${destination_dir}/${c_marker} failed."
fi

_techo "Finished backup (rsync return code: $ret)."
if [ "${ret}" -ne 0 ]; then
_techo "Warning: rsync exited non-zero, the backup may be broken (see rsync errors)."
fi

#
# post_exec
#
if [ -x "${c_post_exec}" ]; then
_techo "Executing ${c_post_exec} ..."
"${c_post_exec}"; ret=$?
_techo "Finished ${c_post_exec}."

if [ ${ret} -ne 0 ]; then
_exit_err "${c_post_exec} failed."
fi
fi

# Calculation
end_s=$(date +%s)

full_seconds=$((${end_s} - ${begin_s}))
hours=$((${full_seconds} / 3600))
seconds=$((${full_seconds} - (${hours} * 3600)))
minutes=$((${seconds} / 60))
seconds=$((${seconds} - (${minutes} * 60)))

_techo "Backup lasted: ${hours}:${minutes}:${seconds} (h:m:s)"

) | add_name
done

#
# Be a good parent and wait for our children, if they are running wild parallel
#
if [ "${PARALLEL}" ]; then
_techo "Waiting for children to complete..."
wait
fi

#
# Look for post-exec command (general)
#
if [ -x "${CPOSTEXEC}" ]; then
_techo "Executing ${CPOSTEXEC} ..."
"${CPOSTEXEC}"; ret=$?
_techo "Finished ${CPOSTEXEC} (return code: ${ret})."

if [ ${ret} -ne 0 ]; then
_techo "${CPOSTEXEC} failed."
fi
fi

rm -f "${TMP}"
_techo "Finished ${WE}"

# vim: set shiftwidth=3 tabstop=3 expandtab :
@ -0,0 +1,74 @@
# I found that ccollect was not deleting incomplete backups despite the
# delete_incomplete option being specified. I traced the problem to:
#
# < pcmd rm $VVERBOSE -rf "${ddir}/${realincomplete}" || \
#
# which, at least on all the systems I tested, should read:
#
# > pcmd rm $VVERBOSE -rf "${realincomplete}" || \
#
# Also, the marker file is not deleted. I didn't see any reason to keep
# those files around (what do you think?), so I deleted them also:
#
# > pcmd rm $VVERBOSE -f "${marker}" || \
# > _exit_err "Removing ${marker} failed."
#
# As long as I was messing with the delete incomplete code and therefore need
# to test it, I took the liberty of simplifying it. The v0.7.1 code uses
# multiple loops with multiple loop counters and creates many variables. I
# simplified that to a single loop:
#
# > pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" 2>/dev/null | while read marker; do
# > incomplete="$(echo ${marker} | sed "s/\\.${c_marker}\$//")"
# > _techo "Incomplete backup: ${incomplete}"
# > if [ "${DELETE_INCOMPLETE}" = "yes" ]; then
# > _techo "Deleting ${incomplete} ..."
# > pcmd rm $VVERBOSE -rf "${incomplete}" || \
# > _exit_err "Removing ${incomplete} failed."
# > pcmd rm $VVERBOSE -f "${marker}" || \
# > _exit_err "Removing ${marker} failed."
# > fi
# > done
#
# The final code (a) fixes the delete bug, (b) also deletes the marker, and
# (c) is eight lines shorter than the original.
#
--- ccollect-f.sh 2009-05-12 12:49:28.000000000 -0700
+++ ccollect-g.sh 2009-06-03 14:32:03.000000000 -0700
@@ -516,28 +516,20 @@
fi

#
# Check for incomplete backups
#
- pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" > "${TMP}" 2>/dev/null
-
- i=0
- while read incomplete; do
- eval incomplete_$i=\"$(echo ${incomplete} | sed "s/\\.${c_marker}\$//")\"
- i=$(($i+1))
- done < "${TMP}"
-
- j=0
- while [ "$j" -lt "$i" ]; do
- eval realincomplete=\"\$incomplete_$j\"
- _techo "Incomplete backup: ${realincomplete}"
+ pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" 2>/dev/null | while read marker; do
+ incomplete="$(echo ${marker} | sed "s/\\.${c_marker}\$//")"
+ _techo "Incomplete backup: ${incomplete}"
if [ "${DELETE_INCOMPLETE}" = "yes" ]; then
- _techo "Deleting ${realincomplete} ..."
- pcmd rm $VVERBOSE -rf "${ddir}/${realincomplete}" || \
- _exit_err "Removing ${realincomplete} failed."
+ _techo "Deleting ${incomplete} ..."
+ pcmd rm $VVERBOSE -rf "${incomplete}" || \
+ _exit_err "Removing ${incomplete} failed."
+ pcmd rm $VVERBOSE -f "${marker}" || \
+ _exit_err "Removing ${marker} failed."
fi
- j=$(($j+1))
done

#
# check if maximum number of backups is reached, if so remove
# use grep and ls -p so we only look at directories
@ -0,0 +1,18 @@
# A line in my f.patch was missing needed quotation marks.
# This fixes that.
#
--- ccollect-g.sh 2009-06-03 14:32:03.000000000 -0700
+++ ccollect-h.sh 2009-06-03 14:32:19.000000000 -0700
@@ -412,11 +412,11 @@
fi

#
# interval definition: First try source specific, fallback to default
#
- if [ ${INTERVAL} = "AUTO" ] ; then
+ if [ "${INTERVAL}" = "AUTO" ] ; then
auto_interval
_techo "Selected interval: '$INTERVAL'"
fi
c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"
@ -0,0 +1,134 @@
# I have many sources that use the same options so I put those
# options in the defaults directory. I found that ccollect was
# ignoring most of them. I thought that this was a bug so I wrote
# some code to correct this:
#
# > for opt in exclude verbose very_verbose rsync_options summary delete_incomplete remote_host ; do
# > if [ -f "${backup}/$opt" -o -f "${backup}/no_$opt" ]; then
# > eval c_$opt=\"${backup}/$opt\"
# > else
# > eval c_$opt=\"${CDEFAULTS}/$opt\"
# > fi
# > done
#
# This also adds a new feature: if some option, say verbose, is
# specified in the defaults directory, it can be turned off for
# particular sources by specifying no_verbose as a source option.
#
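# The lookup above can be sketched as a standalone helper. This is a
# minimal sketch, not ccollect code: the resolve_opt name and the
# temporary directory layout are hypothetical.

```shell
#!/bin/sh
# Minimal sketch of the per-source option lookup quoted above.
# resolve_opt SOURCE_DIR DEFAULTS_DIR OPTION prints the path that the
# script would later test with [ -f ... ]: the source directory wins
# whenever it contains either OPTION or no_OPTION; otherwise the
# defaults directory is used.
resolve_opt() {
   if [ -f "$1/$3" ] || [ -f "$1/no_$3" ]; then
      echo "$1/$3"
   else
      echo "$2/$3"
   fi
}

# Example: defaults enable verbose, one source opts out with no_verbose.
d=$(mktemp -d)
mkdir "$d/src" "$d/def"
touch "$d/def/verbose" "$d/src/no_verbose"
resolve_opt "$d/src" "$d/def" verbose   # prints $d/src/verbose
rm -r "$d"
```

# Because the chosen path is only ever tested with [ -f ... ], pointing
# it at the absent src/verbose file effectively switches the option off
# for that source while defaults keep it on elsewhere.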
# A side effect of this approach is that it forces script variable
# names to be consistent with option file names. Thus, there are
# several changes such as:
#
# < if [ -f "${c_rsync_extra}" ]; then
# > if [ -f "${c_rsync_options}" ]; then
#
# and
#
# < if [ -f "${c_vverbose}" ]; then
# > if [ -f "${c_very_verbose}" ]; then
#
# After correcting the bug and adding the "no_" feature, the code is
# 12 lines shorter.
#
--- ccollect-h.sh 2009-06-01 15:59:11.000000000 -0700
+++ ccollect-i.sh 2009-06-03 14:27:58.000000000 -0700
@@ -336,20 +336,19 @@
# Configuration
#
backup="${CSOURCES}/${name}"
c_source="${backup}/source"
c_dest="${backup}/destination"
- c_exclude="${backup}/exclude"
- c_verbose="${backup}/verbose"
- c_vverbose="${backup}/very_verbose"
- c_rsync_extra="${backup}/rsync_options"
- c_summary="${backup}/summary"
c_pre_exec="${backup}/pre_exec"
c_post_exec="${backup}/post_exec"
- f_incomplete="delete_incomplete"
- c_incomplete="${backup}/${f_incomplete}"
- c_remote_host="${backup}/remote_host"
+ for opt in exclude verbose very_verbose rsync_options summary delete_incomplete remote_host ; do
+ if [ -f "${backup}/$opt" -o -f "${backup}/no_$opt" ]; then
+ eval c_$opt=\"${backup}/$opt\"
+ else
+ eval c_$opt=\"${CDEFAULTS}/$opt\"
+ fi
+ done

#
# Marking backups: If we abort it's not removed => Backup is broken
#
c_marker=".ccollect-marker"
@@ -360,16 +359,12 @@
begin_s=$(date +%s)

#
# unset possible options
#
- EXCLUDE=""
- RSYNC_EXTRA=""
- SUMMARY=""
VERBOSE=""
VVERBOSE=""
- DELETE_INCOMPLETE=""

_techo "Beginning to backup"

#
# Standard configuration checks
@@ -462,17 +457,10 @@
# check for existence / use real name
#
( pcmd cd "$ddir" ) || _exit_err "Cannot change to ${ddir}. Skipping."


- #
- # Check whether to delete incomplete backups
- #
- if [ -f "${c_incomplete}" -o -f "${CDEFAULTS}/${f_incomplete}" ]; then
- DELETE_INCOMPLETE="yes"
- fi
-
# NEW method as of 0.6:
# - insert ccollect default parameters
# - insert options
# - insert user options

@@ -498,32 +486,32 @@
fi

#
# Verbosity for rsync
#
- if [ -f "${c_vverbose}" ]; then
+ if [ -f "${c_very_verbose}" ]; then
set -- "$@" "-vv"
elif [ -f "${c_verbose}" ]; then
set -- "$@" "-v"
fi

#
# extra options for rsync provided by the user
#
- if [ -f "${c_rsync_extra}" ]; then
+ if [ -f "${c_rsync_options}" ]; then
while read line; do
set -- "$@" "$line"
- done < "${c_rsync_extra}"
+ done < "${c_rsync_options}"
fi

#
# Check for incomplete backups
#
pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" 2>/dev/null | while read marker; do
incomplete="$(echo ${marker} | sed "s/\\.${c_marker}\$//")"
_techo "Incomplete backup: ${incomplete}"
- if [ "${DELETE_INCOMPLETE}" = "yes" ]; then
+ if [ -f "${c_delete_incomplete}" ]; then
_techo "Deleting ${incomplete} ..."
pcmd rm $VVERBOSE -rf "${incomplete}" || \
_exit_err "Removing ${incomplete} failed."
pcmd rm $VVERBOSE -f "${marker}" || \
_exit_err "Removing ${marker} failed."
@ -0,0 +1,296 @@
Dear Nico Schottelius,

I have started using ccollect and I very much like its design:
it is elegant and effective.

In the process of getting ccollect set up and running, I made
five changes, including one major new feature, that I hope you will
find useful.

First, I added the following before any old backup gets deleted:

> # Verify source is up and accepting connections before deleting any old backups
> rsync "$source" >/dev/null || _exit_err "Source ${source} is not readable. Skipping."

I think that this quick test is much better than, say, pinging
the source in a pre-exec script: this tests not only that the
source is up and connected to the net, it also verifies (1) that
ssh is up and accepting our key (if we are using ssh), and (2) that
the source directory is mounted (if it needs to be mounted) and
readable.

Second, I found ccollect's use of ctime problematic. After
copying an old backup over to my ccollect destination, I adjusted
mtime and atime where needed using touch, e.g.:

touch -d"28 Apr 2009 3:00" destination/daily.01

However, as far as I know, there is no way to correct a bad ctime.
I ran into this issue repeatedly while adjusting my backup
configuration. (For example, "cp -a" preserves mtime but not
ctime. Even worse, "cp -al old new" also changes ctime on old.)

Another potential problem with ctime is that it is file-system
dependent: I have read that Windows sets ctime to create-time, not
last change-time.

However, it is simple to give a new backup the correct mtime.
After the rsync step, I added the command:

553a616,617
> # Correct the modification time:
> pcmd touch "${destination_dir}"

Even if ccollect continues to use ctime for sorting, I see no
reason not to have the backup directory have the correct mtime.

To allow the rest of the code to use either ctime or mtime, I
added definitions:

44a45,47
> #TSORT="tc" ; NEWER="cnewer"
> TSORT="t" ; NEWER="newer"

(It would be better if this choice were user-configurable because
those with existing backup directories should continue to use ctime
until the mtimes of their directories are correct. The correction
would happen passively over time as new backups are created using the
above touch command and the old ones are deleted.)
|
||||
|
||||
With these definitions, the proper link-dest directory can then be
|
||||
found using this minor change (and comment update):
|
||||
|
||||
516,519c579,582
|
||||
< # Use ls -1c instead of -1t, because last modification maybe the same on all
|
||||
< # and metadate update (-c) is updated by rsync locally.
|
||||
< #
|
||||
< last_dir="$(pcmd ls -tcp1 "${ddir}" | grep '/$' | head -n 1)" || \
|
||||
---
|
||||
> # Depending on your file system, you may want to sort on:
|
||||
> # 1. mtime (modification time) with TSORT=t, or
|
||||
> # 2. ctime (last change time, usually) with TSORT=tc
|
||||
> last_dir="$(pcmd ls -${TSORT}p1 "${ddir}" | grep '/$' | head -n 1)" || \
|
||||
|
||||
Thirdly, after I copied my old backups over to my ccollect
|
||||
destination directory, I found that ccollect would delete a
|
||||
recent backup not an old backup! My problem was that, unknown to
|
||||
me, the algorithm to find the oldest backup (for deletion) was
|
||||
inconsistent with that used to find the newest (for link-dest). I
|
||||
suggest that these two should be consistent. Because time-sorting
|
||||
seemed more consistent with the ccollect documentation, I suggest:
|
||||
|
||||
492,493c555,556
|
||||
< pcmd ls -p1 "$ddir" | grep "^${INTERVAL}\..*/\$" | \
|
||||
< sort -n | head -n "${remove}" > "${TMP}" || \
|
||||
---
|
||||
> pcmd ls -${TSORT}p1r "$ddir" | grep "^${INTERVAL}\..*/\$" | \
|
||||
> head -n "${remove}" > "${TMP}" || \
|
||||
|
||||
Fourthly, in my experience, rsync error code 12 means complete
|
||||
failure, usually because the source refuses the ssh connection.
|
||||
So, I left the marker in that case:
|
||||
|
||||
558,559c622,625
|
||||
< pcmd rm "${destination_dir}.${c_marker}" || \
|
||||
< _exit_err "Removing ${destination_dir}/${c_marker} failed."
|
||||
---
|
||||
> if [ "$ret" -ne 12 ] ; then
|
||||
> pcmd rm "${destination_dir}.${c_marker}" || \
|
||||
> _exit_err "Removing ${destination_dir}/${c_marker} failed."
|
||||
> fi
|
||||
|
||||
(A better solution might allow a user-configurable list of error
|
||||
codes that are treated the same as a fail.)

Fifth, because I was frustrated by the problems of having a
cron-job decide which interval to back up, I added a major new
feature: the modified ccollect can now automatically select an
interval to use for backup.

Cron-job-controlled backup works well if all machines are up and
running all the time and nothing ever goes wrong. I have, however,
some machines that are occasionally turned off, or that are mobile
and only sometimes connected to the local net. For these machines, the
use of cron-jobs to select intervals can be a disaster.

There are several ways one could automatically choose an
appropriate interval. The method I show below has the advantage
that it works with existing ccollect configuration files. The only
requirement is that interval names be chosen to sort nicely (under
ls). For example, I currently use:

$ ls -1 intervals
a_daily
b_weekly
c_monthly
d_quarterly
e_yearly
$ cat intervals/*
6
3
2
3
30

A simpler example would be:

$ ls -1 intervals
int1
int2
int3
$ cat intervals/*
2
3
4

The algorithm works as follows:

If no backup exists for the least frequent interval (int3 in the
simpler example), then use that interval. Otherwise, use the
most frequent interval (int1) unless there are "$(cat
intervals/int1)" int1 backups more recent than any int2 or int3
backup, in which case select int2, unless there are "$(cat
intervals/int2)" int2 backups more recent than any int3 backup,
in which case choose int3.
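
The quota-counting part of that rule can be condensed into a small
standalone sketch for the simpler three-interval example. Here
`select_interval` is a hypothetical helper (not in the patches); the
limits 2 and 3 come from intervals/int1 and intervals/int2, and the
counting of "more recent" backups is assumed to have been done already:

```shell
#!/bin/sh
# Hypothetical condensation of the AUTO selection rule for the simpler
# example above (limits: int1=2, int2=3). The arguments are the number
# of int1/int2 backups more recent than any less frequent backup.
select_interval() {
   newer_int1="$1"; newer_int2="$2"
   if [ "${newer_int1}" -lt 2 ]; then
      echo int1    # int1 quota not yet filled
   elif [ "${newer_int2}" -lt 3 ]; then
      echo int2    # int1 full, int2 quota not yet filled
   else
      echo int3    # both quotas full: fall through to int3
   fi
}

select_interval 1 0   # -> int1
select_interval 2 1   # -> int2
select_interval 2 3   # -> int3
```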

This algorithm works well, cycling through all the backups for my
always-connected machines as well as for my usually-connected and
rarely-connected machines. (For a rarely connected machine,
interval names like "b_weekly" lose their English meaning, but the
algorithm still does a reasonable job of rotating through the
intervals.)

In addition to being more robust, automatic interval selection
means that the crontab is greatly simplified: only one line
is needed. I use:

30 3 * * * ccollect.sh AUTO host1 host2 host3 | tee -a /var/log/ccollect-full.log | ccollect_analyse_logs.sh iwe

Some users might prefer a calendar-driven algorithm, such as: do
a yearly backup the first time a machine is connected during a new
year; do a monthly backup the first time a machine is connected
during a month; etc. This, however, would require a change to the
ccollect configuration files, so I didn't pursue the idea any
further.

The code checks whether the user specified the interval as
AUTO. If so, the auto_interval function is called to select the
interval:

347a417,420
>    if [ ${INTERVAL} = "AUTO" ] ; then
>       auto_interval
>       _techo "Selected interval: '$INTERVAL'"
>    fi

The code for auto_interval is as follows (note that it allows 'more
recent' to be defined by either ctime or mtime, as per the TSORT
variable):

125a129,182
> # Select interval if AUTO
> #
> # For this to work nicely, you have to choose interval names that sort nicely
> # such as int1, int2, int3 or a_daily, b_weekly, c_monthly, etc.
> #
> auto_interval()
> {
>    if [ -d "${backup}/intervals" -a -n "$(ls "${backup}/intervals" 2>/dev/null)" ] ; then
>       intervals_dir="${backup}/intervals"
>    elif [ -d "${CDEFAULTS}/intervals" -a -n "$(ls "${CDEFAULTS}/intervals" 2>/dev/null)" ] ; then
>       intervals_dir="${CDEFAULTS}/intervals"
>    else
>       _exit_err "No intervals are defined. Skipping."
>    fi
>    echo intervals_dir=${intervals_dir}
>
>    trial_interval="$(ls -1r "${intervals_dir}/" | head -n 1)" || \
>       _exit_err "Failed to list contents of ${intervals_dir}/."
>    _techo "Considering interval ${trial_interval}"
>    most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${trial_interval}.*/$" | head -n 1)" || \
>       _exit_err "Failed to list contents of ${ddir}/."
>    _techo " Most recent ${trial_interval}: '${most_recent}'"
>    if [ -n "${most_recent}" ]; then
>       no_intervals="$(ls -1 "${intervals_dir}/" | wc -l)"
>       n=1
>       while [ "${n}" -le "${no_intervals}" ]; do
>          trial_interval="$(ls -p1 "${intervals_dir}/" | tail -n+${n} | head -n 1)"
>          _techo "Considering interval '${trial_interval}'"
>          c_interval="$(cat "${intervals_dir}/${trial_interval}" 2>/dev/null)"
>          m=$((${n}+1))
>          set -- "${ddir}" -maxdepth 1
>          while [ "${m}" -le "${no_intervals}" ]; do
>             interval_m="$(ls -1 "${intervals_dir}/" | tail -n+${m} | head -n 1)"
>             most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${interval_m}\..*/$" | head -n 1)"
>             _techo " Most recent ${interval_m}: '${most_recent}'"
>             if [ -n "${most_recent}" ] ; then
>                set -- "$@" -$NEWER "${ddir}/${most_recent}"
>             fi
>             m=$((${m}+1))
>          done
>          count=$(pcmd find "$@" -iname "${trial_interval}*" | wc -l)
>          _techo " Found $count more recent backups of ${trial_interval} (limit: ${c_interval})"
>          if [ "$count" -lt "${c_interval}" ] ; then
>             break
>          fi
>          n=$((${n}+1))
>       done
>    fi
>    export INTERVAL="${trial_interval}"
>    D_FILE_INTERVAL="${intervals_dir}/${INTERVAL}"
>    D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
> }
>
> #

While I consider the auto_interval code to be developmental, I have
been using it for my nightly backups and it works for me.

One last change: for auto_interval to work, it needs "ddir" to
be defined first. Consequently, I had to move the following code
so that it runs before auto_interval is called:

369,380c442,443
<
<       #
<       # Destination is a path
<       #
<       if [ ! -f "${c_dest}" ]; then
<          _exit_err "Destination ${c_dest} is not a file. Skipping."
<       else
<          ddir=$(cat "${c_dest}"); ret="$?"
<          if [ "${ret}" -ne 0 ]; then
<             _exit_err "Destination ${c_dest} is not readable. Skipping."
<          fi
<       fi
345a403,414
>       # Destination is a path
>       #
>       if [ ! -f "${c_dest}" ]; then
>          _exit_err "Destination ${c_dest} is not a file. Skipping."
>       else
>          ddir=$(cat "${c_dest}"); ret="$?"
>          if [ "${ret}" -ne 0 ]; then
>             _exit_err "Destination ${c_dest} is not readable. Skipping."
>          fi
>       fi
>
>       #

I have some other ideas, but this is all I have implemented at
the moment. Files are attached.

Thanks again for developing ccollect, and let me know what you
think.

Regards,

John

--
John L. Lawless, Ph.D.
Redwood Scientific, Inc.
1005 Terra Nova Blvd
Pacifica, CA 94044-4300
1-650-738-8083
@ -0,0 +1,15 @@
--- ccollect-0.7.1.sh	2009-02-02 03:39:42.000000000 -0800
+++ ccollect-0.7.1-a.sh	2009-05-24 21:30:38.000000000 -0700
@@ -364,10 +364,12 @@
source=$(cat "${c_source}"); ret="$?"
if [ "${ret}" -ne 0 ]; then
_exit_err "Source ${c_source} is not readable. Skipping."
fi
fi
+ # Verify source is up and accepting connections before deleting any old backups
+ rsync "$source" >/dev/null || _exit_err "Source ${source} is not readable. Skipping."

#
# Destination is a path
#
if [ ! -f "${c_dest}" ]; then
@ -0,0 +1,15 @@
--- ccollect-0.7.1-a.sh	2009-05-24 21:30:38.000000000 -0700
+++ ccollect-0.7.1-b.sh	2009-05-24 21:32:00.000000000 -0700
@@ -551,10 +551,12 @@
# the rsync part
#

_techo "Transferring files..."
rsync "$@" "${source}" "${destination_full}"; ret=$?
+ # Correct the modification time:
+ pcmd touch "${destination_dir}"

#
# remove marking here
#
pcmd rm "${destination_dir}.${c_marker}" || \
@ -0,0 +1,35 @@
--- ccollect-0.7.1-b.sh	2009-05-24 21:32:00.000000000 -0700
+++ ccollect-0.7.1-c.sh	2009-05-24 21:39:43.000000000 -0700
@@ -40,10 +40,13 @@
VERSION=0.7.1
RELEASE="2009-02-02"
HALF_VERSION="ccollect ${VERSION}"
FULL_VERSION="ccollect ${VERSION} (${RELEASE})"

+#TSORT="tc" ; NEWER="cnewer"
+TSORT="t" ; NEWER="newer"
+
#
# CDATE: how we use it for naming of the archives
# DDATE: how the user should see it in our output (DISPLAY)
#
CDATE="date +%Y%m%d-%H%M"
@@ -513,14 +516,14 @@


#
# Check for backup directory to clone from: Always clone from the latest one!
#
- # Use ls -1c instead of -1t, because last modification maybe the same on all
- # and metadate update (-c) is updated by rsync locally.
- #
- last_dir="$(pcmd ls -tcp1 "${ddir}" | grep '/$' | head -n 1)" || \
+ # Depending on your file system, you may want to sort on:
+ # 1. mtime (modification time) with TSORT=t, or
+ # 2. ctime (last change time, usually) with TSORT=tc
+ last_dir="$(pcmd ls -${TSORT}p1 "${ddir}" | grep '/$' | head -n 1)" || \
_exit_err "Failed to list contents of ${ddir}."

#
# clone from old backup, if existing
#
@ -0,0 +1,615 @@
#!/bin/sh
#
# 2005-2009 Nico Schottelius (nico-ccollect at schottelius.org)
#
# This file is part of ccollect.
#
# ccollect is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ccollect is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ccollect. If not, see <http://www.gnu.org/licenses/>.
#
# Initially written for SyGroup (www.sygroup.ch)
# Date: Mon Nov 14 11:45:11 CET 2005

#
# Standard variables (stolen from cconf)
#
__pwd="$(pwd -P)"
__mydir="${0%/*}"; __abs_mydir="$(cd "$__mydir" && pwd -P)"
__myname=${0##*/}; __abs_myname="$__abs_mydir/$__myname"

#
# where to find our configuration and temporary file
#
CCOLLECT_CONF=${CCOLLECT_CONF:-/etc/ccollect}
CSOURCES=${CCOLLECT_CONF}/sources
CDEFAULTS=${CCOLLECT_CONF}/defaults
CPREEXEC="${CDEFAULTS}/pre_exec"
CPOSTEXEC="${CDEFAULTS}/post_exec"

TMP=$(mktemp "/tmp/${__myname}.XXXXXX")
VERSION=0.7.1
RELEASE="2009-02-02"
HALF_VERSION="ccollect ${VERSION}"
FULL_VERSION="ccollect ${VERSION} (${RELEASE})"

#
# CDATE: how we use it for naming of the archives
# DDATE: how the user should see it in our output (DISPLAY)
#
CDATE="date +%Y%m%d-%H%M"
DDATE="date +%Y-%m-%d-%H:%M:%S"

#
# unset parallel execution
#
PARALLEL=""

#
# catch signals
#
trap "rm -f \"${TMP}\"" 1 2 15

#
# Functions
#

# time displaying echo
_techo()
{
   echo "$(${DDATE}): $@"
}

# exit on error
_exit_err()
{
   _techo "$@"
   rm -f "${TMP}"
   exit 1
}

add_name()
{
   awk "{ print \"[${name}] \" \$0 }"
}

pcmd()
{
   if [ "$remote_host" ]; then
      ssh "$remote_host" "$@"
   else
      "$@"
   fi
}

#
# Version
#
display_version()
{
   echo "${FULL_VERSION}"
   exit 0
}

#
# Tell how to use us
#
usage()
{
   echo "${__myname}: <interval name> [args] <sources to backup>"
   echo ""
   echo " ccollect creates (pseudo) incremental backups"
   echo ""
   echo " -h, --help: Show this help screen"
   echo " -p, --parallel: Parallelise backup processes"
   echo " -a, --all: Backup all sources specified in ${CSOURCES}"
   echo " -v, --verbose: Be very verbose (uses set -x)"
   echo " -V, --version: Print version information"
   echo ""
   echo " This is version ${VERSION}, released on ${RELEASE}"
   echo " (the first version was written on 2005-12-05 by Nico Schottelius)."
   echo ""
   echo " Retrieve latest ccollect at http://unix.schottelius.org/ccollect/"
   exit 0
}

#
# need at least interval and one source or --all
#
if [ $# -lt 2 ]; then
   if [ "$1" = "-V" -o "$1" = "--version" ]; then
      display_version
   else
      usage
   fi
fi

#
# check for configuraton directory
#
[ -d "${CCOLLECT_CONF}" ] || _exit_err "No configuration found in " \
   "\"${CCOLLECT_CONF}\" (is \$CCOLLECT_CONF properly set?)"

#
# Filter arguments
#
export INTERVAL="$1"; shift
i=1
no_sources=0

#
# Create source "array"
#
while [ "$#" -ge 1 ]; do
   eval arg=\"\$1\"; shift

   if [ "${NO_MORE_ARGS}" = 1 ]; then
      eval source_${no_sources}=\"${arg}\"
      no_sources=$((${no_sources}+1))

      # make variable available for subscripts
      eval export source_${no_sources}
   else
      case "${arg}" in
         -a|--all)
            ALL=1
            ;;
         -v|--verbose)
            VERBOSE=1
            ;;
         -p|--parallel)
            PARALLEL=1
            ;;
         -h|--help)
            usage
            ;;
         --)
            NO_MORE_ARGS=1
            ;;
         *)
            eval source_${no_sources}=\"$arg\"
            no_sources=$(($no_sources+1))
            ;;
      esac
   fi

   i=$(($i+1))
done

# also export number of sources
export no_sources

#
# be really, really, really verbose
#
if [ "${VERBOSE}" = 1 ]; then
   set -x
fi

#
# Look, if we should take ALL sources
#
if [ "${ALL}" = 1 ]; then
   # reset everything specified before
   no_sources=0

   #
   # get entries from sources
   #
   cwd=$(pwd -P)
   ( cd "${CSOURCES}" && ls > "${TMP}" ); ret=$?

   [ "${ret}" -eq 0 ] || _exit_err "Listing of sources failed. Aborting."

   while read tmp; do
      eval source_${no_sources}=\"${tmp}\"
      no_sources=$((${no_sources}+1))
   done < "${TMP}"
fi

#
# Need at least ONE source to backup
#
if [ "${no_sources}" -lt 1 ]; then
   usage
else
   _techo "${HALF_VERSION}: Beginning backup using interval ${INTERVAL}"
fi

#
# Look for pre-exec command (general)
#
if [ -x "${CPREEXEC}" ]; then
   _techo "Executing ${CPREEXEC} ..."
   "${CPREEXEC}"; ret=$?
   _techo "Finished ${CPREEXEC} (return code: ${ret})."

   [ "${ret}" -eq 0 ] || _exit_err "${CPREEXEC} failed. Aborting"
fi

#
# check default configuration
#

D_FILE_INTERVAL="${CDEFAULTS}/intervals/${INTERVAL}"
D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)


#
# Let's do the backup
#
i=0
while [ "${i}" -lt "${no_sources}" ]; do

   #
   # Get current source
   #
   eval name=\"\$source_${i}\"
   i=$((${i}+1))

   export name

   #
   # start ourself, if we want parallel execution
   #
   if [ "${PARALLEL}" ]; then
      "$0" "${INTERVAL}" "${name}" &
      continue
   fi

   #
   # Start subshell for easy log editing
   #
   (
      #
      # Stderr to stdout, so we can produce nice logs
      #
      exec 2>&1

      #
      # Configuration
      #
      backup="${CSOURCES}/${name}"
      c_source="${backup}/source"
      c_dest="${backup}/destination"
      c_exclude="${backup}/exclude"
      c_verbose="${backup}/verbose"
      c_vverbose="${backup}/very_verbose"
      c_rsync_extra="${backup}/rsync_options"
      c_summary="${backup}/summary"
      c_pre_exec="${backup}/pre_exec"
      c_post_exec="${backup}/post_exec"
      f_incomplete="delete_incomplete"
      c_incomplete="${backup}/${f_incomplete}"
      c_remote_host="${backup}/remote_host"

      #
      # Marking backups: If we abort it's not removed => Backup is broken
      #
      c_marker=".ccollect-marker"

      #
      # Times
      #
      begin_s=$(date +%s)

      #
      # unset possible options
      #
      EXCLUDE=""
      RSYNC_EXTRA=""
      SUMMARY=""
      VERBOSE=""
      VVERBOSE=""
      DELETE_INCOMPLETE=""

      _techo "Beginning to backup"

      #
      # Standard configuration checks
      #
      if [ ! -e "${backup}" ]; then
         _exit_err "Source does not exist."
      fi

      #
      # configuration _must_ be a directory
      #
      if [ ! -d "${backup}" ]; then
         _exit_err "\"${name}\" is not a cconfig-directory. Skipping."
      fi

      #
      # first execute pre_exec, which may generate destination or other
      # parameters
      #
      if [ -x "${c_pre_exec}" ]; then
         _techo "Executing ${c_pre_exec} ..."
         "${c_pre_exec}"; ret="$?"
         _techo "Finished ${c_pre_exec} (return code ${ret})."

         if [ "${ret}" -ne 0 ]; then
            _exit_err "${c_pre_exec} failed. Skipping."
         fi
      fi

      #
      # interval definition: First try source specific, fallback to default
      #
      c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"

      if [ -z "${c_interval}" ]; then
         c_interval="${D_INTERVAL}"

         if [ -z "${c_interval}" ]; then
            _exit_err "No definition for interval \"${INTERVAL}\" found. Skipping."
         fi
      fi

      #
      # Source checks
      #
      if [ ! -f "${c_source}" ]; then
         _exit_err "Source description \"${c_source}\" is not a file. Skipping."
      else
         source=$(cat "${c_source}"); ret="$?"
         if [ "${ret}" -ne 0 ]; then
            _exit_err "Source ${c_source} is not readable. Skipping."
         fi
      fi

      #
      # Destination is a path
      #
      if [ ! -f "${c_dest}" ]; then
         _exit_err "Destination ${c_dest} is not a file. Skipping."
      else
         ddir=$(cat "${c_dest}"); ret="$?"
         if [ "${ret}" -ne 0 ]; then
            _exit_err "Destination ${c_dest} is not readable. Skipping."
         fi
      fi

      #
      # do we backup to a remote host? then set pre-cmd
      #
      if [ -f "${c_remote_host}" ]; then
         # adjust ls and co
         remote_host=$(cat "${c_remote_host}"); ret="$?"
         if [ "${ret}" -ne 0 ]; then
            _exit_err "Remote host file ${c_remote_host} exists, but is not readable. Skipping."
         fi
         destination="${remote_host}:${ddir}"
      else
         remote_host=""
         destination="${ddir}"
      fi
      export remote_host

      #
      # check for existence / use real name
      #
      ( pcmd cd "$ddir" ) || _exit_err "Cannot change to ${ddir}. Skipping."


      #
      # Check whether to delete incomplete backups
      #
      if [ -f "${c_incomplete}" -o -f "${CDEFAULTS}/${f_incomplete}" ]; then
         DELETE_INCOMPLETE="yes"
      fi

      # NEW method as of 0.6:
      # - insert ccollect default parameters
      # - insert options
      # - insert user options

      #
      # rsync standard options
      #

      set -- "$@" "--archive" "--delete" "--numeric-ids" "--relative" \
         "--delete-excluded" "--sparse"

      #
      # exclude list
      #
      if [ -f "${c_exclude}" ]; then
         set -- "$@" "--exclude-from=${c_exclude}"
      fi

      #
      # Output a summary
      #
      if [ -f "${c_summary}" ]; then
         set -- "$@" "--stats"
      fi

      #
      # Verbosity for rsync
      #
      if [ -f "${c_vverbose}" ]; then
         set -- "$@" "-vv"
      elif [ -f "${c_verbose}" ]; then
         set -- "$@" "-v"
      fi

      #
      # extra options for rsync provided by the user
      #
      if [ -f "${c_rsync_extra}" ]; then
         while read line; do
            set -- "$@" "$line"
         done < "${c_rsync_extra}"
      fi

      #
      # Check for incomplete backups
      #
      pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" > "${TMP}" 2>/dev/null

      i=0
      while read incomplete; do
         eval incomplete_$i=\"$(echo ${incomplete} | sed "s/\\.${c_marker}\$//")\"
         i=$(($i+1))
      done < "${TMP}"

      j=0
      while [ "$j" -lt "$i" ]; do
         eval realincomplete=\"\$incomplete_$j\"
         _techo "Incomplete backup: ${realincomplete}"
         if [ "${DELETE_INCOMPLETE}" = "yes" ]; then
            _techo "Deleting ${realincomplete} ..."
            pcmd rm $VVERBOSE -rf "${ddir}/${realincomplete}" || \
               _exit_err "Removing ${realincomplete} failed."
         fi
         j=$(($j+1))
      done

      #
      # check if maximum number of backups is reached, if so remove
      # use grep and ls -p so we only look at directories
      #
      count="$(pcmd ls -p1 "${ddir}" | grep "^${INTERVAL}\..*/\$" | wc -l \
         | sed 's/^ *//g')" || _exit_err "Counting backups failed"

      _techo "Existing backups: ${count} Total keeping backups: ${c_interval}"

      if [ "${count}" -ge "${c_interval}" ]; then
         substract=$((${c_interval} - 1))
         remove=$((${count} - ${substract}))
         _techo "Removing ${remove} backup(s)..."

         pcmd ls -p1 "$ddir" | grep "^${INTERVAL}\..*/\$" | \
            sort -n | head -n "${remove}" > "${TMP}" || \
            _exit_err "Listing old backups failed"

         i=0
         while read to_remove; do
            eval remove_$i=\"${to_remove}\"
            i=$(($i+1))
         done < "${TMP}"

         j=0
         while [ "$j" -lt "$i" ]; do
            eval to_remove=\"\$remove_$j\"
            _techo "Removing ${to_remove} ..."
            pcmd rm ${VVERBOSE} -rf "${ddir}/${to_remove}" || \
               _exit_err "Removing ${to_remove} failed."
            j=$(($j+1))
         done
      fi


      #
      # Check for backup directory to clone from: Always clone from the latest one!
      #
      # Use ls -1c instead of -1t, because last modification maybe the same on all
      # and metadate update (-c) is updated by rsync locally.
      #
      last_dir="$(pcmd ls -tcp1 "${ddir}" | grep '/$' | head -n 1)" || \
         _exit_err "Failed to list contents of ${ddir}."

      #
      # clone from old backup, if existing
      #
      if [ "${last_dir}" ]; then
         set -- "$@" "--link-dest=${ddir}/${last_dir}"
         _techo "Hard linking from ${last_dir}"
      fi


      # set time when we really begin to backup, not when we began to remove above
      destination_date=$(${CDATE})
      destination_dir="${ddir}/${INTERVAL}.${destination_date}.$$"
      destination_full="${destination}/${INTERVAL}.${destination_date}.$$"

      # give some info
      _techo "Beginning to backup, this may take some time..."

      _techo "Creating ${destination_dir} ..."
      pcmd mkdir ${VVERBOSE} "${destination_dir}" || \
         _exit_err "Creating ${destination_dir} failed. Skipping."

      #
      # added marking in 0.6 (and remove it, if successful later)
      #
      pcmd touch "${destination_dir}.${c_marker}"

      #
      # the rsync part
      #

      _techo "Transferring files..."
      rsync "$@" "${source}" "${destination_full}"; ret=$?

      #
      # remove marking here
      #
      pcmd rm "${destination_dir}.${c_marker}" || \
         _exit_err "Removing ${destination_dir}/${c_marker} failed."

      _techo "Finished backup (rsync return code: $ret)."
      if [ "${ret}" -ne 0 ]; then
         _techo "Warning: rsync exited non-zero, the backup may be broken (see rsync errors)."
      fi

      #
      # post_exec
      #
      if [ -x "${c_post_exec}" ]; then
         _techo "Executing ${c_post_exec} ..."
         "${c_post_exec}"; ret=$?
         _techo "Finished ${c_post_exec}."

         if [ ${ret} -ne 0 ]; then
            _exit_err "${c_post_exec} failed."
         fi
      fi

      # Calculation
      end_s=$(date +%s)

      full_seconds=$((${end_s} - ${begin_s}))
      hours=$((${full_seconds} / 3600))
      seconds=$((${full_seconds} - (${hours} * 3600)))
      minutes=$((${seconds} / 60))
      seconds=$((${seconds} - (${minutes} * 60)))

      _techo "Backup lasted: ${hours}:${minutes}:${seconds} (h:m:s)"

   ) | add_name
done

#
# Be a good parent and wait for our children, if they are running wild parallel
#
if [ "${PARALLEL}" ]; then
   _techo "Waiting for children to complete..."
   wait
fi

#
# Look for post-exec command (general)
#
if [ -x "${CPOSTEXEC}" ]; then
   _techo "Executing ${CPOSTEXEC} ..."
   "${CPOSTEXEC}"; ret=$?
   _techo "Finished ${CPOSTEXEC} (return code: ${ret})."

   if [ ${ret} -ne 0 ]; then
      _techo "${CPOSTEXEC} failed."
   fi
fi

rm -f "${TMP}"
_techo "Finished ${WE}"
@ -0,0 +1,683 @@
#!/bin/sh
#
# 2005-2009 Nico Schottelius (nico-ccollect at schottelius.org)
#
# This file is part of ccollect.
#
# ccollect is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# ccollect is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with ccollect. If not, see <http://www.gnu.org/licenses/>.
#
# Initially written for SyGroup (www.sygroup.ch)
# Date: Mon Nov 14 11:45:11 CET 2005

#
# Standard variables (stolen from cconf)
#
__pwd="$(pwd -P)"
__mydir="${0%/*}"; __abs_mydir="$(cd "$__mydir" && pwd -P)"
__myname=${0##*/}; __abs_myname="$__abs_mydir/$__myname"

#
# where to find our configuration and temporary file
#
CCOLLECT_CONF=${CCOLLECT_CONF:-/etc/ccollect}
CSOURCES=${CCOLLECT_CONF}/sources
CDEFAULTS=${CCOLLECT_CONF}/defaults
CPREEXEC="${CDEFAULTS}/pre_exec"
CPOSTEXEC="${CDEFAULTS}/post_exec"

TMP=$(mktemp "/tmp/${__myname}.XXXXXX")
VERSION=0.7.1
RELEASE="2009-02-02"
HALF_VERSION="ccollect ${VERSION}"
FULL_VERSION="ccollect ${VERSION} (${RELEASE})"

#TSORT="tc" ; NEWER="cnewer"
TSORT="t" ; NEWER="newer"

#
# CDATE: how we use it for naming of the archives
# DDATE: how the user should see it in our output (DISPLAY)
#
CDATE="date +%Y%m%d-%H%M"
DDATE="date +%Y-%m-%d-%H:%M:%S"

#
# unset parallel execution
#
PARALLEL=""

#
# catch signals
#
trap "rm -f \"${TMP}\"" 1 2 15

#
# Functions
#

# time displaying echo
_techo()
{
   echo "$(${DDATE}): $@"
}

# exit on error
_exit_err()
{
   _techo "$@"
   rm -f "${TMP}"
   exit 1
}

add_name()
{
   awk "{ print \"[${name}] \" \$0 }"
}

pcmd()
{
   if [ "$remote_host" ]; then
      ssh "$remote_host" "$@"
   else
      "$@"
   fi
}

#
# Version
#
display_version()
{
   echo "${FULL_VERSION}"
   exit 0
}

#
# Tell how to use us
#
usage()
{
   echo "${__myname}: <interval name> [args] <sources to backup>"
   echo ""
   echo " ccollect creates (pseudo) incremental backups"
   echo ""
   echo " -h, --help: Show this help screen"
   echo " -p, --parallel: Parallelise backup processes"
   echo " -a, --all: Backup all sources specified in ${CSOURCES}"
   echo " -v, --verbose: Be very verbose (uses set -x)"
   echo " -V, --version: Print version information"
   echo ""
   echo " This is version ${VERSION}, released on ${RELEASE}"
   echo " (the first version was written on 2005-12-05 by Nico Schottelius)."
   echo ""
   echo " Retrieve latest ccollect at http://unix.schottelius.org/ccollect/"
   exit 0
}

#
# Select interval if AUTO
#
# For this to work nicely, you have to choose interval names that sort nicely
# such as int1, int2, int3 or a_daily, b_weekly, c_monthly, etc.
#
auto_interval()
{
   if [ -d "${backup}/intervals" -a -n "$(ls "${backup}/intervals" 2>/dev/null)" ] ; then
      intervals_dir="${backup}/intervals"
   elif [ -d "${CDEFAULTS}/intervals" -a -n "$(ls "${CDEFAULTS}/intervals" 2>/dev/null)" ] ; then
      intervals_dir="${CDEFAULTS}/intervals"
   else
      _exit_err "No intervals are defined. Skipping."
   fi
   echo intervals_dir=${intervals_dir}

   trial_interval="$(ls -1r "${intervals_dir}/" | head -n 1)" || \
      _exit_err "Failed to list contents of ${intervals_dir}/."
   _techo "Considering interval ${trial_interval}"
   most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${trial_interval}.*/$" | head -n 1)" || \
      _exit_err "Failed to list contents of ${ddir}/."
   _techo " Most recent ${trial_interval}: '${most_recent}'"
   if [ -n "${most_recent}" ]; then
      no_intervals="$(ls -1 "${intervals_dir}/" | wc -l)"
      n=1
      while [ "${n}" -le "${no_intervals}" ]; do
         trial_interval="$(ls -p1 "${intervals_dir}/" | tail -n+${n} | head -n 1)"
         _techo "Considering interval '${trial_interval}'"
         c_interval="$(cat "${intervals_dir}/${trial_interval}" 2>/dev/null)"
         m=$((${n}+1))
         set -- "${ddir}" -maxdepth 1
         while [ "${m}" -le "${no_intervals}" ]; do
            interval_m="$(ls -1 "${intervals_dir}/" | tail -n+${m} | head -n 1)"
            most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${interval_m}\..*/$" | head -n 1)"
            _techo " Most recent ${interval_m}: '${most_recent}'"
            if [ -n "${most_recent}" ] ; then
               set -- "$@" -$NEWER "${ddir}/${most_recent}"
            fi
            m=$((${m}+1))
         done
         count=$(pcmd find "$@" -iname "${trial_interval}*" | wc -l)
         _techo " Found $count more recent backups of ${trial_interval} (limit: ${c_interval})"
         if [ "$count" -lt "${c_interval}" ] ; then
            break
         fi
         n=$((${n}+1))
      done
   fi
   export INTERVAL="${trial_interval}"
   D_FILE_INTERVAL="${intervals_dir}/${INTERVAL}"
   D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
}

#
# need at least interval and one source or --all
#
if [ $# -lt 2 ]; then
   if [ "$1" = "-V" -o "$1" = "--version" ]; then
      display_version
   else
      usage
   fi
fi

#
# check for configuraton directory
#
[ -d "${CCOLLECT_CONF}" ] || _exit_err "No configuration found in " \
   "\"${CCOLLECT_CONF}\" (is \$CCOLLECT_CONF properly set?)"

#
# Filter arguments
#
export INTERVAL="$1"; shift
i=1
no_sources=0

#
# Create source "array"
#
while [ "$#" -ge 1 ]; do
   eval arg=\"\$1\"; shift

   if [ "${NO_MORE_ARGS}" = 1 ]; then
      eval source_${no_sources}=\"${arg}\"
      no_sources=$((${no_sources}+1))

      # make variable available for subscripts
      eval export source_${no_sources}
   else
      case "${arg}" in
         -a|--all)
            ALL=1
            ;;
         -v|--verbose)
            VERBOSE=1
            ;;
         -p|--parallel)
            PARALLEL=1
            ;;
         -h|--help)
            usage
            ;;
         --)
            NO_MORE_ARGS=1
            ;;
         *)
            eval source_${no_sources}=\"$arg\"
            no_sources=$(($no_sources+1))
            ;;
      esac
   fi

   i=$(($i+1))
done

# also export number of sources
export no_sources

#
# be really, really, really verbose
#
if [ "${VERBOSE}" = 1 ]; then
   set -x
fi

#
# Look, if we should take ALL sources
#
if [ "${ALL}" = 1 ]; then
   # reset everything specified before
   no_sources=0

   #
   # get entries from sources
   #
   cwd=$(pwd -P)
   ( cd "${CSOURCES}" && ls > "${TMP}" ); ret=$?

   [ "${ret}" -eq 0 ] || _exit_err "Listing of sources failed. Aborting."

   while read tmp; do
      eval source_${no_sources}=\"${tmp}\"
      no_sources=$((${no_sources}+1))
   done < "${TMP}"
fi

#
# Need at least ONE source to backup
#
if [ "${no_sources}" -lt 1 ]; then
   usage
else
   _techo "${HALF_VERSION}: Beginning backup using interval ${INTERVAL}"
fi

#
# Look for pre-exec command (general)
#
if [ -x "${CPREEXEC}" ]; then
|
||||
_techo "Executing ${CPREEXEC} ..."
|
||||
"${CPREEXEC}"; ret=$?
|
||||
_techo "Finished ${CPREEXEC} (return code: ${ret})."
|
||||
|
||||
[ "${ret}" -eq 0 ] || _exit_err "${CPREEXEC} failed. Aborting"
|
||||
fi
|
||||
|
||||
#
|
||||
# check default configuration
|
||||
#
|
||||
|
||||
D_FILE_INTERVAL="${CDEFAULTS}/intervals/${INTERVAL}"
|
||||
D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
|
||||
|
||||
|
||||
#
|
||||
# Let's do the backup
|
||||
#
|
||||
i=0
|
||||
while [ "${i}" -lt "${no_sources}" ]; do
|
||||
|
||||
#
|
||||
# Get current source
|
||||
#
|
||||
eval name=\"\$source_${i}\"
|
||||
i=$((${i}+1))
|
||||
|
||||
export name
|
||||
|
||||
#
|
||||
# start ourself, if we want parallel execution
|
||||
#
|
||||
if [ "${PARALLEL}" ]; then
|
||||
"$0" "${INTERVAL}" "${name}" &
|
||||
continue
|
||||
fi
|
||||
|
||||
#
|
||||
# Start subshell for easy log editing
|
||||
#
|
||||
(
|
||||
#
|
||||
# Stderr to stdout, so we can produce nice logs
|
||||
#
|
||||
exec 2>&1
|
||||
|
||||
#
|
||||
# Configuration
|
||||
#
|
||||
backup="${CSOURCES}/${name}"
|
||||
c_source="${backup}/source"
|
||||
c_dest="${backup}/destination"
|
||||
c_exclude="${backup}/exclude"
|
||||
c_verbose="${backup}/verbose"
|
||||
c_vverbose="${backup}/very_verbose"
|
||||
c_rsync_extra="${backup}/rsync_options"
|
||||
c_summary="${backup}/summary"
|
||||
c_pre_exec="${backup}/pre_exec"
|
||||
c_post_exec="${backup}/post_exec"
|
||||
f_incomplete="delete_incomplete"
|
||||
c_incomplete="${backup}/${f_incomplete}"
|
||||
c_remote_host="${backup}/remote_host"
|
||||
|
||||
#
|
||||
# Marking backups: If we abort it's not removed => Backup is broken
|
||||
#
|
||||
c_marker=".ccollect-marker"
|
||||
|
||||
#
|
||||
# Times
|
||||
#
|
||||
begin_s=$(date +%s)
|
||||
|
||||
#
|
||||
# unset possible options
|
||||
#
|
||||
EXCLUDE=""
|
||||
RSYNC_EXTRA=""
|
||||
SUMMARY=""
|
||||
VERBOSE=""
|
||||
VVERBOSE=""
|
||||
DELETE_INCOMPLETE=""
|
||||
|
||||
_techo "Beginning to backup"
|
||||
|
||||
#
|
||||
# Standard configuration checks
|
||||
#
|
||||
if [ ! -e "${backup}" ]; then
|
||||
_exit_err "Source does not exist."
|
||||
fi
|
||||
|
||||
#
|
||||
# configuration _must_ be a directory
|
||||
#
|
||||
if [ ! -d "${backup}" ]; then
|
||||
_exit_err "\"${name}\" is not a cconfig-directory. Skipping."
|
||||
fi
|
||||
|
||||
#
|
||||
# first execute pre_exec, which may generate destination or other
|
||||
# parameters
|
||||
#
|
||||
if [ -x "${c_pre_exec}" ]; then
|
||||
_techo "Executing ${c_pre_exec} ..."
|
||||
"${c_pre_exec}"; ret="$?"
|
||||
_techo "Finished ${c_pre_exec} (return code ${ret})."
|
||||
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "${c_pre_exec} failed. Skipping."
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# Destination is a path
|
||||
#
|
||||
if [ ! -f "${c_dest}" ]; then
|
||||
_exit_err "Destination ${c_dest} is not a file. Skipping."
|
||||
else
|
||||
ddir=$(cat "${c_dest}"); ret="$?"
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "Destination ${c_dest} is not readable. Skipping."
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# interval definition: First try source specific, fallback to default
|
||||
#
|
||||
if [ ${INTERVAL} = "AUTO" ] ; then
|
||||
auto_interval
|
||||
_techo "Selected interval: '$INTERVAL'"
|
||||
fi
|
||||
c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"
|
||||
|
||||
if [ -z "${c_interval}" ]; then
|
||||
c_interval="${D_INTERVAL}"
|
||||
|
||||
if [ -z "${c_interval}" ]; then
|
||||
_exit_err "No definition for interval \"${INTERVAL}\" found. Skipping."
|
||||
fi
|
||||
fi
|
||||
|
||||
#
|
||||
# Source checks
|
||||
#
|
||||
if [ ! -f "${c_source}" ]; then
|
||||
_exit_err "Source description \"${c_source}\" is not a file. Skipping."
|
||||
else
|
||||
source=$(cat "${c_source}"); ret="$?"
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "Source ${c_source} is not readable. Skipping."
|
||||
fi
|
||||
fi
|
||||
# Verify source is up and accepting connections before deleting any old backups
|
||||
rsync "$source" >/dev/null || _exit_err "Source ${source} is not readable. Skipping."
|
||||
|
||||
#
|
||||
# do we backup to a remote host? then set pre-cmd
|
||||
#
|
||||
if [ -f "${c_remote_host}" ]; then
|
||||
# adjust ls and co
|
||||
remote_host=$(cat "${c_remote_host}"); ret="$?"
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_exit_err "Remote host file ${c_remote_host} exists, but is not readable. Skipping."
|
||||
fi
|
||||
destination="${remote_host}:${ddir}"
|
||||
else
|
||||
remote_host=""
|
||||
destination="${ddir}"
|
||||
fi
|
||||
export remote_host
|
||||
|
||||
#
|
||||
# check for existence / use real name
|
||||
#
|
||||
( pcmd cd "$ddir" ) || _exit_err "Cannot change to ${ddir}. Skipping."
|
||||
|
||||
|
||||
#
|
||||
# Check whether to delete incomplete backups
|
||||
#
|
||||
if [ -f "${c_incomplete}" -o -f "${CDEFAULTS}/${f_incomplete}" ]; then
|
||||
DELETE_INCOMPLETE="yes"
|
||||
fi
|
||||
|
||||
# NEW method as of 0.6:
|
||||
# - insert ccollect default parameters
|
||||
# - insert options
|
||||
# - insert user options
|
||||
|
||||
#
|
||||
# rsync standard options
|
||||
#
|
||||
|
||||
set -- "$@" "--archive" "--delete" "--numeric-ids" "--relative" \
|
||||
"--delete-excluded" "--sparse"
|
||||
|
||||
#
|
||||
# exclude list
|
||||
#
|
||||
if [ -f "${c_exclude}" ]; then
|
||||
set -- "$@" "--exclude-from=${c_exclude}"
|
||||
fi
|
||||
|
||||
#
|
||||
# Output a summary
|
||||
#
|
||||
if [ -f "${c_summary}" ]; then
|
||||
set -- "$@" "--stats"
|
||||
fi
|
||||
|
||||
#
|
||||
# Verbosity for rsync
|
||||
#
|
||||
if [ -f "${c_vverbose}" ]; then
|
||||
set -- "$@" "-vv"
|
||||
elif [ -f "${c_verbose}" ]; then
|
||||
set -- "$@" "-v"
|
||||
fi
|
||||
|
||||
#
|
||||
# extra options for rsync provided by the user
|
||||
#
|
||||
if [ -f "${c_rsync_extra}" ]; then
|
||||
while read line; do
|
||||
set -- "$@" "$line"
|
||||
done < "${c_rsync_extra}"
|
||||
fi
|
||||
|
||||
#
|
||||
# Check for incomplete backups
|
||||
#
|
||||
pcmd ls -1 "$ddir/${INTERVAL}"*".${c_marker}" > "${TMP}" 2>/dev/null
|
||||
|
||||
i=0
|
||||
while read incomplete; do
|
||||
eval incomplete_$i=\"$(echo ${incomplete} | sed "s/\\.${c_marker}\$//")\"
|
||||
i=$(($i+1))
|
||||
done < "${TMP}"
|
||||
|
||||
j=0
|
||||
while [ "$j" -lt "$i" ]; do
|
||||
eval realincomplete=\"\$incomplete_$j\"
|
||||
_techo "Incomplete backup: ${realincomplete}"
|
||||
if [ "${DELETE_INCOMPLETE}" = "yes" ]; then
|
||||
_techo "Deleting ${realincomplete} ..."
|
||||
pcmd rm $VVERBOSE -rf "${ddir}/${realincomplete}" || \
|
||||
_exit_err "Removing ${realincomplete} failed."
|
||||
fi
|
||||
j=$(($j+1))
|
||||
done
|
||||
|
||||
#
|
||||
# check if maximum number of backups is reached, if so remove
|
||||
# use grep and ls -p so we only look at directories
|
||||
#
|
||||
count="$(pcmd ls -p1 "${ddir}" | grep "^${INTERVAL}\..*/\$" | wc -l \
|
||||
| sed 's/^ *//g')" || _exit_err "Counting backups failed"
|
||||
|
||||
_techo "Existing backups: ${count} Total keeping backups: ${c_interval}"
|
||||
|
||||
if [ "${count}" -ge "${c_interval}" ]; then
|
||||
substract=$((${c_interval} - 1))
|
||||
remove=$((${count} - ${substract}))
|
||||
_techo "Removing ${remove} backup(s)..."
|
||||
|
||||
pcmd ls -${TSORT}p1r "$ddir" | grep "^${INTERVAL}\..*/\$" | \
|
||||
head -n "${remove}" > "${TMP}" || \
|
||||
_exit_err "Listing old backups failed"
|
||||
|
||||
i=0
|
||||
while read to_remove; do
|
||||
eval remove_$i=\"${to_remove}\"
|
||||
i=$(($i+1))
|
||||
done < "${TMP}"
|
||||
|
||||
j=0
|
||||
while [ "$j" -lt "$i" ]; do
|
||||
eval to_remove=\"\$remove_$j\"
|
||||
_techo "Removing ${to_remove} ..."
|
||||
pcmd rm ${VVERBOSE} -rf "${ddir}/${to_remove}" || \
|
||||
_exit_err "Removing ${to_remove} failed."
|
||||
j=$(($j+1))
|
||||
done
|
||||
fi
|
||||
|
||||
|
||||
#
|
||||
# Check for backup directory to clone from: Always clone from the latest one!
|
||||
#
|
||||
# Depending on your file system, you may want to sort on:
|
||||
# 1. mtime (modification time) with TSORT=t, or
|
||||
# 2. ctime (last change time, usually) with TSORT=tc
|
||||
last_dir="$(pcmd ls -${TSORT}p1 "${ddir}" | grep '/$' | head -n 1)" || \
|
||||
_exit_err "Failed to list contents of ${ddir}."
|
||||
|
||||
#
|
||||
# clone from old backup, if existing
|
||||
#
|
||||
if [ "${last_dir}" ]; then
|
||||
set -- "$@" "--link-dest=${ddir}/${last_dir}"
|
||||
_techo "Hard linking from ${last_dir}"
|
||||
fi
|
||||
|
||||
|
||||
# set time when we really begin to backup, not when we began to remove above
|
||||
destination_date=$(${CDATE})
|
||||
destination_dir="${ddir}/${INTERVAL}.${destination_date}.$$"
|
||||
destination_full="${destination}/${INTERVAL}.${destination_date}.$$"
|
||||
|
||||
# give some info
|
||||
_techo "Beginning to backup, this may take some time..."
|
||||
|
||||
_techo "Creating ${destination_dir} ..."
|
||||
pcmd mkdir ${VVERBOSE} "${destination_dir}" || \
|
||||
_exit_err "Creating ${destination_dir} failed. Skipping."
|
||||
|
||||
#
|
||||
# added marking in 0.6 (and remove it, if successful later)
|
||||
#
|
||||
pcmd touch "${destination_dir}.${c_marker}"
|
||||
|
||||
#
|
||||
# the rsync part
|
||||
#
|
||||
_techo "Transferring files..."
|
||||
rsync "$@" "${source}" "${destination_full}"; ret=$?
|
||||
# Correct the modification time:
|
||||
pcmd touch "${destination_dir}"
|
||||
|
||||
#
|
||||
# remove marking here
|
||||
#
|
||||
if [ "$ret" -ne 12 ] ; then
|
||||
pcmd rm "${destination_dir}.${c_marker}" || \
|
||||
_exit_err "Removing ${destination_dir}/${c_marker} failed."
|
||||
fi
|
||||
|
||||
_techo "Finished backup (rsync return code: $ret)."
|
||||
if [ "${ret}" -ne 0 ]; then
|
||||
_techo "Warning: rsync exited non-zero, the backup may be broken (see rsync errors)."
|
||||
fi
|
||||
|
||||
#
|
||||
# post_exec
|
||||
#
|
||||
if [ -x "${c_post_exec}" ]; then
|
||||
_techo "Executing ${c_post_exec} ..."
|
||||
"${c_post_exec}"; ret=$?
|
||||
_techo "Finished ${c_post_exec}."
|
||||
|
||||
if [ ${ret} -ne 0 ]; then
|
||||
_exit_err "${c_post_exec} failed."
|
||||
fi
|
||||
fi
|
||||
|
||||
# Calculation
|
||||
end_s=$(date +%s)
|
||||
|
||||
full_seconds=$((${end_s} - ${begin_s}))
|
||||
hours=$((${full_seconds} / 3600))
|
||||
seconds=$((${full_seconds} - (${hours} * 3600)))
|
||||
minutes=$((${seconds} / 60))
|
||||
seconds=$((${seconds} - (${minutes} * 60)))
|
||||
|
||||
_techo "Backup lasted: ${hours}:${minutes}:${seconds} (h:m:s)"
|
||||
|
||||
) | add_name
|
||||
done
|
||||
|
||||
#
|
||||
# Be a good parent and wait for our children, if they are running wild parallel
|
||||
#
|
||||
if [ "${PARALLEL}" ]; then
|
||||
_techo "Waiting for children to complete..."
|
||||
wait
|
||||
fi
|
||||
|
||||
#
|
||||
# Look for post-exec command (general)
|
||||
#
|
||||
if [ -x "${CPOSTEXEC}" ]; then
|
||||
_techo "Executing ${CPOSTEXEC} ..."
|
||||
"${CPOSTEXEC}"; ret=$?
|
||||
_techo "Finished ${CPOSTEXEC} (return code: ${ret})."
|
||||
|
||||
if [ ${ret} -ne 0 ]; then
|
||||
_techo "${CPOSTEXEC} failed."
|
||||
fi
|
||||
fi
|
||||
|
||||
rm -f "${TMP}"
|
||||
_techo "Finished ${WE}"
|
||||
|
||||
# vim: set shiftwidth=3 tabstop=3 expandtab :
|
|
@@ -0,0 +1,17 @@
--- ccollect-0.7.1-c.sh 2009-05-24 21:39:43.000000000 -0700
+++ ccollect-0.7.1-d.sh 2009-05-24 21:47:09.000000000 -0700
@@ -492,12 +492,12 @@
       if [ "${count}" -ge "${c_interval}" ]; then
          substract=$((${c_interval} - 1))
          remove=$((${count} - ${substract}))
          _techo "Removing ${remove} backup(s)..."
 
-         pcmd ls -p1 "$ddir" | grep "^${INTERVAL}\..*/\$" | \
-            sort -n | head -n "${remove}" > "${TMP}" || \
+         pcmd ls -${TSORT}p1r "$ddir" | grep "^${INTERVAL}\..*/\$" | \
+            head -n "${remove}" > "${TMP}" || \
             _exit_err "Listing old backups failed"
 
          i=0
          while read to_remove; do
             eval remove_$i=\"${to_remove}\"

@@ -0,0 +1,19 @@
--- ccollect-0.7.1-d.sh 2009-05-24 21:47:09.000000000 -0700
+++ ccollect-0.7.1-e.sh 2009-05-24 22:18:16.000000000 -0700
@@ -560,12 +560,14 @@
       pcmd touch "${destination_dir}"
 
       #
      # remove marking here
       #
-      pcmd rm "${destination_dir}.${c_marker}" || \
-         _exit_err "Removing ${destination_dir}/${c_marker} failed."
+      if [ "$ret" -ne 12 ] ; then
+         pcmd rm "${destination_dir}.${c_marker}" || \
+            _exit_err "Removing ${destination_dir}/${c_marker} failed."
+      fi
 
       _techo "Finished backup (rsync return code: $ret)."
       if [ "${ret}" -ne 0 ]; then
          _techo "Warning: rsync exited non-zero, the backup may be broken (see rsync errors)."
       fi

@@ -0,0 +1,119 @@
--- ccollect-0.7.1-e.sh 2009-05-24 22:18:16.000000000 -0700
+++ ccollect-0.7.1-f.sh 2009-05-24 22:19:50.000000000 -0700
@@ -124,10 +124,64 @@
    echo " Retrieve latest ccollect at http://unix.schottelius.org/ccollect/"
    exit 0
 }
 
 #
+# Select interval if AUTO
+#
+# For this to work nicely, you have to choose interval names that sort nicely
+# such as int1, int2, int3 or a_daily, b_weekly, c_monthly, etc.
+#
+auto_interval()
+{
+   if [ -d "${backup}/intervals" -a -n "$(ls "${backup}/intervals" 2>/dev/null)" ] ; then
+      intervals_dir="${backup}/intervals"
+   elif [ -d "${CDEFAULTS}/intervals" -a -n "$(ls "${CDEFAULTS}/intervals" 2>/dev/null)" ] ; then
+      intervals_dir="${CDEFAULTS}/intervals"
+   else
+      _exit_err "No intervals are defined. Skipping."
+   fi
+   echo intervals_dir=${intervals_dir}
+
+   trial_interval="$(ls -1r "${intervals_dir}/" | head -n 1)" || \
+      _exit_err "Failed to list contents of ${intervals_dir}/."
+   _techo "Considering interval ${trial_interval}"
+   most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${trial_interval}.*/$" | head -n 1)" || \
+      _exit_err "Failed to list contents of ${ddir}/."
+   _techo " Most recent ${trial_interval}: '${most_recent}'"
+   if [ -n "${most_recent}" ]; then
+      no_intervals="$(ls -1 "${intervals_dir}/" | wc -l)"
+      n=1
+      while [ "${n}" -le "${no_intervals}" ]; do
+         trial_interval="$(ls -p1 "${intervals_dir}/" | tail -n+${n} | head -n 1)"
+         _techo "Considering interval '${trial_interval}'"
+         c_interval="$(cat "${intervals_dir}/${trial_interval}" 2>/dev/null)"
+         m=$((${n}+1))
+         set -- "${ddir}" -maxdepth 1
+         while [ "${m}" -le "${no_intervals}" ]; do
+            interval_m="$(ls -1 "${intervals_dir}/" | tail -n+${m} | head -n 1)"
+            most_recent="$(pcmd ls -${TSORT}p1 "${ddir}" | grep "^${interval_m}\..*/$" | head -n 1)"
+            _techo " Most recent ${interval_m}: '${most_recent}'"
+            if [ -n "${most_recent}" ] ; then
+               set -- "$@" -$NEWER "${ddir}/${most_recent}"
+            fi
+            m=$((${m}+1))
+         done
+         count=$(pcmd find "$@" -iname "${trial_interval}*" | wc -l)
+         _techo " Found $count more recent backups of ${trial_interval} (limit: ${c_interval})"
+         if [ "$count" -lt "${c_interval}" ] ; then
+            break
+         fi
+         n=$((${n}+1))
+      done
+   fi
+   export INTERVAL="${trial_interval}"
+   D_FILE_INTERVAL="${intervals_dir}/${INTERVAL}"
+   D_INTERVAL=$(cat "${D_FILE_INTERVAL}" 2>/dev/null)
+}
+
+#
 # need at least interval and one source or --all
 #
 if [ $# -lt 2 ]; then
    if [ "$1" = "-V" -o "$1" = "--version" ]; then
       display_version
@@ -344,12 +398,28 @@
             _exit_err "${c_pre_exec} failed. Skipping."
          fi
       fi
 
       #
+      # Destination is a path
+      #
+      if [ ! -f "${c_dest}" ]; then
+         _exit_err "Destination ${c_dest} is not a file. Skipping."
+      else
+         ddir=$(cat "${c_dest}"); ret="$?"
+         if [ "${ret}" -ne 0 ]; then
+            _exit_err "Destination ${c_dest} is not readable. Skipping."
+         fi
+      fi
+
+      #
       # interval definition: First try source specific, fallback to default
       #
+      if [ ${INTERVAL} = "AUTO" ] ; then
+         auto_interval
+         _techo "Selected interval: '$INTERVAL'"
+      fi
       c_interval="$(cat "${backup}/intervals/${INTERVAL}" 2>/dev/null)"
 
       if [ -z "${c_interval}" ]; then
          c_interval="${D_INTERVAL}"
 
@@ -371,22 +441,10 @@
       fi
       # Verify source is up and accepting connections before deleting any old backups
       rsync "$source" >/dev/null || _exit_err "Source ${source} is not readable. Skipping."
 
       #
-      # Destination is a path
-      #
-      if [ ! -f "${c_dest}" ]; then
-         _exit_err "Destination ${c_dest} is not a file. Skipping."
-      else
-         ddir=$(cat "${c_dest}"); ret="$?"
-         if [ "${ret}" -ne 0 ]; then
-            _exit_err "Destination ${c_dest} is not readable. Skipping."
-         fi
-      fi
-
-      #
       # do we backup to a remote host? then set pre-cmd
       #
       if [ -f "${c_remote_host}" ]; then
          # adjust ls and co
          remote_host=$(cat "${c_remote_host}"); ret="$?"

@@ -0,0 +1,14 @@
31c31,41
< logdir="${LOGCONF}/destination"
---
> c_dest="${LOGCONF}/destination"
>
> if [ ! -f ${c_dest} ]; then
>    _exit_err "Destination ${c_dest} is not a file. Skipping."
> else
>    logdir=$(cat "${c_dest}"); ret="$?"
>    if [ "${ret}" -ne 0 ]; then
>       _exit_err "Destination ${c_dest} is not readable. Skipping."
>    fi
> fi
>
@@ -0,0 +1,26 @@
#!/bin/sh
#
# 2007 Daniel Aubry
# 2008 Nico Schottelius (added minimal header)
#
# Copying license: GPL2-only
#

# TODO:
# add variables, add copying, add configuration

if [ ! -e /tmp/ccollect-stats.lock ]
then
   touch /tmp/ccollect-stats.lock

   # changes after license clarification
   # for dest in /etc/ccollect/sources/ -type f -name destination | while read line

   find /etc/ccollect/sources/*/destination | while read line
   do
      backupname=$(basename $(cat $line))
      echo "====[Backup: $backupname]====" | tee -a /var/log/backup.log
      du -sh "$(cat "$line")"/* | tee -a /var/log/backup.log
   done
   rm /tmp/ccollect-stats.lock
fi
@@ -0,0 +1,72 @@
#!/bin/bash

# backup directory
BACKUP_DIR="/mnt"

# ccollect_logwrapper script
CCOLLECT_LOGWRAPPER="./ccollect_logwrapper.sh"

# determine the most recent backup in the backup directory
# for the daily, weekly, monthly and yearly groups
DATE_DAILY=` ls $BACKUP_DIR | grep daily   | sort -r | sed -e'2,$d' | cut -f 2 -d.`
DATE_WEEKLY=` ls $BACKUP_DIR | grep weekly  | sort -r | sed -e'2,$d' | cut -f 2 -d.`
DATE_MONTHLY=`ls $BACKUP_DIR | grep monthly | sort -r | sed -e'2,$d' | cut -f 2 -d.`
DATE_YEARLY=` ls $BACKUP_DIR | grep yearly  | sort -r | sed -e'2,$d' | cut -f 2 -d.`

# if a group has no backup yet, fall back to an "old date"
if [ -z "$DATE_DAILY" ] ; then DATE_DAILY="20000101-0101" ; fi
if [ -z "$DATE_WEEKLY" ] ; then DATE_WEEKLY="20000101-0101" ; fi
if [ -z "$DATE_MONTHLY" ] ; then DATE_MONTHLY="20000101-0101" ; fi
if [ -z "$DATE_YEARLY" ] ; then DATE_YEARLY="20000101-0101" ; fi

# current date; set before the echo below, so it is not empty
DATE_CUR=`date "+%Y-%m-%d %T"`

echo current: $DATE_CUR
echo last daily: $DATE_DAILY
echo last weekly: $DATE_WEEKLY
echo last monthly: $DATE_MONTHLY
echo last yearly: $DATE_YEARLY

# convert the dates into a format date(1) understands
# caution: this needs bash - it does not work with plain sh!
# alternatively, convert with expr...

DATE_DAILY=${DATE_DAILY:0:4}-${DATE_DAILY:4:2}-${DATE_DAILY:6:2}" "${DATE_DAILY:9:2}:${DATE_DAILY:11:2}:00
DATE_WEEKLY=${DATE_WEEKLY:0:4}-${DATE_WEEKLY:4:2}-${DATE_WEEKLY:6:2}" "${DATE_WEEKLY:9:2}:${DATE_WEEKLY:11:2}:00
DATE_MONTHLY=${DATE_MONTHLY:0:4}-${DATE_MONTHLY:4:2}-${DATE_MONTHLY:6:2}" "${DATE_MONTHLY:9:2}:${DATE_MONTHLY:11:2}:00
DATE_YEARLY=${DATE_YEARLY:0:4}-${DATE_YEARLY:4:2}-${DATE_YEARLY:6:2}" "${DATE_YEARLY:9:2}:${DATE_YEARLY:11:2}:00

# run backups as needed

if [ `date --date "$DATE_YEARLY" +%Y` -ne `date --date "$DATE_CUR" +%Y` ]
then

   # create the yearly backup
   echo yearly backup started
   source $CCOLLECT_LOGWRAPPER -a yearly

elif [ `date --date "$DATE_MONTHLY" +%Y%m` -ne `date --date "$DATE_CUR" +%Y%m` ]
then

   # create the monthly backup
   echo monthly backup started
   source $CCOLLECT_LOGWRAPPER -a monthly

elif [ `date --date "$DATE_WEEKLY" +%Y%W` -ne `date --date "$DATE_CUR" +%Y%W` ]
then

   # create the weekly backup
   echo weekly backup started
   source $CCOLLECT_LOGWRAPPER -a weekly

elif [ `date --date "$DATE_DAILY" +%Y%j` -ne `date --date "$DATE_CUR" +%Y%j` ]
then

   # create the daily backup
   echo daily backup started
   source $CCOLLECT_LOGWRAPPER -a daily

else

   # nothing to do
   echo nothing to do

fi
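The comment above notes that `${VAR:offset:length}` slicing is a bashism. As a hedged sketch (variable names are illustrative, not part of the script), the same `YYYYMMDD-HHMM` stamp can be reformatted in plain POSIX sh with cut:

```shell
# Reformat a 'YYYYMMDD-HHMM' stamp into 'YYYY-MM-DD HH:MM:00'
# without bash substring expansion (POSIX sh sketch, names hypothetical).
stamp="20240131-0745"
y=$(echo "$stamp" | cut -c1-4)     # year
mo=$(echo "$stamp" | cut -c5-6)    # month
d=$(echo "$stamp" | cut -c7-8)     # day
h=$(echo "$stamp" | cut -c10-11)   # hour (character 9 is the '-')
mi=$(echo "$stamp" | cut -c12-13)  # minute
echo "$y-$mo-$d $h:$mi:00"         # prints: 2024-01-31 07:45:00
```

This would let the wrapper keep its `#!/bin/sh` portability instead of requiring bash for the conversion step.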
doc/CHANGES
@@ -1,20 +0,0 @@
0.3.1 to 0.3.2:
   * ccollect now prints the start time, end time and duration of the backup

0.3 to 0.3.1:
   * added support for printing a summary
   * some cosmetic changes

0.2 to 0.3:
   * added "very_verbose"
   * normal "verbose" is now less verbose
   * added general 'pre_exec' and 'post_exec' support
   * added source specific 'pre_exec' and 'post_exec' support

0.1 to 0.2:
   * Added plausibility check
   * Updated and made documentation readable
   * implemented verbose option
   * Fixed double exclude parameter bug
   * Added much better documentation (asciidoc)
   * added rsync extra parameter option
@@ -0,0 +1,37 @@
Hello Hacker,

I really appreciate your interest in hacking this software, but
I am kind of critical when seeing patches. Thus I created this
file to give you some hints about my thinking quirks.


Submitting patches
------------------
Make my life easier, make your life easier: use a version control system (vcs).
For this software the preferred vcs is git. Clone the latest repo, create
a new local branch (git checkout -b <branchname>) and write down your ideas.

When you're done, push all your work out to some public repo and drop a
mail to the mailinglist describing what you did and where to get it.


Introduce a feature or change behaviour
---------------------------------------
Uhh, fancy! You have had a great idea; then it's time to change
the major version, so others know that something changed.

If the configuration format is changed, add a script to tools/
to allow users to upgrade their configuration to this major version.

And now comes the most difficult part: add documentation. Nobody
benefits from your cool feature if it is not known. I know, writing
documentation is not so much fun, but you also expect good documentation
for this software, don't you?


If you think my thinking quirks must be corrected
-------------------------------------------------
See above ("Submitting patches") and submit a patch to this file.


Thanks for reading.
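The clone/branch/push workflow described under "Submitting patches" can be sketched as follows; the repository name, branch name and remote are placeholders, not project conventions:

```shell
# Sketch of the suggested workflow (all names are placeholders).
set -e
workdir=$(mktemp -d)
git init -q "$workdir/ccollect"    # stand-in for: git clone <latest repo>
cd "$workdir/ccollect"
git checkout -q -b my-feature      # create a new local branch for your ideas
# ... hack, commit ...
# git push <your-public-repo> my-feature   # then announce it on the mailing list
git symbolic-ref --short HEAD      # prints: my-feature
```

The point of the local branch is that your work stays separate from master until it has been reviewed on the mailing list.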