You may need to securely erase a file with the Linux dd command. When you delete a file with the rm command, only the pointer to the file is removed; the data still exists on the file system and can be recovered. Some additional steps are needed to make sure the file is permanently deleted: first overwrite the file with random data using dd, then delete it with rm as usual.
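As a minimal sketch of that two-step process (the file name secret.txt is a placeholder, and this assumes GNU stat for reading the size):

```shell
# Create a demo file (hypothetical name) so the sketch is self-contained.
printf 'top secret data\n' > secret.txt

# Step 1: overwrite the file's contents in place with random data.
SIZE=$(stat -c %s secret.txt)      # file size in bytes
dd if=/dev/urandom of=secret.txt bs="$SIZE" count=1 conv=notrunc

# Step 2: now remove the directory entry as usual.
rm secret.txt
```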
This reminds me of “The Great Zero Challenge”, where data recovery companies were challenged to restore a hard drive zeroed out with the dd command and none accepted. Based on that, I think the dd command is a very good way to securely erase important files and render them unrecoverable.
Command to securely wipe a file (adjust the bs=## parameter):
bs=## — how much random data to write to the file, in bytes. This must match the size of the file being overwritten; use the file size in bytes as shown by ls -l.
count=1 — how many times to repeat the write; leave this at 1. If you choose count=2, a 27-byte file will grow to a 54-byte file. To do multiple passes, use a loop instead: for ((n=1;n<4;n++)); do COMMAND; done
conv=notrunc — without this parameter, dd will truncate the rest of the file if you write fewer bytes than the file size. Having it is a failsafe: you can run the command again and adjust bs=## to match the file size.
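Putting the three parameters together, the command looks like this (myfile.txt and bs=27 are placeholders; match bs to your own file's size from ls -l):

```shell
# Overwrite myfile.txt (hypothetical name) with 27 bytes of random data:
#   bs=27        -> must equal the file size in bytes (see ls -l)
#   count=1      -> write once, so the file does not grow
#   conv=notrunc -> never truncate the file, even if bs is smaller than the file
dd if=/dev/urandom of=myfile.txt bs=27 count=1 conv=notrunc

# For three passes, loop the command instead of raising count
# (count=3 would grow the file to 81 bytes):
for ((n=1; n<4; n++)); do
    dd if=/dev/urandom of=myfile.txt bs=27 count=1 conv=notrunc
done
```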
1. Contents of the file: “Hello this is my test file”
2. The first time I run the dd command in the example below, I use the bs=20 parameter (write 20 bytes of random data into the file). The file size is 27 bytes, so only the first 20 bytes are overwritten with random data, and we can still see a portion of the original text: ‘t file’.
3. The second time I run the command, I specify bs=27, which matches the size of the file in bytes as shown by ls -l. This time the whole file is overwritten with random data.
4. When running the dd command, you have to specify how much random data to write to the file in bytes, using bs=## with the size shown by ls -l.
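The steps above can be reproduced with a throwaway test file (test.txt is a hypothetical name; the string is 26 characters plus a newline, 27 bytes total):

```shell
# Step 1: create the 27-byte test file and confirm its size.
printf 'Hello this is my test file\n' > test.txt
ls -l test.txt        # size column shows 27

# Step 2: bs=20 overwrites only the first 20 bytes;
# the tail of the original text ("t file") survives.
dd if=/dev/urandom of=test.txt bs=20 count=1 conv=notrunc
cat test.txt          # random garbage followed by "t file"

# Step 3: bs=27 matches the full file size, so every byte is overwritten.
dd if=/dev/urandom of=test.txt bs=27 count=1 conv=notrunc
ls -l test.txt        # still 27 bytes; contents are now entirely random
```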