To simplify it: a file is a bunch of data written to clusters on the disk, and those clusters can be almost anywhere. The File Allocation Table (the same FAT acronym you might recognize from FAT16 and FAT32) keeps track of which of the many thousands or millions of clusters belong to which file. In effect it says "this file uses this cluster here, then that cluster there, then that one" and so on, chaining them together.
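To make that concrete, here's a toy sketch of a FAT as a lookup table you follow from cluster to cluster. The cluster numbers and end-of-chain marker are made up for illustration; a real FAT entry also encodes things like "free" and "bad cluster".

```python
END_OF_CHAIN = -1   # stand-in for the real FAT end-of-chain marker

# fat[n] = the cluster that comes after cluster n in some file's chain
fat = {5: 9, 9: 2, 2: END_OF_CHAIN}   # a file whose first cluster is 5

def cluster_chain(fat, start):
    """Walk the table from a file's first cluster to its last."""
    chain, cluster = [], start
    while cluster != END_OF_CHAIN:
        chain.append(cluster)
        cluster = fat[cluster]
    return chain

print(cluster_chain(fat, 5))   # [5, 9, 2] -- the clusters needn't be adjacent
```

Notice the file's data order ([5, 9, 2]) has nothing to do with the clusters' physical order on disk; only the table knows the sequence.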

A fast format (or a deletion) destroys the file allocation table. That means all the links to those thousands or millions of clusters are gone. The data is still sitting in the clusters, but it's no longer associated with anything, so it all gets lumped in with the "free space" and will be over-written whenever a new file is assigned to those clusters.
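A quick toy demonstration of that point (all names and numbers invented): a fast format wipes the map, not the data, and everything the map no longer claims looks like free space.

```python
# cluster number -> raw bytes actually stored on disk
disk = {5: b"Dear ", 9: b"diary", 2: b"..."}
# the allocation table: which cluster follows which in the file's chain
fat = {5: 9, 9: 2, 2: None}

fat.clear()   # "fast format": only the table is destroyed

# The bytes themselves are untouched...
assert disk[5] == b"Dear "
# ...but with no chain pointing at them, every cluster now reads as free.
free_space = sorted(c for c in disk if c not in fat)
print(free_space)   # [2, 5, 9] -- all fair game to be over-written
```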

So file-recovery tools (or data-recovery firms) scan the "free space" clusters for recognizable file contents, extract them, and re-save them.
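One common trick those tools use is signature scanning ("file carving"): look for known magic bytes instead of relying on the destroyed table. The JPEG start/end markers below are real; everything else is a deliberately simplified sketch, since real carvers handle fragmented and truncated files far more carefully.

```python
# JPEG files start with an SOI marker and end with an EOI marker.
JPEG_START, JPEG_END = b"\xff\xd8\xff", b"\xff\xd9"

def carve_jpegs(raw: bytes):
    """Return every byte run in raw that looks like a complete JPEG."""
    found, pos = [], 0
    while (start := raw.find(JPEG_START, pos)) != -1:
        end = raw.find(JPEG_END, start)
        if end == -1:
            break                        # header but no footer: truncated file
        found.append(raw[start:end + 2])
        pos = end + 2
    return found

# Pretend this is what a scan of the free-space clusters returned:
free_space = b"junk" + JPEG_START + b"pixels" + JPEG_END + b"more junk"
print(carve_jpegs(free_space))   # one recovered "JPEG", junk ignored
```

This also shows why fragmentation hurts recovery: if the clusters between the header and footer were over-written or scattered, the carved-out bytes come back corrupted.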

The problem is when you over-write some or all of the clusters. You get corruption in the files, missing parts, or, if not enough clusters are left intact by the time recovery is attempted, the file can be lost entirely (assuming it wasn't already lost when re-installing the operating system).

These clusters could be used for swap files, for temporary files, for any number of things. The key is to minimize use of that hard drive in every way until you get the files "undeleted"... otherwise, every time you do something that touches the drive, you run the risk of over-writing more and more of the files you want to save.