When you enable zero-downtime deployments on Laravel Forge, Forge typically turns your app’s storage directory into a symlink that points to a persistent shared location (so releases can be swapped without losing uploads, caches, and other runtime data). That is a good default for deployments, but it can create a surprising side effect in backup tooling.
If, like me, you use spatie/laravel-backup (one of the best packages out there :-) ), you should know that it ships with `follow_links => false` by default, so anything that lives behind that symlink will be skipped. In practice, that can mean parts of `storage` you expected to be included, such as user uploads under `storage/app/public` or other assets, do not end up in the archive.
## What changed: storage became a symlink
A classic Laravel setup has a real directory at `your-project/storage`. With Forge zero-downtime deployments, you usually get:

- each release lives in its own folder
- the `current` symlink is switched to point to the new release
- `current/storage` -> `/home/forge/.../shared/storage` (itself a symlink)
This is great operationally, but now your backup job is no longer looking at a normal directory. It is looking at a symlink.
spatie/laravel-backup uses a filesystem crawler to collect files. The follow_links setting controls whether it should traverse symlinks.
- If `follow_links` is false, the crawler will typically not descend into symlinked directories.
- If your `storage` is a symlink, then "back up the project folder" may not include the data behind that link.
Many teams keep `follow_links` disabled on purpose. It is a safety net that prevents:

- accidentally backing up huge folders mounted elsewhere
- recursive loops due to misconfigured symlinks
- duplicate data if multiple paths lead to the same target

So the "safe" setting can become "surprising" when your deployment strategy introduces new symlinks.
## How to check you are affected
If you are using zero-downtime deployments, check your recent backups to see whether they contain all the files you need. This is a good exercise to do at least once a year anyway: you want certainty about what your backups contain before it is too late.
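A quick way to confirm the layout is to check whether `storage` is actually a symlink. The snippet below simulates Forge's shared-storage layout in a temp directory just to demonstrate the commands; on your server you would point `$app` at your real site path (e.g. `/home/forge/your-site`):

```shell
# Simulate Forge's layout in a temp dir (replace $app with your real site path).
app=$(mktemp -d)
mkdir -p "$app/shared/storage" "$app/current"
ln -s "$app/shared/storage" "$app/current/storage"

# The actual check you would run on the server:
if [ -L "$app/current/storage" ]; then
    echo "storage is a symlink -> $(readlink "$app/current/storage")"
else
    echo "storage is a regular directory"
fi

rm -rf "$app"
```

To inspect an actual backup archive, something like `unzip -l /path/to/backup.zip | grep "storage/app/public"` will quickly tell you whether your uploads made it in.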
## Solution: explicitly include `storage_path()`
Instead of relying on "include the project root", explicitly include the real storage path in `config/backup.php`:

```php
'include' => [
    base_path(),
    storage_path(), // <--- Add this line
],
```
Conceptually:

- Keep `follow_links => false`
- Add `storage_path()` (or the shared storage path) to the backup include list
Why this is attractive:
- you keep symlink traversal disabled globally
- you precisely include the one symlinked directory you actually need
- you avoid pulling in other symlinked targets that might exist in the repo
## Alternatives
### Enable `follow_links` and exclude what you don't need
If your setup includes multiple symlinks or complex directory structures, enabling follow_links => true can simplify the process. The key here is discipline: carefully exclude any directories that shouldn’t be backed up, such as vendor files, compiled assets, or ephemeral storage. This approach “just works” for many teams, especially when deployment strategies evolve frequently or involve multiple shared directories. However, it demands attention to detail to avoid backing up unnecessary or duplicate data.
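As a sketch, that could look like the fragment below (it follows the package's standard `source.files` config layout; the exact exclude paths are examples and depend on your app):

```php
// config/backup.php — a sketch; adjust the exclude list to your app
'source' => [
    'files' => [
        'include' => [
            base_path(),
        ],
        'exclude' => [
            base_path('vendor'),
            base_path('node_modules'),
            storage_path('framework'), // ephemeral caches, sessions, views
        ],
        'follow_links' => true, // symlinked storage is now traversed
    ],
],
```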
### Narrow the backup scope
For a leaner, more focused approach, consider backing up only what truly matters: the database, the persistent `shared/storage` directory (where user uploads live), and any other stateful folders that can't be reproduced from code. By skipping vendor code, compiled assets, and other reproducible directories, you reduce backup size and complexity while ensuring critical data is never missed. This method is particularly effective for larger applications, where every megabyte and every second of backup time counts.
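A minimal sketch of that narrower scope might look like this (the paths are illustrative; the database itself is dumped via the package's separate `databases` setting, not the file include list):

```php
// config/backup.php — narrow-scope sketch: include only stateful paths
'source' => [
    'files' => [
        'include' => [
            storage_path('app'), // user uploads and other persistent files
        ],
        'exclude' => [
            storage_path('app/backups'), // don't archive the backups themselves
        ],
    ],
    // The database is dumped separately via 'databases' => ['mysql'], etc.
],
```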