Appendix 

Encryption and key management 

This section follows "Encryption" in the "Overview and Concepts" section.

The user's password is used to derive two 192-bit keys (the "L" and "R" keys) via PBKDF2-SHA512, with hard-coded parameters for repeatable output.

  • The L-key is used to log in to the Authentication Role server in place of the real password; the server stores only a bcrypt(sha512) hash of this L-key.
  • The R-key never leaves the client, and is used to encrypt secret keys stored within the user's profile on the server.

This means that one password can be used for all client-side account operations, while preventing servers from uncovering client-only secrets.
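As an illustration, the following Python sketch derives the two keys from a single password via PBKDF2-SHA512. The salt and iteration count here are placeholders; Comet's real hard-coded parameters are not published in this documentation.

import hashlib

def derive_keys(password: str) -> tuple:
    # Placeholder parameters: Comet's actual hard-coded salt and
    # iteration count are not published here.
    SALT = b"example-fixed-salt"
    ITERATIONS = 100000
    # Derive 48 bytes (384 bits) in one pass, then split into two 192-bit keys.
    material = hashlib.pbkdf2_hmac("sha512", password.encode("utf-8"),
                                   SALT, ITERATIONS, dklen=48)
    return material[:24], material[24:]

l_key, r_key = derive_keys("hunter2")
# l_key is sent to the Authentication Role server in place of the password;
# r_key never leaves the client.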

When Comet sets up a Storage Vault for the first time, it generates two high-entropy random keys (the 256-bit "A" and 128-bit "E" keys). All user data in the Storage Vault is stored encrypted with the A-key using AES-256 in CTR mode, and authenticated using Poly1305 in AEAD (encrypt-then-MAC) mode.

The permanent A-key is stored inside the Storage Vault, encrypted with the E-key. The E-key is then encrypted with the R-key and stored in the user's profile on the Authentication Role server. When a backup is performed, the client uses its password to derive the R-key, uses the R-key to decrypt the E-key from its profile, and uses the E-key to decrypt the A-key for data storage. This extra level of indirection enables some key rotation scenarios: a new E-key can be generated without needing to re-encrypt all the data in the Storage Vault.
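The following Python sketch (using the third-party "cryptography" package) models this indirection: rotating the E-key re-wraps only the 32-byte A-key, not the bulk data. It is a simplified illustration only; Comet's actual key-wrapping construction, nonce handling and authentication are not shown here.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes_ctr(key: bytes, nonce: bytes, data: bytes) -> bytes:
    # CTR mode is symmetric: the same operation encrypts and decrypts.
    return Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor().update(data)

a_key = os.urandom(32)                          # 256-bit A-key for vault data
e_key, nonce = os.urandom(16), os.urandom(16)   # 128-bit E-key wraps the A-key
wrapped_a_key = aes_ctr(e_key, nonce, a_key)

# E-key rotation: only the wrapped A-key is re-encrypted; the data
# encrypted under the A-key itself is untouched.
new_e_key, new_nonce = os.urandom(16), os.urandom(16)
plain_a_key = aes_ctr(e_key, nonce, wrapped_a_key)          # unwrap with old E-key
wrapped_a_key = aes_ctr(new_e_key, new_nonce, plain_a_key)  # re-wrap with new E-key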

If the Storage Vault is for a Storage Role bucket, a high-entropy random 128-bit PSK is used to gate access to the bucket. The Storage Role server stores only a bcrypt(sha512) hash of this PSK. The client encrypts this PSK with the R-key and stores it in the user's profile on the Authentication Role server.
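A minimal sketch of the bcrypt(sha512) pattern, using Python's "bcrypt" package. The exact encoding Comet applies between the SHA-512 step and bcrypt is an assumption here; hex encoding is shown because bcrypt rejects NUL bytes and uses at most the first 72 bytes of its input.

import bcrypt, hashlib, os

def prehash(secret: bytes) -> bytes:
    # Hex-encode the SHA-512 digest so bcrypt never sees a NUL byte.
    return hashlib.sha512(secret).hexdigest().encode("ascii")

psk = os.urandom(16)  # high-entropy random 128-bit PSK

# The Storage Role server stores only this hash, never the PSK itself.
stored_hash = bcrypt.hashpw(prehash(psk), bcrypt.gensalt())

# Later, a client proves knowledge of the PSK:
assert bcrypt.checkpw(prehash(psk), stored_hash)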

Compatibility events 

This section follows "Backward Compatibility" in the "Overview and Concepts" section.

The following compatibility events have occurred:

Version | Details | Upgrade compatible? | Downgrade compatible?
18.6.2 | A server-side encryption format changed for bucket access keys. | Yes | Partial. Accessing the bucket with Comet 18.5.5 (or later 18.5.x) will convert the key format to the backward-compatible format
17.9.3 | A metadata format was enforced. | Yes, but pre-existing Storage Vaults will not take advantage of the new features | No. Storage Vaults created with newer versions of Comet cannot be used in old versions of Comet
17.6.1 | A compression format was added. | Yes | Partial. If the new compression format was used, old versions of Comet will not be able to restore data
2.8.2 Beta | The encryption format changed. | Yes. Accounts are automatically upgraded upon login | No. Old versions of Comet will not understand secrets in the new format
2.3.0 Beta | The compression format changed. | Yes. New versions of Comet can still read old backed-up data | No. Old versions of Comet cannot read newly backed-up data
2.2.0 Beta | The license server address changed. | Yes | No. Old versions of Comet Server can no longer be used
2.0.0 Beta | The Storage Vault format changed. | Partial. Storage Vaults must be recreated | No. Old versions of Comet are unable to use new Storage Vaults
1.7.0 Beta | The Storage Vault format changed. | Yes. Comet will automatically upgrade Local Copy and Comet Server types, but S3 and SFTP types must be upgraded manually | No. Old versions of Comet are unable to use new Storage Vaults
1.0.0 Beta | Initial compatibility milestone | Yes | No
0.?.? Beta | No compatibility guarantees made in either direction. | n/a | n/a

Importing backup settings from other products 

Comet Backup supports importing settings from certain third-party backup products.

In all cases, only user configuration settings are read from the other product.

  • Existing backed-up data is not converted to Comet's format; you must back up the data again using Comet.
  • Historical logs are not imported.

Ahsay OBM/ACB 6.x or compatible 

Ahsay, OBM, and ACB are trademarks of Ahsay Systems Corporation Limited.

Comet can import settings from an installed version of Ahsay OBM/ACB 6.x, subject to the following notices and conditions:

  • Ahsay backupsets are imported as Comet Protected Items.
    • Backupset types
      • File
        • Source selection/deselection is supported.
        • Macro selections (Desktop, Documents, etc.) are not supported.
        • Inclusion/exclusion filters are not supported.
      • MySQL
      • Microsoft SQL Server
        • The "All instances" option is not supported. Your Ahsay backupset must have selected specific instances for backup.
      • Other backupset types are not supported.
    • Pre/post commands
      • Pre/post commands are imported as applying to the Protected Item, not to the Storage Vault nor the Schedule.
    • Retention policies
      • Basic retention policies (DAYS/JOBS -type) are supported.
      • Advanced retention policies will be treated as "keep forever".
    • Schedule
      • Daily, Weekly and Custom schedules are supported.
      • Monthly schedules for a specific day of the month are supported.
      • Monthly schedules for a variable day of the month (e.g. Second Tuesday, Last Weekday) are not supported.
    • "Extra Local Copy"
      • "Local Copy" will be imported as a new or existing Local Path Storage Vault in Comet.
      • The "Skip Offsite Backup" option is correctly imported.

CrashPlan 

CrashPlan and Code42 are trademarks of Code42 Software, Inc.

Comet can import settings from an installed version of Code42 CrashPlan, subject to the following notices and conditions:

  • CrashPlan backupsets are imported as Comet Protected Items.
    • Backupset types
      • File
        • Source selection/deselection is supported.
        • Regular expression exclusions are supported.
    • Retention policies
      • CrashPlan and Comet employ fundamentally different retention systems. Comet makes a best-effort attempt to translate the CrashPlan retention policy into a set of Comet retention rules, with some caveats:
        • Comet uses a minimum retention period based on the schedule window.
        • CrashPlan retention for the last week will be interpreted as multiple daily policies when the configured period is shorter than 24 hours.
        • CrashPlan retention for the last week will be interpreted as one or more weekly policies when the configured period is greater than 24 hours, depending on the configured value.
        • Retention for larger time periods will be configured as one or more daily, weekly or monthly rules depending on the configured value, spread across periods of a week, three months, a year, and indefinitely.
        • Weekly retention policies are treated as one backup per week, on the first day of the week, for the specified number of weeks.
        • Monthly retention policies are treated as one backup per month, on the first day of the month, for the specified number of months.
        • Indefinite retention policies are implemented as one backup per month, on the first day of the month, for over a million years.
    • Schedule
      • CrashPlan and Comet employ fundamentally different scheduling systems. Comet translates a CrashPlan schedule to multiple schedule rules, with some caveats:
        • CrashPlan allows a number of very small schedule windows, down to a minute. For the sake of performance, Comet enforces a minimum schedule window of 30 minutes for imported schedules.
        • Comet will generate a schedule rule for each time that a CrashPlan backup would have run in its specified window(s).
        • The "Skip if already running" option is automatically enabled for imported schedules, to prevent overlapping backup with small schedule windows.
    • Local destinations
      • If any local destinations are configured for CrashPlan, Comet will automatically import these using the same location on disk, backing up to a subdirectory within the specified path.
      • The folder containing Comet's local vault data will be labelled with a unique ID. If it is necessary to identify which folder belongs to Comet, we recommend checking the modification date of the folder. The Comet data folder can also be distinguished by the format of its unique ID: CrashPlan uses a numerical ID for its folder, while Comet uses a hyphenated GUID containing a mixture of numbers and letters.

Other products 

Future versions of Comet Backup will support importing configuration from other products.

Migrating user data 

It is possible to migrate user data to balance your storage requirements.

Migrating server-side user data to a different volume 

You can migrate user data in different ways.

Off-line server migration, without Spanning 

  1. Stop Comet Server
  2. Move files to the new volume
  3. Update disk path in Comet Server's configuration file
  4. Start Comet Server

Gradual, on-line server migration, without Spanning 

  1. Use rsync/robocopy/rclone/... to synchronize the current drive contents to the new drive (see the example command after this list).
  2. Repeat step #1 until there is very little data change in a single sync run
  3. Stop Comet Server
  4. Perform one more sync pass
  5. Update disk path in Comet Server's configuration file
  6. Start Comet Server
  7. Delete all content from the old disk volume
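For example, on Linux the synchronization passes (steps 1, 2 and 4) might look like the following; the volume paths are placeholders for your actual mount points:

rsync -a --delete /mnt/old-volume/ /mnt/new-volume/

Each run copies only the differences, so repeated passes become progressively faster.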

Gradual, on-line server migration with Spanning 

If multiple volumes are Spanned together in the same Storage Role Comet Server, then you can move files freely. Comet Server will instantly recognize the changes, because it checks every attached volume when looking for a chunk.

  1. Configure Comet Server to Span between both the old and new volumes

    • Newly uploaded data will be written in a balanced way to both volumes.
  2. Live migrate data from old volume to the new volume

    • You can move data between the two volumes on-the-fly while Comet is running. You can even move data for an in-use Storage Vault. This is a safe operation.
  3. Stop Comet Server; move any remaining data that was written during step #2; disable the Spanning configuration; and then restart Comet Server

Migrating user data to a different Comet Server 

  1. Create a new bucket on a different Comet Server

    • You can either manually create the bucket, or Request a bucket on a new target server
  2. Copy the file content to the new server

  3. Edit the user's profile in the Auth Role Comet Server to change the address, bucket, and bucket-key that it points to

    • You should first ensure that this user is not running any backup jobs to the original server.
    • You must take care to preserve the Encryption Key settings. The key absolutely must not change.
  4. Remove the file content from the original server

    • You should first ensure that this user is not running any restore jobs from the original server.

Migrating user data between Storage Vault types 

All Storage Vault types (e.g. Comet Server, Local Copy, SFTP, Amazon S3 etc) use the same on-disk layout. It is possible to follow the above steps for "Migrating user data to a different Comet Server" even when the old or new target is not a Comet Server.

For more information, please see the Seed Load section.

Multiple Comet Server instances 

It is possible to run multiple Comet Server instances on the same machine or IP address, by using a load balancer or frontend proxy software. All Comet Server communication is performed over HTTP / HTTPS / websockets, so applications such as nginx, Apache, HAProxy, Traefik or Caddy are all suitable for this purpose.

If you choose to do this, take care that the frontend proxy does not introduce additional buffering or timeouts that could interrupt the connection between Comet Backup and Comet Server.

For instance with nginx, the following configuration could be used as an example:

proxy_connect_timeout 3000;
proxy_send_timeout    3000;
proxy_read_timeout    3000;
client_body_timeout   3000;
proxy_buffering off;
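nginx also requires explicit configuration to pass websocket upgrade requests through to Comet Server. A minimal sketch follows; the upstream address is a placeholder for your Comet Server's real listen address:

location / {
    proxy_pass http://127.0.0.1:8060;

    # Required for Comet's live-connection websockets:
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}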

HAProxy introduces additional network timeouts that may prevent live-connection websockets from staying online. Because an HTTPS handshake involves multiple network round-trips, accidental disconnections may also decrease performance. You can adjust HAProxy's timeouts to prevent it from disconnecting long-running connections:

timeout connect 30s
timeout client 120s
timeout http-keep-alive 120s
timeout http-request 120s

Using Comet Backup behind a network proxy 

This feature requires Comet 17.11.x or later.

Comet Backup can be used behind an HTTP or SOCKS proxy.

Proxy settings are controlled by environment variables named HTTP_PROXY and HTTPS_PROXY (case insensitive).

Multiple programs use this configuration method, so the environment variable may already be present for other software on the machine.

On Windows, the environment variable should be set in the "System variables" section of the Environment Variables dialog, to ensure that the settings also apply to background services.

On Linux, environment variables can be set system-wide (e.g. in /etc/environment or /etc/profile.d/my-custom-proxy.sh), or for the root user running Comet Backup (e.g. in /root/.profile), or in your startup script for Comet Backup (e.g. in /etc/rc.local).

The HTTP_PROXY environment variable should be set to a string of the form https://username:password@my.proxy.host.com/ or http://my.proxy.host.com/ or socks5://username:password@my.proxy.host.com/. This matches the normal format on Linux; however, some Windows applications write this variable without the protocol. Please ensure that the protocol is present.
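For example (with a hypothetical proxy address), you could add the following line to /etc/environment on Linux:

HTTPS_PROXY=http://username:password@my.proxy.host.com/

Or, on Windows, run the following from an elevated command prompt to set a system-wide variable:

setx /M HTTPS_PROXY "http://username:password@my.proxy.host.com/"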

Your proxy software must support websockets in order for Comet's live-connection functionality to work correctly.

A future version of Comet will provide GUI settings to configure the network proxy.

Troubleshooting "tls: oversized record recieved" error 

The tls: oversized record received error may occur when making an HTTP proxy connection to an HTTPS proxy server (or vice-versa). Please double-check the environment variable and the proxy server URL.

Windows Event Log 

Comet Backup logs all job messages to the Windows Event Log. The log content is identical to the job log content seen in the Comet Server log browser, the Comet Backup history table, or the Comet Server API. This allows you to check for errors and ensure that jobs are running on time by monitoring the Windows Event Log.

However, please note that this only covers client-side jobs that actually run. For example, because "Missed Backup" job entries are generated server-side, they won't appear in the client's event log. It is therefore not feasible to use the Windows Event Log as a complete monitoring solution for your customer base.

The Comet Backup installer also logs some events that can be used to detect software installations or upgrades.

Event IDs 

Source | Event ID | Description | Available since
backup-service | any | Messages about installing and starting the Elevator service, the Pre-Logon Service (prior to 18.6.0), and the Delegate service (18.6.0 and later) | -
backup-tool.exe | 50 | Backup job started | 18.3.8 or later
backup-tool.exe | 51 | Backup job finished | 18.3.8 or later
backup-tool.exe | 52 | Backup job log message. The Event Log entry severity corresponds to the Comet log entry severity (Info/Warning/Error) | 18.3.8 or later
backup-tool.exe | 53 | Comet Backup installer has registered the backup-tool.exe Event Log source | 18.3.8 or later
backup-tool.exe | 54 | Comet Backup uninstaller has de-registered the backup-tool.exe Event Log source | 18.3.8 or later
backup-tool.exe | 55 | Message from the Comet Delegate service | 18.6.0 or later
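As a sketch of consuming these events programmatically, the following Python example (using the third-party pywin32 package) reads recent backup-tool.exe entries. The "Application" log name and the severity-bit masking are assumptions based on standard Event Log conventions rather than Comet documentation.

import win32evtlog  # third-party pywin32 package

# Read the Application event log on the local machine, newest entries first.
hand = win32evtlog.OpenEventLog(None, "Application")
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

events = win32evtlog.ReadEventLog(hand, flags, 0)
while events:
    for ev in events:
        event_id = ev.EventID & 0xFFFF  # mask off severity/facility bits
        if ev.SourceName == "backup-tool.exe" and event_id in (50, 51, 52):
            print(ev.TimeGenerated, event_id, ev.StringInserts)
    events = win32evtlog.ReadEventLog(hand, flags, 0)

win32evtlog.CloseEventLog(hand)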

Suppressing Before / After command failures 

When a command-line program displays some output, the output is either sent to stdout ("fd 1", for normal messages) or stderr ("fd 2", for error messages).

Comet will mark a Before/After command as a warning if there was content on stderr, or if the command had a non-zero exit code.

If you are certain that the command cannot fail, you can:

  • redirect stderr messages to go to stdout instead, by adding 2>&1 to the end of your command
  • override the command's exit code, by adding one of the following to the end of your command:
    • Windows: &exit 0
    • Linux and macOS: ; exit 0

The above information is not Comet-specific; more information about 2>&1 and stream redirection can be found online, e.g. https://support.microsoft.com/en-nz/help/110930/redirecting-error-messages-from-command-prompt-stderr-stdout .

Data validation 

There are three types of integrity verification in Comet:

Referential integrity 

Referential integrity means that for each snapshot, all its matching chunks exist; that all the chunks are indexed; and so on. This is verified client-side every time the app runs a retention pass, so you should ensure that retention passes run successfully from time to time.

Data file integrity at rest 

Data file integrity ensures that each file in the Storage Vault is readable and has not been corrupted at rest (e.g. hash mismatch / decrypt errors).

Comet stores files inside the Storage Vault data location as opaque, encrypted, compressed files. The filenames are the SHA256 hash of the file content. Comet automatically verifies file integrity client-side, every time a file is accessed during backup and restore operations (i.e. non-exhaustively) by calculating the SHA256 hash of the content and comparing it to the filename.

Corruption of files at rest is a rare scenario; it's unlikely you need to worry about this unless you are using local storage and you believe your disk drives are failing. However, for additional peace of mind, you can verify the integrity of the files on disk at any time, by comparing each filename to the SHA256 hash of its content.

A future version of Comet will add built-in functionality to verify file integrity in this way.

Example data validation commands 

The following equivalent commands read all files in the current directory, take the SHA256 hash, and compare it to the filename.

These commands exclude the config file, as it is known to be safe for other reasons.

These commands do not exclude any other temporary files (e.g. /tmp/ subdirectory, or ~-named files) that may be used by some storage location types for temporary uploaded data. Such temporary files will almost certainly cause a hash mismatch, but do not interfere with normal backup or restore operations.

On Linux, you can use the following command:

find . ! -name 'config' -type f -exec sha256sum '{}' \; | awk '{ sub("^.*/", "", $2) ; if ($1 == $2) { print $2,"ok" } else { print "[!!!]",$2,"MISMATCH",$1 } }'

On Windows, you can use the following Powershell (4.0 or later) command:

Get-ChildItem -Recurse -File | Where-Object { $_.Name -ne "config" } | ForEach-Object {
    $h = (Get-FileHash -Path $_.FullName -Algorithm SHA256)
    if ($_.Name -eq $h.Hash) { echo "$($_.Name) ok"; } else { echo "[!!!] $($_.Name) MISMATCH $($h.Hash)"; }
}
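If you prefer a single cross-platform script, the following Python sketch performs the same check. Run it from the Storage Vault's data location; like the commands above, it skips the config file but does not skip any temporary files.

import hashlib, os

# Hash every file under the current directory and compare it to its filename.
for root, _, files in os.walk("."):
    for name in files:
        if name == "config":
            continue  # the config file is not named by its content hash
        digest = hashlib.sha256()
        with open(os.path.join(root, name), "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                digest.update(chunk)  # stream in 1 MiB chunks to bound memory use
        if digest.hexdigest() == name:
            print(name, "ok")
        else:
            print("[!!!]", name, "MISMATCH", digest.hexdigest())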

Cloud storage 

Taking hash values of files in this way requires fully reading the file from the storage location. If the storage location is on cloud storage, this is equivalent to fully downloading the entire contents of the storage location. This may result in significant network traffic. In this case, we recommend relying on Comet's normal verification that happens automatically during backup and restore operations.

Data file integrity at generation-time 

It is possible that a malfunctioning Comet Backup client would generate bad data, and then save it into the Storage Vault with a valid hash and valid encryption. For instance, this could happen in some rare situations where the Comet Backup client is installed on a PC with malfunctioning RAM.

In this situation, a future Comet backup/restore job would load data from the vault but fail to parse it, reporting a couldn't load tree [...] hash mismatch error message or a Load(<index/...>): Decode [...] invalid character \x00 error message.

In this situation, it is possible to recover the Storage Vault by removing all the corrupted data. The remaining data is restorable. However, it's not possible to identify the corrupted data using the data validation commands above.

Data validation steps 

Different methods are available to identify the corrupted files.

  • Use the "Deep verify Vault contents" feature

    • This feature is available in Comet 18.8.2 or later, via the actions dialog for a live-connected device in the Comet Server web interface, when the "Advanced options" setting is enabled. It is not exposed to the client in the Comet Backup app.
    • This will cause the client to download parts of the Storage Vault and perform a deeper type of hash checking than is possible via the existing data validation steps. It should alert you to which data files are corrupt, and the Storage Vault can then be repaired following the existing documented steps.
    • There are two versions of the "deep verify" feature
      • In Comet 18.8.2, this feature downloads almost the entire content of the Storage Vault. This is a highly bandwidth-intensive operation. If you have the customer's password on file, it may be preferable to log in as a new device into their account from your own office, and control that device to run the command instead.
      • In Comet 18.8.3 and later, the "deep verify" feature is much faster than 18.8.2; it downloads only index/tree parts of the Storage Vault, and caches temporary files to reduce total network roundtrips.
  • Files mentioned in error message

    • Data files (e.g. couldn't load tree [...] hash mismatch error message)
      • Comet 18.8.2 updated the couldn't load tree error message to also indicate the exact corrupted pack file, if possible.
      • You can then delete the file from the /data/ subdirectory, and run a retention pass to validate the remaining content, as described below. This may assist with repairing the Storage Vault.
      • However, this only detects the corrupted directory trees that were immediately referenced by a running backup job; other past and future backup jobs may still be unrestorable.
    • Index files (e.g. Load(<index/...>): Decode [...] invalid character \x00)
      • The index files contain only non-essential metadata to accelerate performance. Index files can be safely regenerated via the "Rebuild indexes" option on a Storage Vault. This is a relatively fast operation.
    • Compared to the "Deep verify Vault contents" feature, repairing single files in this way does avoid the immediate bandwidth-intensive step of downloading the entire vault content; however, it is not a guarantee that all data in the Vault is safe. Use of this method should be coupled with a (bandwidth-equivalent) complete test restore.
  • Files by modification date

    • New backup jobs only add files into the Storage Vault; they do not modify existing ones. Another possible way to repair the Storage Vault is therefore to assume that all files modified after a given point in time are affected.
    • This is only an option if your Storage Vault type exposes file modification timestamps (e.g. local disk or SFTP; and some limited number of cloud storage providers)
    • Specifically
      1. Ensure that no backup/restore/retention operations are currently running to the Storage Vault
      2. The corrupted data was created by the job prior to the one in which errors were first reported; find the start time of this prior job
      3. Delete all files in the Storage Vault with any modification time greater than when that job started
      4. If you used the "Rebuild indexes" option or ran a retention pass since the errors began, the contents of the /index/ directory may have been consolidated into fewer files. If there are no files in the /index/ subdirectory, you should then initiate a "Rebuild indexes" operation
      5. Initiate a retention pass afterward, to ensure referential integrity of the remaining files, as described below
  • The other alternative is to start a new Storage Vault.

Recovering from file corruption 

If you encounter a hash mismatch error, a data file has been corrupted inside the Storage Vault. Data has been lost.

If the issue only occurred recently, it's highly likely that most backed-up data is safe, and that only recent backup jobs are affected.

Because Comet is a deduplicating backup engine, future backups may silently depend on this corrupted data. You should immediately take the following steps to re-establish data integrity of the remaining data in the Storage Vault:

  1. Identify and delete all corrupted files from the Storage Vault's data location
    • You can use the example data validation commands above to find all hash mismatches
  2. Run a retention pass for the Storage Vault
    • This will check referential integrity (as mentioned above)
    • The job will fail, with a number of warnings in the format
      • <snapshot/ABCDEFGH> depends on missing [...] or
      • Packindex 'AAAA' for snapshot 'BBBB' refers to unknown pack 'CCCC', shouldn't happen
  3. Delete snapshots that are missing content data
    • Delete files from the /snapshots/ subdirectory in the Storage Vault's data location that match the corruption warnings from the backup job report
  4. Repeat step 2 until no issues occur

Some data has been lost, but by carefully removing the corrupted data and everything that references it, the remaining backup snapshots will all be restorable. Future backup jobs should be safe.

A future version of Comet Backup may add a feature to automatically perform the above repair steps.

Missing files from Storage Vault 

Please take care to ensure that files do not go missing from the Storage Vault. The loss of any file within the Storage Vault compromises the integrity of your backup data. You are likely to be in a data-loss situation.

More specifically:

If there are missing files from the snapshots subdirectory, Comet will not know that there is a backup snapshot available to be restored. There is no solution for this, other than ensuring all the files are present. This is unfortunate, but should only have a limited impact.

If there are missing files from the packindex subdirectory, some operations will be slower until Comet runs an optimization pass during the next retention pass. This is not a significant problem.

If there are missing files from the locks subdirectory, Comet may perform a dangerous operation, such as deleting in-use data during a retention pass. This is potentially a significant problem.

If there are missing files from the index subdirectory, Comet will re-upload some data that it could have otherwise deduplicated. This is unfortunate, but should only have a limited impact. Excess data will be cleaned up by a future retention pass.

If there are missing files from the keys subdirectory, Comet will be entirely unable to access the Storage Vault. There is no solution for this, other than ensuring all the files are present. This could be a significant problem. However, because the files in this directory do not change often, it's very likely that their replication is up-to-date.

If there are missing files from the data subdirectory, Comet will be unable to restore some data. When a retention pass next runs, Comet should detect this problem, and alert you to which backup snapshots are unrecoverable. You can then delete the corresponding files from the snapshots subdirectory and continue on with what is left of the Storage Vault.

Splitting a Storage Vault 

It is possible to "split" a Storage Vault, by first cloning it and then deleting some jobs from each one.

This would be a preliminary step towards splitting one large Storage Vault into two smaller Storage Vaults. The cloned vault must use the same encryption material in order to access existing encrypted data, so this is not suitable as a protection mechanism.

This is an advanced process. Please take special care to avoid data loss.

  1. Prepare a location for the cloned Storage Vault
    • If the current Storage Vault is "Local Path", then make a new empty directory.
    • Or if the current Storage Vault is "Comet Server", then in Comet Server > Storage menu > "Storage Buckets" page, click "Add new".
  2. In Comet Backup, create a new "Custom" Storage Vault using these details
  3. Copy all files from the existing data location to the new data location, so that the backup data exists in both places
  4. Copy the encryption key
    • In Comet Server, from your user menu, enable "Advanced Options". Then on the user's detail page, choose Actions > Edit Raw Profile. Scroll to the "Destinations" section.
    • Find the original Storage Vault by name;
      • copy the EncryptionKeyEncryptionMethod, EncryptedEncryptionKey, and RepoInitTimestamp fields
    • Find the new Storage Vault by name;
      • paste the EncryptionKeyEncryptionMethod, EncryptedEncryptionKey, and RepoInitTimestamp fields
    • Save changes.
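For illustration, the three fields to copy might look like the following inside a vault's entry in the raw profile. This is a hypothetical layout with placeholder values, not real key material; copy whatever actual values appear in your own profile.

"EncryptionKeyEncryptionMethod": 1,
"EncryptedEncryptionKey": "cGxhY2Vob2xkZXIgdmFsdWUgb25seQ==",
"RepoInitTimestamp": 1537142400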

You should then have two Storage Vaults that were cloned from the same data, but are now independent copies.

In Comet Backup, you should ensure that you can browse files to restore, from both of these Storage Vaults.

Then it's possible to start work on the cleanup process:

  • You can change some schedules to back up to one Storage Vault and not the other
  • You can also save space by deleting some old backup snapshots in one of the Storage Vaults, either by
    • using the retention features
      • at the Protected Item level, you can set a zero-job retention for one of the Storage Vaults to clear out all jobs
    • or individually
      • in Comet Backup, click Restore, right-click a snapshot, and choose "Delete" from the menu.

Applying hotfixes 

When you report an issue to Comet staff, every attempt is made to reproduce the issue in-house. Once the issue is reproduced, we can confirm a fix internally.

Sometimes an issue depends on specific environment details that are infeasible to recreate. In these situations, support staff may ask you to run a "hotfix" version of Comet that contains experimental changes, to test a potential resolution for your issue.

Hotfixes normally come in the form of a replacement backup-tool file.

Applying backup-tool.exe hotfixes on Windows 

  1. Exit the Comet Backup app from the system tray
  2. Stop Comet's background services
    • Use services.msc or Task Manager to stop the "Comet Backup (delegate service)" and "Comet Backup (elevator service)" services
    • Prior to Comet 18.6.0, you should also stop any "Comet Backup (dispatch service)" entries for the Pre-Logon Service
  3. Replace C:\Program Files\Comet Backup\backup-tool.exe with the updated version
  4. Restart all stopped background services
  5. Restart the Comet Backup app

Applying backup-tool hotfixes on macOS 

  1. Exit the Comet Backup app from the menu bar
  2. Stop Comet's background services
    • sudo launchctl stop system/backup.delegate
    • sudo launchctl stop system/backup.elevator
  3. Replace /Applications/Comet Backup.app/Contents/MacOS/backup-tool with the updated version
  4. Restart all stopped background services
    • sudo launchctl start system/backup.elevator
    • sudo launchctl start system/backup.delegate
  5. Restart the Comet Backup app