Appendix 

Encryption and key management 

This section follows "Encryption" in the "Overview and Concepts" section.

The user's password is used to derive two 192-bit keys (the "L" and "R" keys) via PBKDF2-SHA512, with hard-coded parameters for repeatable output.

  • The L-key is used to log in to the Authentication Role server in place of the real password; the server stores only a bcrypt(sha512) hash of this L-key.
  • The R-key never leaves the client, and is used to encrypt secret keys stored within the user's profile on the server.

This means that one password can be used for all client-side account operations, while preventing servers from uncovering client-only secrets.
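
As a rough illustration of the derivation step only, the following OpenSSL 3.x command derives 48 bytes (two 192-bit halves) from a password using PBKDF2-SHA512. The password, salt, and iteration count shown here are placeholders; Comet's actual hard-coded parameters are internal and differ:

openssl kdf -keylen 48 \
  -kdfopt digest:SHA512 \
  -kdfopt pass:hunter2 \
  -kdfopt salt:examplesalt \
  -kdfopt iter:100000 \
  PBKDF2

In a scheme like this, the first 24 bytes of the output could serve as the L-key and the last 24 bytes as the R-key; whether Comet splits one derivation or runs two separate derivations is an internal detail.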

When Comet sets up a Storage Vault for the first time, it generates two high-entropy random keys (the 256-bit "A" and 128-bit "E" keys). All user data in the Storage Vault is stored encrypted with the A-key using AES-256 in CTR mode, and authenticated using Poly1305 in AEAD (encrypt-then-MAC) mode.

The permanent A-key is stored inside the Storage Vault, encrypted with the E-key. The E-key is in turn encrypted with the R-key and stored in the user's profile on the Authentication Role server. When a backup is performed, the client uses its password to derive the private R-key, uses the R-key to decrypt the E-key from its profile, and then uses the E-key to decrypt the A-key from the Storage Vault. This extra level of indirection enables some key rotation scenarios, as a new E-key can be generated without needing to re-encrypt all the data in the Storage Vault.
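
To summarize the chain of protection described above:

password -> (PBKDF2-SHA512) -> R-key -> E-key -> A-key -> (AES-256-CTR + Poly1305) -> user data

Each key decrypts the next one along the chain: the R-key exists only on the client; the E-key, encrypted with the R-key, lives in the user's profile on the Authentication Role server; and the A-key, encrypted with the E-key, lives inside the Storage Vault itself.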

If the Storage Vault is for a Storage Role bucket, a high-entropy random 128-bit PSK is used to gate access to the bucket. The Storage Role server stores only a bcrypt(sha512) hash of this PSK. The client encrypts this PSK with the R-key and stores it in the user's profile on the Authentication Role server.

Compatibility events 

This section follows "Backward Compatibility" in the "Overview and Concepts" section.

The following compatibility events have occurred:

  • 18.6.2: A server-side encryption format changed for bucket access keys.
    • Upgrade compatible: Yes.
    • Downgrade compatible: Partial. Accessing the bucket with Comet 18.5.5 (or a later 18.5.x version) will convert the key format to the backward-compatible format.
  • 17.9.3: A metadata format was enforced.
    • Upgrade compatible: Yes, but pre-existing Storage Vaults will not take advantage of the new features.
    • Downgrade compatible: No. Storage Vaults created with newer versions of Comet cannot be used in old versions of Comet.
  • 17.6.1: A compression format was added.
    • Upgrade compatible: Yes.
    • Downgrade compatible: Partial. If the new compression format was used, old versions of Comet will not be able to restore data.
  • 2.8.2 Beta: The encryption format changed.
    • Upgrade compatible: Yes. Accounts are automatically upgraded upon login.
    • Downgrade compatible: No. Old versions of Comet will not understand secrets in the new format.
  • 2.3.0 Beta: The compression format changed.
    • Upgrade compatible: Yes. New versions of Comet can still read old backed-up data.
    • Downgrade compatible: No. Old versions of Comet cannot read newly backed-up data.
  • 2.2.0 Beta: The license server address changed.
    • Upgrade compatible: Yes.
    • Downgrade compatible: No. Old versions of Comet Server can no longer be used.
  • 2.0.0 Beta: The Storage Vault format changed.
    • Upgrade compatible: Partial. Storage Vaults must be recreated.
    • Downgrade compatible: No. Old versions of Comet are unable to use new Storage Vaults.
  • 1.7.0 Beta: The Storage Vault format changed.
    • Upgrade compatible: Yes. Comet will automatically upgrade Local Copy and Comet Server types, but S3 and SFTP types must be upgraded manually.
    • Downgrade compatible: No. Old versions of Comet are unable to use new Storage Vaults.
  • 1.0.0 Beta: Initial compatibility milestone.
    • Upgrade compatible: Yes.
    • Downgrade compatible: No.
  • 0.?.? Beta: No compatibility guarantees were made in either direction (upgrade and downgrade: n/a).

Importing backup settings from other products 

Comet Backup supports importing settings from certain third-party backup products.

In all cases, only user configuration settings are read from the other product.

  • Existing backed-up data is not converted to Comet's format; you must back up the data again using Comet.
  • Historical logs are not imported.

Ahsay OBM/ACB 6.x or compatible 

Ahsay, OBM, and ACB are trademarks of Ahsay Systems Corporation Limited.

Comet can import settings from an installed version of Ahsay OBM/ACB 6.x, subject to the following notices and conditions:

  • Ahsay backupsets are imported as Comet Protected Items.
    • Backupset types
      • File
        • Source selection/deselection is supported.
        • Macro selections (desktop, documents etc) are not supported.
        • Inclusion/exclusion filters are not supported.
      • MySQL
      • Microsoft SQL Server
        • The "All instances" option is not supported. Your Ahsay backupset must have selected specific instances for backup.
      • Other backupset types are not supported.
    • Pre/post commands
      • Pre/post commands are imported as applying to the Protected Item, not to the Storage Vault nor the Schedule.
    • Retention policies
      • Basic retention policies (DAYS/JOBS -type) are supported.
      • Advanced retention policies will be treated as "keep forever".
    • Schedule
      • Daily, Weekly and Custom schedules are supported.
      • Monthly schedules for a specific day of the month are supported.
      • Monthly schedules for a variable day of the month (e.g. Second Tuesday, Last Weekday) are not supported.
    • "Extra Local Copy"
      • "Local Copy" will be imported as a new or existing Local Path Storage Vault in Comet.
      • The "Skip Offsite Backup" option is correctly imported.

CrashPlan 

CrashPlan and Code42 are trademarks of Code42 Software, Inc.

Comet can import settings from an installed version of Code42 CrashPlan, subject to the following notices and conditions:

  • CrashPlan backupsets are imported as Comet Protected Items.
    • Backupset types
      • File
        • Source selection/deselection is supported.
        • Regular expression exclusions are supported.
    • Retention policies
      • CrashPlan and Comet employ fundamentally different retention systems. Comet makes a best-effort attempt to translate the CrashPlan retention policy into a set of Comet retention rules, with some caveats:
        • Comet uses a minimum retention period based on the schedule window.
        • CrashPlan retention for the last week will be interpreted as multiple daily policies for periods shorter than 24 hours.
        • CrashPlan retention for the last week will be interpreted as single or multiple weekly policies for periods greater than 24 hours, depending on the configured value.
        • Retention for larger time periods will be configured as single or multiple daily, weekly or monthly rules depending on the configured value, spread out over periods of a week, three months, a year, and indefinitely.
        • Weekly retention policies are treated as one backup per week, on the first day of the week, for the specified number of weeks.
        • Monthly retention policies are treated as one backup per month, on the first day of the month, for the specified number of months.
        • Indefinite retention policies are implemented as one backup per month, on the first day of the month, for over a million years.
    • Schedule
      • CrashPlan and Comet employ fundamentally different scheduling systems. Comet translates a CrashPlan schedule to multiple schedule rules, with some caveats:
        • CrashPlan allows a number of very small schedule windows, down to a minute. For the sake of performance, Comet enforces a minimum schedule window of 30 minutes for imported schedules.
        • Comet will generate a schedule rule for each time that a CrashPlan backup would have run in its specified window(s).
        • The "Skip if already running" option is automatically enabled for imported schedules, to prevent overlapping backup with small schedule windows.
    • Local destinations
      • If any local destinations are configured for CrashPlan, Comet will automatically import these using the same location on disk, backing up to a subdirectory within the specified path.
      • The folder containing Comet's local vault data will be labelled with a unique ID. If it is necessary to identify which folder belongs to Comet, we recommend checking the modification date for the folder. The Comet data folder can also be distinguished by the format of the unique ID: CrashPlan uses a numerical ID for its folder, while Comet uses a hyphenated GUID with a mixture of numbers and letters.

Other products 

Future versions of Comet Backup will support importing configuration from other products.

Migrating user data 

It is possible to migrate user data to balance your storage requirements.

Migrating server-side user data to a different volume 

There are several ways to migrate server-side user data to a different volume.

Off-line server migration, without Spanning 

  1. Stop Comet Server
  2. Move files to the new volume
  3. Update disk path in Comet Server's configuration file
  4. Start Comet Server

Gradual, on-line server migration, without Spanning 

  1. Use rsync/robocopy/rclone/... to synchronize the current drive contents to the new drive (see the example below)
  2. Repeat step #1 until there is very little data change in a single sync run
  3. Stop Comet Server
  4. Perform one more sync pass
  5. Update disk path in Comet Server's configuration file
  6. Start Comet Server
  7. Delete all content from the old disk volume
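
For example, on Linux the sync passes in the steps above could use rsync (a sketch; the source and destination paths are placeholders for your own volumes):

# Safe to repeat while Comet Server is running; later passes copy only changes.
# --delete mirrors deletions (e.g. from retention passes) onto the new volume.
rsync -a --delete /mnt/old-volume/comet-data/ /mnt/new-volume/comet-data/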

Gradual, on-line server migration with Spanning 

If multiple volumes are Spanned together in the same Storage Role Comet Server, then you can move files freely. Comet Server will instantly recognize the changes, because it looks in all attached volumes when looking for a chunk.

  1. Configure Comet Server to Span between both the old and new volumes

    • Newly uploaded data will be written in a balanced way to both volumes.
  2. Live migrate data from old volume to the new volume

    • You can move data between the two volumes on-the-fly while Comet is running. You can even move data for an in-use Storage Vault. This is a safe operation.
  3. Stop Comet Server; move any remaining data that was written during step #2; disable the Spanning configuration; and then restart Comet Server

Migrating user data to a different Comet Server 

  1. Create a new bucket on a different Comet Server

    • You can either manually create the bucket, or Request a bucket on a new target server
  2. Copy the file content to the new server

  3. Edit the user's profile in the Auth Role Comet Server to change the address, bucket, and bucket-key that it points to

    • You should first ensure that this user is not running any backup jobs to the original server.
    • You must take care to preserve the Encryption Key settings. The key absolutely must not change.
  4. Remove the file content from the original server

    • You should first ensure that this user is not running any restore jobs from the original server.

Migrating user data between Storage Vault types 

All Storage Vault types (e.g. Comet Server, Local Copy, SFTP, Amazon S3 etc) use the same on-disk layout. It is possible to follow the above steps for "Migrating user data to a different Comet Server" even when either the old or new target is not a Comet Server.

For more information, please see the Seed Load section.

Multiple Comet Server instances 

It is possible to run multiple Comet Server instances on the same machine or IP by using a load balancer or frontend proxy software. All Comet Server communication is performed over HTTP / HTTPS / Websockets, so applications such as nginx, Apache, HAProxy, Traefik or Caddy are all suitable for this purpose.

If you choose to do this, take care that the frontend proxy does not introduce additional buffering or timeouts that could interrupt the connection between Comet Backup and Comet Server.

For instance, with nginx, the following directives could be used as a starting point:

proxy_connect_timeout 3000;
proxy_send_timeout    3000;
proxy_read_timeout    3000;
client_body_timeout   3000;
proxy_buffering off;
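
Because Comet's live connections run over websockets, the proxy must also pass through HTTP Upgrade headers. A fuller nginx sketch follows; the server name and the backend address 127.0.0.1:8060 are placeholders for your own environment:

server {
    listen 80;
    server_name backup.example.com;

    location / {
        proxy_pass http://127.0.0.1:8060;  # placeholder Comet Server address

        # Pass through websocket upgrade requests for live connections
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;

        # Generous timeouts and no response buffering, as above
        proxy_connect_timeout 3000;
        proxy_send_timeout    3000;
        proxy_read_timeout    3000;
        client_body_timeout   3000;
        proxy_buffering       off;
    }
}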

HAProxy introduces additional network timeouts that may prevent live-connection websockets from staying online. Because an HTTPS handshake involves multiple network roundtrips, repeated accidental disconnections may also decrease performance. You can adjust HAProxy timeouts to prevent it from disconnecting long-running connections:

timeout connect 30s
timeout client 120s
timeout http-keep-alive 120s
timeout http-request 120s

Using Comet Backup behind a network proxy 

Comet Backup can be used behind an HTTP or SOCKS proxy.

In Comet 17.11.x, proxy settings are controlled by an environment variable named HTTP_PROXY.

Multiple programs use this configuration method, so the environment variable may already be present for other software on the machine.

On Windows, the environment variable should be set in the "System variables" section to ensure that any settings also apply to background services.

On Linux, environment variables can be set system-wide (e.g. in /etc/environment or /etc/profile.d/my-custom-proxy.sh), or for the root user running Comet Backup (e.g. in /root/.profile), or in your startup script for Comet Backup (e.g. in /etc/rc.local).

The HTTP_PROXY environment variable should be set to a string of the form https://username:password@my.proxy.host.com/ or http://my.proxy.host.com/ or socks5://username:password@my.proxy.host.com/.
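
For example, a system-wide setting on Linux might look like the following (a sketch; the proxy address is a placeholder):

# /etc/profile.d/my-custom-proxy.sh
export HTTP_PROXY="http://my.proxy.host.com/"

On Windows, an equivalent system-wide variable can be set from an elevated command prompt with setx HTTP_PROXY "http://my.proxy.host.com/" /M, or via the System Properties dialog.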

A future version of Comet will provide GUI settings to configure the network proxy.

Windows Event Log 

Comet Backup logs all job messages to the Windows Event Log. The log content is identical to the job log content seen in the Comet Server log browser, or the Comet Backup history table, or the Comet Server API. This should allow you to check for errors and/or ensure that jobs are running on time, by monitoring the Windows Event Log.

However, please note that this only covers client-side jobs that actually run. For example, because "Missed Backup" job entries are generated server-side, they won't appear in the client's event log. It is therefore not feasible to use the Windows Event Log as a complete monitoring solution for your customer base.

The Comet Backup installer also logs some events, which can be used as a proxy for detecting software installations or upgrades.

Event IDs 

  • backup-service (any Event ID): Messages about installing and starting the Elevator service, the Pre-Logon Service (prior to 18.6.0), and the Delegate service (18.6.0 and later).
  • backup-tool.exe, Event ID 50: Backup job started. Available since 18.3.8.
  • backup-tool.exe, Event ID 51: Backup job finished. Available since 18.3.8.
  • backup-tool.exe, Event ID 52: Backup job log message. The Event Log entry severity corresponds to the Comet log entry severity (Info/Warning/Error). Available since 18.3.8.
  • backup-tool.exe, Event ID 53: Comet Backup installer has registered the backup-tool.exe Event Log source. Available since 18.3.8.
  • backup-tool.exe, Event ID 54: Comet Backup uninstaller has de-registered the backup-tool.exe Event Log source. Available since 18.3.8.
  • backup-tool.exe, Event ID 55: Message from the Comet Delegate service. Available since 18.6.0.
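
For example, you could query recent job events with PowerShell's standard Get-WinEvent cmdlet (a sketch; adjust the event IDs and -MaxEvents to suit):

Get-WinEvent -FilterHashtable @{ ProviderName = 'backup-tool.exe'; Id = 50, 51, 52 } -MaxEvents 50 |
    Select-Object TimeCreated, Id, LevelDisplayName, Message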

Suppressing Before / After command failures 

When a command-line program displays some output, the output is either sent to stdout ("fd 1", for normal messages) or stderr ("fd 2", for error messages).

Comet will mark a Before/After command as a warning if there was content on stderr, or if the command had a non-zero exit code.

If you are certain that the command cannot fail, you can:

  • redirect stderr messages to go to stdout instead, by adding 2>&1 to the end of your command
  • override the command exit code, by adding one of the following to the end of your command:
    • Windows: &exit 0
    • Linux and macOS: ; exit 0
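
For example, for a hypothetical script that writes harmless messages to stderr, both techniques can be combined:

  • Windows: mybackupscript.bat 2>&1 &exit 0
  • Linux and macOS: ./mybackupscript.sh 2>&1; exit 0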

The above information is not Comet-specific; more information about 2>&1 and stdout redirection can be found online, e.g. https://support.microsoft.com/en-nz/help/110930/redirecting-error-messages-from-command-prompt-stderr-stdout .

Data validation 

There are three types of integrity verification in Comet:

Referential integrity 

Referential integrity means that for each snapshot, all its matching chunks exist; that all the chunks are indexed; and so on. This is verified client-side every time the app runs a retention pass, so you should ensure that retention passes run successfully from time to time.

Data file integrity at rest 

Data file integrity ensures that each file in the Storage Vault is readable and has not been corrupted at rest (e.g. hash mismatch / decrypt errors).

Comet stores files inside the Storage Vault data location as opaque, encrypted, compressed files. The filenames are the SHA256 hash of the file content. Comet automatically verifies file integrity client-side, every time a file is accessed during backup and restore operations (i.e. non-exhaustively) by calculating the SHA256 hash of the content and comparing it to the filename.

Corruption of files at rest is a rare scenario; it's unlikely you need to worry about this, unless you are using local storage and you believe your disk drives are failing. However, for additional peace-of-mind, you can verify the integrity of the files on disk at any time, by comparing the filename to their SHA256 hash.

A future version of Comet will add built-in functionality to verify file integrity in this way.

Example data validation commands 

The following equivalent commands read all files in the current directory, take the SHA256 hash of each, and compare it to the filename. Run them from within the Storage Vault's data location.

These commands exclude the config file, as this file is known to be safe for other reasons.

These commands do not exclude any other temporary files (e.g. /tmp/ subdirectory, or ~-named files) that may be used by some storage location types for temporary uploaded data. Such temporary files will almost certainly cause a hash mismatch, but do not interfere with normal backup or restore operations.

On Linux, you can use the following command:

find . ! -name 'config' -type f -exec sha256sum '{}' \; | awk '{ sub("^.*/", "", $2) ; if ($1 == $2) { print $2,"ok" } else { print "[!!!]",$2,"MISMATCH",$1 } }'

On Windows, you can use the following Powershell (4.0 or later) command:

Get-ChildItem -Recurse -File | Where-Object { $_.Name -ne "config" } | ForEach-Object {
    $h = (Get-FileHash -Path $_.FullName -Algorithm SHA256)
    if ($_.Name -eq $h.Hash) { echo "$($_.Name) ok"; } else { echo "[!!!] $($_.Name) MISMATCH $($h.Hash)"; }
}

Cloud storage 

Taking hash values of files in this way requires fully reading the file from the storage location. If the storage location is on cloud storage, this is equivalent to fully downloading the entire contents of the storage location. This may result in significant network traffic. In this case, we recommend relying on Comet's normal verification that happens automatically during backup- and restore- operations.

Data file integrity at generation-time 

It is possible that a malfunctioning Comet Backup client would generate bad data, and then save it into the Storage Vault with a valid hash and valid encryption. For instance, this could happen in some rare situations where the Comet Backup client is installed on a PC with malfunctioning RAM.

In this situation, Comet would try to run a future backup/restore job, load data from the vault, but fail to parse it with a couldn't load tree [...] hash mismatch error message or a Load(<index/...>): Decode [...] invalid character \x00 error message.

In this situation, it is possible to recover the Storage Vault by removing all the corrupted data. The remaining data is restorable. However, it's not possible to identify the corrupted data using the data validation commands above.

Data validation steps 

Different methods are available to identify the corrupted files.

  • Use the "Deep verify Vault contents" feature

    • This feature is available in Comet 18.8.2 or later, via the Connected Devices "Actions" dialog in the Comet Server web interface, when the "Advanced options" setting is enabled. It is not exposed to the client in the Comet Backup app.
    • This will cause the client to download parts of the Storage Vault and perform a deeper type of hash checking than is possible via the existing data validation steps. It should alert you to which data files are corrupt, and the Storage Vault can then be repaired following the existing documented steps.
    • There are two versions of the "deep verify" feature
      • In Comet 18.8.2, this feature downloads almost the entire content of the Storage Vault. This is a highly bandwidth-intensive operation. If you have the customer's password on file, it may be preferable to log in as a new device into their account from your own office, and control that device to run the command instead.
      • In Comet 18.8.3 and later, the "deep verify" feature is much faster than 18.8.2; it downloads only index/tree parts of the Storage Vault, and caches temporary files to reduce total network roundtrips.
  • Files mentioned in error message

    • Data files (e.g. couldn't load tree [...] hash mismatch error message)
      • Comet 18.8.2 updated the couldn't load tree error message to also indicate the exact corrupted pack file, if possible.
      • You can then delete the file from the /data/ subdirectory, and run a retention pass to validate the remaining content, as described below. This may assist with repairing the Storage Vault.
      • However, this only detects the corrupted directory trees that were immediately referenced by a running backup job; other past and future backup jobs may still be unrestorable.
    • Index files (e.g. Load(<index/...>): Decode [...] invalid character \x00)
      • The index files contain only non-essential metadata to accelerate performance. Index files can be safely regenerated via the "reindex" option on a Storage Vault. This is a relatively fast operation.
    • Compared to the "Deep verify Vault contents" feature, repairing single files in this way does avoid the immediate bandwidth-intensive step of downloading the entire vault content; however, it is not a guarantee that all data in the Vault is safe. Use of this method should be coupled with a (bandwidth-equivalent) complete test restore.
  • Files by modification date

    • New backup jobs only add additional files into the Storage Vault. Another possible way to repair the Storage Vault is therefore to assume that all files written after a given point in time are affected.
    • This is only an option if your Storage Vault type exposes file modification timestamps (e.g. local disk or SFTP; and some limited number of cloud storage providers)
    • Specifically
      1. Ensure that no backup/restore/retention operations are currently running to the Storage Vault
      2. The corrupted data was created by the job prior to the one in which errors were first reported; find the start time of this prior job
      3. Delete all files in the Storage Vault with any modification time greater than when that job started (see the example command after this list)
      4. If you used the reindex option or ran a retention pass since the errors began, the contents of the /index/ directory may have been consolidated into fewer files. If there are no files in the /index/ subdirectory, you should then initiate a reindex operation
      5. Initiate a retention pass afterward, to ensure referential integrity of the remaining files, as described below
  • The other alternative is to start a new Storage Vault.
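
For step 3 of the "files by modification date" method, on storage that exposes modification timestamps through a local filesystem path, a command like the following lists the files modified after a given timestamp so you can review them before deleting them (a sketch: the vault path and timestamp are placeholders, and -newermt requires GNU find):

# List candidate files written after the prior job's start time
find /path/to/vault -type f -newermt '2018-08-01 03:00:00' -print

After reviewing the list, re-running the command with -delete in place of -print removes the files.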

Recovering from file corruption 

If you encounter a hash mismatch error, a data file has been corrupted inside the Storage Vault. Data has been lost.

If the issue only occurred recently, it's highly likely that most backed-up data is safe, and that only recent backup jobs are affected.

Because Comet is a deduplicating backup engine, future backups may silently depend on this corrupted data. You should immediately take the following steps to re-establish data integrity of the remaining data in the Storage Vault:

  1. Identify and delete all corrupted files from the Storage Vault's data location
    • You can use the example data validation commands above to find all hash mismatches
  2. Run a retention pass for the Storage Vault
    • This will check referential integrity (as mentioned above)
    • The job will fail, with a number of warnings in the format
      • <snapshot/ABCDEFGH> depends on missing [...] or
      • Packindex 'AAAA' for snapshot 'BBBB' refers to unknown pack 'CCCC', shouldn't happen
  3. Delete snapshots that are missing content data
    • Delete files from the /snapshots/ subdirectory in the Storage Vault's data location that match the corruption warnings from the backup job report
  4. Repeat step 2 until no issues occur

Some data has been lost, but by carefully removing the corrupted data and everything that references it, the remaining backup snapshots will all be restorable. Future backup jobs should be safe.

A future version of Comet Backup may add a feature to automatically perform the above repair steps.

Splitting a Storage Vault 

It is possible to "split" a Storage Vault, by first cloning it and then deleting some jobs from each one.

This would be a preliminary step towards splitting one large Storage Vault into two smaller Storage Vaults. The cloned vault must use the same encryption material in order to access existing encrypted data, so this is not suitable as a protection mechanism.

This is an advanced process. Please take special care to avoid data loss.

  1. Prepare a location for the cloned Storage Vault
    • If the current Storage Vault is "Local Path", then make a new empty directory.
    • Or if the current Storage Vault is "Comet Server", then in Comet Server > Storage menu > "Storage Buckets" page, click "Add new".
  2. In Comet Backup, create a new "Custom" Storage Vault using these details
  3. Copy all files from the existing data location to the new data location, so that the backup data exists in both places
  4. Copy the encryption key
    • In Comet Server, from your user menu, enable "Advanced Options". Then on the user's detail page, choose Actions > Edit Raw Profile. Scroll to the "Destinations" section.
    • Find the original Storage Vault by name;
      • copy the EncryptionKeyEncryptionMethod, EncryptedEncryptionKey, and RepoInitTimestamp fields
    • Find the new Storage Vault by name;
      • paste the EncryptionKeyEncryptionMethod, EncryptedEncryptionKey, and RepoInitTimestamp fields
    • Save changes.

You should then have two Storage Vaults that were cloned from the same source, but have now become independent copies.

In Comet Backup, you should ensure that you can browse files to restore from both of these Storage Vaults.

Then it's possible to start work on the cleanup process:

  • You can change some schedules to back up to one Storage Vault and not the other
  • You can also save space by deleting some old backup snapshots in one of the Storage Vaults, either by
    • using the retention features
      • at the Protected Item level, you can set a zero-job retention for one of the Storage Vaults to clear out all jobs
    • or individually
      • in Comet Backup, click Restore > right-click a snapshot > choose "delete" from the menu.

Applying hotfixes 

When you report an issue to Comet staff, every attempt is made to reproduce the issue in-house. Once the issue is reproduced, we can confirm a fix internally.

Sometimes an issue depends on specific environment details that are infeasible to recreate. In these situations, support staff may ask you to run a "hotfix" version of Comet that contains experimental changes, to test a potential resolution for your issue.

Hotfixes normally come in the form of a replacement backup-tool file.

Applying backup-tool.exe hotfixes on Windows 

  1. Exit the Comet Backup app from the system tray
  2. Stop Comet's background services
    • Use services.msc or Task Manager to stop the "Comet Backup (delegate service)" and "Comet Backup (elevator service)" services
    • Prior to Comet 18.6.0, you should also stop any "Comet Backup (dispatch service)" entries for the Pre-Logon Service
  3. Replace C:\Program Files\Comet Backup\backup-tool.exe with the updated version
  4. Restart all stopped background services
  5. Restart the Comet Backup app

Applying backup-tool hotfixes on macOS 

  1. Exit the Comet Backup app from the system taskbar
  2. Stop Comet's background services
    • sudo launchctl stop system/backup.delegate
    • sudo launchctl stop system/backup.elevator
  3. Replace /Applications/Comet Backup.app/Contents/MacOS/backup-tool with the updated version
  4. Restart all stopped background services
    • sudo launchctl start system/backup.elevator
    • sudo launchctl start system/backup.delegate
  5. Restart the Comet Backup app

Troubleshooting 

Error "local error: tls: record overflow" 

This message means the connection was corrupted over the network, and Comet aborted the connection.

This can happen because of random network conditions. Retrying the operation should fix the issue.

If the issue keeps happening repeatedly, this message indicates that something is interfering with packets in your network, such as:

  • Failing NIC
  • Bad NIC driver or driver configuration
  • Failing RAM, on either the endpoint machine or any of the intermediate routers
  • Outdated firewall or proxy, performing incorrect SSL interception

For more information, please see the record_overflow section in IETF RFC 5246.

"Paused" state on Windows service 

Comet Server on Windows consists of two parts: cometd.exe and cometd-service.exe. The latter binary is registered as a service with Windows, and is responsible for ensuring that the former binary stays running.

If Comet Server encounters an error and closes, the service binary will restart it. If the Comet Server is repeatedly unable to start - if it closes immediately when launched, several hundred times consecutively - then the service manager will assume the error is permanent and abandon restarting the process. This condition is displayed as the "Paused" state.

You can resolve the "Paused" state by fixing the underlying issue with the service. The error message should be recorded in Comet Server's log file.

You can view Comet Server's log files:

  • in Comet Server Service Manager, use the Service menu > "Browse log files" option, or
  • by browsing the C:\ProgramData\Comet\logs directory.

Comet Server makes one log file per day. The error should be recorded in the most recent file (highest number in name / latest modification date).

Forgotten administrator password 

If you are locked out of the Comet Server web interface, you can change your administrator password by editing the cometd.cfg file.

  1. Stop the server, and edit the cometd.cfg file
  2. Find the AdminUsers section for the administrator user in question
  3. Set "PasswordFormat" to 0
  4. Set "Password" to e.g. "admin"
  5. Save the file and restart the server.
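
After editing, the relevant part of cometd.cfg might look like the following (an illustrative sketch only: cometd.cfg is a JSON file, surrounding fields are omitted here, and the exact structure may vary between versions):

"AdminUsers": [
    {
        "Username": "admin",
        "PasswordFormat": 0,
        "Password": "admin"
    }
]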

You should now be able to log in with the reset password. The password will be hashed and/or encrypted after first use.

Storage server "127.0.0.1" in use by accounts but not managed by Constellation 

A user account has a bucket on a Storage Role Comet Server at the address 127.0.0.1, but this address was not selected for management by Constellation. Aside from the standard warnings about managing Constellation, the 127.0.0.1 address is a special case.

If Comet Server is configured to listen on all network interfaces, the server will be accessible on both 127.0.0.1 as well as LAN, WAN, or DNS addresses. If you log in to either the Comet Backup application or the Comet Server web interface at 127.0.0.1, and request a new Storage Vault from a Comet Server configured as "Local" (or $self$ in cometd.cfg), the resulting Storage Vault will be configured using 127.0.0.1 as the remote network address.

However, 127.0.0.1 has a different meaning depending on where it is found. The connection will fail if the Comet Backup client is not running on the same machine. There may also be unintended consequences if the account is replicated to another server.

To avoid this problem, either

  • always use the external DNS name for your Comet Server when requesting new Storage Vaults, or
  • change the Request destination to use "Remote" with an external DNS name instead of "Local" (or $self$ in cometd.cfg).

A similar caveat applies to the software downloads, which can include embedded server address details.

Microsoft SQL Server backup encountered a VDI error 

You should ensure that the necessary VDI .dll files are registered and are the correct version for your SQL Server installation. You can use Microsoft SQL Server Backup Simulator to check the status of the VDI .dll files.

"Access is denied" backing up files and folders on Windows 

An "Access Denied" error message means that the Windows user account running the backup job does not have access to read the file content.

Comet 18.6.0 and later 

Comet automatically creates a service account with all necessary permissions to read local files. If you are experiencing "Access Denied" errors on Comet 18.6.0 or later, you may be trying to back up a network path that has been mounted as a directory. Please see the "Accessing Windows network shares and UNC paths" section below for more information.

If you are experiencing "Access Denied" errors on Comet 18.6.0 or later, and you are certain that you are not backing up a mounted network path, please contact support.

Comet 18.5.x and earlier 

In Comet 18.5.x and earlier:

  • Comet normally runs as the logged-on user session
  • If the Pre-logon service is enabled, Comet runs as a background session for the same user account
  • If the "take filesystem snapshot" option is enabled, Comet runs as LOCAL SYSTEM

You may be able to resolve this issue by:

  1. enabling the "take filesystem snapshot" option in the Protected Item settings, which switches to run the backup as the LOCAL SYSTEM user; or

  2. changing the permissions of the file, to give access rights to the specific Windows user account running the backup job; or

  3. using lusrmgr.msc to add the specific Windows user account to the "Backup Operators" group, which will allow Comet to bypass filesystem permissions (requires Comet 18.3.14 or later); or

  4. excluding the content from the backup job. This may be appropriate for some temporary directories or cache files; or

  5. upgrading to Comet 18.6.0 or later.

Antivirus detects Comet Backup as a virus or malware 

Comet Backup is a safe application. Any such detection is a "false positive".

When Comet Backup is rebranded, it might seem like a new, unknown program. An unknown program that installs system services, accesses files on the disk, and uploads them to the network might be considered malware if it was installed without consent. Unfortunately, it is understandable for an Antivirus product to flag this.

In this situation, there are some actions you can take:

  • Please ensure your Antivirus product is fully up-to-date.
  • Please contact Comet Support with a screenshot of the error message. In some situations, it may be possible for our developers to resolve the issue.
  • Choose to "allow" or "white-list" the file in the Antivirus software. This may send a signal to the Antivirus software vendor that the software is safe (e.g. ESET LiveGrid, Windows Defender Automatic Sample Submission, Kaspersky KSN, etc).
  • Enable Authenticode signing on Windows. This may provide additional "reputation" to the software installer.

Avast "FileRepMalware" 

You will receive the "FileRepMalware" error message for any file that

  • was downloaded from the internet; and
  • does not have an Authenticode certificate; and
  • has not yet been seen by many Avast users.

Many custom-branded Comet Backup installers do fall into this category.

You can resolve this issue by purchasing and installing an Authenticode certificate.

Missing files from Storage Vault 

Please take care to ensure that files do not go missing from the Storage Vault. The loss of any file within the Storage Vault compromises the integrity of your backup data. You are likely to be in a data-loss situation.

More specifically:

If there are missing files from the snapshots subdirectory, Comet will not know that there is a backup snapshot available to be restored. There is no solution for this, other than ensuring all the files are present. This is unfortunate, but should only have a limited impact.

If there are missing files from the packindex subdirectory, some operations will be slower until Comet runs an optimization pass during the next retention pass. This is not a significant problem.

If there are missing files from the locks subdirectory, Comet may perform a dangerous operation, such as deleting in-use data during a retention pass. This is potentially a significant problem.

If there are missing files from the index subdirectory, Comet will re-upload some data that it could have otherwise deduplicated. This is unfortunate, but should only have a limited impact. Excess data will be cleaned up by a future retention pass.

If there are missing files from the keys subdirectory, Comet will be entirely unable to access the Storage Vault. There is no solution for this, other than ensuring all the files are present. This could be a significant problem. However, because the files in this directory do not change often, it's very likely that their replication is up-to-date.

If there are missing files from the data subdirectory, Comet will be unable to restore some data. When a retention pass next runs, Comet should detect this problem, and alert you to which backup snapshots are unrecoverable. You can then delete the corresponding files from the snapshots subdirectory and continue on with what is left of the Storage Vault.

Network connectivity errors 

Comet Backup uploads files to Comet Server (or to a cloud storage provider) over the internet. Occasionally, you may see errors such as the following:

  • Couldn't save data chunk:
  • HTTP/1.x transport connection broken
  • net/http: request canceled (Client.Timeout exceeded while awaiting headers)
  • wsarecv
  • wsasend
  • An existing connection was forcibly closed by the remote host
  • dial tcp: lookup [...]: no such host
  • connectex: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond.

Comet Backup retries the upload several times, but eventually gives up. After a failed data chunk upload, you may see several more messages of the form Couldn't save data chunk: context canceled while Comet terminates the other upload threads.

Network errors have many possible causes:

  • Customer's PC
  • Customer's network
  • Customer's ISP
  • Internet-wide outages between customer's ISP and your ISP
  • Your ISP
  • Your network
  • Your Comet Server hardware
  • Comet Server software

To troubleshoot these issues, please check:

  • Does the backup succeed if it is retried?

    • Many network errors are temporary and will only occur rarely. In addition, a repeated second backup job will often run faster because many of the existing data chunks have already been uploaded. (Any unused data chunks in the Storage Vault will be automatically cleaned up by the next retention pass.)
  • Does the error message always happen at a certain time of day?

    • It may be possible to reschedule the backup to avoid times of heavy internet congestion.
  • Are there any corresponding messages for around the same time in your Comet Server logs?

    • This is important to determine the cause of some failures.
    • Some relevant Comet Server log messages take the form Error saving upload stream or Blocking re-upload of preexisting file

Accessing Windows network shares and UNC paths 

This section applies to both Comet Server and Comet Backup.

Comet can back up Windows network paths, and back up to Windows network storage (SMB / CIFS). However, because Comet runs as a service user, there are some issues with authentication to be aware of.

Please note that if you are using Comet Backup to back up data from a network device, you should prefer to install Comet Backup directly on the device instead of backing it up over the network. This will also significantly improve performance, as less data needs to be transferred over the LAN.

Mapped network drives 

On Windows, each logged-on user session has its own set of mapped network drives. The service user account is unlikely to have any mapped drives. If you see error messages like WARNING Missing: 'Z:\', this is probably the reason. You can work around this by using a UNC path instead.

Comet 18.6.5 and later will automatically convert mapped network drives to their UNC path equivalents.

For versions of Comet prior to 18.6.5:

  • In Comet Backup, when choosing items in a Files and Folders Protected Item, you can use the "Options" button > "Add " to browse inside a UNC path. Note that this browsing occurs as your logged-in Windows user, not as the service user, and may have different file access as a result. All backup jobs run as the service user.
  • In Comet Server, when configuring Local Path storage in the First Use Wizard, you can browse to a UNC path directly.

Authentication 

If the UNC share requires authentication, the service user account is probably not logged-in to the UNC share. If you see error messages like WARNING Lstat: CreateFile \\?\UNC\...: Access is denied., this is probably the reason.

Comet 18.6.5 and later have built-in options for setting Windows network authentication credentials.

For versions of Comet prior to 18.6.5, workarounds are available for both Comet Backup and Comet Server. Ranked in order of preference:

  • If you are using Comet Server to store data on a network device, you may be able to install Comet Server on the network device. If the network device is a NAS box (e.g. Synology / QNAP), Comet Server can be installed on Linux x86_64 NAS boxes.

  • If you are storing data on a network share, you can also work around this issue by switching from Windows network shares (SMB) to a network protocol that has built-in credential support, for instance an S3-compatible server (e.g. the free Minio server) or an SFTP server.

  • In Comet Backup, you can work around this issue by adding net use \\HOST\SHARE /user:USERNAME PASSWORD as a "Before" command to the backup job.

    • If you are storing data on a UNC path, you can add this "Before" command on the Storage Vault instead of on the Protected Item. This will ensure it is run for all backup jobs going to that Storage Vault.
  • You can work around this issue in Comet Backup or in Comet Server by changing the Windows Service to use a different user account.

    • For Comet Server, this is the Comet Server service.
    • For Comet Backup 18.6.0 and later, this is the Comet Backup (delegate service) service.
    • For Comet Backup 18.5.x and earlier, this is the Comet Backup (dispatcher service) after you have enabled the Pre-Logon Service; this change affects scheduled, non-VSS backups only. Changing the Comet Backup (elevator service) would affect VSS backup jobs, but would also prevent remote software updates from working.
    • If you are using Comet on a Windows Server machine that is acting as the Domain Controller, you must choose a domain account.

Jobs left in Running state 

Comet Backup is responsible for closing-off a job log with Comet Server. If the PC is shut down unexpectedly, a job would be left in "Running" / "In progress" state indefinitely.

The old, inactive "Running" jobs will be cleaned up automatically if Comet sees an opportunity to prove that they are no longer running.

As of Comet 18.5.0, the following situations will clean up old, inactive "Running" jobs:

  • Running a retention pass
    • For safety reasons, a retention pass requires Comet to temporarily take exclusive control over a Storage Vault. Comet makes a number of checks to verify this exclusivity, but the practical benefit is that when a retention pass runs, all past backup jobs must no longer be running by definition.

A future version of Comet may automatically clean up Running jobs in additional situations.

Out of memory 

Comet Backup needs RAM to run. The largest use of memory is holding deduplication indexes; therefore the amount of RAM used is roughly proportional to the size of the Storage Vault.

You might see these error messages:

  • runtime: VirtualAlloc of 1048576 bytes failed with errno=1455 on Windows
  • 0x5AF ERROR_COMMITMENT_LIMIT: The paging file is too small for this operation to complete. on Windows
  • fatal error: out of memory on all platforms

On Linux, when the system is out of memory (OOM), the kernel "OOM Killer" subsystem will immediately terminate a process of its choosing, to free up memory. If you see an error message like signal: killed in Comet on Linux, this means the process was terminated by a user or by a subsystem, possibly the OOM Killer. You can check for this in dmesg or kern.log, as shown below.
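
For example, on most Linux distributions:

# Look for recent OOM Killer activity in the kernel log
dmesg -T | grep -i -E 'out of memory|killed process'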

You can reduce Comet Backup's RAM usage by limiting how much data is in each Storage Vault. For instance, instead of having multiple devices backing up into a single shared Storage Vault, create a separate Storage Vault for each device. This will reduce the deduplication efficiency, but it will also reduce the necessary memory usage.

A future version of Comet may add options to trade-off memory usage in other ways. For instance, by using more temporary files on disk instead of more memory; or, by using more network bandwidth instead of more memory.

HTTP 500 in Comet Backup logs 

If you see an HTTP 500 error message in the Comet Backup logs, this means the server encountered an error.

If you see this while performing an operation to

  • Comet Server storage, then you should check the Comet Server log file for around the same timestamp, to see if there was a corresponding error on the server.
  • Cloud storage, then the cloud storage provider experienced an error at their end.
    • The error message may contain more detail; or
    • You can contact the cloud provider for more information; or
    • The operation may succeed if you retry it a short time later.

Change of hardware causes registration dialog to appear 

Comet detects the current device based on a hardware ID.

The hardware ID may be changed in some situations:

  • if you replace the motherboard or CPU; or
  • if you virtualise a physical server; or
  • if you migrate a VM guest to a different VM host, without preserving hardware IDs; or
  • if you make certain specific modifications to the operating system.

In these situations, the device's hardware ID will change, and Comet will recognise the PC as a new device.

Handling a changed device ID 

If your device is recognised as a new device, you should register it again.

The original backup data is still preserved in the Storage Vault, and will be deduplicated against any future backups from this device.

You can move the Protected Item settings from one device to another, by using the Copy/Paste buttons in the Comet Server web interface on the user's page > Protected Items tab.

The backup job log history will be preserved in the customer's account, but it will be associated with the old device.

  • Once you de-register the original device, it would show as "Unknown device (XXXXX...)" in the job history.

  • The customer can still see these old jobs in the Comet Server web interface.

  • The customer can still see these old jobs in Comet Backup if they use the filter option > "All devices".

Because the device is detected as a new device, the billing period for this device will be restarted.

Storage Vault Locks 

Lock files are an important part of Comet's safety design. Comet uses lock files to ensure data consistency during concurrent operations.

Problem statement 

Comet Backup supports multiple devices backing up into a shared Storage Vault simultaneously. But when Comet runs a retention pass to clean up data, it's very important that no other backup jobs are running simultaneously.

A retention pass (A) looks at what data chunks exist in the Vault, then (B) deletes the unused ones.

A backup job (A) looks at what data chunks exist in the Vault, then (B) uploads new chunks from the local data, and uploads a backup snapshot that relies on both pre-existing and newly-uploaded chunks.

It's perfectly safe for multiple backup jobs to run simultaneously, even from multiple devices.

But, it is not safe for a retention pass to run at the same time as a backup job, because if the steps are interleaved (retention A > backup A > retention B > backup B) then a backup job might write a backup snapshot that refers to unknown chunks, resulting in data loss.

Comet must prevent you from running a backup job and a retention pass simultaneously.

Lock files 

In order to check whether a retention pass is currently running, Comet must coordinate among all devices that could potentially be using the Storage Vault.

In order to determine whether any other device is actively using a Storage Vault, Comet writes a temporary text file into the Storage Vault, and deletes it when the job is completed. This is the only mechanism supported across all Storage Vault types (i.e. local disk / SFTP / S3 / etc). Then, any other job can look for these files to see what other operations are taking place concurrently.

Jobs in a Storage Vault are classified into two categories:

  • Exclusive (retention passes)
  • Non-exclusive (backup/restore jobs)

Multiple non-exclusive jobs may run simultaneously from any device. A non-exclusive job will refuse to start if any exclusive jobs are currently running. An exclusive job will refuse to start if any other jobs are running.

Specifically:

  • If a backup job is currently running, Comet will refuse to start a retention pass.
  • If a retention pass is currently running, Comet will refuse to start a backup job.
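
On a Local Path Storage Vault, you can inspect any currently held lock files directly in the vault's locks subdirectory (a sketch; the path is a placeholder):

# Each file here represents a job that is, or claims to be, in progress
ls /path/to/vault/locks/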

Downsides of lock file design 

If Comet is stopped suddenly (e.g. PC crash), the lock file would not be removed. All other Comet processes would not realize that the job had stopped. This could prevent proper functioning of backup jobs and/or retention passes.

Comet will alert you to this issue by failing the job. The error message should explain which device and/or job was responsible for originating the now-stale lock file.

You may see error messages of the form:

  • Locked by user '...' on this device (PID #...) since ... (... days ago)
  • Locked by user '...' on computer '...' (PID #...) since ... (... days ago)
  • However, the responsible process might have stopped.
  • If you investigate this process, and are absolutely certain it won't resume, then it's safe to ignore it and continue.

It is possible to delete lock files to recover from this situation. However, it is critical that you manually investigate the issue to ensure that the responsible process really has stopped. Consider that a PC may go to sleep at any time, and wake up days - or weeks - later, and immediately resume from the middle of a backup or retention operation; if the lock files were removed incorrectly, data loss is highly likely.

If you are sure that the responsible process is stopped, you can delete the lock files.

You can initiate this either

  • in Comet Backup, on the "Account" pane > right-click Storage Vault > "Advanced" > "Clean up lock files" option
  • remotely via the Comet Server web interface (when logged in as an administrator). First, enable the "Advanced Options" from the user menu in the top right; then, perform the unlock action via the Connected Devices Actions dialog.

Automatic unlock 

Comet Backup will automatically delete stale lock files when it determines that it is safe to do so.

  • When Comet is running on the same PC as a potentially-stale lock file, it can check the running processes to see if the originator process is still running.

A future version of Comet may be able to automatically detect and remove stale lock files in more situations.

Recovering from unsafe unlock operations 

If you encounter a Packindex '...' for snapshot '...' refers to unknown pack '...', shouldn't happen error, a data file has been erroneously deleted inside the Storage Vault. Data has been lost. This can happen if the "Unlock" feature is used without proper caution as advised above.

In this situation, you can recover the remaining data in the Storage Vault by following the instructions in the "Recovering from file corruption" section above.

Backup process stalled on "Preparing Storage Vault for first use" 

The first step when accessing a new, uninitialised Storage Vault is to generate and store some encryption material.

If a backup to a new Storage Vault seems to hang at this initial step, it's likely that Comet Backup is failing to access the storage location, and is repeatedly retrying and timing out. An error message may appear after some extended duration.

Some possible causes of this issue are:

  • Storage Vault misconfiguration
    • For Storage Vaults located in a Comet Server bucket, check that the Storage Vault properties > "Hostname" field points to a valid URL and not e.g. 127.0.0.1
  • Outdated CA certificates
    • This would prevent Comet Backup from making an HTTPS / SSL connection to the storage location
    • On Windows, run Windows Update
      • For Storage Vaults located in a Comet Server bucket, you can also check if the system Internet Explorer browser is able to load the Comet Server's web interface
    • On Linux, update the ca-certificates package