Configuration Reference
This page documents all available configuration options for FirecREST.
Below is an example configuration file showing how values can be structured:
```yaml
apis_root_path: ""
doc_servers:
  - url: "http://localhost:8000"
    description: "Local environment"
auth:
  authentication:
    scopes: {}
    tokenUrl: "http://keycloak:8080/auth/realms/kcrealm/protocol/openid-connect/token"
    publicCerts:
      - "http://keycloak:8080/auth/realms/kcrealm/protocol/openid-connect/certs"
    username_claim: "sub"
#ssh_credentials:
#  type: "SSHCA"
#  url: "http://deic-sshca:2280/demoCA"
#  max_connections: 500
ssh_credentials:
  type: "SSHStaticKeys"
  keys:
    fireuser:
      private_key: "secret_file:/run/secrets/ssh_private_key_fireuser"
    firesrv:
      private_key: "secret_file:/run/secrets/ssh_private_key_firesrv"
      passphrase: "secret_file:/run/secrets/ssh_passphrase_firesrv"
clusters:
  - name: "cluster-slurm-api"
    ssh:
      host: "192.168.240.2"
      port: 22
      max_clients: 500
      timeout:
        connection: 5
        login: 5
        command_execution: 5
        idle_timeout: 60
        keep_alive: 5
    scheduler:
      type: "slurm"
      version: "24.11.0"
      api_url: "http://192.168.240.2:6820"
      api_version: "0.0.42"
      timeout: 10
    service_account:
      client_id: "firecrest-health-check"
      secret: "secret_file:/run/secrets/service_account_client_secret"
    probing:
      interval: 120
      timeout: 10
      startup_grace_period: 300
    datatransfer_jobs_directives:
      - "#SBATCH --constraint=mc"
      - "#SBATCH --nodes=1"
      - "#SBATCH --time=0-00:15:00"
    file_systems:
      - path: '/home'
        data_type: 'users'
        default_work_dir: true
  - name: "cluster-slurm-ssh"
    ssh:
      host: "192.168.240.2"
      port: 22
      max_clients: 500
      timeout:
        connection: 5
        login: 5
        command_execution: 5
        idle_timeout: 60
        keep_alive: 5
    scheduler:
      type: "slurm"
      version: "24.11.0"
      timeout: 10
    service_account:
      client_id: "firecrest-health-check"
      secret: "secret_file:/run/secrets/service_account_client_secret"
    probing:
      interval: 120
      timeout: 5
    datatransfer_jobs_directives:
      - "#SBATCH --nodes=1"
      - "#SBATCH --time=0-00:15:00"
      - "#SBATCH --account={account}"
    file_systems:
      - path: '/home'
        data_type: 'users'
        default_work_dir: true
  - name: "cluster-pbs"
    ssh:
      host: "192.168.240.4"
      port: 22
      max_clients: 500
      timeout:
        connection: 5
        login: 5
        command_execution: 5
        idle_timeout: 60
        keep_alive: 5
    scheduler:
      type: "pbs"
      version: "23.06.06"
      timeout: 10
    service_account:
      client_id: "firecrest-health-check"
      secret: "secret_file:/run/secrets/service_account_client_secret"
    probing:
      interval: 120
      timeout: 10
      startup_grace_period: 300
    datatransfer_jobs_directives:
      - "#PBS -l nodes=1:ppn=1"
      - "#PBS -l walltime=00:15:00"
      - "#PBS -V"
    file_systems:
      - path: '/home'
        data_type: 'users'
        default_work_dir: true
data_operation:
  max_ops_file_size: 1048576 # 1M
  data_transfer:
    service_type: "streamer"
    host: "0.0.0.0"
    port_range: [5665, 5666]
    public_ips:
      - "localhost"
    wait_timeout: 43200 # 12h
    inbound_transfer_limit: 5368709120 # 5GB
  # data_transfer:
  #   service_type: "wormhole"
  #data_transfer:
  #  service_type: "s3"
  #  name: "s3-storage"
  #  private_url: "http://192.168.240.19:9000"
  #  public_url: "http://localhost:9000"
  #  access_key_id: "storage_access_key"
  #  secret_access_key: "secret_file:/run/secrets/s3_secret_access_key"
  #  region: "us-east-1"
  #  ttl: 604800
  #  multipart:
  #    use_split: false
  #    max_part_size: 1073741824 # 1G
  #    parallel_runs: 3
  #    tmp_folder: "tmp"
  #  probing:
  #    interval: 60
  #    timeout: 10
```
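Since the configuration is plain YAML, it can be inspected programmatically before deploying. A minimal sketch using PyYAML (an assumption here, not part of FirecREST) that loads a fragment of such a file and sanity-checks a couple of the fields documented in the tables below:

```python
# Sketch: parse a FirecREST-style YAML config with PyYAML and check a
# few documented fields. Field names come from the tables on this page.
import yaml

sample = """
apis_root_path: ""
clusters:
  - name: "cluster-slurm-api"
    scheduler:
      type: "slurm"
      version: "24.11.0"
"""

config = yaml.safe_load(sample)

# 'clusters' defaults to [] when omitted, so guard before iterating.
for cluster in config.get("clusters", []):
    # Scheduler type must be one of the documented options.
    assert cluster["scheduler"]["type"] in ("slurm", "pbs")

print(config["clusters"][0]["name"])  # → cluster-slurm-api
```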
In the following tables, you can find all the supported configuration options, along with their types, descriptions, and default values:
Settings
FirecREST configuration. Loaded from a YAML file.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| app_debug | bool | Enable debug mode for the FastAPI application. | False |
| app_version | Literal | — | '2.x.x' |
| apis_root_path | str | Base path prefix for exposing the APIs. | '' |
| doc_servers | List[dict] \| None | Optional documentation servers. For complete documentation see the servers parameter in the FastAPI docs. | None |
| auth | Auth | Authentication and authorization config (OIDC, FGA). | (required) |
| ssh_credentials | SSHService \| SSHCA \| SSHStaticKeys | SSH keys service or manually defined user keys. More details in this section. | (required) |
| clusters | List[HPCCluster] | List of configured HPC clusters. | [] |
| data_operation | DataOperation \| None | Data transfer backend configuration. More details in this section. | DataOperation(max_ops_file_size=5242880, data_transfer=None) |
| logger | Logger | Logging configuration options. | generated by Logger() |
Details of auth (Auth)
Auth
Authentication and authorization configuration.
Details of authentication (Oidc)
Oidc
OpenID Connect (OIDC) authentication configuration.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| scopes | dict \| None | Map of OIDC scopes and their purposes. | {} |
| token_url | str | Token endpoint URL for the OIDC provider. This is used to obtain access tokens for the service account that will do the health checks. | (required) |
| public_certs | List[str] | List of URLs for retrieving public certificates. These are used to verify the OIDC token. | [] |
| username_claim | str \| None | Name of the JWT claim containing the username (e.g. sub, preferred_username, etc.). | 'preferred_username' |
| jwk_algorithm | str \| None | Explicitly set the expected JWT signing algorithm if the JWKS endpoint doesn't include the 'alg' parameter for the signing key. | None |
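Some identity providers publish JWKS documents without an 'alg' parameter; in that case the algorithm can be pinned explicitly. A sketch using the field names listed above, with illustrative endpoint URLs and an illustrative algorithm value:

```yaml
auth:
  authentication:
    scopes: {}
    token_url: "https://idp.example.org/realms/myrealm/protocol/openid-connect/token"
    public_certs:
      - "https://idp.example.org/realms/myrealm/protocol/openid-connect/certs"
    username_claim: "preferred_username"
    jwk_algorithm: "RS256"  # pin the signing algorithm if the JWKS endpoint omits 'alg'
```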
Details of authorization (OpenFGA)
OpenFGA
Authorization settings using OpenFGA.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| url | str | OpenFGA API base URL. | (required) |
| timeout | int \| None | Connection timeout in seconds. When None, the timeout is disabled. | 1 |
| max_connections | int | Max HTTP connections per host. When set to 0, there is no limit. | 100 |
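The sample file at the top of this page does not include an authorization block. A minimal sketch, assuming the block lives under auth.authorization as this section's title suggests, with an illustrative OpenFGA endpoint:

```yaml
auth:
  authorization:
    url: "http://openfga:8080"  # illustrative endpoint
    timeout: 1                  # seconds; null disables the timeout
    max_connections: 100        # 0 means unlimited
```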
Details of ssh_credentials (SSHService)
SSHService
External service for managing SSH keys.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| type | Literal | — | (required) |
| url | str | URL of the SSH keys management service. | (required) |
| max_connections | int | Maximum concurrent connections to the service. When set to 0, there is no limit. | 100 |
Details of ssh_credentials (SSHCA)
SSHCA
External service for managing SSH keys.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| type | Literal | — | (required) |
| url | str | URL of the SSH keys management service. | (required) |
| max_connections | int | Maximum concurrent connections to the service. When set to 0, there is no limit. | 100 |
Details of ssh_credentials (SSHStaticKeys)
SSHStaticKeys
Manually defined static SSH keys per user.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| type | Literal | — | (required) |
| keys | Dict[str, SSHUserKeys] | — | (required) |
Details of keys (SSHUserKeys)
SSHUserKeys
SSH key pair configuration for authenticating to remote systems.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| private_key | LoadFileSecretStr | SSH private key. You can give directly the content or the file path using 'secret_file:/path/to/file'. | (required) |
| public_cert | str \| None | Optional SSH public certificate. | None |
| passphrase | LoadFileSecretStr \| None | Optional passphrase for the private key. You can give directly the content or the file path using 'secret_file:/path/to/file'. | None |
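The sample file at the top of the page shows private_key and passphrase; public_cert is the remaining optional field. A sketch with an illustrative, truncated certificate value:

```yaml
ssh_credentials:
  type: "SSHStaticKeys"
  keys:
    fireuser:
      private_key: "secret_file:/run/secrets/ssh_private_key_fireuser"
      public_cert: "ssh-ed25519-cert-v01@openssh.com AAAA... user@host"  # illustrative
      passphrase: "secret_file:/run/secrets/ssh_passphrase_fireuser"
```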
Details of clusters (HPCCluster)
HPCCluster
Definition of an HPC cluster, including SSH access, scheduling, and
filesystem layout. More info in the systems section.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| name | str | Unique name for the cluster. This field is case-insensitive. | (required) |
| ssh | SSHClientPool | SSH configuration for accessing the cluster nodes. | (required) |
| scheduler | Scheduler | Job scheduler configuration. | (required) |
| service_account | ServiceAccount | Service credentials for internal APIs. | (required) |
| probing | Probing \| None | Probing configuration for monitoring the cluster. | None |
| file_systems | List[FileSystem] | List of mounted file systems on the cluster, such as scratch or home directories. | [] |
| datatransfer_jobs_directives | List[str] | Custom scheduler flags passed to data transfer jobs (e.g. -pxfer for a dedicated partition). | [] |
Details of ssh (SSHClientPool)
SSHClientPool
SSH connection pool configuration for remote execution.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| host | str | SSH target hostname. | (required) |
| port | int | SSH port. | (required) |
| proxy_host | str \| None | Optional proxy host for tunneling. | None |
| proxy_port | int \| None | Optional proxy port. | None |
| max_clients | int | Maximum number of concurrent SSH clients. | 100 |
| timeout | SSHTimeouts | SSH timeout settings. | generated by SSHTimeouts() |
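proxy_host and proxy_port do not appear in the sample file at the top of the page; they route the SSH connection through a jump host. A sketch with illustrative addresses:

```yaml
ssh:
  host: "192.168.240.2"
  port: 22
  proxy_host: "bastion.example.org"  # illustrative jump host
  proxy_port: 22
  max_clients: 100
```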
Details of timeout (SSHTimeouts)
SSHTimeouts
SSH timeout settings.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| connection | int | Timeout (seconds) for the initial SSH connection. | 5 |
| login | int | Timeout (seconds) for SSH login/auth. | 5 |
| command_execution | int | Timeout (seconds) for executing commands over SSH. | 5 |
| idle_timeout | int | Max idle time (seconds) before disconnecting. | 60 |
| keep_alive | int | Interval (seconds) for sending keep-alive messages. | 5 |
Details of scheduler (Scheduler)
Scheduler
Cluster job scheduler configuration.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| type | enum str (available options: slurm, pbs) | Scheduler type. | (required) |
| version | str | Scheduler version. | (required) |
| api_url | str \| None | REST API endpoint for the scheduler. | None |
| api_version | str \| None | Scheduler API version. | None |
| timeout | int \| None | Timeout in seconds for communication with the scheduler API. | 10 |
Details of service_account (ServiceAccount)
ServiceAccount
Internal service account credentials.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| client_id | str | Service account client ID. | (required) |
| secret | LoadFileSecretStr | Service account secret token. You can give directly the content or the file path using 'secret_file:/path/to/file'. | (required) |
Details of probing (Probing)
Probing
Cluster monitoring attributes.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| interval | int | Interval in seconds between cluster checks. | (required) |
| timeout | int | Maximum time in seconds allowed per check. | (required) |
Details of file_systems (FileSystem)
FileSystem
Defines a cluster file system and its type.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| path | str | Mount path for the file system. | (required) |
| data_type | enum str (available options: users, store, archive, apps, scratch, project) | File system purpose/type. | (required) |
| default_work_dir | bool | Mark this as the default working directory. | False |
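A cluster can declare several mounted file systems, typically with one marked as the default working directory. A sketch with illustrative mount paths, using two of the documented data_type options:

```yaml
file_systems:
  - path: '/home'
    data_type: 'users'
    default_work_dir: true
  - path: '/scratch'    # illustrative additional mount
    data_type: 'scratch'
```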
Details of data_operation (DataOperation)
DataOperation
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| max_ops_file_size | int | Maximum file size (in bytes) allowed for direct upload and download. Larger files will go through the staging area. | 5242880 |
| data_transfer | S3DataTransfer \| WormholeDataTransfer \| StreamerDataTransfer \| None | Data transfer service configuration. | None |
Details of data_transfer (S3DataTransfer)
S3DataTransfer
Object storage configuration, including credentials, endpoints, and upload behavior.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| service_type | Literal | — | (required) |
| probing | Probing \| None | Configuration for probing storage availability. | None |
| name | str | Name identifier for the storage. | (required) |
| private_url | SecretStr | Private/internal endpoint URL for the storage. | (required) |
| public_url | str | Public/external URL for the storage. | (required) |
| access_key_id | SecretStr | Access key ID for S3-compatible storage. | (required) |
| secret_access_key | LoadFileSecretStr | Secret access key for storage. You can give directly the content or the file path using 'secret_file:/path/to/file'. | (required) |
| region | str | Region of the storage bucket. | (required) |
| ttl | int | Time-to-live (in seconds) for generated URLs. | (required) |
| tenant | str \| None | Optional tenant identifier for multi-tenant setups. | None |
| multipart | MultipartUpload | Settings for multipart upload, including chunk size and concurrency. | generated by MultipartUpload() |
| bucket_lifecycle_configuration | BucketLifecycleConfiguration | Lifecycle policy settings for auto-deleting files after a given number of days. | generated by BucketLifecycleConfiguration() |
Details of multipart (MultipartUpload)
MultipartUpload
Configuration for multipart upload behavior.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| use_split | bool | Enable or disable splitting large files into parts when uploading the file to the staging area. | False |
| max_part_size | int | Maximum size (in bytes) for multipart data transfers. Default is 2 GB. | 2147483648 |
| parallel_runs | int | Number of parts to upload in parallel to the staging area. | 3 |
| tmp_folder | str | Temporary folder used for storing split parts during upload. | 'tmp' |
Details of bucket_lifecycle_configuration (BucketLifecycleConfiguration)
BucketLifecycleConfiguration
Configuration for automatic object lifecycle in storage buckets.
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| days | int | Number of days after which objects will expire automatically. | 10 |
Details of data_transfer (WormholeDataTransfer)
WormholeDataTransfer
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| service_type | Literal | — | (required) |
| probing | Probing \| None | Configuration for probing storage availability. | None |
| pypi_index_url | str \| None | Optional local PyPI index URL for installing dependencies. | None |
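The commented-out wormhole block in the sample file at the top of the page sets only service_type. A sketch that also points dependency installation at a local PyPI mirror and enables probing (the mirror URL and probing values are illustrative):

```yaml
data_operation:
  data_transfer:
    service_type: "wormhole"
    pypi_index_url: "https://pypi.internal.example.org/simple"  # illustrative mirror
    probing:
      interval: 60
      timeout: 10
```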
Details of data_transfer (StreamerDataTransfer)
StreamerDataTransfer
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| service_type | Literal | — | (required) |
| probing | Probing \| None | Configuration for probing storage availability. | None |
| pypi_index_url | str \| None | Optional local PyPI index URL for installing dependencies. | None |
| host | str \| None | The interface to listen on for incoming connections. | None |
| port_range | Tuple | Port range for establishing connections. | (5665, 5675) |
| public_ips | List[str] \| None | List of public IP addresses where the server can be reached. | None |
| wait_timeout | int \| None | How long to wait for a connection before exiting (in seconds). | 86400 |
| inbound_transfer_limit | int \| None | Maximum amount of data that can be received (in bytes). | 5368709120 |
Details of logger (Logger)
Logger
| Field | Type | Description | Default |
| --- | --- | --- | --- |
| enable_tracing_log | bool | Enable tracing logs. | False |