FileNotFoundError during checkpoint saving in nemo_model_checkpoint.py · Issue #13581 · NVIDIA/NeMo · GitHub

FileNotFoundError during checkpoint saving in nemo_model_checkpoint.py #13581


Open
mujhenahiata opened this issue May 14, 2025 · 3 comments
Labels: ASR, bug (Something isn't working)

@mujhenahiata commented May 14, 2025

When training a speech-to-text model using NeMo and PyTorch Lightning, the training crashes during the validation phase due to a FileNotFoundError while attempting to remove an older .nemo checkpoint file.

Environment:

NeMo version: 2.2.0
Python version: 3.10

Reproduction Steps:

  • Train a model using the speech_to_text_ctc_bpe.py script.
  • Set up model checkpointing using nemo_model_checkpoint.py (default or modified).
  • After validation runs, during checkpoint saving, the script crashes.

File "nemo/utils/callbacks/nemo_model_checkpoint.py", line 246, in on_save_checkpoint
    get_filesystem(backup_path).rm(backup_path)
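
For context, a minimal sketch of the failure mode and an existence-guarded removal. This is not the actual NeMo implementation; the fsspec local filesystem and the backup_path value are assumptions based on the traceback above.

import fsspec

# Hypothetical backup path; the real one is built by the checkpoint callback.
backup_path = "/output/Speech_To_Text_Finetuning/checkpoints/backup.nemo"
fs = fsspec.filesystem("file")  # local filesystem, comparable to what get_filesystem() returns for a local path

# Unguarded removal raises FileNotFoundError if another process already deleted the file:
# fs.rm(backup_path)

# Existence-guarded removal avoids the crash when the backup is already gone:
if fs.exists(backup_path):
    fs.rm(backup_path)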

@titu1994

I have just updated it; there is some issue where DDP and the global ranks run into a race condition. It works as expected on a single GPU but crashes on a cluster.
Should I raise a PR?
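
One way such a race could play out under DDP is that every rank reaches the same rm() call and only the first one succeeds. A hypothetical rank-zero-only cleanup with a barrier afterwards might look like the sketch below (it assumes torch.distributed has been initialized by the DDPStrategy; it is not the actual NeMo fix).

import fsspec
import torch.distributed as dist


def remove_backup_on_rank_zero(backup_path: str) -> None:
    """Hypothetical guard: only global rank 0 deletes the backup, then all ranks sync."""
    fs = fsspec.filesystem("file")
    distributed = dist.is_available() and dist.is_initialized()
    if (not distributed) or dist.get_rank() == 0:
        if fs.exists(backup_path):
            fs.rm(backup_path)
    if distributed:
        dist.barrier()  # keep other ranks from proceeding while the file is being removed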

@mujhenahiata mujhenahiata added the bug (Something isn't working) label May 14, 2025
@meatybobby meatybobby added the ASR label May 14, 2025
@nithinraok
Collaborator

@mujhenahiata Could you share more detailed reproduction steps, including the configuration (excluding the dataset)? That would be helpful.

@mujhenahiata
Author

Following is the environment:
Python 3.10.12
nemo_text_processing 1.1.0
nemo-toolkit 2.3.0

Hardware:
8x A100 (40 GB)

Reproduction steps:
Initialized the experiment with the following config.

Config YAML:

name: "Speech_To_Text_Finetuning"

init_from_nemo_model: 'some_model.nemo'
model:
  sample_rate: 16000
  log_prediction: true 

  train_ds:
    manifest_filepath: 'some.json'
    sample_rate: ${model.sample_rate}
    batch_size: 32 # you may increase batch_size if your memory allows
    shuffle: true
    num_workers: 1
    pin_memory: true
    max_duration: 20
    min_duration: 0.1
    # tarred datasets
    is_tarred: false
    tarred_audio_filepaths: null
    shuffle_n: 2048
    # bucketing params
    bucketing_strategy: "fully_randomized"
    bucketing_batch_size: null

  validation_ds:
    manifest_filepath: 'some.json'
    sample_rate: ${model.sample_rate}
    batch_size: 16
    shuffle: false
    use_start_end_token: false
    num_workers: 2
    pin_memory: true

  test_ds:
    manifest_filepath: null
    sample_rate: ${model.sample_rate}
    batch_size: 16
    shuffle: false
    use_start_end_token: false
    num_workers: 8
    pin_memory: true
  
  tokenizer: # use for spe/bpe based tokenizer models
    update_tokenizer: true
    dir: /tokenizer/tokenizer_spe_bpe_v128  # path to directory which contains either tokenizer.model (bpe) or vocab.txt (for wpe)
    type: bpe  # Can be either bpe (SentencePiece tokenizer) or wpe (WordPiece tokenizer)
  
  preprocessor:
    _target_: nemo.collections.asr.modules.AudioToMelSpectrogramPreprocessor
    sample_rate: ${model.sample_rate}
    normalize: "per_feature"
    window_size: 0.025
    window_stride: 0.01
    window: "hann"
    features: 80
    n_fft: 512
    log: true
    frame_splicing: 1
    dither: 0.00001
    pad_to: 0
    pad_value: 0.0

  encoder:
    _target_: nemo.collections.asr.modules.ConformerEncoder
    feat_in: ${model.preprocessor.features}
    feat_out: -1 # set it if you need an output size different from the default d_model
    n_layers: 18
    d_model: 512

    # Sub-sampling params
    subsampling: striding # vggnet, striding, stacking or stacking_norm, dw_striding
    subsampling_factor: 4 # must be power of 2 for striding and vggnet
    subsampling_conv_channels: -1 # -1 sets it to d_model
    causal_downsampling: false

    # Feed forward module's params
    ff_expansion_factor: 4

    # Multi-headed Attention Module's params
    self_attention_model: rel_pos # rel_pos or abs_pos
    n_heads: 8 # may need to be lower for smaller d_models
    # [left, right] specifies the number of steps to be seen from left and right of each step in self-attention
    att_context_size: [-1, -1] # -1 means unlimited context
    att_context_style: regular # regular or chunked_limited
    xscaling: true # scales up the input embeddings by sqrt(d_model)
    untie_biases: true # unties the biases of the TransformerXL layers
    pos_emb_max_len: 5000

    # Convolution module's params
    conv_kernel_size: 31
    conv_norm_type: 'batch_norm' # batch_norm or layer_norm or groupnormN (N specifies the number of groups)
    # conv_context_size can be "causal" or a list of two integers such that conv_context_size[0]+conv_context_size[1]+1 == conv_kernel_size
    # null means [(kernel_size-1)//2, (kernel_size-1)//2], and 'causal' means [(kernel_size-1), 0]
    conv_context_size: null

    ### regularization
    dropout: 0.1 # The dropout used in most of the Conformer Modules
    dropout_pre_encoder: 0.1 # The dropout used before the encoder
    dropout_emb: 0.0 # The dropout used for embeddings
    dropout_att: 0.1 # The dropout for multi-headed attention modules

    # set to non-zero to enable stochastic depth
    stochastic_depth_drop_prob: 0.0
    stochastic_depth_mode: linear  # linear or uniform
    stochastic_depth_start_layer: 1


  decoder:
      _target_: nemo.collections.asr.modules.ConvASRDecoder
      feat_in: null
      num_classes: -1
      vocabulary: []

  spec_augment:
    _target_: nemo.collections.asr.modules.SpectrogramAugmentation
    freq_masks: 2 # set to zero to disable it
    time_masks: 10 # set to zero to disable it
    freq_width: 27
    time_width: 0.05

  optim:
    name: adamw
    lr: 1e-4
    # optimizer arguments
    betas: [0.9, 0.98]
    weight_decay: 1e-3

    # scheduler setup
    sched:
      name: CosineAnnealing
      # scheduler config override
      warmup_steps: 5000
      warmup_ratio: null
      min_lr: 5e-6

trainer:
  devices: -1 # number of GPUs, -1 would use all available GPUs
  num_nodes: 1
  max_epochs: 100
  max_steps: -1 # computed at runtime if not set
  val_check_interval: 0.5 # Set to 0.25 to check 4 times per epoch, or an int for number of iterations
  accelerator: auto
  strategy:
    _target_: lightning.pytorch.strategies.DDPStrategy
    gradient_as_bucket_view: true
  accumulate_grad_batches: 1
  gradient_clip_val: 0.0
  precision: bf16 # 16, 32, or bf16
  log_every_n_steps: 10  # Interval of logging.
  enable_progress_bar: True
  num_sanity_val_steps: 1 # number of validation steps to run as a sanity check before training starts; set to 0 to disable
  check_val_every_n_epoch: 1 # number of evaluations on validation every n epochs
  sync_batchnorm: true
  enable_checkpointing: False  # Provided by exp_manager
  logger: false  # Provided by exp_manager
  benchmark: false # needs to be false for models with variable-length speech input as it slows down training


exp_manager:
  exp_dir: '/output'
  name: ${name}
  create_tensorboard_logger: false
  create_checkpoint_callback: true
  checkpoint_callback_params:
    # in case of multiple validation sets, first one is used
    monitor: "val_wer"
    mode: "min"
    save_top_k: 2
    always_save_nemo: True # saves the checkpoints as nemo files along with PTL checkpoints
    filename: "${name}-{epoch}-{val_wer:.4f}"
  resume_if_exists: false
  resume_ignore_no_checkpoint: true

  create_wandb_logger: true
  wandb_logger_kwargs:
    name: EXP
    project: "ASR"

Trained the model using the speech_to_text_ctc_bpe.py script.
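
For reference, a hedged sketch of how such a run is typically launched. The script location and config paths are assumptions; NeMo example scripts are Hydra-driven, so the YAML above is passed via --config-path/--config-name.

import subprocess

# Hypothetical launch of the reproduction; adjust the paths to your checkout and config.
subprocess.run(
    [
        "python",
        "examples/asr/asr_ctc/speech_to_text_ctc_bpe.py",  # NeMo example script referenced above
        "--config-path=/path/to/configs",                  # directory containing the YAML above
        "--config-name=speech_to_text_finetuning",         # hypothetical config file name
    ],
    check=True,
)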

@nithinraok @titu1994

@nithinraok
Collaborator

You have only provided the config, not full reproduction steps.
Anyway, from your error:

get_filesystem(backup_path).rm(backup_path)

This usually happens when your script doesn't have permission to create the file or the respective folder. Would appreciate:

  1. Full training recipe
  2. Full error log
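
To rule out the permission issue mentioned above, each rank can run a quick write test against exp_dir before training. A minimal sketch (check_writable is a hypothetical helper; the default path matches exp_manager.exp_dir in the config):

import os
import tempfile


def check_writable(exp_dir: str = "/output") -> bool:
    """Return True if a file can be created and removed inside exp_dir."""
    os.makedirs(exp_dir, exist_ok=True)
    try:
        with tempfile.NamedTemporaryFile(dir=exp_dir):
            pass  # file is created on entry and deleted on exit
        return True
    except OSError as err:
        print(f"Cannot write to {exp_dir}: {err}")
        return False


if __name__ == "__main__":
    print("exp_dir writable:", check_writable())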
