Inference API error with Whisper, return_timestamps parameter · Issue #1694 · huggingface/hub-docs · GitHub
Inference API error with Whisper, return_timestamps parameter #1694
Open
@oscar-lv

Description


Bug description.
Hi team, I've been using the Inference API endpoint for Whisper for months at https://api-inference.huggingface.co/models/openai/whisper-large-v3-turbo. Today, all of a sudden, the API started throwing this error:

You have passed more than 3000 mel input features (> 30 seconds) which automatically enables long-form generation which requires the model to predict timestamp tokens. Please either pass return_timestamps=True or make sure to pass no more than 3000 mel input features.

(The same message is repeated in the warnings array of the response.)

This of course only happens when passing in samples longer than 30 seconds, and is reproducible through the UI.

Passing a return_timestamps parameter in the HTTP request does not solve the issue, whether in boolean or string form (True / "true"):

parameters = { "language": "en", "temperature": "0.0", "return_timestamps": True}

Using generation_params also fails here.
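For reference, a minimal sketch of the kind of request described above, assuming the JSON payload form of the Inference API (base64-encoded audio in "inputs" plus a "parameters" object); the endpoint URL is the one from the report, and the dummy audio bytes and helper name are illustrative only:

```python
import base64
import json

# Endpoint from the bug report.
API_URL = "https://api-inference.huggingface.co/models/openai/whisper-large-v3-turbo"

def build_payload(audio_bytes: bytes) -> dict:
    """Build the JSON body with return_timestamps set, as attempted in the report."""
    return {
        "inputs": base64.b64encode(audio_bytes).decode("utf-8"),
        "parameters": {
            "language": "en",
            "temperature": 0.0,
            "return_timestamps": True,  # the parameter the API appears to ignore
        },
    }

payload = build_payload(b"\x00" * 16)  # dummy bytes standing in for real audio
body = json.dumps(payload)

# The actual call would then be something like:
# requests.post(API_URL, headers={"Authorization": f"Bearer {HF_TOKEN}"}, data=body)
```

Sending return_timestamps this way (or as the string "true") still returns the same error for inputs over 30 seconds.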

Describe the expected behaviour
The endpoint should run inference as it did previously: samples exceeding 30 seconds were transcribed without any issues and no extra parameter had to be provided.

[Screenshot attached in the original issue]
