High CPU usage at startup of the bots #1654

Open

abhijitpal1247 opened this issue Apr 24, 2025 · 4 comments

Comments

@abhijitpal1247

pipecat version

0.0.63

Python version

3.11.11

Operating System

Ubuntu 20.04.6 LTS

Question

I am following the bot_runner pattern to deploy pipecat bots. While testing it at scale, I am seeing a spike in CPU usage and a delay in the startup of the bots (the delay is proportional to the number of bots spawned).
I am attaching a video of this.
The Docker resource configuration is as follows: --cpus=2 --memory=4g
Is there any way I can decrease the initial CPU utilisation?

Screen.Recording.2025-04-24.at.2.46.13.PM.mp4

What I've tried

No response

Context

No response

@ken-kuro
ken-kuro commented Apr 25, 2025

Would you mind sharing your current implementation? I suppose you're using the WebSocket transport, right?

@abhijitpal1247
Author
abhijitpal1247 commented Apr 25, 2025

@ken-kuro
Yes:
STT: Deepgram
LLM: OpenAI
TTS: ElevenLabs
transport: WebSocket
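
For reference, a minimal single-bot sketch of that stack might look roughly like the following. Module paths and constructor arguments are assumed from pipecat around 0.0.63 and may differ; the API keys and voice ID are placeholders, and LLM context aggregation is omitted for brevity.

```python
# Hypothetical single-bot pipeline sketch (module paths assumed from pipecat ~0.0.63).
import os

from pipecat.pipeline.pipeline import Pipeline
from pipecat.pipeline.runner import PipelineRunner
from pipecat.pipeline.task import PipelineTask
from pipecat.services.deepgram import DeepgramSTTService
from pipecat.services.elevenlabs import ElevenLabsTTSService
from pipecat.services.openai import OpenAILLMService
from pipecat.transports.network.websocket_server import (
    WebsocketServerParams,
    WebsocketServerTransport,
)


async def run_bot(port: int):
    # One transport and one pipeline per bot, listening on its own port.
    transport = WebsocketServerTransport(
        host="0.0.0.0",
        port=port,
        params=WebsocketServerParams(audio_out_enabled=True),
    )

    stt = DeepgramSTTService(api_key=os.getenv("DEEPGRAM_API_KEY"))
    llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"))
    tts = ElevenLabsTTSService(
        api_key=os.getenv("ELEVENLABS_API_KEY"),
        voice_id="YOUR_VOICE_ID",
    )

    # Context aggregation between STT/LLM is omitted here to keep the sketch short.
    pipeline = Pipeline([
        transport.input(),   # audio frames in from the websocket client
        stt,                 # speech -> text
        llm,                 # text -> LLM response
        tts,                 # response text -> speech
        transport.output(),  # audio frames out to the websocket client
    ])

    await PipelineRunner().run(PipelineTask(pipeline))
```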

@ken-kuro

Apologies for the ambiguity. Could you share your current implementation code? Specifically, are you using FastAPI’s WebSocket class or the lower-level WebSocket server interface? The video is a bit hard to make out, but I’m guessing it’s the FastAPI WebSocket. If that’s the case, how have you applied the bot-runner pattern? From what I understand, the pipecat architecture shines when each process handles a single pipeline—scaling via multiprocessing or across multiple VMs—rather than through asynchronous or multithreaded code.
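
As a rough illustration of that "one pipeline per process" point, spawning bots with multiprocessing might look like the sketch below; run_bot is assumed to build and run a single pipeline, as in the sketch earlier in this thread, and the ports and bot count are placeholders.

```python
# Hypothetical sketch: one pipecat pipeline per OS process via multiprocessing.
import asyncio
import multiprocessing


def bot_worker(port: int):
    # run_bot is assumed to build and run a single pipeline (see the sketch above).
    from bot import run_bot
    asyncio.run(run_bot(port))


if __name__ == "__main__":
    # Each bot runs in its own process rather than as a coroutine or thread
    # inside one interpreter, so pipelines do not contend for the same GIL.
    processes = [
        multiprocessing.Process(target=bot_worker, args=(8100 + i,))
        for i in range(4)
    ]
    for p in processes:
        p.start()
    for p in processes:
        p.join()
```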

@abhijitpal1247
Author
abhijitpal1247 commented Apr 29, 2025

@ken-kuro sorry for the late reply. I am using the lower-level WebSocket server interface: WebsocketServerTransport.

Whenever a new request comes to bot_runner.py, I spawn a new bot with subprocess and, in the response, I return the port on which the new bot has been started, so that a socket connection can later be made to this newly spawned bot.

Using subprocess makes each bot a separate process.
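
A minimal sketch of that spawn-per-request pattern might look like the following; the FastAPI app, endpoint name, port allocation, and bot.py flags are placeholders for illustration, not the actual implementation.

```python
# bot_runner.py -- hypothetical sketch of the spawn-per-request pattern described above.
import subprocess
import sys

from fastapi import FastAPI

app = FastAPI()

_next_port = 8100  # naive port allocator, for illustration only


@app.post("/start")
async def start_bot():
    """Spawn one bot process and return the port its WebsocketServerTransport listens on."""
    global _next_port
    port = _next_port
    _next_port += 1

    # Each bot is a separate OS process; it imports pipecat and its service
    # SDKs when it starts, then serves a single pipeline on its own port.
    subprocess.Popen([sys.executable, "bot.py", "--port", str(port)])

    return {"port": port}
```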
