Bytewax: Python Stateful Stream Processing Framework

Bytewax is a Python framework and Rust-based distributed processing engine for stateful event and stream processing. Inspired by capabilities found in tools like Apache Flink, Spark, and Kafka Streams, Bytewax makes stream processing simpler and more accessible by integrating directly with the Python ecosystem you already know and trust.

Key Features:

  • Python-first: Leverage your existing Python libraries, frameworks, and tooling.
  • Stateful Stream Processing: Maintain and recover state automatically, enabling advanced online machine learning and complex event-driven applications.
  • Scalable & Distributed: Easily scale from local development to multi-node, multi-worker deployments on Kubernetes or other infrastructures.
  • Rich Connector Ecosystem: Ingest data from sources like Kafka, filesystems, or WebSockets, and output to data lakes, key-value stores, or other systems.
  • Flexible Dataflow API: Compose pipelines using operators (e.g., map, filter, join, fold_window) to express complex logic.

Quick Start

Install Bytewax from PyPI:

pip install bytewax

Install waxctl to manage deployments at scale.

Minimal Example:

from bytewax.dataflow import Dataflow
from bytewax import operators as op
from bytewax.testing import TestingSource

flow = Dataflow("quickstart")

# Input: Local test source for demonstration
inp = op.input("inp", flow, TestingSource([1, 2, 3, 4, 5]))

# Transform: Filter even numbers and multiply by 10
filtered = op.filter("keep_even", inp, lambda x: x % 2 == 0)
results = op.map("multiply_by_10", filtered, lambda x: x * 10)

# Output: Print results to stdout
op.inspect("print_results", results)

Run it locally:

python -m bytewax.run quickstart.py

How Bytewax Works

Bytewax uses a dataflow computational model, similar to systems like Flink or Spark, but with a Pythonic interface. You define a dataflow graph of operators and connectors:

  1. Input: Data sources (Kafka, file systems, S3, WebSockets, custom connectors)
  2. Operators: Stateful transformations (map, filter, fold_window, join) defined in Python.
  3. Output: Data sinks (databases, storage systems, message queues).

[Animation: Bytewax dataflow in action]

Stateful operations: Bytewax maintains distributed state, recovers it automatically after failures, and supports event-time windowing for advanced analytics and machine learning workloads.
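
For example, a per-key running count can be kept with stateful_map. The snippet below is a minimal sketch, not lifted from the docs: it assumes the signature used in recent releases, where the mapper receives the previous state (None for a key's first item) and returns a (new_state, output) pair, and it uses key_on to give the stream the keys that stateful operators require.

from bytewax.dataflow import Dataflow
from bytewax import operators as op
from bytewax.testing import TestingSource

flow = Dataflow("running_count")

# Simulated event stream: each item is a user id.
events = op.input("events", flow, TestingSource(["a", "b", "a", "a", "b"]))

# Stateful operators work on keyed streams, so key each item by user id.
keyed = op.key_on("key_by_user", events, lambda user: user)

def count(previous_total, _event):
    # previous_total is None the first time a key is seen.
    total = (previous_total or 0) + 1
    # Return (state to keep, value to emit downstream).
    return total, total

counts = op.stateful_map("count_per_user", keyed, count)

op.inspect("print_counts", counts)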

waxctl: Bytewax’s CLI tool for deploying and managing dataflows on cloud servers or Kubernetes clusters. Download waxctl here.


Operators Overview

Operators are the building blocks of Bytewax dataflows:

  • Stateless Operators: map, filter, inspect
  • Stateful Operators: reduce, fold_window, stateful_map
  • Windowing & Aggregations: Event-time and processing-time clocks with tumbling, sliding, and session windows.
  • Joins & Merges: Combine multiple input streams with merge, join, or advanced join patterns.
  • Premium Operators: Additional operators offered with the commercially licensed Bytewax Platform (see Scaling with the Bytewax Platform below).

For a comprehensive list, see the Operators API Documentation.
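
As one concrete illustration of combining streams, the sketch below merges two independent inputs into a single stream. It is a minimal sketch: it assumes merge takes a step id followed by the streams to combine, and it uses test data in place of real feeds.

from bytewax.dataflow import Dataflow
from bytewax import operators as op
from bytewax.testing import TestingSource

flow = Dataflow("merge_example")

# Two independent input streams standing in for separate feeds.
metrics = op.input("metrics", flow, TestingSource([1, 2, 3]))
alerts = op.input("alerts", flow, TestingSource([100, 200]))

# Interleave both streams so downstream operators see a single stream.
combined = op.merge("combine", metrics, alerts)

op.inspect("print_combined", combined)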


Connectors

Bytewax provides built-in connectors for common data sources and sinks such as Kafka, files, and stdout. You can also write your own custom connectors.

Examples of Built-in Connectors:

  • Kafka: bytewax.connectors.kafka
  • StdIn/StdOut: bytewax.connectors.stdio
  • Redis, S3, and More: See Bytewax connectors.

Community & Partner Connectors: Check out the Bytewax Module Hub for additional connectors contributed by the community.
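
As a concrete example, the Kafka source plugs into op.input like any other connector. The snippet below is a minimal sketch: it assumes KafkaSource accepts a list of broker addresses followed by a list of topic names, and the broker address and topic used here are placeholders.

from bytewax.dataflow import Dataflow
from bytewax import operators as op
from bytewax.connectors.kafka import KafkaSource

flow = Dataflow("kafka_example")

# Placeholder broker address and topic name, for illustration only.
messages = op.input("kafka_in", flow, KafkaSource(["localhost:9092"], ["example-topic"]))

# Inspect whatever message records the source emits, without transforming them.
op.inspect("print_messages", messages)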


Local Development, Testing, and Production

Local Development:

  • Use TestingSource and inspect operators for debugging.
  • Iterate quickly by running your flow with python -m bytewax.run my_flow.py.
  • Develop custom connectors and sinks locally with Python tooling you already know.

Testing:

  • Integration tests: Use TestingSource and run flows directly in CI environments (see the sketch after this list).
  • Unit tests: Test individual functions and operators as normal Python code.
  • More on Testing
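
The test below is a minimal pytest-style sketch of that approach; it assumes TestingSink collects emitted items into the given list and that run_main from bytewax.testing executes the whole flow in-process.

from bytewax.dataflow import Dataflow
from bytewax import operators as op
from bytewax.testing import TestingSource, TestingSink, run_main

def test_multiply_by_10():
    flow = Dataflow("test_flow")
    inp = op.input("inp", flow, TestingSource([1, 2, 3]))
    scaled = op.map("times_10", inp, lambda x: x * 10)

    # Collect output into a plain list so it can be asserted on.
    out = []
    op.output("out", scaled, TestingSink(out))

    # Run the dataflow to completion in the current process.
    run_main(flow)
    assert out == [10, 20, 30]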

Production:

  • Scale horizontally by running multiple workers on multiple machines.
  • Integrate with Kubernetes for dynamic scaling, monitoring, and resilience.
  • Utilize waxctl for standardized deployments and lifecycle management.

Deployment Options

Running Locally

For experimentation and small-scale jobs:

python -m bytewax.run my_dataflow.py

Multiple workers and threads:

python -m bytewax.run my_dataflow.py -w 2

Containerized Execution

Run Bytewax inside Docker containers for easy integration with container platforms. See the Bytewax Container Guide.

Scaling on Kubernetes

Use waxctl to package and deploy Bytewax dataflows to Kubernetes clusters for production workloads:

waxctl df deploy my_dataflow.py --name my-dataflow

Learn more about Kubernetes deployment.

Scaling with the Bytewax Platform

Our commercially licensed Bytewax Platform provides additional capabilities for running and managing dataflows at scale.


Examples


Community and Contributing

Join us on Slack for support and discussion.

Open issues on GitHub Issues for bug reports and feature requests. (For general help, use Slack.)

Contributions Welcome:


License

Bytewax is licensed under the Apache-2.0 license.


Built with ❤️ by the Bytewax community
