REBCAT: REactive Behavior Constraint-Aware Tree learning

A Human-Robot Collaboration Framework for Learning from Demonstrations

Author: Oleh Borys
Institution: Czech Technical University in Prague, FEL, Cybernetics and Robotics
Email: olehboures@gmail.com
Date: May 2025

This project was developed for a Bachelor's thesis at CTU FEL in Prague.


Note: The project has no commit history because it was transferred from CIIRC GitLab, where it was initially developed. For more details or to request access, please contact the author.

Overview

REBCAT (REactive Behavior Constraint-Aware Tree learning) is a framework for human-robot collaboration that learns task knowledge from demonstrations. The system uses constraint-aware Behavior Trees to represent and execute tasks, enabling reactive adaptation to changing environments.

The framework supports:

  • Learning target positions for objects based on their properties
  • Extracting action ordering constraints from multi-step demonstrations
  • Generating reactive Behavior Trees for task execution
  • Improving performance through human feedback

[Figure: Human-Robot Collaboration Scenario]

Architecture

The system consists of five main components:

  1. Perception Manager: Detects objects and their properties using camera input
  2. Task Planning Manager: Classifies objects and determines action ordering constraints
  3. Execution Manager: Generates and executes Behavior Trees for robot control
  4. Feedback Manager: Collects human feedback to improve system performance
  5. Model Manager: Handles training and retraining of classification models

[Figure: System Architecture Diagram]

Camera and robot image sources: Intel RealSense D455 camera and RoboGroove (Maguire et al., 2022).
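At a glance, a single interaction cycle threads through these five managers in order. The following is a minimal sketch of that loop with hypothetical class and method names (the actual interfaces in the codebase may differ):

    # Hypothetical component stubs; names and interfaces are illustrative only.
    class PerceptionManager:
        def detect(self):
            # Camera input -> detected objects with their properties
            return [{"name": "cube", "color": "red", "size": 0.05}]

    class TaskPlanningManager:
        def plan(self, objects):
            # Classify objects and order the resulting actions
            return [("place", obj) for obj in objects]

    class ExecutionManager:
        def run(self, plan):
            # Generate and tick a Behavior Tree for the planned actions
            for action, obj in plan:
                print(f"{action} -> {obj['name']}")

    class FeedbackManager:
        def collect(self):
            # Gather human corrections after execution
            return []

    class ModelManager:
        def retrain(self, corrections):
            # Retrain the classification model when feedback arrives
            pass

    perception, planner = PerceptionManager(), TaskPlanningManager()
    executor, feedback, models = ExecutionManager(), FeedbackManager(), ModelManager()

    objects = perception.detect()
    executor.run(planner.plan(objects))
    models.retrain(feedback.collect())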

Installation

  1. Clone the repository
  2. Install dependencies:
    pip install -r requirements.txt
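For example, from a terminal (assuming the public GitHub remote; use your granted CIIRC GitLab remote where applicable):

    git clone https://github.com/CogniSeeker/REBCAT.git
    cd REBCAT
    pip install -r requirements.txt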

Configuration

The main configuration parameters are located in config.py. Key parameters to adjust:

For running with a physical robot:

  • WITH_ROS: Set to True to use ROS for robot control
  • CAMERA_TOPIC: Set the ROS topic for camera input
  • POSITION_TOLERANCE: Adjust tolerance for object position verification
  • TICK_SLEEP_TIME: Sleep time between BT ticks (seconds)
  • OBJECT_NOT_VISIBLE_TIMEOUT: Time before declaring an object missing
  • DETECTION_VALIDITY_TIMEOUT: Timeout for image validity
  • IMAGE_TIMEOUT: Maximum time to wait for camera image
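For reference, these settings live as plain Python constants in config.py. A sketch with illustrative values (the actual defaults and the camera topic name will differ per setup):

    # config.py (excerpt) -- values below are assumptions, not project defaults
    WITH_ROS = True
    CAMERA_TOPIC = "/camera/color/image_raw"  # hypothetical RealSense topic
    POSITION_TOLERANCE = 0.02                 # meters, assumed unit
    TICK_SLEEP_TIME = 0.1                     # seconds between BT ticks
    OBJECT_NOT_VISIBLE_TIMEOUT = 5.0          # seconds before an object is declared missing
    DETECTION_VALIDITY_TIMEOUT = 1.0          # seconds a detection remains valid
    IMAGE_TIMEOUT = 10.0                      # max seconds to wait for a camera image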

For dataset creation:

  • DATASETS_PATH: Path to store generated datasets
  • RANDOM_SEED: Set for reproducible results
  • NUMBER_OF_INITIAL_DEMOS: Number of demonstrations to start with
  • EXPERIMENT_NAME: Name of the current experiment
  • NUMBER_OF_MOCK_OBJECTS: Number of mock objects to generate

For system evaluation:

  • EVALUATION_MODE: Choose between "compare_models_accuracy" and "compare_simple_clever_feedback"
  • MODEL_TYPES: Select models to evaluate ("decision_tree", "catboost_native", etc.)
  • NUM_SPLITS, NUM_REPETITIONS: Configure cross-validation parameters
  • WITH_RETRAINING: Whether to retrain models at each evaluation step
  • WITH_FEEDBACK: Whether to use feedback for training
  • TARGET_CLASS_BALANCED: Sample examples with equal probability per class
  • TARGET_ACCURACY: Target accuracy threshold for evaluation (default: 0.99)
  • TRACK_RULE_CURVES: Whether to track rule learning curves
  • OUTPUT_FREQUENCY: Export trees and results every N steps
  • MAX_DEMOS: Maximum number of demonstrations to use
  • NUMBER_OF_INITIAL_DEMOS: Number of demonstrations to start with
  • RULES_FILE: Path to ground truth rules for evaluation

For model configuration:

  • MAIN_MODEL_TYPE: Model used in BT for classification
  • DECISION_TREE_MAX_DEPTH: Maximum depth of decision trees
  • DECISION_TREE_CRITERION: Criterion for decision tree splits
  • CATBOOST_ITERATIONS: Number of iterations for CatBoost models
  • CATBOOST_DEPTH: Maximum depth of CatBoost trees
  • CATBOOST_LEARNING_RATE: Learning rate for CatBoost model
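Assuming the two model types are backed by scikit-learn and the catboost package (a reasonable guess given the names, not confirmed here), these parameters would map onto the model constructors roughly as follows:

    from sklearn.tree import DecisionTreeClassifier
    from catboost import CatBoostClassifier

    # Illustrative values; see config.py for the project's actual settings
    tree_model = DecisionTreeClassifier(
        max_depth=5,        # DECISION_TREE_MAX_DEPTH
        criterion="gini",   # DECISION_TREE_CRITERION
    )
    catboost_model = CatBoostClassifier(
        iterations=200,     # CATBOOST_ITERATIONS
        depth=4,            # CATBOOST_DEPTH
        learning_rate=0.1,  # CATBOOST_LEARNING_RATE
        verbose=False,
    )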

Usage

Running the System

To start the system with a physical robot:

python main.py

The program will:

  1. Initialize all components and load training data
  2. Train the initial classification model
  3. Present task type selection (single-step or multi-step)
  4. Detect objects and classify them
  5. Generate a Behavior Tree for task execution
  6. Execute the task
  7. Collect feedback and improve the model

Creating a New Dataset

To generate a new feature space dataset:

python -m tools.generate_feature_space

Configure dataset parameters in tools/generate_feature_space.py:

  • Adjust object properties (class names, color values, etc.)
  • Configure value distributions (uniform, midpoint, or cluster; see the sketch after this list)
  • Set ranges for continuous properties (size, weight)
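The three distribution modes could be implemented along these lines; a minimal sketch with a hypothetical helper (sample_property is not the project's actual function):

    import numpy as np

    def sample_property(low, high, mode="uniform", n=1, rng=None):
        """Sample n values for a continuous property in [low, high]."""
        rng = rng or np.random.default_rng()
        if mode == "uniform":
            return rng.uniform(low, high, n)
        if mode == "midpoint":
            # Every object gets the midpoint of the range
            return np.full(n, (low + high) / 2)
        if mode == "cluster":
            # Values bunch around a randomly chosen center
            center = rng.uniform(low, high)
            return np.clip(rng.normal(center, 0.05 * (high - low), n), low, high)
        raise ValueError(f"unknown mode: {mode}")

    sizes = sample_property(0.02, 0.10, mode="cluster", n=5)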

For full dataset preparation including folds and rules:

python -m tools.run_data_preparation

Evaluating the System

To evaluate classifier performance:

python -m tools.classifier_evaluation.evaluate_classifier

This will:

  1. Run evaluations for all configured model types
  2. Generate learning curves comparing model performance
  3. Save results to the configured output directory

Human-Robot Workflow

  1. The human prepares the environment with objects
  2. The system detects objects and plans a task
  3. The robot executes the task using the generated Behavior Tree
  4. The human provides feedback on execution
  5. The system learns from feedback to improve future performance

Technical Details

Classification

The system supports multiple classification models:

  • Decision Tree: Highly interpretable with hierarchical decision structure
  • CatBoost: Gradient boosting approach with stronger predictive performance on small datasets
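The decision tree's interpretability can be demonstrated directly: scikit-learn can print the learned rules as text. A toy example with made-up features (not the project's actual feature space):

    from sklearn.tree import DecisionTreeClassifier, export_text

    # Toy data: [size_m, is_red] -> target bin
    X = [[0.05, 1], [0.20, 0], [0.07, 1], [0.25, 0]]
    y = ["bin_small", "bin_large", "bin_small", "bin_large"]

    clf = DecisionTreeClassifier(max_depth=2).fit(X, y)
    print(export_text(clf, feature_names=["size_m", "is_red"]))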

Action Ordering Constraints

For multi-step tasks, the system extracts ordering constraints using a Preconditions Extraction Algorithm that:

  • Analyzes demonstrations to find must-precede relationships
  • Identifies flexible ordering between actions
  • Removes transitive edges to create a minimal precedence graph
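The core idea can be sketched in a few lines: keep an edge a -> b only if a precedes b in every demonstration containing both, then drop edges implied by transitivity. A simplified version (not the project's actual algorithm; it assumes demonstrations are lists of action names and the result is a DAG):

    from itertools import permutations

    def must_precede(demos, a, b):
        """True if a appears before b in every demo that contains both."""
        seen_together = False
        for demo in demos:
            if a in demo and b in demo:
                seen_together = True
                if demo.index(a) > demo.index(b):
                    return False
        return seen_together

    def precedence_graph(demos):
        actions = {a for demo in demos for a in demo}
        edges = {(a, b) for a, b in permutations(actions, 2)
                 if must_precede(demos, a, b)}
        # Transitive reduction: drop a -> c when a -> b and b -> c exist
        reduced = set(edges)
        for a, b in edges:
            for b2, c in edges:
                if b == b2 and (a, c) in reduced:
                    reduced.discard((a, c))
        return reduced

    demos = [
        ["pick_base", "place_base", "pick_top", "place_top"],
        ["pick_base", "pick_top", "place_base", "place_top"],
    ]
    print(precedence_graph(demos))  # place_base vs. pick_top stays flexible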

Behavior Tree Generation

The system generates a hierarchical BT with four main components:

  • HumanHelpBranch: Handles human intervention when needed
  • ObjectDetector: Monitors object positions
  • TaskBranch: Contains action sequences for execution
  • ReturnHome: Returns the robot to home position
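Assuming a Python BT library such as py_trees (the source does not name the implementation, and the actual composite types depend on the generated task), the skeleton of such a tree might look like this, with the four branches as stub leaves:

    import py_trees

    class Stub(py_trees.behaviour.Behaviour):
        """Placeholder leaf that always succeeds."""
        def update(self):
            return py_trees.common.Status.SUCCESS

    # Top-level composite mirroring the four components described above
    root = py_trees.composites.Sequence(name="REBCAT", memory=False)
    root.add_children([
        Stub(name="HumanHelpBranch"),  # human intervention when needed
        Stub(name="ObjectDetector"),   # monitors object positions
        Stub(name="TaskBranch"),       # action sequences for execution
        Stub(name="ReturnHome"),       # drives the robot back to home
    ])

    tree = py_trees.trees.BehaviourTree(root)
    tree.tick()
    print(py_trees.display.ascii_tree(root))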

About

REactive Behavior Constraint-Aware Tree learning (REBCAT) is a human-robot collaboration framework for learning tasks from demonstrations. Interpretable, fast, object-centric, and reactive.
