Double free of object malloc error when running `beast::severities::kTrace` · Issue #5388 · XRPLF/rippled · GitHub

Double free of object malloc error when running beast::severities::kTrace #5388

Open
dangell7 opened this issue Apr 4, 2025 · 3 comments

dangell7 (Collaborator) commented Apr 4, 2025

Issue Description

rippled(38872,0x16d2e7000) malloc: Double free of object 0x12f905ff0
rippled(38872,0x16d2e7000) malloc: *** set a breakpoint in malloc_error_break to debug
zsh: abort      ./rippled -u ripple.app.Batch

Steps to Reproduce

Run a test with the following env:

test::jtx::Env env{*this, envconfig(), features, nullptr, beast::severities::kTrace};

Expected Result

Actual Result

Environment

Supporting Files

sublimator (Contributor) commented Apr 4, 2025 via email

sublimator (Contributor) commented Apr 5, 2025

https://github.com/sublimator/xahaud/blob/81b1fb11f76d481ab6dab24f4a4d1f04eac4ec25/src/test/unit_test/SuiteJournal.h#L87-L88

    // Only write the string if the level at least equals the threshold.
    if (level >= threshold())
    {
        static std::mutex log_mutex_;
        std::lock_guard lock(log_mutex_);
        suite_.log << s << partition_ << text << std::endl; // <----- std::endl triggers a flush, iffy with multiple threads
    }

That's the simplest fix, which I used for the jshook emit tests.
You could adapt it to rippled.

sublimator (Contributor) commented

Analysis of Multithreaded Logging Issues in the Rippled Codebase

After analyzing the code snippets, I can identify a potential threading issue in the logging system that could lead to a double free situation. Let me walk through the problem step by step.

Thread Creation and Logging Setup

In the Env::AppBundle::AppBundle constructor, we see:

  1. A logging sink is created:

    setDebugLogSink(std::make_unique<SuiteJournalSink>("Debug", kFatal, suite));
  2. A separate thread is spawned to run the application:

    thread = std::thread([&]() { app->run(); });

The Log Write Path

Let's trace the path of a log message:

  1. SuiteJournalSink::writeAlways writes to the suite's log with an std::endl:

    suite_.log << s << partition_ << text << std::endl;
  2. The log_os class is a wrapper around std::basic_ostream using a custom log_buf buffer

  3. The critical part: std::endl does two things:

    • Inserts a newline character
    • Forces a flush of the stream
  4. When the stream flushes, it calls the underlying buffer's sync() method:

    int sync() override
    {
        auto const& s = this->str();
        if (s.size() > 0)
            suite_.runner_->log(s);
        this->str("");
        return 0;
    }

The Race Condition

The issue occurs when multiple threads write to the log simultaneously (a standalone sketch of the pattern follows this list):

  1. Thread A and Thread B both write to the log with std::endl
  2. Both threads trigger the log_buf::sync() method
  3. Thread A calls this->str() to get the current buffer contents
  4. Thread B also calls this->str() and gets the same buffer contents
  5. Thread A calls suite_.runner_->log(s) and then this->str("") to clear the buffer
  6. Thread B then calls suite_.runner_->log(s) with the same string object
  7. Thread B calls this->str("") on an already cleared buffer
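
To make this concrete, here is a minimal standalone sketch (not rippled code) of the same pattern: two threads flushing through one unsynchronized stream and then calling str() / str("") on it, much like the unguarded log_buf. Built with -fsanitize=thread this is reported as a data race, and on some platforms the corrupted internal string can surface as a malloc abort like the one above.

    #include <sstream>
    #include <thread>

    int main()
    {
        std::stringstream shared;  // stands in for the shared, unsynchronized log_buf

        auto writer = [&shared](char tag) {
            for (int i = 0; i < 100000; ++i)
            {
                shared << tag << i << std::endl;  // std::endl forces a flush on every line
                auto s = shared.str();            // read the buffer contents, as sync() does
                shared.str("");                   // replace the internal string, racing the other thread
                (void)s;
            }
        };

        std::thread a(writer, 'A');
        std::thread b(writer, 'B');
        a.join();
        b.join();
    }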

The Double Free Problem

The double free issue happens because:

  1. The std::basic_stringbuf (parent class of log_buf) maintains an internal string
  2. When str() is called, it provides access to this string (often returning a copy)
  3. When str("") is called, it replaces the internal string, potentially destroying the old one
  4. If two threads call str("") in quick succession after both getting the same string contents, the second thread might try to operate on memory that's already been freed

Additionally, the runner::log() method has mutex protection:

void runner::log(std::string const& s)
{
    std::lock_guard lock(mutex_);
    // ...
}

But this only protects the runner's state, not the log_buf state. There's no synchronization at the log_buf level.
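
One way to close that gap would be to add synchronization at the log_buf level itself. The sketch below is illustrative only: it is the sync() shown above with a lock added, where log_mutex_ is a hypothetical std::mutex member of log_buf. Serializing sync() only protects the flush path, so locking around the whole insertion in writeAlways (as in the SuiteJournal.h snippet above) remains the more complete fix.

    // Sketch only: log_mutex_ is a hypothetical std::mutex member added to log_buf.
    int sync() override
    {
        std::lock_guard<std::mutex> lock(log_mutex_);  // serialize concurrent flushes
        auto const& s = this->str();
        if (s.size() > 0)
            suite_.runner_->log(s);
        this->str("");
        return 0;
    }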

How std::endl Triggers the Issue

The key trigger is std::endl which forces an immediate flush. Without this flush, the buffer would accumulate characters and only sync when it gets full or when the program decides to flush. The explicit flush from std::endl makes the race condition much more likely by forcing sync() to be called after every log line.
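
For illustration, the difference is easy to see in isolation (plain standard-library behavior, nothing rippled-specific):

    #include <iostream>

    int main()
    {
        std::cout << "deferred" << '\n';      // newline only; the flush is left to the stream and buffer
        std::cout << "flushed" << std::endl;  // newline plus an immediate flush, which calls the buffer's sync()
    }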

This is a classic example of why thread safety is critical in logging systems, as they're often accessed from multiple threads simultaneously.
