Rensa (Swedish for "clean") is a high-performance MinHash suite written in Rust with Python bindings. It's designed for efficient similarity estimation and deduplication of large datasets. It's up to 40x faster than `datasketch` for MinHash operations while producing the same results and consuming less memory.
Rensa initially implemented a variant of the MinHash algorithm (R-MinHash) that combined ideas from traditional MinHash and the C-MinHash algorithm. It now also offers a more direct C-MinHash implementation and OptDensMinHash, which uses optimal densification.
Rensa is particularly useful in scenarios where you need to:
- Quickly estimate the similarity between large sets of data
- Deduplicate large datasets
- Perform locality-sensitive hashing (LSH) for approximate nearest neighbor search
Use cases include:
- Content deduplication in large document collections
- Identifying similar items in recommendation systems
- Clustering of high-dimensional data
- Near-duplicate detection in web crawling
Want to try Rensa right away? Check out our interactive Google Colab notebook that demonstrates how to use Rensa to deduplicate a dataset from Hugging Face.
Thanks to mlabonne for the Colab notebook!
Rensa: a novel high-performance MinHash implementation in Rust
Rensa offers three high-performance MinHash variants in Rust: R-MinHash (its original novel approach), C-MinHash (an implementation closely following the C-MinHash paper), and OptDensMinHash (based on optimal densification techniques). All are designed for efficient similarity estimation and leverage common strategies for speed and memory efficiency:
- Fast Hash Functions: Rensa employs fast, non-cryptographic hash functions (based on FxHash or Murmur3) for processing input items (see the sketch after this list).
- Memory-Efficient Data Structures: Implementations use compact data structures to minimize memory usage while maintaining fast access times.
- Optimized Routines: Core operations are optimized using techniques like batch processing and vectorized operations where appropriate.
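For intuition, here is a minimal Python sketch of an FxHash-style mixing step. This is illustrative only: Rensa's actual hashing happens in Rust and may differ in detail.

```python
MASK = (1 << 64) - 1
FX_SEED = 0x517cc1b727220a95  # 64-bit constant used by Rust's FxHash

def fxhash64(data: bytes) -> int:
    # Per-byte variant of the FxHash mixing step:
    # rotate-left by 5, XOR in the input, then multiply by the seed constant.
    h = 0
    for b in data:
        h = (((h << 5) | (h >> 59)) & MASK) ^ b
        h = (h * FX_SEED) & MASK
    return h

print(hex(fxhash64(b"example token")))
```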
R-MinHash was Rensa's initial novel approach. Key aspects of Rensa's `RMinHash` implementation include:
- Efficient Permutation Generation: Instead of storing full permutations or using k independent hash functions, Rensa's `RMinHash` uses a unique pair of random numbers (a, b) for each of the `num_perm` permutations. These are used to generate hash values on the fly for each item (see the sketch after this list).
- Simplified Approach: While inspired by ideas related to C-MinHash, `RMinHash` is a distinct, simpler approach:
  - It does not apply an initial global permutation (σ) to the input data's hash in the same way as described in the C-MinHash paper for its primary permutation step.
  - It uses `num_perm` distinct pairs of random numbers (a, b) to simulate `num_perm` independent hash functions, rather than deriving them from a smaller set of parameters in a circulant manner.
- Trade-off: `RMinHash`'s approach trades some of the potential variance-reduction benefits of more complex MinHash schemes (like full C-MinHash) for simplicity and good performance. It still offers better performance than traditional MinHash in many scenarios.
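As a rough plain-Python illustration of the idea (a hypothetical stand-in, not Rensa's Rust internals):

```python
import hashlib
import random

MASK = (1 << 64) - 1

def make_params(num_perm, seed):
    # One (a, b) pair per signature slot -- a simplified, hypothetical
    # stand-in for Rensa's internal parameter generation.
    rng = random.Random(seed)
    return [(rng.randrange(1, 1 << 64) | 1, rng.randrange(1 << 64))
            for _ in range(num_perm)]

def item_hash(token: str) -> int:
    # Stable 64-bit hash of a token (Rensa uses faster hashes internally).
    return int.from_bytes(hashlib.blake2b(token.encode(), digest_size=8).digest(), "big")

def update_signature(signature, params, h):
    # Slot k applies h_k(x) = (a_k * x + b_k) mod 2^64 and keeps the minimum.
    for k, (a, b) in enumerate(params):
        hv = (a * h + b) & MASK
        if hv < signature[k]:
            signature[k] = hv

params = make_params(num_perm=128, seed=42)
signature = [MASK] * 128  # each slot starts at "infinity"
for token in "the quick brown fox".split():
    update_signature(signature, params, item_hash(token))
```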
Rensa's Locality-Sensitive Hashing (LSH) implementation, `RMinHashLSH`, currently utilizes the `RMinHash` variant for its index. A generic banding sketch follows.
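For readers new to LSH banding, here is a generic sketch of the idea in plain Python (not Rensa's internals): the signature is split into `num_bands` chunks, and documents that share any chunk within the same band land in the same bucket and become candidate pairs.

```python
from collections import defaultdict

def band_keys(signature, num_bands):
    # Split the signature into num_bands equal chunks of rows-per-band values.
    rows = len(signature) // num_bands
    return [(b, tuple(signature[b * rows:(b + 1) * rows]))
            for b in range(num_bands)]

buckets = defaultdict(set)

def insert(doc_id, signature, num_bands=16):
    # Index the document under one bucket per band.
    for key in band_keys(signature, num_bands):
        buckets[key].add(doc_id)

def query(signature, num_bands=16):
    # Any document sharing at least one band bucket is a candidate match.
    candidates = set()
    for key in band_keys(signature, num_bands):
        candidates |= buckets[key]
    return candidates
```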
Rensa also includes `CMinHash`, an implementation more directly aligned with the principles of the C-MinHash algorithm from the paper "C-MinHash: Rigorously Reducing K Permutations to Two". Key aspects of this implementation are:
- Two-Stage Hashing: It utilizes two sets of universal hash function parameters for its permutation scheme (see the sketch after this list):
  - An initial hash transformation (σ) is applied to the hash of each input item using parameters `sigma_a` and `sigma_b`.
  - A second pair of parameters, `pi_c` and `pi_d`, is used in combination with the σ-transformed item hash to generate the `num_perm` values in the MinHash signature. Specifically, for the k-th hash slot (where k runs from 0 to `num_perm - 1`), the value is derived from `pi_c * sigma_transformed_hash + (pi_c * k + pi_d)`. The `(pi_c * k + pi_d)` terms are precomputed for each k to enhance efficiency.
- Highly Optimized Routines: The `update` and `jaccard` methods in `CMinHash` are heavily optimized. This includes batch processing of input items, structuring calculations to improve cache utilization, and using vectorized operations (e.g., processing data in fixed-size chunks, such as blocks of 16 or 8) for faster computations.
- Performance Focus: This implementation is specifically engineered for maximum single-threaded performance through these aggressive optimizations and careful memory access patterns.
Rensa also provides `OptDensMinHash`, which implements MinHash enhanced by an optimal densification strategy. This approach aims to improve accuracy, especially for sparse datasets or smaller numbers of permutations, by ensuring that MinHash signatures are always fully populated.
- Densification: If, after processing all input items, some slots in the MinHash signature remain empty (i.e., no item hashed to them as the minimum), this algorithm fills these empty slots using values from other, non-empty slots in a principled manner. This "densification" ensures a complete signature.
- Theoretical Basis: The core ideas are drawn from research on densified MinHash algorithms, such as:
- Shrivastava, A. (2017). Optimal Densification for Fast and Accurate Minwise Hashing. PMLR.
- Mai, T., et al. (2020). On densification for MinWise Hashing. PMLR.
- Usage: `OptDensMinHash` is designed for unweighted data. The densification process is automatically triggered internally when the signature is requested (e.g., via `digest()` or `jaccard()`). A usage sketch follows this list.
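A minimal usage sketch, assuming `OptDensMinHash` shares the constructor and method signatures of the other variants (`num_perm`/`seed`, `update`, `digest`, `jaccard`), as the deduplication example below also suggests:

```python
from rensa import OptDensMinHash

m1 = OptDensMinHash(num_perm=128, seed=42)
m2 = OptDensMinHash(num_perm=128, seed=42)

m1.update("a short sparse document".split())
m2.update("another short sparse document".split())

# Densification runs internally before the signature is returned,
# so digest() is always fully populated.
signature = m1.digest()
print(f"Estimated Jaccard similarity: {m1.jaccard(m2):.4f}")
```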
These design choices result in a suite of MinHash implementations that are fast, memory-efficient, and suitable for large-scale similarity estimation and deduplication tasks. Benchmarks show that Rensa's implementations offer significant performance improvements over traditional MinHash libraries like `datasketch`.
You can install Rensa using `pip`. It's available on all platforms:

```bash
pip install rensa
```
Here's an example of how to use Rensa's MinHash implementations (e.g., `RMinHash`, `CMinHash`) for direct deduplication:
```python
from datasets import load_dataset
from rensa import RMinHash, CMinHash # Or OptDensMinHash
from tqdm import tqdm
# Define a function to generate MinHash (works for RMinHash, CMinHash)
def generate_minhash_signature(text, minhash_class, num_perm=128, seed=42):
m = minhash_class(num_perm=num_perm, seed=seed)
m.update(text.split())
return m
def deduplicate_dataset_direct(dataset, text_column="sql", minhash_class=RMinHash, num_perm=128, desc="Deduplicating"):
unique_hashes = set()
deduplicated_indices = []
for idx, example in tqdm(enumerate(dataset), total=len(dataset), desc=desc):
minhash_obj = generate_minhash_signature(example[text_column], minhash_class, num_perm)
hash_tuple = tuple(minhash_obj.digest())
if hash_tuple not in unique_hashes:
unique_hashes.add(hash_tuple)
deduplicated_indices.append(idx)
return deduplicated_indices
def main_direct_deduplication():
print("Loading dataset...")
sql_dataset_dict = load_dataset("gretelai/synthetic_text_to_sql")
sql_dataset = sql_dataset_dict["train"]
print("Deduplicating dataset with R-MinHash...")
deduplicated_indices_r = deduplicate_dataset_direct(
sql_dataset,
text_column="sql",
minhash_class=RMinHash,
desc="R-MinHash Deduplication"
)
deduplicated_dataset_r = sql_dataset.select(deduplicated_indices_r)
print(f"Original dataset size: {len(sql_dataset)}")
print(f"Deduplicated dataset size (R-MinHash): {len(deduplicated_dataset_r)}")
print(f"Rows removed (R-MinHash): {len(sql_dataset) - len(deduplicated_dataset_r)}")
# Example with C-MinHash
# print("Deduplicating dataset with C-MinHash...")
# deduplicated_indices_c = deduplicate_dataset_direct(
# sql_dataset,
# text_column="sql",
# minhash_class=CMinHash,
# desc="C-MinHash Deduplication"
# )
# deduplicated_dataset_c = sql_dataset.select(deduplicated_indices_c)
# print(f"Deduplicated dataset size (C-MinHash): {len(deduplicated_dataset_c)}")
if __name__ == "__main__":
    main_direct_deduplication()
```
Here's a more direct example of using `CMinHash` to calculate Jaccard similarity:
```python
from rensa import CMinHash
# Example texts
text1 = "This is an example sentence for CMinHash."
text2 = "This is another example sentence, slightly different from the first."
# Initialize CMinHash objects
num_permutations = 256
seed = 12345
c_minhash1 = CMinHash(num_perm=num_permutations, seed=seed)
c_minhash2 = CMinHash(num_perm=num_permutations, seed=seed)
# Update with words from each text
c_minhash1.update(text1.split())
c_minhash2.update(text2.split())
# Calculate Jaccard similarity
similarity = c_minhash1.jaccard(c_minhash2)
print(f"Estimated Jaccard similarity (CMinHash, {num_permutations} perm): {similarity:.4f}")
# Get signatures
signature1 = c_minhash1.digest()
# print(f"C-MinHash signature 1: {signature1}")
Here's an example of how to use `RMinHashLSH` to deduplicate a dataset. This approach is more efficient for larger datasets. Key LSH parameters are set to example values within the function.
```python
from datasets import load_dataset
from rensa import RMinHash, RMinHashLSH
from tqdm import tqdm
def deduplicate_dataset_with_lsh_simple(dataset, text_column="sql"):
num_perm = 128
seed = 42
lsh_threshold = 0.8
num_bands = 16
final_jaccard_threshold = 0.85
if num_perm % num_bands != 0:
raise ValueError(f"num_bands ({num_bands}) must divide num_perm ({num_perm}).")
minhashes = {}
for idx, example in tqdm(enumerate(dataset), total=len(dataset), desc="1. Generating RMinHashes"):
text_content = str(example[text_column])
tokens = text_content.split()
m = RMinHash(num_perm=num_perm, seed=seed)
m.update(tokens)
minhashes[idx] = m
lsh_index = RMinHashLSH(threshold=lsh_threshold, num_perm=num_perm, num_bands=num_bands)
for doc_id, rminhash_obj in tqdm(minhashes.items(), desc="2. Indexing into LSH"):
lsh_index.insert(doc_id, rminhash_obj)
to_remove = set()
sorted_doc_ids = sorted(minhashes.keys())
for doc_id in tqdm(sorted_doc_ids, desc="3. Querying LSH & Deduplicating"):
if doc_id in to_remove:
continue
query_minhash = minhashes[doc_id]
candidate_ids = lsh_index.query(query_minhash)
for candidate_id in candidate_ids:
if candidate_id == doc_id or candidate_id in to_remove:
continue
candidate_minhash = minhashes[candidate_id]
actual_jaccard = query_minhash.jaccard(candidate_minhash)
if actual_jaccard >= final_jaccard_threshold:
# Keep the item with the smaller original index
if doc_id < candidate_id:
to_remove.add(candidate_id)
else:
to_remove.add(doc_id)
break
deduplicated_indices = [idx for idx in sorted_doc_ids if idx not in to_remove]
return deduplicated_indices
def main_lsh_deduplication_simple():
print("Loading dataset...")
try:
sql_dataset_dict = load_dataset("gretelai/synthetic_text_to_sql")
sql_dataset = sql_dataset_dict["train"]
except Exception as e:
print(f"Failed to load dataset: {e}. Ensure 'datasets' is installed or use a local dataset.")
return
print("Deduplicating dataset with RMinHashLSH...")
deduplicated_indices_lsh = deduplicate_dataset_with_lsh_simple(
sql_dataset,
text_column="sql"
)
deduplicated_dataset_lsh = sql_dataset.select(deduplicated_indices_lsh)
print(f"Original dataset size (train split): {len(sql_dataset)}")
print(f"Deduplicated dataset size (RMinHashLSH): {len(deduplicated_dataset_lsh)}")
print(f"Rows removed (RMinHashLSH): {len(sql_dataset) - len(deduplicated_dataset_lsh)}")
if __name__ == "__main__":
    main_lsh_deduplication_simple()
```
Rensa now supports inline deduplication, perfect for scenarios where you receive continuous streams of data and need to check each new record against existing ones in real time.
```python
from rensa import RMinHash, RMinHashDeduplicator
def inline_deduplication_example():
# Initialize the deduplicator with similarity threshold
# Adjust LSH parameters for the threshold
deduplicator = RMinHashDeduplicator(
threshold=0.7, # Jaccard similarity threshold
num_perm=128, # Number of permutations
use_lsh=True, # Use LSH for efficiency
num_bands=32, # More bands = more sensitive to lower similarities
)
# Simulate streaming data with varying similarities
document_stream = [
{"id": "001", "text": "The quick brown fox jumps over the lazy dog"},
{
"id": "002",
"text": "The quick brown fox jumps over the lazy dog today",
}, # Very similar
{
"id": "003",
"text": "A fast brown fox leaps over a sleepy dog",
}, # Somewhat similar
{"id": "004", "text": "Lorem ipsum dolor sit amet consectetur"},
{
"id": "005",
"text": "The quick brown fox jumps over the lazy dog",
}, # Exact duplicate
{
"id": "006",
"text": "Quick brown foxes jump over lazy dogs",
}, # Similar paraphrase
{"id": "007", "text": "Completely different content here"},
]
# Process each document as it arrives
for doc in document_stream:
# Create MinHash for the new document
minhash = RMinHash(num_perm=128, seed=42)
minhash.update(doc["text"].split())
# Check if it's a duplicate
if deduplicator.is_duplicate(doc["id"], minhash):
# Find which documents it duplicates
duplicates = deduplicator.get_duplicates(minhash)
print(f"Document {doc['id']} is a duplicate of: {duplicates}")
else:
# Add to the deduplicator if unique
if deduplicator.add(doc["id"], minhash):
print(f"Document {doc['id']} added (unique)")
print(f"\nTotal unique documents: {deduplicator.len()}")
if __name__ == "__main__":
    inline_deduplication_example()
```
All deduplicators (`RMinHashDeduplicator`, `CMinHashDeduplicator`, `OptDensMinHashDeduplicator`) support the following methods (a quick tour is sketched after this list):
- `add(key: str, minhash) -> bool`: Add a new item if it's not a duplicate. Returns True if added.
- `is_duplicate(key: str, minhash) -> bool`: Check if an item is a duplicate without adding it.
- `get_duplicates(minhash) -> List[str]`: Get the list of keys that are duplicates of the given MinHash.
- `remove(key: str) -> bool`: Remove an item from the deduplicator.
- `len() -> int`: Get the number of unique items stored.
- `clear()`: Remove all items from the deduplicator.
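Putting the API together in one brief sketch (parameter values and keys are illustrative):

```python
from rensa import RMinHash, RMinHashDeduplicator

dedup = RMinHashDeduplicator(threshold=0.7, num_perm=128,
                             use_lsh=True, num_bands=32)

m = RMinHash(num_perm=128, seed=42)
m.update("the quick brown fox".split())

print(dedup.add("doc-1", m))        # True: first occurrence is stored
print(dedup.is_duplicate("doc-2", m))  # True: identical signature already indexed
print(dedup.get_duplicates(m))      # ["doc-1"]
print(dedup.remove("doc-1"))        # True: item removed
print(dedup.len())                  # 0
dedup.clear()                       # empties the deduplicator
```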
Performance Tips for Inline Deduplication:
- Use LSH for large datasets: When dealing with thousands of documents, enable LSH (`use_lsh=True`) for `RMinHashDeduplicator`.
- Adjust the threshold: Lower thresholds catch more duplicates but may produce false positives.
- Batch when possible: If you receive data in small batches, process them together for better performance.
- Memory management: For very large datasets, consider implementing a sliding window or periodic cleanup of old entries.
Rensa offers three MinHash implementations (`RMinHash`, `CMinHash`, `OptDensMinHash`), each with different trade-offs compared to each other and to the popular `datasketch` library.
Based on the latest `advanced_benchmark.py` results (averaged over 5 runs on the `gretelai/synthetic_text_to_sql` dataset, 100,000 rows, on a MacBook Pro M2 with 32 GB of RAM):
- Speed (at 256 permutations):
  - `CMinHash` is consistently the fastest, with an average execution time of 5.47 seconds.
  - `RMinHash` is also very fast, averaging 5.58 seconds.
  - `OptDensMinHash` averages 12.36 seconds.
  - `datasketch` is considerably slower, averaging 92.45 seconds.

  This makes `CMinHash` approximately 16.9x faster than `datasketch`, `RMinHash` approximately 16.6x faster, and `OptDensMinHash` approximately 7.5x faster (all at 256 permutations).
- Accuracy (Jaccard similarity of deduplicated sets vs. `datasketch`, 128 permutations):
  - `RMinHash` produces deduplication results identical to `datasketch` (Jaccard similarity of 1.0000 between their output sets of unique items, with 99,262 common items).
  - `OptDensMinHash` yields results very close to `datasketch`, with a Jaccard similarity of 0.9997 (99,233 common items).
  - `CMinHash` also yields results very close to `datasketch`, with a Jaccard similarity of 0.9996 (99,223 common items).

  This indicates that while all Rensa variants are highly effective for similarity estimation, `RMinHash` exactly matches `datasketch`'s deduplication output in this benchmark, while `CMinHash` and `OptDensMinHash` produce extremely similar results.
- Recommendation:
  - For most use cases, `RMinHash` provides an excellent balance of high speed (up to ~16.6x faster than `datasketch`) and accuracy (matching `datasketch`'s deduplication results). It remains the generally recommended algorithm.
  - If absolute maximum throughput is the primary concern, `CMinHash` offers the best performance (up to ~16.9x faster than `datasketch`), with a negligible difference in exact deduplication results compared to `datasketch`/`RMinHash`.
  - `OptDensMinHash` offers a good balance of speed and high accuracy, and may be particularly beneficial for datasets with high sparsity or when using fewer permutations, due to its densification strategy.
  - If you require features beyond core MinHash generation or need to integrate with an existing `datasketch` ecosystem, `datasketch` remains a comprehensive option, albeit slower for MinHash operations.
Rensa offers significant performance advantages over traditional MinHash libraries like `datasketch`. Recent benchmarks demonstrate that Rensa's MinHash implementations are particularly powerful on large-scale, high-cardinality datasets.
The following benchmark was conducted on the large-scale `Salesforce/wikitext` dataset containing 1.8 million rows, and highlights Rensa's remarkable performance advantage:
- R-MinHash completed deduplication ~39x faster than `datasketch`.
- C-MinHash performance was similarly impressive.
Performance comparison:

| Algorithm | Execution Time (s) | Speedup vs. Datasketch |
|---|---|---|
| Datasketch | 1725 | - |
| Rensa R-MinHash | 44 | ~39x faster |
| Rensa C-MinHash | 42 | ~41x faster |
This benchmark clearly demonstrates Rensa's capability to handle large, diverse datasets at exceptional speed.
Benchmark speedups depend on dataset characteristics, including:
- Cardinality (number of distinct elements)
- Document length and repetition rate

High-cardinality, large-scale datasets (such as `Salesforce/wikitext`) fully exercise Rensa's optimizations, yielding the largest performance gains.
Earlier benchmarks, using the smaller and more repetitive `gretelai/synthetic_text_to_sql` dataset, showed an approximately 16x speedup over `datasketch`. While still impressive, the smaller gains reflect differences in dataset characteristics:
| Algorithm | Execution Time (s) | Speedup vs. Datasketch |
|---|---|---|
| Datasketch | 92.45 | - |
| Rensa R-MinHash | 5.58 | ~16.6x faster |
| Rensa C-MinHash | 5.47 | ~16.9x faster |
| Rensa OptDensMinHash | 12.36 | ~7.5x faster |
This demonstrates that Rensa significantly outperforms `datasketch` in all scenarios, with even greater gains on larger, high-cardinality datasets.
- Use `RMinHash` or `CMinHash` for general purposes, especially with large-scale datasets, to achieve maximum performance.
- `OptDensMinHash` is beneficial when dealing with sparse data or fewer permutations.
Rensa consistently delivers high-performance MinHash implementations that substantially outperform `datasketch`, making it ideal for real-world, large-scale deduplication and similarity estimation tasks.
To run the benchmarks yourself, follow these steps:
- Clone the repository:
```bash
git clone https://github.com/beowolx/rensa.git
cd rensa
```
- Create a virtual environment:
```bash
python3 -m venv venv
source venv/bin/activate
```
- Install the required dependencies:
```bash
pip install -r requirements.txt
```
- Run the simple benchmark (compares core MinHash deduplication; uses the `gretelai/synthetic_text_to_sql` dataset with 100K rows):
```bash
python benchmarks/simple_benchmark.py
```
- Run the advanced benchmark (detailed comparison of RMinHash, CMinHash, OptDensMinHash, and Datasketch; uses the `gretelai/synthetic_text_to_sql` dataset with 100K rows):
```bash
python benchmarks/advanced_benchmark.py
```
- Run the wiki benchmark (compares deduplication performance on the `Salesforce/wikitext` dataset with 1.8M rows):
```bash
python benchmarks/wiki_benchmark.py
```
While Rensa offers significant performance improvements, it has some limitations compared to `datasketch`:
- Feature set: Rensa currently implements core MinHash (`RMinHash`, `CMinHash`, `OptDensMinHash`) and LSH (for `RMinHash` via `RMinHashLSH`) functionality. It doesn't include some of the advanced features found in `datasketch`, such as HyperLogLog.
- Customization: `datasketch` offers more options for customizing the hash functions and other parameters. Rensa's implementations are more fixed for performance, but offer `seed` and `num_perm` customization.
- Theoretical guarantees: `RMinHash`, due to its simplified permutation generation, may not provide the same level of variance reduction as theoretically optimal MinHash or the full C-MinHash algorithm in all scenarios. `CMinHash` is designed to be a more faithful implementation of the C-MinHash paper's principles, aiming for stronger theoretical backing regarding its reduction of k permutations to two. `OptDensMinHash` relies on established densification techniques to improve estimates, particularly for sparse data.
Future work on Rensa may include:
- Adding more advanced features and customization options
- Further optimizing performance for specific use cases and data types
- Potentially extending LSH support to other MinHash variants where beneficial
Despite these limitations, Rensa's performance benefits make it an excellent choice for applications where speed and efficiency are critical, especially when working with large datasets.
Contributions to Rensa are welcome! Please feel free to submit pull requests, report bugs, or suggest features through the GitHub issue tracker.
Rensa is released under the MIT License. See the LICENSE file for details.