conda create -n papermage python=3.11
conda activate papermage
If you're installing from source:
pip install -e '.[dev,predictors,visualizers]'
If you're installing from PyPI:
pip install 'papermage[dev,predictors,visualizers]'
(you may need to add/remove quotes depending on your command line shell).
If you're on macOS, you'll also want to run:
conda install poppler
To run all tests:
python -m pytest
To re-run only the last failed tests:
python -m pytest --lf --no-cov -n0
To run a specific test or class by name:
python -m pytest -k 'TestPDFPlumberParser' --no-cov -n0
from papermage.recipes import CoreRecipe
recipe = CoreRecipe()
doc = recipe.run("tests/fixtures/papermage.pdf")
What is a Document? At minimum, it is some text, saved under the `.symbols` layer, which is just a `<str>`. For example:
> doc.symbols
"PaperMage: A Unified Toolkit for Processing, Representing, and\nManipulating Visually-..."
But this library is really useful when you have multiple different ways of segmenting `.symbols`. For example, segmenting the paper into Pages, and then each page into Rows:
for page in doc.pages:
print(f'\n=== PAGE: {page.id} ===\n\n')
for row in page.rows:
print(row.text)
...
=== PAGE: 5 ===
4
Vignette: Building an Attributed QA
System for Scientific Papers
How could researchers leverage papermage for
their research? Here, we walk through a user sce-
nario in which a researcher (Lucy) is prototyping
an attributed QA system for science.
System Design.
Drawing inspiration from Ko
...
This shows two nice aspects of this library:

- `Document` provides iterables for different segmentations of `symbols`. Options include things like `pages, tokens, rows, sentences, sections, ...`. Not every Parser will provide every segmentation, though.
- Each one of these segments (in our library, we call them `Entity` objects) is aware of (and can access) other segment types. For example, you can call `page.rows` to get all Rows that intersect a particular Page. Or you can call `sent.tokens` to get all Tokens that intersect a particular Sentence. Or you can call `sent.rows` to get the Row(s) that intersect a particular Sentence. These indexes are built dynamically when the `Document` is created and each time a new `Entity` type is added. In the extreme, as long as those layers are available in the Document, you can write:
for page in doc.pages:
for sent in page.sentences:
for row in sent.rows:
...
You can check which layers are available in a Document via:
> doc.layers
['tokens',
'rows',
'pages',
'words',
'sentences',
'blocks',
'vila_entities',
'titles',
'authors',
'abstracts',
'keywords',
'sections',
'lists',
'bibliographies',
'equations',
'algorithms',
'figures',
'tables',
'captions',
'headers',
'footers',
'footnotes',
'symbols',
'images',
'metadata',
'entities',
'relations']
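Because not every Parser or Recipe produces every layer, it can be handy to check `doc.layers` before relying on a particular segmentation. A minimal sketch (the layer names here are just examples):

wanted = ["sentences", "sections"]
missing = [name for name in wanted if name not in doc.layers]
if missing:
    print(f"These layers were not produced for this Document: {missing}")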
Note that `Entity` objects don't necessarily perfectly nest within each other. For example, what happens if you run:
for sent in doc.sentences:
for row in sent.rows:
print([token.text for token in row.tokens])
Tokens that are outside each sentence can still be printed. This is because when we jump from a sentence to its rows, we are looking for all rows that have any overlap with the sentence. Rows can extend beyond sentence boundaries, and as such, can contain tokens outside that sentence.
A key aspect of using this library is understanding how these different layers are defined & anticipating how they might interact with each other. We try to make decisions that are intuitive, but we do ask users to experiment with layers to build up familiarity.
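If you only want the tokens that actually fall inside a given sentence, you can filter by span containment yourself. A minimal sketch, using only the `.spans` and `Span(start, end)` attributes described below (the `within` helper is our own, not part of the library):

def within(entity, outer_spans):
    # True if every span of `entity` lies inside some span in `outer_spans`
    return all(
        any(o.start <= s.start and s.end <= o.end for o in outer_spans)
        for s in entity.spans
    )

for sent in doc.sentences:
    tokens_in_sent = [
        token
        for row in sent.rows
        for token in row.tokens
        if within(token, sent.spans)
    ]
    print([t.text for t in tokens_in_sent])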
Each `Entity` object stores information about its contents and position:

- `.spans: List[Span]`. A `Span` is a pointer into `Document.symbols` (that is, `Span(start=0, end=5)` corresponds to `symbols[0:5]`). By default, when you iterate over an `Entity`, you iterate over its `.spans`.
- `.boxes: List[Box]`. A `Box` represents a rectangular region on the page. Each span is associated with a Box.
- `.metadata: Metadata`. A free-form, dictionary-like object to store extra metadata about that `Entity`. These are usually empty.
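A quick way to see these attributes in practice (assuming the `tokens` layer is present):

token = doc.tokens[0]
print(token.text)      # the symbols covered by this token
print(token.spans)     # e.g. [Span(start=0, end=9)]
print(token.boxes)     # rectangular region(s) on the page
print(token.metadata)  # usually empty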
A `Document` is created by stitching together 3 types of tools: `Parsers`, `Rasterizers` and `Predictors`.

- `Parsers` take a PDF as input and return a `Document` composed of `.symbols` and other layers. The example one we use is a wrapper around the PDFPlumber (MIT License) utility.
- `Rasterizers` take a PDF as input and return an `Image` per page that is added to `Document.images`. The example one we use is PDF2Image (MIT License).
- `Predictors` take a `Document` and apply some operation to compute a new set of `Entity` objects that we can insert into our `Document`. These are all built in-house and can be either simple heuristics or full machine-learning models.
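For illustration only, here is a minimal sketch of using a Parser directly instead of `CoreRecipe`; the import path and the `parse` method name are assumptions on our part, so check the package if they differ:

from papermage.parsers import PDFPlumberParser  # assumed import path

parser = PDFPlumberParser()
doc = parser.parse("tests/fixtures/papermage.pdf")  # assumed method name
print(doc.symbols[:80])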
Saving a `Document` is as easy as:

import json
with open('filename.json', 'w') as f_out:
    json.dump(doc.to_json(), f_out, indent=4)

which will produce something akin to:
{
"symbols": "PaperMage: A Unified Toolkit for Processing, Representing, an...",
"entities": {
"rows": [...],
"tokens": [...],
"words": [...],
"blocks": [...],
"sentences": [...]
},
"metadata": {...}
}
These can be used to reconstruct a `Document` again via:

with open('filename.json') as f_in:
    doc_dict = json.load(f_in)
    doc = Document.from_json(doc_dict)
Note: A common pattern for adding layers to a document is to load in a previously saved document, run some additional `Predictors` on it, and save the result.
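A hedged sketch of that pattern; the `Document` import path is an assumption, and the predictor lines are hypothetical placeholders (see `papermage/predictors/README.md` for the real predictor classes):

import json

from papermage.magelib import Document  # assumed import path

with open("filename.json") as f_in:
    doc = Document.from_json(json.load(f_in))

# hypothetical: run an extra predictor and attach its entities as a new layer
# predictor = SomePredictor()
# doc.annotate_layer(name="some_layer", entities=predictor.predict(doc))

with open("filename_plus.json", "w") as f_out:
    json.dump(doc.to_json(), f_out, indent=4)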
See `papermage/predictors/README.md` for more information about training custom predictors on your own data.

See `papermage/examples/quick_start_demo.ipynb` for a notebook walking through some more usage patterns.
🔥🔥🔥 MinerU: Efficient Document Content Extraction Tool Based on PDF-Extract-Kit
PDF documents contain a wealth of knowledge, yet extracting high-quality content from PDFs is not an easy task. To address this, we have broken down the task of PDF content extraction into several components:
- Layout Detection: Using the LayoutLMv3 model for region detection, such as images, tables, titles, text, etc.;
- Formula Detection: Using YOLOv8 for detecting formulas, including inline formulas and isolated formulas;
- Formula Recognition: Using UniMERNet for formula recognition;
- Table Recognition: Using StructEqTable for table recognition;
- Optical Character Recognition: Using PaddleOCR for text recognition;
Note: Due to the diversity of document types, existing open-source layout and formula detection models struggle with diverse PDF documents. Therefore, we have collected diverse data for annotation and training to achieve precise detection effects on various types of documents. For details, refer to the sections on Layout Detection and Formula Detection. For formula recognition, the UniMERNet method rivals commercial software in quality across various types of formulas. For OCR, we use PaddleOCR, which performs well for both Chinese and English.
The PDF content extraction framework is illustrated below:
PDF-Extract-Kit Output Format
{
"layout_dets": [ # Elements on the page
{
"category_id": 0, # Category ID, 0~9, 13~15
"poly": [
136.0, # Coordinates are in image format and need to be converted back to PDF coordinates; order is top-left, top-right, bottom-right, bottom-left (x, y) pairs
781.0,
340.0,
781.0,
340.0,
806.0,
136.0,
806.0
],
"score": 0.69, # Confidence score
"latex": '' # Formula recognition result, only categories 13, 14 have content, others are empty, additionally 15 is the OCR result, this key will be replaced with text
},
...
],
"page_info": { # Page information: resolution size when extracting bounding boxes, alignment can be based on this information if scaling is involved
"page_no": 0, # Page number
"height": 1684, # Page height
"width": 1200 # Page width
}
}
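For example, a minimal sketch of rescaling a detection's `poly` from image pixels to the PDF page size using `page_info` (here `pdf_width` and `pdf_height` are assumed to come from your own PDF reader; they are not part of this output format):

def poly_to_pdf_coords(poly, page_info, pdf_width, pdf_height):
    # poly holds [x, y] pairs in top-left, top-right, bottom-right, bottom-left order
    sx = pdf_width / page_info["width"]
    sy = pdf_height / page_info["height"]
    # depending on your PDF library, you may also need to flip the y-axis,
    # since PDF origins are usually at the bottom-left
    return [v * (sx if i % 2 == 0 else sy) for i, v in enumerate(poly)]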
The types included in `category_id` are as follows:
{0: 'title', # Title
1: 'plain text', # Text
2: 'abandon', # Includes headers, footers, page numbers, and page annotations
3: 'figure', # Image
4: 'figure_caption', # Image caption
5: 'table', # Table
6: 'table_caption', # Table caption
7: 'table_footnote', # Table footnote
8: 'isolate_formula', # Display formula (this is a layout display formula, lower priority than 14)
9: 'formula_caption', # Display formula label
13: 'inline_formula', # Inline formula
14: 'isolated_formula', # Display formula
15: 'ocr_text'} # OCR result
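As an illustration, a small sketch that uses this mapping to pull formula and text detections out of one page of output (the grouping into sets is our own choice):

FORMULA_IDS = {8, 13, 14}   # isolate_formula, inline_formula, isolated_formula
TEXT_IDS = {0, 1, 15}       # title, plain text, ocr_text

def split_detections(page_result):
    dets = page_result["layout_dets"]
    formulas = [d for d in dets if d["category_id"] in FORMULA_IDS]
    text = [d for d in dets if d["category_id"] in TEXT_IDS]
    return formulas, text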
- 2024.08.01 🎉🎉🎉 Added the StructEqTable module for table content extraction. Welcome to use it!
- 2024.07.01 🎉🎉🎉 We released PDF-Extract-Kit, a comprehensive toolkit for high-quality PDF content extraction, including layout detection, formula detection, formula recognition, and OCR.
By annotating a variety of PDF documents, we have trained robust models for layout detection and formula detection. Our pipeline achieves accurate extraction results on diverse types of PDF documents such as academic papers, textbooks, research reports, and financial statements, and is highly robust even in cases of scanned blurriness or watermarks.
Existing open-source models are often trained on data from Arxiv papers and fall short when facing diverse PDF documents. In contrast, our models, trained on diverse data, are capable of adapting to various document types for extraction.
An introduction to the validation process can be seen here.
We have compared our model with existing open-source layout detection models, including DocXchain, Surya, and two models from 360LayoutAnalysis. The model listed as LayoutLMv3-SFT in the table refers to the checkpoint we further trained with our SFT data on top of the LayoutLMv3-base-chinese pre-trained model. The validation set for academic papers consists of 402 pages, while the textbook validation set is composed of 587 pages from various sources of textbooks.
| Model | Academic papers mAP | Academic papers AP50 | Academic papers AR50 | Textbook mAP | Textbook AP50 | Textbook AR50 |
|---|---|---|---|---|---|---|
| DocXchain | 52.8 | 69.5 | 77.3 | 34.9 | 50.1 | 63.5 |
| Surya | 24.2 | 39.4 | 66.1 | 13.9 | 23.3 | 49.9 |
| 360LayoutAnalysis-Paper | 37.7 | 53.6 | 59.8 | 20.7 | 31.3 | 43.6 |
| 360LayoutAnalysis-Report | 35.1 | 46.9 | 55.9 | 25.4 | 33.7 | 45.1 |
| LayoutLMv3-SFT | 77.6 | 93.3 | 95.5 | 67.9 | 82.7 | 87.9 |
We have compared our model with the open-source formula detection model Pix2Text-MFD. YOLOv8-Trained refers to the weights we obtained by training on the basis of the YOLOv8l model. The academic papers validation set is composed of 255 academic paper pages, and the multi-source validation set consists of 789 pages from various sources, including textbooks and books.
| Model | Academic papers AP50 | Academic papers AR50 | Multi-source AP50 | Multi-source AR50 |
|---|---|---|---|---|
| Pix2Text-MFD | 60.1 | 64.6 | 58.9 | 62.8 |
| YOLOv8-Trained | 87.7 | 89.9 | 82.4 | 87.3 |
The formula recognition we used is based on the weights downloaded from UniMERNet, without any further SFT training, and the accuracy validation results can be obtained on its GitHub page.
The table recognition we use is based on weights downloaded from StructEqTable, a solution that converts table images into LaTeX. Compared to the table recognition capability of PP-StructureV2, StructEqTable demonstrates stronger performance, delivering good results even with complex tables, though it is currently best suited for data found in research papers. There is also significant room for improvement in speed, and we are continuously iterating and optimizing. Within a week, we will add the table recognition capability to MinerU.
conda create -n pipeline python=3.10
pip install -r requirements.txt
pip install --extra-index-url https://miropsota.github.io/torch_packages_builder detectron2==0.6+pt2.3.1cu121
After installation, some dependency versions may change due to conflicts. If you encounter version-related errors, you can try reinstalling a specific version of the affected library, for example:
pip install pillow==8.4.0
In addition to version conflicts, you may also encounter errors where torch cannot be invoked. First uninstall the following library, then reinstall CUDA 12 and cuDNN.
pip uninstall nvidia-cusparse-cu12
Refer to Model Download to download the required model weights.
If you intend to run this project on Windows, please refer to Using PDF-Extract-Kit on Windows.
If you intend to run this project on macOS, please refer to Using PDF-Extract-Kit on macOS.
If you intend to experience this project on Google Colab, please
python pdf_extract.py --pdf data/pdfs/ocr_1.pdf
Parameter explanations:
- `--pdf`: PDF file to be processed; if a folder is passed, all PDF files in the folder will be processed.
- `--output`: Path where the results are saved; default is "output".
- `--vis`: Whether to visualize the results; if enabled, detection results including bounding boxes and categories will be visualized.
- `--render`: Whether to render the recognized results, including LaTeX code for formulas and plain text, which will be rendered and placed in the detection boxes. Note: this process is very time-consuming and also requires prior installation of `xelatex` and `imagemagick`.
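For example, to process every PDF in a folder and save the results to a custom directory (paths here are illustrative):

python pdf_extract.py --pdf data/pdfs/ --output results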
This project focuses on using models for high-quality content extraction from diverse documents. It does not reassemble the extracted content into new documents, such as converting PDFs to Markdown. For those needs, please refer to our other GitHub project: MinerU.
- Table Parsing: Develop a feature to convert table images into corresponding LaTeX/Markdown format source code.
- Chemical Equation Detection: Implement automatic detection of chemical equations.
- Chemical Equation/Diagram Recognition: Develop a model to recognize and parse chemical equations and diagrams.
- Reading Order Sorting Model: Build a model to determine the correct reading order of text in documents.
PDF-Extract-Kit aims to provide high-quality PDF extraction capabilities. We encourage the community to propose specific and valuable requirements and welcome everyone to participate in continuously improving the PDF-Extract-Kit tool to advance scientific research and industrial development.
This repository is licensed under the Apache-2.0 License.
Please follow the model licenses to use the corresponding model weights: LayoutLMv3 / UniMERNet / StructEqTable / YOLOv8 / PaddleOCR.
- LayoutLMv3: Layout detection model
- UniMERNet: Formula recognition model
- StructEqTable: Table recognition model
- YOLOv8: Formula detection model
- PaddleOCR: OCR model
If you find our models / code / papers useful in your research, please consider giving ⭐ and citations 📝, thx :)
@misc{wang2024unimernet,
title={UniMERNet: A Universal Network for Real-World Mathematical Expression Recognition},
author={Bin Wang and Zhuangcheng Gu and Chao Xu and Bo Zhang and Botian Shi and Conghui He},
year={2024},
eprint={2404.15254},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
@article{he2024opendatalab,
title={Opendatalab: Empowering general artificial intelligence with open datasets},
author={He, Conghui and Li, Wei and Jin, Zhenjiang and Xu, Chao and Wang, Bin and Lin, Dahua},
journal={arXiv preprint arXiv:2407.13773},
year={2024}
}