
pull request 20241114 #951


Open · wants to merge 77 commits into master

Conversation

lztiancn

Thanks for your contribution; we appreciate it a lot. The following instructions will make your pull request healthier and easier to get feedback on. If you do not understand some items, don't worry: just open the pull request and ask the maintainers for help.

Motivation

Please describe the motivation of this PR and the goal you want to achieve through this PR.

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here and update the documentation.

Checklist

Before PR:

  • Pre-commit or other linting tools are used to fix potential lint issues.
  • Bug fixes are fully covered by unit tests, and the case that caused the bug is added to the unit tests.
  • The modification is covered by complete unit tests. If not, please add more unit tests to ensure correctness.
  • The documentation has been modified accordingly, e.g., docstrings or example tutorials.

After PR:

  • If the modification has potential influence on downstream or other related projects, this PR should be tested with those projects.
  • CLA has been signed and all committers have signed the CLA in this PR.


github-actions bot commented Nov 14, 2024


Thank you for your submission; we really appreciate it. Like many open-source projects, we ask that all committers sign our Contributor License Agreement before we can accept the contribution. You can sign the CLA by posting a pull request comment in the same format as below.


I have read the CLA Document and I hereby sign the CLA


1 out of 4 committers have signed the CLA.
✅ [moria97](https://github.com/moria97)
@lztiancn
@futuremeng
FutureMeng does not seem to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account.
You can retrigger this bot by commenting recheck in this Pull Request. Posted by the CLA Assistant Lite bot.

Dockerfile Outdated
wget https://gitee.com/myhloli/MinerU/raw/master/scripts/download_models.py && \
python3 download_models.py && \
sed -i 's|cpu|cuda|g' /root/magic-pdf.json"

# install extras
COPY requirements-fastapi.txt /minerugw/requirements-fastapi.txt


To avoid increasing the default image size, this could be made an optional requirement.

moria97 and others added 30 commits February 14, 2025 10:14
- Add remove_tilted_line function to filter out lines with angles between 2 and 88 degrees
- Integrate the new function into the text extraction process
- Improve the accuracy of text block processing by removing non-horizontal/vertical lines
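
A minimal sketch of what such an angle filter might look like, assuming pymupdf-style line dicts that carry a `dir` direction vector (MinerU's actual block structure may differ):

```python
import math

def remove_tilted_line(text_blocks):
    """Drop lines whose angle falls between 2 and 88 degrees, keeping
    only near-horizontal and near-vertical lines."""
    for block in text_blocks:
        kept = []
        for line in block["lines"]:
            cosine, sine = line["dir"]  # unit direction vector, as pymupdf reports it
            angle = abs(math.degrees(math.atan2(sine, cosine)))
            angle = min(angle, 180 - angle)  # fold into [0, 90]
            if not (2 < angle < 88):
                kept.append(line)
        block["lines"] = kept
    return text_blocks
```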
- Add key length validation for ONNX model initialization
- Move import statements to the top of the file
- Wrap model initialization in a try-except block for better error handling
- Refactor code to improve readability and maintainability
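
The commit only names "key length validation" and a try/except wrapper; the sketch below interprets that as checking required config keys before building the session, which is an assumption (as is the direct use of onnxruntime):

```python
import onnxruntime as ort
from loguru import logger

# Assumed key names; the commit does not say which keys are validated.
REQUIRED_KEYS = ("det_model_path", "rec_model_path")

def safe_init_onnx_model(model_path, config):
    """Validate the config, then build the ONNX session inside a
    try/except so a bad model file fails gracefully."""
    missing = [k for k in REQUIRED_KEYS if k not in config]
    if missing:
        raise KeyError(f"ONNX model config is missing keys: {missing}")
    try:
        return ort.InferenceSession(model_path)
    except Exception as exc:
        logger.error(f"failed to initialize ONNX model {model_path}: {exc}")
        return None
```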
- Update model path from 'unimernet_small' to 'unimernet_small_2501' in multiple scripts and configuration files
- This change affects download_models.py, download_models_hf.py, and model_configs.yaml
- Reduce YOLO_LAYOUT_BASE_BATCH_SIZE from 4 to 1
- Simplify batch ratio calculation for formula detection
- Remove unused conditional logic in batch ratio determination
- Update GPU memory check and batch ratio calculation logic
- Add support for virtual VRAM size environment variable
- Improve logging for GPU memory and batch ratio
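
These commits (and the truncated title below) describe letting a VIRTUAL_VRAM_SIZE environment variable override the detected GPU memory. A hedged sketch of that lookup, with the fallback logic assumed:

```python
import os
import torch

def get_vram_size_gb(device="cuda"):
    """VRAM budget in GB: VIRTUAL_VRAM_SIZE overrides the detected value."""
    virtual = os.getenv("VIRTUAL_VRAM_SIZE")
    if virtual is not None:
        return float(virtual)  # trust the user-supplied budget
    if torch.cuda.is_available():
        return torch.cuda.get_device_properties(device).total_memory / 1024 ** 3
    return 0.0  # assumed CPU-only fallback
```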
…e VRAM allocation logic to use 'VIRTUAL_VRAM_SIZE' environment variable

- Reduce MFR (Math Formula Recognition) batch size from 64 to 32
- Reduce batch_ratio by 1 for better performance and stability
- This change ensures more consistent memory usage when processing documents
- Improve batch ratio calculation based on GPU memory
- Enhance performance for devices with 8GB or more VRAM
- Update conditions for batch ratio assignment:
  - 8 <= gpu_memory < 10: batch_ratio = 2
  - 10 <= gpu_memory <= 12: batch_ratio = 4
- This fix ensures proper batch ratio selection for GPU memory sizes
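
The two middle brackets come straight from the commit message; a sketch of the full mapping, with the values outside those brackets assumed:

```python
def select_batch_ratio(gpu_memory: float) -> int:
    """Map available VRAM (GB) to a batch ratio."""
    if gpu_memory < 8:
        return 1  # assumed floor for small GPUs
    if 8 <= gpu_memory < 10:
        return 2  # from the commit message
    if 10 <= gpu_memory <= 12:
        return 4  # from the commit message
    return 8  # assumed value for larger GPUs
```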
- Restore commented code for filtering out characters with invalid bounding boxes
- This change may affect the filtering of unnecessary characters in PDF parsing
- Add a check to return 0 when either bbox1_area or bbox2_area is zero
- This prevents division by zero errors when calculating IoU
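
A sketch of the guarded IoU computation the commit describes; the (x0, y0, x1, y1) tuple layout is assumed:

```python
def calculate_iou(bbox1, bbox2):
    """IoU of two (x0, y0, x1, y1) boxes, returning 0 when either box
    has zero area so the union term can never divide by zero."""
    bbox1_area = (bbox1[2] - bbox1[0]) * (bbox1[3] - bbox1[1])
    bbox2_area = (bbox2[2] - bbox2[0]) * (bbox2[3] - bbox2[1])
    if bbox1_area == 0 or bbox2_area == 0:
        return 0
    x_left = max(bbox1[0], bbox2[0])
    y_top = max(bbox1[1], bbox2[1])
    x_right = min(bbox1[2], bbox2[2])
    y_bottom = min(bbox1[3], bbox2[3])
    intersection = max(0, x_right - x_left) * max(0, y_bottom - y_top)
    return intersection / (bbox1_area + bbox2_area - intersection)
```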
- Add timing measurement for formula, text, and title optimization using LLM
- Log the execution time for each LLM aided process
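
A generic sketch of how such timing might be wired in; the step functions (formula/text/title optimizers) are MinerU internals and are only referenced hypothetically:

```python
import time
from loguru import logger

def run_llm_aided_step(label, step_fn, *args, **kwargs):
    """Run one LLM-aided optimization pass and log its duration."""
    start = time.time()
    result = step_fn(*args, **kwargs)
    logger.info(f"llm aided {label} optimization cost: {time.time() - start:.2f}s")
    return result

# usage sketch with hypothetical step functions:
# blocks = run_llm_aided_step("formula", optimize_formulas, blocks)
# blocks = run_llm_aided_step("text", optimize_text, blocks)
# blocks = run_llm_aided_step("title", optimize_titles, blocks)
```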
- Add sub_model configuration option for rapid_table model
- Provide two sub_model options: slanet_plus and unitable
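
The two option names come from the commit; a sketch of how the setting might be resolved, with the config key layout and the slanet_plus default assumed:

```python
SUPPORTED_SUB_MODELS = ("slanet_plus", "unitable")

def resolve_table_sub_model(table_config: dict) -> str:
    """Pick the rapid_table sub-model from a config dict."""
    sub_model = table_config.get("sub_model", "slanet_plus")  # assumed default
    if sub_model not in SUPPORTED_SUB_MODELS:
        raise ValueError(
            f"rapid_table sub_model must be one of {SUPPORTED_SUB_MODELS}, got {sub_model!r}"
        )
    return sub_model
```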
…ilities: upgrade to latest doclayout_yolo(2501) and unimernet(2501) models

- Improve performance: optimize resource usage and processing pipeline for faster parsing on high-end devices
- Enhance parsing effects: add new heading classification feature to online demo
- Refactor changelog structure for better readability and organization
…ability

- Update online demo links in both English and Chinese README files