| base_model | library_name |
|---|---|
| google/gemma-3-1b-it | peft |
Julia Medical Reasoning is a fine-tuned version of Google's Gemma-3 model, optimized for clinical reasoning, diagnostic support, and medical question answering in English. It was adapted through supervised fine-tuning on a curated dataset of medical case studies, question-answer pairs, and evidence-based medicine protocols.
- Developed by: Miguel Araújo Julio
- Shared by: Miguel Araújo Julio
- Model type: Causal Language Model
- Language(s) (NLP): English
- License: Apache 2.0
- Finetuned from model: google/gemma-3-1b-it
- Repository: https://huggingface.co/Miguell-J/julia-medical-reasoning
- Medical education and training.
- Assisting clinicians with reasoning through differential diagnoses.
- Generating answers to patient queries and common clinical questions.
- Integration into clinical decision support tools.
- Augmenting chatbot interfaces for hospitals or telemedicine platforms.
- Final medical diagnosis or treatment recommendation without human oversight.
- Use in high-risk clinical environments without regulatory clearance.
The model may reproduce biases found in the training data and should not be considered a replacement for licensed medical professionals. There is a risk of hallucinated or outdated information being presented as fact.
Users should validate outputs against trusted medical sources and consult with qualified professionals before making clinical decisions.
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained("Miguell-J/julia-medical-reasoning")
tokenizer = AutoTokenizer.from_pretrained("Miguell-J/julia-medical-reasoning")

# Ask a clinical question and generate an answer
inputs = tokenizer("What are common symptoms of diabetes?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
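Because the card lists `peft` as the library and `google/gemma-3-1b-it` as the base model, the published weights may be a PEFT adapter rather than a fully merged checkpoint. The sketch below is a minimal example, assuming the repository holds such an adapter, of attaching it to the base model with the PEFT library:

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base Gemma-3 model in bf16, the precision used during fine-tuning
base = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it", torch_dtype=torch.bfloat16
)
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

# Attach the fine-tuned adapter weights on top of the base model
model = PeftModel.from_pretrained(base, "Miguell-J/julia-medical-reasoning")

# Optionally merge the adapter into the base weights for faster inference
model = model.merge_and_unload()
```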
The model was trained using a mix of open medical question-answer datasets, synthetic case-based reasoning examples, and filtered PubMed articles.
Data was filtered to remove out-of-domain or unsafe content, and pre-tokenized using Gemma's tokenizer.
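As a rough sketch of that preprocessing step (the file name and column names below are placeholders, not the actual training files), question-answer pairs can be formatted and pre-tokenized with the Gemma tokenizer like this:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Gemma tokenizer used for pre-tokenization
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

# Hypothetical JSONL file with "question" and "answer" fields
dataset = load_dataset("json", data_files="medical_qa.jsonl", split="train")

def tokenize(example):
    # Join each pair into a single prompt/response string before tokenizing
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)
```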
- Training regime: bf16 mixed precision
- Fine-tuned for 3 epochs on 4x NVIDIA L4 GPUs.
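The exact PEFT configuration and most hyperparameters are not reported here; the following is a minimal sketch of how such a run could be set up with `peft` and the Hugging Face `Trainer`, assuming a LoRA adapter and reusing the `tokenizer` and `tokenized` dataset from the preprocessing sketch above:

```python
import torch
from peft import LoraConfig, get_peft_model
from transformers import (
    AutoModelForCausalLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Base model loaded in bf16 to match the mixed-precision regime above
model = AutoModelForCausalLM.from_pretrained(
    "google/gemma-3-1b-it", torch_dtype=torch.bfloat16
)

# Hypothetical LoRA settings; the adapter configuration is not documented in this card
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, peft_config)

# 3 epochs with bf16 mixed precision, as stated above; batch size is a placeholder
args = TrainingArguments(
    output_dir="julia-medical-reasoning",
    num_train_epochs=3,
    bf16=True,
    per_device_train_batch_size=4,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```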
Evaluation used curated benchmark sets of medical reasoning and multiple-choice questions (FreedomIntelligence/medical-o1-reasoning-SFT).
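As a sketch of how a qualitative check against that benchmark could look (the `"en"` config name and `"Question"` column are assumptions about the dataset schema, and `model`/`tokenizer` come from the usage example above):

```python
from datasets import load_dataset

# Load the medical reasoning benchmark; config and column names are assumptions
eval_set = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")

# Generate an answer for one benchmark question and inspect it manually
question = eval_set[0]["Question"]
inputs = tokenizer(question, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```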
Decoder-only transformer architecture following Gemma specifications.
4x NVIDIA L4 GPUs (96 GB total)
- PyTorch 2.1
- PEFT 0.14.0
- Transformers 4.40
BibTeX:
```bibtex
@misc{julia2025,
  title={Julia Medical Reasoning: Fine-tuning Gemma for Medical Understanding},
  author={Miguel Araújo Julio},
  year={2025},
  url={https://huggingface.co/Miguell-J/julia-medical-reasoning}
}
```
Miguel Araújo Julio
- Email: julioaraujo.guel@gmail.com