
base_model: google/gemma-3-1b-it
library_name: peft

Model Card for Julia, an AI for Medical Reasoning

Julia Medical Reasoning is a fine-tuned version of Google's Gemma-3 model, optimized for clinical reasoning, diagnostic support, and medical question answering in English. It has been adapted through supervised fine-tuning using a curated dataset composed of medical case studies, question-answer pairs, and evidence-based medicine protocols.

Model Details

Model Description

  • Developed by: Miguel Araújo Julio
  • Shared by: Miguel Araújo Julio
  • Model type: Causal Language Model
  • Language(s) (NLP): English
  • License: Apache 2.0
  • Finetuned from model: google/gemma-3-1b-it

Model Sources

  • Repository: https://huggingface.co/Miguell-J/julia-medical-reasoning

Uses

Direct Use

  • Medical education and training.
  • Assisting clinicians with reasoning through differential diagnoses.
  • Generating answers to patient queries and common clinical questions.

Downstream Use

  • Integration into clinical decision support tools.
  • Augmenting chatbot interfaces for hospitals or telemedicine platforms.

Out-of-Scope Use

  • Final medical diagnosis or treatment recommendation without human oversight.
  • Use in high-risk clinical environments without regulatory clearance.

Bias, Risks, and Limitations

The model may reproduce biases found in the training data and should not be considered a replacement for licensed medical professionals. There is a risk of hallucinated or outdated information being presented as fact.

Recommendations

Users should validate outputs against trusted medical sources and consult with qualified professionals before making clinical decisions.

How to Get Started with the Model

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Miguell-J/julia-medical-reasoning"

# Load the fine-tuned model and its tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Ask a sample clinical question and decode the generated answer
inputs = tokenizer("What are common symptoms of diabetes?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
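
If the published weights are a PEFT/LoRA adapter rather than merged full weights (the card lists peft as its library), the checkpoint can instead be loaded through PEFT, which fetches the base model and applies the adapter on top. A minimal sketch under that assumption:

from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Downloads google/gemma-3-1b-it and applies the adapter weights on top of it
model = AutoPeftModelForCausalLM.from_pretrained("Miguell-J/julia-medical-reasoning")
tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")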

Training Details

Training Data

The model was trained using a mix of open medical question-answer datasets, synthetic case-based reasoning examples, and filtered PubMed articles.

Training Procedure

Preprocessing

Data was filtered to remove out-of-domain or unsafe content, and pre-tokenized using Gemma's tokenizer.
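
The exact filtering heuristics are not published; the sketch below illustrates only the pre-tokenization step, assuming records with question and answer fields (hypothetical names):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/gemma-3-1b-it")

def pretokenize(example):
    # Join each record into a single training string before tokenizing;
    # the field names and max length are assumptions, not published details
    text = f"Question: {example['question']}\nAnswer: {example['answer']}"
    return tokenizer(text, truncation=True, max_length=1024)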

Training Hyperparameters

  • Training regime: bf16 mixed precision
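
As an illustration of that regime, a configuration along these lines would match the stated setup (bf16 mixed precision, PEFT, 3 epochs per the section below); the LoRA rank, batch size, and learning rate are assumptions, not published values:

from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=16,                                # assumed adapter rank
    lora_alpha=32,                       # assumed scaling factor
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="julia-medical-reasoning",
    bf16=True,                           # bf16 mixed precision, as stated above
    num_train_epochs=3,
    per_device_train_batch_size=4,       # assumption
    learning_rate=2e-4,                  # common LoRA starting point; assumption
)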

Speeds, Sizes, Times

  • Fine-tuned for 3 epochs on 4× NVIDIA L4 GPUs.

Evaluation

Testing Data, Factors & Metrics

Testing Data

Curated benchmark sets of medical reasoning and multiple-choice questions (FreedomIntelligence/medical-o1-reasoning-SFT).
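
A sketch of loading that benchmark for evaluation; the "en" configuration name is an assumption about how the dataset is laid out on the Hugging Face Hub:

from datasets import load_dataset

# The dataset ships language-specific configurations; "en" is assumed here
eval_ds = load_dataset("FreedomIntelligence/medical-o1-reasoning-SFT", "en", split="train")
print(eval_ds[0])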

Technical Specifications

Model Architecture and Objective

Decoder-only transformer architecture following Gemma specifications.
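
The architecture can be verified directly from the base checkpoint's configuration:

from transformers import AutoConfig

# Prints the decoder-only Gemma-3 configuration (layers, hidden size,
# attention heads, vocabulary size, ...)
config = AutoConfig.from_pretrained("google/gemma-3-1b-it")
print(config)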

Compute Infrastructure

Hardware

4× NVIDIA L4 GPUs (96 GB total VRAM)

Software

  • PyTorch 2.1
  • PEFT 0.14.0
  • Transformers 4.40

Citation

BibTeX:

@misc{julia2025,
  title={Julia Medical Reasoning: Fine-tuning Gemma for Medical Understanding},
  author={Miguel Araújo Julio},
  year={2025},
  url={https://huggingface.co/Miguell-J/julia-medical-reasoning}
}

Model Card Authors

Miguel Araújo Julio

Framework versions

  • PEFT 0.14.0
