Zhenyu001225 (Zhenyu Liu) / Starred · GitHub
  • Rensselaer Polytechnic Institute
  • Troy, NY

Showing results

[ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration

Python 218 23 Updated Nov 18, 2024

The loss landscape of Large Language Models resembles a basin!

Python 13 1 Updated Jul 8, 2025

A survey on harmful fine-tuning attacks for large language models

193 6 Updated Jul 1, 2025

Code for the paper "AsFT: Anchoring Safety During LLM Fine-Tuning Within Narrow Safety Basin".

Python 25 Updated Jul 10, 2025

Universal and Transferable Attacks on Aligned Language Models

Python 4,041 542 Updated Aug 2, 2024

JailbreakBench: An Open Robustness Benchmark for Jailbreaking Language Models [NeurIPS 2024 Datasets and Benchmarks Track]

Python 372 38 Updated Apr 4, 2025
Python 104 12 Updated Nov 13, 2023
Python 4 Updated Feb 25, 2025

A curation of awesome tools, documents and projects about LLM Security.

1,272 131 Updated Apr 18, 2025

ECSO (Make MLLMs safe without any training or external models!) (https://arxiv.org/abs/2403.09572)

Python 28 1 Updated Nov 2, 2024

[ICLR 2025] Understanding and Enhancing Safety Mechanisms of LLMs via Safety-Specific Neuron

Python 16 Updated Apr 30, 2025

(ICLR 2023 Spotlight) MPCFormer: fast, performant, and private transformer inference with MPC

Python 99 18 Updated Jun 12, 2023
C++ 19 1 Updated Dec 22, 2024

[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.

Python 23,028 2,544 Updated Aug 12, 2024

A reading list for large models safety, security, and privacy (including Awesome LLM Security, Safety, etc.).

1,542 100 Updated Jul 6, 2025

Code and data for ACL 2024 paper on 'Cross-Modal Projection in Multimodal LLMs Doesn't Really Project Visual Attributes to Textual Space'

Python 15 1 Updated Jul 21, 2024

A curated list of resources dedicated to the safety of Large Vision-Language Models. This repository aligns with our survey titled A Survey of Safety on Large Vision-Language Models: Attacks, Defen…

116 9 Updated Jul 3, 2025
Python 16 2 Updated Jun 13, 2024

[CVPR 2025] Official implementation for "Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbreaks"

Python 33 1 Updated Jul 5, 2025

Efficient Multimodal Large Language Models: A Survey

361 21 Updated Apr 29, 2025

Accepted by ECCV 2024

Python 142 1 Updated Oct 15, 2024

Accepted by IJCAI-24 Survey Track

Python 207 6 Updated Aug 25, 2024

Research and Materials on Hardware implementation of Transformer Model

Jupyter Notebook 268 35 Updated Feb 28, 2025

The official implementation of the paper "RobustKV: Defending Large Language Models against Jailbreak Attacks via KV Eviction"

Python 5 Updated May 18, 2025

InfiniGen: Efficient Generative Inference of Large Language Models with Dynamic KV Cache Management (OSDI'24)

Python 142 27 Updated Jul 10, 2024

Chain of Experts (CoE) enables communication between experts within Mixture-of-Experts (MoE) models

Python 216 28 Updated Jun 25, 2025

🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention"

Python 722 34 Updated Mar 19, 2025

[ICML 2024] Quest: Query-Aware Sparsity for Efficient Long-Context LLM Inference

Cuda 305 34 Updated Jul 10, 2025