8000 GitHub - PromptLabs/Prompt-Hacking-Resources: A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection
[go: up one dir, main page]
More Web Proxy on the site http://driver.im/
Skip to content

PromptLabs/Prompt-Hacking-Resources

Repository files navigation

Awesome prompt hacking – an awesome list of curated resources on prompt hacking and AI safety.

Topics include AI red teaming, jailbreaking, prompt injection, prompt hacking, AI/ML safety and security.

Awesome Made With Love Tweet

This resource is provided by Learn Prompting, your go-to resource for mastering Generative AI.

astronaut-learn-prompting

DiscordTwitter (X)LinkedInNewsletterFree Intro Prompt Hacking CourseFree Advanced Prompt Hacking Course


Table of Contents


Introduction

Prompt Hacking is an emerging field that covers the intersection between AI and Cybersecurity. It involves exploring the outer edges of LLM behavior through adversarial prompts and prompt injection techniques.

Due to its novelty, online resources are few and far between.

This repository aims to provide a good overview of materials and tutorials that help expose vulnerabilities, document offensive research, and promote a better understanding of model limitations.


Blogs

Stay informed with expert analyses, tutorials, and research articles on AI security.

Communities

Discord Communities

Reddit Communities

Courses

Free Courses

Paid Courses

Events

Keep updated with competitions, workshops, and summits that drive practical learning and networking:

Jailbreaks

A collection of repositories, tools, and research papers that document methods of bypassing LLM safeguards:

YouTube

AI Red Teaming

Jailbreaking


Contributing

If you have suggestions, improvements, or additional resources to include, please review our CONTRIBUTING.md guidelines and submit a pull request.


This repository aims to deliver critical, reliable resources for advancing prompt hacking research. We encourage rigorous testing, honest discussions, and the sharing of proven methodologies to foster safe and responsible exploration in this field.

About

A list of curated resources for people interested in AI Red Teaming, Jailbreaking, and Prompt Injection

Resources

License

Stars

Watchers

Forks

Releases

No releases published

Packages

No packages published
0