- Italy, Bologna
- @loretoparisi
Lists (32)
3D Scene Reconstruction
Audio
Audio Generation: Audio synthesis
Audio Source Separation
Automatic Speech Recognition: Speech to Text
BERT: Bidirectional Encoder Representations from Transformers
CapsNet: Capsule Neural Networks
Chinese NLP
CLIP: Contrastive Language-Image Pre-training
Computer Vision
Elixir
Embedding
Erlang
FastText: Word2vec & FastText text embeddings
Graph Neural Networks: GNN
Image Classification: Classification of images
Image Generation: Neural generation of images
JavaScript
Language Modeling
MIDI
NER: Named Entities Recognition
ONNX: ONNX runtime and model weights
Protein Structure Prediction
Python
RL: Reinforcement Learning and Agents
Rust
Search: Search Engines
Semantic Search
TensorRT
Text Classification
Wasm: WebAssembly
Word2vec
Stars
This is InfiniRetri, a tool that enhances the ability of Transformer-based LLMs (Large Language Models) to handle long contexts.
Here's all my Python/Numba (CUDA) code for the encoder block I made :)
Accompanying material for the sleep-time compute paper.
21 Lessons, Get Started Building with Generative AI 🔗 https://microsoft.github.io/generative-ai-for-beginners/
Full system prompts, tools & AI models for v0, Cursor, Manus, Same.dev, Lovable, Devin, Replit Agent, Windsurf Agent, VSCode Agent, and other open-sourced agents.
🚀 The fast, Pythonic way to build MCP servers and clients
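A minimal sketch of what an MCP server built with this framework can look like, based on the project's documented decorator-style quickstart (module layout and defaults may differ across versions):

```python
# Minimal MCP server sketch using FastMCP's decorator-style API
# (assumes FastMCP 2.x; install with `pip install fastmcp`).
from fastmcp import FastMCP

mcp = FastMCP("Demo Server")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two integers and return the sum."""
    return a + b

if __name__ == "__main__":
    # Runs the server over stdio so an MCP client (e.g. an IDE agent) can connect.
    mcp.run()
```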
prima.cpp: Speeding up 70B-scale LLM inference on low-resource everyday home clusters
Code for 'LLM2Vec: Large Language Models Are Secretly Powerful Text Encoders'
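The core idea is to pool a decoder-only model's hidden states into sentence embeddings. The sketch below illustrates that idea with plain Hugging Face transformers mean pooling; it is not the repo's own API, which additionally enables bidirectional attention and applies MNTP/contrastive training:

```python
# Illustrative only: using an LLM's hidden states as text embeddings via mean
# pooling. See the LLM2Vec repo for its actual training recipe and API.
import torch
from transformers import AutoModel, AutoTokenizer

model_name = "gpt2"  # small stand-in model for the sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token
model = AutoModel.from_pretrained(model_name)

texts = ["LLMs can double as text encoders.", "Mean pooling turns tokens into one vector."]
batch = tokenizer(texts, padding=True, return_tensors="pt")

with torch.no_grad():
    hidden = model(**batch).last_hidden_state      # (batch, seq, dim)

mask = batch["attention_mask"].unsqueeze(-1)        # ignore padding tokens
embeddings = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embeddings.shape)                             # (2, hidden_dim)
```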
Single File, Single GPU, From Scratch, Efficient, Full Parameter Tuning library for "RL for LLMs"
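As a rough illustration of the family of objectives such "RL for LLMs" training loops optimize, here is a generic REINFORCE-style token-level loss; this is a sketch of the general idea, not this repository's specific algorithm:

```python
# Generic REINFORCE-style policy-gradient loss over completion tokens,
# shown only to illustrate the objective family; not this repo's exact method.
# Assumes logits are already aligned with the sampled response_ids.
import torch

def policy_gradient_loss(logits, response_ids, rewards, response_mask):
    # logits: (B, T, V) policy logits; response_ids: (B, T) sampled tokens
    # rewards: (B,) scalar reward per completion; response_mask: (B, T) 1 on response tokens
    logprobs = torch.log_softmax(logits, dim=-1)
    token_logprobs = logprobs.gather(-1, response_ids.unsqueeze(-1)).squeeze(-1)
    # Maximize reward-weighted log-likelihood of the sampled completions.
    weighted = -rewards.unsqueeze(-1) * token_logprobs * response_mask
    return weighted.sum() / response_mask.sum().clamp(min=1)
```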
This package contains the original 2012 AlexNet code.
tulip-berkeley / open_clip
Forked from mlfoundations/open_clip. An open source implementation of CLIP (with TULIP support).
Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
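The key idea is that each request can select which fine-tuned adapter to apply on top of a shared base model. A hedged sketch of what such a request might look like against a LoRAX-style HTTP endpoint (the endpoint path, field names, and adapter name are assumptions; check the project's docs):

```python
# Hedged sketch: per-request adapter selection against a multi-LoRA server.
# The /generate path and JSON fields below are assumptions based on the
# project's documented REST interface; verify against the LoRAX docs.
import requests

resp = requests.post(
    "http://localhost:8080/generate",
    json={
        "inputs": "Summarize: LoRA adapters share one base model at inference time.",
        "parameters": {
            "adapter_id": "my-org/customer-support-lora",  # hypothetical adapter name
            "max_new_tokens": 64,
        },
    },
    timeout=60,
)
print(resp.json())
```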
L1: Controlling How Long A Reasoning Model Thinks With Reinforcement Learning
Learn how to use CUA (our Computer Using Agent) via the API on multiple computer environments.
Research repository on interfacing LLMs with Weaviate APIs. Inspired by the Berkeley Gorilla LLM.
Di♪♪Rhythm: Blazingly Fast and Embarrassingly Simple End-to-End Full-Length Song Generation with Latent Diffusion
Collection of pretrained models for the Montreal Forced Aligner
DeepEP: an efficient expert-parallel communication library
DeepGEMM: clean and efficient FP8 GEMM kernels with fine-grained scaling
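"Fine-grained scaling" here means quantizing in small blocks, each with its own scale, rather than one scale per tensor. Below is a conceptual PyTorch sketch of per-block FP8 quantization; it only illustrates the idea and is not DeepGEMM's kernels or API:

```python
# Conceptual sketch of fine-grained (per-block) FP8 scaling: each 128x128 block
# gets its own scale so outliers in one block don't crush precision elsewhere.
# Requires PyTorch with float8 support (>= 2.1); dims assumed divisible by block.
import torch

def quantize_per_block_fp8(x: torch.Tensor, block: int = 128):
    rows, cols = x.shape
    fp8_max = 448.0  # max magnitude representable in float8_e4m3fn
    q = torch.empty_like(x, dtype=torch.float8_e4m3fn)
    scales = torch.empty(rows // block, cols // block)
    for i in range(0, rows, block):
        for j in range(0, cols, block):
            blk = x[i:i + block, j:j + block]
            scale = blk.abs().max().clamp(min=1e-12) / fp8_max
            q[i:i + block, j:j + block] = (blk / scale).to(torch.float8_e4m3fn)
            scales[i // block, j // block] = scale
    return q, scales

q, s = quantize_per_block_fp8(torch.randn(256, 256))
print(q.dtype, s.shape)  # torch.float8_e4m3fn, (2, 2)
```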
FlashMLA: Efficient MLA decoding kernels
Cost-efficient and pluggable Infrastructure components for GenAI inference