- San Francisco
- cartesia.ai
Reading list for research topics in state-space models
Building blocks for foundation models.
Dataset and modelling infrastructure for "event streams": sequences of continuous-time, multivariate events with complex internal dependencies.
Scalable and flexible workflow orchestration platform that seamlessly unifies data, ML and analytics stacks.
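As a rough sketch of what that looks like in practice (assuming this refers to Flyte and its flytekit Python SDK; the tasks and workflow below are hypothetical examples, not code from the repo):

```python
# Minimal Flyte sketch: tasks are type-annotated Python functions and a
# workflow composes them into an execution graph.
from flytekit import task, workflow

@task
def double(x: int) -> int:
    return 2 * x

@task
def add_one(x: int) -> int:
    return x + 1

@workflow
def pipeline(x: int = 3) -> int:
    # Flyte builds the DAG from these keyword-argument calls.
    return add_one(x=double(x=x))

if __name__ == "__main__":
    # Workflows can be executed locally for quick testing before
    # registering them with a Flyte cluster.
    print(pipeline(x=5))  # 11
```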
This repo contains data and code for the paper "Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes"
The RedPajama-Data repository contains code for preparing large datasets for training large language models.
Code and documentation to train Stanford's Alpaca models, and generate the data.
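The data and training code revolve around instruction/input/output triples rendered into a text prompt. A minimal sketch of that rendering is below; the wording follows the commonly cited Alpaca template, but treat the exact strings as approximate rather than the repo's verbatim files:

```python
# Sketch of rendering an Alpaca-style instruction example into a prompt.
def render_prompt(instruction: str, inp: str = "") -> str:
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n### Input:\n{inp}\n\n### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

print(render_prompt("Name three primary colors."))
```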
An open science effort to benchmark legal reasoning in foundation models
Adding guardrails to large language models.
Fast and memory-efficient exact attention
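A minimal sketch of calling the fused kernel through the flash_attn Python bindings (assuming a CUDA GPU, half-precision tensors, and the flash_attn_func entry point; shapes and values are illustrative):

```python
# Causal self-attention without materializing the full seqlen x seqlen
# attention matrix. Inputs use the (batch, seqlen, nheads, headdim) layout
# the kernel expects and must be fp16/bf16 on a CUDA device.
import torch
from flash_attn import flash_attn_func

batch, seqlen, nheads, headdim = 2, 1024, 8, 64
q = torch.randn(batch, seqlen, nheads, headdim, device="cuda", dtype=torch.float16)
k = torch.randn_like(q)
v = torch.randn_like(q)

out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # (batch, seqlen, nheads, headdim)
```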
A tiny library for coding with large language models.
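Assuming this refers to HazyResearch's manifest package, prompting a model through it looks roughly like the sketch below; the constructor arguments vary by backend and version, so take them as an approximation:

```python
# Rough sketch of calling an OpenAI-backed model through manifest.
import os
from manifest import Manifest

manifest = Manifest(
    client_name="openai",                            # which backend to use
    client_connection=os.environ["OPENAI_API_KEY"],  # credentials for that backend
)
print(manifest.run("Write a haiku about state-space models.", max_tokens=64))
```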
LlamaIndex is the leading framework for building LLM-powered agents over your data.
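A minimal sketch of the core loop, indexing some local documents and querying them, assuming a recent llama-index release (where these classes live under llama_index.core) and an OpenAI key in the environment:

```python
# Load local files, embed them into a vector index, and ask a question.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

documents = SimpleDirectoryReader("data").load_data()  # "data" is a local folder
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What do these documents say about X?")
print(response)
```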
Creative interactive views of any dataset.
Buttery smooth toast notifications for Svelte
Implementation of DiffWave and SaShiMi audio generation models
Structured state space sequence models
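At their core these models apply a discretized linear state-space recurrence, x_k = A x_{k-1} + B u_k, y_k = C x_k + D u_k. The sketch below unrolls that recurrence naively for illustration only; it is not the repo's implementation, which relies on structured A matrices and convolutional/FFT evaluation for efficiency:

```python
# Illustrative state-space scan over a single input channel.
import numpy as np

def ssm_scan(A, B, C, D, u):
    """Run x_k = A x_{k-1} + B u_k, y_k = C x_k + D u_k over sequence u of shape (L,)."""
    x = np.zeros(A.shape[0])
    ys = []
    for u_k in u:
        x = A @ x + B * u_k          # state update
        ys.append(C @ x + D * u_k)   # readout
    return np.array(ys)

rng = np.random.default_rng(0)
N, L = 4, 16
y = ssm_scan(rng.normal(size=(N, N)) * 0.1, rng.normal(size=N),
             rng.normal(size=N), 0.0, rng.normal(size=L))
print(y.shape)  # (16,)
```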
Implementation of https://srush.github.io/annotated-s4
A benchmark of data-centric tasks from across the machine learning lifecycle.