Zeta - A Transgalactic Library for Scalable Transformations

MIT License

Zeta is a PyTorch-powered library, forged in the heart of the Halo array, that empowers researchers and developers to scale up Transformers efficiently and effectively. It leverages seminal research advancements to enhance the generality, capability, and stability of scaling Transformers while optimizing training efficiency.

Installation

To install:

pip install zetascale

To get hands-on and develop it locally:

git clone https://github.com/kyegomez/zeta.git
cd zeta
pip install -e .
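
To confirm the install, a quick smoke test (a minimal sketch; it assumes only that the package exposes the top-level zeta module used in the examples below):

>>> import zeta  # should succeed without error after either install method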

Initiating Your Journey

Creating a model empowered with Zeta's breakthrough research features (see Key Features below) is a breeze. Here's how to quickly materialize a BERT-like encoder:

>>> from zeta import EncoderConfig
>>> from zeta import Encoder

>>> config = EncoderConfig(vocab_size=64000)
>>> model = Encoder(config)

>>> print(model)
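
To run a forward pass, the encoder needs token embeddings. The following is a hypothetical sketch, not documented API: the embed_tokens and src_tokens argument names and the 768 embedding width are assumptions and may differ in your installed version.

>>> import torch
>>> # Hypothetical forward pass; embed_tokens, src_tokens, and the
>>> # embedding width (768) are assumptions -- verify against your version.
>>> embedding = torch.nn.Embedding(64000, 768)
>>> model = Encoder(config, embed_tokens=embedding)
>>> tokens = torch.randint(0, 64000, (2, 128))  # (batch, sequence_length)
>>> output = model(src_tokens=tokens)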

Additionally, we support the Decoder and EncoderDecoder architectures:

# To create a decoder model
>>> from zeta import DecoderConfig
>>> from zeta import Decoder

>>> config = DecoderConfig(vocab_size=64000)
>>> decoder = Decoder(config)
>>> print(decoder)

# To create an encoder-decoder model
>>> from zeta import EncoderDecoderConfig
>>> from zeta import EncoderDecoder

>>> config = EncoderDecoderConfig(vocab_size=64000)
>>> encdec = EncoderDecoder(config)
>>> print(encdec)
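
Since the print(model) examples suggest these are standard torch.nn.Module instances, ordinary PyTorch tooling applies to all three architectures. For example, a quick parameter count (a minimal sketch using only core PyTorch):

>>> import torch
>>> # Count trainable parameters; works for any torch.nn.Module.
>>> def count_params(module: torch.nn.Module) -> int:
...     return sum(p.numel() for p in module.parameters() if p.requires_grad)
...
>>> print(count_params(decoder))
>>> print(count_params(encdec))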

Key Features

Most of Zeta's transformative features can be enabled by simply setting the corresponding parameters in the config:

>>> from zeta import EncoderConfig
>>> from zeta import Encoder

>>> config = EncoderConfig(vocab_size=64000, deepnorm=True, multiway=True)
>>> model = Encoder(config)

>>> print(model)
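
The full set of available flags depends on your installed version. One way to discover them is to inspect the config object directly (a sketch assuming the config stores its options as plain instance attributes):

>>> # List every option the config carries; assumes plain instance
>>> # attributes (hypothetical -- verify on your version).
>>> for name, value in sorted(vars(config).items()):
...     print(name, "=", value)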

For a complete overview of our key features, refer to our Feature Guide.

Examples

Discover how to wield Zeta across a multitude of scenarios and tasks. We are working tirelessly to expand the collection of examples spanning various tasks (e.g., vision pretraining, speech recognition) and various deep learning frameworks (e.g., DeepSpeed, Megatron-LM). Your comments, suggestions, or contributions are welcome!

Results

Check out our Results Page to witness Zeta's exceptional performance in Stability Evaluations and Scaling-up Experiments.

Acknowledgments

Zeta is a masterpiece inspired by elements of FairSeq and UniLM.

Citations

If our work here in Zeta has aided you in your journey, please consider acknowledging our efforts in your work. You can find relevant citation details in our Citations Document.

Contributing

We're always thrilled to welcome new ideas and improvements from the community. Please check our Contributor's Guide for more details about contributing.

Roadmap

  • Create a modular, omni-universal Attention class supporting flash multi-head attention, regular multi-head attention, or dilated attention, then integrate it into Decoder/DecoderConfig