8000 why vae_latents_to_dit_latents is not used during FT? · Issue #110 · genmoai/mochi · GitHub
Open
jzhang38 opened this issue Dec 9, 2024 · 0 comments
Labels
bug (Something isn't working), good first issue (Good for newcomers)

Comments

jzhang38 commented Dec 9, 2024

It seems that you use the VAE latents directly for fine-tuning. Am I missing anything?

def vae_latents_to_dit_latents(vae_latents: torch.Tensor):

z = ldist.sample()
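For reference, a minimal sketch of what converting the sampled latents before feeding them to the DiT might look like. Only the function name and z = ldist.sample() come from the lines quoted above; the per-channel mean/std constants and the shape handling are assumptions for illustration, not the repo's actual implementation.

import torch

# Assumed per-channel statistics; the real repo defines its own constants.
LATENTS_MEAN = torch.zeros(12)  # assumption: one entry per latent channel
LATENTS_STD = torch.ones(12)    # assumption: one entry per latent channel

def vae_latents_to_dit_latents(vae_latents: torch.Tensor) -> torch.Tensor:
    # Normalize raw VAE latents (B, C, T, H, W) into the DiT's latent space.
    mean = LATENTS_MEAN.to(vae_latents).view(1, -1, 1, 1, 1)
    std = LATENTS_STD.to(vae_latents).view(1, -1, 1, 1, 1)
    return (vae_latents - mean) / std

# The question in this issue: after sampling from the VAE's latent distribution,
#   z = ldist.sample()
# should the fine-tuning loop also apply
#   z = vae_latents_to_dit_latents(z)
# rather than training the DiT on the raw sample z?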

@ajayjain added the bug (Something isn't working) and good first issue (Good for newcomers) labels on Dec 20, 2024
Projects
None yet
Development

No branches or pull requests

2 participants