DCGAN issue with the trigger vector · Issue #11 · dingsheng-ong/ipr-gan · GitHub
DCGAN issue with the trigger vector #11
Open
@Mzoughh


Hi,

I’m currently working on applying your black-box watermarking method on a WGAN-GP model for 64×64 images (the DCGAN use case). I used my own fine-tuning code, adapting parts of your code to integrate your method.

I found that you transform the trigger vector from z to x_w with code like this:

```python
with torch.no_grad():
    y_w = wm_layer(fake_imgs)  # generator output for z, with the watermark added manually
    x_w = td(z)                # transform the vanilla noise into noise with patches

with DisableBatchNormStats(generator_fine_tuned):
    Gxw = generator_fine_tuned(x_w)  # generator output for x_w as input
```
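For context, since I adapted parts of your code rather than running it directly, here is how I approximated DisableBatchNormStats in my own fine-tuning loop. This is my own sketch, not your repo's implementation: it just switches every BatchNorm layer to eval mode on entry (so the trigger batch doesn't pollute the running statistics) and restores the previous train/eval state on exit.

```python
import torch.nn as nn

class DisableBatchNormStats:
    """Freeze BatchNorm running statistics inside a `with` block.

    NOTE: my own approximation of the helper used in ipr-gan,
    not the repository's actual code.
    """
    def __init__(self, model):
        self.model = model
        self.saved = []

    def __enter__(self):
        for m in self.model.modules():
            if isinstance(m, nn.modules.batchnorm._BatchNorm):
                self.saved.append((m, m.training))
                m.eval()  # eval mode: use (and do not update) running stats
        return self.model

    def __exit__(self, *exc):
        for m, was_training in self.saved:
            m.train(was_training)  # restore the original mode
        self.saved.clear()
        return False
```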

My main question is about the td operation. In your code (especially in the config files), I found that td in the DCGAN case refers to:

```python
class TransformDist(nn.Module):
    def __init__(self, config, **kwargs):
        super(TransformDist, self).__init__()

    def forward(self, z):
        y = 0.5 * (1 + torch.erf(z / math.sqrt(2)))
        return y * math.sqrt(2 * math.pi)

    def reset(self): pass
```
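To double-check what this forward pass actually does, I verified numerically (a plain-Python sketch, no PyTorch needed) that it computes sqrt(2*pi) * Phi(z), where Phi is the standard normal CDF. By the probability integral transform, Phi(z) is uniform on (0, 1) for z ~ N(0, 1), so the output noise is uniform on (0, sqrt(2*pi)) rather than Gaussian:

```python
import math
import random

random.seed(0)

def transform_dist(z):
    # Same expression as the forward() above:
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))), then scaled by sqrt(2*pi).
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return phi * math.sqrt(2.0 * math.pi)

samples = [transform_dist(random.gauss(0.0, 1.0)) for _ in range(100_000)]

# Output stays within (0, sqrt(2*pi)) ...
print(0.0 <= min(samples) and max(samples) <= math.sqrt(2.0 * math.pi))  # True
# ... and its mean matches a uniform distribution on that interval.
mean = sum(samples) / len(samples)
print(abs(mean - math.sqrt(2.0 * math.pi) / 2.0) < 0.02)  # True
```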

However, this doesn't seem to correspond directly to Equation 1 in the paper. When I re-implemented Equation 1 as written there, fine-tuning no longer converged.

If anyone can clarify this or help me understand the reasoning behind this implementation, I’d greatly appreciate it!

Thanks in advance.
