Scope of Top-k Pooling Applied to Text (Is the Class Token Included?) #13
Open
@ByeongJuWoo

At this line, the comment suggests that the intention is to exclude the class token (at text_target_token_idx) from the text tokens before pooling. Since the class token is already excluded from the image tokens, excluding it from the text tokens as well would be the natural, consistent choice.

However, the actual implementation at line 106 is text_features = text_features[:, self.text_target_token_idx:, :], which appears in the following context:

def forward_text_feature(self, input_ids, attention_mask):
    text_features = self.text_encoder(
        input_ids=input_ids, attention_mask=attention_mask
    )
    # pre-pooling
    if self.cfg.model.pool.name == "identity":
        # cls token
        text_features = text_features[:, self.text_target_token_idx, :].squeeze(dim=1)
    else:
        # word tokens  <- is this right?
        text_features = text_features[:, self.text_target_token_idx:, :]

    return text_features

Here, it appears that the class token is not being excluded: since text_target_token_idx is 0, the slice [:, 0:, :] is identical to [:, :, :], so every token is kept, including the class token.
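
For concreteness, here is a minimal, runnable sketch (using a dummy tensor, not the repository's code) showing that the slice drops nothing when text_target_token_idx is 0:

import torch

# stand-in for the text encoder output: (batch, seq_len, hidden_dim)
text_features = torch.randn(2, 5, 8)
text_target_token_idx = 0  # class token position, as assumed above

sliced = text_features[:, text_target_token_idx:, :]
assert torch.equal(sliced, text_features)  # [:, 0:, :] keeps all 5 tokens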

Could you clarify which behavior is intended?
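
For reference, this is the behavior I expected in the else branch, assuming the class token really should be dropped before top-k pooling (just my reading of the comment, not a confirmed fix):

# word tokens only: skip the class token at text_target_token_idx
text_features = text_features[:, self.text_target_token_idx + 1:, :]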
