Merge module for Convolution by stelladk · Pull Request #94 · growingnet/gromo

Merge module for Convolution #94


Draft
stelladk wants to merge 1 commit into main

Conversation

@stelladk (Collaborator) commented May 6, 2025

No description provided.

@stelladk stelladk self-assigned this May 6, 2025
@stelladk stelladk added the enhancement New feature or request label May 6, 2025
codecov bot commented May 6, 2025

Codecov Report

Attention: Patch coverage is 19.09091% with 89 lines in your changes missing coverage. Please review.

Files with missing lines                     Patch %   Lines
src/gromo/modules/conv2d_growing_module.py   25.31%    59 Missing ⚠️
src/gromo/modules/growing_module.py           3.22%    30 Missing ⚠️

Flag        Coverage Δ
unittests   83.23% <19.09%> (-3.06%) ⬇️

Flags with carried-forward coverage won't be shown.

Files with missing lines                     Coverage Δ
src/gromo/modules/linear_growing_module.py   69.49% <ø> (-0.26%) ⬇️
src/gromo/modules/growing_module.py          82.78% <3.22%> (-5.47%) ⬇️
src/gromo/modules/conv2d_growing_module.py   71.09% <25.31%> (-13.97%) ⬇️

@sylvchev (Collaborator) left a comment

Nice addition; it seems OK for the merge module. I'm not sure about dropping the global device, though: why remove it?

@@ -69,7 +218,6 @@ def __init__(
     name: str | None = None,
     s_growth_is_needed: bool = False,
 ) -> None:
-    device = device if device is not None else global_device()
Collaborator

Why remove this line?
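
For context, a minimal sketch of the fallback idiom the removed line implemented (ExampleModule is hypothetical; global_device stands in for gromo's accessor, assumed to return the library-wide default torch.device):

import torch

def global_device() -> torch.device:
    # Assumption: gromo's accessor returns the library-wide default device.
    return torch.device("cuda" if torch.cuda.is_available() else "cpu")

class ExampleModule:
    def __init__(self, device: torch.device | None = None) -> None:
        # The removed line: an explicit device wins; otherwise fall back
        # to the global default instead of leaving self.device unset.
        self.device = device if device is not None else global_device()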

Comment on lines +140 to +145
def construct_full_activity(self) -> torch.Tensor:
assert self.previous_modules, f"No previous modules for {self.name}."
full_activity = torch.ones(
(self.previous_modules[0].input.shape[0], self.total_in_features),
device=self.device,
)
Collaborator

I think this won't work: you create a second-order tensor, while a third-order tensor is expected when computing S. I think full_activity should be of shape (n, total_in_features, out_channels), where n is the number of examples in the batch.
In any case, if I am not mistaken, you will see it immediately with the sanity checks.
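
For concreteness, a sketch of the fix this comment suggests, under the assumption that the third axis is out_channels as stated above (attribute names are carried over from the quoted snippet, not the final gromo API):

def construct_full_activity(self) -> torch.Tensor:
    assert self.previous_modules, f"No previous modules for {self.name}."
    n = self.previous_modules[0].input.shape[0]  # number of examples in the batch
    # Third-order buffer (n, total_in_features, out_channels), as suggested,
    # instead of the second-order (n, total_in_features) one.
    full_activity = torch.ones(
        (n, self.total_in_features, self.out_channels),
        device=self.device,
    )
    return full_activity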

@TheoRudkiewicz (Collaborator) left a comment

It's looking good. If I may, I would suggest a first, separate pull request that moves compute_optimal_delta from the child classes to MergeModule, so that this pull request would not change growing_module or linear_growing_module. (I could do that separate PR if you want.)
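
To illustrate the proposed refactor, a minimal sketch of hoisting the method into the parent class (the child class names and the elided body are placeholders, not gromo's actual code):

class MergeModule:
    def compute_optimal_delta(self):
        # Shared logic defined once here instead of being duplicated
        # in each child class; body elided in this sketch.
        ...

class LinearMergeModule(MergeModule):
    # Inherits compute_optimal_delta unchanged; no override needed.
    pass

class Conv2dMergeModule(MergeModule):
    pass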

Comment on lines +275 to +282
def __out_dimention(self, dim: int) -> int:
return (
int(
(self.input_size[dim] - self.kernel_size[dim] + 2 * self.padding[dim])
/ self.stride[dim]
)
+ 1
)
Collaborator

If you want, you could use the function compute_output_shape_conv from gromo.utils.tools, which already does (almost) this computation and has been tested.
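
For reference, the quoted helper implements the standard convolution output-size formula; a self-contained sketch with a worked example (compute_output_shape_conv's exact signature is not reproduced here):

def conv_out_dim(in_size: int, kernel: int, padding: int, stride: int) -> int:
    # Standard formula: floor((in + 2 * pad - k) / stride) + 1.
    return (in_size - kernel + 2 * padding) // stride + 1

# Worked example: a 32x32 input with a 3x3 kernel, padding 1, stride 1
# keeps its spatial size: (32 - 3 + 2) // 1 + 1 == 32.
assert conv_out_dim(32, kernel=3, padding=1, stride=1) == 32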

Comment on lines +292 to +294
@property
def in_features(self) -> int:
return self.in_channels * self.input_size[0] * self.input_size[1] + self.use_bias
Collaborator

Very nice and quite useful!
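
As a quick check of the quoted property (the numbers are illustrative, not from the PR): since use_bias is a bool, adding it contributes 1 exactly when a bias is present.

in_channels, input_size, use_bias = 3, (32, 32), True
in_features = in_channels * input_size[0] * input_size[1] + use_bias
assert in_features == 3073  # 3 * 32 * 32 + 1; the bool counts as 1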

Labels
enhancement New feature or request
Projects
None yet
Development

Successfully merging this pull request may close these issues.

3 participants