Conversation
LGTM, thanks!
@@ -106,5 +106,5 @@ def forward(self,  # pylint: disable=arguments-differ
             output_sequence, final_state = layer(output_sequence, state)
             final_states.append(final_state)

-        final_state_tuple = (torch.cat(state_list, 0) for state_list in zip(*final_states))
+        final_state_tuple = tuple(torch.cat(state_list, 0) for state_list in zip(*final_states))
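For anyone skimming this later: the old right-hand side is a parenthesized generator expression, so `final_state_tuple` could not be indexed and could only be consumed once, which breaks downstream code that reuses the final states. A minimal, self-contained sketch of the difference (the tensor shapes below are made up for illustration and are not the real AllenNLP shapes):

```python
import torch

# Two per-layer final states, each a (hidden, cell) pair of shape (1, batch, dim).
# The shapes here are illustrative only.
final_states = [(torch.zeros(1, 2, 3), torch.zeros(1, 2, 3)),
                (torch.ones(1, 2, 3), torch.ones(1, 2, 3))]

# Old behaviour: a generator expression, not a tuple.
state_gen = (torch.cat(state_list, 0) for state_list in zip(*final_states))
# state_gen[0] raises TypeError ('generator' object is not subscriptable),
# and the generator is exhausted after a single pass.

# Fixed behaviour: materialise the tuple so it can be indexed and reused.
state_tuple = tuple(torch.cat(state_list, 0) for state_list in zip(*final_states))
assert state_tuple[0].shape == (2, 2, 3)  # stacked hidden states
assert state_tuple[1].shape == (2, 2, 3)  # stacked cell states
```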
Could you please add a return type on line 74 and correct the `Returns` section of the docstring?
Done, thanks.
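For reference, a sketch of what the requested annotation might look like; the parameter names, types, and the standalone (non-method) form below are my own assumption, not necessarily what the commit adds:

```python
from typing import Optional, Tuple
import torch
from torch.nn.utils.rnn import PackedSequence

TensorPair = Tuple[torch.Tensor, torch.Tensor]

# Hypothetical illustration of the annotated signature and corrected docstring.
def forward(inputs: PackedSequence,
            initial_state: Optional[TensorPair] = None) -> Tuple[PackedSequence, TensorPair]:
    """
    Returns
    -------
    output_sequence : PackedSequence
        The encoded sequence of outputs.
    final_states : Tuple[torch.Tensor, torch.Tensor]
        The per-layer final hidden and cell states, each concatenated along dimension 0.
    """
    raise NotImplementedError
```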
def test_wrapper_works_with_alternating_lstm(self):
    model = PytorchSeq2VecWrapper(
            StackedAlternatingLstm(input_size=100,
No need for the test fixture to be this size; it makes the tests slow. Just use 4, 5, etc.
Good point! Thanks.
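Something like the following is presumably what was meant, with the fixture shrunk to small dimensions; the exact sizes and the standalone (non-class) form are my own simplification of the review suggestion:

```python
import torch
from allennlp.modules.seq2vec_encoders import PytorchSeq2VecWrapper
from allennlp.modules.stacked_alternating_lstm import StackedAlternatingLstm

def test_wrapper_works_with_alternating_lstm():
    # Tiny dimensions keep the test fast, as suggested in the review.
    model = PytorchSeq2VecWrapper(
            StackedAlternatingLstm(input_size=4,
                                   hidden_size=5,
                                   num_layers=3))

    input_tensor = torch.rand(2, 3, 4)   # (batch, timesteps, input_size)
    mask = torch.ones(2, 3)              # no padding
    output = model(input_tensor, mask)
    assert tuple(output.size()) == (2, 5)  # (batch, hidden_size of the top layer)
```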
Fixes allenai#2503. The `final_state_tuple` in the `StackedAlternatingLstm` was a generator instead of a tuple.