This repository was archived by the owner on Dec 16, 2022. It is now read-only.
Ensure contiguous initial state tensors in _EncoderBase(stateful=True)
#2451
Merged
Conversation
…e)` if zero length sequences are given as input
DeNeutoy
reviewed
Feb 6, 2019
Hi, this looks good, but can you add your little snippet as a test here?
Sure thing! I added checks to the test written for #1493, since it essentially deals with the same problem (only for non-stateful encoders). Also, since I can only provoke the issue on the GPU, I added a GPU-only copy of the tests.
Hi! Sorry, could you guard your test using the pytest decorator, rather than just the if statement inside the test:
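The reviewer's suggestion can be sketched roughly as follows. The function name and the `cuda_available` stand-in are hypothetical; in the real test suite the condition would come from `torch.cuda.is_available()`:

```python
import pytest

def cuda_available():
    """Stand-in for torch.cuda.is_available(); hypothetical for this sketch."""
    return False

# Guarding with a decorator (instead of an `if` early-return inside the
# test body) makes pytest report the test as skipped rather than passed.
@pytest.mark.skipif(not cuda_available(), reason="No CUDA device available")
def test_stateful_encoder_on_gpu():
    # GPU-only assertions would go here.
    ...
```

With the decorator, `pytest -v` shows the test as `SKIPPED` when no GPU is present, which is more informative than a silently passing no-op test.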
…e)` if zero length sequences are given as input
Had some bad continued indentation in the tests
DeNeutoy
approved these changes
Mar 15, 2019
reiyw
pushed a commit
to reiyw/allennlp
that referenced
this pull request
Nov 12, 2019
…e)` (allenai#2451)

This PR fixes the following bug:

**Describe the bug**
If subsequent batches of inputs containing a zero-length sequence are passed to a stateful encoder (e.g. a child of `_EncoderBase` with the `stateful` parameter set to `True`), then the following error is raised:

```
RuntimeError: rnn: hx is not contiguous
```

**To Reproduce**

```python
import torch
from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper

lstm = torch.nn.LSTM(input_size=2, hidden_size=2, num_layers=3, dropout=0.1, batch_first=True)
encoder = PytorchSeq2SeqWrapper(lstm, stateful=True)
encoder = encoder.cuda(0)

inputs = torch.randn(4, 4, 2).cuda(0)
mask = torch.ones(4, 4).cuda(0)
mask[2, :] = 0

encoder(inputs, mask)
encoder(inputs, mask)
```
TalSchuster
pushed a commit
to TalSchuster/allennlp-MultiLang
that referenced
this pull request
Feb 20, 2020
…e)` (allenai#2451)
This PR fixes the following bug:

**Describe the bug**
If subsequent batches of inputs containing a zero-length sequence are passed to a stateful encoder (e.g. a child of `_EncoderBase` with the `stateful` parameter set to `True`), then the following error is raised:

```
RuntimeError: rnn: hx is not contiguous
```

**To Reproduce**

```python
import torch
from allennlp.modules.seq2seq_encoders import PytorchSeq2SeqWrapper

lstm = torch.nn.LSTM(input_size=2, hidden_size=2, num_layers=3, dropout=0.1, batch_first=True)
encoder = PytorchSeq2SeqWrapper(lstm, stateful=True)
encoder = encoder.cuda(0)

inputs = torch.randn(4, 4, 2).cuda(0)
mask = torch.ones(4, 4).cuda(0)
mask[2, :] = 0

encoder(inputs, mask)
encoder(inputs, mask)
```
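The underlying mechanics can be illustrated with a minimal CPU-only sketch (this is an illustration of tensor contiguity, not AllenNLP's actual fix): slicing a tensor along a non-leading dimension, as happens when a stateful encoder trims cached hidden states down to the current batch, produces a view that is not laid out contiguously in memory, which cuDNN's RNN kernels reject for the initial hidden state `hx`. Calling `.contiguous()` materializes a contiguous copy.

```python
import torch

# Shape mirrors an LSTM hidden state: (num_layers, batch, hidden_size).
hidden = torch.randn(3, 4, 2)

# Slicing the batch dimension (e.g. to drop rows for zero-length or
# absent sequences) returns a non-contiguous view: the strides no
# longer match a densely packed layout.
trimmed = hidden[:, :2, :]
print(trimmed.is_contiguous())  # False

# .contiguous() copies the data into a densely packed tensor that
# cuDNN's RNN kernels will accept as hx.
fixed = trimmed.contiguous()
print(fixed.is_contiguous())  # True
```

Note that the error only surfaces on the GPU because only the cuDNN code path enforces contiguity of `hx`; the CPU RNN implementation tolerates non-contiguous inputs, which is why the PR's tests needed a GPU-only variant.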