Make TensorVariable interface more similar to that of numpy.ndarray #1080

Open · abalkin opened this issue Nov 18, 2012 · 21 comments
abalkin (Contributor) commented Nov 18, 2012

See also gh-1216

NumPy's ndarray instances support many convenience methods, most of which are already implemented as global functions in theano.tensor but are not available as TensorVariable members.

Here is the summary:

 +-----------------+-----------------------------+--------------------------------+
 | ndarray         | Theano                      | Status                         |
 +-----------------+-----------------------------+--------------------------------+
 | x.argmax()      | T.argmax(x)                 | DONE                           |
 | x.argmin()      | T.argmin(x)                 | DONE                           |
 | x.argsort()     | T.sort.argsort(x)           | DONE                           |
 | x.choose()      | ---                         | T.switch(x) is numpy.where(x)  |
 | x.clip()        | T.clip(x)                   | DONE                           |
 | x.compress()    | ---                         |                                |
 | x.conj()        | T.conj(x)                   | DONE                           |
 | x.cumprod()     | T.cumprod(x)                | DONE w/o dtype=                |
 | x.cumsum()      | T.cumsum(x)                 | DONE w/o dtype=                |
 | x.diagonal()    | T.diagonal(x)               | DONE                           |
 | x.dot(y)        | T.dot(x, y) or x.__dot__(y) | DONE                           |
 | x.fill(sval)    | T.fill(x, sval)             | DONE                           |
 | x.imag          | T.imag(x)                   | DONE                           |
 | x.nonzero()     | T.nonzero(x)                | DONE                           |
 | x.ptp()         | T.ptp(x, axis)              | DONE                           |
 | x.put()         | ---                         |                                |
 | x.ravel()       | T.flatten(x)                | DONE w/o order=                |
 | x.real          | T.real(x)                   | DONE                           |
 | x.repeat()      | T.repeat(x)                 | DONE                           |
 | x.round()       | T.round(x)                  | DONE w/o decimals=             |
 | x.searchsorted()| T.searchsorted(x)           | DONE                           |
 | x.sort()        | T.sort(x)                   | DONE                           |
 | x.squeeze()     | T.squeeze(x)                | DONE                           |
 | x.swapaxes()    | T.swapaxes(x)               | DONE                           |
 | x.std()         | T.std(x)                    | DONE                           |
 | x.take(i)       | T.take(x, i)                | DONE                           |
 | x.trace()       | SB.linalg.trace(x)          | DONE                           |
 +-----------------+-----------------------------+--------------------------------+
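For illustration, the goal is for both spellings below to work, as they do with numpy.ndarray. A minimal sketch using argmax, one of the rows already marked DONE:

>>> import theano.tensor as T
>>> x = T.matrix('x')
>>> y1 = T.argmax(x, axis=0)  # existing global function
>>> y2 = x.argmax(axis=0)     # ndarray-style method this issue tracks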
nouiz (Member) commented Nov 19, 2012

That is a good idea. We started on this, but we didn't finish.

nouiz (Member) commented Nov 19, 2012

I set the milestone to 0.6.1, as this is fast to do by reusing existing code. But we need someone to do it.

abalkin (Contributor, Author) commented Nov 19, 2012

I'll pick some low-hanging fruit, such as the dot = dot alias, focusing on features that will help implement the linalg functions. It looks like some of them can be copied from numpy.linalg verbatim once the ndarray methods are implemented.

nouiz (Member) commented Nov 19, 2012

I'm not sure I understand what you wrote. I think that in your table, everything with an entry in the Theano column is easy, as the implementation already exists in Theano. What needs to be added is the method in _tensor_py_operators in the file tensor/basic.py. Or do you mean you will try the ones without a Theano implementation?

I'll add the Theano equivalents of take and trace to the table.
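The pattern described above would look roughly like this (an illustrative sketch only; the real class lives in theano/tensor/basic.py and refers to the global functions directly rather than through an import of theano.tensor):

import theano.tensor as T

class _tensor_py_operators(object):
    # each method just forwards to the already-existing global function
    def argmax(self, axis=None):
        return T.argmax(self, axis)

    def clip(self, a_min, a_max):
        return T.clip(self, a_min, a_max)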

abalkin (Contributor, Author) commented Nov 19, 2012

There are several levels of difficulty here:

- trivial: just add an alias to an existing method;
- simple: adapt an existing function to work as a method or attribute (x.T, x.conj(), x.real, etc.);
- implementation required: methods with --- in the Theano column;
- design question: mutating methods like .sort() - should these be implemented at all?

I think it will all become clearer once I show the code, but the above is the rough order in which I intend to tackle this issue. I will probably switch back to #1057 once I have what I need.

nouiz (Member) commented Nov 19, 2012

OK, I'll check the code when you make a new PR.

Thanks.

nouiz added a commit that referenced this issue Nov 27, 2012
Issue #1080: Make TensorVariable interface more similar to that of numpy.ndarray
abalkin (Contributor, Author) commented Dec 7, 2012

I started implementing the take op and ran into the following inconsistency in numpy:

>>> x = np.zeros((2,3,4))
>>> x[:,:,1]
array([[ 0.,  0.,  0.],
       [ 0.,  0.,  0.]])
>>> x.take(1,axis=2)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: object of too small depth for desired array

I am not sure I understand this error message, and numpy.take() seems to be under-documented. Does anyone know if there are any other cases where take() is not equivalent to advanced indexing?

lamblin (Member) commented Dec 7, 2012

On Fri, Dec 07, 2012, abalkin wrote:

x.take(1,axis=2)

Apparently, x.take needs a list of indices:

>>> a.take([1], axis=2)
array([[[ 0.2],
        [ 0.2],
        [ 0.2]],

       [[ 0.2],
        [ 0.2],
        [ 0.2]]])

Pascal
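In other words, take with a sequence of indices keeps the indexed axis (with length 1), while basic indexing with a scalar drops it. A quick check of the correspondence (assuming recent NumPy):

>>> import numpy as np
>>> x = np.zeros((2, 3, 4))
>>> x.take([1], axis=2).shape  # axis kept, length 1
(2, 3, 1)
>>> x[:, :, [1]].shape         # the equivalent advanced indexing
(2, 3, 1)
>>> x[:, :, 1].shape           # scalar basic indexing drops the axis
(2, 3)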

abalkin mentioned this issue Dec 7, 2012
nouiz (Member) commented Feb 18, 2013

gh-1181 adds x.nonzeros().

jsalvatier (Contributor) commented

I could really use the cumsum function.

jsalvatier (Contributor) commented Jan 11, 2014

Here's a quick implementation of cumsum for vectors: https://gist.github.com/jsalvatier/8378901. I think grad only works for the 1-d case.
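For reference, a minimal sketch of what such an Op can look like. This is not the gist itself, just an illustration against the Theano Op API, 1-d only, using the fact that the gradient of cumsum is a reversed cumulative sum of the output gradient:

import numpy as np
import theano.tensor as T
from theano.gof import Op, Apply

class CumsumVector(Op):
    """Sketch: cumulative sum of a 1-d tensor."""

    def make_node(self, x):
        x = T.as_tensor_variable(x)
        assert x.ndim == 1
        return Apply(self, [x], [x.type()])

    def perform(self, node, inputs, output_storage):
        (x,) = inputs
        output_storage[0][0] = np.cumsum(x)

    def grad(self, inputs, output_grads):
        (g,) = output_grads
        # d/dx_i of sum_j g_j * cumsum(x)_j is sum_{j >= i} g_j,
        # i.e. a reversed cumulative sum of g
        return [CumsumVector()(g[::-1])[::-1]]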

nouiz (Member) commented Jan 13, 2014

Can you make a PR out of that?

You can put it in theano/tensor/extra_ops.py

Thanks.

nouiz (Member) commented Jan 18, 2014

Another user asked for this today. :) So it seems you are not the only one wanting this. :)

tsirif (Contributor) commented Mar 22, 2016

How much still needs to be done for this? Does anybody know which calls or operations are left?
I would like to take it on for a GSoC '16 PR.

gokul-uf (Contributor) commented

@tsirif check out the ideas in the low-priority section here: https://github.com/Theano/Theano/wiki/GSoC2016

tsirif (Contributor) commented Mar 22, 2016

@gokul-uf I didn't describe this well, sorry. I mean I want to do a PR in order to be eligible to participate in GSoC. Is there anything available related to this issue?

gokul-uf (Contributor) commented

I'm not sure; I have not been following this. @nouiz or @lamblin would be able to answer your query.

MarcCote (Contributor) commented

T.searchsorted doesn't seem to exist. I just remembered I had started it a while back. This might help:
https://github.com/MarcCote/Theano/blob/searchsorted/theano/tensor/extra_ops.py#L11

tsirif (Contributor) commented Mar 22, 2016

Thanks, I will look this up!

hlin117 commented May 2, 2016

@tsirif searchsorted is now available in the master branch. However, it does not work on the GPU yet:
https://github.com/Theano/Theano/blob/master/theano/tensor/extra_ops.py#L177
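Usage mirrors numpy.searchsorted; a small example, assuming the extra_ops import path from the link above:

>>> import numpy as np
>>> import theano
>>> import theano.tensor as T
>>> from theano.tensor.extra_ops import searchsorted
>>> a, v = T.vector('a'), T.vector('v')
>>> f = theano.function([a, v], searchsorted(a, v))
>>> f(np.arange(5.0), np.array([-1.0, 2.5, 9.0]))
array([0, 3, 5])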

tsirif (Contributor) commented May 2, 2016

@hlin117 Yes, I wrote a note in the docs stating explicitly that there is only a CPU implementation so far. Check #4422.

Do you have anything to suggest for a GPU implementation? Is there a GPU library which implements it already? (I guess it should not use binary search.)
