Gaussian Process as non-final layer · Issue #11 · alshedivat/keras-gp · GitHub
Gaussian Process as non-final layer #11
Open
@fullerf

Description


Hi,

This is a fantastic job gluing Keras to GPML. I got your examples to work and a toy problem of my own working already, so I'm quite happy. The next thing I wanted to try is connecting several GP layers to a Dense Keras layer before the output. I'm getting some errors when attempting this, even though the model compiles.

Here's my code:

    import numpy as np
    from kgp.layers import GP  # the keras-gp GP layer

    def make_GP_layer(batch_size, nb_train_samples):
        return GP(inf='infGrid',
                  lik='likGauss',
                  dlik='dlikGrid',
                  cov='covSEiso',
                  opt={'cg_maxit': 2000, 'cg_tol': 1e-6},
                  mean='meanConst',
                  grid_kwargs={'eq': 1, 'k': 150.0},  # equispaced grid with 150 points over the data range
                  update_grid=1,
                  batch_size=batch_size,
                  nb_train_samples=nb_train_samples,
                  hyp={'lik': float(np.log(2.0)),  # GPML expects log(noise std dev)
                       'cov': [[1.0], [0.5]],      # initial covariance hyperparameters, tiled for all dims
                       'mean': float(0.1)})

    # note: anything passed through to the MATLAB engine must be native Python
    # types (float, list), not numpy types; see the small conversion sketch below
    from keras.layers import Input, Lambda, Concatenate, Reshape, Dense
    from keras.optimizers import Adam
    from kgp.models import Model           # keras-gp's Model wrapper
    from kgp.losses import gen_gp_loss     # GP-specific loss generator

    def assemble_hierarchal_model(input_shape, chunk_D, batch_size, nb_train_samples):
        inp = Input(shape=input_shape)
        slice_1 = Lambda(lambda x: x[..., 0:chunk_D])(inp)
        slice_2 = Lambda(lambda x: x[..., chunk_D:(chunk_D * 2)])(inp)
        gp1 = make_GP_layer(batch_size, nb_train_samples)
        gp2 = make_GP_layer(batch_size, nb_train_samples)
        g_1 = gp1(slice_1)
        g_2 = gp2(slice_2)
        slurp = Concatenate()([g_1, g_2])
        sslurp = Reshape((2,))(slurp)
        out = Dense(1, use_bias=False)(sslurp)
        model = Model(inputs=inp, outputs=out)
        # loss = [gen_gp_loss(x) for x in [g_1, g_2]]
        model.compile(optimizer=Adam(1e-4), loss='mse')
        return model
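
To spell out the note in the code above, here is a minimal sketch of the kind of conversion I mean (`to_matlab_scalar` is just an illustrative helper I made up, not part of keras-gp):

    import numpy as np

    def to_matlab_scalar(x):
        # the MATLAB engine only accepts native Python types, so cast numpy
        # scalars (e.g. np.float64) down to plain Python floats
        return float(np.asarray(x).item())

    hyp_lik = to_matlab_scalar(np.log(2.0))  # equivalent to the float(np.log(2.0)) above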

As mentioned, the model compiles. The Reshape layer is necessary to keep the TensorFlow back-end happy: somehow Keras can detect the size of the GP outputs and concatenate them correctly, but when I add the Dense layer after the Concatenate, it acts like it doesn't know the size. The Reshape fixes this.
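
For what it's worth, one alternative I have been considering (hypothetical, untested) is to declare output_shape on the slicing Lambdas so Keras can propagate static shapes downstream, though I am not sure that helps with the GP layers' own output shapes:

    # hypothetical, untested: give the Lambda slices an explicit output_shape;
    # the Reshape((2,)) after Concatenate may still be needed
    slice_1 = Lambda(lambda x: x[..., 0:chunk_D], output_shape=(chunk_D,))(inp)
    slice_2 = Lambda(lambda x: x[..., chunk_D:(chunk_D * 2)], output_shape=(chunk_D,))(inp)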

If you run this model, though, it gives an odd error:

    /Users/fdfuller/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in control_dependencies(self, control_inputs)
       3312     current = self._current_control_dependencies()
       3313     for c in control_inputs:
    -> 3314       c = self.as_graph_element(c)
       3315       if isinstance(c, Tensor):
       3316         c = c.op

    /Users/fdfuller/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in as_graph_element(self, obj, allow_tensor, allow_operation)
       2403 
       2404     with self._lock:
    -> 2405       return self._as_graph_element_locked(obj, allow_tensor, allow_operation)
       2406 
       2407   def _as_graph_element_locked(self, obj, allow_tensor, allow_operation):

    /Users/fdfuller/anaconda/lib/python2.7/site-packages/tensorflow/python/framework/ops.pyc in _as_graph_element_locked(self, obj, allow_tensor, allow_operation)
       2492       # We give up!
       2493       raise TypeError("Can not convert a %s into a %s."
    -> 2494                       % (type(obj).__name__, types_str))
       2495 
       2496   def get_operations(self):

    TypeError: Can not convert a int into a Tensor or Operation.

I'm digging into your backend to try to understand this. If I use the GP loss function, it complains that the Dense layer doesn't know about dh/dx and won't compile. With the mse loss it compiles but gives this error, possibly because the kgp Model doesn't have this loss registered?
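
For comparison, the pattern in the package examples (if I'm reading them right) passes the GP layers themselves to gen_gp_loss and compiles with one GP loss per output; that only applies when the GP layers are the model outputs, which isn't the case in my hierarchical model:

    # roughly the compile step from the keras-gp examples, as I understand it;
    # one gen_gp_loss per GP output layer, which my Dense-on-top model doesn't have
    from kgp.losses import gen_gp_loss
    loss = [gen_gp_loss(gp_layer) for gp_layer in model.output_layers]
    model.compile(optimizer=Adam(1e-4), loss=loss)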

Anyway, any tips would be helpful.
