A question for the maintainer: when training the network in a supervised fashion, the training batch seems to also include nodes that are not near each other, yet the supervised loss appears to be computed directly from the logits obtained after the activation function. Isn't that a bit questionable? Shouldn't only positive samples be used (when training in the supervised setting)? · Issue #25 · twjiang/graphSAGE-pytorch · GitHub
As in the title. I saw you're from HIT (Harbin Institute of Technology), so I didn't write this in English.
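For readers following this issue, here is a minimal sketch (not the repo's actual code) of the distinction the question is drawing: in supervised training the loss is typically cross-entropy on class logits against node labels, so positive/negative neighbor pairs do not enter the loss at all, whereas the unsupervised GraphSAGE loss explicitly uses nearby (positive) node pairs and randomly sampled negative pairs. All names below (`supervised_loss`, `unsupervised_loss`, `embeds`, the pair tensors) are hypothetical and only illustrate the two loss styles.

```python
# Minimal sketch, assuming `embeds` are node embeddings from a GraphSAGE encoder.
import torch
import torch.nn.functional as F

def supervised_loss(embeds, weight, labels):
    # Supervised: project embeddings to class logits and apply cross-entropy
    # against the node labels; neighbor sampling does not enter this loss.
    logits = embeds @ weight                     # [num_nodes, num_classes]
    return F.cross_entropy(logits, labels)

def unsupervised_loss(embeds, pos_pairs, neg_pairs):
    # Unsupervised (GraphSAGE-paper style): pull embeddings of nearby
    # (positive) pairs together and push sampled negative pairs apart.
    pos_score = (embeds[pos_pairs[:, 0]] * embeds[pos_pairs[:, 1]]).sum(dim=1)
    neg_score = (embeds[neg_pairs[:, 0]] * embeds[neg_pairs[:, 1]]).sum(dim=1)
    return -F.logsigmoid(pos_score).mean() - F.logsigmoid(-neg_score).mean()

if __name__ == "__main__":
    torch.manual_seed(0)
    embeds = torch.randn(10, 16, requires_grad=True)  # 10 nodes, 16-dim embeddings
    weight = torch.randn(16, 3, requires_grad=True)   # 3 hypothetical classes
    labels = torch.randint(0, 3, (10,))
    pos = torch.tensor([[0, 1], [2, 3]])              # hypothetical neighbor pairs
    neg = torch.tensor([[0, 7], [2, 9]])              # hypothetical non-neighbor pairs
    print(supervised_loss(embeds, weight, labels).item())
    print(unsupervised_loss(embeds, pos, neg).item())
```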