With torch_geometric, the graph-learning library for PyTorch, implementing a GCN is straightforward: the graph-convolution layers come ready-made.

First, install torch_geometric:

$ pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html
$ pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html
$ pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html
$ pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.7.0+${CUDA}.html
$ pip install torch-geometric

Replace ${CUDA} with cu110, cu102, or cu101 to match your CUDA version.
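
If you are unsure which variant applies, you can print the CUDA version your PyTorch build was compiled against (torch.version.cuda is None for CPU-only builds):

$ python -c "import torch; print(torch.__version__, torch.version.cuda)"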

Since I am using CUDA 11.0, the commands are:

$ pip install --no-index torch-scatter -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
$ pip install --no-index torch-sparse -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
$ pip install --no-index torch-cluster -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
$ pip install --no-index torch-spline-conv -f https://pytorch-geometric.com/whl/torch-1.7.0+cu110.html
$ pip install torch-geometric
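
After these commands finish, a quick import check confirms the installation succeeded:

$ python -c "import torch_geometric; print(torch_geometric.__version__)"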

Once everything is installed, the example can be run directly. The dataset used here is Cora.

The dataset is mirrored on Baidu Netdisk, because the default download location is GitHub and the connection often fails:

Link: https://pan.baidu.com/s/1mmk2oESN25WdyRKTqgo3sg
Extraction code: 7pd6
Unzip it into the same directory as main.py:

├── [  52]  data
│   ├── [  34]  Cora
│   │   ├── [  66]  processed
│   │   │   ├── [ 15M]  data.pt
│   │   │   ├── [ 431]  pre_filter.pt
│   │   │   └── [ 431]  pre_transform.pt
│   │   └── [ 171]  raw
│   │       ├── [251K]  ind.cora.allx
│   │       ├── [ 47K]  ind.cora.ally
│   │       ├── [ 58K]  ind.cora.graph
│   │       ├── [4.9K]  ind.cora.test.index
│   │       ├── [145K]  ind.cora.tx
│   │       ├── [ 27K]  ind.cora.ty
│   │       ├── [ 22K]  ind.cora.x
│   │       └── [4.0K]  ind.cora.y
│   ├── [  34]  ENZYMES
│   │   ├── [  66]  processed
│   │   │   ├── [2.7M]  data.pt
│   │   │   ├── [ 431]  pre_filter.pt
│   │   │   └── [ 431]  pre_transform.pt
│   │   └── [ 178]  raw
│   │       ├── [864K]  ENZYMES_A.txt
│   │       ├── [ 73K]  ENZYMES_graph_indicator.txt
│   │       ├── [1.2K]  ENZYMES_graph_labels.txt
│   │       ├── [3.7M]  ENZYMES_node_attributes.txt
│   │       ├── [ 38K]  ENZYMES_node_labels.txt
│   │       └── [2.5K]  README.txt
│   └── [ 166]  ENZYMES.zip
└── [1.5K]  main.py
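
Before training, a short sanity check (a minimal sketch using the same Planetoid loader as the main script) confirms that the files are picked up from ./data instead of being re-downloaded:

from torch_geometric.datasets import Planetoid

# Loads from ./data/Cora; downloads from GitHub only if the files are missing.
dataset = Planetoid(root='./data', name='Cora')
data = dataset[0]

print(data)  # Data(x=[2708, 1433], edge_index=[2, 10556], ...)
print(data.train_mask.sum().item(),  # 140 training nodes
      data.val_mask.sum().item(),    # 500 validation nodes
      data.test_mask.sum().item())   # 1000 test nodes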

Now the code can be run:

from torch_geometric.datasets import Planetoid
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv

dataset = Planetoid(root='./data', name='Cora')
print(dataset[0].y.shape)

class Net(torch.nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        # Two graph-convolution layers: input features -> 16 hidden -> num_classes
        self.conv1 = GCNConv(dataset.num_node_features, 16)
        self.conv2 = GCNConv(16, dataset.num_classes)

    def forward(self, data):
        x, edge_index = data.x, data.edge_index
        x = self.conv1(x, edge_index)
        x = F.relu(x)
        x = F.dropout(x, training=self.training)
        x = self.conv2(x, edge_index)
        return F.log_softmax(x, dim=1)

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
model = Net().to(device)
data = dataset[0].to(device)
# Weight decay is applied only to the first layer, following the reference example.
optimizer = torch.optim.Adam([
    dict(params=model.conv1.parameters(), weight_decay=5e-4),
    dict(params=model.conv2.parameters(), weight_decay=0)
], lr=0.01)

model.train()
for epoch in range(1000):
    optimizer.zero_grad()
    out = model(data)
    loss = F.nll_loss(out[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()
    if epoch % 10 == 9:
        # Evaluate accuracy on the train/val/test splits every 10 epochs.
        model.eval()
        logits, accs = model(data), []
        for _, mask in data('train_mask', 'val_mask', 'test_mask'):
            pred = logits[mask].max(1)[1]
            acc = pred.eq(data.y[mask]).sum().item() / mask.sum().item()
            accs.append(acc)
        log = 'Epoch: {:03d}, Train: {:.5f}, Val: {:.5f}, Test: {:.5f}'
        print(log.format(epoch + 1, accs[0], accs[1], accs[2]))
        model.train()  # restore training mode so dropout stays active

Output:

torch.Size([2708])
Epoch: 010, Train: 0.97143, Val: 0.68800, Test: 0.72700
Epoch: 020, Train: 0.99286, Val: 0.77200, Test: 0.78000
Epoch: 030, Train: 1.00000, Val: 0.76800, Test: 0.77700
Epoch: 040, Train: 1.00000, Val: 0.76800, Test: 0.77800
Epoch: 050, Train: 1.00000, Val: 0.76800, Test: 0.78600
Epoch: 060, Train: 1.00000, Val: 0.77000, Test: 0.79000
Epoch: 070, Train: 1.00000, Val: 0.77400, Test: 0.79600
Epoch: 080, Train: 1.00000, Val: 0.77400, Test: 0.79600
Epoch: 090, Train: 1.00000, Val: 0.77400, Test: 0.79500
Epoch: 100, Train: 1.00000, Val: 0.77400, Test: 0.79400
Epoch: 110, Train: 1.00000, Val: 0.77200, Test: 0.79500
Epoch: 120, Train: 1.00000, Val: 0.77000, Test: 0.79500
Epoch: 130, Train: 1.00000, Val: 0.76800, Test: 0.79400
Epoch: 140, Train: 1.00000, Val: 0.76600, Test: 0.79400
Epoch: 150, Train: 1.00000, Val: 0.76400, Test: 0.79700
Epoch: 160, Train: 1.00000, Val: 0.76400, Test: 0.79700
Epoch: 170, Train: 1.00000, Val: 0.76200, Test: 0.80100
Epoch: 180, Train: 1.00000, Val: 0.76400, Test: 0.80200
Epoch: 190, Train: 1.00000, Val: 0.76400, Test: 0.80300
Epoch: 200, Train: 1.00000, Val: 0.76600, Test: 0.80300
Epoch: 210, Train: 1.00000, Val: 0.76600, Test: 0.80400
Epoch: 220, Train: 1.00000, Val: 0.76600, Test: 0.80400
Epoch: 230, Train: 1.00000, Val: 0.76400, Test: 0.80400
Epoch: 240, Train: 1.00000, Val: 0.76600, Test: 0.80400
Epoch: 250, Train: 1.00000, Val: 0.76000, Test: 0.80400
Epoch: 260, Train: 1.00000, Val: 0.76200, Test: 0.80500
Epoch: 270, Train: 1.00000, Val: 0.76600, Test: 0.80700
Epoch: 280, Train: 1.00000, Val: 0.76800, Test: 0.80500
Epoch: 290, Train: 1.00000, Val: 0.77000, Test: 0.80500
Epoch: 300, Train: 1.00000, Val: 0.77000, Test: 0.80500
Epoch: 310, Train: 1.00000, Val: 0.77200, Test: 0.80700
Epoch: 320, Train: 1.00000, Val: 0.77400, Test: 0.80400
Epoch: 330, Train: 1.00000, Val: 0.77400, Test: 0.80400
Epoch: 340, Train: 1.00000, Val: 0.77400, Test: 0.80400
Epoch: 350, Train: 1.00000, Val: 0.77400, Test: 0.80600
Epoch: 360, Train: 1.00000, Val: 0.77400, Test: 0.80700
Epoch: 370, Train: 1.00000, Val: 0.77400, Test: 0.80600
Epoch: 380, Train: 1.00000, Val: 0.77600, Test: 0.80700
Epoch: 390, Train: 1.00000, Val: 0.77600, Test: 0.80600
Epoch: 400, Train: 1.00000, Val: 0.77400, Test: 0.80600
Epoch: 410, Train: 1.00000, Val: 0.77400, Test: 0.80600
Epoch: 420, Train: 1.00000, Val: 0.77400, Test: 0.80700
Epoch: 430, Train: 1.00000, Val: 0.77400, Test: 0.80800
Epoch: 440, Train: 1.00000, Val: 0.77600, Test: 0.80800
Epoch: 450, Train: 1.00000, Val: 0.77600, Test: 0.80800
Epoch: 460, Train: 1.00000, Val: 0.77400, Test: 0.80800
Epoch: 470, Train: 1.00000, Val: 0.77600, Test: 0.80800
Epoch: 480, Train: 1.00000, Val: 0.77400, Test: 0.80900
Epoch: 490, Train: 1.00000, Val: 0.77600, Test: 0.80900
Epoch: 500, Train: 1.00000, Val: 0.77600, Test: 0.81000
Epoch: 510, Train: 1.00000, Val: 0.77600, Test: 0.81000
Epoch: 520, Train: 1.00000, Val: 0.77400, Test: 0.81200
Epoch: 530, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 540, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 550, Train: 1.00000, Val: 0.77600, Test: 0.81100
Epoch: 560, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 570, Train: 1.00000, Val: 0.77600, Test: 0.81100
Epoch: 580, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 590, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 600, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 610, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 620, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 630, Train: 1.00000, Val: 0.77600, Test: 0.81200
Epoch: 640, Train: 1.00000, Val: 0.77600, Test: 0.81400
Epoch: 650, Train: 1.00000, Val: 0.77600, Test: 0.81400
Epoch: 660, Train: 1.00000, Val: 0.77600, Test: 0.81300
Epoch: 670, Train: 1.00000, Val: 0.77600, Test: 0.81300
Epoch: 680, Train: 1.00000, Val: 0.77600, Test: 0.81300
Epoch: 690, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 700, Train: 1.00000, Val: 0.77600, Test: 0.81300
Epoch: 710, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 720, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 730, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 740, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 750, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 760, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 770, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 780, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 790, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 800, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 810, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 820, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 830, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 840, Train: 1.00000, Val: 0.77800, Test: 0.81200
Epoch: 850, Train: 1.00000, Val: 0.77800, Test: 0.81300
Epoch: 860, Train: 1.00000, Val: 0.77800, Test: 0.81200
Epoch: 870, Train: 1.00000, Val: 0.77800, Test: 0.81200
Epoch: 880, Train: 1.00000, Val: 0.77800, Test: 0.81200
Epoch: 890, Train: 1.00000, Val: 0.77800, Test: 0.81200
Epoch: 900, Train: 1.00000, Val: 0.77800, Test: 0.81200
Epoch: 910, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 920, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 930, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 940, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 950, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 960, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 970, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 980, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 990, Train: 1.00000, Val: 0.77800, Test: 0.81100
Epoch: 1000, Train: 1.00000, Val: 0.77800, Test: 0.81100

Note: the code is adapted from https://github.com/rusty1s/pytorch_geometric/tree/master/examples
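
Once training finishes, the model can be reused for inference. A minimal sketch (the filename gcn_cora.pt is just an example):

model.eval()
with torch.no_grad():
    pred = model(data).argmax(dim=1)   # predicted class per node
print(pred[data.test_mask][:10])       # predictions for the first 10 test nodes

torch.save(model.state_dict(), 'gcn_cora.pt')  # persist the trained weights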
