This should work:
>>> import torch
>>> torch.cuda.is_available()
True
>>> torch.cuda.current_device()
0
>>> torch.cuda.device(0)
<torch.cuda.device at 0x7efce0b03be0>
>>> torch.cuda.device_count()
1
>>> torch.cuda.get_device_name(0)
'GeForce GTX 950M'
This tells me CUDA is available and can be used by one of your devices (GPUs). Currently, Device 0, i.e. the GPU GeForce GTX 950M, is the device being used by PyTorch.
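Once availability is confirmed, the usual next step is to select that device and move tensors (or a model) onto it. A minimal sketch, assuming a single CUDA-capable GPU; the tensor shape and variable names are just illustrative:

import torch

# Pick the GPU if one is available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Create a tensor on the CPU, then move it to the chosen device.
x = torch.randn(3, 3)
x = x.to(device)

print(x.device)  # e.g. cuda:0 when a GPU is present, otherwise cpu

Any computation involving x will now run on that device, as long as every tensor in the operation lives on the same device.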