gpu-0-0~1# cat > /etc/profile.d/cuda.sh << 'EOF'
CUDADIR=/share/apps/cuda
if ! echo ${PATH} | /bin/grep -q $CUDADIR/bin ; then
    PATH=$CUDADIR/bin:${PATH}
    LD_LIBRARY_PATH=$CUDADIR/lib64:${LD_LIBRARY_PATH}
fi
export PATH LD_LIBRARY_PATH
EOF
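The point of the `grep` test in cuda.sh is idempotence: the fragment can be sourced repeatedly (login shells, `su -`, nested shells) without stacking duplicate entries onto `PATH`. A minimal runnable sketch of that guard, exercised against a scratch variable `DEMO_PATH` so it does not touch the real environment:

```shell
#!/bin/sh
# Sketch of the duplicate-entry guard used in cuda.sh, run against a
# scratch variable (DEMO_PATH) instead of the real PATH.
CUDADIR=/share/apps/cuda
DEMO_PATH=/usr/bin:/bin

add_cuda_path() {
    # Prepend $CUDADIR/bin only if it is not already present.
    if ! echo "${DEMO_PATH}" | grep -q "$CUDADIR/bin" ; then
        DEMO_PATH=$CUDADIR/bin:${DEMO_PATH}
    fi
}

add_cuda_path
add_cuda_path   # second call is a no-op thanks to the grep guard
echo "$DEMO_PATH"
```

Without the guard, every nested shell would prepend another copy of `$CUDADIR/bin`.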
gpu-0-0~1# vi /etc/grub.conf
kernel /boot/kickstart/default/vmlinuz-5.5-x86_64 ro root=LABEL=/ ramdisk_size=150000 kssendmac ks selinux=0 rdblacklist=nouveau nouveau.modeset=0
gpu-0-0~1# reboot
gpu-0-0~1# wget http://us.download.nvidia.com/XFree86/Linux-x86_64/340.24/NVIDIA-Linux-x86_64-340.24.run
gpu-0-0~1# sh NVIDIA-Linux-x86_64-340.24.run --kernel-source-path=/usr/src/kernels/2.6.18-308.4.1.el5-x86_64 --no-questions --ui=none --accept-license
gpu-0-0~1# cat > /etc/modprobe.d/blacklist-nouveau.conf << 'EOF'
blacklist nouveau
options nouveau modeset=0
EOF
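The two modprobe directives must sit on separate lines; a single `echo` of both would produce one line that modprobe cannot parse. A runnable sketch of writing the file correctly with `printf`, using a temporary path here in place of the real target `/etc/modprobe.d/blacklist-nouveau.conf`:

```shell
#!/bin/sh
# Write the two modprobe directives on separate lines. TMPCONF stands
# in for /etc/modprobe.d/blacklist-nouveau.conf in this sketch.
TMPCONF=$(mktemp)
printf 'blacklist nouveau\noptions nouveau modeset=0\n' > "$TMPCONF"
cat "$TMPCONF"
```

After writing the real file, `lsmod | grep nouveau` on the next boot should return nothing.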
### CUDA Toolkit 6.0
https://developer.nvidia.com/cuda-downloads
Frontend# wget http://developer.download.nvidia.com/compute/cuda/6_0/rel/installers/cuda_6.0.37_linux_64.run
Frontend# sh cuda_6.0.37_linux_64.run
Do you accept the previously read EULA? (accept/decline/quit): accept
Install NVIDIA Accelerated Graphics Driver for Linux-x86_64 331.62? ((y)es/(n)o/(q)uit): n
Install the CUDA 6.0 Toolkit? ((y)es/(n)o/(q)uit): y
Enter Toolkit Location [ default is /usr/local/cuda-6.0 ]: /share/apps/cuda/
Do you want to install a symbolic link at /usr/local/cuda? ((y)es/(n)o/(q)uit): y
Install the CUDA 6.0 Samples? ((y)es/(n)o/(q)uit): y
Enter CUDA Samples Location [ default is /root/NVIDIA_CUDA-6.0_Samples ]: /share/apps/cuda/
Installing the CUDA Toolkit in /share/apps/cuda ...
Frontend# cd /share/apps/cuda/NVIDIA_CUDA-6.0_Samples/1_Utilities/deviceQuery
Frontend# make
gpu-0-0~1# ./deviceQuery
./deviceQuery Starting...
CUDA Device Query (Runtime API) version (CUDART static linking)
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 580"
CUDA Driver Version / Runtime Version 6.5 / 6.5
CUDA Capability Major/Minor version number: 2.0
Total amount of global memory: 1535 MBytes (1609760768 bytes)
(16) Multiprocessors, ( 32) CUDA Cores/MP: 512 CUDA Cores
GPU Clock rate: 1544 MHz (1.54 GHz)
Memory Clock rate: 2004 MHz
Memory Bus Width: 384-bit
L2 Cache Size: 786432 bytes
Maximum Texture Dimension Size (x,y,z) 1D=(65536), 2D=(65536, 65535), 3D=(2048, 2048, 2048)
Maximum Layered 1D Texture Size, (num) layers 1D=(16384), 2048 layers
Maximum Layered 2D Texture Size, (num) layers 2D=(16384, 16384), 2048 layers
Total amount of constant memory: 65536 bytes
Total amount of shared memory per block: 49152 bytes
Total number of registers available per block: 32768
Warp size: 32
Maximum number of threads per multiprocessor: 1536
Maximum number of threads per block: 1024
Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
Max dimension size of a grid size (x,y,z): (65535, 65535, 65535)
Maximum memory pitch: 2147483647 bytes
Texture alignment: 512 bytes
Concurrent copy and kernel execution: Yes with 1 copy engine(s)
Run time limit on kernels: No
Integrated GPU sharing Host Memory: No
Support host page-locked memory mapping: Yes
Alignment requirement for Surfaces: Yes
Device has ECC support: Disabled
Device supports Unified Addressing (UVA): Yes
Device PCI Bus ID / PCI location ID: 1 / 0
Compute Mode:
< Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce GTX 580
Result = PASS
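When checking many GPU nodes, it is handy to reduce the deviceQuery output to a pass/fail status instead of reading it by eye. A sketch of that check, grepping for the final `Result = PASS` line; a captured log stands in for a live deviceQuery run here:

```shell
#!/bin/sh
# Scripted health check: treat a node as healthy only if deviceQuery
# reports PASS. A captured log stands in for a live run in this sketch.
LOG=$(mktemp)
cat > "$LOG" << 'EOF'
deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 6.5, CUDA Runtime Version = 6.5, NumDevs = 1, Device0 = GeForce GTX 580
Result = PASS
EOF

if grep -q '^Result = PASS' "$LOG" ; then
    STATUS=ok
else
    STATUS=fail
fi
echo "$STATUS"
```

On a live node you would pipe `./deviceQuery` straight into the same `grep`.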
# vi test_gpu.sh
#!/bin/sh
#$ -N test_gpu
#$ -q gpu.q
#$ -cwd
#$ -l gpu=1
/share/apps/cuda/NVIDIA_CUDA-6.0_Samples/1_Utilities/deviceQuery/deviceQuery
# qsub test_gpu.sh
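After `qsub`, Grid Engine writes the job's stdout to `<jobname>.o<jobid>` in the `-cwd` directory, so you can confirm which GPU the job actually saw by parsing that file. A sketch that extracts the detected device name; a hand-written sample file stands in for a real `.o` file here:

```shell
#!/bin/sh
# Pull the detected GPU name out of a deviceQuery job's stdout file.
# OUT stands in for a real <jobname>.o<jobid> file in this sketch.
OUT=$(mktemp)
cat > "$OUT" << 'EOF'
Detected 1 CUDA Capable device(s)
Device 0: "GeForce GTX 580"
Result = PASS
EOF
DEVICE=$(sed -n 's/^Device 0: "\(.*\)"$/\1/p' "$OUT")
echo "$DEVICE"
```

If the extracted name is empty, the job either landed on a node without a working driver or deviceQuery failed outright.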
### Amber14 install
# cd /share/apps/
# tar jxvpf Amber14.tar.bz2
# tar jxvpf AmberTools14.tar.bz2
# cd /share/apps/amber14
# export AMBERHOME=`pwd`
# export CUDA_HOME=/share/apps/cuda/
# ./update_amber --update
# cd $AMBERHOME
# ./configure -cuda gnu
# make -j 8 install
Building AmberTools 14 and Amber 14 in parallel
# cd $AMBERHOME
# ./configure -mpi gnu
# make install
Building CUDA-enabled Amber 14 (pmemd.cuda; Amber only, not AmberTools)
# cd $AMBERHOME
# ./configure -cuda gnu
# make install
Building CUDA-enabled Amber in parallel
# cd $AMBERHOME
# ./configure -cuda -mpi gnu
# make install
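Once the four build passes are done, users on the compute nodes need `AMBERHOME` and the Amber binaries on their `PATH`, which can be handled with a profile fragment mirroring the cuda.sh pattern above. A sketch that writes and sources such a fragment; the temporary file stands in for an assumed target like `/etc/profile.d/amber.sh`:

```shell
#!/bin/sh
# Profile fragment exposing the Amber build to users, mirroring the
# cuda.sh pattern. TMPPROFILE stands in for /etc/profile.d/amber.sh.
TMPPROFILE=$(mktemp)
cat > "$TMPPROFILE" << 'EOF'
AMBERHOME=/share/apps/amber14
if ! echo ${PATH} | /bin/grep -q $AMBERHOME/bin ; then
    PATH=$AMBERHOME/bin:${PATH}
    LD_LIBRARY_PATH=$AMBERHOME/lib:${LD_LIBRARY_PATH}
fi
export AMBERHOME PATH LD_LIBRARY_PATH
EOF
. "$TMPPROFILE"
echo "$AMBERHOME"
```

The same duplicate-entry guard as in cuda.sh keeps repeated sourcing from stacking entries onto `PATH`.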