Setting up TensorFlow with GPU support can significantly accelerate your machine learning workflows. Follow this guide to configure TensorFlow for GPU usage on your system.
Prerequisites 📦
Before proceeding, ensure you have:
- A compatible NVIDIA GPU (e.g., GTX 10-series, Titan, or RTX 30-series)
- CUDA Toolkit installed (in the version required by your TensorFlow release)
- cuDNN library installed (matching your CUDA version)
- NVIDIA Driver updated to the latest version
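Before installing anything new, it is worth confirming that the system already sees the GPU and which driver it is running. A minimal check, assuming the NVIDIA driver utilities are installed:
# Shows GPU model, driver version, and the highest CUDA version the driver supports
nvidia-smi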
Step-by-Step Installation 🔧
Install NVIDIA Drivers
Use your OS's package manager or download the driver from the NVIDIA website.
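On Ubuntu, for example, the driver can usually be installed straight from the repositories. This is an illustrative sketch; the specific package version (nvidia-driver-535 here) depends on your GPU and distribution release:
# Install the recommended proprietary driver automatically
sudo ubuntu-drivers autoinstall
# ...or pick a specific driver package, then reboot
sudo apt-get install nvidia-driver-535
sudo reboot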
Install CUDA Toolkit
Download and install the latest CUDA version from the NVIDIA CUDA Toolkit page.
- For Ubuntu:
sudo apt-get install cuda-toolkit
- For Windows: Use the installer from the official site
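After the toolkit is installed, you can confirm it is present and note the exact version, which you will need when setting the environment variables below:
# Print the installed CUDA compiler version (full path, since PATH is set in a later step)
/usr/local/cuda/bin/nvcc --version
# List installed toolkit directories, e.g. /usr/local/cuda-11.8
ls /usr/local | grep cuda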
Install cuDNN
Download cuDNN from NVIDIA cuDNN and extract the files.
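For the tar-archive download on Linux, installation usually amounts to copying the extracted headers and libraries into the CUDA directory. The archive name and CUDA path below are placeholders; substitute the versions you actually downloaded:
# Extract the downloaded archive (filename is illustrative)
tar -xvf cudnn-linux-x86_64-8.x.x.x_cuda11-archive.tar.xz
# Copy headers and libraries into the CUDA installation
sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda-11.x/include/
sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda-11.x/lib64/
sudo chmod a+r /usr/local/cuda-11.x/include/cudnn*.h /usr/local/cuda-11.x/lib64/libcudnn*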
Set Environment Variables
Add these paths to your .bashrc or system environment:
export PATH=/usr/local/cuda-11.x/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda-11.x/lib64:$LD_LIBRARY_PATH
Replace 11.x with your CUDA version.
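To apply the new variables in your current shell and confirm they were picked up, something like this works in bash:
# Reload the shell configuration and check the paths
source ~/.bashrc
echo $PATH
echo $LD_LIBRARY_PATH
nvcc --version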
Install TensorFlow with GPU Support
Use pip:
pip install tensorflow-gpu
Or via conda:
conda install -c conda-forge tensorflow-gpu
Note that for recent TensorFlow 2.x releases the separate tensorflow-gpu package is deprecated; the standard tensorflow package (or pip install tensorflow[and-cuda] on Linux) already includes GPU support.
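If you prefer to keep the installation isolated, a virtual environment works well; this is an optional sketch rather than part of the required steps:
# Create and activate an isolated environment, then install TensorFlow
python3 -m venv tf-gpu-env
source tf-gpu-env/bin/activate
pip install --upgrade pip
pip install tensorflow-gpu        # or: pip install "tensorflow[and-cuda]" on recent releases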
Verify GPU Compatibility 🧪
Run this code to check if TensorFlow detects your GPU:
import tensorflow as tf
print("GPU Available:", tf.config.list_physical_devices('GPU'))
If it prints a non-empty list of GPU devices, e.g. [PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')], your setup is successful!
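To go one step further and confirm that computations are actually placed on the GPU, the short snippet below enables device placement logging and runs a small matrix multiplication; the exact log lines vary by system, but the result tensor should report a GPU device:
import tensorflow as tf

# Log which device each operation runs on
tf.debugging.set_log_device_placement(True)

# A small matrix multiplication; with a working setup this lands on GPU:0
a = tf.random.normal((1000, 1000))
b = tf.random.normal((1000, 1000))
c = tf.matmul(a, b)
print("Result device:", c.device)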
Further Reading 📚
For detailed steps on TensorFlow GPU configuration, visit our guide section.
Need help with specific steps? Explore our TensorFlow documentation for more insights.