Run AI models in CUDA-enabled WSL2: set up WSL2 with CUDA for LLM and Stable Diffusion models on Windows without sacrificing performance

Sometimes you need to run a model in a Linux environment, but your most powerful GPU is in a Windows machine, and you don't want to replace the entire system with Linux (and then there are the games).

WSL2 is a great choice that bridges the gap between Windows and Linux. However, when I wrote this article, there were very few posts or discussions on how to get CUDA and PyTorch working in WSL2 to run LLMs and Stable Diffusion. After some hard work and testing, I found a way to run all of these large AI models in WSL2 with CUDA and PyTorch enabled. The most important part: performance is not sacrificed.

Below are the steps I took to set up WSL2 with CUDA enabled.

Prerequisites

You must already have the NVIDIA CUDA driver and all necessary components installed in Windows, and be able to run AI models with CUDA there without problems.

The reason behind this prerequisite is that WSL2 will use Windows' CUDA driver.
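
A quick way to confirm this prerequisite is met, assuming PyTorch is already installed in your Windows Python environment (run both commands in a Windows terminal):

nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"

If the driver is working, nvidia-smi lists your GPU and the PyTorch one-liner prints True.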

Step 1. Enable the Windows Subsystem for Linux

In a Windows terminal:

dism.exe /online /enable-feature /featurename:Microsoft-Windows-Subsystem-Linux /all /norestart

or

Go to Control Panel -> Programs -> Turn Windows features on or off, then check "Windows Subsystem for Linux".


Step 2. Install WSL2

In a Windows terminal:
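
The exact commands can vary, but a minimal sketch of the standard Microsoft flow looks like this (the VirtualMachinePlatform feature is also required for WSL2; reboot if prompted after enabling it):

dism.exe /online /enable-feature /featurename:VirtualMachinePlatform /all /norestart
wsl --set-default-version 2
wsl --install -d Ubuntu

The last command installs the Ubuntu distribution; any distribution listed by wsl --list --online works the same way. Once the distribution starts, running nvidia-smi inside WSL2 should show the same GPU as on the Windows side, confirming that the Windows CUDA driver is being passed through.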


Origin blog.csdn.net/iCloudEnd/article/details/131729220