The Ultimate Guide: Installing Ollama on Fedora 43

Source: DEV Community
Running large language models (LLMs) locally isn’t just for the privacy-obsessed anymore—it’s for anyone who wants a snappy, custom coding assistant without a monthly subscription. If you’re rocking Fedora 43, you’re already using one of the most cutting-edge distros out there. Here’s how to get Ollama up and running with full NVIDIA acceleration and hook it into VS Code for a seamless dev experience. There’s something uniquely satisfying about seeing your GPU fans spin up because your local AI is thinking. Let’s get you there in eight steps.

Step 1: Open the Gates (RPM Fusion)

Fedora is known for its commitment to free, open-source software, which means the proprietary NVIDIA drivers aren't there by default. We need to add the RPM Fusion repositories to get the "non-free" goodies. Run this in your terminal:

$ sudo dnf5 install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf5 install https://mirrors.rpmfusion.org/nonfree/fedora/
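Once both installs finish, it's worth a quick sanity check that the new repositories are actually enabled before moving on. A minimal check, assuming dnf5 on Fedora 43 (`repolist` is a standard dnf subcommand):

```shell
# List enabled repositories and filter for the RPM Fusion entries;
# you should see rpmfusion-free and rpmfusion-nonfree in the output.
dnf5 repolist --enabled | grep -i rpmfusion
```

If grep prints nothing, the release packages didn't install cleanly and the NVIDIA driver step later will fail, so fix this first.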