Curious about AI and how to run it directly on your own machine?

It’s easier than you might think.

Unlocking Advanced AI Capabilities on Your Local Windows PC

If you want to experiment with cutting-edge AI without relying on cloud services, setting up Ollama and Deepseek R1 on Windows is a great place to start. Whether you’re a hobbyist, a machine learning enthusiast, or a developer exploring generative AI, this setup gives you the power to run models locally, keeping your data private while boosting performance.

I recently installed Ollama and Deepseek R1 on an older laptop and was surprised by how seamless the process was: the whole stack is lightweight, flexible, and painless to set up.

What Are Ollama and Deepseek R1?

Before diving into installation, here’s a quick overview of the tools you’ll be using:

  • Ollama – An open-source platform that makes it simple to run large language models (LLMs) locally. It supports models like Llama, Mistral, Vicuna, and more. With Ollama, you can leverage your own hardware for privacy, control, and speed.
  • Deepseek R1 – A state-of-the-art, open-source language model designed for versatility. From coding and natural language processing to creative writing and research, Deepseek R1 provides accurate, high-quality outputs and can run efficiently using Ollama.

By combining Ollama and Deepseek R1, you can run advanced AI models directly on your Windows PC with no expensive GPUs or cloud subscriptions required.

Prerequisites for Installation

Before starting, confirm that your system meets the following requirements. (For reference, I successfully ran Ollama + Deepseek R1 on a laptop with 8GB RAM, an older Intel i7 CPU, and no dedicated GPU.)

  • Operating System: Windows 10 or later (64-bit recommended)
  • Hardware: Minimum 8GB RAM (16GB+ preferred), modern Intel or AMD CPU, and sufficient disk space (LLMs may require 10GB+)
  • GPU (Optional): A dedicated NVIDIA GPU can significantly improve performance, though Ollama can run on CPU-only systems
  • Internet Connection: Required to download installers and model weights
  • Administrator Rights: May be necessary to run installers

Step 1: Install Ollama on Windows

Ollama now provides a native Windows installer. Here’s the recommended installation path:

Native Windows Installer

  1. Visit the official Ollama site and check for the Windows installer.
  2. Download and run the installer directly: Ollama Windows Setup.
  3. Complete the setup wizard.
  4. Open Command Prompt and verify the installation with:

ollama --version

  5. If you see version details, Ollama is installed successfully.
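
If you prefer to script this verification, here is a minimal Python sketch (standard library only) that checks whether the ollama binary is on your PATH and, if so, prints its version. It is an illustrative check, not part of the official install flow:

```python
import shutil
import subprocess

# Look up the ollama executable on PATH (works on Windows and elsewhere)
path = shutil.which("ollama")

if path:
    # Print the installed version string reported by the CLI
    result = subprocess.run(["ollama", "--version"], capture_output=True, text=True)
    print(result.stdout.strip())
else:
    print("ollama not found on PATH -- re-run the installer or restart your terminal")
```

If the binary is missing, restarting your terminal (so PATH changes take effect) often resolves it.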

Step 2: Install Deepseek R1 Model in Ollama

Now that Ollama is running, you can pull the Deepseek R1 model and start experimenting.

  1. Open Command Prompt (or Terminal in Windows 11).
  2. Pull the Deepseek R1 model. For example:

ollama pull deepseek-r1:1.5b 

(This command installs the smaller 1.5 billion parameter version.)

Explore more available models here: Deepseek-R1 on GitHub.

  3. Once downloaded, run the model interactively:

ollama run deepseek-r1:1.5b

  4. You’ll enter an interactive prompt where you can type questions, generate code, or test creative tasks.
  5. When finished, exit with: 

/bye 
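
Beyond the interactive prompt, ollama run also accepts a one-shot prompt as a final argument, which makes it easy to script. A minimal Python sketch of that pattern (the model tag and prompt text are just examples):

```python
import shutil
import subprocess

MODEL = "deepseek-r1:1.5b"  # example model tag from the step above
PROMPT = "Summarize what a large language model is in one sentence."

# "ollama run <model> <prompt>" prints the reply and exits (no interactive session)
cmd = ["ollama", "run", MODEL, PROMPT]

if shutil.which("ollama"):
    result = subprocess.run(cmd, capture_output=True, text=True)
    print(result.stdout.strip())
else:
    print("ollama not found on PATH -- complete Step 1 first")
```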

Tips for Effective Use

  • Stay Updated – Check GitHub and official sites for the latest features and fixes.
  • Explore More Models – Ollama supports Llama 3, Mistral, Vicuna, and other open-source LLMs.
  • Prompt Engineering – Experiment with prompts for better results.
  • Resource Management – Test different model sizes to balance performance and resource usage.
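
For prompt-engineering experiments, it often helps to drive the model from code instead of the terminal. Ollama exposes a local REST API (by default at http://localhost:11434); the sketch below uses only the Python standard library to call its /api/generate endpoint. The model tag and prompt here are illustrative:

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False requests one complete JSON response instead of a chunk stream
    return {"model": model, "prompt": prompt, "stream": False}

def generate(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires the Ollama server running locally):
#   print(generate("deepseek-r1:1.5b", "Explain prompt engineering in two sentences."))
```

Because everything runs against localhost, your prompts and outputs never leave your machine.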

Use Cases for Ollama + Deepseek R1 on Windows

Running AI locally unlocks a wide range of practical applications:

  • Private Chatbots – Build secure, privacy-focused AI assistants.
  • Code Generation – Write and debug code locally without uploading sensitive data.
  • Creative Writing – Generate stories, scripts, or brainstorming ideas.
  • Data Analysis – Query data in natural language for quick insights.
  • AI Research – Test and prototype using open-source models.
  • Interactive Fun – Experiment with AI responses in real time.
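
As a sketch of the private-chatbot idea, the snippet below keeps a running message history and sends it to Ollama’s local /api/chat endpoint (assumed at its default http://localhost:11434). The model tag is an example:

```python
import json
import urllib.request

CHAT_URL = "http://localhost:11434/api/chat"  # Ollama's default chat endpoint

def chat_once(history: list, user_text: str, model: str = "deepseek-r1:1.5b") -> str:
    """Append the user's turn, send the full history, and record the reply."""
    history.append({"role": "user", "content": user_text})
    payload = {"model": model, "messages": history, "stream": False}
    req = urllib.request.Request(
        CHAT_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        reply = json.loads(resp.read())["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

# Example loop (requires the Ollama server running locally):
#   history = []
#   while (text := input("> ")) != "/bye":
#       print(chat_once(history, text))
```

Keeping the history list on your side of the call is what gives the bot memory across turns, and none of it leaves your PC.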

Final Thoughts

Running Ollama + Deepseek R1 on Windows puts advanced AI directly in your hands. You’ll gain privacy, flexibility, and full control, all without cloud fees or hardware lock-in. Whether you’re building AI-driven apps, experimenting with research, or simply curious about what AI can do, this setup makes it easy to get started.

Happy experimenting!

Ready to Explore AI for Your Business?

At RBA, we help organizations unlock the full potential of AI, whether that’s integrating open-source models like Deepseek R1, developing custom AI solutions, or optimizing existing workflows. If you’re ready to explore how AI can accelerate innovation in your business, reach out to us today and let’s build what’s next together.

About the Author

Alan Leppala

Cloud Infrastructure Engineer

Alan has over 15 years of IT infrastructure experience across several different areas. Multiple years as an IT instructor led to work as a systems administrator. His areas of expertise include Active Directory design and management, Azure IaaS, and Office 365 administration, along with automation scripting and designing solutions for customers based on their unique needs. He has recently been expanding his skills into automation solutions that incorporate AI, Python scripting, and Azure Function App design.