
My Failed Journey: Running Ollama on Synology NAS

24 March, 2025 - Rijswijk, Netherlands

It was a rainy evening in Rijswijk when I decided to embark on what I thought would be a straightforward project: setting up Ollama on my Synology DS920+ NAS. Little did I know that it would turn into a lesson about hardware limitations and the importance of checking system requirements.

The Setup

My Synology DS920+ is a modest machine:

  • 20 GB RAM
  • SSD storage
  • No dedicated GPU
  • Running Docker

I had this grand plan to run local LLMs (Large Language Models) on my NAS, thinking it would be perfect for personal AI experiments. The setup seemed simple enough:

  1. Create a docker-compose.yml file
  2. Set up reverse proxy
  3. Install some models
  4. Start chatting with AI

The Configuration

First, I created my docker-compose.yml:

services:
  ollama:
    image: ollama/ollama:latest
    ports:
      - '11434:11434' # Ollama API, also reachable inside the compose network as http://ollama:11434
    volumes:
      - ollama_data:/root/.ollama # keep downloaded models across container restarts
    environment:
      - OLLAMA_HOST=0.0.0.0
    restart: always

  ollama-webui:
    image: ghcr.io/open-webui/open-webui:main
    depends_on:
      - ollama
    ports:
      - '8080:8080' # publish the WebUI so the Synology reverse proxy can forward to it
    environment:
      - OLLAMA_BASE_URLS=http://ollama:11434
      - ENV=dev
      - WEBUI_AUTH=False
      - WEBUI_NAME=Ollama WebUI
      - WEBUI_URL=http://0.0.0.0:8080
      - WEBUI_SECRET_KEY=1aad5e50-d3ee-48ba-ab77-78ea51bba52b
    extra_hosts:
      - 'host.docker.internal:host-gateway'
    restart: unless-stopped

volumes:
  ollama_data:
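
Bringing the stack up is then just a couple of commands over SSH. This is a rough sketch: depending on the Docker package on DSM the command may be docker compose rather than docker-compose, and it has to be run from whichever folder holds the file.

# start Ollama and the WebUI in the background
sudo docker-compose up -d

# follow Ollama's logs to confirm it came up
sudo docker-compose logs -f ollama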

Then, I set up the reverse proxy in Synology:

  • Control Panel > Login Portal > Advanced > Reverse Proxy
  • Created a new rule for oi.harianto.myds.me
  • Configured HTTPS to HTTP forwarding
  • Enabled HSTS for security
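
For reference, the rule boils down to a simple mapping (the hostname is mine; the destination port assumes the WebUI is published on 8080, as in the compose file above):

Source:       HTTPS   oi.harianto.myds.me   port 443
Destination:  HTTP    localhost             port 8080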

The Models

I attempted to install several models:

  • deepseek-r1:1.5b
  • deepseek-r1:7b
  • llama3.3
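
Pulling a model goes through the Ollama container itself. A minimal sketch, assuming Compose named the container something like ollama-ollama-1 (the name depends on the project folder, so check with docker ps first):

# find the actual container name
sudo docker ps --format '{{.Names}}'

# pull a model inside the Ollama container
sudo docker exec -it ollama-ollama-1 ollama pull deepseek-r1:1.5b

# see what is installed and how large it is
sudo docker exec -it ollama-ollama-1 ollama list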

The Reality Check

What followed was a series of disappointments:

  1. Memory Issues: My 20 GB of RAM was nowhere near enough. The larger models need significantly more memory than that to run at all (rough numbers in the sketch after this list).

  2. Performance Problems: Without a GPU, the CPU struggled with the computations. Responses were painfully slow, when they came back at all.

  3. System Instability: The NAS would become unresponsive, forcing me to restart the containers multiple times.

  4. Failed Attempts: Despite my best efforts to optimize the configuration, the system simply couldn’t handle the load.
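
The rough numbers tell the story. Ollama's own guidance is roughly 8 GB of RAM for a 7B model, and llama3.3 only comes as a 70B model whose quantized weights alone are on the order of 40 GB, so it was never going to fit on this machine. Watching the containers while a model loads makes the pressure visible; for example, with standard Docker and Linux tools:

# per-container CPU and memory usage, one snapshot
sudo docker stats --no-stream

# overall RAM and swap on the NAS, in megabytes
free -m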

The Lessons Learned

This experience taught me several valuable lessons:

  1. Hardware Matters: While Docker makes it easy to run services, the underlying hardware still needs to meet minimum requirements.

  2. Research Before Implementation: I should have checked the system requirements for Ollama and the models I wanted to run.

  3. Start Small: Instead of trying to run multiple large models, I should have started with a smaller model to test the system’s capabilities (see the example after this list).

  4. Consider Alternatives: For running LLMs locally, I might need to consider:

    • Using a dedicated machine with better specs
    • Cloud-based solutions
    • Smaller, more efficient models
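
As an example of starting small, there are models well below the 7B class that a CPU-only box can at least load. Something like this would have been a saner first test (tinyllama is about 1B parameters; the container name caveat from earlier still applies):

# a ~1B-parameter model is a much gentler first test on CPU-only hardware
sudo docker exec -it ollama-ollama-1 ollama run tinyllama "Say hello in one sentence."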

Moving Forward

While this attempt was unsuccessful, it hasn’t deterred me from exploring local AI solutions. I’m now considering:

  1. Setting up a dedicated machine for running LLMs
    • I’m particularly excited about the upcoming Lenovo Legion Go 2, which promises to be a powerful handheld device capable of running local LLMs
    • While it’s not released yet, its expected specifications make it an ideal candidate for my AI experiments
  2. Exploring cloud-based alternatives
  3. Looking into more lightweight AI solutions
  4. Researching hardware upgrades for my NAS

Conclusion

Sometimes, failure is the best teacher. This experience reminded me that while technology is powerful, it’s not magic. Hardware limitations are real, and understanding system requirements is crucial before embarking on such projects.

For those considering running Ollama on a Synology NAS, I recommend:

  1. Check the hardware requirements thoroughly
  2. Start with smaller models
  3. Monitor system resources carefully
  4. Have a backup plan if the setup doesn’t work

To be continued...


Note: This post serves as a reminder that not all experiments succeed, but each failure brings valuable lessons.