AMD Powered OS v2.1.0
🇺🇸 Designed & assembled in Denver, Colorado

RyzenForce
Documentation


What is RyzenForce?

RyzenForce is a Denver-based project building AMD-powered AI workstations and servers purpose-built for running large AI models locally — no cloud, no subscriptions, no data leaving your machine. Each system ships with a custom dual-boot environment: Windows 11 Pro for broad software compatibility, and RyzenForce Linux, a bespoke open-source distro with an AI-tuned kernel, pre-installed ROCm stack, and a one-click app marketplace packed with 100+ local AI tools.

Whether you're a solo developer running Llama 3 on a desktop rig or an enterprise team deploying a 3-GPU inference cluster, RyzenForce scales from Tier 01 through Tier 03 with full ROCm and PyTorch support out of the box.

New to RyzenForce?

Follow the quickstart guide to register your machine, boot into RyzenForce Linux, and run your first local LLM via Ollama — takes about 15 minutes from power-on.

Get started  →  Register your machine →

Hardware tiers

All three tiers run the complete RyzenForce software stack. Pick the one that matches your workload — you can upgrade or configure any tier before ordering.

All tiers ship pre-configured with Ollama, LM Studio, Docker, PyTorch, TensorFlow, and the full ROCm stack. No post-install setup required to run your first model.

Core components

RyzenForce is built around three tightly integrated layers that work together across both operating systems.

Your OS, your choice

Every RyzenForce system is a dual-boot machine. Switch between Windows 11 Pro and RyzenForce Linux at any time — both share the same NVMe storage pool and Docker volumes so your data stays consistent.

⊞ Windows 11 Pro

Maximum Compatibility

Full AMD RDNA 3 driver support, DirectX 12 Ultimate, WSL 2 for Linux workflows, and compatibility with the full Adobe and Autodesk creative suite.

  • DirectX 12 Ultimate + VulkanRT
  • WSL 2 for Linux AI workflows
  • CUDA via HIP translation layer
  • Adobe & Autodesk Suite Ready
  • Enterprise AD / Azure AD join
🔥 RyzenForce Linux

Maximum Performance

Custom open-source distro with a Threadripper-optimized kernel, minimal overhead, and a real-time inference engine. Your data never leaves the box.

  • Kernel tuned for Threadripper & EPYC
  • Pre-installed PyTorch & TensorFlow
  • Real-time Inference Engine (RIE)
  • ROCm 6.x + HIP stack built-in
  • Complete privacy & local control

Pre-loaded & marketplace apps

Every RyzenForce machine ships with a core set of AI tools pre-installed, and the RyzenForce Marketplace gives you instant access to 100+ additional verified apps across every category.

All marketplace apps are open source, audited, and cryptographically signed. Third-party submissions go through a security review before listing. Never install apps from outside the marketplace unless you fully trust the source.

Highlighted features

🔒

Privacy-first by design

100% local execution. No telemetry, no cloud API calls, no data leaving your machine. Your models, your data, your rules.

🎮

24–144GB VRAM pool

From a single 7900 XTX to three stacked W7900s — run 70B+ parameter models entirely in VRAM with zero CPU offloading required.

One-click deployment

Every app in the Marketplace launches in seconds via pre-configured Docker containers. No CLI required for everyday users.

🔄

OpenAI-compatible API

LocalAI and Ollama provide drop-in OpenAI API endpoints — point any existing app or script at your local machine, no code changes needed.
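Because the endpoint mimics OpenAI's chat completions API, any HTTP client will do. A minimal sketch in Python, assuming Ollama is serving on its default port 11434 and that a model such as `llama3` has already been pulled (the model name is an assumption — substitute whatever you run locally):

```python
import json
import urllib.request

# Ollama exposes an OpenAI-compatible endpoint at /v1/chat/completions
# on its default port 11434. The model name is an assumption -- use
# whatever you've pulled locally, e.g. via `ollama pull llama3`.
BASE_URL = "http://localhost:11434/v1"

def build_chat_request(prompt: str, model: str = "llama3") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for a local Ollama server."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def chat(prompt: str, model: str = "llama3") -> str:
    """Send the request and return the reply (requires Ollama to be running)."""
    with urllib.request.urlopen(build_chat_request(prompt, model)) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Existing apps built on an OpenAI SDK can usually skip even this: point the client's base URL at `http://localhost:11434/v1` and leave the rest of the code unchanged.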

🛡️

Enterprise-grade security

WireGuard mesh VPN, hardware-backed keystores via Vaultwarden, and zero-trust remote access through EdgePass.

📡

10–100GbE networking

Professional and Enterprise tiers include high-bandwidth networking for low-latency multi-node cluster and NAS connectivity.

🧩

Modular & upgradeable

WRX90 and SP5 platforms support future GPU and memory expansion. Add capacity as your workloads grow.

🛠️

Developer-ready stack

Python, Node.js, VS Code, PyTorch, TensorFlow, and ROCm are pre-installed on RyzenForce Linux. Start coding immediately.

Pick your path

Not sure where to start? Choose the entry point that matches where you are today.

OpenClaw automation on RyzenForce

OpenClaw is the open-source autonomous AI agent taking the world by storm — and RyzenForce hardware is its ideal host. Created by Peter Steinberger, OpenClaw turns your machine into a tireless digital operator: it reads emails, browses the web, manages files, executes scripts, and fires off actions across WhatsApp, Telegram, Discord, Slack, and more — all through plain-English commands. Running it on RyzenForce means your agent never touches a cloud server.

  • 310K+ GitHub Stars
  • 58K+ Forks
  • 1,200+ Contributors
  • 100+ AgentSkills
  • 50+ Integrations
  • MIT License · Free
AMD Official: RyzenClaw Configuration

AMD published an official Best Known Configuration (BKC) for running OpenClaw on AMD hardware via WSL2 — enabling fully local LLM provisioning with Memory.md support, browser automation, and multi-agent workflows. RyzenForce systems ship pre-configured to this spec.

Read the AMD OpenClaw guide →
🦞
OpenClaw v3.12

Why RyzenForce is the ultimate OpenClaw machine

🎮

24–144GB VRAM — run 70B+ agents locally

OpenClaw is model-agnostic. Point it at a local Ollama instance running Llama 3 70B or Qwen3.5 35B entirely in VRAM — no cloud API costs, no token limits, no data egress. The Tier 02 build's 96GB of pooled VRAM runs the largest open-weight models while holding up to 190,000 tokens of context in memory.

🔒

100% private — your agent, your data

OpenClaw accesses your email, calendar, files, and browser. On cloud hardware that's a serious security concern. On a RyzenForce machine it never leaves your LAN. Zero telemetry. No third-party servers touching your context. Pair it with Vaultwarden to keep your agent's API keys in hardware-backed credential storage.

Always-on, always fast

RyzenForce machines are purpose-built to run 24/7. OpenClaw shines as a background daemon — scheduling tasks at midnight, monitoring your inbox, triggering n8n workflows, and responding to Discord messages while you sleep. The 1000W PSU and AIO liquid cooling keep thermals stable under continuous multi-agent load.

🧩

n8n + OpenClaw = automation superstack

RyzenForce ships n8n in the Marketplace. Wire OpenClaw's webhook triggers directly into n8n visual workflows — OpenClaw handles the AI reasoning and natural language layer, n8n handles the structured automation graph. The result is an automation stack that rivals enterprise SaaS at zero recurring cost.
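One way to wire the two together is to have the agent post structured events into an n8n Webhook trigger node. A minimal sketch, assuming n8n's default port 5678; the `openclaw-events` webhook path is hypothetical — it matches whatever path you configure on the Webhook node:

```python
import json
import urllib.request

# n8n listens on port 5678 by default; "openclaw-events" is a hypothetical
# webhook path you would set on a Webhook trigger node in your workflow.
N8N_WEBHOOK = "http://localhost:5678/webhook/openclaw-events"

def build_event(task: str, status: str, detail: str = "") -> urllib.request.Request:
    """Package an agent event as JSON for an n8n Webhook trigger node."""
    payload = {"task": task, "status": status, "detail": detail}
    return urllib.request.Request(
        N8N_WEBHOOK,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def notify_n8n(task: str, status: str, detail: str = "") -> int:
    """Fire the event into n8n and return the HTTP status (requires n8n running)."""
    with urllib.request.urlopen(build_event(task, status, detail)) as resp:
        return resp.status
```

From there the n8n side stays purely visual: the Webhook node receives the JSON and downstream nodes branch on `status`, no extra glue code required.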

Quick install on RyzenForce Linux

OpenClaw requires Node.js 22+, which is pre-installed on RyzenForce Linux. Run these three commands from your terminal and you'll be up and running in minutes:

```bash
# Install OpenClaw globally via npm
npm install -g openclaw@latest

# Run first-time onboarding (sets up daemon + gateway)
openclaw onboard --install-daemon

# Configure your local Ollama endpoint as the model provider
openclaw configure --model ollama --endpoint http://localhost:11434
```

For Tier 01 users: set Ollama's GPU offload to MAX and context to 65536. Tier 02 users can push context to 190000 with both W7900 VRAM pools unified. Tier 03 enterprise cluster users should follow the ROCm multi-GPU guide for distributed inference across all three W7900 cards.
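The context sizes above can be pinned per model with an Ollama Modelfile, whose `PARAMETER num_ctx` directive sets the context window. A minimal sketch that generates one from the tier guidance; the `llama3:70b` base model is an assumption — substitute whichever model you've pulled:

```python
from pathlib import Path

# Context windows from the tier guidance above (tokens).
TIER_CONTEXT = {"tier01": 65536, "tier02": 190000}

def write_modelfile(base_model: str, num_ctx: int, path: str = "Modelfile") -> str:
    """Write a Modelfile pinning the context window for a base model.

    Load it afterwards with: ollama create my-model -f Modelfile
    The base model name is an assumption -- use whatever you've pulled.
    """
    text = f"FROM {base_model}\nPARAMETER num_ctx {num_ctx}\n"
    Path(path).write_text(text)
    return text
```

After `ollama create`, the new model tag always starts with the pinned context, so you don't have to pass the setting on every request.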

What OpenClaw automates on your RyzenForce machine

  • 📬 Email & calendar: Reads, drafts, and sends emails. Manages calendar events, meetings, and reminders — entirely from a Telegram or Discord DM.
  • 🌐 Browser automation: Navigates websites, fills out forms, scrapes data, and extracts structured content from any URL on your behalf — no Selenium config needed.
  • 🗂️ File & system ops: Reads and writes local files, executes shell scripts, runs cron jobs, and manages your file system in full-access or sandboxed mode.
  • 🐙 GitHub & DevOps: Automates debugging, opens issues, manages PRs, and triggers webhooks — keeping your projects moving while you focus elsewhere.
  • 🧠 Persistent memory: Stores long-term context in local Memory.md files. Your agent remembers your preferences, projects, and patterns across every session.
  • 🔧 Self-extending skills: Can write its own new AgentSkills in response to your requests — autonomously extending its own capabilities without you touching code.
  • 💬 Multi-channel messaging: Responds and acts via WhatsApp, Telegram, Discord, Slack, Signal, and iMessage — commands sent as messages, responses as completed tasks.
  • 🏠 Smart home & IoT: Bridges your Nextcloud, Home Assistant, and IoT devices — let your agent control automations based on calendar, location, or natural language.

What the community is saying

"I wanted to automate some tasks from Todoist and claw was able to create a skill for it on its own, all within a Telegram chat."

@iamsubhrajyoti

"30 mins later: controlling Gmail, Calendar, WordPress, Hetzner from Telegram like a boss. Smooth as single malt."

@Abhay08

"Essentially — you can automate almost anything you can do on the machine it sits on."

@aus_bytes

"After years of AI hype, I thought nothing could faze me. Then I installed OpenClaw… AI as teammate, not tool."

@lycfyi

Other resources

// Last updated: March 14, 2026 · RyzenForce · Denver, Colorado 🇺🇸