RyzenForce
Documentation
What is RyzenForce?#
RyzenForce is a Denver-based project building AMD-powered AI workstations and servers purpose-built for running large AI models locally — no cloud, no subscriptions, no data leaving your machine. Each system ships with a custom dual-boot environment: Windows 11 Pro for broad software compatibility, and RyzenForce Linux, a bespoke open-source distro with an AI-tuned kernel, pre-installed ROCm stack, and a one-click app marketplace packed with 100+ local AI tools.
Whether you're a solo developer running Llama 3 on a desktop rig or an enterprise team deploying a 3-GPU inference cluster, RyzenForce scales from Tier 01 through Tier 03 with full ROCm and PyTorch support out of the box.
New to RyzenForce?
Follow the quickstart guide to register your machine, boot into RyzenForce Linux, and run your first local LLM via Ollama — takes about 15 minutes from power-on.
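Under the hood, that last step boils down to the pre-installed Ollama runtime. A minimal sketch, assuming you pull the stock llama3 tag (any model from the Ollama library works the same way):
# Download a model and chat with it locally on the pre-installed Ollama runtime
ollama pull llama3
ollama run llama3 "Give me a two-line status check."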
Get started → Register your machine →
Hardware tiers#
All three tiers run the complete RyzenForce software stack. Pick the one that matches your workload — you can upgrade or configure any tier before ordering.
Tier 01
- CPU AMD Ryzen 9 9950X · 16C/32T · Zen 5
- GPU Radeon RX 7900 XTX · 24GB GDDR6
- RAM 64GB DDR5-6000 · Dual Channel
- Storage 2TB NVMe Gen 5
Tier 02
- CPU Threadripper PRO 7965WX · 24C/48T
- GPU 2× Radeon PRO W7900 · 96GB VRAM
- RAM 192GB DDR5 ECC · Quad-Channel
- Storage 8TB NVMe Gen 5
Tier 03
- CPU AMD EPYC 9354 · 32C/64T · Genoa
- GPU 3× Radeon PRO W7900 · 144GB VRAM
- RAM 384GB DDR5 ECC RDIMM
- Storage 16TB NVMe Gen 5 RAID
All tiers ship pre-configured with Ollama, LM Studio, Docker, PyTorch, TensorFlow, and the full ROCm stack. No post-install setup required to run your first model.
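If you want to double-check that out-of-the-box claim, a few version probes from the RyzenForce Linux terminal are enough; these are the standard ROCm, Docker, and Ollama CLIs, nothing RyzenForce-specific:
# Confirm the GPU, container runtime, and model runtime are all visible
rocm-smi --showproductname
docker --version
ollama --version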
Core components#
RyzenForce is built around three tightly integrated layers that work together across both operating systems:
RyzenForce Linux OS
A custom open-source distro based on a Threadripper-optimized kernel. Ships with a real-time AI inference engine, pre-configured ROCm/HIP stack, PyTorch, TensorFlow, and a minimal desktop environment tuned for low overhead. Designed to squeeze maximum AI throughput from AMD RDNA 3 and CDNA hardware.
RyzenForce Marketplace
A curated, verified app store with 100+ open-source AI tools — all signed and audited. Deploy Ollama, Stable Diffusion, n8n, Nextcloud, Gitea, Grafana, and dozens more with a single click. Automatic updates, version pinning, and sandboxed containers keep everything reproducible and secure.
EdgePass
A cross-platform secure management client for your RyzenForce machine. Provides zero-trust VPN tunneling via WireGuard, remote desktop access, encrypted credential storage, and unified device management from your phone, tablet, or secondary laptop.
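EdgePass provisions and rotates these tunnels for you, but the transport underneath is plain WireGuard. For reference only, a generic WireGuard client setup on a secondary laptop looks roughly like this; every key, address, and endpoint below is a placeholder, not an EdgePass default:
# Illustrative WireGuard client setup (placeholders only; EdgePass generates real configs for you)
sudo tee /etc/wireguard/rf0.conf >/dev/null <<'EOF'
[Interface]
PrivateKey = <client-private-key>
Address = 10.80.0.2/32

[Peer]
PublicKey = <ryzenforce-machine-public-key>
AllowedIPs = 10.80.0.0/24
Endpoint = my-ryzenforce.example.com:51820
PersistentKeepalive = 25
EOF
sudo wg-quick up rf0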
Your OS, your choice#
Every RyzenForce system is a dual-boot machine. Switch between Windows 11 Pro and RyzenForce Linux at any time — both share the same NVMe storage pool and Docker volumes so your data stays consistent.
Windows 11 Pro · Maximum Compatibility
Full AMD RDNA 3 driver support, DirectX 12 Ultimate, WSL 2 for Linux workflows, and compatibility with the full Adobe and Autodesk creative suite.
- DirectX 12 Ultimate + VulkanRT
- WSL 2 for Linux AI workflows
- CUDA via HIP translation layer
- Adobe & Autodesk Suite Ready
- Enterprise AD / Azure AD join
RyzenForce Linux · Maximum Performance
Custom open-source distro with a Threadripper-optimized kernel, minimal overhead, and a real-time inference engine. Your data never leaves the box.
- Kernel tuned for Threadripper & EPYC
- Pre-installed PyTorch & TensorFlow
- Real-time Inference Engine (RIE)
- ROCm 6.x + HIP stack built-in
- Complete privacy & local control
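A quick way to confirm that the pre-installed PyTorch build is actually talking to the GPU through ROCm/HIP (this assumes the stock Python environment on RyzenForce Linux):
# Prints the HIP version PyTorch was built against and whether a GPU is visible
python3 -c "import torch; print(torch.version.hip, torch.cuda.is_available())"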
Pre-loaded & marketplace apps#
Every RyzenForce machine ships with a core set of AI tools pre-installed, and the RyzenForce Marketplace gives you instant access to 100+ additional verified apps across every category.
All marketplace apps are open source, audited, and cryptographically signed. Third-party submissions go through a security review before listing. Never install apps from outside the marketplace unless you fully trust the source.
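For the curious, a one-click deploy corresponds roughly to a plain Docker run with the GPU devices passed through. The Marketplace manages the exact flags, volumes, and updates for you; the sketch below uses the public ollama/ollama:rocm image as an example and may differ from what the Marketplace actually runs:
# Roughly what a one-click Ollama deploy does under the hood (exact Marketplace options may differ)
docker run -d --name ollama \
  --device /dev/kfd --device /dev/dri \
  -v ollama:/root/.ollama -p 11434:11434 \
  ollama/ollama:rocm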
Highlighted features#
Privacy-first by design
100% local execution. No telemetry, no cloud API calls, no data leaving your machine. Your models, your data, your rules.
24–144GB VRAM pool
From a single 7900 XTX to three stacked W7900s — run 70B+ parameter models entirely in VRAM with zero CPU offloading required.
One-click deployment
Every app in the Marketplace launches in seconds via pre-configured Docker containers. No CLI required for everyday users.
OpenAI-compatible API
LocalAI and Ollama provide drop-in OpenAI API endpoints — point any existing app or script at your local machine, no code changes needed.
Enterprise-grade security
WireGuard mesh VPN, hardware-backed keystores via Vaultwarden, and zero-trust remote access through EdgePass.
10–100GbE networking
Professional and Enterprise tiers include high-bandwidth networking for low-latency multi-node cluster and NAS connectivity.
Modular & upgradeable
WRX90 and SP5 platforms support future GPU and memory expansion. Add capacity as your workloads grow.
Developer-ready stack
Python, Node.js, VS Code, PyTorch, TensorFlow, and ROCm are pre-installed on RyzenForce Linux. Start coding immediately.
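The OpenAI-compatible API highlighted above needs no RyzenForce-specific client. Against a local Ollama instance on its default port, anything that speaks the OpenAI chat format just works; llama3 below is an example model tag:
# Query the local OpenAI-compatible endpoint; no API key, no cloud round-trip
curl http://localhost:11434/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "llama3", "messages": [{"role": "user", "content": "Hello from my RyzenForce box"}]}'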
Pick your path#
Not sure where to start? Choose the entry point that matches where you are today.
Quick Setup
First boot to first model in 15 minutes. Covers registration, OS selection, and running Ollama. →
Run Local AI
Deploy LLMs, image gen, and speech models with step-by-step guides for every app in the marketplace. →
Developer Docs
Build custom AI pipelines, integrate the OpenAI-compatible API, and package your own marketplace apps. →
OpenClaw automation on RyzenForce#
OpenClaw is the open-source autonomous AI agent taking the world by storm — and RyzenForce hardware is its ideal host. Created by Peter Steinberger, OpenClaw turns your machine into a tireless digital operator: it reads emails, browses the web, manages files, executes scripts, and fires off actions across WhatsApp, Telegram, Discord, Slack, and more — all through plain-English commands. Running it on RyzenForce means your agent never touches a cloud server.
AMD published an official Best Known Configuration (BKC) for running OpenClaw on AMD hardware via WSL2 — enabling fully local LLM provisioning with Memory.md support, browser automation, and multi-agent workflows. RyzenForce systems ship pre-configured to this spec.
Why RyzenForce is the ultimate OpenClaw machine
24–144GB VRAM — run 70B+ agents locally
OpenClaw is model-agnostic. Point it at a local Ollama instance running Llama 3 70B or Qwen3.5 35B entirely in VRAM — no cloud API costs, no token limits, no data egress. The Tier 02 build's 96GB pooled VRAM runs the largest open-weight models with a 190000-token context held entirely in memory.
100% private — your agent, your data
OpenClaw accesses your email, calendar, files, and browser. On cloud hardware that's a serious security concern. On a RyzenForce machine it never leaves your LAN. Zero telemetry. No third-party servers touching your context. Pair it with Vaultwarden to keep your agent's API keys in hardware-backed credential storage.
Always-on, always fast
RyzenForce machines are purpose-built to run 24/7. OpenClaw shines as a background daemon — scheduling tasks at midnight, monitoring your inbox, triggering n8n workflows, and responding to Discord messages while you sleep. The 1000W PSU and AIO liquid cooling keep thermals stable under continuous multi-agent load.
n8n + OpenClaw = automation superstack
RyzenForce ships n8n in the Marketplace. Wire OpenClaw's webhook triggers directly into n8n visual workflows — OpenClaw handles the AI reasoning and natural language layer, n8n handles the structured automation graph. The result is an automation stack that rivals enterprise SaaS at zero recurring cost.
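A minimal sketch of that hand-off, assuming you have created an n8n Webhook node and n8n is on its default port 5678; the path openclaw-intake and the JSON payload are placeholders, not a fixed OpenClaw contract:
# OpenClaw (or any script it runs) can kick off an n8n workflow with a plain HTTP POST
curl -X POST http://localhost:5678/webhook/openclaw-intake \
  -H "Content-Type: application/json" \
  -d '{"task": "summarize-inbox", "priority": "high"}'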
Quick install on RyzenForce Linux
OpenClaw requires Node.js 22+, which is pre-installed on RyzenForce Linux. Run these three commands from your terminal to be up and running in minutes:
# Install OpenClaw globally via npm
npm install -g openclaw@latest
# Run first-time onboarding (sets up daemon + gateway)
openclaw onboard --install-daemon
# Configure your local Ollama endpoint as the model provider
openclaw configure --model ollama --endpoint http://localhost:11434
For Tier 01 users: set Ollama's GPU offload to MAX and context to 65536. Tier 02 users can push context to 190000 with both W7900 VRAM pools unified. Tier 03 enterprise cluster users should follow the ROCm multi-GPU guide for distributed inference across all three W7900 cards.
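One way to apply those context settings persistently is to bake num_ctx into a model variant with an Ollama Modelfile. Tier 01 example shown; swap the FROM line for whichever model you actually pulled:
# Create a 65,536-token-context variant and keep the original model untouched
cat > Modelfile <<'EOF'
FROM llama3
PARAMETER num_ctx 65536
EOF
ollama create llama3-64k -f Modelfile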
What OpenClaw automates on your RyzenForce machine
Memory.md files. Your agent remembers your preferences, projects, and patterns across every session.
What the community is saying
"I wanted to automate some tasks from Todoist and claw was able to create a skill for it on its own, all within a Telegram chat."
@iamsubhrajyoti"30 mins later: controlling Gmail, Calendar, WordPress, Hetzner from Telegram like a boss. Smooth as single malt."
@Abhay08"Essentially — you can automate almost anything you can do on the machine it sits on."
@aus_bytes"After years of AI hype, I thought nothing could faze me. Then I installed OpenClaw… AI as teammate, not tool."
@lycfyiOther resources#
- → Browse the RyzenForce Marketplace — 100+ local AI apps
- → Threadripper 7970X specs & deep-dive
- → Radeon 7900 XTX ROCm performance guide
- → Join the RyzenForce community on Discord
- → Read the RyzenForce engineering blog
- → Pre-order or back the project
// Last updated: March 14, 2026 · RyzenForce · Denver, Colorado 🇺🇸