Recent Posts
Running vLLM on AMD AI MAX+ 395 (ROCm, Ubuntu 24.04)
Finally, I managed to get vLLM running on my AMD AI MAX+ 395 GPU on Ubuntu 24.04.
It was not straightforward — ROCm support on Ryzen AI (gfx1151) is still evolving, and I ran into multiple low-level GPU faults before finding a stable setup.
This post documents:
- What didn’t work
- The errors I encountered
- The working configuration
Hopefully this saves you a few hours (or days).
read more
Can Claude Code Use GitHub Copilot as a Backend? A Practical Exploration
Introduction

Recently, I’ve been experimenting with a variety of LLM tooling ecosystems, including:
Claude Code
Codex via OpenRouter
Ollama
LM Studio
vLLM
LiteLLM
My goal is to better understand the underlying technologies and explore how to operate these tools in air-gapped or controlled environments.
In many enterprise settings, developers are allowed to use GitHub Copilot, but not Claude Code.
read more
Spec-Kit + Ralph Loop — A Practical Workflow for AI-Driven Development
I first came across the idea of the Ralph Loop around January, while following developments in LLMs and AI through multiple channels: video feeds, X, newsletters, GitHub repositories, news, and research papers.
That sparked a question:
What happens if we combine the Ralph Loop with spec-driven design to generate real, working applications?
This blog is a reflection of that exploration.
My Background with Spec-Kit

Since December last year, I have been using Spec-Kit in both:
read more