My most valuable skill as a hacker/entrepreneur is that I’m confident deploying arbitrary programs that work locally to the internet. It sounds simple, but it’s really the core of what got me into Y Combinator and later helped me raise a seed round. This post is about how I got there, plus a concrete tutorial to prove the point: building an LLM-over-DNS proxy on a bare VPS in under 30 minutes.
Being on the struggle bus early
When I was starting out hacking as a kid, one of the first complete things I built was a weather reply bot for Twitter. It read from the firehose API, monitored for mentions and city names, then replied with current weather conditions when it got @‘ed. My parents got me a Raspberry Pi for Christmas and I found a tutorial online. I got it working locally and then got completely stuck on deployment. The obvious next step was using my Pi as a server, but that was a disaster. My program had bugs and would crash while I was away. Then I couldn’t SSH back in because my house didn’t have a static IP and Tailscale wasn’t a thing yet. It only worked on and off, when I was home and could babysit it.
Skipping straight to PaaS hell
When I started building web applications, I somehow skipped the VPS entirely and went straight to Platform-as-a-Service solutions like Vercel and Render. I was googling “how do I deploy my create react app” and somehow the top answer was to deploy to some third-party service that handled build steps, managed SSL, and was incredibly complicated and time-consuming. There was always some weird limitation: memory constraints during builds, or Puppeteer failing because the right apt packages weren’t installed. Then I was stuck configuring Docker images, and since AI wasn’t a thing yet and I’d never used Docker at a real job, it was all a disaster. I wasted more time trying to deploy my crappy React app than building it.
Getting saved by a VPS maximalist
During college, I got lucky and met a hacky startup entrepreneur who was hiring. I decided to take a chance and join, even though the whole operation seemed barely legitimate. Going into the job, I assumed the “right” way to deploy was on AWS or some other hyperscaler. But this guy’s mindset was the complete opposite: he was a VPS maximalist with a beautifully simple philosophy. Rent a VPS, SSH in, do the same thing you did locally (yarn dev or whatever), throw up a reverse proxy, and call it a day. I watched him deploy like this over and over, and eventually he walked me through doing it myself a few times.
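Sketched as commands, the whole ritual looks something like this. Caddy is my stand-in for the reverse-proxy step (he may have used nginx or anything else), and the domain, port, and yarn dev app are placeholders:

```shell
# rent a VPS, then SSH in
ssh root@<your-vps-ip>

# run the app exactly as you did locally, kept alive in a tmux session
tmux new -d -s app 'yarn dev'

# throw up a reverse proxy with automatic HTTPS in front of it
caddy reverse-proxy --from yourdomain.com --to localhost:3000
```

That's the entire deployment. No build pipelines, no Dockerfiles, no platform-specific configuration.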
It was all so small and easy to learn, but it made me exponentially more confident as a builder. I never directly thought, “I can’t build this because I won’t be able to deploy it,” but the general insecurity definitely caused a hesitancy and procrastination that immediately went away.
Paying it forward
I’ve become an evangelist for this approach and have wanted to write about it for a long time, but didn’t know how to frame it entertainingly. Then I got my inspiration on X, when levelsio posted a tweet about deploying a DNS server on Hetzner that lets you talk to an LLM. Want to see it in action? Build it yourself:
Tutorial: LLM-over-DNS on a bare VPS
The architecture is simple: a Python DNS server listens on port 53, treats each incoming DNS query name as an LLM prompt, calls the OpenRouter API, and returns the response as a TXT record. The client is just dig.
You’ll need a VPS with a public IP address (Hetzner, DigitalOcean, Linode, or similar), an OpenRouter API key, and Python 3 on the server. Any Linux image works.
Access your VPS
After purchasing your VPS, you’ll receive an IP address and login credentials (usually via email). Connect to your server, replacing <your-vps-ip> with your actual server IP address:
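Assuming your provider set up a root login (most default images do; adjust the user if yours differs):

```shell
ssh root@<your-vps-ip>
```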
Clear existing DNS services
Many VPS images come with systemd-resolved or bind9 pre-installed. These will conflict with a DNS server running on port 53, so remove or disable them first:
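On a Debian/Ubuntu image (an assumption; adjust the package commands for your distro), that looks like:

```shell
# Stop and disable the stub resolver that binds port 53
sudo systemctl disable --now systemd-resolved

# /etc/resolv.conf may now be a dangling symlink; point the box at a
# public resolver so it can still do its own lookups
sudo rm -f /etc/resolv.conf
echo "nameserver 1.1.1.1" | sudo tee /etc/resolv.conf

# Remove bind9 if the image shipped with it
sudo apt-get remove -y bind9
```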
Create a file called llm_dns.py. The script listens for DNS queries, extracts the query name as a prompt, sends it to the OpenRouter API, and returns the response in one or more TXT records. Before running it, paste your OpenRouter API key into the OPENROUTER_API_KEY variable. For anything more serious than a demo, use an environment variable to keep your key out of the source code.
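Here is a minimal sketch using only the Python standard library, so there is nothing to pip-install on the fresh server. Treat the packet handling as illustrative rather than production-grade, and note the model name is an assumption; swap in any model your OpenRouter account can use:

```python
import json
import socket
import urllib.request

OPENROUTER_API_KEY = "paste-your-key-here"  # better: read from an env var
MODEL = "openai/gpt-4o-mini"  # assumption: any OpenRouter model id works


def parse_qname(query: bytes):
    """Extract the dotted query name from a raw DNS packet.

    Returns (name, offset) where offset points just past the name's
    terminating zero byte, i.e. at QTYPE."""
    i = 12  # the DNS header is 12 bytes
    labels = []
    while query[i] != 0:
        length = query[i]
        labels.append(query[i + 1:i + 1 + length].decode("ascii", "replace"))
        i += 1 + length
    return ".".join(labels), i + 1


def ask_llm(prompt: str) -> str:
    """Send the prompt to OpenRouter's chat completions endpoint."""
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps({
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
        }).encode(),
        headers={
            "Authorization": f"Bearer {OPENROUTER_API_KEY}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]


def build_response(query: bytes, qname_end: int, answer: str) -> bytes:
    """Build a DNS response echoing the question, with one TXT answer.

    A TXT character-string maxes out at 255 bytes, so long answers are
    split across several strings inside the same record."""
    raw = answer.encode()
    chunks = [raw[i:i + 255] for i in range(0, len(raw), 255)]
    rdata = b"".join(bytes([len(c)]) + c for c in chunks)
    header = (
        query[:2]              # copy the query ID
        + b"\x81\x80"          # flags: response, recursion desired + available
        + query[4:6]           # QDCOUNT copied from the query
        + b"\x00\x01"          # ANCOUNT: one answer
        + b"\x00\x00\x00\x00"  # NSCOUNT, ARCOUNT: zero
    )
    question = query[12:qname_end + 4]  # name + QTYPE + QCLASS
    answer_rr = (
        b"\xc0\x0c"            # compression pointer back to the question name
        + b"\x00\x10\x00\x01"  # TYPE TXT, CLASS IN
        + b"\x00\x00\x00\x3c"  # TTL: 60 seconds
        + len(rdata).to_bytes(2, "big")
        + rdata
    )
    return header + question + answer_rr


def main():
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 53))  # binding port 53 requires root
    print("DNS-to-LLM proxy listening on udp/53")
    while True:
        query, addr = sock.recvfrom(512)
        try:
            qname, qname_end = parse_qname(query)
            # Dots separate words: "tell.me.a.joke" becomes "tell me a joke"
            reply = ask_llm(qname.replace(".", " "))
        except Exception as exc:
            reply = f"error: {exc}"
        sock.sendto(build_response(query, qname_end, reply), addr)


if __name__ == "__main__":
    main()
```

Start it with sudo python3 llm_dns.py. Long answers may not survive a plain UDP datagram, so keeping replies short is part of the joke.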
Test your service
From another machine, send a DNS TXT query directly to your server’s IP. The LLM’s response should appear in the terminal output:
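For example, with dots standing in for spaces in the prompt (replace <your-vps-ip> with your server’s address):

```shell
dig @<your-vps-ip> "what.is.a.vps" TXT +short
```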
Secure your setup (recommended)
Use UFW (Uncomplicated Firewall) to restrict access. Allow SSH so you don’t lock yourself out, allow port 53 for DNS queries, and block everything else:
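A minimal ruleset, assuming your image ships with ufw (install it via your package manager if not):

```shell
sudo ufw allow OpenSSH            # keep your SSH session reachable
sudo ufw allow 53                 # DNS queries, both TCP and UDP
sudo ufw default deny incoming    # block everything else
sudo ufw enable
```

Order matters: add the allow rules before enabling, or you can lock yourself out of SSH.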
UFW is literally called “uncomplicated” because that’s what a VPS is—uncomplicated. This setup runs as root and stores your API key in plaintext. For anything beyond experimentation, use environment variables, a non-root user, and a process manager like systemd.
Troubleshooting
Permission denied
Make sure you’re running with sudo. Binding port 53 requires root privileges.
Connection timeout
Check your VPS firewall settings and ensure port 53 (UDP and TCP) is open. Most VPS providers have a cloud-level firewall in addition to the OS-level one—check both.
API errors
Verify your OpenRouter API key and confirm your account has credits. You can test the API directly with curl before running the DNS server.
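A quick sanity check, assuming your key is exported as OPENROUTER_API_KEY and using the same assumed model id as the script:

```shell
curl https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "openai/gpt-4o-mini", "messages": [{"role": "user", "content": "ping"}]}'
```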
No response from the server
Run systemctl status systemd-resolved to confirm it’s actually stopped. If it’s still running, it will bind port 53 and your script will fail to start (or will silently fail to receive queries).