After years of using cloud platforms (Vercel and Netlify), I finally decided to take the plunge and set up my own Virtual Private Server (VPS) to host my applications. In this post, I'll walk you through my entire journey, from initial server access to running containerized applications, sharing both the technical steps and the valuable lessons I learned along the way.
I spent a lot of time looking for the best VPS provider I could afford. Then I stumbled across GreenCloudVPS with their budget KVM sale. I decided to buy their budget KVM VPS located in Singapore, since it's close to the country where I live. I know I could just use DigitalOcean's $6 droplets, but for some reason I wanted something with more resources, even though I haven't decided exactly what kind of apps I'll host on this server. What I do know is that I want to use this server as a sandbox for practice and to get hands-on experience setting up a server. The specs might be overkill, but for $45 a year with those specs, I'm more than happy to go for it.
The journey begins with accessing your fresh VPS instance. Most providers give you root credentials or an SSH key to start with:
ssh root@your_server_ip
I generated a new SSH key pair with a passphrase:
ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
When prompted, I made sure to set a strong passphrase. While this means entering the passphrase every time I SSH into the server, the extra security is worth it. The key generation created two files in ~/.ssh/:

id_rsa (private key)
id_rsa.pub (public key - this goes on the server)

Next, I copied my public key to the server:
# Create .ssh directory if it doesn't exist
ssh root@server_ip_address mkdir -p ~/.ssh
Then I copied the public key ~/.ssh/id_rsa.pub to the server, into the file ~/.ssh/authorized_keys. You need to create the authorized_keys file first if it doesn't exist.
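Appending the key by hand works, but piping it over SSH does the same thing in one step. Here's a minimal sketch, using the same server_ip_address placeholder as above; ssh-copy-id is the standard shortcut that also handles the directory, file, and permissions for you:

# Append the local public key to the server's authorized_keys
cat ~/.ssh/id_rsa.pub | ssh root@server_ip_address "cat >> ~/.ssh/authorized_keys"

# Or let ssh-copy-id do all of the above in one command
ssh-copy-id root@server_ip_address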
My next step was creating a regular user with sudo privileges:
# Create new user
adduser yourusername

# Add user to sudo group
usermod -aG sudo yourusername
To check that I could access the server as the new user, I logged in with it:
ssh yourusername@your_server_ip
After confirming I could log in with the new user and key, I secured the SSH configuration:
# /etc/ssh/sshd_config
PermitRootLogin no
PasswordAuthentication no
UsePAM no
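These changes only take effect once the SSH daemon is restarted (on Ubuntu the service is typically named ssh). I'd keep the current session open while doing this, so a mistake can't lock you out:

# Validate the sshd config, then restart the daemon
sudo sshd -t
sudo systemctl restart ssh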
I also created an SSH config file on my local machine for easier access:
# ~/.ssh/config
Host myserver
    HostName your_server_ip
    User yourusername
    IdentityFile ~/.ssh/id_rsa
To check that you can SSH into the server with the new config (the User line means you no longer need to spell out the username):

ssh myserver
Setting up UFW (Uncomplicated Firewall) was one of the most nerve-wracking parts because a mistake could lock me out of the server. Here's exactly what I did:
# Set default policies first
sudo ufw default deny incoming
sudo ufw default allow outgoing

# IMPORTANT: Allow SSH BEFORE enabling the firewall!
sudo ufw allow OpenSSH

# Allow Nginx
sudo ufw allow 'Nginx Full'

# Enable the firewall
sudo ufw enable
A crucial note: Make absolutely sure you allow OpenSSH before enabling UFW. I can't stress this enough - if you enable the firewall without allowing SSH access first, you'll lock yourself out of your server!
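A quick way to verify the rules, both before and after enabling, is the status command; I'd run it and confirm OpenSSH is listed as allowed before typing ufw enable:

# Show active rules and default policies
sudo ufw status verbose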
I started by installing Nginx, which I'll use as my reverse proxy:
sudo apt update
sudo apt install nginx
To test that my domain could be accessed, I edited my Nginx config like this:
# /etc/nginx/conf.d/nginx.conf
# change example.com to your own domain
server {
    listen 80;
    listen [::]:80;
    server_name example.com; # the hostname

    # all traffic through port 80 will be forwarded to 443
    return 301 https://example.com$request_uri;
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # path created by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # path created by Certbot
    server_name _;
    return 301 https://example.com$request_uri;
}
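One thing the snippet glosses over: those /etc/letsencrypt/... paths only exist once a certificate has been issued. I used Certbot for this; as a rough sketch (assuming the Nginx plugin and example.com as the placeholder domain):

# Install Certbot with the Nginx plugin
sudo apt install certbot python3-certbot-nginx

# Obtain a certificate for the domain (this also verifies DNS points at the server)
sudo certbot certonly --nginx -d example.com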
Then I tested the configuration and restarted Nginx:
sudo nginx -t # Test the configuration
sudo systemctl restart nginx
Key lesson learned about Nginx configuration: always test the configuration with nginx -t before restarting.
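A related habit worth adopting: chaining the test and a reload means Nginx only picks up the new config if the test passes, and reload applies it without dropping active connections the way a full restart can:

# Only reload if the config is valid
sudo nginx -t && sudo systemctl reload nginx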
Installing Docker was straightforward; I followed the official Docker docs:
# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl
sudo install -m 0755 -d /etc/apt/keyrings
sudo curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
sudo chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# To install the latest version, run:
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

# Add user to docker group
sudo usermod -aG docker $USER
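The group change only applies to new sessions, so log out and back in (or run newgrp docker) before verifying the install, roughly as the official docs suggest:

# Verify Docker works without sudo
docker run hello-world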
Because I mainly develop my applications on a Mac (Apple Silicon builds arm64 images by default), I need to build specifically for a Linux amd64 machine, since that's what my Ubuntu server runs on. Adding the --platform linux/amd64 flag is necessary for the build process. The process goes like this:
Build the application image on the local machine:
# Build a Docker image for a specific platform (the Ubuntu server)
# From the project root directory, run:
docker build -t image-name --platform linux/amd64 .
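To double-check the image really targets the server's architecture before shipping it over, docker inspect can print the OS/arch pair (image-name is the same placeholder as above):

# Should print linux/amd64
docker image inspect image-name --format '{{.Os}}/{{.Architecture}}'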
Transfer the Docker image to the server:
docker save <image-name> | bzip2 | ssh user@host docker load

# See the progress using pipe viewer (pv)
# about pv -> https://www.ivarch.com/programs/pv.shtml
docker save <image-name> | bzip2 | pv | ssh user@host docker load
After transferring the Docker image from my local machine to the server, I ran the app using this command:
# Run a container
docker run -d -p HOST_PORT:CONTAINER_PORT image-name:tag

# Run a container with an env file
docker run -d -p HOST_PORT:CONTAINER_PORT --env-file <file-loc> image-name:tag
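For example, with hypothetical values (a blog image listening on port 3000 and an .env.production file; both are illustrative, not my actual setup):

# Map host port 3000 to the app's port 3000 and load environment variables
docker run -d -p 3000:3000 --env-file .env.production blog:latest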
The current situation: I have three apps that I want to serve on my server: Blog (a Next.js app), Dinero (a money-tracker web app, React + Vite), and Swordfish (Dinero's backend service, Hono + Bun). I use Nginx as the reverse proxy, so I edited my Nginx config to look like this:
# /etc/nginx/conf.d/nginx.conf
# add this below the existing config
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; # managed by Certbot
    server_name api.example.com; # the hostname

    location / {
        proxy_pass http://127.0.0.1:8080; # the app behind api.example.com
    }
}

# ... other app reverse proxy config
In my UFW rules I only allow the SSH port (22), port 80, and port 443. But when I tested my application by accessing myweb.domain:app_port, I could still reach the app on its published port. I later found out about the UFW x Docker iptables issue: Docker publishes ports by inserting its own iptables rules, which sit in front of the chains UFW manages, so UFW's deny rules never apply to them. Read this Stack Overflow issue.
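One common mitigation I came across (before moving away from Nginx) is to publish container ports only on the loopback interface; the reverse proxy can still reach them, but Docker no longer exposes them to the outside world. A sketch with the same placeholders as before:

# Bind the published port to 127.0.0.1 so only local processes (like Nginx) can reach it
docker run -d -p 127.0.0.1:8080:8080 image-name:tag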
After running into these issues with Docker's iptables rules conflicting with UFW, I switched to Traefik, a great tool written in Go, which integrates better with Docker and also provides automatic SSL certificate management. I also moved from manually running each Docker container to using Docker Compose. But before that, I stopped my Nginx service:
sudo systemctl stop nginx
sudo systemctl disable nginx
With Docker Compose, running multiple applications became manageable:
# docker-compose.yml
services:
  reverse-proxy:
    image: traefik:v3.1
    container_name: traefik
    command:
      - "--providers.docker"
      - "--providers.docker.exposedbydefault=<value>"
      - "--entryPoints.websecure.address=<value>"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=<value>"
      - "--certificatesresolvers.myresolver.acme.email=<value>"
      - "--certificatesresolvers.myresolver.acme.storage=<value>"
      - "--entrypoints.web.address=<value>"
      - "--entrypoints.web.http.redirections.entrypoint.to=<value>"
      - "--entrypoints.web.http.redirections.entrypoint.scheme=<value>"
      # - "--api.insecure=true"
    ports:
      - "80:80"
      - "443:443"
      # - "8080:8080" # to enable "--api.insecure=true"
    volumes:
      - letsencrypt:/letsencrypt
      - /var/run/docker.sock:/var/run/docker.sock

  blog:
    image: <image name app 1>
    container_name: blog
    labels:
      - "traefik.enable=<value>"
      - "traefik.http.routers.blog.rule=<value>"
      - "traefik.http.routers.blog.entrypoints=<value>"
      - "traefik.http.routers.blog.tls.certresolver=<value>"
    restart: always

  # add other apps config

volumes:
  letsencrypt:
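I've left the <value> placeholders as they are in my actual file. For orientation, here is how I'd expect them to be filled in, following Traefik's own quickstart; the email and hostname below are illustrative, not my real values:

# Sketch of the filled-in Traefik flags and labels (illustrative values)
command:
  - "--providers.docker"
  - "--providers.docker.exposedbydefault=false"
  - "--entryPoints.websecure.address=:443"
  - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
  - "--certificatesresolvers.myresolver.acme.email=you@example.com"
  - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
  - "--entrypoints.web.address=:80"
  - "--entrypoints.web.http.redirections.entrypoint.to=websecure"
  - "--entrypoints.web.http.redirections.entrypoint.scheme=https"
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.blog.rule=Host(`blog.example.com`)"
  - "traefik.http.routers.blog.entrypoints=websecure"
  - "traefik.http.routers.blog.tls.certresolver=myresolver"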
Then I ran Docker Compose with this command:
docker compose up -d # create and start containers, detached
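To confirm everything came up, and to watch Traefik negotiate the Let's Encrypt certificates on first start, check the container status and tail the proxy's logs (service names as in the compose file above):

# List the compose services and their state
docker compose ps

# Follow Traefik's logs to watch certificate issuance
docker compose logs -f reverse-proxy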
For secure remote access, I also installed Tailscale:
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up --ssh
This allowed me to SSH into my server using Tailscale's secure network:
ssh myserver.tailnet
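The exact machine name to use comes from the tailnet itself; tailscale status on any device lists every machine along with its MagicDNS name:

# Show all machines on the tailnet and their names/IPs
tailscale status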
Setting up my own VPS has been an incredible learning experience. It gave me perspective on how to set up a server from scratch, how to host my own applications, and how to handle the configuration myself.