
Free WordPress on Google Cloud – Compute Engine. Part 1

This is one of a series of articles about hosting a WordPress (or any other) website on Google Cloud completely (or almost) FREE of charge.

To run WordPress on the Google Cloud platform (no matter on which resource: Cloud Run, App Engine or anything else) we need an SQL database.

Google Cloud has its own SQL database solution, Cloud SQL, but it is a PAID feature with no resources available under the Free Tier program, so I will not consider it as an option in this article. Instead, I will use a Compute Engine virtual machine, where we will install Docker and run a MariaDB container with the database our Cloud Run service will connect to.

For the Cloud Run service to connect to the database running on Compute Engine, Cloud Run and Compute Engine must be in the same VPC network, so we will create a custom VPC (Virtual Private Cloud) network for the Compute Engine instance and link the Cloud Run service to this network.

To stay flexible when creating and managing Compute Engine VM instances, we will create two Persistent Disks: one for the OS and all required system software (the boot disk), and another for storing persistent data and files such as configs and the database (the data disk).

Compute Engine – Prepare

You don’t have to create a separate project for Compute Engine and Cloud Run to work together, so we will use the same project; all of the following commands assume you stay on the same project configuration.

Activate the Compute Engine API (the Monitoring API is needed to install the Ops Agent and see the monitoring graphs in the Google Cloud console):

gcloud services enable compute.googleapis.com && \
gcloud services enable monitoring.googleapis.com

Set the default region and zone for the Compute Engine VM (I’m using the cheapest ones in terms of egress traffic):

gcloud config set compute/region us-east1
gcloud config set compute/zone us-east1-b
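
The commands that follow use placeholder variables such as ${network} and ${project_id}. If you want to paste them as-is, you can export the values once in your shell session. A minimal helper, using the values from this article (substitute your own):

```shell
# Placeholder variables used throughout this article; adjust to your setup.
export project_id="wordpress-414215"   # your project ID
export region="us-east1"
export zone="us-east1-b"
export network="wpnetwork"             # custom VPC network name
export subnet="wpsubnet"               # custom subnetwork name
export subnetrange="10.10.10.0/24"     # subnetwork IP range
export instance_name="wordpress-sql"   # VM instance name
echo "project=${project_id} network=${network} zone=${zone}"
```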

Custom VPC Network and Subnetwork

Create a custom VPC network:

gcloud compute networks create ${network} \
  --bgp-routing-mode regional \
  --mtu 1460 \
  --project ${project_id} \
  --subnet-mode custom

where ${network} is your network name and ${project_id} is your project ID, so in my case it is:

gcloud compute networks create wpnetwork \
  --bgp-routing-mode regional \
  --mtu 1460 \
  --project wordpress-414215 \
  --subnet-mode custom

Create a custom VPC subnetwork:

gcloud compute networks subnets create ${subnet} \
  --project ${project_id} \
  --range ${subnetrange} \
  --stack-type IPV4_ONLY \
  --network ${network} \
  --purpose PRIVATE \
  --region ${region}

where ${subnet} is your subnetwork name and ${subnetrange} is your subnetwork IP range, so in my case it is:

gcloud compute networks subnets create wpsubnet \
  --project wordpress-414215 \
  --range 10.10.10.0/24 \
  --stack-type IPV4_ONLY \
  --network wpnetwork \
  --purpose PRIVATE \
  --region us-east1
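
As a quick sanity check on the chosen range: a /24 subnetwork provides 256 addresses (10.10.10.0 to 10.10.10.255), of which Google Cloud reserves four in every subnet (network, default gateway, second-to-last and broadcast), leaving 252 usable; more than enough for our single VM:

```shell
# Address math for a /24 range: 2^(32 - prefix) total addresses,
# minus the 4 that Google Cloud reserves in each subnet.
prefix=24
total=$(( 1 << (32 - prefix) ))
usable=$(( total - 4 ))
echo "total=${total} usable=${usable}"   # total=256 usable=252
```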

Firewall Rules for custom VPC network

Create an allow-internal firewall rule (allowing all internal traffic) for the custom VPC network:

gcloud compute firewall-rules create allow-internal \
  --network ${network} \
  --allow tcp:0-65535,udp:0-65535,icmp \
  --source-ranges 10.10.10.0/24 \
  --priority 65534

Create an allow-ssh firewall rule (external SSH traffic on port 22) for the custom VPC network, in case you need to connect to your VM externally via SSH (otherwise you can skip it):

gcloud compute firewall-rules create allow-ssh \
  --network ${network} \
  --allow tcp:22 \
  --priority 65534

Create an allow-icmp firewall rule (external ICMP traffic) for the custom VPC network:

gcloud compute firewall-rules create allow-icmp \
  --network ${network} \
  --allow icmp \
  --priority 65534

When you enabled the Compute Engine API, Google Cloud automatically created a default network with its default firewall rules; they can be removed now:

gcloud compute firewall-rules delete \
  default-allow-ssh \
  default-allow-internal \
  default-allow-icmp \
  default-allow-rdp --quiet && \
gcloud compute networks delete default --quiet

Persistent Disks

Google Cloud gives you 30 GB-months of standard persistent disk for free (as part of the Free Tier program), so we will use 15GB for the boot disk (OS image) and 15GB for the data disk.

Let’s create the Data Disk (the Boot Disk will be created along with the VM instance) by running the following command:

gcloud compute disks create datadisk \
  --project ${project_id} \
  --type pd-standard \
  --size 15GB \
  --zone ${zone}

Compute Engine – Create

Now we are ready to create the wordpress-sql Compute Engine virtual machine instance:

gcloud compute instances create wordpress-sql \
  --create-disk name=bootdisk,image-family=debian-12,image-project=debian-cloud,size=15GB,type=pd-standard,boot=yes \
  --disk name=datadisk,auto-delete=no \
  --machine-type e2-micro \
  --network ${network} \
  --network-tier STANDARD \
  --stack-type IPV4_ONLY \
  --subnet ${subnet} \
  --zone ${zone} \
  --scopes cloud-platform

With this command the datadisk is attached to the VM but not yet formatted or mounted into the filesystem.

For this Compute Engine VM instance I’m using the debian-12 boot image family from the debian-cloud image project; feel free to use your favourite one. To see all available images, use the following command:

gcloud compute images list

Once the instance is created, check whether you can connect to your VM via SSH (replace ${instance_name} with your VM instance name):

gcloud compute ssh ${instance_name}

If you cannot connect to the instance, check the logs and fix the issues:

gcloud compute ssh ${instance_name} --project=${project_id} --zone=${zone} --troubleshoot

Persistent Disks Setup

Check the disks attached to the VM:

lsblk -f

This should give you something like the following:

NAME    FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1  ext4   1.0         3eaae429-1dfa-4ac7-9b86-a07dc3b1d6a3   11.9G    14% /
├─sda14
└─sda15 vfat   FAT16       2FCA-F97F                             112.2M     9% /boot/efi
sdb

From this listing you can see that sda is your boot disk and sdb is your data disk, which has no partitions and is neither formatted nor mounted.

Run the following command to check the disks:

sudo parted -l

You may see an error regarding disk sda:

Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 2014 blocks) or continue with the
current setting? 
Fix/Ignore?

Type Fix to fix the issue with sda; after that you should see something like:

Warning: Not all of the space available to /dev/sda appears to be used, you can
fix the GPT to use all of the space (an extra 2014 blocks) or continue with the
current setting? 
Fix/Ignore? Fix
Model: Google PersistentDisk (scsi)
Disk /dev/sda: 16.1GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system  Name  Flags
14      1049kB  4194kB  3146kB                     bios_grub
15      4194kB  134MB   130MB   fat16              boot, esp
 1      134MB   16.1GB  16.0GB  ext4


Error: /dev/sdb: unrecognised disk label
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 16.1GB
Sector size (logical/physical): 512B/4096B
Partition Table: unknown
Disk Flags:

So disk sdb has no partitions; let’s create them. Type:

sudo parted /dev/sdb
GNU Parted 3.5
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted)

Disk /dev/sdb is already selected, as you can see from the listing above, but if not, type:

select /dev/sdb
GNU Parted 3.5
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) select /dev/sdb
Using /dev/sdb
(parted)

Make a partition table:

mklabel gpt

You may see a warning message that all data on the disk will be destroyed; type Yes, then check the table by typing print. You should see the partition table of /dev/sdb:

(parted) print
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 16.1GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start  End  Size  File system  Name  Flags

(parted)

Create a partition using the whole space of the disk by typing the following command:

mkpart primary ext4 0% 100%

Type print again and you should see:

(parted) mkpart primary ext4 0% 100%
(parted) print
Model: Google PersistentDisk (scsi)
Disk /dev/sdb: 16.1GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  16.1GB  16.1GB  ext4         primary

To save the changes, just type quit. Now we can format our /dev/sdb1 partition; type the following command:

sudo mkfs -t ext4 /dev/sdb1

You should see something like:

mke2fs 1.47.0 (5-Feb-2023)
Discarding device blocks: done
Creating filesystem with 3931648 4k blocks and 983040 inodes
Filesystem UUID: 0a09156c-cd23-4176-99fd-855e893fbe9c
Superblock backups stored on blocks:
        32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208

Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done

Check the disks by typing lsblk -f; you should now see something like this:

NAME    FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda
├─sda1  ext4   1.0         3eaae429-1dfa-4ac7-9b86-a07dc3b1d6a3   11.9G    14% /
├─sda14
└─sda15 vfat   FAT16       2FCA-F97F                             112.2M     9% /boot/efi
sdb
└─sdb1  ext4   1.0         0a09156c-cd23-4176-99fd-855e893fbe9c

So now we have the UUID of the sdb1 partition of the sdb disk; note it down, we will use it later. Type exit to close the SSH connection.
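
To avoid retyping the UUID by hand, you can also capture it on the VM with sudo blkid -s UUID -o value /dev/sdb1. As a local illustration of the same idea, here is a sketch that pulls the UUID out of the lsblk -f listing shown above (field positions assume the LABEL column is empty, as it is here):

```shell
# Extract the UUID column of the ext4 data partition from an `lsblk -f`
# listing. The sample input is the listing from this article.
lsblk_sample='sdb
└─sdb1  ext4   1.0         0a09156c-cd23-4176-99fd-855e893fbe9c'
disk_uuid=$(printf '%s\n' "$lsblk_sample" | awk '$2 == "ext4" {print $4; exit}')
echo "$disk_uuid"   # 0a09156c-cd23-4176-99fd-855e893fbe9c
```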

Startup Scripts

Having the UUID of the newly created data disk (0a09156c-cd23-4176-99fd-855e893fbe9c in my case), we can now automate some processes on VM instance start, stop, creation or even deletion; we will use the startup scripts feature.

Create a startup.sh file in the root folder of your project, put the following code inside, and save it:

#!/bin/bash

################################################################
# Creating Bash variables for further use in the script        #
################################################################
disk_mount_dir=/data
create_file=/var/vm_created

################################################################
# Install required software and create /var/vm_created file    #
################################################################
first_time_run () {
    local disk_uuid=0a09156c-cd23-4176-99fd-855e893fbe9c # it should be your disk UUID

    # Mounting SWAP file permanently
    echo "Mounting SWAP file ..."
    dd if=/dev/zero of=/swapfile bs=1M count=1000
    chmod 0600 /swapfile
    mkswap /swapfile
    swapon /swapfile
    echo "/swapfile swap swap defaults 0 0" | tee -a /etc/fstab
    mount -a

    echo "backing up /etc/fstab => /etc/fstab.backup ..."
    cp /etc/fstab /etc/fstab.backup

    # Mounting datadisk permanently
    echo "Mounting datadisk ..."
    mkdir -p ${disk_mount_dir}
    echo "UUID=${disk_uuid} ${disk_mount_dir} ext4 discard,defaults,nofail 0 2" | tee -a /etc/fstab
    mount -a

    # Installing Google Cloud Ops Agent
    echo "Installing Ops Agent ..."
    cd /tmp
    curl -sSO https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh
    bash add-google-cloud-ops-agent-repo.sh --also-install
    bash add-google-cloud-ops-agent-repo.sh --remove-repo

    # Installing Docker with Docker Compose
    # Source: https://docs.docker.com/engine/install/debian/
    echo "Installing Docker Engine ..."
    # Add Docker's official GPG key:
    apt-get update
    apt-get install --no-install-recommends -yy ca-certificates curl gnupg #jq htop
    install -m 0755 -d /etc/apt/keyrings
    curl -fsSL https://download.docker.com/linux/debian/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
    chmod a+r /etc/apt/keyrings/docker.gpg

    # Add the repository to Apt sources:
    echo \
        "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/debian \
        $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
        tee /etc/apt/sources.list.d/docker.list > /dev/null
    apt-get update

    # Install Docker
    apt-get install --no-install-recommends -yy docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

    # Cleanup
    apt autoremove -yy

    # creating ${create_file} marker so this section runs only on instance creation, not on every restart
    touch ${create_file}
}

if ! [[ -f ${create_file} ]]; then
    first_time_run
fi
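
The key trick in startup.sh is the ${create_file} marker: the script runs on every boot, but the expensive provisioning in first_time_run happens only once. The guard can be exercised locally with a hypothetical marker path:

```shell
# Demonstration of the run-once guard from startup.sh, using a temporary
# marker file instead of /var/vm_created.
create_file=$(mktemp -u)   # hypothetical marker path, does not exist yet

first_time_run () {
    echo "provisioning"
    touch "${create_file}"   # marker prevents re-running on the next boot
}

out1=$([ -f "${create_file}" ] || first_time_run)   # first boot: runs
out2=$([ -f "${create_file}" ] || first_time_run)   # later boots: skipped
rm -f "${create_file}"
echo "first boot:  ${out1:-<nothing>}"
echo "later boots: ${out2:-<nothing>}"
```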

Now we can add this script to the metadata of the VM instance; type the following command:

gcloud compute instances add-metadata ${instance_name} --metadata-from-file startup-script=startup.sh

You should see something like:

Updated [https://www.googleapis.com/compute/v1/projects/wordpress-414215/zones/us-east1-b/instances/wordpress-sql].

By adding the above startup.sh script to the metadata of your VM instance we achieved the following:

  • Data Disk will be mounted automatically on every VM instance start
  • Swap file of 1GB is created once and mounted automatically on every VM instance start
  • Google Cloud Ops Agent is installed once and started automatically on every VM instance start
  • Docker and Docker Compose are installed once and started automatically on every VM instance start

All of the above could be done by running the commands from the startup.sh script manually (using sudo), but I wanted to demonstrate this useful feature to you, so you could use it for any other purpose in your VM instance.

Initiate VM instance restart:

gcloud compute instances stop ${instance_name} && \
gcloud compute instances start ${instance_name}

After the VM has successfully restarted, connect to the VM instance via SSH and check the disks by typing lsblk -f; you should see something like:

NAME    FSTYPE FSVER LABEL UUID                                 FSAVAIL FSUSE% MOUNTPOINTS
sda
└─sda1  ext4   1.0         0a09156c-cd23-4176-99fd-855e893fbe9c   13.9G     0% /data
sdb
├─sdb1  ext4   1.0         3eaae429-1dfa-4ac7-9b86-a07dc3b1d6a3   10.4G    23% /
├─sdb14
└─sdb15 vfat   FAT16       2FCA-F97F                             112.2M     9% /boot/efi

In my case, as you can see, the Data Disk is now /dev/sda and the Boot Disk is now /dev/sdb, which is fine: we are not relying on disk names.

You can check now the startup script logs by typing the following command:

sudo journalctl -u google-startup-scripts.service | tail -n 200

By typing free you should see the memory allocation and the swap file in use:

               total        used        free      shared  buff/cache   available
Mem:          997372      516200       78800         460      558320      481172
Swap:        1023996       36768      987228
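
The swap total of 1023996 KiB matches the startup script: dd writes a 1000 MiB /swapfile, and mkswap reserves one 4 KiB page for its header:

```shell
# 1000 MiB swap file, minus the 4 KiB mkswap header, expressed in KiB.
swap_kib=$(( 1000 * 1024 - 4 ))
echo "${swap_kib}"   # 1023996
```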

You can also check the status of the Google Cloud Ops Agent services by typing the following command:

sudo systemctl status google-cloud-ops-agent"*"

By typing docker -v you should see the Docker version installed and running on your VM instance:

Docker version 25.0.3, build 4debf41

By typing docker compose version you should see the Docker Compose version installed and running on your VM instance:

Docker Compose version v2.24.5

MariaDB Docker Container

Let’s create a MariaDB docker container and a wordpress database with a wordpress user, so we can use it later in our Cloud Run service.

In your project root folder create another folder, docker, and inside it create a file compose.yaml with the following content:

services:
  # Mariadb container
  mariadb:
    image: mariadb:lts-jammy
    container_name: mariadb
    environment:
      MARIADB_ROOT_PASSWORD: rootpass1234
      MARIADB_DATABASE: wordpress
      MARIADB_USER: wordpress
      MARIADB_PASSWORD: userpass2345
    ports:
      - "3306:3306"
    volumes:
      - /data/mariadb/:/var/lib/mysql/
    restart: always
    healthcheck:
      test: [ "CMD", "healthcheck.sh", "--su-mysql", "--connect" ]
      start_period: 1m
      start_interval: 10s
      interval: 1m
      timeout: 5s
      retries: 3

This will create a MariaDB server and a wordpress database with a wordpress user, with rootpass1234 as the root password and userpass2345 as the wordpress database user password. Please don’t use such passwords on your production instance; I hope you understand why.

This container will always be started automatically on each VM instance start/restart. Port 3306 is exposed locally only: it won’t be accessible from the internet, or even from other GCP instances of this (or any other of your) projects, unless they use the same VPC network. The database files will be stored in the mariadb folder of the persistent Data Disk, which is mounted at /data in the VM instance.

You can find more details about the format of the compose.yaml file on the official Docker page.

Connect to your VM instance via SSH and create a /data/docker folder with 777 permissions:

sudo mkdir -p /data/docker && sudo chmod 777 /data/docker

Now you should be able to upload files from your local machine to the /data/docker folder on your VM instance. Let’s upload the Docker compose.yaml file: exit from the VM SSH session and run the following command:

gcloud compute scp ./docker/compose.yaml ${instance_name}:/data/docker/

If everything is OK, you should see something like:

compose.yaml              100%  570     4.5KB/s   00:00

This means the compose.yaml file was successfully uploaded to the VM instance, and now we can launch the mariadb docker container. Connect to the VM instance and run the following command:

sudo docker compose -f /data/docker/compose.yaml up -d mariadb

This command will launch (up) the mariadb docker container in detached (-d) mode.
You can check the logs of mariadb container:

sudo docker logs mariadb --tail 50

If everything is OK you should see something like:

...
[Note] mariadbd: ready for connections.
Version: '10.11.7-MariaDB-1:10.11.7+maria~ubu2204'  socket: '/run/mysqld/mysqld.sock'  port: 3306  mariadb.org binary distribution

This means that we should be able to connect to our MariaDB wordpress database using the wordpress user and the userpass2345 password, on port 3306, from localhost or from any internal IP address of the VPC network. The host of the VM instance can be identified with the uname -a command; it should be an internal DNS name, like (in my case):

wordpress-sql.us-east1-b.c.wordpress-414215.internal

Now we should be able to connect to the MariaDB database from Cloud Run, App Engine or any other Google Cloud resource (within the VPC network).
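
For reference when configuring the service in Part 2, these are the connection settings that follow from the setup above. The WORDPRESS_DB_* names are the environment variables the official WordPress Docker image typically reads (an assumption here, since Part 2 is not shown); the host is the VM’s internal DNS name, which follows the pattern <instance>.<zone>.c.<project>.internal:

```shell
# Connection settings implied by the compose.yaml and VM created above;
# substitute your own instance, zone, project and passwords.
export WORDPRESS_DB_HOST="wordpress-sql.us-east1-b.c.wordpress-414215.internal:3306"
export WORDPRESS_DB_NAME="wordpress"
export WORDPRESS_DB_USER="wordpress"
export WORDPRESS_DB_PASSWORD="userpass2345"   # sample password from compose.yaml
echo "${WORDPRESS_DB_HOST}"
```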

Let’s continue configuring Cloud Run service (Part 2).