If you don't want to use BorgBase you could use rsync.net or even another VPS that you run.
This tutorial will be done on a fresh Digital Ocean droplet with Ubuntu 18.04.
Commands will be run by a user named johndoe with sudo privileges.
First things first, we need to install Borg; luckily it's available in Ubuntu's software repositories.
sudo apt update
sudo apt install borgbackup
You can check everything worked correctly by running borg --version; you should see something like borg 1.1.5, which is the version at the time of writing this post.
Now we need to install Python 3 along with PIP so that we can install Borgmatic.
Borgmatic is a wrapper for Borg that allows us to manage backups with easy-to-use configuration files. It isn't required in order to use Borg, but I'm going to use it here to show you how it works.
You may already have Python 3 installed (I think 18.04 does by default). You can run the commands below to check.
python --version
python3 --version
If Python 3 is already installed, check which version of Python PIP is currently using; it might not be installed at all.
pip --version
pip3 --version
If PIP returns (python 2.7) at the end or it is not installed at all then we need to install PIP for Python 3.
sudo apt install python3-pip python3-setuptools
Make sure everything was installed correctly by running pip3 --version.
Next install the following package that is used by Borgmatic.
pip3 install wheel
Using PIP install Borgmatic for your user (johndoe in my case).
pip3 install --user --upgrade borgmatic
You may need to edit your ~/.bashrc file to include these commands in your PATH by adding the following to the end of the file.
export PATH="$HOME/.local/bin:$PATH"
Then run source ~/.bashrc to update it for your current session.
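If you'd rather ~/.bashrc didn't keep prepending duplicate entries each time it's sourced, a small guard like this works in any POSIX shell (a sketch with the same effect as the line above):

```shell
# Prepend ~/.local/bin to PATH only if it isn't already present.
case ":$PATH:" in
    *":$HOME/.local/bin:"*) ;;                   # already on PATH, nothing to do
    *) export PATH="$HOME/.local/bin:$PATH" ;;   # prepend it
esac
```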
Next we can generate the default configuration file by running the following.
sudo env "PATH=$PATH" generate-borgmatic-config
The reason we pass env "PATH=$PATH" is to make sure we still have the borgmatic commands in our PATH when running sudo.
We could edit the secure_path in /etc/sudoers to include /home/johndoe/.local/bin, but I'll leave it as it is for this tutorial.
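For reference, if you did want to go that route, the change would look something like this in /etc/sudoers (always edit it with visudo, and note the exact default path list may differ on your system):

```
Defaults secure_path="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/home/johndoe/.local/bin"
```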
Before we go and edit the config file we'll first generate a new key pair and create our remote repository in borgbase.
To generate the key pair run the following command:
ssh-keygen -t ed25519 -a 100
Call it /home/johndoe/.ssh/borg_id_ed25519, making sure to replace johndoe with your username, and leave the passphrase empty.
This will generate an Ed25519 key, which is shorter and faster than a comparable RSA key.
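If you prefer a non-interactive version, something like this works (a sketch: it writes to a temporary directory so it's safe to try out, so swap in /home/johndoe/.ssh/borg_id_ed25519 for real use):

```shell
# -f sets the output file, -N "" gives an empty passphrase, -q keeps it quiet.
keydir=$(mktemp -d)
ssh-keygen -t ed25519 -a 100 -N "" -q -f "$keydir/borg_id_ed25519"
# The public half is what we'll paste into BorgBase shortly.
cat "$keydir/borg_id_ed25519.pub"
```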
Display the public key by running:
cat ~/.ssh/borg_id_ed25519.pub
Copy this public key and add it to your BorgBase account by clicking "ACCOUNT" and then "ADD KEY".
Give it a name you'll recognise for your server and add the new key above to the Append-only access section.
Copy the repo path as we'll be adding it to the config file next; it will be something like c89dks9m@c89dks9m.repo.borgbase.com:repo.
Open /etc/borgmatic/config.yaml by running sudo nano /etc/borgmatic/config.yaml
and edit its contents to look something like this:
# Where to look for files to backup, and where to store those backups. See
# https://borgbackup.readthedocs.io/en/stable/quickstart.html and
# https://borgbackup.readthedocs.io/en/stable/usage.html#borg-create for details.
location:
    # List of source directories to backup (required). Globs and tildes are expanded.
    source_directories:
        - /root
        - /home
        - /etc
        - /var/log/syslog*

    # Paths to local or remote repositories (required). Tildes are expanded. Multiple
    # repositories are backed up to in sequence. See ssh_command for SSH options like
    # identity file or port.
    repositories:
        - YOUR-REPO-ID@YOUR-REPO-ID.repo.borgbase.com:repo

    # Stay in same file system (do not cross mount points). Defaults to false.
    #one_file_system: true

    # Only store/extract numeric user and group identifiers. Defaults to false.
    #numeric_owner: true

    # Use Borg's --read-special flag to allow backup of block and other special
    # devices. Use with caution, as it will lead to problems if used when
    # backing up special devices such as /dev/zero. Defaults to false.
    #read_special: false

    # Record bsdflags (e.g. NODUMP, IMMUTABLE) in archive. Defaults to true.
    #bsd_flags: true

    # Mode in which to operate the files cache. See
    # https://borgbackup.readthedocs.io/en/stable/usage/create.html#description for
    # details. Defaults to "ctime,size,inode".
    #files_cache: ctime,size,inode

    # Alternate Borg local executable. Defaults to "borg".
    #local_path: borg1

    # Alternate Borg remote executable. Defaults to "borg".
    #remote_path: borg1

    # Any paths matching these patterns are included/excluded from backups. Globs are
    # expanded. (Tildes are not.) Note that Borg considers this option experimental.
    # See the output of "borg help patterns" for more details. Quote any value if it
    # contains leading punctuation, so it parses correctly.
    #patterns:
    #    - R /
    #    - '- /home/*/.cache'
    #    - + /home/susan
    #    - '- /home/*'

    # Read include/exclude patterns from one or more separate named files, one pattern
    # per line. Note that Borg considers this option experimental. See the output of
    # "borg help patterns" for more details.
    #patterns_from:
    #    - /etc/borgmatic/patterns

    # Any paths matching these patterns are excluded from backups. Globs and tildes
    # are expanded. See the output of "borg help patterns" for more details.
    exclude_patterns:
        - '*.pyc'
        - ~/*/.cache
        # - /etc/ssl

    # Read exclude patterns from one or more separate named files, one pattern per
    # line. See the output of "borg help patterns" for more details.
    #exclude_from:
    #    - /etc/borgmatic/excludes

    # Exclude directories that contain a CACHEDIR.TAG file. See
    # http://www.brynosaurus.com/cachedir/spec.html for details. Defaults to false.
    exclude_caches: true

    # Exclude directories that contain a file with the given filename. Defaults to not
    # set.
    exclude_if_present: .nobackup

# Repository storage options. See
# https://borgbackup.readthedocs.io/en/stable/usage.html#borg-create and
# https://borgbackup.readthedocs.io/en/stable/usage/general.html#environment-variables for
# details.
storage:
    # The standard output of this command is used to unlock the encryption key. Only
    # use on repositories that were initialized with passcommand/repokey encryption.
    # Note that if both encryption_passcommand and encryption_passphrase are set,
    # then encryption_passphrase takes precedence. Defaults to not set.
    #encryption_passcommand: secret-tool lookup borg-repository repo-name

    # Passphrase to unlock the encryption key with. Only use on repositories that were
    # initialized with passphrase/repokey encryption. Quote the value if it contains
    # punctuation, so it parses correctly. And backslash any quote or backslash
    # literals as well. Defaults to not set.
    encryption_passphrase: CHANGE-ME-TO-A-LONG-SECURE-PASSPHRASE

    # Number of seconds between each checkpoint during a long-running backup. See
    # https://borgbackup.readthedocs.io/en/stable/faq.html#if-a-backup-stops-mid-way-does-the-already-backed-up-data-stay-there
    # for details. Defaults to checkpoints every 1800 seconds (30 minutes).
    #checkpoint_interval: 1800

    # Specify the parameters passed to the chunker (CHUNK_MIN_EXP, CHUNK_MAX_EXP,
    # HASH_MASK_BITS, HASH_WINDOW_SIZE). See https://borgbackup.readthedocs.io/en/stable/internals.html
    # for details. Defaults to "19,23,21,4095".
    #chunker_params: 19,23,21,4095

    # Type of compression to use when creating archives. See
    # https://borgbackup.readthedocs.org/en/stable/usage.html#borg-create for details.
    # Defaults to "lz4".
    compression: auto,zstd

    # Remote network upload rate limit in kiBytes/second. Defaults to unlimited.
    #remote_rate_limit: 100

    # Command to use instead of "ssh". This can be used to specify ssh options.
    # Defaults to not set.
    ssh_command: ssh -i /home/johndoe/.ssh/borg_id_ed25519

    # Base path used for various Borg directories. Defaults to $HOME, ~$USER, or ~.
    # See https://borgbackup.readthedocs.io/en/stable/usage/general.html#environment-variables for details.
    #borg_base_directory: /path/to/base

    # Path for Borg configuration files. Defaults to $borg_base_directory/.config/borg
    #borg_config_directory: /path/to/base/config

    # Path for Borg cache files. Defaults to $borg_base_directory/.cache/borg
    #borg_cache_directory: /path/to/base/cache

    # Path for Borg security and encryption nonce files. Defaults to $borg_base_directory/.config/borg/security
    #borg_security_directory: /path/to/base/config/security

    # Path for Borg encryption key files. Defaults to $borg_base_directory/.config/borg/keys
    #borg_keys_directory: /path/to/base/config/keys

    # Umask to be used for borg create. Defaults to 0077.
    #umask: 0077

    # Maximum seconds to wait for acquiring a repository/cache lock. Defaults to 1.
    #lock_wait: 5

    # Name of the archive. Borg placeholders can be used. See the output of
    # "borg help placeholders" for details. Defaults to
    # "{hostname}-{now:%Y-%m-%dT%H:%M:%S.%f}". If you specify this option, you must
    # also specify a prefix in the retention section to avoid accidental pruning of
    # archives with a different archive name format. And you should also specify a
    # prefix in the consistency section as well.
    archive_name_format: '{hostname}-{now}'

# Retention policy for how many backups to keep in each category. See
# https://borgbackup.readthedocs.org/en/stable/usage.html#borg-prune for details.
# At least one of the "keep" options is required for pruning to work. See
# https://torsion.org/borgmatic/docs/how-to/deal-with-very-large-backups/
# if you'd like to skip pruning entirely.
retention:
    # Keep all archives within this time interval.
    #keep_within: 3H

    # Number of secondly archives to keep.
    #keep_secondly: 60

    # Number of minutely archives to keep.
    #keep_minutely: 60

    # Number of hourly archives to keep.
    #keep_hourly: 24

    # Number of daily archives to keep.
    keep_daily: 7

    # Number of weekly archives to keep.
    keep_weekly: 4

    # Number of monthly archives to keep.
    keep_monthly: 6

    # Number of yearly archives to keep.
    keep_yearly: 1

    # When pruning, only consider archive names starting with this prefix.
    # Borg placeholders can be used. See the output of "borg help placeholders" for
    # details. Defaults to "{hostname}-".
    prefix: '{hostname}-'

# Consistency checks to run after backups. See
# https://borgbackup.readthedocs.org/en/stable/usage.html#borg-check and
# https://borgbackup.readthedocs.org/en/stable/usage.html#borg-extract for details.
consistency:
    # List of one or more consistency checks to run: "repository", "archives", and/or
    # "extract". Defaults to "repository" and "archives". Set to "disabled" to disable
    # all consistency checks. "repository" checks the consistency of the repository,
    # "archives" checks all of the archives, and "extract" does an extraction dry-run
    # of the most recent archive.
    checks:
        - repository
        - archives

    # Paths to a subset of the repositories in the location section on which to run
    # consistency checks. Handy in case some of your repositories are very large, and
    # so running consistency checks on them would take too long. Defaults to running
    # consistency checks on all repositories configured in the location section.
    #check_repositories:
    #    - user@backupserver:sourcehostname.borg

    # Restrict the number of checked archives to the last n. Applies only to the
    # "archives" check. Defaults to checking all archives.
    check_last: 3

    # When performing the "archives" check, only consider archive names starting with
    # this prefix. Borg placeholders can be used. See the output of
    # "borg help placeholders" for details. Defaults to "{hostname}-".
    prefix: '{hostname}-'

# Options for customizing borgmatic's own output and logging.
#output:
    # Apply color to console output. Can be overridden with --no-color command-line
    # flag. Defaults to true.
    #color: false

# Shell commands or scripts to execute before and after a backup or if an error has occurred.
# IMPORTANT: All provided commands and scripts are executed with user permissions of borgmatic.
# Do not forget to set secure permissions on this file as well as on any script listed (chmod 0700) to
# prevent potential shell injection or privilege escalation.
hooks:
    # List of one or more shell commands or scripts to execute before creating a backup.
    before_backup:
        - echo "`date` - Starting backup"
        - mysqldump --all-databases > /home/johndoe/databases.sql

    # List of one or more shell commands or scripts to execute after creating a backup.
    after_backup:
        - echo "`date` - Finished backup"
        - rm /home/johndoe/databases.sql

    # List of one or more shell commands or scripts to execute in case an exception has occurred.
    #on_error:
    #    - echo "Error while creating a backup."

    # Umask used when executing hooks. Defaults to the umask that borgmatic is run with.
    #umask: 0077
Make sure to change the encryption passphrase to a long, secure secret and update the repository address. Change the files to back up to suit your specific needs.
If you want to include your databases in the backup then you can use the before and after hooks (again, make sure to change johndoe to the name of your user). If not, just comment out those lines.
Make sure to back up your passphrase, as you won't be able to decrypt your backups without it.
Run the following command to check for any configuration errors.
sudo env "PATH=$PATH" validate-borgmatic-config
If everything is okay you should see All given configuration files are valid: /etc/borgmatic/config.yaml.
Next, initialise the repository by running:
sudo env "PATH=$PATH" borgmatic init --encryption repokey-blake2
You'll be asked about the authenticity of the host when connecting for the first time. Check the ECDSA key fingerprint against the one shown in BorgBase by hovering over the fingerprint icon on the repository, and make sure the SHA256 value matches.
Then enter yes to continue. You'll see a message saying Repository .... does not exist. This is simply because it is the first time you are running the command and the repository is currently being created.
To create our first backup we can simply run the following:
sudo env "PATH=$PATH" borgmatic --verbosity 1
The verbosity flag simply tells Borgmatic to print out all the files it is adding; quickly check through the list to make sure they all look correct as per your /etc/borgmatic/config.yaml file.
Since we're using sudo to run borgmatic we need to edit our /etc/sudoers file to allow passwordless sudo for that particular command whilst running our cron job.
sudo visudo
At the end of the file add the following:
johndoe ALL=(root) NOPASSWD: /home/johndoe/.local/bin/borgmatic
This will allow us to run our cron job with sudo and not be prompted for a sudo password.
To add a new cron job, type crontab -e in the terminal.
Add the following line to the end of the file.
0 0 * * * sudo /home/johndoe/.local/bin/borgmatic
This will create a new backup every day at midnight.
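If midnight doesn't suit you, the schedule field is standard cron syntax; a few variations for illustration:

```
0 0 * * *    sudo /home/johndoe/.local/bin/borgmatic   # daily at midnight
0 3 * * 0    sudo /home/johndoe/.local/bin/borgmatic   # weekly, Sundays at 03:00
0 */6 * * *  sudo /home/johndoe/.local/bin/borgmatic   # every six hours
```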
If you would like to separate your apps into different repositories, or even create a repository for backing up just your database, you can create a new config file by running:
sudo env "PATH=$PATH" generate-borgmatic-config --destination /etc/borgmatic.d/app1.yaml
You can then go and update the new config file to your liking e.g. to make an hourly database backup.
When setting up cron jobs for backups as above, you can pass --config /etc/borgmatic.d/app1.yaml to tell Borgmatic to only run the backup for that repository.
0 * * * * sudo /home/johndoe/.local/bin/borgmatic --config /etc/borgmatic.d/app1.yaml
This will run our app1 config file every hour.
To see all of your backup archives you can run:
sudo env "PATH=$PATH" borgmatic list
To see details about usage and the size of archives you can run:
sudo env "PATH=$PATH" borgmatic info
To restore a backup you first need to get the name of the archive using the borgmatic list command above.
The list command should display something like this:
host-2019-01-01T04:05:06.070809 Tue, 2019-01-01 04:05:06 [...]
host-2019-01-02T04:06:07.080910 Wed, 2019-01-02 04:06:07 [...]
Then you can simply run:
sudo env "PATH=$PATH" borgmatic extract --archive host-2019-01-02T04:06:07.080910
You can also extract specific files by running:
sudo env "PATH=$PATH" borgmatic extract --archive host-2019-01-02T04:06:07.080910 --restore-path /path/1 /path/2
More information about extracting repositories and individual files can be found here - https://torsion.org/borgmatic/docs/how-to/restore-a-backup/
Borg has many more great features you can read about in the official docs here - https://borgbackup.readthedocs.io/en/stable/
Hopefully this has given you a quick overview regarding Borg's features and how simple it can be to set up.
There are a number of different ways you can go about adding comments to your static site. The most common option is to use a third-party service and embed the comments onto your page using an iframe. Some examples are:
There are also some pretty awesome self-hosted options like Commento (the commenting platform I'm using for this site).
We're going to take advantage of Cockpit forms and use that as a basis for setting up comments on our blog.
Here's how it will work:
There's quite a lot more going on than that, but it should give a basic overview.
To get started we'll head over to Cockpit and create a new form called comments.
Make sure you leave save form data as false; I'll explain why shortly.
You also need to set up SMTP mailer settings in your config if you have not done already. I explain how in my previous post on contact forms.
You should already have an API key you can use only for form submissions if you followed the last post in this series; if not, create a new key and add /api/forms/submit/* in the rules section.
Give it a test by sending a POST request with Insomnia or Postman to see if your token is working as expected.
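If you'd rather test from the terminal, a curl equivalent looks like this (the domain and token are placeholders, so the request itself is left commented out; the first lines just sanity-check the payload):

```shell
# Example payload for the comments form; post_id must be a real post's _id.
payload='{"form":{"post_id":"xxxxxxxx","parent_id":null,"name":"John Doe","email":"you@example.com","comment":"This is my new comment.","notify_replies":true}}'
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload OK"
# Substitute your own Cockpit URL and forms token, then uncomment to send:
# curl -X POST "https://cms.yourdomain.com/api/forms/submit/comments?token=YOUR-FORMS-TOKEN" \
#      -H "Content-Type: application/json" -d "$payload"
```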
You should receive a notification email but you won't see a new form entry saved as we set this to false above.
This new comments form is where new comments will be posted to and saved when they are awaiting approval.
Once approved we will be deleting the entry from here, but more on that later.
When we approve a comment we are going to save it as a new collection entry and remove its entry from the comments form.
This will allow us to create a collectionLink (relationship) between our post and the comments for that post.
That way when we fetch our posts data from Cockpit we can also fetch the comments belonging to each post at the same time.
So head over to Collections in Cockpit and click Add Collection.
Our new comments collection will have fields matching the form data: parent_id, name, email, body, notify_replies (a boolean with the options {"default": false}) and post (a collectionLink with the options {"link": "posts", "display": "title", "multiple": false, "limit": false}).
Make sure to include the options in the provided JSON options field when adding the notify_replies and post fields.
Since we set multiple to false in the post collectionLink, we've essentially created the inverse of a one-to-many relationship. (In Laravel this would be like return $this->belongsTo('App\Post');)
If you've been following along with this series you should already have a posts collection set up. If not I show you how in my first post of the series here.
We need to add a new collectionLink field called comments with the following options:
{
    "link": "comments",
    "display": "name",
    "multiple": true,
    "limit": false
}
Notice here that we've set multiple to true; this is essentially a one-to-many relationship, e.g. one post can have many comments. (In Laravel this would be like return $this->hasMany('App\Comment');)
Your posts collection should now look something like this:
Okay so we've created a new comments form, a comments collection and updated our posts collection. Next we need to look at how we go about approving new comments that arrive in our comments form.
When a new comment is made we need to be able to moderate it first before it is published to the site, that's why we keep all pending comments in our comments form first.
We're going to create a new custom endpoint in Cockpit that will allow us to simply click a link and approve a comment.
We first need to add a new custom API key that only has permission to approve comments. So head over to Settings, API Access, and add a new key with /api/forms/approve/comments in the rules section. We don't want to share this key with anyone.
To add a new custom API endpoint, create a new file at config/api/forms/approve/ called comments.php (you'll need to create the api, forms and approve directories). This will allow us to visit https://cms.yourdomain.com/api/forms/approve/comments?id=xxxx&token=xxxx to access it.
We'll be passing an id parameter identifying the comments form entry to the endpoint, which is why I've included it in the url above.
In this file add the following:
<?php

// find the form entry using the id we included in the url
$form_entry = cockpit('forms')->findOne('comments', ['_id' => $this->param('id', null)]);

// bail out if the entry doesn't exist or has no data
if (!$form_entry || !$form_data = $form_entry['data']) {
    return $this->stop('{"error": "No form entry found"}', 412);
}

// find the post that this comment is for
$post = cockpit('collections')->findOne('posts', ['_id' => $form_data['post_id']]);

if (!$post) {
    return $this->stop('{"error": "No post found"}', 412);
}

// create a new comment in the comments collection with the form data
// and create a collectionLink to the post
$comment_data = [
    'parent_id' => $form_data['parent_id'],
    'name' => $form_data['name'],
    'email' => $form_data['email'],
    'body' => $form_data['comment'],
    'notify_replies' => $form_data['notify_replies'],
    'post' => [
        '_id' => $post['_id'],
        'link' => 'posts',
        'display' => $post['title']
    ]
];

$comment = cockpit('collections')->save('comments', $comment_data);

// if this is the first comment on the post, $post['comments'] will be an empty
// string, so change it to an empty array to stop the next line throwing an error
if (!is_array($post['comments'])) {
    $post['comments'] = [];
}

// also add a collectionLink from the post to the new comment
$post['comments'][] = [
    '_id' => $comment['_id'],
    'link' => 'comments',
    'display' => $comment['name']
];

$post = cockpit('collections')->save('posts', $post);

// delete the form entry from the comments form
cockpit('forms')->remove('comments', ['_id' => $form_entry['_id']]);

// redirect to view the comments collection
$this->reroute($this->baseUrl('/collections/entries/comments'));
I've tried to add comments to the above code to explain what's going on, but basically we first find the entry in the comments form by its id (we included it in the url as id=xxx).
Then we find the post that this pending comment belongs to, save the new comment in the comments collection and create a collectionLink to the post.
We then update the post so that it also has a collectionLink to the new comment.
Finally we remove the form entry and redirect to the comments collection page.
We're going to need to access our approve-comment API key in the next file we create. To avoid hard-coding it and potentially accidentally committing it to version control, we'll create a .env file in our Cockpit root directory. Inside this .env file enter:
APPROVE_TOKEN=xxxxxx
SITE_URL=https://cms.yourdomain.com
Make sure to replace the xxxxxx with your actual "approve comment" API key from above and SITE_URL with the URL of your Cockpit site (no trailing slash).
Next create a new file in your Cockpit directory at config/bootstrap.php. Put the following inside:
<?php

// save the form entry and add its _id to the data
$app->on("forms.submit.before", function ($form, &$data, $frm, &$options) use ($app) {
    if ($form === 'comments') {
        // make sure the comment has a valid post_id that exists
        if (isset($data['post_id']) && $post = cockpit('collections')->findOne('posts', ['_id' => $data['post_id']])) {
            $data['post_title'] = $post['title'];
            $entry = cockpit('forms')->save($form, ['data' => $data]);
            $data['id'] = $entry['_id'];
        } else {
            $app->stop('{"error": "No post found"}', 412);
        }
    }
});
This is an event that we hook into before the form is submitted. We first make sure it is the correct form (in our case called comments), then we make sure that the form data we receive has post_id set and that an actual post with that ID exists in our database.
If it does, we add post_title to the data and then save the submission as a new entry. That is why, when we created the comments form above, we made sure to set save form data to false; otherwise it would save the entry twice.
You might be wondering why I'm saving the form entry now when it could have been saved anyway if we had just set save form data to true. The answer is that we need the _id of this entry so we can pass it through to our notification email and use it in our approve endpoint.
So after we save the entry we can retrieve its _id and add $data['id'] to our data, so we can use it in our email template along with the SITE_URL and APPROVE_TOKEN from our .env file.
In Cockpit you can create custom email templates to override the default one. To do this you simply create a new file at config/forms/emails/ with the same name as the form you wish to override.
In our case we need to create one called comments.php; once created, add the following:
@if( isset($data['post_title']) )
A new comment is awaiting approval on <b>{{ $data['post_title'] }}</b>
<br><br>
@endif
@if( isset($data['name']) )
<b>Name:</b>
<br>
<br>{{ htmlspecialchars($data['name'], ENT_QUOTES, 'UTF-8', true) }}
<br>
@endif
@if( isset($data['email']) )
<br><b>Email:</b>
<br>
<br>{{ htmlspecialchars($data['email'], ENT_QUOTES, 'UTF-8', true) }}
<br>
@endif
@if( isset($data['comment']) )
<br><b>Comment:</b>
<br>
<br>{{ htmlspecialchars($data['comment'], ENT_QUOTES, 'UTF-8', true) }}
<br>
@endif
@if( isset($data['id']) )
<br>
<a href="{{ getenv('SITE_URL') }}/api/forms/approve/comments?id={{ $data['id'] }}&token={{ getenv('APPROVE_TOKEN') }}">Click here to approve this comment</a>
<br><br>
or
<br>
@endif
<br>
<a href="{{ getenv('SITE_URL') }}/forms/entries/comments">View and delete it</a>
All we are doing here is using the data to create an email that will tell us who made the comment, the comment itself and allow us to click a link to approve the comment.
It is important to include htmlspecialchars($var, ENT_QUOTES, 'UTF-8', true) to protect ourselves against a comment containing malicious scripts etc.
Now send a POST request to submit the comments form with the data below, making sure you replace post_id with the ID of one of your blog posts; otherwise you won't be able to approve it.
{
    "form": {
        "post_id": "xxxxxxxx",
        "parent_id": null,
        "name": "John Doe",
        "email": "you@example.com",
        "comment": "This is my new comment.",
        "notify_replies": true
    }
}
Also make sure you use one of your real email addresses for the email field and set notify_replies to true, as we will be replying to this comment later.
You can find the ID of one of your blog posts by making a GET request to /api/collections/get/posts?token=xxxx, where xxxx is your posts collection API key. Choose a post and then copy its _id value.
You should receive an email notification that uses the custom template above and includes our approve url.
You can click on the approve url and if successful it should redirect you to the comments collection where you can see the newly created comment.
You'll notice that the comment has the name of the blog post in the post column. This is because we set display to title in the JSON options for the collectionLink.
If you view your posts collection entries you'll see that the comments column has a 1 in it.
So that means our post and comment are linked successfully!
To see this in action you can make a POST request to /api/collections/get/posts?token=xxx with the body set as:
{"sort":{"_created":-1},"populate":1}
The populate option is important, as it tells Cockpit to return and populate relationships one level deep. You should see in the response that each post has a "comments": [] array. If you find the blog post you added the comment to, you should see the comment there.
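If you prefer the terminal over Insomnia/Postman, the same request as a curl sketch (the domain and token are placeholders, so the call itself is left commented out; the first lines just sanity-check the body):

```shell
# Request body: newest posts first, relationships populated one level deep.
body='{"sort":{"_created":-1},"populate":1}'
echo "$body" | python3 -m json.tool > /dev/null && echo "body OK"
# Substitute your Cockpit domain and posts token, then uncomment to fetch:
# curl -X POST "https://cms.yourdomain.com/api/collections/get/posts?token=YOUR-POSTS-TOKEN" \
#      -H "Content-Type: application/json" -d "$body"
```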
If you set populate to -1 it will populate to infinite levels; however, that can cause some issues and errors.
Try setting populate to 0 or removing it and you'll notice that you won't get all fields returned for your comment.
Okay so we can now add a comment using our form and then approve the comment but how about comment replies and notifying the parent comment?
Well, first off we need to create a new custom email template. In config/forms/emails create a new file called notify_reply.php and add the following inside:
@if( isset($data['post_title']) )
Your comment has a new reply on <b>{{ $data['post_title'] }}</b>
<br><br>
@endif
@if( isset($data['name']) )
<b>Name:</b>
<br>
<br>{{ htmlspecialchars($data['name'], ENT_QUOTES, 'UTF-8', true) }}
<br>
@endif
@if( isset($data['comment']) )
<br><b>Comment:</b>
<br>
<br>{{ htmlspecialchars($data['comment'], ENT_QUOTES, 'UTF-8', true) }}
<br>
@endif
@if( $data['post_url'] )
<br>
<a href="{{ $data['post_url'] }}">Click here to view the comment</a>
@endif
Again, be sure to include htmlspecialchars() here! We'll pass all this data through to this template when we actually come to send the email.
Now let's add the actual code that will send an email to the parent comment when it receives a reply and that reply is approved.
Just before we do, open up your .env file and add the following:
FRONTEND_URL=https://yourdomain.com
Note that there is no trailing slash. We'll be using this to create the url for the blog post with its title_slug, e.g. https://yourdomain.com/first-blog-post.
Open up config/api/forms/approve/comments.php and add the following just after the line where we remove the form entry (the snippet ends with its own redirect, which replaces the reroute line we added earlier):
// check if the comment has a valid parent comment that exists
if (isset($comment['parent_id']) && $parent_comment = cockpit('collections')->findOne('comments', ['_id' => $comment['parent_id']])) {

    // check if the parent comment has notify_replies set to true
    if ($parent_comment['notify_replies']) {

        // validate the email for the parent comment
        if ($this->helper('utils')->isEmail($parent_comment['email'])) {

            // use our custom email template for a new reply notification
            if ($template = $this->path("#config:forms/emails/notify_reply.php")) {

                $notify_data = [
                    'post_title' => $post['title'],
                    'name' => $comment['name'],
                    'comment' => $comment['body'],
                    'post_url' => getenv('FRONTEND_URL').'/'.$post['title_slug']
                ];

                $body = $this->renderer->file($template, ['data' => $notify_data], false);

                // send an email notifying the parent comment of a new reply
                try {
                    $response = $this->mailer->mail($parent_comment['email'], "New comment reply on: {$post['title']}", $body);
                } catch (\Exception $e) {
                    $response = $e->getMessage();
                }
            }
        }
    }
}

// display the error if present, otherwise redirect to view the comments collection
return (isset($response) && $response !== true) ? ['error' => $response] : $this->reroute($this->baseUrl('/collections/entries/comments'));
So what we're doing here is first checking whether the comment has a parent_id value set; if it does, and we find a comment with that ID in our database, we then check whether the parent comment had notify_replies set to true.
If it did, then we check if the parent comment's email is valid and if we have a custom template available called notify_reply.php (we do, as we just created it).
Then we pass the data through to the template and attempt to send the email using our mailer.
If you used one of your real email addresses and set notify_replies to true when testing the comment approval above, then we can now try replying to this comment.
So first we need to find the ID of the comment we would like to reply to. To do this, make a GET request like above to your posts collection endpoint, find the post with the comment, then copy the ID of the comment.
Now we can make another form submission with the following data:
{
    "form": {
        "post_id": "xxxxxxxx",
        "parent_id": "xxxxxx",
        "name": "Jane Doe",
        "email": "you@example.com",
        "comment": "This is a reply to my first comment.",
        "notify_replies": true
    }
}
Make sure to use the same post_id as before, and set parent_id to the ID we just copied from the first comment.
You should receive the email notification to confirm or view/delete the comment entry. Once you click approve, an email should be sent to the parent comment's email address letting them know their comment has a new reply.
Let's finish up the backend by adding some server side validation for our comments form.
If you've read the previous post about contact forms you'll know how to do this. Create a new file at config/forms/
called comments.php
(it must have the same name as the one we gave our form).
<?php
// honeypot field
if (isset($data['website'])) {
    // you can save the submission in case it is actually a genuine one, like we did
    // in the last blog post on contact forms; make sure you have a form set up called bots
    cockpit('forms')->save('bots', ['data' => $data]);
    return false;
}
if (empty($data['post_id'])) {
    return false;
}
if (empty($data['name'])) {
    $this->app->stop(['error' => 'The name field is required'], 200);
}
if (!filter_var($data['email'], FILTER_VALIDATE_EMAIL)) {
    $this->app->stop(['error' => 'A valid email is required'], 200);
}
if (empty($data['comment'])) {
    $this->app->stop(['error' => 'The comment field is required'], 200);
}
if (!is_bool($data['notify_replies'])) {
    $this->app->stop(['error' => 'Notify replies must be of type boolean'], 200);
}
return true;
So we simply validate our comment form fields. We're going to use website
as a honeypot field to catch bots, like we did in the previous post. If a spam bot automatically fills in the website field, we return false.
That finishes setting up the Cockpit side of the comment system; let's now look at the Nuxt frontend.
Moving on to Nuxt.js and our frontend, let's first update our Nuxt .env file and add our FORMS_TOKEN
. If you've followed the previous post you should already have this.
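If you haven't, the variable is just your forms API token from Cockpit (the value below is a placeholder):

```shell
# .env in the Nuxt project root — replace with your own token
FORMS_TOKEN=xxxxxxxxxxxxxx
```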
Next open up nuxt.config.js and in the env property add:
env: {
  commentUrl: `${process.env.BASE_URL}/api/forms/submit/comments?token=${process.env.FORMS_TOKEN}`
},
This is the endpoint we'll be posting our comments to.
Note that anything in the env property gets bundled into the client-side JavaScript, so make sure not to include any sensitive API keys here. We obviously need the form endpoint and token to be public, otherwise we won't be able to submit new comments from the frontend.
Open up your _title_slug.vue
page (the individual blog page) and update it to resemble the following:
<template>
  <section>
    <article class="my-8">
      <div class="text-gray-600 font-bold text-sm tracking-wide">
        {{ post._created | toDate }}
        <a v-for="(tag, key) in post.tags" :key="key" :href="'/category/'+tag" class="ml-1">{{ tag }}</a>
      </div>
      <h1 class="mt-2">
        {{ post.title }}
      </h1>
      <div class="mt-4 markdown" v-html="$options.filters.parseMd(post.excerpt + '\n' + post.content)">
      </div>
      <div id="comments" class="mt-8 mb-4 pt-3 border-t-2">
        <h2 class="mb-2">
          Comments
        </h2>
        <comment-form class="border-b-2" :post_id="post._id"/>
      </div>
      <ul>
        <comment
          v-for="comment in comments"
          :key="comment._id"
          :post_id="post._id"
          :all="post.comments"
          :comment="comment"
        />
      </ul>
    </article>
  </section>
</template>
<script>
import CommentForm from '~/components/CommentForm.vue'
import Comment from '~/components/Comment.vue'

export default {
  async asyncData ({ app, params, error, payload }) {
    if (payload) {
      return { post: payload }
    } else {
      let { data } = await app.$axios.post(process.env.POSTS_URL,
        JSON.stringify({
          filter: { published: true, title_slug: params.title_slug },
          sort: { _created: -1 },
          populate: 1
        }),
        {
          headers: { 'Content-Type': 'application/json' }
        })
      if (!data.entries) {
        return error({ message: '404 Page not found', statusCode: 404 })
      }
      return { post: data.entries[0] }
    }
  },
  components: {
    CommentForm,
    Comment
  },
  head () {
    return {
      title: this.post.title,
      meta: [
        { hid: 'description', name: 'description', content: this.post.excerpt }
      ]
    }
  },
  computed: {
    comments: function () {
      return this.post.comments ? this.post.comments.filter(comment => !comment.parent_id) : []
    }
  }
}
</script>
There are a few things to note here. We've got a CommentForm
and a Comment
component that we are yet to make. We pass the comment-form
the current post ID as a prop. We loop over each comment and pass the comment
component the post ID, all the comments for the post and the comment itself.
In the script section we register the Comment and CommentForm components.
We then have a computed property, comments
, which simply returns all the comments for our post that do not have a parent_id
set, i.e. the top-level comments.
At first I had comments set up with a collectionLink relationship to themselves, so comments could have children and a parent. However, I ran into issues whilst fetching the data relating to populate
in the request and the depth it should be carried out to. For example, setting populate: -1
in the request caused timeout errors for me.
So I decided instead to keep it simple and just add a parent_id to any child comment that references the ID of its parent.
That way I can organise the comments correctly in Nuxt by filtering only the parent comments and then recursively finding their children if they have any.
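The filtering described above can be sketched outside Vue with plain JavaScript (hypothetical comment data; the filters mirror the `comments` computed property and the `children` method we'll add to the Comment component):

```javascript
// A flat list of comments, where replies reference their parent via parent_id
const comments = [
  { _id: 'a1', body: 'First!', parent_id: null },
  { _id: 'b2', body: 'A reply', parent_id: 'a1' },
  { _id: 'c3', body: 'Another top-level comment', parent_id: null }
]

// top-level comments are those without a parent_id set
const topLevel = comments.filter(c => !c.parent_id)

// children of a comment are those whose parent_id matches its _id
const children = id => comments.filter(c => c.parent_id === id)

console.log(topLevel.map(c => c._id)) // [ 'a1', 'c3' ]
console.log(children('a1').map(c => c.body)) // [ 'A reply' ]
```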
In your components directory create a new file called CommentForm.vue
and add the following inside:
<template>
  <form @submit="checkForm" method="post" :id="parent_id ? `reply-${parent_id}` : ''">
    <div class="flex flex-col md:flex-row mb-4">
      <div class="w-full md:w-1/2 md:mr-2">
        <input v-model="name" type="text" name="name" placeholder="Your Name" class="block bg-gray-200 mt-2 rounded w-full py-2 px-3">
      </div>
      <div class="w-full md:w-1/2 md:ml-2">
        <input v-model="email" type="email" name="email" placeholder="Your Email" class="block bg-gray-200 mt-2 rounded w-full py-2 px-3">
      </div>
    </div>
    <div class="mb-4">
      <textarea v-model="comment" name="comment" rows="6" :placeholder="parent_id ? `Reply to ${parent_name}...` : 'Add a comment'" class="bg-gray-200 rounded resize-none w-full h-20 py-2 px-3">
      </textarea>
    </div>
    <div class="mb-4">
      <input v-model="notify_replies" class="mr-2" type="checkbox">
      <span class="text-sm">
        Notify me when anyone replies
      </span>
    </div>
    <input type="text" name="website" v-model="website" class="hidden opacity-0 z-0" tabindex="-1" autocomplete="off">
    <div class="mb-4">
      <input type="submit" value="Add Comment" :class="{ 'cursor-not-allowed opacity-50': loading }" class="cursor-pointer bg-blue-500 hover:bg-blue-400 text-white font-bold py-2 px-4 border-b-4 border-blue-600 hover:border-blue-500 rounded">
    </div>
    <div v-if="errors.length" class="mb-4 text-red-500">
      <b>Please correct the following error(s):</b>
      <ul>
        <li v-for="error in errors" :key="error">
          {{ error }}
        </li>
      </ul>
    </div>
    <div v-if="success" class="text-green-500 mb-4">
      <b>Your comment is currently awaiting moderation</b>
    </div>
  </form>
</template>
<script>
import axios from 'axios'

export default {
  name: 'commentForm',
  props: {
    post_id: String,
    parent_id: String,
    parent_name: String
  },
  data: function () {
    return {
      errors: [],
      name: null,
      email: null,
      comment: null,
      notify_replies: false,
      website: null,
      loading: false,
      success: false
    }
  },
  methods: {
    checkForm: function (e) {
      this.errors = []
      this.success = false
      if (!this.name) {
        this.errors.push('Name required')
      }
      if (!this.email) {
        this.errors.push('Email required')
      } else if (!this.validEmail(this.email)) {
        this.errors.push('Valid email required')
      }
      if (!this.comment) {
        this.errors.push('Comment required')
      }
      if (!this.errors.length) {
        this.submitForm()
      }
      e.preventDefault()
    },
    submitForm: function () {
      this.loading = true
      axios.post(process.env.commentUrl,
        JSON.stringify({
          form: {
            post_id: this.post_id,
            parent_id: this.parent_id,
            name: this.name,
            email: this.email,
            comment: this.comment,
            notify_replies: this.notify_replies,
            website: this.website // honeypot field
          }
        }),
        {
          headers: { 'Content-Type': 'application/json' }
        })
        .then(({ data }) => {
          this.loading = false
          if (data.error) {
            this.errors.push(data.error)
          } else if (data.name && data.email && data.comment) {
            this.name = this.email = this.comment = null
            this.success = true
          }
        }).catch(error => {
          this.loading = false
          this.errors.push('An error occurred, please try again later')
        })
    },
    validEmail: function (email) {
      let re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/
      return re.test(email)
    }
  }
}
</script>
The first thing to note is that this form is very similar to the contact form we did in the previous post.
If the component has a parent_id
prop passed to it, we add an id to the form; you'll see why later. We also check for parent_id
when setting the placeholder for the comment textarea: if there is a parent, we reference the parent's name.
We need to import axios here because we're now calling it on the client side, so we can't use app.$axios
as we did in the asyncData
function.
The form has some simple client-side validation like our contact form, and also the same honeypot field called website.
If the form submission has any errors we display them and if it's successful we display a success message.
Now onto the Comment component, create a new file in the components directory called Comment.vue
and add the following:
<template>
  <li class="mb-4" :class="!parent ? 'border-b-2' : ''">
    <div ref="parent">
      <div class="text-gray-600 text-sm mb-2">
        <span class="text-gray-800 font-semibold">
          {{ comment.name }}
        </span>
        <span class="mx-1 text-xs">•</span>
        {{ comment._created | toDate }}
        <span v-if="parent">
          <svg xmlns="http://www.w3.org/2000/svg" viewBox="-5 -5 24 24" width="12" height="12" preserveAspectRatio="xMinYMin" class="inline-block text-gray-600 fill-current">
            <path d="M10.586 5.657l-3.95-3.95A1 1 0 0 1 8.05.293l5.657 5.657a.997.997 0 0 1 0 1.414L8.05 13.021a1 1 0 1 1-1.414-1.414l3.95-3.95H1a1 1 0 1 1 0-2h9.586z"></path>
          </svg>
          {{ parent.name }}
        </span>
      </div>
      <div class="comment text-gray-800 text-base" v-html="$options.filters.parseMd(comment.body)"></div>
      <div class="text-gray-600 text-sm mt-2 mb-4 cursor-pointer" @click="toggleReply">
        <span v-if="replyOpen">Cancel</span>
        <span v-else>Reply</span>
      </div>
    </div>
    <ul class="ml-10 comment-list" v-if="children(comment._id).length">
      <comment
        v-for="child in children(comment._id)"
        :key="child._id"
        :post_id="post_id"
        :all="all"
        :comment="child"
        :parent="comment"
      />
    </ul>
  </li>
</template>
<script>
import Vue from 'vue'
import CommentForm from '~/components/CommentForm.vue'

export default {
  name: 'comment',
  props: {
    post_id: String,
    all: Array,
    comment: Object,
    parent: Object
  },
  data: function () {
    return {
      replyOpen: false
    }
  },
  methods: {
    children: function (parent_id) {
      return this.all.filter(comment => comment.parent_id === parent_id)
    },
    toggleReply: function () {
      if (!this.replyOpen) {
        let ComponentClass = Vue.extend(CommentForm)
        let instance = new ComponentClass({
          propsData: {
            post_id: this.post_id,
            parent_id: this.comment._id,
            parent_name: this.comment.name
          }
        })
        instance.$mount()
        this.$refs.parent.appendChild(instance.$el)
        this.replyOpen = true
      } else {
        // remove the reply form from the DOM
        let form = document.getElementById(`reply-${this.comment._id}`)
        if (form) {
          this.$refs.parent.removeChild(form)
          this.replyOpen = false
        }
      }
    }
  }
}
</script>
This component is a little more complex than the CommentForm one. At the top in the li
tag we check if the comment has a parent. If it doesn't then we add a border to the bottom, just to add some separation between top level comments.
We then display the comment author's name, the date it was made (approved in our case) and the body of the comment.
We will be sanitizing the comment body shortly as it is not safe to use v-html on unsanitized user inputted data. A malicious actor could easily include javascript code on our site.
If, when you set up the comments collection earlier in this post, you chose not to support markdown and set the comment body field type as a textarea, then you do not need to pass comment.body
through v-html or $options.filters.parseMd()
.
We then have a div with Reply
or Cancel
depending on whether someone has clicked and opened a new comment form for that particular comment.
Finally we have a section for any child comments, hence this being a recursive component. We include the component again inside itself if the current comment has any children.
We loop over the comment's children and pass through the necessary props, again passing down the all
comments array, the post_id
and the parent
comment.
The method we have called children simply filters the all
comments prop and returns any comments that have the current comment's ID set as their parent_id
.
Now for the interesting part: handling comment replies. I needed a way to make sure the parent_id
value was passed to the comment form if we were replying to a comment; that way we can identify which comment the reply belongs to.
You may have noticed that we imported Vue
and CommentForm
; this is so we can use them in the toggleReply
method. In this method we first check whether the replyOpen
variable is set to false (i.e. the reply form is not active).
We then use Vue.extend to create a "subclass" of the base Vue constructor, passing in our CommentForm component. Next we create a new instance of this class and pass it the relevant props, including the parent_id
which is the ID of the current comment. Then we mount this without passing through any mount point.
The reason we do not pass any mount point is because we want to insert it into the DOM ourselves. The Vue docs state that:
If elementOrSelector argument is not provided, the template will be rendered as an off-document element, and you will have to use native DOM API to insert it into the document yourself.
So now we can insert this template by calling this.$refs.parent.appendChild(instance.$el)
where parent
is a reference we added to a div at the top of the comment component like so ref="parent"
.
Now when we click on Reply
the toggleReply function will be called and it will append a new instance of our CommentForm component to the end of this div.
If replyOpen
is set to true then Cancel
will be displayed instead of Reply
and we will run the else portion of toggleReply
. Here we simply find the comment form by its id, reply-${this.comment._id}
, then use the parent ref again to remove the comment form from the DOM and set replyOpen back to false.
As I mentioned above you cannot simply pass user inputted data through v-html as it will be rendered as actual html on the page. So if a user made a comment with this content:
<script>alert('Hello');</script>
And we approved it, then whenever anybody visited the blog post with that comment on they would get an alert popup! You can read more about XSS attacks here.
To protect against this, we could either not run any user input through v-html (but then our markdown support wouldn't work) or sanitize the data before displaying it on the page.
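To see why raw user input is dangerous in v-html, here's a naive illustrative escaper (a sketch only, not what we'll use — the post uses the sanitize-html package below, which strips dangerous tags while keeping the safe markup our markdown needs):

```javascript
// Replace the characters HTML treats specially with their entity equivalents,
// so user input is rendered as text rather than executed as markup.
const escapeHtml = str =>
  str.replace(/[&<>"']/g, ch => ({
    '&': '&amp;', '<': '&lt;', '>': '&gt;', '"': '&quot;', "'": '&#39;'
  }[ch]))

console.log(escapeHtml("<script>alert('Hello');</script>"))
// &lt;script&gt;alert(&#39;Hello&#39;);&lt;/script&gt;
```

Escaping everything like this would neutralise the attack but also break the HTML that marked produces, which is why a real sanitizer is the better fit here.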
I tried a few different HTML sanitizers and in the end settled on Sanitize HTML.
Open up the terminal in your Nuxt root and run:
npm install sanitize-html --save-dev
Now that we've got it installed we need to use it, so open your filters.js file inside the plugins directory. Add the following to the top of the file:
const sanitizeHtml = require('sanitize-html')
and then update the parseMd
filter:
Vue.filter('parseMd', function (content) {
  let clean = sanitizeHtml(content)
  return marked(clean)
})
So all we're doing here is first passing through the content to sanitizeHtml
and then passing the cleaned content to marked
to parse the markdown.
If you want to test your sanitization is working as it should be try posting a comment with the content from this xss-payload-list.
SanitizeHtml seems to cope with this XSS payload well and mitigates all attempted attacks.
Let's add a tiny bit of css for our comments, so update your main.css to the following:
@tailwind base;
@tailwind components;

a {
  @apply text-blue-400;
}
.content {
  width: 50rem;
}
.markdown p {
  @apply mt-0 mb-6;
}
.markdown ul {
  @apply mb-6;
}
.markdown pre {
  @apply my-8;
}
.comment {
  @apply whitespace-pre-wrap;
}
.comment p {
  @apply mb-4 inline-block;
}
.comment p:last-of-type {
  @apply mb-0;
}
.comment pre {
  @apply my-4;
}
.comment pre:last-of-type {
  @apply mb-0;
}
.comment p:last-child {
  @apply mb-0;
}

/* purgecss start ignore */
table {
  @apply overflow-auto w-full;
}
table tr {
  @apply bg-white border-t border-gray-400;
}
table th, table td {
  @apply border border-gray-400 py-2 px-4;
}
.search-results em {
  @apply not-italic bg-blue-200;
}
/* purgecss end ignore */

@tailwind utilities;
The whitespace-pre-wrap will help make sure the comments display correctly on the page.
With our collectionLink between a post and its comments, if you delete a comment we would like the deleted comment to be "unlinked" from the post.
This doesn't seem to happen by default so we need to add a collections.remove.before.comments
hook to do it for us.
So in config/bootstrap.php
add the following code:
$app->on("collections.remove.before.comments", function ($name, &$criteria) use ($app) {
    // find the comment using its id
    $comment = cockpit('collections')->findOne('comments', ['_id' => $criteria['_id']]);
    if (isset($comment['post']['_id'])) {
        // find the post it is currently linked to
        $post = cockpit('collections')->findOne('posts', ['_id' => $comment['post']['_id']]);
        if (isset($post['comments']) && is_array($post['comments'])) {
            $comment_ids = array_column($post['comments'], '_id');
            $key = array_search($comment['_id'], $comment_ids);
            // only unset if the comment was actually found in the linked list
            if ($key !== false) {
                unset($post['comments'][$key]);
                cockpit('collections')->save('posts', $post);
            }
        }
    }
});
All we are doing here is finding the comment we're about to delete, then finding the post it belongs to and removing the link by unsetting the corresponding array item in the $post['comments']
array.
Now we can also do the reverse, i.e. unlink all comments (or just delete them if we want) for a post when the post is deleted.
So add the following below the above:
$app->on("collections.remove.before.posts", function ($name, &$criteria) use ($app) {
    // find the post using its id
    $post = cockpit('collections')->findOne('posts', ['_id' => $criteria['_id']]);
    if (isset($post['comments']) && is_array($post['comments'])) {
        // loop over each linked comment
        foreach ($post['comments'] as $item) {
            $comment = cockpit('collections')->findOne('comments', ['_id' => $item['_id']]);
            // set the post to an empty string
            $comment['post'] = "";
            cockpit('collections')->save('comments', $comment);
        }
    }
});
Now this will simply unlink the comments but not delete them; if you'd rather delete them, update the loop to this:
// loop over each linked comment
foreach ($post['comments'] as $item) {
    // delete each linked comment
    cockpit('collections')->remove('comments', ['_id' => $item['_id']]);
}
To add a little comment count to the top of each post, update the post header markup in the following files — index.vue
, _page.vue
and _tag.vue
— to the following:
<div class="text-gray-600 font-bold text-sm tracking-wide">
  {{ post._created | toDate }}
  <span class="ml-1 text-xs">•</span>
  <a v-for="tag in post.tags" :key="tag" :href="'/category/'+tag" class="ml-1">#{{ tag }}</a>
  <span class="mx-1 text-xs">•</span>
  <span>
    {{ post.comments ? post.comments.length : 0 }}
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="-2 -2 24 24" width="12" height="12" preserveAspectRatio="xMinYMin" class="inline-block text-gray-600 fill-current">
      <path d="M3 .565h14a3 3 0 0 1 3 3v8a3 3 0 0 1-3 3h-6.958l-6.444 4.808A1 1 0 0 1 2 18.57v-4.006a2 2 0 0 1-2-2v-9a3 3 0 0 1 3-3z"></path>
    </svg>
  </span>
</div>
and then in _title_slug.vue
to this so we can click the comment count and be taken straight to the comment section:
<div class="text-gray-600 font-bold text-sm tracking-wide">
  {{ post._created | toDate }}
  <span class="ml-1 text-xs">•</span>
  <a v-for="tag in post.tags" :key="tag" :href="'/category/'+tag" class="ml-1">#{{ tag }}</a>
  <span class="mx-1 text-xs">•</span>
  <a href="#comments" class="text-gray-600">
    {{ post.comments ? post.comments.length : 0 }}
    <svg xmlns="http://www.w3.org/2000/svg" viewBox="-2 -2 24 24" width="12" height="12" preserveAspectRatio="xMinYMin" class="inline-block text-gray-600 fill-current">
      <path d="M3 .565h14a3 3 0 0 1 3 3v8a3 3 0 0 1-3 3h-6.958l-6.444 4.808A1 1 0 0 1 2 18.57v-4.006a2 2 0 0 1-2-2v-9a3 3 0 0 1 3-3z"></path>
    </svg>
  </a>
</div>
It should now look a little like this.
This is only a basic example of a comment system and it could definitely be greatly improved but hopefully it gives you some ideas on what you can do with Cockpit.
Now whenever a new comment is approved Cockpit will automatically fire our rebuild webhook from part 3 of this series and run npm run generate
again for our site!
If you notice any problems or can think of any improvements for this post feel free to add a comment or open an issue on Github.
You can check out the GitHub repo of the finished blog here.
Also I've just launched a live demo of this site on Netlify - https://nuxt-cockpit-static-blog.netlify.com
There are a number of different ways you can go about handling forms on static sites.
Cockpit comes with its own solution that can help us add forms to our site, using simple API POST requests to handle submissions.
Submissions made through the API can be viewed in the Cockpit dashboard, and Cockpit can also notify you via email when a new submission is made.
You can read more about Cockpit forms in the documentation here.
First things first we need to generate a new API token in Cockpit and also make sure it only has permissions to hit the forms endpoint.
So head over to Cockpit and go to Settings then API Access. Click the little plus icon to add a new key and add the following to the rules field: /api/forms/submit/*
.
Now this key will only be able to perform form submissions.
In order for Cockpit to handle submissions we first need to create a new form. So from the dashboard click on forms and then click 'Create one'.
Give it a name like contact
and a label of Contact Form
. Add your email if you wish to be notified when new submissions are made. Turn on 'Save form data' if you would like to be able to view submission entries from Cockpit.
Now that we've created our contact form in Cockpit we can test it by sending a POST request to the right endpoint.
Before we can test out our form we need to update our Mailer config and add SMTP details for the email address you'd like to use.
If you didn't enter an email when creating the form above you can skip adding SMTP details.
Add the following inside config.yaml:
# use smtp to send emails
mailer:
    from       : you@example.com
    transport  : smtp
    host       : smtp.myhost.tld
    user       : you@example.com
    password   : yourpassword
    port       : 587
    auth       : true
    encryption : tls # '', 'ssl' or 'tls'
If you have Postman or Insomnia installed you can easily send a POST request to your Cockpit endpoint.
The endpoint we need to use is cms.yourdomain.com/api/forms/submit/contact?token=xxx
where token is the API Key we created above to use for our form.
Make sure to set a header with Content-Type as application/json
and then set the request body as the following JSON:
{
  "form": {
    "name": "John Doe",
    "email": "johndoe@example.com",
    "message": "This is the message body!"
  }
}
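If you'd rather use the command line than Postman or Insomnia, the same request can be sent with curl (the domain and token below are placeholders — substitute your own):

```shell
curl -X POST "https://cms.yourdomain.com/api/forms/submit/contact?token=xxx" \
  -H "Content-Type: application/json" \
  -d '{"form":{"name":"John Doe","email":"johndoe@example.com","message":"This is the message body!"}}'
```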
The response returned if all was successful should just be the new form entry:
{
  "name": "John Doe",
  "email": "johndoe@example.com",
  "message": "This is the message body!"
}
If an error occurred (e.g. forgetting to update config.yaml) then the response will look something like this:
{
  "error": "Invalid address: (From): root@localhost",
  "data": {
    "name": "John Doe",
    "email": "johndoe@example.com",
    "message": "This is the message body!"
  }
}
If you head to cms.yourdomain.com/forms/entries/contact
you should now see the entry we just submitted via the API. You should also have received an email with the form submission details.
Open up your .env file for Nuxt and add a new variable called FORMS_TOKEN.
FORMS_TOKEN=xxxxxxxxxxxxxx
Now we also need to update the env property in our nuxt.config.js. Add the following anywhere inside module.exports = { ... }
env: {
  contactUrl: `${process.env.BASE_URL}/api/forms/submit/contact?token=${process.env.FORMS_TOKEN}`
},
As mentioned in the previous post, we need to do this because we will be making requests to contactUrl on the client side, which means this variable must be bundled up in our js files.
Make sure you also update your create-env.js if deploying to Netlify. Also update your environment variables in Netlify.
const fs = require('fs')
fs.writeFileSync('./.env', `
BASE_URL=${process.env.BASE_URL}\n
POSTS_URL=${process.env.POSTS_URL}\n
URL=${process.env.URL}\n
PER_PAGE=${process.env.PER_PAGE}\n
SEARCH_URL=${process.env.SEARCH_URL}\n
FORMS_TOKEN=${process.env.FORMS_TOKEN}
`)
Now that we know our contact form is working as expected we can go and set it up in our blog.
First we'll just update our PageNav.vue component to add a link to the new page:
<template>
  <nav class="text-center my-4">
    <a href="/" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">Blog</a>
    <a href="/about" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">About</a>
    <a href="/search" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">Search</a>
    <a href="/contact" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">Contact</a>
  </nav>
</template>
Then create a new file in the pages directory called contact.vue
and put the following inside.
<template>
  <section class="my-8">
    <div class="text-center">
      <h1 class="mb-6">Contact Page</h1>
      <p class="mb-8">
        This is a basic contact form working with Cockpit CMS!
      </p>
    </div>
    <form @submit="checkForm" method="post">
      <div class="mb-4">
        <label for="name">Name:</label>
        <input v-model="name" type="text" name="name" placeholder="Your Name" class="block mt-2 bg-gray-200 rounded w-full py-2 px-3">
      </div>
      <div class="mb-4">
        <label for="mail">Email:</label>
        <input v-model="email" type="email" name="email" placeholder="Your Email" class="block mt-2 bg-gray-200 rounded w-full py-2 px-3">
      </div>
      <div class="mb-4">
        <label for="msg">Message:</label>
        <textarea v-model="message" name="message" placeholder="Your Message" class="block mt-2 bg-gray-200 rounded w-full py-2 px-3"></textarea>
      </div>
      <div class="mb-4">
        <input type="submit" value="Send message" :class="{ 'cursor-not-allowed opacity-50': loading }" class="cursor-pointer bg-blue-500 hover:bg-blue-400 text-white font-bold py-2 px-4 border-b-4 border-blue-600 hover:border-blue-500 rounded">
      </div>
      <div v-if="errors.length" class="mb-4 text-red-500">
        <b>Please correct the following error(s):</b>
        <ul>
          <li v-for="error in errors" :key="error">
            {{ error }}
          </li>
        </ul>
      </div>
      <div v-if="success" class="text-green-500">
        <b>Your message has been sent successfully</b>
      </div>
    </form>
  </section>
</template>
<script>
import axios from 'axios'

export default {
  head () {
    return {
      title: 'Contact',
      meta: [
        { hid: 'description', name: 'description', content: 'This is the contact page!' }
      ]
    }
  },
  data: function () {
    return {
      errors: [],
      name: null,
      email: null,
      message: null,
      loading: false,
      success: false
    }
  },
  methods: {
    checkForm: function (e) {
      this.errors = []
      this.success = false
      if (!this.name) {
        this.errors.push('Name required')
      }
      if (!this.email) {
        this.errors.push('Email required')
      } else if (!this.validEmail(this.email)) {
        this.errors.push('Valid email required')
      }
      if (!this.message) {
        this.errors.push('Message required')
      }
      if (!this.errors.length) {
        this.submitForm()
      }
      e.preventDefault()
    },
    submitForm: function () {
      this.loading = true
      axios.post(process.env.contactUrl,
        JSON.stringify({
          form: {
            name: this.name,
            email: this.email,
            message: this.message
          }
        }),
        {
          headers: { 'Content-Type': 'application/json' }
        })
        .then(({ data }) => {
          this.loading = false
          if (data.error) {
            this.errors.push(data.error)
          } else if (data.name && data.email && data.message) {
            this.name = this.email = this.message = null
            this.success = true
          }
        }).catch(error => {
          this.loading = false
          this.errors.push('An error occurred, please try again later')
        })
    },
    validEmail: function (email) {
      let re = /^(([^<>()\[\]\\.,;:\s@"]+(\.[^<>()\[\]\\.,;:\s@"]+)*)|(".+"))@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\])|(([a-zA-Z\-0-9]+\.)+[a-zA-Z]{2,}))$/
      return re.test(email)
    }
  }
}
</script>
As you can see we have some basic fields for our form and on submitting the form we perform some client side validation following the example in the Vue documentation here.
If there are no client side validation errors then we call the submitForm method and use axios to make a POST request to our contactUrl
endpoint. Then we simply display some text with a success message or an error if there is one present.
If no error is present we check if an entry has been returned with name, email and message details (this is what happens when a form is successfully submitted).
You can fire up your local site using npm run dev
and test this contact form out. You should receive an email notification and be able to see the entry in Cockpit.
At the moment we only have validation for our form fields on the client side, which can easily be circumvented, so we also need to add validation for our fields in Cockpit.
We can add custom validation for our form fields in Cockpit by creating a new file with the same name as our form (in our case contact
) in the config/forms directory. You will need to create the forms directory first.
Then make a new file called contact.php
and put the following inside:
<?php
if (empty($data['name'])) {
    $this->app->stop(['error' => 'The name field is required'], 200);
}
if (!filter_var($data['email'], FILTER_VALIDATE_EMAIL)) {
    $this->app->stop(['error' => 'A valid email is required'], 200);
}
if (empty($data['message'])) {
    $this->app->stop(['error' => 'The message field is required'], 200);
}
return true;
The form data is available in the $data variable. This is just simple validation for the sake of example.
I initially had return false;
inside each of the above validation checks, however it didn't give any information to the client about why the validation had failed. Instead we're stopping Cockpit and returning an error message with more details. You can return a 412 status code or something else if you like and handle these responses in axios' catch()
if you'd prefer.
To test out if this validation is working on the server we need to send a POST request using Postman/Insomnia with name
set to null.
If you don't have Postman or Insomnia, just comment out the following in contact.vue to temporarily disable the client side validation, then submit the form on the front end without setting a value for the name field:
checkForm: function (e) {
  this.errors = []
  this.success = false
  /* if (!this.name) {
    this.errors.push("Name required")
  }
  if (!this.email) {
    this.errors.push('Email required')
  } else if (!this.validEmail(this.email)) {
    this.errors.push('Valid email required')
  }
  if (!this.message) {
    this.errors.push("Message required")
  } */
  if (!this.errors.length) {
    this.submitForm()
  }
  e.preventDefault()
},
Now if you've added the contact.php file correctly you should notice that the response is returned with an error message if validation fails on the server. You shouldn't receive a notification email and there should not be a new entry visible in Cockpit.
If you have any kind of contact form on your site it is very likely you will have received spam from automated bots.
To help prevent this you can add a Google reCAPTCHA to your site/form.
If you'd rather not use reCAPTCHA another simple method is available known as a Honeypot trap.
The idea is that you add a hidden text field or checkbox to your form that the user cannot see. A bot filling out the form will also fill out this hidden field. In our server-side validation we can check whether the hidden field has been filled out (or the checkbox ticked), and if it has we simply return false from our contact.php
script.
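For illustration, the decision can be sketched in plain JavaScript (the function name isLikelyBot is hypothetical; the real check will live in contact.php):

```javascript
// Illustrative only: mirrors the honeypot check our PHP script will perform.
function isLikelyBot (data) {
  // a real user never sees the hidden "website" field, so any value means a bot
  return data.website != null && data.website !== ''
}
```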
Let's add a really simple honeypot field to our form. Above the input button add this new field:
<input type="text" name="website" v-model="website" class="hidden opacity-0 z-0" tabindex="-1" autocomplete="off">
We've given it a real-looking name and set it to display: none, with 0 opacity and a z-index of 0. We've also set tabindex to -1 to prevent the user selecting the field by pressing tab, and set autocomplete to off to prevent a user's browser accidentally autocompleting and filling in the field.
Make sure to add website to the page's data:
data: function () {
return {
errors: [],
name: null,
email: null,
message: null,
website: null,
loading: false,
success: null
}
},
Also add it when posting the request to Cockpit:
submitForm: function () {
this.loading = true
axios.post(process.env.contactUrl,
JSON.stringify({
form: {
name: this.name,
email: this.email,
message: this.message,
website: this.website
}
}),
{
headers: { 'Content-Type': 'application/json' }
})
.then(({ data }) => {
this.loading = false
if(data.error){
this.errors.push(data.error)
} else if(data.name && data.email && data.message) {
this.name = this.email = this.message = null
this.success = true
}
}).catch(error => {
this.loading = false
this.errors.push('An error occurred, please try again later')
})
},
Now all that's left to do is to update contact.php
in the config/forms directory.
<?php
if (isset($data['website'])) {
return false;
}
if (empty($data['name'])) {
$this->app->stop(['error' => 'The name field is required'], 200);
}
if (!filter_var($data['email'], FILTER_VALIDATE_EMAIL)) {
$this->app->stop(['error' => 'A valid email is required'], 200);
}
if (empty($data['message'])) {
$this->app->stop(['error' => 'The message field is required'], 200);
}
return true;
We just add a check for the new website honeypot field: if it is set to anything other than null the submission will fail validation and be rejected. We're just returning false here instead of a validation error message, but you can add one if you like.
The only potential downside to this method of spam prevention is that a real user could somehow manage to accidentally fill in the website field and have their legitimate submission rejected.
To make sure we don't lose a genuine submission we should log or save all entries that fail the honeypot test. That way we can periodically check which submissions were rejected and see if any are authentic.
One way we could do this is by creating a new form called bots
without setting an email and without enabling the 'save form data' option.
Then we can just update our custom validation for contact at config/forms/contact.php
and add the following:
<?php
if (isset($data['website'])) {
// save the submission in case it is actually a genuine one
cockpit('forms')->save('bots', ['data' => $data]);
return false;
}
if (empty($data['name'])) {
$this->app->stop(['error' => 'The name field is required'], 200);
}
if (!filter_var($data['email'], FILTER_VALIDATE_EMAIL)) {
$this->app->stop(['error' => 'A valid email is required'], 200);
}
if (empty($data['message'])) {
$this->app->stop(['error' => 'The message field is required'], 200);
}
return true;
Now if you send a POST request to your contact form and make sure to set website as some value then you should see the submission is saved in your bots form entries at cms.yourdomain.com/forms/entries/bots
.
This obviously doesn't protect against a bot sending direct POST requests to our form's endpoint and omitting the website field, but it should be fine for most situations.
If you want to make sure that ONLY the fields you want can be posted to your form then you can add something like this to your validation:
foreach($data as $field => $value){
if(!in_array($field, ['website', 'name', 'email', 'message'])){
return false;
}
}
Now if any additional field is added or sent the validation will fail.
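The same whitelist idea, sketched in JavaScript for illustration (the field list matches our form; hasOnlyAllowedFields is a hypothetical name):

```javascript
// Only these fields may appear in a submission; anything else fails validation.
const allowedFields = ['website', 'name', 'email', 'message']

function hasOnlyAllowedFields (data) {
  return Object.keys(data).every(field => allowedFields.includes(field))
}
```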
You can always change the name of the honeypot field or update it to a checkbox if you notice spam coming through.
You should now have a contact form with client + server side validation and basic spam bot protection that looks like this:
You can check out the GitHub repo of the finished blog here and see a live demo of the site on Netlify here - https://nuxt-cockpit-static-blog.netlify.com
Typically when searching for something you would submit a form to a backend, which would query a database and return the results.
If you want to implement live searching you need to send requests in real time as the user is typing so results can be displayed almost immediately.
Since our site is just a static blog made up of plain old HTML, CSS and Javascript we'll need to send requests elsewhere to get our search results.
Luckily for us Cockpit has a full-text search addon called Detektivo. To install this addon you simply need to add the files to your Cockpit CMS directory under addons/Detektivo.
You can do this by running the following commands from the command line.
cd /path/to/your/cms-yourblog/
cd addons
git clone https://github.com/agentejo/Detektivo.git
Detektivo supports a few different engines: Algolia, ElasticSearch and TNTSearch. We will be using Algolia here, so visit the website and create an account (it has a great free tier).
Once you've created your Algolia account you can get your Application ID and Admin API Key. We need the Admin Key and not the Search-Only Key as we will be using it to add/update index records to be searched.
Now we need to update our config.yaml file in Cockpit. So go to settings and then click settings again and you should see a text editor.
Add the following inside:
# Search settings
detektivo:
engine: algolia
app_id: <YOUR-APP-ID>
api_key: <YOUR-API-KEY>
collections:
posts: [title, title_slug, excerpt]
Under collections you can see 'posts', this is in reference to our posts collection. The array containing title, title_slug and excerpt are the fields in this collection that we wish to be included in our index in Algolia.
Read more about record size limits here - https://www.algolia.com/doc/faq/basics/is-there-a-size-limit-for-my-index-records/
It is probably best to do just the title, title_slug and excerpt fields to be on the safe side.
Whilst logged in to Algolia create a new index and call it posts
. There won't be any records here yet as we have not added them.
In Algolia you can add records in three different ways; manually, by file upload or via the API.
We'll be using the API which is why we needed to add our Admin Key in our Cockpit configuration.
If you head back over to Cockpit and click on the menu you should see a new item under 'DETEKTIVO' called Manage Index, click on it.
You'll see your posts collection is listed because you added it in config.yaml. The number 4 refers to the number of fields that will be indexed. (This will be 3 if you just have title, title_slug and excerpt).
If you now click the refresh icon to Re-Index the posts they will become visible in Algolia.
You can edit the configuration for this index by going to - www.algolia.com/apps/<YOUR-APP-ID>/explorer/configuration/posts/searchable-attributes
Here you can add things like searchable attributes, rankings and set up result highlighting.
You may wish to update the Search behavior > Retrieved attributes to be just the title and title_slug so that the response will be smaller and easier to read.
Now that we have some records in Algolia that can be searched we can make a GET request using Postman or Insomnia or even by just visiting the URL in the browser.
The endpoint we'll be using with Detektivo will be cms.yourdomain.com/api/detektivo/collection/posts?token=<COCKPIT-SEARCH-API-KEY>&q={searchterm}
Where <COCKPIT-SEARCH-API-KEY>
is a key we've yet to create.
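As a rough sketch, the client will end up assembling this endpoint like so (domain and key are placeholders, and searchUrl is just an illustrative helper):

```javascript
// Placeholders: swap in your real domain and restricted Cockpit key.
const base = 'https://cms.yourdomain.com/api/detektivo/collection/posts'
const token = 'COCKPIT-SEARCH-API-KEY'

function searchUrl (term) {
  // encodeURIComponent keeps spaces and special characters URL-safe
  return `${base}?token=${token}&q=${encodeURIComponent(term)}`
}
```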
The great thing about the Detektivo addon is that each time you add/update/delete a post it automatically updates our posts index at Algolia for us.
We need to create a new API key in Cockpit however we need to make sure it only has permissions to perform searches on our posts collection and nothing else. That is because the key will be public and exposed in each request made.
We MUST NOT use our MASTER API-KEY or any other key we've previously created in Cockpit.
So head over to Settings then API Access in Cockpit and click the little plus icon to generate a new API Key.
Make sure to add /api/detektivo/collection/posts
in the rules section like so:
This rule means only requests made to that endpoint with the key will be authorised.
Now if you make a get request to the endpoint mentioned above with a search term you know exists in the title of one of your posts you should see some results returned.
Open up your .env file for Nuxt and add a new variable called SEARCH_URL.
SEARCH_URL=https://cms.yourdomain.com/api/detektivo/collection/posts?token=*COCKPIT-SEARCH-API-KEY*&q=
Now we also need to update our nuxt.config.js and add an env property. Add the following anywhere inside module.exports = { ... }
env: {
searchUrl: process.env.SEARCH_URL
},
The reason we need to do this is because we will be making requests to our searchUrl on the client side which means we need to have this variable bundled up in our js files.
Now we will be able to access the searchUrl variable even after our site has been generated. Don't worry, the token we are using is the restricted search-only Cockpit key, so nobody will be able to delete or edit our posts etc.
Make sure you also update your create-env.js if deploying to Netlify.
const fs = require('fs')
fs.writeFileSync('./.env', `
BASE_URL=${process.env.BASE_URL}\n
POSTS_URL=${process.env.POSTS_URL}\n
URL=${process.env.URL}\n
PER_PAGE=${process.env.PER_PAGE}\n
SEARCH_URL=${process.env.SEARCH_URL}
`)
First we'll just update our PageNav.vue component to add a link to the new page:
<template>
<nav class="text-center my-4">
<a href="/" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">Blog</a>
<a href="/about" class="p-2 text-sm sm:text-lg p-2 inline-block text-gray-800 hover:underline">About</a>
<a href="/search" class="p-2 text-sm sm:text-lg p-2 inline-block text-gray-800 hover:underline">Search</a>
</nav>
</template>
In the pages directory of your blog add a new file called search.vue
and put the following inside it:
<template>
<section class="my-8">
<div class="text-center">
<h1 class="mb-6">Search Page</h1>
<p>
This is a live search example using Algolia and Cockpit!
</p>
<div class="my-8">
<input type="text" name="searchTerm" v-model="searchTerm" placeholder="Search Posts..." class="text-center block mt-2 bg-gray-200 rounded w-full py-2 px-3">
<div v-if="results.length !==0" class="search-results">
<a v-for='result in results' :key="result.title_slug" :href="'/'+result.title_slug" class="block text-gray-800 p-3 text-left">
{{ result.title }}
</a>
</div>
<div v-else-if="searchTerm.length >= 3">
<span class="block text-gray-800 p-3 text-left">
No results found
</span>
</div>
</div>
</div>
</section>
</template>
<script>
import axios from 'axios';
export default {
data: function () {
return {
searchTerm: '',
results:[]
}
},
watch: {
searchTerm: 'search'
},
methods: {
search() {
if(this.searchTerm.length < 3){
return this.results = []
}
axios.get(process.env.searchUrl + encodeURIComponent(this.searchTerm))
.then(response => {
this.results = response.data.hits
})
}
}
}
</script>
So what we're doing here is simply telling Nuxt to watch the searchTerm variable and to call the search method whenever it changes. If the term is longer than two characters we make a call to Cockpit to fetch the search results.
These results are then displayed and they use the title_slug as the url for the link.
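One optional refinement, not part of the code above: every keystroke past two characters fires a request, so you could wrap the search call in a small debounce helper so requests only fire once the user pauses typing. A minimal sketch:

```javascript
// Delays fn until `wait` ms have passed without another call.
function debounce (fn, wait) {
  let timer
  return function (...args) {
    clearTimeout(timer)
    timer = setTimeout(() => fn.apply(this, args), wait)
  }
}

// In the component you could then declare:
// methods: { search: debounce(function () { /* axios call here */ }, 300) }
```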
To improve these results we could add highlighting by changing {{ result.title }}
to:
<span v-html="result._highlightResult.title.value"></span>
To make this work you first need to go to Algolia and add the title to Attributes to highlight
in Pagination and Display > Highlighting.
This will return the highlighted title word(s) wrapped in <em></em>
tags by default. That is why we need to use v-html otherwise the em tags would simply be rendered as a string.
You could then add a simple css rule to give the em tag a nice background colour for highlighting.
You can add something like this to main.css
/* purgecss ignore */
.search-results em {
@apply not-italic bg-blue-200;
}
I've added a purgecss ignore comment here to make sure this css isn't removed when we build the site because .search-results em
will not actually exist at build time as it is only present if we search on the client side so the css would be removed otherwise.
We could also add highlighting for the post excerpt. However the excerpt may be too long so we don't want to display the whole thing in the results.
Algolia offers a feature called snippeting that allows us to only display a snippet of text around the matched word(s).
If you visit Attributes to snippet in Algolia and add excerpt
then you can update the html to the following:
<template>
<section class="my-8">
<div class="text-center">
<h1 class="mb-6">Search Page</h1>
<p>
This is a live search example using Algolia and Cockpit!
</p>
<div class="my-8">
<input type="text" name="searchTerm" v-model="searchTerm" placeholder="Search Posts..." class="text-center block mb-4 shadow text-gray-600 rounded w-full py-2 px-3">
<div v-if="results.length !==0" class="search-results">
<a v-for='result in results' :key="result.title_slug" :href="'/'+result.title_slug" class="block text-gray-800 p-3 text-left">
<span v-html="result._highlightResult.title.value" class="block font-bold mb-1"></span>
<span v-html="result._snippetResult.excerpt.value"></span>
</a>
</div>
<div v-else-if="searchTerm.length >= 3">
<span class="block text-gray-800 p-3 text-left">
No results found
</span>
</div>
</div>
</div>
</section>
</template>
You should now see something like this:
With title and excerpt highlighting and also snippeting for the post excerpt.
You can check out the GitHub repo of the finished blog here and see a live demo of the site on Netlify here - https://nuxt-cockpit-static-blog.netlify.com
Open up your .env file and add the following variable to it
PER_PAGE=2
We're setting it low on purpose so we can easily see the pagination in action.
Then at the top of nuxt.config.js add this line:
const perPage = Number(process.env.PER_PAGE)
Now that we have our perPage variable we can update our generate: property by adding the following just below let posts = ...
if(perPage < data.total) {
let pages = collection
.take(perPage-data.total)
.chunk(perPage)
.map((items, key) => {
let currentPage = key + 2
return {
route: `blog/${currentPage}`,
payload: {
posts: items.all(),
hasNext: data.total > currentPage*perPage
}
}
}).all()
return posts.concat(tags,pages)
}
So breaking this down, first we check if the value we have set to display per page is less than the total number of blog posts.
If it is less, say we have set 10 posts per page but there are 25 posts in total, then with the take
method we take (10 - 25), which equals -15 posts. The negative integer means we want to take 15 posts from the end of the posts collection. More information on this is in the collectjs docs.
The reason we only want to take from the end of the collection is that we do not want to include the first page of posts, as it is already our blog's home page. (We already have 10 posts on the home page that we don't need to include in the pagination.)
Next we chunk the 15 posts we've got by the perPage
variable, so we would have 10 and 5 in two chunks.
Then we simply map these items into their respective pages, where currentPage
is the key that we add 2 onto since the first chunk will have a key of 0 however we want this to effectively be our page 2 (as we're going to count our home page as page 1).
We pass the post items in each chunk as the payload to use and we also pass a hasNext
variable that lets us know if there is another page or not. In our example here data.total
is 25 as there are 25 posts in total. When we're in the second chunk that contains 5 posts the chunk key will be 1 so we have (1 + 2)*10 which is 30. So hasNext
will evaluate to false.
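The arithmetic above can be checked with a small standalone sketch in plain JavaScript, using the same example numbers of 25 posts and 10 per page (collect.js is replaced by plain arrays here, purely for illustration):

```javascript
// Worked example of the chunking arithmetic: 25 posts, 10 per page.
const total = 25
const perPage = 10

// take(perPage - total) = take(-15): the last 15 posts, i.e. everything
// except the first page that already lives on the home page.
const remaining = total - perPage                 // 15 posts left to paginate
const pageCount = Math.ceil(remaining / perPage)  // 2 chunks: 10 posts + 5 posts

const pages = Array.from({ length: pageCount }, (_, key) => {
  const currentPage = key + 2 // first chunk becomes page 2
  return {
    route: `blog/${currentPage}`,
    hasNext: total > currentPage * perPage
  }
})
```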
We're going to set our blog up so that pages are found at yourdomain.com/blog/2
etc. You can instead do yourdomain.com/2
, yourdomain.com/blog/page-2
or whatever you prefer.
In the pages directory create a new folder called blog
and add a file named _page.vue
to it. Put the following code inside:
<template>
<section>
<div class=my-8>
<h1 class="mb-6">Blog Page {{ page }}</h1>
<ul class="flex flex-col w-full p-0">
<li class="mb-6 w-full" v-for="(post, key) in posts" :key="key">
<div class="text-gray-600 font-bold text-sm tracking-wide">
{{ post._created | toDate }}
<a v-for="(tag, key) in post.tags" :key="key" :href="'/category/'+tag" class="ml-1">{{ tag }}</a>
</div>
<a :href="'/'+post.title_slug">
<h2 class="my-2 text-gray-800 text-lg lg:text-xl font-bold">
{{ post.title }}
</h2>
</a>
<div class="page-content hidden md:block text-base mb-2" v-html="post.excerpt">
</div>
<a class="text-sm text-blue-400" :href="'/'+post.title_slug">
Read more
</a>
</li>
</ul>
<div class="flex justify-center mt-8">
<a :href="page === '2' ? '/' : `/blog/${Number(page)-1}`" class="text-sm pr-2">
Previous Page
</a>
<a v-if="hasNext" :href="`/blog/${Number(page)+1}`" class="text-sm pl-2">
Next Page
</a>
</div>
</div>
</section>
</template>
<script>
export default {
async asyncData ({ app, params, error, payload }) {
if (payload) {
return { posts: payload.posts, page: params.page, hasNext: payload.hasNext }
} else {
let { data } = await app.$axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
limit: process.env.PER_PAGE,
skip: (params.page-1)*process.env.PER_PAGE,
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
if (!data.entries[0]) {
return error({ message: '404 Page not found', statusCode: 404 })
}
return { posts: data.entries, page: params.page, hasNext: Number((params.page-1)*process.env.PER_PAGE) + Number(process.env.PER_PAGE) < data.total }
}
},
head () {
return {
title: `Nuxt Cockpit Static Blog - Page ${this.page}`
}
}
}
</script>
Notice the limit and skip options we added when fetching the posts for the dev server.
When our blog has been generated it will be using the payload we passed through in nuxt.config.js.
We do a quick check to see if the current page is 2 when rendering the previous link as we don't want to link to yourdomain.com/blog/1
as that page doesn't exist, we want to simply go back to the home page to display our first page of posts.
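The limit/skip/hasNext arithmetic for the dev server can be sketched on its own (queryFor is a hypothetical helper, assuming 10 posts per page):

```javascript
const perPage = 10

function queryFor (page, total) {
  const skip = (page - 1) * perPage // posts already shown on earlier pages
  return { limit: perPage, skip, hasNext: skip + perPage < total }
}
```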
Head over to index.vue in pages and update that too so we have a next page if there is one available.
<template>
<section>
<div class=my-8>
<ul class="flex flex-col w-full p-0">
<li class="mb-6 w-full" v-for="(post, key) in posts" :key="key">
<div class="text-gray-600 font-bold text-sm tracking-wide">
{{ post._created | toDate }}
<a v-for="tag in post.tags" :key="tag" :href="'/category/'+tag" class="ml-1">{{ tag }}</a>
</div>
<a :href="'/'+post.title_slug">
<h2 class="my-2 text-gray-800 text-lg lg:text-xl font-bold">
{{ post.title }}
</h2>
</a>
<div class="page-content hidden md:block text-base mb-2" v-html="post.excerpt">
</div>
<a class="text-sm text-blue-400" :href="'/'+post.title_slug">
Read more
</a>
</li>
</ul>
<div v-if="hasNext" class="flex justify-center mt-8">
<a href="/blog/2" class="text-sm">
Next Page
</a>
</div>
</div>
</section>
</template>
<script>
export default {
async asyncData ({ app, error }) {
const { data } = await app.$axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
limit: process.env.PER_PAGE,
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
if (!data.entries[0]) {
return error({ message: '404 Page not found', statusCode: 404 })
}
return { posts: data.entries, hasNext: process.env.PER_PAGE < data.total }
}
}
</script>
Notice we've added the limit option when fetching our posts which is set to our PER_PAGE
environment variable.
If you visit the site now you should see the home page with two posts and a next link. If you click next you'll be taken to /blog/2
and depending on how many posts you've got in Cockpit you'll see a previous and next link on this page.
We also need to remember to update our create-env.js
file for Netlify.
const fs = require('fs')
fs.writeFileSync('./.env', `
API_TOKEN=${process.env.API_TOKEN}\n
BASE_URL=${process.env.BASE_URL}\n
POSTS_URL=${process.env.POSTS_URL}\n
URL=${process.env.URL}\n
PER_PAGE=${process.env.PER_PAGE}
`)
Make sure to update your environment variables when you are logged into Netlify like we did in Part 3 so that PER_PAGE
is included.
We also need to update our sitemap otherwise it won't be aware of our new blog pages so open up nuxt.config.js and update it to the following:
sitemap: {
path: '/sitemap.xml',
hostname: process.env.URL,
cacheTime: 1000 * 60 * 15,
generate: true, // Enable me when using nuxt generate
async routes () {
let { data } = await axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
const collection = collect(data.entries)
let tags = collection.map(post => post.tags)
.flatten()
.unique()
.map(tag => `category/${tag}`)
.all()
let posts = collection.map(post => post.title_slug).all()
if(perPage < data.total) {
let pages = collection
.take(perPage-data.total)
.chunk(perPage)
.map((items, key) => `blog/${key+2}`)
.all()
return posts.concat(tags,pages)
}
return posts.concat(tags)
}
},
Now at the moment we only have pagination set up for our blog posts from all categories. If we wanted to go further we could also set up pagination per category to something like yourdomain.com/category/nuxt/2
etc.
Update your .env PER_PAGE
variable to something sensible like 10 and you should be good to go!
You can check out the GitHub repo of the finished blog here and see a live demo of the site on Netlify here - https://nuxt-cockpit-static-blog.netlify.com
In the example we're making here I'll be adding authentication to app.example.test
and we'll be treating example.test
as the marketing frontend for our application.
So let's edit our Homestead.yaml file and add a database we can use for our authentication.
databases:
- example
Then run homestead up --provision
, or homestead reload --provision
if Homestead is already running.
Now we need to ssh into homestead so run homestead ssh
and navigate to the directory where app.example.test
is located. Then run the following:
php artisan make:auth
You should now be able to see the login and register pages.
In our example.test
code create some new Middleware called CheckReferral
.
php artisan make:middleware CheckReferral
Open up the newly created file and edit the handle function.
public function handle($request, Closure $next)
{
if( !$request->hasCookie('referral') && $request->query('ref') ) {
return redirect($request->url())->withCookie(cookie()->forever('referral', $request->query('ref')));
}
return $next($request);
}
What we are doing here is checking whether a cookie named referral
is currently set. If it is not, and the request contains a query parameter ref
, then Laravel will set a cookie named referral, with the value of ref, that has the maximum expiry time.
For example the url example.test/?ref=laravel
would set a cookie (if none already exists) with value laravel
.
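For illustration, the query-string part of that logic looks like this in plain JavaScript (refFrom is just an illustrative name; the real check is the PHP middleware above):

```javascript
// Reads the ?ref= parameter the middleware keys off, or null if absent.
function refFrom (url) {
  return new URL(url).searchParams.get('ref')
}
```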
If we want this middleware to run during every web route HTTP request to our application then we can add it to our middleware by editing app/Http/Kernel.php
and adding it to the 'web' section of the $middlewareGroups property like so:
protected $middlewareGroups = [
'web' => [
\App\Http\Middleware\EncryptCookies::class,
\Illuminate\Cookie\Middleware\AddQueuedCookiesToResponse::class,
\Illuminate\Session\Middleware\StartSession::class,
// \Illuminate\Session\Middleware\AuthenticateSession::class,
\Illuminate\View\Middleware\ShareErrorsFromSession::class,
\App\Http\Middleware\VerifyCsrfToken::class,
\Illuminate\Routing\Middleware\SubstituteBindings::class,
\App\Http\Middleware\CheckReferral::class,
],
'api' => [
'throttle:60,1',
'bindings',
],
];
If you only wanted to check and set the cookie on the homepage you could simply add it to the $routeMiddleware property instead and then call it for the '/'
route in web.php.
If you remember from the previous post we had a route /cookie
to check if the cookie had been set. Let's edit web.php and update this:
Route::get('/', function () {
//Cookie::queue(Cookie::make('test', '123', 60));
return view('welcome');
});
Route::get('/cookie', function () {
return Cookie::get('referral');
});
Make sure to comment out or delete the Cookie::queue we added in the previous post as we don't need this anymore.
Now if we visit example.test/?ref=laravel
you'll notice we're redirected to example.test
.
If we go to example.test/cookie
then you should see the value laravel returned.
Our cookie is currently being encrypted by Laravel but since it does not contain sensitive data lets disable it by editing app/Http/Middleware/EncryptCookies.php
.
protected $except = [
'referral'
];
Make sure to update EncryptCookies.php
for app.example.test too.
Head over to your code for app.example.test
and then register a new user in your browser.
Login with your newly created user. We'll be using a package called hashids, which has been ported to Laravel, to create a short unique string based on each user's ID in the database.
So install the package by running the following:
composer require vinkla/hashids
It should be discovered automatically. Next add the Facade to our aliases list at the bottom of config/app.php
'Hashids' => Vinkla\Hashids\Facades\Hashids::class,
Now we can publish the vendor files by running:
php artisan vendor:publish --provider="Vinkla\Hashids\HashidsServiceProvider"
Open up config/hashids.php
and update the 'main' connection. You can use Laravel to generate a random string for the salt; just temporarily add dd(str_random(40));
to any route in web.php.
'main' => [
'salt' => 'yGPMa8oZc7PEJXxEnOIAhZscjujizzCPt028vCSG',
'length' => 6,
],
Now we will be able to generate a unique 6 character long referral ID for each user based on their ID in the database.
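To show the idea of a reversible ID mapping, here is a toy sketch. This is NOT the Hashids algorithm, just an illustration: an arbitrary OFFSET stands in for the salt, and decode exactly inverts encode.

```javascript
// Toy reversible id <-> string mapping (not Hashids; for illustration only).
const OFFSET = 123456789 // arbitrary constant standing in for the salt

function encodeId (id) { return (id + OFFSET).toString(36) }
function decodeId (code) { return parseInt(code, 36) - OFFSET }
```

The real package additionally guarantees salted, non-sequential output of a configurable length.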
Create a new route in web.php called referral-link
.
Route::get('/referral-link', 'HomeController@referral');
We are using the HomeController generated by Laravel's auth scaffolding as it already has the auth middleware.
Edit HomeController.php:
public function referral()
{
return 'http://example.test/?ref=' . \Hashids::encode(auth()->user()->id);
}
You should see something like this http://example.test/?ref=V53YMO
returned.
First let's create a new migration to add a new column to our database.
php artisan make:migration add_referred_by_column_to_users_table --table=users
Edit the new migration file in database/migrations
public function up()
{
Schema::table('users', function (Blueprint $table) {
$table->unsignedInteger('referred_by')->nullable()->after('email');
});
}
Then whilst inside Homestead and in the correct directory run php artisan migrate
.
There will now be a referred_by column right after the email column in the users table.
Now we just need to edit app/Http/Controllers/Auth/RegisterController.php
so we can save the referred by data.
use Illuminate\Support\Facades\Cookie;
Make sure to add that to the top of the file first, then update the create function:
protected function create(array $data)
{
$cookie = Cookie::get('referral');
$referred_by = $cookie ? \Hashids::decode($cookie)[0] : null;
return User::create([
'name' => $data['name'],
'email' => $data['email'],
'password' => Hash::make($data['password']),
'referred_by' => $referred_by
]);
}
We check if the cookie named referral
is set (if it is not, null is returned), then we use our Hashids package to decode the value in the cookie, giving us the ID of the user who referred this new registration.
Hashids::decode() returns an array, which is why we have to add [0].
Before we continue make sure to update app/User.php
to add 'referred_by' to the $fillable property.
protected $fillable = [
'name', 'email', 'password', 'referred_by'
];
Before you log out of the current user visit app.example.test/referral-link
and copy your referral link.
Then log out and make sure to clear all your cookies for both example.test
and app.example.test
. Then paste your referral link into the browser (e.g. http://example.test/?ref=V53YMO).
Then imagine that we click a button on example.test
that takes us to app.example.test/register
for us to sign up for the application.
Enter details for a new user and click Register
. If you now check out the records in the database table you should see the referred_by column for this new user contains the id of the first user you created.
We can create a relationship that returns users who you have referred.
Update app/User.php
and add the following to the bottom of the file.
public function referrer()
{
return $this->belongsTo('App\User', 'referred_by');
}
public function referrals()
{
return $this->hasMany('App\User', 'referred_by');
}
Then update your web.php routes file.
Route::get('/referrer', 'HomeController@referrer');
Route::get('/referrals', 'HomeController@referrals');
And finally HomeController.php
public function referrer()
{
return auth()->user()->referrer;
}
public function referrals()
{
return auth()->user()->referrals;
}
Now if you login as your first user and visit app.example.test/referrals
you'll see an array of all the users who you've referred to the site.
If you visit app.example.test/referrer
you'll see the details of the user who referred you to the site.
Obviously we would never do this in a production application but things like auth()->user()->referrals()->count()
could be useful.
This is only a very simple example but hopefully it gives you a basic idea of how a more complex system could be implemented. If you are using the same domain for marketing and registrations then you can skip all the cookie sharing stuff and keep your cookies encrypted.
Source code for both sites can be found here https://github.com/willbrowningme/laravel-user-referral-example.
For this example I'll be using Laravel Homestead to set up a couple of local Laravel sites.
laravel new example && laravel new subdomain
Then we need to edit Homestead.yaml and our hosts file to add addresses for these applications.
sites:
- map: example.test
to: /home/vagrant/code/example/public/
- map: app.example.test
to: /home/vagrant/code/subdomain/public/
192.168.10.10 example.test
192.168.10.10 app.example.test
homestead up --provision
When you've provisioned Homestead you should see the new Laravel welcome screen when you visit example.test
and app.example.test
.
Let's edit our .env
file for our main example.test site. Update the following values.
APP_URL=http://example.test
SESSION_DOMAIN=.example.test
The SESSION_DOMAIN
variable is important as this will allow our subdomain to access all cookies set by the parent domain.
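The rule this relies on can be sketched as a simplified domain-match check (domainMatches is illustrative, not how Laravel or the browser literally implements it): a cookie with Domain=.example.test is sent to the apex domain and all of its subdomains.

```javascript
// Simplified cookie Domain matching: host matches the cookie domain itself
// or any subdomain of it.
function domainMatches (host, cookieDomain) {
  const d = cookieDomain.replace(/^\./, '') // leading dot is ignored
  return host === d || host.endsWith('.' + d)
}
```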
Now let's also update the .env
file for our app.example.test site.
APP_URL=http://app.example.test
We don't need to set SESSION_DOMAIN
here.
We've left SESSION_DRIVER
as the default value which is file for both sites.
In our main example.test
code update the default welcome route in web.php
.
Route::get('/', function () {
Cookie::queue(Cookie::make('test', 'abc', 60));
return view('welcome');
});
Route::get('/cookie', function () {
return Cookie::get('test');
});
Here we're informing Laravel to set a Cookie named 'test' with a value of 'abc' that will expire in 60 minutes.
First visit example.test
in your browser, then if you visit example.test/cookie
you should see the value we set of abc
. So we know that our cookie has been successfully set.
If you head over to your app.example.test
code and add the following to the web.php routes file:
Route::get('/cookie', function () {
return Cookie::get('test');
});
Then visit app.example.test/cookie
you won't be able to see anything yet as Laravel by default encrypts all cookies that are set.
There are a couple of options we have here on how to access the cookie.
We can copy the APP_KEY value from our example.test .env file and paste it into our app.example.test .env file so that they both have the same value. If you try this and then visit app.example.test/cookie
again, you will be able to see the value abc
and the cookie can be decrypted successfully.
The reason this works is because Laravel uses our APP_KEY value when encrypting, decrypting and signing data.
Some people may feel uncomfortable having the same APP_KEY value for both sites but there is another way.
We can tell Laravel not to encrypt certain cookies if they do not contain sensitive data.
In both
of our sites open up the app/Http/Middleware
folder and edit the EncryptCookies.php
file.
protected $except = [
'test'
];
Here we can add the name of any cookies we don't wish to be encrypted. Make sure you've added this to both sites
.
Change the value of the cookie set by example.test
to something else so we can be sure it's working.
Cookie::queue(Cookie::make('test', '123', 60));
Then visit example.test
in your browser again. Check example.test/cookie
and you should see the value of '123' this time.
Now if you visit app.example.test/cookie
you should be able to access the cookie and see the value of '123' returned.
You should only disable encryption for a cookie if it contains non-sensitive information.
I'll write another post shortly detailing how we can use what we've applied here to create a very simple referral system that tracks who has been referred by who. The system will use middleware to determine whether to set a cookie if a certain query string is present in the request. This will enable you to link to any url on your main site like so example.test/?ref=referral-id
or example.test/pricing?ref=referral-id
.
Update:
You can find the simple referral system post here - Building a Simple Referral System in Laravel
First we need to find our API key. So visit the "My Profile" section when logged into Cloudflare - https://dash.cloudflare.com/profile at the bottom of the page you'll see your keys.
We'll be using the Global API Key to clear the cache.
The Zone ID for your website can be found in the "Overview" section for that site. It will look like this:
The request we'll be sending to clear the cache looks like this:
curl -X POST "https://api.cloudflare.com/client/v4/zones/YOUR-ZONE-ID/purge_cache" \
-H "X-Auth-Email: YOUR-CLOUDFLARE-EMAIL" \
-H "X-Auth-Key: YOUR-GLOBAL-API-KEY" \
-H "Content-Type: application/json" \
--data '{"purge_everything":true}'
Where YOUR-CLOUDFLARE-EMAIL
is the email you use to login to Cloudflare. YOUR-GLOBAL-API-KEY
is the key we found above and where YOUR-ZONE-ID
is a unique identifier for your Cloudflare website.
You can add this code to the end of your deployment script to make sure the cache is purged after each deployment.
In the above example we're telling Cloudflare to purge everything but you can also choose which items to purge that have matching Cache-Tag headers or which hosts to purge although this does appear to be only available for Enterprise accounts
.
--data '{
"tags":["some-tag","another-tag"],
"hosts":["www.example.com","images.example.com"]
}'
More documentation can be found here - https://api.cloudflare.com/#zone-purge-files-by-cache-tags-or-host
If everything went to plan you should get a response from Cloudflare like so:
{
"success": true,
"errors": [],
"messages": [],
"result": {
"id": "9a7806061c88ada191ed06f989cc3dac"
}
}
Our vendor bundle is coming in at 752kB!
First things first we need to find out why our vendor bundle is so big in the first place.
Luckily Nuxt uses webpack-bundle-analyzer so we can simply add the following to our nuxt.config.js
under the build property.
build: {
analyze: true,
}
Then when you run npm run generate
it will open up the build analyser at http://127.0.0.1:8888
.
Looking at this we can see the highlight.js
package is very big with a parsed size of 540kB!
If we inspect this further we can see this is mainly due to a few languages included with it like mathematica.js or sqf.js.
Now in my case I only use a handful of common languages so I don't need any of the other included ones.
After a little bit of searching I came across this comment on GitHub on how to achieve this.
So let's give it a try and update our Nuxt site.
I have a filters.js
in the plugins directory where I was importing highlight.js.
I updated the import to the following: (I was previously doing import hljs from 'highlight.js'
)
import hljs from 'highlight.js/lib/highlight.js'
Then simply specify which languages you want to register like so:
hljs.registerLanguage('php', require('highlight.js/lib/languages/php'))
hljs.registerLanguage('javascript', require('highlight.js/lib/languages/javascript'))
hljs.registerLanguage('css', require('highlight.js/lib/languages/css'))
Make sure to only add the languages that you intend to use.
I also had Highlight.js
in the vendor file importing all of the languages, so I simply updated this as well:
build: {
vendor: ['axios', 'highlight.js/lib/highlight.js'],
analyze: true,
}
Then ran npm run generate
again and had a look at the build analyser.
This time the vendor bundle was only 226kB in parsed size, down from 752kB! That's a 70% decrease just from removing unneeded languages.
As you can see we reduced the Highlight.js package down to 31.53kB from 540kB.
And since our vendor bundle is now less than 300kB we no longer get the annoying warning in our terminal. Success
!
Another example we can use is with moment.js, which is usually overkill if you are only doing some basic date formatting.
Another library that is much more lightweight and has a similar API is day.js.
If we remove moment.js and run npm install dayjs --save
to replace it with day.js then we can simply do the following:
const dayjs = require('dayjs')
import advancedFormat from 'dayjs/plugin/advancedFormat'
dayjs.extend(advancedFormat)
Vue.filter('toDate', function(timestamp) {
return dayjs(timestamp*1000).format('Do MMM YY')
})
The reason we needed to import the advancedFormat plugin for day.js is simply because 'Do' is not included in the default day.js installation.
We can now format dates in the same way as with moment but using a much smaller library.
I'll try to add more examples here in the future.
We'll be using the following incoming webhook server to achieve our goal - https://github.com/adnanh/webhook
This webhook server is written in Go and is really simple to set up. It is easy to configure as the config file is just JSON.
I'll be spinning up a fresh droplet with DigitalOcean for this example but you can use your own existing server and websites.
We could use this process to test in a staging environment but for this post we'll just be keeping it simple.
I'll be running all commands in this post as a user johndoe
with sudo permissions.
If you intend to run any npm commands in your build script make sure you have nodejs installed on your server.
The first thing we need to do is install golang on our server so that we can then install the incoming webhook server. You can do so using the following commands, make sure to find the latest stable version from the list here - https://golang.org/dl/
e.g. go1.10.3.linux-amd64.tar.gz
cd ~
wget https://dl.google.com/go/go<VERSION>.<OS>-<ARCH>.tar.gz
sudo tar -C /usr/local -xzf go<VERSION>.<OS>-<ARCH>.tar.gz
export PATH=$PATH:/usr/local/go/bin
Then we can simply install the latest version of webhook with the following command:
go get github.com/adnanh/webhook
This will create a file ~/go/bin/webhook
, in my case /home/johndoe/go/bin/webhook
.
Create a folder called ~/hooks
and then create a folder inside hooks with the same name as the website you're going to deploy. In my case I'll just call it my-site-1
. This is where we'll put our deploy.sh
script and also an output.log
file.
mkdir ~/hooks
mkdir ~/hooks/my-site-1
Now create a new file inside the hooks directory and add the following JSON inside, making sure to change my-site-1 to the name of your site and the command-working-directory to the correct root directory of your site:
nano ~/hooks/hooks.json
[
{
"id": "deploy-my-site-1",
"execute-command": "/home/johndoe/hooks/my-site-1/deploy.sh",
"command-working-directory": "/var/www/my-site-1/",
"response-message": "Executing deploy script...",
"trigger-rule":
{
"and":
[
{
"match":
{
"type": "payload-hash-sha1",
"secret": "<RANDOM-SECRET-STRING>",
"parameter":
{
"source": "header",
"name": "X-Hub-Signature"
}
}
},
{
"match":
{
"type": "value",
"value": "refs/heads/master",
"parameter":
{
"source": "payload",
"name": "ref"
}
}
}
]
}
}
]
Replace <RANDOM-SECRET-STRING> with a long random string of your choosing; you'll enter the same string in GitHub's webhook settings later.
Inside the ~/hooks/my-site-1
folder create an output.log
file. Then create a file named deploy.sh
.
cd ~/hooks/my-site-1
touch output.log
touch deploy.sh
chmod +x deploy.sh
The chmod command simply makes the .sh file executable.
Add the following inside deploy.sh (update to suit your sites needs):
#!/usr/bin/env bash
# redirect stdout/stderr to a file
exec > /home/johndoe/hooks/my-site-1/output.log 2>&1
git fetch --all
git checkout --force "origin/master"
npm install --production
npm run production
composer install --no-dev
php artisan route:cache
php artisan config:cache
php artisan view:cache
php artisan queue:restart
The third line of the above simply redirects all output to our output.log
file. Then we run git fetch and git checkout to get our code updates from our origin repo (in my case GitHub).
You can update the other commands to suit your needs. Since I'm using a Laravel
app as an example I'll run some artisan commands to clear the cache and restart the queue etc.
The incoming webhook server runs on port 9000
by default. You can change this if you wish as described here, but for our example we'll just leave it.
You now need to make sure that port 9000 is open on your server, which may involve updating your firewall rules. If you're using a service such as RunCloud (affiliate link) this is very easily done from the user interface.
Once you've made sure port 9000 is open we can try running the server to see if everything is working so far.
To start the server enter the following command, making sure to change johndoe to your user's username and <YOUR-SERVER-IP> to your server's IP address.
/home/johndoe/go/bin/webhook -hooks /home/johndoe/hooks/hooks.json -ip "<YOUR-SERVER-IP>" -verbose
If you now visit http://<YOUR-SERVER-IP>:9000/hooks/deploy-my-site-1
in the browser you should see a message saying Hook rules were not satisfied.
This is because the rules we specified in hooks.json, including the secret string, were not satisfied by our request.
Stop the webhook server by typing CTRL+C in the terminal.
Go to your site's web root on your server and initialise a git repository, then add your remote GitHub url.
cd /var/www/my-site-1
git init
git remote add origin git@my-site-1:willbrowningme/my-site-1.git
The reason we use the above as the remote origin url is so that we can use an alias in our ~/.ssh/config
file to specify which ssh key to use when connecting. Update my-site-1 to the name of your repo. If you are using GitLab or another service that allows multiple repos per key then you can have the above as git@github.com:willbrowningme/my-site-1.git
.
In ~/.ssh/
create a new file called config and add the following inside:
If the .ssh directory doesn't exist yet then simply create it by running mkdir ~/.ssh
.
nano ~/.ssh/config
# My Site 1 Repo
Host my-site-1 github.com
HostName github.com
IdentityFile ~/.ssh/my_site_1_id_rsa
Make sure to change my-site-1 to the alias you defined above for the remote origin and also the IdentityFile to the path of the private key we are about to generate.
If we now try to run git fetch --all
we will get an error saying Permission denied (publickey). This is because we haven't yet set up a deploy key for the repo in GitHub.
To fix this let's generate a new ssh key pair on our server. Substitute the email for your GitHub email.
When it asks Enter a file in which to save the key
name it like so - /home/johndoe/.ssh/my_site_1_id_rsa
and leave the passphrase blank. (replace my_site_1 with the name of your github repo)
ssh-keygen -t rsa -b 4096 -C "your@github-email.com"
Again, the reason we are doing this is because GitHub only allows one deploy key to be used for each repository. You cannot use the same key for multiple repositories. Hence the naming convention.
I believe GitLab does allow you to use it for multiple repos so you can just leave the name as the default id_rsa
in that case if you wish.
Now we need to copy the public key and add it to GitHub as a deploy key
. So open up the pub key file.
vi ~/.ssh/my_site_1_id_rsa.pub
Copy the contents of this file and then type :q
to quit the vim editor.
On GitHub go to the repo in question click on settings and then Deploy Keys
. Click "add deploy key" and paste in the contents of your public key we just generated.
Now go back to your website's web root and try running the following again.
cd /var/www/my-site-1
git fetch --all
git checkout --force "origin/master"
With any luck the commands should work correctly now.
In the GitHub repo go to settings then webhooks and click "add a webhook". For the payload url enter http://<YOUR-SERVER-IP>:9000/hooks/deploy-my-site-1
replacing your server IP and the ID you gave in hooks.json for the webhook.
Choose application/json for the content type and make sure to enter the random secret string you generated earlier in our hooks.json file. These will need to match or the script will not be executed. Choose "just the push event" and save the webhook.
Now we need to test it all works as planned. So start up your webhook server again by running:
/home/johndoe/go/bin/webhook -hooks /home/johndoe/hooks/hooks.json -ip "<YOUR-SERVER-IP>" -verbose
Make an edit or an update to your code on your local pc repository so that we can commit the changes and then push them to GitHub by running:
git push origin master
This should now trigger GitHub to send the webhook delivery to our server which will then run the deploy.sh
script for my-site-1 and will fetch the updates we just made and then build the site with the commands we gave.
If you visit GitHub settings and then webhooks you should see the new delivery under Recent Deliveries
. Make sure it has a 200
response code and shows the response body we gave of "Executing deploy script...".
Stop the webhook server by typing CTRL+C into the terminal.
Now that everything is working as planned let's install supervisor
so we can keep the webhook server running in the background.
So below we install supervisor then create a new .conf file inside the /etc/supervisor/conf.d
directory.
sudo apt install supervisor
cd /etc/supervisor/conf.d
sudo nano webhooks.conf
Add the following inside the webhooks.conf
file, replacing the username and IP etc. with your values.
[program:webhooks]
command=bash -c "/home/johndoe/go/bin/webhook -hooks /home/johndoe/hooks/hooks.json -ip '<YOUR-SERVER-IP>' -verbose"
redirect_stderr=true
autostart=true
autorestart=true
user=johndoe
numprocs=1
process_name=%(program_name)s_%(process_num)s
stdout_logfile=/home/johndoe/hooks/supervisor.log
environment=HOME="/home/johndoe",USER="johndoe"
Save this file and then run.
touch ~/hooks/supervisor.log
sudo supervisorctl reread
sudo supervisorctl update
sudo supervisorctl start webhooks:*
I had a lot of issues with the webhooks.conf file and getting supervisor to start the server as the non root user. Initially it kept running the server as root which would then cause all the npm and git commands inside deploy.sh
to fail.
However I managed to get it working correctly by setting the right environment variables and then running the command through bash -c "the-command-here"
.
So now we should have the webhooks server running nicely in the background ready to receive incoming deliveries from GitHub.
Make another edit to your local code and push it to GitHub to make sure everything is still working as it should.
Check the output.log
file at ~/hooks/my-site-1/output.log
to see the output from deploy.sh
.
If you want to add another site with a different set of deploy and build commands you can follow these steps:
First let's edit hooks.json so that it looks something like this:
[
{
"id": "deploy-my-site-1",
"execute-command": "/home/johndoe/hooks/my-site-1/deploy.sh",
"command-working-directory": "/var/www/my-site-1/",
"response-message": "Executing deploy script...",
"trigger-rule":
{
"and":
[
{
"match":
{
"type": "payload-hash-sha1",
"secret": "<RANDOM-SECRET-STRING>",
"parameter":
{
"source": "header",
"name": "X-Hub-Signature"
}
}
},
{
"match":
{
"type": "value",
"value": "refs/heads/master",
"parameter":
{
"source": "payload",
"name": "ref"
}
}
}
]
}
},
{
"id": "deploy-my-site-2",
"execute-command": "/home/johndoe/hooks/my-site-2/deploy.sh",
"command-working-directory": "/var/www/my-site-2/",
"response-message": "Executing deploy script...",
"trigger-rule":
{
"and":
[
{
"match":
{
"type": "payload-hash-sha1",
"secret": "<RANDOM-SECRET-STRING>",
"parameter":
{
"source": "header",
"name": "X-Hub-Signature"
}
}
},
{
"match":
{
"type": "value",
"value": "refs/heads/master",
"parameter":
{
"source": "payload",
"name": "ref"
}
}
}
]
}
}
]
We then need to add a new folder called my-site-2 and then a new file called deploy.sh
, making sure to change the output file too.
mkdir ~/hooks/my-site-2
touch ~/hooks/my-site-2/output.log
nano ~/hooks/my-site-2/deploy.sh
Inside our new deploy.sh file add the following:
#!/usr/bin/env bash
# redirect stdout/stderr to a file
exec > /home/johndoe/hooks/my-site-2/output.log 2>&1
git fetch --all
git checkout --force "origin/master"
npm install --production
npm run production
composer install --no-dev
php artisan route:cache
php artisan config:cache
php artisan view:cache
php artisan queue:restart
Remember to make it executable too.
chmod +x ~/hooks/my-site-2/deploy.sh
Then we need to generate a new key pair named my_site_2_id_rsa and add the public key to the deploy key
section in the github repo just like we did earlier.
So initialise a new git repo (if you haven't got one already) and add the corresponding remote origin url to your code in /var/www/my-site-2
(or wherever your site is located).
cd /var/www/my-site-2
git init
git remote add origin git@my-site-2:willbrowningme/my-site-2.git
Then update ~/.ssh/config
# My Site 1 Repo
Host my-site-1 github.com
HostName github.com
IdentityFile ~/.ssh/my_site_1_id_rsa
# My Site 2 Repo
Host my-site-2 github.com
HostName github.com
IdentityFile ~/.ssh/my_site_2_id_rsa
Test using git fetch
git fetch --all
git checkout --force "origin/master"
If all went well the git fetch command should have worked.
Next make sure to restart the supervisor job as we have updated the hooks file.
sudo supervisorctl reload
Now make an edit on your local repo of your second project and test pushing the changes to origin.
Success!
You should now be set up with automatic deployments and builds for the sites.
If you can see a way to improve on this setup then please let me know in the comments.
Before we look at deploying our static blog let's add a few finishing touches like a sitemap and page titles etc.
In the project root run the following to install the Nuxt Community sitemap module:
npm install @nuxtjs/sitemap --save-dev
Then in your nuxt.config.js add the sitemap to the modules: property
modules: [
// Doc: https://github.com/nuxt-community/axios-module#usage
'@nuxtjs/axios',
'@nuxtjs/sitemap'
],
Then still in nuxt.config.js add the following code below the generate: property
sitemap: {
path: '/sitemap.xml',
hostname: process.env.URL,
cacheTime: 1000 * 60 * 15,
generate: true, // Enable me when using nuxt generate
async routes () {
let { data } = await axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
const collection = collect(data.entries)
let tags = collection.map(post => post.tags)
.flatten()
.unique()
.map(tag => `category/${tag}`)
.all()
let posts = collection.map(post => post.title_slug).all()
return posts.concat(tags)
}
},
Here we are simply letting the sitemap module know what routes we have.
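The collect() calls above come from the collect.js library. If you want to see what the routes() method actually produces, here is the same mapping in plain JavaScript with made-up sample entries:

```javascript
// Given Cockpit-style entries, build the same route list as the sitemap
// routes() function above: one route per post slug plus one per unique tag.
function buildRoutes(entries) {
  const tags = [...new Set(entries.flatMap(post => post.tags))]
    .map(tag => `category/${tag}`);
  const posts = entries.map(post => post.title_slug);
  return posts.concat(tags);
}

// Sample data for illustration only.
const entries = [
  { title_slug: 'first-post', tags: ['nuxt', 'vue'] },
  { title_slug: 'second-post', tags: ['nuxt'] },
];
console.log(buildRoutes(entries));
// → ['first-post', 'second-post', 'category/nuxt', 'category/vue']
```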
When we deploy our site we want to have the correct page titles and meta descriptions for each post, so let's look at sorting this out.
In the head: {...} property of nuxt.config.js you'll see we have a title and meta property we can set. Set these to the default for your blog.
Let's look at the about.vue page we created in the first part of this guide. If you don't have one, just create a new about.vue
file in the pages directory and add the following:
<template>
<section class="my-8">
<div class="text-center">
<h1 class="mb-6">About Page</h1>
<p>
Hi this is a static blog made with Nuxt.js, Cockpit and Tailwindcss!
</p>
</div>
</section>
</template>
<script>
export default {
head () {
return {
title: 'About',
meta: [
{ hid: 'description', name: 'description', content: 'This is the about page!' }
]
}
}
}
</script>
Note the hid
property, if we are declaring the same meta tags as in our nuxt.config.js we need to include this so that Nuxt does not duplicate the meta tags. Instead it overrides those in nuxt.config.js with the ones we add here with the same hid
value.
But what about in our dynamic post and category pages?
Open up your _title_slug.vue
page and add the following beneath the asyncData method:
head () {
return {
title: this.post.title,
meta: [
{ hid: 'description', name: 'description', content: this.post.excerpt },
]
}
}
You can run the dev server and make sure everything is working correctly and the page titles are being set.
Do the same for _tag.vue
in the category directory.
head () {
return {
title: `Posts tagged with ${this.category}`,
meta: [
{ hid: 'description', name: 'description', content: `All blog posts categorised as ${this.category}.` },
]
}
}
If you want to improve this further you can add meta tags for social media sites like Twitter, Google and Facebook.
Also using Real Favicon Generator you can create all the correct icons etc. Just add the files to your static directory and they will be copied over to the dist directory when you run npm run generate
.
So far we haven't displayed the creation date for any of our blog posts so let's look at how we can do this.
Install day.js with the following command:
npm install dayjs --save-dev
We're using dayjs as we only want to do some simple date formatting and moment.js is overkill for this situation.
Once installed open up the filters.js file in the plugins directory and update it so that it looks like this:
import Vue from 'vue'
import highlightjs from 'highlight.js'
import marked, { Renderer } from 'marked'
const dayjs = require('dayjs')
import advancedFormat from 'dayjs/plugin/advancedFormat'
dayjs.extend(advancedFormat)
// Only import the languages that you need to keep our js bundle small
highlightjs.registerLanguage('php', require('highlight.js/lib/languages/php'))
highlightjs.registerLanguage('javascript', require('highlight.js/lib/languages/javascript'))
highlightjs.registerLanguage('css', require('highlight.js/lib/languages/css'))
// Create your custom renderer.
const renderer = new Renderer()
renderer.code = (code, language) => {
// Check whether the given language is valid for highlight.js.
const validLang = !!(language && highlightjs.getLanguage(language))
// Highlight only if the language is valid.
const highlighted = validLang ? highlightjs.highlight(language, code).value : code
// Render the highlighted code with `hljs` class.
return `<pre><code class="hljs ${language}">${highlighted}</code></pre>`
}
// Set the renderer to marked.
marked.setOptions({ renderer })
Vue.filter('parseMd', function(content) {
return marked(content)
})
Vue.filter('toDate', function(timestamp) {
return dayjs(timestamp*1000).format('Do MMM YY')
})
We needed to import advancedFormat
since the Do
date format is not included in dayjs by default. If you want to format your dates differently you might not need this.
Cockpit returns our created date as a timestamp in seconds, so we need to multiply it by 1000 to get it into milliseconds. Then we just format it to our liking.
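To make the conversion concrete, here is a dependency-free version of the same filter using only the built-in Date object (UTC is used here so the output doesn't depend on the server's timezone, unlike the local-time dayjs filter above):

```javascript
// English ordinal suffix: 1 → "1st", 2 → "2nd", 13 → "13th", 22 → "22nd".
function ordinal(n) {
  const s = ['th', 'st', 'nd', 'rd'];
  const v = n % 100;
  return n + (s[(v - 20) % 10] || s[v] || s[0]);
}

// Cockpit's _created is a Unix timestamp in SECONDS, while JavaScript's
// Date expects MILLISECONDS, hence the * 1000.
function toDate(timestamp) {
  const d = new Date(timestamp * 1000);
  const months = ['Jan', 'Feb', 'Mar', 'Apr', 'May', 'Jun',
                  'Jul', 'Aug', 'Sep', 'Oct', 'Nov', 'Dec'];
  const yy = String(d.getUTCFullYear()).slice(-2);
  return `${ordinal(d.getUTCDate())} ${months[d.getUTCMonth()]} ${yy}`;
}

console.log(toDate(1546300800)); // 1 Jan 2019 00:00:00 UTC → "1st Jan 19"
```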
You can now go and update index.vue
, _tag.vue
and _title_slug.vue
to include the post's created date like so {{ post._created | toDate }}
.
Your site should now look something like this.
Now that our site is in reasonable shape let's look at deploying it.
By far the easiest place for us to deploy our site is Netlify.
We can simply link our git repository on GitHub/GitLab/Bitbucket and it will automatically be updated and rebuilt on Netlify whenever we push changes. We can also easily add webhooks that allow us to tell Netlify to regenerate the site when we update one of our blog posts in Cockpit.
Just before we do this we need to add a little script to the root of our site that will allow Netlify to create a .env file at the time it builds our site.
The reason we need to do this is because we added our .env file to our .gitignore file so it won't be committed to git and Netlify won't have access to our Cockpit API key!
So create a new file called create-env.js
and add the following to it:
const fs = require('fs')
fs.writeFileSync('./.env', `
BASE_URL=${process.env.BASE_URL}\n
POSTS_URL=${process.env.POSTS_URL}\n
URL=${process.env.URL}
`)
All this little script does is create a .env file from the Build environment variables
that we will set up in Netlify soon.
If you haven't already, initialise a git repository for your site and then push it to whichever service you use (e.g. GitHub).
Sign up at Netlify (it's free) and add a new site from git.
When you've allowed Netlify access and selected the correct git repository you need to add the following under Deploy Settings
as the Build command:
node ./create-env.js && npm run generate
Remember to set the Publish directory
as dist.
This tells Netlify to run our create-env.js
script above and write to a .env file so we can use our Cockpit API key etc.
Finally we need to tell Netlify what our Build environment variables
are so click "new variable" until you have something like this.
Now with any luck you'll be able to push changes to GitHub etc and Netlify will automatically be notified of the changes and rebuild your site by running the npm run generate
command we specified above!
So we've got automatic deploys set up for pushing changes to GitHub etc. but now we need to tell Netlify to rebuild our static site when we update, add or delete a post in Cockpit.
In Netlify under "Build & Deploy" Settings you should see an option to add a build hook.
Click on this and call it something like Regenerate Blog
.
You should then see a URL like this: https://api.netlify.com/build_hooks/xxxxxxxxxxxxxxxx
Copy this URL and then head over to your Cockpit backend - https://cms.yourdomain.com
.
Once signed into Cockpit go to settings, webhooks and click "create a webhook". Call the webhook Regenerate Blog or anything like that and paste in your Netlify Build Hook URL.
Make sure to add events collections.save.after
and collections.remove.after
.
Click save and then go edit one of your posts to see if everything is working.
You should see after a minute or so that Netlify has automatically regenerated the static site for us!
You can now go on to add your own custom domain to your blog and also add an SSL certificate with forced https redirection.
We could also create a similar setup to the above on our own Digitalocean, Vultr etc. VPS using a small server to accept webhooks and run shell commands. I'll cover this in a future post!
Update!
You can find my post explaining this here - Setting up Automatic Deployment and Builds Using Webhooks
Hopefully you can see how easy it is to get up and running with a simple statically generated site using Nuxt and Cockpit. Paired with Netlify it really is a great developer experience, and being served on Netlify's CDN makes it extremely fast!
You can check out the GitHub repo of the finished blog here and see a live demo of the site on Netlify here - https://nuxt-cockpit-static-blog.netlify.com
You might be wondering how we can generate static pages for each blog post when deploying or updating our site.
Nuxt comes with an easy solution for this, so open up your nuxt.config.js and add the following above the build: {...} property:
generate: {
routes: async () => {
let { data } = await axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
return data.entries.map((post) => {
return {
route: post.title_slug,
payload: post
}
})
}
},
So what's going on here? Well first we make a call to our Cockpit backend to get our post entries. We then map this response into an object containing the actual route
(we're using the title slug for this) and also a payload
object.
The payload we set to the entire post entry. This will be passed to each generated blog post and we'll be able to access it and display the contents.
This makes generating our static site faster as we won't need to fetch each blog post individually from every blog post page we generate.
You can read more about this at Nuxtjs.org.
So now we've told our blog what routes it needs to have we need to create a page that will display the contents of individual blog posts.
The convention for dynamic pages in Nuxt is to name the page like so _title_slug.vue
where title_slug is the unique route identifier in our case. Notice also we have prefixed title_slug with an underscore.
So create a new file called _title_slug.vue
in the pages directory. If you want your links to be /blog/title_slug instead of just /title_slug then you need to create a blog directory in the pages directory then put _title_slug.vue
in there. You can of course use /post/title_slug or whatever you like.
Inside the newly created _title_slug.vue
file add this code:
<template>
<section>
<article class="my-8">
<div class="text-gray-600 font-bold text-sm tracking-wide">
<a v-for="(tag, key) in post.tags" :key="key" :href="'/category/'+tag" class="ml-1">{{ tag }}</a>
</div>
<h1 class="mt-2 text-3xl font-bold">
{{ post.title }}
</h1>
<div class="mt-4 markdown" v-html="post.excerpt + '\n' + post.content">
</div>
</article>
</section>
</template>
<script>
export default {
async asyncData ({ app, params, error, payload }) {
if (payload) {
return { post: payload }
} else {
let { data } = await app.$axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true, title_slug: params.title_slug },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
if (!data.entries[0]) {
return error({ message: '404 Page not found', statusCode: 404 })
}
return { post: data.entries[0] }
}
}
}
</script>
So as you can see we accept the payload as an argument in the asyncData method. We then check if we have the payload available (which is the post for that particular page in our case). If we do then we simply return it as post
to the page data (you can check in Vue dev-tools).
If we don't have a payload i.e. when running our dev server then we simply send a post request to Cockpit. Notice the filter object in the request body that asks for the post with the same title_slug as the requested page. We can then check if this post exists in the response, if it does we return it and if not return the 404 error page.
Fire up the dev server again with npm run dev
. You should have something that looks like this.
Now you may have noticed our markdown is not being parsed and it looks really messy. Don't worry we'll fix this soon!
Note: If you're looking for some markdown placeholder text you can use Lorum Markdown to generate some.
Let's sort out our markdown parsing and code highlighting.
npm install marked highlight.js --save-dev
We'll make a global filter that we can use to parse our Markdown so create a file called filters.js
in the plugins directory and put this in it:
import Vue from 'vue'
import highlightjs from 'highlight.js'
import marked, { Renderer } from 'marked'
// Only import the languages that you need to keep our js bundle small
highlightjs.registerLanguage('php', require('highlight.js/lib/languages/php'))
highlightjs.registerLanguage('javascript', require('highlight.js/lib/languages/javascript'))
highlightjs.registerLanguage('css', require('highlight.js/lib/languages/css'))
// Create your custom renderer.
const renderer = new Renderer()
renderer.code = (code, language) => {
// Check whether the given language is valid for highlight.js.
const validLang = !!(language && highlightjs.getLanguage(language))
// Highlight only if the language is valid.
const highlighted = validLang ? highlightjs.highlight(language, code).value : code
// Render the highlighted code with `hljs` class.
return `<pre><code class="hljs ${language}">${highlighted}</code></pre>`
}
// Set the renderer to marked.
marked.setOptions({ renderer })
Vue.filter('parseMd', function(content) {
return marked(content)
})
Make sure you also add the following to nuxt.config.js, underneath the head: {...} property:
plugins: [
'~/plugins/filters.js'
],
We can now use this filter globally!
Back in _title_slug.vue
in the template where it says v-html we can now access our filter by putting:
v-html="$options.filters.parseMd(post.excerpt + '\n' + post.content)"
I know this isn't the prettiest solution, but unfortunately we can't pipe filters with '|' as we normally would - {{ someMarkdown | parseMd }}
- because filters aren't supported inside v-html.
You can create a method to call instead if you would like to tidy it up.
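For example, a method-based version might look like this sketch (the helper name `renderedPost` is hypothetical; `parseMd` is the global filter we registered in filters.js):

```javascript
// Sketch: move the inline filter call into a component method so the
// template can simply use v-html="renderedPost()".
const component = {
  methods: {
    renderedPost () {
      // Vue exposes globally registered filters on this.$options.filters
      return this.$options.filters.parseMd(this.post.excerpt + '\n' + this.post.content)
    }
  }
}
```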
Back in nuxt.config.js update the css: property to include a theme for highlight.js - full list here.
css: [
'@/assets/css/main.css',
'highlight.js/styles/dracula.css'
],
That's starting to look a bit more like it!
Okay, so we've got our individual blog posts and their routes but we now want to generate routes for the different post categories
based on their tags.
For example if we have a post tagged vue
we want to be able to click on this tag to see all other posts that have been tagged vue
.
So lets go back to nuxt.config.js and update our routes method in the generate: property.
Just before we do, let's add a package that lets us work with collections so we can easily get the data we need from our Cockpit response.
npm install collect.js --save-dev
Make sure to add const collect = require('collect.js')
at the top of our nuxt.config.js too.
Update the generate property in nuxt.config.js so that it resembles the below.
generate: {
routes: async () => {
let { data } = await axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
const collection = collect(data.entries)
let tags = collection.map(post => post.tags)
.flatten()
.unique()
.map(tag => {
let payload = collection.filter(item => {
return collect(item.tags).contains(tag)
}).all()
return {
route: `category/${tag}`,
payload: payload
}
}).all()
let posts = collection.map(post => {
return {
route: post.title_slug,
payload: post
}
}).all()
return posts.concat(tags)
}
},
So here we use the same data returned from Cockpit as previously. Only this time we first collect the post entries into a const called collection
.
For our tags we first map the collection into a new collection of just the post tags. Then we flatten this and call unique() on it to give us a collection of unique tags. (We would normally run flatMap() instead of calling map() and then flatten() however it wouldn't work as expected for me with collect.js)
With this unique collection of tags we map them into the route and payload properties like we did previously. For the tag payload we simply filter the original collection and return only post entries that have the specified tag.
For the posts we can simply map them directly into their route and payloads.
Finally we just call posts.concat(tags)
to join the two together and return this.
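If you'd rather see what this pipeline does without collect.js, here's a plain-JS sketch of the same route/payload derivation (the sample entries are hypothetical):

```javascript
// Plain-JS equivalent of the collect.js pipeline above (sketch).
const entries = [
  { title_slug: 'post-a', tags: ['vue', 'js'] },
  { title_slug: 'post-b', tags: ['vue'] }
]
// Flatten all tags and keep only the unique ones.
const uniqueTags = [...new Set(entries.flatMap(post => post.tags))]
// One category route per unique tag, with the matching posts as its payload.
const tags = uniqueTags.map(tag => ({
  route: `category/${tag}`,
  payload: entries.filter(post => post.tags.includes(tag))
}))
// One route per post, with the post itself as payload.
const posts = entries.map(post => ({ route: post.title_slug, payload: post }))
// Join the two sets of routes together.
const routes = posts.concat(tags)
```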
So now we've got routes for our posts and a category page for each unique post tag!
Since we've set our category routes to be /category/tag-name
we need to create a category directory inside the pages directory.
Inside the category directory create a new file called _tag.vue
(following the same naming convention as before) and put the following inside:
<template>
<section>
<div class="my-8">
<h1 class="mb-6">Posts tagged with "{{ category }}"</h1>
<ul class="flex flex-col w-full p-0">
<li class="mb-6 w-full" v-for="(post, key) in posts" :key="key">
<div class="text-gray-600 font-bold text-sm tracking-wide">
<a v-for="(tag, key) in post.tags" :key="key" :href="'/category/'+tag" class="ml-1">{{ tag }}</a>
</div>
<a :href="'/'+post.title_slug">
<h2 class="my-2 text-gray-800 text-lg lg:text-xl font-bold">
{{ post.title }}
</h2>
</a>
<div class="page-content hidden md:block text-base mb-2" v-html="post.excerpt">
</div>
<a class="text-sm text-blue-400" :href="'/'+post.title_slug">
Read more
</a>
</li>
</ul>
</div>
</section>
</template>
<script>
export default {
async asyncData ({ app, params, error, payload }) {
if (payload) {
return { posts: payload, category: params.tag }
} else {
let { data } = await app.$axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true, tags: { $has:params.tag } },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
if (!data.entries[0]) {
return error({ message: '404 Page not found', statusCode: 404 })
}
return { posts: data.entries, category: params.tag }
}
}
}
</script>
This page is largely similar to our index.vue page in terms of the template. Notice that we again accept the payload from our nuxt.config.js if it's available.
If we don't have a payload, we make a post request to Cockpit and include tags: { $has: params.tag }
in the filter; this returns all posts tagged with that particular category.
We can't call params.tag directly in our template which is why we simply pass it to our data object as category
.
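To make the $has filter concrete, here's roughly what it's doing, sketched in plain JS over a local array (the sample data is made up):

```javascript
// Cockpit's tags: { $has: tag } filter keeps entries whose tags array
// contains the given tag - roughly equivalent to this client-side filter:
const entries = [
  { title: 'Post A', tags: ['vue', 'js'] },
  { title: 'Post B', tags: ['php'] }
]
const tag = 'vue'
const matching = entries.filter(post => post.tags.includes(tag))
```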
In the next part we'll look at how to go about deploying our site and also adding some finishing touches.
You can find Part 3 here - Part 3: Deployment and see a live demo of the site on Netlify here - https://nuxt-cockpit-static-blog.netlify.com
Updated for Nuxt 2 and Tailwindcss 1.0!
Tldr; You can check out the GitHub repo of the finished blog here and see a live demo on Netlify - https://nuxt-cockpit-static-blog.netlify.com
We'll be using the generate
feature of Nuxt.js to generate a static blog and a headless CMS called Cockpit for the api.
It will be a JAMstack project, trying to follow the best practices laid out.
The definition of the JAMstack given on jamstack.org is:
Modern web development architecture based on client-side JavaScript, reusable APIs, and prebuilt Markup.
In our example we'll be writing Markdown
in Cockpit for our posts that will be fetched by Nuxt.js and then parsed to HTML before generating our static blog.
Generating a static site brings several benefits, such as faster load times, improved security and simple, cheap hosting.
You can use a number of different static site generators such as Jekyll, Hugo, Next or Gatsby. There are also many different options for your headless CMS e.g. self hosted options like Strapi, Directus, Ponzu or you can use hosted options like Contentful, Netlify, Prismic or Storyblok.
For a more comprehensive list of headless CMSs - https://headlesscms.org/
And for a list of static site generators - https://www.staticgen.com/
For the site we're building we'll be using Nuxt.js as I love working with Vue and also Cockpit as it's a PHP based Headless CMS and is very quick and easy to set up.
We'll be keeping the headless CMS backend separate from the frontend site. So you will need to create a new app directory on your server called something like cms-yourblog
and another site called yourblog
.
You can then use yourdomain.com
for the frontend and a subdomain such as cms.yourdomain.com
for the backend. You can obviously use whatever subdomain you like.
I'm skipping setting things up in a local environment with version control etc. here, just to speed things up. But you may want to set Cockpit up locally first.
We don't actually need to do much configuration for Cockpit, you can simply download the zip file into your cms-yourblog web root directory and unzip the contents.
The third and fourth commands below simply move the contents of the unzipped cockpit-master directory up one level to the current web root directory and then remove the empty cockpit-master directory.
cd /path/to/your/cms-yourblog/
wget "https://github.com/agentejo/cockpit/archive/master.zip"
unzip master.zip
mv cockpit-master/* cockpit-master/.[^.]* .
rmdir cockpit-master
rm master.zip
You can then go to cms.yourdomain.com/install
to finish off the installation process.
Once you've set up your new password and username we can create a posts collection.
You can think of collections in cockpit like you would a table in a database.
Our new posts collection will have the following fields:
published (a boolean field, with the JSON options {"default": false, "label": false})
title (a text field, with the JSON options {"slug": true}, so Cockpit generates a title_slug for us)
excerpt, content and tags (the fields we'll render on the frontend later)
Make sure to include the options in the provided JSON options field when adding the published and title fields.
We can now head over to settings, then api access, where we will generate an API key so we can retrieve our posts data.
You should see there is a "MASTER API-KEY" that you can generate. This key will have full permissions for your site so you should avoid using it if possible.
Where it says Custom Keys
click add key to add a new custom key. Then in the rules section add the following: /api/collections/get/posts
This means that our key will only have permission to access that particular end point for fetching blog posts. Add a small description too if you like.
This means that if our API key was ever accidentally exposed, an attacker would only be able to view posts and not create/delete them etc.
Create a couple of dummy post entries so we have some initial data to look at.
If you have Postman or Insomnia installed you can then send a get request to:
https://cms.yourdomain.com/api/collections/get/posts?token=YOUR-API-TOKEN
This should return your posts in the entries
array of the response.
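The response has roughly the following shape (simplified here; the exact keys may vary with your Cockpit version, and the field values are placeholders):

```json
{
  "fields": { "...": "the collection's field definitions" },
  "entries": [
    { "title": "My first post", "title_slug": "my-first-post", "published": true }
  ],
  "total": 1
}
```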
Now that we've got our basic CMS setup that can return our post data we can move onto setting up Nuxt.js for the frontend.
First of all we need to install Nuxt. We'll do this on our local computer and run the built in development server.
To install Nuxt run the following command:
npx create-nuxt-app static-blog
Where static-blog is the name of our app. It will ask a few questions, for custom server framework
select none. For custom UI framework
select none (we'll set tailwind up ourselves).
For the rendering mode select Universal
. Select yes to use the axios module
. We'll not bother with eslint
or prettier
for now so select no for both.
This will create a folder called static-blog for our frontend, you can obviously call it whatever you like.
Next we need to enter the newly created directory and run the development server (it should have already installed our dependencies).
cd static-blog
npm run dev
You can now visit http://localhost:3000
in your browser to see the site in action!
Open your preferred code editor (I'll be using Visual Studio Code
) and take a look at the folder structure.
Nuxt automatically creates a route for each file in the pages directory.
So if we simply copy the index.vue file and rename it about.vue, we will be able to visit it at http://localhost:3000/about
.
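To make that mapping concrete, the pages directory we have so far translates to routes like this:

```
pages/
├── index.vue  →  http://localhost:3000/
└── about.vue  →  http://localhost:3000/about
```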
We'll use the dotenv node module so we can access our .env variables inside nuxt.config.js. This module will allow us to create a .env file in our project root that we can store our secret api token and url in.
You should also add .env
to your .gitignore file to make sure you don't accidentally commit and push the contents to GitHub etc.
npm install dotenv --save-dev
Once installed open up nuxt.config.js
and add the following at the very top of the file:
require('dotenv').config()
If you haven't already, create a .env file at your project root and put the following inside:
URL=https://yourdomain.com
BASE_URL=https://cms.yourdomain.com
POSTS_URL=http://cms.yourdomain.com/api/collections/get/posts?token=YOUR-API-TOKEN
Making sure to replace YOUR-API-TOKEN
with the token we generated earlier in Cockpit.
We'll now be able to access these variables throughout our blog using process.env.POSTS_URL
for example.
The reason we used the dotenv package and didn't just add our api key to the env: {...} property in nuxt.config.js is that anything in env gets bundled up in a js file and exposed to the client. So someone would be able to simply open our /_nuxt/xxxxxxxxxxxxxxxxxxxx.js
file and see our api key in plain text!
Install tailwind for our css framework (feel free to use any other css framework you like).
npm install tailwindcss --save-dev
Next, initialize the tailwind config file by running:
./node_modules/.bin/tailwind init tailwind.config.js
Create a new directory called css inside the assets directory and then create a file in here called main.css
and add the following to it:
@tailwind base;
@tailwind components;
@tailwind utilities;
Then install the following dependencies:
npm install autoprefixer glob-all purgecss-webpack-plugin --save-dev
This will allow us to compile our css and also remove any unused css using purgecss.
In the root of the project create a file called postcss.config.js
and insert the following:
module.exports = {
plugins: [
require('tailwindcss')('./tailwind.config.js'),
require('autoprefixer')
]
}
Back in nuxt.config.js, add the following at the very top of the file, above module.exports = {...}
require('dotenv').config() // we already added this earlier when making our .env file
const PurgecssPlugin = require('purgecss-webpack-plugin')
const glob = require('glob-all')
const path = require('path')
import axios from 'axios' // we'll need this later for our dynamic routes
class TailwindExtractor {
static extract(content) {
return content.match(/[A-Za-z0-9-_:\/]+/g) || [];
}
}
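To see what the extractor actually produces, here's the extraction logic run over a snippet of markup (a sketch; the sample string is made up):

```javascript
// The extractor pulls every class-name-shaped token out of a file's
// contents, so PurgeCSS knows which selectors to keep in the final CSS.
const extract = content => content.match(/[A-Za-z0-9-_:\/]+/g) || []
const tokens = extract('<div class="mt-4 md:p-8 hover:underline">')
// tokens now includes 'mt-4', 'md:p-8' and 'hover:underline',
// so those utility classes survive the purge.
```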
Then add our main.css file and update the build: {...} object like this:
css: [
'@/assets/css/main.css'
],
/*
** Build configuration
*/
build: {
extractCSS: true,
/*
** You can extend webpack config here
*/
extend (config, { isDev }) {
if (!isDev) {
// Remove unused CSS using purgecss. See https://github.com/FullHuman/purgecss
// for more information about purgecss.
config.plugins.push(
new PurgecssPlugin({
// Specify the locations of any files you want to scan for class names.
paths: glob.sync([
path.join(__dirname, './pages/**/*.vue'),
path.join(__dirname, './layouts/**/*.vue'),
path.join(__dirname, './components/**/*.vue')
]),
extractors: [
{
extractor: TailwindExtractor,
// Specify the file extensions to include when scanning for
// class names.
extensions: ["html", "vue"]
}
],
whitelist: [
"html",
"body",
"ul",
"ol",
"pre",
"code",
"blockquote"
],
whitelistPatterns: [/\bhljs\S*/]
})
)
}
}
}
We've added a few tags to the whitelist to make sure that purgecss doesn't remove any styles that apply to them.
We should now have tailwindcss up and running with purgecss to remove any unused styles when we come round to running npm run generate
.
Fire up the dev server with npm run dev
just to make sure everything still works.
Inside the components directory create three new files; PageHeader.vue
PageNav.vue
and PageFooter.vue
with the following contents respectively:
<template>
<header class="text-center">
<a class="text-gray-800 text-3xl font-bold" href="/">
<h1>
Static Blog
</h1>
</a>
</header>
</template>
<template>
<nav class="text-center my-4">
<a href="/" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">Blog</a>
<a href="/about" class="p-2 text-sm sm:text-lg inline-block text-gray-800 hover:underline">About</a>
</nav>
</template>
<template>
<footer class="flex justify-center my-4">
<div class="text-gray-800 text-sm">
A static blog built with Nuxt.js, Tailwindcss and Cockpit.
</div>
</footer>
</template>
Now go over to the layouts directory and update default.vue
so that it looks like this:
<template>
<div class="flex flex-row justify-center w-screen">
<div class="overflow-hidden content flex flex-col p-4 md:p-8">
<page-header/>
<page-nav/>
<nuxt/>
<page-footer/>
</div>
</div>
</template>
<script>
import PageHeader from '~/components/PageHeader.vue'
import PageNav from '~/components/PageNav.vue'
import PageFooter from '~/components/PageFooter.vue'
export default {
components: {
PageHeader,
PageNav,
PageFooter
}
}
</script>
Delete any of the default styles that were there as we won't be needing them.
Also add the following style to our main.css file underneath @tailwind components:
.content {
width: 50rem;
}
.markdown p {
@apply mt-0 mb-6;
}
.markdown ul {
@apply mb-6;
}
pre {
@apply my-8;
}
Now we just need to update index.vue in the pages directory.
Make sure that you have the axios module loaded correctly in your nuxt.config.js
/*
** Nuxt.js modules
*/
modules: [
// Doc: https://github.com/nuxt-community/axios-module#usage
'@nuxtjs/axios'
],
In the index.vue page update the file so that it resembles the following:
<template>
<section>
<div class="my-8">
<ul class="flex flex-col w-full p-0">
<li class="mb-6 w-full" v-for="(post, key) in posts" :key="key">
<div class="text-gray-600 font-bold text-sm tracking-wide">
<a v-for="tag in post.tags" :key="tag" :href="'/category/'+tag" class="ml-1">{{ tag }}</a>
</div>
<a :href="'/'+post.title_slug">
<h2 class="my-2 text-gray-800 text-lg lg:text-xl font-bold">
{{ post.title }}
</h2>
</a>
<div class="page-content hidden md:block text-base mb-2" v-html="post.excerpt">
</div>
<a class="text-sm text-blue-400 no-underline" :href="'/'+post.title_slug">
Read more
</a>
</li>
</ul>
</div>
</section>
</template>
<script>
export default {
async asyncData ({ app }) {
const { data } = await app.$axios.post(process.env.POSTS_URL,
JSON.stringify({
filter: { published: true },
sort: {_created:-1},
populate: 1
}),
{
headers: { 'Content-Type': 'application/json' }
})
return { posts: data.entries }
}
}
</script>
Nuxt includes the asyncData method which can be called on the server side before the component data has been set. You can read more about this method here - https://nuxtjs.org/guide/async-data
What we are doing is retrieving the posts from Cockpit and then setting these as the component data in a posts variable.
If you visit the site now at http://localhost:3000
you should see the post entries you've added from Cockpit.
You should now have something that looks like this.
In the next part we'll look at generating our dynamic routes in nuxt.config.js
for our individual blog posts based on their title slug
and also setting up our category page to display posts depending on their tags.
You can find Part 2 here - Part 2: Dynamic Routes