Blog

While Amazon Web Services (AWS) provides a lot of services and administrative capabilities through their Console web application, you still have to do some things manually.

After creating an EC2 instance, you may want to attach additional storage.  The additional storage can be used to host your application independent of the OS/root partition, allowing you to more easily migrate, back up, and manage your application and its data.

After creating the additional storage and attaching the volume to your EC2 instance, you still need to tell the OS on the instance about the extra storage, which is where this tutorial comes into play.


Mount attached volume 

List partitions via lsblk, which lists information about all available or specified block devices
> sudo lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0  50G  0 disk            <- no mount point
nvme0n1     259:1    0  16G  0 disk
├─nvme0n1p1 259:2    0   1M  0 part
└─nvme0n1p2 259:3    0  16G  0 part /

nvme0n1 is the OS/root device
nvme1n1 is the externally attached volume

Verify that there is no data on the partition
> sudo file -s /dev/nvme1n1

Response if there is no file system, and thus no data
/dev/nvme1n1: data

Response if the partition has already been formatted
/dev/nvme1n1: SGI XFS filesystem data

If there is no file system, create one
> sudo mkfs -t xfs /dev/nvme1n1 

Make the mount directory, which can be any name, but `data` is generic enough
> sudo mkdir /data

Mount the partition to the directory
> sudo mount /dev/nvme1n1 /data
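
To confirm the mount succeeded, check the free space reported for the new mount point
> df -h /data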

Edit fstab to mount on boot
fstab defines your volumes and mount points at boot, so make a copy first
> sudo cp /etc/fstab /etc/fstab.orig 

Find the UUID of the device, which will be used in fstab to identify the volume
> sudo blkid 
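
Sample output for the new volume; your UUID will differ
/dev/nvme1n1: UUID="123ebf5a-8c9b-1234-1234-1234f6f6ff30" TYPE="xfs"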

Edit fstab; use your UUID and match the existing entry spacing
The nofail option allows the boot sequence to continue even if the drive fails to mount
> sudo vi /etc/fstab
UUID=123ebf5a-8c9b-1234-1234-1234f6f6ff30 /data      xfs     defaults,nofail 0 2 

To verify the fstab configuration works without rebooting, unmount and then auto mount the volume
> sudo umount /data
> sudo mount -a

List the partitions again, and you will see your data directory, which you can now use
> lsblk
NAME        MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0  50G  0 disk /data      <- it worked
nvme0n1     259:1    0  16G  0 disk
├─nvme0n1p1 259:2    0   1M  0 part
└─nvme0n1p2 259:3    0  16G  0 part /

> ls -l /data

-End of Document-
Thanks for reading

Sometimes you may have multiple code repositories that you always end up checking out together and deploying together.  Or maybe it seemed like a good idea to separate your code based on functionality, but in practice it has become cumbersome.

With a few commands, you can merge the multiple repositories into one repository while keeping their history.  You can also keep each separate repository in its own subdirectory, thus maintaining the organization of your code, while utilizing one repository to facilitate development, branches, and deployments.

Example list of current separate repositories:
    ls 
    /local_git/old_project_1
    /local_git/old_project_2

Update your local repositories with the latest code: pull, and commit/push any pending changes

Create a new repository for the combined project and push
    cd /local_git
    mkdir new_combined_project
    cd new_combined_project
    git init .
    touch README.md
    git add README.md
    git commit -m "add readme"
    git remote add origin https://github.com/your_repo_url
    git push -u origin master

Add the first separate repository to the new combined repository
    cd /local_git/new_combined_project
    git remote add old_project_1 ../old_project_1 

List repositories
    git remote -v    

    old_project_1  ../old_project_1.git (fetch)
    old_project_1  ../old_project_1.git (push)
    origin  https://github.com/your_repo_url.git (fetch)
    origin  https://github.com/your_repo_url.git (push)

Note: If you entered the wrong path to your local repository, you can remove the remote entry and re-add it
    git remote remove old_project_1 

Fetch the branches and tags, including master, from the first separate repository
    git fetch old_project_1 --tags 

Merge the files and histories for the first separate repository
    git merge --allow-unrelated-histories old_project_1/master

You should have a list of files and directories from the first separate repository
    ls
    your files from old_project_1

Optionally, create a subdirectory to move the files into.
    cd /local_git/new_combined_project
    mkdir old_project_1   

Move the files and folders into the new nested directory
git mv !(old_project_1|old_project_2|README.md) old_project_1

Note: If you just mv or cut/paste the files into the new directory, git may not preserve the history for those files.

Note: !() excludes the listed directory/file

Note: If bash reports a syntax error for the !( pattern, enable extended globbing
    shopt -s extglob 

Your directories/files from the first separate repository should now be in
    /local_git/new_combined_project/old_project_1
and you should have the git history for old_project_1 
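
Since git mv only stages the moves, commit and push to record the new layout
    git commit -m "move old_project_1 into subdirectory"
    git push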

Consolidated commands repeating for the second separate repository
    cd /local_git/new_combined_project
    ls
    git remote add old_project_2 ../old_project_2
    git remote -v
    git fetch old_project_2 --tags
    git merge --allow-unrelated-histories old_project_2/master
    mkdir old_project_2
    git mv !(old_project_1|old_project_2|README.md) old_project_2
    git commit -m "move old_project_2 into subdirectory"
    ls
    ls old_project_2

Your multiple separate repositories are now merged into one repository, with their history.  After verifying the merge by cloning to a new directory and viewing its history (a sketch follows), you can remove the prior separate repositories.
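
A minimal verification sketch; the clone directory and file name here are placeholders
    cd /tmp
    git clone https://github.com/your_repo_url verify_merge
    cd verify_merge
    git log --oneline
    git log --follow old_project_1/some_file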

Reference: https://stackoverflow.com/a/10548919


-End of Document-
Thanks for reading

Dotenv is a zero-dependency module that loads environment variables from a .env file into process.env. Storing configuration in the environment separate from code is based on The Twelve-Factor App methodology.
Reference: https://github.com/motdotla/dotenv

While you can use NodeJS Dotenv to manage configuration per environment,
another approach is to create a configuration file per environment, and use an environment marker file to distinguish which configuration to use.

Why? 

It is useful to version control your configuration, so every developer and every instance of your application has the same configuration keys and values.

An example of how to setup a production and staging environment follows. 

Create the environment file as root to minimize the odds of the file being removed or changed

> cd /home/yourapp/
> sudo touch env-prod

> cd /home/yourappstg/
> sudo touch env-stg
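
Verify each marker file exists and is owned by root
> ls -l /home/yourapp/env-prod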

Helper scripts to start and restart your NodeJS Forever service
Note: [ -f "env-stg" ] returns true if the file exists 

> start-yourapp.sh
#!/bin/bash

if [ -f "env-stg" ]; then
    forever start -a --minUptime 1000 --spinSleepTime 2000 --uid yourapp-stg yourapp.js
else
    forever start -a --minUptime 1000 --spinSleepTime 2000 --uid yourapp yourapp.js
fi

> restart-yourapp.sh
#!/bin/bash

if [ -f "env-stg" ]; then
    forever restart yourapp-stg
else
    forever restart yourapp
fi 
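
Make the helper scripts executable
> chmod +x start-yourapp.sh restart-yourapp.sh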

Create a configuration file per environment, ensuring that each configuration has the same keys, and varying the values as appropriate.

> config/config-stg.js
module.exports = {
  port : 9011,
  log : {
    console : { level : 'silly' }
  }
};

> config/config-prod.js
module.exports = {
  port : 9001,
  log : {
    console : { level : 'error' }
  }
}; 

Create a base configuration script to read in the appropriate configuration file 

An example for a NodeJS process
> config/config.js
const path = require('path');
const fs = require('fs');

let env;
// check for env-stg or env-prod file
if (fs.existsSync('env-stg')) {
  env = 'stg';
} else {
  env = 'prod';
}

const configPath = path.resolve(process.cwd(), `config/config-${env}`);
const config = require(configPath);
// visual validation of correct env
console.log('Using config ' + configPath);
module.exports = config;
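
A quick sanity check that the expected configuration loads; run from the staging application directory, and the output below assumes the example values above
> node -e "console.log(require('./config/config.js').port)"
Using config /home/yourappstg/config/config-stg
9011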

An example for a VueJS/Nuxt process 

Because VueJS/Nuxt code is also bundled for the browser,
create another env file that can be required, to avoid build warnings and errors

> sudo vi env
module.exports = {
    env: 'stg'
}

Add configuration as needed

> nuxt.config-stg.js
module.exports = {
  server: { port: 9013 },
};

> nuxt.config-prod.js
module.exports = {
  server: { port: 9003 },
}; 

Create a base configuration script to read in the appropriate configuration file.

> nuxt.config.js
const path = require('path');

// check for env file
let env;
try {
  // (base)/
  // require ('./env-stg');
  // using exports, as when require from .vue, build causes warning 'Module not found: Error: Can't resolve './env-stg''
env = require('./env');
  env = env.env;
} catch (e) {
  // default prod
  env = 'prod';
}

// check for env based nuxt config when called from different relative paths
let configPath;
let config;
try {
  // (base)/
  configPath = `./nuxt.config-${env}.js`;
  // config = require(configPath); // on build, results in warning 'Critical dependency: the request of a dependency is an expression'
  config = require(`./nuxt.config-${env}.js`);
} catch (e) {
  try {
    // (base)/server/
    configPath = `../nuxt.config-${env}.js`;
    config = require(`../nuxt.config-${env}.js`);
  } catch (e) {
    // (base)/pages/dir/
    configPath = `../../nuxt.config-${env}.js`;
    config = require(`../../nuxt.config-${env}.js`);
  }
}

// visual validation of correct env
console.log('Building nuxt using ' + configPath);
module.exports = config;

Now you can check in the configuration files, but do not check in the env, env-stg, and env-prod marker files (add them to .gitignore), as those should vary based on the deployed environment.


-End of Document-
Thanks for reading

The purpose of NodeJS Forever is to keep a child process (such as your node.js web server) running continuously and automatically restart it when it exits unexpectedly. Forever basically allows you to run your NodeJS application as a managed background process.
Reference: https://stackoverflow.com/a/32944853

A simple CLI tool for ensuring that a given script runs continuously (i.e. forever)
https://github.com/foreversd/forever#readme

A simple example to start and manage Forever
> forever start -a --minUptime 1000 --spinSleepTime 2000 --uid yourapp-stg yourapp.js

-a                      append to the existing logs
--minUptime 1000        minimum uptime (ms) before a restart is considered
--spinSleepTime 2000    time (ms) to wait before restarting a crashing script
--uid yourapp-stg       name the Forever process

List all running Forever processes
> forever list

info:    Forever processes running
data:    uid   command       script                          forever pid   id          logfile                            uptime
data:    [0]   yourapp-stg   /usr/bin/node start.js   1668             23197   /home/yourapp/.forever/yourapp.log     0:1:20:14.94

You can restart and stop by uid (the name) or by index number.
Note: the index is assigned incrementally, so it may not always be the same number.

> forever restart yourapp-stg
> forever restart 0

> forever stop yourapp-stg
> forever stop 0 
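
Forever also provides a logs command; with no arguments it lists the log files, and given a uid or index it prints that process's log
> forever logs
> forever logs yourapp-stg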

And since you may not want to type or remember all these options, create some helper shell scripts

> start-yourapp.sh
#!/bin/bash
forever start -a --minUptime 1000 --spinSleepTime 2000 --uid yourapp-stg yourapp.js 

> restart-yourapp.sh
#!/bin/bash
forever restart yourapp-stg 

While Forever will keep your NodeJS process running, it will not start it on reboot.
One simple method to ensure your NodeJS application runs after a reboot is to add a crontab entry that starts your Forever process.

Create a crontab entry as the user your app runs as
> crontab -e
@reboot /bin/sh /home/yourapp/crontab-reboot.sh 
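
Verify the entry was saved
> crontab -l
@reboot /bin/sh /home/yourapp/crontab-reboot.sh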

And create the reboot script
> crontab-reboot.sh
#!/bin/bash

# export path to NodeJS, Forever
export PATH=/usr/local/bin:$PATH

# cd to location of script
cd /home/yourapp || exit 

# run script, in this case Forever
forever start -a --minUptime 1000 --spinSleepTime 2000 --uid yourapp start.js
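
To test without an actual reboot, run the script directly and confirm the process is listed
> /bin/sh /home/yourapp/crontab-reboot.sh
> forever list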

So now your application will run .. Forever .. yup.


-End of Document-
Thanks for reading