Blog

If you want to allow a user to upload files securely to a site, you can grant them FTPS access.

FTPS (also known as FTP-SSL and FTP Secure) is an extension to the commonly used File Transfer Protocol (FTP) that adds support for the Transport Layer Security (TLS) and, formerly, the Secure Sockets Layer (SSL) cryptographic protocols.
https://en.wikipedia.org/wiki/FTPS

While SFTP should generally be used instead, sometimes apps or users require FTP.

While FTPS adds a layer to the FTP protocol, SFTP is a different protocol based on the network protocol SSH (Secure Shell). Unlike both FTP and FTPS, SFTP uses only one connection and encrypts both authentication information and data files being transferred.
https://www.keycdn.com/support/ftps-vs-sftp


To add an FTPS-only user to Red Hat Enterprise Linux 8 (RHEL 8)
Note, of course, this should work for other Linux flavors too


1) Create your user appuser1
> sudo useradd -s /sbin/nologin appuser1

Setting /sbin/nologin as the shell prevents the user from using SSH or getting shell access

 

2) Install an FTP server, vsftpd
> sudo yum install vsftpd

3) Update the vsftpd config
> sudo vim /etc/vsftpd/vsftpd.conf

 

Enable local users

..
# Uncomment this to allow local users to log in.
# When SELinux is enforcing check for SE bool ftp_home_dir
local_enable=YES
..
# Allow virtual users to use the same privileges as local users
virtual_use_local_privs=YES

# Setup the virtual users config folder
user_config_dir=/etc/vsftpd/user_config/
..
 

More logging

..
# more verbose logging, including connections and commands
xferlog_std_format=NO
log_ftp_protocol=YES
vsftpd_log_file=/var/log/vsftpd/vsftpd.log
dual_log_enable=YES
..

 

Restrict users to a dir

..
# restrict users to their home dir (as set in /etc/passwd)
chroot_local_user=YES
..

 

Your ISP or router may block the default port 21, so use another port, such as 2121
FTP requires a separate port for the data channel, hence 2120

..
# port 21 blocked by .. modem or router
listen_port=2121
# data port, chosen to match
ftp_data_port=2120
..

 

Enable a whitelisted access list

..
# /etc/pam.d/vsftpd tries to deny users listed in /etc/vsftpd/ftpusers;
# that pam_listfile line is commented out in step 4 below
pam_service_name=vsftpd

# default behaviour: deny the users in the file, allow everyone else
# userlist_enable=YES
# userlist_file=/etc/vsftpd/user_list

# whitelist: allow only the users in the file
userlist_enable=YES
userlist_file=/etc/vsftpd/sci_user_list
userlist_deny=NO
..


Enable passive mode.
In an active mode connection, the client makes the initial control connection and sends PORT, and the server initiates a second (data) connection back to the client. In a passive connection, the client sends the PASV command, which asks the server for a port number to connect to. Passive mode solves the problem of the FTP client's firewall blocking that incoming connection.

..
pasv_enable=YES
pasv_min_port=2124
pasv_max_port=2148
pasv_address=[your public ip]
..
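Why the passive port range matters: when the client sends PASV, the server's reply encodes the address and data port it opened as six numbers, where the port is p1*256 + p2, and that port must be reachable through your firewall. A small Python sketch of the parsing (the reply string and IP below are made-up examples):

```python
import re

def parse_pasv(reply: str) -> tuple[str, int]:
    """Parse a 227 'Entering Passive Mode' reply into (host, port)."""
    m = re.search(r"\((\d+),(\d+),(\d+),(\d+),(\d+),(\d+)\)", reply)
    if not m:
        raise ValueError(f"not a PASV reply: {reply!r}")
    n = [int(x) for x in m.groups()]
    # first four numbers are the IPv4 address, last two are the port bytes
    return ".".join(map(str, n[:4])), n[4] * 256 + n[5]

# 8*256 + 76 = 2124, the bottom of the pasv_min_port range above
print(parse_pasv("227 Entering Passive Mode (203,0,113,10,8,76)"))
# → ('203.0.113.10', 2124)
```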

 

Set the paths to your existing web SSL certs

..
# path of the SSL certificate
# using web certs
rsa_cert_file=/etc/ssl/site.crt
rsa_private_key_file=/etc/ssl/site.key
# enable SSL
ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES
# TLS is more secure than SSL, so enable TLS and disable SSLv2/SSLv3
ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO
require_ssl_reuse=NO
ssl_ciphers=HIGH
# enable SSL debugging
debug_ssl=YES
..

 

4) Update the PAM authentication config for vsftpd
> sudo vim /etc/pam.d/vsftpd

#%PAM-1.0
session    optional     pam_keyinit.so    force revoke
# prevented login with valid user
# auth       required pam_listfile.so item=user sense=deny file=/etc/vsftpd/ftpusers onerr=succeed
# /sbin/nologin is not a valid shell, so ignore check
# auth       required pam_shells.so
auth       include  password-auth
account    include  password-auth
session    required     pam_loginuid.so
session    include  password-auth

 

5) Create the directory the FTPS user will be restricted to, also known as the chroot or 'jail' directory.  This directory must be root-owned and not writable by any other user or group.
Note, this is enabled via chroot_local_user=YES in vsftpd.conf

So if your website is in /var/www/html
> ls -ld /var/www/html
drwxr-xr-x 2 root root /var/www/html

You can use that directory as your FTPS user's chroot directory

To create another directory
> sudo mkdir /data
> sudo chmod 755 /data
> ls -ld /data
drwxr-xr-x 2 root root /data
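To sanity-check a candidate chroot directory from a script, here is a small Python sketch; the function name and the expected_uid parameter are just for illustration (vsftpd itself wants the directory root-owned, i.e. uid 0):

```python
import os
import stat

def is_safe_chroot(path: str, expected_uid: int = 0) -> bool:
    """Check that path is a directory owned by expected_uid (root by
    default) and not writable by group or other, which is what vsftpd
    expects of a chroot root."""
    st = os.stat(path)
    if not stat.S_ISDIR(st.st_mode):
        return False
    if st.st_uid != expected_uid:
        return False
    # group/other write bits (0o022) must be clear
    return (st.st_mode & 0o022) == 0

# e.g. is_safe_chroot("/var/www/html") for the drwxr-xr-x root:root dir above
```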

 

Change the user's home directory to the chroot directory (use whichever matches your setup)
> sudo usermod -d /var/www/html appuser1
or
> sudo usermod -d /data appuser1

6) Add a custom config for the user,
which allows the FTP user to create files as another user
> sudo vim /etc/vsftpd/user_config/appuser1
# also set the user's home dir in /etc/passwd
local_root=/var/www/html
write_enable=YES
# create new files as this user
guest_enable=YES
guest_username=appweb1

7) Restart vsftpd
> sudo systemctl restart vsftpd

 

8) Update your firewall
If you are using Amazon EC2, configure your Security Group to allow
TCP 2120-2148.  These cover the listening port (2121), the data port (2120), and the passive mode range (2124-2148)

 

9) Test using an FTP client, such as FileZilla
https://filezilla-project.org/
Note, don't forget to change the default port 21 to the one you configured
(listen_port=2121 in vsftpd.conf)
And enable encryption:
Require explicit FTP over TLS
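If you prefer to test from a script, Python's standard-library ftplib supports explicit FTP over TLS. A minimal sketch; the host and credentials in the usage comment are hypothetical, substitute your own:

```python
from ftplib import FTP_TLS

def list_ftps_dir(host: str, port: int, user: str, password: str) -> list:
    """Connect with explicit FTP over TLS (AUTH TLS), switch the data
    channel to TLS as well, and return a directory listing."""
    ftps = FTP_TLS()
    ftps.connect(host, port, timeout=10)
    ftps.login(user, password)  # FTP_TLS.login() issues AUTH TLS first
    ftps.prot_p()               # protect (encrypt) the data connection too
    try:
        return ftps.nlst()
    finally:
        ftps.quit()

# usage (hypothetical host/credentials):
#   print(list_ftps_dir("ftp.example.com", 2121, "appuser1", "secret"))
```

ftplib uses passive mode by default, which matches the pasv_enable=YES configuration above.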


You have now created a limited FTPS user. 

 

-End of Document-
Thanks for reading

While developing web sites/applications locally, to facilitate development, you often use a local host name, such as 127.0.0.1, localhost, site.local, example.test, or sub.company.local. While you can access the site/application using normal HTTP, sometimes the application is configured for secure HTTP i.e. HTTPS or you always want to force HTTPS no matter the environment (dev/qa/prod).
 
However, ‘made up’ local host names such as example.test, site.local, or site.dev cannot use a certificate from an external certificate authority (CA), as they cannot be reached remotely for the CA to verify authenticity. So typically one would create a self-signed certificate. But over time, browsers have restricted the acceptance of self-signed certificates, resulting in a less friendly or impossible developer workflow. The next step would be to run your own CA in your local environment, but that can be tedious and error prone. Luckily, there is a utility which greatly simplifies the process for Linux, macOS, and even Windows: mkcert
 
“Using certificates from real certificate authorities (CAs) for development can be dangerous or impossible (for hosts like example.test, localhost or 127.0.0.1), but self-signed certificates cause trust errors. Managing your own CA is the best solution, but usually involves arcane commands, specialized knowledge and manual steps.
 
mkcert automatically creates and installs a local CA in the system root store, and generates locally-trusted certificates. mkcert does not automatically configure servers to use the certificates, though, that's up to you.”

https://github.com/FiloSottile/mkcert

 

To create a locally-trusted SSL certificate using Windows, Nginx, and Laragon (a portable WAMP stack):

 

Download the latest mkcert for your OS (Windows in this case)

https://github.com/FiloSottile/mkcert/releases

mkcert-v1.4.1-windows-amd64.exe

 

Copy the file to a new dir

C:/laragon/bin/mkcert/

And rename to a generic mkcert.exe

Note, this assumes you installed/extracted Laragon to C:/laragon
In a command window with Administrator privileges (Run as Admin)

> cd C:\laragon\etc\ssl

 

Specify the destination of the CA cert

> mkdir C:\laragon\etc\ssl\mkcert

 

Set a persistent environment variable for mkcert to read (setx stores it for all future command windows)

> setx CAROOT "C:\laragon\etc\ssl\mkcert"

 

By default, it would be in your user directory

> C:\Users\<user>\AppData\Local\mkcert

 

Close the command window and re-open it so the new environment variable can be read
(in Linux you might source ~/.bash_profile; in Windows, a new window does it)

 

Test that the environment variable is indeed set

> cd C:\laragon\etc\ssl\
> echo %CAROOT%
C:\laragon\etc\ssl\mkcert

 

Create and install your local CA

> ..\..\bin\mkcert\mkcert -install 

 

You will be shown a prompt warning that you are about to do exactly what you want to do: add a local CA

After reading it, click Yes

 

Note, by default the CA key will be named rootCA-key.pem and the CA cert will be named rootCA.pem. The names are hard coded in the project source main.go, should you want to change them and compile the project yourself.
 
You can view the CA via the Certificate Manager
Start Menu -> Run -> certmgr.msc
Or
Laragon -> Menu -> Nginx -> Certificate Manager
Note, while Laragon does have its own CA which it can add, it does not seem to work with recent browser updates.
 
Click through to Trusted Root Certification Authorities -> Certificates

Scroll to find the mkcert entry (named like mkcert <user>@<computer>)

Note, you can delete it later if you want, by right-clicking it and selecting Delete

 

Now generate the SSL certificate, which will be signed by the CA you just added

 

> cd C:\laragon\etc\ssl
> ..\..\bin\mkcert\mkcert site.local "*.site.local"

This creates the SSL key and cert in C:\laragon\etc\ssl as

site.local+1-key.pem and site.local+1.pem (the +1 counts the extra wildcard name)

 

Rename the files, or specify names when creating: 
> ..\..\bin\mkcert\mkcert -key-file company.localhost.key -cert-file company.localhost.crt company.localhost *.company.localhost
Which would match

company.localhost
site1.company.localhost
site2.company.localhost
 

Or more generically 
> ..\..\bin\mkcert\mkcert -key-file dev.localhost.key -cert-file dev.localhost.crt dev.localhost *.dev.localhost
Which would match

dev.localhost
site1.dev.localhost
site2.dev.localhost

 

Note, most browsers do not support wildcards two levels deep (a wildcard matches only one label), so don't use just localhost or test as the base domain
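The one-label rule can be sketched in Python. This is a simplified version of the matching browsers do per RFC 6125, showing why *.dev.localhost covers site1.dev.localhost but not a.b.dev.localhost:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Certificate wildcard matching in the single-label style browsers
    use (RFC 6125): '*' matches exactly one DNS label, never several."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False  # a wildcard never spans multiple labels
    for p, h in zip(p_labels, h_labels):
        if p != "*" and p != h:
            return False
    return True

print(hostname_matches("*.dev.localhost", "site1.dev.localhost"))  # True
print(hostname_matches("*.dev.localhost", "a.b.dev.localhost"))    # False
print(hostname_matches("*.localhost", "site1.dev.localhost"))      # False
```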

Note, Chrome redirects the .dev TLD to HTTPS, as Google now owns the official .dev TLD. While any domain name you override in your hosts file should be ok, it is best to use a domain you actually own. If that is not practical, .test, .local, and .localhost are the commonly offered alternatives.

Edit your Nginx or Apache config to add the SSL cert and key, and reload

Using the default website in Laragon as a working example

C:\laragon\etc\nginx\sites-enabled\00-default.conf 

    listen 8443 ssl;
    server_name site1.dev.localhost;

    # Enable SSL
    ssl_certificate "C:/laragon/etc/ssl/dev.localhost.crt";
    ssl_certificate_key "C:/laragon/etc/ssl/dev.localhost.key";
    ssl_session_timeout 5m;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;


Note, if you are using Skype, you may not be able to run a webserver on port 443, so choose another port, such as 8443.


Assuming you have added your local host name to /etc/hosts or

C:\Windows\system32\drivers\etc\hosts 
127.0.0.1       site1.dev.localhost

 

Viewing https://site1.dev.localhost
should now show a valid SSL certificate.

 

 

Enjoy your HTTPS, and develop away.

To install on another computer, or to reinstall after deleting the mkcert CA:
Copy the full Laragon dir, or at least the rootCA.pem (plus rootCA-key.pem if you want to issue new certs)

Set a persistent environment variable for mkcert to read (setx stores it for all future command windows)

> setx CAROOT "C:\laragon\etc\ssl\mkcert"

Close and re-open the command window, then create and install your local CA

> ..\..\bin\mkcert\mkcert -install 

 

Re-enjoy your HTTPS, and develop away.

 

 

-End of Document-
Thanks for reading

MFA stands for Multifactor authentication, or Multi-factor authentication.
Multifactor authentication (MFA) is a security system that requires more than one method of authentication from independent categories of credentials to verify the user's identity for a login or other transaction.
Source: https://searchsecurity.techtarget.com/definition/multifactor-authentication-MFA

Note, MFA is also referred to as 2FA or Two-Factor Authentication

If you want to 'force' MFA for your users in AWS, you can follow the AWS tutorial:
'Enable Your Users to Configure Their Own Credentials and MFA Settings'
https://docs.aws.amazon.com/IAM/latest/UserGuide/tutorial_users-self-manage-mfa-and-creds.html
which creates a custom Policy and assigns it to a Group and then a User. 

Users with this Group will be 'forced' to add MFA before they can access resources.
'Forced' is a misnomer though. Once logged in, it may appear that you can do things, but most pages show unfriendly errors saying you do not have access, and which Policy would grant it. So IAM-admin friendly, but not user friendly. Once you have enabled MFA and logged in using it, you will be able to access resources. So 'told to' or 'resigned to' would be a better Policy description. It would be nice if there were an official AWS Policy to force MFA, where the only screen you saw upon login was that. But oh well, the tutorial does 'work', so that's all good.

But what if, when you created the user, you required the user to change their password on initial login?
The Policy listed in the AWS tutorial does not allow the user to change their password if they have not enabled MFA. A chicken-and-egg problem. Or an angry user if devops didn't test first, or a frustrated devops if they did test first.

To allow a user to change their password on initial Login, edit the Policy supplied by AWS.
Simply add the iam:ChangePassword permission to the DenyAllExceptListedIfNoMFA list.

  ...
  {
    "Sid": "DenyAllExceptListedIfNoMFA",
    "Effect": "Deny",
    "NotAction": [
      ...
      "iam:ChangePassword",
      ...
    ],
    ...
  }
  ...
So now a new user can log in, change their password, see a bunch of pages they can't do anything with (ugh), go to their Security Credentials and enable MFA, log out, log back in with MFA, and then be able to get to work.

 -End of Document-
Thanks for reading

Android Studio is the official integrated development environment for Google's Android operating system, built on JetBrains' IntelliJ IDEA software and designed specifically for Android development. It is available for download on Windows, macOS and Linux based operating systems.
Source: https://en.wikipedia.org/wiki/Android_Studio

When you install Android Studio, there is an option to install the Intel emulator accelerator (HAXM).
But if you do not have an Intel CPU, and instead have an AMD CPU, HAXM is not very useful.
So uncheck that.  Luckily, there is an option for AMD CPUs after installation.

Android Studio -> Tools -> SDK Manager
SDK Tools tab

In the list, there is an option for the AMD Emulator
Android Emulator Hypervisor Driver for AMD Processors (installer)

Which "makes it possible to [run] the emulator on AMD CPUs without needing to enable Hyper-V and with performance on par with HAXM"
Source: https://androidstudio.googleblog.com/2019/11/android-emulator-hypervisor-for-amd.html

You can also uncheck the 
Intel x86 Emulator Accelerator (HAXM installer)
if you checked it during the initial install.

You may also want to check the 
Google USB Driver 
if you want to later debug/test with a phone connected via USB

Also, you will need to enable the Windows Hypervisor Platform feature
Note: You do not have to install full Hyper-V, so you can still use VirtualBox or such.

Control Panel -> Programs and Features-> Turn Windows features on or off (left side)
Scroll down, enable Windows Hypervisor Platform

Even if you are not prompted to reboot on save, you need to reboot to enable the feature

You should now be able to create and launch an Android Emulator
Tools -> AVD Manager
Select an x86_64 image with Google Play

Click Next and Finish
Note: You may still be prompted with an Intel HAXM dialog, probably an Android Studio bug.
Obviously Intel HAXM will fail to install/configure. But that is OK, as the Android Emulator will still launch.
You should now have a usable Android Emulator, code away.

-End of Document-
Thanks for reading