The case: you fire up a professionally prepared Linux image at a cloud platform provider (Amazon, DO, Google, Azure, etc.) to run a production-level service moderately exposed to hacking attempts (non-targeted, non-advanced threats).
What would be the standard quick security-related tuning to configure before you install the meat?
release: 2005, Ubuntu + CentOS (supposed to work with Amazon Linux, Fedora, Debian, RHEL as well)
- Read the micro threat model given in "The case" paragraph above before you compare my advice with your requirements. Otherwise, please help me improve this page with your comments!
- It's not at all a guide for Docker containers or for a seriously hardened production server.
- The author is not a certified unix admin or a devsecops guru.
- It's subjective advice!
- You are already experienced in Linux administration.
- As an admin you work on a trusted machine (desktop, laptop or tablet), properly hardened, continuously patched, with security not worn down by years of usage or by being shared.
- There is a handy facility to generate and store passwords available on the above admin client.
- Your SSH keys or other credentials used to admin the instance are handled securely on your admin machine. (How to satisfy this requirement is an issue in itself. Should your keys be enclosed in password- or token-protected key stores or drives?)
- Your access to the cloud platform provider is secure enough, and your user/account there is handled with security in mind.
- You manage the cloud platform provider's web UI in a browser which you use solely for admin tasks.
- To sum up the above: the security context of your installation ("instance cooking") work is at a higher level of assurance than the level you expect from the instance you configure.
- You use a Linux image provided by a party whose security competence is assumed (like the image baked by the cloud platform provider themselves or the Linux distro builder).
- You log in to the instance via a trusted and clean terminal app, or via the native web CLI in your admin browser.
The best approach is to adhere to cyber hygiene practices from moment zero and not to rely on the idea that security can be hardened in a bonus step afterwards.
Our advice would be to use a corporate-managed iPad as a trusted admin client at a remote location, or a managed desktop on the enterprise internal network. If you or your company are subject to targeted attacks, laptops are a weaker choice for paranoids. But that's another story, to which I will dedicate an article or a post later.
Have a matching firewall protection enabled
I mean the internet-facing firewall behind which your instance runs. Amazon calls it a security group; in DO it's the firewalls feature of a project. It filters which ports internet connections can hit on your instance. Plan which external firewall preset will match the open ports of the instance you are installing: SSH plus the ports of your services. The external firewall may allow more ports, as a preset may serve several types of instances, but the good old approach of minimizing open ports is still valid.
You may have different firewall presets ready for different stages of your installation. In the beginning port 22 is the one you start with, but once a non-standard SSH port is configured you can switch to a preset with 22 closed forever, and the corresponding internet noise will no longer hit your instance.
Choosing the initial connect/login method
- In case you have your working practice to connect to the cloud platform resources, connect as you deem it safe. Otherwise:
- If the cloud platform allows it, I would choose the option of connecting over SSH with a random root password generated by the platform (DO offers this). See the explanation of my pro-passwords (long random passwords) point of view in the 'Password authentication is wrong, key files rule! (Disagreed)' section below.
Don't let a fresh instance run as is in the wild
Be paranoid: don't let a fresh instance run unpatched while exposed to the internet. Once it's fired up, log in within minutes and start patching.
Unless otherwise indicated, issue all the commands below as root.
Make sure the bare system is up-to-date
```
# Ubuntu
apt update
apt upgrade

# CentOS
dnf update
dnf upgrade

# + both, optionally:
shutdown -r now
```
Restarting is technically not necessary, but it won't harm.
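If you want to check whether a reboot is actually pending, Ubuntu's apt drops a marker file after updates that touch the kernel or core libraries (on CentOS, `needs-restarting -r` from dnf-utils does a similar job). A small sketch of the Ubuntu-side check:

```shell
# Ubuntu convention: apt creates this file when a reboot is pending
if [ -f /var/run/reboot-required ]; then
    echo "reboot required"
    cat /var/run/reboot-required.pkgs 2>/dev/null  # which packages asked for it
else
    echo "no reboot pending"
fi
```

On a freshly patched instance this usually tells you "reboot required" right after a kernel update.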
- Enable the local firewall and allow some ports. (It's indeed somewhat redundant if you already enabled a front-facing firewall on the provider's side, but defense in depth still applies.)
- Set the timezone.
```
ufw enable
ufw allow 22
ufw allow <*your custom ssh, eg 52112*>   # see the SSHd setup below
ufw status

# make your time meaningful, change the location:
timedatectl set-timezone Europe/Berlin
```
```
# firewalld may not be present:
dnf install firewalld
systemctl enable firewalld --now

# then
firewall-cmd --state       # should be 'running'
firewall-cmd --list-all    # should be telling something like:
# public (active)
#   services: cockpit dhcpv6-client ssh
firewall-cmd --list-all-zones | more
firewall-cmd --get-default-zone   # public

# add a custom ssh port, see the SSHd setup below
firewall-cmd --permanent --zone=public --add-port=<*your custom ssh, eg 52112*>/tcp   # success
firewall-cmd --complete-reload
firewall-cmd --list-all    # now should also contain your custom ssh port

# make your time meaningful, change the location:
timedatectl set-timezone Europe/Berlin
```
Check the AppArmor or SELinux status:
```
# Ubuntu:
apparmor_status

# CentOS:
sestatus
```
It's not that you must bother with it, but it's still more secure to use a Linux distro preconfigured with an LSM active in enforcing mode, so make it an aspect when you choose your provider and select from the factory-maintained images there. However, based on others' opinions and considering the "micro threat model" set out above, I would not make it a prerequisite. For an LSM to make real sense, it should be tuned to your services and your particular situation.
Make yourself comfortable and productive, like:
- zsh, fish or a similar advanced shell will help you a lot (tuning it is not discussed here).
- wget is used in this guide.
- Don't hesitate to install your editor of choice as early as possible (I promote micro here, though it's a bit more complicated to install).
```
apt/dnf install zsh wget
which zsh
echo $0

# Ubuntu
snap install micro --classic
```
Yes, I start with suppressing a security alert. Let's be realistic: all systems are insecure. Even bash had horrible security flaws; everything has, maybe except ssh, written by paranoid and methodical freaks.))
Alternatively install micro from:
- See my suggestions in the below 'Then services, folders, extensions' section.
```
mkdir -p /tank/packagez
chown -R root:wheel /tank/packagez
mkdir /opt/bin
cd /tank/packagez
wget https://github.com/zyedidia/micro/releases/download/v2<...>/micro-2<...>-linux64.tar.gz
tar xf micro...
mv micro.../micro /opt/bin/
chown -R root:root /opt/bin
chmod -R 755 /opt/bin
export PATH="$PATH:/opt/bin"

micro /etc/profile.d/env.sh
# create or add:
export PATH="$PATH:/opt/bin"
# * or to the PATH in /etc/environment if that one is in use by the os
```
Mod the password quality settings:
```
# on Ubuntu you may need to install this first
apt install libpam-pwquality

# then
micro /etc/security/pwquality.conf
# enable:
minlen = 20
ocredit = -2
```
This sets the minimum password length to 20 and requires 2 special characters in it. Why 20? OK, let it be 25. See also my comment below regarding passwords vs key files. (I suggest using random passwords and SSH password authentication instead of key files, hence the length. See the explanation of my point of view in the 'Password authentication is wrong, key files rule! (Disagreed)' section below.)
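If you want to sanity-check a candidate password against that policy yourself, a quick shell approximation could look like this (a sketch, not PAM's own scoring logic; the `pw` value is a hypothetical example):

```shell
pw='Tr0ub4dor_with_salt!%x9'   # hypothetical candidate

len=${#pw}
# count how many of the policy's special characters occur in it
specials=$(printf '%s' "$pw" | tr -cd '_!$%' | wc -c)

if [ "$len" -ge 20 ] && [ "$specials" -ge 2 ]; then
    echo "policy ok"
else
    echo "too weak"
fi
```

pwquality also checks dictionary words, character classes and more, so treat this only as a first filter.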
Your personal user
For sshing into your persistent instance, imo a good practice looks as follows:
```
ssh -p 52112 <*lola*>@<*IP*>
```
- custom user, custom port
Create an admin user:
```
# Ubuntu
useradd --uid <*1111*> -N --home /home/.<*lola*> --shell /usr/bin/zsh -g admin -G users <*lola*>

# CentOS
useradd --uid <*1111*> -N --home /home/.<*lola*> --shell /bin/zsh -g wheel -G users <*lola*>
```
- remember to replace 'lola' with your own nick
- the UID does not actually matter
- zsh may not be your choice
- mind that the home directory is hidden in the above example (yes, I think obscurity adds a bit to security))
Create your password as a random token. Eg.:
```
< /dev/urandom tr -cd '[:alpha:][:digit:]_!$%' | head -c30 ; echo ""
# OR create it on the client device and copy or retype
passwd <*lola*>
```
When tweaking the choice of special characters, try to use ones that are easy to enter manually on different devices.
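In the same spirit you can also exclude visually ambiguous characters (l, 1, I, O, 0) so the password survives being retyped from a screen. A sketch; the exact charset is a suggestion, adjust it to your devices:

```shell
# generate a 25-char password without look-alike characters:
# a-k,m-z skips 'l'; A-H,J-N,P-Z skips 'I' and 'O'; 2-9 skips '0' and '1'
pw=$(< /dev/urandom tr -cd 'a-km-zA-HJ-NP-Z2-9_!$%' | head -c 25)
printf '%s\n' "$pw"
```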
Consider this: a) the SSHd config may already be properly set; b) there are many settings in it which make sense to harden. So you can choose anything from leaving it as is to tuning it to death (given you read up on the meanings and the effects).
Consider the following quick mods:
- custom port (52112 is an example)
- disallowing root and any unexpected accounts from logging in
Note: a stupid mistake in the sshd config may lock you out of the instance forever (except that you may still log in via the provider's console). So:
Open two SSH connections to the remote instance: one with the original default user and one with your new custom user. In most situations sshd will not kick live sessions even if it is reloaded with a broken configuration (a failed restart will not terminate them).
Modify the below settings in the sshd config. The settings in < > are to be filled in by you.
```
micro /etc/ssh/sshd_config

# mods:
Port <*52112*>
AllowUsers <*lola*>
DenyUsers root guest test admin toor <*ec2-user bitnami ...default accounts*>
DenyGroups
AllowGroups
```
- Mind to use the same custom port which you allowed in the local and external firewalls.
For better cipher, mac and key exchange settings add the following to the end of the config:
```
KexAlgorithms curve25519-sha256@libssh.org,diffie-hellman-group-exchange-sha256
Ciphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes256-ctr
MACs hmac-sha2-512-etm@openssh.com,hmac-sha2-256-etm@openssh.com,umac-128-etm@openssh.com,hmac-sha2-512,hmac-sha2-256,umac-128@openssh.com
```
You may also modify the below settings in sshd_config (keep all the other original settings as they are!). Walk through the file and edit (uncomment and adjust) the lines mentioned below. (This is not the new content of the config; these are the lines in the original which I suggest modifying accordingly.)
```
LoginGraceTime 45
PermitRootLogin no
MaxAuthTries 6
MaxSessions 3
ClientAliveInterval 60
ClientAliveCountMax 2
ChallengeResponseAuthentication no
GSSAPIAuthentication no
AllowAgentForwarding no
AllowTcpForwarding no
GatewayPorts no
X11Forwarding no
PermitUserEnvironment no
```
If a keyword appears more than once, sshd uses the first occurrence, not the last, so check the whole config file. (Appending settings to the end only takes effect for keywords that are not already set earlier in the file; for the rest, edit the existing lines.)
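A quick way to spot keywords defined more than once (note that for most keywords sshd honors the first occurrence it reads). The sketch below runs against a throwaway sample file; point the `grep` at /etc/ssh/sshd_config for real use:

```shell
# demo config with a duplicated keyword
cat > /tmp/sshd_config.sample <<'EOF'
Port 52112
PermitRootLogin no
# comment
Port 22
EOF

# strip comments/blank lines, lowercase the keyword, report duplicates
grep -Ev '^[[:space:]]*(#|$)' /tmp/sshd_config.sample \
    | awk '{print tolower($1)}' \
    | sort | uniq -d
# → port
```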
You may also disable (with a leading hash) the SFTP subsystem, unless you know scp/sftp is something you really need and can't solve otherwise. (Consider that scp allows dumping any amount of data from your system.)
#Subsystem sftp /usr/lib/openssh/sftp-server
Restart the service, check the status and review the effective settings:
```
sshd -t   # validate the config first
systemctl restart sshd.service
systemctl status sshd.service   # check the listening port
sshd -T

# in case of errors:
# - Ubuntu:
tail -30 /var/log/syslog
# - CentOS:
tail -30 /var/log/messages
# - both:
journalctl -xe
```
Keeping the live SSH connections/terminals open, start a new terminal and check that connecting with your custom admin user works:
ssh -p <52112> <lola>@<IP>
Restart your instance and ssh into it with your user.
Password authentication is wrong, key files rule! (Disagreed)
You may wonder why I don't recommend immediately forbidding PasswordAuthentication and allowing PubkeyAuthentication only?!
You may prove me wrong, but beyond a certain level of entropy I see no practical cryptographic-strength difference between a long random password and a key: both are just a bunch of random bytes.
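To put a number on that: the generator used elsewhere in this guide draws 30 characters from a 66-symbol alphabet (52 letters, 10 digits, and `_!$%`), which amounts to 30 x log2(66) bits of entropy, comfortably past the ~128-bit mark usually considered out of brute-force reach:

```shell
# entropy in bits of a 30-char password over a 66-symbol alphabet
awk 'BEGIN { printf "%.0f\n", 30 * log(66) / log(2) }'
# → 181
```

A raw 256-bit key is stronger on paper, but both are far beyond anything an attacker can brute-force.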
From the practical point of view, though, there is a significant difference in handling the two kinds of secrets. A 20-30 character password you can even type in while looking at it in your well-protected password store on an iOS device. With keys there is always the drama of moving them around and protecting the key file, unless you have an enterprise-grade key store. So it's up to you; obviously use keys if you prefer them.
If you want MFA and have the capability to use a token like a Yubikey, that would be the winning solution to heighten the level of assurance.
Kill the root user
When you are done with the SSH configuration and your custom user safely logs in to the instance, it's time to kill root. Not that leaving it would absolutely be a bad idea, but you never know how it was configured. :) "Killing" is done by assigning a random password.
Check interactive password holders (Ubuntu only):
passwd -Sa | grep P
Change the root password to a random one (Ubuntu, CentOS):
echo -n "root:$( < /dev/urandom tr -cd '[:lower:][:digit:]_!$%,.[=*=][=#=]' | head -c40 )" | chpasswd
Note that root can assign any password (despite the fine policy we created above), so be careful about issuing the command without tuning, or test what it does first.
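If you want to see what you are about to feed to chpasswd before actually doing it, split the pipeline (a sketch; the final line stays commented out until you are happy with the generated value):

```shell
# generate first, inspect, then apply
pw=$(< /dev/urandom tr -cd '[:lower:][:digit:]_!$%,.[=*=][=#=]' | head -c40)
echo "length: ${#pw}"   # should say 40
# echo -n "root:$pw" | chpasswd   # uncomment to actually set it
```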
If there are other interactive users, assign random passwords to them too, unless you understand the reason for their interactive presence on your instance.
Then services, folders, extensions
Now you are done with installing the base system — bada-bing-bada-boom.
Proceed to installing your services. That may involve opening new ports in the firewall and adding new users. But first, create the standard folder structure.
My subjective practice is to create a /tank folder for storing custom stuff, including the installation material:
```
mkdir -p /tank/packagez
chown -R root:admin /tank/packagez   # * under CentOS admin is wheel
cd /tank/packagez
```
So 'packagez' (or anything like that: 'install', you name it) will be the location for downloading the installation sources, which you can return to later to check where the running software came from.
I suggest installing your tools and service binaries into the /opt folder. This way you can keep track of what additional software you deployed on the instance.
Using caddy as an example:
```
cd /tank/packagez
mkdir -p /opt/caddy
tar xf caddy -C /opt/caddy
chown -R root:root /opt/caddy
chmod -R 755 /opt/caddy
/opt/caddy/caddy --help
```
caddy is handy to fire up an HTTPS service out of nothing:
caddy reverse-proxy --from acme.web --to localhost:9000
As for the location of configuration, I suggest following the standard guides, which mostly means /etc/:
```
mkdir /etc/caddy
touch /etc/caddy/Caddyfile
chown -R root:caddy /etc/caddy
mkdir /etc/ssl/caddy
chown -R root:caddy /etc/ssl/caddy
chmod 0770 /etc/ssl/caddy
```
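What goes into that Caddyfile for the reverse-proxy example above can be as simple as this (a sketch in Caddy v2 syntax; acme.web is the placeholder domain used earlier):

```shell
# print a minimal Caddyfile; redirect into /etc/caddy/Caddyfile when ready
cat <<'EOF'
acme.web {
    reverse_proxy localhost:9000
}
EOF
```

With a real domain, caddy obtains and renews the TLS certificate automatically.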
Create a user for the service
For service users, a good idea is not to assign them a working shell, like:
```
groupadd --system caddy
useradd --system --gid caddy --create-home --home-dir /var/lib/caddy \
    --shell /bin/false --comment "Caddy web server" caddy
```
- Mind to assign a fake shell to it; alternatively use /usr/sbin/nologin (or /sbin/nologin on CentOS).
Mostly your services will require automatic start, for which you presumably use systemd. Like:
```
wget https://raw.githubusercontent.com/caddyserver/dist/master/init/caddy.service -P /etc/systemd/system
chmod 644 /etc/systemd/system/caddy.service
micro /etc/systemd/system/caddy.service

# General approach
User=caddy
Group=caddy
...
PrivateTmp=true
PrivateDevices=true
ProtectHome=true
ProtectSystem=full

# Then:
systemctl daemon-reload
systemctl start caddy
systemctl status caddy
journalctl -u caddy -b -f
tail -40 /var/log/caddy.log

# when it works
systemctl enable caddy
```