AllStarLink Infrastructure

From W9CR

This page describes the various servers and hosts that comprise the ASL network services.

Overview

The ASL architecture is designed to scale across donated or purchased virtual machines. Any bare-metal servers will be set up with a VM technology; ASL is agnostic to the chosen hypervisor.

At the core of ASL is a distributed database, which is active-active across all nodes. The registration servers talk to this database, as do the nodes-list and DNS servers. These are the core ASL services: IAX2 registration, the nodes list, and DNS. All other services are nice to have, but do not affect the availability of the network for end users.

The core servers are all interconnected using host-to-host IPsec. These are not tunnels; rather, traffic is encrypted directly between hosts using pre-shared keys. This has proven reliable and works well even over the best-effort internet. The DB servers require encrypted channels, as they don't support encryption at the application level. This also simplifies networking between ASL hosts.
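Host-to-host transport-mode IPsec of this kind could be expressed, for example, in strongSwan's classic ipsec.conf format. This is a hypothetical sketch: the addresses and key are placeholders, and the real peer list and IKE settings are not shown here.

```
# /etc/ipsec.conf -- hypothetical host-to-host transport policy (strongSwan syntax)
conn asl-peer
    type=transport      # encrypt traffic directly between the two hosts, no tunnel
    authby=secret       # authenticate with a pre-shared key
    left=198.51.100.10  # this host (example address)
    right=198.51.100.20 # peer host (example address)
    auto=start

# /etc/ipsec.secrets -- the pre-shared key for this peer pair (example value)
198.51.100.10 198.51.100.20 : PSK "replace-with-a-long-random-secret"
```

One such conn/secret pair per peer gives the full host-to-host mesh.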

Primary DNS for register.allstarlink.org is served with short TTLs. Should any one server go offline, it is pulled from DNS and ages out of resolver caches within 30 seconds. Several secondary servers exist for the remainder of the ASL hosts.
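In zone-file terms, this failover behavior amounts to publishing the registration records with a 30-second TTL. A hypothetical fragment (the addresses are examples, not the real servers):

```
; register.allstarlink.org with 30-second TTLs (example addresses).
; If a server dies, its A record is withdrawn and resolvers
; forget it within 30 seconds.
register.allstarlink.org.  30  IN  A  203.0.113.10
register.allstarlink.org.  30  IN  A  203.0.113.20
```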

Servers

All servers should have a local user in the format jdoe. Password authentication should not be used; SSH keys should be set up instead. Sudo should be used for admin access.

When adding a user, use "Firstname Lastname, CALLSIGN" as the name, and the cellphone as the phone number.

Don't want to put your cellphone in? You don't get to admin the box.
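On a Debian/Ubuntu host, the user policy above might look like the following; the name, callsign, phone number, and key file are all example values:

```shell
# Hypothetical admin-user setup (example identity; run as root).
adduser --disabled-password \
  --gecos "John Doe, N0CALL,,+1-813-555-0100" jdoe   # "Firstname Lastname, CALLSIGN" + cell
usermod -aG sudo jdoe                                # sudo for admin access

# Install their SSH public key.
install -d -m 700 -o jdoe -g jdoe /home/jdoe/.ssh
install -m 600 -o jdoe -g jdoe jdoe.pub /home/jdoe/.ssh/authorized_keys

# Keys only -- no password auth.
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl reload ssh
```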

Tampa Hypervisor

All users should have access to this box via SSH keys. Place your public key in your home directory, and make sure your user is in the "sudo" group to enable sudo.

This box is a DL360 with 12 cores and 128 GB of RAM. There are eight 146.8 GB 15k SAS disks in a RAID 50 config.

Its primary IP is 44.98.254.129/27, which is a logical interface "44net" on the 44net bridge.

enp3s0f0 is 208.38.136.11 and is physically the leftmost port on the box. Our upstream router has a static route pointing 44.98.254.128/27 at this interface.

The VMs are given addresses out of 44.98.254.128/27.

The ILO is at https://192.168.8.5/. This is behind the VPN on a separate network; you will need VPN credentials and a VPN config to reach it.

The ILO needs Java installed to get at the console, but it is licensed for full video access.

The ILO MAC is B4B5.2F5B.B90C

The ILO admin password is

<poem style="border: 2px solid #d6d2c5; background-color: #f9f4e6;padding: 1em; padding-top:0px; font-size:12px;"> -----BEGIN PGP MESSAGE----- hQEMA4nyEUPSzEPpAQgAuJvcngnoXbZq/audRVP09zPHKRZWoP6gZhrNpMVpu1fN x0vGPbhWw94iloMjAWONeyLkNzOg2KGaSc1I7GY7dKQ8+0Snqez/lb0PAp5P1P+y c2owsvqJ616G4mZTZ0ZAUXMfY4Rmoz0YDctJatbGxanT4jLpmQMua9jx6Ukjl+F+ BZHB+jZwzOeVx3lNOYuX7hrHu6NLjDkoFiBOHDQBY0d0CPamYhRm25d/eIcN2Zpf pqXauSZJlFoKBhYRxpw3KzMMJhwR2F4Mda58fwJeuhTVf2HxijvqEOhv4NTgsSNJ M7MLn13Yxyru8mEw+TM0JSw4Kd8/bSUE6E7iYYXl8NLAMgEhVuvJkiwUolYu+U3b qVGOJD3+oAgdOqa9rZzecLGwCRUErXEWa3/1EvgWh+g31HG49ombZpC4AhHpWqXq nPZ/7YJmg9SLmMB2FvGfsaxIXXj+5fSJHHCPfAXNwNb4GmsNIbPkqXjPBELVb3hd woJxU0tc57MoJK8322WcoE4uKsV2r4oxZEK5g8/L3otN6Oz4VMgkQQyFUIBnGp7o Mflja33bvBflp6rnDH+IlC7qdtOmNqMQ3bWuKUxIWr3P/+xEUBjSwIR8ohW7XaLV BNQ5qnRi7iTF0o/JFBy6d6g7oQ1ZbqlDOoYmtwnz9dEkhU8X =SIcB -----END PGP MESSAGE----- </poem>

Syslog and fail2ban

The VMs are set up for remote syslog to 44.98.254.129.
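On each VM, this is typically a one-line rsyslog forwarding rule; a sketch, assuming rsyslog and UDP syslog (a single @; @@ would be TCP):

```
# /etc/rsyslog.d/50-remote.conf -- forward all logs to the hypervisor
*.* @44.98.254.129:514
```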

Fail2ban is configured to nullroute IPs and email the admin contact after 2 bad logins. Because the ban is a nullroute, fail2ban can run on the hypervisor and still catch login attempts on the VMs. This means if you get locked out via fail2ban, you're locked out of all VMs.
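A policy like this could be sketched in jail.local; fail2ban ships a "route" ban action that installs an unreachable route rather than an iptables rule. The destemail value is a placeholder, not the real admin contact.

```
# /etc/fail2ban/jail.local -- hypothetical sketch of the nullroute-and-email policy
[DEFAULT]
banaction = route              # nullroute ("ip route add unreachable <ip>")
maxretry  = 2                  # ban after 2 bad logins
destemail = admin@example.org  # placeholder admin contact
action    = %(action_mw)s     # ban and mail the admin

[sshd]
enabled = true
```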

There is a firewall configured at /etc/network/firewall.sh. Note this protects both the hypervisor (INPUT) and the 44net subnet (FORWARD). By default everything is blocked on the external interface, and new services must be added explicitly to allow them through.
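A default-deny firewall.sh of the shape described might look like this; the interface names and allowed services are illustrative, not the real ruleset:

```shell
#!/bin/sh
# Hypothetical skeleton of /etc/network/firewall.sh (example rules only).
EXT=enp3s0f0    # external interface

iptables -P INPUT   DROP   # protect the hypervisor itself
iptables -P FORWARD DROP   # protect the 44net subnet behind it

# Allow return traffic for established connections.
iptables -A INPUT   -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# Each new service must be opened explicitly, e.g. SSH to the
# hypervisor and IAX2 to a VM on the 44net bridge:
iptables -A INPUT   -i "$EXT" -p tcp --dport 22   -j ACCEPT
iptables -A FORWARD -o 44net  -p udp --dport 4569 -j ACCEPT
```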


The Hard Disks are arranged as follows:

5 6 7 8
1 2 3 4

The disks are 146.8 GB 15k SAS drives, marked with HPDC firmware and manufactured by Seagate for HP.

VM management

Kimchi is used for management of the VMs and gives a nice, easy way to set up a server. It's basically a frontend for virsh.

The network config is based on https://jamielinux.com/docs/libvirt-networking-handbook/custom-routed-network.html.

To provision a server from a template:

  1. login to Kimchi
  2. Click on Virtualization and then Add
  3. Give the server a logical name and choose a template
  4. Click Create
  5. The server will be instantiated but not started; if you want to change anything, now's the time.
  6. You'll need to provision the network
  7. To assign a static IP we use DHCP, but with a static lease.
  8. Run "echo '52:54:00:73:XX:XX,44.98.254.x' >> /var/lib/dnsmasq/44net/hostsfile" from the CLI, substituting the MAC address of the server and the static IP it will be on.
  9. Run "service dnsmasq@44net reload" to reload the dnsmasq config.
  10. Double check everything and you can boot the server.
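Steps 7–9 above boil down to appending a MAC-to-IP pair to dnsmasq's hostsfile and reloading. For example (the MAC and IP are placeholders for the new VM's real values):

```shell
# Give the new VM a static DHCP lease, then reload dnsmasq.
echo '52:54:00:73:aa:bb,44.98.254.140' >> /var/lib/dnsmasq/44net/hostsfile
service dnsmasq@44net reload
```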

It will now boot and install Ubuntu, and should come up with a public IP in the 44net range.

You will want to ensure it's set to auto-start, which you'll need to do from the CLI using the virsh "autostart" command. This ensures the server will boot when the box restarts.

virsh # list
 Id    Name                           State
----------------------------------------------------
 1     ASL-Stats                      running
 7     TestVM                         running

virsh # dominfo TestVM
Id:             7
Name:           TestVM
UUID:           a4dbcba6-f493-4bdc-b5b0-40b714e9e9a4
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       130.6s
Max memory:     1048576 KiB
Used memory:    1048576 KiB
Persistent:     yes
Autostart:      disable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-a4dbcba6-f493-4bdc-b5b0-40b714e9e9a4 (enforcing)

virsh # autostart TestVM
Domain TestVM marked as autostarted

virsh # dominfo TestVM
Id:             7
Name:           TestVM
UUID:           a4dbcba6-f493-4bdc-b5b0-40b714e9e9a4
OS Type:        hvm
State:          running
CPU(s):         1
CPU time:       137.6s
Max memory:     1048576 KiB
Used memory:    1048576 KiB
Persistent:     yes
Autostart:      enable
Managed save:   no
Security model: apparmor
Security DOI:   0
Security label: libvirt-a4dbcba6-f493-4bdc-b5b0-40b714e9e9a4 (enforcing)