- Bootstrap / Installation / Deployment
- Testing / CLI Access
- Database
- uncloud clients access the database from a variety of outside hosts
- So the postgresql database needs to be remotely accessible
- Instead of exposing the tcp socket, we make postgresql bind to localhost via IPv6
- Then we remotely connect to the database server with ssh tunneling
- Configuring your database for SSH based remote access
- URLs
- uncloud Products
- Product features
- VPN
- How to add a new VPN Host
- Example of adding a VPN host at ungleich
- Example http commands / REST calls
- Creating a VPN pool
- Managing VPNNetworks
- Product and Product Children
- Identifiers
- Distributing/Dispatching/Orchestrating
- Milestones
- 1.1 (cleanup 1)
- 1.0 (initial release)
- Initial Generic product support
- Recurring product support
- Bill logic is still wrong
- Generating bill for admins/staff
- Bill fixes needed
Bootstrap / Installation / Deployment
Pre-requisites by operating system
General
To run uncloud you need:
- ldap development libraries
- libxml2-dev libxslt-dev
- gcc / libc headers: for compiling things
- python3-dev
- wireguard: wg (for checking keys)
Alpine
apk add openldap-dev postgresql-dev libxml2-dev libxslt-dev gcc python3-dev musl-dev wireguard-tools-wg
Debian/Devuan:
apt install postgresql-server-dev-all
Creating a virtual environment / installing python requirements
Virtual env
To separate uncloud requirements, you can use a python virtual env as follows:
python3 -m venv venv
. ./venv/bin/activate
Then install the requirements
pip install -r requirements.txt
Setting up the database
Install the database service
The database can run on the same host as uncloud, but it can also run on a different server. Consult the usual postgresql documentation for a secure configuration.
The database needs to be accessible from all worker nodes.
Alpine
apk add postgresql-server
rc-update add postgresql
rc-service postgresql start
Debian/Devuan:
apt install postgresql
Create the database
Due to the use of the JSONField, postgresql is required. To get started, create a database and have it owned by the user that runs uncloud (usually "uncloud"):
bridge:~# su - postgres
bridge:~$ psql
postgres=# create role uncloud login;
postgres=# create database uncloud owner uncloud;
Creating the schema
python manage.py migrate
Configuring remote access
- Get a letsencrypt certificate
- Expose SSL ports
- Create a user
certbot certonly --standalone \
-d <yourdbhostname> -m your@email.com \
--agree-tos --no-eff-email
- Configuring postgresql.conf:
listen_addresses = '*' # what IP address(es) to listen on;
ssl = on
ssl_cert_file = '/etc/postgresql/server.crt'
ssl_key_file = '/etc/postgresql/server.key'
- Cannot load directly due to permission error:
2020-12-26 13:01:55.235 CET [27805] FATAL: could not load server certificate file "/etc/letsencrypt/live/2a0a-e5c0-0013-0000-9f4b-e619-efe5-a4ac.has-a.name/fullchain.pem": Permission denied
- hook
bridge:/etc/letsencrypt/renewal-hooks/deploy# cat /etc/letsencrypt/renewal-hooks/deploy/postgresql
#!/bin/sh
umask 0177
export DOMAIN=2a0a-e5c0-0013-0000-9f4b-e619-efe5-a4ac.has-a.name
export DATA_DIR=/etc/postgresql
cp /etc/letsencrypt/live/$DOMAIN/fullchain.pem $DATA_DIR/server.crt
cp /etc/letsencrypt/live/$DOMAIN/privkey.pem $DATA_DIR/server.key
chown postgres:postgres $DATA_DIR/server.crt $DATA_DIR/server.key
- Allowing access with md5 password authentication over TLS (in pg_hba.conf):
hostssl all all ::/0 md5
postgres=# create role uncloud password '...';
CREATE ROLE
postgres=# alter role uncloud login ;
ALTER ROLE
Testing the connection:
psql 'postgresql://uncloud@2a0a-e5c0-0013-0000-9f4b-e619-efe5-a4ac.has-a.name/uncloud?sslmode=require'
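For reference, the Django side then only needs matching connection settings. A minimal sketch, assuming the standard postgresql backend and reusing the hostname from the example above; names and credentials are illustrative and must be adapted to your deployment:
#+BEGIN_SRC python
# Sketch: database section of uncloud's settings for the TLS-enabled server above.
# Hostname, user and password are illustrative; 'sslmode' matches the hostssl entry.
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql',
        'NAME': 'uncloud',
        'USER': 'uncloud',
        'PASSWORD': '...',
        'HOST': '2a0a-e5c0-0013-0000-9f4b-e619-efe5-a4ac.has-a.name',
        'PORT': '5432',
        'OPTIONS': {'sslmode': 'require'},
    }
}
#+END_SRC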
** Bootstrap
- Login via a user so that the user object gets created
- Run the following (replace nicocustomer with the username)
#+BEGIN_SRC sh
python manage.py bootstrap-user --username nicocustomer
#+END_SRC
Initialise the database
While it is not strictly required to add default values to the database, doing so can significantly reduce the time needed to get started with uncloud.
To add the default database values run:
# Add local objects
python manage.py db-add-defaults
# Import VAT rates
python manage.py import-vat-rates
Worker nodes
Nodes that realise services (VMHosts, VPNHosts, etc.) need to be accessible from the main node and also need access to the database.
Workers should usually have an "uncloud" user account, although strictly speaking any username can be used.
WireGuardVPN Server
- Allow write access to /etc/wireguard for uncloud user
- Allow sudo access to "ip" and "wg"
chown uncloud /etc/wireguard/
[14:30] vpn-2a0ae5c1200:/etc/sudoers.d# cat uncloud
app ALL=(ALL) NOPASSWD:/sbin/ip
app ALL=(ALL) NOPASSWD:/usr/bin/wg
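The sudo rules above exist so that an unprivileged worker process can manipulate wireguard. The helpers below are a hypothetical illustration of that idea, not uncloud's actual code; the binary paths follow the sudoers entries above:
#+BEGIN_SRC python
# Hypothetical helpers showing why passwordless sudo for "ip" and "wg" is needed.
# uncloud's real invocation may differ.
import subprocess

def wg_show(interface="wg0"):
    # inspect the wireguard interface as the unprivileged uncloud user
    result = subprocess.run(["sudo", "/usr/bin/wg", "show", interface],
                            capture_output=True, text=True, check=True)
    return result.stdout

def add_route(network, interface="wg0"):
    # route a client network towards the wireguard interface
    subprocess.run(["sudo", "/sbin/ip", "-6", "route", "add", network, "dev", interface],
                   check=True)
#+END_SRC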
Typical source code based deployment
- Deploy using bin/deploy.sh on a remote server
- The remote server should have:
  - postgresql running, accessible via TLS from outside
  - rabbitmq configured [in progress]
Testing / CLI Access
Access via the commandline (CLI) can be done using curl or httpie. In our examples we will use httpie.
Check out the API
http localhost:8000/api/
Authenticate via ldap user in password store
http --auth nicocustomer:$(pass ldap/nicocustomer) localhost:8000/api/
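The same call can also be made from Python, for instance in tests. A small sketch assuming the requests library; the password is read from pass only to mirror the httpie example:
#+BEGIN_SRC python
# Sketch: the httpie call above, done with the requests library.
import subprocess
import requests

password = subprocess.run(["pass", "ldap/nicocustomer"],
                          capture_output=True, text=True, check=True).stdout.strip()
response = requests.get("http://localhost:8000/api/", auth=("nicocustomer", password))
print(response.status_code, response.json())
#+END_SRC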
Database
uncloud clients access the database from a variety of outside hosts
So the postgresql database needs to be remotely accessible
Instead of exposing the tcp socket, we make postgresql bind to localhost via IPv6
::1, port 5432
Then we remotely connect to the database server with ssh tunneling
ssh -L5432:localhost:5432 uncloud-database-host
Configuring your database for SSH based remote access
host all all ::1/128 trust
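With the tunnel in place, clients simply connect to their local port 5432. A minimal sketch using psycopg2 (already required by Django's postgresql backend):
#+BEGIN_SRC python
# Sketch: connect through the "ssh -L5432:localhost:5432 uncloud-database-host" tunnel.
import psycopg2

conn = psycopg2.connect(host="localhost", port=5432, dbname="uncloud", user="uncloud")
cur = conn.cursor()
cur.execute("SELECT version()")
print(cur.fetchone()[0])
conn.close()
#+END_SRC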
URLs
- api/ - the rest API
uncloud Products
Product features
- Dependencies on other products
- Minimum parameters (min cpu, min ram, etc).
- Can also realise the dcl vm
- dualstack vm = VM + IPv4 + SSD
- Needs a non-misleading name for the "bare VM"
- Should support network boot (?)
VPN
How to add a new VPN Host
Install wireguard to the host
Install uncloud to the host
Add `python manage.py vpn --hostname fqdn-of-this-host` to the crontab
Use the CLI to configure one or more VPN Networks for this host
Example of adding a VPN host at ungleich
Create a new dual stack alpine VM
Add it to DNS as vpn-XXX.ungleich.ch
Route a /40 network to its IPv6 address
Install wireguard on it
TODO [C] Enable wireguard on boot
TODO [C] Create a new VPNPool on uncloud with
- the network address (selecting from our existing pool)
- the network size (/…)
- the vpn host that provides the network (selecting the created VM)
- the wireguard private key of the vpn host (using wg genkey)
http command
http -a nicoschottelius:$(pass ungleich.ch/nico.schottelius@ungleich.ch) \
    http://localhost:8000/admin/vpnpool/ \
    network=2a0a:e5c1:200:: network_size=40 subnetwork_size=48 \
    vpn_hostname=vpn-2a0ae5c1200.ungleich.ch wireguard_private_key=…
Example http commands / REST calls
creating a new vpn pool
http -a nicoschottelius:$(pass ungleich.ch/nico.schottelius@ungleich.ch) http://localhost:8000/admin/vpnpool/ network_size=40 subnetwork_size=48 network=2a0a:e5c1:200:: vpn_hostname=vpn-2a0ae5c1200.ungleich.ch wireguard_private_key=$(wg genkey)
Creating a new vpn network
Creating a VPN pool
http -a uncloudadmin:$(pass uncloudadmin) https://localhost:8000/v1/admin/vpnpool/ \
network=2a0a:e5c1:200:: network_size=40 subnetwork_size=48 \
vpn_hostname=vpn-2a0ae5c1200.ungleich.ch wireguard_private_key=$(wg genkey)
This will create the VPNPool 2a0a:e5c1:200::/40 from which /48 networks will be used for clients.
VPNPools can only be managed by staff.
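To illustrate what such a pool holds, the /40 from the example can be split into /48 client networks with Python's ipaddress module:
#+BEGIN_SRC python
# Illustration: the 2a0a:e5c1:200::/40 pool provides 256 /48 client networks.
import ipaddress

pool = ipaddress.ip_network("2a0a:e5c1:200::/40")
subnets = pool.subnets(new_prefix=48)

print(next(subnets))        # 2a0a:e5c1:200::/48 -> first client network
print(2 ** (48 - 40))       # 256 subnetworks available in total
#+END_SRC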
Managing VPNNetworks
To request a network as a client, use the following call:
http -a user:$(pass user) https://localhost:8000/v1/net/vpn/ \
network_size=48 \
wireguard_public_key=$(wg genkey | tee privatekey | wg pubkey)
VPNNetworks can be managed by all authenticated users.
* Developer Handbook
The following sections describe decisions / architecture of
uncloud. These chapters are intended to be read by developers.
** This Documentation
This documentation is written in org-mode. To compile it to
html/pdf, just open emacs and press *C-c C-e l p*.
** Models
*** Bill
Bills summarise usage in a specific timeframe. Bills usually
span one month.
*** BillRecord
Bill records are used to model the usage of one order during the
timeframe.
*** Order
Orders register the intent of a user to buy something. They might
refer to a product. (???)
Orders register the one-time price and the recurring price. These
fields should be treated as immutable. If they need to be modified,
a new order that replaces the current order should be created.
**** Replacing orders
If an order is updated, a new order is created and points to the
old order. The old order stops one second before the new order
starts.
Whether an order has been replaced can be seen from its replaced_by count:
#+BEGIN_SRC python
>>> Order.objects.get(id=1).replaced_by.count()
1
#+END_SRC
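As a sketch of this replacement flow (field names such as replaces, starting_date and ending_date are illustrative assumptions, not necessarily the real uncloud model fields):
#+BEGIN_SRC python
# Sketch of the "replace instead of modify" rule described above.
# Field names are assumptions; prices stay immutable per order.
import datetime
from uncloud_pay.models import Order   # module path assumed

def replace_order(old_order, **new_prices):
    now = datetime.datetime.now(datetime.timezone.utc)
    new_order = Order.objects.create(
        owner=old_order.owner,
        replaces=old_order,        # visible later via old_order.replaced_by
        starting_date=now,
        **new_prices,
    )
    # the old order stops one second before the new order starts
    old_order.ending_date = now - datetime.timedelta(seconds=1)
    old_order.save()
    return new_order
#+END_SRC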
Product and Product Children
- A product describes something a user can buy
- A product inherits from the uncloud_pay.models.Product model to get basic attributes
Identifiers
Problem description
Identifiers can be integers, strings or other objects. They should be unique.
Approach 1: integers
Integers are somewhat easy to remember, but they grow predictably, which makes identifiers guessable and might invite guessing attacks (obviously proper permissions should prevent this).
Approach 2: random uuids
UUIDs are 128-bit integers. Python supports uuid.uuid4() for random UUIDs.
Approach 3: IPv6 addresses
uncloud heavily depends on IPv6 in the first place. uncloud could use a /48 to identify all objects. Objects that have IPv6 addresses of their own don't need to draw from the system /48.
Possible Subnetworks
Assuming uncloud uses a /48 to represent all resources.
| Network | Name | Description |
|---|---|---|
| 2001:db8::/48 | uncloud network | All identifiers drawn from here |
| 2001:db8:1::/64 | VM network | Every VM has an IPv6 address in this network |
| 2001:db8:2::/64 | Bill network | Every bill has an IPv6 address |
| 2001:db8:3::/64 | Order network | Every order has an IPv6 address |
| 2001:db8:5::/64 | Product network | Every product (?) has an IPv6 address |
| 2001:db8:4::/64 | Disk network | Every disk is identified |
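With such a scheme, an object's integer primary key could simply be added to the /64 of its type. A small illustration using the documentation prefix from the table (not implemented; see the decision below):
#+BEGIN_SRC python
# Illustration: map an integer primary key into the per-type /64 from the table above.
import ipaddress

BILL_NETWORK = ipaddress.ip_network("2001:db8:2::/64")

def bill_address(bill_id):
    return BILL_NETWORK.network_address + bill_id

print(bill_address(1))   # 2001:db8:2::1
#+END_SRC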
Tests
[15:47:37] black3.place6:~# rbd create -s 10G ssd/2a0a:e5c0:1::8
Decision
We use integers because they are easy.
Distributing/Dispatching/Orchestrating
Variant 1: using cdist
- The uncloud server can git commit things
- The uncloud server loads cdist and configures the server
Advantages
- Fully integrated into the normal flow
Disadvantages
- The web frontend has access to more data than it needs
- On compromise of the machine, more data leaks
- The usual cdist delay applies
Variant 2: via celery
- The uncloud server dispatches via celery
- Every decentral node also runs celery/connects to the broker
Broker summary:
- If celery is only used locally: redis is a good broker
- If remote: rabbitmq is probably the better choice
redis
- simpler
rabbitmq
- more versatile
- made for remote connections
- quorum queues would be nice, but it is not clear if they are supported
- Cannot be installed on Alpine Linux at the moment
Advantages
- Very python / django integrated
- Rather instant
Disadvantages
- Every decentral node needs to have the uncloud code available
- Decentral nodes might need to access the database
  - Tasks can probably be written to work without that (i.e. only strings/bytes)
log/tests
(venv) [19:54] vpn-2a0ae5c1200:~/uncloud$ celery -A uncloud -b redis://bridge.place7.ungleich.ch worker -n worker1@%h --logfile ~/celery.log -Q vpn-2a0ae5c1200.ungleich.ch
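A minimal sketch of what variant 2 looks like in code: a task is sent to the queue named after the VPN host, and the worker started above consumes it. The broker URL matches the log line; the task itself and the arguments are illustrative:
#+BEGIN_SRC python
# Sketch of variant 2: dispatch work to a per-host celery queue.
from celery import Celery

app = Celery("uncloud", broker="redis://bridge.place7.ungleich.ch")

@app.task
def configure_vpn_network(network, wireguard_public_key):
    # runs on the VPN host; receives only strings, so no database access is needed
    ...

# on the uncloud server:
configure_vpn_network.apply_async(
    args=["2a0a:e5c1:200:1::/48", "<client public key>"],
    queue="vpn-2a0ae5c1200.ungleich.ch",
)
#+END_SRC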
Variant 3: dedicated cdist instance via message broker
- A separate VM/machine
- Has Checkout of ~/.cdist
- Has cdist checkout
- Tiny API for management
- Not directly web accessible
- "cdist" queue
uncloud Milestones
1.1 (cleanup 1)
TODO [C] Unify ValidationError, FieldError - define proper Exception
- What do we use for model errors
TODO [C] Cleanup the results handling in celery
- Remove the results broker?
- Setup app to ignore results?
- Actually use results?
1.0 (initial release)
TODO [C] Initial Generic product support
- Product
TODO [C] Recurring product support
CLOSED: [2020-09-11 Fri 23:19]
Assumption:
- recurring periods are 30 days
- User commits to 10 CHF for 30 days
- Wants to downgrade after 15 days to a 5 CHF product
Expected result:
- order 1: 10 CHF until +30 days
- order 2: 5 CHF starting 30 days + 1s
- Sum of the two orders is 15 CHF
Question: when is the VM shut down?
- instantly
- at the end of the cycle
Best solution:
- the user can choose between a … b at any time
- You cannot cancel the duration
- You can upgrade and with that cancel the duration
- The idea of a duration is that you commit to it
- If you want to commit for less (a daily basis for instance) you have higher per-period prices
- User has a VM with 2 cores / 2 GB RAM
- User modifies it to 1 core / 3 GB RAM
- We treat it as a down/upgrade independent of the modifications
- committed for 30 days
- upgrade after 1 day
- so the first order will be charged at 1/30th
- User commits to 10 CHF for 30 days
- Wants to upgrade after 15 days to a 20 CHF product
Order 1: 1 VM with 2 Core / 2 GB / 10 SSD = 10 CHF
- 30 day period, stopped after 15 days, so quantity is 0.5 = 5 CHF
Order 2: 1 VM with 2 Core / 6 GB / 10 SSD = 20 CHF
- starts after 15 days
- VM is upgraded instantly
Expected result:
- order 1: 10 CHF until +15 days = 0.5 units = 5 CHF
- order 2: 20 CHF starting 15 days + 1s … +30 days after the 15 days -> 45 days = 1 unit = 20 CHF
- Total on bill: 25 CHF
- User commits to 10 CHF for 30 days
- Wants to upgrade after 15 days to a 20 CHF product
Expected result:
- order 1: 10 CHF until +30 days = 1 unit = 10 CHF
- order 2: 20 CHF starting 15 days + 1s = 1 unit = 20 CHF
- Total on bill: 30 CHF
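The pro-rated scenario above (5 + 20 = 25 CHF) boils down to charging each order for the fraction of its 30-day period it was actually active. A small worked sketch of that arithmetic:
#+BEGIN_SRC python
# Worked example: 30-day recurring period, upgrade after 15 days (pro-rated variant).
RECURRING_PERIOD_DAYS = 30

def charge(recurring_price, active_days):
    quantity = active_days / RECURRING_PERIOD_DAYS   # fraction of the period used
    return recurring_price * quantity

order_1 = charge(10, 15)   # stopped after 15 days  -> 5.0 CHF
order_2 = charge(20, 30)   # runs its full period   -> 20.0 CHF
print(order_1 + order_2)   # 25.0 CHF on the bill
#+END_SRC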
- Should the new order modify the old order on save()?
CLOSED: [2020-09-09 Wed 01:00]
- 2020 used instead of 2019
- Was due to existing test data …
DONE Bill logic is still wrong
CLOSED: [2020-11-05 Thu 18:58]
- Bill starting_date is the date of the first order
- However first encountered order does not have to be the earliest in the bill!
- Bills should not have a duration
- Bills should only have a (unique) issue date
- We charge based on bill_records
  - Last time charged: the issue date of the bill OR the earliest date after that
- Every bill generation checks all (relevant) orders
  - add a flag "not_for_billing" or "closed"
  - query on that flag
  - verify it every time
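A sketch of how the proposed flag could be used during bill generation (the closed field and the query are the proposal above, not existing code):
#+BEGIN_SRC python
# Sketch of the proposed flag-based selection: every bill generation run re-checks
# all orders that are not yet closed for billing. Field names are assumptions.
from uncloud_pay.models import Order   # module path assumed

def orders_to_bill(owner):
    return Order.objects.filter(owner=owner, closed=False)
#+END_SRC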
TODO Generating bill for admins/staff
Bill fixes needed
TODO Double bill in bill id
TODO Name the currency
TODO Maybe remove the chromium pdf rendering artefacts
- date on the top
- title on the top
- filename bottom left
- page number could even stay