* Refactoring

* Fix issue that causes a new image store to be created at every start of ucloud-api.
* VM Migration API call now takes hostname instead of host key.
* StorageHandler classes are introduced. They transparently handle importing an image, making a VM out of an image, resizing a VM image, deleting a VM image, etc.
* Loggers added to __init__.py of every ucloud component's subpackage.
* Non-Trivial Timeout Events are no longer logged.
* Fix issue that prevents removal of stopped VMs (i.e. VMs that were successfully migrated).
* Improved unit handling: e.g. MB, Mb, mB and mb are all treated as megabytes (see the sketch below).
* VM migration is now possible on IPv6 host.
* The destination VM (the receiving side of a VM migration) now correctly expects incoming data on a free ephemeral port.
* Tracebacks are no longer printed to the screen; instead they go to the log file.
* All sanity checks are put into a single file. These checks are run by ucloud.py before running any ucloud component.
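
For illustration, the case-insensitive unit handling could look roughly like this (a minimal sketch; `parse_size` and its unit table are hypothetical names, not the actual ucloud code):

```python
import re

# Hypothetical unit table: spellings are case-insensitive, so MB, Mb, mB
# and mb all resolve to megabytes.
MULTIPLIERS = {'b': 1, 'kb': 1024, 'mb': 1024 ** 2, 'gb': 1024 ** 3, 'tb': 1024 ** 4}

def parse_size(value):
    """Parse strings like '10MB', '10Mb' or '10mb' into a number of bytes."""
    match = re.fullmatch(r'\s*(\d+)\s*([A-Za-z]+)\s*', value)
    if match is None:
        raise ValueError('cannot parse size: {!r}'.format(value))
    number, unit = match.groups()
    if unit.lower() not in MULTIPLIERS:
        raise ValueError('unknown unit: {!r}'.format(unit))
    return int(number) * MULTIPLIERS[unit.lower()]

assert parse_size('10MB') == parse_size('10mb') == 10 * 1024 ** 2
```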
ahmadbilalkhalid 2019-11-25 11:52:36 +05:00
commit cc0ca68498
26 changed files with 1101 additions and 294 deletions


@@ -0,0 +1,44 @@
graph LR
    style ucloud fill:#FFD2FC
    style cron fill:#FFF696
    style infrastructure fill:#BDF0FF

    subgraph ucloud[ucloud]
        ucloud-cli[CLI]-->ucloud-api[API]
        ucloud-api-->ucloud-scheduler[Scheduler]
        ucloud-api-->ucloud-imagescanner[Image Scanner]
        ucloud-api-->ucloud-host[Host]
        ucloud-scheduler-->ucloud-host
        ucloud-host-->need-networking{VM needs Networking}
        need-networking-->|Yes| networking-scripts
        need-networking-->|No| VM[Virtual Machine]
        need-networking-->|SLAAC?| radvd
        networking-scripts-->VM
        networking-scripts--Create Network Devices-->networking-scripts

        subgraph cron[Cron Jobs]
            ucloud-imagescanner
            ucloud-filescanner[File Scanner]
            ucloud-filescanner--Track User files-->ucloud-filescanner
        end

        subgraph infrastructure[Infrastructure]
            radvd
            etcd
            networking-scripts[Networking Scripts]
            ucloud-imagescanner-->image-store
            image-store{Image Store}
            image-store-->|CEPH| ceph
            image-store-->|FILE| file-system
            ceph[CEPH]
            file-system[File System]
        end

        subgraph virtual-machine[Virtual Machine]
            VM
            VM-->ucloud-init
        end

        subgraph metadata-group[Metadata Server]
            metadata-->ucloud-init
            ucloud-init<-->metadata
        end
    end

File diff suppressed because one or more lines are too long (added image, 37 KiB).


@@ -1,7 +1,7 @@
.. ucloud documentation master file, created by
sphinx-quickstart on Mon Nov 11 19:08:16 2019.
You can adapt this file completely to your liking, but it should at least
contain the root `toctree` directive.
Welcome to ucloud's documentation!
==================================
@@ -15,7 +15,9 @@ Welcome to ucloud's documentation!
usage/usage-for-admins
usage/usage-for-users
usage/how-to-create-an-os-image-for-ucloud
theory/summary
misc/todo
troubleshooting/installation-troubleshooting
Indices and tables
==================


@@ -135,7 +135,7 @@ You just need to update **AUTH_SEED** in the below code to match your auth's seed
ETCD_URL=localhost
WITHOUT_CEPH=True
STORAGE_BACKEND=filesystem
BASE_DIR=/var/www
IMAGE_DIR=/var/image
@@ -195,3 +195,35 @@ profile e.g *~/.profile*
alias uotp='cd /root/uotp/ && pipenv run python app.py'
and run :code:`source ~/.profile`
Arch
-----
.. code-block:: sh

    # Update/Upgrade
    pacman -Syuu
    pacman -S python3 qemu chrony python-pip
    pip3 install pipenv

    cat > /etc/chrony.conf << EOF
    server 0.arch.pool.ntp.org
    server 1.arch.pool.ntp.org
    server 2.arch.pool.ntp.org
    EOF

    systemctl start chronyd
    systemctl enable chronyd

    # Create non-root user and allow it sudo access
    # without password
    useradd -m ucloud
    echo "ucloud ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers

    sudo -H -u ucloud bash -c 'cd /home/ucloud && git clone https://aur.archlinux.org/yay.git && cd yay && makepkg -si'
    sudo -H -u ucloud bash -c 'yay -S etcd'

    systemctl start etcd
    systemctl enable etcd


@@ -1,6 +1,18 @@
TODO
====
* **Check Authentication:** Nico reported that some endpoints
  (e.g. ListUserVM) work even without providing a token.
* Put overrides for **IMAGE_BASE**, **VM_BASE** in **ImageStorageHandler**.
* Put "Always use only one StorageHandler"
* Create Network Manager
* It would handle tasks like bringing an interface up/down.
* Create VXLANs, Bridges, TAPs.
* Remove them when they are no longer used.
* Check for :code:`etcd3.exceptions.ConnectionFailedError` when calling etcd operations to
  avoid crashing the whole application (see the sketch below).
* Throw KeyError instead of returning None when a key is not found in etcd.
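
As a rough illustration of the last two items, an etcd read could be guarded like this (a minimal sketch; the helper name and the decision to log and return are assumptions, not the actual ucloud code):

.. code-block:: python

    import logging

    import etcd3

    log = logging.getLogger(__name__)

    def guarded_get(client, key):
        """Read a key without crashing when etcd is unreachable."""
        try:
            value, _metadata = client.get(key)
        except etcd3.exceptions.ConnectionFailedError:
            # Log and back off instead of letting the whole application crash.
            log.exception('etcd is unreachable, skipping this operation')
            return None
        if value is None:
            # Second item above: raise KeyError instead of returning None.
            raise KeyError(key)
        return value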


@@ -0,0 +1,98 @@
Summary
=======
.. image:: /images/ucloud.svg
.. code-block::

                                 <cli>
                                   |
                                   |
                                   |
    +-------------------------<api>
    |                          |
    |          |```````````````|```````````````|
    |          |               |               |
    |   <file_scanner>    <scheduler>   <image_scanner>
    |
    |
    +-------------------------<host>
                                   |
                                   |
                                   |
                            Virtual Machine------<init>------<metadata>
**ucloud-cli** interacts with **ucloud-api** to do the following operations:
- Create/Delete/Start/Stop/Migrate/Probe (Status of) Virtual Machines
- Create/Delete Networks
- Add/Get/Delete SSH Keys
- Create OS Image out of a file (tracked by file_scanner)
- List User's files/networks/vms
- Add Host
ucloud can currently store OS images on
* File System
* `CEPH <https://ceph.io/>`_
**ucloud-api** in turn creates appropriate requests, which are picked up
by the suitable ucloud components. For example, if a user uses ucloud-cli
to create a VM, **ucloud-api** creates a **ScheduleVMRequest** containing
things like a pointer to the VM's entry, which holds the specs and networking
configuration of the VM.
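
A rough sketch of what creating such a request could look like (the etcd key layout and field names here are illustrative assumptions, not the actual ucloud schema):

.. code-block:: python

    import json
    import uuid

    import etcd3

    client = etcd3.client(host='localhost')  # ETCD_URL from ucloud.conf

    # Hypothetical key prefixes and fields, for illustration only.
    vm_key = '/v1/vm/{}'.format(uuid.uuid4().hex)
    request = {
        'type': 'ScheduleVMRequest',
        'vm_entry': vm_key,  # pointer to the VM entry holding specs/networking
    }
    client.put('/v1/request/{}'.format(uuid.uuid4().hex), json.dumps(request))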
**ucloud-scheduler** accepts requests for VM scheduling and
migration. It finds, among the available hosts, one on which
the incoming VM can run and schedules the VM on that host.
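
A minimal sketch of that host-selection step (the data structures and the first-fit policy are assumptions for illustration; the real scheduler works on etcd-backed entries):

.. code-block:: python

    def find_host(hosts, vm_specs):
        """Return the first host with enough spare CPU and RAM for the VM."""
        for host in hosts:
            if (host['free_cpu'] >= vm_specs['cpu']
                    and host['free_ram_mb'] >= vm_specs['ram_mb']):
                return host
        return None  # no suitable host: the request stays pending

    hosts = [
        {'name': 'host1', 'free_cpu': 2, 'free_ram_mb': 2048},
        {'name': 'host2', 'free_cpu': 8, 'free_ram_mb': 16384},
    ]
    print(find_host(hosts, {'cpu': 4, 'ram_mb': 4096}))  # -> host2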
**ucloud-host** runs on host servers, i.e. the servers that
actually run virtual machines, and accepts the requests
intended for them. It creates/deletes/starts/stops/migrates
virtual machines. It also arranges the network resources needed for the
incoming VM.
**ucloud-filescanner** keeps track of users' files, which are needed
later for creating OS images.
**ucloud-imagescanner** converts image files from qcow2 format to raw
format, which is then imported into the image store (a conversion sketch
follows the list).

* In case of **File System**, the converted image is copied to
  :file:`/var/image/` or the path referred to by the :envvar:`IMAGE_PATH` environment variable
  in :file:`/etc/ucloud/ucloud.conf`.
* In case of **CEPH**, the converted image is imported into a
  specific pool of the CEPH block storage (which pool depends on the
  image store to which the image belongs).
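
The conversion step itself can be sketched as a call to :code:`qemu-img` (the paths are just examples; error handling in the real image scanner may differ):

.. code-block:: python

    import subprocess

    def qcow2_to_raw(src, dest):
        """Convert a qcow2 image to a raw image using qemu-img."""
        subprocess.run(
            ['qemu-img', 'convert', '-f', 'qcow2', '-O', 'raw', src, dest],
            check=True,
        )

    qcow2_to_raw('/var/www/alpine.qcow2', '/var/image/alpine.raw')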
**ucloud-metadata** provides the metadata that is used to contextualize
VMs. When a VM is created, it is just a clone (duplicate) of the OS
image from which it was created. So, to differentiate between my
VM and your VM, the VM needs to be contextualized. This works
as follows:
.. note::

    Actually, ucloud-init makes the GET request. You can also try it
    yourself using curl, but normally ucloud-init does it for you.
* The VM makes a GET request to http://metadata, which resolves to the actual
  address of the metadata server. The metadata server looks at the IPv6
  address of the requester and extracts the MAC address from it, which is possible
  because the IPv6 address is an
  `IPv6 EUI-64 <https://community.cisco.com/t5/networking-documents/understanding-ipv6-eui-64-bit-address/ta-p/3116953>`_ address
  (see the sketch below).
  Metadata uses this MAC address to find the VM to which it belongs,
  its owner, ssh keys and much more. Then, metadata returns these
  details to the calling VM in JSON format. These details are
  then used by **ucloud-init**, which is explained next.
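
For illustration, recovering the MAC address from an EUI-64 based IPv6 address can be sketched like this (a standalone example, not the actual ucloud-metadata code):

.. code-block:: python

    import ipaddress

    def mac_from_eui64(ipv6_str):
        """Recover the MAC address from an EUI-64 based IPv6 address."""
        addr = ipaddress.IPv6Address(ipv6_str)
        iid = addr.packed[8:]             # low 64 bits: the interface identifier
        if iid[3:5] != b'\xff\xfe':
            raise ValueError('interface identifier is not EUI-64 based')
        first = iid[0] ^ 0x02             # flip the universal/local bit back
        mac = bytes([first]) + iid[1:3] + iid[5:]
        return ':'.join('{:02x}'.format(b) for b in mac)

    # Example: fe80::0a00:27ff:fe4e:66a1 -> 08:00:27:4e:66:a1
    print(mac_from_eui64('fe80::0a00:27ff:fe4e:66a1'))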
**ucloud-init** gets the metadata from **ucloud-metadata** to contextualize
the VM. Specifically, it gets the owner's ssh keys (or any other keys the
owner of the VM added to the authorized keys for this VM) and puts them into the
authorized keys of the ssh server installed on the VM, so that the owner can access
the VM using ssh. It also installs software that is needed for the correct
behavior of the VM, e.g. rdnssd (needed for `SLAAC <https://en.wikipedia.org/wiki/IPv6#Stateless_address_autoconfiguration_(SLAAC)>`_).
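
A minimal sketch of that flow (the metadata URL comes from the text above, while the JSON field name and the target path are assumptions for illustration):

.. code-block:: python

    import json
    import urllib.request
    from pathlib import Path

    # Fetch the metadata for this VM; the server identifies the VM by the
    # source address of the request, as described above.
    with urllib.request.urlopen('http://metadata') as response:
        metadata = json.loads(response.read().decode())

    # Append the owner's keys to the ssh server's authorized keys.
    authorized_keys = Path('/root/.ssh/authorized_keys')
    authorized_keys.parent.mkdir(mode=0o700, parents=True, exist_ok=True)
    with authorized_keys.open('a') as f:
        for key in metadata.get('ssh-keys', []):  # field name is an assumption
            f.write(key + '\n')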


@@ -0,0 +1,24 @@
Installation Troubleshooting
============================
etcd doesn't start
------------------
.. code-block:: sh

    [root@archlinux ~]# systemctl start etcd
    Job for etcd.service failed because the control process exited with error code.
    See "systemctl status etcd.service" and "journalctl -xe" for details
possible solution
~~~~~~~~~~~~~~~~~
Check the output of :code:`cat /etc/hosts`. If it contains the following

.. code-block:: sh

    127.0.0.1 localhost.localdomain localhost
    ::1 localhost localhost.localdomain

then unfortunately we can't help you. But if it doesn't contain the
above, you can add those lines to :file:`/etc/hosts` to fix the issue.