Master Installation Guide

Before installation, please read the requirements carefully.

 

As an example, please check our video guide on how to deploy KuberDock without CEPH. (Note: the video is still under development.)

 

The following points should be kept in mind while deploying the master:

 

For reasons of scalability and to simplify backup and migration, it is strongly recommended to deploy the master on a virtual machine rather than on a hardware server.

 

SELinux is expected to be enabled on the master server.

 

KuberDock implements all the necessary protection of the master to ensure its network security. Any additional firewall setup may cause a security conflict.

 

The /16 service subnet can either be specified directly or auto-detected during deployment.

 

Use the --pod-ip-network option of the deploy.sh script to define the subnet directly (e.g. --pod-ip-network 10.1.0.0/16). If the specified subnet is not free, KuberDock networking may be disrupted.

 

If the option is omitted, the required free IP address range will be detected automatically within the block from 10.0.0.0 to 10.253.255.255. If no free subnet is found in that block, the deployment will fail with an error.
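
For illustration, a deploy command that pins the service subnet explicitly might look like this (10.1.0.0/16 is only an example; use a range that is actually free in your network):

bash ./deploy.sh --pod-ip-network 10.1.0.0/16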

 

Log in to the master server console as root and perform the following steps:

 

1. Download the installation script from the repository by running the command:

 

Note that when you download and install the file, you automatically agree to the terms and conditions of the KuberDock license agreement.

 

wget http://repo.cloudlinux.com/kuberdock/deploy.sh

 

Note. To install wget run:

 

yum install wget

 

2. Start the installation script by running the following command; the available deployment options are described below:

 

bash ./deploy.sh

 

Options available for the deployment script:

 

Deploy with fixed IP pools

Deploy with CEPH support

Deploy with ZFS as a local storage backend

 

Deploy with fixed IP pools

 

KuberDock supports fixed IP pools starting from version 1.3.0. To deploy a KuberDock cluster with fixed IP pools, use the --fixed-ip-pools option.

 

Example:

 

bash ./deploy.sh --fixed-ip-pools

 

After installation with this option, it will be possible to assign IPs to each node. Read more about adding IPs for nodes in the Managing IP pool section.

 

Deploy with CEPH support

 

To install KuberDock with CEPH support, all CEPH parameters (the ceph.conf file, the CEPH user, and the user keyring) must be specified. The CEPH user must have at least the following capabilities:

 

 caps mon = "allow r"

 caps osd = "allow rwx pool=<your-KD-pool-name-here>"

 

If the provided CEPH user has no capability to create a CEPH pool, the pool must be created by an administrator before adding any new nodes to the KuberDock cluster. Follow the official CEPH instructions to create a pool: http://docs.ceph.com/docs/hammer/rados/operations/pools.
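
As an illustration only, a pool and a user with the capabilities listed above could be prepared on the CEPH admin host roughly as follows (the pool name kuberdock_pool, the user name kduser and the placement group count 128 are placeholder values, not requirements of KuberDock):

ceph osd pool create kuberdock_pool 128

ceph auth get-or-create client.kduser mon 'allow r' osd 'allow rwx pool=kuberdock_pool' -o ceph.client.kduser.keyring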

 

To install a cluster with CEPH support, the following deploy.sh options must be defined:

 

--ceph-user <username>

 

where username is the name of a user in the CEPH cluster (in CEPH it is `client.<username>`).

 

--ceph-config /path/to/ceph.conf

 

defines the path to the ceph.conf file. It must be copied to the KuberDock master server from the CEPH admin host. For example, if you copy the file to the /etc/ceph folder, the option will be --ceph-config /etc/ceph/ceph.conf.

 

--ceph-user-keyring /path/to/ceph.client.username.keyring

 

defines the path to the keyring file with credentials for the CEPH user. It must be copied from the CEPH admin host to the KuberDock master server. If you cannot find this file, you can export it by running the following command on the CEPH admin host:

 

ceph auth export client.username > ceph.client.username.keyring

 

where username must be the same as the one defined by the --ceph-user option.

 

Finally, an example of running the KuberDock deploy script with CEPH support looks as follows:

 

bash ./deploy.sh --ceph-user your_ceph_user --ceph-config /path/to/ceph.conf --ceph-user-keyring /path/to/client.your_ceph_user.keyring

 

If you install the cluster with the default namespace for persistent drives, the pool will be named after the IPv4 address of the KuberDock master server, for example 123.123.123.1. Alternatively, you can specify the namespace using the option:

 

--pd-namespace <your-KD-pool-name-here>

 

This parameter allows you to explicitly define the CEPH pool name.

 

Example:

 

bash ./deploy.sh --ceph-user my_ceph_user --ceph-config /path/to/ceph.conf --ceph-user-keyring /path/to/client.my_ceph_user.keyring --pd-namespace my_namespace_name

 

Deploy with ZFS as a local storage backend

 

Please read the requirements before you start deploying KuberDock with ZFS.

 

KuberDock adjusts several ZFS-related settings to optimize overall Input/Output performance (they can be verified after deployment, as shown below this list):

 

The ZFS filesystem parameter recordsize is set to 16K instead of the default 128K. This is done at the zpool level and affects all Persistent Volumes of user pods.

 

The maximum size of the ZFS ARC cache is limited to 1/3 of the total memory available on the host.

 

The file-level prefetching mechanism of ZFS is disabled:

 

  zfs_prefetch_disable = 1
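
After deployment, these values can be checked on a node that uses the ZFS backend, for example (pool_name below is a placeholder for the actual zpool name):

cat /sys/module/zfs/parameters/zfs_prefetch_disable

cat /sys/module/zfs/parameters/zfs_arc_max

zfs get recordsize pool_name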

 

To deploy KuberDock with ZFS as the local storage backend, run the command:

 

bash ./deploy.sh --zfs

 

All local storage of Docker containers will then be located on ZFS.

 

3. The script then automatically detects the available IP address of the master and asks you to confirm it or enter another value:

 

Enter master server IP address to communicate with the nodes (it should be an address of the cluster network):

[found_ip_here]

 

Make sure that this address is the same as the one specified in the cluster network settings for the master to communicate with the nodes.
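
If you are not sure which addresses are configured on the master, you can list them first, for example:

ip -4 addr show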

 

4. Enter the interface to bind public IP addresses on the nodes:

 

[found_interface_here]

 

5. When the installation is completed, you will see the following message:

 

Installation completed and log saved to /var/log/kuberdock_master_deploy.log

KuberDock is available at https://master-ip/

login: admin

password: [your password]

 

Please save your password in a secure place.

 

Note that if errors occurred during installation and you do not see this message, run the command:

 

bash ./deploy.sh --cleanup

 

After that, start the installation again from the first step.
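
Before re-running the deployment, it may also help to review the end of the log mentioned above to find the failing step, for example:

tail -n 100 /var/log/kuberdock_master_deploy.log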

 

If the error occurs again, please contact our support team via email [email protected] or submit a ticket on our helpdesk.

 

6. Go to the following address in your browser: https://master-ip/ and log in using your administrator credentials.

 

7. Go to the Settings page and click License. Click the pen icon to enter the Installation ID and activate KuberDock. If you do not have an Installation ID, go to kuberdock.com, click Try KuberDock, fill in the application form, and get a trial Installation ID.

 

Enter the Installation ID and click Apply. KuberDock will be activated within the next few minutes.

 

8. Perform the following steps to configure SSL certificate:

 

8.1. Upload the SSL certificate files to the KuberDock master server.
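
For example, the files can be copied from the machine where they are stored using scp (the file names and the /etc/nginx/ssl destination folder here are placeholders, assuming that folder already exists on the master):

scp crt_file.crt key_file.key pem_file.pem root@master-ip:/etc/nginx/ssl/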

 

8.2. Edit the file /etc/nginx/conf.d/kuberdock-ssl.conf and set the paths to the SSL files:

 

  ssl_certificate /path/to/crt_file.crt;        # path to the crt file of your certificate

  ssl_certificate_key /path/to/key_file.key;    # path to the key file of your certificate

  ssl_dhparam /path/to/pem_file.pem;            # path to the pem file of your certificate

 

Note. Make sure that the nginx process has access to those files.
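
Optionally, you can verify the nginx configuration syntax before restarting, for example:

nginx -t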

 

8.3. Restart the nginx process by running the command:

 

service nginx restart