IBM continues to reinvent itself, bringing cloud-native development technologies to large enterprises. Its investment in containers, microservices, and especially Kubernetes is an excellent sign for the Kubernetes community.
Over the last ten years, we have grown used to seeing infrastructure giants like Cisco, Dell EMC, HPE, and IBM watch from a distance as new technologies emerge, then acquire them once they become profitable. Most of the time, they buy revenue to satisfy shareholders instead of investing in their own engineering resources. In that sense, IBM Cloud Private (ICP) is a promising move from IBM.
ICP is based on Kubernetes. I have been playing with ICP for over two months now, since version 1.2, and the user experience seems to be steadily improving. The latest version, ICP 2.1, was announced last week and includes a few important changes:
- Upgraded to use Kubernetes version 1.7.3
- New (lighter) UI theme
- Support for ICP Cloud Foundry (in addition to Docker)
- Catalog of IBM products to try (effectively an App Store) that includes optimized container versions of IBM’s WebSphere Liberty, Db2, and MQ applications
- Kibana dashboard for logging (Optional)
- Infrastructure choice: the ability to deploy on VMware, Canonical, or OpenStack (Optional)
- Support to configure vSphere Cloud Provider during setup (Optional)
- Encryption for cluster data network traffic with IPSec (Optional)
- Support to set up a Federation (Tech Preview)
- Security scanning with Vulnerability advisor (Tech Preview)
- Data and analytics services for developers
- Support for most DevOps tools
In this post, I will provide step-by-step instructions on how to configure a Kubernetes-based managed private cloud using ICP. Now, let’s take a look at the requirements.
Minimum requirements for a multi-node cluster:
- Boot node: 1 node with 1+ cores >= 2.4 GHz, 4 GB RAM, >= 100 GB disk space
- Master node: 1 or 3 nodes with 2+ cores >= 2.4 GHz, 4+ GB RAM, >= 151 GB disk space
- Proxy node: 1 or 3 nodes with 2+ cores >= 2.4 GHz, 4 GB RAM, >= 40 GB disk space
- Worker node: 1 or more nodes with 1+ cores >= 2.4 GHz, 4 GB RAM, >= 100 GB disk space
Since I’m not planning to run anything heavy, I’ll be using 3 nodes and install Master, Proxy, and Worker roles on all 3 nodes.
Note: My lab hardware consists of an Aparna Systems Orca µCloud 4015 chassis and 3 Oserv8 µServers, each with an 8-core 2.1 GHz CPU, 64 GB DDR RAM, 2x NVMe drives, and 2x 10 Gbps integrated networking. The enclosure takes only 4U of rack space. I like it because once it’s installed, no additional cabling is required to add up to 15 servers (60 servers in the 4060). The µServers are packaged in a hot-swappable cartridge form factor about the size of a 3.5-inch hard disk drive.
Differences between the Community (ICP-CE), Cloud Native, and Enterprise packages:
Community Edition (CE) is intended for non-production use and includes all primary services like Kubernetes, logging, monitoring, IAM, and access to the catalog. It’s limited to one master node. You can try all the core functionality except for setting up a highly available cluster.
If you plan to open your private cloud securely to provide cloud services, you need to be on the next tier, which is available through your IBM Sales Representative. The Cloud Native package includes everything in CE and additionally Cloud Automation Manager, Microservice Builder, and WebSphere Liberty. Cloud Foundry is also available for this option.
The Enterprise package includes all of the above capabilities, plus WebSphere MQ Advanced, WebSphere Application Server Network Deployment, and API Connect Professional. Enterprise package also offers optional IBM UrbanCode Deploy and IBM Db2 Direct Advanced add-ons.
How to install IBM Private Cloud 2.1
We need a few things before we get up and running with ICP 2.1. First, I’ll configure my Ubuntu servers and share SSH keys, so the boot node can access all my other nodes. Then I’ll install Docker and after that ICP. From there, ICP will take care of my Kubernetes cluster installation.
Install the base O/S – Ubuntu
- Download your preferred version of Ubuntu. I use Ubuntu Server 16.04.3 LTS.
- Install Ubuntu on all servers with default options. I used user/nopassword as username/password for simplicity.
- Log in to your Ubuntu host via terminal.
- Edit the /etc/network/interfaces file, assign a static IP, and set a hostname. For my setup, I used:
Hostname: ubuntu36
IP: 192.168.20.36
- Edit the /etc/hosts file, add your nodes to the list, and make sure you can ping them by hostname:
cat /etc/hosts
ping ubuntu37
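As a sketch, the node entries can be staged and appended in one go. Only ubuntu36’s address is given in my setup above and ubuntu37 is named later; the .37/.38 addresses and the ubuntu38 hostname are illustrative assumptions:

```shell
# Append cluster node entries to a hosts file.
# Writing to ./hosts.example here; on a real node, target /etc/hosts with sudo.
# Only 192.168.20.36/ubuntu36 comes from my setup; the other entries are illustrative.
HOSTS_FILE=./hosts.example
for entry in "192.168.20.36 ubuntu36" \
             "192.168.20.37 ubuntu37" \
             "192.168.20.38 ubuntu38"; do
  echo "$entry" >> "$HOSTS_FILE"
done
cat "$HOSTS_FILE"
```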
- On your Ubuntu host, install the SSH server:
sudo apt-get install openssh-server
Now, you should be able to access your servers using SSH. Check the status by running:
sudo service ssh status
- Disable firewall on your Ubuntu VM by running:
sudo ufw disable
- Install curl if it’s not already installed:
sudo apt install curl
- Repeat steps 3-9 on all servers.
Now, we need to share SSH keys among all nodes:
- Log in to your first node, which will be the boot node (ubuntu36), as root.
- Generate an SSH key:
ssh-keygen -b 4096 -t rsa -f ~/.ssh/master.id_rsa -N ""
- Add the SSH key to the list of authorized keys:
cat ~/.ssh/master.id_rsa.pub | sudo tee -a ~/.ssh/authorized_keys
- From the boot node, add the SSH public key to other nodes in the cluster:
ssh-copy-id -i ~/.ssh/master.id_rsa.pub root@ubuntu37
- Repeat for all nodes.
- Log in to the other nodes and restart the SSH service:
sudo systemctl restart sshd
- Now the boot node can connect through SSH to all other nodes without the password.
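The key exchange can be verified from the boot node with a quick loop; the hostnames below follow my lab naming (ubuntu38 is an assumption), so substitute your own node names:

```shell
# From the boot node: confirm passwordless root SSH to every other node.
for node in ubuntu37 ubuntu38; do
  ssh -i ~/.ssh/master.id_rsa -o BatchMode=yes root@"$node" hostname \
    && echo "OK: $node" \
    || echo "FAILED: $node"
done
```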
To get the latest version of Docker, install it from the official Docker repository.
- On your Ubuntu nodes, run the following commands:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get update
- Confirm that you are about to install the binaries from the Docker repository instead of the default Ubuntu repository by running:
apt-cache policy docker-ce
- Install Docker and make sure it’s up and running after installation is complete:
sudo apt-get install -y docker-ce
sudo systemctl status docker
Install IBM Cloud Private 2.1 with HA
- On all master nodes, make sure that vm.max_map_count is set to 262144 (default is 65530):
sudo sysctl -w vm.max_map_count=262144
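Note that sysctl -w only applies until the next reboot. As a suggested extra step (not part of the original instructions), persist the setting in /etc/sysctl.conf on each master node:

```shell
# Persist vm.max_map_count across reboots.
echo "vm.max_map_count=262144" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p    # reload and apply values from /etc/sysctl.conf
```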
- SSH into your first node (boot node). Extract and load the image into Docker:
tar xf ibm-cloud-private-x86_64-2.1.0.tar.gz -O | sudo docker load
- Install the open-iscsi package:
sudo apt-get install open-iscsi
- Create an installation folder for configuration files and extract the sample config file:
sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception:2.1.0-ee cp -r cluster /data
- The previous command creates the cluster directory under /opt/ibm-cloud-private-2.1.0 with the following files: config.yaml, hosts, misc/storage_class, and ssh_key. Before deploying ICP, these files need to be modified.
- Replace the ssh_key file with the private SSH key you created earlier:
sudo cp ~/.ssh/master.id_rsa /opt/cluster/ssh_key
- Copy or move the ICP Docker image to the /opt/cluster/images folder:
mkdir -p cluster/images
sudo mv /opt/ibm-cloud-private-x86_64-2.1.0.tar.gz cluster/images/
- Add the IP addresses of all nodes to the hosts file in the /opt/cluster directory. If you plan to run production workloads, I recommend separate master and worker Kubernetes nodes. Since I want to try high availability with three nodes, my hosts file looks like this:
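ICP’s hosts file is a standard Ansible-style inventory. With all three roles on all three nodes, it looks roughly like this (the .37 and .38 addresses are assumptions based on my lab numbering):

```ini
[master]
192.168.20.36
192.168.20.37
192.168.20.38

[worker]
192.168.20.36
192.168.20.37
192.168.20.38

[proxy]
192.168.20.36
192.168.20.37
192.168.20.38
```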
- To configure a Kubernetes failover cluster, set a virtual IP (VIP) for the master nodes. The VIPs for the master and proxy nodes are defined in config.yaml; add the parameter values as follows (the interface name and VIP addresses below are examples):
# HA settings
vip_iface: eno1
cluster_vip: 192.168.20.40
# Proxy settings
proxy_vip_iface: eno1
proxy_vip: 192.168.20.41
NOTE: You must use the actual interface names of your NICs and different, unused IP addresses for the cluster and proxy VIPs.
- Finally, deploy the environment. Change directory to the cluster folder containing the config.yaml file and deploy your ICP environment:
cd /opt/ibm-cloud-private-2.1.0/cluster
sudo docker run --net=host -t -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee install
- The last step may take 5-10 minutes. If your deployment is successful, you should be able to reach the ICP login screen at https://cluster_vip:8443 (default username/password is admin/admin).
IBM Cloud Private Login Screen
How to Uninstall IBM Cloud Private 2.1 with HA
If you need to clean up your setup, reinstall, or remove ICP for any reason, you have two options: run the uninstaller, or forcefully kill all Docker containers at once on all nodes.
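The cleaner option is to run the inception image’s uninstall action from the boot node’s cluster directory, mirroring the install command above:

```shell
# From the cluster directory on the boot node: uninstall ICP from all nodes.
cd /opt/ibm-cloud-private-2.1.0/cluster
sudo docker run --net=host -t -e LICENSE=accept \
  -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee uninstall
```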
I will go over the configuration of other optional features in my next blog post as I get more familiar with the platform.
- Introduction to IBM Cloud Private #2 – A quick look into the UI and functionality (coming soon)
- Introduction to IBM Cloud Private #3 – Optional Features (coming soon)
- Introduction to IBM Cloud Private #4 – How to deploy workloads on OpenEBS
To be continued…
Also published on Medium.