Introduction to IBM Cloud Private

IBM continues to reinvent itself and is bringing cloud-native development technologies to large enterprises. Its investment in containers, microservices, and especially Kubernetes is an excellent sign for the Kubernetes community.

Over the last ten years, we have grown used to seeing infrastructure giants like Cisco, Dell EMC, HPE, and IBM watch from a distance as new technologies emerge, then acquire them once they become profitable. Most of the time they buy revenue to satisfy shareholders instead of investing in their own engineering resources. In that sense, IBM Cloud Private (ICP) is a promising move from IBM.

ICP is based on Kubernetes. I have been playing with ICP for over two months now, since version 1.2, and the user experience seems to be steadily improving. The latest version, ICP 2.1, was announced last week and includes a few important changes:

  • Upgraded to use Kubernetes version 1.7.3
  • New (lighter) UI theme
  • Support for ICP Cloud Foundry (in addition to Docker)
  • Catalog of IBM products to try (aka an App Store) that includes optimized container versions of IBM’s WebSphere Liberty, Db2, and MQ applications
  • Kibana dashboard for logging (Optional)
  • Infrastructure choice, with the ability to deploy on VMware, Canonical, or OpenStack (Optional)
  • Support to configure vSphere Cloud Provider during setup (Optional)
  • Encryption for cluster data network traffic with IPSec (Optional)
  • Support to set up a Federation (Tech Preview)
  • Security scanning with Vulnerability advisor (Tech Preview)
  • Data and analytics services for developers
  • Support for most DevOps tools

In this post, I will provide step-by-step instructions on how to configure a Kubernetes-based managed private cloud using ICP. Now, let’s take a look at the requirements.


Minimum requirements for a multi-node cluster:


  • Boot node: 1 node; 1+ cores >= 2.4 GHz, 4 GB RAM, >= 100 GB disk space
  • Master node: 1 or 3 nodes; 2+ cores >= 2.4 GHz, 4+ GB RAM, >= 151 GB disk space
  • Proxy node: 1 or 3 nodes; 2+ cores >= 2.4 GHz, 4 GB RAM, >= 40 GB disk space
  • Worker node: 1+ nodes; 1+ cores >= 2.4 GHz, 4 GB RAM, >= 100 GB disk space

Since I’m not planning to run anything heavy, I’ll be using 3 nodes and install the Master, Proxy, and Worker roles on all 3 nodes.

Note: My lab hardware consists of an Aparna Systems Orca µCloud 4015 chassis and 3 Oserv8 µServers with 8-core 2.1 GHz CPUs, 64 GB DDR RAM, 2x NVMe drives, and 2×10 Gbps integrated networking. The enclosure takes only 4U of rack space. I like it because once it’s installed, no additional cabling is required to add up to 15 servers (60 servers in 4060). The µServers are packaged in a hot-swappable cartridge form factor that is about the size of a 3.5-inch hard disk drive.


Differences between the Community (ICP-CE), Cloud Native, and Enterprise packages:

Community Edition (CE) is intended for non-production use and includes all primary services like Kubernetes, logging, monitoring, IAM, and access to the catalog. It’s limited to one master node. You can try all the core functionality except for setting up a highly available cluster.

If you plan to open your private cloud securely to provide cloud services, you need to be on the next tier, which is available through your IBM Sales Representative. The Cloud Native package includes everything in CE and additionally Cloud Automation Manager, Microservice Builder, and WebSphere Liberty. Cloud Foundry is also available for this option.

The Enterprise package includes all of the above capabilities, plus WebSphere MQ Advanced, WebSphere Application Server Network Deployment, and API Connect Professional. Enterprise package also offers optional IBM UrbanCode Deploy and IBM Db2 Direct Advanced add-ons.

How to install IBM Private Cloud 2.1

We need a few things before we get up and running with ICP 2.1. First, I’ll configure my Ubuntu servers and share SSH keys, so the boot node can access all my other nodes. Then I’ll install Docker and after that ICP. From there, ICP will take care of my Kubernetes cluster installation.

Install the base O/S – Ubuntu

  1. Download your preferred version of Ubuntu. I use Ubuntu Server 16.04.3 LTS.
  2. Install Ubuntu on all servers with default options. I used user/nopassword as username/password for simplicity.
  3. Log in to your Ubuntu host via terminal.
  4. Edit the /etc/network/interfaces file, assign a static IP, and set a hostname. For my setup, the nodes are ubuntu36 (the boot node), ubuntu37, and ubuntu38.

  5. Edit the /etc/hosts file, add your nodes to the list, and make sure you can ping them by the hostname:
    cat /etc/hosts

    ping ubuntu37
    ping ubuntu38

  6. On your Ubuntu host, install the SSH server:
    sudo apt-get install openssh-server

    Now, you should be able to access your servers using SSH. Check the status by running:

    sudo service ssh status

  7. Disable firewall on your Ubuntu VM by running:
    sudo ufw disable
  8. Install curl if it’s not already installed:
    sudo apt install curl
  9. Repeat steps 3-8 on all servers.
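As a concrete sketch, the static-IP and name-resolution edits from steps 4 and 5 might look like the following. The interface name and all IP addresses here are placeholders I made up for illustration, so substitute your own:

```
# /etc/network/interfaces (per node; interface name and addresses are hypothetical)
auto ens160
iface ens160 inet static
    address 192.168.1.36
    netmask 255.255.255.0
    gateway 192.168.1.1
    dns-nameservers 8.8.8.8

# /etc/hosts (identical on every node)
192.168.1.36 ubuntu36
192.168.1.37 ubuntu37
192.168.1.38 ubuntu38
```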

Now, we need to share SSH keys among all nodes:

  1. Log in to your first node, which will be the boot node (ubuntu36), as root.
  2. Generate an SSH key:
    ssh-keygen -b 4096 -t rsa -f ~/.ssh/master.id_rsa -N ""
  3. Add the SSH key to the list of authorized keys:
    cat ~/.ssh/master.id_rsa.pub | sudo tee -a ~/.ssh/authorized_keys
  4. From the boot node, add the SSH public key to other nodes in the cluster:
    ssh-copy-id -i ~/.ssh/master.id_rsa.pub root@ubuntu37
  5. Repeat for all nodes.
  6. Log in to the other nodes and restart the SSH service:
    sudo systemctl restart sshd
  7. Now the boot node can connect through SSH to all other nodes without the password.
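The key distribution in steps 4-6 can be sketched as a small loop run on the boot node. The hostnames below are the ones from my /etc/hosts; adjust the list for your own cluster:

```shell
#!/bin/sh
# Distribute the boot node's public key to every other node.
NODES="ubuntu37 ubuntu38"   # target nodes, as named in /etc/hosts

for node in $NODES; do
    # Copy the public half of the key generated earlier
    ssh-copy-id -i ~/.ssh/master.id_rsa.pub "root@${node}"
    # Confirm passwordless login works; BatchMode fails fast if it doesn't
    ssh -o BatchMode=yes "root@${node}" true \
        && echo "${node}: passwordless SSH OK"
done
```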

Install Docker

To get the latest version of Docker, install it from the official Docker repository.

  1. On your Ubuntu nodes, run the following commands:
    curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
    sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
    sudo apt-get update

  2. Confirm that you will install the binaries from the Docker repository instead of the default Ubuntu repository by running:
    apt-cache policy docker-ce

  3. Install Docker and make sure it’s up and running after installation is complete:
    sudo apt-get install -y docker-ce
    sudo systemctl status docker

Install IBM Cloud Private 2.1 with HA

Download the ibm-cloud-private-x86_64-2.1.0.tar.gz file.

  1. On all master nodes, make sure that the vm.max_map_count is set to 262144 (default is 65530).
    sudo sysctl -w vm.max_map_count=262144

  2. SSH into your first node (boot node). Extract and load the image into Docker:
    tar xf ibm-cloud-private-x86_64-2.1.0.tar.gz -O | sudo docker load
    sudo apt-get install open-iscsi
  3. Create an installation folder for configuration files and extract the sample config file:
    mkdir /opt/ibm-cloud-private-2.1.0
    cd /opt/ibm-cloud-private-2.1.0
    sudo docker run -v $(pwd):/data -e LICENSE=accept ibmcom/icp-inception:2.1.0-ee cp -r cluster /data
  4. The previous command creates the cluster directory under /opt/ibm-cloud-private-2.1.0 with the following files: config.yaml, hosts, misc/storage_class, and ssh_key. These files need to be modified before deploying ICP.
  5. Replace the ssh_key file with the private SSH key you created earlier.
    sudo cp ~/.ssh/master.id_rsa /opt/ibm-cloud-private-2.1.0/cluster/ssh_key
  6. Copy or move the ICP image archive into the images folder under the cluster directory.
    mkdir -p cluster/images
    sudo mv /opt/ibm-cloud-private-x86_64-2.1.0.tar.gz cluster/images/
  7. Add the IP addresses of all nodes to the hosts file in the cluster directory. If you plan to run production workloads, I recommend separate master and worker Kubernetes nodes. Since I want to try high availability with only three nodes, I list all three under the master, proxy, and worker sections.
  8. To configure a failover-capable Kubernetes cluster, set a virtual IP (VIP) for the master nodes and another for the proxy nodes. Both VIPs are defined in the config.yaml file. Edit config.yaml and set the parameter values as follows:
    # HA settings
    vip_iface: eth0
    cluster_vip: <master VIP address>
    # Proxy settings
    proxy_vip_iface: eth0
    proxy_vip: <proxy VIP address>

    NOTE: You must use the actual interface names of your NICs and different IP addresses for the cluster_vip and proxy_vip parameter values.

  9. Finally, deploy the environment. Change to the cluster folder that contains config.yaml and run the installer:
    cd /opt/ibm-cloud-private-2.1.0/cluster
    sudo docker run --net=host -t -e LICENSE=accept -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee install
  10. The last step may take 5-10 minutes. If your deployment is successful, you should be able to reach the ICP login screen at https://<cluster_vip>:8443 (default username/password: admin/admin).
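For reference, with all three roles on all three nodes (as in my setup), the hosts file and the HA portion of config.yaml might look like the sketch below. All IP addresses and VIPs here are placeholders, not my actual lab values:

```
# cluster/hosts (placeholder addresses)
[master]
192.168.1.36
192.168.1.37
192.168.1.38

[worker]
192.168.1.36
192.168.1.37
192.168.1.38

[proxy]
192.168.1.36
192.168.1.37
192.168.1.38

# cluster/config.yaml (HA excerpt; VIPs must be unused addresses)
vip_iface: eth0
cluster_vip: 192.168.1.40
proxy_vip_iface: eth0
proxy_vip: 192.168.1.41
```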

IBM Cloud Private Login Screen


IBM Cloud Private Dashboard


IBM Cloud Private Catalog

How to Uninstall IBM Cloud Private 2.1 with HA

If you need to clean up your setup, reinstall, or remove ICP for any reason, you have two options: run the uninstall command below, or forcefully remove all Docker containers at once on all nodes.

docker run -e LICENSE=accept --net=host --name=installer -t -v $(pwd):/installer/cluster ibmcom/icp-inception:2.1.0-ee uninstall
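The second, more brute-force option can be sketched as follows. Run it on every node, and note that it is indiscriminate: it removes all containers on the host, not only the ICP ones.

```shell
#!/bin/sh
# Forcefully remove every Docker container on this node.
# WARNING: indiscriminate cleanup; use only if the uninstaller fails.
containers=$(sudo docker ps -aq)
if [ -n "$containers" ]; then
    sudo docker rm -f $containers
else
    echo "no containers to remove"
fi
```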

I will go over the configuration of other optional features in my next blog post as I get more familiar with the platform.

  • Introduction to IBM Cloud Private #2 – A quick look into the UI and functionality (coming soon)
  • Introduction to IBM Cloud Private #3 – Optional Features (coming soon)
  • Introduction to IBM Cloud Private #4 – How to deploy workloads on OpenEBS

To be continued…


Also published on Medium.
