OpenShift Origin CentOS7

Purpose

I needed a test environment to see whether I could get MongoDB 3.2 up on OpenShift, with MongoDB persisting its data on an NFS-backed persistent volume.

So I created what I needed…

Prerequisites

This demo requires Vagrant and libvirt to be installed. The Vagrant hostmanager plugin must be installed as well.
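If the plugins are missing, they can be added with vagrant plugin install; a minimal sketch, assuming the vagrant-libvirt plugin is what provides the libvirt provider:

# install the libvirt provider and the hostmanager plugin
vagrant plugin install vagrant-libvirt
vagrant plugin install vagrant-hostmanager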

10 GiB of free RAM and some free disk space should be available.

The host needs internet access, as provisioning will download RPMs and Docker container images.

Preparation

git clone --recursive https://github.com/goern/my-openshift-origin-centos7
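Should the clone have been made without --recursive, the openshift-ansible checkout used in the Usage step below can still be fetched afterwards; a sketch, assuming openshift-ansible is tracked as a git submodule (which the --recursive flag suggests):

# fetch any missing submodules of an already-cloned repository
git submodule update --init --recursive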

Usage

vagrant up --no-parallel will do the trick, followed by:

ansible-playbook -i inventory.ini \
  --private-key=~/.vagrant.d/insecure_private_key \
  openshift-ansible/playbooks/byo/config.yml
Warning
This will take about 1.5 hrs to finish, given you have a high-bandwidth internet connection and a reasonably fast disk. That is approx. 1 hour for vagrant up plus an additional 20 minutes per Ansible run.
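Once the playbook finishes, a quick sanity check can be run from the master VM; a hedged sketch, assuming the VM is named master-1 (matching the console URL below) and the default kubeconfig location of an openshift-ansible install:

vagrant ssh master-1
# list the nodes; all of them should report a Ready status
sudo oc --config=/etc/origin/master/admin.kubeconfig get nodes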

OpenShift Origin Web Console

You should be able to access the web console at http://master-1.goern.example.com:8443/ using any username and password. The OpenShift master is configured to do an allow_all authentication.
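The same allow_all behaviour applies to the CLI, so any username/password pair logs in; for example:

oc login https://master-1.goern.example.com:8443 -u developer -p anything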

Known Issues

dnsIP of Nodes

The configuration item dnsIP is not set on any node; therefore, the kubelet tries to resolve IP addresses via the host resolver, which points to the host all the Vagrant VMs run on (aka the gateway).

Workaround

First bring up the master, figure out its IP address, and set openshift_dns_ip in inventory.ini accordingly.
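A sketch of both steps; the interface name eth0 and the address shown are assumptions that will differ per setup:

# 1. read the master's IP address from inside the VM
vagrant ssh master-1 -c "ip -4 addr show eth0"

# 2. set it in inventory.ini (hypothetical address shown)
[OSEv3:vars]
openshift_dns_ip=192.168.121.10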

pod node label selector conflicts with its project node label selector

no cluster-admin

As we use the allow_all auth method, no user has cluster-admin rights by default, so one cluster admin needs to be created.

Workaround

Use oadm policy add-cluster-role-to-user cluster-admin admin to assign this role to the admin user. Use admin as the login on the web console.
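Put together and run against the master, the sequence might look like this; the kubeconfig path is an assumption based on a default openshift-ansible install:

vagrant ssh master-1
# grant the cluster-admin role to the user named admin
sudo oadm --config=/etc/origin/master/admin.kubeconfig \
  policy add-cluster-role-to-user cluster-admin admin
# log in as admin (any password works) and confirm cluster-wide access
oc login https://master-1.goern.example.com:8443 -u admin -p anything
oc get nodes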