OpenShift Origin is the open source upstream project that powers OpenShift, Red Hat's container application platform. It supports Python, PHP, Perl, Node.js, Ruby, and Java, and is extensible so that users can add support for other languages. The resources allocated to applications can be scaled automatically or manually as required, so that performance does not degrade as demand increases. OpenShift provides portability through the DeltaCloud API, so customers can migrate deployments to other cloud vendors' environments. OpenShift is built on Docker and Kubernetes, giving you the ability to create custom, reusable application images. OpenShift is designed to be a highly available, scalable application platform. When configured properly, a large OpenShift deployment can scale your applications easily as demand increases while providing zero downtime, and with a cluster of OpenShift hosts in multiple data center locations, you can survive an entire data center going down.
In this article we will show you how to install and configure it on a standalone CentOS 7 server with a minimal set of packages installed.
In a highly available OpenShift Origin cluster with external etcd, a master host needs 1 CPU core and 1.5 GB of memory for every 1000 pods, in addition to the baseline minimum for a master host of 2 CPU cores and 16 GB of RAM. The recommended size for a master host in an OpenShift Origin cluster of 2000 pods would therefore be an extra 2 CPU cores and 3 GB of RAM on top of that minimum.
OpenShift Origin requires a fully functional DNS server in the environment. This is ideally a separate host running DNS software that can provide name resolution to the hosts and containers running on the platform. Let's set up DNS to resolve your host, and configure an FQDN with a domain on your VMs.
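If you do not yet have a separate DNS server, a quick way to get name resolution working for a single-host test is a static hosts entry. The hostname 'master.example.com' and IP '192.168.0.10' below are hypothetical placeholders, not values from this setup:

```
# /etc/hosts — hypothetical entry; replace the IP and FQDN with your own
192.168.0.10   master.example.com master
```

You can then set the matching FQDN on the host itself with 'hostnamectl set-hostname master.example.com'.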
Set SELINUXTYPE=targeted in the '/etc/selinux/config' file if it is not already set. Security-Enhanced Linux (SELinux) must be enabled on all of the servers before installing OpenShift Origin, or the installation will fail.
# vi /etc/selinux/config
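After opening the file, the relevant lines should read as follows (SELINUX=enforcing is the CentOS 7 default; 'targeted' is the policy type this setup expects):

```
# /etc/selinux/config (excerpt)
SELINUX=enforcing
SELINUXTYPE=targeted
```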
Make sure to update your system with latest updates and security patches using the following command.
# yum update -y
There are three options for installing OpenShift: curl-to-shell, a portable installer, or installing from source. In this article we will install OpenShift Origin from source using Docker.
Run the command below to install Docker, along with the 'vim' editor and 'wget' utility required for this setup, if they are not already installed on your system.
# yum install docker wget vim -y
Once the installation is complete, we need to configure Docker to trust the registry that we will be using for OpenShift images. Open the '/etc/sysconfig/docker' file in your command line editor.
# vim /etc/sysconfig/docker
Comment out the default INSECURE_REGISTRY line and add one that trusts the registry subnet:
# INSECURE_REGISTRY='--insecure-registry'
INSECURE_REGISTRY='--insecure-registry 192.168.0.0/16'
Save and close the configuration file, then restart the Docker service using the command below.
# systemctl restart docker.service
Install and Configure OpenShift
Now that the Docker service is up and running, we are going to set up OpenShift to run as a standalone process managed by systemd. Run the commands below to download the OpenShift binaries from GitHub into the '/tmp' directory.
# cd /tmp
# wget https://github.com/openshift/origin/releases/download/v1.4.1/openshift-origin-server-v1.4.1-3f9807a-linux-64bit.tar.gz
Then extract the package and change into the extracted folder to move all the binary files into the '/usr/local/sbin' directory.
# tar -zxf openshift-origin-server-*.tar.gz
# cd openshift-origin-server-v1.4.1+3f9807a-linux-64bit/
# mv k* o* /usr/local/sbin/
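Assuming '/usr/local/sbin' is on root's PATH, you can confirm the binaries were moved correctly by printing their versions. This is a quick sanity check, not one of the original steps:

```shell
# Both should report the Origin release just downloaded (v1.4.1)
openshift version
oc version
```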
Next, we will create a startup script and a systemd unit file, placing our public and private IP addresses in the startup script.
# vim /usr/local/bin/start_openshift.sh
#!/bin/bash
cd /opt/openshift/
openshift start --public-master='https://:8443' --master='https://:8443'
Save and close the file, then put the following contents into a newly created systemd unit file.
# vim /etc/systemd/system/openshift.service
[Unit]
Description=OpenShift Origin Server

[Service]
Type=simple
ExecStart=/usr/local/bin/start_openshift.sh
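As written, the unit can be started manually but will not start at boot. If you also want that, an '[Install]' section can be appended to the unit file; this is an optional addition, not part of the original setup:

```
[Install]
WantedBy=multi-user.target
```

After adding it, running 'systemctl enable openshift' registers the service to start at boot.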
That's it. Now save the file, make the startup script executable, create the working directory, and reload systemd so that the new unit file is picked up.
# chmod u+x /usr/local/bin/start_openshift.sh
# mkdir /opt/openshift/
# systemctl daemon-reload
After reloading the daemon, start the OpenShift service using the commands below and confirm that its status is active.
# systemctl start openshift
# systemctl status openshift
Now that the OpenShift service is up and running, open TCP ports 80, 443, and 8443 in your firewall so that you can manage the OpenShift installation remotely and access its applications.
# firewall-cmd --zone=public --add-port=80/tcp
# firewall-cmd --zone=public --add-port=443/tcp
# firewall-cmd --zone=public --add-port=8443/tcp
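Note that 'firewall-cmd' changes made without '--permanent' apply only to the running firewall and are lost on reboot. To persist them, the same rules can be added permanently and firewalld reloaded:

```shell
# Persist the rules across reboots, then reload firewalld to apply them
firewall-cmd --permanent --zone=public --add-port=80/tcp
firewall-cmd --permanent --zone=public --add-port=443/tcp
firewall-cmd --permanent --zone=public --add-port=8443/tcp
firewall-cmd --reload
```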
Adding the OpenShift Router and Registry
Now we need to install an OpenShift router so that apps can be served over the public IP address. OpenShift uses a Docker registry to store Docker images for easier management of your application lifecycle, and the router routes requests to specific apps based on their domain names. First, we need to tell the CLI tools where our settings and CA certificate are, so that we can authenticate to our new OpenShift cluster.
Let's add the following lines to '/root/.bashrc' so that they are loaded when we switch to the root user.
# export KUBECONFIG=/opt/openshift/openshift.local.config/master/admin.kubeconfig
# export CURL_CA_BUNDLE=/opt/openshift/openshift.local.config/master/ca.crt
Reload '.bashrc' to update settings.
# source /root/.bashrc
Then use the command below to log in to the cluster.
# oc login -u system:admin
Logged into "https://YOUR_SERVER_IP:8443" as "system:admin" using existing credentials.

You have access to the following projects and can switch between them with 'oc project <projectname>':

  * default
    kube-system
    openshift
    openshift-infra

Using project "default".
Now add the router and the registry using the commands shown below.
# oadm policy add-scc-to-user hostnetwork -z router
# oadm router
info: password for stats user admin has been set to s0iOOpIcnW
--> Creating router router ...
    serviceaccount "router" created
    clusterrolebinding "router-router-role" created
    deploymentconfig "router" created
    service "router" created
--> Success
# oadm registry
--> Creating registry registry ...
    serviceaccount "registry" created
    clusterrolebinding "registry-registry-role" created
    deploymentconfig "docker-registry" created
    service "docker-registry" created
--> Success
Accessing OpenShift Origin
The OpenShift installation is now complete. You can test your OpenShift deployment by visiting https://YOUR_SERVER_IP:8443 in a web browser.
You will be prompted with the OpenShift login screen. By default, OpenShift allows you to log in with any username and password combination and automatically creates an account for you. You will then have access to create projects and apps. We are going to create an account with the username 'ks' as shown.
Creating a New Project in OpenShift
After successfully logging in, you will be prompted to create a new project. Projects contain one or more related apps. Let's create a test project so that we can deploy our first app.
Next, give the new project a name, along with its display name and a short description.
After creating the new project, the next screen you will see is the “Add to Project” screen, where we can add our application images to OpenShift to get them ready for deployment. In this case, we’re going to deploy an existing image by clicking on the “Deploy Image” tab. Since OpenShift uses Docker, this allows us to pull an image directly from Docker Hub or any other registry.
To test, we’re going to use the 'openshift/hello-openshift' image by entering it into the “Image Name” field as shown in the image below.
Click on the search icon to the right of the image name, and then click the 'Create' button at the bottom to deploy the basic image with the default options; no extra configuration is required.
Click on the Project Overview to check the status of your application.
Creating a New Route
Now we are going to create a new route to make our application accessible through the OpenShift router that we created earlier. To do so, click on the “Applications” menu on the left and then go to Routes.
Routing is a way to make your application publicly visible. Once you click the 'Create Route' button, you need to enter a name that is unique within the project, along with the hostname and path that the router watches in order to route traffic to the service.
After that, OpenShift will generate a hostname that can be used to access your application. When setting this up in production, you should create a wildcard A record in your DNS to allow automatic routing of all apps to your OpenShift cluster.
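As a sketch, such a wildcard record in a BIND zone file would look like the following; the 'apps.example.com' subdomain and the IP address are hypothetical placeholders:

```
; zone file excerpt — every app hostname under apps.example.com
; resolves to the OpenShift host running the router
*.apps.example.com.   IN  A   192.168.0.10
```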
For testing, add the generated hostname to your local hosts file: '/etc/hosts' on Linux, or 'C:\WINDOWS\system32\drivers\etc\hosts' on Windows.
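For example, if OpenShift generated the hostname 'nodejs-ex-test.apps.example.com' (a hypothetical value), the hosts entry would be:

```
# /etc/hosts — point the generated hostname at your OpenShift server IP
192.168.0.10   nodejs-ex-test.apps.example.com
```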
Adding a New Application to OpenShift Origin
OpenShift Origin provides tools for running builds as well as building source code from within predefined builder images via the Source-to-Image toolchain. To create a new application that combines a Node.js builder image with example source code to produce a new deployable Node.js image, connect as the administrative user, change to the default project, and run the following command.
# oc new-app openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git
--> Found Docker image b3b1ce7 (3 months old) from Docker Hub for "openshift/nodejs-010-centos7"

    Node.js 0.10
    ------------
    Platform for building and running Node.js 0.10 applications

    Tags: builder, nodejs, nodejs010

    * An image stream will be created as "nodejs-010-centos7:latest" that will track the source image
    * A source build using source code from https://github.com/openshift/nodejs-ex.git will be created
      * The resulting image will be pushed to image stream "nodejs-ex:latest"
      * Every time "nodejs-010-centos7:latest" changes a new build will be triggered
    * This image will be deployed in deployment config "nodejs-ex"
    * Port 8080/tcp will be load balanced by service "nodejs-ex"
      * Other containers can access this service through the hostname "nodejs-ex"

--> Creating resources ...
    imagestream "nodejs-010-centos7" created
    imagestream "nodejs-ex" created
    buildconfig "nodejs-ex" created
    deploymentconfig "nodejs-ex" created
    service "nodejs-ex" created
--> Success
    Build scheduled, use 'oc logs -f bc/nodejs-ex' to track its progress.
    Run 'oc status' to view your app.
A build will be triggered automatically using the provided image and the latest commit to the master branch of the provided Git repository. To get the status of the build, run the command below.
# oc status
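The 'builder-image~source-repo' argument given to 'oc new-app' earlier is a general form: the part before '~' is the builder image and the part after is the Git source. A branch can be selected with a '#' suffix and the app named explicitly; the branch and name below are illustrative, not part of the original steps:

```shell
# Hypothetical variant: build the same example from the master branch
# under an explicit application name
oc new-app openshift/nodejs-010-centos7~https://github.com/openshift/nodejs-ex.git#master --name=nodejs-demo
```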
You can run 'oc help' to see more about the commands available in the CLI.
Now you should be able to view your test application by opening the link generated by OpenShift in your web browser. You can also view the status of your newly deployed apps from the OpenShift web console.
Click on any of the installed applications to see more details about its IP, routes, and service ports.
In this article we have successfully installed and configured a single-server OpenShift Origin environment on CentOS 7. OpenShift adds developer- and operations-centric tools on top of Kubernetes to enable rapid application development, easy deployment and scaling, and long-term lifecycle maintenance for small and large teams. It provides centralized administration and management of an entire stack, team, or organization: you can create reusable templates for components of your system and interactively deploy them over time, roll out modifications to software stacks across your entire organization in a controlled fashion, and integrate with your existing authentication mechanisms, including LDAP, Active Directory, and public OAuth providers such as GitHub.