Friday, July 1, 2016

Steps to Install and Configure Foreman 1.11 on CentOS 7.x

Foreman is a free and open source configuration management and provisioning tool for physical and virtual servers. Foreman performs repetitive configuration tasks using tools like Puppet, Chef, and Ansible. For provisioning, Foreman makes use of DHCP, DNS, TFTP, and kickstart files. In this article we are going to use Puppet with Foreman.
Foreman provides a dashboard from which a system administrator can perform all configuration and audit tasks. We can also get reports such as how many nodes are managed by Foreman and what configuration has been pushed to them.
In this post I am going to demonstrate how to install and configure Foreman 1.11 with Puppet on CentOS 7.x.
Below are the details of the server on which I will install and configure Foreman.
  • OS (Operating System) = CentOS 7.x
  • IP  Address =
  • Hostname =
  • SELinux = Disabled
  • Firewall = Enabled
I have my own local DNS server for the domain; in case you don't have a DNS server, you have to put entries in the '/etc/hosts' file for name-to-IP resolution.
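For reference, a hosts-file entry looks like the sketch below. The IP address and hostnames are placeholders, not the values used in this setup, and the example writes to a local file rather than the real /etc/hosts:

```shell
# Static name-to-IP mapping, one host per line: address, FQDN, short name.
cat >> ./hosts.example <<'EOF'    # normally appended to /etc/hosts
192.168.1.20   foreman.example.com   foreman
EOF
cat ./hosts.example
```

Every node that talks to the Foreman server (and the server itself) needs to resolve the server's FQDN consistently, since Puppet's SSL certificates are issued against that name.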

Step:1 Enable the puppetlabs repo and install the foreman installer

Open the terminal and run the following commands one after the other.
[root@foreman ~]# rpm -ivh
[root@foreman ~]# yum -y install epel-release
[root@foreman ~]# yum -y install foreman-installer

Step:2 Start the Installation using foreman installer

To start the Foreman installation, run the command "foreman-installer"; it performs a non-interactive installation. In case you want an interactive installation, add the '-i' option, as in "foreman-installer -i".
[root@foreman ~]# foreman-installer
Once the Foreman installation is completed, we will get output like the below:
We can see that initial credentials have been created for the Foreman dashboard, and a puppetmaster is also installed and running on port 8140.
Before accessing the dashboard, it is recommended to open the required ports in the OS firewall. Execute the below commands one after the other.
[root@foreman ~]# firewall-cmd --permanent --add-port=53/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=67-69/udp
[root@foreman ~]# firewall-cmd --permanent --add-port=80/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=443/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=3000/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=3306/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=5910-5930/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=5432/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=8140/tcp
[root@foreman ~]# firewall-cmd --permanent --add-port=8443/tcp
[root@foreman ~]# firewall-cmd --reload
[root@foreman ~]#
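The same port list can also be expressed as a short loop. The sketch below only prints the firewall-cmd invocations so you can review them first; pipe the output to a root shell to actually apply them:

```shell
# Print the firewall-cmd commands for Foreman's ports; apply with: open_ports | sudo sh
open_ports() {
  for p in 53/tcp 80/tcp 443/tcp 3000/tcp 3306/tcp 5432/tcp 8140/tcp 8443/tcp 5910-5930/tcp 67-69/udp; do
    echo "firewall-cmd --permanent --add-port=$p"
  done
  echo "firewall-cmd --reload"   # reload so the permanent rules take effect
}
open_ports
```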

Step:3 Access Foreman Dashboard

To access the Foreman dashboard, type "https://
In my case the dashboard can be accessed from the URL:
Use the initial credentials that were created during the Foreman installation.
By default, the Foreman server itself is registered in the Foreman dashboard. To check the host information,
click on the Hosts option —> then All Hosts.
Let's install the ntp module on the Foreman server and import it from the dashboard. Accurate time via NTP is required for Puppet to work smoothly. Use the below command to download the ntp module.
[root@foreman ~]# puppet module install puppetlabs-ntp
Now import the NTP module from the dashboard. Click on Configure —-> Classes.
Click on the Import option; it will import the NTP module into the Foreman dashboard, as shown in the example below:
Select the module and click on Update.
[Screenshot: Puppet NTP class in the Foreman dashboard]
Click on the 'ntp' class name and then select 'Smart Class Parameter'.
Select the Override option in case you want to specify your own NTP servers. Change the Key type value from "String" to "Array", specify the NTP server names in the Default value box, and then click on Submit. An example is shown below.
Now it's time to add the ntp class to the host. For that, go to the Hosts option, select the host, and click on Edit. Go to the 'Puppet Classes' tab, click the '+' option to add the ntp class to the host, and then click on Submit.
Now run the following Puppet command on the Foreman server to configure the NTP service automatically.
[root@foreman ~]# puppet agent --test
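A '--test' run like this is a one-off. For ongoing management you can either run the puppet agent as a daemon or schedule it; a minimal cron sketch is shown below (written to a local file here rather than installed directly; the path and interval are assumptions to adapt):

```shell
# Schedule a puppet agent run every 30 minutes; install with: crontab puppet.cron
cat > puppet.cron <<'EOF'
*/30 * * * * /usr/bin/puppet agent --onetime --no-daemonize
EOF
cat puppet.cron
```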
Now check the reports for the host from the dashboard.
Go to Hosts —> click on the host —> click on Reports.
As we know, Puppet uses SSL certificates to set up secure communication between the Puppet server and its nodes. Only once the Puppet server has signed a node's certificate can communication occur.
Let's create an autosign entry for the Puppet nodes that are on our domain.
In the Foreman dashboard, go to Infrastructure —> select Smart Proxies —> select Autosign under the Actions tab.
Click on New to create a new autosign entry. Specify the domain name and then click on Save.
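Under the hood, each autosign entry is simply a line in the Puppet master's autosign.conf (typically /etc/puppet/autosign.conf on a setup like this). A minimal sketch, using a local file and a placeholder domain:

```shell
AUTOSIGN=./autosign.conf                 # typically /etc/puppet/autosign.conf
echo '*.example.com' >> "$AUTOSIGN"      # auto-sign any agent in this (placeholder) domain
cat "$AUTOSIGN"
```

The dashboard entry and the file entry are equivalent; the smart proxy manages the file on your behalf.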
At this point we can say that our Foreman server is ready to manage servers.

Step:4 Add New hosts to Foreman Dashboard.

To add new hosts or servers to the Foreman dashboard, we have to install the puppet agent on each host and execute the following commands from the host.
Let's suppose we want to add an Ubuntu server.
linuxtechi@ubuntu-server:~$ sudo apt install puppet
linuxtechi@ubuntu-server:~$ puppet agent -td
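For the agent run to reach the right master, the agent's puppet.conf must name the Foreman server. The sketch below writes to a local file, and 'foreman.example.com' is a placeholder for your server's actual FQDN:

```shell
CONF=./puppet.conf        # normally /etc/puppet/puppet.conf on the agent
cat >> "$CONF" <<'EOF'
[agent]
server = foreman.example.com
EOF
cat "$CONF"
```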
Now verify the host in dashboard.
[Screenshot: host details in the Foreman dashboard]
Now add Puppet classes to this host in the same way that we did for the Foreman host in the steps above.
Note: command to remove a puppet module.
[root@foreman ~]#  puppet module uninstall --environment=production puppetlabs-ntp
Notice: Preparing to uninstall 'puppetlabs-ntp' ...
Removed 'puppetlabs-ntp' (v4.2.0) from /etc/puppet/environments/production/modules
[root@foreman ~]#
That concludes the article. Please share your feedback and comments.

Open source cross-platform development methods and tools

The trials and tribulations of cross-platform software development
This is an article I've been wanting to sit down and write for a few years now. I first started developing software in the late '90s and got myself a Borland C++ compiler, which I quickly realized was only really going to work on Windows. I made a few small command-line applications at first and then started experimenting with graphical applications. I loved the creative process, but was disappointed by many of the tools. At the time, I didn't really move beyond adapting simple examples.
A little later, I got interested in developing web applications and started playing with a new language called PHP after being dissatisfied with Perl. I liked how I could mix code and HTML quite freely and have full access to the server machine from the PHP language. I developed a few sites in PHP and played with various ways of pushing some of the processing to JavaScript when possible. This was all pretty nascent, but coupling it with a database let me accomplish quite a lot. I also started participating in mailing lists, answering questions, and learning all I could about how this open source language was developed.
After that I got a little distracted with Linux, packaging applications, and porting to a new 64-bit architecture as a Gentoo developer. That was a lot of fun, and I learned a lot about dependencies, security updates, shared libraries, and how bad many scientists were at writing build systems. Throughout this period, I also learned about being part of an extended online community and had the opportunity to work with a lot of very dedicated and skilled people.

C++ and native development

Ultimately, I realized I wanted to develop software, and I really wanted to develop in a native language where I had access to the hardware. At the time I used Linux as my main operating system, but also worked with people using Windows and Mac OS X on a regular basis. I wanted them to be able to use the software I developed, and didn't want to write it three times. This was when I started aligning on a software stack, largely influenced by KDE and the opportunity I had to do a Google Summer of Code project with them. It is now about nine years since I did that Google Summer of Code project, and I am largely using the same stack with some additions/updates to facilitate collaborative, open source, cross-platform development.
C++ is a standardized language with a number of powerful open source compilers and many proprietary compilers that span everything from embedded systems to the biggest supercomputers on the planet. In my work, we regularly target both extremes of the spectrum as well as a lot of work in the desktop/laptop space. I think it is still one of the most diverse languages, with higher level abstraction than C, yet close enough to the metal to get great performance. It also has access to many APIs often exposed through C interfaces and can even interact with FORTRAN when properly coerced.

Cross-platform abstractions

C++ can be written in a portable way, but there are many platform-specific APIs, toolkits, and language extensions. There are also a number of ways to build a C++ application. I started off with simple handwritten Makefiles, but it soon became obvious that maintaining these for even simple projects was tedious and error-prone. I started looking at Autotools, and later SCons, before coming across CMake right around the time KDE was looking at making the switch.
CMake is a meta build system generator, meaning it doesn't build anything directly itself. You use the CMake language to specify how your project should be built, and can define exceptions for specific platforms where necessary. This might sound a little tedious too, and it can be, but you get something pretty huge in return—it will generate a build system for your environment. If you want to use Visual Studio, go ahead. If you want Makefiles, great. If you prefer the new Ninja system, you can use that. I normally use Ninja coupled with something called CodeBlocks that gets me Qt Creator integration.
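As a minimal sketch of that idea: one CMakeLists.txt describes the project, and each developer generates whatever build system they prefer from it. The project name and source file below are hypothetical:

```shell
# One build description, many backends (assumes a main.cpp exists alongside it):
cat > CMakeLists.txt <<'EOF'
cmake_minimum_required(VERSION 3.0)
project(hello CXX)
add_executable(hello main.cpp)
EOF
# The same file drives any generator:
#   cmake -G "Unix Makefiles" .    # Makefiles
#   cmake -G Ninja .               # Ninja
#   cmake -G "Visual Studio 14" .  # Visual Studio (on Windows)
cat CMakeLists.txt
```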
Once you have something that can be built, you want to consider how you might abstract the windowing system. I went through quite a few abstractions here too, including Java Swing, GTKMM (C++ wrappers around GTK), wxWidgets, and a few I think I may have blocked out. I settled on Qt, which used to have a big disadvantage that it was under the GPLv2 license, and so your code must also use the GPL unless you paid for the commercial license. It was always dual-licensed, but I never really liked that approach. It was far better than any of the other abstractions I tried, and felt pragmatic to me, natural, and it had a large community of friendly open source developers in KDE.
Another great thing I always got from Qt was a native look and feel. The abstraction works hard to use native widgets, file dialogs, color pickers, etc. As the toolkit evolved it was extended to support Android and iOS, and it was relicensed as LGPL after Nokia acquired Trolltech. It also has an agreement in place with the KDE e.V. assuring that it will always be free, which provides protections for the future.

Version control, code review

I started out with CVS, and we would generally do code review after things were merged in Gentoo and KDE. This worked pretty well, but required disciplined developers who kept up with the commit messages on the commit list. Switching to Subversion, things remained pretty similar, but the tool felt easier to use and had a little more atomicity.
For me, the big revolution was moving to Git. At first, we used it as a simple drop-in replacement with the ability to locally stage commits before pushing them. Later we started looking at more integrated code review, trying out a few solutions. For a while we used Gerrit, but I never felt like the interface was intuitive enough, nor did I like its focus on single commits.
Most of my work now uses GitHub or GitLab, both of which have a strong focus on pull/merge requests that look at branches containing one or more commits. This lets us separate things into separate commits where it makes sense, publish them as they are developed, and request review when the work is ready for integration. I like the granularity: I can switch to individual commits to understand why a group of changes was made, or look at the diff for the whole branch. Line-by-line comments allow focused review, and high-level discussion can take place in the main tab.
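The branch-based workflow described above boils down to a few plain git commands. A sketch, with a hypothetical repository and branch name (the commits are empty placeholders for real changes):

```shell
# Create a repo with a main branch and one initial commit.
git init -q -b main demo && cd demo
git config user.name "Example Dev" && git config user.email "dev@example.com"
git commit -q --allow-empty -m "initial commit"
# A topic branch collects the one or more commits a merge request will contain.
git checkout -q -b fix-parser
git commit -q --allow-empty -m "parser: handle empty input"
git commit -q --allow-empty -m "parser: add regression test"
# The range main..fix-parser is exactly what the pull/merge request shows.
git log --oneline main..fix-parser
```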

Automated building and testing

Another important aspect of cross-platform development is automated building and testing. This also ends up getting into building/packaging dependencies, as we rarely write code that doesn't reuse other libraries. This is another area where CMake provides quite a bit of help, and our practices are evolving.
I think this is one of the more difficult aspects of cross-platform development, and some platforms are more difficult than others. The traditional approach has been to use cron jobs to automate build initiation, and drive dashboard scripts on the host that primarily use CMake via CTest to automate builds. They also generate build artifacts that are uploaded to CDash.
Many projects have something we call superbuilds. These automate the building and staging of project dependencies before finally building the project we are interested in. They enable us to ensure everything is built with the right flags and the correct versions. They also interact with CPack in some cases to create installers for different platforms. CDash will display build summaries, automatically created installers, test results, etc.


Cross-platform development is challenging. It's important to see the results of changes you make on other platforms within hours (or days, at worst). A project needs to consider cross-platform as part of its workflow to achieve optimal results. There is a lot more I could say about our approach, but this article is already very long. You can achieve great results with native code, and scientific applications often have a number of additional dependencies. Once you throw supercomputers and embedded devices into the mix, things become very interesting.

7 open source terminal games for Linux

Gaming in the terminal
Do fancy graphics really make a game better? Can a text-based game for Linux still keep you entertained?
Don't get me wrong, I do occasionally enjoy playing a AAA game release from a major studio. But as I've gotten older, I've found that I really value gameplay (and nostalgia too, admittedly) far more than how photorealistic my gaming experience is.
For me, this has meant replaying some of my classic favorites from the 90s and early 2000s, or newer independent games which pay homage to the styles and gameplay of my older picks. As a Linux gamer, it's had the added bonus of providing a high quality gaming experience with very little effort on a computer that is far from top-of-the-line.
Many of my favorites have had dedicated Linux ports created through the years, and still others run flawlessly on Wine or inside of DOSBox. While the games themselves may not be open source, at least much of the rest of my computing stack is open, and for that matter, also free-as-in-beer.
But lately I've also been thinking about my very early days in computer gaming, which predated such fancy new inventions as mice and color monitors. Gameplay mattered a great deal back then, because it was literally all you had to work with. The allure of watching green specks on a black screen could only hold your attention for so long.
And so I decided to take a look around at some of the simplest open source games for Linux: terminal-based games.


A couple of years ago, 2048 became one of the most popular web-based games hosted on GitHub, with its simple mechanics of sliding blocks providing hours of entertainment. While itself a derivative of other similar games, I began to wonder if the name 2048 was reflective of the number of clones this game itself would generate (here are some of the more entertaining ones).
But 2048 is such a simple game that it lent itself well to a terminal-based implementation, and thus 2048-cli was born. 2048-cli is an MIT-licensed version of the game, written in C, which plays exactly like its web-based big brother.

BSD Games

Unlike the other games in this list, the BSD games collection isn't a single game at all, but rather a package providing many different text-mode games across a variety of themes, ranging from the simple to the complex. Including card games, clones of several well-known older games, and other entertaining applications, BSD games were originally packaged for various BSD distributions (no surprise there), generally under a BSD license (also not-so-shocking). Some of my favorites in the package include worm, snake, mille (a Mille Bornes implementation), cribbage, and backgammon, but you should look through the entire list of included games—there are many more than you might think!

Moon Buggy

Moon Buggy is a GPL-licensed side scrolling game in which you must navigate the crater-covered surface of the moon with your jump-powered car. Modeled after the arcade game Moon Patrol, Moon Buggy is about as simple of a side scrolling driving game as they come, but surprisingly addictive, for the same reason that single-speed navigation games like Flappy Bird are still amusing today.


One could not have a discussion of text-based games and not include NetHack. NetHack is a roguelike adventure game that has kept an active user community playing it for almost three decades, despite the genre spawning many alternatives with more modern graphical interfaces. NetHack is a fantasy game in which you explore a dungeon full of challenges and monsters, but also helpful loot like weapons, armor, scrolls, and potions to help you along the way. It is licensed under the NetHack General Public License, which is similar to the GPL.


Do you miss the classic arcade game Space Invaders? Wish you could play it in your terminal? Well, good news, that's exactly what the GPLv2-licensed nInvaders provides. Fight back invading alien space ships with your single-turret cannon as you move back and forth to avoid being blown up. It's a throwback to one of the best games of the 1970s.


Nudoku is a sudoku game for the terminal, written in C and licensed under the GPLv3. A logic game of placing numbers in a 9-by-9 grid, Nudoku provides a number of difficulty levels from easy to hard and plays as well as its GUI- (or paper-) based counterparts.


Robot Finds Kitten

Robot Finds Kitten is a GPL-licensed "zen simulation" in which you play a robot tasked with finding a kitten in a world filled with many objects that you must inspect. While it is so simple that it's not a game in the traditional sense, it's oddly calming to explore a simple world full of entertainingly described objects in search of your lost cat.

This, of course, only scratches the surface of gaming options at the Linux terminal. There are many, many others. Want to try out the games we discussed here? Most are packaged for major distributions; on my Fedora machine, the following command installs them all:
sudo dnf install -y nethack bsd-games 2048-cli ninvaders nudoku moon-buggy robotfindskitten
Do you have a favorite open source text-mode game for Linux? If so, let us know in the comments below.