Thursday, May 29, 2014

Docker: Lightweight Linux Containers for Consistent Development and Deployment

http://www.linuxjournal.com/content/docker-lightweight-linux-containers-consistent-development-and-deployment

Take on "dependency hell" with Docker containers, the lightweight and nimble cousin of VMs. Learn how Docker makes applications portable and isolated by packaging them in containers based on LXC technology.
Imagine being able to package an application along with all of its dependencies easily and then run it smoothly in disparate development, test and production environments. That is the goal of the open-source Docker project. Although it is still not officially production-ready, the latest release (0.7.x at the time of this writing) brought Docker another step closer to realizing this ambitious goal.
Docker tries to solve the problem of "dependency hell". Modern applications often are assembled from existing components and rely on other services and applications. For example, your Python application might use PostgreSQL as a data store, Redis for caching and Apache as a Web server. Each of these components comes with its own set of dependencies that may conflict with those of other components. By packaging each component and its dependencies, Docker solves the following problems:
  • Conflicting dependencies: need to run one Web site on PHP 4.3 and another on PHP 5.5? No problem if you run each version of PHP in a separate Docker container.
  • Missing dependencies: installing applications in a new environment is a snap with Docker, because all dependencies are packaged along with the application in a container.
  • Platform differences: moving from one distro to another is no longer a problem. If both systems run Docker, the same container will execute without issues.

Docker: a Little Background

Docker started life as an open-source project at dotCloud, a cloud-centric platform-as-a-service company, in early 2013. Initially, Docker was a natural extension of the technology the company had developed to run its cloud business on thousands of servers. It is written in Go, a statically typed programming language developed by Google with syntax loosely based on C. Fast-forward six to nine months, and the company has hired a new CEO, joined the Linux Foundation, changed its name to Docker Inc., and announced that it is shifting its focus to the development of Docker and the Docker ecosystem. As further indication of Docker's popularity, at the time of this writing, it has been starred on GitHub 8,985 times and has been forked 1,304 times. Figure 1 illustrates Docker's rising popularity in Google searches. I predict that the shape of the past 12 months will be dwarfed by the next 12 months as Docker Inc. delivers the first version blessed for production deployments of containers and the community at large becomes aware of Docker's usefulness.
Figure 1. Google Trends Graph for "Docker Software" for Past 12 Months

Under the Hood

Docker harnesses some powerful kernel-level technology and puts it at our fingertips. The concept of a container in virtualization has been around for several years, but by providing a simple tool set and a unified API for managing some kernel-level technologies, such as LXCs (LinuX Containers), cgroups and a copy-on-write filesystem, Docker has created a tool that is greater than the sum of its parts. The result is a potential game-changer for DevOps, system administrators and developers.
Docker provides tools to make creating and working with containers as easy as possible. Containers sandbox processes from each other. For now, you can think of a container as a lightweight equivalent of a virtual machine.
Linux Containers and LXC, a user-space control package for Linux Containers, constitute the core of Docker. LXC uses kernel-level namespaces to isolate the container from the host. The user namespace separates the container's and the host's user database, thus ensuring that the container's root user does not have root privileges on the host. The process namespace is responsible for displaying and managing only processes running in the container, not the host. And, the network namespace provides the container with its own network device and virtual IP address.
Another component Docker uses through LXC is Control Groups (cgroups). While namespaces are responsible for isolation between host and container, control groups implement resource accounting and limiting. Besides allowing Docker to limit the resources consumed by a container, such as memory, disk space and I/O, cgroups also output lots of metrics about these resources. These metrics allow Docker to monitor the resource consumption of the various processes within the containers and make sure that each gets only its fair share of the available resources.
In addition to the above components, Docker has been using AuFS (Advanced Multi-Layered Unification Filesystem) as a filesystem for containers. AuFS is a layered filesystem that can transparently overlay one or more existing filesystems. When a process needs to modify a file, AuFS creates a copy of that file. AuFS is capable of merging multiple layers into a single representation of a filesystem. This process is called copy-on-write.
The really cool thing is that AuFS allows Docker to use certain images as the basis for containers. For example, you might have a CentOS Linux image that can be used as the basis for many different containers. Thanks to AuFS, only one copy of the CentOS image is required, which results in savings of storage and memory, as well as faster deployments of containers.
An added benefit of using AuFS is Docker's ability to version container images. Each new version is simply a diff of changes from the previous version, effectively keeping image files to a minimum. But, it also means that you always have a complete audit trail of what has changed from one version of a container to another.
Traditionally, Docker has depended on AuFS to provide a copy-on-write storage mechanism. However, the recent addition of a storage driver API is likely to lessen that dependence. Initially, there are three storage drivers available: AuFS, VFS and Device-Mapper, which is the result of a collaboration with Red Hat.
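As a minimal sketch, assuming the dæmon in this release accepts the -s flag to force a particular storage driver (check docker -d --help on your version), selecting Device-Mapper instead of AuFS would look something like this:

# start the Docker dæmon with the devicemapper storage driver
docker -d -s devicemapper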
As of version 0.7, Docker works with all Linux distributions. However, it does not run natively on non-Linux operating systems, such as Windows and OS X. The recommended way of using Docker on those OSes is to provision a virtual machine on VirtualBox using Vagrant.

Containers vs. Other Types of Virtualization

So what exactly is a container and how is it different from hypervisor-based virtualization? To put it simply, containers virtualize at the operating system level, whereas hypervisor-based solutions virtualize at the hardware level. While the effect is similar, the differences are important and significant, which is why I'll spend a little time exploring them and the resulting trade-offs.
Virtualization:
Both containers and VMs are virtualization tools. On the VM side, a hypervisor makes siloed slices of hardware available. There are generally two types of hypervisors: "Type 1" runs directly on the bare metal of the hardware, while "Type 2" runs as an additional layer of software within a guest OS. While the open-source Xen and VMware's ESX are examples of Type 1 hypervisors, examples of Type 2 include Oracle's open-source VirtualBox and VMware Server. Although Type 1 is a better candidate for comparison to Docker containers, I don't make a distinction between the two types for the rest of this article.
Containers, in contrast, make available protected portions of the operating system—they effectively virtualize the operating system. Two containers running on the same operating system don't know that they are sharing resources because each has its own abstracted networking layer, processes and so on.
Operating Systems and Resources:
Since hypervisor-based virtualization provides access to hardware only, you still need to install an operating system. As a result, there are multiple full-fledged operating systems running, one in each VM, which quickly gobbles up resources on the server, such as RAM, CPU and bandwidth.
Containers piggyback on an already running operating system as their host environment. They merely execute in spaces that are isolated from each other and from certain parts of the host OS. This has two significant benefits. First, resource utilization is much more efficient. If a container is not executing anything, it is not using up resources, and containers can call upon their host OS to satisfy some or all of their dependencies. Second, containers are cheap and therefore fast to create and destroy. There is no need to boot and shut down a whole OS. Instead, a container merely has to terminate the processes running in its isolated space. Consequently, starting and stopping a container is more akin to starting and quitting an application, and is just as fast.
Both types of virtualization and containers are illustrated in Figure 2.
Figure 2. VMs vs. Containers
Isolation for Performance and Security:
Processes executing in a Docker container are isolated from processes running on the host OS or in other Docker containers. Nevertheless, all processes are executing in the same kernel. Docker leverages LXC to provide separate namespaces for containers, a technology that has been present in Linux kernels for 5+ years and is considered fairly mature. It also uses Control Groups, which have been in the Linux kernel even longer, to implement resource auditing and limiting.
The Docker dæmon itself also poses a potential attack vector because it currently runs with root privileges. Improvements to both LXC and Docker should allow containers to run without root privileges and to execute the Docker dæmon under a different system user.
Although the type of isolation provided is overall quite strong, it is arguably not as strong as what can be enforced by virtual machines at the hypervisor level. If the kernel goes down, so do all the containers. The other area where VMs have the advantage is their maturity and widespread adoption in production environments. VMs have been hardened and proven themselves in many different high-availability environments. In comparison, Docker and its supporting technologies have not seen nearly as much action. Docker in particular is undergoing massive changes every day, and we all know that change is the enemy of security.
Docker and VMs—Frenemies:
Now that I've spent all this time comparing Docker and VMs, it's time to acknowledge that these two technologies can actually complement each other. Docker runs just fine on already-virtualized environments. You obviously don't want to incur the cost of encapsulating each application or component in a separate VM, but given a Linux VM, you can easily deploy Docker containers on it. That is why it should not come as a surprise that the officially supported way of using Docker on non-Linux systems, such as OS X and Windows, is to install a Precise64 base Ubuntu virtual machine with the help of Vagrant. Simple detailed instructions are provided on the http://www.docker.io site.
The bottom line is that virtualization and containers exhibit some similarities. Initially, it helps to think of containers as very lightweight virtualization. However, as you spend more time with containers, you come to understand the subtle but important differences. Docker does a nice job of harnessing the benefits of containerization for a focused purpose, namely the lightweight packaging and deployment of applications.

Docker Repositories

One of Docker's killer features is the ability to find, download and start container images that were created by other developers quickly. The place where images are stored is called a registry, and Docker Inc. offers a public registry also called the Central Index. You can think of the registry along with the Docker client as the equivalent of Node's NPM, Perl's CPAN or Ruby's RubyGems.
In addition to various base images, which you can use to create your own Docker containers, the public Docker Registry features images of ready-to-run software, including databases, content management systems, development environments, Web servers and so on. While the Docker command-line client searches the public Registry by default, it is also possible to maintain private registries. This is a great option for distributing images with proprietary code or components internally to your company. Pushing images to the registry is just as easy as downloading. It requires you to create an account, but that is free as well. Lastly, Docker Inc.'s registry has a Web-based interface for searching for, reading about, commenting on and recommending (aka "starring") images. It is ridiculously easy to use, and I encourage you to click the link in the Resources section of this article and start exploring.

Hands-On with Docker

Docker consists of a single binary that can be run in one of three different ways. First, it can run as a dæmon to manage the containers. The dæmon exposes a REST-based API that can be accessed locally or remotely. A growing number of client libraries are available to interact with the dæmon's API, including Ruby, Python, JavaScript (Angular and Node), Erlang, Go and PHP.
The client libraries are great for accessing the dæmon programmatically, but the more common use case is to issue instructions from the command line, which is the second way the Docker binary can be used, namely as a command-line client to the REST-based dæmon.
Third, the Docker binary functions as a client to remote repositories of images. Tagged images that make up the filesystem for a container are called repositories. Users can pull images provided by others and share their own images by pushing them to the registry. Registries are used to collect, list and organize repositories.
Let's see all three ways of running the docker executable in action. In this example, you'll search the Docker repository for a MySQL image. Once you find an image you like, you'll download it, and tell the Docker dæmon to run the command (MySQL). You'll do all of this from the command line.
Figure 3. Pulling a Docker Image and Launching a Container
Start by issuing the docker search mysql command, which displays a list of images in the public Docker registry that match the keyword "mysql". For no particular reason other than I know it works, let's download the "brice/mysql" image, which you do with the docker pull brice/mysql command. You can see that Docker downloaded not only the specified image, but also the images it was built on. With the docker images command, you list the images currently available locally, which includes the "brice/mysql" image. Launching the container with the -d option, which detaches it so it keeps running in the background, you now have MySQL running in a container. You can verify that with the docker ps command, which lists containers, rather than images. In the output, you also see the port on which MySQL is listening, which is the default of 3306.
But, how do you connect to MySQL, knowing that it is running inside a container? Remember that Docker containers get their own network interface. You need to find the IP address and port at which the mysqld server process is listening. The docker inspect command provides a lot of info, but since all you need is the IP address, you can just grep for it when inspecting the container by providing its hash: docker inspect 5a9005441bb5 | grep IPAddress. Now you can connect with the standard MySQL CLI client by specifying the host and port options. When you're done with the MySQL server, you can shut it down with docker stop 5a9005441bb5.
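For reference, here is the full command sequence described above as it would be typed at the shell (the image name and container hash are the ones from this example; yours will differ):

# search the public registry for MySQL images
docker search mysql
# download the brice/mysql image along with the images it was built on
docker pull brice/mysql
# list the images now available locally
docker images
# launch a container from the image, detached so it runs in the background
docker run -d brice/mysql
# list running containers and note the container ID and the MySQL port (3306)
docker ps
# find the container's IP address by inspecting it via its hash
docker inspect 5a9005441bb5 | grep IPAddress
# shut the container down when you're done
docker stop 5a9005441bb5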
It took seven commands to find, download and launch a Docker container to get a MySQL server running and shut it down after you're done. In the process, you didn't have to worry about conflicts with installed software, perhaps a different version of MySQL, or dependencies. You used seven different Docker commands: search, pull, images, run, ps, inspect and stop, but the Docker client actually offers 33 different commands. You can see the full list by running docker help from the command line or by consulting the on-line manual.
Before exercising Docker in the above example, I mentioned that the client communicates with the dæmon and the Docker Registry via REST-based Web services. That implies that you can use a local Docker client to interact with a remote dæmon, effectively administering your containers on a remote machine. The APIs for the Docker dæmon, Registry and Index are nicely documented, illustrated with examples and available on the Docker site (see Resources).
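As a minimal sketch of remote administration, assuming the remote dæmon has been started to listen on a TCP socket (the address and port here are illustrative, not defaults you should rely on):

# on the remote host: run the dæmon bound to a TCP socket
docker -d -H tcp://0.0.0.0:4243
# on your workstation: point the client at the remote dæmon
docker -H tcp://remote-host:4243 ps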

Docker Workflow

There are various ways in which Docker can be integrated into the development and deployment process. Let's take a look at a sample workflow illustrated in Figure 4. A developer in our hypothetical company might be running Ubuntu with Docker installed. He might push/pull Docker images to/from the public registry to use as the base for installing his own code and the company's proprietary software and produce images that he pushes to the company's private registry.
The company's QA environment in this example is running CentOS and Docker. It pulls images from the public and private registries and starts various containers whenever the environment is updated.
Finally, the company hosts its production environment in the cloud, namely on Amazon Web Services, for scalability and elasticity. Amazon Linux is also running Docker, which is managing various containers.
Note that all three environments are running different versions of Linux, all of which are compatible with Docker. Moreover, the environments are running various combinations of containers. However, since each container compartmentalizes its own dependencies, there are no conflicts, and all the containers happily coexist.
Figure 4. Sample Software Development Workflow Using Docker
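As a hedged sketch of the private-registry step in this workflow (the registry hostname and image name are hypothetical), the developer might tag and push an image like this:

# tag a locally built image with the private registry's address
docker tag my_app registry.example.com/my_app
# push it to the company's private registry
docker push registry.example.com/my_app
# later, QA or production pulls and runs the same image
docker pull registry.example.com/my_app
docker run -d registry.example.com/my_app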
It is crucial to understand that Docker promotes an application-centric container model. That is to say, containers should run individual applications or services, rather than a whole slew of them. Remember that containers are fast and resource-cheap to create and run. Following the single-responsibility principle and running one main process per container results in loose coupling of the components of your system. With that in mind, let's create your own image from which to launch a container.

Creating a New Docker Image

In the previous example, you interacted with Docker from the command line. However, when creating images, it is far more common to create a "Dockerfile" to automate the build process. Dockerfiles are simple text files that describe the build process. You can put a Dockerfile under version control and have a perfectly repeatable way of creating an image.
For the next example, please refer to the "PHP Box" Dockerfile (Listing 1).

Listing 1. PHP Box


# PHP Box
#
# VERSION 1.0

# use centos base image
FROM centos:6.4

# specify the maintainer
MAINTAINER Dirk Merkel, dmerkel@vivantech.com

# update available repos
RUN wget http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm; rpm -Uvh epel-release-6-8.noarch.rpm

# install some dependencies
RUN yum install -y curl git wget unzip

# install Apache httpd and dependencies
RUN yum install -y httpd

# install PHP and dependencies
RUN yum install -y php php-mysql

# general yum cleanup
RUN yum install -y yum-utils
RUN package-cleanup --dupes; package-cleanup --cleandupes; yum clean -y all

# expose httpd port
EXPOSE 80

# the command to run
CMD ["/usr/sbin/apachectl", "-D", "FOREGROUND"]
Let's take a closer look at what's going on in this Dockerfile. The syntax of a Dockerfile is a command keyword followed by that command's argument(s). By convention, command keywords are capitalized. Comments start with a pound character.
The FROM keyword indicates which image to use as a base. This must be the first instruction in the file. In this case, you will build on top of the CentOS 6.4 base image. The MAINTAINER instruction obviously lists the person who maintains the Dockerfile. The RUN instruction executes a command and commits the resulting image, thus creating a new layer. The RUN commands in the Dockerfile fetch configuration files for additional repositories and then use Yum to install curl, git, wget, unzip, httpd, php, php-mysql and yum-utils. I could have combined the yum install commands into a single RUN instruction to avoid successive commits.
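For example, a sketch of how those installation steps could be collapsed into a single layer looks like this:

# install everything in one RUN instruction, producing a single commit
RUN yum install -y curl git wget unzip httpd php php-mysql yum-utils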
The EXPOSE instruction then exposes port 80, which is the port on which Apache will be listening when you start the container.
Finally, the CMD instruction will provide the default command to run when the container is being launched. Associating a single process with the launch of the container allows you to treat a container as a command.
Typing docker build -t php_box . on the command line will now tell Docker to start the build process using the Dockerfile in the current working directory. The resulting image will be tagged "php_box", which will make it easier to refer to and identify the image later.
The build process downloads the base image and then installs Apache httpd along with all dependencies. Upon completion, it returns a hash identifying the newly created image. Similar to the MySQL container you launched earlier, you can run the Apache and PHP image using the "php_box" tag with the following command: docker run -d -t php_box.
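Keep in mind that EXPOSE only declares the container port; to reach Apache from the host, you can map it to a host port with the -p option. A sketch, assuming port 8080 is free on the host: docker run -d -p 8080:80 php_box.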
Let's finish with a quick example that illustrates how easy it is to layer on top of an existing image to create a new one:

# MyApp
#
# VERSION       1.0

# use php_box base image
FROM php_box

# specify the maintainer
MAINTAINER Dirk Merkel, dmerkel@vivantech.com

# copy my local Web site (the myApp folder) to /var/www
ADD myApp /var/www
This second Dockerfile is shorter than the first and really contains only two interesting instructions. First, you specify the "php_box" image as a starting point using the FROM instruction. Second, you copy a local directory to the image with the ADD instruction. In this case, it is a PHP project that is being copied to Apache's DOCUMENT_ROOT folder on the image. The result is that the site will be served by default when you launch the image.

Conclusion

Docker's prospect of lightweight packaging and deployment of applications and their dependencies is an exciting one, and it is quickly being adopted by the Linux community and is making its way into production environments. For example, Red Hat announced in December support for Docker in the upcoming Red Hat Enterprise Linux 7. However, Docker is still a young project and is growing at breakneck speed. It is going to be exciting to watch as the project approaches its 1.0 release, which is supposed to be the first version officially sanctioned for production environments. Docker relies on established technologies, some of which have been around for more than a decade, but that doesn't make it any less revolutionary. Hopefully this article provided you with enough information and inspiration to download Docker and experiment with it yourself.

Docker Update

As this article was being published, the Docker team announced the release of version 0.8. This latest deliverable adds support for Mac OS X, consisting of two components. While the client runs natively on OS X, the Docker dæmon runs inside a lightweight VirtualBox-based VM that is easily managed with boot2docker, the included command-line client. This approach is necessary because the underlying technologies, such as LXC and namespaces, simply are not supported by OS X. I think we can expect a similar solution for other platforms, including Windows.
Version 0.8 also introduces several new builder features and experimental support for BTRFS (B-Tree File System). BTRFS is another copy-on-write filesystem, and the BTRFS storage driver is positioned as an alternative to the AuFS driver.
Most notably, Docker 0.8 brings with it many bug fixes and performance enhancements. This overall commitment to quality signals an effort by the Docker team to produce a version 1.0 that is ready to be used in production environments. With the team committing to a monthly release cycle, we can look forward to the 1.0 release in the April to May timeframe.

Resources

Main Docker Site: https://www.docker.io
Docker Registry: https://index.docker.io
Docker Registry API: http://docs.docker.io/en/latest/api/registry_api
Docker Index API: http://docs.docker.io/en/latest/api/index_api
Docker Remote API: http://docs.docker.io/en/latest/api/docker_remote_api

Wednesday, May 28, 2014

How to Share Files Between User Accounts on Windows, Linux, or OS X

http://www.howtogeek.com/189508/how-to-share-files-between-user-accounts-on-windows-linux-or-os-x

Your operating system provides each user account with its own folders when you set up several different user accounts on the same computer. Shared folders allow you to share files between user accounts.
This process works similarly on Windows, Linux, and Mac OS X. These are all powerful multi-user operating systems with similar folder and file permission systems.

Windows

On Windows, the “Public” user’s folders are accessible to all users. You’ll find this folder under C:\Users\Public by default. Files you place in any of these folders will be accessible to other users, so it’s a good way to share music, videos, and other types of files between users on the same computer.
Windows even adds these folders to each user’s libraries by default. For example, a user’s Music library contains the user’s music folder under C:\Users\NAME\ as well as the public music folder under C:\Users\Public\. This makes it easy for each user to find the shared, public files. It also makes it easy to make a file public: just drag and drop a file from the user-specific folder to the public folder in the library.
Libraries are hidden by default on Windows 8.1, so you’ll have to unhide them to do this.
These Public folders can also be used to share folders publicly on the local network. You’ll find the Public folder sharing option under Advanced sharing settings in the Network and Sharing Control Panel.
You could also choose to make any folder shared between users, but this will require messing with folder permissions in Windows. To do this, right-click a folder anywhere in the file system and select Properties. Use the options on the Security tab to change the folder’s permissions and make it accessible to different user accounts. You’ll need administrator access to do this.

Linux

This is a bit more complicated on Linux, as typical Linux distributions don’t come with a special user folder all users have read-write access to. The Public folder on Ubuntu is for sharing files between computers on a network.
You can use Linux’s permissions system to give other user accounts read or read-write access to specific folders. The process below is for Ubuntu 14.04, but it should be identical on any other Linux distribution using GNOME with the Nautilus file manager. It should be similar for other desktop environments, too.
Locate the folder you want to make accessible to other users, right-click it, and select Properties. On the Permissions tab, give “Others” the “Create and delete files” permission. Click the Change Permissions for Enclosed Files button and give “Others” the “Read and write” and “Create and Delete Files” permissions.
Other users on the same computer will then have read and write access to your folder. They’ll find it under /home/YOURNAME/folder under Computer. To speed things up, they can create a link or bookmark to the folder so they always have easy access to it.
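If you prefer the command line, the same permissions can be granted with chmod; a minimal sketch, assuming the shared folder is /home/YOURNAME/Shared:

chmod -R o+rwX /home/YOURNAME/Shared

The lowercase o targets "others", and the capital X adds the execute bit only to directories, so other users can traverse the folder tree without every file being marked executable.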

Mac OS X

Mac OS X creates a special Shared folder that all user accounts have access to. This folder is intended for sharing files between different user accounts. It’s located at /Users/Shared.
To access it, open the Finder and click Go > Computer. Navigate to Macintosh HD > Users > Shared. Files you place in this folder can be accessed by any user account on your Mac.

These tricks are useful if you’re sharing a computer with other people and you all have your own user accounts — maybe your kids have their own limited accounts. You can share a music library, downloads folder, picture archive, videos, documents, or anything else you like without keeping duplicate copies.

CLI ifconfig – How to set up an IP address from the command line in Linux

http://www.blackmoreops.com/2013/10/14/cli-ifconfig-setting-ip-addess-command-line-linux

Did you ever have trouble with Network Manager or ifconfig and feel you needed to set up a static IP address from the command line with ifconfig? I accidentally removed GNOME (my bad, wasn’t paying attention and did an apt-get autoremove -y .. how bad is that..), so I had a problem: I couldn’t connect to the Internet to reinstall GNOME Network Manager because I was stuck in text mode. Similarly, I once broke my network manager while trying to use a VPN and it just wouldn’t come back; I tried reinstalling it, but you need the Internet for that. So here’s a small guide showing how you can set up an IP address and networking from the Linux command line or CLI. You’ll be able to browse it from your mobile device and get things working again.

Firstly STOP and START Networking service

Some people would argue restart would work, but I prefer STOP-START to do a complete rehash. Also if it’s not working already, why bother?
# /etc/init.d/networking stop
 [ ok ] Deconfiguring network interfaces...done.
 # /etc/init.d/networking start
 [ ok ] Configuring network interfaces...done.

STOP and START Network-Manager

If you have some other network manager (e.g. wicd), then stop and start that one instead.

# /etc/init.d/network-manager stop
 [ ok ] Stopping network connection manager: NetworkManager.
 # /etc/init.d/network-manager start
 [ ok ] Starting network connection manager: NetworkManager.
Just for kicks, the following is what a restart would do. Still, I prefer the stop/start combination.
 # /etc/init.d/network-manager restart
 [ ok ] Stopping network connection manager: NetworkManager.
 [ ok ] Starting network connection manager: NetworkManager.

Now to bring up your interface:

 # ifconfig eth0 up
 # ifconfig eth0
 eth0      Link encap:Ethernet  HWaddr aa:bb:cc:11:22:33
 UP BROADCAST MULTICAST  MTU:1500  Metric:1
 RX packets:0 errors:0 dropped:0 overruns:0 frame:0
 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

Now let's set the IP address, subnet mask and broadcast address.

 # ifconfig eth0 192.168.43.226
 # ifconfig eth0 netmask 255.255.255.0
 # ifconfig eth0 broadcast 192.168.43.255
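The same result can be achieved in a single ifconfig invocation:
 # ifconfig eth0 192.168.43.226 netmask 255.255.255.0 broadcast 192.168.43.255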
Let's check the outcome:
# ifconfig eth0
 eth0     Link encap:Ethernet  HWaddr aa:bb:cc:11:22:33
 inet addr:192.168.43.226  Bcast:192.168.43.255  Mask:255.255.255.0
 UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
 RX packets:19325 errors:0 dropped:0 overruns:0 frame:0
 TX packets:19641 errors:0 dropped:0 overruns:0 carrier:0
 collisions:0 txqueuelen:1000
 RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
and try to ping google.com (because if google.com is down, the Internet is broken).
# ping google.com
 ping: unknown host google.com
Ah, the Internet is broken. Or maybe not! So what went wrong on our side?

Simple, we didn’t add any default Gateway. Let’s do that

# route add default gw 192.168.43.1 eth0
And just to confirm:
# route -n
 Kernel IP routing table
 Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
 0.0.0.0         192.168.43.1    0.0.0.0         UG    0      0        0 eth0
 192.168.43.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
Looks good to me. Let's ping google.com again:
# ping google.com
 PING google.com (119.30.40.16) 56(84) bytes of data.
 64 bytes from cache.google.com (119.30.40.16): icmp_req=1 ttl=49 time=520 ms
 64 bytes from cache.google.com (119.30.40.16): icmp_req=2 ttl=49 time=318 ms
 64 bytes from cache.google.com (119.30.40.16): icmp_req=3 ttl=49 time=358 ms
 64 bytes from cache.google.com (119.30.40.16): icmp_req=4 ttl=49 time=315 ms
 ^C
 --- google.com ping statistics ---
 4 packets transmitted, 4 received, 0% packet loss, time 3002ms
 rtt min/avg/max/mdev = 315.863/378.359/520.263/83.643 ms
Done.

4 Free and Open Source Alternatives of Matlab

http://electronicsforu.com/electronicsforu/circuitarchives/view_article.asp?sno=1804&title%20=%204+Free+and+Open+Source+Alternatives+of+Matlab&b_type=new&id=12985&group_type=cool_stuff

Matlab’s easy-to-use interface, power and flexibility definitely make it deservedly popular and useful software. But admit it: in bad times, this proprietary software can burn a hole in your pocket! So here we bring you four free and open-source alternatives to Matlab that can help you do the same work, or even better, at zero cost. Enjoy!



1. Scilab: This is Free Software used for numerical computation. It also comes with a high-level programming language. Scilab began as a university project, but has since become much more than that. Its development is presently sponsored by Scilab Enterprises, which also provides paid professional services around the application.
(Help to understand the difference between Scilab and Matlab: http://www.infoclearinghouse.com/files/scilab19.pdf)

2. GNU Octave: Popularly known as Octave, its official website describes it as a “high-level interpreted language, primarily intended for numerical computations. It provides capabilities for the numerical solution of linear and nonlinear problems, and for performing other numerical experiments. It also provides extensive graphics capabilities for data visualization and manipulation.”
(Help to understand the difference between GNU Octave and Matlab: http://www.ece.ucdavis.edu/~bbaas/6/notes/notes.diffs.octave.matlab.html)

It's one of the best free software options for this kind of job, and you rarely have to fall back on Matlab. There are many workarounds; for example, slow loops can be replaced by precompiled modules written in C.

3. Sagemath: Also known as Sage, this is a unified interface to a suite of more than 100 Free Software applications. Put together, these apps become a suitable alternative to Matlab for elementary to advanced number theory, cryptography, numerical computation, commutative algebra, group theory, combinatorics, etc.
Sagemath’s website describes its UI as “a notebook in a web browser or the command line. Using the notebook, Sage connects either locally to your own Sage installation or to a Sage server on the network. Inside the Sage notebook you can create embedded graphics, beautifully typeset mathematical expressions, add and delete input, and share your work across the network.”
Learn more about the benefits of Sage here: http://www.sagemath.org/tour-benchmarks.html

4. Genius: Popularly known as Genius Math Tool (GMT), this is another alternative to Matlab with some cool features. The tool offers a built-in interactive programming language called GEL (Genius Extension Language). It started as a simple GNOME calculator, but has morphed into something more powerful and useful.
GMT's website describes it as a “general purpose calculator program similar in some aspects to BC, Matlab, Maple or Mathematica. It is useful both as a simple calculator and as a research or educational tool. The syntax is very intuitive and is designed to mimic how mathematics is usually written.”
Here's a resource to understand Genius better: http://www.jirka.org/genius.html

Monday, May 26, 2014

The Growing Role of UEFI Secure Boot in Linux Distributions

http://www.linuxjournal.com/content/growing-role-uefi-secure-boot-linux-distributions

With the increasing prevalence of open-source implementations and the expansion of personal computing device usage to include mobile and non-PC devices as well as traditional desktops and laptops, combating malware attacks and other security threats is a growing priority for a broad community of vendors, developers and end users. This trend provides a useful example of how the flexibility and standardization provided by the Unified Extensible Firmware Interface (UEFI) technology addresses shared challenges in ways that help bring better products and experiences to market.
The UEFI specification defines an industry-leading interface between the operating system (OS) and the platform firmware, improving the performance, flexibility and security of computing devices. Designed for scalability, extensibility and interoperability, UEFI technology streamlines technological evolution of platform firmware. In 2013, developers of several open-source Linux-based operating systems, including Ubuntu 12.10, Fedora 18 and OpenSUSE 12.3, began using UEFI specifications in their distributions.
Additional features of UEFI include improved security in the pre-boot mode, faster booting, support of drives larger than 2.2 Terabytes and integration with modern 64-bit firmware device drivers. UEFI standards are platform-independent and compatible with a variety of platform architectures—meaning, users of several different types of operating systems, including both Linux and commercial systems, can enjoy the benefits of UEFI. Equally, because the UEFI specification includes bindings for multiple CPU architectures, these benefits apply on a variety of hardware platforms with these operating systems.
While UEFI Secure Boot may be one of the most talked about features, the complete set of features in the UEFI specification provide a standardized interoperable and extensible booting environment for the operating system and pre-boot applications. The attributes of this environment make it ideal for increased use in a rapidly widening array of Linux-based distributions. UEFI specifications are robust and designed to complement or even further advance Linux distributions. Industry experts expect to see continued expansion of their use during 2014 and beyond.

UEFI Secure Boot in Linux-Based Distributions

Malware developers have increased their attempts to attack the pre-boot environment because operating system and antivirus software vendors have hardened their code. Malware hidden in the firmware is virtually untraceable by the operating system, unless a search specifically targets malware within the firmware. UEFI Secure Boot assists with system firmware, driver and software validation. UEFI Secure Boot also allows users of Linux-based distributions to boot alternate operating systems without disabling UEFI Secure Boot. It provides users with the opportunity to run the software of their choice in the most secure and efficient manner, while promoting interoperability and technical innovation.
Secure Boot is an optional feature of the UEFI specification. The choice of whether to implement the feature and the details of its implementation (from an end-user standpoint) are business decisions made by equipment manufacturers. For example, consider the simplest and most usual case in which a platform uses UEFI-conformant firmware and a UEFI-aware operating system. When this system powers on (assuming it has UEFI Secure Boot enabled), the UEFI firmware uses security keys stored in the platform to validate the bootloader read from the disk. If the bootloader signature does not match the signature key needed for verification, the system will not boot.
In general, the signature check will succeed because the platform owner will have purchased the system with pre-installed software set up by the manufacturer to pre-establish trust between the firmware and operating system. The signature check also will succeed if the owner has installed an operating system loader that is trusted along with the appropriate keys that represent that trust if those keys are not already present in the platform. The case in which the signature check fails is most likely to arise when untrusted malware has insinuated its way into the machine, inserting itself into the boot path and tampering with the previously installed software. In this way, UEFI Secure Boot offers the prospect of a hardware-verified, malware-free operating system bootstrap process that helps improve system deployment security.
Without UEFI Secure Boot, malware developers can more easily take advantage of several pre-boot attack points, including the system-embedded firmware itself, as well as the interval between the firmware initiation and the loading of the operating system. The UEFI specification promotes extensibility and customization of security-enhanced interfaces, but allows the implementers to specify how they are used. As an optional feature, it is up to the platform manufacturer and system owner to decide how to manage UEFI Secure Boot. Thus, implementations may vary in how they express policy, and of course, UEFI Secure Boot is no panacea for every type of malware or security vulnerability. Nevertheless, in a variety of implementations that have already reached the market, UEFI Secure Boot has proven to be a practical and useful tool for improving platform integrity and successfully defending the point of attack for a dangerous class of pre-operating system malware.
The broadened adoption of UEFI Secure Boot technology, particularly by the Linux community, is not only a movement toward innovation, but also a progressive step toward the safeguarding of emerging computer platforms. The evolution of firmware technology in a variety of sectors continues to gain momentum, increasing the use of UEFI technology in Linux and commercial systems. This is a testament to the cross-functionality of UEFI between devices, software and systems, as well as its ability to deliver next-generation technologies for nearly any platform.

Disabling UEFI Secure Boot in Open-Source Implementations

A variety of models has emerged for the use of UEFI Secure Boot in the Open Source community. The minimal approach is to use the ability to disable the feature—a facility that is present in practically all platforms that implement UEFI Secure Boot. In so doing, the platform owner makes the machine compatible with any operating system that the platform supports regardless of whether that operating system supports UEFI Secure Boot. The downside of taking this approach is giving up the protection that having the feature enabled affords the platform, in terms of improved resistance to pre-operating system malware.
There are a couple key points to understand about the ability to enable or disable Secure Boot in any platform. The UEFI specification leaves both the choice of whether to implement Secure Boot—as well as the choice to provide an "on/off switch"—up to system designers. Practical considerations usually make appropriate choices obvious, depending on the intended use of the product. For example, a system designed to function as a kiosk that has to survive unattended by the owner in a retail store environment would likely choose to lock down the software payload as much as practical to avoid unintended changes that would compromise the kiosk's basic function. If the kiosk runtime booted using UEFI Secure Boot, it may make sense to provide no means to disable the feature as part of the strategy for maximizing kiosk availability and uptime.
General-purpose compute platforms present a different dynamic. In these cases, there is an expectation in the marketplace that the owner's choice of one or more operating systems can be installed on the machine, regardless of what shipped from the factory. For manufacturers of this class of systems, the choice of whether to allow enabling/disabling of UEFI Secure Boot takes into consideration that their customers want to choose from any available operating system, given that some may include no support for UEFI Secure Boot. This is true for open source as well as commercial operating system support. A vendor building a machine that supports all the operating system offerings from Microsoft's catalog, for example, must support older versions that have no UEFI Secure Boot support, as well as the newer ones from the Windows 8 generation that do have such support. Indeed, the need for the enable/disable feature appears in Microsoft's own platform design guide as a mandatory requirement, ensuring that conforming systems can run back catalog products as well as the newest products.
Following the same line of reasoning, most general-purpose platforms are shipping with not only the enable/disable feature, but also with facilities for the platform owner to manage the key store. This means owners can remove pre-installed keys, and in particular, add new ones of their own choosing. This facility then provides the basis for those who choose to roll their own operating system loader images, such as self-signing, or to select an operating system loader signed by the CA of their choice, regardless of whether or not the appropriate keys shipped from the factory.
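As an illustration only (not part of the original discussion), here is a hedged sketch of what owner key management might look like on a Linux system using commonly shipped tooling: openssl to generate a Machine Owner Key, sbsign to sign a bootloader image and mokutil to enroll the key through the shim's MOK facility. Exact tools, file names and paths vary by distribution.

# generate an owner key pair (hypothetical names and validity period)
openssl req -new -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=Machine Owner Key/" -keyout MOK.priv -out MOK.pem
openssl x509 -in MOK.pem -outform DER -out MOK.der
# sign a bootloader (or kernel) image with the owner key
sbsign --key MOK.priv --cert MOK.pem --output grubx64.efi.signed grubx64.efi
# queue the key for enrollment; the firmware asks for confirmation on the next reboot
mokutil --import MOK.der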
In some cases, the creators of Linux distributions have chosen to participate directly in the UEFI Secure Boot ecosystem. In this case, a distribution includes an operating system loader signed by a Certificate Authority (CA). Today, the primary CA is the UEFI CA hosted by Microsoft, which is separate but parallel to the CA used for Microsoft's own software product management. At the time of this writing, no other CA has offered to participate; however, the UEFI Forum would welcome such an offer, as having a second source of supply for signing events would be ideal.
In other cases, Linux distributions provide users with a general-purpose shim-bootloader that will chain boot to a standard, more complete Linux bootloader in a secure manner. This process extends the chain of trust from UEFI Secure Boot to the Linux system environment, in which it becomes the province of the operating system-present code to determine what, if anything, to do with that trust.

Linux-Based Platforms that Leverage UEFI Secure Boot

The past year has marked the implementation of UEFI specifications in three popular Linux-based operating systems: Ubuntu 12.10, Fedora 18 and OpenSUSE. Below are additional details about their use of UEFI standards.
Canonical's Ubuntu 12.10
Support for a base-level compatibility between Canonical's Ubuntu and UEFI firmware began in October 2012, with the releases of 12.10 64-bit and 12.04.2 64-bit. At the time of release, industry experts projected that most machines would ship with firmware compliant with version 2.3.1 of the UEFI standard. Currently, all Ubuntu 64-bit versions support the UEFI Secure Boot feature. When deployed in Secure Boot configurations, the Ubuntu boot process uses a small "boot shim", which allows compatibility with the third-party CA.
Fedora 18
The UEFI Secure Boot implementation in Fedora 18 prevents the execution of unsigned code in kernel mode and can boot on systems with Secure Boot enabled. Fedora also boots on UEFI systems that do not support or have disabled Secure Boot. The bootloaders can run in an environment in which the boot-path validation process takes place without UEFI. In this mode, there are no restrictions on executing code in kernel mode. In Fedora 18, UEFI Secure Boot extends the chain of trust from the UEFI environment into the kernel. The verification process takes place before loading kernel modules.
OpenSUSE 12.3
The recent establishment of UEFI as the standard firmware on all x86 platforms was a milestone for the Open Source community, specifically for OpenSUSE. OpenSUSE 12.2 included support for UEFI, and the recent OpenSUSE 12.3 provides experimental support for the Secure Boot extension.

The Linux Community's Increasing Use of UEFI Technology for Security Solutions on Next-Generation Platforms

The increased reliance on firmware innovation across non-traditional market segments, combined with the expansion of personal computing from traditional desktops and laptops to an ever-wider range of form factors, is changing the landscape of computing devices. Although mobile devices have traditionally had a custom, locked-down environment, their increasing versatility and the growing popularity of open-source operating systems brings growing vulnerability to complex security attacks. While UEFI Secure Boot cannot unilaterally eradicate the insurgence of security attacks on any device, it helps provide a cross-functional solution for all platforms using UEFI firmware—including Linux-based distributions designed for tablets, smartphones and other non-PC devices. Currently, no one has claimed or demonstrated an attack that can circumvent UEFI Secure Boot, where properly implemented and enabled. The expansion of UEFI technologies into the Linux space addresses the growing demand for security, particularly across the mobile and non-PC application continuum.

What's Next for UEFI Technology in Linux-Based Applications?

As UEFI specifications continue to enable the evolution of firmware technology in a variety of sectors, their use will continue to gain momentum. In addition, the popularity and proliferation of Linux-based distributions will create even greater demand for UEFI technology. The recent use of UEFI specifications in Linux-based operating systems, such as Ubuntu 12.10, Fedora 18 and OpenSUSE 12.3, underscores this trend.
These distribution companies, along with the Linux Foundation and a number of other thought-leading groups from the Open Source community, are now members of the UEFI Forum. This is an important step forward for the ecosystem as a whole, improving innovation and collaboration between the firmware and operating system communities. For example, as mentioned above, many systems include facilities to self-manage the key stores in the platform, but today, limited potential for automating this exists. Proposals from the Open Source community address this limitation, with the promise of significant simplification for installing open-source operating systems in after-market scenarios. By providing a venue where discussion of such proposals reaches the ears of all the right stakeholders, the UEFI Forum helps speed up the arrival of such solutions in the market. This is exactly the kind of innovation and collaboration that the UEFI Forum is eager to foster.
The increasing deployment of UEFI technology in both Linux and commercial systems is a testament to its ability to deliver next-generation technologies for nearly any platform. A growing number of Linux distributions use UEFI specifications, allowing users to launch virtually any operating system of their choosing, while still enjoying the added security benefits of UEFI Secure Boot. With the expansion of UEFI specifications across numerous platforms, its intended purpose—to streamline and aid in firmware innovation by promoting interoperability between software, devices and systems—is realized.

Key Features of UEFI

  • Support of a more secure system, across multiple interfaces.
  • Faster boot times.
  • Speedier time to market.
  • Extensibility, modularity and easy prototyping during development.
  • UEFI specifications allow developers to reuse code during the building process, promoting more efficiency.

Run the same command on many Linux servers at once

http://linuxaria.com/pills/run-the-same-command-on-many-linux-servers-at-once

Ever have to check a list of Linux servers for various things, like what version of CentOS they’re running, or maybe how long each has been running to get an uptime report? You can do all of that at once, and it’s very easy to get going, with the command gsh.
Group Shell (also called gsh) is a remote shell multiplexor. It lets you control many remote shells at once from a single shell. Unlike other command dispatchers, it is interactive, so shells spawned on the remote hosts are persistent.
It requires only an SSH server on the remote hosts, or some other way to open a remote shell.



gsh allows you to run commands on multiple hosts by adding tags to the gsh command.
Important things to remember:
  • /etc/ghosts contains a list of all the servers and tags
  • gsh is a lot more fun once you’ve set up ssh keys to your servers
Examples of what you can do (sketched as commands after this list):
List uptime on all servers in the linux group:
Check to see if an IP address was blocked with CSF by checking the csf and csfcluster groups/tags:
Unblock an IP and remove from /etc/csf.deny from all csf and csfcluster machines
Check the linux kernel version on all VPS machines running centos 5
Check cpanel version on all cpanel machines
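Here is a hedged sketch of what those invocations might look like, assuming gsh's usual "gsh <tag> <command>" form; the csf flags, the cPanel version file and the exact tag-combination syntax should be verified against the gsh readme and your own systems (the IP address is just an example):

# uptime on all servers tagged linux
gsh linux "uptime"
# check whether an IP address is blocked by CSF
gsh csfcluster "csf -g 192.0.2.10"
# unblock an IP and remove it from /etc/csf.deny
gsh csfcluster "csf -dr 192.0.2.10"
# kernel version on the centos5 machines
gsh centos5 "uname -r"
# cPanel version on the cpanel machines
gsh cpanel "cat /usr/local/cpanel/version"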
The full readme is located here: http://outflux.net/unix/software/gsh/
Here’s an example /etc/ghosts file:
# Machines
#
# hostname         OS-Version Hardware OS  cp     security
1.linuxbrigade.com debian6 baremetal linux plesk  iptables
2.linuxbrigade.com centos5 vps       linux cpanel csfcluster
3.linuxbrigade.com debian7 baremetal linux plesk  iptables
4.linuxbrigade.com centos6 vps       linux cpanel csfcluster
5.linuxbrigade.com centos6 vps       linux cpanel csfcluster
6.linuxbrigade.com centos6 vps       linux nocp   denyhosts
7.linuxbrigade.com debian6 baremetal linux plesk  iptables
8.linuxbrigade.com centos6 baremetal linux cpanel csf
9.linuxbrigade.com centos5 vps       linux cpanel csf

Bash Getopts – Scripts with Command Line Options

http://tuxtweaks.com/2014/05/bash-getopts

I've always wanted to know how to create command line options for my Bash scripts. After some research, I found there are two functions available to handle this: getopt and getopts. I'm not going to get into the debate about which one is better. getopts is a shell builtin and seems a little easier to implement than getopt, so I'll go with that for now.

bash getopts

I started out just trying to figure out how to process command line switches in my scripts. Eventually, I added some other useful functionality that makes this a good starting template for any interactive script. I've also included a help function with text formatting to make it a little easier to read.
Rather than go into a lengthy explanation of how getopts works in bash, I think it's simpler to just show some working code in a script.
#!/bin/bash

######################################################################
#This is an example of using getopts in Bash. It also contains some
#other bits of code I find useful.
#Author: Linerd
#Website: http://tuxtweaks.com/
#Copyright 2014
#License: Creative Commons Attribution-ShareAlike 4.0
#http://creativecommons.org/licenses/by-sa/4.0/legalcode
######################################################################

#Set Script Name variable
SCRIPT=`basename ${BASH_SOURCE[0]}`

#Initialize variables to default values.
OPT_A=A
OPT_B=B
OPT_C=C
OPT_D=D

#Set fonts for Help.
NORM=`tput sgr0`
BOLD=`tput bold`
REV=`tput smso`

#Help function
function HELP {
  echo -e \\n"Help documentation for ${BOLD}${SCRIPT}.${NORM}"\\n
  echo -e "${REV}Basic usage:${NORM} ${BOLD}$SCRIPT file.ext${NORM}"\\n
  echo "Command line switches are optional. The following switches are recognized."
  echo "${REV}-a${NORM}  --Sets the value for option ${BOLD}a${NORM}. Default is ${BOLD}A${NORM}."
  echo "${REV}-b${NORM}  --Sets the value for option ${BOLD}b${NORM}. Default is ${BOLD}B${NORM}."
  echo "${REV}-c${NORM}  --Sets the value for option ${BOLD}c${NORM}. Default is ${BOLD}C${NORM}."
  echo "${REV}-d${NORM}  --Sets the value for option ${BOLD}d${NORM}. Default is ${BOLD}D${NORM}."
  echo -e "${REV}-h${NORM}  --Displays this help message. No further functions are performed."\\n
  echo -e "Example: ${BOLD}$SCRIPT -a foo -b man -c chu -d bar file.ext${NORM}"\\n
  exit 1
}

#Check the number of arguments. If none are passed, print help and exit.
NUMARGS=$#
echo -e \\n"Number of arguments: $NUMARGS"
if [ $NUMARGS -eq 0 ]; then
  HELP
fi

### Start getopts code ###

#Parse command line flags
#If an option should be followed by an argument, it should be followed by a ":".
#Notice there is no ":" after "h". The leading ":" suppresses error messages from
#getopts. This is required to get my unrecognized option code to work.

while getopts :a:b:c:d:h FLAG; do
  case $FLAG in
    a)  #set option "a"
      OPT_A=$OPTARG
      echo "-a used: $OPTARG"
      echo "OPT_A = $OPT_A"
      ;;
    b)  #set option "b"
      OPT_B=$OPTARG
      echo "-b used: $OPTARG"
      echo "OPT_B = $OPT_B"
      ;;
    c)  #set option "c"
      OPT_C=$OPTARG
      echo "-c used: $OPTARG"
      echo "OPT_C = $OPT_C"
      ;;
    d)  #set option "d"
      OPT_D=$OPTARG
      echo "-d used: $OPTARG"
      echo "OPT_D = $OPT_D"
      ;;
    h)  #show help
      HELP
      ;;
    \?) #unrecognized option - show help
      echo -e \\n"Option -${BOLD}$OPTARG${NORM} not allowed."
      HELP
      #If you just want to display a simple error message instead of the full
      #help, remove the 2 lines above and uncomment the 2 lines below.
      #echo -e "Use ${BOLD}$SCRIPT -h${NORM} to see the help documentation."\\n
      #exit 2
      ;;
  esac
done

shift $((OPTIND-1))  #Remove the options getopts has already processed from the positional parameters.

### End getopts code ###


### Main loop to process files ###

#This is where your main file processing will take place. This example is just
#printing the files and extensions to the terminal. You should place any other
#file processing tasks within the while-do loop.

while [ $# -ne 0 ]; do
  FILE=$1
  TEMPFILE=`basename $FILE`
  #TEMPFILE="${FILE##*/}"  #This is another way to get the base file name.
  FILE_BASE=`echo "${TEMPFILE%.*}"`  #file without extension
  FILE_EXT="${TEMPFILE##*.}"  #file extension


  echo -e \\n"Input file is: $FILE"
  echo "File withouth extension is: $FILE_BASE"
  echo -e "File extension is: $FILE_EXT"\\n
  shift  #Move on to next input file.
done

### End main loop ###

exit 0
Paste the above text into a text editor and then save it somewhere in your executable path. I chose to call the script options and I saved it under /home/linerd/bin. Once you save it, make sure to make it executable.
chmod +x ~/bin/options
Now you can run the script. Try running it with the -h switch to show the help information.
options -h
Now try running it with an unsupported option.
options -z
Finally, getopts can handle your command line options in any order. The only rule is that the file or files you are processing have to come after all of the option switches.
options -d bar -c chu -b man -a foo example1.txt example2.txt
So you can see from these examples how you can set variables in your scripts with command line options. There's more going on than just getopts in this script, but I think these are valuable additions that make this a good starting template for new scripts. If you'd like to learn more about bash getopts, you can find the documentation buried deep within the bash man page in the "Builtins" section. You can also find info in the Bash Reference Manual.

What Next?

So what will you use getopts for? Let me know in the comments.

OpenStack 101: The parts that make up the project

http://www.networkworld.com/news/2014/051914-openstack-parts-281682.html?source=nww_rss

OpenStack is a platform, but it's made up of pieces. Here are the big ones

At its core, OpenStack is an operating system that builds public or private clouds. But OpenStack is a platform; it isn't a single piece of software that you download and install to, voila, build a cloud.
Instead, OpenStack is made up of more than a dozen components that control the most important aspects of a cloud. There are projects for compute, networking and storage management, others for identity and access management, and ones for orchestrating the applications that run on top of it all. Put together, these components enable enterprises and service providers to offer on-demand computing resources by provisioning and managing large networks of virtual machines.
The code for each of these projects can be downloaded for free on GitHub and many of these projects are updated twice a year when a new release comes out. Most companies that interact with OpenStack will do so through a public cloud that runs on these components, or through a productized version of this code distributed by one of the many vendors involved in the project. It’s still important to know the pieces that make up the project. So here is OpenStack 101.
Compute
Code-name: Nova

OpenStack was started in 2010 when Rackspace and NASA came together. NASA contributed the compute aspect, while Rackspace contributed the storage. Today, that compute project lives on as Nova.
Nova is designed to manage and automate the provisioning of compute resources. This is the core of the virtual machine management software, but it is not a hypervisor. Instead, Nova supports virtualization technologies including KVM, Xen, ESX and Hyper-V, and it can run on bare-metal and high performance computing configurations too. Compute resources are available via APIs for developers, and through web interfaces for administrators and users. The compute architecture is designed to scale horizontally on standard hardware. New in the Icehouse release are rolling upgrades, which allow OpenStack clouds to be updated to a new release without having to shut down VMs.
Nova can be thought of as the equivalent of Amazon Web Services' Elastic Compute Cloud (EC2).
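As a hedged illustration using the nova command-line client of that era (the flavor, image and instance names here are placeholders, not anything from the article), launching an instance through Nova looks roughly like this:

# List available flavors and images, then boot a VM from them.
nova flavor-list
nova image-list
nova boot --flavor m1.small --image cirros-0.3.2 demo-instance
nova list    # shows the new instance and its status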
Networking
Code-name: Neutron (formerly Quantum)

Neutron manages the networking associated with OpenStack clouds. It is an API-driven system that allows administrators or users to customize network settings, then spin up and down a variety of different network types (such as flat networks, VLANs or virtual private networks) on-demand. Neutron allows for dedicated or floating IP addresses (the latter of which can be used to reroute traffic during maintenance or a failure, for example). It supports the OpenFlow software defined networking protocol and plugins are available for services such as intrusion detection, load balancing and firewalls.
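A hedged sketch of the corresponding neutron CLI calls (the network and subnet names are made up, and ext-net is an assumed name for a pre-existing external network):

# Create a tenant network and a subnet, then allocate a floating IP.
neutron net-create demo-net
neutron subnet-create demo-net 10.0.0.0/24 --name demo-subnet
neutron floatingip-create ext-net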
Object Storage
Code-name: Swift

OpenStack has two major storage platforms: An object storage system named Swift and a block storage platform named Cinder. Swift, which was one of the original components contributed by Rackspace, is a fully-distributed, scale-out API-accessible platform that can be integrated into applications or used for backup and archiving. It is not a traditional file storage system though; instead, Swift has no “central brain.” The OpenStack software automatically replicates data stored in Swift across multiple nodes to ensure redundancy and fault tolerance. If a node fails, the object is automatically replicated to new commodity nodes that are added to the system. That is one of the key enabling features to allow OpenStack to scale to massive sizes. Think of Swift as the equivalent of AWS’s Simple Storage Service (S3).
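As a rough example using the swift command-line client (the container and file names are placeholders):

# Upload a file into a container (created automatically if missing),
# then list containers and the objects in one of them.
swift upload backups etc.tar.gz
swift list
swift list backups
swift download backups etc.tar.gz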

Block Storage
Code-name: Cinder

Unlike Swift, Cinder manages blocks of storage, which are meant to be assigned to compute instances to provide expanded storage. The Cinder software manages the creation of these blocks, plus the acts of attaching and detaching them to and from compute servers. The other major feature of Cinder is its integration with traditional enterprise storage systems, such as Linux server storage and platforms from Ceph, NetApp, Nexenta, SolidFire and Zadara, among others. This is the equivalent of AWS's Elastic Block Storage (EBS) feature.
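A hedged example of creating a 10GB volume and attaching it to a Nova instance (the names and IDs are placeholders; the v1 cinder client of that era used --display-name):

# Create a 10 GB volume, then attach it to a running instance as /dev/vdb.
cinder create --display-name demo-vol 10
cinder list
nova volume-attach demo-instance <volume-id> /dev/vdb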
Identity and access management
Code-name: Keystone

OpenStack has a number of shared services that work across various parts of the software; Keystone is one of them. It is the primary tool for user authentication and role-based access control in OpenStack clouds. Keystone integrates with LDAP to provide a central directory of users and allows administrators to set policies that control which resources various users can access. Keystone supports traditional user-name and password logins, in addition to token-based logins.
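A minimal sketch of how the other CLI clients typically authenticate against Keystone (Keystone v2-era credentials; every value below is a placeholder):

# Credentials the CLI clients (nova, neutron, cinder, ...) read from the
# environment; usually kept in an "openrc" file and sourced before use.
export OS_AUTH_URL=http://controller:5000/v2.0
export OS_TENANT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=secret

# Ask Keystone for a token to confirm the credentials work.
keystone token-get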
Dashboard
Code-name: Horizon

This is the primary graphical user interface for using OpenStack clouds. The web-based tool gives users and administrators the ability to provision and automate services. It’s the primary way for accessing resources if API calls are not used.
Image service
Code-name: Glance

One of the key benefits to a cloud platform is the ability to spin up virtual machines quickly when users request them. Glance helps accomplish this by creating templates for virtual machines. Glance can copy or snapshot a virtual machine image and allow that to be recreated. That means administrators can set up a catalog of virtual machine templates that users can select from and self-provision. Glance can also be used to back up existing images to save them. Glance integrates with Cinder to store the images.
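For example (hedged; the image name and file are placeholders, using the v1 glance client flags of that era):

# Register a downloaded qcow2 image so Nova can boot instances from it.
glance image-create --name "cirros-0.3.2" \
    --disk-format qcow2 --container-format bare \
    --is-public True --file cirros-0.3.2-x86_64-disk.img
glance image-list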
Usage data and orchestration
Two of the newest projects in OpenStack are Ceilometer and Heat. Ceilometer is a telemetry system that allows administrators to track usage of the OpenStack cloud, including which users accessed which resources, as well as aggregate data about the cloud usage as a whole.
Heat is an orchestration engine that allows developers to automate the deployment of infrastructure. Compute, networking and storage configurations can be assigned automatically to a virtual machine or application, which makes onboarding new instances easier. Heat also has an auto-scaling element, which allows services to add resources as they are needed.
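As a hedged sketch, a minimal Heat (HOT) template, the CLI calls to launch it, and a peek at Ceilometer's meters might look like this (the stack name, image and flavor are placeholders):

# A one-resource HOT template that boots a single server.
cat > demo-stack.yaml <<'EOF'
heat_template_version: 2013-05-23
resources:
  demo_server:
    type: OS::Nova::Server
    properties:
      image: cirros-0.3.2
      flavor: m1.tiny
EOF

heat stack-create -f demo-stack.yaml demo-stack
heat stack-list

# Ceilometer: list the meters being collected for the cloud.
ceilometer meter-list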
On the way: Databases, bare metal management, messaging and Hadoop
There are a number of projects that are still incubating, which means they are in development and not yet full-fledged components of OpenStack. These include Trove, a MySQL database as a service (think of this as an equivalent to AWS's Relational Database Service, RDS). Another is Sahara (formerly named Savanna), which is meant to allow OpenStack software to control Hadoop clusters. Ironic is a project that will allow OpenStack to manage bare-metal servers. And Marconi is a messaging service.

These projects will continue to be developed by the OpenStack community and will most likely be integrated more fully into the project in the coming releases.

Notable Penetration Test Linux distributions of 2014

http://www.blackmoreops.com/2014/02/03/notable-penetration-test-linux-distributions-of-2014

 
A penetration test, or pentest for short, is an attack on a computer system with the intention of finding security weaknesses, potentially gaining access to it, its functionality and data. A penetration-testing Linux distribution is a specially built Linux distro that can be used for analyzing and evaluating the security measures of a target system.
There are several operating system distributions that are geared towards performing penetration testing. They typically contain a pre-packaged and pre-configured set of tools. This is useful because the penetration tester does not have to hunt down a tool when it is required; doing so mid-test can lead to complications such as compile errors, dependency issues and configuration errors, and simply acquiring additional tools may not be practical in the tester's context.
Popular examples are Kali Linux (replacing Backtrack as of December 2012) based on Debian Linux, Pentoo based on Gentoo Linux and BackBox based on Ubuntu Linux. There are many other specialized operating systems for penetration testing, each more or less dedicated to a specific field of penetration testing.
Penetration tests are valuable for several reasons:
  1. Determining the feasibility of a particular set of attack vectors
  2. Identifying higher-risk vulnerabilities that result from a combination of lower-risk vulnerabilities exploited in a particular sequence
  3. Identifying vulnerabilities that may be difficult or impossible to detect with automated network or application vulnerability scanning software
  4. Assessing the magnitude of potential business and operational impacts of successful attacks
  5. Testing the ability of network defenders to successfully detect and respond to the attacks
  6. Providing evidence to support increased investments in security personnel and technology
The new pentest distros are developed and maintained with user friendliness in mind, so anyone with basic Linux knowledge can use them. Tutorials and how-to articles are available to the public rather than kept within a closed community. The idea that pentest distros are used mainly by network and computer security experts, security students and audit firms no longer applies; everyone wants to test their own network, wireless connection, website and database, and I must say most of the distribution owners are making it really easy and offering training for those who are interested.
Now let's have a look at some of the best pentest distros of 2014. Some are well maintained, some are not, but either way they all offer a great package list to play with:

1. Kali Linux (previously known as BackTrack 5r3)

Kali is a complete re-build of BackTrack Linux, adhering completely to Debian development standards. All-new infrastructure has been put in place, all tools were reviewed and packaged, and we use Git for our VCS.
  • More than 300 penetration testing tools: After reviewing every tool that was included in BackTrack, we eliminated a great number of tools that either did not work or had other tools available that provided similar functionality.
  • Free and always will be: Kali Linux, like its predecessor, is completely free and always will be. You will never, ever have to pay for Kali Linux.
  • Open source Git tree: We are huge proponents of open source software and our development tree is available for all to see and all sources are available for those who wish to tweak and rebuild packages.
  • FHS compliant: Kali has been developed to adhere to the Filesystem Hierarchy Standard, allowing all Linux users to easily locate binaries, support files, libraries, etc.
  • Vast wireless device support: We have built Kali Linux to support as many wireless devices as we possibly can, allowing it to run properly on a wide variety of hardware and making it compatible with numerous USB and other wireless devices.
  • Custom kernel patched for injection: As penetration testers, the development team often needs to do wireless assessments so our kernel has the latest injection patches included.
  • Secure development environment: The Kali Linux team is made up of a small group of trusted individuals who can only commit packages and interact with the repositories while using multiple secure protocols.
  • GPG signed packages and repos: All Kali packages are signed by each individual developer when they are built and committed and the repositories subsequently sign the packages as well.
  • Multi-language: Although pentesting tools tend to be written in English, we have ensured that Kali has true multilingual support, allowing more users to operate in their native language and locate the tools they need for the job.
  • Completely customizable: We completely understand that not everyone will agree with our design decisions so we have made it as easy as possible for our more adventurous users to customize Kali Linux to their liking, all the way down to the kernel.
  • ARMEL and ARMHF support: Since ARM-based systems are becoming more and more prevalent and inexpensive, we knew that Kali’s ARM support would need to be as robust as we could manage, resulting in working installations for both ARMEL and ARMHF systems. Kali Linux has ARM repositories integrated with the mainline distribution so tools for ARM will be updated in conjunction with the rest of the distribution. Kali is currently available for the following ARM devices:
Kali is specifically tailored to penetration testing and therefore, all documentation on this site assumes prior knowledge of the Linux operating system.

2. NodeZero Linux

Penetration testing and security auditing require specialist tools. The natural path leads us to collect them all in one handy place. However, how that collection is implemented can be critical to how you deploy effective and robust testing.
It is said that necessity is the mother of invention, and NodeZero Linux is no different. Our team is made up of testers and developers who have come to the consensus that live systems do not offer what they need in their security audits. Penetration-testing distributions have historically used the "live" system concept of Linux, which really means that they try not to make any permanent changes to a system: all changes are gone after a reboot, and they run from media such as discs and USB drives. While that may be very handy for occasional testing, its usefulness wears thin when you're testing regularly. It's our belief that "live systems" just don't scale well in a robust testing environment.
Although NodeZero Linux can be used as a live system for occasional testing, its real strength comes from the understanding that a tester requires a strong and efficient system. This is achieved, in our belief, by working from a distribution that is a permanent installation and that benefits from a strong selection of tools, integrated with a stable Linux environment.
NodeZero Linux is reliable, stable and powerful. Based on the industry-leading Ubuntu Linux distribution, NodeZero Linux takes all the stability and reliability that comes with Ubuntu's Long Term Support model, and its power comes from the tools configured to live comfortably within the environment.

3. BackBox Linux

BackBox is a Linux distribution based on Ubuntu. It has been developed to perform penetration tests and security assessments. It is designed to be fast and easy to use and to provide a minimal yet complete desktop environment, thanks to its own software repositories, which are always updated to the latest stable versions of the most used and best-known ethical hacking tools.
BackBox's main aim is to provide an alternative, highly customizable and high-performing system. BackBox uses the lightweight window manager Xfce. It includes some of the most used security and analysis Linux tools, covering a wide range of goals, from web application analysis to network analysis, from stress tests to sniffing, as well as vulnerability assessment, computer forensic analysis and exploitation.
The power of this distribution comes from its Launchpad repository core, constantly updated to the latest stable version of the most known and used ethical hacking tools. The integration and development of new tools in the distribution follows the open-source community, and particularly the Debian Free Software Guidelines criteria.
The BackBox Linux team takes pride in excelling in the following areas:
  • Performance and speed are key elements
Starting from an appropriately configured Xfce desktop manager, it offers stability and a speed that only a few other desktop managers can match, achieved through extreme tweaking of services, configurations, boot parameters and the entire infrastructure. BackBox has been designed with the aim of achieving maximum performance and minimum consumption of resources.
This makes BackBox a very fast distro, suitable even for old hardware configurations.
  • Everything is in the right place
The main menu of BackBox has been well organized and designed to avoid any chaos when looking for the tools we need. Every single tool has been selected carefully in order to avoid redundancies and tools with similar functionality.
With particular attention to the end user's needs, all menus and configuration files have been organized and reduced to the essential minimum necessary to provide an intuitive, friendly and easy experience of the distribution.
  • It’s standard compliant
The software packaging process and the configuration and tweaking of the system follow the standard Ubuntu/Debian guidelines.
Debian and Ubuntu users will feel right at home, while newcomers can follow the official documentation and BackBox additions to customize their systems without any tricky workarounds, because it is standard and straightforward!
  • It’s versatile
As a live distribution, BackBox offers an experience that few other distros can, and once installed it naturally lends itself to the role of a desktop-oriented system. Thanks to the set of packages included in the official repository, it provides the user with easy and versatile use of the system.
  • It’s hacker friendly
If you'd like to make any change or modification to suit your purposes, or add additional tools that are not present in the repositories, nothing could be easier with BackBox. Create your own Launchpad PPA, send your package to the dev team and contribute actively to the evolution of BackBox Linux.

4. Blackbuntu

Blackbuntu is a distribution for penetration testing that was specially designed for security-training students and practitioners of information security. It is a penetration-testing distribution built around the GNOME desktop environment.
Here is a list of the security and penetration-testing tool categories available within the Blackbuntu package (each category has many sub-categories), which gives you a general idea of what comes with this pentesting distro:
  • Information Gathering,
  • Network Mapping,
  • Vulnerability Identification,
  • Penetration,
  • Privilege Escalation,
  • Maintaining Access,
  • Radio Network Analysis,
  • VoIP Analysis,
  • Digital Forensic,
  • Reverse Engineering and a
  • Miscellaneous section.
Because this is Ubuntu-based, almost every device and piece of hardware just works, which is great because you waste less time troubleshooting and spend more time working.

5. Samurai Web Testing Framework

The Samurai Web Testing Framework is a live linux environment that has been pre-configured to function as a web pen-testing environment. The CD contains the best of the open source and free tools that focus on testing and attacking websites. In developing this environment, we have based our tool selection on the tools we use in our security practice. We have included the tools used in all four steps of a web pen-test.
Starting with reconnaissance, we have included tools such as the Fierce domain scanner and Maltego. For mapping, we have included tools such as WebScarab and ratproxy. We then chose tools for discovery; these include w3af and Burp. For exploitation, the final stage, we included BeEF, AJAXShell and much more. This CD also includes a pre-configured wiki, set up to be the central information store during your pen-test.
Most penetration tests are focused on either network attacks or web application attacks. Given this separation, many pen testers themselves have understandably followed suit, specializing in one type of test or the other. While such specialization is a sign of a vibrant, healthy penetration testing industry, tests focused on only one of these aspects of a target environment often miss the real business risks of vulnerabilities discovered and exploited by determined and skilled attackers. By combining web app attacks such as SQL injection, Cross-Site Scripting, and Remote File Includes with network attacks such as port scanning, service compromise, and client-side exploitation, the bad guys are significantly more lethal. Penetration testers and the enterprises who use their services need to understand these blended attacks and how to measure whether they are vulnerable to them. This session provides practical examples of penetration tests that combine such attack vectors, and real-world advice for conducting such tests against your own organization.
The Samurai Web Testing Framework looks like a very clean distribution, and the developers are focused on what they do best, rather than trying to cram everything into one single distribution and thus making it tougher to support. This is in a way a good thing: if you're just starting out, you should begin with a small set of tools and then move on to the next step.

6. Knoppix STD

Like Knoppix, this distro is based on Debian and originated in Germany. STD stands for Security Tools Distribution; it is a collection of hundreds, if not thousands, of open-source security tools. It's a live Linux distro (i.e., it runs from a bootable CD in memory without changing the native operating system of your PC). Its sole purpose in life is to put as many security tools at your disposal with as slick an interface as it can.
The architecture is i486, and it runs the following desktops: GNOME, KDE, LXDE and also Openbox. Knoppix has been around for a long time now; in fact, I think it was one of the original live distros.
Although Knoppix is primarily designed to be used as a live CD, it can also be installed on a hard disk. The Cryptography section is particularly well known in Knoppix.
The developers and official forum might seem snobbish. I mean, look at this exchange from their FAQ:
Question: I am new to Linux. Should I try STD?
Answer: No. If you’re new to Linux STD will merely hinder your learning experience. Use Knoppix instead.
But hey, aren't all pentest distro users like that? If you can't take the heat, maybe you shouldn't be trying a pentest distro after all. Kudos to the STD devs for speaking their mind.

7. Pentoo

Pentoo is a live CD and live USB designed for penetration testing and security assessment. Based on Gentoo Linux, Pentoo is provided both as a 32-bit and as a 64-bit installable live CD. Pentoo is also available as an overlay for an existing Gentoo installation. It features packet-injection-patched Wi-Fi drivers, GPGPU cracking software, and lots of tools for penetration testing and security assessment. The Pentoo kernel includes grsecurity and PaX hardening and extra patches, with binaries compiled from a hardened toolchain with the latest nightly versions of some tools available.
It's basically a Gentoo install with lots of customized tools, a customized kernel, and much more. Here is a non-exhaustive list of the features currently included:
  •     Hardened Kernel with aufs patches
  •     Backported Wifi stack from latest stable kernel release
  •     Module loading support ala slax
  •     Changes saving on usb stick
  •     XFCE4 wm
  •     Cuda/OPENCL cracking support with development tools
  •     System updates, once you finally have it installed
Put simply, Pentoo is Gentoo with the pentoo overlay. This overlay is available in layman, so all you have to do is layman -L and layman -a pentoo (a short sketch of the commands follows below).
Pentoo has a pentoo/pentoo meta ebuild and multiple Pentoo profiles, which will install all the Pentoo tools based on USE flags. The package list is fairly adequate. If you're a Gentoo user, you might want to use Pentoo, as this is the closest distribution with a similar build.
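A hedged sketch of that process on an existing Gentoo box (the meta ebuild name comes from the paragraph above; double-check the Pentoo documentation before running it):

# List available overlays, add the pentoo overlay, then pull in the
# pentoo/pentoo meta package that drags in the tool set via USE flags.
layman -L
layman -a pentoo
emerge --ask pentoo/pentoo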

8. WEAKERTH4N

Weakerth4n has a very well maintained website and a devoted community. Built from Debian Squeeze and using Fluxbox as its desktop environment, this operating system is particularly suited to Wi-Fi hacking, as it contains plenty of wireless cracking and hacking tools.
Tools include: Wi-Fi attacks, SQL hacking, Cisco exploitation, password cracking, web hacking, Bluetooth, VoIP hacking, social engineering, information gathering, fuzzing, Android hacking, networking and creating shells.
Vital Statistics
  •     OS Type: Linux
  •     Based on: Debian, Ubuntu
  •     Origin: Italy
  •     Architecture: i386, x86_64
  •     Desktop: XFCE
If you look into their website you get the feeling that the maintainers are active and they write a lot of guides and tutorials to help newbies. As this is based on Debian Squeeze, this might be something you would want to give a go. They also released Version 3.6 BETA, (Oct 2013) so yeah, give it a go. You might just like it.

9. Matriux

Matriux is a Debian-based security distribution designed for penetration testing and forensic investigations. Although it is primarily designed for security enthusiasts and professionals, it can also be used by any Linux user as a desktop system for day-to-day computing. Besides standard Debian software, Matriux also ships with an optimised GNOME desktop interface, over 340 open-source tools for penetration testing, and a custom-built Linux kernel.
Matriux was first released in 2009 under the code name "lithium", followed by versions such as "xenon" based on Ubuntu. Matriux "Krypton" followed in 2011, when the system moved to Debian. Further "Krypton" versions followed with v1.2, and then Ec-Centric in 2012. Matriux "Leandros" RC1 was released on 2013-09-27 as a major revamp of the existing system.
The Matriux arsenal is divided into sections with a broad classification of tools for reconnaissance, scanning, attack tools, frameworks, radio (wireless), digital forensics, debuggers, tracers, fuzzers and other miscellaneous tools, providing wide coverage of the steps followed in a complete penetration-testing and forensic scenario. Although many questions were raised regarding why there is a need for another security distribution when others already exist, we believed in and followed the free spirit of Linux in making one. We have always tried to stay up to date with tool and hardware support, and so we include the latest tools and compile a custom kernel to stay abreast of the latest technologies in the field of information security. This version includes a new section of PCI-DSS tools.
Matriux is also designed to run from a live environment, such as a CD/DVD or USB stick, which can be helpful in computer forensics and data recovery: forensic analysis, investigations and retrieval, not only from physical hard drives but also from solid-state drives and the NAND flash used in smartphones such as Android and iPhone devices. With Matriux Leandros we also support and work with projects and tools that have been discontinued over time, and we keep track of the latest tools and applications that have been developed and presented at recent conferences.
Features (notable updates compared to Ec-Centric):
  • Custom kernel 3.9.4 (patched with aufs, squashfs and xz filesystem mode; includes support for a wide range of wireless drivers and hardware); includes support for Alfacard 0036NH
  • USB persistence
  • Easy integration with VirtualBox and VMware Player, even in live mode
  • MID has been updated to make it easy to install; check http://www.youtube.com/watch?v=kWF4qRm37DI
  • Includes the latest tools introduced at Black Hat 2013 and DEF CON 2013; build updated until September 22, 2013
  • UI inspired by Greek mythology
  • New section added: PCI-DSS
  • IPv6 tools included.
Another great-looking distro based on Debian Linux. I am a great fan of Greek mythology (their UI was inspired by it), so I like it already.

10. DEFT

DEFT Linux is a free-software GNU/Linux live distribution based on Ubuntu, designed by Stefano Fratepietro for purposes related to computer forensics and computer security. Version 7.2 takes up about 2.5 GB.
The DEFT Linux distribution is made up of a GNU/Linux system and DART (Digital Advanced Response Toolkit), a suite dedicated to digital forensics and intelligence activities. It is currently developed and maintained by Stefano Fratepietro, with the support of Massimo Dal Cero, Sandro Rossetti, Paolo Dal Checco, Davide Gabrini, Bartolomeo Bogliolo, Valerio Leomporra and Marco Giorgi.
The first version of DEFT Linux was introduced in 2005, thanks to the Computer Forensics course of the Faculty of Law at the University of Bologna. The distribution is currently used during the laboratory hours of the Computer Forensics course held at the University of Bologna and at many other Italian universities and private entities.
It is also one of the main solutions employed by law enforcement agencies during computer forensic investigations.
In addition to a considerable number of Linux applications and scripts, DEFT also features the DART suite, containing Windows applications (both open source and closed source) that are still relevant because there is no equivalent in the Unix world.
Since 2008 it has often been among the technologies used by different police forces; today the following entities (national and international) use the suite during investigative activities:
  •     DIA (Anti-Mafia Investigation Department)
  •     Postal Police of Milan
  •     Postal Police of Bolzano
  •     Polizei Hamburg (Germany)
  •     Maryland State Police (USA)
  •     Korean National Police Agency (Korea)
Computer Forensics software must be able to ensure the integrity of file structures and metadata on the system being investigated in order to provide an accurate analysis. It also needs to reliably analyze the system being investigated without altering, deleting, overwriting or otherwise changing data.
There are certain characteristics inherent to DEFT that minimize the risk of altering the data being subjected to analysis. Some of these features are:
  • On boot, the system does not use the swap partitions on the system being analyzed
  • During system startup there are no automatic mount scripts.
  • There are no automated systems for any activity during the analysis of evidence;
  • All the mass storage and network traffic acquisition tools do not alter the data being acquired.
You can fully utilize the wide-ranging capabilities of the DEFT toolkit by booting, from a CD-ROM or a DEFT USB stick, any system with the following characteristics:
  • a CD/DVD-ROM drive or USB port from which the BIOS can boot;
  • an x86 CPU (Intel, AMD or Cyrix) at 166MHz or higher to run DEFT Linux in text mode, or 200MHz to run DEFT Linux in graphical mode;
  • 64MB of RAM to run DEFT Linux in text mode, or 128MB to run the DEFT GUI.
DEFT also supports the new Apple Intel-based architectures.
All in all, it looks and sounds like a purpose-built distro that is used by several government bodies. Most of the documents are in Italian, but translations are also available. It is based on Ubuntu, which is a big advantage, as you can do so much more with it. Their documentation is written in a clear and professional style, so you might find it useful. Also, if you speak Italian, I guess you already use it or have used it.

11. CAINE

CAINE is another Italian-born, Ubuntu-based distro.
CAINE (an acronym for Computer Aided INvestigative Environment) is a live distribution oriented to computer forensics, historically conceived by Giancarlo Giustini within a Digital Forensics project of the Interdepartmental Research Center for Security (CRIS) of the University of Modena and Reggio Emilia (see the official site). The project is currently maintained by Nanni Bassetti.
The latest version of CAINE is based on Ubuntu Linux 12.04 LTS, MATE and LightDM. Compared to its original version, the current version has been modified to meet the forensic-reliability and safety standards laid down by NIST.
CAINE includes:
  • CAINE Interface: a user-friendly interface that brings together a number of well-known forensic tools, many of which are open source;
  • An updated and optimized environment for conducting forensic analysis;
  • A semi-automatic report generator, which gives the investigator an easily editable and exportable document summarizing the activities;
  • Adherence to the investigative procedure recently defined by Italian Law 48/2008.
In addition, CAINE is the first distribution to include forensic functionality inside the Caja/Nautilus scripts, along with all the security patches needed to avoid altering the devices under analysis.
The distro uses several patches specifically constructed to make the system "forensic", i.e., to not alter the original device being examined and/or duplicated:
  • Root file system spoofing: a patch that prevents tampering with the source device;
  • No automatic recovery of a corrupted journal: a patch that prevents tampering with the source device through journal recovery;
  • Mounter and RBFstab: mounting devices simply and via a graphical interface. RBFstab is set to treat EXT3 as EXT4 with the noload option, to avoid automatic recovery of any corrupt journal on EXT3;
  • Swap file off: a patch that avoids modifying the swap file on systems with limited RAM, preventing alteration of the original computer artifact and the overwriting of data useful to the investigation.
CAINE and open source: patches and technical solutions have all been made in collaboration with people (professionals, hobbyists, experts and so on) from all over the world.
CAINE fully represents the spirit of the open-source philosophy: the project is completely open, and anyone can take over the legacy of the previous developer or project manager.
The distro is open source, the Windows side (NirLauncher/WinTaylor) is open source and, last but not least, the distro is installable, making it possible to rebuild it in a new version and so give this project a long life.

12. Parrot Security OS

Parrot Security OS is an advanced operating system developed by Frozenbox Network and designed to perform security and penetration tests, do forensic analysis or act in anonymity.
Anyone can use Parrot, from the pro pentester to the newbie, because it provides the most professional tools combined in an easy-to-use, fast and lightweight pen-testing environment, and it can also be used for everyday work.
It seems this distro targets Italian users specifically, like a few others mentioned above. Their interface looks cleaner, which suggests they have an active development team working on it, something that can't be said about some other distros. If you go through their screenshots page you'll see it's very neat. Give it a try and report back; you never know which distro might suit you better.

13. BlackArch Linux


BlackArch Linux is a lightweight expansion to Arch Linux for penetration testers and security researchers. The repository contains 838 tools. You can install tools individually or in groups. BlackArch is compatible with existing Arch installs.
Please note that although BlackArch is past the beta stage, it is still a relatively new project. [As seen on the BlackArch website]
I've used Arch Linux for some time; it is very lightweight and efficient. If you're comfortable with building your Linux installation from scratch and at the same time want all the pentest tools (without having to add them manually one at a time), then BlackArch is the right distro for you. Knowing the Arch community, your support-related issues will be resolved quickly.
However, I must warn you that Arch Linux (or BlackArch Linux in this case) is not for newbies; you will get lost at step 3 or 4 of the install. If you're moderately comfortable with Linux and Arch in general, go for it. Their website and community look very organized (I like that), and it is still growing.
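A minimal, hedged sketch of pulling in tools, assuming the BlackArch repository has already been added to /etc/pacman.conf on an existing Arch install (the group names follow BlackArch's conventions and should be verified against their documentation):

# List the BlackArch package groups (categories).
pacman -Sgg | grep blackarch | cut -d' ' -f1 | sort -u

# Install a single category, e.g. the scanners...
pacman -S blackarch-scanner

# ...or pull in every BlackArch tool at once.
pacman -S blackarch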

Conclusion

I've tried to gather as much information as I could to compile this list. If you're reading this because you want to pick one of these many penetration-testing Linux distributions, my suggestion would be to pick the distribution that is closest to your current one. For example, if you're an Ubuntu user, pick something based on Ubuntu; if you're a Gentoo user, then Pentoo is what you're after; and so forth. Like any Linux distribution list, many people will have many opinions on which is the best. I've personally used several of them and found that each puts emphasis on a different area. It is up to users which one they would like to use (I guess you could try them in VMware or VirtualBox to get a feel).
I know for a fact that there are more penetration-testing Linux distributions out there and that I missed some. My research shows these are the most used and maintained distros, but if you know of other penetration-testing Linux distributions and would like them added to this list, let us know in the comments.