Friday, October 24, 2014

5 Deadly Linux Commands You Should Never Run

http://www.theepochtimes.com/n3/1031947-5-deadly-linux-commands-you-should-never-run

As a Linux user, you probably have searched online for articles and tutorials that show you how to use the terminal to run some commands. While most of these commands are harmless and could help you become more productive, there are some commands that are deadly and could wipe out your whole machine.

In this article, let’s check out some of the deadly Linux commands that you should never run.
Note: These commands are really harmful, so please don’t try to reproduce them on your Linux machines. You have been warned.

1. Deletes Everything Recursively

rm -rf /

This is one of the most deadly Linux commands around. The functionality of this command is really simple. It forcefully removes or deletes (rm) all the files and folders recursively (-rf) in the root directory (/) of your Linux machine. Once you delete all the files in the root directory, there is no way that you can boot into your Linux system again.
Also be aware that this command comes in many other forms, such as rm -rf * or rm -rf run against some other path. So always be careful whenever you are executing a command that includes rm.

2. Fork Bomb

:(){ :|: & };:

This weird-looking command doesn't even look like a command, but it functions like a virus that creates copies of itself endlessly, hence the name fork bomb. This shell function quickly hijacks all your system resources, such as CPU and memory, and will cause a system crash, which in turn may result in data loss. So never try this command, or any other weird-looking command for that matter.
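To see why it behaves this way, here is the same one-liner rewritten with a readable function name instead of the colon (a purely illustrative sketch, do not run this either): the function calls itself, pipes the output into another copy of itself and puts the whole thing in the background, so the number of processes doubles until the machine runs out of resources.

bomb() {
    bomb | bomb &
}
bomb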

3. Move Everything to Nothingness

mv ~ /dev/null

The functionality of this command is really basic and simple. All it does is move (mv) the contents of your home folder (~) to /dev/null. This looks really innocent, but the catch is that /dev/null is not a folder at all: it is a special device file that discards everything written to it, so you are essentially moving all your files and folders into nothingness and destroying them irrecoverably.
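You can see how /dev/null behaves with a completely harmless test: anything redirected to it simply vanishes, and reading it back returns nothing.

echo "this text is discarded" > /dev/null
cat /dev/null    # prints nothing; the device is always empty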

4. Format Hard Drive

mkfs.ext3 /dev/sda

This command is a real disaster, as it creates a new ext3 file system across your entire first hard drive, overwriting whatever was already stored there. Once you execute the command, all your data is lost irrecoverably. So never try this command, or any other suspicious command that involves your hard drive (sda).

5. Output Command Directly to Hard Drive

any-command > /dev/sda

This one is much simpler: whatever command you execute (in the place of "any-command") will have its output written directly to your first hard drive (/dev/sda), overwriting the file system structures stored there and damaging your entire file system. Once you execute this command, you will be unable to boot into your Linux machine and your data may be lost irrecoverably.
Again, don’t ever try any suspicious command that includes your hard drive (sda).

Conclusion

Using the command line is pretty interesting, but don't blindly execute every command you find on the internet. A single command is enough to wipe out your whole system. In addition, while some of the commands above require elevated (administrator) permissions, they may be disguised inside other commands, which may trick you into executing them.
So always be careful while executing commands, and only trust reputable sources for your command-line needs. The best approach is to educate yourself on how each command works and to think it through before executing it.

Thursday, October 23, 2014

Integrating Trac, Jenkins and Cobbler—Customizing Linux Operating Systems for Organizational Needs

http://www.linuxjournal.com/content/integrating-trac-jenkins-and-cobbler—customizing-linux-operating-systems-organizational-need

Organizations supporting Linux operating systems commonly have a need to build customized software to add or replace packages on production systems. This need comes from timing and policy differences between customers and the upstream distribution maintainers. In practice, bugs and security concerns reported by customers will be prioritized to appropriate levels for the distribution maintainers who are trying to support all their customers. This means that customers often need to support patches to fill the gap, especially for unique needs, until distribution maintainers resolve the bugs.
Customers who desire to fill the support gap internally should choose tools that the distribution maintainers use to build packages whenever possible. However, third-party software packages often present challenges to integrate them into the distribution properly. Often these packages do not follow packaging guidelines and, as a result, do not support all distribution configurations or procedures for administration. These packages often require more generic processes to resolve the improper packaging.
From this point on, the tools and methods discussed in this article are specific to Red Hat Enterprise Linux (RHEL). These tools and methods also work with derivative distributions like Scientific Linux or Community Enterprise OS (CentOS). Some of the tools do include support for distributions based on Debian. However, specifics on implementation of the process focus on integration with RHEL-based systems.
The build phase of the process (described in "A Process for Managing and Customizing HPC Operating Systems" in the April 2014 issue of LJ) requires three pieces of software that can be filled by Trac, Cobbler and Jenkins. However, these pieces of software do not fill all the gaps present from downloading source code to creation of the overlay repository. Further tools and processes are gained by analysis of the upstream distribution's package management process and guidelines.
The Fedora Packaging Guidelines and their counterpart, the EPEL Packaging Guidelines, are good references for how to package software appropriately for RHEL-based systems. These guidelines call out specifics that often are overlooked by first-time packagers. Also, tools used in the process, such as Mock, work well with the software mentioned previously.
Fedora uses other tools to manage building packages and repositories. These tools are very specific to Fedora packaging needs and are not general enough for use in our organization. This is primarily due to technical reasons and features that I go into in the Jenkins section of the article.
The rest of this article focuses on implementing Trac, Cobbler and Jenkins, and on the gaps between the three systems. Some of the gaps are filled using native plugins associated with the three systems; others are left to be implemented using scripts and processes that require human interaction. At some points, human interaction is required to facilitate communication between groups; at others, the process is simply missing a well-implemented piece of software. I discuss setup, configuration and integration of Trac, Cobbler and Jenkins, along with some requests for community support.

Trac

Trac consists of an issue-tracking system and wiki environment to support software development projects. However, Trac also works well for supporting the maintenance of administrative processes and managing change on production systems. I'm going to discuss how a software development process maps onto the process by which one administers a production system.
I realize that talking about issue tracking and wiki software is a religious topic for some. Everyone has their favorite software, and these two kinds of systems have more than enough open-source options out there from which people can choose. I want to focus on the features that we have found useful at EMSL to support our HPC system and how we use them.
The ticket-tracking system works well for managing small changes on production systems. These small changes may include individual critical updates, configuration changes and requests from users. The purpose of these tickets is to record relevant technical information about the changes for administrators as well as management. This helps all stakeholders understand the cost and priority of the change. These small changes can be aggregated into milestones, which correspond to outage dates. This provides a starting framework to track what change happens and when on production systems.
Trac's wiki has features that are required for the process. The first is the ability to maintain a history of changes to individual pages. This is ideal for storing documents and procedures. Another feature is the ability to reference milestones from within pages. This feature is extremely useful, since by entering a single line in the wiki, it displays all tickets associated with the milestone in one simple line. These two features help maintain the procedures and outage pages in the wiki.
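For example, a single TicketQuery macro line in a wiki page (the milestone name here is hypothetical) lists every ticket attached to that milestone:

[[TicketQuery(milestone=2014-10-outage)]]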
The administrative procedures are documented in the wiki, and they include but are not limited to software configuration, startup, shutdown and re-install. The time required to perform these administrative procedures also should be noted in the page. We also make sure to use the plain-text options for specifying commands that need to be run, as other fonts may confuse readers. In many cases, we have specified the specific command to run in these procedures. For complex systems, creating multiple pages for a particular procedure is prudent. However, cross links between pages should be added to note when one part of the procedure from each page should be followed.
Trac's plugin infrastructure does not have plugins to Jenkins or Cobbler. However, what would be the point of a plugin going from Trac to continuous integration or provisioning? Most software development models keep ticket systems limited to human interaction between the issuer of the ticket and the people resolving it. Some exceptions are when tickets are considered resolved but are waiting for integration testing. Automated tests could be triggered by the ticketing system when the ticket's state is changed. However, mapping these sorts of features onto administrative procedures for managing production systems does not apply.

Cobbler

Cobbler works well for synchronizing RPM-based repositories and using those repositories to deploy systems. The RPMs are synchronized daily from Jenkins and distribution maintainers. The other important feature is to exclude certain packages from being synchronized locally. These features provide a platform to deploy systems that have specific customized packages for use in the enterprise.
The initial setup for Cobbler is to copy the primary repositories for the distribution of your choice to "repos" in Cobbler. The included repositories from Scientific Linux are the base operating system, fastbugs and security. Other distributions have similar repository configurations (see the Repositories and Locations sidebar). The other repository to include is EPEL, as it contains Mock and other tools used to build RPMs. There are other repositories that individual organizations should look into, although these four repositories are all that is needed.

Repositories and Locations

  • Extra Packages for Enterprise Linux: http://dl.fedoraproject.org/pub/epel/6/x86_64
  • Scientific Linux 6 Base: http://ftp1.scientificlinux.org/linux/scientific/6/x86_64/os
  • Scientific Linux 6 Security: http://ftp1.scientificlinux.org/linux/scientific/6/x86_64/updates/security
  • Scientific Linux 6 Fastbugs: http://ftp1.scientificlinux.org/linux/scientific/6/x86_64/updates/fastbugs
  • CentOS 6 Base: http://mirror.centos.org/centos/6/os/x86_64
  • CentOS 6 FastTrack: http://mirror.centos.org/centos/6/fasttrack/x86_64
  • CentOS 6 Updates: http://mirror.centos.org/centos/6/updates/x86_64
  • RHEL 6 Server Base: rhel-x86_64-server-6 channel
  • RHEL 6 Server FasTrack: rhel-x86_64-server-fastrack-6 channel
  • RHEL 6 Server Optional: rhel-x86_64-server-optional-6 channel
  • RHEL 6 Server Optional FasTrack: rhel-x86_64-server-optional-fastrack-6 channel
  • RHEL 6 Server Supplementary: rhel-x86_64-server-supplementary-6 channel

The daily repositories either are downloaded from the Web on a daily basis or synchronized from the local filesystem. The daily repositories get the "keep updated" flag set, while the test and production repositories do not. For daily repositories that synchronize from a local filesystem, the "breed" should be set to rsync, while daily repositories that synchronize from the Web should set their "breed" to yum. This configuration was chosen through experience, because some RPMs do not upgrade well with new kernels, nor do they follow the standard update processes normal to Red Hat or Fedora.
An example of a set of repositories would be as follows (a sketch of the corresponding cobbler repo commands appears after the list):
  • phi-6-x86_64-daily — synchronizes automatically from the local filesystem using rsync once daily.
  • epel-6-x86_64-daily — synchronizes automatically from the Web using reposync once daily.
  • phi-6-x86_64-test — synchronizes manually from phi-6-x86_64-daily using rsync.
  • epel-6-x86_64-test — synchronizes manually from epel-6-x86_64-daily using rsync.
  • phi-6-x86_64-prod — synchronizes manually from phi-6-x86_64-test using rsync.
  • epel-6-x86_64-prod — synchronizes manually from epel-6-x86_64-test using rsync.
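As a rough sketch of how such repositories might be created from the command line (the local mirror paths are hypothetical, the EPEL URL comes from the sidebar above, and exact flag syntax can vary between Cobbler versions):

cobbler repo add --name=epel-6-x86_64-daily --breed=yum --arch=x86_64 \
    --keep-updated=1 --mirror=http://dl.fedoraproject.org/pub/epel/6/x86_64
cobbler repo add --name=phi-6-x86_64-daily --breed=rsync --arch=x86_64 \
    --keep-updated=1 --mirror=/srv/mirrors/phi-6-x86_64/
cobbler repo add --name=epel-6-x86_64-test --breed=rsync --arch=x86_64 \
    --keep-updated=0 --mirror=/var/www/cobbler/repo_mirror/epel-6-x86_64-daily/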
To exclude critical packages from the upstream distribution, the "yum options" flags are set on the daily repository to remove them. For example, to exclude the kernel package from being synchronized, add exclude=kernel*. It's important for administrators to consult both the Cobbler and yum.conf man pages to get the syntax right.
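A hedged example of setting that exclusion on one of the daily repositories named above and re-running the synchronization (the reposync flags may differ slightly between Cobbler versions):

cobbler repo edit --name=epel-6-x86_64-daily --yumopts="exclude=kernel*"
cobbler reposync --only=epel-6-x86_64-daily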
Setting up Cobbler in this way allows administrators to deploy systems using customized critical packages. Cobbler also is used in future phases where the repositories are used to deploy the test and production clusters. The repositories and their relationships are all Cobbler needs to support package building, the test cluster and the production cluster.

Jenkins

Jenkins is a very powerful continuous integration tool used in software development. However, from a system administration view, Jenkins is a mutant cron job on steroids. Jenkins handles periodic source code checkout from source code management (SCM) repositories and downloading of released source code, via HTTP or FTP. It then runs a series of generic jobs that build, test and deploy the resulting software. These generic interfaces work well for building and distributing RPMs to be included by Cobbler.
The use of Jenkins in a software development role is not all that different from building RPMs (see Table 1 for a comparison of the two processes). The first step in the two processes differs in that (hopefully) the software development code required for the build step is in one place. Package developers need to have, at a minimum, two locations to pull code from to continue with the build. The first location is for patches and spec files, normally kept in an SCM. The second is for released source code packages. Source code is released in a single file and usually in some container format (such as tar, rar or zip). These files do not normally belong in an SCM and are more suited to an S3 (http://docs.aws.amazon.com/AmazonS3/latest/API/Welcome.html), swift (http://docs.openstack.org/api/openstack-object-storage/1.0/content) or blob store-like interface.

Table 1. Packaging vs. Development

Software Development                        | RPM Packaging
Download source code from SCM.              | Download released source, spec file and patches.
Run the build process.                      | Build the RPMs using Mock.
Run the testing suite.                      | Validate the RPMs using rpmlint.
Publish test results.                       | Save validation output for inspection.
Save source code package to repository.     | Save built RPMs for later download.
Send notification to pertinent developers.  | Send notification to pertinent packagers.
Jenkins is built primarily for downloading code from one and only one SCM. However, you can work around this issue by adding another build step. This means that the SCM plugin is used to download the spec file and patches while the first step in the build process downloads the source code package. After these two steps are done, the source code, patches or spec file can be patched with site-specific customization.
The next step is to build RPMs using Mock. This involves several tasks that can be broken up into various build steps (see the Mock Build in Jenkins sidebar). All these steps are done using the Jenkins execute shell build steps. Some of the Jenkins jobs we use are multi-configuration jobs that contain one axis defining the Mock chroot configuration. That chroot configuration should be generated from the daily repositories defined in Cobbler. Following these tasks can get you started on using Mock in Jenkins (Listing 1).

Listing 1. basic-mock-jenkins.sh


#!/bin/bash -xe

# keep in mind DIST is defined in multi-configuration axis
MOCK="/usr/bin/mock -r $DIST"
PKG=${JOB_NAME##*/}
# keep in mind VER could also be a multi-configuration axis
VER=${VER:-1.0}
# if you are ripping apart an RPM might have this one too
REL=${REL:-4.el6}

OUT=$PWD/output

# download the released source tarball next to the spec file and patches
wget -O $PKG-$VER.tar.gz \
    http://www.example.com/sources/$PKG-$VER.tar.gz
rm -f $OUT/*.src.rpm
# build the source RPM inside the Mock chroot
if ! $MOCK --resultdir=$OUT --buildsrpm --spec=$PKG.spec \
    --sources=$PWD
then
    more $OUT/*.log | cat
    exit 1
fi

# rebuild the binary RPMs from the freshly built source RPM
if ! $MOCK --resultdir=$OUT --rebuild $OUT/*.src.rpm
then
    more $OUT/*.log | cat
    exit 1
fi

rpmlint $OUT/*.rpm > rpmlint.log

Mock Build in Jenkins

  1. Prepare the source and specs.
  2. Run Mock source rpm build.
  3. Run Mock rpm build.
  4. Run rpm validation.

Once the RPMs are built, it's important to run rpmlint on the resulting RPMs. This output gives useful advice for how to package RPMs properly for the targeted platform. This output should be handled like any other static code analysis tool. The number of warnings and errors should be tracked, counted and graphed over a series of builds. This gives a good indication whether bugs are being resolved or introduced over time.
The generated RPMs and rpmlint output need to be archived for future use. The archive artifacts plugin works well for capturing these files. There also is an artifact deployer plugin that can copy the artifacts to directories that Cobbler can be configured to synchronize from for its part of the process.
There is some room for improvement in this process, and I outline that in the conclusion. However, this is the basic framework to start using Jenkins to build RPMs using Mock and rpmlint. This part of the process needs constant care and attention as new updates are pushed by the distribution and package developers. Jenkins does have plugins to Trac and other issue-tracking systems. However, they are not included in this process, as we find e-mail to be a sufficient means of communication. The outlined process for building RPMs using Jenkins helps us track the hacks we use to manipulate important packages for our systems.

Table 2. Software

Role                    | Software Choice
Continuous Integration  | Jenkins
Repository Management   | Cobbler
Provisioning            | Cobbler
Ticket Tracking         | Trac
Wiki                    | Trac
Package Building        | Mock
Package Guidelines      | Fedora Packaging Guidelines

Conclusion

I have discussed a method for setting up tools to develop RPMs against a custom distribution managed by Cobbler. Along with Trac, package developers can maintain updated RPMs of critical applications while managing communication. However, this process is not without gaps. First, I'll go over the gaps present in Jenkins, discussing missing functionality in both the core and the available plugins. Then I'll discuss the gaps in Cobbler regarding repository management. These two systems are lacking in integration, although that can be worked around.
MultiSCM is functionality in Jenkins that would simplify the package-building process. There is a MultiSCM plugin; however, it is advertised as proof-of-concept code. The hope is that the radio-button selection for SCM would turn into a set of check boxes. There are related bugs, but they have not seen traction in years. Package development is another good example of the need to download and poll for updates on code from multiple places.
Here are links to information on the Jenkins Multiple SCMs bugs:
  • https://issues.jenkins-ci.org/browse/JENKINS-7192
  • https://issues.jenkins-ci.org/browse/JENKINS-9720
Static code analysis tools are available as plugins for Jenkins, although these plugins do not include rpmlint. These plugins create graphs to track the number of warnings and errors in code over time. To perform the same task for packaging would be very helpful. However, you can work around this gap by using the generic plot plugin and another build step for each job.
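For instance, an extra execute-shell build step could distill the rpmlint log from Listing 1 into a small CSV file that the plot plugin is then pointed at; this is only a sketch, and the file names and the plot plugin configuration are assumptions rather than part of the original process:

# count rpmlint warnings and errors and emit a CSV the plot plugin can graph
WARNINGS=$(grep -c ': W: ' rpmlint.log || true)
ERRORS=$(grep -c ': E: ' rpmlint.log || true)
echo "warnings,errors" > rpmlint-counts.csv
echo "$WARNINGS,$ERRORS" >> rpmlint-counts.csv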
Mock has a very well defined interface and workflow. A generic plugin to use Mock in Jenkins would be very useful. The plugin should support configuring the Mock chroot. Two kinds of build jobs also could be created: one using spec and source files, the other using source RPMs. A test also would need to be created to verify that Mock can be run without prompting for a user password. This plugin would be very helpful for automating this process, as we currently have to copy scripts between jobs.
There are some additions to Cobbler that would be useful for this process as well. There are no per-repo triggers. The ability to tell Trac that packages went from repo test to repo prod would be useful. Furthermore, the ability to tell Jenkins to build a package because a dependent package updated also would be useful.
The other useful addition to Cobbler would be the ability to remove older RPMs in the destination tree while synchronizing from the remote mirror. Cobbler repositories, if the "breed" is yum, build up in an append-only fashion. Processes for managing the space may be run periodically by removing the RPMs and then synchronizing the repository again. However, this leaves the repository in a broken state until the process is complete. This feature could be useful in any Cobbler deployment, as it would make sure repositories do not continue to take up space when RPMs are not needed.
Trac does not need any additional plugins to integrate better with Cobbler or Jenkins. We have found some usability issues with manipulating large tables in the wiki format. Some plugin to make editing large tables easier in the wiki format would be useful for us. Also, editing long pages becomes an issue if you cannot put comments throughout the page. We validate our procedures by having members of the group who are unfamiliar with the system read through the procedure. The reader should be able to comment on but not edit parts of the page. We have worked around or found plugins on the Trac Hacks page to resolve these issues.
The final request is for some level of certification from distribution maintainers for third-party packages. Many of the third-party packages we have applied this process to do not support all distribution configurations. A certification from distribution maintainers validating that third-party vendors have packaged their software appropriately for the distribution would help customers determine the cost of support.
This is by no means a complete solution for organizations to build customized critical applications. There are still gaps in the system that we have to work around using scripts or manual intervention. We constantly are working on the process and tools to make them better, so any suggestions to improve it are welcome. However, these tools do fill the need to support customization of critical applications for HPC at EMSL.

Acknowledgement

The research was performed using EMSL, a national scientific user facility sponsored by the Department of Energy's Office of Biological and Environmental Research and located at Pacific Northwest National Laboratory.

Amazing! 25 Linux Performance Monitoring Tools

http://linoxide.com/monitoring-2/linux-performance-monitoring-tools

Over time, our website has shown you how to configure various performance tools for Linux and Unix-like operating systems. In this article, we have made a list of the most used and most useful tools for monitoring the performance of your box. We provide a link for each of them and split them into two categories: command-line tools and those that offer a graphical interface.

Command line performance monitoring tools

1. dstat - Versatile resource statistics tool

A versatile combination of vmstat, iostat and ifstat. It adds new features and functionality, allowing you to view all the different resources instantly and to compare and combine their usage. It uses colors and blocks to help you read the information clearly and easily. It also allows you to export the data in CSV format to review it in a spreadsheet application or import it into a database. You can use this application to monitor CPU, memory and eth0 activity over time.
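For example (the output file name is just an illustration), the following reports CPU, memory and network usage every 5 seconds for 10 updates and also saves the samples as CSV:

# cpu, memory and network stats, 5-second interval, 10 samples, saved to CSV
dstat -c -m -n --output dstat-report.csv 5 10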

2. atop - Improved top with ASCII

A command line tool using ASCII to display a performance monitor that is capable of reporting the activity of all processes. It shows daily logging of system and process activity for long-term analysis and it highlights overloaded system resources by using colors. It includes metrics related to CPU, memory, swap, disks and network layers. All the functions of atop can be accessed by simply running:
# atop
And you will be able to use the interactive interface to display and order data.

3. Nmon - performance monitor for Unix-like systems

Nmon stands for Nigel's Monitor, and it's a system monitor tool originally developed for AIX. It features an Online Mode that uses curses for efficient screen handling and updates the terminal frequently for real-time monitoring, and a Capture Mode where the data is saved to a file in CSV format for later processing and graphing.
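For example, capture mode can be started along these lines (the interval and snapshot count are just an illustration):

# capture mode: one snapshot every 30 seconds, 120 snapshots, written to a .nmon file
nmon -f -s 30 -c 120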
 More info in our nmon performance track article.

4. slabtop - information on kernel slab cache

This application shows how the kernel's slab caching memory allocator manages the various types of objects cached in the Linux kernel. The command is top-like but focused on showing real-time kernel slab cache information. It displays a listing of the top caches sorted by one of the listed sort criteria. It also displays a statistics header filled with slab layer information. Here are a few examples:
# slabtop --sort=a
# slabtop -s b
# slabtop -s c
# slabtop -s l
# slabtop -s v
# slabtop -s n
# slabtop -s o
More info is available in our kernel slab cache article.

5. sar - performance monitoring and bottlenecks check

The sar command writes to standard output the contents of selected cumulative activity counters in the operating system. The accounting system, based on the values in the count and interval parameters, writes information the specified number of times spaced at the specified intervals in seconds. If the interval parameter is set to zero, the sar command displays the average statistics for the time since the system was started. Useful commands:
# sar -u 2 3
# sar -u -f /var/log/sa/sa05
# sar -P ALL 1 1
# sar -r 1 3
# sar -W 1 3

6. Saidar - simple stats monitor

Saidar is a simple and lightweight tool for system information. It doesn't have major performance reports but it does show the most useful system metrics in a short and nice way. You can easily see the up-time, average load, CPU, memory, processes, disk and network interfaces stats.
Usage: saidar [-d delay] [-c] [-v] [-h]
-d Sets the update time in seconds
-c Enables coloured output
-v Prints version number
-h Displays this help information.

7. top - The classical Linux task manager

top is one of the best-known Linux utilities; it's a task manager found on most Unix-like operating systems. It shows the current list of running processes, which the user can order using different criteria. It mainly shows how much CPU and memory is used by the system processes. top is a quick place to go to check what process or processes are hanging your system. You can also find a list of examples of top usage here. You can access it by running the top command and entering the interactive mode:
Quick cheat sheet for interactive mode:
  • Global commands: ?, =, A, B, d, G, h, I, k, q, r, s, W, Z
  • Summary area commands: l, m, t, 1
  • Task area commands: Appearance: b, x, y, z; Content: c, f, H, o, S, u; Size: #, i, n; Sorting: <, >, F, O, R
  • Color mapping: a, B, b, H, M, q, S, T, w, z, 0-7
  • Commands for windows: -, _, =, +, A, a, G, g, w

8. Sysdig - Advanced view of system processes

Sysdig is a tool that gives admins and developers unprecedented visibility into the behavior of their systems. The team that develops it wants to improve the way system-level monitoring and troubleshooting is done by offering unified, coherent and granular visibility into the storage, processing, network and memory subsystems. It can also create trace files of system activity so you can easily analyze them at any time.
Quick examples:
# sysdig proc.name=vim
# sysdig -p"%proc.name %fd.name" "evt.type=accept and proc.name!=httpd"
# sysdig evt.type=chdir and user.name=root
# sysdig -l
# sysdig -L
# sysdig -c topprocs_net
# sysdig -c fdcount_by fd.sport "evt.type=accept"
# sysdig -p"%proc.name %fd.name" "evt.type=accept and proc.name!=httpd"
# sysdig -c topprocs_file
# sysdig -c fdcount_by proc.name "fd.type=file"
# sysdig -p "%12user.name %6proc.pid %12proc.name %3fd.num %fd.typechar %fd.name" evt.type=open
# sysdig -c topprocs_cpu
# sysdig -c topprocs_cpu evt.cpu=0
# sysdig -p"%evt.arg.path" "evt.type=chdir and user.name=root"
# sysdig evt.type=open and fd.name contains /etc
More info is available in our article on how to use sysdig for improved system-level monitoring and troubleshooting

9. netstat - Shows open ports and connections

netstat is the tool Linux administrators use to show various kinds of network information, such as which ports are open, which network connections are established and which process runs each connection. It also shows various information about the Unix sockets that are open between programs. It is part of most Linux distributions. A lot of the commands are explained in the article on netstat and its various outputs. The most used commands are:
$ netstat | head -20
$ netstat -r
$ netstat -rC
$ netstat -i
$ netstat -ie
$ netstat -s
$ netstat -g
$ netstat -tapn

10. tcpdump - insight on network packets

tcpdump can be used to see the content of the packets on a network connection. It shows various information about the packets that pass by. To make the output useful, it allows you to use various filters to get only the information you want. A few examples of how you can use it:
# tcpdump -i eth0 not port 22
# tcpdump -c 10 -i eth0
# tcpdump -ni eth0 -c 10 not port 22
# tcpdump -w aloft.cap -s 0
# tcpdump -r aloft.cap
# tcpdump -i eth0 dst port 80
You can find them described in detail in our article on tcpdump and capturing packets

11. vmstat - virtual memory statistics

vmstat stands for virtual memory statistics and it's a memory monitoring tool that collects and displays summary information about memory, processes, interrupts, paging and block I/O. It is an open source program available on most Linux distributions, Solaris and FreeBSD. It is used to diagnose most memory performance problems and much more.
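A few typical invocations:

# one report every 5 seconds, 10 reports in total
vmstat 5 10
# summary of memory statistics and event counters since boot
vmstat -s
# per-disk statistics
vmstat -d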
More info in our article on vmstat commands.

12. free - memory statistics

Another command-line tool that prints to standard output a few stats about memory and swap usage. Because it's a simple tool, it can be used either to find quick information about memory usage or in different scripts and applications. You can see that this small application has a lot of uses, and almost all system admins use this tool daily :-)
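A few quick examples:

# show values in megabytes
free -m
# human-readable units (available in newer versions of free)
free -h
# refresh the output every 5 seconds
free -s 5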

13. Htop - friendlier top

Htop is basically an improved version of top, showing more stats in a more colorful way and allowing you to sort them in different ways, as you can see in our article. It provides a more user-friendly interface.
You can find more info in our comparison of htop and top

14. ss - the modern net-tools replacement

ss is part of the iproute2 package. iproute2 is intended to replace an entire suite of standard Unix networking tools that were previously used to configure network interfaces, routing tables and the ARP table. The ss utility is used to dump socket statistics; it shows information similar to netstat and is able to display more TCP and state information. A few examples:
# ss -tnap
# ss -tnap6
# ss -s
# ss -tn -o state established -p

15. lsof - list open files

lsof is a command meaning "list open files", which is used on many Unix-like systems to report a list of all open files and the processes that opened them. It is used by system administrators on most Linux distributions and other Unix-like operating systems to check which files are held open by various processes.
# lsof +p process_id
# lsof | less
# lsof -u username
# lsof /etc/passwd
# lsof -i TCP:ftp
# lsof -i TCP:80
You can find more examples in the lsof article

16. iftop - top for your network connections

iftop is yet another top-like application that is based on networking information. It shows the current network connections sorted by bandwidth usage or by the amount of data uploaded or downloaded. It also provides estimates of the time it will take to download them.
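A couple of common invocations (the interface name is just an example):

# listen on a specific interface
iftop -i eth0
# skip DNS resolution and show port numbers
iftop -n -P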
For more info see article on network traffic with iftop

17. iperf - network performance tool

iperf is a network testing tool that can create TCP and UDP data connections and measure the performance of a network that is carrying them. It supports tuning of various parameters related to timing, protocols, and buffers. For each test it reports the bandwidth, loss, and other parameters.
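A minimal test needs iperf running on both ends (the server address below is a placeholder):

# on the machine acting as the server
iperf -s
# on the client: a 30-second TCP test against the server
iperf -c 192.168.1.10 -t 30
# the same over UDP at 100 Mbit/s (start the server with -s -u for this one)
iperf -c 192.168.1.10 -u -b 100M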
If you wish to use the tool check out our article on how to install and use iperf

18. Smem - advanced memory reporting

Smem is one of the most advanced memory-reporting tools for the Linux command line. It offers information about the actual memory that is used and shared in the system, attempting to provide a more realistic picture of the memory actually being used.
$ smem -m
$ smem -m -p | grep firefox
$ smem -u -p
$ smem -w -p
Check out our article on Smem for more examples

GUI or Web based performance tools

19. Icinga - community fork of Nagios

Icinga is free and open source system and network monitoring application. It’s a fork of Nagios retaining most of the existing features of its predecessor and building on them to add many long awaited patches and features requested by the user community.
More info about installing and configuring can be found in our Icinga article.

20. Nagios - the most popular monitoring tool

The most used and popular monitoring solution on Linux. It has a daemon that collects information about various processes and can also collect information from remote hosts. All the information is then provided via a nice and powerful web interface.
You can find information on how to install Nagios in our article

21. Linux process explorer - procexp for Linux

Linux process explorer is a graphical process explorer for Linux. It shows various process information, such as the process tree, TCP/IP connections and performance figures for each process. It's a replica of procexp, found on Windows and developed by Sysinternals, and aims to be more user-friendly than top and ps.
Check our linux process explorer article for more info.

22. Collectl - performance monitoring tool

This is a performance monitoring tool that you can use either in an interactive mode or have it write reports to disk and access them with a web server. It reports statistics on CPU, disk, memory, network, NFS, processes, slabs and more in an easy-to-read and manageable format.
More info in our Collectl article

23. MRTG - the classic graph tool

This is a network traffic monitor that provides graphs using rrdtool. It is one of the oldest tools that provides graphs and is among the most used on Unix-like operating systems. Check our article on how to use MRTG for information on the installation and configuration process.


24. Monit - simple and easy to use monitor tool

Monit is a small open source Linux utility designed to monitor processes, system load, filesystems, directories and files. You can have it run automatic maintenance and repair, and it can execute actions in error situations or send email reports to alert the system administrator. If you wish to use this tool, you can check out our how to use Monit article.

25. Munin - monitoring and alerting services for servers

Munin is a networked resource monitoring tool that can help analyze resource trends, spot weak points and see what caused performance issues. The team that develops it wants it to be very easy to use and user-friendly. The application is written in Perl and uses rrdtool to generate graphs, which are served via the web interface. The developers advertise the application's "plug and play" capabilities, with about 500 monitoring plugins currently available.