
Common Linux commands - part 2

How do you pronounce Ubuntu, Fedora, Debian, CentOS?
Roughly: Ubuntu "oo-BOON-too", Fedora "fuh-DOR-uh", Debian "DEB-ee-un", CentOS "SENT-oh-ess".


================================
Counting lines of code with find and wc commands
================================
wc -l `find . -name "*.js" | xargs`


Count the number of files in the current directory:
ls -l | grep "^-" | wc -l
Count the number of files in the current directory, including those in subdirectories:
ls -lR | grep "^-" | wc -l
Count the number of folders (directories) in a directory, including subdirectories:
ls -lR | grep "^d" | wc -l


ls -l outputs a long listing of the directory (note that the entries here are not only regular files; they may also be directories, links, device files, etc.)
grep "^-" filters the long listing to keep only regular files; to keep only directories, use ^d instead
wc -l counts the lines of output. Since only regular files were kept and each line describes one file, the line count equals the file count.
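An alternative sketch that avoids parsing ls output (counts may differ slightly because find also sees hidden files):
    find . -maxdepth 1 -type f | wc -l    # regular files in the current directory only
    find . -type f | wc -l                # regular files including subdirectories
    find . -mindepth 1 -type d | wc -l    # directories, excluding "." itself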


=======================================
KB and KiB (the free command outputs KB, not KiB)
=======================================
KB:
A kilobyte, often written KB or K, is a unit of information commonly used to label the capacity of storage media such as memory.
1 KB = 1,000 Byte
1 MB = 1,000 KB
1 GB = 1,000,000 (10^6) KB
1 TB = 1,000,000,000 (10^9) KB
KiB:
The kibibyte is a unit of information equal to 1,024 bytes, i.e. 2^10 bytes, commonly abbreviated KiB. The name comes from "kilo binary byte", meaning "thousand binary bytes".
1 KiB = 1,024 Byte
1 MiB = 1,024 KiB
1 GiB = 1,024 MiB = 1,048,576 (1024^2) KiB
1 TiB = 1,024 GiB = 1,073,741,824 (1024^3) KiB
Byte:
A byte (sometimes said to abbreviate "binary term") represents eight bits. It is the common unit of measurement for computer information, regardless of the type of data being stored.
Bit:
A bit, also called a binary digit, is a single binary value and is the smallest unit of information. "Bit" is short for binary digit.
1 byte (Byte) = 8 bits (bit)
1 word (Word) = 16 bits (bit)
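A quick way to see the two conventions side by side with GNU coreutils (the mount point / is just an example):
    df -h /     # human-readable sizes in powers of 1024 (here K, M, G mean KiB, MiB, GiB)
    df -H /     # the same, but in powers of 1000 (true KB, MB, GB)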


================================
configure,make,make install
================================
Here is also a brief overview of the source code trilogy for installing software under Linux/Unix, which we'll see a lot of later on.
./configure  
make  
make install 


./configure checks the build environment and configures the compilation options.
make compiles the source code into binaries.
make install installs the files built by make to the specified location (or to the default location).
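A minimal sketch of the full sequence, assuming a hypothetical source tarball foo-1.0.tar.gz and an install prefix of /usr/local:
    tar -zxvf foo-1.0.tar.gz          # unpack the source (hypothetical package name)
    cd foo-1.0
    ./configure --prefix=/usr/local   # check the environment and set compile options
    make                              # compile the source into binaries
    make install                      # usually needs root, e.g. sudo make install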


tail -f shows the latest lines of a log in real time, refreshing continuously as the file grows.
tar -zxvf extracts a gzip-compressed tar archive.
ll is not a basic Linux command; it is usually an alias for ls -l.
ll -a also shows hidden files and directories.
dmesg | less shows the Linux kernel messages from boot.
less /var/log/dmesg 




Commands to delete a folder (.svn) recursively in Linux:
find . -name ".svn" -type d | xargs rm -rf
or
find . -name ".svn" -type d -print -exec rm -rf {} \;


(1) "." Indicates a recursive search from the current directory.
(2) " -name "svn" " looks up by name.
(3) " -type d "The type of lookup is directory
(4) "-print" outputs the directory name of the file you are looking for.
(5) The main thing is -exec, the -exec option is followed by a command to be executed, indicating that the file or directory will be found out to execute the command.
The exec option is followed by the command or script to be executed, then a pair of {}, a space and a /, and finally a semicolon.


find . -name "*.o"  | xargs rm -f
You can do this through a pipeline command by first finding the files in your home directory that you want to delete, then constructing a list of arguments with "xargs" and running the command.
find named/ -name *.bak | xargs rm -f


find . -name ".svn" -type d | xargs rm -rf


==============================================
The difference between piping and command substitution is:
==============================================
Pipeline: the output of the command on the left of the pipe symbol "|" becomes the input of the command on the right.
Command substitution: the output of the command enclosed in backquotes "``" is used as an argument at the corresponding position of another command.


# pstree -p `ps -e | grep server | awk '{print $1}'` | wc -l
Both a pipe and command substitution are used here.
The command enclosed in `` is executed first, and its output is then used as arguments to the other command.
Above, the output (process IDs) of ps -e | grep server | awk '{print $1}' is passed as the argument to pstree -p.
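Modern shells also support the $(...) form of command substitution, which nests more cleanly than backquotes; a rough equivalent of the line above (the process name "server" is just an example):
    pstree -p $(ps -e | grep server | awk '{print $1}' | head -1) | wc -l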


--------------------
nohup command
--------------------
If you are running a process and you do not want it to end when you log out, use the nohup command. nohup keeps the process running after you log out or close the terminal.
nohup means "no hang up".
Purpose: a Linux command used to run commands immune to hangups.


nohup Command [ Arg ... ] [ & ]


The nohup command runs the command given by the Command parameter (with any Arg parameters), ignoring all hangup (SIGHUP) signals, so the program keeps running in the background after you log off.
To run the command in the background, add an ampersand (&) to the end of the command line.


If you do not redirect the output of the nohup command, the output is appended to nohup.out in the current directory. If that file is not writable, the output goes to $HOME/nohup.out instead.
If no such file can be created or opened for appending, the command given by the Command parameter cannot be run. If standard error is a terminal,
then all output that the command writes to standard error is redirected to the same file descriptor as standard output.


If you submit a job with nohup, then by default all of its output is appended to nohup.out unless an output file is specified explicitly, e.g.:
nohup command > file 2>&1 &
In the example above, 0 is stdin (standard input), 1 is stdout (standard output) and 2 is stderr (standard error);
2>&1 redirects standard error (2) to standard output (&1), which has itself been redirected into the file.
Use jobs to list the background tasks.
Use fg %n to bring job n back to the foreground (where it can then be stopped).
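A small usage sketch (backup.sh and backup.log are hypothetical names):
    nohup ./backup.sh > backup.log 2>&1 &   # keeps running after logout; everything is logged to backup.log
    jobs                                    # list the background jobs in this shell
    tail -f backup.log                      # watch the output in real time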


--------------------
Difference between nohup and &.
--------------------
1) Ordinarily, a program started with & is closed when the terminal is closed.
2) With & alone the program runs in the background, but it stops when you close the terminal.
3) nohup command & runs in the background and keeps running even after you close the terminal.
4) The obvious difference: with & alone the process dies when the terminal is closed, whereas with nohup it effectively becomes an independent background process.
5) After nohup, the process keeps running in the background no matter what you do in the terminal; however, if the terminal suddenly loses power or the network cable is pulled, the background process may still be interrupted.


On Unix/Linux, programs are often put in the background by appending & to the command line so that they run on their own. For example, to run mysql in the background:
    /usr/local/mysql/bin/mysqld_safe --user=mysql &
However, many programs are not daemons like mysqld; if the program is just an ordinary one, then even when started with &, it is closed as soon as the terminal is closed.
To keep such a program running in the background all the time, use nohup, for example:
    nohup /root/ &
The shell then prints something like:
    [~]$ appending output to nohup.out
The program's standard output is automatically redirected to nohup.out in the current directory, which acts as a log.
But sometimes this step still goes wrong: the process closes automatically as soon as the terminal is closed.
After consulting a Red Flag Linux engineer, who could not figure it out either, I noticed that when he ran the command on my terminal, his process surprisingly kept running after the terminal was closed.
Only during the second demonstration did I spot the detail that differed between his handling of the terminal and mine:
after the shell reported that nohup had succeeded, he pressed a key to get back to the shell prompt and then left the terminal by typing exit; ************** focus ****
whereas I always closed the terminal by clicking the window's close button right after nohup succeeded. Doing that tears down the session the command belongs to, so the process started by nohup is told to shut down along with it.
I had missed this detail, so it is recorded here.




Putting scripts into the background on AIX and Linux - with nohup vs. without nohup:
 
1) On Linux, when the following command is run, the script is put into the background:
    /location/ &
Now consider two cases.
Case 1: you leave the session by running the exit command; the script keeps running on the system.
Case 2: you disconnect the session without running exit; the script exits immediately.


If you use the following command instead, the script also runs in the background:
    nohup /location/ &
In both cases the script keeps running on the system, so whenever you want to put a script into the background, use nohup together with &.


2) On AIX, when the following command is run, the script is put into the background:
    /location/ &
Following the same two cases as under Linux:
Case 1: you run exit to leave the session; the first time you are warned "You have running jobs", and if you run exit again the script stops as well.
Case 2: you disconnect the session without running exit; the script exits immediately.


If you use the following command instead, the script runs in the background:
    nohup /location/ &
In both cases the script keeps running on the system.


------------------------------------------------
Find whether files in a directory contain a certain string
------------------------------------------------
find .|xargs grep -ri "IBM" 
To find whether files contain the string and print only the file names:
find .|xargs grep -ri "IBM" -l 
1. Regular expressions
(1) A regular expression describes a text pattern; it consists of ordinary characters (such as a-z) and special characters, called metacharacters (such as /, *, ?, etc.).
(2) Basic metacharacters and their meanings
^ : matches the beginning of a line. e.g. ^a matches lines starting with a: abc, a2e, a12, aaa, ...
$ : matches the end of a line. e.g. a$ matches lines ending in a: bca, 12a, aaa, ...
* : matches zero or more of the preceding single character. e.g. (a)* matches the empty string, a, aa, aaa, ...
[] : matches any one of the characters inside the brackets. The characters can be listed individually, or as a range using -, e.g. [1-5] is the same as [12345].
\ : removes the special meaning of a metacharacter, e.g. \*, \', \", \|, \+, \^, \. etc.
. : (dot) matches any single character.
pattern\{n\} : matches exactly n occurrences of the preceding pattern. n is the count; e.g. a\{2\} matches aa.
pattern\{n,\} : as above, but at least n occurrences. e.g. a\{2,\} matches aa, aaa, aaaa, ...
pattern\{n,m\} : as above, but between n and m occurrences. e.g. a\{2,4\} matches aa, aaa or aaaa.
(3) Examples:
^$ : matches blank lines
^.$ : matches lines containing exactly one character
\*\.pas : matches strings ending with the literal text *.pas
[0123456789] or [0-9] : matches any one digit
[a-z] : any lowercase letter
[A-Za-z] : any letter, upper or lower case
[Ss] : matches S or s
[0-9]\{3\}\.[0-9]\{3\}\.[0-9]\{3\}\.[0-9]\{3\} : matches an IP-address-like pattern - four groups of three digits from 0-9 separated by dots; \. matches a literal dot (dot is a special character here, so "\" removes its special meaning)
2. The find command
Introduction: (1) find locates files with certain characteristics; it can traverse the current directory or even the entire file system to look for particular files or directories. A traversal of a large file system is usually run in the background.
(2) General form of the find command
      find pathname -options [-print -exec -ok] 
pathname : the directory path that find searches; use "." for the current directory and / for the root directory.
-print : find writes the matched files to standard output.
-exec : find runs the given shell command on each matched file; the command takes the form
'command' {} \; (note the space between {} and \;)
-ok : same as -exec, but the given shell command is run in a safer mode - a prompt is shown before each command so the user can decide whether to run it.
The options are as follows:
-name : find files by name
-perm : find files by permission
-user : find files by owner
-group : find files by owning group
-mtime -n/+n : find files by modification time; -n means modified within the last n days, +n means modified more than n days ago. find also has -atime and -ctime options, which work the same way.
-size n[c] : find files of length n blocks, or n bytes if c is appended.
-nogroup : find files whose owning group is not valid, i.e. the group does not exist in /etc/group.
-newer file1 ! -newer file2 : find files modified more recently than file1 but not more recently than file2.
-depth : process the contents of each directory before the directory itself.
-type : find files of a given type, e.g.
b : block device file
d : directory
c : character device file
p : named pipe (FIFO)
l : symbolic link
f : regular file
(3) Examples of the find command
find -name "*.txt" -print            Find files ending in .txt and print them
find /cmd -name "*.sh" -print        Find all .sh files under /cmd and print them
find . -perm 755 -print              Find files in the current directory with permission 755 and print them
find `pwd` -user root -print         Find files under the current directory owned by root and print them
find ./ -group sunwill -print        Find files in the current directory whose group is sunwill
find /var -mtime -5 -print           Find files under /var modified within the last 5 days
find /var -mtime +5 -print           Find files under /var modified more than 5 days ago
find /var -newer "myfile1" ! -newer "myfile2" -print   Find files under /var newer than myfile1 but not newer than myfile2
find /var -type d -print             Find all directories under /var
find /var -type l -print             Find all symbolic links under /var
find . -size +1000000c -print        Find files in the current directory larger than 1,000,000 bytes
find / -name "" -depth -print        Find "" starting from the root, processing directory contents before the directory itself
find . -type f -exec ls -l {} \;     Find regular files in the current directory and run ls -l on each
(4) The xargs command
When the find command's -exec option is used to process matched files, find passes all of them to exec together; unfortunately some systems limit the length of the command line that can be passed, so after running for a while find can fail with an overflow error, typically "argument list too long". This is where xargs is useful, especially together with find: whereas -exec starts a separate process for every matched file, xargs starts only one (or a few) and passes many files to it at once.
find ./ -perm -7 -print | xargs chmod o-w      Find files whose "others" permission bits are rwx (7) and pass them to chmod to remove the write bit
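A hedged side-by-side sketch of the approaches (the *.log files under /tmp are just an example):
    find /tmp -name "*.log" -exec rm -f {} \;   # starts one rm process per matched file
    find /tmp -name "*.log" | xargs rm -f       # batches many files into a few rm invocations
    find /tmp -name "*.log" -exec rm -f {} +    # POSIX find can also batch the arguments itself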
3. The grep command
Introduction: (1) The general format of grep is grep [options] basic-regular-expression [file].
The string argument is best enclosed in double quotes, both to keep it from being misinterpreted by the shell and so that multi-word strings can be searched for.
-c : print only a count of the matching lines
-i : case-insensitive matching
-h : do not prefix output with file names when searching multiple files
-H : print the file name with each match
-l : when searching multiple files, print only the names of files that contain matches
-n : print matching lines together with their line numbers
-s : suppress error messages about nonexistent or unreadable files
-v : print the lines that do not contain the pattern
(2) Examples:
grep ^[^210] myfile            Match lines in myfile that do not start with 2, 1 or 0
grep "[5-8][6-9][0-3]" myfile  Match lines in myfile containing a three-character sequence whose first character is 5-8, second is 6-9 and third is 0-3
grep "4\{2,4\}" myfile         Match lines in myfile containing 44, 444 or 4444
grep "\?" myfile               Match lines in myfile containing a question mark
(3) grep character class names
[[:upper:]] means [A-Z]
[[:alnum:]] means [0-9a-zA-Z]
[[:lower:]] means [a-z]
[[:space:]] means space or tab
[[:digit:]] means [0-9]
[[:alpha:]] means [a-zA-Z]
e.g. grep "5[[:digit:]][[:digit:]]" myfile matches lines in myfile containing a 5 followed by two more digits.
4. The awk command
Introduction: awk can scan files or strings and extract information from them according to given rules; it is an interpreted programming language in its own right.
(1) awk command-line form: awk [-F field-separator] 'command' input-files
awk script: all the awk commands are placed in a file, the awk interpreter is named on the script's first line, and the script is made executable so it can be invoked by name. An awk script is made up of patterns and actions.
The pattern part decides when an action statement is triggered (BEGIN, END).
The action processes the data and is written inside {} (e.g. print).
(2) Separators, fields and records
As awk scans its input it labels the fields of each record $1, $2, ... $n; these are called field identifiers. $0 stands for all fields (the whole record).
(3) Examples:
awk '{print $0}' | tee        Print every line; $0 means all fields
awk -F : '{print $1}' | tee   Same as above, except the separator is ":"
      awk 'BEGIN {print "IPDate\n"}{print $1 "\t" $4} END{print "end-of-report"}'  
Print "IPDate" at the beginning and "end-of-report" at the end, and print the main information in the middle, for example, if you match three pieces of information in total, the output will be as follows:
IPDate 
1 first 
2 second 
3 third 
end-of-report 
(4) The match operators: ~ matches, !~ does not match
cat ... | awk '$0~/210.34.0.13/'        Print lines containing 210.34.0.13
awk '$0!~/210.34.0.13/'                 Print lines not containing 210.34.0.13
awk '{if($1=="210.34.0.13") print $0}'  Print lines whose first field is 210.34.0.13
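A short awk sketch tying patterns, actions and fields together (access.log is a hypothetical space-separated log whose first field is an IP address):
    awk '{count[$1]++} END {for (ip in count) print ip, count[ip]}' access.log   # requests per client IP
    awk -F : '$3 >= 500 {print $1}' /etc/passwd                                  # login names with UID >= 500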
5. The sed command
Introduction: sed does not modify the original file; it operates on a copy of the input, and unless the result is redirected to a file, all changes are written to the screen.
sed is a very important text-filtering tool, used either as a one-line command or combined with grep and awk in pipelines. It is a non-interactive text stream editor.
(1) Three ways to invoke sed
On the command line: sed [options] 'sed command' input-file
With a script file: sed [options] -f sed-script-file input-file
As an executable sed script: sed-script-file [options] input-file
--Whether invoked from the command line or via a script file, if no input file is given, sed reads from standard input, typically the keyboard or the result of a redirection.
(2) sed command options
-n : suppress the automatic printing of lines
-e : the next argument is an editing command (allows several commands)
-f : the next argument is a sed script file to call
(3) How sed locates text in a file
--By line number, either a single number or a range of line numbers
--By regular expression
(4) Addressing forms
x              x is a line number
x,y            a range of lines from x to y
/pattern/      lines containing the pattern
/pattern/pattern/   lines containing two patterns
/pattern/,x    from a line containing the pattern through line x
x,/pattern/    from line x through a line matching the pattern
x,y!           lines outside the range x to y
(5) Basic sed editing commands
p     print the matched lines
d     delete the matched lines
=     print line numbers
a\    append new text after the addressed line
i\    insert new text before the addressed line
c\    replace the addressed text with new text
s     substitute the matched pattern with the replacement pattern
r     read text from another file
w     write text to a file
q     quit after the first pattern match
l     display non-printing characters as their ASCII (octal) codes
{}    group of commands to run on the addressed lines
n     read the next line of input and apply the next command to it
g     with the s command, replace all occurrences on the line (global)
y     transliterate characters
(6) Examples:
sed -n '2p'            Print the second line (note: -n suppresses printing of non-matching lines; without -n the whole file is printed, not just the matches)
sed -n '1,4p'          Print lines one through four
sed -n '/los/p'        Print the lines matching los
sed -n '2,/los/p'      Print from line two through the first line matching los
sed -n '/^$/p'         Print blank lines
sed -n -e '/^$/p' -e '/^$/='   Print blank lines together with their line numbers
sed -n '/good/a\morning'       Append morning after lines matching good
sed -n '/good/i\morning'       Insert morning before lines matching good
sed -n '/good/c\morning'       Replace lines matching good with morning
sed '1,2d'                     Delete lines 1 and 2
sed 's/good/good morning/g'    Match good and replace it with good morning
sed 's/good/& hello/p'         If good matches, add hello after it
sed 's/good/hello &/p'         If good matches, put hello in front of it
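A couple of sed lines tying the commands above together (hosts.txt is a hypothetical file; -i.bak is the GNU sed in-place option):
    sed -n '/^#/!p' hosts.txt                    # print only the non-comment lines
    sed -i.bak 's/oldhost/newhost/g' hosts.txt   # replace every occurrence, keeping a .bak backup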
6. Merge and split (sort, uniq, join, cut, paste, split)
(1) The sort command
sort [options] files      Sort on different fields, in different column orders
-c   test whether the file is already sorted
-m   merge two sorted files
-u   remove duplicate lines
-o   name of the output file in which to store the sort result
-t   field separator; use when the separator is not a space or tab
+n   n is a field number; start sorting at that field
-n   sort numerically on the field
-r   reverse the comparison order
sort -c          test whether a file has been sorted
sort -u          sort and merge identical lines
sort -r          sort in reverse order
sort -t "/" +2   split fields on "/" and start sorting at the third field (+2 skips the first two fields)
(2) The uniq command
uniq [options] files      Remove or report duplicate lines in a text file.
-u   print only the lines that are not repeated
-d   print only the duplicated lines, one copy of each
-c   prefix each line with the number of times it occurred
-f n n is a number; ignore the first n fields
uniq -f 2        ignore the first 2 fields
(3) The join command
join [options] file1 file2   Join lines of two sorted text files on a common field.
-a n   n is a file number; also print the unpairable lines from file n
-o n.m output field list; n is the file number, m is the field number
-j n m n is the file number, m is the field number; use that field as the join field
-t     field separator; set a separator other than space or tab
(4) The split command
          split -output_file_size input_filename output_prefix 
Used to split large files into smaller ones.
-b n   each output file is n bytes in size
-C n   each output file contains at most n bytes of complete lines
-l n   each output file contains n lines
-n     same as -l n
split -10        split the file into pieces of 10 lines each
(5) The cut command
cut -c n1-n2 filename   Print characters n1 through n2 of every line.
cut -c 3-5              Print the third through fifth characters of every line.
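These tools are often chained together; a sketch that reports the five most frequent values in column 3 of a hypothetical space-separated access.log:
    cut -d ' ' -f 3 access.log | sort | uniq -c | sort -rn | head -5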


==============================================
Several ways to view the number of threads under linux
==============================================
1. cat /proc/${pid}/status
2. pstree -p ${pid}
3. top -p ${pid} and then press H, or
   run top -bH -d 3 -p ${pid} directly.
   The top manual says of -H: Threads toggle.
   When top is started with this option it displays one thread per line; otherwise it displays one process per line.
4. ps xH    (the manual says of H: Show threads as if they were processes)
   This lists all existing threads.
5. ps -mp <PID>    (the manual says of m: Show threads after processes)
   This shows the threads started by a given process.
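Two more one-liners along the same lines (myserver is a hypothetical process name):
    grep Threads /proc/$(pidof myserver)/status   # the Threads: field, as in method 1
    ps -o nlwp= -p $(pidof myserver)              # nlwp = number of lightweight processes (threads)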


=======================
Linux environment inspection commands:
=======================
CPU:   cat /proc/cpuinfo, top
Processes: ps -ef
Memory: cat /proc/meminfo, top, free
Disks:  df -h, df -hT, df -hl
Disk partitions: sfdisk -l, fdisk -l, parted, cat /proc/partitions
IO:    iostat -x 1
OS:    uname -a, cat /proc/version, more /etc/issue


[root@oam-nas ~]# more /etc/issue
Red Hat Enterprise Linux Server release 6.1 (Santiago)
Kernel \r on an \m


[root@oam-nas ~]# uname -a
Linux oam-nas 2.6.33.20 #1 SMP PREEMPT Wed Apr 3 17:07:07 CST 2013 x86_64 x86_64 x86_64 GNU/Linux


[root@oam-nas ~]# more /proc/version 
Linux version 2.6.33.20 (root@oam-nas) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP PREEMPT Wed Apr 3 17:07:07 CST 2013


How to find out which filesystem types your Linux distribution supports:
Log in with superuser privileges and go to the /lib/modules/<kernel version>/kernel/fs/ directory.
Run the command there (the fs directory differs slightly between distributions; you can locate it by searching for the fs folder):
(abigbadboy's kernel version is 2.6.18-164.el5)
[root@rh root]# cd /lib/modules/2.6.18-164.el5/kernel/fs/                        
[root@localhost fs]# ls
autofs4     cramfs    ext3      fscache  hfsplus  lockd       nfsd      vfat
cachefiles  dlm       ext4      fuse     jbd      msdos       nls
cifs        ecryptfs  fat       gfs2     jbd2     nfs         squashfs
configfs    exportfs  freevxfs  hfs      jffs2    nfs_common  udf


View boot messages: dmesg
The dmesg command displays the kernel boot messages, which the kernel keeps in a ring buffer. If you missed the messages at boot time, you can review them with dmesg. The boot messages are also saved under /var/log in a file named dmesg.
File system mounts: cat /etc/fstab
View the file system:
fdisk -l --- show disk information and the partition table (or use cat /proc/partitions)
df --- remaining space on the current system
mount --- show the current mounts
View the size of a directory: du -sh /uploadimages
System load: top, uptime
Currently running services: service --status-all
Runlevels of each service: chkconfig --list
Ports used by all services: cat /etc/services | less
Currently open ports: netstat -nat, netstat -tnlp
Network configuration: ifconfig -a | grep add,
         ifup eth0,
         ifdown eth0,
         ethtool eth0,
         mii-tool -v eth0,
         /etc/init.d/network status,
         route, 
         /etc/sysconfig/network-scripts
         
[root@oam-nas2 yuanjs]# service network ?
usage: /etc/init.d/network {start|stop|status|restart|reload|force-reload}


Environment variables: export
What software is installed: rpm -qa , yum list,yum grouplist


top shows processes and CPU usage.
uptime shows the CPU load and how long the system has been up.
free shows memory and swap usage; -m displays the values in megabytes.
df -hT shows how each partition is being used; -h displays sizes in human-readable units (G), -T shows each partition's filesystem type.
iostat -x 1 shows the disk IO statistics.




/sbin
/bin
/usr/sbin
/usr/bin
/usr/local/sbin
/usr/local/bin
/usr/lib64/qt-3.3/bin
/root/bin


========================
Linux software installation methods:
(dpkg,apt-get)(rpm,yum)
========================
APT --- Advanced Package Tool 
apt-get -------- the package management tool of Debian and Ubuntu distributions, very similar to the yum tool on Red Hat.
apt-get install packagename --- install a new package
apt-get remove packagename --- uninstall an installed package (keeping its configuration files)
apt-get autoremove --- remove packages that were installed automatically as dependencies and are no longer needed
Software source settings: /etc/apt/sources.list
Update software source data apt-get update
Update installed software apt-get upgrade
Change system version apt-get dist-upgrade
Fixing dependency errors by installing packages or uninstalling them apt-get -f install
Search Software Data apt-cache search foo
Unzip and install the package apt-get install foo
Reinstall the package apt-get --reinstall install foo
Remove package releases apt-get remove foo
Uninstall the software and clear its configuration file apt-get --purge remove foo
Remove unwanted packages apt-get autoclean
Remove all downloaded packages apt-get clean
Automatically install packages needed to build a piece of software apt-get build-dep foo
Get source code apt-get source foo apt-get source rox-filer
Installation of compilation dependencies apt-get build-dep foo apt-get build-dep rox-filer
Unzip the source code dpkg-source -x foo_version dpkg-source -x rox_2.
Modify the source code section nano ROX-Filer/src/
Create package dpkg-buildpackage -rfakeroot -b
Modify software upgradeable status echo -e "foo hold" | dpkg --set-selections


dpkg --- short for "Debian Packager"; the package management system developed for Debian to make installing, updating and removing software easy.
All Linux distributions derived from Debian use dpkg, e.g. Ubuntu, Knoppix and others. It is the basis of the Debian package manager and was created by Ian Murdock in 1993.
dpkg is very similar to RPM: it installs, uninstalls and reports information about .deb packages. dpkg itself is a low-level tool; higher-level tools
such as APT are used to fetch packages from remote locations and to handle complex package relationships.
Display DEB package information dpkg -I
Show list of DEB package files dpkg -c
Installing the DEB package dpkg -i
Install DEB package (specify root) dpkg --root=<directory> -i
Show all installed software dpkg -l
Show installed package information dpkg -s foo
Show list of installed package files dpkg -L foo
Uninstalling packages dpkg -r foo
Uninstall the package and remove its configuration file dpkg -P foo
Reconfigure the installed program dpkg-reconfigure foo


yum -------- Yum (Yellowdog Updater, Modified) is a shell front-end package manager used by Fedora, Red Hat and CentOS.
It is built on RPM package management and can automatically download RPM packages from a configured repository and install them, resolving dependencies automatically and installing all dependent packages in one go,
without the tedium of downloading and installing each one by hand.
            
[root@localhost ~]# yum update
Loaded plugins: fastestmirror, refresh-packagekit
Loading mirror speeds from cached hostfile
 * base: mirrors.
 * c6-media: 
 * centosplus: mirrors.
 * extras: mirrors.
 * updates: mirrors.
file:///media/CentOS/repodata/: [Errno 14] Could not open/read file:///media/CentOS/repodata/
Trying other mirror.
file:///media/cdrecorder/repodata/: [Errno 14] Could not open/read file:///media/cdrecorder/repodata/
Trying other mirror.
file:///media/cdrom/repodata/: [Errno 14] Could not open/read file:///media/cdrom/repodata/
Trying other mirror.
Error: Cannot retrieve repository metadata () for repository: c6-media. Please verify its path and try again
The YUM repository file cannot be found because the installation media is not mounted, the path to the repository is written incorrectly, or the repository path is no longer valid.


Solution:
Move the offending repo file out of /etc/yum.repos.d/ (or fix its path). Also make sure your Linux machine is connected to the network.
            
yum list or yum grouplist --- find
yum install or yum groupinstall ---installation
yum remove or yum groupremove --- uninstall
yum info or yum groupinfo --- See details about a package or group.
yum deplist --- show package dependencies


Install software: yum install (for example)
Remove software: yum remove or yum erase
Upgrade software: yum upgrade foo or yum update foo
Query info: yum info foo
Search for software (to include the foo field, for example): yum search foo
Show package dependencies: yum deplist foo


-e Silent Execution
-t Ignore errors
-R [minutes] Setting the wait time
-y Auto-answer yes
--skip-broken Ignore dependency issues
--nogpgcheck Ignore GPG verification


check-update     Check for packages that can be updated
clean all        Clean everything
clean packages   Remove cached package files (files under /var/cache/yum)
clean headers    Remove rpm header files
clean oldheaders Remove old rpm header files
deplist lists package dependencies
list Installable and updatable RPM packages
list installed installed packages
list extras Installed packages not in the repository
info Installable and updatable RPM packages info
info installed Information about installed packages (-qa is similar).
install [RPM package] install package
localinstall Installs the local RPM package
update [RPM package] update package
upgrade Upgrade system
search [keyword]        search for packages
provides [keyword]      search for the package providing a specific file
reinstall [RPM package] reinstall a package
repolist                display the configured repositories
resolvedep              list packages that satisfy a given dependency
remove [RPM package]    uninstall a package


rpm ----- RedHat Package Manager; besides Red Hat it is used by Linux distributions such as OpenLinux and Turbo Linux. -vh: shows the progress of the installation;
rpm -ivh packagename ---install For example: rpm -ivh tcl-8.5.7-6.el6.x86_64.rpm tcl-devel-8.5.7-6.el6.x86_64.rpm tcl-pgtcl-1.6.2-3.el6.x86_64.rpm
rpm -ev packagename --- uninstall For example: rpm -ev tcl
rpm -qa |grep php See what PHP is currently installed on your system.
rpm -qpl    How to view the files (installation paths) contained in an rpm package ******


How to check RPM package dependencies on Fedora, CentOS, RHEL:
$ rpm -qR tcpdump --- Note that this only works for installed packages; it lists all the packages that the target package depends on.
$ rpm -qpR tcpdump-4.4.0-2.fc19. --- To check the dependencies of a package that is not installed, use the "-qpR" options against the package file.


1. How to install an rpm package (forced installation)
Run the following command:
rpm -i <package>.rpm, or rpm -i --force --nodeps <package>.rpm (forced installation)
where <package>.rpm is the filename of the rpm package you want to install, usually located in the current directory.


The following warnings or prompts may appear during installation:
1) ... conflict with ... Some files in the package to be installed would overwrite existing files; by default the package will not install in this case.
rpm --force -i simply forces the installation.
2) ... is needed by ... or ... is not installed ... The package requires other software that is not installed.
rpm --nodeps -i ignores this message.


In other words, rpm -i --force --nodeps ignores all dependency and file conflicts and installs the package regardless; however, a package forced in this way is not guaranteed to work properly.


2. How to uninstall an rpm package (forced uninstall)
Use the command rpm -ev <package name>; the package name may include the version number and other information, but must not include the .rpm suffix.
For example, to uninstall the package proftpd-1.2.8-1, you can use the following format:
rpm -e proftpd-1.2.8-1 
rpm -e proftpd-1.2.8 
rpm -e proftpd- 
rpm -e proftpd 
The following formats are not accepted:
rpm -e proftpd-1.2.8-1. 
rpm -e proftpd-1.2.8-1.i386 
rpm -e proftpd-1.2 
rpm -e proftpd-1 


Sometimes you get an error or warning:
... is needed by ... This means the package is required by other programs and cannot be uninstalled.
In that case you can use:
rpm -e --nodeps <package name> (forced uninstall)


An example of a forced uninstallation.
[root@A22770797 yuanjs]#  rpm -ev --nodeps httpd-2.2.15-9.el6.x86_64
warning: /etc/httpd/conf/ saved as /etc/httpd/conf/
[root@A22770797 yuanjs]# rpm -qa|grep httpd
httpd-tools-2.2.15-9.el6.x86_64
[root@A22770797 yuanjs]# rpm -ivh httpd-2.2.15-9.el6.x86_64.rpm 
warning: httpd-2.2.15-9.el6.x86_64.rpm: Header V3 RSA/SHA256 Signature, key ID fd431d51: NOKEY
Preparing...                ########################################### [100%]
   1:httpd                  ########################################### [100%]
[root@A22770797 yuanjs]# apachectl -v
Server version: Apache/2.2.15 (Unix)
Server built:   Apr  9 2011 08:58:28


====================================================================
Differences between the commands which, whereis, locate, and find in Linux
====================================================================
which shows the location of an executable: it searches the directories listed in the PATH environment variable, so its basic job is to find executables.
whereis uses a database to show where a program's related files are located.
locate uses a database to show the location of files.
find actually searches the disk for file names.


1、which 
Syntax:
[root@redhat ~]# which executable name
Example:
[root@redhat ~]# which passwd 
/usr/bin/passwd 
which looks for executables in that path via the PATH environment variable, so basically it's just looking for executables.


2、whereis 
Syntax:
[root@redhat ~]# whereis [-bmsu] File or directory name
Parameter description:
-b : search only for binary files
-m : search only for files under the manual-page paths
-s : search only for source files
-u : search for unusual entries (files lacking the requested types)
Example:
[root@redhat ~]# whereis passwd 
passwd: /usr/bin/passwd /etc/passwd /usr/share/man/man1/passwd. /usr/share/man/man5/passwd. 
Find all files related to the passwd file.


[root@redhat ~]# whereis -b passwd 
passwd: /usr/bin/passwd /etc/passwd 
Only binary files will be looked up


Compared with find, whereis is very fast, because Linux records all the files on the system in a database file; whereis, and the locate command described below, look the data up in that database instead of traversing the disk the way find does, which is naturally much more efficient.
However, the database is not updated in real time - by default it is refreshed once a week - so whereis and locate may sometimes return files that have already been deleted, or fail to find a file that was just created, simply because the database has not been updated yet.
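If a freshly created file is not found, the locate database can be refreshed by hand (assuming the mlocate/updatedb package is installed; myfile.txt is a hypothetical name):
    touch myfile.txt
    sudo updatedb          # rebuild the locate database now
    locate myfile.txt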


3、 locate 
Syntax:
[root@redhat ~]# locate file or directory name
Example:
[root@redhat ~]# locate passwd 
/home/weblogic/bea/user_projects/domains/zhanggongzhe112/myserver/stage/_appsdir_DB_war//jsp/as/user/ 
/home/weblogic/bea/user_projects/domains/zhanggongzhe112/myserver/stage/_appsdir_admin_war//jsp/platform/ 
/lib/security/pam_unix_passwd.so 
/lib/security/pam_passwdqc.so 
/usr/include/rpcsvc/ 
/usr/include/rpcsvc/ 
/usr/lib/perl5/5.8.5/i386-linux-thread-multi/rpcsvc/ 
/usr/lib/kde3/kded_kpasswdserver.la 
/usr/lib/kde3/kded_kpasswdserver.so 
/usr/lib/ruby/1.8/webrick/httpauth/ 
/usr/bin/vncpasswd 
/usr/bin/userpasswd 
/usr/bin/yppasswd 
………… 


4、 find 
Syntax:
[root@redhat ~]# find path Parameters


Parameter Description:


Time-based parameters:
-atime n : list files that were accessed within the last n*24 hours.
-ctime n : list files or directories whose status changed within the last n*24 hours.
-mtime n : list files or directories that were modified within the last n*24 hours.
-newer file : list files newer than file.
Name lookup parameter:
-gid n : find files with group ID n
-group name : find files with group name
-uid n : Find the file with owner ID n.
-user name : Finds files with user name.
-name file : look for a file with the filename file (wildcards can be used)


Example:
[root@redhat ~]# find / -name zgz 
/home/zgz 
/home/zgz/zgz 
/home/weblogic/bea/user_projects/domains/zgz 
/home/oracle/product/10g/cfgtoollogs/dbca/zgz 
/home/oracle/product/10g/cfgtoollogs/emca/zgz 
/home/oracle/oradata/zgz 


[root@redhat ~]# find / -name '*zgz*' 
/home/zgz 
/home/zgz/zgz1 
/home/zgz/zgzdirzgz 
/home/zgz/zgz 
/home/zgz/zgzdir 
/home/weblogic/bea/user_projects/domains/zgz 
/home/weblogic/bea/user_projects/domains/zgz/zgz.log00006 
/home/weblogic/bea/user_projects/domains/zgz/zgz.log00002 
/home/weblogic/bea/user_projects/domains/zgz/zgz.log00004 
/home/weblogic/bea/user_projects/domains/zgz/ 
/home/weblogic/bea/user_projects/domains/zgz/zgz.log00008 
/home/weblogic/bea/user_projects/domains/zgz/zgz.log00005 


When whereis and locate cannot find the file you need, you can fall back on find; but find traverses the disk, which consumes a lot of disk I/O and is much slower, so it is best to try whereis and locate first.


locate searches a database, which is updated at most about once a day.
whereis finds executable commands and man pages.
find finds files based on arbitrary conditions.
which finds executables and aliases.


==================================
The meaning of "2>&1" in linux shell (important)
==================================
The script is:
nohup /mnt/Nand3/H2000G >/dev/null 2>&1 &     "Redirect standard error to standard output, and throw both away into /dev/null."
More precisely, &1 is file descriptor 1, and 1 normally stands for STDOUT_FILENO; the operation is effectively a dup2(2) call. Standard output is redirected first (here to /dev/null; in the original example to a file such as all_result),
and then standard output is duplicated onto file descriptor 2 (STDERR_FILENO). As a result, file descriptors 1 and 2 point to the same file table entry, and the error output is merged into the same destination.
Where:
0 stands for standard input (the keyboard),
1 stands for standard output (the screen), and
2 stands for standard error.
So the line redirects standard error to standard output and sends both to /dev/null.


In plain terms, it throws both the standard output and the standard error into the trash.


command > file 2>&1 &
1. command > file redirects the output of command to a file, i.e. the output is written to the file instead of being printed to the screen.
2. 2>&1 redirects standard error to standard output, which has already been redirected to the file, so standard error also ends up in the file.
3. The final & runs the command in the background.


Consider what 2>1 would mean: the 2 combined with > is an error redirection, but the 1 is taken as a file named 1, not as standard output;
with 2>&1, the & makes the 1 refer to standard output, so it becomes "redirect errors to standard output".


---------------------------------------------------------------------------------------------------
You can test this:
ls 2>1 does not complain that a file named 2 is missing; it just creates an empty file named 1;
ls xxx 2>1 writes the "no such file xxx" error into the file named 1;
ls xxx 2>&1 no longer creates a file named 1; the error goes to standard output;
ls xxx > file 2>&1 could equally be written ls xxx 1> file 2>&1; the redirection symbol > defaults to descriptor 1, so both the error and the normal output end up in the file.


Questions:
1) Why is 2>&1 written after the file redirection?
    command > file 2>&1
 
First, command > file redirects standard output to file;
then 2>&1 makes standard error copy the behaviour of standard output, i.e. it is redirected to file as well. The end result is that both standard output and standard error go into file.


2) command 2>&1 >file
Here 2>&1 makes standard error copy what standard output is doing at that moment - and at that moment standard output is still the terminal. Only afterwards does >file redirect standard output to file, so standard error stays on the terminal.
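A quick sketch you can run to see the difference (out.log is a hypothetical file name):
    ls /nonexistent > out.log 2>&1    # both the listing and the error end up in out.log
    ls /nonexistent 2>&1 > out.log    # the error still appears on the terminal; only stdout goes to out.log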
 
You can see it with strace:
1. command > file 2>&1
The key system call sequence in this command that implements redirection is:
open(file) == 3
dup2(3,1)
dup2(1,2)
2. command 2>&1 >file
The key system call sequence in this command that implements redirection is:
dup2(1,2)
open(file) == 3
dup2(3,1)
Consider what kind of file-sharing structure would result from a different sequence of dup2() calls.




================================================
Difference between /dev/zero and /dev/null
================================================
Using /dev/null:
Think of /dev/null as a black hole. It is essentially a write-only file: anything written to it is lost forever, and attempts to read from it return nothing. Even so, /dev/null is very useful on the command line and in scripts.


Using /dev/zero:
Like /dev/null, /dev/zero is a pseudo-file, but it actually produces a continuous stream of null bytes (binary zeros, not ASCII "0"). Output written to it is discarded, and reading from it yields an endless stream of zero bytes, which can be inspected with od or a hex editor. The main use of /dev/zero is to create an empty file of a specified length for initialization, such as a temporary swap file.


/dev/null, nicknamed the bottomless pit: you can write any amount of data to it, it swallows everything and never fills up.
/dev/zero is an input device you can use to initialize files.


/dev/null ------ the null device, also known as the bit bucket. Any output written to it is discarded. If you do not want a message shown on standard output or written to a file, redirect it to the bit bucket.
/dev/zero ------ this device supplies an endless stream of zero bytes; take as many as you need. It can be used to write zeros into a device or file.


dd --- disk dump
$dd if=/dev/zero of=./ bs=1k count=1
$ ls -l
total 4
-rw-r--r--     1 oracle    dba           1024 Jul 15 16:56




Suppressing standard output:
cat $filename >/dev/null
# The contents of the file are discarded instead of being written to standard output.


Suppressing standard error (from Example 12-3):
rm $badname 2>/dev/null
# The error messages [standard error] are thrown into the Pacific Ocean.


Suppressing both standard output and standard error:
cat $filename 2>/dev/null >/dev/null
# If "$filename" does not exist, no error message is printed.
# If "$filename" exists, its contents are not written to standard output.
# So the code above produces no output at all.
#
# This is useful when you only want to test a command's exit code and do not want any output.
#
# cat $filename &>/dev/null
#   also works, as noted by Baris Cicek.


Deleting the contents of a file while preserving the file itself, with all its permissions (from Example 2-1 and Example 2-3):
cat /dev/null > /var/log/messages
# : > /var/log/messages has the same effect but does not spawn a new process (because : is a shell builtin).

cat /dev/null > /var/log/wtmp


Automatic emptying of the contents of log files (especially good for dealing with those pesky "cookies" sent by commercial Web sites).
--------------------------------------------------------------------------------


Example 28-1. Hiding cookies from use
if [ -f ~/.netscape/cookies ]   # Remove the cookie file if it exists.
then
  rm -f ~/.netscape/cookies
fi

ln -s /dev/null ~/.netscape/cookies
# Now all cookies are thrown into the black hole instead of being saved on disk.


--------------------------------------------------------------------------------
Using /dev/zero
--------------------------------------------------------------------------------


Example 28-2. Creating a Swap Temp File with /dev/zero
#!/bin/bash
# Create a swap file.

ROOT_UID=0       # Root user has $UID 0.
E_WRONG_USER=65  # Not root?

FILE=/swap
BLOCKSIZE=1024
MINBLOCKS=40
SUCCESS=0


# This script must be run by root.
if [ "$UID" -ne "$ROOT_UID" ]
then
  echo; echo "You must be root to run this script."; echo
  exit $E_WRONG_USER
fi


blocks=${1:-$MINBLOCKS} # If not specified on the command line,
                        #+ default to 40 blocks.
# The line above is equivalent to:
# --------------------------------------------------
# if [ -n "$1" ]
# then
#   blocks=$1
# else
#   blocks=$MINBLOCKS
# fi
# --------------------------------------------------


if [ "$blocks" -lt $MINBLOCKS ]
then
  blocks=$MINBLOCKS # Must be at least 40 blocks long.
fi


echo "Creating swap file of size $blocks blocks (KB)."
dd if=/dev/zero of=$FILE bs=$BLOCKSIZE count=$blocks # Write zeros to the file.

mkswap $FILE $blocks # Designate the file as a swap file (or swap partition).
swapon $FILE         # Activate the swap file.

echo "Swap file created and activated."

exit $SUCCESS


--------------------------------------------------------------------------------


Another application of /dev/zero is to fill a file of a specified size with zeros for a specific purpose, such as mounting a filesystem to a loopback device (see Example 13-8) or deleting a file "safely" (see Example 12-55).
--------------------------------------------------------------------------------


Example 28-3. Creating a ramdisk
#!/bin/bash
# ramdisk

# A "ramdisk" is a section of system RAM
#+ that can be treated as a file system.
# Its advantage is very fast access (both read and write).
# Disadvantages: it is volatile - data is lost on reboot or shutdown -
#+ and it reduces the amount of RAM available to the system.
#
# So what is a ramdisk good for?
# Keeping a larger dataset on it, such as a table or a dictionary,
#+ speeds up data lookups, since memory access is much faster than disk access.


E_NON_ROOT_USER=70 # Must be run as root.
ROOTUSER_NAME=root

MOUNTPT=/mnt/ramdisk
SIZE=2000        # 2K blocks (can be modified as appropriate)
BLOCKSIZE=1024   # Each block is 1K (1024 bytes)
DEVICE=/dev/ram0 # First ram device

username=`id -nu`
if [ "$username" != "$ROOTUSER_NAME" ]
then
  echo "Must be root to run \"`basename $0`\"."
  exit $E_NON_ROOT_USER
fi

if [ ! -d "$MOUNTPT" ] # Test whether the mount point already exists,
then                   #+ so the directory is not created again
  mkdir $MOUNTPT       #+ if this script has been run before.
fi

dd if=/dev/zero of=$DEVICE count=$SIZE bs=$BLOCKSIZE # Fill the RAM device with zeros.
                                                     # Why is this necessary?
mke2fs $DEVICE         # Create an ext2 filesystem on the RAM device.
mount $DEVICE $MOUNTPT # Mount it.
chmod 777 $MOUNTPT     # Make the ramdisk accessible to normal users.
                       # However, only root can unmount it.

echo "\"$MOUNTPT\" now available for use."
# The ramdisk can now be used to access files, even by normal users.

# Note that the ramdisk is volatile, so its contents disappear when the system is rebooted or shut down.
#
# Copy any files you want to keep to a regular disk directory.

# After rebooting, run this script again to recreate the ramdisk.
# Merely remounting /mnt/ramdisk without the other steps will not work correctly.

# Suitably improved, this script could be placed in the startup scripts under /etc/
#+ so that a ramdisk is created automatically at boot.
# That is appropriate for speed-critical database servers.

exit 0


--------------------------------------------------------------------------------
Finally, it is worth mentioning that ELF binaries also make use of /dev/zero.


e.g. find / -name access_log 2>/dev/null
This way the error messages are discarded rather than displayed.




===============================
Difference between hard and soft links
===============================
I. Link files
There are two kinds of links, soft links and hard links.
1. Soft link files
A soft link, also called a symbolic link, is a file that contains the pathname of another file. The target can be any file or directory, and it may even live on a different file system.
A link file can even point to a file that does not exist, which produces what is commonly called a "broken link" (or "dangling link"); a link file can even link to itself in a loop, similar to recursion in a programming language.
A soft link is created with the ln -s command, as follows:
    ------------------------------------------------------------
ln -s  source_file softlink_file
    ------------------------------------------------------------
When a symbolic link is read or written, the system automatically redirects the operation to the source file; but when a link file is deleted, only the link itself is removed, not the source file.
2. Hard link files
info ln tells you that a "hard link" is simply another name for an existing file, which can be a little confusing.
The command to create a hard link is:
    ------------------------------
ln existfile newfile
    ------------------------------
Hard links have two limitations:
1) hard links to directories are not allowed;
2) links can only be created between files on the same file system.
Reading, writing and deleting a hard-linked file behaves the same as with a soft link. However, if you delete the source file of a hard link, the hard-linked file still exists and keeps its contents;
at that point the system "forgets" it was ever a hard link and simply treats it as an ordinary file.
    
II. Differences between the two
A hard link is a connection made through the inode. In a Linux file system, every file stored in a disk partition, whatever its type, is assigned a number called the inode index.
In Linux, several file names may point to the same inode; such a connection is a hard link. The purpose of a hard link is to let one file have multiple valid path names, so that a user can create hard links to important files to guard against accidental deletion.
The reason is as described above: the inode for the file has more than one link. Deleting one link affects neither the inode itself nor the other links; only when the last link is deleted are the file's data blocks and its directory entries released. In other words, the file is only really deleted when its last link is removed.
A soft link file is somewhat like a Windows shortcut. It is actually a special type of file: a symbolic link is really a text file that contains the location information of another file.
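A quick way to see the difference is to compare inode numbers with ls -li; the file names below are just placeholders:
echo hello > source.txt
ln source.txt hard.txt            # hard link: same inode as source.txt, link count goes up to 2
ln -s source.txt soft.txt         # soft link: a separate small file that only stores the path
ls -li source.txt hard.txt soft.txt
rm source.txt
cat hard.txt                      # still prints "hello"; the data blocks are kept
cat soft.txt                      # fails: the symbolic link is now broken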


========================
Linux chkconfig command in detail
========================
To see which services are turned on: SysV services can be listed with chkconfig --list.
The chkconfig command is mainly used to update (start or stop) and query runlevel information for system services. Keep in mind that chkconfig does not immediately disable or activate a service; it simply changes the symbolic links.
Syntax:
chkconfig [--add][--del][--list] [system service] or chkconfig [--level <levels>] [system service] [on/off/reset]


Run without arguments, chkconfig displays its usage. If a service name is given, it checks whether that service is started at the current runlevel, returning true if it is and false otherwise.
If on, off or reset is specified after the service name, chkconfig changes the startup information for that service.
on and off mean the service is started or stopped respectively, and reset resets the service's startup information to whatever its init script specifies.
By default, on and off only affect runlevels 3, 4 and 5, while reset affects all runlevels.


Parameter usage:
--add Adds the specified system service so that chkconfig can manage it, and adds the related data to the system startup script files.
--del Removes the specified system service from chkconfig's management and removes the related data from the system startup script files.
--level <levels> Specifies the runlevels at which the system service should be turned on or off.
Level 0: halt (power off)
Level 1: single-user mode
Level 2: multi-user command-line mode without networking
Level 3: multi-user command-line mode with networking
Level 4: unused
Level 5: multi-user mode with a graphical interface
Level 6: reboot
Note that the --level option lets you specify which runlevels the setting applies to, not just the current one. For each runlevel a service can have only one start script or one stop script. When switching runlevels, init does not restart a service that is already started, nor stop one that is already stopped.


chkconfig --list [name]: Displays runtime status information (on or off) for all runlevel system services. If name is specified, only the status of the specified service at different runlevels is displayed.
chkconfig --add name: Add a new service. chkconfig ensures that each runlevel has a start (S) or kill (K) entry. If one is missing, it is automatically created from the default init script.
chkconfig --del name: Removes the service and removes the associated symbolic link from /etc/rc[0-6].d.
chkconfig [--level levels] name: Sets whether a service is started, stopped, or reset at the specified runlevel.


Run-level documentation:
Each service managed by chkconfig requires two or more lines of comments in the corresponding script. The first line tells chkconfig which runlevel to start by default, and the start and stop priorities. If a service is not started at any runlevel by default, then use - instead of runlevel. The second line describes the service and can be commented across lines with \.
For example, a service script might contain these three lines:
# chkconfig: 2345 20 80
# description: Saves and restores system entropy pool for \
# higher quality random number generation.


Example of use:
chkconfig --list #list all system services
chkconfig --add httpd #add httpd service
chkconfig --del httpd #delete httpd service
chkconfig --level 2345 httpd on #Set httpd to be on at runlevels 2, 3, 4 and 5.
chkconfig --list #List all services started on the system
chkconfig --list mysqld #List mysqld service settings
chkconfig --level 35 mysqld on #Set mysqld to run at levels 3 and 5 as a boot service, --level 35 means that the operation is only performed at levels 3 and 5, on means startup, off means shutdown.
chkconfig mysqld on #Set mysqld on at all levels, "at all levels" includes levels 2, 3, 4, and 5.


How to add a service:
1. The service script must be stored in the /etc// directory;
2. chkconfig --add servicename
   adds the service to the list managed by chkconfig, and the service is then given K/S entries under /etc//;
3. chkconfig --level 35 mysqld on
   modifies the default startup runlevels of the service; a consolidated example follows below.
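Putting the steps together, a minimal sketch of registering a custom SysV service (assuming the script directory is /etc/init.d, as on most SysV-init distributions, and using a made-up script name myservice):
cp myservice /etc/init.d/myservice     # the script must contain the "# chkconfig:" and "# description:" header lines
chmod +x /etc/init.d/myservice
chkconfig --add myservice              # creates the K/S links under /etc/rc[0-6].d
chkconfig --level 35 myservice on      # start it at runlevels 3 and 5
chkconfig --list myservice             # verify the result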
    
===============
Run levels
===============
[root@localhost ~]# cat /etc/inittab


# Default runlevel. The runlevels used are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)
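To check or change the current runlevel on a SysV-init system, for example:
runlevel                        # prints the previous and the current runlevel, e.g. "N 3"
init 3                          # switch to runlevel 3 (multi-user, text mode)
grep initdefault /etc/inittab   # show the default runlevel used at boot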


========================
Linux command: declare
========================
Function: declare shell variables.
Syntax: declare [+/-][afrix]
Supplementary note: declare is a shell builtin. In its first form it declares variables and sets their attributes ([afrix] are the attribute flags); in its second form it displays shell functions. With no arguments it displays all shell variables and functions (the same effect as the set builtin).
Parameters:
+/-   "-" sets the given attribute on a variable; "+" removes it.
-a   declares an array variable
-f   declares (or displays) shell functions
-i   declares an integer variable
-r   declares a read-only variable
-x   marks the variable for export to the environment


declare -x XMLRPC_TRACE_XML=1 
declare +x XMLRPC_TRACE_XML
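A small bash sketch of the most common attributes (variable names are arbitrary):
declare -i num=10; num=num+5; echo $num                    # -i: arithmetic is evaluated, prints 15
declare -r PI=3.14; PI=3 2>/dev/null || echo "read-only"   # -r: the assignment fails
declare -x MYVAR=hello; bash -c 'echo $MYVAR'              # -x: the variable is exported to child processes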


ATA --->PATA(IDE)
ATA --->SATA
SCSI --->SAS


The leading h in hdb stands for IDE; if the device shows up as sdb instead, it is a SATA or SCSI disk.
The trailing letter b means the second hard disk, i.e. the slave device on the primary bus.


hda1 (IDE1:hard disk ) /boot partition
hda2 (IDE1:hard disk ) / Partitions
hdb   (IDE2)
sda   (SCSI1)
sdb   (SCSI2)


Floppy disks are usually /dev/fd0, fd1; hard disks are usually /dev/hda, hdb; hard disk partitions are usually hda1, hda2, and so on; the CD-ROM drive is usually /dev/hdc.
swap partition, / partition and /boot partition
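To see which names the kernel actually assigned, you can simply list the block device nodes, e.g.:
ls -l /dev/hd* /dev/sd* 2>/dev/null   # IDE disks show up as hda, hdb, ...; SATA/SCSI disks as sda, sdb, ...
fdisk -l                              # list every disk and its partitions (run as root)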


more /etc/passwd View Usernames and Groups
more /etc/group View Groups
more /etc/hosts   
more /etc/issue View OS version
more /etc/networks
more /etc/protocols
more /etc/rpc
more /etc/services
more /etc/shadow Shadow (encrypted) password information


chmod -R 777 xstart   Recursively changes the permissions of xstart and everything under it.
chmod -R +x /opt   Recursively gives execute permission (+x) to all directories, files and subdirectories under /opt.


chown root:root xstart   
chown -R root:root /opt Set all directories and their subdirectories under /opt to be owned by root.


Clear the Linux firewall: iptables -F ; service iptables stop


--------------------------------------------------
Command Name : chmod
Accessibility : All users
--------------------------------------------------
Usage : chmod [-cfvR] [--help] [--version] mode file...
Description : Linux/Unix file access permissions are divided into three levels: the file owner, the group, and others. chmod controls how a file can be accessed by each of them.
Parameters:
mode : the permission string, in the format [ugoa...][[+-=][rwxX]...][,...], where
u means the file's owner, g means users in the same group as the owner, o means everyone else, and a means all three.
+ means add a permission, - means remove a permission, = means set exactly these permissions.
r means readable, w means writable, x means executable, and X means executable only if the file is a directory or already has execute permission set for some user.
-c : Show the changed file permissions only if they have been changed.
-f : Do not display an error message if the file permissions cannot be changed.
-v : Display details of permission changes
-R : Change the same permissions for all files and subdirectories in the current directory (i.e., change them one by one in a round-robin fashion).
--help : show help
--version : show version
Example: Setting a file to be readable by everyone.
chmod ugo+r  
Setting files to be readable by everyone.
chmod a+r  
Set a file so that its owner and users in the same group can write to it, but others cannot.
chmod ug+w,o-w  
Set the file to be executed only by the owner of the file.
chmod u+x  
Set all files and subdirectories in the current directory to be readable by anyone.
chmod -R a+r *  
In addition, chmod can also be used to express permissions as numbers such as chmod 777 file


The syntax is: chmod abc file
Where a,b,c are each a number, representing the User, Group, and Other permissions, respectively.
r=4,w=2,x=1 
If you want the rwx attribute then 4+2+1=7;
If you want the rw- attribute then 4+2=6;
If you want the r-x attribute then 4+1=5.
Example:
chmod a=rwx file  
and
chmod 777 file  
have the same effect;
chmod ug=rwx,o=x file  
and
chmod 771 file  
have the same effect.
If you use chmod 4755 filename, the leading 4 sets the setuid bit, so the program runs with its owner's privileges (root privileges if it is owned by root).
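A short check that the symbolic and the numeric form really produce the same mode (the file name is a placeholder):
touch demo.sh
chmod u=rwx,g=rx,o=r demo.sh
ls -l demo.sh            # -rwxr-xr--
chmod 754 demo.sh
stat -c %a demo.sh       # prints 754: identical result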


--------------------------------------------------
env command.
--------------------------------------------------
#!/usr/local/bin/python
Write the full path to the Python interpreter after #!.
A better solution: many Unix systems have a command called env, located in /bin or /usr/bin, which finds the python interpreter via the system search path. If your system has env, the startup line can be changed to the following:
#!/usr/bin/env python
Alternatively, if your env is in /bin:
#!/bin/env python


[root@oam-nas2 yuanjs]# which env
/bin/env
[root@oam-nas2 yuanjs]# env --help
Usage: env [OPTION]... [-] [NAME=VALUE]... [COMMAND [ARG]...]
Set each NAME to VALUE in the environment and run COMMAND.


  -i, --ignore-environment  start with an empty environment
  -0, --null           end each output line with 0 byte rather than newline
  -u, --unset=NAME     remove variable from the environment
      --help     display this help and exit
      --version  output version information and exit


A mere - implies -i.  If no COMMAND, print the resulting environment.


Report env bugs to bug-coreutils@
GNU coreutils home page: </software/coreutils/>
General help using GNU software: </gethelp/>
Report env translation bugs to </team/>
For complete documentation, run: info coreutils 'env invocation'
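Besides shebang lines, env is also handy for running a single command with a modified or empty environment, for example:
env MYVAR=hello printenv MYVAR   # prints "hello": the variable exists only for this one command
env -i printenv                  # prints nothing: -i starts from an empty environment
env -u LANG printenv LANG        # prints nothing: LANG was removed with -u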


A look at a Synology (Qunhui) DiskStation:
DiskStation> env
SSH_CLIENT=192.168.56.1 3402 22
MAIL=/var/mail/admin
USER=admin
OLDPWD=/var/services/homes/admin
HOME=/var/services/homes/admin
SSH_TTY=/dev/pts/0
PAGER=more
LOGNAME=admin
TERM=xterm
PATH=/sbin:/bin:/usr/sbin:/usr/bin:/usr/syno/sbin:/usr/syno/bin:/usr/local/sbin:/usr/local/bin
SHELL=/bin/sh
PWD=/
SSH_CONNECTION=192.168.56.1 3402 192.168.56.101 22
PGDATA=/var/service/pgsql
TZ=CST-8


A look at OpenMediaVault (OMV):
root@openmediavault:/var/lib/php5# su openmediavault
$ env
LANGUAGE=zh_CN:zh
USER=openmediavault
SSH_CLIENT=192.168.56.1 2626 22
MAIL=/var/mail/openmediavault
SHLVL=1
OLDPWD=/tmp
HOME=/home/openmediavault
SSH_TTY=/dev/pts/0
LOGNAME=openmediavault
_=/bin/su
TERM=xterm
PATH=/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
LANG=zh_CN.UTF-8
SHELL=/bin/sh
PWD=/var/lib/php5
SSH_CONNECTION=192.168.56.1 2626 192.168.56.102 22
$ exit
root@openmediavault:/var/lib/php5# env
TERM=xterm
SHELL=/bin/bash
SSH_CLIENT=192.168.56.1 2626 22
SSH_TTY=/dev/pts/0
USER=root
MAIL=/var/mail/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
PWD=/var/lib/php5
LANG=zh_CN.UTF-8
SHLVL=1
HOME=/root
LANGUAGE=zh_CN:zh
LOGNAME=root
SSH_CONNECTION=192.168.56.1 2626 192.168.56.102 22
_=/usr/bin/env
OLDPWD=/tmp


===================================================
Detailed explanation of the contents of the /etc/passwd file
===================================================
1. /etc/passwd contains user information
2. /etc/shadow contains the passwords of the users listed in passwd
3. /etc/group contains group information
4. /etc/gshadow contains encrypted group information
5. The files ending in "-" are backups of the corresponding files; if you damage one of the originals, you can restore it from the backup:
/etc/passwd-
/etc/group-
/etc/shadow-
/etc/gshadow-
6. diff group group- shows the difference between the two files.
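If you suspect these files have been edited by hand and broken, the consistency checkers that ship with the shadow tools can verify them:
pwck    # checks /etc/passwd and /etc/shadow for format errors and inconsistencies
grpck   # the same check for /etc/group and /etc/gshadow
vipw    # edit /etc/passwd with proper locking (vigr does the same for /etc/group)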


UID numbers for program (system) user accounts default to the range 1-499, while UID numbers from 500 to 60000 are assigned to regular user accounts by default.
Likewise, the default GID range for program group accounts is 1-499, and the default GID range for general group accounts is 500-60000.
The default UID and GID ranges used by normal users and groups are defined in the configuration file "/etc/".


A system account is an account used by the system itself, as opposed to a personal account. It is still an account; the difference from an ordinary personal account is that its ID is used by system programs.
Under Linux every program runs under some user account. Programs that provide services to the outside world are usually run under a non-root account for security isolation; these non-root accounts, which exist specifically to run programs, are the system accounts.
Such programs could in fact be run under a normal personal account, but if the service were ever compromised, that account's personal data would be exposed.
Generally speaking, system accounts have UIDs below 500, although this is not absolute (UIDs below 500 are system users; UIDs from 500 upward belong to users created later).




/etc/passwd is where users are stored.


username : password : uid : gid :user description : home directory : login shell
1        2      3      4     5         6        7


root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin


 
UID : Every user must be assigned a user ID; "0" is reserved for root.
1-99 are reserved for predefined system accounts. HP-UX 10.20 supports UIDs up to 2,147,483,646; before HP-UX 10.20 the limit was 60,000.
There is a class of users called pseudo users. They also have records in the /etc/passwd file, but they cannot log in because their login shell is empty. They exist mainly to simplify system administration and to satisfy the file-ownership requirements of the corresponding system processes. Common pseudo users are shown in Table 8-1.
Table 8-1 Common Pseudo-Users in the /etc/passwd File
Pseudo-user    Meaning
bin            owns executable user command files
sys            owns system files
adm            owns accounting files
uucp           UUCP usage
lp             lp or lpd subsystem use
nobody         NFS usage
In addition to the pseudo-users listed above, there are a number of standard pseudo-users, such as audit, cron, mail, usenet, etc., which are also each required for the processes and files in question.


The record lines in /etc/shadow correspond one-to-one to those in /etc/passwd, which is automatically generated by the pwconv command based on the data in /etc/passwd. Its file format is similar to that of /etc/passwd and consists of several fields separated by ":". These fields are:
Login name:Encryption password:Last modified time:Minimum time interval:Maximum time interval:Warning time:Inactivity time.


useradd -g mysql -d /home/test -m test (create a new user test, belonging to the mysql group, with home directory /home/test)
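After creating the user, you can confirm what actually landed in the configuration files, for example:
id test                      # shows the UID, the mysql primary group and any other groups
grep '^test:' /etc/passwd    # the new passwd record, including /home/test and the login shell
grep '^test:' /etc/shadow    # the matching shadow record (readable only by root)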


===================================================
Linux passwd and shadow file contents in detail
===================================================
I. /etc/passwd
The /etc/passwd file is a plain text file that uses the same formatting for each line:
name:password:uid:gid:comment:home:shell 
name User login name
password User password. The password in this field is encrypted and is nowadays usually shown as just x. When a user logs in, the system applies the same algorithm to the entered password and compares the result with the contents of this field. If this field is empty, the user does not need a password to log in.
uid The UID of the user. After login the system identifies the user by this value, not by the user name.
gid The GID. The system uses this value to grant the same rights to users in the same group.
comment holds the user's real name and personal details, or full name.
home The absolute path to the user's home directory.
shell The absolute path of the command executed when the user logs in successfully. It can be any command.
So each line of /etc/passwd holds one user's information as seven fields separated by six colons, explained as follows:
(1): Username.
(2): Password (encrypted)
(3): UID (user identification), used by the operating system itself.
(4): GID group identification.
(5): Full user name or local account number
(6): Home directory
(7): The shell used for login is the tool that parses the login commands.
Example: abc:x:501:501::/home/abc:/bin/bash


II. /etc/shadow
If you look at the special account information stored in the /etc/shadow file as follows:
name:!!:13675:0:99999:7:::  
Each line defines password information for a special account, with each field separated by :.
Field 1 defines the special user account associated with this shadow entry.
Field 2 contains an encrypted password.
Field 3 The number of days since 1/1/1970 that the password was last changed
Field 4 The number of days that must pass before the password may be changed again (0 means "can be changed at any time")
Field 5 The number of days after which the system forces the user to set a new password (99999 effectively means "never")
Field 6 The number of days before expiry that the user is warned the password will expire (-1 means "no warning")
Field 7 The number of days after password expiry before the system disables the account (-1 means "never disable")
Field 8 The date on which the account expires, in days since 1/1/1970 (empty or -1 means the account stays enabled)
Field 9 Reserved for future use


If you look at the general account information stored under /etc/shadow as follows:
(1): Account Name
(2): Password: stored here in encrypted form, but an expert could still crack it, so pay attention to security (a ! at the start of this field marks an account that cannot be used to log in)
(3): Date of last password change
(4): Number of days the password cannot be changed
(5): Number of days the password needs to be changed again (99999 means no change is needed)
(6): Warnings a few days in advance before password change
(7): Account expiration date
(8): Date of account cancellation
(9): Reserved entries, not currently used
Example: abc:!!:14768:0:99999:7:::
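Rather than decoding the day counts by hand, the chage command can print the same ageing information from /etc/shadow in readable form, e.g. (abc is just the example account name):
chage -l abc    # lists the last password change, minimum/maximum ages, warning period and expiry dates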


===================================================
Detailed explanation of the /etc/group /etc/passwd /etc/shadow files
===================================================
Special attention:
useradd -g users -G administrators admin
Add a new user admin whose primary group is users and whose secondary (supplementary) group is administrators. There is a difference between the primary group and the secondary group: the primary group is set with lowercase -g, the secondary group with uppercase -G. Secondary group information is stored in /etc/group, while primary group information is only recorded in /etc/passwd (a quick check follows below).
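The difference shows up directly in the two files and in id; a quick check after running the command above (assuming the users and administrators groups already exist):
id admin                        # gid= shows the primary group (users), groups= also lists administrators
grep '^admin:' /etc/passwd      # the GID field in this record is the primary group
grep administrators /etc/group  # admin appears here as a member of the secondary group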




In Linux, the user (User) and user group (Group) configuration files are among the most important files a system administrator should understand and master. Understanding them is also an important part of system security management; a qualified system administrator must know them thoroughly;
I. User (User)-related:
When we talk about users we have to talk about user management, the user configuration files, and the user query and management tools. User management is done mainly by modifying the user configuration files, and the ultimate effect of the management tools is also to modify those files.
So what are the user query and management tools? They are the system tools for querying, adding, modifying and deleting users: for example id and finger for queries, useradd or adduser for adding a user, userdel for deleting one, passwd for setting a password, usermod for modifying a user, and so on. What we need to understand is that everything these tools do ultimately modifies the user configuration files, so when managing users we can just as well edit the configuration files directly and achieve the same result.
With that understood, we can appreciate the importance of the User configuration files. Users and user groups are inseparable in system administration, but for the sake of explanation we single out the User configuration files here, namely /etc/passwd and /etc/shadow; along the way you will also see why the UID matters.
What you can learn from this section: an understanding of /etc/passwd and /etc/shadow, and what a UID is.
The main user-related system configuration files are /etc/passwd and /etc/shadow, of which /etc/shadow is the encrypted user-information file, storing for example the encrypted user passwords; /etc/passwd and /etc/shadow are complementary, and we can see the difference by comparing the two files.
1. About /etc/passwd and UID;
/etc/passwd is the file the system uses to identify users; to use a rough analogy, /etc/passwd is a roster in which every user of the system is registered. When we log in as beinan, the system first consults /etc/passwd to see whether the beinan account exists, then determines beinan's UID to confirm the user's identity; if the account exists, it reads the password for beinan from /etc/shadow. If the password verifies correctly, the user is logged in and the user's configuration files are read.
1) Understanding the contents of /etc/passwd:
In /etc/passwd, each line represents information about a user; there are seven segments in a line; each segment is separated by a : sign, for example, the following are the two lines of /etc/passwd on my system:
beinan:x:500:500:beinan sun:/home/beinan:/bin/bash
linuxsir:x:501:502::/home/linuxsir:/bin/bash
First field: username (also known as login name); in the example above, we see that the usernames of the two users are beinan and linuxsir;
Second field: password; in the example we see an x. The password is actually mapped to the /etc/shadow file;
The third field: UID; see the explanation of UID in this article;
Fourth field: GID; see the explanation of GID in this article;
The fifth field: the user's full name; this is optional and need not be set. For the beinan user, the full name is beinan sun; the linuxsir user has no full name set;
Sixth field: the location of the user's home directory; /home/beinan for the user beinan and /home/linuxsir for the user linuxsir;
Seventh field: the type of SHELL used by the user, beinan and linuxsir both use bash; so it is set to /bin/bash;
2) Understanding about UID:
A UID is the user's ID value. Each user's UID in the system is unique; more precisely, each user should correspond to one unique UID, and the system administrator should maintain this rule. System UIDs start from 0 and are non-negative integers; the maximum value can be found in /etc/, and the usual convention of Linux distributions is 60000. On Linux, root has UID 0 and holds the highest privileges on the system.
UID uniqueness is a standard the system administrator should guarantee; it is directly related to the security of the system and deserves our attention. For example, what do you imagine happens if I change beinan's UID in /etc/passwd to 0? The user beinan would then be recognized as root and could perform every operation root can.
Sharing one UID between several users is dangerous. Changing a regular user's UID to 0, as above, so that it shares a UID with root, effectively muddles the system's administrative privileges. If we need root privileges we should use su or sudo; never let a user share the same UID as root.
Keeping UIDs unique is the administrator's responsibility; after all, anyone able to modify the /etc/passwd file could change any user's UID to 0.
In general, each Linux distribution will reserve a certain UID and GID for system virtual users to occupy. Virtual users are usually present during the system installation, and are necessary to accomplish system tasks, but virtual users are not allowed to log in to the system, such as ftp, nobody, adm, rpm, bin, shutdown, and so on;
In Fedora, the first 499 UIDs and GIDs are set aside: UIDs for newly added users start from 500, and GIDs also start from 500. Other systems may reserve the first 999 UIDs and GIDs. Looking at the /etc/ file of a Fedora system, UID_MIN is 500 and UID_MAX is 60000, which means a user added with adduser gets, by default, a UID between 500 and 60000; on Slackware, adding a user with adduser without specifying a UID gives a default UID starting from 1000.
2. About /etc/shadow ;
1) /etc/shadow Overview;
The /etc/shadow file is the shadow of /etc/passwd; it is not derived from /etc/passwd alone, and the two files are complementary. The shadow file holds the users and their encrypted passwords, as well as information that /etc/passwd cannot hold, such as the user's expiry data. This file can only be read and modified with root privileges; its permissions look like this:
-r-------- 1 root root 1.5K October 16 09:49 /etc/shadow
The permissions of /etc/shadow should not be changed to readable by other users, as this is dangerous. If you find that the permissions of this file have been changed to be readable by other user groups or users, check it to prevent system security problems;
If we view this file as a normal user, we should see nothing, suggesting that it has insufficient permissions:
[beinan@localhost ~]$ more /etc/shadow
/etc/shadow: insufficient privileges
2) Content analysis of /etc/shadow;
The contents of the /etc/shadow file consists of nine segments, each segment separated by a : sign; we illustrate this with the following example;
beinan:$1$VE.Mq2Xf$2c9Qi7EQ9JP8GKF8gH7PB1:13072:0:99999:7:::
linuxsir:$1$IPDvUhXP$8R6J/VtPXvLyXxhLWPrnt/:13072:0:99999:7::13108:
First field: the username (also known as the login name), which is the same as /etc/passwd in /etc/shadow, thus linking passwd to the user record used in shadow; this field is non-empty;
Second field: password (encrypted), if some user is x in this field, it means that this user can't log in to the system; this field is non-empty;
Third field: the last time the password was changed; this is the interval (in days) from January 01, 1970 to the last password change. You can change a user's password with passwd and then see the changes in this field in /etc/shadow;
Fourth field: the minimum number of days between password changes; if set to 0, this feature is disabled; that is, how many days must pass before a user can change his/her password; this feature is not very useful; the default value is obtained through the /etc/ file definitions, and there is a definition in PASS_MIN_DAYS;
Fifth field: the maximum number of days between two modifications of the password; this enhances the timeliness of the administrator's management of user passwords, and should be said to enhance the security of the system; if it is the system default, it is obtained from the /etc/file definition when the user is added, and is defined in PASS_MAX_DAYS;
Sixth field: how many days in advance to warn the user password will expire; when the user logs on to the system, the system login program reminds the user that the password will be invalidated; if it is the system default, it is obtained from the /etc/ file definition when the user is added, defined in PASS_WARN_AGE;
Seventh field: how many days after the expiration of the password to disable the user; this field indicates how many days after the user password is invalidated, the system will disable the user, that is to say, the system will no longer allow the user to log in, and will not prompt the user to expire, it is a complete disablement;
Eighth field: user expiration date; this field specifies the number of days (days from January 1, 1970) that the user will be voided; if the value of this field is null, the account is permanently available;
Field 9: Reserved field, currently empty for future Linux development;
For more details, please use man shadow to check the help, you will get more detailed information;
We will analyze it again based on examples:
beinan:$1$VE.Mq2Xf$2c9Qi7EQ9JP8GKF8gH7PB1:13072:0:99999:7:::
linuxsir:$1$IPDvUhXP$8R6J/VtPXvLyXxhLWPrnt/:13072:0:99999:7::13108:
First field: username (also called login name); in the example there are two records, which means there are two users, beinan and linuxsir.
The second field: the encrypted password. If any user has an x in this field, it means that this user cannot log in to the system, and can also be regarded as a virtual user, however, virtual and real users are relative, and the system administrator can operate on any user at any time;
The third field: indicates the number of days since the last password change (from January 01, 1970). The above example shows that the two users, beinan and linuxsir, changed their user passwords on the same day, of course by using the passwd command, and the number of days since January 01, 1970, when the password was changed, is 13072 days;
Fourth field: the minimum number of days between password changes; set to 0 here, which disables the restriction
Fifth field: the maximum number of days between two password changes; in the example both are 99999 days; if you do not specify this when adding a user, the default is taken from the /etc/ definition PASS_MAX_DAYS 99999; you can check /etc/ for the exact value;
Sixth field: how many days in advance the user is warned that the password will expire; when the user logs in, the login program reminds the user that the password is about to expire; if it is the system default, the value is taken from the /etc/ definition PASS_WARN_AGE when the user is added; in the example the value is 7, meaning the user is warned 7 days before the password is due to expire;
Seventh field: how many days after the expiration of the password to disable the user; this field indicates how many days after the user password is invalidated, the system will disable the user, that is, the system will no longer allow this user to log in, and will not prompt the user to expire, it is a complete disablement; in the example, this field is empty for both users, which means that this function is disabled;
Eighth field: user expiration date; this field specifies the number of days (counted from January 1, 1970) after which the user is disabled; if this field is empty, the account is permanently available. In the example this field is empty for beinan, so that user is permanently available, while for linuxsir the account expires 13108 days after January 01, 1970, which works out to November 21, 2005; if you are interested, do the math yourself, it comes out roughly the same ;)
Field 9: Reserved field, currently empty for future Linux development;
II. On user groups;
A group is a collection of users with certain common characteristics. The main user group configuration files are /etc/group and /etc/gshadow, of which /etc/gshadow is the encrypted information file of /etc/group; under this heading, you can also learn what GID is;
1. /etc/group Explanation;
The /etc/group file is the configuration file for user groups. Its content covers users and user groups, and it shows which group or groups a user belongs to, since a user can belong to one or more different groups; users in the same group share the group's permissions. For example, if we add a user to the root user group, that user can browse files in the root user's home directory; and if root opens a file up with group read, write and execute permission, every user in the root group can modify that file, and if it is an executable file (such as a script), users in the root group may execute it as well.
The user group mechanism is a great convenience for system administrators, but it also deserves attention from a security point of view: for example, if a user handles the most important parts of system administration, it is best to give that user a group of their own, or to set the permissions of that user's files to be completely private; and in general do not casually add ordinary users to the root user group.
2. /etc/group content specific analysis
The contents of /etc/group include the user group, user group password, GID, and the users contained in the user group, one record for each user group; the format is as follows:
group_name:passwd:GID:user_list
Each entry in /etc/group is divided into four fields:
First field: user group name;
Second field: user group password;
Third field: GID
Fourth field: the list of users in the group, separated by commas; this field can be empty; if it is empty, the group contains only the user whose primary GID matches this group;
Let's take an example:
root:x:0:root,linuxsir Note: The user group root, x is the password segment, indicating that no password has been set, and the GID is 0. The root user group includes root, linuxsir, and other users with a GID of 0 (which can be viewed via /etc/passwd);
beinan:x:500:linuxsir Note: The user group beinan, x is the password segment, which means that no password is set, and the GID is 500, and the beinan user group includes the linuxsir user and the user with GID 500 (which can be viewed via /etc/passwd);
linuxsir:x:502:linuxsir Note: The user group linuxsir, x is the password segment, indicating that no password has been set, and the GID is 502,linuxsir user group under the package user linuxsir and the user with GID 502 (can be viewed via /etc/passwd);
helloer:x:503: Note: The user group helloer, x is the password segment, which means that no password is set, and the GID is 503, and the helloer user group includes users with a GID of 503, which can be viewed via /etc/passwd;
The /etc/passwd counterpart has the associated record:
root:x:0:0:root:/root:/bin/bash
beinan:x:500:500:beinan sun:/home/beinan:/bin/bash
linuxsir:x:505:502:linuxsir open,linuxsir office,13898667715:/home/linuxsir:/bin/bash
helloer:x:502:503::/home/helloer:/bin/bash
This shows that the helloer user group includes the helloer user; so we look at the users owned by a user group, which we can get by comparing /etc/passwd and /etc/group;
2. About GID;
The GID is similar to the UID: a positive integer or 0. GIDs start at 0, and the group with GID 0 is assigned by the system to the root user group. The system reserves some of the lower GIDs for system virtual users (also known as pseudo-users); how many are reserved differs from system to system. Fedora, for example, reserves the first 500, so newly added user groups start from 500, whereas Slackware reserves the first 100 GIDs and new groups start from 100. To see the default GID range for groups added on your system, check the GID_MIN and GID_MAX values in /etc/.
Comparing the /etc/passwd and /etc/group files shows that every user has a default user group: in each user's record in /etc/passwd we can find the user's default GID, and in /etc/group we can see which users belong to each group. When creating directories and files, the default user group is used; let's look at an example anyway;
For example, I added the user linuxsir to the root user group; the relevant records in /etc/passwd and /etc/group are as follows.
The linuxsir user's record in /etc/passwd: in this record we see that linuxsir's default GID is 502, and in /etc/group GID 502 turns out to be the linuxsir user group;
linuxsir:x:505:502:linuxsir open,linuxsir office,13898667715:/home/linuxsir:/bin/bash
The linuxsir user's entry in /etc/group; here, we see that the linuxsir user group has a GID of 502, and the linuxsir user belongs to the root, beinan user group;
root:x:0:root,linuxsir
beinan:x:500:linuxsir
linuxsir:x:502:linuxsir
Let's create a directory with linuxsir to see what permissions are attributed to the directory created by the linuxsir user;
[linuxsir@localhost ~]$ mkdir testdir
[linuxsir@localhost ~]$ ls -lh
Total usage 4.0K
drwxrwxr-x 2 linuxsir linuxsir 4.0K October 17 11:42 testdir
When we create a directory with linuxsir, we find that testdir's permissions are still attributed to the linuxsir user and the linuxsir user group; they are not attributed to the root and beinan user groups, see?
However, it is worth noting that the default GID is not the most important thing when judging a user's access rights. As long as a directory allows the same group of users to have access rights, then the same group of users can have access rights to the directory, in which case the user's default GID is not the most important thing;
3. /etc/gshadow Explanation;
/etc/gshadow is the encrypted counterpart of /etc/group; for example, the user group (Group) administration passwords are stored in this file. /etc/gshadow and /etc/group are complementary files. On large servers, where permission models with complex relationships are customized for many users and groups, setting user group passwords can be quite necessary. For example, if we do not want certain non-members to hold a group's privileges and features permanently, we can use password authentication to let some users take on the group's features temporarily; that is what the user group password is for.
The format of /etc/gshadow is as follows, with one line for each user group;
groupname:password:admin,admin,…:member,member,…
First field: user group
Second field: user group password, this field can be empty or ! , if it is empty or has ! , it means there is no password;
Third field: user group manager, this field can also be empty, if there is more than one user group manager, split by the , sign;
Fourth field: group members, split by the , sign if there are multiple members;
Examples:
beinan:!::linuxsir
linuxsir:oUS/q7NH75RhQ::linuxsir
First field: in this example there are two user groups, beinan and linuxsir;
Second field: the user group password; the beinan group has no password, while the linuxsir group has one, stored encrypted;
Third field: the user group manager; empty for both groups;
Fourth field: the member of the beinan user group is linuxsir; you can then cross-check /etc/group and /etc/passwd to see whether there are other members (a group with the same name as the user is usually created by default when a user is added); the member of the linuxsir user group is linuxsir;
How do I set a password for a usergroup? We can do this with gpasswd; however, in general, it is not necessary to set passwords for usergroups; however, it is necessary to practice on your own; here is an example of setting passwords for the linuxsir usergroup;
Usage of gpasswd: gpasswd groupname

[root@localhost ~]# gpasswd linuxsir
Changing the password for the linuxsir group
New password:
Please re-enter the new password:
To switch between user groups, you should use newgrp, which is kind of like switching between users with su; I'll give you an example first:
[beinan@localhost ~]$ newgrp linuxsir
Password:
[beinan@localhost ~]$ mkdir lingroup
[beinan@localhost ~]$ ls -ld lingroup/
drwxr-xr-x 2 beinan linuxsir 4096 October 18 15:56 lingroup/
[beinan@localhost ~]$ newgrp beinan
[beinan@localhost ~]$ mkdir beinangrouptest
[beinan@localhost ~]$ ls -ld beinangrouptest
drwxrwxr-x 2 beinan beinan 4096 October 18 15:56 beinangrouptest
Note: I switched to linuxsir with the beinan usergroup and created a directory, then switched back to beinan and created another directory. Observe the difference in the usergroups of the two directories; it's better to experience it yourself;
III. Querying or managing users through the user and user group configuration files;
1. Methods for user and user group queries;


(1) Viewing user information by looking at the user (User) and user group (Group) configuration files
Now that we have a basic understanding of the user and user group configuration files, we can learn about the system's users simply by viewing those files; of course, you can also use tools such as id or finger to query users.
For viewing the files we can use more or cat, e.g. more /etc/passwd or cat /etc/passwd; any text viewer will do, less works just as well.
For example, we can view /etc/passwd with more, cat or less; the commands differ but achieve the same purpose, namely showing the contents of /etc/passwd;
[root@localhost ~]# more /etc/passwd
[root@localhost ~]# cat /etc/passwd
[root@localhost ~]# less /etc/passwd


2) Get user info via id and finger tools; ------- is important -----
In addition to viewing the User and Group configuration files directly, we have the id and finger tools, which query users from the command line. Each has its own focus: id concentrates on the user, the groups the user belongs to, and the UID and GID; finger concentrates on user information such as the user name (login name), phone number, home directory, login shell, real name, idle time and so on;


id Command Usage;
id [options] username
For example: I want to query the UID and GID of the beinan and linuxsir users and the user groups they belong to:
[root@localhost ~]# id beinan
uid=500(beinan) gid=500(beinan) groups=500(beinan)
Note: The UID of beinan is 500, the default user group is beinan, and the GID of the default user group is 500, which belongs to the beinan user group;
[root@localhost ~]# id linuxsir
uid=505(linuxsir) gid=502(linuxsir) groups=502(linuxsir),0(root),500(beinan)
Note: The UID of linuxsir is 505, the default user group is linuxsir, and the GID of the default user group is 502, which belongs to linuxsir (GID 502), root (GID 0), and beinan (GID 500);
I'll cover the detailed usage of id in the article dedicated to user queries; you can check the usage via man id, which is still relatively simple to use;

Usage of finger
finger [options] username1 username2 ...
See man finger for more details on how to use it; I'll cover it in the article dedicated to user queries;
If finger is run without any options or user names, it shows the users currently logged in, similar to the w command; compare the two below, each with its own focus;


   [root@localhost ~]# w
14:02:42 up 1:03, 3 users, load average: 0.04, 0.15, 0.18
USER TTY FROM LOGIN@ IDLE JCPU PCPU WHAT
linuxsir tty1 - 13:39 22:51 0.01s 0.01s -bash
beinan tty2 - 13:53 8:48 11.62s 0.00s /bin/sh /usr/X1
beinan pts/0 :0.0 13:57 0.00s 0.14s 1.08s gnome-terminal


[root@localhost ~]# finger
Login Name Tty Idle Login Time Office Office Phone
beinan beinan sun tty2 8 Oct 18 13:53
beinan beinan sun pts/0 Oct 18 13:57 (:0.0)
linuxsir linuxsir open tty1 22 Oct 18 13:39 linuxsir o +1-389-866-771
If we add the username after finger, we can see more detailed information about the user, and we can view more than one user at a time, separated by spaces, for example, in the following example, we query the information of two users at a time, beinan and linuxsir;
[root@localhost ~]# finger beinan linuxsir
Login: beinan Note: Username (also login name) Name: beinan sun (full username)
Directory: /home/beinan Note: Home directory Shell: /bin/bash Note: SHELL type used
On since Tue Oct 18 13:53 (CST) on tty2 10 minutes 55 seconds idle Note: Idle time;
On since Tue Oct 18 13:57 (CST) on pts/0 from :0.0
No mail.
No Plan.
Login: linuxsir Name: linuxsir open
Directory: /home/linuxsir Shell: /bin/bash
Office: linuxsir office, +1-389-866-7715
On since Tue Oct 18 13:39 (CST) on tty1 24 minutes 58 seconds idle
No mail.
No Plan.


3) Approach to user group queries;
We can query the groups we belong to by user, using groups; for example, if I query the groups that beinan and linuxsir belong to, we can use groups;
[root@localhost ~]# groups beinan linuxsir
beinan : beinan
linuxsir : linuxsir root beinan
Note: This is a groups view of the groups to which the users beinan and linuxsir belong at the same time;


2. Adding users by modifying the user (User) and user group (Group) configuration files;
Since we said earlier that users can be managed by modifying their configuration files, this topic has to cover it. Of course, managing users through the user and group management tools such as adduser, userdel, usermod, userinfo, groupadd, groupdel, groupmod and so on is also possible; management through those tools will be described in a separate article.
Here we walk through the process of adding a user by modifying the User and Group configuration files;
Let's take adding a user as the example; deleting and modifying a user is comparatively simple;
1) Modify /etc/passwd to add user records;
We add the new user record following the format conventions of /etc/passwd; conversely, if you want to disable a user you can delete that user's record; it is worth noting that UIDs must not be duplicated;
For example, I want to add the user lanhaitun, I found that UID 508 is not used by any user, and I want to set its user group to lanhaitun, and the GID of the user group is also set to 508, if the GID is not occupied;
We're going to open /etc/passwd and add a line at the bottom;
lanhaitun:x:508:508::/home/lanhaitun:/bin/bash
Then run pwconv to synchronize /etc/passwd with /etc/shadow. You can see if the contents of /etc/shadow are synchronized;
[root@localhost beinan]# pwconv
2) Modify /etc/group
First, we have to check if there is a lanhaitun user group and if GID 508 is occupied by another user group;
[root@localhost ~]# more /etc/group |grep lanhaitun
[root@localhost ~]# more /etc/group |grep 508
By looking at it, we see that it's not occupied; so we'll add lanhaitun's record to /etc/group
lanhaitun:x:508:
The next step is to run grpconv to synchronize the contents of /etc/group and /etc/gshadow. You can check if the group has been added by looking at the changes in the contents of /etc/gshadow;
[root@localhost beinan]# grpconv
3) Create the user's home directory and copy the user's startup file there as well;
To create the user's home directory, we follow the record added to /etc/passwd for the new user: the home directory of the new user lanhaitun is /home/lanhaitun. We also copy the hidden startup files (.*) from the /etc/skel directory into it;
[root@localhost ~]# cp -R /etc/skel/ /home/lanhaitun
[root@localhost ~]# ls -la /home/lanhaitun/
Total usage 48
drwxr-xr-x  3 root root 4096 October 18 14:53 .
drwxr-xr-x 10 root root 4096 October 18 14:53 ..
-rw-r--r--  1 root root   24 October 18 14:53 .bash_logout
-rw-r--r--  1 root root  191 October 18 14:53 .bash_profile
-rw-r--r--  1 root root  124 October 18 14:53 .bashrc
-rw-r--r--  1 root root 5619 October 18 14:53 .canna
-rw-r--r--  1 root root  438 October 18 14:53 .emacs
-rw-r--r--  1 root root  120 October 18 14:53 .gtkrc
drwxr-xr-x  3 root root 4096 October 18 14:53 .kde
-rw-r--r--  1 root root  658 October 18 14:53 .zshrc
4) Change the attributes and permissions of the new user's home directory;
We found that the owner of the new user's home directory is currently root, and the hidden files in the home directory are also root-privileged;
[root@localhost ~]# ls -ld /home/lanhaitun/
drwxr-xr-x 3 root root 4096 October 18 14:53 /home/lanhaitun/
So we're going to change the /home/lanhaitun directory to the lanhaitun user with the chown command;
[root@localhost ~]# chown -R lanhaitun:lanhaitun /home/lanhaitun
Check to see if the owner has been changed to be owned by the lanhaitun user;
[root@localhost ~]# ls -ld /home/lanhaitun/
drwxr-xr-x 3 lanhaitun lanhaitun 4096 October 18 14:53 /home/lanhaitun/
[root@localhost ~]# ls -la /home/lanhaitun/
Total usage 48
drwxr-xr-x  3 lanhaitun lanhaitun 4096 October 18 14:53 .
drwxr-xr-x 10 root      root      4096 October 18 14:53 ..
-rw-r--r--  1 lanhaitun lanhaitun   24 October 18 14:53 .bash_logout
-rw-r--r--  1 lanhaitun lanhaitun  191 October 18 14:53 .bash_profile
-rw-r--r--  1 lanhaitun lanhaitun  124 October 18 14:53 .bashrc
-rw-r--r--  1 lanhaitun lanhaitun 5619 October 18 14:53 .canna
-rw-r--r--  1 lanhaitun lanhaitun  438 October 18 14:53 .emacs
-rw-r--r--  1 lanhaitun lanhaitun  120 October 18 14:53 .gtkrc
drwxr-xr-x  3 lanhaitun lanhaitun 4096 October 18 14:53 .kde
-rw-r--r--  1 lanhaitun lanhaitun  658 October 18 14:53 .zshrc
That seems to have worked;
But this is still not enough, because the permissions of the /home/lanhaitun/ directory are probably too open;
drwxr-xr-x 3 lanhaitun lanhaitun 4096 October 18 14:53 /home/lanhaitun/
We see that the permissions of /home/lanhaitun/ are drwxr-xr-x, which means users in the same group and in other groups can also look inside. To keep it private, it is reasonable to set the permissions of a newly added user's home directory so that only the owner can read, write and execute it; so:
[root@localhost ~]# chmod 700 /home/lanhaitun/
[root@localhost ~]# ls -ld /home/lanhaitun/
drwx------ 3 lanhaitun lanhaitun 4096 October 18 14:53 /home/lanhaitun/
Now let's try it as another user (other than the all-powerful root, of course); for example, if I look at lanhaitun's home directory as the beinan user, I get the following message;
[beinan@localhost ~]$ ls -la /home/lanhaitun/
ls: /home/lanhaitun/: insufficient privileges
So it seems that the lanhaitun user's home directory is secure


5) Set the password for the added user;
With the steps above done in order, we still have to set a password for the new user; that is done with the passwd command, and it cannot be solved by editing a file;
passwd usage:
passwd user
[root@localhost ~]# passwd lanhaitun
Changing password for user lanhaitun.
New UNIX password: Note: enter the password
Retype new UNIX password: Note: type it again
passwd: all authentication tokens updated successfully. Note: the password was set successfully
6) Test the success of adding users;
You can test by logging in as a new user, or by switching users via su;
[beinan@localhost ~]$ su lanhaitun
Password:
[lanhaitun@localhost beinan]$ cd ~
[lanhaitun@localhost ~]$ pwd
/home/lanhaitun
[lanhaitun@localhost ~]$ ls -la
Total usage 52
drwx------  3 lanhaitun lanhaitun 4096 October 18 15:15 .
drwxr-xr-x 10 root      root      4096 October 18 14:53 ..
-rw-r--r--  1 lanhaitun lanhaitun   24 October 18 14:53 .bash_logout
-rw-r--r--  1 lanhaitun lanhaitun  191 October 18 14:53 .bash_profile
-rw-r--r--  1 lanhaitun lanhaitun  124 October 18 14:53 .bashrc
-rw-r--r--  1 lanhaitun lanhaitun 5619 October 18 14:53 .canna
-rw-r--r--  1 lanhaitun lanhaitun  438 October 18 14:53 .emacs
-rw-r--r--  1 lanhaitun lanhaitun  120 October 18 14:53 .gtkrc
drwxr-xr-x  3 lanhaitun lanhaitun 4096 October 18 14:53 .kde
-rw-------  1 lanhaitun lanhaitun   66 October 18 15:15 .xauthOhEoTk
-rw-r--r--  1 lanhaitun lanhaitun  658 October 18 14:53 .zshrc
[lanhaitun@localhost ~]$ mkdir testdir
[lanhaitun@localhost ~]$ ls -lh
Total usage 4.0K
drwxrwxr-x 2 lanhaitun lanhaitun 4.0K October 18 15:16 testdir
Through the series of actions above we can see that the lanhaitun user was created successfully;
3. Modifying a user or user group by editing the user (User) and user group (Group) configuration files;
We can modify /etc/passwd and /etc/group to change a user and the groups the user belongs to; the process is similar to adding a new user. For example, to change the lanhaitun user's full name, company, phone number and other information, we simply edit /etc/passwd;
1) Modify user information;
lanhaitun:x:508:508::/home/lanhaitun:/bin/bash Note: This is the initial record;
We can modify it to
lanhaitun:x:508:508:lanhaitun wu,Office Dalian,13000000000:/home/lanhaitun:/bin/bash
Of course, we can also change the user's shell, home directory and so on; if you change the home directory you have to create the new directory and fix its owner and permissions, just as in the procedure for adding a user above;
Once the modifications are done, run pwconv to synchronize, and then view the user's information with finger, etc.;
[root@localhost lanhaitun]# pwconv
[root@localhost lanhaitun]# finger lanhaitun
Login: lanhaitun Name: lanhaitun wu
Directory: /home/lanhaitun Shell: /bin/bash
Office: Office Dalian, +1-300-000-0000
Never logged in.
No mail.
No Plan.


2) Modify the group to which the user belongs, which can be achieved by /etc/group modification;
Of course, modifying users and user groups can be achieved not only by modifying configuration files, but also by usermod and chfn; I will write about this in a later document, which is also relatively simple; you can view the usage through man; here we first talk about how to achieve the purpose by modifying configuration files;
If we want to assign the user lanhaitun to the root user group, we can still do this by modifying /etc/group; find the line in /etc/group starting with root and add lanhaitun to it as planned;
root:x:0:root,lanhaitun
If you don't understand, see the previous /etc/group explanation, thanks;
Then execute the grpconv command to synchronize the contents of the /etc/group and /etc/gshadow files;
[root@localhost ~]# grpconv
View information about the lanhaitun attribution group;
[root@localhost ~]# id lanhaitun
uid=508(lanhaitun) gid=508(lanhaitun) groups=508(lanhaitun),0(root)
3) Deleting users and user groups;
This is relatively simple: we can delete the corresponding user and group records from /etc/passwd and /etc/group, or use userdel and groupdel to delete the user and user group;
If you delete a user by editing the user and group configuration files, just remove the corresponding records, and if you do not want to keep the home directory, simply delete it as well.
[root@localhost ~]# userdel lanhaitun
[root@localhost ~]# userdel -r lanhaitun
Note: Both examples use userdel to delete the lanhaitun user; as we can see, the second example has the additional parameter -r.
The first example only deletes the lanhaitun user, while his home directory, mail, etc. are still preserved; adding the -r parameter also deletes the home directory, mail, etc., so be careful. When userdel deletes a user it also deletes the user's private group; we can look at /etc/passwd and /etc/group to see the changes.
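If a group has to be removed separately (for example a shared group, or a group that remains after the user is gone), groupdel can be used; a minimal sketch, assuming a leftover lanhaitun group:
[root@localhost ~]# groupdel lanhaitun    ---- remove the group record from /etc/group and /etc/gshadow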


===================================================
linux system account and password policy
===================================================
I. Description of the situation
In order to meet system security requirements and maximize host and data security, the following security policy settings are made for the accounts and passwords of the Linux system.
Accounts and passwords are managed on Linux systems through the following configuration files:
         /etc/
         /etc/shadow
The /etc/ file is a global file that is set to work for all newly created users on the system (except for the root user).
However the /etc/shadow file can be used to set policies for each specific account.
Therefore, we can modify the /etc/ file so that all newly created users strictly follow the policy restrictions, and we can manually edit the /etc/shadow file to bring the accounts that already exist on the system into compliance with the policy.


II. Modifying the /etc/ file
Modify the maximum number of days an account password remains valid
Add a test account to test the effect and view the current information of the test account
1. Adjust the system time to artificially expire the test account.
2. Login to the system with a test account
The system reminds you that the password has expired and must be changed
The system prompts that the account password needs to be changed, and the new password must meet the password complexity requirements, otherwise you will not be allowed into the system.
3. Login to the system to check the status of the test account
Password expiration date is automatically moved back 1 day
And from the output, you can see that the expiration date of this account has been postponed by 1 day.


III. The /etc/ file also allows you to set the minimum password length for accounts
PASS_MIN_LEN 5 The system default is a minimum of 5 characters.
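The file name is truncated in these notes; on most distributions the password-aging defaults for newly created accounts live in /etc/login.defs (an assumption here, so check your own system). A minimal sketch of the relevant entries:
PASS_MAX_DAYS   90    ---- maximum number of days a password stays valid
PASS_MIN_DAYS   0     ---- minimum number of days between password changes
PASS_MIN_LEN    8     ---- minimum password length (the default is 5, as noted above)
PASS_WARN_AGE   7     ---- days of warning given before a password expires
These values only affect accounts created after the change; existing accounts are adjusted one by one as described in section IV below.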


IV. Modifying the /etc/shadow file
This file can be modified manually but for security and correctness we recommend using one of the commands provided by the system: chage
-m : The minimum number of days between password changes. A value of zero means the password can be changed at any time.
-M : The maximum number of days the password will remain valid.
-W : The number of days to receive a warning message in advance before the user's password expires.
-E : The date when the account expires. After this date, the account will be unavailable.
-d : The date of the last password change.
-l : List the current settings; this lets unprivileged users determine when their password or account expires.
You must be root to use this command (except for the -l parameter), so we can use the following command and parameters:
#chage -M 90 -W 7 account    ---- the password is valid for 90 days and the user is warned 7 days before it expires.
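To check the result, chage -l can be run for the same account; a minimal sketch using the test account mentioned above:
#chage -M 90 -W 7 test    ---- password valid for 90 days, warn 7 days in advance
#chage -l test            ---- list the password aging settings now in effect for test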
 
V. Sudo settings
In order to facilitate future maintenance and avoid the situation where the root account password expires and the operating system can no longer be administered, sudo privileges are configured for a particular account.
1. Basic steps
Log in to the system as root
Run #visudo
Add content to the end of the file:
test  ALL=(root) /usr/bin/chage
In this line: "test" is the account name; the first "ALL" means the rule applies on all hosts; "(root)" grants the ordinary user the right to run the command with root privileges; "/usr/bin/chage" is the command the ordinary user is allowed to execute (see the sketch below).
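With that sudoers entry in place, the test account can run chage through sudo; a minimal sketch (the account being inspected is just an example):
$ sudo -l                        ---- list the commands the current user may run via sudo
$ sudo /usr/bin/chage -l test    ---- run chage as root, here to view the aging settings of the test account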


======================================
Introduction to Linux clocks: very important
======================================
There are two clocks in a Linux machine, a hardware clock (CMOS clock) and a kernel clock.
The hardware clock is battery driven and works through a specialized chip. It can be set via the BIOS setup screen or some system commands such as hwclock.
The kernel clock is maintained by the kernel, which reads the time from the hardware at startup and runs independently afterwards.
RTC(Real Time Clock)
That is, the Linux real-time clock. It is usually embedded in the computer's chipset, though some implementations use a Motorola MC146818 (or clone) on the motherboard. This hardware device can be mapped to /dev/rtc for programmatic access by root.


Linux divides the clock into the System Clock and the Hardware Clock (Real Time Clock, or RTC). The system time is the current clock in the Linux kernel, while the hardware clock is the battery-powered hardware clock on the motherboard, which can be set in the BIOS "Standard BIOS Features" item. Since Linux has two clock systems, which clock does Linux use by default? Will the two clocks conflict? These questions and concerns are not unreasonable. First of all, Linux has no single default clock system: when Linux starts, the system clock reads the hardware clock's setting, and from then on the system clock runs independently of the hardware.
From the point of view of the Linux boot process, the system clock and the hardware clock do not conflict, but all commands (and functions) in Linux use the system clock. Moreover, the system clock and the hardware clock can run asynchronously, i.e. the system time and the hardware time can differ. This is of little significance to the average user, but of great use to Linux network administrators. For example, to synchronize servers in a large network spanning several time zones, say a Linux server in New York and one in Beijing, neither server needs to change its hardware clock; one of them only needs to temporarily set a system time (for example, setting the Beijing server's system time to New York time), let the two servers finish synchronizing their files, and then resynchronize its system clock with its original hardware clock. In this way, separate system and hardware clocks allow for more flexible operation.
In Linux, the main commands used to view and set the clock are date, hwclock, and clock. clock and hwclock are similar in usage, except that the clock command supports the x86 hardware system as well as the Alpha hardware system. Since most users use the x86 hardware system, these two commands can be regarded as one command to learn.




1. hwclock -r : check whether the hardware clock is set to local time
To check whether the hardware clock is local time, run the command hwclock -r. Here the system responds: "Could not open RTC: No such file or directory", i.e., the RTC device file cannot be found.


2. /dev/rtc: how to create the RTC file
cat /dev/rtc
cat /proc/driver/rtc ----- This is important!
mknod /dev/rtc c 10 135
date


Since the kernel was compiled without RTC support, you need to recompile the kernel (or add a module to it). To do this, enable "Enhanced Real Time Clock Support" in the "Character devices" section during make menuconfig. After adding this module to the kernel, the rtc file appears under /proc/driver/, and cat can display its contents normally. However, there is still no rtc file in the /dev/ directory.
So we create the rtc device file in /dev with the mknod command: "mknod /dev/rtc c 10 135". After the command is executed, the rtc file is created under /dev. Running hwclock -r now shows that the hardware time is local time. Checking the system time with the date command shows UTC time, which means the system has not yet been set to local time.


3. /etc/localtime determines which time zone is used.
[root@nas-oam1 etc]# date -R View the time and time zone.
Mon, 23 Dec 2013 11:33:48 +0800
Linux obtains the system time zone through the /etc/localtime symbolic link.
On some systems the time zone can be chosen interactively with the tzselect command; alternatively, you can set the time zone to Asia/Shanghai with the command "ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime".


4./etc/sysconfig/clock
This profile can be used to set which way the user chooses to display the time.
If the hardware clock is local time, UTC is set to 0 (default) and the environment variable TZ does not have to be set.
If the hardware clock is in UTC time, set UTC to 1 and set the environment variable TZ (or the configuration file /etc/TZ) to the time zone information, such as "Asia/Shanghai".


The hardware time of my machine is local time, so the contents of this configuration file are:
ZONE="Asia/Shanghai"
UTC=0
ARC=0




 
1. Use the date command in a virtual terminal to view and set the system time.
View the operation of the system clock:
# date -------------------------- useful


The operation of setting the system clock:
# date 091713272003.30 -------------------------- useful
Generic setting format:
# date MMDDhhmm[[CC]YY][.ss]  (month, day, hour, minute, optional year, optional seconds; the example above sets 2003-09-17 13:27:30)


2. Use the hwclock or clock command to view and set the hardware clock
View the operation of the hardware clock:
# hwclock --show or # clock --show -------------------------- useful
Wednesday September 17th, 2003 at 13:24:11 -0.482735 seconds
The operation of setting the hardware clock:
# hwclock --set --date="08/23/2013 09:52:00"
Or # clock --set --date="09/17/2003 13:26:00"
Generic setting format: hwclock/clock --set --date="month/day/year hour:minute:second".


3. System clock and hardware clock synchronization:
# hwclock --hctosys or -------------------------- useful
# clock --hctosys -------------------------- useful
In the above command, --hctosys indicates Hardware Clock to SYStem clock.


Synchronize the hardware clock to the system clock (--systohc means SYStem clock TO Hardware Clock):
hwclock --systohc or -------------------------- useful
clock --systohc -------------------------- useful


Note that: clock is a soft link to hwclock
[root@oam-nas2 yuanjs]# ls -l /sbin/clock*
lrwxrwxrwx. 1 root root 7 April 3 15:55 /sbin/clock -> hwclock


===============================
Changing the date and time under linux
===============================
We usually use the command "date -s" to change the system time. For example, the command to set the system time to November 13, 2009 is as follows.
#date -s 11/13/09 
The command to set the system time to 1:12:0 PM is as follows.
#date -s 13:12:00


---- Note that this refers to the system time, which is maintained by the Linux operating system.


At system startup, the Linux operating system reads the time from CMOS into the system time variable; later changes to the time are made by modifying the system time. To keep the system time consistent with the CMOS time, Linux writes the system time back into CMOS periodically (about every 11 minutes). Because this synchronization only happens periodically, if we reboot the machine right after executing date -s, the change may not yet have been written to CMOS, and this is the cause of the problem. To make sure the change takes effect, you can run the following command.
The command #clock -w ---- forces the system time to be written to CMOS.




======================================
Introduction to Linux Time Zones: Very Important
======================================
CST China Standard Time ----- UTC+8:00, China Standard Time (Beijing Time)
Universal Time Coordinated (UTC)
The GPS system distinguishes two kinds of time: UTC and LT (local time). The difference between them is just the time zone: UTC is the time of time zone 0, while local time is the time in the local zone. For example, when it is 8:00 a.m. in Beijing (UTC+8), the UTC time is 0:00 a.m., eight hours behind Beijing time; the conversion is done this way.
HKT  UTC+8  Hong Kong Time
JST  UTC+9  Japanese Standard Time


1. Viewing time zones
The time zone of the system is defined by the contents of /etc/localtime.
1) cat /etc/localtime to see which time zone is currently selected. -------- focus ----------
  cat /etc/timezone    (used on some Unix/Linux systems)
  cat /etc/TZ          (used on some Unix systems)
2) cat /etc/sysconfig/clock to see the current system clock time zone and hardware clock time zone and what their time difference is
This profile can be used to set which way the user chooses to display the time.
If the hardware clock is local time, UTC is set to 0 and the environment variable TZ does not have to be set.
If the hardware clock is in UTC time, set UTC to 1 and set the environment variable TZ (or the configuration file /etc/TZ) to the time zone information, such as "Asia/Shanghai".
The hardware time of my machine is local time, so the contents of this configuration file are:
ZONE="Asia/Shanghai"
UTC=0 #default value
ARC=0


How to check if the hardware clock is local time:
hwclock -r


2. Setting the time zone
1) Look up the name of the time zone you want to change in the /usr/share/zoneinfo/ directory, and modify the format as above.
2) /etc/localtime can be either a binary file or a softlink file.
Remove the original localtime file;
   # mv /etc/localtime /etc/localtime-old
3) Two methods:
(1) Make a new localtime file by linking the corresponding time zone file to it.
# ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime -------- focus ----------
(2) Or copy it: cp /usr/share/zoneinfo/$primary/$secondary /etc/localtime.
# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime -------- focus ----------
4) Set the time zone entry in the /etc/sysconfig/clock file (e.g. "Asia/Shanghai") ---- and set whether the hardware clock uses local time or UTC.
   ZONE="Asia/Shanghai"
UTC=0 #default value Refer to above
   
[root@xiangfch etc]# cat /etc/sysconfig/clock
# The time zone of the system is defined by the contents of /etc/localtime.
# This file is only for evaluation by system-config-date, do not rely on its
# contents elsewhere.
ZONE="Asia/Shanghai"




======================================
How to change the time zone under linux (TIMEZONE)
======================================
How to change timezone in linux? thanks!
Check whether there is a localtime file under the /etc path; if there is, it should be a soft link, and changing what the link points to changes the system timezone setting.


Most cluster software nowadays requires that the clocks of the machines differ by no more than about 1000 seconds, so users generally just adjust the time with the date command, which is simple and needs no further explanation. Sometimes, however, two machines show the same time but sit in different time zones, and then the machine's time zone has to be changed. This is done differently on different operating systems, so here is how to change the time zone on the mainstream operating systems:


Solaris:
In Solaris, to change the time zone you modify the /etc/TIMEZONE file, where TZ=PRC means the China time zone; we can replace it with TZ=US/Pacific and then reboot the machine, after which the time zone becomes the U.S. Pacific time zone.
There are three things to keep in mind here:
1, on X86 machines, you need to execute the following command again to update the /etc/rtc_config file:
# rtc -z zone-name (where zone-name is the value of the TZ in /etc/TIMEZONE)
       # rtc -c
2, how many time zones are there to choose from? Look in the /usr/share/lib/zoneinfo directory: it contains many directories, such as US, and many files, such as PRC; this means there are further time zones under US, while PRC is a single unified time zone. That is why we see the two different forms TZ=PRC and TZ=US/Pacific.
3, you need to reboot the system to make it take effect.


Linux(Redhat and Suse):
1, look up the name of the time zone you want in the /usr/share/zoneinfo/ directory; the format is as described above
2, remove the original localtime file;
       # mv /etc/localtime /etc/localtime-old
3, make a new localtime file and link the corresponding timezone file over to it
       # ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
4, Synchronization with hardware
       # /sbin/hwclock --systohc


AIX:
1, view the current time zone (on other operating systems the date command can show this)
cat /etc/environment (look for the line containing TZ)
2, to be safe, it is recommended to use smit to change the time zone
       smit chtz
3, all the time zone information in /usr/share/lib/zoneinfo directory


HPUX:
1, # set_parms timezone, can modify the timezone interactively.


For all Unix systems (excluding Linux), the current time zone can be viewed by echo $TZ.
    
Alternatively you can install the system-config-date tool:
yum install system-config-date


More information:
Linux Time Zone Configuration guide:
Background - The Earth is divided into time zones that are 15 degrees of longitude each, for this corresponds to the amount of angular distance the Sun appears to travel in 1 hour. 0 degrees longitude runs through the Royal Observatory in Greenwich, England. This is the origin of Greenwich Mean Time, or GMT. For all practical purposes, GMT and UTC are the same. To complicate matters, some countries observe Daylight Savings Time (DST), while others do not. Even within some countries, some states or districts do not observe DST while the rest of the country does! DST can also begin and end on different days in different countries! What a mess...
There are several files and directories that are used for time zones, and several tools:
- /etc/sysconfig/clock - this is a short text file that defines the timezone, whether or not the hardware clock is using UTC, and an ARC option that is only relevant to DEC systems.
  eg: /etc/sysconfig/clock file content on jstest3:
       ZONE="Asia/Shanghai"
       UTC=true
       ARC=false

- /etc/localtime - this is a symbolic link to the appropriate time zone file in /usr/share/zoneinfo
- /usr/share/zoneinfo - this directory contains the time zone files that were compiled by zic (the time zone compiler; zic creates the time conversion information files). These are binary files and cannot be viewed with a text viewer. The files contain information such as rules about DST. They allow the kernel to convert UTC UNIX time into appropriate local dates and times.
- /etc// - this script runs once, at boot time. A section of this script sets the system time from the hardware clock and applies the local time zone information.
- /etc//halt - this script runs during system shutdown. A section of this script synchronizes the hardware clock from the system clock.
- /etc/adjtime - this file is used by the adjtimex function, which can smoothly adjust system time while the system runs. settimeofday is a related function.
- redhat-config-date or dateconfig - these commands start the Red Hat date/time/time zone configuration GUI.
- zdump - this utility prints the current time and date in the specified time zone. Example:
# zdump Japan
Japan Sat Mar 29 00:47:57 2003 JST
# zdump Iceland
Iceland Fri Mar 28 15:48:02 2003 GMT


RedHat Linux operating system to change the time zone
Most modern Linux distributions have user-friendly programs to set the timezone, often accessible through the program menus or by right-clicking the clock in a desktop environment such as KDE or GNOME. Failing that, it's possible to manually change the system timezone in Linux in a few short steps.
Steps
1.     Logged in as root, check which timezone your machine is currently using by executing `date`. You'll see something like "Mon 17 Jan 2005 12:15:08 PM PST -0.461203 seconds", PST in this case is the current timezone.
2.     Change to the directory /usr/share/zoneinfo; here you will find a list of time zone regions. Choose the most appropriate region; if you live in Canada or the US this is the "America" directory.
3.     If you wish, backup the previous timezone configuration by copying it to a different location. Such as `mv /etc/localtime /etc/localtime-old`.
4.     Create a symbolic link from the appropriate timezone to /etc/localtime. Example: `ln -s /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime`.
5.     If you have the utility rdate, update the current system time by executing `/usr/bin/rdate -s `. (This step can be skipped!)
6.     Set the ZONE entry in the /etc/sysconfig/clock file (e.g. "America/Los_Angeles")
7.     Set the hardware clock by executing: ` /sbin/hwclock --systohc`
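Putting the steps above together, a minimal sketch of the whole procedure (Europe/Amsterdam is taken from the example; substitute your own zone):
# date                                                         ---- check the current time zone
# mv /etc/localtime /etc/localtime-old                         ---- back up the old setting
# ln -s /usr/share/zoneinfo/Europe/Amsterdam /etc/localtime    ---- link in the new zone file
# vi /etc/sysconfig/clock                                      ---- set ZONE="Europe/Amsterdam"
# /sbin/hwclock --systohc                                      ---- write the system time back to the hardware clock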
Tips
·       On some versions of RedHat Linux, Slackware, Gentoo, SuSE, Debian, Ubuntu, and anything else that is "normal", the command to display and change the time is 'date', not 'clock'
·       On RedHat Linux there is a utility called "Setup" that allows one to select the timezone from a list, but you must have installed the 'redhat-config-date' package.
Warnings
·       Some applications (such as PHP) have separate timezone settings from the system timezone.
·       On some systems, /etc/localtime is actually a symlink to the appropriate file under the /usr/share/zoneinfo directory (rather than a copy of that file).
·       On some systems, there is a system utility provided that will prompt for the correct timezone and make the proper changes to the system configuration. For example, Debian provides the "tzsetup" utility.
Here is an example of changing the timezone: (Logged in as root)
In order to manually change the timezone, you can edit the /etc/sysconfig/clock file and then make a new soft link to /etc/localtime. Here is an example of changing the timezone manually to "America/Denver":
1. Select the appropriate time zone from the /usr/share/zoneinfo directory. Time zone names are relative to that directory. In this case, we will select "America/Denver"
2. Edit the /etc/sysconfig/clock text file so that it looks like this:
ZONE="America/Denver"
UTC=true
ARC=false
Of course, this assumes that your hardware clock is running UTC time...
3. Delete the following file: /etc/localtime (back it up first if you might need it!)
4. Create a new soft link for /etc/localtime. Here is an example of step 3 and step 4:
# cd /etc
# ls -al localtime
lrwxrwxrwx 1 root root 39 Mar 28 07:00 localtime -> /usr/share/zoneinfo/America/Los_Angeles
# rm /etc/localtime
# ln -s /usr/share/zoneinfo/America/Denver /etc/localtime
# ls -al localtime
lrwxrwxrwx 1 root root 34 Mar 28 08:59 localtime -> /usr/share/zoneinfo/America/Denver
# date
Fri Mar 28 09:00:04 MST 2003
 
External time synchronization via NTP ---- allows computers, servers, and network devices to synchronize their internal clocks
NTP Configuration and Usage:
Background - Network Time Protocol (NTP) allows computers, servers, and network devices to synchronize their internal clock systems to an external reference source. In some cases, the reference source can be an atomic clock or GPS receiver. This is useful for a number of reasons. If you would like to automatically keep the time on your Linux system synchronized to standard world times, you have two built-in tools to do this:
ntpdate and ntpd (NTP Daemon) 


ntpdate:
ntpdate was written by David L. Mills at the University of Delaware. For details on Dr. Mills, enter this: 
$ finger @
ntpdate allows you to view or set the system time from one or more NTP servers. The first thing you need to do is find a time server you can query. There is a list of public time servers, or you can use one of the following:




For example, if you only want to query an NTP server and make sure that you can reach it, use the following command:


# ntpdate -q
server 66.187.224.4, stratum 1, offset -0.067532, delay 0.38452
28 Mar 18:14:20 ntpdate[10724]: adjust time server 66.187.224.4 offset -0.067532 sec
Note that some firewall systems do not allow NTP traffic. NTP uses UDP port 123. If you would like to query more than one server and set your system clock with the result, use the following:
# ntpdate
28 Mar 18:20:59 ntpdate[10754]: adjust time server 66.187.233.4 offset -0.043222 sec
You can add the -v flag for verbose output.
This command is very similar to the rdate command. The ntpdate command can be used in startup scripts or cron jobs to automatically set the system time without running a dedicated server process. You will definitely want to try to retrieve the time from an NTP server with ntpdate before setting up your own NTP server. This will ensure that (a) you have connectivity and (b) your firewall does not block NTP. Another thing to note about the ntpdate command is that it will not work in update mode if you are running a local NTP server process; it will still work in query mode.


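Since ntpdate can be run from cron as noted above, here is a minimal crontab sketch (pool.ntp.org is only a placeholder; pick any public server you trust, and do not schedule this if a local ntpd is running):
# crontab -e, then add a line such as:
0 * * * * /usr/sbin/ntpdate -u pool.ntp.org    ---- query the server once an hour; -u uses an unprivileged source port, which helps behind restrictive firewalls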
NTP Server:
The NTP server (ntpd) can be set up to run continuously. This will keep the system clock synchronized. You will also be able to serve NTP clients on your LAN, if you wish. I had problems with the Red Hat configuration GUI not setting the NTP server up correctly.


The configuration file is /etc/, and there is also an /etc/ntp directory which contains keys and the drift file. I will show you a working configuration file, with comments:
# Prohibit general access to this service.
restrict default ignore
# Permit all access over the loopback interface. This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1 
# -- CLIENT NETWORK -------
# Permit systems on this network to synchronize with this
# time service. Do not permit those systems to modify the
# configuration of this service. Also, do not use those
# systems as peers for synchronization.
# This is my internal LAN network address
restrict 192.168.212.0 mask 255.255.255.0 notrust nomodify notrap
# --- OUR TIMESERVERS ----- 
# or remove the default restrict line 
# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
# The statements below limit what the servers can do to your server
# I am using IP instead of DNS name because the "restrict" construct
# requires IP addresses
restrict 66.187.224.4 mask 255.255.255.255 nomodify notrap noquery
restrict 80.67.177.2 mask 255.255.255.255 nomodify notrap noquery
# The server listed below is
server 66.187.224.4
# The server listed below is
server 80.67.177.2
# --- NTP MULTICASTCLIENT ---
#multicastclient # listen on default 224.0.1.1
# restrict 224.0.1.1 mask 255.255.255.255 notrust nomodify notrap
# restrict 192.168.1.0 mask 255.255.255.0 notrust nomodify notrap
# I don't want to use multicast for my NTP server
# --- GENERAL CONFIGURATION ---
#
# Undisciplined Local Clock. This is a fake driver intended for backup
# and when no outside source of synchronized time is available. The
# default stratum is usually 3, but in this case we elect to use stratum
# 0. Since the server line does not have the prefer keyword, this driver
# is never used for synchronization, unless no other other
# synchronization source is available. In case the local host is
# controlled by some external source, such as an external oscillator or
# another protocol, the prefer keyword would cause the local host to
# disregard all other synchronization sources, unless the kernel
# modifications are in use and declare an unsynchronized condition.
#
# If you un-comment the two statements below, you could run an NTP server
# off of your local (and inaccurate) system clock.
#restrict 127.127.1.0
#server 127.127.1.0
fudge 127.127.1.0 stratum 10 
#
# Drift file. Put this in a directory which the daemon can write to.
# No symbolic links allowed, either, since the daemon updates the file
# by creating a temporary in the same directory and then rename()'ing
# it to the file.
#
driftfile /etc/ntp/drift
broadcastdelay 0.008


#
# Authentication delay. If you use, or plan to use someday, the
# authentication facility you should make the programs in the auth_stuff
# directory and figure out what this number should be on your machine.
#
# I am not using any authentication for this simple setup.
authenticate no


#
# Keys file. If you want to diddle your server at run time, make a
# keys file (mode 600 for sure) and define the key number to be
# used for making requests.
#
# PLEASE DO NOT USE THE DEFAULT VALUES HERE. Pick your own, or remote
# systems might be able to reset your clock at will. Note also that
# ntpd is started with a -A flag, disabling authentication, that
# will have to be removed as well.
#
keys /etc/ntp/keys
After you install this new version of the config file, you can start the service with /etc//ntpd start. To monitor the service, you can run the following command: ntpdc -p or ntpdc -p -n
If you are really impatient, you can use this command to watch the system until it synchronizes: watch ntpdc -p -n
The ntpdc command can be run interactively as well. There are a number of informative ntpdc commands, such as iostats, sysstats, and peers.
When enough time has gone by, one of the servers will have an * placed in front of it to tell you that your system is synchronized to it. The lower the stratum number, the more accurate the server.
If you want to have the NTP server start up automatically, you can use the chkconfig command as follows:
# chkconfig --level 345 ntpd on
# chkconfig --level 0126 ntpd off
# chkconfig --list | grep ntpd
ntpd 0:off 1:off 2:off 3:on 4:on 5:on 6:off
To see that your NTP server is listening on UDP port 123, use the following command: netstat -tuna. Please note that the NTP server makes NTP queries from a UDP source port of 123. Some firewalls will not allow this, even if ntpdate worked (ntpdate uses a source port > 1023).
You can also use the ntpq utility, and the ntptrace utility for additional diagnostic support. For complete documentation on setting up and using NTP servers, see /.
 
 
 
How to Set the Time Zone in Unix:
Step1
Turn your machine on and locate the time zone file. You can do this by opening the /etc/TIMEZONE file.
Step2
Locate the line within this file that indicates what time zone your clock is set to. It will look like this: TZ=US/Pacific. In this example, the time zone is set to the Pacific time zone.
Step3
Change the value of the time zone to the desired time zone. This could be, for example, ./Eastern. It is important to know what time zone you are in so the time zone function can be set correctly.
Step4
Reboot your machine for the changes to take effect. Shut down and restart your computer. This will cause the computer’s clock to reset and show the correct time zone 


===================================================
Introduction to RedHat AS Chinese and English character set configuration
===================================================
Linux character set viewing and setup (Red Hat 5.8)
locale       View the currently configured locale environment
locale -a    List the names of all available locales that the system can support
locale -m    List the names of all available charmaps (CP1125, UTF-8, ISO-8859-1, etc.)
echo $LANGUAGE See the current default settings.
echo $LANG View the current default settings
echo $LC_ALL  


The file that records the default language of the system is /etc/sysconfig/i18n, if the default installation is a Chinese system, the content of i18n is as follows:
LANG="zh_CN.UTF-8" SYSFONT="latarcyrheb-sun16" SUPPORTED="zh_CN.UTF-8:zh_CN:zh"


LANG variable: short for language; anyone with a little basic English can see that this variable determines the system's default language, i.e., the default language of the system's menus, programs, toolbars, input method and so on.
SYSFONT: Short for system font, which determines which font is used by default.
SUPPORTED: determines the languages supported by the system, i.e., the languages that the system can display.
It should be noted that since computers originated in English-speaking countries, English is always supported by default, no matter what you set these variables to, and English fonts are always included, no matter what fonts are used.


Setting of the locale character set:
1. Temporarily: LANG="zh_CN.UTF-8" ; export LANG="zh_CN.UTF-8"
2. Permanently: modify the /etc/sysconfig/i18n file to include LANG="zh_CN.UTF-8".


/etc/environment controls the GNOME language environment (you can create this file yourself)
/etc/sysconfig/i18n Controls the boot process and the language environment of the real system.


[root@jsjzhang ~]$ cat /etc/environment
#Chinese Interface Configuration
LANGUAGE="zh_CN:zh:en_US:en"
LC_ALL=zh_CN.UTF-8
LANG=zh_CN.UTF-8
GST_ID3_TAG_ENCODING=GB2312


# English interface configuration
#LANGUAGE="en_US:en"
#LC_CTYPE=zh_CN.UTF-8
#LANG=en_GB.UTF-8
#GST_ID3_TAG_ENCODING=GBK


[root@jsjzhang ~]$ cat /etc/sysconfig/i18n
LANG="zh_CN.UTF-8"
SUPPORTED="zh_CN.UTF-8:zh_CN:zh"
SYSFONT="latarcyrheb-sun16"


If only the /etc/environment file is modified without modifying the /etc/sysconfig/i18n file,
then the startup process is still in Chinese; only the Gnome environment becomes English.
To make everything English, from the startup process through to the Gnome environment, while still letting the Fcitx Chinese input method work properly in an English environment,
modify the /etc/sysconfig/i18n file as follows, in addition to modifying the /etc/environment file as above:


[root@jsjzhang ~]# vim /etc/sysconfig/i18n
#LANG="zh_CN.UTF-8"
#SUPPORTED="zh_CN.UTF-8:zh_CN:zh"
LANG="en_US.UTF-8"
SUPPORTED="zh_CN.UTF-8:zh_CN:zh:zh_CN.GBK:zh_CN.GB18030:zh_CN.GB2312"
SYSFONT="latarcyrheb-sun16"


---------------------------------------
How to set the locale of linux
---------------------------------------
1. Check the language settings:
[root@brs98 ~]# locale
LANG=zh_CN.UTF-8
LC_CTYPE="zh_CN.UTF-8"
LC_NUMERIC="zh_CN.UTF-8"
LC_TIME="zh_CN.UTF-8"
LC_COLLATE="zh_CN.UTF-8"
LC_MONETARY="zh_CN.UTF-8"
LC_MESSAGES="zh_CN.UTF-8"
LC_PAPER="zh_CN.UTF-8"
LC_NAME="zh_CN.UTF-8"
LC_ADDRESS="zh_CN.UTF-8"
LC_TELEPHONE="zh_CN.UTF-8"
LC_MEASUREMENT="zh_CN.UTF-8"
LC_IDENTIFICATION="zh_CN.UTF-8"
LC_ALL=


2. Writing the /etc/environment file
vi /etc/environment
LC_ALL=zh_CN.UTF-8


3. Write /etc/sysconfig/i18n file
vi /etc/sysconfig/i18n
LANG="zh_CN.UTF-8"
SUPPORTED="zh_CN.UTF-8:zh_CN:zh"
SYSFONT="latarcyrheb-sun16"


4. Reboot the machine.
reboot


===================================================
Comparison of /etc/profile and /etc/environment
===================================================
Question:
Add export LANG=zh_CN to /etc/profile, log out of the system and log in again: the login prompt is shown in English. Delete export LANG=zh_CN from /etc/profile, add LANG=zh_CN to /etc/environment, log out and log in again: the login prompt is shown in Chinese. The user environment is always created by executing /etc/profile and then reading /etc/environment, so why is there a difference as described above?


Answer:
/etc/profile sets environment variables for all users
/etc/environment sets the system's environment variables
The order in which they are read when logging into the system should be
    /etc/environment --> /etc/profile --> $HOME/.profile --> $HOME/.env
The reason for the difference lies in the distinction between the user environment and the system environment described below:
/etc/environment sets the environment for the whole system, while /etc/profile sets the environment for all users; the former has nothing to do with the logged-in user, the latter does.
System programs can run independently of any user environment, but they do depend on the system environment. So the prompt information you see when you log in, such as the date and time display format, is tied to the LANG of the system environment: the default is LANG=en_US, and if the system environment has LANG=zh_CN, the prompt information is in Chinese, otherwise it is in English.




(1) /etc/profile: This file sets the environment information for every user of the system and is executed when a user logs in for the first time. The shell settings are collected from the configuration files in the /etc/ directory.
(2) /etc/bashrc: This file is executed for each user running the bash shell. This file is read when the bash shell is opened.
(3)~/.bash_profile: 
Each user can use this file to enter shell information specific to their own use, and when the user logs in, the file is executed only once! By default, it sets some environment variables and executes the user's .bashrc file.
(4) ~/.bashrc: This file contains bash information specific to your bash shell, and is read when logging in and every time you open a new shell.
(5) ~/.bash_login: This file contains commands that are executed when logging in.
(6) ~/.profile: This file contains commands that are executed once login is complete.
(7) ~/.bash_logout: This file is executed every time you exit the system (exit the bash shell).


In addition, variables set in /etc/profile (global) can be used by any user, while variables set in ~/.bashrc (local) apply only to that user; ~/.bashrc inherits from /etc/profile, so the two have a "parent-child" relationship.
Official Documentation:
/infocenter/pseries/v5r3/?topic=//doc/baseadmndita/etc_env_file.htm
  
The /etc/environment, /etc/profile, and ~/.profile files are run once at login time, and ~/.env is also run at login time.
The ~/.env file, in addition, is run every time you open a new shell or a window.
* /etc/environment file
  The first file that the operating system uses at login time is the /etc/environment file. The /etc/environment file contains variables specifying the basic environment for all processes.
* /etc/profile file
  The second file that the operating system uses at login time is the /etc/profile file.
* .profile file
  The .profile file is present in your home ($HOME) directory and lets you customize your individual working environment.
* .env file
  A fourth file that the operating system uses at login time is the .env file, if your .profile contains the following line: export ENV=$HOME/.env
  
===================================================
How to change the linux locale
===================================================
1) Global (all users):
/etc/sysconfig/i18n
/etc/i18n or ~/.i18n


Change LANG=zh_CN.UTF-8 to LANG=en_US.UTF-8 in the file.
Other character sets.
LANG=en_US.UTF-8
LANG=zh_CN.gbk


2) Individual users:
~/.bash_profile: the per-user environment-variable file in the user's home directory.
~/.bashrc: Important Personal Settings File


/etc/profile: ---->~/.bash_profile
/etc/bashrc : ---->~/.bashrc


---------------------------------------
LINUX SSH display Chinese garbled code, how to solve?
---------------------------------------
After ssh login, execute:
export LANG=zh_CN.gb2312
It will be able to display Chinese.
To display Chinese permanently, edit /etc/sysconfig/i18n and change LANG="zh_CN.UTF-8" to LANG="zh_CN.GB18030" (or LANG="zh_CN.gb2312").


Another approach: run export LANG=en_US, then go to the character interface and type gdm; a login screen will appear that has a language option, choose Simplified Chinese and you'll be OK.


[root@php ~]# vi  /etc/sysconfig/i18n 
#LANG="zh_CN.UTF-8"
LANG="zh_CN.GB18030"
LANGUAGE="zh_CN.GB18030;zh_CN.GB2313:zh_CN"
SUPPORTED="zh_CN.GB18030;zh_CN;zh:en_US.UTF-8;en_US:en"
SYSFONT="lat0-sun16"




If the Chinese directory displayed by ls -al is garbled, then there is a problem with the client's character set:
If you use secureCRT, you can set the client's character set, which can be GB2312, UTF-8, etc.


---------------------------------------
Kick off a logged-in user under Linux; query terminals
---------------------------------------
View logged-in users in the machine
[root@sunsyk ~]# w
 16:29:02 up 2 days,  2:35,  5 users,  load average: 0.03, 0.05, 0.01
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/1    :0.0             Tue15    2days  1:44   0.04s -bash
root     pts/2    :0.0             Tue15   46:42m  0.05s  0.05s bash
root     pts/3    :0.0             Tue15    2days  0.02s  0.02s bash
root     pts/4    172.20.52.114    14:17   58:48   0.16s  0.03s sqlplus
root     pts/5    172.20.52.114    15:31    0.00s  0.03s  0.00s w
I kicked pts/1 off (only root can kick users off)
[root@sunsyk ~]# pkill -kill -t pts/0,pts/1,pts/2,pts/12,pts/15    ---- -kill is the signal, -t specifies the terminal(s)


Check whether they were kicked off
[root@sunsyk ~]# w
 16:34:16 up 2 days,  2:40,  2 users,  load average: 0.00, 0.05, 0.02
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/4    172.20.52.114    14:17    1:04m  0.16s  0.03s sqlplus
root     pts/5    172.20.52.114    15:31    0.00s  0.03s  0.00s w
Root can kick other users including yourself.


------------------------------------------------------------------------------
Linux commands kill and signal
------------------------------------------------------------------------------
The kill command is used to terminate a process and is a common process-management command under Unix/Linux. Usually, when we need to terminate one or more processes, we first use tools such as ps/pidof/pstree/top to get the process PID, and then use the kill command to kill the process. Another use of kill is to send a specified signal to a process or process group, or to determine whether a process with a given PID is still alive. For example, many daemons use the SIGHUP signal as a trigger to re-read their configuration files.
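A minimal sketch of those last two uses (the PID 1234 is only a placeholder):
kill -0 1234      ---- sends no signal at all; the exit status simply tells you whether the process exists and may be signalled
kill -HUP 1234    ---- sends SIGHUP, which many daemons treat as "re-read your configuration file"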


I. Common parameters
Format: kill <pid>
Format: kill -TERM <pid>
Sends the SIGTERM signal to the specified process; if the process does not catch the signal, the process is terminated. (If no signal is specified, the TERM signal is sent. The TERM signal will kill processes which do not catch this signal.)
 
Format: kill -l
Print a list of signal names. These are found in /usr/include/linux/. Only signal 9 (SIGKILL) can terminate a process unconditionally; a process is entitled to ignore any of the other signals. The following signals are commonly used:
HUP 1 Terminal disconnection
INT 2 Interrupt (same as Ctrl + C)
QUIT 3 Exit (same as Ctrl + \)
TERM 15 Termination
KILL 9 Forced termination
CONT 18 Continue (opposite of STOP, fg/bg command)
STOP 19 Pause (same as Ctrl + Z)
Format: kill -l <signame>
Displays the value of the specified signal.
 
Format: kill -9 <pid>
Format: kill -KILL <pid>
Force kills the specified process and terminates the specified process unconditionally.
 
Format: kill %<jobid>
Format: kill -9 %<jobid>
Kill the specified tasks (they can be listed using the jobs command)
 
Format: kill -QUIT <pid>
Format: kill -3 <pid>
Make the program exit properly.
 
killall command
The killall command sends a signal to all processes running the specified command; it lets you specify the name of the process to terminate instead of its PID.
# killall httpd  
 
II. Examples
1) First use ps to find the process, then kill it with kill.
[root@new55 ~]# ps -ef|grep vim 
root      3368  2884  0 16:21 pts/1    00:00:00 vim
root      3370  2822  0 16:21 pts/0    00:00:00 grep vim
[root@new55 ~]# kill 3368 
[root@new55 ~]# kill 3368 
-bash: kill: (3368) - No such process
 
2) The init process is unkillable.
3) List all signal names
[root@new55 ~]# kill -l 
 1) SIGHUP       2) SIGINT       3) SIGQUIT      4) SIGILL
 5) SIGTRAP      6) SIGABRT      7) SIGBUS       8) SIGFPE
 9) SIGKILL     10) SIGUSR1     11) SIGSEGV     12) SIGUSR2
13) SIGPIPE     14) SIGALRM     15) SIGTERM     16) SIGSTKFLT
17) SIGCHLD     18) SIGCONT     19) SIGSTOP     20) SIGTSTP
21) SIGTTIN     22) SIGTTOU     23) SIGURG      24) SIGXCPU
25) SIGXFSZ     26) SIGVTALRM   27) SIGPROF     28) SIGWINCH
29) SIGIO       30) SIGPWR      31) SIGSYS      34) SIGRTMIN
35) SIGRTMIN+1  36) SIGRTMIN+2  37) SIGRTMIN+3  38) SIGRTMIN+4
39) SIGRTMIN+5  40) SIGRTMIN+6  41) SIGRTMIN+7  42) SIGRTMIN+8
43) SIGRTMIN+9  44) SIGRTMIN+10 45) SIGRTMIN+11 46) SIGRTMIN+12
47) SIGRTMIN+13 48) SIGRTMIN+14 49) SIGRTMIN+15 50) SIGRTMAX-14
51) SIGRTMAX-13 52) SIGRTMAX-12 53) SIGRTMAX-11 54) SIGRTMAX-10
55) SIGRTMAX-9  56) SIGRTMAX-8  57) SIGRTMAX-7  58) SIGRTMAX-6
59) SIGRTMAX-5  60) SIGRTMAX-4  61) SIGRTMAX-3  62) SIGRTMAX-2
63) SIGRTMAX-1  64) SIGRTMAX
[root@new55 ~]#
 
The /usr/include/linux/ header contains:
#define SIGHUP 1
#define SIGINT 2
#define SIGQUIT 3
#define SIGILL 4
#define SIGTRAP 5
#define SIGABRT 6
#define SIGIOT 6
#define SIGBUS 7
#define SIGFPE 8
#define SIGKILL 9
#define SIGUSR1 10
#define SIGSEGV 11
#define SIGUSR2 12
#define SIGPIPE 13
#define SIGALRM 14
#define SIGTERM 15
#define SIGSTKFLT 16
#define SIGCHLD 17
#define SIGCONT 18
#define SIGSTOP 19
#define SIGTSTP 20
#define SIGTTIN 21
#define SIGTTOU 22
#define SIGURG 23
#define SIGXCPU 24
#define SIGXFSZ 25
#define SIGVTALRM 26
#define SIGPROF 27
#define SIGWINCH 28
#define SIGIO 29
#define SIGPOLL SIGIO
/*
#define SIGLOST 29
*/
#define SIGPWR 30
#define SIGSYS 31
#define SIGUNUSED 31
/* These should not be considered constants from userland. */
#define SIGRTMIN 32
#define SIGRTMAX _NSIG
 
Reference:
/blog/847299


------------------------------------------------------------------------------
4 ways to view the system's currently logged in user information under Linux
------------------------------------------------------------------------------
As a system administrator, you may need from time to time to see which users are active on your system, and sometimes you even need to know what they are doing. This article summarizes 4 ways to view information about the system's currently logged-in users (and their IDs).
1. Use the w command to view information about the processes being used by the logged-in user.
The w command is used to display the names of users who have logged on to the system and what they are doing. The information used by this command comes from the /var/run/utmp file. The w command output includes:


user ID
User's machine name or tty number
Remote host address
The time the user logs into the system
Idle time (not very useful)
Time spent by processes attached to the tty (terminal) (JCPU time)
Time spent in current process (PCPU time)
The command the user is currently using
The w command can also be used with the following options


-h ignores header information
-u ignores the username when figuring out the current process and CPU times
-s does not show JCPU, PCPU, login time
$ w
 23:04:27 up 29 days,  7:51,  3 users,  load average: 0.04, 0.06, 0.02
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
ramesh   pts/0    dev-db-server        22:57    8.00s  0.05s  0.01s sshd: ramesh [priv]
jason    pts/1    dev-db-server        23:01    2:53   0.01s  0.01s -bash
john     pts/2    dev-db-server        23:04    0.00s  0.00s  0.00s w


$ w -h
ramesh   pts/0    dev-db-server        22:57   17:43   2.52s  0.01s sshd: ramesh [priv]
jason    pts/1    dev-db-server        23:01   20:28   0.01s  0.01s -bash
john     pts/2    dev-db-server        23:04    0.00s  0.03s  0.00s w -h


$ w -u
 23:22:06 up 29 days,  8:08,  3 users,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
ramesh   pts/0    dev-db-server        22:57   17:47   2.52s  2.49s top
jason    pts/1    dev-db-server        23:01   20:32   0.01s  0.01s -bash
john     pts/2    dev-db-server        23:04    0.00s  0.03s  0.00s w -u


$ w -s
 23:22:10 up 29 days,  8:08,  3 users,  load average: 0.00, 0.00, 0.00
USER     TTY      FROM               IDLE WHAT
ramesh   pts/0    dev-db-server        17:51  sshd: ramesh [priv]
jason    pts/1    dev-db-server        20:36  -bash
john     pts/2    dev-db-server         1.00s w -s


2. Use the who command to view (login) user names and processes started
The who command is used to list the names of users currently logged on to the system. The output is: user name, tty number, time and date, and host address.


$ who
ramesh pts/0        2009-03-28 22:57 (dev-db-server)
jason  pts/1        2009-03-28 23:01 (dev-db-server)
john   pts/2        2009-03-28 23:04 (dev-db-server)
If you only wish to list users, you can use the following statement:
$ who | cut -d' ' -f1 | sort | uniq
john
jason
ramesh
ADDITIONAL: The users command, which can be used to print out the names of users logged into the server. This command has no options other than the help and version options. If a user uses more than one terminal, multiple duplicate user names are displayed accordingly.
$ users
john jason ramesh
3. Use the whoami command to view the login name you are using
The whoami command is used to display the login username.


$ whoami
john
The whoami command executes exactly the same as id -un, for example:
$ id -un
john
The who am i command displays the name of the user currently logged in, together with the tty being used. The output of this command includes: user name, tty name, current date and time, and the address from which the user logged in to the system.
$ who am i
john     pts/2        2009-03-28 23:04 (dev-db-server)


$ who mom likes
john     pts/2        2009-03-28 23:04 (dev-db-server)


Warning: Don't try "who mom hates" command.
Of course, if you change the user using the su command, the results displayed by that command (whoami) will change accordingly.


4. View the system's history at any time (information on users who have used the system)
The last command can be used to display the history of specific users logging into the system. If no parameters are specified, historical information is displayed for all users. By default, this information (the information displayed) will come from the /var/log/wtmp file. The output of this command contains the following columns of information:


user ID
tty device number
History Login Time Date
Logout time and date
Total working hours
$ last jason
jason   pts/0        dev-db-server   Fri Mar 27 22:57   still logged in
jason   pts/0        dev-db-server   Fri Mar 27 22:09 - 22:54  (00:45)
jason   pts/0        dev-db-server   Wed Mar 25 19:58 - 22:26  (02:28)
jason   pts/1        dev-db-server   Mon Mar 16 20:10 - 21:44  (01:33)
jason   pts/0        192.168.201.11  Fri Mar 13 08:35 - 16:46  (08:11)
jason   pts/1        192.168.201.12  Thu Mar 12 09:03 - 09:19  (00:15)
jason   pts/0        dev-db-server   Wed Mar 11 20:11 - 20:50  (00:39


-----------------------------------------------------
Find files and find characters in files:
-----------------------------------------------------
Find characters in a file: grep
-R, -r, --recursive       equivalent to --directories=recurse
      --include=FILE_PATTERN  search only files that match FILE_PATTERN
      --exclude=FILE_PATTERN  skip files and directories matching FILE_PATTERN
      --exclude-from=FILE   skip files matching any file pattern from FILE
      --exclude-dir=PATTERN  directories that match PATTERN will be skipped.
-R means look in its subdirectories as well.


grep -FR 'type="array"' .    Finds whether files in the current directory and its subdirectories contain the string type="array"


grep 'type="array"' ./* | less    Finds whether a file in the current directory contains the string type="array"


Suppose you are searching for documents with the string 'magic' in the directory '/usr/src/linux/Documentation':
$ grep magic /usr/src/linux/Documentation/*


Of course, if you expect a lot of output, you can pipe it to 'less' to read it more comfortably:
$ grep magic /usr/src/linux/Documentation/* | less


One thing to note is that you need to provide a file filter (* for searching all files). If you forget, 'grep' will wait until the program is interrupted. If this happens to you, press <CTRL c> and try again.
Here are some interesting command line arguments:
grep -i pattern files : Search case-insensitively. The default is case-sensitive.
grep -l pattern files : List only matching file names.
grep -L pattern files : List mismatched file names.
grep -w pattern files : matches only whole words, not parts of strings (e.g. matches 'magic', not 'magical').
grep -C number pattern files : display [number] lines for each of the matched contexts.
grep 'pattern1\|pattern2' files : Display lines matching pattern1 or pattern2.
grep pattern1 files | grep pattern2 : Display lines that match both pattern1 and pattern2.
Here are some more special symbols for searching:
\< and \> mark the beginning and end of a word, respectively.
Example:
grep man * will match 'Batman', 'manic', 'man', etc., while
grep '\<man' * matches 'manic' and 'man', but not 'Batman', and
grep '\<man\>' * matches only 'man', not other strings like 'Batman' or 'manic'.
'^': means the matching string is at the beginning of the line.
'$': means the matching string is at the end of the line.
If you are not used to command line arguments, try a graphical interface 'grep' such as reXgrep. This program offers AND, OR, NOT syntax and nice buttons :-). If you just need clearer output, try fungrep.


grep search strings
Command format:
grep string filename
There are many ways to match a string; for example, to find all lines starting with M you must use the idea of patterns (regular expressions). Below are some simple examples, with explanations:
^M        Lines beginning with M; ^ marks the beginning of a line
M$        Lines ending with M; $ marks the end of a line
^[0-9]    Lines starting with a digit; letters may also be listed inside []
^[124ab]  Lines beginning with 1, 2, 4, a, or b
^b.503    A dot matches any single character
*         An asterisk matches zero or more of the preceding character (can be none)
+         A plus sign matches one or more of the preceding character
\         A backslash removes the special meaning of the following character
<eg> cat passwd | grep ^b        List the departmental accounts that have been applied for (user names beginning with b)
cat passwd | grep ^s             List the exchange-student accounts that have been applied for (user names beginning with s)
cat passwd | grep '^b.503'       List the accounts of the b.503 class in the Electrical Engineering Department...
grep '^\.'                       List all lines that begin with a period


Find files: find, locate, whereis
-name Search for files by filename
-perm Finds files by file permissions
-prune does not look in the currently specified directory
-user Finds files by file owner
-group Finds files by the group they belong to.
-mtime -n +n Finds files according to when they were last changed: -n means the file was changed less than n days ago, +n means it was changed more than n days ago.
-nogroup Finds files that do not belong to a valid group, i.e. the group to which the file belongs does not exist.
-nouser Finds files with no valid owner
-newer file1 ! file2 Finds files that are newer than file1 but older than file2.
-type Finds files of a certain type
Document type:
b Block device files
d Directory
c Character device files
p Named pipe (FIFO)
l Symbolic link files
f Regular file
-size n[c] Finds files of length n blocks, with c the file length in bytes.
-depth Processes the files in each directory before the directory itself.
-mount Finds files without crossing file system mount points.
-follow If the find command encounters a symbolically linked file, it follows the file pointed to by the link
-cpio Using the cpio command on the matching files backs up those files to the disk device


Some examples of options to the find command:
$ find /etc -name ...  # Find files with the given name in the /etc directory.
$ find /etc -type d Find all directories in the /etc directory
$ find /etc -user yaoyuan # Find files in the /etc directory whose owner is yaoyuan.
$ find . -size +1000000c # Find files in the current directory with a file length greater than 1 M bytes.
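Combining a few of the options listed above, a minimal sketch (the paths and the 30-day threshold are only examples):
$ find /var/log -type f -mtime -7 # Find regular files under /var/log changed within the last 7 days.
$ find . -name "*.log" -mtime +30 -exec rm -f {} \; # Delete *.log files changed more than 30 days ago.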
  
The whereis command looks in a set of standard directories for the source code, binary, and manual-page files that belong to a command.


Options:
-b Finds only binary files
-B Find binary files only in the set directory
-f does not display path names before filenames
-m Look for description files only
-M Find description files only in the set directory
-s Finds only source code files
-S Looks for source code files only in the set directory
-u Finds files that do not contain the specified type


whereis example
$ whereis mysql
mysql: /usr/bin/mysql /etc/mysql /usr/share/mysql /usr/share/man/man1/mysql. 


The locate command is used to find files that match given criteria; it searches a database in which file and directory names are stored and returns the matching files or directories.


Options:
-u Create a database, starting from the root directory
-U Create a database, you can specify where to start.
-e Exclude the specified directories from the search
-f Exclude specific file systems
-q Quiet mode, no error messages will be displayed.
-n Display up to n outputs
-r Search using a regular expression as the condition
-o Specify the name of the datastore
-d Specify the path to the database
-h Display auxiliary messages
-v Show more messages
-V Display program version information


$ locate inittab
/usr/lib/upstart/
/usr/share/terminfo/a/ansi+inittabs  






rm -r    recursive: delete the directory and the files below it
rm -f    force: delete without prompting
rm -rf <directory or file>
   
-----------------------------------------------------
touch command:
-----------------------------------------------------
TOUCH(1)                         User Commands                        TOUCH(1)
NAME
       touch - change file timestamps
SYNOPSIS
       touch [OPTION]... FILE...
DESCRIPTION
       Update  the  access  and modification times of each FILE to the current time.
       A FILE argument that does not exist is created empty, unless -c  or  -h is supplied.
       A  FILE  argument  string of - is handled specially and causes touch to change the times of the file associated with standard output.
       Mandatory arguments to long options are  mandatory  for  short  options too.
       -a     change only the access time
       -c, --no-create
              do not create any files


       -d, --date=STRING
              parse STRING and use it instead of current time


       -f     (ignored)


       -h, --no-dereference
              affect each symbolic link instead of any referenced file (useful
              only on systems that can change the timestamps of a symlink)


       -m     change only the modification time


       -r, --reference=FILE
              use this file’s times instead of current time


       -t STAMP
              use [[CC]YY]MMDDhhmm[.ss] instead of current time


       --time=WORD
              change the specified time: WORD is access, atime, or use: equivalent
              to -a; WORD is modify or mtime: equivalent to -m
       --help display this help and exit


       --version
              output version information and exit


       Note that the -d and -t options accept different time-date formats.


DATE STRING
       The  --date=STRING  is  a mostly free format human readable date string
       such as "Sun, 29 Feb 2004 16:21:42 -0800" or "2004-02-29  16:21:42"  or
       even  "next Thursday".  A date string may contain items indicating cal-
       endar date, time of day, time zone, day of week, relative  time,  rela-
       tive date, and numbers.  An empty string indicates the beginning of the
       day.  The date string format is more complex than is easily  documented
       here but is fully described in the info documentation.
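A couple of illustrative invocations based on the options above (file names are placeholders):
touch newfile                         # create newfile if missing, otherwise update its times
touch -t 202401011200 newfile         # set the times to 2024-01-01 12:00
touch -r reference.txt newfile        # copy the timestamps of reference.txt
touch -m -d "next Thursday" newfile   # change only the modification time, from a date string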
  
-----------------------------------------------------
less command:
-----------------------------------------------------


                          SEARCHING


  /pattern          *  Search forward for (N-th) matching line.
  ?pattern          *  Search backward for (N-th) matching line.
  n                 *  Repeat previous search (for N-th occurrence).
  N                 *  Repeat previous search in reverse direction.
  ESC-n             *  Repeat previous search, spanning files.
  ESC-N             *  Repeat previous search, reverse dir. & spanning files.
  ESC-u                Undo (toggle) search highlighting.
  &pattern          *  Display only matching lines
        ---------------------------------------------------
        Search patterns may be modified by one or more of:
        ^N or !  Search for NON-matching lines.
        ^E or *  Search multiple files (pass thru END OF FILE).
        ^F or @  Start search at FIRST file (for /) or last file (for ?).
        ^K       Highlight matches, but don't move (KEEP position).
        ^R       Don't use REGULAR EXPRESSIONS.
   
-----------------------------------------------------
How to change the current user, password or switch users, logout users:
-----------------------------------------------------
The first time you change the root password, the command is sudo passwd root.


su and su -: The command su gives you access to the root account or other accounts on the system. When you switch to the root account by typing su in the user account's shell, you are able to change important system files (if you are not careful, you can damage them). Using the su - command makes you the root user using the root shell. Be careful when logging in as root.


su yuanjs means logging in as yuanjs, but the current path does not change.
su - yuanjs means logging in as yuanjs, and the current path changes to the home path of the yuanjs account.
su means logging in as root, but the current path is not changed
su - indicates that you are logged in as root, and the current path changes to the root account's home path.


sudo Function description: Execute a command as another user.
Syntax: sudo [-bhHpV][-s <shell>][-u <user>][command] or sudo [-klv]
Example.
jorge@ubuntu:~$ sudo killall rm


Change password.
Name: passwd

Access: All users

Usage: passwd [-k] [-l] [-u [-f]] [-d] [-S] [username]

Description: Used to change the user's password

Parameters:
-k  keep non-expired authentication tokens
-l Disable account password. The effect is equivalent to usermod -L, only root is authorized to use this.
-u Restore account password. The effect is equivalent to usermod -U, again only root is authorized to use it.
-g Changes the group password. gpasswd equivalent command.
-f Changes user information accessed by the finger command.
-d disables password authentication for users, so that users can log in without entering a password, and only users with root privileges can use it.
-S Displays the type of password authentication for the specified user, which can only be used by users with root privileges.
[username] Specify the account name.


The most common and simplest use: passwd tom changes user tom's password.
Or sudo su to switch to root first, then use passwd to make the change; for the full list of parameters, run passwd --help.


You can also do this without logging in as root: if you are the user specified during system installation, you have sudo privileges by default and can simply run
$ sudo passwd user1
with the same effect as above.
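A short sketch of the options listed above (the user name is illustrative; these require root):
passwd -S tom    # show the password status of user tom
passwd -l tom    # lock (disable) tom's password
passwd -u tom    # unlock it again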


-------------------------------------------
linux introduction:
-------------------------------------------
On top of the Linux kernel effort, the creators of Linux also drew on a great deal of system software and applications from the GNU software effort (GNU stands for "GNU is Not UNIX"), directed by the Free Software Foundation, which are now bundled with Linux distributions. There is a vast amount of software that can be used with Linux, making it an operating system that can compete with or surpass the features available in any other operating system in the world.


If you have heard Linux described as a free version of UNIX, there is good reason for it. Although much of the code for Linux started from scratch, the blueprint for what the code would do was created to follow POSIX (Portable Operating System Interface for UNIX) standards. POSIX is a computer industry operating system standard that every major version of UNIX complied with. In other words, if your operating system was POSIX-compliant, it was UNIX. 


netstat -tnlp |grep smb
netstat -lan |grep 80   Check whether port 80 is open (listening).
or
netstat -nat            See which IP addresses are connected to this machine.


-------------------------------------------
vncserver configuration.
-------------------------------------------
netstat -tlnp |grep vnc or netstat -tlnp |grep vino
service vncserver status
service vncserver stop
service vncserver start
service vncserver restart
ps -ef|grep vino


[root@ganshuai ~]# netstat -tlnp |grep vino
tcp        0      0 :::5900                     :::*                        LISTEN      11099/vino-server
tcp        0      0 :::5901                     :::*                        LISTEN      4696/vino-server




-------------------------------------------
version of linux:
-------------------------------------------
Mandriva 10.1
Novell Linux Desktop (2.6.4)
Red Flag Linux Desktop 4.1, 5.0
Red Hat Linux AS 4.0 (2.6.9)  Application Server
Red Hat Linux ES 4.0 (2.6.9)  Enterprise Server
Red Hat Linux WS 4.0 (2.6.9)  Workstation Server
SuSE Professional 9.2 (2.6.4), 9.3, 10.0   Popular in Europe
Turbolinux 10 S (2.6)   Popular in Asia
Fedora: derived from Red Hat Linux
Ubuntu Linux


-------------------------------------------
Linux kernel and version query commands
-------------------------------------------
I. Command: uname -a
Function: View the system kernel version number and system name.


II. Command: cat /proc/version
Function: View the information of version under the directory "/proc", and also get the kernel version number and system name of the current system.
Additional Notes:
The /proc filesystem is not an ordinary filesystem but an image of the system kernel: the files in this directory live in system memory, and it provides a filesystem-style interface for accessing kernel data. The information shown by "uname -a" comes from this file, so viewing its contents directly (method II) has the same effect. The -a parameter gives detailed information; without it, uname prints only the system name.


III. Check the release version of redhat.
# more /etc/redhat-release
CentOS release 4.4 (Final)


IV. lsb_release -a
Log in to the server and run lsb_release -a to list all the releases


for example:
[root@RH52173 X11]# lsb_release -a
LSB Version:    :core-3.1-ia32:core-3.1-noarch:graphics-3.1-ia32:graphics-3.1-noarch
Distributor ID: RedHatEnterpriseServer
Description:    Red Hat Enterprise Linux Server release 5.2 (Tikanga)
Release:        5.2
Codename:       Tikanga


V. Look at the issue file under etc.
#more /etc/issue


-------------------------------------------
Fedora:
-------------------------------------------
In 2003, Red Hat, Inc. changed the name of the distribution from Red Hat Linux to Fedora Core and moved its commercial efforts toward its Red Hat Enterprise Linux products. It then set up Fedora to be: 
sponsored by Red Hat
directed by the Linux community
a repository of high-quality, cutting-edge open source technology
a proving ground for software slated for commercial Red Hat deployment and support
With the recent split between community (Fedora) and commercial (Red Hat Enterprise Linux) versions of Red Hat Linux, Red Hat has created a model that can suit the fast-paced changes in the open source world, while still meeting the demands for a well-supported commercial Linux distribution. 
Technical people have chosen Red Hat Linux because of its reputation for solid performance. With the new Fedora Project, Red Hat has created an environment where open source developers can bring high-quality software packages to Red Hat Linux that would be beyond the resources of Red Hat, Inc. to test and maintain on its own.
Over 1,600 individual software packages (compared to just over 600 in Red Hat Linux 6.2) are included in Fedora Core 3. These packages contain features that would cost you hundreds or thousands of dollars to duplicate if you bought them as separate commercial products. These features let you: 
Connect your computers to a LAN or the Internet.
Create documents and publish your work on paper or on the Web.
Work with multimedia content to manipulate images, play music files, view video, and even burn your own CDs.
Play games individually or over a network.
Communicate over the Internet using a variety of Web tools for browsing, chatting, transferring files, participating in newsgroups, and sending and receiving e-mail.
Protect your computing resources by having Red Hat Linux act as a firewall and/or a router to protect against intruders coming in through public networks.
Configure a computer to act as a network server, such as a print server, Web server, file server, mail server, news server, and a database server.
This is just a partial list of what you can do with Red Hat's Fedora. Using this book as your guide, you will find that there are many more features built into Fedora as well. 




-------------------------------------------
Installation and Configuration of JDK for Linux.
Installing the JDK under Red Hat Linux :
-------------------------------------------
1. Grant permission: chmod 777 jdk-1_5_0_11
2. Run the self-extracting file: ./jdk-1_5_0_11
3. Installation command: rpm -ivh jdk-1_5_0_11
4. Uninstallation command: rpm -e jdk-1_5_0_11-linux-i586
5. Find command: rpm -qa jdk-1_5_0_11-linux-i586
   rpm -qa |grep vnc   q---query a---all package
-------------------------------------------
JDK environment variable configuration under Linux :
-------------------------------------------
1. Use SSH Secure Shell Client to connect to Linux.
2. Execute: vi /etc/profile command, use vi to edit the /etc/profile file.
3. Locate the line export PATH USER LOGNAME MAIL HOS ... and add the Java environment variables around it (see the sketch below).
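A minimal sketch of the lines typically appended to /etc/profile for a JDK; the install path below is taken from the example later in these notes and may differ on your system:
JAVA_HOME=/usr/java/j2re1.4.2_14
PATH=$PATH:$JAVA_HOME/bin
export JAVA_HOME PATH
After editing, run source /etc/profile (or log in again) for the change to take effect.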


-------------------------------------------
Common Shell Types.
-------------------------------------------
The Linux system provides several different shells to choose from. The commonly used ones are the Bourne Shell (sh for short), the C Shell (csh), the Korn Shell (ksh), and the Bourne Again Shell (bash).
(1) The Bourne Shell was developed by Steven Bourne of AT&T Bell Labs for AT&T Unix. It is the default shell of Unix and the basis from which the other shells were developed. The Bourne Shell is quite good for programming, but it does not handle user interaction as well as the other shells.
(2) The C Shell was developed for BSD Unix by Bill Joy at the University of California, Berkeley. Unlike sh, its syntax is very similar to the C language. It provides interactive features that the Bourne Shell lacks, such as command completion, command aliasing, and history substitution. However, the C Shell is not compatible with the Bourne Shell.
(3) The Korn Shell was developed by David Korn of AT&T Bell Labs. It combines the advantages of the C Shell and the Bourne Shell and is fully backward-compatible with the Bourne Shell. The Korn Shell is very efficient, and both its interactive interface and its programming interface are very good.
(4) The Bourne Again Shell (bash) was developed by the Free Software Foundation (GNU) and is the default shell on Linux systems. Bash is not only compatible with the Bourne Shell but also inherits the advantages of the C Shell and the Korn Shell.


---------------
rename command:
---------------
foo1 foo2 foo10 foo99
rename foo foo0  foo??
rename foo foo0  foo?
foo   denotes the old text being replaced
foo0  denotes the new text that replaces it
foo?  is the glob pattern selecting which files are renamed (here, names with exactly one character after foo)
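A short sketch, assuming the util-linux rename (syntax: rename <old> <new> <files>), which matches the example above; the Perl rename shipped by some distributions uses a different syntax:
touch foo1 foo2 foo10 foo99
rename foo foo0 foo??    # foo10 -> foo010, foo99 -> foo099
rename foo foo0 foo?     # foo1  -> foo01,  foo2  -> foo02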


1.
View the current process ID (KornShell):
$echo $$
4992


grep a*.txt &   The trailing & places the task in the background for execution.
jobs            View all active jobs; the number in brackets at the start of each line is the jobspec.
bg %jobspec     Continue executing the suspended job in the background.
fg %jobspec     Move a background job to the foreground.
ctrl+z          Suspend the currently executing foreground process and put it in the background (stopped); to return to it, use fg.
ctrl+D          Send the end-of-file character; at a shell prompt it logs you out.
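A minimal interactive sketch of the job-control commands above (sleep stands in for any long-running job):
sleep 300 &     # start a background job
jobs            # e.g. [1]+  Running   sleep 300 &
fg %1           # bring job 1 to the foreground
                # press ctrl+z to suspend it again
bg %1           # let it continue in the background
kill %1         # terminate it by jobspec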


bg [jobspec ...]
              Resume each suspended job jobspec in the background, as if it had been started  with  &.   If
              jobspec  is not present, the shell's notion of the current job is used.  bg jobspec returns 0
              unless run when job control is disabled or, when run with job control enabled, any  specified
              jobspec was not found or was started without job control.
fg [jobspec]
              Resume jobspec in the foreground, and make it the current job.  If jobspec  is  not  present,
              the  shell's  notion  of  the  current  job is used.  The return value is that of the command
              placed into the foreground, or failure if run when job control is disabled or, when run  with
              job  control enabled, if jobspec does not specify a valid job or jobspec specifies a job that
              was started without job control.


jobs [-lnprs] [ jobspec ... ]
       jobs -x command [ args ... ]
              The first form lists the active jobs.  The options have the following meanings:
              -l     List process IDs in addition to the normal information.
              -p     List only the process ID of the job's process group leader.
              -n     Display information only about jobs that have changed status since the user  was  last
                     notified of their status.
              -r     Restrict output to running jobs.
              -s     Restrict output to stopped jobs.


              If  jobspec  is given, output is restricted to information about that job.  The return status
              is 0 unless an invalid option is encountered or an invalid jobspec is supplied.


              If the -x option is supplied, jobs replaces any jobspec found in command  or  args  with  the
              corresponding process group ID, and executes command passing it args, returning its exit sta-
              tus.




How do I bring a program that was suspended with ctrl+z back to the foreground? Use fg; it must be run in the same console.
First check the task name: for example, I opened vi, pressed ctrl+z, then ran ps to look at the command name, and a single fg <name> brings it back:
[root@red9 ~]# vi sendmail----1
[1]+  Stopped                 vim sendmail----1
[root@red9 ~]# ps
  PID TTY          TIME CMD
3119 pts/0    00:00:00 bash
3194 pts/0    00:00:00 vim
3195 pts/0    00:00:00 ps
[root@red9 ~]# fg 3194
-bash: fg: 3194: no such job
[root@red9 ~]# fg vim
vim sendmail----1


2. 
View the processes associated with the specified program:
$ps -A | grep ....bat
 8360  0:00 cmd /c ....bat


$ps -ef | grep ....bat 
 
pgrep and pkill: query process and kill process.
pgrep= ps -ef|grep ... For example: pgrep php-cgi Finds processes with process name php-cgi
pkill= killall e.g. pkill php-cgi kills all processes with the process name php-cgi


pgrep -u root,daemon     ---will list the processes owned by root OR daemon.
pgrep -u root sshd       ---will only list the processes called sshd AND owned by root.


Options for ps:
-a              Select all processes except both session leaders (see
                       getsid(2)) and processes not associated with a
                       terminal.
-A              Select all processes. Identical to -e.
-e              Select all processes. Identical to -A.
-f              does full-format listing. This option can be combined
                       with many other UNIX-style options to add additional
                       columns. It also causes the command arguments to be
                       printed. When used with -L, the NLWP (number of
                       threads) and LWP (thread ID) columns will be added. See
                       the c option, the format keyword args, and the format
                       keyword comm.


3.
$kill -9 $pid
$killall php-cgi kills all processes named php-cgi


4.
View how long the system has been running:
$uptime
12:55pm  up  3:41, 1 session, load average: 0.00, 0.00, 0.00.


5.
$who am i Display the current username
$which java    Show the full path of the java executable
$whereis java


6.
ps -ef | grep tty   or    ps -ef | grep java
ll
find . -name java
chmod +x file


grep [options] pattern [files..]
tee 


--color
clear 
zip
unzip
zipinfo 


gzip    .gz   gzip -dvf
gunzip        gunzip -dvf


--------------------------
tar     .tar
--------------------------
By convention, files compressed with gzip have a .gz extension;
Files compressed with bzip2 have a .bz2 extension; files compressed with zip have a .zip extension.
Files compressed with gzip can be decompressed with gunzip; files compressed with bzip2 can be decompressed with bunzip2;
Files compressed with zip can be decompressed with unzip.


1) A tar file is a collection of several files and/or directories in one file. This is a good way to create backups and archives.
    tar -cvf filename.tar directory/file
where filename.tar represents the archive you create, and directory/file represents the files and directories you want to put inside it.
2) To extract the contents of the tar file, type:
    tar -xvf filename.tar
3) To list the contents of the tar file, type:
    tar -tvf filename.tar
    tar -xvzf filename.tar.gz   (extract a gzip-compressed archive)
4) tar -xzvf filename.tar.gz -C unzipped
Extracts the archive into the directory unzipped.
  
       tar -cf archive.tar foo bar
              # Create archive.tar from files foo and bar.
       tar -tvf archive.tar
              # List all files in archive.tar verbosely.
       tar -xf archive.tar
              # Extract all files from archive.tar.
              Main operation mode:
       -A, --catenate, --concatenate
              append tar files to an archive
       -c, --create
              create a new archive
       -d, --diff, --compare
              find differences between archive and file system
       --delete
              delete from the archive (not on mag tapes!)
       -r, --append
              append files to the end of an archive
       -t, --list
              list the contents of an archive
       -x, --extract, --get
              extract files from an archive
       -C, --directory=DIR
              change to directory DIR
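A short end-to-end sketch using the flags above (names are illustrative):
tar -czvf backup.tar.gz mydir/        # create a gzip-compressed archive of mydir
tar -tvf backup.tar.gz                # list its contents
mkdir unzipped
tar -xzvf backup.tar.gz -C unzipped   # extract into the directory unzipped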
       


8. Delete all contents of non-empty directories.
rm -dfr c:/wasx
rm -rf file/directory Forces the deletion of file or directory.


9. rmdir can only delete empty directories.




-----------------------
ftp:
-----------------------
ftp
ftp 9.181.85.65 200
ftp>help
ftp>open  9.181.85.65 200
ftp>close  is similar to ftp>disconnect
ftp>get remote-file [local-file]


ftp>cd       ----change remote directory
ftp>lcd      ----change local directory
ftp>quit


8.
Three ways to run a script, and the differences between them:
bash <script>   runs the script in a new shell
. <script>      sources the script in the current shell
./<script>      runs it directly; the file must have execute (x) permission


9.
Checking disk space:
df   shows free disk space on mounted filesystems
du   shows the disk space used by files and directories


10.
history, less history,  less .bash_history, history| grep man





12. Task Manager:
free
time
timex
top displays the resource usage of each process in the system in real time, similar to the Windows task manager.
ps -A
ps -aux
vmstat (an abbreviation of Virtual Memory Statistics) monitors the operating system's virtual memory, process and CPU activity. It reports statistics on the system as a whole; its disadvantage is that it cannot analyze a single process in depth.


iostat is an abbreviation of I/O statistics. The iostat utility monitors the system's disk activity; it reports disk activity statistics and also reports CPU usage. Like vmstat, its weakness is that it cannot analyze a single process in depth, only the system as a whole.
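Typical invocations of both tools take an interval (seconds) and a count; a minimal sketch:
vmstat 2 5    # report every 2 seconds, 5 reports
iostat 2 3    # disk and CPU statistics every 2 seconds, 3 reports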


uname -a Display operating system information
uptime shows the current time, how long the system has been up and running, the number of logged-in users, and the system load averages over the last one, five and fifteen minutes. Parameter: -V displays version information.




ksysguard  -------KDE system guard
gnome-system-monitor   -------------GNOME 


13. Display operating system information: view linux kernel version
uname -a
uname -v
uname -r


14.
whereis java
who -a
whoami
which gcc
    
strings
history 
script


15. Configure the network card and other information.
setup


Configure which services start at boot with the command:
ntsysv


16.
mount  /dev/hda1 /mnt/c
umount /mnt/c   
 
17.
runlevel
telinit


When booted by LILO, the system reads the runlevel set in the configuration file /etc/inittab. The levels are categorized as follows:
Level Content


0 System halt (low-level initialization, power off)
1 Single-user or administrative mode, for tasks such as detailed disk checks that should not run while the system is in multi-user mode.
2 Multi-user mode, but without network (NFS) support.
3 Full multi-user mode, all functions active.
4 Not used for the time being.
5 Multi-user mode with graphical login to Linux.
6 Reboot the system (sync + reboot).
 
You can use runlevel to view the current runlevel of the system, and telinit to change the state of init.
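A quick sketch (runlevel prints the previous and the current runlevel):
runlevel     # e.g. prints "N 5"
telinit 3    # switch to full multi-user (text) mode
telinit 5    # switch back to the graphical runlevel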


more /etc/issue


=============================
redhat skills:
=============================
redhat-config-keyboard  system-config-keyboard
[Ctrl] + [Alt] + [Backspace] = Kills your current X session. Killing a graphical desktop session returns you to the login screen. You can use this method if the normal exit steps don't work.
[Ctrl] + [Alt] + [Delete] = Shut down and reboot Red Hat Linux. Closes your current session and then reboots the OS. Use this method only if the normal shutdown procedure does not work.
[Ctrl] + [Alt] + [Fn] = Switch screens. [Ctrl] + [Alt] + one of the function keys displays a new screen. By default, from [F1] to [F6] are shell prompt screens, and [F7] is a graphical screen.
[Alt] + [Fn] = Switch screens when you are already at a virtual console. [Alt] plus one of the function keys displays a new screen. By default, [F1] to [F6] are shell prompt screens and [F7] is the graphical screen.
[Alt] + [Tab] = Switch between tasks in the graphical desktop environment. If you have more than one application open at the same time, you can use [Alt] + [Tab] to switch between open tasks and applications.
[Tab] = Command line auto-completion. Use this command when using a shell prompt. Type the first few characters of a command or filename, then press [Tab], which automatically completes the command or displays all commands that match the characters you type.


[Ctrl] + [a] = Move the cursor to the beginning of the line. It works in most text editors and Mozilla URL fields.
[Ctrl] + [d] = Log out (and close) from the shell prompt. With this shortcut, you don't have to type exit or logout.
[Ctrl] + [e] = Move the cursor to the end of the line. It works in most text editors and Mozilla URL fields.
[Ctrl] + [l] = Clear the terminal. This shortcut is the same as typing clear at the command line.
[Ctrl] + [u] = clears the current line. If you are working in a terminal, use this shortcut to clear the characters from the cursor to the beginning of the line.


System configuration commands:
setup
system-config-network
system-config-XXX


system-config-authentication     system-config-packages
system-config-date               system-config-printer
system-config-display            system-config-printer-gui
system-config-keyboard           system-config-printer-tui
system-config-language           system-config-rootpassword
system-config-mouse              system-config-securitylevel
system-config-network            system-config-securitylevel-tui
system-config-network-cmd        system-config-services
system-config-network-druid      system-config-soundcard
system-config-network-gui        system-config-time
system-config-network-tui        system-config-users




Management of services:
1. Run the command to enter the GUI: system-config-services


2. Control via the command line:
View SMB.
/etc//smb status 
pgrep smbd
pgrep nmbd
netstat -tlnp |grep smb


View the configuration.
testparm


/etc//smb start
/etc//smb restart
/etc//smb stop




[root@localhost ~]# service smb start
Starting SMB services: [ OK ]
Starting NMB services: [ OK ]
[root@localhost ~]# service smb stop
[root@localhost ~]# service smb restart


Shutdown and reboot machine commands:
reboot == shutdown -r now
halt   == shutdown -h now






Workspace switcher.
Graphical desktops give you the ability to use multiple workspaces, so you don't have to stack all your running applications in one visual desktop area. The workspace switcher displays each workspace (or desktop) as a small square and shows the applications running on it. You can click any of the squares with the mouse to switch to that desktop. You can also use keyboard shortcuts.
[Ctrl]-[Alt]-[Up Arrow], [Ctrl]-[Alt]-[Down Arrow], [Ctrl]-[Alt]-[Right Arrow], or [Ctrl]-[Alt]-[Left Arrow] to switch between desktops.


[Middle Mouse Button] = Paste the highlighted text. Use the left mouse button to highlight the text. Point the cursor to where you want to paste the text. Click the middle mouse button to paste it. In a two-button mouse system, if you configure the mouse to emulate a third button, you can click both the left and right mouse buttons at the same time to perform the paste.


The [Up] and [Down] arrows = show command history. When you are using a shell prompt, press the [Up] or [Down] arrow to go back and forth through the history of commands you have typed. When you see the command you want to use, press [Enter].


history = Show command history. Type it at a shell prompt to display the numbered commands you have typed (up to 1000 by default). To display a shorter list, type history followed by a space and a number. For example, history 20.
exit = logout. Typing this at a shell prompt logs out the current user or root account.
reset = Refresh the shell prompt. Typing this command at a shell prompt will refresh the screen if the characters are unclear or garbled.
clear = Clear the shell prompt screen. Typing it at the command line clears all data displayed at this shell prompt.


To boot your system into the text-based installation program, you need to type the text command at the boot: prompt.


1. Copying and pasting text under X: Using the mouse to copy and paste text under the X Window System is a simple operation. To copy text, simply click the mouse and drag it over the text to highlight it. To paste the text somewhere, click the center mouse button where you want to place the text.


2. su and su -: The command su gives you access to the root account or other accounts on the system. When you switch to the root account by typing su in the user account's shell, you are able to change important system files (which you can damage if you are not careful). Using the su - command makes you the root user using the root shell. Be careful when logging in as root.


su yuanjs means logging in as yuanjs, but the current path does not change.
su - yuanjs means logging in as yuanjs, and the current path changes to the home path of the yuanjs account.
su means logging in as root, but the current path is not changed
su - indicates that you are logged in as root, and the current path changes to the root account's home path.


sudo Function Description: Execute commands as something else.
Syntax: sudo [-bhHpV][-s <shell>][-u <user >][command] or sudo [-klv]
Example.
jorge@ubuntu:~$ sudo killall rm


3. Type startx at a shell prompt to launch the graphical desktop.


4. There are two ways to create new or additional user accounts, using the graphical User Manager or at a shell prompt.
(1) Start the GUI: shell command redhat-config-users (system-config-users)
  (2) shell:
2.1 Open a shell prompt.
2.2 If you are not logged in as root, type the command su - and then the root password.
2.3 On the command line, type useradd, followed by a space and the username of the new user you created (for example, useradd zhangsan). Press [Enter]. Usually, the user name is a variation of the user's name, e.g., Zhang San's user name is zhangsan. The user account name can be a variation of the user's name, abbreviation, or place of birth.
2.4 Type passwd, followed by a space and the username (e.g., passwd zhangsan).
2.5 Enter a password for the new user at the New password: prompt and press [Enter].
2.6 At the Retype new password: prompt, enter the same password to confirm your selection.


/etc/group files
/etc/passwd file
    
To check user groups under Linux:
view the /etc/group file.
To list just the user names: cat /etc/passwd | cut -f 1 -d :
You can view /etc/passwd with a file-browsing command,
for example less /etc/passwd
or cat /etc/passwd
The command chmod is used to change permissions.
In symbolic mode, u stands for the owner (user), g for the group and o for others; -rw removes read and write permissions (e.g. chmod o-rw file).
To add them instead, use +rw; see the sketch below.
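A small sketch of symbolic mode (file names are illustrative):
chmod o-rw secret.txt    # remove read/write permission from others
chmod u+rw secret.txt    # give the owner read/write permission
chmod +x script.sh       # make a script executable (as used elsewhere in these notes)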




man command | col -b | lpr
The pipeline above combines separate commands into a single operation: man command sends the contents of the command's manual page to col, which formats the contents to fit the printed page; the lpr command then sends the formatted output to the printer.


6. Package Management Tool Installation Documentation.
To install all Red Hat Linux guidebooks, change to the directory containing the RPM files and type the following command:
rpm -ivh rhl-*.rpm
To install only a particular manual, replace rhl-*.rpm with the full name of the manual's file. For example, the file name of the Red Hat Linux Getting Started Guide will be something like , so you should type the following command to install it on your system:
rpm -ivh /mnt/cdrom/


7. Virtual console logout
If you are using the X Window System and are logged on at the console, type exit or [Ctrl]-[D] to log off from the console session.
 
8. Virtual console shutdown
To shut down the computer at a shell prompt, type the following command:
halt


9. Panels
2.2.4 Adding icons and applets to panels
To make the panel fit your personal needs, you can add more applets and launcher icons to it.
To add an applet to your panel, right-click on an unused area of your panel, choose Add to Panel, and then select it from the Accessories menu. With the applet selected, it appears on your panel. In Figure 2-8, the Weather Report applet, which displays the current local weather and temperature, is added to the panel.
To add a launcher to a panel, right-click on an unused area of the panel and select "Add to Panel" => "Launcher...". This launches a dialog box in which you can enter the name of the application, its location and the command that launches it (e.g. /usr/bin/foo), and even choose an icon for the application. Click OK and the new launcher icon will appear on the panel.
an ingenious method
Another shortcut for adding a launcher to a panel is to right-click on an unused area of the panel and select Add to Panel => Launch from Menu. Then, select an application that appears in the menu. This will automatically add the launcher icon according to that program's properties in the Main Menu.
2.2.5. Configuring the Desktop Panel
You can hide the panel automatically or manually; place it on either side of the desktop; change its size and color; or change the way it behaves. To change the default panel settings, right-click an unused area of the panel and choose Properties. You can set the size of the panel, its position on the desktop, and whether you want to automatically hide the panel when it is not in use ("Autohide"). If you choose to autohide the panel, it will not appear on the desktop unless you move your mouse over it (called hovering).




10. Using Nautilus
By default, dragging and dropping files from one directory to another will move the files. To copy a file to another directory, press the [Ctrl] key while dragging and dropping.
By default, image files in your home directory are displayed as thumbnail icons. For text files, this means that you will see a portion of the actual text in the icon. For image files, you will see a scaled down version (or thumbnail) of that image. To turn this feature off, choose Edit => Preferences; choose Preview from the menu on the left; and choose Never from the Show Thumbnail Icons pulldown menu. Disabling this (and other) preview features will speed up Nautilus.


11. Configure the date and time
system-config-date &
redhat-config-date &


12. Mounting and unmounting floppy disks
Before you can use a floppy disk, it must be mounted. To mount a floppy disk, insert it into the floppy drive and type mount /mnt/floppy/ at a shell prompt.
You can access the contents of the floppy disk by switching to that directory using the cd /mnt/floppy/ command.


When you have finished working with the floppy disk, you should unmount it before ejecting it from the drive. Close any programs that may still be using files on the floppy or displaying its contents (such as Nautilus or Konqueror), and then type the following command at a shell prompt:
umount /mnt/floppy/


Formatting:
Using mke2fs: /sbin/mke2fs /dev/fd0
mke2fs is a command used to create a Linux ext2 filesystem on a hard disk partition or a device like a floppy disk. Basically, mke2fs formats a device and creates a blank, Linux-compatible device that can be used to store files and data.
Insert your floppy disk into the drive and use the following command at a shell prompt:
/sbin/mke2fs /dev/fd0
On Linux systems, /dev/fd0 refers to the first floppy drive. If you have more than one floppy drive on your computer, your primary floppy drive will be /dev/fd0, your second floppy drive will be /dev/fd1, and so on.
The mke2fs utility has a number of options. The -c option causes the mke2fs command to check for bad blocks on the device before creating the file system. Other options are described in detail in the mke2fs man page.
Once you have created an ext2 filesystem on a floppy disk, you are ready to use it on your Red Hat Linux system.


Using gfloppy: /usr/bin/gfloppy
To start the gfloppy, click Main Menu => System Tools => Floppy Formatter. At a shell prompt, type /usr/bin/gfloppy. gfloppy's interface, as shown in Figure 4-2, is small and has very few options. The default settings are sufficient for most users, however, you can format floppy disks using the MS-DOS file system if necessary. You can also select the density of your diskette (if you are not using the usual high density 3.5" 1.44MB diskettes). You can also choose to quick format the diskette if it is previously formatted as ext2.
Insert the floppy disk and change the settings in the gfloppy to your own needs; then click Format. A status box will appear in the upper part of the main window, showing you the status of the formatting and calibration process (see Figure 4-3). When it is finished, you can eject the floppy disk and close the gfloppy program.




14. Storing Linux Files on MS-DOS Floppy Diskettes
To copy a file from a Linux machine to an MS-DOS formatted floppy disk so that it can be read by a Windows machine, you should format the floppy disk using the gfloppy (see Section 4.1.3.1) utility and the MS-DOS (FAT) file system. Then mount it to Linux as described in Section 4.1.1. Use the following command to copy the files (replace filename with the file you want to copy):
cp filename /mnt/floppy
You can then unmount the floppy disk and eject it from the drive. The new files on the diskette are now accessible from your Windows machine.




1
top to see what processes are currently running
kill -9 pid Terminate a process (tree)
cd Return to the home directory
pwd Display the current directory
3 less, more  Commands for viewing text files. Typing the v key inside less launches vi to edit the current file.
4 mkdir, rm, mv  mkdir creates directories, rm removes files and directories, and mv moves or renames files and directories.
cp The copy file and directory command
man command usage reference tool, very useful
nano is a small, free, and friendly editor.
5
vi has two modes, one is command mode and the other is edit mode. When you enter vi, you are in command mode by default.


Now run vi LoveLetter. After it opens, press the Insert key (or the i key) to enter editing mode, where you can insert characters; pressing Insert again switches to overwrite mode. The difference between the two modes is easy to see, so just try it. The up, down, left and right arrow keys move the cursor, and the basic editing keys are no different from those in Windows. When you have finished entering the content and want to save, press ESC to return from editing mode to command mode, then type a colon ":" (hold SHIFT and press the semicolon key). Type w and press Enter, and your edits are saved to the LoveLetter file. Press Insert to continue editing, press ESC again, type ":w" and the file is saved once more. But suppose we now want to quit without saving: w means write (save) and q means quit. Type q and press Enter, and vi warns us that the latest changes have not been saved. Remember this: once we want to abandon our changes, we cannot quit with a plain q command; we must use ":q!". Type q!, and vi quits.
We want to check whether the LoveLetter we just edited was really saved, so run vi LoveLetter again. See? If we only want to quit now, typing ":q" is enough, without the "!", because we have not modified the file. If we do change it, we can simply type ESC then ":wq" when quitting; there is no need to enter w and q in two separate steps.
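A compact summary of the keystrokes described above:
vi LoveLetter    # open (or create) the file
i  (or Insert)   # enter insert mode; pressing Insert again toggles overwrite mode
ESC              # return to command mode
:w               # save
:q               # quit (refused if there are unsaved changes)
:q!              # quit and discard changes
:wq              # save and quit in one step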
6 In linux, you can mount an iso file as a directory with the command mount: mount -t iso9660 -o loop /home/kris/ /mnt/cdrom
7 file Check the type of file
8 wall write mesg
9 reset Reset the terminal, use this method when there is a problem with the screen
10 env Display environment variables
11 To change the system language to English, run: export LC_ALL="en_US" LANG=en_US. To make the change permanent, set LANG="en_US.UTF-8" in /etc/sysconfig/i18n.
12 View local ip: ifconfig -a
13 Mount a CD: mount /dev/cdrom /mnt/cdrom, after which the contents of the CD are visible under /mnt/cdrom. Use umount /mnt/cdrom to unmount the CD-ROM; it can only be ejected after it has been unmounted.
14 When typing filenames, etc. at the linux prompt, you can type in part of the filename and then press the Tab key to intelligently complete it.
15 Installing an rpm package on redhat: rpm -i "package path"; to upgrade a package, use rpm -U packagename. Removing a package is even easier: rpm -e packagename deletes the package you want without needing to know its version or path.
16 To start vsftpd at boot, run ntsysv and simply check vsftpd in the list.
17 To view the contents of a file: cat filename
18 find / -name "*network*" -print Find all files in the root directory that contain network
find / -iname "*network*" -print Find all files in the root directory that contain network, ignoring case
19 useradd user1 Creates a user, but this user cannot be used until a password has been set for the user, the command to set the password is passwd user1
20 If a filename in ls output has a "*" next to it, it is an executable file and can be run with ./filename.
21 Restart the network: service network restart
22 The character interface enters the graphical interface: startx, and the graphical interface returns to the character interface: just log off.
23 After installing VMware, select NAT as the virtual machine's network type to allow the host and the virtual machine to communicate. To log in to Linux remotely you also need to install the telnet service; by default root cannot log in over telnet.
24 Delete non-empty directories: rm -rf Directory name
25 To decompress cpio: cpio -idmv < ***.cpio
26 A shell script edited in UltraEdit may fail on unix with the error "h^M: is not an identifier". Solution: convert the file to Unix format (for example with dos2unix), or use UltraEdit's "File" -> "Conversions" -> "DOS to UNIX" function; the latter is more convenient.
27 find /usr -name httpd
28 To decompress a .tar.gz file: first run gunzip to produce the .tar file, then extract it with tar -xvf.
29 Methods for adding gcc to a path:
PATH=$PATH:/usr/gnu/bin/
export PATH
30 sh scripts cannot have spaces on either side of the equal sign for variable assignment
31 sh scripts should have no blank lines between command lines, and spaces before and after the condition after the if statement
32 Determining the current terminal type echo $TERM
33 To reacquire ip: /etc//network restart
34 The way to enter the ESC escape character in Linux:First press Ctrl+V, then press the ESC key
35 To run a program in the background so that exiting the script or shell does not cause it to exit: follow the command with "&".
36 The script run at login/startup is the "/etc/profile" file; for example, Java environment variables must be added to this file:
pathmunge /usr/java/j2re1.4.2_14/bin/ after
JAVA_HOME="/usr/java/j2re1.4.2_14/"


Note that there can be no spaces on either side of the equals sign in JAVA_HOME, otherwise JAVA_HOME will be treated as a command!
37 If the system display appears garbled, adjusting the LANG environment variable usually fixes it. This problem has occurred in the past with batch systems.
39 Windows' tracert corresponds to "traceroute <ip address>" on Linux.
40 Using a USB flash drive under Linux: insert the drive and create a directory usb under /mnt; then run "fdisk -l", which lists all devices. USB drives are usually FAT-formatted, so find the partition formatted as FAT, for example sdb1, and run mount -t vfat /dev/sdb1 /mnt/usb; the drive is then available under /mnt/usb. To unmount it, run umount /mnt/usb.
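The same steps as commands (the device name sdb1 comes from the example above and will differ on your system):
mkdir /mnt/usb
fdisk -l                           # find the FAT-formatted partition, e.g. /dev/sdb1
mount -t vfat /dev/sdb1 /mnt/usb   # mount the USB drive
umount /mnt/usb                    # unmount before removing it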


 
I. What Fedora is and how it relates to Redhat;
What is Fedora? If you are new to Linux, you may have wondered why the Fedora & Redhat discussion forums put the two distributions together; the main reason is that they are very closely linked. Fedora is the continuation of the desktop version of Redhat, developed in collaboration with the open source community.
Some beginners may ask: what is a Linux distribution? Is Fedora a standalone system that can be installed on a computer? Yes, Fedora is a standalone operating system, one version of Linux. There are many Linux distributions, such as Debian, SuSE, Archlinux, Mandrakelinux and Slackware. Because Linux is an open source operating system, if you are technically proficient you are fully capable of making your own Linux distribution.


Second, the official addresses of Fedora and Redhat;
The official Fedora address is
Redhat's official address:


=========================================================
Ten must-know tips for newcomers to Linux
=========================================================
Some of you have mentioned basic configuration files, fonts, input methods and driver installation, but I find
these are not things you need to learn when you first start using Linux; for example, I still don't understand
font configuration myself. It is best to learn the following things directly at a virtual terminal,
without starting the X interface.


Each item is basically one category; if a newcomer reads up on these beforehand, he or she can save
quite a bit of time and reduce the fear of and frustration with Linux. These are the basics, and
no graphical interface is required.


1. i386 boot process, hard disk partitioning, single-user access to the system;
Reason: installing system, grub, first aid system is very much related to this one


2. The concept of distributions: understand that "Linux" is a generic term, and familiarize yourself with your distribution's
package management tools, such as rpm, apt, yum; know exactly where the manuals for your distribution and
for each piece of software are kept.
Reason: name the distribution and version when asking a question; although it is all Linux, quite a few settings
differ between distributions. A newbie has little choice but to install software, so don't think about compiling into your HOME
directory; if you don't have access on a public server, get your own virtual machine. VMware
Server is now free, and it doesn't matter if it is slow; it is enough for learning these basics.


3. Basic use of the system, the following commands: (4,5,6 below are combined with this study)
bash environment variables, a few configuration files under HOME, the use of PATH, can write simple scripts;
*man*,cd, pwd, ls, mkdir, rmdir, cp, rm, mv, find,
grep/egrep/fgrep, df, du, vim (learn this one first; Emacs users, don't argue,
and nano I won't even mention), chmod, chown, more/less, head, tail,
cat, tar, gzip, bzip2, who, whoami, w, top, ifconfig,
ping, traceroute, passwd, adduser/useradd, mail/mailx/mutt,
mount, umount, clear, reset, lftp/ncftp, fdisk/cfdisk/parted,
ps, kill, killall, jobs, bg, fg, crontab, at, batch, dmesg,
talk, mesg
Reason: It's almost 80% of the commands you use on a daily basis.


4. Knowledge of the documentation system:
The Linux directory structure (FHS): what each directory is used for, and why there are no Windows-style
drive letters; common filesystem types (ext2, ext3, reiserfs, jfs, xfs,
ntfs, vfat, iso9660); symbolic and hard links; special file types (
character device files, block device files, sockets...); relative and absolute paths;
/etc/fstab
Reason: It's really basic.


5. Permission issues, including /etc/passwd, /etc/group, /etc/shadow.
Concept of permission bit rwxsSt, uid, gid
Why: Basically, system security starts here first.


6. The concept of a process, in particular the relationship between a child and a parent process; being able to use the
pid and ppid columns of ps output to work out this relationship; the concept of signals, and knowing how to send them with kill;
foreground and background processes; daemons; pipes; redirection of input and output.
Reason: a matter of common sense.


7. The Linux logging system: understand the use of the various logs under /var/log.
Reason: when something goes wrong with the system, first check whether there are any suspicious log entries.


8. Familiarize yourself with the system startup scripts and be clear about the init mechanism: how to
start, restart and stop services, and how to add services to this mechanism or remove them from it, e.g. with the
service command.
Reason: basic, and a good starting point for learning shell scripting


        
9. Basic concepts and use of TCP/IP, including:
The OSI network model.
Ethernet and MAC addresses.
IP protocols and IP addresses, representation of IP address segments.
The role of the ICMP and DHCP protocols; and
DNS systems.
TCP and UDP protocols, concept of ports, client/server model.
  /etc/hosts 
  /etc/ 
  /etc/services,
The role of the /etc/network/interfaces file.
The use of common network tools: ifconfig, arp, arping, ping, telnet, ssh, netstat, route, ip, traceroute.
Reason: UNIX has been tied to networking from the beginning, so understanding basic
networking questions is very necessary.


10. Basic concepts of the X Window System: understand what the X server and the X client are; on this topic there is Wang Yin's "Understanding X Window".
:8080/2001315450/
Reason: entering the world of X Window is still a long way off, so it is good to lay some foundations first :-)




Recommended Books.
《Computer Systems: A Programmer's Perspective》
A Deeper Understanding of Computer Systems, China Electric Power Publishing Co.
Great book for understanding the basics of computers.


UNIX Made Simple.
(English) Peter McBride Translated by Zhong Xiangqun, Mechanical Industry Press
Highly recommended and can be found at the library


Linux System Administration User's Guide, Tsinghua Publishing Co.
Recommended by cathayan


UNIX Tutorial
I've looked it up in the bookstore before. It seems pretty good. Pearson Education.


UNIX Operating Systems
O'Reilly. Recognize the brand. There's a list of books.
Might as well take a look.


-------------------------------------------
kill,killall,pkill,ps,pgrep:
-------------------------------------------
pgrep and pkill: query process and kill process.
pgrep= ps -ef|grep ... For example: pgrep php-cgi Finds processes with process name php-cgi
pkill= killall e.g. pkill php-cgi kills all processes with the process name php-cgi


pgrep -u root,daemon     ---will list the processes owned by root OR daemon.
pgrep -u root sshd       ---will only list the processes called sshd AND owned by root.


1) ps command:
Options for ps:
-a              Select all processes except both session leaders (seegetsid(2)) and processes not associated with a terminal.
-A              Select all processes. Identical to -e.
-e              Select all processes. Identical to -A.
-f              does full-format listing. This option can be combined
                       with many other UNIX-style options to add additional
                       columns. It also causes the command arguments to be
                       printed. When used with -L, the NLWP (number of
                       threads) and LWP (thread ID) columns will be added. See
                       the c option, the format keyword args, and the format
                       keyword comm.
x               Lift the BSD-style "must have a tty" restriction, which is imposed upon the set of all processes when some BSD-style
                       (without "-") options are used or when the ps personality setting is BSD-like. The set of processes selected in this
                       manner is in addition to the set of processes selected by other means. An alternate description is that this option
                       causes ps to list all processes owned by you (same EUID as ps), or to list all processes when used together with the
                       a option.
u               display user-oriented format


ps ax, ps axu
To see every process on the system using BSD syntax:
          
2) kill command.
The kill application is used in conjunction with the ps or pgrep command;
Usage of kill:
kill [Signal Code] Process ID
Note: The signal code can be omitted; the signal code we commonly use is -9 for forced termination;
Examples:
[root@localhost ~]# ps auxf |grep httpd
root 4939 0.0 0.0 5160 708 pts/3 S+ 13:10 0:00 \_ grep httpd
root 4830 0.1 1.3 24232 10272 ? Ss 13:02 0:00 /usr/sbin/httpd
apache 4833 0.0 0.6 24364 4932 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4834 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4835 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4836 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4837 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4838 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4839 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
apache 4840 0.0 0.6 24364 4928 ? S 13:02 0:00 \_ /usr/sbin/httpd
We are looking at the httpd server's processes; you can also view them with pgrep -l httpd.
Look at the second column in the example above, the PID column: 4830 is the parent process of the httpd server, and processes 4833-4840 are its children. If we kill the parent process 4830, its child processes die with it.
[root@localhost ~]# kill 4840 Note: Kill the process 4840;
[root@localhost ~]# ps -auxf |grep httpd Note: See what happens. Is the httpd server still running?
[root@localhost ~]# kill 4830 Note: Kill the parent process of httpd;
[root@localhost ~]# ps -aux |grep httpd Note: Check to see if the other subprocesses of httpd exist and if the httpd server is still running?
A hung or zombie process can be forcibly terminated with kill -9.
For example, if a program is completely unresponsive and a plain kill cannot make it exit, the best approach is to raise the signal strength to -9 and then also kill its parent process; for example:
[root@localhost ~]# ps aux |grep gaim
beinan 5031 9.0 2.3 104996 17484 ? S 13:23 0:01 gaim
root 5036 0.0 0.0 5160 724 pts/3 S+ 13:24 0:00 grep gaim
or
[root@localhost ~]# pgrep -l gaim
5031 gaim
[root@localhost ~]# kill -9 5031


3) The killall command:
Kills all processes directly by program name.
Usage: killall Running program name
killall is also used in conjunction with ps or pgrep, which is convenient; ps or pgrep can be used to see which programs are running;
Examples:
[root@localhost beinan]# pgrep -l gaim
2979 gaim
[root@localhost beinan]# killall gaim
                       
-------------------------------------------
Linux command: killall - kills the process with the specified name
-------------------------------------------
Description of use
The killall command is used to kill processes by name. The kill command kills the process with a given PID, so to find the process we want to kill
we previously also needed commands such as ps and grep; killall combines these two steps into one, which makes it a very convenient command.
Common Parameters
Format: killall <command-name>
Kills the process with the specified name. It actually sends a SIGTERM signal to all processes with the name <command-name>, and if those processes don't catch the signal, then those processes are simply killed.
Format: killall -<signame> <command-name>
Format: killall -<signum> <command-name>
Sends the specified signal to all processes with the name <command-name>. The signal can be given either by its name <signame> or by the corresponding number <signum>. The commonly used signals are listed below:
the first column is <signame>, the second column is <signum>, and the third column is the meaning of the signal.
HUP 1 Terminal disconnection
INT 2 Interrupt (same as Ctrl + C)
QUIT 3 Exit (same as Ctrl + \)
TERM 15 Termination
KILL 9 Forced termination
CONT 18 Continue (opposite of STOP, fg/bg command)
STOP 19 Pause (same as Ctrl + Z)
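For instance, a hedged sketch of sending a signal by name or by number (assuming a daemon called nginx is running; substitute any program name):
[root@localhost ~]# killall -HUP nginx     # ask every process named nginx to re-read its configuration (SIGHUP)
[root@localhost ~]# killall -1 nginx       # the same thing, written with the signal number
[root@localhost ~]# killall -TERM gaim     # explicit SIGTERM, identical to a plain "killall gaim"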


Format: killall -l
Lists supported signals.
usage example


Example 1
[root@jfht ~]# killall -l
HUP INT QUIT ILL TRAP ABRT IOT BUS FPE KILL USR1 SEGV USR2 PIPE ALRM TERM
STKFLT CHLD CONT STOP TSTP TTIN TTOU URG XCPU XFSZ VTALRM PROF WINCH IO PWR SYS
UNUSED
[root@jfht ~]#


Example 2
[root@jfht ~]# killall tail
[root@jfht ~]# killall tail
tail: no process killed
[root@jfht ~]#
Example 3


This example shows how to kill all logged-in shells, since some of the bash sessions were no longer actually attached to a terminal.
[root@jfht ~]# w
 21:56:35 up 452 days,  5:16,  3 users,  load average: 0.05, 0.06, 0.01
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/1    220.112.87.62    21:53    0.00s  0.02s  0.00s w
root     pts/9    220.112.87.62    21:53    2:44   0.02s  0.02s -bash
root     pts/10   220.112.87.62    21:53    3:13   0.01s  0.01s -bash
[root@jfht ~]# killall -9 bash
The bash that issued the command was killed as well, so the connection was dropped. Now reconnect and log in again.
Last login: Mon Apr  4 21:53:23 2011 from 220.112.87.62
[root@jfht ~]# w
 21:56:52 up 452 days,  5:16,  1 user,  load average: 0.28, 0.10, 0.02
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/1    220.112.87.62    21:56    0.00s  0.01s  0.00s w


View logged-in users in the machine
[root@sunsyk ~]# w
 16:29:02 up 2 days,  2:35,  5 users,  load average: 0.03, 0.05, 0.01
USER     TTY      FROM              LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/1    :0.0             Tue15    2days  1:44   0.04s -bash
root     pts/2    :0.0             Tue15   46:42m  0.05s  0.05s bash
root     pts/3    :0.0             Tue15    2days  0.02s  0.02s bash
root     pts/4    172.20.52.114    14:17   58:48   0.16s  0.03s sqlplus
root     pts/5    172.20.52.114    15:31    0.00s  0.03s  0.00s w
I kick off pts/1 (only root can kick users off):
[root@sunsyk ~]# pkill -kill -t pts/1
[root@sunsyk ~]# pkill -kill -t pts/2
[root@sunsyk ~]# pkill -kill -t pts/3


===================================================
[root@localhost boot]# cat /boot/grub/
===================================================
# generated by anaconda
#
# Note that you do not have to rerun grub after making changes to this file
# NOTICE:  You have a /boot partition.  This means that
#          all kernel and initrd paths are relative to /boot/, eg.
#          root (hd0,0)
#          kernel /vmlinuz-version ro root=/dev/mapper/VolGroup-lv_root
#          initrd /initrd-[generic-]
#boot=/dev/sda
default=1
timeout=5
splashimage=(hd0,0)/grub/
hiddenmenu
title Red Hat Enterprise Linux Server (2.6.33.20)
root (hd0,0)
kernel /vmlinuz-2.6.33.20 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=128M rhgb quiet
initrd /initrd-2.6.33.
title Red Hat Enterprise Linux (2.6.32-131.0.15.el6.x86_64)
root (hd0,0)
kernel /vmlinuz-2.6.32-131.0.15.el6.x86_64 ro root=/dev/mapper/VolGroup-lv_root rd_LVM_LV=VolGroup/lv_root rd_LVM_LV=VolGroup/lv_swap rd_NO_LUKS rd_NO_MD rd_NO_DM LANG=en_US.UTF-8 SYSFONT=latarcyrheb-sun16 KEYBOARDTYPE=pc KEYTABLE=us crashkernel=128M rhgb quiet
initrd /initramfs-2.6.32-131.0.15.el6.x86_64.img
[root@localhost boot]# 




===================================================
[root@localhost boot]# cat /etc/inittab
===================================================
# inittab is only used by upstart for the default runlevel.
#
# ADDING OTHER CONFIGURATION HERE WILL HAVE NO EFFECT ON YOUR SYSTEM.
#
# System initialization is started by /etc/init/
#
# Individual runlevels are started by /etc/init/
#
# Ctrl-Alt-Delete is handled by /etc/init/
#
# Terminal gettys are handled by /etc/init/ and /etc/init/,
# with configuration in /etc/sysconfig/init.
#
# For information on how to write upstart event handlers, or how
# upstart works, see init(5), init(8), and initctl(8).
#
# Default runlevel. The runlevels used are:
#   0 - halt (Do NOT set initdefault to this)
#   1 - Single user mode
#   2 - Multiuser, without NFS (The same as 3, if you do not have networking)
#   3 - Full multiuser mode
#   4 - unused
#   5 - X11
#   6 - reboot (Do NOT set initdefault to this)

id:5:initdefault:
[root@localhost boot]# 
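As a sketch added here for illustration (not part of the capture above): to boot into text mode (runlevel 3) instead of X11 (runlevel 5), change the initdefault line and reboot:
[root@localhost boot]# sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab
[root@localhost boot]# grep '^id:' /etc/inittab
id:3:initdefault: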






=======================
source command.
=======================
source filename [arguments]
              Read  and execute commands from filename in the current shell environment and return the
              exit status of the last command executed from filename.  If filename does not contain  a
              slash,  file names in PATH are used to find the directory containing filename.  The file
              searched for in PATH need not be executable.  When bash is not in posix mode,  the  cur-
              rent directory is searched if no file is found in PATH.  If the sourcepath option to the
              shopt builtin command is turned off, the PATH is not searched.   If  any  arguments  are
              supplied,  they  become  the positional parameters when filename is executed.  Otherwise
              the positional parameters are unchanged.  The return status is the status  of  the  last
              command  exited within the script (0 if no commands are executed), and false if filename
              is not found or cannot be read.
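A minimal sketch of what this means in practice (myvars.sh and the JAVA_HOME value are made up for illustration): variables set by a sourced file stay in the current shell, while running the file in a child shell leaves the current shell untouched:
$ cat myvars.sh
export JAVA_HOME=/opt/jdk
$ sh myvars.sh          # runs in a child shell; JAVA_HOME vanishes with it
$ echo $JAVA_HOME       # still empty
$ . myvars.sh           # same as "source myvars.sh": executed in the current shell
$ echo $JAVA_HOME
/opt/jdk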


===================================================
How to make the client putty support Chinese display:
===================================================
Select Translation---->UTF-8 --->Apply


===================================================
How to run the executable
===================================================
. filename ----> sources filename: the commands in it are executed in the current shell (same as source filename)
sh filename ----> executes filename in a new shell (sh/bash); no execute permission is needed
./filename ----> filename must have the X (execute) permission; ./ means the current path. If you are in another directory, you have to use the absolute path.


Under Linux, if an executable is in /bin or /usr/bin, just type its name to run it.
If it is in another directory, for example to run the file named time in /root,
you just cd /root and then run ./time
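A small sketch of the difference (hello.sh is a made-up script; the output shown is illustrative):
$ echo 'echo hello from $0' > hello.sh
$ sh hello.sh             # runs in a new shell, no execute permission needed
hello from hello.sh
$ ./hello.sh              # fails until the file is given the X permission
bash: ./hello.sh: Permission denied
$ chmod +x hello.sh && ./hello.sh
hello from ./hello.sh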


umask variable
umask is the user file-creation mask, made up of three octal digits corresponding to the owning user, the group, and other users:
user    group   others
r w x   r w x   r w x
A bit that is set in the mask suppresses that permission on newly created files: r suppresses reading, w suppresses writing, x suppresses execution.
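A quick sketch of the effect (file names and the abbreviated ls output are illustrative): new files start from mode 666 and new directories from 777, and the bits set in the mask are removed:
$ umask                 # show the current mask; 022 is a common default
0022
$ touch f1 && mkdir d1
$ ls -ld f1 d1          # 666-022 -> rw-r--r--, 777-022 -> rwxr-xr-x
-rw-r--r--  1 root root    0 ... f1
drwxr-xr-x  2 root root 4096 ... d1
$ umask 077             # from now on, suppress all group and other permissions
$ touch f2 && ls -l f2
-rw-------  1 root root 0 ... f2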


===================================================
The configure command
===================================================
[root@localhost xmlrpc]# ./configure -h
`configure' configures this package to adapt to many kinds of systems.


Usage: ./configure [OPTION]... [VAR=VALUE]...


To assign environment variables (e.g., CC, CFLAGS...), specify them as
VAR=VALUE.  See below for descriptions of some of the useful variables.


Defaults for the options are specified in brackets.
Configuration:
  -h, --help              display this help and exit
      --help=short        display options specific to this package
      --help=recursive    display the short help of all the included packages
  -V, --version           display version information and exit
  -q, --quiet, --silent   do not print `checking...' messages
      --cache-file=FILE   cache test results in FILE [disabled]
  -C, --config-cache      alias for `--cache-file='
  -n, --no-create         do not create output files
      --srcdir=DIR        find the sources in DIR [configure dir or `..']


Installation directories:
  --prefix=PREFIX         install architecture-independent files in PREFIX
                          [/usr/local]
  --exec-prefix=EPREFIX   install architecture-dependent files in EPREFIX
                          [PREFIX]


By default, `make install' will install all the files in `/usr/local/bin', `/usr/local/lib' etc.  You can specify an installation prefix other than `/usr/local' using `--prefix', for instance `--prefix=$HOME'.


For better control, use the options below.


Fine tuning of the installation directories:
  --bindir=DIR            user executables [EPREFIX/bin]
  --sbindir=DIR           system admin executables [EPREFIX/sbin]
  --libexecdir=DIR        program executables [EPREFIX/libexec]
  --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
  --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
  --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
  --libdir=DIR            object code libraries [EPREFIX/lib]
  --includedir=DIR        C header files [PREFIX/include]
  --oldincludedir=DIR     C header files for non-gcc [/usr/include]
  --datarootdir=DIR       read-only arch.-independent data root [PREFIX/share]
  --datadir=DIR           read-only architecture-independent data [DATAROOTDIR]
  --infodir=DIR           info documentation [DATAROOTDIR/info]
  --localedir=DIR         locale-dependent data [DATAROOTDIR/locale]
  --mandir=DIR            man documentation [DATAROOTDIR/man]
  --docdir=DIR            documentation root [DATAROOTDIR/doc/PACKAGE]
  --htmldir=DIR           html documentation [DOCDIR]
  --dvidir=DIR            dvi documentation [DOCDIR]
  --pdfdir=DIR            pdf documentation [DOCDIR]
  --psdir=DIR             ps documentation [DOCDIR]


System types:
  --build=BUILD     configure for building on BUILD [guessed]
  --host=HOST       cross-compile to build programs to run on HOST [BUILD]
  --target=TARGET   configure for building compilers for TARGET [HOST]


Optional Features and Packages:
  --disable-option-checking  ignore unrecognized --enable/--with options
  --disable-FEATURE       do not include FEATURE (same as --enable-FEATURE=no)
  --enable-FEATURE[=ARG]  include FEATURE [ARG=yes]
  --with-PACKAGE[=ARG]    use PACKAGE [ARG=yes]
  --without-PACKAGE       do not use PACKAGE (same as --with-PACKAGE=no)
  --with-libdir=NAME      Look for libraries in .../NAME rather than .../lib
  --with-php-config=PATH  Path to php-config php-config
  --with-xmlrpc=DIR     Include XMLRPC-EPI support
  --with-libxml-dir=DIR     XMLRPC-EPI: libxml2 install prefix
  --with-libexpat-dir=DIR   XMLRPC-EPI: libexpat dir for XMLRPC-EPI (deprecated)
  --with-iconv-dir=DIR      XMLRPC-EPI: iconv dir for XMLRPC-EPI
  --enable-shared=PKGS  build shared libraries default=yes
  --enable-static=PKGS  build static libraries default=yes
  --enable-fast-install=PKGS  optimize for fast installation default=yes
  --with-gnu-ld           assume the C compiler uses GNU ld default=no
  --disable-libtool-lock  avoid locking (might break parallel builds)
  --with-pic              try to use only PIC/non-PIC objects default=use both
  --with-tags=TAGS      include additional configurations automatic




Some influential environment variables:
  CC          C compiler command
  CFLAGS      C compiler flags
  LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a
              nonstandard directory <lib dir>
  LIBS        libraries to pass to the linker, e.g. -l<library>
  CPPFLAGS    C/C++/Objective C preprocessor flags, e.g. -I<include dir> if
              you have headers in a nonstandard directory <include dir>
  CPP         C preprocessor


Use these variables to override the choices made by `configure' or to help
it to find libraries and programs with nonstandard names/locations.




Explanation of Configure command parameters
The 'configure' script has a large number of command line options. These options may change from package to package, but many of the basic options remain the same. Running the 'configure' script with the '--help' option will show you all the available options. Although many of these options are seldom used, it is useful to know that they exist when you need to configure a package for a specific purpose. The following is a brief description of each option.
--cache-file=FILE
'configure' tests the presence of features (or bugs!) on your system. To speed up subsequent configuration, the results of the tests are stored in a cache file. When configuring a complex source tree with 'configure' scripts in each subtree, the existence of a good cache file helps a lot.
--help
Output help information. Even experienced users may occasionally need to use the '--help' option, as a complex project may contain additional options. For example, the 'configure' script in the GCC package contains options that allow you to control whether or not to generate and use the GNU assembler in GCC.
--no-create
One of the main jobs of 'configure' is to produce output files (such as the Makefile). This option prevents 'configure' from generating those files. You can think of this as a dry run, although the cache is still rewritten.
--quiet
--silent
When 'configure' runs its tests, it prints brief messages telling the user what it is doing, because 'configure' can be slow and without this output the user would be left wondering what is going on. Using either of these two options suppresses that output (so by using them, you too can be left wondering!).
--version
Prints the version number of Autoconf used to generate the 'configure' script.
--prefix=PREFIX
'--prefix' is the most commonly used option. The generated 'Makefile' reads the value passed with this option and relocates all of the architecture-independent parts of the package at install time. For example, when installing a package such as Emacs, the following command will cause the Emacs Lisp files to be installed to "/opt/gnu/share":
$ ./configure --prefix=/opt/gnu


--exec-prefix=EPREFIX
This is similar to the '--prefix' option, but it sets the installation location of architecture-dependent files. The compiled 'emacs' binary is one such file. If this option is not set, it defaults to the value of the '--prefix' option.


--bindir=DIR
Specifies the location where the binary is to be installed. A binary file is defined as a program that can be executed directly by the user.
--sbindir=DIR
Specifies where system administration binaries are installed. These are programs that can normally only be executed by the superuser.
--libexecdir=DIR
Specifies where the executable support files are installed. In contrast to binary files, these files are never executed directly by the user, but can be executed by the binary files mentioned above.
--datadir=DIR
Specify the location where the generic data file is to be installed.
--sysconfdir=DIR
Specifies the installation location of read-only data used on a single machine.
--sharedstatedir=DIR
Specifies the installation location of writable data that can be shared across multiple machines.
--localstatedir=DIR
Specify the location of the writeable data that can only be used by a single machine.
--libdir=DIR
Specify where to install the library file.
--includedir=DIR
Specifies where the C header files are installed. This option can also be used for header files in other languages such as C++.
--oldincludedir=DIR
Specifies the location of C header files installed for compilers other than GCC.
--infodir=DIR
Specify the location where the Info format files are installed. Info is the file format used by GNU projects.
--mandir=DIR
Specifies where man pages are installed.
--srcdir=DIR
This option has no effect on the installation. It will tell 'configure' where the source code is located. Generally this option is not needed, because the 'configure' script is usually in the same directory as the source files.
--program-prefix=PREFIX
Specify the prefix that will be added to the name of the installed program. For example, configuring a program named 'tar' with '--program-prefix=g' will cause the installed program to be named 'gtar'. When used in conjunction with other installation options, this option will only work if it is used by the `' file.
--program-suffix=SUFFIX
Specifies the suffix that will be added to the name of the installed program.
--program-transform-name=PROGRAM
PROGRAM is a sed script. When a program is installed, its name is run through `sed -e PROGRAM' to generate the installed name.
--build=BUILD
Specify the platform on which the package will be installed. If not specified, the default value will be the value of the '--host' option.
--host=HOST
Specify the system platform on which the software will run. If not specified, `' will be run to detect it.
--target=TARGET
Specifies the system platform that the software is targeted to. This is mainly useful in the context of programming language tools such as compilers and assemblers. If not specified, the value of the '--host' option will be used by default.
--disable-FEATURE
Some packages offer compile-time configuration of large optional features, such as using the Kerberos authentication system or an experimental compiler optimization. If such a feature is enabled by default, it can be disabled with '--disable-FEATURE', where 'FEATURE' is the name of the feature. For example:
$ ./configure --disable-gui
--enable-FEATURE[=ARG]
Conversely, some packages provide features that are disabled by default and can be enabled with '--enable-FEATURE', where 'FEATURE' is the name of the feature. A feature may take an optional argument. For example:
$ ./configure --enable-buffers=128
`--enable-FEATURE=no' is synonymous with '--disable-FEATURE' mentioned above.
--with-PACKAGE[=ARG]
 
In the free software community there is a strong tradition of building on existing packages and libraries. When configuring a source tree with 'configure', it is possible to provide information about other installed packages. For example, the BLT widget toolkit relies on Tcl and Tk, so to configure BLT it may be necessary to tell 'configure' where Tcl and Tk are installed:
$ ./configure --with-tcl=/usr/local --with-tk=/usr/local
'--with-PACKAGE=no' is synonymous with '--without-PACKAGE' which will be mentioned below.
--without-PACKAGE
Sometimes you may not want your package to interact with packages already on the system. For example, you may not want your new compiler to use GNU ld. You can do this by using this option.
$ ./configure --without-gnu-ld
--x-includes=DIR
This option is a special case of the '--with-PACKAGE' option. When Autoconf was first developed, it was popular to use 'configure' as a workaround for Imake when building software that runs on X. The '--x-includes' option provides a way to tell the 'configure' script which directory contains the X11 header files.
--x-libraries=DIR
Similarly, the '--x-libraries' option provides a way to indicate to the 'configure' script the directory containing the X11 libraries.
It is not necessary, and not a good idea, to run 'configure' inside the source tree. A good 'Makefile' generated by 'configure' can build a package whose source code lives in another tree. The advantage of building derived files outside the source tree is obvious: derived files such as object files would otherwise clutter up the source tree, and that would also make it very difficult to build the same object files for a different system or with different configuration options. It is recommended to use three trees: a source tree, a build tree, and an install tree. Here is a concise example of building the GNU mmalloc package in this way:
$ gtar zxf mmalloc-1.
$ mkdir build && cd build
$ ../mmalloc-1.0/configure
creating cache ./
checking for gcc... gcc
checking whether the C compiler (gcc ) works... yes
checking whether the C compiler (gcc ) is a cross-compiler... no
checking whether we are using GNU C... yes
checking whether gcc accepts -g... yes
checking for a BSD compatible install... /usr/bin/install -c
checking host system type... i586-pc-linux-gnu
checking build system type... i586-pc-linux-gnu
checking for ar... ar
checking for ranlib... ranlib
checking how to run the C preprocessor... gcc -E
checking for ... yes
checking for getpagesize... yes
checking for working mmap... yes
checking for ... yes
checking for ... yes
updating cache ../
creating ./
The tree is now configured, and you can proceed to build and install the package to the default location '/usr/local':
$ make all && make install


==============================
How to Configure a Linux NIC
==============================
Configuration of the network card:
(Note: The IP address set by the ifconfig command takes effect instantly, but after rebooting the machine, the IP address reverts to the original IP address, so the ifconfig command can only be used to set a temporary IP address)


Command: system-config-network
Menu: System -> Preferences -> Network Connections


1). NIC Configuration File
Configuration for network card information usually includes: configuring the IP address, subnet mask, and gateway. NIC information is stored in the NIC configuration file. The network card configuration file is located in the /etc/sysconfig/network-scripts directory.
Each NIC corresponds to one configuration file. The configuration file naming rule is:


ifcfg-<NIC type><serial number of the card>


Since the Ethernet card type is eth and the serial number of the card starts at 0, the configuration file name for the first card is ifcfg-eth0, the second card is ifcfg-eth1, and so on.
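For instance, listing the directory shows one ifcfg-* file per interface (the exact files depend on the machine; this listing is only illustrative):
[root@localhost ~]# ls /etc/sysconfig/network-scripts/ifcfg-*
/etc/sysconfig/network-scripts/ifcfg-eth0
/etc/sysconfig/network-scripts/ifcfg-eth1
/etc/sysconfig/network-scripts/ifcfg-lo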


The commonly used items in a NIC configuration file are as follows:
DEVICE=eth0, defines the identification name of this NIC.
BOOTPROTO=dhcp, how the NIC obtains its address at boot. static or none: a fixed IP address; bootp or dhcp: the address is obtained through the BOOTP or DHCP protocol.
HWADDR=00:02:B3:0B:64:22, the MAC address of this NIC.
ONBOOT=yes, (most important) whether to bring up this NIC when the network service starts. When the network service starts on a RedHat system,
it reads the configuration files of all network cards stored in the /etc/sysconfig/network-scripts/ directory one by one.
If the ONBOOT setting of the network card configuration file is set to yes, the network service invokes the ifup command to start the card;
If the ONBOOT parameter of the NIC's configuration file is no, network skips booting this NIC.
TYPE=Ethernet, the type of network card.
USERCTL=no, whether to allow ordinary users to start or stop this NIC.
IPV6INIT=no, whether to enable IPV6 on this NIC.
PEERDNS=yes, whether to allow the NIC to query the DHCP server for DNS information at boot time and automatically overwrite the /etc/ configuration file.


The following items are used to give this NIC a static IP address; in that case BOOTPROTO must be static or none.
IPADDR=192.168.1.55 to specify the IP address of the NIC in a static manner.
NETMASK=255.255.255.0, defines the subnet mask of this NIC.
MTU=1500, sets the maximum transmission unit size of MAC frames for the NIC.
GATEWAY=192.168.1.1 to set the default gateway for the network.
DNS1=192.168.128.5 to specify the primary DNS server address.
DNS2=192.168.128.6 to specify the alternate DNS server address.
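Putting the static-IP items above together, a minimal static ifcfg-eth0 might look like the sketch below (the addresses are the illustrative ones from the list above; real captures from a live machine follow later in this section). After editing, restart the network service (service network restart) for the change to take effect.
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.1.55
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
DNS1=192.168.128.5
DNS2=192.168.128.6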


2). Configure the NIC information
Configuring NIC information can be done by directly modifying the relevant content in the NIC configuration file, but this method is more demanding on the user.
Three commands that are commonly used to set IP addresses in RedHat Enterprise Linux 5 are: system-config-network, setup, and ifconfig.
One of the ways to set the IP address with ifconfig will be described in the Common Commands section.
(1) system-config-network command
Entering the system-config-network command at the command prompt will launch the visual network configuration interface (this is an easy way for novices)


(2) setup setup NIC information
Enter the setup command at the command line to enter the system setup interface, and then select the NIC settings to enter the NIC setup screen (many system settings can be configured with the setup command, so it is widely used)


(3) ifconfig network card common commands
The ifconfig command is more powerful and can be used to view and set up network card information.
a. Viewing Network Card Information
Command syntax: ifconfig [Parameters]
Parameter Description:
No parameter: shows the currently active NIC
-a: displays configuration information for all NICs in the system
NIC device name: displays configuration information for the specified NIC
eg: View eth0 card information: #ifconfig eth0


b. Setting the IP address
Command syntax: ifconfig NIC device name IP address netmask subnet mask
(Note: The IP address set by the ifconfig command takes effect instantly, but after rebooting the machine, the IP address reverts to the original IP address, so the ifconfig command can only be used to set a temporary IP address)
eg:ifconfig eth0 192.168.168.156 netmask 255.255.255.0


c.Modify MAC address
Command syntax: ifconfig NIC device name hw ether MAC address
(Note: disable the NIC before modifying its MAC address and enable it afterward)
eg:ifconfig eth0 hw ether 00:0C:29:03:F3:76


A few common commands:
Disable the network card
Syntax: ifdown NIC device name or ifconfig NIC device name down
Enabling NICs
Syntax: ifup NIC device name or ifconfig NIC device name up
3. Binding IP and MAC addresses
Implementation method: create /etc/ethers file, file content "ip address mac address", and then execute the "arp -f" command to make the configuration effective.
eg: Bind IP address 193.168.168.154 to MAC address 00:0C:29:03:F3:75.
#echo "193.168.168.154 00:0C:29:03:F3:75">>/etc/ethers
#arp -f


Example.
[root@localhost network-scripts]# more ifcfg-eth4
DEVICE=eth4
NM_CONTROLLED=yes
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME="System eth4"
UUID=84d43311-57c8-8986-f205-9c78cd6ef5d2
IPADDR=192.168.0.15
PREFIX=24
HWADDR=00:22:93:72:95:A8




[root@localhost network-scripts]# more ifcfg-eth6
DEVICE=eth6
HWADDR=00:22:93:73:2c:42
NM_CONTROLLED=yes
ONBOOT=no
BOOTPROTO=none
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
IPADDR=192.168.0.6
NETMASK=255.255.255.0
DNS1=' '
GATEWAY=192.168.0.1


[root@localhost network-scripts]# more ifcfg-eth7
DEVICE=eth7
NM_CONTROLLED=yes
ONBOOT=yes
BOOTPROTO=dhcp
TYPE=Ethernet
IPV6INIT=no
USERCTL=no
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
NAME="System eth7"
UUID=1e690eec-2d2c-007e-535f-a873a2b375d5
HWADDR=00:22:93:73:2C:43
PEERDNS=yes
PEERROUTES=yes


--------------------------------------------
Linux system ifconfig command use and results analysis
--------------------------------------------
NIC naming convention under Linux: eth0 is the first Ethernet card, eth1 the second, and so on. lo is the loopback interface, which has the fixed IP address 127.0.0.1 with an 8-bit mask; it represents the machine itself.


1. ifconfig is used to view information about the network card.
ifconfig [Interface]
Interface is optional; if it is omitted, information about all active NICs in the system is displayed. If an interface is given, information about that NIC is displayed.
Example: ifconfig eth0
eth0 Link encap:Ethernet
            HWaddr 00:0C:29:F3:3B:F2
            inet addr:192.168.0.10 Bcast:192.168.0.255 Mask:255.255.255.0
            UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
            RX packets:78 errors:0 dropped:0 overruns:0 frame:0
            TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
            collisions:0 txqueuelen:100
            RX bytes:11679 (11.4 Kb)
            TX bytes:14077 (13.7 Kb)
            Interrupt:10 Base address:0x1080 
From this we can see:
First line: the connection type (Ethernet) and the hardware MAC address (HWaddr)
Second line: the NIC's IP address, broadcast address, and subnet mask
Third line: UP (the NIC is up), RUNNING (the cable is connected), MULTICAST (multicast is supported), MTU:1500 (maximum transmission unit of 1500 bytes)
Fourth and fifth lines: statistics on packets received and sent
RX bytes / TX bytes lines: statistics on the number of bytes received and sent.


2. Configuring the network card with ifconfig
Configure the IP address of the network card
ifconfig eth0 192.168.0.1 netmask 255.255.255.0 
I have configured a 192.168.0.1 IP address and 24-bit mask on eth0. What if I want to configure another 192.168.1.1/24 IP address on eth0? Use the following command
ifconfig eth0:0 192.168.1.1 netmask 255.255.255.0 
Then run the ifconfig command again and you will see two interfaces: eth0 and eth0:0. If you want to add more IP addresses, the alias names simply continue: eth0:1, eth0:2 ... add as many as you need.
Configure the hardware address of the network card
ifconfig eth0 hw ether xx:xx:xx:xx:xx:xx 
This changes only the NIC's hardware address, which can be used to get around IP-MAC address binding on the LAN.
Disable the network card
ifconfig eth0 down 
Enable the network card
ifconfig eth0 up 
The ifconfig command is very powerful; you can also set the NIC's MTU, promiscuous mode, and so on. They are not covered one by one here - explore them yourself when you have time.
Note: settings made with the ifconfig command do not survive a NIC restart or a machine reboot. If you want to keep the configuration permanently, you have to modify the NIC's configuration file.


3. View IP address assignments: ip addr
[root@localhost version]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:22:93:72:95:ac brd ff:ff:ff:ff:ff:ff
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:22:93:72:95:ad brd ff:ff:ff:ff:ff:ff
4: eth2: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:22:93:72:95:aa brd ff:ff:ff:ff:ff:ff
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:22:93:72:95:ab brd ff:ff:ff:ff:ff:ff
    inet6 fe80::222:93ff:fe72:95ab/64 scope link 
       valid_lft forever preferred_lft forever
6: eth4: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 00:22:93:72:95:a8 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.15/24 brd 192.168.0.255 scope global eth4
7: eth5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:22:93:72:95:a9 brd ff:ff:ff:ff:ff:ff
    inet 10.118.202.16/24 brd 10.118.202.255 scope global eth5
    inet6 fe80::222:93ff:fe72:95a9/64 scope link 
       valid_lft forever preferred_lft forever
8: eth10: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:19:c6:9d:76:e4 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.15/24 brd 192.168.0.255 scope global eth10
9: eth12: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:19:c6:9d:76:e5 brd ff:ff:ff:ff:ff:ff
    inet 192.168.0.16/24 brd 192.168.0.255 scope global eth12
10: eth13: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:19:c6:9d:76:e6 brd ff:ff:ff:ff:ff:ff
11: eth11: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN qlen 1000
    link/ether 00:19:c6:9d:76:e7 brd ff:ff:ff:ff:ff:ff
    
--------------------------------------------
Route command under linux
--------------------------------------------
In the routing table, the default route appears as a destination network of 0.0.0.0 and a subnet mask of 0.0.0.0. If the destination address of a packet cannot be matched with any route, then the system will forward the packet using the default route.
route add default gw 10.118.202.1 Add a default route with a gateway of 10.118.202.1 - very important!




For a machine to reach another subnet, a route to that subnet must be added on the machine; here is some information on that. The basic operations are as follows:
Generally, routes are set up so that you can reach other subnets. For example, if your host is on 192.168.10.0/24 and you want to reach a host on the 192.168.20.0/24 network, and you know a gateway IP such as 192.168.10.1 (which must be on the same subnet as your host), you can configure the route like this:
Add Route
    route add -net 192.168.20.0 netmask 255.255.255.0 gw 192.168.10.1


Viewing Route Status
    route -n


Delete Route
    route del -net 192.168.20.0 netmask 255.255.255.0
 
route: viewing and modifying the routing table
We discussed routing in the networking fundamentals: there must be a route between two hosts for them to communicate over TCP/IP, otherwise they simply cannot connect!
Generally, every network interface generates a route of its own; for example, the hosts in Bird's lab have an eth0 and an lo, so:


Command Format:
[root@linux ~]# route [-nee]
[root@linux ~]# route add [-net|-host] [domain or host] netmask [mask] [gw|dev]
[root@linux ~]# route del [-net|-host] [domain or host] netmask [mask] [gw|dev]


Observed parameters:
-n : Instead of using protocol or host name, use IP or port number;
-ee: Use more detailed information to display
Parameters related to adding (add) and deleting (del) routes:
-net : Indicates that the trailing route is a network domain;
-host : Indicates that it is followed by a route to a single host;
netmask: Related to the domain, you can set netmask to determine the size of the domain;
gw : short for gateway, followed by the IP value, different from dev;
dev : If you just want to specify which network card to connect to, use this setting, followed by eth0, etc.
   
Example 1: Simply observing the routing table
[root@linux ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
169.254.0.0     0.0.0.0         255.255.0.0     U     0      0        0 eth0
0.0.0.0         192.168.10.30   0.0.0.0         UG    0      0        0 eth0
[root@linux ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    *               255.255.255.0   U     0      0        0 eth0
169.254.0.0     *               255.255.0.0     U     0      0        0 eth0
default              0.0.0.0         UG    0      0        0 eth0


Comparing the output of route and route -n above, you can see that with the -n parameter the output shows mainly IP addresses, while plain route shows hostnames! In other words, by default route tries to resolve each IP to a hostname, and when it cannot, the lookup just makes the output slow, so Bird usually runs route -n directly. From the above we also know that default = 0.0.0.0/0.0.0.0. What must you know about this output?
- Destination, Genmask: these two fields are the network and the netmask respectively; together they describe a complete network (domain)!
- Gateway: Which gateway is the domain connected through? If it shows 0.0.0.0, it means that the route is directly transmitted by this machine, that is, it can be directly transmitted through the MAC of the LAN; if it shows IP, it means that the route needs to go through the help of the router (gateway) before it can be transmitted.
- Flags: There are several flags in total, representing the following meanings:
o U (route is up): the route is up;
o H (target is a host): The target is a host (IP) and not a domain;
o G (use gateway): requires that the packet be forwarded through an external host (gateway);
o R (reinstate route for dynamic routing): flag for reinstating routing information when using dynamic routing;
o D (dynamically installed by daemon or redirect): Dynamic routing has been set up by the service or redirect function.
o M (modified from routing daemon or redirect): the route has been modified;
o ! (reject route): this route will not be accepted (used to ward off insecure domains!)
- Iface: the interface through which this route delivers packets.


In addition, note the order of the routes above: from the smallest network (192.168.10.0/24, a class C), to the larger one (169.254.0.0/16, a class B), and finally the default route (0.0.0.0/0.0.0.0). When the system has to decide how to deliver a packet, it walks through the routes in this order. For example, with only the three routes above, a packet for 192.168.10.20 first matches the route for 192.168.10.0/24, so it goes straight out through eth0. What about a packet for Yahoo's host? Yahoo's host IP is 202.43.195.52, and checking in order I can tell:
1) it is not in 192.168.10.0/24;
2) it is not in 169.254.0.0/16; so we arrive at
3) 0.0.0.0/0 - OK, out it goes, and the packet is passed through eth0 to the gateway host 192.168.10.30! So routing is matched in order. What happens, then, if the same route appears more than once, e.g. when two network cards on your host are given IP addresses in the same network? The following will happen:
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth0
192.168.10.0    0.0.0.0         255.255.255.0   U     0      0        0 eth1
In other words, since routes are matched in order, packets for that network will always be sent out through eth0 no matter which interface (eth0 or eth1) they arrived on, so giving one host two IPs in the same network does not make much sense - it is redundant - unless the machine hosts virtual machines (Xen, VMware, etc.), in which case two IPs in the same network on one host can be necessary.


Example 2: Adding and Removing Routes
[root@linux ~]# route del -net 169.254.0.0 netmask 255.255.0.0 dev eth0
# The above action removes the domain 169.254.0.0/16!
# Please note that when deleting a route you need to supply the information exactly as it appears in the routing table,
# including the netmask, dev and other parameters! Pay attention to this.
[root@linux ~]# route add -net 192.168.100.0 netmask 255.255.255.0 dev eth0
# Add a route with route add! Note that the gateway of this route must be reachable from your host.
# For example, the following command would produce an error:
# route add -net 192.168.200.0 netmask 255.255.255.0 gw 192.168.200.254
# because my environment only has 192.168.10.100, which cannot reach 192.168.200.254 directly -
# a gateway has to be on a directly connected segment reachable via MAC. That's understandable, isn't it?
[root@linux ~]# route add default gw 192.168.10.30
# This is how to add the default route! Note that a single default route is enough!
# If you have made a mess of the routes here, remember to reset your network with the following command:
# /etc//network restart
To delete or add routes, refer to the examples above; in fact man route contains a wealth of information - read it carefully! Just remember: when you get the error "SIOCADDRT: Network is unreachable", it is because the IP given after gw cannot be reached directly from your network (the gateway is not in your network), so quickly check whether you typed it wrong! Go for it!


Example:
[root@oam1 ~]# route -n
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     0.0.0.0         255.255.255.0   U     0      0        0 eth8
192.168.0.0     0.0.0.0         255.255.255.0   U     1      0        0 eth3
192.168.122.0   0.0.0.0         255.255.255.0   U     0      0        0 virbr0
169.254.0.0     0.0.0.0         255.255.0.0     U     1006   0        0 eth8


[root@oam1 ~]# route del -net 192.168.0.0 netmask 255.255.255.0 dev eth3


[root@oam1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
192.168.0.0     *               255.255.255.0   U     0      0        0 eth8
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0
link-local      *               255.255.0.0     U     1006   0        0 eth8




# Routes added with the route command are lost when the machine reboots or the NIC restarts. Ways to set up a permanent route under Linux:
1. In /etc/, add
2. Add to the end of /etc/sysconfig/network
3./etc/sysconfig/static-router :
any net /24 gw
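The file names above are truncated, so as one hedged sketch of a commonly used RHEL-style variant (an assumption, not a capture from this machine): a per-interface route-ethX file under /etc/sysconfig/network-scripts/ is re-applied by the network service at startup:
[root@oam1 ~]# cat /etc/sysconfig/network-scripts/route-eth0
192.168.20.0/24 via 192.168.10.1 dev eth0
[root@oam1 ~]# service network restart     # the route now survives reboots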


------------------------
The route command under WINDOWS:
------------------------
ROUTE syntax
route [-f] [-p] [Command] [Destination] [mask Netmask] [Gateway] [metric Metric] [if Interface]


The simple operation is as follows.
1. View route status: route print
View ipv4 (ipv6) route status only: route print -4(-6)
    
2. Add a route: route add destination network mask subnet mask gateway -- lost after the machine or the NIC restarts
    route add 192.168.20.0 mask 255.255.255.0 192.168.10.1
    
3. Add permanent: route -p add destination network mask subnet mask gateway
    route -p add 192.168.20.0 mask 255.255.255.0 192.168.10.1
    
4. delete route: route delete destination network mask subnet mask
    route delete 192.168.20.0 mask 255.255.255.0
    
------------------------
traceroute command
------------------------
For route tracing, use tracert <ip address> on Windows and traceroute <ip address> on Linux.


[root@oam-nas2 yuanjs]# traceroute 10.118.4.157
traceroute to 10.118.4.157 (10.118.4.157), 30 hops max, 60 byte packets
 1  10.118.202.1 (10.118.202.1)  1.801 ms  3.990 ms  7.271 ms
 2  10.118.246.73 (10.118.246.73)  1.092 ms  1.794 ms  2.465 ms
 3  10.118.246.6 (10.118.246.6)  174.842 ms  176.603 ms  178.407 ms
 4  10.118.4.157 (10.118.4.157)  0.262 ms  0.237 ms  0.195 ms    


C:\Documents and Settings\yuanjinsong>tracert 10.118.202.16
Tracing route to 10.118.202.16 over a maximum of 30 hops


  1     3 ms     1 ms     1 ms  10.118.4.129
  2    <1 ms    <1 ms    <1 ms  10.118.246.5
  3     1 ms     1 ms     2 ms  10.118.246.74
  4    <1 ms    <1 ms    <1 ms  10.118.202.16


Trace complete.




C:\Documents and Settings\yuanjinsong>tracert 10.118.202.96


Tracing route to 10.118.202.96 over a maximum of 30 hops


  1     2 ms     1 ms     2 ms  10.118.4.129
  2    <1 ms    <1 ms    <1 ms  10.118.246.5
  3     1 ms     1 ms     1 ms  10.118.246.74
  4     *        *        *     Request timed out.
  5     *        *


C:\Documents and Settings\yuanjinsong>tracert


Tracing route to [10.41.70.8]
over a maximum of 30 hops:


  1   700 ms     1 ms     1 ms  10.118.4.129
  2    <1 ms    <1 ms    <1 ms  10.118.246.5
  3    25 ms    25 ms    26 ms  10.118.254.253
  4    38 ms    29 ms    28 ms  ^C
  
------------------------------------------------------------------------
Common network configuration commands.
------------------------------------------------------------------------    
1) First, plug the network cable into the card and run: mii-tool eth0 , mii-tool eth1 ... mii-tool eth13
See which ports report link status and record those port numbers.
You can also use the ethtool eth0 command to check.
2) Then unplug the network cable from the card and run mii-tool again on the port numbers that had link status.
See which one changes from link to no link.
  
ifconfig -a |grep add
ifup eth0
ifdown eth0
ethtool eth0
mii-tool -v eth0
/etc//network status
route


service --status-all
service network restart --start network configuration
setup
startx


------------------------------------------------------------------------
How to configure routing so that different network segments can access each other.
------------------------------------------------------------------------
In the routing table, the default route appears as a destination network of 0.0.0.0 and a subnet mask of 0.0.0.0. If the destination address of a packet cannot be matched with any route, then the system will forward the packet using the default route.
route add default gw 10.118.202.1 Add a default route with a gateway of 10.118.202.1 - very important!
The same route can also be written as: route add -net 0.0.0.0 netmask 0.0.0.0 gw 10.118.202.1


I ran into a problem: IPs on the same network segment can be pinged, but IPs on a different network segment cannot.
Answer: No routes are set up.
[root@oam1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.118.202.0    *               255.255.255.0   U     1      0        0 eth9
192.168.0.0     *               255.255.255.0   U     0      0        0 eth10
192.168.0.0     *               255.255.255.0   U     0      0        0 eth8
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0
link-local      *               255.255.0.0     U     1005   0        0 eth3
link-local      *               255.255.0.0     U     1009   0        0 eth10


As you can see from the output above, a ping to 10.118.4.140 has no matching route to use, so it cannot get through.
A default route needs to be added, specifying the physical port and the gateway through which traffic reaches the other network segment.
Go to the /etc/sysconfig/network-scripts directory:
vim ifcfg-eth9, specify the following:
DEVICE=eth9 //device name
BOOTPROTO=none
NETMASK=255.255.255.0 //Subnet Mask
TYPE=Ethernet //device type
HWADDR=00:22:93:72:b1:ab
IPADDR=10.118.202.201 //IP address
IPV6INIT=no
ONBOOT=yes //automatically loaded when OS boots up
USERCTL=no
DEFROUTE=yes //---------- specifies that this is the default route
GATEWAY=10.118.202.1 //---------- specifies the gateway address.


Then check the routing information:
[root@oam1 ~]# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
10.118.202.0    *               255.255.255.0   U     1      0        0 eth9
192.168.0.0     *               255.255.255.0   U     0      0        0 eth10
192.168.0.0     *               255.255.255.0   U     0      0        0 eth8
192.168.122.0   *               255.255.255.0   U     0      0        0 virbr0
link-local      *               255.255.0.0     U     1005   0        0 eth3
link-local      *               255.255.0.0     U     1009   0        0 eth10
default         10.118.202.1    0.0.0.0         UG    0      0        0 eth9


At this point, you can see that there is a default route, and all addresses that are not on this network segment (10.118.202.*) take this route.


------------------------------------------------------------------------
Linux Commands I've Used - ifconfig - Network Configuration Commands
------------------------------------------------------------------------
[root@csp ~]# ifconfig --help
Usage:
  ifconfig [-a] [-v] [-s] <interface> [[<AF>] <address>]
  [add <address>[/<prefixlen>]]
  [del <address>[/<prefixlen>]]
  [[-]broadcast [<address>]]  [[-]pointopoint [<address>]]
  [netmask <address>]  [dstaddr <address>]  [tunnel <address>]
  [outfill <NN>] [keepalive <NN>]
  [hw <HW> <address>]  [metric <NN>]  [mtu <NN>]
  [[-]trailers]  [[-]arp]  [[-]allmulti]
  [multicast]  [[-]promisc]
  [mem_start <NN>]  [io_addr <NN>]  [irq <NN>]  [media <type>]
  [txqueuelen <NN>]
  [[-]dynamic]
  [up|down] ...


Description of use
The ifconfig command is most often used to display information about the system's network interfaces (network cards), and it can also be used to configure a network interface: activate it, shut it down, set its address, and so on. Under Linux the NIC naming rule is: eth0 is the first Ethernet card, eth1 the second, and so on; lo is the loopback interface, whose IP address is fixed at 127.0.0.1 with an 8-bit mask.


Common Parameters
Format: ifconfig


Displays information about currently active network interfaces.
If no arguments are given, ifconfig displays the status of the currently active interfaces. 


Format: ifconfig {INTERFACE}


Displays information about the specified network interface. For example: eth0, eth1.
If a single interface  argument  is given, it displays the status of the given interface only; 


Format: ifconfig -a


Displays information about all network interfaces, whether activated or not.
if a single -a argument is given, it displays the status of all interfaces, even those that are down.  
Display info on all network interfaces on server, active or inactive.


Other formats to configure the network interface.
Otherwise, it configures an interface.


Format: ifconfig {INTERFACE} up


Format: ifup {INTERFACE}


Activate the specified network interface. For example: eth0, eth1.
This  flag  causes the interface to be activated.  It is implicitly specified if an address is assigned to the interface.


Format: ifconfig {INTERFACE} down


Format: ifdown {INTERFACE}


Shut down the specified network interface.
This flag causes the driver for this interface to be shut  down.


Format: ifconfig {INTERFACE} {IP}


Format: ifconfig {INTERFACE} {IP} netmask {NETMASK}


Set the IP address and mask for the specified network interface and activate it automatically. For example: eth0, eth0:0, eth0:1, the last two are virtual NICs.


Format: ifconfig {INTERFACE} add {IP}


Format: ifconfig {INTERFACE}:0 {IP}


Adds an IP address to the specified network interface.


Format: ifconfig {INTERFACE} del {IP}
Delete the IP address for the specified network interface.




usage example
Example 1 Viewing the Current Network Interface and Status via the ifconfig Command
ifconfig without parameters prints only the network interfaces that are active.


[root@jfht ~]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 00:1B:78:40:8C:20  
          inet addr:211.103.  Bcast:211.103.28.31  Mask:255.255.255.224
          inet6 addr: fe80::21b:78ff:fe40:8c20/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:134856806 errors:0 dropped:0 overruns:0 frame:0
          TX packets:140723373 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1680519599 (1.5 GiB)  TX bytes:2804853589 (2.6 GiB)
          Interrupt:114 Memory:fa000000-fa012800


ifconfig eth0 192.168.1.1/24 ----- adds the static IP address 192.168.1.1 with mask 255.255.255.0 to eth0; the setting does not survive a reboot.
          
Now explain the meaning of the output message:


eth0: the network interface
Link encap: the link (network) type
HWaddr: the physical (MAC) address of the network card
inet addr: the IP address
Bcast: the broadcast address
Mask: the subnet mask
UP: the network interface is up
RX packets, TX packets: the number of packets received and transmitted
RX bytes, TX bytes: the total number of bytes received and transmitted
Interrupt: the interrupt (IRQ) line used by the card
Base address: the card's I/O base address




eth1      Link encap:Ethernet  HWaddr 00:1B:78:40:8C:22  
          inet addr:192.168.1.191  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21b:78ff:fe40:8c22/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29821173 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28680326 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4264958692 (3.9 GiB)  TX bytes:427504706 (407.7 MiB)
          Interrupt:122 Memory:f8000000-f8012800 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:30263265 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30263265 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:63016162 (60.0 MiB)  TX bytes:63016162 (60.0 MiB)


 


The ifconfig command followed by the -a parameter prints all configured network interfaces, whether they are active or not.
[root@jfht ~]# ifconfig -a 
eth0      Link encap:Ethernet  HWaddr 00:1B:78:40:8C:20  
          inet addr:211.103.  Bcast:211.103.28.31  Mask:255.255.255.224
          inet6 addr: fe80::21b:78ff:fe40:8c20/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:134856877 errors:0 dropped:0 overruns:0 frame:0
          TX packets:140723396 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1680524793 (1.5 GiB)  TX bytes:2804859207 (2.6 GiB)
          Interrupt:114 Memory:fa000000-fa012800 


eth1      Link encap:Ethernet  HWaddr 00:1B:78:40:8C:22  
          inet addr:192.168.1.191  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21b:78ff:fe40:8c22/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29821183 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28680336 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4264959332 (3.9 GiB)  TX bytes:427505346 (407.7 MiB)
          Interrupt:122 Memory:f8000000-f8012800 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:30263271 errors:0 dropped:0 overruns:0 frame:0
          TX packets:30263271 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:63016642 (60.0 MiB)  TX bytes:63016642 (60.0 MiB)


sit0      Link encap:IPv6-in-IPv4  
          NOARP  MTU:1480  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)


 


The ifconfig command followed by the specified network interface name allows you to view specific network card information.
[root@jfht ~]# ifconfig eth1 
eth1      Link encap:Ethernet  HWaddr 00:1B:78:40:8C:22  
          inet addr:192.168.1.191  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::21b:78ff:fe40:8c22/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:29821190 errors:0 dropped:0 overruns:0 frame:0
          TX packets:28680343 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:4264959780 (3.9 GiB)  TX bytes:427505794 (407.7 MiB)
          Interrupt:122 Memory:f8000000-f8012800 


[root@jfht ~]# 
Example 2 Turning off and activating a network card in a VMWare virtual machine with the ifconfig command
A VMware VM using NAT for its network connection, running RHEL 3.4 as the operating system; the state is confirmed with the ping command from a Windows cmd window.


At the beginning, eth0 is active.
C:\Users\zhy>ping 192.168.227.128 


Pinging 192.168.227.128 with 32 bytes of data.
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64


Ping statistics for 192.168.227.128.
Packets: Sent = 4, Received = 4, Lost = 0 (0% lost).
Estimated time of round trip in milliseconds.
Shortest = 0ms, Longest = 0ms, Average = 0ms


After executing the command ifconfig eth0 down in the Linux console, it is not pingable.
C:\Users\zhy>ping 192.168.227.128 


Pinging 192.168.227.128 with 32 bytes of data.
Request timeout.
Request timeout.
Reply from 192.168.227.1: Target host is not accessible.
Request timeout.


Ping statistics for 192.168.227.128.
Packets: sent = 4, received = 1, lost = 3 (75% lost).


After executing the command ifconfig eth0 up in the Linux console, you can ping again.
C:\Users\zhy>ping 192.168.227.128 


Pinging 192.168.227.128 with 32 bytes of data.
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64
Reply from 192.168.227.128: bytes=32 time<1ms TTL=64


Ping statistics for 192.168.227.128.
Packets: Sent = 4, Received = 4, Lost = 0 (0% lost).
Estimated time of round trip in milliseconds.
Shortest = 0ms, Longest = 0ms, Average = 0ms


 


Example 3 Configuring Multiple Addresses for a NIC
There is already an ip address on eth0, add another ip address to it.


[root@node34 root]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.128  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14766 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18009 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1530995 (1.4 Mb)  TX bytes:3088071 (2.9 Mb)
          Interrupt:10 Base address:0x2000 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2310 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2310 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:203796 (199.0 Kb)  TX bytes:203796 (199.0 Kb)


[root@node34 root]# 
[root@node34 root]# ifconfig eth0:1 192.168.227.188 netmask 255.255.255.0 
[root@node34 root]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.128  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14878 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18097 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1541605 (1.4 Mb)  TX bytes:3097295 (2.9 Mb)
          Interrupt:10 Base address:0x2000 


eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.188  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:14883 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18106 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1541935 (1.4 Mb)  TX bytes:3098261 (2.9 Mb)
          Interrupt:10 Base address:0x2000 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2312 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2312 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:203972 (199.1 Kb)  TX bytes:203972 (199.1 Kb)


C:\Users\zhy>ping 192.168.227.188 


Pinging 192.168.227.188 with 32 bytes of data.
Reply from 192.168.227.188: bytes=32 time<1ms TTL=64
Reply from 192.168.227.188: bytes=32 time<1ms TTL=64
Reply from 192.168.227.188: bytes=32 time<1ms TTL=64
Reply from 192.168.227.188: bytes=32 time<1ms TTL=64


Ping statistics for 192.168.227.188:
Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms




[root@node34 root]# ifconfig eth0:1 del 192.168.227.188 
[root@node34 root]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.128  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15306 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18496 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1585467 (1.5 Mb)  TX bytes:3141665 (2.9 Mb)
          Interrupt:10 Base address:0x2000 


eth0:2    Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.189  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:15311 errors:0 dropped:0 overruns:0 frame:0
          TX packets:18505 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:1585797 (1.5 Mb)  TX bytes:3142711 (2.9 Mb)
          Interrupt:10 Base address:0x2000 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:2322 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2322 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:204852 (200.0 Kb)  TX bytes:204852 (200.0 Kb)


C:\Users\zhy>ping 192.168.227.188 


Pinging 192.168.227.188 with 32 bytes of data.
Reply from 192.168.227.1: Destination host unreachable.
Request timed out.
Request timed out.
Request timed out.


Ping statistics for 192.168.227.188.
Packets: sent = 4, received = 1, lost = 3 (75% lost).


Example 4 Network settings configured with the ifconfig command become invalid after the machine reboots
Settings made with the ifconfig command are not persistent: they disappear after the machine (or the network card) is restarted. To keep the configuration permanently, you have to modify the NIC's configuration file.


[root@node34 root]# ifconfig eth0:1 192.168.227.189 
[root@node34 root]# ifconfig 
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.128  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:215 errors:0 dropped:0 overruns:0 frame:0
          TX packets:251 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:21887 (21.3 Kb)  TX bytes:22716 (22.1 Kb)
          Interrupt:10 Base address:0x2000 


eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.189  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:215 errors:0 dropped:0 overruns:0 frame:0
          TX packets:251 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:21887 (21.3 Kb)  TX bytes:22716 (22.1 Kb)
          Interrupt:10 Base address:0x2000 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:394 errors:0 dropped:0 overruns:0 frame:0
          TX packets:394 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:29423 (28.7 Kb)  TX bytes:29423 (28.7 Kb)


[root@node34 root]# reboot 


Broadcast message from root (pts/1) (Thu Jul 21 19:49:25 2011):


The system is going down for reboot NOW!
[root@node34 root]# 




Last login: Wed Jul 20 12:19:18 2011 from 192.168.227.1
[root@node34 root]# ifconfig -a 
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.128  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:77 errors:0 dropped:0 overruns:0 frame:0
          TX packets:104 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:8916 (8.7 Kb)  TX bytes:10906 (10.6 Kb)
          Interrupt:10 Base address:0x2000 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:354 errors:0 dropped:0 overruns:0 frame:0
          TX packets:354 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:25651 (25.0 Kb)  TX bytes:25651 (25.0 Kb)


[root@node34 root]#


Example 5 Sample network interface configuration file in the system, using DHCP and adding a virtual network card
[root@node34 root]# cat /etc/sysconfig/network-scripts/ifcfg-eth0   
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=dhcp


[root@node34 root]# cat /etc/sysconfig/network-scripts/ifcfg-eth0:0   
DEVICE=eth0:0
ONBOOT=yes
#BOOTPROTO=dhcp
BOOTPROTO=static
IPADDR=192.168.227.227
NETMASK=255.255.255.0
ONBOOT=yes 


[root@node34 root]# service network restart 
Shutting down interface eth0:  [  OK  ]
Shutting down loopback interface:  [  OK  ]
Setting network parameters:  [  OK  ]
Bringing up loopback interface:  [  OK  ]
Bringing up interface eth0:  [  OK  ]


[root@node34 root]# ifconfig -a 
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.128  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:389 errors:0 dropped:0 overruns:0 frame:0
          TX packets:341 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:40273 (39.3 Kb)  TX bytes:37785 (36.8 Kb)
          Interrupt:10 Base address:0x2000 


eth0:0    Link encap:Ethernet  HWaddr 00:0C:29:E3:D2:65  
          inet addr:192.168.227.227  Bcast:192.168.227.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:389 errors:0 dropped:0 overruns:0 frame:0
          TX packets:341 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:40273 (39.3 Kb)  TX bytes:37785 (36.8 Kb)
          Interrupt:10 Base address:0x2000 


lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:355 errors:0 dropped:0 overruns:0 frame:0
          TX packets:355 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:25703 (25.1 Kb)  TX bytes:25703 (25.1 Kb)


[root@node34 root]# 


Example 6 Sample Network Interface Configuration File in a System with a Fixed IP Address
[root@jfht ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth0 
# Broadcom Corporation NetXtreme II BCM5706 Gigabit Ethernet
DEVICE=eth0
BOOTPROTO=static
BROADCAST=211.103.28.31
HWADDR=00:1B:78:40:8C:20
IPADDR=211.103.
IPV6INIT=yes
IPV6_AUTOCONF=yes
NETMASK=255.255.255.224
NETWORK=211.103.28.0
ONBOOT=yes
[root@jfht ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1 
# Broadcom Corporation NetXtreme II BCM5706 Gigabit Ethernet
DEVICE=eth1
BOOTPROTO=static
BROADCAST=192.168.1.255
HWADDR=00:1B:78:40:8C:22
IPADDR=192.168.1.191
NETMASK=255.255.255.0
NETWORK=192.168.1.0
ONBOOT=yes
[root@jfht ~]# 




=====================================
iconv:
=====================================


First, using the iconv command to convert the encoding of a file's contents
Usage: iconv [Options...] [File...]
The following options are available:
Input/output format specification:
-f, --from-code=NAME Encoding of the original (input) text
-t, --to-code=NAME Encoding for the output
Information:
-l, --list List all known character sets
Output Control:
-c Ignore invalid characters from output
-o, --output=FILE Output File
-s, --silent Turn off warnings.
--verbose prints progress information
-?, --help Give this help list
--usage gives brief usage information
-V, --version Prints the program version number
Example:
iconv -f gb2312 -t utf-8 >
This command reads a file, converts it from GB2312 encoding to UTF-8, and redirects the output to another file (a complete example with placeholder file names is sketched below). Note: .txt files produced by Windows WordPad are usually GB18030 encoded; if you specify the wrong source encoding, iconv reports an error such as: iconv: Unknown Illegal Input Sequence at 6071
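As a fuller illustration (not from the original post; the file names here are placeholders), a one-off conversion and a simple batch loop might look like this:

# convert a single file (input.txt and output.txt are hypothetical names)
iconv -f gb2312 -t utf-8 input.txt > output.txt

# convert every .txt file in the current directory, writing *.utf8.txt copies
for f in *.txt; do
    iconv -f gb2312 -t utf-8 "$f" > "${f%.txt}.utf8.txt"
done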


Second, converting the encoding of file names. Having moved to Linux, files that were originally created on Windows have GBK-encoded names, so after copying them over, the file names show up garbled. The file contents can be converted with iconv, but many Chinese file names remain garbled; the tool for converting the encoding of file names is convmv (see its documentation for the full list of parameters).
For example, convmv -f GBK -t UTF-8 *.mp3 does not convert anything by itself; it only shows a preview of the names before and after conversion. To perform the conversion for real, add the --notest parameter: convmv -f GBK -t UTF-8 --notest *.mp3. The -f parameter is the encoding before conversion and -t is the encoding after conversion; do not mix them up, or the names may still end up garbled. Another very useful parameter is -r, which recursively converts everything in the current directory and its subdirectories. Note that the convmv package (e.g. convmv-1.10-1) needs to be installed.


Third, enca is an even more convenient command-line tool: it can detect a file's encoding automatically and also supports batch conversion.
1. Installation
$sudo apt-get install enca
2. View the current file code
enca -L zh_CN
Simplified Chinese National Standard; GB2312
Surrounded by/intermixed with non-text data
3. Conversion
The command format is as follows
$enca -L current language -x target encoding filename
For example to convert all files in the current directory to utf-8
enca -L zh_CN -x utf-8 *
enca -L zh_CN file Checks the encoding of the file
enca -L zh_CN -x UTF-8 file Convert file encoding to "UTF-8".
enca -L zh_CN -x UTF-8 < file1 > file2   Converts file1 and writes the result to file2, so the original file is not overwritten. Simple, right?


Q&A:
[root@localhost ScriptTools]# iconv -t UTF-8 -f GB2312 /home/yuanjs/ZXUSP_V01.02.10/code/scripts/usrdef_cliscript/network_cli.xml > /home/
iconv: illegal input sequence at unknown 8649
[root@localhost ScriptTools]#


I changed the command to iconv -c -f gb18030 -t utf-8 $1 > $2, adding -c to ignore invalid characters.
The conversion then turned out to be basically correct (the Chinese characters were converted correctly and no data was lost), although a small error remained and the file size after conversion differed from the original.




=====================================
Linux Forced Override Function with the cp Command
=====================================
When we use the cp command in Linux to copy files into a directory that already contains files with the same names, the system asks for confirmation.
Even after adding the -rf parameters to force an overwriting copy, it still prompts you to type y to confirm each file one by one, which is very annoying.
So what is the reason for this? To find out, you can type alias at the command line, here is the output of the alias command
[root@test-01 yum]# alias
alias cp='cp -i'
alias l.='ls -d .* --color=tty'
alias ll='ls -l --color=tty'
alias ls='ls --color=tty'
alias mv='mv -i'
alias rm='rm -i'
alias which='alias | /usr/bin/which --tty-only --read-alias --show-dot --show-tilde'


Here we only care about the cp line: alias cp='cp -i'. We normally type the cp command without any options,
but by default the system has aliased cp to cp -i, so the -i option is always added. So what does -i do? man cp tells us:
 -i, --interactive
              prompt before overwrite
-i is short for interactive: it makes cp ask for confirmation before overwriting a file. It is meant as a safety net.
If you have many files to copy and find it tedious to type y for each one, you can use one of the following methods.


Method I:
# unalias cp
Remove the alias from the cp command so that when you copy a file with cp -rf, it won't ask for confirmation.
However, it is still recommended to restore the alias once the copy is finished, because it provides an extra layer of protection: everyone makes mistakes, and the confirmation prompt is a useful safeguard.
Restoring it is simple:
#alias cp='cp -i'
That's fine.


Method II
Prefix the command with a backslash, as in \cp, which bypasses the cp alias for that single invocation.
    [root@localhost ~]#\cp -fr src dest 


Method III
Use yes | cp -fr src dest so that the yes command feeds a stream of y answers to cp through the pipe.
[root@localhost ~]#yes | cp -fr src dest   The pipe automatically supplies the confirmations, and the forced copy is done.
One might ask how the DOS copy command does a forced copy: the answer is xcopy /y src dest.
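Two more ways to sidestep the alias for a single command, not mentioned in the original text but standard shell behaviour (src and dest are placeholder paths): aliases are only expanded for bare command names, so calling the binary by its full path, or going through the shell's command builtin, skips the -i prompt as well.

/bin/cp -rf src dest        # the full path is not subject to alias expansion
command cp -rf src dest     # the 'command' builtin ignores aliases and shell functions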


=====================================    
View subdirectories and file sizes in the current directory.
=====================================    
du: disk usage
1. Use du to see the size of each file and subdirectory in the current directory:
   du *

2. See the size of a directory:
   du -sh *   or   du -sh
    
The command to view the size of a directory and the number of files under Linux
1. View the size of the catalog.
[root@vps 1010 shellimage]#du -sh
The above one looks at the size of the current directory.


If it is to view the size of the specified directory then:
[root@vps 1010 shellimage]#du -sh /uploadimages *****************important command
Here is the size of the uploadimages directory in the root directory.
 
2. View the total number of files in the current directory.
[root@vps 1010 shellimage]#find . -type f |wc -l
The above is to view the total number of files in the current directory, if you want to view the total number of the specified directory then:


[root@vps 1010 shellimage]#find /uploadimages -type f |wc -l
The f here is for file, change it to d for directory.
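To spot which subdirectories use the most space, the per-directory sizes can be piped into sort (a small addition, not from the original text; GNU sort's -h flag orders human-readable sizes such as 1K, 23M, 2G):

du -sh */ | sort -h           # subdirectories of the current directory, smallest first
du -sh */ | sort -rh | head   # the ten largest subdirectories first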
    
=====================================    
View Linux Disk Space Size Command
=====================================
df: disk free
I. df command;
df is from the coreutils package, which comes with the system when it is installed; we can use this command to see how the disk is being used and where the file system is mounted;
Examples:
[root@localhost beinan]# df -lh ***************** important commands
Filesystem            Size  Used Avail Use% Mounted on
/dev/hda8 11G 6.0G 4.4G 58% /
/dev/shm 236M 0 236M 0% /dev/shm
/dev/sda1 56G 22G 35G 39% /mnt/sda1


We can see that the system is installed on /dev/hda8; there is also a 56G disk partition, /dev/sda1, mounted at /mnt/sda1.
For other parameters, see man df
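Two further variants that are often useful (a brief sketch, not part of the example above):

df -hT    # additionally show each filesystem's type (ext3, ntfs, tmpfs, ...)
df -i     # show inode usage instead of block usage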


II. fdisk
fdisk is a powerful disk manipulation tool from the util-linux package; here we only discuss how to use it to view the partition table and partition structure. With the -l parameter you can list the partition layout of every hard disk in the machine.
[root@localhost beinan]# fdisk -l
Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 765 6144831 7 HPFS/NTFS
/dev/hda2 766 2805 16386300 c W95 FAT32 (LBA)
/dev/hda3 2806 7751 39728745 5 Extended
/dev/hda5 2806 3825 8193118+ 83 Linux
/dev/hda6 3826 5100 10241406 83 Linux
/dev/hda7 5101 5198 787153+ 82 Linux swap / Solaris
/dev/hda8 5199 6657 11719386 83 Linux
/dev/hda9 6658 7751 8787523+ 83 Linux
The Blocks column above gives the size of each partition in 1 KB blocks; to convert to MB, divide by 1024. For example, the first partition /dev/hda1 is 6144831 blocks, i.e. 6144831/1024 ≈ 6000 MB, about 6 GB. In practice you do not even need the division: moving the decimal point three places to the left gives a rough figure in MB.
System indicates the file system: for example, /dev/hda1 is NTFS and /dev/hda2 is FAT32.
In this example, pay special attention to /dev/hda3, which is an extended partition. It acts as a container: the logical partitions hda5, hda6, hda7, hda8 and hda9 all live inside it.
Another thing to notice: why is there no hda4? A disk can have at most four primary partitions, and the numbers hda1-hda4 are reserved for them (an extended partition counts as one of the four, and logical partitions start at 5). In this case hda4 simply was not created; another primary partition could have been defined, but the author did not do so when partitioning the disk.
Finally, is there any unpartitioned space left on this disk? Add up what has been allocated: hda1 + hda2 + hda3 = 6144831 + 16386300 + 39728745 = 62259876 KB, i.e. 62259876/1024 ≈ 60800 MB. The whole disk is 80.0 GB (80026361856 bytes), which is 80026361856/1024 = 78150744 KB ≈ 76319 MB. So roughly 15500 MB, i.e. about 15 GB, of the disk is still unpartitioned.
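As a quick sanity check of that arithmetic, the shell can do the conversions directly (a minimal sketch using the Blocks figures, in 1 KB units, from the fdisk output above):

allocated_kb=$(( 6144831 + 16386300 + 39728745 ))  # KB assigned to hda1+hda2+hda3
total_kb=$(( 80026361856 / 1024 ))                 # whole-disk size in KB (78150744)
echo $(( allocated_kb / 1024 ))                    # ~60800 MB already partitioned
echo $(( (total_kb - allocated_kb) / 1024 ))       # ~15518 MB still unpartitioned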


fdisk -l lists every disk in the machine along with all of its partitions; for example:
[root@localhost beinan]# fdisk -l
Disk /dev/hda: 80.0 GB, 80026361856 bytes
255 heads, 63 sectors/track, 9729 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Device Boot Start End Blocks Id System
/dev/hda1 * 1 765 6144831 7 HPFS/NTFS
/dev/hda2 766 2805 16386300 c W95 FAT32 (LBA)
/dev/hda3 2806 7751 39728745 5 Extended
/dev/hda5 2806 3825 8193118+ 83 Linux
/dev/hda6 3826 5100 10241406 83 Linux
/dev/hda7 5101 5198 787153+ 82 Linux swap / Solaris
/dev/hda8 5199 6657 11719386 83 Linux
/dev/hda9 6658 7751 8787523+ 83 Linux


Disk /dev/sda: 60.0 GB, 60011642880 bytes
64 heads, 32 sectors/track, 57231 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 57231 58604528 83 Linux


From the above we can see that this machine has two hard disks; we can also pass a device name to fdisk -l to look at the partitioning of just one of them:
[root@localhost beinan]# fdisk -l /dev/sda
Disk /dev/sda: 60.0 GB, 60011642880 bytes
64 heads, 32 sectors/track, 57231 cylinders
Units = cylinders of 2048 * 512 = 1048576 bytes
Device Boot Start End Blocks Id System
/dev/sda1 1 57231 58604528 83 Linux
As you can see from the above, the disk /dev/sda contains only one partition, which takes up almost the whole disk.
We can also look at /dev/hda:
[root@localhost beinan]# fdisk -l /dev/hda
Try it yourself.


III. cfdisk, from the util-linux package
cfdisk is also a good partitioning tool; in some distributions it has been removed from the util-linux package. cfdisk is simple and easy to use, similar to FDISK under DOS. Here we only explain how to use it to check the machine's partitions and the file systems they use.
To view the disk partitions, the usage is: cfdisk -Ps <disk device name>
for example
[root@localhost beinan]cfdisk -Ps
[root@localhost beinan]cfdisk -Ps /dev/hda
[root@localhost beinan]cfdisk -Ps
Partition Table for /dev/hda
First Last
# Type Sector Sector Offset Length Filesystem Type (ID) Flag
-- ------- ----------- ----------- ------ ----------- -------------------- ----
1 Primary 0 23438834 63 23438835 Linux (83) Boot
2 Primary 23438835 156296384 0 132857550 Extended (05) None
5 Logical 23438835 155268224 63 131829390 Linux (83) None
6 Logical 155268225 156296384 63 1028160 Linux swap (82) None
The only options we use, -Ps, print the disk's partition table. cfdisk is still shipped by major distributions such as Slackware, Debian and Mandrake, but Fedora 4.0 dropped this little tool, which is a bit of a shame. Here is how it is used in Slackware.
It is more intuitive to run it interactively:
[root@localhost beinan]cfdisk <disk device name>
Examples:
[root@localhost beinan]cfdisk /dev/hda
What you see is the following pattern:
cfdisk 2.12a
Disk Drive: /dev/hda
Size: 80026361856 bytes, 80.0 GB
Heads: 255 Sectors per Track: 63 Cylinders: 9729
Name Flags Part Type FS Type [Label] Size (MB)
-------------------------------------------------------------------------------------------
hda1 Boot Primary Linux ReiserFS 12000.69
hda5 Logical Linux ReiserFS 67496.65
hda6 Logical Linux swap 526.42
[Bootable] [ Delete ] [ Help ] [Maximize] [ Print ] [ Quit ]
[ Type ] [ Units ] [ Write ]
Toggle bootable flag of the current partition
You have entered the cfdisk interface; move the pointer to [Quit] with the keyboard to exit;


IV. parted, a full-featured partitioning tool; it comes with Fedora 4.0, or you can install it yourself. Here we only cover how to view disk partitioning.
Invoking it is simple: parted opens the device /dev/hda by default, or you can name a device yourself, e.g. parted /dev/hda or parted /dev/sda; type quit to exit.
[root@localhost beinan]# parted
Using /dev/hda
(parted) p
Disk geometry for /dev/hda: 0.000-76319.085 megabytes
Disk label type: msdos
Minor    Start       End     Type      Filesystem  Flags
1          0.031   6000.842  primary   ntfs        boot
2       6000.842  22003.088  primary   fat32       lba
3      22003.088  60800.690  extended
5      22003.119  30004.211  logical   reiserfs
6      30004.242  40005.615  logical   reiserfs
7      40005.646  40774.350  logical   linux-swap
8      40774.381  52219.094  logical   ext3
9      52219.125  60800.690  logical   reiserfs
At the (parted) prompt, p lists the partitions of the current disk. To look at another disk, use the select command, e.g. select /dev/sda.


V. qtparted, a graphical tool related to parted; it also lets you view the disk layout and the file systems in use.
[beinan@localhost ~]# qtparted
The graphical view shows everything at a glance.


VI. sfdisk, another partitioning tool with many features; here we only mention its ability to list disk partitions:
[root@localhost beinan]# sfdisk -l
Try it for yourself.
sfdisk has several other useful features that interested readers may want to explore.


VII. partx, briefly: some systems ship with this tool. Its functionality is simple and, next to fdisk, parted and cfdisk, nothing special; it is rarely needed.
Usage: partx <device name>
[root@localhost beinan]# partx /dev/hda
# 1: 63- 12289724 ( 12289662 sectors, 6292 MB)
# 2: 12289725- 45062324 ( 32772600 sectors, 16779 MB)
# 3: 45062325-124519814 ( 79457490 sectors, 40682 MB)
# 4: 0- -1 ( 0 sectors, 0 MB)
# 5: 45062388- 61448624 ( 16386237 sectors, 8389 MB)
# 6: 61448688- 81931499 ( 20482812 sectors, 10487 MB)
# 7: 81931563- 83505869 ( 1574307 sectors, 806 MB)
# 8: 83505933-106944704 ( 23438772 sectors, 12000 MB)
# 9: 106944768-124519814 ( 17575047 sectors, 8998 MB)


VIII. View all disks and partitions currently in the machine: --------------------- Final Approach
[beinan@localhost ~]$ cat /proc/partitions
major minor #blocks name
3 0 78150744 hda
3 1 6144831 hda1
3 2 16386300 hda2
3 5 8193118 hda5
3 6 10241406 hda6
3 7 787153 hda7
3 8 11719386 hda8
3 9 8787523 hda9
8 0 58605120 sda
8 1 58604528 sda1


============================
fstab(/etc/fstab)
============================
fstab (/etc/fstab) is one of the more important configuration files under Linux, it contains detailed information about the filesystems and storage devices mounted by the system at boot time. Here is the fstab file on my machine:
LABEL=/                 /                       ext3    defaults        1 1
LABEL=/boot1            /boot                   ext3    defaults        1 2
tmpfs                   /dev/shm                tmpfs   defaults        0 0
devpts                  /dev/pts                devpts  gid=5,mode=620  0 0
sysfs                   /sys                    sysfs   defaults        0 0
proc                    /proc                   proc    defaults        0 0
LABEL=SWAP-sda11        swap                    swap    defaults        0 0
/dev/sda6               /mnt/winE               vfat    defaults        0 0
/dev/sda8               /mnt/winG               ntfs    defaults        0 0
You can see that this file has six columns. Taking the last line (the /dev/sda8 entry) as an example, they are explained one by one below, assuming you are already familiar with the mount command (a sample new entry is sketched after the column descriptions):
1. The first column is the file system or storage device that needs to be mounted, here is the G disk on my Windows, partitioned as /dev/sda8.
2. The second column is the mount point, I choose /mnt/winG.
3. Column 3 specifies the type of file system or partition, my G-disk is NTFS type, which is represented as ntfs under Linux.
File types:
  1) Linux file systems: ext2, ext3, jfs, reiserfs, reiser4, xfs, swap.
  2) Windows:
      vfat = FAT 32, FAT 16
      ntfs= NTFS
      Note: For NTFS rw ntfs-3g
  3) CD/DVD/iso: iso9660
  4) Network file systems:
  5) nfs:      server:/shared_directory /mnt/nfs nfs  0 0 
  6) smb:      //win_box/shared_folder /mnt/samba smbfs rw,credentials=/home/user_name/ 0 0
  7) auto: The file system type (ext3, iso9660, etc) it detected automatically. Usually works. Used for removable devices (CD/DVD, Floppy drives, or USB/Flash drives) as the file system may vary on these devices.
4. The fourth column is the mount option, refer to man mount for details. The following is a list of commonly used options:
auto: System mounts automatically, fstab defaults to this option.
   ro: read-only
   rw: read-write
   defaults: rw, suid, dev, exec, auto, nouser, and async.
5. The fifth column is the dump field: it controls whether the dump backup program backs up this file system; 0 means ignore, 1 means back it up.
6. The sixth column is the fsck pass number, which tells fsck in what order to check the file systems at boot; 0 means the file system is not checked.
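Putting the six columns together, a minimal sketch of adding a new entry and testing it without rebooting (the device, mount point and file system here are hypothetical examples, not taken from the fstab above):

# hypothetical entry for a FAT32 USB stick, appended to /etc/fstab:
# <device>   <mount point>  <type>  <options>  <dump>  <fsck>
/dev/sdb1    /mnt/usb       vfat    defaults   0       0

mkdir -p /mnt/usb   # create the mount point first
mount -a            # mount everything in fstab that is not yet mounted; errors surface here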
                
================================
ssh service installation, opening, closing, querying
================================
If you are using Red Hat, sshd is installed by default and enabled in runlevels 3 and 5.
To restrict which users may log in, edit the /etc/ssh/sshd_config configuration file.
If you only want to deny a few users, add a line DenyUsers xxx xxx; if you only want to allow a few specific users to log in, add AllowUsers xxx xxx instead.
Remember not to configure DenyUsers and AllowUsers at the same time, and restart the sshd service after changing the configuration (a sketch of such a configuration follows below).
Login method: ssh xxx@<server ip>, where xxx is a user name on the server.
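A minimal sketch of such a restriction (the user names are hypothetical, and the service command applies to SysV-style Red Hat systems):

# /etc/ssh/sshd_config -- allow only these accounts to log in over ssh
AllowUsers alice bob

# apply the change
service sshd restart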


<1> Run setup and, under System services, tick sshd; for finer control you will have to edit the configuration file.
setup -> System services -> select sshd
<2> chkconfig --level 2345 sshd on
--level specifies the runlevels (here 2345) in which the service is switched on; usually enabling it in runlevels 3 and 5 is enough. Use on to enable the sshd service and off to disable it.


To start and stop it temporarily, use: /etc/init.d/sshd start | stop | restart | status


<3> Introducing how to enable ssh services in OpenSUSE 11.1
Follow these steps:
1. Edit the sshd_config file: vi /etc/ssh/sshd_config
2. Uncomment the line #PasswordAuthentication no and change no to yes.
3. Uncomment the line #PermitRootLogin yes.
4. Restart the SSH service: /etc/init.d/sshd restart
5. Check the SSH service status: /etc/init.d/sshd status


<4> Installation: find the openssh package on the CD-ROM and install it with rpm.
Startup: generally the service starts automatically after installation, but you can start it manually with /etc/init.d/sshd start or service sshd start.
Blocking a user: edit /etc/passwd and change that user's shell (the last colon-separated field) to /bin/false.
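An equivalent way to change that last field without editing /etc/passwd by hand (a small sketch; the user name is hypothetical):

usermod -s /bin/false someuser    # set the login shell to /bin/false
grep '^someuser:' /etc/passwd     # verify that the last field now reads /bin/false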


dropbear is a lightweight SSH server for embedded Linux.


==========================
Detailed explanation of the contents of the top command
==========================
The top command is a commonly used performance analysis tool under Linux, which can display the resource utilization status of each process in the system in real time, similar to the Windows task manager. Here is a detailed description of how to use it.


top - 01:06:48 up 1:22, 1 user, load average: 0.06, 0.60, 0.48
Tasks: 29 total, 1 running, 28 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.3% us, 1.0% sy, 0.0% ni, 98.7% id, 0.0% wa, 0.0% hi, 0.0% si
Mem: 191272k total, 173656k used, 17616k free, 22052k buffers
Swap: 192772k total, 0k used, 192772k free, 123988k cached


PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
1379 root 16 0 7976 2456 1980 S 0.7 1.3 0:11.03 sshd
14704 root 16 0 2128 980 796 R 0.7 0.5 0:02.72 top
1 root 16 0 1992 632 544 S 0.0 0.3 0:00.90 init
2 root 34 19 0 0 0 S 0.0 0.0 0:00.00 ksoftirqd/0
3 root RT 0 0 0 0 S 0.0 0.0 0:00.00 watchdog/0 


Statistical information area
The first five lines are statistical information about the system as a whole. The first line is task queue information, the same as the result of the uptime command. The contents are as follows:
01:06:48 Current time
up 1:22 System runtime in hour:minute format
1 user Number of currently logged in users
load average: 0.06, 0.60, 0.48 The system load, i.e. the average length of the task queue.
The three values are the averages over the last 1, 5 and 15 minutes.


The second and third rows contain information about processes and CPUs. When there are multiple CPUs, these may be longer than two lines. The contents are as follows:


Tasks: 29 total Total number of processes
1 running Number of running processes
28 sleeping Number of sleeping processes
0 stopped Number of stopped processes
0 zombie Number of zombie processes
Cpu(s): 0.3% us Percentage of CPU time in user space (user)
1.0% sy Percentage of CPU time in kernel space (system)
0.0% ni Percentage of CPU time used by user processes whose priority has been changed (nice)
98.7% id Percentage of idle CPU time (idle)
0.0% wa Percentage of CPU time spent waiting for input/output (iowait)
0.0% hi Percentage of CPU time spent servicing hardware interrupts
0.0% si Percentage of CPU time spent servicing software interrupts


The last two lines are memory information. The contents are as follows:


Mem: 191272k total Total physical memory
173656k used Physical memory in use
17616k free Free physical memory
22052k buffers Memory used as kernel buffers


Swap: 192772k total Total swap space
0k used Swap space in use
192772k free Free swap space
123988k cached Swap contents cached in memory:
memory that was swapped out to the swap area and later swapped back in, while the corresponding swap space has not yet been reused; this value is the amount of swap whose contents also still exist in memory.
Such memory can be swapped out again without having to be written to the swap area a second time.


process information area
The lower part of the statistics area shows detailed information about each process. Let's first recognize the meaning of the columns.


No. Column Name Meaning
a PID process id
b PPID Parent process id
c RUSER Real user name
d UID The user id of the process owner
e USER The user name of the process owner
f GROUP Name of the group of process owners
g TTY The name of the terminal from which the process was started. Processes not started from a terminal are displayed as ?
h PR Priority
i NI nice value. Negative values indicate high priority, positive values indicate low priority
j P Last CPU used, meaningful only in multi-CPU environments
k %CPU Percentage of CPU time since last update
l TIME Total amount of CPU time used by the process in seconds.
m TIME+ Total CPU time used by the process in 1/100th of a second
n %MEM Percentage of physical memory used by the process
o VIRT Total amount of virtual memory used by the process, in kilobytes (kb) VIRT=SWAP+RES
p SWAP The size, in kilobytes, of the virtual memory used by the process that is swapped out.
q RES The amount of physical memory used by the process that has not been swapped out, in kilobytes (kb). RES=CODE+DATA
r CODE The size of physical memory occupied by executable code, in kilobytes.
s DATA The size of physical memory occupied by parts other than executable code (data segment + stack), in kilobytes.
t SHR shared memory size in kilobytes
u nFLT Page fault count
v nDRT The number of pages that have been modified since the last write to the present.
w S Process status.
D = Uninterruptible sleep state
R=Running
S=Sleep
T=Track/Stop
Z = Zombie Process
x COMMAND Command name/command line
y WCHAN If the process is sleeping, displays the name of the sleeping system function
z Flags Task flags; see sched.h in the kernel source


* A: PID        = Process Id                               0x00008000  debug flag (2.5)
* E: USER       = User Name                                0x00024000  special threads (2.5)
* H: PR         = Priority                                 0x001D0000  special states (2.5)
* I: NI         = Nice value                               0x00100000  PF_USEDFPU (thru 2.4)
* O: VIRT       = Virtual Image (kb)
* Q: RES        = Resident size (kb)
* T: SHR        = Shared Mem size (kb)
* W: S          = Process Status
* K: %CPU       = CPU usage
* N: %MEM       = Memory usage (RES)
* M: TIME+      = CPU Time, hundredths
  b: PPID       = Parent Process Pid
  c: RUSER      = Real user name
  d: UID        = User Id
  f: GROUP      = Group Name
  g: TTY        = Controlling Tty
  j: P          = Last used cpu (SMP)
  p: SWAP       = Swapped size (kb)
  l: TIME       = CPU Time
  r: CODE       = Code size (kb)
  s: DATA       = Data+Stack size (kb)
  u: nFLT       = Page Fault count
  v: nDRT       = Dirty Pages count
  y: WCHAN      = Sleeping in Function
  z: Flags      = Task Flags <>
* X: COMMAND    = Command name/line


By default only the more important PID, USER, PR, NI, VIRT, RES, SHR, S, %CPU, %MEM, TIME+, COMMAND columns are displayed. You can change the display by using the following shortcut keys.
Change the display:


The f key allows you to select what to display. Pressing the f key displays a list of columns, press a-z to show or hide the corresponding columns, and finally press the Enter key to confirm.
Pressing the o key changes the order in which the columns are displayed. Pressing lowercase a-z moves the corresponding column to the right, while uppercase A-Z moves the corresponding column to the left. Finally, press enter to confirm.
Pressing the uppercase F or O key followed by a-z will sort the processes into the appropriate columns. An uppercase R key reverses the current sorting.


Command Usage
1. Name of tool (command)
top
2. Role of tools (commands)
Displays the current processes and the overall state of the system. top is dynamic: the display refreshes continuously and responds to user keystrokes, and if it is run in the foreground it occupies the terminal
until the user quits the program. More precisely, top provides real-time monitoring of the system's processor state and shows a list of the most CPU-intensive tasks on the system.
It can sort tasks by CPU usage, memory usage or running time, and many of its features can be configured either interactively or through a personal configuration file.
3. Environmental settings
Used under Linux.
4. Methods of use
4.1 Use of formats
top [-] [d] [p] [q] [c] [C] [S] [s] [n]
4.2 Description of parameters
d Specifies the time interval between every two screen information refreshes. Of course, the user can use the s interactive command to change this.
p Monitor the status of just one process by specifying the monitor process ID.
q This option will cause top to refresh without any delay. If the calling program has superuser privileges, then top will run with the highest possible priority.
S Specify the accumulation mode
s Causes the top command to run in safe mode. This removes the potential danger posed by interactive commands.
i Make top not show any idle or dead processes.
c Displays the entire command line instead of just the command name
n Number of iterations: top refreshes that many times and then exits (see the batch-mode sketch below).
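The d and n options combine usefully with batch mode when capturing snapshots from a script (a small sketch; -b is the standard batch-mode switch of procps top):

top -b -n 1 > top-snapshot.txt       # one full snapshot of all processes, suitable for logging
top -b -d 3 -n 2 > top-samples.txt   # two refreshes, three seconds apart, then exit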


4.3 Other
The following describes some of the interactive commands that can be used during the execution of the top command. From a usage point of view, mastering these commands is more important than mastering options.
These commands are single-letter, and it is possible that some of them may be blocked if the s option is used in the command line options.
Ctrl+L Erase and rewrite the screen.
h or ? Displays a help screen that gives some short summary instructions for the command.
k Terminate a process. You are prompted for the PID of the process to terminate and the signal to send it. A normal termination uses signal 15;
if the process does not exit cleanly, use signal 9 to force it to end. The default is signal 15. This command is blocked in secure mode.
i Ignore idle and zombie processes. This is a toggle.
q Quit the program.
r Renice a process. You are prompted for the PID of the process to change and the new priority (nice) value.
Entering a positive value lowers the priority; a negative value gives the process a higher priority. The default value is 10.
S Toggle cumulative mode.
s Change the delay between refreshes. You are prompted for a new time in seconds; decimal values are accepted. Entering 0 makes the display refresh continuously; the default is 5 seconds.
Note that a very small interval causes nearly constant refreshing, which makes the output hard to read and noticeably increases the system load.
f or F Adds or removes items from the current display.
o or O Changes the order of displayed items.
l Toggles the display of load averaging and startup time information.
m Toggles the display of memory information. --------------- important
t Toggles the display of process and CPU status information. --------------- important
c Toggles the display of the command name and the full command line. --------------- important
M Sorting based on resident memory size. --------------- important
P Sorted by CPU usage percentage size. --------------- important
T Sort by time/cumulative time. --------------- important
W Writes the current settings to the ~/.toprc file. This is the recommended method for writing top configuration files.


========================
The free command looks at memory: important!
========================
free -m in megabytes
free or free -k in KB (default)
free -b in B


The free command on Linux explained in detail
This section explains the output of the free command on Linux.
Below is the result of running free; the output is four lines long. Row and column numbers have been added for ease of reference, so the output can be read as a two-dimensional array FO (Free Output). For example:
FO[2][1] = 24677460
FO[3][2] = 10321516  
                   1          2          3          4          5          6
1              total       used       free     shared    buffers     cached
2 Mem:      24677460   23276064    1401396          0     870540   12084008
3 -/+ buffers/cache:   10321516   14355944
4 Swap:     25151484     224188   24927296


Formulas: ************************************************
1) total = used (buffers + cached + memory actually used by programs) + free
2) -buffers/cache = used - (buffers + cached)   // with buffers and cache subtracted: the memory really used by programs (minimum used memory)
3) +buffers/cache = free + buffers + cached     // free memory plus all buffers and cache: how much memory could still be made available (maximum free memory)
4) (-buffers/cache, the minimum memory used) + (+buffers/cache, the maximum memory available) = total
(A quick check of formulas 2 and 3 against the output above is sketched below.)


The output of free has four lines. The fourth line describes the swap area: the total swap space (total), the amount in use (used) and the amount still free (free). This line is self-explanatory, so we will not dwell on it.
The second and third lines of the free output are the more confusing ones. Both describe memory usage; the first column is total, the second is used, and the third is free.


1) The first of these lines (the Mem: row) shows memory from the operating system's point of view. That is, as far as the OS is concerned, the machine has a total of
24677460 KB of physical memory (free reports KB by default), i.e. FO[2][1];
of this physical memory, 23276064 KB (FO[2][2]) is used;
and 1401396 KB (FO[2][3]) is free;
The first equation is obtained here:
FO[2][1] = FO[2][2] + FO[2][3]
FO[2][4] is the shared column, i.e. memory shared by several processes; it is obsolete and its value is always 0 (it may be non-zero on some systems, depending mainly on how free is implemented).
FO[2][5] denotes the memory being held by the OS buffer.
FO[2][6] is the memory being used by the OS as cache. The terms buffer and cache are often used interchangeably, but in lower-level software the distinction matters; the English explanation below captures it:
A buffer is something that has yet to be "written" to disk. 
A cache is something that has been "read" from the disk and stored for later use.
That is, buffers are used to hold data to be output to disk (block device), while cached holds data to be read from disk. These two are meant to improve IO performance and are managed by the OS.
Linux, like other mature operating systems (e.g. Windows), caches as much data as possible to improve read performance, which is why FO[2][6] (cached memory) is large while FO[2][3] (free) is small. We can do a simple test:
Drop the data held in the system caches:
echo 3 > /proc/sys/vm/drop_caches
Read a large file and record the time;
Close the file;
Reread this large file and record the time;
The second read should be much faster than the first. In my test, reading a BerkeleyDB database of about 5 GB with tens of millions of records, the second read was roughly 9 times faster than the first on my machine.
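A generic version of that test with an ordinary file (a sketch; bigfile stands for any large file, and dropping the caches requires root):

sync                                  # flush dirty pages to disk first
echo 3 > /proc/sys/vm/drop_caches     # drop page cache, dentries and inodes
time cat bigfile > /dev/null          # first, cold-cache read
time cat bigfile > /dev/null          # second read, served from the page cache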


2) The second of these lines (-/+ buffers/cache) looks at memory usage from an application's point of view.
FO[3][2], -buffers/cache, is how much memory applications should regard as actually used;
FO[3][3], +buffers/cache, is how much memory applications can regard as still available;
Because memory occupied by system cache and buffer can be reclaimed quickly, usually FO[3][3] will be much larger than FO[2][3].


Two more equations are used here:
FO[3][2] = FO[2][2] - FO[2][5] - FO[2][6]
FO[3][3] = FO[2][3] + FO[2][5] + FO[2][6]
Both are not difficult to understand.
The free command is provided by the procps package (procps-*.rpm on Red Hat style systems), and all of its output values are read from /proc/meminfo.
Some systems provide a meminfo(2) function that parses /proc/meminfo; the procps package implements its own meminfo() function. You can download a procps tarball to look at the implementation (the latest version at the time of writing was 3.2.8).


===============================
LINUX Process Memory Usage View
===============================
Important formulas.
VIRT= SWAP(size swapped out of virtual memory) + RES
RES = CODE (executable code) + DATA (data segment + stack)


VSZ (virtual size): virtual memory size, VSZ = VIRT
RSZ (resident set size): actual (resident) memory size, RSZ = RES
RSS:  resident set size in kilobytes
VIRT: Total amount of virtual memory used by the process, in kilobytes. VIRT=SWAP+RES
SWAP: The size of the virtual memory used by the process that is swapped out, in kilobytes.
RES: The amount of physical memory used by the process that has not been swapped out, in kilobytes (kb) RES=CODE+DATA
CODE: The size of physical memory occupied by executable code, in kilobytes.
DATA: The size of physical memory occupied by parts other than executable code (data segment + stack), in kilobytes.
SHR: Shared memory size in kilobytes


LINUX Process Memory Usage View
(1)top
You can directly use the top command to view the contents of %MEM. You can choose to view by process or by user. If you want to view the process memory usage of oracle user, you can use the following command:
$ top -u oracle
$ top -p 14596 
(2)pmap
pmap reports memory information for a given process (the process ID can be obtained with ps), for example:
$ pmap -d 14596
Address:   start address of map
Kbytes:    size of map in kilobytes
RSS:       resident set size in kilobytes
Dirty:     dirty pages (both shared and private) in kilobytes
Mode:      permissions on map: read, write, execute, shared, private (copy on write)
Mapping:   file backing the map, or ’[ anon ]’ for allocated memory, or  ’[ stack ]’ for the program stack
Offset:    offset into the file
Device:    device name (major:minor)
       
Address: 0001000-0024000, the address range occupied by the mapping
Kbytes: size of the virtual segment
RSS: resident set size (physical memory actually in use)
Anon: anonymous (not file-backed) pages; 0 means none
Locked: whether the pages are locked in memory (i.e. cannot be swapped out)
Mode: permissions of the mapping: r=read, w=write, x=execute, s=shared, p=private (copy on write)


Linux Performance Testing pmap Command
Name:
pmap - report memory map of a process (view information about the memory map of a process)
usage
       pmap [ -x | -d ] [ -q ] pids...
       pmap -V
Options:
-x  extended   Show the extended format.
-d  device     Show the device format.
-q  quiet      Do not display some header/footer lines.
-V  version    Display the program's version.
Extended-format and device-format fields:
Address:  start address of the map
Kbytes:   size of the map in kilobytes
RSS:      resident set size in kilobytes
Dirty:    dirty pages (both shared and private) in kilobytes
Mode:     permissions on the map: r=read, w=write, x=execute, s=shared, p=private (copy on write)
Mapping:  file backing the map, or '[ anon ]' for allocated memory, or '[ stack ]' for the program stack
Offset:   offset into the file
Device:   device name (major:minor)


 


Examples:
View the device format for process 1
[root@C44 ~]#  pmap -d 1
1:   init [5]                    
Address   Kbytes Mode  Offset           Device    Mapping
00934000      88 r-x-- 0000000000000000 008:00005 ld-2.3.
0094a000       4 r---- 0000000000015000 008:00005 ld-2.3.
0094b000       4 rw--- 0000000000016000 008:00005 ld-2.3.
0094e000    1188 r-x-- 0000000000000000 008:00005 libc-2.3.
00a77000       8 r---- 0000000000129000 008:00005 libc-2.3.
00a79000       8 rw--- 000000000012b000 008:00005 libc-2.3.
00a7b000       8 rw--- 0000000000a7b000 000:00000   [ anon ]
00a85000      52 r-x-- 0000000000000000 008:00005 .1
00a92000       4 rw--- 000000000000c000 008:00005 .1
00a93000      32 rw--- 0000000000a93000 000:00000   [ anon ]
00d9d000      52 r-x-- 0000000000000000 008:00005 .1
00daa000       4 rw--- 000000000000d000 008:00005 .1
08048000      28 r-x-- 0000000000000000 008:00005 init
0804f000       4 rw--- 0000000000007000 008:00005 init
084e1000     132 rw--- 00000000084e1000 000:00000   [ anon ]
b7f5d000       8 rw--- 00000000b7f5d000 000:00000   [ anon ]
bffee000      72 rw--- 00000000bffee000 000:00000   [ stack ]
ffffe000       4 ----- 0000000000000000 000:00000   [ anon ]
mapped: 1700K    writeable/private: 276K    shared: 0K
[root@C44 ~]#  


Meaning of the last line:
mapped is the size of the virtual address space mapped by the process, i.e. the virtual memory pre-allocated by the process; this corresponds to the VSZ value reported by ps.
writeable/private indicates the amount of private address space occupied by the process, i.e. the amount of memory actually used by the process.
shared indicates the amount of memory a process shares with other processes.






View the device format of process 1 without header and footer lines
[root@C44 ~]#  pmap -d -q 1
1:   init [5]                    
00934000      88 r-x-- 0000000000000000 008:00005 ld-2.3.
0094a000       4 r---- 0000000000015000 008:00005 ld-2.3.
0094b000       4 rw--- 0000000000016000 008:00005 ld-2.3.
0094e000    1188 r-x-- 0000000000000000 008:00005 libc-2.3.
00a77000       8 r---- 0000000000129000 008:00005 libc-2.3.
00a79000       8 rw--- 000000000012b000 008:00005 libc-2.3.
00a7b000       8 rw--- 0000000000a7b000 000:00000   [ anon ]
00a85000      52 r-x-- 0000000000000000 008:00005 .1
00a92000       4 rw--- 000000000000c000 008:00005 .1
00a93000      32 rw--- 0000000000a93000 000:00000   [ anon ]
00d9d000      52 r-x-- 0000000000000000 008:00005 .1
00daa000       4 rw--- 000000000000d000 008:00005 .1
08048000      28 r-x-- 0000000000000000 008:00005 init
0804f000       4 rw--- 0000000000007000 008:00005 init
084e1000     132 rw--- 00000000084e1000 000:00000   [ anon ]
b7f5d000       8 rw--- 00000000b7f5d000 000:00000   [ anon ]
bffee000      72 rw--- 00000000bffee000 000:00000   [ stack ]
ffffe000       4 ----- 0000000000000000 000:00000   [ anon ]
[root@C44 ~]#  


View the extended format of process 1


[root@C44 ~]#  pmap -x 1
1:   init [5]                    
Address   Kbytes     RSS    Anon  Locked Mode   Mapping
00934000      88       -       -       - r-x--  ld-2.3.
0094a000       4       -       -       - r----  ld-2.3.
0094b000       4       -       -       - rw---  ld-2.3.
0094e000    1188       -       -       - r-x--  libc-2.3.
00a77000       8       -       -       - r----  libc-2.3.
00a79000       8       -       -       - rw---  libc-2.3.
00a7b000       8       -       -       - rw---    [ anon ]
00a85000      52       -       -       - r-x--  .1
00a92000       4       -       -       - rw---  .1
00a93000      32       -       -       - rw---    [ anon ]
00d9d000      52       -       -       - r-x--  .1
00daa000       4       -       -       - rw---  .1
08048000      28       -       -       - r-x--  init
0804f000       4       -       -       - rw---  init
084e1000     132       -       -       - rw---    [ anon ]
b7f5d000       8       -       -       - rw---    [ anon ]
bffee000      72       -       -       - rw---    [ stack ]
ffffe000       4       -       -       - -----    [ anon ]
-------- ------- ------- ------- -------
total kB    1700       -       -       -
[root@C44 ~]#  


 


Print the last line of the device-format output for process 3066 every 2 seconds, in a loop:




[root@C44 ~]#  while true; do pmap -d  3066 | tail -1; sleep 2; done
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K
mapped: 5412K    writeable/private: 2028K    shared: 0K


(3)ps
As shown in the example below:
$ ps -e -o 'pid,comm,args,pcpu,rsz,vsz,stime,user,uid'
$ ps -e -o 'pid,comm,args,pcpu,rsz,vsz,stime,user,uid' | grep oracle |  sort -nrk5
Here rsz is the actual (resident) memory; the second command sorts the output by memory use, from largest to smallest.


======================================
cat /proc/cpuinfo, /proc/meminfo
======================================
root@OpenWrt:/etc# cat /proc/meminfo
MemTotal:        2075484 kB
MemFree:         1973816 kB
Buffers:               0 kB
Cached:            46160 kB
SwapCached:            0 kB
Active:            41032 kB
Inactive:          18840 kB
Active(anon):      14228 kB
Inactive(anon):      264 kB
Active(file):      26804 kB
Inactive(file):    18576 kB
Unevictable:           0 kB
Mlocked:               0 kB
HighTotal:       1318912 kB
HighFree:        1257656 kB
LowTotal:         756572 kB
LowFree:          716160 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                 0 kB
Writeback:             0 kB
AnonPages:         13712 kB
Mapped:             8524 kB
Shmem:               780 kB
Slab:              26024 kB
SReclaimable:      19176 kB
SUnreclaim:         6848 kB
KernelStack:        1104 kB
PageTables:         1268 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1037740 kB
Committed_AS:     433552 kB
VmallocTotal:     245760 kB
VmallocUsed:        3744 kB
VmallocChunk:     234168 kB


Viewing Virtual Memory Information
root@OpenWrt:/etc# cat /proc/vmstat 
nr_free_pages 493360
nr_inactive_anon 66
nr_active_anon 3580
nr_inactive_file 4644
nr_active_file 6703
nr_unevictable 0
nr_mlock 0
nr_anon_pages 3457
nr_mapped 2131
nr_file_pages 11542
nr_dirty 0
nr_writeback 0
nr_slab_reclaimable 4796
nr_slab_unreclaimable 1731
nr_page_table_pages 323
nr_kernel_stack 154
nr_unstable 0
nr_bounce 0
nr_vmscan_write 0
nr_vmscan_immediate_reclaim 0
nr_writeback_temp 0
nr_isolated_anon 0
nr_isolated_file 0
nr_shmem 195
nr_dirtied 4280
nr_written 4280
nr_anon_transparent_hugepages 0
nr_free_cma 0
nr_dirty_threshold 33300
nr_dirty_background_threshold 16650


====================
telinit or init difference
====================
First of all, note that telinit is a symbolic link to init. When the system is up, the init process occupies PID 1; the init program checks its PID at startup, and if it is not 1 it leaves the init code path and behaves as telinit instead:
if (!isinit) exit(telinit(p, argc, argv)); -- the author designed it this way simply so the user can type three fewer letters.
Okay, let's get down to business.
The telinit or init command
Purpose
Initializes and controls processes.
Syntax
{ telinit | init } { 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | a | b | c | h | Q | q | S | s | M | m | N }
Description
The init command initializes and controls processes. Its first task is to start processes based on records read from the /etc/inittab file. The /etc/inittab file usually requests that the init command run the getty command on every line where a user can log in. The init command also controls the individual processes required by the system.
The main process dispatched by the init command is /usr/sbin/getty, which starts the individual terminal lines. Other processes typically dispatched by the init command are daemons and shells.
The telinit command, which is linked to the init command, directs the operation of the init command. telinit takes a single character argument and issues the init command through the kill subroutine to perform the appropriate operation.
The telinit command sets the system at a specific runlevel. A runlevel is a software configuration that allows only selected groups of processes to exist. The system can be on one of the following runlevels:
0-9 Tells the init command to place the system at one of the runlevels 0-9. When the init command requests a change in runlevels 0-9, it suspends all processes on the current runlevel and then restarts any processes associated with the new runlevel.
0-1 Reserved for future use by the operating system.
2 Contains all terminal processes and daemons running in a multi-user environment. In a multi-user environment, the /etc/inittab file is set up so that the init command creates a process for each terminal on the system. The console device driver is also set to run on all runlevels, so the system can be operated with only the console active.
3-9 Can be defined according to user preferences.
S, s, M, m Tell the init command to enter maintenance mode. When the system enters maintenance mode from another runlevel, only the system console is used as a terminal.
The following parameters are also used as pseudo instructions for the init command:
a, b, c, h Tell the init command to process only those records in the /etc/inittab file that have a, b, c, or h in the runlevel field. These four parameters are not true runlevels: they differ from runlevels in that the init command does not require the entire system to be at runlevel a, b, c, or h.
When the init command finds a record in the /etc/inittab file with a value of a, b, c, or h in the runlevel field, it starts the process. However, it does not kill any processes at the current runlevel; it starts processes with a, b, c, or h in the runlevel field in addition to those already running at the current system runlevel. Another difference between a true runlevel and a, b, c, or h is that processes starting at a, b, c, or h are not stopped when the init command changes the runlevel. There are three ways to stop a, b, c, or h processes:
Enter off in the Action field.
Deletes the entire object.
Enter the maintenance state with the init command.
Q,q Tell the init command to recheck the /etc/inittab file.
N Send a signal to prevent the process from being respawned.
The following sequence of events occurs at system startup when the root file system has been installed during the pre-initialization process:
The init command is run as the last step in the boot process.
The init command attempts to read the /etc/inittab file.
If the /etc/inittab file exists, the init command attempts to locate an initdefault entry in the /etc/inittab file.
If the initdefault entry exists, the init command uses the specified runlevel as the initial system runlevel.
If the initdefault entry does not exist, the init command requests that the user enter a runlevel from the system console (/dev/console).
The init command enters the maintenance runlevel if the user enters the S, s, M, or m runlevel. These are the only runlevels that do not require a properly formatted /etc/inittab file.
If the /etc/inittab file does not exist, the init command places the system at the maintenance run level by default.
The init command rereads the /etc/inittab file every 60 seconds. If /etc/inittab has changed since the init command last read it, the new commands in the /etc/inittab file are executed.
If you request the init command to change the runlevel, the init command reads the /etc/inittab file to identify the processes that should be present at the new runlevel. Then, the init command deactivates all processes that should not run at the new level and starts all processes that should run at the new level.
The processes to be run by the init command at each of these runlevels are defined in the file /etc/inittab. The runlevels are changed by having the root user run the telinit command, which is linked to the init command. The init command run by this user sends the appropriate signals to the original init command initialized at system startup. The default runlevel can be changed by modifying the runlevel of the initdefault entry in the /etc/inittab file.
At the maintenance runlevel, the /dev/console console terminal is opened for reading and writing, and the user is prompted for the root password. When the root password is entered successfully, su is invoked. There are two ways to exit the maintenance runlevel:
If the shell terminates, the init command requests a new runlevel.
or
The init (or telinit) command signals the init command and forces it to change the system's operating level.
The apparent failure of the init command to prompt for a new runlevel when the system tries to boot (with initdefault as maintenance) may be due to the fact that the terminal console device (/dev/console) has been switched to a device that is not a physical console. If this is the case, and you wish to work on a physical console instead of /dev/console, you can force the init command to switch to the physical console by pressing the DEL key on the physical console device.
When the init command prompts for a new runlevel, enter any of the digits 0 through 9 or any of the letters S, s, M, or m. If S, s, M, or m is entered, the init command operates in maintenance mode, with the additional result that, if control had previously been forced over to a physical console, the /dev/console file is switched to that device as well. The init command generates a message to the device to which the /dev/console file was previously attached.
If you entered runlevels 0 through 9, the init command enters the appropriate runlevel. init rejects any other entries and re-prompts you for the correct value. If this is the first time the init command has entered any runlevel other than maintenance, it searches the /etc/inittab file for entries with the boot or bootwait keywords. If the init command finds these keywords, it performs the appropriate task, assuming that the runlevel entered matches the runlevel of the entry. For example, if the init command finds the boot keyword, it boots the machine. Any particular initialization of the system, such as detecting and installing a file system, occurs before the system allows any user action. The init command then scans the /etc/inittab file for all entries processed for that runlevel. It then continues the normal processing of the /etc/inittab file.
Runlevel 2 is defined by default to include all terminal processes and daemons running in a multi-user environment. In a multiuser environment, the /etc/inittab file is set to cause the init command to create processes for each terminal on the system.
For terminal processes, the shell terminates because of an explicitly typed end-of-file character, or a disconnection. When the init command receives a signal that a process has aborted, it records that fact and the reason for it in the /etc/utmp file and in the /var/adm/wtmp file. The /var/adm/wtmp file records the history of started processes.
To start each process in the /etc/inittab file, the init command waits for one of its successors to stop, waits for a power failure signal, SIGPWR, or until the init command is issued by the init or telinit commands to change the system's operating level. When one of these three conditions occurs, the init command rechecks the /etc/inittab file. The init command waits for one of the three conditions to occur, even though new entries have been added to the /etc/inittab file. To provide an instantaneous response, run the telinit -q command to recheck the /etc/inittab file.
If the init command finds that it has run an entry in the /etc/inittab file repeatedly (more than five times in 225 seconds), it assumes that there is an error in the entry's command string. It then prints an error message to the console and records the error in the system error log, and the entry is not run for 60 seconds after the message is sent. If the error continues to occur, the command regenerates the entry only five times every 240 seconds. The init command continues to assume an error until the command fails to respond five times in the interval, or until it receives a signal from a user. The init command logs the error only the first time it occurs.
When the telinit command requests the init command to change the runlevel, the init command sends a SIGTERM signal to all processes not defined within the current runlevel. The init command waits 20 seconds before aborting these processes with the signal SIGKILL.
If the init command receives a SIGPWR signal and is not in maintenance mode, it scans the /etc/inittab file for specific power failure entries. Before any other further processes run, the init command invokes the tasks associated with these entries (if the runlevel allows it). In this way, the init command performs clearing and logging functions whenever a system encounters a power failure. Note that these power-failure entries should not be made with the first initialized device.
Environment
Because the init command is the ultimate ancestor of every process on the system, every other process inherits its environment variables from the init command. As part of its initialization sequence, the init command reads the /etc/environment file and copies any assignments found there into the environment passed to all of its children. Because init child processes do not run inside a login session, they do not inherit a umask setting from the init command and can set umask to any value they need. Commands executed by init from the /etc/inittab file use init's ulimit values rather than the defaults given in /etc/security/limits, so a command that runs successfully from the command line may behave incorrectly when invoked by init. Any command with special ulimit needs should explicitly set the ulimit to the desired value.
Examples
To request the init command to recheck the /etc/inittab file, enter:
telinit  q
To request the init command to enter maintenance mode, enter:
telinit  s
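For reference, here is a hedged sketch of what the /etc/inittab records discussed above look like on a classic SysV-style (CentOS-era) system; the default runlevel and the getty program are illustrative:
# format: id:runlevels:action:process
# default runlevel consulted by the initdefault logic described above
id:3:initdefault:
# respawn a getty on tty1 at runlevels 2-5
1:2345:respawn:/sbin/mingetty tty1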


===========================
ldconfig in detail
===========================
ldconfig - configure dynamic linker run-time bindings


1) Add the library path to /etc/ld.so.conf, then execute /sbin/ldconfig.
2) After installing a library on Ubuntu, the library path also needs to be loaded: edit /etc/ld.so.conf and append the installed library path, e.g. /usr/local/lib.
Then, with root privileges, load the library path: sudo ldconfig.
ldconfig is the dynamic link library management command; to make a newly installed dynamic library available to the whole system, you need to run it.
The purpose of the ldconfig command is to search the default directories (/lib and /usr/lib) as well as the directories listed in the dynamic library configuration file /etc/ld.so.conf for shared libraries (named lib*.so*, as described above), and then create the links and cache file required by the dynamic loader (ld.so). The default cache file is /etc/ld.so.cache, which holds an ordered list of the library names found.
ldconfig is usually run at boot time, but when the user installs a new dynamic link library, it is necessary to run this command manually.
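A hedged sketch of the usual workflow after installing a library into a non-standard prefix; the path and file name are illustrative, and it assumes a distribution whose /etc/ld.so.conf includes /etc/ld.so.conf.d/*.conf:
# register the new library directory, rebuild /etc/ld.so.cache, then check it
echo '/usr/local/mylib/lib' | sudo tee /etc/ld.so.conf.d/mylib.conf
sudo ldconfig
ldconfig -p | grep mylib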
The ldconfig command line usage is as follows.
ldconfig [-v|--verbose] [-n] [-N] [-X] [-f CONF] [-C CACHE] [-r ROOT] [-l] [-p|--print-cache] [-c FORMAT] [--format=FORMAT] [-V] [-?|--help|--usage] path...
The options available for ldconfig are described as follows.
(1) -v or --verbose : With this option, ldconfig will display the directories it is scanning and searching for dynamic link libraries, as well as the names of the connections it creates.
(2) -n : With this option, ldconfig scans only the directories specified on the command line, not the default directories (/lib, /usr/lib), nor the directories listed in the configuration file /etc/ld.so.conf.
(3) -N : This option instructs ldconfig not to rebuild the cache file (/etc/ld.so.cache). Unless the -X option is also used, ldconfig still updates the library links as usual.
(4) -X : This option instructs ldconfig not to update the library links. Unless the -N option is also used, the cache file is still updated normally.
(5) -f CONF : This option specifies CONF as the configuration file for the shared libraries; the system default is /etc/ld.so.conf.
(6) -C CACHE : This option specifies CACHE as the generated cache file; the system default is /etc/ld.so.cache, which stores a sorted list of the shared libraries.
(7) -r ROOT : This option changes the application's root directory to ROOT (implemented by calling the chroot function). With this option, the default configuration file /etc/ld.so.conf becomes ROOT/etc/ld.so.conf. For example, with -r /usr/zzz, opening the configuration file /etc/ld.so.conf actually opens /usr/zzz/etc/ld.so.conf. This option greatly increases the flexibility of shared library management.
(8) -l : Normally, when ldconfig searches for DLLs, it will automatically set up a link to the DLL. If you select this option, you will enter the expert mode and need to set up the link manually. General users do not need this option.
(9) -p or --print-cache : This option instructs ldconfig to print out the names of all the shared libraries stored in the current cache file.
(10) -c FORMAT or --format=FORMAT : This option specifies the format to be used for the cache file; there are three choices: old, new and compat (the default).


NAME
 ldconfig - configure dynamic linker run-time bindings
  
SYNOPSIS
 ldconfig [OPTION...]
  
DESCRIPTION
 ldconfig creates the necessary links and cache (for use by the run-time linker, ld.so) to the most recent shared libraries found in the directories specified on the command line, in the file /etc/ld.so.conf, and in the trusted directories (/usr/lib and /lib). ldconfig checks the header and file names of the libraries it encounters when determining which versions should have their links updated. ldconfig ignores symbolic links when scanning for libraries.
 ldconfig will attempt to deduce the type of ELF libs (i.e. libc5 or libc6 (glibc)) based on what C libraries, if any, the library was linked against; therefore, when making dynamic libraries, it is wise to explicitly link against libc (use -lc). ldconfig is capable of storing multiple ABI types of libraries into a single cache on architectures which allow native running of multiple ABIs, like ia32/ia64/x86_64 or sparc32/sparc64.
 Some existing libs do not contain enough information to allow the deduction of their type, therefore the /etc/ld.so.conf file format allows the specification of an expected type. This is only used for those ELF libs which we can not work out. The format is like this "dirname=TYPE", where type can be libc4, libc5 or libc6. (This syntax also works on the command line). Spaces are not allowed. Also see the -p option.
 Directory names containing an = are no longer legal unless they also have an expected type specifier.
 ldconfig should normally be run by the super-user as it may require write permission on some root owned directories and files. If you use the -r option to change the root directory, you don't have to be super-user though, as long as you have sufficient rights to that directory tree.
  
OPTIONS
 -v --verbose
Verbose mode. Print current version number, the name of each directory as it is scanned and any links that are created.
 -n
Only process directories specified on the command line. Don't process the trusted directories (/usr/lib and /lib) nor those specified in /etc/ld.so.conf. Implies -N.
 -N
Don't rebuild the cache. Unless -X is also specified, links are still updated.
 -X
Don't update links. Unless -N is also specified, the cache is still rebuilt.
 -f conf
Use conf instead of /etc/ld.so.conf.
 -C cache
Use cache instead of /etc/ld.so.cache.
 -r root
Change to and use root as the root directory.
 -l
Library mode. Manually link individual libraries. Intended for use by experts only.
 -p --print-cache
Print the lists of directories and candidate libraries stored in the current cache.
 -c --format=FORMAT
Use FORMAT for the cache file. Choices are old, new and compat (the default).
 -? --help --usage
Print usage information.
 -V --version
Print version and exit.
  
EXAMPLES
 # /sbin/ldconfig -v
will set up the correct links for the shared binaries and rebuild the cache.
 # /sbin/ldconfig -n /lib
as root after the installation of a new shared library will properly update the shared library symbolic links in /lib.


===============================
umask  --- user's mask
ulimit --- user's limit
===============================
umask [-p] [-S] [mode]
              The  user  file-creation mask is set to mode.  If mode begins with a digit, it is interpreted as an octal number; otherwise it
              is interpreted as a symbolic mode mask similar to that accepted by chmod(1).  If mode is omitted, the  current  value  of  the
              mask is printed.  The -S option causes the mask to be printed in symbolic form; the default output is an octal number.  If the
              -p option is supplied, and mode is omitted, the output is in a form that may be reused as input.  The return status  is  0  if
              the mode was successfully changed or if no mode argument was supplied, and false otherwise.
              
ulimit [-HSTabcdefilmnpqrstuvx [limit]]
              Provides control over the resources available to the shell and to processes started by it, on systems that allow such control.
              The -H and -S options specify that the hard or soft limit is set for the given resource.  A hard limit cannot be increased  by
              a  non-root  user  once  it  is set; a soft limit may be increased up to the value of the hard limit.  If neither -H nor -S is
              specified, both the soft and hard limits are set.  The value of limit can be a number in the unit specified for  the  resource
              or  one of the special values hard, soft, or unlimited, which stand for the current hard limit, the current soft limit, and no
              limit, respectively.  If limit is omitted, the current value of the soft limit of the  resource  is  printed,  unless  the  -H
              option  is  given.   When  more  than  one resource is specified, the limit name and unit are printed before the value.  Other
              options are interpreted as follows:
              -a     All current limits are reported
              -b     The maximum socket buffer size
              -c     The maximum size of core files created
              -d     The maximum size of a process’s data segment
              -e     The maximum scheduling priority ("nice")
              -f     The maximum size of files written by the shell and its children
              -i     The maximum number of pending signals
              -l     The maximum size that may be locked into memory
              -m     The maximum resident set size (many systems do not honor this limit)
              -n     The maximum number of open file descriptors (most systems do not allow this value to be set)
              -p     The pipe size in 512-byte blocks (this may not be set)
              -q     The maximum number of bytes in POSIX message queues
              -r     The maximum real-time scheduling priority
              -s     The maximum stack size
              -t     The maximum amount of cpu time in seconds
              -u     The maximum number of processes available to a single user
              -v     The maximum amount of virtual memory available to the shell
              -x     The maximum number of file locks
              -T     The maximum number of threads


              If limit is given, it is the new value of the specified resource (the -a option is display only).  If no option is given, then
              -f  is  assumed.   Values  are  in 1024-byte increments, except for -t, which is in seconds, -p, which is in units of 512-byte
              blocks, and -T, -b, -n, and -u, which are unscaled values.  The return status is 0 unless an invalid  option  or  argument  is
              supplied, or an error occurs while setting a new limit.              


I. Default permissions and umask settings
umask is an octal value that defines the default permissions for files and directories a user creates; it specifies the permission bits to be withheld. Files and directories differ slightly: for files the starting point is octal 666, for directories it is octal 777, and the bits set in umask are cleared from that starting value (for the usual masks this works out the same as subtracting the mask).
Example:
jinsuo@jinsuo-desktop:~$ umask 066
jinsuo@jinsuo-desktop:~$ mkdir tsmk1
jinsuo@jinsuo-desktop:~$ ls -l
drwx--x--x  2 jinsuo jinsuo    4096 2010-05-07 09:25 tsmk1
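A quick way to check the 666/777 rule above yourself; this is a throwaway sketch run in a subshell so the umask change does not stick, and the file and directory names are arbitrary:
( umask 022
  touch demo_file && mkdir demo_dir
  ls -ld demo_file demo_dir )   # expect -rw-r--r-- for the file and drwxr-xr-x for the directory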


Description:
umask is usually set in the user's shell configuration file, such as .bashrc or .profile in the user's home directory, or in a global configuration file such as /etc/profile. Placing umask in one of these files means that files and directories created by the user automatically receive the intended default permission bits.


II. Use of ulimit
The ulimit command limits a process's use of particular kinds of resources. The limits fall into two categories:
A hard limit is a ceiling that only the root user can raise once it is set.
A soft limit is the default for newly created processes and can be raised up to the hard limit.


ulimit controls the resources available to processes started by the shell; it is a shell built-in command.
Syntax:
ulimit [-acdfHlmnpsStvw] [size]
Parameters:
-H Set the hard resource limit.
-S Set the soft resource limit.
-a Displays all current resource limits.
-c size:Set the maximum size of the core file. Unit:blocks
-d size:Set the maximum value of the data segment. Unit: kbytes
-f size:Sets the maximum size of the file to be created. Unit:blocks
-l size:Set the maximum value of locked processes in memory. Unit:kbytes
-m size:Set the maximum amount of resident memory that can be used. Unit:kbytes
-n size:Set the maximum number of file descriptors that can be open at the same time. Unit:n
-p size:Set the maximum value of the pipe buffer. Unit:kbytes
-s size:Set the maximum value of the stack. Unit: kbytes
-t size:Set the maximum CPU usage time. Unit: seconds
-v size:Set the maximum value of virtual memory. Unit: kbytes.


ulimit -a shows the soft limits
ulimit -Ha shows the hard limits


(e.g. ulimit -t 60 limits each process started by this shell to 60 seconds of CPU time)
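A minimal sketch of the most common case, checking and raising the open-files limit for the current shell; the value 4096 is arbitrary and must not exceed the hard limit:
ulimit -Sn        # current soft limit on open files
ulimit -Hn        # current hard limit on open files
ulimit -Sn 4096   # raise the soft limit for this shell and its children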


Take care when modifying ulimit values on Linux.
You may encounter error messages such as:
ulimit: max user processes: cannot modify limit: Operation not permitted
ulimit: open files: cannot modify limit: Operation not permitted


Why does it work for root but fail for normal users?
A look at /etc/security/limits.conf makes it clear.
Linux ships with default ulimit values for users, and this file configures each user's hard and soft limits, with the hard limit acting as the upper bound.
Any modification that exceeds the hard limit results in the "Operation not permitted" error.


Adding the following on top of that:
*        soft    nproc   10240
*        hard    nproc   10240
*        soft    nofile  10240
*        hard    nofile  10240
limits the maximum number of processes (nproc) and open files (nofile) to 10240 for every user.


==================================
SElinux and firewall shutdown
==================================
iptables is used to set up a firewall, i.e. to manage internal and external communications.
SELinux mainly restricts access to files, directories and processes.


Methods for shutting down SELinux:
1. Set SELINUX=disabled in the /etc/selinux/config file and reboot.
2. If you do not want to reboot the system, use the command setenforce 0
Notes:
setenforce 1 Sets SELinux to enforcing mode.
setenforce 0 Sets SELinux to be in permissive mode.
You can also disable selinux by adding: selinux=0 to the boot parameters of lilo or grub.
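A hedged sketch of method 1 above, assuming the stock /etc/selinux/config layout shown later in this section (back the file up first if in doubt):
sudo sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
grep '^SELINUX=' /etc/selinux/config   # verify before rebooting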


Check the status of SELinux:
/usr/sbin/sestatus -v
Sample output:
SELinux status:                 enabled
SELinuxfs mount:                /selinux
Current mode:                   permissive
Mode from config file:          enforcing
Policy version:                 21
Policy from config file:        targeted 
  
getenforce/setenforce View and set the current working mode of SELinux.
SELinux-related tools
/usr/bin/setenforce Modify the real-time running mode of SELinux
setenforce 1 Sets SELinux to enforcing mode.
setenforce 0 Sets SELinux to be in permissive mode.


To completely disable SELinux, set SELINUX=disabled in /etc/sysconfig/selinux, or pass selinux=0 to the kernel in the boot loader configuration.
It can also be controlled at boot time by passing the selinux parameter to the kernel (valid by default on Fedora 5):
kernel /boot/vmlinuz-2.6.15-1.2054_FC5 ro root=LABEL=/ rhgb quiet selinux=0


To see the status of the system at runtime, run the following; sample output is shown below:
/usr/sbin/sestatus -v
SELinux status: enabled
SELinuxfs mount: /selinux
Current mode: enforcing
Policy version: 18


# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - SELinux is fully disabled.
SELINUX=enforcing
#SELINUX=disabled
# SELINUXTYPE= type of policy in use. Possible values are:
# targeted - Only targeted network daemons are protected.
# strict - Full SELinux protection.
SELINUXTYPE=targeted
SELINUX has three options: "disabled", "permissive" and "enforcing".


Description of the three options:
disabled needs no explanation.
permissive means SELinux is running, but if you violate a policy it lets the operation continue and merely logs the violation. This is very useful when developing policies; it is effectively a debug mode.
enforcing means that when you violate a policy, the operation is blocked.


Linux Disable Firewall
1) Permanently effective after reboot:
Enable: chkconfig iptables on
Disable: chkconfig iptables off
2) Instant effect, expires after reboot:
Enable: service iptables start
Shutdown: service iptables stop


3) Clear the firewall rules: iptables -F; then stop the service: service iptables stop


Note that all other services under Linux can be enabled and disabled with the same chkconfig/service commands.
When the firewall is turned on, make the following settings to open the relevant ports.
Modify the /etc/sysconfig/iptables file to add the following:
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT
-A RH-Firewall-1-INPUT -m state --state NEW -m tcp -p tcp --dport 22 -j ACCEPT
Or:
/etc/init.d/iptables status prints a series of messages indicating whether the firewall is on.
/etc/init.d/iptables stop shuts down the firewall.
    
The recommended command to shut down the firewall is
iptables -P INPUT ACCEPT
iptables -P FORWARD ACCEPT
iptables -P OUTPUT ACCEPT
iptables -F
Notes:
Under Fedora:
/etc/init.d/iptables stop


Under ubuntu, since UBUNTU does not have a relevant direct command, please use the following command:
iptables -P INPUT ACCEPT
iptables -P OUTPUT ACCEPT
Temporarily open all ports
There is no command to disable iptables on Ubuntu


====================================================
linux cp/scp commands in detail
====================================================


==========================
Name: cp
==========================
Access: All users
Usage:
cp [options] source dest
cp [options] source... directory
Description: Copies one file to another file, or several files to another directory.
Options:
-a Reproduce file status, permissions, etc., as close to their original state as possible.
-r If source contains a directory name, copy all files in the directory to the destination.
-f If a file with the same name already exists at the destination, delete it before copying.
Examples:
Make a copy of the file aaa (which already exists) and name it bbb:
cp aaa bbb
Copy all the C programs into the Finished subdirectory.
cp *.c Finished


==========================
Command: scp
==========================
There are 3 common ways to copy files between different Linuxes:
The first is FTP: one of the Linux machines runs an FTP server, and the other uses an FTP client program to copy files.
The second method is to use samba service, similar to the Windows file copy way to operate, more concise and convenient.
The third is to utilize the scp command for file copying.
scp is file copying with security, based on an ssh login. It is easy to use; for example, to copy a local file to another host, you can use a command like the following:
    scp /home/daisy/ root@172.19.2.75:/home/root
You will then be prompted for the password of the root user on the 172.19.2.75 host, and the copy will begin.
The reverse operation, copying a file from the remote host to the current system, is just as easy:
    scp root@172.19.2.75:/home/root/ /home/daisy/


==================================================== 
scp command:
The linux scp command can copy files and directories between linux;
====================================================
scp can copy files between 2 linux hosts;
Command Basic Format:
scp [optional] file_source file_target
       
======================== 
Copying from local to remote
======================== 
* Copying files:
* Command format:
                scp local_file remote_username@remote_ip:remote_folder 
or
                scp local_file remote_username@remote_ip:remote_file 
or
                scp local_file remote_ip:remote_folder 
or
                scp local_file remote_ip:remote_file 
The 1st and 2nd specify the user name, and you need to enter the password again after the command is executed. The 1st specifies only the remote directory, and the file name remains unchanged, and the 2nd specifies the file name;
The 3rd and 4th do not specify a user name, and you need to enter a user name and password after the command is executed. The 3rd specifies only the remote directory, and the file name remains unchanged, and the 4th specifies the file name;
* Example:
                scp /home/space/music/1.mp3 root@remote_ip:/home/root/others/music 
                scp /home/space/music/1.mp3 root@remote_ip:/home/root/others/music/001.mp3 
                scp /home/space/music/1.mp3 remote_ip:/home/root/others/music 
                scp /home/space/music/1.mp3 remote_ip:/home/root/others/music/001.mp3 
* Copying directories:
* Command format:
                scp -r local_folder remote_username@remote_ip:remote_folder 
or
                scp -r local_folder remote_ip:remote_folder 
The first one specifies the user name, and you need to enter the password again after the command is executed;
The 2nd one does not specify a username, and requires you to enter a username and password after the command is executed;
* Example:
                scp -r /home/space/music/ root@remote_ip:/home/root/others/ 
                scp -r /home/space/music/ remote_ip:/home/root/others/ 
The above commands copy the local music directory into the remote others directory, i.e. after the copy there is a /home/root/others/music/ directory on the remote host.
======================== 
Copying from remote to local
========================
To copy from remote to local, simply reverse the order of the last two parameters of the copy from local to remote command;
Example:
        scp root@remote_ip:/home/root/others/music /home/space/music/1.mp3 
        scp -r remote_ip:/home/root/others/ /home/space/music/
The simplest usage is as follows:
scp local_username@local_ip:filename1 remote_username@remote_ip:filename2


The local_username@local_ip: part can be omitted; you may then need to enter the password corresponding to the remote username.
A few parameters that may be useful:
-v  Same as -v in most Linux commands, showing detailed progress; useful for diagnosing connection, authentication or configuration errors.
-C  Enable compression.
-P  Select the port. Note that lowercase -p is already taken (it preserves times and modes, as in rcp).
-4  Force IPv4 addresses.
-6  Force IPv6 addresses.
 
Note two things:
1. If the remote server's firewall has special restrictions, scp may have to use a non-standard port; the exact port depends on the situation. The command format is as follows (note the capital -P for the port):
#scp -P 4588 remote@remote_ip:/usr/local/ /home/administrator
2. When using scp, make sure the user has read permission on the corresponding files on the remote server.
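A hedged sketch combining the two notes above; the port, user, address and paths are all illustrative:
# recursively copy a local directory to a remote host whose sshd listens on port 2222
scp -P 2222 -r backup/ admin@192.168.1.10:/data/backup/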


============================================
linux scripts: 2>&1
============================================
0 Standard Input
1 Standard output
2 Standard error output
The file descriptor for standard input is 0.
The standard output is 1.
The standard error is 2


&0 denotes file descriptor 0
&1 denotes file descriptor 1
&2 denotes file descriptor 2


command > file 2>&1 &  -- explanation of this command:
command > file redirects the output of command to a file, i.e. the output is not printed to the screen but written to the file.
2>&1 redirects standard error to standard output, which has already been redirected to the file, so standard error also ends up in the file.
The final & makes the command run in the background.
Now consider what 2>1 means: 2 combined with > is an error redirection, but the 1 here is taken as a file named 1, not as standard output;
change it to 2>&1, and & together with 1 refers to standard output, so it becomes an error redirection to standard output.
You can test this with ls 2>1: it will not complain about a missing file named 2, but it creates an empty file named 1;
test ls xxx 2>1: the "no such file" error for xxx is written into the file 1;
test ls xxx 2>&1: no file named 1 is created, and the error goes to standard output;
ls xxx > file 2>&1 can also be written as ls xxx 1> file 2>&1; the redirection symbol > defaults to descriptor 1, and both the errors and the output go to the file.


--------------------------
Why is 2>&1 written after it?
--------------------------
     command > file 2>&1
First, command > file redirects the standard output to file, and 2>&1 then makes standard error copy the behaviour of standard output.
That is, it is also redirected to file, with the end result that both standard output and errors are redirected to file.


     command 2>&1 >file
2>&1 The standard error copies the behavior of the standard output, but the standard output is still in the terminal at this point. The output is redirected to file only after >file, but the standard error remains in the terminal.
You can see it with strace:
1. command > file 2>&1
The key system call sequence in this command that implements redirection is:
    open(file) == 3
dup2(3,1) 1 duplicates the behavior of 3
dup2(1,2) 2 copies the behavior of 1
2. command 2>&1 >file
The key system call sequence in this command that implements redirection is:
dup2(1,2)  2 copies the behavior of 1 (which at this point is still the terminal)
    open(file) == 3
dup2(3,1)  1 copies the behavior of 3
Consider what kind of file-sharing structure would result from a different sequence of dup2() calls. See APUE 3.10, 3.12.
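A quick way to see the difference at the shell, without strace; the file names are illustrative and /nonexistent is simply a path guaranteed to produce an error:
ls /nonexistent > out1.log 2>&1    # nothing on the terminal; the error is in out1.log
ls /nonexistent 2>&1 > out2.log    # the error still appears on the terminal; out2.log only captures stdout
cat out1.log out2.log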


> and >> are both output redirections.
1> refers to the standard output path (i.e. the default output).
2> refers to the error output path.
2>&1 sends the error output to wherever the standard output currently points (i.e. they are output together).


Supplementary question, 4<&0:
< and << are both input redirections.
0< refers to the standard input path (descriptor 0).
4<&0 makes file descriptor 4 a copy of standard input (in fact any of the descriptors 3-9 can be used).


1. Controlling standard input
Syntax: "command < file" makes the file the input to the command.
Example:
mail -s "mail test" wesongzhou@ < file1
This uses file1 as the body of the letter, with the subject "mail test", and sends it to the recipient.


2. Control of standard outputs
Syntax: "command > file" Sends the result of the command (command execution result) to the specified file.
Example.
ls -l > list Writes the results of the "ls -l" command to the file list.


Syntax: "command >! File" Sends the result of the command (command execution result) to the specified file, or overwrites the file if it already exists.
Example:
ls -lg >! list Overwrites the results of the "ls - lg" command to the file list.


Syntax: "command >& file" Writes any information generated on the screen when the command is executed (any information from the screen) to the specified file.
Example:
cc >& error Writes any information generated when compiling the file (any information from the screen) to the file error.


Syntax: "command >> file" Appends the result of command execution (command execution result) to the specified file.
Example.
ls -lag >> list appends the results of the "ls -lag" command to the file list.


Syntax: "command >>& file" Appends any information generated on the screen when the command is executed (any information from the screen) to the specified file.
Example.
cc >>& error appends any information generated by the screen while compiling the file to the file error.


"About inputs, outputs and error outputs".
In the character terminal environment, the concept of standard input/standard output is well understood. Input refers to the input of an application or command, whether from the keyboard or from other files; output refers to some information generated by the application or command; unlike Windows system, there is also a standard error output concept under Linux system, which is mainly set up for the purpose of program debugging and system maintenance. This concept is mainly for program debugging and system maintenance purposes. Separating the error output from the standard output allows some high-level error messages not to interfere with the normal output information, thus facilitating the use of general users. In Linux system: standard input (stdin) is keyboard input by default; standard output (stdout) is screen output by default; standard error output (stderr) is also output to the screen by default (std above means standard). When using these concepts in BASH, the standard output is usually denoted as 1, and the standard error output is denoted as 2. Here are some examples of how to use them, especially the standard output and the standard error output.
The inputs, outputs and standard error outputs are mainly used for I/O redirection, meaning that their default settings need to be changed.


Look at this example first:
$ ls > ls_result
$ ls -l >> ls_result
The two commands above redirect the output of ls into the ls_result file and append it to the ls_result file, respectively, instead of printing it to the screen. ">" is the symbol for redirecting output (standard output and standard error output), and two consecutive ">" symbols, i.e. ">>", mean that the output is appended rather than replacing the existing contents.


Here's another slightly more complex example:
$ find /home -name lost* 2> err_result
This command has an extra "2" before the ">" symbol, and the "2>" indicates redirection of standard error output. Since some directories in the /home directory are inaccessible due to permissions restrictions, some standard error output is generated and stored in the err_result file. You can imagine what the command find /home -name lost* 2>>err_result will produce.


If you execute find /home -name lost* > all_result directly, the result is that only the standard output is stored in the all_result file, what should you do if you want the standard error output to be stored in the file as well as the standard output?


Look at the example below:
$ find /home -name lost* > all_result 2>& 1
The example above will first redirect the standard error output to the standard output as well, and then redirect the standard output to the file all_result. This way we can store all the output in the file.


To realize the above function, there is another easy way to write it as follows:
$ find /home -name lost* >& all_result


If the error messages are not important, the following command allows you to bypass the many useless error messages:
$ find /home -name lost* 2> /dev/null


Students can go back and experiment with the following redirection methods to see what comes out and why.
$ find /home -name lost* > all_result 1>& 2
$ find /home -name lost* 2> all_result 1>& 2
$ find /home -name lost* 2>& 1 > all_result


Another very useful redirection operator is "-", see the example below:
$ (cd /source/directory && tar cf - . ) | (cd /dest/directory && tar xvfp -)


This command quickly copies the entire file tree from /source/directory to /dest/directory by piping a tar archive between the two, which is especially advantageous when /source/directory and /dest/directory are not on the same filesystem.


Here are a few more uncommon uses:
n<&- indicates that the n input is turned off.
<&- Indicates that standard input (keyboard) is turned off
n>&- indicates that output n is turned off
>&- Indicates that standard output is turned off.


Today a friend went to Tencent's job fair, and when we talked about the written test questions, I saw this one: please explain 2>&1.
Linux scripting veterans surely know this usage. I, however, only knew what 2>1 meant and did not understand what the & added. That is no good: it is an interview question, and what if I run into it when job-hunting later?
So I decided to sort it out, and here is what I compiled from Google to share with you.
1. Relevant knowledge
1) By default, the standard input is the keyboard, but it can also come from a file or a pipe (pipe |).
2) By default, the standard output is terminal, but it can be redirected to a file, a pipe, or backquotes `.
3) Standard error output to terminal by default, but can be redirected to file.
4) The standard input, output and error output are denoted as STDIN,STDOUT,STDERR, respectively, and can also be denoted by 0,1,2.
5) Besides the three commonly used file descriptors above, descriptors 3 through 9 are also available; think of them as spare descriptors, often used as temporary intermediate descriptors.
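A tiny illustration of using one of those spare descriptors as a temporary channel; the file name fd3.log is arbitrary:
exec 3>fd3.log                  # open descriptor 3 for writing
echo "written via descriptor 3" >&3
exec 3>&-                       # close it again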


2. Explaining 2>&1
You may often see this in the shell: >/dev/null 2>&1
Break down this combination: ">/dev/null 2>&1" into five parts.
1: > indicates where output is redirected to, e.g. echo '123' > file
2: /dev/null for empty device file
3: 2> indicates redirection of stderr standard error
4: & means "the same as"; 2>&1 redirects descriptor 2 (stderr) to wherever descriptor 1 points  ---------critical------------
5: 1 means stdout standard output, system default is 1, so '>/dev/null' is equivalent to '1>/dev/null'
Thus, >/dev/null 2>&1 can also be written as "1>/dev/null 2>&1"
Then the >/dev/null 2>&1 statement is executed as:
1>/dev/null : First of all, it means that the standard output is redirected to an empty device file, that is, it does not output any information to the terminal, to put it bluntly, it does not display any information.
2>&1 : Next, the standard error output is redirected to the standard output, because the standard output was previously redirected to the empty device file, so the standard error output is also redirected to the empty device file.


--------- important ------------
Is that clear? Now compare the two and see why this form is preferred. The two most common ways of writing it under Linux are:
command > file 2>file and command > file 2>&1
What is the difference between them?
command > file 2>file sends both the standard output and the error output of the command to file, but file is opened twice, once by FD1 and once by FD2, so stdout and stderr compete for the same file and can overwrite each other.
command > file 2>&1 sends stdout directly to file, while stderr inherits FD1's destination and is then sent to file as well; the file is opened only once and a single channel (FD1) carries both stdout and stderr.
In terms of I/O efficiency, the first form is less efficient than the second, which is why shell scripts are usually written with command > file 2>&1.


3. Expand on that.
Another very useful redirection operator is '-', see the example below:
     $ (cd /source/directory && tar cf - . ) | (cd /dest/directory && tar xvfp -)
This command quickly copies the entire file tree from /source/directory to /dest/directory by piping a tar archive between the two, which is especially advantageous when /source/directory and /dest/directory are not on the same filesystem.


Here are a few more uncommon uses:
n<&- indicates that the n input is turned off.
<&- Indicates that standard input (keyboard) is turned off
n>&- indicates that output n is turned off
>&- Indicates that standard output is turned off.


=======================================
How to check if Linux is 32bit or 64bit
=======================================
$su - root
#file /sbin/init
/sbin/init: ELF 32-bit LSB executable, Intel 80386 ...
That is a 32-bit Linux; on a 64-bit system the output says 64-bit instead.
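Other quick checks (uname ships with coreutils and getconf is POSIX):
uname -m           # x86_64 indicates a 64-bit kernel
getconf LONG_BIT   # prints 32 or 64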


===================================================
Linux: ldd Command Introduction and Usage - Printing Shared Library Dependencies
===================================================
1. First of all, ldd is not a binary executable; it is just a shell script.


2. ldd shows an executable module's dependencies. It works by setting a series of environment variables: LD_TRACE_LOADED_OBJECTS, LD_WARN, LD_BIND_NOW, LD_LIBRARY_VERSION, LD_VERBOSE and so on. When the LD_TRACE_LOADED_OBJECTS environment variable is non-empty, running any executable only prints its module dependencies; the program itself is not executed. You can try this in a shell terminal as follows:
(1) export LD_TRACE_LOADED_OBJECTS=1
(2) Execute any program, such as ls, and see what happens.
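The same experiment as a one-liner on a glibc system, scoping the variable to a single command so nothing needs to be unset afterwards (ls is just an example program):
LD_TRACE_LOADED_OBJECTS=1 /bin/ls   # prints the shared libraries ls needs, then exits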


3. ldd displays an executable's dependencies by working through the ELF dynamic loader (ld-linux.so). The loader runs before the executable itself and receives control first, so when the environment variables mentioned above are set, it simply prints the executable's dependencies instead of running it.


4. You can also invoke the dynamic loader directly, for example: /lib/ld-linux.so.2 --list program (which is equivalent to ldd program).
ldd command usage (from ldd --help):
Name: ldd - print shared library dependencies
Synopsis: ldd [OPTION]... FILE...
Description: ldd outputs the shared libraries required by each program or shared library given on the command line.
Options:
--version
Print the version number of ldd
-v --verbose
Print all information, e.g. including version information for symbols
-d --data-relocs
Perform data relocations and report any missing objects (ELF only)
-r --function-relocs
Perform relocations for both data objects and functions, and report any missing objects or functions (ELF only)
--help Usage Information
The standard version of ldd ships with glibc2; an older version that came with libc5 is still present on some systems. The libc5 version does not support long options; conversely, the glibc2 version does not support -V and only offers the equivalent --version option.
If a library name given on the command line contains a '/', the libc5 version of the program uses it as the library name as-is; otherwise it searches for the library in the standard locations. To run a shared library in the current directory, prefix the name with "./".
ldd does not work on a.out shared libraries.
ldd also does not work on some very old a.out programs built before ldd support was added to the compiler. If you use ldd on such a program, the program will try to run with argc = 0, with unpredictable results.


======================================
LVM Commands - A Quick Reference
Physical Volume Commands
======================================
pvcreate Creates LVM disks (i.e., physical volumes)
pvdisplay Displays information about physical volumes in a volume group
pvchange Sets the performance of the PV, allowing or denying the allocation of additional PEs from this disk.
pvmove Moves allocated PEs in a volume group from source to destination
 
Volume Group Commands
 
vgcreate Creates a volume group
vgdisplay Displays information about volume groups
vgchange activates or deactivates volume groups, allowing them to be mounted with or without quorum
vgextend Expanding Volume Groups by Adding Disks
vgreduce Removes disks to shrink a volume group
vgscan Scans all disks for volume groups
vgsync Synchronizes mirrors
vgremove Deletes a volume group
vgexport Removes a volume group from the system without modifying the information on the physical volumes
vgimport Adds a volume group exported with vgexport back into the system by scanning its physical volumes
vgcfgbackup Saves the configuration information of a volume group (remember that a volume group consists of one or more physical volumes)
vgcfgrestore Restore volume configuration information
 
Logical Volume Commands
 
lvcreate Generate logical volumes
lvdisplay Displays information about logical volumes
lvchange Changes the characteristics of a logical volume, including availability, scheduling policy, permissions, bad-block relocation, allocation policy and mirror cache availability
lvextend Increases the space of a logical volume
extendfs extends the size of the file system
lvreduce Reduce space on logical volumes
lvremove Removes a logical volume
lvsplit Splits a mirrored logical volume.
lvmerge Merges lvsplit logical volumes.
lvsync synchronizes logical volumes
lvmmigrate Prepares a root file system in a partition for migration to a logical volume.
lvlnboot Designates root, primary swap, or dump logical volumes.
lvrmboot Removes the designations made by lvlnboot.
 


Modifying Volume Groups
 
vgextend Example
 
1. Create the PVs
 # pvcreate /dev/rdsk/c0t0d0
 # pvcreate /dev/rdsk/c0t5d0
2. Add a PV to the VG
 # vgextend /dev/vg01 /dev/dsk/c0t4d0
3. Use the following command to verify the disks contained in the volume group
 #vgdisplay /dev/vg01
You can see that the disk is under this header
 ---Physical Volumes-----
 vgreduce 
You can use vgreduce to remove a disk from a volume group, but you must first remove the logical volumes on it. Whenever a disk is added to or removed from the root volume group, you must run the lvlnboot command to update the volume group's boot data, unless this is configured to happen automatically.
  vgremove 
If you want to remove a volume group, use the vgremove command. vgremove only removes a volume group that has already been reduced to a single physical volume with vgreduce; you can check this with vgdisplay by looking at these two lines:
   Cur PV 1
   Act PV 1
If the two values differ, the volume group cannot be deleted. If Cur PV is higher but /etc/lvmtab and vgdisplay -v show only one physical volume in the volume group, it still cannot be deleted unless the missing PVs are restored with vgcfgrestore; failing that, the only remaining option is a vgexport, which works best.
Increase the size of the logical volume
You can use the lvextend command to increase the size of the logic, or you can specify the amount of disk space you want to increase on a particular disk, or you can let LVM determine its distribution.
Assuming that you want to increase the logical volume /dev/vg01/lvol4 to 200M, the current size is 100M
  #lvextend -L 200 /dev/vg01/lvol4
Extending logical volumes to specific disks
Assuming that there are multiple disks in the volume group and that two of them are of the same model, you want to extend a 275M logical volume on one disk to 400M, and you want to make sure that the incremental amount is assigned to another disk of the same model.
 #lvextend -L 400 /dev/vg01/lvol4 
Extended File System
The capacity of a file system is grown to match the logical volume it lives on with the extendfs command. extendfs reads the current superblock to determine the existing file system layout, then uses that information to build the additional structures needed for the enlarged logical volume. Once these operations are complete, the superblock is updated with the new information. extendfs requires the character device file.
  #extendfs /dev/vg01/rlvol4
Then remount the file system at its mount point and run bdf to see the increased capacity.
==== Note: the file system must be unmounted before extendfs can be run ====
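Gathering the grow sequence described above into one place; this is only a sketch using the section's own example names (vg01/lvol4, the 200 MB size and the mount point /data are illustrative):
umount /dev/vg01/lvol4            # unmount first, as the note above requires
lvextend -L 200 /dev/vg01/lvol4   # grow the logical volume to 200 MB
extendfs /dev/vg01/rlvol4         # grow the file system via the character device file
mount /dev/vg01/lvol4 /data       # remount, then check the new size with bdf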
 
Use the lvreduce command to reduce space
 
Note: Reducing the size of a logical volume will result in data loss due to space reclamation, if no prior backups have been taken. Do not use the newfs command to reduce the file system size.
Reducing the size of logical volumes
Assuming there is an 80M logical volume, you no longer need the data in it and other applications only need 40M of space.
#lvreduce -L 40 /dev/vg03/lvol4
Moving data in logical volumes between disks
In a volume group, you can use the pvmove command to move data contained in a logical volume from one disk to another.
For example, you can move the data of a logical volume from one disk to another so that the space on the first disk can be used for something else; or, if you want to remove a disk from a volume group, you can move all the data off it and, once no logical volume data remains on the disk, delete it from the volume group.
Example:
Moving the data of a logical volume from one disk to another
Suppose you want to move the data in the logical volume /dev/vg01/markets from /dev/dsk/c0t3d0 to the
/dev/dsk/c0t4d0
Note: You must specify that logical volume on the source disk with the -n flag when issuing this command, and you must also specify the source disk first on the command line
Example:
#pvmove -n /dev/vg01/markets /dev/dsk/c0t3d0 /dev/dsk/c0t4d0
Move all the data on one disk to another disk.
E.g. #pvmove /dev/dsk/c0t4d0 /dev/dsk/c0t5d0
Delete the logical volume --- if you want to delete the data add -f
#lvremove /dev/vg3/lv012
Create a boot disk
1)pvcreate -B
2)vgextend
3)mkboot
For example:
Creating a boot disk in the root volume group
Creating bootable disks is useful in two specific situations:
 
1) Mirroring the root logical volume
2) Creating a new root logical volume
Adding a bootable disk to the root volume group
1) pvcreate -B    create the physical volume as bootable
2) vgextend       add the disk to the volume group
3) mkboot         write the boot area (LIF) to the boot sector
4) mkboot -a      modify the AUTO file in the LIF area (a combined example follows below)
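As a combined hedged sketch of the four steps (the disk, volume group and boot string are only examples):
  #pvcreate -B /dev/rdsk/c0t5d0            (create the physical volume as bootable)
  #vgextend /dev/vg00 /dev/dsk/c0t5d0      (add the disk to the root volume group)
  #mkboot /dev/rdsk/c0t5d0                 (write the LIF area to the boot sector)
  #mkboot -a "hpux -lq" /dev/rdsk/c0t5d0   (write the AUTO file in the LIF area)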


----------------------------
File Interpretation for Linux
----------------------------
1. Log files are controlled by the system log daemon syslogd and the kernel log daemon klogd; the default behaviour of both daemons is configured in the /etc/syslog.conf file.
The /etc/syslog.conf file is the configuration file of the Linux logging system (on Ubuntu/Debian the corresponding file is /etc/rsyslog.conf).
Log files are organised as described in /etc/syslog.conf. The following listing shows its contents:
[root@localhost ~]# cat /etc/syslog.conf
# Log all kernel messages to the console.
# Logging much else clutters up the screen.
#kern.* /dev/console
# Log anything (except mail) of level info or higher.
# Don't log private authentication messages!
*.info;mail.none;authpriv.none;cron.none                /var/log/messages


# The authpriv file has restricted access.
authpriv.* /var/log/secure


# Log all the mail messages in one place.
mail.* -/var/log/maillog


# Log cron stuff
cron.* /var/log/cron


# Everybody gets emergency messages
*.emerg *


# Save news errors of level crit and higher in a special file.
uucp,news.crit                                          /var/log/spooler


# Save boot messages also to boot.log
local7.*                                                /var/log/boot.log


The basic syntax of each line is:
[ message type ]    [ handling action ]
Note: the separator between the two fields must be a tab character.


(1). The message type is composed of a "source" (facility) and an "urgency" (priority), joined by a dot.
For example, news.crit in the listing above represents a "critical" condition coming from news: news is the source and crit is the priority. The wildcard * stands for all sources.


Description:
The first rule, *.info, sends all messages of level info or higher (notice, warning, err, crit, alert and emerg) to the corresponding log file.
Message priorities (eight levels plus none, listed in descending order of importance):
emerg     emergency
alert     alert
crit      critical
err       error
warning   warning
notice    notice
info      information
debug     debugging
none      do not log any message


(2). Handling actions
The "handling action" field tells syslogd what to do with the log: it can be stored on disk, forwarded to another machine, or displayed on the administrator's terminal.
Handling actions at a glance:
filename        write to a file (give the absolute path)
@hostname       forward to the syslogd program on another host
@IP address     same as above, but identified by IP address
/dev/console    display on the local machine's screen
*               send to the terminals of all logged-in users
| program       forward to a program through a pipe
Example:
kern.emerg      /dev/console    (display the message on the console as soon as a kernel emergency occurs)


Description:
If you want to modify syslogd's log files you must first stop the syslogd process and start it again after the modification. Attackers usually tamper with the system logs as soon as they break in, so as a network administrator you should dedicate one machine to collecting logs and have the other machines forward their logs to it automatically. That way log entries are shipped off the host the moment they are generated and the attacker's behaviour is still recorded correctly. Recording the log files on a remote host is exactly what the syslog server configured below is for.


Server Configuration Practice Steps
Example: 10.0.0.1 is the syslog server and 10.0.0.2 is the client.
Steps:
1). Server Configuration
vi /etc/sysconfig/syslog 
SYSLOGD_OPTIONS="-r -m 0"   ## -r means accept remote logs (-m 0 turns off the periodic MARK messages)
Restart the syslog service: /etc/init.d/syslog restart
2). Client Configuration
    vi /etc/syslog.conf
Add @10.0.0.1 as the destination of the messages you want to forward.
Example: *.info;mail.none;authpriv.none;cron.none    @10.0.0.1
Save, exit and restart the service:
        /etc/init.d/syslog restart
( Tip: view the end of a log directly with tail /var/log/messages or tail /var/log/boot.log, so you can confirm that syslog has restarted )


Note: the logging service uses port 514/udp, which must be open on the syslog server.
The syslog server side cannot filter by source address, so to stop outside hosts from writing junk to the log server this has to be solved in the network topology: have the gateway restrict outside access to port 514 (a hedged iptables sketch follows below).
If there are many servers the collected logs become large, so plan how the logs will be analysed.
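One hedged way to do that filtering on the gateway or on the log server itself (the addresses come from the example above; adjust them to your topology) is with iptables:
iptables -A INPUT -p udp --dport 514 -s 10.0.0.0/24 -j ACCEPT   (allow syslog from the internal network)
iptables -A INPUT -p udp --dport 514 -j DROP                    (drop syslog from everywhere else)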




declare -x PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
----------------------------
tail command
----------------------------
root@OpenWrt:/tmp# tail --help
BusyBox v1.19.4 (2013-08-21 23:32:15 CST) multi-call binary.


Usage: tail [OPTIONS] [FILE]...


Print last 10 lines of each FILE (or stdin) to stdout.
With more than one FILE, precede each with a filename header.


-f Print data as file grows ******* very useful
-s SECONDS Wait SECONDS between reads with -f
-n N[kbm] Print last N lines
-c N[kbm] Print last N bytes
-q Never print headers
-v Always print headers


N may be suffixed by k (x1024), b (x512), or m (x1024^2).
If N starts with a '+', output begins with the Nth item from the start
of each file, not from the end.




tail -f allows you to see the latest logs in real time, and will refresh the screen constantly.
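A few hedged usage examples (the log path is just a common default):
tail -n 50 /var/log/messages        (print the last 50 lines)
tail -f /var/log/messages           (follow the file as new lines are written)
tail -f -s 5 /var/log/messages      (same, but re-read only every 5 seconds)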


------------------------------------------------------------
linux useradd(adduser) command parameters and usage details (linux create new user command)
------------------------------------------------------------
Function Description: Creates a user account.
Syntax: useradd [-mMnr][-c <comment>][-d <login directory>][-e <expiration date>][-f <buffer days>][-g <group>][-G <groups>][-s <shell>][-u <uid>][user account]
or useradd -D [-b][-e <expiration date>][-f <buffer days>][-g <group>][-G <group>][-s <shell>]


Supplementary note: useradd can be used to create a user account. After the account is created, use passwd to set the password for the account. And you can use userdel to delete the account. The account created with the useradd command is actually stored in the /etc/passwd text file.


Parameters:
 -c<note> Add the note text. The comment text is saved in the comments field of passwd.
 -d<login directory> Specifies the starting directory when the user logs in.
 -D Change the preset value.
 -e<expiry date> Specifies the date on which the account expires.
 -f<buffer days> Specifies how many days after the password expires to close the account.
 -g<group> Specifies the group to which the user belongs.
 -G<Group> Specifies additional groups to which the user belongs.
 -m Automatically creates the user's login directory.
 -M Do not automatically create a user's login directory.
 -n Cancel the creation of a group with the name of the user.
 -r Create a system account.
 -s<shell> Specifies the shell that the user will use after logging in.
 -u<uid> Specifies the user ID.


Name: adduser
1. Role (note from linuxso: useradd and adduser are the same, but addgroup does not exist as a command, so useradd is recommended; of course, your own habits matter most...)


The useradd command is used by the superuser to create a user account and the user's home (starting) directory.


2. Format


useradd [-d home] [-s shell] [-c comment] [-m [-k template]] [-f inactive] [-e expire ] [-p passwd] [-r] name 


3. Main parameters


When a new account is created without the -D parameter, useradd combines the settings given on the command line with the system's preset values. Creating a new user account also updates some system files, creates the user's directory, copies the startup files into it, and so on, and all of this can be controlled with command-line options. This version of Red Hat Linux creates a private group for each new user unless the -n option is given.
-c comment  The comment text is placed in the description field of the new account's entry in the password file.
-d home_dir  The directory the new account will log into. By default the login name is appended to default_home and used as the login directory.




-e expire_date Account expiration date. The date is specified in the format MM/DD/YYYY.


-f inactive_days The number of days after which the account will be permanently deactivated. When the value is 0, the account is deactivated immediately. When the value is -1, this feature is disabled.


-g initial_group group name or a number to be used as the user's starting group. The group name must be an existing name. The group number must also be an existing group. The default group number is 1.


-G group,[...] Defines the user as a member of these additional groups. Groups are separated by commas, with no whitespace allowed. The group names are subject to the same restrictions as with the -g option; the difference is that -g sets the user's initial group while -G sets supplementary groups.


-m Create the user's home directory if it does not exist. If the -k option is used, the files in skeleton_dir are copied into the home directory; otherwise the files in /etc/skel are used. Any directories in skeleton_dir or /etc/skel are also created in the user's home directory. Without -m, no directories are created and no files are copied.


-M Do not create the user's home directory, even if the system-wide setting in /etc/login.defs says to create it.


-n By default a group with the same name as the user is created; this option cancels that.


-r This parameter creates a system account. The UID of a system account is lower than the UID_MIN defined in the system file /etc/login.defs, and useradd does not create a home directory for such an account regardless of the settings in /etc/login.defs. If you want a home directory you must add the -m parameter as well. This is an extra option added by Red Hat.


-s shell The name of the shell that the user will use after logging in. The default is left blank so that the system will specify the default login shell for you.


-u uid The numeric ID of the user. It must be unique unless the -o option is used, and cannot be negative. The default is the smallest unused value above 999; 0 to 999 are traditionally reserved for system accounts.
Changing the preset values: when the -D option is given, useradd either displays the current preset values or updates them from the command line. The available options are:


-b default_home Defines the base directory in which new users' home directories are created. The user's name is appended to default_home to form the new home directory. This option is of course overridden by -d.


-e default_expire_date User account expiration date.


-f default_inactive Account deactivation after a few days of expiration.


-g default_group The initial group name or ID for new accounts. The group name must be an existing name and the group ID must refer to an existing group.


-s default_shell The login shell for new accounts; it will be used by all accounts created from then on. If you give no other arguments, useradd -D displays the current default values. Note that the system administrator is responsible for placing the default user files in the /etc/skel directory. (A short illustration follows below.)
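As a brief hedged illustration of -D (the shell path is only an example):
#useradd -D                 (show the current preset values)
#useradd -D -s /bin/bash    (make /bin/bash the default login shell for future accounts)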
4. Description


useradd can be used to create user accounts, it is the same as the adduser command. After the account is created, the password for the account is set with passwd. Accounts created with useradd are actually stored in the /etc/passwd text file.


5. Application examples


Create a new user account and set the ID:


#useradd caojh -u 544 


It is important to note that you should try to use an ID value greater than 500 to avoid conflicts, because Linux creates some special users after installation and values between 0 and 499 are generally reserved for system accounts such as bin and mail.


[root@linux ~]# useradd [-u UID] [-g initial_group] [-G other_group] 
> -[Mm] [-c description field] [-d home] [-s shell] username
Parameters:
-u : followed by UID, a set of numbers. Assigns a specific UID directly to the account;
-g : The group name that follows is the initial group we mentioned above.
The group ID (GID) will be placed in the fourth field of /etc/passwd.
-G : The group name that follows is the group that this account can support.
This parameter will modify the information in /etc/group!
-M : Force: do not create the user's home directory
-m : Force: create the user's home directory!
-c : This is the description of the fifth column of /etc/passwd, which can be set at will.
-d : Specify a directory to be the home directory instead of using the preset value;
-r : create a system account; the UID of such an account is restricted by the range defined in /etc/login.defs
-s : followed by a shell, the default is /bin/bash.
Example:


Example 1: Create a user entirely with the preset values, named vbird1
[root@linux ~]# useradd vbird1
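A slightly fuller hedged example (the user name vbird2, the UID and the shell are made up for illustration):
#useradd -m -u 1500 -s /bin/bash vbird2    (create the user and its home directory)
#passwd vbird2                             (set the password)
#grep vbird2 /etc/passwd                   (confirm the new entry)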


--------------------------------------------------------
What you might want to know about cleaning up the /tmp/ folder on a Linux system
--------------------------------------------------------
We know that on a Linux system the files inside the /tmp folder get emptied, but how often they are emptied and how the cleanup is done may be less well known, so today we will dissect these two questions.


On RHEL/CentOS/Fedora systems (this experiment was done on RHEL6):
Let's look at the tmpwatch command first. Its role is to remove files which haven't been accessed for a period of time. I won't go into the details of how to use it; I'll leave that to you. We will mainly look at the scheduled-task file related to this command.
It is /etc/cron.daily/tmpwatch, and we can look inside this file to see what it contains:
#! /bin/sh 
flags=-umc 
/usr/sbin/tmpwatch "$flags" -x /tmp/.X11-unix -x /tmp/.XIM-unix \ 
        -x /tmp/.font-unix -x /tmp/.ICE-unix -x /tmp/.Test-unix \ 
        -X '/tmp/hsperfdata_*' 10d /tmp 
/usr/sbin/tmpwatch "$flags" 30d /var/tmp 
for d in /var/{cache/man,catman}/{cat?,X11R6/cat?,local/cat?}; do 
    if [ -d "$d" ]; then 
        /usr/sbin/tmpwatch "$flags" -f 30d "$d" 
    fi 
done


If you study this script carefully it becomes clear: the first line sets the flags (options), the lines with -x list the directories under /tmp that are excluded, the 10d /tmp line is the cleanup of /tmp itself, and the remaining lines clean up other directories, which we won't go into.
Look at the line /usr/sbin/tmpwatch "$flags" 30d /var/tmp: the key is the 30d, meaning 30 days, which decides that files under /var/tmp not accessed for 30 days are removed (the /tmp line above uses 10d, i.e. 10 days). If you want, say, a daily cleanup, change the value to 1d. Now you know... haha!
But note one problem: if you set a much shorter interval here, for example 30 minutes or 10 seconds, you can write it in this file, but you will find the system does not clean /tmp that often. Why? Because this tmpwatch script lives under /etc/cron.daily/, and that directory is only executed once a day, so any interval shorter than a day has no effect. Now it makes sense.
So the conclusion is: in RHEL6, the system automatically cleans /tmp of files that have not been accessed for 10 days (and /var/tmp after 30 days), and the job runs once a day.
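If you don't want to wait for the daily job, a hedged one-off invocation (flags copied from the script above) would be:
/usr/sbin/tmpwatch -umc 1d /tmp      (remove files under /tmp not accessed for one day)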


On Debian/Ubuntu systems (Ubuntu 10.10 as the test environment):
On Ubuntu the contents of the /tmp folder are emptied at every boot, so if you don't want it cleaned automatically you only need to change the value of TMPTIME in the rcS file.
Here is how to modify it:
sudo vi /etc/default/rcS
and change
TMPTIME=0
to
TMPTIME=-1 (or infinite)
After that change the system will no longer clean your /tmp directory on reboot.
By analogy, if you want files to be kept for a certain number of days, set TMPTIME to that number of days (I didn't test this; that's how I understand it).
So the conclusion is: on Ubuntu, the system cleans the /tmp folder at every startup by default.


==============================================
How to block cron from sending user mail
==============================================
I often run into this problem: when I log in to the system or type a command, the system keeps prompting:
You have new mail in /var/spool/mail/root
It gets annoying at times, and it happens for the following reason:
a script run from cron produces output. Many people write scripts without thinking about where standard output and standard error should go, so as soon as there is any output, cron mails it to the current user. Much of the advice online about stopping the MTA (sendmail or postfix) and so on does not help, and removing the sendmail command doesn't work either.
The most common way to handle it is to append the following to each cron entry to discard the output:
>/dev/null 2>&1   (redirect standard output to /dev/null, then redirect standard error to standard output, so both are thrown away)
OR
&> /dev/null
For example:
*/2 * * * * /usr/local/sbin/dog_lighttpd.sh >/dev/null 2>&1
But sometimes that is not convenient enough. For example, you have just taken over a project that already has more than 300 cron entries; even writing a script to append the redirection to all of them is a nuisance.
In that case there is an easier way:
simply run: crontab -e
and add as the first line: MAILTO=""
It turns out the recipient is defined by the MAILTO variable (as in /etc/crontab), so we just leave the recipient field empty (a sketch follows below).
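A hedged sketch of what the top of the crontab might look like after the change (the job line is the one from the example above):
MAILTO=""
*/2 * * * * /usr/local/sbin/dog_lighttpd.sh >/dev/null 2>&1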
In fact, the most fundamental fix is for people to develop the good habit of writing scripts to a proper standard.


Dailybuild script content.
[root@xiangfch /]# cat /home/datebuild/
#!/bin/bash
DATE=`date "+%Y-%m-%d"`
cd /home/datebuild
mkdir -p $DATE
cd $DATE
svn co http://10.118.202.87/svn/webserver/ZXOIS-SYC_V01.01.10/ --username liruibin --password Hanfeizi1983 >/dev/null 2>&1 
cd ZXOIS-SYC_V01.01.10/openwrt
sh  >/dev/null 2>&1 
sh 1>/home/datebuild/$DATE/ 2>&1
cd /home/datebuild/$DATE
if [ `find ./ZXOIS-SYC_V01.01.10/openwrt/bin/x86/ |grep .vmdk | wc -l` -eq 0 ];then
echo -e "failed" >/home/datebuild/$DATE/
else
echo -e "succeed" >/home/datebuild/$DATE/
fi


==============================================
Mail viewing commands under linux
==============================================
The system provides a mail facility for communication between users, and when a user logs in at a terminal the system may report:
    you have mail.
 
At this point the user can read the mail by typing the mail command:
    $ mail
The mail program displays the user's messages one at a time, most recent first. After displaying each message, mail asks whether the user wants to do anything with it.
1) If the user answers d, the letter is deleted;
2) If you only press the Enter key, it means that no changes are made to the letter (the letter is still saved and you can read this letter next time);
3) If you answer p, the letter is displayed again;
4)s filename indicates that the letter is to be deposited into the named file;
5) If you answer q, you want to exit from mail.


[root@localhost ~]# mail
Mail version 8.1 6/6/93. Type ? for help.
"/var/spool/mail/root": 76 messages 76 unread
>U 1 root@ Mon Jan 19 15:43 24/936   "Cron <root@localhost>"
U 2 root@ Mon Jan 19 15:44 24/936   "Cron <root@localhost>"
U 3 root@ Mon Jan 19 15:45 24/936   "Cron <root@localhost>"
U 4 root@ Mon Jan 19 15:46 24/936   "Cron <root@localhost>"
U 5 root@ Mon Jan 19 15:47 24/936   "Cron <root@localhost>"
> marks the current message, U marks unread messages
& p //display the current mail
Message 1:
From root@ Mon Jan 19 15:43:02 2009
Date: Mon, 19 Jan 2009 15:43:02 +0800
From: root@ (Cron Daemon)


& 2 //display the file labeled 2
Message 2:


Other common commands:
unread Marks messages as unread
h|headers Displays the current list of message headers
l|list Displays a list of the commands currently supported
?|help Displays the usage of the various commands for viewing the mailing list
d Deletes the current message and moves the pointer down. d 1-100 deletes messages 1 through 100.
f|from Displays summary information for the current message only. f num displays the summary of a particular message.
f|from num moves the pointer to a particular message
z Scrolls the header list to show the next screenful of messages (about twenty at a time)
more|p|page Reads the message at the current pointer. While reading, the space bar turns the page and Enter moves down one line.
t|type|more|p|page num Reads a particular message
n|next|{just press Enter} Reads the next message after the current pointer; space turns the page, Enter moves down one line.
v|visual Opens the current message in a plain-text editor.
n|next num Reads a particular message
top Displays the header of the message the current pointer is on
file|folder Displays the file where the system mail is stored, along with the total number of messages and other information
x Exits the mail command platform without saving the previous operations; for example, messages deleted with d are not actually deleted
q Exits the mail command platform and saves the previous operations: messages deleted with d are really deleted, and messages that have been read are saved to the mbox file in the current user's home directory. A message is only completely gone once it is deleted from mbox.


Typing mail -f mbox at the Linux command line lets you read the mail stored in the mbox file in the current directory.
cd Change the location of the current folder
When writing a letter, pressing Ctrl+C twice in a row interrupts the work and does not deliver this letter.
When reading a letter, press Ctrl+C once to exit the reading state.
To check whether sent e-mail has been delivered or is still stuck in the mail server, use: /usr/lib/sendmail -bp
If the message "Mail queue is empty" is displayed on the screen, the mail has been sent.
Any other message means the e-mail has not been delivered for some reason.
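For completeness, a hedged example of sending a quick message and then checking the queue (the recipient and text are made up):
echo "build finished" | mail -s "daily build" root
/usr/lib/sendmail -bp                (check whether the message is still queued)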


===================
The full names of linux commands
===================
rc = run command
rc.d = run command directory
init.d = initialization directory
initrd = initialize ram disk
inittab = initialization table
fstab = file system table
httpd =http daemon
mysqld = mysql daemon
sshd = Secure SHell daemon
mingetty is a minimalist getty program for virtual consoles.
getty (get teletypewriter) Function Description: Sets the terminal mode, connection rate and control line.
tty (teletypewriter) n. teletypewriter Function: Displays the name of the file to which the terminal is connected to the standard input device.


/bin = BINaries
/dev = DEVices
/etc = ETCetera etc.; additions; additional persons; and others
/lib = LIBrary
/proc = PROCesses
/sbin = Superuser BINaries
/tmp = TeMPorary
/usr = Unix Shared Resources
/var = VARiable
FIFO = First In, First Out
GRUB = GRand Unified Bootloader
IFS = Internal Field Separators
LILO = LInux LOader
MySQL = My is the name of the original author's daughter, SQL = Structured Query Language
PHP = Personal Home Page Tools = PHP Hypertext Preprocessor
PS = Prompt String
Perl = "Practical Extraction and Report Language" = "Pathologically Eclectic Rubbish Lister"
Python was named after the TV show Monty Python's Flying Circus.
Tcl = Tool Command Language
Tk = ToolKit
VT = Video Terminal
YaST = Yet Another Setup Tool
apache = "a patchy" server
apt = Advanced Packaging Tool
ar = archiver
as = assembler
awk = "Aho, Weinberger and Kernighan", the first letters of the three authors' last names.
bash = Bourne Again SHell
bc = Basic (Better) Calculator
bg = BackGround
biff = A dog owned by author Heidi Stettner who likes to bark at the mailman.
cal = CALendar
cat = CATenate
cd = Change Directory
chgrp = CHange GRouP
chmod = CHange MODe
chown = CHange OWNer
chsh = CHange SHell
cmp = compare
corba = Common Object Request Broker Architecture
comm = common
cp = CoPy
cpio = CoPy In and Out
cpp = C Pre Processor
cron = Chronos Greek time
cups = Common Unix Printing System
cvs = Concurrent Versions System
daemon = Disk And Execution MONitor
dc = Desk Calculator
dd = Disk Dump
df = Disk Free
diff = DIFFerence
dmesg = diagnostic message
du = Disk Usage
ed = editor
egrep = Extended GREP
elf = Executable and Linkable Format
elm = ELectronic Mail
emacs = Editor MACroS
eval = EVALuate
ex = EXtended
exec = EXECute
fd = file descriptors
fg = ForeGround
fgrep = Fixed GREP
fmt = format
fsck = File System ChecK
fstab = FileSystem TABle
fvwm = F*** Virtual Window Manager
gawk = GNU AWK
gpg = GNU Privacy Guard
groff = GNU troff
hal = Hardware Abstraction Layer
joe = Joe's Own Editor
ksh = Korn SHell
lame = Lame Ain't an MP3 Encoder
lex = LEXical analyser
lisp = LISt Processing = Lots of Irritating Superfluous Parentheses
ln = LiNk
lpr = Line PRint
ls = list
lsof = LiSt Open Files
m4 = Macro processor Version 4
man = MANual pages
mawk = Mike Brennan's AWK
mc = Midnight Commander
mkfs = MaKe FileSystem
mknod = MaKe NODe
motd = Message of The Day
mozilla = MOsaic GodZILLa
mtab = Mount TABle
mv = MoVe
nano = Nano's ANOther editor
nawk = New AWK
nl = Number of Lines
nm = names
nohup = No HangUP
nroff = New ROFF
od = Octal Dump
passwd = PASSWorD
pg = pager
pico = PIne's message COmposition editor
pine = "Program for Internet News & Email" = "Pine is not Elm"
ping = onomatopoeia aka Packet InterNet Grouper
printcap = PRINTer CAPability
popd = POP Directory
pr = pre
printf = PRINT Formatted
ps = Processes Status
pty = pseudo tty
pushd = PUSH Directory
pwd = Print Working Directory
rc = runcom = run command, rc is still a shell for plan9
rev = REVerse
rm = ReMove
rn = Read News
roff = RunOFF
rpm = RPM Package Manager = RedHat Package Manager
r in rsh, rlogin, rvim = Remote
rxvt = ouR XVT
seamoneky = me
sed = Stream EDitor
seq = SEQuence
shar = SHell ARchive
slrn = S-Lang rn
ssh = Secure SHell
ssl = Secure Sockets Layer
stty = Set TTY
su = Substitute User
svn = SubVersioN
tar = Tape ARchive
tcsh = TENEX C shell
tee = T (T-shaped water pipe connection)
telnet = TErminaL over NETwork
termcap = terminal capability
terminfo = terminal information
tex = abbreviation of τέχνη, Greek for "art/craft"
tr = translate
troff = Typesetter new ROFF
tsort = Topological SORT
tty = TeleTypewriter
twm = Tom's Window Manager
tz = TimeZone
udev = Userspace DEV
ulimit = User's LIMIT
umask = User's MASK
uniq = UNIQue
vi = VIsual = Very Inconvenient
vim = Vi IMproved
wall = write all
wc = Word Count
wine = WINE Is Not an Emulator
xargs = eXtended ARGuments
xdm = X Display Manager
xlfd = X Logical Font Description
xmms = X Multimedia System
xrdb = X Resources DataBase
xwd = X Window Dump
yacc = yet another compiler compiler  
grep means "globally find a regular expression and print the result line." The full name of grep is Global Regular Expression Print.


File structure on Fedora 16 Beta i686:
/
|-- bin
|-- boot
|-- dev
|-- etc
|-- home
|-- lib
|-- lost+found
|-- media
|-- mnt
|-- opt
|-- proc
|-- root
|-- run
|-- sbin
|-- srv
|-- sys
|-- tmp
|-- usr
`-- var


Directory structure under Debian
├── bin         The most basic commands needed for a basic system
├── boot        Kernel and boot system programs
│   └── grub    Boot loader configuration files are kept here
├── dev         Device files
├── emul
│   └── ia32-linux
├── etc         System configuration files
├── home        Ordinary users' home directories
├──
├── lib         Dynamically linked shared libraries
├── lib32       32-bit library files
├── lib64 -> /lib   Library files
├── lost+found  File fragments
├── media       Mount directory for removable storage devices
├── mnt         Mount directory for mounted storage devices
├── opt         Optional software installation directory
├── proc        Memory mapping of process and kernel information
├── root        root's home directory
├── sbin        System administration commands
├── selinux     Security services
├── srv         Data used by services after they start
├── sys         Kernel device tree
├── tmp         Temporary files
├── usr         Applications and files
│   ├── bin     Applications used by system users
│   ├── games   Games
│   ├── include Header files needed to develop and compile applications
│   ├── lib     Commonly used dynamic link libraries and package configuration files
│   ├── lib32   32-bit versions of the commonly used dynamic link libraries and package configuration files
│   ├── lib64 -> lib
│   ├── local   Locally installed programs
│   ├── sbin    More advanced administration programs and system daemons used by the superuser
│   ├── share   Shared system data
│   └── src     Kernel source code
├── var
│   ├── backups Backups
│   ├── cache   Application cache files
│   ├── lib     Files that change while the system runs normally
│   ├── local   Variable data for programs installed in /usr/local
│   ├── lock    Lock files
│   ├── log     System logs
│   ├── mail    Mail logs and related files
│   ├── opt     Variable data for the /opt directory
│   ├── run     Information about the running system, kept until the next boot
│   ├── spool   Spool (queue) directories for printers, mail, proxy servers, etc.
│   └── tmp     Temporary files larger than /tmp allows or needing to exist longer
└── vmlinuz
sudo tree / -L 1 > ~/tree