Top 10 Unix Command Line Utilities

June 18, 2013, 17:37

0> tr

tr translates, squeezes, or deletes the characters it reads from standard input.

tmp > echo "adbdc" | tr "abc" "123"
1d2d3
tmp > echo "Hello" | tr "A-Za-z" "a-zA-Z"
hELLO
tmp > echo $PATH | tr ":" "\n" | sort
    /Users/oliver/.cabal/bin
    /Users/oliver/.rvm/bin
    /Users/oliver/.rvm/gems/ruby-1.9.3-p0/bin
    /Users/oliver/.rvm/gems/ruby-1.9.3-p0@global/bin
    /Users/oliver/.rvm/rubies/ruby-1.9.3-p0/bin
    /Users/oliver/local/node/bin
    /Volumes/macbox_cs/dev/android-sdk-macosx/platform-tools/
    ...
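
Two more tr modes worth knowing are -d to delete characters and -s to squeeze runs of repeats down to one; a quick sketch:

```shell
# -d deletes every occurrence of the listed characters
echo "hello world" | tr -d "lo"      # he wrd
# -s squeezes each run of the listed characters to a single one
echo "aaabbbccc" | tr -s "ab"        # abccc
```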

1> sort

tmp > du /bin/* | sort -n -r | head -4
1320    /bin/ksh
1264    /bin/sh
1264    /bin/bash
592     /bin/zsh

sort will take multiple files as input and merge and sort all of them for you. Among the most used options are -r for sorting in reverse order and -f for sorting case-insensitively.
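
Here is a small sketch of those flags (fruit.txt is a made-up file; LC_ALL=C pins the collation order so the output is predictable):

```shell
printf "apple\nBanana\ncherry\n" > fruit.txt
LC_ALL=C sort fruit.txt      # Banana, apple, cherry (uppercase sorts first in C order)
LC_ALL=C sort -f fruit.txt   # apple, Banana, cherry (case folded)
LC_ALL=C sort -r fruit.txt   # cherry, apple, Banana (reversed)
```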

2> uniq

Want to get rid of duplicate lines? uniq solves this problem efficiently. Note that it only compares adjacent lines for equality, so you usually want to sort before you use uniq.
Nice options: -c prepends the number of occurrences to each line, -u outputs only lines that are not repeated, and -i compares case-insensitively.
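
A tiny sketch of those flags, sorting first so equal lines become adjacent:

```shell
printf "c\na\nb\nc\na\n" | sort | uniq -c   # 2 a, 1 b, 2 c (counts are left-padded)
printf "c\na\nb\nc\na\n" | sort | uniq -u   # b, the only line that is not repeated
```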

Here is an example that combines tr, sort and uniq so that you can get the frequency of all words in a Wikipedia article:

tmp > curl http://en.wikipedia.org/wiki/Minimum_spanning_tree \
      | tr -cs "A-Za-z" "\n" | tr "A-Z" "a-z" \
      | sort | uniq -c | sort -n -r

(The first lines here are curl's progress meter, printed to stderr; add -s to the curl call to suppress it.)

  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 93342  100 93342    0     0   279k      0 --:--:-- --:--:-- --:--:--  323k
1031 a
 568 span
 442 href
 435 class
 308 li
 300 b
 284 title
 229 wiki
 211 the
 209 cite
 206 id
 192 spanning
 184 i
 169 tree
 166 minimum
 ...

This fetches an HTML page from Wikipedia and first does some preprocessing using tr:
tr -cs "A-Za-z" "\n" — split on all non-alphabetic characters
tr "A-Z" "a-z" — make everything lowercase
sort | uniq -c — sort, remove duplicates but keep the count
sort -n -r — sort numerically in reverse order
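
The same pipeline works on any text, so here is an offline sketch of it on a literal string, no network needed:

```shell
echo "the quick fox and the lazy dog and the end" \
  | tr -cs "A-Za-z" "\n" | tr "A-Z" "a-z" \
  | sort | uniq -c | sort -n -r | head -2
# prints 3 the, then 2 and (uniq -c left-pads the counts)
```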

3> split and cat

This is an example that splits a huge file into 75 MB chunks:

split -b 75m input.zip

This will result in a bunch of files named with three letters, starting from xaa, xab, …
To reassemble the lot, all those files have to be concatenated in alphabetical order (which is exactly the order a shell glob like x* expands in):

cat `ls x*` > reassembled.zip
tmp > ls *.zip | xargs md5
MD5 (input.zip) = d760b448595f844b1162eaa3c04f83d8
MD5 (reassembled.zip) = d760b448595f844b1162eaa3c04f83d8
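
The whole round trip can be sketched with a generated test file (file names here are made up; cmp verifies byte-for-byte equality):

```shell
dd if=/dev/urandom of=input.bin bs=1024 count=1024 2>/dev/null  # ~1 MB of noise
split -b 300k input.bin part_        # part_aa, part_ab, part_ac, part_ad
cat part_* > reassembled.bin         # the glob expands in sorted order
cmp input.bin reassembled.bin && echo "identical"
```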

4> substitution operations

Extract the audio from a bunch of mp4 files:

for i in *.mp4; do ffmpeg -i "$i" "${i%.mp4}.mp3"; done

Here the substitution operator ${i%.mp4} deletes the shortest possible match of .mp4 from the right side of the string.
This is nice and terse… but there is another variant that might be a little more explicit: using basename

for i in *.mp4; do ffmpeg -i "$i" "`basename $i .mp4`.mp3"; done
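
The related operators differ only in direction (% trims from the right, # from the left) and greed (doubling takes the longest match); a sketch with a made-up path:

```shell
f="music/track.old.mp4"
echo "${f%.mp4}"      # music/track.old  (shortest match trimmed from the right)
echo "${f%%.*}"       # music/track      (longest match trimmed from the right)
echo "${f#*/}"        # track.old.mp4    (shortest match trimmed from the left)
basename "$f" .mp4    # track.old
```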

5> calculate the size of all files found by find

tmp > find . -iname "*.png" -ls | awk '{s += $7} END {print s}'
2076723
tmp > find . -iname "*.png" -print0 | xargs -0 du -ch | tail -1
2.2M    total
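
The awk part of the first command simply accumulates a column and prints the total once the input ends; in isolation:

```shell
# s grows with every input line; the END block runs after the last one
printf "100\n200\n50\n" | awk '{s += $1} END {print s}'   # 350
```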

6> df

Classic. Reports disk space usage for the mounted filesystems on your system.

tmp > df -h
Filesystem     Size   Used  Avail Capacity  iused   ifree %iused  Mounted on
/dev/disk0s2  156Gi  138Gi   17Gi    89% 36247400 4528347   89%   /
...

7> dd

A nice use I found is to securely wipe a drive by overwriting the entire drive with zeros:

dd if=/dev/zero of=/dev/hda

More secure (meaning harder to recover) is to wipe the drive with random data:

dd if=/dev/urandom of=/dev/hda

And for the paranoid and the US Government we can repeat the fun several times (the bs=8b block size means eight 512-byte blocks, i.e. 4 KiB):

for n in `seq 7`; do dd if=/dev/urandom of=/dev/sda bs=8b conv=notrunc; done

8> zip

Simplest case: add some files to a zip file (called “abc.zip”):

zip abc file1 file2 file3

Of course you can also copy a whole directory “tmp” into “abc.zip”:

zip -r abc tmp

Also quite handy: creating a password-protected archive:

zip -e important.zip file1 file2

And finally, listing the files inside an archive:

unzip -l a.zip
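
A quick round trip through those commands (the file and archive names are made up; -q keeps zip and unzip quiet):

```shell
echo "hello" > file1
zip -q abc file1              # creates abc.zip
unzip -l abc.zip              # list the contents
unzip -q -o abc.zip -d out    # extract into out/, overwriting without asking
cat out/file1                 # hello
```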

9> hexdump

When dealing with binary files it is often necessary to take a quick look at the actual data. A little command line utility is very practical for such cases, and hexdump does exactly what I need.

tmp > hexdump  new.zip | head -5
0000000 70 a9 20 8d b1 a3 5c 1c 16 e3 17 b2 ef 94 16 ac
0000010 85 40 59 f9 89 40 45 ed 61 e8 10 f5 6f f5 99 a2
0000020 3a d6 69 62 e0 ab ee 0a 67 b8 c5 21 58 42 4d 52
0000030 2d 78 ae 2a 31 f2 78 c7 1f 22 99 07 e1 6a 55 bb
0000040 68 9a fe 8f c3 e0 e5 a3 4c 7d b3 6b f9 ae de 92

You can instruct it to also display the corresponding ASCII representation:

tmp > hexdump -C new.zip | head -5
00000000  70 a9 20 8d b1 a3 5c 1c  16 e3 17 b2 ef 94 16 ac  |p. ...\.........|
00000010  85 40 59 f9 89 40 45 ed  61 e8 10 f5 6f f5 99 a2  |.@[email protected]...|
00000020  3a d6 69 62 e0 ab ee 0a  67 b8 c5 21 58 42 4d 52  |:.ib....g..!XBMR|
00000030  2d 78 ae 2a 31 f2 78 c7  1f 22 99 07 e1 6a 55 bb  |-x.*1.x.."...jU.|
00000040  68 9a fe 8f c3 e0 e5 a3  4c 7d b3 6b f9 ae de 92  |h.......L}.k....|

Combining hex and octal output makes it easy to relate the hex values to their octal counterparts:

tmp > hexdump -xb new.zip | head -5
0000000    a970    8d20    a3b1    1c5c    e316    b217    94ef    ac16
0000000 160 251 040 215 261 243 134 034 026 343 027 262 357 224 026 254
0000010    4085    f959    4089    ed45    e861    f510    f56f    a299
0000010 205 100 131 371 211 100 105 355 141 350 020 365 157 365 231 242
0000020    d63a    6269    abe0    0aee    b867    21c5    4258    524d