Explore Public Snippets

Found 944 snippets matching: awk

    public by lbottaro  2744  2  6  1

    Parsing and finding symbolic links in multiple paths

    This bash script lists the symbolic links found in multiple paths (using the wildcard * for nested directories), one level below the main directory, and parses the result. It uses awk to extract the 11th whitespace-separated field of the `ls -l` output, which is the link target.
    ls -l `find ABC*/code/target-*/TARGET -maxdepth 1 -type l -name "*"` | awk '{print $11}'
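A more robust variant of the same idea, sketched here with a throwaway symlink: slicing column 11 out of `ls -l` breaks as soon as a name or target contains spaces, whereas readlink prints the target directly.

```shell
# Demo with a temporary symlink: readlink prints each link's target without
# parsing ls output (the snippet's real paths would replace "$tmp" here).
tmp=$(mktemp -d)
ln -s /etc/hosts "$tmp/mylink"
find "$tmp" -maxdepth 1 -type l -exec readlink {} \;   # -> /etc/hosts
rm -r "$tmp"
```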
    

    public by gwarah  48  0  4  0

    CPF/CNPJ validation in awk/bash

    CPF/CNPJ validation in awk. To test, type in the shell: echo "<cpf or cnpj>" | awk -f cnpj_cpf.awk; echo $? Notes: 1. a list of CPFs or CNPJs may be passed, one per line on standard input; 2. the script exits with status 0 if every CPF/CNPJ is valid, and 1 otherwise. Note: tested on cygwin.
    #!/usr/bin/awk -f
    # File       : cnpj_cpf.awk
    # Purpose    : functions for CPF and CNPJ validation
    # Requires   :
    #     1. SHELL=bash
    #
    # History    :
    #   #version;date;description
    #   0.1.1b; 23/12/2019; return 1 if at least one CPF in the list is invalid
    #   0.1.0b; 23/12/2019; first release
    #

    #
    # Tested: cygwin environment
    #
    
    # CPF validation
    function check_cpf(p_cpf) {
      
        # length must be exactly eleven digits
        if (! ( p_cpf ~ /^[[:digit:]]{11}$/ )) { return FALSE; }
        
        ###
        # CPF validation rule
        ###
        
        # split the CPF into 2 parts
        cpf9digs=substr(p_cpf,1,9);
        cpf2digs=substr(p_cpf,10,2);
        
        # compute the first check digit
        soma=0;
        digv1=0;
        for(p=10;p>=2;p--) {
           dig=substr(cpf9digs,(11-p),1);
           soma+=p*dig;
        }
        mod11soma=soma%11;
        digv1=(mod11soma<2)?0:(11-mod11soma);
            
        # compute the second check digit
        cpf10digs=cpf9digs digv1;
        soma=0;
        digv2=0;
        for(p=11;p>=2;p--) {
           dig=substr(cpf10digs,(12-p),1);
           soma+=p*dig;
        }
        mod11soma=soma%11;
        digv2=(mod11soma<2)?0:(11-mod11soma);
        
        # full check digit pair
        digv=digv1 digv2;
        
        return (( digv == cpf2digs ) ? TRUE : FALSE);
    }
    
    # CNPJ validation
    function check_cnpj(p_cnpj) {
           
        # length must be exactly fourteen digits
        if (! ( p_cnpj ~ /^[[:digit:]]{14}$/ )) { return FALSE; }
        
        ###
        # CNPJ validation rule
        ###
        
        # split the CNPJ into 2 parts
        cnpj12digs=substr(p_cnpj,1,12);
        cnpj2digs=substr(p_cnpj,13,2);
        
        # compute the first check digit
        soma=0;
        digv1=0;
        p=2; # initial weight
        for(i=12;i>=1;i--) {
           dig=substr(cnpj12digs,i,1);
           soma+=p*dig;
           p=(p==9)?2:(p+1);
        }
        mod11soma=soma%11;
        digv1=(mod11soma<2)?0:(11-mod11soma);
      
        # compute the second check digit
        cnpj13digs=cnpj12digs digv1;
        soma=0;
        digv2=0;
        p=2; # initial weight
        for(i=13;i>=1;i--) {
           dig=substr(cnpj13digs,i,1);
           soma+=p*dig;
           p=(p==9)?2:(p+1);
        }
        mod11soma=soma%11;
        digv2=(mod11soma<2)?0:(11-mod11soma);
           
        # full check digit pair
        digv=digv1 digv2;
        
        return (( digv == cnpj2digs ) ? TRUE : FALSE);
    }
    
    #
    # Variables must be declared in this block
    #
    BEGIN {
        # boolean values
        TRUE=1;
        FALSE=0;
        
        # start TRUE (all valid); set to FALSE if any CPF/CNPJ is invalid
        p_retorno=TRUE;
    }
    {
        p_valor=$0;
        p_flag=0;
        
        # if it is a CPF
        if ( p_valor ~ /^[[:digit:]]{11}$/ ) {
            p_flag=1;
            printf "CPF " p_valor " - result: ";
            if ( check_cpf(p_valor) == TRUE ) { print "valid"; }
            else {
                p_retorno=FALSE;
                print "invalid";
            }
        }
        
        # if it is a CNPJ
        if ( p_valor ~ /^[[:digit:]]{14}$/ ) {
            p_flag=1;
            printf "CNPJ " p_valor " - result: ";
            if ( check_cnpj(p_valor) == TRUE ) { print "valid"; }
            else {
                p_retorno=FALSE;
                print "invalid";
            }
        }
        
        # neither a CPF nor a CNPJ
        if ( p_flag == 0 ) {
            p_retorno=FALSE;
            printf "%s is neither a CPF nor a CNPJ\n", p_valor;
        }
    }
    END {
        # exit status: 0 if every CPF/CNPJ was valid, 1 otherwise
        exit ((p_retorno == TRUE) ? 0 : 1);
    }
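The mod-11 check-digit rule that check_cpf implements can be exercised on its own. The 9-digit base below is a sample value; the script recomputes its two check digits with the same weights (10..2, then 11..2) as the function above.

```shell
# Recompute the two CPF check digits for a sample 9-digit base.
awk -v base=529982247 'BEGIN {
    s = 0
    for (p = 10; p >= 2; p--) s += p * substr(base, 11 - p, 1)
    d1 = (s % 11 < 2) ? 0 : 11 - s % 11       # first check digit
    b10 = base d1
    s = 0
    for (p = 11; p >= 2; p--) s += p * substr(b10, 12 - p, 1)
    d2 = (s % 11 < 2) ? 0 : 11 - s % 11       # second check digit
    print base d1 d2                          # -> 52998224725
}'
```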

    public by gwarah  2756  3  6  0

    List (and optionally run) snapped apps

    Shows all installed snapped apps in a zenity menu. The user can choose one and run it. Just run this script: ./mysnaps.sh
    #!/bin/bash
    # program     : mysnaps.sh
    # description : choose one snapped app and run it
    # author      : lp (Luís Pessoa)
    # version     : 0.1.0b
    # dependencies :
    #   1) shell: bash
    #   2) awk, snap and zenity packages installed
    #   3) at least one snapped app installed
    # history     :
    #   lp; 07/02/2018; begin of development
    #   lp; 09/02/2018; first release
    
    ###############
    # functions
    ###############
    
    
    ###############
    # end functions
    ###############
    
    # building snap list header 
    snap_cols="$(snap list | \
    awk '{ if (NR == 1) { for (i = 1; i <= NF; i++)  printf "--column " $i " " }} ')"
    snap_cols="--column check ${snap_cols}"
    
    # building snap list options
    snap_ops=$(snap list | \
    awk '
    BEGIN { }
    {
    if (NR >= 2 ) {
        for (i = 1; i <= NF; i++) {
            
            if ( i == 1 ) {
                if ( $i == "core"  ) break;
                if (NR == 2) {printf "TRUE ";} 
                else {printf "FALSE ";}
            }
            printf $i;
            if ( i == NF ) { print "";}
            else { printf " "; }
       }
    }}')
    
    # snap app selection
    opc=$(zenity --list --text "Snapped apps" --radiolist ${snap_cols} ${snap_ops})
    case $? in
             1)
                    echo "No snap app selected"
                    exit 1
                    ;;
            -1)
                    echo "Error!"
                    exit 1
                    ;;
    esac
    
    # run the selected app
    app_opc=$(echo $opc | awk '{print $1}')
    ${app_opc}
    exit $?

    public by romych78  2989  10  5  0

    Check locked session files by PHP

    // script to check stuck php processes because of locked session files
    // to execute use command # lsof -n | awk -f sess_view.awk
    
    
    /sess_/ {
        load_sessions[$9]++;
        if (load_sessions[$9]>max_sess_link_count){
            max_sess_link_count = load_sessions[$9];
            max_sess_link_name = $9;
        };
    
        if ($4 ~ /.*uW$/ ){ locked_id[$9]=$2 };
    }
    
    END {
    
        print max_sess_link_count, max_sess_link_name,locked_id[max_sess_link_name];
    
        if (locked_id[max_sess_link_name] && max_sess_link_count>3) {
            #    r=system("kill "locked_id[max_sess_link_name]);
            #    if (!r) print "Locking process "locked_id[max_sess_link_name]" killed"
            system("ls -al "max_sess_link_name);
        }
    
    }            
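A condensed version of the same accounting, run on two mocked `lsof -n` lines (the COMMAND PID USER FD TYPE DEVICE SIZE NODE NAME layout that the script's $4 and $9 references assume):

```shell
# Two mocked lsof lines: the busiest session file and the PID holding the
# exclusive write lock (FD ending in uW) are reported, as in sess_view.awk.
printf '%s\n' \
  'php-fpm 101 www 5uW REG 8,1 0 12 /var/lib/php/sess_abc' \
  'php-fpm 102 www 6r REG 8,1 0 12 /var/lib/php/sess_abc' |
awk '/sess_/ {
    n[$9]++;
    if (n[$9] > max) { max = n[$9]; name = $9 };
    if ($4 ~ /uW$/) lock[$9] = $2
}
END { print max, name, lock[name] }'
# -> 2 /var/lib/php/sess_abc 101
```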

    external by ozzyjohnson  262  0  3  0

    awk cisco IOS cli script

    awk cisco IOS cli script: cdp_neighbor.awk
    BEGIN {
    device_id = ""
    entry_address = ""
    ip_address = ""
    platform = ""
    capabilities = ""
    interface = ""
    port_id = ""
    holdtime = ""
    version = ""
    advertisement_version = ""
    duplex = ""
    power_drawn = ""
    power_request_id = ""
    power_management_id = ""
    power_request_levels_are = ""
    management_address = ""
     
    printf("device_id;ip_address;interface\n");
    }
     
    #printf("device_id;entry_address;ip_address;platform;capabilities;interface;port id;holdtime;version;advertisement_version;duplex;power_drawn;power_request_id;power_management_id;power_request_levels_are;management_address\n");
    #}
     
     
    #-------------------------
    #Device ID: xxx
    #Entry address(es): 
    #  IP address: x.x.x.x
    #Platform: cisco AIR-AP1242AG-A-K9   ,  Capabilities: Trans-Bridge 
    #Interface: FastEthernet0/43,  Port ID (outgoing port): FastEthernet0
    #Holdtime : 148 sec
    #
    #Version :
    #Cisco IOS Software, C1240 Software (C1240-K9W8-M), Version 12.4(21a)JHB1, RELEASE SOFTWARE (fc1)
    #Technical Support: http://www.cisco.com/techsupport
    #Copyright (c) 1986-2010 by Cisco Systems, Inc.
    #Compiled Wed 11-Aug-10 15:55 by prod_rel_team
    #
    #advertisement version: 2
    #Duplex: full
    #Power drawn: 15.000 Watts
    #Power request id: 6605, Power management id: 5
    #Power request levels are:15000 12960 11560 5800 0 
    #Management address(es): 
    #
    #-------------------------
     
     
    function trim(field) {
    #gsub(/\s/, "", field)
    gsub(/^[ \t]+|[ \t]+$/,"",field)
    return field;
    }
     
    /^.*Device ID:/ 	{split($0,a,":");device_id = a[2];}
    #/^Entry address(es):/  	{split($0,a,"=");entry_address = a[2];}
    /^.*IP address:/  	{split($0,a,":");ip_address = a[2];}
    #/^Platform:/	{split($0,a,":");platform = a[2];}
    /^Interface:/	{split($0,a,":");interface = a[2];}
     
     
    #/^.*Dirs :/ {t_dirs = substr($0,13,9);c_dirs = substr($0,23,9);s_dirs = substr($0,33,9);m_dirs = substr($0,43,9);f_dirs = substr($0,53,9);e_dirs = substr($0,63,9);}
    #/^.*Files :.*[0-9]/ {t_files = substr($0,13,9);c_files = substr($0,23,9);s_files = substr($0,33,9);m_files = substr($0,43,9);f_files = substr($0,53,9);e_files = substr($0,63,9);}
    #/^.*Bytes :/ {t_bytes = substr($0,13,9);c_bytes = substr($0,23,9);s_bytes = substr($0,33,9);m_bytes = substr($0,43,9);f_bytes = substr($0,53,9);e_bytes = substr($0,63,9);}
    #/^.*Times :/ {t_times = substr($0,13,9);c_times = substr($0,23,10);f_times = substr($0,53,9);e_times = substr($0,63,9);}
    #/^.*Speed :/ {speed = $3}
    #/^.*Ended :/ {split($0,a," :");ended = a[2];
     
    #c_dirs =  trim(c_dirs);
     
    /^-----------------------/ {
     
    #device_id =  trim(device_id);
    #ip_address =  trim(ip_address);
    #interface =  trim(interface);
     
    printf("%s;%s;%s\n",device_id,ip_address,interface)
    device_id = ""
    ip_address = ""
    interface = ""
    }
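The three active rules boil down to this, shown on a mocked CDP entry; note the captured values keep the space after the colon because trim() is never applied.

```shell
# One mocked "show cdp neighbors detail" entry; the dashed separator
# triggers the print, as in the script above.
printf '%s\n' \
  'Device ID: sw1' \
  '  IP address: 10.0.0.2' \
  'Interface: FastEthernet0/43' \
  '-------------------------' |
awk '/Device ID:/  { split($0, a, ":"); device_id = a[2] }
     /IP address:/ { split($0, a, ":"); ip_address = a[2] }
     /^Interface:/ { split($0, a, ":"); interface = a[2] }
     /^-----/ { printf("%s;%s;%s\n", device_id, ip_address, interface) }'
# -> " sw1; 10.0.0.2; FastEthernet0/43"
```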
    
    

    external by Rafael Kitover  723  2  3  0

    fast uudecode in GNU awk and some others (like OpenBSD awk)

    fast uudecode in GNU awk and some others (like OpenBSD awk): uudecode_gawk.awk
    #!/bin/sh
    
    # uudecode in GNU awk (and some others, like OpenBSD) decodes stdin to stdout
    #
    # Copyright (c) 2014, Rafael Kitover <rkitover@gmail.com>
    #
    # Redistribution and use in source and binary forms, with or without
    # modification, are permitted provided that the following conditions are met:
    # * Redistributions of source code must retain the above copyright
    # notice, this list of conditions and the following disclaimer.
    # * Redistributions in binary form must reproduce the above copyright
    # notice, this list of conditions and the following disclaimer in the
    # documentation and/or other materials provided with the distribution.
    #
    # THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDER ``AS IS'' AND ANY
    # EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
    # WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
    # DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER BE LIABLE FOR ANY
    # DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
    # (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
    # LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
    # ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    # (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
    # SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
    
    gawk '
    BEGIN {
        charset=" !\"#$%&'\''()*+,-./0123456789:;<=>?@ABCDEFGHIJKLMNOPQRSTUVWXYZ[\\]^_";
    }
    
    function ord(char) {
        return index(charset, char) + 32 - 1;
    }
    
    /^begin / { next }
    /^end$/   { exit }
    
    {
        cnt = substr($0, 1, 1);
    
        if (cnt == "`") next;
    
        cnt = ord(cnt) - 32;
    
        enc = substr($0, 2, length($0) - 1);
    
        chars = 0;
        pos   = 1;
    
        while (chars < cnt) {
            grp = substr(enc, pos, 4);
            gsub(/`/, " ", grp); # zero bytes
    
            c1 = ord(substr(grp, 1, 1)) - 32;
            c2 = ord(substr(grp, 2, 1)) - 32;
            c3 = ord(substr(grp, 3, 1)) - 32;
            c4 = ord(substr(grp, 4, 1)) - 32;
    
            char_val = or(c4, or(or(lshift(c3, 6), lshift(c2, 12)), lshift(c1, 18)));
    
            char[1] = sprintf("%c", rshift(and(char_val, 16711680), 16));
            char[2] = sprintf("%c", rshift(and(char_val, 65280),     8));
            char[3] = sprintf("%c", and(char_val, 255));
    
            for (i = 1; i <= 3 && chars < cnt; i++) {
                printf("%s", char[i]);
    
                chars++;
            }
    
            pos += 4;
        }
    }
    '
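To see the packing/unpacking at work without gawk's bitwise built-ins, the same computation can be done with plain arithmetic; the group "0V%T" is the uuencoding of "Cat".

```shell
# "0V%T" maps to the 6-bit values 16, 54, 5, 52 (ASCII code minus 32);
# the multiplications/divisions below are the <<18/<<12/<<6 shifts and
# the and/rshift masking from the script, in any-awk arithmetic.
awk 'BEGIN {
    split("16 54 5 52", c, " ")
    v = c[1] * 262144 + c[2] * 4096 + c[3] * 64 + c[4]
    printf("%c%c%c\n", int(v / 65536) % 256, int(v / 256) % 256, v % 256)
}'
# -> Cat
```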
    
    
    

    external by Rohit Prajapati  172  0  2  0

    Transpose from column to rows. To divide into batches. Example usage to avoid error "argument list too long": awk -f ~/codebase/my_files/scripts/test.awk test2.csv | while read line ; do echo "my_command: " $line ; done

    Transpose from column to rows. To divide into batches. Example usage to avoid error "argument list too long": awk -f ~/codebase/my_files/scripts/test.awk test2.csv | while read line ; do echo "my_command: " $line ; done: transpose_into_batches.awk
    # Usage: Transpose from column to rows.
    
    # For e.g.
    #
    # Input 1 -
    # 	abc_1
    # 	abc_2
    # 	abc_3
    # 	abc_4
    # 	abc_5
    # 	abc_6
    # 	abc_7
    # 	abc_8
    # 	abc_9
    # 
    # Input 2 -
    # 	abc_1,abc_2
    # 	abc_3,abc_4
    # 	abc_5,abc_6
    # 	abc_7,abc_8
    # 	abc_9
    # 
    # Output for both (batches of 4; the script below uses N=10) -
    # 	abc_1,abc_2,abc_3,abc_4
    # 	abc_5,abc_6,abc_7,abc_8
    # 	abc_9
    
    # Example usage: awk -f ~/codebase/my_files/scripts/test.awk test2.csv | while read line ; do echo "my_command: " $line ; done
    
    BEGIN {
    	FS=",";
    	OFS=",";
    	N=10
    	count=0;
    }
    
    {
    	for (i = 1 ; i <= NF ; i++) {
    		count++;
    		if (count == 1) {
    			printf("%s",$i);
    		} else if (count % N == 1) {
    			printf("\n%s",$i);
    		} else {
    			printf("%s%s",OFS,$i);
    		}
    	}
    }
    
    END {
    	print
    }
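The batching loop can be checked inline with a smaller N (3 here instead of the script's 10):

```shell
# Same logic as transpose_into_batches.awk, with N=3 for a quick check.
printf '%s\n' 'a,b' 'c,d' 'e' |
awk 'BEGIN { FS = OFS = ","; N = 3; count = 0 }
{
    for (i = 1; i <= NF; i++) {
        count++
        if (count == 1)          printf("%s", $i)
        else if (count % N == 1) printf("\n%s", $i)
        else                     printf("%s%s", OFS, $i)
    }
}
END { print "" }'
# -> a,b,c
#    d,e
```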
    
    
    

    external by Ryan Tam  3  0  1  0

    A faster paver autocomplete based on https://gist.github.com/gregorynicholas/4489161 but using awk/sed on pavement* instead of awk on `paver -h`

    A faster paver autocomplete based on https://gist.github.com/gregorynicholas/4489161 but using awk/sed on pavement* instead of awk on `paver -h`: fast_paver_autocomplete
    _paver()
    {
      local cur
      # Tasks that shows up automatically in paver help
      local paver_misctasks="generate_setup minilib help"
      COMPREPLY=()
      # Variable to hold the current word
      cur="${COMP_WORDS[COMP_CWORD]}"
      # Build a list of the available tasks from: `paver --help --quiet`
      local cmds=$(awk 'BEGIN { task=0; }
        {
            if ($1 ~ /@task/) {
                task=1;
            }
            if ($0 ~ /^def [^_].*\):/ && task)  {
                task=0;
                print($0);
            }
        }
        ' pavement* | sed 's/^def \(.*\)(.*):.*$/\1/')
      # Generate possible matches and store them in the
      # array variable COMPREPLY
      COMPREPLY=($(compgen -W "${cmds} ${paver_misctasks}" $cur))
    }
    complete -F _paver paver
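The awk/sed pipeline at the heart of the completion can be exercised on a mocked pavement file: only def lines that directly follow a @task decorator survive, and sed reduces them to the bare task name.

```shell
# Mocked pavement content piped through the task-extraction pipeline above.
printf '%s\n' '@task' 'def build(options):' 'def _private(x):' 'def clean():' |
awk 'BEGIN { task = 0 }
{
    if ($1 ~ /@task/) task = 1
    if ($0 ~ /^def [^_].*\):/ && task) { task = 0; print $0 }
}' | sed 's/^def \(.*\)(.*):.*$/\1/'
# -> build
```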
    
    
    

    external by webb  148  0  2  0

    GNU Awk (gawk) script to convert an Excel-generated CSV file to a simple XML format

    GNU Awk (gawk) script to convert an Excel-generated CSV file to a simple XML format: csv-to-xml.awk
    BEGIN {
      printf "<?xml version=\"1.0\" encoding=\"US-ASCII\" standalone=\"yes\"?>\n"
      printf "<file xmlns=\"http://example.org/csv-to-xml\">\n"
      FPAT = "([^,]*)|(\"[^\"]+\")"
      RS = "\n"
    }
    
    {
      printf "<row>\n"
      for (i = 1; i <= NF; i++) {
        if (match($i, /^"(.*)"$/, array))
          $i = array[1]
        gsub(/&/, "\\&amp;", $i)
        gsub(/</, "\\&lt;", $i)
        gsub(/>/, "\\&gt;", $i)
        gsub(/""/, "\\&quot;", $i)
        gsub(/'/, "\\&apos;", $i)
        printf("  <column>%s</column>\n", $i)
      }
      printf "</row>\n"
    }
    
    END {
      printf "</file>\n"
    }
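The entity-escaping passes can be tried on their own in any awk (the CSV field splitting itself needs gawk, for FPAT and the three-argument match):

```shell
# One sample field through the escaping passes; order matters: & is escaped
# first, so the & introduced by &lt;/&gt; is not escaped a second time.
echo 'a<b & c>d' |
awk '{ gsub(/&/, "\\&amp;"); gsub(/</, "\\&lt;"); gsub(/>/, "\\&gt;"); print }'
# -> a&lt;b &amp; c&gt;d
```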
    
    
    

    external by chunyan  91  0  1  0

    awk cheat sheet

    awk cheat sheet: gistfile1.txt
    HANDY ONE-LINE SCRIPTS FOR AWK                               30 April 2008
    Compiled by Eric Pement - eric [at] pement.org               version 0.27
    
    Latest version of this file (in English) is usually at:
       http://www.pement.org/awk/awk1line.txt
    
    This file will also be available in other languages:
       Chinese  - http://ximix.org/translation/awk1line_zh-CN.txt   
    
    USAGE:
    
       Unix: awk '/pattern/ {print "$1"}'    # standard Unix shells
    DOS/Win: awk '/pattern/ {print "$1"}'    # compiled with DJGPP, Cygwin
             awk "/pattern/ {print \"$1\"}"  # GnuWin32, UnxUtils, Mingw
    
    Note that the DJGPP compilation (for DOS or Windows-32) permits an awk
    script to follow Unix quoting syntax '/like/ {"this"}'. HOWEVER, if the
    command interpreter is CMD.EXE or COMMAND.COM, single quotes will not
    protect the redirection arrows (<, >) nor do they protect pipes (|).
    These are special symbols which require "double quotes" to protect them
    from interpretation as operating system directives. If the command
    interpreter is bash, ksh or another Unix shell, then single and double
    quotes will follow the standard Unix usage.
    
    Users of MS-DOS or Microsoft Windows must remember that the percent
    sign (%) is used to indicate environment variables, so this symbol must
    be doubled (%%) to yield a single percent sign visible to awk.
    
    If a script will not need to be quoted in Unix, DOS, or CMD, then I
    normally omit the quote marks. If an example is peculiar to GNU awk,
    the command 'gawk' will be used. Please notify me if you find errors or
    new commands to add to this list (total length under 65 characters). I
    usually try to put the shortest script first. To conserve space, I
    normally use '1' instead of '{print}' to print each line. Either one
    will work.
    
    FILE SPACING:
    
     # double space a file
     awk '1;{print ""}'
     awk 'BEGIN{ORS="\n\n"};1'
    
     # double space a file which already has blank lines in it. Output file
     # should contain no more than one blank line between lines of text.
     # NOTE: On Unix systems, DOS lines which have only CRLF (\r\n) are
     # often treated as non-blank, and thus 'NF' alone will return TRUE.
     awk 'NF{print $0 "\n"}'
    
     # triple space a file
     awk '1;{print "\n"}'
    
    NUMBERING AND CALCULATIONS:
    
     # precede each line by its line number FOR THAT FILE (left alignment).
     # Using a tab (\t) instead of space will preserve margins.
     awk '{print FNR "\t" $0}' files*
    
     # precede each line by its line number FOR ALL FILES TOGETHER, with tab.
     awk '{print NR "\t" $0}' files*
    
     # number each line of a file (number on left, right-aligned)
     # Double the percent signs if typing from the DOS command prompt.
     awk '{printf("%5d : %s\n", NR,$0)}'
    
     # number each line of file, but only print numbers if line is not blank
     # Remember caveats about Unix treatment of \r (mentioned above)
     awk 'NF{$0=++a " :" $0};1'
     awk '{print (NF? ++a " :" :"") $0}'
    
     # count lines (emulates "wc -l")
     awk 'END{print NR}'
    
     # print the sums of the fields of every line
     awk '{s=0; for (i=1; i<=NF; i++) s=s+$i; print s}'
    
     # add all fields in all lines and print the sum
     awk '{for (i=1; i<=NF; i++) s=s+$i}; END{print s}'
    
     # print every line after replacing each field with its absolute value
     awk '{for (i=1; i<=NF; i++) if ($i < 0) $i = -$i; print }'
     awk '{for (i=1; i<=NF; i++) $i = ($i < 0) ? -$i : $i; print }'
    
     # print the total number of fields ("words") in all lines
     awk '{ total = total + NF }; END {print total}' file
    
     # print the total number of lines that contain "Beth"
     awk '/Beth/{n++}; END {print n+0}' file
    
     # print the largest first field and the line that contains it
     # Intended for finding the longest string in field #1
     awk '$1 > max {max=$1; maxline=$0}; END{ print max, maxline}'
    
     # print the number of fields in each line, followed by the line
     awk '{ print NF ":" $0 } '
    
     # print the last field of each line
     awk '{ print $NF }'
    
     # print the last field of the last line
     awk '{ field = $NF }; END{ print field }'
    
     # print every line with more than 4 fields
     awk 'NF > 4'
    
     # print every line where the value of the last field is > 4
     awk '$NF > 4'
    
    STRING CREATION:
    
     # create a string of a specific length (e.g., generate 513 spaces)
     awk 'BEGIN{while (a++<513) s=s " "; print s}'
    
     # insert a string of specific length at a certain character position
     # Example: insert 49 spaces after column #6 of each input line.
     gawk --re-interval 'BEGIN{while(a++<49)s=s " "};{sub(/^.{6}/,"&" s)};1'
    
    ARRAY CREATION:
    
     # These next 2 entries are not one-line scripts, but the technique
     # is so handy that it merits inclusion here.
     
     # create an array named "month", indexed by numbers, so that month[1]
     # is 'Jan', month[2] is 'Feb', month[3] is 'Mar' and so on.
     split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", month, " ")
    
     # create an array named "mdigit", indexed by strings, so that
     # mdigit["Jan"] is 1, mdigit["Feb"] is 2, etc. Requires "month" array
     for (i=1; i<=12; i++) mdigit[month[i]] = i
    
    TEXT CONVERSION AND SUBSTITUTION:
    
     # IN UNIX ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
     awk '{sub(/\r$/,"")};1'   # assumes EACH line ends with Ctrl-M
    
     # IN UNIX ENVIRONMENT: convert Unix newlines (LF) to DOS format
     awk '{sub(/$/,"\r")};1'
    
     # IN DOS ENVIRONMENT: convert Unix newlines (LF) to DOS format
     awk 1
    
     # IN DOS ENVIRONMENT: convert DOS newlines (CR/LF) to Unix format
     # Cannot be done with DOS versions of awk, other than gawk:
     gawk -v BINMODE="w" '1' infile >outfile
    
     # Use "tr" instead.
     tr -d \r <infile >outfile            # GNU tr version 1.22 or higher
    
     # delete leading whitespace (spaces, tabs) from front of each line
     # aligns all text flush left
     awk '{sub(/^[ \t]+/, "")};1'
    
     # delete trailing whitespace (spaces, tabs) from end of each line
     awk '{sub(/[ \t]+$/, "")};1'
    
     # delete BOTH leading and trailing whitespace from each line
     awk '{gsub(/^[ \t]+|[ \t]+$/,"")};1'
     awk '{$1=$1};1'           # also removes extra space between fields
    
     # insert 5 blank spaces at beginning of each line (make page offset)
     awk '{sub(/^/, "     ")};1'
    
     # align all text flush right on a 79-column width
     awk '{printf "%79s\n", $0}' file*
    
     # center all text on a 79-character width
     awk '{l=length();s=int((79-l)/2); printf "%"(s+l)"s\n",$0}' file*
    
     # substitute (find and replace) "foo" with "bar" on each line
     awk '{sub(/foo/,"bar")}; 1'           # replace only 1st instance
     gawk '{$0=gensub(/foo/,"bar",4)}; 1'  # replace only 4th instance
     awk '{gsub(/foo/,"bar")}; 1'          # replace ALL instances in a line
    
     # substitute "foo" with "bar" ONLY for lines which contain "baz"
     awk '/baz/{gsub(/foo/, "bar")}; 1'
    
     # substitute "foo" with "bar" EXCEPT for lines which contain "baz"
     awk '!/baz/{gsub(/foo/, "bar")}; 1'
    
     # change "scarlet" or "ruby" or "puce" to "red"
     awk '{gsub(/scarlet|ruby|puce/, "red")}; 1'
    
     # reverse order of lines (emulates "tac")
     awk '{a[i++]=$0} END {for (j=i-1; j>=0;) print a[j--] }' file*
    
     # if a line ends with a backslash, append the next line to it (fails if
     # there are multiple lines ending with backslash...)
     awk '/\\$/ {sub(/\\$/,""); getline t; print $0 t; next}; 1' file*
    
     # print and sort the login names of all users
     awk -F ":" '{print $1 | "sort" }' /etc/passwd
    
     # print the first 2 fields, in opposite order, of every line
     awk '{print $2, $1}' file
    
     # switch the first 2 fields of every line
     awk '{temp = $1; $1 = $2; $2 = temp}' file
    
     # print every line, deleting the second field of that line
     awk '{ $2 = ""; print }'
    
     # print in reverse order the fields of every line
     awk '{for (i=NF; i>0; i--) printf("%s ",$i);print ""}' file
    
     # concatenate every 5 lines of input, using a comma separator
     # between fields
     awk 'ORS=NR%5?",":"\n"' file
    
    SELECTIVE PRINTING OF CERTAIN LINES:
    
     # print first 10 lines of file (emulates behavior of "head")
     awk 'NR < 11'
    
     # print first line of file (emulates "head -1")
     awk 'NR>1{exit};1'
    
     # print the last 2 lines of a file (emulates "tail -2")
     awk '{y=x "\n" $0; x=$0};END{print y}'
    
     # print the last line of a file (emulates "tail -1")
     awk 'END{print}'
    
     # print only lines which match regular expression (emulates "grep")
     awk '/regex/'
    
     # print only lines which do NOT match regex (emulates "grep -v")
     awk '!/regex/'
    
     # print any line where field #5 is equal to "abc123"
     awk '$5 == "abc123"'
    
     # print only those lines where field #5 is NOT equal to "abc123"
     # This will also print lines which have less than 5 fields.
     awk '$5 != "abc123"'
     awk '!($5 == "abc123")'
    
     # matching a field against a regular expression
     awk '$7  ~ /^[a-f]/'    # print line if field #7 matches regex
     awk '$7 !~ /^[a-f]/'    # print line if field #7 does NOT match regex
    
     # print the line immediately before a regex, but not the line
     # containing the regex
     awk '/regex/{print x};{x=$0}'
     awk '/regex/{print (NR==1 ? "match on line 1" : x)};{x=$0}'
    
     # print the line immediately after a regex, but not the line
     # containing the regex
     awk '/regex/{getline;print}'
    
     # grep for AAA and BBB and CCC (in any order on the same line)
     awk '/AAA/ && /BBB/ && /CCC/'
    
     # grep for AAA and BBB and CCC (in that order)
     awk '/AAA.*BBB.*CCC/'
    
     # print only lines of 65 characters or longer
     awk 'length > 64'
    
     # print only lines of less than 65 characters
     awk 'length < 64'
    
     # print section of file from regular expression to end of file
     awk '/regex/,0'
     awk '/regex/,EOF'
    
     # print section of file based on line numbers (lines 8-12, inclusive)
     awk 'NR==8,NR==12'
    
     # print line number 52
     awk 'NR==52'
     awk 'NR==52 {print;exit}'          # more efficient on large files
    
     # print section of file between two regular expressions (inclusive)
     awk '/Iowa/,/Montana/'             # case sensitive
    
    SELECTIVE DELETION OF CERTAIN LINES:
    
     # delete ALL blank lines from a file (same as "grep '.' ")
     awk NF
     awk '/./'
    
     # remove duplicate, consecutive lines (emulates "uniq")
     awk 'a !~ $0; {a=$0}'
    
     # remove duplicate, nonconsecutive lines
     awk '!a[$0]++'                     # most concise script
     awk '!($0 in a){a[$0];print}'      # most efficient script
    
    CREDITS AND THANKS:
    
    Special thanks to the late Peter S. Tillier (U.K.) for helping me with
    the first release of this FAQ file, and to Daniel Jana, Yisu Dong, and
    others for their suggestions and corrections.
    
    For additional syntax instructions, including the way to apply editing
    commands from a disk file instead of the command line, consult:
    
      "sed & awk, 2nd Edition," by Dale Dougherty and Arnold Robbins
      (O'Reilly, 1997)
    
      "UNIX Text Processing," by Dale Dougherty and Tim O'Reilly (Hayden
      Books, 1987)
    
      "GAWK: Effective awk Programming," 3d edition, by Arnold D. Robbins
      (O'Reilly, 2003) or at http://www.gnu.org/software/gawk/manual/
    
    To fully exploit the power of awk, one must understand "regular
    expressions." For detailed discussion of regular expressions, see
    "Mastering Regular Expressions, 3d edition" by Jeffrey Friedl (O'Reilly,
    2006).
    
    The info and manual ("man") pages on Unix systems may be helpful (try
    "man awk", "man nawk", "man gawk", "man regexp", or the section on
    regular expressions in "man ed").
    
    USE OF '\t' IN awk SCRIPTS: For clarity in documentation, I have used
    '\t' to indicate a tab character (0x09) in the scripts.  All versions of
    awk should recognize this abbreviation.
    
    #---end of file---
    
    