linux - Looping through the content of a file in Bash


Tags : linux, bash, loops, unix, io

Top 5 Answers for linux - Looping through the content of a file in Bash

Answer 1


One way to do it is:

while read p; do
  echo "$p"
done < peptides.txt

As pointed out in the comments, this has the side effects of trimming leading whitespace, interpreting backslash sequences, and skipping the last line if it's missing a terminating linefeed. If these are concerns, you can do:

while IFS="" read -r p || [ -n "$p" ]; do
  printf '%s\n' "$p"
done < peptides.txt
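The difference between the two forms can be seen with a quick sketch; the file name and contents here are invented for the demo:

```shell
# Compare plain `read` with `IFS= read -r` on a line that has
# leading spaces and a backslash sequence (throwaway demo file).
printf '  indented \\t line\n' > /tmp/trim_demo.txt

while read p; do plain="[$p]"; done < /tmp/trim_demo.txt
while IFS= read -r p; do raw="[$p]"; done < /tmp/trim_demo.txt

echo "$plain"   # leading spaces trimmed, backslash consumed by read
echo "$raw"     # line preserved verbatim
rm /tmp/trim_demo.txt
```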

As an exception, if the loop body may itself read from standard input, you can open the file using a different file descriptor:

while read -u 10 p; do
  ...
done 10< peptides.txt

Here, 10 is just an arbitrary number (different from 0, 1, 2).
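A minimal sketch of why this matters: the loop body below consumes the loop's standard input (a here-string, standing in for interactive input) while the file itself is read on descriptor 10. File name and contents are made up for the demo:

```shell
# The outer read uses fd 10; the inner read uses fd 0 (the here-string),
# so the two streams don't interfere with each other.
printf 'ALA\nGLY\n' > /tmp/peptides_demo.txt
result=""
while read -u 10 p; do
  read -r answer                  # reads the here-string, not the file
  result="$result$p=$answer "
done 10< /tmp/peptides_demo.txt <<< $'yes\nno'
echo "$result"
rm /tmp/peptides_demo.txt
```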

Answer 2


cat peptides.txt | while read line
do
  # do something with $line here
done

and the one-liner variant:

cat peptides.txt | while read line; do something_with "$line"; done

These options will skip the last line of the file if there is no trailing line feed.

You can avoid this by the following:

cat peptides.txt | while read line || [[ -n $line ]]; do
  # do something with $line here
done
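A quick check of the last-line behavior, using a throwaway file that deliberately lacks a final newline:

```shell
# 'last' has no terminating linefeed, so a plain read-loop skips it.
printf 'first\nlast' > /tmp/nolf_demo.txt

plain=0
while read -r line; do plain=$((plain+1)); done < /tmp/nolf_demo.txt

guarded=0
while read -r line || [[ -n $line ]]; do guarded=$((guarded+1)); done < /tmp/nolf_demo.txt

echo "plain=$plain guarded=$guarded"   # the guard picks up the final partial line
rm /tmp/nolf_demo.txt
```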
Answer 3


Option 1a: While loop: Single line at a time: Input redirection

#!/bin/bash
filename='peptides.txt'
echo Start
while read p; do
  echo "$p"
done < "$filename"

Option 1b: While loop: Single line at a time:
Open the file, read from a file descriptor (in this case file descriptor #4).

#!/bin/bash
filename='peptides.txt'
exec 4<"$filename"
echo Start
while read -u4 p; do
  echo "$p"
done
exec 4<&-   # close the descriptor when finished
Answer 4


This is no better than the other answers, but it is one more way to get the job done, for files whose lines contain no spaces (see the caveat below). I find that I often need one-liners to dig through lists in text files without the extra step of using separate script files.

for word in $(cat peptides.txt); do echo $word; done 

This format allows me to put it all in one command line. Change the "echo $word" portion to whatever you want, and you can issue multiple commands separated by semicolons. The following example uses the file's contents as arguments to two other scripts you may have written.

for word in $(cat peptides.txt); do $word; $word; done 

Or if you intend to use this like a stream editor (learn sed) you can dump the output to another file as follows.

for word in $(cat peptides.txt); do $word; $word; done > outfile.txt 

I've used these as written above because I have used text files that I've created with one word per line. If you have spaces that you don't want splitting your words/lines, it gets a little uglier, but the same command still works as follows:

OLDIFS=$IFS; IFS=$'\n'; for line in $(cat peptides.txt); do $line; $line; done > outfile.txt; IFS=$OLDIFS 

This just tells the shell to split on newlines only, not spaces, then returns the environment back to what it was previously. At this point, you may want to consider putting it all into a shell script rather than squeezing it all into a single line, though.
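A sketch of the IFS trick in action, using an invented file whose lines contain spaces:

```shell
# With IFS set to newline only, each loop iteration sees a whole line;
# with the default IFS it would see each space-separated word instead.
printf 'one line\nanother line\n' > /tmp/ifs_demo.txt
OLDIFS=$IFS; IFS=$'\n'
count=0
for line in $(cat /tmp/ifs_demo.txt); do
  count=$((count+1))
done
IFS=$OLDIFS
echo "iterations=$count"   # 2 with IFS=$'\n'; the default IFS would give 4
rm /tmp/ifs_demo.txt
```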

Best of luck!

Answer 5


A few more things not covered by other answers:

Reading from a delimited file

# ':' is the delimiter here, and there are three fields on each line in the file
# the IFS set below is restricted to the `read` command; it doesn't affect any other code
while IFS=: read -r field1 field2 field3; do
  # process the fields
  # if the line has fewer than three fields, the missing fields will be set to an empty string
  # if the line has more than three fields, `field3` will get all the remaining values, including the delimiter(s)
done < input.txt
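A concrete run of the delimited read; the file, contents, and field names are invented for the demo:

```shell
# Second line has only two fields, so `city` comes back empty.
printf 'alice:30:nyc\nbob:25\n' > /tmp/fields_demo.txt
out=""
while IFS=: read -r name age city; do
  out="$out($name/$age/$city)"
done < /tmp/fields_demo.txt
echo "$out"
rm /tmp/fields_demo.txt
```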

Reading from the output of another command, using process substitution

while read -r line; do
  # process the line
done < <(command ...)

This approach is better than `command ... | while read -r line; do ...` because the while loop here runs in the current shell rather than in a subshell, as it does with the pipe. See the related post A variable modified inside a while loop is not remembered.
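The difference is easy to demonstrate with a counter (the input lines here are illustrative):

```shell
# With a pipe, the loop runs in a subshell: the increment is lost.
piped=0
printf 'a\nb\nc\n' | while read -r line; do piped=$((piped+1)); done
echo "piped=$piped"    # still 0 in the current shell

# With process substitution, the loop runs in the current shell.
subst=0
while read -r line; do subst=$((subst+1)); done < <(printf 'a\nb\nc\n')
echo "subst=$subst"    # 3
```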

Reading from null-delimited input, for example find ... -print0

while read -r -d '' line; do
  # logic
  # use a second 'read ... <<< "$line"' if we need to tokenize the line
done < <(find /path/to/dir -print0)

Related read: BashFAQ/020 - How can I find and safely handle file names containing newlines, spaces or both?
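A sketch with a throwaway directory containing a file name that has spaces, which would break a whitespace-delimited loop:

```shell
# The null delimiter lets the name with spaces come through intact.
mkdir -p /tmp/null_demo
touch '/tmp/null_demo/file with spaces.txt'
found=""
while IFS= read -r -d '' f; do
  found="$f"
done < <(find /tmp/null_demo -type f -print0)
echo "found: $found"
rm -r /tmp/null_demo
```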

Reading from more than one file at a time

while read -u 3 -r line1 && read -u 4 -r line2; do
  # process the lines
  # note that the loop will end when we reach EOF on either of the files, because of the `&&`
done 3< input1.txt 4< input2.txt

Based on @chepner's answer here:

-u is a bash extension. For POSIX compatibility, each call would look something like read -r X <&3.
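A runnable sketch of the two-file loop, with made-up input files of different lengths to show where it stops:

```shell
# The loop ends at EOF of the shorter file, because of the `&&`.
printf 'a1\na2\n' > /tmp/in1_demo.txt
printf 'b1\nb2\nb3\n' > /tmp/in2_demo.txt
pairs=""
while read -u 3 -r l1 && read -u 4 -r l2; do
  pairs="$pairs$l1-$l2 "
done 3< /tmp/in1_demo.txt 4< /tmp/in2_demo.txt
echo "$pairs"   # b3 is never paired
rm /tmp/in1_demo.txt /tmp/in2_demo.txt
```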

Reading a whole file into an array (Bash versions earlier than 4)

while read -r line; do
  my_array+=("$line")
done < my_file

If the file ends with an incomplete line (newline missing at the end), then:

while read -r line || [[ $line ]]; do
  my_array+=("$line")
done < my_file

Reading a whole file into an array (Bash 4 and later)

readarray -t my_array < my_file

or

mapfile -t my_array < my_file

And then

for line in "${my_array[@]}"; do
  # process the lines
done
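Putting the two steps together, a small end-to-end sketch with an invented demo file:

```shell
# Read the whole file into an array with mapfile, then iterate over it.
printf 'x\ny\nz\n' > /tmp/map_demo.txt
mapfile -t my_array < /tmp/map_demo.txt
joined=""
for line in "${my_array[@]}"; do
  joined="$joined$line,"
done
echo "lines=${#my_array[@]} joined=$joined"
rm /tmp/map_demo.txt
```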
