GNU Parallel spends 2-10 ms of overhead per job. This can be lowered a bit by using -u, but then you may get output from different jobs mixed. GNU Parallel is not ideal if your jobs are in the ms range and runtime matters: the overhead will often be too big. You can spread the overhead to multiple cores by running mu...
I created this script out of boredom with the sole purpose of using/testing GNU Parallel, so I know it's not particularly useful or optimized, but I have a script that will calculate all prime numbers up to n:
#!/usr/bin/env bash
isprime () {
  local n=$1
  ((n==1)) && return 1
  for ((i=2;i<n;i++)); do
    if...
How to optimize GNU parallel for this use?
Don't reinvent the wheel. You can use pigz, a parallel implementation of gzip, which should be in your distribution's repositories. If it isn't, you can get it from here. Once you've installed pigz, use it as you would gzip:
pigz *.txt
I tested this on 5 30M files created using for i in {1..5}; do head -c 50M /dev/urandom...
I am looking to speed up a gzip process. (The server is AIX 7.1.) More specifically, the current implementation is gzip *.txt and it takes up to 1 h to complete. (The extracted files are quite big and we have a total of 10 files.) Question: Will it be more efficient to run
pids=""
gzip file1.txt & pids+=" $!"
gzip file2.txt & p...
gzip *.txt vs gzip test.txt & gzip test2.txt &
The printf call would result in one or more write(2) system calls, and the order in which they are processed would be the actual order of the output. One or more, because it depends on the buffering inside the C library. With line-buffered output (going to the terminal), you'd likely get two write calls, one for the initial newline...
The following C program is supposed to illustrate a race condition between child and parent processes:
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>

int main() { fork(); printf("\n 1234567890 \n"); return 0; }
When my friends execute it (on Ubuntu), they get the expected output, which is jumbled...
Race Condition not working on Arch Linux
The important part is having a way to merge the output of your several openssl commands. I believe a FIFO would solve your problem. Try this:
mkfifo foo
openssl <whatever your command is> > foo &
openssl <whatever your command is> > foo &
openssl <whatever your command is> > foo &
dd if=foo of=/dev/sda bs=4M
EDIT: Add...
I am using @tremby's great idea to fill a disk with random data. This involves piping openssl, which is encrypting a lot of zeros, to dd (bs=4M). I'm maxing out the single core on which this is being run (I have 7 more), and I'm nowhere near maxing out my I/O. I'm looking for a way to parallelize the input to dd...
Parallelize openssl as input to dd
Look at the CONCURRENCY variable in /etc/init.d/rc; you have several choices. When set to makefile, the init process does it in parallel. There are different comments depending on your distribution:
#
# Check if we are able to use make like booting. It require the
# insserv package to be enabled. Boot concurrency a...
I was wondering if I could manage to initialize drivers, services etc. (all these jobs what Linux does during startup) in parallel instead of sequentially. I want to aggressively lower the boot time. I know some services depend on each other but to make an easy example: during probing the network devices, it shall take...
Can I force Linux to boot its initializations parallel? [closed]
If you use GNU Parallel you can do one of these:
parallel cp {} destination/folder/ :::: filelist
parallel -a filelist cp {} destination/folder/
cat filelist | parallel cp {} destination/folder/
Consider spending 20 minutes on reading chapter 1+2 of the GNU Parallel 2018 book (print: http://www.lulu.com/shop/ole-tange/...
Let's say I have a file listing the path of multiple files like the following:
/home/user/file1.txt
/home/user/file2.txt
/home/user/file3.txt
/home/user/file4.txt
/home/user/file5.txt
/home/user/file6.txt
/home/user/file7.txt
Let's also say that I want to copy those files in parallel 3 by 3. I know that with the comman...
How can I use the parallel command while getting parameters dynamically from a file?
Something like this:
gpus=2
find files | parallel -j +$gpus '{= $_ = slot() > '$gpus' ? "foo" : "bar" =}' {}
Less scary:
parallel -j +$gpus '{= if(slot() > '$gpus') { $_ = "foo" } else { $_ = "bar" } =}' {}
-j +$gpus   Run one job per CPU thread + $gpus
{= ... =}   Use perl code to s...
I have a large dataset (>200k files) that I would like to process (convert files into another format). The algorithm is mostly single-threaded, so it would be natural to use parallel processing. However, I want to do an unusual thing. Each file can be converted using one of two methods (CPU- and GPU-based), and I woul...
Process multiple inputs with multiple equivalent commands (multiple thread pools) in GNU parallel
I do not know ngram-merge, so I use cat:
n=$(ls | wc -l)
while [ $n -gt 1 ]; do parallel -N2 '[ -z "{2}" ] || (cat {1} {2} > '$n'.{#} && rm -r {} )' ::: *; n=$(ls | wc -l); done
But it probably looks like this:
n=$(ls | wc -l)
while [ $n -gt 1 ]; do parallel -N2 '[ -z "{2}" ] || ( /vol/customopt/lamachine.stable/b...
I'd like to combine a bunch of language model (LM) count files using SRILM's ngram-merge program. With that program, I could combine a whole directory of count files, e.g. ngram-merge -write combined_file -- folder/*. However, with my amount of data it would run for days, so I'd love to merge the files in parallel! Th...
Pairwise merging of files in parallel until only 1 file left over
It depends on how you create the subshell. ( command ) will run command in a subshell and wait for the subshell to complete before continuing. command & will run command in a subshell in the background; the shell continues on to the next command without waiting for the subshell to finish. This may be used for parallel...
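A minimal sketch of the two behaviours (bash; the letters only record the order in which the shell gets past each point):

```shell
# ( command ) waits; command & does not. Track the order in a variable.
order=""
( sleep 0.2 )   # subshell in parentheses: the shell waits for it to finish
order+="A"      # so A is recorded only after the subshell completes
sleep 0.2 &     # backgrounded subshell: the shell continues immediately
order+="B"      # B is recorded while that sleep is still running
wait            # now block until the background job is done
order+="C"
echo "$order"   # ABC
```

The `wait` at the end is what turns the backgrounded form into usable parallelism: fork several jobs with `&`, then `wait` for all of them.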
This question follows from this one and one of its answers' recommendation to read Linuxtopia - Chapter 20. Subshells. I'm a bit confused by this statement at the Linuxtopia site: "subshells let the script do parallel processing, in effect executing multiple subtasks simultaneously." https://www.linuxtopia.org/online_boo...
Are subshells run in parallel by default?
Something like this:
# Your variable initialization
readonly FOLDER_LOCATION=/export/home/username/pooking/primary
readonly MACHINES=(testMachineB testMachineC)
PARTITIONS=(0 3 5 7 9 11 13 15 17 19 21 23 25 27 29) # this will have more file numbers around 400
dir1=/data/snapshot/20140317
# delete all the files first
fin...
I am trying to copy files from testMachineB and testMachineC into testMachineA as I am running my shell script on testMachineA. If the file is not there in testMachineB, then it should be there in testMachineC for sure. So I will try to copy file from testMachineB first, if it is not there in testMachineB then I will ...
How to split the array in set of five files and download them in parallel?
To debug this I suggest you first run with something simpler than gdalmerge_and_clean. Try:
seq 100 | parallel 'seq {} 100000000 | gzip | wc -c'
Does this correctly run one job per CPU thread?
seq 100 | parallel -j 95% 'seq {} 100000000 | gzip | wc -c'
Does this correctly run 19 jobs for every 20 CPU threads? ...
How can I get reasonable parallelisation on multi-core nodes without saturating resources? As in many other similar questions, the question is really how to learn to tweak GNU Parallel to get reasonable performance. In the following example, I can't get to run processes in parallel without saturating resources or ever...
GNU Parallel with -j -N still uses one CPU
Some things you're missing: Shell variables are stored in shell memory, i.e., the memory of the shell process. Most commands that you run from a shell are run in a child process (or processes). The only exceptions are "built-in commands". Asynchronous commands are always run in a child process, even if they don't run an...
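A small sketch of the consequence: an assignment made in a background (child) process never propagates back to the parent shell, while the same assignment in the current shell does.

```shell
# Background jobs get a copy of the shell's variables; changes stay in the child.
x=""
append() { x="$x$1"; }
append 1 &          # runs in a child process: only the child's copy of x changes
wait
after_bg="$x"        # still empty in the parent
append 2             # runs in the current shell: the parent's x changes
after_fg="$x"        # now "2"
echo "background: '$after_bg'  foreground: '$after_fg'"
```

This is exactly why the `appendnum $no &` loop in the question leaves `x` empty: every `&` job appended to its own private copy.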
I am trying to understand parallel processing in shell scripting and sequentially appending values deterministically (not in random order) to the output, through a simple example. Below is the code snippet:
x=""
appendnum() { num=$1; x=`echo $x$num`; }
for no in {0..10}
do
  appendnum $no &
done
wait $(jobs -rp)
echo ...
Shell parallel processing: appending values
Adjust -jXXX% as needed:
PARALLEL=-j200%
export PARALLEL
arin() { # to get network id from arin.net
  i="$@"
  xidel http://whois.arin.net/rest/ip/$i -e "//table/tbody/tr[3]/td[2] " | sed 's/\/[0-9]\{1,2\}/\n/g'
}
export -f arin
iptrac() { # to get other information from ip-tracker.org
  j="$@"
  xide...
I have a test file that looks like this 5002 2014-11-24 12:59:37.112 2014-11-24 12:59:37.112 0.000 UDP ...... 23.234.22.106 48104 101 0 0 8.8.8.8 53 68.0 1.0 1 0.0 0 68 0 48Each line contains a source ip and destination ip. Here, source ip is 23.234.22.106 and destination ip is 8.8.8.8. I am doing ip lookup for each i...
Multi processing / Multi threading in BASH
for length in "$(ls $OUT_SAMPLE)"
should be rewritten
for length in $(ls $OUT_SAMPLE)
In fact you are looping on a single value. You can verify the values you're looping on with:
for length in "$(ls $OUT_SAMPLE)" ; do echo x$length ; done
Try the same without the double quotes!
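A self-contained sketch of the same point, using a throwaway directory (the file names here are hypothetical):

```shell
# Quoting the command substitution yields ONE loop iteration containing the
# whole multi-line ls output; unquoted, it word-splits into one item per file.
dir=$(mktemp -d)
touch "$dir/a.txt" "$dir/b.txt"

quoted=0
for f in "$(ls $dir)"; do quoted=$((quoted + 1)); done     # 1 iteration

unquoted=0
for f in $(ls $dir); do unquoted=$((unquoted + 1)); done   # 2 iterations

echo "quoted: $quoted  unquoted: $unquoted"
rm -r "$dir"
```

With a single iteration there is only one background job to run, which is why the original loop appeared to run "in sequence".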
I have the following line:
for length in "$(ls $OUT_SAMPLE)"
do
  $CODES/transform_into_line2.rb -c $OUT_SAMPLE -p 0 -f $length &
done
So, it should parallelize the for loop, but somehow it still runs in sequence. However, if I do the following:
$CODES/transform_into_line2.rb -c $OUT_SAMPLE -p 0 -f blabla.txt &
$CODES/tr...
parallelize shell script
Having set up a few HPC clusters in my time, I can tell you that what you want to do is going to cause you an enormous amount of trouble with compatibility issues between nodes in the cluster - which is probably why you can't find a direct answer via google. These compatibility issues include differences in versions o...
I want to build a distributed computing system to run Matlab, C and other programming languages for scientific computing. Now I've several old Mac machines with Lion Mac OS installed acted as web servers or personal computers. I also have one latest 16-Xeon-core machine to be installed with Linux. I've not decided whi...
Strategies for building distributed computing system with hybrid Mac and Linux systems
No, cmd1 [newline] cmd2 is not as concurrent as cmd1 & cmd2 — the former waits for cmd1 to complete before starting cmd2. If you want execution to continue while the first command runs, you need to keep the & when splitting this over two lines: ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post & ab -c ...
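The difference is easy to see with timings; a sketch using sleep as a stand-in for ab (elapsed times are approximate):

```shell
# Two sequential commands: the second starts only after the first finishes.
start=$SECONDS
sleep 1
sleep 1
seq_elapsed=$((SECONDS - start))    # ~2s total

# Backgrounding the first lets the two overlap.
start=$SECONDS
sleep 1 &                           # shell continues immediately
sleep 1
wait                                # make sure the background job is done
par_elapsed=$((SECONDS - start))    # ~1s total
echo "sequential: ${seq_elapsed}s  parallel: ${par_elapsed}s"
```

The same applies to the ab loop: without `&` on the first ab, each iteration hammers the two URLs one after the other instead of concurrently.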
Does the following script
#!/usr/bin/bash
while true
do
  ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post &
  ab -c 700 -n 100000000000 -r http://nalog.gov.ru/ >>nalog
done
do exactly the same as
#!/usr/bin/bash
while true
do
  ab -c 700 -n 100000000000 -r http://crimea-post.ru/ >>crimea-post
  ab -c 700...
Is 'cmd1 [newline] cmd2' as concurrent as 'cmd1 & cmd2' in a Bash script?
git does file locking to prevent corrupting the repository. You may get messages like error: cannot lock ref 'refs/remotes/origin/develop': is at 2cfbc5fed0c5d461740708db3f0e21e5a81b87f9 but expected 36c438af7c374e5d131240f9817dabb27d2e0a2c From github.com:myrepository ! 36c438a..2cfbc5f develop -> origin/develo...
What happens if two git pull command are run simultaneously in the same directory?
Running two git commands in parallel
At the prompt of an interactive zsh shell:
(repeat $(nproc) { {your-command && kill 0} & }; wait)
would run that subshell in a new process group; the first instance of your-command to exit successfully triggers a kill 0, which kills that process group. You can do the same with the parallel from moreutils with: parallel ...
I'm running openssl dhparam -out dhparam4096.pem 4096 and it pegs a single core at 100% for the duration of the task (which can be considerable on some processors). I have 1 or more additional cores that are essentially idling, and I'd like to use them. I'd like to run $(nproc)× instances of that command. The twist is...
How can I run `$(nproc)`× parallel instances of `openssl dhparam` & when first instance exits with `0`, kill the other instances?
I find your question hard to understand: you seem to want both parallel and sequential execution. Do you want this?
for t in "${TARGETS[@]}"; do
  (
    for a in "${myIPs[@]}"; do
      echo "${a} ${t} -p 80" >>log 2>&1 &
      echo "${a} ${t} -p 443" >>log 2>&1 &
      wait
    done
  ) &
done
each target's for lo...
#!/usr/bin/bash
TARGETS=( "81.176.235.2" "81.176.70.2" "78.41.109.7" )
myIPs=( "185.164.100.1" "185.164.100.2" "185.164.100.3" "185.164.100.4" "185.164.100.5" )
for t in "${TARGETS[@]}"
do
  for a in "${myIPs[@]}"
  do
    echo "${a} ${t} -p 80" >>log 2>&1 &
    echo "${a} ${t} -p 443" >>log 2>&1 &
    wa...
How might I execute this nested for loop in parallel?
One way would be to create the shell input for all the jobs:
for file in *.pdf
do
  printf 'pdftoppm -tiff -f 1 -l 2 "%q" ~/tiff/directory/"%q"/"%q"' \
    "$file" "$file" "$file"
done
and then pipe that to parallel -j N, where N is the number of jobs you want to run simultaneously:
for file in *.pdf
do
  printf...
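A side note on %q, sketched with a hypothetical file name: bash's printf %q emits shell-safe escaping on its own, so it does not need extra quotes around it in the format string.

```shell
# %q escapes shell metacharacters, so the generated command line survives
# file names containing spaces.
f='my file.pdf'                      # hypothetical name with a space
cmd=$(printf 'pdftoppm -tiff -f 1 -l 2 %q %q' "$f" "tiff/$f")
echo "$cmd"   # -> pdftoppm -tiff -f 1 -l 2 my\ file.pdf tiff/my\ file.pdf
```

Lines generated this way can be fed straight to parallel (or sh) without the quoting breaking on awkward file names.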
I have this command here for batch converting PDF documents (first 2 pages) to TIFF files using pdftoppm. The goal is to put the TIFF images into their own folder, with the folder name matching the original PDF file name.
for file in *.pdf; do
  pdftoppm -tiff -f 1 -l 2 "$file" ~/tiff/directory/"$file"/"$file"
done
How can ...
How to run PDF to TIFF conversion in parallel?
watch bjobs will rerun bjobs and update the displayed output every two seconds (by default); use watch -n SECONDS bjobs to change the interval.
When using the LSF command bjobs, I would like the output to change instantly when I submit another job, because it is tedious to run the same command again and again. I would like something like top, which refreshes the list of processes. In top that is not needed; it auto-refreshes again and again. I would like...
Live changing bjobs output
parallel's output is sequential because it captures each process's output and prints it only when that process is finished, unlike xargs, which lets the processes print their output immediately. From man parallel: GNU parallel makes sure output from the commands is the same output as you would get had you run the comman...
I am trying to download multiple files in parallel in bash and I came across GNU parallel. It looks very simple and straightforward. But I am having a hard time getting GNU parallel working. What am I doing wrong? Any pointers are appreciated. As you can see, the output is very sequential and I expect output to be diff...
Bash parallel command is executing commands sequentially
For part of the answer you want this, correct?
$ parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450..50}
0 100
1 150
2 200
3 250
0 300
1 350
2 400
3 450
If so, one way to do what I think you want would be
$ parallel -k echo {1} {2} ::: {10..20..10} ::: "$(parallel --link -k echo {1} {2} ::: {0..3} ::: {100..450.....
I made a Python simulator that runs on the basis of user-provided arguments. To use the program, I run multiple random simulations (controlled with a seed value). I use GNU parallel to run the simulator with arguments in a similar manner as shown below: parallel 'run-sim --seed {1} --power {2}' ::: <seed args> ::: <po...
GNU Parallel linking arguments with alternating arguments
With GNU Parallel you should be able to do this: parallel --tty -j0 ::: 'openocd -f connect_swo.cfg' 'python3 swo_parser.py'If GNU Parallel is not already installed look at: https://oletange.wordpress.com/2018/03/28/excuses-for-not-installing-gnu-parallel/
What I would like to achieve is a bash script or even better a single bash line that can run two terminal based apps on parallel. I am aware of the commands & and ; but in my case they are not applicable because both my commands keep the terminal open and need each other to run properly. It might seem like an edge ca...
Terminal command follows the lifetime of another terminal command
parallel echo HaploSample.chr{1}{2}{3}.raw.vcf ::: 1 2 3 4 5 6 7 ::: A B D ::: _part1 _part2
Currently I have the following script for using the HaplotypeCaller program on my Unix system in a repeatable environment I created:
#!/bin/bash
# parallel call SNPs with chromosomes by GATK
for i in 1 2 3 4 5 6 7
do
  for o in A B D
  do
    for u in _part1 _part2
    do
      (gatk HaplotypeCaller \
        -R /stor...
Converting for loops on script that is called by another script into GNU parallel commands
You can first put all parameters in a file and then use parallel -a filename command. For example:
echo "--fullscreen $(find /tmp -name *MAY*.pdf) $(find /tmp -name *MAY*.pdf).out" >> /tmp/a
echo "--page-label=3 $(find /tmp -name *MAY*.pdf) $(find /tmp -name *JUNE*.pdf).out" >> /tmp/a
echo "--fullscreen $(find /tmp ...
I want to run a task where I specify two commands which will be run in an alternating fashion with different parameters. E.g.:
1. exec --foo $inputfile1 $inputfile.outfile1
2. exec --bar $inputfile2 $inputfile.outfile2
3. exec --foo $inputfile3 $inputfile.outfile3
4. exec --bar $inputfile4 $inputfile.outfile4
I could pr...
GNU Parallel alternating jobs