Hi. LSF/OpenLava sysadmin in bioinformatics and GNU parallel user here.
I have seen this come up a couple of times now: you are trying to use GNU parallel to submit jobs to all your nodes.
That's not the way to do things: you should not submit jobs from *all* your nodes. Please don't do that, because bsub was not designed to take in large batches of jobs one at a time. bsub spools each job to your home directory, so if your storage is not designed for that many writes, you are going to blow the cluster out of the water.
What you want to do is look up either:
bsub scripts
or
job arrays
Both bsub scripts and job arrays are useful to you. bsub scripts can be submitted as part of a pipeline: your pipeline can generate the bsub script and then submit it to bsub. So, instead of submitting your job 2000 times, as in
bsub job0
bsub job1
....
bsub job1999
you just run "bsub < scriptname", where scriptname contains the lines that describe your jobs, and you are done. The rest is handled by bsub/LSF.
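As a rough sketch of what such a spooled script can look like (the queue name, file names and the align command below are made-up placeholders, so adjust them to your site): the #BSUB lines carry the submission options, and everything after them is what the job actually runs. Your pipeline writes this file, then a single "bsub < scriptname" submits it.
#!/bin/bash
# options for bsub are read from the #BSUB lines
#BSUB -J fasta_alignment
#BSUB -q normal
#BSUB -o alignment.%J.out
#BSUB -e alignment.%J.err
# the actual work, one line per input, generated by your pipeline
align --in sample0.fasta --out sample0.aln
align --in sample1.fasta --out sample1.aln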
Now, if your jobs are identical except for a counter that you increment (as in most bioinformatics jobs), use job arrays:
bsub -J "JOBNAME[1-2000]"
where JOBNAME is the string you would like to name your jobs, e.g. "fasta files alignment". Note that LSF array indices start at 1, and quoting the bracketed expression keeps your shell from interpreting it.
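To show the idea (the queue, output file pattern and the align command here are placeholders): LSF sets the environment variable LSB_JOBINDEX inside each array element, and %I in the output file name expands to that index, so one submission line covers all 2000 inputs:
bsub -J "fasta_alignment[1-2000]" -q normal -o "alignment.%J.%I.out" 'align --in sample.${LSB_JOBINDEX}.fasta --out sample.${LSB_JOBINDEX}.aln'
The single quotes matter: they stop your login shell from expanding $LSB_JOBINDEX, so it is expanded on the execution host where each array element gets its own index.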
These techniques are useful because you can submit all 2000 jobs in under a second, you can do it from a single node, and you will not have to deal with a grumpy sysadmin or grumpy colleagues who cannot use the cluster. Just make sure you use the appropriate queue.
Let me know if you have any questions.
Best Regards,
George Marselis