You can run a Makeflow on your local machine to test it out. If you have a multi-core machine, then you can run multiple tasks simultaneously. If you have a Condor pool or a Sun Grid Engine batch system, then you can send your jobs there to run. If you don't already have a batch system, Makeflow comes with a system called Work Queue that will let you distribute the load across any collection of machines, large or small. Makeflow also supports execution in a Docker container, regardless of the batch system used.
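If you have not written one before, a Makeflow file uses make-style rules: a target, its sources, and an indented command. Here is a minimal sketch (the file names and the command are placeholders, not part of any standard workflow):

    # mini.makeflow: build output.txt from input.txt with a single rule
    output.txt: input.txt
        sort input.txt > output.txt

Running this locally is just `makeflow mini.makeflow`; moving to a batch system only changes the `-T` flag, for example `makeflow -T condor mini.makeflow`.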
| Option | Description |
|--------|-------------|
| -c,--clean <option> | Clean up: remove logfile and all targets. If <option> is one of [intermediates, outputs], only the indicated files are removed. |
| -f,--summary-log <file> | Write summary of workflow to <file>. |
| -h, --help | Show this help screen. |
| -m,--email <email> | Email summary of workflow to <email>. |
| -v, --version | Show version string. |
| -X,--chdir <directory> | Change to <directory> before executing the Makeflow. |
| -B,--batch-options <options> | Add these options to all batch submit files. |
| -j,--max-local <#> | Max number of local jobs to run at once. (default is # of cores) |
| -J,--max-remote <#> | Max number of remote jobs to run at once. (default is 1000 for -T wq, 100 otherwise) |
| -l,--makeflow-log <logfile> | Use this file for the makeflow log. (default is X.makeflowlog) |
| -L,--batch-log <logfile> | Use this file for the batch system log. (default is X.<type>log) |
| -R, --retry | Automatically retry failed batch jobs up to 100 times. |
| -r,--retry-count <n> | Automatically retry failed batch jobs up to <n> times. |
| --wait-for-files-upto <#> | Wait up to this many seconds for output files to be created (e.g., to deal with NFS semantics). |
| -S,--submission-timeout <timeout> | Time to retry failed batch job submission. (default is 3600s) |
| -T,--batch-type <type> | Batch system type: local, condor, sge, pbs, torque, slurm, moab, cluster, wq, amazon. (default is local) |
| -d,--debug <subsystem> | Enable debugging for this subsystem. |
| -o,--debug-file <file> | Write debugging output to this file. By default, debugging is sent to stderr (":stderr"). You may specify that logs be sent to stdout (":stdout"), to the system syslog (":syslog"), or to the systemd journal (":journal"). |
| --verbose | Display runtime progress on stdout. |
| -a, --advertise | Advertise the master information to a catalog server. |
| -C,--catalog-server <catalog> | Set catalog server to <catalog>. Format: HOSTNAME:PORT |
| -F,--wq-fast-abort <#> | Work Queue fast abort multiplier. (default is deactivated) |
| -M, -N <project-name> | Set the project name to <project-name>. |
| -p,--port <port> | Port number to use with Work Queue. (default is 9123, 0=arbitrary) |
| -Z,--port-file <file> | Select port at random and write it to this file. (default is disabled) |
| -P,--priority <integer> | Priority. The higher the value, the higher the priority. |
| -W,--wq-schedule <mode> | Work Queue scheduling algorithm. (time\|files\|fcfs) |
| -s,--password <pwfile> | Password file for authenticating workers. |
| --disable-cache | Disable file caching. (currently Work Queue only; default is false) |
| --work-queue-preferred-connection <connection> | Indicate the preferred connection. Choose one of by_ip or by_hostname. (default is by_ip) |
| --monitor <dir> | Enable the resource monitor, and write the monitor logs to <dir>. |
| --monitor-with-time-series | Enable monitor time series. (default is disabled) |
| --monitor-with-opened-files | Enable monitoring of opened files. (default is disabled) |
| --monitor-interval <#> | Set monitor interval to <#> seconds. (default 1 second) |
| --monitor-log-fmt <fmt> | Format for monitor logs. (default resource-rule-%06.6d, %d -> rule number) |
| --docker <image> | Run each task in the Docker container with this name. The image will be obtained via "docker pull" if it is not already available. |
| --docker-tar <tar> | Run each task in the Docker container given by this tar file. The image will be loaded via "docker load" on each execution site. |
| --amazon-credentials <path> | Specify path to Amazon credentials file. The file should be in the following JSON format: { "aws_access_key_id" : "AAABBBBCCCCDDD", "aws_secret_access_key" : "AAABBBBCCCCDDDAAABBBBCCCCDDD" } |
| --amazon-ami <image-id> | Specify an Amazon machine image (AMI). |
| -A, --disable-afs-check | Disable the check for AFS. (experts only) |
| -z, --zero-length-error | Force failure on zero-length output files. |
| --wrapper <script> | Wrap all commands with this script. Each rule's original recipe is appended to <script> or replaces the first occurrence of {} in <script>. |
| --wrapper-input <file> | The wrapper command requires this input file. This option may be specified more than once, defining an array of inputs. Additionally, each job executing a recipe has a unique integer identifier that replaces occurrences of %% in <file>. |
| --wrapper-output <file> | The wrapper command requires this output file. This option may be specified more than once, defining an array of outputs. Additionally, each job executing a recipe has a unique integer identifier that replaces occurrences of %% in <file>. |
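As an illustration of the wrapper mechanism described above (a sketch; the wrapping command is an arbitrary choice, not part of Makeflow), the following runs every rule's recipe under `time`, substituting the recipe for the {} placeholder:

    makeflow --wrapper 'time -p {}' Makeflow

With no {} in the script, the recipe would instead be appended, so `--wrapper 'env DEBUG=1'` and `--wrapper 'env DEBUG=1 {}'` behave the same.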
Note that variables defined in your Makeflow are exported to the environment.
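For example (a sketch with a hypothetical simulate.py), a variable defined at the top of the Makeflow is visible in the environment of each job that runs a rule:

    CORES=4

    # simulate.py can read CORES from its environment, e.g. os.environ["CORES"]
    results.dat: simulate.py
        python simulate.py > results.dat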
Run makeflow locally with debugging:

    makeflow -d all Makeflow

Run makeflow on Condor with special requirements:

    makeflow -T condor -B "requirements = MachineGroup == 'ccl'" Makeflow

Run makeflow with Work Queue using named workers:

    makeflow -T wq -a -N project.name Makeflow

Create a directory containing all of the dependencies required to run the specified makeflow:

    makeflow -b bundle Makeflow
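For the Work Queue example above, you would also start one or more workers that advertise to the catalog under the same project name; a sketch, assuming a work_queue_worker from the same CCTools release as makeflow:

    work_queue_worker -a -N project.name

Workers started this way look up the master via the catalog server, so they can run on any machine that can reach it.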