
condor_submit

Queue jobs for execution under Condor

Synopsis

condor_submit [--] [-v] [-n schedd_name] [-r schedd_name] [-d] [-a command ... ] [-s] [submit description file]

Description

condor_submit is the program for submitting jobs for execution under Condor. condor_submit requires a submit description file which contains commands to direct the queuing of jobs. One submit description file may contain specifications for the queuing of many Condor jobs at once. All jobs queued by a single invocation of condor_submit must share the same executable, and are referred to as a job cluster. It is advantageous to submit multiple jobs as a single cluster because:
  1. Only one copy of the checkpoint file is needed to represent all jobs in a cluster until they begin execution.
  2. There is much less overhead involved for Condor to start the next job in a cluster than to start a new cluster. This can make a big difference when submitting lots of short jobs.

Note that submission of jobs from a Windows machine requires a stashed password to allow Condor to impersonate the user submitting the job. To stash a password, use the condor_store_cred command; see its manual page for details.

SUBMIT DESCRIPTION FILE COMMANDS

Each submit description file describes one cluster of jobs to be placed in the Condor execution pool. All jobs in a cluster must share the same executable, but they may have different input and output files, and different program arguments. The submit description file is the only non-option command-line argument to condor_submit.

The submit description file must contain one executable command and at least one queue command. All of the other commands have default actions.

The commands which can appear in the submit description file are:

executable = <name>
The name of the executable file for this job cluster. Only one executable command may be present in a description file. If submitting into the standard universe, which is the default, then the named executable must have been re-linked with the Condor libraries (such as via the condor_compile command). If submitting into the vanilla universe, then the named executable need not be re-linked and can be any process which can run in the background (shell scripts work fine as well). If submitting into the Java universe, then the argument must be a compiled .class file.

input = <pathname>
Condor assumes that its jobs are long-running, and that the user will not wait at the terminal for their completion. Because of this, the standard streams which normally access the terminal (stdin, stdout, and stderr) must refer to files. Thus, the file name specified with input should contain any keyboard input the program requires (that is, this file becomes stdin). If not specified, the default value of /dev/null is used for submission to a Unix machine; input is ignored for submission to an NT machine. For globus universe jobs, input may be a URL that the Globus tool globus_url_copy understands.

output = <pathname>
The output file name will capture any information the program would normally write to the screen (that is, this file becomes stdout). If not specified, the default value of /dev/null is used for submission to a Unix machine; output is ignored for submission to an NT machine. Multiple jobs should not use the same output file, since this will cause one job to overwrite the output of another. For globus universe jobs, output may be a URL that the Globus tool globus_url_copy understands.

error = <pathname>
The error file name will capture any error messages the program would normally write to the screen (that is, this file becomes stderr). If not specified, the default value of /dev/null is used for submission to a Unix machine; error messages are ignored for submission to an NT machine. Multiple jobs should not use the same error file, since this will cause one job to overwrite the errors of another. For globus universe jobs, error may be a URL that the Globus tool globus_url_copy understands.

arguments = <argument_list>
List of arguments to be supplied to the program on the command line. In the Java Universe, the first argument must be the name of the class containing main. Arguments are delimited (separated) by space characters. Jobs submitted for the globus universe will parse arguments differently due to the differences between the ClassAd representation in Condor and the Resource Specification Language (RSL) used in Globus. See section 5.3.3 for further details.
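For example (the flag names and file name here are illustrative, not taken from any particular program):
        arguments = -N 10 -seed 7 data.in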

initialdir = <directory-path>
Used to specify the current working directory for the Condor job. Should be a path to a preexisting directory. If not specified, condor_submit will automatically insert the user's current working directory at the time condor_submit was run as the value for initialdir.

should_transfer_files = <YES | NO | IF_NEEDED >
The should_transfer_files setting is used to define whether Condor should transfer files to and from the remote machine where your job runs. The file transfer mechanism is used to run jobs which are not in the standard universe (and therefore cannot use remote system calls for file access) on machines which do not have a shared file system with the submit machine. should_transfer_files equal to YES will cause Condor to always transfer files for your job. NO disables Condor's file transfer mechanism. IF_NEEDED will not transfer files for your job if it is matched with a resource in the same FileSystemDomain as your submit machine (and is therefore on a machine with the same shared file system). If the job is matched with a remote resource in a different FileSystemDomain, Condor will transfer the necessary files.

If you define should_transfer_files, you must also define when_to_transfer_output (described below). For more information about this and other settings related to transferring files, see section 2.5.4.

when_to_transfer_output = < ON_EXIT | ON_EXIT_OR_EVICT >

Setting when_to_transfer_output equal to ON_EXIT will cause Condor to transfer the job's output files back to the submitting machine only when the job completes (exits on its own).

The ON_EXIT_OR_EVICT option is intended for fault tolerant jobs which periodically save their own state and can restart where they left off. In this case, files are transferred to the submit machine any time the job leaves a remote site, either because it exited on its own, or was evicted by the Condor system for any reason prior to job completion. Any output files transferred back to the submit machine are automatically sent back out again as input files if the job restarts.

For more information about this and other settings related to transferring files, see section 2.5.4.

transfer_input_files = < file1,file2,file... >
Lists all the files to be transferred into the working directory for the job before the job is started. You must separate multiple file names with a comma. By default, the file specified in the executable command and any file specified in the input command (for example, stdin) are transferred.

Only the transfer of files is available; the transfer of subdirectories is not supported.

For more information about this and other settings related to transferring files, see section 2.5.4.
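For example, a sketch of the commands a vanilla universe job might use to rely on the file transfer mechanism (the input file names are illustrative):
        should_transfer_files = YES
        when_to_transfer_output = ON_EXIT
        transfer_input_files = data.base,parameters.txt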

transfer_output_files = < file1,file2,file... >
This command forms an explicit list of output files to be transferred back from the temporary working directory on the execute machine to the submit machine. Most of the time, there is no need to use this command. If transfer_output_files is not specified, Condor will automatically transfer back all files in the job's temporary working directory which have been modified or created by the job. This is usually the desired behavior. Explicitly listing output files is typically only done when the job creates many files, and the user wants to keep a subset of those files. If there are multiple files, they must be delimited with commas. WARNING: Do not specify transfer_output_files in the submit description file unless there is a really good reason - it is best to let Condor figure things out by itself based upon what the job produces.

For more information about this and other settings related to transferring files, see section 2.5.4.

requirements = <ClassAd Boolean Expression>
The requirements command is a boolean ClassAd expression which uses C-like operators. In order for any job in this cluster to run on a given machine, this requirements expression must evaluate to true on the given machine. For example, to require that whatever machine executes your program has at least 64 megabytes of RAM and a MIPS performance rating greater than 45, use:
        requirements = Memory >= 64 && Mips > 45
Only one requirements command may be present in a submit description file. By default, condor_submit appends the following clauses to the requirements expression:
  1. Arch and OpSys are set equal to the Arch and OpSys of the submit machine. In other words: unless you request otherwise, Condor will give your job machines with the same architecture and operating system version as the machine running condor_ submit.
  2. Disk >= DiskUsage. The DiskUsage attribute is initialized to the size of the executable plus the size of any files specified in a transfer_input_files command. It exists to ensure there is enough disk space on the target machine for Condor to copy over both the executable and needed input files. The DiskUsage attribute represents the maximum amount of total disk space required by the job in kilobytes. While the job runs, Condor automatically updates the DiskUsage attribute approximately every 20 minutes with the amount of space being used by the job on the execute machine.
  3. VirtualMemory >= ImageSize. To ensure the target machine has enough virtual memory to run your job.
  4. If Universe is set to Vanilla, FileSystemDomain is set equal to the submit machine's FileSystemDomain.
You can view the requirements of a job which has already been submitted (along with everything else about the job ClassAd) with the command condor_q -l; see the command reference for condor_q. Also, see the Condor Users Manual for complete information on the syntax and available attributes that can be used in the ClassAd expression.

rank = <ClassAd Float Expression>
A ClassAd Floating-Point expression that states how to rank machines which have already met the requirements expression. Essentially, rank expresses preference. A higher numeric value equals better rank. Condor will give the job the machine with the highest rank. For example,
        requirements = Memory > 60
        rank = Memory
asks Condor to find all available machines with more than 60 megabytes of memory and give the job the one with the most memory. See the Condor Users Manual for complete information on the syntax and available attributes that can be used in the ClassAd expression.

universe = <vanilla | standard | pvm | scheduler | globus | mpi | java>
Specifies which Condor Universe to use when running this job. The Condor Universe specifies a Condor execution environment. The standard Universe is the default (unless the configuration variable DEFAULT_UNIVERSE defines it otherwise), and tells Condor that this job has been re-linked via condor_compile with the Condor libraries and therefore supports checkpointing and remote system calls. The vanilla Universe is an execution environment for jobs which have not been linked with the Condor libraries. Note: use the vanilla Universe to submit shell scripts to Condor. The pvm Universe is for a parallel job written with PVM 3.4. The scheduler Universe is for a job that should act as a metascheduler. The globus Universe uses the Globus GRAM API to contact the specified Globus resource and request that it run the job. The mpi Universe is for running MPI jobs built with the MPICH package. The java Universe is for programs written to the Java Virtual Machine. See the Condor Users Manual for more information about using Universe.

on_exit_remove = <ClassAd Boolean Expression>
This expression is checked when the job exits, and if true, it allows the job to leave the queue normally. If false, the job is placed back into the Idle state. If the job is a vanilla universe job, it restarts from the beginning. If it is a standard universe job, it restarts from the last checkpoint.

For example: suppose you have a job that occasionally segfaults, but you know that if you run it again on the same data, chances are it will finish successfully. This is how you would represent that with on_exit_remove (assuming the signal identifier for segmentation fault is 11):

	on_exit_remove = (ExitBySignal == False) || (ExitSignal != 11)

With this expression, the job leaves the queue as it normally would in every case except one: if the job exited by a signal and that signal number was 11 (segmentation fault), the expression evaluates to False and the job is placed back into the queue to run again.

If left unspecified, this will default to True.

periodic_* expressions take precedence over on_exit_* expressions, and *_hold expressions take precedence over *_remove expressions.

This expression is available for the vanilla and java universes. It is additionally available, when submitted from a Unix machine, for the standard universe. Note that the schedd, by default, only checks these periodic expressions once every 300 seconds. The period of these evaluations can be adjusted by setting the PERIODIC_EXPR_INTERVAL configuration macro.

on_exit_hold = <ClassAd Boolean Expression>
This expression is checked when the job exits, and if true, places the job on hold. If false, then nothing happens, and the on_exit_remove expression is checked to determine if that needs to be applied.

For example: Suppose a job is known to run for a minimum of an hour. If the job exits after less than an hour, the job should be placed on hold and an e-mail notification sent, instead of being allowed to leave the queue.

	on_exit_hold = (CurrentTime - JobStartDate) < (60 * $(MINUTE))

This expression places the job on hold if it exits for any reason before running for an hour. An e-mail will be sent to the user explaining that the job was placed on hold because this expression became True.

periodic_* expressions take precedence over on_exit_* expressions, and *_hold expressions take precedence over *_remove expressions.

If left unspecified, this will default to False.

This expression is available for the vanilla and java universes. It is additionally available, when submitted from a Unix machine, for the standard universe.

periodic_remove = <ClassAd Boolean Expression>
This expression is checked periodically at an interval of the number of seconds set by the configuration variable PERIODIC_EXPR_INTERVAL. If it becomes True, the job is removed from the queue. If unspecified, the default value is False.

See the Examples section for an example of a periodic_* expression.

periodic_* expressions take precedence over on_exit_* expressions, and *_hold expressions take precedence over *_remove expressions. So, the periodic_remove expression takes precedence over the on_exit_remove expression, if the two describe conflicting actions.

This expression is available for the vanilla and java universes. It is additionally available, when submitted from a Unix machine, for the standard universe. Note that the schedd, by default, only checks periodic expressions once every 300 seconds. The period of these evaluations can be adjusted by setting the PERIODIC_EXPR_INTERVAL configuration macro.

periodic_hold = <ClassAd Boolean Expression>
This expression is checked periodically at an interval of the number of seconds set by the configuration variable PERIODIC_EXPR_INTERVAL. If it becomes true, the job will be placed on hold. If unspecified, the default value is False.

See the Examples section for an example of a periodic_* expression.

periodic_* expressions take precedence over on_exit_* expressions, and *_hold expressions take precedence over *_remove expressions.

This expression is available for the vanilla and java universes. It is additionally available, when submitted from a Unix machine, for the standard universe. Note that the schedd, by default, only checks periodic expressions once every 300 seconds. The period of these evaluations can be adjusted by setting the PERIODIC_EXPR_INTERVAL configuration macro.

periodic_release = <ClassAd Boolean Expression>
This expression is checked periodically at an interval of the number of seconds set by the configuration variable PERIODIC_EXPR_INTERVAL while the job is in the Hold state. If the expression becomes True, the job will be released.

This expression is available for the vanilla and java universes. It is additionally available, when submitted from a Unix machine, for the standard universe. Note that the condor_schedd daemon, by default, only checks periodic expressions once every 300 seconds. The period of these evaluations can be adjusted by setting the PERIODIC_EXPR_INTERVAL configuration macro.

priority = <priority>
Condor job priorities range from -20 to +20, with 0 being the default. Jobs with higher numerical priority will run before jobs with lower numerical priority. Note that this priority is on a per user basis; setting the priority will determine the order in which your own jobs are executed, but will have no effect on whether or not your jobs will run ahead of another user's jobs.
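For example, to run the jobs of this cluster ahead of your other queued jobs that use the default priority of 0:
        priority = 10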

notification = <when>
Owners of Condor jobs are notified by email when certain events occur. If when is set to Always, the owner will be notified whenever the job is checkpointed, and when it completes. If when is set to Complete (the default), the owner will be notified when the job terminates. If when is set to Error, the owner will only be notified if the job terminates abnormally. If when is set to Never, the owner will not be mailed, regardless of what happens to the job. The statistics included in the email are documented in section 2.6.6.

notify_user = <email-address>
Used to specify the email address to use when Condor sends email about a job. If not specified, Condor will default to using:
        job-owner@UID_DOMAIN
where UID_DOMAIN is specified by the Condor site administrator. If UID_DOMAIN has not been specified, Condor will send the email to:
        job-owner@submit-machine-name

copy_to_spool = <True | False>
If copy_to_spool is set to True, then condor_submit will copy the executable to the local spool directory before running it on a remote host. Oftentimes this is quite time-consuming and unnecessary. By setting it to False, condor_submit will skip this step. Defaults to True.

getenv = <True | False>
If getenv is set to True, then condor_submit will copy all of the user's current shell environment variables at the time of job submission into the job ClassAd. The job will therefore execute with the same set of environment variables that the user had at submit time. Defaults to False.

hold = <True | False>
If hold is set to True, then the job will be submitted in the hold state. Jobs in the hold state will not run until released by condor_release. Defaults to False.

environment = <parameter_list>
List of environment variables of the form:
        <parameter>=<value>
Multiple environment variables can be specified by separating them with a semicolon (``;'') when submitting from a Unix platform, or with a vertical bar (``|'') when submitting from an NT platform. These environment variables will be placed (as given) into the job's environment before execution. The length of all characters specified in the environment is currently limited to 10240 characters. Note that spaces are accepted, but rarely desired, within parameter names and values; place spaces within the parameter list only if required.
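For example, a sketch that sets two illustrative variables when submitting from a Unix platform:
        environment = one=1;two=2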

log = <pathname>
Use log to specify a file name where Condor will write a log file of what is happening with this job cluster. For example, Condor will log into this file when and where the job begins running, when the job is checkpointed and/or migrated, when the job completes, etc. Most users find specifying a log file to be very handy; its use is recommended. If no log entry is specified, Condor does not create a log for this cluster.

log_xml = <True | False>
If log_xml is True, then the log file will be written in ClassAd XML. If not specified, XML is not used. Note that the file is an XML fragment; it is missing the file header and footer. Also note that XML and non-XML must never be mixed in a single file: if multiple jobs write to a single log file, it is up to you to make sure that all of them specify (or do not specify) this option in the same way.

jar_files = <file_list>
Specifies a list of additional JAR files to include when using the Java universe. JAR files will be transferred along with the executable and automatically added to the classpath.
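For example, a sketch of a Java universe submission; the class and JAR file names are illustrative, and the first argument names the class containing main, as required by the arguments command:
        universe = java
        executable = Example.class
        jar_files = tools.jar,utils.jar
        arguments = Example 100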

image_size = <size>
This command tells Condor the maximum virtual image size to which you believe your program will grow during its execution. Condor will then execute your job only on machines which have enough resources (such as virtual memory) to support executing your job. If you do not specify the image size of your job in the description file, Condor will automatically make a (reasonably accurate) estimate about its size and adjust this estimate as your program runs. If the image size of your job is underestimated, it may crash due to the inability to acquire more address space; for example, malloc() fails. If the image size is overestimated, Condor may have difficulty finding machines which have the required resources. size must be in Kbytes; for example, for an image size of 8 megabytes, use a size of 8000.

machine_count = <min..max> | <max>
For the PVM universe, both min and max or just max may be defined. If machine_count is specified, Condor will not start the job until it can simultaneously supply the job with min machines. Condor will continue to try to provide up to max machines, but will not delay starting of the job to do so. If the job is started with fewer than max machines, the job will be notified via the usual PvmHostAdd notification as additional hosts come on line.

For the MPI universe, a single value (max) is required. It is neither a maximum nor a minimum, but the number of machines to be dedicated toward running the job.
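For example, a sketch that dedicates four machines to an MPI job:
        universe = mpi
        machine_count = 4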

coresize = <size>
Should the user's program abort and produce a core file, coresize specifies the maximum size in bytes of the core file which the user wishes to keep. If coresize is not specified in the command file, the system's user resource limit ``coredumpsize'' is used (except on HP-UX).

nice_user = <True | False>
Normally, when a machine becomes available to Condor, Condor decides which job to run based upon user and job priorities. Setting nice_user equal to True tells Condor not to use your regular user priority, but that this job should have last priority among all users and all jobs. So jobs submitted in this fashion run only on machines which no other non-nice_user job wants -- a true ``bottom-feeder'' job! This is very handy if a user has some jobs they wish to run, but do not wish to use resources that could instead be used to run other people's Condor jobs. Jobs submitted in this fashion have ``nice-user.'' prepended to the owner name when viewed from condor_q or condor_userprio. The default value is False.

kill_sig = <signal-number>
When Condor needs to kick a job off of a machine, it will send the job the signal specified by signal-number. signal-number needs to be an integer which represents a valid signal on the execution machine. For jobs submitted to the Standard Universe, the default value is the number for SIGTSTP which tells the Condor libraries to initiate a checkpoint of the process. For jobs submitted to the Vanilla Universe, the default is SIGTERM which is the standard way to terminate a program in UNIX.

compress_files = file1, file2, ...

If your job attempts to access any of the files mentioned in this list, Condor will automatically compress them (if writing) or decompress them (if reading). The compression format is the same as used by GNU gzip.

The files given in this list may be simple file names or complete paths and may include * as a wildcard. For example, this list causes the file /tmp/data.gz, any file named event.gz, and any file ending in .gzip to be automatically compressed or decompressed as needed:

compress_files = /tmp/data.gz, event.gz, *.gzip

Due to the nature of the compression format, compressed files must only be accessed sequentially. Random access reading is allowed but is very slow, while random access writing is simply not possible. This restriction may be avoided by using both compress_files and fetch_files at the same time. When this is done, a file is kept in the decompressed state at the execution machine, but is compressed for transfer to its original location.

This option only applies to standard-universe jobs.

fetch_files = file1, file2, ...

If your job attempts to access a file mentioned in this list, Condor will automatically copy the whole file to the executing machine, where it can be accessed quickly. When your job closes the file, it will be copied back to its original location. This list uses the same syntax as compress_files, shown above.

This option only applies to standard-universe jobs.

append_files = file1, file2, ...

If your job attempts to access a file mentioned in this list, Condor will force all writes to that file to be appended to the end. Furthermore, condor_submit will not truncate it. This list uses the same syntax as compress_files, shown above.

This option may yield some surprising results. If several jobs attempt to write to the same file, their output may be intermixed. If a job is evicted from one or more machines during the course of its lifetime, such an output file might contain several copies of the results. This option should only be used when you wish a certain file to be treated as a running log instead of a precise result.

This option only applies to standard-universe jobs.

local_files = file1, file2, ...

If your job attempts to access a file mentioned in this list, Condor will cause it to be read or written at the execution machine. This is most useful for temporary files not used for input or output. This list uses the same syntax as compress_files, shown above. For example:

local_files = /tmp/*

This option only applies to standard-universe jobs.

file_remaps = < `` name = newname ; name2 = newname2 ... ''>

Directs Condor to use a new file name in place of an old one. name describes a file name that your job may attempt to open, and newname describes the file name it should be replaced with. newname may include an optional leading access specifier, local: or remote:. If left unspecified, the default access specifier is remote:. Multiple remaps can be specified by separating each with a semicolon.

This option only applies to standard universe jobs.

If you wish to remap file names that contain equals signs or semicolons, these special characters may be escaped with a backslash.

Example One:
Suppose that your job reads a file named dataset.1. To instruct Condor to force your job to read other.dataset instead, add this to the submit file:
file_remaps = "dataset.1=other.dataset"
Example Two:
Suppose that you run many jobs which all read in the same large file, called very.big. If this file can be found in the same place on a local disk on every machine in the pool (say, /bigdisk/bigfile), you can inform Condor of this fact by remapping very.big to /bigdisk/bigfile and specifying that the file is to be read locally, which will be much faster than reading over the network.
file_remaps = "very.big = local:/bigdisk/bigfile"
Example Three:
Several remaps can be applied at once by separating each with a semicolon.
file_remaps = "very.big = local:/bigdisk/bigfile ; dataset.1 = other.dataset"

buffer_files = < `` name = (size,block-size) ; name2 = (size,block-size) ... '' >
buffer_size = <bytes-in-buffer>
buffer_block_size = <bytes-in-block>
Condor keeps a buffer of recently-used data for each file a job accesses. This buffer is used both to cache commonly-used data and to consolidate small reads and writes into larger operations that get better throughput. The default settings should produce reasonable results for most programs.

These options only apply to standard-universe jobs.

If needed, you may set the buffer controls individually for each file using the buffer_files option. For example, to set the buffer size to 1 Mbyte and the block size to 256 Kbytes for the file input.data, use this command:

buffer_files = "input.data=(1000000,256000)"

Alternatively, you may use these two options to set the default sizes for all files used by your job:

buffer_size = 1000000
buffer_block_size = 256000

If you do not set these, Condor will use the values given by these two configuration file macros:

DEFAULT_IO_BUFFER_SIZE = 1000000
DEFAULT_IO_BUFFER_BLOCK_SIZE = 256000

Finally, if no other settings are present, Condor will use a buffer of 512 Kbytes and a block size of 32 Kbytes.

rendezvousdir = <directory-path>
Used to specify the shared file system directory to be used for file system authentication when submitting to a remote scheduler. Should be a path to a preexisting directory.

x509userproxy = <full-pathname>
Used to override the default pathname for X509 user proxies. The default location for X509 proxies is the /tmp directory, which is generally a local file system. Setting this value would allow Condor to access the proxy in a shared file system (for example, AFS). Condor will use the proxy specified in the submit description file first. If nothing is specified in the submit description file, it will use the environment variable X509_USER_PROXY. If that variable is not present, it will search in the default location.

globusscheduler = <scheduler-name>
Used to specify the Globus resource to which the job should be submitted. To submit to more than one scheduler, place a queue command after each instance of globusscheduler. Each instance should be a valid Globus scheduler, using either the full Globus contact string or the host/scheduler format shown below:
Example:
To submit to the LSF scheduler of the Globus gatekeeper on lego at Boston University:
GlobusScheduler = lego.bu.edu/jobmanager-lsf
queue

globusrsl = <RSL-string>
Used to provide any additional Globus RSL string attributes which are not covered by regular submit description file parameters.

globus_resubmit = <ClassAd Boolean Expression>
The expression is evaluated by the condor_gridmanager each time the condor_gridmanager gets a job ad to manage. Therefore, the expression is evaluated:
  1. when a globus universe job is first submitted to Condor-G
  2. when a globus universe job is released from the hold state
  3. when Condor-G is restarted (specifically, whenever the condor_gridmanager is restarted)
If the expression evaluates to True, then any previous submission to the globus universe will be forgotten and this job will be submitted again as a fresh submission to the globus universe. This may be useful if there is a desire to give up on a previous submission and try again. Note that this may result in the same job running more than once. Do not treat this operation lightly.

globus_rematch = <ClassAd Boolean Expression>
This expression is evaluated by the condor_gridmanager whenever:
  1. the globus_resubmit expression evaluates to True
  2. the condor_gridmanager decides it needs to retry a submission (as when a previous submission failed to commit)
If globus_rematch evaluates to True, then before the job is submitted again to globus, the condor_gridmanager will request that the condor_schedd daemon renegotiate with the matchmaker (the condor_negotiator). The result is that this job will be matched again.

leave_in_queue = <ClassAd Boolean Expression>
When the ClassAd Expression evaluates to True, the job is not removed from the queue upon completion. The job remains in the queue until the user runs condor_rm to remove the job from the queue. This allows the user of a remotely spooled job to retrieve output files in cases where Condor would have removed them as part of the cleanup associated with completion. Defaults to False.

match_list_length = <integer value>
Defaults to the value zero (0). When match_list_length is defined with an integer value greater than zero, attributes are inserted into the job ClassAd. The maximum number of attributes defined is given by the integer value. The job ClassAd attributes introduced are given as
 
LastMatchName0 = "most-recent-Name"
LastMatchName1 = "next-most-recent-Name"

The value for each introduced attribute is given by the value of the Name attribute from the machine ClassAd of a previous execution (match). As a job is matched, the definitions for these attributes will roll, with LastMatchName1 becoming LastMatchName2, LastMatchName0 becoming LastMatchName1, and LastMatchName0 taking on the most recent value of the Name attribute.

An intended use of these job attributes is in the requirements expression. The requirements can allow a job to prefer a match with either the same or a different resource than a previous match.
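For example, a sketch that steers a job away from the machine of its most recent match; the meta-unequal operator =!= is used so that the expression remains defined before any match has occurred:
        match_list_length = 5
        requirements = (TARGET.Name =!= MY.LastMatchName0)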

transfer_output = <True | False>
For jobs submitted to the globus universe only. If True, then the output (from stdout) from the job is transferred from the remote machine back to the submit machine. The name of the file after transfer is given by the output command. If False, no transfer takes place (from the remote machine to the submit machine), and the name of the file is given by the output command.

transfer_input = <True | False>
For jobs submitted to the globus universe only. If True, then the job input (stdin) is transferred from the machine where the job was submitted to the remote machine. The name of the file that is transferred is given by the input command. If False, then the job's input is taken from a pre-staged file on the remote machine, and the name of the file is given by the input command.

transfer_error = <True | False>
For jobs submitted to the globus universe only. If True, then the error output (from stderr) from the job is transferred from the remote machine back to the submit machine. The name of the file after transfer is given by the error command. If False, no transfer takes place (from the remote machine to the submit machine), and the name of the file is given by the error command.

transfer_executable = <True | False>
For jobs submitted to the globus universe, as well as vanilla and MPI. If transfer_executable is set to False, then Condor looks for the executable on the remote machine, and does not transfer the executable over. This is useful if you have already pre-staged your executable and wish to have Condor behave more like rsh. Defaults to True.

remote_initialdir = <directory-path>
For jobs submitted to the globus universe only, the path specifies the directory in which the job is to be executed on the remote machine.

stream_output = <True | False>
For jobs submitted to the globus universe only, if True, then stdout is streamed back to the machine from which the job was submitted. If False, stdout is stored by the Globus jobmanager and transferred back when the job completes. This command is ignored if the job ClassAd attribute TransferOut is False. The default value is True.

stream_error = <True | False>
For jobs submitted to the globus universe only, if True, then stderr is streamed back to the machine from which the job was submitted. If False, stderr is stored by the Globus jobmanager and transferred back when the job completes. This command is ignored if the job ClassAd attribute TransferErr is False. The default value is True.

+<attribute> = <value>
A line which begins with a '+' (plus) character instructs condor_submit to insert the following attribute into the job ClassAd with the given value.

queue [number-of-procs]
Places one or more copies of the job into the Condor queue. If desired, new input, output, error, initialdir, arguments, nice_user, priority, kill_sig, coresize, or image_size commands may be issued between queue commands. This is very handy when submitting multiple runs into one cluster with one submit description file; for example, by issuing an initialdir between each queue command, each run can work in its own subdirectory. The optional argument number-of-procs specifies how many times to submit the job to the queue, and it defaults to 1.
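For example, a sketch of two runs placed into one cluster, each working in its own preexisting subdirectory (the directory names are illustrative):
        initialdir = run_1
        queue
        initialdir = run_2
        queue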

allow_startup_script = <True | False>
If True, a standard universe job will execute a script instead of the job executable directly, and the consistency check to see if the executable has been linked using condor_compile is omitted. The executable command within the submit description file specifies the name of the script. The script is used to do preprocessing before the job executable runs. The shell script ends with an exec of the job executable, such that the process id of the executable is the same as that of the shell script. Here is an example script that gets a copy of a machine-specific executable before the exec.
 
   #! /bin/sh

   # get the host name of the machine
   host=`uname -n`

   # grab a standard universe executable designed specifically
   # for this host
   scp elsewhere@cs.wisc.edu:${host} executable

   # The PID MUST stay the same, so exec the new standard universe process.
   exec executable ${1+"$@"}
If this command is not present, then the value defaults to False.

In addition to commands, the submit description file can contain macros and comments:

Macros
Parameterless macros in the form of $(macro_name) may be inserted anywhere in Condor submit description files. Macros can be defined by lines in the form of
 
        <macro_name> = <string>
Three pre-defined macros are supplied by the submit description file parser. The third of the pre-defined macros is only relevant to MPI universe jobs. The $(Cluster) macro supplies the number of the job cluster, and the $(Process) macro supplies the number of the job within the cluster. These macros are intended to aid in the specification of input/output files, arguments, etc., for clusters with many jobs, and/or could be used to supply a Condor process with its own cluster and process numbers on the command line. The $(Process) macro should not be used for PVM jobs. The $(Node) macro is defined only for MPI universe jobs. It is a unique value assigned for the duration of the job that essentially identifies the machine on which a program is executing.
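For example, a sketch that gives each of 50 queued jobs in a cluster its own output and error files:
        output = job.$(Cluster).$(Process).out
        error = job.$(Cluster).$(Process).err
        queue 50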

If the dollar sign (``$'') is desired as a literal character, then use

$(DOLLAR)

In addition to the normal macro, there is also a special kind of macro called a substitution macro that allows you to substitute expressions defined on the resource machine itself (obtained after a match to the machine has been performed) into specific expressions in your submit description file. The special substitution macro is of the form:

 
$$(attribute)

The substitution macro may only be used in three expressions in the submit description file: executable, environment, and arguments. The most common use of this macro is for heterogeneous submission of an executable:

executable = povray.$$(opsys).$$(arch)
The opsys and arch attributes will be substituted at match time for any given resource. This will allow Condor to automatically choose the correct executable for the matched machine.

An extension to the syntax of the substitution macro provides an alternative string to use if the machine attribute within the substitution macro is undefined. The syntax appears as:

 
$$(attribute:string_if_attribute_undefined)

An example using this extended syntax provides a path name to a required input file. Since the file can be placed in different locations on different machines, the file's path name is given as an argument to the program.

 
arguments = $$(input_file_path:/usr/foo)
On the machine, if the attribute input_file_path is not defined, then the path /usr/foo is used instead.

The environment macro, $ENV, allows the evaluation of an environment variable to be used in setting a submit description file command. The syntax used is

 
$ENV(variable)
An example submit description file command that uses this functionality evaluates the submitter's home directory in order to set the path and file name of a log file:
 
log = $ENV(HOME)/jobs/logfile
The environment variable is evaluated when the submit description file is processed.

The $RANDOM_CHOICE macro allows a random choice to be made from a given list of parameters at submission time. If some randomness needs to be generated in an expression, the macro may appear as

 
    $RANDOM_CHOICE(0,1,2,3,4,5,6)
When evaluated, one of the parameter values will be chosen.

Comments
Blank lines and lines beginning with a pound sign ('#') character are ignored by the submit description file parser.

Options

--
Accept the command file from stdin.
-v
Verbose output; display the created job ClassAd.

-n schedd_name
Submit to the specified schedd. This option is used when there is more than one schedd running on the submitting machine.

-r schedd_name
Submit to a remote schedd. The jobs will be submitted to the schedd on the specified remote host. On Unix systems, the Condor administrator for your site must override the default AUTHENTICATION_METHODS configuration setting to enable remote file system (FS_REMOTE) authentication.

-d
Disable file permission checks.

-a command
Augment the commands in the submit description file with the given command. This command will be considered to immediately precede the Queue command within the submit description file, and come after all other previous commands. The submit description file is not modified. Multiple commands are specified by using the -a option multiple times. Each new command is given in a separate -a option. Commands with spaces in them will need to be enclosed in double quote marks.

-s
Spool all required input files, user log, and proxy to the machine defined by the SCHEDD_HOST configuration variable. This is an alternative way to accomplish what the -r schedd_name option does. With the -s option, the remote condor_schedd is specified using a configuration variable instead of a command line argument.

submit description file
The pathname to the submit description file. If this optional argument is missing, then the commands are taken from standard input.

Exit Status

condor_submit will exit with a status value of 0 (zero) upon success, and a non-zero value upon failure.

Examples
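A minimal sketch of a submit description file for a vanilla universe job (the file names are illustrative):

        executable = myprogram
        universe = vanilla
        input = myprogram.in
        output = myprogram.out
        error = myprogram.err
        log = myprogram.log
        queue

A sketch of a periodic_* expression, as referenced from the periodic_remove and periodic_hold commands above. This expression removes a job from the queue once it has been in the queue for more than one week; it uses the job ClassAd attribute QDate, which holds the submission time in seconds since the epoch:

        periodic_remove = (CurrentTime - QDate) > (7 * 24 * 60 * 60)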

General Remarks

See Also

Condor Users Manual

Author

Condor Team, University of Wisconsin-Madison

Copyright

Copyright © 1990-2003 Condor Team, Computer Sciences Department, University of Wisconsin-Madison, Madison, WI. All Rights Reserved. No use of the Condor Software Program is authorized without the express consent of the Condor Team. For more information contact: Condor Team, Attention: Professor Miron Livny, 7367 Computer Sciences, 1210 W. Dayton St., Madison, WI 53706-1685, (608) 262-0856 or miron@cs.wisc.edu.

U.S. Government Rights Restrictions: Use, duplication, or disclosure by the U.S. Government is subject to restrictions as set forth in subparagraph (c)(1)(ii) of The Rights in Technical Data and Computer Software clause at DFARS 252.227-7013 or subparagraphs (c)(1) and (2) of Commercial Computer Software-Restricted Rights at 48 CFR 52.227-19, as applicable, Condor Team, Attention: Professor Miron Livny, 7367 Computer Sciences, 1210 W. Dayton St., Madison, WI 53706-1685, (608) 262-0856 or miron@cs.wisc.edu.

See the Condor Version 6.6.0 Manual for additional notices.

