Genotypes one or more GVCF files and runs either the VQSR or hard filtering pipeline. Uploads the genotyped VCF file
to the config output directory.
:param JobFunctionWrappingJob job: passed automatically by Toil
:param dict gvcfs: Dictionary of GVCFs {Sample ID: FileStoreID}
:param Namespace config: I... |
Runs Oncotator for a group of VCF files. Each sample is annotated individually.
:param JobFunctionWrappingJob job: passed automatically by Toil
:param dict vcfs: Dictionary of VCF FileStoreIDs {Sample identifier: FileStoreID}
:param Namespace config: Input parameters and shared FileStoreIDs
Require... |
Parses manifest file for Toil Germline Pipeline
:param str path_to_manifest: Path to sample manifest file
:return: List of GermlineSample namedtuples
:rtype: list[GermlineSample]
def parse_manifest(path_to_manifest):
"""
Parses manifest file for Toil Germline Pipeline
:param str path_to_manif... |
Downloads shared reference files for Toil Germline pipeline
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Pipeline configuration options
:return: Updated config with shared FileStoreIDs
:rtype: Namespace
def download_shared_files(job, config):
"""
Dow... |
Creates a genome fasta index and sequence dictionary file if not already present in the pipeline config.
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Pipeline configuration options and shared files.
Requires FileStoreID for genome fasta f... |
Prepares BAM file for Toil germline pipeline.
Steps in pipeline
0: Download and align BAM or FASTQ sample
1: Sort BAM
2: Index BAM
3: Run GATK preprocessing pipeline (Optional)
- Uploads preprocessed BAM to output directory
:param JobFunctionWrappingJob job: passed automatically by Toi... |
Downloads and runs bwakit for BAM or FASTQ files
:param JobFunctionWrappingJob job: passed automatically by Toil
:param str uuid: Unique sample identifier
:param str url: FASTQ or BAM file URL. BAM alignment URL must have .bam extension.
:param Namespace config: Input parameters and shared FileStoreIDs... |
Uses GATK HaplotypeCaller to identify SNPs and INDELs. Outputs variants in a Genomic VCF file.
:param JobFunctionWrappingJob job: passed automatically by Toil
:param str bam: FileStoreID for BAM file
:param str bai: FileStoreID for BAM index file
:param str ref: FileStoreID for reference genome fasta f... |
GATK germline pipeline with variant filtering and annotation.
def main():
"""
GATK germline pipeline with variant filtering and annotation.
"""
# Define Parser object and add to jobTree
parser = argparse.ArgumentParser(description=__doc__, formatter_class=argparse.RawTextHelpFormatter)
# Gener... |
Loops over the sample_ids (uuids) in the manifest, creating child jobs to process each
def sample_loop(job, uuid_list, inputs):
"""
Loops over the sample_ids (uuids) in the manifest, creating child jobs to process each
"""
for uuid_rg in uuid_list:
uuid_items = uuid_rg.split(',')
uuid = uuid_items[0]... |
Prefer this here as it allows us to pull the job functions from other jobs
without rewrapping the job functions back together.
bwa_inputs: Input arguments to be passed to BWA.
adam_inputs: Input arguments to be passed to ADAM.
gatk_preprocess_inputs: Input arguments to be passed to GATK preprocessing.
... |
This is a Toil pipeline used to perform alignment of fastqs.
def main():
"""
This is a Toil pipeline used to perform alignment of fastqs.
"""
# Define Parser object and add to Toil
if mock_mode():
usage_msg = 'You have the TOIL_SCRIPTS_MOCK_MODE environment variable set, so this pipeline ' ... |
Input1: Path to the BD2K Master Key (for S3 Encryption)
Input2: S3 URL (e.g. https://s3-us-west-2.amazonaws.com/cgl-driver-projects-encrypted/wcdt/exome_bams/DTB-111-N.bam)
Returns: 32-byte unique key generated for that URL
def generate_unique_key(master_key_path, url):
"""
Input1: Path to the BD2K Ma... |
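A plausible sketch of the key derivation described above: the actual derivation is not shown here, so this simply hashes the master key together with the URL, chosen because SHA-256 yields exactly the 32 bytes required.

```python
import hashlib

def generate_unique_key(master_key_path, url):
    """Derive a per-URL 32-byte encryption key from a 32-byte master key.

    Sketch only: SHA-256 of (master key + URL) is an assumption; the
    pipeline's real derivation may differ.
    """
    with open(master_key_path, 'rb') as f:
        master_key = f.read()
    assert len(master_key) == 32, 'Master key must be exactly 32 bytes'
    new_key = hashlib.sha256(master_key + url.encode('utf-8')).digest()
    assert len(new_key) == 32
    return new_key
```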
Downloads encrypted file from S3
Input1: Working directory
Input2: S3 URL to be downloaded
Input3: Path to key necessary for decryption
Input4: name of file to be downloaded
def download_encrypted_file(work_dir, url, key_path, name):
"""
Downloads encrypted file from S3
Input1: Working di... |
Returns the paths of files from the FileStore
Input1: Toil job instance
Input2: Working directory
Input3: jobstore id dictionary
Input4: names of files to be returned from the jobstore
Returns: path(s) to the file(s) requested -- unpack these!
def return_input_paths(job, work_dir, ids, *args):
... |
Moves files from work_dir to output_dir
Input1: Working directory
Input2: Output directory
Input3: UUID to be prepended onto file name
Input4: list of file names to be moved from working dir to output dir
def move_to_output_dir(work_dir, output_dir, uuid=None, files=list()):
"""
Moves files f... |
Downloads shared files that are used by all samples for alignment and places them in the jobstore.
def batch_start(job, input_args):
"""
Downloads shared files that are used by all samples for alignment and places them in the jobstore.
"""
shared_files = ['ref.fa', 'ref.fa.amb', 'ref.fa.ann', 'ref.fa.b... |
Spawns an alignment job for every sample in the input configuration file
def spawn_batch_jobs(job, shared_ids, input_args):
"""
Spawns an alignment job for every sample in the input configuration file
"""
samples = []
config = input_args['config']
with open(config, 'r') as f_in:
for lin... |
Runs BWA and then Bamsort on the supplied fastqs for this sample
Input1: Toil Job instance
Input2: jobstore id dictionary
Input3: Input arguments dictionary
Input4: Sample tuple -- contains uuid and urls for the sample
def alignment(job, ids, input_args, sample):
"""
Runs BWA and then Bamsort ... |
Uploads output BAM from sample to S3
Input1: Toil Job instance
Input2: jobstore id dictionary
Input3: Input arguments dictionary
Input4: Sample tuple -- contains uuid and urls for the sample
def upload_bam_to_s3(job, ids, input_args, sample):
"""
Uploads output BAM from sample to S3
Input... |
Runs GATK Variant Quality Score Recalibration.
0: Start                          0 --> 1 --> 3 --> 4 --> 5
1: Recalibrate SNPs               |                 |
2: Recalibrate INDELs             +--> 2 -----------+
3: Apply SNP Recalibration
4: Apply INDEL Recalibration
5: Write VCF to output directory
:p... |
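The branch structure in the VQSR diagram can be captured as a dependency map and checked with a topological sort. This is a stand-in for Toil's `addChild`/`addFollowOn` wiring, not the pipeline's actual code.

```python
# Each step lists the steps whose outputs it needs, mirroring the diagram:
# the INDEL branch (2) rejoins the SNP branch at step 4.
STEPS = {
    0: [],        # Start
    1: [0],       # Recalibrate SNPs
    2: [0],       # Recalibrate INDELs
    3: [1],       # Apply SNP recalibration
    4: [3, 2],    # Apply INDEL recalibration (needs both branches)
    5: [4],       # Write VCF to output directory
}

def execution_order(steps):
    """Return a valid topological order of the step graph."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        for dep in steps[n]:
            visit(dep)
        seen.add(n)
        order.append(n)
    for n in sorted(steps):
        visit(n)
    return order
```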
Converts full GATK annotation name to the shortened version
:param annotations:
:return:
def get_short_annotations(annotations):
"""
Converts full GATK annotation name to the shortened version
:param annotations:
:return:
"""
# Annotations need to match VCF header
short_name = {'Qua... |
Parses GeneTorrent config file. Returns list of samples: [[id1, id1], [id2, id2], ...]
Returns duplicate of ids to follow UUID/URL standard.
def parse_sra(path_to_config):
"""
Parses GeneTorrent config file. Returns list of samples: [[id1, id1], [id2, id2], ...]
Returns duplicate of ids to foll... |
Tars a group of files together into a tarball
work_dir: str Current Working Directory
tar_name: str Name of tarball
uuid: str UUID to stamp files with
files: str(s) List of filenames to place in the tarball from working directory
def tarball_files(work_dir, tar_name, uuid=N... |
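The tarball helper above can be sketched with the standard `tarfile` module. The UUID-prefix format and the gzip compression are assumptions here.

```python
import os
import tarfile

def tarball_files(work_dir, tar_name, uuid=None, files=None):
    """Tar the named files from work_dir into work_dir/tar_name,
    optionally prefixing each archived name with the sample UUID."""
    with tarfile.open(os.path.join(work_dir, tar_name), 'w:gz') as tar:
        for name in files or []:
            # Prefix format is illustrative.
            arcname = name if uuid is None else '{}.{}'.format(uuid, name)
            tar.add(os.path.join(work_dir, name), arcname=arcname)
```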
This function will administer 5 jobs at a time then recursively call itself until subset is empty
def start_batch(job, input_args):
"""
This function will administer 5 jobs at a time then recursively call itself until subset is empty
"""
samples = parse_sra(input_args['sra'])
# for analysis_id in s... |
Downloads a sample from dbGaP via SRAToolKit, then uses S3AM to transfer it to S3
input_args: dict Dictionary of input arguments
analysis_id: str An analysis ID for a sample in CGHub
def download_and_transfer_sample(job, input_args, samples):
"""
Downloads a sample from dbGaP via SRATool... |
Transfer GTEx data from dbGaP (NCBI) to S3
def main():
"""
Transfer GTEx data from dbGaP (NCBI) to S3
"""
# Define Parser object and add to toil
parser = build_parser()
Job.Runner.addToilOptions(parser)
args = parser.parse_args()
# Store inputs from argparse
inputs = {'sra': args.sr... |
Uploads a file from the FileStore to an output directory on the local filesystem or S3.
:param JobFunctionWrappingJob job: passed automatically by Toil
:param str filename: basename for file
:param str file_id: FileStoreID
:param str output_dir: Amazon S3 URL or local path
:param str s3_key_path: (... |
Downloads encrypted files from S3 via header injection
input_args: dict Input dictionary defined in main()
name: str Symbolic name associated with file
def download_encrypted_file(job, input_args, name):
"""
Downloads encrypted files from S3 via header injection
input_args: dict I... |
Simple curl request made for a given url
url: str URL to download
def download_from_url(job, url):
"""
Simple curl request made for a given url
url: str URL to download
"""
work_dir = job.fileStore.getLocalTempDir()
file_path = os.path.join(work_dir, os.path.basename(url))
if no... |
Makes subprocess call of a command to a docker container.
tool_parameters: list An array of the parameters to be passed to the tool
tool: str Name of the Docker image to be used (e.g. quay.io/ucsc_cgl/samtools)
java_opts: str Optional commands to pass to a java jar execution. (e.g... |
A list of files to move from work_dir to output_dir.
work_dir: str Current working directory
output_dir: str Output directory for files to go
uuid: str UUID to "stamp" onto output files
files: list List of files to iterate through
def copy_to_output_dir(work_dir, output_dir... |
Checks that dependency programs are installed.
input_args: dict Dictionary of input arguments (from main())
def program_checks(job, input_args):
"""
Checks that dependency programs are installed.
input_args: dict Dictionary of input arguments (from main())
"""
# Program checks
... |
Downloads and stores shared input files in the FileStore
input_args: dict Dictionary of input arguments (from main())
def download_shared_files(job, input_args):
"""
Downloads and stores shared input files in the FileStore
input_args: dict Dictionary of input arguments (from main())
... |
Launches pipeline for each sample.
shared_ids: dict Dictionary of fileStore IDs
input_args: dict Dictionary of input arguments
def parse_config_file(job, ids, input_args):
"""
Launches pipeline for each sample.
shared_ids: dict Dictionary of fileStore IDs
input_args: dict... |
Defines variables unique to a sample that are used in the rest of the pipelines
ids: dict Dictionary of fileStore IDS
input_args: dict Dictionary of input arguments
sample: tuple Contains uuid and sample_url
def download_sample(job, ids, input_args, sample):
"""
Defines variable... |
Statically define jobs in the pipeline
job_vars: tuple Tuple of dictionaries: input_args and ids
def static_dag_launchpoint(job, job_vars):
"""
Statically define jobs in the pipeline
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
input_args, ids = job_vars
if input_... |
Unzips input sample and concats the Read1 and Read2 groups together.
job_vars: tuple Tuple of dictionaries: input_args and ids
def merge_fastqs(job, job_vars):
"""
Unzips input sample and concats the Read1 and Read2 groups together.
job_vars: tuple Tuple of dictionaries: input_args and ids
... |
Maps RNA-Seq reads to a reference genome.
job_vars: tuple Tuple of dictionaries: input_args and ids
def mapsplice(job, job_vars):
"""
Maps RNA-Seq reads to a reference genome.
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
# Unpack variables
input_args, ids = job_va... |
This function adds read groups to the headers
job_vars: tuple Tuple of dictionaries: input_args and ids
def add_read_groups(job, job_vars):
"""
This function adds read groups to the headers
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
input_args, ids = job_vars
wo... |
Sorts bam file and produces index file
job_vars: tuple Tuple of dictionaries: input_args and ids
def bamsort_and_index(job, job_vars):
"""
Sorts bam file and produces index file
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
# Unpack variables
input_args, ids = job_... |
QC module: contains QC metrics and information about the BAM post alignment
job_vars: tuple Tuple of dictionaries: input_args and ids
def rseq_qc(job, job_vars):
"""
QC module: contains QC metrics and information about the BAM post alignment
job_vars: tuple Tuple of dictionaries: input_args a... |
Sorts the bam by reference
job_vars: tuple Tuple of dictionaries: input_args and ids
def sort_bam_by_reference(job, job_vars):
"""
Sorts the bam by reference
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
# Unpack variables
input_args, ids = job_vars
work_dir = ... |
Produces exon counts
job_vars: tuple Tuple of dictionaries: input_args and ids
def exon_count(job, job_vars):
"""
Produces exon counts
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
input_args, ids = job_vars
work_dir = job.fileStore.getLocalTempDir()
uuid = inp... |
Creates a bam of just the transcriptome
job_vars: tuple Tuple of dictionaries: input_args and ids
def transcriptome(job, job_vars):
"""
Creates a bam of just the transcriptome
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
input_args, ids = job_vars
work_dir = job.f... |
Performs filtering on the transcriptome bam
job_vars: tuple Tuple of dictionaries: input_args and ids
def filter_bam(job, job_vars):
"""
Performs filtering on the transcriptome bam
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
input_args, ids = job_vars
work_dir = ... |
Runs RSEM to produce counts
job_vars: tuple Tuple of dictionaries: input_args and ids
def rsem(job, job_vars):
"""
Runs RSEM to produce counts
job_vars: tuple Tuple of dictionaries: input_args and ids
"""
input_args, ids = job_vars
work_dir = job.fileStore.getLocalTempDir()
cp... |
Combine the contents of separate zipped outputs into one via streaming
job_vars: tuple Tuple of dictionaries: input_args and ids
output_ids: tuple Nested tuple of all the output fileStore IDs
def consolidate_output(job, job_vars, output_ids):
"""
Combine the contents of separate zipped outputs i... |
If s3_dir is specified in arguments, file will be uploaded to S3 using boto.
WARNING: ~/.boto credentials are necessary for this to succeed!
job_vars: tuple Tuple of dictionaries: input_args and ids
def upload_output_to_s3(job, job_vars):
"""
If s3_dir is specified in arguments, file will be uploa... |
Upload bam to S3. Requires S3AM and a ~/.boto config file.
def upload_bam_to_s3(job, job_vars):
"""
Upload bam to S3. Requires S3AM and a ~/.boto config file.
"""
input_args, ids = job_vars
work_dir = job.fileStore.getLocalTempDir()
uuid = input_args['uuid']
# I/O
job.fileStore.readGlob... |
This is a Toil pipeline for the UNC best practice RNA-Seq analysis.
RNA-seq fastqs are combined, aligned, sorted, filtered, and quantified.
Please read the README.md located in the same directory.
def main():
"""
This is a Toil pipeline for the UNC best practice RNA-Seq analysis.
RNA-seq fastqs ar... |
Remove the given file from hdfs with master at the given IP address
:type masterIP: MasterAddress
def remove_file(master_ip, filename, spark_on_toil):
"""
Remove the given file from hdfs with master at the given IP address
:type masterIP: MasterAddress
"""
master_ip = master_ip.actual
ss... |
Downloads input data files from S3.
:type masterIP: MasterAddress
def download_data(job, master_ip, inputs, known_snps, bam, hdfs_snps, hdfs_bam):
"""
Downloads input data files from S3.
:type masterIP: MasterAddress
"""
log.info("Downloading known sites file %s to %s.", known_snps, hdfs_snp... |
Convert input sam/bam file and known SNPs file into ADAM format
def adam_convert(job, master_ip, inputs, in_file, in_snps, adam_file, adam_snps, spark_on_toil):
"""
Convert input sam/bam file and known SNPs file into ADAM format
"""
log.info("Converting input BAM to ADAM.")
call_adam(job, master_i... |
Preprocess in_file with known SNPs snp_file:
- mark duplicates
- realign indels
- recalibrate base quality scores
def adam_transform(job, master_ip, inputs, in_file, snp_file, hdfs_dir, out_file, spark_on_toil):
"""
Preprocess in_file with known SNPs snp_file:
- mark duplicates
... |
Upload file hdfsName from hdfs to s3
def upload_data(job, master_ip, inputs, hdfs_name, upload_name, spark_on_toil):
"""
Upload file hdfsName from hdfs to s3
"""
if mock_mode():
truncate_file(master_ip, hdfs_name, spark_on_toil)
log.info("Uploading output BAM %s to %s.", hdfs_name, upload... |
Monolithic job that calls data download, conversion, transform, upload.
Previously, this was not monolithic; change came in due to #126/#134.
def download_run_and_upload(job, master_ip, inputs, spark_on_toil):
"""
Monolithic job that calls data download, conversion, transform, upload.
Previously, this ... |
A Toil job function performing ADAM preprocessing on a single sample
def static_adam_preprocessing_dag(job, inputs, sample, output_dir, suffix=''):
"""
A Toil job function performing ADAM preprocessing on a single sample
"""
inputs.sample = sample
inputs.output_dir = output_dir
inputs.suffix = ... |
Runs GATK Hard Filtering on a Genomic VCF file and uploads the results.
0: Start                      0 --> 1 --> 3 --> 5 --> 6
1: Select SNPs                |                 |
2: Select INDELs              +--> 2 --> 4 -----+
3: Apply SNP Filter
4: Apply INDEL Filter
5: Merge SNP and INDEL VCFs
6: Write fi... |
Downloads a sample from CGHub via GeneTorrent, then uses S3AM to transfer it to S3
input_args: dict Dictionary of input arguments
analysis_id: str An analysis ID for a sample in CGHub
def download_and_transfer_sample(job, sample, inputs):
"""
Downloads a sample from CGHub via GeneTorren... |
This is a Toil pipeline to transfer TCGA data into an S3 Bucket
Data is pulled down with Genetorrent and transferred to S3 via S3AM.
def main():
"""
This is a Toil pipeline to transfer TCGA data into an S3 Bucket
Data is pulled down with Genetorrent and transferred to S3 via S3AM.
"""
# Defin... |
Validate a hexadecimal IPv6 address.
>>> validate_ip('::')
True
>>> validate_ip('::1')
True
>>> validate_ip('2001:db8:85a3::8a2e:370:7334')
True
>>> validate_ip('2001:db8:85a3:0:0:8a2e:370:7334')
True
>>> validate_ip('2001:0db8:85a3:0000:0000:8a2e:0370:7334')
True
>>> va... |
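A minimal sketch of the validation above that delegates to the OS parser via `inet_pton` rather than re-implementing the hextet grammar the library uses; the two may disagree on edge cases such as zone indices.

```python
import socket

def validate_ip(ip):
    """Return True if ip parses as a hexadecimal IPv6 address."""
    try:
        socket.inet_pton(socket.AF_INET6, ip)
        return True
    except (OSError, ValueError):
        # Not a parseable IPv6 address.
        return False
```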
Convert a hexadecimal IPv6 address to a network byte order 128-bit
integer.
>>> ip2long('::') == 0
True
>>> ip2long('::1') == 1
True
>>> expect = 0x20010db885a3000000008a2e03707334
>>> ip2long('2001:db8:85a3::8a2e:370:7334') == expect
True
>>> ip2long('2001:db8:85a3:0:0:8a2e:370:73... |
Convert a network byte order 128-bit integer to a canonical IPv6
address.
>>> long2ip(2130706433)
'::7f00:1'
>>> long2ip(42540766411282592856904266426630537217)
'2001:db8::1:0:0:1'
>>> long2ip(MIN_IP)
'::'
>>> long2ip(MAX_IP)
'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff'
>>> long2i... |
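The two conversions above can be sketched with the standard `ipaddress` module, whose canonical compressed form matches the doctests shown. Raising `TypeError` for out-of-range integers mirrors the doctest above; the library's exact error handling may differ.

```python
import ipaddress

def ip2long(ip):
    """Hexadecimal IPv6 address -> 128-bit integer, or None if invalid."""
    try:
        return int(ipaddress.IPv6Address(ip))
    except (ipaddress.AddressValueError, ValueError):
        return None

def long2ip(l):
    """128-bit integer -> canonical (compressed) IPv6 address."""
    if not 0 <= l <= (1 << 128) - 1:
        raise TypeError('expected int between 0 and 2**128-1 inclusive')
    return str(ipaddress.IPv6Address(l))
```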
Convert a network byte order 128-bit integer to an RFC 1924 IPv6
address.
>>> long2rfc1924(ip2long('1080::8:800:200C:417A'))
'4)+k&C#VzJ4br>0wv%Yp'
>>> long2rfc1924(ip2long('::'))
'00000000000000000000'
>>> long2rfc1924(MAX_IP)
'=r54lj&NUUO~Hi%c2ym0'
:param l: Network byte order 128-b... |
Convert an RFC 1924 IPv6 address to a network byte order 128-bit
integer.
>>> expect = 0
>>> rfc19242long('00000000000000000000') == expect
True
>>> expect = 21932261930451111902915077091070067066
>>> rfc19242long('4)+k&C#VzJ4br>0wv%Yp') == expect
True
>>> rfc19242long('pizza') == None... |
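The RFC 1924 conversions above amount to base-85 with the RFC's fixed 85-character alphabet and a fixed width of 20 digits, most significant first. A sketch of both directions:

```python
# RFC 1924's 85-character alphabet, in order.
RFC1924_ALPHABET = ('0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ'
                    'abcdefghijklmnopqrstuvwxyz'
                    '!#$%&()*+-;<=>?@^_`{|}~')

def long2rfc1924(l):
    """128-bit integer -> 20-character RFC 1924 base-85 string."""
    digits = []
    for _ in range(20):
        l, rem = divmod(l, 85)
        digits.append(RFC1924_ALPHABET[rem])
    return ''.join(reversed(digits))

def rfc19242long(s):
    """RFC 1924 base-85 string -> 128-bit integer, or None if invalid."""
    if len(s) != 20:
        return None
    value = 0
    for ch in s:
        idx = RFC1924_ALPHABET.find(ch)
        if idx < 0:
            return None  # character outside the RFC 1924 alphabet
        value = value * 85 + idx
    return value
```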
Validate a CIDR notation ip address.
The string is considered a valid CIDR address if it consists of a valid
IPv6 address in hextet format followed by a forward slash (/) and a bit
mask length (0-128).
>>> validate_cidr('::/128')
True
>>> validate_cidr('::/0')
True
>>> validate_cidr('... |
Convert a CIDR notation ip address into a tuple containing the network
block start and end addresses.
>>> cidr2block('2001:db8::/48')
('2001:db8::', '2001:db8:0:ffff:ffff:ffff:ffff:ffff')
>>> cidr2block('::/0')
('::', 'ffff:ffff:ffff:ffff:ffff:ffff:ffff:ffff')
:param cidr: CIDR notation ip a... |
Parses config file to pull sample information.
Stores samples as tuples of (uuid, URL)
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
def parse_input_samples(job, inputs):
"""
Parses config file to pull sample information.... |
Download the input sample
:param JobFunctionWrappingJob job: passed by Toil automatically
:param tuple sample: Tuple containing (UUID,URL) of a sample
:param Namespace inputs: Stores input arguments (see main)
def download_sample(job, sample, inputs):
"""
Download the input sample
:param JobF... |
Converts sample.tar(.gz) into two fastq files.
Due to edge conditions... BEWARE: HERE BE DRAGONS
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
:param str tar_id: FileStore ID of sample tar
def process_sample(job, inputs, tar_... |
Filters out adapters that may be left in the RNA-seq files
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
:param str r1_id: FileStore ID of read 1 fastq
:param str r2_id: FileStore ID of read 2 fastq
def cutadapt(job, inputs, ... |
Performs alignment of fastqs to BAM via STAR
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
:param str r1_cutadapt: FileStore ID of read 1 fastq
:param str r2_cutadapt: FileStore ID of read 2 fastq
def star(job, inputs, r1_cut... |
Perform variant calling with samtools and QC with CheckBias
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
:param str bam_id: FileStore ID of bam
:param str bai_id: FileStore ID of bam index file
:return: FileStore ID of qc... |
Run SplAdder to detect and quantify alternative splicing events
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
:param str bam_id: FileStore ID of bam
:param str bai_id: FileStore ID of bam index file
:return: FileStore ID o... |
Combine the contents of separate tarballs into one.
:param JobFunctionWrappingJob job: passed by Toil automatically
:param Namespace inputs: Stores input arguments (see main)
:param str vcqc_id: FileStore ID of variant calling and QC tarball
:param str spladder_id: FileStore ID of spladder tarball
def... |
This Toil pipeline aligns reads and performs alternative splicing analysis.
Please read the README.md located in the same directory for run instructions.
def main():
"""
This Toil pipeline aligns reads and performs alternative splicing analysis.
Please read the README.md located in the same directory... |
Validate a dotted-quad ip address.
The string is considered a valid dotted-quad address if it consists of
one to four octets (0-255) separated by periods (.).
>>> validate_ip('127.0.0.1')
True
>>> validate_ip('127.0')
True
>>> validate_ip('127.0.0.256')
False
>>> validate_ip(LOCAL... |
Validate that a dotted-quad ip address is a valid netmask.
>>> validate_netmask('0.0.0.0')
True
>>> validate_netmask('128.0.0.0')
True
>>> validate_netmask('255.0.0.0')
True
>>> validate_netmask('255.255.255.255')
True
>>> validate_netmask(BROADCAST)
True
>>> validate_netma... |
Validate a dotted-quad ip address including a netmask.
The string is considered a valid dotted-quad address with netmask if it
consists of one to four octets (0-255) separated by periods (.) followed
by a forward slash (/) and a subnet bitmask which is expressed in
dotted-quad format.
>>> validat... |
Convert a dotted-quad ip address to a network byte order 32-bit
integer.
>>> ip2long('127.0.0.1')
2130706433
>>> ip2long('127.1')
2130706433
>>> ip2long('127')
2130706432
>>> ip2long('127.0.0.256') is None
True
:param ip: Dotted-quad ip address (eg. '127.0.0.1').
:type ip... |
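The partial-address behavior shown in the doctests above (a lone quad is a network, while the last quad of a short form is the host) can be sketched as:

```python
def ip2long(ip):
    """Dotted-quad (possibly partial) IPv4 address -> 32-bit integer.

    Sketch following the doctests: '127' -> 127.0.0.0, while
    '127.1' -> 127.0.0.1 (last quad is the host part).
    """
    quads = ip.split('.')
    if len(quads) > 4:
        return None
    if len(quads) == 1:
        quads += ['0', '0', '0']          # lone quad is all network
    elif len(quads) < 4:
        # last supplied quad is the host; zero-fill the middle
        quads = quads[:-1] + ['0'] * (4 - len(quads)) + quads[-1:]
    value = 0
    for q in quads:
        if not q.isdigit() or not 0 <= int(q) <= 255:
            return None
        value = (value << 8) | int(q)
    return value
```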
Convert a dotted-quad ip to base network number.
This differs from :func:`ip2long` in that partial addresses are treated as
all network instead of network plus host (eg. '127.1' expands to
'127.1.0.0')
:param ip: dotted-quad ip address (eg. '127.0.0.1').
:type ip: str
:returns: Network byte ord... |
Convert a network byte order 32-bit integer to a dotted quad ip
address.
>>> long2ip(2130706433)
'127.0.0.1'
>>> long2ip(MIN_IP)
'0.0.0.0'
>>> long2ip(MAX_IP)
'255.255.255.255'
>>> long2ip(None) #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
T... |
Convert a CIDR notation ip address into a tuple containing the network
block start and end addresses.
>>> cidr2block('127.0.0.1/32')
('127.0.0.1', '127.0.0.1')
>>> cidr2block('127/8')
('127.0.0.0', '127.255.255.255')
>>> cidr2block('127.0.1/16')
('127.0.0.0', '127.0.255.255')
>>> cidr2... |
Convert a dotted-quad ip address including a netmask into a tuple
containing the network block start and end addresses.
>>> subnet2block('127.0.0.1/255.255.255.255')
('127.0.0.1', '127.0.0.1')
>>> subnet2block('127/255')
('127.0.0.0', '127.255.255.255')
>>> subnet2block('127.0.1/255.255')
... |
Create a tuple of (start, end) dotted-quad addresses from the given
ip address and prefix length.
:param ip: Ip address in block
:type ip: long
:param prefix: Prefix size for block
:type prefix: int
:returns: Tuple of block (start, end)
def _block_from_ip_and_prefix(ip, prefix):
"""Create ... |
Downloads files shared by all samples in the pipeline
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Argparse Namespace object containing argument inputs
:param list[list] samples: A nested list of samples containing sample information
def download_shared_files(jo... |
Spawn the jobs that create index and dict file for reference
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Argparse Namespace object containing argument inputs
:param list[list] samples: A nested list of samples containing sample information
def reference_preproc... |
Download sample and store sample specific attributes
:param JobFunctionWrappingJob job: passed automatically by Toil
:param list sample: Contains uuid, normal URL, and tumor URL
:param Namespace config: Argparse Namespace object containing argument inputs
def download_sample(job, sample, config):
"""
... |
Convenience job for handling bam indexing to make the workflow declaration cleaner
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Argparse Namespace object containing argument inputs
def index_bams(job, config):
"""
Convenience job for handling bam indexing to... |
Declare jobs related to preprocessing
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Argparse Namespace object containing argument inputs
def preprocessing_declaration(job, config):
"""
Declare jobs related to preprocessing
:param JobFunctionWrappingJob j... |
Statically declare workflow so sections can be modularly repurposed
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Argparse Namespace object containing argument inputs
:param str normal_bam: Normal BAM FileStoreID
:param str normal_bai: Normal BAM index FileSto... |
Combine the contents of separate tarball outputs into one via streaming
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace config: Argparse Namespace object containing argument inputs
:param str mutect: MuTect tarball FileStoreID
:param str pindel: Pindel tarball FileStore... |
Parses samples, specified in either a manifest or listed with --samples
:param str path_to_manifest: Path to configuration file
:return: Samples and their attributes as defined in the manifest
:rtype: list[list]
def parse_manifest(path_to_manifest):
"""
Parses samples, specified in either a manife... |
Computational Genomics Lab, Genomics Institute, UC Santa Cruz
Toil exome pipeline
Perform variant / indel analysis given a pair of tumor/normal BAM files.
Samples are optionally preprocessed (indel realignment and base quality score recalibration)
The output of this pipeline is a tarball containing res... |
Downloads shared files that are used by all samples for alignment, or generates them if they were not provided.
:param JobFunctionWrappingJob job: passed automatically by Toil
:param Namespace inputs: Input arguments (see main)
:param list[list[str, list[str, str]]] samples: Samples in the format [UUID, [U... |
Downloads the sample and runs BWA-kit
:param JobFunctionWrappingJob job: Passed by Toil automatically
:param tuple(str, list) sample: UUID and URLS for sample
:param Namespace inputs: Contains input arguments
:param dict ids: FileStore IDs for shared inputs
def download_sample_and_align(job, sample, i... |
Parse manifest file
:param str manifest_path: Path to manifest file
:return: samples
:rtype: list[str, list]
def parse_manifest(manifest_path):
"""
Parse manifest file
:param str manifest_path: Path to manifest file
:return: samples
:rtype: list[str, list]
"""
samples = []
... |
Computational Genomics Lab, Genomics Institute, UC Santa Cruz
Toil BWA pipeline
Alignment of fastq reads via BWA-kit
General usage:
1. Type "toil-bwa generate" to create an editable manifest and config in the current working directory.
2. Parameterize the pipeline by editing the config.
3. Fil... |
Convert an address string to a long.
def _address2long(address):
"""
Convert an address string to a long.
"""
parsed = ipv4.ip2long(address)
if parsed is None:
parsed = ipv6.ip2long(address)
return parsed |