Workflows
CWL-based workflow to assemble haploid/diploid eukaryote genomes of non-model organisms
The workflow is designed to use both PacBio long reads and Illumina short reads. It first extracts, corrects, trims and decontaminates the long reads. The decontaminated, trimmed reads are then used to assemble the genome, and the raw reads are used to polish it. Next, Illumina reads are cleaned and used to further polish the resulting assembly. Finally, the polished assembly is masked using inferred repeats ...
BridgeDb tutorial: Gene HGNC name to Ensembl identifier
This tutorial explains how to use the BridgeDb identifier mapping service to translate HGNC names to Ensembl identifiers. This step is part of the OpenRiskNet use case to link Adverse Outcome Pathways to WikiPathways.
First, we load the Python requests library so that we can call the BridgeDb REST web service:
import requests
Let's assume we're interested ...
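Since the tutorial is truncated here, the following is only a minimal sketch of how such a BridgeDb call could look. The endpoint URL, the 'H' (HGNC) system code, the "Ensembl" data-source name and the example gene MECP2 are assumptions based on the public BridgeDb web service, not values taken from this tutorial.

```python
import requests

gene = "MECP2"  # hypothetical example HGNC name; replace with the gene of interest
# Assumed public BridgeDb endpoint: /{organism}/xrefs/{systemCode}/{identifier},
# where 'H' is the system code for HGNC symbols.
url = f"https://webservice.bridgedb.org/Human/xrefs/H/{gene}"

response = requests.get(url)
response.raise_for_status()

# The service answers with tab-separated "identifier<TAB>datasource" lines;
# keep only the mappings whose data source is Ensembl.
for line in response.text.strip().splitlines():
    parts = line.split("\t")
    if len(parts) == 2 and parts[1] == "Ensembl":
        print(gene, "->", parts[0])
```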
eQTL-Catalogue/qtlmap
Portable eQTL analysis and statistical fine mapping workflow used by the eQTL Catalogue
Introduction
eQTL-Catalogue/qtlmap is a bioinformatics analysis pipeline for QTL analysis.
The workflow takes a phenotype count matrix (normalized and quality controlled) and genotype data as input and finds associations between them with the help of sample metadata and phenotype metadata files (see Input formats and preparation for the required input ...
This workflow demonstrates the usage of EODIE, a toolkit to extract object-based time-series information from Earth Observation data.
The EODIE code can be found on GitLab.
The goal of EODIE is to ease the extraction of time-series information at the object level. Today, vast amounts of Earth Observation data are available to users via, for example, earth explorer ...
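EODIE's own interface is not shown in this excerpt, so the snippet below is only a conceptual illustration of what "object-based time-series extraction" means: aggregating pixel values inside each object's footprint for every acquisition date. All arrays and names are hypothetical and do not reflect EODIE's API.

```python
import numpy as np

# Hypothetical stack of Earth Observation rasters with shape (time, rows, cols),
# e.g. one NDVI value per pixel for each acquisition date.
dates = ["2021-05-01", "2021-05-11", "2021-05-21"]
stack = np.random.rand(len(dates), 100, 100)

# Hypothetical object raster: each field/parcel ("object") has an integer id, 0 = background.
objects = np.zeros((100, 100), dtype=int)
objects[10:40, 10:40] = 1
objects[50:90, 60:95] = 2

# Object-based time series: one aggregated value per object per date.
for obj_id in np.unique(objects):
    if obj_id == 0:
        continue
    series = {date: float(stack[t][objects == obj_id].mean())
              for t, date in enumerate(dates)}
    print(f"object {obj_id}:", series)
```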
Summary
This notebook demonstrates how to recreate lineages published in the paper Live imaging of remyelination in the adult mouse corpus callosum and available at idr0113-bottes-opcclones.
The lineage is created from the metadata associated with the specified image.
To load the data from the Image Data Resource, we use:
- the Python API ...
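As the list of tools is truncated here, the snippet below is only a minimal sketch of the usual way to reach IDR data with the Python API (omero-py's BlitzGateway). The public credentials, the websocket host and the image id are assumptions, not values taken from this notebook.

```python
from omero.gateway import BlitzGateway, MapAnnotationWrapper

HOST = "ws://idr.openmicroscopy.org/omero-ws"  # assumed public IDR endpoint
conn = BlitzGateway("public", "public", host=HOST, secure=True)
conn.connect()

try:
    image_id = 12345  # hypothetical id of an idr0113 image; replace as needed
    image = conn.getObject("Image", image_id)
    if image is not None:
        print(image.getName())
        # The lineage is built from metadata attached to the image;
        # here we simply print the key-value annotations found on it.
        for ann in image.listAnnotations():
            if isinstance(ann, MapAnnotationWrapper):
                print(ann.getValue())  # list of (key, value) pairs
finally:
    conn.close()
```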
ASPICov was developed to provide biologists with a rapid, reliable and complete analysis of NGS SARS-CoV-2 samples. This broadly applicable tool can process samples from either a capture or an amplicon strategy and from Illumina or Ion Torrent technology. To ensure FAIR data analysis, this Nextflow pipeline follows the nf-core guidelines and uses Singularity containers.
Availability and Implementation: https://gitlab.com/vtilloy/aspicov
Citation: Valentin Tilloy, Pierre Cuzin, Laura Leroi, Emilie Guérin, ...
Type: Nextflow
Creators: Valentin Tilloy, Pierre Cuzin, Laura Leroi, Patrick Durand, Sophie Alain
Submitter: Valentin Tilloy

Snakemake workflow: FAIR CRCC - send data
A Snakemake workflow for securely sharing Crypt4GH-encrypted sensitive data from the CRC Cohort ...
polya_liftover - sc/snRNAseq Snakemake Workflow
A [Snakemake][sm] workflow for using PolyA_DB and UCSC Liftover with Cellranger.
Some genes are not accurately annotated in the reference genome. Here, we use information provided by [PolyA_DB v3.2][polya] to update the coordinates, then the [UCSC Liftover][liftover] tool to update to a more recent genome. Next, we use [Cellranger][cr] to create the reference and count matrix. Finally, by taking advantage of the integrated [Conda][conda] and ...
RNA-Seq pipeline
Here we provide the tools to perform paired-end or single-read RNA-Seq analysis, including raw data quality control, differential expression (DE) analysis and functional annotation. As input files you may use either gzipped FASTQ files (.fastq.gz) or mapped read data (.bam files). In the case of paired-end reads, corresponding FASTQ files should be named using the .R1.fastq.gz and .R2.fastq.gz suffixes (see the pairing sketch below).
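The .R1/.R2 naming convention determines how mates are paired. As an illustration only (this helper is hypothetical and not part of the pipeline), the sketch below pairs FASTQ files found in an input directory according to that convention.

```python
from pathlib import Path

input_dir = Path("raw_data")  # assumed location of the .fastq.gz input files

# Pair sampleX.R1.fastq.gz with sampleX.R2.fastq.gz; samples without a mate
# would be handled as single-read data.
pairs = {}
for r1 in sorted(input_dir.glob("*.R1.fastq.gz")):
    sample = r1.name[: -len(".R1.fastq.gz")]
    r2 = input_dir / f"{sample}.R2.fastq.gz"
    pairs[sample] = (r1, r2 if r2.exists() else None)

for sample, (r1, r2) in pairs.items():
    mode = "paired-end" if r2 else "single-read"
    print(f"{sample}: {mode}")
```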
Pipeline Workflow
All analysis steps are illustrated in the pipeline ...