Workflows
Classification and visualization of SSU and LSU sequences.
Type: Galaxy
Creators: Rand Zoabi, Paul Zierep, EMBL's European Bioinformatics Institute
Submitter: WorkflowHub Bot
This workflow creates taxonomic summary tables from the amplicon pipeline results.
This workflow uses eggNOG-mapper and InterProScan for functional annotation of protein sequences.
This workflow performs subtyping and consensus sequence generation for batches of Illumina paired-end sequenced Influenza A isolates.
Single-cell RNA-seq workflow with Scanpy and AnnData, based on the 3k PBMC clustering tutorial from Scanpy. It takes count matrix, barcode, and feature files as input and creates an AnnData object from them. It then performs QC and filters out lowly expressed genes and low-quality cells. The data is normalized and scaled, principal components are computed, and the cells are clustered with the Louvain algorithm. It also generates various clustering plots colored by highly ranked genes. A minimal Scanpy sketch of these steps is shown after this entry.
Type: Galaxy
Creators: Pavankumar Videm, Hans-Rudolf Hotz, Mehmet Tekman, Bérénice Batut
Submitter: WorkflowHub Bot
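As an illustration only, the Python sketch below approximates the steps listed in this description using the Scanpy API; the input path, filtering thresholds, number of PCs, and other parameters are assumptions made for the sketch, not values taken from the Galaxy workflow.

    import scanpy as sc

    # Build an AnnData object from matrix, barcodes, and features files
    adata = sc.read_10x_mtx("filtered_feature_bc_matrix/")  # hypothetical input directory

    # QC: filter lowly expressed genes and low-quality cells
    sc.pp.filter_cells(adata, min_genes=200)
    sc.pp.filter_genes(adata, min_cells=3)

    # Normalize, log-transform, select variable genes, and scale
    sc.pp.normalize_total(adata, target_sum=1e4)
    sc.pp.log1p(adata)
    sc.pp.highly_variable_genes(adata, n_top_genes=2000)
    adata = adata[:, adata.var.highly_variable]
    sc.pp.scale(adata, max_value=10)

    # Principal components, neighborhood graph, and Louvain clustering
    sc.tl.pca(adata, svd_solver="arpack")
    sc.pp.neighbors(adata, n_neighbors=10, n_pcs=40)
    sc.tl.louvain(adata)  # requires the louvain package

    # Rank marker genes per cluster and plot clusters and highly ranked genes
    sc.tl.umap(adata)
    sc.tl.rank_genes_groups(adata, "louvain", method="t-test")
    sc.pl.umap(adata, color="louvain")
    sc.pl.rank_genes_groups(adata, n_genes=20)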
This workflow allows you to annotate a genome with Helixer and evaluate the quality of the annotation using BUSCO and Genome Annotation statistics. GFFRead is also used to predict protein sequences derived from this annotation, and BUSCO and OMArk are used to assess proteome quality.
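For orientation only, here is a rough Python sketch of how such an annotation-and-evaluation chain could be driven from the command line. The flags shown for Helixer.py, gffread, and busco reflect those tools' public CLIs to the best of recollection, and the lineage, database, and file names are placeholders rather than parameters taken from the Galaxy workflow; the OMArk step is only indicated.

    import subprocess

    genome = "genome.fa"  # placeholder input assembly

    # Structural annotation with Helixer (lineage model is a placeholder)
    subprocess.run(["Helixer.py", "--fasta-path", genome, "--lineage", "land_plant",
                    "--gff-output-path", "helixer.gff3"], check=True)

    # Predict protein sequences from the annotation with GFFRead
    subprocess.run(["gffread", "-y", "proteins.fa", "-g", genome, "helixer.gff3"], check=True)

    # Assess genome and proteome completeness with BUSCO
    subprocess.run(["busco", "-i", genome, "-m", "genome",
                    "-l", "eudicots_odb10", "-o", "busco_genome"], check=True)
    subprocess.run(["busco", "-i", "proteins.fa", "-m", "proteins",
                    "-l", "eudicots_odb10", "-o", "busco_proteins"], check=True)

    # An OMArk run on proteins.fa (via an OMAmer search) would complete the proteome assessment.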
Workflow for clinical metaproteomics database searching
This workflow will perform taxonomic and functional annotations using Unipept and statistical analysis using MSstatsTMT.
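As a small illustrative aside (not part of the Galaxy workflow, which uses dedicated Unipept tools), peptide-level taxonomy can also be retrieved programmatically from the public Unipept API; the peptides below are placeholders.

    import requests

    peptides = ["AALTER", "MDGTEYIIVK"]  # placeholder tryptic peptides
    response = requests.get(
        "https://api.unipept.ugent.be/api/v1/pept2lca.json",
        params={"input[]": peptides, "equate_il": "true"},
    )
    response.raise_for_status()
    for hit in response.json():
        # each hit reports the lowest common ancestor (LCA) for one peptide
        print(hit["peptide"], hit.get("taxon_name"), hit.get("taxon_rank"))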
In proteomics research, verifying detected peptides is essential for ensuring data accuracy and biological relevance. This tutorial continues from the clinical metaproteomics discovery workflow, focusing on verifying identified microbial peptides using the PepQuery tool.
The workflow begins with the Database Generation process. The Galaxy-P team has developed a workflow that collects protein sequences from known disease-causing microorganisms to build a comprehensive database. This extensive database is then refined into a smaller, more relevant dataset using the MetaNovo tool.
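As a generic illustration only (the directory layout and file names are assumptions, and the actual workflow relies on Galaxy tools for this step), pooling per-organism protein FASTA files into a single non-redundant database, before MetaNovo reduces it further, might look like this:

    from pathlib import Path

    seen = set()
    with open("combined_database.fasta", "w") as out:
        for fasta in sorted(Path("microbial_proteomes").glob("*.fasta")):
            # split each file into FASTA records (every record starts with '>')
            for record in fasta.read_text().split(">")[1:]:
                if not record.strip():
                    continue
                accession = record.splitlines()[0].split()[0]
                if accession not in seen:  # keep only the first copy of each accession
                    seen.add(accession)
                    out.write(">" + record.rstrip("\n") + "\n")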