  • Background
    Monitoring hard-bottom marine biodiversity can be challenging, as it often involves non-standardised sampling methods that limit scalability and inter-comparison across different monitoring approaches. It is therefore essential to implement standardised techniques when assessing the status of, and changes in, marine communities, in order to provide the correct information to support management policies and decisions, and to ensure the most appropriate level of protection for the biodiversity of each ecosystem. Biomonitoring methods need to comply with a number of criteria, including the implementation of broadly accepted standards and protocols and the collection of FAIR data (Findable, Accessible, Interoperable, and Reusable).
    Introduction
    Artificial substrates represent a promising tool for monitoring the community assemblages of hard-bottom habitats with a standardised methodology. The European ARMS project is a long-term observatory network in which about 20 institutions distributed across 14 European countries, including Greenland and Antarctica, collaborate. The network consists of Autonomous Reef Monitoring Structures (ARMS) deployed in the proximity of marine stations and Long-Term Ecological Research sites. ARMS units are passive monitoring systems made of stacked settlement plates placed on the sea floor. The three-dimensional structure of the settlement units mimics the complexity of marine substrates and attracts sessile and motile benthic organisms. After a certain period of time these structures are retrieved, and visual, photographic, and genetic (DNA metabarcoding) assessments are made of the lifeforms that have colonised them. These data are used to systematically assess the status of, and changes in, the hard-bottom communities of near-coast ecosystems.
    Aims
    ARMS data are quality controlled and open access, and they are permanently stored in the Marine Data Archive along with their metadata (IMIS, the catalogue of VLIZ), ensuring that the data are FAIR. Data from ARMS observatories provide a promising early-warning system for marine biological invasions by: i) identifying newly arrived Non-Indigenous Species (NIS) at each ARMS site; ii) tracking the migration of already known NIS in European continental waters; iii) monitoring the composition of hard-bottom communities over longer periods; and iv) identifying the Essential Biodiversity Variables (EBVs) for hard-bottom fauna, including NIS. The ARMS validation case was conceived to achieve these objectives: a data-analysis workflow was developed to process raw genetic data from ARMS; end-users can select ARMS samples from the ever-growing number available in the collection; and raw DNA sequences are analysed using a bioinformatic pipeline (P.E.M.A.) embedded in the workflow for taxonomic identification. In the data-analysis workflow, the correct identification of taxa at each specific location is made with reference to WoRMS and WRiMS, webservices used to check, respectively, the identity of the organisms and whether they are introduced.
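The taxonomic check described above can be sketched as a name lookup against the public WoRMS REST interface. The snippet below only builds the query URL and shows how the relevant fields of a returned Aphia record could be read; the endpoint path follows the documented WoRMS REST service, but the helper names and the example record values are illustrative assumptions, not part of the ARMS workflow itself.

```python
from urllib.parse import quote

# Base URL of the WoRMS REST service (public API; the endpoint name below
# follows the documented WoRMS REST interface).
WORMS_REST = "https://www.marinespecies.org/rest"

def records_by_name_url(scientific_name: str, marine_only: bool = True) -> str:
    """Build the WoRMS 'AphiaRecordsByName' query URL for one taxon name."""
    flag = "true" if marine_only else "false"
    return (f"{WORMS_REST}/AphiaRecordsByName/{quote(scientific_name)}"
            f"?like=false&marine_only={flag}")

def accepted_name(record: dict) -> str:
    """Extract the accepted (valid) name from one Aphia record,
    falling back to the name as given when no valid_name is present."""
    return record.get("valid_name") or record.get("scientificname")

# Example record shaped like a WoRMS response (illustrative values only):
example = {"AphiaID": 12345, "scientificname": "Mytilus edulis",
           "status": "accepted", "valid_name": "Mytilus edulis"}
print(records_by_name_url("Mytilus edulis"))
print(accepted_name(example))
```

A real run would fetch the URL (e.g. with `urllib.request.urlopen`) and apply `accepted_name` to each JSON record; the WRiMS check for introduced status works analogously against the WRiMS services.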

  • This service aims at running various statistical analyses in RvLab on the data produced in this workflow. It represents the Step 10 of the ARMS Workflow within the Internal Joint Initiative.

  • This service aims at creating an OTU table with species information that can be processed by WoRMS and WRiMS (steps 7 and 8). It represents the Step 6 of the ARMS Workflow within the Internal Joint Initiative.

  • This service aims at uploading a local file in the ARMS validation case. It represents the Step 2 of the ARMS Workflow within the Internal Joint Initiative.

  • This service aims at running PEMA on all the sequences from the samples selected in Step 4 (ARMS Choose and Parameterize) of the ARMS workflow. It additionally requires a parameter file (a TSV file). The service can take some hours to run. It represents the Step 5.2 of the ARMS Workflow within the Internal Joint Initiative.

  • This service aims at reformatting and producing the final output in various formats for human and machine-to-machine reading. It represents the Step 9 of the ARMS Workflow within the Internal Joint Initiative.

  • The output from PEMA is organised into a directory structure that depends on the details in the Parameter.tsv file (e.g. which sequence type was processed, what algorithm was chosen, etc.).

  • The output file of the "ARMS OTU Unifier" service (step 6) and the input file of the "WoRMS Taxonomic Checker" (step 7) of the ARMS Workflow within the Internal Joint Initiative. It is a CSV file with species information, a sort of OTU table, that is prepared to be processed by WoRMS and WRiMS (steps 7 and 8).

  • This service allows the user to choose which column of the MasterARMS file contains the specific data to process. Moreover, it allows additional files or arguments to be provided as parameters. It represents the Step 4 of the ARMS Workflow within the Internal Joint Initiative.

  • It is a CSV file whose number of columns depends on the number of sequences/samples processed in the PEMA run. The first column is an ID. The final column contains the species information. The columns in between, one per processed sequence (sample), contain integer values; each of these columns is headed by the ENA code of its sequence (the same as in the MasterARMS CSV file).
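The table layout described above can be illustrated with a minimal sketch using only the Python standard library. The ENA codes and counts below are made up for illustration; only the column layout (ID first, one integer column per sample, species last) follows the description.

```python
import csv
import io

# A minimal OTU-table CSV of the shape described above: an ID column,
# one integer-count column per sample (headed by its ENA code; the codes
# here are invented for the example), and a final species column.
otu_csv = """ID,ERS0000001,ERS0000002,Species
OTU_1,12,0,Mytilus edulis
OTU_2,3,7,Ciona intestinalis
"""

rows = list(csv.DictReader(io.StringIO(otu_csv)))

# Every column that is neither the ID nor the species holds sample counts.
sample_cols = [c for c in rows[0] if c not in ("ID", "Species")]

# Total read count per sample (column-wise sum of the integer cells).
totals = {c: sum(int(r[c]) for r in rows) for c in sample_cols}
print(totals)  # → {'ERS0000001': 15, 'ERS0000002': 7}
```

Reading the file with `csv.DictReader` keeps the sample columns addressable by their ENA codes, which is convenient when joining back to the MasterARMS CSV.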