# 1
Böhnert, Tim • Merklinger, Felix F. • Stoll, Alexandra • Weigend, Maximilian • Quandt, Dietmar • (et al.)
Abstract: The Atacama Desert, located on the western side of the Andes in northern Chile, harbours a range of endemic species adapted to hyperarid conditions. Vegetation is largely restricted to coastal fog oases and the Andean foothills, which are separated by a largely vegetation-free zone. Diversifications have been shown to be surprisingly recent in some Atacama clades, which is at odds with the extremely long history of aridity documented for this region. Here, we report the results of a molecular dating analysis of the Atacama genus Cristaria (Malvaceae) and its East Andean sister genus Lecanophora based on plastid sequence data.
# 2
Curdt, Constanze
Abstract: The TR32DB Metadata Schema is a structured list of metadata properties chosen to describe all data in the TR32DB with accurate metadata properties and thus to improve their searchability. All data provided to the TR32DB can be described with a number of descriptive metadata properties (e.g. creator, title, abstract, keywords, etc.) and administrative or technical properties (e.g. file format, file type, rights statement, etc.). The stored data are organized in six main data type categories: Data, Geodata, Report, Picture, Presentation, and Publication. The TR32DB Metadata Schema is set up in two levels to describe the various types of data collected by the CRC/TR32 participants. The first level is the General level. This level includes metadata properties classified in seven categories: Identification, Responsible Party, Topic, File Details, Constraints, Geographic, and automatically generated Metadata Details. The second level is the Specific level and contains the data-type-specific metadata properties. Currently, six data types are included: Data, Geodata, Report, Picture, Presentation, and Publication. Publication takes a special position and is further sub-divided into the sub-categories: Article, Book, Book Section, and Event Paper.
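The two-level structure described above can be sketched in code. This is a hypothetical illustration of how such a record could be organised; the category and type names come from the abstract, but the function and field names are invented for this sketch and are not the TR32DB API.

```python
# Hypothetical sketch of a TR32DB-style two-level metadata record:
# a General level shared by all data types, plus a Specific level
# keyed by data type (and sub-category for Publications).

GENERAL_CATEGORIES = [
    "Identification", "Responsible Party", "Topic", "File Details",
    "Constraints", "Geographic", "Metadata Details",
]
DATA_TYPES = ["Data", "Geodata", "Report", "Picture", "Presentation", "Publication"]
PUBLICATION_SUBTYPES = ["Article", "Book", "Book Section", "Event Paper"]

def new_record(data_type, subtype=None):
    """Create an empty two-level metadata record for the given data type."""
    if data_type not in DATA_TYPES:
        raise ValueError(f"unknown data type: {data_type}")
    if data_type == "Publication" and subtype not in PUBLICATION_SUBTYPES:
        raise ValueError("Publication records require a valid sub-category")
    record = {
        "general": {category: {} for category in GENERAL_CATEGORIES},
        "specific": {"type": data_type},
    }
    if subtype is not None:
        record["specific"]["subtype"] = subtype
    return record

# Fill in a General-level property on a Publication/Article record:
record = new_record("Publication", subtype="Article")
record["general"]["Identification"]["title"] = "Example dataset"
```

The point of the split is that search tools can index the seven General categories uniformly across all data types, while type-specific fields stay isolated in the Specific level.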
# 3
Shrestha, Prabhakar • Sulis, Mauro
Abstract: A training manual for using the Terrestrial Systems Modeling Platform (TerrSysMP) as a numerical tool to develop a quantitative understanding of the complex soil-vegetation-atmosphere (SVA) interaction. The idealized setups in this manual focus on the impact of root-zone soil moisture fluctuation on the diurnal energy partitioning and its feedback to the atmospheric boundary layer (ABL) dynamics. The interplay between subsurface hydrodynamics (e.g., the (ψ-θ_vol) relationship) and plant physiological properties (e.g., roughness, stomata conductance, root depth) is addressed using different plant functional types (PFTs) and soil textures (i.e., percentage of clay and sand).
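The (ψ-θ_vol) relationship mentioned above is the soil water retention curve. A common parameterisation is the van Genuchten model; this sketch uses that standard formula with illustrative parameters for a loamy soil (the manual's exact parameterisation and parameter values may differ).

```python
# Van Genuchten soil water retention curve: volumetric water content
# theta as a function of matric potential psi (psi <= 0 when unsaturated).

def van_genuchten_theta(psi, theta_r, theta_s, alpha, n):
    """theta(psi) with residual/saturated contents theta_r/theta_s,
    shape parameters alpha [1/m] and n (dimensionless), psi in m."""
    if psi >= 0:  # at or above saturation
        return theta_s
    m = 1.0 - 1.0 / n
    se = (1.0 + (alpha * abs(psi)) ** n) ** (-m)  # effective saturation
    return theta_r + (theta_s - theta_r) * se

# Illustrative loam-like parameters: drier soil (more negative psi)
# holds less water, which drives the energy partitioning feedbacks.
wet = van_genuchten_theta(-0.1, theta_r=0.078, theta_s=0.43, alpha=3.6, n=1.56)
dry = van_genuchten_theta(-10.0, theta_r=0.078, theta_s=0.43, alpha=3.6, n=1.56)
```

Varying alpha and n per soil texture (clay vs. sand fraction) is one way such setups explore the hydrodynamics side of the SVA interplay.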
# 4
Huber, Katrin • Vanderborght, Jan • Javaux, Mathieu • Schnepf, Andrea • Schröder, Natalie • (et al.)
Abstract: R-SWMS is a numerical model for simulating solute transport and water flow in and between the soil and the plant systems. The acronym stands for modeling of “Root-Soil Water Movement and Solute transport”. Based on the flow and transport equations in the 3D soil matrix and within the 3D root xylem network, it simulates the uptake of solute and water by plant roots for a growing plant. Three-dimensional root growth is a function of environmental conditions (soil strength, temperature) and plant parameters (gravitropism, sensitivity to strength, etc.). The code has been used in several projects and labs around the world. An updated list of publications dealing with R-SWMS can be found at https://www.zotero.org/groups/r-swms. The handbook includes theory, numerics, input files, output files, installation, and some example calculations.
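Water flow in the 3D soil matrix of models of this kind is commonly described by the Richards equation with a sink term for root water uptake. This is the standard formulation; the handbook's exact notation and closure relations may differ.

```latex
\frac{\partial \theta}{\partial t}
  = \nabla \cdot \bigl[ K(\psi)\, \nabla (\psi + z) \bigr] - S(\mathbf{x}, t)
```

Here θ is the volumetric water content, ψ the soil water pressure head, K(ψ) the unsaturated hydraulic conductivity, z the elevation head, and S the sink term representing root water uptake, which couples the soil domain to the root xylem network.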
# 5
Willmes, Christian • Yener, Yasa • Gilgenberg, Anton • Bareth, Georg
Abstract: The Collaborative Research Centre 806 database (CRC806-Database, http://crc806db.uni-koeln.de) has been online and in operation since 2011. The architecture consists of a Typo3 based website frontend, a CKAN based metadata storage and an OGC compliant Spatial Data Infrastructure (SDI). It was decided to update the system with some major changes to the overall architecture, while preserving the current API functionality and the URLs of the datasets in the database. This paper describes the system architecture of the partly new implementation of the CRC806-Database. The SDI part of the system is migrated from the current MapServer, GeoServer, MapProxy and pyCSW based implementation to a GeoNode based system. Additionally, the Typo3 based frontend of the web portal is changed to use mostly server side Extbase and Fluid based content handling and rendering, instead of the current AngularJS based frontend. Due to stability and consistency difficulties of client side rendering, we decided to build a more robust system and move to server side rendering. The reasons for migrating to GeoNode for the SDI stack and away from JavaScript based client side rendering to server side rendering are discussed by taking into account the pros and cons of both approaches, as well as a list of lessons learned from the ongoing development and operation of the CRC806-Database.
Proceedings of the 2nd Data Management Workshop, 28.-29.11.2014, University of Cologne, Germany, Kölner Geographische Arbeiten, 96, pp. 115-126
# 6
Weber, Andreas • Piesche, Claudia
Abstract: The discussion about appropriate long-term accessibility of research results has gained importance. In particular, the requirements concerning the long-term preservation of research data from research projects funded, for instance, by the DFG have to be taken into account. While portals for globally standardised research data, e.g. climate data, are available, there is currently no provision for the large amount of data resulting from specialised research in individual research foci. In these cases, the requirements for long-term preservation have to be met by local solutions. In addition to the permanent storage of primary data and associated metadata, important steps of the genesis and the transformation process of published research results should also be incorporated into these individual solutions. Within the scope of the sub-project ‘INF Z2’ of the Collaborative Research Centre (CRC) 840, an infrastructure is designed that permits long-term preservation and retrieval of research data created within the CRC. Additionally, it allows for the reconstruction of the genesis of published research results.
Proceedings of the 2nd Data Management Workshop, 28.-29.11.2014, University of Cologne, Germany, Kölner Geographische Arbeiten, 96, pp. 103-114
# 7
Stamnas, Erasmia • Lammert, Andrea • Winkelmann, Volker • Grützun, Verena • Lang, Ulrich • (et al.)
Abstract: Central Europe is a region with one of the most comprehensive networks for cloud and precipitation observations worldwide. Unifying these observations into an ‘easy-to-use’ database and making it accessible to the climate community is one of the goals of HD(CP)². Therefore, widely scattered observation data have to be organised, ranging from multivariate, long-term observations at dedicated supersites, to short-term area-wide remote sensing observations, up to high resolution satellite data. HD(CP)² has established a structure of distributed data servers with a common web portal as the global entry point. The central administration of these servers is based on a hierarchical data management system called Thematic Realtime Environmental Distributed Data Services (THREDDS) from Unidata, which is also used by the ESGF and the ARM program. To ensure that observation data from the multitude of instrument types can be used in a uniform manner, we have developed the HD(CP)² Data Product Standard. This document describes the binding conventions for the datasets, i.e. file names and formats, variable names and metadata conventions which shall apply to data sets and associated metadata intended for HD(CP)². The data files are provided in NetCDF format, following the principles given in the NetCDF Climate and Forecast (CF) metadata conventions, version 1.6, as far as possible. For the generation of metadata files, encoded in XML, HD(CP)² has developed its own web-based Metadata Editor.
Proceedings of the 2nd Data Management Workshop, 28.-29.11.2014, University of Cologne, Germany, Kölner Geographische Arbeiten, 96, pp. 95-101
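The abstract above mentions XML-encoded metadata files produced by a web-based editor. As a hypothetical illustration only (the element names below are invented and are not the HD(CP)² schema), such a file can be generated with the Python standard library:

```python
# Hypothetical sketch of generating a minimal XML metadata file for an
# observation dataset; element names are illustrative, not a real schema.
import xml.etree.ElementTree as ET

def build_metadata(title, institute, conventions="CF-1.6"):
    """Return an XML metadata record as a string."""
    root = ET.Element("metadata")
    ET.SubElement(root, "title").text = title
    ET.SubElement(root, "institute").text = institute
    # Record which metadata conventions the associated NetCDF file follows:
    ET.SubElement(root, "conventions").text = conventions
    return ET.tostring(root, encoding="unicode")

xml_doc = build_metadata("Example cloud observation dataset",
                         "University of Cologne")
```

In practice such records would be validated against the project's schema before being served alongside the NetCDF data files.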
# 8
Sahle, Patrick • Kronenwett, Simone
Abstract: Research in the digital humanities results not only in ‘data’ but more often in complex forms of presentation which could be called ‘resources’ and which should be maintained over a long period of time. Thus, generic data archives are not a comprehensive answer to all questions concerning the sustainability of these resources. Instead, dedicated humanities data centers are needed to care for research data management, data preservation and curation, and the presentational systems. The goals and tasks of these centers can be described according to institutional functions well known from our traditional information ecosystem: library, archive, museum and workshop.
Proceedings of the 2nd Data Management Workshop, 28.-29.11.2014, University of Cologne, Germany, Kölner Geographische Arbeiten, 96, pp. 89-93
# 9
Roesler-Schmidt, Gregor • Matuschka, Beatrice
Abstract: Piql Preservation Services (‘Piql’) emerged out of an industrial consortium which was set up in 2009 with the aim of developing a reliable, secure, cost-effective long-term archive solution for digital data. Until now, the archiving of data on a supporting medium has often been problematic, as the stability and reproducibility of the data were not guaranteed. Using photosensitive polyester-based micrographic film, a very stable material that remains unchanged for a period of centuries under optimal conservation conditions, Piql has created a future-proof archiving system.
Proceedings of the 2nd Data Management Workshop, 28.-29.11.2014, University of Cologne, Germany, Kölner Geographische Arbeiten, 96, pp. 83-87
# 10
Redöhl, Brit
Abstract: The Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) is the largest organisation providing third-party research funding in Germany. As such, the DFG is well aware of the increasing importance of Research Data Management in all fields of research. Accordingly, the DFG supports the establishment of suitable community-specific guidelines or minimum standards for the handling of research data and the identification of potential for re-use. At the institutional level, this led to the enactment of the DFG Guidelines on the handling of research data in autumn 2015. At the level of funding programmes, there is first and foremost the open funding programme 'Information Infrastructures for Research Data'. Furthermore, the DFG is raising awareness of projects focussing on information infrastructure (INF) embedded in Collaborative Research Centres (SFB). Here, new procedures and a heightened emphasis during the peer review process are being introduced.
Proceedings of the 2nd Data Management Workshop, 28.-29.11.2014, University of Cologne, Germany, Kölner Geographische Arbeiten, 96, pp. 77-81