# 1
Scherler, Dirk • Wulf, Hendrik • Gorelick, Noel
Abstract: This dataset is supplementary to the article of Scherler et al. (submitted), in which the global distribution of supraglacial debris cover is mapped and analyzed. For mapping supraglacial debris cover, we combined glacier outlines from the Randolph Glacier Inventory (RGI) version 6.0 (RGI Consortium, 2017) with remote sensing-based ice and snow identification. Areas that belong to glaciers but that are neither ice nor snow were classified as debris cover. This dataset contains the outlines of the mapped debris-covered glacier areas, stored in shapefiles (.shp). To create this dataset, we used optical satellite data from Landsat 8 (for the time period 2013-2017) and from Sentinel-2A/B (2015-2017). For the ice and snow identification, we used three different algorithms: a red to short-wavelength infrared (SWIR) band ratio (RATIO; Hall et al., 1988), the normalized difference snow index (NDSI; Dozier, 1989), and linear spectral unmixing-derived fractional debris cover (FDC; e.g., Keshava and Mustard, 2002). For a detailed description of the debris-cover mapping and an analysis of the data, please see Scherler et al. (2019), to which these data are supplementary material. This dataset includes debris cover outlines based on either Landsat 8 (LS8; 30-m resolution) or Sentinel-2 (S2; 10-m resolution) and the three algorithms RATIO, NDSI and FDC. In total, there are six zip files that each contain 19 shapefiles. The structure of the shapefiles follows that of the RGI version 6.0 (RGI Consortium, 2017), with one shapefile for each RGI region. The original RGI shapefiles provide each glacier as one entry (feature) and include a variety of ancillary information, such as area, slope and aspect (RGI Consortium 2017a, Technical Note p. 12ff). Because the debris-cover outlines are based on the RGI v6.0 glacier outlines, all fields of the original shapefiles, which refer to the glacier, are retained and expanded with four new fields:
- DC_Area: Debris-covered area in m². Note that this unit is different from the unit used for reporting the glacier area (km²).
- DC_BgnDate: Start of the time period from which satellite imagery was used to map debris cover.
- DC_EndDate: End of the time period from which satellite imagery was used to map debris cover.
- DC_CTSmean: Mean number of observations (CTS = COUNTS) per pixel and glacier. This number is derived from the number of available satellite images for the respective time period, reduced by filtering out pixels affected by cloud and snow cover.
The dataset has a global extent and covers all of the glaciers in the RGI v6.0, but it exhibits poor coverage in the RGI region Subantarctic and Antarctic, where the debris cover extents are based on very few observations.
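As an illustration of the ice/snow vs. debris logic described above, the following minimal Python sketch applies a red/SWIR band ratio and the NDSI to reflectance arrays already masked to glacier outlines. The threshold values, array names and the simple boolean combination are assumptions for illustration only, not the thresholds or workflow used in the publication.

```python
# Illustrative sketch (not the authors' code): classify glacier pixels as
# ice/snow vs. debris from a red/SWIR band ratio and the NDSI, assuming
# reflectance arrays already masked to the RGI glacier outlines.
# Threshold values are placeholders, not those used in the publication.
import numpy as np

def classify_debris(red, swir, green, ratio_thresh=2.0, ndsi_thresh=0.4):
    """Return a boolean mask of debris-covered pixels within a glacier mask."""
    ratio = red / np.maximum(swir, 1e-6)                      # RATIO (Hall et al., 1988)
    ndsi = (green - swir) / np.maximum(green + swir, 1e-6)    # NDSI (Dozier, 1989)
    ice_or_snow = (ratio > ratio_thresh) | (ndsi > ndsi_thresh)
    return ~ice_or_snow                                       # glacier area that is neither ice nor snow

# Example with synthetic reflectance values
red = np.array([0.55, 0.20]); swir = np.array([0.10, 0.18]); green = np.array([0.60, 0.22])
print(classify_debris(red, swir, green))  # -> [False  True]
```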
# 2
Münchmeyer, Jannes • Bindi, Dino • Sippl, Christian • Leser, Ulf • Tilmann, Frederik
Abstract: In Münchmeyer et al. 2019 magnitudes scales for Northern Chile have been derived with a focus on low uncertainties. The data set consists of three parts. First, a version of the IPOC catalog with the derived magnitude scales ML and MA and their uncertainties. Second, the attenuation functions for different waveform features. Third, the full matrix of features and the resulting single station magnitude predictions.The underlying IPOC catalog was obtained from Sippl et al. (2018). Detailed data description is provided in the README and in Münchmeyer et al. (2019) to which these data are supplementary material.
# 3
Petricca, Patrizio • Trippetta, Fabio • Billi, Andrea • Collettini, Cristiano • Cuffaro, Marco • (et al.)
Abstract: This data publication includes a grid composed of contiguous 25 x 25 km square elements covering the Italian area, each parametrized by 1) the maximum length of faults included within the cell, 2) the maximum magnitude from instrumental seismic data, 3) the maximum magnitude from historical seismic data, and 4) the maximum magnitude calculated from fault length using empirical scaling laws. This collection represents the basis of a work (Trippetta et al., 2019) aiming to test a fast method for comparing the geologic (faults) and seismologic (historical-instrumental seismicity) information available for a specific region. To do so, (1) a comprehensive catalogue of all known faults and (2) a comprehensive catalogue of earthquakes were compiled by merging the most complete available databases; (3) the related possible maximum magnitudes were derived from fault dimensions, under the assumption that any fault can be seismically reactivated; and (4) the calculated magnitudes were compared with earthquake magnitudes recorded in historical and instrumental time series. Faults: to build the dataset of faults for Italy, the following databases were merged: (1) the entire fault collection from the Italian geological maps at the 1:100,000 scale (available online at www.isprambiente.it); (2) the fault compilation from the structural model of Italy at the 1:500,000 scale (Bigi et al., 1989); (3) faults provided in the ITHACA Italian catalogue of capable faults (Michetti et al., 2000); and (4) the inventory of active faults of the GNDT (Gruppo Nazionale per la Difesa dai Terremoti, Galadini et al., 2000). To improve and implement the database, published complementary studies were selected for some specific areas considered not to be exhaustively covered by the aforementioned collection of faults, including Sardinia, the SW Alps, Tuscany, the Adriatic front, Puglia, and the Calabrian Arc. For these areas, faults were selected on the grounds of scientific contributions that documented recent fault activity based on seismic, field, and paleoseismological data. In particular, for southern Sardinia, the fault pattern proposed by Casula et al. (2001) was used. For the SW Alps, the works of Augliera et al. (1994), Courboulex et al. (1998), Larroque et al. (2001), Christophe et al. (2012), Sue et al. (2007), Capponi et al. (2009), Turino et al. (2009) and Sanchez et al. (2010) were followed. For the Tuscany area, Brogi et al. (2003), Brogi et al. (2005), Brogi (2006), Brogi (2008), Brogi (2011), and Brogi and Fabbrini (2009) were consulted. For the buried northern Apennines and Adriatic front, the fault datasets provided by Scrocca (2006), Cuffaro et al. (2010), and Fantoni and Franciosi (2010) were used. For the Puglia region, data from Patacca and Scandone (2004) and Del Gaudio et al. (2007) were used, while for the Calabrian Arc data were obtained from Polonia et al. (2016). Seismicity: to obtain a complete earthquake catalogue for the Italian territory, the following catalogues of instrumental and historical seismicity were integrated: (1) the CSI 1.1 database (http://csi.rm.ingv.it; Castello et al., 2006) for the period 1981–2002, (2) the ISIDe database (http://iside.rm.ingv.it/iside/; ISIDe Working Group, 2016) for the period 2003–2017, and (3) the CPTI15 database (https://emidius.mi.ingv.it/CPTI15-DBMI15/; Rovida et al., 2016) for the period 1000-1981. The CSI 1.1 database (Castello et al., 2006) is a relocated catalogue of Italian earthquakes during the period 1997–2002.
This collection derives from the work of Chiarabba et al. (2005). Most seismic events have magnitudes lower than 4.0 and are mostly located in the upper 12 km of the crust. A few earthquakes exceed magnitude 5.0, and the largest event is Mw 6.0. Due to their poorly constrained locations, events with Mw < 2.0 were removed. The ISIDe database (ISIDe Working Group, 2016) provides earthquake parameters obtained by integrating real-time data and the Italian Seismic Bulletin. The time span of this compilation begins in 1985. To avoid an overlap with the CSI database, only the time interval 2003–2017 was considered. Mw = 2.0 is the lower limit used for earthquake magnitude. The CPTI15 database integrates the Italian Macroseismic Database version 2015 (DBMI15, Locati et al., 2016) and instrumental data from 26 different catalogues, databases and regional studies, covering the period from 1000 up to 2014. To avoid overlap with the instrumental datasets used here, data from CPTI15 were taken for the period 1000-1981 in the range Mw 4-7. Method: starting from the entire fault dataset, the length of each structure was calculated (Lf, in km). Then, the Italian territory was divided into a grid with square cells of 25 x 25 km. The length of the longest fault crossing each cell characterizes the parameter "fault length" (Lf) of that cell. In the second step, these lengths were used as the input parameter to empirically derive the magnitude. The earthquake magnitude-fault length relationship provided by Leonard (2010) was applied to infer the Potential Expected Maximum Magnitude as M = a + b · log(Lf), with a = 4.24 and b = 1.67. The obtained magnitudes were assigned to each cell. Furthermore, the maximum magnitude recorded/reported in the instrumental/historical catalogues is associated with each containing cell. The resulting datasets are presented in txt format and included in the following files:
- Grid_Coordinates.txt (contains the ID and coordinates of the grid elements)
- Grid_Structure.txt (contains the geometry and specifications of the grid used)
- Table_results (five-column table containing 1 = element ID, 2 = element max fault length (Lf_max, in km), 3 = element max Mw from the instrumental record (MwInstr_max), 4 = element max Mw from the historical record (MwHist_max), 5 = element max Mw derived from the empirical relationship (PEMM))
- The full list of references is included in the file Petricca_2018-003_References.txt
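The cell-wise magnitude estimate can be illustrated with a short Python sketch: the longest fault length per 25 x 25 km cell is converted to a Potential Expected Maximum Magnitude via M = a + b · log10(Lf) with a = 4.24 and b = 1.67 (Leonard, 2010) and compared with the catalogue maxima. The cell values and variable names below are invented for illustration, not taken from the published tables.

```python
# Sketch of the cell-wise estimate: convert the longest fault length Lf (km)
# in each cell to a Potential Expected Maximum Magnitude and compare it with
# the maximum catalogue magnitudes. Cell values are illustrative placeholders.
import math

def pemm(lf_max_km, a=4.24, b=1.67):
    """Potential Expected Maximum Magnitude from the maximum fault length in a cell."""
    return a + b * math.log10(lf_max_km)

cells = [  # (cell id, Lf_max [km], max Mw instrumental, max Mw historical)
    (101, 12.0, 4.1, 5.2),
    (102, 35.0, 5.6, 6.4),
]
for cid, lf, mw_instr, mw_hist in cells:
    m = pemm(lf)
    print(f"cell {cid}: PEMM={m:.2f}, exceeds catalogue max: {m > max(mw_instr, mw_hist)}")
```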
# 4
Wu, Hu • Müller, Jürgen • Brieden, Phillip
Abstract: IfE_GOCE05s is a GOCE-only global gravity field model, which was developed at the Institut für Erdmessung (IfE), Leibniz Universität Hannover, Germany. Observations spanning 1 November 2009 to 20 October 2013 were used for the model recovery. The GOCE precise kinematic orbit with a 1-s sampling rate is processed for the gravity field up to degree/order 150, while the three main diagonal gravity gradients are down-sampled to 2 s and used to recover the model up to degree/order 250. With two additional Kaula regularizations, the combined model "IfE_GOCE05s" is derived, with a maximum degree of 250. To develop IfE_GOCE05s, the following GOCE data (01.11.2009 - 20.10.2013) were used:
* Orbits: SST_PKI_2, SST_IAQ_2;
* Gradients: EGG_GGT_2, EGG_IAQ_2.
No a priori gravity field information was used.
Processing procedures:
Gravity from orbits (SST):
* The acceleration approach was applied to the kinematic orbit data;
* PKI data were used at a 1-s sampling rate;
* The model was derived up to d/o (degree/order) 150;
* The VCM (variance-covariance matrix) was derived arc-wise from the post-fit residuals.
Gravity from gradients (SGG):
* Gradients Vxx, Vyy and Vzz in the GRF (Gradiometer Reference Frame) were used;
* Gradients were down-sampled to 2 s;
* The model was derived up to d/o 250;
* The VCM was estimated arc-wise from the post-fit residuals.
Regularization:
* A strong Kaula regularization was applied to constrain the (near-)zonal coefficients that are degraded by the polar gap problem;
* A slight Kaula regularization was applied to improve the signal-to-noise ratio of the coefficients between d/o 201 and 250;
* The regularization parameters were determined empirically.
Combined solution:
* The normal equations for SST and SGG were summed with proper weighting factors;
* Weighting factors for SST and SGG were determined from variance component estimation;
* A direct inversion was applied to the final normal equation.
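Conceptually, the combination step can be sketched as a weighted sum of the SST and SGG normal equations plus a Kaula-type regularization, followed by a direct inversion. The toy matrices, weights and regularization parameter below are placeholders; the real normal equations are vastly larger and the weights come from variance component estimation.

```python
# Conceptual sketch of the combination step with tiny random placeholders
# standing in for the SST and SGG normal equations.
import numpy as np

rng = np.random.default_rng(0)
n = 6                                     # number of spherical-harmonic coefficients (toy size)
A1, A2 = rng.normal(size=(20, n)), rng.normal(size=(30, n))
y1, y2 = rng.normal(size=20), rng.normal(size=30)

N1, b1 = A1.T @ A1, A1.T @ y1             # SST normal equation
N2, b2 = A2.T @ A2, A2.T @ y2             # SGG normal equation

w1, w2 = 0.8, 1.2                         # weights, e.g. from variance component estimation
kaula = 1e-2 * np.eye(n)                  # Kaula-type regularization (diagonal prior weights)

x = np.linalg.solve(w1 * N1 + w2 * N2 + kaula, w1 * b1 + w2 * b2)  # direct inversion
print(x)
```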
# 5
Encarnacao, Joao • Visser, Pieter • Jaeggi, Adrian • Bezdek, Ales • Mayer-Gürr, Torsten • (et. al.)
Abstract: Although the knowledge of the gravity of the Earth has improved considerably with the CHAMP, GRACE and GOCE satellite missions, the geophysical community has identified the need for continued monitoring of its time-variable component with the purpose of estimating the hydrological and glaciological yearly cycles and long-term trends. Currently, the GRACE-FO satellites are the sole provider of these data, while previously the GRACE mission collected these data for 15 years. Between the GRACE and GRACE-FO data periods lies a gap spanning from July 2017 to May 2018, while the Swarm satellites have collected gravimetric data with their GPS receivers since December 2013. This project aims at providing high-quality gravity field models from Swarm data that constitute an alternative and independent source of gravimetric data, which could help alleviate the consequences of the 10-month gap between GRACE and GRACE-FO, as well as the short gaps in the existing GRACE and GRACE-FO monthly time series. The geodetic community has realized that the combination of different gravity field solutions is superior to any individual model. This project exploits this fact and delivers independent monthly gravity field models of the highest quality, resulting from the combination of four different gravity field estimation approaches. All solutions are unconstrained and estimated independently from month to month. Preliminary comparison with GRACE data has demonstrated that the signal in the Swarm gravity field models is restricted to degrees 12-15 and below, while the temporal correlations decrease considerably above degree 10. The 750 km smoothed models are suitable to retrieve the global annual temporal variations of Earth's gravity field, and the agreement with GRACE over large basins (e.g. Amazon, Congo-Zambezi, Ganges-Brahmaputra) is within 1 cm RMS in terms of Equivalent Water Height. The global RMS relative to a bias, trend, annual and semi-annual model derived from GRACE over deep ocean areas (those roughly 1000 km from shorelines) is under 1 mm geoid height during periods of low ionospheric activity. More information about this project can be found at https://www.researchgate.net/project/Multi-approach-gravity-field-models-from-Swarm-GPS-data and at ESA's Swarm DISC (Data, Innovation and Science Cluster) website (https://earth.esa.int/web/guest/missions/esa-eo-missions/swarm/activities/scientific-projects/disc#MAGF). This project is funded by ESA via the Swarm DISC, Sub-Contract No. SW-CO-DTU-GS-111.
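As a rough illustration of the quoted comparison metric, the sketch below fits a bias, trend, annual and semi-annual model to a synthetic monthly series by least squares and reports the residual RMS; the data and numbers are placeholders, not Swarm or GRACE values.

```python
# Fit bias + trend + annual + semi-annual terms to a synthetic monthly series
# and report the RMS of the residuals (the comparison metric mentioned above).
import numpy as np

t = np.arange(0, 4, 1 / 12.0)                          # time in years, monthly sampling
series = 0.5 + 0.1 * t + 2.0 * np.sin(2 * np.pi * t) + np.random.default_rng(1).normal(0, 0.3, t.size)

# Design matrix: bias, trend, annual and semi-annual sine/cosine terms
A = np.column_stack([
    np.ones_like(t), t,
    np.sin(2 * np.pi * t), np.cos(2 * np.pi * t),
    np.sin(4 * np.pi * t), np.cos(4 * np.pi * t),
])
coeffs, *_ = np.linalg.lstsq(A, series, rcond=None)
rms = np.sqrt(np.mean((series - A @ coeffs) ** 2))
print(f"residual RMS: {rms:.3f}")
```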
# 6
Jatnieks, Janis • Sips, Mike • De Lucia, Marco • Dransch, Doris
Abstract: Geochemical models are used to seek answers about the composition and evolution of groundwater, spill remediation, the viability of geothermal resources and other important geoscientific applications. To understand these processes, it is useful to evaluate the geochemical model response to different input parameter combinations. Running the model with varying input parameters creates a large amount of output data. It is a challenge to screen these data to identify the significant relationships between input parameters and output variables. To address this problem, we developed a Visual Analytics approach in an ongoing collaboration between the Geoinformatics and Hydrogeology sections of the GFZ German Research Centre for Geosciences. We implement our approach as an interactive data exploration tool called GCex. GCex is a Visual Analytics approach and prototype that supports interactive exploration of geochemical models. It encodes many-to-many input/output relationships with a simple yet effective approach called Stacked Parameter Relation (SPR). GCex assists in the setup of simulations, model runs, data collection and result exploration, greatly enhancing the user experience in tasks such as uncertainty and sensitivity analysis, inverse modeling and risk assessment. While in principle model-agnostic, the prototype currently supports and is tied to the popular geochemical code PHREEQC. Modification to support other models would not be complicated. The GCex prototype was originally written by Janis Jatnieks at GFZ-Potsdam. It relies on Rphree (an R-PHREEQC geochemical simulation model interface) written by Marco De Lucia at GFZ-Potsdam. A compatible version of Rphree is bundled with this installation.
https://gitext.gfz-potsdam.de/sec15pub/GCex/tags/1.0
# 7
Jatnieks, Janis • De Lucia, Marco • Sips, Mike • Dransch, Doris
Abstract: Surrogate playground is an automated machine learning approach written for rapidly screening a large number of different models to serve as surrogates for a slow-running simulator. This code was written for a reactive transport application where a fluid flow model (hydrodynamics) is coupled to a geochemistry simulator (reactions in time and space) to simulate scenarios such as underground storage of CO2 or hydrogen storage for excess energy from wind farms. The challenge for such applications is that the geochemistry simulator is typically slow compared to the fluid dynamics and constitutes the main bottleneck for producing highly detailed simulations of such application scenarios. This approach attempts to find machine learning models that can replace the slow-running simulator when trained on input-output data from the geochemistry simulator. The code may be of more general interest, as this prototype can be used to screen many different machine learning models for any regression problem. To illustrate this, it also includes a demonstration example using the standard Boston housing dataset.
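A minimal sketch of the screening idea, assuming generic scikit-learn regressors and synthetic input-output data rather than the exact model set or geochemical data used by the prototype:

```python
# Cross-validate several candidate regressors on simulator-style input-output
# data and rank them as surrogate candidates. Models and data are generic
# placeholders, not the prototype's actual configuration.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import Ridge
from sklearn.ensemble import RandomForestRegressor
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.uniform(size=(200, 5))                 # stand-in for geochemical input parameters
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0, 0.1, 200)  # stand-in simulator output

candidates = {
    "ridge": Ridge(alpha=1.0),
    "random_forest": RandomForestRegressor(n_estimators=100, random_state=0),
    "knn": KNeighborsRegressor(n_neighbors=5),
}
for name, model in candidates.items():
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {score:.3f}")
```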
# 8
Kaplan, Nils Hinrich • Sohrt, Ernestine • Blume, Theresa • Weiler, Markus
Abstract: Version history
17 July 2019: release of Version 2.0. This version additionally includes the catchment boundaries, provided as a subfolder of geodata.zip.
Data description
We used different sensing techniques including time-lapse imagery, electric conductivity and stage measurements to generate a combined dataset of presence and absence of streamflow within a large number of nested sub-catchments in the Attert catchment, Luxembourg. The first sites of observation were established in 2013 and successively extended to a total number of 182 in 2016 as part of the project "Catchments As Organized Systems" (CAOS, Zehe et al., 2014). The setup for the time-lapse imagery measurements was inspired by Gilmore et al. (2013), while the setup for the EC sensors was proposed by Chapin et al. (2014). Temporal resolution ranged from 5- to 15-minute intervals. Each single dataset was carefully processed and quality controlled before the time interval was homogenized to 30 minutes. The dataset provides valuable information on the dynamics of a meso-scale stream network in space and time. The Attert basin is located in the border region of Luxembourg and Belgium and covers an area of 247 km². The elevation of the catchment ranges from 245 m a.s.l. in Useldange to 549 m a.s.l. in the Ardennes. Climate conditions across the catchment are rather similar in terms of temperature and precipitation. Hydrological regimes are mainly driven by seasonal fluctuations in evapotranspiration causing flow to cease in intermittent reaches during dry periods. The catchment covers three predominant geologies: slate, marls and sandstone. The dataset features data from catchments covering all geological characteristics, from single geology to mixed geology. It can be used to test and evaluate hydrologic models, but also for the assessment of the intermittent stream ecosystem in the Attert basin.
Time-lapse imagery
Dörr Snapshot Mini 5.0 consumer wildlife cameras were used for time-lapse imagery. Time-lapse monitoring was realized with the internal software at a temporal resolution of 15 minutes. Cameras were mounted on trees or structures close to the channel. For improved image analysis, a gauging plate was installed in the channel. This method is closely related to the time-lapse camera gauging system published by Gilmore et al. (2013).
EC sensors
Onset HOBO Pendant waterproof temperature and light data loggers (Model UA-002-64, Onset Computer Corp, Bourne, MA, USA) with a modified light sensor to measure electric conductivity were used to monitor electric conductivity (EC), as proposed by Chapin et al. (2014). EC values were classified into no-flow situations for EC values below 25 µS/cm and flow situations for EC values above 25 µS/cm.
Conventional gauges
Conventional gauges are divided into two sub-datasets. Data from ID values CG1 to CG11 were derived from water level data measured by METER/Decagon CTD pressure transducers in stilling wells. Data from ID values CG12 to CG18 were derived from discharge values measured by the Luxembourg Institute of Science and Technology (LIST).
Geodata
Geodata comprises information on the proportional shares of geological units in the catchment, the average slope in the catchment and the catchment area upstream of each site. Geological information is derived from a geological map (1:25,000) provided by the Administration des ponts et chaussées, Service géologique de l'Etat, Luxembourg (2012). The original map was created from 1947-1949. GIS analyses were performed using QGIS and SAGA on a 15 m resolution digital elevation model (DEM), which is based on a combined 5 m resolution LIDAR scan of Luxembourg (Modèle Numérique de Terrain de Luxembourg, Le Gouvernement du Grand-Duché de Luxembourg, Administration du cadastre et de la topographie, 5 m LIDAR, https://data.public.lu/en/datasets/bd-l-mnt5/) and a 10 m resolution LIDAR scan of Belgium (Relief de la Wallonie - Modèle Numérique de Surface, Service public de Wallonie, Département de la Géomatique, 10 m LIDAR, http://geoportail.wallonie.be/catalogue/6029e738-f828-438b-b10a-85e67f77af92.html). The generated 15 m DEM was pre-processed by burning in the digitalized stream network (min. border cell method, epsilon = 3) and filling sinks (Wang & Liu algorithm, minimum slope = 0.1°). The catchment area was calculated from the pre-processed 15 m DEM with the catchment area recursive tool from the SAGA toolbox using the D-8 method. The same DEM was used to calculate the average slope of each catchment. The "slope, aspect, curvature" tool from the SAGA toolbox was used to calculate the slope [radians] with the 9-parameter 2nd-order polynomial method (Zevenbergen & Thorne, 1987), which uses a 3x3 pixel window of the DEM to calculate the slope. Catchment boundaries for each site are included as shapefiles. These shapefiles were calculated with the Watershed tool from the ArcGIS Hydrology toolbox, using as input a flow direction raster derived with the Flow Direction tool (ArcGIS Hydrology toolbox) from the DEM described above. Raster output was transformed to shapefiles without simplification of the geometry (subfolder: boundaries).
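A minimal sketch of the EC-based flow classification and 30-minute homogenization described above, assuming illustrative column names and a simple "any flow in the window" aggregation rule rather than the published processing chain:

```python
# Classify EC readings into flow / no-flow using the 25 µS/cm threshold and
# homogenize the series to 30-minute intervals. Column names and the
# aggregation rule are illustrative assumptions.
import pandas as pd

ec = pd.Series(
    [5.0, 8.0, 40.0, 55.0, 60.0, 12.0],
    index=pd.date_range("2016-05-01 00:00", periods=6, freq="15min"),
    name="EC_uScm",
)
flow = (ec > 25.0).astype(int)                   # 1 = flow, 0 = no flow
flow_30min = flow.resample("30min").max()        # flag flow if any reading in the window shows flow
print(flow_30min)
```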
# 9
Muñoz, Gerard • Ritter, Oliver • Weckmann, Ute • Meqbel, Naser • Becken, Michael
Abstract: The Integrated Geophysical Exploration Technologies for Deep Fractured Geothermal Systems project (I-GET) was aimed at developing an innovative strategy for geophysical exploration, particularly to exploit the full potential of seismic and electromagnetic exploration methods in detecting permeable zones and fluid-bearing fractures. The proposed geothermal exploration approach was applied in selected European geothermal systems with different geological and thermodynamic reservoir characteristics: in Italy (high-enthalpy reservoir in metamorphic rocks), in Iceland (high-enthalpy reservoir in volcanic rocks) and in Germany and Poland (low- to middle-enthalpy reservoirs in sedimentary rocks). The Groß Schönebeck in-situ geothermal laboratory, located 40 km north of Berlin in northeastern Germany, is a key site for testing the geothermal potential of deep sedimentary basins. The target reservoir is located in Lower Permian sandstones and volcanic strata, which host deep aquifers throughout the Northeast German Basin (NEGB). The laboratory consists of two 4.3-km-deep boreholes. The electrical conductivity of the subsurface is a very important parameter for characterizing geothermal systems, as hot and mineralized (saline) fluids of deep aquifers can be imaged as regions of high electrical conductivity. In the first phase of the I-GET project, carried out in summer 2006, MT data were recorded at 55 stations along a 40-km-long profile. In order to reduce the effect of cultural noise, four remote reference stations located at distances of about 100 km from the profile were used. This profile is spatially coincident with a seismic tomography profile (Bauer et al., 2010). The main objective of the geophysical site characterization experiments was to derive combined electrical conductivity and P- and S-velocity tomographic models for a joint interpretation at high resolution. The data are provided in EMERALD format (Ritter et al., 2015). The folder structure and content are described in detail in Ritter et al. (2019). The project-specific description is available in the associated data description file, including information on the experimental setup and data collection, the instrumentation, recording configuration and data processing. Scientific outcomes of this project were published by Muñoz et al. (2010a, 2010b).
# 10
Mikhailova, Natalya • Poleshko, N.N. • Aristova, I.L. • Mukambayev, A.S. • Kulikova, G.O.
Abstract: Version history
11 Sep 2019: Release of Version 1.1 with the following changes: (1) new licence: CC BY-SA 4.0; modification of the title (removal of file name and version); (2) addition of ORCIDs when available. The metadata of the first version 1.0 is available in the download folder. Data and file names remain unchanged.
The EMCA (Earthquake Model Central Asia) catalogue (Mikhailova et al., 2015) includes information for 33620 earthquakes that occurred in Central Asia (Kazakhstan, Kyrgyzstan, Tajikistan, Uzbekistan and Turkmenistan). The catalogue provides for each event the estimated magnitude in terms of the MLH (surface wave magnitude) scale, widely used in former USSR countries. MLH magnitudes range from 1.5 to 8.3. Although the catalogue spans the period from 2000 BC to 2009 AD, most of the entries (i.e. 33378) describe earthquakes that occurred after 1900. The catalogue includes the standard parametric information required for seismic hazard studies (i.e. time, location and magnitude values). The catalogue has been composed by integrating different sources (using different magnitude scales) and harmonised in terms of the MLH scale. The MLH magnitude is determined from the horizontal component of surface waves (Rautian and Khalturin, 1994) and is reported in most of the seismic bulletins issued by seismological observatories in Central Asia. For the instrumental period, MLH magnitude was estimated, when not directly measured, either from the body wave magnitude (Mb), the energy class (K) or Mpva (regional body-wave magnitude determined from P-waves recorded by short-period instruments) using empirical regression analyses. The following relationships were used to estimate MLH (see Mikhailova, internal EMCA report, 2014):
(1) MLH = 0.47 K - 1.15
(2) MLH = 1.34 Mb - 1.89
(3) MLH = 1.14 Mpva - 1.45
When multiple scales were available for the same earthquake, priority was given to the conversion from the K class. For the historical period, the MLH values were obtained from macroseismic information (Kondorskaya and Ulomov, 1996).
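The harmonisation step can be sketched as a small conversion function that applies the three regressions quoted above, with priority given to the K-class conversion; the function and argument names are illustrative only.

```python
# Convert K, Mb or Mpva to MLH with the quoted regression relations, giving
# priority to the K-class conversion when several scales are available.
def to_mlh(k=None, mb=None, mpva=None):
    """Return MLH using the first available scale in priority order K > Mb > Mpva."""
    if k is not None:
        return 0.47 * k - 1.15        # MLH = 0.47 K - 1.15
    if mb is not None:
        return 1.34 * mb - 1.89       # MLH = 1.34 Mb - 1.89
    if mpva is not None:
        return 1.14 * mpva - 1.45     # MLH = 1.14 Mpva - 1.45
    raise ValueError("no magnitude scale available")

print(to_mlh(k=12.0), to_mlh(mb=5.0), to_mlh(mpva=4.5))
```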
The catalogue is distributed as an ASCII file in CSV (comma-separated values) format with UTF-8 encoding. A separate .csvt file is provided for column type specification (useful for importing the .csv file in QGIS and other similar environments). For each event, the estimated location is provided as longitude and latitude, with the following spatial reference system: +proj=longlat +ellps=WGS84 +datum=WGS84 +no_defs. When possible, a precise indication of the event time in UTC is provided.
Distribution file: "EMCA_SeismoCat_v1.0.csv"
Version: v1.0
Release date: 2015-07-30
Header of the CSV file:
id: (int) serial ID of the event
year: (int) Year of the event. Negative years refer to BCE (Before Common Era / Before Christ) events
month: (int, 1-12) Month of the year for the event
day: (int, 1-31) Day of the month for the event
hour: (int, 0-23) Hour of the day
min: (int, 0-59) Minute of the hour
sec: (int, 0-59) Second (and hundredth of a second, if available) of the minute
lat: (float) Latitude of the event
lon: (float) Longitude of the event
fdepth: (int) Focal depth of the event in km
mlh: (float) Surface wave magnitude (see e.g. Rautian and Khalturin, 1994)
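A minimal example of loading the catalogue with the documented column layout, assuming pandas and the distribution file name given above; timestamp assembly is shown only for Common Era events.

```python
# Load the catalogue using the column names documented above and assemble
# event timestamps for Common Era events. Negative years (BCE) are left as-is.
import pandas as pd

cat = pd.read_csv("EMCA_SeismoCat_v1.0.csv")
print(cat[["year", "month", "day", "lat", "lon", "fdepth", "mlh"]].head())

ce = cat[cat["year"] > 0].copy()
ce["time"] = pd.to_datetime(
    ce[["year", "month", "day", "hour", "min", "sec"]]
      .rename(columns={"min": "minute", "sec": "second"}),
    errors="coerce",
)
print(ce["time"].head())
```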