The core functionality of the marine data platform is digital collaboration in large, data-heavy marine projects, focusing on data management, data sharing, user rights & role management, and accessibility from anywhere at any time. This is the basis for all digital process innovation and for unlocking potential.
Once this digital base is in place, efficiency in data handling and processing can be raised through automation. The marine data platform will provide different levels of quality metrics that can be adjusted in a bespoke setup and applied automatically to data sets, either during upload or in subsequent processing steps. (Basic quality metrics include, for example, row checks during upload. Medium quality metrics cover data point spacing and data distribution analysis. More advanced quality metrics cover multibeam footprint area, gradient, etc.)
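To make the tiered metrics concrete, here is a minimal sketch of how configurable quality checks could be applied during upload. The check names, levels, thresholds, and the x/y/z column layout are illustrative assumptions, not the platform's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable
import pandas as pd

@dataclass
class QualityCheck:
    name: str
    level: str                      # "basic", "medium", "advanced" (assumed tiers)
    run: Callable[[pd.DataFrame], bool]

def row_check(df: pd.DataFrame) -> bool:
    """Basic check: no empty upload and mandatory columns present."""
    return not df.empty and {"x", "y", "z"}.issubset(df.columns)

def spacing_check(df: pd.DataFrame, max_gap: float = 5.0) -> bool:
    """Medium check: consecutive data points no further apart than max_gap (assumed metres)."""
    dx = df["x"].diff().fillna(0)
    dy = df["y"].diff().fillna(0)
    return bool(((dx**2 + dy**2) ** 0.5).max() <= max_gap)

CHECKS = [
    QualityCheck("row_check", "basic", row_check),
    QualityCheck("spacing_check", "medium", spacing_check),
]

def run_checks(df: pd.DataFrame, up_to_level: str = "medium") -> dict:
    """Apply all checks up to the configured level, e.g. during upload."""
    order = {"basic": 0, "medium": 1, "advanced": 2}
    return {c.name: c.run(df) for c in CHECKS if order[c.level] <= order[up_to_level]}
```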
Uploaded measurements are visualized as point clouds after they have been converted to LAS/LAZ files. Multiple measurements from different sensors will be visualized in a layered view, with different levels of detail reachable by zooming in a digital map.
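As an illustration, reading a converted LAZ file into a coordinate array could look like the following, using the open-source laspy library. The file name and the crude subsampling step for a zoomed-out map view are assumptions.

```python
import laspy          # pip install laspy[lazrs] for LAZ support
import numpy as np

# Hypothetical file name: a converted survey stored as LAZ.
las = laspy.read("survey_2023_multibeam.laz")

# Stack coordinates into an (N, 3) array for downstream visualization.
points = np.vstack((las.x, las.y, las.z)).T
print(f"{len(points)} points, z range {points[:, 2].min():.1f} .. {points[:, 2].max():.1f} m")

# A crude level-of-detail step: subsample for a zoomed-out map view.
overview = points[:: max(1, len(points) // 100_000)]   # keep at most ~100k points
```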
The marine data platform will provide import functions for many different proprietary binary formats and make them processable in the cloud. Conversion into an open-source format (Apache Parquet) allows the use of scalable computing resources and speeds up the conversion of data into information. A generic data format removes the boundaries of heterogeneous data formats and allows standard processes to be defined in data analysis. We provide cloud-ready processing tools to make the best use of scalable cloud resources and to apply specific algorithms to the question at hand.
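A minimal sketch of such an import step, assuming a hypothetical decoder for a proprietary binary format and using PyArrow to write the open Parquet format:

```python
import numpy as np
import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical decoder for a proprietary binary format; in practice this
# would be a format-specific parser returning columnar arrays.
def decode_proprietary(path: str) -> dict:
    raw = np.fromfile(path, dtype=np.float64).reshape(-1, 3)  # assumed x,y,z triplets
    return {"x": raw[:, 0], "y": raw[:, 1], "z": raw[:, 2]}

columns = decode_proprietary("survey.bin")
table = pa.table(columns)

# Columnar, compressed, splittable: ready for scalable cloud processing.
pq.write_table(table, "survey.parquet", compression="zstd")
```

Parquet's columnar layout is what makes the converted data both compact and splittable, so scalable cloud workers can each read only the columns and row groups they need.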
The marine data platform connects users for collaboration, and it connects data to code and smart algorithms. Such specific analysis algorithms either derive from functional modules developed in-house or incorporate functional modules from experts and research and scientific institutes. The marine data platform will then function as a marketplace where data users find analytical tools for very specific use cases. In future, registered users will be able to buy ready-to-use data sets from one another and/or from data providers directly via the platform.
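One way such a marketplace of functional modules could be wired is a simple registry where contributed analysis functions are published under a name and discovered by data users. Everything here, including the module name and registry shape, is a hypothetical sketch.

```python
from typing import Callable, Dict
import pandas as pd

# Hypothetical registry: third-party functional modules register analysis
# functions under a name, and data users discover and apply them.
MODULES: Dict[str, Callable[[pd.DataFrame], pd.DataFrame]] = {}

def register(name: str):
    """Decorator that publishes an analysis module to the marketplace registry."""
    def wrap(fn: Callable[[pd.DataFrame], pd.DataFrame]):
        MODULES[name] = fn
        return fn
    return wrap

@register("depth_statistics")
def depth_statistics(df: pd.DataFrame) -> pd.DataFrame:
    # Example module contributed by a research institute (illustrative only).
    return df["z"].describe().to_frame("depth")

# A data user looks up the module by name and applies it to their data set.
result = MODULES["depth_statistics"](pd.DataFrame({"z": [-12.4, -13.1, -11.9]}))
```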
Distributed event driven architecture using Apache Kafka
Large-scale distributed big data processing using Apache Spark
Efficient compression of data and performant queries enabled by Apache Parquet as the internal format
Easy and fast statistical analysis of very big datasets
Automatic detection of seagrass in multibeam data
Object detection algorithms
Efficient visualization of point cloud data
Large-scale distributed big geodata processing using Apache Sedona (see the sketch after this list)
Usage of quality metrics, derived from industry standards, to check the quality of uploaded data
Point cloud analysis
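As an illustration of the Spark and Sedona items above, the following sketch runs a distributed spatial filter over point data stored as Parquet. The cluster setup, storage paths, and polygon are hypothetical; the ST_* functions are standard Apache Sedona SQL.

```python
from sedona.spark import SedonaContext

# Assumes a Spark cluster with Apache Sedona installed; paths are hypothetical.
config = SedonaContext.builder().appName("marine-geodata").getOrCreate()
sedona = SedonaContext.create(config)

# Point data stored as Parquet (x, y, z columns), as produced by the importer above.
points = sedona.read.parquet("s3://marine-data/survey.parquet")
points.createOrReplaceTempView("points")

# Distributed spatial filter: all soundings inside a hypothetical survey polygon.
inside = sedona.sql("""
    SELECT x, y, z
    FROM points
    WHERE ST_Contains(
        ST_GeomFromWKT('POLYGON((10.0 54.0, 10.2 54.0, 10.2 54.2, 10.0 54.2, 10.0 54.0))'),
        ST_Point(x, y)
    )
""")
print(inside.count())
```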
The architecture of Gaia-X is based on the principle of decentralization. Gaia-X is the result of many individual data owners (users) and technology players (providers) all adopting a common standard of rules and control mechanisms: the Gaia-X standard.
Gaia-X Ecosystem Visualization
IDS Roles and Interactions (source: IDSA, IDS RAM 3.0)
The core of Gaia-X consists of data spaces, which are based on the International Data Spaces (IDS) reference architecture model. Data spaces are linked by data space connectors (e.g. the Eclipse Dataspace Connector), which provide the abstraction layer that enables communication.
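The following toy model illustrates the connector-mediated exchange pattern: a provider publishes an asset together with a usage policy, and a transfer only proceeds once the consumer accepts that policy. The class and method names are illustrative and are not the Eclipse Dataspace Connector API.

```python
from dataclasses import dataclass, field

# Hypothetical model of an IDS-style exchange; names are illustrative only.
@dataclass
class Asset:
    asset_id: str
    usage_policy: str            # e.g. "research-only"; machine-readable in practice

@dataclass
class Connector:
    participant: str
    catalog: dict = field(default_factory=dict)

    def offer(self, asset: Asset):
        """Provider side: publish an asset with its usage policy."""
        self.catalog[asset.asset_id] = asset

    def negotiate(self, provider: "Connector", asset_id: str, accepted_policy: str) -> bool:
        """Consumer side: a transfer only happens if the policy is accepted."""
        asset = provider.catalog.get(asset_id)
        return asset is not None and asset.usage_policy == accepted_policy

provider = Connector("survey-company")
consumer = Connector("research-institute")
provider.offer(Asset("bathymetry-2023", "research-only"))
assert consumer.negotiate(provider, "bathymetry-2023", "research-only")
```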
Marispace-X will significantly accelerate the digitalization of the Blue Economy.
Efficiency increases and cost reductions in data-driven processes
No vendor lock-in
Compliance with data protection and information security requirements
Cross-industry bundling and data availability through sharing
Scaling through cloud technologies across applications and sectors in the maritime domain