Tutorials
Tutorials Offered
Presented by: Anca Anghelea, Rajat Shinde, Shinichi Sobue, Naoko Sugita, Lubomir Dolezal, Diego Moglioni, Sara Aparicio
Description
This hands-on tutorial guides participants through the end-to-end process of working with Earth Observation (EO) data in cloud environments to:
- develop EO-based indicators using FAIR workflows
- publish the indicators and workflows in a STAC catalogue, and
- disseminate them through engaging, interactive visual stories created with AI-assisted scientific storytelling
Using open, cloud-optimized EO data from ESA, NASA, and JAXA, participants will build and publish EO indicators on FAIR Open Science platforms (such as the Copernicus Data Space Ecosystem or another EarthCODE-integrated platform). These platforms enable transparent, reproducible science through open processing environments. Participants will use the FAIR and open science practices recommended by EarthCODE to describe and publish their data and methods to the EO Dashboard.
The indicator publication will also guide them through the process of configuring draft visualization and narrative structures based on EOxElements, ultimately transforming scientific data into accessible, public-facing stories. Technologies like EOxHub Workspaces and eodash will be used throughout the session to explore federated data workflows and storytelling tools. This tutorial empowers participants to effectively communicate EO insights to a broad audience through clear, compelling, and interactive narratives.
The tutorial will be highly interactive, with participants working through guided, step-by-step exercises on their own devices. Training materials will be provided in advance. A GitHub account is required to access collaborative workflows and code examples.
Participants will gain practical experience using key technologies like EOxHub Workspaces (https://hub.eox.at) and eodash (https://eodash.org) to craft compelling narratives around EO data based indicators.
A core element of the tutorial is a hands-on use case demonstrating federated data concepts, focusing on cloud-optimized formats. Attendees will explore defining visualization and interaction styles to effectively convey EO information. The session also covers the process of drafting data publishing requests. Finally, participants will create stories with provided EO data, learning how to transform complex data into accessible and engaging narratives. The narrative creation is based on an EOxElement component (https://github.com/EOX-A/EOxElements/) which can be instantiated for any platform, for example in EO dashboard (https://www.eodashboard.org/story-editor). This tool allows creating narratives for storytelling such as https://www.eodashboard.org/story?id=el-nino-extremes.
This tutorial is designed to equip individuals with the skills and knowledge to effectively develop and communicate EO results to a broader public, using the latest cloud and web technologies, fostering greater understanding and engagement with Earth observation data.
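To make the publication step more concrete, the sketch below shows what a minimal STAC Item describing a published indicator might look like. This is an illustrative assumption, not the exact metadata the EarthCODE platforms produce: the identifier, bounding box, and asset URL are made-up placeholders, and real items will carry additional platform-specific fields and extensions.

```python
import json

# A minimal, illustrative STAC Item for a hypothetical EO indicator.
# The id, bbox, and asset href are placeholders, not real catalogue entries.
indicator_item = {
    "type": "Feature",
    "stac_version": "1.0.0",
    "id": "example-indicator-2025-06",          # hypothetical identifier
    "geometry": {
        "type": "Polygon",
        "coordinates": [[[11.0, 45.0], [13.0, 45.0],
                         [13.0, 47.0], [11.0, 47.0], [11.0, 45.0]]],
    },
    "bbox": [11.0, 45.0, 13.0, 47.0],
    "properties": {"datetime": "2025-06-01T00:00:00Z"},
    "assets": {
        "data": {
            "href": "https://example.org/indicators/example-indicator.tif",
            "type": "image/tiff; application=geotiff; profile=cloud-optimized",
        }
    },
    "links": [],
}

print(json.dumps(indicator_item, indent=2)[:60])
```

Publishing then amounts to submitting such a JSON document to the target catalogue, which is the step the tutorial walks through on the actual platform.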
Tutorial Learning Objectives
Participants will:
- Gain hands-on experience in the end-to-end EO data pipeline: data access, analysis, indicator creation, publication, and AI-assisted storytelling
- Use ESA’s EarthCODE ecosystem to create and publish reproducible workflows for EO-based indicator creation
- Learn how to work with open EO data in cloud-optimized formats
- Describe and publish new EO indicators in EO Dashboard
- Create interactive, narrative-driven data visualizations using EOxElements
- Understand how to transform complex data into stories that are both scientifically rigorous and publicly accessible
Prerequisites
Participants are expected to be researchers who work with EO data and aim to enhance their skills in handling data in the cloud, publishing FAIR workflows, and building web-based visualisations to help them disseminate their research.
While a basic level of Python and a basic understanding of web technologies are expected, this tutorial can easily be followed by participants of all levels.
A laptop device with online access and a GitHub account are required. Please create the GitHub account in advance. Basic EO knowledge is helpful but not required.
Presented by: Andriyan Bayu Suksmono, Donny Danudirdjo, Koredianto Usman
Description
Compressed Sensing/Compressive Sampling (CS) is an emerging method for reconstructing signals and images from only a few samples, far fewer than conventional (Shannon) sampling requires. This tutorial introduces the concepts and applications of CS in radar and related areas. Examples are presented that can be followed by participants with a basic knowledge of signal processing. The tutorial covers the following topics:
- A Brief Review of Conventional/Shannon Sampling
- Basic Concepts of Compressive Sampling/Sensing
  - Random sampling and signal reconstruction
- Review of CS Applications
  - CS-SFCW radar, CS-VLBI imaging, CS weather radar processing
- CS Reconstruction Algorithms using L1-Minimization
  - The problem of L0-minimization
- Relaxation of L0- to L1-Minimization
  - Donoho's premise, Tropp's premise
- L1-Minimization and Convex Programming
  - Convex optimization, Karmarkar's interior point method
- The Weight Point Algorithm
  - Graphical description of L1-minimization and the weight point algorithm
- Examples of CS Problems using the Dictionary Method
  - Direction-of-arrival estimation and face recognition problems
- L1-Norm Regularization Methods
  - Total variation (TV) minimization, sparse optimization, other regularization and fidelity terms
- Applications
  - Radar signal denoising and target detection, remote sensing image restoration, image segmentation
  - Sparse fBM, CS-Speech: a simple CS device
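The random-sampling-and-reconstruction idea can be sketched in a few lines of numpy. The snippet below uses Orthogonal Matching Pursuit, a simple greedy alternative to the L1-minimization solvers covered in the tutorial, to recover a sparse signal from fewer measurements than its length; the signal length, sparsity, and sensing matrix are illustrative choices, not values from the tutorial.

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y = A @ x."""
    residual = y.copy()
    support = []
    coef = np.zeros(0)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit on the selected columns, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x_hat = np.zeros(A.shape[1])
    x_hat[support] = coef
    return x_hat

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3               # signal length, number of samples, sparsity
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)    # random sensing matrix with unit-norm columns

x_true = np.zeros(n)
x_true[[5, 20, 40]] = [1.0, -0.8, 0.6]   # a 3-sparse signal
y = A @ x_true                            # only m = 32 samples, not n = 64
x_hat = omp(A, y, k)
print(f"k=3 reconstruction error: {np.linalg.norm(x_hat - x_true):.2e}")

# Sanity check on a 1-sparse signal, where greedy recovery is guaranteed:
x1 = np.zeros(n)
x1[10] = 2.0
err1 = np.linalg.norm(omp(A, A @ x1, 1) - x1)
```

Greedy recovery of the 3-sparse case succeeds with high probability for random Gaussian measurements, illustrating why far fewer samples than the Shannon rate can suffice when the signal is sparse.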
Tutorial Learning Objectives
After following this tutorial, participants will:
- Understand the basic concepts of CS (Compressive Sampling/Compressed Sensing)
- Be able to simulate CS on a digital signal
- Understand the importance of CS, how it differs from classical sampling, and its role in building efficient sensing/imaging devices
- Know the various fields where CS can be applied
Prerequisites
This is an introductory session on CS, suitable for final-year undergraduate and graduate (Master's and PhD) students, research engineers, and scientists. Basic knowledge of electronics and signal processing is required.
Presented by: Hanwen Yu
Description
Phase unwrapping (PU) is a fundamental yet challenging step in Interferometric Synthetic Aperture Radar (InSAR) data processing, directly determining the accuracy of topographic reconstruction and deformation measurement. The ill-posed nature of PU under noise, decorrelation, and phase discontinuities makes it one of the most difficult problems in radar remote sensing. This full-day tutorial provides a comprehensive and systematic overview of InSAR phase unwrapping, covering theoretical foundations, algorithmic advances, and the latest developments in AI-driven approaches.
- Ill-posed InSAR Phase Unwrapping - This section introduces classical ill-posed PU theories and algorithms such as Minimum Cost Flow (MCF), SNAPHU, and PUMA. These methods rely on the assumption of phase continuity to infer the unwrapped solution from noisy wrapped observations. While widely used, such assumptions limit their robustness in regions of rapid deformation, low coherence, or severe noise. The speaker will analyze the mathematical principles, strengths, and inherent challenges of these traditional methods.
- Non-ill-posed InSAR Phase Unwrapping - The second part presents the formulation of a non-ill-posed observation model that establishes a direct and stable relationship between the wrapped phase differences and the sought absolute phase. This formulation breaks the traditional dependency on the phase continuity assumption and achieves a conceptual shift from ill-posed to well-posed modeling. Representative algorithms, such as the Two-Stage Phase Unwrapping Algorithm (TSPA), will be discussed, demonstrating improved performance in large-gradient and discontinuous terrain scenarios.
- Large-scale InSAR Unwrapping - The third part introduces the envelope sparsity theorem and its application in large-scale PU. The theorem proves the consistency conditions between local cluster-based and global unwrapping results. Based on this principle, a low-dimensional, parallelizable framework is constructed to solve large-scale PU efficiently. This framework significantly reduces computation time and peak memory usage, making practical large-area and time-series InSAR processing feasible.
- AI-driven InSAR Phase Unwrapping - The final section explores the emerging integration of artificial intelligence in PU. The tutorial introduces deep learning frameworks such as PGNet, BCNet, and PU-GAN, as well as benchmark datasets including InSAR-DLPU. These models leverage AI’s strong feature representation and learning capabilities to overcome the limitations of phase continuity assumptions and enhance algorithm robustness. Furthermore, recent progress in self-supervised learning and interpretability will be discussed, offering new perspectives for data-driven InSAR analysis.
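The phase continuity assumption discussed above can be illustrated in one dimension with Itoh's method: if the true phase gradient stays below pi, integrating the wrapped differences of the wrapped phase recovers the absolute phase exactly; a discontinuity larger than pi is aliased and breaks the reconstruction. The numbers below are illustrative, not from the tutorial.

```python
import numpy as np

def wrap(p):
    """Wrap phase into (-pi, pi]."""
    return (p + np.pi) % (2 * np.pi) - np.pi

# True (absolute) phase ramp whose gradient stays below pi: Itoh's condition.
t = np.arange(200)
phi_true = 0.08 * t                     # max phase difference = 0.08 rad < pi
phi_wrapped = wrap(phi_true)

# Itoh's method: integrate the wrapped differences of the wrapped phase.
phi_unwrapped = phi_wrapped[0] + np.concatenate(
    ([0.0], np.cumsum(wrap(np.diff(phi_wrapped)))))
print(f"max unwrapping error: {np.max(np.abs(phi_unwrapped - phi_true)):.1e}")

# A jump larger than pi violates the continuity assumption: a 4 rad step
# is aliased to wrap(4.0), roughly -2.28 rad, so integration goes wrong.
print(f"aliased step: {wrap(4.0):.4f} rad")
```

Two-dimensional PU adds the path-dependence problem that algorithms such as MCF, SNAPHU, and TSPA address, but the failure mode shown here is exactly the ill-posedness the tutorial analyzes.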
This tutorial is highly relevant to the IGARSS community as it bridges classical physical modeling with modern AI methodologies. It aligns with IGARSS’s central themes in radar remote sensing, Earth observation, and geospatial data science. By combining theoretical rigor, algorithmic innovation, and real-data demonstrations, the session will provide participants with both a deep understanding and practical skills to tackle next-generation InSAR phase unwrapping challenges.
Tutorial Learning Objectives
This tutorial aims to provide participants with a comprehensive understanding of the theoretical foundations, algorithmic developments, and emerging artificial intelligence frameworks for InSAR phase unwrapping (PU). By the end of the tutorial, participants will be able to:
- Understand the fundamental principles of InSAR phase unwrapping. Grasp the mathematical formulation of the PU problem, including phase wrapping, ambiguity resolution, and the implications of ill-posedness caused by noise, decorrelation, and phase discontinuities.
- Distinguish between ill-posed and non-ill-posed PU formulations. Recognize the assumptions underlying classical algorithms (e.g., MCF, SNAPHU, PUMA) and understand how non-ill-posed modeling eliminates the phase continuity dependency, enabling stable solutions in large-gradient or discontinuous regions.
- Implement and evaluate large-scale unwrapping strategies. Learn the principles of envelope sparsity and consistency conditions for distributed unwrapping, and understand how parallelization and low-dimensional optimization frameworks can accelerate large-scale InSAR processing.
- Apply artificial intelligence to InSAR phase unwrapping. Gain familiarity with AI-driven frameworks such as PGNet, BCNet, and PU-GAN, and benchmark datasets like InSAR-DLPU. Understand how deep learning, interpretability, and self-supervised training strategies can enhance robustness and generalization beyond traditional phase continuity constraints.
- Bridge physical modeling and data-driven learning. Acquire an integrated perspective that combines the interpretability of physics-based algorithms with the adaptability of AI models, promoting innovative hybrid approaches for next-generation InSAR applications.
Through lectures, demonstrations, and real-data examples, participants will develop both conceptual understanding and practical skills to design, implement, and assess robust InSAR phase unwrapping algorithms suitable for diverse geoscientific scenarios.
Prerequisites
This tutorial is designed to be accessible to a broad audience from the IGARSS community, including researchers, engineers, and graduate students in radar remote sensing, geodesy, computer vision, and artificial intelligence. The tutorial does not require participants to have prior experience in phase unwrapping algorithm development, but some foundational knowledge will help attendees fully benefit from the content.
Participants are expected to have:
- Basic understanding of radar and InSAR principles. Familiarity with interferometric phase generation, coherence, and deformation measurement concepts will help participants connect phase unwrapping theory with physical InSAR observations.
- Fundamental knowledge of signal and image processing. Awareness of noise modeling, filtering, and spatial data representation is beneficial for understanding the mathematical formulations behind PU algorithms.
- Basic programming and numerical analysis skills. Experience with MATLAB or Python will be helpful for following algorithm demonstrations and exploring example codes provided during the tutorial.
- Optional background in deep learning. For the AI-driven phase unwrapping section, familiarity with convolutional neural networks and supervised or self-supervised training concepts will be advantageous but not mandatory. The tutorial will include an intuitive introduction to key AI models to ensure accessibility for all participants.
Overall, the tutorial is structured to support participants with diverse backgrounds — from classical InSAR practitioners seeking to understand AI-based innovations, to data scientists and machine learning researchers aiming to apply modern AI techniques to remote sensing and geoscience challenges.
Presented by: Layachi Abdelkebir, Beldjilali Bilal
Description
Earthquake prediction remains one of the most challenging problems in geoscience. Recent advances in artificial intelligence (AI) and multi-source remote sensing have opened new opportunities for detecting subtle pre-seismic anomalies in the ionosphere, crust, and atmosphere. This tutorial aims to introduce participants to an integrated framework that leverages Global Navigation Satellite System (GNSS) data, Differential Interferometric Synthetic Aperture Radar (DInSAR) products, and ionospheric Total Electron Content (TEC) measurements to analyze pre-earthquake disturbances and develop AI-based predictive models.
The tutorial will combine theoretical foundations with practical demonstrations using open-source datasets and Python-based tools. Participants will learn how to preprocess and synchronize GNSS, InSAR, and ionospheric datasets, extract spatio-temporal features, and apply machine learning and deep learning algorithms for anomaly detection and event forecasting. Real-world case studies from the Mediterranean and Asian seismic zones will be used to illustrate the methodology.
Key topics include:
- Ionospheric perturbation detection using GNSS-derived TEC maps.
- Integration of InSAR displacement data for crustal deformation analysis.
- Multi-source data fusion and feature engineering for seismic precursors.
- Application of neural networks and hybrid AI models in earthquake forecasting.
- Generation of geospatial prediction maps and uncertainty quantification.
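One common way to flag ionospheric perturbations in a TEC time series is a sliding interquartile-bound test: a sample is anomalous if it falls outside the median plus or minus k times the IQR of a trailing window. The sketch below is an illustrative assumption of that approach; the window length, the factor k, and the synthetic TEC series (with an injected enhancement) are made up, not values prescribed by the tutorial.

```python
import numpy as np

def tec_anomalies(tec, window=27, k=1.5):
    """Flag samples outside median +/- k*IQR of a trailing window (toy detector)."""
    flags = np.zeros(len(tec), dtype=bool)
    for i in range(window, len(tec)):
        past = tec[i - window:i]
        q1, med, q3 = np.percentile(past, [25, 50, 75])
        iqr = q3 - q1
        if tec[i] > med + k * iqr or tec[i] < med - k * iqr:
            flags[i] = True
    return flags

rng = np.random.default_rng(1)
days = np.arange(120)
# Synthetic daily TEC (TECU): seasonal trend + noise, plus an injected anomaly.
tec = 20 + 5 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 0.5, days.size)
tec[100] += 6.0                     # hypothetical pre-seismic enhancement

flags = tec_anomalies(tec)
print("flagged days:", np.flatnonzero(flags))
```

Real precursor analysis must additionally screen out geomagnetic storm days and other space weather drivers before interpreting such flags, which is part of what the tutorial covers.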
The tutorial is highly relevant to the IGARSS community as it bridges multiple areas within geoscience and remote sensing, including AI and Big Data (A), Ionospheric Remote Sensing (M.5), and Risk and Disaster Management (D.7). It also aligns with the IGARSS 2026 emphasis on applying innovative technologies to address global challenges and societal resilience. By combining cutting-edge AI techniques with geodetic and ionospheric observations, the tutorial promotes interdisciplinary collaboration between geoscientists, remote sensing engineers, and data scientists.
Participants will gain practical skills to analyze complex geophysical datasets and implement reproducible AI workflows for hazard monitoring. The presented tools and methods can be adapted for other geohazards such as volcanic activity, landslides, and space weather events. Overall, the tutorial will contribute to advancing the community’s capacity to use remote sensing and AI for improving disaster preparedness and understanding Earth’s dynamic processes.
Tutorial Learning Objectives
By the end of this tutorial, participants will gain a comprehensive understanding of how to integrate GNSS, InSAR, and ionospheric TEC data for earthquake-related studies and predictive modeling. The session is designed to equip researchers, graduate students, and professionals in geosciences and remote sensing with both conceptual and practical skills to apply artificial intelligence (AI) methods in multi-source geophysical data analysis.
Specifically, participants will:
- Understand the physical mechanisms linking ionospheric and crustal perturbations to seismic activity, and the potential of remote sensing for earthquake precursor detection.
- Learn how to preprocess and synchronize multi-source datasets, including GNSS-based TEC maps, interferometric displacement maps from InSAR, and seismic catalogs.
- Gain proficiency in using open-source Python tools for geospatial analysis, data fusion, and feature extraction relevant to earthquake precursors.
- Apply machine learning and deep learning techniques (e.g., Random Forest, CNN, LSTM) to identify spatiotemporal anomalies and develop hybrid prediction models.
- Evaluate model performance, interpret predictive uncertainty, and visualize spatial prediction maps.
- Develop reproducible workflows that can be adapted to different seismic regions or extended to other geohazards such as volcanic or ionospheric disturbances.
Through a combination of lectures, demonstrations, and hands-on exercises, participants will acquire end-to-end experience in the development of AI-based frameworks for seismic hazard monitoring. By linking remote sensing, geodesy, and AI, the tutorial aims to foster interdisciplinary collaboration within the IGARSS community and enhance participants’ capabilities to address global challenges in natural hazard prediction and disaster resilience.
Prerequisites
This tutorial is designed for researchers, graduate students, and professionals with a background in geoscience, geodesy, or remote sensing who are interested in applying artificial intelligence and data integration techniques to earthquake prediction and ionospheric analysis. Participants are expected to have a fundamental understanding of GNSS and InSAR concepts, basic data processing workflows, and general geospatial analysis principles.
Prior experience with programming in Python is highly recommended, as several examples and exercises will involve practical coding for data visualization, preprocessing, and machine learning. Familiarity with scientific computing libraries such as NumPy, Pandas, Matplotlib, and Scikit-learn will be advantageous.
Participants should also possess a general knowledge of:
- GNSS data formats (RINEX, CSV) and TEC computation basics.
- InSAR interferometry principles, deformation mapping, and atmospheric effects.
- Seismic catalogs and statistical methods for event clustering and pattern recognition.
No prior expertise in deep learning is required, but basic familiarity with machine learning concepts (features, training/testing, classification/regression) will help participants follow the advanced sections more effectively.
The tutorial will begin with a brief review of the required theoretical background before progressing to hands-on sessions, ensuring that participants from diverse scientific backgrounds can fully benefit. Clear documentation, example datasets, and Python notebooks will be provided to allow participants to reproduce all analyses independently after the session.
Presented by: Giampaolo Ferraioli, Sergio Vitale, Vito Pascazio
Description
Synthetic Aperture Radar (SAR) imaging is a key remote sensing technology, enabling high-resolution observation independent of weather and daylight conditions. Applications range from environmental monitoring and disaster response to infrastructure and defense analysis. In the last few years, deep learning has transformed SAR image processing, achieving outstanding performance in detection, classification, and change analysis. However, most deep learning approaches are purely data-driven, often overlooking the fundamental physical properties of SAR imaging—such as coherent scattering, speckle statistics, and geometric distortions.
This tutorial introduces participants to the emerging paradigm of Physics-Aware Deep Learning (PADL) for SAR, where physical models and learning-based methods are integrated to achieve improved robustness, interpretability, and generalization.
We focus on SAR despeckling as a representative case study: a long-standing problem where the synergy between physics and AI clearly demonstrates its benefits.
The tutorial covers:
- Physical origins of speckle noise and statistical modeling;
- Deep-learning approaches;
- Physics-aware training strategies (supervised, unsupervised and self-supervised learning) that embed SAR priors, imaging geometry, and speckle distributions;
- Self-supervised and simulation-to-real learning paradigms;
- Evaluation metrics preserving radiometric and structural consistency.
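The statistical speckle model in the first bullet can be made concrete with a short simulation: fully developed speckle in intensity is commonly modeled as multiplicative Gamma noise with unit mean, and the Equivalent Number of Looks (ENL) is estimated over a homogeneous area as mean squared over variance. The reflectivity value and sample count below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fully developed speckle in intensity: I = R * s, with multiplicative noise
# s ~ Gamma(shape=L, scale=1/L), i.e. unit mean and variance 1/L, for L looks.
L = 4
reflectivity = 2.0                  # true intensity of a homogeneous area
speckle = rng.gamma(shape=L, scale=1.0 / L, size=200_000)
intensity = reflectivity * speckle

# ENL over a homogeneous region: mean^2 / variance, which recovers L.
enl = intensity.mean() ** 2 / intensity.var()
print(f"estimated ENL: {enl:.2f}  (true L = {L})")
```

Physics-aware despeckling networks embed exactly this multiplicative Gamma statistic in their loss functions or training data generation, rather than treating speckle as generic additive noise.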
A hands-on session will guide participants through implementing and evaluating a physics-aware deep despeckling model on open datasets such as Sentinel-1. We conclude with a discussion on uncertainty quantification, interpretability, and reproducibility.
By combining theory and practice, this tutorial provides a forward-looking perspective on how Physics-Aware Deep Learning can redefine SAR image restoration, offering physically consistent, trustworthy, and generalizable AI solutions for the next generation of Earth observation.
Tutorial Learning Objectives
After completing the tutorial, participants will be able to:
- Understand the physical mechanisms of SAR imaging and speckle formation.
- Implement and evaluate a physics-aware deep despeckling model.
- Apply self-supervised and simulation-to-real training strategies for real-world data.
- Assess uncertainty and interpretability in SAR deep learning pipelines.
Prerequisites
This tutorial is designed for participants with a basic background in remote sensing and introductory experience in machine learning. A basic understanding of SAR imaging is recommended, including coherent image formation, radiometry, and the physical origin of speckle noise. Familiarity with statistical models for speckle (multiplicative noise, Gamma distribution, ENL) and with concepts such as coherence and multi-looking is helpful but not mandatory, and will support a deeper comprehension of the physics-aware techniques presented.
Participants should also be comfortable with the essential principles of machine learning and deep learning, including supervised learning, loss functions, regularization, and gradient-based optimization. A basic understanding of convolutional neural networks and exposure to deep architectures such as encoder-decoder models or transformers is helpful but not mandatory.
From a practical standpoint, attendees are expected to have basic Python programming skills, including familiarity with NumPy and Matplotlib. A minimal knowledge of PyTorch—tensors, modules, simple training loops—is suggested, as the hands-on session will guide participants through the implementation and evaluation of physics-aware deep despeckling models. Code templates and Kaggle/Colab-ready notebooks will be provided to minimize setup overhead.
Presented by: Dinh Ho Tong Minh
Description
The New Standard in Earth Observation
The European Space Agency's Copernicus Sentinel-1 program has revolutionized our ability to monitor Earth's dynamic surface. By utilizing Synthetic Aperture Radar (SAR), this technology transcends the limitations of traditional optical imaging—penetrating dense cloud cover and operating continuously through darkness. This all-weather capability is essential for consistent, long-term tracking of minute surface deformations.
Cutting-Edge Methodologies
This class bridges the gap between complex radar theory and practical research application. We move beyond basic techniques to explore advanced Interferometric SAR (InSAR) methodologies, specifically focusing on:
- PSDS InSAR: Persistent Scatterers and Distributed Scatterers algorithms.
- ComSAR: Advanced compression techniques for efficient data handling.
- TomoSAR: Integrated within our robust, open-source package for comprehensive analysis.
Applied Learning Experience
Designed for researchers across disciplines, this tutorial deconstructs sophisticated concepts into an accessible, streamlined workflow. You will engage directly with real-world Sentinel-1 datasets, utilizing industry-standard open-source tools—including ISCE, SNAP, StaMPS, and TomoSAR—to derive actionable geodynamic insights without requiring advanced programming expertise.
Learning Outcomes
By the end of this half-day session, participants will gain a strong foundational understanding of radar interferometry and the ability to generate ground motion time series from SAR image stacks—equipping them to contribute to the next generation of Earth monitoring.
Tutorial Learning Objectives
After just a half-day of training, participants will gain the following skills:
- Access SAR Data: You will easily access SAR data, making it readily available for your analysis.
- Master InSAR Theory: Our expert guidance will help you understand the intricacies of Interferometric SAR (InSAR) processing, breaking down complex concepts into easily digestible information.
- Interferogram Creation: You will learn how to create interferograms, a crucial step in the process that provides valuable insights into the Earth's surface.
- Ground Motion Interpretation: With our guidance, you will be able to interpret the ground motions revealed by these interferograms, allowing you to understand and analyze changes in the Earth's surface.
- Time Series Extraction: We will clarify the process of extracting ground motion time series from a stack of SAR images, empowering you to track and monitor surface movements over time.
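The core interferometric relation behind these steps is compact: the interferogram is the first image times the complex conjugate of the second, and its phase converts to line-of-sight displacement via d = phi * lambda / (4*pi) for repeat-pass InSAR. The sketch below demonstrates this on a noise-free synthetic pair; the 5 mm displacement is a made-up value, and the C-band wavelength is the approximate Sentinel-1 value.

```python
import numpy as np

wavelength = 0.05546                  # approx. Sentinel-1 C-band wavelength (m)
d_true = 0.005                        # hypothetical 5 mm line-of-sight motion

rng = np.random.default_rng(0)
# Two co-registered synthetic SLC images of a 64x64 scene; the second
# acquisition carries an extra two-way propagation phase of 4*pi*d/lambda.
amp = rng.rayleigh(1.0, (64, 64))
phase0 = rng.uniform(-np.pi, np.pi, (64, 64))
slc1 = amp * np.exp(1j * phase0)
slc2 = amp * np.exp(1j * (phase0 - 4 * np.pi * d_true / wavelength))

# Interferogram: the phase of s1 * conj(s2) isolates the deformation term.
interferogram = slc1 * np.conj(slc2)
phi = np.angle(interferogram).mean()
d_est = phi * wavelength / (4 * np.pi)
print(f"estimated LOS displacement: {d_est * 1000:.2f} mm")
```

Real Sentinel-1 stacks add orbital, topographic, and atmospheric phase terms plus decorrelation noise, which is precisely what the PSDS, ComSAR, and TomoSAR processing chains in this tutorial are designed to separate.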
Prerequisites
This tutorial is designed for a broad audience, including students, researchers, and professionals interested in radar remote sensing. No coding experience is required, and basic familiarity with remote sensing or GIS concepts is helpful but not mandatory.
Presented by: Prof. Pedram Ghamisi, Dr. Yao Sun, Dr. Raian Vargas Maretto, Dr. Kasra Rafiezadeh Shahi
Description
Artificial Intelligence has rapidly become a cornerstone of modern Earth Observation (EO), enabling breakthroughs in object detection, land-use mapping, and change detection. This has created an unprecedented opportunity to better monitor the Earth's surface and to foster better policies aimed at sustainability, improved social conditions, and disaster mitigation and monitoring. However, the growing reliance on AI in high-impact EO applications has raised critical questions about fairness, transparency, accountability, and effectiveness in creating impact. How can we ensure that AI systems are not only accurate, but also equitable, explainable, and reproducible?
This tutorial provides a structured overview of Responsible and Trustworthy AI for Earth Observation, introducing the main principles, frameworks, and practices emerging across research and operational contexts. It aims to help the EO community understand how responsible AI methods can enhance reliability, transparency, and inclusiveness in geoscience applications.
The tutorial is organized into five thematic parts:
- Fairness and Bias in EO Datasets and Models
  - Sources of spatial and social bias in EO data (e.g., uneven global coverage, socio-economic disparities).
  - Why fairness matters for scientific validity and global equity.
  - Strategies and metrics for assessing bias and ensuring inclusion.
- Explainability and Transparency
  - The importance of interpretability in EO-driven decision-making.
  - Overview of model explanation paradigms (post-hoc vs. intrinsic transparency).
  - How explainable AI contributes to scientific trust and reproducibility.
- Uncertainty and Trustworthiness
  - Understanding uncertainty as a dimension of responsible modeling.
  - Conceptual distinctions between aleatoric and epistemic uncertainty.
  - How to communicate uncertainty in EO-based risk or policy applications.
- Standardization, Reproducibility, and Ethical Practice
  - Emerging GRSS initiatives and international standards for Responsible AI.
  - Dataset and model documentation (dataset cards, model cards).
  - Open data, transparency, and reproducible pipelines for sustainable AI4EO.
- Impact-driven Development and Stakeholder Engagement
  - How to engage stakeholders and involve them in the design of AI applications aimed at informing decision-making and policy.
  - Keeping stakeholders in the loop for a user-centric design approach.
  - Communicating with stakeholders to maximize impact.
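The aleatoric/epistemic distinction in the uncertainty part can be illustrated with a toy ensemble calculation, hedged as follows: with an ensemble of probabilistic models, a common decomposition takes the average of the members' own predictive variances as the aleatoric (irreducible data noise) part and the variance of the member means as the epistemic (model disagreement) part. All numbers below are made up for illustration.

```python
import numpy as np

# Hypothetical predictions from 5 ensemble members for one EO pixel/sample:
member_means = np.array([0.62, 0.58, 0.65, 0.40, 0.60])
member_vars = np.array([0.010, 0.012, 0.009, 0.011, 0.010])

aleatoric = member_vars.mean()        # irreducible data noise
epistemic = member_means.var()        # model uncertainty (disagreement)
total = aleatoric + epistemic

print(f"aleatoric={aleatoric:.4f} epistemic={epistemic:.4f} total={total:.4f}")
```

Reporting the two components separately tells a decision-maker whether more training data (epistemic) or better sensors (aleatoric) would reduce the uncertainty, which is the communication question the tutorial addresses.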
By integrating these perspectives, the tutorial provides participants with a coherent understanding of how Responsible AI concepts connect to the daily practice of EO research and data-driven decision-making. It aims to encourage the adoption of transparent, accountable, and reproducible methodologies across the GRSS community.
Tutorial Learning Objectives
After this tutorial, participants will:
- Understand the key principles of Responsible and Trustworthy AI in EO.
- Recognize common sources of bias and their implications for scientific validity.
- Appreciate the importance of explainability and transparency for model credibility.
- Comprehend how uncertainty and reliability measures can support decision-making.
- Comprehend the importance of keeping stakeholders in the loop to achieve broader and more effective impact.
- Learn about current GRSS and international efforts to promote reproducibility and ethical standards in AI4EO.
The tutorial provides a comprehensive foundation for researchers, practitioners, and students working in EO, AI, and geosciences who seek to apply AI responsibly in real-world contexts.
Prerequisites
No specific technical or coding expertise is required. The tutorial focuses on frameworks, use cases, and best practices relevant to both academic and applied EO research.
Presented by: Gilberto Camara, Charlotte Pelletier, Rolf Simoes, Felipe Carlos, Felipe Souza
Description
This tutorial introduces SITS, a robust open-source package designed for large-scale Earth Observation (EO) data analytics in both R and Python. SITS excels in satellite image time series analysis and supports multi-modal data cubes—including optical, SAR, DEM, and other gridded datasets. Through the STAC protocol, users can seamlessly access diverse data sources such as Copernicus Data Space Ecosystem (CDSE), AWS, Microsoft Planetary Computer, Digital Earth Africa, and Brazil Data Cube.
SITS features a flexible data cube architecture, enabling efficient processing across extensive regions and multiple grid zones. Its comprehensive suite of machine learning and deep learning algorithms includes SVM, random forests, temporal convolutional neural networks, and temporal attention encoders. Advanced capabilities—such as object-based time series classification—enable detailed land cover change analysis. The package also supports active learning, integrating self-organizing maps and uncertainty estimation, and offers methods for balancing training samples. SITS is fully compatible with leading R and Python geospatial libraries, including R-sf, R-stars, geopandas, and Xarray.
Time series analysis is now central to land use and land cover (LULC) mapping and change detection. SITS provides a complete, open-source pipeline for these tasks. In this tutorial, IGARSS participants will be shown how to access multi-source data with STAC and how to run machine learning workflows on terabyte-scale datasets using sits. Furthermore, this tutorial will also demonstrate the process of merging SAR, optical, and DEM data with data cubes for advanced analysis.
The tutorial will also showcase how to use SITS to build geo-embeddings with masked autoencoders and attention-based models for self-supervised learning, fine-tune embeddings with small training sets, and apply these techniques for time series classification.
Scalable from personal computers to large servers, SITS operates seamlessly on both CPU and GPU platforms. Developed since 2017, SITS is now stable and operational, with proven success in large-scale mapping projects.
The capabilities presented in this tutorial address IGARSS’s growing interest in scalable EO analytics, reproducibility, uncertainty quantification, and applications such as deforestation monitoring, agriculture, and climate-change studies. Participants will gain practical, reproducible workflows for large-scale land-use mapping.
Tutorial Learning Objectives
The aim of the tutorial is to present experts with an operational tool for advanced satellite image time series analysis. SITS has extensive documentation (https://docs.ropensci.org/sits) and a self-learning e-book (https://e-sensing.github.io/sitsbook/), which will be used as the basis for the tutorial.
Tutorial attendees will learn how to:
- Access EO cloud services and build large-scale data cubes of image time series.
- Build multi-modal data cubes combining optical, SAR and DEM data.
- Perform quality control of training samples using self-organised maps.
- Produce vegetation indices and calculate mixture models with time series.
- Build machine-learning (ML) and deep-learning (DL) models for image time series.
- Fine-tune hyperparameters for ML and DL models.
- Construct embeddings with masked autoencoders and use them for classification.
- Generate spatiotemporal segmentation for object-oriented time series classification.
- Estimate map uncertainty and do classification post-processing with Bayesian smoothing.
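The final objective, Bayesian smoothing of class probabilities, can be sketched in a few lines of plain Python. This is a simplified stand-in for the sits implementation, not its actual API: the logit-space blend with a fixed neighbourhood weight is our illustrative assumption.

```python
import math

def logit(p, eps=1e-6):
    """Map a probability to logit space (clamped away from 0 and 1)."""
    p = min(max(p, eps), 1.0 - eps)
    return math.log(p / (1.0 - p))

def bayesian_smooth(probs, neighbour_weight=0.7):
    """Blend each pixel's class logits with the mean logit of its 3x3
    neighbourhood, then map back to probabilities with a softmax."""
    rows, cols, n_classes = len(probs), len(probs[0]), len(probs[0][0])
    out = [[None] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            blended = []
            for c in range(n_classes):
                centre = logit(probs[i][j][c])
                neigh = [logit(probs[a][b][c])
                         for a in range(max(0, i - 1), min(rows, i + 2))
                         for b in range(max(0, j - 1), min(cols, j + 2))
                         if (a, b) != (i, j)]
                blended.append((1 - neighbour_weight) * centre
                               + neighbour_weight * sum(neigh) / len(neigh))
            # softmax back to a probability vector
            top = max(blended)
            exps = [math.exp(v - top) for v in blended]
            total = sum(exps)
            out[i][j] = [e / total for e in exps]
    return out
```

Run on a 3x3 grid where the centre pixel weakly favours class 1 but every neighbour strongly favours class 0, and the smoothed centre pixel flips to the majority class, which is exactly the noise-suppression effect the post-processing step aims for.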
All of the examples will be available as Jupyter notebooks on Kaggle, ready to run. The following links will be updated and prepared to run in the tutorial without the need for software installation:
- Introduction to SITS - https://www.kaggle.com/esensing/introduction-to-sits
- Working with time series in SITS - https://www.kaggle.com/esensing/working-with-time-series-in-sits
- Creating data cubes in SITS - https://www.kaggle.com/esensing/creating-data-cubes-in-sits
- Improving the quality of training samples - https://www.kaggle.com/code/esensing/improving-quality-of-training-samples
- Machine learning for data cubes - https://www.kaggle.com/esensing/machine-learning-for-data-cubes
- Classification of raster data cubes - https://www.kaggle.com/code/esensing/classification-of-raster-data-cubes
- Bayesian smoothing for post-processing - https://www.kaggle.com/code/esensing/bayesian-smoothing-for-post-processing
- Uncertainty and active learning - https://www.kaggle.com/code/esensing/uncertainty-and-active-learning
- Object-based time series classification - https://www.kaggle.com/esensing/object-based-image-time-series-classification
Prerequisites
Minimum prerequisites:
- Basic knowledge of Python or R.
- General understanding of the requirements of land use and land cover mapping using remote sensing data.
- Basic understanding of deep learning and machine learning principles.
Desirable prerequisites:
- Knowledge of deep learning and machine learning algorithms for images such as random forests and convolutional neural networks.
- Familiarity with Sentinel-1, Sentinel-2, Landsat and MODIS images.
- Knowledge of the basics of image time series analysis.
Presented by: Esther Oyedele
Description
As Earth observation (EO) technologies accelerate into a new era of hyperspectral constellations, AI-enhanced analytics, digital twins, and climate services, the value of this innovation depends on one essential skill: the ability to communicate insights clearly, accurately, and compellingly to the people who rely on them. From decision makers navigating climate risks to communities experiencing environmental change, effective communication determines whether EO data becomes actionable knowledge or remains inaccessible technical output.
This tutorial introduces practical, future-oriented communication strategies tailored for the next generation of remote sensing research. Participants will learn how to translate advanced EO outputs into narratives that illuminate societal relevance, fuel cross-sector collaboration, and support climate resilience. The session blends lecture, case studies, and hands-on activities to offer a toolkit for transforming complex datasets into accessible stories aligned with IGARSS 2026’s focus on the future of Earth observations.
A central component of the tutorial is crafting narratives that connect modern EO capabilities to real-world impact. Through examples involving climate extremes, water-energy-food security, cryosphere change, and disaster response, participants will practice turning technical insights into messages that resonate with diverse audiences, including policymakers, educators, local communities, and interdisciplinary partners. We highlight how next-generation EO systems can be embedded into broader societal narratives about adaptation, sustainability, and environmental justice.
The tutorial also addresses one of the most pressing communication challenges in future EO: conveying uncertainty transparently in an age of predictive models, probabilistic products, and AI-generated insights. Participants will learn approaches for framing uncertainty in ways that build trust and clarify risk without oversimplifying scientific nuance.
Finally, the session integrates principles of Diversity, Equity, Inclusion, Accessibility, and Belonging to ensure that advances in EO benefit the widest possible range of stakeholders. We explore strategies for co-producing messages with communities, designing accessible visuals, and using culturally informed storytelling to bridge the gap between scientific innovation and lived experience.
By the end of the tutorial, participants will be prepared to communicate research in a way that matches the urgency and complexity of the future EO landscape. They will leave with transferable strategies for amplifying the societal impact of their work and strengthening the global narrative around Earth observation and resilience.
Tutorial Learning Objectives
Participants will develop foundational and advanced skills in storytelling, visual communication, and message framing tailored specifically for EO research. Attendees will practice tailoring communication for diverse audiences, including decision makers, technical collaborators, educators, and the public. They will explore frameworks for identifying core messages, anticipating audience needs, and selecting communication formats that enhance clarity and engagement.
Prerequisites
Participants should have a foundational understanding of Earth observation (EO) and remote sensing data applications, as this tutorial will use examples from these fields to build communication skills. However, no prior experience in science communication is required. This tutorial is designed to introduce fundamental concepts in storytelling, data visualization, and audience engagement, making it accessible to those new to these skills.
An interest in public outreach, policy influence, or interdisciplinary collaboration will enhance participants’ experience, as the session emphasizes effective communication across various audiences, from scientific peers to the general public. Participants will benefit from a curiosity about making scientific data more accessible and relevant to broader audiences, including policymakers, educators, and community stakeholders.
This tutorial is particularly suited for early-career researchers, graduate students, and professionals working on climate monitoring, natural hazard assessment, sustainability, or emerging EO technologies. By the end of the session, participants will have practical skills to communicate their research in ways that highlight the transformative potential of future EO systems and support informed action across a wide landscape of stakeholders.
Presented by: Seyed (Reza) Zekavat, Radwin Askari
Description
Recent advances in artificial intelligence (AI), autonomous platforms, and software-defined radar technologies have opened a new era in Intelligent Ground Penetrating Radar (GPR), a next-generation approach for non-invasive, high-resolution soil subsurface characterization. This tutorial introduces the theoretical foundations, system architecture, and practical implementation of intelligent GPR, with emphasis on drone-borne soil moisture mapping and root-zone characterization for large-scale environmental, agricultural, and geotechnical applications.
The tutorial begins with an overview of the limitations of existing soil moisture sensing technologies, including electromagnetic (EM) induction instruments (e.g., EM31/38) and satellite-based microwave remote sensing. These methods either lack sufficient vertical resolution or are restricted to shallow soil depths. In contrast, GPR can bridge this gap by offering multi-depth, high-resolution subsurface imaging. However, conventional GPR systems are slow, waveform-limited, and lack adaptive intelligence for complex and heterogeneous soil environments. The emerging Intelligent GPR paradigm overcomes these challenges through the integration of machine learning (ML), synthetic data generation, and intelligent waveform design.
Participants will be guided through the key components of an intelligent GPR system, including (1) transmitter-receiver architecture, (2) waveform and antenna design, (3) realistic soil channel modeling, (4) synthetic and generative data creation, and (5) ML-driven inversion and visualization. The session highlights multi-carrier waveform design, such as stepped-frequency continuous wave (SFCW), frequency-modulated continuous wave (FMCW), and orthogonal frequency-division multiplexing (OFDM), and explains how AI-driven optimization (e.g., reinforcement learning and deep neural networks) enables real-time waveform adaptation to subsurface variability. The tutorial also presents intelligent hardware frameworks using software-defined radios (SDRs), embedded GPUs/NPUs, and on-board computing platforms (e.g., NVIDIA Jetson, AMD PYNQ, and Qualcomm RB5) that enable near real-time 3D soil imaging.
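To make the SFCW idea concrete, the following minimal Python sketch simulates the complex echo of a single point reflector across a stepped-frequency sweep and recovers its range with an inverse DFT. All parameter values here are our illustrative assumptions, not figures from the tutorial.

```python
import cmath
import math

C = 3.0e8  # speed of light in m/s

def sfcw_echo(r_target, f0, df, n_steps):
    """Complex echo of one point reflector for a stepped-frequency sweep."""
    tau = 2.0 * r_target / C  # two-way travel time
    return [cmath.exp(-2j * math.pi * (f0 + n * df) * tau) for n in range(n_steps)]

def range_profile(echo):
    """Inverse DFT over the frequency steps; the magnitude peaks at the
    bin corresponding to the target's round-trip delay."""
    n = len(echo)
    return [abs(sum(echo[m] * cmath.exp(2j * math.pi * m * k / n) for m in range(n)))
            for k in range(n)]

n_steps, df = 64, 10e6            # 64 steps of 10 MHz -> 15 m unambiguous range
echo = sfcw_echo(1.875, 1.0e9, df, n_steps)
profile = range_profile(echo)
peak_bin = max(range(n_steps), key=profile.__getitem__)
est_range = peak_bin * C / (2 * n_steps * df)  # bin index -> metres
```

With these settings the range resolution is c/(2·N·Δf) ≈ 0.23 m; an intelligent GPR would adapt f0, Δf, and N in real time to the observed soil response, which is precisely the waveform-adaptation loop the tutorial describes.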
A substantial portion of the session will focus on machine learning models for soil texture, composition, and moisture estimation, trained using hybrid datasets derived from real GPR field measurements and synthetic data generated via emulation engines such as gprMax. The workflow includes data pre-processing, feature extraction, labeling, and digital twin creation. Attendees will learn how realistic soil horizons (O-C layers) and electrical properties (permittivity and conductivity) are parameterized to produce physically consistent training datasets for model calibration and validation.
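As a concrete example of how soil electrical properties link to moisture, the widely used empirical model of Topp et al. (1980) maps relative dielectric permittivity to volumetric water content, and two-way travel time plus permittivity yields reflector depth. A minimal sketch (function names are ours):

```python
def topp_moisture(eps_r):
    """Volumetric water content (m^3/m^3) from relative dielectric
    permittivity, via the Topp et al. (1980) third-order polynomial."""
    return -5.3e-2 + 2.92e-2 * eps_r - 5.5e-4 * eps_r**2 + 4.3e-6 * eps_r**3

def reflector_depth(two_way_time_ns, eps_r):
    """Depth of a reflecting soil horizon from GPR two-way travel time:
    the wave speed in the medium is c / sqrt(eps_r)."""
    c_m_per_ns = 0.2998  # speed of light in m/ns
    velocity = c_m_per_ns / eps_r ** 0.5
    return velocity * two_way_time_ns / 2.0
```

For example, a permittivity of 20 corresponds to roughly 0.35 m³/m³ water content, and a 40 ns echo in soil with permittivity 9 places the reflector near 2 m depth; ML-based inversion, as covered in the tutorial, replaces such fixed empirical relations with models trained on hybrid real and synthetic data.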
By integrating radar system engineering, soil science, and data intelligence, the tutorial provides a comprehensive roadmap for designing and deploying intelligent, drone-based GPR platforms. Applications include precision agriculture, hydrological monitoring, carbon and moisture mapping, and planetary exploration. The tutorial concludes by identifying open challenges, such as standardizing synthetic data protocols, improving online learning efficiency, and developing robust cross-domain calibration, to guide future interdisciplinary research in AI-enhanced geophysical sensing.
This tutorial will be supported by the SoilX Lab team. Please visit soilx.wpi.edu.
Tutorial Learning Objectives
By the end of this tutorial, participants will be able to:
- Understand the architecture of Intelligent GPR systems, including waveform design, antenna configurations, transceiver architectures, and hardware integration.
- Differentiate between classical and intelligent waveform design approaches and explain how adaptive waveform optimization can enhance subsurface resolution and penetration.
- Describe soil medium characteristics and their impact on electromagnetic wave propagation, including key parameters, empirical soil models, and soil property sets.
- Implement synthetic and generative data workflows for GPR applications, including channel emulation using tools such as gprMax, forward modeling and calibration using full waveform inversion (FWI), and validation and augmentation of data through GenAI-based methods.
- Apply machine learning methods to GPR data processing, including preprocessing, feature extraction, model training, and inference for soil moisture and subsurface structure estimation.
- Utilize advanced visualization techniques for interpreting 2D and 3D subsurface imaging and root-zone moisture distribution.
- Recognize current challenges, limitations, and emerging research opportunities in the development of Intelligent GPR systems and AI-enabled subsurface characterization.
- Gain hands-on familiarity with open tools and resources from the SoilX Lab (soilx.wpi.edu) supporting intelligent GPR research, simulation, and visualization.
Prerequisites
Participants are expected to have:
- Basic knowledge of electromagnetic wave propagation
  - Familiarity with Maxwell's equations, dielectric properties, and EM interaction with materials (e.g., soil, water, air).
  - Understanding of frequency, wavelength, attenuation, and reflection principles.
- Fundamentals of radar or GPR operation
  - Awareness of radar system components such as transmitters, receivers, antennas, and waveform parameters.
  - Basic understanding of time-domain or frequency-domain radar principles.
- Introductory experience with signal and data processing
  - Understanding of time-series filtering, noise reduction, and feature extraction techniques.
  - Exposure to tools such as MATLAB, Python, or equivalent environments for data processing and visualization.
- Basic knowledge of machine learning concepts
  - Awareness of supervised learning, data labeling, and model training/testing workflows.
  - Familiarity with standard ML frameworks (e.g., scikit-learn, TensorFlow, or PyTorch) is helpful but not required.
- Interest in AI applications in remote sensing or geophysical instrumentation
  - Attendees should be motivated to explore how artificial intelligence, generative data, and simulation tools can be integrated into modern radar and sensing systems.
Presented by: Prof. Ashok K. Keshari, Prof. R. P. Singh, Mr. Rajeev Ranjan
Description
Natural catastrophes are becoming more frequent and intense due to climate change and human interference. Traditional approaches to disaster management often struggle to provide timely, accurate, and localized real-time information for robust and rapid disaster response, especially in complex, inaccessible, and impassable terrain. Remote sensing data, however, enables users to map disaster-prone areas more precisely and repeatedly. The accessibility and availability of remote sensing data, and their integration with Geospatial Artificial Intelligence (GeoAI) and cloud-based platforms, have accelerated non-structural approaches to disaster management. Unconventional technologies such as GeoAI and cloud-based remote sensing allow easy and rapid development of tools offering automation, scalability, affordability, and wide applicability with minimal computational resources, together with user-friendliness, diverse applications, high market demand, and substantial job returns compared with conventional technologies. Remote sensing derived attributes of disaster events are also required as input to conventional process-based models. It has therefore become pressing to acquire integrated knowledge and skills in GeoAI and cloud-based remote sensing computation for developing improved and robust disaster management strategies.
The proposed tutorial will provide IGARSS participants with lectures and hands-on experience in applying advanced unconventional techniques to improved disaster management. It will leverage unconventional technology such as GeoAI in conjunction with cloud-based platforms, particularly Google Earth Engine (GEE), for real-time application of remote sensing in disaster management. The tutorial will cover the end-to-end workflow, from acquisition of data such as Synthetic Aperture Radar (SAR) imagery to the integrated application of unconventional technology for real-time mapping and monitoring of disasters, particularly floods, including riverine floods, urban floods, and Glacial Lake Outburst Floods (GLOFs). The insights gained will enable participants to develop and customize automated, scalable applications and tools for their own research and for wider needs in regional disaster resilience and risk mitigation, particularly for floods.
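The core of SAR-based flood mapping can be illustrated with a few lines of plain Python: convert linear backscatter to decibels and threshold it, since a smooth open-water surface reflects the radar pulse away from the sensor and so appears dark. In practice this is done in GEE on calibrated Sentinel-1 scenes with a scene-dependent threshold; the value below is purely an assumption for illustration.

```python
import math

def to_db(sigma0_linear):
    """Convert linear backscatter (sigma0) to decibels."""
    return 10.0 * math.log10(sigma0_linear)

def flood_mask(backscatter_linear, threshold_db=-15.0):
    """Flag pixels whose backscatter falls below the threshold as open
    water: smooth water surfaces return very little energy to the sensor."""
    return [[to_db(value) < threshold_db for value in row]
            for row in backscatter_linear]
```

On a toy 2x2 scene with linear sigma0 values [[0.2, 0.01], [0.3, 0.005]], only the two low-backscatter pixels (−20 dB and −23 dB) are flagged as water; real workflows add speckle filtering, terrain masking, and change detection against a pre-flood baseline.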
The IGARSS community includes many researchers with strong interest in emerging unconventional technologies such as GeoAI and cloud-based remote sensing for disaster management, and interest in these technologies is growing rapidly thanks to their many appealing characteristics. The proposed tutorial is therefore framed around the present needs of IGARSS community participants and will draw interest from multiple core and applied disciplines. It aligns with the goal of IGARSS to foster advances in remote sensing and geospatial science for worldwide benefit. The tutorial addresses the combined application of these unconventional technologies to disaster management, which appeals to researchers and practitioners in hydrology, climate science, urban planning, and environmental monitoring. By the end, attendees will have acquired insights and skills in the integrated application of cutting-edge unconventional technologies such as GeoAI and cloud-based remote sensing, building and enhancing their capabilities in scientific understanding and in developing strategies for preparedness, response, and disaster resilience, particularly for floods, within their communities and globally.
Tutorial Learning Objectives
- Understand the applications of remote sensing and geospatial technology in flood disaster management: Gain a comprehensive understanding of desktop and cloud-based advanced remote sensing data for real-time disaster monitoring and response.
- Cloud-based advanced remote sensing (particularly SAR) data and geospatial analysis using artificial intelligence models: Learn to navigate and use the GEE cloud-based platform for large-scale, real-time data processing, enabling efficient analysis of multisource remote sensing data, particularly SAR.
- Apply GeoAI for real-time mapping and monitoring of real-world flood disaster scenarios: Explore the application of GeoAI to specific flood disaster types to improve preparedness and mitigation efforts by identifying vulnerable zones.
- Hands-on practice sessions on real-world applications of conventional and unconventional (GeoAI and cloud-based SAR remote sensing) technologies for mapping and monitoring flood disaster events.
Prerequisites
Participants should have basic knowledge of remote sensing and Geographic Information Systems (GIS), a foundational understanding of AI, beginner experience with the Google Earth Engine (GEE) cloud-based platform (preferred but not required), basic programming skills in Python (using Google Colab) and JavaScript, and an interest in disaster risk reduction and environmental monitoring.
Attendees should also bring a laptop in good working condition for data processing and visualization, and must have a working GEE account.
Presented by: David Kerekes, Fuxun Yu, Konstantin Klemmer, Conrad M Albrecht
Online Support: Isabelle Wittmann, Esther Rolf, Rikard Vinge, Marc Russwurm, Andrea Nascetti, Zirui Xu
Description
This tutorial provides an introduction to and hands-on session with NeuCo-Bench, https://github.com/embed2scale/NeuCo-Bench/tree/main, a benchmark framework that emerged from the 2025 CVPR EarthVision workshop, https://www.grss-ieee.org/events/earthvision-2025/?tab=challenge. We designed NeuCo-Bench as an extendable, community-driven framework to evaluate neural compression and representation learning methods in the context of Earth Observation (EO) without the need for heavy compute resources. The growing volume and complexity of EO data call for standardized, scalable, and reproducible tools that can assess the quality and utility of learned representations across diverse downstream tasks. NeuCo-Bench and its associated Earth2Vec community (https://earth2vec.github.io), which grew out of the Horizon Europe project Embed2Scale (https://embed2scale.eu), address this gap by offering a modular evaluation pipeline built around fixed-size, task-agnostic embeddings, enabling researchers and practitioners to consistently compare neural compression methods.
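To illustrate how fixed-size, task-agnostic embeddings can be compared on a downstream task, here is a toy nearest-centroid probe in pure Python. NeuCo-Bench's actual evaluation pipeline and balanced scoring are considerably more elaborate (see the repository), so treat this only as a conceptual sketch with made-up data.

```python
def nearest_centroid_accuracy(train, test):
    """Fit one centroid per class from (embedding, label) pairs, then report
    the fraction of held-out embeddings assigned to their true class."""
    sums, counts = {}, {}
    for vec, label in train:
        counts[label] = counts.get(label, 0) + 1
        acc = sums.setdefault(label, [0.0] * len(vec))
        for d, v in enumerate(vec):
            acc[d] += v
    centroids = {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

    def predict(vec):
        # nearest centroid by squared Euclidean distance
        return min(centroids,
                   key=lambda lab: sum((a - b) ** 2
                                       for a, b in zip(vec, centroids[lab])))

    hits = sum(predict(vec) == label for vec, label in test)
    return hits / len(test)
```

An embedding whose classes form tight, well-separated clusters scores high under such a probe; comparing two compression methods on the same frozen downstream tasks is exactly the kind of apples-to-apples evaluation NeuCo-Bench standardizes.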
The tutorial will provide participants with both conceptual foundations and hands-on experience in testing the quality of geospatial embeddings. We will cover:
- The motivation behind neural compression and its role in scalable EO analytics through representation learning.
- The architecture and components of NeuCo-Bench, including its evaluation pipeline, hidden-task challenge mode, and balanced scoring system.
- Practical sessions employing the publicly released SSL4EO-S12-downstream dataset for benchmarking (https://huggingface.co/datasets/embed2scale/SSL4EO-S12-downstream), a curated multimodal, multitemporal EO dataset for classification and regression downstream tasks.
- Demonstrations of how to integrate and evaluate, within the NeuCo-Bench framework, state-of-the-art foundation models such as
  - the 2025 CVPR EarthVision winning solutions (https://arxiv.org/abs/2509.06993, https://github.com/KerekesDavid/embed2scale-solution),
  - TerraMind (https://arxiv.org/abs/2504.11171), and
  - TerraCodec (https://arxiv.org/abs/2510.12670).
  Beyond these, we will peek into energy-efficient models such as MOSAIKS (https://www.mosaiks.org) and implicit neural representations such as those utilized by SatCLIP (https://github.com/microsoft/satclip).
- A panel discussion on how to move forward the developments of NeuCo-Bench based on the needs of the EO community in the light of the latest trends in self-supervised learning for multi-modal EO data compression and representation.
Tutorial participants will:
- get their feet wet with cutting-edge techniques in neural representation learning and compression for EO data.
- get exposed to the Earth2Vec open-science community with experts from industry, government organizations, and academia.
- leave with actionable skills and tools to evaluate their own models, contributing to more robust and comparable research outcomes across the field of EO.
- have the opportunity to shape the future of NeuCo-Bench.
Whether you are an EO researcher developing new machine learning models, a practitioner deploying EO analytics at scale, or an educator seeking to introduce students to modern EO workflows, this tutorial offers valuable insights and tools to advance your work.
Tutorial Learning Objectives
- Introduction to the theory of neural compression and its relation to self-supervised learning from the angle of EO
- Introduction to the concept of NeuCo-Bench as an extendable framework for neural compression
- Practical guidance on how to benchmark state-of-the-art foundation models with NeuCo-Bench
- Hands-on guidance on contributing novel downstream tasks or evaluation pipelines to NeuCo-Bench
Prerequisites
Required
- Basics in:
  - Python programming: https://www.python.org
  - Git version control: https://git-scm.com
- High-level understanding of:
  - deep neural networks: https://www.youtube.com/watch?v=l42lr8AlrHk
  - multi-spectral satellite sensors: https://www.youtube.com/watch?v=zMsCyEAOrh0
  - EO SAR sensors: https://www.youtube.com/watch?v=A6WzAc1FTeA
- For online participants (if any): a stable internet connection with bandwidth to support video conferencing, including screen sharing.
Recommended
You will benefit most from the tutorial if you bring your own compression model that is able to compress Sentinel-1 and Sentinel-2 data, so you can apply NeuCo-Bench to it. If you would like to prepare in advance, check out https://github.com/DLR-MF-DAS/embed2scale-challenge-supplement, potentially open an issue, and contact Conrad.Albrecht@DLR.de.
Presented by: Francescopaolo Sica, Fabio Pacifici, Ronny Hänsch
Description
This tutorial provides an advanced yet accessible overview of machine learning strategies for integrating optical and SAR data in Earth observation, emphasizing both the foundations of data understanding and the latest multimodal learning techniques. Participants will learn how to prepare, align, and combine these two complementary modalities to enhance environmental monitoring, change detection, and semantic mapping.
We begin by revisiting the fundamental properties and complementarities of optical and SAR data. Participants will gain an intuitive understanding of their physical principles - spectral reflectance versus microwave backscatter - and how imaging geometry, resolution, and environmental conditions affect information content. This section introduces key pre-processing steps required for data harmonization, including radiometric and atmospheric correction for optical imagery and calibration, terrain correction, and speckle filtering for SAR data. Practical demonstrations will illustrate how to co-register and align Sentinel-1 and Sentinel-2 products to build consistent multimodal datasets.
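To give a flavour of the co-registration step, the toy below estimates the integer offset between two 1-D intensity profiles by maximizing their cross-correlation. Operational co-registration of Sentinel-1 and Sentinel-2 products works in 2-D with subpixel accuracy and proper resampling, so this is only a conceptual sketch.

```python
def estimate_shift(ref, moving, max_shift=3):
    """Return the integer offset s that best aligns `moving` to `ref`,
    scored by the cross-correlation sum of ref[i] * moving[i + s]."""
    def score(s):
        return sum(ref[i] * moving[i + s]
                   for i in range(len(ref))
                   if 0 <= i + s < len(moving))
    return max(range(-max_shift, max_shift + 1), key=score)
```

For a bright feature shifted by two samples between the two profiles, the estimator recovers the offset of 2; applying the inverse shift (plus resampling, in the 2-D case) yields the pixel-aligned multimodal stack the subsequent learning steps require.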
Building on these foundations, the tutorial transitions to the machine learning perspective, addressing challenges such as domain discrepancy, scale differences, and feature alignment. We explore how recent multimodal deep learning paradigms—including cross-modal pretraining, knowledge distillation, and contrastive self-supervised learning—enable networks to leverage correlations between sensors and learn shared representations.
Participants will be introduced to practical examples of inter-modality learning, where optical data acts as a supervisory or guiding signal for SAR-based models:
- Transcoding-based pretraining: adapting networks pre-trained on optical data for SAR tasks.
- Knowledge distillation: transferring semantic understanding from optical to SAR networks.
- Multimodal self-supervised learning: using paired Sentinel-1/Sentinel-2 data in DINO- or CLIP-like frameworks.
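The knowledge-distillation idea in the second bullet can be sketched with the classic temperature-scaled KL loss of Hinton et al., where the optical teacher's softened class distribution supervises the SAR student. This is a generic sketch of the technique, not the specific recipe used in the tutorial.

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with optional temperature scaling."""
    top = max(logits)
    exps = [math.exp((z - top) / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on temperature-softened distributions,
    scaled by T^2 as in Hinton et al. (2015): the soft targets expose the
    teacher's 'dark knowledge' about class similarities."""
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    kl = sum(ti * math.log(ti / si) for ti, si in zip(t, s))
    return temperature ** 2 * kl
```

The loss is zero when the SAR student exactly matches the optical teacher's distribution and grows as the two disagree; in training it is typically mixed with a standard cross-entropy term on the available hard labels.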
Hands-on sessions will guide participants through real workflows:
- Exploring and visualizing Sentinel-1 and Sentinel-2 data in common analysis platforms.
- Performing basic co-registration and data alignment to prepare multimodal datasets.
- Extracting complementary features from SAR and optical imagery for joint analysis.
- Training and evaluating machine learning models for land cover mapping or flood detection using combined data.
Finally, case studies in environmental monitoring, disaster response, and land cover mapping will demonstrate how combining optical and SAR data increases accuracy and robustness under real-world constraints such as cloud cover or limited labels.
By combining theoretical insights with practical exercises, this tutorial offers a comprehensive pathway from data preparation to multimodal learning, showing how sensor synergies reshape our ability to extract meaningful geophysical information from satellite observations.
Tutorial Learning Objectives
Participants will learn to:
- Understand the physical and informational differences between optical and SAR observations.
- Identify key challenges in multimodal data integration (alignment, scale, and domain gaps).
- Apply essential pre-processing techniques for optical (radiometric/atmospheric correction) and SAR (calibration, speckle reduction, geometric correction) data.
- Build consistent co-registered datasets from Sentinel-1 and Sentinel-2 imagery.
- Implement state-of-the-art machine learning methods for cross-modal training, including transcoding, distillation, and multimodal self-supervised learning.
- Evaluate the impact of optical-SAR synergies on downstream applications such as semantic segmentation and environmental monitoring.
Prerequisites
- Basic understanding of remote sensing concepts (SAR and/or optical).
- Basic Python programming skills.
- Familiarity with machine learning frameworks (TensorFlow or PyTorch).
- Optional: experience with geospatial tools such as SNAP, QGIS, or Rasterio.
Presented by: Iksha Gurung, Muthukumaran Ramasubramanian, Manil Maskey, Rahul Ramachandran
Description
The exponential growth of data and computational power has unlocked new frontiers in research. This evolution is increasingly powered by Large Language Models (LLMs), which have moved beyond text generation to become sophisticated reasoning engines. This has initiated a paradigm shift toward agentic workflows, where LLMs and other computational tools are orchestrated as autonomous or semi-autonomous agents to tackle complex, multi-step problems. When applied to scientific inquiry, this approach facilitates automating hypothesis generation, data analysis, and insight extraction. However, unique challenges persist in applying these agentic systems to specialized domains like Earth Observation (EO), particularly in grounding LLMs, reasoning with geospatial datasets, and ensuring the reliability of autonomous systems. This tutorial addresses these challenges by guiding participants through the lifecycle of designing and implementing agentic workflows for scientific discovery using a new framework called Accelerated Knowledge Discovery (AKD).
Tutorial Learning Objectives
Participants will gain an understanding of the foundational principles of LLMs and the architecture of agentic workflows. This tutorial will cover key aspects of building robust agents for scientific applications, starting with hands-on training in prompting for science and an introduction to agent protocols, then moving to the design of complex multi-agent systems and to multi-faceted evaluation of the underlying processes against the rigorous expectations of the domain. Participants will acquire skills to use agentic workflows effectively to accelerate their research. They will explore implementing these workflows and learn to connect LLMs to external data sources and tools for a specific Earth Science use case. Participants will work directly with the AKD framework to design and implement a multi-agent workflow tailored for an Earth Observation system. They will also gain insight into the complexity of agent evaluation, with hands-on sessions on the end-to-end evaluation process. Compute and model access will be provided. To maximize hands-on time, the course organizers will pre-configure resources, preventing setup delays. This will enable participants to navigate the complete lifecycle of an agentic project, from initial design to a functional system for scientific discovery.
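A minimal, generic agent loop can illustrate the orchestration pattern described above. Everything here is a hypothetical sketch, not the AKD framework's API: the scripted planner stands in for an LLM, the NDVI tool and its inputs are invented, and the ("call", ...)/("finish", ...) action tuples are our own convention.

```python
def run_agent(task, planner, tools, max_steps=5):
    """Minimal agent loop: at each step the planner (an LLM in a real
    system) inspects the history and either calls a registered tool or
    finishes with an answer."""
    history = [("task", task)]
    for _ in range(max_steps):
        action = planner(history)
        if action[0] == "finish":
            return action[1]
        _, name, args = action                 # ("call", tool_name, kwargs)
        history.append((name, tools[name](**args)))
    return None

def mean_ndvi(red, nir):
    """Hypothetical tool: mean NDVI from paired red/NIR reflectances."""
    return sum((n - r) / (n + r) for r, n in zip(red, nir)) / len(red)

def scripted_planner(history):
    """Deterministic stand-in for an LLM planner: call the tool once,
    then summarize its result."""
    if history[-1][0] == "task":
        return ("call", "mean_ndvi", {"red": [0.1, 0.2], "nir": [0.5, 0.6]})
    return ("finish", f"mean NDVI = {history[-1][1]:.2f}")
```

Running `run_agent("estimate greenness", scripted_planner, {"mean_ndvi": mean_ndvi})` executes one tool call and one finish step. Replacing the scripted planner with a real LLM, adding more tools, and hardening the loop with validation and evaluation is, conceptually, what the multi-agent design sessions build toward.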
Prerequisites
Participants should have a good conceptual understanding of Large Language Models (LLMs). Experience in Python programming is required, including proficiency in foundational data science libraries such as NumPy and pandas. Familiarity with LLM orchestration frameworks such as LangChain and LangGraph, as well as experience with API integrations, is highly beneficial but not required. Each participant should bring a laptop (Windows, Mac, or Linux).