Community-Contributed Themes

CCT.1: Advanced Signal Processing for Geoscience, Remote Sensing, and Future Earth Observations
A.1: AI and Big Data — High Performance Computing in Remote Sensing
The field of geoscience and remote sensing has experienced rapid growth, driven by significant advances in sophisticated signal processing techniques. These developments enable deeper understanding and extraction of insights from the ever-expanding body of Earth observation data. The understanding and application of mathematical principles in engineering, particularly in digital signal processing, have propelled the field forward—resulting in the emergence of new methods that enhance the performance and efficiency of signal analysis.

Innovative methods and algorithms have become essential for handling the massive scale, complexity, and multimodal nature of geospatial data. Techniques such as compressive sensing, sparse representation, deep learning, and data fusion have opened new possibilities for improving spatial, temporal, and spectral resolution, as well as for uncovering subtle patterns in remote sensing observations. These advances support a wide range of applications. This session highlights cutting-edge signal processing methodologies that advance the analysis, interpretation, and utilization of geospatial data. Topics of interest include, but are not limited to:
(1) Compressive Sensing and Sparse Reconstruction (CS/SR): CS and SR methods for geoscience and remote sensing, including radar, subsurface imaging, hyperspectral, SAR, and geophysical signals (a sketch follows this list).
(2) Super-Resolution Techniques: super-resolution, resolution enhancement for optical and radar imagery, and applications in land cover and environmental monitoring.
(3) Emerging and Complex Signal Processing: complex-valued and hyper-complex signal/image processing, and innovative frameworks with promising applications.
(4) Neural Networks for Advanced Signal Processing: CNNs, deep learning, complex-valued, hypercomplex-valued, and quaternion neural networks, and other models for remote sensing analysis.
(5) Innovative Signal Processing Algorithms and Advanced Analysis: novel signal processing algorithms for geospatial data analysis and interpretation, advanced sampling, sparse representation, and dictionary learning.
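
As a minimal illustration of topic (1), the sketch below recovers a synthetic sparse signal from compressive measurements with the iterative soft-thresholding algorithm (ISTA). The dimensions, sparsity level, and regularization weight are illustrative assumptions, not tied to any particular sensor:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5                      # signal length, measurements, nonzeros

x_true = np.zeros(n)                      # sparse ground-truth signal
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                            # compressive measurements

# ISTA: gradient step on ||y - Ax||^2 followed by soft thresholding.
L = np.linalg.norm(A, 2) ** 2             # Lipschitz constant of the gradient
lam = 0.05                                # sparsity weight (assumed)
x = np.zeros(n)
for _ in range(500):
    g = A.T @ (A @ x - y)                 # gradient of the data-fit term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold

print("relative recovery error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```
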
CCT.2: GRSS ESI TC / HDCRS WG - Parallel and Innovative Computing Technologies for Earth Observation
A.1: AI and Big Data — High Performance Computing in Remote Sensing
Recent decades have experienced an extraordinary increase in Earth Observation (EO) data volumes due to the technological advancements associated with satellite missions. These missions generate massive, multi-source datasets that are now widely accessible to research institutions, public agencies, and private companies. However, the growing data scale also introduces significant challenges in terms of efficient storage, processing, and analysis.

Extracting actionable insights from such data requires advanced software and algorithmic approaches. In particular, Artificial Intelligence (AI) methods, especially Deep Learning (DL), have become central to modern remote sensing applications. Among these, large-scale and foundation models are gaining traction due to their ability to generalize across diverse EO tasks.

The computational demands of these models often exceed the capabilities of conventional desktop systems. High-Performance Computing (HPC), which aggregates and orchestrates large-scale computing resources, is therefore essential to support scalable EO workflows. Efficient deployment of DL models at this scale also brings attention to energy consumption, model optimization, and hardware-aware computing strategies.

This CC theme proposal on "Parallel and Innovative Computing Technologies for EO" invites papers that explore the synergy between advanced computing and EO. We invite papers focused on novel methods, algorithms and tools that leverage parallel computing, distributed systems, and hardware accelerators (GPUs, FPGAs, and TPUs), to enhance the scalability and efficiency of EO data processing. Additionally, the session encourages submissions addressing emerging paradigms such as edge computing, in-orbit AI, and neuromorphic computing.
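
As a minimal sketch of one such scalability pattern (assuming a tiling workflow and a placeholder per-tile kernel; a real system would distribute DL inference across GPU nodes), large scenes can be split into tiles and processed in parallel across CPU cores:

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

TILE = 512  # tile edge length in pixels (assumed)

def process_tile(tile: np.ndarray) -> float:
    # Placeholder kernel: a real workflow would run calibration,
    # classification, or DL inference here.
    return float(tile.mean())

def tiles(raster: np.ndarray):
    h, w = raster.shape
    for i in range(0, h, TILE):
        for j in range(0, w, TILE):
            yield raster[i:i + TILE, j:j + TILE]

if __name__ == "__main__":
    scene = np.random.rand(4096, 4096)       # stand-in for a satellite scene
    with ProcessPoolExecutor() as pool:      # one worker per CPU core
        results = list(pool.map(process_tile, tiles(scene)))
    print(f"processed {len(results)} tiles")
```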

Related General or Special Theme:
This proposal aligns closely with the IGARSS 2026 General Themes: AI and Big Data (A) by addressing the development and deployment of advanced AI, deep learning, and high-performance computing methods, as well as Applications (C, L, M, O, P) through its focus on enabling scalable and efficient EO data processing for a wide range of Earth science applications.
CCT.3: Multimodal Datasets, Evaluation Protocols, and Performance Metrics for Benchmarking Remote Sensing Algorithms
A.1: AI and Big Data — High Performance Computing in Remote Sensing
As remote sensing technology evolves toward greater refinement and intelligence, high-quality benchmark datasets with wide coverage and rich target categories serve as the core support for breaking through the bottlenecks of existing image interpretation technologies and for advancing interpretation capabilities from single-scene perception to complex-scene cognition.
Benchmark datasets with wide-area coverage are conducive to models learning about different geographical regions and human activity scenarios worldwide. This ensures that models fully learn the common features and specific manifestations of objects across regions, effectively avoiding the insufficient generalization ability caused by limited data coverage. From the dimension of object categories, a rich variety of object categories helps models learn inter-class feature differences more systematically. Including low-sample, high-value scarce objects alleviates the sample shortage for such objects in current datasets.
Theoretically, multimodal remote sensing datasets and benchmarks can provide more comprehensive support for the perception and cognition of objects. They not only supply sufficient and diverse training samples for deep learning models, but also serve as a unified performance verification benchmark for different interpretation algorithms such as semantic segmentation, object detection, and change detection.
It is worth noting that in addition to focusing on the development and application of the datasets and benchmarks, this session will also focus on evaluation protocols and performance metrics that can be promoted and applied across the entire remote sensing field. The aim is to ensure the consistency and reliability of evaluation results through unified standards. Topics include:
 
1. Multimodal datasets: This topic focuses on highlighting existing multimodal remote sensing datasets, while also welcoming the presentation of novel strategies for the collection, annotation, and curation of such datasets—with an emphasis on addressing challenges like data heterogeneity and label quality to better support cross-modal remote sensing research.
 
2. Standardised evaluation protocols: The session will showcase systematic frameworks and industry-recognized best practices for evaluating remote sensing algorithms (a metric sketch follows this list). It will also facilitate in-depth discussions on how these standardized protocols can enhance the robustness of algorithm performance assessments and effectively promote the reproducibility of research results across different teams and platforms.
 
3. Impact of emerging methodologies: the role of machine learning and artificial intelligence in advancing remote sensing analysis and data fusion to enable the combined use of different available constellations for improved accuracy, coverage and temporal response.
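
To ground topic 2, the sketch below computes one widely used segmentation metric, mean intersection-over-union (mIoU), from a confusion matrix; the two tiny label maps are synthetic examples:

```python
import numpy as np

def mean_iou(y_true: np.ndarray, y_pred: np.ndarray, n_classes: int) -> float:
    """Mean IoU over classes, computed from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    np.add.at(cm, (y_true.ravel(), y_pred.ravel()), 1)
    tp = np.diag(cm).astype(float)
    union = cm.sum(0) + cm.sum(1) - tp      # TP + FP + FN per class
    valid = union > 0                        # ignore classes absent from both maps
    return float((tp[valid] / union[valid]).mean())

# Synthetic 2-class example: two 4x4 label maps.
truth = np.array([[0, 0, 1, 1]] * 4)
pred  = np.array([[0, 1, 1, 1]] * 4)
print(f"mIoU = {mean_iou(truth, pred, n_classes=2):.3f}")
```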
 
Inclusion
The IEEE IADF Technical Committee, on behalf of which this CCT is proposed, represents a diverse group of individuals from a variety of backgrounds, including different academic disciplines, cultural perspectives, and geographic regions. This diversity enriches our discussions and ensures a holistic approach to the topic under consideration.
To promote diversity, equity, inclusion, accessibility and belonging, we will actively seek contributions from a broad range of participants, particularly early career researchers and practitioners from diverse backgrounds, to ensure that different perspectives are represented in the session.
Finally, we will promote a welcoming and respectful atmosphere during the session, encouraging open dialogue and collaboration among participants.
CCT.4: Geo-foundation and generative models for EO-based informed disaster risk response
A.2: AI and Big Data — Foundation Models in Geoscience
Millions of people worldwide are currently displaced from their homes by natural and human-made disasters. Beyond displacement, disasters often cause significant loss of life and destruction of property. Humanitarian support, rescue and relief activities, and sustained socio-economic recovery planning hinge on understanding the extent and nature of these disasters and the associated risks. Earth Observation (EO) technology has immense potential in this regard, providing spatially explicit and temporally frequent observations that make it possible to understand the state and dynamics of disasters and associated risks. Advances in geospatial artificial intelligence (geoAI) have paved the way to automate classical information extraction workflows. In spite of these advances, geoAI models demand extensive amounts of annotated data and suffer from limited spatio-temporal transferability, because disaster scenarios and Earth observation imagery are complex and dynamic, which introduces domain shift. One solution is adapting large vision and geo-foundation models pretrained on large amounts of unlabeled imagery, using either generative or contrastive strategies on generic pretext tasks. Even though there are some success stories in adapting geo-foundation models, time-sensitive activities such as humanitarian emergency response require models that generalize across locations and time frames efficiently, with minimal adaptation effort in terms of time, resources, and training data. This session aims to foster discussion and share developments from the broader scientific and geo-humanitarian community on the role of generative and geo-foundation models in enhancing informed humanitarian emergency responses. It also welcomes contributions on the challenges of downstream uptake of these models. We seek contributions on the following topics:

Foundational research on geo-foundation and generative models including but not limited to:
• Development of geo-foundation models relevant for disaster risk response
• Efficient training and adaptation strategies for geo-foundation models (see the sketch after these lists).
• Application of generative models in filling annotated data gap (generative training, synthetic training data generation and others)

Applications of geo-foundation and generative models in disaster risk response: focused on thematic applications designed for emergency response, including but not limited to:
• Rapid information retrieval for post-disaster events, such as flood mapping; building damage detection after earthquakes, tsunamis, and landslides; fire damage extent mapping; and more
• Predicting the occurrence of extreme events for ad-hoc disaster response preparedness
• Other real-world use cases and success stories from humanitarian organizations, companies, and large multinational institutions that demonstrate how EO-based (geo)foundation and generative models support rapid mapping and information retrieval in short-term humanitarian emergency responses and long-term socio-economic recovery planning.
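
As a minimal sketch of the efficient-adaptation topic above: freezing a pretrained backbone and training only a small task head (linear probing) is one low-cost adaptation pattern. The tiny encoder, random data, and binary damaged/intact labels below are placeholders, not a real geo-foundation model:

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained geo-foundation encoder (assumed frozen).
encoder = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
for p in encoder.parameters():
    p.requires_grad = False               # freeze the backbone

head = nn.Linear(16, 2)                   # task head: damaged vs. intact

opt = torch.optim.Adam(head.parameters(), lr=1e-3)   # only the head is trained
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 3, 64, 64)             # placeholder image batch
y = torch.randint(0, 2, (8,))             # placeholder labels

for _ in range(10):
    with torch.no_grad():                  # no gradients through the backbone
        feats = encoder(x)
    loss = loss_fn(head(feats), y)
    opt.zero_grad(); loss.backward(); opt.step()
print("final loss:", loss.item())
```
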
CCT.5: Geospatial Reasoning, Graph, and PDE Foundation Models: Advances in Semantic, Topological, and Spatiotemporal Learning
A.2: AI and Big Data — Foundation Models in Geoscience
Recent advances in remote sensing and machine learning are transforming how we perceive and reason about the Earth. A new wave of geospatial foundation models (GeoFMs) trained on large-scale multimodal data including optical, SAR, hyperspectral, and LiDAR imagery is enabling unified, scalable, and interpretable understanding of complex Earth systems. This proposed theme focuses on the intersection of geospatial reasoning, graph modeling, and PDE-based learning, aiming to foster dialogue across data-driven and physics-informed approaches. 

We invite contributions exploring the design, pretraining, and fine-tuning of geospatial foundation models that integrate physical constraints and real-world dynamics. A key emphasis is geospatial chain-of-thought reasoning, where models perform structured, interpretable reasoning to explain spatial relationships and causal processes. Such reasoning-driven AI is critical for transparent decision-making in land cover mapping, environmental monitoring, and disaster management.

The theme also highlights graph foundation models that learn from the semantic and topological structure of geospatial systems. By representing spatial entities as nodes and their interactions as edges, graph neural networks and graph transformers enable relational reasoning for flood propagation, hydrological connectivity, and urban resilience analysis. We particularly welcome studies that integrate semantic graph learning with foundation model architectures to capture both spatial context and topology.  
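
To make the relational-reasoning idea concrete, here is a minimal sketch of GCN-style message passing over a toy graph (the nodes might represent, say, hydrologically connected sub-catchments; the adjacency, features, and single weight are invented for illustration):

```python
import numpy as np

# Toy graph: 4 nodes, edges as an adjacency matrix (1 = connected).
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.array([[1.0], [0.0], [0.0], [0.0]])   # e.g. initial flood state per node

A_hat = A + np.eye(4)                         # add self-loops
D_inv = np.diag(1.0 / A_hat.sum(1))          # degree normalization
W = np.array([[0.9]])                        # learnable weight (fixed here)

H = X
for step in range(3):                         # three propagation rounds
    H = np.maximum(D_inv @ A_hat @ H @ W, 0)  # normalized aggregation + ReLU
    print(f"step {step + 1}: {H.ravel().round(3)}")
```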

Equally, PDE foundation models represent a new frontier in coupling AI with physics. By embedding partial differential equations within deep learning frameworks, these models can simulate complex Earth dynamics, improving flood forecasting, climate prediction, and hazard assessment. Research combining physics-informed neural operators, graph PDE hybrids, or remote sensing–based data assimilation is strongly encouraged.
 
Submissions on model compression, knowledge distillation, and regional adaptation are also welcome, especially those that make large foundation models accessible for resource-limited or localized applications. Integration with vision language models and multimodal reasoning frameworks is encouraged to enhance semantic understanding and interpretability.  

By bridging advances in semantic, topological, and physics-informed learning, this IGARSS session seeks to accelerate the development of interpretable, reliable, and science-grounded geospatial AI. The focus is on systems that can not only perceive but also reason, simulate, and forecast, combining data-driven intelligence with physical understanding to support next-generation Earth observation and decision-making.
CCT.6: Multimodal Foundation Models for Earth Observation: Unifying Multi-source Geospatial Data for a Holistic Planet Understanding
A.2: AI and Big Data — Foundation Models in Geoscience
This session is dedicated to the frontier of Multimodal Foundation Models (FMs) for Earth Observation (EO), a transformative paradigm that moves beyond single-modality analysis. The next leap in geospatial intelligence hinges on the ability to seamlessly fuse and jointly reason over the diverse, multi-source data streams that describe our planet—from optical and SAR imagery to LiDAR point clouds, hyperspectral data, and textual reports. This proposal argues that multimodal FMs are the key to unlocking a truly holistic, contextual, and generalizable understanding of Earth's processes. The critical importance of this topic is driven by: 1) The proliferation of heterogeneous EO data sources, creating an unprecedented opportunity and challenge for unified data exploitation; 2) The pressing need for AI systems that can understand and respond to complex, cross-domain queries (e.g., "Find all areas affected by both flooding and landslides in the last week") which inherently require multimodal reasoning.
This session will comprehensively cover the pipeline for building and applying multimodal FMs in EO. Topics include (but are not limited to):
-Multimodal Pre-training: Strategies for large-scale, self-supervised learning across different data modalities (e.g., contrastive vision-language pre-training, cross-modal masking).
-Architectures for Fusion: Novel model designs (e.g., cross-attention, transformer-based fusion) that effectively align and integrate information from vision, language, and other geospatial modalities (a sketch follows this list).
-Adaptation and Applications: Techniques for leveraging pre-trained multimodal FMs for downstream tasks like visual question answering, cross-modal retrieval, automated report generation, and enhanced change detection.
-Challenges in Geospatial Fusion: Addressing unique issues such as spatial-temporal alignment, resolving discrepancies in resolution and scale, and handling the physics behind different sensor modalities (e.g., optical vs. SAR).
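
As a minimal sketch of the cross-attention fusion pattern flagged in the list above, using PyTorch's built-in multi-head attention, tokens from one modality (here SAR) query tokens from another (optical); all dimensions are arbitrary placeholders:

```python
import torch
import torch.nn as nn

d_model, n_heads = 64, 4
attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)

optical = torch.randn(2, 196, d_model)   # patch tokens from an optical encoder
sar     = torch.randn(2, 196, d_model)   # patch tokens from a SAR encoder

# SAR tokens query the optical tokens; the output is SAR enriched with
# optical context, which a downstream head can consume.
fused, weights = attn(query=sar, key=optical, value=optical)
print(fused.shape, weights.shape)  # (2, 196, 64), (2, 196, 196)
```
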
While other sessions may focus on unimodal FMs or traditional data fusion techniques, our session is unique in its explicit focus on foundation models as the unifying architecture for large-scale, deep multimodal fusion. It positions data fusion not as a post-processing step, but as a core, pre-trained capability. This theme is a critical complement to sessions on machine learning and specific sensors, as it provides the foundational, multi-purpose models that can leverage the full spectrum of available data. It aims to bring together experts in machine learning, computer vision, natural language processing, and remote sensing to pioneer the next generation of integrative geospatial AI.
CCT.7: The prospect of Remote Sensing Foundation Models: From Generation to Application
A.2: AI and Big Data — Foundation Models in Geoscience
Remote sensing foundation models, enabled by massive amounts of Earth Observation data and Artificial Intelligence, are changing how remote sensing data is analyzed and providing deeper insights and efficient solutions for multiple downstream remote sensing tasks. However, there are clouds over the future of remote sensing foundation models. The first concerns the process of generating foundation models. Although a vast amount of remote sensing data is generated every day, the ethical issues in data use, such as data sharing, privacy protection, and regulation, as well as the spatial and temporal heterogeneity of the data, pose significant challenges to the architectural design and generalization capability of foundation models. The training cost of foundation models, including the costs of data acquisition and model training, is also very high. Secondly, many problems remain to be explored in the application of foundation models: for example, whether, at the global scale, geographic bias in foundation models causes geographic inequality in prediction results. The potential of foundation models for urbanization monitoring, environmental protection, poverty eradication, and the achievement of the Sustainable Development Goals (SDGs) also needs to be discussed.

Our session fits well with the theme of IGARSS 2026 and responds positively to the use of remote sensing for collaborative global solutions. We want to bring together researchers from different backgrounds, including remote sensing data providers, remote sensing foundation model algorithm researchers, and researchers with backgrounds in ecology, urbanism, and sustainable development, to discuss and demonstrate their state-of-the-art Earth observation technologies and data, cutting-edge foundation model design and training techniques, and the most influential foundation model applications. Moving towards more equitable and effective global remote sensing foundation models through international cooperation, ethical considerations, technological innovation, and application orientation will be one of the mainstream directions of the future of Earth observation.
CCT.8: Vision-Language Models for Remote Sensing: Foundations, Applications, and Challenges
A.2: AI and Big Data — Foundation Models in Geoscience
Scope:
This session focuses on the integration of vision-language models (VLMs) into remote sensing, enabling multimodal understanding that combines imagery with natural language. The session welcomes studies on foundation models for geospatial data, remote sensing visual question answering (RS-VQA), multimodal data fusion, zero/few-shot learning, prompt engineering, cross-domain adaptation, and human-in-the-loop applications. Contributions may include novel architectures, benchmark datasets, model interpretability, and scalable training strategies. The scope also extends to innovative applications in land cover and land use mapping, disaster response, environmental monitoring, climate change assessment, precision agriculture, and urban planning, reflecting the growing need for intelligent, explainable, and interactive geospatial analytics.
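
As a concrete anchor for the zero-shot capability mentioned above, the sketch below shows the contrastive image-text matching mechanism that underlies CLIP-style VLMs. Both encoders are untrained stand-ins defined purely for illustration, so the printed scores are meaningless until real pretrained weights are substituted:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImageEncoder(nn.Module):
    def __init__(self, dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 8, 3, 2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, dim))
    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)   # unit-norm embedding

class ToyTextEncoder(nn.Module):
    def __init__(self, vocab=1000, dim=32):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab, dim)    # mean-pools token embeddings
    def forward(self, tokens):
        return F.normalize(self.emb(tokens), dim=-1)

img_enc, txt_enc = ToyImageEncoder(), ToyTextEncoder()
image = torch.randn(1, 3, 64, 64)                 # placeholder scene
prompts = torch.randint(0, 1000, (3, 6))          # 3 tokenized class prompts,
                                                  # e.g. "a satellite image of ..."
# Zero-shot classification = cosine similarity between embeddings.
scores = img_enc(image) @ txt_enc(prompts).T
print("class scores:", scores.detach().numpy().round(3))
```
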
Importance:
VLMs represent a transformative paradigm in remote sensing, shifting from purely pixel-based analysis toward semantic, language-informed interpretation of Earth observation data. By bridging vision and language, these models enable natural question answering, image captioning, and reasoning over multimodal geospatial content, thus making remote sensing data more accessible to non-experts. Moreover, their capacity for generalization and transfer learning can reduce reliance on labeled data and improve adaptability to new sensors or regions. As VLMs continue to evolve, they are poised to redefine the future of geospatial intelligence, fostering transparency, interoperability, and human-centered interaction in AI-driven Earth observation.
Rationale for Complementing General and Special Themes:
This proposed theme complements General Themes such as AI and Big Data, Foundation Models in Geoscience (A.2), and Multimodal Data Fusion (T.18), and aligns with the Special Theme SP.3: Artificial Intelligence Ethics and Governance in Remote Sensing. While foundation models are increasingly recognized in remote sensing, their specific adaptation to multimodal vision-language tasks remains underexplored. This session will fill that gap, providing a cross-disciplinary platform that bridges machine learning, natural language processing, and geoscience applications, thereby fostering collaboration and addressing an emerging frontier not yet fully covered by the existing themes.
CCT.9: Artificial Intelligence (AI)-driven Techniques Aiding Carbon Capture, Utilization, & Storage (CCUS)
A.3: AI and Big Data — Large-scale Machine Learning
Human activities in the past century have generated unprecedented CO2 emissions, drastically accelerating climate change and global warming. Unless immediate actions are taken to counter emissions, disasters related to climate change will be an existential threat to human societies and many ecosystems. In response to this threat, scientists have proposed several negative emissions technologies to lower greenhouse gas emissions and achieve a critical pathway toward carbon neutrality. Among these technologies, Carbon Capture, Utilization, and Storage (CCUS) show the greatest potential for efficiency and scalability. 

For CCUS to succeed, it is important to carefully choose safe and reliable CO2 injection sites for storage and to effectively monitor these sites for safety. The choice of safe sites is affected by various factors, including geologic characteristics, land cover, proximity to sources of CO2 emission, stability, transportation mechanisms, holding and storage capacity, weather, and soil properties. Additionally, nearness to population centers, health and environmental constraints, the presence of enhanced oil recovery projects, and other geospatial factors should be considered. Given the large number of factors influencing site selection, it is difficult to manually determine which site, if any, in a given region is suitable for effective CO2 injection. Effective monitoring presents similar challenges. Current methods for CCUS monitoring are often limited to ground-based instrumentation and sparse sampling, which suffer from latency, limited spatial coverage, and high operational costs.
Combining AI with space-based Earth observation provides a scalable and repeatable approach for choosing safe and reliable CO2 injection sites. This same approach can effectively monitor diverse geospatial indicators of CCUS success or failure across vast regions. 
This session will bring together interdisciplinary researchers working at the intersection of AI/ML, Remote Sensing, Chemistry, Environmental Sciences, Geosciences, Economics, and Civil Engineering. It will showcase cutting-edge research on the use of AI to aid CCUS applications. We welcome contributions on (but not limited to):
• AI-driven site-suitability modeling and risk assessment (see the sketch after this list)
•	Multimodal data fusion of satellite, airborne, and in situ observations
•	AI-driven development of 3D models and registration techniques
•	Change detection and anomaly detection methods for CCUS operation monitoring
•	Physics-informed machine learning for subsurface and geomechanical modeling
•	Integration of CCUS monitoring with climate and energy policy frameworks.
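
To make the first list item concrete, the sketch below combines normalized criteria layers into a weighted suitability score per grid cell; the layers, weights, and grid are invented for illustration and would come from geologic data, domain experts, or a learned model in practice:

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (100, 100)                       # raster grid over a candidate region

# Normalized criteria layers in [0, 1] (all synthetic placeholders).
layers = {
    "storage_capacity":   rng.random(shape),  # higher is better
    "caprock_integrity":  rng.random(shape),  # higher is better
    "distance_to_source": rng.random(shape),  # higher = farther = worse
    "seismic_risk":       rng.random(shape),  # higher is worse
}
weights = {"storage_capacity": 0.4, "caprock_integrity": 0.3,
           "distance_to_source": 0.15, "seismic_risk": 0.15}

suitability = (weights["storage_capacity"]   * layers["storage_capacity"]
             + weights["caprock_integrity"]  * layers["caprock_integrity"]
             + weights["distance_to_source"] * (1 - layers["distance_to_source"])
             + weights["seismic_risk"]       * (1 - layers["seismic_risk"]))

best = np.unravel_index(suitability.argmax(), shape)
print("best candidate cell:", best, "score:", round(float(suitability.max()), 3))
```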

Significance for IGARSS 2026
IGARSS is attended by world-renowned experts in remote sensing and Earth observation. A dedicated session on AI-driven CCUS is timely and synergizes with worldwide decarbonization efforts. It will provide a forum for interdisciplinary collaboration among geoscientists and remote sensing experts, data scientists and AI experts, and policy stakeholders. By highlighting how AI can leverage space-observation data to make CCUS safer, more efficient, and scalable, this session will demonstrate how remote sensing research can directly contribute to solving one of the most critical challenges of our time. The proposed theme is synergistic with and complementary to existing IGARSS themes such as AI and Big Data (A), which solicits papers on novel Artificial Intelligence (AI), Machine Learning (ML), and Big Data methodologies for analyzing remote sensing data, as well as Commercial and Industry-led Remote Sensing.
CCT.10: GeoAI for Advanced Mapping and Geospatial Data Analysis: Innovations, Applications, and Challenges
A.3: AI and Big Data — Large-scale Machine Learning
GeoAI is undergoing a transformative phase, driven by advances in foundation models, multimodal learning, and the expanding availability of Earth Observation data. Over the past two years, GeoAI has rapidly scaled from experimental research to operational systems that integrate optical, radar, lidar, and auxiliary data to enable tasks such as large-scale land cover mapping, fine-grained change detection, disaster response, object mapping, and automatic feature extraction, including complex, highly variable natural terrain features. In the context of “The Future of Earth Observations,” GeoAI provides powerful pathways to address evolving technical, scientific, and policy challenges related to resource exploration and management, infrastructure development, hazard mitigation, and environmental resilience at a range of scales.

This Community Contributed Session invites experts from government, academia, and industry to share methodological advances and operational applications of GeoAI in Earth observation. Topics of interest include automated feature extraction from multispectral and lidar data, AI-driven monitoring of urban growth and environmental change, hazard and infrastructure assessment, and AI-based quality assurance. The session will also highlight emerging challenges such as uncertainty quantification, bias mitigation, and the adaptation of GeoAI methods across diverse urban and environmental contexts. By fostering dialogue across geoscience disciplines, this session will explore how GeoAI can expand the role of Earth observations, enhance international cooperation, and inspire innovative solutions that are both resilient and sustainable.
CCT.11: Machine learning applications in seismic data processing and imaging
A.3: AI and Big Data — Large-scale Machine Learning
Machine Learning (ML) and Artificial Intelligence (AI) have shown great potential in advancing seismic signal processing and imaging. While traditional model-based methods have significantly improved our understanding of the subsurface, they struggle with the challenges of ever-growing datasets. Geophysical applications require the measurement and processing of extensive multiscale datasets. To analyze these data efficiently and to provide realistic models of the Earth’s subsurface, AI techniques have (re)gained interest in the geophysical community. The advancement of computational resources and the automation of geophysical inversion procedures through machine learning allow us to process large volumes of data at different scales and provide fast, reliable processing and imaging of the subsurface. In addition, new sensing techniques such as drone-based surveys, nodal geophones, and distributed acoustic sensing enable an unprecedentedly high sampling density in geophysical acquisition, which increases data volumes exponentially. These methods allow researchers to integrate large-scale multi-physics datasets such as seismic, gravity, magnetic, and SAR data, extract hidden information from the data, and provide a detailed picture of the subsurface, none of which was optimally reachable with earlier state-of-the-art techniques. This session calls on researchers and practitioners to showcase the latest developments in seismic geophysical data sensing, processing, and imaging applications, and to highlight challenges and advancements in ML methodologies and applications, including, but not limited to:
1.    Coherent and incoherent noise attenuation (see the sketch after this list)
2.    Seismic velocity analysis
3.    Interpolation and resolution enhancement
4.    Imaging and full waveform inversion
5.    Fault detection
6.    Seismic monitoring
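
For topic 1, the sketch below trains a small convolutional denoiser on synthetic 2-D sections with additive noise; the toy "dipping events", the architecture, and the noise model are illustrative assumptions rather than a production workflow:

```python
import torch
import torch.nn as nn

def synthetic_gather(n=32, h=64, w=64):
    """Toy 'seismic' sections: a few smooth dipping events per sample."""
    t = torch.linspace(0, 1, h).view(1, h, 1)
    x = torch.linspace(0, 1, w).view(1, 1, w)
    dips = torch.rand(n, 1, 1)
    clean = torch.sin(20 * (t - dips * x))      # dipping wavefronts
    noisy = clean + 0.5 * torch.randn(n, h, w)  # additive incoherent noise
    return noisy.unsqueeze(1), clean.unsqueeze(1)

model = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(200):
    noisy, clean = synthetic_gather()
    loss = nn.functional.mse_loss(model(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
print("final denoising MSE:", loss.item())
```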

CCT.12: Natural Language Processing Capabilities Utilizing Open Knowledge Meshes (KM/NLP)
A.3: AI and Big Data — Large-scale Machine Learning
This session addresses the growing need to discover, process, and present large scale and diverse datasets. Knowledge Meshes, along with Natural Language Processing, enable creation and use of process-threaded, interoperable, semantically contextualized facts, captured in real-time during various interactions with the world, supporting questions of ‘what now, what next, and what if’, via a unified, multi-modal, and multi-utility access interface. 

With diverse datasets come diverse users. Natural Language Processing can be used to determine the background and goals of the data user, allowing results to be tailored to the requesting person via ‘personas’. A Knowledge Mesh is intended to accept, resolve, capture, and integrate both investigative queries and decisional instructions, provided by a diverse range of users including, but not limited to, scientists, operators, policy makers, and public interests. This enables more relevant datasets to be presented to the user, leading to better efficiency, predictions, and decision-making ability.

Commodity cloud services, modular interfaces, and AI/ML integration not only reduce lifecycle cost and commissioning timelines but also improve resiliency and time-to-product for downstream users. By enabling advanced AI and machine learning techniques, the Knowledge Mesh is intended to serve as a reference architecture for integrating FAIR (findable, accessible, interoperable, reusable) and TRUST (transparency, responsibility, user focus, sustainability, technology) principles into operational practices. Reproducible workflows for low-latency product generation show how cloud-native pipelines, event-driven triggers, and AI/ML algorithms can consistently deliver faster products, supporting both scientific discovery and operational forecasting.

Presentations should address ensuring factual grounding, reducing and communicating process-coupled uncertainty through federated methodologies, and enhancing interoperability with international, public, and private digital twin and knowledge-oriented capabilities. This broad, integrated functionality is critical for addressing challenges at the intersection of ecological, socio-economic, and physical systems, exemplified by use cases in various coastal, ocean, terrestrial, atmospheric, and space domains.
CCT.13: Physics-Informed Machine Learning in Remote Sensing
A.3: AI and Big Data — Large-scale Machine Learning
Machine Learning (ML) has transformed remote sensing by offering powerful data-driven tools capable of analyzing vast datasets and solving complex problems. However, traditional ML models may struggle to generalize across novel datasets or environmental conditions when underlying physical processes are poorly understood. Moreover, machine learning has often been criticized for the difficulty of model interpretation and the lack of physical consistency. 
The proposed session aims to explore the potential of physics-informed machine learning (PIML) methods, which integrate the strengths of physical models—rooted in electromagnetic theory, sensing characteristics, and geophysics—with the data-driven flexibility of machine learning. By embedding fundamental physical constraints, PIML aims to develop robust and interpretable models that generalize well even with sparse data. This hybrid approach is particularly relevant in applications where remote sensing data may be limited or unrepresentative, such as climate monitoring and disaster prediction, as well as in cases involving the solution of forward and inverse problems. Topics include, but are not limited to, the following:
•	Physics-Informed Neural Networks 
•	Hybrid Data and Physics-Based Modeling 
• Physics-regularized Deep Learning (see the sketch after this list)
•	Data Augmentation with Physics-Based Simulations
•	Physics-Guided Inversion Techniques
•	Physics-Driven Surrogate Modeling
• Integrated Data Models and Assimilation Methods
•	Explainable AI (XAI) with Physical Constraints
•	Physics-guided uncertainty quantification
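
As a minimal sketch of the physics-regularized idea flagged in the list above: the training loss augments a sparse data-fit term with the residual of a governing equation, here the toy 1-D equation u''(x) + sin(x) = 0 enforced by automatic differentiation. The equation, network size, and loss weighting are illustrative assumptions:

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(),
                    nn.Linear(32, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Sparse "observations" of the true field u(x) = sin(x).
x_obs = torch.tensor([[0.5], [2.0]])
u_obs = torch.sin(x_obs)

for step in range(2000):
    # Physics residual of u''(x) + sin(x) = 0 at random collocation points.
    x = torch.rand(64, 1) * torch.pi
    x.requires_grad_(True)
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    physics = ((d2u + torch.sin(x)) ** 2).mean()

    data = ((net(x_obs) - u_obs) ** 2).mean()   # data-fit on sparse samples
    loss = data + 1.0 * physics                 # physics term as regularizer
    opt.zero_grad(); loss.backward(); opt.step()

print("u(pi/2) ~", net(torch.tensor([[torch.pi / 2]])).item())  # ideally ~1
```
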
CCT.14: Scalable and Efficient Processing of Large-Scale LiDAR Point Clouds
A.3: AI and Big Data — Large-scale Machine Learning
The rapid evolution of LiDAR sensors and remote sensing platforms has made large-scale point clouds a critical data source for geospatial analysis, environmental monitoring, urban mapping, and digital twin construction. A single airborne or mobile survey can generate hundreds of millions of points enriched with intensity, color, or multispectral attributes. Unlike image data that resides on regular grids, point clouds are irregular, unordered, and exhibit highly non-uniform densities. These properties, together with the massive scale of current datasets, create significant challenges for deep learning frameworks that are expected to process entire 3D scenes with both high accuracy and efficiency. 

While recent advances in deep learning have demonstrated remarkable progress in LiDAR point cloud processing, most methods remain constrained by high computational cost, large memory footprints, and limited scalability. As a result, their applicability to real-world large-scale surveys is still restricted. Because processing time, memory consumption, and energy usage directly determine the feasibility of practical deployment, efficient solutions must deliver not only high accuracy but also meet these pressing deployment requirements.

This proposal aims to advance efficient processing frameworks for large-scale remote sensing point clouds by integrating novel neural architectures with adaptive data reduction and model compression strategies. Its scope focuses on algorithmic improvements for practical deployment. By addressing the trade-off between model accuracy and efficiency in large-scale LiDAR point cloud processing, this work will contribute to a new generation of methods that are not only precise but also scalable and operationally viable. The research will open pathways toward broader adoption of 3D deep learning technologies in urban planning, disaster management, forestry, and infrastructure monitoring, reinforcing the role of IGARSS as a platform where cutting-edge research converges with real-world application needs.
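
One concrete example of the adaptive data-reduction strategies mentioned above is voxel-grid downsampling, which replaces all points in each occupied voxel by their centroid; the synthetic cloud and the 1 m voxel size below are placeholders:

```python
import numpy as np

def voxel_downsample(points: np.ndarray, voxel: float) -> np.ndarray:
    """Replace all points falling in one voxel by their centroid."""
    keys = np.floor(points / voxel).astype(np.int64)   # voxel index per point
    _, inv, counts = np.unique(keys, axis=0, return_inverse=True,
                               return_counts=True)
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inv, points)                       # sum points per voxel
    return sums / counts[:, None]                      # centroids

cloud = np.random.rand(1_000_000, 3) * 100.0   # synthetic 1M-point survey (meters)
thin = voxel_downsample(cloud, voxel=1.0)
print(f"{cloud.shape[0]:,} points -> {thin.shape[0]:,} centroids")
```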

This proposal complements IGARSS General Themes in remote sensing data analysis, scalable computing, and AI for Earth observation, while advancing Special Themes that highlight efficient, deployable solutions for massive geospatial data. By considering both the accuracy and the efficiency of models, it strengthens the bridge from fundamental research to practical remote sensing applications and broadens the scientific and practical impact of this conference.
CCT.15: Onboard Deep Learning for Hyperspectral Imaging: Methods, Missions, and Hardware
A.4: AI and Big Data — Real-time Analytics, On-board Processing and Edge Computing
Scope:
End-to-end approaches for on-orbit hyperspectral processing, including spectral–spatial DL architectures (1D/2D/3D CNNs; Transformers), data-efficient training (self-/semi-supervised, domain adaptation), uncertainty and noise robustness, and hardware co-design for edge inference (FPGAs, embedded GPUs, DPUs). Emphasis on lessons from pathfinder and operational missions (e.g., CHIME concepts; Intuition-1 flight experience) and on translating research models into dependable, power/latency-bounded onboard applications (cloud screening, compression/triage, anomaly detection, segmentation).
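
As a small, hedged illustration of the hardware-aware side of this pipeline, the sketch below applies post-training dynamic quantization (int8 weights) to a per-pixel spectral classifier using PyTorch; the MLP, band count, and class count are placeholders, and real onboard targets (FPGAs, DPUs) would rely on their own toolchains:

```python
import torch
import torch.nn as nn
from torch.ao.quantization import quantize_dynamic

N_BANDS, N_CLASSES = 200, 16               # placeholder hyperspectral setup

# Per-pixel spectral classifier (a common lightweight onboard workload).
model = nn.Sequential(
    nn.Linear(N_BANDS, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, N_CLASSES),
).eval()

# Post-training dynamic quantization: weights stored as int8,
# activations quantized on the fly at inference time.
qmodel = quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

pixels = torch.randn(1024, N_BANDS)         # a batch of spectra
with torch.no_grad():
    out_fp32, out_int8 = model(pixels), qmodel(pixels)
print("max abs difference vs fp32:", (out_fp32 - out_int8).abs().max().item())
```
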
Importance:
Imaging spectroscopy generates extreme data rates; onboard AI is now pivotal to reduce downlink, enable event-driven operations, and shorten time-to-insight for science and applications. Recent flight and pre-flight demonstrations show feasibility and maturing toolchains, making 2026 an ideal moment to consolidate methods and best practices across agencies, academia, and industry.
Rationale for complementing IGARSS General/Special Themes:
This CCT deepens A.4 “Real-time Analytics, On-board Processing and Edge Computing” with a hyperspectral-specific focus that bridges algorithms, mission ops, and hardware. It naturally intersects T.15 “Hyperspectral Data Processing and Analysis” and S.4 “Spaceborne Hyperspectral Missions,” and can attract participation from the SP.5 “Commercial and Industry-led Remote Sensing” community through real mission/industry deployments.
CCT.16: Advanced Modeling Technologies for Digital Twins
A.6: AI and Big Data — Digital Twins
Digital Twins (DTs) need to integrate multiple models cost-effectively, continually assimilate data, support rapid what-if explorations for informed decision-making, and quantify uncertainties. Existing models support some of these aspects, but rarely all of them. By integrating diverse models and continually assimilating data, Digital Twins offer the potential for optimized decision-making across various domains, with integrated predictive analysis and "what-if" scenario planning across many data sets and models.

Advanced modeling technologies are crucial for overcoming the inherent challenges in realizing the full potential of Digital Twins. Models need to run fast enough to support interactive exploration of scenarios and their potential outcomes, which is vital for informed decision-making. Accelerated models also provide one avenue for ensemble-based uncertainty quantification. Digital Twins need to utilize multiple models to capture interacting Earth systems. Cost-effective and scalable methods are needed to integrate models or to provide an interoperable modeling environment. Uncertainty quantification, often provided by individual models or data sets, must be expanded to provide meaningful uncertainty quantification across the system. Continual data assimilation within these integrated models ensures the Digital Twin remains synchronized with real-world conditions and provides up-to-date insights. Data fusion techniques are essential for combining data from multiple models and real-world sensors into a common, unified picture of the system. These advanced modeling technologies will collectively empower Digital Twins to be dynamic, predictive decision-support systems.

This session will explore advanced modeling technologies to overcome these challenges, including but not limited to:
Fast surrogate machine learning models (see the sketch after this list)
GeoAI Foundation models
Uncertainty quantification over integrated models
Scalable model integration approaches
Interoperable modeling frameworks
Multi-scale model integration and data fusion
What-if models
Impact assessment for decision support
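
As a minimal sketch of the first item above: a small neural surrogate is trained to emulate an expensive simulator so that what-if ensembles become cheap to sweep. The "simulator" here is a stand-in analytic function of two forcing parameters, invented for illustration:

```python
import torch
import torch.nn as nn

def expensive_simulator(params: torch.Tensor) -> torch.Tensor:
    """Stand-in for a slow process model: y = f(forcing1, forcing2)."""
    a, b = params[:, :1], params[:, 1:]
    return torch.sin(3 * a) * torch.exp(-b) + 0.5 * b

surrogate = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                          nn.Linear(64, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)

for step in range(2000):                       # offline training phase
    params = torch.rand(256, 2)                # sampled forcing scenarios
    loss = nn.functional.mse_loss(surrogate(params), expensive_simulator(params))
    opt.zero_grad(); loss.backward(); opt.step()

# Online phase: sweep thousands of what-if scenarios almost for free.
scenarios = torch.rand(10_000, 2)
with torch.no_grad():
    outcomes = surrogate(scenarios)
print("surrogate MSE on sweep:",
      nn.functional.mse_loss(outcomes, expensive_simulator(scenarios)).item())
```
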
CCT.17: GeoAI and Digital Twin Technologies for Remote Sensing Applications
A.6: AI and Big Data — Digital Twins
The fusion of GeoAI (Geospatial Artificial Intelligence) and Digital Twins (DT) is transforming the way we look at Earth Observation and remote sensing applications. GeoAI enables modeling and predictions of the environment and urban processes at various temporal and spatial scales by integrating advanced deep learning algorithms with digital representations of the real world. This, in turn, is enabling innovations across domains such as disaster management, urban monitoring, agriculture, environmental assessment, climate resilience, etc.

This proposed session aims to bring together researchers from industry as well as academia, exploring the synergy between GeoAI frameworks and Digital Twins for remote sensing applications. Contributions are encouraged for, but not limited to:
1. 3D mapping, feature extraction, and change detection using deep neural networks
2. Development of Digital Twins for real-time monitoring of urban and environmental systems
3. Multimodal data fusion for geospatial analysis
4. Damage mapping and post-disaster assessment through AI-driven techniques such as Domain Adaptation/Generalization for Digital Twins
5. Integration of IoT and edge computing to support responsive and scalable GeoAI-Digital Twin pipelines
6. Self-supervised and foundation models for large-scale geospatial Digital Twins

In short, we are looking for research papers that focus on using GeoAI frameworks and Digital Twin systems for remote sensing applications. By combining high-resolution satellite, LiDAR, SAR, and UAV data with advanced AI models, including foundation and self-supervised architectures, GeoAI-integrated Digital Twins offer powerful frameworks for real-time analysis, dynamic simulation, and informed decision-making.

By fostering collaboration with the experts in AI, Geoscience, and Earth Observation, this session seeks to accelerate the progress in GeoAI-enabled Digital Twin systems for developing intelligent, adaptive, and predictive applications that not only map and monitor but also anticipate the dynamics of our ever-changing planet.
CCT.18: Synthetic Data for Earth Observation: Bridging the Data Gap to Empower Next-Generation Remote Sensing Applications
A.6: AI and Big Data — Digital Twins
This session is dedicated to the rapidly advancing field of Synthetic Data for Earth Observation (EO). 
Synthetic data in remote sensing refer to artificially generated imagery or pixel-level information that replicates the characteristics of real satellite or aerial observations. Rather than being captured directly by sensors, synthetic remote sensing data are created through computational models, simulations, or machine learning techniques designed to emulate real-world conditions.
The EO community is facing a critical challenge: the performance of advanced AI is increasingly limited by the scarcity of large-scale, diverse, and precisely annotated datasets. This proposal argues that synthetic data are the most promising solution to this data bottleneck. Recent trends have amplified the importance of this topic; to give some examples: 1) The emergence of data-hungry Foundation Models requires datasets of a scale that real-world collection alone cannot provide; 2) The urgent need for granular, reliable data to monitor the Sustainable Development Goals (SDGs)—from tracking deforestation (SDG 15) to managing urban growth (SDG 11)—demands data for scenarios that are often under-represented or unavailable.
This session will cover the entire pipeline of synthetic data creation, from generation techniques (e.g., physics-based rendering, GANs, diffusion models, which are part of AIGC - Artificial Intelligence Generated Content) to its practical applications in boosting the performance and robustness of remote sensing algorithms. By enabling the creation of tailored, on-demand data, synthetic data creation represents a cornerstone methodology for building the next generation of EO solutions and is fundamental to the vision of a Digital Twin Earth.
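
As a minimal, purely procedural illustration of the generation idea (far simpler than the physics-based rendering and generative models named above), the sketch below fabricates labeled scenes, bright rectangular "buildings" on a textured background, so that imagery and pixel-perfect masks come for free; every parameter is an invented placeholder:

```python
import numpy as np

rng = np.random.default_rng(42)

def synth_scene(size=128, n_buildings=5):
    """Return a fake optical chip and its pixel-perfect building mask."""
    image = rng.normal(0.35, 0.05, (size, size))         # textured "ground"
    mask = np.zeros((size, size), dtype=np.uint8)
    for _ in range(n_buildings):
        h, w = rng.integers(8, 24, size=2)
        r, c = rng.integers(0, size - 24, size=2)
        image[r:r + h, c:c + w] = rng.uniform(0.6, 0.9)  # bright roof
        mask[r:r + h, c:c + w] = 1                       # free annotation
    image += rng.normal(0, 0.02, image.shape)            # sensor noise
    return image.clip(0, 1), mask

images, masks = zip(*(synth_scene() for _ in range(100)))
print(len(images), "chips,", f"{np.mean(masks):.1%} building pixels")
```
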
While other themes focus on processing and analyzing existing data, our session addresses the more foundational issue of data creation. It is unique in its dedicated focus on synthetic data as a field of study itself. This theme provides a critical complement to sessions on machine learning and data fusion by tackling the upstream data supply problem, which is a prerequisite for their success. 
Moreover, it brings together a cross-disciplinary community from computer graphics, machine learning, and remote sensing to address a shared, fundamental challenge.
CCT.19: AI InSAR
T.1: Theory and Techniques — Interferometry: Along and Across
The integration of Artificial Intelligence (AI) with Interferometric Synthetic Aperture Radar (InSAR) is revolutionizing Earth observation by enabling intelligent interpretation and automated analysis of large-scale, complex radar datasets. AI-based InSAR has emerged as a key driver in improving the efficiency, accuracy, and adaptability of remote sensing data processing. This proposed theme focuses on recent advances and emerging methodologies that leverage AI for InSAR applications, including interpretable deep learning-based phase unwrapping, coherence estimation, atmospheric correction, deformation time-series analysis, and anomaly or disaster detection. The fusion of AI and InSAR offers a transformative pathway to overcome long-standing challenges such as phase aliasing, noise contamination, and decorrelation in complex environments, achieving higher robustness and generalization than conventional model-based approaches.
By bridging machine learning, computer vision, and geoscience, AI-based InSAR advances the automation, interpretability, and physical consistency of geophysical parameter retrieval. In particular, self-supervised learning, pre-trained models, and physics-informed neural networks are enabling knowledge transfer across different sensors and observation conditions, significantly reducing reliance on labeled training data. Meanwhile, explainable AI (XAI) and uncertainty modeling are contributing to improved interpretability and reliability in deformation and terrain mapping.
This theme complements existing General and Special Themes by emphasizing cross-disciplinary innovation and community collaboration. It encourages contributions on new algorithmic frameworks, benchmark datasets, and open-source platforms for AI-driven InSAR research. By bringing together experts from AI, radar signal processing, and Earth science, this session aims to foster interdisciplinary dialogue and advance next-generation intelligent, interpretable, and physically grounded InSAR technologies for global-scale Earth observation and sustainable development.
CCT.20: Model/data-driven phase unwrapping methods
T.1: Theory and Techniques — Interferometry: Along and Across
This proposal seeks to establish a dedicated focus within the IGARSS T.1 (Interferometry: Along and Across) theme on "Model- and Data-Driven Phase Unwrapping Methods." Phase unwrapping (PU), the process of resolving the inherent 2π ambiguities in interferometric phase measurements, remains a critical and often challenging step in the generation of accurate digital elevation models and displacement maps from SAR data. The limitations of traditional PU algorithms in handling high-noise scenarios, rapid phase gradients, and large-scale areas are well-documented, often leading to error propagation and compromised product reliability.

This topic will explore the rapidly advancing frontier of PU methodologies that leverage both sophisticated physical models and data-driven artificial intelligence techniques. The scope encompasses a wide range of approaches, including: (1) advanced model-driven methods that incorporate multibaseline phase unwrapping, more robust regularizers, graph theory, and compressive sensing to improve resilience to noise and discontinuities; (2) hybrid methods that fuse SAR data with external auxiliary data or models to guide the unwrapping path; and (3) fully data-driven methods, particularly deep learning architectures such as convolutional and recurrent neural networks (CNNs, RNNs), which learn to perform PU directly from large datasets of wrapped and ground-truth unwrapped phases, showing remarkable promise in overcoming the pitfalls of conventional algorithms.
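
To ground the discussion, here is a minimal sketch of the core ambiguity that all of these methods confront: a smooth synthetic deformation phase is wrapped into (-π, π] and recovered row-by-row with Itoh's 1-D integrator (np.unwrap). The simulated surface is an assumption chosen so that the noise-free Itoh condition holds; real 2-D PU must handle noise and discontinuities, which is precisely what the model- and data-driven methods above target:

```python
import numpy as np

# Smooth synthetic "deformation" phase spanning several multiples of 2*pi.
y, x = np.mgrid[0:128, 0:128]
phi = 0.002 * ((x - 64) ** 2 + (y - 64) ** 2)      # radians

wrapped = np.angle(np.exp(1j * phi))               # wrap into (-pi, pi]

# Itoh's method per row: integrate wrapped phase differences (np.unwrap).
unwrapped = np.unwrap(wrapped, axis=1)
# Align each row's unknown 2*pi offset using the known truth
# (a luxury real phase unwrapping does not have).
unwrapped += phi[:, :1] - unwrapped[:, :1]

print("max error after unwrapping:", np.abs(unwrapped - phi).max())
```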

The importance of this topic is paramount. Reliable PU is the linchpin for the quantitative application of InSAR across essential fields such as tectonic and volcanic deformation monitoring, landslide and subsidence hazard assessment, and glacier dynamics. Advancements in this area directly translate to more robust, automated, and accurate Earth observation products, enhancing our capacity for geophysical modeling and disaster management.

This topic is a natural and vital supplement to the general "T.1: Interferometry" theme. While T.1 broadly covers the entire interferometric processing chain and its applications, this focused session will drill down into the core algorithmic challenge that ultimately defines the quality of the final geodetic product. By concentrating on the transformative potential of model- and data-driven PU, we aim to catalyze collaboration, present state-of-the-art solutions, and address one of the most persistent bottlenecks in the InSAR workflow, thereby strengthening the foundation upon which all subsequent T.1 applications are built. This dedicated forum is essential to keep the IGARSS community at the forefront of this critical methodological evolution.
CCT.21: Resolution Enhancement across sensing dimensions: Spatial, Spectral, Temporal, and Radiometric
T.8: Theory and Techniques — Feature Extraction and Reduction
Recent advances in sensors and data processing have significantly enhanced our ability to observe the Earth's surface in high detail. Despite these improvements, the demand for higher resolution—spatial, spectral, temporal, and radiometric—remains a constant need in both scientific and operational remote sensing.

Resolution enhancement is a broad and dynamic field focused on improving image quality and information content beyond the native sensor capabilities. These improvements can be viewed from four key perspectives: spatial, for finer image details; spectral, for improved material characterization; temporal, for denser time-series sampling; and radiometric, for higher image quality and dynamic range.
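
As one classical baseline for the spatial dimension, the sketch below applies a Brovey-style transform: upsampled multispectral bands are rescaled by the ratio of the panchromatic band to an intensity estimate. The synthetic arrays, 4x scale factor, and nearest-neighbor upsampling are placeholder assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic inputs: 3-band MS at 64x64, pan at 256x256 (4x finer).
ms = rng.random((3, 64, 64))
pan = rng.random((256, 256))

ms_up = ms.repeat(4, axis=1).repeat(4, axis=2)     # nearest-neighbor upsample
intensity = ms_up.mean(axis=0)                     # simple intensity estimate

# Brovey: inject pan spatial detail via a per-pixel ratio.
sharpened = ms_up * (pan / (intensity + 1e-6))

print("output shape:", sharpened.shape)            # (3, 256, 256)
```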

This theme will provide a forum to discuss innovative image processing techniques and cutting-edge approaches that push the boundaries of remote sensing imagery resolution.

The proposed theme invites submissions on innovative methodological approaches to any of the above-mentioned enhancement perspectives, across all sensing modalities: optical, hyperspectral, LiDAR, SAR, passive microwave systems, etc. Application-driven research papers are also encouraged, including areas such as agricultural monitoring, urban mapping, environmental monitoring, marine monitoring, target detection, forest monitoring, and disaster response.

By addressing innovative methods for enhancing spatial, spectral, temporal, and radiometric resolution, the theme directly contributes to advancing the state of the art in remote sensing data exploitation. Furthermore, it complements IGARSS 2026's Special Themes, such as Next-Generation Sensors and Missions, by exploring how algorithmic innovations can extend the capabilities of current and future sensors. It also supports Earth System and Climate Studies and Disaster Monitoring and Mitigation by enabling higher-resolution observations critical for accurate environmental monitoring and rapid response.
CCT.22: Application-Oriented Remote Sensing Image Analysis Tasks
T.12: Theory and Techniques — Classification and Clustering
In an era where remote sensing technologies are fundamentally transforming fields such as agriculture, urban planning, and disaster management, the practical application of these advancements is paramount for addressing critical real-world challenges.

While researchers continually advance state-of-the-art methods for remote sensing image analysis—including feature extraction, classification, segmentation, and object detection—and achieve unprecedented performance on existing evaluation datasets, real-world applications often demand a level of quality and robustness that current methods may not consistently provide.

Theme Objectives
This theme aims to bridge the gap between methodological development and practical application. The primary objectives are to:

1. Showcase successful applications of remote sensing across key sectors: environmental monitoring, urban planning, agriculture, and disaster management.

2. Provide a platform for sharing knowledge, best practices, and innovative methodologies in applied remote sensing.

3. Foster collaboration between academia, industry, and government agencies to maximize the impact of remote sensing solutions.

4. Identify critical gaps in current applications and explore strategic approaches to overcome them.

Scope of Topics
The scope of this theme covers, but is not limited to, the following application-oriented tasks and domains:

1. Image Analysis Tasks
1.1 Semantic Segmentation
1.2 Instance Segmentation
1.3 Panoptic Segmentation
1.4 Building, Road, Tree, Vehicle, and Ship Detection
1.5 Change Detection
1.6 Time Series Analysis

2. Application Domains
2.1 Agriculture
2.2 Urban Planning
2.3 Environmental Monitoring
2.4 Disaster Monitoring

Inter-Theme Complementarity
This theme strongly complements established themes such as L (Land Applications), M (Atmosphere Applications), O (Oceans), and T (Theory and Techniques), and specifically includes, but is not limited to, topics like:

T.8 Feature Extraction and Reduction

T.9 Image Segmentation

T.11 Object Detection and Recognition

T.12 Classification and Clustering

T.14 Change Detection and Temporal Analysis

L Land Applications 

M Atmosphere Applications

O Oceans

Conclusion
Application-oriented remote sensing image analysis tasks hold significant potential for driving positive change across various sectors. By fostering interdisciplinary collaboration and providing robust opportunities for knowledge exchange, this theme will accelerate the development of innovative and impactful solutions.

IGARSS offers a unique and essential platform to explore these applications and their broader societal impact, making this theme a valuable and timely contribution to the conference. Together, we can effectively harness the power of remote sensing to address global challenges and contribute to a more sustainable future.
CCT.23: Decoding Urban Climate: Remote Sensing Insights into Energy Balance and Atmosphere
T.14: Theory and Techniques — Change Detection and Temporal Analysis
Rapid urbanization has transformed land use and land cover (LULC) patterns across India, particularly in semi-arid regions such as Ahmedabad, where natural vegetation and water bodies are increasingly replaced by impervious built-up surfaces. These transitions alter the surface energy balance (SEB) by increasing land surface temperature (LST) and sensible heat flux (H) while reducing latent heat flux (LE) and evapotranspiration (ET), ultimately intensifying the urban heat island (UHI) effect. The proposed theme, “Decoding Urban Climate: Remote Sensing Insights into Energy Balance and Atmosphere,” seeks to address these critical challenges by integrating satellite-based and ground-based observations to better understand urban surface–atmosphere interactions and their climatic implications.
The scope of this theme includes quantifying energy partitioning and atmospheric responses across urban and agricultural landscapes using multispectral and thermal remote sensing data. Studies employing datasets such as Landsat, MODIS, ECOSTRESS, and Sentinel, along with tools such as Google Earth Engine (GEE), QGIS, ArcMap, and ArcPro, are encouraged to evaluate parameters such as albedo, net radiation (Rn), LST, ET, H, LE, and ground heat flux (G). The inclusion of hydrometeorological variables such as precipitation and humidity further enhances the understanding of SEB variations and thermal stress. Advanced geospatial analytics, including correlation studies, trend analysis, and Emerging Hot Spot Analysis (EHSA), provide insights into the spatial and temporal evolution of urban heat hotspots and coldspots.
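To make the energy-balance bookkeeping concrete, the sketch below applies the standard closure Rn = G + H + LE to estimate latent heat flux as a residual; the flux values are hypothetical placeholders, not from any of the datasets above.

```python
# Minimal sketch of residual-based surface energy balance (SEB) closure:
# Rn = G + H + LE, so LE can be estimated as the residual Rn - G - H.
# All input values are hypothetical placeholders, not from any real scene.

def latent_heat_flux_residual(rn, g, h):
    """Estimate latent heat flux LE (W/m^2) as the SEB residual."""
    return rn - g - h

# Example: a daytime urban pixel with high sensible heating.
rn = 520.0   # net radiation (W/m^2)
g = 80.0     # ground heat flux (W/m^2)
h = 310.0    # sensible heat flux (W/m^2)

le = latent_heat_flux_residual(rn, g, h)
evaporative_fraction = le / (h + le)  # common thermal-stress indicator

print(f"LE = {le:.0f} W/m^2, evaporative fraction = {evaporative_fraction:.2f}")
```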
The importance of this theme lies in its potential to inform sustainable urban design, climate-resilient development, and energy-efficient infrastructure planning. By decoding the interactions among vegetation cover, surface fluxes, and atmospheric properties, remote sensing becomes a powerful tool for diagnosing urban climate health and guiding adaptive strategies for mitigation. The rationale for this theme aligns with IGARSS’s mission to advance Earth observation science and its societal applications. While existing general and special themes focus largely on global climate and ecosystem monitoring, this theme brings attention to the urban-scale processes shaping local energy balance and atmospheric composition. It bridges geoscience, environmental physics, and urban climatology, fostering collaboration among researchers working toward sustainable and livable cities in a changing climate.
CCT.24: Approaches and Methodologies for the Detection, Characterization, and Mitigation of Passive Sensor Data-Corrupting Emissions (DMiPS)
T.17: Theory and Techniques — RFI Detection and Mitigation
This session addresses the critical and growing challenge of anthropogenic Radio Frequency Interference (RFI) contaminating passive sensor data, a significant concern for meteorological and other Earth exploration satellites. As new telecommunication services like 5G and satellite broadband-aviation uplinks are deployed, the risk to adjacent passive sensing RF bands increases. This session seeks to explore a comprehensive suite of methodologies, services, tools, and capabilities for detecting, identifying, characterizing, and ultimately mitigating these harmful emissions.

The foundation of passive microwave remote sensing is the measurement of naturally occurring electromagnetic radiation, which is often extremely weak compared to man-made emissions. This disparity makes the accurate identification and prevention of data contamination essential for vital applications such as Numerical Weather Prediction (NWP) models. The session will delve into the significant uncertainties surrounding how interference from new commercial technologies will manifest and to what extent it will impact measurements. A core focus will be on strategies to minimize these effects on a global scale, preserving the integrity of data that underpins public safety, transportation, agriculture, and economic planning.

Key objectives include the development of efficient and repeatable methods to identify anthropogenic interference sources, enabling the accurate flagging and removal of contaminated data. Discussions can cover a wide array of space-based and even non-satellite approaches, including the use of observing platforms like drones and aircraft, alongside advanced analytical techniques such as post-processing and machine learning. Furthermore, the session aims to assess existing and emerging technologies for characterizing interfering signals from orbital or sub-orbital platforms. This will foster a deeper understanding of their respective strengths and weaknesses in reducing impacts on passive microwave radiometers. By bringing together experts from across these domains, the session will focus on formulating a robust, adaptable, and comprehensive long-term solution to interference—one that can evolve in tandem with both communication and passive instrument technologies to ensure critical Earth observation capabilities are sustained for the future.
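As one concrete illustration of the detection problem, a classical statistical approach flags RFI by testing whether radiometer samples remain Gaussian, since thermal noise has a kurtosis of 3 while man-made emissions pull it away from that value. The sketch below is a generic, illustrative detector of that kind, not the algorithm of any particular mission; the threshold is an assumed placeholder.

```python
import numpy as np

def kurtosis_rfi_flag(samples, threshold=0.3):
    """Flag a block of radiometer samples as RFI-contaminated when the
    sample kurtosis departs from the Gaussian value of 3.
    `threshold` is an illustrative placeholder, not a mission setting."""
    x = np.asarray(samples, dtype=float)
    x = x - x.mean()
    m2 = np.mean(x**2)
    m4 = np.mean(x**4)
    kurt = m4 / m2**2          # equals 3 for Gaussian thermal noise
    return abs(kurt - 3.0) > threshold, kurt

rng = np.random.default_rng(0)
clean = rng.normal(size=4096)                     # thermal noise only
pulsed = clean + (rng.random(4096) < 0.01) * 8.0  # sparse pulsed RFI added

print(kurtosis_rfi_flag(clean))   # (False, ~3.0)
print(kurtosis_rfi_flag(pulsed))  # (True, kurtosis >> 3)
```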
CCT.25: Radio Frequency Interference and Spectrum Management Issues in Microwave Remote Sensing
T.17: Theory and Techniques — RFI Detection and Mitigation
Radio Frequency Interference (RFI) has had an increasingly detrimental impact on microwave remote sensing. Interference can corrupt measurements and reduce the ability to retrieve relevant geophysical parameters over many regions of the globe. In several cases, even a primary spectrum allocation to remote sensing does not guarantee that the frequency range of operation of the sensors is free of RFI, since illegal in-band transmissions or out-of-band emissions can still be present. Many innovative technical advances, both in software development and hardware design, have been made by the Earth observing microwave remote sensing community as well as by radio astronomers to improve the detection of interference and the mitigation of its negative effects.

The problem of increasing occurrence of interference in measurements is closely related to the ever-growing spectrum needs of commercial interests, particularly those of the telecommunication industry. The development of 5G systems and constellations of satellites in low Earth orbit offering Internet services are some examples of how the demand for spectrum is placing enormous pressure on frequency bands utilized for microwave remote sensing. For this reason, the remote sensing community has also been working closely with spectrum managers to protect the frequencies of interest for science applications.

The session will present advances in interference detection and mitigation techniques developed within the passive and active remote sensing communities, report on observed cases of interference, discuss the status of current and upcoming missions with regard to dealing with RFI, and address spectrum management issues facing remote sensing frequencies.
CCT.26: Advances in Multimodal Remote Sensing Image Processing and Their Applications in Multiple Areas
T.18: Theory and Techniques — Multimodal Data Fusion
With the advancement of satellite technology, various types of remote sensing data have been acquired in large quantities, such as visible-light images, infrared images, multispectral images, hyperspectral images, and SAR images. Compared with single-modality data, multimodal remote sensing data are undoubtedly more conducive to all-weather, day-and-night, and omnidirectional observation of a region of interest. However, owing to the significant differences between data of different modalities, processing and interpreting multimodal remote sensing images in a timely and effective manner remains the main challenge in the field. With the development of deep learning and large-model technology, new processing and interpretation methods continue to emerge for specific tasks, such as multimodal image fusion, joint classification, object detection, and change detection. These advancements allow us to better monitor and analyze changes on the Earth's surface. Multimodal remote sensing images have broad application prospects in fields such as agriculture, forestry, and oceanography. Therefore, in-depth research on multimodal remote sensing image processing has important theoretical significance and practical value.
Timely awareness of the latest advancements in multimodal remote sensing image processing enables researchers to grasp cutting-edge technologies and innovative methods, thereby enhancing the efficiency and accuracy of their research and applications and contributing to the in-depth development of scientific research. Furthermore, understanding these advancements fosters collaboration between academia and industry, promoting the translation and application of technology, providing strong support for sustainable development, addressing global challenges, and driving innovation and progress in practical applications.
The broad topics include (but are not limited to):
•	RGB/infrared/multispectral/hyperspectral/SAR image interpretation
•	Large foundation models and fine-tuning for multimodal remote sensing image processing
•	Object/target detection methods for multimodal remote sensing images
•	Change detection methods for multimodal remote sensing images
•	Joint classification methods for multimodal remote sensing images
•	Fusion methods for multimodal remote sensing images
•	Applications of multimodal remote sensing image processing in agriculture
•	Applications of multimodal remote sensing image processing in forestry
•	Applications of multimodal remote sensing image processing in coastal and sea areas
•	Applications of multimodal remote sensing image processing in wetlands
•	Applications of hyperspectral image classification in other fields
CCT.27: Connecting Earth Observation and Ground-Level Images
T.18: Theory and Techniques — Multimodal Data Fusion
Ground-level images provide unique contextual and semantic information that complements satellite and aerial observations. Establishing reliable connections between these modalities, despite near-orthogonal viewpoint differences, dramatic scale and resolution disparities, and occlusion effects, unlocks a broad range of emerging applications. These include fine-grained geo-referencing of unlocalized photos, satellite data attribution, 3D scene reconstruction across varying altitudes, multi-view data fusion for digital twins, and cross-source change detection for environmental and infrastructure monitoring.

Recent advances in deep learning, cross-view retrieval, and multi-modal representation learning have markedly improved our ability to associate ground images with overhead observations; however, the task remains extremely challenging: differences in geometry, radiometry, and content visibility persist, and globally accurate ground-to-satellite localization is still limited to tens of meters without additional priors. The next frontier lies in local-context geo-localization, where approximate camera positions (e.g., within a few city blocks) enable meter-level alignment and open opportunities for downstream mapping, navigation, and situational awareness.

This theme session invites contributions that advance algorithms, datasets, and applications enabling accurate and scalable association between ground-level and satellite imagery. Topics of interest include, but are not limited to:
●	Ground-to-satellite image matching and visual localization
●	Multi-altitude and multi-sensor data fusion
●	Cross-view 3D reconstruction and geometric alignment
●	View synthesis and neural rendering conditioned on satellite or aerial priors
●	Foundational models and multimodal embeddings for cross-view association
●	Applications in urban search-and-rescue, environmental monitoring, and digital-twin generation

To promote innovation and reproducibility, the session will feature a competition track on local-context ground-to-satellite image localization. Participants will receive sets of one or more satellite images and one or more ground-level images with approximate locations (e.g., within a few hundred meters) and will be evaluated on predicted camera geolocation accuracy in meters. Winning teams will be acknowledged and invited to present their solutions during the session. Papers submitted to the theme will be considered for inclusion in a special issue of Photogrammetric Engineering & Remote Sensing (PE&RS), and winning teams will be asked to contribute to an article on the competition to be published in that special issue.
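For contributors gauging the competition metric, the geolocation error in meters between a predicted and a true camera position can be computed with a standard great-circle (haversine) distance. The snippet below is a plain illustration of such a metric, not the official evaluation code.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
    """Great-circle distance in meters between two lat/lon points
    (spherical-Earth approximation)."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = p2 - p1
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * radius_m * math.asin(math.sqrt(a))

# Example: predicted vs. true camera location roughly a city block apart.
print(f"{haversine_m(40.7128, -74.0060, 40.7138, -74.0050):.1f} m")
```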
CCT.28: Image Analysis and Data Fusion: The AI Era
T.18: Theory and Techniques — Multimodal Data Fusion
The impact of Earth observation data across applications ranging from topographic map production to environmental and anthropic-area monitoring and hazard management continues to drive the development and deployment of state-of-the-art sensors capable of providing diverse insights into the Earth system’s condition. Modern Earth observation technologies offer data in various modalities captured by passive and active sensors, spanning imagery such as optical (multispectral and hyperspectral) and radar (synthetic aperture radar). Other emerging data modalities include Lidar point clouds and precise positioning information via GNSS. While early developments of remote sensing data applications predominantly focused on individual sensors, contemporary approaches, notably those rooted in machine learning, seek to integrate data from multiple sources, as they often offer complementary information for understanding the Earth’s ecosystem. Today, an even broader array of data sources is available, including crowd-sourced photographs, oblique images, and data from social networks, opening up novel avenues for tackling the most complex challenges in Earth monitoring and comprehension.

Despite the abundant availability and advantages of multimodal data, which encompasses multi-sensor, multi-frequency, and multi-temporal data, the analysis and fusion of information from these sources remains a complex and continuously evolving research frontier. Modern AI strategies founded on deep neural networks are at the forefront of enabling effective image analysis and multi-sensor data fusion. As a result, image analysis and data fusion have emerged as vibrant and dynamic research topics, characterized by a substantial demand for knowledge exchange, discussion of open research challenges, introduction of new benchmark datasets, and forums for debating new EO technological advances. 

The Image Analysis and Data Fusion Technical Committee (IADF-TC) of the Geoscience and Remote Sensing Society is dedicated to addressing open research challenges. Its mission is to facilitate connections among experts, provide educational resources for students, engage with research practitioners and professionals, and promote best practices in the realm of image analysis and data fusion applications. Among its various activities, the IADF-TC organizes an annual community-focused session held in conjunction with the IGARSS symposium. The session has so far assembled cutting-edge contributions from industry, government, and academic sectors presenting the latest advances on topics that include data preprocessing (pansharpening, coregistration, geocoding), learning and analysis (supervised/self-supervised learning, computer vision, multi-modal data integration, and multi-temporal data analysis), and evaluation (embedding analysis, decision fusion, uncertainty quantification, Explainable AI).

This proposed session boasts a long and successful history within IGARSS, consistently held for over a decade. As a well-established session, it garners the full attention of both senior researchers and young scientists. Furthermore, it addresses topics of increasing significance in remote sensing and geoscience, appealing to an interdisciplinary audience with interests spanning methodology and application domains.
CCT.29: Integrating Satellite Imagery and Text in the era of Remote Sensing Foundation Models
T.18: Theory and Techniques — Multimodal Data Fusion
Foundation models (FMs) are transforming remote sensing (RS) by revolutionizing how we collect, analyze, and interpret geospatial data. They automate complex analyses, enhance scalability, foster interdisciplinary collaboration, improve accessibility, adapt dynamically, and enable real-time monitoring. Integrating FMs into RS workflows not only streamlines existing practices but also opens new avenues for innovation and data-driven environmental decision-making.
While recent RS FMs support multimodal data, their effective integration with text remains an open challenge. Combining textual information—such as metadata, descriptions, and user queries—with imagery enables deeper semantic understanding and more intuitive interpretations. Text can appear in various structured or unstructured formats from diverse sources, including web content, databases, and metadata. However, fundamental differences between visual and textual modalities hinder joint modeling; for instance, models trained on natural images may not generalize well to satellite data or domain-specific language. Addressing these gaps requires adaptable FMs, curated multimodal datasets, and training methodologies optimized for RS.
Integrating text with visual data enables models to reason about imagery and associated descriptions, uncovering higher-level semantics. Beyond visual recognition, such models can capture relationships, align linguistic and spatial cues, and generate natural-language explanations of satellite imagery. These advances enhance RS tasks requiring multimodal understanding—such as image captioning, visual question answering, or text-based image retrieval. Emerging vision–language models are driving progress in captioning, retrieval, scene classification, semantic segmentation, object detection, and visual grounding. As these systems mature, their RS integration is poised to accelerate application domains including but not limited to disaster response, urban planning, and climate change analysis.
This session explores opportunities and challenges in merging text with satellite imagery in the era of remote-sensing FMs. We invite contributions presenting methodologies, architectures, and case studies demonstrating how language can augment geospatial AI. Remote-sensing FM may or may not be a part of the architecture. By uniting experts from Remote Sensing, Natural Language Processing (NLP), and Earth sciences, this session aims to foster interdisciplinary collaboration and define future directions for intuitive, multimodal RS analytics.
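As a concrete picture of the text-based retrieval task mentioned above, the sketch below ranks satellite-image embeddings against a query-text embedding by cosine similarity; the embedding arrays are random stand-ins for the output of any RS vision-language encoder.

```python
import numpy as np

def rank_images_by_text(text_emb, image_embs):
    """Return image indices sorted by cosine similarity to the text query.
    Embeddings are placeholders for any vision-language encoder output."""
    t = text_emb / np.linalg.norm(text_emb)
    imgs = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = imgs @ t
    return np.argsort(-sims), sims

rng = np.random.default_rng(42)
text_emb = rng.normal(size=512)            # e.g., "flooded cropland near a river"
image_embs = rng.normal(size=(1000, 512))  # embeddings of an image archive

order, sims = rank_images_by_text(text_emb, image_embs)
print("top-5 images:", order[:5], "scores:", np.round(sims[order[:5]], 3))
```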
The scope of this session includes, but is not limited to:
- Novel downstream use-cases combining remote-sensing imagery and text (Remote-sensing FM is optional in this case)
- Design and architecture of vision-language models for remote-sensing
- Efficient and scalable training approaches for remote-sensing FMs involving image and text
- Validation and benchmarking of FMs for remote-sensing tasks
- Explainability and interpretability in language-augmented remote-sensing models
- Applications of language-enhanced remote sensing (examples include, but are not limited to, disaster management, urban planning, and environmental monitoring)
- Integrating natural language interfaces with geospatial analysis tools
- Language-driven geospatial data visualization and exploration
- Future research directions for remote-sensing FMs combining imagery with natural language processing (NLP).
CCT.30: Multisensor and Multi-modal geospatial data intelligence for Natural Disaster Response
T.18: Theory and Techniques — Multimodal Data Fusion
Recent developments in Earth observation sensing technology have yielded new knowledge, methods, and tools for data collection, processing, and sensing. This session will provide a platform to discuss challenges and opportunities in multisource data fusion, AI methods for efficient and accurate remote sensing image classification, 3D mapping, and their applications, such as change detection for disaster management and assessment, land use and land cover analysis, and environmental monitoring and assessment.

Various Earth observation platforms (e.g., satellites, aircraft, and drones) and types of sensors (e.g., LiDAR, optical, SAR, hyperspectral), along with in-situ sensors, have noticeably increased during the last decade. As a result, massive amounts of rich data are systematically produced. This increase in the number and heterogeneity of Earth observation data sources presents both opportunities and challenges. Opportunities result from the refinement of sensors and sensing methods, the ability to capture increasingly granular data via those improved sensors and methods, and the improved data storage, manipulation, analysis, and interpretive capacity of modern computational hardware and software, cloud computing, and the IoT. Challenges result from the difficulty of accurately fusing data into a coherent, comprehensive, fully detailed, and integrated understanding of multisensor observations. This session brings together state-of-the-art research and development on AI algorithms, techniques, and software tools to address the above-mentioned challenges and to better exploit spatial and temporal features and dependencies in RS data. The focus of the session will be on data-driven models (such as CNN, GAN, RNN, etc.) that are highly adaptable to changes in large remote sensing datasets and can find unexpected patterns in multi-source and multi-temporal data.
CCT.31: Microwave-based water monitoring: modeling, techniques and applications
T.20: Theory and Techniques — Microwave Modeling
Monitoring water distribution and dynamics in all its forms — from polar ice sheets and soil moisture to wetlands and flood events — is fundamental to understanding climate change and mitigating its effects. In this context, microwave remote sensing offers an unparalleled, all-weather, day-and-night monitoring capability. This proposed community-contributed theme builds upon these strengths by creating a focused forum for water monitoring. The session will cover the entire value chain: from the electromagnetic modeling that links sensor measurements to geophysical parameters; to observation techniques, with synergies and complementarities among different sensors; all the way down to the applications that deliver societal, humanitarian, and commercial impact. 
In the broader context of the IGARSS program, this session is conceived to foster synergies between foundational topics in Microwave Modeling (T.20) and key application areas like Soils and Soil Moisture (L.8), Wetlands (L.9), and Inland Waters (L.10).
The strong community demand for such a theme was demonstrated by the success of the first MAMIWATA workshop, organized by the IEEE GRSS MIRS Technical Committee in 2023, in Pasadena (CA), USA. This momentum is underscored today by the more than 150 participants already registered for the upcoming second edition of MAMIWATA, to be held in Rome in November 2025. Two key outcomes from the first gathering were the consensus on the need for a unified platform to advance the physical modeling of scattering processes that involve water in different forms, and the need for building a diverse network to bridge the gap between sensor-focused communities. This session aims to be that bridge, encouraging the cross-pollination of ideas to address scientific challenges in a more unified, application-driven way. 
We solicit contributions across the full spectrum of water monitoring. This includes advancements in modeling, such as polarimetric and interferometric models, direct problems and inverse models, and radiative transfer theory. We also invite papers on innovative techniques, including new retrieval algorithms, comparative performance analyses between different sensors, and novel strategies for multi-sensor data fusion. Finally, we strongly encourage submissions on impactful applications, such as operational flood mapping, wetland carbon cycle monitoring, agricultural water management, and cryosphere change assessment. 
By bringing together experts from these interconnected domains, this theme will enrich the IGARSS program by facilitating a deeper understanding of retrieval uncertainties and paving the way for more robust algorithms. This holistic approach is essential for translating advanced modeling and sensor technology into tangible solutions for today’s most pressing environmental challenges.
CCT.32: Modeling in Radar Remote Sensing
T.20: Theory and Techniques — Microwave Modeling
Radar remote sensing has become one of the most capable tools for observing Earth and planetary environments, leading to significant progress in retrieving bio-geophysical properties across rough surfaces of soil and ocean, forests, snow and ice, glaciers, and even planetary subsurface layers.

A fundamental requirement for these applications is the capability to model the scattering behavior of natural media. Forward modeling based on physical scattering mechanisms enables robust inversion of geophysical parameters and helps evaluate the sensitivity of radar observations under different system configurations and environmental conditions. By providing a rigorous link between measurements and physical processes, scattering models improve retrieval accuracy and also guide the design of novel radar sensors and observational strategies.

At the same time, data-driven approaches, particularly machine learning (ML), have gained substantial traction in radar remote sensing. While ML enables large-scale data exploitation, it often suffers from limited interpretability and robustness when confronted with out-of-distribution data. Physics-based scattering models can complement ML approaches by embedding physical constraints, thus creating hybrid frameworks that enhance reliability and explainability. This session will highlight advances in both theory and applications of scattering models, inversion methods, and physics-ML synergies in radar remote sensing.
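One minimal way to read the hybrid idea is as a composite training objective: a data-fit term plus a penalty tying the retrieval to a physics-based forward model. The sketch below assumes a toy forward operator and illustrative weights; it is a schematic of the concept, not a validated scattering model.

```python
import numpy as np

def forward_model(soil_moisture, incidence_rad=0.7):
    """Toy physics-based forward model: backscatter (dB) increasing with
    soil moisture. A placeholder standing in for a real scattering model."""
    return -25.0 + 30.0 * soil_moisture * np.cos(incidence_rad)

def hybrid_loss(sm_pred, sm_true, sigma0_obs, lam=0.5):
    """Data-fit term plus a physics-consistency penalty (illustrative)."""
    data_term = np.mean((sm_pred - sm_true) ** 2)
    physics_term = np.mean((forward_model(sm_pred) - sigma0_obs) ** 2)
    return data_term + lam * physics_term

sm_true = np.array([0.10, 0.25, 0.40])      # "ground truth" soil moisture
sigma0_obs = forward_model(sm_true) + 0.2   # observed backscatter (noisy)
sm_pred = np.array([0.12, 0.22, 0.45])      # a model's retrieval

print(f"hybrid loss: {hybrid_loss(sm_pred, sm_true, sigma0_obs):.4f}")
```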

This CCT is organized by the Modeling in Radar Remote Sensing (MRRS) Working Group of the Modeling In Remote Sensing Technical Committee (MIRS TC) of the IEEE Geoscience and Remote Sensing Society (GRSS). This CCT aims to bring together researchers across radar modeling, inversion, and hybrid methods, covering diverse environments and applications from environmental monitoring and climate studies to disaster management and planetary exploration.
CCT.33: Compact and Dual Polarimetry SAR for Enhanced Coverage and Intelligent Monitoring
T.22: Theory and Techniques — Polarimetric Modeling
Modern spaceborne SAR increasingly favours a one-transmit/two-receive (1Tx/2Rx) architecture to balance swath width, revisit time, and data volume. Transmitting a single polarisation and receiving two orthogonal components within a pulse-repetition interval (PRI) enables wider swath and more frequent revisits, key for operational monitoring and onboard decision-making with constrained downlink. In dual-polarimetric SAR (e.g., Sentinel-1), a single transmit polarisation (H or V) yields either (HH, HV) or (VV, VH), so one co-/cross-pol pair is acquired while the complementary pair is not measured. Compact Hybrid Polarimetric (CHP) SAR transmits circular polarisation (RHCP or LHCP) and receives H and V, with the capability, under practical scattering assumptions, to derive the full set of linear co- and cross-pol channels for many applications. 
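For readers new to hybrid polarimetry, the received H/V pair under circular transmit is commonly summarized by the four Stokes parameters, from which child parameters such as the degree of polarization follow. The sketch below uses one common sign convention (conventions differ across sensors) and synthetic placeholder samples.

```python
import numpy as np

def hybrid_pol_stokes(eh, ev):
    """Per-look Stokes parameters from complex H/V receive channels under a
    circular transmit (one common sign convention; conventions vary)."""
    s0 = np.abs(eh) ** 2 + np.abs(ev) ** 2
    s1 = np.abs(eh) ** 2 - np.abs(ev) ** 2
    s2 = 2.0 * np.real(eh * np.conj(ev))
    s3 = -2.0 * np.imag(eh * np.conj(ev))
    return np.stack([s0, s1, s2, s3])

def degree_of_polarization(stokes_looks):
    """Multi-look average first: m = 1 only for fully polarized returns."""
    s0, s1, s2, s3 = stokes_looks.mean(axis=1)
    return np.sqrt(s1**2 + s2**2 + s3**2) / s0

rng = np.random.default_rng(1)
# Placeholder looks: partially correlated H/V channels (not real sensor data).
eh = rng.normal(size=256) + 1j * rng.normal(size=256)
ev = 0.5 * eh + 0.5 * (rng.normal(size=256) + 1j * rng.normal(size=256))

m = degree_of_polarization(hybrid_pol_stokes(eh, ev))
print(f"degree of polarization m = {m:.2f}")
```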
With the availability of NISAR and Sentinel-1 for Earth observation, and Chandrayaan-2 for lunar surface exploration, dual- and compact-pol SAR research has become both timely and essential. Moreover, the recent introduction of hybrid polarimetry in inverse SAR (ISAR) for military and surveillance applications further underscores the growing importance of this configuration.
This session targets system and algorithm advances that make 1Tx/2Rx attractive for Earth, Lunar, and Defence use cases: AI/ML-based reconstruction of missing polarisations, model-based estimators (e.g., reflection-symmetry-aware methods), calibration and error budgets for compact/dual modes, and trade-off analyses linking swath, SNR, information content, and data efficiency. We welcome contributions spanning theory, simulation, and mission results (including NISAR and Chandrayaan-2) that demonstrate how compact and dual polarimetry enable intelligent, wide-area monitoring without sacrificing essential polarimetric capability.
CCT.34: Microwave Remote Sensing of Snow
C.1: Cryosphere — Snow Cover
Recent advances in microwave remote sensing are reshaping global monitoring of snow properties—snow water equivalent (SWE), depth, wetness, albedo, microstructure, and melt dynamics. With the recent launch of NISAR, routine L-band SAR/InSAR observations are now available at continental scales, opening new opportunities for mapping snow accumulation and metamorphism, tracking wet/dry transitions, quantifying deformation and compaction, and improving retrievals in forested and complex terrain alongside existing higher-frequency sensors such as Radarsat-2, Sentinel-1, TerraSAR-X, COSMO-SkyMed, and passive missions. In parallel, ESA’s HydroGNSS mission, scheduled for launch in 2025, will extend L-band GNSS-R measurements into high latitudes with frequent revisit, introducing new possibilities for terrestrial snow studies and for synergy with SAR, radiometry, and in-situ networks.

Importantly, neither NISAR nor HydroGNSS was purpose-built for direct retrievals of snow properties (e.g., SWE, snow wetness). Although the snow scattering intensity from L-band is generally small and can limit sensitivity to snow properties, both missions provide phase information—NISAR through interferometric phase and HydroGNSS through coherent GNSS reflections—that will encode snowpack quantity, structure, deformation/settlement, and wet/dry state changes. These phase-based observables, in combination with intensity, polarization, and ancillary data, are expected to contain actionable snow signals that can guide algorithm development and inform the design of future, snow-optimized missions.
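A concrete example of such a phase-based observable is the widely used first-order dry-snow relation of Guneriussen et al. (2001), in which interferometric phase change scales with the change in SWE. The sketch below inverts that approximation; the wavelength, incidence angle, and sign convention are illustrative assumptions.

```python
import math

def delta_swe_from_phase(dphi_rad, wavelength_m=0.24, inc_angle_rad=0.6):
    """Invert the first-order dry-snow approximation of Guneriussen et al.
    (2001): dphi = -2 * k * dSWE * (1.59 + theta**2.5), with k = 2*pi/lambda.
    The L-band wavelength (~0.24 m, NISAR-like) and incidence angle are
    illustrative defaults; sign conventions vary between processors."""
    k = 2.0 * math.pi / wavelength_m
    return -dphi_rad / (2.0 * k * (1.59 + inc_angle_rad ** 2.5))

# Example: one fringe (2*pi) of phase change at ~34 deg incidence.
dswe_m = delta_swe_from_phase(-2.0 * math.pi)
print(f"dSWE ~ {dswe_m * 1000:.0f} mm of water equivalent")
```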

This session will encourage science results, algorithms, and applications that leverage:

NISAR (L-band SAR/InSAR): Early results for terrestrial snow estimation; interferometric and polarimetric techniques for detecting snowpack evolution, deformation/settlement, and wet/dry transitions; fusion with optical and passive microwave data to improve retrievals in forests and complex terrain.

HydroGNSS (L-band GNSS-R): Methods and demonstrations for potential SWE and wetness retrievals, snow/soil signal separation, snowmelt detection, and cross-sensor integration with SAR, SMAP/SMOS, and ground networks.

Complementary assets: Multi-mission and multi-frequency approaches combining Sentinel-1, Radarsat-2, TerraSAR-X, COSMO-SkyMed, SMAP/SMOS, AMSR-2/GMI, FY-3, and upcoming ROSE-L, CIMR, and TSMM; airborne campaigns and dense in-situ observations for calibration/validation and uncertainty quantification.

We particularly welcome contributions on physics-based and hybrid retrievals, cross-scale data assimilation for hydrology and climate, benchmarking and uncertainty propagation across biomes and canopy conditions, and open datasets/workflows. By uniting SAR, GNSS-R, passive microwave, optical, airborne, and ground perspectives, the session will highlight early NISAR and HydroGNSS findings, establish best practices for sensor fusion, and chart a path toward operational snow monitoring and next-generation mission concepts.
CCT.35: Advancing Permafrost/Cryosphere Remote Sensing
C.4: Cryosphere — Permafrost
With ongoing climate change, permafrost is degrading at an accelerating rate worldwide. Thawing permafrost threatens permafrost infrastructure and acts as a potent climate feedback by releasing stored greenhouse gases. In this context, remote sensing provides the only viable means to systematically monitor these vast, remote, and often inaccessible regions, delivering the consistent, long-term datasets essential for scientific discovery, policy-making, and hazard assessment.
This session is dedicated to showcasing the latest advancements in remote sensing that are essential for observing, quantifying, and understanding the complex dynamics of the permafrost environment. We invite contributions that utilize a diverse range of satellite and airborne remote sensing technologies—including synthetic aperture radar (SAR) and interferometric SAR (InSAR)—for detecting ground deformation caused by permafrost thaw, monitoring key permafrost parameters, and assessing associated hazards. A key focus will be on innovative methodologies, such as multi-sensor data fusion, machine learning for feature extraction, and the integration of remote sensing data with in-situ measurements and process-based models to move beyond observation towards mechanistic understanding.
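As a reminder of the core InSAR observable behind thaw-subsidence mapping, unwrapped interferometric phase converts to line-of-sight displacement via d = -lambda * dphi / (4 * pi). The sketch below is a minimal illustration; the wavelength default and sign convention are assumptions that depend on the sensor and processor.

```python
import math

def los_displacement_m(dphi_rad, wavelength_m=0.0556):
    """Line-of-sight displacement from unwrapped interferometric phase:
    d = -lambda * dphi / (4 * pi). The C-band (Sentinel-1-like) wavelength
    is an illustrative default; the sign convention is processor-dependent."""
    return -wavelength_m * dphi_rad / (4.0 * math.pi)

# Example: +3 rad of phase change accumulated over a thaw season.
print(f"LOS displacement ~ {los_displacement_m(3.0) * 1000:.1f} mm")
```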
This session is strategically designed to complement the broader themes of IGARSS. While the conference covers land, ocean, and atmosphere, the cryosphere represents a unique and rapidly evolving domain that requires specialized analytical techniques. By consolidating cutting-edge cryospheric research, we provide a dedicated forum for a growing community and foster interdisciplinary dialogue. This session will naturally synergize with themes on climate change, hydrology, and hazards by explicitly addressing a key component that interlinks them all. It aims to chart the future direction of remote sensing in addressing one of the most pressing environmental challenges of our time, highlighting the indispensable role of our field in Earth system science.
CCT.36: Remote sensing analysis in graphical and other transformed domains
L.1: Land Applications — Land Use Applications
Graphs serve as a powerful representational framework for machine learning (ML), and their integration has substantially advanced the field. Extensive studies have pushed forward graph ML in both theory and applications. Recently, new perspectives have been emerging in the ML community, including algebraic–topological analyses, foundation models, generative models, and large models in applications. Leveraging these ideas for core graph ML holds a lot of promise, offering the combined benefits of deeper theoretical insight, new capabilities, and more powerful, application-aligned algorithms and models. The aim of this session is to explore and connect these new perspectives on graph ML and to identify overarching challenges and tools in remote sensing (RS) applications.

In RS, graph analysis is a powerful tool that allows for the extraction of meaningful insights from spatial data. It finds applications in:
i) Land Cover Classification: graph-based methods can classify land cover types by representing classes as nodes and their relationships as edges;
ii) Change Detection: by constructing graphs from multi-temporal RS images, changes in land use or land cover can be detected by analyzing differences in graph structure over time;
iii) Object Detection and Tracking: graphs can represent objects in RS imagery, where nodes represent detected features and edges represent spatial or semantic relationships, aiding the detection and tracking of targets over time;
iv) Image Segmentation: graph-cut algorithms can segment RS images into meaningful regions by formulating segmentation as a graph partitioning problem, allowing more accurate delineation of features (a minimal construction of such a pixel graph is sketched below);
v) Network Analysis: in urban studies, graph analysis helps model transportation networks, utility infrastructure, and ecological networks, providing valuable insights into connectivity and flow dynamics;
vi) Spatial Analysis: graphs can represent geographical phenomena for studying spatial relationships, interactions, and hierarchies between geographical features such as rivers, roads, and urban areas;
vii) Anomaly Detection: graph-based techniques can identify outliers or anomalies in RS datasets, helping to detect illegal logging, encroachments, or other irregularities;
viii) Ecological Modeling: graphs can model ecosystem interactions, helping researchers understand species distribution, habitat connectivity, and ecological network dynamics;
ix) Disaster Response and Management: analyzing graphs generated from satellite data can help assess the impact of disasters, evaluate infrastructure damage, and plan recovery operations by understanding spatial relationships and connectivity;
and, most importantly, x) Data Fusion: in multi-sensor RS, graph structures can facilitate the integration of diverse datasets, enhancing information extraction by linking related features from various sources.
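To make item iv) concrete, the sketch below builds a 4-neighbour pixel-adjacency graph with networkx, weighting edges by intensity similarity; this is the structure that graph-cut segmentation partitions. The toy image and weighting kernel are illustrative placeholders.

```python
import networkx as nx
import numpy as np

def image_to_grid_graph(img, sigma=0.1):
    """Build a 4-neighbour pixel graph; edge weights decay with intensity
    difference, giving the structure graph-cut segmentation partitions."""
    h, w = img.shape
    g = nx.Graph()
    for i in range(h):
        for j in range(w):
            for di, dj in ((0, 1), (1, 0)):
                ni, nj = i + di, j + dj
                if ni < h and nj < w:
                    wgt = np.exp(-((img[i, j] - img[ni, nj]) ** 2) / sigma**2)
                    g.add_edge((i, j), (ni, nj), weight=wgt)
    return g

img = np.zeros((8, 8))
img[:, 4:] = 1.0  # toy two-region "image"
g = image_to_grid_graph(img)
print(g.number_of_nodes(), "pixels,", g.number_of_edges(), "edges")
```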

This session welcomes submissions, talks, and posters on a wide range of topics, perspectives, and ideas, including but not limited to:
    - Graphical and other transformed domain representation of images for various RS applications.
    - Graph foundational models and Graph Augmented LLMs 
    - Continuous and Differential Geometric Models 
    - Graph Diffusion Models and Graph Generative Models 
    - Graphs for Science and Graph-Based Simulations
    - Novel Graph ML Architectures 
    - Self-supervised and Semi-supervised Graph ML
    - Novel graph-theoretic applications in RS
    - Transferability and Generalization Properties of Graph Models
    - Other transformed domains, such as frequency domain, time domain, etc. 
CCT.37: Remote sensing of hot deserts and desertification
L.2: Land Applications — Land Cover Dynamics
Twenty-seven percent of the world’s terrestrial area is classified as arid or hyper-arid, yet it is home to only 5.8% of the world’s population. Contrary to the general notion, these areas are dynamic ecosystems that respond sensitively to changes in water availability and carbon dioxide (CO2) levels, making them important indicators of climate change and offering early warning signs of the broader impacts of global climate change. Mapping and monitoring temporal and spatial variations of wide hot desert areas and their fringe dynamics at different scales is important for better understanding their impacts on global climate. Detecting deserts' borders and their change is instrumental in assessing desertification processes.
Remote sensing techniques are critical for monitoring and mapping deserts and their fringe dynamics in the context of climate and land use change. The proposed session aims to update and review the full range of methods utilizing VIS, NIR, SWIR, thermal, and radar remote sensing sensors for desert and desert-fringe mapping and monitoring, and to present new algorithms, from multi-/hyperspectral classification and modeling to machine learning techniques, for this purpose.
Of special interest are topics concerning albedo and BRDF variations across desert areas; soil moisture estimation utilizing VIS/NIR/SWIR and radar sensors; and herbaceous, shrub, and dwarf-shrub patterns and their dynamics with reference to precipitation and evapotranspiration at different spatial and temporal scales.
The issue of desertification versus desert greening has attracted much debate during the last two decades, involving conceptual as well as methodological disputes. Fundamental issues concerning the detection and delineation of deserts' borders have been almost neglected, while estimates of desert expansion and contraction are frequently published.
CCT.38: Large-scale forest structure and change mapping with the fusion of radar and lidar/optical sensors
L.3: Land Applications — Forest and Vegetation: Application and Modelling
Since forest structure is of great value to terrestrial ecology, habitat biodiversity, and global carbon storage assessments, it is desirable to monitor and quantify the state of, and change in, the forest vertical structural profile, aboveground biomass, and height, along with other forest biophysical characteristics (e.g., LAI). It is important to generate such products at large scales (e.g., regional and global) and moderate resolutions (e.g., a few hectares down to sub-hectare) because they are useful not only for carbon storage accounting and carbon cycle dynamics modeling but also for supporting efforts aimed at quantifying biodiversity, particularly given the rapid declines and losses of many plant and animal species worldwide.

To address this scientific goal, the remote sensing community has been working towards combining multi-sensor measurements, such as Synthetic Aperture Radar (SAR) and lidar, as well as optical sensors. Examples include JAXA's ALOS/ALOS-2/ALOS-4 (single L-band SAR) and MOLI (lidar) missions, NASA's NISAR (single L-band SAR) and GEDI (lidar), DLR's TanDEM-X (twin X-band SAR), ESA's BIOMASS (single P-band SAR), as well as China's Lutan-1 (bistatic L-band SAR) and TECIS (lidar). Commercial SAR satellites have also been emerging in recent years, such as the X-band SAR constellation from Capella Space and the cartwheel multi-static X-band SAR constellation PIESAT-1 from PIESAT Information Technology. Other spaceborne passive optical sensors, including NASA’s Landsat and ESA’s Sentinel-2, have also been successfully combined with lidar missions for large-scale forest parameter mapping.

It is known that interferometric radar (and passive optical sensors) have complete spatial coverage and good spatial resolution with moderate accuracy in measuring vertical structure, and that microwave radar penetrates deeper into the canopy than passive optical sensors. In contrast, spaceborne lidar has sparse coverage but much better vertical confidence. This session will therefore demonstrate a few novel scientific algorithms that combine the complete spatial coverage of radar/optical sensors with the precise vertical measurements of lidar, so that large-scale (potentially global-scale) maps of forest vertical structure, biophysical parameters (e.g., aboveground biomass and height), and their time-series changes can be generated through the fusion of multiple spaceborne SAR modalities, including (but not limited to) Interferometric SAR (InSAR), Polarimetric InSAR (PolInSAR), and Tomographic SAR (TomoSAR) data, as well as lidar and/or passive optical data.
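A common concrete instance of this fusion strategy is to train a regressor on wall-to-wall radar/optical features using the sparse lidar footprints as labels, then predict heights everywhere. The sketch below uses synthetic placeholder features and a random forest; it illustrates the pattern rather than any specific mission algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)

# Placeholder wall-to-wall radar/optical features (e.g., backscatter,
# InSAR coherence, spectral bands) on a flattened 100x100 grid.
features = rng.normal(size=(10_000, 4))
true_height = 10 + 3 * features[:, 0] - 2 * features[:, 1] + rng.normal(size=10_000)

# Sparse "lidar footprints": the few pixels where height is measured.
footprints = rng.choice(10_000, size=500, replace=False)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(features[footprints], true_height[footprints])

# Predict wall-to-wall canopy height from the radar/optical features.
height_map = model.predict(features)
rmse = np.sqrt(np.mean((height_map - true_height) ** 2))
print(f"synthetic-example RMSE: {rmse:.2f} m")
```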
CCT.39: Multifrequency Microwave Applications to Soil and Vegetation: Observations and Modeling
L.3: Land Applications — Forest and Vegetation: Application and Modelling
The future of Earth observation technologies is an intriguing matter. Many factors contribute to this scenario; one of the most decisive is climate change, whose increasingly frequent extreme weather events and persistent droughts have a strong impact on crop productivity, health, and yields. Recently, these risks have been expanding in several regions within the Mediterranean basin. Therefore, effective management of soil and water resources can be considered a key factor in achieving sustainable food production. In this context, the monitoring of soil and vegetation from space can greatly enhance our understanding of Earth-surface processes such as the hydrological cycle and crop productivity. With rapid advancements in remote sensing technology, the estimation of several natural parameters with high spatio-temporal resolution is becoming more and more feasible.
The role of soil moisture and vegetation biomass in the climate system has been studied for a long time by the climate research community, showing an enhanced understanding after the addition of remotely sensed products. The essential role of soil moisture in the climate system motivated the Global Climate Observing System (GCOS) and ESA to endorse soil moisture as an Essential Climate Variable (ECV) and introduce it to their Climate Change Initiative programme. 
Remote sensing techniques support observations of the most important surface characteristics according to the sensor type: optical sensors, such as visible/near-infrared, hyperspectral, and thermal infrared, have limited penetration capabilities, making it impossible to obtain soil information in the presence of dense vegetation layers, or information on the entire vegetation structure in the case of well-developed plants. Nevertheless, the signal at these wavelengths is very sensitive to vegetation pigments and photosynthetic activity. Furthermore, the use of these sensors is hampered by weather conditions such as clouds and rainfall, and by the dependence on solar illumination.
Microwave sensors, such as synthetic aperture radar (SAR) and microwave radiometers, instead have deeper penetration into agricultural and forest cover and a better capability to observe objects in the absence of solar light and in the presence of cloud cover. Microwave observations from space-borne sensors are ideal for soil moisture and vegetation biomass retrieval due to their high sensitivity to water in the observed bodies and their frequent revisit time. For this reason, a variety of microwave sensors, from both active and passive systems, have been observing the Earth’s surface since the late 1970s with this aim. Microwave observations can consequently be used for the retrieval of these land surface parameters and may ultimately be integrated into existing (long-term) data products.
Due to the peculiar characteristics of optical and microwave satellite data, the integration of these two types of measurements enables a comprehensive approach to assessing crop health, water status, and soil conditions, overcoming the constraints of using different sensors individually and thus obtaining more in-depth information on the canopy.
This session will describe approaches based on microwave observations, also integrated with optical data, for estimating soil and vegetation parameters. Both active and passive sensors will be considered, together with newly implemented algorithms and models.
CCT.40: Remote sensing for generating robust datasets for forest health assessment and management planning
L.3: Land Applications — Forest and Vegetation: Application and Modelling
Many forest ecosystems are degrading due to various stressors. Protecting and preserving forest ecosystems is critical to improving biodiversity, regulating water, energy, and nutrient cycles, and delivering vital ecosystem services. Remote sensing has long been central to forest monitoring and management, traditionally using coarse- to medium-resolution satellite data. The latest publicly available data, such as microwave, LiDAR, and higher-resolution optical multispectral and hyperspectral data, have unlocked new opportunities for assessing forest resources in unprecedented detail. These datasets are widely used in periodic afforestation/deforestation monitoring, carbon stock assessment, evidence-based carbon credit verification, and climate change and human disturbance impact studies. Modern machine learning and deep learning models have enabled more sophisticated analyses than parametric models. However, gaps still exist in generating high-quality data for forest health monitoring and in suggesting accurate silviculture practices using publicly available datasets. On the other hand, unmanned aerial vehicle (UAV) and airborne datasets produce high-resolution data but are often constrained in scale and repeat coverage.
The session will bring together data scientists, practitioners, citizens, and policy makers to bridge the gaps between research and practice and to take a deep dive into remote sensing applications for forest resources assessment. We welcome studies on forest cover change in terms of area, density, plant structure, ecosystem productivity and carbon cycle, and biodiversity. This session also seeks submissions on impact assessment due to natural drivers and anthropogenic activities such as climate change, pest and disease attack, human activities (logging, urbanization, and recreation), forest fires, extreme weather events (flood and drought), and landslides. Participants are encouraged to submit novel studies and case studies combining the latest remote sensing and in-situ data for forest health assessment. Submissions on the use of remote sensing for ongoing forest management, including silviculture, ecosystem restoration, or natural resources management, and on changes in policy guidelines, are highly encouraged. In addition, submissions are welcome on early or conceptual frameworks involving large language models (LLMs), cost-benefit analyses of various geospatial approaches for forest management, and mobile applications implementing Citizen Science approaches.
Although the proposed session aligns with ‘L: Land Applications: L.3: Forest and Vegetation: Application and Modelling’, its intent is to highlight the more nuanced theme of forest health assessment, particularly the new Earth observation and UAV-based datasets and available WebGIS applications that provide high-quality, accurate data for operational management (silviculture and restoration) planning and for impact assessment reporting.
This proposal is also within the scope of several other general conference themes, most notably AI and Big Data (A.3: Large-scale Machine Learning); Theory and Techniques (T.9: Image Segmentation, T.11: Object Detection and Recognition, T.14: Change Detection and Temporal Analysis); Land Applications (L.2: Land Cover Dynamics, L.3: Forest and Vegetation: Application and Modelling, L.4: Forest and Vegetation: Biomass and Carbon Cycle); and Data and Society (D.2: Remote Sensing Data and Policy Decisions, D.3: Education and Remote Sensing, D.4: Standards in Remote Sensing, D.5: Geographic Information Systems, and D.6: Citizen and Open Science).
CCT.41: Characterize Forest Carbon Density Dynamics using 3D Remote Sensing Technology
L.4: Land Applications — Forest and Vegetation: Biomass and Carbon Cycle
Climate change mitigation is a global priority. Carbon sinks in forest ecosystems are essential for achieving carbon-neutrality goals. It is vital to quantify sequestration potential and develop scalable methodologies for accurate assessment and reporting. The global carbon market presents both opportunities and challenges for terrestrial ecosystem carbon sinks. The development of market-based mechanisms, policy frameworks, and verification standards requires solid scientific support to ensure credibility and effectiveness. Moreover, advancements in multi-scale carbon sink monitoring technologies, such as 3D remote sensing, AI-driven models, and high-precision field instruments, are transforming the way carbon sequestration is assessed. There is a pressing need for innovative research to refine measurement accuracy, standardize methodologies, and apply findings to practical conservation and policy initiatives.
Forest carbon density dynamics refers to the temporal and spatial variations in the amount of carbon stored in forests. These dynamics are influenced by natural processes (e.g., growth, decay, and disturbances) and human activities (e.g., deforestation, afforestation, and land-use changes). 3D remote sensing technologies (such as optical stereogrammetry, InSAR, PolInSAR, TomoSAR, and LiDAR) play a crucial role in monitoring and managing forest resources, providing a cost-effective, large-scale, and time-efficient approach to estimating carbon density. This session will focus on the integration of advanced remote sensing tools, such as LiDAR and optical sensors, for accurate forest carbon estimation. Example topics include, but are not limited to: carbon density estimation methods, particularly above-ground biomass and 3D structural insights; challenges such as data integration; and innovations such as AI-driven models and spaceborne missions. Furthermore, applications in carbon accounting, climate change studies, and forest conservation will also be discussed. This session will provide a better understanding of the concept of forest carbon density dynamics, the remote sensing methods used to characterize it, and the relevant regulations for forest carbon accounting.
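As a simple illustration of the height-to-carbon chain these methods rely on, the sketch below applies a generic power-law allometry to lidar canopy heights and converts biomass to carbon density with the commonly used ~0.47 carbon fraction; the allometric coefficients are uncalibrated placeholders.

```python
import numpy as np

def agb_from_height(h_m, a=0.8, b=1.9):
    """Generic power-law allometry AGB = a * H**b (Mg/ha); a and b are
    illustrative placeholders that must be calibrated per biome."""
    return a * np.power(h_m, b)

def carbon_density(agb_mg_ha, carbon_fraction=0.47):
    """Convert aboveground biomass to carbon density (MgC/ha)."""
    return carbon_fraction * agb_mg_ha

heights = np.array([5.0, 15.0, 30.0])  # lidar canopy heights (m)
print(np.round(carbon_density(agb_from_height(heights)), 1))
```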
CCT.42: Monitoring Global Urban Areas from Space
L.6: Land Applications — Urban and Built Environment
Satellite-based Earth observations (EO) offer unique capabilities for understanding the complexities of urban environments, enabling consistent, data-rich perspectives across multiple spatial and temporal scales. However, cities present distinct and interconnected analytical challenges due to spatial heterogeneity, spectral mixing, fine-scale variability, vertical structures, mixed materials, and resolution constraints that complicate conventional remote sensing approaches. Addressing these inherent limitations requires innovative approaches explicitly designed to navigate the intricacies of urban form, function, and rapid change. This session invites EO studies that advance how we observe, map, and interpret urban environments, from local to global scales, with particular emphasis on extracting reliable information and translating it into decision-relevant metrics.

We welcome research across diverse application domains, including urban heat dynamics, vegetation and green infrastructure, natural hazard and multi-hazard exposure, environmental health, social vulnerability and disparities, and detection of informal settlements within cities. Submissions should demonstrate how remote sensing can produce actionable and validated metrics that address urban-specific challenges and inform safe environment adaptation, disaster preparedness, equitable resource distribution, and evidence-based urban planning and policy. We encourage work that clarifies uncertainty, assesses transferability and scalability across diverse cities, and demonstrates reproducible workflows that can scale from neighborhood to global analyses.

We particularly encourage contributions that develop or apply innovative EO methods tailored to the complexities of the built urban environment. Approaches may be methodological, integrative by linking multiple sensors or data products, or application-driven. Studies that combine satellite observations with in situ measurements, socioeconomic or administrative data are strongly encouraged. Contributions demonstrating open data, open code, reproducible methods, or partnerships with operational users and community stakeholders are especially valued.
CCT.43: Morphologies of Change: Remote Sensing Urban Transformation
L.6: Land Applications — Urban and Built Environment
Urban morphology provides a quantitative framework for analyzing how the physical form of cities evolves alongside socioeconomic processes such as gentrification, reinvestment, and displacement. Advances in remote sensing now enable systematic observation of building height, density, and typology, offering measurable evidence of capital flows, land use transitions, and spatial inequality. This session explores how multi-sensor Earth observation, spatial modeling, and socio-urban theory can be integrated to interpret these transformations. It invites contributions that employ 3D reconstruction, data fusion, or cross-domain integration to link physical urban change with economic or social dynamics.
Understanding how urban form evolves is essential for decoding the spatial manifestation of socioeconomic change. Remote sensing provides a scalable means of quantifying these processes, revealing how morphological patterns reflect development pressures, governance structures, and demographic shifts. Integrating physical and socioeconomic perspectives strengthens the capacity to detect reinvestment cycles, monitor urban regeneration, and guide equitable planning. The session emphasizes methodological innovation by combining remote sensing analytics, spatial statistics, and urban theory to advance data-driven approaches that connect built form to social structure.
The urban morphology focus aligns directly with the Urban and Built Environment (L.6) theme, which centers on the quantification and interpretation of urban form through remote sensing. Urban morphology, building height estimation, and 3D reconstruction represent core analytical domains within this track, while the integration of socioeconomic data situates the session at the intersection of technical and applied urban research. By complementing the broader AI and Big Data theme through its focus on large-scale modeling, and reinforcing the Sustainable Cities theme through its attention to inequality and resilience, the session engages an audience concerned with how advances in Earth observation can illuminate the physical and social dynamics that shape urban transformation.
CCT.44: Measuring Bare Earth Surface Topography and Vegetation Structure with a Multi-Modal Observing System Approach
L.7: Land Applications — Land Surveying, Geodesy and Topography
Earth’s changing surface structure informs us about natural hazards, vegetation, and water resources. Surface topography and vegetation measurements serve a broad range of science and applications objectives that span the solid earth, cryosphere, biosphere, and hydrosphere disciplines. Surface Topography and Vegetation (STV) is a NASA targeted observable aimed at mapping Earth’s changing surface and overlying vegetation structure by acquiring high-resolution, global height measurements, including bare-surface land topography, ice topography, vegetation structure, and shallow-water bathymetry. STV objectives would be best met by an observation strategy with a disaggregated approach that leverages radar, lidar, and optical measurements from a variety of orbital and sub-orbital assets. The different measurements are suited to different aspects of STV and have differing engineering and observing-architecture optimizations. Radar is the best approach for determining vegetation structure based on density distribution measurements. Lidar measures canopy height and ground returns; it is precise but not well-suited for full global coverage. Stereoimaging from optical measurements provides wide-swath digital surface models, but with lower precision. Fusing the data is best for separating canopy height, the ground, and vegetation structure. Airborne observations provide targeted measurements at higher resolution. The STV ASCENT campaign is planned for summer 2026 over a variety of terrain types and features in California to provide data for maturing an observing system, which will likely consist of orbital and suborbital components. Submissions on simulations, experiments, data fusion, science activities, and technology development in interferometric SAR, lidar, and stereo photogrammetry approaches that will mature STV toward an observing system are invited.
Show/Hide Description
CCT.45: Earth System Science and Applications Based on a Decade of NASA Soil Moisture Active Passive (SMAP) Satellite Mission Science Data Products
L.8: Land Applications — Soils and Soil Moisture
The NASA Soil Moisture Active Passive (SMAP) satellite mission is in its eleventh year of measurement acquisition. SMAP, launched in 2015, is the first NASA Decadal Survey mission and carries a conically scanning polarimetric radiometer based on a 6-m deployable mesh antenna. Onboard radio frequency interference (RFI) mitigation algorithms, utilizing temporal, frequency, and polarization characteristics of the radiometer data, were implemented to reduce the impact of RFI. The global SMAP L-band microwave radiometry has been used to derive surface soil moisture, vegetation optical depth (an indicator of integrated above-ground vegetation water content), and landscape freeze/thaw status estimates over land regions. Over the oceans the data are used to estimate sea surface salinity and surface wind speed. Over the cryosphere the data are used to estimate snow, ice, and firn properties. The soil moisture estimates are the basis for understanding water stress in vegetation and are hence used in carbon cycle studies. They are also used for drought monitoring, water resource management, and forecasting of agricultural productivity. The SMAP radiometer data and the NASA-ISRO Synthetic Aperture Radar (NISAR) data have been used to produce a high-resolution Active-Passive (AP) soil moisture product at two resolutions, 200 m and 1 km, which are essential for applications in the many agricultural regions with small plots globally. To calibrate and validate (Cal/Val) the new high-resolution datasets, the SMAP project has been collaborating internationally with many partners for in situ observations. This session includes presentations on the status of the SMAP observatory, enhancements in the science data processing, new applications of the decade-long SMAP data record, and Cal/Val of the new SMAP-NISAR AP product.
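As background for the RFI detection idea mentioned above, here is a minimal sketch of kurtosis-based RFI flagging, one standard approach in L-band radiometry: thermal noise is Gaussian (kurtosis near 3), so sub-blocks whose kurtosis departs from 3 are flagged. The sub-block size, threshold, and synthetic interference are illustrative, not the operational SMAP settings.

import numpy as np

def rfi_flag(samples, n_sub=100, tol=0.5):
    # Split the sample stream into sub-blocks and flag those whose
    # sample kurtosis departs from the Gaussian value of 3.
    sub = samples.reshape(n_sub, -1)
    dev = sub - sub.mean(axis=1, keepdims=True)
    kurt = (dev ** 4).mean(axis=1) / (dev ** 2).mean(axis=1) ** 2
    return np.abs(kurt - 3.0) > tol

rng = np.random.default_rng(0)
clean = rng.normal(size=1_000_000)
rfi = clean.copy()
rfi[:10_000] += 2.0 * np.sin(0.1 * np.arange(10_000))  # sinusoidal RFI in block 0
print(rfi_flag(clean).any(), rfi_flag(rfi)[0])         # typically: False True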

Show/Hide Description
CCT.46: Soil Moisture: Ground to Space
L.8: Land Applications — Soils and Soil Moisture
Microwave remote sensing has been recognized as the most promising way to monitor surface soil moisture due to its strong penetration capability through the vegetation canopy. Field experiments are essential for developing and validating soil moisture retrieval algorithms, because real-world experiments provide the ground-based "truth" needed to calibrate sensors, evaluate retrieval models, and understand how various environmental factors affect algorithm performance. This ground truth ensures the reliability of remote sensing data for applications such as precision agriculture, drought and flood forecasting, and weather modeling.

The session will provide historical context on the development of microwave remote sensing of soil moisture. The initial ground-based field experiments led to the development of retrieval algorithms and sensor technology. These field experiments, using tower/truck-based sensors and airborne sensors, demonstrated the ability of microwave observations to estimate soil moisture and helped define the optimal frequency (L-band) for remote sensing of soil moisture. The field experiments provide valuable data for the calibration and validation of microwave remote sensing satellites. The development of algorithms and sensor technology led to the launch of ESA's Soil Moisture and Ocean Salinity (SMOS) mission and NASA's Soil Moisture Active Passive (SMAP) mission.

Presentations on soil moisture in support of microwave remote sensing are encouraged. The session will celebrate the contributions of Thomas Jackson and his colleagues to the development of microwave remote sensing of soil moisture that led to the SMOS and SMAP missions.
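To make the retrieval principle above concrete, here is a minimal sketch of a zeroth-order tau-omega forward model and a brute-force single-channel inversion for surface soil moisture from L-band brightness temperature. The dielectric polynomial, parameter values, and inversion scheme are illustrative simplifications, not the SMOS or SMAP operational algorithms.

import numpy as np

def soil_dielectric(sm):
    # Crude polynomial stand-in for a soil dielectric mixing model
    # (operational retrievals use models such as Mironov or Dobson).
    return 3.0 + 25.0 * sm + 40.0 * sm ** 2

def tau_omega_tb(sm, veg_tau, t_surf_k, inc_deg=40.0, omega=0.05):
    # Smooth-surface Fresnel reflectivity at H polarization.
    theta = np.radians(inc_deg)
    eps = soil_dielectric(sm)
    root = np.sqrt(eps - np.sin(theta) ** 2)
    r_h = np.abs((np.cos(theta) - root) / (np.cos(theta) + root)) ** 2
    e_h = 1.0 - r_h                              # soil emissivity
    gamma = np.exp(-veg_tau / np.cos(theta))     # canopy transmissivity
    # Tau-omega: attenuated soil emission plus direct and soil-reflected
    # canopy emission (canopy and soil temperatures assumed equal).
    return t_surf_k * (e_h * gamma
                       + (1.0 - omega) * (1.0 - gamma) * (1.0 + (1.0 - e_h) * gamma))

def retrieve_sm(tb_obs_k, veg_tau, t_surf_k):
    # Brute-force single-channel inversion over a soil moisture grid.
    sm_grid = np.linspace(0.02, 0.50, 200)       # m^3/m^3
    tb_sim = tau_omega_tb(sm_grid, veg_tau, t_surf_k)
    return sm_grid[np.argmin(np.abs(tb_sim - tb_obs_k))]

# Example: moist soil under light vegetation at 290 K physical temperature.
print(retrieve_sm(tb_obs_k=240.0, veg_tau=0.10, t_surf_k=290.0))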
Show/Hide Description
CCT.47: Remote sensing for wetland sustainability
L.9: Land Applications — Wetlands
Wetlands, often referred to as the "kidneys of the Earth," are crucial for maintaining global ecological balance. Not only do they sustain rich biodiversity, but they also play vital roles in water conservation, water purification, flood control, drought mitigation, carbon sequestration, and climate regulation. With the rapid advancement of remote sensing technology, we now possess unprecedented capabilities to monitor and manage these fragile ecosystems. Currently, remote sensing provides robust support for the dynamic monitoring and management of wetlands through high-resolution imaging, drone surveillance, and advanced data processing algorithms. The theme of this session is closely related to IGARSS L.9: Wetlands, SP.1: Geoscience and Remote Sensing in Developing Countries, T.12: Classification and Clustering, and several other technical themes involving new sensors and new methods. However, there is still insufficient focus on utilizing emerging remote sensing technologies to support the sustainability of wetlands.
In recent years, wetland ecosystems have been facing unprecedented challenges. Climate change, intensified human activities, and a lack of public awareness have triggered a sharp decline in global wetland areas and biodiversity. Since 1970, inland and coastal wetlands have decreased by approximately 35%, while wetland species populations have dropped by an alarming 84%. If this trend is not promptly reversed, it will pose a serious threat to the ecological security of wetlands. By implementing multi-resolution, multi-temporal, and multi-modal remote sensing monitoring strategies, we can more accurately assess changes in wetlands and provide scientific insights for their conservation and restoration. This session will focus on the application of these innovative methods, exploring the latest advancements, challenges, and future directions in wetland monitoring and protection. We look forward to participants sharing their research findings, working together to advance the field of wetland remote sensing, and contributing to the sustainable development of wetlands.
Show/Hide Description
CCT.48: Critical mineral remote sensing with advanced multispectral and hyperspectral sensors
L.11: Land Applications — Geology and Geomorphology
Earth observation with imaging spectrometers (hyperspectral imaging) and advanced multispectral sensors offers unprecedented insights into the composition and distribution of mineral resources. These techniques provide detailed spectral information that can reveal hidden or previously overlooked reserves, significantly improving the accuracy of resource assessments. The increased global demand for raw materials and the designation of some elements as critical to national security and economies has led to heightened pressure for evaluation of known mineral resources and efficient screening of prospective areas. This theme focuses on the transformative impact of spectral techniques and earth observation data in revolutionizing geologic mapping, ore-body knowledge, and exploration discoveries. By presenting global case studies, the theme highlights how advanced remote sensing technologies at the drone, airborne, and spaceborne levels are redefining our understanding of mineral deposits. These remote observations are complemented by proximal sensing systems, such as field spectroradiometers, drill core sensors, and lab spectrometers. A key part of mapping geology and specific minerals from multi- and hyperspectral remote sensing imagery is the validation of the products on the ground. Publicly available ground validation data sets (e.g. Cuprite – Nevada, U.S.A.; Rocklea Dome – Western Australia, Australia) provide opportunities for the international research community to compare different remote sensing image processing and calibration methods. However, publicly available ground validation data sets that go beyond simple image classification are scarce. In addition to soliciting global case studies of mapping geology and minerals, this theme invites presentations about workflows and methods for validating geoscience products derived from optical satellite imagery, including ongoing and planned field campaigns and conceptual approaches.
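As a concrete baseline for the mineral-mapping workflows described above, here is a minimal sketch of the spectral angle mapper (SAM), a standard first-pass classifier that assigns each pixel to the library spectrum with the smallest spectral angle. The endmember names and synthetic data below are placeholders, not a real spectral library.

import numpy as np

def spectral_angle(pixels, ref):
    # pixels: (n_pix, n_bands); ref: (n_bands,) library spectrum.
    cos = pixels @ ref / (np.linalg.norm(pixels, axis=1) * np.linalg.norm(ref))
    return np.arccos(np.clip(cos, -1.0, 1.0))

def sam_classify(cube, library):
    # cube: (rows, cols, n_bands); library: {name: spectrum} references.
    pix = cube.reshape(-1, cube.shape[-1])
    angles = np.stack([spectral_angle(pix, s) for s in library.values()], axis=1)
    names = np.array(list(library.keys()))
    return names[np.argmin(angles, axis=1)].reshape(cube.shape[:2])

# Toy example with synthetic spectra and placeholder endmember names.
rng = np.random.default_rng(0)
library = {"kaolinite": rng.random(50), "alunite": rng.random(50)}
cube = rng.random((4, 4, 50))
print(sam_classify(cube, library))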
Show/Hide Description
CCT.49: Artificial intelligence for active and passive remote sensing of precipitation
M.1: Atmosphere Applications — Precipitation and Clouds
Radar and satellites have been the most important remote sensing instruments for observing clouds and precipitation, serving as cornerstones of applications ranging from severe weather warnings to long-term climate monitoring around the world. Both radar and satellite observations are rich in information, but traditional ways of utilizing them have only been able to extract part of that information due to the nature of the analytical tools. After the deployment of new satellite observation channels as well as the dual-polarization radar technique, the measurements became multi-dimensional, and it is more challenging for conventional analytical methodologies to fully resolve the hydrometeorological information. In recent years, artificial intelligence (AI) has been gaining increasing interest in scientific applications across all areas of geoscience and remote sensing, including weather radars and satellites. In fact, AI was used in radar-based precipitation retrievals long before it became mainstream, for example through neural networks for rainfall estimation. AI has proven effective at extracting information from large amounts of multi-dimensional radar and/or satellite data in practical environments. This session is devoted to radar and satellite retrievals of precipitation using artificial intelligence and machine (deep) learning techniques. Topics include but are not limited to passive microwave (PMW), infrared (IR), and weather radar data interpretation, as well as precipitation identification, classification, and quantitative estimation. This session will also feature presentations that combine multi-sensor observations, as well as multiscale precipitation data fusion and machine learning model regularization. The topic also complements the general and special themes of IGARSS 2026.
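For context on the retrieval problem described above, here is a minimal sketch of the classical single-variable baseline that AI methods aim to improve upon: a Marshall-Palmer-type Z-R power law, together with a least-squares fit of its coefficients in log space from matched radar/gauge pairs. The data here are synthetic and all values are illustrative.

import numpy as np

def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
    # Invert Z = a * R**b, with Z in linear units (mm^6 m^-3).
    z_lin = 10.0 ** (dbz / 10.0)
    return (z_lin / a) ** (1.0 / b)

# Fitting a and b from matched radar/gauge pairs is linear regression
# in log space, since dBZ/10 = log10(a) + b * log10(R).
rng = np.random.default_rng(1)
r_true = rng.uniform(0.5, 50.0, 500)                      # rain rate, mm/h
dbz = 10.0 * np.log10(200.0 * r_true ** 1.6) + rng.normal(0.0, 1.0, 500)
design = np.column_stack([np.ones_like(r_true), np.log10(r_true)])
coef, *_ = np.linalg.lstsq(design, dbz / 10.0, rcond=None)
print(10.0 ** coef[0], coef[1])   # should recover roughly 200 and 1.6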
Show/Hide Description
CCT.50: Polarimetric Weather Radars — Tribute to Prof. V. N. Bringi
M.1: Atmosphere Applications — Precipitation and Clouds
This community-contributed theme (CCT) proposal aims to honor the pioneering contributions of Prof. V. N. Bringi (1949-2024) of Colorado State University, whose visionary research fundamentally shaped the field of polarimetric weather radar and its applications to meteorology, hydrology, and atmospheric science. He was a pioneering figure in radar meteorology, particularly known for his groundbreaking work in polarimetric weather radar. His contributions significantly advanced the understanding and applications of weather radar systems, improving precipitation estimation, hydrometeor classification, and storm characterization. A Fellow of the American Meteorological Society (AMS) and a professor of electrical engineering, Bringi remains one of only three electrical engineers to receive the prestigious AMS David and Lucille Atlas Remote Sensing Prize, awarded to him in 2012. His prolific research legacy includes five books, approximately 150 journal articles, more than 12 book chapters, and numerous conference papers. Beyond his immense scientific achievements, he was a model of humility, kindness, and grace, and a mentor, friend, and inspiration to generations. He was also deeply engaged with the IGARSS and GRSS communities, especially in the context of spaceborne precipitation sensing.

This proposed session will bring together global experts, collaborators, and former students to present the latest advances inspired by his work, including dual-polarization theory, quantitative precipitation estimation, hydrometeor classification, and scattering simulations. Beyond celebrating Prof. Bringi’s legacy, the session will serve as a bridge between classical radar polarimetry and emerging frontiers in AI-driven retrievals, phased array weather radars, and spaceborne precipitation sensing, highlighting how his foundational ideas continue to guide modern radar remote sensing. This theme complements the General and Special Themes of IGARSS 2026 by showcasing a unifying and historically significant domain that continues to advance Earth observation and geoscience applications worldwide.

More information about the career and contributions of Prof. Bringi can be found in the following article:
M. Thurai and K. V. Mishra,  "Obituary: Viswanathan Naganathan Bringi (1949-2024)," Bulletin of the American Meteorological Society, vol. 106, no. 5, pp. 320-322, 2025.

The article is also available publicly at his department's LinkedIn post and ResearchGate: 
https://www.linkedin.com/posts/ece-department-csu_vn-bringi-tribute-activity-7356789366519709698-LXS5
https://www.researchgate.net/publication/393057034_Professor_V_N_Bringi
Show/Hide Description
CCT.51: Exploration and Exploitation of New Earth-Observing Satellite Applications for Weather and Climate Science
M.2: Atmosphere Applications — Numerical Weather Prediction and Data Assimilation
The community-contributed session, “Exploration and Exploitation of New Earth-Observing Satellite Applications for Weather and Climate Science”, will highlight innovative approaches to leveraging data from new and, especially, existing satellite instruments. Already, a vast array of Earth-observing sensors in geostationary (GEO) and low Earth orbit (LEO), managed by numerous international organizations, provides extensive datasets that capture Earth's weather and climate dynamics on both regional and global scales.
However, this wealth of information is still ripe for additional exploitation. This session seeks to explore novel ways to enhance current capabilities and develop new applications. These possibilities include using machine learning (ML) algorithms for infrared and microwave remote sensing, data assimilation, and preprocessing, as well as applying geospatial computer vision to various applications. We will also explore the challenges and opportunities in real-time air quality modeling and in infusing satellite data into environmental applications.
Additional focus areas will involve the utilization of data from CubeSat and commercial sensor applications, specifically microwave sensors like those from Tomorrow.io and NASA TROPICS, to enable high-resolution global forecasts. This will also include improved methods for environmental monitoring (such as the water cycle) using satellite data and the integration of agile/rapid development sensors like those in the NEON Series. 
Achieving these advances may require improved integration of sensors across the electromagnetic spectrum as well as methods to meet the demands for more frequent or higher resolution data collection and transmission. Other approaches that are already beginning to be demonstrated include the use of novel machine learning algorithms for enhanced environmental classification and prediction, improved atmospheric composition analysis, and more effective integration with ground-based systems. 
These advancements, alongside emerging innovations in how Earth and the atmosphere are measured and characterized, have the potential to transform and elevate our understanding and capabilities to meet weather and climate science goals. This can lead to more accurate predictions, earlier warnings for localized and extreme weather events, enhanced data quality, and deeper insights into the global environment.
Show/Hide Description
CCT.52: Innovative Sciences and Technologies Developed for Detecting Severe Storms and Their Devastating Impacts on Life and Properties
M.2: Atmosphere Applications — Numerical Weather Prediction and Data Assimilation
As global climate warming accelerates, the frequency, intensity, and spatial scope of severe storm events—such as thunderstorms, torrential rain-induced squalls, and coastal flooding—have surged dramatically in recent decades. These extreme weather phenomena pose severe threats to human lives, critical infrastructure (including transportation networks, power grids, and water supply systems), agricultural production, and ecological balance, underscoring an urgent need for advanced, reliable methods to monitor, predict, and mitigate their impacts. In response to this pressing challenge, this session warmly invites submissions of innovative methodologies and technical breakthroughs focused on leveraging satellite remote sensing data—coupled with numerical weather prediction models and state-of-the-art artificial intelligence (AI) techniques—to enhance severe storm detection and analysis. A key focus of the session is on research that harnesses high temporal and spatial resolution satellite data, a cornerstone of modern meteorological and environmental monitoring. Eligible data sources include, but are not limited to: geostationary meteorological satellites equipped with visible and infrared imagers (e.g., NASA's GOES-R Series, Japan's Himawari-8/9 with the Advanced Himawari Imager (AHI), China's FY-4 Series, and the European Space Agency's (ESA) upcoming Meteosat Third Generation (MTG) satellites), which provide near-real-time, continuous observations of large-scale atmospheric dynamics; polar-orbiting satellite imagers (such as ESA's Sentinel-1/2/3 series and China's Gaofen (GF) satellite constellation), which offer high-detail snapshots of regional storm systems and surface changes; and emerging commercial small satellite constellations, which enable flexible, high-frequency monitoring of targeted areas. Additionally, derived products from both geostationary (GEO) and low-Earth orbit (LEO) satellites—including atmospheric temperature and humidity profiles, cloud microphysical parameters, and soil parameters—are strongly encouraged for application in studies targeting convective initiation (CI) detection (a critical early warning indicator for severe storms), real-time tracking of heavy precipitation, mapping of surface flooding extents, and identification of landslide-prone areas triggered by storm-induced rainfall.
Beyond storm monitoring, the session also welcomes research that addresses the post-storm economic and social impacts. This includes methodologies for quantifying storm-related losses (e.g., damage to crops, industrial facilities, and residential areas) and assessing long-term economic resilience of affected regions. Studies that integrate satellite-derived data (such as land-use change maps, vegetation recovery indices, and infrastructure damage assessments) to evaluate these impacts—and further propose science-based mitigation strategies (e.g., optimizing early warning systems) or post-disaster recovery plans (e.g., prioritizing reconstruction of critical infrastructure)—will be given high priority, as they bridge technical innovation with real-world policy and practice needs. As a dedicated platform for interdisciplinary collaboration, this session brings together experts across fields: meteorologists specializing in atmospheric dynamics, data scientists developing AI and machine learning algorithms for pattern recognition, remote sensing specialists proficient in satellite data processing and product generation, and economists focused on climate risk assessment and sustainable development. By fostering cross-pollination of ideas and integration of cutting-edge technologies with in-depth socioeconomic analysis, the session aims to drive the development of holistic, actionable solutions. Ultimately, these efforts will strengthen global capacity to detect severe storms more accurately, assess their impacts more comprehensively, and respond more effectively—critical steps toward building climate resilience in a rapidly changing world.
Show/Hide Description
CCT.53: Low Earth Orbit (LEO) Satellite Missions and Their Contribution to Earth Science Applications
M.2: Atmosphere Applications — Numerical Weather Prediction and Data Assimilation
Polar-orbiting environmental satellites in Low Earth Orbit (LEO) are critical for global monitoring of Earth and its environment. With three operational satellites in orbit, NOAA's Joint Polar Satellite System is providing resilient and reliable Earth observations for operational meteorology and other mission-critical applications. Two more satellites in the series are under development, and the constellation is expected to provide backbone observations that support both short- and long-term weather forecast models well into the next decade. In collaboration with partner agencies such as NASA, ESA, EUMETSAT, and JAXA, NOAA provides its stakeholders with a large set of global observations from LEO. The synergy between NOAA and its partner missions offers significant benefits to users, such as improved global refresh of observations and complementarity of measurements in multiple regions of the electromagnetic spectrum, enabling detailed monitoring of the Earth. In addition to providing timely and critical observations for extreme weather and disasters, the long historical data record from LEO satellites is also critical for climate monitoring. This session invites international agencies to present the status of their current and future LEO missions that make routine observations to monitor the Earth and its environment for applications, such as NWP, that support decision makers. Subjects covered in this session will include applications and accomplishments from current operational missions as well as plans for new missions from space agencies. Presentations that demonstrate the societal and economic value of LEO observations to applications that are integral to decision making are also of high interest to this session.
Show/Hide Description
CCT.54: RS of Aerosols and Radiative Impacts
M.4: Atmosphere Applications — Aerosols and Atmospheric Chemistry
Aerosols exert a disproportionate, scale-dependent leverage on Earth’s energy budget, hydrological cycle and public health, yet the remote-sensing (RS) tools that quantify their optical depth, single-scattering albedo, absorption Ångström exponent, vertical profile and microphysical properties remain fragmented across dozens of sensors, retrieval teams, cloud masks, quality flags and spatio-temporal sampling schemes. “RS of Aerosols and Radiative Impacts” is therefore convened to close this critical gap by integrating general advances in radiative-transfer theory, instrument calibration, machine-learning bias correction, uncertainty propagation and multi-scale data assimilation with special, stakeholder-driven attention to the radiative-impact metrics that matter most for next-generation climate projections, NWP reanalyses, solar-energy now-casting, monsoon variability and air-quality early-warning systems.

The scientific scope deliberately spans three interconnected pillars: (i) multi-sensor synergy that fuses measurements from UV-to-TIR multispectral imagers, multi-angle polarimeters, ozone monitoring spectrometers, DIAL/HSRL lidars, cloud-profiling radars, microwave sounders, geostationary constellations and targeted CubeSat formations; (ii) joint retrievals that simultaneously solve for aerosol loading, cloud contamination, surface BRDF, NO₂ columns and water-vapor correction within a single Bayesian cost function; and (iii) explicit closure studies that rigorously translate observed optical and microphysical properties into spectrally resolved top-of-atmosphere and surface radiative forcing, photovoltaic energy yield, planetary boundary-layer heating rates, monsoon circulation modulation and photochemical reaction-rate perturbations.
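For reference, the single Bayesian cost function mentioned in pillar (ii) typically takes the standard optimal-estimation form (a generic statement of the approach, not any specific team's processor):

J(x) = [y - F(x)]^T S_e^{-1} [y - F(x)] + [x - x_a]^T S_a^{-1} [x - x_a]

where y stacks the multi-sensor measurements, F is the coupled forward model, x jointly contains the aerosol, cloud, surface BRDF, trace-gas, and water-vapor parameters, x_a and S_a are the prior state and its covariance, and S_e is the measurement-error covariance; minimizing J yields the maximum a posteriori retrieval under Gaussian assumptions.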

The underlying rationale for complementing general algorithmic themes with special impact-oriented themes is that generic breakthroughs—such as neural-network forward emulators, multi-pixel inversion regularization, multi-angle polarimetric particle-typing or GPU-accelerated radiative-transfer solvers—only realize tangible societal value when they are stress-tested against the specific radiative end-points that policy makers, IPCC AR7 chapters, WMO GAW guidelines and downstream energy, health and agricultural models actually consume; conversely, impact assessments cannot be deemed credible without fully traceable, community-vetted, uncertainty-budgeted RS products that satisfy FAIR data principles. By deliberately pairing methodological innovation with impact-oriented validation campaigns that integrate airborne in-situ payloads, AERONET-DRAGON supersites, high-resolution large-eddy simulations, chemical transport models and citizen-science sun-photometers, this session accelerates the urgently needed transition from raw pixels and level-2 swaths to actionable radiative metrics that ultimately inform evidence-based climate-mitigation pathways, renewable-energy grid management and real-time health-warning systems.
Show/Hide Description
CCT.55: Ground Penetrating and Sounder radars for planetary exploration
P.3: Remote Sensing of Planetary and other Celestial Bodies — Other Celestial Bodies
The objective of this session is deliberately broad, aiming to foster a comprehensive discussion on recent advancements in planetary exploration achieved through ground-penetrating and sounder radar instruments onboard planetary missions. The session seeks to highlight innovative research aimed at the detection and characterization of new geological and geomorphological features across planetary bodies, including ice deposits, volcanic terrains, impact craters, and sedimentary formations. Ground-penetrating and sounder radars provide unique insights into both surface and subsurface structures, enabling the study of planetary interiors, stratigraphy, and compositional variations that are otherwise inaccessible through conventional observations.

Contributions can address innovative planetary radar techniques for surface and subsurface exploration, data analysis and interpretation methods, as well as novel concepts, technologies, and GPR architectures for future exploration. Studies analyzing radar datasets from different missions, simulations of radar signal interactions with planetary surfaces, and proposals for new radar instrumentation are particularly welcome. Interdisciplinary studies combining GPR and sounder data with other remote sensing or in-situ measurements are also encouraged, as they can provide a more complete understanding of planetary processes.

The session also enables discussions on current challenges in planetary radar data, such as the detection and interpretation of complex subsurface structures, surface and subsurface roughness investigation, clutter mitigation, and noise reduction. By bringing together researchers working on diverse aspects of planetary radar science, the session aims to foster collaboration across disciplines, inspire innovative approaches for upcoming missions, and identify emerging technological trends. Ultimately, this session seeks to contribute to the advancement of planetary science by improving our understanding of planetary surfaces and subsurfaces, guiding the selection of exploration targets, and supporting the design of future radar-equipped planetary missions.

Show/Hide Description
CCT.56: ALOS Series Missions, Cal/Val, and Applications (Part 1)
S.1: Sensors — Spaceborne SAR Missions
This CCT proposes to discuss the overview, research achievements, future plans, and international cooperation of the Advanced Land Observing Satellite (ALOS), Japan's flagship Earth observation satellite series mission. 
The ALOS series of the Japan Aerospace Exploration Agency (JAXA) has been operated continuously since 2006; currently, ALOS-2 and ALOS-4, carrying L-band SARs, have been in operation since 2014 and 2024, respectively, performing precise Earth surface observations in various application fields. The next-generation ALOS optical mission will carry a LiDAR instrument to measure terrain and forest height precisely and to generate a precise digital surface model (DSM) and digital terrain model (DTM) in combination with high-resolution optical satellites. To achieve mission objectives, calibration and validation (Cal/Val) is essential and encourages science and application development. International collaborations are also important for conducting these activities. Based on these results, we will also discuss the prospects for future ALOS series missions, including the continuity of the missions, the importance of international cooperation, and the advantages and disadvantages that should be reflected in future missions.
Show/Hide Description
CCT.57: BIOMASS: Results, Methods, and Lessons Learned One Year After Launch
S.1: Sensors — Spaceborne SAR Missions
The BIOMASS Mission, successfully launched by the European Space Agency on April 29th, 2025, aims to deliver unprecedented data on the Earth’s forest biomes. Its primary objective is to generate accurate global maps of forest Above Ground Biomass Density (AGBD) every nine months. Operating as a Synthetic Aperture Radar (SAR) mission in Low Earth Orbit (LEO) at a nominal altitude of 666 km, BIOMASS incorporates several groundbreaking features. Most notably, it carries the lowest-frequency SAR ever deployed in space (435 MHz, P-band), enabling deep wave penetration through forest canopies and providing insights into the complete vertical structure of vegetation.
This unique capability is complemented by systematic, fully polarimetric and interferometric acquisitions over the same areas with a three-day revisit time, allowing detailed characterization of forest structure through Polarimetric Interferometry and Tomography. Interferometry is also at the heart of the Ground Cancellation algorithm for AGBD retrieval, designed to enhance sensitivity by reducing ground scattering. Beyond forestry, BIOMASS supports several secondary objectives, including terrain mapping beneath vegetation, ice sheet tomography, ice flow monitoring, subsurface exploration in arid regions, and ionospheric studies.
As the first P-band radar mission in space, BIOMASS is expected to play a fundamental role in advancing radio science at low frequencies, with implications for ionospheric propagation, interaction with the vegetation, and sub-surface penetration in arid terrain and ice bodies.
At slightly over one year from launch, this community-contributed session will present key scientific findings to date, highlight encountered technological and processing challenges, and discuss the methodologies developed to fully exploit BIOMASS data.
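As background for the Ground Cancellation step mentioned above, here is a minimal sketch of the interferometric idea (illustrative variable names and a toy scene, not the ESA BIOMASS processor): with two coregistered acquisitions and the vertical wavenumber kz, steering one image's phase to the ground elevation and subtracting suppresses ground scattering while retaining elevated (canopy) scattering.

import numpy as np

def ground_cancel(slc1, slc2, kz, z_ground):
    # slc1, slc2: coregistered complex SLC images; kz: vertical
    # wavenumber (rad/m); z_ground: ground elevation (m), e.g. from a DTM.
    return slc1 - slc2 * np.exp(1j * kz * z_ground)

# Toy demo: a point scatterer at height z appears in image 2 with an
# interferometric phase exp(-1j * kz * z) relative to image 1.
kz = 0.1                                   # rad/m
for z in (0.0, 20.0):                      # bare ground vs. 20 m canopy
    s1, s2 = 1.0 + 0j, np.exp(-1j * kz * z)
    print(z, np.abs(ground_cancel(s1, s2, kz, 0.0)))
# The ground return (z = 0) cancels exactly; the canopy return survives.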
Show/Hide Description
CCT.58: WSF-M Mission Status and Calibration I
S.2: Sensors — Spaceborne Passive Microwave Missions
This is Part I of the two-part WSF-M mission status and calibration session.

The Weather System Follow-on Microwave (WSF-M) satellite is the next generation of the Department of Defense (DoD) operational environmental satellite system. The first WSF-M space vehicle (SV-1) was successfully launched on April 11, 2024, into a sun-synchronous low Earth orbit (LEO). WSF-M has two payloads, a polarimetric Microwave Imager (MWI) and an Energetic Charged Particle (ECP) sensor. The MWI has a total of 17 channels at frequencies of 10, 18, 23, 36, and 89 GHz, of which the 10, 18, and 36 GHz channels are fully polarimetric, and it utilizes digital signal processing receivers. WSF-M has been performing very well on orbit since launch. Two calibration maneuver campaigns were conducted, one in April and the other in October of 2024. Both geolocation and radiometric performance are well within the specified requirements. This session will provide mission status and discuss calibration results as SV-1 completes two years in orbit by April 2026. Further, the session topics will also cover new radio-frequency interference (RFI) mitigation technology being considered for the WSF-M second space vehicle (SV-2), scheduled to be launched no earlier than December 2026.
The WSF-M predecessor, the Defense Meteorological Satellite Program (DMSP), has been providing the geoscience and remote sensing community with microwave imager data for more than 30 years. The data from the DMSP passive microwave radiometers have been used for both weather forecasting and climate studies. WSF-M will continue to produce high-resolution microwave imagery and generate essential weather data products such as ocean surface wind speed and direction, snow depth, sea ice characterization, soil moisture, tropical cyclone and other weather imagery, etc. We have received many inquiries about the WSF-M mission and MWI sensor data sets. This session will provide information to help the geoscience and remote sensing community understand and use the WSF-M MWI data, which will in turn yield better weather products for final data users.
Show/Hide Description
CCT.59: The CYGNSS Mission
S.3: Sensors — Spaceborne GNSS-R Missions
Scope: This CCT session is open to all novel research papers involving the NASA CYGNSS mission. This includes topics related to engineering calibration, geophysical retrieval algorithm development and performance assessment, scientific applications of the data products, and the extension of the engineering and scientific methods developed for CYGNSS to one or more of the existing and planned GNSS-R follow-on missions.

Importance: CYGNSS has been in orbit for ~9 years, and in that time it has enabled or supported the development of many GNSS-R remote sensing methods and applications. Numerous follow-on GNSS-R missions are now either flying or in development, by both government agencies (e.g., China, India, Taiwan, ESA, Spain, Brazil) and private companies (e.g., Spire, Muon, Tianmu). NASA is likely to require a deorbiting of CYGNSS within the next year due to its recent budget reprioritizations. This CCT session is intended to be a capstone gathering of the research groups who have made significant contributions to the CYGNSS mission and to the development and expansion of GNSS-R remote sensing more broadly.

Rationale for complementing General and Special themes: Other GNSS-R themed sessions at IGARSS typically focus on specific technological or algorithmic topics or are collections of papers covering the numerous new GNSS-R spaceborne missions. This CCT session is intended to complement those other sessions with a particular focus on, and a deep dive into, the contributions made by the pioneering first dedicated GNSS-R science mission. We also hope and expect that the papers presented at this session will serve as the basis for a future special section or special issue of one of the GRSS refereed journals that would similarly focus on the technological, engineering, scientific, and application-oriented contributions made by the CYGNSS mission.

Related General or Special Themes: 
T.23: Modeling for Signals of Opportunity (e.g. GNSS-R)
L.8: Soils and Soil Moisture
L.10: Inland Waters
O.2: Ocean Surface Winds and Currents
S.3: Spaceborne GNSS-R Missions
Show/Hide Description
CCT.60: Advancing global-scale high resolution imaging spectroscopy in preparation for CHIME
S.4: Sensors — Spaceborne Hyperspectral Missions
Scope:
The growing availability of high-resolution imaging spectroscopy products from missions such as EnMAP, EMIT, HISUI, PRISMA, and DESIS is enabling a wide spectrum of novel scientific products and applications. The Copernicus Hyperspectral Imaging Mission for the Environment (CHIME) will provide routine observations over land and the coastal zone through the Copernicus Programme in support of EU and global policies for the management of natural resources, assets, and benefits. This visible-to-shortwave-infrared spectroscopy-based observatory will make unique and major contributions to fulfilling user requirements in the domains of environmental monitoring and management, with a focus on soil productivity, sustainable raw materials exploitation, sustainable use of nutrients and water in agriculture, and food security. A number of secondary applications will benefit from the routine provision of CHIME products, e.g., biodiversity, coastal and inland water quality monitoring, and methane and carbon dioxide detection from point sources. In this session we welcome contributions from the scientific and user communities encompassing CHIME preparatory activities, including L2 product development, calibration/validation, and downstream product and application prototyping. We also welcome contributions building on current spaceborne imaging spectroscopy missions and anticipated missions like SBG-VSWIR.

Importance: 
CHIME is expected to become a reference operational mission for medium-to-high spatial resolution imaging spectroscopy from space. Global user communities experienced in exploiting multispectral optical time series for operational applications, as well as the imaging spectroscopy science and application community currently relying on data from PRISMA, EnMAP, EMIT, and HISUI, will greatly benefit from the information presented in this session to prepare for the future exploitation of CHIME operational products.

Rationale for complementing IGARSS General/Special Themes:
This session naturally intersects T.15 “Hyperspectral Data Processing and Analysis” and S.4 “Spaceborne Hyperspectral Missions”.
Show/Hide Description
CCT.61: Calibration and Validation of Imaging Spectrometer Measurements and Data Products
S.4: Sensors — Spaceborne Hyperspectral Missions
Earth observation is experiencing a growth period for spaceborne imaging spectrometers (hyperspectral imagers). National space agencies and private companies are providing reflectance products from these new sensors at a variety of spectral and spatial resolutions, for example NASA's EMIT (Earth Surface Mineral Dust Source Investigation) at 60 m ground sampling distance (GSD) and DLR's EnMAP (Environmental Mapping and Analysis Program) at 30 m GSD. With proper spectral, radiometric, and spatial calibration and post-launch evaluation, data from multiple sources might be used collectively to support scientific studies and extend applications over larger areas and at greater temporal frequency. This session will provide a forum for commercial, government, and academic organizations and other groups to share their mission-focused efforts and results in advancing imaging spectrometer calibration and validation (cal/val) of products. This session focuses on cal/val of radiance and reflectance products, which are the basic geophysical data and the level of data interpreted for surface material composition and chemistry, respectively. This session also provides an opportunity to discuss international collaborations and multi-mission data comparison. Leveraging of hyperspectral data with multispectral data is a topic of interest, including presentations on advanced multispectral missions, such as Landsat Next. Airborne surveys and field campaigns often collect fine-scale data for calibration and validation of measurements and products; thus, contributions which show applications of airborne and field data to the cal/val of drone-based, airborne, and spaceborne imaging spectrometers are sought. Discussion of engagement of the IEEE GRSS Geoscience Spaceborne Imaging Spectroscopy (GSIS) technical committee and working groups is highly encouraged.
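One common step in the cross-mission comparisons described above is simulating a multispectral band from imaging spectrometer reflectance. Here is a minimal sketch, assuming a shared uniform wavelength grid and a Gaussian spectral response function (SRF); the band center, width, and spectrum are illustrative.

import numpy as np

def band_equivalent_reflectance(refl, srf):
    # SRF-weighted band average on a common wavelength grid
    # (uniform spacing assumed, so sums replace integrals).
    return np.sum(srf * refl) / np.sum(srf)

wl = np.arange(400.0, 2501.0, 10.0)               # wavelength grid, nm
refl = 0.2 + 0.1 * np.sin(wl / 300.0)             # synthetic spectrum
srf = np.exp(-0.5 * ((wl - 865.0) / 15.0) ** 2)   # Gaussian NIR band SRF
print(band_equivalent_reflectance(refl, srf))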
Show/Hide Description
CCT.62: Space-based Imaging Spectrometers: Status of Current and Planned Missions
S.4: Sensors — Spaceborne Hyperspectral Missions
This session provides the remote sensing community an update on government, commercial, and other space-based imaging spectrometers (hyperspectral imagers). This session creates an opportunity to inform the community on current missions providing data, for example, NASA's EMIT (Earth Surface Mineral Dust Source Investigation), the German Aerospace Center's DESIS (DLR Earth Sensing Imaging Spectrometer) and EnMAP (Environmental Mapping and Analysis Program), ASI's PRISMA (PRecursore IperSpettrale della Missione Applicativa), JAXA's HISUI (Hyperspectral Imager Suite), and China's GaoFen-5 AHSI (Advanced Hyperspectral Imager). The session is also a venue to highlight newly launched sensors designed to operate alone on small satellites, for example, Planet's Tanager, on larger independent platforms, on the space station, or as part of a constellation, for example Orbital Sidekick's GHOSt-1 and -2 (Global Hyperspectral Observation Satellite). This venue also facilitates communication of the latest developments on forthcoming missions, such as ESA's CHIME (Copernicus Hyperspectral Imaging Mission for the Environment) and NASA's SBG (Surface Biology and Geology). As data usage and societal impact are key goals of Earth observation missions, this forum also invites presentations on data systems, tools for data visualization and analysis, user needs and experiences, outreach to underserved communities, and co-development of research and applications. As the pace of global change has increased the demand for science-quality data to address new challenges, this session also seeks presentations on novel and emerging science and applications which have exceeded core objectives of imaging spectrometer missions. Discussions of leveraging national missions of varied measurement modalities, international cooperation, and partnerships between government, commercial, and other groups are encouraged.
Show/Hide Description
CCT.63: Space Lidar: Missions, Technologies, and Observations
S.5: Sensors — Spaceborne LIDAR Missions
This theme is organized by the Working Group of Active Optical and Lidar, IEEE GRSS Technical Committee on Instrumentation and Future Technology (IFT). 
 
Contributions are solicited from international space agencies, industry, and academia on enabling technology developments, space missions and observations.
 
The theme focuses on research and developments in an important topic in active optical remote sensing: Space Lidar. Lidar's unique capability to observe a diverse variety of geophysical phenomena from orbit around the Earth and planets has stimulated new areas of remote sensing research that now attract the attention of scientists and engineers worldwide. With a number of instruments already operational or pending launch within the coming years, many of the original technological issues have been resolved; still, the long-term reliability of key active components and survival in the harsh space environment require additional efforts and investments. Some of the key topics for this session are:
 
• Space Agencies (NASA, ESA, JAXA, CNES etc.) on-orbit missions, future missions, technical challenges, observations, and science products
• Continuation of work in the domain of long-lived / high-power UV/visible/infrared lasers and optics, especially contamination and optical damage
• Research to improve the reliability of lasers/diode lasers and high-power optics operated in vacuum 
• Space-qualification of tunable lasers and optics to support trace gas lidars operating in the 1-5 µm region 
• Space-qualification of higher-efficiency lasers such as fiber lasers and amplifiers. Radiation hardening is an area of particular concern
• General power scaling of space-qualified lasers with a focus on improved efficiency and thermal tolerances 
• Improved high-gain, low-dark-noise, and low-NEP space-qualified/qualifiable arrays/detectors at all wavelengths, in particular in the IR
 
This session will focus on science and applications addressed by space-based lidar, as well as on techniques and supporting technology. Our goal is for this session to provide a stimulating forum where members of the international lidar and related technology communities can present and discuss results, trends, and future directions.
 
Show/Hide Description
CCT.64: Advancing Real Time Environmental Monitoring through Multi-Platform Earth Observation and Sensor Integration
S.6: Sensors — Multi-platform Earth Observation
With growing rates of earthquakes, landslides, and rapid changes in the atmosphere, along with the existing limitations of common methodologies, the need for real-time and accurate monitoring has increased. Understanding how to handle the problems caused by these events is critical for precise prediction and assessment. The advancement of multi-platform Earth observation technologies, along with the collaborative use of multiple sources of Earth observation data, including Synthetic Aperture Radar (SAR), optical imagery, LiDAR, Global Navigation Satellite System Reflectometry (GNSS-R), and signals of opportunity, has the potential to revolutionize climate and land surface monitoring, disaster management, and Earth system modeling in the future.

SAR provides all-weather, day-and-night, high-penetration imaging critical for detecting changes in land surface structure and moisture conditions regardless of atmospheric conditions. Optical sensors contribute rich spectral information essential for vegetation analysis, land cover classification, and the detection of subtle surface changes, while LiDAR offers precise three-dimensional structural mapping that enhances terrain and vegetation characterization. GNSS-R adds a unique dimension by using reflected GNSS signals to estimate surface properties such as soil moisture and flood extent, with high temporal coverage and accuracy even under cloud cover and other challenging conditions. A multi-sensor fusion approach can address the limitations present when working with a single source of data, because cross-leveraged spatial, temporal, and spectral information improves understanding of anomalies and surface dynamics and strengthens early warning capabilities.

Climate-induced hazards are occurring with increasing frequency and intensity, so robust, real-time monitoring schemes are necessary. This integration addresses inherent limitations of individual sensors, such as SAR's lower spectral resolution, optical sensors' weather dependency, and GNSS-R's spatial resolution challenges, and fills critical data gaps, promoting more robust environmental assessments. Incorporating Signals of Opportunity (SoOp) such as GNSS-R and ground-based GNSS data into Earth system models improves land–atmosphere interaction modeling, enhancing predictions of soil moisture, evapotranspiration, droughts, and floods. Open-access data from missions like NASA's CYGNSS and ESA's Sentinel offer frequent, high-resolution observations that boost monitoring accuracy and enable near-real-time applications. This multi-platform, multi-sensor approach offers scalable, cost-effective solutions for sustainable land management, ecosystem analysis, and disaster assessment, capturing both direct and indirect impacts.
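As one concrete example of the GNSS-R observables discussed above, here is a minimal sketch of a coherent surface-reflectivity estimate of the kind used for soil moisture and inundation mapping. Antenna gains, polarization, and atmospheric terms are omitted for brevity, and the function name, geometry, and numbers are illustrative assumptions.

import numpy as np

def reflectivity_db(p_refl_w, p_direct_w, r_tx_m, r_rx_m, r_dir_m):
    # Coherent-reflection link budget (gains omitted):
    #   P_refl ∝ Γ / (r_tx + r_rx)^2,  P_direct ∝ 1 / r_dir^2,
    # so the range-corrected power ratio isolates the reflectivity Γ.
    gamma = (p_refl_w / p_direct_w) * ((r_tx_m + r_rx_m) ** 2 / r_dir_m ** 2)
    return 10.0 * np.log10(gamma)

# Example: reflected power 20 dB below direct, LEO receiver geometry.
print(reflectivity_db(p_refl_w=1e-18, p_direct_w=1e-16,
                      r_tx_m=2.2e7, r_rx_m=6.0e5, r_dir_m=2.1e7))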
Show/Hide Description
CCT.65: Multi-Instrument Applications for Environmental Monitoring and Sustainability
S.6: Sensors — Multi-platform Earth Observation
This session highlights recent advances in multi-instrument remote sensing that enhance our ability to monitor environmental change and promote global sustainability, including progress toward the United Nations Sustainable Development Goals (SDGs). By integrating observations from satellites, airborne platforms, and ground-based networks equipped with optical, thermal, radar, microwave, and LiDAR sensors, researchers can now study Earth’s atmosphere, land, oceans, and cryosphere with unprecedented accuracy. These comprehensive datasets enable detailed assessments of greenhouse gas emissions, surface temperature variability, vegetation dynamics, soil moisture, water resources, and extreme weather events, providing essential information for climate adaptation, mitigation, and ecosystem management.

A central focus of the session is connecting environmental monitoring to human well-being through the Water-Energy-Food-Health (WEFH) Nexus. Participants will explore how integrated observations support strategies to improve water, energy, food, and health security, strengthen regional resilience, and accelerate progress toward SDG implementation. This approach emphasizes the interdependence of natural systems and societal needs, highlighting the role of advanced monitoring in guiding sustainable development and climate-resilient decision-making.

Despite these technological advances, significant challenges remain in combining heterogeneous datasets with differing spatial resolutions, temporal coverage, and calibration standards. Overcoming these obstacles requires robust cross-calibration, ground validation, and the application of advanced algorithms powered by artificial intelligence and high-performance computing. Barriers such as high costs, restrictive data policies, and limited technical capacity continue to constrain participation in some regions, particularly in underrepresented areas.

The session invites contributions on innovative data-fusion approaches, cross-sensor analyses, and practical applications in monitoring ecosystems, carbon and water cycles, natural hazards, and climate processes. Submissions from graduate students, early-career researchers, and scientists from underrepresented regions are especially encouraged to foster global collaboration and maximize both scientific and societal impact.
Show/Hide Description
CCT.66: Remote sensing for Oil & Gas Exploration, Production and Environmental Monitoring
S.6: Sensors — Multi-platform Earth Observation
Remote sensing covers a wide range of applications in the exploration and production of Oil & Gas and in the environmental monitoring of production, storage, and transportation. Satellite constellations generate a large volume of data and cover the entire Earth's surface: terabytes of new multispectral and hyperspectral data are freely available every single day. In addition, the spatial and spectral resolution of sensors keeps improving, reaching cm-scale resolution for optical images and submetric resolution for radar images. This opens new opportunities for surface characterization, either at sea or on land. In onshore areas, remote sensing tools have been used for structural and mineralogical mapping, surface deformation quantification, the detection of modifications of vegetation, and the detection of soil alteration associated with hydrocarbon seeps. In offshore areas, remote sensing tools are used to detect oil films and to quantify/characterize the oil type. The identification of oil slicks has strong implications for hydrocarbon exploration, by identifying oil seeps, but also for emergency response during man-made pollution. Other systems, such as lidar, may be used to measure digital terrain models (DTMs, i.e., ground topography) and to detect gases, also with exploration and environmental applications. The objectives of this session are to gather the academic and industrial communities working with remote sensing techniques in mineral and petroleum research and to provide an overview of a wide range of applications. The session will cover the following applications:
ONSHORE APPLICATIONS:
- Utilization of hyperspectral and multispectral imaging for geological and mineral mapping,
- Detection of surface liquid and gaseous hydrocarbons, along with hydrocarbon-induced soil alterations,
- Monitoring of mineral and oil & gas field development operations,
- Surveillance of gas storage facilities (CO2, CH4, H2) and associated plant infrastructure,
- Application of interferometric techniques for quantifying surface deformation,
- Recent advancements in Big Data analytics, particularly in deep learning technologies.
OFFSHORE APPLICATIONS:
- Detection of surface hydrocarbons,
- Identification of natural oil seeps for hydrocarbon exploration,
- Monitoring of anthropogenic oil spills to support emergency response efforts,
- Remote sensing applications in subsurface mapping and monitoring,
- Recent progress in Big Data analysis, focusing on deep learning for the recognition of oil slick patterns.
Show/Hide Description
CCT.67: Wildfire and Earth Observation: Remote Sensing for Ecosystem Resilience and Hazard Adaptation with Artificial Intelligence (AI)
S.6: Sensors — Multi-platform Earth Observation
Scope & Importance
Wildfires are increasing in frequency and intensity worldwide, from boreal forests and Mediterranean shrublands to tropical savannas. These events alter ecosystems, degrade air quality, damage infrastructure, and exacerbate climate risks. Remote sensing from UAVs, airborne platforms, and Earth observation satellites provides unique capabilities to detect, monitor, and model wildfires across spatial and temporal scales.
Recent advances, including AI-powered satellite data analysis, hyperspectral imaging, LiDAR, satellite data embedding, and UAV-based thermal monitoring, are transforming our ability to understand and respond to wildfire events. By integrating Earth observation with fire science, ecology, atmospheric chemistry, and geoinformatics, this session highlights how cutting-edge technologies advance scientific understanding while supporting adaptation, mitigation, and risk management strategies relevant to policymakers and practitioners.

Rationale
Wildfire-related research at IGARSS is often distributed across multiple sessions, which limits the cross-pollination of ideas and technical advances. This Community-Contributed Theme provides a unifying framework to consolidate developments in AI-enhanced wildfire monitoring, multi-platform remote sensing (including UAVs, LiDAR, hyperspectral, and satellite data), and ecosystem resilience assessment.
By intentionally bringing together experts in remote sensing, ecology, atmospheric science, and decision-support applications, this theme fosters interdisciplinary exchange and bridges the gap between technology development and real-world wildfire resilience strategies. The unified approach showcases emerging technologies, cross-sector collaborations, and ecological insights essential for next-generation wildfire prediction, mitigation, and adaptation at local to global scales.
Show/Hide Description
CCT.68: UAV/mobile-mapping SAR systems, methods, and applications
S.13: Sensors — SAR Instruments
SAR systems on UAVs, high-altitude long-endurance (HALE) unmanned aircraft, and mobile mapping platforms, such as automobiles, have gained increasing attention within the geoscience community. Small SAR systems deployed on such platforms offer complementary properties in terms of revisit time, operational flexibility, and observation capabilities compared to spaceborne and conventional airborne SAR systems. On the other hand, compared to stationary terrestrial radar/SAR systems, the increased synthetic aperture size of UAV/mobile mapping SAR systems allows for better spatial cross-range resolution, also for quasi-terrestrial observation geometries.

These complementary properties of UAV-based and mobile mapping SAR systems open a large field of potential applications, some of which are addressed within the scope of this session, including high-resolution DInSAR-based measurements of surface displacements, monitoring of vegetation and agricultural crops, and change detection.

From a system point of view, these agile SAR platforms require not only new compact SAR system designs, but also compact and innovative high-performance navigation using smaller INS/GNSS systems, in some cases combined with vision systems, as well as adequate SAR imaging algorithms and DInSAR processing chains adapted to the potentially non-linear sensor trajectories and partial aperture synthesis common to UAV/mobile mapping SAR systems and applications.
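As an illustration of imaging under such non-linear trajectories, here is a minimal sketch of time-domain back-projection, a standard focusing approach that makes no straight-line aperture assumption. The nearest-neighbour range interpolation and the toy point-target check are deliberate simplifications (no antenna pattern, motion errors, or windowing), not a production processor.

import numpy as np

def backproject(rc_data, traj, r0, dr, grid_xyz, wavelength):
    # rc_data:  (n_pulses, n_bins) range-compressed complex echoes
    # traj:     (n_pulses, 3) antenna phase-center positions (m),
    #           arbitrary (non-linear) trajectory
    # r0, dr:   range of the first bin and bin spacing (m)
    # grid_xyz: (n_pix, 3) image pixel positions (m)
    img = np.zeros(grid_xyz.shape[0], dtype=complex)
    for p in range(traj.shape[0]):
        r = np.linalg.norm(grid_xyz - traj[p], axis=1)   # pulse-to-pixel range
        bins = np.clip(np.rint((r - r0) / dr).astype(int),
                       0, rc_data.shape[1] - 1)
        # Nearest-neighbour range lookup plus two-way phase compensation.
        img += rc_data[p, bins] * np.exp(4j * np.pi * r / wavelength)
    return img

# Toy check: one point target, ideal range-compressed pulses, bent track.
wl, dr, r0 = 0.03, 0.5, 900.0
traj = np.column_stack([np.linspace(-50, 50, 201),
                        5.0 * np.sin(np.linspace(0, 6, 201)),
                        np.full(201, 500.0)])
target = np.array([[0.0, 800.0, 0.0]])
rng_t = np.linalg.norm(traj - target, axis=1)
data = np.zeros((201, 400), dtype=complex)
data[np.arange(201), np.rint((rng_t - r0) / dr).astype(int)] = \
    np.exp(-4j * np.pi * rng_t / wl)
print(np.abs(backproject(data, traj, r0, dr, target, wl)))  # ~201 (coherent sum)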

UAV-borne SAR systems enable experimental formation flying, making them an important tool for developing and testing bistatic and multistatic SAR mission concepts, including synchronization for future spaceborne SAR missions.

This community-contributed session aims to give insight into recent state-of-the-art UAV- and mobile-mapping-based SAR systems, methods, and applications developed with a focus on geoscience applications.
After our successful invited/community-contributed sessions on this topic at IGARSS 2021-2025, we would like to continue this thread, providing an overview of the latest technological developments with small SAR systems on UAV, HALE/HAPS, and mobile-mapping platforms.

The session typically covers a number of novel systems and UAV/mobile-mapping platforms of different sizes and types (fixed-wing and VTOL UAVs, cars), and a range of methods and applications, such as repeat-pass differential SAR interferometry for displacement measurements, change detection, and tomographic configurations.

We believe, as confirmed by the consistently large audiences at our past sessions, that our session topic, UAV/mobile-mapping SAR systems, methods, and applications, is of very high interest to the geoscience and remote sensing community and will remain so in the future.
CCT.69: Evaluating Microwave hyper-spectral remote sensing capabilities (HyMS)
S.15: Sensors — Microwave Radiometer Instruments
Microwave hyperspectral remote sensing technologies have shown, in theory, great promise for a number of applications important to NOAA, including enhanced temperature and moisture sounding, extension of soundings into active meteorological events (clouds, precipitation, etc.), and additional information for better sensing of surface and hydrometeor parameters. These enhancements in our ability to measure the environment would directly improve our ability to nowcast and forecast weather, particularly during extreme conditions. They also stand to advance fundamental understanding of the spectroscopic characteristics of the microwave spectrum, which in turn could yield wide-ranging benefits through improved radiative transfer modeling and a better understanding of the optical properties (scattering and absorption) of cloud, rain, snow, and ice. These all-weather enhancements are consequently expected to improve our ability to calibrate and assimilate other microwave sensors that are critical for increasing the skill of Numerical Weather Prediction (NWP) systems. Recent advances in sensor component technology have made high spectral sampling of the microwave region a realistic and cost-effective possibility, even for smallsat-compatible sensors.

In order to reduce risk and determine the effectiveness of Hyperspectral Microwave Sounding (HyMS) in advancing operational performance, NOAA’s National Environmental Satellite, Data, and Information Service (NESDIS) Joint Venture Partnerships initiated a BAA in 2022, along with other recent projects, to leverage previous developments and explore the possibilities of a potential future state. We anticipated that the funded projects would assemble and deploy sensors in orbital or sub-orbital platform demonstrations to assess, quantitatively and qualitatively, the value of the data to NOAA’s missions. Each project was expected to collect data for a period sufficient to allow a meaningful assessment of the quality and potential usefulness of the data, in conjunction with the NESDIS STAR Data Evaluation Exploitation and Value Assessment (DEEVA) Team. The final assessment involved intercomparison with standard measurement equipment (e.g., in situ radiosondes) serving as independent sources of measurements. In addition to the BAA’s exploratory results, session presentations should discuss any new developments related to HyMS technologies.
CCT.70: Observations and outcomes of the Advanced Microwave Scanning Radiometer (AMSR) series
S.15: Sensors — Microwave Radiometer Instruments
A space-based microwave imager with low-to-high frequency channels is a powerful tool for monitoring water-related variables inside clouds, and at the Earth’s surface through clouds, on a daily basis. The Advanced Microwave Scanning Radiometers (AMSR) are a series of Japanese microwave imagers with unique capabilities: high spatial resolution from a large (~2-m diameter) main reflector, dual-polarization channels for all frequency bands from 6.9 to 89 GHz, and frequent global observation with a wide swath of more than 1500 km. The first two missions in the series, AMSR-E on board NASA’s Aqua satellite launched in 2002 and AMSR2 on board the Global Change Observation Mission - Water (GCOM-W) launched in 2012 and currently in operation, have provided more than 23 years of continuous global water cycle observations. Data from the AMSR series are widely used by meteorological agencies in Japan, the U.S., Europe, Australia, and elsewhere, as well as for monitoring sea ice in the polar oceans. They also contribute to satellite-based merged precipitation products, such as GSMaP, IMERG, and CMORPH, which monitor hourly global rainfall distribution on a near-real-time basis.
The latest instrument, AMSR3 on board the Global Observing SATellite for Greenhouse gases and Water cycle (GOSAT-GW) launched in June 2025, continues the afternoon-orbit observations of AMSR2 and expands its observation capability to respond to new user needs. Major improvements in AMSR3 are new high-frequency channels (166 and 183 GHz) for snowfall retrieval and water vapor analysis in numerical weather prediction, and additional 10.25-GHz channels with improved Noise Equivalent Delta Temperature (NEDT) for robust sea surface temperature (SST) estimates.
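For readers less familiar with the figure of merit mentioned above: for a total-power radiometer, NEDT is commonly approximated by the radiometer equation, NEDT ~ T_sys / sqrt(B * tau). A minimal sketch with hypothetical numbers (not AMSR3 specifications):

import numpy as np

T_sys = 400.0   # K, system noise temperature (hypothetical)
B     = 100e6   # Hz, predetection bandwidth (hypothetical)
tau   = 2.5e-3  # s, integration time per sample (hypothetical)

nedt = T_sys / np.sqrt(B * tau)   # radiometer equation
print(f"NEDT ~ {nedt:.2f} K")     # ~0.80 K for these illustrative numbers

Widening the bandwidth or lengthening the integration time lowers the NEDT, which is why channel design trades spectral selectivity and spatial sampling against radiometric sensitivity.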
In this session, we will focus on the observation results and outcomes obtained by the past and current AMSR series, and early observation and application results from AMSR3, including calibration, inter-comparison with other microwave imagers, ground validation, and algorithm development.
CCT.71: Advances in Radar Systems, Techniques, and Applications for Environmental and Planetary Sensing
S.17: Sensors — Advanced Future Instrument Concepts
Advances in radar systems, signal processing, data assimilation, and radar techniques are enabling unprecedented geophysical observations and discoveries in environmental and planetary science. Novel sensing modalities, including distributed and passive sensing, sensor fusion, and radar tomography, are offering improved spatial coverage, spatial resolution, temporal resolution, estimation accuracy, and 4D imaging capabilities. Reduced cost and increased availability of unmanned platforms and software-defined radios have opened avenues for novel radar applications and sensor systems. Innovations across discipline boundaries are being leveraged in conjunction with radar to address new applications in terrestrial and space sciences. Overall, emerging radar technology is transforming observational capabilities.  

This session aims to bring together pioneering research in radar systems and techniques. We invite researchers to present results on any topic involving the development of radar systems and data processing techniques to understand the terrestrial and space environment. This includes but is not limited to new results from existing radar observations, the development and application of novel observation or data processing techniques, numerical modeling of existing or future observations, and/or the analysis of ground-based, airborne, or orbital radar data for Earth and planetary remote sensing. 

The creation of this session is intended to establish a community for emerging radar technology and bring together researchers of this topic who have been presenting in disparate sessions. Combining knowledge will help push forward the critical and dynamic intersection of developing radar systems and techniques for monitoring the terrestrial and space environment. This session has the potential to advance knowledge across a range of science and engineering fields, benefiting glaciologists, climate scientists, geophysicists, planetary scientists, and all sectors of geoscience that rely on radar remote sensing of the environment for societal well-being.
CCT.72: Innovations and New Methods in Remote Sensing Instrument Design and Calibration
S.17: Sensors — Advanced Future Instrument Concepts
The Community Contributed Session, “Innovations and New Methods in Remote Sensing Instrument Design and Calibration,” will explore cutting-edge advancements in radiometric and spectral calibration hardware and techniques, as well as groundbreaking sensor designs critical for the next generation of weather and climate data. As demands for higher spectral and spatio-temporal resolution increase, alongside stricter radiometric accuracy requirements, novel sensors and emerging hardware are essential to meet these benchmarks. This session will address innovations in calibration algorithms and supporting infrastructure that are needed to manage higher data volumes and enhanced performance capabilities. Equally important is the development of real-time, automated calibration systems to ensure a continuous flow of precise, actionable weather and climate observations.

Revolutionary improvements in radiometric, spectral, and spatio-temporal accuracy are increasingly driven by both emerging hardware and the proliferation of new sensors. This session will therefore highlight independent advancements brought by high-performing, miniaturized instruments, including hyperspectral infrared and microwave sounders on smallsats and cubesats in a variety of orbital geometries, and instruments that extend coverage into underutilized spectral regions (such as the VSWIR and solar-reflective bands).

The rapid growth of these new platforms, combined with the expansion of commercial sensors, necessitates innovations in rapid calibration and validation (cal/val) methodologies. On the software side, advances such as deep learning–based calibration algorithms, capable of autonomously identifying anomalies and correcting errors, are emerging as transformative tools. Similarly, autonomous calibration systems are poised to streamline recalibration, ensuring continuous data integrity over time and accelerating the path to operational maturity.
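As a deliberately simplified stand-in for the deep-learning anomaly detectors mentioned above (a rolling z-score rather than a learned model; all names, values, and thresholds are illustrative):

import numpy as np

def flag_anomalies(cal_counts, window=50, z_thresh=4.0):
    """Flag calibration samples that deviate strongly from a rolling baseline."""
    cal_counts = np.asarray(cal_counts, dtype=float)
    flags = np.zeros(cal_counts.size, dtype=bool)
    for i in range(window, cal_counts.size):
        ref = cal_counts[i - window:i]
        sigma = ref.std()
        if sigma > 0 and abs(cal_counts[i] - ref.mean()) > z_thresh * sigma:
            flags[i] = True
    return flags

# Synthetic blackbody count series with slow drift and one injected spike
rng = np.random.default_rng(0)
counts = 1000.0 + 0.01 * np.arange(500) + rng.normal(0.0, 0.5, 500)
counts[300] += 10.0
print(np.flatnonzero(flag_anomalies(counts)))  # expect index 300 to be flagged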

To achieve the highest radiometric accuracy, the session will highlight innovations in on-orbit calibration reference systems, including concepts demonstrated by CLARREO and quantum-based calibration targets and sensors. These systems aim to establish fundamental, SI-traceable standards and show great promise for enabling on-orbit absolute calibration, addressing long-standing challenges in inter-sensor comparison and the creation of consistent, multi-decadal climate data records.

These advancements in sensor design and calibration have the potential to transform weather and climate science. By providing more accurate and detailed observations across the electromagnetic spectrum, they will enhance our ability to monitor key environmental variables such as atmospheric temperature, moisture profiles, and land and sea surface properties. This improved observational capacity will enable better forecasting of extreme weather events, support more precise climate modeling, and facilitate early warning systems for natural disasters. Ultimately, the innovations discussed in this session will play a critical role in advancing our understanding of Earth’s climate systems and enabling more informed, data-driven responses to global environmental challenges.
CCT.73: Earth Observation for Civil Security
D.2: Data and Society — Remote Sensing Data and Policy Decisions
Geopolitical instability, climate change and the growing interdependence of critical infrastructures are reshaping the global security landscape. In this evolving context, Earth Observation (EO) provides a unique source of reliable, independent and continuous information that supports early warning, situational awareness and evidence-based decision-making for security governance and policy formulation. The growing volume of EO data, the interconnected nature of contemporary threats and the rapid progress in information extraction techniques call for innovative approaches to strengthen societal security through effective monitoring, foresight and coordinated action across institutional levels.

This theme will examine how EO, combined with cutting-edge information extraction methods such as foundation models, multimodal AI, knowledge graphs and data fusion, delivers actionable knowledge to inform risk assessment, enhance preparedness and support data-driven security and resilience policies. It also welcomes case studies where EO-derived evidence contributes to objective assessment and transparent reporting in politically sensitive or data-scarce environments.

Expected contributions include studies demonstrating EO applications across diverse domains, such as:
(i) improving the protection and resilience of critical infrastructures through EO-based monitoring and evidence-informed decision support;
(ii) developing early-warning and risk-assessment systems for natural and human-induced hazards with direct implications for societal stability;
(iii) monitoring and analysing threats linked to migration dynamics, cross-border tensions, or geopolitical instability; and
(iv) implementing predictive, EO-driven approaches to anticipate, prevent and manage complex security scenarios.

This theme complements the current IGARSS programme — particularly the General Themes under Data and Society — by focusing on how EO-derived information supports policy-relevant decision-making processes in the domain of civil security. It bridges the gap between technological innovation and its practical application to security governance, risk prevention and strategic planning.

While existing Special and General Themes address technological advances and environmental monitoring, few explicitly explore how EO-based insights contribute to governance processes, inter-agency coordination and evidence-informed policy responses to evolving security challenges, including those emerging from instability or conflict.

The session will highlight the societal and institutional value of EO in safeguarding populations and critical assets, supporting decision-makers in shaping policies that enhance preparedness, stability and resilience. By linking scientific innovation with operational and policy frameworks, it underscores EO’s growing role as an enabler of informed, coordinated and anticipatory decision-making — thereby enriching and complementing the IGARSS 2026 programme.
CCT.74: Remote Sensing Standards for a Dynamic Earth
D.4: Data and Society — Standards in Remote Sensing
The IEEE Geoscience and Remote Sensing Society (GRSS) Standards for Earth Observation (GSEO) Technical Committee (TC) is focused on facilitating the development of protocols and standard practices for coordinated remote sensing data acquisition and assimilation by industry, academia, and governments. The scope of the GSEO is to help achieve full interoperability of data from hyperspectral, passive microwave, synthetic aperture radar (SAR), GNSS reflectometry (GNSS-R), and future remote sensing technologies by empowering virtual constellations to enhance the quality of Earth observation data while minimizing revisit time. The goals of our standardization initiatives include improving disaster management and early warning as well as optimizing natural resources, such as agricultural productivity. The GSEO takes a holistic approach that incorporates not only technological considerations but also policymaking, economics, and societal implications. We believe this vision can generate new insights on a geopolitically divided planet burdened by the impacts of climate change.

The standardization landscape is highly dynamic, with many relevant stakeholders working on a diverse range of topics. There is a need to integrate different perspectives, and we believe that the standardization process that the GSEO uses under the IEEE Standards Association (SA), together with our technical operational philosophy, provides the required next step to leverage global efforts in a coordinated and cost-effective scientific manner. The GSEO continues to foster the creation of standards that address identified needs. This is being accomplished by liaising with a variety of GRSS TCs to leverage the full technical capabilities of the GRSS. We approach this effort from a long-term perspective, keeping in mind key areas of interest such as natural hazard identification and monitoring, emergency response and coordination, recovery and reconstruction, and sustainable economic growth based on well-organized use of our natural resources.

This session serves as a scientific forum for needs and ideas from research institutes, universities, and space agencies, e.g., the National Aeronautics and Space Administration (NASA) and the European Space Agency (ESA), as well as from private companies, which are becoming key actors in the New Space era.
CCT.75: ROCX 2025: Data Collections, Ground Experiments, and Early Results
D.6: Data and Society — Citizen and Open Science
ROCX (RIT Open Community eXperiment) 2025 gathered over 60 researchers from 23 organizations and 6 countries at RIT's Tait Preserve near Rochester, New York, USA during September 8-19, 2025 for a remote sensing data collection coordinated with 20 different ground experiments. The primary objective of ROCX 2025 was to acquire well ground-truthed remote sensing data for distribution through an open-access website for use by researchers and students. Data were acquired by multispectral, hyperspectral, polarimetric, and lidar sensors on ground, UAS, aircraft, and satellite platforms. Ground experiments included sub-pixel target detection, humanitarian demining, calibration, vegetation nitrogen sensing, and wetland characterization, among many others. This theme invites participants to share details on the data acquired and the ground experiments deployed, as well as early results of analyses.

ROCX 2025 cuts across several of the General Themes and is well suited as a unique community-contributed theme. Much of the data were collected by UAS, airborne, and satellite hyperspectral sensors, so a large part of the post-experiment activity will focus on hyperspectral data processing. Several types of radiometric and spectral calibration experiments were conducted, so there is a strong sensor-performance aspect. The data are anticipated to be used in remote sensing education and research for algorithm development. Since the data are to be distributed on an open-access website, ROCX 2025 also fits the open science theme. Finally, the IEEE GRSS, through its Geoscience and Spaceborne Imaging Spectroscopy (GSIS) technical committee, was a primary sponsor, so it is very appropriate that ROCX serve as a topic at IGARSS 2026.
CCT.76: Advanced Satellite Remote Sensing and AI-driven Strategies for Multi-hazard Monitoring and Risk Prediction
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
In the context of global climate change and rapid urban expansion, societies worldwide are increasingly vulnerable to natural and anthropogenic hazards such as earthquakes, volcanic eruptions, landslides, floods, urban subsidence, wildfires, and coastal instability. These events, whose frequency and intensity are amplified by environmental and human pressures, pose significant challenges for sustainable development and risk governance. To mitigate their impacts, effective and advanced monitoring frameworks are essential, capable of addressing complex spatio-temporal dynamics and providing actionable information for decision-makers.
Satellite-based Earth Observation has revolutionized hazard monitoring, with Synthetic Aperture Radar (SAR) playing a central role due to its unique capability to provide systematic, high-resolution data under all-weather, day-and-night conditions. The evolution of interferometric methods and multi-temporal SAR techniques has enabled accurate detection of ground deformation processes, supporting long-term monitoring of subsidence, seismic activity, volcanic unrest, and infrastructure stability. Recent advances in bistatic and multistatic SAR configurations further extend the potential for hazard characterization at multiple scales. Complementary integration with optical, multispectral, hyperspectral, LiDAR, and geodetic data offers additional perspectives, reinforcing the capacity to map and predict floods, landslides, and coastal risks with greater reliability.
At the same time, artificial intelligence and machine learning frameworks are reshaping the analysis of Earth Observation data. By exploiting massive and heterogeneous datasets, AI-driven approaches enhance the detection, fusion, and forecasting of geohazard processes, enabling the timely generation of products that bridge scientific innovation with operational applications. These advances are crucial to strengthen early warning systems, improve disaster response, and support long-term adaptation and resilience strategies. Interdisciplinary contributions that combine geoscience, remote sensing, hydrology, and data science are thus indispensable to translate technological progress into real-world risk reduction.
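For context on the interferometric deformation measurements described above: unwrapped differential phase maps to line-of-sight displacement as d_LOS = -wavelength * dphi / (4 * pi), so one full fringe corresponds to half a wavelength of motion (sign conventions vary between processors). A minimal sketch with illustrative values:

import numpy as np

wavelength = 0.0556                 # m, C-band; illustrative choice
dphi = np.array([0.5, 1.0, np.pi])  # unwrapped differential phase [rad]

d_los = -wavelength * dphi / (4.0 * np.pi)
print(d_los * 1e3)                  # line-of-sight displacement [mm]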
This session welcomes abstracts addressing methodological and application-oriented advances in SAR interferometry, multi-temporal analysis, AI-based multi-sensor integration, and innovative case studies that demonstrate how Earth Observation can effectively support multi-hazard monitoring and risk prediction at both local and global scales.
CCT.77: Advances in GeoAI systems for Wildfire Monitoring
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Scope: Topics include, but are not limited to:

1) Data fusion systems for multispectral, hyperspectral, LiDAR, and SAR remote sensing and secondary products for forest scenes and wildfire studies
2) Advances in computer vision and natural language processing techniques for secondary products relevant to wildfire studies, including predictive metrics/products: remote sensing products such as land cover maps, change maps (optional, has overlap with other Themes), vegetation indices, moisture indices, fuel load indices, and buildup indices, and monitoring metrics such as air quality indices, spread prediction maps, damage assessment, and pollution and health impact assessment (see the index sketch after this list)
3) Novel pipelines for 3D forest scene reconstruction using LiDAR and multi-view imagery, feed-forward 3D transformers, neural implicit models, and Gaussian Splatting
4) Advances in GeoAI systems with language modelling capabilities for wildfire modelling, including GeoAI systems built from large language models (LLMs) and 2D/3D vision-language models (VLMs)
5) Advances in the integration of GeoAI and GIS, focusing on the ingestion of climate records and weather data retrieved from GIS into GeoAI systems, relevant data analysis techniques, and the incorporation of GeoAI predictions back into GIS platforms
6) Case studies on the application of GeoAI systems from global fire-prone regions
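A minimal sketch of two of the spectral indices referenced in item 2, assuming surface-reflectance arrays for the relevant bands (band names are placeholders, not tied to any specific sensor):

import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index."""
    return (nir - red) / (nir + red + 1e-9)

def nbr(nir, swir2):
    """Normalized Burn Ratio; dNBR (pre-fire NBR minus post-fire NBR)
    is a common burn-severity proxy."""
    return (nir - swir2) / (nir + swir2 + 1e-9)

nir, red, swir2 = np.array([0.45, 0.30]), np.array([0.08, 0.20]), np.array([0.10, 0.28])
print(ndvi(nir, red))   # higher for dense, healthy vegetation
print(nbr(nir, swir2))  # drops sharply over freshly burned areas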
___________________________________________________________________________________________________
Importance 

As wildfires intensify under climate change, the need for robust monitoring and predictive capabilities has never been more urgent. This Community Theme will highlight these emerging directions in GeoAI, bringing together contributions at the intersection of remote sensing, multimodal systems, LLM/VLM systems, and geospatial information systems to advance wildfire science and resilience. It will also provide a forum for researchers and policymakers to exchange ideas, present advances, and identify open challenges in applying GeoAI to wildfire monitoring, fostering collaborations that can translate research into real-world impact. 

Advances in sensor technology and the expansion of remote sensing modalities, including multispectral, hyperspectral, LiDAR, and SAR data from satellites, aircraft, and drones, now provide unprecedented environmental detail. Derived products such as land-cover maps, vegetation stress indices, and climate indicators further enrich monitoring capabilities. GeoAI is transforming how these diverse data are processed, with multimodal fusion integrating spatial, temporal, semantic, and linguistic signals to generate high-level wildfire intelligence. Large Language Models (LLMs) and Vision-Language Models (VLMs) enable cross-modal reasoning, aligning images, 3D reconstructions, and text-based knowledge. Embedding these GeoAI advances within Geographic Information Systems (GIS) creates direct pathways for operational wildfire monitoring, early warning, and decision support at multiple scales.
____________________________________________________________________________________________________

Rationale:

Wildfires rank among the fastest-growing climate hazards, endangering nature, infrastructure, and human well-being. While General Theme D.7 addresses hazards broadly, a dedicated GeoAI-wildfire theme highlights a rapidly emerging, cross-disciplinary domain. This theme emphasizes multimodal data fusion across multispectral, hyperspectral, LiDAR, SAR, 3D maps, and derived products, as well as advances in AI methods for predictive monitoring and in GIS integration. Explicitly cross-disciplinary, it connects AI, remote sensing, GIScience, and emergency management. By focusing on wildfire-specific challenges and integrating GeoAI with GIS platforms, this theme complements D.7 while ensuring research informs real-world monitoring and response.

CCT.78: Bridging Earth Observations & Risk Analytics at Scale
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
There are still gaps between the vast amount of Earth observation data currently available and yet to be gathered (an estimated 130 TB downlinked daily from NISAR, Sentinel, Landsat, and PlanetScope continuous monitoring alone) and the access to and use of these data. Why do these gaps persist, which major gaps would have the largest impact if addressed, and how do we move forward with the pertinent information?
We will have speakers from academia, industry, and government discuss these barriers and strategies for adopting geospatial data to solve climate, environmental, and business challenges related to disaster risk and mitigation. This session will address breaking down barriers for EO to fuel risk analytics at scale; scaling risk analytics with AI and next-generation infrastructure; and mobilizing the ecosystem by co-creating future frameworks for EO and risk analytics.
Our company and the presenters all have years of experience and current use-case examples of where the gaps continue to exist and why, as well as where we need to concentrate efforts and resources to overcome them.
As we amass more and more data, and AI continues to grow in importance within all organizations, it is vital that we are also able to use EO geospatial data to aid better decision-making, improved visualization, and shared knowledge. This session will discuss how we continue to move EO into everyday usage within all organizations, and the best practices and processes to make that happen.

CCT.79: Harnessing Geospatial Technology and Machine Learning for Flood Management in Fluvial and Glacierized Catchments
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Floods are among the most prevalent natural disasters worldwide. Flood damage is severe and of paramount concern because floods affect more people than any other hazard and strongly shape economic development, agricultural productivity, and the livelihoods of billions. Floods cause not only loss of life, crops, and livestock but also the failure of roads, houses, and infrastructure; because these losses span life, property, infrastructure, and agriculture, the economy, livelihoods, and development are affected to a large extent. Flood phenomena are highly complex and differ greatly with landscape, climate, and hydrology, as is evident from urban flooding, riverine flooding, and flooding in snowy and glacierized catchments. Fluvial catchments witness huge erosion and sediment transport along the whole river course, affecting landforms, lithology, and infrastructure, whereas glacierized catchments involve highly dynamic hydrological and cryospheric processes and several hydrometeorologically induced disasters that depend significantly on terrain, climate, and weather characteristics. Further, the severity and frequency of floods are amplified by climate change, encroachment on flood plains, land-use and land-cover changes, cloudbursts, glacier and snow melt, GLOFs, avalanches, landslides, and faulty reservoir operation. Floods take many forms: riverine floods, flash floods, cloudburst floods, glacier lake outburst floods, dam-break floods, coastal floods, urban floods, stormwater drainage floods, and sewer floods.

The impact of climate change has further aggravated the problem through temporal and spatial variation of rainfall. It has also been observed that the hydrologic cycle becomes more intense under warmer conditions, stimulating rainfall of greater intensity and longer duration and thus longer periods of flooding. Since floods are the most frequent natural disaster and cause great economic loss worldwide, they draw attention from the scientific community, the public, and central and federal governments alike. It is therefore important to improve our understanding of floods and to develop efficient flood management strategies.

This session aims to solicit novel contributions from researchers that improve the understanding of floods in fluvial and glacierized catchments and enable the development of efficient, climate-resilient flood mitigation strategies. We invite researchers to submit original work that harnesses geospatial technologies and machine learning to address flood problems in fluvial and glacierized catchments, deciphers the linkages and consequences of hydrometeorologically induced disaster events such as GLOFs, landslides, and avalanches, and examines how the spatial complexity of landscape, terrain, vegetation, and climate change influences flood risk in the region. This session also aims to bring together professionals from multidisciplinary fields to share innovative ideas, research outcomes, and insights from case studies of different catchment settings utilizing geospatial, artificial intelligence, and machine learning techniques.
CCT.80: Humanitarian AI with Earth Observation: Actionable Mapping for Time-Critical Operations
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Scope:
This community-contributed session addresses the growing need for rapid, reliable, and transparent use of satellite Earth Observation (EO) and Artificial Intelligence (AI) in humanitarian operations. In an era of compounding crises that include armed conflicts, public health emergencies, and disasters such as earthquakes, floods, cyclones, and wildfires, responders face overlapping pressures that reshape displacement and environmental risk. As these crises occur more often, operations need timely and credible evidence showing where people are, what has changed, who is most at risk, and how to reach them. The session will demonstrate how EO and AI translate imagery and geospatial data into decision-ready indicators for time-critical contexts. It will focus on practical AI-based workflows that quantify building damage and disruption to roads, bridges, electricity, water, and other lifelines; track land use and land cover change in and around settlements and critical services; combine exposure to multiple hazards with measures of vulnerability to support fair and safe targeting; and assess service reachability, including clinics and hospitals, safe water, sanitation and hygiene, and temporary or transitional shelter. To ground these analyses in people-centered outcomes, presenters will integrate gridded population models from suites such as LandScan, together with satellite imagery and ancillary geospatial data, to characterize population distribution and dynamics and to estimate exposed and affected populations. Presenters will show end-to-end pipelines that move from data access to analysis to delivery, using shared data standards, cataloged products, reproducible processing, and dissemination through dashboards and application programming interfaces. These practices make results transparent, comparable across sensors, seasons, and regions, and easy to integrate with the population, infrastructure, climate, and mobility information used by operational partners.

Importance: 
As disasters intensify and crises grow more complex, responders need trustworthy, time-sensitive geospatial intelligence for rapid decisions. This session advances transparency, reproducibility, and rigorous validation under limited ground truth and time constraints. Authors will clearly describe datasets, methods, and experiments; report comparative results with error analysis and quantified uncertainty; and state limitations. Submissions will detail safeguards for sensitive locations, such as spatial aggregation or intentional latency, and provide information for reproduction and extension. By linking methodological advances to operational questions about where people are, what has changed, who is most at risk, and how to reach them, the session strengthens confidence in EO and AI for anticipatory action, emergency response, and recovery. The emphasis on uncertainty, robustness, and do-no-harm safeguards supports adoption by organizations working under pressure.

Rationale for Complementing General and Special Themes:
  The session complements General Theme D.7: Risk and Disaster Management by advancing EO and AI into operational products for rapid response and risk reduction. It aligns with the Special Theme SP.3: Artificial Intelligence Ethics and Governance in Remote Sensing, emphasizing trustworthy AI, transparency, clear uncertainty reporting, and safeguards for sensitive geospatial data. The session organizers will invite at least five strong submissions from research and practice networks and will chair the session in person at IGARSS 2026, ensuring a coherent program with real world impact across humanitarian operations.
CCT.81: Leveraging AI, LLMs, and Geospatial Technologies for Rapid and Explainable Post-Disaster Damage Intelligence
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Disasters are becoming more frequent and costly, with the United Nations Office for Disaster Risk Reduction (UNDRR) estimating global annual losses at $2.3 trillion. Rapid and accurate damage assessment after events such as earthquakes, floods, landslides, bushfires, and hurricanes is crucial for guiding emergency response, optimizing logistics, and supporting recovery planning.

This session explores the integration of artificial intelligence (AI), large language models (LLMs), and geospatial technologies to advance post-disaster damage assessment. The rapid evolution of Retrieval-Augmented Generation (RAG) and multimodal LLMs (e.g., ChatGPT, Copilot, Grok, Gemini) has opened new opportunities to fuse spatial, textual, and visual data for more intelligent and explainable disaster analysis. By combining satellite and UAV imagery with AI-driven reasoning, this approach enhances both the accuracy and interpretability of disaster intelligence.
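To make the retrieval step of RAG concrete, here is a minimal cosine-similarity sketch over precomputed text embeddings (the report snippets and random vectors are stand-ins for a real embedding model):

import numpy as np

def top_k(query_vec, doc_vecs, k=2):
    """Return indices of the k documents most similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    return np.argsort(d @ q)[::-1][:k]   # rank by cosine similarity

reports = ["bridge out on route 9", "flooding near the clinic", "shelter open at school"]
doc_vecs = np.random.default_rng(1).normal(size=(len(reports), 4))  # placeholder embeddings
query_vec = doc_vecs[1] + 0.1            # a query 'close to' the flooding report

for i in top_k(query_vec, doc_vecs):
    print(reports[i])                    # retrieved context handed to the LLM prompt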

In addition to conventional remote sensing, the session will address the value of crowdsourced and social media data as complementary, real-time sources of situational awareness. Posts, images, and reports shared during disasters often provide early, location-specific information that fills gaps in satellite observations. When processed through LLMs, unstructured, multilingual data can be summarized, validated, and linked to geospatial datasets, enhancing both speed and contextual relevance in assessing impacts.

The integration of AI, LLMs, and geospatial intelligence introduces a new paradigm of human-centered, explainable disaster assessment. LLMs can interpret spatial outputs in natural language, assist decision-makers through dialogue-based systems, and democratize access to complex data for both technical and non-technical users. This synergy supports transparent, reproducible, and actionable disaster intelligence, which can inform emergency operations and policy decisions in near real-time.
This proposal highlights an emerging research frontier that bridges traditionally siloed domains—geospatial science, AI, disaster management, and humanitarian response—to deliver scalable and interpretable post-disaster solutions. It encourages contributions focused on the following key areas:
•	Fusion of multimodal data (satellite, UAV, and social media) for rapid damage detection;
•	Explainable AI (XAI) and visualization techniques for disaster reasoning;
•	LLM-augmented geospatial analytics for communication and interpretation; and
•	Applications of generative AI for multi-hazard monitoring and situational awareness.

Aligned with the IGARSS General Themes on AI and Machine Learning and Special Themes on Disaster Risk Reduction and Remote Sensing, this session fosters collaboration between researchers and practitioners. It aims to translate cutting-edge innovations into tangible societal impact. By advancing explainable, multimodal disaster intelligence, this session directly contributes to enhancing global resilience and preparedness in the face of escalating climate and disaster risks.
CCT.82: Multi-Sensor Satellite, Ground and Ocean Observing Systems for Ocean Associated Disasters: Damage Assessment and Forecasting
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Natural and anthropogenic emissions drive climate change, whose impacts are seen directly and indirectly through rising sea levels and the melting of snow and ice on land. These changes increase the frequency and intensity of ocean-associated disasters. A dense network of observing systems (multiple satellites, GNSS, buoys, ARGOS, and automatic weather networks) provides ocean, land, and atmospheric parameters that help the scientific community and modelers forecast landfall, intensity, and the areas likely to be affected by cyclones, hurricanes, tsunamis, harmful algal blooms, coastal erosion, extreme rainfall, coastal floods, and landslides. People living along the coast and infrastructure (buildings, bridges, roads, groundwater, agricultural productivity, and socio-economic assets) are severely damaged by these ocean-associated hazards. This session aims to explore innovative approaches that integrate advanced observing systems with resilience and mitigation strategies to protect populations and critical infrastructure in high-risk regions under a changing climate. It will focus on the use of cutting-edge technologies and monitoring networks to enhance preparedness, strengthen adaptation, and support long-term disaster risk reduction. The session also seeks to promote sustainable development and practical solutions for vulnerable communities worldwide, emphasizing strategies that improve early warning, response, and overall resilience to climate-driven hazards. To support these objectives, this session invites papers that demonstrate innovative applications of observing systems and modeling approaches, and encourages contributions based on satellites, drones, airborne platforms, GNSS, ocean observing systems, and modeling for mapping, forecasting, and new technologies to mitigate impacts. Contributions integrating these diverse data streams with AI/ML-based modeling and data assimilation for early warning, accurate landfall prediction, and disaster preparedness are welcome.
CCT.83: Polar Lows and Cirrus–Aerosol Interactions: Multi Platform Process Studies Using Simulated HAWC/AOS Data and Ancillary Satellite Imagery
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
This session addresses the longstanding gap in accurately detecting, attributing, and predicting polar low (PL) storms and their interactions with cirrus clouds and aerosols in the Nordic Seas, the Canadian Arctic, and the North Atlantic. These limitations hinder our ability to forecast hazardous weather, understand Arctic cloud–climate feedbacks, and develop climate resilience strategies.
Why is this problem critical? PLs are high-impact, short-lived cyclones with hazardous winds, waves, and precipitation. Their small scale, rapid evolution, and sparse observations make them notoriously difficult to detect and forecast using conventional methods. Cirrus clouds are strongly influenced by aerosols and significantly affect the upper-atmosphere heating and radiative balance that govern PL intensity and regional energy budgets, leading to uncertainties in weather forecasts and climate projections. Rapid Arctic warming, sea ice decline, and changing ocean–atmosphere feedbacks modify storm tracks, PL intensity, and cloud–aerosol–moisture interactions.
What is missing in current studies? PL climatologies and ERA5 reanalysis data underestimate PL occurrence, intensity, and impacts due to coarse resolution, incomplete multi-sensor observational data, biases near ice margins, and inconsistent detection algorithms. Moreover, there is a lack of integrated, process-based multi-platform studies combining advanced satellite remote sensing (e.g., HAWC/AOS: https://www.asc-csa.gc.ca/eng/satellites/hawc/; https://aos.gsfc.nasa.gov/), state-of-the-art coupled atmosphere–ocean modeling, AI/deep learning, and robust validation and attribution methods. Without advances in data fusion and process-level simulation, marine hazard forecasts and climate adaptation strategies remain unreliable for these rapidly changing areas.
How will this session help? By soliciting presentations on integrating simulated HAWC/AOS vertical profiles, advanced models, and AI-driven detection workflows, validated with satellite, ground, and reanalysis data, this session will provide a forum that can lead to the first process-level, multi-platform benchmarks and automated tools for PL and cirrus/aerosol analyses in the North Atlantic/Arctic region. Relevant datasets include (but are not limited to) satellite-based synthetic aperture radar (SAR) and thermal infrared (TIR) imagery, ground-based lidar, and satellite-based EarthCARE data. The potential outcomes can enable improved forecasting, hazard assessment, and climate resilience planning for high-latitude communities, closing critical gaps in storm simulation and prediction, particularly in cloud physics. Using next-generation satellite data, advanced modeling techniques, and AI architectures, this session addresses inadequate process-level PL detection, attribution, and simulation, and their cirrus/aerosol interactions.
Our core aim is to overcome barriers in observing and predicting PLs. Existing methods, reanalysis data and satellite remote sensing undercount PLs and misrepresent their intensities and lifecycles, due to coarse resolution, inconsistent detection algorithms, and limited integration of cloud–aerosol information. Central to our core aim are cirrus clouds and aerosol interactions—critical modulators of upper-atmosphere radiative balance and storm strength—a major uncertainty in Arctic climate. 
This session invites presentations that include (but are not limited to) the use of “simulated-HAWC” profiles, multi-sensor observations (satellite, lidar, and reanalysis data), optimized state-of-the-art modeling frameworks, and advanced AI, thereby developing process-level analyses and automated detection/attribution tools. Recent findings link upper-atmosphere and ocean impacts, further motivating our integrated observation–simulation–AI approach. Outcomes from this session could serve operational weather prediction needs and longer-term Arctic climate resilience planning.
CCT.84: Remote Sensing and Geospatial Modeling of Wildfire Risk, Detection, and Recovery
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Scope: This Community Contributed Theme (CCT) focuses on the wildfire life cycle, integrating remote sensing, geospatial modeling, and decision support to address early detection, risk assessment, fire spread modeling, burned area and emission quantification, post-fire ecological recovery, and policy-informed land management. Contributions will leverage advanced technologies, including optical and thermal sensors, SAR, LiDAR, UAVs, IoT/in-situ networks, data assimilation, machine learning, new satellite constellations, and multisource data fusion, to enhance understanding and management of wildfires across all phases (pre-fire, active fire, post-fire).

Importance: Wildfires are increasing in frequency, scale, and destructiveness due to climate change, land use changes, and shifting vegetation dynamics. This CCT addresses critical needs in disaster risk reduction, public safety, ecosystem integrity, carbon cycle feedbacks, air quality, and land stewardship. Advanced sensing and modeling enable faster detection, accurate fire behavior prediction, optimized resource allocation, effective prescribed burns, precise emission calculations, and informed post-fire rehabilitation, ultimately saving lives and guiding sustainable land management.

Rationale for Complementing General and Special Themes: While IGARSS 2026 includes SP.1: Natural Disasters & Disaster Management, its broad scope across multiple hazards does not specifically target wildfires or the integration of emerging sensor technologies, real-time/near-real-time modeling, cross-scale data fusion, and tailored decision support systems for wildfire phase transitions. This CCT complements SP.1 and related General Themes (e.g., Forest & Vegetation: Application & Modelling, Change Detection, Data Fusion) by focusing exclusively on wildfires. It emphasizes interdisciplinary approaches, uniting sensor technologists, modelers, ecologists, fire managers, and policymakers to advance wildfire science and actionable outcomes.

Anticipated Contributions: We expect robust community engagement, with submissions on topics such as early detection using multispectral/thermal imagery, fuel moisture estimation via microwave-optical fusion, fire spread forecasting with physics-based/AI hybrid models, burn severity and recovery analysis using LiDAR/hyperspectral data, and emission estimation for atmospheric and health models. The session will include oral and poster presentations to showcase methodological innovations and applied case studies, fostering collaboration and advancing the state of wildfire remote sensing and modeling.
CCT.85: Wildfire Lifecycle Analytics with Multi-Source Geospatial Data
D.7: Data and Society — Risk and Disaster Management (Extreme Weather, Earthquakes, Volcanoes, etc)
Across the IGARSS community, multi-source geospatial evidence, including optical, SWIR, TIR and LST, SAR, reanalysis and numerical weather prediction, terrain and fuels, land use and infrastructure, and socio-environmental indicators, provides the foundation for probabilistic risk intelligence that supports prevention, preparedness, response, and recovery.

This session convenes wildfire research across the full lifecycle: ignition susceptibility, spread dynamics, exposure and impact assessment, smoke and air-quality forecasting, and ecological and community recovery after fire. It welcomes studies on explainability, calibration and uncertainty quantification, and equity in wildfire risk modeling, while not making them mandatory.

A central theme is fire management as the integrative application layer that turns risk estimates into action. Within this frame, fuel management is foundational: mapping fuel load and continuity, deriving fuel-moisture proxies, prioritizing and evaluating treatment portfolios such as prescribed burning and mechanical thinning, identifying safe treatment windows, and linking treatments with WUI hardening and closure policies. The same risk intelligence guides response operations, including resource allocation and staging, and it explicitly supports evacuation and escape-route planning by integrating evolving risk fields with transportation networks to identify safe corridors, estimate time to evacuate, and assess accessibility for vulnerable populations. It also strengthens information services and recovery planning.

Methodologically, the scope covers the full spectrum of machine learning, including deep and classical approaches, probabilistic and Bayesian methods, graph-based, kernel, and ensemble models, together with hybrid physics and ML surrogates, data assimilation, and causal inference. We welcome scalable spatiotemporal representations such as self-supervised and foundation models, adaptation to nonstationarity across ecozones and seasons, and simulation-to-real transfer that preserves physical plausibility. Robust data engineering is integral, including analysis-ready spatiotemporal data cubes, rigorous QA/QC with uncertainty propagation, and label curation with weak or active learning under sparse or noisy supervision.

Temporal horizons are intentionally broad, from hours to days for early warning, through seasonal and interannual prediction, to scenario analyses at decadal to century scale under climate constraints, with comparability and uncertainty tracked consistently across scales. Evaluation aligns with use, including reliability diagrams and ECE or Brier skill for probability calibration, conformal coverage, out-of-distribution detection, and stress testing under heat, wind, and drought extremes, as well as decision-consistent analysis with cost and loss frameworks for setting advisory, closure, and evacuation thresholds.
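As a concrete reference for two of the calibration metrics named above, the sketch below computes the Brier score and an equal-width-bin expected calibration error (ECE) for binary fire/no-fire probabilities on synthetic, perfectly calibrated data (all values illustrative):

import numpy as np

def brier(p, y):
    """Mean squared error between forecast probability and binary outcome."""
    return np.mean((p - y) ** 2)

def ece(p, y, n_bins=10):
    """Expected calibration error with equal-width probability bins."""
    bins = np.clip((p * n_bins).astype(int), 0, n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():  # weight each bin by its share of samples
            err += mask.mean() * abs(p[mask].mean() - y[mask].mean())
    return err

rng = np.random.default_rng(0)
p = rng.uniform(size=10_000)                       # forecast probabilities
y = (rng.uniform(size=p.size) < p).astype(float)   # outcomes drawn to match p
print(brier(p, y), ece(p, y))                      # ECE should be near zero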

Recognizing the societal stakes, the session examines regional and community inequalities in wildfire risk exposure, information access (alerts, maps, and connectivity), and recovery after disasters. It highlights equity-centered methodologies, including group-aware evaluation, exposure-weighted scoring, equity-conscious thresholding, and participatory or Indigenous-led validation that respects data sovereignty and local knowledge systems.

Overall, the session complements IGARSS General Themes in AI and ML for Remote Sensing, Data Fusion, and Earth Science Informatics, and aligns with Special Themes in hazards, sustainability, and resilience by unifying advances in representation, inference, and evaluation into methods and protocols that transfer across hazards.
CCT.86: Nighttime Remote Sensing for Sustainable Development Goals
D.9: Data and Society — Remote Sensing for Sustainable Development
In recent years, nighttime remote sensing has gained prominence as a tool for monitoring human activities. Peer-reviewed studies in this field have surged significantly, particularly over the past five years, with more than 250 papers published annually on nighttime light. Applications of nighttime remote sensing range from identifying wildfires in early stages, detecting power outages caused by natural disasters, and analyzing the effects of Covid-19 lockdowns to assessing the impacts of wars, borders, poverty, urbanization, light pollution, skyglow, and cultural or social events. Additionally, nighttime remote sensing has been employed to investigate nighttime impacts on various environmental and societal factors, such as air pollution, urban heat islands, animal behavior, vegetation cycles, and human health. These research areas align with several Sustainable Development Goals (SDGs), including SDG-1 (No Poverty), SDG-3 (Good Health and Well-Being), SDG-7 (Affordable and Clean Energy), SDG-11 (Sustainable Cities and Communities), SDG-13 (Climate Action), SDG-14 (Life Below Water), and SDG-15 (Life on Land).

Nighttime remote sensing is fundamentally distinct from other types of remote sensing. While daytime remote sensing relies on sunlight as the primary illumination source, nighttime remote sensing mainly depends on artificial light generated by human activities on the Earth's surface. This unique light source requires specialized processing methods: for example, nighttime light remote sensing is more sensitive to observation angles, time-series analysis is often essential, and cloud detection and pre-processing become even more critical.

Recent advancements and the launch of new satellites and sensors have led to a proliferation of nighttime-capable instruments and products, such as DMSP-OLS, VIIRS-DNB (featuring NASA's Black Marble), Landsat-8/9, Luojia-1, Jilin-1, EROS-B, SDGSAT-1, and the International Space Station. These products offer a range of capabilities, from long-term daily time-series continuity to multispectral imaging and very high spatial resolutions, reaching up to 1 meter. Additionally, the integration of nighttime thermal imaging enables researchers to assess both artificial light and heat emissions at night, offering deeper insights into urban environments. Researchers in the community utilize these diverse data products based on their specific needs, frequently collaborating and exchanging insights. This session proposal aims to provide a dedicated platform for researchers to discuss nighttime remote sensing data, analysis methods, and societal applications, particularly in relation to the SDGs.
CCT.87: REACT for Sustainable Development Goals (SDGs)
D.9: Data and Society — Remote Sensing for Sustainable Development
This session invites papers contributing to REACT4SDGs, a theme dedicated to connecting remote sensing technologies directly to socio-economic improvements and the mitigation of natural and anthropogenic hazards, as outlined in the UN Sustainable Development Goals (SDGs). As the world confronts the escalating impacts of climate change, there is a pressing need to address associated societal challenges through innovative solutions and by advancing the breadth and depth of Earth Observation (EO). Scaling up these solutions—and ensuring their adoption within governmental frameworks and industrial projects—is uniquely viable through remote sensing technologies. This session aims to gather and showcase cutting-edge research that embodies this critical philosophy.

We welcome submissions on recent advances in Earth observation, including both methodological developments and technological solutions that support evidence-based development policies and the sustainable management of terrestrial, coastal, and marine resources. The session scope encompasses all environmental spheres, including the cryosphere, biosphere, atmosphere, and geosphere. The common thread for all submissions should be a clear relevance to sustainable development targets. Techniques of interest include, but are not limited to, data from optical, lidar, and SAR systems—utilizing various imaging modes such as interferometry, polarimetry, tomography, multi-frequency analysis, optical indices, EM models, and lidar waveforms—along with methods for quantifying the resulting environmental products. We encourage studies that demonstrate tangible impacts achieved with remote sensing technologies, such as monitoring urban heat islands for sustainable cities (SDG 11), assessing water quality for clean water and sanitation (SDG 6), tracking deforestation, land deformation, avalanches, lake outbursts, landslides, wildfires, etc. for life on land (SDG 15), supporting precision agriculture and food security against floods for zero hunger (SDG 2), or advancing climate action (SDG 13) in general.
CCT.88: Remote sensing for coastal sustainability
D.9: Data and Society — Remote Sensing for Sustainable Development
Human activities and climate change have significantly changed the global environment, ecosystems, economy, and society in many ways over the past decades, placing a double squeeze on the coastal zone and its sustainable development. According to the United Nations, around 40% of the world's population lives within 100 km of the coast. Coastal sustainability has therefore become a crucial component of global sustainable development. This session topic is closely related to multiple IGARSS themes, including O. Oceans, D.9 Remote Sensing for Sustainable Development, and several technical themes on new sensors and methods; however, a dedicated focus on coastal sustainability supported by emerging remote sensing technologies is still lacking. The topic was successfully organized as two sessions at IGARSS 2024 and IGARSS 2025, with over 30 participants attending and fruitful discussions.

Recent decades have witnessed coastal reclamation and exploitation, coastal ecosystem and environmental evolution, urban population surges, and urban infrastructure expansion, bringing social and environmental impacts such as biodiversity loss, ecosystem fragmentation, and climate-change-induced vulnerability for human beings. Sustainable coastal development requires timely and efficient monitoring of the urban, ecological, and environmental processes in coastal regions, including urban sprawl, transportation systems, green spaces and wetlands, biodiversity, air and water pollution, reclamation and aquaculture, and natural disasters. Advanced multisource remote sensing techniques, including airborne and spaceborne optical, SAR, and LiDAR at different resolutions together with in-situ data, can provide fine- to coarse-resolution, multi-angle, multi-scale, and multi-frequency observations for coastal monitoring, supporting coastal resilience and sustainable development. This session invites original research that presents the advances, methodologies, and challenges of monitoring different coastal processes and their related issues using multisource remotely sensed data.
Show/Hide Description
CCT.89: Resilience Analytics with geospatial data
D.9: Data and Society — Remote Sensing for Sustainable Development
Climate extremes, rapid urbanization, aging assets, and emerging health risks are changing the scale and frequency of impacts. Planners and responders need timely and trustworthy maps to target prevention, guide mitigation, allocate resources, and evaluate outcomes. Work that reports validation against field or administrative data, shows generalization beyond a single case, or describes pathways to adoption is especially relevant.
This session complements “Remote Sensing for Sustainable Development” by focusing on how Earth observation, remote sensing, and GIS support decisions before, during, and after shocks. While sustainable development often emphasizes long-term monitoring and indicators, resilience analytics concentrates on assessing exposure, vulnerability, and capacity to adapt, and on turning geospatial evidence into practical measures.
The session covers optical, SAR, thermal, LiDAR/point clouds, and DEMs. Methods may include GIS, statistics, and AI/Explainable AI. Topics include building spatiotemporal indicators, event detection and forecasting, data fusion across sensors, transfer across sites, uncertainty estimation, and calibration. Studies that incorporate topography, hydrometeorology, land use, infrastructure, mobility, or administrative and health records are in scope.
Possible directions include, but are not limited to:
1. Natural hazards: susceptibility mapping, early detection, and impact assessment for landslides, floods, wildfires, earthquakes, and coastal hazards, including rapid mapping that links EO signals with precipitation, soils, and terrain (a minimal susceptibility sketch follows this list).
2. Health and well-being: EO-derived environmental predictors (water bodies, vegetation, built density, land surface temperature, humidity) connected to outcomes such as vector-borne disease risk (e.g., dengue), heat stress, or air-quality exposure, with attention to spatial targeting of interventions.
3. 3D environments and infrastructure: LiDAR-based segmentation and 3D mapping of buildings, vegetation, and ground to estimate exposure, fragility, access, and change, and to support asset inventories and maintenance planning.
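
As referenced in item 1, a minimal sketch of hazard susceptibility framed as binary classification; the predictor names, synthetic data, and coefficients are illustrative assumptions, not a validated model.

# Minimal sketch: landslide susceptibility as binary classification.
# Predictors, synthetic data, and coefficients are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.uniform(0, 45, n),       # slope (deg)
    rng.gamma(2.0, 20.0, n),     # 3-day rainfall (mm)
    rng.uniform(0.05, 0.45, n),  # soil moisture (fraction)
])
# Synthetic labels: steeper, wetter pixels fail more often (illustration only)
logit = 0.08 * X[:, 0] + 0.02 * X[:, 1] + 4.0 * X[:, 2] - 6.0
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
susceptibility = model.predict_proba(X)[:, 1]  # per-pixel probability map
print(susceptibility[:5])

In practice, labels would come from event inventories and the predictors from EO-derived terrain, rainfall, and soil products, with validation against held-out sites.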
Show/Hide Description
CCT.90: Leveraging Earth Observation for Resilient and Sustainable Futures
D.10: Data and Society — Remote Sensing for Climate Change Impacts
This session examines how advancements in Earth Observation (EO), artificial intelligence (AI), and data analytics are driving innovation in support of resilient and sustainable societies. As the world faces increasing environmental, climatic, and socio-economic challenges, EO technologies are becoming indispensable tools for understanding complex systems, anticipating risks, and guiding informed action. The session brings together diverse contributions that illustrate how the integration of technological progress with environmental and societal applications — from local monitoring frameworks to global-scale modeling — is redefining the role of EO as a cornerstone for evidence-based decision-making and sustainable development.
A central focus of the session is the evolution of EO capabilities through methodological and technological innovation. Developments in high-resolution sensors, hyperspectral and SAR imaging, as well as the integration of multi-source datasets, are significantly enhancing the precision, accuracy, and timeliness of EO data. These advances are enabling new applications in climate adaptation, ecosystem monitoring, and sustainable resource management, contributing to better understanding and mitigation of human and climate-induced changes. The integration of EO data into multi-scale models further strengthens our ability to monitor environmental dynamics, assess vulnerabilities, and design effective, science-based responses.
Particular attention is given to the convergence of AI, machine learning, and edge computing, which are transforming EO data processing and analysis. These technologies allow for the extraction of meaningful information from vast data streams in near real time, improving both the efficiency and responsiveness of monitoring systems. Onboard processing and distributed computing architectures are reducing data latency and enhancing the autonomy of EO systems, opening new frontiers for operational applications ranging from disaster response and environmental protection to urban and agricultural management.
Looking toward the future, the session explores the emergence of intelligent and adaptive EO infrastructures. The integration of digital twin frameworks — virtual representations of Earth systems dynamically linked to EO data — and the optimization of satellite constellations are paving the way for more agile, scalable, and collaborative monitoring ecosystems. These infrastructures are expected to play a critical role in supporting global initiatives such as the Green Deal, the Sustainable Development Goals (SDGs), and climate neutrality targets by providing timely, interoperable, and actionable insights.
Altogether, the contributions presented in this session provide a comprehensive and forward-looking perspective on how EO is evolving from observation to action. They demonstrate how technological innovation, when combined with multidisciplinary approaches and a focus on societal needs, can transform raw data into predictive and actionable knowledge. By linking scientific excellence with policy relevance and operational usability, this session highlights the transformative power of EO in addressing environmental challenges, enhancing societal resilience, and fostering a more sustainable and equitable future for all.
Show/Hide Description
CCT.91: Earth Observation in Latin America: Regional Efforts, Challenges, and Innovations
SP.1: Special Themes — Geoscience and Remote Sensing in Developing Countries
This session, organized by the GRSS Latin America Initiative, is dedicated to showcasing the unique Earth Observation (EO) landscape of Latin America. The region's unparalleled ecological significance, from the Amazon rainforest to extensive agricultural frontiers, is matched by acute vulnerabilities to climate change, deforestation, and water scarcity. The scope of this session encompasses cutting-edge research that leverages the full spectrum of remote sensing technologies—including optical, hyperspectral, and SAR—combined with advanced Artificial Intelligence (AI) and Machine Learning (ML) to address these critical regional challenges. We will feature work on precision agriculture for disease detection, water resource monitoring, soil moisture assessment, and illegal activity monitoring, highlighting home-grown algorithms and methodologies developed within the region.
The importance of this session lies in its focus on region-specific solutions. While global EO paradigms are essential, local expertise is crucial for adapting technologies to Latin America's distinct biomes and socio-economic contexts. This session provides a vital platform to amplify these voices, foster intra-regional collaboration, and demonstrate how local innovations contribute to global environmental knowledge and sustainable development goals.
This proposal directly complements the General and Special Themes of IGARSS 2026, such as "Land Cover/Land Use Change" and "Disasters," by providing a critical, place-based application context. It moves beyond generic methodology to show how these technologies are being operationalized in one of the world's most environmentally critical and complex regions. By focusing on regional efforts and challenges, this session enriches the technical program with underrepresented perspectives and case studies, perfectly aligning with the conference theme, "The Future of Earth Observations," by illustrating how regional collaboration and AI-driven innovation are shaping actionable and equitable solutions for the planet's future.
Show/Hide Description
CCT.92: Explainable Artificial Intelligence for Big Data and Scientific Discovery
SP.2: Special Themes — Findable, Accessible, Interoperable, Re-usable, Understandable, Secure, and Trustworthy (FAIRUST) Best Practices
The integration of artificial intelligence (AI) into the analysis of massive, heterogeneous, and high-dimensional datasets has revolutionized scientific discovery across domains such as Earth observation, climate science, urban planning, and public health. However, the increasing complexity of AI models, particularly deep learning systems, has led to a critical need for transparency, interpretability, and trustworthiness in AI-driven decision-making.

This workshop addresses the emerging field of Explainable AI (XAI), focusing on the design, evaluation, and deployment of techniques that make AI models more understandable and accountable. We aim to bridge the gap between AI researchers, geoscience researchers and engineers, and system engineers by fostering interdisciplinary collaboration and addressing practical challenges in making AI systems more transparent and trustworthy, which is vital for adopting AI in high-stakes geoscience applications.

Relevant topics include, but are not limited to:
- Post hoc explanation methods (e.g., SHAP, LIME, saliency maps) and interpretable model architectures, especially tailored to geospatial data (a minimal saliency sketch follows this list)
- Concept-based explanations and counterfactual reasoning for validating geological principles in complex Earth systems
- Human-in-the-loop explainability and interactive AI systems
- Evaluation frameworks for the fidelity, robustness, and usefulness of explanations
- Applications in high-stakes domains such as Earth and environmental sciences, satellite data, healthcare, social sciences, and beyond
- Responsible and trustworthy AI practices in high-stakes domains
- Synergies between explainability and privacy, fairness, or robustness in large-scale data analytics
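
As referenced in the first topic above, a minimal, model-agnostic occlusion-saliency sketch, one simple post hoc explanation method; the predict callable and the toy image chip are hypothetical stand-ins for a real model and EO data.

# Minimal occlusion-saliency sketch (one post hoc XAI method).
# `predict` is a hypothetical callable returning a class score for an image.
import numpy as np

def occlusion_saliency(image, predict, patch=8, baseline=0.0):
    """Score drop when each patch is occluded -> coarse saliency map."""
    h, w = image.shape[:2]
    base_score = predict(image)
    sal = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = image.copy()
            occluded[i:i + patch, j:j + patch, ...] = baseline
            sal[i // patch, j // patch] = base_score - predict(occluded)
    return sal  # large values mark patches the model relies on

# Toy usage with a stand-in "model" that just averages one band
img = np.random.rand(64, 64, 4)  # e.g., a 4-band EO chip
saliency = occlusion_saliency(img, predict=lambda x: float(x[..., 3].mean()))
print(saliency.shape)  # (8, 8)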

This workshop complements the General Theme of IGARSS 2026 by advancing the understanding and implementation of explainability in AI-driven data analytics and modeling pipelines. It also aligns with Special Themes around space technology, climate adaptation, and sustainability, where transparent and interpretable models are essential for scientific validation and stakeholder trust.

Attendees will gain insights into the latest advances in XAI, practical tools for evaluating and deploying explanations in big data settings, and opportunities for interdisciplinary collaboration across research and application domains.
Show/Hide Description
CCT.93: Synthetic Remote Sensing Image Synthesis, Quality and Deepfake Detection
SP.2: Special Themes — Findable, Accessible, Interoperable, Re-usable, Understandable, Secure, and Trustworthy (FAIRUST) Best Practices
Advances in generative artificial intelligence models are making it increasingly easy to generate high quality and diverse synthetic imagery. Recent studies have demonstrated the use of diffusion models to generate synthetic remote sensing imagery while also enabling detailed control over various aspects of image generation and editing such as scene content, object layout, and style. This session seeks submissions focused on two areas of research: 1) remote sensing image synthesis with state-of-the-art generative models, and 2) detection and analysis of synthetic remote sensing imagery produced by generative models. Special emphasis will be placed on methods for evaluating image quality and for detecting and characterizing synthetic remote sensing data. Works focused on developing and analyzing benchmark synthetic remote sensing datasets are also welcome.
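
As one illustration of the detection side, a minimal sketch of a spectral-domain cue sometimes used in synthetic-image forensics (generative upsampling can leave periodic frequency artifacts); the feature and any decision threshold are assumptions for illustration, not a tested detector.

# Illustrative sketch: high-frequency spectral energy as one forensic cue.
import numpy as np

def highfreq_energy_ratio(gray):
    """Fraction of spectral energy outside a low-frequency disc."""
    f = np.fft.fftshift(np.fft.fft2(gray))
    power = np.abs(f) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2)
    low = power[r < min(h, w) / 8].sum()
    return 1.0 - low / power.sum()

img = np.random.rand(256, 256)  # stand-in for a grayscale image chip
print(highfreq_energy_ratio(img))
# Compare the distribution of this ratio for real vs. synthetic chips.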

High-quality synthetic remote sensing imagery poses both great benefits and risks to the remote sensing community and society at large. Advanced methods for measuring synthetic data quality and verifying the authenticity of remote sensing data are needed to ensure data trustworthiness and mitigate downstream consequences of synthetic AI generated media produced for malicious intent (deepfakes). Synthetic image analysis and detection can provide insights into limitations of current generative models and uncover future research directions and challenges for improvement. This session is cross-disciplinary, drawing on expertise from the generative AI, remote sensing, and image forensics research communities.

This session is relevant to special theme number 2: Findable, Accessible, Interoperable, Re-usable, Understandable, Secure, and Trustworthy (FAIRUST) Best Practices, with an emphasis on safety and trustworthiness in remote sensing data. Verifying data integrity and data trustworthiness is becoming increasingly difficult as generative models continue to improve. This session is also relevant to the general theme A: AI and Big Data, given its close relationship to generative computer vision models such as GANs and diffusion models.
Show/Hide Description
CCT.94: Datasets, Benchmarks and Standards for Scalable, Data-centric and Trustworthy Earth Observation AI
SP.3: Special Themes — Artificial Intelligence Ethics and Governance in Remote Sensing
The rapid emergence of large-scale and foundation models is transforming Earth observation (EO) and geospatial data analysis. As models grow in scale and complexity, the foundation of progress increasingly depends on the quality, diversity, and interoperability of underlying datasets, as well as the existence of standardized benchmarks and evaluation practices. However, the current landscape of EO AI remains fragmented — datasets often vary in quality and accessibility, benchmarks lack standardization, and evaluation methods are not always comparable across modalities, sensors, or tasks. This session addresses these challenges by focusing on the data, benchmarking, and standardization frameworks necessary to build scalable and trustworthy geospatial AI systems.
We invite contributions that advance the creation, curation, and governance of large-scale EO datasets across multiple sensors, resolutions, and temporal domains. Particular emphasis is placed on open datasets that enable reproducible research and cross-domain learning. Equally important are efforts that establish consistent benchmarking protocols, shared metrics, and evaluation frameworks for assessing model robustness, generalization, uncertainty, and fairness. The session also seeks to highlight emerging standards and best practices — such as those developed within GEO, CEOS, OGC, and ISO communities — that facilitate interoperability, metadata harmonization, and transparent model reporting.
Submissions may explore topics such as dataset development for multimodal pretraining, benchmark design for diverse geographies or sensor types, scalable evaluation pipelines, and community-driven platforms for dataset and model sharing. We particularly encourage papers introducing open benchmarks, standardized protocols, or collaborative initiatives that bridge the gap between AI, remote sensing, and Earth science communities.
By bringing together experts from academia, industry, and international agencies, this session aims to accelerate the establishment of a common ecosystem for geospatial foundation models and AI-driven EO applications. Establishing shared datasets, benchmarks, and standards will not only enable fair comparison and reproducibility but also foster trust, transparency, and global participation in the development of next-generation EO AI. Ultimately, the session seeks to catalyze a community-wide dialogue on how to move from fragmented datasets toward an integrated, standards-based foundation for scalable and reliable geospatial intelligence.
Show/Hide Description
CCT.95: Responsible AI for Earth Observation: Building Trustworthy, Transparent, and Ethical Pathways for the Future
SP.3: Special Themes — Artificial Intelligence Ethics and Governance in Remote Sensing
The growing integration of Artificial Intelligence (AI) in Earth Observation (EO) is transforming our capacity to monitor, understand, and address environmental and societal challenges. However, as the influence of AI expands, so too does the need for responsibility, transparency, and fairness in its design, deployment, impact assessment, and role in fostering efficient policies. These principles become even more vital in high-stakes contexts such as emergency responses to natural hazards, where timely, accurate, and trustworthy AI-driven insights are essential to mitigate risks and protect vulnerable communities. This session will therefore focus on Responsible AI for EO, exploring both methodological foundations and practical applications that ensure AI-driven EO systems operate ethically and inclusively. This initiative aims to consolidate community efforts on topics such as AI for social good, bias detection and mitigation, data and model transparency, FAIR and open science principles, AI safety and robustness, ethical frameworks for global sustainability, and the role of AI in measuring, informing, and supporting public policies.
Participants will share insights into best practices, standards, and frameworks that promote responsible innovation across both the methodological and application dimensions of AI for Earth Observation. The session will explore how principles of transparency, accountability, fairness, and reproducibility can be integrated throughout the AI lifecycle - from data curation and model development to deployment and impact assessment. Emphasis will be placed on fostering open, explainable, and trustworthy approaches that advance scientific rigor while ensuring societal benefit.

Relevance and Objectives:
* Highlight the growing importance of Responsible AI as the backbone for trustworthy EO solutions.
* Showcase practical methodologies for detecting and mitigating bias, ensuring fairness, and maintaining transparency in EO-based AI pipelines (a minimal bias-diagnostic sketch follows this list).
* Encourage open science practices by adopting FAIR principles and reproducible benchmarking initiatives.
* Foster interdisciplinary collaboration among scientists, policymakers, and industry and society stakeholders to ensure AI for EO serves the collective good.
* Build community consensus toward establishing guidelines and ethical frameworks aligned with international standards and sustainability goals.
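
As referenced in the bias objective above, a minimal sketch of one simple fairness diagnostic, the per-group accuracy gap; the group labels and predictions below are illustrative.

# Minimal sketch: per-group performance disparity as a bias-detection signal.
import numpy as np

def accuracy_gap(y_true, y_pred, groups):
    """Max difference in accuracy across groups (0 = perfectly uniform)."""
    accs = {g: float(np.mean(y_pred[groups == g] == y_true[groups == g]))
            for g in np.unique(groups)}
    return max(accs.values()) - min(accs.values()), accs

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 0])
groups = np.array(["urban", "urban", "rural", "rural",
                   "urban", "rural", "urban", "rural"])
gap, per_group = accuracy_gap(y_true, y_pred, groups)
print(gap, per_group)  # 0.5 gap: the model underperforms on rural samples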

Commitment to Diversity, Equity, Inclusion, Accessibility, and Belonging (DEIAB):
This initiative embodies DEIAB values both in its composition and its mission. The session will be promoted through IADF TC newsletters, GRSS communication channels, and professional networks, encouraging broad participation across global regions. By focusing on Responsible AI, the session inherently promotes inclusivity, ensuring that EO-derived insights and AI models reflect the realities and needs of underserved and marginalized communities. Through discussions on bias mitigation, data accessibility, and open-source collaboration, this effort aims to reduce inequities in digital and scientific participation while fostering ethical, transparent, and socially responsible use of AI in Earth Observation.
Show/Hide Description
CCT.96: Harnessing Quantum Technologies for Earth Observation and Scientific Discovery
SP.4: Special Themes — Quantum Technologies for Earth Observation
In recent years, quantum computing (QC) has experienced tremendous growth, evolving from theoretical concepts to tangible hardware prototypes. This emerging paradigm offers transformative potential for Earth Observation (EO), a field currently grappling with an unprecedented deluge of data from satellite constellations. Classical systems are increasingly strained by the volume, velocity, and complexity of petabyte-scale EO datasets, creating a computational bottleneck for critical challenges in climate modeling, disaster response, and resource management. This theme proposal posits that Quantum Computing and Quantum Machine Learning (QML) are uniquely positioned to address these challenges. QC offers enhanced data processing capabilities and superior pattern recognition, enabling the analysis of vast, high-dimensional datasets far more efficiently than classical methods. By harnessing quantum phenomena like superposition and entanglement, QML algorithms can explore complex feature spaces to extract subtle patterns vital for scientific discovery.

This theme will convene experts to explore three core pillars of Quantum-EO applications:
1. Quantum Machine Learning for Data Analysis: We will focus on the application of hybrid quantum-classical models for complex EO data classification and segmentation. This includes exploring Quantum Support Vector Machines (QSVM) and Quantum Convolutional Neural Networks (QCNN) for tasks like land-cover classification, cloud screening, and anomaly detection in hyperspectral and SAR imagery. These methods may offer a quantum advantage in processing high-dimensional data from next-generation sensors. Furthermore, the theme will specifically address the analysis of complex EO time series data using quantum recurrent models, such as Quantum Long Short-Term Memory (QLSTM). Satellite systems generate vast temporal datasets tracking dynamic processes like ice melt, deforestation, and sea-level rise. QLSTM offers a potential pathway to modeling these complex sequential patterns more effectively, enabling earlier detection of anomalous environmental trends and enhancing our understanding of dynamic Earth systems. (A minimal hybrid quantum-classical sketch follows this list.)

2. Quantum Optimization for Mission Operations: Earth Observation is underpinned by complex logistical challenges. Quantum optimization techniques can be applied to resource allocation and traffic management problems, such as optimizing satellite constellation tasking, scheduling data downlinks, and managing distributed data workflows across ground stations.

3. Quantum Simulation for Earth Systems: Beyond data analysis, QC holds immense promise for Hamiltonian simulation, which is vital for understanding quantum systems and, by extension, complex physical systems. This track will investigate the use of quantum algorithms to simulate geophysical and climate dynamics, potentially solving differential equations for ice sheet, atmospheric, or oceanic models that are intractable on classical supercomputers.
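
To make pillar 1 concrete, a minimal hybrid quantum-classical sketch using the PennyLane simulator; the circuit layout, feature values, and untrained weights are illustrative assumptions rather than a recommended architecture.

# Hedged sketch of a hybrid quantum-classical classifier on a simulator.
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(features, weights):
    qml.AngleEmbedding(features, wires=range(n_qubits))       # encode pixel features
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))  # trainable layers
    return qml.expval(qml.PauliZ(0))                          # class score in [-1, 1]

weights = np.random.default_rng(0).uniform(0, np.pi, (2, n_qubits))
features = np.array([0.1, 0.7, 0.3, 0.9])  # e.g., 4 normalized band values
print(circuit(features, weights))
# In practice the weights are optimized by a classical loop (e.g., gradient
# descent on a classification loss): that loop is the "hybrid" part.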

This theme will bring together researchers and experts from academia, industry, and governmental agencies (e.g., NASA) to foster an interdisciplinary dialogue. The goal is to assess the practical application of noisy intermediate-scale quantum (NISQ) devices to real-world EO problems, identify key benchmarks, and contribute to shaping future research directions in this rapidly accelerating field.
Show/Hide Description
CCT.97: Hybrid Quantum-Classical Computing for Earth Observation
SP.4: Special Themes — Quantum Technologies for Earth Observation
Quantum technology has profoundly influenced modern life, enabling critical developments such as lasers, transistors, and GNSS. These advances, in turn, catalyzed innovations like modern computing, smartphones, and the Internet. Current research aims to harness quantum phenomena, particularly superposition and entanglement, for next-generation applications in communication, computation, and sensing. Quantum computing holds the potential to transform remote sensing (RS) data processing by introducing a fundamentally different computing paradigm. Theoretical results have already highlighted potential advantages in fields such as cryptography, materials science, and chemistry. Although noisy intermediate-scale quantum (NISQ) hardware is still in its early stages and has certain limitations, practical implementations of quantum algorithms on small problem instances have already been performed, with promising results. Quantum computing opens up possibilities for solving optimization and complex simulation problems, enhancing the quality of, and potentially introducing novel approaches to, machine learning models, as well as accelerating subsets of processing workloads. Although quantum chips are not yet fully integrated into supercomputers, future integration could unlock additional performance gains, for example in digital twin computations, expediting satellite planning solutions and enabling real-time sensor data processing. The topics of discussion for this session, although not exhaustive, will include:

Quantum Algorithms and Techniques:
- Quantum-assisted and quantum-inspired algorithms, including theoretical analyses, simulations, and preliminary results.
- Classical data embedding into quantum states, quantum state transformations, quantum circuit design, and quantum state measurement (a minimal embedding sketch follows this list).
- Quantum machine learning techniques applied to EO data.
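
A minimal sketch of the embed-transform-measure pattern listed above, using PennyLane's simulator; the four-value input and the chosen gates are assumptions for illustration only.

# Illustrative embed-transform-measure sketch on a 2-qubit simulator.
import numpy as np
import pennylane as qml

dev = qml.device("default.qubit", wires=2)

@qml.qnode(dev)
def embed_and_measure(pixels):
    # Embed 4 classical values into the amplitudes of a 2-qubit state
    qml.AmplitudeEmbedding(pixels, wires=[0, 1], normalize=True)
    qml.Hadamard(wires=0)           # a simple state transformation
    qml.CNOT(wires=[0, 1])
    return qml.probs(wires=[0, 1])  # measurement: outcome probabilities

pixels = np.array([0.2, 0.5, 0.1, 0.9])  # e.g., one pixel across 4 bands
print(embed_and_measure(pixels))         # 4 probabilities summing to 1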

Hybrid Computing Environments:
- How quantum and classical algorithms in hybrid High-Performance Computing (HPC) environments can address novel EO challenges.
- Integration of HPC within cloud EO data platforms to improve processing efficiency.
- Optimizing AI models for EO using GPU-accelerated clusters within European HPC infrastructures.
- Enhancing EO algorithm execution using heterogeneous HPC systems (GPU-boosted, memory-, storage-, and compute-optimized clusters).

Applications and Impact:
- Leveraging quantum algorithms to improve the accuracy, efficiency, and scalability of large-scale EO data processing.
- Applying quantum machine learning for enhanced predictive modeling and decision-making in Earth systems.
- Investigating quantum computing’s potential to revolutionize climate modeling and simulation for better climate change assessments.

Show/Hide Description
CCT.98: Quantum Sensing for Earth Observation
SP.4: Special Themes — Quantum Technologies for Earth Observation
This theme is being organized by the IEEE GRSS Technical Committee for Quantum Earth Science and Technology (QUEST).

Submissions in line with the theme are solicited from international space agencies, industry, and academia on enabling technology developments, space missions, and observations.

The theme focuses on research and development in quantum sensors and sensing technology for Earth observation, a unique capability for observing a diverse variety of geophysical phenomena from orbit around the Earth. This emerging capability has stimulated new areas of remote sensing research that now attract the attention of scientists and engineers worldwide.

Quantum sensing is among the most exciting quantum technologies, with great potential to deliver societal benefits in the next decade and beyond. Quantum sensing uses quantum properties to achieve unprecedented measurement sensitivity and performance, including quantum-enhanced methodologies that outperform their classical counterparts. Quantum sensors are highly relevant to overlapping areas such as precision navigation and timing; electromagnetic field sensing; attitude control; communications; and gravimetry. Typical quantum sensors exploit techniques such as atomic systems, matter waves, quantum entanglement, quantum superposition of states, quantum illumination methods, and the manipulation of photons and atoms in general. Significant gains include technologies important for a range of space-based remote sensing, in situ measurements, metrology, interferometry, quantum communication, ranging, imaging, radar and lidar receivers, and gravity measurements.

This community contributed session will focus on quantum sensing techniques and technologies and their geoscience and remote sensing applications related to Earth science, extending to astrophysics and planetary science as well.
Show/Hide Description
CCT.99: From Demo to Dependable: Operating Commercial Small-Sat X-Band SAR Constellations at Scale and Into the Next Decade
SP.5: Special Themes — Commercial and Industry-led Remote Sensing
The small-satellite synthetic aperture radar (SAR) sector has entered a new operational phase in which sustained performance, calibration consistency, and advanced-mode reliability are key differentiators. This session consolidates community experience to guide the next decade of commercial SAR missions. It continues a series of highly attended Community Contributed Sessions at previous IGARSS meetings that have focused on commercial small-satellite X-band SAR missions from Capella Space, Umbra, ICEYE, and related operators. Building on that foundation, the 2026 session advances the discussion from initial demonstrations to the operational realities of sustaining constellations that have now been in commercial service for nearly a decade.

Commercial small-satellite X-band SAR constellations have evolved from prototypes into globally tasked, high-resolution operational systems. This session will examine the realities of sustaining and advancing these fleets as they enter long-term commercial maturity, explore the addition of advanced radar modalities, and pursue the design of their next-generation systems.

The session will explore how commercial small-satellite SAR operators balance the demands of continuous, reliable operations with the pursuit of advanced radar capabilities. As these constellations mature, mission teams must sustain consistent tasking, calibration, and data delivery while allocating limited power, thermal, and downlink resources toward innovation. Many operators have built extensive automation into their ground and space segments—from dynamic scheduling and health monitoring to autonomous calibration and cloud-based processing pipelines—that enable both reliability and agility at scale. Increasingly, edge and onboard computing are used to pre-process, compress, or triage data in orbit, allowing faster turnaround and more efficient use of downlink bandwidth. Several operators have now demonstrated complex radar modes—InSAR, sliding and staring spotlight, wide-swath ScanSAR, and emerging bistatic or multistatic configurations—and are transitioning these from isolated demonstrations to dependable, repeatable services. The session will highlight how teams plan, test, and operationalize these modes within existing mission constraints, emphasizing the trade-offs between operational stability, automation, and technological advancement that define the next stage of commercial SAR evolution.
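
For readers new to the InSAR mode mentioned above, a minimal numpy sketch of interferogram formation from two co-registered single-look complex (SLC) images; the synthetic SLCs and the nominal X-band wavelength are assumptions for illustration.

# Minimal sketch: interferogram formation from two co-registered SLCs.
import numpy as np

wavelength = 0.031  # m, approximate X-band

rng = np.random.default_rng(1)
shape = (128, 128)
# Two hypothetical SLCs differing by a synthetic phase ramp
common = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
ramp = np.exp(1j * np.linspace(0, 4 * np.pi, shape[1]))[None, :]
slc1, slc2 = common, common * ramp

ifg = slc1 * np.conj(slc2)  # complex interferogram
phase = np.angle(ifg)       # wrapped phase in (-pi, pi]
# For repeat-pass InSAR, line-of-sight motion per radian of unwrapped phase:
los_per_radian = wavelength / (4 * np.pi)
print(phase.shape, los_per_radian)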

Furthermore, the session will highlight ongoing preparations for the next generation of small-satellite SAR constellations designed for higher bandwidth, greater onboard processing capacity, and more advanced radar capabilities. As data-stack–based applications demand faster access and more dynamic tasking, operators are moving toward architectures that support near-real-time responsiveness, direct access, and operational interferometry. Yet these advances introduce tension between agility and consistency: rapid retasking and high maneuverability can degrade orbit control, calibration stability, and coherent collect compatibility. The session will explore how operators are addressing this trade-off through innovations in spacecraft autonomy, coordinated scheduling, and constellation management. Discussion will also cover the challenges of orbit and attitude control in small satellites for enabling advanced radar modes such as InSAR and bistatic collection.

The intent is to foster a shared understanding of how commercial very-high-resolution SAR systems can balance reliability, cost, and innovation while meeting market demands. A concluding discussion will identify best practices, open research challenges, and forward-looking priorities—particularly multistatic architectures, cross-constellation interoperability, and edge/on-orbit processing—that will define the next generation of commercial SAR.
Show/Hide Description
CCT.100: Making Research Easier with the Cloud in an Era of Big Data
SP.5: Special Themes — Commercial and Industry-led Remote Sensing
The exponential growth in geospatial data and computational demands has created both challenges and opportunities for researchers working with remote sensing datasets. Traditional computing systems often struggle with the size and complexity of modern geospatial data, particularly when dealing with hyperspectral imagery, time-series satellite data, and multi-source sensor integration. This session explores how cloud computing is revolutionizing geospatial research and analysis in the era of big data. Members of the community are invited to contribute presentations for this session in line with this theme.

Cloud technologies are fundamentally changing how researchers access, process, and analyze large-scale geospatial datasets. By leveraging cloud infrastructure, researchers can now efficiently manage petabyte-scale archives, automate complex workflows, and perform sophisticated analyses without requiring local high-performance computing resources. The integration of AI and machine learning further enhances these capabilities, enabling automated feature extraction, pattern detection, and predictive modeling at unprecedented scales.
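
A hedged sketch of the cloud-native access pattern described above: search a public STAC API and lazily stack results as a dask-backed array. The endpoint, collection name, and asset keys below are assumptions and should be checked against the catalog actually used.

# Hedged sketch: STAC search + lazy stacking (nothing downloads until compute).
import pystac_client
import stackstac

catalog = pystac_client.Client.open("https://earth-search.aws.element84.com/v1")
search = catalog.search(
    collections=["sentinel-2-l2a"],          # assumed collection name
    bbox=[7.0, 45.0, 7.5, 45.5],
    datetime="2024-06-01/2024-06-30",
)
items = search.item_collection()

# stackstac builds a lazy (time, band, y, x) xarray DataArray
cube = stackstac.stack(items, assets=["red", "nir"], resolution=20)
nir = cube.sel(band="nir")
red = cube.sel(band="red")
ndvi = (nir - red) / (nir + red)
print(ndvi)  # computation happens only when .compute() is called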

This session presents solutions through a three-phase approach: identifying technical challenges, establishing best practices, and demonstrating successful implementations. Industry leaders from cloud providers, data suppliers, and research institutions will share insights on topics ranging from data management to advanced analytics. We'll explore how cloud-native solutions are democratizing access to Earth observation data, enabling real-time processing capabilities, and fostering collaboration across organizations.

Each presentation will address key aspects of modern geospatial workflows, with particular emphasis placed on how emerging technologies like GenAI and cloud data warehouses are transforming geospatial analysis and making sophisticated tools accessible to researchers at all levels. The session will conclude with interactive discussions, allowing attendees to engage directly with technical experts and industry leaders.
Show/Hide Description
CCT.101: Measuring the Atmospheric Wind Profile
SP.5: Special Themes — Commercial and Industry-led Remote Sensing
This session focuses on the development of instrument concepts and integrated mission architectures to address the need for non-real-time global wind profile data in support of the space-based environmental monitoring community. 

A significant gap currently exists in space-based observing systems for measuring atmospheric wind profiles, particularly three-dimensional (3D) winds. These winds are defined as horizontal wind vectors distributed throughout both the vertical and horizontal extent of the atmosphere. Improving the ability to observe 3D winds from space would have a transformative impact on global and regional environmental prediction systems, including numerical weather prediction, climate monitoring, and severe weather forecasting. 

Advancing technologies that can accurately and consistently measure 3D winds from orbit or sub-orbital platforms has been an area of growing interest. These technologies may be either passive or active and could be deployed on a variety of platforms, such as satellites in low Earth orbit, high-altitude balloons, and airborne systems. Examples include Doppler wind lidars, radar systems, and combinations of passive infrared and microwave sounders capable of tracking atmospheric moisture and temperature fields to infer wind vectors. 
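
For the Doppler wind lidar case, the basic relation is that a scatterer moving with radial velocity v_r shifts the return frequency by Δf = 2·v_r/λ. A short worked example follows; the wavelength, measured shift, and beam geometry are illustrative assumptions.

# Worked example: radial wind speed from a measured Doppler shift.
import math

wavelength = 1.55e-6  # m, a common coherent-lidar wavelength (assumption)
delta_f = 12.9e6      # Hz, hypothetical measured Doppler shift

v_radial = wavelength * delta_f / 2.0           # ~10 m/s along the beam
elevation = math.radians(30.0)                  # beam angle above horizontal
v_horizontal = v_radial / math.cos(elevation)   # if vertical wind is negligible
print(f"{v_radial:.1f} m/s radial, {v_horizontal:.1f} m/s horizontal component")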

Key objectives of this session include: 

* Assessing the current state and readiness level of various technologies designed to measure 3D winds from space or near-space platforms.
* Identifying promising instrument concepts that could fill existing data gaps in wind observations, especially in regions that are poorly observed today (e.g., over oceans or in the upper troposphere and lower stratosphere).
* Exploring mission architecture concepts that combine different sensors or platforms to optimize spatial and temporal coverage. 
* Performing trade studies to evaluate the advantages and limitations of different sensing techniques (e.g., Light Detection and Ranging (LIDAR), Doppler radar, passive sounders) and platform types (e.g., single satellites, satellite constellations, or hybrid architectures). 

Ultimately, presentations within this session should guide the development of innovative mission concepts that can provide critical 3D wind observations and significantly enhance space-based environmental monitoring capabilities.
Show/Hide Description
CCT.102: Global Food-and-Water Security-support Analysis Datasets at 10-30m resolution using Remote Sensing Embedded Data and AI models on the Cloud
L.5: Land Applications — Agriculture
We currently have a large collection of time-series data from satellite sensors at 10-30 m resolution, from Sentinel’s 10 m and Landsat’s 30 m imagery covering the entire planet. In addition, we have PlanetScope’s 3 m data, which is ideal for creating reference datasets. Further, the 64-band AlphaEarth Foundations Satellite Embedding dataset in Google Earth Engine, at 10 m spatial resolution and covering the entire planet from 2017 to 2024, provides a paradigm shift in understanding, processing, and deciphering information for myriad applications. These datasets are harmonized, normalized, and standardized, and are ready to use for multiple applications with AI models and rich reference data. The AlphaEarth Foundations data come from multiple sources such as optical and radar satellite images, LiDAR, long-term meteorological data, and even textural data. Every 10 x 10 m pixel in the world is represented by a 64-band embedding.
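
A hedged sketch of accessing such an embedding collection in the Earth Engine Python API; the asset ID below is our assumption of the published collection name and should be verified in the Earth Engine Data Catalog before use.

# Hedged sketch: sampling the 64-band embedding at one point in Earth Engine.
import ee

ee.Initialize()  # assumes prior ee.Authenticate() / project configuration

embeddings = (
    ee.ImageCollection("GOOGLE/SATELLITE_EMBEDDING/V1/ANNUAL")  # assumed asset ID
    .filterDate("2024-01-01", "2025-01-01")
    .mosaic()  # one 64-band, 10 m image for 2024
)

point = ee.Geometry.Point([78.0, 17.4])  # hypothetical field location
sample = embeddings.sample(region=point, scale=10).first()
print(sample.getInfo())  # 64 embedding values for that 10 m pixel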

This community contributed session (CCS) will bring together scientists involved in global agricultural cropland mapping and water use assessments at the highest available resolutions (3 m to 30 m) using Earth Observation (EO) data in support of the world’s food and water security analysis. The session will focus on the paradigm shift in producing myriad global cropland products involving petabyte-scale big-data analytics of multiple satellite sensors, artificial intelligence (AI), machine learning/deep learning (ML/DL), and cloud computing on platforms such as Google Earth Engine or Amazon Web Services (AWS). The session will present and discuss various global cropland products, such as cropland extent, watering method (irrigated or rainfed), cropping intensity, crop types, and crop water productivity, produced using EO data at resolutions of 10-30 m covering the entire planet.
Show/Hide Description
CCT.103: New Generation hyperspectral Remote Sensing data to advance remote sensing science with specific emphasis on agriculture, water, and food security
S.4: Sensors — Spaceborne Hyperspectral Missions
Great advances are taking place in remote sensing science with data acquisitions from multiple new-generation hyperspectral sensors, such as: 1. Planet’s Tanager-1; 2. the German Aerospace Center’s (Deutsches Zentrum für Luft- und Raumfahrt, DLR) Earth Sensing Imaging Spectrometer (DESIS) onboard the International Space Station (ISS); 3. the Italian Space Agency’s (ASI) PRISMA (Hyperspectral Precursor of the Application Mission); 4. DLR’s Environmental Mapping and Analysis Program (EnMAP); and 5. Pixxel’s Firefly constellation. These sensors typically acquire data in hundreds of near-continuous hyperspectral narrowbands (HNBs). In contrast, multispectral sensors such as the Landsats, Sentinels, and PlanetScope Doves and SuperDoves acquire data in a few broad spectral bands along the electromagnetic spectrum. HNBs are typically ≤15 nanometers in bandwidth, whereas multispectral broadbands (MBBs) are typically >15 nm; many spectroradiometers have bandwidths as narrow as 1 nm. Hyperspectral sensors provide hundreds of HNBs along the electromagnetic spectrum (e.g., 400-2500 nm), whereas multispectral sensors provide around 10 MBBs over the same range. Hyperspectral data can thus be considered spectral “signatures,” whereas multispectral data provide only a few data points along the spectrum. Currently, we are acquiring remotely sensed data in four distinct modes.
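
To illustrate the HNB idea, a minimal sketch that selects the narrowbands nearest to target wavelengths from a hyperspectral cube and computes one narrowband index; the band centers, random cube, and chosen wavelengths are illustrative assumptions.

# Minimal sketch: nearest-band selection and a narrowband index from a cube.
import numpy as np

# Hypothetical cube: (bands, rows, cols) with band centers in nm
centers = np.arange(400, 2501, 10)  # ~210 narrowbands at 10 nm spacing
cube = np.random.rand(centers.size, 100, 100)

def band(nm):
    """Return the band whose center wavelength is nearest to `nm`."""
    return cube[np.argmin(np.abs(centers - nm))]

red, nir = band(670), band(800)
ndvi_nb = (nir - red) / (nir + red + 1e-9)  # one narrowband NDVI variant
print(ndvi_nb.shape, float(ndvi_nb.mean()))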

The overarching goal of this Community Contributed Session (CCS) is to seek papers from the world’s leading experts working with these new-generation hyperspectral sensors. A wide range of papers is invited, including those on understanding, processing, and developing methods and procedures to analyze the data for applications pertaining to agriculture, vegetation, and water. The session will present and discuss: 1. spectral libraries of agricultural crops to help train, test, and validate artificial intelligence (AI) and machine learning (ML) algorithms; 2. major philosophies of hyperspectral data analysis pertaining to agriculture, water, and food security, highlighting their strengths and limitations; 3. optimal hyperspectral narrowband (OHNB) data, where the best narrowbands are used and the redundant bands are sieved out; 4. the development and performance of hyperspectral vegetation indices (HVIs) that best characterize crop quantities; 5. classification methods and approaches, including artificial intelligence/deep learning, machine learning, and cloud computing; and 6. modeling of crop biophysical, biochemical, plant health, and plant structural quantities based on physical and empirical methods. The session will discuss the performance of hyperspectral narrowband (HNB) data compared with multispectral broadband (MBB) data, and examine where and how we can make advances in crop type classification, crop health and stress studies, and in quantifying crop biophysical and biochemical parameters.