Multitemporal foundation models for EO

Foundation Models (FMs) represent the latest leap forward in AI, following the era of Deep Learning. Trained on vast amounts of unlabeled data through self-supervised learning (SSL), these models capture rich patterns that can be applied to a wide array of downstream tasks—even with limited or no additional training data. This paradigm holds particular promise for Earth Observation (EO) and Earth Sciences by enabling breakthroughs in analytical, predictive, and even prescriptive capabilities.
In EO and Earth Sciences, FMs can significantly enhance applications such as weather prediction and geospatial semantic data mining. By analyzing large-scale climate and atmospheric datasets, they deliver more accurate forecasts across different time horizons and reveal complex patterns in environmental systems. Their latent space representations and embeddings also enable powerful insights while reducing the need for extensive labeled data—a critical advantage in remote sensing, where labeling is often expensive and time-consuming.
Despite these benefits, integrating FMs into EO workflows poses distinct challenges. EO data often spans multiple modalities, resolutions, and spectral bands, requiring specialized adaptation and careful model updating—especially for “digital twin” scenarios where AI must remain synchronized with real-world changes. Moreover, FMs demand significant computational resources and optimized training strategies, particularly when handling enormous, continuously growing geospatial datasets. Evaluating and benchmarking FMs for these specialized applications further complicates their deployment, as existing benchmarks may be limited in scope.
This session provides a comprehensive introduction to Prithvi-EO, a geospatial foundation model. We will begin by exploring the theoretical underpinnings of Prithvi-EO. Participants will then engage in a hands-on workshop focused on fine-tuning the model for specific downstream tasks. Finally, we will cover the deployment of the trained model and how to interact with it in an operational setting. Throughout the session, participants will learn best practices for data preparation and model fine-tuning.
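The fine-tuning portion follows a common pattern for geospatial foundation models: reuse a pretrained multitemporal encoder and train a lightweight task head on top (for example, a per-pixel segmentation head). The sketch below illustrates that pattern in PyTorch; the StandInEncoder, tensor shapes, synthetic data, and hyperparameters are illustrative placeholders, not the actual Prithvi-EO interface, which is covered in the hands-on blocks.
```python
# Minimal fine-tuning sketch (illustrative only): freeze a pretrained
# multitemporal encoder and train a small segmentation head on top.
# StandInEncoder, the tensor shapes, and the synthetic batch are
# placeholders, not the actual Prithvi-EO API.
import torch
import torch.nn as nn

class StandInEncoder(nn.Module):
    """Stands in for a pretrained backbone that maps a (B, C, T, H, W)
    image time series to per-pixel features of dimension embed_dim."""
    def __init__(self, in_channels=6, embed_dim=64):
        super().__init__()
        self.proj = nn.Conv3d(in_channels, embed_dim, kernel_size=1)
        self.pool_time = nn.AdaptiveAvgPool3d((1, None, None))

    def forward(self, x):                          # x: (B, C, T, H, W)
        feats = self.proj(x)                       # (B, D, T, H, W)
        return self.pool_time(feats).squeeze(2)    # (B, D, H, W)

class SegmentationHead(nn.Module):
    """Lightweight per-pixel classifier trained during fine-tuning."""
    def __init__(self, embed_dim=64, num_classes=2):
        super().__init__()
        self.classifier = nn.Conv2d(embed_dim, num_classes, kernel_size=1)

    def forward(self, feats):
        return self.classifier(feats)              # (B, num_classes, H, W)

encoder = StandInEncoder()                         # would be loaded with pretrained weights
head = SegmentationHead()

# Freeze the backbone; only the task head is updated.
for p in encoder.parameters():
    p.requires_grad = False

optimizer = torch.optim.AdamW(head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

# Synthetic batch standing in for (image time series, label mask) chips.
images = torch.randn(4, 6, 3, 224, 224)            # B, C, T, H, W
masks = torch.randint(0, 2, (4, 224, 224))         # per-pixel class labels

for step in range(3):                              # tiny loop, for illustration only
    with torch.no_grad():
        feats = encoder(images)
    logits = head(feats)
    loss = loss_fn(logits, masks)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss={loss.item():.4f}")
```
Freezing the backbone keeps the compute footprint small; depending on the size of the downstream dataset, some or all encoder layers can instead be unfrozen for full fine-tuning.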
Agenda
Block 1: Overview
- Fundamentals of multitemporal foundation models for EO
- IEEE GRSS ESI, NASA, and foundation models
Block 2: Hands-on
- Environment check
- Fine-tuning Prithvi-EO
Blocks 3 & 4: Hands-on
- Deploy the fine-tuned model
- Interact with the deployed fine-tuned models, including the TerraMind model (a minimal client sketch follows this agenda)
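Blocks 3 and 4 cover serving the fine-tuned model and querying it. As a rough illustration of the interaction step, the client below posts one image chip to an HTTP inference endpoint and prints the JSON response; the endpoint URL, payload schema, and model name are hypothetical placeholders, since the actual serving stack and API contract for the deployed Prithvi-EO and TerraMind models are provided during the workshop.
```python
# Illustrative client for querying a deployed fine-tuned model over HTTP.
# The endpoint URL, request fields, and model name are hypothetical
# placeholders; a real deployment defines its own API contract.
import base64
import json
import urllib.request

ENDPOINT = "http://localhost:8000/v1/predict"   # hypothetical inference service

def predict(chip_path: str, model_name: str = "prithvi-eo-finetuned") -> dict:
    """Send one image chip (e.g., a GeoTIFF) to the inference service
    and return the parsed JSON response."""
    with open(chip_path, "rb") as f:
        chip_b64 = base64.b64encode(f.read()).decode("ascii")

    payload = json.dumps({"model": model_name, "image": chip_b64}).encode("utf-8")
    request = urllib.request.Request(
        ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        return json.loads(response.read())

if __name__ == "__main__":
    print(predict("example_chip.tif"))
```
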
Instructors

Manil Maskey
Biography
Manil Maskey is a Senior Research Scientist with the National Aeronautics and Space Administration (NASA). He also leads the Advanced Concepts team within the Interagency Implementation and Advanced Concepts Team (IMPACT) at the Marshall Space Flight Center and the Science Mission Directorate's Artificial Intelligence initiative at NASA Headquarters. His research interests include computer vision, visualization, knowledge discovery, cloud computing, and data analytics. His career spans over 21 years in academia, industry, and government. Dr. Maskey is adjunct faculty in the UAH Atmospheric Science Department, a senior member of the Institute of Electrical and Electronics Engineers (IEEE), chair of the IEEE Geoscience and Remote Sensing Society (GRSS) Earth Science Informatics Technical Committee, a member of the American Geophysical Union (AGU) and the AGU Fall Meeting Planning Committee, a member of the European Geosciences Union (EGU), and a member of the Association for the Advancement of Artificial Intelligence (AAAI).

Sujit Roy
Biography
Dr. Sujit Roy is a Lead AI Researcher and Computer Scientist with NASA's Interagency Implementation and Advanced Concepts Team (IMPACT), a program under the Marshall Space Flight Center, where he spearheads cutting-edge machine learning work for Earth science missions. Roy led NASA's first open-source geospatial AI foundation model initiative (Prithvi-EO), forging a partnership with IBM Research and releasing the model on Hugging Face; for this effort he received IMPACT's 2023 Planet Award, which recognizes innovators whose ideas rapidly attract attention and resources. His research centers on foundation models, remote sensing time-series analysis, and climate applications of deep learning. His recent work includes the Prithvi WxC weather and climate foundation model, the Clifford neural operator, the WINDSET weather-AI benchmark, and a heliophysics foundation model supported by compute resources from the NSF NAIRR program, with results published in reputable journals and conferences. Holding a Ph.D. in Computer Science, Roy has more than a decade of R&D experience, including explainable AI research at the University of Manchester, and co-founded the neuro-AI start-up BrainAlive.

Iksha Gurung
Biography
Iksha Gurung is a Computer Scientist with the University of Alabama in Huntsville, supporting NASA's Interagency Implementation and Advanced Concepts Team (NASA-IMPACT). He leads the development and machine learning team at NASA-IMPACT. His projects include applying machine learning to the study of Earth science phenomena and scaling those solutions to production.

Muthukumaran Ramasubramanian
Biography
Muthukumaran Ramasubramanian received an M.S. in computer science from the University of Alabama in Huntsville (UAH), where he is currently pursuing a doctorate in computer science. He is also a Computer Science Researcher and leads the machine learning team for the NASA Interagency Implementation and Advanced Concepts Team (IMPACT) at UAH. His work focuses on using deep NLP techniques to surface novel relationships from large text corpora and on deploying deep learning solutions for detecting Earth science phenomena at a global scale. His research interests include machine learning, big data, computer vision, and scalable cloud services.