After the 1906 San Francisco earthquake, George Lawrence used kites to capture aerial views of the devastation, one of the first attempts to observe a disaster from a bird’s-eye view.
While the technology has moved from kites to airplanes and eventually to satellites, the essential goal remains the same: capturing geospatial images to understand Earth’s features for disaster assessment, environmental monitoring, and more.
As the world confronts escalating climate change and environmental challenges, the fusion of technology and data insights offers pathways to solutions.
Now an artificial intelligence (AI) collaboration between two giants of their industries, IBM and NASA, marks a significant stride toward redefining our ability to understand and respond to our planet’s dynamics, with the potential to reshape disaster management, environmental monitoring, and climate adaptation.
When and Where to Use Geospatial Data
Geospatial data plays a pivotal role in disaster management, spanning the preparedness, response, and recovery phases. During events like earthquakes, floods, and wildfires, real-time geospatial data facilitates damage assessment, identification of affected regions, and efficient relief planning.
In environmental monitoring, geospatial imaging acts as a sentinel for change by tracking deforestation, urban growth, and climate-induced alterations.
This data empowers policymakers to formulate sustainable strategies, safeguard fragile ecosystems, and manage resources effectively.
To combat climate change, geospatial data is used to monitor emissions, temperature fluctuations, and sea level rise. These observations inform strategies for both mitigating and adapting to climate effects.
In times of crisis, geospatial data plays a crucial role in humanitarian assistance, helping responders map affected regions, assess the extent of damage, and coordinate relief efforts.
Leveraging AI for Geospatial Data Analysis
Although geospatial data plays a pivotal role in tasks such as disaster management, environmental monitoring, and climate observation, the intricate nature of geospatial images poses significant difficulties for manual interpretation.
The proliferation of satellites and drones has caused geospatial data volumes to balloon, making manual analysis ineffective, time-consuming, and impossible to scale.
The situation is further aggravated by a shortage of qualified professionals available to conduct these analyses, resulting in delays.
Additionally, human analysts face constraints of limited capacity and subjective judgment, which can lead to inaccuracies and inconsistent outcomes.
Analysts may also struggle to fully grasp the context of an image, affecting the precision of their decisions.
Meanwhile, AI can now rapidly process vast volumes of imaging data at scale.
This ability empowers AI to consistently analyze real-time data streams, which is especially crucial in scenarios requiring swift responses like disaster management.
AI’s capacity for identifying intricate patterns helps mitigate the inherent subjectivity of human interpretation, yielding more uniform and precise outcomes.
By capturing contextual cues within geospatial data, such as terrain, season, and surrounding land cover, AI can make better-informed decisions.
Furthermore, by lessening dependency on experts, AI democratizes geospatial analysis, enabling individuals without specialist training to conduct sophisticated analyses in this domain.
The Challenge of AI for Geospatial Data Analysis
While AI holds great promise for geospatial applications, its effectiveness is limited by the scarcity and high cost of acquiring high-quality geospatial data, and the labor-intensive process of accurately labeling that data for specific purposes adds to the challenge.
Moreover, training models on large-scale, high-resolution geospatial images demands significant computational resources.
This poses a notable challenge, given NASA’s ambition to release 250,000 terabytes of data from new missions to scientists and researchers by 2024.
Training AI models on such extensive datasets carries high financial and environmental costs, but the benefits may outweigh them.
What Is a Foundation Model in AI?
To overcome these challenges, one viable approach is to build a foundation model on geospatial data.
A foundation model in AI is a model pre-trained on a large dataset, typically using self-supervised learning, to learn general patterns and features from the data. This general-purpose model then serves as a basis for developing more specialized and refined models.
To create a specialized AI model for a specific task or domain, the foundation model is fine-tuned on a smaller, task-specific dataset. This process lets the model leverage the knowledge gained during pre-training and adapt it to the task at hand.
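As a rough illustration of this workflow, the sketch below (in PyTorch) freezes a small stand-in for a pre-trained encoder and trains only a new task-specific head on a tiny synthetic labeled set; the encoder, data, and dimensions are hypothetical placeholders, not the actual IBM-NASA model.

```python
# A minimal PyTorch sketch of the pre-train/fine-tune pattern.
# The encoder below stands in for a foundation model's backbone; in
# practice you would load pre-trained weights rather than train from scratch.
import torch
import torch.nn as nn

encoder = nn.Sequential(  # stand-in for a pre-trained, general-purpose encoder
    nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 128)
)
encoder.requires_grad_(False)  # freeze the pre-trained knowledge

head = nn.Linear(128, 2)  # new task-specific head, e.g. flood / no-flood

# A tiny synthetic "labeled dataset" standing in for task-specific data.
x = torch.randn(32, 64)
y = torch.randint(0, 2, (32,))

optimizer = torch.optim.Adam(head.parameters(), lr=1e-3)  # only the head trains
loss_fn = nn.CrossEntropyLoss()

for _ in range(10):
    logits = head(encoder(x))
    loss = loss_fn(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final fine-tuning loss: {loss.item():.4f}")
```

Freezing the backbone keeps fine-tuning cheap; in practice, some or all encoder layers are often unfrozen at a lower learning rate once the new head has converged.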
Using a foundation model expedites development, reduces the data and cost needed to train specialized AI, and boosts model performance by building on existing knowledge.
This approach has become popular in various AI applications, enabling the creation of powerful and effective models with reduced training time and resource requirements.
IBM’s Geospatial Foundation Model
IBM, in collaboration with NASA, recently built a foundation model for geospatial data, released under the name Prithvi.
The key objectives are to lessen dependence on extensive geospatial data, lower training costs, and reduce the environmental impact of training AI models.
The model was trained on a year of Harmonized Landsat Sentinel-2 (HLS) satellite data covering the continental United States, then fine-tuned on labeled data for tasks such as flood and burn scar mapping.
Through this training, the model demonstrated a remarkable 15% improvement over existing methods while using only half the labeled data typically required.
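For context on what such training data looks like, HLS scenes are distributed as GeoTIFF rasters, one band per file. The sketch below stacks a few bands of a tile into a model-ready array; the file names are hypothetical, and the 0.0001 reflectance scale factor follows the HLS product documentation.

```python
# A minimal sketch of preparing an HLS tile for model input.
import numpy as np
import rasterio  # common library for reading GeoTIFF rasters

band_paths = [  # hypothetical local copies of per-band HLS files
    "HLS.S30.T10SEG.2023001.B02.tif",  # blue
    "HLS.S30.T10SEG.2023001.B03.tif",  # green
    "HLS.S30.T10SEG.2023001.B04.tif",  # red
]

bands = []
for path in band_paths:
    with rasterio.open(path) as src:
        bands.append(src.read(1).astype(np.float32))  # each file holds one band

# Stack into a (channels, height, width) array and scale raw digital
# numbers to surface reflectance (HLS uses a 0.0001 scale factor).
tile = np.stack(bands) * 1e-4
print(tile.shape)
```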
With additional fine-tuning, the foundation model can be repurposed for tasks such as deforestation monitoring, crop yield prediction, and greenhouse gas detection.
To foster broader access and application of AI, the model is available on Hugging Face, a popular open-source AI model hub. This democratization aims to inspire new innovations in climate and Earth science.
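For those who want to experiment, a minimal sketch of fetching the model files with the huggingface_hub client is shown below; the repo id is an assumption based on IBM and NASA’s public release, so check the hub listing for the exact identifier and usage instructions.

```python
# A minimal sketch of downloading the geospatial model from Hugging Face.
from huggingface_hub import snapshot_download

# Repo id assumed from the public IBM-NASA release; verify on the hub.
local_dir = snapshot_download(repo_id="ibm-nasa-geospatial/Prithvi-100M")
print(f"Model files downloaded to: {local_dir}")
```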
Back in July, IBM introduced watsonx, an AI and data platform designed to help enterprises scale and accelerate the application of advanced AI with trusted data.
As an extension of this effort, a business-oriented version of the geospatial model, integrated into IBM watsonx, is set to become accessible through the IBM Environmental Intelligence Suite (EIS) in the coming months.
The Bottom Line
IBM’s collaboration with NASA has produced a geospatial foundation model that addresses challenges in disaster management, environmental monitoring, and urban planning.
This AI solution offers enhanced accuracy and consistency, overcoming complexities associated with manual analysis of geospatial data.
Despite AI’s potential, obstacles such as data scarcity and high costs remain. IBM’s model, trained on Harmonized Landsat Sentinel-2 (HLS) data, has shown significant improvements over existing methods with just half the labeled data.
This innovation, accessible through Hugging Face, democratizes geospatial insights, promising new advancements in climate and Earth science applications.