Irish philosopher George Berkeley, best known for his theory of immaterialism, once asked: “If a tree falls in a forest and no one is there to hear it, does it make a sound?”
What about AI-generated trees? They probably won’t make any noise, but they will nevertheless be essential for applications such as adapting urban flora to climate change. Enter “Tree-D Fusion,” a system developed by researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), Google, and Purdue University that merges AI and tree-growth models with Google’s Auto Arborist data to create accurate 3D models of existing urban trees. The project produced the first-ever large-scale database of 600,000 environmentally aware, simulation-ready tree models across North America.
“We are combining decades of forest science with modern AI capabilities,” says Sara Beery, assistant professor of electrical engineering and computer science (EECS) at MIT, MIT CSAIL principal investigator, and co-author of a new paper on Tree-D Fusion. “This allows us not only to identify trees in cities, but also to predict how they will grow and impact their environment over time. We are not ignoring the last 30 years of work on understanding how to build these synthetic 3D models; instead, we use AI to make this existing knowledge more useful across a broader set of individual trees in cities across North America, and eventually around the world.”
Tree-D Fusion builds on previous urban forest monitoring efforts that used Google Street View data, but expands them by generating full 3D models from single images. While previous attempts at tree modeling were limited to specific neighborhoods or sacrificed accuracy at large scales, Tree-D Fusion can create detailed models that include typically hidden features, such as the backs of trees that aren’t visible in photos taken from the street.
The practical applications of this technology go far beyond simple observation. Urban planners could one day use Tree-D Fusion to project into the future, anticipating where growing branches might entangle with power lines, or identifying neighborhoods where strategic tree placement could maximize cooling effects and improve air quality. According to the team, these predictive capabilities could shift urban forest management from reactive maintenance to proactive planning.
A tree grows in Brooklyn (and many other places)
The researchers took a hybrid approach to their method, using deep learning to create a 3D envelope of each tree’s shape, then using traditional procedural models to simulate realistic branch and leaf patterns based on the genus of the tree. This combination helped the model predict how trees would grow under different environmental conditions and climate scenarios, such as different possible local temperatures and varying access to groundwater.
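The hybrid idea described above can be sketched in a toy form: a stand-in for the learned stage predicts a coarse envelope (height and crown-radius profile), and a procedural stage grows branch segments inside it using genus-specific parameters. This is a minimal illustrative sketch, not the actual Tree-D Fusion implementation; all function names, parameter values, and the genus table are hypothetical.

```python
import math
import random

# Hypothetical genus-specific procedural parameters (placeholder values,
# not real botany): maximum branching angle and branches per node.
GENUS_PARAMS = {
    "Acer":    {"branch_angle": 35.0, "branches_per_node": 3},
    "Quercus": {"branch_angle": 50.0, "branches_per_node": 2},
}

def predict_envelope(image_features):
    """Stand-in for the deep-learning stage: map image features to a
    coarse envelope (height in meters, crown radius sampled at 5 heights).
    A real system would run a neural network here; we use a fixed toy map."""
    height = 5.0 + 10.0 * image_features.get("crown_fraction", 0.5)
    radii = [0.1 * height * math.sin(math.pi * t / 4) for t in range(5)]
    return {"height": height, "radii": radii}

def grow_branches(envelope, genus, depth=3, seed=0):
    """Procedural stage: recursively grow 2D branch segments, rejecting
    any endpoint that falls outside the predicted envelope."""
    params = GENUS_PARAMS[genus]
    rng = random.Random(seed)  # deterministic for reproducibility
    segments = []

    def radius_at(z):
        # Look up the crown radius at normalized height z.
        t = max(0.0, min(1.0, z / envelope["height"]))
        return envelope["radii"][min(int(t * 4), 3)]

    def grow(x, z, length, level):
        if level == 0:
            return
        for _ in range(params["branches_per_node"]):
            angle = math.radians(rng.uniform(-1, 1) * params["branch_angle"])
            nx = x + length * math.sin(angle)
            nz = z + length * math.cos(angle)
            # Keep only branches that stay inside the learned envelope.
            if abs(nx) <= radius_at(nz) + 1e-9 and nz <= envelope["height"]:
                segments.append(((x, z), (nx, nz)))
                grow(nx, nz, length * 0.7, level - 1)

    # Start growth partway up the trunk.
    grow(0.0, envelope["height"] * 0.3, envelope["height"] * 0.25, depth)
    return segments

env = predict_envelope({"crown_fraction": 0.6})
tree = grow_branches(env, "Quercus")
```

The design point the sketch makes concrete is the division of labor: the learned envelope constrains *where* the tree can occupy space, while the procedural rules decide *how* branches fill that space, which is what lets the same model be re-run under different growth scenarios.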
Today, as cities around the world grapple with rising temperatures, this research offers a new window into the future of urban forests. In collaboration with the MIT Senseable City Lab, the team from Purdue University and Google is embarking on a global study that reimagines trees as living climate shields. Their digital modeling system captures the complex dance of shadow patterns across the seasons, revealing how strategic urban forestry could transform sweltering city blocks into more naturally cooled neighborhoods.
“Now, every time a street-mapping vehicle passes through a city, we’re not just taking snapshots: we’re watching these urban forests evolve in real time,” Beery says. “This continuous monitoring creates a living digital forest that mirrors its physical counterpart, providing cities with a powerful lens to observe how environmental stresses shape the health and growth patterns of trees across their urban landscape.”
AI-based tree modeling has become an ally in the quest for environmental justice: by mapping the urban tree canopy in unprecedented detail, a sister project from Google’s AI for Nature team helped uncover disparities in access to green spaces across different socioeconomic areas. “We’re not just studying urban forests: we’re trying to cultivate more equity,” says Beery. The team is now working closely with ecologists and tree health experts to refine these models, ensuring that as cities expand their green canopies, the benefits extend to all residents.
It’s child’s play
Although Tree-D Fusion marks a major “growth” in the field, trees can pose a particularly difficult challenge for computer vision systems. Unlike the rigid structures of buildings or vehicles that current 3D modeling techniques handle well, trees are shape-shifters by nature: they sway in the wind, intertwine their branches with those of their neighbors, and constantly change shape as they grow. Tree-D Fusion models are “simulation-ready” in the sense that they can estimate the future shape of trees based on environmental conditions.
“What makes this work exciting is how it challenges us to rethink fundamental assumptions about computer vision,” says Beery. “While 3D scene understanding techniques like photogrammetry or NeRF (neural radiance fields) excel at capturing static objects, trees require new approaches capable of taking into account their dynamic nature, where even a slight breeze can radically change their structure from one moment to the next.”
The team’s approach of creating rough structural shells that approximate each tree’s shape has proven remarkably effective, but some issues remain unresolved. Perhaps the most vexing is the “entangled tree problem”: when neighboring trees intersect, their intertwined branches create a puzzle that no current AI system can completely solve.
The scientists view their dataset as a springboard for future innovations in computer vision, and they are already exploring applications beyond Street View imagery, looking to extend their approach to platforms such as iNaturalist and wildlife camera traps.
“This marks just the beginning of Tree-D Fusion,” says Jae Joong Lee, a doctoral student at Purdue University who developed, implemented, and deployed the Tree-D Fusion algorithm. “With my colleagues, I plan to extend the capabilities of the platform on a global scale. Our goal is to use AI-driven insights to serve natural ecosystems, supporting biodiversity, promoting global sustainability, and ultimately benefiting the health of our entire planet.”
Beery and Lee’s co-authors are Jonathan Huang, head of AI at Scaled Foundations (formerly at Google), and four others from Purdue University: doctoral student Bosheng Li, Professor and Dean’s Chair of Remote Sensing Songlin Fei, Assistant Professor Raymond Yeh, and Professor and Associate Head of Computer Science Bedrich Benes. Their work builds on efforts supported by the Natural Resources Conservation Service of the U.S. Department of Agriculture (USDA) and is directly supported by the USDA’s National Institute of Food and Agriculture. The researchers presented their findings at the European Conference on Computer Vision this month.