Design Smarter: How AI Is Reshaping Architecture

Artificial intelligence research at Texas A&M is giving the next generation of architects tools to create and explore designs in entirely new ways.

Imagine you could describe a five-story apartment building in simple words and instantly see a 3D model you can explore through mixed reality. No difficult software or coding. Just say what you want, and watch it come to life.

At Texas A&M’s College of Architecture, researchers are working to make this future real. With funding from the National Science Foundation (NSF), they are creating new tools that combine artificial intelligence (AI), augmented reality (AR) and spatial reasoning.

Dr. Wei Yan, a professor and researcher, leads these projects in the Department of Architecture. In July, he began his 20th year at Texas A&M and took on a new role as interim head of the department.

Yan’s research team includes doctoral students leading their own projects, and their work has caught the attention of experts across the country. In May, his team earned a Best Paper Award at the 2025 IEEE Conference on Artificial Intelligence.

Together, they are building new tools that are changing how architecture is taught and practiced.

Architecture doctoral student Guangxi Feng (left) and master’s in architecture student Travis Halverson (right) test augmented reality glasses.

Credit: Texas A&M University College of Architecture

Describe A Building, Then See It Appear

What if you could start designing a building just by typing a sentence?

That’s what Text-to-Visual Programming GPT (Text2VP) does. It’s a new generative AI tool developed by doctoral student Guangxi Feng. Generative AI, Yan said, can already create text, images, videos and even 3D models from text prompts.

Built on OpenAI’s GPT-4.1, Text2VP lets people describe a building in simple words and get a 3D model they can change right away.

Users can change the shape, size and layout without writing any code, guided by their architectural knowledge. “This way, the human designers and AI collaborate on the project,” Yan said.
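
The article doesn’t include the team’s code, but the core pattern, asking a language model to turn a plain-language description into named, editable parameters, can be sketched in a few lines of Python with the OpenAI client. The prompt, the parameter format and the example description below are illustrative assumptions, not Text2VP itself.

    # Minimal sketch of the text-to-parametric-model pattern (illustrative,
    # not the Text2VP source). Requires the openai package and an API key.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    description = ("A five-story apartment building with a 20 m by 30 m "
                   "footprint, 3 m floor-to-floor height and a central "
                   "stair core.")

    response = client.chat.completions.create(
        model="gpt-4.1",
        messages=[
            {"role": "system",
             "content": "Turn the building description into JSON with named, "
                        "editable parameters (floors, footprint, heights)."},
            {"role": "user", "content": description},
        ],
    )

    print(response.choices[0].message.content)
    # In a full tool, these parameters would drive a visual-programming or
    # geometry engine that generates the editable 3D model.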

Completing these modeling tasks in design software normally takes hours or days. Text2VP speeds up early design work, so designers can spend more time being creative instead of dealing with technical details.

“This lowers the barrier to entry,” Yan said. “It allows students to experiment and learn design logic more intuitively.” The tool will be tested in mixed reality, where users can walk through and change their 3D models. Yan said immersive spaces help people understand complex spatial concepts faster than using regular computer screens.

Even though the tool is still being developed, Yan said it could change the way students and professional designers start their projects.

His team is also exploring AI’s role in Building Information Modeling (BIM). BIM is a process for creating digital models of buildings that include both the design and information about the building’s parts. The process is difficult to master, even for professionals, but Yan and doctoral students Jaechang Ko and John Ajibefun are testing how AI could make it easier and more accessible for architects.

A demonstration shows the AI chatbot in action. The chatbot analyzes a multi-story 3D architectural model and offers real-time feedback.

Credit: Provided photo

Talk To Your Model, Get Instant Feedback

Building on this progress, Yan’s lab is testing how talking to an AI chatbot can help with design. The chatbot lets users interact with their model through conversation and works right in a web browser.

Doctoral student Farshad Askari created a chatbot that lets users “talk” to their 3D building models. After uploading a design, users can ask questions about its structure, layout or how well it works. The chatbot answers with text advice and helpful pictures. It can even compare the models to industry standards or sustainability goals.

The chatbot combines trusted information from a knowledge base with a live view of the uploaded building model, using GPT-4o Vision to act as a real-time design assistant.
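
That pattern can be sketched roughly in Python: pair a rendered view of the model with reference text pulled from a knowledge base and send both to GPT-4o. The file name, helper function and reference line below are illustrative assumptions, not the lab’s implementation.

    # Illustrative sketch: vision-grounded design feedback with a knowledge base.
    import base64
    from openai import OpenAI

    client = OpenAI()

    def encode_image(path: str) -> str:
        """Base64-encode a snapshot of the uploaded building model."""
        with open(path, "rb") as f:
            return base64.b64encode(f.read()).decode("utf-8")

    snapshot = encode_image("model_view.png")  # hypothetical live view
    reference = "Example standard retrieved from the knowledge base."

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a design assistant. Ground your answers in "
                        "this reference material: " + reference},
            {"role": "user",
             "content": [
                 {"type": "text",
                  "text": "Does the layout in this model meet the standard?"},
                 {"type": "image_url",
                  "image_url": {"url": "data:image/png;base64," + snapshot}},
             ]},
        ],
    )
    print(response.choices[0].message.content)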

Soon, it could read detailed building data and work with standard data formats like Industry Foundation Classes (IFC), allowing even deeper design checks.
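
For context, IFC models can already be read programmatically with open-source libraries such as IfcOpenShell; a brief sketch, with an illustrative file name:

    # Reading building elements from an IFC file with IfcOpenShell
    # (pip install ifcopenshell; "building.ifc" is a placeholder path).
    import ifcopenshell

    model = ifcopenshell.open("building.ifc")
    for wall in model.by_type("IfcWall"):
        print(wall.GlobalId, wall.Name)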

“This kind of dialogue-driven design could one day power a whole new workflow,” Yan said. “It’s about creating feedback loops between the designer, the model and intelligent systems.”

Teaching AI To Understand Space Like People

Design isn’t just about shape and use. It also requires spatial intelligence: the ability to picture, rotate and move objects in 3D.

While people do this naturally, AI still has a hard time. “Spatial intelligence is a core skill in architecture and STEM fields,” Yan said.

To study this problem, doctoral candidate Monjoree Uttamasha led an NSF-funded project testing AI models like ChatGPT, Llama and Gemini with the Revised Purdue Spatial Visualization Test, a common measure of spatial intelligence. The study won the Best Paper Award in the Computer Vision category at the 2025 IEEE Conference on Artificial Intelligence.

The results were clear: without extra context, AI models often failed to notice how shapes rotated or changed in space. Human participants outperformed the AI by a wide margin.

However, when given simple visual guides and mathematical notation, the AI models performed much better. These findings show that AI can learn spatial thinking, but it needs additional training with background information.
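
The test items themselves are drawings, so the study’s setup is image-based; even so, the core manipulation, posing the same question with and without guiding context, can be sketched in Python. The question, hint and model choice below are invented for illustration, not items from the actual test.

    # Illustrative sketch: compare an AI model's answers with and without
    # added spatial context. Not an item from the Revised PSVT.
    from openai import OpenAI

    client = OpenAI()

    question = ("An object is rotated 90 degrees about the vertical (z) axis. "
                "Which option, A, B, C or D, shows the result?")
    hint = ("Hint: a 90-degree rotation about z maps the x axis onto y and "
            "the y axis onto -x; faces keep their shape but swap positions.")

    def ask(prompt: str) -> str:
        reply = client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
        )
        return reply.choices[0].message.content

    print("Without context:", ask(question))
    print("With context:", ask(hint + "\n\n" + question))
    # Scoring many such items against an answer key yields the accuracy
    # difference the study measured.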

With the right help, AI tools can start to think more like human designers. Yan’s team sees this project, along with others in their lab, as a step toward improving AI technology and how design is taught.

“This research points to ways we can enhance both AI tools and educational methods,” Yan said. The lab’s work builds on more than 20 years of research at Texas A&M, combining computational design methods, machine learning and architectural visualization.