Fresh Eyes: Applying Machine Learning to Generative Architectural Design


This workshop presents tools and techniques for applying Machine Learning (ML) to Generative Architectural Design (GAD). Proposed here is a modest modification of a three-step process that is well known in generative architectural design, and that proceeds as: generate, evaluate, iterate. In place of the typical approaches to the evaluation step of this cycle, we have developed techniques that employ an ML process: a Convolutional Neural Network (CNN) trained to perform image classification. Such an approach holds significant ramifications for the overall design model, as it allows the integration of a variety of tacit and heretofore un-encapsulatable design criteria – such as architectural style, spatial experience, or typological features – into existing generative design workflows. While existing research has integrated low-level ML operations into the parametric design environment with some success, this proposal uniquely links the familiar environment of Grasshopper, which facilitates the general generative design cycle, with cloud-hosted ML models. For this, we employ two high-level frameworks, Lobe.ai and Ludwig (both based on the popular TensorFlow framework), that facilitate the training of CNNs with little or no scripting required. Extending work completed at the SmartGeometry Workshop in Toronto in 2018, this workshop directly supports two of the stated aims of the 2019 Design Modelling Symposium, as it proposes specific methods for designing with high-complexity and AI-based models (Area A) in the interest of integrating social, cultural, and aesthetic criteria into existing processes of design (Area D).

Over the course of this workshop, participants train purpose-built image-based ML models to evaluate candidate design solutions based on design criteria of their choosing. As an illustration, the nearby images show a classification model trained to discern between assorted styles of North American detached single-family home massings. Following the training and testing of similar classification models, participants will then deploy them to a server and integrate them into functional generative design models in Grasshopper via API calls. Extending this example, a single-family home massing classifier might be employed as the evaluator in a generative design cycle that is configured to identify regions of a design space that best adhere to a given architectural style, or that effectively hybridize two or more known styles. As a result, entirely new applications of generative design are facilitated, as participants may define generative models capable of optimizing not only for traditional criteria, such as structural performance or energy use, but for tacit criteria as well, such as architectural style, aesthetic expression, or spatial experience.
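The generate–evaluate–iterate cycle described above can be sketched in plain Python. In the sketch below, the evaluator `score_style` is a hypothetical stand-in for the workshop's actual evaluation step (an API call that sends a rendered image of the candidate to the cloud-hosted classifier and reads back a confidence score); the massing parameters and their "ideal" values are likewise invented for illustration.

```python
import random

def generate(rng):
    # Hypothetical massing parameters for a single-family home: in the
    # workshop, a Grasshopper definition turns these into geometry.
    return {"width": rng.uniform(6, 15),
            "depth": rng.uniform(6, 15),
            "roof_pitch": rng.uniform(10, 45)}

def score_style(candidate):
    # Stand-in for the evaluate step. In the workshop this is an API call
    # to the hosted CNN; here we fake a score that peaks at an arbitrary
    # "ideal" massing so the loop has something to climb toward.
    ideal = {"width": 10.0, "depth": 12.0, "roof_pitch": 35.0}
    return -sum((candidate[k] - ideal[k]) ** 2 for k in ideal)

def optimize(iterations=200, seed=0):
    rng = random.Random(seed)
    best = generate(rng)
    best_score = score_style(best)
    for _ in range(iterations):      # iterate
        cand = generate(rng)         # generate
        s = score_style(cand)        # evaluate
        if s > best_score:
            best, best_score = cand, s
    return best, best_score

best, score = optimize()
```

In practice the random search above would be replaced by Galapagos, Octopus, or a similar solver inside Grasshopper; the sketch only shows where the classifier sits in the loop.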

Target Audience

This workshop targets architectural design professionals and students of design who hold experience in generative design and optimization and an interest in machine learning techniques, but who hold little or no experience in applied machine learning.

In terms of prerequisites, participants in this workshop will be expected to arrive with:

  • A basic competency in parametric modeling in Grasshopper and/or scripting in Python
  • Experience in generative design, preferably using Grasshopper for design generation
  • Experience with one of a number of optimization tools (Galapagos, Octopus, or similar) for iteration

Prior to the start of the workshop, the cluster organizers will provide participants with:

  • Grasshopper components that have been developed for interacting via API calls with a cloud-hosted ML model
  • A suite of tools written in Python to assist students in establishing training sets of tagged images

Over the course of the workshop, organizers will introduce methods for and competencies in:

  • Techniques for establishing image-based training sets for ML models extracted from 3d geometric models
  • Approaches to training and testing image classification models in Lobe and Ludwig
  • Methods for hosting and serving trained classification models, and establishing API protocols
  • Methods for integrating hosted models into the traditional generative design cycle
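A common convention for image-classification training sets is one folder per class label, with every tagged image filed under its label. The helper below is a hypothetical stand-in for the Python tools mentioned above: it assumes images exported from the 3d model are named `<label>_<n>.png` and sorts them into that folder-per-class layout.

```python
import shutil
from pathlib import Path

def build_training_set(source_dir, dest_dir):
    """Sort images named '<label>_<n>.png' into one folder per label,
    the directory layout most image-classification trainers expect.
    Returns a count of images filed under each label."""
    source, dest = Path(source_dir), Path(dest_dir)
    counts = {}
    for img in sorted(source.glob("*.png")):
        label = img.stem.rsplit("_", 1)[0]   # 'tudor_003.png' -> 'tudor'
        class_dir = dest / label
        class_dir.mkdir(parents=True, exist_ok=True)
        shutil.copy2(img, class_dir / img.name)
        counts[label] = counts.get(label, 0) + 1
    return counts
```

The returned counts are useful for spotting class imbalance before training begins.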

Workshop Schedule, Format, and Requirements

Over the course of the two-day workshop, participants will work in small groups to define a design scenario that adheres to the proposed structure, and will construct an ML-enabled generative design workflow accordingly. This will require the establishment of a training set, the training and testing of an ML model, and the integration of this model into a larger generative design process as an evaluator function for some novel design trait. As the product of this work is the workflow itself, the most compelling outcome fit for exhibition is a representation of the process of optimization. Since optimizations are best understood as ‘walks’ through a space of potential solutions, and are often described as an animation, we plan to exhibit a series of videos that visually document this process.

We anticipate the working format and physical requirements to be consistent across the workshop. These include the standard elements of a work / lecture space for talks and for participants to work in groups (worktables, power, internet access, a projector, whiteboard, etc). We expect participants to provide their own laptops with Rhino and Grasshopper installed, and with administrator permissions to install additional software we will provide. In addition, we may require a local server and local area network to be set up for the training and testing of ML models. For this, we will require a training computer with a powerful GPU that we are able to configure ourselves, as well as permission to set up a wifi hub and/or a wired connection.

Day 1

  • Orientation and establishing working groups
  • Compiling a training dataset and beginning to train an initial working ML model

Day 2

  • Completing a number of initial iterative passes at fully functional generative design workflows
  • Finalizing the most promising generative design workflows, documenting them, and preparing animations for exhibition

Workshop Leaders

Adam Menges

Adam Menges is the CEO of Lobe, a visual programming environment for creating neural networks. He currently spends the majority of his time working on technologies and tools to make machine learning easier for the masses. Previously, he worked as an engineer and contributed to the success of world-class companies such as Apple and SendGrid.

Kat Park

Kat Park, firmwide Emerging Technology Leader, directs design technology strategy at SOM. A computer scientist and architect specializing in computational design, Park spearheads research initiatives to understand the role of data in performance-based design, as well as to explore the design and implementation of sensor systems that inform occupants’ environments. As a senior designer at SOM NY, she has led the design, data management, and implementation of complex geometrical systems for skyscrapers around the world. Her work at SOM NY includes World Trade Center Tower 1, Lotte Super Tower, Busan Lotte Tower, Yongsan Tower, and Digital Media City Landmark Tower in Korea, as well as research efforts that inform design processes. Park has lectured and taught parametric and generative design strategies at various institutions, and has presented and published in ACADIA, the International Journal of Architectural Computing (IJAC), the International Conference on Environmental Systems (ICES), the Special Interest Group in Computer Human Interaction (SIGCHI), SmartGeometry, BIM Forum, and Architecture and Urbanism (A+U). Prior to SOM and architecture, she was an interdisciplinary software developer and interaction designer at Art Technology Group and the MIT Media Lab. Park holds a BS in Computer Science & Engineering and a Master of Architecture degree, both from MIT.

Kyle Steinfeld

Kyle Steinfeld is an Assistant Professor of Architecture at the University of California, Berkeley. Through his research and creative work, he seeks to illuminate the dynamic relationship between the creative practice of design and computational design methods, thereby enabling a more inventive, informed, responsive, and responsible practice of architecture. He is the author of Geometric Computation: Foundations for Design and has published widely on the subject of design and computation. His recent work has focused on the newly emerging topic of machine learning in design. In 2016, Steinfeld organized and moderated a session titled Procedural Design: Machine Learning and Evaluation at the ACADIA conference in Ann Arbor. Building on the conversation that unfolded there, while speaking at the invitation of the Kuwait University College of Architecture, he offered a talk titled Fresh Eyes that drew out a number of parallels between machine learning, visual thinking, and the nature of design activity. Further developing and refining these ideas, he authored a paper for the 2017 ACADIA conference titled Dreams May Come, in which he set out a concise theory of machine learning as it applies to creative architectural design and offered a guide to future research at the intersection of ML and design tools. Most recently, Steinfeld has sought to put these ideas into practice with a series of online experiments, titled Drawing with Bots, that explore a variety of potential forms and formats for the relationship between a human designer and an artificial design partner.