CATIA, Component Based Design, Research, xGenerative Design

High Detail Modeling through Component-Based Design

The goal here is to populate a model with a template to create a highly detailed facade. The concept behind this process is called Component-Based Design.

Whereas you might think in a more traditional architectural sense of a building being made up of a core, shell, and interiors, this approach focuses on the individual parts which make up the whole.

Hopefully, the prior posts, and this culminating one, show that it’s very possible to have a workflow in which both design and production can be integrated into a more holistic process.

Creating an Object Type

First, an object type must be created. An object type stores things like the engineering template and user-defined feature, which will be explained later.

When creating new content, search for ‘Covering type’ and select it. For the product instantiation method, select ‘Adaptive’, which allows the template to adapt to the inputs created.

The template and user-defined feature (UDF) are stored in the Resource Table. Within the table, you will see two rows, one for each resource.

You need to create a UDF in order to instantiate the template. The two resources are linked, with the idea being that the UDF will be a simplified version of the template, acting as an anchor for the more detailed product.

Creating a User Defined Feature

Again, the UDF needs to be super simple, so in this case, it will only contain an Axis System that will act as the anchor.

Note that there are two Axis Systems. The first will be referenced in the UDF; the second is a formula-based Axis System that is linked to the first Axis System.

Within the Building Engineering 3D Design app, highlight the first Axis System in the tree and then select the ‘User Defined Feature’ tool. This will automatically link both Axis Systems to a new Knowledge Template for the UDF.

So, an anchor for the UDF is now live. But, now an anchor for the template needs to be created too!

Simply create a new Axis System and use the ‘Define Component’ tool to associate this new axis system as the anchor. Notice that the axis system then becomes published, which means it is now exposed for other processes to use.

Creating a High LOD Facade Model

Instantiated panels of one side of the skylight facade

In order to truly merge a design with the reality of the physical, the inputs from the design and the inputs of the template need to match. And to reiterate, these inputs are a set of points, which define the boundary of the panel, and the axis system, which is ultimately the coordinate location of where that panel lives.
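
As a mental model of those matching inputs (this is just an illustrative sketch, not CATIA code, and every name in it is hypothetical), each panel instance can be thought of as a small record of three boundary points plus an axis system and its parameters:

```python
from dataclasses import dataclass
from typing import Tuple

Point = Tuple[float, float, float]

@dataclass
class PanelInputs:
    """Hypothetical record of the inputs each panel instance needs."""
    boundary_points: Tuple[Point, Point, Point]  # three vertices defining the panel boundary
    origin: Point                                # axis system origin (where the panel lives)
    x_axis: Point                                # axis system X direction
    y_axis: Point                                # axis system Y direction
    offset: float = 0.0127                       # e.g. 1/2" sealant offset, in meters (assumed)
    thickness: float = 0.03                      # overall panel thickness (assumed)

# Example: one triangular panel sitting at the world origin
panel = PanelInputs(
    boundary_points=((0, 0, 0), (2.0, 0, 0), (0, 1.5, 0)),
    origin=(0, 0, 0),
    x_axis=(1, 0, 0),
    y_axis=(0, 1, 0),
)
print(panel)
```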

To set this up, the design model modified with xGen must first be inserted into the site, which was previously used to test the template.

The main tool to use is ‘Capture Component Specifications’, where the inputs from the design model are linked up with several items, which include:

  • Object type – Click on the search icon and find the Covering object type
  • Part Body – Create a new 3D shape where the specifications will live
  • Inputs – Includes the axis systems and three points from the design model
  • Parameters – Used in the template; offset and thickness, in this case

The inputs are associated using a selection pattern: a manual process of selecting the whole list of axis systems within the tree.

Once the selection pattern is processed, the inputs are then associated with a panel.

Note that in the screenshots above, the panels have already been exposed, but it’s a very easy step from here!

Back to the concept of level of development: a specific tool called ‘Change Level of Development’ actually populates the template for each instance individually.

Again, the UDF in this example is super simple, made up of just the axis system. This is because the simple version of the model lives in Rhino, and only the essential geometrical data was extracted with xGen.

You can imagine, however, that the UDF could also contain a simple surface to represent the panels as well.

One of the benefits of this approach is that the panels are organized in a list, each with a unique, automatically generated name. You can then export these details into an Excel sheet, or convert the model to IFC and link it into an architectural Revit model.
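
Just to illustrate that export step (this is not the built-in Excel or IFC export, only a sketch with assumed field names and naming pattern), writing the automatically numbered panel list out as a CSV could look like this in Python:

```python
import csv

# Hypothetical panel data: (offset, thickness) per instantiated panel
panels = [(0.0127, 0.030), (0.0127, 0.032), (0.0127, 0.030)]

with open("panel_schedule.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Name", "Offset_m", "Thickness_m"])
    for i, (offset, thickness) in enumerate(panels, start=1):
        # Unique, automatically generated name per panel instance
        writer.writerow(["Glazing_Panel_{:03d}".format(i), offset, thickness])
```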

Another benefit is the idea of continuous LOD: I can update the template, adding more detail to the panel, and reinstantiate it fairly quickly by following the steps above, thanks to the links created between the design model and the template.

In other words, a conceptual design can rapidly become a highly detailed model — the stuff dreams are made of!

See the video above to see the quick change in level of development for yourself! 🙂

CATIA, Research, xGenerative Design

xGen + CATIA – Publishing Geometry

In order to populate the engineering template, we need to get the inputs from the design model, which live within the Rhino model.

This means exporting the Rhino model as a STEP file and then importing that file into a new 3D product, which can then be opened in xGenerative Design.

But before moving on, all surfaces must be assembled, or joined, using the ‘Join’ tool. Otherwise, when you try to use the surfaces in xGen, you will have to import each surface individually. It is much more convenient to have the surfaces in just one list!
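
If you would rather script this clean-up step inside Rhino than join everything by hand, a minimal RhinoPython sketch along these lines should do it (the layer name is an assumption):

```python
import rhinoscriptsyntax as rs

# Grab all surface/polysurface objects on the glazing layer (layer name assumed)
surfaces = rs.ObjectsByLayer("GLAZING_SURFACES")

if surfaces and len(surfaces) > 1:
    # Join them into a single polysurface so xGen sees one object, not dozens
    joined = rs.JoinSurfaces(surfaces, delete_input=True)
    print("Joined polysurface: {}".format(joined))
```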

Extracting Geometry with xGen

After opening the new product within xGen, you can import the joined surfaces. The current workaround for this is to click on the ‘Join’ and then click any tool, such as Translate, which appears in the 3D browser. This automatically creates the connected nodes and simultaneously imports all the surfaces. Now, on to the fun part: deconstruction!

The main nodes used for the importing and extracting of inputs were:

  • Disassemble – Connecting the ‘Import’ node to this will give you the full list of surfaces
  • Sub Elements – Plug the output from the ‘Disassemble’ node here to get all three vertices of each surface; the main inputs for template instantiation
  • Centroid – Use this node with the next one on this list; used as the reference point to get Axis Systems
  • Axis System on Surface – Gets an Axis System for each surface; a main input used for the template instantiation process
  • Publish – Don’t forget to add this to the final outputs! This exposes the main inputs (the three vertices/points and axis systems) for use later down the line
Full graph used to get geometrical data from the imported surfaces
Highlighting of main inputs: 3 points and Axis System
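
The xGen graph above is visual, but the same extraction can be roughly approximated in a RhinoPython script; this is only a sketch of what the Disassemble / Sub Elements / Centroid / Axis System on Surface nodes produce, not the nodes themselves:

```python
import Rhino
import rhinoscriptsyntax as rs

# Assumed: the joined polysurface is picked interactively
brep_id = rs.GetObject("Select the joined surfaces", rs.filter.polysurface)
brep = rs.coercebrep(brep_id)

panel_data = []
for face in brep.Faces:
    # 'Disassemble' / 'Sub Elements': the vertices of this face
    face_brep = face.DuplicateFace(False)
    points = [vert.Location for vert in face_brep.Vertices]

    # 'Centroid': area centroid of the face
    amp = Rhino.Geometry.AreaMassProperties.Compute(face_brep)
    centroid = amp.Centroid

    # 'Axis System on Surface': a frame on the surface at the centroid
    rc, u, v = face.ClosestPoint(centroid)
    rc, frame = face.FrameAt(u, v)

    panel_data.append((points, frame))  # roughly the published outputs

print("Extracted data for {} panels".format(len(panel_data)))
```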

Next, we can start to actually use these inputs and the template created together! How exciting is that?!

CATIA, Component Based Design, Research

High LOD Modeling – Template Instantiation

After creating the base model, the next step is to create an Engineering Template. It’s comparable to an adaptive component family in Revit, but with the main difference being that you can include a lot more geometrical information.

Creating an Engineering Template

Clicking on the ‘+’ arrow at the top right corner of the window allows you to create new content. Searching for ‘template’ brings up the Engineering Template icon.

After clicking the icon, a separate tab opens. Click on the ‘Add Reference’ button to link the base model to the template.

The program prompts you to select the components to be added, so you click on the top of the tree, the ‘Glazing_Panel A.1’, which contains all the information needed.

Automatically, this portion of the model gets added. In order to add the other portions of the model, such as the Skeleton, click on that part under ‘Unchanged Components’ and then the arrow. This adds all the parts to the ‘Components to process’ dialogue box.

Next, the inputs created, such as the axis system and the three points which make up the edges of the panel, need to be linked. When you click on the Inputs tab, a small window pops up. In order to be able to select the inputs, click on the ‘Glazing_Panel A.1’ within the tree. This loads all the available parts that can be selected. Then, clicking on the small right arrow moves the necessary inputs to the ‘Selected objects’ box.

The last step is to link in the ‘Offset’ and ‘Thickness’ parameters, in a similar way as in the last step.

Testing the Template

Within the Building and Civil Assemblies app, create a test building and add some test inputs.

Using the Engineering Template tool, we can then test to see if the base model has been properly constructed. So, if anything looks wonky, this test will show you what needs to be fixed!

The tool prompts you to select the Engineering Template that you want to populate. So, again, click on the main part of the tree, or ‘PLM_Template_Panel A.1’, in this case.

Now, this is the fun part. A new window appears, and asks for the inputs. You can either select them on the tree menu or directly on the 3D viewer. Then, you get a nice little diagram of the different parts and how they interconnect.

And, just when you thought it couldn’t get better, you can actually preview the result before instantiating the whole panel. It definitely feels good to get an idea ahead of time of whether your model will work after all!

And yes, the moment of truth— it works!!!!!!!!!!

In the next post, I’ll go over how to populate the panel on a real design.

CATIA, Component Based Design, Computational Design, Research

High LOD Glazing Panel Creation

It’s a constant struggle with most modeling programs to create truly high LOD models. And this is especially true if the shape or design being attempted is not rectilinear or a quadrilateral. Fortunately, there are some great tools, such as xGenerative Design, Assembly Design, and Building 3D Design, to help with this, as well as this super helpful video, which I highly recommend watching.

The Design

Rhino model imported into xGenerative Design, with each reference surface

I used a design of a skylight done in Rhino; the model had the base surfaces which represent the glazing panels. In order to add more detail to this model, I looked at the detail drawings provided by the client, which gave important info such as the type of glazing, size of panels, and offsets for things like sealants — turns out, the glazing panel is triple pane with 1/2″ offset between panels for the sealant!

Creating the Panel

The glass panel was created using Assembly Design, and the Covering element type contains all the different parts of the panel.

Tree Structure

The panel’s basic structure within Assembly Design

The tree structure is made up of two parts: the Skeleton and Plate. The skeleton contains all the inputs and base geometry required to build up the panel, such as the axis system, points, boundary curve and surface. The ‘Plate’ mostly contains references from the Skeleton, in order to further build up the geometry which makes up the panel.

Inputs

View of panel, highlighting the inputs which make up the panel

The inputs are crucial in order to create a template, which will be referenced in the overall design later on.

Panel Construction

The Thickened Surface tool, highlighting the linked surface

To create the glass panels, the surface from the Skeleton is copied into the Plate as a link. Then, using the Thicken Surface tool, the first glass panel is created. The rest of the panels are made in the same way, with the offset distances varying depending on their location.

One thing to note is that the ‘Offset’ of the surface is controlled by a formula, which has some associated parameters that can be changed. This isn’t so important, as it’s unlikely that the glass panel type, and therefore thickness, will change after the design development phase, but it was a nice exercise in creating a more parametric model.
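
To show the idea behind that formula (this is not the actual CATIA Knowledge formula, just a sketch of the relationship, with assumed values), each pane’s offset can be derived from a couple of parameters:

```python
# Parameters (values assumed for illustration)
pane_thickness = 0.010   # 10 mm glass pane
air_gap = 0.016          # 16 mm cavity between panes

# Offset of each of the three panes from the base surface,
# measured to the near face of each pane
offsets = [i * (pane_thickness + air_gap) for i in range(3)]
print(offsets)  # [0.0, 0.026, 0.052]

# Changing a parameter updates every dependent offset, which is
# the point of driving the 'Offset' with a formula instead of a number
air_gap = 0.020
offsets = [i * (pane_thickness + air_gap) for i in range(3)]
print(offsets)  # [0.0, 0.03, 0.06]
```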

Plate Construction

View of a Sketch of the profile for the sweep which represents the sealants

Another parameter that drives the overall shape of the panel is an ‘Offset’, which allows room to add a profile for the sealant. The boundary is constructed using a polyline, created using the input points. The profile is constrained by a point on the polyline, as well as the linked surface.

In the next post, I’ll go over how to create an Engineering Template from this model. Stay tuned!

Flux, Research

Rhino + Revit Interoperability Workflow Using Live Objects

RH-RVT_Workflow_Diagram-01

Like a honeycomb structure, an interoperability workflow weaves software together to make an efficient system. In this example, we’ll create a live floor element in Revit from a surface in Rhino.

Objectives

1. Read surface by layer name and sort by elevation in Grasshopper
2. Extract parameters (curves, level names, number of floors, and elevations) and send to Flux
3. Create a level and floor element in the Flow Tool
4. Merge level and floor element in Revit

RH+RVT_Interop_GH_R01

 

Steps 1-2

 

RH-RVT_GH_01
RH-RVT_GH_02
RH-RVT_GH_03

We’ll start off with a Rhino model that contains surfaces in a designated layer. A single surface is read by index, its parameters extracted, then sent to Flux (where the magic happens).

GH_06

• Dynamic Pipeline – Set to ‘read by layer name only’
• Number slider – Select surface by index
• Flux Project – Select project
• To Flux – Send data to Flux (flow control mode: constantly)

GH_07

• Area centroid
• Deconstruct Point – Extract elevation (unit Z)
• Sort List – Sort surfaces by elevation
• Brep | Plane – Extract closed planar curves
• Insert Items – Construct list of text/numerical values for level names and number (individual ID)
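
For reference, here is a rough RhinoPython sketch of what those components do together; the layer name and level-name pattern are assumptions, and the real workflow of course uses the Grasshopper components above:

```python
import rhinoscriptsyntax as rs

# Read all surfaces on the floor-plate layer (layer name assumed)
surface_ids = rs.ObjectsByLayer("FLOOR_SURFACES")

# Pair each surface with the Z of its area centroid, then sort by elevation
def elevation(srf_id):
    centroid = rs.SurfaceAreaCentroid(srf_id)[0]
    return centroid[2]

sorted_ids = sorted(surface_ids, key=elevation)

levels = []
for i, srf_id in enumerate(sorted_ids, start=1):
    boundary = rs.DuplicateSurfaceBorder(srf_id)   # closed planar curve(s)
    levels.append({
        "name": "Level {}".format(i),              # level name (pattern assumed)
        "elevation": elevation(srf_id),            # unit Z of the centroid
        "curves": boundary,                        # input for the floor element
    })

print("{} levels ready to send to Flux".format(len(levels)))
```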

 

Step 3

 

RH-RVT_Flux_Flow_01

• Create Level
• Create Floor – Requires closed input curves in a single list

The order of operations is levels first, floors second. That’s so the floor knows which level it lives on!

RH-RVT_Flow_01

 

Step 4

 

Interoperability_HowTo_Page_5_Image_0001
Interoperability_HowTo_Page_5_Image_0002
Interoperability_HowTo_Page_5_Image_0003
Interoperability_HowTo_Page_5_Image_0004

Using the handy Revit plugin, merge the level, and then, you guessed it, the floor.

 

N.B. If you’re feeling lazy, simply connect the data parameters to their respective To Flux components and send all data at once 🙂

NB_GH

Sharing Shortest Paths with Flow Tool

I wanted to see how easy it would be to convert the Grasshopper file, which creates points from a text file generated with Processing, into Flux’s Flow Tool.

Pathfinding_01

Text file cleaned, split, and turned into points

Pathfinding_02

Points to Lines

The main difference between the Flow Tool and the Grasshopper file is that the Polyline component creates the five distinct lines from the text file, while in the Flow Tool that block is not available, so the paths remain individual lines.
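
The clean/split/point/line step itself is simple enough to sketch in plain Python; the file format assumed here (one x,y,z triple per line, with blank lines separating the five paths) is only a guess at the Processing output:

```python
# Sketch of the clean/split/point/line steps, not the actual GH definition.
# Assumed format: one "x,y,z" triple per line, blank lines separating paths.
def read_paths(filename):
    paths, current = [], []
    with open(filename) as f:
        for line in f:
            line = line.strip()
            if not line:                       # blank line ends the current path
                if current:
                    paths.append(current)
                    current = []
                continue
            x, y, z = (float(v) for v in line.split(","))
            current.append((x, y, z))
    if current:
        paths.append(current)
    return paths                               # each path is a polyline's vertex list

paths = read_paths("AgentTrail.txt")
print(len(paths), "paths,", sum(len(p) for p in paths), "points")
```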

 

Pathfinding_Result

One advantage is that you can easily share the result with anyone using a link to the data key.

AA Angewandte, Research

Shortest Path Voxelization

This is an alternative to voxelizing according to bending moment in the Surface Voxelization example and serves more as a representation of what Karamba does. This example takes a text file, created in Processing, which contains point information to extract a set of curves. The curves are shortest paths from a base point to the outer boundary of the 3D array input. These path curves are then used as the guides to determine where the subdivided voxels populate.

Grasshopper plugins required:

Python
MeshEdit

 

If you’d like the files, send me an email!

 

Part I

01_SP_Filepath.JPG

Step 1. For this example, we have a 3D array ready in the Surface_ShortestPath_Voxelization_01.3dm file, so first off we need to extract the paths from the text file. Open ImportExport-CK05.gh and find the path parameter.

 

02_SP_Filepath.JPG

Step 2. Right-click the path parameter and select ‘set one file path’. A new window will open. Navigate to the location of the AgentTrail.txt and open the file. This sets the file path to that parameter so Grasshopper can find it on your computer.

 

03_SP_Paths.JPG

Step 3. You can now see the generated curves if you preview the curve parameter. However, as you can see, it’s a bit shifted from the 3D array. But no worries, there’s a quick fix for this. Bake the curves as a group so we can position them correctly.

 

04_SP_Position_Adjust1.JPG

05_SP_Position_Adjust.JPG

06_SP_Position_Adjust.JPG

07_SP_Mirror_Adjust.jpg

Steps 4-7. Go to top view in Rhino and draw a line horizontal to the bottom edge of the 3D array (Step 5). Take the bottommost point of the group of curves and move it perpendicular to the horizontal line boundary until they intersect (Step 6). So, we just shifted the whole group of curves up so they fall within the 3D array. But to make it perfect, the final step is to mirror the whole thing, so take any point on the curves and mirror along the horizontal (Step 7).
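
If you prefer, those move-and-mirror adjustments can also be scripted; here is a rough RhinoPython sketch, assuming the 3D array’s bottom edge lies on the X axis — the exact order and reference line will depend on how your curves come in:

```python
import rhinoscriptsyntax as rs

# Select the baked path curves
curves = rs.GetObjects("Select path curves", rs.filter.curve)

# Mirror the group across a horizontal line through the middle of its
# bounding box (the imported paths come in flipped, as in Step 7)
bbox = rs.BoundingBox(curves)
mid_y = (bbox[0][1] + bbox[2][1]) / 2.0
rs.MirrorObjects(curves, (0, mid_y, 0), (1, mid_y, 0), copy=False)

# Shift the group so its lowest point sits on the bottom edge of the
# 3D array (assumed here to lie on the X axis, i.e. Y = 0)
bbox = rs.BoundingBox(curves)
lowest_y = min(pt[1] for pt in bbox)
rs.MoveObjects(curves, (0, -lowest_y, 0))
```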

 

08_SP_ExportPaths.JPG

Step 8. Take a look at your beautiful work and then go take a break 🙂

 

Part II

09_SP_Voxelization.JPG

Step 9. Welcome back! Open PolySubdivision_02.gh in the same rhino file. You will see all the needed inputs on the left as well as the output parameters near the bottom left.

 

10_SP_Array.jpg

Step 10. First, input the 3D array by right-clicking the parameter labeled 3D array.

 

11_SP_Inputs.JPG

Step 11. Input all the necessary parameters accordingly. The inputs are basically the same as in the Voxel_Tests_R05.gh file, with the exception of the path curves created in the previous part of the example and an additional large mesh layer with the base cubes.

 

12_SP_RunScript.jpg

Step 12. Run the entire script by clicking on the data dam, or the play button.

 

13_SP_VoxelResult.JPG

14_SP_BakeResult.jpg

15_SP_Voxelization

Steps 13-15. Preview the results by turning on the geometry previews. Bake out the meshes through the mesh parameter labeled ‘final mesh’. And there it is in Rhino, ready to be put back into Processing.

AA Angewandte, Research

AA Angewandte 2017 – Surface Voxelization

This example takes a surface and voxelizes it according to the bending moment analysis. The resulting meshes can then be used in Processing.

Grasshopper plugins required:

Karamba
Human

If you would like access to the files, send me an email and I’ll be more than happy to share them with you 🙂

 

01_Srf_Voxel_01.JPG

Step 1. Input the reference surface, within the 03_GH_INPUT_MESH layer in Rhino, by right-clicking on the mesh parameter in Grasshopper and selecting ‘set one mesh’. Do the same for the brep parameter for the support boundary, under 03_SUPPORT_BOUNDARY, which is a closed polysurface near the bottom of the surface.

02_Srf_Karamba_01.JPG

Step 2. Once the inputs are set, the Karamba bending moment analysis will run. You can preview the results or bake the colored curves from the analysis by right-clicking on the bake component (this step requires the Human GH plugin to be installed on your computer). Points are generated from the resulting curves, and these are the points used to populate the voxels in the next step.
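
That points-from-curves step is just curve division; scripted in RhinoPython it would look roughly like this (the division count is an assumption):

```python
import rhinoscriptsyntax as rs

# Curves baked from the Karamba bending-moment analysis
curves = rs.GetObjects("Select analysis curves", rs.filter.curve)

# Divide each curve into points; these become the voxel population points
points = []
for crv in curves:
    points.extend(rs.DivideCurve(crv, 20, create_points=False))

print("{} population points".format(len(points)))
```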

03_Srf_3DArray.JPG

Step 3. Turn on the 00_CUBE layer to reveal the next inputs for voxelization. We will be voxelizing with cubes in this example, but the file provided also contains other voxel types to try out.

04_Srf_3DArray_2.JPG

Step 4. First, input the 3D array of cubes under the 00_CUBE_3D_ARRAY layer. Notice that there is a data dam here, which is like a play button; we’ll come back to this at the end when we want to run the whole script.

05_Srf_Sub.JPG

Step 5. The main principle of this voxelization is that the higher the bending moment, the more subdivided the voxel becomes. So we input two different voxel types that are populated on the points from the bending moment analysis. They are named accordingly: 00_CUBE_MESH_M goes inside the geometry parameter labeled Med Sub (subdivision), and so on.
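
Conceptually, the mapping from bending moment to subdivision level is a simple thresholding step. A tiny Python sketch of that logic follows; the thresholds and the high-subdivision name are assumptions, and the real work happens in the Grasshopper definition:

```python
# Assumed thresholds on the normalized bending moment (0..1)
MED_THRESHOLD = 0.4
HIGH_THRESHOLD = 0.7

def subdivision_level(moment):
    """Higher bending moment -> more subdivided voxel."""
    if moment >= HIGH_THRESHOLD:
        return "00_CUBE_MESH_H"   # high subdivision (name assumed)
    if moment >= MED_THRESHOLD:
        return "00_CUBE_MESH_M"   # medium subdivision
    return "00_CUBE"              # base cube, no subdivision

# Example: sampled (normalized) moments along the analysed curves
samples = [0.15, 0.55, 0.82]
print([subdivision_level(m) for m in samples])
# ['00_CUBE', '00_CUBE_MESH_M', '00_CUBE_MESH_H']
```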

 

06_Srf_Data_Dam.JPG

Step 6. Now, we go back to the play button from Step 4 and click on it to run the script. Voila! You have voxelized the surface!!

07_Srf_Voxelized.JPG

08_Srf_Voxelized_2.JPG

Steps 7-8. Bake the resulting meshes inside the mesh parameter labeled ‘final meshes to bake’. You can also preview your work by turning on the custom preview to the right of this parameter.