3D Pilot Project – Complementary Physical and Virtual Experiences with 3D Objects

Lessons learned from project

McCord Museum


Table of Contents

  1. Background
  2. Objectives
  3. Digitization Processes
    1. Overview of processes
      1. Photographs Assembled by QuickTime Virtual Reality
      2. Laser Scanning of an Object's Shape Using Handyscan (Creaform) Technology
      3. Laser Scanning of Shape and Colour
      4. Modeling by Virtual Recreation of Object
    2. Choice of Processes and Artefacts for Digitization
      1. Laser Scanning
      2. Modeling or Virtual Recreation of an Object
      3. Findings
  4. User-Friendliness and Benefits for Visitors
    1. Interactive Station
    2. Website
  5. Conclusion
  6. Annex I – Report on 3D Digitization by Canadian Museum of Nature
  7. Annex II – Report on Modeling by Phillip W. Greene


1. Background

Located in Montréal, the McCord Museum of Canadian History is a public research and teaching museum that preserves over 1,375,000 objects, images and manuscripts, irreplaceable reflections of the social history and material culture of Montreal, Quebec and Canada. Footnote 1 As a leader in the dissemination of digital collections, it carried out a pilot project in 2008-2009 jointly with the Canadian Heritage Information Network in relation to complementary physical and virtual experiences with 3D objects. The pilot project includes an interactive station housed at the McCord Museum and a Web 3D component, which will be available in 2009 from the new VMC Lab section of the Virtual Museum of Canada. McCord Museum participated in the project to experiment with sharing 3D digital content for the purpose of public dissemination.

After a brief overview of the pilot project's objectives, we will present the lessons learned with regard to digitization processes, user-friendliness and the benefits associated with viewing 3D objects at the interactive station set up at the McCord and on the project website.

2. Objectives

The goals of public dissemination are an important consideration, as they presuppose that digital records are used first and foremost so that audiences can discover the virtual reproduction of an object in different contexts. Even though the records produced may also serve research on the collections, that is not their primary goal. Consequently, the quality sought when producing records in any digitization process depends on the intended use. Footnote 2

The pilot project therefore aimed to experiment with and document a relatively new practice for museums, that is, viewing artefacts three-dimensionally for dissemination purposes.

In addition, technical issues relating to digitization were explored through the project. Besides examining how digitization technologies can be applied to museum collection objects, the project provided a first-hand opportunity to experiment with digitizing objects from the ethnology and archaeology collections and from the decorative arts.

Finally, it is important to note that we wanted to make it possible for 3D models to be explored on the interactive station in various ways:

  • in real time, by pivoting the object as desired on any axis (in technical terms, rotating the object on its three axes: x, y and z; a sketch of such rotations follows this list);
  • by preset zooms on targeted portions of the artefact;
  • by various animations produced from records.
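To illustrate the first type of exploration, here is a minimal sketch, in Python, of rotating a model about its x, y and z axes; it is an illustrative assumption, not the code actually running on the interactive station.

    # Rotating a 3D point cloud about the x, y and z axes (illustrative sketch,
    # not the interactive station's actual code).
    import numpy as np

    def rotation_matrix(axis: str, angle_deg: float) -> np.ndarray:
        """Return the 3x3 rotation matrix for a rotation about one axis."""
        a = np.radians(angle_deg)
        c, s = np.cos(a), np.sin(a)
        if axis == "x":
            return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])
        if axis == "y":
            return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])
        if axis == "z":
            return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])
        raise ValueError(f"unknown axis: {axis}")

    # Example: pivot a (placeholder) point cloud 30 degrees about y, then 15 about x.
    points = np.random.rand(1000, 3)
    R = rotation_matrix("x", 15) @ rotation_matrix("y", 30)
    rotated = points @ R.T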

3. Digitization Processes

For the purposes of the project, we identified four possible processes for digitization, which will be presented briefly below.

3.1 Overview of Processes

3.1.1 Photographs Assembled by QuickTime Virtual Reality

The digitization process for presenting artefacts in 3D by means of the Apple QuickTime VR plug-in consists of taking 64 photographs of the same object while rotating it 360 degrees on a single axis. The images are then stitched together so that the object can be viewed in an animated sequence, thereby producing the effect of movement. With this process, an object can be rotated on one axis only.

The process was rejected from the outset because it was not truly innovative and would not have made the McCord Museum's project one of experimentation and development of expertise in the area of three-dimensional digitization. In any case, the process had already been used by the McCord for presenting objects in three dimensions in the virtual exhibit entitled Urban Life through Two Lenses. Footnote 3

3.1.2 Laser Scanning of an Object's Shape Using Handyscan (Creaform) Technology

Developed by Creaform, a company headquartered in Quebec City, through research and development work carried out at the National Research Council of Canada, the portable Handyscan digitizes an object's shape by laser scanning. Colours (or texture) then have to be applied manually on the scanned shape.

  • The object's minimum dimension is about 5 centimetres and the maximum dimension, about 8 metres.
  • Very glossy (reflective) objects have to be "dulled." An aerosol talc is sometimes needed; however, if precision is not the primary concern, the application of talc powder can often be foregone.
  • Transparency requires surface preparation (application of a dulling agent, for example, aerosol talc). The scanner's light rays need to reflect off of the object's outer surfaces to give a precise image.
  • Soft objects such as moccasins or a horse saddle can be digitized, but precautions must be taken for each scan so that the shape is not altered significantly during scanning.
  • Objects can also be scanned in sections and stitched together afterwards. The Handyscan has been used to scan trees and human beings as well, hair and clothes included.
  • The laser (class 2) cannot harm surfaces, owing to its frequency and low power. It is not harmful to the eyes when viewed directly.
  • The record that is obtained depends on the project requirements. It is possible to produce a VRML file (a raw scan file with minimal post-processing) on which textures can then be mapped, or a 3D surface file (STP or IGS). Other file types are also available.

This technology is used most notably in the aerospace industry.

A demonstration of the Handyscan was given by Creaform. Because colour is applied afterward, in post-production, the process did not provide the faithfulness to the object that we were looking for at this stage of the project. Its considerable cost was also a factor in our abandoning this avenue.

3.1.3 Laser Scanning of Shape and Colour

We then went on to consider the process developed by the company Arius, the Arius 3D Foundation Scanner Model 100, a device with which an artefact's shape and texture can be digitized in a single operation, subject to certain specific constraints. This technology also resulted from research and development work carried out at the National Research Council of Canada. The possibility of Arius 3D becoming a partner in the project, by providing a new, so-called portable device (in the sense that it could be installed in a museum institution), was discussed. In the end, however, this did not come to be.

In collaboration with Arius, we nonetheless proceeded with an initial analysis of approximately 200 objects from our collections that might eventually represent an interest for the pilot project. Footnote 4 As a result of this exercise, it turned out that a number of objects posed problems for three-dimensional digitization.

A summary of this operation may be worthwhile at this point.

Three-dimensional digitization of an object by means of a laser beam scan is performed according to a few basic principles as described by the company MCG3D.

From a given position vis-a-vis the object to be digitized, the scanner projects a low-power, non-damaging laser light upon a section of the object's surface. Each point of the surface touched by the laser light is captured by a CCD camera integrated into the scanner, and both the X, Y, Z coordinates and the laser light intensity of each of these points are recorded in the memory of the computer controlling the scanner. This operation is repeated thousands of times each second and generates a file containing a large amount of point data of the scanned surface. This file, displayed on the computer screen, shows the 3D shape of the scanned surface.

Some scanners capture the colour directly with laser scanning - in this case, RGB values (Red, Green, and Blue) are recorded along with the X, Y, and Z coordinates - or indirectly by mapping a colour photograph taken while scanning the 3D digital image. In the latter case, lighting conditions will have an effect on colour quality. Footnote 5

The files that are produced (called "source files") are, in fact, point cloud data files in PSI format, named after the PointStream Imaging software application developed by Arius 3D. The files are readable using a plug-in, the Image Suite viewer, supplied by the company.
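As an illustration of what such a record contains, here is a minimal sketch of a per-point data structure holding the X, Y, Z coordinates and RGB colour described above; the field layout is an assumption for illustration, not the actual PSI file format.

    # Per-point record of a colour point cloud: position plus RGB colour.
    # The layout is illustrative only, not the PSI format itself.
    import numpy as np

    point_dtype = np.dtype([
        ("x", np.float32), ("y", np.float32), ("z", np.float32),  # coordinates
        ("r", np.uint8), ("g", np.uint8), ("b", np.uint8),        # colour
    ])

    # A point cloud is then simply a large array of such records.
    cloud = np.zeros(1_000_000, dtype=point_dtype)
    cloud[0] = (12.5, -3.2, 47.0, 182, 140, 96)  # one captured surface point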

The constraints inherent to the technology developed by Arius 3D can be summarized as follows:

  1. Dimensions: The artefact to be scanned must have a maximum length and width of 64 cm and a maximum height of 50.8 cm (corresponding to the space available for the scan).
  2. Object brightness: Surfaces must not overly reflect ambient light.
  3. Transparency: Surfaces must offer a certain "resistance" to the laser beam for them to be perceived.
  4. Darkness: Surfaces must reflect a minimum of light, namely that of the laser beam, and not absorb the entire light beam.
  5. Flexibility: Owing to the duration of the process and the fact that the object must be positioned at multiple angles, at each step of the digitization process, a soft object or an object with flexible parts (such as textiles, feathers, pendants, cables and so forth) cannot ensure a uniform rendering.
  6. Empty space between parts of an object: For example, the glass beads of a necklace.
  7. Details on surfaces: If the details are too small, digitization cannot provide an interesting rendering.

There is also one more constraint, relating this time to the complexity of shapes. An object with complex geometry necessitates multiple scans (from different angles) and considerable post-production work (subsequent to scanning the object), where the model is reconstructed by assembling various files selected from among a broad range of possibilities. This situation is to be avoided.

A few solutions were proposed by Arius 3D to circumvent the constraints arising from certain features of an object, such as coating its surfaces with an opaque powder or spraying them to reduce their glossiness. These solutions were discussed internally and, in the end, they were rejected in order to preserve the integrity of the artefacts. We therefore agreed to avoid choosing objects if deterioration might be caused by applying a coating and/or using a cleaning product to remove the coating.

These constraints considerably reduced the range of possibilities and led to excluding any textile items from the pilot project.
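For readers who want to translate these constraints into practice, here is a hypothetical pre-screening helper, in Python, that applies the dimensional constraint listed above (maximum length and width of 64 cm, maximum height of 50.8 cm); it is a sketch for illustration, not an Arius 3D tool.

    # Hypothetical pre-screening check against the scanner's working volume
    # (64 cm x 64 cm x 50.8 cm, as listed in constraint 1 above).
    MAX_LENGTH_CM = 64.0
    MAX_WIDTH_CM = 64.0
    MAX_HEIGHT_CM = 50.8

    def fits_scanner(length_cm: float, width_cm: float, height_cm: float) -> bool:
        """Return True if the artefact fits within the scanner's working volume."""
        return (length_cm <= MAX_LENGTH_CM
                and width_cm <= MAX_WIDTH_CM
                and height_cm <= MAX_HEIGHT_CM)

    # Example: a 70 cm long artefact fails this first check.
    print(fits_scanner(70, 20, 30))  # False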

3.1.4 Modeling by Virtual Recreation of an Object

Another digitization process does away with laser imaging altogether and simply involves the digital recreation of an object by means of data acquired through observation of the object and high-resolution photography from multiple angles. By using a software application to design the geometric shapes and give the surfaces the features observed and documented in the photographs, a 3D representation of the object can be obtained. Photoshop software completes the work on the textures, and the result is a 3D model ready to be loaded into Microsoft Expression Blend. Footnote 6

3.2 Choice of Processes and Artefacts for Digitization

Having reviewed the processes available, we opted to experiment with two methods on a total of 10 objects:

  • laser beam scanning with a device using Arius 3D technology (for nine objects from our collection); and
  • modeling through virtual recreation by a computer graphics designer (for one object only).

As much as possible, we chose visually intriguing objects to attract the attention of visitors, both big and small, and stimulate their desire to manipulate the objects so that they could observe them better. Footnote 7 Some of the objects have special inscriptions making their observation all the more relevant.

Laser scanning of Powder horn by Arius3D

3.2.1 Laser Scanning

In total, nine objects were digitized using a laser scanner developed by Arius3D:

  1. Toy train - Lionel car (M992.110.90)
  2. Toy Apollo rocket (EX2004-02.027-03)
  3. Powder horn (M6936)
  4. Bust of Samuel de Champlain by Alfred Laliberté (M992.139.2)
  5. Sextant (M2694.1-3)
  6. Engraved tusk (MEL983.163.234.1-2)
  7. Globe and box (M973.67.1.1-3)
  8. Snuff box (M15909)
  9. Reduced model of canoe (M133)

A powder was applied to a single object with transparent parts, namely the rocket, which was made of plastic and whose nose cone and burner were translucent, in order to make those parts more opaque. The powder in question was calcium carbonate, deemed harmless to the object's integrity and easy to remove.

Although no products were applied to the other objects, the digitization of several of them nonetheless entailed special difficulties owing to the bright or dark appearance of the surfaces, or the particular components, for example:

  • For the powder horn, whose opening was too dark, many scans (160) were needed to produce an image of that part. In post-production, the number of highlights present in the images generated by the scan had to be reduced and the differences then smoothed out with the "paintbrush" tool available in the application.
  • The train, which was made of tinplate, reflected the laser either too much or too little, depending on the various angles at which it was positioned for scanning purposes (156 images in total). Post-production, which also included cleaning up the files (to reduce highlights), took six hours.
  • Highly complex in terms of geometry, the sextant necessitated several successive scans (207), targeting the object's special components. The mirrors it contained could not be scanned. In post-production, the assembly work was particularly arduous (seven hours).

Beyond the actual scanning, it became clear that considerable post-production work on each object's texture was necessary to produce a 3D model as faithful as possible to the original, estimated at about two to three hours per object. Annex I provides a full report on the digitization of these objects by laser scanning.

3.2.2 Modeling or Virtual Recreation of Object

We experimented with modeling one object, a miniature boat, owing to the difficulties it presented because of its size, its fragile nature, and its soft and removable parts. Modeling was entrusted to a graphic designer, Phillip William Greene, of Montreal. The model was then outputted into the Microsoft Expression Blend software application. The artefact is the following:

  1. Miniature boat (Gaspesia schooner) - M979.80.3

3.2.3 Findings

Based on digitization of the 10 objects and their incorporation in the interactive station and on the project website, the following observations were made.

Although some of the 3D models turned out better than others, particularly the powder horn (M6936) and engraved tusk (MEL983.163.234.1-2), we found that the models of artefacts with overly reflective or transparent surfaces, particularly the one on which a powder had to be applied to make it more opaque, are less faithful to the original. For example:

  • the black surfaces of the tinplate toy train (M992.110.90) look unrealistic, with an overly uniform colour;
  • as regards the toy rocket, since a powder was applied to reduce the transparency of its nose cone and burner, the rendering is no longer faithful to the original.

As for the modeled object, the miniature boat Gaspesia (M979.80.3): although extremely attractive, its rendering leaves something to be desired, since it is hard to tell what material it is made of (wood). Its texture has a very, almost too, smooth appearance, and it lacks detail.

Finally, on the interactive station, the shadows reproduced on objects (defined during incorporation on the station) are surprising for the trained eye, since the shadows turn with the object as it is viewed!

4. User-Friendliness and Benefits for Visitors

A study on the user-friendliness of 3D viewing and the benefits of the experience, covering both the interactive station and the website, was conducted among visitors to the McCord Museum. It yielded numerous findings on user appreciation and on the expectations raised by these types of interfaces. The study was carried out by the National Research Council of Canada in collaboration with the McCord.

4.1 Interactive Station

The interactive station was set up at the entrance to the Museum, close to the ticket office and about 50 feet away from the entrance to the temporary exhibition on the main floor. Based on observations made on both weekdays and weekends, the following is apparent:

  • the interactive station draws and keeps the attention of about one out of every four visitors;
  • children stop longer to interact with its contents than adults;
  • the touch-free technology is not understood spontaneously by the majority of users, who become aware of it more as they repeatedly interact with the system;
  • several visitors sometimes try to interact simultaneously with the station, although it does not support this type of multiple interaction;
  • the ambient lighting must be relatively strong for the cameras to detect the presence of a dark-skinned hand; and
  • a foot resting on the bottom of the interactive station, but nonetheless visible to the cameras, activates selections.
Interaction with the station at the McCord Museum

To avoid the problem of simultaneous interactions (by different visitors, often children), a small stool was placed in front of the station, thereby suggesting to young visitors that they get up on it to interact with the screen.

Conversations were held with 40 or so visitors, representing various age groups at the McCord, who stopped for more than a minute to interact with the station. The following points came to light:

  • The interactive station is an original, "high-tech" device, surprising to visitors given the traditional perception of museums, particularly attractive to young audiences and promising for the presentation of collections. Many visitors acknowledged its value for a museum visit.
  • Although not always effective for the selection of artefacts, the interactive station nonetheless attracted a large number of visitors. Some even found it to be hygienic! According to one visitor, touch-free technology lends itself particularly well to the virtual manipulation of an object, with gestures simulating movement in space.
  • In the view of many visitors, 3D viewing promotes appropriation of the object, in all of its facets. The experience also generates a certain number of expectations, particularly as regards manipulation performance, image resolution, which was hoped to be even better for certain objects (such as the horn), and the realistic nature of the objects, which some deemed to be too artificial.
  • As for the proposed experience, a few visitors wondered, in hindsight, how it actually tied in with the Museum, as they expected to find the objects in the exhibits.

Overall, the benefits generated by the experience can be summed up as the discovery of a surprising and promising technology, and the possibility of interacting with objects and the iconography associated with them. In a way, the interactive station almost steals the show away from the objects! It also creates certain unfulfilled expectations, as some visitors wanted to discover objects as spectacular as the technology or pursue the experience in the museum's exhibition halls by seeing the objects presented at the interactive station in real life.

4.2 Website

To study user-friendliness and the benefits associated with viewing objects in 3D on the project's pilot website, Footnote 8 i.e., the part managed by CHIN, we sought the opinion of 40 or so visitors representative of the various age groups at the McCord. We asked them to consult a specific Web page that presented categories of objects (air, land or sea) accessible in 3D. When the Internet connection was too slow, their attention was directed toward a 3D object already displayed on the monitor.

Screenshot of an object page on the project website

  • The website was deemed to provide an experience that a large number of visitors said was fun and interesting, although they nonetheless suggested certain adjustments to the interface to increase the user-friendliness of 3D viewing.
  • 3D viewing proved to be user-friendly for most users, but not necessarily at the first attempt. Some adults said children would have better luck than they did at enabling features.
  • The few children who were interviewed acknowledged that it was easy, up to a certain point, to view objects.
  • Many visitors said they enjoyed touching the object virtually and observing its details.
  • Some made suggestions on adjusting the 3D viewing interface, asking for better zoom instructions or greater control over actions.
  • Enjoyment over discovering the technology was often mentioned among the benefits reported by visitors.
  • Some said they were disappointed with the content and wondered about the context of the experience.

Finally, a few visitors identified interesting applications for people who are physically unable to go to the museum or for children, given the importance of visual aspects in learning. In the view of these visitors, an object will receive more attention simply by virtue of the fact that it can be played with, either by viewing it from several angles or by magnifying it.

5. Conclusion

Countless lessons were learned from this project. Strictly in terms of laser scanning, we experimented with the possibilities and obvious limitations of the process. Although the technology developed by Arius 3D is innovative, the fact remains that it lacks flexibility. Its strength lies in being able to simultaneously scan an object's shape and colour (or texture), although artefacts that lend themselves to the operation are few and far between. Moreover, the cost of the operation is high, ranging from $500 to $2,500 per object, according to suppliers.

Other emerging technologies may be considered in the near future, however. For example, Creaform, a company headquartered in Quebec City, began offering a new portable scanner in fall 2008 that captures both an object's shape and colour, the VIU-SCAN. Footnote 9

In terms of audiences, both child and adult, viewing objects three-dimensionally contrasts starkly with the traditional presentation methods they expect (particularly adults) when visiting a history museum. The technology was definitely received enthusiastically, all the more so among children, and the discovery of a new device was perceived as a truly enjoyable experience. Being able to manipulate an object without actually touching it, to magnify it and to observe its details is enthralling for all categories of visitors! The element of play and movement that is added to the object is attention-grabbing. The interactive station in particular creates an almost "natural" relationship with objects, where the grasping reflex is brought into play. However, what the interactive station presents must be tied in with what the museum actually exhibits in order for the experience to be more significant.

In closing, it is our hope that this report will prove useful to professionals who wish to explore digitization for enhancing access to museum collections.

Annex I – Report on 3D Digitization by Canadian Museum of Nature

Arius 3D Laser Scanner

The laser system characterizes each point on the scanned object according to its colour and location in three-dimensional space. It does this by scanning the surface of an object using one focused laser beam comprising three different wavelengths (red, green and blue) and recording the reflected light using a charge-coupled device. Each point on the object is described by six numeric values: positional values X, Y and Z, and surface colour values R, G and B. The X coordinate of each point on the object is calculated from an accurate measurement of the position of the scanning mirror in the camera. The Y coordinate is calculated from an accurate measurement of the camera motion system. The Z, or range, coordinate is calculated through laser triangulation within the camera. At the same time, the colour information at each point is gathered by measuring the intensity of the reflected laser beams. Colour intensity measurements of the scanned object's surface are accurate, being completely independent of ambient light.

The total light exposure is about 3.5 milliwatts, roughly equivalent to shining a flashlight on the object. Also, because the laser light is in constant motion (it moves at about 300 mm/sec across the surface of the object), the dosage of light to the surface is extremely small. For each scan, the laser beam passes over the surface one scan line at a time. The laser scans at a resolution as fine as 100 microns, recording 3D shape and colour simultaneously with high resolution and perfect registration.
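To make the range calculation concrete, here is a simplified, textbook laser-triangulation sketch; the geometry, baseline and angle values are assumptions for illustration and do not reflect the Arius 3D camera's actual calibration.

    # Simplified laser triangulation (illustrative only): the laser fires along
    # the +Z axis from the origin, the sensor sits a known baseline away on the
    # X axis, and the range follows from the angle at which the sensor sees the
    # laser spot.
    import math

    def range_from_triangulation(baseline_mm: float, spot_angle_deg: float) -> float:
        """Z distance to the laser spot, given the observation angle from the baseline."""
        return baseline_mm * math.tan(math.radians(spot_angle_deg))

    # Example: a 100 mm baseline and a spot seen at 60 degrees puts the surface
    # at roughly 173 mm from the scanner head.
    print(round(range_from_triangulation(100.0, 60.0), 1))  # 173.2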

In this project, each artefact was scanned with sequential overlapping scans until the entire surface was covered. The scanning resolution for each artefact ranged from 100 µm to 300 µm, depending on the size of the item being scanned. Larger elements such as the Train, Apollo 11 Rocket and the Sextant were scanned at 300 µm. The scanner's effective scanning area (i.e., field of view) is approximately 60 mm wide. Consequently, larger objects require multiple scans to cover the entire surface area of the artefact, and scanning at 300 µm provided a more manageable point cloud and a reasonable scanning time.
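A rough back-of-the-envelope calculation, with assumed numbers, shows why the coarser 300 µm spacing keeps the point cloud manageable: tripling the point spacing cuts the point count by roughly a factor of nine.

    # Illustrative point-count estimate (assumed surface area, not a measured figure).
    def points_for_area(area_mm2: float, spacing_um: float) -> int:
        """Approximate number of sample points for a surface area at a given spacing."""
        spacing_mm = spacing_um / 1000.0
        return int(area_mm2 / (spacing_mm * spacing_mm))

    surface_mm2 = 600 * 200  # a hypothetical 600 mm x 200 mm artefact surface
    print(points_for_area(surface_mm2, 100))  # 12,000,000 points at 100 µm
    print(points_for_area(surface_mm2, 300))  # about 1,333,000 points at 300 µm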

Each scan was processed in "point-cloud" form using Pointstream 3DImageSuite software. Overlapping scans of the object were aligned in Pointstream using regions of geometric commonality. Once the complete point-cloud model was created, it was converted, using surfacing software such as Paraform, into a triangulated polymesh comprising thousands of individual faces. Colour information was preserved and embedded in the faces of the polymesh. As a general rule, a form made of 500,000 points converts to 1,000,000 polygons (faces). The polymesh 3D models delivered for this project were output in the PLY format, which maintains the colour information as a colour-per-vertex model.
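For readers unfamiliar with the PLY format mentioned above, here is a minimal sketch of writing an ASCII PLY file with the colour stored per vertex; the file name and sample point are placeholders, and faces are omitted for brevity.

    # Minimal ASCII PLY writer with colour per vertex (sketch only; file name
    # and points are placeholders, and faces are omitted).
    def write_coloured_ply(path, points):
        """points: iterable of (x, y, z, r, g, b) tuples."""
        points = list(points)
        with open(path, "w") as f:
            f.write("ply\nformat ascii 1.0\n")
            f.write(f"element vertex {len(points)}\n")
            f.write("property float x\nproperty float y\nproperty float z\n")
            f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
            f.write("end_header\n")
            for x, y, z, r, g, b in points:
                f.write(f"{x} {y} {z} {r} {g} {b}\n")

    write_coloured_ply("sample.ply", [(12.5, -3.2, 47.0, 182, 140, 96)])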

Artefacts

Ivory Tusk and Stone Base

Stone Base
Scan Resolution: 100 µm
Total Number of Points in 3D Model: 2.1 million
File Size (in PSI model format): 31 MB
Number of Scans: 22
Time to Scan: 1 hr
Time to Align Individual Scans: 3 hrs
Time for Additional Post-Processing/Clean-up: 1 hr
Total Time to Complete Model: 5 hrs

Notes: The stone base has a simple geometry for scanning. The object was placed on a rotation table and scanned at 30-degree intervals. Each scan of the sides produced a highlight at the one point where the angle of the object was directly across from the laser and the surface was too bright. This is similar to traditional camera photography, where the light produces a highlight. These highlights were edited out using the overlapping scans taken as the object rotated on the table.

Ivory Tusk
Scan Resolution: 100 µm
Total Number of Points in 3D Model: 2.9 million
File Size (in PSI model format): 43 MB
Number of Scans: 32
Time to Scan: 3 hrs
Time to Align Individual Scans: 2 hrs
Time for Additional Post-Processing/Clean-up: 2 hrs
Total Time to Complete Model: 7 hrs

Notes: To handle highlights from the shiny reflectance of the surface, the tusk was placed at a slightly inclined upward angle. As a result, the camera traveled up the tusk, and the angle of incidence of the laser on the surface did not produce any highlights.

The Tusk was also placed inside the stone base and then on the table for scanning. One scan was taken across the two artefacts together to capture their position. From this one scan, both the stone base and tusk were then aligned to set them into their desired position.

Snuff Box

Scan Resolution: 100 µm
Total Number of Points in 3D Model: 1.6 million
File Size (in PSI model format): 24.6 MB
Number of Scans: 65
Time to Scan: 4 hrs
Time to Align Individual Scans: 5 hrs
Time for Additional Post-Processing/Clean-up: 3 hrs
Total Time to Complete Model: 12 hrs

Notes: The snuff box was placed on the rotation table in the closed position and scanned at approximately 30 degree intervals. 10 scans were taken to capture the top part of the globe sphere. Then the snuff box was turned over and another 10 scans were taken to capture the bottom side of the globe sphere. Then the snuff box was opened to its fullest - approximately 100 degrees - and placed again on the rotation table.

The remaining 45 scans were taken to capture the inside surface geometry of the snuff box. One of the parts has a protruding brass lip that fits inside the other half of the snuff box. This brass lip was difficult to scan since the brass material was very shiny and lacked tarnish. Each scan produced a highlight, and the curvature of the lip meant the camera could only capture a small area. Therefore, several scans were required, turning the snuff box on the rotation table every 10 degrees, to capture the lip. The extra scans removed the highlight from the previous scan.

Post-Processing

The porcelain inner section meets the brass fitting in an undercut position and could not be scanned there. Because the camera can only capture what the laser light hits and reflects back, the laser could not reach into this undercut. As a result, this section had to be hole-filled with surface geometry to make a closed, solid model. The protruding lip required some hole-filling to clean up the top edge along the lip. A hinge joins the two globe pieces. It was necessary, therefore, to separate the scans and close off the hinge section to make the two pieces complete and solid. This allowed any animation work to handle the two pieces separately and pivot them on these two closed hinge pieces.

The act of opening the snuff box caused the globes to move slightly from the brass ring that holds them in place. As a result, the scans taken in the open position are slightly off. The two globe pieces had to be separated from each of the scans and then aligned separately from each other to create a single object, which required the additional hours. They were then re-aligned using one single scan taken in the open position, and each piece was saved in that position. The pieces were also aligned to the one scan taken in the closed position, and each piece was saved to form that position.

Globe and Corresponding Box

Top Globe Box Cover
Scan Resolution: 100 µm
Total Number of Points in 3D Model: 2.7 million
File Size (in PSI model format): 39.7 MB
Number of Scans: 57
Time to Scan: 2.5 hrs
Time to Align Individual Scans: 2 hrs
Time for Additional Post-Processing/Clean-up: 2 hrs
Total Time to Complete Model: 6.5 hrs

Notes: Two scans were taken straight across the top. To scan the sides, the box was turned on the rotation table every 30 degrees, then turned upside down to scan the inside. This required positioning the box at a slight angle so that the camera could capture the inner sides of the box. (The camera could not scan the side walls straight on, because at a 90-degree angle the laser would not reflect anything back.) This position allowed the camera to scan along the box and capture the inside. The repositioning was done four times to capture the inside walls of the box.

Extra scans were taken to capture the edge geometry to align the inside of the box to the outside of the box. The overlap between these two sections is very small and required these extra scans to help with the alignment.

Cleanup and small hole-filling was required at the edge section at the bottom of the box.

Bottom Globe Box Cover
Scan Resolution: 100 µm
Total Number of Points in 3D Model: 2.2 million
File Size (in PSI model format): 32.5 MB
Number of Scans: 41
Time to Scan: 1.5 hrs
Time to Align Individual Scans: 2 hrs
Time for Additional Post-Processing/Clean-up: 2 hrs
Total Time to Complete Model: 5.5 hrs

Notes: The same notes as for the top box cover apply here. The two covers are basically the same object, and the scanning process was the same. Two extra scans were taken with the globe placed into position in the bottom box: one scan straight on and another at 30 degrees, to ensure the globe and the box were scanned in their set position.

Globe
Scan Resolution: 100 µm
Total Number of Points in 3D Model: 1.1 million
File Size (in PSI model format): 16.2 MB
Number of Scans: 18
Time to Scan: 2 hrs
Time to Align Individual Scans: 3 hrs
Time for Additional Post-Processing/Clean-up: 2 hrs
Total Time to Complete Model: 7 hrs

Notes: The globe was placed on a rotation table and scanned every 15 degrees to ensure coverage and to remove the highlight that appeared on each scan. Nine scans were taken in this position; the globe was then turned over and another nine scans were taken of the bottom.

The alignment took longer to complete, since spherical objects lack the geometric detail needed to aid alignment, and the scans tend to slide along one another with each pass.
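As an illustration of this alignment difficulty, here is a minimal pairwise-registration sketch using the open-source Open3D library as a stand-in for the Pointstream alignment step; the file names, distance threshold and initial guess are placeholders. On a near-spherical surface the optimizer has little geometry to lock onto, which is why such scans tend to slide along one another.

    # Pairwise registration of two overlapping scans with point-to-point ICP,
    # using Open3D as a stand-in for the Pointstream alignment step described
    # in this report. File names, threshold and initial guess are placeholders.
    import numpy as np
    import open3d as o3d

    source = o3d.io.read_point_cloud("globe_scan_01.ply")
    target = o3d.io.read_point_cloud("globe_scan_02.ply")

    threshold = 0.5          # maximum correspondence distance (in scan units)
    init = np.eye(4)         # initial guess: scans roughly pre-positioned

    result = o3d.pipelines.registration.registration_icp(
        source, target, threshold, init,
        o3d.pipelines.registration.TransformationEstimationPointToPoint())

    print(result.fitness)    # fraction of points for which a correspondence was found
    source.transform(result.transformation)  # apply the estimated alignment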

Additional post-processing was required in the positioning of all three objects. Using the two scans taken with the globe inserted into the bottom box, the globe was aligned to these scans and saved in this position. The bottom box was also aligned to these two scans to set the position and saved.

The box cover was moved and placed in 3D space using the Pointstream tools for moving an object, and positioned over the bottom box to match and align.

Powder Horn

Scan Resolution: 100 µm
Total Number of Points in 3D Model: 7.7 million
File Size (in PSI model format): 113.1 MB
Number of Scans: 160
Time to Scan: 6.5 hrs
Time to Align Individual Scans: 4 hrs
Time for Additional Post-Processing/Clean-up: 3 hrs
Total Time to Complete Model: 13.5 hrs

Notes: The powder horn was placed on the table and scanned straight on, then turned approximately 30 degrees at a time from the first scan; this rotation continued until the starting point was reached again. The surface material was fairly dull, so only a couple of highlights appeared in the scans.

The large number of scans is deceiving: many extra scans were used for positioning. For example, when the top of the powder horn (the small black opening) was scanned, a recognizable part of the horn was scanned along with it for alignment.

The top part of the powder horn was very difficult to scan owing to its properties: very dark and shiny, with a curved shape. The laser light does not reflect well from this surface, and because of the curvature, little data is captured from the fall-off area. Additional scans of this top section were required. The many resulting highlights could not be removed using the overlapping scans, so colour editing with a paintbrush tool was used to smooth them out.

Apollo 11 Rocket

Scan Resolution: 300 µm
Total Number of Points in 3D Model: 950,000
File Size (in PSI model format): 14 MB
Number of Scans: 122
Time to Scan: 4 hrs
Time to Align Individual Scans: 3 hrs
Time for Additional Post-Processing/Clean-up: 3 hrs
Total Time to Complete Model: 10 hrs

Notes: The Apollo rocket was placed on the table and scanned at an angle of 60 degrees to remove any highlights from the surface area. The artefact was then rotated from the first scan position, and this rotation continued every 30 degrees until the first position was reached again.

Extra scans were taken for the alignment procedure: scanning the wing and another part of the rocket, such as the front tire assembly, clearly identified the part on the rocket to which the wing belongs. There are three identical wings and the pattern on the rocket is horizontally consistent around the rocket. The high number of scans reflects the additional scans that were taken for positioning; they were later deleted as unnecessary overlap.

Two items required dusting to capture the surface geometry. The rocket's nose cone and the rocket's burner are made of a reflective material that allows light to pass through it, similar to a reflective plastic cover on a vehicle's brake lights. Permission was given to dust the surface area of these two pieces with a very small amount of calcium carbonate.

Hole-filling was required to fill in areas that the camera did not capture. The stickers on the rocket had a shiny black checker pattern, which the camera had a hard time capturing. These missing squares were filled in and coloured black. Some hole-filling was required for the tires to make them solid, and to clean up the edges of the wings from the upper and lower scans to make the wings solid.

Train

Scan Resolution: 300 µm
Total Number of Points in 3D Model: 2.3 million
File Size (in PSI model format): 32.6 MB
Number of Scans: 156
Time to Scan: 6 hrs
Time to Align Individual Scans: 3 hrs
Time for Additional Post-Processing/Clean-up: 3 hrs
Total Time to Complete Model: 12 hrs

Notes: The train was placed on the rotation table and scanned at various angles to capture the surface geometry. The black painted areas on the train presented some problems in collecting scan data. The brass areas created highlights and colour variances in the scan data. The reflective nature of the brass resulted in scans that were either very dark or very bright, depending on the camera angle.

The sticker on the underside of the train was scanned at 100µm, rather than at 300µm, to make the sticker crisp and readable.

One wheel in particular was scanned to capture as much detail as possible. It was then used as a template and aligned to fit the other wheels to ensure consistency. The part that holds the wheels in place was also used as a template for the remaining three pieces on the train. This reduced the amount of scanning and aligning, since the parts are the same throughout the train.

Sextant

Scan Resolution: 300 µm
Total Number of Points in 3D Model: 1.9 million
File Size (in PSI model format): 28.7 MB
Number of Scans: 207
Time to Scan: 6 hrs
Time to Align Individual Scans: 4 hrs
Time for Additional Post-Processing/Clean-up: 3 hrs
Total Time to Complete Model: 13 hrs

Notes: The sextant was placed on the table at a slight incline to scan the surface geometry and reduce the highlights from the brass surface. Several scans were taken to capture all the surface geometry. The brass finish on the sextant required extra work during scanning: the finish changed in colour and brightness in the scan data depending on the angle of the camera and of the laser light hitting the surface.

The knobs, three mirrors, and telescope needed to be scanned from every possible angle to capture the surface area. The knobs are spherical and required scanning in more than four directions. As a result, the added complexity of the artefact's shape significantly increased the number of scans.

The brass section containing the stamped numbers was also scanned at 100 µm to increase the resolution and make the numbers crisper. The mirrors were not scanned and had to be modeled afterwards or left as an opening.

Canoe

Scan Resolution: 100 µm
Total Number of Points in 3D Model: 2.4 million
File Size (in PSI model format): 35.2 MB
Number of Scans: 54
Time to Scan: 3 hrs
Time to Align Individual Scans: 1 hr
Time for Additional Post-Processing/Clean-up: 2 hrs
Total Time to Complete Model: 6 hrs

Notes: The leather material of the canoe was easily scanned by the laser camera. Some extra scans were required to capture the black quills on the side of the canoe. This required repositioning the canoe to ensure the quills were flat in relation to the camera position, so that the laser light would reflect back while sweeping across them. Post-processing required filling in some small holes between the individual quills. In addition, hole-filling was required to close off each end of the inside of the canoe; these two areas were too tight for the camera to scan completely. Using the Paraform software, a surface was created from the scanned data to close these areas and complete the canoe.

Surfacing

The Paraform surfacing software produced an "OBJ" model format along with a "texture" map comprising the colour information. This polymesh model contains no colour information embedded in its vertices; instead, the colour is mapped to a texture map, either a .tif or .jpg image containing the colour data. Typically, when a point cloud is converted into a polymesh, the resulting model contains a great many polygons, or faces, requiring large processing power and memory to use the complete data set in animation/authoring software.

For this project, it was determined by Solaris that 150,000 polygons would be the ideal number for loading into the authoring software Blend. At this number of polygons, the 3D model still maintains a high level of surface detail and does not exhaust the computer's system memory when all the 3D models are loaded into the kiosk exhibit. This is the trade-off: the high-resolution scans generated 3D models with accurate detail in the range of 2 to 10 million polygons, but the kiosk software is not able to load and display such high-resolution models for visualization. As a result, the surfacing software produces the polygon-reduced models for use in applications requiring smaller models.
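As a sketch of this kind of polygon reduction, the snippet below uses the open-source Open3D library's quadric decimation as a stand-in for the Paraform workflow described here; the file names are placeholders, and the targets reflect the figures cited in this report (150,000 faces for the kiosk, roughly 10,000 for the web models described later in this annex).

    # Polygon reduction sketch using Open3D's quadric decimation as a stand-in
    # for the Paraform surfacing workflow; file names are placeholders.
    import open3d as o3d

    mesh = o3d.io.read_triangle_mesh("sextant_full.ply")  # multi-million-face scan
    print(len(mesh.triangles))                            # e.g. several million faces

    kiosk_mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=150_000)
    web_mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=10_000)

    o3d.io.write_triangle_mesh("sextant_kiosk.ply", kiosk_mesh)
    o3d.io.write_triangle_mesh("sextant_web.ply", web_mesh)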

Surfacing a 3D model entails drawing a template structure over the existing 3D model (see ill., "Mesh created from point cloud"; note the yellow line). This template structure drops a light polygon surface over the 3D model (see ill.; note the green mesh lines enclosed in yellow lines). The software extracts the surface properties, namely colour maps, bump maps and displacement maps (see ill., "Colour Map"), from the original model.

The light polygon surface applied over the original 3D model can be adjusted to the number of polygons needed for export as a smaller model (see ill.; the green mesh lines represent the new polygons that replace the existing model).

Detail of a model generated by Arius 3D
Green mesh lines of new polygons applied over an original 3D model. A second window offers a view of the green mesh adjusted to the number of polygons needed for export as a smaller model.

Each model required from two to three hours of surfacing work to generate a 3D model with an associated texture map.

However, the sextant required an additional five hours to finalize the 3D model, in part due to the complexity of the model itself. The mirrors, telescope and knobs had to be separated from the 3D model, and the remaining base of the sextant had to be hole-filled and closed where the pieces were removed. Each part was then surfaced individually.

This same procedure was used to generate the web-based models, which were reduced to approximately 10,000 polygons each, a very small percentage of the original data set of millions of polygons. The web-based models follow the template structure but have large triangles forming the surface area. The colour texture map was applied over these large triangles to complete the 3D model. Much of the surface detail was lost owing to the number of polygons removed, but the colour texture map makes up for the difference.

Annex II – Report on modeling by Phillip W. Greene

Description of McCord Museum ship modeling process

Phillip W. Greene

Overview of Mandate

I was commissioned by the McCord Museum to model in 3D a museum artefact, a hand-crafted ship, which was to be integrated and rendered in Microsoft Expression Blend. The process originally adopted by the museum to digitize such artefacts was 3D scanning, but since the ship was larger than what the scanner device could manage, this was not an option for this particular artefact.

The Challenge

The ship could not be modified, handled or altered during the process. Normally in modeling a similar object, the 3D designer would break the object apart into its base components. Then, using various devices, such as flat bed scanners and traditional measuring tools, the designer would recreate the parts in 3D as accurately as possible and finally, reassemble the various 3D parts to create the original artefact. This would also allow the capturing of the various textures and surfaces that make up the object without interference or blocking by other parts of the object.

The Solution

Since we could not break down or touch the artefact in question, it was necessary to model by eye and work from high resolution photographs. These photographs had to be taken in a very specific manner in order to provide the best possible digital images from which to extract form, shapes, position, relative scale and surface textures. The Museum photographer was provided with a photo guide with a storyboard detailing the various photos to be taken and with perspective and lighting information.

The first thing to do before the actual modeling started was to assess the elements or features of the ship that needed to be captured in order to accurately convey the feel and experience of the original artefact. In this case, the focus was on how the object was built: handcrafted and assembled from wood and then hand-painted - in other words, showing the imperfections of its construction.

To achieve this, I decided, where possible, to use modeling techniques that relied more on hand manipulation than on computer-generated geometry, to give a more handmade, imperfect feel.

Surface textures were reproduced from the digital photographs by isolating and extracting specific object faces. These were then cleaned up in Photoshop to remove blocking elements and shadows, and were corrected for camera distortions.
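The report does not detail the correction steps; as one possible illustration, the sketch below flattens a photographed face into a texture using an OpenCV perspective warp with hand-picked corner points. The file names, corner coordinates and output size are placeholders, and this is an alternative to the Photoshop workflow actually used, not a description of it.

    # Flattening a photographed planar face into a texture with a perspective
    # warp (illustrative alternative to the Photoshop workflow actually used).
    import cv2
    import numpy as np

    img = cv2.imread("hull_side.jpg")  # placeholder photograph

    # Corners of the photographed face, picked by hand (placeholder values),
    # listed clockwise from top-left, and the target rectangle they map to.
    src = np.float32([[412, 310], [1630, 355], [1598, 905], [380, 860]])
    dst = np.float32([[0, 0], [1024, 0], [1024, 512], [0, 512]])

    M = cv2.getPerspectiveTransform(src, dst)
    texture = cv2.warpPerspective(img, M, (1024, 512))  # flattened texture
    cv2.imwrite("hull_side_texture.png", texture)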

The 3D modeled ship was then outputted into the specified file format used by Blend to import 3D objects.

Contact information for this web page

This resource was published by the Canadian Heritage Information Network (CHIN). For comments or questions regarding this content, please contact CHIN directly. To find other online resources for museum professionals, visit the CHIN homepage or the Museology and conservation topic page on Canada.ca.
