Beyond Mapping III

Topic 12: Landscape Visualization

 

Map Analysis book with companion CD-ROM for hands-on exercises and further reading

 

Behind the Scenes of Virtual Reality  discusses the basic considerations and concepts in 3d-object rendering
How to Rapidly Construct a Virtual Scene  describes the procedures in generating a virtual scene from landscape inventory data 
How to Represent Changes in a Virtual Forest  discusses how simulations and "fly-bys" are used to visualize landscape changes and characteristics
Capture "Where and When" on Video-based GIS  describes how GPS-enabled video and digital still cameras work
Video Mapping Brings Maps to Life  describes how video maps are generated and discusses some applications of video mapping

 

<Click here> right-click to download a printer-friendly version of this topic (.pdf).

(Back to the Table of Contents)
______________________________

Behind the Scenes of Virtual Reality 
(GeoWorld, June 2000, pg. 22-23)

   (return to top of Topic)

 

Over the past three decades, cutting-edge GIS has evolved from computer mapping to spatial database management, and more recently, to map analysis and modeling.  The era of the sequestered GIS specialist has given way to mass-marketed applications, such as MapQuest’s geo-queries, OnStar’s vehicular telematics and a multitude of other Internet-served maps.

The transition of GIS from an emerging industry to a fabric of society has radically changed traditional perspectives of map form, content and applications.  Like a butterfly emerging from a cocoon, contemporary maps bear little resemblance to their predecessors.  While underlying geographic principles remain intact, outward appearances of modern maps are dramatically different.

This evolution is most apparent in multimedia GIS.  Traditional maps graphically portray map features and conditions as static, 2-D abstractions composed of pastel colors, shadings, line types and symbols.  Modern maps, on the other hand, drape spatial information on 3-D surfaces and provide interactive query of the mapped data that underlies the pictorial expression.  Draped remote sensing imagery enables a user to pan, zoom and rotate an encapsulated picture of actual conditions.  Map features can be hyperlinked to text, tables, charts, audio, still images and even streaming video.  Time series data can be sequenced to animate changes and convey movement in both time and space.

While these visualizations are dramatic, none of the multimedia GIS procedures shake the cartographic heritage of mapping as much as virtual reality.  This topic was introduced in a feature article in GeoWorld a few years ago (Visualize Realistic Landscapes, GeoWorld, August, 1998, pages 42-47).  This and the next couple of columns will go behind the scenes to better understand how 3-D renderings are constructed and to investigate some of the approaches, important considerations and impacts.

Since the discovery of herbal dyes, the color palette has been a dominant part of mapping.  A traditional map of forest types, for example, associates various colors with different tree species—red for ponderosa pine, blue for Douglas fir, etc.  Cross-hatching or other cartographic techniques can be used to indicate the relative density of trees within each forest polygon.  A map’s legend relates the abstract colors and symbols to a set of labels identifying the inventoried conditions.  Both the map and the text description are designed to conjure up a vision of actual conditions and the resulting spatial patterns.

The map has long served as an abstract summary while the landscape artist’s canvas has served as a more realistic rendering of a scene.  With the advent of computer maps and virtual reality techniques, the two perspectives are merging.  In short, color palettes are being replaced by rendering palettes.

As in an artist’s painting, complete objects are grouped into patterns rather than a homogeneous color being applied to large areas.  Object types, size and density reflect actual conditions.  A sense of depth is induced by plotting the objects in perspective.  In effect, a virtual reality GIS “sees” the actual conditions of forest parcels through its forest inventory data—species type, mixture, age, height and stocking density for each parcel.  A composite scene is formed by translating the data into realistic objects that characterize trees, houses, roads and other features, then combining them with suitable textures to typify sky, clouds, soil, brush and grasses.

Fundamental to the process is the ability to design realistic objects.  An effective approach, termed geometric modeling, utilizes an interface (figure 32-1) similar to a 3-D computer-aided drafting system to construct individual scene elements.  A series of sliders and buttons are used to set the size, shape, orientation and color of each element comprising an object.  For example, a tree is built by specifying a series of levels representing the trunk, branches, and leaves.  Level one forms the trunk that is interactively sized until the designer is satisfied with the representation.  Level two establishes the pattern of the major branches.  Subsequent levels identify secondary branching and eventually the leaves themselves.


Figure 32-1.  Designing tree objects.

The basic factors that define each level include 1) linear positioning, 2) angular positioning, 3) orientation, 4) sizing and 5) representation.  Linear positioning determines how often and where branches occur.  In figure 32-1, the major branching occurs part way up the trunk and is fairly evenly spaced.

Angular positioning sets how often branches occur around the trunk or branch to which they are attached.  The branches at the third level in the figure form a fan instead of being equally distributed around the branch.  Orientation refers to how the branches are tilted.  Note that the lower branches droop down from the trunk, while the top branches are more skyward looking.  The third-order branches tend to show a similar drooping effect in the lower branches.

Sizing defines the length and taper of a particular branch.  In the figure, the lower branches are considerably smaller than the mid-level branches.  Representation covers a lot of factors identifying how a branch will appear when it is displayed, such as its composition (a stick, leaf or textured crown), degree of randomness, and 24-bit RGB color.  In figure 32-1, needle color and shading were changed for the tree on the right to simulate a light dusting of snow.  Other effects such as fall coloration, leaf-off for deciduous trees, disease dieback, or pest infestations can be simulated.
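To make the five factors concrete, here is a minimal sketch in Python of how a tree object might be described level by level.  The class and field names are hypothetical illustrations for this column, not the actual Tree designer format.

from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical structure for a tree object built level by level; the field
# names mirror the five factors described above, not the Virtual Forest format.
@dataclass
class BranchLevel:
    start_fraction: float    # linear positioning: where along the parent branching begins
    spacing: float           # linear positioning: distance between successive branches
    angular_spread: float    # angular positioning: degrees of arc occupied around the parent
    tilt_degrees: float      # orientation: drooping (negative) or skyward (positive)
    length_ratio: float      # sizing: branch length relative to its parent
    taper: float             # sizing: tip-to-base diameter ratio
    appearance: str          # representation: "stick", "leaf" or "textured crown"
    rgb: Tuple[int, int, int] = (34, 85, 34)   # representation: 24-bit color

@dataclass
class TreeObject:
    name: str
    trunk_height: float      # level one: the interactively sized trunk
    levels: List[BranchLevel] = field(default_factory=list)

# A spruce-like tree with drooping lower branches, skyward upper branching
# and lightened needle color to simulate a dusting of snow.
spruce = TreeObject("spruce_snow", trunk_height=18.0, levels=[
    BranchLevel(0.25, 0.6, 360.0, -15.0, 0.45, 0.3, "stick"),
    BranchLevel(0.10, 0.3, 120.0, -10.0, 0.30, 0.4, "stick"),
    BranchLevel(0.05, 0.1, 180.0,   5.0, 0.10, 0.6, "leaf", (200, 210, 215)),
])
print(spruce.name, "has", len(spruce.levels), "branching levels beyond the trunk")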

 

 



Figure 32-2.  The inset on the left shows various branching patterns.  The inset on the right depicts the sequencing of four branching levels.

Figure 32-2 illustrates some branching patterns and levels used to construct tree-objects.  The tree designer interface at first might seem like overkill—sort of a glorified “painting by the numbers.”  While it’s useful for the artistically-challenged, it is critical for effective 3-D rendering of virtual landscapes.

The mathematical expression of an object allows the computer to generate a series of “digital photographs” of a representative tree under a variety of look-angles and sun-lighting conditions.  The effect is similar to flying around the tree in a helicopter and taking pictures from different perspectives as the sun moves across the sky.  The background of each bitmap is made transparent and the set is added to the library of trees.  The result is a bunch of snapshots that are used to display a host of trees, bushes and shrubs under different viewing conditions.

The object-rendering process results in a “palette” of objects analogous to the color palette used in conventional GIS systems.  When displaying a map, the GIS relates a palette number with information about a forest parcel stored in a database.  In the case of 3-D rendering, however, the palette is composed of a multitude of tree-objects.  The effect is like color-filling polygons, except realistic trees are poured onto the landscape based on the tree types, sizing and densities stored in the GIS.  How this scene rendering process works is reserved for next month.
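The idea of an object palette keyed to viewing conditions can be sketched as follows.  This is a schematic illustration only: render_tree() is a stand-in for the actual renderer, and the angle increments are arbitrary assumptions.

import itertools

VIEW_AZIMUTHS  = range(0, 360, 45)   # eight look-angles around the tree
SUN_AZIMUTHS   = range(90, 271, 45)  # the sun tracked across the sky
SUN_ELEVATIONS = (20, 45, 70)        # low, mid and high sun

def render_tree(tree_id, view_az, sun_az, sun_elev):
    # Placeholder: a real system would render the geometric tree model and
    # return a bitmap with a transparent background.
    return f"{tree_id}_v{view_az}_s{sun_az}_{sun_elev}.png"

def build_snapshot_library(tree_id):
    """One 'digital photograph' per look-angle and sun-position combination."""
    return {(v, a, e): render_tree(tree_id, v, a, e)
            for v, a, e in itertools.product(VIEW_AZIMUTHS, SUN_AZIMUTHS, SUN_ELEVATIONS)}

# The rendering "palette": palette number -> that tree object's snapshot set.
palette = {1: build_snapshot_library("ponderosa_pine"),
           2: build_snapshot_library("douglas_fir")}
print(len(palette[1]), "snapshots per tree object")   # 8 x 5 x 3 = 120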
_______________________
Author's Note: the Tree designer module of the Virtual Forest software package by Pacific Meridian Resources was used for the figures in this column.  See http://www.innovativegis.com/products/vforest/ for more examples and discussion.


How to Rapidly Construct a Virtual Scene    
(GeoWorld, July 2000, pg. 22-23)

(return to top of Topic)

 

The previous column described how 3-dimensional objects, such as trees, are built for use in generating realistic landscape renderings.  The drafting process uses an interface that enables a user to interactively adjust the trunk’s size and shape then add branches and leaves with various angles and positioning.  The graphic result is similar to an artist’s rendering of an individual tree.

The digital representation, however, is radically different.  Because it is a mathematically defined object, the computer can generate a series of “digital photographs” of the tree under a variety of look-angles and sun-lighting conditions.  The effect is similar to flying around the tree in a helicopter and taking pictures from different perspectives as the sun moves across the sky.

The background of each of these snapshots is made transparent and the set is added to a vast library of tree symbols.  The result is a set of pictures that are used to display a host of trees, bushes and shrubs under different viewing conditions.  A virtual reality scene of a landscape is constructed by pasting thousands of these objects in accordance with forest inventory data stored in a GIS.


Figure 32-3.  Basic steps in constructing a virtual reality scene.

There are six steps in constructing a fully rendered scene (see Figure 32-3).  A digital terrain surface provides the lay of the landscape.  The GIS establishes the forest stand boundaries as geo-registered polygons with attribute data describing stand make-up and condition.

Forest floor conditions are represented by “texture maps” that add color and grain to the terrain surface.  Once the configuration and texturing is complete, the tree objects are “poured” onto the surface for the final composition with fog/haze added as appropriate.

The link between the GIS data and the graphic software is critical.  For each polygon, the data identifies the types of trees present, their relative occurrence (termed stocking density) and maturity (age, height).  In a mixed stand, such as spruce, fir and interspersed aspen, several tree symbols will be used.  Tree stocking identifies the number of trees per acre for each of the species present.  This information is used to determine the number of tree objects to “plant” and to cross-link them to the appropriate tree symbols in the 3-D tree object library.  The relative positioning of the polygon on the terrain surface with respect to the viewpoint determines which snapshot of the tree provides the best view and sun angle representation.
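A hedged sketch of this GIS-to-graphics link: per-species stocking (trees per acre) and polygon acreage give the number of tree objects to plant.  The species-to-symbol IDs below are invented for illustration.

# Invented symbol IDs standing in for entries in the 3-D tree-object library.
TREE_LIBRARY = {"spruce": 12, "fir": 7, "aspen": 31}

def trees_to_plant(polygon_acres, stocking):
    """stocking: {species: trees per acre} from the forest inventory data."""
    return {TREE_LIBRARY[species]: round(per_acre * polygon_acres)
            for species, per_acre in stocking.items()}

# A 12.5-acre mixed stand of spruce, fir and interspersed aspen.
print(trees_to_plant(12.5, {"spruce": 120, "fir": 80, "aspen": 40}))
# -> {12: 1500, 7: 1000, 31: 500} tree objects to "plant"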



Figure 32-4.  Forest inventory data establishes tree types, stocking density and maturity.

Finally, information on percent maturity establishes the baseline height of the tree.  In a detailed tree library several different tree objects are generated to represent the continuum from immature through mature to old growth forms.  Figure 32-4 shows the tree exam files for two polygons identified in the adjacent graphic.  The first column of values identifies the tree symbol (library ID#).  Polygon 1573 has 21 distinct tree types including snags (dead trees).  Polygon 1658 is much smaller and only contains 16 different types.  The second column indicates the percent maturity while the third defines the number of trees.  The data shown are for an extremely detailed U.S. Forest Service research area in Colorado.  Most operational landscape visualizations, however, have only one or just a few tree types represented per polygon.
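Tree exam records of this kind could be read with something like the sketch below; the sample values are invented, and only the three columns mentioned above are used.

# Invented sample records: symbol (library ID), percent maturity, tree count.
sample_exam = """\
12  85  140
31  60   40
 7  95   12
"""

stand = []
for line in sample_exam.splitlines():
    symbol_id, pct_maturity, n_trees = (int(v) for v in line.split())
    stand.append({"symbol": symbol_id,
                  "height_scale": pct_maturity / 100.0,   # scales the baseline tree height
                  "count": n_trees})

print(len(stand), "tree types,", sum(r["count"] for r in stand), "trees in this polygon")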

Once the appropriate tree symbol and number of trees are identified, the computer can begin “planting” them.  This step involves determining a specific location within the polygon and sizing the snapshot based on the tree’s distance from the viewpoint.  Most often trees are randomly placed; however, clumping and compaction factors can be used to create clustered patterns if appropriate.

 


Figure 32-5.  Tree symbols are “planted” then sized depending on their distance from the viewpoint.

Tree sizing is similar to pasting and resizing an image in a word-processing document.  The base of the tree symbol is positioned at the specific location then enlarged or reduced depending on how far the tree is from the viewing position.  Figure 32-5 shows a series of resized tree symbols “planted” along a slope—big trees in front and progressively smaller ones in the distance.
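A simplified sketch of the planting-and-sizing step: trees are scattered at random (here within the polygon's bounding box, for brevity) and each symbol is scaled by its distance from the viewpoint.  The scaling rule and numbers are illustrative assumptions, not the actual algorithm.

import math
import random

REFERENCE_DISTANCE = 100.0   # distance (m) at which a tree symbol appears full size

def plant_and_size(bbox, n_trees, viewpoint):
    (xmin, ymin, xmax, ymax) = bbox
    vx, vy = viewpoint
    placements = []
    for _ in range(n_trees):
        x, y = random.uniform(xmin, xmax), random.uniform(ymin, ymax)
        distance = math.hypot(x - vx, y - vy)
        scale = REFERENCE_DISTANCE / max(distance, 1.0)   # big in front, small in the distance
        placements.append((x, y, distance, scale))
    # Paste back-to-front so nearer trees obscure what is behind them.
    return sorted(placements, key=lambda p: -p[2])

for x, y, distance, scale in plant_and_size((0, 0, 500, 500), 5, viewpoint=(0, 0)):
    print(f"tree at ({x:5.1f}, {y:5.1f})  {distance:6.1f} m away  scale {scale:4.2f}")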

The process of rendering a scene is surprisingly similar to that of a landscape artist.  The terrain is painted and landscape features added.  In the artist’s world it can take hours or days to paint a scene.  In virtual reality the process is completed in a minute or two as hundreds of trees are selected, positioned and resized each second.

Since each tree is embedded on a transparent canvas, it obscures what is behind it—textured terrain and/or other trees, depending on forest stand and viewing conditions.  Terrain locations that are outside of the viewing window or hidden behind ridges are simply ignored.  The multitude of issues and extended considerations surrounding virtual reality’s expression of GIS data, however, cannot be ignored.  That discussion is reserved for next month.
_______________________
Author's Note:  the Landscape Viewer module of the Virtual Forest software package by Pacific Meridian Resources was used for the figures in this column.  See http://www.innovativegis.com/products/vforest/ for more examples and discussion.



How to Represent Changes in a Virtual Forest    
(GeoWorld, August 2000, pg. 24-25)

(return to top of Topic)

The previous columns described the steps in rendering a virtual landscape.  The process begins with a 3D drafting program used to construct mathematical representations of individual scene elements similar to a painter’s sketches of the different tree types that occur within an area.  The tree library is linked to GIS data describing the composition of each forest parcel.  These data are used to position the polygon on the terrain, select the proper understory texture (“laying the carpet”) and paste the appropriate types and number of trees within each polygon (“pouring the trees”).

The result is a strikingly lifelike rendering of the landscape instead of a traditional map.  While maps use colors and abstract symbols to represent forest conditions, the virtual forest uses realistic scene elements to reproduce the composition and structure of the forest inventory data.  This lifelike 3D characterization of spatial conditions extends the boundaries of mapping from dry and often confusing drawings to more familiar graphical perspectives.


Figure 32-6.  Changes in the landscape can be visualized by modifying the forest inventory data.

The baseline rendering for a data set can be modified to reflect changes on the landscape.  For example, the top two insets in figure 32-6 depict a natural thinning and succession after a severe insect infestation.  The winter effects were introduced by rendering with a snow texture and an atmospheric haze.

The lower pair of insets shows the before and after views of a proposed harvest block.  Note the linear texture features in the clearcut that identify the logging road.  Alternative harvest plans can be rendered and their relative visual impact assessed.  In addition, a temporal sequence can be generated that tracks the “green-up” through forest growth models as a replanted parcel grows.  In a sense, the baseline GIS information shows you “what is,” while the rendering of the output from a simulation model shows you “what could be.”

While GIS modeling can walk through time, movement to different viewpoints provides a walk through the landscape.  The viewer position can be easily changed to generate views from a set of locations, such as sensitive viewpoints along a road or trail.  Figure 32-7 shows the construction of a “fly-by” movie.  The helicopter flight path at 200 meters above the terrain was digitized then resampled every twenty meters (large red dots in the figure).  A full 3D rendering was made for each of the viewpoints (nearly 900 in all) and, when viewed at 30 frames per second, forms a twenty-eight second flight through the GIS database (see author’s note).
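The fly-by bookkeeping can be sketched as below: the digitized flight path is resampled at a fixed spacing and the movie length follows from 30 renderings per second.  The path coordinates are made up for illustration.

import math

def resample(path, spacing=20.0):
    """Return viewpoints every `spacing` meters along a digitized polyline."""
    points, since_last = [path[0]], 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing - since_last
        while d <= seg:
            t = d / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing
        since_last = (since_last + seg) % spacing
    return points

# Made-up flight path (meters), flown 200 m above the terrain.
path = [(0, 0), (5200, 2000), (10400, 1000), (15800, 4000)]
viewpoints = resample(path, spacing=20.0)
print(f"{len(viewpoints)} renderings -> about {len(viewpoints) / 30:.0f} seconds at 30 frames/second")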



Figure 32-7.  A “fly-by” movie is constructed by generating a sequence of renderings then viewing them in rapid succession (click here to view the fly-by).

Admittedly, real-time “fly-bys” of GIS databases are a bit futuristic.  With each scene requiring three to four minutes to fully render on a PC-level computer, a 30-second movie requires about 45 hours of processing time.  The Lucasfilm machines would reduce the time to a few minutes, but it will take a few years to get that processing power on most desktops.  In the interim, the transition from traditional maps to fully rendered scenes is operationally constrained to a few vanguard software systems.

There are several concerns about converting GIS data into realistic landscape renderings.  Tree placement is critical.  Recall that “stocking” (number of trees per acre) is the forest inventory statistic used to determine the number of trees to paste within a polygon.  While this value indicates the overall density, it assumes the trees are randomly distributed in geographic space.

While trees off in the distance form a modeled texture, placement differences of a couple of feet for trees in the foreground can significantly alter a scene.  For key viewpoints, GPS positioning of specific trees within a few feet of the viewer is required.  Also, in sequential rendering the trees are statistically placed for the first scene and that “tree map” is used for all of the additional scenes.  Many species, such as aspen, tend to group, and statistical methods are needed to account for “clumping” (number of seed trees) and “compaction” (distance function from seed tree).
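One plausible way to account for clumping and compaction (an assumption for illustration, not the actual algorithm) is to scatter a few seed trees and place the rest near a randomly chosen seed with a distance fall-off:

import math
import random

def clustered_placement(bbox, n_trees, n_seeds=5, compaction=15.0):
    """Clumping = number of seed trees; compaction = mean distance (m) from a seed."""
    (xmin, ymin, xmax, ymax) = bbox
    seeds = [(random.uniform(xmin, xmax), random.uniform(ymin, ymax))
             for _ in range(n_seeds)]
    trees = list(seeds)
    while len(trees) < n_trees:
        sx, sy = random.choice(seeds)               # pick a clump to join
        r = random.expovariate(1.0 / compaction)    # distance from the seed tree
        theta = random.uniform(0.0, 2.0 * math.pi)
        x, y = sx + r * math.cos(theta), sy + r * math.sin(theta)
        if xmin <= x <= xmax and ymin <= y <= ymax:  # stay within the polygon extent
            trees.append((x, y))
    return trees

aspen = clustered_placement((0, 0, 200, 200), n_trees=60)
print("placed", len(aspen), "aspen in", 5, "clumps")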



Figure 32-8.  Strikingly real snapshots of forest data can be generated from either limited or robust GIS data.

Realistic trees, proper placement, appropriate under-story textures and shaded relief combine to produce strikingly real snapshots of a landscape.  With robust forest inventory data the rendering closely matches reality.  However, equally striking results can be generated from limited data.  For example, the “green” portions on topographic maps indicate forested areas, but offer no information about species mixture, age/maturity or stocking.  Within a GIS, a “best guess” (er… expert opinion) can be substituted for the field data and one would be hard-pressed to tell the differences in the rendered scenes.

That brings up an important point as map display evolves to virtual reality—how accurate is the portrayal?  Our cartographic legacy has wrestled with spatial and thematic accuracy, but “rendering fidelity” is an entirely new concept (figure 32-8).  Since you can’t tell by looking, standards must be developed and integrated with the metadata accompanying a rendering.  Interactive links between the underlying data and the snapshot are needed.  Without these safeguards, it’s “viewer beware,” which opens a whole new avenue for lying with maps.

While Michael Crichton’s immersion into a virtual representation of a database (the novel Disclosure) might be decades off, virtual renderings of GIS data are a quantum leap forward.  The pastel colors and abstract symbols of traditional maps are becoming endangered cartographic procedures.  When your grandchild conjures up a 3D landscape with real-time navigation on a wrist-PC, you’ll fondly recall the bumpy transition from our paper-map paradigm.
_______________________
Author's Note: the Virtual Forest software package by Pacific Meridian Resources was used for the figures in this column.  At http://www.innovativegis.com/products/vforest/, select “Flybys” to access the simulated helicopter flight described, as well as numerous other examples of 3D rendering.


Capture "Where and When" on Video-based GIS    
(GeoWorld, September 2000, pg. 26-27)

(return to top of Topic)

 

The past three columns described procedures for translating GIS data into virtual renderings of a landscape.  While traditional maps generalize landscape features as abstract symbols and patterns, a virtual forest portrays mapped data more like a painting.  Instead of pastel colors and crosshatching, realistic objects, such as trees, rocks and water, are appropriately placed on a shaded relief surface.  The effect is a map that rivals a photographic snapshot of the conditions recorded in the GIS database.

An alternative is to populate a GIS database with actual snapshots and streaming video that are linked to their map location.  Multimedia GIS provides a connection between a map and field-collected images, audio and tabular summaries.  This emerging field is poised to recast our perspective of what maps are and how they can be used.

Video mapping is an exciting part of the revolution in visualization of mapped data.  It records GPS signals directly on videotape shot in the field.  When the tape is played back to a computer, these data are linked to a digital map for easy access and review.  The result is an extension of field data collection to field experience collection through geo-registered visual and audio records.

With video mapping, the construction of a multimedia GIS no longer involves tedious and time-consuming procedures for encoding spatial coordinates of the imagery.  The entire process, from field video collection to map indexing and Web page publishing, consists of three simple steps—Recording, Indexing and Review.

During the Recording Step, video mapping encodes GPS coordinates directly onto the videotape (see figure 32-9).  The video mapping unit contains a standard GPS board that monitors the satellite signals and converts this information into a data stream consisting of longitude (X), latitude (Y), actual time/date, and a variety of supporting data.  These data are output as standard NMEA-formatted sentences.  A second circuit board in the unit converts this digital information into an audio signal in a manner similar to a modem for phone line access to the Internet.


Figure 32-9.  Video Mapping in the Field.  As video is recorded, the precise location, time, and date are recorded every second to one of the videotape’s audio tracks.  The other track records pertinent information as you speak.

In turn, the acoustic signals are sent to one of the audio channels through the microphone input connector on the video camera.  The result is that the GPS position is recorded on the videotape every second the camera is on.
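For reference, the one-second data stream is a standard NMEA sentence; a minimal sketch of pulling time and position out of a (fabricated) GGA sentence might look like this:

def parse_gga(sentence):
    """Extract time, latitude and longitude from a $GPGGA sentence."""
    f = sentence.split(",")
    if not f[0].endswith("GGA"):
        raise ValueError("not a GGA sentence")
    hhmmss = f"{f[1][0:2]}:{f[1][2:4]}:{f[1][4:6]}"
    lat = int(f[2][:2]) + float(f[2][2:]) / 60.0     # ddmm.mmm -> decimal degrees
    lon = int(f[4][:3]) + float(f[4][3:]) / 60.0     # dddmm.mmm -> decimal degrees
    if f[3] == "S":
        lat = -lat
    if f[5] == "W":
        lon = -lon
    return hhmmss, lat, lon

# Fabricated example sentence (checksum omitted).
print(parse_gga("$GPGGA,170834,4034.123,N,10505.678,W,1,05,1.5,1577.0,M,,,,"))
# -> ('17:08:34', about 40.5687, about -105.0946)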

The direct recording of “where and when” on the tape greatly facilitates field data collection—as long as there is GPS reception, the information is automatically recorded on the same medium (videotape) as the imagery.  In addition, any audio notes you might make are captured on the same tape.  Voice recognition software can convert the notes into text, or, if specific voice commands are used, the information can be converted into a database record.

Most contemporary video cameras have a switch between photo and movie mode.  In movie mode, streaming video is recorded at 30 frames per second.  In photo mode, the camera acts like a still camera and "freezes" the frame for several seconds as it records the image to videotape.  In this mode, a one-hour videotape can record over 500 digital pictures.  In both photo and movie modes the one-second GPS "data stamp" provides ample positioning information for most applications… every 88 feet at 60 mph in a car or every 3 feet while strolling at 2 mph.
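The quoted spacings follow directly from the one-second fix rate; a quick check:

# Ground distance covered between one-second GPS "data stamps".
FEET_PER_MILE, SECONDS_PER_HOUR = 5280, 3600

def fix_spacing_feet(mph):
    return mph * FEET_PER_MILE / SECONDS_PER_HOUR

print(f"{fix_spacing_feet(60):.0f} ft between fixes at 60 mph")   # 88 ft
print(f"{fix_spacing_feet(2):.1f} ft between fixes at 2 mph")     # about 2.9 ft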

The Indexing Step involves connecting the video mapping unit to a computer and playing the video (see figure 32-10).  In this configuration, the audio cord is switched to the Headphone Output connector and a second cable is connected to the Lan C connector on the camera.  The connector provides information on tape position (footage on older cameras and time code on newer ones) used for indexing and for playback control of the tape, similar to the controls on a standard VCR.



Figure 32-10.  Indexing the Videotape.  The video, audio notes and GPS information are used to construct a multimedia map of the precise position and date/time of the video footage, providing direct retrieval of text, data, audio, image and video by simply clicking on the map.

As the videotape is played, the audio X,Y and time code information is sent to the video mapping unit where it is converted to digital data and sent to the serial port on the computer.  If a headset was used in the field, the voice recording on the second audio channel is transferred as well.

For indexing there are five types of information available—streaming video (movie mode), still images (photo mode), voice audio (headset), tape time code (tape position), and GPS data (camera geo-position plus date/time and information on satellite lock)—all automatically registered on the videotape whenever the camera is recording.

Video mapping software records the GPS information from the videotape and constructs a database that connects GPS locations with videotape time codes.  The computer generates an interactive map of everywhere that video was recorded.  While the map is being indexed, special features can be marked on the map, information entered about them, or still images captured.  The map can be used for review, or exported in a MapInfo or ArcView compatible format.
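Conceptually, the index pairs each one-second GPS fix with the tape time code at which it was read; clicking the map then reduces to finding the nearest fix.  The structure below is illustrative only, not the actual software's format.

# Illustrative index records: (latitude, longitude, tape time code in seconds).
index = [
    (40.5687, -105.0946, 0),
    (40.5689, -105.0941, 1),
    (40.5692, -105.0936, 2),
]

def seek_for_click(lat, lon):
    """Return the tape time code of the GPS fix closest to a clicked map location."""
    nearest = min(index, key=lambda rec: (rec[0] - lat) ** 2 + (rec[1] - lon) ** 2)
    return nearest[2]

print("seek tape to", seek_for_click(40.5690, -105.0940), "seconds")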

The Review Step uses the indexed database to access audio and video information on the tape.  The hardware configuration is the same as for indexing (audio, Lan C and serial cables).  Clicking on any indexed location retrieves its GPS data and associated video.  Player controls similar to those on a VCR send commands to the video camera, which winds the tape to the noted time code and plays back the video from that location.

Map features can start applications, open files and display images.  The software works with video capture cards to create still images and video clips you can link to map features, giving maximum flexibility in choosing a data review method.  In many applications the completed multimedia map is exported as an HTML file for viewing with any browser or over the Internet.  The map features can contain any or all five of the basic information types:

-  Text — interpreted from audio as .DOC file
-  Data — interpreted from audio as .DAT, .XLS or .DBF file
-  Audio — captured as .WAV file (about 100KB per 5 seconds)
-  Image — captured as .JPG file (about 50KB per image)
-  Video — captured as .AVI file (about 1MB per 5 seconds)

Next month’s column explores the procedures for constructing finished maps and describes several applications of video mapping.  In the interim, you might check out the links to some online examples (see author’s notes).
_______________________
Author's Note: the information contained in this column is intended to describe the conceptual approach, considerations and practical applications of video mapping.  Several online demonstrations of this emerging technology in a variety of applications are available at http://www.redhensystems.com.  For information on the VMS 200™ Video Mapping System contact Red Hen Systems, Inc., 2310 East Prospect Road, Suite A, Fort Collins, CO 80525, USA, Phone: (800) 237-4182, Email: info@redhensystems.com, Web site: http://www.redhensystems.com/.


Video Mapping Brings Maps to Life    
(GeoWorld, October 2000, pg. 24-25)

(return to top of Topic)

 

As detailed in last month’s column, video mapping enables anyone with a computer and video camera to easily create their own interactive video maps.  The integration of computers, video cameras, and GPS marks a technological milestone that finally makes GIS multimedia a practical reality.  A user can inexpensively add real-time, geographically indexed images, audio and video clips to ongoing data collection activities.  Applications abound in natural resources, precision farming, business, government, science, recreation, and any other endeavor that needs a visual GIS capability.

For example, corridor mapping of oil and gas pipelines, transmission towers, rights-of-way and the like provides images of actual conditions not normally part of traditional maps.  In law enforcement, video mapping can be used from reconnaissance to traffic safety to forensics.  Agriculture applications include crop scouting, weed/pest management, verification of yield maps, and “as-applied” mapping.  Geo-business uses range from conveying neighborhood character to insurance reporting to web page development.

By coupling audio/visual information to other GIS data, anyone can see first-hand conditions that add reality to tabular field data.  In disaster assessment, the ability to click on several indexed locations and see and hear the extent of damage can convey much more information than simple statistics.  In forestry, resource managers who were not involved in field data collection can review conditions not easily quantified, such as under-story characteristics and wildlife habitat potential.  In short, video mapping provides “the missing link" in GIS, enabling incorporation of visual layers into any geo-referenced data set.

Data are collected in the field without a computer or cumbersome additional equipment.  Figure 32-11 shows the VMS™ unit by Red Hen Systems (see author’s note) that weighs less than a pound and is connected to the video camera via a small microphone cable.  The GPS antenna is easily attached to a cap, hardhat or shoulder strap of a carrying case.  Optional hardware includes an electronic compass to record camera direction and a laser rangefinder to electronically measure the distance to objects at the optical center of the camera's view.  A multispectral unit for simultaneously recording up to four wavelength bands is under development.


Figure 32-11.  Video Mapping Hardware.  The VMS 200 unit enables recording and processing of  GPS signals and video time codes.

The office configuration consists of a video camera, VMS unit, notebook or desktop computer, and mapping software.  The software generates a map automatically from the data recorded on the videotape.  Once a map is created, it can be personalized by placing special feature points that relate to specific locations.  These points are automatically or manually linked to still images, video clips, sound files, documents, data sets, or other actions that are recalled at the touch of a button.  A voice recognition package is under development that will create free-form text and data-form entry.  The mapping software also is compatible with emerging GPS-based still cameras.

While a map is being created, or at a later time, a user can mark special locations with a mouse-click to "capture" still images, streaming video or audio files.  The FireWire port on many of the newer computers makes capturing multimedia files a snap.  Once captured, a simple click on the map feature accesses the images, associated files, or video playback beginning at that location.

Sharing or incorporating information is easy because the video maps are compatible with most popular GIS programs.  An HTML export function provides an extremely useful data delivery device for service providers, project managers, or others who need to make their imagery generally available.  By transforming the maps and associated data to a Web page, time-dependent information can be "served to the Internet" and made available to thousands of people within a few minutes of collection.

Differential post-processing is another important software addition.  The post-processing software (EZdiffTM) takes base station correction data from the Internet and performs a calculation against the video mapped GPS data for the same time period, then outputs a data file containing the corrected, highly accurate points.  It mathematically corrects the autonomous ("normal") GPS signals from an error of about 10 meters to positional accuracy of about 1 to 2 meters.

Figure 32-12 shows an example of the video mapping software.  The dark blue line on the map identifies the route of an ultralight (a hang glider with an engine).  Actually the line is composed of a series of dots—one for each second the video camera was recording.  Clicking anywhere on the line will cause the camera, or VCR, to automatically fast forward/reverse to the location and begin playing the video.

The light blue and red dots in the figure are feature locations where still images, audio tracks and video clips were captured to the hard disk.  The larger inset is a view of the lake and city from the summit of a hiking trail.  The adjacent red dots are a series of similar images taken along the trail.  When a video camera is set in photo mode, a one-hour videotape contains nearly 600 exposures—no film, processing or printing required.  In addition, the automatic assignment of GPS time and position makes filing and retrieving a trivial task—no more file cabinets, manila folders, or photos taped to reports.



Figure 32-12.  Video Mapping Software.  Specialized software builds a linked database and provides numerous features for accessing the data, customizing map layout and exporting to a variety of formats.

The top captured image on the right side of the figure shows a photo taken from an ultralight inventory of bridges along a major highway.  The middle image is a field photo of cabbage looper damage in a farmer’s field.  The bottom image is of a dummy in a training course for police officers.  The web pages for these and other applications are online to provide a better understanding of video mapping capabilities (see author’s notes).

For centuries, maps have been abstractions of reality that use inked lines, symbols and shadings to depict the location of physical features and landscape conditions.  Multimedia GIS, and video mapping in particular, provide an easy means of linking additional audio/visual information to map features.  Special equipment, field procedures and office processing are minimal and easy to learn.  The ability to access stored images, video, audio, text, and data at the click of a mouse radically changes our paradigm of a map—from abstract drawings to sights, sounds and summaries of actual conditions.  Video mapping truly "makes maps come alive."
_______________________
Author's Note: the information contained in this column is intended to describe the conceptual approach, considerations and practical applications of video mapping.  Several online demonstrations of this emerging technology in a variety of applications are available at http://www.redhensystems.com.  For information on the VMS 200™ Video Mapping System contact Red Hen Systems, Inc., 2310 East Prospect Road, Suite A, Fort Collins, CO 80525, USA, Phone: (800) 237-4182, Email: info@redhensystems.com, Web site: http://www.redhensystems.com/.

(return to top of Topic)

(Back to the Table of Contents)