Even
though WPF is able to exploit graphics card acceleration features designed
for use in 3D, it is essentially a 2D technology. The layout mechanism deals
entirely in logically rectangular entities. (Of course elements are not
required to be rectangular, but they are always laid out according to their
rectangular bounding box, regardless of their shape.) The composition engine
that combines all the elements of the UI into a final image also operates in
a two-dimensional world – it effectively draws each element in turn onto the
flat surface that is the screen.
The only extent to which controls, shapes, and most other framework elements understand a third dimension is the 'Z order', which simply determines what happens when elements overlap. (The Z order is determined by the order in which elements appear in the XAML document: the earlier an element appears, the further towards the back it is.) This allows us to indicate which element should appear 'on top' when two elements overlap. However, it is just an ordering – the Z order does not let us distinguish between one object being directly behind another and being a very long way behind it. So the coordinate system for the UI is essentially two-dimensional.
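
For example, document order alone decides which shape wins where two shapes overlap. A minimal sketch (the positions, sizes, and colours here are arbitrary):

```xml
<Canvas>
  <!-- Appears first in the document, so it is drawn first and sits at the back -->
  <Rectangle Canvas.Left="10" Canvas.Top="10" Width="80" Height="80" Fill="Blue" />
  <!-- Appears later, so it is drawn on top where the two overlap -->
  <Ellipse Canvas.Left="50" Canvas.Top="50" Width="80" Height="80" Fill="Red" />
</Canvas>
```

The attached Panel.ZIndex property can override this default document order, but the result is still just an ordering – there is no notion of distance between the two shapes.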
In a 3D
world, the X, Y, and Z position of every object is specified precisely. This
means that 3D models occupy a different coordinate system from the rest of
the WPF UI. This presents a problem: how do we bring these two distinct
coordinate systems together?
The solution employed by WPF is a special element, Viewport3D, which bridges the gap between the two worlds by projecting the 3D world onto a 2D rectangle. From the layout system's point of view, a Viewport3D is just another rectangular element; everything inside it lives in 3D coordinates.
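
A minimal Viewport3D needs a camera, at least one light, and some geometry; the projected image then fills the element's 2D rectangle like any other element's content. A sketch (the camera position and the single-triangle mesh are arbitrary illustrative values):

```xml
<Viewport3D Width="200" Height="200">
  <Viewport3D.Camera>
    <!-- The camera defines how the 3D scene is projected onto the 2D rectangle -->
    <PerspectiveCamera Position="0,0,5" LookDirection="0,0,-1" />
  </Viewport3D.Camera>
  <ModelVisual3D>
    <ModelVisual3D.Content>
      <Model3DGroup>
        <!-- Without a light, the geometry would render as black -->
        <DirectionalLight Color="White" Direction="-1,-1,-3" />
        <GeometryModel3D>
          <GeometryModel3D.Geometry>
            <!-- A single triangle in the Z=0 plane -->
            <MeshGeometry3D Positions="-1,-1,0 1,-1,0 0,1,0"
                            TriangleIndices="0 1 2" />
          </GeometryModel3D.Geometry>
          <GeometryModel3D.Material>
            <DiffuseMaterial Brush="Red" />
          </GeometryModel3D.Material>
        </GeometryModel3D>
      </Model3DGroup>
    </ModelVisual3D.Content>
  </ModelVisual3D>
</Viewport3D>
```

Note that the Positions inside the MeshGeometry3D are full X,Y,Z coordinates – the 3D coordinate system – while the Width and Height on the Viewport3D itself belong to the ordinary 2D layout system.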
Note that the Viewport3D is a live view – when the 3D scene it hosts changes, the projected image on screen updates accordingly.
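
Because the view is live, animating any property of the 3D model is enough to make the Viewport3D re-render. As an illustrative sketch (the `spin` name is made up), a rotation can be attached to a model and then driven by an animation:

```xml
<GeometryModel3D.Transform>
  <RotateTransform3D>
    <RotateTransform3D.Rotation>
      <!-- x:Name lets a storyboard or code-behind target this rotation -->
      <AxisAngleRotation3D x:Name="spin" Axis="0,1,0" Angle="0" />
    </RotateTransform3D.Rotation>
  </RotateTransform3D>
</GeometryModel3D.Transform>
```

Animating the Angle property of `spin` (for example with a DoubleAnimation from 0 to 360) spins the model about the Y axis, and the Viewport3D renders each new frame automatically.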