Given a fixed viewing direction and resolution, we need to encode enough information to reproduce the image of the whole 3D scene as it would be rendered with arbitrary color and transparency attributes for each structure. This suffices for the promised types of interaction: structures can be turned off by making them fully transparent, and picking can be implemented using an image map. The idea behind MLIs is simple: from a fixed viewing direction and resolution, the only information needed to reconstruct any of these images is the depth and intensity information for each structure, rendered separately. For each pixel of the target image, the client can then easily determine which structures contribute to that pixel and in what proportions. We push as much of the overall computation as possible into the generation of the representation, keeping this process fast on the client.
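As a rough sketch of the per-pixel reconstruction step described above, the following assumes each structure contributes at most one (depth, intensity) sample per pixel, and that the client composites the depth-sorted samples front to back using the user-chosen color and transparency of each structure. The function name, layer format, and attribute encoding are illustrative assumptions, not the paper's actual data layout:

```python
def composite_pixel(layers, attrs):
    """Composite one pixel from per-structure (depth, intensity) samples.

    layers: list of (structure_id, depth, intensity) samples for this pixel;
            intensity in [0, 1], depth increasing away from the viewer.
    attrs:  dict mapping structure_id -> (r, g, b, alpha); alpha = 0 hides
            a structure entirely (fully transparent).
    """
    # Sort contributing structures front to back by depth.
    samples = sorted(layers, key=lambda s: s[1])
    color = [0.0, 0.0, 0.0]
    remaining = 1.0  # transmittance still reaching the viewer
    for sid, _depth, intensity in samples:
        r, g, b, alpha = attrs[sid]
        if alpha == 0.0:
            continue  # structure turned off by the user
        w = remaining * alpha * intensity
        color[0] += w * r
        color[1] += w * g
        color[2] += w * b
        remaining *= 1.0 - alpha
        if remaining <= 0.0:
            break  # everything behind this sample is fully occluded
    return tuple(color)
```

Because the depth and intensity layers are fixed for a given viewpoint, only this lightweight loop runs when the user changes a structure's color or transparency, which is the point of pushing the expensive rendering work into the representation.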