Even if a method cannot be worked out to repaint the volumes at lower cost, it would still be useful to have such a tool.
For reference: Humus "Volume Light Map" demo
http://www.humus.name/index.php?page=3D&ID=47
To start, let's examine the limitations of Id Tech 4's light volume structure. A light volume is filled by combining a 1D "Falloff Image" with a 2D "Projection Image".
The 1D Falloff Image can be thought of as a map of how bright the light is at each relative Z-distance from the source.
The 2D Projection Image is just a plain old 2D texture that paints the XY cross-section of the volume.
So if the Falloff Image is full-bright, every visual element in the Projection Image is simply extruded along Z into a prism.
You can tease out more complex shapes by playing with the gradients of both images. For example, you can make a sphere of light by using a Falloff Image that ramps from black to white to black together with a Projection Image whose bright central circle ramps out to black.
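To make that interaction concrete, here is a minimal sketch (Python/NumPy, not engine code; the 64-voxel cube resolution is arbitrary) of how the two images multiply together to build the roughly spherical blob described above:

    # Minimal sketch of how a 2D Projection Image and a 1D Falloff Image
    # combine into a light volume.  Not engine code.
    import numpy as np

    N = 64  # arbitrary cube resolution for the example

    # 2D Projection Image: bright central circle ramping to black at the edges.
    ys, xs = np.mgrid[0:N, 0:N]
    radius = np.hypot(xs - N / 2, ys - N / 2) / (N / 2)
    projection = np.clip(1.0 - radius, 0.0, 1.0)        # shape (N, N)

    # 1D Falloff Image: black -> white -> black ramp along Z.
    z = np.linspace(0.0, 1.0, N)
    falloff = 1.0 - np.abs(2.0 * z - 1.0)                # shape (N,)

    # Every voxel is projection(x, y) * falloff(z); a full-bright falloff would
    # instead extrude the projection straight through the volume as a prism.
    volume = falloff[:, None, None] * projection[None, :, :]   # shape (N, N, N)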
This would be my idea of the process:
1) Slice a 3D texture into 2D planes and bake the resulting images into memory (the only 3D texture format I am aware of is the one in the demo above).
2) Presuming the 3D texture cube was sliced top-down, compare the slices successively for elements that run parallel to the slicing axis. Once you have built a composite that covers the largest number of slices, bake it to a 2D Projection Image and create a corresponding 1D Falloff Image that brightens and darkens that composite throughout the volume. Try to take advantage of the volumetric gradients that emerge from the interaction between the 2D and 1D images to subtract as much data as possible. (A rough sketch of this pass appears after the list.)
3) At the same time as step 2, bake unique visual elements off into their own Projection Images and create a single-element Falloff Image for each unique image.
4) Repeat step 2 across the new set of images from step 3.
5) Once you have boiled the set down to as few 2D + 1D sets as possible, slice the same 3D texture along a different axis and perform the same comparisons and reductions.
6) Determine which axis requires the fewest unique 1D + 2D sets to reproduce the volume.
7) Use the smallest set from step 6 as a master set from which the other sets are subtracted. Try to boil down all sets by looking for ways the light elements can be composited across up to six projection axes.
8) Once the absolute smallest number of 1D + 2D sets has been created, bake all the 2D Projection Images into a single long texture (like a filmstrip) while generating a scale + translate table that can snap to any specific "frame" in the strip. Do the same for the Falloff Images, making sure to label the coordinate references so you can easily find which 2D image goes with which 1D falloff. (A sketch of this packing step also follows the list.)
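Steps 2 through 4 are the hard part, so here is a rough sketch of what that pass could look like for one slicing axis. It is Python/NumPy, the function name extract_sets is my own, and I have substituted a simple greedy least-squares fit for the by-eye comparison described above, so treat it as an illustration of the idea rather than the exact comparison the tool would need:

    import numpy as np

    def extract_sets(slices, max_sets=8, tolerance=1e-3, refine_iters=10):
        """slices: array of shape (depth, height, width) from step 1, values 0..1.
        Returns a list of (projection_2d, falloff_1d) pairs whose stacked
        products approximate the sliced volume along this axis."""
        residual = slices.astype(np.float64).copy()
        sets = []
        for _ in range(max_sets):
            # Seed the composite with the brightest remaining slice.
            composite = residual[np.argmax(residual.sum(axis=(1, 2)))].copy()
            falloff = np.zeros(residual.shape[0])
            for _ in range(refine_iters):
                # Per-slice falloff weight: least-squares scale of the composite.
                denom = (composite * composite).sum() + 1e-12
                falloff = (residual * composite).sum(axis=(1, 2)) / denom
                # Refit the composite against the weighted slices.
                wsum = (falloff * falloff).sum() + 1e-12
                composite = (residual * falloff[:, None, None]).sum(axis=0) / wsum
            sets.append((composite, falloff))
            # Subtract what this 2D + 1D set already explains; step 3's "unique
            # elements" are whatever remains in the residual afterwards.
            residual -= falloff[:, None, None] * composite
            if np.abs(residual).max() < tolerance:
                break
        return sets

Running this once per candidate axis and keeping the axis that comes back with the fewest sets would cover steps 5 and 6. A real tool would also need to clamp each set to the 0..1 range, since textures cannot store negative light.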
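For step 8, the filmstrip itself is simple to generate once the frames are fixed. The sketch below (again Python/NumPy, with names of my own) packs equally sized Projection Images left to right and records a scale + translate per frame so a light can snap its U coordinate to the right frame; the Falloff Images would be packed the same way into their own strip, with each table entry labelled so a 2D frame and its matching 1D frame can be found together:

    import numpy as np

    def pack_filmstrip(projections, names):
        """projections: list of equally sized (H, W) arrays; names: matching labels.
        Returns the strip texture and a {name: (scale_u, offset_u)} table."""
        count = len(projections)
        strip = np.concatenate(projections, axis=1)        # frames side by side along U
        table = {name: (1.0 / count, index / count)        # u' = u * scale + offset
                 for index, name in enumerate(names)}
        return strip, table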
Extra credit:
Create a tool that does the above but also keeps track of optimal places to break the volume apart, creates new 2D + 1D sets for each resulting light volume size, and provides the relative sizing of each needed volume.
So:
Volume 1 = XYZ size + named 1D+2D set A
Volume 2 = XYZ size + named 1D+2D set B
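Just to illustrate the kind of record that tool would need to emit per sub-volume, here is a sketch in Python; the field names are my own invention, not an existing Id Tech 4 format:

    from dataclasses import dataclass

    @dataclass
    class SubVolume:
        size_xyz: tuple          # relative XYZ size of this light volume
        projection_name: str     # which 2D Projection frame in the filmstrip
        falloff_name: str        # which 1D Falloff frame pairs with it

    manifest = [
        SubVolume((1.0, 1.0, 0.5), "set_A_projection", "set_A_falloff"),
        SubVolume((1.0, 1.0, 0.5), "set_B_projection", "set_B_falloff"),
    ]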
So that's the challenge.
It could be approximated manually, but automating it would allow for precise light-map reproduction.
Boiled down, the problem is:
Given a fixed-resolution 3D texture, what is the smallest number of 1D + 2D sets that can reasonably reproduce the data set? You are allowed to save data by sharing the work across all six projection axes, and by predicting the volumetric structures that emerge from the interaction of the projections.
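Put another way, it is a separable-decomposition problem: find the fewest (Projection, Falloff, axis) terms whose summed products stay within some error budget of the original texture. Here is a small scoring sketch (Python/NumPy; the names and the max-error metric are my own choices, and it assumes a cubic texture so any axis orientation fits):

    import numpy as np

    def reconstruct(shape, terms):
        """terms: list of (projection_2d, falloff_1d, axis), axis in {0, 1, 2}
        giving the direction the 1D falloff runs along."""
        volume = np.zeros(shape)
        for projection, falloff, axis in terms:
            # Outer product with the falloff along axis 0, then rotate into place.
            term = falloff[:, None, None] * projection[None, :, :]
            volume += np.moveaxis(term, 0, axis)
        return volume

    def max_error(original, terms):
        """How far a candidate set of 1D + 2D terms is from the source texture."""
        return np.abs(original - reconstruct(original.shape, terms)).max()

The smallest list of terms whose max_error falls under a chosen threshold is the answer the challenge is asking for.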