Neuroglancer Precomputed Volume Format¶
The precomputed volume format stores 4-d XYZC single- or multi-resolution arrays. The XYZ dimensions are chunked and optionally stored at multiple resolutions, while the C (channel) dimension is neither chunked nor multi-resolution.
The volume format consists of a directory tree containing an info
metadata file in JSON format, and the associated chunk data in the relative
paths specified in the metadata.
info metadata format¶
- PrecomputedVolume : object
  Precomputed volume metadata.
  - Required members:
    - type : "image" | "segmentation"
      Specifies how to interpret the volume.
      One of:
      - "image": Generic image volume. Displays as an image layer by default.
      - "segmentation": Discrete object label volume. Displays as a segmentation layer by default.
    - data_type : "uint8" | "int8" | "uint16" | "int16" | "uint32" | "int32" | "uint64" | "float32"
      Data type of the volume.
    - num_channels : integer
      Number of channels (the size of the C dimension).
  - Optional members:
    - @type : "neuroglancer_multiscale_volume"
      Precomputed data kind.
      Optional but strongly recommended for new data.
    - scales : array[1..] of object
      Metadata for each resolution of the data.
      - Required members:
        - key : string
          Relative path to the directory containing the chunk data for this scale.
          Example: "8_8_8"
          Example: "../other_volume/8_8_8"
        - size : array[3] of integer[0, +∞)
          Dimensions of the volume in voxels (XYZ order).
          Example: [500, 500, 500]
        - resolution : array[3] of number
          Voxel size in nanometers (XYZ order).
          Note: Units other than meters cannot be represented.
        - encoding : "raw" | "jpeg" | "compressed_segmentation" | "compresso" | "png" | "jxl"
          Specifies the encoding of the chunk data (see the chunk encoding section below).
      - Optional members:
        - voxel_offset : array[3] of integer
          Origin of the volume in voxels (XYZ order).
        - chunk_sizes : array[1..] of array[3] of integer[1, +∞)
          Chunk dimensions (XYZ order).
          Typically just a single chunk shape is specified, but more than one chunk shape can be specified to optimize different read access patterns. For each chunk shape specified, a separate copy of the data must be stored.
          Example: [[64, 64, 64]]
          Example: [[512, 512, 1], [512, 1, 512], [1, 512, 512]]
        - jpeg_quality : integer[0, 100]
          JPEG encoding quality when writing chunks.
          Only valid if encoding is "jpeg". The quality is specified using the IJG (Independent JPEG Group) [0, 100] recommended scale, with 0 having the worst quality (smallest file size) and 100 the best quality (largest file size).
          Note: This option only affects writing and is ignored by Neuroglancer.
        - png_level : integer[0, 9]
          PNG compression level when writing chunks.
          Only valid if encoding is "png". Specifies the zlib compression level between [0, 9], where 0 is uncompressed, with 1 having the fastest compression (largest file size), and 9 the slowest compression (smallest file size).
          Note: This option only affects writing and is ignored by Neuroglancer.
        - compressed_segmentation_block_size : array[3] of number[1, +∞)
          Block size for compressed segmentation encoding (XYZ order).
          Must be specified if, and only if, encoding is "compressed_segmentation".
        - sharding : PrecomputedSharding
          Sharding parameters.
          If specified, indicates that the chunks are stored in sharded format. If unspecified, chunks are stored in unsharded format.
        - hidden : boolean
          Exclude scale from rendering.
    - mesh : string
      Relative path to associated object meshes.
      Only valid if type is "segmentation".
    - skeletons : string
      Relative path to associated object skeletons.
      Only valid if type is "segmentation".
    - segment_properties : string
      Relative path to associated segment properties.
      Only valid if type is "segmentation".
Example
{ "data_type": "uint8", "num_channels": 1, "scales": [ { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "8_8_8", "resolution": [8, 8, 8], "size": [6446, 6643, 8090], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "16_16_16", "resolution": [16, 16, 16], "size": [3223, 3321, 4045], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "32_32_32", "resolution": [32, 32, 32], "size": [1611, 1660, 2022], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "64_64_64", "resolution": [64, 64, 64], "size": [805, 830, 1011], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "128_128_128", "resolution": [128, 128, 128], "size": [402, 415, 505], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "256_256_256", "resolution": [256, 256, 256], "size": [201, 207, 252], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "encoding": "jpeg", "key": "512_512_512", "resolution": [512, 512, 512], "size": [100, 103, 126], "voxel_offset": [0, 0, 0] }], "type": "image" }
Example
{ "data_type": "uint64", "mesh": "mesh", "num_channels": 1, "scales": [ { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "8_8_8", "resolution": [8, 8, 8], "size": [6446, 6643, 8090], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "16_16_16", "resolution": [16, 16, 16], "size": [3223, 3321, 4045], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "32_32_32", "resolution": [32, 32, 32], "size": [1611, 1660, 2022], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "64_64_64", "resolution": [64, 64, 64], "size": [805, 830, 1011], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "128_128_128", "resolution": [128, 128, 128], "size": [402, 415, 505], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "256_256_256", "resolution": [256, 256, 256], "size": [201, 207, 252], "voxel_offset": [0, 0, 0] }, { "chunk_sizes": [[64, 64, 64]], "compressed_segmentation_block_size": [8, 8, 8], "encoding": "compressed_segmentation", "key": "512_512_512", "resolution": [512, 512, 512], "size": [100, 103, 126], "voxel_offset": [0, 0, 0] }], "type": "segmentation" }
Chunked representation of volume data¶
For each scale and chunk size chunk_size specified in chunk_sizes, the volume (of voxel dimensions size = [sx, sy, sz]) is divided into a grid of grid_size = ceil(size / chunk_size) chunks. The grid cell with grid coordinates g, where 0 <= g < grid_size, contains the encoded data for the voxel-space subvolume [begin_offset, end_offset), where begin_offset = voxel_offset + g * chunk_size and end_offset = voxel_offset + min((g + 1) * chunk_size, size). Thus, the size of each subvolume is at most chunk_size but may be truncated to fit within the dimensions of the volume. Each subvolume is conceptually a 4-dimensional [x, y, z, channel] array.
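The following Python sketch (illustrative only; the function name is not part of the format) makes this arithmetic concrete:

def chunk_bounds(g, chunk_size, size, voxel_offset):
    # All arguments are 3-element sequences in XYZ order.
    # grid_size = ceil(size / chunk_size), computed per dimension.
    grid_size = [-(-s // c) for s, c in zip(size, chunk_size)]
    assert all(0 <= gi < gs for gi, gs in zip(g, grid_size))
    begin = [o + gi * c for o, gi, c in zip(voxel_offset, g, chunk_size)]
    end = [o + min((gi + 1) * c, s)
           for o, gi, c, s in zip(voxel_offset, g, chunk_size, size)]
    return begin, end  # half-open voxel range [begin, end)

With size = [500, 500, 500], chunk_size = [64, 64, 64], and voxel_offset = [0, 0, 0], the grid is 8 x 8 x 8 cells, and chunk_bounds([7, 0, 0], ...) gives begin = [448, 0, 0] and end = [500, 64, 64], i.e. the last chunk along x is truncated.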
Unsharded chunk storage¶
If sharding parameters are not specified for a scale, each chunk is stored as a separate file within the path specified by the key property, with the name xBegin-xEnd_yBegin-yEnd_zBegin-zEnd, where:
- xBegin, yBegin, and zBegin are substituted with the base-10 string representations of the x, y, and z components of begin_offset, respectively; and
- xEnd, yEnd, and zEnd are substituted with the base-10 string representations of the x, y, and z components of end_offset, respectively.
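Building on the chunk_bounds sketch above (again illustrative, not a Neuroglancer API), the relative path of an unsharded chunk can be formed as:

def unsharded_chunk_path(key, g, chunk_size, size, voxel_offset):
    # e.g. "8_8_8/448-500_0-64_0-64" for the truncated corner chunk above.
    begin, end = chunk_bounds(g, chunk_size, size, voxel_offset)
    name = "_".join("%d-%d" % (b, e) for b, e in zip(begin, end))
    return key + "/" + name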
Sharded chunk storage¶
If sharding parameters are specified for a scale, the sharded representation of the chunk data is stored within the directory specified by the key property. Each chunk is identified by a uint64 chunk identifier, equal to the compressed Morton code of its grid cell coordinates, which is used as the key to retrieve the encoded chunk data from the sharded representation.
Compressed morton code¶
The compressed Morton code is a variant of the normal Morton code where bits that would be equal to 0 for all grid cells are skipped.
Note
Storing a normal 3-D Morton code in a uint64 value would only allow 21 bits for each of the three dimensions.
In the following, we list each potentially used bit with a hexadecimal letter, so a 21-bit X coordinate would look like this:
x = ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---4 3210 fedc ba98 7654 3210
after spacing out by 2 to allow interleaved Y and Z bits, it becomes:
x = ---4 --3- -2-- 1--0 --f- -e-- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0
For a standard Morton code, we would shift Y << 1 and Z << 2 and then OR the three resulting uint64 values. But most datasets are not symmetrical in size across dimensions.
Using the compressed 3-D Morton code lets us use bits asymmetrically and conserve bits where some dimensions are smaller and those bits would always be zero.
The compressed Morton code drops the bits that would be zero across all entries because that dimension is limited in size. Say X has a maximum size of 42,943, which requires only 16 bits (~64K) and would only use up to the "f" bit in the above diagram. The most-significant bits labeled 4, 3, 2, 1, and 0 would always be zero and can therefore be removed.
This allows us to fit more data into the single uint64, as the following example shows with Z having a 24 bit range.
Start with an X coordinate that for this example has a max of 16 bits:
x = ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- fedc ba98 7654 3210
after spacing, note that the MSB f only has room for the Z bit since Y has dropped out:
x = ---- ---- ---- ---- ---f -e-- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0
Start with a Y coordinate that for this example has a max of 14 bits:
y = ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- --dc ba98 7654 3210
after spacing with constant 2 bits since Y has smallest range:
y = ---- ---- ---- ---- ---- ---- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0
after shifting by 1 for future interleaving to get morton code:
y = ---- ---- ---- ---- ---- ---d --c- -b-- a--9 --8- -7-- 6--5 --4- -3-- 2--1 --0-
Start with a Z coordinate that for this example has a max of 24 bits:
z = ---- ---- ---- ---- ---- ---- ---- ---- ---- ---- 7654 3210 fedc ba98 7654 3210
after spacing out Z with 24 bits max; note compression of MSB due to X and Y dropout:
z = ---- ---- ---- 7654 3210 f-e- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0
after shifting by 2 for future interleaving:
z = ---- ---- --76 5432 10f- e-d- -c-- b--a --9- -8-- 7--6 --5- -4-- 3--2 --1- -0--
Now if you OR the final X, Y, and Z you see no collisions:
x = ---- ---- ---- ---- ---f -e-- d--c --b- -a-- 9--8 --7- -6-- 5--4 --3- -2-- 1--0
y = ---- ---- ---- ---- ---- ---d --c- -b-- a--9 --8- -7-- 6--5 --4- -3-- 2--1 --0-
z = ---- ---- --76 5432 10f- e-d- -c-- b--a --9- -8-- 7--6 --5- -4-- 3--2 --1- -0--
While the above may be the simplest way to understand compressed Morton codes, the algorithm can be implemented more simply by iterating bit by bit from LSB to MSB and keeping track of the interleaved output bit (see the sketch after the steps below).
Specifically, given the coordinates g for a grid cell, where 0 <= g < grid_size, the compressed Morton code is computed as follows:
- Set j := 0.
- For i from 0 to n-1, where n is the number of bits needed to encode the grid cell coordinates:
  - For dim in 0, 1, 2 (corresponding to x, y, z):
    - If 2**i < grid_size[dim]:
      - Set output bit j of the compressed Morton code to bit i of g[dim].
      - Set j := j + 1.
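The steps above translate directly into the following Python sketch (the function name is illustrative and not part of any Neuroglancer API):

def compressed_morton_code(g, grid_size):
    # g and grid_size are 3-element sequences in XYZ order,
    # with 0 <= g[dim] < grid_size[dim] for each dimension.
    n = max((s - 1).bit_length() for s in grid_size)  # bits needed
    code = 0
    j = 0  # next output bit of the compressed Morton code
    for i in range(n):
        for dim in (0, 1, 2):  # x, y, z
            if 2 ** i < grid_size[dim]:
                # Copy bit i of g[dim] into output bit j.
                code |= ((g[dim] >> i) & 1) << j
                j += 1
    return code

For example, compressed_morton_code([3, 1, 2], [4, 2, 4]) evaluates to 27 (binary 11011): y contributes only one bit because its grid size is 2, so its higher bits are dropped.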
Chunk encoding¶
The encoding of the subvolume data in each chunk is determined by the specified encoding metadata property.
raw¶
Each chunk is stored directly in little-endian binary format in [x, y, z, channel] Fortran order (i.e. consecutive x values are contiguous) without any header. For example, if the chunk has dimensions [32, 32, 32, 1] and has a data_type of "uint32", then the encoded chunk should have a length of 131072 bytes.
Supported data_type | Any
Supported num_channels | Any
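A minimal NumPy sketch of decoding a raw chunk (assuming the encoded bytes have already been read; the helper name is illustrative):

import numpy as np

def decode_raw_chunk(data, chunk_shape, data_type):
    # chunk_shape is [x, y, z, channel]; raw chunks are little-endian,
    # Fortran-order, and have no header.
    dtype = np.dtype(data_type).newbyteorder("<")
    return np.frombuffer(data, dtype=dtype).reshape(chunk_shape, order="F")

Encoding is the inverse: np.asarray(chunk, dtype=dtype).tobytes(order="F").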
compressed_segmentation¶
Each chunk is encoded using the multi-channel compressed segmentation format. The compression block size is specified by the compressed_segmentation_block_size metadata property.
Supported data_type | "uint32" or "uint64"
Supported num_channels | Any
compresso¶
Each chunk is encoded in Compresso format.
2-d image format encodings¶
When using 2-d image format-based encodings, each chunk is encoded as an image where the number of components is equal to num_channels. The width and height of the image may be arbitrary, provided that the total number of pixels is equal to the product of the x, y, and z dimensions of the subvolume, and that the 1-D array obtained by concatenating the horizontal rows of the image corresponds to the flattened [X, Y, Z] Fortran-order representation of the subvolume.
Note
For effective compression (and to minimize artifacts when using lossy compression), however, it is recommended to use either [X, Y * Z] or [X * Y, Z] as the width and height, respectively.
Warning
Lossy encodings should not be used for segmentation volumes or image volumes where it is important to retain the precise values.
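As an illustration of the recommended [X, Y * Z] layout, the following NumPy sketch (an assumption about tooling, not a requirement of the format) turns a [x, y, z, channel] chunk into such an image:

import numpy as np

def chunk_to_image(chunk):
    # chunk has shape [x, y, z, channels]. Produce an image of width x and
    # height y * z whose concatenated rows match the Fortran-order
    # flattening of the chunk's [X, Y, Z] dimensions.
    x, y, z, c = chunk.shape
    return chunk.reshape((x, y * z, c), order="F").transpose(1, 0, 2)

The transpose is needed because image rows run along the combined Y and Z dimensions while columns run along X.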
jpeg¶
Each chunk is encoded as a JPEG image.
Supported data_type | "uint8"
Supported num_channels | 1 or 3
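For example, a single-channel uint8 chunk could be JPEG-encoded with Pillow as in the sketch below (Pillow is an assumed tool here, not something the format mandates; the quality argument corresponds to the jpeg_quality metadata field, and chunk_to_image is the helper sketched above):

import io
import numpy as np
from PIL import Image

def encode_jpeg_chunk(chunk, quality=75):
    # chunk: uint8 array of shape [x, y, z, 1].
    img = chunk_to_image(chunk)[:, :, 0]  # 2-d array: height Y*Z, width X
    buf = io.BytesIO()
    Image.fromarray(np.ascontiguousarray(img)).save(
        buf, format="JPEG", quality=quality)
    return buf.getvalue()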
png¶
Each chunk is encoded as a PNG image.
Supported data_type | "uint8" or "uint16"
Supported num_channels | 1-4
jxl¶
Each chunk is encoded as a JPEG-XL image.
Supported data_type | "uint8"
Supported num_channels | 1, 3, or 4