The NIFTI file format

(This article is about the nifti-1 file format. For an overview of how the nifti-2 differs from the nifti-1, see this one.)

The Neuroimaging Informatics Technology Initiative (nifti) file format was envisioned about a decade ago as a replacement for the then widespread, yet problematic, analyze 7.5 file format. The main problem with the older format was the lack of adequate information about orientation in space, such that the stored data could not be unambiguously interpreted. Although the format was used by many different imaging software packages, the lack of orientation information obliged some, most notably spm, to include, for every analyze file, an accompanying file describing the orientation, such as a file with extension .mat. The new format was defined in two meetings of the so-called Data Format Working Group (dfwg) at the National Institutes of Health (nih), one on 31 March and another on 2 September 2003. Representatives of some of the most popular neuroimaging software packages agreed upon a format that would include the new information, and upon adopting it, either natively or as an option for import and export.

Radiological or “neurological”?

Perhaps the most visible consequence of the lack of orientation information was the reigning confusion between the left and right sides of brain images during the years in which the analyze format was dominant. It was at this time that researchers became used to describing an image as being in “neurological” or in “radiological” convention. These terms have always been inadequate because, in the absence of orientation information, no two pieces of software would necessarily display the same file with the same side of the brain on the same side of the screen. A file could be shown in “neurological” orientation in one application, but in “radiological” orientation in another, to the dismay of an unaware user. Moreover, although there is indeed a convention, adopted by virtually all manufacturers of radiological equipment, to show the left side of the patient on the right side of the observer, as if the patient were being observed face-to-face or, if lying supine, from the feet, it is not known whether reputable neurologists ever actually convened to create a “neurological” convention that would be just the opposite of the radiological one. The way radiological exams are normally shown reflects the reality of the medical examination, in which the physician commonly approaches the patient in the bed from the direction of the feet (usually there is a wall behind the bed), and tends to stay face-to-face most of the time. Although the neurological examination does include a few manoeuvres performed from behind, for most of the time, even in the more specialised semiotics, the physician stays at the front. The nifti format obviated all these issues, rendering these terms obsolete. Software can now mark the left or the right side correctly, sometimes giving the option of showing the image flipped to better suit the user's orientation preference.

Same format, different presentations

A single image stored in the analyze 7.5 format requires two files: a header, with extension .hdr, to store meta-information, and the actual data, with extension .img. To keep compatibility with the previous format, data stored in nifti format can also use a pair of files, .hdr/.img. Care was taken so that the internal structure of the nifti format would be mostly compatible with the structure of the analyze format. However, the new format added some clever improvements. Working with a pair of files for each image, rather than just one, is not only inconvenient, but also error prone, as one might easily forget (or not know) that the data of interest is actually split across more than one file. To address this issue, the nifti format also allows storage as a single file, with extension .nii. A single file or a pair of files are not the only possible presentations, though. It is very common for images to have large areas of solid background, or for files describing masks and regions of interest to contain just a few unique values repeated many times. Files like these occupy a large space on disk, but carry little actual information content. This is the perfect case for compression. Indeed, both nifti and analyze files can be compressed. The deflate algorithm (used, e.g., by gzip) can operate on streams, allowing compression and decompression on-the-fly. The compressed versions have the .gz extension appended: .nii.gz (single file) or .hdr/.img.gz (pair of files, either nifti or analyze).
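For example, because deflate operates on streams, a header field can be read from a compressed file without inflating the whole image. A minimal sketch in Python (read_sizeof_hdr is a hypothetical helper name, not part of any standard library):

```python
import gzip
import struct

def read_sizeof_hdr(path):
    """Read the first header field from a NIFTI file, compressed or not.

    Gzip decompresses on the fly, so only the first 4 bytes need to be
    inflated rather than the whole image.
    """
    opener = gzip.open if path.endswith(".gz") else open
    with opener(path, "rb") as f:
        # sizeof_hdr is a little-endian 4-byte int at offset 0
        return struct.unpack("<i", f.read(4))[0]
```

The same approach extends to the rest of the header: any field can be read by seeking (or reading) forward in the decompressed stream.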

Predefined dimensions for space and time

In the nifti format, the first three dimensions are reserved to define the three spatial dimensions — x, y and z —, while the fourth dimension is reserved to define the time points — t. The remaining dimensions, from fifth to seventh, are for other uses. The fifth dimension, however, can still have some predefined uses, such as to store voxel-specific distributional parameters or to hold vector-based data.

Overview of the header structure

In order to keep compatibility with the analyze format, the size of the nifti header was maintained at 348 bytes as in the old format. Some fields were reused, some were preserved, but ignored, and some were entirely overwritten. The table below shows each of the fields, their sizes, and a brief description. More details on how each field should be interpreted are provided further below.

Type Name Offset Size Description
int sizeof_hdr 0B 4B Size of the header. Must be 348 (bytes).
char data_type[10] 4B 10B Not used; compatibility with analyze.
char db_name[18] 14B 18B Not used; compatibility with analyze.
int extents 32B 4B Not used; compatibility with analyze.
short session_error 36B 2B Not used; compatibility with analyze.
char regular 38B 1B Not used; compatibility with analyze.
char dim_info 39B 1B Encoding directions (phase, frequency, slice).
short dim[8] 40B 16B Data array dimensions.
float intent_p1 56B 4B 1st intent parameter.
float intent_p2 60B 4B 2nd intent parameter.
float intent_p3 64B 4B 3rd intent parameter.
short intent_code 68B 2B nifti intent.
short datatype 70B 2B Data type.
short bitpix 72B 2B Number of bits per voxel.
short slice_start 74B 2B First slice index.
float pixdim[8] 76B 32B Grid spacings (unit per dimension).
float vox_offset 108B 4B Offset into a .nii file.
float scl_slope 112B 4B Data scaling, slope.
float scl_inter 116B 4B Data scaling, offset.
short slice_end 120B 2B Last slice index.
char slice_code 122B 1B Slice timing order.
char xyzt_units 123B 1B Units of pixdim[1..4].
float cal_max 124B 4B Maximum display intensity.
float cal_min 128B 4B Minimum display intensity.
float slice_duration 132B 4B Time for one slice.
float toffset 136B 4B Time axis shift.
int glmax 140B 4B Not used; compatibility with analyze.
int glmin 144B 4B Not used; compatibility with analyze.
char descrip[80] 148B 80B Any text.
char aux_file[24] 228B 24B Auxiliary filename.
short qform_code 252B 2B Use of the quaternion fields.
short sform_code 254B 2B Use of the affine fields.
float quatern_b 256B 4B Quaternion b parameter.
float quatern_c 260B 4B Quaternion c parameter.
float quatern_d 264B 4B Quaternion d parameter.
float qoffset_x 268B 4B Quaternion x shift.
float qoffset_y 272B 4B Quaternion y shift.
float qoffset_z 276B 4B Quaternion z shift.
float srow_x[4] 280B 16B 1st row affine transform.
float srow_y[4] 296B 16B 2nd row affine transform.
float srow_z[4] 312B 16B 3rd row affine transform.
char intent_name[16] 328B 16B Name or meaning of the data.
char magic[4] 344B 4B Magic string.
Total size 348B

Each of these fields is described in more detail below, in the order in which they appear in the header.

Size of the header

The field int sizeof_hdr stores the size of the header. It must be 348 for a nifti-1 or analyze file.

Dim info

The field char dim_info stores, in just one byte, the frequency encoding direction (1, 2 or 3), the phase encoding direction (1, 2 or 3), and the direction in which the volume was sliced during the acquisition (1, 2 or 3). For spiral sequences, frequency and phase encoding are both set to 0. The reason to collapse all this information into just one byte was to save space. See also the fields short slice_start, short slice_end, char slice_code and float slice_duration.
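The three values are packed as 2-bit fields. A sketch of how a reader might unpack them, following the bit layout in the official nifti1.h (unpack_dim_info is a hypothetical helper name):

```python
def unpack_dim_info(dim_info):
    """Split the packed dim_info byte into its three 2-bit fields.

    Bits 0-1: frequency encoding dimension (1-3, or 0 if unknown/spiral)
    Bits 2-3: phase encoding dimension
    Bits 4-5: slice dimension
    """
    freq_dim = dim_info & 0x03
    phase_dim = (dim_info >> 2) & 0x03
    slice_dim = (dim_info >> 4) & 0x03
    return freq_dim, phase_dim, slice_dim
```

For instance, a byte encoding frequency along dimension 1, phase along 2 and slices along 3 would be 1 | (2 << 2) | (3 << 4) = 57.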

Image dimensions

The field short dim[8] contains the size of the image array. The first element (dim[0]) contains the number of dimensions (1-7). If dim[0] is not in this interval, the data are assumed to have the opposite endianness and so should be byte-swapped (the nifti standard does not specify a field for endianness, but encourages the use of dim[0] for this purpose). Dimensions 1, 2 and 3 are assumed to refer to space (x, y and z), the 4th dimension is assumed to refer to time, and the remaining dimensions, 5, 6 and 7, can be anything else. The value dim[i] is a positive integer representing the length of the i-th dimension.
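The endianness check suggested by the standard can be sketched as follows (needs_byteswap is a hypothetical helper; dim[0] sits at byte offset 40, per the table above):

```python
import struct

def needs_byteswap(header_bytes):
    """Guess file endianness from dim[0], as the standard suggests.

    dim[0] lives at byte offset 40; a valid value is 1-7. If reading it
    little-endian gives something outside that range, the file was
    presumably written big-endian (or is not NIFTI/ANALYZE at all).
    """
    dim0_le = struct.unpack_from("<h", header_bytes, 40)[0]
    if 1 <= dim0_le <= 7:
        return False
    dim0_be = struct.unpack_from(">h", header_bytes, 40)[0]
    if 1 <= dim0_be <= 7:
        return True
    raise ValueError("dim[0] invalid in both byte orders")
```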


Intent codes and parameters

The field short intent_code is an integer that codifies what the data is supposed to contain. Some of these codes require extra parameters, such as the number of degrees of freedom (df). These extra parameters, when needed, can be stored in the fields intent_p* when they apply to the image as a whole, or in the 5th dimension if they vary voxelwise. A list of intent codes is given in the table below:

Intent Code Parameters
None 0 no parameters
Correlation 2 p1 = degrees of freedom (df)
t test 3 p1 = df
F test 4 p1 = numerator df, p2 = denominator df
z score 5 no parameters
\chi^2 statistic 6 p1 = df
Beta distribution 7 p1 = a, p2 = b
Binomial distribution 8 p1 = number of trials, p2 = probability per trial
Gamma distribution 9 p1 = shape, p2 = scale
Poisson distribution 10 p1 = mean
Normal distribution 11 p1 = mean, p2 = standard deviation
Noncentral F statistic 12 p1 = numerator df, p2 = denominator df, p3 = numerator noncentrality parameter
Noncentral \chi^2 statistic 13 p1 = df, p2 = noncentrality parameter
Logistic distribution 14 p1 = location, p2 = scale
Laplace distribution 15 p1 = location, p2 = scale
Uniform distribution 16 p1 = lower end, p2 = upper end
Noncentral t statistic 17 p1 = df, p2 = noncentrality parameter
Weibull distribution 18 p1 = location, p2 = scale, p3 = power
\chi distribution 19 p1 = df*
Inverse Gaussian 20 p1 = \mu, p2 = \lambda
Extreme value type I 21 p1 = location, p2 = scale
p-value 22 no parameters
-ln(p) 23 no parameters
-log(p) 24 no parameters

* Note: For the \chi distribution, when p1=1, it is a “half-normal” distribution; when p1=2, it is a Rayleigh distribution; and when p1=3, it is a Maxwell-Boltzmann distribution. Other intent codes are available to indicate that the file contains data that is not of statistical nature.

Intent Code Description
Estimate 1001 Estimate of some parameter, possibly indicated in intent_name
Label 1002 Indices of a set of labels, which may be indicated in aux_file.
NeuroName 1003 Indices in the NeuroNames set of labels.
Generic matrix 1004 For a MxN matrix in the 5th dimension, row major. p1 = M, p2 = N (integers as float); dim[5] = M*N.
Symmetric matrix 1005 For a symmetric NxN matrix in the 5th dimension, row major, lower matrix. p1 = N (integer as float); dim[5] = N*(N+1)/2.
Displacement vector 1006 Vector per voxel, stored in the 5th dimension.
Vector 1007 As above, vector per voxel, stored in the 5th dimension.
Point set 1008 Points in the space, in the 5th dimension. dim[1] = number of points; dim[2]=dim[3]=1; intent_name may be used to indicate modality.
Triangle 1009 Indices of points in space, in the 5th dimension. dim[1] = number of triangles.
Quaternion 1010 Quaternion in the 5th dimension.
Dimless 1011 Nothing. The intent may be in intent_name.
Time series 2001 Each voxel contains a time series.
Node index 2002 Each voxel is an index of a surface dataset.
rgb 2003 rgb triplet in the 5th dimension. dim[0] = 5, dim[1] has the number of entries, dim[2:4] = 1, dim[5] = 3.
rgba 2004 rgba quadruplet in the 5th dimension. dim[0] = 5, dim[1] has the number of entries, dim[2:4] = 1, dim[5] = 4.
Shape 2005 Value at each location is a shape parameter, such as a curvature.

The intent parameters are stored in the fields float intent_p1, float intent_p2 and float intent_p3. Alternatively, if the parameters are different for each voxel, they should be stored in the 5th dimension of the file. A human readable intent name can be stored in the field char intent_name[16], which may help to explain the intention of the data when it cannot or is not coded with any of the intent codes and parameters above.

Data type and bits per pixel/voxel

The field short datatype indicates the type of the data stored. Acceptable values are:

Type Bitpix Code
unknown 0
bool 1 bit 1
unsigned char 8 bits 2
signed short 16 bits 4
signed int 32 bits 8
float 32 bits 16
complex 64 bits 32
double 64 bits 64
rgb 24 bits 128
“all” 255
signed char 8 bits 256
unsigned short 16 bits 512
unsigned int 32 bits 768
long long 64 bits 1024
unsigned long long 64 bits 1280
long double 128 bits 1536
double pair 128 bits 1792
long double pair 256 bits 2048
rgba 32 bits 2304

The field short bitpix holds the information of the number of bits per voxel. The value must match the type determined by datatype as shown above.
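As a quick sanity check, the datatype-to-bitpix correspondence in the table can be encoded as a small lookup (a sketch; BITPIX and check_bitpix are hypothetical names):

```python
# Expected bits per voxel for each NIFTI datatype code, per the table.
BITPIX = {0: 0, 1: 1, 2: 8, 4: 16, 8: 32, 16: 32, 32: 64, 64: 64,
          128: 24, 256: 8, 512: 16, 768: 32, 1024: 64, 1280: 64,
          1536: 128, 1792: 128, 2048: 256, 2304: 32}

def check_bitpix(datatype, bitpix):
    """Return True if bitpix is consistent with the datatype code."""
    return BITPIX.get(datatype) == bitpix
```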

Slice acquisition information

The fields char slice_code, short slice_start, short slice_end and float slice_duration are useful to store information about the timing of an fmri acquisition, and need to be used together with char dim_info, which contains the sub-field slice_dim. If, and only if, slice_dim is different from zero, slice_code is interpreted as:

Code Interpretation
0 Slice order unknown
1 Sequential, increasing
2 Sequential, decreasing
3 Interleaved, increasing, starting at the 1st mri slice
4 Interleaved, decreasing, starting at the last mri slice
5 Interleaved, increasing, starting at the 2nd mri slice
6 Interleaved, decreasing, starting at one before the last mri slice

The fields short slice_start and short slice_end inform, respectively, the first and the last slices that correspond to the actual mri acquisition. Slices present in the image that are outside this range are treated as padded slices (for instance, containing zeroes). The field float slice_duration indicates the amount of time needed to acquire a single slice. Having this information in a separate field allows the correct storage of images from experiments in which slice_duration*dim[slice_dim] is smaller than the value stored in pixdim[4], usually the repetition time (tr).

[Figure: Slice codes to specify slice acquisition timings. In this example, slice_start = 2 and slice_end = 11, indicating that slices #01 and #12 stored in the file were not truly acquired with mri, but instead were padded into the file. The field slice_duration specifies how long it took to acquire each slice. The dimension that corresponds to the slice acquisition (in this case dim[3], z) is encoded in the field dim_info.]
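As a sketch, the acquisition order implied by each slice_code could be computed as follows (slice_order is a hypothetical helper; the interleaved patterns follow the descriptions in the table above):

```python
def slice_order(slice_code, slice_start, slice_end):
    """Return the temporal order in which slices were acquired.

    Slices outside [slice_start, slice_end] are padding, hence not listed.
    """
    s = list(range(slice_start, slice_end + 1))
    if slice_code == 1:                      # sequential, increasing
        return s
    if slice_code == 2:                      # sequential, decreasing
        return s[::-1]
    if slice_code == 3:                      # interleaved, inc, 1st slice
        return s[0::2] + s[1::2]
    if slice_code == 4:                      # interleaved, dec, last slice
        return s[::-1][0::2] + s[::-1][1::2]
    if slice_code == 5:                      # interleaved, inc, 2nd slice
        return s[1::2] + s[0::2]
    if slice_code == 6:                      # interleaved, dec, 2nd-to-last
        return s[::-1][1::2] + s[::-1][0::2]
    raise ValueError("unknown slice_code")
```

For example, slice_order(3, 0, 4) gives the even-indexed slices first, then the odd ones.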

Voxel dimensions

The size of each voxel is stored in the field float pixdim[8], and each element matches its respective element in short dim[8]. The value in pixdim[0], however, has a special meaning, discussed below; it should always be -1 or 1. The units of measurement for the first four dimensions are specified in the field xyzt_units, discussed below.

Voxel offset

The field float vox_offset indicates, for single files (.nii), the byte offset at which the imaging data starts. For compatibility with older software, possible values are multiples of 16, and the minimum value is 352 (the smallest multiple of 16 larger than 348). For file pairs (.hdr/.img), this should be set to zero if no information other than the image data itself is to be stored in the .img (the most common case), but it can also be larger than zero, allowing user-defined extra information to be prepended to the .img, such as a dicom header. In this case, however, the rule of being a multiple of 16 may eventually be violated. This field is of type float (32-bit, ieee-754), allowing integers up to 2^24 to be represented exactly. The reason for using float, rather than what would be the more natural choice, int, is compatibility with the analyze format.

Data scaling

The values stored in each voxel can be linearly scaled to different units. The fields float scl_slope and float scl_inter define the slope and the intercept of a linear function. This scaling feature allows the data to represent a wider range of values than the datatype alone would permit, though it can also be used for scaling within the same datatype. Both scaling fields should be ignored for the storage of rgb data. For complex types, the scaling should be applied to both the real and imaginary parts.
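A sketch of how a reader might apply the scaling (scale_voxel is a hypothetical helper; by the usual convention, a slope of zero means the scaling should be ignored):

```python
def scale_voxel(raw, scl_slope, scl_inter):
    """Apply the NIFTI linear scaling to a stored voxel value.

    A slope of 0 conventionally means "no scaling": return the raw value.
    """
    if scl_slope == 0:
        return raw
    return scl_slope * raw + scl_inter
```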

Data display

For files that store scalar (non-vector) data, the fields float cal_min and float cal_max determine the intended display range when the image is opened. Voxel values equal to or below cal_min should be shown with the lowest colour in the colour scale (typically black in a grey-scale visualisation), and values equal to or above cal_max should be shown with the highest colour (typically white).

Measurement units

Both spatial and temporal measurement units, used for dimensions dim[1] to dim[4] (and, respectively, for pixdim[1] to pixdim[4]), are encoded in the field char xyzt_units. Bits 0-2 are used to store the spatial units, bits 3-5 the temporal units, and bits 6 and 7 are not used. A temporal offset can be specified in the field float toffset. The codes for xyzt_units, in decimal, are:

Unit Code
Unknown 0
Meter (m) 1
Millimeter (mm) 2
Micron (µm) 3
Seconds (s) 8
Milliseconds (ms) 16
Microseconds (µs) 24
Hertz (Hz) 32
Parts-per-million (ppm) 40
Radians per second (rad/s) 48
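Since the spatial and temporal codes occupy disjoint bit ranges, the two can be recovered with simple masks (a sketch; unpack_xyzt_units is a hypothetical helper):

```python
def unpack_xyzt_units(xyzt_units):
    """Split the packed xyzt_units byte into spatial and temporal codes.

    Bits 0-2 hold the spatial unit, bits 3-5 the temporal unit; the
    returned values match the decimal codes in the table.
    """
    space = xyzt_units & 0x07
    time = xyzt_units & 0x38
    return space, time
```

For example, millimeters and seconds together are stored as 2 | 8 = 10.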


Description

The field char descrip[80] can contain any text of up to 80 characters. The standard does not specify whether this string needs to be terminated by a null character. Presumably, it is up to the application to handle it correctly.

Auxiliary file

A supplementary file, containing extra information, can be specified in the field char aux_file[24]. This file can, for instance, contain the face indices for meshes whose points are stored in the 5th dimension, or a lookup table used to display colours.

Orientation information

The most visible improvement of the nifti format over the previous analyze format is the ability to store orientation information unambiguously. The standard assumes that voxel coordinates refer to the center of each voxel, rather than to any of its corners. The world coordinate system is assumed to be ras: +x is Right, +y is Anterior and +z is Superior, which differs from the las coordinate system used by analyze. The format provides three different methods to map the voxel coordinates (i,j,k) to the world coordinates (x,y,z). The first method exists only to allow compatibility with the analyze format. The other two methods may coexist, and convey different coordinate systems. These systems are specified in the fields short qform_code and short sform_code, which can assume the values specified in the table:

Name Code Description
unknown 0 Arbitrary coordinates. Use Method 1.
scanner_anat 1 Scanner-based anatomical coordinates.
aligned_anat 2 Coordinates aligned to another file, or to the “truth” (with an arbitrary coordinate center).
talairach 3 Coordinates aligned to the Talairach space.
mni_152 4 Coordinates aligned to the mni space.

In principle, the qform_code (Method 2 below) should contain either 0, 1 or 2, whereas the sform_code (Method 3 below) could contain any of the codes shown in the table.

Method 1

Method 1 is for compatibility with analyze and is not supposed to be used as the main orientation method. The world coordinates are determined simply by scaling by the voxel sizes:

\left[ \begin{array}{c} x\\ y\\ z \end{array} \right]= \left[ \begin{array}{c} i\\ j\\ k \end{array} \right]\odot \left[ \begin{array}{c} \mathtt{pixdim[1]}\\ \mathtt{pixdim[2]}\\ \mathtt{pixdim[3]}\\ \end{array} \right]

where \odot is the Hadamard product.

Method 2

Method 2 is used when short qform_code is larger than zero, and is intended to indicate the scanner coordinates, in a way that resembles the coordinates specified in the dicom header. It can also be used to represent the alignment of an image to a previous session of the same subject (such as for coregistration). For compactness and simplicity, the orientation information is stored as a quaternion (a,b,c,d), whose last three coefficients are in the fields float quatern_b, float quatern_c and float quatern_d. The first coefficient can be calculated from the other three as a = \sqrt{1-b^2-c^2-d^2}. These fields are used to construct a rotation matrix as:

\mathbf{R} = \left[ \begin{array}{ccc} a^2+b^2-c^2-d^2 & 2(bc-ad) & 2(bd+ac) \\ 2(bc+ad) & a^2+c^2-b^2-d^2 & 2(cd-ab) \\ 2(bd-ac) & 2(cd+ab) & a^2+d^2-b^2-c^2 \end{array} \right]

This rotation matrix, together with the voxel sizes and a translation vector, is used to define the final transformation from voxel to world space:

\left[ \begin{array}{c} x\\ y\\ z \end{array} \right]=\mathbf{R} \left[ \begin{array}{c} i\\ j\\ q\cdot k\\ \end{array} \right]\odot \left[ \begin{array}{c} \mathtt{pixdim[1]}\\ \mathtt{pixdim[2]}\\ \mathtt{pixdim[3]}\\ \end{array} \right]+ \left[ \begin{array}{c} \mathtt{qoffset\_x}\\ \mathtt{qoffset\_y}\\ \mathtt{qoffset\_z}\\ \end{array} \right]

where \odot is, again, the Hadamard product, and q is the qfac value, stored at the otherwise unused pixdim[0], which should be either -1 or 1. Any different value should be treated as 1.
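Method 2 can be sketched directly from the formulas above (qform_to_xyz is a hypothetical helper; pixdim is passed as in the header, so pixdim[0] carries qfac and pixdim[1:4] the voxel sizes):

```python
import math

def qform_to_xyz(i, j, k, b, c, d, pixdim, qoffset):
    """Map voxel indices to world coordinates with Method 2 (a sketch)."""
    # Recover the first quaternion coefficient; clamp against rounding.
    a = math.sqrt(max(0.0, 1.0 - b * b - c * c - d * d))
    # Rotation matrix as defined in the standard.
    R = [[a*a + b*b - c*c - d*d, 2*(b*c - a*d),         2*(b*d + a*c)],
         [2*(b*c + a*d),         a*a + c*c - b*b - d*d, 2*(c*d - a*b)],
         [2*(b*d - a*c),         2*(c*d + a*b),         a*a + d*d - b*b - c*c]]
    # qfac is stored in pixdim[0]; any value other than -1 is treated as 1.
    qfac = -1.0 if pixdim[0] == -1 else 1.0
    v = [i * pixdim[1], j * pixdim[2], qfac * k * pixdim[3]]
    return [sum(R[r][s] * v[s] for s in range(3)) + qoffset[r]
            for r in range(3)]
```

With the identity quaternion (b=c=d=0) the rotation is the identity, and the mapping reduces to Method 1 plus the qoffset translation.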

Method 3

Method 3 is used when short sform_code is larger than zero. It relies on a full affine matrix, stored in the header fields float srow_*[4], to map voxel to world coordinates:

\left[ \begin{array}{c} x\\ y\\ z\\ 1 \end{array} \right]=\left[ \begin{array}{cccc} \mathtt{srow\_x[0]} & \mathtt{srow\_x[1]} & \mathtt{srow\_x[2]} & \mathtt{srow\_x[3]}\\ \mathtt{srow\_y[0]} & \mathtt{srow\_y[1]} & \mathtt{srow\_y[2]} & \mathtt{srow\_y[3]}\\ \mathtt{srow\_z[0]} & \mathtt{srow\_z[1]} & \mathtt{srow\_z[2]} & \mathtt{srow\_z[3]} \\ 0 & 0 & 0 & 1\end{array} \right]\cdot\left[ \begin{array}{c} i\\ j\\ k\\ 1 \end{array} \right]

Unlike Method 2, which is supposed to contain a transformation that maps voxel indices to the scanner world coordinates, or to align two distinct images of the same subject, Method 3 is used to indicate a transformation to some standard world space, such as Talairach or mni coordinates, in which case the coordinate system has its origin (0,0,0) at the anterior commissure of the brain.
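Method 3 is then a plain affine multiply, as sketched below (sform_to_xyz is a hypothetical helper; each srow_* argument is the corresponding 4-element row from the header):

```python
def sform_to_xyz(i, j, k, srow_x, srow_y, srow_z):
    """Map voxel indices to world coordinates with Method 3 (a sketch)."""
    v = (i, j, k, 1.0)  # homogeneous voxel coordinates
    return [sum(row[n] * v[n] for n in range(4))
            for row in (srow_x, srow_y, srow_z)]
```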

Magic string

The char magic[4] field is a “magic” string that declares the file as conforming to the nifti standard. It was placed at the very end of the header to avoid overwriting fields that are needed for the analyze format. Ideally, however, this string should be checked first. It should be 'ni1' ('6E 69 31 00' in hexadecimal) for a .hdr/.img pair, or 'n+1' ('6E 2B 31 00') for a .nii single file. In the absence of this string, the file should be treated as analyze. Future versions of the nifti format may increment the string to 'n+2', 'n+3', etc. Indeed, as of 2012, a second version is under preparation.
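A minimal format check based on the magic string might look like this (a sketch; detect_format is a hypothetical helper, and the fallback to analyze assumes sizeof_hdr has already been validated):

```python
def detect_format(header_bytes):
    """Classify a 348-byte header as NIFTI single file, pair, or ANALYZE.

    The magic string sits at byte offset 344; if no valid magic is
    present, the file is treated as ANALYZE 7.5.
    """
    magic = bytes(header_bytes[344:348])
    if magic == b"n+1\x00":
        return "nifti-single"
    if magic == b"ni1\x00":
        return "nifti-pair"
    return "analyze"
```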

Unused fields

The fields char data_type[10], char db_name[18], int extents, short session_error and char regular are not used by the nifti format, but were included in the header for compatibility with analyze. The field extents should be the integer 16384, and regular should be the character 'r'. The fields int glmin and int glmax specify, respectively, the minimum and the maximum of the entire dataset in the analyze format.

Storing extra-information

Extra information can be included in the nifti format in a number of ways allowed by the standard. After the end of the header, the next 4 bytes (i.e., at byte offsets 348-351) may or may not be present in a .hdr file; they are, however, always present in a .nii file. They should be interpreted as a character array, i.e. char extension[4]. In principle, these 4 bytes should all be set to zero. If the first, extension[0], is non-zero, this indicates the presence of extended information beginning at byte offset 352. Such extended information must have a size that is a multiple of 16. The first 8 bytes of each extension should be interpreted as two integers, int esize and int ecode. The field esize indicates the size of the extension, including the 8 bytes of esize and ecode themselves. The field ecode indicates the format used for the remainder of the extension. At the time of this writing, three codes have been defined:

Code Use
0 Unknown. This code should be avoided.
2 dicom extensions
4 xml extensions used by the afni software package.

More than one extension can be present in the same file, each one always starting with the pair esize and ecode, and with its first byte immediately past the last byte of the previous extension. In a single .nii file, float vox_offset must be set so that the imaging data begins only after the end of the last extension.
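A reader following these rules might walk the extension list as sketched below (read_extensions is a hypothetical helper; it assumes the file object is already positioned at byte 348 of a single-file .nii, and that vox_offset has been read from the header):

```python
import struct

def read_extensions(f, vox_offset):
    """Read header extensions from an open single-file .nii (a sketch).

    Returns a list of (ecode, payload) pairs; payload excludes the
    8-byte esize/ecode prefix.
    """
    extensions = []
    ext_flags = f.read(4)                 # the char extension[4] array
    if len(ext_flags) < 4 or ext_flags[0] == 0:
        return extensions                 # no extended information
    offset = 352                          # first extension starts here
    while offset < vox_offset:
        esize, ecode = struct.unpack("<ii", f.read(8))
        extensions.append((ecode, f.read(esize - 8)))
        offset += esize
    return extensions
```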


Problems

The nifti format brought a number of great benefits compared with the old analyze format. However, it also brought its own set of new problems. Fortunately, these problems are not severe. Here are some:

  • Even though a huge effort was made to keep compatibility with analyze, a crucial aspect was not preserved: the world coordinate system is assumed, in the nifti format, to be ras, which is weird and confusing; las would be a much more logical choice from a medical perspective. Fortunately, since orientation is stored unambiguously, most software allows the images to be flipped on screen at will.
  • The file format still relies too much on the file extension being .nii or on a pair .hdr/.img, rather than on the much less ambiguous magic strings. Moreover, the different magic strings for single files and for file pairs effectively prevent splitting/merging files with common operating system tools (such as dd in Linux), as the magic string needs to be changed even though the header structure remains otherwise identical.
  • The magic string that is present in the header is not placed at the beginning, but near its end, which makes the file virtually unrecognisable outside of the neuroimaging field.
  • The specification of three different coordinate systems, while bringing flexibility, also brought ambiguity, as nowhere in the standard there is information on which should be preferred when more than one is present. Certain software packages explicitly force the qform_code and sform_code to be identical to each other.
  • There is no field specifying a preferred interpolation method when using Methods 2 or 3, even though these methods do allow fractional voxels to be found with the specification of world coordinates.
  • Method 2 allows only rotation and translation, but sometimes, due to all sorts of scanner calibration issues and different kinds of geometric distortion present in different sequences, the coregistration between two images of the same subject may require scaling and shear, which are only provided in Method 3.
  • Method 3 is supposed to inform that the data is aligned to a standard space using an affine transformation. This works perfectly if the data has been previously warped to such a space. Otherwise, the simple alignment of any actual brain from native to standard space cannot be obtained with only linear transformations.
  • To squeeze information in while keeping compatibility with the analyze format, some fields had to be mangled into just one byte, such as char dim_info and char xyzt_units, which is not practical and requires sub-byte manipulation.
  • The field float vox_offset, directly inherited from the analyze format, should in fact be an integer. Having it as a float only adds confusion.
  • Not all software packages implement the format in exactly the same way. Vector-based data, for instance, which should be stored in the 5th dimension, is often stored in the 4th, which should be reserved for time. Although this is a problem not with the format itself, but with the use made of it, such implementation malpractices lead to the dissemination of ambiguous and ill-formed files that eventually cannot be read in other applications as intended at the time of their creation.

Despite these issues, the format has been very successful as a means to exchange data between different software packages. An updated format, the nifti 2.0, with a header with more than 500 bytes of information, may become official soon. (UPDATE: details here)

More information

The official definition of the nifti format is available as a c header file (nifti1.h) here and mirrored here.

77 thoughts on “The NIFTI file format”

  1. This is the best description of nifti format I have ever seen.
    Is there a typo the decreasing interleaved slice order? I think it should start with nth slices, rather than 1st or 2nd.

  2. This is the guide to NIFTI that I’ve been meaning to write myself, and I agree whole-heartedly with your description of the problems of the format. Good work!

  3. One question regarding the sform_code and qform_code, according to your description here, qform_code=1 is method 2, and sform_code=1 is method 3, but my dataset has both qform_code and sform_code=1, which method I should use? Thanks.

    • Hi,

      The description here follows closely the NIFTI standard (nifti1.h), which, unfortunately, doesn’t specify which of the methods should be used when more than one is present. This is one of the problems I mention at the end of the article.

      If you are lucky, methods 2 and 3 for your dataset should give the same orientation (i.e., with rotation/translation only). If not, and if you have the full affine matrix stored in the header, I believe it’d be fair to assume method 3. But I agree that this is an issue with the file format itself.

      All the best,


  4. hi,

    thank you for your extremely helpful post.
    i’ve read in the header but how do I go about reading the actual image data?
    Do you have pseudo code for this?

    for dim[] I get the following out:

    dim[0] = 3
    dim[1] = 256
    dim[2] = 256
    dim[3] = 130
    dim[4] = 1
    dim[5] = 1
    dim[6] = 1
    dim[7] = 1

    and for bitpix I get 16

    if I want to get the raw data in an array do you have code for this?

    so vox[0,0,0] = [x,y,y]
    … vox[10, 10, 10] = x10, y10, z10]…etc

    • Hi,
      It’s straightforward to read: just take the information from “bitpix” and “datatype” to know how many bytes and which type for each voxel, and read into an array of the sizes given in “dim” (indeed, as it seems you are already doing). The storage is ras, i.e., the first dimension to be filled (i.e., the one that runs fastest) is “x”, from left to right, then “y”, from posterior to anterior, then “z”, from inferior to superior. Of course these directions may not correspond to the actual brain orientation (if this is at all a brain), but once read, you adjust the orientation using one of the 3 orientation methods from the header, as described above.
      All the best,

  5. Hi,

    a discussion is currently taking place between my colleague and me about the definition of the origin of the voxels.

    According to the standard and to your explanation, “The file standard assumes that the voxel coordinates refer to the center of each voxel, rather than at any of its corners.” That makes perfect sense. However, let’s consider the left most, most posterior, most inferior voxel. In voxel coordinates, that voxel is [0,0,0]. Now, let’s say you transform this voxel to world coordinates, using method 2. You apply the transformation to [0,0,0], and get the coordinates of the center of that voxel in world coordinates. If the transform is identity, that would mean that the world coordinates of the center of the voxel will also be [0,0,0], right?

    If that is the case, let’s suppose that our voxel size is [1,1,1]. In that case, it would also mean that world points in the range

    [-0.5, -0.5, -0.5] to [0.5, 0.5, 0.5]

    are considered to be in voxel [0,0,0], right?

    I just wanted to validate that my understanding is correct. If it is not correct, then does it mean that the shift to bring the center to the center of the voxel in world coordinates should also be included in the transform?

    Thanks a lot

    • Hi,
      Your understanding looks correct. The translation shouldn’t ordinarily include the voxel size (or even half of it) unless one really wants to shift the image so that the coordinates end up representing one of the voxel corners. In the example you gave, yes, a coordinate such as -0.3 or +0.4 would “belong” to the voxel at position 0, whereas a coordinate such as 0.6 would belong to the voxel 1.
      All the best,
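      To make the voxel-centre convention concrete, here is a small numpy sketch (a hedged illustration, not reference code): with an identity affine, rounding the inverse-mapped world coordinates to the nearest voxel sends points such as -0.3 or +0.4 to voxel 0, and 0.6 to voxel 1.

```python
import numpy as np

def vox2world(affine, ijk):
    """World coordinates of the CENTRE of voxel ijk (methods 2 and 3 style)."""
    ijk1 = np.append(np.asarray(ijk, dtype=float), 1.0)
    return (affine @ ijk1)[:3]

def world2vox(affine, xyz):
    """Voxel whose centre is nearest to the world point xyz."""
    xyz1 = np.append(np.asarray(xyz, dtype=float), 1.0)
    ijk = (np.linalg.inv(affine) @ xyz1)[:3]
    return np.rint(ijk).astype(int)   # rounding reflects the centre convention
```

With affine = np.eye(4) and 1 mm voxels, vox2world gives [0, 0, 0] for voxel [0, 0, 0], matching the example in the reply above.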

  6. Hi all. Is it possible to convert a processed NIFTI file back to DICOM? I am speaking about raw diffusion data after motion correction… I have used DTIPrep and then DWIConvert, supplying as arguments the original bvals and bvecs, but when I convert to DICOM (xmedconv) the file is not imported as a tensor anymore…

    • Hi Francesco,
      The NIFTI format contains far less information than the DICOM, so a complete recovery isn’t possible directly. However, it’s possible to generate new DICOM files using the information that can be captured from the NIFTI header, plus information about the sequence and other parameters that you may have from the original DICOM files. If you have Matlab, have a look at the commands dicomread, dicomwrite and dicominfo from the Image Processing Toolbox. For the bvecs and bvals, these are manufacturer-specific; this page may help:
      All the best,

  7. Hi, I am doing some research on DTI (diffusion tensor imaging).
    Could you give me some programs for DTI? I want to compute the D matrix with C++. Thanks a lot…

  8. Hi
    Appreciation and thanks for your reply, but I am sorry that I’m not familiar with FSL. Is FSL a standalone software package, or a toolbox that can be used from C++? Can you give me detailed instructions? Besides, do you know of ITK?

    • Hi,
      FSL is a software package written in C/C++ that does what you’re trying to do. However, it’s not a C++ library, but a set of commands that can be invoked by the end user. The tool in FSL that does the tensor fit is called “dtifit”, but there are a few preprocessing steps (e.g., obtaining the directions from the DICOM files, eddy current correction, etc.). I’m unaware of an actual library that could be linked to your program. However, the fit of the tensor is a relatively simple procedure that can be done easily with matrix algebra libraries (e.g., Armadillo, Eigen, etc.) once you have put together the information you need.
      All the best,
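      To illustrate the last point, here is a rough numpy sketch of the linear least-squares tensor fit (a hypothetical illustration, not dtifit: it assumes log-linearised signals, known b-values and unit gradient directions, and no noise or outlier handling):

```python
import numpy as np

def fit_tensor(S0, S, bvals, bvecs):
    """Least-squares fit of the 3x3 diffusion tensor D from
    log(S_i/S0) = -b_i * g_i' D g_i, for directions g_i (rows of bvecs)."""
    y = np.log(np.asarray(S, dtype=float) / S0)
    g = np.asarray(bvecs, dtype=float)
    b = np.asarray(bvals, dtype=float)
    # Design matrix for the 6 unique elements [Dxx, Dyy, Dzz, Dxy, Dxz, Dyz]
    A = -b[:, None] * np.column_stack([
        g[:, 0]**2, g[:, 1]**2, g[:, 2]**2,
        2 * g[:, 0] * g[:, 1], 2 * g[:, 0] * g[:, 2], 2 * g[:, 1] * g[:, 2]])
    d, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.array([[d[0], d[3], d[4]],
                     [d[3], d[1], d[5]],
                     [d[4], d[5], d[2]]])
```

With six non-collinear directions the system is exactly determined; more directions simply make it an overdetermined least-squares problem.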

  9. Pingback: The NIFTI-2 file format | Brainder.

  10. Dear all,
    I need your help to solve a problem related to a .nii file.
    I have a small pigeon brain MR image. The voxel size is 250×250×700 micron. I want to resize it to 2.5×2.5×7 millimetres. Could you please help me with how I can do this? I found out some things about the .nii file but I still have a problem.



    I applied this function to change the size, but when I try to apply ICA analysis in FSL, everything was changed.
    Is there another parameter that I have to change?

    Thanks in advance for your help.
    My email is :


      • Dear AndersonT
        Thanks in advance for your reply.
        I just want to resize it in the header. I don’t need resampling.


        • Hi Mehdi,
          You should be able to write a small script in Matlab or similar to edit the header. The command “fsledithd” should also be able to do it. If in doubt on how to do it, please send an email to the FSL list.

  11. Hello dear all!

    I encountered a problem that really bothers me.
    I have several slices of a patient’s brain CT image.
    Each slice is a 512×512 matrix, and I coloured them using Matlab, so they are now 512×512×3 RGB images.
    What should I do to make those images into a single NIfTI file?

    thanks a lot, I’m getting crazy lol

    • Hi Shawn,

      The proper way to store RGB data is to put the 3 colours in the 5th dimension, such that you’d save it as 512 x 512 x 1 x 1 x 3, and use the intent code “2003”. However, as of today, I am unaware of any software that actually implements the standard up to that dimension and considers these intents properly, such that, unfortunately, the file may not be recognisable or shown properly, with the proper colours, by many viewers.

      All the best,


      • Hi A. M. Winkler,

        According to the standard, for RGB data, dim[0] = 5, dim[1] has the number of entries, dim[2:4] = 1, dim[5] = 3. But do you know how to get the x dimension and y dimension?

        Thank you.
        Best regards.

        • Hi Y.X,

          For RGB data, dim[0]=5, dim[1:4] will have your image size (x,y,z,t), and dim[5] will have size 3 and contain the RGB triplet.

          So, x and y will be dim[1] and dim[2] (actually it gets a bit more complicated because we think of x and y as real world coordinates, so we should be talking about i and j instead of x and y).

          All the best,
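          As a small illustration of that layout (a numpy sketch with made-up sizes, not tied to any particular reader or writer):

```python
import numpy as np

rgb = np.zeros((512, 512, 3), dtype=np.uint8)    # an ordinary 2D RGB image
nifti_shaped = rgb.reshape(512, 512, 1, 1, 3)    # axes (i, j, k, t, colour)
dim = [5, 512, 512, 1, 1, 3, 1, 1]               # header dim[]: dim[0]=5, colours in dim[5]
```

So the in-plane size is read back from dim[1] and dim[2], while the colour triplet occupies dim[5].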


    • One way to store RGB data is mentioned in Anderson’s reply. According to the NIfTI standard, you can also save it as the RGB datatype, in which case the colour triplet runs fastest, so the arrangement for your example will be 3×512×512, and the datatype should be set according to your data range (0–1 single or 0–255 uint8).

      You can use my nii_tool for matlab to deal with this easily. The link is


  12. If I open a NIFTI file in, say, FSL, and both qform_code and sform_code are larger than zero, how do I know which method (method 2 or method 3) is used to display the image?

    • Hi Sascha,
      It depends on the implementation used by the program. The NIFTI standard does not specify which is preferred, although it seems sensible to prefer Method 3 (sform_code > 0); whether that happens depends on whether the developer implemented the full standard or not. FSLeyes (the upcoming FSL viewer) fully supports Method 3.
      All the best,

  13. Hey Anderson and fellow readers.
    What does “xyzt_units: 10” mean? I have found it in 3 different datasets, all of which are in NIFTI-1 format.

    Thank you.

    • Hi,

      This field packs two unit codes into a single byte: the lower 3 bits are the units for space, and the next 3 bits are the units for time.

      What you have is 2 + 8 = 10, indicating millimetres (code 2) and seconds (code 8): in binary, 10 = 00001010, with 2 = 00000010 occupying the space bits and 8 = 00001000 occupying the time bits.

      Hope this helps!

      All the best,
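      Decoding the two codes can be sketched in a few lines of Python (unit names per the NIFTI-1 header definitions; the masks 0x07 and 0x38 select the space and time bits respectively):

```python
# Split xyzt_units into spatial and temporal unit codes (NIFTI-1 convention).
SPACE_UNITS = {0: 'unknown', 1: 'meter', 2: 'mm', 3: 'micron'}
TIME_UNITS = {0: 'unknown', 8: 'sec', 16: 'msec', 24: 'usec'}

def decode_xyzt_units(code):
    """Return (space_unit, time_unit) for a NIFTI-1 xyzt_units byte."""
    return SPACE_UNITS[code & 0x07], TIME_UNITS[code & 0x38]

decode_xyzt_units(10)   # → ('mm', 'sec')
```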


  14. I have a very basic question to ask; I am just naive, and out of curiosity I want to know: what do the 3 dimensions of an MRI image represent? Say I have the size details of 2 images in NIFTI format: 1. img: [160×192×192 uint16]; 2. [176×240×256 uint16]. Why do the channels differ here? Shouldn’t they be the same?

    • Hi Kruthika,
      These aren’t really channels but the dimensions of the image in voxels (i.e., volumetric pixels). The 3 dimensions are akin to width, length and height, although the order of each of these varies depending on other fields of the header. The image can even be oblique/tilted.
      Hope this helps.
      All the best,

  15. Hi all,
    I have a question. I found the displacement field of the deformation between two images. I load the NIFTI file as .nii and computed the histogram of the img, which is a 5-dimensional image file. I want to know the unit of the x-axis.

    • Hi Hesham,
      This information is stored in the field xyzt_units. You need to convert it to binary, then read it using the code indicated in the article. Typically it’s millimetres, though.
      All the best,

  16. Hi all,

    Please let me ask you for advice on this issue.

    I have two NIFTI volumes (from the same MR session and patient). One is the ADC map (converted from DICOM to NIFTI using tool1) and the other is the ROI depicted over the T1w volume (converted from DICOM to NIFTI using tool2). If I want to compute some statistics on the ADC restricted to the ROI, I need to transform the ROI to the ijk space of the ADC NIFTI.

    Let qform be the 4×4 matrix composed of the rotation matrix, the translation column, and a final row (0,0,0,1). So the transformation should be as easy as ijk’ = (qform_ADC^-1 * qform_ROI) * ijk, where ijk is the location in the matrix containing ROI=true, with a final 1 appended.

    Am I wrong?

    • Hi Félix,
      Even if the images are from the same subject and same scanner, the different sequences may not ensure perfect alignment based only on the affine transformations derived from the DICOM/NIFTI. Consider using a realignment tool, such as FSL FLIRT.
      All the best,

      • Thanks for your comment.
        The NIFTI files were previously registered. That’s why I think there is something that I’m not understanding.
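      For what it’s worth, the composition in the original question, ijk’ = (qform_ADC^-1 * qform_ROI) * ijk, can be sketched with numpy as below. This is a hypothetical illustration and, as noted above, it only holds if the two volumes are truly aligned in world space:

```python
import numpy as np

def map_ijk(affine_src, affine_dst, ijk):
    """Map voxel ijk on the source grid to (fractional) indices on the target grid."""
    ijk1 = np.append(np.asarray(ijk, dtype=float), 1.0)
    return (np.linalg.inv(affine_dst) @ affine_src @ ijk1)[:3]
```

The result is generally fractional, so sampling the target volume requires rounding or interpolation.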

  17. Pingback: In depth guide to the NIFTI file format – BRIANLAB

  18. Hi all,
    I have a question. I want to know if I can read patients’ information from a NIFTI file, such as the patient’s name, sex and so on.

    • Hi Sunny,
      The NIFTI format doesn’t include fields to store this information. Sometimes, patient name may be (incorrectly) stored in the “description” field, but that is a misuse of the format, which isn’t supposed to convey this kind of information.
      All the best,

  19. Dear all,
    I use NIFTI files in the context of MRI registration.
    I would like to apply an affine transformation T to a point Pf=[If, Jf, Kf], expressed in voxels in the floating image, in order to obtain its voxel coordinates Pt=[It, Jt, Kt] in the target image.
    Let’s say I would like to solve this problem: Pt = T*Pf.
    For that:
    1) I first convert [If, Jf, Kf] to the real-world coordinates [Xf, Yf, Zf] using floating_image.affine
    2) I apply T to [Xf, Yf, Zf] to obtain the world coordinates of the warped point in the target image, [Xt, Yt, Zt]
    3) Finally I convert [Xt, Yt, Zt] to the voxel space [It, Jt, Kt] using the inverse of reference_image.affine
    Unfortunately this gives wrong coordinates for [It, Jt, Kt] when comparing them to FSL img2imgcoord results for the same point. I also visually checked that my newly calculated coordinates are wrong in the images.

    • Hi Karim,
      From your description things seem right, i.e., the transformation being applied in world space, then put back into voxel coordinates. I wonder if the issue has something to do with the implementation of the reader/writer of the files. Consider perhaps using FSL tools (which have a way to apply linear transformations easily).
      All the best,

      • Hi A. M. Winkler,

        thanks for your response! I already use fsl img2imgcoord, but I need to implement my own function to warp a point from one image to another, in order to speed up my log-demons algorithm, which registers my floating image voxel by voxel (i.e., I have one transformation per voxel: diffeomorphic registration) to align it with the reference image.

  20. Hi Anderson
    I have a very basic question to ask. I want to know: can Matlab open a .nii file, and how? I’m sorry to bother you.

    Thank you!

  21. Pingback: Working with NIFTI(-1) files (in MATLAB) |

  22. Pingback: SPM12/Matlab scripting tutorial: Post 2 - fMRwhy

  23. Hello!
    I’d like to ask what kind of 3D reconstruction the NIFTI format applies to MRIs in order to have a single 3D file, rather than many files, each representing a 2D slice (as in the DICOM format). Sorry if I’m not expressing myself correctly, but I’m a bit confused.
    Thank you very much

    • Hi,

      There is nothing special really. The 2D slices are stacked, forming a 3D file. However, the file format changes: I don’t think DICOM can store 3D directly. Plus, DICOM is a somewhat complex format. Instead we use NIFTI, as described in the article. There are various tools available that allow converting between these; a great one, that I recommend, is Chris Rorden’s dcm2niix:
      Hope this helps.
      All the best,

      • Hello,

        You can use “dcm2nii” to convert your 2D stacked DICOM slices to a 3D nifti file.
        Hope this helps.
        All the best,
        Karim MAKKI.
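      In numpy, the stacking that turns 2D slices into a 3D volume can be sketched as follows (sizes here follow the 512×512 example above, with a made-up number of slices; each slice is filled with its own index just so the result can be checked):

```python
import numpy as np

# 130 made-up 512x512 slices, each filled with its slice index
slices = [np.full((512, 512), k, dtype=np.int16) for k in range(130)]
volume = np.stack(slices, axis=-1)    # 3D array, shape (512, 512, 130)
```

The third axis is then the slice index, exactly the layout a converter such as dcm2niix produces before writing the header.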

  24. Hi Anderson, thanks heaps for this great and readable description of the NIFTI format. I needed to compute the FreeSurfer vox2ras matrix from a NIFTI header (based on sform and/or qform), and it worked out thanks to your description.

    For those looking for very in-depth info, I can recommend fully reading both this page and the spec at . The format is horrible, but it contains some interesting details.

  25. Hi. Thank you for this wonderful topic about the NIFTI file. I have a question about how to calculate the real area of an image in a NIFTI file. I’m using BraTS 2019 data, and its NIFTI files have images with a size of 240×240 pixels, with the space units in millimetres. Does it mean the images have a size of 240mm×240mm in reality? Please help me solve this problem. Thank you all very much!!!

    • Hi Phuong,

      If it’s, e.g., the cross-sectional area of a piece of tissue, such as a tumour, then yes, the area is just the area of the pixels. For volumetric data, however, it’s possible to have oblique sections (and associated interpolation), and the area would be computed accordingly. For the surface area of organs, a full reconstruction is likely needed, which isn’t trivial, but for some organs, such as the brain, there exists software that can do it from NIFTI files.

      Good luck with your project!

      All the best,


  26. Hi, thanks for this amazing resource.
    I have a query: I am trying to read ADNI fMRI data which is in .nii format.
    Its dim[8] is [3, 256, 256, 170, 1, 1, 1, 1] and bitpix is 32. I am unable to figure out how to get the time series information, since the 3D matrix which I get by reading the .nii file has dimensions 256 × 256 × 170.

    • Hi Anoop,
      Based on these image sizes it seems you are looking only into the structural (MPRAGE) scans, not the fMRI time series. Try locating the EPI scans. These are 4D files.
      All the best,

  27. Hello!
    My comment is about volume calculation. Suppose I have a NIFTI image label of a tissue. I know that I have the x, y, z dimensions of the image, but I want to calculate only the volume of the label mask. How can I do that?

    • Hi Carlão,

      The volume of a single voxel is given by pixdim[1]*pixdim[2]*pixdim[3]. The volume of all voxels marked with the same index in the mask can be computed by counting the number of such voxels and multiplying by the volume of one voxel. Much easier is to use the command “fslstats”, part of the FSL software package.

      All the best,
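      As a sketch of that calculation (a hypothetical numpy illustration; pixdim here stands for pixdim[1:4] from the header, assumed to be in mm):

```python
import numpy as np

def label_volume(data, pixdim, label):
    """Volume (in mm^3, if pixdim is in mm) of all voxels equal to `label`."""
    voxel_vol = pixdim[0] * pixdim[1] * pixdim[2]   # one-voxel volume
    return np.count_nonzero(data == label) * voxel_vol
```

The same number is what fslstats reports with its volume options, computed directly from the mask and the header.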


  28. Hello!
    Thanks for a very useful post. I’m struggling with an image I will use as a mask reference. I’m working in Python, and it keeps telling me that the input should be binary or integers. I suspect it is because some dims are set to 0:
    sizeof_hdr 348
    data_type FLOAT32
    dim0 3
    dim1 91
    dim2 109
    dim3 91
    dim4 1
    dim5 0
    dim6 0
    dim7 0
    How can I change that? I tried with fsledithd, but it does not seem to have options to change these.
    Thanks in advance

    • Hi Monica,

      You can simply convert to bool once in Python. Alternatively, change the data type in NIFTI to bool (and therefore change the file) before loading. The zeroes in the dims past what is indicated in “dim0” aren’t relevant.

      All the best,
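      In Python, the conversion can be as simple as this sketch (using a plain numpy array to stand in for the loaded FLOAT32 image data):

```python
import numpy as np

mask_float = np.array([0.0, 1.0, 0.0, 1.0], dtype=np.float32)  # FLOAT32 mask values
mask_bool = mask_float.astype(bool)    # True wherever the mask is non-zero
# or, more defensively for masks touched by interpolation:
mask_bool2 = mask_float > 0.5
```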


  29. Pingback: Know the value range for the image data of a NIFTI image - Tutorial Guruji

  30. I’d like to ask about the values at indexes 5–7 in the `dim` and `pixdim` arrays. What are the correct values there for an fMRI image (BOLD signal)? My original NIFTI image (after converting DICOM to NIFTI with dcm2niix) has `0`s there. After removing dummy slices with the AFNI nifti_tool, `dim[5:]` have `1`s. When I use the nibabel slicer to do the same, both `dim[5:]` and `pixdim[5:]` have `1`s. I’ve already opened a nibabel issue, but then found this active discussion and thought I’d ask here.

    • Hi Mateus,

      The dim[0] indicates how many dimensions the image has; for FMRI, this value would typically be 4 (for four dimensions). Then dim[5:7] are ignored and, in principle, could contain any value.

      Hope this helps!

      All the best,


  31. Hey, I’m currently writing a viewer for NIFTI files, with some special functionality that we need. When I try to display the typical 3-slice view of an image and the background, the volume of interest and the background are not aligned properly. It looks like this:

    The images are generated by first selecting the slices. The slices are selected by computing the index coordinates from the provided MNI coordinates for both volumes, and then just taking the slices based on the computed indices. Then both images are scaled to the same size and overlaid. Do you have an idea of how this misalignment comes about, and how to get rid of it?

    • Nvm, by now I have figured out that the fact that both images are in MNI space alone doesn’t imply that they have the same position and orientation in the file. After resampling one image to match the other, things work as expected.
