Installing NiDB

NiDB is a light, powerful, and simple to use neuroimaging database. One of its main strengths is that it was developed using a stack of stable and proven technologies: Linux, Apache, MySQL/MariaDB, PHP, and Perl. None of these technologies are new, and the fact that they have been around for so many years means that there is a lot of documentation and literature available, as well as a myriad of libraries (for PHP and Perl) that can do virtually anything. Although both PHP and Perl have received some degree of criticism (not unreasonably), and in some cases are being replaced by tools such as Node.js and Python, the volume of information about them means it is easy to find solutions when problems appear.

This article covers installation steps for CentOS or RHEL 7, but similar steps should work with other distributions since the overall strategy is the same. By separating each of the steps, as opposed to doing all the configuration and installation as a single script, it becomes easier to adapt to different systems, and to identify and correct problems that may arise due to local particularities. The steps below are derived from the setup scripts available in the NiDB repository, although the scripts themselves are not used here. Note that the instructions below are not “official”; for the latter, consult the NiDB documentation. The intent of this article is to facilitate the process and mitigate some of the frustration you may feel if trying to do it all by yourself. Also, by looking at the installation steps, you should get a broad overview of the pieces that constitute the database.

1) Begin with a fresh install.

If installing CentOS from the minimal DVD, choose a “Minimal Install”; the desktop can be added in a later step.

2) Update the system.

This is a good time to install the most recent updates and patches, and reboot if the updates include a new kernel:

yum update

3) Have a graphical mode.

While not strictly necessary, having a graphical interface for a web-based application will be handy. Install your favourite desktop, and a VNC server if you intend to manage the system remotely. For a lightweight desktop, consider MATE:

yum install epel-release
yum groupinstall "MATE Desktop"
systemctl set-default graphical.target
systemctl isolate graphical.target

For VNC, there are various options available. Consider, for example, TurboVNC.

4) Define some environment variables to be used later.

These will help when entering the commands later.

# Directory where NiDB will be installed

# Directory of the webpages and PHP files:

# Linux username under which NiDB will run:

# MySQL/MariaDB root password:

# MySQL/MariaDB username that will have access to the database, and associated password:
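
For example, a possible set of values is shown below. All of these are placeholders (including the variable names MYSQLUSER and MYSQLPASS, which are used only in this article); adjust paths, usernames, and passwords to your own system:

NIDBROOT=/nidb
WWWROOT=/var/www/html
NIDBUSER=nidb
MYSQLROOTPASS='choose-a-root-password'
MYSQLUSER=nidb
MYSQLPASS='choose-a-password'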

These variables are only used during the installation, and all the steps here are done as root. Consider clearing your shell history at the end, so as not to have your passwords stored there.

5) Create an account for the user under which NiDB will run.

This is the user that will run the processes related to the database. It is not necessary for this user to have administrative privileges on the system, and from a security perspective, it is better if it does not.

useradd -m ${NIDBUSER}
passwd ${NIDBUSER} # choose a sensible password

6) Install and configure Apache.

Add the repository for a more recent version, then install:

yum install
yum install httpd

Configure it to run as the ${NIDBUSER} user:

sed -i "s/User apache/User ${NIDBUSER}/" /etc/httpd/conf/httpd.conf
sed -i "s/Group apache/Group ${NIDBUSER}/" /etc/httpd/conf/httpd.conf

Enable it at boot, and also start it now:

systemctl enable httpd.service
systemctl start httpd.service

Open the relevant ports in the firewall, then reload the rules:

firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload

7) Install and configure MySQL/MariaDB.

For MariaDB 10.2, the repository can be added to /etc/yum.repos.d/ as:

echo "[mariadb]
name = MariaDB
baseurl =
gpgkey =
gpgcheck = 1" >> /etc/yum.repos.d/MariaDB.repo

For other versions or distributions, use the repository configuration tool on the MariaDB website. Then do the actual installation:

yum install MariaDB-server MariaDB-client

Enable it at boot and start now too:

systemctl enable mariadb.service
systemctl start mariadb.service

Secure the MySQL/MariaDB installation:

mysql_secure_installation

Pay attention to the question about the root password and set it to the value chosen for the ${MYSQLROOTPASS} variable. Answer the remaining questions so as to keep your database secure.

8) Install and configure PHP.

First add the repositories for PHP 7.2:

yum install
yum install yum-utils
yum-config-manager --enable remi-php72
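
The PHP packages themselves can then be installed. The exact selection used in the official instructions is not reproduced here; a reasonable, assumed selection (which includes pear, used below) would be:

yum install php php-cli php-common php-gd php-mbstring php-mysqlnd php-pear php-process php-xml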

Install some additional PHP packages:

pear install Mail
pear install Mail_Mime
pear install Net_SMTP

Edit the PHP configuration:

sed -i 's/^short_open_tag = .*/short_open_tag = On/g' /etc/php.ini
sed -i 's/^session.gc_maxlifetime = .*/session.gc_maxlifetime = 28800/g' /etc/php.ini
sed -i 's/^memory_limit = .*/memory_limit = 5000M/g' /etc/php.ini
sed -i "s|^upload_tmp_dir = .*|upload_tmp_dir = ${NIDBROOT}/uploadtmp|g" /etc/php.ini
sed -i 's/^upload_max_filesize = .*/upload_max_filesize = 5000M/g' /etc/php.ini
sed -i 's/^max_file_uploads = .*/max_file_uploads = 1000/g' /etc/php.ini
sed -i 's/^max_input_time = .*/max_input_time = 600/g' /etc/php.ini
sed -i 's/^max_execution_time = .*/max_execution_time = 600/g' /etc/php.ini
sed -i 's/^post_max_size = .*/post_max_size = 5000M/g' /etc/php.ini
sed -i 's/^display_errors = .*/display_errors = On/g' /etc/php.ini
sed -i 's/^error_reporting = .*/error_reporting = E_ALL \& \~E_DEPRECATED \& \~E_STRICT \& \~E_NOTICE/' /etc/php.ini

Also, edit /etc/php.ini to make sure your timezone (the date.timezone setting) is correct. Finally, give ownership of the PHP session directory to the ${NIDBUSER} user:

chown -R ${NIDBUSER}:${NIDBUSER} /var/lib/php/session

9) Install Perl and other pieces.

These are all in the main repositories already added so you should be able to simply run:

yum install perl* cpan git gcc gcc-c++ java ImageMagick vim libpng12 libmng wget iptraf*

Install also various Perl packages from CPAN. The first time you run cpan, various configuration questions will be asked; it is safe to accept default answers for all:

cpan File::Path
cpan Net::SMTP::TLS
cpan List::Util
cpan Date::Parse
cpan Image::ExifTool
cpan String::CRC32
cpan Date::Manip
cpan Sort::Naturally
cpan Digest::MD5
cpan Digest::MD5::File
cpan Statistics::Basic
cpan Email::Send::SMTP::Gmail

Then put these into a place where NiDB can find them:

mkdir -p /usr/local/lib64/perl5
cp -rv /root/perl5/lib/perl5/* /usr/local/lib64/perl5/

10) (Optional) Disable SELinux.

Disabling SELinux is not strictly necessary, provided that all processes related to NiDB (web server, database server) and all its files belong to the same user, nidb, and that file access policies are set correctly. In any case, you may find it useful to disable it so as to stop receiving irrelevant warnings during the installation. You can enable it again later.

sed -i 's/^SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
setenforce 0

Note that fully enabling or disabling SELinux requires a reboot to take effect (it is not sufficient to restart a daemon; there is not one, in fact); the setenforce 0 above only switches to permissive mode for the current session.

11) Install FSL.

FSL functions are used by various internal scripts. After the installation, make sure the environment variable FSLDIR exists and points to the correct location (typically /usr/local/fsl, but can be different if you installed it elsewhere). This variable is used below when defining the crontab jobs.
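
For example, to confirm that the variable is set (the path below is the typical default mentioned above; adjust if FSL was installed elsewhere):

echo ${FSLDIR}
# If empty, set it, e.g.:
export FSLDIR=/usr/local/fsl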


12) Download and install the NiDB files.

The official GitHub repository is the primary source; however, I have made a fork with a couple of changes that better adapt to the system I am working with. You can probably go either way.

mkdir -p ${NIDBROOT}
cd ${NIDBROOT}
mkdir -p archive backup dicomincoming deleted download ftp incoming problem programs/lock programs/logs uploadtmp uploaded
git clone <NiDB repository URL> install
cd install
cp -Rv programs/Mysql* /usr/local/lib64/perl5/
cp -Rv programs/* ${NIDBROOT}/programs/
cp -Rv web/* ${WWWROOT}/
cp -rv Mysql* /usr/local/lib64/perl5/

Edit the file ${WWWROOT}/functions.php and complete two pieces of configuration. Locate these two lines (the second of which sets the timezone; the exact call may differ slightly in your version):

$cfg = LoadConfig();
date_default_timezone_set();

In the first parenthesis, (), put what you get when you run:

echo "${NIDBROOT}/programs/nidb.cfg"

whereas in the second (), put what you get when you run:

timedatectl | grep "Time zone:" | awk '{print $3}'

For example, depending on your variables and time zone, the edited lines could look like this:

$cfg = LoadConfig("/nidb/programs/nidb.cfg");
date_default_timezone_set("America/New_York");

13) Set up the database.

First, create the nidb user in MySQL/MariaDB. This is the only user (other than root) that will be able to do anything in the database:
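
The original SQL commands are not reproduced here; a minimal sketch that creates the database, the user, and the grants, using the MYSQLUSER and MYSQLPASS placeholders defined earlier (assumptions, not the official commands), would be:

mysql -uroot -p${MYSQLROOTPASS} <<EOSQL
CREATE DATABASE IF NOT EXISTS nidb;
CREATE USER '${MYSQLUSER}'@'localhost' IDENTIFIED BY '${MYSQLPASS}';
GRANT ALL PRIVILEGES ON nidb.* TO '${MYSQLUSER}'@'localhost';
FLUSH PRIVILEGES;
EOSQL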


Now populate the NiDB database with its schema and initial data:

cd ${NIDBROOT}/install/setup
mysql -uroot -p${MYSQLROOTPASS} nidb < nidb.sql
mysql -uroot -p${MYSQLROOTPASS} nidb < nidb-data.sql

14) Setup cron jobs.

These jobs will take care of various automated input/output tasks.

cat <<EOC >> ~/tempcron.txt
* * * * * cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
* * * * * cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
* * * * * cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
* * * * * cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
* * * * * cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
* * * * * cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
* * * * * FSLDIR=${FSLDIR}; PATH=${FSLDIR}/bin:${PATH}; . ${FSLDIR}/etc/fslconf/fsl.sh; export FSLDIR PATH; cd ${NIDBROOT}/programs; perl > /dev/null 2>&1
@hourly find ${NIDBROOT}/programs/logs/*.log -mtime +4 -exec rm {} \;
@daily  /usr/bin/mysqldump nidb -u root -p${MYSQLROOTPASS} | gzip > ${NIDBROOT}/backup/db-\$(date +%Y-%m-%d).sql.gz
@hourly /bin/find /tmp/* -mmin +120 -exec rm -rf {} \;
@daily  find ${NIDBROOT}/ftp/* -mtime +7 -exec rm -rf {} \;
@daily  find ${NIDBROOT}/tmp/* -mtime +7 -exec rm -rf {} \;
EOC
crontab -u ${NIDBUSER} ~/tempcron.txt && rm ~/tempcron.txt

15) Edit the main configuration.

The main configuration file, ${NIDBROOT}/programs/nidb.cfg, should be edited to reflect your paths, usernames, and passwords. It is this file that will contain the admin password for accessing NiDB. Use the ${NIDBROOT}/programs/nidb.cfg.sample as an example.

Once you have logged in as admin, you can also edit this file again in the database interface, in the menu Admin -> NiDB Settings.

16) (Optional) Install a MySQL/MariaDB frontend.

It will likely increase your productivity when doing maintenance to have a friendly frontend for MySQL/MariaDB. Two popular choices are phpMyAdmin (web-based) and Oracle MySQL Workbench.

For phpMyAdmin:

mv phpMyAdmin-*-english ${WWWROOT}/phpMyAdmin
chmod 755 ${WWWROOT}
cp ${WWWROOT}/phpMyAdmin/config.sample.inc.php ${WWWROOT}/phpMyAdmin/config.inc.php

For MySQL Workbench, add the MySQL community repository (available from the MySQL website), then install:

rpm -Uvh mysql57-community-release-el7-11.noarch.rpm
yum install mysql-workbench

However, at the time of this writing, the current version (6.3.10) crashes upon start. The solution is to downgrade:

yum install yum-plugin-versionlock
yum versionlock mysql-workbench-community-6.3.8-1.el7.*
yum install mysql-workbench-community

17) That’s it!

You should by now have a working installation of NiDB, accessible from your web browser at http://localhost. There are additional pieces you may consider configuring, such as a listener on one of your server ports to bring in DICOM files from the scanner automatically as the images are collected, as well as other changes to the database schema and web interface. Now you have a starting point.


For more information on NiDB, see the papers describing it.


How do we measure thickness, area, and volume of the cerebral cortex?

There are various ways one could estimate morphometric parameters of the cortex, such as its thickness, area, and volume. For example, partial volume effects can be estimated voxelwise using volume-based representations of the brain, as in voxel-based morphometry (VBM), in which an estimate per voxel becomes available. Volume-based representations also allow for estimates of thickness, as suggested, for example, by Hutton et al. (2004). Alternatively, thickness can be measured from a surface representation of the cortex, as a form of distance between the mesh that represents the pia mater (the pial surface) and the mesh that represents the interface between gray and white matter (the white surface).

Here we focus on the surface-based representation as that offers advantages over volume-based representations (Van Essen et al., 1998). Software such as FreeSurfer uses magnetic resonance images to initially construct the white surface. Once that surface has been produced, a copy of it can be offset outwards until tissue contrast in the magnetic resonance image is maximal, which indicates the location of the pial surface. This procedure ensures that both white and pial surfaces have the same topology, with each face and each vertex of the white surface having their matching pair in the pial. This convenience facilitates the computations indicated below.

Cortical surface area

For a triangular face ABC of the surface representation, with vertex coordinates \mathbf{a}=[x_A \; y_A \; z_A]', \mathbf{b}=[x_B \; y_B \; z_B]', and \mathbf{c}=[x_C \; y_C \; z_C]', the area is |\mathbf{u} \times \mathbf{v}|/2, where \mathbf{u} = \mathbf{a}-\mathbf{c}, \mathbf{v} = \mathbf{b}-\mathbf{c}, \times represents the cross product, and the bars |\;| represent the vector norm. Even though such area per face (i.e., facewise) can be used in subsequent steps, most software packages can only deal with values assigned to each vertex (i.e., vertexwise). Conversion from facewise to vertexwise is achieved by assigning to each vertex one-third of the sum of the areas of all faces that meet at that vertex (Winkler et al., 2012).
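
As an illustration, a minimal Octave/MATLAB sketch of the facewise computation and the conversion to vertexwise is below; the variable names vtx (V-by-3 vertex coordinates) and fac (F-by-3 face indices) are assumptions, not from any particular package:

% vtx: V x 3 vertex coordinates; fac: F x 3 face indices (1-based)
u = vtx(fac(:,1),:) - vtx(fac(:,3),:);
v = vtx(fac(:,2),:) - vtx(fac(:,3),:);
facewise = sqrt(sum(cross(u, v, 2).^2, 2))/2;           % area per face
vertexwise = zeros(size(vtx,1), 1);                     % area per vertex
for f = 1:size(fac,1)
    idx = fac(f,:);
    vertexwise(idx) = vertexwise(idx) + facewise(f)/3;  % one third to each vertex
end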

Cortical thickness

The thickness at each vertex is computed as the average of two distances (Fischl and Dale, 2000; Greve and Fischl, 2018): the first is the distance from each white surface vertex to the corresponding closest point on the pial surface (not necessarily at a pial vertex); the second is the distance from the corresponding pial vertex to the closest point on the white surface (again, not necessarily at a vertex). Other methods are possible, however; see the table below (adapted from Lerch and Evans, 2005):

Method Reference
Distance solved using Laplace’s equation. Jones et al. (2000)
Distance between corresponding vertices. MacDonald et al. (2000)
Distance to the nearest point in the other surface. MacDonald et al. (2000)
Distance to the nearest point in the other surface, computed for both surfaces, then averaged. Fischl and Dale (2000)
Distance along the normal. MacDonald et al. (2000)
Distance along the iteratively computed normal. Lerch and Evans (2005)

Cortical volume

Product method

If the area of either of these surfaces is known, or if the area of a mid-surface, i.e., the surface running half-distance between pial and white surfaces, is known, an estimate of the volume can be obtained by multiplying, at each vertex, area by thickness. This procedure is problematic in that it underestimates the volume of tissue that is external to the convexity of the surface, and overestimates the volume that is internal to it; both cases are undesirable, and cannot be solved by merely resorting to an intermediate surface such as the mid-surface.

Figure 1: A diagram in two dimensions of the problem of measuring the cortical volume. If volume is computed using the product method (a), considerable amount of tissue is left unmeasured in the gyri, or measured repeatedly in sulci. The problem is minimised, but not solved, with the use of the mid-surface. In the analytic method (b), vertex coordinates are used to compute the volume of tissue between matching faces of white and pial surfaces, leaving no tissue under- or over-represented.

Analytic method

In Winkler et al. (2018) we propose a different approach to measure volume. Instead of computing the product of thickness and area, we note that any pair of matching faces can be used to define an irregular polyhedron, of which all six vertex coordinates are known from the surface geometry. This polyhedron is an oblique truncated triangular pyramid, which can be perfectly divided into three irregular tetrahedra that neither overlap nor leave gaps.

Figure 2: A 3D diagram with the proposed solution to measure the cortical volume. In the surface representation, the cortex is limited internally by the white and externally by the pial surface (a). These two surfaces have matching vertices that can be used to delineate an oblique truncated triangular pyramid (b) and (c). The six vertices of this pyramid can be used to define three tetrahedra, the volumes of which are computed analytically (d).

From the coordinates of the vertices of these tetrahedra, their volumes can be computed analytically, then added together, viz.:

  1. For a given face A_w B_w C_w in the white surface, and its corresponding face A_p B_p C_p in the pial surface, define an oblique truncated triangular pyramid.
  2. Split this truncated pyramid into three tetrahedra, defined as:
    \begin{array}{lcllllll} T_1 &=& (&A_w,&B_w,&C_w,&A_p&)\\ T_2 &=& (&A_p,&B_p,&C_p,&B_w&)\\ T_3 &=& (&A_p,&C_p,&C_w,&B_w&) \end{array}
  3. For each such tetrahedron, let \mathbf{a}, \mathbf{b}, \mathbf{c} and \mathbf{d} represent its four vertices in terms of coordinates [x\;y\;z]'. Compute its volume as |\mathbf{u}\cdot(\mathbf{v} \times \mathbf{w})|/6, where \mathbf{u} = \mathbf{a}-\mathbf{d}, \mathbf{v} = \mathbf{b}-\mathbf{d}, \mathbf{w} = \mathbf{c}-\mathbf{d}, the symbol \times represents the cross product, \cdot represents the dot product, and the bars |\;| represent the vector norm.

No error other than what is intrinsic to the placement of these surfaces is introduced. The resulting volume can be assigned to each vertex in a similar way as conversion from facewise area to vertexwise area. The above method is the default in FreeSurfer 6.0.0.
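
A minimal Octave/MATLAB sketch of the facewise part of this computation is below; the names vtxw and vtxp (white and pial vertex coordinates) and fac (shared face indices) are assumptions, and this is not the FreeSurfer implementation:

% vtxw, vtxp: V x 3 coordinates of the white and pial surfaces (matching vertices)
% fac: F x 3 face indices, shared by both surfaces
tetvol = @(a,b,c,d) abs(dot(a - d, cross(b - d, c - d, 2), 2))/6;
Aw = vtxw(fac(:,1),:); Bw = vtxw(fac(:,2),:); Cw = vtxw(fac(:,3),:);
Ap = vtxp(fac(:,1),:); Bp = vtxp(fac(:,2),:); Cp = vtxp(fac(:,3),:);
% Volumes of the three tetrahedra T1, T2 and T3 defined above, summed per face:
facevol = tetvol(Aw,Bw,Cw,Ap) + tetvol(Ap,Bp,Cp,Bw) + tetvol(Ap,Cp,Cw,Bw);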

Is volume at all useful?

Given that the volume of the cortex is ultimately determined by its area and thickness, and these are known to be influenced in general by different factors (Panizzon et al, 2009; Winkler et al, 2010), why would anyone bother measuring volume at all? The answer is that not all factors that can affect the cortex will affect exclusively thickness or area. For example, an infectious process or the development of a tumor has the potential to affect both. Volume is a way to assess the effects of such non-specific factors on the cortex. However, even in that case there are better alternatives available, namely the non-parametric combination (NPC) of thickness and area. This use of NPC will be discussed in a future post here in the blog.


The “Group” indicator in FSL

In FSL, when we create a design using the graphical interface in FEAT, or with the command Glm, we are given the opportunity to define, at the higher level, the “Group” to which each observation belongs. When the design is saved, the information from this setting is stored in a text file named something like “design.grp”. This file, and thus the group setting, takes different roles depending on whether the analysis is used in FEAT itself, in PALM, or in randomise.

What can sometimes be confusing is that, in all three cases, the “Group” indicator does not refer to an experimental or observational group of any sort. Instead, it refers to variance groups (VG) in FEAT, to exchangeability blocks (EB) in randomise, and to either VG or EB in PALM, depending on whether the file is supplied with the option -vg or -eb.

In FEAT, unless there is reason to suspect (or assume) that the variances for different observations are not equal, all subjects should belong to group “1”. If variance groups are defined, then these are taken into account when the variances are estimated. This is only possible if the design matrix is “separable”, that is, it must be such that, if the observations are sorted by group, the design can be constructed by direct sum (i.e., block-diagonal concatenation) of the design matrices for each group separately. A design is not separable if any explanatory variable (EV) present in the model crosses the group borders (see figure below). Contrasts, however, can encompass variables that are defined across multiple VGs.
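
As an example (hypothetical, with two variance groups of two observations each and one EV per group), a separable design matrix has the block-diagonal form \mathbf{M} = \left[\begin{array}{cc} x_1 & 0 \\ x_2 & 0 \\ 0 & x_3 \\ 0 & x_4 \end{array}\right], i.e., the direct sum of [x_1\;x_2]' and [x_3\;x_4]'; a single EV spanning all four observations, [x_1\;x_2\;x_3\;x_4]', would cross the group border and make the design non-separable.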

The variance groups do not necessarily need to match the experimental or observational groups that may exist in the design (for example, in a comparison of patients and controls, the variance groups may be formed based on the sex of the subjects, or on another discrete variable, as opposed to the diagnostic category). Moreover, variance groups can be defined even if all variables in the model are continuous.

In randomise, the same “Group” setting can be supplied with the option -e design.grp, thus defining exchangeability blocks. Observations within a block can only be permuted with other observations within that same block. If the option --permuteBlocks is also supplied, then the EBs must all be of the same size, and the blocks as a whole are permuted instead. Randomise does not use the concept of variance group, and all observations are always members of the same single VG.

In PALM, using -eb design.grp has the same effect that -e design.grp has in randomise. Further using the option -whole is equivalent to using --permuteBlocks in randomise. It is also possible to use -whole and -within together, meaning that the blocks as a whole are shuffled and, further, the observations within each block are also shuffled. In PALM, the file supplied with the option -eb can have multiple columns, indicating multi-level exchangeability blocks, which are useful in designs with more complex dependence between observations. Using -vg design.grp causes PALM to use the v- or G-statistic, which are replacements for the t- and F-statistics, respectively, for the cases of heterogeneous variances. Although VG and EB are not the same thing, and may not always match each other, the VGs can be defined from the EBs, as exchangeability implies that some observations must have the same variance, otherwise permutations would not be possible. The option -vg auto defines the variance groups from the EBs, even for quite complicated cases.

In both FEAT and PALM, defining VGs will only make a difference if the variance groups are not balanced, i.e., if they do not have the same number of observations, since heteroscedasticity (different variances) only matters in those cases. If the groups have the same size, all subjects can be allocated to a single VG (e.g., all “1”).

Why the maximum statistic?

In brain imaging, each voxel (or vertex, or face, or edge) constitutes a single statistical test. Because thousands of such voxels are present in an image, a single experiment results in thousands of statistical tests being performed. The p-value is the probability of finding a test statistic at least as large as the one observed in a given voxel, provided that no effect is present. A p-value of 0.05 indicates that, if an experiment is repeated 20 times and there are no effects, on average one of these repetitions will be considered significant. If thousands of tests are performed, the chance of obtaining a spuriously significant result in at least one voxel increases: if there are 1000 voxels, all tested at the level \alpha = 0.05, we expect, on average, to find 50 significant tests, even in the absence of any effect. This is known as the multiple testing problem. A review of the topic for brain imaging is provided in Nichols and Hayasaka (2003) [see references at the end].

To take the multiple testing problem into account, either the test level (\alpha) or the p-values can be adjusted, such that instead of controlling the error rate at each individual test, the error rate is controlled for the whole set (family) of tests. Controlling such family-wise error rate (FWER) ensures that the chance of finding a significant result anywhere in the image stays within a certain predefined level. For example, if there are 1000 voxels, and the FWER-adjusted test level is 0.05, we expect that, if the experiment over all the voxels is repeated 20 times, then on average in one of these repetitions there will be an error somewhere in the image. The adjustment of the p-values or of the test level is done using the distribution of the maximum statistic, something that most readers of this blog are certainly well aware of, as it has permeated most of the imaging literature since the early 1990s.

Have you ever wondered why? What is so special about the distribution of the maximum that makes it useful to correct the error rate when there are multiple tests?

Definitions first

Say we have a set of V voxels in an image. For a given voxel v, v \in \{1, \ldots, V\}, with test statistic t_v, the probability that t_v is larger than some cutoff t is denoted by:

\mathsf{P}(t_v > t) = 1 - F_v(t)

where F_v(t) is the cumulative distribution function (cdf) of the test statistic. If the cutoff t is used to accept or reject a hypothesis, then we say that we have a false positive if an observed t_v is larger than t when there is no actual true effect. A false positive is also known as error type I (in this post, the only type of error discussed is of the type I).

For an image (or any other set of tests), if there is an error anywhere, we say that a family-wise error has occurred. We can therefore define a “family-wise null hypothesis” that there is no signal anywhere; to reject this hypothesis, it suffices to have a single, lonely voxel in which t_v > t. With many voxels, the chances of this happening increase, even if no effect is present. We can, however, adjust our cutoff t to some other value t_{\text{FWER}} so that the probability of rejecting such family-wise null hypothesis remains within a certain level, say \alpha_{\text{FWER}}.

Union-intersection tests

The “family-wise null hypothesis” is effectively a joint null hypothesis that there is no effect anywhere. That is, it is a union-intersection test (UIT; Roy, 1953). This joint hypothesis is retained if all tests have statistics that are below the significance cutoff. What is the probability of this happening? From the above we know that \mathsf{P}(t_v \leqslant t) = F_v(t). The probability of the same happening for all voxels simultaneously is therefore simply the product of such probabilities, assuming of course that the voxels are independent:

\mathsf{P}(\bigwedge_v t_v \leqslant t) = \prod_v \mathsf{P}(t_v \leqslant t) = \prod_v F_v(t)

Thus, the probability that any voxel has a significant result, which would lead to the occurrence of a family-wise error, is 1-\prod_v F_v(t). If all voxels have identical distribution under the null, then the same is stated as 1- F_v(t)^V.

Distribution of the maximum

Consider the maximum of the set of V voxels, that is, M = \max{(t_v)}. The random variable M is only smaller or equal than some cutoff t if all values t_v are smaller or equal than t. If the voxels are independent, this enables us to derive the cdf of M:

\mathsf{P}(M \leqslant t) = \prod_v \mathsf{P}(t_v \leqslant t) = \prod_v F_v(t).

Thus, the probability that M is larger than some threshold t is 1-\prod_v F_v(t). If all voxels have identical distribution under the null, then the same is stated as 1- F_v(t)^V.

These results, lo and behold, are the same as those used for the UIT above, hence how the distribution of the maximum can be used to control the family-wise error rate (if the distribution of the maximum is computed via permutations, independence is not required).
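
As a sanity check, this can be seen in a small simulation; the sketch below (Octave/MATLAB, with arbitrary, assumed values for the number of voxels and realisations) draws independent null statistics, computes the cutoff from the distribution of the maximum, and verifies that the chance of an error anywhere stays at about 5%:

V = 1000;                          % number of voxels (tests)
R = 10000;                         % number of null realisations
Z = randn(R, V);                   % null test statistics
M = max(Z, [], 2);                 % maximum across voxels, per realisation
Ms = sort(M);
t_fwer = Ms(ceil(0.95*R));         % FWER-corrected cutoff for alpha = 0.05
mean(any(Z > t_fwer, 2))           % proportion with an error anywhere: about 0.05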


The above is not the only way to see why the distribution of the maximum allows control of the family-wise error rate. The work by Marcus, Peritz and Gabriel (1976) showed that, in the context of multiple testing, the null hypothesis for a particular test v can be rejected provided that all possible joint (multivariate) tests done within the set and including v are also significant, and doing so controls the family-wise error rate. For example, if there are four tests, v \in \{1, 2, 3, 4\}, the test in v=1 is considered significant if the joint tests using (1,2,3,4), (1,2,3), (1,2,4), (1,3,4), (1,2), (1,3), (1,4) and (1) are all significant (that is, all those that include v=1). Such a joint test can be virtually any valid test, including Hotelling’s T^2, MANOVA/MANCOVA, or NPC (Non-Parametric Combination), all of which are based on recomputing the test statistic from the original data, or others based on the test statistics or p-values of each of the elementary V tests, as in a meta-analysis.

Such a closed testing procedure (CTP) incurs an additional problem, though: the number of joint tests that needs to be done is 2^V-1, which in imaging applications renders it unfeasible. However, there is one particular joint test that provides a direct algorithmic shortcut: using \max(t_v) as the statistic for the joint test. The maximum across all V tests is also the maximum for any subset of tests, such that those subsets can be skipped altogether. This provides a vastly more efficient algorithmic shortcut to a CTP, as shown by Westfall and Young (1993).

Simple intuition

One does not need to chase the original papers cited above (although doing so cannot hurt). Broadly, the same can be concluded based solely on intuition: if the distribution of some test statistic other than the maximum within an image were used as the reference to compute the (FWER-adjusted) p-values at a given voxel v, then the probability of finding a voxel with a test statistic larger than t_v anywhere could not be determined: there could always be some other voxel v', with an even larger statistic (i.e., t_{v'} > t_v), but the probability of this happening would not be captured by the distribution of a non-maximum. Hence the chance of finding a significant voxel anywhere in the image under the null hypothesis (the very definition of FWER) would not be controlled. Using the absolute maximum eliminates this logical leakage.


  • Marcus R, Peritz E, Gabriel KR. On closed testing procedures with special reference to ordered analysis of variance. Biometrika. 1976 Dec;63(3):655.
  • Nichols T, Hayasaka S. Controlling the familywise error rate in functional neuroimaging: a comparative review. Stat Methods Med Res. 2003 Oct;12(5):419–46.
  • Roy SN. On a heuristic method of test construction and its use in multivariate analysis. Ann Math Stat. 1953 Jun;24(2):220–38.
  • Westfall PH, Young SS. Resampling-based multiple testing: examples and methods for p-value adjustment. New York, Wiley, 1993.

Better statistics, faster

Faster permutation inference

Permutation tests are more robust and help make scientific results more reproducible by depending on fewer assumptions. However, they are computationally intensive, as recomputing a model thousands of times can be slow. The purpose of this post is to briefly list some options available for speeding up permutation tests.

Firstly, no speed-ups may be needed: for small sample sizes, or low resolutions, or small regions of interest, a permutation test can run in a matter of minutes. For larger data, however, accelerations may be of use. One option is acceleration through parallel processing or GPUs (for example applications of the latter, see Eklund et al., 2012, Eklund et al., 2013 and Hernández et al., 2013; references below), though this does require specialised implementation. Another option is to reduce the computational burden by exploiting the properties of the statistics and their distributions. A menu of options includes:

  • Do few permutations (shorthand name: fewperms). The results remain valid on average, although the p-values will have higher variability.
  • Keep permuting until a fixed number of permutations with a statistic larger than the unpermuted one is found (a.k.a. negative binomial; shorthand name: negbin).
  • Do a few permutations, then approximate the tail of the permutation distribution by fitting a generalised Pareto distribution to its tail (shorthand name: tail).
  • Approximate the permutation distribution with a gamma distribution, using simple properties of the test statistic itself, amazingly not requiring any permutations at all (shorthand name: noperm).
  • Do a few permutations, then approximate the full permutation distribution by fitting a gamma distribution (shorthand name: gamma).
  • Run permutations on only a few voxels, then fill the missing ones using low-rank matrix completion theory (shorthand name: lowrank).

These strategies allow accelerations >100x, yielding nearly identical results as in the non-accelerated case. Some, such as the tail approximation, are generic enough to be used in nearly all the most common scenarios, including univariate and multivariate tests, spatial statistics, and correction for multiple testing.

In addition to accelerating permutation tests, some of these strategies, such as tail and noperm, allow continuous p-values to be found, and refine the p-values far into the tail of the distribution, thus avoiding the usual discreteness of p-values, which can be a problem in some applications if too few permutations are done.

These methods are available in the tool PALM — Permutation Analysis of Linear Models — and the complete description, evaluation, and application to the re-analysis of a voxel-based morphometry study (Douaud et al., 2007) have just been published in Winkler et al., 2016 (for the Supplementary Material, click here). The paper includes a flow chart prescribing these various approaches for each case, reproduced below.

Figure: Flow chart prescribing which acceleration method to consider in each case (from Winkler et al., 2016).

The hope is that these accelerations will facilitate the use of permutation tests and, if used in combination with hardware and/or software improvements, can further expedite computation leaving little reason not to use these tests.


Contributed to this post: Tom Nichols, Ged Ridgway.

Three HCP utilities

If you are working with data from the Human Connectome Project (HCP), perhaps these three small Octave/MATLAB utilities may be of some use:

  • hcp2blocks.m: Takes the restricted file with information about kinship and zygosity and produces a multi-level exchangeability blocks file that can be used with PALM for permutation inference. It is fully described here.
  • hcp2solar.m: Takes restricted and unrestricted files to produce a pedigree file that can be used with SOLAR for heritability and genome-wide association analyses.
  • picktraits.m: Takes either restricted or unrestricted files, a list of traits and a list of subject IDs to produce tables with selected traits for the selected subjects. These can be used to, e.g., produce design matrices for subsequent analysis.

These functions need to parse relatively large CSV files, which is somewhat inefficient in MATLAB and Octave. Still, since these commands usually have to be executed only once for a particular analysis, a 1-2 minute wait seems acceptable.

If downloaded directly from the above links, remember to also download the prerequisites: strcsvread.m and strcsvwrite.m. Alternatively, clone the full repository from GitHub. Other tools may be added in the future.

A fourth utility

For the HCP-S1200 release (March/2017), zygosity information is provided in the fields ZygositySR (self-reported zygosity) and ZygosityGT (zygosity determined by genetic methods for select subjects). If needed, these two fields can be merged into a new field named simply Zygosity. To do so, use a fourth utility, command mergezyg.

Downsampling (decimating) a brain surface

Downsampled average cortical surfaces at different iterations (n), with the respective number of vertices (V), edges (E) and faces (F).

In the previous post, a method to display brain surfaces interactively in PDF documents was presented. While the method is already much more efficient than when it first appeared some years ago, the display of highly resolved meshes can be computationally intensive, and may make even the most enthusiastic readers give up on opening the file.

If the data being shown has low spatial frequency, an alternative that generally preserves the amount of information is to decimate the mesh, downsampling it to a lower resolution. Although in theory this can be done in the native (subject-level) geometry through retessellation (i.e., interpolation of coordinates), the interest in downsampling is usually at the group level, in which case the subjects have all been interpolated to a common grid, which in general is a geodesic sphere produced by recursively subdividing an icosahedron (see this earlier post). If, at each iteration, the vertices and faces are added in a certain order (such as in FreeSurfer‘s fsaverage or in the one generated with the platonic command), downsampling is relatively straightforward, whatever the type of data.

Vertexwise data

For vertexwise data, downsampling can be based on the fact that vertices are added (appended) in a certain order as the icosahedron is constructed:

  • Vertices 1-12 correspond to n = 0, i.e., no subdivision, or ico0.
  • Vertices 13-42 correspond to the vertices that, once added to the ico0, make it ico1 (first iteration of subdivision, n = 1).
  • Vertices 43-162 correspond to the vertices that, once added to ico1, make it ico2 (second iteration, n = 2).
  • Vertices 163-642, likewise, make ico3.
  • Vertices 643-2562 make ico4.
  • Vertices 2563-10242 make ico5.
  • Vertices 10243-40962 make ico6, etc.

Thus, if the data is vertexwise (in FreeSurfer parlance often called “curvature” data, even when holding cortical thickness or other per-vertex quantities), the above information is sufficient to downsample it: to reduce down to an ico3, for instance, all one needs to do is pick the vertices 1 through 642, ignoring vertex 643 onwards.
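
A minimal Octave/MATLAB sketch is below (the variable name dpx, holding a V-by-1 vector of vertexwise values in fsaverage/platonic ordering, is an assumption):

nvtx = 10*4.^(0:7) + 2;        % vertices per level, ico0..ico7: 12, 42, 162, ..., 163842
n = 3;                         % target level (ico3)
dpx_down = dpx(1:nvtx(n+1));   % keep only the first 642 values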

Facewise data

Data stored at each face (triangle) generally correspond to areal quantities, which require mass conservation. For both fsaverage and platonic icosahedra, the faces are added in a particular order such that, at each iteration of the subdivision, a given face is replaced in situ by four new faces: one can then simply collapse (via sum, or average, as appropriate) the data of every four such faces into a new one.
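
A corresponding Octave/MATLAB sketch for facewise data is below; dpx is an F-by-1 vector of facewise values, and the sketch assumes that the four child faces of each parent face occupy consecutive positions in the face list, as described above:

dpx_down = sum(reshape(dpx, 4, []), 1)';   % sum of every 4 faces (mass conservation)
% use mean(...) instead of sum(...) for quantities that should be averaged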

Surface geometry

If the objective is to decimate the surface geometry, i.e., the mesh itself, as opposed to quantities assigned to vertices or faces, one can use similar steps:

  1. Select the vertices from the first up to the last vertex of the icosahedron in the level needed.
  2. Iteratively downsample the face indices by selecting first those that are formed by three vertices that were appended for the current iteration, then for those that have two vertices appended in the current iteration, then connecting the remaining three vertices to form a new, larger face.


Using downsampled data is useful not only for displaying meshes in PDF documents; some analyses may not require a resolution as high as that of the default mesh (ico7), particularly for processes that vary smoothly across the cortex, such as cortical thickness. Using a lower resolution mesh can be just as informative, while operating at a fraction of the computational cost.

A script

A script that does the tasks above using MATLAB/Octave is here: icodown.m. It is also available as part of the areal package described here, which also includes its dependencies. Input and output formats are described here.

Interactive 3D brains in PDF documents

A screenshot from Acrobat Reader. The example file is here.

Would it not be helpful to be able to navigate through tri-dimensional, surface-based representations of the brain when reading a paper, without having to download separate datasets or use external software? Since 2004, with the release of version 1.6 of the Portable Document Format (PDF), this has been possible. However, the means to generate the files were not easily available until about 2008, when Intel released a set of libraries and tools. This still did not do much to improve popularity, as in-document rendering of complex 3D models requires a lot of memory and processing, making its use difficult in practice at the time. The fact that Acrobat Reader was quite bloated did not help much either.

Now, almost eight years later, things have become easier for users who want to open these documents. Newer versions of Acrobat are much lighter, and the capabilities of ordinary computers have increased. Yet, it seems the interest in this kind of visualisation has faded. The objective of this post is to show that it is remarkably simple to include interactive 3D objects in PDF documents, which can be used in any document published online, including theses, presentations, and papers: journals such as PNAS and the Journal of Neuroscience are at the forefront in accepting interactive manuscripts.


  • U3D Tools: Make sure you have the IDTFConverter utility, from the U3D tools, available on SourceForge as part of the MathGL library. A direct link to version 1.4.4 is here; an alternative link, to a repackaged version of the same, is here. Compiling instructions for Linux and Mac are in the “readme” file. There are some dependencies that must be satisfied, and these are described in the documentation. If you decide not to install the U3D tools, but only compile them, make sure the directory of the IDTFConverter executable is both in the $PATH and in the $LD_LIBRARY_PATH (the latter so that its libraries can be found). This can be done with:
cd /path/to/the/directory/of/IDTFConverter
export PATH=${PATH}:$(pwd)
export LD_LIBRARY_PATH=${LD_LIBRARY_PATH}:$(pwd)
  • The ply2idtf function: Make sure you have the latest version of the areal package, which contains the MATLAB/Octave function ply2idtf.m used below.
  • Certain LaTeX packages: The packages movie15 or media9, which allow embedding the 3D object into the PDF using LaTeX. Either will work. Below it is assumed that the older movie15 package is used.

Step 1: Generate the PLY maps

Once you have a map of vertexwise cortical data that needs to be shown, follow the instructions from this earlier blog post that explains how to generate Stanford PLY files to display colour-coded vertexwise data. These PLY files will be used below.

Step 2: Convert the PLY to IDTF files

IDTF stands for Intermediate Data Text Format. As the name implies, it is an intermediate, text-based file, used as a step before the creation of the U3D files, which are the ones embedded into the PDF. Use the function ply2idtf for this:

ply2idtf(...
   {'lh.pial.thickness.avg.ply','LEFT', eye(4);...
    'rh.pial.thickness.avg.ply','RIGHT',eye(4)},...   % file names here are examples
   'thickness.idtf');

The first argument is a cell array with 3 columns, and as many rows as there are PLY files being added to the IDTF file. The first column contains the file name, the second the label (or node name) for that file, and the third an affine matrix that maps the coordinates from the PLY file to the world coordinate system of the (to be created) U3D. The second (and last) argument to the command is the name of the output file.

Step 3: Convert the IDTF to U3D files

From a terminal window (not MATLAB or Octave), run:

IDTFConverter -input thickness.idtf -output thickness.u3d

Step 4: Configure default views

Here we use the older movie15 LaTeX package; the same can be accomplished with the newer media9 package. Various viewing options are configurable, all of which are described in the documentation. These options can be saved in a text file with extension .vws, and later supplied in the LaTeX document. An example is below.

VIEW=Both Hemispheres
  COO=0 -14 0,
  C2C=-0.75 0.20 0.65
  BGCOLOR=.5 .5 .5
VIEW=Left Hemisphere
  COO=0 -14 0,
  C2C=-1 0 0
  BGCOLOR=.5 .5 .5
VIEW=Right Hemisphere
  COO=0 -14 0,
  C2C=1 0 0
  BGCOLOR=.5 .5 .5

Step 5: Add the U3D to the LaTeX source

Interactive 3D viewing is unfortunately not supported by most PDF readers. However, it is supported by the official Adobe Acrobat Reader since version 7.0, including the recent version DC. Thus, it is important to let the users/readers of the document know that they must open the file using a recent version of Acrobat. This can be done in the document itself, using a message placed with the option text of the \includemovie command of the movie15 package. The most relevant lines of a minimalistic LaTeX source are shown below (the complete example can be downloaded here).


% Relevant package:

% pdfLaTeX and color links setup:
\definecolor{colorlink}{rgb}{0, 0, .6}  % dark blue

\title{Interactive 3D brains in PDF documents}

text=\fbox{\parbox[c][9cm][c]{9cm}{\centering {\footnotesize (Use \href{}{Adobe Acrobat Reader 7.0+} \\to view the interactive content.)}}},
\caption{An average 3D brain, showing colour-coded average thickness (for simplicity, colour scale not shown). Click to rotate. Right-click for a menu with various options. Details at \href{}{}.}


Step 6: Generate the PDF

For LaTeX, use pdfLaTeX as usual:

pdflatex document.tex

What you get

After generating the PDF, the result of this example is shown here (a screenshot is at the top). It is possible to rotate in any direction, zoom, pan, change views to predefined modes, and alternate between orthogonal and perspective projections. It’s also possible to change rendering modes (including transparency), and experiment with various lighting options.

In Acrobat Reader, by right-clicking, a menu with various useful options is presented. A toolbar (as shown in the top image) can also be enabled through the menu.

The same strategy also works with the Beamer class, such that interactive slides can be created and used in talks, and with XeTeX, allowing a richer variety of text fonts.

See also

  • Wikipedia has an article on U3D files.
  • Alexandre Gramfort has developed a set of tools that covers much the same ground as above. It’s freely available on the MATLAB File Exchange.
  • To display molecules interactively (including proteins), the steps are similar. Instructions for Jmol and Pymol are available.
  • Commercial products offering features that build on these resources are also available.

Extreme value notes

Extreme values are useful to quantify the risk of catastrophic floods, and much more.

This is a brief set of notes with an introduction to extreme value theory. For reviews, see Leadbetter et al (1983) and Davison and Huser (2015). Also of some (historical) interest is the classical book by Gumbel (1958). Let X_1, \dots, X_n be a sequence of independent and identically distributed variables with cumulative distribution function (cdf) F(x), and let M_n =\max(X_1,\dots,X_n) denote the maximum.

If F(x) is known, the distribution of the maximum is:

\begin{array}{lll} P(M_n \leqslant x) &=&P(X_1 \leqslant x, \dots, X_n \leqslant x) \\ &=& P(X_1 \leqslant x) \cdots P(X_n \leqslant x) = F^n(x). \end{array}

The distribution function F(x) might, however, not be known. If data are available, it can be estimated, although small errors in the estimation of F(x) can lead to large errors concerning the extreme values. Instead, an asymptotic result is given by the extremal types theorem, also known as the Fisher-Tippett-Gnedenko theorem, First Theorem of Extreme Values, or extreme value trinity theorem (called by the last name in Pickands III, 1975).

But before that, let’s make a small variable change. Working with M_n directly is problematic because as n \rightarrow \infty, F^n(x) \rightarrow 0. Redefining the problem as a function of M_n^* = \frac{M_n-b_n}{a_n} renders treatment simpler. The theorem can be stated then as: If there exist sequences of constants a_n \in \mathbb{R}_{+} and b_n \in \mathbb{R} such that, as n \rightarrow \infty:

P\left(M_{n}^{*} \leqslant x \right) \rightarrow G(x)

then G(x) belongs to one of three “domains of attraction”:

  • Type I (Gumbel law): \Lambda(x) = e^{-e^{-\frac{x-b}{a}}}, for x \in \mathbb{R} indicating that the distribution of M_n has an exponential tail.
  • Type II (Fréchet law): \Phi(x) = \begin{cases} 0 & x\leqslant b \\ e^{-\left(\frac{x-b}{a}\right)^{-\alpha}} & x\; \textgreater\; b \end{cases} indicating that the distribution of M_n has a heavy tail (including polynomial decay).
  • Type III (Weibull law): \Psi(x) = \begin{cases} e^{-\left( -\frac{x-b}{a}\right)^\alpha} & x\;\textless\; b \\ 1 & x\geqslant b \end{cases} indicating that the distribution of M_n has a light tail with finite upper bound.

Note that in the above formulation, the Weibull is reversed so that the distribution has an upper bound, as opposed to a lower one as in the usual Weibull distribution. Also, the parameterisation is slightly different from the one usually adopted for the Weibull distribution.

These three families have parameters a\; \textgreater\; 0, b and, for families II and III, \alpha\; \textgreater\; 0. To which of the three a particular F(x) is attracted is determined by the behaviour of the tail of the distribution for large x. Thus, we can infer about the asymptotic properties of the maximum while having only limited knowledge of the properties of F(x).

These three limiting cases are collectively termed extreme value distributions. Types II and III were identified by Fréchet (1927), whereas type I was found by Fisher and Tippett (1928). In his work, Fréchet used M_n^* = \frac{M_n}{a_n}, whereas Fisher and Tippett used M_n^* = \frac{M_n-b_n}{a_n}. Von Mises (1936) identified various sufficient conditions for convergence to each of these forms, and Gnedenko (1943) established a complete generalisation.

Generalised extreme value distribution

As shown above, the rescaled maxima converge in distribution to one of three families. However, all are cases of a single family that can be represented as:

G(x) = e^{-\left(1-\xi\left(\frac{x-\mu}{\sigma}\right)\right)^{\frac{1}{\xi}}}

defined on the set \left\{x:1-\xi\frac{x-\mu}{\sigma}\;\textgreater\;0\right\}, with parameters -\infty \;\textless \;\mu\;\textless\; \infty (location), \sigma\;\textgreater\;0 (scale), and -\infty\;\textless\;\xi\;\textless\;\infty (shape). This is the generalised extreme value (GEV) family of distributions. If \xi \rightarrow 0, it converges to Gumbel (type I), whereas if \xi < 0 it corresponds to Fréchet (type II), and if \xi\;\textgreater\;0 it corresponds to Weibull (type III). Inference on \xi allows choice of a particular family for a given problem.

Generalised Pareto distribution

For u\rightarrow\infty, the limiting distribution of a random variable Y=X-u, conditional on X \;\textgreater\; u, is:

H(y) = 1-\left(1-\frac{\xi y}{\tilde{\sigma}}\right)^{\frac{1}{\xi}}

defined for y \;\textgreater\; 0 and \left(1-\frac{\xi y}{\tilde{\sigma}}\right) \;\textgreater\; 0. The two parameters are the \xi (shape) and \tilde{\sigma} (scale). The shape corresponds to the same parameter \xi of the GEV, whereas the scale relates to the scale of the former as \tilde{\sigma}=\sigma-\xi(u-\mu).

The above is sometimes called the Pickands-Balkema-de Haan theorem or the Second Theorem of Extreme Values, and it defines another family of distributions, known as the generalised Pareto distribution (GPD). It generalises an exponential distribution with parameter \frac{1}{\tilde{\sigma}} as \xi \rightarrow 0, a uniform distribution in the interval \left[0, \tilde{\sigma}\right] when \xi = 1, and a Pareto distribution when \xi \;\textgreater\; 0.

Parameter estimation

By restricting attention to the most common case of -\frac{1}{2}<\xi<\frac{1}{2}, which corresponds to distributions that are approximately exponential, parameters for the GPD can be estimated using at least three methods: maximum likelihood, moments, and probability-weighted moments. These are described in Hosking and Wallis (1987). For \xi outside this interval, methods have been discussed elsewhere (Oliveira, 1984). The method of moments is probably the simplest and fastest and, according to Hosking and Wallis (1987) and Knijnenburg et al (2009), has good performance for the typical cases of -\frac{1}{2}<\xi<\frac{1}{2}.

For a set of extreme observations, let \bar{x} and s^2 be respectively the sample mean and variance. The moment estimators of \tilde{\sigma} and \xi are \hat{\tilde{\sigma}} = \frac{\bar{x}}{2}\left(\frac{\bar{x}^2}{s^2}+1\right) and \hat{\xi}=\frac{1}{2}\left(\frac{\bar{x}^2}{s^2}-1\right).
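
A small Octave/MATLAB sketch of these estimators, and of the resulting tail approximation for a p-value, is below; the variable names x (the permutation distribution), u (the threshold), and t (the observed statistic) are assumptions, and this is not PALM's implementation:

y    = x(x > u) - u;                        % exceedances over the threshold u
xbar = mean(y);
s2   = var(y);
sigt = 0.5*xbar*(xbar^2/s2 + 1);            % scale (sigma tilde), method of moments
xi   = 0.5*(xbar^2/s2 - 1);                 % shape
Pu   = mean(x > u);                         % probability of exceeding u
pval = Pu * (1 - xi*(t - u)/sigt)^(1/xi);   % approximated tail p-value for t > u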

The accuracy of these estimates can be tested with, e.g., the Anderson-Darling goodness-of-fit test (Anderson and Darling, 1952; Choulakian and Stephens, 2001), based on the fact that, if the modelling is accurate, the p-values for the distribution should be uniformly distributed.


Statistics of extremes are used in PALM as a way to accelerate permutation tests. More details to follow soon.


The figure at the top (flood) is in the public domain.

Non-Parametric Combination (NPC) for brain imaging

Have you ever had an analysis in which there was a large set of contrasts, all of interest, and you were worried about multiple testing? A potential effect might be missed by a simple Bonferroni correction, but you did not know what else to do? Or did you have a set of different studies and you wished to obtain a meta-analytic style of result, indicating whether there would be evidence across all of them, without requiring the studies to be all consistently significant?

The Non-Parametric Combination (NPC) solves these issues. It is a way of performing joint inference on multiple data collected on the same experimental units (e.g., the same subjects), all with minimal assumptions. The method was proposed originally by Pesarin (1990, 1992) [see references below], independently by Blair and Karniski (1993), and described extensively by Pesarin and Salmaso (2010). In this blog entry, the NPC is presented in brief, with emphasis on the modifications we introduce to render it feasible for brain imaging. The complete details are in our paper that has just been published in the journal Human Brain Mapping.

NPC in a nutshell

The NPC consists of, in a first phase, testing each hypothesis separately using permutations that are performed synchronously across datasets; these tests are termed partial tests. The resulting statistics for each and every permutation are recorded, allowing an estimate of the complete empirical null distribution to be constructed for each one. In a second phase, the empirical p-values for each statistic are combined, for each permutation, into a joint statistic. As such a combined joint statistic is produced from the previous permutations, an estimate of its empirical distribution function is immediately known, and so is the p-value of the joint test. A flowchart of the original algorithm is shown below; click to see it side-by-side with the modified one (described below).
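
To make the two phases concrete, a schematic Octave/MATLAB sketch is below, using the Fisher combining function (described in the next section) for two hypothetical variables Y1 and Y2 measured on the same subjects and a single regressor X; this is only an illustration, not PALM's implementation:

nP  = 1000;                        % number of permutations (including the unpermuted case)
N   = numel(X);
cX  = X  - mean(X);
cY1 = Y1 - mean(Y1);               % partial test 1: association between X and Y1
cY2 = Y2 - mean(Y2);               % partial test 2: association between X and Y2
T1  = zeros(nP,1);
T2  = zeros(nP,1);
for p = 1:nP
    if p == 1
        idx = (1:N)';              % first permutation is no permutation
    else
        idx = randperm(N)';        % the same shuffling is applied to both partial tests
    end
    T1(p) = cX(idx)'*cY1;          % covariance-like partial test statistics
    T2(p) = cX(idx)'*cY2;
end
P1 = zeros(nP,1);                  % empirical p-values, for every permutation
P2 = zeros(nP,1);
for p = 1:nP
    P1(p) = mean(T1 >= T1(p));
    P2(p) = mean(T2 >= T2(p));
end
Tnpc = -2*(log(P1) + log(P2));     % Fisher combining function
pnpc = mean(Tnpc >= Tnpc(1));      % p-value of the joint (NPC) test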

A host of combining functions

The null hypothesis of the NPC is that the null hypotheses of all partial tests are true, and the alternative hypothesis is that any one of them is false; this is the same null as that of a union-intersection test (UIT; Roy, 1953). The rejection region depends on how the combined statistic is produced. Various combining functions, which produce such combined statistics, can be considered, and some of the most well known are listed in the table below:

Method Statistic p-value
Tippett \min \left(p_{k}\right) 1-\left(1-T\right)^{K}
Fisher -2 \sum_{k=1}^{K} \ln\left(p_{k}\right) 1-\chi^{2}\left(T;\;\nu=2K\right)
Stouffer \frac{1}{\sqrt{K}} \sum_{k=1}^{K} \Phi^{-1}\left(1-p_{k}\right) 1-\Phi\left(T;\;\mu=0,\;\sigma^2=1\right)
Mudholkar–George \frac{1}{\pi}\sqrt{\frac{3(5K+4)}{K(5K+2)}}\sum_{k=1}^{K} \ln\left(\frac{1-p_{k}}{p_{k}}\right) 1-t_{\text{cdf}}(T;\;\nu=5K+4)

In the table, K is the number of partial tests, and the remaining variables follow the usual notation (see Table 1 in the paper for the complete description). Many of these combining functions were proposed over the years for applications such as meta-analyses; many of them assume independence between the tests being combined, and will give incorrect p-values if that assumption is not met. In NPC, however, dependence is not a problem, even if these same functions are used: the synchronised permutations ensure that any dependence, if present, is taken into account, and this is done implicitly, with no need for explicit modelling.

The different combining functions lead to different rejection regions for the null hypothesis. For the four combining functions in the table above, the respective rejection regions are in the figure below.

The combining functions can be modified to allow combination of tests so as to favour hypotheses with concordant directions, or be modified for bi-directional tests. Click on the figure above for examples of these cases (again, see the paper for the complete details).

Two problems, one solution

The multiple testing problem is well known in brain imaging: as an image comprises thousands of voxels/vertices/faces, correction is necessary. Bonferroni is in general too conservative, and various other approaches have been proposed, such as the random field theory. Permutation tests provide control over the familywise error rate (FWER) for the multiple tests across space, requiring only the assumption of exchangeability. This is all well known; see Nichols and Hayasaka (2003) and Winkler et al. (2014) for details.

However, another type of multiple testing is also common: analyses that test multiple hypotheses using the same model, that make multiple pairwise group comparisons, that use multiple and distinct models, that involve multiple imaging modalities, that mix imaging and non-imaging data, that consider multiple processing pipelines, or even multiple multivariate analyses. All these common cases also need correction for multiple testing. We call this multiple testing problem MTP-II, to distinguish it from the well known multiple testing problem across space described above, which we term MTP-I.

One of the many combining functions possible with NPC, the one proposed by Tippett (1931), has a further property that makes it remarkably interesting. The Tippett function uses the smallest p-value across partial tests as its test statistic. Alternatively, if all statistics are comparable, it can be formulated in terms of the maximum statistic. It turns out that the distribution of the maximum statistic across a set of tests is also the distribution that can be used in a closed testing procedure (Marcus et al., 1976) to correct for the familywise error rate (FWER) using resampling methods, such as permutation. In the context of joint inference, FWER-correction can also be seen as a UIT. Thus, NPC offers a link between the combination of multiple tests and the correction for multiple tests, in both cases regardless of any dependence between such tests.

This means that the MTP-II, for which correction in the parametric realm is either non-existent or fiendishly difficult, can be accommodated easily. It requires no explicit modelling of the dependence between the tests, and the resulting error rates are controlled exactly at the test level, adding rigour to analyses that would otherwise yield an excess of false positives without correction, or be overly conservative if a naïve correction such as Bonferroni were attempted.
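
The toy sketch below illustrates the link just described: the permutation distribution of the maximum statistic across tests yields the FWER-adjusted p-values and, when all statistics are comparable, also the Tippett joint p-value (all numbers are made up for illustration):

import numpy as np

rng = np.random.default_rng(1)
n_perm, K = 1000, 4
# T[j, k]: statistic of test k at permutation j; row 0 holds the observed (unpermuted) statistics
T = rng.normal(size=(n_perm, K))
T[0] = [3.1, 1.2, 0.4, 2.0]                       # toy observed statistics

max_dist = T.max(axis=1)                          # permutation distribution of the maximum across tests
p_fwer = [(max_dist >= T[0, k]).mean() for k in range(K)]   # FWER-adjusted p-values (single-step max-statistic)
p_tippett = (max_dist >= max_dist[0]).mean()                # Tippett joint p-value; equals min(p_fwer)
print(p_fwer, p_tippett)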

Modifying for imaging applications

As originally proposed, NPC cannot, in practice, be used in brain imaging: since the statistics of all partial tests for all permutations need to be recorded, an enormous amount of data storage would be necessary. Even if storage space were not a problem, the discreteness of the p-values for the partial tests is problematic when correcting for multiple testing, because with thousands of tests in an image, ties are likely to occur, further causing ties among the combined statistics. If too many tests across an image share the same most extreme statistic, correction for the MTP-I, while still valid, becomes less powerful (Westfall and Young, 1993; Pantazis et al., 2005). The most obvious workaround, running an ever larger number of permutations to break the ties, may not be possible for small sample sizes, or, when possible, requires correspondingly larger data storage.

The solution is loosely based on the direct combination of the test statistics: the statistics of the partial tests are converted, using their asymptotic distributions, to values that behave as p-values. We call these “u-values”, in order to emphasise that they are not meant to be read or interpreted as p-values, but rather as transitional values that allow combinations that otherwise would not be possible.
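
A minimal sketch of this conversion (Python/SciPy, with made-up statistics and degrees of freedom) could look like:

import numpy as np
from scipy import stats

t_partial = np.array([2.3, 1.1, 3.0])   # toy partial-test statistics, one per modality
df = 18                                 # toy degrees of freedom of each partial test
u = stats.t.sf(t_partial, df)           # u-values: survival function of the asymptotic distribution
T_fisher = -2 * np.sum(np.log(u))       # the u-values can now be fed into a combining function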

For spatial statistics, the asymptotic distribution of the combined statistic is used, this time to produce a z-score, which can be subjected to the computation of cluster extent, cluster mass, and/or threshold-free cluster enhancement (TFCE; Smith and Nichols, 2009). A flow chart of the modified algorithm is shown below. Click to see it side-by-side with the original.
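
In the same spirit, a made-up example of turning a Fisher-combined statistic into an equivalent z-score, on which spatial statistics such as TFCE could then be computed (a sketch only, not the PALM implementation):

from scipy import stats

K = 3
T_fisher = 16.8                                          # toy combined statistic at one voxel
z = stats.norm.isf(stats.chi2.sf(T_fisher, df=2 * K))    # z-score with the same asymptotic p-value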

More power, fewer assumptions

One of the most remarkable features of NPC is that the synchronised permutations implicitly account for the dependence structure among the partial tests. This means that even combining methods originally derived under the assumption of independence can be used when such independence is untenable. As the p-values are assessed via permutations, distributional restrictions are likewise not necessary, liberating NPC from most of the assumptions that thwart parametric methods in general. This renders NPC a good alternative to classical multivariate tests, such as MANOVA, MANCOVA, and Hotelling’s T2 tests: each of the response variables can be seen as a univariate partial test in the context of the combination, but without the assumptions embodied in these older multivariate tests.

As if all the above were not already sufficient, NPC is also more powerful than such classical multivariate tests. This is due to its finite sample consistency property: even with a fixed sample size, as the number of modalities being combined increases, the power of the test also increases. The power of classical multivariate tests, however, increases only up to a certain point, then begins to decrease, eventually reaching zero when the number of variables being combined matches the sample size.

The figure below summarises the analysis of a subset of the subjects of a published FMRI study (Brooks et al., 2005), in which painful stimulation was applied to the face, hand, and foot of 12 subjects. Using permutation tests for each type of stimulation separately, no results could be identified. A simple multivariate test, Hotelling’s T2, even when assessed using permutations, did not reveal any effect of stimulation either. The NPC results, however, suggest involvement of large portions of the anterior insula and secondary somatosensory cortex. The Fisher, Stouffer and Mudholkar–George combining functions were particularly successful in recovering a small area of activity in the midbrain and periaqueductal gray, which would be expected from previous studies on pain, but could not be located in the original, non-combined data.

A detailed assessment of power, using a variable number of modalities, and a variable number of modalities containing signal, is presented in the paper.

Combinations or conjunctions?

Combination, as done via NPC, differs from conjunction (Nichols et al., 2005) in the following way: in the combination, one seeks aggregate significance across partial tests, without requiring that any individual test be significant on its own; in the conjunction, it is necessary that all of them, with no exception, are significant. As indicated above, the NPC forms a union-intersection test (UIT; Roy, 1953), whereas the conjunction forms an intersection-union test (IUT; Berger, 1982). The former can be declared significant if any (or an aggregate) of the partial tests is significant, whereas the latter is significant only if all the partial tests are.
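
A toy numerical contrast between the two logics (the threshold for the Tippett-style UIT below is the one implied by its p-value formula; all numbers are illustrative):

p = [0.03, 0.20, 0.04]    # illustrative partial-test p-values
alpha = 0.05
uit_tippett = min(p) <= 1 - (1 - alpha) ** (1 / len(p))   # UIT: rejects on the strongest evidence
iut_conjunction = max(p) <= alpha                         # IUT (conjunction): requires all tests significant
print(uit_tippett, iut_conjunction)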


The NPC, with the modifications for brain imaging, is available in the tool PALM — Permutation Analysis of Linear Models. It runs in either Matlab or Octave, and is free (GPL).


Contributed to this post: Tom Nichols.