Using ROOT for Application Development at Jefferson Lab

Updated Aug 13, 2020

Versions available

If you log in to an ifarm computer ("ssh ifarm") you'll land on a CentOS 7.* machine; you can check which version with "cat /etc/redhat-release".  From there, look at /apps/root to see the versions available.  Usually the one labelled "PRO" (a link) is the one you want, but there may be newer versions.
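For example (the exact names under /apps/root change over time, so this is just illustrative):

cat /etc/redhat-release        # which CentOS/RHEL release this node runs
ls /apps/root                  # ROOT versions installed under /apps
ls -l /apps/root/PRO           # PRO is a link to the recommended version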

Note: versions of ROOT are also maintained on /site/12gev_phys, per the CLAS software model.  The most recent as of Aug 13, 2020 is set up as follows:

> source /site/12gev_phys/softenv.csh 2.4

and this sets up the ROOT environment; for example,

> echo $ROOTSYS

should point at the ROOT installation under /site/12gev_phys.

Generally speaking, the /apps/root/VERSION/README_BOB file describes how ROOT was compiled for that VERSION (e.g. /apps/root/6.18.00/README_BOB), i.e. which compiler was used and whether the build used the "configure" script or "cmake".  For the /site/12gev_phys distributions, lately I've put the information about how ROOT was built in $ROOTSYS/README_HOW_BUILT.  All the more recent versions, like 6.20.04, are compiled using cmake.


You should compile your applications with the same compiler and the same kind of platform (e.g. RedHat 7.7) that was used to build ROOT.   The recommendation is to log in to the same kind of ifarm node on which your application will run on the batch farm, and compile it there.  Make sure your application runs interactively before running it in batch.  For example, on ifarm1802 the default gcc is currently version 9.2.0, but this tends to evolve over time; type "gcc -v" to see what the default is.

Detailed notes on how ROOT was compiled are given in the README_BOB file in the ROOT directory, e.g. /apps/root/6.18.00/README_BOB, and in $ROOTSYS/README_HOW_BUILT on /site/12gev_phys.  I sometimes just used the default gcc compiler that was available at the time, but in some cases I had to use a more recent compiler.  You must ensure that you set up and use the same compiler.

One nice thing about the CLAS software model is that the name of the compiler that was used to compile ROOT appears in the name of the path.  This way it is unambiguous.


The CUE command "use root/6.18.00" will set the appropriate variables in your environment, or you may use the command "source /apps/root/6.18.00/setroot_CUE".  You will notice there are two versions of setroot_CUE, one for csh and one for bash; setroot_CUE itself is a link to the csh version.  You could, of course, copy this script and modify it.  In bash you use "export ROOTSYS=..." instead of csh's "setenv ROOTSYS ...".   Last time I checked, the batch farms need the bash "export ROOTSYS=" scheme.
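For illustration, a bash version of the same setup (for a batch farm script, say) might look like this; the version and paths are just examples and should match whatever ROOT you actually use:

export ROOTSYS=/apps/root/6.18.00/root
export PATH=${ROOTSYS}/bin:${PATH}
# prepend ROOT's library directory, creating LD_LIBRARY_PATH if it is unset
export LD_LIBRARY_PATH=${ROOTSYS}/lib${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}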

A typical environment might look like this:

ROOTSYS          /apps/root/6.18.00/root
LD_LIBRARY_PATH  $ROOTSYS/lib (before the other entries)
PATH             $ROOTSYS/bin (before the other entries)

Make sure you have 3 environment variables set: $ROOTSYS, $PATH, and $LD_LIBRARY_PATH!


Presently I have the following list of packages enabled; the build is controlled by a script that uses cmake.

   set enablePackages = ("roofit" "minuit2" "pyroot" "gdml" "unuran" "qt" "qtgsi" "mysql" "reflex" "cintex" "fftw3" "soversion" "mathmore" )

In some earlier versions we had "python" instead of "pyroot" in this list.

If you need any others, please let me know and I'll fix it ASAP.

I can also add missing packages, and as part of the certification process I check whether packages were successfully built.  Sometimes a package fails to build because some system package is missing, e.g. GSL.

The setroot_CUE script for /apps/root

The following script is an example of how to set up ROOT.   I think that the /apps/root scheme may be replaced at some point.  For the /site/12gev_phys distributions, see $ROOTSYS/README_HOW_BUILT for up-to-date details.  I think that the softenv.csh script will set this up for you, mostly; however, python may need to be set up by hand.

# Setup ROOT -- do this:
# For the sh family: use "export" instead of "setenv"
# For macOS you must also set DYLD_LIBRARY_PATH

echo "ROOT 6.20.04 was compiled with gcc 4.8.5"
echo "present default gcc is given by gcc -v: "
gcc -v

setenv PYTHON /apps/python/3.4.3

# For the csh shell
setenv ROOTSYS /u/apps/root/6.18.04/root
setenv PATH ${PYTHON}/bin:${ROOTSYS}/bin:${PATH}

# Prepend the python and ROOT library directories, creating
# LD_LIBRARY_PATH if it does not exist yet
if (!($?LD_LIBRARY_PATH)) then
  setenv LD_LIBRARY_PATH ${PYTHON}/lib:${ROOTSYS}/lib
else
  setenv LD_LIBRARY_PATH ${PYTHON}/lib:${ROOTSYS}/lib:${LD_LIBRARY_PATH}
endif

# Same idea for PYTHONPATH, so that "import ROOT" works
if (!($?PYTHONPATH)) then
  setenv PYTHONPATH ${ROOTSYS}/bin:${ROOTSYS}/lib
else
  setenv PYTHONPATH ${ROOTSYS}/bin:${ROOTSYS}/lib:${PYTHONPATH}
endif


Root on /site -- the CLAS software model

As per the CLAS software model, ROOT is maintained in /site/12gev_phys, which we try to update every 6 months or so.  Generally, the identical version is on /apps/root/*.  To use the version on /site, please follow the setup procedure below.  As always, the three things to control are $ROOTSYS, $LD_LIBRARY_PATH, and $PATH.  Also, you should check that the command "root-config --version" returns the expected result.  See the example below.

Having logged in to "ifarm", suppose you want to use production version 2.4:

ifarm1102> source /site/12gev_phys/softenv.csh 2.4

A subtlety: if you already have $ROOTSYS and ROOT set up in your environment, the line above will not "override" your definition.  Avoid hard-coded definitions in your login script, or put the line above in your login script if that is what you intend to do.

The JLAB_VERSION variable controls a 'set' of software versions and their associated dependencies.  The 'production' version points to the recommended defaults, but the variable may be set to '1.0' or to newer versions as they are created.
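A quick sanity check after sourcing softenv.csh (just a sketch; the paths and version string you see will depend on which softenv version you selected):

echo $ROOTSYS          # should point into /site/12gev_phys
which root             # should resolve to $ROOTSYS/bin/root
root-config --version  # should match the advertised ROOT version
root-config --prefix   # should agree with $ROOTSYS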

Root on your PC using CUE level 2 install

Assuming you are a "power user" and need your own compilation of ROOT, I probably don't even need to tell you how, but ... here are some notes on installing ROOT on RedHat 7, CUE level 2, using cmake.  The other way to build, using "configure", is considered obsolete, but you can ask me if you want to do it that way.  I have also installed ROOT on Fedora at home.  This is even more of an adventure, since a default Fedora installation is missing lots of things (compiler, make, etc.).

First, I note that you should add /apps/root/cmake to your path; the one in /usr/bin is old.  E.g. setenv PATH /apps/bin:${PATH}.

As usual you need the environment variables:

setenv ROOTSYS /home/rom/root-6.10.08/root
setenv PATH ${ROOTSYS}/bin:${PATH}

$ROOTSYS is where ROOT will get installed, i.e. /bin and /lib will appear there.

Now, go to the ./build directory under the directory that appears after you untar the ROOT tar file (create it if it is not there) and run "cmake":

cmake -DCMAKE_INSTALL_PREFIX=$ROOTSYS $toDisable $toEnable ../

where $toDisable and $toEnable are packages you may want to disable or enable.  For me, I disable nothing and I enable "roofit" "minuit2" "python" "gdml" "unuran" "qt" "qtgsi" "mysql" "reflex" "cintex" "fftw3" "soversion" "mathmore".
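As a rough sketch, $toEnable can simply be a string of the standard "-D<feature>=ON" cmake options.  The option names below (roofit, minuit2, gdml, unuran, mathmore, fftw3, mysql) are ordinary ROOT build options, but the exact set varies with the ROOT version, so treat this as illustrative rather than the actual build script:

set toEnable = "-Droofit=ON -Dminuit2=ON -Dgdml=ON -Dunuran=ON -Dmathmore=ON -Dfftw3=ON -Dmysql=ON"
cmake -DCMAKE_INSTALL_PREFIX=$ROOTSYS $toEnable ../
make -j4
make install

You can see the full list of options and their current settings with "cmake -LH" in the build directory.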

After "cmake" finishes configuring, type "make" and then "make install", and you are done, except for the testing (see the next section).

On your local PC, there might be some things (i.e. libraries or packages) missing.  For the full list of what you need and what you may want, see the ROOT web pages on build prerequisites.

As explained in that ROOT web page, you can use "yum install" to install the missing packages.  An example would be:

yum install libXext-devel

I think that recent RH 7 distributions managed by JLab have most of what you want, but if you do this on your own (e.g. I use Fedora 32 at home) you have a lot of "yumming" to do.  Typically you'll get an error message from the install process, and you can google it unless it's obvious.  Suppose you figure out that "make" is missing from your new installation.  Then you type "yum whatprovides make" (you probably need to be superuser).  It will answer with some package(s); you pick one and install it, e.g. "yum install make-1:4.2.1-16.fc32.x86_64".  Then it's on to the next thing that is missing.

A year ago, I had a little trouble compiling with Python 3 because cmake would find an older version of Python which wasn't compatible.  I fixed this by hand-editing CMakeCache.txt in the build directory so that it pointed to the correct Python; this affected about 6 lines in that file.  I'm not sure if that's still relevant as of 2020, though.
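An alternative to editing the cache, which may or may not apply to your ROOT version, is to tell cmake up front which interpreter to use.  The variable name below (PYTHON_EXECUTABLE) is the one the older FindPython machinery honored, and the path is only an example, so treat this as a hedged suggestion rather than a guaranteed fix:

cmake -DCMAKE_INSTALL_PREFIX=$ROOTSYS -DPYTHON_EXECUTABLE=/apps/python/3.4.3/bin/python3 ../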

Testing Root -- Certification

Suppose you have succeeded at installing ROOT.  How do you test it?  This is what I do:
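At a minimum, a quick smoke test might look like the following (just a sketch, not the full certification list; the tutorial path assumes the standard install layout, and the python line assumes PYTHONPATH is set as described above):

root -b -q $ROOTSYS/tutorials/hsimple.C                     # batch-mode run of a standard tutorial
root-config --features                                      # list the features ROOT was built with
python3 -c 'import ROOT; print(ROOT.gROOT.GetVersion())'    # PyROOT sanity check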

If users want to hand me macros or test code, I'd be happy to add them to my checklist of tests.  For example, I don't regularly use FFTW, so I wouldn't know if it's installed properly unless someone else tested it or gave me some test code.

New Feature Requests

If you want a new feature, I'll be happy to help implement it; send me an email.  This sometimes involves getting the Computer Center to install the libraries and "include" files for the new feature, and then I have to enable it in the build scheme and compile the appropriate ROOT libraries.   Don't hesitate to ask, and please provide help and advice if you think I need it; we are a community and we try to make things better by working together.

ROOT on Windows or Mac

ROOT works on these platforms, but they are not supported by me or by the Computer Center; only Linux is supported.


Computing related topics

Jefferson Lab provides a scientific computing infrastructure for its nuclear physics users. A computing cluster, colloquially known as the Farm, provides interactive and batch data processing capabilities. A mass storage system provides bulk data storage in the form of several flavors of disk storage (work, volatile and cache) and a robotic tape library. The computing requirements are driven by the facility users and coordinated by the hall or collaboration leadership and Experimental Nuclear Physics (ENP) division management. The ENP division funds the computing resources and their ongoing maintenance as part of the annual operations budget.

This computing infrastructure is managed by the Scientific Computing (SCI) group in IT division. Networking and other support for computing for ENP is provided by the Computing and Networking Infrastructure (CNI) group, also in IT. The IT division is also responsible for cyber security and for managing various computing related metrics that are reported to the DOE.

There are several areas where coordinated interaction between ENP and IT takes place at a technical level. This is done via the offline and online computing coordinators, assisted by the leader of the ENP data acquisition support group.

The coordinators are:

Offline computing:

  • Hall A - Ole Hansen
  • Hall B - Veronique Ziegler
  • Hall C - Brad Sawatzky
  • Hall D - Mark Ito

Online computing:

  • Hall A - Alexandre Camsonne
  • Hall B - Sergey Boyarinov
  • Hall C - Brad Sawatzky
  • Hall D - David Lawrence

Some of these areas of interaction between ENP and IT divisions are documented in the following pages.

Data Management Plans

Each experiment performed at Jefferson Lab represents a significant investment, not only by the groups working on the experiment but also by the funding agencies. It is prudent, then, to ensure that this investment is protected so that future researchers may not only take advantage of the final physics results but also have access to the data that produced those results, in a form allowing the data processing to be repeated in the light of new techniques or insights.

By far the largest volume of data generated by an experiment is the raw data containing the digitized readout from the data acquisition system. However, the raw data is only meaningful in the context defined by the metadata recorded as the data is taken: accelerator parameters, operating conditions and calibration of the detector, operator logs and much more. Since all of this data is stored in digital form, it is also important to archive documentation, the software to read the data formats, and even the software used to process the data. By far the safest course is to attempt to preserve as much of the information and software as is available. To be sure, there will be points of diminishing returns and, case by case, a decision must be made on what is not worth keeping.

Such is the importance of data preservation that the funding agencies are asking grant applicants to provide a plan for how they will manage the data from their experiment.

With this in mind, the Scientific Computing group (SCI) in the IT division has written a JLab Data Management Plan that broadly outlines the steps taken to preserve data. Based on this plan, Experimental Nuclear Physics division management has prepared data management plans for each of the four experiment halls. Each hall-specific plan takes into account differences in the ways the halls operate their online and offline data processing. These plans can be referred to by principal investigators when preparing their own data management plans, which should greatly simplify that process.

Data_Management_Plan_Hall-A.pdf (65.18 KB)
Data_Management_Plan_Hall-B.pdf (69.55 KB)
Data_Management_Plan_Hall-C.pdf (70.08 KB)
Data_Management_Plan_Hall_D_v2.pdf (236.11 KB)