Experimental Nuclear Physics, with the help of IT division, provides support for several software packages commonly used by the user community.
The libraries are installed and maintained by Jefferson Lab staff who are active in physics analysis and simulation. They are available on interactive JLab CUE machines such as ifarm1801, ifarm1802, and ifarm1901, and on the batch farm.
Library | Author | JLAB Staff Responsible |
---|---|---|
Common Environment | Maurizio Ungaro | Maurizio Ungaro |
ROOT | CERN | Robert Michaels |
CLHEP | CERN | Maurizio Ungaro |
GEANT4 | Geant4 Collaboration | Maurizio Ungaro / Makoto Asai |
CERNLib | CERN | Stephen Wood |
EVIO | JLAB | Maurizio Ungaro |
HIPO | G. Gavalian | Maurizio Ungaro |
CCDB | D. Romanov | Maurizio Ungaro |
GEMC | M. Ungaro | Maurizio Ungaro |
QT | Qt Company | Maurizio Ungaro |
XERCESC | Apache | Maurizio Ungaro |
Library releases are organized using JLAB_VERSION tags. The list of supported tags is available here.
Quick-Start
Log in to ifarm1801.jlab.org or ifarm1802.jlab.org.
Source the softenv script with the JLAB_VERSION tag as argument:
source /site/12gev_phys/softenv.csh 2.5
If you use bash:
source /site/12gev_phys/softenv.sh 2.5
User selection of libraries:
The source command above will load all the supported libraries; users can select a subset of those by putting their list in a ~/.jlab_software file.
For example, for geant4-only support (which needs clhep, qt, and xercesc), that file would read:
clhep geant4 qt xercesc
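For example, a minimal sketch of setting this up from the command line (csh; the package list is the geant4 example above):
echo "clhep geant4 qt xercesc" > ~/.jlab_software
source /site/12gev_phys/softenv.csh 2.5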
Docker images
Docker images are available starting with JLAB_VERSION 2.2:
To run in batch mode:
docker run -it --rm jeffersonlab/jlabce:2.5 bash
To run in interactive mode:
docker run -it --rm -p 6080:6080 -p 5900:5900 jeffersonlab/jlabce:2.5
then point your web browser to: http://localhost:6080
How to get help
The first place to look for help is the documentation linked from the table above. Try the packages' discussion forums, or search online.
If you need help specific to the JLAB installation please submit a CCPR with the category "SCIENTIFIC COMPUTING".
Further documentation of the Common Environment Framework can be found here.
Local Installation
To install the packages on a local machine check the Step-By-Step instructions.
The Common Environment Framework code is available on github.
The Batch Farm
The environment is available on the batch farm. To use it, add the softenv source line above to your submission scripts.
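For example, a minimal bash submission script body might look like this (myAnalysis and input.dat are hypothetical placeholders; scheduler directives are omitted):
#!/bin/bash
source /site/12gev_phys/softenv.sh 2.5
myAnalysis input.dat    # hypothetical user executable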
The batch farm is maintained by the scientific computing group in IT division. Please refer to their website for information about how to use the data analysis and storage facilities at JLab.
Notification
From time to time we need to get the word out about software updates and known issues. Please subscribe to the JLab mailing list jlab-scicomp-briefs to keep up to date.
Feedback
Feedback is always welcome. There is a scientific support committee that reviews which packages are supported and how support is provided. Feel free to contact any member of the committee.
Tag | Release Date |
---|---|
2.5 | Wed Feb 9 2022 |
2.4 | Wed Oct 3 2018 |
2.2 | Wed March 18 2018 |
2.1 | Wed January 25 2017 |
2.0 | Wed Oct 6 2016 |
The numbering scheme is available here.
Updated Aug 13 2020
If you log in to an ifarm computer ("ssh ifarm") you'll get to a CentOS 7.* computer. You can check which version with "cat /etc/redhat-release". Here, you may look at /apps/root to see the versions available. Usually the one labelled "PRO" (a link) is the one you want, but there may be newer versions.
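For example, a sketch of the commands just described:
ssh ifarm
cat /etc/redhat-release
ls -l /apps/root     # look for the PRO link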
Note that versions of ROOT are also maintained on /site/12gev_phys, per the CLAS software model; the most recent as of Aug 13, 2020 is set up as follows:
ifarm1802.jlab.org> source /site/12gev_phys/softenv.csh 2.4
and this produces
ifarm1802.jlab.org> echo $ROOTSYS
/site/12gev_phys/2.4/Linux_CentOS7.7.1908-gcc9.2.0/root/6.20.04
Generally speaking, if you look at the /apps/root/VERSION/README_BOB file you can see how ROOT was compiled for that VERSION (e.g. /apps/root/6.18.00/README_BOB), i.e. what compiler was used and whether I used the "configure" script or "cmake". For the /site/12gev_phys distributions, lately I've put the information about how ROOT was built in a file $ROOTSYS/README_HOW_BUILT. All the more recent versions, like 6.20.04, are compiled using cmake.
You should compile your applications with the same compiler and same kind of platform (e.g. RedHat 7.7) that was used to build ROOT. The recommendation is that you login to the same kind of ifarm node where you'll run your application on the batch farm and compile your application there. Make sure your application runs interactively before running in batch. For example, on ifarm1802 the default gcc is currently gcc version 9.2.0, but this tends to evolve over time. Type "gcc -v" to see what the default is.
Detailed notes on how ROOT was compiled are shown in the README_BOB file in the ROOT directory, e.g. /apps/root/6.18.00/README_BOB, and in $ROOTSYS/README_HOW_BUILT on /site/12gev_phys. I sometimes just used the default gcc compiler that was available at the time, but in some cases I had to use a more recent compiler. You must ensure that you set up and use the same compiler.
One nice thing about the CLAS software model is that the name of the compiler that was used to compile ROOT appears in the name of the path. This way it is unambiguous.
The CUE command "use root/6.18.00" will set the appropriate variables in your environment, or you may use the command "source /apps/root/6.18.00/setroot_CUE". You will notice there are two versions of setroot_CUE, one for csh and one for bash; setroot_CUE is actually a link to the csh version. You could, of course, copy this script over and modify it. In bash you have "export ROOTSYS=" instead of "setenv ROOTSYS" in csh. Last time I checked, the batch farm needs the "export ROOTSYS=" bash shell scheme.
A typical environment might look like this:
ROOTSYS | /apps/root/6.18.00/root |
LD_LIBRARY_PATH | $ROOTSYS/lib (before others) |
PATH | $ROOTSYS/bin (before others) |
Make sure you have 3 environment variables set: $ROOTSYS, $PATH, and $LD_LIBRARY_PATH!
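A minimal sketch of setting the three variables by hand, in both shells (using the example version above; if LD_LIBRARY_PATH may be unset, guard it as in the script further below):
# csh / tcsh
setenv ROOTSYS /apps/root/6.18.00/root
setenv PATH ${ROOTSYS}/bin:${PATH}
setenv LD_LIBRARY_PATH ${ROOTSYS}/lib:${LD_LIBRARY_PATH}
# bash (the scheme the batch farm needs)
export ROOTSYS=/apps/root/6.18.00/root
export PATH=${ROOTSYS}/bin:${PATH}
export LD_LIBRARY_PATH=${ROOTSYS}/lib:${LD_LIBRARY_PATH}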
Packages
Presently I have the following list of packages enabled; it's controlled by a script that uses cmake.
set enablePackages = ("roofit" "minuit2" "pyroot" "gdml" "unuran" "qt" "qtgsi" "mysql" "reflex" "cintex" "fftw3" "soversion" "mathmore")
In some earlier versions we had "python" instead of "pyroot" in this list.
If you need any others, please let me know and I'll fix it ASAP.
I can also add missing packages and I check, as part of the certification process, to see if packages were successfully made. Sometimes a package fails to build because some system package is missing, e.g. GSL.
The following script is an example of how to set up ROOT. I think that the /apps/root scheme may be replaced at some point. For the /site/12gev_phys distributions, see $ROOTSYS/README_HOW_BUILT for up-to-date details. I think that the softenv.csh script will set this up for you, mostly; however, python may need to be set up by hand.
#!/bin/csh
# Setup ROOT -- do this:
# For the sh family: use "export" instead of "setenv"
# For MacOS you must also set DYLD_LIBRARY_PATH
echo "ROOT 6.20.04 was compiled with gcc 4.8.5"
echo "present default gcc is given by gcc -v: "
gcc -v
setenv PYTHON /apps/python/3.4.3
# For csh shell
setenv ROOTSYS /u/apps/root/6.18.04/root
setenv PATH ${PYTHON}/bin/:${ROOTSYS}/bin:${PATH}
if (!($?LD_LIBRARY_PATH)) then
  setenv LD_LIBRARY_PATH ${PYTHON}/lib:${ROOTSYS}/lib
else
  setenv LD_LIBRARY_PATH ${PYTHON}/lib:${ROOTSYS}/lib:${LD_LIBRARY_PATH}
endif
if (!($?PYTHONPATH)) then
  setenv PYTHONPATH ${ROOTSYS}/bin:${ROOTSYS}/lib
else
  setenv PYTHONPATH ${ROOTSYS}/bin:${ROOTSYS}/lib:${PYTHONPATH}
endif
As per the CLAS software model (https://data.jlab.org/drupal/common-environment), ROOT is maintained in /site/12gev_phys, which we try to update every 6 months or so. Generally, the identical version is on /apps/root/*. To use the version on /site, please use the setup procedure shown below. As always, the three things to control are $ROOTSYS, $LD_LIBRARY_PATH, and $PATH. Also, you should check that the command "root-config --version" returns an expected result. See the example below.
Having logged into "ifarm", suppose you want to use production version 2.4:
ifarm1102> source /site/12gev_phys/softenv.csh 2.4
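To verify the setup, check root-config; with tag 2.4 (which provides ROOT 6.20.04, as shown above) the output would look like:
ifarm1102> root-config --version
6.20/04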
A subtlety: if you already have $ROOTSYS and ROOT set up in your environment, the above line will not "override" your definition. Avoid hard-coded definitions in your login script, or use the above line in your login script if that's what you intend to do.
The JLAB_VERSION variable will control a 'set' of software versions and associated dependencies. The 'production' version will point to the recommended defaults, but that variable may be set to '1.0' or newer versions as they are created.
Assuming you are a "power user" and need your own compilation of ROOT, I maybe don't even need to tell you how, but here are some notes on installing ROOT on RedHat 7, CUE level 2, using cmake. The other way to build, using "configure", is considered obsolete, but you can ask me if you want to do it that way. I have also installed ROOT on Fedora at home. This is even more of an adventure since the default Fedora is missing lots of things (compiler, make, etc.).
First, I note that you should add /apps/root/cmake to your path; the one in /usr/bin is old. E.g. setenv PATH /apps/bin:${PATH}.
As usual you need the environment variables
setenv ROOTSYS /home/rom/root-6.10.08/root
setenv PATH ${ROOTSYS}/bin:${PATH}
setenv LD_LIBRARY_PATH ${ROOTSYS}/lib:${LD_LIBRARY_PATH}
$ROOTSYS is where ROOT will get installed, i.e. /bin and /lib will appear there.
Now, go to the ./build directory that appears after you untar the ROOT tar file and run "cmake":
cmake -DCMAKE_INSTALL_PREFIX=$ROOTSYS $toDisable $toEnable ../
where $toDisable and $toEnable are packages you may want to disable/enable. For me, I disable nothing and I enable "roofit" "minuit2" "python" "gdml" "unuran" "qt" "qtgsi" "mysql" "reflex" "cintex" "fftw3" "soversion" "mathmore".
After "cmake" does its build, you type "make install" and you are done, except for the testing (see the next section).
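Putting the steps together as a csh sketch (the untarred source directory name is an assumption; $toDisable and $toEnable are as described above):
setenv ROOTSYS /home/rom/root-6.10.08/root
setenv PATH ${ROOTSYS}/bin:${PATH}
setenv LD_LIBRARY_PATH ${ROOTSYS}/lib:${LD_LIBRARY_PATH}
cd root-6.10.08/build
cmake -DCMAKE_INSTALL_PREFIX=$ROOTSYS $toDisable $toEnable ../
make install
root-config --version    # sanity check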
On your local PC, there might be some things (e.g. libraries or packages) missing. For the full list of what you need and may want, see the ROOT pages:
https://root.cern/install/dependencies
As explained in that ROOT web page, you can use "yum install" to install the missing packages. An example would be:
yum install libXext-devel
I think that recent RH 7 distributions managed by JLab have most of what you want, but if you do this on your own (e.g. I use Fedora 32 at home) you have a lot of "yumming" to do. Typically you'll get an error message from the install process and you can google it unless it's obvious. Suppose you figure out that "make" is missing from your new installation. So then you type "yum whatprovides make". You probably need to be superuser to do this. It will answer with some package(s) and you pick one and install that, e.g. "yum install make-1:4.2.1-16.fc32.x86_6". Then it's on to the next thing that is missing.
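As a sketch, the loop looks like this (using "make" as the example missing piece):
# the build fails complaining that make is missing
yum whatprovides make     # run as superuser; lists the providing package(s)
yum install make          # install one of them, then retry the build
# repeat for the next missing dependency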
A year ago, I had a little trouble compiling with Python 3 because cmake would find an older version of Python which wasn't compatible. I fixed this by hacking the CMakeCache.txt in the build directory so that this txt file pointed to the correct Python. This affected about 6 lines in that CMakeCache.txt. I'm not sure if that's still relevant as of 2020, though.
Testing Root -- Certification
Suppose you have succeeded at installing ROOT. How do you test it? This is what I do:
If some users want to hand me macros or test code to test, I'd be happy to add it to my checklist of tests. For example, I don't regularly use FFTW, so I wouldn't know if it's installed properly unless someone else tested it, or you give me some test code.
If you want a new feature, I'll be happy to help implement it. Email me at rom@jlab.org. This sometimes involves getting the Computer Center to install the libraries and "include" files of the new feature, and then I have to enable it in the building scheme and compile the appropriate ROOT libraries. Don't hesitate to ask me, and please provide help and advice if you think I need it; we are a community and we try to make things better by working together.
ROOT works on other platforms, but they are not supported by me or by the Computer Center; only Linux is supported.
Jefferson Lab provides a scientific computing infrastructure for its nuclear physics users. A computing cluster, colloquially known as The Farm, provides interactive and batch data processing capabilities. A mass storage system provides bulk data storage in the form of several flavors of disk storage (work, volatile, and cache) and a robotic tape library. The computing requirements are driven by the facility users and coordinated by the hall or collaboration leadership and Experimental Nuclear Physics (ENP) division management. The ENP division funds the computing resources and their ongoing maintenance as part of the annual operations budget.
This computing infrastructure is managed by the Scientific Computing (SCI) group in IT division. Networking and other support for computing for ENP is provided by the Computing and Networking Infrastructure (CNI) group, also in IT. The IT division is also responsible for cyber security and for managing various computing related metrics that are reported to the DOE.
There are several areas where coordinated interaction between ENP and IT takes place at a technical level. This is done via the offline and online computing coordinators, assisted by the leader of the ENP data acquisition support group.
The coordinators are:
Offline
Online
Some of these areas of interaction between ENP and IT divisions are documented in the following pages.
Each experiment performed at Jefferson Lab represents a significant investment, not only for the groups working on the experiment but also for the funding agencies. It is prudent, then, to ensure that this investment is protected so that future researchers may not only take advantage of the final physics results but also have access to the data that produced those results, in a form allowing data processing to be repeated in the light of new techniques or insights.
By far the largest volume of data generated by an experiment is the raw data containing the digitized readout from the data acquisition system. However, the raw data is only meaningful in the context defined by the metadata that is recorded as the data is taken; this includes accelerator parameters, operating conditions and calibration of the detector, operator logs, and much more. Since all of this data is stored in digital form, it is also important to archive documentation, the software to read the data formats, and even the software used to process the data. By far the safest course is to attempt to preserve as much of the information and software as is available. To be sure, there will be points of diminishing returns and, case by case, a decision on what is not worth keeping must be made.
Such is the importance of data preservation that the funding agencies are asking grant applicants to provide a plan for how they will manage the data from their experiment.
With this in mind, the Scientific Computing group (SCI) in IT division has written a JLab Data Management Plan that broadly outlines the steps taken to preserve data. Based on this plan, the Experimental Nuclear Physics division management has prepared data management plans for each of the four experiment halls. Each hall-specific plan takes into account differences in the ways in which the halls operate their online and offline data processing. These plans can be referred to by principal investigators when preparing their own data management plans and should greatly simplify that process.
Attachment | Size |
---|---|
Data_Management_Plan_Hall-A.pdf | 65.18 KB |
Data_Management_Plan_Hall-B.pdf | 69.55 KB |
Data_Management_Plan_Hall-C.pdf | 70.08 KB |
Data_Management_Plan_Hall_D_v2.pdf | 236.11 KB |
CERNLib 2005 is the supported version at JLab. This version is supported on Computer Center managed machines running either 32- or 64-bit versions of the following operating systems:
To set up CERNLib for use, type the command
setup cernlib
This command will define the following environment variables
and add $CERN_ROOT/bin to PATH.
If the setup command does not work, make sure that "source /site/env/syscshrc" has been added to .cshrc.
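A minimal csh sketch of the whole sequence (the ls line is just an illustrative check):
source /site/env/syscshrc
setup cernlib
echo $CERN_ROOT           # should now be defined
ls $CERN_ROOT/bin         # the executables added to PATH live here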
Some systems include a version of CERNLib with the operating system. It is recommended to use the above setup command to override this version with the locally built version.
By default, CERNLib is only compatible with the default gcc version for the respective Enterprise Linux versions. Upon request, CERNLib can be built for other versions of gcc if those versions are supported by the Computer Center (i.e. versions in /apps/gcc). Submit a helpdesk ticket to request support for a non-default compiler.
JLab does not provide support for CERNLib on other flavors of Linux (e.g. Ubuntu) or other operating systems (e.g. Mac OS X).
CERNLib Source
The source code for the CERNLib version used at JLab is obtained from http://www-zeuthen.desy.de/linear_collider/cernlib/new/cernlib_2005.html.
The geant4 environment is set up with the other packages using the JLab common environment scripts, for example:
source /site/12gev_phys/softenv.csh 2.5
or, if you use bash:
source /site/12gev_phys/softenv.sh 2.5
The supported geant4 versions can be found here.
A step-by-step guide on how to install geant4 and all its dependencies can be found here.
If you want a geant4 feature not included in the standard release, we will be happy to help implement it.
Updated March 28, 2014
The clhep environment is set up automatically with the other packages using the JLab common environment scripts, for example:
source /site/12gev_phys/softenv.csh 2.2
The supported clhep versions can be found here.
The JLab common environment provides utilities to compile clhep applications using scons.
The SConstruct file should include the lines:
from init_env import init_environment
env = init_environment("clhep")
This will load the clhep environment in scons.
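With those lines in your SConstruct, the build itself is just scons (a sketch; my_clhep_app is a hypothetical project directory):
cd my_clhep_app
scons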
A step-by-step guide on how to install clhep can be found here.
If you want a clhep feature not included in the standard release, I'll be happy to help implement it.
GEMC is a software framework that uses geant4 to simulate the passage of particles through matter.
gemc is part of the CUE Common Environment. It is installed in /site/12gev_phys.
gemc uses the JLab Common Environment, set up using the softenv script:
source /site/12gev_phys/softenv.csh 2.5
The supported gemc versions can be found here.
A step-by-step guide on how to install gemc and all its dependencies can be found here.
If you want a gemc feature not included in the standard release, I'll be happy to help implement it.
EVIO is the event format that was developed as the native format for raw data generated by the CODA data acquisition toolkit. The goals of EVIO are:
EVIO is a hierarchical format whose basic building block is a structure known as a bank. Each bank has two parts, a header and a payload. The header contains meta-information such as the length of the bank, the type of data in the payload, and a numerical tag that is used in conjunction with a dictionary to provide a description of the payload. The payload of any particular bank is homogeneous; that is, a bank can contain integers, or real numbers, or other banks, but not a mixture. The ability to have a bank containing banks gives EVIO the flexibility to construct complex data structures.
Version 1 of the CODA EVIO package, written in C, was in use at Jefferson Lab for over a decade. It saw extensive use in Halls A and C, where all of the raw data for the 6 GeV program was written to disk in EVIO format. EVIO saw limited use in Hall B, with CLAS choosing to store its raw data in BOS/FPACK format. PRIMEX and the GlueX BCAL test also stored their raw data in EVIO format, and EVIO has been used in experiments and test stands off the JLab site.
Versions 2 and 3 extended the EVIO API into C++ and Java, which could then take advantage of object-oriented techniques. Mapping banks onto objects allows, for example, graphical visualization of the event structure and serialization into XML.
Version 4 has been developed as the raw data format for all of the experiments in the 12 GeV era, to be used by all of the halls. The package is supported in C, C++, and Java. Extensions to the API improve support for using EVIO to store the output from offline analysis and reconstruction code. The improved API also allows more flexibility in using EVIO to decode events offline.
The EVIO documentation and code are hosted on the CODA website.
Background
Traditionally, scientific software packages have been supported on an ad hoc basis by various staff and users in the ENP division. Meanwhile, the scientific computing group in the IT division has been responsible for the day-to-day operation of the systems used to run simulation and analysis jobs. This committee was formed to formalize the support of scientific software. The primary goal is to give the laboratory users clear mechanisms to obtain the software that they need, to get help, and to give feedback. It is also a goal of this committee to ensure that the software packages are well maintained and updated in a way that minimizes disruption to users. For further details please see the charge to the committee.
The committee is charged with providing support for scientific software packages for data simulation and analysis. Specifically the committee shall:
A PDF of the memo that established the charge is linked below.
Attachment | Size |
---|---|
20121205_Charge_Memo.pdf | 49.64 KB |
Whiteboard snapshot
ROOT | Bob M. |
CernLib | Steve W. |
CLHEP, GEANT4 | Maurizio, Paul Gueye |
EVIO | Carl T. |
Recommendations
Graham to be chair.
Actions
The committee needs a charge.
Sub-committee of Brad, Sandy and Graham to look at IT Features items from above list.
Participants
Sandy Philpott (chair), Graham Heyes, Brad Sawatzky
Goal
Discuss integration of scientific software support with the IT help desk (CCPR system), documentation, notifications and version control.
Summary
Here is a summary of our sub-committee meeting Tuesday, addressing IT support for Physics software:
CCPRs
-----
New category PHYSICS SOFTWARE added to CCPR system
- assigned to IT staff - Sandy or other
- email to Brad, Graham, Mark, Ole, Bob, Steve, Mauri, for starters
- the email interface allows status updates - need to dig out the syntax details
Documentation
-------------
Suggest keeping the current location as a starting point, at
https://wiki.jlab.org/cc/external/wiki/index.php/Physics_Applications
and updating this overview to include 1) an overview of the JLab support model for Physics software, 2) a short description of each support package and 3) a current link to its JLab documentation.
Any relevant pages found through google and web searches of the JLab site should be consolidated at this top level, and stale pages should be replaced with a permanent redirect to the main jump page.
We also discussed that the documentation, software, meeting minutes etc would be hosted on a website controlled by ENP.
Standard and multiple versions
------------------------------
Users on site have different methods of accessing software: the JLab legacy "setup" and "use" commands, and the $JLAB_ROOT environment maintained by Mauri in /site/12gev_phys. More details are needed on running versions other than the default and on identifying the PRO production versions. How does /site/12gev_phys interact with the versions in /apps? Does it need to? These access methods need maintenance and documentation. Do users still need "use" and/or "setup"? The choice between "setup", "use", and "/site/12gev_phys" may also impact who is in charge of the production software.
The production /apps/<software>/PRO version, or that in $JLAB_ROOT, should only change during a scheduled maintenance period that has been announced to users.
Notifications
-------------
General announcements about software updates and status can go to the jlab-scicomp-briefs mailing list.
Agenda
In attendance
Graham Heyes, Bob Michaels, Patrizia Rossi, Javier Gomez, Steve Wood, Mark Ito, Brad Sawatzky, Maurizio Ungaro, Sandy Philpott
Minutes
Action items from the meeting
In attendance
Graham Heyes, Bob Michaels, Javier Gomez, Mark Ito, Maurizio Ungaro, Sandy Philpott
Action items from the last meeting
Agenda
Minutes
Action items from this meeting
Action items from the last meeting
Agenda
Minutes
Action items from this meeting