Scientific Software

Experimental Nuclear Physics, with the help of IT division, provides support for several software packages commonly used by the user community.

The libraries are installed and maintained by Jefferson Lab staff who are active in physics analysis and simulation. They are available on interactive JLab CUE machines such as ifarm1801, ifarm1802, and ifarm1901, and on the batch farm.


Library              Author                 JLAB Staff Responsible
Common Environment   Maurizio Ungaro        Maurizio Ungaro
ROOT                 CERN                   Robert Michaels
CLHEP                CERN                   Maurizio Ungaro
GEANT4               Geant4 Collaboration   Maurizio Ungaro / Makoto Asai
CERNLib              CERN                   Stephen Wood
EVIO                 JLAB                   Maurizio Ungaro
HIPO                 G. Gavalian            Maurizio Ungaro
CCDB                 D. Romanov             Maurizio Ungaro
GEMC                 M. Ungaro              Maurizio Ungaro
QT                   Qt Company             Maurizio Ungaro
XERCESC              Apache                 Maurizio Ungaro



The library releases are organized using JLAB_VERSION tags. The list of supported tags is available here.




Log in to one of the interactive CUE machines.

Source the softenv script with the JLAB_VERSION tag as argument:

source /site/12gev_phys/softenv.csh 2.5

If you use bash:

source /site/12gev_phys/ 2.5


User selection of libraries:


The source command above will load all the supported libraries; users can select a subset of those by putting their list in a ~/.jlab_software file.

For example, for geant4-only support (which needs clhep, qt, and xercesc), that file would read:

clhep geant4 qt xercesc
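The selection file can be created directly from the shell; a trivial sketch:

```shell
# Restrict the common environment to geant4 and its dependencies
# by listing them, space-separated, in ~/.jlab_software
echo "clhep geant4 qt xercesc" > ~/.jlab_software
cat ~/.jlab_software   # verify the contents
```

The next time the softenv script is sourced, only the listed packages are loaded.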


Docker images.


Docker images are available starting with JLAB_VERSION 2.2:

To run in batch mode:

docker run -it --rm jeffersonlab/jlabce:2.5 bash

To run in interactive mode:

docker run -it --rm -p 6080:6080 -p 5900:5900 jeffersonlab/jlabce:2.5

then point your web browser to: http://localhost:6080



How to get help.

The first place to look for help is the documentation linked from the list above. Try the packages' discussion forums, or search online.

If you need help specific to the JLAB installation please submit a CCPR with the category "SCIENTIFIC COMPUTING".

Further documentation of the Common Environment Framework can be found here.


Local Installation

To install the packages on a local machine check the Step-By-Step instructions.

The Common Environment Framework code is available on github.


The Batch Farm

The environment is available on the batch farm. To use it, add the softenv source line above to your submission scripts.
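For example, a minimal csh submission script might look like this (a sketch; the application line is a hypothetical placeholder):

```shell
#!/bin/csh
# Load the common environment before running the job
source /site/12gev_phys/softenv.csh 2.5
# run your application here, e.g.:
# myanalysis input.dat      (hypothetical command)
```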

The batch farm is maintained by the scientific computing group in IT division. Please refer to their website for information about how to use the data analysis and storage facilities at JLab.



From time to time we need to get the word out about software updates and known issues. Please subscribe to the JLab mailing list jlab-scicomp-briefs to keep up to date.



Feedback is always welcome. There is a scientific support committee that reviews which packages are supported and how support is provided. Feel free to contact any member of the committee.

Maurizio Ungaro







Available Versions

Tag 2.4, released Wed Oct 3 2018:

clhep 2.4.04
qt system or 5.10.1
xercesc 3.2.2
gemc 2.7
banks 1.4
scons build 1.7
evio 5.1
ccdb 1.06.02
mlibrary 1.3

Tag 2.5, released Wed Feb 9 2022:

qt CUE system version
xercesc 3.2.3
geant4 4.10.07.p03
gemc 2.9 / 3.0
root 6.24.06
banks 1.8
scons build 1.10 / 1.11
evio 5.1
ccdb 1.07
mlibrary 1.5





Previous Releases



Tag            2.2                 2.1                   2.0
Release Date   Wed March 18 2018   Wed January 25 2017   Wed Oct 6 2016
qt             5.9.1               5.8.0                 5.6.0
xercesc        3.2.0               3.1.4                 3.1.3
geant4         4.10.03.p02         4.10.02.p03           4.10.02.p02
gemc           2.6                 2.5                   2.5
jana           0.7.7p1             0.7.7p1               0.7.4p2
root           6.12.06             6.10.02               6.08.00
banks          1.4                 1.3                   1.3
scons build    1.6                 1.5                   1.4
evio           5.1                 5.1                   5.1
ccdb           1.06.02             1.06.02               1.06
mlibrary       1.2                 1.1                   1.0




Releases Scheme

The numbering scheme is available here.



Using Root for Application Development at Jefferson Lab. 

Updated Aug 13  2020

Versions available

If you log in to an ifarm computer ("ssh ifarm") you'll get a CentOS 7.* machine; you can check which version with "cat /etc/redhat-release".  Here, you may look at /apps/root to see the versions available.  Usually the one labelled "PRO" (a link) is the one you want, but there may be newer versions.

Note, versions of ROOT are also maintained on /site/12gev_phys, per the CLAS software model. The most recent as of Aug 13, 2020 is set up as follows:

source /site/12gev_phys/softenv.csh 2.4

and this defines $ROOTSYS, which you can verify with:

echo $ROOTSYS

Generally speaking, if you look at the /apps/root/VERSION/README_BOB file you can see how ROOT was compiled for that VERSION (e.g. /apps/root/6.18.00/README_BOB), i.e. what compiler was used and whether I used the "configure" script or "cmake".  For the /site/12gev_phys distributions, lately I've put the information about how root was built in a file $ROOTSYS/README_HOW_BUILT.  All the more recent versions like 6.20.04 are compiled using cmake.


You should compile your applications with the same compiler and same kind of platform (e.g. RedHat 7.7) that was used to build ROOT.   The recommendation is that you login to the same kind of ifarm node where you'll run your application on the batch farm and compile your application there. Make sure your application runs interactively before running in batch.  For example, on ifarm1802 the default gcc is currently gcc version 9.2.0, but this tends to evolve over time.  Type "gcc -v" to see what the default is.

Detailed notes on how ROOT was compiled are given in the README_BOB file in the ROOT directory, e.g. /apps/root/6.18.00/README_BOB, and in $ROOTSYS/README_HOW_BUILT on /site/12gev_phys.  I sometimes just used the default gcc compiler that was available at the time, but in some cases I had to use a more recent compiler.  You must ensure that you set up and use the same compiler.

One nice thing about the CLAS software model is that the name of the compiler that was used to compile ROOT appears in the name of the path.  This way it is unambiguous.


The CUE command "use root/6.18.00" will set the appropriate variables in your environment, or you may use the command "source /apps/root/6.18.00/setroot_CUE".  You will notice there are two versions of setroot_CUE, one for csh and one for bash; setroot_CUE is actually a link to the csh version.  You could, of course, copy this script over and modify it.  In bash you have "export ROOTSYS=" instead of "setenv ROOTSYS" in csh.  Last time I checked, the batch farm needs the "export ROOTSYS=" bash shell scheme.

A typical environment might look like this:

ROOTSYS /apps/root/6.18.00/root
LD_LIBRARY_PATH $ROOTSYS/lib before others
PATH $ROOTSYS/bin before others

Make sure you have 3 environment variables set: $ROOTSYS, $PATH, and $LD_LIBRARY_PATH !
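In bash, for example, the three variables can be set by hand; a minimal sketch, with the version path taken from the example above (adjust to the ROOT version you actually use):

```shell
# Hypothetical bash setup for ROOT 6.18.00 under /apps/root
export ROOTSYS=/apps/root/6.18.00/root
export PATH=$ROOTSYS/bin:$PATH
# prepend rather than clobber any existing LD_LIBRARY_PATH
export LD_LIBRARY_PATH=$ROOTSYS/lib${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}
echo "ROOTSYS=$ROOTSYS"
```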


Presently I have the following list enabled and it's controlled by a script that uses cmake.

   set enablePackages = ("roofit" "minuit2" "pyroot" "gdml" "unuran" "qt" "gtgsi" "mysql" "reflex" "cintex" "fftw3" "soversion" "mathmore" )

in some earlier versions we had "python" instead of "pyroot" in this list.

If you need any others, please let me know and I'll fix it ASAP.

I can also add missing packages and I check, as part of the certification process, to see if packages were successfully made.  Sometimes a package fails to build because some system package is missing, e.g. GSL.

The setroot_CUE script for /apps/root

The following script is an example of how to set up ROOT.  I think that the /apps/root scheme may be replaced at some point.  For the /site/12gev_phys distributions see $ROOTSYS/README_HOW_BUILT for up-to-date details.  I think that the softenv.csh script will set this up for you, mostly.  However, maybe python needs to be set up by hand.

# Setup ROOT -- do this:
# For sh family: use "export" instead of "setenv"
# For MacOS you must also set DYLD_LIBRARY_PATH

echo "ROOT 6.20.04 was compiled with gcc 4.8.5"
echo "present default gcc is given by gcc -v: "
gcc -v

setenv PYTHON /apps/python/3.4.3

# For csh shell
setenv ROOTSYS /u/apps/root/6.18.04/root
setenv PATH ${PYTHON}/bin:${ROOTSYS}/bin:${PATH}

if (! $?LD_LIBRARY_PATH) then
  setenv LD_LIBRARY_PATH ${PYTHON}/lib:${ROOTSYS}/lib
endif

if (! $?PYTHONPATH) then
  setenv PYTHONPATH ${ROOTSYS}/bin:${ROOTSYS}/lib
endif


Root on /site -- the CLAS software model

As per the CLAS software model, ROOT is maintained in /site/12gev_phys, which we try to update every 6 months or so.  Generally, the identical version is on /apps/root/*.  To use the version on /site, please use the setup procedure below.  As always, the three things to control are $ROOTSYS, $LD_LIBRARY_PATH, and $PATH.  Also, you should check that the command "root-config --version" returns an expected result.  See the example below.

Having logged into "ifarm", suppose you want to use production version 2.4:

ifarm1102> source /site/12gev_phys/softenv.csh 2.4

A subtlety: if you already have $ROOTSYS and root set up in your environment, the above line will not "override" your definition.  Avoid hard-coded definitions in your login script, or use the above line in your login script if that's what you intend to do.

The JLAB_VERSION variable will control a 'set' of software versions and associated dependencies.  The 'production' version will point to the recommended defaults, but that variable may be set to '1.0' or newer versions as they are created. 

Root on your PC using CUE level 2 install

Assuming you are a "power user" and need your own compilation of root, I maybe don't even need to tell you how, but ... here are some notes on installing ROOT on RedHat 7, CUE level 2, using cmake.  The other way to build, using "configure", is considered obsolete, but you can ask me if you want to do it that way.  I have also installed root on Fedora at my home.  This is even more of an adventure since the default Fedora is missing lots of things (compiler, make, etc).

First, I note that you should add /apps/root/cmake to your path, the one in /usr/bin is old.  E.g. setenv PATH /apps/bin:${PATH}. 

As usual you need the environment variables

setenv ROOTSYS /home/rom/root-6.10.08/root
setenv PATH ${ROOTSYS}/bin:${PATH}

$ROOTSYS is where ROOT will get installed, i.e. /bin and /lib will appear there.

Now, go to the ./build directory that appears after you untar the root tar file and run "cmake"

cmake -DCMAKE_INSTALL_PREFIX=$ROOTSYS $toDisable $toEnable ../

where $toDisable and $toEnable are packages you may want to disable/enable.  For me, I disable nothing and I enable "roofit" "minuit2" "python" "gdml" "unuran" "qt" "gtgsi" "mysql" "reflex" "cintex" "fftw3" "soversion" "mathmore"

After "cmake" does its build you type "make install" and you are done, except for the testing (see next section).

On your local PC, there might be some things (e.g. libraries or packages) missing.  For the full list of what you need and may want, see the ROOT pages:

As explained in that ROOT web page, you can use "yum install" to install the missing packages.  An example would be:

yum install libXext-devel

I think that recent RH 7 distributions managed by JLab have most of what you want, but if you do this on your own (e.g. I use Fedora 32 at home) you have a lot of "yumming" to do.  Typically you'll get an error message from the install process and you can google it unless it's obvious.  Suppose you figure out that "make" is missing from your new installation.  So then you type "yum whatprovides make".  You probably need to be superuser to do this.  It will answer with some package(s); you pick one and install it, e.g. "yum install make-1:4.2.1-16.fc32.x86_64".  Then it's on to the next thing that is missing.

A year ago, I had a little trouble compiling with Python 3 because cmake would find an older version of Python which wasn't compatible.  I fixed this by hacking the CMakeCache.txt in the build directory so that this txt file pointed to the correct Python.  This affected about 6 lines in that CMakeCache.txt.  I'm not sure if that's still relevant as of 2020, though.

Testing Root -- Certification

Suppose you have succeeded at installing Root.  How do you test it?  This is what I do:

If some users want to hand me macros or test code to test, I'd be happy to add it to my checklist of tests.  For example, I don't regularly use FFTW, so I wouldn't know if it's installed properly unless someone else tested it, or you give me some test code. 

New Feature Requests

If you want a new feature, I'll be happy to help implement it.  Email me at  This sometimes involves getting the Computer Center to install the libraries and "include" files of the new feature, and then I have to enable it in the building scheme and compile the appropriate ROOT libraries.   Don't hesitate to ask me, and please provide help and advice if you think I need it; we are a community and we try to make things better by working together.

ROOT on Windows or MAC

ROOT works on these platforms, but they are not supported by me or by the computer center; only Linux is supported.


Computing related topics

Jefferson Lab provides for its nuclear physics users a scientific computing infrastructure. A computing cluster, colloquially known as The Farm, provides interactive and batch data processing capabilities. A mass storage system provides bulk data storage in the form of several flavors of disk storage (work, volatile and cache) and a robotic tape library. The computing requirements are driven by the facility users and coordinated by the hall or collaboration leadership and Experimental Nuclear Physics (ENP) division management. The ENP division funds the computing resources and their ongoing maintenance as part of the annual operations budget.

This computing infrastructure is managed by the Scientific Computing (SCI) group in IT division. Networking and other support for computing for ENP is provided by the Computing and Networking Infrastructure (CNI) group, also in IT. The IT division is also responsible for cyber security and for managing various computing related metrics that are reported to the DOE.

There are several areas where coordinated interaction between ENP and IT takes place at a technical level. This is done via the offline and online computing coordinators, assisted by the leader of the ENP data acquisition support group.

The coordinators are:


Offline computing coordinators:

  • A - Ole Hansen
  • B - Veronique Ziegler
  • C - Brad Sawatzky
  • D - Mark Ito

Online computing coordinators:

  • A - Alexandre Camsonne
  • B - Sergey Boyarinov
  • C - Brad Sawatzky
  • D - David Lawrence

Some of these areas of interaction between ENP and IT divisions are documented in the following pages.

Data Management Plans

Each experiment performed at Jefferson Lab represents a significant investment not only to the groups working on the experiment but also to the funding agencies. It is prudent then to ensure that this investment is protected so that future researchers may not only take advantage of the final physics results but also have access to the data that produced those results, in a form allowing data processing to be repeated in the light of new techniques or insights.

By far the largest volume of data generated by an experiment is the raw data containing the digitized readout from the data acquisition system. However, the raw data is only meaningful in the context defined by the metadata that is recorded as the data is taken; this includes accelerator parameters, operating conditions and calibration of the detector, operator logs and much more. Since all of this data is stored in digital form it is also important to archive documentation, the software to read the data formats and even the software used to process the data. By far the safest course is to attempt to preserve as much of the information and software as is available. To be sure, there will be points of diminishing returns and, case by case, a decision on what is not worth keeping must be made.

Such is the importance of the preservation of data, that the funding agencies are asking grant applicants to provide a plan for how they will manage the data from their experiment.

With this in mind the Scientific Computing group (SCI) in IT division has written a JLab Data Management Plan that broadly outlines the steps taken to preserve data. Based on this plan the Experimental Nuclear Physics division management has prepared data management plans for each of the four experiment halls. Each hall-specific plan takes into account differences in the ways in which the halls operate their online and offline data processing. These plans can be referred to by principal investigators when preparing their own data management plan and should greatly simplify that process.

Data_Management_Plan_Hall-A.pdf65.18 KB
Data_Management_Plan_Hall-B.pdf69.55 KB
Data_Management_Plan_Hall-C.pdf70.08 KB
Data_Management_Plan_Hall_D_v2.pdf236.11 KB


CERNLib Support at Jefferson Lab

CERNLib 2005 is the supported version at JLab.  This version is supported on Computer Center managed machines running either 32- or 64-bit versions of the following operating systems:

To setup CERNLib for use, type the command

setup cernlib

This command will define the following environment variables

and add $CERN_ROOT/bin to PATH.

If the setup command does not work, make sure that "source /site/env/syscshrc" has been added to .cshrc.

Some systems will include a version of CERNLib with the operating system.  It is recommended to use the above setup command to override this version with the locally built version.

Support for non-default compilers

By default, CERNLib is only compatible with the default gcc version for the respective Enterprise Linux versions.  Upon request, CERNLib can be built for other versions of gcc if those versions are supported by the computer center (i.e. versions in /apps/gcc).  Submit a helpdesk ticket to request support for a non-default compiler.

Support for other operating systems

JLab does not provide support for CERNLib on other flavors of Linux (e.g. Ubuntu) or other operating systems (e.g. macOS).

CERNLib Source

The source code for the CERNLib version used at JLab is obtained from


Using geant4 for Application Development at Jefferson Lab. 



The geant4 environment is set up with the other packages using the JLab common environment scripts, for example:

source /site/12gev_phys/softenv.csh 2.5

For bash / zsh users:

source /site/12gev_phys/ 2.5

Available Versions

The supported geant4 versions can be found here.


geant4 on your PC using CUE install

A step-by-step guide on how to install geant4 and all its dependencies can be found here.


New Feature Requests

If you want a geant4 feature not included in the standard release we will be happy to help implement it.  



Using clhep for Application Development at Jefferson Lab. 


Updated March 28, 2014



The clhep environment is set up automatically with the other packages using the JLab common environment scripts, for example:

source /site/12gev_phys/softenv.csh 2.2


Available Versions

The supported clhep versions can be found here.


Compiling clhep applications

The JLab common environment provides utilities to compile clhep applications using scons.

The SConstruct file should include the lines:

from init_env import init_environment
env = init_environment("clhep")

This will load the clhep environment in scons.
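A complete minimal SConstruct might then look like this (a sketch; the target and source names are hypothetical, and init_env is assumed to be made available by the common environment):

```python
# Minimal SConstruct for a clhep application
# (hypothetical target/source names: myapp, myapp.cc)
from init_env import init_environment

env = init_environment("clhep")  # sets clhep include and library paths
env.Program(target="myapp", source=["myapp.cc"])
```

Running "scons" in the directory containing this file then builds the application.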


clhep on your PC using CUE install

A step-by-step guide on how to install clhep can be found here.


New Feature Requests

If you want a clhep feature not included in the standard release I'll be happy to help implement it.  



GEMC is a software framework that uses geant4 to simulate the passage of particles through matter.

gemc is part of the CUE Common Environment. It is installed in /site/12gev_phys.



gemc uses the JLAB common environment, set up using the softenv script:

source /site/12gev_phys/softenv.csh 2.5


Available Versions

The supported gemc versions can be found here.


gemc on your PC

A step-by-step guide on how to install geant4 and all its dependencies can be found here.


New Feature Requests

If you want a gemc feature not included in the standard release I'll be happy to help implement it.  






EVIO is the event format that was developed as the native format for raw data generated by the CODA data acquisition toolkit. The goals of EVIO are:

EVIO is a hierarchical format whose basic building block is a structure known as a bank. Each bank has two parts, a header and a payload. The header contains meta information such as the length of the bank, the type of data in the payload, and a numerical tag that is used in conjunction with a dictionary to provide a description of the payload. The payload of any particular bank is homogeneous; that is, a bank can contain integers, or real numbers, or other banks, but not a mixture. The ability to have a bank containing banks gives EVIO the flexibility to construct complex data structures.

Version 1 of the CODA EVIO package, written in C, was in use at Jefferson Lab for over a decade.  It saw extensive use in Halls A and C, where all of the raw data for the 6 GeV program was written to disk in EVIO format. EVIO saw limited use in Hall B, with CLAS choosing to store their raw data in BOS/FPACK format. PRIMEX and the GlueX BCAL test also stored their raw data in EVIO format, and EVIO has been used in experiments and test stands off the JLab site.


Versions 2 and 3 extended the EVIO API into C++ and Java. They could then take advantage of OO techniques. Mapping banks onto objects allows, for example, graphical visualization of the event structure and serialization into XML.


Version 4 has been developed as the raw data format for all of the experiments in the 12 GeV era, to be used by all of the halls. The package is supported in C, C++ and Java. Extensions to the API improve support for using EVIO to store the output from offline analysis and reconstruction code. The improved API also allows more flexibility in using EVIO to decode events offline.


Documentation and code.


The EVIO documentation and code are hosted on the CODA website.



EVIO 4.3

Documentation in PDF format : Evio 4.3 Documentation


Download the code : Evio 4.3 Download


Committee members


Traditionally scientific software packages have been supported on an ad-hoc basis by various staff and users in ENP division. Meanwhile the scientific computing group in IT division has been responsible for the day-to-day operation of the systems used to run simulation and analysis jobs. This committee was formed to formalize the support of scientific software. The primary goal is to give the laboratory users clear mechanisms to obtain the software that they need, to get help and to give feedback. It is also a goal of this committee to ensure that the software packages are well maintained and updated in a way that minimizes disruption of users. For further details please see the charge to the committee.



The committee is charged with providing support for scientific software packages for data simulation and analysis. Specifically the committee shall:

  1. Define the relationship between ENP and IT in this area and develop policies and procedures to organize software support.
  2. Identify software packages, both lab written and third party, that are commonly used for data analysis by laboratory users.
  3. Identify or assign staff or users to be responsible for the installation and maintenance of the individual software packages.
  4. Identify or assign staff or users to be responsible for the day-to-day support of the individual software packages.
  5. Provide mechanisms for management and distribution of information, documentation and software packages and to give notification to the users when updates occur.
  6. Provide mechanisms for the users to submit requests for support, manage those requests and identify responders.
  7. Revisit all of these areas on a regular basis so that they continue to be supported.
  8. Upon request by users evaluate whether new software packages should be added to the supported suite. 

A PDF of the memo that established the charge is linked below.

20121205_Charge_Memo.pdf49.64 KB

2012-11-08 Full committee

  • Package vs point person
  • IT features (Graham, Sandy, Brad).
    • Website
      • Documentation
      • Meeting info
    • Helpdesk integration
    • Mailing list for announcement - use jlab-scicomp-briefs
  • Governance
    • Mission Statement
    • Chair - Graham
    • Next meeting - after Thanksgiving

Whiteboard snapshot

List of supported software
CernLib Steve W.
CLHEP, GEANT4 Maurizio, Paul Gueye
EVIO Carl T.



Graham to be chair.


The committee needs a charge.

Sub-committee of Brad, Sandy and Graham to look at IT Features items from above list.


2012-12-04 IT sub-committee

Sandy Philpott (chair), Graham Heyes, Brad Sawatzky
Discuss integration of scientific software support with the IT help desk (CCPR system), documentation, notifications and version control.
Here is a summary of our sub-committee meeting Tuesday, addressing IT support for Physics software:


New category PHYSICS SOFTWARE added to CCPR system
 - assigned to IT staff - Sandy or other
 - email to Brad, Graham, Mark, Ole, Bob, Steve, Mauri, for starters
 - the email interface allows status updates - need to dig out the syntax details


Suggest keeping the current location as a starting point, at

and updating this overview to include 1) an overview of the JLab support model for Physics software, 2) a short description of each support package and 3) a current link to its JLab documentation.

Any relevant pages found through google and web searches of the JLab site should be consolidated at this top level, and stale pages should be replaced with a permanent redirect to the main jump page.

We also discussed that the documentation, software, meeting minutes etc would be hosted on a website controlled by ENP.

Standard and multiple versions

Users on site have different methods of accessing software -- the JLab legacy "setup" and "use" commands, and the $JLAB_ROOT environment maintained by Mauri in /site/12gev_physics. More details are needed on running different versions than default, and identifying the PRO production versions.  How does /site/12gev_physics interact with the versions in /apps? Does it need to? These available access methods need maintaining and documentation. Do users still need "use" and/or "setup" ?  The choice between "setup", "use", and "/site/12gev_physics" may also impact who is in charge of the production software.

The production /apps/<software>/PRO version, or that in $JLAB_ROOT, should only change during a scheduled maintenance period that has been announced to users.


General announcements about software updates and status can go to the jlab-scicomp-briefs mailing list.

2012-12-06 Full committee


  • Minutes of previous meeting.
  • Presentation and discussion of charge.
    • Is there anything that we want to add to the scope?
    • What about software distribution?
  • Presentation and discussion of progress by sub-group on documentation and feedback.
    • Can we start updating documentation and who will do it?
  • Discussion of how to proceed with other items in the charge.
  • Action items for period between this meeting and the next.
  • AOB.

In attendance
Graham Heyes, Bob Michaels, Patrizia Rossi, Javier Gomez, Steve Wood, Mark Ito, Brad Sawatzky, Maurizio Ungaro, Sandy Philpott


  • The charge was presented and it was agreed that it was a good draft and should be forwarded to Rolf and Chip for approval.
  • The committee felt that, since it drafted its own charge, it should be free to amend the charge in future (subject to approval).
  • Sandy presented her minutes on the meeting of the group discussing integration with the help desk, documentation and IT-specific issues.
    • The committee was impressed with the progress made towards using CCPRs for managing support requests. In particular one user had already asked for help via this mechanism and Maurizio was providing support for the user. So the system works!
    • Sandy will provide some "how to" documentation so that people managing packages know how to use the CCPR system from the perspective of a support provider.
    • How to proceed with documentation was discussed. It was agreed that will be used for meeting minutes, notes and other documentation of a general nature and will be linked both ways to the existing IT scientific computing website. There was discussion on the issue of where documentation for the supported software packages should be hosted. There was tentative agreement that it does not matter where the package-specific documentation resides if it is clearly linked from the general information on the IT and sites. Various members of the committee will be given access to both sites so that they can add information.
    • The issue of setup and version control was discussed. A sub group of Maurizio, Bob and Steve will look into this.

Action items from the meeting

  • Clean up and start adding meeting information, notes and general information for the users.
  • Sandy to provide CCPR information.
  • Give the committee members editing access to the websites as required.
  • Sub committee to look at software setup and version management will report next time.
  • Next meeting will be in January.

2013-04-12 Full Committee

In attendance

Graham Heyes, Bob Michaels, Javier Gomez, Mark Ito, Maurizio Ungaro, Sandy Philpott

Action items from the last meeting

  • Clean up and start adding meeting information, notes and general information for the users.
    • Meeting info and agenda was added with some notes for users.
  • Sandy to provide CCPR information.
    • Did this happen?
  • Give the committee members editing access to the websites as required.
    • Nobody has asked for access to, has anyone looked at the IT maintained pages?
  • Sub committee to look at software setup and version management will report next time.


  • Reminder of minutes from last meeting.
  • Continue the discussion of software setup and version management. (I recall that a sub group was supposed to look into this in more detail but haven't seen any emails on the subject, what is going on?).
  • Discuss the CCPR system, how is this working? What else do we need from IT?
  • Documentation progress. Access to websites etc, who needs it?
  • Discussion points from committee members.
  • How often should we meet?
  • AOB


  • The CCPR system seems to have been working as advertised and requests from users have been handled.
    • Mark asked if it was possible to add people to a CCPR thread (conversation). The example used was an expert who isn't normally on the list. Sandy replied that it isn't a feature of the CCPR system. The workaround is to remember that a third party invited into a CCPR thread will not automatically receive any replies sent to the CCPR system; they need to be forwarded manually.
    • Sandy pointed out that most members of the committee are not able to log into the web interface to CCPR that the IT division uses. Our interaction with the system is via email and Sandy moderates using the web interface.
    • Sandy will write a short email reminding committee members how the CCPR works in the context of requests for our help.
  • Maurizio presented a scheme that he is using for software setup that allows for the management of several versions of the same package for different operating systems and architectures. Bob has been using a similar scheme for ROOT.
    • Mark commented on the usefulness of the existing scheme using the /apps directory structure. After some discussion it was agreed that, at least for the near future, any new scheme should be implemented so as not to break the old. At a future date the committee may want to declare that the old will become obsolete but that is open to debate.
    • Sandy and Bob both suggested that the old /apps scheme could be implemented using symbolic links to the appropriate directories in the new system.
    • Maurizio will work with Bob and then with the other software maintainers to implement his scheme in a consistent way for all of the supported packages.
  • Documentation on this website was discussed. The link on the IT wiki will be replaced with a link to this site. The maintainers of the software packages will receive instructions via email on how to access this site as an editor and add content.
    • Sandy gave a quick preview of the new SciComp site which will also use Drupal.
  • Mark commented that we should make it clear to the users that this committee exists and tell them how to contact us with feedback.
    • The website will be updated so that it contains more useful content.
    • An email will go out to all users.
    • We should get a mention in the weekly JLab Brief email. 
  • It was agreed that in the near term we should try to meet once a month.


Action items from this meeting

  • More work on the web site functionality and content.
    • Email to the software maintainers with instructions on how to edit content.
  • Email from Sandy on CCPR system.
  • Maurizio and Bob to work on package and version management.
  • Sandy, Bob and Maurizio to look into backwards compatibility with /apps.
  • Add other supported packages to the scheme.
  • Sandy to link IT wiki to this website.
  • Draft an email and JLab Brief article for discussion and review next time.
  • Meet in a month - added to Zimbra calendar for May 24th.

2013-07-19 Full committee


Action items from the last meeting

  • More work on the web site functionality and content.
    • Email to the software maintainers with instructions on how to edit content.
  • Email from Sandy on CCPR system.
  • Maurizio and Bob to work on package and version management.
  • Sandy, Bob and Maurizio to look into backwards compatibility with /apps.
  • Add other supported packages to the scheme.
  • Sandy to link IT wiki to this website.
  • Draft an email and JLab Brief article for discussion and review next time.
  • Meet in a month - added to Zimbra calendar for May 24th.


  • Reminder of minutes from last meeting.
  • What has been done since the last meeting and what is left on the list?
  • People should add documentation etc to the support website.
  • Version management.
  • AOB



Action items from this meeting