This package is used inside heppy to produce flat ROOT trees from FCCSW EDM ROOT files produced with the EventProducer.
If you do not intend to contribute to the heppy repository, simply clone it:
git clone [email protected]:HEP-FCC/heppy.git
If you aim to contribute to the heppy repository, you need to fork it and then clone your fork:
git clone [email protected]:YOURGITUSERNAME/heppy.git
Then go to the heppy directory:
cd heppy
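If you cloned a fork, it can help to also register the main repository as a second remote so you can keep your fork up to date; a minimal sketch (the remote name upstream is just a convention):
git remote add upstream [email protected]:HEP-FCC/heppy.git
git fetch upstream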
Source the FCCSW software stack:
source /cvmfs/fcc.cern.ch/sw/0.8.3/init_fcc_stack.sh
and set up the heppy environment:
source ./init.sh
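These two source commands have to be repeated in every new shell; a hypothetical helper script (name and paths are examples only) that bundles them:
# fcc_heppy_env.sh -- example helper, source it (do not execute) from the heppy directory
source /cvmfs/fcc.cern.ch/sw/0.8.3/init_fcc_stack.sh
source ./init.sh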
If you do not intend to contribute to the FCChhAnalyses repository, simply clone it:
git clone [email protected]:FCC-hh-framework/FCChhAnalyses.git
If you aim to contribute to the FCChhAnalyses repository, you need to fork it and then clone your fork:
git clone [email protected]:YOURGITUSERNAME/FCChhAnalyses.git
Now you are ready to run an existing analysis or write your own.
To run an analysis locally:
heppy_loop.py Outputs/HELHC FCChhAnalyses/HELHC/Zprime_tt/analysis.py
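After a local run finishes, the flat trees appear under the output directory given as the first argument; a quick, hedged way to check what was produced (exact sub-directory names depend on the analysis):
ls Outputs/HELHC
find Outputs/HELHC -name '*.root'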
For larger samples it is better to run on a batch system.
With LSF:
heppy_batch.py -o FCChhAnalyses/HELHC/Outputs FCChhAnalyses/HELHC/Zprime_tt/analysis.py -b 'bsub -q 8nh < batchScript.sh'
With CONDOR:
heppy_batch.py -o FCChhAnalyses/HELHC/Outputs FCChhAnalyses/HELHC/Zprime_tt/analysis.py -b 'run_condor.sh --bulk tttt_condor -f workday'
-> the --bulk option is needed to submit all the Chunks in a single CONDOR job
-> without --bulk, there will be one job per Chunk
To retry failed jobs, run:
heppy_check.py Outdir/Chunk -b 'run_condor.sh -f workday'
-> it will resubmit each failed Chunk as a single CONDOR job
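If you first want to see which Chunks failed without resubmitting anything, heppy_check.py can, as far as I am aware, be run without the -b option; treat this as an assumption and consult heppy_check.py --help:
heppy_check.py Outdir/Chunk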
The CONDOR queue names (time limit -> queue name):
20 mins -> espresso
1h -> microcentury
2h -> longlunch
8h -> workday
1d -> tomorrow
3d -> testmatch
1w -> nextweek
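For example, to give the jobs up to one day instead of eight hours, pass the tomorrow queue to run_condor.sh via -f (same submission command as above, only the queue name changes):
heppy_batch.py -o FCChhAnalyses/HELHC/Outputs FCChhAnalyses/HELHC/Zprime_tt/analysis.py -b 'run_condor.sh --bulk tttt_condor -f tomorrow'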
In order to save time, the outputs have already been produced; they are stored on EOS:
/eos/experiment/fcc/helhc/analyses/Zprime_tt/heppy_outputs/helhc_v01/
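On lxplus, where EOS is mounted, the directory can be browsed directly; from other machines xrdfs should work, assuming the files are served by the eospublic.cern.ch instance (an assumption, not stated above):
ls /eos/experiment/fcc/helhc/analyses/Zprime_tt/heppy_outputs/helhc_v01/
xrdfs root://eospublic.cern.ch ls /eos/experiment/fcc/helhc/analyses/Zprime_tt/heppy_outputs/helhc_v01/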