How to work with the Storage Element¶
The Storage Element (SE) offers a large storage area that can be accessed directly on the Tier-3, but it is also part of the WLCG Grid and its files can be accessed in a secure way from outside, or you can use grid services to copy data from other grid SEs to it. Our SE is running the dCache software.
Local access on Tier-3 nodes using the NFS mount¶
The CMS file space is mounted below /pnfs/psi.ch/cms/trivcat/store on all nodes, and as a local user you can access files there just like any other files on a node, using commands like cp, mv, rm, etc.
On login (UI) nodes the SE is mounted in read/write mode, but on the worker nodes it is only mounted read-only.
As described in Understanding Tier-3 storage, the SE mount does not offer a fully POSIX-compliant file system: you cannot modify (e.g. append to) an existing file. You can only erase it and then write a new file with the same name.
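For example, "updating" a file on the mount means removing it and writing it again. A minimal sketch, run on a UI node (where the mount is read/write); the file name myfile.root and your user area are assumed here for illustration:
$ SE_DIR=/pnfs/psi.ch/cms/trivcat/store/user/$USER
$ ls -l $SE_DIR                        # browse the SE like a normal directory
$ cp myfile.root $SE_DIR/              # writing a new file works
$ rm $SE_DIR/myfile.root               # but to replace it, erase it first...
$ cp myfile.root $SE_DIR/myfile.root   # ...then write it again in full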
Examples for copying files¶
XROOTD LAN (local area network, for local access from UI and worker nodes)¶
Listing files with xrdfs, executed on a UI against the Xrootd LAN service:
$ xrdfs t3dcachedb03.psi.ch ls -l -u //pnfs/psi.ch/cms/trivcat/store/user/$USER/
...
-rw- 2015-03-15 22:03:41 5356235878 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xroot
-rw- 2015-03-15 22:06:04 131870 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/xrootd.
-rw- 2015-03-15 22:06:45 1580023632 root://192.33.123.26:1094///pnfs/psi.ch/cms/trivcat/store/user/martinelli_f/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root
...
Copying a file with xrdcp, executed on a UI against the Xrootd LAN service:
$ xrdcp -d 1 root://t3dcachedb03.psi.ch:1094///pnfs/psi.ch/cms/trivcat/store/user/$USER/ZllH.DiJetPt.Mar1.DY1JetsToLL_M-50_TuneZ2Star_8TeV-madgraph_procV2_mergeV1V2.root /dev/null -f
[1.472GB/1.472GB][100%][==================================================][94.18MB/s]
XROOTD WAN (wide area network, access from outside of Tier-3, stage-in / stage-out)¶
The read/write Xrootd WAN service is reachable at root://t3se01.psi.ch:1094//
Do NOT use this service for local analysis jobs. We limit the number of parallel transfers through this door, since it should only be used for efficient WAN copies, i.e. transfers of large files at high bandwidth; too many such transfers could harm the availability of the Tier-3's small number of storage servers. If you use this door for your analysis jobs, you will find that many of them get queued.
$ xrdfs cms-xrd-transit.cern.ch locate /store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root
[::192.33.123.24]:1095 Server Read
$ host 192.33.123.24
24.123.33.192.in-addr.arpa domain name pointer t3se01.psi.ch.
$ xrdcp --force root://cms-xrd-transit.cern.ch//store/mc/RunIIFall15MiniAODv2/ZprimeToWW_narrow_M-3500_13TeV-madgraph/MINIAODSIM/PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/00000/86A261F4-3BB8-E511-88EE-C81F66B73F37.root /dev/null
[32MB/32MB][100%][==================================================][32MB/s]
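Conversely, from a machine outside the Tier-3 you can stage a file out to the SE through the same WAN door. A sketch, assuming a valid grid proxy and a hypothetical local file myfile.root:
$ xrdcp myfile.root root://t3se01.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/$USER/myfile.root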
ROOT examples¶
- Reading a file in ROOT by xrootd
https://confluence.slac.stanford.edu/display/ds/Using+Xrootd+from+root
$ root -l
root [0] TFile *_file0 = TFile::Open("root://t3dcachedb03.psi.ch:1094//pnfs/psi.ch/cms/trivcat/store/user/leo/whatever.root")
GFAL2 examples¶
The gfal2 tools offer a wide range of utilities: gfal-cat, gfal-copy, gfal-ls, gfal-chmod, gfal-mkdir, gfal-rm, gfal-save, gfal-sum and gfal-xattr, each with a corresponding manual page, e.g. $ man gfal-rm .
Example usage
$ gfal-copy --force root://t3dcachedb03.psi.ch/pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:////$PWD/
$ gfal-mkdir root://t3dcachedb03.psi.ch/pnfs/psi.ch/cms/trivcat/store/user/user_id/testdir
$ gfal-copy root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 file:/dev/null -f
Copying root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/t3-nagios/1MB-test-file_pool_t3fs14_cms_11 [DONE] after 0s
$ gfal-ls -l root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user
dr-xr-xr-x 0 0 0 512 Feb 21 2013 alschmid
...
Removing a file from dCache:
$ gfal-rm root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
Erasing a whole remote (non-empty) directory recursively:
$ gfal-rm -r root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/dir-name
Example usage of the gfal-save and gfal-cat commands:
$ cat myfile
Hello T3
$ cat myfile | gfal-save root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
$ gfal-cat root://t3dcachedb03.psi.ch//pnfs/psi.ch/cms/trivcat/store/user/auser/myfile
Hello T3
Getting data from remote SEs to the T3 SE¶
Official datasets¶
For official datasets/blocks that are registered in the CMS DBS, you must use the Rucio system.
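A minimal sketch of requesting a replica with the Rucio client; the dataset name below is only an example, and rucio add-rule asks for one copy of the given DID at the named site:
$ rucio add-rule cms:/ZprimeToWW_narrow_M-3500_13TeV-madgraph/RunIIFall15MiniAODv2-PU25nsData2015v1_76X_mcRun2_asymptotic_v12-v1/MINIAODSIM 1 T3_CH_PSI
$ rucio list-rules --account $RUCIO_ACCOUNT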
Job Stageout from other remote sites¶
You can try to stage out your CRAB3 job outputs directly to T3_CH_PSI, but if these transfers become too slow and/or unreliable, stage out first to T2_CH_CSCS and afterwards copy your files to T3_CH_PSI.
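The stage-out site is selected in your CRAB3 configuration; a fragment, assuming the standard CRABClient configuration object named config:
config.Site.storageSite = 'T3_CH_PSI'    # direct stage-out to the Tier-3
# config.Site.storageSite = 'T2_CH_CSCS' # fall back to CSCS if T3 transfers are unreliable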