% nikhef_pdp.tex -- Revision 953, Wed Oct 21 08:40:48 2009 UTC, by aramv
\chapter{Nikhef \& Grid computing}
\section{Nikhef}
Nikhef (\textit{Nationaal instituut voor subatomaire fysica}) is the Dutch national institute for subatomic physics.
\glossary{name={Nikhef}, description={Nationaal instituut voor subatomaire fysica (Dutch national institute for subatomic physics). Originally: Nationaal Instituut voor Kernfysica en Hoge Energie-Fysica}}
It is a collaboration between the \textit{Stichting voor Fundamenteel Onderzoek der Materie} (FOM), the \textit{Universiteit van Amsterdam} (UvA), the \textit{Vrije Universiteit Amsterdam} (VU), the \textit{Radboud Universiteit Nijmegen} (RU) and the \textit{Universiteit Utrecht} (UU).
\glossary{name={Stichting FOM}, description={Stichting voor Fundamenteel Onderzoek der Materie}}
\glossary{name={UvA}, description={Universiteit van Amsterdam}}
\glossary{name={VU}, description={Vrije Universiteit Amsterdam}}
\glossary{name={RU}, description={Radboud Universiteit Nijmegen}}
\glossary{name={UU}, description={Universiteit Utrecht}}
The name was originally an acronym for \textit{Nationaal Instituut voor Kernfysica en Hoge Energie-Fysica} (National institute for nuclear and high-energy physics).
After the linear electron accelerator was shut down in 1998, research into experimental nuclear physics was discontinued, but the Nikhef name has been retained to the present day.
\cite{nikhefwebsite:overnikhef}

Today, Nikhef's research focuses on subatomic particles.
Most employees at Nikhef work on physics projects, some of which, such as ATLAS, ALICE and LHCb, are directly related to the \textit{Large Hadron Collider} (LHC) particle accelerator at the \textit{European Organization for Nuclear Research} (CERN).
\glossary{name={LHC}, description={Large Hadron Collider, the particle accelerator at CERN}}
%\glossary{name={CERN}, description={Organisation Européenne pour la Recherche Nucléaire \(European Organization for Nuclear Research\)}}
\glossary{name={CERN}, description={Organisation Européenne pour la Recherche Nucléaire}}
% The other physics projects are astroparticle physics, detector R\&D and theory.

%It owned the second installation of a computer system in the Netherlands (the first being owned by the CWI).
Among the technical departments at Nikhef are \textit{Mechanical Engineering} (EA), the \textit{Mechanical Workshop} (MA), \textit{Electronics Technology} (ET) and \textit{Computer Technology} (CT).

High-energy physics experiments generate vast amounts of data, and analyzing that data requires equally vast amounts of computing power.
In the past, supercomputers provided this power, but to analyze the high-energy subatomic particle interactions produced by the LHC experiments, a new method of pooling computing resources was adopted: Grid computing.

The CT department provides Nikhef's computing infrastructure.
The \textit{Physics Data Processing} (PDP) group is an offshoot of the CT department that develops Grid infrastructure, policy and software.

%The management of Nikhef provides resources to facilitate the technical departments and projects.
%Please see figure ~\ref{fig:nikhef_organigram} for an organizational chart of Nikhef.

\pagebreak
%\section{High energy physics}
%The detectors in the LHC detect particle interactions.
%These particle interactions are called events.
%Each event is self-contained and does not interact with any other event.
%The amount of data per event no longer requires a shared-memory machine.
%Regular hardware in a (Beowulf) cluster setup is sufficient.
%The events are screened and filtered by several layers of triggers.
%These triggers can be implemented in the detector hardware or in software.

%The multi-level triggers filter out noise and events deemed not to contain relevant information.
%Taking into account the possibility of new physics makes this a true challenge.
%Even though strong filtering is applied, a lot of data still needs to be analyzed.
%When the LHC was designed, it became clear that it was unfeasible to host all the computing and data resources at CERN for the four big LHC experiments and all their researchers across the world.
%To avoid hosting all the resources and facilities for all the researchers, who would otherwise have had to travel to their analysis data, a new method of pooling computing resources was adopted: Grid computing.

%The goal of Grid computing is to utilize existing computing infrastructures as effectively as future infrastructures.




% Astro-particle physics and theory

% History of Nikhef -> computing resources needed -> Grid

% Largest share works on ATLAS, then LHCb, electronics, Grid.
% Grid is an offshoot of the CT group.
% More people are involved in building detectors etc. than in Grid computing. More people work on electronics, mechanics and the design department than on Grid computing.
% Supercomputers, high energy, lots of interactions, high-frequency interactions, lots of analysis methods.
% This prompted a new way of computing, namely Grid computing.
% CERN foresaw in the late 1990s that the computing resources would not be sufficient. In 2000 came the realization that investment in Grid computing was needed, or in linking clusters together, since 'everyone' already had clusters.
% Use existing infrastructures just as well as newly built ones, preferably at the same time.

% Astro-particle physics is an important topic.
% There is also a theory department.
% Gravitational wave analysis. Working on components that make the measurements more precise.
\pagebreak
\begin{figure}[hp]
\centering
\includegraphics[width=\textwidth]{nikhef_organigram}
\caption[Nikhef organizational chart]%
{A diagram showing the organizational structure of Nikhef}
\label{fig:nikhef_organigram}
\end{figure}

\section{Participating organizations}
Like supercomputers, Grids attract scientific research.
This has led to a community of Grid computing users that advances the Grid computing field on an international scale.

Some of the cooperating organizations within the Grid computing community are:

\begin{itemize}
\item BiG Grid, the Dutch e-science Grid, an example of a \textit{National Grid Initiative} (NGI), of which there are many.
\item The \textit{Enabling Grids for E-sciencE} (EGEE) project, a leading body for NGIs, to be transformed into the \textit{European Grid Initiative} (EGI).
\item The \textit{LHC Computing Grid} (LCG), the Grid employed by CERN to store and analyze data generated by the \textit{Large Hadron Collider} (LHC), and also a member of EGEE.
\item The \textit{Virtual Laboratory for e-Science} (VL-e), a separate entity that aims to make Grid infrastructure accessible to e-science applications in the Netherlands.
\end{itemize}
\glossary{name={VL-e}, description={Virtual Laboratory for e-Science}}

96
\section{Grid resources}
The following gives an impression of the resources potentially available at the national (BiG Grid) and international (EGEE) level.
These numbers are not static, as the Grid is dynamic in nature.
Resources shift in and out due to maintenance or upgrades.
The Grid has a tendency to grow in computing and storage capacity.
%TODO? reference

\begin{itemize}
\item BiG Grid has between 4500 and 5000 computing cores (not including LISA, which has roughly 3000 cores) and about 4.7 petabytes of storage. The capacity of available tape storage is about 3 petabytes.
\item EGEE has roughly 100,000 computing cores and 50 petabytes of storage (most of which is tape storage).
%TODO? how much
\end{itemize}

% BiG Grid:
% #CPU: ~4500 to 5000 cores (excluding LISA, which has roughly 3000 cores).
% #disk: 1.5 PB plus 100 TB pre-stage cache. Tape is roughly another 3 PB.

% EGEE:
% #CPU: ~100k cores
% #disk: ~50 PB, most of which is tape, and even more is unreachable.


% EGEE -> EGI
% BiG Grid == an NGI; there are many of these
% LCG is a member of EGEE
% VLEmed has no ties to EGEE, but can learn from it because other similar projects do participate.
% Community around the infrastructure.
% The same happens with supercomputers. Attracts science. -> International standing.

% EGI will become a steering body for the NGIs, which have to supply all the resources themselves.


\section{PDP group}
The \textit{Physics Data Processing} (PDP) group at Nikhef is associated with \textit{BiG Grid}, the \textit{LHC Computing Grid} (LCG), \textit{Enabling Grids for E-sciencE} (EGEE), the \textit{Virtual Laboratory for e-Science} (VL-e) and the (planned) \textit{European Grid Initiative} (EGI).
\glossary{name={BiG Grid}, description={The Dutch e-science grid}}
\glossary{name={PDP}, description={Physics Data Processing}}
\glossary{name={LCG}, description={The LHC Computing Grid}}
\glossary{name={EGEE}, description={Enabling Grids for E-sciencE}}
\glossary{name={EGI}, description={European Grid Initiative}}

% Application Domain Analyst
Within Nikhef, the PDP group concerns itself with policy and infrastructure decisions pertaining to authentication and authorization for international Grid systems.
It facilitates the installation and maintenance of computing, storage and human resources.
It provides the Dutch national academic Grid and supercomputing \textit{Certificate Authority} (CA), and also delivers software such as:
\begin{itemize}
\item Grid middleware components (part of the gLite stack)
\item Cluster management software (Quattor)
\end{itemize}

% Cluster management
% Software certification & integration

% Primarily occupied with keeping the infrastructure up & running.

The PDP group employs \textit{Application Domain Analysts} (ADAs), who try to bridge the gap between Grid technology and its users by developing software solutions and offering domain-specific knowledge to user groups.
\glossary{name={ADA}, description={Application Domain Analyst}}
%Human resource tasks involve

% Software: Grid middleware (security) user tools, cluster management software (Quattor)
% Facilitating and maintaining computing and storage resources, and human resources.
% Software development.
% Policy matters at the international level.

% Concerns itself with authentication & authorization, and the policy matters connected to them.
% E.g.: runs the national academic Grid and supercomputing CA.
% Data pooling
% Areas of activity
