Here, the %PDI \ref trace_plugin "Trace plugin" is used to trace %PDI calls.
Notice that some share/reclaim pairs come one after the other while others are
interlaced.
Is one better than the other?
If you do not know the answer to this question, just wait until ex5. :)


## Decl'HDF5 plugin

From exercise 3 to exercise 9 included, we present the \ref Decl_HDF5_plugin
"Decl'HDF5 plugin" (`decl_hdf5`). We will introduce some keywords (`when`,
`datasets`, ...) in the sub-tree of `decl_hdf5` in the configuration YAML file.

All keywords are defined in the last section **full configuration example** of
the \ref Decl_HDF5_plugin "Decl'HDF5 plugin" documentation.

### Ex3. HDF5 through PDI

In this exercise, the C code is the same as in ex2. You only need to modify the
YAML file (`ex3.yml`).

* Examine the YAML file, compile the code and run it.

The \ref Decl_HDF5_plugin "Decl'HDF5 plugin" (`decl_hdf5`) is added to the
specification tree.
In its configuration, the `dsize` variable is defined as metadata for %PDI.

* Write the `psize` and `pcoord` variables in addition to `dsize` in a file
`ex3.h5` with one MPI process.
To achieve this, you will need to fill in two sections of the YAML file:

1. The `data` section, to indicate to %PDI the \ref datatype_node "type" of the
fields that are exposed.

2. The `decl_hdf5` section, for the configuration of the \ref Decl_HDF5_plugin
"Decl'HDF5 plugin".

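As a rough sketch of how the two sections fit together (this is not the
reference solution; the exact types and layout are assumptions), `ex3.yml`
could look like this:

```yaml
# hypothetical sketch of ex3.yml, not the reference solution
pdi:
  data:                       # types of the fields exposed to PDI
    dsize:  { type: array, subtype: int, size: 2 }
    psize:  { type: array, subtype: int, size: 2 }
    pcoord: { type: array, subtype: int, size: 2 }
  plugins:
    decl_hdf5:
      file: ex3.h5            # single file, written by the single MPI process
      write: [ dsize, psize, pcoord ]
```
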
* Use the `h5dump` command to see the content of your HDF5 output file in the
same format as the `ex3.h5dump` text file. You can easily check whether the
files are the same by running:
```bash
diff ex3.h5dump <(h5dump ex3*.h5)
```
To see your `h5` file in a readable format, you can check the section
[Comparison with the `h5dump` command](#h5comparison).

\warning
If you relaunch the executable, remember to delete your old `ex3.h5` file
first, otherwise the data will not be changed.

\warning
Since every MPI rank writes to the same location in the file, this exercise
will fail when more than one MPI rank is used. The next exercise solves this
issue.

### Ex4. Writing some real data

In this exercise each MPI process will write its local 2D array block contained
in the `main_field` variable to a separate HDF5 file.
Once again, this can be done by modifying the YAML file only, no need to touch
the C file.

* Examine the YAML file, compile the code and run it.

\note Notice that in the YAML file `ex4.yml`, a list with multiple write blocks
was used in the `decl_hdf5` section, instead of a single block as before, in
order to write to multiple files.

\note Notice that we have moved the fields (`dsize`, `psize` and `pcoord`) into
the `metadata` section.
```yaml
pdi:
  metadata: # small values for which PDI keeps a copy
    # *** add ii as metadata
    # ...
    dsize: { type: array, subtype: int, size: 2 }
```
You can reference them from dynamic "$-expressions" in the configuration file.
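For instance, a metadata value such as `pcoord` can appear in a
"$-expression"; as an illustration (the exact file-name pattern here is an
assumption), a per-rank file name could be built like this:

```yaml
# hypothetical sketch: the pcoord metadata referenced from a $-expression
decl_hdf5:
  - file: ex4-data-${pcoord[0]}x${pcoord[1]}.h5
    write: [ main_field ]
```
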

\remark Also notice that this example now runs in parallel with 4 processes.
To ensure we do not write to the same file from distinct ranks, we specify the
file name using a "$-expression" involving the process coordinates.

Unlike the other fields manipulated until now, the type of `main_field` is not
fully known: its size is dynamic. Therefore, we need to define its size in the
YAML file for %PDI using "$-expressions".

* Describe the temperature data of the current iteration by using a
$-expression to specify the size of `main_field` in the `data` section.
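A hedged sketch of such a type description (assuming `main_field` holds
doubles and that `dsize` gives the local block size):

```yaml
# hypothetical sketch: the size of main_field comes from the dsize metadata
data:
  main_field:
    type: array
    subtype: double
    size: [ '$dsize[0]', '$dsize[1]' ]
```
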

Unlike the other fields manipulated until now, `main_field` is exposed multiple
times during execution.
In order not to overwrite it every time it is exposed, we propose to write one
file per rank only at the first iteration (`ii=1`), using the `when` directive.

* Add the iteration counter `ii` as a metadata.
* Write the current temperature field in one file per process at the first
iteration.

You should be able to match the expected output described in `ex4.h5dump`.
You can easily check whether the files are the same by running:
```bash
diff ex4.h5dump <(h5dump ex4-data-*.h5)
```
To see your `h5` file in a readable format, you can check the section
[Comparison with the `h5dump` command](#h5comparison).

\note A definition of `metadata` and `data` can be:

- `metadata`: small values for which PDI keeps a copy. These values can be
referenced by using "$-expressions" in the configuration YAML file.

- `data`: values for which PDI does not keep a copy.

### Ex5. Introducing events and groups of datasets

This exercise is done sequentially to facilitate the comparison between logs.

#### Ex 5.1 PDI events and on_event

In ex4, two variables were written to `ex4-data-*.h5`, but the files were
opened and closed for each and every write.

Since Decl'HDF5 only sees the data appear one after the other, it does not keep
the file open. Since `ii` and `main_field` are shared in an interlaced way,
they are both available to %PDI at the same time and could be written without
opening the file twice.
You have to use events for that; you will modify both the C and YAML files in
this exercise.

* Examine the YAML file and source code.
* In the C file, trigger a %PDI event named `loop` when both `ii` and
`main_field` are shared.

With the \ref trace_plugin "Trace plugin", check that the event is indeed
triggered at the expected time as described in `ex5.log` (only the lines
matching `[Trace-plugin]` have been kept).
Following the previous section
[Execution with storage of the log](#execution-with-storage-of-the-log), run
this exercise and save the output log to `ex5.result.log`.
After that, you can easily check whether the files are the same by running:
```bash
diff ex5.log <(grep Trace-plugin ex5.result.log)
```
* In the YAML file, use the `on_event` mechanism to trigger the write of `ii`
and `main_field` for the `loop` event only. This mechanism can be combined with
a `when` directive; in that case the write is only executed when both
mechanisms agree.
* Add a `when` directive to write only at iterations 1 and 2. Use the symbol
`&`, which corresponds to the logical operation "and".

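A hedged sketch of how `on_event` and `when` could be combined (the file-name
pattern and the exact condition are assumptions, not the reference solution):

```yaml
# hypothetical sketch, not the reference solution
decl_hdf5:
  - file: ex5-data-${pcoord[0]}x${pcoord[1]}.h5
    on_event: loop              # write only when the "loop" event is triggered
    when: '($ii>0)&($ii<3)'     # and only at iterations 1 and 2
    write: [ ii, main_field ]
```
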
#### Ex 5.2 Groups of datasets

In ex4, the names of the datasets in the `.h5` file are `ii` and `main_field`
(see `ex4.h5dump`).
Using the keyword `dataset`, it is possible to give the dataset a name
different from the %PDI variable name.

The name of the dataset is given after the definition of the data:
```yaml
write:
  ii: # name of the PDI data to write
    dataset: 'new_name'
```
Using this mechanism, it is possible to define an HDF5 group object (see
https://support.hdfgroup.org/documentation/hdf5/latest/_h5_g__u_g.html).
If you want to add a dataset `my_data` in the sub-group `groupA` of the group
`my_group`, the name of the dataset will be:
```yaml
dataset: 'my_group/groupA/my_data'
```
where the symbol "/" is used to separate groups in the path.

* Change the YAML file to write `main_field` and `ii` at iterations 1 and 2,
in two distinct groups `iter1` and `iter2`.

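One possible layout (a hypothetical sketch, not the reference solution; file
name and conditions are assumptions) uses two write blocks, one per group:

```yaml
# hypothetical sketch: one write block per iteration group
decl_hdf5:
  - file: ex5-data-${pcoord[0]}x${pcoord[1]}.h5
    on_event: loop
    when: '$ii=1'
    write:
      main_field: { dataset: 'iter1/main_field' }
      ii:         { dataset: 'iter1/ii' }
  - file: ex5-data-${pcoord[0]}x${pcoord[1]}.h5
    on_event: loop
    when: '$ii=2'
    write:
      main_field: { dataset: 'iter2/main_field' }
      ii:         { dataset: 'iter2/ii' }
```
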
Your output should match the expected content described in `ex5.h5dump`. You
can easily check whether the files are the same by running:
```bash
diff ex5.h5dump <(h5dump ex5-data-*.h5)
```
To see your `h5` file in a readable format, you can check the section
[Comparison with the `h5dump` command](#h5comparison).


### Ex6. Simplifying the code