
Commit 9da06db

Refactor to remove unnecessary trailing spaces.

1 parent bdb7583

1 file changed, 8 additions and 8 deletions

wiki/QuickTutorial.html
@@ -11,7 +11,7 @@
 ga('create', 'UA-45938012-1', 'auto');
 ga('create', 'UA-45937491-1', 'auto', {'name': 'pnetcdfTracker'});
 ga('create', 'UA-46002884-1', 'auto', {'name': 'parallelnetcdfTracker'});
-ga('send', 'pageview');
+ga('send', 'pageview');
 ga('pnetcdfTracker.send', 'pageview');
 ga('parallelnetcdfTracker.send', 'pageview');
 </script>
@@ -25,7 +25,7 @@ <h2>Introduction</h2>
 Parallel netCDF (officially abbreviated PnetCDF) is a library for parallel I/O providing higher-level data structures (e.g. multi-dimensional arrays of typed data). PnetCDF creates, writes, and reads the same file format as the serial netCDF library, meaning PnetCDF can operate on existing datasets, and existing serial analysis tools can process PnetCDF-generated files.
 </p>
 <p>
-The most distinguishing feature of both netCDF and PnetCDF is the <em>bi-modal</em> programming interface. An application creating a file will first enter <em>define mode</em>, in which it can describe all attributes, dimensions, types and structures of variables. The program will then exit "define mode" and enter <em>data mode</em>, in which it actually performs I/O. We'll see that often in the following examples. This "declaration-before-use" model can be a little restrictive, but does allow for some aggressive optimization when carrying out I/O.
+The most distinguishing feature of both netCDF and PnetCDF is the <em>bi-modal</em> programming interface. An application creating a file will first enter <em>define mode</em>, in which it can describe all attributes, dimensions, types and structures of variables. The program will then exit "define mode" and enter <em>data mode</em>, in which it actually performs I/O. We'll see that often in the following examples. This "declaration-before-use" model can be a little restrictive, but does allow for some aggressive optimization when carrying out I/O.
 </p>
 <p>
 This brief tutorial was written with the assumption the reader has some familiarity with serial netcdf. If serial netcdf concepts like attributes, dimensions, and variables are not familiar, start with the <a href="http://www.unidata.ucar.edu/software/netcdf/docs/">NetCDF Users Guide</a>.
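The hunk above touches the tutorial's central "bi-modal" paragraph. A minimal C sketch of that define-mode/data-mode flow may help while reading the diff; the file name "demo.nc", the dimension "x", and the variable "temperature" are invented for illustration, and error checking is omitted:

    #include <mpi.h>
    #include <pnetcdf.h>

    int main(int argc, char **argv) {
        int ncid, dimid, varid, rank, nprocs;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* define mode: describe dimensions and variables; no data I/O yet */
        ncmpi_create(MPI_COMM_WORLD, "demo.nc", NC_CLOBBER, MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs, &dimid);
        ncmpi_def_var(ncid, "temperature", NC_INT, 1, &dimid, &varid);
        ncmpi_enddef(ncid);   /* leave define mode, enter data mode */

        /* data mode: each process writes one element, collectively */
        MPI_Offset start = rank, count = 1;
        int value = rank;
        ncmpi_put_vara_int_all(ncid, varid, &start, &count, &value);

        ncmpi_close(ncid);
        MPI_Finalize();
        return 0;
    }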
@@ -42,11 +42,11 @@ <h2 id="IOfromMaster">I/O from Master</h2>
 <a href="https://github.com/Parallel-NetCDF/PnetCDF/blob/master/examples/tutorial/pnetcdf-write-from-master.c">Example writer</a>
 and
 <a href="https://github.com/Parallel-NetCDF/PnetCDF/blob/master/examples/tutorial/pnetcdf-read-from-master.c">Example reader</a>
-demonstrate this less than ideal approach.
+demonstrate this less than ideal approach.
 </p>
 <h2 id="Separatefiles">Separate files</h2>
 <p>
-We present the "one-file-per-process" approach not to recommend it, but rather because it is commonly seen.
+We present the "one-file-per-process" approach not to recommend it, but rather because it is commonly seen.
 </p>
 <p>
 This approach has some significant drawbacks. What if the number of writers differs from the number of readers? What if there are a million processes? What contextual information about the application data is lost in such an approach?
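For context on the "I/O from Master" hunk: one common shape of that pattern is to gather everything to rank 0 and let it write alone. The sketch below uses PnetCDF over MPI_COMM_SELF for the rank-0 write; the linked examples may be organized differently, and all names here ("master.nc", "data") are illustrative:

    #include <stdlib.h>
    #include <mpi.h>
    #include <pnetcdf.h>

    int main(int argc, char **argv) {
        int ncid, dimid, varid, rank, nprocs, mine, *all = NULL;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        /* funnel every rank's value to rank 0 */
        mine = rank;
        if (rank == 0) all = (int *) malloc(nprocs * sizeof(int));
        MPI_Gather(&mine, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        /* only rank 0 touches the file; the other ranks sit idle */
        if (rank == 0) {
            MPI_Offset start = 0, count = nprocs;
            ncmpi_create(MPI_COMM_SELF, "master.nc", NC_CLOBBER,
                         MPI_INFO_NULL, &ncid);
            ncmpi_def_dim(ncid, "x", count, &dimid);
            ncmpi_def_var(ncid, "data", NC_INT, 1, &dimid, &varid);
            ncmpi_enddef(ncid);
            ncmpi_put_vara_int_all(ncid, varid, &start, &count, all);
            ncmpi_close(ncid);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }

The drawbacks the hunk mentions are visible even in this toy version: rank 0 must hold the entire array in memory, and the write itself has no parallelism at all.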
@@ -62,7 +62,7 @@ <h2 id="RealparallelIOonsharedfiles">Real parallel I/O on shared files</h2>
 The previous approaches either bypass parallel I/O entirely or hide a great deal of application context from PnetCDF. We now present a more natural way of carrying out I/O in a parallel program: operating on a shared file.
 </p>
 <p>
-Shared-file I/O provides several benefits. First of all, because all processes can participate in collective I/O, the underlying MPI-IO library can make use of several powerful optimizations, such as file access alignment and collective buffering. Opening or creating a dataset (file) is a collective operation as well, meaning processes store and query metadata (the number, size, and location in file of attributes and variables) in an efficient manner. Data decomposition may be more sophisticated, but it is also more likely to match how the scientific application has already split up the data among processes.
+Shared-file I/O provides several benefits. First of all, because all processes can participate in collective I/O, the underlying MPI-IO library can make use of several powerful optimizations, such as file access alignment and collective buffering. Opening or creating a dataset (file) is a collective operation as well, meaning processes store and query metadata (the number, size, and location in file of attributes and variables) in an efficient manner. Data decomposition may be more sophisticated, but it is also more likely to match how the scientific application has already split up the data among processes.
 </p>
 <p>
 Examples:
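The shared-file paragraph in this hunk is the tutorial's main point, so a short collective-write sketch shows what it looks like in practice. This is a simple 1-D block decomposition; the file name, sizes, and variable name are illustrative and error checking is omitted:

    #include <mpi.h>
    #include <pnetcdf.h>
    #define NX 10   /* elements owned by each process (illustrative) */

    int main(int argc, char **argv) {
        int i, ncid, dimid, varid, rank, nprocs, buf[NX];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        for (i = 0; i < NX; i++) buf[i] = rank;   /* this rank's slice */

        /* creating the shared dataset is itself a collective operation */
        ncmpi_create(MPI_COMM_WORLD, "shared.nc", NC_CLOBBER,
                     MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "x", (MPI_Offset)NX * nprocs, &dimid);
        ncmpi_def_var(ncid, "data", NC_INT, 1, &dimid, &varid);
        ncmpi_enddef(ncid);

        /* the _all suffix makes the write collective, letting MPI-IO
         * apply optimizations such as collective buffering */
        MPI_Offset start = (MPI_Offset)rank * NX, count = NX;
        ncmpi_put_vara_int_all(ncid, varid, &start, &count, buf);

        ncmpi_close(ncid);
        MPI_Finalize();
        return 0;
    }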
@@ -72,7 +72,7 @@ <h2 id="RealparallelIOonsharedfiles">Real parallel I/O on shared files</h2>
 </p>
 <h2 id="Flexibleinterface">Flexible interface</h2>
 <p>
-The standard netCDF and PnetCDF APIs explicitly specify the type of the application data (an array of integers, double precision, or floating point values, e.g. <tt>ncmpi_put_vara_float_all</tt>). We have further extended the PnetCDF API to accept arbitrary MPI datatypes. Say an application's data structures are more complex than a multidimensional array of a basic type, or if the application needs to write a non-contiguous selection of a given memory region to the dataset. In these situations, an MPI datatype can describe the desired data.
+The standard netCDF and PnetCDF APIs explicitly specify the type of the application data (an array of integers, double precision, or floating point values, e.g. <tt>ncmpi_put_vara_float_all</tt>). We have further extended the PnetCDF API to accept arbitrary MPI datatypes. Say an application's data structures are more complex than a multidimensional array of a basic type, or if the application needs to write a non-contiguous selection of a given memory region to the dataset. In these situations, an MPI datatype can describe the desired data.
 </p>
 <p>
 In these examples, we use <tt>ncmpi_put_vara_all</tt>, even though the datatype is a basic MPI_INT type:
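While the tutorial's own examples pass a basic MPI_INT to <tt>ncmpi_put_vara_all</tt>, the non-contiguous case the hunk's paragraph describes can be sketched with an MPI_Type_vector that selects every other element of a memory buffer. All names and sizes below are invented for illustration:

    #include <mpi.h>
    #include <pnetcdf.h>

    int main(int argc, char **argv) {
        int i, ncid, dimid, varid, rank, nprocs, buf[20];
        MPI_Datatype strided;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
        for (i = 0; i < 20; i++) buf[i] = i;

        ncmpi_create(MPI_COMM_WORLD, "flex.nc", NC_CLOBBER,
                     MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "x", (MPI_Offset)10 * nprocs, &dimid);
        ncmpi_def_var(ncid, "data", NC_INT, 1, &dimid, &varid);
        ncmpi_enddef(ncid);

        /* memory layout: 10 blocks of 1 int with stride 2, i.e. every
         * other element of buf */
        MPI_Type_vector(10, 1, 2, MPI_INT, &strided);
        MPI_Type_commit(&strided);

        /* one element of "strided" supplies the 10 ints written to file */
        MPI_Offset start = (MPI_Offset)rank * 10, count = 10;
        ncmpi_put_vara_all(ncid, varid, &start, &count, buf, 1, strided);

        MPI_Type_free(&strided);
        ncmpi_close(ncid);
        MPI_Finalize();
        return 0;
    }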
@@ -85,10 +85,10 @@ <h2 id="Flexibleinterface">Flexible interface</h2>
 </p>
 <h2 id="Non-blockinginterface">Non-blocking interface</h2>
 <p>
-A set of "non-blocking" APIs is available in PnetCDF. They can aggregate multiple smaller requests into larger ones for better I/O performance. These routines follow the MPI model of posting operations, then waiting for completion of those operations.
+A set of "non-blocking" APIs is available in PnetCDF. They can aggregate multiple smaller requests into larger ones for better I/O performance. These routines follow the MPI model of posting operations, then waiting for completion of those operations.
 </p>
 <p>
-The PnetCDF and netCDF APIs are variable oriented: if an application writes 50 variables to a dataset (file), it must make 50 calls where each call is carried out by a separate MPI-IO write call. With the PnetCDF non-blocking API, however, the library can take this collection of pending operations and then stitch them together into one larger, more efficient MPI-IO request.
+The PnetCDF and netCDF APIs are variable oriented: if an application writes 50 variables to a dataset (file), it must make 50 calls where each call is carried out by a separate MPI-IO write call. With the PnetCDF non-blocking API, however, the library can take this collection of pending operations and then stitch them together into one larger, more efficient MPI-IO request.
 </p>
 <p>
 Examples:
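To make the post-then-wait model in this hunk concrete, here is a sketch that writes several variables with the non-blocking calls and completes them in one collective wait. The variable names, counts, and file name are invented, and error checking is omitted:

    #include <stdio.h>
    #include <mpi.h>
    #include <pnetcdf.h>
    #define NVARS 3   /* illustrative; the paragraph's example uses 50 */

    int main(int argc, char **argv) {
        int i, ncid, dimid, rank, nprocs;
        int varid[NVARS], req[NVARS], status[NVARS], buf[NVARS];
        char name[16];
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

        ncmpi_create(MPI_COMM_WORLD, "nb.nc", NC_CLOBBER,
                     MPI_INFO_NULL, &ncid);
        ncmpi_def_dim(ncid, "x", (MPI_Offset)nprocs, &dimid);
        for (i = 0; i < NVARS; i++) {
            sprintf(name, "var%d", i);
            ncmpi_def_var(ncid, name, NC_INT, 1, &dimid, &varid[i]);
        }
        ncmpi_enddef(ncid);

        /* post all writes first; nothing reaches the file system yet */
        MPI_Offset start = rank, count = 1;
        for (i = 0; i < NVARS; i++) {
            buf[i] = rank + i;
            ncmpi_iput_vara_int(ncid, varid[i], &start, &count,
                                &buf[i], &req[i]);
        }
        /* one collective wait lets PnetCDF stitch the pending requests
         * into a single larger MPI-IO operation */
        ncmpi_wait_all(ncid, NVARS, req, status);

        ncmpi_close(ncid);
        MPI_Finalize();
        return 0;
    }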
