5.1.5 Definition of the IMB-IO Benchmarks (Blocking Case)

This section describes the blocking I/O benchmarks in detail. The benchmarks are run with varying transfer sizes X (in bytes), and the timings are averaged over multiple samples. The description below shows a single sample with a fixed I/O size X. The basic MPI data type for all data buffers is MPI_BYTE.

All benchmark flavors have a Write and a Read component. The placeholder [ACTION] denotes either Read or Write.

Every benchmark contains an elementary I/O action, which represents the pure read or write. In the Write cases, a file synchronization is included, with different placement for the aggregate and non-aggregate modes.

Figure 5-8: I/O benchmarks, aggregation for output
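To make the distinction concrete, the following C sketch shows one plausible reading of Figure 5-8 for the Write case. It is not IMB source code: the helper elementary_write, the parameter M (number of transfers issued before a synchronization), and the exact timed regions are assumptions made for illustration.

#include <mpi.h>

/* elementary_write is a hypothetical helper standing in for the pure
   X-byte write of whichever pattern is being measured */
static void elementary_write(MPI_File fh, char *buf, int X)
{
    MPI_File_write(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
}

/* non-aggregate mode: every write is paired with its own file sync
   inside the timed region */
double sample_non_aggregate(MPI_File fh, char *buf, int X, int M)
{
    double t = 0.0;
    for (int i = 0; i < M; i++) {
        t -= MPI_Wtime();
        elementary_write(fh, buf, X);
        MPI_File_sync(fh);
        t += MPI_Wtime();
    }
    return t;   /* how t maps to the reported timing is defined by Figure 5-8 */
}

/* aggregate mode: M writes are issued back to back and a single sync
   closes the batch; the whole batch is timed */
double sample_aggregate(MPI_File fh, char *buf, int X, int M)
{
    double t = -MPI_Wtime();
    for (int i = 0; i < M; i++)
        elementary_write(fh, buf, X);
    MPI_File_sync(fh);
    t += MPI_Wtime();
    return t;
}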

5.1.5.1 S_[ACTION]_indv 

File I/O performed by a single process. This pattern mimics the typical case in which a single master process performs all of the I/O. See the basic definitions and a schematic view of the pattern below.

measured pattern: as symbolized in Figure 5-8
elementary I/O action: as symbolized in Figure 5-9
based on: MPI_File_write / MPI_File_read
for non-blocking mode, based on: MPI_File_iwrite / MPI_File_iread
etype: MPI_BYTE
filetype: MPI_BYTE
MPI_Datatype: MPI_BYTE
reported timings: t (in msec), as indicated in Figure 5-8; aggregate and non-aggregate for the Write case
reported throughput: X/t; aggregate and non-aggregate for the Write case

Table 5-9: S_[ACTION]_indv definition

Figure 5-9: S_[ACTION]_indv pattern
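For illustration, a minimal C sketch of the Write flavor of this pattern is given below. It is not the benchmark's source code; the file name, communicator choice, buffer handling, and omitted error checks are assumptions.

#include <mpi.h>
#include <stdlib.h>

/* Only the master process (rank 0) performs the I/O. */
void s_write_indv_sample(const char *fname, int X)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank != 0)
        return;

    char *buf = calloc(X, 1);
    MPI_File fh;
    MPI_File_open(MPI_COMM_SELF, fname,
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);

    /* elementary I/O action: pure write of X bytes (etype/filetype = MPI_BYTE) */
    MPI_File_write(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);

    /* Write case: a sync is included; its placement relative to the timed
       region depends on aggregate vs. non-aggregate mode (Figure 5-8) */
    MPI_File_sync(fh);

    MPI_File_close(&fh);
    free(buf);
}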

5.1.5.2 S_[ACTION]_expl

This pattern mimics the same situation as S_[ACTION]_indv but uses a different strategy to access the file: an explicit offset instead of the individual file pointer. See the basic definitions and a schematic view of the pattern below.

measured pattern: as symbolized in Figure 5-8
elementary I/O action: as symbolized in Figure 5-10
based on: MPI_File_write_at / MPI_File_read_at
for non-blocking mode, based on: MPI_File_iwrite_at / MPI_File_iread_at
etype: MPI_BYTE
filetype: MPI_BYTE
MPI_Datatype: MPI_BYTE
reported timings: t (in msec), as indicated in Figure 5-8; aggregate and non-aggregate for the Write case
reported throughput: X/t; aggregate and non-aggregate for the Write case

Table 5-10: S_[ACTION]_expl definition

Figure 5-10: S_[ACTION]_expl pattern
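A minimal C sketch of the corresponding Write action follows; the offset value and the omitted setup are illustrative assumptions, not the benchmark's actual code.

#include <mpi.h>

/* Same single-process situation, but the position is given explicitly
   instead of coming from the individual file pointer. */
void s_write_expl_sample(MPI_File fh, char *buf, int X)
{
    MPI_Offset offset = 0;   /* illustrative; the benchmark defines the actual offsets */

    MPI_File_write_at(fh, offset, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_sync(fh);       /* Write case only; placement per Figure 5-8 */
}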

5.1.5.3 P_[ACTION]_indv

In this pattern, all participating processes access a common file concurrently, each with an individual file pointer. See the basic definitions and a schematic view of the pattern below.

measured pattern: as symbolized in Figure 5-8
elementary I/O action: as symbolized in Figure 5-11 (Nproc = number of processes)
based on: MPI_File_write / MPI_File_read
for non-blocking mode, based on: MPI_File_iwrite / MPI_File_iread
etype: MPI_BYTE
filetype: tiled view, disjoint contiguous blocks
MPI_Datatype: MPI_BYTE
reported timings: t (in msec), as indicated in Figure 5-8; aggregate and non-aggregate for the Write case
reported throughput: X/t; aggregate and non-aggregate for the Write case

Table 5-11: P_[ACTION]_indv definition 

Figure 5-11: P_[ACTION]_indv pattern 
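The sketch below shows one way such a tiled view of disjoint contiguous blocks can be expressed in C. It illustrates the access pattern only; the view construction and buffer handling are assumptions, not the benchmark's implementation.

#include <mpi.h>
#include <stdlib.h>

/* Each rank writes one X-byte block per tile; the tiled filetype gives every
   rank its own disjoint, contiguous block within each Nproc*X-byte tile. */
void p_write_indv_sample(MPI_File fh, int X)
{
    int rank, nproc;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);

    /* filetype: X data bytes followed by a hole, total extent Nproc*X bytes */
    MPI_Datatype contig, filetype;
    MPI_Type_contiguous(X, MPI_BYTE, &contig);
    MPI_Type_create_resized(contig, 0, (MPI_Aint)nproc * X, &filetype);
    MPI_Type_commit(&filetype);

    /* shifting the view by rank*X interleaves the ranks' blocks in the file */
    MPI_File_set_view(fh, (MPI_Offset)rank * X, MPI_BYTE, filetype,
                      "native", MPI_INFO_NULL);

    char *buf = calloc(X, 1);
    MPI_File_write(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);   /* individual pointer */
    MPI_File_sync(fh);                                         /* Write case */

    free(buf);
    MPI_Type_free(&filetype);
    MPI_Type_free(&contig);
}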

5.1.5.4 P_[ACTION]_expl

P_[ACTION]_expl follows the same access pattern as P_[ACTION]_indv but uses an explicit file pointer. See the basic definitions and a schematic view of the pattern below.

measured pattern: as symbolized in Figure 5-8
elementary I/O action: as symbolized in Figure 5-12 (Nproc = number of processes)
based on: MPI_File_write_at / MPI_File_read_at
for non-blocking mode, based on: MPI_File_iwrite_at / MPI_File_iread_at
etype: MPI_BYTE
filetype: MPI_BYTE
MPI_Datatype: MPI_BYTE
reported timings: t (in msec), as indicated in Figure 5-8; aggregate and non-aggregate for the Write case
reported throughput: X/t; aggregate and non-aggregate for the Write case

Table 5-12: P_[ACTION]_expl definition

Figure 5-12: P_[ACTION]_expl pattern 
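A minimal C sketch of the Write action with explicit offsets follows; the rank-ordered block layout is an illustrative assumption.

#include <mpi.h>

/* Every rank writes its own X-byte block; the explicit offsets keep the
   blocks disjoint without setting a file view. */
void p_write_expl_sample(MPI_File fh, char *buf, int X)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Offset offset = (MPI_Offset)rank * X;   /* illustrative rank-ordered layout */
    MPI_File_write_at(fh, offset, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_sync(fh);                          /* Write case; placement per Figure 5-8 */
}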

5.1.5.5 P_[ACTION]_shared

Concurrent access to a common file by all participating processes, with a shared file pointer. See the basic definitions and a schematic view of the pattern below. 

measured pattern: as symbolized in Figure 5-8
elementary I/O action: as symbolized in Figure 5-13 (Nproc = number of processes)
based on: MPI_File_write_shared / MPI_File_read_shared
for non-blocking mode, based on: MPI_File_iwrite_shared / MPI_File_iread_shared
etype: MPI_BYTE
filetype: MPI_BYTE
MPI_Datatype: MPI_BYTE
reported timings: t (in msec), as indicated in Figure 5-8; aggregate and non-aggregate for the Write case
reported throughput: X/t; aggregate and non-aggregate for the Write case

Table 5-13: P_[ACTION]_shared definition

Figure 5-13: P_[ACTION]_shared pattern 
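A minimal C sketch of the Write action through the shared file pointer follows; setup and error handling are omitted, and the routine choice is an illustration of the shared-pointer access, not the benchmark's source.

#include <mpi.h>

/* All ranks write through the one shared file pointer; each call deposits
   X bytes at the pointer's current position and advances it, so the
   resulting block order in the file is not fixed. */
void p_write_shared_sample(MPI_File fh, char *buf, int X)
{
    MPI_File_write_shared(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_sync(fh);   /* Write case; placement per Figure 5-8 */
}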

5.1.5.6 P_[ACTION]_priv

This pattern tests the case in which all participating processes perform concurrent I/O to separate, private files. The benchmark is particularly useful for systems that allow completely independent I/O operations from different processes. The pattern is expected to scale with the number of processes and to deliver optimum results. See the basic definitions and a schematic view of the pattern below.

measured pattern: as symbolized in Figure 5-8
elementary I/O action: as symbolized in Figure 5-14 (Nproc = number of processes)
based on: MPI_File_write / MPI_File_read
for non-blocking mode, based on: MPI_File_iwrite / MPI_File_iread
etype: MPI_BYTE
filetype: MPI_BYTE
MPI_Datatype: MPI_BYTE
reported timings: Δt (in msec); aggregate and non-aggregate for the Write case
reported throughput: X/Δt; aggregate and non-aggregate for the Write case

Table 5-14: P_[ACTION]_priv definition

Figure 5-14: P_[ACTION]_priv pattern
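For illustration, a minimal C sketch of the Write flavor with private files follows; the per-rank file name is a hypothetical choice, not the name used by the benchmark.

#include <mpi.h>
#include <stdio.h>

/* Each rank opens its own private file and writes X bytes independently. */
void p_write_priv_sample(char *buf, int X)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    char fname[64];
    snprintf(fname, sizeof(fname), "IMB_io_priv.%d", rank);   /* hypothetical name */

    MPI_File fh;
    MPI_File_open(MPI_COMM_SELF, fname,                       /* no other rank touches it */
                  MPI_MODE_CREATE | MPI_MODE_WRONLY, MPI_INFO_NULL, &fh);
    MPI_File_write(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
    MPI_File_sync(fh);                                        /* Write case */
    MPI_File_close(&fh);
}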

5.1.5.7 C_[ACTION]_indv

C_[ACTION]_indv tests collective access from all processes to a common file, with an individual file pointer. See the basic definitions below.

based on: MPI_File_read_all / MPI_File_write_all
for non-blocking mode, based on: MPI_File_.._all_begin / MPI_File_.._all_end
all other parameters and the measuring method: see Section 5.1.5.3

Table 5-15: C_[ACTION]_indv definition
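A minimal C sketch of the collective Write action follows, assuming the tiled view of Section 5.1.5.3 has already been set on the file handle; the split collective pair corresponds to the non-blocking row of the table.

#include <mpi.h>

/* blocking collective write (same tiled view as in 5.1.5.3) */
void c_write_indv_sample(MPI_File fh, char *buf, int X)
{
    MPI_File_write_all(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
}

/* non-blocking flavor: split collective, start the write and complete it later */
void c_iwrite_indv_sample(MPI_File fh, char *buf, int X)
{
    MPI_File_write_all_begin(fh, buf, X, MPI_BYTE);
    /* ... computation or communication could overlap here ... */
    MPI_File_write_all_end(fh, buf, MPI_STATUS_IGNORE);
}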

5.1.5.8 C_[ACTION]_expl

This pattern performs collective access from all processes to a common file, with an explicit file pointer. See the basic definitions below.

based on: MPI_File_read_at_all / MPI_File_write_at_all
for non-blocking mode, based on: MPI_File_.._at_all_begin / MPI_File_.._at_all_end
all other parameters and the measuring method: see Section 5.1.5.4

Table 5-16: C_[ACTION]_expl definition
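A minimal C sketch of the collective Write action with explicit offsets follows; the rank-ordered offsets are an illustrative assumption.

#include <mpi.h>

/* Collective access with explicit offsets; each rank targets its own
   disjoint X-byte block, as in 5.1.5.4. */
void c_write_expl_sample(MPI_File fh, char *buf, int X)
{
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Offset offset = (MPI_Offset)rank * X;
    MPI_File_write_at_all(fh, offset, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
}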

5.1.5.9 C_[ACTION]_shared

This benchmark measures collective access from all processes to a common file, with a shared file pointer. See the basic definitions below.

based on: MPI_File_read_ordered / MPI_File_write_ordered
for non-blocking mode, based on: MPI_File_.._ordered_begin / MPI_File_.._ordered_end
all other parameters and the measuring method: see Section 5.1.5.5

Table 5-17: C_[ACTION]_shared definition
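A minimal C sketch of the collective Write action with the shared file pointer follows; setup and error handling are omitted.

#include <mpi.h>

/* Collective access through the shared file pointer; MPI_File_write_ordered
   places the ranks' blocks in the file in ascending rank order. */
void c_write_shared_sample(MPI_File fh, char *buf, int X)
{
    MPI_File_write_ordered(fh, buf, X, MPI_BYTE, MPI_STATUS_IGNORE);
}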

5.1.5.10 Open_Close

This benchmark measures the MPI_File_open/MPI_File_close pair. All processes open the same file. To prevent the implementation from optimizing away the case of an unused file, a negligible but non-trivial action is performed on the file. See the basic definitions below.

measured pattern: MPI_File_open / MPI_File_close
etype: MPI_BYTE
filetype: MPI_BYTE
reported timings: t = Δt (in msec), as indicated in Figure 5-15 below
reported throughput: none

Table 5-18: Open_Close definition

Figure 5-15: Open_Close pattern
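For illustration, a minimal C sketch of one Open_Close sample follows; the one-byte write standing in for the negligible non-trivial action is an assumption, not the benchmark's actual choice.

#include <mpi.h>

/* All processes open (and later close) the same file. */
double open_close_sample(const char *fname)
{
    MPI_File fh;
    char byte = 0;

    double t = -MPI_Wtime();
    MPI_File_open(MPI_COMM_WORLD, fname, MPI_MODE_CREATE | MPI_MODE_RDWR,
                  MPI_INFO_NULL, &fh);
    MPI_File_write(fh, &byte, 1, MPI_BYTE, MPI_STATUS_IGNORE);   /* illustrative trivial action */
    MPI_File_close(&fh);
    t += MPI_Wtime();

    return t;   /* Δt for one open/close sample (seconds here; the benchmark reports msec) */
}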
