cl_io_operations Struct Reference
[cl_io]

Per-layer io operations. More...

#include <cl_object.h>


Data Fields

struct {
   int(*   cio_iter_init )(const struct lu_env *env, const struct cl_io_slice *slice)
 Prepare io iteration at a given layer.
   void(*   cio_iter_fini )(const struct lu_env *env, const struct cl_io_slice *slice)
 Finalize io iteration.
   int(*   cio_lock )(const struct lu_env *env, const struct cl_io_slice *slice)
 Collect locks for the current iteration of io.
   void(*   cio_unlock )(const struct lu_env *env, const struct cl_io_slice *slice)
 Finalize unlocking.
   int(*   cio_start )(const struct lu_env *env, const struct cl_io_slice *slice)
 Start io iteration.
   void(*   cio_end )(const struct lu_env *env, const struct cl_io_slice *slice)
 Called top-to-bottom at the end of io loop.
   void(*   cio_advance )(const struct lu_env *env, const struct cl_io_slice *slice, size_t nob)
 Called bottom-to-top to notify layers that read/write IO iteration finished, with nob bytes transferred.
   void(*   cio_fini )(const struct lu_env *env, const struct cl_io_slice *slice)
 Called once per io, bottom-to-top to release io resources.
op [CIT_OP_NR]
 Vector of io state transition methods for every io type.
struct {
   int(*   cio_submit )(const struct lu_env *env, const struct cl_io_slice *slice, enum cl_req_type crt, struct cl_2queue *queue, enum cl_req_priority priority)
 Submit pages from queue->c2_qin for IO, and move successfully submitted pages into queue->c2_qout.
req_op [CRT_NR]
int(* cio_read_page )(const struct lu_env *env, const struct cl_io_slice *slice, const struct cl_page_slice *page)
 Read missing page.
int(* cio_prepare_write )(const struct lu_env *env, const struct cl_io_slice *slice, const struct cl_page_slice *page, unsigned from, unsigned to)
 Prepare write of a page.
int(* cio_commit_write )(const struct lu_env *env, const struct cl_io_slice *slice, const struct cl_page_slice *page, unsigned from, unsigned to)
 Commit write of a page.
int(* cio_print )(const struct lu_env *env, void *cookie, lu_printer_t p, const struct cl_io_slice *slice)
 Optional debugging helper.


Detailed Description

Per-layer io operations.

See also:
vvp_io_ops, lov_io_ops, lovsub_io_ops, osc_io_ops


Field Documentation

int(* cl_io_operations::cio_commit_write)(const struct lu_env *env, const struct cl_io_slice *slice, const struct cl_page_slice *page, unsigned from, unsigned to)

Precondition:
io->ci_type == CIT_WRITE
See also:
vvp_io_commit_write(), lov_io_commit_write(), osc_io_commit_write().

void(* cl_io_operations::cio_end)(const struct lu_env *env, const struct cl_io_slice *slice)

Called top-to-bottom at the end of io loop.

Here a layer might wait for unfinished asynchronous IO.

void(* cl_io_operations::cio_iter_fini)(const struct lu_env *env, const struct cl_io_slice *slice)

Finalize io iteration.

Called bottom-to-top at the end of each iteration of the "io loop". Here layers can decide whether IO has to be continued.

See also:
cl_io_operations::cio_iter_init()

int(* cl_io_operations::cio_iter_init)(const struct lu_env *env, const struct cl_io_slice *slice)

Prepare io iteration at a given layer.

Called top-to-bottom at the beginning of each iteration of the "io loop" (if it makes sense for this type of io). Here a layer selects the work it will do during this iteration.

See also:
cl_io_operations::cio_iter_fini()

int(* cl_io_operations::cio_lock)(const struct lu_env *env, const struct cl_io_slice *slice)

Collect locks for the current iteration of io.

Called top-to-bottom to collect all locks necessary for this iteration. This method shouldn't actually enqueue anything; instead, it should post each lock through cl_io_lock_add(). Once all locks are collected, they are sorted and enqueued in the proper order.

int(* cl_io_operations::cio_prepare_write)(const struct lu_env *env, const struct cl_io_slice *slice, const struct cl_page_slice *page, unsigned from, unsigned to)

Prepare write of a page.

Called bottom-to-top by the top-level cl_io_operations::op[CIT_WRITE].cio_start() to prepare a page to receive data from a user-level buffer.

Precondition:
io->ci_type == CIT_WRITE
See also:
vvp_io_prepare_write(), lov_io_prepare_write(), osc_io_prepare_write().

int(* cl_io_operations::cio_print)(const struct lu_env *env, void *cookie, lu_printer_t p, const struct cl_io_slice *slice)

Optional debugging helper.

Print given io slice.

int(* cl_io_operations::cio_read_page)(const struct lu_env *env, const struct cl_io_slice *slice, const struct cl_page_slice *page)

Read missing page.

Called by the top-level cl_io_operations::op[CIT_READ].cio_start() method when it hits a not-up-to-date page in the range. Optional.

Precondition:
io->ci_type == CIT_READ

int(* cl_io_operations::cio_start)(const struct lu_env *env, const struct cl_io_slice *slice)

Start io iteration.

Once all locks are acquired, called top-to-bottom to commence actual IO. In the current implementation, top-level vvp_io_{read,write}_start() does all the work synchronously by calling generic_file_*(), so other layers are called when everything is done.

int(* cl_io_operations::cio_submit)(const struct lu_env *env, const struct cl_io_slice *slice, enum cl_req_type crt, struct cl_2queue *queue, enum cl_req_priority priority)

Submit pages from queue->c2_qin for IO, and move successfully submitted pages into queue->c2_qout.

Returns non-zero if it failed to submit even a single page. If submission fails after some pages have been moved into queue->c2_qout, the completion callback is executed on them with a non-zero ioret.

void(* cl_io_operations::cio_unlock)(const struct lu_env *env, const struct cl_io_slice *slice)

Finalize unlocking.

Called bottom-to-top to finish layer-specific unlocking functionality, after the generic code has released all locks acquired by cl_io_operations::cio_lock().

struct { ... } cl_io_operations::op[CIT_OP_NR]

Vector of io state transition methods for every io type.

See also:
cl_page_operations::io


The documentation for this struct was generated from the following file: cl_object.h
Generated on Mon Apr 12 04:18:21 2010 for Lustre 1.10.0.40-0-g9a80ff7 by doxygen 1.4.7
