cl_page_operations Struct Reference
[cl_page]

Per-layer page operations. More...

#include <cl_object.h>


Data Fields

cfs_page_t *(* cpo_vmpage )(const struct lu_env *env, const struct cl_page_slice *slice)
int(* cpo_own )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io, int nonblock)
 Called when io acquires this page into exclusive ownership.
void(* cpo_disown )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Called when ownership is yielded.
void(* cpo_assume )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Called for a page that is already "owned" by io from the VM point of view.
void(* cpo_unassume )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Dual to cl_page_operations::cpo_assume().
void(* cpo_export )(const struct lu_env *env, const struct cl_page_slice *slice, int uptodate)
 Announces, via uptodate, whether the page contains valid data.
int(* cpo_unmap )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Unmaps the page from user space (if it is mapped).
int(* cpo_is_vmlocked )(const struct lu_env *env, const struct cl_page_slice *slice)
 Checks whether underlying VM page is locked (in the suitable sense).
void(* cpo_discard )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Called when page is truncated from the object.
void(* cpo_delete )(const struct lu_env *env, const struct cl_page_slice *slice)
 Called when page is removed from the cache, and is about to be destroyed.
void(* cpo_fini )(const struct lu_env *env, struct cl_page_slice *slice)
 Destructor.
int(* cpo_is_under_lock )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Checks whether the page is protected by a cl_lock.
int(* cpo_print )(const struct lu_env *env, const struct cl_page_slice *slice, void *cookie, lu_printer_t p)
 Optional debugging helper.
int(* cpo_prep )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Called when a page is submitted for a transfer as a part of cl_page_list.
void(* cpo_completion )(const struct lu_env *env, const struct cl_page_slice *slice, int ioret)
 Completion handler.
int(* cpo_make_ready )(const struct lu_env *env, const struct cl_page_slice *slice)
 Called when cached page is about to be added to the cl_req as a part of req formation.
int(* cpo_cache_add )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Announce that this page is to be written out opportunistically: the page is dirty and the write-out transfer does not have to start right now, but the page eventually has to be written out.
transfer
Transfer methods. See comment on cl_req for a description of transfer formation and life-cycle.

struct {
   int(*   cpo_prep )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Called when a page is submitted for a transfer as a part of cl_page_list.
   void(*   cpo_completion )(const struct lu_env *env, const struct cl_page_slice *slice, int ioret)
 Completion handler.
   int(*   cpo_make_ready )(const struct lu_env *env, const struct cl_page_slice *slice)
 Called when cached page is about to be added to the cl_req as a part of req formation.
   int(*   cpo_cache_add )(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)
 Announce that this page is to be written out opportunistically: the page is dirty and the write-out transfer does not have to start right now, but the page eventually has to be written out.
io [CRT_NR]
 Request type dependent vector of operations.
void(* cpo_clip )(const struct lu_env *env, const struct cl_page_slice *slice, int from, int to)
 Tell the transfer engine that only the [from, to] part of a page should be transmitted.
int(* cpo_cancel )(const struct lu_env *env, const struct cl_page_slice *slice)


Detailed Description

Per-layer page operations.

Methods taking an io argument are for the activity happening in the context of given io. Page is assumed to be owned by that io, except for the obvious cases (like cl_page_operations::cpo_own()).

See also:
vvp_page_ops, lov_page_ops, osc_page_ops
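
For orientation, the following sketch shows the ownership convention from the io side: a page is owned before any of the io-context methods below are invoked, and ownership is yielded afterwards. The function foo_io_fill_page() and the exact helper signatures are assumptions for illustration only; the real entry points are the cl_page_own(), cl_page_export() and cl_page_disown() wrappers referenced in the field documentation.

/*
 * Sketch only (hypothetical signatures): an io takes exclusive ownership of
 * a page, operates on it, announces that it is up to date, and then yields
 * ownership.  Each wrapper dispatches to the corresponding cpo_*() method
 * of every layer.
 */
static int foo_io_fill_page(const struct lu_env *env, struct cl_io *io,
                            struct cl_page *page)
{
        int result;

        /* -> cpo_own() on every layer; may block unless nonblock is set */
        result = cl_page_own(env, io, page);
        if (result != 0)
                return result;

        /* ... generate or copy data into the page here ... */

        cl_page_export(env, page, 1);   /* -> cpo_export(), uptodate = 1 */
        cl_page_disown(env, io, page);  /* -> cpo_disown() */
        return 0;
}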


Field Documentation

void(* cl_page_operations::cpo_assume)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Called for a page that is already "owned" by io from the VM point of view.

Optional.

See also:
cl_page_assume()

vvp_page_assume(), lov_page_assume()

int(* cl_page_operations::cpo_cache_add)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Announce that this page is to be written out opportunistically: the page is dirty and the write-out transfer does not have to start right now, but the page eventually has to be written out.

The main caller of this is the write path (see vvp_io_commit_write()), which uses this method to build a "transfer cache" from which large transfers are then constructed by the req-formation engine.

Todo:
XXX it would make sense to add page-age tracking semantics here, and to oblige the req-formation engine to send the page out no later than when it becomes too old.
See also:
cl_page_cache_add()
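
As a rough illustration of the "transfer cache" idea, a layer's cpo_cache_add() might do little more than queue the dirty page on a per-object list from which the req-formation engine later pulls pages. All foo_* names and the helper foo_object_at() below are assumptions for illustration, not part of any actual layer.

#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical layer-private structures (assumptions for illustration). */
struct foo_object {
        spinlock_t       fo_lock;
        struct list_head fo_pending;      /* dirty pages awaiting write-out */
};

struct foo_page {
        struct cl_page_slice fp_cl;       /* generic slice, embedded */
        struct list_head     fp_pending;  /* linkage into foo_object::fo_pending */
};

/* hypothetical helper returning the layer's object for this slice */
static struct foo_object *foo_object_at(const struct cl_page_slice *slice);

static int foo_page_cache_add(const struct lu_env *env,
                              const struct cl_page_slice *slice,
                              struct cl_io *io)
{
        struct foo_page   *fp = container_of(slice, struct foo_page, fp_cl);
        struct foo_object *fo = foo_object_at(slice);

        /* Just queue the page; no transfer is started here.  The
         * req-formation engine later drains fo_pending to build large RPCs. */
        spin_lock(&fo->fo_lock);
        list_add_tail(&fp->fp_pending, &fo->fo_pending);
        spin_unlock(&fo->fo_lock);
        return 0;
}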

int(* cl_page_operations::cpo_cancel)(const struct lu_env *env, const struct cl_page_slice *slice)

Precondition:
the page was queued for transferring.
Postcondition:
the page is removed from the client's pending list, or -EBUSY is returned if the transfer has already started.
This is one of the few page operations that is called from the top level and runs without the vmpage locked; consequently, every layer must synchronize execution of its ->cpo_cancel() with its completion handlers. OSC uses the client obd lock for this purpose. Since there is no vvp_page_cancel() and no lov_page_cancel(), cpo_cancel() is de facto protected by the client lock.

See also:
osc_page_cancel().
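
The synchronization requirement can be sketched as follows. The foo_* names are assumptions for illustration, and foo_client_lock stands in for whatever per-client lock a layer uses (the client obd lock in the OSC case). Cancellation and the completion handler (cl_page_operations::cpo_completion(), documented below) take the same lock, so a queued page is either cancelled or completed, never both.

#include <linux/errno.h>
#include <linux/kernel.h>
#include <linux/list.h>
#include <linux/spinlock.h>

/* Hypothetical layer-private page (assumption for illustration). */
struct foo_page {
        struct cl_page_slice fp_cl;
        struct list_head     fp_pending;      /* on the client's pending list */
        int                  fp_transferring; /* set once the RPC has been sent */
};

static DEFINE_SPINLOCK(foo_client_lock);      /* stands in for the client lock */

static int foo_page_cancel(const struct lu_env *env,
                           const struct cl_page_slice *slice)
{
        struct foo_page *fp = container_of(slice, struct foo_page, fp_cl);
        int result = 0;

        spin_lock(&foo_client_lock);
        if (fp->fp_transferring)              /* too late, transfer in flight */
                result = -EBUSY;
        else
                list_del_init(&fp->fp_pending);
        spin_unlock(&foo_client_lock);
        return result;
}

static void foo_page_completion(const struct lu_env *env,
                                const struct cl_page_slice *slice, int ioret)
{
        struct foo_page *fp = container_of(slice, struct foo_page, fp_cl);

        spin_lock(&foo_client_lock);          /* same lock as foo_page_cancel() */
        fp->fp_transferring = 0;              /* ioret carries the transfer status */
        spin_unlock(&foo_client_lock);
}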

void(* cl_page_operations::cpo_clip)(const struct lu_env *env, const struct cl_page_slice *slice, int from, int to)

Tell the transfer engine that only the [from, to] part of a page should be transmitted.

This is used for immediate transfers.

Todo:
XXX this is not a very good interface. It would be much better if all transfer parameters were supplied as arguments to the cl_io_operations::cio_submit() call, but it is not clear how to do this for page queues.
See also:
cl_page_clip()

void(* cl_page_operations::cpo_completion)(const struct lu_env *env, const struct cl_page_slice *slice, int ioret)

Completion handler.

This is guaranteed to be eventually fired after cl_page_operations::cpo_prep() or cl_page_operations::cpo_make_ready() call.

This method can be called in a non-blocking context. It is guaranteed however, that the page involved and its object are pinned in memory (and, hence, calling cl_page_put() is safe).

See also:
cl_page_completion()

void(* cl_page_operations::cpo_delete)(const struct lu_env *env, const struct cl_page_slice *slice)

Called when page is removed from the cache, and is about to be destroyed.

Optional.

See also:
cl_page_delete()

vvp_page_delete(), osc_page_delete()

void(* cl_page_operations::cpo_discard)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Called when page is truncated from the object.

Optional.

See also:
cl_page_discard()

vvp_page_discard(), osc_page_discard()

void(* cl_page_operations::cpo_disown)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Called when ownership is yielded.

Optional.

See also:
cl_page_disown()

vvp_page_disown()

void(* cl_page_operations::cpo_export)(const struct lu_env *env, const struct cl_page_slice *slice, int uptodate)

Announces, via uptodate, whether the page contains valid data.

See also:
cl_page_export()

vvp_page_export()

void(* cl_page_operations::cpo_fini)(const struct lu_env *env, struct cl_page_slice *slice)

Destructor.

Frees resources and slice itself.

int(* cl_page_operations::cpo_is_under_lock)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Checks whether the page is protected by a cl_lock.

This is a per-layer method, because certain layers have ways to check for the lock much more efficiently than through the generic lock scan, or implement locking mechanisms separate from cl_lock, e.g., LL_FILE_GROUP_LOCKED in vvp. If pending is true, also check for locks that are being canceled, or that are scheduled for cancellation as soon as the last user goes away.

Return values:
-EBUSY: page is protected by a lock of a given mode;
-ENODATA: page is not protected by a lock;
0: this layer cannot decide.
See also:
cl_page_is_under_lock()
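
As an example of such a layer-specific fast path, a layer that knows the whole file is covered by a group lock (the LL_FILE_GROUP_LOCKED case mentioned above) could answer immediately. The foo_* names below are assumptions for illustration.

#include <linux/errno.h>

/* hypothetical helper: does this io hold a group lock on the file? */
static int foo_io_has_group_lock(const struct cl_io *io);

static int foo_page_is_under_lock(const struct lu_env *env,
                                  const struct cl_page_slice *slice,
                                  struct cl_io *io)
{
        if (foo_io_has_group_lock(io))
                return -EBUSY;   /* page is protected by the group lock */
        return 0;                /* this layer cannot decide */
}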

int(* cl_page_operations::cpo_is_vmlocked)(const struct lu_env *env, const struct cl_page_slice *slice)

Checks whether underlying VM page is locked (in the suitable sense).

Used for assertions.

Return values:
-EBUSY: page is protected by a lock of a given mode;
-ENODATA: page is not protected by a lock;
0: this layer cannot decide. (Should never happen.)

int(* cl_page_operations::cpo_make_ready)(const struct lu_env *env, const struct cl_page_slice *slice)

Called when cached page is about to be added to the cl_req as a part of req formation.

Returns:
0 : proceed with this page;

-EAGAIN : skip this page;

-ve : error.

See also:
cl_page_make_ready()

int(* cl_page_operations::cpo_own)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io, int nonblock)

Called when io acquires this page into exclusive ownership.

When this method returns, it is guaranteed that the page is not owned by any other io, and that no transfer is going on against it. Optional.

See also:
cl_page_own()

vvp_page_own(), lov_page_own()

int(* cl_page_operations::cpo_prep)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Called when a page is submitted for a transfer as a part of cl_page_list.

Returns:
0 : page is eligible for submission;

-EALREADY : skip this page;

-ve : error.

See also:
cl_page_prep()

int(* cl_page_operations::cpo_print)(const struct lu_env *env, const struct cl_page_slice *slice, void *cookie, lu_printer_t p)

Optional debugging helper.

Prints given page slice.

See also:
cl_page_print()

void(* cl_page_operations::cpo_unassume)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Dual to cl_page_operations::cpo_assume().

Optional. Called bottom-to-top when IO releases a page without actually unlocking it.

See also:
cl_page_unassume()

vvp_page_unassume()

int(* cl_page_operations::cpo_unmap)(const struct lu_env *env, const struct cl_page_slice *slice, struct cl_io *io)

Unmaps the page from user space (if it is mapped).

See also:
cl_page_unmap()

vvp_page_unmap()

cfs_page_t*(* cl_page_operations::cpo_vmpage)(const struct lu_env *env, const struct cl_page_slice *slice)

Returns:
the underlying VM page. Optional.
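
A typical implementation just returns the VM page cached in the layer's private page structure in which the generic slice is embedded. struct foo_page and its fields below are assumptions for illustration.

#include <linux/kernel.h>   /* container_of() */

/* Hypothetical layer-private page (assumption for illustration). */
struct foo_page {
        struct cl_page_slice fp_cl;      /* generic slice, embedded */
        cfs_page_t          *fp_vmpage;  /* underlying VM page */
};

static cfs_page_t *foo_page_vmpage(const struct lu_env *env,
                                   const struct cl_page_slice *slice)
{
        return container_of(slice, struct foo_page, fp_cl)->fp_vmpage;
}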

struct { ... } cl_page_operations::io[CRT_NR]

Request type dependent vector of operations.

Transfer operations depend on the transfer mode (cl_req_type). To avoid passing the transfer mode to each and every one of these methods, and to avoid branching on the request type inside the methods, separate methods are provided for cl_req_type::CRT_READ and cl_req_type::CRT_WRITE. That is, method invocation usually looks like

slice->cp_ops.io[req->crq_type].cpo_method(env, slice, ...);
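
A layer would typically populate this vector with designated initializers, one sub-table per request type. The table below is a sketch only: all foo_page_*() methods are assumptions for illustration (their forward declarations are omitted), while the real layers define tables such as vvp_page_ops, lov_page_ops and osc_page_ops.

static const struct cl_page_operations foo_page_ops = {
        .cpo_own    = foo_page_own,
        .cpo_disown = foo_page_disown,
        .cpo_vmpage = foo_page_vmpage,
        .cpo_fini   = foo_page_fini,
        .io = {
                [CRT_READ]  = {
                        .cpo_prep       = foo_page_read_prep,
                        .cpo_completion = foo_page_read_completion,
                },
                [CRT_WRITE] = {
                        .cpo_prep       = foo_page_write_prep,
                        .cpo_completion = foo_page_write_completion,
                        .cpo_make_ready = foo_page_make_ready,
                        .cpo_cache_add  = foo_page_cache_add,
                },
        },
};

The generic code then dispatches through slice->cp_ops as shown above, so no per-method branching on the request type is needed.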


The documentation for this struct was generated from the following file:
cl_object.h
