{
    "batchcomplete": "",
    "continue": {
        "gapcontinue": "Release_2.10.0",
        "continue": "gapcontinue||"
    },
    "warnings": {
        "main": {
            "*": "Subscribe to the mediawiki-api-announce mailing list at <https://lists.wikimedia.org/postorius/lists/mediawiki-api-announce.lists.wikimedia.org/> for notice of API deprecations and breaking changes."
        },
        "revisions": {
            "*": "Because \"rvslots\" was not specified, a legacy format has been used for the output. This format is deprecated, and in the future the new format will always be used."
        }
    },
    "query": {
        "pages": {
            "461": {
                "pageid": 461,
                "ns": 0,
                "title": "Release 1.8",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "<small>''(Updated: Jan 2010)''</small>\n__TOC__\nLustre\u2122 1.8.0.1 introduces several robust, new features and improved system functionality. This page provides feature descriptions and lists the benefits offered by upgrading to the Lustre 1.8 release branch. The change log and release notes are [[Change_Log_1.8|here]].\n\n==Adaptive Timeouts==\n\nThe adaptive timeouts feature (enabled, by default) causes Lustre to use an adaptive mechanism to set RPC timeouts, so users no longer have to tune the obd_timeout value. RPC service time histories are tracked on all servers for each service, and estimates for future RPCs are reported back to clients. Clients use these service time estimates along with their own observations of the network delays to set future RPC timeout values. \n\nIf server request processing slows down, its estimates increase and the clients allow more time for RPC completion before retrying. If RPCs queued up on the server approach their timeouts, the server sends early replies to the client, telling it to allow more time. 
Conversely, as the load on the server is reduced, the RPC timeout values decrease, allowing faster client detection of non-responsive servers and faster attempts to reconnect to a server's failover partner.\n\n<big>Why should I upgrade to Lustre 1.8 to get it?</big>\n\nAdaptive timeouts offers these benefits:\n\n* Simplified management for small and large clusters.\n* Automatically adjusts RPC timeouts as network conditions and server load change.\n* Reduces server recovery time in cases where the server load is low at the time of failure.\n\n<big>Additional Resources</big>\n\nFor more information about adaptive timeouts, see:\n<!-- * [[Architecture - Adaptive_Timeouts_-_Use_Cases|Architecture page - Adaptive timeouts (use cases)]] -->\n* [[Media:Adaptive-timeouts-hld.pdf|HLD - Adaptive RPC timeouts]]\n\n==Client Interoperability==\n\nThe client interoperability feature enables Lustre 1.8 clients to work with a new network protocol that will be introduced in Lustre 2.0. This feature allows transparent client, server, network and storage interoperability when migrating from 1.6 architecture-based clusters to clusters with 2.0 architecture-based servers. Lustre 1.8.x clients will interoperate with Lustre 2.0 servers.\n\n<big>Why should I upgrade to Lustre 1.8 to get it?</big>\n\nClient interoperability offers this benefit:\n\n* When Lustre 2.x is released, Lustre 1.8.x users will be able to upgrade to 2.x servers while the Lustre filesystem is up and running. This transparent upgrade feature will enable users to upgrade their servers to Lustre 2.x and reboot them without disturbing applications using the filesystem on clients. It will no longer be necessary to unmount clients from the filesystem to upgrade servers to the new software. 
After the 2.x upgrade, Lustre 2.x servers will interoperate with 1.8.x clients.\n\n<big>Additional Resources</big>\n\nFor more information on client interoperability, see:\n\n<!-- * [[Architecture - Interoperability_fids_zfs|Architecture page - Interoperability FIDs and ZFS]] -->\n* [[Media:Interop_disk_fidea.pdf|HLD - Interoperability at the Server Side]]\n* [[Media:Sptlrpc_interop-hld.pdf|HLD - Sptlrpc interoperability]]\n* [[Media:Interop-client-recov-dld.pdf|DLD - Interoperable Client Recovery]]\n* [[Media:Sptlrpc_interop-dld.pdf|DLD - Sptlrpc interoperability]]\n\n\n==OSS Read Cache==\n\nThe OSS read cache feature provides read-only caching of data on an OSS. It uses a regular Linux pagecache to store the data. OSS read cache improves Lustre performance when several clients access the same data set, and the data fits the OSS cache (which can occupy most of the available memory). The overhead of OSS read cache is very low on modern CPUs, and cache misses do not negatively impact performance compared to Lustre releases before OSS read cache was available.\n\n<big>Why should I upgrade to Lustre 1.8 to get it?</big>\n\nOSS read cache can improve Lustre performance, and offers these benefits:\n\n* Allows OSTs to cache read data more frequently\n* Improves repeated reads to match network speeds instead of disk speeds\n* Provides the building block for OST write cache (small write aggregation)\n\n<!--\n\n<big>Additional Resources</big>\n\nFor more information on OSS read cache, see:\n\n* [[Architecture - Caching_OSS|Architecture page - Caching OSS]]\n-->\n\n==OST Pools==\n\nThe OST pools feature allows the administrator to name a group of OSTs for file striping purposes. For instance, a group of local OSTs could be defined for faster access; a group of higher-performance OSTs could be defined for specific applications; a group of non-RAID OSTs could be defined for scratch files; or groups of OSTs could be defined for particular users. 
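\n\nAs a concrete sketch (the file system name, pool name, and OST indices below are illustrative only), a pool is created and populated with lctl and then referenced when striping a directory:\n\n lctl pool_new testfs.fast\n lctl pool_add testfs.fast testfs-OST[0-3]\n lfs setstripe --pool fast /mnt/testfs/scratch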
\n\nPools are defined by the system administrator, using regular Lustre tools (lctl). Pool usage is specified and stored along with other striping information (e.g., stripe count, stripe size) for directories or individual files (lfs setstripe or llapi_create_file()). Traditional automated OST selection optimizations (QOS) occur within a pool (e.g., free-space leveling within the pool). OSTs can be added or removed from a pool at any time (and existing files always remain in place and available).\n\nOST pool characteristics include:\n\n* An OST can be associated with multiple pools\n* No ordering of OSTs is implied or defined within a pool\n* OST membership in a pool can change over time\n* A directory can default to a specific pool and new files/subdirectories created therein will use that pool\n\n'''NOTE:''' In its current implementation, the OST pools feature does not implement an automated policy or restrict users from creating files in any of the pools; it must be managed directly by the administrator/user. It is a building block for policy-managed storage. \n\n<big>Why should I upgrade to Lustre 1.8 to get it?</big>\n\nOST pools offers these benefits:\n\n* Allows sets of OSTs to be managed via named groups\n* Pools can separate heterogeneous OSTs within the same filesystem\n** Fast vs. slow disks\n** Local network vs. remote network (e.g. WAN)\n** RAID 1 vs. 
RAID5 backing storage, etc.\n** Specific OSTs for users/groups/applications (by directory)\n* Easier disk usage policy implementation for administrators\n* Hardware can be more closely optimized for particular usage patterns\n* Human-readable stripe mappings\n\n<big>Additional Resources</big>\n\nFor more information on OST pools, see:\n\n<!-- * [[Architecture - Pools_of_targets|Architecture page - OST pools]] -->\n* [[Media:OstPools-DLD.pdf|DLD - OST Pools]]\n* [[Media:Ostpools-large-scale_testplan.pdf|Test plan - OST pools]]\n\n==Version-Based Recovery==\n\nVersion-based Recovery (VBR) improves the robustness of client recovery operations and allows Lustre to recover, even if multiple clients fail at the same time as the server. With VBR, recovery is more flexible; not all clients are evicted if some miss recovery, and a missed client may try to recover after the server recovery window.\n\n<big>Why should I upgrade to Lustre 1.8 to get it?</big>\n\nVBR functionality in Lustre 1.8 allows more flexible recovery after a failure. Previous Lustre releases enforced a strict, in-order replay condition that required all clients to reconnect during the recovery period. If a client was missing and the recovery period timed out, then the remaining clients were evicted. With VBR, conditional out-of-order replay is allowed. VBR uses versions to detect conflicting transactions. If an object's version matches what is expected, the transaction is replayed. If there is a version mis-match, clients attempting to modify the object are stopped. Recovery continues even if some clients do not reconnect (the missed clients can try to recover later). 
With VBR, Lustre clients may successfully recover in a wider variety of failure scenarios.\n\nVBR offers these benefits:\n\n* Improves the robustness of client recovery operations\n* Allows Lustre recovery to continue even if multiple clients fail at the same time as the server\n* Provides a building block for disconnected client operations\n\n<big>Additional Resources</big>\n\nFor more information on VBR, see:\n\n<!--\n*  [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreRecovery.html#50438268_pgfId-1287769 Section 30.4: ''Version-based Recovery''] in the [http://wiki.lustre.org/manual/LustreManual20_HTML/index.html ''Lustre Operations Manual''].\n[[Architecture - Version_Based_Recovery|Architecture page - VBR]] \n-->\n* [[Media:20080612165106%21Version_base_recovery-hld.pdf|HLD - VBR]]\n* [[Media:Version_recovery.pdf|DLD - VBR]]\n* [[Media:VBR_phase2_large_scale_testplan.pdf|Test plan - VBR]]\n\n[[Category:Releases]]"
                    }
                ]
            },
            "771": {
                "pageid": 771,
                "ns": 0,
                "title": "Release 2.0",
                "revisions": [
                    {
                        "contentformat": "text/x-wiki",
                        "contentmodel": "wikitext",
                        "*": "{| class='wikitable'\n|- \n!Note: This page originated on the old Lustre wiki. It was identified as likely having value and was migrated to the new wiki. It is in the process of being reviewed/updated and may currently have content that is out of date.\n|}\n\n==Lustre 2.0 Matrix==\n\nLustre 2.0 will support the following Linux distributions and architectures.\n\n{| border=1 cellpadding=0\n|-\n!Component\n!Linux Distribution\n!Architecture\n|-\n|rowspan=\"3\"|<small>'''Server'''</small> \n|<small>OEL 5.4</small>\n|<small>x86_64</small>\n|-\n|<small>RHEL 5.4</small> \n|<small>x86_64</small>\n|-\n|style=\"background:#E8E8E8\"|<small>Dropped: SLES, vanilla (2.6.22)</small>\n|style=\"background:#E8E8E8\"|<small>Dropped: i686</small>\n|-\n|rowspan=\"6\"|<small>'''Patchless Client'''</small> \n|<small>OEL 5.4</small>\n|<small>x86_64</small>\n|-\n|<small>RHEL 5</small> \n|<small>x86_64</small>\n|-\n|<small>SLES 10, 11</small> \n|<small>x86_64</small>\n|-\n|<small>Fedora 11 (2.6.29) [New]</small> \n|<small>i686</small>\n|-\n|style=\"background:#E8E8E8\"|<small>Dropped: vanilla (2.6.22)</small>\n|\n|-\n|}\n\n==Lustre 2.0/2.x Features==\n\nLustre 2.0 will introduce several significant new features and improved system functionality. This page previews these features and benefits offered by upgrading to Lustre 2.0, and also describes features targeted at later 2.x releases. See [[Lustre_2.0_Features|Lustre 2.0 Features]]\n\n==Lustre 2.0 Release Milestones==\nThe Lustre 2.0 team will be publishing interim Alpha and Beta RPMs throughout the testing and stabilization process. The team plans to publish a milestone release every 4-6 weeks until Lustre 2.0 GA is achieved. The first Alpha milestone was announced at LUG 2009 on April 16, 2009. 
Please see the [[Lustre_2.0_Release_Milestone_Status|Lustre 2.0 Release Milestone Status]] page for related testing documents for this and future Alpha and Beta milestones.\n\n==Lustre 2.0 Features==\n\nThe [[Lustre_2.0|Lustre 2.0]] release introduced several significant new features and improved system functionality. This page provides descriptions of these features and lists the benefits offered by upgrading to the Lustre 2.0 release family.\n\n=Lustre 2.0.0=\n\nThe initial [[Lustre 2.0]] release (known as 2.0.0) offers these features:\n\n===Changelogs===\n\nChangelogs record events that change the filesystem namespace or file metadata.  Events such as file creation, deletion, renaming, attribute changes, etc. are recorded with the target and parent file identifiers (FIDs), the name of the target, and a timestamp. These records can be used for a variety of purposes:\n\n* Record recent changes to feed into an archiving system.\n* Use changelog entries to exactly replicate changes in a filesystem mirror.\n* Set up \"watch scripts\" that take action on certain events or directories.\n* Maintain a rough audit trail (file/directory changes with timestamps, but no user information).\n\nChangelog records are persistent (on disk) until explicitly cleared by the user. They are guaranteed to accurately reflect on-disk changes in the event of a server failure.\n\nThese are sample changelog entries:\n\n 2 02MKDIR 4298396676 0x0 t=[0x200000405:0x15f9:0x0] p=[0x13:0x15e5a7a3:0x0] pics\n 3 01CREAT 4298402264 0x0 t=[0x200000405:0x15fa:0x0] p=[0x200000405:0x15f9:0x0] chloe.jpg\n 4 06UNLNK 4298404466 0x0 t=[0x200000405:0x15fa:0x0] p=[0x200000405:0x15f9:0x0] chloe.jpg\n 5 07RMDIR 4298405394 0x0 t=[0x200000405:0x15f9:0x0] p=[0x13:0x15e5a7a3:0x0] pics \n\nThe record types are:\n\n{| border=1 cellpadding=0\n!Record Type\n!Description\n|-\n|<small><strong>MARK</strong></small>||<small>internal recordkeeping</small>\n|-\n|<small><strong>CREAT</strong></small>||<small>regular file creation</small>\n|-\n|<small><strong>MKDIR</strong></small>||<small>directory creation</small>\n|-\n|<small><strong>HLINK</strong></small>||<small>hardlink</small>\n|-\n|<small><strong>SLINK</strong></small>||<small>softlink</small>\n|-\n|<small><strong>MKNOD</strong></small>||<small>other file creation</small>\n|-\n|<small><strong>UNLNK</strong></small>||<small>regular file removal</small>\n|-\n|<small><strong>RMDIR</strong></small>||<small>directory removal</small>\n|-\n|<small><strong>RNMFM</strong></small>||<small>rename, original</small>\n|-\n|<small><strong>RNMTO</strong></small>||<small>rename, final</small>\n|-\n|<small><strong>IOCTL</strong></small>||<small>ioctl on file or directory</small>\n|-\n|<small><strong>TRUNC</strong></small>||<small>regular file truncated</small>\n|-\n|<small><strong>SATTR</strong></small>||<small>attribute change</small>\n|-\n|<small><strong>XATTR</strong></small>||<small>extended attribute change</small>\n|-\n|<small><strong>UNKNW</strong></small>||<small>unknown op</small>\n|}\n\nFID-to-full-pathname and pathname-to-FID functions are also included to map target and parent FIDs into the filesystem 
namespace.\n\n<big>Why should I upgrade to Lustre 2.0.0 to get it?</big>\n\nChangelogs offer these benefits:\n\n* File/directory change notification\n* Event notification\n* Filesystem replication\n* File backup policy decisions\n* Audit trail\n\n<big>Additional Resources</big>\n\nFor more information about changelogs, see:\n\n* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreMonitoring.html#50438273_pgfId-1296751 Section 12.1: Changelogs - Lustre 2.0 manual]\n\n===Commit on Share===\n\nThe Commit on Share (COS) feature prevents missing clients from causing cascading evictions of other clients. If some clients miss the recovery window, remaining clients are not evicted.\n\nWhen an MDS starts up and enters recovery mode after a failover or service restart, clients begin to reconnect and replay their uncommitted transactions. If one or more clients miss the recovery window, this may cause other clients to abort their transactions or be evicted. The transactions of evicted clients cannot be applied and are aborted. This causes a cascade effect as transactions dependent on the aborted ones fail and so on. COS addresses this problem by eliminating dependent transactions. 
With no dependent, uncommitted transactions to apply, the clients replay their requests independently without the risk of being evicted.\n\n<big>Why should I upgrade to Lustre 2.0.0 to get it?</big>\n\nCOS offers these benefits:\n\n* Allows clients to always be able to recover, regardless of whether other clients have failed.\n* Reduces recovery problems when multiple node failures occur \n\n<big>Additional Resources</big>\n\nFor more information on COS, see:\n\n* [http://wiki.lustre.org/manual/LustreManual20_HTML/LustreRecovery.html#50438268_pgfId-1292073 Section 30.5: Commit on Share - Lustre 2.0 manual]\n* [[Media:COS_TestPlan.pdf|COS Test Plan]]\n\n===lustre_rsync===\n\nThe lustre_rsync feature provides namespace and data replication to an external (remote) backup system without having to scan the file system for inode changes and modification times. Lustre metadata changelogs are used to record file system changes and determine which directory and file operations to execute on the replicated system. The lustre_rsync feature differs from existing backup/replication/synchronization systems because it avoids full file system scans, which can be unreasonably time-consuming for very large file systems. Also, the lustre_rsync process can be resumed from where it left off, so the replicated file system is fully synchronized when the operation completes. Lustre_rsync may be bi-directional for distinct directories.\n\nThe replicated system may be another Lustre file system or any other file system. The replica is an exact copy of the namespace of the original file system at a given point in time. However, the replicated file system is '''not''' a snapshot of the source file system in that its contents may differ from the original file system's contents. On the replicated file system, a file's contents will be the data in the file at the time the file transfer occurred. 
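\n\nA typical invocation might look like the following (the file system, MDT device, changelog user, and paths are examples only); a changelog consumer is registered on the MDT first, and lustre_rsync then replays the recorded changes against the target:\n\n lctl --device lustre-MDT0000 changelog_register\n lustre_rsync --source=/mnt/lustre --target=/mnt/target --mdt=lustre-MDT0000 --user=cl1 --statuslog sync.log\n\nBecause replication state is kept in the status log, a later run with the same --statuslog file resumes from the last processed changelog record rather than rescanning the file system.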
\n\n<big>Why should I upgrade to Lustre 2.0.0 to get it?</big>\n\nLustre_rsync offers these benefits:\n\n* Namespace-coherent duplication of large file systems without scanning the complete file system\n* Functionality is safe when run repeatedly or run after an aborted attempt\n* Synchronization facility to switch the role of source and target file systems \n* In the case of recovery, the feature provides for reverse replication\n\n<big>Additional Resources</big>\n[http://doc.lustre.org/lustre_manual.xhtml#idm140224878349360 Lustre_rsync - Lustre 2.0 manual]"
                    }
                ]
            }
        }
    }
}