Network File System (NFS) Upper Layer Binding To RPC-Over-RDMA Transport

Oracle Corporation
1015 Granger Avenue
Ann Arbor, MI 48104
USA
+1 734 274 2396
chuck.lever@oracle.com
Network File System Version 4, NFS-Over-RDMA

This document specifies Upper Layer Bindings of Network File System (NFS) protocol versions to RPC-over-RDMA transports. These bindings are required to enable RPC-based protocols to use direct data placement on RPC-over-RDMA transports. This document obsoletes RFC 5667.

An RPC-over-RDMA transport, such as the one defined in RPC-over-RDMA Version One, may employ direct data placement to transmit large data payloads associated with RPC transactions. Each RPC-over-RDMA transport header conveys lists of memory locations corresponding to XDR data items defined in an Upper Layer Protocol (such as NFS). To facilitate interoperation, RPC client and server implementations must agree in advance on which XDR data items in which RPC procedures are eligible for direct data placement (DDP).

This document contains material required of Upper Layer Bindings, as specified in RPC-over-RDMA Version One, for the following NFS protocol versions:

- NFS version 2
- NFS version 3
- NFS version 4.0
- NFS version 4.1
- NFS version 4.2

Corrections and updates made necessary by new language in RPC-over-RDMA Version One have been introduced. For example, references to deprecated features of RPC-over-RDMA Version One, such as RDMA_MSGP and the use of the Read list for handling RPC replies, have been removed. The term "mapping" has been replaced with the term "binding" or "Upper Layer Binding" throughout the document. Material that duplicates the transport specification has been deleted. Material required of Upper Layer Bindings that was not present in RFC 5667 has been added, including discussion of how each NFS version properly estimates the maximum size of RPC replies.

The following changes have been made relative to RFC 5667: Ambiguous or erroneous uses of RFC 2119 terms have been corrected. References to specific data movement mechanisms have been made generic or removed. References to obsolete RFCs have been replaced. Technical corrections have been made; for example, mentions of 12KB and 36KB inline thresholds have been removed.
The reference to a non-existent NFS version 4 SYMLINK operation has been replaced with NFS version 4 CREATE(NF4LNK). The discussion of NFS version 4 COMPOUND handling has been completed. An IANA Considerations section has replaced the "Port Usage Considerations" section. Code excerpts have been removed, and figures have been modernized. Language inconsistent with or contradictory to RPC-over-RDMA Version One has been removed from Sections 2 and 3, and both sections have been combined into Section 2 in the present document. An explicit discussion of NFSv4.0 and NFSv4.1 backchannel operation replaces the previous treatment of callback operations. No NFSv4.x callback operation is DDP-eligible. The binding for NFSv4.1 has been completed. No DDP-eligible operations exist in NFSv4.1 that did not exist in NFSv4.0. A binding for NFSv4.2 has been added that includes discussion of new data-bearing operations such as READ_PLUS.

As stated earlier, RPC programs such as NFS are required to have an Upper Layer Binding specification in order to interoperate on RPC-over-RDMA transports. The Upper Layer Binding specified in this document can be extended to cover versions of the NFS version 4 protocol specified after NFS version 4 minor version 2 via standards action. This includes NFSv4 extensions that are documented separately from a new minor version.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

Definitions of terminology and a general discussion of how RPC-over-RDMA is used to convey RPC transactions can be found in the RPC-over-RDMA transport specification. In this section, these general principles are applied to the specifics of the NFS protocol.

The Read list in each RPC-over-RDMA transport header represents a set of memory regions containing DDP-eligible NFS argument data. Large data items, such as the data payload of an NFS WRITE request, are referenced by the Read list.
The server places these directly into its own memory. XDR unmarshaling code on the NFS server identifies the correspondence between Read chunks and particular NFS arguments via the chunk Position value encoded in each Read chunk.

The Write list in each RPC-over-RDMA transport header represents a set of memory regions that can receive DDP-eligible NFS result data. Large data items, such as the data payload of an NFS READ request, are referenced by the Write list. The server places these directly into client memory. Each Write chunk corresponds to a specific XDR data item in an NFS reply. This document specifies how NFS client and server implementations identify the correspondence between Write chunks and XDR results.

Each Read chunk is represented as a list of segments at the same XDR Position, and each Write chunk is represented as an array of segments. An NFS client thus has the flexibility to advertise a set of discontiguous memory regions in which to send or receive a single DDP-eligible data item.

Small RPC messages are conveyed using RDMA Send operations, which are of limited size. If an NFS request is too large to be conveyed via an RDMA Send, and there are no DDP-eligible data items that can be removed, an NFS client must send the request using a Long Call: the entire NFS request is sent in a special Read chunk called a Position-Zero Read chunk. If a client predicts that the maximum size of an NFS reply is too large to be conveyed via an RDMA Send, it provides a Reply chunk in the RPC-over-RDMA transport header conveying the NFS request. The server can place the entire NFS reply in the Reply chunk. These special chunks are described in more detail in the RPC-over-RDMA transport specification.

An NFS client MAY send a single Read chunk to supply opaque file data for an NFS WRITE procedure, or the pathname for an NFS SYMLINK procedure. For all other NFS procedures, NFS servers MUST ignore Read chunks that have a non-zero value in their Position fields, as well as Read chunks beyond the first in the Read list.
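The client's choice among these transmission methods can be sketched as follows. This is a minimal illustration, not taken from any implementation: the function name and the threshold values are hypothetical (real inline thresholds are properties of each connection), and only the decision logic described above is modeled.

```python
# Hypothetical sketch of a client's transmission-method decision for an
# RPC-over-RDMA call. Names and threshold values are illustrative only.

CALL_INLINE_THRESHOLD = 4096   # largest Send the server will accept
REPLY_INLINE_THRESHOLD = 4096  # largest Send this client will accept

def choose_transmission(call_size, max_reply_size, ddp_eligible_size):
    """Return (how_to_send_call, needs_reply_chunk).

    call_size         -- size of the marshaled NFS call, in bytes
    max_reply_size    -- client's estimate of the largest possible reply
    ddp_eligible_size -- bytes that can be moved into Read chunks
    """
    # Moving DDP-eligible payloads into Read chunks shrinks the inline call.
    inline_call = call_size - ddp_eligible_size
    if inline_call <= CALL_INLINE_THRESHOLD:
        how = "RDMA_MSG"   # short call: inline Send, plus any Read chunks
    else:
        how = "Long Call"  # entire call in a Position-Zero Read chunk
    # A Reply chunk is needed when the reply might not fit in a Send.
    needs_reply_chunk = max_reply_size > REPLY_INLINE_THRESHOLD
    return how, needs_reply_chunk
```

For example, a 66000-byte WRITE call carrying a 65536-byte DDP-eligible payload still fits inline once the payload moves to a Read chunk, while a large GETATTR reply estimate triggers a Reply chunk.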
Similarly, an NFS client MAY provide a single Write chunk to receive either opaque file data from an NFS READ procedure or the pathname from an NFS READLINK procedure. NFS servers MUST ignore the Write list for any other NFS procedure, as well as any Write chunks beyond the first in the Write list.

No NFS version 2 or 3 procedure has DDP-eligible data items in both its Call and its Reply. However, when an NFS client sends a Long Call or Reply, it MAY provide a combination of Read list, Write list, and/or Reply chunk in the same RPC-over-RDMA header.

If an NFS client has not provided enough bytes in a Read list to match the size of a DDP-eligible NFS argument data item, has not provided enough Write list resources to handle an NFS READ or READLINK reply, or has not provided a large enough Reply chunk to convey an NFS reply, the server MUST return one of:

- An RPC-over-RDMA message of type RDMA_ERROR, with the rdma_xid field set to the XID of the matching NFS Call and the rdma_error field set to ERR_CHUNK; or
- An RPC message with the mtype field set to REPLY, the stat field set to MSG_ACCEPTED, and the accept_stat field set to GARBAGE_ARGS.

NFS clients already successfully estimate the maximum reply size of each operation in order to provide an adequate set of buffers to receive each NFS reply. An NFS client provides a Reply chunk when the maximum possible reply size is larger than the client's responder inline threshold.

This specification applies to NFS version 4.0, NFS version 4.1, and NFS version 4.2. It also applies to the callback protocols associated with each of these minor versions.

An NFS client MAY send a Read chunk to supply opaque file data for a WRITE operation, or the pathname for a CREATE(NF4LNK) operation, in an NFS version 4 COMPOUND procedure.
An NFS client MUST NOT send a Read chunk that corresponds to any other XDR data item in any other NFS version 4 operation in an NFS version 4 COMPOUND procedure, or in an NFS version 4 NULL procedure.

Similarly, an NFS client MAY provide a Write chunk to receive opaque file data from a READ operation, NFS4_CONTENT_DATA from a READ_PLUS operation, or the pathname from a READLINK operation in an NFS version 4 COMPOUND procedure. An NFS client MUST NOT provide a Write chunk that corresponds to any other XDR data item in any other NFS version 4 operation in an NFS version 4 COMPOUND procedure, or in an NFS version 4 NULL procedure.

There is no prohibition against an NFS version 4 COMPOUND procedure constructed with, say, both a READ and a WRITE operation. Thus it is possible for NFS version 4 COMPOUND procedures to use both the Read list and the Write list simultaneously. An NFS client MAY provide a Read list and a Write list in the same transaction if it is sending a Long Call or Reply.

If an NFS client has not provided enough bytes in a Read list to match the size of a DDP-eligible NFS argument data item, has not provided enough Write list resources to handle a READ, READ_PLUS, or READLINK operation, or has not provided a large enough Reply chunk to convey an NFS reply, the server MUST return one of:

- An RPC-over-RDMA message of type RDMA_ERROR, with the rdma_xid field set to the XID of the matching NFS Call and the rdma_error field set to ERR_CHUNK; or
- An RPC message with the mtype field set to REPLY, the stat field set to MSG_ACCEPTED, and the accept_stat field set to GARBAGE_ARGS.

An NFS client provides a Reply chunk when the maximum possible reply size is larger than the client's responder inline threshold. NFS clients successfully estimate the maximum reply size of most operations in order to provide an adequate set of buffers to receive each NFS reply.
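The NFS version 4 DDP-eligibility rules above can be summarized as a short table-driven check. The set and function names below are hypothetical, for illustration only; the operation names follow the NFSv4 specifications ("CREATE_NF4LNK" stands in for CREATE(NF4LNK)).

```python
# Illustrative summary of the NFS version 4 DDP-eligibility rules.
# Only the listed operations carry DDP-eligible XDR data items; every
# other operation, and the NULL procedure, is never DDP-eligible.

READ_CHUNK_ELIGIBLE = {"WRITE", "CREATE_NF4LNK"}          # argument data
WRITE_CHUNK_ELIGIBLE = {"READ", "READ_PLUS", "READLINK"}  # result data

def ddp_eligible(operation: str, direction: str) -> bool:
    """Return True if 'operation' has a DDP-eligible item in 'direction'.

    direction is "call" for Read chunks (arguments) or "reply" for
    Write chunks (results).
    """
    if direction == "call":
        return operation in READ_CHUNK_ELIGIBLE
    if direction == "reply":
        return operation in WRITE_CHUNK_ELIGIBLE
    return False
```

A server-side sanity check could use such a table to decide when a presented chunk must be ignored or rejected.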
There are certain NFSv4 data items whose size cannot be reliably estimated by clients, however, because there is no protocol-specified size limit on these structures. These include, but are not limited to:

- opaque types such as the attrlist4 field;
- fields containing ACLs, such as fattr4_acl, fattr4_dacl, and fattr4_sacl;
- fields in the fs_locations4 and fs_locations_info4 data structures; and
- the opaque fields loc_body, loh_body, da_addr_body, lou_body, lrf_body, fattr_layout_types, and fs_layout_types, which pertain to pNFS layout metadata.

An NFS version 4 COMPOUND procedure supplies arguments for a sequence of operations and returns results from that sequence. A client MAY construct an NFS version 4 COMPOUND procedure that uses more than one chunk in the Read list or the Write list. The NFS client provides XDR Position values in each Read chunk to disambiguate which chunk is associated with which XDR data item. However, NFS server and client implementations must agree in advance on how to pair Write chunks with returned result data items. The mechanism specified in the RPC-over-RDMA transport specification is applied here:

- The first chunk in the Write list MUST be used by the first READ or READLINK operation in an NFS version 4 COMPOUND procedure. The next Write chunk is used by the next READ or READLINK, and so on.
- If there are more READ or READLINK operations than Write chunks, then any remaining operations MUST return their results inline.
- If an NFS client presents a Write chunk, then the corresponding READ or READLINK operation MUST return its data by placing it into that chunk.
- If the Write chunk has zero RDMA segments, or if the total size of the segments is zero, then the corresponding READ or READLINK operation MUST return its result inline.

The following example shows a Write list with three Write chunks: A, B, and C. The server consumes the provided Write chunks by writing the results of the designated operations in the COMPOUND request, READ and READLINK, back to each chunk.
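The pairing rule can be sketched as a simple walk over the COMPOUND's operations. The function below is a hypothetical illustration of the mechanism described above, not an excerpt from any implementation; chunks are modeled simply as lists of segment lengths.

```python
# Sketch of pairing Write chunks with READ/READLINK operations in a
# COMPOUND: the first Write chunk pairs with the first data-returning
# operation, the next chunk with the next such operation, and so on.
# Leftover operations, and operations paired with an empty or all-zero
# chunk, return their results inline.

def pair_write_chunks(operations, write_list):
    """Map each READ/READLINK op index to a Write chunk index or 'inline'.

    operations -- operation names in COMPOUND order
    write_list -- Write chunks, each modeled as a list of segment lengths
    """
    placement = {}
    next_chunk = 0
    for i, op in enumerate(operations):
        if op not in ("READ", "READLINK"):
            continue
        if next_chunk < len(write_list):
            chunk = write_list[next_chunk]
            # Zero segments, or zero total length, means "return inline".
            placement[i] = next_chunk if sum(chunk) > 0 else "inline"
            next_chunk += 1
        else:
            placement[i] = "inline"
    return placement
```

Applied to the three-chunk example above, a COMPOUND of PUTFH, READ, READLINK, READ with chunks A, B (zero-length), and C places the first READ in A, returns the READLINK result inline, and places the second READ in C.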
If the client does not want the READLINK result returned directly, it provides a zero-length array of segment triplets for buffer B, or sets the values in buffer B's segment triplet to zeros, to indicate that the READLINK result must be returned inline.

Unlike NFS versions 2 and 3, the maximum size of an NFS version 4 COMPOUND is not bounded. However, typical NFS version 4 clients rarely issue such problematic requests; in practice, NFS version 4 clients behave in much more predictable ways. The rsize and wsize settings apply to COMPOUND procedures by capping the total amount of data payload allowed in each COMPOUND. An extension to NFS version 4 supporting a comprehensive exchange of upper-layer message size parameters is specified separately.

The NFS version 4 protocols support server-initiated callbacks to notify clients of events such as recalled delegations. There are no DDP-eligible data items in the callback protocols associated with NFSv4.0, NFSv4.1, or NFSv4.2. In NFS versions 4.1 and 4.2, callback operations may appear on the same connection as the one used for NFS version 4 client requests. NFS version 4 clients and servers MUST use the mechanism for bi-directional RPC-over-RDMA operation when backchannel operations are conveyed on RPC-over-RDMA transports.

NFS use of direct data placement introduces a need for an additional NFS port number assignment for networks that share traditional UDP and TCP port spaces with RDMA services. The iWARP protocol is one such example (InfiniBand is not). NFS servers for versions 2 and 3 traditionally listen for clients on UDP and TCP port 2049 and, additionally, register these ports with the portmapper and/or rpcbind service. However, NFS servers for version 4 are required to listen on TCP port 2049 and are not required to register.

An NFS version 2 or version 3 server supporting RPC-over-RDMA on such a network and registering itself with the RPC portmapper MAY choose an arbitrary port, or MAY use the alternative well-known port number for its RPC-over-RDMA service.
The chosen port MAY be registered with the RPC portmapper under the netid assigned for the RPC-over-RDMA transport. An NFS version 4 server supporting RPC-over-RDMA on such a network MUST use the alternative well-known port number for its RPC-over-RDMA service. Clients SHOULD connect to this well-known port without consulting the RPC portmapper (as they do for NFSv4 on TCP). The port number assigned to an NFS service over an RPC-over-RDMA transport is available from the IANA port registry.

The RDMA transport for RPC supports all RPC security models, including RPCSEC_GSS security and transport-level security. The choice of RDMA Read and RDMA Write to convey RPC arguments and results does not affect this, since it changes only the method of data transfer. Specifically, the requirements of the RPC-over-RDMA transport specification ensure that this choice does not introduce new vulnerabilities. Because this document defines only the binding of the NFS protocols atop RPC-over-RDMA transports, all relevant security considerations are described at that layer.

The author gratefully acknowledges the work of Brent Callaghan and Tom Talpey on the original NFS Direct Data Placement specification. The author also wishes to thank Bill Baker and Greg Marsden for their support of this work. Dave Noveck provided excellent review, constructive suggestions, and consistent navigational guidance throughout the process of drafting this document. Special thanks go to nfsv4 Working Group Chair Spencer Shepler and nfsv4 Working Group Secretary Thomas Haynes for their support.

References

- Bi-directional Remote Procedure Call on RPC-over-RDMA Transports: Minor versions of NFSv4 newer than NFSv4.0 work best when ONC RPC transports can send Remote Procedure Call transactions in both directions on the same connection.
This document describes how RPC-over-RDMA transport endpoints convey RPCs in both directions on a single connection.

- Remote Direct Memory Access Transport for Remote Procedure Call, Version One: This document specifies a protocol for conveying Remote Procedure Call (RPC) messages on physical transports capable of Remote Direct Memory Access (RDMA). It requires no revision to application RPC protocols or the RPC protocol itself. This document obsoletes RFC 5666.

- NFS Version 4 Minor Version 2: This Internet-Draft describes NFS version 4 minor version 2, describing the protocol extensions made from NFS version 4 minor version 1. Major extensions introduced in NFS version 4 minor version 2 include Server-Side Copy, Application Input/Output (I/O) Advise, Space Reservations, Sparse Files, Application Data Blocks, and Labeled NFS.

- Binding Protocols for ONC RPC Version 2: This document describes the binding protocols used in conjunction with the ONC Remote Procedure Call (ONC RPC Version 2) protocols.

- Key Words for Use in RFCs to Indicate Requirement Levels: In many standards-track documents, several words are used to signify the requirements in the specification. These words are often capitalized. This document defines these words as they should be interpreted in IETF documents. This document specifies an Internet Best Current Practice for the Internet community, and requests discussion and suggestions for improvements.

- RPCSEC_GSS Protocol Specification: This memo describes an ONC/RPC security flavor that allows RPC protocols to access the Generic Security Services Application Programming Interface (GSS-API).

- RPC: Remote Procedure Call Protocol Specification Version 2: This document describes the Open Network Computing (ONC) Remote Procedure Call (RPC) version 2 protocol as it is currently deployed and accepted. This document obsoletes RFC 1831.

- Network File System (NFS) Version 4 Minor Version 1 Protocol: This document describes the Network File System (NFS) version 4 minor version 1, including features retained from the base protocol (NFS version 4 minor version 0, which is specified in RFC 3530) and protocol extensions made subsequently. Major extensions introduced in NFS version 4 minor version 1 include Sessions, Directory Delegations, and parallel NFS (pNFS). NFS version 4 minor version 1 has no dependencies on NFS version 4 minor version 0, and it is considered a separate protocol. Thus, this document neither updates nor obsoletes RFC 3530. NFS minor version 1 is deemed superior to NFS minor version 0 with no loss of functionality, and its use is preferred over version 0. Both NFS minor versions 0 and 1 can be used simultaneously on the same network, between the same client and server.

- Network File System (NFS) Version 4 Protocol: The Network File System (NFS) version 4 protocol is a distributed file system protocol that builds on the heritage of NFS protocol version 2 (RFC 1094) and version 3 (RFC 1813). Unlike earlier versions, the NFS version 4 protocol supports traditional file access while integrating support for file locking and the MOUNT protocol. In addition, support for strong security (and its negotiation), COMPOUND operations, client caching, and internationalization has been added. Of course, attention has been applied to making NFS version 4 operate well in an Internet environment. This document, together with the companion External Data Representation (XDR) description document, RFC 7531, obsoletes RFC 3530 as the definition of the NFS version 4 protocol.

- NFS: Network File System Protocol Specification: This RFC describes a protocol that Sun Microsystems, Inc., and others are using. A new version of the protocol is under development, but others may benefit from the descriptions of the current protocol and discussion of some of the design issues.

- NFS Version 3 Protocol Specification: This paper describes the NFS version 3 protocol. This paper is provided so that people can write compatible implementations. This memo provides information for the Internet community. It does not specify an Internet standard of any kind.

- Assigned Numbers: RFC 1700 Is Replaced by an On-line Database: This memo obsoletes RFC 1700 (STD 2), "Assigned Numbers", which contained an October 1994 snapshot of assigned Internet protocol parameters. This memo provides information for the Internet community.

- A Remote Direct Memory Access Protocol Specification: This document defines a Remote Direct Memory Access Protocol (RDMAP) that operates over the Direct Data Placement Protocol (DDP). RDMAP provides read and write services directly to applications and enables data to be transferred directly into Upper Layer Protocol (ULP) buffers without intermediate data copies. It also enables a kernel-bypass implementation.

- Direct Data Placement over Reliable Transports: The Direct Data Placement protocol provides information to place incoming data directly into an upper-layer protocol's receive buffer without intermediate buffers. This removes excess CPU and memory utilization associated with transferring data through intermediate buffers.

- Network File System (NFS) Direct Data Placement: This document defines the bindings of the various Network File System (NFS) versions to the Remote Direct Memory Access (RDMA) operations supported by the RPC/RDMA transport protocol. It describes the use of direct data placement by means of server-initiated RDMA operations into client-supplied buffers for implementations of NFS versions 2, 3, 4, and 4.1 over such an RDMA transport.