/*
 * QEMU NVM Express Controller
 *
 * Copyright (c) 2012, Intel Corporation
 *
 * Written by Keith Busch <keith.busch@intel.com>
 *
 * This code is licensed under the GNU GPL v2 or later.
 */

/**
 * Reference Specs: http://www.nvmexpress.org, 1.4, 1.3, 1.2, 1.1, 1.0e
 *
 *   https://nvmexpress.org/developers/nvme-specification/
 *
 *
 * Notes on coding style
 * ---------------------
 * While QEMU coding style prefers lowercase hexadecimals in constants, the
 * NVMe subsystem uses the format from the NVMe specifications in the comments
 * (i.e. 'h' suffix instead of '0x' prefix).
 *
 * Usage
 * -----
 * See docs/system/nvme.rst for extensive documentation.
 *
 * Add options:
 *      -drive file=<file>,if=none,id=<drive_id>
 *      -device nvme-subsys,id=<subsys_id>,nqn=<nqn_id>
 *      -device nvme,serial=<serial>,id=<bus_name>, \
 *              cmb_size_mb=<cmb_size_mb[optional]>, \
 *              [pmrdev=<mem_backend_file_id>,] \
 *              max_ioqpairs=<N[optional]>, \
 *              aerl=<N[optional]>,aer_max_queued=<N[optional]>, \
 *              mdts=<N[optional]>,vsl=<N[optional]>, \
 *              zoned.zasl=<N[optional]>, \
 *              zoned.auto_transition=<on|off[optional]>, \
 *              sriov_max_vfs=<N[optional]> \
 *              sriov_vq_flexible=<N[optional]> \
 *              sriov_vi_flexible=<N[optional]> \
 *              sriov_max_vi_per_vf=<N[optional]> \
 *              sriov_max_vq_per_vf=<N[optional]> \
 *              subsys=<subsys_id>
 *      -device nvme-ns,drive=<drive_id>,bus=<bus_name>,nsid=<nsid>,\
 *              zoned=<true|false[optional]>, \
 *              subsys=<subsys_id>,detached=<true|false[optional]>
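 *
 * For instance, a minimal two-namespace setup along the lines of the options
 * above (file names and ids chosen purely for illustration) could look like:
 *
 *      -drive file=nvme0n1.img,if=none,id=disk1
 *      -drive file=nvme0n2.img,if=none,id=disk2
 *      -device nvme,serial=deadbeef,id=nvme0
 *      -device nvme-ns,drive=disk1,bus=nvme0,nsid=1
 *      -device nvme-ns,drive=disk2,bus=nvme0,nsid=2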
 *
 * Note that cmb_size_mb denotes the size of the CMB in MB. CMB is assumed to
 * be at offset 0 in BAR2 and supports only WDS, RDS and SQS for now. By
 * default, the device will use the "v1.4 CMB scheme" - use the `legacy-cmb`
 * parameter to always enable the CMBLOC and CMBSZ registers (v1.3 behavior).
 *
 * Enabling pmr emulation can be achieved by pointing to memory-backend-file.
 * For example:
 * -object memory-backend-file,id=<mem_id>,share=on,mem-path=<file_path>, \
 *  size=<size> .... -device nvme,...,pmrdev=<mem_id>
 *
 * The PMR will use BAR 4/5 exclusively.
 *
 * To place controller(s) and namespace(s) in a subsystem, provide the
 * nvme-subsys device as above.
 *
 * nvme subsystem device parameters
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 * - `nqn`
 *   This parameter provides the `<nqn_id>` part of the string
 *   `nqn.2019-08.org.qemu:<nqn_id>` which will be reported in the SUBNQN field
 *   of subsystem controllers. Note that `<nqn_id>` should be unique per
 *   subsystem, but this is not enforced by QEMU. If not specified, it will
 *   default to the value of the `id` parameter (`<subsys_id>`).
 *
 * nvme device parameters
 * ~~~~~~~~~~~~~~~~~~~~~~
 * - `subsys`
 *   Specifying this parameter attaches the controller to the subsystem and
 *   the SUBNQN field in the controller will report the NQN of the subsystem
 *   device. This also enables multi controller capability represented in
 *   Identify Controller data structure in CMIC (Controller Multi-path I/O and
 *   Namespace Sharing Capabilities).
 *
 * - `aerl`
 *   The Asynchronous Event Request Limit (AERL). Indicates the maximum number
 *   of concurrently outstanding Asynchronous Event Request commands supported
 *   by the controller. This is a 0's based value.
 *
 * - `aer_max_queued`
 *   This is the maximum number of events that the device will enqueue for
 *   completion when there are no outstanding AERs. When the maximum number of
 *   enqueued events is reached, subsequent events will be dropped.
 *
 * - `mdts`
 *   Indicates the maximum data transfer size for a command that transfers data
 *   between host-accessible memory and the controller. The value is specified
 *   as a power of two (2^n) and is in units of the minimum memory page size
 *   (CAP.MPSMIN). The default value is 7 (i.e. 512 KiB).
 *
 * - `vsl`
 *   Indicates the maximum data size limit for the Verify command. Like `mdts`,
 *   this value is specified as a power of two (2^n) and is in units of the
 *   minimum memory page size (CAP.MPSMIN). The default value is 7 (i.e. 512
 *   KiB).
 *
 * - `zoned.zasl`
 *   Indicates the maximum data transfer size for the Zone Append command. Like
 *   `mdts`, the value is specified as a power of two (2^n) and is in units of
 *   the minimum memory page size (CAP.MPSMIN). The default value is 0 (i.e.
 *   defaulting to the value of `mdts`).
 *
 * - `zoned.auto_transition`
 *   Indicates if zones in zone state implicitly opened can be automatically
 *   transitioned to zone state closed for resource management purposes.
 *   Defaults to 'on'.
 *
 * - `sriov_max_vfs`
 *   Indicates the maximum number of PCIe virtual functions supported
 *   by the controller. The default value is 0. Specifying a non-zero value
 *   enables reporting of both SR-IOV and ARI capabilities by the NVMe device.
 *   Virtual function controllers will not report SR-IOV capability.
 *
 *   NOTE: Single Root I/O Virtualization support is experimental.
 *   All the related parameters may be subject to change.
 *
 * - `sriov_vq_flexible`
 *   Indicates the total number of flexible queue resources assignable to all
 *   the secondary controllers. Implicitly sets the number of primary
 *   controller's private resources to `(max_ioqpairs - sriov_vq_flexible)`.
 *
 * - `sriov_vi_flexible`
 *   Indicates the total number of flexible interrupt resources assignable to
 *   all the secondary controllers. Implicitly sets the number of primary
 *   controller's private resources to `(msix_qsize - sriov_vi_flexible)`.
 *
 * - `sriov_max_vi_per_vf`
 *   Indicates the maximum number of virtual interrupt resources assignable
 *   to a secondary controller. The default 0 resolves to
 *   `(sriov_vi_flexible / sriov_max_vfs)`.
 *
 * - `sriov_max_vq_per_vf`
 *   Indicates the maximum number of virtual queue resources assignable to
 *   a secondary controller. The default 0 resolves to
 *   `(sriov_vq_flexible / sriov_max_vfs)`.
 *
 * nvme namespace device parameters
 * ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 * - `shared`
 *   When the parent nvme device (as defined explicitly by the 'bus' parameter
 *   or implicitly by the most recently defined NvmeBus) is linked to an
 *   nvme-subsys device, the namespace will be attached to all controllers in
 *   the subsystem. If set to 'off' (the default), the namespace will remain a
 *   private namespace and may only be attached to a single controller at a
 *   time.
 *
 * - `detached`
 *   This parameter is only valid together with the `subsys` parameter. If left
 *   at the default value (`false/off`), the namespace will be attached to all
 *   controllers in the NVMe subsystem at boot-up. If set to `true/on`, the
 *   namespace will be available in the subsystem but not attached to any
 *   controllers.
 *
 * Setting `zoned` to true selects the Zoned Command Set for the namespace.
 * In this case, the following namespace properties are available to configure
 * zoned operation:
 *     zoned.zone_size=<zone size in bytes, default: 128MiB>
 *         The number may be followed by K, M, G as in kilo-, mega- or giga-.
 *
 *     zoned.zone_capacity=<zone capacity in bytes, default: zone size>
 *         The value 0 (default) forces zone capacity to be the same as zone
 *         size. The value of this property may not exceed zone size.
 *
 *     zoned.descr_ext_size=<zone descriptor extension size, default 0>
 *         This value needs to be specified in 64B units. If it is zero,
 *         namespace(s) will not support zone descriptor extensions.
 *
 *     zoned.max_active=<Maximum Active Resources (zones), default: 0>
 *         The default value means there is no limit to the number of
 *         concurrently active zones.
 *
 *     zoned.max_open=<Maximum Open Resources (zones), default: 0>
 *         The default value means there is no limit to the number of
 *         concurrently open zones.
 *
 *     zoned.cross_read=<enable RAZB, default: false>
 *         Setting this property to true enables Read Across Zone Boundaries.
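 *
 * An illustrative zoned namespace built from the properties above (sizes and
 * ids are arbitrary example values):
 *
 *      -drive file=zns.img,if=none,id=zns-drive
 *      -device nvme,serial=deadbeef,id=nvme0
 *      -device nvme-ns,drive=zns-drive,bus=nvme0,nsid=1,zoned=true, \
 *              zoned.zone_size=64M,zoned.max_open=16,zoned.max_active=32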
 */

#include "qemu/osdep.h"
#include "qemu/cutils.h"
#include "qemu/error-report.h"
#include "qemu/log.h"
#include "qemu/units.h"
#include "qemu/range.h"
#include "qapi/error.h"
#include "qapi/visitor.h"
#include "sysemu/sysemu.h"
#include "sysemu/block-backend.h"
#include "sysemu/hostmem.h"
#include "hw/pci/msix.h"
#include "hw/pci/pcie_sriov.h"
#include "migration/vmstate.h"

#include "nvme.h"
#include "dif.h"
#include "trace.h"

#define NVME_MAX_IOQPAIRS 0xffff
#define NVME_DB_SIZE 4
#define NVME_SPEC_VER 0x00010400
#define NVME_CMB_BIR 2
#define NVME_PMR_BIR 4
#define NVME_TEMPERATURE 0x143
#define NVME_TEMPERATURE_WARNING 0x157
#define NVME_TEMPERATURE_CRITICAL 0x175
#define NVME_NUM_FW_SLOTS 1
#define NVME_DEFAULT_MAX_ZA_SIZE (128 * KiB)
#define NVME_MAX_VFS 127
#define NVME_VF_RES_GRANULARITY 1
#define NVME_VF_OFFSET 0x1
#define NVME_VF_STRIDE 1

#define NVME_GUEST_ERR(trace, fmt, ...) \
    do { \
        (trace_##trace)(__VA_ARGS__); \
        qemu_log_mask(LOG_GUEST_ERROR, #trace \
                      " in %s: " fmt "\n", __func__, ## __VA_ARGS__); \
    } while (0)

static const bool nvme_feature_support[NVME_FID_MAX] = {
    [NVME_ARBITRATION]              = true,
    [NVME_POWER_MANAGEMENT]         = true,
    [NVME_TEMPERATURE_THRESHOLD]    = true,
    [NVME_ERROR_RECOVERY]           = true,
    [NVME_VOLATILE_WRITE_CACHE]     = true,
    [NVME_NUMBER_OF_QUEUES]         = true,
    [NVME_INTERRUPT_COALESCING]     = true,
    [NVME_INTERRUPT_VECTOR_CONF]    = true,
    [NVME_WRITE_ATOMICITY]          = true,
    [NVME_ASYNCHRONOUS_EVENT_CONF]  = true,
    [NVME_TIMESTAMP]                = true,
    [NVME_HOST_BEHAVIOR_SUPPORT]    = true,
    [NVME_COMMAND_SET_PROFILE]      = true,
    [NVME_FDP_MODE]                 = true,
    [NVME_FDP_EVENTS]               = true,
};

static const uint32_t nvme_feature_cap[NVME_FID_MAX] = {
    [NVME_TEMPERATURE_THRESHOLD]    = NVME_FEAT_CAP_CHANGE,
    [NVME_ERROR_RECOVERY]           = NVME_FEAT_CAP_CHANGE | NVME_FEAT_CAP_NS,
    [NVME_VOLATILE_WRITE_CACHE]     = NVME_FEAT_CAP_CHANGE,
    [NVME_NUMBER_OF_QUEUES]         = NVME_FEAT_CAP_CHANGE,
    [NVME_ASYNCHRONOUS_EVENT_CONF]  = NVME_FEAT_CAP_CHANGE,
    [NVME_TIMESTAMP]                = NVME_FEAT_CAP_CHANGE,
    [NVME_HOST_BEHAVIOR_SUPPORT]    = NVME_FEAT_CAP_CHANGE,
    [NVME_COMMAND_SET_PROFILE]      = NVME_FEAT_CAP_CHANGE,
    [NVME_FDP_MODE]                 = NVME_FEAT_CAP_CHANGE,
    [NVME_FDP_EVENTS]               = NVME_FEAT_CAP_CHANGE | NVME_FEAT_CAP_NS,
};

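/*
 * Commands Supported and Effects (CSE) tables for the admin and I/O command
 * sets. Roughly speaking, an opcode without NVME_CMD_EFF_CSUPP set is treated
 * as unsupported, and NVME_CMD_EFF_LBCC marks commands that may change
 * logical block contents.
 */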
static const uint32_t nvme_cse_acs[256] = {
    [NVME_ADM_CMD_DELETE_SQ]        = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_CREATE_SQ]        = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_GET_LOG_PAGE]     = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_DELETE_CQ]        = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_CREATE_CQ]        = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_IDENTIFY]         = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_ABORT]            = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_SET_FEATURES]     = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_GET_FEATURES]     = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_ASYNC_EV_REQ]     = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_NS_ATTACHMENT]    = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_NIC,
    [NVME_ADM_CMD_VIRT_MNGMT]       = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_DBBUF_CONFIG]     = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_FORMAT_NVM]       = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_ADM_CMD_DIRECTIVE_RECV]   = NVME_CMD_EFF_CSUPP,
    [NVME_ADM_CMD_DIRECTIVE_SEND]   = NVME_CMD_EFF_CSUPP,
};

static const uint32_t nvme_cse_iocs_none[256];

static const uint32_t nvme_cse_iocs_nvm[256] = {
    [NVME_CMD_FLUSH]                = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_WRITE_ZEROES]         = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_WRITE]                = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_READ]                 = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_DSM]                  = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_VERIFY]               = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_COPY]                 = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_COMPARE]              = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_IO_MGMT_RECV]         = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_IO_MGMT_SEND]         = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
};

static const uint32_t nvme_cse_iocs_zoned[256] = {
    [NVME_CMD_FLUSH]                = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_WRITE_ZEROES]         = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_WRITE]                = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_READ]                 = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_DSM]                  = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_VERIFY]               = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_COPY]                 = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_COMPARE]              = NVME_CMD_EFF_CSUPP,
    [NVME_CMD_ZONE_APPEND]          = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_ZONE_MGMT_SEND]       = NVME_CMD_EFF_CSUPP | NVME_CMD_EFF_LBCC,
    [NVME_CMD_ZONE_MGMT_RECV]       = NVME_CMD_EFF_CSUPP,
};

static void nvme_process_sq(void *opaque);
static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst);
static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n);

static uint16_t nvme_sqid(NvmeRequest *req)
{
    return le16_to_cpu(req->sq->sqid);
}

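/*
 * FDP placement identifier helpers. A 16-bit placement id (PID) packs the
 * reclaim group identifier in its upper 'rgif' bits and the placement handle
 * in the remaining low bits; when no reclaim group bits are configured
 * (rgif == 0), the PID is simply the placement handle.
 */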
static inline uint16_t nvme_make_pid(NvmeNamespace *ns, uint16_t rg,
                                     uint16_t ph)
{
    uint16_t rgif = ns->endgrp->fdp.rgif;

    if (!rgif) {
        return ph;
    }

    return (rg << (16 - rgif)) | ph;
}

static inline bool nvme_ph_valid(NvmeNamespace *ns, uint16_t ph)
{
    return ph < ns->fdp.nphs;
}

static inline bool nvme_rg_valid(NvmeEnduranceGroup *endgrp, uint16_t rg)
{
    return rg < endgrp->fdp.nrg;
}

static inline uint16_t nvme_pid2ph(NvmeNamespace *ns, uint16_t pid)
{
    uint16_t rgif = ns->endgrp->fdp.rgif;

    if (!rgif) {
        return pid;
    }

    return pid & ((1 << (15 - rgif)) - 1);
}

static inline uint16_t nvme_pid2rg(NvmeNamespace *ns, uint16_t pid)
{
    uint16_t rgif = ns->endgrp->fdp.rgif;

    if (!rgif) {
        return 0;
    }

    return pid >> (16 - rgif);
}

static inline bool nvme_parse_pid(NvmeNamespace *ns, uint16_t pid,
                                  uint16_t *ph, uint16_t *rg)
{
    *rg = nvme_pid2rg(ns, pid);
    *ph = nvme_pid2ph(ns, pid);

    return nvme_ph_valid(ns, *ph) && nvme_rg_valid(ns->endgrp, *rg);
}

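/*
 * Move a zone to a new state: drop it from the per-state list it currently
 * sits on (if any), record the new state and, for the tracked states, queue
 * it on the corresponding list. For any other target state the zone
 * attributes (d.za) are cleared.
 */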
static void nvme_assign_zone_state(NvmeNamespace *ns, NvmeZone *zone,
                                   NvmeZoneState state)
{
    if (QTAILQ_IN_USE(zone, entry)) {
        switch (nvme_get_zone_state(zone)) {
        case NVME_ZONE_STATE_EXPLICITLY_OPEN:
            QTAILQ_REMOVE(&ns->exp_open_zones, zone, entry);
            break;
        case NVME_ZONE_STATE_IMPLICITLY_OPEN:
            QTAILQ_REMOVE(&ns->imp_open_zones, zone, entry);
            break;
        case NVME_ZONE_STATE_CLOSED:
            QTAILQ_REMOVE(&ns->closed_zones, zone, entry);
            break;
        case NVME_ZONE_STATE_FULL:
            QTAILQ_REMOVE(&ns->full_zones, zone, entry);
        default:
            ;
        }
    }

    nvme_set_zone_state(zone, state);

    switch (state) {
    case NVME_ZONE_STATE_EXPLICITLY_OPEN:
        QTAILQ_INSERT_TAIL(&ns->exp_open_zones, zone, entry);
        break;
    case NVME_ZONE_STATE_IMPLICITLY_OPEN:
        QTAILQ_INSERT_TAIL(&ns->imp_open_zones, zone, entry);
        break;
    case NVME_ZONE_STATE_CLOSED:
        QTAILQ_INSERT_TAIL(&ns->closed_zones, zone, entry);
        break;
    case NVME_ZONE_STATE_FULL:
        QTAILQ_INSERT_TAIL(&ns->full_zones, zone, entry);
    case NVME_ZONE_STATE_READ_ONLY:
        break;
    default:
        zone->d.za = 0;
    }
}

static uint16_t nvme_zns_check_resources(NvmeNamespace *ns, uint32_t act,
                                         uint32_t opn, uint32_t zrwa)
{
    if (ns->params.max_active_zones != 0 &&
        ns->nr_active_zones + act > ns->params.max_active_zones) {
        trace_pci_nvme_err_insuff_active_res(ns->params.max_active_zones);
        return NVME_ZONE_TOO_MANY_ACTIVE | NVME_DNR;
    }

    if (ns->params.max_open_zones != 0 &&
        ns->nr_open_zones + opn > ns->params.max_open_zones) {
        trace_pci_nvme_err_insuff_open_res(ns->params.max_open_zones);
        return NVME_ZONE_TOO_MANY_OPEN | NVME_DNR;
    }

    if (zrwa > ns->zns.numzrwa) {
        return NVME_NOZRWA | NVME_DNR;
    }

    return NVME_SUCCESS;
}

/*
 * Check if we can open a zone without exceeding open/active limits.
 * AOR stands for "Active and Open Resources" (see TP 4053 section 2.5).
 */
static uint16_t nvme_aor_check(NvmeNamespace *ns, uint32_t act, uint32_t opn)
{
    return nvme_zns_check_resources(ns, act, opn, 0);
}

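/*
 * Allocate a slot in the FDP event buffer. The buffer is a fixed-size ring:
 * when it is full, the oldest event is overwritten (start advances along with
 * next). The returned event is zeroed and stamped with the current controller
 * timestamp.
 */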
static NvmeFdpEvent *nvme_fdp_alloc_event(NvmeCtrl *n, NvmeFdpEventBuffer *ebuf)
{
    NvmeFdpEvent *ret = NULL;
    bool is_full = ebuf->next == ebuf->start && ebuf->nelems;

    ret = &ebuf->events[ebuf->next++];
    if (unlikely(ebuf->next == NVME_FDP_MAX_EVENTS)) {
        ebuf->next = 0;
    }
    if (is_full) {
        ebuf->start = ebuf->next;
    } else {
        ebuf->nelems++;
    }

    memset(ret, 0, sizeof(NvmeFdpEvent));
    ret->timestamp = nvme_get_timestamp(n);

    return ret;
}

static inline int log_event(NvmeRuHandle *ruh, uint8_t event_type)
{
    return (ruh->event_filter >> nvme_fdp_evf_shifts[event_type]) & 0x1;
}

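/*
 * Reset the reclaim unit associated with the given placement id. If the
 * previous reclaim unit was not fully written, optionally log an FDP "RU not
 * fully written" host event and account the unwritten remainder as media
 * bytes written (the eventual GC overhead), then restore the reclaim unit's
 * remaining-write counter (ruamw). Returns false if the placement id does not
 * parse to a valid placement handle and reclaim group.
 */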
static bool nvme_update_ruh(NvmeCtrl *n, NvmeNamespace *ns, uint16_t pid)
{
    NvmeEnduranceGroup *endgrp = ns->endgrp;
    NvmeRuHandle *ruh;
    NvmeReclaimUnit *ru;
    NvmeFdpEvent *e = NULL;
    uint16_t ph, rg, ruhid;

    if (!nvme_parse_pid(ns, pid, &ph, &rg)) {
        return false;
    }

    ruhid = ns->fdp.phs[ph];

    ruh = &endgrp->fdp.ruhs[ruhid];
    ru = &ruh->rus[rg];

    if (ru->ruamw) {
        if (log_event(ruh, FDP_EVT_RU_NOT_FULLY_WRITTEN)) {
            e = nvme_fdp_alloc_event(n, &endgrp->fdp.host_events);
            e->type = FDP_EVT_RU_NOT_FULLY_WRITTEN;
            e->flags = FDPEF_PIV | FDPEF_NSIDV | FDPEF_LV;
            e->pid = cpu_to_le16(pid);
            e->nsid = cpu_to_le32(ns->params.nsid);
            e->rgid = cpu_to_le16(rg);
            e->ruhid = cpu_to_le16(ruhid);
        }

        /* log (eventual) GC overhead of prematurely swapping the RU */
        nvme_fdp_stat_inc(&endgrp->fdp.mbmw, nvme_l2b(ns, ru->ruamw));
    }

    ru->ruamw = ruh->ruamw;

    return true;
}

static bool nvme_addr_is_cmb(NvmeCtrl *n, hwaddr addr)
{
    hwaddr hi, lo;

    if (!n->cmb.cmse) {
        return false;
    }

    lo = n->params.legacy_cmb ? n->cmb.mem.addr : n->cmb.cba;
    hi = lo + int128_get64(n->cmb.mem.size);

    return addr >= lo && addr < hi;
}

static inline void *nvme_addr_to_cmb(NvmeCtrl *n, hwaddr addr)
{
    hwaddr base = n->params.legacy_cmb ? n->cmb.mem.addr : n->cmb.cba;
    return &n->cmb.buf[addr - base];
}

static bool nvme_addr_is_pmr(NvmeCtrl *n, hwaddr addr)
{
    hwaddr hi;

    if (!n->pmr.cmse) {
        return false;
    }

    hi = n->pmr.cba + int128_get64(n->pmr.dev->mr.size);

    return addr >= n->pmr.cba && addr < hi;
}

static inline void *nvme_addr_to_pmr(NvmeCtrl *n, hwaddr addr)
{
    return memory_region_get_ram_ptr(&n->pmr.dev->mr) + (addr - n->pmr.cba);
}

static inline bool nvme_addr_is_iomem(NvmeCtrl *n, hwaddr addr)
{
    hwaddr hi, lo;

    /*
     * The purpose of this check is to guard against invalid "local" access to
     * the iomem (i.e. controller registers). Thus, we check against the range
     * covered by the 'bar0' MemoryRegion since that is currently composed of
     * two subregions (the NVMe "MBAR" and the MSI-X table/pba). Note, however,
     * that if the device model is ever changed to allow the CMB to be located
     * in BAR0 as well, then this must be changed.
     */
    lo = n->bar0.addr;
    hi = lo + int128_get64(n->bar0.size);

    return addr >= lo && addr < hi;
}

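/*
 * Read/write a guest buffer. Addresses that fall entirely inside the CMB or
 * PMR are serviced directly from the controller's own backing memory;
 * anything else goes through DMA on the PCI bus. The `hi < addr` check guards
 * against address wrap-around.
 */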
static int nvme_addr_read(NvmeCtrl *n, hwaddr addr, void *buf, int size)
{
    hwaddr hi = addr + size - 1;
    if (hi < addr) {
        return 1;
    }

    if (n->bar.cmbsz && nvme_addr_is_cmb(n, addr) && nvme_addr_is_cmb(n, hi)) {
        memcpy(buf, nvme_addr_to_cmb(n, addr), size);
        return 0;
    }

    if (nvme_addr_is_pmr(n, addr) && nvme_addr_is_pmr(n, hi)) {
        memcpy(buf, nvme_addr_to_pmr(n, addr), size);
        return 0;
    }

    return pci_dma_read(PCI_DEVICE(n), addr, buf, size);
}

static int nvme_addr_write(NvmeCtrl *n, hwaddr addr, const void *buf, int size)
{
    hwaddr hi = addr + size - 1;
    if (hi < addr) {
        return 1;
    }

    if (n->bar.cmbsz && nvme_addr_is_cmb(n, addr) && nvme_addr_is_cmb(n, hi)) {
        memcpy(nvme_addr_to_cmb(n, addr), buf, size);
        return 0;
    }

    if (nvme_addr_is_pmr(n, addr) && nvme_addr_is_pmr(n, hi)) {
        memcpy(nvme_addr_to_pmr(n, addr), buf, size);
        return 0;
    }

    return pci_dma_write(PCI_DEVICE(n), addr, buf, size);
}

static bool nvme_nsid_valid(NvmeCtrl *n, uint32_t nsid)
{
    return nsid &&
        (nsid == NVME_NSID_BROADCAST || nsid <= NVME_MAX_NAMESPACES);
}

static int nvme_check_sqid(NvmeCtrl *n, uint16_t sqid)
{
    return sqid < n->conf_ioqpairs + 1 && n->sq[sqid] != NULL ? 0 : -1;
}

static int nvme_check_cqid(NvmeCtrl *n, uint16_t cqid)
{
    return cqid < n->conf_ioqpairs + 1 && n->cq[cqid] != NULL ? 0 : -1;
}

static void nvme_inc_cq_tail(NvmeCQueue *cq)
{
    cq->tail++;
    if (cq->tail >= cq->size) {
        cq->tail = 0;
        cq->phase = !cq->phase;
    }
}

static void nvme_inc_sq_head(NvmeSQueue *sq)
{
    sq->head = (sq->head + 1) % sq->size;
}

static uint8_t nvme_cq_full(NvmeCQueue *cq)
{
    return (cq->tail + 1) % cq->size == cq->head;
}

static uint8_t nvme_sq_empty(NvmeSQueue *sq)
{
    return sq->head == sq->tail;
}

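/*
 * Interrupt handling. With MSI-X each completion queue signals its own
 * vector; otherwise pin-based interrupts are emulated by tracking asserting
 * queues in the irq_status bitmask and (de)asserting the INTx pin whenever a
 * bit not masked by INTMS is set or cleared.
 */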
static void nvme_irq_check(NvmeCtrl *n)
{
    PCIDevice *pci = PCI_DEVICE(n);
    uint32_t intms = ldl_le_p(&n->bar.intms);

    if (msix_enabled(pci)) {
        return;
    }
    if (~intms & n->irq_status) {
        pci_irq_assert(pci);
    } else {
        pci_irq_deassert(pci);
    }
}

static void nvme_irq_assert(NvmeCtrl *n, NvmeCQueue *cq)
{
    PCIDevice *pci = PCI_DEVICE(n);

    if (cq->irq_enabled) {
        if (msix_enabled(pci)) {
            trace_pci_nvme_irq_msix(cq->vector);
            msix_notify(pci, cq->vector);
        } else {
            trace_pci_nvme_irq_pin();
            assert(cq->vector < 32);
            n->irq_status |= 1 << cq->vector;
            nvme_irq_check(n);
        }
    } else {
        trace_pci_nvme_irq_masked();
    }
}

static void nvme_irq_deassert(NvmeCtrl *n, NvmeCQueue *cq)
{
    if (cq->irq_enabled) {
        if (msix_enabled(PCI_DEVICE(n))) {
            return;
        } else {
            assert(cq->vector < 32);
            if (!n->cq_pending) {
                n->irq_status &= ~(1 << cq->vector);
            }
            nvme_irq_check(n);
        }
    }
}

static void nvme_req_clear(NvmeRequest *req)
{
    req->ns = NULL;
    req->opaque = NULL;
    req->aiocb = NULL;
    memset(&req->cqe, 0x0, sizeof(req->cqe));
    req->status = NVME_SUCCESS;
}

static inline void nvme_sg_init(NvmeCtrl *n, NvmeSg *sg, bool dma)
{
    if (dma) {
        pci_dma_sglist_init(&sg->qsg, PCI_DEVICE(n), 0);
        sg->flags = NVME_SG_DMA;
    } else {
        qemu_iovec_init(&sg->iov, 0);
    }

    sg->flags |= NVME_SG_ALLOC;
}

static inline void nvme_sg_unmap(NvmeSg *sg)
{
    if (!(sg->flags & NVME_SG_ALLOC)) {
        return;
    }

    if (sg->flags & NVME_SG_DMA) {
        qemu_sglist_destroy(&sg->qsg);
    } else {
        qemu_iovec_destroy(&sg->iov);
    }

    memset(sg, 0x0, sizeof(*sg));
}

/*
 * When metadata is transferred as extended LBAs, the DPTR mapped into `sg`
 * holds both data and metadata. This function splits the data and metadata
 * into two separate QSG/IOVs.
 */
static void nvme_sg_split(NvmeSg *sg, NvmeNamespace *ns, NvmeSg *data,
                          NvmeSg *mdata)
{
    NvmeSg *dst = data;
    uint32_t trans_len, count = ns->lbasz;
    uint64_t offset = 0;
    bool dma = sg->flags & NVME_SG_DMA;
    size_t sge_len;
    size_t sg_len = dma ? sg->qsg.size : sg->iov.size;
    int sg_idx = 0;

    assert(sg->flags & NVME_SG_ALLOC);

    while (sg_len) {
        sge_len = dma ? sg->qsg.sg[sg_idx].len : sg->iov.iov[sg_idx].iov_len;

        trans_len = MIN(sg_len, count);
        trans_len = MIN(trans_len, sge_len - offset);

        if (dst) {
            if (dma) {
                qemu_sglist_add(&dst->qsg, sg->qsg.sg[sg_idx].base + offset,
                                trans_len);
            } else {
                qemu_iovec_add(&dst->iov,
                               sg->iov.iov[sg_idx].iov_base + offset,
                               trans_len);
            }
        }

        sg_len -= trans_len;
        count -= trans_len;
        offset += trans_len;

        if (count == 0) {
            dst = (dst == data) ? mdata : data;
            count = (dst == data) ? ns->lbasz : ns->lbaf.ms;
        }

        if (sge_len == offset) {
            offset = 0;
            sg_idx++;
        }
    }
}

2020-02-23 16:21:52 +03:00
|
|
|
static uint16_t nvme_map_addr_cmb(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
|
|
|
|
size_t len)
|
|
|
|
{
|
|
|
|
if (!len) {
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
trace_pci_nvme_map_addr_cmb(addr, len);
|
|
|
|
|
|
|
|
if (!nvme_addr_is_cmb(n, addr) || !nvme_addr_is_cmb(n, addr + len - 1)) {
|
|
|
|
return NVME_DATA_TRAS_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
qemu_iovec_add(iov, nvme_addr_to_cmb(n, addr), len);
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
static uint16_t nvme_map_addr_pmr(NvmeCtrl *n, QEMUIOVector *iov, hwaddr addr,
                                  size_t len)
{
    if (!len) {
        return NVME_SUCCESS;
    }

    if (!nvme_addr_is_pmr(n, addr) || !nvme_addr_is_pmr(n, addr + len - 1)) {
        return NVME_DATA_TRAS_ERROR;
    }

    qemu_iovec_add(iov, nvme_addr_to_pmr(n, addr), len);

    return NVME_SUCCESS;
}

static uint16_t nvme_map_addr(NvmeCtrl *n, NvmeSg *sg, hwaddr addr, size_t len)
{
    bool cmb = false, pmr = false;

    if (!len) {
        return NVME_SUCCESS;
    }

    trace_pci_nvme_map_addr(addr, len);

    if (nvme_addr_is_iomem(n, addr)) {
        return NVME_DATA_TRAS_ERROR;
    }

    if (nvme_addr_is_cmb(n, addr)) {
        cmb = true;
    } else if (nvme_addr_is_pmr(n, addr)) {
        pmr = true;
    }

    if (cmb || pmr) {
        if (sg->flags & NVME_SG_DMA) {
            return NVME_INVALID_USE_OF_CMB | NVME_DNR;
        }

        if (sg->iov.niov + 1 > IOV_MAX) {
            goto max_mappings_exceeded;
        }

        if (cmb) {
            return nvme_map_addr_cmb(n, &sg->iov, addr, len);
        } else {
            return nvme_map_addr_pmr(n, &sg->iov, addr, len);
        }
    }

    if (!(sg->flags & NVME_SG_DMA)) {
        return NVME_INVALID_USE_OF_CMB | NVME_DNR;
    }

    if (sg->qsg.nsg + 1 > IOV_MAX) {
        goto max_mappings_exceeded;
    }

    qemu_sglist_add(&sg->qsg, addr, len);

    return NVME_SUCCESS;

max_mappings_exceeded:
    NVME_GUEST_ERR(pci_nvme_ub_too_many_mappings,
                   "number of mappings exceed 1024");
    return NVME_INTERNAL_DEV_ERROR | NVME_DNR;
}

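/*
 * Note (added commentary): a single command may not mix CMB/PMR addresses
 * with regular host memory: once `sg` carries NVME_SG_DMA, a CMB/PMR
 * address is rejected with Invalid Use of Controller Memory Buffer (and
 * vice versa). The total number of mapped ranges is also capped at IOV_MAX;
 * exceeding it is reported as an internal device error (see
 * max_mappings_exceeded above).
 */
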
static inline bool nvme_addr_is_dma(NvmeCtrl *n, hwaddr addr)
{
    return !(nvme_addr_is_cmb(n, addr) || nvme_addr_is_pmr(n, addr));
}

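/*
 * Added commentary on nvme_map_prp() below: PRP1 always points at the first
 * (possibly unaligned) page of the transfer. If more data remains, PRP2 is
 * either a direct pointer to the second page (when the remainder fits in one
 * page) or a pointer to a PRP list; list pages are read in chunks of at most
 * n->max_prp_ents entries, and the last entry of a full list chains to the
 * next list page. Rough example (illustrative values only): with 4 KiB pages
 * and a page-aligned 10 KiB transfer, PRP1 covers the first 4 KiB and PRP2
 * points to a list whose two entries cover the remaining 6 KiB.
 */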
static uint16_t nvme_map_prp(NvmeCtrl *n, NvmeSg *sg, uint64_t prp1,
                             uint64_t prp2, uint32_t len)
{
    hwaddr trans_len = n->page_size - (prp1 % n->page_size);
    trans_len = MIN(len, trans_len);
    int num_prps = (len >> n->page_bits) + 1;
    uint16_t status;
    int ret;

    trace_pci_nvme_map_prp(trans_len, len, prp1, prp2, num_prps);

    nvme_sg_init(n, sg, nvme_addr_is_dma(n, prp1));

    status = nvme_map_addr(n, sg, prp1, trans_len);
    if (status) {
        goto unmap;
    }

    len -= trans_len;
    if (len) {
        if (len > n->page_size) {
            uint64_t prp_list[n->max_prp_ents];
            uint32_t nents, prp_trans;
            int i = 0;

            /*
             * The first PRP list entry, pointed to by PRP2, may contain an
             * offset. Hence, we need to calculate the number of entries in
             * the list based on that offset.
             */
            nents = (n->page_size - (prp2 & (n->page_size - 1))) >> 3;
            prp_trans = MIN(n->max_prp_ents, nents) * sizeof(uint64_t);
            ret = nvme_addr_read(n, prp2, (void *)prp_list, prp_trans);
            if (ret) {
                trace_pci_nvme_err_addr_read(prp2);
                status = NVME_DATA_TRAS_ERROR;
                goto unmap;
            }
            while (len != 0) {
                uint64_t prp_ent = le64_to_cpu(prp_list[i]);

                if (i == nents - 1 && len > n->page_size) {
                    if (unlikely(prp_ent & (n->page_size - 1))) {
                        trace_pci_nvme_err_invalid_prplist_ent(prp_ent);
                        status = NVME_INVALID_PRP_OFFSET | NVME_DNR;
                        goto unmap;
                    }

                    i = 0;
                    nents = (len + n->page_size - 1) >> n->page_bits;
                    nents = MIN(nents, n->max_prp_ents);
                    prp_trans = nents * sizeof(uint64_t);
                    ret = nvme_addr_read(n, prp_ent, (void *)prp_list,
                                         prp_trans);
                    if (ret) {
                        trace_pci_nvme_err_addr_read(prp_ent);
                        status = NVME_DATA_TRAS_ERROR;
                        goto unmap;
                    }
                    prp_ent = le64_to_cpu(prp_list[i]);
                }

                if (unlikely(prp_ent & (n->page_size - 1))) {
                    trace_pci_nvme_err_invalid_prplist_ent(prp_ent);
                    status = NVME_INVALID_PRP_OFFSET | NVME_DNR;
                    goto unmap;
                }

                trans_len = MIN(len, n->page_size);
                status = nvme_map_addr(n, sg, prp_ent, trans_len);
                if (status) {
                    goto unmap;
                }

                len -= trans_len;
                i++;
            }
        } else {
            if (unlikely(prp2 & (n->page_size - 1))) {
                trace_pci_nvme_err_invalid_prp2_align(prp2);
                status = NVME_INVALID_PRP_OFFSET | NVME_DNR;
                goto unmap;
            }
            status = nvme_map_addr(n, sg, prp2, len);
            if (status) {
                goto unmap;
            }
        }
    }

    return NVME_SUCCESS;

unmap:
    nvme_sg_unmap(sg);
    return status;
}

/*
 * Map 'nsgld' data descriptors from 'segment'. The function will subtract the
 * number of bytes mapped from *len.
 */
static uint16_t nvme_map_sgl_data(NvmeCtrl *n, NvmeSg *sg,
                                  NvmeSglDescriptor *segment, uint64_t nsgld,
                                  size_t *len, NvmeCmd *cmd)
{
    dma_addr_t addr, trans_len;
    uint32_t dlen;
    uint16_t status;

    for (int i = 0; i < nsgld; i++) {
        uint8_t type = NVME_SGL_TYPE(segment[i].type);

        switch (type) {
        case NVME_SGL_DESCR_TYPE_DATA_BLOCK:
            break;
        case NVME_SGL_DESCR_TYPE_SEGMENT:
        case NVME_SGL_DESCR_TYPE_LAST_SEGMENT:
            return NVME_INVALID_NUM_SGL_DESCRS | NVME_DNR;
        default:
            return NVME_SGL_DESCR_TYPE_INVALID | NVME_DNR;
        }

        dlen = le32_to_cpu(segment[i].len);

        if (!dlen) {
            continue;
        }

        if (*len == 0) {
            /*
             * All data has been mapped, but the SGL contains additional
             * segments and/or descriptors. The controller might accept
             * ignoring the rest of the SGL.
             */
            uint32_t sgls = le32_to_cpu(n->id_ctrl.sgls);
            if (sgls & NVME_CTRL_SGLS_EXCESS_LENGTH) {
                break;
            }

            trace_pci_nvme_err_invalid_sgl_excess_length(dlen);
            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
        }

        trans_len = MIN(*len, dlen);

        addr = le64_to_cpu(segment[i].addr);

        if (UINT64_MAX - addr < dlen) {
            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
        }

        status = nvme_map_addr(n, sg, addr, trans_len);
        if (status) {
            return status;
        }

        *len -= trans_len;
    }

    return NVME_SUCCESS;
}

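/*
 * Added commentary on nvme_map_sgl() below: the inline descriptor is either
 * a single Data Block (mapped directly) or a (Last) Segment descriptor.
 * Segments are then walked one at a time: all but the final descriptor of a
 * segment must be Data Blocks, and the final descriptor either points to the
 * next segment or, if it is itself a Data Block, terminates the chain.
 */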
static uint16_t nvme_map_sgl(NvmeCtrl *n, NvmeSg *sg, NvmeSglDescriptor sgl,
                             size_t len, NvmeCmd *cmd)
{
    /*
     * Read the segment in chunks of 256 descriptors (one 4k page) to avoid
     * dynamically allocating a potentially huge SGL. The spec allows the SGL
     * to be larger (as in number of bytes required to describe the SGL
     * descriptors and segment chain) than the command transfer size, so it is
     * not bounded by MDTS.
     */
    const int SEG_CHUNK_SIZE = 256;

    NvmeSglDescriptor segment[SEG_CHUNK_SIZE], *sgld, *last_sgld;
    uint64_t nsgld;
    uint32_t seg_len;
    uint16_t status;
    hwaddr addr;
    int ret;

    sgld = &sgl;
    addr = le64_to_cpu(sgl.addr);

    trace_pci_nvme_map_sgl(NVME_SGL_TYPE(sgl.type), len);

    nvme_sg_init(n, sg, nvme_addr_is_dma(n, addr));

    /*
     * If the entire transfer can be described with a single data block it can
     * be mapped directly.
     */
    if (NVME_SGL_TYPE(sgl.type) == NVME_SGL_DESCR_TYPE_DATA_BLOCK) {
        status = nvme_map_sgl_data(n, sg, sgld, 1, &len, cmd);
        if (status) {
            goto unmap;
        }

        goto out;
    }

    for (;;) {
        switch (NVME_SGL_TYPE(sgld->type)) {
        case NVME_SGL_DESCR_TYPE_SEGMENT:
        case NVME_SGL_DESCR_TYPE_LAST_SEGMENT:
            break;
        default:
            return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
        }

        seg_len = le32_to_cpu(sgld->len);

        /* check the length of the (Last) Segment descriptor */
        if (!seg_len || seg_len & 0xf) {
            return NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
        }

        if (UINT64_MAX - addr < seg_len) {
            return NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
        }

        nsgld = seg_len / sizeof(NvmeSglDescriptor);

        while (nsgld > SEG_CHUNK_SIZE) {
            if (nvme_addr_read(n, addr, segment, sizeof(segment))) {
                trace_pci_nvme_err_addr_read(addr);
                status = NVME_DATA_TRAS_ERROR;
                goto unmap;
            }

            status = nvme_map_sgl_data(n, sg, segment, SEG_CHUNK_SIZE,
                                       &len, cmd);
            if (status) {
                goto unmap;
            }

            nsgld -= SEG_CHUNK_SIZE;
            addr += SEG_CHUNK_SIZE * sizeof(NvmeSglDescriptor);
        }

        ret = nvme_addr_read(n, addr, segment, nsgld *
                             sizeof(NvmeSglDescriptor));
        if (ret) {
            trace_pci_nvme_err_addr_read(addr);
            status = NVME_DATA_TRAS_ERROR;
            goto unmap;
        }

        last_sgld = &segment[nsgld - 1];

        /*
         * If the segment ends with a Data Block, then we are done.
         */
        if (NVME_SGL_TYPE(last_sgld->type) == NVME_SGL_DESCR_TYPE_DATA_BLOCK) {
            status = nvme_map_sgl_data(n, sg, segment, nsgld, &len, cmd);
            if (status) {
                goto unmap;
            }

            goto out;
        }

        /*
         * If the last descriptor was not a Data Block, then the current
         * segment must not be a Last Segment.
         */
        if (NVME_SGL_TYPE(sgld->type) == NVME_SGL_DESCR_TYPE_LAST_SEGMENT) {
            status = NVME_INVALID_SGL_SEG_DESCR | NVME_DNR;
            goto unmap;
        }

        sgld = last_sgld;
        addr = le64_to_cpu(sgld->addr);

        /*
         * Do not map the last descriptor; it will be a Segment or Last Segment
         * descriptor and is handled by the next iteration.
         */
        status = nvme_map_sgl_data(n, sg, segment, nsgld - 1, &len, cmd);
        if (status) {
            goto unmap;
        }
    }

out:
    /* if there is any residual left in len, the SGL was too short */
    if (len) {
        status = NVME_DATA_SGL_LEN_INVALID | NVME_DNR;
        goto unmap;
    }

    return NVME_SUCCESS;

unmap:
    nvme_sg_unmap(sg);
    return status;
}

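/*
 * Added commentary on nvme_map_dptr() below: the command's Data Pointer
 * (DPTR) is mapped according to its PSDT field: PRP1/PRP2 when PSDT is zero,
 * otherwise an SGL (with the metadata pointer either contiguous or itself an
 * SGL segment pointer).
 */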
uint16_t nvme_map_dptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
                       NvmeCmd *cmd)
{
    uint64_t prp1, prp2;

    switch (NVME_CMD_FLAGS_PSDT(cmd->flags)) {
    case NVME_PSDT_PRP:
        prp1 = le64_to_cpu(cmd->dptr.prp1);
        prp2 = le64_to_cpu(cmd->dptr.prp2);

        return nvme_map_prp(n, sg, prp1, prp2, len);
    case NVME_PSDT_SGL_MPTR_CONTIGUOUS:
    case NVME_PSDT_SGL_MPTR_SGL:
        return nvme_map_sgl(n, sg, cmd->dptr.sgl, len, cmd);
    default:
        return NVME_INVALID_FIELD;
    }
}

static uint16_t nvme_map_mptr(NvmeCtrl *n, NvmeSg *sg, size_t len,
                              NvmeCmd *cmd)
{
    int psdt = NVME_CMD_FLAGS_PSDT(cmd->flags);
    hwaddr mptr = le64_to_cpu(cmd->mptr);
    uint16_t status;

    if (psdt == NVME_PSDT_SGL_MPTR_SGL) {
        NvmeSglDescriptor sgl;

        if (nvme_addr_read(n, mptr, &sgl, sizeof(sgl))) {
            return NVME_DATA_TRAS_ERROR;
        }

        status = nvme_map_sgl(n, sg, sgl, len, cmd);
        if (status && (status & 0x7ff) == NVME_DATA_SGL_LEN_INVALID) {
            status = NVME_MD_SGL_LEN_INVALID | NVME_DNR;
        }

        return status;
    }

    nvme_sg_init(n, sg, nvme_addr_is_dma(n, mptr));
    status = nvme_map_addr(n, sg, mptr, len);
    if (status) {
        nvme_sg_unmap(sg);
    }

    return status;
}

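/*
 * Added commentary on nvme_map_data() below: it maps the data portion of a
 * transfer of `nlb` logical blocks. For namespaces formatted with extended
 * LBAs (metadata interleaved with the data), the DPTR describes data and
 * metadata together, so the combined mapping is built first and then split
 * with nvme_sg_split(); the exception is when protection information is
 * generated/stripped by the controller (PRACT set) and the metadata consists
 * solely of the PI tuple, in which case the host buffer is treated as
 * holding data only.
 */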
static uint16_t nvme_map_data(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req)
{
    NvmeNamespace *ns = req->ns;
    NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
    bool pi = !!NVME_ID_NS_DPS_TYPE(ns->id_ns.dps);
    bool pract = !!(le16_to_cpu(rw->control) & NVME_RW_PRINFO_PRACT);
    size_t len = nvme_l2b(ns, nlb);
    uint16_t status;

    if (nvme_ns_ext(ns) &&
        !(pi && pract && ns->lbaf.ms == nvme_pi_tuple_size(ns))) {
        NvmeSg sg;

        len += nvme_m2b(ns, nlb);

        status = nvme_map_dptr(n, &sg, len, &req->cmd);
        if (status) {
            return status;
        }

        nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA);
        nvme_sg_split(&sg, ns, &req->sg, NULL);
        nvme_sg_unmap(&sg);

        return NVME_SUCCESS;
    }

    return nvme_map_dptr(n, &req->sg, len, &req->cmd);
}

static uint16_t nvme_map_mdata(NvmeCtrl *n, uint32_t nlb, NvmeRequest *req)
{
    NvmeNamespace *ns = req->ns;
    size_t len = nvme_m2b(ns, nlb);
    uint16_t status;

    if (nvme_ns_ext(ns)) {
        NvmeSg sg;

        len += nvme_l2b(ns, nlb);

        status = nvme_map_dptr(n, &sg, len, &req->cmd);
        if (status) {
            return status;
        }

        nvme_sg_init(n, &req->sg, sg.flags & NVME_SG_DMA);
        nvme_sg_split(&sg, ns, NULL, &req->sg);
        nvme_sg_unmap(&sg);

        return NVME_SUCCESS;
    }

    return nvme_map_mptr(n, &req->sg, len, &req->cmd);
}

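/*
 * Added commentary on nvme_tx_interleaved() below: it copies to/from an
 * interleaved (extended LBA) buffer described by `sg`, touching only every
 * `bytes`-sized chunk, skipping `skip_bytes` in between and starting
 * `offset` bytes in. nvme_bounce_data() uses it with (lbasz, ms, 0) to pick
 * out the data portions, and nvme_bounce_mdata() with (ms, lbasz, lbasz) to
 * pick out the metadata portions.
 */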
static uint16_t nvme_tx_interleaved(NvmeCtrl *n, NvmeSg *sg, uint8_t *ptr,
                                    uint32_t len, uint32_t bytes,
                                    int32_t skip_bytes, int64_t offset,
                                    NvmeTxDirection dir)
{
    hwaddr addr;
    uint32_t trans_len, count = bytes;
    bool dma = sg->flags & NVME_SG_DMA;
    int64_t sge_len;
    int sg_idx = 0;
    int ret;

    assert(sg->flags & NVME_SG_ALLOC);

    while (len) {
        sge_len = dma ? sg->qsg.sg[sg_idx].len : sg->iov.iov[sg_idx].iov_len;

        if (sge_len - offset < 0) {
            offset -= sge_len;
            sg_idx++;
            continue;
        }

        if (sge_len == offset) {
            offset = 0;
            sg_idx++;
            continue;
        }

        trans_len = MIN(len, count);
        trans_len = MIN(trans_len, sge_len - offset);

        if (dma) {
            addr = sg->qsg.sg[sg_idx].base + offset;
        } else {
            addr = (hwaddr)(uintptr_t)sg->iov.iov[sg_idx].iov_base + offset;
        }

        if (dir == NVME_TX_DIRECTION_TO_DEVICE) {
            ret = nvme_addr_read(n, addr, ptr, trans_len);
        } else {
            ret = nvme_addr_write(n, addr, ptr, trans_len);
        }

        if (ret) {
            return NVME_DATA_TRAS_ERROR;
        }

        ptr += trans_len;
        len -= trans_len;
        count -= trans_len;
        offset += trans_len;

        if (count == 0) {
            count = bytes;
            offset += skip_bytes;
        }
    }

    return NVME_SUCCESS;
}

static uint16_t nvme_tx(NvmeCtrl *n, NvmeSg *sg, void *ptr, uint32_t len,
                        NvmeTxDirection dir)
{
    assert(sg->flags & NVME_SG_ALLOC);

    if (sg->flags & NVME_SG_DMA) {
        const MemTxAttrs attrs = MEMTXATTRS_UNSPECIFIED;
        dma_addr_t residual;

        if (dir == NVME_TX_DIRECTION_TO_DEVICE) {
            dma_buf_write(ptr, len, &residual, &sg->qsg, attrs);
        } else {
            dma_buf_read(ptr, len, &residual, &sg->qsg, attrs);
        }

        if (unlikely(residual)) {
            trace_pci_nvme_err_invalid_dma();
            return NVME_INVALID_FIELD | NVME_DNR;
        }
    } else {
        size_t bytes;

        if (dir == NVME_TX_DIRECTION_TO_DEVICE) {
            bytes = qemu_iovec_to_buf(&sg->iov, 0, ptr, len);
        } else {
            bytes = qemu_iovec_from_buf(&sg->iov, 0, ptr, len);
        }

        if (unlikely(bytes != len)) {
            trace_pci_nvme_err_invalid_dma();
            return NVME_INVALID_FIELD | NVME_DNR;
        }
    }

    return NVME_SUCCESS;
}

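/*
 * Added commentary on the two helpers below: nvme_c2h() and nvme_h2c() are
 * convenience wrappers for commands that transfer a controller-resident
 * buffer: they map the command's DPTR and copy either from the controller to
 * the host (c2h) or from the host to the controller (h2c).
 */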
static inline uint16_t nvme_c2h(NvmeCtrl *n, void *ptr, uint32_t len,
                                NvmeRequest *req)
{
    uint16_t status;

    status = nvme_map_dptr(n, &req->sg, len, &req->cmd);
    if (status) {
        return status;
    }

    return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_FROM_DEVICE);
}

static inline uint16_t nvme_h2c(NvmeCtrl *n, void *ptr, uint32_t len,
                                NvmeRequest *req)
{
    uint16_t status;

    status = nvme_map_dptr(n, &req->sg, len, &req->cmd);
    if (status) {
        return status;
    }

    return nvme_tx(n, &req->sg, ptr, len, NVME_TX_DIRECTION_TO_DEVICE);
}

uint16_t nvme_bounce_data(NvmeCtrl *n, void *ptr, uint32_t len,
                          NvmeTxDirection dir, NvmeRequest *req)
{
    NvmeNamespace *ns = req->ns;
    NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
    bool pi = !!NVME_ID_NS_DPS_TYPE(ns->id_ns.dps);
    bool pract = !!(le16_to_cpu(rw->control) & NVME_RW_PRINFO_PRACT);

    if (nvme_ns_ext(ns) &&
        !(pi && pract && ns->lbaf.ms == nvme_pi_tuple_size(ns))) {
        return nvme_tx_interleaved(n, &req->sg, ptr, len, ns->lbasz,
                                   ns->lbaf.ms, 0, dir);
    }

    return nvme_tx(n, &req->sg, ptr, len, dir);
}

uint16_t nvme_bounce_mdata(NvmeCtrl *n, void *ptr, uint32_t len,
                           NvmeTxDirection dir, NvmeRequest *req)
{
    NvmeNamespace *ns = req->ns;
    uint16_t status;

    if (nvme_ns_ext(ns)) {
        return nvme_tx_interleaved(n, &req->sg, ptr, len, ns->lbaf.ms,
                                   ns->lbasz, ns->lbasz, dir);
    }

    nvme_sg_unmap(&req->sg);

    status = nvme_map_mptr(n, &req->sg, len, &req->cmd);
    if (status) {
        return status;
    }

    return nvme_tx(n, &req->sg, ptr, len, dir);
}

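/*
 * Added commentary on the block I/O helpers below: DMA-mapped transfers go
 * through the dma_blk_read/dma_blk_write helpers (driven by the QEMUSGList),
 * while CMB/PMR-backed transfers already have an iovec and can use plain
 * blk_aio_preadv/blk_aio_pwritev.
 */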
static inline void nvme_blk_read(BlockBackend *blk, int64_t offset,
                                 uint32_t align, BlockCompletionFunc *cb,
                                 NvmeRequest *req)
{
    assert(req->sg.flags & NVME_SG_ALLOC);

    if (req->sg.flags & NVME_SG_DMA) {
        req->aiocb = dma_blk_read(blk, &req->sg.qsg, offset, align, cb, req);
    } else {
        req->aiocb = blk_aio_preadv(blk, offset, &req->sg.iov, 0, cb, req);
    }
}

static inline void nvme_blk_write(BlockBackend *blk, int64_t offset,
                                  uint32_t align, BlockCompletionFunc *cb,
                                  NvmeRequest *req)
{
    assert(req->sg.flags & NVME_SG_ALLOC);

    if (req->sg.flags & NVME_SG_DMA) {
        req->aiocb = dma_blk_write(blk, &req->sg.qsg, offset, align, cb, req);
    } else {
        req->aiocb = blk_aio_pwritev(blk, offset, &req->sg.iov, 0, cb, req);
    }
}

static void nvme_update_cq_eventidx(const NvmeCQueue *cq)
{
    uint32_t v = cpu_to_le32(cq->head);

    trace_pci_nvme_update_cq_eventidx(cq->cqid, cq->head);

    pci_dma_write(PCI_DEVICE(cq->ctrl), cq->ei_addr, &v, sizeof(v));
}

static void nvme_update_cq_head(NvmeCQueue *cq)
{
    uint32_t v;

    pci_dma_read(PCI_DEVICE(cq->ctrl), cq->db_addr, &v, sizeof(v));

    cq->head = le32_to_cpu(v);

    trace_pci_nvme_update_cq_head(cq->cqid, cq->head);
}

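/*
 * Added commentary: with shadow doorbells (Doorbell Buffer Config) enabled,
 * the host updates the completion queue head in guest memory instead of
 * writing the doorbell register; nvme_update_cq_head() re-reads that shadow
 * value and nvme_update_cq_eventidx() publishes the controller's event index
 * so the host knows when a real doorbell write is still required.
 */
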
2013-06-04 19:17:10 +04:00
|
|
|
static void nvme_post_cqes(void *opaque)
|
|
|
|
{
|
|
|
|
NvmeCQueue *cq = opaque;
|
|
|
|
NvmeCtrl *n = cq->ctrl;
|
|
|
|
NvmeRequest *req, *next;
|
2021-06-17 21:55:42 +03:00
|
|
|
bool pending = cq->head != cq->tail;
|
2019-10-11 09:32:00 +03:00
|
|
|
int ret;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
|
|
|
QTAILQ_FOREACH_SAFE(req, &cq->req_list, entry, next) {
|
|
|
|
NvmeSQueue *sq;
|
|
|
|
hwaddr addr;
|
|
|
|
|
2022-06-16 15:34:07 +03:00
|
|
|
if (n->dbbuf_enabled) {
|
2022-12-08 11:12:45 +03:00
|
|
|
nvme_update_cq_eventidx(cq);
|
2022-06-16 15:34:07 +03:00
|
|
|
nvme_update_cq_head(cq);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
if (nvme_cq_full(cq)) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
sq = req->sq;
|
|
|
|
req->cqe.status = cpu_to_le16((req->status << 1) | cq->phase);
|
|
|
|
req->cqe.sq_id = cpu_to_le16(sq->sqid);
|
|
|
|
req->cqe.sq_head = cpu_to_le16(sq->head);
|
|
|
|
addr = cq->dma_addr + cq->tail * n->cqe_size;
|
2022-12-08 14:43:18 +03:00
|
|
|
ret = pci_dma_write(PCI_DEVICE(n), addr, (void *)&req->cqe,
|
2019-10-11 09:32:00 +03:00
|
|
|
sizeof(req->cqe));
|
|
|
|
if (ret) {
|
|
|
|
trace_pci_nvme_err_addr_write(addr);
|
|
|
|
trace_pci_nvme_err_cfs();
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.csts, NVME_CSTS_FAILED);
|
2019-10-11 09:32:00 +03:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
QTAILQ_REMOVE(&cq->req_list, req, entry);
|
2013-06-04 19:17:10 +04:00
|
|
|
nvme_inc_cq_tail(cq);
|
2021-02-07 23:06:01 +03:00
|
|
|
nvme_sg_unmap(&req->sg);
|
2013-06-04 19:17:10 +04:00
|
|
|
QTAILQ_INSERT_TAIL(&sq->req_list, req, entry);
|
|
|
|
}
|
2018-11-26 20:17:45 +03:00
|
|
|
if (cq->tail != cq->head) {
|
2021-06-17 21:55:42 +03:00
|
|
|
if (cq->irq_enabled && !pending) {
|
|
|
|
n->cq_pending++;
|
|
|
|
}
|
|
|
|
|
2018-11-26 20:17:45 +03:00
|
|
|
nvme_irq_assert(n, cq);
|
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_enqueue_req_completion(NvmeCQueue *cq, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
assert(cq->cqid == req->sq->cqid);
|
2020-07-06 09:12:48 +03:00
|
|
|
trace_pci_nvme_enqueue_req_completion(nvme_cid(req), cq->cqid,
|
2021-06-17 22:06:53 +03:00
|
|
|
le32_to_cpu(req->cqe.result),
|
|
|
|
le32_to_cpu(req->cqe.dw1),
|
2020-07-06 09:12:48 +03:00
|
|
|
req->status);
|
2020-09-30 02:19:05 +03:00
|
|
|
|
|
|
|
if (req->status) {
|
|
|
|
trace_pci_nvme_err_req_status(nvme_cid(req), nvme_nsid(req->ns),
|
|
|
|
req->status, req->cmd.opcode);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
QTAILQ_REMOVE(&req->sq->out_req_list, req, entry);
|
|
|
|
QTAILQ_INSERT_TAIL(&cq->req_list, req, entry);
|
2022-07-05 17:24:03 +03:00
|
|
|
|
2022-10-19 23:28:02 +03:00
|
|
|
qemu_bh_schedule(cq->bh);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:53 +03:00
|
|
|
static void nvme_process_aers(void *opaque)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = opaque;
|
|
|
|
NvmeAsyncEvent *event, *next;
|
|
|
|
|
|
|
|
trace_pci_nvme_process_aers(n->aer_queued);
|
|
|
|
|
|
|
|
QTAILQ_FOREACH_SAFE(event, &n->aer_queue, entry, next) {
|
|
|
|
NvmeRequest *req;
|
|
|
|
NvmeAerResult *result;
|
|
|
|
|
|
|
|
/* can't post cqe if there is nothing to complete */
|
|
|
|
if (!n->outstanding_aers) {
|
|
|
|
trace_pci_nvme_no_outstanding_aers();
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* ignore if masked (cqe posted, but event not cleared) */
|
|
|
|
if (n->aer_mask & (1 << event->result.event_type)) {
|
|
|
|
trace_pci_nvme_aer_masked(event->result.event_type, n->aer_mask);
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
QTAILQ_REMOVE(&n->aer_queue, event, entry);
|
|
|
|
n->aer_queued--;
|
|
|
|
|
|
|
|
n->aer_mask |= 1 << event->result.event_type;
|
|
|
|
n->outstanding_aers--;
|
|
|
|
|
|
|
|
req = n->aer_reqs[n->outstanding_aers];
|
|
|
|
|
|
|
|
result = (NvmeAerResult *) &req->cqe.result;
|
|
|
|
result->event_type = event->result.event_type;
|
|
|
|
result->event_info = event->result.event_info;
|
|
|
|
result->log_page = event->result.log_page;
|
|
|
|
g_free(event);
|
|
|
|
|
|
|
|
trace_pci_nvme_aer_post_cqe(result->event_type, result->event_info,
|
|
|
|
result->log_page);
|
|
|
|
|
|
|
|
nvme_enqueue_req_completion(&n->admin_cq, req);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_enqueue_event(NvmeCtrl *n, uint8_t event_type,
|
|
|
|
uint8_t event_info, uint8_t log_page)
|
|
|
|
{
|
|
|
|
NvmeAsyncEvent *event;
|
|
|
|
|
|
|
|
trace_pci_nvme_enqueue_event(event_type, event_info, log_page);
|
|
|
|
|
|
|
|
if (n->aer_queued == n->params.aer_max_queued) {
|
|
|
|
trace_pci_nvme_enqueue_event_noqueue(n->aer_queued);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
event = g_new(NvmeAsyncEvent, 1);
|
|
|
|
event->result = (NvmeAerResult) {
|
|
|
|
.event_type = event_type,
|
|
|
|
.event_info = event_info,
|
|
|
|
.log_page = log_page,
|
|
|
|
};
|
|
|
|
|
|
|
|
QTAILQ_INSERT_TAIL(&n->aer_queue, event, entry);
|
|
|
|
n->aer_queued++;
|
|
|
|
|
|
|
|
nvme_process_aers(n);
|
|
|
|
}
|
|
|
|
|
2021-01-15 06:27:02 +03:00
|
|
|
static void nvme_smart_event(NvmeCtrl *n, uint8_t event)
|
|
|
|
{
|
|
|
|
uint8_t aer_info;
|
|
|
|
|
|
|
|
    /* Ref SPEC <Asynchronous Event Information - SMART / Health Status> */
|
|
|
|
if (!(NVME_AEC_SMART(n->features.async_config) & event)) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (event) {
|
|
|
|
case NVME_SMART_SPARE:
|
|
|
|
aer_info = NVME_AER_INFO_SMART_SPARE_THRESH;
|
|
|
|
break;
|
|
|
|
case NVME_SMART_TEMPERATURE:
|
|
|
|
aer_info = NVME_AER_INFO_SMART_TEMP_THRESH;
|
|
|
|
break;
|
|
|
|
case NVME_SMART_RELIABILITY:
|
|
|
|
case NVME_SMART_MEDIA_READ_ONLY:
|
|
|
|
case NVME_SMART_FAILED_VOLATILE_MEDIA:
|
|
|
|
case NVME_SMART_PMR_UNRELIABLE:
|
|
|
|
aer_info = NVME_AER_INFO_SMART_RELIABILITY;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_enqueue_event(n, NVME_AER_TYPE_SMART, aer_info, NVME_LOG_SMART_INFO);
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:53 +03:00
|
|
|
static void nvme_clear_events(NvmeCtrl *n, uint8_t event_type)
|
|
|
|
{
|
|
|
|
n->aer_mask &= ~(1 << event_type);
|
|
|
|
if (!QTAILQ_EMPTY(&n->aer_queue)) {
|
|
|
|
nvme_process_aers(n);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-02-23 19:38:22 +03:00
|
|
|
static inline uint16_t nvme_check_mdts(NvmeCtrl *n, size_t len)
|
|
|
|
{
|
|
|
|
uint8_t mdts = n->params.mdts;
|
|
|
|
|
|
|
|
if (mdts && len > n->page_size << mdts) {
|
2021-02-22 22:29:47 +03:00
|
|
|
trace_pci_nvme_err_mdts(len);
|
2020-02-23 19:38:22 +03:00
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2020-11-09 14:23:18 +03:00
|
|
|
static inline uint16_t nvme_check_bounds(NvmeNamespace *ns, uint64_t slba,
|
|
|
|
uint32_t nlb)
|
2020-02-23 19:32:25 +03:00
|
|
|
{
|
|
|
|
uint64_t nsze = le64_to_cpu(ns->id_ns.nsze);
|
|
|
|
|
|
|
|
if (unlikely(UINT64_MAX - slba < nlb || slba + nlb > nsze)) {
|
2021-04-14 10:04:35 +03:00
|
|
|
trace_pci_nvme_err_invalid_lba_range(slba, nlb, nsze);
|
2020-02-23 19:32:25 +03:00
|
|
|
return NVME_LBA_RANGE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:48 +03:00
|
|
|
static int nvme_block_status_all(NvmeNamespace *ns, uint64_t slba,
|
|
|
|
uint32_t nlb, int flags)
|
2020-10-14 10:55:08 +03:00
|
|
|
{
|
|
|
|
BlockDriverState *bs = blk_bs(ns->blkconf.blk);
|
|
|
|
|
|
|
|
int64_t pnum = 0, bytes = nvme_l2b(ns, nlb);
|
|
|
|
int64_t offset = nvme_l2b(ns, slba);
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
/*
|
|
|
|
* `pnum` holds the number of bytes after offset that shares the same
|
|
|
|
* allocation status as the byte at offset. If `pnum` is different from
|
|
|
|
* `bytes`, we should check the allocation status of the next range and
|
|
|
|
* continue this until all bytes have been checked.
|
|
|
|
*/
|
|
|
|
do {
|
|
|
|
bytes -= pnum;
|
|
|
|
|
|
|
|
ret = bdrv_block_status(bs, offset, bytes, &pnum, NULL, NULL);
|
|
|
|
if (ret < 0) {
|
2021-06-17 22:06:48 +03:00
|
|
|
return ret;
|
2020-10-14 10:55:08 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
|
2021-06-17 22:06:48 +03:00
|
|
|
trace_pci_nvme_block_status(offset, bytes, pnum, ret,
|
|
|
|
!!(ret & BDRV_BLOCK_ZERO));
|
2020-10-14 10:55:08 +03:00
|
|
|
|
2021-06-17 22:06:48 +03:00
|
|
|
if (!(ret & flags)) {
|
|
|
|
return 1;
|
2020-10-14 10:55:08 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
offset += pnum;
|
|
|
|
} while (pnum != bytes);
|
|
|
|
|
2021-06-17 22:06:48 +03:00
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_check_dulbe(NvmeNamespace *ns, uint64_t slba,
|
|
|
|
uint32_t nlb)
|
|
|
|
{
|
|
|
|
int ret;
|
|
|
|
Error *err = NULL;
|
|
|
|
|
|
|
|
ret = nvme_block_status_all(ns, slba, nlb, BDRV_BLOCK_DATA);
|
|
|
|
if (ret) {
|
|
|
|
if (ret < 0) {
|
|
|
|
error_setg_errno(&err, -ret, "unable to get block status");
|
|
|
|
error_report_err(err);
|
|
|
|
|
|
|
|
return NVME_INTERNAL_DEV_ERROR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_DULB;
|
|
|
|
}
|
|
|
|
|
2020-10-14 10:55:08 +03:00
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2020-11-10 10:53:20 +03:00
|
|
|
static void nvme_aio_err(NvmeRequest *req, int ret)
|
|
|
|
{
|
|
|
|
uint16_t status = NVME_SUCCESS;
|
|
|
|
Error *local_err = NULL;
|
|
|
|
|
|
|
|
switch (req->cmd.opcode) {
|
|
|
|
case NVME_CMD_READ:
|
|
|
|
status = NVME_UNRECOVERED_READ;
|
|
|
|
break;
|
|
|
|
case NVME_CMD_FLUSH:
|
|
|
|
case NVME_CMD_WRITE:
|
|
|
|
case NVME_CMD_WRITE_ZEROES:
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_CMD_ZONE_APPEND:
|
2020-11-10 10:53:20 +03:00
|
|
|
status = NVME_WRITE_FAULT;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
status = NVME_INTERNAL_DEV_ERROR;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
2021-02-07 00:59:26 +03:00
|
|
|
trace_pci_nvme_err_aio(nvme_cid(req), strerror(-ret), status);
|
2020-11-10 10:53:20 +03:00
|
|
|
|
|
|
|
error_setg_errno(&local_err, -ret, "aio failed");
|
|
|
|
error_report_err(local_err);
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Set the command status code to the first encountered error but allow a
|
|
|
|
* subsequent Internal Device Error to trump it.
|
|
|
|
*/
|
|
|
|
if (req->status && status != NVME_INTERNAL_DEV_ERROR) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
req->status = status;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
static inline uint32_t nvme_zone_idx(NvmeNamespace *ns, uint64_t slba)
|
|
|
|
{
|
|
|
|
return ns->zone_size_log2 > 0 ? slba >> ns->zone_size_log2 :
|
|
|
|
slba / ns->zone_size;
|
|
|
|
}
|
|
|
|
|
|
|
|
static inline NvmeZone *nvme_get_zone_by_slba(NvmeNamespace *ns, uint64_t slba)
|
|
|
|
{
|
|
|
|
uint32_t zone_idx = nvme_zone_idx(ns, slba);
|
|
|
|
|
2021-06-17 22:06:51 +03:00
|
|
|
if (zone_idx >= ns->num_zones) {
|
|
|
|
return NULL;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
return &ns->zone_array[zone_idx];
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_check_zone_state_for_write(NvmeZone *zone)
|
|
|
|
{
|
2021-01-19 16:21:50 +03:00
|
|
|
uint64_t zslba = zone->d.zslba;
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_EMPTY:
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
2021-01-19 16:21:50 +03:00
|
|
|
return NVME_SUCCESS;
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_ZONE_STATE_FULL:
|
2021-01-19 16:21:50 +03:00
|
|
|
trace_pci_nvme_err_zone_is_full(zslba);
|
|
|
|
return NVME_ZONE_FULL;
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_ZONE_STATE_OFFLINE:
|
2021-01-19 16:21:50 +03:00
|
|
|
trace_pci_nvme_err_zone_is_offline(zslba);
|
|
|
|
return NVME_ZONE_OFFLINE;
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_ZONE_STATE_READ_ONLY:
|
2021-01-19 16:21:50 +03:00
|
|
|
trace_pci_nvme_err_zone_is_read_only(zslba);
|
|
|
|
return NVME_ZONE_READ_ONLY;
|
2020-12-08 23:04:06 +03:00
|
|
|
default:
|
|
|
|
assert(false);
|
|
|
|
}
|
|
|
|
|
2021-01-19 16:21:50 +03:00
|
|
|
return NVME_INTERNAL_DEV_ERROR;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
2021-01-20 00:56:14 +03:00
|
|
|
static uint16_t nvme_check_zone_write(NvmeNamespace *ns, NvmeZone *zone,
|
|
|
|
uint64_t slba, uint32_t nlb)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
2021-01-19 16:21:50 +03:00
|
|
|
uint64_t zcap = nvme_zone_wr_boundary(zone);
|
2020-12-08 23:04:06 +03:00
|
|
|
uint16_t status;
|
|
|
|
|
2021-01-19 16:21:50 +03:00
|
|
|
status = nvme_check_zone_state_for_write(zone);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
if (zone->d.za & NVME_ZA_ZRWA_VALID) {
|
|
|
|
uint64_t ezrwa = zone->w_ptr + 2 * ns->zns.zrwas;
|
|
|
|
|
|
|
|
if (slba < zone->w_ptr || slba + nlb > ezrwa) {
|
|
|
|
trace_pci_nvme_err_zone_invalid_write(slba, zone->w_ptr);
|
|
|
|
return NVME_ZONE_INVALID_WRITE;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
if (unlikely(slba != zone->w_ptr)) {
|
|
|
|
trace_pci_nvme_err_write_not_at_wp(slba, zone->d.zslba,
|
|
|
|
zone->w_ptr);
|
|
|
|
return NVME_ZONE_INVALID_WRITE;
|
|
|
|
}
|
2021-01-19 16:21:50 +03:00
|
|
|
}
|
2021-01-19 14:42:58 +03:00
|
|
|
|
2021-01-19 16:21:50 +03:00
|
|
|
if (unlikely((slba + nlb) > zcap)) {
|
|
|
|
trace_pci_nvme_err_zone_boundary(slba, nlb, zcap);
|
|
|
|
return NVME_ZONE_BOUNDARY_ERROR;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
2021-01-19 16:21:50 +03:00
|
|
|
return NVME_SUCCESS;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_check_zone_state_for_read(NvmeZone *zone)
|
|
|
|
{
|
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_EMPTY:
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_FULL:
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
|
|
|
case NVME_ZONE_STATE_READ_ONLY:
|
2021-02-22 21:36:09 +03:00
|
|
|
return NVME_SUCCESS;
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_ZONE_STATE_OFFLINE:
|
2021-02-22 21:38:46 +03:00
|
|
|
trace_pci_nvme_err_zone_is_offline(zone->d.zslba);
|
2021-02-22 21:36:09 +03:00
|
|
|
return NVME_ZONE_OFFLINE;
|
2020-12-08 23:04:06 +03:00
|
|
|
default:
|
|
|
|
assert(false);
|
|
|
|
}
|
|
|
|
|
2021-02-22 21:36:09 +03:00
|
|
|
return NVME_INTERNAL_DEV_ERROR;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_check_zone_read(NvmeNamespace *ns, uint64_t slba,
|
|
|
|
uint32_t nlb)
|
|
|
|
{
|
2021-06-17 22:06:51 +03:00
|
|
|
NvmeZone *zone;
|
|
|
|
uint64_t bndry, end;
|
2020-12-08 23:04:06 +03:00
|
|
|
uint16_t status;
|
|
|
|
|
2021-06-17 22:06:51 +03:00
|
|
|
zone = nvme_get_zone_by_slba(ns, slba);
|
|
|
|
assert(zone);
|
|
|
|
|
|
|
|
bndry = nvme_zone_rd_boundary(ns, zone);
|
|
|
|
end = slba + nlb;
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
status = nvme_check_zone_state_for_read(zone);
|
2021-01-24 19:30:24 +03:00
|
|
|
if (status) {
|
2020-12-08 23:04:06 +03:00
|
|
|
;
|
|
|
|
} else if (unlikely(end > bndry)) {
|
|
|
|
if (!ns->params.cross_zone_read) {
|
|
|
|
status = NVME_ZONE_BOUNDARY_ERROR;
|
|
|
|
} else {
|
|
|
|
/*
|
|
|
|
* Read across zone boundary - check that all subsequent
|
|
|
|
* zones that are being read have an appropriate state.
|
|
|
|
*/
|
|
|
|
do {
|
|
|
|
zone++;
|
|
|
|
status = nvme_check_zone_state_for_read(zone);
|
2021-01-24 19:30:24 +03:00
|
|
|
if (status) {
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
} while (end > nvme_zone_rd_boundary(ns, zone));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
2021-01-19 23:01:15 +03:00
|
|
|
static uint16_t nvme_zrm_finish(NvmeNamespace *ns, NvmeZone *zone)
|
|
|
|
{
|
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_FULL:
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
|
|
|
nvme_aor_dec_open(ns);
|
|
|
|
/* fallthrough */
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
|
|
|
nvme_aor_dec_active(ns);
|
2021-03-04 10:40:11 +03:00
|
|
|
|
|
|
|
if (zone->d.za & NVME_ZA_ZRWA_VALID) {
|
|
|
|
zone->d.za &= ~NVME_ZA_ZRWA_VALID;
|
|
|
|
if (ns->params.numzrwa) {
|
|
|
|
ns->zns.numzrwa++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-01-19 23:01:15 +03:00
|
|
|
/* fallthrough */
|
|
|
|
case NVME_ZONE_STATE_EMPTY:
|
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_FULL);
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_ZONE_INVAL_TRANSITION;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_zrm_close(NvmeNamespace *ns, NvmeZone *zone)
|
|
|
|
{
|
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
nvme_aor_dec_open(ns);
|
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_CLOSED);
|
|
|
|
/* fall through */
|
2021-02-08 03:32:56 +03:00
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
|
|
|
return NVME_SUCCESS;
|
2021-01-19 23:01:15 +03:00
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_ZONE_INVAL_TRANSITION;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:55 +03:00
|
|
|
static uint16_t nvme_zrm_reset(NvmeNamespace *ns, NvmeZone *zone)
|
|
|
|
{
|
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
nvme_aor_dec_open(ns);
|
|
|
|
/* fallthrough */
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
|
|
|
nvme_aor_dec_active(ns);
|
2021-03-04 10:40:11 +03:00
|
|
|
|
|
|
|
if (zone->d.za & NVME_ZA_ZRWA_VALID) {
|
|
|
|
if (ns->params.numzrwa) {
|
|
|
|
ns->zns.numzrwa++;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:55 +03:00
|
|
|
/* fallthrough */
|
|
|
|
case NVME_ZONE_STATE_FULL:
|
|
|
|
zone->w_ptr = zone->d.zslba;
|
|
|
|
zone->d.wp = zone->w_ptr;
|
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_EMPTY);
|
|
|
|
/* fallthrough */
|
|
|
|
case NVME_ZONE_STATE_EMPTY:
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_ZONE_INVAL_TRANSITION;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-01-19 23:01:15 +03:00
|
|
|
static void nvme_zrm_auto_transition_zone(NvmeNamespace *ns)
|
2020-12-08 23:04:07 +03:00
|
|
|
{
|
|
|
|
NvmeZone *zone;
|
|
|
|
|
|
|
|
if (ns->params.max_open_zones &&
|
|
|
|
ns->nr_open_zones == ns->params.max_open_zones) {
|
|
|
|
zone = QTAILQ_FIRST(&ns->imp_open_zones);
|
|
|
|
if (zone) {
|
|
|
|
/*
|
|
|
|
* Automatically close this implicitly open zone.
|
|
|
|
*/
|
|
|
|
QTAILQ_REMOVE(&ns->imp_open_zones, zone, entry);
|
2021-01-19 23:01:15 +03:00
|
|
|
nvme_zrm_close(ns, zone);
|
2020-12-08 23:04:07 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-04-15 09:37:36 +03:00
|
|
|
enum {
|
|
|
|
NVME_ZRM_AUTO = 1 << 0,
|
2021-03-04 10:40:11 +03:00
|
|
|
NVME_ZRM_ZRWA = 1 << 1,
|
2021-04-15 09:37:36 +03:00
|
|
|
};
|
|
|
|
|
2021-05-28 14:05:07 +03:00
|
|
|
static uint16_t nvme_zrm_open_flags(NvmeCtrl *n, NvmeNamespace *ns,
|
|
|
|
NvmeZone *zone, int flags)
|
2020-12-08 23:04:07 +03:00
|
|
|
{
|
2021-01-19 23:01:15 +03:00
|
|
|
int act = 0;
|
|
|
|
uint16_t status;
|
2020-12-08 23:04:07 +03:00
|
|
|
|
2021-01-19 23:01:15 +03:00
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_EMPTY:
|
|
|
|
act = 1;
|
|
|
|
|
|
|
|
/* fallthrough */
|
|
|
|
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
2021-05-28 14:05:07 +03:00
|
|
|
if (n->params.auto_transition_zones) {
|
|
|
|
nvme_zrm_auto_transition_zone(ns);
|
|
|
|
}
|
2021-03-04 10:40:11 +03:00
|
|
|
status = nvme_zns_check_resources(ns, act, 1,
|
|
|
|
(flags & NVME_ZRM_ZRWA) ? 1 : 0);
|
2021-01-19 23:01:15 +03:00
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (act) {
|
|
|
|
nvme_aor_inc_active(ns);
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_aor_inc_open(ns);
|
|
|
|
|
2021-04-15 09:37:36 +03:00
|
|
|
if (flags & NVME_ZRM_AUTO) {
|
2021-01-19 23:01:15 +03:00
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_IMPLICITLY_OPEN);
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* fallthrough */
|
|
|
|
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
2021-04-15 09:37:36 +03:00
|
|
|
if (flags & NVME_ZRM_AUTO) {
|
2021-01-19 23:01:15 +03:00
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_EXPLICITLY_OPEN);
|
|
|
|
|
|
|
|
/* fallthrough */
|
|
|
|
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
2021-03-04 10:40:11 +03:00
|
|
|
if (flags & NVME_ZRM_ZRWA) {
|
|
|
|
ns->zns.numzrwa--;
|
|
|
|
|
|
|
|
zone->d.za |= NVME_ZA_ZRWA_VALID;
|
|
|
|
}
|
|
|
|
|
2021-01-19 23:01:15 +03:00
|
|
|
return NVME_SUCCESS;
|
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_ZONE_INVAL_TRANSITION;
|
2020-12-08 23:04:07 +03:00
|
|
|
}
|
2021-01-19 23:01:15 +03:00
|
|
|
}
|
2020-12-08 23:04:07 +03:00
|
|
|
|
2021-05-28 14:05:07 +03:00
|
|
|
static inline uint16_t nvme_zrm_auto(NvmeCtrl *n, NvmeNamespace *ns,
|
|
|
|
NvmeZone *zone)
|
2021-01-19 23:01:15 +03:00
|
|
|
{
|
2021-05-28 14:05:07 +03:00
|
|
|
return nvme_zrm_open_flags(n, ns, zone, NVME_ZRM_AUTO);
|
2020-12-08 23:04:07 +03:00
|
|
|
}
|
|
|
|
|
2021-04-15 09:38:28 +03:00
|
|
|
static void nvme_advance_zone_wp(NvmeNamespace *ns, NvmeZone *zone,
|
|
|
|
uint32_t nlb)
|
2021-01-20 01:01:02 +03:00
|
|
|
{
|
|
|
|
zone->d.wp += nlb;
|
|
|
|
|
|
|
|
if (zone->d.wp == nvme_zone_wr_boundary(zone)) {
|
|
|
|
nvme_zrm_finish(ns, zone);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
static void nvme_zoned_zrwa_implicit_flush(NvmeNamespace *ns, NvmeZone *zone,
|
|
|
|
uint32_t nlbc)
|
|
|
|
{
|
|
|
|
uint16_t nzrwafgs = DIV_ROUND_UP(nlbc, ns->zns.zrwafg);
|
|
|
|
|
|
|
|
nlbc = nzrwafgs * ns->zns.zrwafg;
|
|
|
|
|
|
|
|
trace_pci_nvme_zoned_zrwa_implicit_flush(zone->d.zslba, nlbc);
|
|
|
|
|
|
|
|
zone->w_ptr += nlbc;
|
|
|
|
|
|
|
|
nvme_advance_zone_wp(ns, zone, nlbc);
|
|
|
|
}
|
|
|
|
|
2021-01-19 23:01:15 +03:00
|
|
|
static void nvme_finalize_zoned_write(NvmeNamespace *ns, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
NvmeZone *zone;
|
|
|
|
uint64_t slba;
|
|
|
|
uint32_t nlb;
|
|
|
|
|
|
|
|
slba = le64_to_cpu(rw->slba);
|
|
|
|
nlb = le16_to_cpu(rw->nlb) + 1;
|
|
|
|
zone = nvme_get_zone_by_slba(ns, slba);
|
2021-06-17 22:06:51 +03:00
|
|
|
assert(zone);
|
2020-12-08 23:04:06 +03:00
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
if (zone->d.za & NVME_ZA_ZRWA_VALID) {
|
|
|
|
uint64_t ezrwa = zone->w_ptr + ns->zns.zrwas - 1;
|
|
|
|
uint64_t elba = slba + nlb - 1;
|
|
|
|
|
|
|
|
if (elba > ezrwa) {
|
|
|
|
nvme_zoned_zrwa_implicit_flush(ns, zone, elba - ezrwa);
|
|
|
|
}
|
|
|
|
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-04-15 09:38:28 +03:00
|
|
|
nvme_advance_zone_wp(ns, zone, nlb);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static inline bool nvme_is_write(NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
|
|
|
|
return rw->opcode == NVME_CMD_WRITE ||
|
|
|
|
rw->opcode == NVME_CMD_ZONE_APPEND ||
|
|
|
|
rw->opcode == NVME_CMD_WRITE_ZEROES;
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
static AioContext *nvme_get_aio_context(BlockAIOCB *acb)
|
|
|
|
{
|
|
|
|
return qemu_get_aio_context();
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
static void nvme_misc_cb(void *opaque, int ret)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
|
|
|
NvmeRequest *req = opaque;
|
2020-08-24 13:43:38 +03:00
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
trace_pci_nvme_misc_cb(nvme_cid(req));
|
2020-07-06 09:12:48 +03:00
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (ret) {
|
|
|
|
nvme_aio_err(req, ret);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
nvme_enqueue_req_completion(nvme_cq(req), req);
|
|
|
|
}
|
|
|
|
|
2021-02-04 11:55:48 +03:00
|
|
|
void nvme_rw_complete_cb(void *opaque, int ret)
|
2020-11-23 13:24:55 +03:00
|
|
|
{
|
|
|
|
NvmeRequest *req = opaque;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
BlockAcctCookie *acct = &req->acct;
|
|
|
|
BlockAcctStats *stats = blk_get_stats(blk);
|
|
|
|
|
|
|
|
trace_pci_nvme_rw_complete_cb(nvme_cid(req), blk_name(blk));
|
|
|
|
|
|
|
|
if (ret) {
|
2020-08-24 13:43:38 +03:00
|
|
|
block_acct_failed(stats, acct);
|
2020-11-10 10:53:20 +03:00
|
|
|
nvme_aio_err(req, ret);
|
2020-11-23 13:24:55 +03:00
|
|
|
} else {
|
|
|
|
block_acct_done(stats, acct);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ns->params.zoned && nvme_is_write(req)) {
|
|
|
|
nvme_finalize_zoned_write(ns, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2020-07-29 22:08:04 +03:00
|
|
|
|
2020-08-24 13:43:38 +03:00
|
|
|
nvme_enqueue_req_completion(nvme_cq(req), req);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
static void nvme_rw_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeRequest *req = opaque;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
|
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
|
|
|
|
trace_pci_nvme_rw_cb(nvme_cid(req), blk_name(blk));
|
|
|
|
|
|
|
|
if (ret) {
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2021-04-14 22:34:44 +03:00
|
|
|
if (ns->lbaf.ms) {
|
2020-11-23 13:24:55 +03:00
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
|
|
|
uint32_t nlb = (uint32_t)le16_to_cpu(rw->nlb) + 1;
|
2021-04-13 22:51:30 +03:00
|
|
|
uint64_t offset = nvme_moff(ns, slba);
|
2020-11-23 13:24:55 +03:00
|
|
|
|
|
|
|
if (req->cmd.opcode == NVME_CMD_WRITE_ZEROES) {
|
|
|
|
size_t mlen = nvme_m2b(ns, nlb);
|
|
|
|
|
|
|
|
req->aiocb = blk_aio_pwrite_zeroes(blk, offset, mlen,
|
|
|
|
BDRV_REQ_MAY_UNMAP,
|
|
|
|
nvme_rw_complete_cb, req);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nvme_ns_ext(ns) || req->cmd.mptr) {
|
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
nvme_sg_unmap(&req->sg);
|
|
|
|
status = nvme_map_mdata(nvme_ctrl(req), nlb, req);
|
|
|
|
if (status) {
|
|
|
|
ret = -EFAULT;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (req->cmd.opcode == NVME_CMD_READ) {
|
2023-03-20 15:40:36 +03:00
|
|
|
return nvme_blk_read(blk, offset, 1, nvme_rw_complete_cb, req);
|
2020-11-23 13:24:55 +03:00
|
|
|
}
|
|
|
|
|
2023-03-20 15:40:36 +03:00
|
|
|
return nvme_blk_write(blk, offset, 1, nvme_rw_complete_cb, req);
|
2020-11-23 13:24:55 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
nvme_rw_complete_cb(req, ret);
|
|
|
|
}
|
|
|
|
|
2021-02-09 20:29:42 +03:00
|
|
|
static void nvme_verify_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeBounceContext *ctx = opaque;
|
|
|
|
NvmeRequest *req = ctx->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
BlockAcctCookie *acct = &req->acct;
|
|
|
|
BlockAcctStats *stats = blk_get_stats(blk);
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
2021-06-17 22:06:52 +03:00
|
|
|
uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
|
2021-02-09 20:29:42 +03:00
|
|
|
uint16_t apptag = le16_to_cpu(rw->apptag);
|
|
|
|
uint16_t appmask = le16_to_cpu(rw->appmask);
|
2021-11-16 16:26:52 +03:00
|
|
|
uint64_t reftag = le32_to_cpu(rw->reftag);
|
|
|
|
uint64_t cdw3 = le32_to_cpu(rw->cdw3);
|
2021-02-09 20:29:42 +03:00
|
|
|
uint16_t status;
|
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
reftag |= cdw3 << 32;
|
|
|
|
|
2021-06-17 22:06:52 +03:00
|
|
|
trace_pci_nvme_verify_cb(nvme_cid(req), prinfo, apptag, appmask, reftag);
|
2021-02-09 20:29:42 +03:00
|
|
|
|
|
|
|
if (ret) {
|
|
|
|
block_acct_failed(stats, acct);
|
|
|
|
nvme_aio_err(req, ret);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
block_acct_done(stats, acct);
|
|
|
|
|
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
|
|
|
status = nvme_dif_mangle_mdata(ns, ctx->mdata.bounce,
|
|
|
|
ctx->mdata.iov.size, slba);
|
|
|
|
if (status) {
|
|
|
|
req->status = status;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
req->status = nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size,
|
|
|
|
ctx->mdata.bounce, ctx->mdata.iov.size,
|
2021-06-17 22:06:52 +03:00
|
|
|
prinfo, slba, apptag, appmask, &reftag);
|
2021-02-09 20:29:42 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
qemu_iovec_destroy(&ctx->data.iov);
|
|
|
|
g_free(ctx->data.bounce);
|
|
|
|
|
|
|
|
qemu_iovec_destroy(&ctx->mdata.iov);
|
|
|
|
g_free(ctx->mdata.bounce);
|
|
|
|
|
|
|
|
g_free(ctx);
|
|
|
|
|
|
|
|
nvme_enqueue_req_completion(nvme_cq(req), req);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
|
|
|
static void nvme_verify_mdata_in_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeBounceContext *ctx = opaque;
|
|
|
|
NvmeRequest *req = ctx->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
|
|
|
uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
|
|
|
|
size_t mlen = nvme_m2b(ns, nlb);
|
2021-04-13 22:51:30 +03:00
|
|
|
uint64_t offset = nvme_moff(ns, slba);
|
2021-02-09 20:29:42 +03:00
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
|
|
|
|
trace_pci_nvme_verify_mdata_in_cb(nvme_cid(req), blk_name(blk));
|
|
|
|
|
|
|
|
if (ret) {
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
ctx->mdata.bounce = g_malloc(mlen);
|
|
|
|
|
|
|
|
qemu_iovec_reset(&ctx->mdata.iov);
|
|
|
|
qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen);
|
|
|
|
|
|
|
|
req->aiocb = blk_aio_preadv(blk, offset, &ctx->mdata.iov, 0,
|
|
|
|
nvme_verify_cb, ctx);
|
|
|
|
return;
|
|
|
|
|
|
|
|
out:
|
|
|
|
nvme_verify_cb(ctx, ret);
|
|
|
|
}
|
|
|
|
|
2020-11-16 13:14:02 +03:00
|
|
|
struct nvme_compare_ctx {
|
2020-11-23 13:24:55 +03:00
|
|
|
struct {
|
|
|
|
QEMUIOVector iov;
|
|
|
|
uint8_t *bounce;
|
|
|
|
} data;
|
|
|
|
|
|
|
|
struct {
|
|
|
|
QEMUIOVector iov;
|
|
|
|
uint8_t *bounce;
|
|
|
|
} mdata;
|
2020-11-16 13:14:02 +03:00
|
|
|
};
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
static void nvme_compare_mdata_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeRequest *req = opaque;
|
2021-02-04 11:55:48 +03:00
|
|
|
NvmeNamespace *ns = req->ns;
|
2020-11-23 13:24:55 +03:00
|
|
|
NvmeCtrl *n = nvme_ctrl(req);
|
2021-02-04 11:55:48 +03:00
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
2021-06-17 22:06:52 +03:00
|
|
|
uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
|
2021-02-04 11:55:48 +03:00
|
|
|
uint16_t apptag = le16_to_cpu(rw->apptag);
|
|
|
|
uint16_t appmask = le16_to_cpu(rw->appmask);
|
2021-11-16 16:26:52 +03:00
|
|
|
uint64_t reftag = le32_to_cpu(rw->reftag);
|
|
|
|
uint64_t cdw3 = le32_to_cpu(rw->cdw3);
|
2020-11-23 13:24:55 +03:00
|
|
|
struct nvme_compare_ctx *ctx = req->opaque;
|
|
|
|
g_autofree uint8_t *buf = NULL;
|
2021-04-16 10:22:33 +03:00
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
BlockAcctCookie *acct = &req->acct;
|
|
|
|
BlockAcctStats *stats = blk_get_stats(blk);
|
2020-11-23 13:24:55 +03:00
|
|
|
uint16_t status = NVME_SUCCESS;
|
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
reftag |= cdw3 << 32;
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
trace_pci_nvme_compare_mdata_cb(nvme_cid(req));
|
|
|
|
|
2021-04-16 10:22:33 +03:00
|
|
|
if (ret) {
|
|
|
|
block_acct_failed(stats, acct);
|
|
|
|
nvme_aio_err(req, ret);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
buf = g_malloc(ctx->mdata.iov.size);
|
|
|
|
|
|
|
|
status = nvme_bounce_mdata(n, buf, ctx->mdata.iov.size,
|
|
|
|
NVME_TX_DIRECTION_TO_DEVICE, req);
|
|
|
|
if (status) {
|
|
|
|
req->status = status;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2021-02-04 11:55:48 +03:00
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
|
|
|
uint8_t *bufp;
|
|
|
|
uint8_t *mbufp = ctx->mdata.bounce;
|
|
|
|
uint8_t *end = mbufp + ctx->mdata.iov.size;
|
|
|
|
int16_t pil = 0;
|
|
|
|
|
|
|
|
status = nvme_dif_check(ns, ctx->data.bounce, ctx->data.iov.size,
|
2021-06-17 22:06:52 +03:00
|
|
|
ctx->mdata.bounce, ctx->mdata.iov.size, prinfo,
|
2021-06-17 22:06:50 +03:00
|
|
|
slba, apptag, appmask, &reftag);
|
2021-02-04 11:55:48 +03:00
|
|
|
if (status) {
|
|
|
|
req->status = status;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* When formatted with protection information, do not compare the DIF
|
|
|
|
* tuple.
|
|
|
|
*/
|
|
|
|
if (!(ns->id_ns.dps & NVME_ID_NS_DPS_FIRST_EIGHT)) {
|
2022-02-14 11:29:01 +03:00
|
|
|
pil = ns->lbaf.ms - nvme_pi_tuple_size(ns);
|
2021-02-04 11:55:48 +03:00
|
|
|
}
|
|
|
|
|
2021-04-14 22:34:44 +03:00
|
|
|
for (bufp = buf; mbufp < end; bufp += ns->lbaf.ms, mbufp += ns->lbaf.ms) {
|
|
|
|
if (memcmp(bufp + pil, mbufp + pil, ns->lbaf.ms - pil)) {
|
2022-08-25 08:29:08 +03:00
|
|
|
req->status = NVME_CMP_FAILURE | NVME_DNR;
|
2021-02-04 11:55:48 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (memcmp(buf, ctx->mdata.bounce, ctx->mdata.iov.size)) {
|
2022-08-25 08:29:08 +03:00
|
|
|
req->status = NVME_CMP_FAILURE | NVME_DNR;
|
2020-11-23 13:24:55 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2021-04-16 10:22:33 +03:00
|
|
|
block_acct_done(stats, acct);
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
out:
|
|
|
|
qemu_iovec_destroy(&ctx->data.iov);
|
|
|
|
g_free(ctx->data.bounce);
|
|
|
|
|
|
|
|
qemu_iovec_destroy(&ctx->mdata.iov);
|
|
|
|
g_free(ctx->mdata.bounce);
|
|
|
|
|
|
|
|
g_free(ctx);
|
|
|
|
|
|
|
|
nvme_enqueue_req_completion(nvme_cq(req), req);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_compare_data_cb(void *opaque, int ret)
|
2020-11-16 13:14:02 +03:00
|
|
|
{
|
|
|
|
NvmeRequest *req = opaque;
|
2020-11-23 13:24:55 +03:00
|
|
|
NvmeCtrl *n = nvme_ctrl(req);
|
2020-11-16 13:14:02 +03:00
|
|
|
NvmeNamespace *ns = req->ns;
|
2020-11-23 13:24:55 +03:00
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
BlockAcctCookie *acct = &req->acct;
|
|
|
|
BlockAcctStats *stats = blk_get_stats(blk);
|
|
|
|
|
2020-11-16 13:14:02 +03:00
|
|
|
struct nvme_compare_ctx *ctx = req->opaque;
|
|
|
|
g_autofree uint8_t *buf = NULL;
|
|
|
|
uint16_t status;
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
trace_pci_nvme_compare_data_cb(nvme_cid(req));
|
2020-11-16 13:14:02 +03:00
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (ret) {
|
|
|
|
block_acct_failed(stats, acct);
|
2020-11-16 13:14:02 +03:00
|
|
|
nvme_aio_err(req, ret);
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
buf = g_malloc(ctx->data.iov.size);
|
2020-11-16 13:14:02 +03:00
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
status = nvme_bounce_data(n, buf, ctx->data.iov.size,
|
|
|
|
NVME_TX_DIRECTION_TO_DEVICE, req);
|
2020-11-16 13:14:02 +03:00
|
|
|
if (status) {
|
|
|
|
req->status = status;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (memcmp(buf, ctx->data.bounce, ctx->data.iov.size)) {
|
2022-08-25 08:29:08 +03:00
|
|
|
req->status = NVME_CMP_FAILURE | NVME_DNR;
|
2020-11-23 13:24:55 +03:00
|
|
|
goto out;
|
2020-11-16 13:14:02 +03:00
|
|
|
}
|
|
|
|
|
2021-04-14 22:34:44 +03:00
|
|
|
if (ns->lbaf.ms) {
|
2020-11-23 13:24:55 +03:00
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
|
|
|
uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
|
|
|
|
size_t mlen = nvme_m2b(ns, nlb);
|
2021-04-13 22:51:30 +03:00
|
|
|
uint64_t offset = nvme_moff(ns, slba);
|
2020-11-23 13:24:55 +03:00
|
|
|
|
|
|
|
ctx->mdata.bounce = g_malloc(mlen);
|
|
|
|
|
|
|
|
qemu_iovec_init(&ctx->mdata.iov, 1);
|
|
|
|
qemu_iovec_add(&ctx->mdata.iov, ctx->mdata.bounce, mlen);
|
|
|
|
|
|
|
|
req->aiocb = blk_aio_preadv(blk, offset, &ctx->mdata.iov, 0,
|
|
|
|
nvme_compare_mdata_cb, req);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
block_acct_done(stats, acct);
|
|
|
|
|
2020-11-16 13:14:02 +03:00
|
|
|
out:
|
2020-11-23 13:24:55 +03:00
|
|
|
qemu_iovec_destroy(&ctx->data.iov);
|
|
|
|
g_free(ctx->data.bounce);
|
2020-11-16 13:14:02 +03:00
|
|
|
g_free(ctx);
|
|
|
|
|
|
|
|
nvme_enqueue_req_completion(nvme_cq(req), req);
|
|
|
|
}
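
/*
 * Dataset Management (deallocate) is implemented as a cancellable AIOCB;
 * the structure below tracks the range list copied from the host and the
 * index of the range currently being processed.
 */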
typedef struct NvmeDSMAIOCB {
    BlockAIOCB common;
    BlockAIOCB *aiocb;
    NvmeRequest *req;
    int ret;

    NvmeDsmRange *range;
    unsigned int nr;
    unsigned int idx;
} NvmeDSMAIOCB;

static void nvme_dsm_cancel(BlockAIOCB *aiocb)
{
    NvmeDSMAIOCB *iocb = container_of(aiocb, NvmeDSMAIOCB, common);

    /* break nvme_dsm_cb loop */
    iocb->idx = iocb->nr;
    iocb->ret = -ECANCELED;

    if (iocb->aiocb) {
        blk_aio_cancel_async(iocb->aiocb);
        iocb->aiocb = NULL;
    } else {
        /*
         * We only reach this if nvme_dsm_cancel() has already been called or
         * the command ran to completion.
         */
        assert(iocb->idx == iocb->nr);
    }
}

static const AIOCBInfo nvme_dsm_aiocb_info = {
    .aiocb_size = sizeof(NvmeDSMAIOCB),
    .cancel_async = nvme_dsm_cancel,
};

static void nvme_dsm_cb(void *opaque, int ret);
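
/*
 * Completion callback for each discard issued by nvme_dsm_cb. If the
 * namespace carries metadata and the discarded blocks now read back as
 * zeroes, the corresponding metadata is write-zeroed as well before the
 * next range is processed.
 */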
static void nvme_dsm_md_cb(void *opaque, int ret)
{
    NvmeDSMAIOCB *iocb = opaque;
    NvmeRequest *req = iocb->req;
    NvmeNamespace *ns = req->ns;
    NvmeDsmRange *range;
    uint64_t slba;
    uint32_t nlb;

    if (ret < 0 || iocb->ret < 0 || !ns->lbaf.ms) {
        goto done;
    }

    range = &iocb->range[iocb->idx - 1];
    slba = le64_to_cpu(range->slba);
    nlb = le32_to_cpu(range->nlb);

    /*
     * Check that all blocks were discarded (zeroed); otherwise we do not
     * zero the metadata.
     */

    ret = nvme_block_status_all(ns, slba, nlb, BDRV_BLOCK_ZERO);
    if (ret) {
        if (ret < 0) {
            goto done;
        }

        nvme_dsm_cb(iocb, 0);
        return;
    }

    iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, nvme_moff(ns, slba),
                                        nvme_m2b(ns, nlb), BDRV_REQ_MAY_UNMAP,
                                        nvme_dsm_cb, iocb);
    return;

done:
    nvme_dsm_cb(iocb, ret);
}

static void nvme_dsm_cb(void *opaque, int ret)
{
    NvmeDSMAIOCB *iocb = opaque;
    NvmeRequest *req = iocb->req;
    NvmeCtrl *n = nvme_ctrl(req);
    NvmeNamespace *ns = req->ns;
    NvmeDsmRange *range;
    uint64_t slba;
    uint32_t nlb;

    if (iocb->ret < 0) {
        goto done;
    } else if (ret < 0) {
        iocb->ret = ret;
        goto done;
    }

next:
    if (iocb->idx == iocb->nr) {
        goto done;
    }

    range = &iocb->range[iocb->idx++];
    slba = le64_to_cpu(range->slba);
    nlb = le32_to_cpu(range->nlb);

    trace_pci_nvme_dsm_deallocate(slba, nlb);

    if (nlb > n->dmrsl) {
        trace_pci_nvme_dsm_single_range_limit_exceeded(nlb, n->dmrsl);
        goto next;
    }

    if (nvme_check_bounds(ns, slba, nlb)) {
        trace_pci_nvme_err_invalid_lba_range(slba, nlb,
                                             ns->id_ns.nsze);
        goto next;
    }

    iocb->aiocb = blk_aio_pdiscard(ns->blkconf.blk, nvme_l2b(ns, slba),
                                   nvme_l2b(ns, nlb),
                                   nvme_dsm_md_cb, iocb);
    return;

done:
    iocb->aiocb = NULL;
    iocb->common.cb(iocb->common.opaque, iocb->ret);
    qemu_aio_unref(iocb);
}
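
/*
 * Entry point for the Dataset Management command. Only the deallocate (AD)
 * attribute is acted upon: the range list is copied from the host and then
 * processed one range at a time by nvme_dsm_cb.
 */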
static uint16_t nvme_dsm(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeNamespace *ns = req->ns;
    NvmeDsmCmd *dsm = (NvmeDsmCmd *) &req->cmd;
    uint32_t attr = le32_to_cpu(dsm->attributes);
    uint32_t nr = (le32_to_cpu(dsm->nr) & 0xff) + 1;
    uint16_t status = NVME_SUCCESS;

    trace_pci_nvme_dsm(nr, attr);

    if (attr & NVME_DSMGMT_AD) {
        NvmeDSMAIOCB *iocb = blk_aio_get(&nvme_dsm_aiocb_info, ns->blkconf.blk,
                                         nvme_misc_cb, req);

        iocb->req = req;
        iocb->ret = 0;
        iocb->range = g_new(NvmeDsmRange, nr);
        iocb->nr = nr;
        iocb->idx = 0;

        status = nvme_h2c(n, (uint8_t *)iocb->range, sizeof(NvmeDsmRange) * nr,
                          req);
        if (status) {
            g_free(iocb->range);
            qemu_aio_unref(iocb);

            return status;
        }

        req->aiocb = &iocb->common;
        nvme_dsm_cb(iocb, 0);

        return NVME_NO_COMPLETE;
    }

    return status;
}
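
/*
 * The Verify command is emulated by reading the target range into bounce
 * buffers and running the usual end-to-end protection checks on it; the
 * transfer size is capped by the vsl device parameter rather than MDTS.
 */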
static uint16_t nvme_verify(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
    NvmeNamespace *ns = req->ns;
    BlockBackend *blk = ns->blkconf.blk;
    uint64_t slba = le64_to_cpu(rw->slba);
    uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
    size_t len = nvme_l2b(ns, nlb);
    int64_t offset = nvme_l2b(ns, slba);
    uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
    uint32_t reftag = le32_to_cpu(rw->reftag);
    NvmeBounceContext *ctx = NULL;
    uint16_t status;

    trace_pci_nvme_verify(nvme_cid(req), nvme_nsid(ns), slba, nlb);

    if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
        status = nvme_check_prinfo(ns, prinfo, slba, reftag);
        if (status) {
            return status;
        }

        if (prinfo & NVME_PRINFO_PRACT) {
            return NVME_INVALID_PROT_INFO | NVME_DNR;
        }
    }

    if (len > n->page_size << n->params.vsl) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    status = nvme_check_bounds(ns, slba, nlb);
    if (status) {
        return status;
    }

    if (NVME_ERR_REC_DULBE(ns->features.err_rec)) {
        status = nvme_check_dulbe(ns, slba, nlb);
        if (status) {
            return status;
        }
    }

    ctx = g_new0(NvmeBounceContext, 1);
    ctx->req = req;

    ctx->data.bounce = g_malloc(len);

    qemu_iovec_init(&ctx->data.iov, 1);
    qemu_iovec_add(&ctx->data.iov, ctx->data.bounce, len);

    block_acct_start(blk_get_stats(blk), &req->acct, ctx->data.iov.size,
                     BLOCK_ACCT_READ);

    req->aiocb = blk_aio_preadv(ns->blkconf.blk, offset, &ctx->data.iov, 0,
                                nvme_verify_mdata_in_cb, ctx);
    return NVME_NO_COMPLETE;
}
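
/*
 * State for the Simple Copy command. Source ranges are processed one at a
 * time: each range is read into the bounce buffer (nvme_copy_in_cb and
 * nvme_copy_in_completed_cb), then written out at the destination LBA
 * (nvme_copy_out_cb and nvme_copy_out_completed_cb), advancing idx and slba
 * until all ranges are done or an error occurs.
 */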
typedef struct NvmeCopyAIOCB {
    BlockAIOCB common;
    BlockAIOCB *aiocb;
    NvmeRequest *req;
    int ret;

    void *ranges;
    unsigned int format;
    int nr;
    int idx;

    uint8_t *bounce;
    QEMUIOVector iov;
    struct {
        BlockAcctCookie read;
        BlockAcctCookie write;
    } acct;

    uint64_t reftag;
    uint64_t slba;

    NvmeZone *zone;
} NvmeCopyAIOCB;

static void nvme_copy_cancel(BlockAIOCB *aiocb)
{
    NvmeCopyAIOCB *iocb = container_of(aiocb, NvmeCopyAIOCB, common);

    iocb->ret = -ECANCELED;

    if (iocb->aiocb) {
        blk_aio_cancel_async(iocb->aiocb);
        iocb->aiocb = NULL;
    }
}

static const AIOCBInfo nvme_copy_aiocb_info = {
    .aiocb_size = sizeof(NvmeCopyAIOCB),
    .cancel_async = nvme_copy_cancel,
};
|
|
|
|
|
2022-07-14 10:37:20 +03:00
|
|
|
static void nvme_copy_done(NvmeCopyAIOCB *iocb)
|
2020-11-06 12:46:01 +03:00
|
|
|
{
|
2021-06-17 22:06:54 +03:00
|
|
|
NvmeRequest *req = iocb->req;
|
2020-11-06 12:46:01 +03:00
|
|
|
NvmeNamespace *ns = req->ns;
|
2021-06-17 22:06:54 +03:00
|
|
|
BlockAcctStats *stats = blk_get_stats(ns->blkconf.blk);
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (iocb->idx != iocb->nr) {
|
|
|
|
req->cqe.result = cpu_to_le32(iocb->idx);
|
|
|
|
}
|
2021-02-04 11:55:48 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
qemu_iovec_destroy(&iocb->iov);
|
|
|
|
g_free(iocb->bounce);
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (iocb->ret < 0) {
|
|
|
|
block_acct_failed(stats, &iocb->acct.read);
|
|
|
|
block_acct_failed(stats, &iocb->acct.write);
|
|
|
|
} else {
|
|
|
|
block_acct_done(stats, &iocb->acct.read);
|
|
|
|
block_acct_done(stats, &iocb->acct.write);
|
2021-02-04 11:55:48 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
iocb->common.cb(iocb->common.opaque, iocb->ret);
|
|
|
|
qemu_aio_unref(iocb);
|
|
|
|
}
|
|
|
|
|
2022-07-14 10:37:20 +03:00
|
|
|
static void nvme_do_copy(NvmeCopyAIOCB *iocb);
|
2021-06-17 22:06:54 +03:00
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
static void nvme_copy_source_range_parse_format0(void *ranges, int idx,
|
|
|
|
uint64_t *slba, uint32_t *nlb,
|
|
|
|
uint16_t *apptag,
|
|
|
|
uint16_t *appmask,
|
|
|
|
uint64_t *reftag)
|
|
|
|
{
|
|
|
|
NvmeCopySourceRangeFormat0 *_ranges = ranges;
|
|
|
|
|
|
|
|
if (slba) {
|
|
|
|
*slba = le64_to_cpu(_ranges[idx].slba);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nlb) {
|
|
|
|
*nlb = le16_to_cpu(_ranges[idx].nlb) + 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (apptag) {
|
|
|
|
*apptag = le16_to_cpu(_ranges[idx].apptag);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (appmask) {
|
|
|
|
*appmask = le16_to_cpu(_ranges[idx].appmask);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (reftag) {
|
|
|
|
*reftag = le32_to_cpu(_ranges[idx].reftag);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_copy_source_range_parse_format1(void *ranges, int idx,
|
|
|
|
uint64_t *slba, uint32_t *nlb,
|
|
|
|
uint16_t *apptag,
|
|
|
|
uint16_t *appmask,
|
|
|
|
uint64_t *reftag)
|
|
|
|
{
|
|
|
|
NvmeCopySourceRangeFormat1 *_ranges = ranges;
|
|
|
|
|
|
|
|
if (slba) {
|
|
|
|
*slba = le64_to_cpu(_ranges[idx].slba);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nlb) {
|
|
|
|
*nlb = le16_to_cpu(_ranges[idx].nlb) + 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (apptag) {
|
|
|
|
*apptag = le16_to_cpu(_ranges[idx].apptag);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (appmask) {
|
|
|
|
*appmask = le16_to_cpu(_ranges[idx].appmask);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (reftag) {
|
|
|
|
*reftag = 0;
|
|
|
|
|
|
|
|
*reftag |= (uint64_t)_ranges[idx].sr[4] << 40;
|
|
|
|
*reftag |= (uint64_t)_ranges[idx].sr[5] << 32;
|
|
|
|
*reftag |= (uint64_t)_ranges[idx].sr[6] << 24;
|
|
|
|
*reftag |= (uint64_t)_ranges[idx].sr[7] << 16;
|
|
|
|
*reftag |= (uint64_t)_ranges[idx].sr[8] << 8;
|
|
|
|
*reftag |= (uint64_t)_ranges[idx].sr[9];
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_copy_source_range_parse(void *ranges, int idx, uint8_t format,
|
|
|
|
uint64_t *slba, uint32_t *nlb,
|
|
|
|
uint16_t *apptag, uint16_t *appmask,
|
|
|
|
uint64_t *reftag)
|
|
|
|
{
|
|
|
|
switch (format) {
|
|
|
|
case NVME_COPY_FORMAT_0:
|
|
|
|
nvme_copy_source_range_parse_format0(ranges, idx, slba, nlb, apptag,
|
|
|
|
appmask, reftag);
|
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_COPY_FORMAT_1:
|
|
|
|
nvme_copy_source_range_parse_format1(ranges, idx, slba, nlb, apptag,
|
|
|
|
appmask, reftag);
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
abort();
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
static void nvme_copy_out_completed_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeCopyAIOCB *iocb = opaque;
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
2021-11-16 16:26:52 +03:00
|
|
|
uint32_t nlb;
|
|
|
|
|
|
|
|
nvme_copy_source_range_parse(iocb->ranges, iocb->idx, iocb->format, NULL,
|
|
|
|
&nlb, NULL, NULL, NULL);
|
2021-06-17 22:06:54 +03:00
|
|
|
|
|
|
|
if (ret < 0) {
|
|
|
|
iocb->ret = ret;
|
|
|
|
goto out;
|
|
|
|
} else if (iocb->ret < 0) {
|
|
|
|
goto out;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (ns->params.zoned) {
|
|
|
|
nvme_advance_zone_wp(ns, iocb->zone, nlb);
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->idx++;
|
|
|
|
iocb->slba += nlb;
|
|
|
|
out:
|
2022-07-14 10:37:20 +03:00
|
|
|
nvme_do_copy(iocb);
|
2021-06-17 22:06:54 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_copy_out_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeCopyAIOCB *iocb = opaque;
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint32_t nlb;
|
|
|
|
size_t mlen;
|
|
|
|
uint8_t *mbounce;
|
|
|
|
|
2022-07-14 10:37:20 +03:00
|
|
|
if (ret < 0 || iocb->ret < 0 || !ns->lbaf.ms) {
|
2021-06-17 22:06:54 +03:00
|
|
|
goto out;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
nvme_copy_source_range_parse(iocb->ranges, iocb->idx, iocb->format, NULL,
|
|
|
|
&nlb, NULL, NULL, NULL);
|
2021-06-17 22:06:54 +03:00
|
|
|
|
|
|
|
mlen = nvme_m2b(ns, nlb);
|
|
|
|
mbounce = iocb->bounce + nvme_l2b(ns, nlb);
|
|
|
|
|
|
|
|
qemu_iovec_reset(&iocb->iov);
|
|
|
|
qemu_iovec_add(&iocb->iov, mbounce, mlen);
|
|
|
|
|
|
|
|
iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_moff(ns, iocb->slba),
|
|
|
|
&iocb->iov, 0, nvme_copy_out_completed_cb,
|
|
|
|
iocb);
|
|
|
|
|
|
|
|
return;
|
|
|
|
|
|
|
|
out:
|
2022-07-14 10:37:20 +03:00
|
|
|
nvme_copy_out_completed_cb(iocb, ret);
|
2021-06-17 22:06:54 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_copy_in_completed_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeCopyAIOCB *iocb = opaque;
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint32_t nlb;
|
2021-11-16 16:26:52 +03:00
|
|
|
uint64_t slba;
|
|
|
|
uint16_t apptag, appmask;
|
|
|
|
uint64_t reftag;
|
2021-06-17 22:06:54 +03:00
|
|
|
size_t len;
|
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
if (ret < 0) {
|
|
|
|
iocb->ret = ret;
|
|
|
|
goto out;
|
|
|
|
} else if (iocb->ret < 0) {
|
2021-02-04 11:55:48 +03:00
|
|
|
goto out;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
nvme_copy_source_range_parse(iocb->ranges, iocb->idx, iocb->format, &slba,
|
|
|
|
&nlb, &apptag, &appmask, &reftag);
|
2021-06-17 22:06:54 +03:00
|
|
|
len = nvme_l2b(ns, nlb);
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
trace_pci_nvme_copy_out(iocb->slba, nlb);
|
|
|
|
|
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
|
|
|
NvmeCopyCmd *copy = (NvmeCopyCmd *)&req->cmd;
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
uint16_t prinfor = ((copy->control[0] >> 4) & 0xf);
|
|
|
|
uint16_t prinfow = ((copy->control[2] >> 2) & 0xf);
|
|
|
|
|
|
|
|
size_t mlen = nvme_m2b(ns, nlb);
|
|
|
|
uint8_t *mbounce = iocb->bounce + nvme_l2b(ns, nlb);
|
|
|
|
|
2022-04-21 13:51:58 +03:00
|
|
|
status = nvme_dif_mangle_mdata(ns, mbounce, mlen, slba);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
2021-06-17 22:06:54 +03:00
|
|
|
status = nvme_dif_check(ns, iocb->bounce, len, mbounce, mlen, prinfor,
|
|
|
|
slba, apptag, appmask, &reftag);
|
2020-11-06 12:46:01 +03:00
|
|
|
if (status) {
|
2021-06-17 22:06:54 +03:00
|
|
|
goto invalid;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
apptag = le16_to_cpu(copy->apptag);
|
|
|
|
appmask = le16_to_cpu(copy->appmask);
|
|
|
|
|
|
|
|
if (prinfow & NVME_PRINFO_PRACT) {
|
|
|
|
status = nvme_check_prinfo(ns, prinfow, iocb->slba, iocb->reftag);
|
2020-11-06 12:46:01 +03:00
|
|
|
if (status) {
|
2021-06-17 22:06:54 +03:00
|
|
|
goto invalid;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
nvme_dif_pract_generate_dif(ns, iocb->bounce, len, mbounce, mlen,
|
|
|
|
apptag, &iocb->reftag);
|
|
|
|
} else {
|
|
|
|
status = nvme_dif_check(ns, iocb->bounce, len, mbounce, mlen,
|
|
|
|
prinfow, iocb->slba, apptag, appmask,
|
|
|
|
&iocb->reftag);
|
2020-11-06 12:46:01 +03:00
|
|
|
if (status) {
|
2021-06-17 22:06:54 +03:00
|
|
|
goto invalid;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
}
|
2021-06-17 22:06:54 +03:00
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
status = nvme_check_bounds(ns, iocb->slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (ns->params.zoned) {
|
|
|
|
status = nvme_check_zone_write(ns, iocb->zone, iocb->slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
if (!(iocb->zone->d.za & NVME_ZA_ZRWA_VALID)) {
|
|
|
|
iocb->zone->w_ptr += nlb;
|
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
qemu_iovec_reset(&iocb->iov);
|
|
|
|
qemu_iovec_add(&iocb->iov, iocb->bounce, len);
|
|
|
|
|
|
|
|
iocb->aiocb = blk_aio_pwritev(ns->blkconf.blk, nvme_l2b(ns, iocb->slba),
|
|
|
|
&iocb->iov, 0, nvme_copy_out_cb, iocb);
|
|
|
|
|
|
|
|
return;
|
|
|
|
|
|
|
|
invalid:
|
|
|
|
req->status = status;
|
2022-07-14 10:37:20 +03:00
|
|
|
iocb->ret = -1;
|
2021-06-17 22:06:54 +03:00
|
|
|
out:
|
2022-07-14 10:37:20 +03:00
|
|
|
nvme_do_copy(iocb);
|
2021-06-17 22:06:54 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_copy_in_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeCopyAIOCB *iocb = opaque;
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint64_t slba;
|
|
|
|
uint32_t nlb;
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2022-07-14 10:37:20 +03:00
|
|
|
if (ret < 0 || iocb->ret < 0 || !ns->lbaf.ms) {
|
2021-06-17 22:06:54 +03:00
|
|
|
goto out;
|
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
nvme_copy_source_range_parse(iocb->ranges, iocb->idx, iocb->format, &slba,
|
|
|
|
&nlb, NULL, NULL, NULL);
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
qemu_iovec_reset(&iocb->iov);
|
|
|
|
qemu_iovec_add(&iocb->iov, iocb->bounce + nvme_l2b(ns, nlb),
|
|
|
|
nvme_m2b(ns, nlb));
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_moff(ns, slba),
|
|
|
|
&iocb->iov, 0, nvme_copy_in_completed_cb,
|
|
|
|
iocb);
|
|
|
|
return;
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
out:
|
2022-07-14 10:37:20 +03:00
|
|
|
nvme_copy_in_completed_cb(iocb, ret);
|
2021-06-17 22:06:54 +03:00
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2022-07-14 10:37:20 +03:00
|
|
|
static void nvme_do_copy(NvmeCopyAIOCB *iocb)
|
2021-06-17 22:06:54 +03:00
|
|
|
{
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint64_t slba;
|
|
|
|
uint32_t nlb;
|
|
|
|
size_t len;
|
|
|
|
uint16_t status;
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2022-07-14 10:37:20 +03:00
|
|
|
if (iocb->ret < 0) {
|
2021-06-17 22:06:54 +03:00
|
|
|
goto done;
|
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (iocb->idx == iocb->nr) {
|
|
|
|
goto done;
|
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
nvme_copy_source_range_parse(iocb->ranges, iocb->idx, iocb->format, &slba,
|
|
|
|
&nlb, NULL, NULL, NULL);
|
2021-06-17 22:06:54 +03:00
|
|
|
len = nvme_l2b(ns, nlb);
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
trace_pci_nvme_copy_source_range(slba, nlb);
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (nlb > le16_to_cpu(ns->id_ns.mssrl)) {
|
|
|
|
status = NVME_CMD_SIZE_LIMIT | NVME_DNR;
|
|
|
|
goto invalid;
|
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
status = nvme_check_bounds(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (NVME_ERR_REC_DULBE(ns->features.err_rec)) {
|
|
|
|
status = nvme_check_dulbe(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
if (ns->params.zoned) {
|
|
|
|
status = nvme_check_zone_read(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
2020-11-23 13:24:55 +03:00
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
qemu_iovec_reset(&iocb->iov);
|
|
|
|
qemu_iovec_add(&iocb->iov, iocb->bounce, len);
|
|
|
|
|
|
|
|
iocb->aiocb = blk_aio_preadv(ns->blkconf.blk, nvme_l2b(ns, slba),
|
|
|
|
&iocb->iov, 0, nvme_copy_in_cb, iocb);
|
|
|
|
return;
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
invalid:
|
|
|
|
req->status = status;
|
2022-07-14 10:37:20 +03:00
|
|
|
iocb->ret = -1;
|
2021-06-17 22:06:54 +03:00
|
|
|
done:
|
2022-07-14 10:37:20 +03:00
|
|
|
nvme_copy_done(iocb);
|
2021-06-17 22:06:54 +03:00
|
|
|
}
|
2020-11-06 12:46:01 +03:00
|
|
|
|
2021-06-17 22:06:54 +03:00
|
|
|
static uint16_t nvme_copy(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
NvmeCopyCmd *copy = (NvmeCopyCmd *)&req->cmd;
|
|
|
|
NvmeCopyAIOCB *iocb = blk_aio_get(&nvme_copy_aiocb_info, ns->blkconf.blk,
|
|
|
|
nvme_misc_cb, req);
|
|
|
|
uint16_t nr = copy->nr + 1;
|
|
|
|
uint8_t format = copy->control[0] & 0xf;
|
|
|
|
uint16_t prinfor = ((copy->control[0] >> 4) & 0xf);
|
|
|
|
uint16_t prinfow = ((copy->control[2] >> 2) & 0xf);
|
2021-11-16 16:26:52 +03:00
|
|
|
size_t len = sizeof(NvmeCopySourceRangeFormat0);
|
2021-06-17 22:06:54 +03:00
|
|
|
|
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
trace_pci_nvme_copy(nvme_cid(req), nvme_nsid(ns), nr, format);
|
|
|
|
|
|
|
|
iocb->ranges = NULL;
|
|
|
|
iocb->zone = NULL;
|
|
|
|
|
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) &&
|
|
|
|
((prinfor & NVME_PRINFO_PRACT) != (prinfow & NVME_PRINFO_PRACT))) {
|
|
|
|
status = NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!(n->id_ctrl.ocfs & (1 << format))) {
|
|
|
|
trace_pci_nvme_err_copy_invalid_format(format);
|
|
|
|
status = NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nr > ns->id_ns.msrc + 1) {
|
|
|
|
status = NVME_CMD_SIZE_LIMIT | NVME_DNR;
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2022-11-02 11:06:00 +03:00
|
|
|
if ((ns->pif == 0x0 && format != 0x0) ||
|
|
|
|
(ns->pif != 0x0 && format != 0x1)) {
|
2021-11-16 16:26:52 +03:00
|
|
|
status = NVME_INVALID_FORMAT | NVME_DNR;
|
|
|
|
goto invalid;
|
|
|
|
}
|
2021-02-04 11:55:48 +03:00
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
if (ns->pif) {
|
|
|
|
len = sizeof(NvmeCopySourceRangeFormat1);
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->format = format;
|
|
|
|
iocb->ranges = g_malloc_n(nr, len);
|
|
|
|
status = nvme_h2c(n, (uint8_t *)iocb->ranges, len * nr, req);
|
2021-06-17 22:06:54 +03:00
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->slba = le64_to_cpu(copy->sdlba);
|
|
|
|
|
|
|
|
if (ns->params.zoned) {
|
|
|
|
iocb->zone = nvme_get_zone_by_slba(ns, iocb->slba);
|
|
|
|
if (!iocb->zone) {
|
|
|
|
status = NVME_LBA_RANGE | NVME_DNR;
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
status = nvme_zrm_auto(n, ns, iocb->zone);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->req = req;
|
|
|
|
iocb->ret = 0;
|
|
|
|
iocb->nr = nr;
|
|
|
|
iocb->idx = 0;
|
|
|
|
iocb->reftag = le32_to_cpu(copy->reftag);
|
2021-11-16 16:26:52 +03:00
|
|
|
iocb->reftag |= (uint64_t)le32_to_cpu(copy->cdw3) << 32;
|
2021-06-17 22:06:54 +03:00
|
|
|
iocb->bounce = g_malloc_n(le16_to_cpu(ns->id_ns.mssrl),
|
|
|
|
ns->lbasz + ns->lbaf.ms);
|
|
|
|
|
|
|
|
qemu_iovec_init(&iocb->iov, 1);
|
|
|
|
|
|
|
|
block_acct_start(blk_get_stats(ns->blkconf.blk), &iocb->acct.read, 0,
|
|
|
|
BLOCK_ACCT_READ);
|
|
|
|
block_acct_start(blk_get_stats(ns->blkconf.blk), &iocb->acct.write, 0,
|
|
|
|
BLOCK_ACCT_WRITE);
|
|
|
|
|
|
|
|
req->aiocb = &iocb->common;
|
2022-07-14 10:37:20 +03:00
|
|
|
nvme_do_copy(iocb);
|
2021-06-17 22:06:54 +03:00
|
|
|
|
|
|
|
return NVME_NO_COMPLETE;
|
|
|
|
|
|
|
|
invalid:
|
|
|
|
g_free(iocb->ranges);
|
|
|
|
qemu_aio_unref(iocb);
|
2021-02-04 11:55:48 +03:00
|
|
|
return status;
|
2020-11-06 12:46:01 +03:00
|
|
|
}
|
|
|
|
|
2020-11-16 13:14:02 +03:00
|
|
|
static uint16_t nvme_compare(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
|
|
|
uint32_t nlb = le16_to_cpu(rw->nlb) + 1;
|
2021-06-17 22:06:52 +03:00
|
|
|
uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
|
2020-11-23 13:24:55 +03:00
|
|
|
size_t data_len = nvme_l2b(ns, nlb);
|
|
|
|
size_t len = data_len;
|
2020-11-16 13:14:02 +03:00
|
|
|
int64_t offset = nvme_l2b(ns, slba);
|
|
|
|
struct nvme_compare_ctx *ctx = NULL;
|
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
trace_pci_nvme_compare(nvme_cid(req), nvme_nsid(ns), slba, nlb);
|
|
|
|
|
2021-06-17 22:06:52 +03:00
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps) && (prinfo & NVME_PRINFO_PRACT)) {
|
2021-02-04 11:55:48 +03:00
|
|
|
return NVME_INVALID_PROT_INFO | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (nvme_ns_ext(ns)) {
|
|
|
|
len += nvme_m2b(ns, nlb);
|
|
|
|
}
|
|
|
|
|
2020-11-16 13:14:02 +03:00
|
|
|
status = nvme_check_mdts(n, len);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
status = nvme_check_bounds(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (NVME_ERR_REC_DULBE(ns->features.err_rec)) {
|
|
|
|
status = nvme_check_dulbe(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
status = nvme_map_dptr(n, &req->sg, len, &req->cmd);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
2020-11-16 13:14:02 +03:00
|
|
|
|
|
|
|
ctx = g_new(struct nvme_compare_ctx, 1);
|
2020-11-23 13:24:55 +03:00
|
|
|
ctx->data.bounce = g_malloc(data_len);
|
2020-11-16 13:14:02 +03:00
|
|
|
|
|
|
|
req->opaque = ctx;
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
qemu_iovec_init(&ctx->data.iov, 1);
|
|
|
|
qemu_iovec_add(&ctx->data.iov, ctx->data.bounce, data_len);
|
2020-11-16 13:14:02 +03:00
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
block_acct_start(blk_get_stats(blk), &req->acct, data_len,
|
|
|
|
BLOCK_ACCT_READ);
|
2021-04-08 14:46:03 +03:00
|
|
|
req->aiocb = blk_aio_preadv(blk, offset, &ctx->data.iov, 0,
|
|
|
|
nvme_compare_data_cb, req);
|
2020-11-16 13:14:02 +03:00
|
|
|
|
|
|
|
return NVME_NO_COMPLETE;
|
|
|
|
}
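
/*
 * Flush is implemented as a cancellable AIOCB so that a broadcast flush
 * (NSID 0xffffffff) can walk all attached namespaces, flushing one at a
 * time from nvme_do_flush/nvme_flush_ns_cb.
 */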
typedef struct NvmeFlushAIOCB {
    BlockAIOCB common;
    BlockAIOCB *aiocb;
    NvmeRequest *req;
    int ret;

    NvmeNamespace *ns;
    uint32_t nsid;
    bool broadcast;
} NvmeFlushAIOCB;

static void nvme_flush_cancel(BlockAIOCB *acb)
{
    NvmeFlushAIOCB *iocb = container_of(acb, NvmeFlushAIOCB, common);

    iocb->ret = -ECANCELED;

    if (iocb->aiocb) {
        blk_aio_cancel_async(iocb->aiocb);
        iocb->aiocb = NULL;
    }
}

static const AIOCBInfo nvme_flush_aiocb_info = {
    .aiocb_size = sizeof(NvmeFlushAIOCB),
    .cancel_async = nvme_flush_cancel,
    .get_aio_context = nvme_get_aio_context,
};

static void nvme_do_flush(NvmeFlushAIOCB *iocb);
|
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
static void nvme_flush_ns_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeFlushAIOCB *iocb = opaque;
|
|
|
|
NvmeNamespace *ns = iocb->ns;
|
2021-01-25 12:39:24 +03:00
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
if (ret < 0) {
|
|
|
|
iocb->ret = ret;
|
|
|
|
goto out;
|
|
|
|
} else if (iocb->ret < 0) {
|
|
|
|
goto out;
|
|
|
|
}
|
2021-01-25 12:39:24 +03:00
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
if (ns) {
|
|
|
|
trace_pci_nvme_flush_ns(iocb->nsid);
|
2021-01-25 12:39:24 +03:00
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
iocb->ns = NULL;
|
|
|
|
iocb->aiocb = blk_aio_flush(ns->blkconf.blk, nvme_flush_ns_cb, iocb);
|
|
|
|
return;
|
2021-01-25 12:39:24 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
out:
|
2022-11-10 09:59:44 +03:00
|
|
|
nvme_do_flush(iocb);
|
2021-06-17 22:06:47 +03:00
|
|
|
}
|
2021-01-25 12:39:24 +03:00
|
|
|
|
2022-11-10 09:59:44 +03:00
|
|
|
static void nvme_do_flush(NvmeFlushAIOCB *iocb)
|
2021-06-17 22:06:47 +03:00
|
|
|
{
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeCtrl *n = nvme_ctrl(req);
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (iocb->ret < 0) {
|
|
|
|
goto done;
|
2021-01-25 12:39:24 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:47 +03:00
|
|
|
if (iocb->broadcast) {
|
|
|
|
for (i = iocb->nsid + 1; i <= NVME_MAX_NAMESPACES; i++) {
|
|
|
|
iocb->ns = nvme_ns(n, i);
|
|
|
|
if (iocb->ns) {
|
|
|
|
iocb->nsid = i;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!iocb->ns) {
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_flush_ns_cb(iocb, 0);
|
|
|
|
return;
|
|
|
|
|
|
|
|
done:
|
|
|
|
iocb->common.cb(iocb->common.opaque, iocb->ret);
|
|
|
|
qemu_aio_unref(iocb);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_flush(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeFlushAIOCB *iocb;
|
|
|
|
uint32_t nsid = le32_to_cpu(req->cmd.nsid);
|
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
iocb = qemu_aio_get(&nvme_flush_aiocb_info, NULL, nvme_misc_cb, req);
|
|
|
|
|
|
|
|
iocb->req = req;
|
|
|
|
iocb->ret = 0;
|
|
|
|
iocb->ns = NULL;
|
|
|
|
iocb->nsid = 0;
|
|
|
|
iocb->broadcast = (nsid == NVME_NSID_BROADCAST);
|
|
|
|
|
|
|
|
if (!iocb->broadcast) {
|
|
|
|
if (!nvme_nsid_valid(n, nsid)) {
|
|
|
|
status = NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->ns = nvme_ns(n, nsid);
|
|
|
|
if (!iocb->ns) {
|
|
|
|
status = NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->nsid = nsid;
|
|
|
|
}
|
|
|
|
|
|
|
|
req->aiocb = &iocb->common;
|
2022-11-10 09:59:44 +03:00
|
|
|
nvme_do_flush(iocb);
|
2021-06-17 22:06:47 +03:00
|
|
|
|
|
|
|
return NVME_NO_COMPLETE;
|
|
|
|
|
|
|
|
out:
|
|
|
|
qemu_aio_unref(iocb);
|
|
|
|
|
2021-01-25 12:39:24 +03:00
|
|
|
return status;
|
2020-08-24 13:43:38 +03:00
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:00 +03:00
|
|
|
static uint16_t nvme_read(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
|
|
|
uint32_t nlb = (uint32_t)le16_to_cpu(rw->nlb) + 1;
|
2021-06-17 22:06:52 +03:00
|
|
|
uint8_t prinfo = NVME_RW_PRINFO(le16_to_cpu(rw->control));
|
2020-12-08 23:04:00 +03:00
|
|
|
uint64_t data_size = nvme_l2b(ns, nlb);
|
2020-11-23 13:24:55 +03:00
|
|
|
uint64_t mapped_size = data_size;
|
2020-12-08 23:04:00 +03:00
|
|
|
uint64_t data_offset;
|
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
|
|
|
uint16_t status;
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (nvme_ns_ext(ns)) {
|
|
|
|
mapped_size += nvme_m2b(ns, nlb);
|
2021-02-04 11:55:48 +03:00
|
|
|
|
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
2021-06-17 22:06:52 +03:00
|
|
|
bool pract = prinfo & NVME_PRINFO_PRACT;
|
2021-02-04 11:55:48 +03:00
|
|
|
|
2022-02-14 11:29:01 +03:00
|
|
|
if (pract && ns->lbaf.ms == nvme_pi_tuple_size(ns)) {
|
2021-02-04 11:55:48 +03:00
|
|
|
mapped_size = data_size;
|
|
|
|
}
|
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
trace_pci_nvme_read(nvme_cid(req), nvme_nsid(ns), nlb, mapped_size, slba);
|
2020-12-08 23:04:00 +03:00
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
status = nvme_check_mdts(n, mapped_size);
|
2020-12-08 23:04:00 +03:00
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
status = nvme_check_bounds(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
if (ns->params.zoned) {
|
|
|
|
status = nvme_check_zone_read(ns, slba, nlb);
|
2021-01-24 19:30:24 +03:00
|
|
|
if (status) {
|
2020-12-08 23:04:06 +03:00
|
|
|
trace_pci_nvme_err_zone_read_not_ok(slba, nlb, status);
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:00 +03:00
|
|
|
if (NVME_ERR_REC_DULBE(ns->features.err_rec)) {
|
|
|
|
status = nvme_check_dulbe(ns, slba, nlb);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-02-04 11:55:48 +03:00
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
|
|
|
return nvme_dif_rw(n, req);
|
|
|
|
}
|
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
status = nvme_map_data(n, nlb, req);
|
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:00 +03:00
|
|
|
data_offset = nvme_l2b(ns, slba);
|
|
|
|
|
|
|
|
block_acct_start(blk_get_stats(blk), &req->acct, data_size,
|
|
|
|
BLOCK_ACCT_READ);
|
2023-03-20 15:40:36 +03:00
|
|
|
nvme_blk_read(blk, data_offset, BDRV_SECTOR_SIZE, nvme_rw_cb, req);
|
2020-12-08 23:04:00 +03:00
|
|
|
return NVME_NO_COMPLETE;
|
|
|
|
|
|
|
|
invalid:
|
|
|
|
block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_READ);
|
|
|
|
return status | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2023-02-20 14:59:26 +03:00
|
|
|
static void nvme_do_write_fdp(NvmeCtrl *n, NvmeRequest *req, uint64_t slba,
|
|
|
|
uint32_t nlb)
|
|
|
|
{
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
uint64_t data_size = nvme_l2b(ns, nlb);
|
|
|
|
uint32_t dw12 = le32_to_cpu(req->cmd.cdw12);
|
|
|
|
uint8_t dtype = (dw12 >> 20) & 0xf;
|
|
|
|
uint16_t pid = le16_to_cpu(rw->dspec);
|
|
|
|
uint16_t ph, rg, ruhid;
|
|
|
|
NvmeReclaimUnit *ru;
|
|
|
|
|
|
|
|
if (dtype != NVME_DIRECTIVE_DATA_PLACEMENT ||
|
|
|
|
!nvme_parse_pid(ns, pid, &ph, &rg)) {
|
|
|
|
ph = 0;
|
|
|
|
rg = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
ruhid = ns->fdp.phs[ph];
|
|
|
|
ru = &ns->endgrp->fdp.ruhs[ruhid].rus[rg];
|
|
|
|
|
|
|
|
nvme_fdp_stat_inc(&ns->endgrp->fdp.hbmw, data_size);
|
|
|
|
nvme_fdp_stat_inc(&ns->endgrp->fdp.mbmw, data_size);
|
|
|
|
|
|
|
|
while (nlb) {
|
|
|
|
if (nlb < ru->ruamw) {
|
|
|
|
ru->ruamw -= nlb;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
nlb -= ru->ruamw;
|
|
|
|
nvme_update_ruh(n, ns, pid);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
static uint16_t nvme_do_write(NvmeCtrl *n, NvmeRequest *req, bool append,
|
|
|
|
bool wrz)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2020-07-20 13:44:01 +03:00
|
|
|
NvmeRwCmd *rw = (NvmeRwCmd *)&req->cmd;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
2013-06-04 19:17:10 +04:00
|
|
|
uint64_t slba = le64_to_cpu(rw->slba);
|
2020-12-08 23:04:00 +03:00
|
|
|
uint32_t nlb = (uint32_t)le16_to_cpu(rw->nlb) + 1;
|
2021-02-04 11:55:48 +03:00
|
|
|
uint16_t ctrl = le16_to_cpu(rw->control);
|
2021-06-17 22:06:52 +03:00
|
|
|
uint8_t prinfo = NVME_RW_PRINFO(ctrl);
|
2020-08-24 09:59:41 +03:00
|
|
|
uint64_t data_size = nvme_l2b(ns, nlb);
|
2020-11-23 13:24:55 +03:00
|
|
|
uint64_t mapped_size = data_size;
|
2020-12-08 23:04:00 +03:00
|
|
|
uint64_t data_offset;
|
2020-12-08 23:04:06 +03:00
|
|
|
NvmeZone *zone;
|
|
|
|
NvmeZonedResult *res = (NvmeZonedResult *)&req->cqe;
|
2020-09-30 22:22:27 +03:00
|
|
|
BlockBackend *blk = ns->blkconf.blk;
|
2020-02-23 19:32:25 +03:00
|
|
|
uint16_t status;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2020-11-23 13:24:55 +03:00
|
|
|
if (nvme_ns_ext(ns)) {
|
|
|
|
mapped_size += nvme_m2b(ns, nlb);
|
2021-02-04 11:55:48 +03:00
|
|
|
|
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
2021-06-17 22:06:52 +03:00
|
|
|
bool pract = prinfo & NVME_PRINFO_PRACT;
|
2021-02-04 11:55:48 +03:00
|
|
|
|
2022-02-14 11:29:01 +03:00
|
|
|
if (pract && ns->lbaf.ms == nvme_pi_tuple_size(ns)) {
|
2021-02-04 11:55:48 +03:00
|
|
|
mapped_size -= nvme_m2b(ns, nlb);
|
|
|
|
}
|
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:00 +03:00
|
|
|
trace_pci_nvme_write(nvme_cid(req), nvme_io_opc_str(rw->opcode),
|
2020-11-23 13:24:55 +03:00
|
|
|
nvme_nsid(ns), nlb, mapped_size, slba);
|
2017-11-03 16:37:53 +03:00
|
|
|
|
2020-12-08 23:04:01 +03:00
|
|
|
if (!wrz) {
|
2020-11-23 13:24:55 +03:00
|
|
|
status = nvme_check_mdts(n, mapped_size);
|
2020-12-08 23:04:01 +03:00
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
2020-02-23 19:38:22 +03:00
|
|
|
}
|
|
|
|
|
2020-11-09 14:23:18 +03:00
|
|
|
status = nvme_check_bounds(ns, slba, nlb);
|
2020-02-23 19:32:25 +03:00
|
|
|
if (status) {
|
2020-08-24 09:48:55 +03:00
|
|
|
goto invalid;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2015-10-28 18:33:11 +03:00
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
if (ns->params.zoned) {
|
|
|
|
zone = nvme_get_zone_by_slba(ns, slba);
|
2021-06-17 22:06:51 +03:00
|
|
|
assert(zone);
|
2020-12-08 23:04:06 +03:00
|
|
|
|
2021-01-19 14:42:58 +03:00
|
|
|
if (append) {
|
2021-02-04 11:55:48 +03:00
|
|
|
bool piremap = !!(ctrl & NVME_RW_PIREMAP);
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
if (unlikely(zone->d.za & NVME_ZA_ZRWA_VALID)) {
|
|
|
|
return NVME_INVALID_ZONE_OP | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-01-19 14:42:58 +03:00
|
|
|
if (unlikely(slba != zone->d.zslba)) {
|
|
|
|
trace_pci_nvme_err_append_not_at_start(slba, zone->d.zslba);
|
|
|
|
status = NVME_INVALID_FIELD;
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2021-03-12 16:55:29 +03:00
|
|
|
if (n->params.zasl &&
|
|
|
|
data_size > (uint64_t)n->page_size << n->params.zasl) {
|
|
|
|
trace_pci_nvme_err_zasl(data_size);
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
2021-01-19 14:42:58 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
slba = zone->w_ptr;
|
2021-02-04 11:55:48 +03:00
|
|
|
rw->slba = cpu_to_le64(slba);
|
2021-01-19 14:42:58 +03:00
|
|
|
res->slba = cpu_to_le64(slba);
|
2021-02-04 11:55:48 +03:00
|
|
|
|
|
|
|
switch (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
|
|
|
case NVME_ID_NS_DPS_TYPE_1:
|
|
|
|
if (!piremap) {
|
|
|
|
return NVME_INVALID_PROT_INFO | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* fallthrough */
|
|
|
|
|
|
|
|
case NVME_ID_NS_DPS_TYPE_2:
|
|
|
|
if (piremap) {
|
|
|
|
uint32_t reftag = le32_to_cpu(rw->reftag);
|
|
|
|
rw->reftag = cpu_to_le32(reftag + (slba - zone->d.zslba));
|
|
|
|
}
|
|
|
|
|
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_ID_NS_DPS_TYPE_3:
|
|
|
|
if (piremap) {
|
|
|
|
return NVME_INVALID_PROT_INFO | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
break;
|
|
|
|
}
|
2021-01-19 14:42:58 +03:00
|
|
|
}
|
|
|
|
|
2021-01-20 00:56:14 +03:00
|
|
|
status = nvme_check_zone_write(ns, zone, slba, nlb);
|
2021-01-24 19:30:24 +03:00
|
|
|
if (status) {
|
2020-12-08 23:04:06 +03:00
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2021-05-28 14:05:07 +03:00
|
|
|
status = nvme_zrm_auto(n, ns, zone);
|
2021-01-24 19:30:24 +03:00
|
|
|
if (status) {
|
2020-12-08 23:04:07 +03:00
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
if (!(zone->d.za & NVME_ZA_ZRWA_VALID)) {
|
|
|
|
zone->w_ptr += nlb;
|
|
|
|
}
|
2023-02-20 14:59:26 +03:00
|
|
|
} else if (ns->endgrp && ns->endgrp->fdp.enabled) {
|
|
|
|
nvme_do_write_fdp(n, req, slba, nlb);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:00 +03:00
|
|
|
data_offset = nvme_l2b(ns, slba);
|
|
|
|
|
2021-02-04 11:55:48 +03:00
|
|
|
if (NVME_ID_NS_DPS_TYPE(ns->id_ns.dps)) {
|
|
|
|
return nvme_dif_rw(n, req);
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:01 +03:00
|
|
|
if (!wrz) {
|
2020-11-23 13:24:55 +03:00
|
|
|
status = nvme_map_data(n, nlb, req);
|
2020-12-08 23:04:01 +03:00
|
|
|
if (status) {
|
|
|
|
goto invalid;
|
|
|
|
}
|
|
|
|
|
|
|
|
block_acct_start(blk_get_stats(blk), &req->acct, data_size,
|
|
|
|
BLOCK_ACCT_WRITE);
|
2023-03-20 15:40:36 +03:00
|
|
|
nvme_blk_write(blk, data_offset, BDRV_SECTOR_SIZE, nvme_rw_cb, req);
|
2020-09-30 22:22:27 +03:00
|
|
|
} else {
|
2020-12-08 23:04:01 +03:00
|
|
|
req->aiocb = blk_aio_pwrite_zeroes(blk, data_offset, data_size,
|
|
|
|
BDRV_REQ_MAY_UNMAP, nvme_rw_cb,
|
|
|
|
req);
|
2020-09-30 22:22:27 +03:00
|
|
|
}
|
2020-11-23 13:24:55 +03:00
|
|
|
|
2020-09-30 22:22:27 +03:00
|
|
|
return NVME_NO_COMPLETE;
|
2020-08-24 09:48:55 +03:00
|
|
|
|
|
|
|
invalid:
|
2020-12-08 23:04:00 +03:00
|
|
|
block_acct_invalid(blk_get_stats(blk), BLOCK_ACCT_WRITE);
|
|
|
|
return status | NVME_DNR;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
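
/*
 * Write, Write Zeroes and Zone Append are all funnelled through
 * nvme_do_write(); the append and wrz flags select the variant.
 */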
static inline uint16_t nvme_write(NvmeCtrl *n, NvmeRequest *req)
{
    return nvme_do_write(n, req, false, false);
}

static inline uint16_t nvme_write_zeroes(NvmeCtrl *n, NvmeRequest *req)
{
    return nvme_do_write(n, req, false, true);
}

static inline uint16_t nvme_zone_append(NvmeCtrl *n, NvmeRequest *req)
{
    return nvme_do_write(n, req, true, false);
}
|
|
|
|
|
|
|
|
static uint16_t nvme_get_mgmt_zone_slba_idx(NvmeNamespace *ns, NvmeCmd *c,
|
|
|
|
uint64_t *slba, uint32_t *zone_idx)
|
|
|
|
{
|
|
|
|
uint32_t dw10 = le32_to_cpu(c->cdw10);
|
|
|
|
uint32_t dw11 = le32_to_cpu(c->cdw11);
|
|
|
|
|
|
|
|
if (!ns->params.zoned) {
|
|
|
|
trace_pci_nvme_err_invalid_opc(c->opcode);
|
|
|
|
return NVME_INVALID_OPCODE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
*slba = ((uint64_t)dw11) << 32 | dw10;
|
|
|
|
if (unlikely(*slba >= ns->id_ns.nsze)) {
|
|
|
|
trace_pci_nvme_err_invalid_lba_range(*slba, 0, ns->id_ns.nsze);
|
|
|
|
*slba = 0;
|
|
|
|
return NVME_LBA_RANGE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
*zone_idx = nvme_zone_idx(ns, *slba);
|
|
|
|
assert(*zone_idx < ns->num_zones);
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
2020-12-10 01:43:15 +03:00
|
|
|
typedef uint16_t (*op_handler_t)(NvmeNamespace *, NvmeZone *, NvmeZoneState,
|
|
|
|
NvmeRequest *);
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
enum NvmeZoneProcessingMask {
|
|
|
|
NVME_PROC_CURRENT_ZONE = 0,
|
2020-12-10 01:00:20 +03:00
|
|
|
NVME_PROC_OPENED_ZONES = 1 << 0,
|
|
|
|
NVME_PROC_CLOSED_ZONES = 1 << 1,
|
|
|
|
NVME_PROC_READ_ONLY_ZONES = 1 << 2,
|
|
|
|
NVME_PROC_FULL_ZONES = 1 << 3,
|
2020-12-08 23:04:06 +03:00
|
|
|
};
|
|
|
|
|
|
|
|
static uint16_t nvme_open_zone(NvmeNamespace *ns, NvmeZone *zone,
|
2020-12-10 01:43:15 +03:00
|
|
|
NvmeZoneState state, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
2021-03-04 10:40:11 +03:00
|
|
|
NvmeZoneSendCmd *cmd = (NvmeZoneSendCmd *)&req->cmd;
|
|
|
|
int flags = 0;
|
|
|
|
|
|
|
|
if (cmd->zsflags & NVME_ZSFLAG_ZRWA_ALLOC) {
|
|
|
|
uint16_t ozcs = le16_to_cpu(ns->id_ns_zoned->ozcs);
|
|
|
|
|
|
|
|
if (!(ozcs & NVME_ID_NS_ZONED_OZCS_ZRWASUP)) {
|
|
|
|
return NVME_INVALID_ZONE_OP | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (zone->w_ptr % ns->zns.zrwafg) {
|
|
|
|
return NVME_NOZRWA | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
flags = NVME_ZRM_ZRWA;
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_zrm_open_flags(nvme_ctrl(req), ns, zone, flags);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_close_zone(NvmeNamespace *ns, NvmeZone *zone,
|
2020-12-10 01:43:15 +03:00
|
|
|
NvmeZoneState state, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
2021-01-19 23:01:15 +03:00
|
|
|
return nvme_zrm_close(ns, zone);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_finish_zone(NvmeNamespace *ns, NvmeZone *zone,
|
2020-12-10 01:43:15 +03:00
|
|
|
NvmeZoneState state, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
2021-01-19 23:01:15 +03:00
|
|
|
return nvme_zrm_finish(ns, zone);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_offline_zone(NvmeNamespace *ns, NvmeZone *zone,
|
2020-12-10 01:43:15 +03:00
|
|
|
NvmeZoneState state, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
|
|
|
switch (state) {
|
|
|
|
case NVME_ZONE_STATE_READ_ONLY:
|
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_OFFLINE);
|
|
|
|
/* fall through */
|
|
|
|
case NVME_ZONE_STATE_OFFLINE:
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
default:
|
|
|
|
return NVME_ZONE_INVAL_TRANSITION;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:08 +03:00
|
|
|
static uint16_t nvme_set_zd_ext(NvmeNamespace *ns, NvmeZone *zone)
|
|
|
|
{
|
|
|
|
uint16_t status;
|
|
|
|
uint8_t state = nvme_get_zone_state(zone);
|
|
|
|
|
|
|
|
if (state == NVME_ZONE_STATE_EMPTY) {
|
|
|
|
status = nvme_aor_check(ns, 1, 0);
|
2021-01-24 19:30:24 +03:00
|
|
|
if (status) {
|
2020-12-08 23:04:08 +03:00
|
|
|
return status;
|
|
|
|
}
|
|
|
|
nvme_aor_inc_active(ns);
|
|
|
|
zone->d.za |= NVME_ZA_ZD_EXT_VALID;
|
|
|
|
nvme_assign_zone_state(ns, zone, NVME_ZONE_STATE_CLOSED);
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_ZONE_INVAL_TRANSITION;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
static uint16_t nvme_bulk_proc_zone(NvmeNamespace *ns, NvmeZone *zone,
|
|
|
|
enum NvmeZoneProcessingMask proc_mask,
|
2020-12-10 01:43:15 +03:00
|
|
|
op_handler_t op_hndlr, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
|
|
|
uint16_t status = NVME_SUCCESS;
|
2020-12-10 01:12:49 +03:00
|
|
|
NvmeZoneState zs = nvme_get_zone_state(zone);
|
2020-12-08 23:04:06 +03:00
|
|
|
bool proc_zone;
|
|
|
|
|
|
|
|
switch (zs) {
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
2020-12-10 01:00:20 +03:00
|
|
|
proc_zone = proc_mask & NVME_PROC_OPENED_ZONES;
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
|
|
|
proc_zone = proc_mask & NVME_PROC_CLOSED_ZONES;
|
|
|
|
break;
|
|
|
|
case NVME_ZONE_STATE_READ_ONLY:
|
|
|
|
proc_zone = proc_mask & NVME_PROC_READ_ONLY_ZONES;
|
|
|
|
break;
|
|
|
|
case NVME_ZONE_STATE_FULL:
|
|
|
|
proc_zone = proc_mask & NVME_PROC_FULL_ZONES;
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
proc_zone = false;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (proc_zone) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = op_hndlr(ns, zone, zs, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_do_zone_op(NvmeNamespace *ns, NvmeZone *zone,
|
|
|
|
enum NvmeZoneProcessingMask proc_mask,
|
2020-12-10 01:43:15 +03:00
|
|
|
op_handler_t op_hndlr, NvmeRequest *req)
|
2020-12-08 23:04:06 +03:00
|
|
|
{
|
|
|
|
NvmeZone *next;
|
|
|
|
uint16_t status = NVME_SUCCESS;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!proc_mask) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = op_hndlr(ns, zone, nvme_get_zone_state(zone), req);
|
2020-12-08 23:04:06 +03:00
|
|
|
} else {
|
|
|
|
if (proc_mask & NVME_PROC_CLOSED_ZONES) {
|
|
|
|
QTAILQ_FOREACH_SAFE(zone, &ns->closed_zones, entry, next) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr,
|
|
|
|
req);
|
|
|
|
if (status && status != NVME_NO_COMPLETE) {
|
2020-12-08 23:04:06 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
2020-12-10 01:00:20 +03:00
|
|
|
if (proc_mask & NVME_PROC_OPENED_ZONES) {
|
2020-12-08 23:04:06 +03:00
|
|
|
QTAILQ_FOREACH_SAFE(zone, &ns->imp_open_zones, entry, next) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr,
|
|
|
|
req);
|
|
|
|
if (status && status != NVME_NO_COMPLETE) {
|
2020-12-08 23:04:06 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
2020-12-10 01:00:20 +03:00
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
QTAILQ_FOREACH_SAFE(zone, &ns->exp_open_zones, entry, next) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr,
|
|
|
|
req);
|
|
|
|
if (status && status != NVME_NO_COMPLETE) {
|
2020-12-08 23:04:06 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
if (proc_mask & NVME_PROC_FULL_ZONES) {
|
|
|
|
QTAILQ_FOREACH_SAFE(zone, &ns->full_zones, entry, next) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr,
|
|
|
|
req);
|
|
|
|
if (status && status != NVME_NO_COMPLETE) {
|
2020-12-08 23:04:06 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
if (proc_mask & NVME_PROC_READ_ONLY_ZONES) {
|
|
|
|
for (i = 0; i < ns->num_zones; i++, zone++) {
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_bulk_proc_zone(ns, zone, proc_mask, op_hndlr,
|
|
|
|
req);
|
|
|
|
if (status && status != NVME_NO_COMPLETE) {
|
2020-12-08 23:04:06 +03:00
|
|
|
goto out;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
out:
|
|
|
|
return status;
|
|
|
|
}
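
/*
 * Zone Management Send (Reset Zone) is handled by a cancellable AIOCB that
 * resets one zone per iteration (or every zone when the Select All flag is
 * set), write-zeroing the zone data and, in the epilogue callback, its
 * metadata.
 */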
typedef struct NvmeZoneResetAIOCB {
    BlockAIOCB common;
    BlockAIOCB *aiocb;
    NvmeRequest *req;
    int ret;

    bool all;
    int idx;
    NvmeZone *zone;
} NvmeZoneResetAIOCB;

static void nvme_zone_reset_cancel(BlockAIOCB *aiocb)
{
    NvmeZoneResetAIOCB *iocb = container_of(aiocb, NvmeZoneResetAIOCB, common);
    NvmeRequest *req = iocb->req;
    NvmeNamespace *ns = req->ns;

    iocb->idx = ns->num_zones;

    iocb->ret = -ECANCELED;

    if (iocb->aiocb) {
        blk_aio_cancel_async(iocb->aiocb);
        iocb->aiocb = NULL;
    }
}

static const AIOCBInfo nvme_zone_reset_aiocb_info = {
    .aiocb_size = sizeof(NvmeZoneResetAIOCB),
    .cancel_async = nvme_zone_reset_cancel,
};

static void nvme_zone_reset_cb(void *opaque, int ret);
|
|
|
|
|
|
|
|
static void nvme_zone_reset_epilogue_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeZoneResetAIOCB *iocb = opaque;
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
int64_t moff;
|
|
|
|
int count;
|
|
|
|
|
2022-11-10 09:59:47 +03:00
|
|
|
if (ret < 0 || iocb->ret < 0 || !ns->lbaf.ms) {
|
|
|
|
goto out;
|
2021-06-17 22:06:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
moff = nvme_moff(ns, iocb->zone->d.zslba);
|
|
|
|
count = nvme_m2b(ns, ns->zone_size);
|
|
|
|
|
|
|
|
iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, moff, count,
|
|
|
|
BDRV_REQ_MAY_UNMAP,
|
|
|
|
nvme_zone_reset_cb, iocb);
|
|
|
|
return;
|
2022-11-10 09:59:47 +03:00
|
|
|
|
|
|
|
out:
|
|
|
|
nvme_zone_reset_cb(iocb, ret);
|
2021-06-17 22:06:55 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_zone_reset_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeZoneResetAIOCB *iocb = opaque;
|
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
|
2022-11-10 09:59:47 +03:00
|
|
|
if (iocb->ret < 0) {
|
|
|
|
goto done;
|
|
|
|
} else if (ret < 0) {
|
2021-06-17 22:06:55 +03:00
|
|
|
iocb->ret = ret;
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (iocb->zone) {
|
|
|
|
nvme_zrm_reset(ns, iocb->zone);
|
|
|
|
|
|
|
|
if (!iocb->all) {
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
while (iocb->idx < ns->num_zones) {
|
|
|
|
NvmeZone *zone = &ns->zone_array[iocb->idx++];
|
|
|
|
|
|
|
|
switch (nvme_get_zone_state(zone)) {
|
|
|
|
case NVME_ZONE_STATE_EMPTY:
|
|
|
|
if (!iocb->all) {
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
|
|
|
continue;
|
|
|
|
|
|
|
|
case NVME_ZONE_STATE_EXPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_IMPLICITLY_OPEN:
|
|
|
|
case NVME_ZONE_STATE_CLOSED:
|
|
|
|
case NVME_ZONE_STATE_FULL:
|
|
|
|
iocb->zone = zone;
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
trace_pci_nvme_zns_zone_reset(zone->d.zslba);
|
|
|
|
|
|
|
|
iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk,
|
|
|
|
nvme_l2b(ns, zone->d.zslba),
|
|
|
|
nvme_l2b(ns, ns->zone_size),
|
|
|
|
BDRV_REQ_MAY_UNMAP,
|
|
|
|
nvme_zone_reset_epilogue_cb,
|
|
|
|
iocb);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
done:
|
|
|
|
iocb->aiocb = NULL;
|
2022-11-10 09:59:47 +03:00
|
|
|
|
|
|
|
iocb->common.cb(iocb->common.opaque, iocb->ret);
|
|
|
|
qemu_aio_unref(iocb);
|
2021-06-17 22:06:55 +03:00
|
|
|
}
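
/*
 * nvme_zone_reset_cb() is re-entered once per zone: it moves the zone just
 * processed back to Empty via nvme_zrm_reset(), then scans for the next zone
 * in a resettable state (open, closed or full) and kicks off the next
 * pwrite_zeroes. With the "select all" flag the scan continues to the end of
 * the zone array; otherwise only the single addressed zone is handled.
 */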
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
static uint16_t nvme_zone_mgmt_send_zrwa_flush(NvmeCtrl *n, NvmeZone *zone,
|
|
|
|
uint64_t elba, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint16_t ozcs = le16_to_cpu(ns->id_ns_zoned->ozcs);
|
|
|
|
uint64_t wp = zone->d.wp;
|
|
|
|
uint32_t nlb = elba - wp + 1;
|
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
|
|
|
|
if (!(ozcs & NVME_ID_NS_ZONED_OZCS_ZRWASUP)) {
|
|
|
|
return NVME_INVALID_ZONE_OP | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!(zone->d.za & NVME_ZA_ZRWA_VALID)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (elba < wp || elba > wp + ns->zns.zrwas) {
|
|
|
|
return NVME_ZONE_BOUNDARY_ERROR | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nlb % ns->zns.zrwafg) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
status = nvme_zrm_auto(n, ns, zone);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
zone->w_ptr += nlb;
|
|
|
|
|
|
|
|
nvme_advance_zone_wp(ns, zone, nlb);
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
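
/*
 * ZRWA flush checks, in order: the namespace must advertise ZRWA support
 * (OZCS.ZRWASUP), the zone must currently own a valid ZRWA, the flush LBA
 * must lie within the ZRWA window, and the flushed range must be a multiple
 * of the ZRWA flush granularity (zrwafg); only then is the write pointer
 * advanced.
 */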
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
static uint16_t nvme_zone_mgmt_send(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
2021-11-23 01:22:27 +03:00
|
|
|
NvmeZoneSendCmd *cmd = (NvmeZoneSendCmd *)&req->cmd;
|
2020-12-08 23:04:06 +03:00
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
NvmeZone *zone;
|
2021-06-17 22:06:55 +03:00
|
|
|
NvmeZoneResetAIOCB *iocb;
|
2020-12-08 23:04:08 +03:00
|
|
|
uint8_t *zd_ext;
|
2020-12-08 23:04:06 +03:00
|
|
|
uint64_t slba = 0;
|
|
|
|
uint32_t zone_idx = 0;
|
|
|
|
uint16_t status;
|
2021-11-23 01:22:27 +03:00
|
|
|
uint8_t action = cmd->zsa;
|
2020-12-08 23:04:06 +03:00
|
|
|
bool all;
|
|
|
|
enum NvmeZoneProcessingMask proc_mask = NVME_PROC_CURRENT_ZONE;
|
|
|
|
|
2021-11-23 01:22:27 +03:00
|
|
|
all = cmd->zsflags & NVME_ZSFLAG_SELECT_ALL;
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
req->status = NVME_SUCCESS;
|
|
|
|
|
|
|
|
if (!all) {
|
2021-11-23 01:22:27 +03:00
|
|
|
status = nvme_get_mgmt_zone_slba_idx(ns, &req->cmd, &slba, &zone_idx);
|
2020-12-08 23:04:06 +03:00
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
zone = &ns->zone_array[zone_idx];
|
2021-03-04 10:40:11 +03:00
|
|
|
if (slba != zone->d.zslba && action != NVME_ZONE_ACTION_ZRWA_FLUSH) {
|
2020-12-08 23:04:06 +03:00
|
|
|
trace_pci_nvme_err_unaligned_zone_cmd(action, slba, zone->d.zslba);
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (action) {
|
|
|
|
|
|
|
|
case NVME_ZONE_ACTION_OPEN:
|
|
|
|
if (all) {
|
|
|
|
proc_mask = NVME_PROC_CLOSED_ZONES;
|
|
|
|
}
|
|
|
|
trace_pci_nvme_open_zone(slba, zone_idx, all);
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_do_zone_op(ns, zone, proc_mask, nvme_open_zone, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_ZONE_ACTION_CLOSE:
|
|
|
|
if (all) {
|
2020-12-10 01:00:20 +03:00
|
|
|
proc_mask = NVME_PROC_OPENED_ZONES;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
trace_pci_nvme_close_zone(slba, zone_idx, all);
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_do_zone_op(ns, zone, proc_mask, nvme_close_zone, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_ZONE_ACTION_FINISH:
|
|
|
|
if (all) {
|
2020-12-10 01:00:20 +03:00
|
|
|
proc_mask = NVME_PROC_OPENED_ZONES | NVME_PROC_CLOSED_ZONES;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
trace_pci_nvme_finish_zone(slba, zone_idx, all);
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_do_zone_op(ns, zone, proc_mask, nvme_finish_zone, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_ZONE_ACTION_RESET:
|
|
|
|
trace_pci_nvme_reset_zone(slba, zone_idx, all);
|
2020-12-10 01:43:15 +03:00
|
|
|
|
2021-06-17 22:06:55 +03:00
|
|
|
iocb = blk_aio_get(&nvme_zone_reset_aiocb_info, ns->blkconf.blk,
|
|
|
|
nvme_misc_cb, req);
|
2020-12-10 01:43:15 +03:00
|
|
|
|
2021-06-17 22:06:55 +03:00
|
|
|
iocb->req = req;
|
|
|
|
iocb->ret = 0;
|
|
|
|
iocb->all = all;
|
|
|
|
iocb->idx = zone_idx;
|
|
|
|
iocb->zone = NULL;
|
2020-12-10 01:43:15 +03:00
|
|
|
|
2021-06-17 22:06:55 +03:00
|
|
|
req->aiocb = &iocb->common;
|
|
|
|
nvme_zone_reset_cb(iocb, 0);
|
2020-12-10 01:43:15 +03:00
|
|
|
|
2021-06-17 22:06:55 +03:00
|
|
|
return NVME_NO_COMPLETE;
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
case NVME_ZONE_ACTION_OFFLINE:
|
|
|
|
if (all) {
|
|
|
|
proc_mask = NVME_PROC_READ_ONLY_ZONES;
|
|
|
|
}
|
|
|
|
trace_pci_nvme_offline_zone(slba, zone_idx, all);
|
2020-12-10 01:43:15 +03:00
|
|
|
status = nvme_do_zone_op(ns, zone, proc_mask, nvme_offline_zone, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_ZONE_ACTION_SET_ZD_EXT:
|
|
|
|
trace_pci_nvme_set_descriptor_extension(slba, zone_idx);
|
2020-12-08 23:04:08 +03:00
|
|
|
if (all || !ns->params.zd_extension_size) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
zd_ext = nvme_get_zd_extension(ns, zone_idx);
|
2020-12-15 21:18:25 +03:00
|
|
|
status = nvme_h2c(n, zd_ext, ns->params.zd_extension_size, req);
|
2020-12-08 23:04:08 +03:00
|
|
|
if (status) {
|
|
|
|
trace_pci_nvme_err_zd_extension_map_error(zone_idx);
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
status = nvme_set_zd_ext(ns, zone);
|
|
|
|
if (status == NVME_SUCCESS) {
|
|
|
|
trace_pci_nvme_zd_extension_set(zone_idx);
|
|
|
|
return status;
|
|
|
|
}
|
2020-12-08 23:04:06 +03:00
|
|
|
break;
|
|
|
|
|
2021-03-04 10:40:11 +03:00
|
|
|
case NVME_ZONE_ACTION_ZRWA_FLUSH:
|
|
|
|
if (all) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_zone_mgmt_send_zrwa_flush(n, zone, slba, req);
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
default:
|
|
|
|
trace_pci_nvme_err_invalid_mgmt_action(action);
|
|
|
|
status = NVME_INVALID_FIELD;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (status == NVME_ZONE_INVAL_TRANSITION) {
|
|
|
|
trace_pci_nvme_err_invalid_zone_state_transition(action, slba,
|
|
|
|
zone->d.za);
|
|
|
|
}
|
|
|
|
if (status) {
|
|
|
|
status |= NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return status;
|
|
|
|
}
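
/*
 * For reference (outside the scope of this file), the code paths above are
 * typically exercised from a Linux guest with nvme-cli's ZNS subcommands,
 * e.g. 'nvme zns open-zone', 'nvme zns close-zone', 'nvme zns finish-zone'
 * and 'nvme zns reset-zone', with or without the "select all" option.
 */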
|
|
|
|
|
|
|
|
static bool nvme_zone_matches_filter(uint32_t zafs, NvmeZone *zl)
|
|
|
|
{
|
2020-12-10 01:12:49 +03:00
|
|
|
NvmeZoneState zs = nvme_get_zone_state(zl);
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
switch (zafs) {
|
|
|
|
case NVME_ZONE_REPORT_ALL:
|
|
|
|
return true;
|
|
|
|
case NVME_ZONE_REPORT_EMPTY:
|
|
|
|
return zs == NVME_ZONE_STATE_EMPTY;
|
|
|
|
case NVME_ZONE_REPORT_IMPLICITLY_OPEN:
|
|
|
|
return zs == NVME_ZONE_STATE_IMPLICITLY_OPEN;
|
|
|
|
case NVME_ZONE_REPORT_EXPLICITLY_OPEN:
|
|
|
|
return zs == NVME_ZONE_STATE_EXPLICITLY_OPEN;
|
|
|
|
case NVME_ZONE_REPORT_CLOSED:
|
|
|
|
return zs == NVME_ZONE_STATE_CLOSED;
|
|
|
|
case NVME_ZONE_REPORT_FULL:
|
|
|
|
return zs == NVME_ZONE_STATE_FULL;
|
|
|
|
case NVME_ZONE_REPORT_READ_ONLY:
|
|
|
|
return zs == NVME_ZONE_STATE_READ_ONLY;
|
|
|
|
case NVME_ZONE_REPORT_OFFLINE:
|
|
|
|
return zs == NVME_ZONE_STATE_OFFLINE;
|
|
|
|
default:
|
|
|
|
return false;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_zone_mgmt_recv(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeCmd *cmd = (NvmeCmd *)&req->cmd;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
/* cdw12 is zero-based number of dwords to return. Convert to bytes */
|
|
|
|
uint32_t data_size = (le32_to_cpu(cmd->cdw12) + 1) << 2;
|
|
|
|
uint32_t dw13 = le32_to_cpu(cmd->cdw13);
|
|
|
|
uint32_t zone_idx, zra, zrasf, partial;
|
|
|
|
uint64_t max_zones, nr_zones = 0;
|
|
|
|
uint16_t status;
|
2021-03-09 17:11:42 +03:00
|
|
|
uint64_t slba;
|
2020-12-08 23:04:06 +03:00
|
|
|
NvmeZoneDescr *z;
|
|
|
|
NvmeZone *zone;
|
|
|
|
NvmeZoneReportHeader *header;
|
|
|
|
void *buf, *buf_p;
|
|
|
|
size_t zone_entry_sz;
|
2021-03-09 17:11:42 +03:00
|
|
|
int i;
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
req->status = NVME_SUCCESS;
|
|
|
|
|
|
|
|
status = nvme_get_mgmt_zone_slba_idx(ns, cmd, &slba, &zone_idx);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
zra = dw13 & 0xff;
|
2020-12-08 23:04:08 +03:00
|
|
|
if (zra != NVME_ZONE_REPORT && zra != NVME_ZONE_REPORT_EXTENDED) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
if (zra == NVME_ZONE_REPORT_EXTENDED && !ns->params.zd_extension_size) {
|
2020-12-08 23:04:06 +03:00
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
zrasf = (dw13 >> 8) & 0xff;
|
|
|
|
if (zrasf > NVME_ZONE_REPORT_OFFLINE) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (data_size < sizeof(NvmeZoneReportHeader)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
status = nvme_check_mdts(n, data_size);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
partial = (dw13 >> 16) & 0x01;
|
|
|
|
|
|
|
|
zone_entry_sz = sizeof(NvmeZoneDescr);
|
2020-12-08 23:04:08 +03:00
|
|
|
if (zra == NVME_ZONE_REPORT_EXTENDED) {
|
|
|
|
zone_entry_sz += ns->params.zd_extension_size;
|
|
|
|
}
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
max_zones = (data_size - sizeof(NvmeZoneReportHeader)) / zone_entry_sz;
|
|
|
|
buf = g_malloc0(data_size);
|
|
|
|
|
|
|
|
zone = &ns->zone_array[zone_idx];
|
2021-03-09 17:11:42 +03:00
|
|
|
for (i = zone_idx; i < ns->num_zones; i++) {
|
2020-12-08 23:04:06 +03:00
|
|
|
if (partial && nr_zones >= max_zones) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
if (nvme_zone_matches_filter(zrasf, zone++)) {
|
|
|
|
nr_zones++;
|
|
|
|
}
|
|
|
|
}
|
2022-11-23 16:38:11 +03:00
|
|
|
header = buf;
|
2020-12-08 23:04:06 +03:00
|
|
|
header->nr_zones = cpu_to_le64(nr_zones);
|
|
|
|
|
|
|
|
buf_p = buf + sizeof(NvmeZoneReportHeader);
|
|
|
|
for (; zone_idx < ns->num_zones && max_zones > 0; zone_idx++) {
|
|
|
|
zone = &ns->zone_array[zone_idx];
|
|
|
|
if (nvme_zone_matches_filter(zrasf, zone)) {
|
2022-11-23 16:38:11 +03:00
|
|
|
z = buf_p;
|
2020-12-08 23:04:06 +03:00
|
|
|
buf_p += sizeof(NvmeZoneDescr);
|
|
|
|
|
|
|
|
z->zt = zone->d.zt;
|
|
|
|
z->zs = zone->d.zs;
|
|
|
|
z->zcap = cpu_to_le64(zone->d.zcap);
|
|
|
|
z->zslba = cpu_to_le64(zone->d.zslba);
|
|
|
|
z->za = zone->d.za;
|
|
|
|
|
|
|
|
if (nvme_wp_is_valid(zone)) {
|
|
|
|
z->wp = cpu_to_le64(zone->d.wp);
|
|
|
|
} else {
|
|
|
|
z->wp = cpu_to_le64(~0ULL);
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:08 +03:00
|
|
|
if (zra == NVME_ZONE_REPORT_EXTENDED) {
|
|
|
|
if (zone->d.za & NVME_ZA_ZD_EXT_VALID) {
|
|
|
|
memcpy(buf_p, nvme_get_zd_extension(ns, zone_idx),
|
|
|
|
ns->params.zd_extension_size);
|
|
|
|
}
|
|
|
|
buf_p += ns->params.zd_extension_size;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:06 +03:00
|
|
|
max_zones--;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-12-15 21:18:25 +03:00
|
|
|
status = nvme_c2h(n, (uint8_t *)buf, data_size, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
|
|
|
|
g_free(buf);
|
|
|
|
|
|
|
|
return status;
|
2020-12-08 23:04:01 +03:00
|
|
|
}
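
/*
 * Zone Management Receive builds the report in two passes: the first pass
 * only counts zones matching the ZRASF filter for the report header (capped
 * when a partial report is requested), the second pass fills in up to
 * 'max_zones' zone descriptors, each optionally followed by its descriptor
 * extension when the extended report (ZRA) is requested.
 */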
|
|
|
|
|
2023-02-20 14:59:26 +03:00
|
|
|
static uint16_t nvme_io_mgmt_recv_ruhs(NvmeCtrl *n, NvmeRequest *req,
|
|
|
|
size_t len)
|
|
|
|
{
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
NvmeEnduranceGroup *endgrp;
|
|
|
|
NvmeRuhStatus *hdr;
|
|
|
|
NvmeRuhStatusDescr *ruhsd;
|
|
|
|
unsigned int nruhsd;
|
|
|
|
uint16_t rg, ph, *ruhid;
|
|
|
|
size_t trans_len;
|
|
|
|
g_autofree uint8_t *buf = NULL;
|
|
|
|
|
|
|
|
if (!n->subsys) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (ns->params.nsid == 0 || ns->params.nsid == 0xffffffff) {
|
|
|
|
return NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!n->subsys->endgrp.fdp.enabled) {
|
|
|
|
return NVME_FDP_DISABLED | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
endgrp = ns->endgrp;
|
|
|
|
|
|
|
|
nruhsd = ns->fdp.nphs * endgrp->fdp.nrg;
|
|
|
|
trans_len = sizeof(NvmeRuhStatus) + nruhsd * sizeof(NvmeRuhStatusDescr);
|
|
|
|
buf = g_malloc(trans_len);
|
|
|
|
|
|
|
|
trans_len = MIN(trans_len, len);
|
|
|
|
|
|
|
|
hdr = (NvmeRuhStatus *)buf;
|
|
|
|
ruhsd = (NvmeRuhStatusDescr *)(buf + sizeof(NvmeRuhStatus));
|
|
|
|
|
|
|
|
hdr->nruhsd = cpu_to_le16(nruhsd);
|
|
|
|
|
|
|
|
ruhid = ns->fdp.phs;
|
|
|
|
|
|
|
|
for (ph = 0; ph < ns->fdp.nphs; ph++, ruhid++) {
|
|
|
|
NvmeRuHandle *ruh = &endgrp->fdp.ruhs[*ruhid];
|
|
|
|
|
|
|
|
for (rg = 0; rg < endgrp->fdp.nrg; rg++, ruhsd++) {
|
|
|
|
uint16_t pid = nvme_make_pid(ns, rg, ph);
|
|
|
|
|
|
|
|
ruhsd->pid = cpu_to_le16(pid);
|
|
|
|
ruhsd->ruhid = *ruhid;
|
|
|
|
ruhsd->earutr = 0;
|
|
|
|
ruhsd->ruamw = cpu_to_le64(ruh->rus[rg].ruamw);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_c2h(n, buf, trans_len, req);
|
|
|
|
}
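
/*
 * The reclaim unit handle status buffer is an NvmeRuhStatus header followed
 * by one descriptor per (placement handle, reclaim group) pair; only the
 * first 'len' bytes requested by the host are actually transferred.
 */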
|
|
|
|
|
|
|
|
static uint16_t nvme_io_mgmt_recv(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeCmd *cmd = &req->cmd;
|
|
|
|
uint32_t cdw10 = le32_to_cpu(cmd->cdw10);
|
|
|
|
uint32_t numd = le32_to_cpu(cmd->cdw11);
|
|
|
|
uint8_t mo = (cdw10 & 0xff);
|
|
|
|
size_t len = (numd + 1) << 2;
|
|
|
|
|
|
|
|
switch (mo) {
|
|
|
|
case NVME_IOMR_MO_NOP:
|
|
|
|
return 0;
|
|
|
|
case NVME_IOMR_MO_RUH_STATUS:
|
|
|
|
return nvme_io_mgmt_recv_ruhs(n, req, len);
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
};
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_io_mgmt_send_ruh_update(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeCmd *cmd = &req->cmd;
|
|
|
|
NvmeNamespace *ns = req->ns;
|
|
|
|
uint32_t cdw10 = le32_to_cpu(cmd->cdw10);
|
|
|
|
uint16_t ret = NVME_SUCCESS;
|
|
|
|
uint32_t npid = (cdw10 >> 1) + 1;
|
|
|
|
unsigned int i = 0;
|
|
|
|
g_autofree uint16_t *pids = NULL;
|
|
|
|
uint32_t maxnpid = n->subsys->endgrp.fdp.nrg * n->subsys->endgrp.fdp.nruh;
|
|
|
|
|
|
|
|
if (unlikely(npid >= MIN(NVME_FDP_MAXPIDS, maxnpid))) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
pids = g_new(uint16_t, npid);
|
|
|
|
|
|
|
|
ret = nvme_h2c(n, pids, npid * sizeof(uint16_t), req);
|
|
|
|
if (ret) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (; i < npid; i++) {
|
|
|
|
if (!nvme_update_ruh(n, ns, pids[i])) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return ret;
|
|
|
|
}
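
/*
 * Reclaim unit handle update: the host supplies a list of placement
 * identifiers, each of which is validated and applied via nvme_update_ruh();
 * the command fails with Invalid Field if any identifier cannot be updated.
 */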
|
|
|
|
|
|
|
|
static uint16_t nvme_io_mgmt_send(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeCmd *cmd = &req->cmd;
|
|
|
|
uint32_t cdw10 = le32_to_cpu(cmd->cdw10);
|
|
|
|
uint8_t mo = (cdw10 & 0xff);
|
|
|
|
|
|
|
|
switch (mo) {
|
|
|
|
case NVME_IOMS_MO_NOP:
|
|
|
|
return 0;
|
|
|
|
case NVME_IOMS_MO_RUH_UPDATE:
|
|
|
|
return nvme_io_mgmt_send_ruh_update(n, req);
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
};
|
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_io_cmd(NvmeCtrl *n, NvmeRequest *req)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2021-04-14 21:43:50 +03:00
|
|
|
NvmeNamespace *ns;
|
2020-07-20 13:44:01 +03:00
|
|
|
uint32_t nsid = le32_to_cpu(req->cmd.nsid);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
trace_pci_nvme_io_cmd(nvme_cid(req), nsid, nvme_sqid(req),
|
2020-08-24 23:11:33 +03:00
|
|
|
req->cmd.opcode, nvme_io_opc_str(req->cmd.opcode));
|
2020-07-06 09:12:48 +03:00
|
|
|
|
2019-06-26 09:51:06 +03:00
|
|
|
if (!nvme_nsid_valid(n, nsid)) {
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-01-25 12:39:24 +03:00
|
|
|
/*
|
|
|
|
* In the base NVM command set, Flush may apply to all namespaces
|
2021-04-16 06:52:28 +03:00
|
|
|
* (indicated by NSID being set to FFFFFFFFh). But if that feature is used
|
2021-01-25 12:39:24 +03:00
|
|
|
* along with TP 4056 (Namespace Types), it may be pretty screwed up.
|
|
|
|
*
|
2021-04-16 06:52:28 +03:00
|
|
|
* If NSID is indeed set to FFFFFFFFh, we simply cannot associate the
|
2021-01-25 12:39:24 +03:00
|
|
|
* opcode with a specific command since we cannot determine a unique I/O
|
2021-04-16 06:52:28 +03:00
|
|
|
* command set. Opcode 0h could have any other meaning than something
|
2021-01-25 12:39:24 +03:00
|
|
|
* equivalent to flushing and say it DOES have completely different
|
2021-04-16 06:52:28 +03:00
|
|
|
* semantics in some other command set - does an NSID of FFFFFFFFh then
|
2021-01-25 12:39:24 +03:00
|
|
|
* mean "for all namespaces, apply whatever command set specific command
|
2021-04-16 06:52:28 +03:00
|
|
|
* that uses the 0h opcode?" Or does it mean "for all namespaces, apply
|
|
|
|
* whatever command that uses the 0h opcode if, and only if, it allows NSID
|
|
|
|
* to be FFFFFFFFh"?
|
2021-01-25 12:39:24 +03:00
|
|
|
*
|
|
|
|
* Anyway (and luckily), for now, we do not care about this since the
|
|
|
|
* device only supports namespace types that includes the NVM Flush command
|
|
|
|
* (NVM and Zoned), so always do an NVM Flush.
|
|
|
|
*/
|
|
|
|
if (req->cmd.opcode == NVME_CMD_FLUSH) {
|
|
|
|
return nvme_flush(n, req);
|
|
|
|
}
|
|
|
|
|
2021-04-14 21:43:50 +03:00
|
|
|
ns = nvme_ns(n, nsid);
|
|
|
|
if (unlikely(!ns)) {
|
2019-06-26 09:51:06 +03:00
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-04-14 21:43:50 +03:00
|
|
|
if (!(ns->iocs[req->cmd.opcode] & NVME_CMD_EFF_CSUPP)) {
|
2020-12-08 23:04:02 +03:00
|
|
|
trace_pci_nvme_err_invalid_opc(req->cmd.opcode);
|
|
|
|
return NVME_INVALID_OPCODE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-04-14 21:43:50 +03:00
|
|
|
if (ns->status) {
|
|
|
|
return ns->status;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
|
|
|
|
2021-09-15 18:43:30 +03:00
|
|
|
if (NVME_CMD_FLAGS_FUSE(req->cmd.flags)) {
|
|
|
|
return NVME_INVALID_FIELD;
|
|
|
|
}
|
|
|
|
|
2021-04-14 21:43:50 +03:00
|
|
|
req->ns = ns;
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
switch (req->cmd.opcode) {
|
2020-03-31 00:10:13 +03:00
|
|
|
case NVME_CMD_WRITE_ZEROES:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_write_zeroes(n, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_CMD_ZONE_APPEND:
|
|
|
|
return nvme_zone_append(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_CMD_WRITE:
|
2020-12-08 23:04:00 +03:00
|
|
|
return nvme_write(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_CMD_READ:
|
2020-12-08 23:04:00 +03:00
|
|
|
return nvme_read(n, req);
|
2020-11-16 13:14:02 +03:00
|
|
|
case NVME_CMD_COMPARE:
|
|
|
|
return nvme_compare(n, req);
|
2020-10-21 15:03:19 +03:00
|
|
|
case NVME_CMD_DSM:
|
|
|
|
return nvme_dsm(n, req);
|
2021-02-09 20:29:42 +03:00
|
|
|
case NVME_CMD_VERIFY:
|
|
|
|
return nvme_verify(n, req);
|
2020-11-06 12:46:01 +03:00
|
|
|
case NVME_CMD_COPY:
|
|
|
|
return nvme_copy(n, req);
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_CMD_ZONE_MGMT_SEND:
|
|
|
|
return nvme_zone_mgmt_send(n, req);
|
|
|
|
case NVME_CMD_ZONE_MGMT_RECV:
|
|
|
|
return nvme_zone_mgmt_recv(n, req);
|
2023-02-20 14:59:26 +03:00
|
|
|
case NVME_CMD_IO_MGMT_RECV:
|
|
|
|
return nvme_io_mgmt_recv(n, req);
|
|
|
|
case NVME_CMD_IO_MGMT_SEND:
|
|
|
|
return nvme_io_mgmt_send(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
default:
|
2020-12-08 23:04:02 +03:00
|
|
|
assert(false);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2020-12-08 23:04:02 +03:00
|
|
|
|
|
|
|
return NVME_INVALID_OPCODE | NVME_DNR;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
2022-07-05 17:24:03 +03:00
|
|
|
static void nvme_cq_notifier(EventNotifier *e)
|
|
|
|
{
|
|
|
|
NvmeCQueue *cq = container_of(e, NvmeCQueue, notifier);
|
|
|
|
NvmeCtrl *n = cq->ctrl;
|
|
|
|
|
2022-07-28 09:36:07 +03:00
|
|
|
if (!event_notifier_test_and_clear(e)) {
|
|
|
|
return;
|
|
|
|
}
|
2022-07-05 17:24:03 +03:00
|
|
|
|
|
|
|
nvme_update_cq_head(cq);
|
|
|
|
|
|
|
|
if (cq->tail == cq->head) {
|
|
|
|
if (cq->irq_enabled) {
|
|
|
|
n->cq_pending--;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_irq_deassert(n, cq);
|
|
|
|
}
|
|
|
|
|
2022-10-19 23:28:02 +03:00
|
|
|
qemu_bh_schedule(cq->bh);
|
2022-07-05 17:24:03 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static int nvme_init_cq_ioeventfd(NvmeCQueue *cq)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = cq->ctrl;
|
|
|
|
uint16_t offset = (cq->cqid << 3) + (1 << 2);
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = event_notifier_init(&cq->notifier, 0);
|
|
|
|
if (ret < 0) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
event_notifier_set_handler(&cq->notifier, nvme_cq_notifier);
|
|
|
|
memory_region_add_eventfd(&n->iomem,
|
|
|
|
0x1000 + offset, 4, false, 0, &cq->notifier);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
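
/*
 * Doorbell layout note: with a doorbell stride (CAP.DSTRD) of zero, the
 * submission queue y tail doorbell sits at BAR0 offset 1000h + (y << 3) and
 * the completion queue y head doorbell at 1000h + (y << 3) + 4, which is
 * exactly the '(qid << 3) (+ (1 << 2))' offsets used when wiring up the
 * ioeventfds here.
 */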
|
|
|
|
|
|
|
|
static void nvme_sq_notifier(EventNotifier *e)
|
|
|
|
{
|
|
|
|
NvmeSQueue *sq = container_of(e, NvmeSQueue, notifier);
|
|
|
|
|
2022-07-28 09:36:07 +03:00
|
|
|
if (!event_notifier_test_and_clear(e)) {
|
|
|
|
return;
|
|
|
|
}
|
2022-07-05 17:24:03 +03:00
|
|
|
|
|
|
|
nvme_process_sq(sq);
|
|
|
|
}
|
|
|
|
|
|
|
|
static int nvme_init_sq_ioeventfd(NvmeSQueue *sq)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = sq->ctrl;
|
|
|
|
uint16_t offset = sq->sqid << 3;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = event_notifier_init(&sq->notifier, 0);
|
|
|
|
if (ret < 0) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
event_notifier_set_handler(&sq->notifier, nvme_sq_notifier);
|
|
|
|
memory_region_add_eventfd(&n->iomem,
|
|
|
|
0x1000 + offset, 4, false, 0, &sq->notifier);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static void nvme_free_sq(NvmeSQueue *sq, NvmeCtrl *n)
|
|
|
|
{
|
2022-07-05 17:24:03 +03:00
|
|
|
uint16_t offset = sq->sqid << 3;
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
n->sq[sq->sqid] = NULL;
|
2022-10-19 23:28:02 +03:00
|
|
|
qemu_bh_delete(sq->bh);
|
2022-07-05 17:24:03 +03:00
|
|
|
if (sq->ioeventfd_enabled) {
|
|
|
|
memory_region_del_eventfd(&n->iomem,
|
|
|
|
0x1000 + offset, 4, false, 0, &sq->notifier);
|
2022-07-28 09:48:51 +03:00
|
|
|
event_notifier_set_handler(&sq->notifier, NULL);
|
2022-07-05 17:24:03 +03:00
|
|
|
event_notifier_cleanup(&sq->notifier);
|
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
g_free(sq->io_req);
|
|
|
|
if (sq->sqid) {
|
|
|
|
g_free(sq);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_del_sq(NvmeCtrl *n, NvmeRequest *req)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2020-07-20 13:44:01 +03:00
|
|
|
NvmeDeleteQ *c = (NvmeDeleteQ *)&req->cmd;
|
|
|
|
NvmeRequest *r, *next;
|
2013-06-04 19:17:10 +04:00
|
|
|
NvmeSQueue *sq;
|
|
|
|
NvmeCQueue *cq;
|
|
|
|
uint16_t qid = le16_to_cpu(c->qid);
|
|
|
|
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(!qid || nvme_check_sqid(n, qid))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_del_sq(qid);
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_QID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_del_sq(qid);
|
2017-11-03 16:37:53 +03:00
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
sq = n->sq[qid];
|
|
|
|
while (!QTAILQ_EMPTY(&sq->out_req_list)) {
|
2020-07-20 13:44:01 +03:00
|
|
|
r = QTAILQ_FIRST(&sq->out_req_list);
|
2021-06-17 22:06:57 +03:00
|
|
|
assert(r->aiocb);
|
|
|
|
blk_aio_cancel(r->aiocb);
|
2021-04-08 13:44:05 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
assert(QTAILQ_EMPTY(&sq->out_req_list));
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
if (!nvme_check_cqid(n, sq->cqid)) {
|
|
|
|
cq = n->cq[sq->cqid];
|
|
|
|
QTAILQ_REMOVE(&cq->sq_list, sq, entry);
|
|
|
|
|
|
|
|
nvme_post_cqes(cq);
|
2020-07-20 13:44:01 +03:00
|
|
|
QTAILQ_FOREACH_SAFE(r, &cq->req_list, entry, next) {
|
|
|
|
if (r->sq == sq) {
|
|
|
|
QTAILQ_REMOVE(&cq->req_list, r, entry);
|
|
|
|
QTAILQ_INSERT_TAIL(&sq->req_list, r, entry);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_free_sq(sq, n);
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_init_sq(NvmeSQueue *sq, NvmeCtrl *n, uint64_t dma_addr,
|
2020-08-24 09:58:56 +03:00
|
|
|
uint16_t sqid, uint16_t cqid, uint16_t size)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
|
|
|
int i;
|
|
|
|
NvmeCQueue *cq;
|
|
|
|
|
|
|
|
sq->ctrl = n;
|
|
|
|
sq->dma_addr = dma_addr;
|
|
|
|
sq->sqid = sqid;
|
|
|
|
sq->size = size;
|
|
|
|
sq->cqid = cqid;
|
|
|
|
sq->head = sq->tail = 0;
|
2020-02-23 18:37:49 +03:00
|
|
|
sq->io_req = g_new0(NvmeRequest, sq->size);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
|
|
|
QTAILQ_INIT(&sq->req_list);
|
|
|
|
QTAILQ_INIT(&sq->out_req_list);
|
|
|
|
for (i = 0; i < sq->size; i++) {
|
|
|
|
sq->io_req[i].sq = sq;
|
|
|
|
QTAILQ_INSERT_TAIL(&(sq->req_list), &sq->io_req[i], entry);
|
|
|
|
}
|
2022-10-19 23:28:02 +03:00
|
|
|
|
|
|
|
sq->bh = qemu_bh_new(nvme_process_sq, sq);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2022-06-16 15:34:07 +03:00
|
|
|
if (n->dbbuf_enabled) {
|
|
|
|
sq->db_addr = n->dbbuf_dbs + (sqid << 3);
|
|
|
|
sq->ei_addr = n->dbbuf_eis + (sqid << 3);
|
2022-07-05 17:24:03 +03:00
|
|
|
|
|
|
|
if (n->params.ioeventfd && sq->sqid != 0) {
|
|
|
|
if (!nvme_init_sq_ioeventfd(sq)) {
|
|
|
|
sq->ioeventfd_enabled = true;
|
|
|
|
}
|
|
|
|
}
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
assert(n->cq[cqid]);
|
|
|
|
cq = n->cq[cqid];
|
|
|
|
QTAILQ_INSERT_TAIL(&(cq->sq_list), sq, entry);
|
|
|
|
n->sq[sqid] = sq;
|
|
|
|
}
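
/*
 * When the shadow doorbell feature is enabled (dbbuf_enabled), the queue
 * records its slots in the host-supplied doorbell and event-index buffers;
 * for non-admin queues an ioeventfd may additionally be registered so that
 * guest doorbell writes are delivered through an event notifier instead of
 * the regular MMIO write path.
 */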
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_create_sq(NvmeCtrl *n, NvmeRequest *req)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
|
|
|
NvmeSQueue *sq;
|
2020-07-20 13:44:01 +03:00
|
|
|
NvmeCreateSq *c = (NvmeCreateSq *)&req->cmd;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
|
|
|
uint16_t cqid = le16_to_cpu(c->cqid);
|
|
|
|
uint16_t sqid = le16_to_cpu(c->sqid);
|
|
|
|
uint16_t qsize = le16_to_cpu(c->qsize);
|
|
|
|
uint16_t qflags = le16_to_cpu(c->sq_flags);
|
|
|
|
uint64_t prp1 = le64_to_cpu(c->prp1);
|
|
|
|
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_create_sq(prp1, sqid, cqid, qsize, qflags);
|
2017-11-03 16:37:53 +03:00
|
|
|
|
|
|
|
if (unlikely(!cqid || nvme_check_cqid(n, cqid))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_create_sq_cqid(cqid);
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_CQID | NVME_DNR;
|
|
|
|
}
|
hw/nvme: Make max_ioqpairs and msix_qsize configurable in runtime
The NVMe device defines two properties: max_ioqpairs, msix_qsize. Having
them as constants is problematic for SR-IOV support.
SR-IOV introduces virtual resources (queues, interrupts) that can be
assigned to PF and its dependent VFs. Each device, following a reset,
should work with the configured number of queues. A single constant is
no longer sufficient to hold the whole state.
This patch tries to solve the problem by introducing additional
variables in NvmeCtrl’s state. The variables for, e.g., managing queues
are therefore organized as:
- n->params.max_ioqpairs – no changes, constant set by the user
- n->(mutable_state) – (not a part of this patch) user-configurable,
specifies number of queues available _after_
reset
- n->conf_ioqpairs - (new) used in all the places instead of the ‘old’
n->params.max_ioqpairs; initialized in realize()
and updated during reset() to reflect user’s
changes to the mutable state
Since the number of available i/o queues and interrupts can change in
runtime, buffers for sq/cqs and the MSIX-related structures are
allocated big enough to handle the limits, to completely avoid the
complicated reallocation. A helper function (nvme_update_msixcap_ts)
updates the corresponding capability register, to signal configuration
changes.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:13 +03:00
|
|
|
if (unlikely(!sqid || sqid > n->conf_ioqpairs || n->sq[sqid] != NULL)) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_create_sq_sqid(sqid);
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_QID | NVME_DNR;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(!qsize || qsize > NVME_CAP_MQES(ldq_le_p(&n->bar.cap)))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_create_sq_size(qsize);
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_MAX_QSIZE_EXCEEDED | NVME_DNR;
|
|
|
|
}
|
2020-10-22 09:28:46 +03:00
|
|
|
if (unlikely(prp1 & (n->page_size - 1))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_create_sq_addr(prp1);
|
2020-10-22 09:28:46 +03:00
|
|
|
return NVME_INVALID_PRP_OFFSET | NVME_DNR;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(!(NVME_SQ_FLAGS_PC(qflags)))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_create_sq_qflags(NVME_SQ_FLAGS_PC(qflags));
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
sq = g_malloc0(sizeof(*sq));
|
|
|
|
nvme_init_sq(sq, n, prp1, sqid, cqid, qsize + 1);
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
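
/*
 * Note that QSIZE in Create I/O Submission Queue is a zero-based value, hence
 * the 'qsize + 1' passed to nvme_init_sq() above.
 */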
|
|
|
|
|
2020-09-30 20:15:50 +03:00
|
|
|
struct nvme_stats {
|
|
|
|
uint64_t units_read;
|
|
|
|
uint64_t units_written;
|
|
|
|
uint64_t read_commands;
|
|
|
|
uint64_t write_commands;
|
|
|
|
};
|
|
|
|
|
|
|
|
static void nvme_set_blk_stats(NvmeNamespace *ns, struct nvme_stats *stats)
|
|
|
|
{
|
|
|
|
BlockAcctStats *s = blk_get_stats(ns->blkconf.blk);
|
|
|
|
|
2023-02-20 14:59:22 +03:00
|
|
|
stats->units_read += s->nr_bytes[BLOCK_ACCT_READ];
|
|
|
|
stats->units_written += s->nr_bytes[BLOCK_ACCT_WRITE];
|
2020-09-30 20:15:50 +03:00
|
|
|
stats->read_commands += s->nr_ops[BLOCK_ACCT_READ];
|
|
|
|
stats->write_commands += s->nr_ops[BLOCK_ACCT_WRITE];
|
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_smart_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
|
|
|
|
uint64_t off, NvmeRequest *req)
|
2020-07-06 09:12:52 +03:00
|
|
|
{
|
2019-04-12 21:53:16 +03:00
|
|
|
uint32_t nsid = le32_to_cpu(req->cmd.nsid);
|
2020-09-30 20:15:50 +03:00
|
|
|
struct nvme_stats stats = { 0 };
|
|
|
|
NvmeSmartLog smart = { 0 };
|
2020-07-06 09:12:52 +03:00
|
|
|
uint32_t trans_len;
|
2020-09-30 20:15:50 +03:00
|
|
|
NvmeNamespace *ns;
|
2020-07-06 09:12:52 +03:00
|
|
|
time_t current_ms;
|
2023-02-20 14:59:22 +03:00
|
|
|
uint64_t u_read, u_written;
|
2020-07-06 09:12:52 +03:00
|
|
|
|
2020-09-30 20:01:02 +03:00
|
|
|
if (off >= sizeof(smart)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-09-30 20:15:50 +03:00
|
|
|
if (nsid != 0xffffffff) {
|
|
|
|
ns = nvme_ns(n, nsid);
|
2019-06-26 09:51:06 +03:00
|
|
|
if (!ns) {
|
2020-09-30 20:15:50 +03:00
|
|
|
return NVME_INVALID_NSID | NVME_DNR;
|
2019-06-26 09:51:06 +03:00
|
|
|
}
|
2020-09-30 20:15:50 +03:00
|
|
|
nvme_set_blk_stats(ns, &stats);
|
|
|
|
} else {
|
|
|
|
int i;
|
2020-07-06 09:12:52 +03:00
|
|
|
|
2021-04-14 22:46:00 +03:00
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
2020-09-30 20:15:50 +03:00
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
nvme_set_blk_stats(ns, &stats);
|
|
|
|
}
|
2019-06-26 09:51:06 +03:00
|
|
|
}
|
2020-07-06 09:12:52 +03:00
|
|
|
|
|
|
|
trans_len = MIN(sizeof(smart) - off, buf_len);
|
2021-01-15 06:27:01 +03:00
|
|
|
smart.critical_warning = n->smart_critical_warning;
|
2020-07-06 09:12:52 +03:00
|
|
|
|
2023-02-20 14:59:22 +03:00
|
|
|
u_read = DIV_ROUND_UP(stats.units_read >> BDRV_SECTOR_BITS, 1000);
|
|
|
|
u_written = DIV_ROUND_UP(stats.units_written >> BDRV_SECTOR_BITS, 1000);
|
|
|
|
|
|
|
|
smart.data_units_read[0] = cpu_to_le64(u_read);
|
|
|
|
smart.data_units_written[0] = cpu_to_le64(u_written);
|
2020-09-30 20:15:50 +03:00
|
|
|
smart.host_read_commands[0] = cpu_to_le64(stats.read_commands);
|
|
|
|
smart.host_write_commands[0] = cpu_to_le64(stats.write_commands);
|
2020-07-06 09:12:52 +03:00
|
|
|
|
|
|
|
smart.temperature = cpu_to_le16(n->temperature);
|
|
|
|
|
|
|
|
if ((n->temperature >= n->features.temp_thresh_hi) ||
|
|
|
|
(n->temperature <= n->features.temp_thresh_low)) {
|
|
|
|
smart.critical_warning |= NVME_SMART_TEMPERATURE;
|
|
|
|
}
|
|
|
|
|
|
|
|
current_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);
|
|
|
|
smart.power_on_hours[0] =
|
|
|
|
cpu_to_le64((((current_ms - n->starttime_ms) / 1000) / 60) / 60);
|
|
|
|
|
2020-07-06 09:12:53 +03:00
|
|
|
if (!rae) {
|
|
|
|
nvme_clear_events(n, NVME_AER_TYPE_SMART);
|
|
|
|
}
|
|
|
|
|
2020-12-15 21:18:25 +03:00
|
|
|
return nvme_c2h(n, (uint8_t *) &smart + off, trans_len, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
}
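
/*
 * SMART counters: Data Units Read/Written are reported in thousands of
 * 512-byte units (rounded up), which is what the
 * DIV_ROUND_UP(bytes >> BDRV_SECTOR_BITS, 1000) conversions above implement.
 */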
|
|
|
|
|
2023-02-20 14:59:24 +03:00
|
|
|
static uint16_t nvme_endgrp_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
|
|
|
|
uint64_t off, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
uint32_t dw11 = le32_to_cpu(req->cmd.cdw11);
|
|
|
|
uint16_t endgrpid = (dw11 >> 16) & 0xffff;
|
|
|
|
struct nvme_stats stats = {};
|
|
|
|
NvmeEndGrpLog info = {};
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (!n->subsys || endgrpid != 0x1) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (off >= sizeof(info)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
|
|
|
NvmeNamespace *ns = nvme_subsys_ns(n->subsys, i);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_set_blk_stats(ns, &stats);
|
|
|
|
}
|
|
|
|
|
|
|
|
info.data_units_read[0] =
|
|
|
|
cpu_to_le64(DIV_ROUND_UP(stats.units_read / 1000000000, 1000000000));
|
|
|
|
info.data_units_written[0] =
|
|
|
|
cpu_to_le64(DIV_ROUND_UP(stats.units_written / 1000000000, 1000000000));
|
|
|
|
info.media_units_written[0] =
|
|
|
|
cpu_to_le64(DIV_ROUND_UP(stats.units_written / 1000000000, 1000000000));
|
|
|
|
|
|
|
|
info.host_read_commands[0] = cpu_to_le64(stats.read_commands);
|
|
|
|
info.host_write_commands[0] = cpu_to_le64(stats.write_commands);
|
|
|
|
|
|
|
|
buf_len = MIN(sizeof(info) - off, buf_len);
|
|
|
|
|
|
|
|
return nvme_c2h(n, (uint8_t *)&info + off, buf_len, req);
|
|
|
|
}
|
|
|
|
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_fw_log_info(NvmeCtrl *n, uint32_t buf_len, uint64_t off,
|
|
|
|
NvmeRequest *req)
|
2020-07-06 09:12:52 +03:00
|
|
|
{
|
|
|
|
uint32_t trans_len;
|
|
|
|
NvmeFwSlotInfoLog fw_log = {
|
|
|
|
.afi = 0x1,
|
|
|
|
};
|
|
|
|
|
2020-09-30 20:01:02 +03:00
|
|
|
if (off >= sizeof(fw_log)) {
|
2020-07-06 09:12:52 +03:00
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-09-30 20:01:02 +03:00
|
|
|
strpadcpy((char *)&fw_log.frs1, sizeof(fw_log.frs1), "1.0", ' ');
|
2020-07-06 09:12:52 +03:00
|
|
|
trans_len = MIN(sizeof(fw_log) - off, buf_len);
|
|
|
|
|
2020-12-15 21:18:25 +03:00
|
|
|
return nvme_c2h(n, (uint8_t *) &fw_log + off, trans_len, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_error_info(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
|
|
|
|
uint64_t off, NvmeRequest *req)
|
2020-07-06 09:12:52 +03:00
|
|
|
{
|
|
|
|
uint32_t trans_len;
|
|
|
|
NvmeErrorLog errlog;
|
|
|
|
|
2020-09-30 20:01:02 +03:00
|
|
|
if (off >= sizeof(errlog)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
2020-07-06 09:12:53 +03:00
|
|
|
}
|
|
|
|
|
2020-09-30 20:01:02 +03:00
|
|
|
if (!rae) {
|
|
|
|
nvme_clear_events(n, NVME_AER_TYPE_ERROR);
|
2020-07-06 09:12:52 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
memset(&errlog, 0x0, sizeof(errlog));
|
|
|
|
trans_len = MIN(sizeof(errlog) - off, buf_len);
|
|
|
|
|
2020-12-15 21:18:25 +03:00
|
|
|
return nvme_c2h(n, (uint8_t *)&errlog, trans_len, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
}
|
|
|
|
|
2021-02-28 11:51:02 +03:00
|
|
|
static uint16_t nvme_changed_nslist(NvmeCtrl *n, uint8_t rae, uint32_t buf_len,
|
|
|
|
uint64_t off, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
uint32_t nslist[1024];
|
|
|
|
uint32_t trans_len;
|
|
|
|
int i = 0;
|
|
|
|
uint32_t nsid;
|
|
|
|
|
2021-11-17 16:12:56 +03:00
|
|
|
if (off >= sizeof(nslist)) {
|
|
|
|
trace_pci_nvme_err_invalid_log_page_offset(off, sizeof(nslist));
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-02-28 11:51:02 +03:00
|
|
|
memset(nslist, 0x0, sizeof(nslist));
|
|
|
|
trans_len = MIN(sizeof(nslist) - off, buf_len);
|
|
|
|
|
|
|
|
while ((nsid = find_first_bit(n->changed_nsids, NVME_CHANGED_NSID_SIZE)) !=
|
|
|
|
NVME_CHANGED_NSID_SIZE) {
|
|
|
|
/*
|
|
|
|
* If there are more than 1024 namespaces, the first entry in the log page should
|
2021-04-16 06:52:28 +03:00
|
|
|
* be set to FFFFFFFFh and the others to 0, as required by the spec.
|
2021-02-28 11:51:02 +03:00
|
|
|
*/
|
|
|
|
if (i == ARRAY_SIZE(nslist)) {
|
|
|
|
memset(nslist, 0x0, sizeof(nslist));
|
|
|
|
nslist[0] = 0xffffffff;
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
nslist[i++] = nsid;
|
|
|
|
clear_bit(nsid, n->changed_nsids);
|
|
|
|
}
|
|
|
|
|
|
|
|
/*
|
|
|
|
* Remove all the remaining list entries in case we exited the loop early due to
|
|
|
|
* there being more than 1024 namespaces.
|
|
|
|
*/
|
|
|
|
if (nslist[0] == 0xffffffff) {
|
|
|
|
bitmap_zero(n->changed_nsids, NVME_CHANGED_NSID_SIZE);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!rae) {
|
|
|
|
nvme_clear_events(n, NVME_AER_TYPE_NOTICE);
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_c2h(n, ((uint8_t *)nslist) + off, trans_len, req);
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:04:03 +03:00
|
|
|
static uint16_t nvme_cmd_effects(NvmeCtrl *n, uint8_t csi, uint32_t buf_len,
|
2020-12-08 23:04:02 +03:00
|
|
|
uint64_t off, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeEffectsLog log = {};
|
|
|
|
const uint32_t *src_iocs = NULL;
|
|
|
|
uint32_t trans_len;
|
|
|
|
|
|
|
|
if (off >= sizeof(log)) {
|
|
|
|
trace_pci_nvme_err_invalid_log_page_offset(off, sizeof(log));
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
switch (NVME_CC_CSS(ldl_le_p(&n->bar.cc))) {
|
2020-12-08 23:04:02 +03:00
|
|
|
case NVME_CC_CSS_NVM:
|
|
|
|
src_iocs = nvme_cse_iocs_nvm;
|
2020-12-08 23:04:03 +03:00
|
|
|
/* fall through */
|
2020-12-08 23:04:02 +03:00
|
|
|
case NVME_CC_CSS_ADMIN_ONLY:
|
|
|
|
break;
|
2020-12-08 23:04:03 +03:00
|
|
|
case NVME_CC_CSS_CSI:
|
|
|
|
switch (csi) {
|
|
|
|
case NVME_CSI_NVM:
|
|
|
|
src_iocs = nvme_cse_iocs_nvm;
|
|
|
|
break;
|
2020-12-08 23:04:06 +03:00
|
|
|
case NVME_CSI_ZONED:
|
|
|
|
src_iocs = nvme_cse_iocs_zoned;
|
|
|
|
break;
|
2020-12-08 23:04:03 +03:00
|
|
|
}
|
2020-12-08 23:04:02 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
memcpy(log.acs, nvme_cse_acs, sizeof(nvme_cse_acs));
|
|
|
|
|
|
|
|
if (src_iocs) {
|
|
|
|
memcpy(log.iocs, src_iocs, sizeof(log.iocs));
|
|
|
|
}
|
|
|
|
|
|
|
|
trans_len = MIN(sizeof(log) - off, buf_len);
|
|
|
|
|
2020-12-15 21:18:25 +03:00
|
|
|
return nvme_c2h(n, ((uint8_t *)&log) + off, trans_len, req);
|
2020-12-08 23:04:02 +03:00
|
|
|
}
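
/*
 * The Commands Supported and Effects log always carries the admin command
 * table; which I/O command table is included depends on the CC.CSS selection
 * and, when all supported command sets are enabled, on the CSI value taken
 * from CDW14.
 */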
|
|
|
|
|
2023-02-20 14:59:26 +03:00
|
|
|
static size_t sizeof_fdp_conf_descr(size_t nruh, size_t vss)
|
|
|
|
{
|
|
|
|
size_t entry_siz = sizeof(NvmeFdpDescrHdr) + nruh * sizeof(NvmeRuhDescr)
|
|
|
|
+ vss;
|
|
|
|
return ROUND_UP(entry_siz, 8);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_fdp_confs(NvmeCtrl *n, uint32_t endgrpid, uint32_t buf_len,
|
|
|
|
uint64_t off, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
uint32_t log_size, trans_len;
|
|
|
|
g_autofree uint8_t *buf = NULL;
|
|
|
|
NvmeFdpDescrHdr *hdr;
|
|
|
|
NvmeRuhDescr *ruhd;
|
|
|
|
NvmeEnduranceGroup *endgrp;
|
|
|
|
NvmeFdpConfsHdr *log;
|
|
|
|
size_t nruh, fdp_descr_size;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
if (endgrpid != 1 || !n->subsys) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
endgrp = &n->subsys->endgrp;
|
|
|
|
|
|
|
|
if (endgrp->fdp.enabled) {
|
|
|
|
nruh = endgrp->fdp.nruh;
|
|
|
|
} else {
|
|
|
|
nruh = 1;
|
|
|
|
}
|
|
|
|
|
|
|
|
fdp_descr_size = sizeof_fdp_conf_descr(nruh, FDPVSS);
|
|
|
|
log_size = sizeof(NvmeFdpConfsHdr) + fdp_descr_size;
|
|
|
|
|
|
|
|
if (off >= log_size) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
trans_len = MIN(log_size - off, buf_len);
|
|
|
|
|
|
|
|
buf = g_malloc0(log_size);
|
|
|
|
log = (NvmeFdpConfsHdr *)buf;
|
|
|
|
hdr = (NvmeFdpDescrHdr *)(log + 1);
|
|
|
|
ruhd = (NvmeRuhDescr *)(buf + sizeof(*log) + sizeof(*hdr));
|
|
|
|
|
|
|
|
log->num_confs = cpu_to_le16(0);
|
|
|
|
log->size = cpu_to_le32(log_size);
|
|
|
|
|
|
|
|
hdr->descr_size = cpu_to_le16(fdp_descr_size);
|
|
|
|
if (endgrp->fdp.enabled) {
|
|
|
|
hdr->fdpa = FIELD_DP8(hdr->fdpa, FDPA, VALID, 1);
|
|
|
|
hdr->fdpa = FIELD_DP8(hdr->fdpa, FDPA, RGIF, endgrp->fdp.rgif);
|
|
|
|
hdr->nrg = cpu_to_le16(endgrp->fdp.nrg);
|
|
|
|
hdr->nruh = cpu_to_le16(endgrp->fdp.nruh);
|
|
|
|
hdr->maxpids = cpu_to_le16(NVME_FDP_MAXPIDS - 1);
|
|
|
|
hdr->nnss = cpu_to_le32(NVME_MAX_NAMESPACES);
|
|
|
|
hdr->runs = cpu_to_le64(endgrp->fdp.runs);
|
|
|
|
|
|
|
|
for (i = 0; i < nruh; i++) {
|
|
|
|
ruhd->ruht = NVME_RUHT_INITIALLY_ISOLATED;
|
|
|
|
ruhd++;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
/* 1 bit for RUH in PIF -> 2 RUHs max. */
|
|
|
|
hdr->nrg = cpu_to_le16(1);
|
|
|
|
hdr->nruh = cpu_to_le16(1);
|
|
|
|
hdr->maxpids = cpu_to_le16(NVME_FDP_MAXPIDS - 1);
|
|
|
|
hdr->nnss = cpu_to_le32(1);
|
|
|
|
hdr->runs = cpu_to_le64(96 * MiB);
|
|
|
|
|
|
|
|
ruhd->ruht = NVME_RUHT_INITIALLY_ISOLATED;
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_c2h(n, (uint8_t *)buf + off, trans_len, req);
|
|
|
|
}
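
/*
 * The FDP configurations log is produced even when FDP is disabled for the
 * endurance group; in that case a minimal single reclaim-group, single-RUH
 * configuration is reported so the host still sees a well-formed descriptor.
 */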
|
|
|
|
|
|
|
|
static uint16_t nvme_fdp_ruh_usage(NvmeCtrl *n, uint32_t endgrpid,
|
|
|
|
uint32_t dw10, uint32_t dw12,
|
|
|
|
uint32_t buf_len, uint64_t off,
|
|
|
|
NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeRuHandle *ruh;
|
|
|
|
NvmeRuhuLog *hdr;
|
|
|
|
NvmeRuhuDescr *ruhud;
|
|
|
|
NvmeEnduranceGroup *endgrp;
|
|
|
|
g_autofree uint8_t *buf = NULL;
|
|
|
|
uint32_t log_size, trans_len;
|
|
|
|
uint16_t i;
|
|
|
|
|
|
|
|
if (endgrpid != 1 || !n->subsys) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
endgrp = &n->subsys->endgrp;
|
|
|
|
|
|
|
|
if (!endgrp->fdp.enabled) {
|
|
|
|
return NVME_FDP_DISABLED | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
log_size = sizeof(NvmeRuhuLog) + endgrp->fdp.nruh * sizeof(NvmeRuhuDescr);
|
|
|
|
|
|
|
|
if (off >= log_size) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
trans_len = MIN(log_size - off, buf_len);
|
|
|
|
|
|
|
|
buf = g_malloc0(log_size);
|
|
|
|
hdr = (NvmeRuhuLog *)buf;
|
|
|
|
ruhud = (NvmeRuhuDescr *)(hdr + 1);
|
|
|
|
|
|
|
|
ruh = endgrp->fdp.ruhs;
|
|
|
|
hdr->nruh = cpu_to_le16(endgrp->fdp.nruh);
|
|
|
|
|
|
|
|
for (i = 0; i < endgrp->fdp.nruh; i++, ruhud++, ruh++) {
|
|
|
|
ruhud->ruha = ruh->ruha;
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_c2h(n, (uint8_t *)buf + off, trans_len, req);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_fdp_stats(NvmeCtrl *n, uint32_t endgrpid, uint32_t buf_len,
|
|
|
|
uint64_t off, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeEnduranceGroup *endgrp;
|
|
|
|
NvmeFdpStatsLog log = {};
|
|
|
|
uint32_t trans_len;
|
|
|
|
|
|
|
|
if (off >= sizeof(NvmeFdpStatsLog)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (endgrpid != 1 || !n->subsys) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!n->subsys->endgrp.fdp.enabled) {
|
|
|
|
return NVME_FDP_DISABLED | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
endgrp = &n->subsys->endgrp;
|
|
|
|
|
|
|
|
trans_len = MIN(sizeof(log) - off, buf_len);
|
|
|
|
|
|
|
|
/* spec value is 128 bit, we only use 64 bit */
|
|
|
|
log.hbmw[0] = cpu_to_le64(endgrp->fdp.hbmw);
|
|
|
|
log.mbmw[0] = cpu_to_le64(endgrp->fdp.mbmw);
|
|
|
|
log.mbe[0] = cpu_to_le64(endgrp->fdp.mbe);
|
|
|
|
|
|
|
|
return nvme_c2h(n, (uint8_t *)&log + off, trans_len, req);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_fdp_events(NvmeCtrl *n, uint32_t endgrpid,
|
|
|
|
uint32_t buf_len, uint64_t off,
|
|
|
|
NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeEnduranceGroup *endgrp;
|
|
|
|
NvmeCmd *cmd = &req->cmd;
|
|
|
|
bool host_events = (cmd->cdw10 >> 8) & 0x1;
|
|
|
|
uint32_t log_size, trans_len;
|
|
|
|
NvmeFdpEventBuffer *ebuf;
|
|
|
|
g_autofree NvmeFdpEventsLog *elog = NULL;
|
|
|
|
NvmeFdpEvent *event;
|
|
|
|
|
|
|
|
if (endgrpid != 1 || !n->subsys) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
endgrp = &n->subsys->endgrp;
|
|
|
|
|
|
|
|
if (!endgrp->fdp.enabled) {
|
|
|
|
return NVME_FDP_DISABLED | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (host_events) {
|
|
|
|
ebuf = &endgrp->fdp.host_events;
|
|
|
|
} else {
|
|
|
|
ebuf = &endgrp->fdp.ctrl_events;
|
|
|
|
}
|
|
|
|
|
|
|
|
log_size = sizeof(NvmeFdpEventsLog) + ebuf->nelems * sizeof(NvmeFdpEvent);
|
|
|
|
trans_len = MIN(log_size - off, buf_len);
|
|
|
|
elog = g_malloc0(log_size);
|
|
|
|
elog->num_events = cpu_to_le32(ebuf->nelems);
|
|
|
|
event = (NvmeFdpEvent *)(elog + 1);
|
|
|
|
|
|
|
|
if (ebuf->nelems && ebuf->start == ebuf->next) {
|
|
|
|
unsigned int nelems = (NVME_FDP_MAX_EVENTS - ebuf->start);
|
|
|
|
/* wrap around: copy [start; NVME_FDP_MAX_EVENTS[ and [0; next[ */
|
|
|
|
memcpy(event, &ebuf->events[ebuf->start],
|
|
|
|
sizeof(NvmeFdpEvent) * nelems);
|
|
|
|
memcpy(event + nelems, ebuf->events,
|
|
|
|
sizeof(NvmeFdpEvent) * ebuf->next);
|
|
|
|
} else if (ebuf->start < ebuf->next) {
|
|
|
|
memcpy(event, &ebuf->events[ebuf->start],
|
|
|
|
sizeof(NvmeFdpEvent) * (ebuf->next - ebuf->start));
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_c2h(n, (uint8_t *)elog + off, trans_len, req);
|
|
|
|
}
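
/*
 * The FDP event buffers are circular: when the buffer has wrapped around
 * (start == next with nelems != 0), the log is assembled from the tail
 * segment [start; NVME_FDP_MAX_EVENTS[ followed by the head segment
 * [0; next[.
 */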
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_get_log(NvmeCtrl *n, NvmeRequest *req)
|
2020-07-06 09:12:52 +03:00
|
|
|
{
|
2020-07-20 13:44:01 +03:00
|
|
|
NvmeCmd *cmd = &req->cmd;
|
|
|
|
|
2020-07-06 09:12:52 +03:00
|
|
|
uint32_t dw10 = le32_to_cpu(cmd->cdw10);
|
|
|
|
uint32_t dw11 = le32_to_cpu(cmd->cdw11);
|
|
|
|
uint32_t dw12 = le32_to_cpu(cmd->cdw12);
|
|
|
|
uint32_t dw13 = le32_to_cpu(cmd->cdw13);
|
|
|
|
uint8_t lid = dw10 & 0xff;
|
|
|
|
uint8_t lsp = (dw10 >> 8) & 0xf;
|
|
|
|
uint8_t rae = (dw10 >> 15) & 0x1;
|
2020-12-08 23:04:03 +03:00
|
|
|
uint8_t csi = le32_to_cpu(cmd->cdw14) >> 24;
|
2023-02-20 14:59:26 +03:00
|
|
|
uint32_t numdl, numdu, lspi;
|
2020-07-06 09:12:52 +03:00
|
|
|
uint64_t off, lpol, lpou;
|
|
|
|
size_t len;
|
2020-02-23 19:38:22 +03:00
|
|
|
uint16_t status;
|
2020-07-06 09:12:52 +03:00
|
|
|
|
|
|
|
numdl = (dw10 >> 16);
|
|
|
|
numdu = (dw11 & 0xffff);
|
2023-02-20 14:59:26 +03:00
|
|
|
lspi = (dw11 >> 16);
|
2020-07-06 09:12:52 +03:00
|
|
|
lpol = dw12;
|
|
|
|
lpou = dw13;
|
|
|
|
|
|
|
|
len = (((numdu << 16) | numdl) + 1) << 2;
|
|
|
|
off = (lpou << 32ULL) | lpol;
|
|
|
|
|
|
|
|
if (off & 0x3) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
trace_pci_nvme_get_log(nvme_cid(req), lid, lsp, rae, len, off);
|
|
|
|
|
2020-02-23 19:38:22 +03:00
|
|
|
status = nvme_check_mdts(n, len);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:52 +03:00
|
|
|
switch (lid) {
|
|
|
|
case NVME_LOG_ERROR_INFO:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_error_info(n, rae, len, off, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
case NVME_LOG_SMART_INFO:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_smart_info(n, rae, len, off, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
case NVME_LOG_FW_SLOT_INFO:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_fw_log_info(n, len, off, req);
|
2021-02-28 11:51:02 +03:00
|
|
|
case NVME_LOG_CHANGED_NSLIST:
|
|
|
|
return nvme_changed_nslist(n, rae, len, off, req);
|
2020-12-08 23:04:02 +03:00
|
|
|
case NVME_LOG_CMD_EFFECTS:
|
2020-12-08 23:04:03 +03:00
|
|
|
return nvme_cmd_effects(n, csi, len, off, req);
|
2023-02-20 14:59:24 +03:00
|
|
|
case NVME_LOG_ENDGRP:
|
|
|
|
return nvme_endgrp_info(n, rae, len, off, req);
|
2023-02-20 14:59:26 +03:00
|
|
|
case NVME_LOG_FDP_CONFS:
|
|
|
|
return nvme_fdp_confs(n, lspi, len, off, req);
|
|
|
|
case NVME_LOG_FDP_RUH_USAGE:
|
|
|
|
return nvme_fdp_ruh_usage(n, lspi, dw10, dw12, len, off, req);
|
|
|
|
case NVME_LOG_FDP_STATS:
|
|
|
|
return nvme_fdp_stats(n, lspi, len, off, req);
|
|
|
|
case NVME_LOG_FDP_EVENTS:
|
|
|
|
return nvme_fdp_events(n, lspi, len, off, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
default:
|
|
|
|
trace_pci_nvme_err_invalid_log_page(nvme_cid(req), lid);
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
}
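
/*
 * Get Log Page: the transfer length is reconstructed from NUMDL/NUMDU (a
 * zero-based dword count) and the offset from LPOL/LPOU; the offset must be
 * dword aligned and the total length is checked against MDTS before
 * dispatching to the per-log handlers.
 */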
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static void nvme_free_cq(NvmeCQueue *cq, NvmeCtrl *n)
|
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
2022-07-05 17:24:03 +03:00
|
|
|
uint16_t offset = (cq->cqid << 3) + (1 << 2);
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
n->cq[cq->cqid] = NULL;
|
2022-10-19 23:28:02 +03:00
|
|
|
qemu_bh_delete(cq->bh);
|
2022-07-05 17:24:03 +03:00
|
|
|
if (cq->ioeventfd_enabled) {
|
|
|
|
memory_region_del_eventfd(&n->iomem,
|
|
|
|
0x1000 + offset, 4, false, 0, &cq->notifier);
|
2022-07-28 09:48:51 +03:00
|
|
|
event_notifier_set_handler(&cq->notifier, NULL);
|
2022-07-05 17:24:03 +03:00
|
|
|
event_notifier_cleanup(&cq->notifier);
|
|
|
|
}
|
2022-12-08 14:43:18 +03:00
|
|
|
if (msix_enabled(pci)) {
|
|
|
|
msix_vector_unuse(pci, cq->vector);
|
2021-01-12 15:30:26 +03:00
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
if (cq->cqid) {
|
|
|
|
g_free(cq);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_del_cq(NvmeCtrl *n, NvmeRequest *req)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2020-07-20 13:44:01 +03:00
|
|
|
NvmeDeleteQ *c = (NvmeDeleteQ *)&req->cmd;
|
2013-06-04 19:17:10 +04:00
|
|
|
NvmeCQueue *cq;
|
|
|
|
uint16_t qid = le16_to_cpu(c->qid);
|
|
|
|
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(!qid || nvme_check_cqid(n, qid))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_del_cq_cqid(qid);
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_CQID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
cq = n->cq[qid];
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(!QTAILQ_EMPTY(&cq->sq_list))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_invalid_del_cq_notempty(qid);
|
2013-06-04 19:17:10 +04:00
|
|
|
return NVME_INVALID_QUEUE_DEL;
|
|
|
|
}
|
2021-06-17 21:55:42 +03:00
|
|
|
|
|
|
|
if (cq->irq_enabled && cq->tail != cq->head) {
|
|
|
|
n->cq_pending--;
|
|
|
|
}
|
|
|
|
|
2018-11-21 21:10:13 +03:00
|
|
|
nvme_irq_deassert(n, cq);
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_del_cq(qid);
|
2013-06-04 19:17:10 +04:00
|
|
|
nvme_free_cq(cq, n);
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
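
/*
 * A completion queue may only be deleted once every submission queue mapped
 * to it has been deleted; any pending interrupt is retired (cq_pending is
 * decremented and the vector deasserted) before the queue is freed.
 */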
|
|
|
|
|
|
|
|
static void nvme_init_cq(NvmeCQueue *cq, NvmeCtrl *n, uint64_t dma_addr,
|
2020-08-24 09:58:56 +03:00
|
|
|
uint16_t cqid, uint16_t vector, uint16_t size,
|
|
|
|
uint16_t irq_enabled)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
|
|
|
|
|
|
|
if (msix_enabled(pci)) {
|
|
|
|
msix_vector_use(pci, vector);
|
2021-01-12 15:30:26 +03:00
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
cq->ctrl = n;
|
|
|
|
cq->cqid = cqid;
|
|
|
|
cq->size = size;
|
|
|
|
cq->dma_addr = dma_addr;
|
|
|
|
cq->phase = 1;
|
|
|
|
cq->irq_enabled = irq_enabled;
|
|
|
|
cq->vector = vector;
|
|
|
|
cq->head = cq->tail = 0;
|
|
|
|
QTAILQ_INIT(&cq->req_list);
|
|
|
|
QTAILQ_INIT(&cq->sq_list);
|
2022-06-16 15:34:07 +03:00
|
|
|
if (n->dbbuf_enabled) {
|
|
|
|
cq->db_addr = n->dbbuf_dbs + (cqid << 3) + (1 << 2);
|
|
|
|
cq->ei_addr = n->dbbuf_eis + (cqid << 3) + (1 << 2);
|
2022-07-05 17:24:03 +03:00
|
|
|
|
|
|
|
if (n->params.ioeventfd && cqid != 0) {
|
|
|
|
if (!nvme_init_cq_ioeventfd(cq)) {
|
|
|
|
cq->ioeventfd_enabled = true;
|
|
|
|
}
|
|
|
|
}
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
n->cq[cqid] = cq;
|
2022-10-19 23:28:02 +03:00
|
|
|
cq->bh = qemu_bh_new(nvme_post_cqes, cq);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
static uint16_t nvme_create_cq(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeCQueue *cq;
    NvmeCreateCq *c = (NvmeCreateCq *)&req->cmd;
    uint16_t cqid = le16_to_cpu(c->cqid);
    uint16_t vector = le16_to_cpu(c->irq_vector);
    uint16_t qsize = le16_to_cpu(c->qsize);
    uint16_t qflags = le16_to_cpu(c->cq_flags);
    uint64_t prp1 = le64_to_cpu(c->prp1);

    trace_pci_nvme_create_cq(prp1, cqid, vector, qsize, qflags,
                             NVME_CQ_FLAGS_IEN(qflags) != 0);

    if (unlikely(!cqid || cqid > n->conf_ioqpairs || n->cq[cqid] != NULL)) {
        trace_pci_nvme_err_invalid_create_cq_cqid(cqid);
        return NVME_INVALID_QID | NVME_DNR;
    }
    if (unlikely(!qsize || qsize > NVME_CAP_MQES(ldq_le_p(&n->bar.cap)))) {
        trace_pci_nvme_err_invalid_create_cq_size(qsize);
        return NVME_MAX_QSIZE_EXCEEDED | NVME_DNR;
    }
    if (unlikely(prp1 & (n->page_size - 1))) {
        trace_pci_nvme_err_invalid_create_cq_addr(prp1);
        return NVME_INVALID_PRP_OFFSET | NVME_DNR;
    }
    if (unlikely(!msix_enabled(PCI_DEVICE(n)) && vector)) {
        trace_pci_nvme_err_invalid_create_cq_vector(vector);
        return NVME_INVALID_IRQ_VECTOR | NVME_DNR;
    }
    if (unlikely(vector >= n->conf_msix_qsize)) {
        trace_pci_nvme_err_invalid_create_cq_vector(vector);
        return NVME_INVALID_IRQ_VECTOR | NVME_DNR;
    }
    if (unlikely(!(NVME_CQ_FLAGS_PC(qflags)))) {
        trace_pci_nvme_err_invalid_create_cq_qflags(NVME_CQ_FLAGS_PC(qflags));
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    cq = g_malloc0(sizeof(*cq));
    nvme_init_cq(cq, n, prp1, cqid, vector, qsize + 1,
                 NVME_CQ_FLAGS_IEN(qflags));

    /*
     * It is only required to set qs_created when creating a completion queue;
     * creating a submission queue without a matching completion queue will
     * fail.
     */
    n->qs_created = true;
    return NVME_SUCCESS;
}

static uint16_t nvme_rpt_empty_id_struct(NvmeCtrl *n, NvmeRequest *req)
{
    uint8_t id[NVME_IDENTIFY_DATA_SIZE] = {};

    return nvme_c2h(n, id, sizeof(id), req);
}

static uint16_t nvme_identify_ctrl(NvmeCtrl *n, NvmeRequest *req)
{
    trace_pci_nvme_identify_ctrl();

    return nvme_c2h(n, (uint8_t *)&n->id_ctrl, sizeof(n->id_ctrl), req);
}
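
/*
 * Identify Controller, I/O Command Set specific. Only the NVM and Zoned
 * Namespace command sets are supported; the returned structure carries the
 * VSL and DMRSL limits for NVM and the ZASL limit for ZNS.
 */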
static uint16_t nvme_identify_ctrl_csi(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint8_t id[NVME_IDENTIFY_DATA_SIZE] = {};
    NvmeIdCtrlNvm *id_nvm = (NvmeIdCtrlNvm *)&id;

    trace_pci_nvme_identify_ctrl_csi(c->csi);

    switch (c->csi) {
    case NVME_CSI_NVM:
        id_nvm->vsl = n->params.vsl;
        id_nvm->dmrsl = cpu_to_le32(n->dmrsl);
        break;

    case NVME_CSI_ZONED:
        ((NvmeIdCtrlZoned *)&id)->zasl = n->params.zasl;
        break;

    default:
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    return nvme_c2h(n, id, sizeof(id), req);
}
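
/*
 * Identify Namespace. When active, a namespace that is not attached to this
 * controller yields the zero-filled common identify data structure; when not
 * active, the lookup falls back to namespaces allocated in the subsystem.
 */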
static uint16_t nvme_identify_ns(NvmeCtrl *n, NvmeRequest *req, bool active)
{
    NvmeNamespace *ns;
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint32_t nsid = le32_to_cpu(c->nsid);

    trace_pci_nvme_identify_ns(nsid);

    if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
        return NVME_INVALID_NSID | NVME_DNR;
    }

    ns = nvme_ns(n, nsid);
    if (unlikely(!ns)) {
        if (!active) {
            ns = nvme_subsys_ns(n->subsys, nsid);
            if (!ns) {
                return nvme_rpt_empty_id_struct(n, req);
            }
        } else {
            return nvme_rpt_empty_id_struct(n, req);
        }
    }

    if (active || ns->csi == NVME_CSI_NVM) {
        return nvme_c2h(n, (uint8_t *)&ns->id_ns, sizeof(NvmeIdNs), req);
    }

    return NVME_INVALID_CMD_SET | NVME_DNR;
}
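
/*
 * Identify Controller List. Reports the identifiers of controllers in the
 * subsystem starting at the CNTLID given in the command; when attached is
 * set, only controllers with the specified namespace attached are included.
 */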
static uint16_t nvme_identify_ctrl_list(NvmeCtrl *n, NvmeRequest *req,
                                        bool attached)
{
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint32_t nsid = le32_to_cpu(c->nsid);
    uint16_t min_id = le16_to_cpu(c->ctrlid);
    uint16_t list[NVME_CONTROLLER_LIST_SIZE] = {};
    uint16_t *ids = &list[1];
    NvmeNamespace *ns;
    NvmeCtrl *ctrl;
    int cntlid, nr_ids = 0;

    trace_pci_nvme_identify_ctrl_list(c->cns, min_id);

    if (!n->subsys) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    if (attached) {
        if (nsid == NVME_NSID_BROADCAST) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }

        ns = nvme_subsys_ns(n->subsys, nsid);
        if (!ns) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }
    }

    for (cntlid = min_id; cntlid < ARRAY_SIZE(n->subsys->ctrls); cntlid++) {
        ctrl = nvme_subsys_ctrl(n->subsys, cntlid);
        if (!ctrl) {
            continue;
        }

        if (attached && !nvme_ns(ctrl, nsid)) {
            continue;
        }

        ids[nr_ids++] = cntlid;
    }

    list[0] = nr_ids;

    return nvme_c2h(n, (uint8_t *)list, sizeof(list), req);
}

static uint16_t nvme_identify_pri_ctrl_cap(NvmeCtrl *n, NvmeRequest *req)
{
    trace_pci_nvme_identify_pri_ctrl_cap(le16_to_cpu(n->pri_ctrl_cap.cntlid));

    return nvme_c2h(n, (uint8_t *)&n->pri_ctrl_cap,
                    sizeof(NvmePriCtrlCap), req);
}

static uint16_t nvme_identify_sec_ctrl_list(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint16_t pri_ctrl_id = le16_to_cpu(n->pri_ctrl_cap.cntlid);
    uint16_t min_id = le16_to_cpu(c->ctrlid);
    uint8_t num_sec_ctrl = n->sec_ctrl_list.numcntl;
    NvmeSecCtrlList list = {0};
    uint8_t i;

    for (i = 0; i < num_sec_ctrl; i++) {
        if (n->sec_ctrl_list.sec[i].scid >= min_id) {
            list.numcntl = num_sec_ctrl - i;
            memcpy(&list.sec, n->sec_ctrl_list.sec + i,
                   list.numcntl * sizeof(NvmeSecCtrlEntry));
            break;
        }
    }

    trace_pci_nvme_identify_sec_ctrl_list(pri_ctrl_id, list.numcntl);

    return nvme_c2h(n, (uint8_t *)&list, sizeof(list), req);
}

static uint16_t nvme_identify_ns_csi(NvmeCtrl *n, NvmeRequest *req,
                                     bool active)
{
    NvmeNamespace *ns;
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint32_t nsid = le32_to_cpu(c->nsid);

    trace_pci_nvme_identify_ns_csi(nsid, c->csi);

    if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
        return NVME_INVALID_NSID | NVME_DNR;
    }

    ns = nvme_ns(n, nsid);
    if (unlikely(!ns)) {
        if (!active) {
            ns = nvme_subsys_ns(n->subsys, nsid);
            if (!ns) {
                return nvme_rpt_empty_id_struct(n, req);
            }
        } else {
            return nvme_rpt_empty_id_struct(n, req);
        }
    }

    if (c->csi == NVME_CSI_NVM) {
        return nvme_c2h(n, (uint8_t *)&ns->id_ns_nvm, sizeof(NvmeIdNsNvm),
                        req);
    } else if (c->csi == NVME_CSI_ZONED && ns->csi == NVME_CSI_ZONED) {
        return nvme_c2h(n, (uint8_t *)ns->id_ns_zoned, sizeof(NvmeIdNsZoned),
                        req);
    }

    return NVME_INVALID_FIELD | NVME_DNR;
}
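
/*
 * Namespace Identification List. Builds a list of namespace identifiers
 * strictly greater than the NSID given in the command, in increasing order,
 * limited to a single identify data buffer.
 */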
static uint16_t nvme_identify_nslist(NvmeCtrl *n, NvmeRequest *req,
                                     bool active)
{
    NvmeNamespace *ns;
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint32_t min_nsid = le32_to_cpu(c->nsid);
    uint8_t list[NVME_IDENTIFY_DATA_SIZE] = {};
    static const int data_len = sizeof(list);
    uint32_t *list_ptr = (uint32_t *)list;
    int i, j = 0;

    trace_pci_nvme_identify_nslist(min_nsid);

    /*
     * Both FFFFFFFFh (NVME_NSID_BROADCAST) and FFFFFFFEh are invalid values
     * since the Active Namespace ID List should return namespaces with ids
     * *higher* than the NSID specified in the command. This is also specified
     * in the spec (NVM Express v1.3d, Section 5.15.4).
     */
    if (min_nsid >= NVME_NSID_BROADCAST - 1) {
        return NVME_INVALID_NSID | NVME_DNR;
    }

    for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
        ns = nvme_ns(n, i);
        if (!ns) {
            if (!active) {
                ns = nvme_subsys_ns(n->subsys, i);
                if (!ns) {
                    continue;
                }
            } else {
                continue;
            }
        }
        if (ns->params.nsid <= min_nsid) {
            continue;
        }
        list_ptr[j++] = cpu_to_le32(ns->params.nsid);
        if (j == data_len / sizeof(uint32_t)) {
            break;
        }
    }

    return nvme_c2h(n, list, data_len, req);
}

static uint16_t nvme_identify_nslist_csi(NvmeCtrl *n, NvmeRequest *req,
                                         bool active)
{
    NvmeNamespace *ns;
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint32_t min_nsid = le32_to_cpu(c->nsid);
    uint8_t list[NVME_IDENTIFY_DATA_SIZE] = {};
    static const int data_len = sizeof(list);
    uint32_t *list_ptr = (uint32_t *)list;
    int i, j = 0;

    trace_pci_nvme_identify_nslist_csi(min_nsid, c->csi);

    /*
     * Same as in nvme_identify_nslist(), FFFFFFFFh/FFFFFFFEh are invalid.
     */
    if (min_nsid >= NVME_NSID_BROADCAST - 1) {
        return NVME_INVALID_NSID | NVME_DNR;
    }

    if (c->csi != NVME_CSI_NVM && c->csi != NVME_CSI_ZONED) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
        ns = nvme_ns(n, i);
        if (!ns) {
            if (!active) {
                ns = nvme_subsys_ns(n->subsys, i);
                if (!ns) {
                    continue;
                }
            } else {
                continue;
            }
        }
        if (ns->params.nsid <= min_nsid || c->csi != ns->csi) {
            continue;
        }
        list_ptr[j++] = cpu_to_le32(ns->params.nsid);
        if (j == data_len / sizeof(uint32_t)) {
            break;
        }
    }

    return nvme_c2h(n, list, data_len, req);
}
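
/*
 * Namespace Identification Descriptor List. The returned buffer carries a
 * UUID descriptor and an EUI-64 descriptor when configured for the
 * namespace, and always a Command Set Identifier descriptor.
 */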
static uint16_t nvme_identify_ns_descr_list(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeNamespace *ns;
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;
    uint32_t nsid = le32_to_cpu(c->nsid);
    uint8_t list[NVME_IDENTIFY_DATA_SIZE] = {};
    uint8_t *pos = list;
    struct {
        NvmeIdNsDescr hdr;
        uint8_t v[NVME_NIDL_UUID];
    } QEMU_PACKED uuid = {};
    struct {
        NvmeIdNsDescr hdr;
        uint64_t v;
    } QEMU_PACKED eui64 = {};
    struct {
        NvmeIdNsDescr hdr;
        uint8_t v;
    } QEMU_PACKED csi = {};

    trace_pci_nvme_identify_ns_descr_list(nsid);

    if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
        return NVME_INVALID_NSID | NVME_DNR;
    }

    ns = nvme_ns(n, nsid);
    if (unlikely(!ns)) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    if (!qemu_uuid_is_null(&ns->params.uuid)) {
        uuid.hdr.nidt = NVME_NIDT_UUID;
        uuid.hdr.nidl = NVME_NIDL_UUID;
        memcpy(uuid.v, ns->params.uuid.data, NVME_NIDL_UUID);
        memcpy(pos, &uuid, sizeof(uuid));
        pos += sizeof(uuid);
    }

    if (ns->params.eui64) {
        eui64.hdr.nidt = NVME_NIDT_EUI64;
        eui64.hdr.nidl = NVME_NIDL_EUI64;
        eui64.v = cpu_to_be64(ns->params.eui64);
        memcpy(pos, &eui64, sizeof(eui64));
        pos += sizeof(eui64);
    }

    csi.hdr.nidt = NVME_NIDT_CSI;
    csi.hdr.nidl = NVME_NIDL_CSI;
    csi.v = ns->csi;
    memcpy(pos, &csi, sizeof(csi));
    pos += sizeof(csi);

    return nvme_c2h(n, list, sizeof(list), req);
}

static uint16_t nvme_identify_cmd_set(NvmeCtrl *n, NvmeRequest *req)
{
    uint8_t list[NVME_IDENTIFY_DATA_SIZE] = {};
    static const int data_len = sizeof(list);

    trace_pci_nvme_identify_cmd_set();

    NVME_SET_CSI(*list, NVME_CSI_NVM);
    NVME_SET_CSI(*list, NVME_CSI_ZONED);

    return nvme_c2h(n, list, data_len, req);
}
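
/*
 * Identify command dispatch. The CNS field selects which of the identify
 * data structures above is returned; unsupported CNS values complete with
 * Invalid Field in Command.
 */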
static uint16_t nvme_identify(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeIdentify *c = (NvmeIdentify *)&req->cmd;

    trace_pci_nvme_identify(nvme_cid(req), c->cns, le16_to_cpu(c->ctrlid),
                            c->csi);

    switch (c->cns) {
    case NVME_ID_CNS_NS:
        return nvme_identify_ns(n, req, true);
    case NVME_ID_CNS_NS_PRESENT:
        return nvme_identify_ns(n, req, false);
    case NVME_ID_CNS_NS_ATTACHED_CTRL_LIST:
        return nvme_identify_ctrl_list(n, req, true);
    case NVME_ID_CNS_CTRL_LIST:
        return nvme_identify_ctrl_list(n, req, false);
    case NVME_ID_CNS_PRIMARY_CTRL_CAP:
        return nvme_identify_pri_ctrl_cap(n, req);
    case NVME_ID_CNS_SECONDARY_CTRL_LIST:
        return nvme_identify_sec_ctrl_list(n, req);
    case NVME_ID_CNS_CS_NS:
        return nvme_identify_ns_csi(n, req, true);
    case NVME_ID_CNS_CS_NS_PRESENT:
        return nvme_identify_ns_csi(n, req, false);
    case NVME_ID_CNS_CTRL:
        return nvme_identify_ctrl(n, req);
    case NVME_ID_CNS_CS_CTRL:
        return nvme_identify_ctrl_csi(n, req);
    case NVME_ID_CNS_NS_ACTIVE_LIST:
        return nvme_identify_nslist(n, req, true);
    case NVME_ID_CNS_NS_PRESENT_LIST:
        return nvme_identify_nslist(n, req, false);
    case NVME_ID_CNS_CS_NS_ACTIVE_LIST:
        return nvme_identify_nslist_csi(n, req, true);
    case NVME_ID_CNS_CS_NS_PRESENT_LIST:
        return nvme_identify_nslist_csi(n, req, false);
    case NVME_ID_CNS_NS_DESCR_LIST:
        return nvme_identify_ns_descr_list(n, req);
    case NVME_ID_CNS_IO_COMMAND_SET:
        return nvme_identify_cmd_set(n, req);
    default:
        trace_pci_nvme_err_invalid_identify_cns(le32_to_cpu(c->cns));
        return NVME_INVALID_FIELD | NVME_DNR;
    }
}
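
/*
 * Abort command. No outstanding command is actually aborted; the handler
 * validates the submission queue identifier and otherwise completes with
 * Dword 0 set to 1 (command not aborted).
 */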
static uint16_t nvme_abort(NvmeCtrl *n, NvmeRequest *req)
{
    uint16_t sqid = le32_to_cpu(req->cmd.cdw10) & 0xffff;

    req->cqe.result = 1;
    if (nvme_check_sqid(n, sqid)) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    return NVME_SUCCESS;
}

static inline void nvme_set_timestamp(NvmeCtrl *n, uint64_t ts)
{
    trace_pci_nvme_setfeat_timestamp(ts);

    n->host_timestamp = le64_to_cpu(ts);
    n->timestamp_set_qemu_clock_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);
}
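
/*
 * Reconstruct the Timestamp feature value: a 48-bit millisecond timestamp
 * advanced by the virtual clock time elapsed since it was last set, a sync
 * bit, a 3-bit origin field (01b when a host-set timestamp is in use) and
 * 12 reserved bits.
 */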
static inline uint64_t nvme_get_timestamp(const NvmeCtrl *n)
{
    uint64_t current_time = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);
    uint64_t elapsed_time = current_time - n->timestamp_set_qemu_clock_ms;

    union nvme_timestamp {
        struct {
            uint64_t timestamp:48;
            uint64_t sync:1;
            uint64_t origin:3;
            uint64_t rsvd1:12;
        };
        uint64_t all;
    };

    union nvme_timestamp ts;
    ts.all = 0;
    ts.timestamp = n->host_timestamp + elapsed_time;

    /* If the host timestamp is non-zero, set the timestamp origin */
    ts.origin = n->host_timestamp ? 0x01 : 0x00;

    trace_pci_nvme_getfeat_timestamp(ts.all);

    return cpu_to_le64(ts.all);
}

static uint16_t nvme_get_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
{
    uint64_t timestamp = nvme_get_timestamp(n);

    return nvme_c2h(n, (uint8_t *)&timestamp, sizeof(timestamp), req);
}

static int nvme_get_feature_fdp(NvmeCtrl *n, uint32_t endgrpid,
                                uint32_t *result)
{
    *result = 0;

    if (!n->subsys || !n->subsys->endgrp.fdp.enabled) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    *result = FIELD_DP16(0, FEAT_FDP, FDPE, 1);
    *result = FIELD_DP16(*result, FEAT_FDP, CONF_NDX, 0);

    return NVME_SUCCESS;
}

static uint16_t nvme_get_feature_fdp_events(NvmeCtrl *n, NvmeNamespace *ns,
                                            NvmeRequest *req, uint32_t *result)
{
    NvmeCmd *cmd = &req->cmd;
    uint32_t cdw11 = le32_to_cpu(cmd->cdw11);
    uint16_t ph = cdw11 & 0xffff;
    uint8_t noet = (cdw11 >> 16) & 0xff;
    uint16_t ruhid, ret;
    uint32_t nentries = 0;
    uint8_t s_events_ndx = 0;
    size_t s_events_siz = sizeof(NvmeFdpEventDescr) * noet;
    g_autofree NvmeFdpEventDescr *s_events = g_malloc0(s_events_siz);
    NvmeRuHandle *ruh;
    NvmeFdpEventDescr *s_event;

    if (!n->subsys || !n->subsys->endgrp.fdp.enabled) {
        return NVME_FDP_DISABLED | NVME_DNR;
    }

    if (!nvme_ph_valid(ns, ph)) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    ruhid = ns->fdp.phs[ph];
    ruh = &n->subsys->endgrp.fdp.ruhs[ruhid];

    assert(ruh);

    if (unlikely(noet == 0)) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    for (uint8_t event_type = 0; event_type < FDP_EVT_MAX; event_type++) {
        uint8_t shift = nvme_fdp_evf_shifts[event_type];
        if (!shift && event_type) {
            /*
             * Only the first entry (event_type == 0) has a shift value of 0;
             * other entries are simply unpopulated.
             */
            continue;
        }

        nentries++;

        s_event = &s_events[s_events_ndx];
        s_event->evt = event_type;
        s_event->evta = (ruh->event_filter >> shift) & 0x1;

        /* break if all `noet` entries are filled */
        if ((++s_events_ndx) == noet) {
            break;
        }
    }

    ret = nvme_c2h(n, s_events, s_events_siz, req);
    if (ret) {
        return ret;
    }

    *result = nentries;
    return NVME_SUCCESS;
}
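
/*
 * Get Features. The SEL field selects between the current value, the default
 * value, the saved value (no feature is saveable, so this is treated as the
 * default) and the supported-capabilities word for the feature identifier.
 */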
static uint16_t nvme_get_feature(NvmeCtrl *n, NvmeRequest *req)
{
    NvmeCmd *cmd = &req->cmd;
    uint32_t dw10 = le32_to_cpu(cmd->cdw10);
    uint32_t dw11 = le32_to_cpu(cmd->cdw11);
    uint32_t nsid = le32_to_cpu(cmd->nsid);
    uint32_t result;
    uint8_t fid = NVME_GETSETFEAT_FID(dw10);
    NvmeGetFeatureSelect sel = NVME_GETFEAT_SELECT(dw10);
    uint16_t iv;
    NvmeNamespace *ns;
    int i;
    uint16_t endgrpid = 0, ret = NVME_SUCCESS;

    static const uint32_t nvme_feature_default[NVME_FID_MAX] = {
        [NVME_ARBITRATION] = NVME_ARB_AB_NOLIMIT,
    };

    trace_pci_nvme_getfeat(nvme_cid(req), nsid, fid, sel, dw11);

    if (!nvme_feature_support[fid]) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    if (nvme_feature_cap[fid] & NVME_FEAT_CAP_NS) {
        if (!nvme_nsid_valid(n, nsid) || nsid == NVME_NSID_BROADCAST) {
            /*
             * The Reservation Notification Mask and Reservation Persistence
             * features require a status code of Invalid Field in Command when
             * NSID is FFFFFFFFh. Since the device does not support those
             * features we can always return Invalid Namespace or Format as we
             * should do for all other features.
             */
            return NVME_INVALID_NSID | NVME_DNR;
        }

        if (!nvme_ns(n, nsid)) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }
    }

    switch (sel) {
    case NVME_GETFEAT_SELECT_CURRENT:
        break;
    case NVME_GETFEAT_SELECT_SAVED:
        /* no features are saveable by the controller; fallthrough */
    case NVME_GETFEAT_SELECT_DEFAULT:
        goto defaults;
    case NVME_GETFEAT_SELECT_CAP:
        result = nvme_feature_cap[fid];
        goto out;
    }

    switch (fid) {
    case NVME_TEMPERATURE_THRESHOLD:
        result = 0;

        /*
         * The controller only implements the Composite Temperature sensor, so
         * return 0 for all other sensors.
         */
        if (NVME_TEMP_TMPSEL(dw11) != NVME_TEMP_TMPSEL_COMPOSITE) {
            goto out;
        }

        switch (NVME_TEMP_THSEL(dw11)) {
        case NVME_TEMP_THSEL_OVER:
            result = n->features.temp_thresh_hi;
            goto out;
        case NVME_TEMP_THSEL_UNDER:
            result = n->features.temp_thresh_low;
            goto out;
        }

        return NVME_INVALID_FIELD | NVME_DNR;
    case NVME_ERROR_RECOVERY:
        if (!nvme_nsid_valid(n, nsid)) {
            return NVME_INVALID_NSID | NVME_DNR;
        }

        ns = nvme_ns(n, nsid);
        if (unlikely(!ns)) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }

        result = ns->features.err_rec;
        goto out;
    case NVME_VOLATILE_WRITE_CACHE:
        result = 0;
        for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
            ns = nvme_ns(n, i);
            if (!ns) {
                continue;
            }

            result = blk_enable_write_cache(ns->blkconf.blk);
            if (result) {
                break;
            }
        }
        trace_pci_nvme_getfeat_vwcache(result ? "enabled" : "disabled");
        goto out;
    case NVME_ASYNCHRONOUS_EVENT_CONF:
        result = n->features.async_config;
        goto out;
    case NVME_TIMESTAMP:
        return nvme_get_feature_timestamp(n, req);
    case NVME_HOST_BEHAVIOR_SUPPORT:
        return nvme_c2h(n, (uint8_t *)&n->features.hbs,
                        sizeof(n->features.hbs), req);
    case NVME_FDP_MODE:
        endgrpid = dw11 & 0xff;

        if (endgrpid != 0x1) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }

        ret = nvme_get_feature_fdp(n, endgrpid, &result);
        if (ret) {
            return ret;
        }
        goto out;
    case NVME_FDP_EVENTS:
        if (!nvme_nsid_valid(n, nsid)) {
            return NVME_INVALID_NSID | NVME_DNR;
        }

        ns = nvme_ns(n, nsid);
        if (unlikely(!ns)) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }

        ret = nvme_get_feature_fdp_events(n, ns, req, &result);
        if (ret) {
            return ret;
        }
        goto out;
    default:
        break;
    }

defaults:
    switch (fid) {
    case NVME_TEMPERATURE_THRESHOLD:
        result = 0;

        if (NVME_TEMP_TMPSEL(dw11) != NVME_TEMP_TMPSEL_COMPOSITE) {
            break;
        }

        if (NVME_TEMP_THSEL(dw11) == NVME_TEMP_THSEL_OVER) {
            result = NVME_TEMPERATURE_WARNING;
        }

        break;
    case NVME_NUMBER_OF_QUEUES:
        result = (n->conf_ioqpairs - 1) | ((n->conf_ioqpairs - 1) << 16);
        trace_pci_nvme_getfeat_numq(result);
        break;
    case NVME_INTERRUPT_VECTOR_CONF:
        iv = dw11 & 0xffff;
        if (iv >= n->conf_ioqpairs + 1) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }

        result = iv;
        if (iv == n->admin_cq.vector) {
            result |= NVME_INTVC_NOCOALESCING;
        }
        break;
    case NVME_FDP_MODE:
        endgrpid = dw11 & 0xff;

        if (endgrpid != 0x1) {
            return NVME_INVALID_FIELD | NVME_DNR;
        }

        ret = nvme_get_feature_fdp(n, endgrpid, &result);
        if (ret) {
            return ret;
        }
        goto out;

        break;
    default:
        result = nvme_feature_default[fid];
        break;
    }

out:
    req->cqe.result = cpu_to_le32(result);
    return ret;
}

static uint16_t nvme_set_feature_timestamp(NvmeCtrl *n, NvmeRequest *req)
{
    uint16_t ret;
    uint64_t timestamp;

    ret = nvme_h2c(n, (uint8_t *)&timestamp, sizeof(timestamp), req);
    if (ret) {
        return ret;
    }

    nvme_set_timestamp(n, timestamp);

    return NVME_SUCCESS;
}

static uint16_t nvme_set_feature_fdp_events(NvmeCtrl *n, NvmeNamespace *ns,
                                            NvmeRequest *req)
{
    NvmeCmd *cmd = &req->cmd;
    uint32_t cdw11 = le32_to_cpu(cmd->cdw11);
    uint16_t ph = cdw11 & 0xffff;
    uint8_t noet = (cdw11 >> 16) & 0xff;
    uint16_t ret, ruhid;
    uint8_t enable = le32_to_cpu(cmd->cdw12) & 0x1;
    uint8_t event_mask = 0;
    unsigned int i;
    g_autofree uint8_t *events = g_malloc0(noet);
    NvmeRuHandle *ruh = NULL;

    assert(ns);

    if (!n->subsys || !n->subsys->endgrp.fdp.enabled) {
        return NVME_FDP_DISABLED | NVME_DNR;
    }

    if (!nvme_ph_valid(ns, ph)) {
        return NVME_INVALID_FIELD | NVME_DNR;
    }

    ruhid = ns->fdp.phs[ph];
    ruh = &n->subsys->endgrp.fdp.ruhs[ruhid];

    ret = nvme_h2c(n, events, noet, req);
    if (ret) {
        return ret;
    }

    for (i = 0; i < noet; i++) {
        event_mask |= (1 << nvme_fdp_evf_shifts[events[i]]);
    }

    if (enable) {
        ruh->event_filter |= event_mask;
    } else {
        ruh->event_filter = ruh->event_filter & ~event_mask;
    }

    return NVME_SUCCESS;
}
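
/*
 * Set Features. A request to save the value is rejected for features that do
 * not advertise the save capability, namespace-specific features require a
 * valid or broadcast NSID, and features that are not changeable complete
 * with Feature Identifier Not Changeable.
 */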
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_set_feature(NvmeCtrl *n, NvmeRequest *req)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2020-10-14 10:55:08 +03:00
|
|
|
NvmeNamespace *ns = NULL;
|
hw/block/nvme: support multiple namespaces
This adds support for multiple namespaces by introducing a new 'nvme-ns'
device model. The nvme device creates a bus named from the device name
('id'). The nvme-ns devices then connect to this and registers
themselves with the nvme device.
This changes how an nvme device is created. Example with two namespaces:
-drive file=nvme0n1.img,if=none,id=disk1
-drive file=nvme0n2.img,if=none,id=disk2
-device nvme,serial=deadbeef,id=nvme0
-device nvme-ns,drive=disk1,bus=nvme0,nsid=1
-device nvme-ns,drive=disk2,bus=nvme0,nsid=2
The drive property is kept on the nvme device to keep the change
backward compatible, but the property is now optional. Specifying a
drive for the nvme device will always create the namespace with nsid 1.
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2019-06-26 09:51:06 +03:00
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
NvmeCmd *cmd = &req->cmd;
|
2013-06-04 19:17:10 +04:00
|
|
|
uint32_t dw10 = le32_to_cpu(cmd->cdw10);
|
2015-06-11 13:01:39 +03:00
|
|
|
uint32_t dw11 = le32_to_cpu(cmd->cdw11);
|
2020-07-06 09:12:57 +03:00
|
|
|
uint32_t nsid = le32_to_cpu(cmd->nsid);
|
2020-07-06 09:12:56 +03:00
|
|
|
uint8_t fid = NVME_GETSETFEAT_FID(dw10);
|
2020-07-06 09:12:57 +03:00
|
|
|
uint8_t save = NVME_SETFEAT_SAVE(dw10);
|
2021-10-06 09:53:30 +03:00
|
|
|
uint16_t status;
|
2020-10-14 10:55:08 +03:00
|
|
|
int i;
|
2020-07-06 09:12:57 +03:00
|
|
|
|
2020-09-30 02:19:04 +03:00
|
|
|
trace_pci_nvme_setfeat(nvme_cid(req), nsid, fid, save, dw11);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2021-01-24 18:35:32 +03:00
|
|
|
if (save && !(nvme_feature_cap[fid] & NVME_FEAT_CAP_SAVE)) {
|
2020-07-06 09:12:57 +03:00
|
|
|
return NVME_FID_NOT_SAVEABLE | NVME_DNR;
|
|
|
|
}
|
2020-07-06 09:12:56 +03:00
|
|
|
|
|
|
|
if (!nvme_feature_support[fid]) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:57 +03:00
|
|
|
if (nvme_feature_cap[fid] & NVME_FEAT_CAP_NS) {
|
hw/block/nvme: support multiple namespaces
This adds support for multiple namespaces by introducing a new 'nvme-ns'
device model. The nvme device creates a bus named from the device name
('id'). The nvme-ns devices then connect to this and registers
themselves with the nvme device.
This changes how an nvme device is created. Example with two namespaces:
-drive file=nvme0n1.img,if=none,id=disk1
-drive file=nvme0n2.img,if=none,id=disk2
-device nvme,serial=deadbeef,id=nvme0
-device nvme-ns,drive=disk1,bus=nvme0,nsid=1
-device nvme-ns,drive=disk2,bus=nvme0,nsid=2
The drive property is kept on the nvme device to keep the change
backward compatible, but the property is now optional. Specifying a
drive for the nvme device will always create the namespace with nsid 1.
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2019-06-26 09:51:06 +03:00
|
|
|
if (nsid != NVME_NSID_BROADCAST) {
|
|
|
|
if (!nvme_nsid_valid(n, nsid)) {
|
|
|
|
return NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
ns = nvme_ns(n, nsid);
|
|
|
|
if (unlikely(!ns)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
2020-07-06 09:12:57 +03:00
|
|
|
}
|
|
|
|
} else if (nsid && nsid != NVME_NSID_BROADCAST) {
|
hw/block/nvme: support multiple namespaces
This adds support for multiple namespaces by introducing a new 'nvme-ns'
device model. The nvme device creates a bus named from the device name
('id'). The nvme-ns devices then connect to this and registers
themselves with the nvme device.
This changes how an nvme device is created. Example with two namespaces:
-drive file=nvme0n1.img,if=none,id=disk1
-drive file=nvme0n2.img,if=none,id=disk2
-device nvme,serial=deadbeef,id=nvme0
-device nvme-ns,drive=disk1,bus=nvme0,nsid=1
-device nvme-ns,drive=disk2,bus=nvme0,nsid=2
The drive property is kept on the nvme device to keep the change
backward compatible, but the property is now optional. Specifying a
drive for the nvme device will always create the namespace with nsid 1.
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2019-06-26 09:51:06 +03:00
|
|
|
if (!nvme_nsid_valid(n, nsid)) {
|
2020-07-06 09:12:57 +03:00
|
|
|
return NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_FEAT_NOT_NS_SPEC | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!(nvme_feature_cap[fid] & NVME_FEAT_CAP_CHANGE)) {
|
|
|
|
return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:56 +03:00
|
|
|
switch (fid) {
|
2020-07-06 09:12:50 +03:00
|
|
|
case NVME_TEMPERATURE_THRESHOLD:
|
|
|
|
if (NVME_TEMP_TMPSEL(dw11) != NVME_TEMP_TMPSEL_COMPOSITE) {
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (NVME_TEMP_THSEL(dw11)) {
|
|
|
|
case NVME_TEMP_THSEL_OVER:
|
|
|
|
n->features.temp_thresh_hi = NVME_TEMP_TMPTH(dw11);
|
|
|
|
break;
|
|
|
|
case NVME_TEMP_THSEL_UNDER:
|
|
|
|
n->features.temp_thresh_low = NVME_TEMP_TMPTH(dw11);
|
|
|
|
break;
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-01-15 06:27:02 +03:00
|
|
|
if ((n->temperature >= n->features.temp_thresh_hi) ||
|
|
|
|
(n->temperature <= n->features.temp_thresh_low)) {
|
2022-05-06 01:21:47 +03:00
|
|
|
nvme_smart_event(n, NVME_SMART_TEMPERATURE);
|
2020-07-06 09:12:53 +03:00
|
|
|
}
|
|
|
|
|
2020-10-14 10:55:08 +03:00
|
|
|
break;
|
|
|
|
case NVME_ERROR_RECOVERY:
|
|
|
|
if (nsid == NVME_NSID_BROADCAST) {
|
2021-04-14 22:46:00 +03:00
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
2020-10-14 10:55:08 +03:00
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat)) {
|
|
|
|
ns->features.err_rec = dw11;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
|
|
|
|
assert(ns);
|
2021-01-24 18:54:40 +03:00
|
|
|
if (NVME_ID_NS_NSFEAT_DULBE(ns->id_ns.nsfeat)) {
|
|
|
|
ns->features.err_rec = dw11;
|
|
|
|
}
|
2020-07-06 09:12:50 +03:00
|
|
|
break;
|
2015-06-11 13:01:39 +03:00
|
|
|
case NVME_VOLATILE_WRITE_CACHE:
|
2021-04-14 22:46:00 +03:00
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
hw/block/nvme: support multiple namespaces
This adds support for multiple namespaces by introducing a new 'nvme-ns'
device model. The nvme device creates a bus named from the device name
('id'). The nvme-ns devices then connect to this and registers
themselves with the nvme device.
This changes how an nvme device is created. Example with two namespaces:
-drive file=nvme0n1.img,if=none,id=disk1
-drive file=nvme0n2.img,if=none,id=disk2
-device nvme,serial=deadbeef,id=nvme0
-device nvme-ns,drive=disk1,bus=nvme0,nsid=1
-device nvme-ns,drive=disk2,bus=nvme0,nsid=2
The drive property is kept on the nvme device to keep the change
backward compatible, but the property is now optional. Specifying a
drive for the nvme device will always create the namespace with nsid 1.
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2019-06-26 09:51:06 +03:00
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!(dw11 & 0x1) && blk_enable_write_cache(ns->blkconf.blk)) {
|
|
|
|
blk_flush(ns->blkconf.blk);
|
|
|
|
}
|
|
|
|
|
|
|
|
blk_set_enable_write_cache(ns->blkconf.blk, dw11 & 1);
|
2020-07-06 09:12:55 +03:00
|
|
|
}
|
|
|
|
|
2015-06-11 13:01:39 +03:00
|
|
|
break;
|
hw/block/nvme: support multiple namespaces
This adds support for multiple namespaces by introducing a new 'nvme-ns'
device model. The nvme device creates a bus named from the device name
('id'). The nvme-ns devices then connect to this and registers
themselves with the nvme device.
This changes how an nvme device is created. Example with two namespaces:
-drive file=nvme0n1.img,if=none,id=disk1
-drive file=nvme0n2.img,if=none,id=disk2
-device nvme,serial=deadbeef,id=nvme0
-device nvme-ns,drive=disk1,bus=nvme0,nsid=1
-device nvme-ns,drive=disk2,bus=nvme0,nsid=2
The drive property is kept on the nvme device to keep the change
backward compatible, but the property is now optional. Specifying a
drive for the nvme device will always create the namespace with nsid 1.
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2019-06-26 09:51:06 +03:00
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_NUMBER_OF_QUEUES:
|
2020-07-06 09:13:01 +03:00
|
|
|
if (n->qs_created) {
|
|
|
|
return NVME_CMD_SEQ_ERROR | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:58 +03:00
|
|
|
/*
|
2021-04-16 06:52:28 +03:00
|
|
|
* NVMe v1.3, Section 5.21.1.7: FFFFh is not an allowed value for NCQR
|
2020-07-06 09:12:58 +03:00
|
|
|
* and NSQR.
|
|
|
|
*/
|
|
|
|
if ((dw11 & 0xffff) == 0xffff || ((dw11 >> 16) & 0xffff) == 0xffff) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-04-16 06:52:28 +03:00
|
|
|
trace_pci_nvme_setfeat_numq((dw11 & 0xffff) + 1,
|
|
|
|
((dw11 >> 16) & 0xffff) + 1,
|
hw/nvme: Make max_ioqpairs and msix_qsize configurable in runtime
The NVMe device defines two properties: max_ioqpairs, msix_qsize. Having
them as constants is problematic for SR-IOV support.
SR-IOV introduces virtual resources (queues, interrupts) that can be
assigned to PF and its dependent VFs. Each device, following a reset,
should work with the configured number of queues. A single constant is
no longer sufficient to hold the whole state.
This patch tries to solve the problem by introducing additional
variables in NvmeCtrl’s state. The variables for, e.g., managing queues
are therefore organized as:
- n->params.max_ioqpairs – no changes, constant set by the user
- n->(mutable_state) – (not a part of this patch) user-configurable,
specifies number of queues available _after_
reset
- n->conf_ioqpairs - (new) used in all the places instead of the ‘old’
n->params.max_ioqpairs; initialized in realize()
and updated during reset() to reflect user’s
changes to the mutable state
Since the number of available I/O queues and interrupts can change at
runtime, the buffers for SQs/CQs and the MSI-X-related structures are
allocated large enough to handle the limits, completely avoiding
complicated reallocation. A helper function (nvme_update_msixcap_ts)
updates the corresponding capability register to signal configuration
changes.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:13 +03:00
|
|
|
n->conf_ioqpairs,
|
|
|
|
n->conf_ioqpairs);
|
|
|
|
req->cqe.result = cpu_to_le32((n->conf_ioqpairs - 1) |
|
|
|
|
((n->conf_ioqpairs - 1) << 16));
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2020-07-06 09:12:53 +03:00
|
|
|
case NVME_ASYNCHRONOUS_EVENT_CONF:
|
|
|
|
n->features.async_config = dw11;
|
|
|
|
break;
|
2019-05-20 20:40:30 +03:00
|
|
|
case NVME_TIMESTAMP:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_set_feature_timestamp(n, req);
|
2021-10-06 09:50:49 +03:00
|
|
|
case NVME_HOST_BEHAVIOR_SUPPORT:
|
2021-10-06 09:53:30 +03:00
|
|
|
status = nvme_h2c(n, (uint8_t *)&n->features.hbs,
|
|
|
|
sizeof(n->features.hbs), req);
|
|
|
|
if (status) {
|
|
|
|
return status;
|
|
|
|
}
|
|
|
|
|
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
ns->id_ns.nlbaf = ns->nlbaf - 1;
|
|
|
|
if (!n->features.hbs.lbafee) {
|
|
|
|
ns->id_ns.nlbaf = MIN(ns->id_ns.nlbaf, 15);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return status;
|
2020-12-08 23:04:03 +03:00
|
|
|
case NVME_COMMAND_SET_PROFILE:
|
|
|
|
if (dw11 & 0x1ff) {
|
|
|
|
trace_pci_nvme_err_invalid_iocsci(dw11 & 0x1ff);
|
|
|
|
return NVME_CMD_SET_CMB_REJECTED | NVME_DNR;
|
|
|
|
}
|
|
|
|
break;
|
2023-02-20 14:59:26 +03:00
|
|
|
case NVME_FDP_MODE:
|
|
|
|
/* spec: abort with cmd seq err if there are one or more namespaces in the endgrp */
|
|
|
|
return NVME_CMD_SEQ_ERROR | NVME_DNR;
|
|
|
|
case NVME_FDP_EVENTS:
|
|
|
|
return nvme_set_feature_fdp_events(n, ns, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
default:
|
2020-07-06 09:12:56 +03:00
|
|
|
return NVME_FEAT_NOT_CHANGEABLE | NVME_DNR;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
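/*
 * Illustrative sketch (not part of the device model): how a host driver
 * could decode the Set Features / Number of Queues completion produced
 * above. Both fields are 0's based, so the controller's conf_ioqpairs
 * value comes back as (conf_ioqpairs - 1) in each half of dword 0. The
 * helper name and prototype are assumptions made for this example only.
 */
static inline void example_decode_num_queues(uint32_t cqe_result,
                                             uint16_t *nsq_allocated,
                                             uint16_t *ncq_allocated)
{
    /* bits 15:0 hold NSQA, bits 31:16 hold NCQA, both 0's based */
    *nsq_allocated = (cqe_result & 0xffff) + 1;
    *ncq_allocated = ((cqe_result >> 16) & 0xffff) + 1;
}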
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_aer(NvmeCtrl *n, NvmeRequest *req)
|
2020-07-06 09:12:53 +03:00
|
|
|
{
|
|
|
|
trace_pci_nvme_aer(nvme_cid(req));
|
|
|
|
|
|
|
|
if (n->outstanding_aers > n->params.aerl) {
|
|
|
|
trace_pci_nvme_aer_aerl_exceeded();
|
|
|
|
return NVME_AER_LIMIT_EXCEEDED;
|
|
|
|
}
|
|
|
|
|
|
|
|
n->aer_reqs[n->outstanding_aers] = req;
|
|
|
|
n->outstanding_aers++;
|
|
|
|
|
|
|
|
if (!QTAILQ_EMPTY(&n->aer_queue)) {
|
|
|
|
nvme_process_aers(n);
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_NO_COMPLETE;
|
|
|
|
}
|
|
|
|
|
2021-03-24 00:42:56 +03:00
|
|
|
static void nvme_update_dmrsl(NvmeCtrl *n)
|
|
|
|
{
|
|
|
|
int nsid;
|
|
|
|
|
|
|
|
for (nsid = 1; nsid <= NVME_MAX_NAMESPACES; nsid++) {
|
|
|
|
NvmeNamespace *ns = nvme_ns(n, nsid);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
n->dmrsl = MIN_NON_ZERO(n->dmrsl,
|
|
|
|
BDRV_REQUEST_MAX_BYTES / nvme_l2b(ns, 1));
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-04-15 09:39:08 +03:00
|
|
|
static void nvme_select_iocs_ns(NvmeCtrl *n, NvmeNamespace *ns)
|
|
|
|
{
|
2021-07-13 20:31:27 +03:00
|
|
|
uint32_t cc = ldl_le_p(&n->bar.cc);
|
|
|
|
|
2021-04-15 09:39:08 +03:00
|
|
|
ns->iocs = nvme_cse_iocs_none;
|
|
|
|
switch (ns->csi) {
|
|
|
|
case NVME_CSI_NVM:
|
2021-07-13 20:31:27 +03:00
|
|
|
if (NVME_CC_CSS(cc) != NVME_CC_CSS_ADMIN_ONLY) {
|
2021-04-15 09:39:08 +03:00
|
|
|
ns->iocs = nvme_cse_iocs_nvm;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
case NVME_CSI_ZONED:
|
2021-07-13 20:31:27 +03:00
|
|
|
if (NVME_CC_CSS(cc) == NVME_CC_CSS_CSI) {
|
2021-04-15 09:39:08 +03:00
|
|
|
ns->iocs = nvme_cse_iocs_zoned;
|
2021-07-13 20:31:27 +03:00
|
|
|
} else if (NVME_CC_CSS(cc) == NVME_CC_CSS_NVM) {
|
2021-04-15 09:39:08 +03:00
|
|
|
ns->iocs = nvme_cse_iocs_nvm;
|
|
|
|
}
|
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-02-06 06:18:09 +03:00
|
|
|
static uint16_t nvme_ns_attachment(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeNamespace *ns;
|
|
|
|
NvmeCtrl *ctrl;
|
|
|
|
uint16_t list[NVME_CONTROLLER_LIST_SIZE] = {};
|
|
|
|
uint32_t nsid = le32_to_cpu(req->cmd.nsid);
|
|
|
|
uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
|
2021-08-23 14:03:33 +03:00
|
|
|
uint8_t sel = dw10 & 0xf;
|
2021-02-06 06:18:09 +03:00
|
|
|
uint16_t *nr_ids = &list[0];
|
|
|
|
uint16_t *ids = &list[1];
|
|
|
|
uint16_t ret;
|
|
|
|
int i;
|
|
|
|
|
|
|
|
trace_pci_nvme_ns_attachment(nvme_cid(req), dw10 & 0xf);
|
|
|
|
|
hw/block/nvme: fix handling of private namespaces
Prior to this patch, if a private nvme-ns device (that is, a namespace
that is not linked to a subsystem) is wired up to an nvme-subsys linked
nvme controller device, the device fails to verify that the namespace id
is unique within the subsystem. NVM Express v1.4b, Section 6.1.6 ("NSID
and Namespace Usage") states that because the device supports Namespace
Management, "NSIDs *shall* be unique within the NVM subsystem".
Additionally, prior to this patch, private namespaces are not known to
the subsystem and the namespace is considered exclusive to the
controller to which it is initially wired up. However, this is not
the definition of a private namespace; per Section 1.6.33 ("private
namespace"), a private namespace is just a namespace that does not
support multipath I/O or namespace sharing, which means "that it is only
able to be attached to one controller at a time".
Fix this by always allocating namespaces in the subsystem (if one is
linked to the controller), regardless of the shared/private status of
the namespace. Whether or not the namespace is shareable is controlled
by a new `shared` nvme-ns parameter.
Finally, this fix allows the nvme-ns `subsys` parameter to be removed,
since the `shared` parameter now serves the purpose of attaching the
namespace to all controllers in the subsystem upon device realization.
It is invalid to have an nvme-ns namespace device with a linked
subsystem without the parent nvme controller device also being linked to
one. And since the nvme-ns devices will unconditionally be "attached" (in
QEMU terms, that is) to an nvme controller device through an NvmeBus, the
nvme-ns namespace device can always get a reference to the subsystem of
the controller it is explicitly (using the 'bus=' parameter) or implicitly
attaching to.
Fixes: e570768566b3 ("hw/block/nvme: support for shared namespace in subsystem")
Cc: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2021-03-23 14:43:24 +03:00
|
|
|
if (!nvme_nsid_valid(n, nsid)) {
|
|
|
|
return NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-02-06 06:18:09 +03:00
|
|
|
ns = nvme_subsys_ns(n->subsys, nsid);
|
|
|
|
if (!ns) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
ret = nvme_h2c(n, (uint8_t *)list, 4096, req);
|
|
|
|
if (ret) {
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!*nr_ids) {
|
|
|
|
return NVME_NS_CTRL_LIST_INVALID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-04-07 07:54:34 +03:00
|
|
|
*nr_ids = MIN(*nr_ids, NVME_CONTROLLER_LIST_SIZE - 1);
|
2021-02-06 06:18:09 +03:00
|
|
|
for (i = 0; i < *nr_ids; i++) {
|
|
|
|
ctrl = nvme_subsys_ctrl(n->subsys, ids[i]);
|
|
|
|
if (!ctrl) {
|
|
|
|
return NVME_NS_CTRL_LIST_INVALID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-08-23 14:03:33 +03:00
|
|
|
switch (sel) {
|
|
|
|
case NVME_NS_ATTACHMENT_ATTACH:
|
2021-03-23 14:43:24 +03:00
|
|
|
if (nvme_ns(ctrl, nsid)) {
|
2021-02-06 06:18:09 +03:00
|
|
|
return NVME_NS_ALREADY_ATTACHED | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-03-23 14:43:24 +03:00
|
|
|
if (ns->attached && !ns->params.shared) {
|
|
|
|
return NVME_NS_PRIVATE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_attach_ns(ctrl, ns);
|
2021-04-15 09:39:08 +03:00
|
|
|
nvme_select_iocs_ns(ctrl, ns);
|
2021-08-23 14:03:33 +03:00
|
|
|
|
|
|
|
break;
|
|
|
|
|
|
|
|
case NVME_NS_ATTACHMENT_DETACH:
|
2021-03-23 14:43:24 +03:00
|
|
|
if (!nvme_ns(ctrl, nsid)) {
|
2021-02-06 06:18:09 +03:00
|
|
|
return NVME_NS_NOT_ATTACHED | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-04-14 22:40:40 +03:00
|
|
|
ctrl->namespaces[nsid] = NULL;
|
2021-03-23 14:43:24 +03:00
|
|
|
ns->attached--;
|
2021-03-24 00:42:56 +03:00
|
|
|
|
|
|
|
nvme_update_dmrsl(ctrl);
|
2021-08-23 14:03:33 +03:00
|
|
|
|
|
|
|
break;
|
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
2021-02-06 06:18:09 +03:00
|
|
|
}
|
2021-02-28 11:51:02 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Add namespace id to the changed namespace id list for event clearing
|
|
|
|
* via Get Log Page command.
|
|
|
|
*/
|
|
|
|
if (!test_and_set_bit(nsid, ctrl->changed_nsids)) {
|
|
|
|
nvme_enqueue_event(ctrl, NVME_AER_TYPE_NOTICE,
|
|
|
|
NVME_AER_INFO_NOTICE_NS_ATTR_CHANGED,
|
|
|
|
NVME_LOG_CHANGED_NSLIST);
|
|
|
|
}
|
2021-02-06 06:18:09 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
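/*
 * Illustrative sketch (host side, hypothetical helper): the 4 KiB buffer
 * transferred by Namespace Attachment is a controller list, i.e. a leading
 * 16-bit count followed by the controller identifiers. That layout is why
 * nvme_ns_attachment() above splits it into nr_ids (&list[0]) and ids
 * (&list[1]).
 */
static void example_build_ctrl_list(uint16_t *buf, const uint16_t *cntlids,
                                    uint16_t count)
{
    int i;

    buf[0] = count;               /* number of identifiers in the list */
    for (i = 0; i < count; i++) {
        buf[1 + i] = cntlids[i];  /* CNTLID entries follow the count */
    }
}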
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
typedef struct NvmeFormatAIOCB {
|
|
|
|
BlockAIOCB common;
|
|
|
|
BlockAIOCB *aiocb;
|
|
|
|
NvmeRequest *req;
|
|
|
|
int ret;
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
NvmeNamespace *ns;
|
|
|
|
uint32_t nsid;
|
|
|
|
bool broadcast;
|
|
|
|
int64_t offset;
|
2021-10-06 10:40:15 +03:00
|
|
|
|
|
|
|
uint8_t lbaf;
|
|
|
|
uint8_t mset;
|
|
|
|
uint8_t pi;
|
|
|
|
uint8_t pil;
|
2021-06-17 22:06:56 +03:00
|
|
|
} NvmeFormatAIOCB;
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
static void nvme_format_cancel(BlockAIOCB *aiocb)
|
|
|
|
{
|
|
|
|
NvmeFormatAIOCB *iocb = container_of(aiocb, NvmeFormatAIOCB, common);
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2022-11-10 09:59:40 +03:00
|
|
|
iocb->ret = -ECANCELED;
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (iocb->aiocb) {
|
|
|
|
blk_aio_cancel_async(iocb->aiocb);
|
2022-11-10 09:59:40 +03:00
|
|
|
iocb->aiocb = NULL;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
2021-06-17 22:06:56 +03:00
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
static const AIOCBInfo nvme_format_aiocb_info = {
|
|
|
|
.aiocb_size = sizeof(NvmeFormatAIOCB),
|
|
|
|
.cancel_async = nvme_format_cancel,
|
|
|
|
.get_aio_context = nvme_get_aio_context,
|
|
|
|
};
|
|
|
|
|
2021-10-06 10:40:15 +03:00
|
|
|
static void nvme_format_set(NvmeNamespace *ns, uint8_t lbaf, uint8_t mset,
|
|
|
|
uint8_t pi, uint8_t pil)
|
2021-06-17 22:06:56 +03:00
|
|
|
{
|
2021-10-06 09:53:30 +03:00
|
|
|
uint8_t lbafl = lbaf & 0xf;
|
|
|
|
uint8_t lbafu = lbaf >> 4;
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
trace_pci_nvme_format_set(ns->params.nsid, lbaf, mset, pi, pil);
|
2021-02-12 15:11:39 +03:00
|
|
|
|
|
|
|
ns->id_ns.dps = (pil << 3) | pi;
|
2021-10-06 09:53:30 +03:00
|
|
|
ns->id_ns.flbas = (lbafu << 5) | (mset << 4) | lbafl;
|
2021-02-12 15:11:39 +03:00
|
|
|
|
|
|
|
nvme_ns_init_format(ns);
|
2021-06-17 22:06:56 +03:00
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2022-11-10 09:59:40 +03:00
|
|
|
static void nvme_do_format(NvmeFormatAIOCB *iocb);
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
static void nvme_format_ns_cb(void *opaque, int ret)
|
|
|
|
{
|
|
|
|
NvmeFormatAIOCB *iocb = opaque;
|
|
|
|
NvmeNamespace *ns = iocb->ns;
|
|
|
|
int bytes;
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2022-11-10 09:59:40 +03:00
|
|
|
if (iocb->ret < 0) {
|
|
|
|
goto done;
|
|
|
|
} else if (ret < 0) {
|
2021-06-17 22:06:56 +03:00
|
|
|
iocb->ret = ret;
|
|
|
|
goto done;
|
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
assert(ns);
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (iocb->offset < ns->size) {
|
|
|
|
bytes = MIN(BDRV_REQUEST_MAX_BYTES, ns->size - iocb->offset);
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
iocb->aiocb = blk_aio_pwrite_zeroes(ns->blkconf.blk, iocb->offset,
|
|
|
|
bytes, BDRV_REQ_MAY_UNMAP,
|
|
|
|
nvme_format_ns_cb, iocb);
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
iocb->offset += bytes;
|
|
|
|
return;
|
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-10-06 10:40:15 +03:00
|
|
|
nvme_format_set(ns, iocb->lbaf, iocb->mset, iocb->pi, iocb->pil);
|
2021-06-17 22:06:56 +03:00
|
|
|
ns->status = 0x0;
|
|
|
|
iocb->ns = NULL;
|
|
|
|
iocb->offset = 0;
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
done:
|
2022-11-10 09:59:40 +03:00
|
|
|
nvme_do_format(iocb);
|
2021-06-17 22:06:56 +03:00
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
static uint16_t nvme_format_check(NvmeNamespace *ns, uint8_t lbaf, uint8_t pi)
|
|
|
|
{
|
|
|
|
if (ns->params.zoned) {
|
|
|
|
return NVME_INVALID_FORMAT | NVME_DNR;
|
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (lbaf > ns->id_ns.nlbaf) {
|
|
|
|
return NVME_INVALID_FORMAT | NVME_DNR;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
|
|
|
|
2022-02-14 11:29:01 +03:00
|
|
|
if (pi && (ns->id_ns.lbaf[lbaf].ms < nvme_pi_tuple_size(ns))) {
|
2021-06-17 22:06:56 +03:00
|
|
|
return NVME_INVALID_FORMAT | NVME_DNR;
|
2021-03-22 09:10:24 +03:00
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (pi && pi > NVME_ID_NS_DPS_TYPE_3) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
2021-03-22 09:10:24 +03:00
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
|
|
|
|
2022-11-10 09:59:40 +03:00
|
|
|
static void nvme_do_format(NvmeFormatAIOCB *iocb)
|
2021-02-12 15:11:39 +03:00
|
|
|
{
|
2021-06-17 22:06:56 +03:00
|
|
|
NvmeRequest *req = iocb->req;
|
|
|
|
NvmeCtrl *n = nvme_ctrl(req);
|
2021-11-16 16:26:52 +03:00
|
|
|
uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
|
|
|
|
uint8_t lbaf = dw10 & 0xf;
|
|
|
|
uint8_t pi = (dw10 >> 5) & 0x7;
|
2021-02-12 15:11:39 +03:00
|
|
|
uint16_t status;
|
|
|
|
int i;
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (iocb->ret < 0) {
|
|
|
|
goto done;
|
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (iocb->broadcast) {
|
|
|
|
for (i = iocb->nsid + 1; i <= NVME_MAX_NAMESPACES; i++) {
|
|
|
|
iocb->ns = nvme_ns(n, i);
|
|
|
|
if (iocb->ns) {
|
|
|
|
iocb->nsid = i;
|
|
|
|
break;
|
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
2021-06-17 22:06:56 +03:00
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (!iocb->ns) {
|
|
|
|
goto done;
|
|
|
|
}
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
status = nvme_format_check(iocb->ns, lbaf, pi);
|
2021-06-17 22:06:56 +03:00
|
|
|
if (status) {
|
|
|
|
req->status = status;
|
|
|
|
goto done;
|
|
|
|
}
|
|
|
|
|
|
|
|
iocb->ns->status = NVME_FORMAT_IN_PROGRESS;
|
|
|
|
nvme_format_ns_cb(iocb, 0);
|
|
|
|
return;
|
|
|
|
|
|
|
|
done:
|
|
|
|
iocb->common.cb(iocb->common.opaque, iocb->ret);
|
|
|
|
qemu_aio_unref(iocb);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_format(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
NvmeFormatAIOCB *iocb;
|
|
|
|
uint32_t nsid = le32_to_cpu(req->cmd.nsid);
|
2021-10-06 10:40:15 +03:00
|
|
|
uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
|
|
|
|
uint8_t lbaf = dw10 & 0xf;
|
|
|
|
uint8_t mset = (dw10 >> 4) & 0x1;
|
|
|
|
uint8_t pi = (dw10 >> 5) & 0x7;
|
|
|
|
uint8_t pil = (dw10 >> 8) & 0x1;
|
2021-10-06 09:53:30 +03:00
|
|
|
uint8_t lbafu = (dw10 >> 12) & 0x3;
|
2021-06-17 22:06:56 +03:00
|
|
|
uint16_t status;
|
|
|
|
|
|
|
|
iocb = qemu_aio_get(&nvme_format_aiocb_info, NULL, nvme_misc_cb, req);
|
|
|
|
|
|
|
|
iocb->req = req;
|
|
|
|
iocb->ret = 0;
|
|
|
|
iocb->ns = NULL;
|
|
|
|
iocb->nsid = 0;
|
2021-10-06 10:40:15 +03:00
|
|
|
iocb->lbaf = lbaf;
|
|
|
|
iocb->mset = mset;
|
|
|
|
iocb->pi = pi;
|
|
|
|
iocb->pil = pil;
|
2021-06-17 22:06:56 +03:00
|
|
|
iocb->broadcast = (nsid == NVME_NSID_BROADCAST);
|
|
|
|
iocb->offset = 0;
|
|
|
|
|
2021-10-06 09:53:30 +03:00
|
|
|
if (n->features.hbs.lbafee) {
|
|
|
|
iocb->lbaf |= lbafu << 4;
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
if (!iocb->broadcast) {
|
|
|
|
if (!nvme_nsid_valid(n, nsid)) {
|
|
|
|
status = NVME_INVALID_NSID | NVME_DNR;
|
|
|
|
goto out;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
iocb->ns = nvme_ns(n, nsid);
|
|
|
|
if (!iocb->ns) {
|
|
|
|
status = NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
goto out;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
req->aiocb = &iocb->common;
|
2022-11-10 09:59:40 +03:00
|
|
|
nvme_do_format(iocb);
|
2021-06-17 22:06:56 +03:00
|
|
|
|
|
|
|
return NVME_NO_COMPLETE;
|
2021-02-12 15:11:39 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
out:
|
|
|
|
qemu_aio_unref(iocb);
|
2022-11-10 09:59:40 +03:00
|
|
|
|
2021-06-17 22:06:56 +03:00
|
|
|
return status;
|
2021-02-12 15:11:39 +03:00
|
|
|
}
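/*
 * Worked example (illustrative only; the helper is not part of the device):
 * packing CDW10 for Format NVM with the same field layout that
 * nvme_format()/nvme_do_format() parse above. For instance, LBA format 2
 * with interleaved metadata (MSET=1), protection type 1 and PIL=0 packs to
 * example_format_cdw10(2, 1, 1, 0, 0) == 0x32.
 */
static uint32_t example_format_cdw10(uint8_t lbaf, uint8_t mset, uint8_t pi,
                                     uint8_t pil, uint8_t lbafu)
{
    /* LBAFU is only meaningful when Host Behavior Support enables LBAFEE */
    return (lbafu << 12) | (pil << 8) | (pi << 5) | (mset << 4) | (lbaf & 0xf);
}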
|
|
|
|
|
2022-05-09 17:16:17 +03:00
|
|
|
static void nvme_get_virt_res_num(NvmeCtrl *n, uint8_t rt, int *num_total,
|
|
|
|
int *num_prim, int *num_sec)
|
|
|
|
{
|
|
|
|
*num_total = le32_to_cpu(rt ?
|
|
|
|
n->pri_ctrl_cap.vifrt : n->pri_ctrl_cap.vqfrt);
|
|
|
|
*num_prim = le16_to_cpu(rt ?
|
|
|
|
n->pri_ctrl_cap.virfap : n->pri_ctrl_cap.vqrfap);
|
|
|
|
*num_sec = le16_to_cpu(rt ? n->pri_ctrl_cap.virfa : n->pri_ctrl_cap.vqrfa);
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_assign_virt_res_to_prim(NvmeCtrl *n, NvmeRequest *req,
|
|
|
|
uint16_t cntlid, uint8_t rt,
|
|
|
|
int nr)
|
|
|
|
{
|
|
|
|
int num_total, num_prim, num_sec;
|
|
|
|
|
|
|
|
if (cntlid != n->cntlid) {
|
|
|
|
return NVME_INVALID_CTRL_ID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_get_virt_res_num(n, rt, &num_total, &num_prim, &num_sec);
|
|
|
|
|
|
|
|
if (nr > num_total) {
|
|
|
|
return NVME_INVALID_NUM_RESOURCES | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (nr > num_total - num_sec) {
|
|
|
|
return NVME_INVALID_RESOURCE_ID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (rt) {
|
|
|
|
n->next_pri_ctrl_cap.virfap = cpu_to_le16(nr);
|
|
|
|
} else {
|
|
|
|
n->next_pri_ctrl_cap.vqrfap = cpu_to_le16(nr);
|
|
|
|
}
|
|
|
|
|
|
|
|
req->cqe.result = cpu_to_le32(nr);
|
|
|
|
return req->status;
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_update_virt_res(NvmeCtrl *n, NvmeSecCtrlEntry *sctrl,
|
|
|
|
uint8_t rt, int nr)
|
|
|
|
{
|
|
|
|
int prev_nr, prev_total;
|
|
|
|
|
|
|
|
if (rt) {
|
|
|
|
prev_nr = le16_to_cpu(sctrl->nvi);
|
|
|
|
prev_total = le32_to_cpu(n->pri_ctrl_cap.virfa);
|
|
|
|
sctrl->nvi = cpu_to_le16(nr);
|
|
|
|
n->pri_ctrl_cap.virfa = cpu_to_le32(prev_total + nr - prev_nr);
|
|
|
|
} else {
|
|
|
|
prev_nr = le16_to_cpu(sctrl->nvq);
|
|
|
|
prev_total = le32_to_cpu(n->pri_ctrl_cap.vqrfa);
|
|
|
|
sctrl->nvq = cpu_to_le16(nr);
|
|
|
|
n->pri_ctrl_cap.vqrfa = cpu_to_le32(prev_total + nr - prev_nr);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_assign_virt_res_to_sec(NvmeCtrl *n, NvmeRequest *req,
|
|
|
|
uint16_t cntlid, uint8_t rt, int nr)
|
|
|
|
{
|
|
|
|
int num_total, num_prim, num_sec, num_free, diff, limit;
|
|
|
|
NvmeSecCtrlEntry *sctrl;
|
|
|
|
|
|
|
|
sctrl = nvme_sctrl_for_cntlid(n, cntlid);
|
|
|
|
if (!sctrl) {
|
|
|
|
return NVME_INVALID_CTRL_ID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (sctrl->scs) {
|
|
|
|
return NVME_INVALID_SEC_CTRL_STATE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
limit = le16_to_cpu(rt ? n->pri_ctrl_cap.vifrsm : n->pri_ctrl_cap.vqfrsm);
|
|
|
|
if (nr > limit) {
|
|
|
|
return NVME_INVALID_NUM_RESOURCES | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_get_virt_res_num(n, rt, &num_total, &num_prim, &num_sec);
|
|
|
|
num_free = num_total - num_prim - num_sec;
|
|
|
|
diff = nr - le16_to_cpu(rt ? sctrl->nvi : sctrl->nvq);
|
|
|
|
|
|
|
|
if (diff > num_free) {
|
|
|
|
return NVME_INVALID_RESOURCE_ID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_update_virt_res(n, sctrl, rt, nr);
|
|
|
|
req->cqe.result = cpu_to_le32(nr);
|
|
|
|
|
|
|
|
return req->status;
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_virt_set_state(NvmeCtrl *n, uint16_t cntlid, bool online)
|
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
2022-05-09 17:16:17 +03:00
|
|
|
NvmeCtrl *sn = NULL;
|
|
|
|
NvmeSecCtrlEntry *sctrl;
|
|
|
|
int vf_index;
|
|
|
|
|
|
|
|
sctrl = nvme_sctrl_for_cntlid(n, cntlid);
|
|
|
|
if (!sctrl) {
|
|
|
|
return NVME_INVALID_CTRL_ID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (!pci_is_vf(pci)) {
|
2022-05-09 17:16:17 +03:00
|
|
|
vf_index = le16_to_cpu(sctrl->vfn) - 1;
|
2022-12-08 14:43:18 +03:00
|
|
|
sn = NVME(pcie_sriov_get_vf_at_index(pci, vf_index));
|
2022-05-09 17:16:17 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (online) {
|
|
|
|
if (!sctrl->nvi || (le16_to_cpu(sctrl->nvq) < 2) || !sn) {
|
|
|
|
return NVME_INVALID_SEC_CTRL_STATE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (!sctrl->scs) {
|
|
|
|
sctrl->scs = 0x1;
|
|
|
|
nvme_ctrl_reset(sn, NVME_RESET_FUNCTION);
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
nvme_update_virt_res(n, sctrl, NVME_VIRT_RES_INTERRUPT, 0);
|
|
|
|
nvme_update_virt_res(n, sctrl, NVME_VIRT_RES_QUEUE, 0);
|
|
|
|
|
|
|
|
if (sctrl->scs) {
|
|
|
|
sctrl->scs = 0x0;
|
|
|
|
if (sn) {
|
|
|
|
nvme_ctrl_reset(sn, NVME_RESET_FUNCTION);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_virt_mngmt(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
|
|
|
|
uint32_t dw11 = le32_to_cpu(req->cmd.cdw11);
|
|
|
|
uint8_t act = dw10 & 0xf;
|
|
|
|
uint8_t rt = (dw10 >> 8) & 0x7;
|
|
|
|
uint16_t cntlid = (dw10 >> 16) & 0xffff;
|
|
|
|
int nr = dw11 & 0xffff;
|
|
|
|
|
|
|
|
trace_pci_nvme_virt_mngmt(nvme_cid(req), act, cntlid, rt ? "VI" : "VQ", nr);
|
|
|
|
|
|
|
|
if (rt != NVME_VIRT_RES_QUEUE && rt != NVME_VIRT_RES_INTERRUPT) {
|
|
|
|
return NVME_INVALID_RESOURCE_ID | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (act) {
|
|
|
|
case NVME_VIRT_MNGMT_ACTION_SEC_ASSIGN:
|
|
|
|
return nvme_assign_virt_res_to_sec(n, req, cntlid, rt, nr);
|
|
|
|
case NVME_VIRT_MNGMT_ACTION_PRM_ALLOC:
|
|
|
|
return nvme_assign_virt_res_to_prim(n, req, cntlid, rt, nr);
|
|
|
|
case NVME_VIRT_MNGMT_ACTION_SEC_ONLINE:
|
|
|
|
return nvme_virt_set_state(n, cntlid, true);
|
|
|
|
case NVME_VIRT_MNGMT_ACTION_SEC_OFFLINE:
|
|
|
|
return nvme_virt_set_state(n, cntlid, false);
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
}
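/*
 * Illustrative sketch (hypothetical host-side helper): packing the
 * Virtualization Management command dwords with the same field layout
 * that nvme_virt_mngmt() extracts above.
 */
static void example_virt_mngmt_cdws(uint8_t act, uint8_t rt, uint16_t cntlid,
                                    uint16_t nr, uint32_t *cdw10,
                                    uint32_t *cdw11)
{
    /* CDW10: bits 3:0 action, 10:8 resource type, 31:16 controller id */
    *cdw10 = ((uint32_t)cntlid << 16) | ((uint32_t)(rt & 0x7) << 8) | (act & 0xf);
    /* CDW11: bits 15:0 number of resources */
    *cdw11 = nr;
}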
|
|
|
|
|
2022-06-16 15:34:07 +03:00
|
|
|
static uint16_t nvme_dbbuf_config(NvmeCtrl *n, const NvmeRequest *req)
|
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
2022-06-16 15:34:07 +03:00
|
|
|
uint64_t dbs_addr = le64_to_cpu(req->cmd.dptr.prp1);
|
|
|
|
uint64_t eis_addr = le64_to_cpu(req->cmd.dptr.prp2);
|
|
|
|
int i;
|
|
|
|
|
|
|
|
/* Address should be page aligned */
|
|
|
|
if (dbs_addr & (n->page_size - 1) || eis_addr & (n->page_size - 1)) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
/* Save shadow buffer base addr for use during queue creation */
|
|
|
|
n->dbbuf_dbs = dbs_addr;
|
|
|
|
n->dbbuf_eis = eis_addr;
|
|
|
|
n->dbbuf_enabled = true;
|
|
|
|
|
|
|
|
for (i = 0; i < n->params.max_ioqpairs + 1; i++) {
|
|
|
|
NvmeSQueue *sq = n->sq[i];
|
|
|
|
NvmeCQueue *cq = n->cq[i];
|
|
|
|
|
|
|
|
if (sq) {
|
|
|
|
/*
|
|
|
|
* CAP.DSTRD is 0, so offset of ith sq db_addr is (i<<3)
|
|
|
|
* nvme_process_db() uses this hard-coded way to calculate
|
|
|
|
* doorbell offsets. Be consistent with that here.
|
|
|
|
*/
|
|
|
|
sq->db_addr = dbs_addr + (i << 3);
|
|
|
|
sq->ei_addr = eis_addr + (i << 3);
|
2022-12-08 14:43:18 +03:00
|
|
|
pci_dma_write(pci, sq->db_addr, &sq->tail, sizeof(sq->tail));
|
2022-07-05 17:24:03 +03:00
|
|
|
|
|
|
|
if (n->params.ioeventfd && sq->sqid != 0) {
|
|
|
|
if (!nvme_init_sq_ioeventfd(sq)) {
|
|
|
|
sq->ioeventfd_enabled = true;
|
|
|
|
}
|
|
|
|
}
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (cq) {
|
|
|
|
/* CAP.DSTRD is 0, so offset of ith cq db_addr is (i<<3)+(1<<2) */
|
|
|
|
cq->db_addr = dbs_addr + (i << 3) + (1 << 2);
|
|
|
|
cq->ei_addr = eis_addr + (i << 3) + (1 << 2);
|
2022-12-08 14:43:18 +03:00
|
|
|
pci_dma_write(pci, cq->db_addr, &cq->head, sizeof(cq->head));
|
2022-07-05 17:24:03 +03:00
|
|
|
|
|
|
|
if (n->params.ioeventfd && cq->cqid != 0) {
|
|
|
|
if (!nvme_init_cq_ioeventfd(cq)) {
|
|
|
|
cq->ioeventfd_enabled = true;
|
|
|
|
}
|
|
|
|
}
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-06-16 15:34:08 +03:00
|
|
|
trace_pci_nvme_dbbuf_config(dbs_addr, eis_addr);
|
|
|
|
|
2022-06-16 15:34:07 +03:00
|
|
|
return NVME_SUCCESS;
|
|
|
|
}
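/*
 * Illustrative helper (assumption, not used by the device): the shadow
 * doorbell and eventidx offsets computed above are the CAP.DSTRD == 0
 * special case of the general per-queue formula (2 * qid + is_cq) *
 * (4 << dstrd), i.e. (qid << 3) for submission queues and
 * (qid << 3) + (1 << 2) for completion queues.
 */
static inline uint64_t example_dbbuf_offset(uint16_t qid, bool is_cq,
                                            uint8_t dstrd)
{
    return (uint64_t)(2 * qid + is_cq) * (4 << dstrd);
}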
|
|
|
|
|
2023-02-20 14:59:25 +03:00
|
|
|
static uint16_t nvme_directive_send(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint16_t nvme_directive_receive(NvmeCtrl *n, NvmeRequest *req)
|
|
|
|
{
|
2023-02-20 14:59:26 +03:00
|
|
|
NvmeNamespace *ns;
|
2023-02-20 14:59:25 +03:00
|
|
|
uint32_t dw10 = le32_to_cpu(req->cmd.cdw10);
|
|
|
|
uint32_t dw11 = le32_to_cpu(req->cmd.cdw11);
|
|
|
|
uint32_t nsid = le32_to_cpu(req->cmd.nsid);
|
|
|
|
uint8_t doper, dtype;
|
|
|
|
uint32_t numd, trans_len;
|
|
|
|
NvmeDirectiveIdentify id = {
|
|
|
|
.supported = 1 << NVME_DIRECTIVE_IDENTIFY,
|
|
|
|
.enabled = 1 << NVME_DIRECTIVE_IDENTIFY,
|
|
|
|
};
|
|
|
|
|
|
|
|
numd = dw10 + 1;
|
|
|
|
doper = dw11 & 0xff;
|
|
|
|
dtype = (dw11 >> 8) & 0xff;
|
|
|
|
|
|
|
|
trans_len = MIN(sizeof(NvmeDirectiveIdentify), numd << 2);
|
|
|
|
|
|
|
|
if (nsid == NVME_NSID_BROADCAST || dtype != NVME_DIRECTIVE_IDENTIFY ||
|
|
|
|
doper != NVME_DIRECTIVE_RETURN_PARAMS) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2023-02-20 14:59:26 +03:00
|
|
|
ns = nvme_ns(n, nsid);
|
|
|
|
if (!ns) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
switch (dtype) {
|
|
|
|
case NVME_DIRECTIVE_IDENTIFY:
|
|
|
|
switch (doper) {
|
|
|
|
case NVME_DIRECTIVE_RETURN_PARAMS:
|
|
|
|
if (ns->endgrp->fdp.enabled) {
|
|
|
|
id.supported |= 1 << NVME_DIRECTIVE_DATA_PLACEMENT;
|
|
|
|
id.enabled |= 1 << NVME_DIRECTIVE_DATA_PLACEMENT;
|
|
|
|
id.persistent |= 1 << NVME_DIRECTIVE_DATA_PLACEMENT;
|
|
|
|
}
|
|
|
|
|
|
|
|
return nvme_c2h(n, (uint8_t *)&id, trans_len, req);
|
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
|
|
|
default:
|
|
|
|
return NVME_INVALID_FIELD;
|
|
|
|
}
|
2023-02-20 14:59:25 +03:00
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
static uint16_t nvme_admin_cmd(NvmeCtrl *n, NvmeRequest *req)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2020-08-24 23:11:33 +03:00
|
|
|
trace_pci_nvme_admin_cmd(nvme_cid(req), nvme_sqid(req), req->cmd.opcode,
|
|
|
|
nvme_adm_opc_str(req->cmd.opcode));
|
2020-07-06 09:12:48 +03:00
|
|
|
|
2020-12-08 23:04:02 +03:00
|
|
|
if (!(nvme_cse_acs[req->cmd.opcode] & NVME_CMD_EFF_CSUPP)) {
|
|
|
|
trace_pci_nvme_err_invalid_admin_opc(req->cmd.opcode);
|
|
|
|
return NVME_INVALID_OPCODE | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-02-07 23:21:45 +03:00
|
|
|
/* SGLs shall not be used for Admin commands in NVMe over PCIe */
|
|
|
|
if (NVME_CMD_FLAGS_PSDT(req->cmd.flags) != NVME_PSDT_PRP) {
|
|
|
|
return NVME_INVALID_FIELD | NVME_DNR;
|
|
|
|
}
|
|
|
|
|
2021-09-15 18:43:30 +03:00
|
|
|
if (NVME_CMD_FLAGS_FUSE(req->cmd.flags)) {
|
|
|
|
return NVME_INVALID_FIELD;
|
|
|
|
}
|
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
switch (req->cmd.opcode) {
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_DELETE_SQ:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_del_sq(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_CREATE_SQ:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_create_sq(n, req);
|
2020-07-06 09:12:52 +03:00
|
|
|
case NVME_ADM_CMD_GET_LOG_PAGE:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_get_log(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_DELETE_CQ:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_del_cq(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_CREATE_CQ:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_create_cq(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_IDENTIFY:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_identify(n, req);
|
2020-07-06 09:12:49 +03:00
|
|
|
case NVME_ADM_CMD_ABORT:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_abort(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_SET_FEATURES:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_set_feature(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
case NVME_ADM_CMD_GET_FEATURES:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_get_feature(n, req);
|
2020-07-06 09:12:53 +03:00
|
|
|
case NVME_ADM_CMD_ASYNC_EV_REQ:
|
2020-07-20 13:44:01 +03:00
|
|
|
return nvme_aer(n, req);
|
2021-02-06 06:18:09 +03:00
|
|
|
case NVME_ADM_CMD_NS_ATTACHMENT:
|
|
|
|
return nvme_ns_attachment(n, req);
|
2022-05-09 17:16:17 +03:00
|
|
|
case NVME_ADM_CMD_VIRT_MNGMT:
|
|
|
|
return nvme_virt_mngmt(n, req);
|
2022-06-16 15:34:07 +03:00
|
|
|
case NVME_ADM_CMD_DBBUF_CONFIG:
|
|
|
|
return nvme_dbbuf_config(n, req);
|
2021-02-12 15:11:39 +03:00
|
|
|
case NVME_ADM_CMD_FORMAT_NVM:
|
|
|
|
return nvme_format(n, req);
|
2023-02-20 14:59:25 +03:00
|
|
|
case NVME_ADM_CMD_DIRECTIVE_SEND:
|
|
|
|
return nvme_directive_send(n, req);
|
|
|
|
case NVME_ADM_CMD_DIRECTIVE_RECV:
|
|
|
|
return nvme_directive_receive(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
default:
|
2020-12-08 23:04:02 +03:00
|
|
|
assert(false);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2020-12-08 23:04:02 +03:00
|
|
|
|
|
|
|
return NVME_INVALID_OPCODE | NVME_DNR;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
2022-06-16 15:34:07 +03:00
|
|
|
static void nvme_update_sq_eventidx(const NvmeSQueue *sq)
|
|
|
|
{
|
2022-12-12 13:30:52 +03:00
|
|
|
uint32_t v = cpu_to_le32(sq->tail);
|
|
|
|
|
2022-12-08 14:49:04 +03:00
|
|
|
trace_pci_nvme_update_sq_eventidx(sq->sqid, sq->tail);
|
|
|
|
|
2022-12-12 13:30:52 +03:00
|
|
|
pci_dma_write(PCI_DEVICE(sq->ctrl), sq->ei_addr, &v, sizeof(v));
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_update_sq_tail(NvmeSQueue *sq)
|
|
|
|
{
|
2022-12-12 13:30:52 +03:00
|
|
|
uint32_t v;
|
|
|
|
|
|
|
|
pci_dma_read(PCI_DEVICE(sq->ctrl), sq->db_addr, &v, sizeof(v));
|
|
|
|
|
|
|
|
sq->tail = le32_to_cpu(v);
|
2022-12-08 14:49:04 +03:00
|
|
|
|
|
|
|
trace_pci_nvme_update_sq_tail(sq->sqid, sq->tail);
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static void nvme_process_sq(void *opaque)
|
|
|
|
{
|
|
|
|
NvmeSQueue *sq = opaque;
|
|
|
|
NvmeCtrl *n = sq->ctrl;
|
|
|
|
NvmeCQueue *cq = n->cq[sq->cqid];
|
|
|
|
|
|
|
|
uint16_t status;
|
|
|
|
hwaddr addr;
|
|
|
|
NvmeCmd cmd;
|
|
|
|
NvmeRequest *req;
|
|
|
|
|
2022-06-16 15:34:07 +03:00
|
|
|
if (n->dbbuf_enabled) {
|
|
|
|
nvme_update_sq_tail(sq);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
while (!(nvme_sq_empty(sq) || QTAILQ_EMPTY(&sq->req_list))) {
|
|
|
|
addr = sq->dma_addr + sq->head * n->sqe_size;
|
2019-10-11 09:32:00 +03:00
|
|
|
if (nvme_addr_read(n, addr, (void *)&cmd, sizeof(cmd))) {
|
|
|
|
trace_pci_nvme_err_addr_read(addr);
|
|
|
|
trace_pci_nvme_err_cfs();
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.csts, NVME_CSTS_FAILED);
|
2019-10-11 09:32:00 +03:00
|
|
|
break;
|
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
nvme_inc_sq_head(sq);
|
|
|
|
|
|
|
|
req = QTAILQ_FIRST(&sq->req_list);
|
|
|
|
QTAILQ_REMOVE(&sq->req_list, req, entry);
|
|
|
|
QTAILQ_INSERT_TAIL(&sq->out_req_list, req, entry);
|
2020-07-20 13:44:01 +03:00
|
|
|
nvme_req_clear(req);
|
2013-06-04 19:17:10 +04:00
|
|
|
req->cqe.cid = cmd.cid;
|
2020-07-20 13:44:01 +03:00
|
|
|
memcpy(&req->cmd, &cmd, sizeof(NvmeCmd));
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2020-07-20 13:44:01 +03:00
|
|
|
status = sq->sqid ? nvme_io_cmd(n, req) :
|
|
|
|
nvme_admin_cmd(n, req);
|
2013-06-04 19:17:10 +04:00
|
|
|
if (status != NVME_NO_COMPLETE) {
|
|
|
|
req->status = status;
|
|
|
|
nvme_enqueue_req_completion(cq, req);
|
|
|
|
}
|
2022-06-16 15:34:07 +03:00
|
|
|
|
|
|
|
if (n->dbbuf_enabled) {
|
|
|
|
nvme_update_sq_eventidx(sq);
|
|
|
|
nvme_update_sq_tail(sq);
|
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:13 +03:00
|
|
|
static void nvme_update_msixcap_ts(PCIDevice *pci_dev, uint32_t table_size)
|
|
|
|
{
|
|
|
|
uint8_t *config;
|
|
|
|
|
|
|
|
if (!msix_present(pci_dev)) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
assert(table_size > 0 && table_size <= pci_dev->msix_entries_nr);
|
|
|
|
|
|
|
|
config = pci_dev->config + pci_dev->msix_cap;
|
|
|
|
pci_set_word_by_mask(config + PCI_MSIX_FLAGS, PCI_MSIX_FLAGS_QSIZE,
|
|
|
|
table_size - 1);
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:17 +03:00
|
|
|
static void nvme_activate_virt_res(NvmeCtrl *n)
|
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci_dev = PCI_DEVICE(n);
|
2022-05-09 17:16:17 +03:00
|
|
|
NvmePriCtrlCap *cap = &n->pri_ctrl_cap;
|
|
|
|
NvmeSecCtrlEntry *sctrl;
|
|
|
|
|
|
|
|
/* -1 to account for the admin queue */
|
|
|
|
if (pci_is_vf(pci_dev)) {
|
|
|
|
sctrl = nvme_sctrl(n);
|
|
|
|
cap->vqprt = sctrl->nvq;
|
|
|
|
cap->viprt = sctrl->nvi;
|
|
|
|
n->conf_ioqpairs = sctrl->nvq ? le16_to_cpu(sctrl->nvq) - 1 : 0;
|
|
|
|
n->conf_msix_qsize = sctrl->nvi ? le16_to_cpu(sctrl->nvi) : 1;
|
|
|
|
} else {
|
|
|
|
cap->vqrfap = n->next_pri_ctrl_cap.vqrfap;
|
|
|
|
cap->virfap = n->next_pri_ctrl_cap.virfap;
|
|
|
|
n->conf_ioqpairs = le16_to_cpu(cap->vqprt) +
|
|
|
|
le16_to_cpu(cap->vqrfap) - 1;
|
|
|
|
n->conf_msix_qsize = le16_to_cpu(cap->viprt) +
|
|
|
|
le16_to_cpu(cap->virfap);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
hw/nvme: Implement the Function Level Reset
This patch implements the Function Level Reset, a feature currently not
implemented for the Nvme device, while listed as a mandatory ("shall")
in the 1.4 spec.
The implementation reuses FLR-related building blocks defined for the
pci-bridge module, and follows the same logic:
- FLR capability is advertised in the PCIE config,
- custom pci_write_config callback detects a write to the trigger
register and performs the PCI reset,
- which, eventually, calls the custom dc->reset handler.
Depending on reset type, parts of the state should (or should not) be
cleared. To distinguish the type of reset, an additional parameter is
passed to the reset function.
This patch also enables advertisement of the Power Management PCI
capability. The main reason behind it is to announce the no_soft_reset=1
bit, to signal SR-IOV support where each VF can be reset individually.
The implementation purposely ignores writes to the PMCS.PS register,
as even such naïve behavior is enough to correctly handle the D3->D0
transition.
It's worth noting that the power state transition back to D3, with
all the corresponding side effects, wasn't and still isn't handled
properly.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:12 +03:00
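/*
 * Sketch of the config-write plumbing described in the commit message
 * above (illustrative; the hook name here is an assumption). A write to
 * the FLR trigger bit in the PCIe device control register is routed
 * through the generic FLR helper, which resets the function and
 * eventually reaches nvme_ctrl_reset() below with NVME_RESET_FUNCTION.
 */
static void example_nvme_pci_write_config(PCIDevice *dev, uint32_t address,
                                          uint32_t val, int len)
{
    pci_default_write_config(dev, address, val, len);
    pcie_cap_flr_write_config(dev, address, val, len); /* may trigger the reset */
}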
|
|
|
static void nvme_ctrl_reset(NvmeCtrl *n, NvmeResetType rst)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci_dev = PCI_DEVICE(n);
|
2022-05-09 17:16:17 +03:00
|
|
|
NvmeSecCtrlEntry *sctrl;
|
2019-06-26 09:51:06 +03:00
|
|
|
NvmeNamespace *ns;
|
2013-06-04 19:17:10 +04:00
|
|
|
int i;
|
|
|
|
|
2021-04-14 22:46:00 +03:00
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
2019-06-26 09:51:06 +03:00
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
|
|
|
nvme_ns_drain(ns);
|
|
|
|
}
|
2018-11-06 15:16:55 +03:00
|
|
|
|
2020-06-09 22:03:19 +03:00
|
|
|
for (i = 0; i < n->params.max_ioqpairs + 1; i++) {
|
2013-06-04 19:17:10 +04:00
|
|
|
if (n->sq[i] != NULL) {
|
|
|
|
nvme_free_sq(n->sq[i], n);
|
|
|
|
}
|
|
|
|
}
|
2020-06-09 22:03:19 +03:00
|
|
|
for (i = 0; i < n->params.max_ioqpairs + 1; i++) {
|
2013-06-04 19:17:10 +04:00
|
|
|
if (n->cq[i] != NULL) {
|
|
|
|
nvme_free_cq(n->cq[i], n);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:53 +03:00
|
|
|
while (!QTAILQ_EMPTY(&n->aer_queue)) {
|
|
|
|
NvmeAsyncEvent *event = QTAILQ_FIRST(&n->aer_queue);
|
|
|
|
QTAILQ_REMOVE(&n->aer_queue, event, entry);
|
|
|
|
g_free(event);
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:17 +03:00
|
|
|
if (n->params.sriov_max_vfs) {
|
|
|
|
if (!pci_is_vf(pci_dev)) {
|
|
|
|
for (i = 0; i < n->sec_ctrl_list.numcntl; i++) {
|
|
|
|
sctrl = &n->sec_ctrl_list.sec[i];
|
|
|
|
nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false);
|
|
|
|
}
|
|
|
|
|
|
|
|
if (rst != NVME_RESET_CONTROLLER) {
|
|
|
|
pcie_sriov_pf_disable_vfs(pci_dev);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:12 +03:00
|
|
|
if (rst != NVME_RESET_CONTROLLER) {
|
2022-05-09 17:16:17 +03:00
|
|
|
nvme_activate_virt_res(n);
|
2022-05-09 17:16:12 +03:00
|
|
|
}
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:53 +03:00
|
|
|
n->aer_queued = 0;
|
2022-05-12 12:30:55 +03:00
|
|
|
n->aer_mask = 0;
|
2020-07-06 09:12:53 +03:00
|
|
|
n->outstanding_aers = 0;
|
2020-07-06 09:13:01 +03:00
|
|
|
n->qs_created = false;
|
2022-05-09 17:16:13 +03:00
|
|
|
|
|
|
|
nvme_update_msixcap_ts(pci_dev, n->conf_msix_qsize);
|
2022-05-09 17:16:17 +03:00
|
|
|
|
|
|
|
if (pci_is_vf(pci_dev)) {
|
|
|
|
sctrl = nvme_sctrl(n);
|
2022-05-17 14:07:51 +03:00
|
|
|
|
2022-05-09 17:16:17 +03:00
|
|
|
stl_le_p(&n->bar.csts, sctrl->scs ? 0 : NVME_CSTS_FAILED);
|
|
|
|
} else {
|
|
|
|
stl_le_p(&n->bar.csts, 0);
|
|
|
|
}
|
2022-05-17 14:07:51 +03:00
|
|
|
|
|
|
|
stl_le_p(&n->bar.intms, 0);
|
|
|
|
stl_le_p(&n->bar.intmc, 0);
|
|
|
|
stl_le_p(&n->bar.cc, 0);
|
2022-06-16 15:34:07 +03:00
|
|
|
|
|
|
|
n->dbbuf_dbs = 0;
|
|
|
|
n->dbbuf_eis = 0;
|
|
|
|
n->dbbuf_enabled = false;
|
2020-12-08 23:03:58 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_ctrl_shutdown(NvmeCtrl *n)
|
|
|
|
{
|
|
|
|
NvmeNamespace *ns;
|
|
|
|
int i;
|
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
if (n->pmr.dev) {
|
|
|
|
memory_region_msync(&n->pmr.dev->mr, 0, n->pmr.dev->size);
|
2020-12-09 15:10:45 +03:00
|
|
|
}
|
2020-07-06 09:12:53 +03:00
|
|
|
|
2021-04-14 22:46:00 +03:00
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
2019-06-26 09:51:06 +03:00
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
|
|
|
|
2020-12-08 23:03:58 +03:00
|
|
|
nvme_ns_shutdown(ns);
|
2019-06-26 09:51:06 +03:00
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
2021-04-15 09:39:08 +03:00
|
|
|
static void nvme_select_iocs(NvmeCtrl *n)
|
2020-12-08 23:04:02 +03:00
|
|
|
{
|
|
|
|
NvmeNamespace *ns;
|
|
|
|
int i;
|
|
|
|
|
2021-04-14 22:46:00 +03:00
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
2020-12-08 23:04:02 +03:00
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
if (!ns) {
|
|
|
|
continue;
|
|
|
|
}
|
2021-02-06 06:16:52 +03:00
|
|
|
|
2021-04-15 09:39:08 +03:00
|
|
|
nvme_select_iocs_ns(n, ns);
|
2020-12-08 23:04:02 +03:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static int nvme_start_ctrl(NvmeCtrl *n)
|
|
|
|
{
|
2021-07-13 20:31:27 +03:00
|
|
|
uint64_t cap = ldq_le_p(&n->bar.cap);
|
|
|
|
uint32_t cc = ldl_le_p(&n->bar.cc);
|
|
|
|
uint32_t aqa = ldl_le_p(&n->bar.aqa);
|
|
|
|
uint64_t asq = ldq_le_p(&n->bar.asq);
|
|
|
|
uint64_t acq = ldq_le_p(&n->bar.acq);
|
|
|
|
uint32_t page_bits = NVME_CC_MPS(cc) + 12;
|
2013-06-04 19:17:10 +04:00
|
|
|
uint32_t page_size = 1 << page_bits;
|
2022-05-09 17:16:17 +03:00
|
|
|
NvmeSecCtrlEntry *sctrl = nvme_sctrl(n);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(PCI_DEVICE(n)) && !sctrl->scs) {
|
2022-05-09 17:16:17 +03:00
|
|
|
trace_pci_nvme_err_startfail_virt_state(le16_to_cpu(sctrl->nvi),
|
|
|
|
le16_to_cpu(sctrl->nvq),
|
|
|
|
sctrl->scs ? "ONLINE" :
|
|
|
|
"OFFLINE");
|
|
|
|
return -1;
|
|
|
|
}
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(n->cq[0])) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_cq();
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
if (unlikely(n->sq[0])) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_sq();
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(asq & (page_size - 1))) {
|
|
|
|
trace_pci_nvme_err_startfail_asq_misaligned(asq);
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(acq & (page_size - 1))) {
|
|
|
|
trace_pci_nvme_err_startfail_acq_misaligned(acq);
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(!(NVME_CAP_CSS(cap) & (1 << NVME_CC_CSS(cc))))) {
|
|
|
|
trace_pci_nvme_err_startfail_css(NVME_CC_CSS(cc));
|
2020-09-30 20:54:05 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(NVME_CC_MPS(cc) < NVME_CAP_MPSMIN(cap))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_page_too_small(
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CC_MPS(cc),
|
|
|
|
NVME_CAP_MPSMIN(cap));
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(NVME_CC_MPS(cc) >
|
|
|
|
NVME_CAP_MPSMAX(cap))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_page_too_large(
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CC_MPS(cc),
|
|
|
|
NVME_CAP_MPSMAX(cap));
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(NVME_CC_IOCQES(cc) <
|
2017-11-03 16:37:53 +03:00
|
|
|
NVME_CTRL_CQES_MIN(n->id_ctrl.cqes))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_cqent_too_small(
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CC_IOCQES(cc),
|
|
|
|
NVME_CTRL_CQES_MIN(cap));
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(NVME_CC_IOCQES(cc) >
|
2017-11-03 16:37:53 +03:00
|
|
|
NVME_CTRL_CQES_MAX(n->id_ctrl.cqes))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_cqent_too_large(
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CC_IOCQES(cc),
|
|
|
|
NVME_CTRL_CQES_MAX(cap));
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(NVME_CC_IOSQES(cc) <
|
2017-11-03 16:37:53 +03:00
|
|
|
NVME_CTRL_SQES_MIN(n->id_ctrl.sqes))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_sqent_too_small(
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CC_IOSQES(cc),
|
|
|
|
NVME_CTRL_SQES_MIN(cap));
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(NVME_CC_IOSQES(cc) >
|
2017-11-03 16:37:53 +03:00
|
|
|
NVME_CTRL_SQES_MAX(n->id_ctrl.sqes))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_sqent_too_large(
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CC_IOSQES(cc),
|
|
|
|
NVME_CTRL_SQES_MAX(cap));
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(!NVME_AQA_ASQS(aqa))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_asqent_sz_zero();
|
2017-11-03 16:37:53 +03:00
|
|
|
return -1;
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
if (unlikely(!NVME_AQA_ACQS(aqa))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail_acqent_sz_zero();
|
2013-06-04 19:17:10 +04:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
n->page_bits = page_bits;
|
|
|
|
n->page_size = page_size;
|
|
|
|
n->max_prp_ents = n->page_size / sizeof(uint64_t);
|
2021-07-13 20:31:27 +03:00
|
|
|
n->cqe_size = 1 << NVME_CC_IOCQES(cc);
|
|
|
|
n->sqe_size = 1 << NVME_CC_IOSQES(cc);
|
|
|
|
nvme_init_cq(&n->admin_cq, n, acq, 0, 0, NVME_AQA_ACQS(aqa) + 1, 1);
|
|
|
|
nvme_init_sq(&n->admin_sq, n, asq, 0, 0, NVME_AQA_ASQS(aqa) + 1);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2019-05-20 20:40:30 +03:00
|
|
|
nvme_set_timestamp(n, 0ULL);
|
|
|
|
|
2021-04-15 09:39:08 +03:00
|
|
|
nvme_select_iocs(n);
|
2020-12-08 23:04:02 +03:00
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
return 0;
|
|
|
|
}
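/*
 * Illustrative arithmetic for the geometry derived above (a sketch with
 * typical host-programmed values, not controller state):
 *
 *   CC.MPS    = 0  ->  page_size    = 1 << (12 + 0)           = 4096 B
 *   page_size 4096 ->  max_prp_ents = 4096 / sizeof(uint64_t) = 512
 *   CC.IOSQES = 6  ->  sqe_size     = 1 << 6                  = 64 B
 *   CC.IOCQES = 4  ->  cqe_size     = 1 << 4                  = 16 B
 */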
|
|
|
|
|
2020-12-18 02:32:16 +03:00
|
|
|
static void nvme_cmb_enable_regs(NvmeCtrl *n)
|
|
|
|
{
|
2021-07-13 20:31:27 +03:00
|
|
|
uint32_t cmbloc = ldl_le_p(&n->bar.cmbloc);
|
|
|
|
uint32_t cmbsz = ldl_le_p(&n->bar.cmbsz);
|
2020-12-18 02:32:16 +03:00
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CMBLOC_SET_CDPCILS(cmbloc, 1);
|
|
|
|
NVME_CMBLOC_SET_CDPMLS(cmbloc, 1);
|
|
|
|
NVME_CMBLOC_SET_BIR(cmbloc, NVME_CMB_BIR);
|
|
|
|
stl_le_p(&n->bar.cmbloc, cmbloc);
|
|
|
|
|
|
|
|
NVME_CMBSZ_SET_SQS(cmbsz, 1);
|
|
|
|
NVME_CMBSZ_SET_CQS(cmbsz, 0);
|
|
|
|
NVME_CMBSZ_SET_LISTS(cmbsz, 1);
|
|
|
|
NVME_CMBSZ_SET_RDS(cmbsz, 1);
|
|
|
|
NVME_CMBSZ_SET_WDS(cmbsz, 1);
|
|
|
|
NVME_CMBSZ_SET_SZU(cmbsz, 2); /* MBs */
|
|
|
|
NVME_CMBSZ_SET_SZ(cmbsz, n->params.cmb_size_mb);
|
|
|
|
stl_le_p(&n->bar.cmbsz, cmbsz);
|
2020-12-18 02:32:16 +03:00
|
|
|
}
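/*
 * Sketch of what the guest reads back after the CMBSZ setup above,
 * assuming an illustrative cmb_size_mb=64: SZU=2 selects 1 MiB
 * granularity, so SZ=64 advertises a 64 MiB CMB with SQS, RDS, WDS and
 * LISTS support (CQS is intentionally left clear).
 */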
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static void nvme_write_bar(NvmeCtrl *n, hwaddr offset, uint64_t data,
|
2020-08-24 09:58:56 +03:00
|
|
|
unsigned size)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
2021-07-13 20:31:27 +03:00
|
|
|
uint64_t cap = ldq_le_p(&n->bar.cap);
|
|
|
|
uint32_t cc = ldl_le_p(&n->bar.cc);
|
|
|
|
uint32_t intms = ldl_le_p(&n->bar.intms);
|
|
|
|
uint32_t csts = ldl_le_p(&n->bar.csts);
|
|
|
|
uint32_t pmrsts = ldl_le_p(&n->bar.pmrsts);
|
|
|
|
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(offset & (sizeof(uint32_t) - 1))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_misaligned32,
|
2017-11-03 16:37:53 +03:00
|
|
|
"MMIO write not 32-bit aligned,"
|
|
|
|
" offset=0x%"PRIx64"", offset);
|
|
|
|
/* should be ignored, fall through for now */
|
|
|
|
}
|
|
|
|
|
|
|
|
if (unlikely(size < sizeof(uint32_t))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_toosmall,
|
2017-11-03 16:37:53 +03:00
|
|
|
"MMIO write smaller than 32-bits,"
|
|
|
|
" offset=0x%"PRIx64", size=%u",
|
|
|
|
offset, size);
|
|
|
|
/* should be ignored, fall through for now */
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
switch (offset) {
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_INTMS:
|
2022-12-08 14:43:18 +03:00
|
|
|
if (unlikely(msix_enabled(pci))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_intmask_with_msix,
|
2017-11-03 16:37:53 +03:00
|
|
|
"undefined access to interrupt mask set"
|
|
|
|
" when MSI-X is enabled");
|
|
|
|
/* should be ignored, fall through for now */
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
intms |= data;
|
|
|
|
stl_le_p(&n->bar.intms, intms);
|
2013-06-04 19:17:10 +04:00
|
|
|
n->bar.intmc = n->bar.intms;
|
2021-07-13 20:31:27 +03:00
|
|
|
trace_pci_nvme_mmio_intm_set(data & 0xffffffff, intms);
|
2017-12-18 08:00:43 +03:00
|
|
|
nvme_irq_check(n);
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_INTMC:
|
2022-12-08 14:43:18 +03:00
|
|
|
if (unlikely(msix_enabled(pci))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_intmask_with_msix,
|
2017-11-03 16:37:53 +03:00
|
|
|
"undefined access to interrupt mask clr"
|
|
|
|
" when MSI-X is enabled");
|
|
|
|
/* should be ignored, fall through for now */
|
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
intms &= ~data;
|
|
|
|
stl_le_p(&n->bar.intms, intms);
|
2013-06-04 19:17:10 +04:00
|
|
|
n->bar.intmc = n->bar.intms;
|
2021-07-13 20:31:27 +03:00
|
|
|
trace_pci_nvme_mmio_intm_clr(data & 0xffffffff, intms);
|
2017-12-18 08:00:43 +03:00
|
|
|
nvme_irq_check(n);
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_CC:
|
2022-05-17 14:07:51 +03:00
|
|
|
stl_le_p(&n->bar.cc, data);
|
|
|
|
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_mmio_cfg(data & 0xffffffff);
|
2021-07-13 20:31:27 +03:00
|
|
|
|
2022-05-17 14:07:51 +03:00
|
|
|
if (NVME_CC_SHN(data) && !(NVME_CC_SHN(cc))) {
|
|
|
|
trace_pci_nvme_mmio_shutdown_set();
|
|
|
|
nvme_ctrl_shutdown(n);
|
|
|
|
csts &= ~(CSTS_SHST_MASK << CSTS_SHST_SHIFT);
|
|
|
|
csts |= NVME_CSTS_SHST_COMPLETE;
|
|
|
|
} else if (!NVME_CC_SHN(data) && NVME_CC_SHN(cc)) {
|
|
|
|
trace_pci_nvme_mmio_shutdown_cleared();
|
|
|
|
csts &= ~(CSTS_SHST_MASK << CSTS_SHST_SHIFT);
|
2015-04-24 21:55:42 +03:00
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
if (NVME_CC_EN(data) && !NVME_CC_EN(cc)) {
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(nvme_start_ctrl(n))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_err_startfail();
|
2021-07-13 20:31:27 +03:00
|
|
|
csts = NVME_CSTS_FAILED;
|
2013-06-04 19:17:10 +04:00
|
|
|
} else {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_mmio_start_success();
|
2021-07-13 20:31:27 +03:00
|
|
|
csts = NVME_CSTS_READY;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
} else if (!NVME_CC_EN(data) && NVME_CC_EN(cc)) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_mmio_stopped();
|
hw/nvme: Implement the Function Level Reset
This patch implements the Function Level Reset, a feature currently not
implemented for the NVMe device, while listed as mandatory ("shall")
in the 1.4 spec.
The implementation reuses FLR-related building blocks defined for the
pci-bridge module, and follows the same logic:
- FLR capability is advertised in the PCIE config,
- custom pci_write_config callback detects a write to the trigger
register and performs the PCI reset,
- which, eventually, calls the custom dc->reset handler.
Depending on reset type, parts of the state should (or should not) be
cleared. To distinguish the type of reset, an additional parameter is
passed to the reset function.
This patch also enables advertisement of the Power Management PCI
capability. The main reason behind it is to announce the no_soft_reset=1
bit, to signal SR-IOV support where each VF can be reset individually.
The implementation purposely ignores writes to the PMCS.PS register,
as even such naïve behavior is enough to correctly handle the D3->D0
transition.
It's worth noting that the power state transition back to D3, with
all the corresponding side effects, wasn't and still isn't handled
properly.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:12 +03:00
|
|
|
nvme_ctrl_reset(n, NVME_RESET_CONTROLLER);
|
2021-07-13 20:31:27 +03:00
|
|
|
|
2022-05-17 14:07:51 +03:00
|
|
|
break;
|
2017-11-03 16:37:53 +03:00
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
|
|
|
|
stl_le_p(&n->bar.csts, csts);
|
|
|
|
|
2017-11-03 16:37:53 +03:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_CSTS:
|
2017-11-03 16:37:53 +03:00
|
|
|
if (data & (1 << 4)) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_ssreset_w1c_unsupported,
|
2017-11-03 16:37:53 +03:00
|
|
|
"attempted to W1C CSTS.NSSRO"
|
|
|
|
" but CAP.NSSRS is zero (not supported)");
|
|
|
|
} else if (data != 0) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_ro_csts,
|
2017-11-03 16:37:53 +03:00
|
|
|
"attempted to set a read only bit"
|
|
|
|
" of controller status");
|
|
|
|
}
|
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_NSSR:
|
2021-04-16 06:52:28 +03:00
|
|
|
if (data == 0x4e564d65) {
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_ub_mmiowr_ssreset_unsupported();
|
2017-11-03 16:37:53 +03:00
|
|
|
} else {
|
|
|
|
/* The spec says that writes of other values have no effect */
|
|
|
|
return;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_AQA:
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.aqa, data);
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_mmio_aqattr(data & 0xffffffff);
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_ASQ:
|
2021-07-13 20:31:27 +03:00
|
|
|
stn_le_p(&n->bar.asq, size, data);
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_mmio_asqaddr(data);
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_ASQ + 4:
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p((uint8_t *)&n->bar.asq + 4, data);
|
|
|
|
trace_pci_nvme_mmio_asqaddr_hi(data, ldq_le_p(&n->bar.asq));
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_ACQ:
|
2020-06-09 22:03:13 +03:00
|
|
|
trace_pci_nvme_mmio_acqaddr(data);
|
2021-07-13 20:31:27 +03:00
|
|
|
stn_le_p(&n->bar.acq, size, data);
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_ACQ + 4:
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p((uint8_t *)&n->bar.acq + 4, data);
|
|
|
|
trace_pci_nvme_mmio_acqaddr_hi(data, ldq_le_p(&n->bar.acq));
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_CMBLOC:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_cmbloc_reserved,
|
2017-11-03 16:37:53 +03:00
|
|
|
"invalid write to reserved CMBLOC"
|
|
|
|
" when CMBSZ is zero, ignored");
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_CMBSZ:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_cmbsz_readonly,
|
2017-11-03 16:37:53 +03:00
|
|
|
"invalid write to read only CMBSZ, ignored");
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_CMBMSC:
|
2021-07-13 20:31:27 +03:00
|
|
|
if (!NVME_CAP_CMBS(cap)) {
|
2020-12-18 02:32:16 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
stn_le_p(&n->bar.cmbmsc, size, data);
|
2020-12-18 02:32:16 +03:00
|
|
|
n->cmb.cmse = false;
|
|
|
|
|
|
|
|
if (NVME_CMBMSC_CRE(data)) {
|
|
|
|
nvme_cmb_enable_regs(n);
|
|
|
|
|
|
|
|
if (NVME_CMBMSC_CMSE(data)) {
|
2021-07-13 20:31:27 +03:00
|
|
|
uint64_t cmbmsc = ldq_le_p(&n->bar.cmbmsc);
|
|
|
|
hwaddr cba = NVME_CMBMSC_CBA(cmbmsc) << CMBMSC_CBA_SHIFT;
|
2020-12-18 02:32:16 +03:00
|
|
|
if (cba + int128_get64(n->cmb.mem.size) < cba) {
|
2021-07-13 20:31:27 +03:00
|
|
|
uint32_t cmbsts = ldl_le_p(&n->bar.cmbsts);
|
|
|
|
NVME_CMBSTS_SET_CBAI(cmbsts, 1);
|
|
|
|
stl_le_p(&n->bar.cmbsts, cmbsts);
|
2020-12-18 02:32:16 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
n->cmb.cba = cba;
|
|
|
|
n->cmb.cmse = true;
|
|
|
|
}
|
|
|
|
} else {
|
|
|
|
n->bar.cmbsz = 0;
|
|
|
|
n->bar.cmbloc = 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_CMBMSC + 4:
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p((uint8_t *)&n->bar.cmbmsc + 4, data);
|
2020-12-18 02:32:16 +03:00
|
|
|
return;
|
|
|
|
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMRCAP:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_pmrcap_readonly,
|
2020-03-30 19:46:56 +03:00
|
|
|
"invalid write to PMRCAP register, ignored");
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMRCTL:
|
2021-07-13 20:31:27 +03:00
|
|
|
if (!NVME_CAP_PMRS(cap)) {
|
2021-06-07 12:47:57 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.pmrctl, data);
|
2020-12-18 15:04:19 +03:00
|
|
|
if (NVME_PMRCTL_EN(data)) {
|
2020-11-13 08:30:05 +03:00
|
|
|
memory_region_set_enabled(&n->pmr.dev->mr, true);
|
2021-07-13 20:31:27 +03:00
|
|
|
pmrsts = 0;
|
2020-12-18 15:04:19 +03:00
|
|
|
} else {
|
2020-11-13 08:30:05 +03:00
|
|
|
memory_region_set_enabled(&n->pmr.dev->mr, false);
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_PMRSTS_SET_NRDY(pmrsts, 1);
|
2020-11-13 08:30:05 +03:00
|
|
|
n->pmr.cmse = false;
|
2020-12-18 15:04:19 +03:00
|
|
|
}
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.pmrsts, pmrsts);
|
2020-12-18 15:04:19 +03:00
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMRSTS:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_pmrsts_readonly,
|
2020-03-30 19:46:56 +03:00
|
|
|
"invalid write to PMRSTS register, ignored");
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMREBS:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_pmrebs_readonly,
|
2020-03-30 19:46:56 +03:00
|
|
|
"invalid write to PMREBS register, ignored");
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMRSWTP:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_pmrswtp_readonly,
|
2020-03-30 19:46:56 +03:00
|
|
|
"invalid write to PMRSWTP register, ignored");
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMRMSCL:
|
2021-07-13 20:31:27 +03:00
|
|
|
if (!NVME_CAP_PMRS(cap)) {
|
2020-11-13 08:30:05 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.pmrmscl, data);
|
2020-11-13 08:30:05 +03:00
|
|
|
n->pmr.cmse = false;
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
if (NVME_PMRMSCL_CMSE(data)) {
|
|
|
|
uint64_t pmrmscu = ldl_le_p(&n->bar.pmrmscu);
|
|
|
|
hwaddr cba = pmrmscu << 32 |
|
|
|
|
(NVME_PMRMSCL_CBA(data) << PMRMSCL_CBA_SHIFT);
|
2020-11-13 08:30:05 +03:00
|
|
|
if (cba + int128_get64(n->pmr.dev->mr.size) < cba) {
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_PMRSTS_SET_CBAI(pmrsts, 1);
|
|
|
|
stl_le_p(&n->bar.pmrsts, pmrsts);
|
2020-11-13 08:30:05 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
n->pmr.cmse = true;
|
|
|
|
n->pmr.cba = cba;
|
|
|
|
}
|
|
|
|
|
|
|
|
return;
|
2021-07-13 17:29:59 +03:00
|
|
|
case NVME_REG_PMRMSCU:
|
2021-07-13 20:31:27 +03:00
|
|
|
if (!NVME_CAP_PMRS(cap)) {
|
2020-11-13 08:30:05 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.pmrmscu, data);
|
2020-11-13 08:30:05 +03:00
|
|
|
return;
|
2013-06-04 19:17:10 +04:00
|
|
|
default:
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiowr_invalid,
|
2017-11-03 16:37:53 +03:00
|
|
|
"invalid MMIO write,"
|
|
|
|
" offset=0x%"PRIx64", data=%"PRIx64"",
|
|
|
|
offset, data);
|
2013-06-04 19:17:10 +04:00
|
|
|
break;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static uint64_t nvme_mmio_read(void *opaque, hwaddr addr, unsigned size)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = (NvmeCtrl *)opaque;
|
|
|
|
uint8_t *ptr = (uint8_t *)&n->bar;
|
|
|
|
|
2021-01-18 09:30:50 +03:00
|
|
|
trace_pci_nvme_mmio_read(addr, size);
|
2020-07-06 09:12:48 +03:00
|
|
|
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(addr & (sizeof(uint32_t) - 1))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiord_misaligned32,
|
2017-11-03 16:37:53 +03:00
|
|
|
"MMIO read not 32-bit aligned,"
|
|
|
|
" offset=0x%"PRIx64"", addr);
|
|
|
|
/* should RAZ, fall through for now */
|
|
|
|
} else if (unlikely(size < sizeof(uint32_t))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiord_toosmall,
|
2017-11-03 16:37:53 +03:00
|
|
|
"MMIO read smaller than 32-bits,"
|
|
|
|
" offset=0x%"PRIx64"", addr);
|
|
|
|
/* should RAZ, fall through for now */
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:24:04 +03:00
|
|
|
if (addr > sizeof(n->bar) - size) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_mmiord_invalid_ofs,
|
2017-11-03 16:37:53 +03:00
|
|
|
"MMIO read beyond last register,"
|
|
|
|
" offset=0x%"PRIx64", returning 0", addr);
|
2021-07-13 20:24:04 +03:00
|
|
|
|
|
|
|
return 0;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2017-11-03 16:37:53 +03:00
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(PCI_DEVICE(n)) && !nvme_sctrl(n)->scs &&
|
2022-05-09 17:16:17 +03:00
|
|
|
addr != NVME_REG_CSTS) {
|
|
|
|
trace_pci_nvme_err_ignored_mmio_vf_offline(addr, size);
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:24:04 +03:00
|
|
|
/*
|
|
|
|
* When PMRWBM bit 1 is set then reads
|
|
|
|
* from PMRSTS should ensure prior writes
|
|
|
|
* made it to persistent media
|
|
|
|
*/
|
|
|
|
if (addr == NVME_REG_PMRSTS &&
|
2021-07-13 20:31:27 +03:00
|
|
|
(NVME_PMRCAP_PMRWBM(ldl_le_p(&n->bar.pmrcap)) & 0x02)) {
|
2021-07-13 20:24:04 +03:00
|
|
|
memory_region_msync(&n->pmr.dev->mr, 0, n->pmr.dev->size);
|
|
|
|
}
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
return ldn_le_p(ptr + addr, size);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_process_db(NvmeCtrl *n, hwaddr addr, int val)
|
|
|
|
{
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
2013-06-04 19:17:10 +04:00
|
|
|
uint32_t qid;
|
|
|
|
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(addr & ((1 << 2) - 1))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_db_wr_misaligned,
|
2017-11-03 16:37:53 +03:00
|
|
|
"doorbell write not 32-bit aligned,"
|
|
|
|
" offset=0x%"PRIx64", ignoring", addr);
|
2013-06-04 19:17:10 +04:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (((addr - 0x1000) >> 2) & 1) {
|
2017-11-03 16:37:53 +03:00
|
|
|
/* Completion queue doorbell write */
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
uint16_t new_head = val & 0xffff;
|
|
|
|
int start_sqs;
|
|
|
|
NvmeCQueue *cq;
|
|
|
|
|
|
|
|
qid = (addr - (0x1000 + (1 << 2))) >> 3;
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(nvme_check_cqid(n, qid))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_db_wr_invalid_cq,
|
2017-11-03 16:37:53 +03:00
|
|
|
"completion queue doorbell write"
|
|
|
|
" for nonexistent queue,"
|
|
|
|
" sqid=%"PRIu32", ignoring", qid);
|
2020-07-06 09:12:53 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* NVM Express v1.3d, Section 4.1 states: "If host software writes
|
|
|
|
* an invalid value to the Submission Queue Tail Doorbell or
|
|
|
|
* Completion Queue Head Doorbell register and an Asynchronous Event
|
|
|
|
* Request command is outstanding, then an asynchronous event is
|
|
|
|
* posted to the Admin Completion Queue with a status code of
|
|
|
|
* Invalid Doorbell Write Value."
|
|
|
|
*
|
|
|
|
* Also note that the spec includes the "Invalid Doorbell Register"
|
|
|
|
* status code, but nowhere does it specify when to use it.
|
|
|
|
* However, it seems reasonable to use it here in a similar
|
|
|
|
* fashion.
|
|
|
|
*/
|
|
|
|
if (n->outstanding_aers) {
|
|
|
|
nvme_enqueue_event(n, NVME_AER_TYPE_ERROR,
|
|
|
|
NVME_AER_INFO_ERR_INVALID_DB_REGISTER,
|
|
|
|
NVME_LOG_ERROR_INFO);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
cq = n->cq[qid];
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(new_head >= cq->size)) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_db_wr_invalid_cqhead,
|
2017-11-03 16:37:53 +03:00
|
|
|
"completion queue doorbell write value"
|
|
|
|
" beyond queue size, sqid=%"PRIu32","
|
|
|
|
" new_head=%"PRIu16", ignoring",
|
|
|
|
qid, new_head);
|
2020-07-06 09:12:53 +03:00
|
|
|
|
|
|
|
if (n->outstanding_aers) {
|
|
|
|
nvme_enqueue_event(n, NVME_AER_TYPE_ERROR,
|
|
|
|
NVME_AER_INFO_ERR_INVALID_DB_VALUE,
|
|
|
|
NVME_LOG_ERROR_INFO);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:48 +03:00
|
|
|
trace_pci_nvme_mmio_doorbell_cq(cq->cqid, new_head);
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
start_sqs = nvme_cq_full(cq) ? 1 : 0;
|
|
|
|
cq->head = new_head;
|
2022-06-16 15:34:07 +03:00
|
|
|
if (!qid && n->dbbuf_enabled) {
|
2022-12-08 14:43:18 +03:00
|
|
|
pci_dma_write(pci, cq->db_addr, &cq->head, sizeof(cq->head));
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
2013-06-04 19:17:10 +04:00
|
|
|
if (start_sqs) {
|
|
|
|
NvmeSQueue *sq;
|
|
|
|
QTAILQ_FOREACH(sq, &cq->sq_list, entry) {
|
2022-10-19 23:28:02 +03:00
|
|
|
qemu_bh_schedule(sq->bh);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2022-10-19 23:28:02 +03:00
|
|
|
qemu_bh_schedule(cq->bh);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
2017-12-18 08:00:43 +03:00
|
|
|
if (cq->tail == cq->head) {
|
2021-06-17 21:55:42 +03:00
|
|
|
if (cq->irq_enabled) {
|
|
|
|
n->cq_pending--;
|
|
|
|
}
|
|
|
|
|
2017-12-18 08:00:43 +03:00
|
|
|
nvme_irq_deassert(n, cq);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
} else {
|
2017-11-03 16:37:53 +03:00
|
|
|
/* Submission queue doorbell write */
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
uint16_t new_tail = val & 0xffff;
|
|
|
|
NvmeSQueue *sq;
|
|
|
|
|
|
|
|
qid = (addr - 0x1000) >> 3;
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(nvme_check_sqid(n, qid))) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_db_wr_invalid_sq,
|
2017-11-03 16:37:53 +03:00
|
|
|
"submission queue doorbell write"
|
|
|
|
" for nonexistent queue,"
|
|
|
|
" sqid=%"PRIu32", ignoring", qid);
|
2020-07-06 09:12:53 +03:00
|
|
|
|
|
|
|
if (n->outstanding_aers) {
|
|
|
|
nvme_enqueue_event(n, NVME_AER_TYPE_ERROR,
|
|
|
|
NVME_AER_INFO_ERR_INVALID_DB_REGISTER,
|
|
|
|
NVME_LOG_ERROR_INFO);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
sq = n->sq[qid];
|
2017-11-03 16:37:53 +03:00
|
|
|
if (unlikely(new_tail >= sq->size)) {
|
2020-06-09 22:03:13 +03:00
|
|
|
NVME_GUEST_ERR(pci_nvme_ub_db_wr_invalid_sqtail,
|
2017-11-03 16:37:53 +03:00
|
|
|
"submission queue doorbell write value"
|
|
|
|
" beyond queue size, sqid=%"PRIu32","
|
|
|
|
" new_tail=%"PRIu16", ignoring",
|
|
|
|
qid, new_tail);
|
2020-07-06 09:12:53 +03:00
|
|
|
|
|
|
|
if (n->outstanding_aers) {
|
|
|
|
nvme_enqueue_event(n, NVME_AER_TYPE_ERROR,
|
|
|
|
NVME_AER_INFO_ERR_INVALID_DB_VALUE,
|
|
|
|
NVME_LOG_ERROR_INFO);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2020-07-06 09:12:48 +03:00
|
|
|
trace_pci_nvme_mmio_doorbell_sq(sq->sqid, new_tail);
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
sq->tail = new_tail;
|
2022-06-16 15:34:07 +03:00
|
|
|
if (!qid && n->dbbuf_enabled) {
|
|
|
|
/*
|
|
|
|
* The spec states "the host shall also update the controller's
|
|
|
|
* corresponding doorbell property to match the value of that entry
|
|
|
|
* in the Shadow Doorbell buffer."
|
|
|
|
*
|
|
|
|
* Since this context is currently a VM trap, we can safely enforce
|
|
|
|
* the requirement from the device side in case the host is
|
|
|
|
* misbehaving.
|
|
|
|
*
|
|
|
|
* Note, we shouldn't have to do this, but various drivers
|
|
|
|
* including ones that run on Linux, are not updating Admin Queues,
|
|
|
|
* so we can't trust reading it for an appropriate sq tail.
|
|
|
|
*/
|
2022-12-08 14:43:18 +03:00
|
|
|
pci_dma_write(pci, sq->db_addr, &sq->tail, sizeof(sq->tail));
|
2022-06-16 15:34:07 +03:00
|
|
|
}
|
2022-10-19 23:28:02 +03:00
|
|
|
|
|
|
|
qemu_bh_schedule(sq->bh);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
}
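/*
 * Doorbell layout assumed by the decoding above (CAP.DSTRD = 0, i.e. a
 * 4-byte doorbell stride; offsets relative to the start of the BAR):
 *
 *   0x1000  SQ 0 (admin) tail   ((addr - 0x1000) >> 2 is even)
 *   0x1004  CQ 0 (admin) head   ((addr - 0x1000) >> 2 is odd)
 *   0x1008  SQ 1 tail
 *   0x100c  CQ 1 head
 *
 * Bit 2 of (addr - 0x1000) therefore selects SQ tail vs. CQ head, and
 * the remaining bits give the queue id, matching the shifts used above.
 */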
|
|
|
|
|
|
|
|
static void nvme_mmio_write(void *opaque, hwaddr addr, uint64_t data,
|
2020-08-24 09:58:56 +03:00
|
|
|
unsigned size)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
|
|
|
NvmeCtrl *n = (NvmeCtrl *)opaque;
|
2020-07-06 09:12:48 +03:00
|
|
|
|
2021-01-18 09:30:50 +03:00
|
|
|
trace_pci_nvme_mmio_write(addr, data, size);
|
2020-07-06 09:12:48 +03:00
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(PCI_DEVICE(n)) && !nvme_sctrl(n)->scs &&
|
2022-05-09 17:16:17 +03:00
|
|
|
addr != NVME_REG_CSTS) {
|
|
|
|
trace_pci_nvme_err_ignored_mmio_vf_offline(addr, size);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
if (addr < sizeof(n->bar)) {
|
|
|
|
nvme_write_bar(n, addr, data, size);
|
2020-06-30 14:04:29 +03:00
|
|
|
} else {
|
2013-06-04 19:17:10 +04:00
|
|
|
nvme_process_db(n, addr, data);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static const MemoryRegionOps nvme_mmio_ops = {
|
|
|
|
.read = nvme_mmio_read,
|
|
|
|
.write = nvme_mmio_write,
|
|
|
|
.endianness = DEVICE_LITTLE_ENDIAN,
|
|
|
|
.impl = {
|
|
|
|
.min_access_size = 2,
|
|
|
|
.max_access_size = 8,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
|
2017-05-16 22:10:59 +03:00
|
|
|
static void nvme_cmb_write(void *opaque, hwaddr addr, uint64_t data,
|
2020-08-24 09:58:56 +03:00
|
|
|
unsigned size)
|
2017-05-16 22:10:59 +03:00
|
|
|
{
|
|
|
|
NvmeCtrl *n = (NvmeCtrl *)opaque;
|
2020-12-18 02:32:16 +03:00
|
|
|
stn_le_p(&n->cmb.buf[addr], size, data);
|
2017-05-16 22:10:59 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static uint64_t nvme_cmb_read(void *opaque, hwaddr addr, unsigned size)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = (NvmeCtrl *)opaque;
|
2020-12-18 02:32:16 +03:00
|
|
|
return ldn_le_p(&n->cmb.buf[addr], size);
|
2017-05-16 22:10:59 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
static const MemoryRegionOps nvme_cmb_ops = {
|
|
|
|
.read = nvme_cmb_read,
|
|
|
|
.write = nvme_cmb_write,
|
|
|
|
.endianness = DEVICE_LITTLE_ENDIAN,
|
|
|
|
.impl = {
|
2018-11-20 21:41:48 +03:00
|
|
|
.min_access_size = 1,
|
2017-05-16 22:10:59 +03:00
|
|
|
.max_access_size = 8,
|
|
|
|
},
|
|
|
|
};
|
|
|
|
|
2022-11-09 13:40:11 +03:00
|
|
|
static bool nvme_check_params(NvmeCtrl *n, Error **errp)
|
2013-06-04 19:17:10 +04:00
|
|
|
{
|
2020-06-09 22:03:21 +03:00
|
|
|
NvmeParams *params = &n->params;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2020-06-09 22:03:21 +03:00
|
|
|
if (params->num_queues) {
|
2020-06-09 22:03:19 +03:00
|
|
|
warn_report("num_queues is deprecated; please use max_ioqpairs "
|
|
|
|
"instead");
|
|
|
|
|
2020-06-09 22:03:21 +03:00
|
|
|
params->max_ioqpairs = params->num_queues - 1;
|
2020-06-09 22:03:19 +03:00
|
|
|
}
|
|
|
|
|
hw/block/nvme: fix handling of private namespaces
Prior to this patch, if a private nvme-ns device (that is, a namespace
that is not linked to a subsystem) is wired up to an nvme-subsys linked
nvme controller device, the device fails to verify that the namespace id
is unique within the subsystem. NVM Express v1.4b, Section 6.1.6 ("NSID
and Namespace Usage") states that because the device supports Namespace
Management, "NSIDs *shall* be unique within the NVM subsystem".
Additionally, prior to this patch, private namespaces are not known to
the subsystem and the namespace is considered exclusive to the
controller it is initially wired up to. However, this is not
the definition of a private namespace; per Section 1.6.33 ("private
namespace"), a private namespace is just a namespace that does not
support multipath I/O or namespace sharing, which means "that it is only
able to be attached to one controller at a time".
Fix this by always allocating namespaces in the subsystem (if one is
linked to the controller), regardless of the shared/private status of
the namespace. Whether or not the namespace is shareable is controlled
by a new `shared` nvme-ns parameter.
Finally, this fix allows the nvme-ns `subsys` parameter to be removed,
since the `shared` parameter now serves the purpose of attaching the
namespace to all controllers in the subsystem upon device realization.
It is invalid to have an nvme-ns namespace device with a linked
subsystem without the parent nvme controller device also being linked to
one and since the nvme-ns devices will unconditionally be "attached" (in
QEMU terms that is) to an nvme controller device through an NvmeBus, the
nvme-ns namespace device can always get a reference to the subsystem of
the controller it is explicitly (using 'bus=' parameter) or implicitly
attaching to.
Fixes: e570768566b3 ("hw/block/nvme: support for shared namespace in subsystem")
Cc: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2021-03-23 14:43:24 +03:00
|
|
|
if (n->namespace.blkconf.blk && n->subsys) {
|
|
|
|
error_setg(errp, "subsystem support is unavailable with legacy "
|
|
|
|
"namespace ('drive' property)");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2021-03-23 14:43:24 +03:00
|
|
|
}
|
|
|
|
|
2020-06-09 22:03:21 +03:00
|
|
|
if (params->max_ioqpairs < 1 ||
|
2020-06-09 22:03:32 +03:00
|
|
|
params->max_ioqpairs > NVME_MAX_IOQPAIRS) {
|
2020-06-09 22:03:19 +03:00
|
|
|
error_setg(errp, "max_ioqpairs must be between 1 and %d",
|
2020-06-09 22:03:32 +03:00
|
|
|
NVME_MAX_IOQPAIRS);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2020-06-09 22:03:32 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->msix_qsize < 1 ||
|
|
|
|
params->msix_qsize > PCI_MSIX_FLAGS_QSIZE + 1) {
|
|
|
|
error_setg(errp, "msix_qsize must be between 1 and %d",
|
|
|
|
PCI_MSIX_FLAGS_QSIZE + 1);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
nvme: ensure the num_queues is not zero
When it is zero, it causes a segv.
Using the following command:
"-drive file=//home/test/test1.img,if=none,id=id0
-device nvme,drive=id0,serial=test,num_queues=0"
causes the following backtrace:
Thread 4 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7fffe9735700 (LWP 30952)]
0x0000555555a7a77c in nvme_start_ctrl (n=0x5555577473f0) at hw/block/nvme.c:825
825 if (unlikely(n->cq[0])) {
(gdb) bt
0 0x0000555555a7a77c in nvme_start_ctrl (n=0x5555577473f0)
at hw/block/nvme.c:825
1 0x0000555555a7af7f in nvme_write_bar (n=0x5555577473f0, offset=20,
data=4587521, size=4) at hw/block/nvme.c:969
2 0x0000555555a7b81a in nvme_mmio_write (opaque=0x5555577473f0, addr=20,
data=4587521, size=4) at hw/block/nvme.c:1163
3 0x0000555555869236 in memory_region_write_accessor (mr=0x555557747cd0,
addr=20, value=0x7fffe97320f8, size=4, shift=0, mask=4294967295, attrs=...)
at /home/test/qemu1/qemu/memory.c:502
4 0x0000555555869446 in access_with_adjusted_size (addr=20,
value=0x7fffe97320f8, size=4, access_size_min=2, access_size_max=8,
access_fn=0x55555586914d <memory_region_write_accessor>,
mr=0x555557747cd0, attrs=...) at /home/test/qemu1/qemu/memory.c:568
5 0x000055555586c479 in memory_region_dispatch_write (mr=0x555557747cd0,
addr=20, data=4587521, size=4, attrs=...)
at /home/test/qemu1/qemu/memory.c:1499
6 0x00005555558030af in flatview_write_continue (fv=0x7fffe0061130,
addr=4273930260, attrs=..., buf=0x7ffff7ff0028 "\001", len=4, addr1=20,
l=4, mr=0x555557747cd0) at /home/test/qemu1/qemu/exec.c:3234
7 0x00005555558031f9 in flatview_write (fv=0x7fffe0061130, addr=4273930260,
attrs=..., buf=0x7ffff7ff0028 "\001", len=4)
at /home/test/qemu1/qemu/exec.c:3273
8 0x00005555558034ff in address_space_write (
as=0x555556758480 <address_space_memory>, addr=4273930260, attrs=...,
buf=0x7ffff7ff0028 "\001", len=4) at /home/test/qemu1/qemu/exec.c:3363
9 0x0000555555803550 in address_space_rw (
as=0x555556758480 <address_space_memory>, addr=4273930260, attrs=...,
buf=0x7ffff7ff0028 "\001", len=4, is_write=true)
at /home/test/qemu1/qemu/exec.c:3374
10 0x00005555558884a1 in kvm_cpu_exec (cpu=0x555556920e40)
at /home/test/qemu1/qemu/accel/kvm/kvm-all.c:2031
11 0x000055555584cd9d in qemu_kvm_cpu_thread_fn (arg=0x555556920e40)
at /home/test/qemu1/qemu/cpus.c:1281
12 0x0000555555dbaf6d in qemu_thread_start (args=0x5555569438a0)
at util/qemu-thread-posix.c:502
13 0x00007ffff5dc86db in start_thread (arg=0x7fffe9735700)
at pthread_create.c:463
14 0x00007ffff5af188f in clone ()
at ../sysdeps/unix/sysv/linux/x86_64/clone.S:95
Signed-off-by: Li Qiang <liq3ea@163.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-id: 20190120055558.32984-3-liq3ea@163.com
Signed-off-by: Max Reitz <mreitz@redhat.com>
2019-01-20 08:55:57 +03:00
|
|
|
}
|
|
|
|
|
2020-06-09 22:03:21 +03:00
|
|
|
if (!params->serial) {
|
2017-11-22 06:08:43 +03:00
|
|
|
error_setg(errp, "serial property not set");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
2020-03-30 19:46:56 +03:00
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
if (n->pmr.dev) {
|
|
|
|
if (host_memory_backend_is_mapped(n->pmr.dev)) {
|
2020-07-14 19:02:00 +03:00
|
|
|
error_setg(errp, "can't use already busy memdev: %s",
|
2020-11-13 08:30:05 +03:00
|
|
|
object_get_canonical_path_component(OBJECT(n->pmr.dev)));
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2020-03-30 19:46:56 +03:00
|
|
|
}
|
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
if (!is_power_of_2(n->pmr.dev->size)) {
|
2020-03-30 19:46:56 +03:00
|
|
|
error_setg(errp, "pmr backend size needs to be power of 2 in size");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2020-03-30 19:46:56 +03:00
|
|
|
}
|
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
host_memory_backend_set_mapped(n->pmr.dev, true);
|
2020-03-30 19:46:56 +03:00
|
|
|
}
|
2020-12-08 23:04:06 +03:00
|
|
|
|
2021-02-22 21:27:58 +03:00
|
|
|
if (n->params.zasl > n->params.mdts) {
|
|
|
|
error_setg(errp, "zoned.zasl (Zone Append Size Limit) must be less "
|
|
|
|
"than or equal to mdts (Maximum Data Transfer Size)");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
2021-02-14 21:09:27 +03:00
|
|
|
|
|
|
|
if (!n->params.vsl) {
|
|
|
|
error_setg(errp, "vsl must be non-zero");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2021-02-14 21:09:27 +03:00
|
|
|
}
|
2022-05-09 17:16:09 +03:00
|
|
|
|
|
|
|
if (params->sriov_max_vfs) {
|
|
|
|
if (!n->subsys) {
|
|
|
|
error_setg(errp, "subsystem is required for the use of SR-IOV");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->sriov_max_vfs > NVME_MAX_VFS) {
|
|
|
|
error_setg(errp, "sriov_max_vfs must be between 0 and %d",
|
|
|
|
NVME_MAX_VFS);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->cmb_size_mb) {
|
|
|
|
error_setg(errp, "CMB is not supported with SR-IOV");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (n->pmr.dev) {
|
|
|
|
error_setg(errp, "PMR is not supported with SR-IOV");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
2022-05-09 17:16:16 +03:00
|
|
|
|
|
|
|
if (!params->sriov_vq_flexible || !params->sriov_vi_flexible) {
|
|
|
|
error_setg(errp, "both sriov_vq_flexible and sriov_vi_flexible"
|
|
|
|
" must be set for the use of SR-IOV");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->sriov_vq_flexible < params->sriov_max_vfs * 2) {
|
|
|
|
error_setg(errp, "sriov_vq_flexible must be greater than or equal"
|
|
|
|
" to %d (sriov_max_vfs * 2)", params->sriov_max_vfs * 2);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->max_ioqpairs < params->sriov_vq_flexible + 2) {
|
|
|
|
error_setg(errp, "(max_ioqpairs - sriov_vq_flexible) must be"
|
|
|
|
" greater than or equal to 2");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->sriov_vi_flexible < params->sriov_max_vfs) {
|
|
|
|
error_setg(errp, "sriov_vi_flexible must be greater than or equal"
|
|
|
|
" to %d (sriov_max_vfs)", params->sriov_max_vfs);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->msix_qsize < params->sriov_vi_flexible + 1) {
|
|
|
|
error_setg(errp, "(msix_qsize - sriov_vi_flexible) must be"
|
|
|
|
" greater than or equal to 1");
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->sriov_max_vi_per_vf &&
|
|
|
|
(params->sriov_max_vi_per_vf - 1) % NVME_VF_RES_GRANULARITY) {
|
|
|
|
error_setg(errp, "sriov_max_vi_per_vf must meet:"
|
|
|
|
" (sriov_max_vi_per_vf - 1) %% %d == 0 and"
|
|
|
|
" sriov_max_vi_per_vf >= 1", NVME_VF_RES_GRANULARITY);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
|
|
|
|
|
|
|
if (params->sriov_max_vq_per_vf &&
|
|
|
|
(params->sriov_max_vq_per_vf < 2 ||
|
|
|
|
(params->sriov_max_vq_per_vf - 1) % NVME_VF_RES_GRANULARITY)) {
|
|
|
|
error_setg(errp, "sriov_max_vq_per_vf must meet:"
|
|
|
|
" (sriov_max_vq_per_vf - 1) %% %d == 0 and"
|
|
|
|
" sriov_max_vq_per_vf >= 2", NVME_VF_RES_GRANULARITY);
|
2022-11-09 13:40:11 +03:00
|
|
|
return false;
|
2022-05-09 17:16:16 +03:00
|
|
|
}
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
2022-11-09 13:40:11 +03:00
|
|
|
|
|
|
|
return true;
|
2020-06-09 22:03:21 +03:00
|
|
|
}
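/*
 * An illustrative parameter set that passes the SR-IOV checks above
 * (example values only, not recommendations):
 *
 *   -device nvme-subsys,id=subsys0
 *   -device nvme,serial=deadbeef,subsys=subsys0,max_ioqpairs=26, \
 *           msix_qsize=9,sriov_max_vfs=4,sriov_vq_flexible=8, \
 *           sriov_vi_flexible=4
 *
 * Here sriov_vq_flexible (8) >= 2 * sriov_max_vfs, max_ioqpairs (26) >=
 * sriov_vq_flexible + 2, sriov_vi_flexible (4) >= sriov_max_vfs and
 * msix_qsize (9) >= sriov_vi_flexible + 1, with no CMB or PMR configured.
 */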
|
|
|
|
|
2020-06-09 22:03:22 +03:00
|
|
|
static void nvme_init_state(NvmeCtrl *n)
|
|
|
|
{
|
2022-05-09 17:16:10 +03:00
|
|
|
NvmePriCtrlCap *cap = &n->pri_ctrl_cap;
|
2022-05-09 17:16:11 +03:00
|
|
|
NvmeSecCtrlList *list = &n->sec_ctrl_list;
|
|
|
|
NvmeSecCtrlEntry *sctrl;
|
2022-12-08 14:43:18 +03:00
|
|
|
PCIDevice *pci = PCI_DEVICE(n);
|
2022-05-09 17:16:16 +03:00
|
|
|
uint8_t max_vfs;
|
2022-05-09 17:16:11 +03:00
|
|
|
int i;
|
2022-05-09 17:16:10 +03:00
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(pci)) {
|
2022-05-09 17:16:16 +03:00
|
|
|
sctrl = nvme_sctrl(n);
|
|
|
|
max_vfs = 0;
|
|
|
|
n->conf_ioqpairs = sctrl->nvq ? le16_to_cpu(sctrl->nvq) - 1 : 0;
|
|
|
|
n->conf_msix_qsize = sctrl->nvi ? le16_to_cpu(sctrl->nvi) : 1;
|
|
|
|
} else {
|
|
|
|
max_vfs = n->params.sriov_max_vfs;
|
|
|
|
n->conf_ioqpairs = n->params.max_ioqpairs;
|
|
|
|
n->conf_msix_qsize = n->params.msix_qsize;
|
|
|
|
}
|
hw/nvme: Make max_ioqpairs and msix_qsize configurable in runtime
The NVMe device defines two properties: max_ioqpairs, msix_qsize. Having
them as constants is problematic for SR-IOV support.
SR-IOV introduces virtual resources (queues, interrupts) that can be
assigned to PF and its dependent VFs. Each device, following a reset,
should work with the configured number of queues. A single constant is
no longer sufficient to hold the whole state.
This patch tries to solve the problem by introducing additional
variables in NvmeCtrl’s state. The variables for, e.g., managing queues
are therefore organized as:
- n->params.max_ioqpairs – no changes, constant set by the user
- n->(mutable_state) – (not a part of this patch) user-configurable,
specifies number of queues available _after_
reset
- n->conf_ioqpairs - (new) used in all the places instead of the ‘old’
n->params.max_ioqpairs; initialized in realize()
and updated during reset() to reflect the user's
changes to the mutable state.
Since the number of available i/o queues and interrupts can change at
runtime, buffers for sq/cqs and the MSIX-related structures are
allocated big enough to handle the limits, to completely avoid the
complicated reallocation. A helper function (nvme_update_msixcap_ts)
updates the corresponding capability register, to signal configuration
changes.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:13 +03:00
|
|
|
|
2020-06-09 22:03:22 +03:00
|
|
|
n->sq = g_new0(NvmeSQueue *, n->params.max_ioqpairs + 1);
|
|
|
|
n->cq = g_new0(NvmeCQueue *, n->params.max_ioqpairs + 1);
|
2020-07-06 09:12:52 +03:00
|
|
|
n->temperature = NVME_TEMPERATURE;
|
2020-07-06 09:12:50 +03:00
|
|
|
n->features.temp_thresh_hi = NVME_TEMPERATURE_WARNING;
|
2020-07-06 09:12:52 +03:00
|
|
|
n->starttime_ms = qemu_clock_get_ms(QEMU_CLOCK_VIRTUAL);
|
2020-07-06 09:12:53 +03:00
|
|
|
n->aer_reqs = g_new0(NvmeRequest *, n->params.aerl + 1);
|
2022-05-09 17:16:19 +03:00
|
|
|
QTAILQ_INIT(&n->aer_queue);
|
2022-05-09 17:16:10 +03:00
|
|
|
|
2022-05-09 17:16:16 +03:00
|
|
|
list->numcntl = cpu_to_le16(max_vfs);
|
|
|
|
for (i = 0; i < max_vfs; i++) {
|
2022-05-09 17:16:11 +03:00
|
|
|
sctrl = &list->sec[i];
|
|
|
|
sctrl->pcid = cpu_to_le16(n->cntlid);
|
|
|
|
sctrl->vfn = cpu_to_le16(i + 1);
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:10 +03:00
|
|
|
cap->cntlid = cpu_to_le16(n->cntlid);
|
2022-05-09 17:16:16 +03:00
|
|
|
cap->crt = NVME_CRT_VQ | NVME_CRT_VI;
|
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(pci)) {
|
2022-05-09 17:16:16 +03:00
|
|
|
cap->vqprt = cpu_to_le16(1 + n->conf_ioqpairs);
|
|
|
|
} else {
|
|
|
|
cap->vqprt = cpu_to_le16(1 + n->params.max_ioqpairs -
|
|
|
|
n->params.sriov_vq_flexible);
|
|
|
|
cap->vqfrt = cpu_to_le32(n->params.sriov_vq_flexible);
|
|
|
|
cap->vqrfap = cap->vqfrt;
|
|
|
|
cap->vqgran = cpu_to_le16(NVME_VF_RES_GRANULARITY);
|
|
|
|
cap->vqfrsm = n->params.sriov_max_vq_per_vf ?
|
|
|
|
cpu_to_le16(n->params.sriov_max_vq_per_vf) :
|
|
|
|
cap->vqfrt / MAX(max_vfs, 1);
|
|
|
|
}
|
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(pci)) {
|
2022-05-09 17:16:16 +03:00
|
|
|
cap->viprt = cpu_to_le16(n->conf_msix_qsize);
|
|
|
|
} else {
|
|
|
|
cap->viprt = cpu_to_le16(n->params.msix_qsize -
|
|
|
|
n->params.sriov_vi_flexible);
|
|
|
|
cap->vifrt = cpu_to_le32(n->params.sriov_vi_flexible);
|
|
|
|
cap->virfap = cap->vifrt;
|
|
|
|
cap->vigran = cpu_to_le16(NVME_VF_RES_GRANULARITY);
|
|
|
|
cap->vifrsm = n->params.sriov_max_vi_per_vf ?
|
|
|
|
cpu_to_le16(n->params.sriov_max_vi_per_vf) :
|
|
|
|
cap->vifrt / MAX(max_vfs, 1);
|
|
|
|
}
|
2020-06-09 22:03:22 +03:00
|
|
|
}
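/*
 * Worked example for the primary controller capabilities computed above,
 * reusing the illustrative parameters from the SR-IOV example
 * (max_ioqpairs=26, msix_qsize=9, sriov_max_vfs=4, sriov_vq_flexible=8,
 * sriov_vi_flexible=4, per-VF maxima left at 0):
 *
 *   vqprt  = 1 + 26 - 8 = 19    private queue resources (admin + I/O)
 *   vqfrt  = 8                  flexible queue resources for the VFs
 *   vqfrsm = 8 / 4      = 2     per-VF maximum derived from vqfrt
 *   viprt  = 9 - 4      = 5     private interrupt resources
 *   vifrt  = 4, vifrsm  = 4 / 4 = 1
 */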
|
|
|
|
|
2020-06-09 22:03:27 +03:00
|
|
|
static void nvme_init_cmb(NvmeCtrl *n, PCIDevice *pci_dev)
|
|
|
|
{
|
2020-12-18 02:32:16 +03:00
|
|
|
uint64_t cmb_size = n->params.cmb_size_mb * MiB;
|
2021-07-13 20:31:27 +03:00
|
|
|
uint64_t cap = ldq_le_p(&n->bar.cap);
|
2020-06-09 22:03:27 +03:00
|
|
|
|
2020-12-18 02:32:16 +03:00
|
|
|
n->cmb.buf = g_malloc0(cmb_size);
|
|
|
|
memory_region_init_io(&n->cmb.mem, OBJECT(n), &nvme_cmb_ops, n,
|
|
|
|
"nvme-cmb", cmb_size);
|
|
|
|
pci_register_bar(pci_dev, NVME_CMB_BIR,
|
2020-06-09 22:03:27 +03:00
|
|
|
PCI_BASE_ADDRESS_SPACE_MEMORY |
|
|
|
|
PCI_BASE_ADDRESS_MEM_TYPE_64 |
|
2020-12-18 02:32:16 +03:00
|
|
|
PCI_BASE_ADDRESS_MEM_PREFETCH, &n->cmb.mem);
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CAP_SET_CMBS(cap, 1);
|
|
|
|
stq_le_p(&n->bar.cap, cap);
|
2020-12-18 02:32:16 +03:00
|
|
|
|
|
|
|
if (n->params.legacy_cmb) {
|
|
|
|
nvme_cmb_enable_regs(n);
|
|
|
|
n->cmb.cmse = true;
|
|
|
|
}
|
2020-06-09 22:03:27 +03:00
|
|
|
}
|
|
|
|
|
2020-06-09 22:03:28 +03:00
|
|
|
static void nvme_init_pmr(NvmeCtrl *n, PCIDevice *pci_dev)
|
|
|
|
{
|
2021-07-13 20:31:27 +03:00
|
|
|
uint32_t pmrcap = ldl_le_p(&n->bar.pmrcap);
|
|
|
|
|
|
|
|
NVME_PMRCAP_SET_RDS(pmrcap, 1);
|
|
|
|
NVME_PMRCAP_SET_WDS(pmrcap, 1);
|
|
|
|
NVME_PMRCAP_SET_BIR(pmrcap, NVME_PMR_BIR);
|
2020-06-09 22:03:28 +03:00
|
|
|
/* Turn on bit 1 support */
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_PMRCAP_SET_PMRWBM(pmrcap, 0x02);
|
|
|
|
NVME_PMRCAP_SET_CMSS(pmrcap, 1);
|
|
|
|
stl_le_p(&n->bar.pmrcap, pmrcap);
|
2020-06-09 22:03:28 +03:00
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
pci_register_bar(pci_dev, NVME_PMR_BIR,
|
2020-06-09 22:03:28 +03:00
|
|
|
PCI_BASE_ADDRESS_SPACE_MEMORY |
|
|
|
|
PCI_BASE_ADDRESS_MEM_TYPE_64 |
|
2020-11-13 08:30:05 +03:00
|
|
|
PCI_BASE_ADDRESS_MEM_PREFETCH, &n->pmr.dev->mr);
|
2020-12-18 15:04:19 +03:00
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
memory_region_set_enabled(&n->pmr.dev->mr, false);
|
2020-06-09 22:03:28 +03:00
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:15 +03:00
|
|
|
static uint64_t nvme_bar_size(unsigned total_queues, unsigned total_irqs,
|
|
|
|
unsigned *msix_table_offset,
|
|
|
|
unsigned *msix_pba_offset)
|
|
|
|
{
|
|
|
|
uint64_t bar_size, msix_table_size, msix_pba_size;
|
|
|
|
|
|
|
|
bar_size = sizeof(NvmeBar) + 2 * total_queues * NVME_DB_SIZE;
|
|
|
|
bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB);
|
|
|
|
|
|
|
|
if (msix_table_offset) {
|
|
|
|
*msix_table_offset = bar_size;
|
|
|
|
}
|
|
|
|
|
|
|
|
msix_table_size = PCI_MSIX_ENTRY_SIZE * total_irqs;
|
|
|
|
bar_size += msix_table_size;
|
|
|
|
bar_size = QEMU_ALIGN_UP(bar_size, 4 * KiB);
|
|
|
|
|
|
|
|
if (msix_pba_offset) {
|
|
|
|
*msix_pba_offset = bar_size;
|
|
|
|
}
|
|
|
|
|
|
|
|
msix_pba_size = QEMU_ALIGN_UP(total_irqs, 64) / 8;
|
|
|
|
bar_size += msix_pba_size;
|
|
|
|
|
|
|
|
bar_size = pow2ceil(bar_size);
|
|
|
|
return bar_size;
|
|
|
|
}
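/*
 * Illustrative walk-through of the BAR sizing above, assuming the
 * register page (sizeof(NvmeBar)) is 4 KiB, 65 queue pairs (admin + 64
 * I/O) and 65 MSI-X vectors:
 *
 *   registers + doorbells: 4096 + 2 * 65 * 4 = 4616   -> align: 8192
 *   MSI-X table:           8192 + 16 * 65    = 9232   -> align: 12288
 *   MSI-X PBA:             12288 + 128 / 8   = 12304  -> pow2ceil: 16384
 *
 * giving a 16 KiB BAR with the MSI-X table at 8 KiB and the PBA at 12 KiB.
 */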
|
|
|
|
|
2022-05-09 17:16:16 +03:00
|
|
|
static void nvme_init_sriov(NvmeCtrl *n, PCIDevice *pci_dev, uint16_t offset)
|
2022-05-09 17:16:09 +03:00
|
|
|
{
|
|
|
|
uint16_t vf_dev_id = n->params.use_intel_id ?
|
|
|
|
PCI_DEVICE_ID_INTEL_NVME : PCI_DEVICE_ID_REDHAT_NVME;
|
2022-05-09 17:16:16 +03:00
|
|
|
NvmePriCtrlCap *cap = &n->pri_ctrl_cap;
|
|
|
|
uint64_t bar_size = nvme_bar_size(le16_to_cpu(cap->vqfrsm),
|
|
|
|
le16_to_cpu(cap->vifrsm),
|
|
|
|
NULL, NULL);
|
2022-05-09 17:16:09 +03:00
|
|
|
|
|
|
|
pcie_sriov_pf_init(pci_dev, offset, "nvme", vf_dev_id,
|
|
|
|
n->params.sriov_max_vfs, n->params.sriov_max_vfs,
|
|
|
|
NVME_VF_OFFSET, NVME_VF_STRIDE);
|
|
|
|
|
|
|
|
pcie_sriov_pf_init_vf_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY |
|
|
|
|
PCI_BASE_ADDRESS_MEM_TYPE_64, bar_size);
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:12 +03:00
|
|
|
static int nvme_add_pm_capability(PCIDevice *pci_dev, uint8_t offset)
|
|
|
|
{
|
|
|
|
Error *err = NULL;
|
|
|
|
int ret;
|
|
|
|
|
|
|
|
ret = pci_add_capability(pci_dev, PCI_CAP_ID_PM, offset,
|
|
|
|
PCI_PM_SIZEOF, &err);
|
|
|
|
if (err) {
|
|
|
|
error_report_err(err);
|
|
|
|
return ret;
|
|
|
|
}
|
|
|
|
|
|
|
|
pci_set_word(pci_dev->config + offset + PCI_PM_PMC,
|
|
|
|
PCI_PM_CAP_VER_1_2);
|
|
|
|
pci_set_word(pci_dev->config + offset + PCI_PM_CTRL,
|
|
|
|
PCI_PM_CTRL_NO_SOFT_RESET);
|
|
|
|
pci_set_word(pci_dev->wmask + offset + PCI_PM_CTRL,
|
|
|
|
PCI_PM_CTRL_STATE_MASK);
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
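/*
 * Sketch of the config-space write hook described in the Function Level
 * Reset commit message above (illustrative only; the handler actually
 * registered by this device may differ). pci_default_write_config()
 * applies the write, and pcie_cap_flr_write_config() checks whether the
 * FLR trigger bit was set and, if so, resets the function:
 *
 *   static void nvme_pci_write_config(PCIDevice *dev, uint32_t address,
 *                                     uint32_t val, int len)
 *   {
 *       pci_default_write_config(dev, address, val, len);
 *       pcie_cap_flr_write_config(dev, address, val, len);
 *   }
 */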
|
|
|
|
|
2022-11-09 13:40:16 +03:00
|
|
|
static bool nvme_init_pci(NvmeCtrl *n, PCIDevice *pci_dev, Error **errp)
|
2020-06-09 22:03:26 +03:00
|
|
|
{
|
2022-11-09 13:40:16 +03:00
|
|
|
ERRP_GUARD();
|
2020-06-09 22:03:26 +03:00
|
|
|
uint8_t *pci_conf = pci_dev->config;
|
2022-05-09 17:16:15 +03:00
|
|
|
uint64_t bar_size;
|
2020-11-13 11:50:33 +03:00
|
|
|
unsigned msix_table_offset, msix_pba_offset;
|
2021-01-12 15:30:26 +03:00
|
|
|
int ret;
|
|
|
|
|
2020-06-09 22:03:26 +03:00
|
|
|
pci_conf[PCI_INTERRUPT_PIN] = 1;
|
|
|
|
pci_config_set_prog_interface(pci_conf, 0x2);
|
2019-09-27 12:43:12 +03:00
|
|
|
|
|
|
|
if (n->params.use_intel_id) {
|
|
|
|
pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_INTEL);
|
2022-05-09 17:16:09 +03:00
|
|
|
pci_config_set_device_id(pci_conf, PCI_DEVICE_ID_INTEL_NVME);
|
2019-09-27 12:43:12 +03:00
|
|
|
} else {
|
|
|
|
pci_config_set_vendor_id(pci_conf, PCI_VENDOR_ID_REDHAT);
|
|
|
|
pci_config_set_device_id(pci_conf, PCI_DEVICE_ID_REDHAT_NVME);
|
|
|
|
}
|
|
|
|
|
2020-06-09 22:03:26 +03:00
|
|
|
pci_config_set_class(pci_conf, PCI_CLASS_STORAGE_EXPRESS);
|
2022-05-09 17:16:12 +03:00
|
|
|
nvme_add_pm_capability(pci_dev, 0x60);
|
2020-06-09 22:03:26 +03:00
|
|
|
pcie_endpoint_cap_init(pci_dev, 0x80);
|
hw/nvme: Implement the Function Level Reset
This patch implements the Function Level Reset, a feature currently not
implemented for the Nvme device, while listed as a mandatory ("shall")
in the 1.4 spec.
The implementation reuses FLR-related building blocks defined for the
pci-bridge module, and follows the same logic:
- FLR capability is advertised in the PCIE config,
- custom pci_write_config callback detects a write to the trigger
register and performs the PCI reset,
- which, eventually, calls the custom dc->reset handler.
Depending on reset type, parts of the state should (or should not) be
cleared. To distinguish the type of reset, an additional parameter is
passed to the reset function.
This patch also enables advertisement of the Power Management PCI
capability. The main reason behind it is to announce the no_soft_reset=1
bit, to signal SR-IOV support where each VF can be reset individually.
The implementation purposedly ignores writes to the PMCS.PS register,
as even such naïve behavior is enough to correctly handle the D3->D0
transition.
It’s worth to note, that the power state transition back to to D3, with
all the corresponding side effects, wasn't and stil isn't handled
properly.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:12 +03:00
|
|
|
pcie_cap_flr_init(pci_dev);
|
2022-05-09 17:16:09 +03:00
|
|
|
if (n->params.sriov_max_vfs) {
|
|
|
|
pcie_ari_init(pci_dev, 0x100, 1);
|
|
|
|
}
|
2020-06-09 22:03:26 +03:00
|
|
|
|
2022-05-09 17:16:14 +03:00
|
|
|
/* add one to max_ioqpairs to account for the admin queue pair */
|
2022-05-09 17:16:15 +03:00
|
|
|
bar_size = nvme_bar_size(n->params.max_ioqpairs + 1, n->params.msix_qsize,
|
|
|
|
&msix_table_offset, &msix_pba_offset);
|
2020-11-13 11:50:33 +03:00
|
|
|
|
|
|
|
memory_region_init(&n->bar0, OBJECT(n), "nvme-bar0", bar_size);
|
2020-06-09 22:03:26 +03:00
|
|
|
memory_region_init_io(&n->iomem, OBJECT(n), &nvme_mmio_ops, n, "nvme",
|
2022-05-09 17:16:14 +03:00
|
|
|
msix_table_offset);
|
2020-11-13 11:50:33 +03:00
|
|
|
memory_region_add_subregion(&n->bar0, 0, &n->iomem);
|
|
|
|
|
2022-05-09 17:16:09 +03:00
|
|
|
if (pci_is_vf(pci_dev)) {
|
|
|
|
pcie_sriov_vf_register_bar(pci_dev, 0, &n->bar0);
|
|
|
|
} else {
|
|
|
|
pci_register_bar(pci_dev, 0, PCI_BASE_ADDRESS_SPACE_MEMORY |
|
|
|
|
PCI_BASE_ADDRESS_MEM_TYPE_64, &n->bar0);
|
|
|
|
}
|
2020-11-13 11:50:33 +03:00
|
|
|
ret = msix_init(pci_dev, n->params.msix_qsize,
|
|
|
|
&n->bar0, 0, msix_table_offset,
|
2022-11-09 13:40:16 +03:00
|
|
|
&n->bar0, 0, msix_pba_offset, 0, errp);
|
|
|
|
if (ret == -ENOTSUP) {
|
|
|
|
/* report that msix is not supported, but do not error out */
|
|
|
|
warn_report_err(*errp);
|
|
|
|
*errp = NULL;
|
|
|
|
} else if (ret < 0) {
|
|
|
|
/* propagate error to caller */
|
|
|
|
return false;
|
2020-06-09 22:03:33 +03:00
|
|
|
}
|
2020-06-09 22:03:29 +03:00
|
|
|
|
hw/nvme: Make max_ioqpairs and msix_qsize configurable in runtime
The NVMe device defines two properties: max_ioqpairs and msix_qsize. Having
them as constants is problematic for SR-IOV support.
SR-IOV introduces virtual resources (queues, interrupts) that can be
assigned to the PF and its dependent VFs. Each device, following a reset,
should work with the configured number of queues. A single constant is
no longer sufficient to hold the whole state.
This patch tries to solve the problem by introducing additional
variables in NvmeCtrl's state. The variables for, e.g., managing queues
are therefore organized as:
- n->params.max_ioqpairs – no changes, constant set by the user
- n->(mutable_state) – (not a part of this patch) user-configurable,
specifies the number of queues available _after_ reset
- n->conf_ioqpairs – (new) used in all places instead of the 'old'
n->params.max_ioqpairs; initialized in realize() and updated during
reset() to reflect the user's changes to the mutable state
Since the number of available I/O queues and interrupts can change at
runtime, buffers for sq/cqs and the MSI-X-related structures are
allocated large enough to handle the limits, completely avoiding
complicated reallocation. A helper function (nvme_update_msixcap_ts)
updates the corresponding capability register to signal configuration
changes.
Signed-off-by: Łukasz Gieryk <lukasz.gieryk@linux.intel.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2022-05-09 17:16:13 +03:00
|
|
|
nvme_update_msixcap_ts(pci_dev, n->conf_msix_qsize);
|
|
|
|
|
2020-06-09 22:03:29 +03:00
|
|
|
if (n->params.cmb_size_mb) {
|
|
|
|
nvme_init_cmb(n, pci_dev);
|
2020-11-13 11:57:13 +03:00
|
|
|
}
|
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
if (n->pmr.dev) {
|
2020-06-09 22:03:29 +03:00
|
|
|
nvme_init_pmr(n, pci_dev);
|
|
|
|
}
|
2021-01-12 15:30:26 +03:00
|
|
|
|
2022-05-09 17:16:09 +03:00
|
|
|
if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) {
|
2022-05-09 17:16:16 +03:00
|
|
|
nvme_init_sriov(n, pci_dev, 0x120);
|
2022-05-09 17:16:09 +03:00
|
|
|
}
|
|
|
|
|
2022-11-09 13:40:16 +03:00
|
|
|
return true;
|
2020-06-09 22:03:26 +03:00
|
|
|
}
|
|
|
|
|
hw/block/nvme: support to map controller to a subsystem
An nvme controller (nvme) can be mapped to an NVMe subsystem (nvme-subsys).
This patch maps a controller to a subsystem by adding a parameter
'subsys' to the nvme device.
To map a controller to a subsystem, we need to define the nvme-subsys
device first and then map the subsystem to the controller:
-device nvme-subsys,id=subsys0
-device nvme,serial=foo,id=nvme0,subsys=subsys0
If the 'subsys' property is not given to the nvme controller, the subsystem
NQN will be created from the serial (e.g., 'foo' in the above example);
otherwise, it will be based on the subsys id (e.g., 'subsys0' in the above
example).
Signed-off-by: Minwoo Im <minwoo.im.dev@gmail.com>
Tested-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
2021-01-24 05:54:46 +03:00
|
|
|
static void nvme_init_subnqn(NvmeCtrl *n)
|
|
|
|
{
|
|
|
|
NvmeSubsystem *subsys = n->subsys;
|
|
|
|
NvmeIdCtrl *id = &n->id_ctrl;
|
|
|
|
|
|
|
|
if (!subsys) {
|
|
|
|
snprintf((char *)id->subnqn, sizeof(id->subnqn),
|
|
|
|
"nqn.2019-08.org.qemu:%s", n->params.serial);
|
|
|
|
} else {
|
|
|
|
pstrcpy((char *)id->subnqn, sizeof(id->subnqn), (char*)subsys->subnqn);
|
|
|
|
}
|
|
|
|
}
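As a concrete illustration of the fallback above (the serial value 'foo' is hypothetical): a controller created without a subsys link reports SUBNQN "nqn.2019-08.org.qemu:foo", while a controller linked to an nvme-subsys device copies that subsystem's NQN verbatim.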
|
|
|
|
|
2020-06-09 22:03:30 +03:00
|
|
|
static void nvme_init_ctrl(NvmeCtrl *n, PCIDevice *pci_dev)
|
2020-06-09 22:03:21 +03:00
|
|
|
{
|
|
|
|
NvmeIdCtrl *id = &n->id_ctrl;
|
2020-06-09 22:03:30 +03:00
|
|
|
uint8_t *pci_conf = pci_dev->config;
|
2021-07-13 20:31:27 +03:00
|
|
|
uint64_t cap = ldq_le_p(&n->bar.cap);
|
2022-05-09 17:16:16 +03:00
|
|
|
NvmeSecCtrlEntry *sctrl = nvme_sctrl(n);
|
2023-02-20 14:59:24 +03:00
|
|
|
uint32_t ctratt;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
|
|
|
id->vid = cpu_to_le16(pci_get_word(pci_conf + PCI_VENDOR_ID));
|
|
|
|
id->ssvid = cpu_to_le16(pci_get_word(pci_conf + PCI_SUBSYSTEM_VENDOR_ID));
|
|
|
|
strpadcpy((char *)id->mn, sizeof(id->mn), "QEMU NVMe Ctrl", ' ');
|
2022-04-29 11:33:36 +03:00
|
|
|
strpadcpy((char *)id->fr, sizeof(id->fr), QEMU_VERSION, ' ');
|
2020-06-09 22:03:15 +03:00
|
|
|
strpadcpy((char *)id->sn, sizeof(id->sn), n->params.serial, ' ');
|
2021-01-24 05:54:48 +03:00
|
|
|
|
|
|
|
id->cntlid = cpu_to_le16(n->cntlid);
|
|
|
|
|
2021-02-28 11:51:02 +03:00
|
|
|
id->oaes = cpu_to_le32(NVME_OAES_NS_ATTR);
|
2023-02-20 14:59:24 +03:00
|
|
|
ctratt = NVME_CTRATT_ELBAS;
|
2021-02-28 11:51:02 +03:00
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
id->rab = 6;
|
2021-02-08 16:10:31 +03:00
|
|
|
|
|
|
|
if (n->params.use_intel_id) {
|
|
|
|
id->ieee[0] = 0xb3;
|
|
|
|
id->ieee[1] = 0x02;
|
|
|
|
id->ieee[2] = 0x00;
|
|
|
|
} else {
|
|
|
|
id->ieee[0] = 0x00;
|
|
|
|
id->ieee[1] = 0x54;
|
|
|
|
id->ieee[2] = 0x52;
|
|
|
|
}
|
|
|
|
|
2020-02-23 19:38:22 +03:00
|
|
|
id->mdts = n->params.mdts;
|
2020-07-06 09:13:03 +03:00
|
|
|
id->ver = cpu_to_le32(NVME_SPEC_VER);
|
2022-06-16 15:34:07 +03:00
|
|
|
id->oacs =
|
2023-02-20 14:59:25 +03:00
|
|
|
cpu_to_le16(NVME_OACS_NS_MGMT | NVME_OACS_FORMAT | NVME_OACS_DBBUF |
|
|
|
|
NVME_OACS_DIRECTIVES);
|
2021-01-13 12:19:44 +03:00
|
|
|
id->cntrltype = 0x1;
|
2020-07-06 09:12:49 +03:00
|
|
|
|
|
|
|
/*
|
|
|
|
* Because the controller always completes the Abort command immediately,
|
|
|
|
* there can never be more than one concurrently executing Abort command,
|
|
|
|
* so this value is never used for anything. Note that there can easily be
|
|
|
|
* many Abort commands in the queues, but they are not considered
|
|
|
|
* "executing" until processed by nvme_abort.
|
|
|
|
*
|
|
|
|
* The specification recommends a value of 3 for Abort Command Limit (four
|
|
|
|
* concurrently outstanding Abort commands), so let's use that though it is
|
|
|
|
* inconsequential.
|
|
|
|
*/
|
|
|
|
id->acl = 3;
|
2020-07-06 09:12:53 +03:00
|
|
|
id->aerl = n->params.aerl;
|
2020-07-06 09:12:51 +03:00
|
|
|
id->frmw = (NVME_NUM_FW_SLOTS << 1) | NVME_FRMW_SLOT1_RO;
|
2020-12-08 23:04:02 +03:00
|
|
|
id->lpa = NVME_LPA_NS_SMART | NVME_LPA_CSE | NVME_LPA_EXTENDED;
|
2020-07-06 09:12:50 +03:00
|
|
|
|
|
|
|
/* recommended default value (~70 C) */
|
|
|
|
id->wctemp = cpu_to_le16(NVME_TEMPERATURE_WARNING);
|
|
|
|
id->cctemp = cpu_to_le16(NVME_TEMPERATURE_CRITICAL);
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
id->sqes = (0x6 << 4) | 0x6;
|
|
|
|
id->cqes = (0x4 << 4) | 0x4;
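/*
 * The encodings above set both the required (low nibble) and maximum (high
 * nibble) entry sizes as power-of-two exponents: 2^6 = 64 byte submission
 * queue entries and 2^4 = 16 byte completion queue entries.
 */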
|
2021-04-14 22:46:00 +03:00
|
|
|
id->nn = cpu_to_le32(NVME_MAX_NAMESPACES);
|
2020-03-31 00:10:13 +03:00
|
|
|
id->oncs = cpu_to_le16(NVME_ONCS_WRITE_ZEROES | NVME_ONCS_TIMESTAMP |
|
2020-11-16 13:14:02 +03:00
|
|
|
NVME_ONCS_FEATURES | NVME_ONCS_DSM |
|
2021-02-14 21:09:27 +03:00
|
|
|
NVME_ONCS_COMPARE | NVME_ONCS_COPY);
|
2019-06-26 09:51:06 +03:00
|
|
|
|
2021-01-25 12:39:24 +03:00
|
|
|
/*
|
|
|
|
* NOTE: If this device ever supports a command set that does NOT use 0x0
|
|
|
|
* as a Flush-equivalent operation, support for the broadcast NSID in Flush
|
|
|
|
* should probably be removed.
|
|
|
|
*
|
|
|
|
* See comment in nvme_io_cmd.
|
|
|
|
*/
|
|
|
|
id->vwc = NVME_VWC_NSID_BROADCAST_SUPPORT | NVME_VWC_PRESENT;
|
|
|
|
|
2021-11-16 16:26:52 +03:00
|
|
|
id->ocfs = cpu_to_le16(NVME_OCFS_COPY_FORMAT_0 | NVME_OCFS_COPY_FORMAT_1);
|
2022-05-02 08:55:54 +03:00
|
|
|
id->sgls = cpu_to_le32(NVME_CTRL_SGLS_SUPPORT_NO_ALIGN);
|
2020-07-06 09:12:57 +03:00
|
|
|
|
2021-01-24 05:54:46 +03:00
|
|
|
nvme_init_subnqn(n);
|
2020-07-06 09:13:02 +03:00
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
id->psd[0].mp = cpu_to_le16(0x9c4);
|
|
|
|
id->psd[0].enlat = cpu_to_le32(0x10);
|
|
|
|
id->psd[0].exlat = cpu_to_le32(0x4);
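/*
 * Power state descriptor 0 above: maximum power 0x9c4 = 2500 centiwatts
 * (25 W), entry latency 0x10 = 16 us, exit latency 0x4 = 4 us.
 */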
|
|
|
|
|
2021-01-24 05:54:48 +03:00
|
|
|
if (n->subsys) {
|
|
|
|
id->cmic |= NVME_CMIC_MULTI_CTRL;
|
2023-02-20 14:59:24 +03:00
|
|
|
ctratt |= NVME_CTRATT_ENDGRPS;
|
|
|
|
|
|
|
|
id->endgidmax = cpu_to_le16(0x1);
|
2023-02-20 14:59:26 +03:00
|
|
|
|
|
|
|
if (n->subsys->endgrp.fdp.enabled) {
|
|
|
|
ctratt |= NVME_CTRATT_FDPS;
|
|
|
|
}
|
2021-01-24 05:54:48 +03:00
|
|
|
}
|
|
|
|
|
2023-02-20 14:59:24 +03:00
|
|
|
id->ctratt = cpu_to_le32(ctratt);
|
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
NVME_CAP_SET_MQES(cap, 0x7ff);
|
|
|
|
NVME_CAP_SET_CQR(cap, 1);
|
|
|
|
NVME_CAP_SET_TO(cap, 0xf);
|
|
|
|
NVME_CAP_SET_CSS(cap, NVME_CAP_CSS_NVM);
|
|
|
|
NVME_CAP_SET_CSS(cap, NVME_CAP_CSS_CSI_SUPP);
|
|
|
|
NVME_CAP_SET_CSS(cap, NVME_CAP_CSS_ADMIN_ONLY);
|
|
|
|
NVME_CAP_SET_MPSMAX(cap, 4);
|
|
|
|
NVME_CAP_SET_CMBS(cap, n->params.cmb_size_mb ? 1 : 0);
|
|
|
|
NVME_CAP_SET_PMRS(cap, n->pmr.dev ? 1 : 0);
|
|
|
|
stq_le_p(&n->bar.cap, cap);
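/*
 * Decoded, the CAP defaults programmed above mean: up to 2048 entries per
 * queue (MQES is zero-based), physically contiguous queues required (CQR),
 * a worst-case ready timeout of 7.5 seconds (TO is in 500 ms units), and a
 * maximum host memory page size of 64 KiB (MPSMAX of 4, i.e. 2^(12+4)).
 */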
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2021-07-13 20:31:27 +03:00
|
|
|
stl_le_p(&n->bar.vs, NVME_SPEC_VER);
|
2013-06-04 19:17:10 +04:00
|
|
|
n->bar.intmc = n->bar.intms = 0;
|
2022-05-09 17:16:16 +03:00
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
if (pci_is_vf(pci_dev) && !sctrl->scs) {
|
2022-05-09 17:16:16 +03:00
|
|
|
stl_le_p(&n->bar.csts, NVME_CSTS_FAILED);
|
|
|
|
}
|
2020-06-09 22:03:30 +03:00
|
|
|
}
|
|
|
|
|
2021-01-24 05:54:48 +03:00
|
|
|
static int nvme_init_subsys(NvmeCtrl *n, Error **errp)
|
|
|
|
{
|
|
|
|
int cntlid;
|
|
|
|
|
|
|
|
if (!n->subsys) {
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
|
|
|
cntlid = nvme_subsys_register_ctrl(n, errp);
|
|
|
|
if (cntlid < 0) {
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
|
|
|
n->cntlid = cntlid;
|
|
|
|
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
hw/block/nvme: fix handling of private namespaces
Prior to this patch, if a private nvme-ns device (that is, a namespace
that is not linked to a subsystem) is wired up to an nvme-subsys linked
nvme controller device, the device fails to verify that the namespace id
is unique within the subsystem. NVM Express v1.4b, Section 6.1.6 ("NSID
and Namespace Usage") states that because the device supports Namespace
Management, "NSIDs *shall* be unique within the NVM subsystem".
Additionally, prior to this patch, private namespaces are not known to
the subsystem and the namespace is considered exclusive to the
controller to which it is initially wired up. However, this is not
the definition of a private namespace; per Section 1.6.33 ("private
namespace"), a private namespace is just a namespace that does not
support multipath I/O or namespace sharing, which means "that it is only
able to be attached to one controller at a time".
Fix this by always allocating namespaces in the subsystem (if one is
linked to the controller), regardless of the shared/private status of
the namespace. Whether or not the namespace is shareable is controlled
by a new `shared` nvme-ns parameter.
Finally, this fix allows the nvme-ns `subsys` parameter to be removed,
since the `shared` parameter now serves the purpose of attaching the
namespace to all controllers in the subsystem upon device realization.
It is invalid to have an nvme-ns namespace device with a linked
subsystem without the parent nvme controller device also being linked to
one. Since the nvme-ns devices will unconditionally be "attached" (in
QEMU terms, that is) to an nvme controller device through an NvmeBus, the
nvme-ns namespace device can always get a reference to the subsystem of
the controller it is explicitly (using the 'bus=' parameter) or implicitly
attaching to.
Fixes: e570768566b3 ("hw/block/nvme: support for shared namespace in subsystem")
Cc: Minwoo Im <minwoo.im.dev@gmail.com>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
Reviewed-by: Gollu Appalanaidu <anaidu.gollu@samsung.com>
Reviewed-by: Keith Busch <kbusch@kernel.org>
Reviewed-by: Minwoo Im <minwoo.im.dev@gmail.com>
2021-03-23 14:43:24 +03:00
|
|
|
void nvme_attach_ns(NvmeCtrl *n, NvmeNamespace *ns)
|
|
|
|
{
|
|
|
|
uint32_t nsid = ns->params.nsid;
|
|
|
|
assert(nsid && nsid <= NVME_MAX_NAMESPACES);
|
|
|
|
|
2021-04-14 22:40:40 +03:00
|
|
|
n->namespaces[nsid] = ns;
|
2021-03-23 14:43:24 +03:00
|
|
|
ns->attached++;
|
|
|
|
|
|
|
|
n->dmrsl = MIN_NON_ZERO(n->dmrsl,
|
|
|
|
BDRV_REQUEST_MAX_BYTES / nvme_l2b(ns, 1));
|
|
|
|
}
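A short worked example for the dmrsl update above (block sizes are hypothetical): MIN_NON_ZERO() keeps the smaller of the current value and the new cap, so after attaching both a 512-byte-block and a 4096-byte-block namespace, dmrsl reflects the 4096-byte one; with BDRV_REQUEST_MAX_BYTES just under 2 GiB that is roughly 512k logical blocks, versus roughly 4M if only 512-byte-block namespaces are attached.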
|
|
|
|
|
2020-06-09 22:03:30 +03:00
|
|
|
static void nvme_realize(PCIDevice *pci_dev, Error **errp)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = NVME(pci_dev);
|
2022-12-08 14:43:18 +03:00
|
|
|
DeviceState *dev = DEVICE(pci_dev);
|
2019-06-26 09:51:06 +03:00
|
|
|
NvmeNamespace *ns;
|
2022-05-09 17:16:09 +03:00
|
|
|
NvmeCtrl *pn = NVME(pcie_sriov_get_pf(pci_dev));
|
|
|
|
|
|
|
|
if (pci_is_vf(pci_dev)) {
|
|
|
|
/*
|
|
|
|
* VFs derive settings from the parent. PF's lifespan exceeds
|
|
|
|
* that of VF's, so it's safe to share params.serial.
|
|
|
|
*/
|
|
|
|
memcpy(&n->params, &pn->params, sizeof(NvmeParams));
|
|
|
|
n->subsys = pn->subsys;
|
|
|
|
}
|
2020-06-09 22:03:30 +03:00
|
|
|
|
2022-11-09 13:40:11 +03:00
|
|
|
if (!nvme_check_params(n, errp)) {
|
2020-06-09 22:03:30 +03:00
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2022-12-08 14:43:18 +03:00
|
|
|
qbus_init(&n->bus, sizeof(NvmeBus), TYPE_NVME_BUS, dev, dev->id);
|
2020-06-09 22:03:30 +03:00
|
|
|
|
2021-01-24 05:54:48 +03:00
|
|
|
if (nvme_init_subsys(n, errp)) {
|
|
|
|
return;
|
|
|
|
}
|
2022-05-09 17:16:10 +03:00
|
|
|
nvme_init_state(n);
|
2022-11-09 13:40:16 +03:00
|
|
|
if (!nvme_init_pci(n, pci_dev, errp)) {
|
2022-05-09 17:16:10 +03:00
|
|
|
return;
|
|
|
|
}
|
2020-06-09 22:03:30 +03:00
|
|
|
nvme_init_ctrl(n, pci_dev);
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2019-06-26 09:51:06 +03:00
|
|
|
/* setup a namespace if the controller drive property was given */
|
|
|
|
if (n->namespace.blkconf.blk) {
|
|
|
|
ns = &n->namespace;
|
|
|
|
ns->params.nsid = 1;
|
|
|
|
|
2021-07-06 10:10:56 +03:00
|
|
|
if (nvme_ns_setup(ns, errp)) {
|
2020-06-09 22:03:25 +03:00
|
|
|
return;
|
|
|
|
}
|
2021-02-11 13:50:19 +03:00
|
|
|
|
2021-03-23 14:43:24 +03:00
|
|
|
nvme_attach_ns(n, ns);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_exit(PCIDevice *pci_dev)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = NVME(pci_dev);
|
2020-12-08 23:04:06 +03:00
|
|
|
NvmeNamespace *ns;
|
|
|
|
int i;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2022-05-09 17:16:12 +03:00
|
|
|
nvme_ctrl_reset(n, NVME_RESET_FUNCTION);
|
2020-12-08 23:04:06 +03:00
|
|
|
|
2021-04-23 19:55:11 +03:00
|
|
|
if (n->subsys) {
|
|
|
|
for (i = 1; i <= NVME_MAX_NAMESPACES; i++) {
|
|
|
|
ns = nvme_ns(n, i);
|
|
|
|
if (ns) {
|
|
|
|
ns->attached--;
|
|
|
|
}
|
2020-12-08 23:04:06 +03:00
|
|
|
}
|
|
|
|
|
2021-07-06 11:51:36 +03:00
|
|
|
nvme_subsys_unregister_ctrl(n->subsys, n);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
g_free(n->cq);
|
|
|
|
g_free(n->sq);
|
2020-07-06 09:12:53 +03:00
|
|
|
g_free(n->aer_reqs);
|
2017-05-16 22:10:59 +03:00
|
|
|
|
2020-06-09 22:03:15 +03:00
|
|
|
if (n->params.cmb_size_mb) {
|
2020-12-18 02:32:16 +03:00
|
|
|
g_free(n->cmb.buf);
|
2018-10-29 09:29:41 +03:00
|
|
|
}
|
2020-03-30 19:46:56 +03:00
|
|
|
|
2020-11-13 08:30:05 +03:00
|
|
|
if (n->pmr.dev) {
|
|
|
|
host_memory_backend_set_mapped(n->pmr.dev, false);
|
2020-03-30 19:46:56 +03:00
|
|
|
}
|
2022-05-09 17:16:09 +03:00
|
|
|
|
|
|
|
if (!pci_is_vf(pci_dev) && n->params.sriov_max_vfs) {
|
|
|
|
pcie_sriov_pf_exit(pci_dev);
|
|
|
|
}
|
|
|
|
|
2021-04-23 08:21:26 +03:00
|
|
|
msix_uninit(pci_dev, &n->bar0, &n->bar0);
|
|
|
|
memory_region_del_subregion(&n->bar0, &n->iomem);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
static Property nvme_props[] = {
|
2019-06-26 09:51:06 +03:00
|
|
|
DEFINE_BLOCK_PROPERTIES(NvmeCtrl, namespace.blkconf),
|
2020-11-13 08:30:05 +03:00
|
|
|
DEFINE_PROP_LINK("pmrdev", NvmeCtrl, pmr.dev, TYPE_MEMORY_BACKEND,
|
2020-03-30 19:46:56 +03:00
|
|
|
HostMemoryBackend *),
|
2021-01-24 05:54:46 +03:00
|
|
|
DEFINE_PROP_LINK("subsys", NvmeCtrl, subsys, TYPE_NVME_SUBSYS,
|
|
|
|
NvmeSubsystem *),
|
2020-06-09 22:03:15 +03:00
|
|
|
DEFINE_PROP_STRING("serial", NvmeCtrl, params.serial),
|
|
|
|
DEFINE_PROP_UINT32("cmb_size_mb", NvmeCtrl, params.cmb_size_mb, 0),
|
2020-06-09 22:03:19 +03:00
|
|
|
DEFINE_PROP_UINT32("num_queues", NvmeCtrl, params.num_queues, 0),
|
|
|
|
DEFINE_PROP_UINT32("max_ioqpairs", NvmeCtrl, params.max_ioqpairs, 64),
|
2020-06-09 22:03:32 +03:00
|
|
|
DEFINE_PROP_UINT16("msix_qsize", NvmeCtrl, params.msix_qsize, 65),
|
2020-07-06 09:12:53 +03:00
|
|
|
DEFINE_PROP_UINT8("aerl", NvmeCtrl, params.aerl, 3),
|
|
|
|
DEFINE_PROP_UINT32("aer_max_queued", NvmeCtrl, params.aer_max_queued, 64),
|
2020-02-23 19:38:22 +03:00
|
|
|
DEFINE_PROP_UINT8("mdts", NvmeCtrl, params.mdts, 7),
|
2021-02-14 21:09:27 +03:00
|
|
|
DEFINE_PROP_UINT8("vsl", NvmeCtrl, params.vsl, 7),
|
2019-09-27 12:43:12 +03:00
|
|
|
DEFINE_PROP_BOOL("use-intel-id", NvmeCtrl, params.use_intel_id, false),
|
2020-12-18 02:32:16 +03:00
|
|
|
DEFINE_PROP_BOOL("legacy-cmb", NvmeCtrl, params.legacy_cmb, false),
|
2022-07-28 09:34:21 +03:00
|
|
|
DEFINE_PROP_BOOL("ioeventfd", NvmeCtrl, params.ioeventfd, false),
|
2021-02-22 21:27:58 +03:00
|
|
|
DEFINE_PROP_UINT8("zoned.zasl", NvmeCtrl, params.zasl, 0),
|
2021-05-28 14:05:07 +03:00
|
|
|
DEFINE_PROP_BOOL("zoned.auto_transition", NvmeCtrl,
|
|
|
|
params.auto_transition_zones, true),
|
2022-05-09 17:16:09 +03:00
|
|
|
DEFINE_PROP_UINT8("sriov_max_vfs", NvmeCtrl, params.sriov_max_vfs, 0),
|
2022-05-09 17:16:16 +03:00
|
|
|
DEFINE_PROP_UINT16("sriov_vq_flexible", NvmeCtrl,
|
|
|
|
params.sriov_vq_flexible, 0),
|
|
|
|
DEFINE_PROP_UINT16("sriov_vi_flexible", NvmeCtrl,
|
|
|
|
params.sriov_vi_flexible, 0),
|
|
|
|
DEFINE_PROP_UINT8("sriov_max_vi_per_vf", NvmeCtrl,
|
|
|
|
params.sriov_max_vi_per_vf, 0),
|
|
|
|
DEFINE_PROP_UINT8("sriov_max_vq_per_vf", NvmeCtrl,
|
|
|
|
params.sriov_max_vq_per_vf, 0),
|
2013-06-04 19:17:10 +04:00
|
|
|
DEFINE_PROP_END_OF_LIST(),
|
|
|
|
};
|
|
|
|
|
2021-01-15 06:27:01 +03:00
|
|
|
static void nvme_get_smart_warning(Object *obj, Visitor *v, const char *name,
|
|
|
|
void *opaque, Error **errp)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = NVME(obj);
|
|
|
|
uint8_t value = n->smart_critical_warning;
|
|
|
|
|
|
|
|
visit_type_uint8(v, name, &value, errp);
|
|
|
|
}
|
|
|
|
|
|
|
|
static void nvme_set_smart_warning(Object *obj, Visitor *v, const char *name,
|
|
|
|
void *opaque, Error **errp)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = NVME(obj);
|
2021-01-15 06:27:02 +03:00
|
|
|
uint8_t value, old_value, cap = 0, index, event;
|
2021-01-15 06:27:01 +03:00
|
|
|
|
|
|
|
if (!visit_type_uint8(v, name, &value, errp)) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
cap = NVME_SMART_SPARE | NVME_SMART_TEMPERATURE | NVME_SMART_RELIABILITY
|
|
|
|
| NVME_SMART_MEDIA_READ_ONLY | NVME_SMART_FAILED_VOLATILE_MEDIA;
|
2021-07-13 20:31:27 +03:00
|
|
|
if (NVME_CAP_PMRS(ldq_le_p(&n->bar.cap))) {
|
2021-01-15 06:27:01 +03:00
|
|
|
cap |= NVME_SMART_PMR_UNRELIABLE;
|
|
|
|
}
|
|
|
|
|
|
|
|
if ((value & cap) != value) {
|
|
|
|
error_setg(errp, "unsupported smart critical warning bits: 0x%x",
|
|
|
|
value & ~cap);
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
2021-01-15 06:27:02 +03:00
|
|
|
old_value = n->smart_critical_warning;
|
2021-01-15 06:27:01 +03:00
|
|
|
n->smart_critical_warning = value;
|
2021-01-15 06:27:02 +03:00
|
|
|
|
|
|
|
/* only inject new bits of smart critical warning */
|
|
|
|
for (index = 0; index < NVME_SMART_WARN_MAX; index++) {
|
|
|
|
event = 1 << index;
|
|
|
|
if (value & ~old_value & event)
|
|
|
|
nvme_smart_event(n, event);
|
|
|
|
}
|
2021-01-15 06:27:01 +03:00
|
|
|
}
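One way to exercise this setter at runtime (the device id nvme0 and the value are assumptions for the example) is through the QOM property interface, e.g. from the HMP monitor:

(qemu) qom-set /machine/peripheral/nvme0 smart_critical_warning 16

which, per the loop above, raises a SMART/Health asynchronous event only for warning bits that were not already set.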
|
|
|
|
|
2022-05-09 17:16:12 +03:00
|
|
|
static void nvme_pci_reset(DeviceState *qdev)
|
|
|
|
{
|
|
|
|
PCIDevice *pci_dev = PCI_DEVICE(qdev);
|
|
|
|
NvmeCtrl *n = NVME(pci_dev);
|
|
|
|
|
|
|
|
trace_pci_nvme_pci_reset();
|
|
|
|
nvme_ctrl_reset(n, NVME_RESET_FUNCTION);
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:17 +03:00
|
|
|
static void nvme_sriov_pre_write_ctrl(PCIDevice *dev, uint32_t address,
|
|
|
|
uint32_t val, int len)
|
|
|
|
{
|
|
|
|
NvmeCtrl *n = NVME(dev);
|
|
|
|
NvmeSecCtrlEntry *sctrl;
|
|
|
|
uint16_t sriov_cap = dev->exp.sriov_cap;
|
|
|
|
uint32_t off = address - sriov_cap;
|
|
|
|
int i, num_vfs;
|
|
|
|
|
|
|
|
if (!sriov_cap) {
|
|
|
|
return;
|
|
|
|
}
|
|
|
|
|
|
|
|
if (range_covers_byte(off, len, PCI_SRIOV_CTRL)) {
|
|
|
|
if (!(val & PCI_SRIOV_CTRL_VFE)) {
|
|
|
|
num_vfs = pci_get_word(dev->config + sriov_cap + PCI_SRIOV_NUM_VF);
|
|
|
|
for (i = 0; i < num_vfs; i++) {
|
|
|
|
sctrl = &n->sec_ctrl_list.sec[i];
|
|
|
|
nvme_virt_set_state(n, le16_to_cpu(sctrl->scid), false);
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
2022-05-09 17:16:12 +03:00
|
|
|
static void nvme_pci_write_config(PCIDevice *dev, uint32_t address,
|
|
|
|
uint32_t val, int len)
|
|
|
|
{
|
2022-05-09 17:16:17 +03:00
|
|
|
nvme_sriov_pre_write_ctrl(dev, address, val, len);
|
2022-05-09 17:16:12 +03:00
|
|
|
pci_default_write_config(dev, address, val, len);
|
|
|
|
pcie_cap_flr_write_config(dev, address, val, len);
|
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static const VMStateDescription nvme_vmstate = {
|
|
|
|
.name = "nvme",
|
|
|
|
.unmigratable = 1,
|
|
|
|
};
|
|
|
|
|
|
|
|
static void nvme_class_init(ObjectClass *oc, void *data)
|
|
|
|
{
|
|
|
|
DeviceClass *dc = DEVICE_CLASS(oc);
|
|
|
|
PCIDeviceClass *pc = PCI_DEVICE_CLASS(oc);
|
|
|
|
|
2017-11-22 06:08:43 +03:00
|
|
|
pc->realize = nvme_realize;
|
2022-05-09 17:16:12 +03:00
|
|
|
pc->config_write = nvme_pci_write_config;
|
2013-06-04 19:17:10 +04:00
|
|
|
pc->exit = nvme_exit;
|
|
|
|
pc->class_id = PCI_CLASS_STORAGE_EXPRESS;
|
2016-08-04 22:42:15 +03:00
|
|
|
pc->revision = 2;
|
2013-06-04 19:17:10 +04:00
|
|
|
|
2013-07-29 18:17:45 +04:00
|
|
|
set_bit(DEVICE_CATEGORY_STORAGE, dc->categories);
|
2013-06-04 19:17:10 +04:00
|
|
|
dc->desc = "Non-Volatile Memory Express";
|
2020-01-10 18:30:32 +03:00
|
|
|
device_class_set_props(dc, nvme_props);
|
2013-06-04 19:17:10 +04:00
|
|
|
dc->vmsd = &nvme_vmstate;
|
2022-05-09 17:16:12 +03:00
|
|
|
dc->reset = nvme_pci_reset;
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
nvme: generate OpenFirmware device path in the "bootorder" fw_cfg file
Background on QEMU boot indices
-------------------------------
Normally, the "bootindex" property is configured for bootable devices
with:
DEVICE_instance_init()
device_add_bootindex_property(..., "bootindex", ...)
object_property_add(..., device_get_bootindex,
device_set_bootindex, ...)
and when the bootindex is set on the QEMU command line, with
-device DEVICE,...,bootindex=N
the setter that was configured above is invoked:
device_set_bootindex()
/* parse boot index */
visit_type_int32()
/* verify unicity */
check_boot_index()
/* store parsed boot index */
...
/* insert device path to boot order */
add_boot_device_path()
In the last step, add_boot_device_path() ensures that an OpenFirmware
device path will show up in the "bootorder" fw_cfg file, at a position
corresponding to the device's boot index. Thus guest firmware (SeaBIOS and
OVMF) can try to boot off the device with the right priority.
NVMe boot index
---------------
In QEMU commit 33739c712982,
nvma: ide: add bootindex to qom property
the following generic setters / getters:
- device_set_bootindex()
- device_get_bootindex()
were open-coded for NVMe, under the names
- nvme_set_bootindex()
- nvme_get_bootindex()
Plus nvme_instance_init() was added to configure the "bootindex" property
manually, designating the open-coded getter & setter, rather than calling
device_add_bootindex_property().
Crucially, nvme_set_bootindex() avoided the final add_boot_device_path()
call. This fact is spelled out in the message of commit 33739c712982, and
it was presumably the entire reason for all of the code duplication.
Now, Vladislav filed an RFE for OVMF
<https://github.com/tianocore/edk2/issues/48>; OVMF should boot off NVMe
devices. It is simple to build edk2's existing NvmExpressDxe driver into
OVMF, but the boot order matching logic in OVMF can only handle NVMe if
the "bootorder" fw_cfg file includes such devices.
Therefore this patch converts the NVMe device model to
device_set_bootindex() all the way.
Device paths
------------
device_set_bootindex() accepts an optional parameter called "suffix". When
present, it is expected to take the form of an OpenFirmware device path
node, and it gets appended as last node to the otherwise auto-generated
OFW path.
For NVMe, the auto-generated part is
/pci@i0cf8/pci8086,5845@6[,1]
^ ^ ^ ^
| | PCI slot and (present when nonzero)
| | function of the NVMe controller, both hex
| "driver name" component, built from PCI vendor & device IDs
PCI root at system bus port, PIO
to which here we append the suffix
/namespace@1,0
^ ^
| big endian (MSB at lowest address) numeric interpretation
| of the 64-bit IEEE Extended Unique Identifier, aka EUI-64,
| hex
32-bit NVMe namespace identifier, aka NSID, hex
resulting in the OFW device path
/pci@i0cf8/pci8086,5845@6[,1]/namespace@1,0
The reason for including the NSID and the EUI-64 is that an NVMe device
can in theory produce several different namespaces (distinguished by
NSID). Additionally, each of those may (optionally) have an EUI-64 value.
For now, QEMU only provides namespace 1.
Furthermore, QEMU doesn't even represent the EUI-64 as a standalone field;
it is embedded (and left unused) inside the "NvmeIdNs.res30" array, at the
last eight bytes. (Which is fine, since EUI-64 can be left zero-filled if
unsupported by the device.)
Based on the above, we set the "unit address" part of the last
("namespace") node to fixed "1,0".
OVMF will then map the above OFW device path to the following UEFI device
path fragment, for boot order processing:
PciRoot(0x0)/Pci(0x6,0x1)/NVMe(0x1,00-00-00-00-00-00-00-00)
^ ^ ^ ^ ^ ^
| | | | | octets of the EUI-64 in address order
| | | | NSID
| | | NVMe namespace messaging device path node
| PCI slot and function
PCI root bridge
Cc: Keith Busch <keith.busch@intel.com> (supporter:nvme)
Cc: Kevin Wolf <kwolf@redhat.com> (supporter:Block layer core)
Cc: qemu-block@nongnu.org (open list:nvme)
Cc: Gonglei <arei.gonglei@huawei.com>
Cc: Vladislav Vovchenko <vladislav.vovchenko@sk.com>
Cc: Feng Tian <feng.tian@intel.com>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: Kevin O'Connor <kevin@koconnor.net>
Signed-off-by: Laszlo Ersek <lersek@redhat.com>
Acked-by: Gonglei <arei.gonglei@huawei.com>
Acked-by: Keith Busch <keith.busch@intel.com>
Tested-by: Vladislav Vovchenko <vladislav.vovchenko@sk.com>
Message-id: 1453850483-27511-1-git-send-email-lersek@redhat.com
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
2016-01-27 02:21:23 +03:00
|
|
|
static void nvme_instance_init(Object *obj)
|
2014-10-07 12:00:34 +04:00
|
|
|
{
|
2021-01-15 06:27:01 +03:00
|
|
|
NvmeCtrl *n = NVME(obj);
|
2014-10-07 12:00:34 +04:00
|
|
|
|
2021-03-22 11:24:44 +03:00
|
|
|
device_add_bootindex_property(obj, &n->namespace.blkconf.bootindex,
|
|
|
|
"bootindex", "/namespace@1,0",
|
|
|
|
DEVICE(obj));
|
2021-01-15 06:27:01 +03:00
|
|
|
|
|
|
|
object_property_add(obj, "smart_critical_warning", "uint8",
|
|
|
|
nvme_get_smart_warning,
|
|
|
|
nvme_set_smart_warning, NULL, NULL);
|
2014-10-07 12:00:34 +04:00
|
|
|
}
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static const TypeInfo nvme_info = {
|
2019-01-20 08:55:56 +03:00
|
|
|
.name = TYPE_NVME,
|
2013-06-04 19:17:10 +04:00
|
|
|
.parent = TYPE_PCI_DEVICE,
|
|
|
|
.instance_size = sizeof(NvmeCtrl),
|
2014-10-07 12:00:34 +04:00
|
|
|
.instance_init = nvme_instance_init,
|
2019-06-26 09:51:06 +03:00
|
|
|
.class_init = nvme_class_init,
|
2017-09-27 22:56:33 +03:00
|
|
|
.interfaces = (InterfaceInfo[]) {
|
|
|
|
{ INTERFACE_PCIE_DEVICE },
|
|
|
|
{ }
|
|
|
|
},
|
2013-06-04 19:17:10 +04:00
|
|
|
};
|
|
|
|
|
2019-06-26 09:51:06 +03:00
|
|
|
static const TypeInfo nvme_bus_info = {
|
|
|
|
.name = TYPE_NVME_BUS,
|
|
|
|
.parent = TYPE_BUS,
|
|
|
|
.instance_size = sizeof(NvmeBus),
|
|
|
|
};
|
|
|
|
|
2013-06-04 19:17:10 +04:00
|
|
|
static void nvme_register_types(void)
|
|
|
|
{
|
|
|
|
type_register_static(&nvme_info);
|
2019-06-26 09:51:06 +03:00
|
|
|
type_register_static(&nvme_bus_info);
|
2013-06-04 19:17:10 +04:00
|
|
|
}
|
|
|
|
|
|
|
|
type_init(nvme_register_types)
|