550.107.02

Bernhard Stoeckner 2024-07-29 10:22:58 +02:00
parent caa2dd11a0
commit 2cca8b3fd5
No known key found for this signature in database
GPG Key ID: 7D23DC2750FAC2E1
66 changed files with 1016 additions and 447 deletions


@ -1,222 +0,0 @@
# Changelog
## Release 550 Entries
### [550.100] 2024-07-09
### [550.90.07] 2024-06-04
### [550.78] 2024-04-25
### [550.76] 2024-04-17
### [550.67] 2024-03-19
### [550.54.15] 2024-03-18
### [550.54.14] 2024-02-23
#### Added
- Added vGPU Host and vGPU Guest support. For vGPU Host, please refer to the README.vgpu packaged in the vGPU Host Package for more details.
### [550.40.07] 2024-01-24
#### Fixed
- Set INSTALL_MOD_DIR only if it's not defined, [#570](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/570) by @keelung-yang
## Release 545 Entries
### [545.29.06] 2023-11-22
#### Fixed
- The brightness control of NVIDIA seems to be broken, [#573](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/573)
### [545.29.02] 2023-10-31
### [545.23.06] 2023-10-17
#### Fixed
- Fix always-false conditional, [#493](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/493) by @meme8383
#### Added
- Added beta-quality support for GeForce and Workstation GPUs. Please see the "Open Linux Kernel Modules" chapter in the NVIDIA GPU driver end user README for details.
## Release 535 Entries
### [535.129.03] 2023-10-31
### [535.113.01] 2023-09-21
#### Fixed
- Fixed building main against current centos stream 8 fails, [#550](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/550) by @airlied
### [535.104.05] 2023-08-22
### [535.98] 2023-08-08
### [535.86.10] 2023-07-31
### [535.86.05] 2023-07-18
### [535.54.03] 2023-06-14
### [535.43.02] 2023-05-30
#### Fixed
- Fixed console restore with traditional VGA consoles.
#### Added
- Added support for Run Time D3 (RTD3) on Ampere and later GPUs.
- Added support for G-Sync on desktop GPUs.
## Release 530 Entries
### [530.41.03] 2023-03-23
### [530.30.02] 2023-02-28
#### Changed
- GSP firmware is now distributed as `gsp_tu10x.bin` and `gsp_ga10x.bin` to better reflect the GPU architectures supported by each firmware file in this release.
- The .run installer will continue to install firmware to /lib/firmware/nvidia/<version> and the nvidia.ko kernel module will load the appropriate firmware for each GPU at runtime.
#### Fixed
- Add support for resizable BAR on Linux when NVreg_EnableResizableBar=1 module param is set. [#3](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/3) by @sjkelly
#### Added
- Support for power management features like Suspend, Hibernate and Resume.
## Release 525 Entries
### [525.147.05] 2023-10-31
#### Fixed
- Fix nvidia_p2p_get_pages(): Fix double-free in register-callback error path, [#557](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/557) by @BrendanCunningham
### [525.125.06] 2023-06-26
### [525.116.04] 2023-05-09
### [525.116.03] 2023-04-25
### [525.105.17] 2023-03-30
### [525.89.02] 2023-02-08
### [525.85.12] 2023-01-30
### [525.85.05] 2023-01-19
#### Fixed
- Fix build problems with Clang 15.0, [#377](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/377) by @ptr1337
### [525.78.01] 2023-01-05
### [525.60.13] 2022-12-05
### [525.60.11] 2022-11-28
#### Fixed
- Fixed nvenc compatibility with usermode clients [#104](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/104)
### [525.53] 2022-11-10
#### Changed
- GSP firmware is now distributed as multiple firmware files: this release has `gsp_tu10x.bin` and `gsp_ad10x.bin` replacing `gsp.bin` from previous releases.
- Each file is named after a GPU architecture and supports GPUs from one or more architectures. This allows GSP firmware to better leverage each architecture's capabilities.
- The .run installer will continue to install firmware to `/lib/firmware/nvidia/<version>` and the `nvidia.ko` kernel module will load the appropriate firmware for each GPU at runtime.
#### Fixed
- Add support for IBT (indirect branch tracking) on supported platforms, [#256](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/256) by @rnd-ash
- Return EINVAL when [failing to] allocating memory, [#280](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/280) by @YusufKhan-gamedev
- Fix various typos in nvidia/src/kernel, [#16](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/16) by @alexisgeoffrey
- Added support for rotation in X11, Quadro Sync, Stereo, and YUV 4:2:0 on Turing.
## Release 520 Entries
### [520.61.07] 2022-10-20
### [520.56.06] 2022-10-12
#### Added
- Introduce support for GeForce RTX 4090 GPUs.
### [520.61.05] 2022-10-10
#### Added
- Introduce support for NVIDIA H100 GPUs.
#### Fixed
- Fix/Improve Makefile, [#308](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/308/) by @izenynn
- Make nvLogBase2 more efficient, [#177](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/177/) by @DMaroo
- nv-pci: fixed always true expression, [#195](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/195/) by @ValZapod
## Release 515 Entries
### [515.76] 2022-09-20
#### Fixed
- Improved compatibility with new Linux kernel releases
- Fixed possible excessive GPU power draw on an idle X11 or Wayland desktop when driving high resolutions or refresh rates
### [515.65.07] 2022-10-19
### [515.65.01] 2022-08-02
#### Fixed
- Collection of minor fixes to issues, [#6](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/61) by @Joshua-Ashton
- Remove unnecessary use of acpi_bus_get_device().
### [515.57] 2022-06-28
#### Fixed
- Backtick is deprecated, [#273](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/273) by @arch-user-france1
### [515.48.07] 2022-05-31
#### Added
- List of compatible GPUs in README.md.
#### Fixed
- Fix various README capitalizations, [#8](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/8) by @27lx
- Automatically tag bug report issues, [#15](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/15) by @thebeanogamer
- Improve conftest.sh Script, [#37](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/37) by @Nitepone
- Update HTTP link to HTTPS, [#101](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/101) by @alcaparra
- moved array sanity check to before the array access, [#117](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/117) by @RealAstolfo
- Fixed some typos, [#122](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/122) by @FEDOyt
- Fixed capitalization, [#123](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/123) by @keroeslux
- Fix typos in NVDEC Engine Descriptor, [#126](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/126) from @TrickyDmitriy
- Extranous apostrohpes in a makefile script [sic], [#14](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/14) by @kiroma
- HDMI no audio @ 4K above 60Hz, [#75](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/75) by @adolfotregosa
- dp_configcaps.cpp:405: array index sanity check in wrong place?, [#110](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/110) by @dcb314
- NVRM kgspInitRm_IMPL: missing NVDEC0 engine, cannot initialize GSP-RM, [#116](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/116) by @kfazz
- ERROR: modpost: "backlight_device_register" [...nvidia-modeset.ko] undefined, [#135](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/135) by @sndirsch
- aarch64 build fails, [#151](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/151) by @frezbo
### [515.43.04] 2022-05-11
- Initial release.


@ -1,7 +1,7 @@
# NVIDIA Linux Open GPU Kernel Module Source # NVIDIA Linux Open GPU Kernel Module Source
This is the source release of the NVIDIA Linux open GPU kernel modules, This is the source release of the NVIDIA Linux open GPU kernel modules,
version 550.100. version 550.107.02.
## How to Build ## How to Build
@ -17,7 +17,7 @@ as root:
Note that the kernel modules built here must be used with GSP Note that the kernel modules built here must be used with GSP
firmware and user-space NVIDIA GPU driver components from a corresponding firmware and user-space NVIDIA GPU driver components from a corresponding
550.100 driver release. This can be achieved by installing 550.107.02 driver release. This can be achieved by installing
the NVIDIA GPU driver from the .run file using the `--no-kernel-modules` the NVIDIA GPU driver from the .run file using the `--no-kernel-modules`
option. E.g., option. E.g.,
@ -188,7 +188,7 @@ encountered specific to them.
For details on feature support and limitations, see the NVIDIA GPU driver For details on feature support and limitations, see the NVIDIA GPU driver
end user README here: end user README here:
https://us.download.nvidia.com/XFree86/Linux-x86_64/550.100/README/kernel_open.html https://us.download.nvidia.com/XFree86/Linux-x86_64/550.107.02/README/kernel_open.html
For vGPU support, please refer to the README.vgpu packaged in the vGPU Host For vGPU support, please refer to the README.vgpu packaged in the vGPU Host
Package for more details. Package for more details.
@ -834,10 +834,12 @@ Subsystem Device ID.
| NVIDIA GeForce RTX 2050 | 25AD | | NVIDIA GeForce RTX 2050 | 25AD |
| NVIDIA RTX A1000 | 25B0 1028 1878 | | NVIDIA RTX A1000 | 25B0 1028 1878 |
| NVIDIA RTX A1000 | 25B0 103C 1878 | | NVIDIA RTX A1000 | 25B0 103C 1878 |
| NVIDIA RTX A1000 | 25B0 103C 8D96 |
| NVIDIA RTX A1000 | 25B0 10DE 1878 | | NVIDIA RTX A1000 | 25B0 10DE 1878 |
| NVIDIA RTX A1000 | 25B0 17AA 1878 | | NVIDIA RTX A1000 | 25B0 17AA 1878 |
| NVIDIA RTX A400 | 25B2 1028 1879 | | NVIDIA RTX A400 | 25B2 1028 1879 |
| NVIDIA RTX A400 | 25B2 103C 1879 | | NVIDIA RTX A400 | 25B2 103C 1879 |
| NVIDIA RTX A400 | 25B2 103C 8D95 |
| NVIDIA RTX A400 | 25B2 10DE 1879 | | NVIDIA RTX A400 | 25B2 10DE 1879 |
| NVIDIA RTX A400 | 25B2 17AA 1879 | | NVIDIA RTX A400 | 25B2 17AA 1879 |
| NVIDIA A16 | 25B6 10DE 14A9 | | NVIDIA A16 | 25B6 10DE 14A9 |
@ -912,6 +914,7 @@ Subsystem Device ID.
| NVIDIA GeForce RTX 4060 Ti | 2805 | | NVIDIA GeForce RTX 4060 Ti | 2805 |
| NVIDIA GeForce RTX 4060 | 2808 | | NVIDIA GeForce RTX 4060 | 2808 |
| NVIDIA GeForce RTX 4070 Laptop GPU | 2820 | | NVIDIA GeForce RTX 4070 Laptop GPU | 2820 |
| NVIDIA GeForce RTX 3050 A Laptop GPU | 2822 |
| NVIDIA RTX 3000 Ada Generation Laptop GPU | 2838 | | NVIDIA RTX 3000 Ada Generation Laptop GPU | 2838 |
| NVIDIA GeForce RTX 4070 Laptop GPU | 2860 | | NVIDIA GeForce RTX 4070 Laptop GPU | 2860 |
| NVIDIA GeForce RTX 4060 | 2882 | | NVIDIA GeForce RTX 4060 | 2882 |


@ -72,7 +72,7 @@ EXTRA_CFLAGS += -I$(src)/common/inc
EXTRA_CFLAGS += -I$(src) EXTRA_CFLAGS += -I$(src)
EXTRA_CFLAGS += -Wall $(DEFINES) $(INCLUDES) -Wno-cast-qual -Wno-format-extra-args EXTRA_CFLAGS += -Wall $(DEFINES) $(INCLUDES) -Wno-cast-qual -Wno-format-extra-args
EXTRA_CFLAGS += -D__KERNEL__ -DMODULE -DNVRM EXTRA_CFLAGS += -D__KERNEL__ -DMODULE -DNVRM
EXTRA_CFLAGS += -DNV_VERSION_STRING=\"550.100\" EXTRA_CFLAGS += -DNV_VERSION_STRING=\"550.107.02\"
ifneq ($(SYSSRCHOST1X),) ifneq ($(SYSSRCHOST1X),)
EXTRA_CFLAGS += -I$(SYSSRCHOST1X) EXTRA_CFLAGS += -I$(SYSSRCHOST1X)


@ -1045,7 +1045,7 @@ NV_STATUS NV_API_CALL nv_vgpu_get_bar_info(nvidia_stack_t *, nv_state_t *, con
NvU64 *, NvU64 *, NvU32 *, NvBool *, NvU8 *); NvU64 *, NvU64 *, NvU32 *, NvBool *, NvU8 *);
NV_STATUS NV_API_CALL nv_vgpu_get_hbm_info(nvidia_stack_t *, nv_state_t *, const NvU8 *, NvU64 *, NvU64 *); NV_STATUS NV_API_CALL nv_vgpu_get_hbm_info(nvidia_stack_t *, nv_state_t *, const NvU8 *, NvU64 *, NvU64 *);
NV_STATUS NV_API_CALL nv_vgpu_process_vf_info(nvidia_stack_t *, nv_state_t *, NvU8, NvU32, NvU8, NvU8, NvU8, NvBool, void *); NV_STATUS NV_API_CALL nv_vgpu_process_vf_info(nvidia_stack_t *, nv_state_t *, NvU8, NvU32, NvU8, NvU8, NvU8, NvBool, void *);
NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *); NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *, NvU32, NvBool *);
NV_STATUS NV_API_CALL nv_gpu_unbind_event(nvidia_stack_t *, NvU32, NvBool *); NV_STATUS NV_API_CALL nv_gpu_unbind_event(nvidia_stack_t *, NvU32, NvBool *);
NV_STATUS NV_API_CALL nv_get_usermap_access_params(nv_state_t*, nv_usermap_access_params_t*); NV_STATUS NV_API_CALL nv_get_usermap_access_params(nv_state_t*, nv_usermap_access_params_t*);


@ -218,6 +218,8 @@ extern NvU32 os_page_size;
extern NvU64 os_page_mask; extern NvU64 os_page_mask;
extern NvU8 os_page_shift; extern NvU8 os_page_shift;
extern NvBool os_cc_enabled; extern NvBool os_cc_enabled;
extern NvBool os_cc_sev_snp_enabled;
extern NvBool os_cc_snp_vtom_enabled;
extern NvBool os_cc_tdx_enabled; extern NvBool os_cc_tdx_enabled;
extern NvBool os_dma_buf_enabled; extern NvBool os_dma_buf_enabled;
extern NvBool os_imex_channel_is_supported; extern NvBool os_imex_channel_is_supported;


@ -5102,6 +5102,42 @@ compile_test() {
compile_check_conftest "$CODE" "NV_CC_PLATFORM_PRESENT" "" "functions" compile_check_conftest "$CODE" "NV_CC_PLATFORM_PRESENT" "" "functions"
;; ;;
cc_attr_guest_sev_snp)
#
# Determine if 'CC_ATTR_GUEST_SEV_SNP' is present.
#
# Added by commit aa5a461171f9 ("x86/mm: Extend cc_attr to
# include AMD SEV-SNP") in v5.19.
#
CODE="
#if defined(NV_LINUX_CC_PLATFORM_H_PRESENT)
#include <linux/cc_platform.h>
#endif
enum cc_attr cc_attributes = CC_ATTR_GUEST_SEV_SNP;
"
compile_check_conftest "$CODE" "NV_CC_ATTR_SEV_SNP" "" "types"
;;
hv_get_isolation_type)
#
# Determine if 'hv_get_isolation_type()' is present.
# Added by commit faff44069ff5 ("x86/hyperv: Add Write/Read MSR
# registers via ghcb page") in v5.16.
#
CODE="
#if defined(NV_ASM_MSHYPERV_H_PRESENT)
#include <asm/mshyperv.h>
#endif
void conftest_hv_get_isolation_type(void) {
int i;
hv_get_isolation_type(i);
}"
compile_check_conftest "$CODE" "NV_HV_GET_ISOLATION_TYPE" "" "functions"
;;
drm_prime_pages_to_sg_has_drm_device_arg) drm_prime_pages_to_sg_has_drm_device_arg)
# #
# Determine if drm_prime_pages_to_sg() has 'dev' argument. # Determine if drm_prime_pages_to_sg() has 'dev' argument.


@ -97,5 +97,6 @@ NV_HEADER_PRESENCE_TESTS = \
linux/sync_file.h \ linux/sync_file.h \
linux/cc_platform.h \ linux/cc_platform.h \
asm/cpufeature.h \ asm/cpufeature.h \
linux/mpi.h linux/mpi.h \
asm/mshyperv.h


@ -1689,7 +1689,7 @@ int nv_drm_get_crtc_crc32_v2_ioctl(struct drm_device *dev,
struct NvKmsKapiCrcs crc32; struct NvKmsKapiCrcs crc32;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) { if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -ENOENT; return -EOPNOTSUPP;
} }
crtc = nv_drm_crtc_find(dev, filep, params->crtc_id); crtc = nv_drm_crtc_find(dev, filep, params->crtc_id);
@ -1717,7 +1717,7 @@ int nv_drm_get_crtc_crc32_ioctl(struct drm_device *dev,
struct NvKmsKapiCrcs crc32; struct NvKmsKapiCrcs crc32;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) { if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -ENOENT; return -EOPNOTSUPP;
} }
crtc = nv_drm_crtc_find(dev, filep, params->crtc_id); crtc = nv_drm_crtc_find(dev, filep, params->crtc_id);


@ -480,6 +480,22 @@ static int nv_drm_load(struct drm_device *dev, unsigned long flags)
return -ENODEV; return -ENODEV;
} }
#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
/*
* If fbdev is enabled, take modeset ownership now before other DRM clients
* can take master (and thus NVKMS ownership).
*/
if (nv_drm_fbdev_module_param) {
if (!nvKms->grabOwnership(pDevice)) {
nvKms->freeDevice(pDevice);
NV_DRM_DEV_LOG_ERR(nv_dev, "Failed to grab NVKMS modeset ownership");
return -EBUSY;
}
nv_dev->hasFramebufferConsole = NV_TRUE;
}
#endif
mutex_lock(&nv_dev->lock); mutex_lock(&nv_dev->lock);
/* Set NvKmsKapiDevice */ /* Set NvKmsKapiDevice */
@ -590,6 +606,15 @@ static void __nv_drm_unload(struct drm_device *dev)
return; return;
} }
/* Release modeset ownership if fbdev is enabled */
#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
if (nv_dev->hasFramebufferConsole) {
drm_atomic_helper_shutdown(dev);
nvKms->releaseOwnership(nv_dev->pDevice);
}
#endif
cancel_delayed_work_sync(&nv_dev->hotplug_event_work); cancel_delayed_work_sync(&nv_dev->hotplug_event_work);
mutex_lock(&nv_dev->lock); mutex_lock(&nv_dev->lock);
@ -834,13 +859,18 @@ static int nv_drm_get_dpy_id_for_connector_id_ioctl(struct drm_device *dev,
struct drm_file *filep) struct drm_file *filep)
{ {
struct drm_nvidia_get_dpy_id_for_connector_id_params *params = data; struct drm_nvidia_get_dpy_id_for_connector_id_params *params = data;
struct drm_connector *connector;
struct nv_drm_connector *nv_connector;
int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -EOPNOTSUPP;
}
// Importantly, drm_connector_lookup (with filep) will only return the // Importantly, drm_connector_lookup (with filep) will only return the
// connector if we are master, a lessee with the connector, or not master at // connector if we are master, a lessee with the connector, or not master at
// all. It will return NULL if we are a lessee with other connectors. // all. It will return NULL if we are a lessee with other connectors.
struct drm_connector *connector = connector = nv_drm_connector_lookup(dev, filep, params->connectorId);
nv_drm_connector_lookup(dev, filep, params->connectorId);
struct nv_drm_connector *nv_connector;
int ret = 0;
if (!connector) { if (!connector) {
return -EINVAL; return -EINVAL;
@ -873,6 +903,11 @@ static int nv_drm_get_connector_id_for_dpy_id_ioctl(struct drm_device *dev,
int ret = -EINVAL; int ret = -EINVAL;
#if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT) #if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
struct drm_connector_list_iter conn_iter; struct drm_connector_list_iter conn_iter;
#endif
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -EOPNOTSUPP;
}
#if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
nv_drm_connector_list_iter_begin(dev, &conn_iter); nv_drm_connector_list_iter_begin(dev, &conn_iter);
#endif #endif
@ -1085,6 +1120,10 @@ static int nv_drm_grant_permission_ioctl(struct drm_device *dev, void *data,
{ {
struct drm_nvidia_grant_permissions_params *params = data; struct drm_nvidia_grant_permissions_params *params = data;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -EOPNOTSUPP;
}
if (params->type == NV_DRM_PERMISSIONS_TYPE_MODESET) { if (params->type == NV_DRM_PERMISSIONS_TYPE_MODESET) {
return nv_drm_grant_modeset_permission(dev, params, filep); return nv_drm_grant_modeset_permission(dev, params, filep);
} else if (params->type == NV_DRM_PERMISSIONS_TYPE_SUB_OWNER) { } else if (params->type == NV_DRM_PERMISSIONS_TYPE_SUB_OWNER) {
@ -1250,6 +1289,10 @@ static int nv_drm_revoke_permission_ioctl(struct drm_device *dev, void *data,
{ {
struct drm_nvidia_revoke_permissions_params *params = data; struct drm_nvidia_revoke_permissions_params *params = data;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -EOPNOTSUPP;
}
if (params->type == NV_DRM_PERMISSIONS_TYPE_MODESET) { if (params->type == NV_DRM_PERMISSIONS_TYPE_MODESET) {
if (!params->dpyId) { if (!params->dpyId) {
return -EINVAL; return -EINVAL;
@ -1771,11 +1814,6 @@ void nv_drm_register_drm_device(const nv_gpu_info_t *gpu_info)
if (nv_drm_fbdev_module_param && if (nv_drm_fbdev_module_param &&
drm_core_check_feature(dev, DRIVER_MODESET)) { drm_core_check_feature(dev, DRIVER_MODESET)) {
if (!nvKms->grabOwnership(nv_dev->pDevice)) {
NV_DRM_DEV_LOG_ERR(nv_dev, "Failed to grab NVKMS modeset ownership");
goto failed_grab_ownership;
}
if (bus_is_pci) { if (bus_is_pci) {
struct pci_dev *pdev = to_pci_dev(device); struct pci_dev *pdev = to_pci_dev(device);
@ -1786,8 +1824,6 @@ void nv_drm_register_drm_device(const nv_gpu_info_t *gpu_info)
#endif #endif
} }
drm_fbdev_generic_setup(dev, 32); drm_fbdev_generic_setup(dev, 32);
nv_dev->hasFramebufferConsole = NV_TRUE;
} }
#endif /* defined(NV_DRM_FBDEV_GENERIC_AVAILABLE) */ #endif /* defined(NV_DRM_FBDEV_GENERIC_AVAILABLE) */
@ -1798,12 +1834,6 @@ void nv_drm_register_drm_device(const nv_gpu_info_t *gpu_info)
return; /* Success */ return; /* Success */
#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
failed_grab_ownership:
drm_dev_unregister(dev);
#endif
failed_drm_register: failed_drm_register:
nv_drm_dev_free(dev); nv_drm_dev_free(dev);
@ -1870,12 +1900,6 @@ void nv_drm_remove_devices(void)
struct nv_drm_device *next = dev_list->next; struct nv_drm_device *next = dev_list->next;
struct drm_device *dev = dev_list->dev; struct drm_device *dev = dev_list->dev;
#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
if (dev_list->hasFramebufferConsole) {
drm_atomic_helper_shutdown(dev);
nvKms->releaseOwnership(dev_list->pDevice);
}
#endif
drm_dev_unregister(dev); drm_dev_unregister(dev);
nv_drm_dev_free(dev); nv_drm_dev_free(dev);


@ -465,10 +465,15 @@ int nv_drm_prime_fence_context_create_ioctl(struct drm_device *dev,
{ {
struct nv_drm_device *nv_dev = to_nv_device(dev); struct nv_drm_device *nv_dev = to_nv_device(dev);
struct drm_nvidia_prime_fence_context_create_params *p = data; struct drm_nvidia_prime_fence_context_create_params *p = data;
struct nv_drm_prime_fence_context *nv_prime_fence_context = struct nv_drm_prime_fence_context *nv_prime_fence_context;
__nv_drm_prime_fence_context_new(nv_dev, p);
int err; int err;
if (nv_dev->pDevice == NULL) {
return -EOPNOTSUPP;
}
nv_prime_fence_context = __nv_drm_prime_fence_context_new(nv_dev, p);
if (!nv_prime_fence_context) { if (!nv_prime_fence_context) {
goto done; goto done;
} }
@ -523,6 +528,11 @@ int nv_drm_gem_prime_fence_attach_ioctl(struct drm_device *dev,
struct nv_drm_fence_context *nv_fence_context; struct nv_drm_fence_context *nv_fence_context;
nv_dma_fence_t *fence; nv_dma_fence_t *fence;
if (nv_dev->pDevice == NULL) {
ret = -EOPNOTSUPP;
goto done;
}
if (p->__pad != 0) { if (p->__pad != 0) {
NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed"); NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
goto done; goto done;
@ -1312,6 +1322,10 @@ int nv_drm_semsurf_fence_ctx_create_ioctl(struct drm_device *dev,
struct nv_drm_semsurf_fence_ctx *ctx; struct nv_drm_semsurf_fence_ctx *ctx;
int err; int err;
if (nv_dev->pDevice == NULL) {
return -EOPNOTSUPP;
}
if (p->__pad != 0) { if (p->__pad != 0) {
NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed"); NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
return -EINVAL; return -EINVAL;
@ -1473,6 +1487,11 @@ int nv_drm_semsurf_fence_create_ioctl(struct drm_device *dev,
int ret = -EINVAL; int ret = -EINVAL;
int fd; int fd;
if (nv_dev->pDevice == NULL) {
ret = -EOPNOTSUPP;
goto done;
}
if (p->__pad != 0) { if (p->__pad != 0) {
NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed"); NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
goto done; goto done;
@ -1635,6 +1654,10 @@ int nv_drm_semsurf_fence_wait_ioctl(struct drm_device *dev,
unsigned long flags; unsigned long flags;
int ret = -EINVAL; int ret = -EINVAL;
if (nv_dev->pDevice == NULL) {
return -EOPNOTSUPP;
}
if (p->pre_wait_value >= p->post_wait_value) { if (p->pre_wait_value >= p->post_wait_value) {
NV_DRM_DEV_LOG_ERR( NV_DRM_DEV_LOG_ERR(
nv_dev, nv_dev,
@ -1743,6 +1766,11 @@ int nv_drm_semsurf_fence_attach_ioctl(struct drm_device *dev,
nv_dma_fence_t *fence; nv_dma_fence_t *fence;
int ret = -EINVAL; int ret = -EINVAL;
if (nv_dev->pDevice == NULL) {
ret = -EOPNOTSUPP;
goto done;
}
nv_gem = nv_drm_gem_object_lookup(nv_dev->dev, filep, p->handle); nv_gem = nv_drm_gem_object_lookup(nv_dev->dev, filep, p->handle);
if (!nv_gem) { if (!nv_gem) {


@ -380,7 +380,7 @@ int nv_drm_gem_import_nvkms_memory_ioctl(struct drm_device *dev,
int ret; int ret;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) { if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL; ret = -EOPNOTSUPP;
goto failed; goto failed;
} }
@ -430,7 +430,7 @@ int nv_drm_gem_export_nvkms_memory_ioctl(struct drm_device *dev,
int ret = 0; int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) { if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL; ret = -EOPNOTSUPP;
goto done; goto done;
} }
@ -483,7 +483,7 @@ int nv_drm_gem_alloc_nvkms_memory_ioctl(struct drm_device *dev,
int ret = 0; int ret = 0;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) { if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
ret = -EINVAL; ret = -EOPNOTSUPP;
goto failed; goto failed;
} }


@ -319,7 +319,7 @@ int nv_drm_gem_identify_object_ioctl(struct drm_device *dev,
struct nv_drm_gem_object *nv_gem = NULL; struct nv_drm_gem_object *nv_gem = NULL;
if (!drm_core_check_feature(dev, DRIVER_MODESET)) { if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
return -EINVAL; return -EOPNOTSUPP;
} }
nv_dma_buf = nv_drm_gem_object_dma_buf_lookup(dev, filep, p->handle); nv_dma_buf = nv_drm_gem_object_dma_buf_lookup(dev, filep, p->handle);


@ -1,5 +1,5 @@
/******************************************************************************* /*******************************************************************************
Copyright (c) 2013-2023 NVIDIA Corporation Copyright (c) 2013-2024 NVIDIA Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to of this software and associated documentation files (the "Software"), to
@ -423,7 +423,9 @@ static void uvm_get_unaddressable_range(NvU32 num_va_bits, NvU64 *first, NvU64 *
UVM_ASSERT(first); UVM_ASSERT(first);
UVM_ASSERT(outer); UVM_ASSERT(outer);
if (uvm_platform_uses_canonical_form_address()) { // Maxwell GPUs (num_va_bits == 40b) do not support canonical form address
// even when plugged into platforms using it.
if (uvm_platform_uses_canonical_form_address() && num_va_bits > 40) {
*first = 1ULL << (num_va_bits - 1); *first = 1ULL << (num_va_bits - 1);
*outer = (NvU64)((NvS64)(1ULL << 63) >> (64 - num_va_bits)); *outer = (NvU64)((NvS64)(1ULL << 63) >> (64 - num_va_bits));
} }
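
The hunk above narrows the canonical-form handling to GPUs with more than 40 VA bits, since 40-bit Maxwell GPUs do not use canonical addressing. As a rough standalone sketch of the address hole that branch computes, using plain stdint types in place of NvU64/NvS64 and an assumed 49-bit GPU:

/* Standalone illustration (not driver code) of the canonical-form hole. */
#include <stdint.h>
#include <stdio.h>

static void unaddressable_range(uint32_t num_va_bits, uint64_t *first, uint64_t *outer)
{
    /* Mirrors the body of the patched branch, taken only when num_va_bits > 40. */
    *first = 1ULL << (num_va_bits - 1);
    *outer = (uint64_t)((int64_t)(1ULL << 63) >> (64 - num_va_bits));
}

int main(void)
{
    uint64_t first, outer;

    /* A 49-bit GPU on a canonical-form platform: addresses with bit 48 set
     * but not sign-extended fall inside the unaddressable hole. */
    unaddressable_range(49, &first, &outer);
    printf("49-bit VA hole: [0x%016llx, 0x%016llx)\n",
           (unsigned long long)first, (unsigned long long)outer);
    /* Prints: 49-bit VA hole: [0x0001000000000000, 0xffff000000000000) */

    /* With this change, a 40-bit Maxwell GPU never reaches the branch,
     * so no hole is carved out of its VA space. */
    return 0;
}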


@ -96,6 +96,10 @@
#include <linux/cc_platform.h> #include <linux/cc_platform.h>
#endif #endif
#if defined(NV_ASM_MSHYPERV_H_PRESENT) && defined(NVCPU_X86_64)
#include <asm/mshyperv.h>
#endif
#if defined(NV_ASM_CPUFEATURE_H_PRESENT) #if defined(NV_ASM_CPUFEATURE_H_PRESENT)
#include <asm/cpufeature.h> #include <asm/cpufeature.h>
#endif #endif
@ -285,6 +289,17 @@ void nv_detect_conf_compute_platform(
#if defined(NV_CC_PLATFORM_PRESENT) #if defined(NV_CC_PLATFORM_PRESENT)
os_cc_enabled = cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT); os_cc_enabled = cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);
#if defined(NV_CC_ATTR_SEV_SNP)
os_cc_sev_snp_enabled = cc_platform_has(CC_ATTR_GUEST_SEV_SNP);
#endif
#if defined(NV_HV_GET_ISOLATION_TYPE) && IS_ENABLED(CONFIG_HYPERV) && defined(NVCPU_X86_64)
if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP)
{
os_cc_snp_vtom_enabled = NV_TRUE;
}
#endif
#if defined(X86_FEATURE_TDX_GUEST) #if defined(X86_FEATURE_TDX_GUEST)
if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST)) if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
{ {
@ -293,8 +308,10 @@ void nv_detect_conf_compute_platform(
#endif #endif
#else #else
os_cc_enabled = NV_FALSE; os_cc_enabled = NV_FALSE;
os_cc_sev_snp_enabled = NV_FALSE;
os_cc_snp_vtom_enabled = NV_FALSE;
os_cc_tdx_enabled = NV_FALSE; os_cc_tdx_enabled = NV_FALSE;
#endif #endif //NV_CC_PLATFORM_PRESENT
} }
static static


@ -160,6 +160,8 @@ NV_CONFTEST_FUNCTION_COMPILE_TESTS += full_name_hash
NV_CONFTEST_FUNCTION_COMPILE_TESTS += pci_enable_atomic_ops_to_root NV_CONFTEST_FUNCTION_COMPILE_TESTS += pci_enable_atomic_ops_to_root
NV_CONFTEST_FUNCTION_COMPILE_TESTS += vga_tryget NV_CONFTEST_FUNCTION_COMPILE_TESTS += vga_tryget
NV_CONFTEST_FUNCTION_COMPILE_TESTS += cc_platform_has NV_CONFTEST_FUNCTION_COMPILE_TESTS += cc_platform_has
NV_CONFTEST_FUNCTION_COMPILE_TESTS += cc_attr_guest_sev_snp
NV_CONFTEST_FUNCTION_COMPILE_TESTS += hv_get_isolation_type
NV_CONFTEST_FUNCTION_COMPILE_TESTS += seq_read_iter NV_CONFTEST_FUNCTION_COMPILE_TESTS += seq_read_iter
NV_CONFTEST_FUNCTION_COMPILE_TESTS += follow_pfn NV_CONFTEST_FUNCTION_COMPILE_TESTS += follow_pfn
NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_gem_object_get NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_gem_object_get


@ -52,6 +52,8 @@ NvU32 os_page_size = PAGE_SIZE;
NvU64 os_page_mask = NV_PAGE_MASK; NvU64 os_page_mask = NV_PAGE_MASK;
NvU8 os_page_shift = PAGE_SHIFT; NvU8 os_page_shift = PAGE_SHIFT;
NvBool os_cc_enabled = 0; NvBool os_cc_enabled = 0;
NvBool os_cc_sev_snp_enabled = 0;
NvBool os_cc_snp_vtom_enabled = 0;
NvBool os_cc_tdx_enabled = 0; NvBool os_cc_tdx_enabled = 0;
#if defined(CONFIG_DMA_SHARED_BUFFER) #if defined(CONFIG_DMA_SHARED_BUFFER)


@ -282,6 +282,7 @@ namespace DisplayPort
virtual void markDeviceForDeletion() = 0; virtual void markDeviceForDeletion() = 0;
virtual bool getRawDscCaps(NvU8 *buffer, NvU32 bufferSize) = 0; virtual bool getRawDscCaps(NvU8 *buffer, NvU32 bufferSize) = 0;
virtual bool setRawDscCaps(NvU8 *buffer, NvU32 bufferSize) = 0;
// This interface is still nascent. Please don't use it. Read size limit is 16 bytes. // This interface is still nascent. Please don't use it. Read size limit is 16 bytes.
virtual AuxBus::status getDpcdData(unsigned offset, NvU8 * buffer, virtual AuxBus::status getDpcdData(unsigned offset, NvU8 * buffer,


@ -44,6 +44,7 @@ namespace DisplayPort
#define HDCP_BCAPS_DDC_EN_BIT 0x80 #define HDCP_BCAPS_DDC_EN_BIT 0x80
#define HDCP_BCAPS_DP_EN_BIT 0x01 #define HDCP_BCAPS_DP_EN_BIT 0x01
#define HDCP_I2C_CLIENT_ADDR 0x74 #define HDCP_I2C_CLIENT_ADDR 0x74
#define DSC_CAPS_SIZE 16
struct GroupImpl; struct GroupImpl;
struct ConnectorImpl; struct ConnectorImpl;
@ -421,6 +422,7 @@ namespace DisplayPort
virtual void markDeviceForDeletion() {bisMarkedForDeletion = true;}; virtual void markDeviceForDeletion() {bisMarkedForDeletion = true;};
virtual bool isMarkedForDeletion() {return bisMarkedForDeletion;}; virtual bool isMarkedForDeletion() {return bisMarkedForDeletion;};
virtual bool getRawDscCaps(NvU8 *buffer, NvU32 bufferSize); virtual bool getRawDscCaps(NvU8 *buffer, NvU32 bufferSize);
virtual bool setRawDscCaps(NvU8 *buffer, NvU32 bufferSize);
virtual AuxBus::status dscCrcControl(NvBool bEnable, gpuDscCrc *dataGpu, sinkDscCrc *dataSink); virtual AuxBus::status dscCrcControl(NvBool bEnable, gpuDscCrc *dataGpu, sinkDscCrc *dataSink);


@ -472,6 +472,15 @@ bool DeviceImpl::getRawDscCaps(NvU8 *buffer, NvU32 bufferSize)
return true; return true;
} }
bool DeviceImpl::setRawDscCaps(NvU8 *buffer, NvU32 bufferSize)
{
if (bufferSize < sizeof(rawDscCaps))
return false;
dpMemCopy(&rawDscCaps, buffer, sizeof(rawDscCaps));
return parseDscCaps(&rawDscCaps[0], sizeof(rawDscCaps));
}
AuxBus::status DeviceImpl::transaction(Action action, Type type, int address, AuxBus::status DeviceImpl::transaction(Action action, Type type, int address,
NvU8 * buffer, unsigned sizeRequested, NvU8 * buffer, unsigned sizeRequested,
unsigned * sizeCompleted, unsigned * sizeCompleted,


@ -36,25 +36,25 @@
// and then checked back in. You cannot make changes to these sections without // and then checked back in. You cannot make changes to these sections without
// corresponding changes to the buildmeister script // corresponding changes to the buildmeister script
#ifndef NV_BUILD_BRANCH #ifndef NV_BUILD_BRANCH
#define NV_BUILD_BRANCH r550_00 #define NV_BUILD_BRANCH r552_86
#endif #endif
#ifndef NV_PUBLIC_BRANCH #ifndef NV_PUBLIC_BRANCH
#define NV_PUBLIC_BRANCH r550_00 #define NV_PUBLIC_BRANCH r552_86
#endif #endif
#if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS) #if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS)
#define NV_BUILD_BRANCH_VERSION "rel/gpu_drv/r550/r550_00-326" #define NV_BUILD_BRANCH_VERSION "rel/gpu_drv/r550/r552_86-355"
#define NV_BUILD_CHANGELIST_NUM (34471492) #define NV_BUILD_CHANGELIST_NUM (34618165)
#define NV_BUILD_TYPE "Official" #define NV_BUILD_TYPE "Official"
#define NV_BUILD_NAME "rel/gpu_drv/r550/r550_00-326" #define NV_BUILD_NAME "rel/gpu_drv/r550/r552_86-355"
#define NV_LAST_OFFICIAL_CHANGELIST_NUM (34471492) #define NV_LAST_OFFICIAL_CHANGELIST_NUM (34618165)
#else /* Windows builds */ #else /* Windows builds */
#define NV_BUILD_BRANCH_VERSION "r550_00-324" #define NV_BUILD_BRANCH_VERSION "r552_86-1"
#define NV_BUILD_CHANGELIST_NUM (34468048) #define NV_BUILD_CHANGELIST_NUM (34615400)
#define NV_BUILD_TYPE "Nightly" #define NV_BUILD_TYPE "Official"
#define NV_BUILD_NAME "r550_00-240627" #define NV_BUILD_NAME "552.87"
#define NV_LAST_OFFICIAL_CHANGELIST_NUM (34454921) #define NV_LAST_OFFICIAL_CHANGELIST_NUM (34615400)
#define NV_BUILD_BRANCH_BASE_VERSION R550 #define NV_BUILD_BRANCH_BASE_VERSION R550
#endif #endif
// End buildmeister python edited section // End buildmeister python edited section


@ -4,7 +4,7 @@
#if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS) || defined(NV_VMWARE) || defined(NV_QNX) || defined(NV_INTEGRITY) || \ #if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS) || defined(NV_VMWARE) || defined(NV_QNX) || defined(NV_INTEGRITY) || \
(defined(RMCFG_FEATURE_PLATFORM_GSP) && RMCFG_FEATURE_PLATFORM_GSP == 1) (defined(RMCFG_FEATURE_PLATFORM_GSP) && RMCFG_FEATURE_PLATFORM_GSP == 1)
#define NV_VERSION_STRING "550.100" #define NV_VERSION_STRING "550.107.02"
#else #else


@ -57,7 +57,9 @@
#define NV_PFALCON_FALCON_DMATRFCMD 0x00000118 /* RW-4R */ #define NV_PFALCON_FALCON_DMATRFCMD 0x00000118 /* RW-4R */
#define NV_PFALCON_FALCON_DMATRFCMD_FULL 0:0 /* R-XVF */ #define NV_PFALCON_FALCON_DMATRFCMD_FULL 0:0 /* R-XVF */
#define NV_PFALCON_FALCON_DMATRFCMD_FULL_TRUE 0x00000001 /* R---V */ #define NV_PFALCON_FALCON_DMATRFCMD_FULL_TRUE 0x00000001 /* R---V */
#define NV_PFALCON_FALCON_DMATRFCMD_FULL_FALSE 0x00000000 /* R---V */
#define NV_PFALCON_FALCON_DMATRFCMD_IDLE 1:1 /* R-XVF */ #define NV_PFALCON_FALCON_DMATRFCMD_IDLE 1:1 /* R-XVF */
#define NV_PFALCON_FALCON_DMATRFCMD_IDLE_TRUE 0x00000001 /* R---V */
#define NV_PFALCON_FALCON_DMATRFCMD_IDLE_FALSE 0x00000000 /* R---V */ #define NV_PFALCON_FALCON_DMATRFCMD_IDLE_FALSE 0x00000000 /* R---V */
#define NV_PFALCON_FALCON_DMATRFCMD_SEC 3:2 /* RWXVF */ #define NV_PFALCON_FALCON_DMATRFCMD_SEC 3:2 /* RWXVF */
#define NV_PFALCON_FALCON_DMATRFCMD_IMEM 4:4 /* RWXVF */ #define NV_PFALCON_FALCON_DMATRFCMD_IMEM 4:4 /* RWXVF */


@ -1,5 +1,5 @@
/* /*
* SPDX-FileCopyrightText: Copyright (c) 2018-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved. * SPDX-FileCopyrightText: Copyright (c) 2018-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT * SPDX-License-Identifier: MIT
* *
* Permission is hereby granted, free of charge, to any person obtaining a * Permission is hereby granted, free of charge, to any person obtaining a
@ -49,6 +49,7 @@
#include "soe/soeifsmbpbi.h" #include "soe/soeifsmbpbi.h"
#include "soe/soeifcore.h" #include "soe/soeifcore.h"
#include "soe/soeifchnmgmt.h" #include "soe/soeifchnmgmt.h"
#include "soe/soeiftnvl.h"
#include "soe/soeifcci.h" #include "soe/soeifcci.h"
#include "soe/soeifheartbeat.h" #include "soe/soeifheartbeat.h"
@ -71,6 +72,7 @@ typedef struct
RM_SOE_BIF_CMD bif; RM_SOE_BIF_CMD bif;
RM_SOE_CORE_CMD core; RM_SOE_CORE_CMD core;
RM_SOE_CHNMGMT_CMD chnmgmt; RM_SOE_CHNMGMT_CMD chnmgmt;
RM_SOE_TNVL_CMD tnvl;
RM_SOE_CCI_CMD cci; RM_SOE_CCI_CMD cci;
} cmd; } cmd;
} RM_FLCN_CMD_SOE, } RM_FLCN_CMD_SOE,
@ -126,8 +128,9 @@ typedef struct
#define RM_SOE_TASK_ID_CCI 0x0D #define RM_SOE_TASK_ID_CCI 0x0D
#define RM_SOE_TASK_ID_FSPMGMT 0x0E #define RM_SOE_TASK_ID_FSPMGMT 0x0E
#define RM_SOE_TASK_ID_HEARTBEAT 0x0F #define RM_SOE_TASK_ID_HEARTBEAT 0x0F
#define RM_SOE_TASK_ID_TNVL 0x10
// Add new task ID here... // Add new task ID here...
#define RM_SOE_TASK_ID__END 0x10 #define RM_SOE_TASK_ID__END 0x11
/*! /*!
* Unit-identifiers: * Unit-identifiers:
@ -151,8 +154,9 @@ typedef struct
#define RM_SOE_UNIT_CHNMGMT (0x0D) #define RM_SOE_UNIT_CHNMGMT (0x0D)
#define RM_SOE_UNIT_CCI (0x0E) #define RM_SOE_UNIT_CCI (0x0E)
#define RM_SOE_UNIT_HEARTBEAT (0x0F) #define RM_SOE_UNIT_HEARTBEAT (0x0F)
#define RM_SOE_UNIT_TNVL (0x10)
// Add new unit ID here... // Add new unit ID here...
#define RM_SOE_UNIT_END (0x10) #define RM_SOE_UNIT_END (0x11)
#endif // _RMSOECMDIF_H_ #endif // _RMSOECMDIF_H_


@ -0,0 +1,76 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef _SOEIFTNVL_H_
#define _SOEIFTNVL_H_
/*!
* @file soeiftnvl.h
* @brief SOE TNVL Command Queue
*
* The TNVL unit ID will be used for sending and receiving
* Command Messages between the driver and the TNVL unit of SOE
*/
/*!
* Commands offered by the SOE Tnvl Interface.
*/
enum
{
/*
* Issue register write command
*/
RM_SOE_TNVL_CMD_ISSUE_REGISTER_WRITE = 0x0,
/*
* Issue pre-lock sequence
*/
RM_SOE_TNVL_CMD_ISSUE_PRE_LOCK_SEQUENCE = 0x1,
};
/*!
* TNVL queue command payload
*/
typedef struct
{
NvU8 cmdType;
NvU32 offset;
NvU32 data;
} RM_SOE_TNVL_CMD_REGISTER_WRITE;
typedef struct
{
NvU8 cmdType;
} RM_SOE_TNVL_CMD_PRE_LOCK_SEQUENCE;
typedef union
{
NvU8 cmdType;
RM_SOE_TNVL_CMD_REGISTER_WRITE registerWrite;
RM_SOE_TNVL_CMD_PRE_LOCK_SEQUENCE preLockSequence;
} RM_SOE_TNVL_CMD;
#endif // _SOEIFTNVL_H_
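
For illustration, a minimal standalone sketch of how a register-write payload from this header might be filled in. The NvU8/NvU32 stand-ins and the helper are local to the sketch; the actual command-queue submission performed by the driver (see the SOE changes later in this commit) is omitted.

/* Standalone sketch: building a TNVL register-write command payload.
 * In the driver the types come from nvtypes.h and the struct from
 * soeiftnvl.h above. */
#include <stdint.h>
#include <string.h>
#include <stdio.h>

typedef uint8_t  NvU8;
typedef uint32_t NvU32;

typedef struct
{
    NvU8  cmdType;
    NvU32 offset;
    NvU32 data;
} RM_SOE_TNVL_CMD_REGISTER_WRITE;

#define RM_SOE_TNVL_CMD_ISSUE_REGISTER_WRITE 0x0

static RM_SOE_TNVL_CMD_REGISTER_WRITE
build_tnvl_register_write(NvU32 offset, NvU32 data)
{
    RM_SOE_TNVL_CMD_REGISTER_WRITE cmd;

    memset(&cmd, 0, sizeof(cmd));
    cmd.cmdType = RM_SOE_TNVL_CMD_ISSUE_REGISTER_WRITE;
    cmd.offset  = offset;   /* register offset to be written by SOE */
    cmd.data    = data;     /* value to write */
    return cmd;
}

int main(void)
{
    /* Hypothetical offset and value, purely for illustration. */
    RM_SOE_TNVL_CMD_REGISTER_WRITE cmd = build_tnvl_register_write(0x1234, 0xCAFEF00D);

    printf("cmdType=0x%x offset=0x%x data=0x%x\n",
           cmd.cmdType, (unsigned)cmd.offset, (unsigned)cmd.data);
    return 0;
}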


@ -324,10 +324,18 @@ cciInit
CCI *pCci, CCI *pCci,
NvU32 pci_device_id NvU32 pci_device_id
) )
{
if (!nvswitch_is_tnvl_mode_enabled(device))
{ {
nvswitch_task_create(device, _nvswitch_cci_poll_callback, nvswitch_task_create(device, _nvswitch_cci_poll_callback,
NVSWITCH_INTERVAL_1SEC_IN_NS / NVSWITCH_CCI_POLLING_RATE_HZ, NVSWITCH_INTERVAL_1SEC_IN_NS / NVSWITCH_CCI_POLLING_RATE_HZ,
0); 0);
}
else
{
NVSWITCH_PRINT(device, INFO, "Skipping CCI background task when TNVL is enabled\n");
}
return NVL_SUCCESS; return NVL_SUCCESS;
} }


@ -295,6 +295,8 @@
_op(NvlStatus, nvswitch_tnvl_get_attestation_report, (nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params), _arch) \ _op(NvlStatus, nvswitch_tnvl_get_attestation_report, (nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params), _arch) \
_op(NvlStatus, nvswitch_tnvl_send_fsp_lock_config, (nvswitch_device *device), _arch) \ _op(NvlStatus, nvswitch_tnvl_send_fsp_lock_config, (nvswitch_device *device), _arch) \
_op(NvlStatus, nvswitch_tnvl_get_status, (nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params), _arch) \ _op(NvlStatus, nvswitch_tnvl_get_status, (nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params), _arch) \
_op(NvlStatus, nvswitch_send_tnvl_prelock_cmd, (nvswitch_device *device), _arch) \
_op(void, nvswitch_tnvl_disable_interrupts, (nvswitch_device *device), _arch) \
NVSWITCH_HAL_FUNCTION_LIST_FEATURE_0(_op, _arch) \ NVSWITCH_HAL_FUNCTION_LIST_FEATURE_0(_op, _arch) \
#define NVSWITCH_HAL_FUNCTION_LIST_LS10(_op, _arch) \ #define NVSWITCH_HAL_FUNCTION_LIST_LS10(_op, _arch) \


@ -710,4 +710,5 @@ NvlStatus nvswitch_fsp_error_code_to_nvlstatus_map_lr10(nvswitch_device *device,
NvlStatus nvswitch_tnvl_get_attestation_certificate_chain_lr10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_CERTIFICATE_CHAIN_PARAMS *params); NvlStatus nvswitch_tnvl_get_attestation_certificate_chain_lr10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_CERTIFICATE_CHAIN_PARAMS *params);
NvlStatus nvswitch_tnvl_get_attestation_report_lr10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params); NvlStatus nvswitch_tnvl_get_attestation_report_lr10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params);
NvlStatus nvswitch_tnvl_get_status_lr10(nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params); NvlStatus nvswitch_tnvl_get_status_lr10(nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params);
void nvswitch_tnvl_disable_interrupts_lr10(nvswitch_device *device);
#endif //_LR10_H_ #endif //_LR10_H_


@ -1,5 +1,5 @@
/* /*
* SPDX-FileCopyrightText: Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved. * SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT * SPDX-License-Identifier: MIT
* *
* Permission is hereby granted, free of charge, to any person obtaining a * Permission is hereby granted, free of charge, to any person obtaining a
@ -188,7 +188,9 @@
#define SOE_VBIOS_VERSION_MASK 0xFF0000 #define SOE_VBIOS_VERSION_MASK 0xFF0000
#define SOE_VBIOS_REVLOCK_DISABLE_NPORT_FATAL_INTR 0x370000 #define SOE_VBIOS_REVLOCK_DISABLE_NPORT_FATAL_INTR 0x370000
#define SOE_VBIOS_REVLOCK_ISSUE_INGRESS_STOP 0x440000 #define SOE_VBIOS_REVLOCK_ISSUE_INGRESS_STOP 0x4C0000
#define SOE_VBIOS_REVLOCK_ISSUE_REGISTER_WRITE 0x580000
#define SOE_VBIOS_REVLOCK_TNVL_PRELOCK_COMMAND 0x600000
// LS10 Saved LED state // LS10 Saved LED state
#define ACCESS_LINK_LED_STATE CPLD_MACHXO3_ACCESS_LINK_LED_CTL_NVL_CABLE_LED #define ACCESS_LINK_LED_STATE CPLD_MACHXO3_ACCESS_LINK_LED_CTL_NVL_CABLE_LED
@ -1058,6 +1060,9 @@ NvlStatus nvswitch_tnvl_get_attestation_certificate_chain_ls10(nvswitch_device *
NvlStatus nvswitch_tnvl_get_attestation_report_ls10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params); NvlStatus nvswitch_tnvl_get_attestation_report_ls10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params);
NvlStatus nvswitch_tnvl_send_fsp_lock_config_ls10(nvswitch_device *device); NvlStatus nvswitch_tnvl_send_fsp_lock_config_ls10(nvswitch_device *device);
NvlStatus nvswitch_tnvl_get_status_ls10(nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params); NvlStatus nvswitch_tnvl_get_status_ls10(nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params);
void nvswitch_tnvl_reg_wr_32_ls10(nvswitch_device *device, NVSWITCH_ENGINE_ID eng_id, NvU32 eng_bcast, NvU32 eng_instance, NvU32 base_addr, NvU32 offset, NvU32 data);
NvlStatus nvswitch_send_tnvl_prelock_cmd_ls10(nvswitch_device *device);
void nvswitch_tnvl_disable_interrupts_ls10(nvswitch_device *device);
NvlStatus nvswitch_ctrl_get_soe_heartbeat_ls10(nvswitch_device *device, NVSWITCH_GET_SOE_HEARTBEAT_PARAMS *p); NvlStatus nvswitch_ctrl_get_soe_heartbeat_ls10(nvswitch_device *device, NVSWITCH_GET_SOE_HEARTBEAT_PARAMS *p);
NvlStatus nvswitch_cci_enable_iobist_ls10(nvswitch_device *device, NvU32 linkNumber, NvBool bEnable); NvlStatus nvswitch_cci_enable_iobist_ls10(nvswitch_device *device, NvU32 linkNumber, NvBool bEnable);


@ -1,5 +1,5 @@
/* /*
* SPDX-FileCopyrightText: Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved. * SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
* SPDX-License-Identifier: MIT * SPDX-License-Identifier: MIT
* *
* Permission is hereby granted, free of charge, to any person obtaining a * Permission is hereby granted, free of charge, to any person obtaining a
@ -50,5 +50,6 @@ void nvswitch_heartbeat_soe_callback_ls10(nvswitch_device *device, RM_FLCN_
NvlStatus nvswitch_soe_set_nport_interrupts_ls10(nvswitch_device *device, NvU32 nport, NvBool bEnable); NvlStatus nvswitch_soe_set_nport_interrupts_ls10(nvswitch_device *device, NvU32 nport, NvBool bEnable);
void nvswitch_soe_disable_nport_fatal_interrupts_ls10(nvswitch_device *device, NvU32 nport, void nvswitch_soe_disable_nport_fatal_interrupts_ls10(nvswitch_device *device, NvU32 nport,
NvU32 nportIntrEnable, NvU8 nportIntrType); NvU32 nportIntrEnable, NvU8 nportIntrType);
NvlStatus nvswitch_soe_issue_ingress_stop_ls10(nvswitch_device *device, NvU32 nport, NvBool bStop);
NvlStatus nvswitch_soe_reg_wr_32_ls10(nvswitch_device *device, NvU32 offset, NvU32 data);
#endif //_SOE_LS10_H_ #endif //_SOE_LS10_H_


@ -272,8 +272,8 @@ const NvU32 soe_ucode_data_lr10_dbg[] = {
0xa6b0001d, 0x240cf409, 0x001da03e, 0x0049190f, 0x009ff711, 0x00f802f8, 0xb50294b6, 0x00f804b9, 0xa6b0001d, 0x240cf409, 0x001da03e, 0x0049190f, 0x009ff711, 0x00f802f8, 0xb50294b6, 0x00f804b9,
0xb602af92, 0xb9bc0294, 0xf400f8f9, 0x82f9d430, 0x301590b4, 0xc1b027e1, 0x0ad1b00b, 0x94b6f4bd, 0xb602af92, 0xb9bc0294, 0xf400f8f9, 0x82f9d430, 0x301590b4, 0xc1b027e1, 0x0ad1b00b, 0x94b6f4bd,
0x0c91b002, 0x900149fe, 0x9fa04499, 0x20079990, 0x0b99929f, 0x95b29fa0, 0xa0049992, 0x9297b29f, 0x0c91b002, 0x900149fe, 0x9fa04499, 0x20079990, 0x0b99929f, 0x95b29fa0, 0xa0049992, 0x9297b29f,
0x9fa00499, 0x0005ecdf, 0x90ffbf00, 0x4efe1499, 0xa0a6b201, 0x34ee909f, 0xb4b20209, 0x84bde9a0, 0x9fa00499, 0x0005ecdf, 0x90ffbf00, 0x4efe1499, 0xa0a6b201, 0x34ee909f, 0xb4b20209, 0x14bde9a0,
0x14bd34bd, 0x001eef3e, 0x277e6ab2, 0x49bf001a, 0x4bfea2b2, 0x014cfe01, 0x9044bb90, 0x95f94bcc, 0x34bd84bd, 0x001eef3e, 0x277e6ab2, 0x49bf001a, 0x4bfea2b2, 0x014cfe01, 0x9044bb90, 0x95f94bcc,
0xb31100b4, 0x008e0209, 0x9e0309b3, 0x010db300, 0x499800a8, 0xb27cb201, 0xfe5bb22a, 0xdd90014d, 0xb31100b4, 0x008e0209, 0x9e0309b3, 0x010db300, 0x499800a8, 0xb27cb201, 0xfe5bb22a, 0xdd90014d,
0x3295f938, 0x0be0b40c, 0xa53ed4bd, 0x5fbf001e, 0xf9a6e9bf, 0x34381bf4, 0xe89827b0, 0x987fbf01, 0x3295f938, 0x0be0b40c, 0xa53ed4bd, 0x5fbf001e, 0xf9a6e9bf, 0x34381bf4, 0xe89827b0, 0x987fbf01,
0xb03302e9, 0xb0b40a00, 0x90b9bc0c, 0x1bf4f9a6, 0x1444df1e, 0xf9180000, 0x0094330c, 0x90f1b206, 0xb03302e9, 0xb0b40a00, 0x90b9bc0c, 0x1bf4f9a6, 0x1444df1e, 0xf9180000, 0x0094330c, 0x90f1b206,
@ -2269,8 +2269,8 @@ const NvU32 soe_ucode_data_lr10_dbg[] = {
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x69e9060c, 0xe6ca2d91, 0xac20edf2, 0xeafeafcc, 0x294f2cc2, 0x883a9d68, 0x493e2990, 0xc8e27d59, 0x69e9060c, 0xe6ca2d91, 0xac20edf2, 0xeafeafcc, 0x1de66f4b, 0x98838b38, 0xce342fcf, 0x31422bca,
0x30867660, 0xbc4af25f, 0xbc09e1ed, 0xab87e0fc, 0x8fc5fac6, 0xe1f366be, 0x1ec159bf, 0x352ff984, 0x30867660, 0xbc4af25f, 0xbc09e1ed, 0xab87e0fc, 0x154ee848, 0x4d419617, 0xc10ab5e0, 0x5570cfeb,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,


@ -272,8 +272,8 @@ const NvU32 soe_ucode_data_lr10_prd[] = {
0xa6b0001d, 0x240cf409, 0x001da03e, 0x0049190f, 0x009ff711, 0x00f802f8, 0xb50294b6, 0x00f804b9, 0xa6b0001d, 0x240cf409, 0x001da03e, 0x0049190f, 0x009ff711, 0x00f802f8, 0xb50294b6, 0x00f804b9,
0xb602af92, 0xb9bc0294, 0xf400f8f9, 0x82f9d430, 0x301590b4, 0xc1b027e1, 0x0ad1b00b, 0x94b6f4bd, 0xb602af92, 0xb9bc0294, 0xf400f8f9, 0x82f9d430, 0x301590b4, 0xc1b027e1, 0x0ad1b00b, 0x94b6f4bd,
0x0c91b002, 0x900149fe, 0x9fa04499, 0x20079990, 0x0b99929f, 0x95b29fa0, 0xa0049992, 0x9297b29f, 0x0c91b002, 0x900149fe, 0x9fa04499, 0x20079990, 0x0b99929f, 0x95b29fa0, 0xa0049992, 0x9297b29f,
0x9fa00499, 0x0005ecdf, 0x90ffbf00, 0x4efe1499, 0xa0a6b201, 0x34ee909f, 0xb4b20209, 0x84bde9a0, 0x9fa00499, 0x0005ecdf, 0x90ffbf00, 0x4efe1499, 0xa0a6b201, 0x34ee909f, 0xb4b20209, 0x14bde9a0,
0x14bd34bd, 0x001eef3e, 0x277e6ab2, 0x49bf001a, 0x4bfea2b2, 0x014cfe01, 0x9044bb90, 0x95f94bcc, 0x34bd84bd, 0x001eef3e, 0x277e6ab2, 0x49bf001a, 0x4bfea2b2, 0x014cfe01, 0x9044bb90, 0x95f94bcc,
0xb31100b4, 0x008e0209, 0x9e0309b3, 0x010db300, 0x499800a8, 0xb27cb201, 0xfe5bb22a, 0xdd90014d, 0xb31100b4, 0x008e0209, 0x9e0309b3, 0x010db300, 0x499800a8, 0xb27cb201, 0xfe5bb22a, 0xdd90014d,
0x3295f938, 0x0be0b40c, 0xa53ed4bd, 0x5fbf001e, 0xf9a6e9bf, 0x34381bf4, 0xe89827b0, 0x987fbf01, 0x3295f938, 0x0be0b40c, 0xa53ed4bd, 0x5fbf001e, 0xf9a6e9bf, 0x34381bf4, 0xe89827b0, 0x987fbf01,
0xb03302e9, 0xb0b40a00, 0x90b9bc0c, 0x1bf4f9a6, 0x1444df1e, 0xf9180000, 0x0094330c, 0x90f1b206, 0xb03302e9, 0xb0b40a00, 0x90b9bc0c, 0x1bf4f9a6, 0x1444df1e, 0xf9180000, 0x0094330c, 0x90f1b206,
@ -2269,8 +2269,8 @@ const NvU32 soe_ucode_data_lr10_prd[] = {
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x69e9060c, 0xe6ca2d91, 0xac20edf2, 0xeafeafcc, 0x294f2cc2, 0x883a9d68, 0x493e2990, 0xc8e27d59, 0x69e9060c, 0xe6ca2d91, 0xac20edf2, 0xeafeafcc, 0x1de66f4b, 0x98838b38, 0xce342fcf, 0x31422bca,
0x30867660, 0xbc4af25f, 0xbc09e1ed, 0xab87e0fc, 0x8fc5fac6, 0xe1f366be, 0x1ec159bf, 0x352ff984, 0x30867660, 0xbc4af25f, 0xbc09e1ed, 0xab87e0fc, 0x154ee848, 0x4d419617, 0xc10ab5e0, 0x5570cfeb,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,
0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000, 0x00000000,


@ -212,9 +212,16 @@ _inforom_nvlink_start_correctable_error_recording
pNvlinkState->bCallbackPending = NV_FALSE; pNvlinkState->bCallbackPending = NV_FALSE;
if (!nvswitch_is_tnvl_mode_enabled(device))
{
nvswitch_task_create(device, &_nvswitch_nvlink_1hz_callback, nvswitch_task_create(device, &_nvswitch_nvlink_1hz_callback,
NVSWITCH_INTERVAL_1SEC_IN_NS, 0); NVSWITCH_INTERVAL_1SEC_IN_NS, 0);
} }
else
{
NVSWITCH_PRINT(device, INFO, "Skipping NVLINK heartbeat task when TNVL is enabled\n");
}
}
NvlStatus NvlStatus
nvswitch_inforom_nvlink_load nvswitch_inforom_nvlink_load

View File

@ -8186,6 +8186,24 @@ nvswitch_tnvl_get_status_lr10
return -NVL_ERR_NOT_SUPPORTED; return -NVL_ERR_NOT_SUPPORTED;
} }
NvlStatus
nvswitch_send_tnvl_prelock_cmd_lr10
(
nvswitch_device *device
)
{
return -NVL_ERR_NOT_SUPPORTED;
}
void
nvswitch_tnvl_disable_interrupts_lr10
(
nvswitch_device *device
)
{
return;
}
// //
// This function auto creates the lr10 HAL connectivity from the NVSWITCH_INIT_HAL // This function auto creates the lr10 HAL connectivity from the NVSWITCH_INIT_HAL
// macro in haldef_nvswitch.h // macro in haldef_nvswitch.h


@ -386,6 +386,13 @@ nvswitch_is_cci_supported_ls10
nvswitch_device *device nvswitch_device *device
) )
{ {
// Skip CCI on TNVL mode
if (nvswitch_is_tnvl_mode_enabled(device))
{
NVSWITCH_PRINT(device, INFO, "CCI is not supported on TNVL mode\n");
return NV_FALSE;
}
if (FLD_TEST_DRF(_SWITCH_REGKEY, _CCI_CONTROL, _ENABLE, _FALSE, if (FLD_TEST_DRF(_SWITCH_REGKEY, _CCI_CONTROL, _ENABLE, _FALSE,
device->regkeys.cci_control)) device->regkeys.cci_control))
{ {


@ -5928,12 +5928,19 @@ nvswitch_create_deferred_link_state_check_task_ls10
pErrorReportParams->nvlipt_instance = nvlipt_instance; pErrorReportParams->nvlipt_instance = nvlipt_instance;
pErrorReportParams->link = link; pErrorReportParams->link = link;
if (!nvswitch_is_tnvl_mode_enabled(device))
{
status = nvswitch_task_create_args(device, (void*)pErrorReportParams, status = nvswitch_task_create_args(device, (void*)pErrorReportParams,
&_nvswitch_deferred_link_state_check_ls10, &_nvswitch_deferred_link_state_check_ls10,
NVSWITCH_DEFERRED_LINK_STATE_CHECK_INTERVAL_NS, NVSWITCH_DEFERRED_LINK_STATE_CHECK_INTERVAL_NS,
NVSWITCH_TASK_TYPE_FLAGS_RUN_ONCE | NVSWITCH_TASK_TYPE_FLAGS_RUN_ONCE |
NVSWITCH_TASK_TYPE_FLAGS_VOID_PTR_ARGS); NVSWITCH_TASK_TYPE_FLAGS_VOID_PTR_ARGS);
} }
else
{
NVSWITCH_PRINT(device, INFO, "Skipping Deferred link state background task when TNVL is enabled\n");
}
}
if (status == NVL_SUCCESS) if (status == NVL_SUCCESS)
{ {
@ -6013,12 +6020,15 @@ _nvswitch_create_deferred_link_errors_task_ls10
pErrorReportParams->nvlipt_instance = nvlipt_instance; pErrorReportParams->nvlipt_instance = nvlipt_instance;
pErrorReportParams->link = link; pErrorReportParams->link = link;
if (!nvswitch_is_tnvl_mode_enabled(device))
{
status = nvswitch_task_create_args(device, (void*)pErrorReportParams, status = nvswitch_task_create_args(device, (void*)pErrorReportParams,
&_nvswitch_deferred_link_errors_check_ls10, &_nvswitch_deferred_link_errors_check_ls10,
NVSWITCH_DEFERRED_FAULT_UP_CHECK_INTERVAL_NS, NVSWITCH_DEFERRED_FAULT_UP_CHECK_INTERVAL_NS,
NVSWITCH_TASK_TYPE_FLAGS_RUN_ONCE | NVSWITCH_TASK_TYPE_FLAGS_RUN_ONCE |
NVSWITCH_TASK_TYPE_FLAGS_VOID_PTR_ARGS); NVSWITCH_TASK_TYPE_FLAGS_VOID_PTR_ARGS);
} }
}
if (status == NVL_SUCCESS) if (status == NVL_SUCCESS)
{ {
@ -7416,7 +7426,7 @@ nvswitch_lib_service_interrupts_ls10
// 3. Run leaf specific interrupt handler // 3. Run leaf specific interrupt handler
// //
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NVLW_NON_FATAL); val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NVLW_NON_FATAL);
val = DRF_NUM(_CTRL, _CPU_INTR_NVLW_NON_FATAL, _MASK, val); val = DRF_VAL(_CTRL, _CPU_INTR_NVLW_NON_FATAL, _MASK, val);
if (val != 0) if (val != 0)
{ {
NVSWITCH_PRINT(device, INFO, "%s: NVLW NON_FATAL interrupts pending = 0x%x\n", NVSWITCH_PRINT(device, INFO, "%s: NVLW NON_FATAL interrupts pending = 0x%x\n",
@ -7438,7 +7448,7 @@ nvswitch_lib_service_interrupts_ls10
} }
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NVLW_FATAL); val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NVLW_FATAL);
val = DRF_NUM(_CTRL, _CPU_INTR_NVLW_FATAL, _MASK, val); val = DRF_VAL(_CTRL, _CPU_INTR_NVLW_FATAL, _MASK, val);
if (val != 0) if (val != 0)
{ {
NVSWITCH_PRINT(device, INFO, "%s: NVLW FATAL interrupts pending = 0x%x\n", NVSWITCH_PRINT(device, INFO, "%s: NVLW FATAL interrupts pending = 0x%x\n",
@ -7462,7 +7472,7 @@ nvswitch_lib_service_interrupts_ls10
} }
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NVLW_CORRECTABLE); val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NVLW_CORRECTABLE);
val = DRF_NUM(_CTRL, _CPU_INTR_NVLW_CORRECTABLE, _MASK, val); val = DRF_VAL(_CTRL, _CPU_INTR_NVLW_CORRECTABLE, _MASK, val);
if (val != 0) if (val != 0)
{ {
NVSWITCH_PRINT(device, ERROR, "%s: NVLW CORRECTABLE interrupts pending = 0x%x\n", NVSWITCH_PRINT(device, ERROR, "%s: NVLW CORRECTABLE interrupts pending = 0x%x\n",
@ -7472,7 +7482,7 @@ nvswitch_lib_service_interrupts_ls10
// Check NPG // Check NPG
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NPG_FATAL); val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NPG_FATAL);
val = DRF_NUM(_CTRL, _CPU_INTR_NPG_FATAL, _MASK, val); val = DRF_VAL(_CTRL, _CPU_INTR_NPG_FATAL, _MASK, val);
if (val != 0) if (val != 0)
{ {
NVSWITCH_PRINT(device, INFO, "%s: NPG FATAL interrupts pending = 0x%x\n", NVSWITCH_PRINT(device, INFO, "%s: NPG FATAL interrupts pending = 0x%x\n",
@ -7494,7 +7504,7 @@ nvswitch_lib_service_interrupts_ls10
}
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NPG_NON_FATAL);
val = DRF_NUM(_CTRL, _CPU_INTR_NPG_NON_FATAL, _MASK, val);
val = DRF_VAL(_CTRL, _CPU_INTR_NPG_NON_FATAL, _MASK, val);
if (val != 0)
{
NVSWITCH_PRINT(device, INFO, "%s: NPG NON_FATAL interrupts pending = 0x%x\n",
@ -7516,7 +7526,7 @@ nvswitch_lib_service_interrupts_ls10
}
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NPG_CORRECTABLE);
val = DRF_NUM(_CTRL, _CPU_INTR_NPG_CORRECTABLE, _MASK, val);
val = DRF_VAL(_CTRL, _CPU_INTR_NPG_CORRECTABLE, _MASK, val);
if (val != 0)
{
NVSWITCH_PRINT(device, ERROR, "%s: NPG CORRECTABLE interrupts pending = 0x%x\n",
@ -7526,7 +7536,7 @@ nvswitch_lib_service_interrupts_ls10
// Check NXBAR
val = NVSWITCH_ENG_RD32(device, GIN, , 0, _CTRL, _CPU_INTR_NXBAR_FATAL);
val = DRF_NUM(_CTRL, _CPU_INTR_NXBAR_FATAL, _MASK, val);
val = DRF_VAL(_CTRL, _CPU_INTR_NXBAR_FATAL, _MASK, val);
if (val != 0)
{
NVSWITCH_PRINT(device, INFO, "%s: NXBAR FATAL interrupts pending = 0x%x\n",


@ -2979,13 +2979,6 @@ nvswitch_is_soe_supported_ls10
NVSWITCH_PRINT(device, WARN, "SOE can not be disabled via regkey.\n");
}
if (nvswitch_is_tnvl_mode_locked(device))
{
NVSWITCH_PRINT(device, INFO,
"SOE is not supported when TNVL mode is locked\n");
return NV_FALSE;
}
return NV_TRUE;
}
@ -3033,13 +3026,6 @@ nvswitch_is_inforom_supported_ls10
return NV_FALSE;
}
if (nvswitch_is_tnvl_mode_enabled(device))
{
NVSWITCH_PRINT(device, INFO,
"INFOROM is not supported when TNVL mode is enabled\n");
return NV_FALSE;
}
if (!nvswitch_is_soe_supported(device))
{
NVSWITCH_PRINT(device, INFO,
@ -4421,7 +4407,14 @@ nvswitch_eng_wr_ls10
return;
}
if (nvswitch_is_tnvl_mode_enabled(device))
{
nvswitch_tnvl_reg_wr_32_ls10(device, eng_id, eng_bcast, eng_instance, base_addr, offset, data);
}
else
{
nvswitch_reg_write_32(device, base_addr + offset, data);
}
#if defined(DEVELOP) || defined(DEBUG) || defined(NV_MODS)
{


@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2020-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@ -559,6 +559,76 @@ nvswitch_soe_disable_nport_fatal_interrupts_ls10
}
}
/*
* @Brief : Perform register writes in SOE during TNVL
*
* @param[in] device
* @param[in] offset
* @param[in] data
*/
NvlStatus
nvswitch_soe_reg_wr_32_ls10
(
nvswitch_device *device,
NvU32 offset,
NvU32 data
)
{
FLCN *pFlcn;
NvU32 cmdSeqDesc = 0;
NV_STATUS status;
RM_FLCN_CMD_SOE cmd;
NVSWITCH_TIMEOUT timeout;
RM_SOE_TNVL_CMD_REGISTER_WRITE *pRegisterWrite;
NVSWITCH_GET_BIOS_INFO_PARAMS params = { 0 };
if (!nvswitch_is_soe_supported(device))
{
NVSWITCH_PRINT(device, INFO,
"%s: SOE is not supported\n",
__FUNCTION__);
return NVL_SUCCESS; // -NVL_ERR_NOT_SUPPORTED
}
status = device->hal.nvswitch_ctrl_get_bios_info(device, &params);
if ((status != NVL_SUCCESS) || ((params.version & SOE_VBIOS_VERSION_MASK) <
SOE_VBIOS_REVLOCK_ISSUE_REGISTER_WRITE))
{
nvswitch_reg_write_32(device, offset, data);
return NVL_SUCCESS;
}
pFlcn = device->pSoe->pFlcn;
nvswitch_os_memset(&cmd, 0, sizeof(cmd));
cmd.hdr.unitId = RM_SOE_UNIT_TNVL;
cmd.hdr.size = RM_SOE_CMD_SIZE(TNVL, REGISTER_WRITE);
pRegisterWrite = &cmd.cmd.tnvl.registerWrite;
pRegisterWrite->cmdType = RM_SOE_TNVL_CMD_ISSUE_REGISTER_WRITE;
pRegisterWrite->offset = offset;
pRegisterWrite->data = data;
nvswitch_timeout_create(NVSWITCH_INTERVAL_5MSEC_IN_NS, &timeout);
status = flcnQueueCmdPostBlocking(device, pFlcn,
(PRM_FLCN_CMD)&cmd,
NULL, // pMsg
NULL, // pPayload
SOE_RM_CMDQ_LOG_ID,
&cmdSeqDesc,
&timeout);
if (status != NV_OK)
{
NVSWITCH_PRINT(device, ERROR,
"%s: Failed to send REGISTER_WRITE command to SOE, offset = 0x%x, data = 0x%x\n",
__FUNCTION__, offset, data);
return -NVL_ERR_GENERIC;
}
return NVL_SUCCESS;
}
/*
 * @Brief : Init sequence for SOE FSP RISCV image
 *
@ -609,6 +679,8 @@ nvswitch_init_soe_ls10
}
// Register SOE callbacks
if (!nvswitch_is_tnvl_mode_enabled(device))
{
status = nvswitch_soe_register_event_callbacks(device);
if (status != NVL_SUCCESS)
{
@ -618,6 +690,11 @@ nvswitch_init_soe_ls10
"SOE init failed(2)\n");
return status;
}
}
else
{
NVSWITCH_PRINT(device, INFO, "Skipping registering SOE callbacks since TNVL is enabled\n");
}
// Sanity the command and message queues as a final check
if (_nvswitch_soe_send_test_cmd(device) != NV_OK)
@ -1363,6 +1440,71 @@ _soeI2CAccess_LS10
return ret;
}
/*
* @Brief : Send TNVL Pre Lock command to SOE
*
* @param[in] device
*/
NvlStatus
nvswitch_send_tnvl_prelock_cmd_ls10
(
nvswitch_device *device
)
{
FLCN *pFlcn;
NvU32 cmdSeqDesc = 0;
NV_STATUS status;
RM_FLCN_CMD_SOE cmd;
NVSWITCH_TIMEOUT timeout;
RM_SOE_TNVL_CMD_PRE_LOCK_SEQUENCE *pTnvlPreLock;
NVSWITCH_GET_BIOS_INFO_PARAMS params = { 0 };
if (!nvswitch_is_soe_supported(device))
{
NVSWITCH_PRINT(device, INFO, "%s: SOE is not supported\n",
__FUNCTION__);
return -NVL_ERR_NOT_SUPPORTED;
}
status = device->hal.nvswitch_ctrl_get_bios_info(device, &params);
if ((status != NVL_SUCCESS) || ((params.version & SOE_VBIOS_VERSION_MASK) <
SOE_VBIOS_REVLOCK_TNVL_PRELOCK_COMMAND))
{
NVSWITCH_PRINT(device, INFO,
"%s: Skipping TNVL_CMD_PRE_LOCK_SEQUENCE command to SOE. Update firmware "
"from .%02X to .%02X\n",
__FUNCTION__, (NvU32)((params.version & SOE_VBIOS_VERSION_MASK) >> 16),
SOE_VBIOS_REVLOCK_TNVL_PRELOCK_COMMAND);
return -NVL_ERR_NOT_SUPPORTED;
}
pFlcn = device->pSoe->pFlcn;
nvswitch_os_memset(&cmd, 0, sizeof(cmd));
cmd.hdr.unitId = RM_SOE_UNIT_TNVL;
cmd.hdr.size = RM_SOE_CMD_SIZE(TNVL, PRE_LOCK_SEQUENCE);
pTnvlPreLock = &cmd.cmd.tnvl.preLockSequence;
pTnvlPreLock->cmdType = RM_SOE_TNVL_CMD_ISSUE_PRE_LOCK_SEQUENCE;
nvswitch_timeout_create(NVSWITCH_INTERVAL_5MSEC_IN_NS, &timeout);
status = flcnQueueCmdPostBlocking(device, pFlcn,
(PRM_FLCN_CMD)&cmd,
NULL, // pMsg
NULL, // pPayload
SOE_RM_CMDQ_LOG_ID,
&cmdSeqDesc,
&timeout);
if (status != NV_OK)
{
NVSWITCH_PRINT(device, ERROR, "%s: Failed to send PRE_LOCK_SEQUENCE command to SOE, status 0x%x\n",
__FUNCTION__, status);
return -NVL_ERR_GENERIC;
}
return NVL_SUCCESS;
}
/**
 * @brief set hal function pointers for functions defined in LR10 (i.e. this file)
 *


@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@ -26,9 +26,14 @@
#include "common_nvswitch.h"
#include "haldef_nvswitch.h"
#include "ls10/ls10.h"
#include "ls10/soe_ls10.h"
#include "nvswitch/ls10/dev_nvlsaw_ip.h"
#include "nvswitch/ls10/dev_nvlsaw_ip_addendum.h"
#include "nvswitch/ls10/dev_ctrl_ip.h"
#include "nvswitch/ls10/dev_ctrl_ip_addendum.h"
#include "nvswitch/ls10/dev_cpr_ip.h"
#include "nvswitch/ls10/dev_npg_ip.h"
#include <stddef.h>
@ -947,6 +952,9 @@ nvswitch_detect_tnvl_mode_ls10
val = NVSWITCH_SAW_RD32_LS10(device, _NVLSAW, _TNVL_MODE);
if (FLD_TEST_DRF(_NVLSAW, _TNVL_MODE, _STATUS, _ENABLED, val))
{
NVSWITCH_PRINT(device, ERROR,
"%s: TNVL Mode Detected\n",
__FUNCTION__);
device->tnvl_mode = NVSWITCH_DEVICE_TNVL_MODE_ENABLED;
}
@ -1048,3 +1056,119 @@ nvswitch_tnvl_get_status_ls10
params->status = device->tnvl_mode;
return NVL_SUCCESS;
}
static NvBool
_nvswitch_reg_cpu_write_allow_list_ls10
(
nvswitch_device *device,
NVSWITCH_ENGINE_ID eng_id,
NvU32 offset
)
{
switch (eng_id)
{
case NVSWITCH_ENGINE_ID_SOE:
case NVSWITCH_ENGINE_ID_GIN:
case NVSWITCH_ENGINE_ID_FSP:
return NV_TRUE;
case NVSWITCH_ENGINE_ID_SAW:
{
if (offset == NV_NVLSAW_DRIVER_ATTACH_DETACH)
return NV_TRUE;
break;
}
case NVSWITCH_ENGINE_ID_NPG:
{
if ((offset == NV_NPG_INTR_RETRIGGER(0)) ||
(offset == NV_NPG_INTR_RETRIGGER(1)))
return NV_TRUE;
break;
}
case NVSWITCH_ENGINE_ID_CPR:
{
if ((offset == NV_CPR_SYS_INTR_RETRIGGER(0)) ||
(offset == NV_CPR_SYS_INTR_RETRIGGER(1)))
return NV_TRUE;
break;
}
default :
return NV_FALSE;
}
return NV_FALSE;
}
void
nvswitch_tnvl_reg_wr_32_ls10
(
nvswitch_device *device,
NVSWITCH_ENGINE_ID eng_id,
NvU32 eng_bcast,
NvU32 eng_instance,
NvU32 base_addr,
NvU32 offset,
NvU32 data
)
{
if (!nvswitch_is_tnvl_mode_enabled(device))
{
NVSWITCH_PRINT(device, ERROR,
"%s: TNVL mode is not enabled\n",
__FUNCTION__);
NVSWITCH_ASSERT(0);
return;
}
if (nvswitch_is_tnvl_mode_locked(device))
{
NVSWITCH_PRINT(device, ERROR,
"%s: TNVL mode is locked\n",
__FUNCTION__);
NVSWITCH_ASSERT(0);
return;
}
if (_nvswitch_reg_cpu_write_allow_list_ls10(device, eng_id, offset))
{
nvswitch_reg_write_32(device, base_addr + offset, data);
}
else
{
if (nvswitch_soe_reg_wr_32_ls10(device, base_addr + offset, data) != NVL_SUCCESS)
{
NVSWITCH_PRINT(device, ERROR,
"%s: SOE ENG_WR failed for 0x%x[%d] %s @0x%08x+0x%06x = 0x%08x\n",
__FUNCTION__,
eng_id, eng_instance,
(
(eng_bcast == NVSWITCH_GET_ENG_DESC_TYPE_UNICAST) ? "UC" :
(eng_bcast == NVSWITCH_GET_ENG_DESC_TYPE_BCAST) ? "BC" :
(eng_bcast == NVSWITCH_GET_ENG_DESC_TYPE_MULTICAST) ? "MC" :
"??"
),
base_addr, offset, data);
NVSWITCH_ASSERT(0);
}
}
}
void
nvswitch_tnvl_disable_interrupts_ls10
(
nvswitch_device *device
)
{
//
// In TNVL locked disable non-fatal NVLW, NPG, and legacy interrupt,
// disable additional non-fatals on those partitions.
//
NVSWITCH_ENG_WR32(device, GIN, , 0, _CTRL, _CPU_INTR_LEAF_EN_CLEAR(NV_CTRL_CPU_INTR_NVLW_NON_FATAL_IDX),
0xFFFF);
NVSWITCH_ENG_WR32(device, GIN, , 0, _CTRL, _CPU_INTR_LEAF_EN_CLEAR(NV_CTRL_CPU_INTR_NPG_NON_FATAL_IDX),
0xFFFF);
NVSWITCH_ENG_WR32(device, GIN, , 0, _CTRL, _CPU_INTR_LEAF_EN_CLEAR(NV_CTRL_CPU_INTR_UNITS_IDX),
0xFFFFFFFF);
}


@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2017-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2017-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@ -1021,6 +1021,15 @@ _nvswitch_ctrl_get_tnvl_status
return device->hal.nvswitch_tnvl_get_status(device, params);
}
void
nvswitch_tnvl_disable_interrupts
(
nvswitch_device *device
)
{
device->hal.nvswitch_tnvl_disable_interrupts(device);
}
static NvlStatus
_nvswitch_construct_soe
(
@ -1860,9 +1869,16 @@ nvswitch_lib_initialize_device
(void)device->hal.nvswitch_read_oob_blacklist_state(device);
(void)device->hal.nvswitch_write_fabric_state(device);
if (!nvswitch_is_tnvl_mode_enabled(device))
{
nvswitch_task_create(device, &nvswitch_fabric_state_heartbeat,
NVSWITCH_HEARTBEAT_INTERVAL_NS,
NVSWITCH_TASK_TYPE_FLAGS_RUN_EVEN_IF_DEVICE_NOT_INITIALIZED);
}
else
{
NVSWITCH_PRINT(device, INFO, "Skipping Fabric state heartbeat background task when TNVL is enabled\n");
}
//
// Blacklisted devices return successfully in order to preserve the fabric state heartbeat
@ -1965,13 +1981,27 @@ nvswitch_lib_initialize_device
}
if (device->regkeys.latency_counter == NV_SWITCH_REGKEY_LATENCY_COUNTER_LOGGING_ENABLE)
{
if (!nvswitch_is_tnvl_mode_enabled(device))
{
nvswitch_task_create(device, &nvswitch_internal_latency_bin_log,
nvswitch_get_latency_sample_interval_msec(device) * NVSWITCH_INTERVAL_1MSEC_IN_NS * 9/10, 0);
}
else
{
NVSWITCH_PRINT(device, INFO, "Skipping Internal latency background task when TNVL is enabled\n");
}
}
if (!nvswitch_is_tnvl_mode_enabled(device))
{
nvswitch_task_create(device, &nvswitch_ecc_writeback_task,
(60 * NVSWITCH_INTERVAL_1SEC_IN_NS), 0);
}
else
{
NVSWITCH_PRINT(device, INFO, "Skipping ECC writeback background task when TNVL is enabled\n");
}
if (IS_RTLSIM(device) || IS_EMULATION(device) || IS_FMODEL(device))
{
@ -1980,10 +2010,17 @@ nvswitch_lib_initialize_device
__FUNCTION__);
}
else
{
if (!nvswitch_is_tnvl_mode_enabled(device))
{
nvswitch_task_create(device, &nvswitch_monitor_thermal_alert,
100*NVSWITCH_INTERVAL_1MSEC_IN_NS, 0);
}
else
{
NVSWITCH_PRINT(device, INFO, "Skipping Thermal alert background task when TNVL is enabled\n");
}
}
device->nvlink_device->initialized = 1;
@ -5968,6 +6005,15 @@ nvswitch_tnvl_send_fsp_lock_config
return device->hal.nvswitch_tnvl_send_fsp_lock_config(device);
}
NvlStatus
nvswitch_send_tnvl_prelock_cmd
(
nvswitch_device *device
)
{
return device->hal.nvswitch_send_tnvl_prelock_cmd(device);
}
static NvlStatus
_nvswitch_ctrl_set_device_tnvl_lock
(
@ -6001,8 +6047,18 @@ _nvswitch_ctrl_set_device_tnvl_lock
//
// Disable non-fatal and legacy interrupts
// Disable commands to SOE
//
nvswitch_tnvl_disable_interrupts(device);
//
//
// Send Pre-Lock sequence command to SOE
//
status = nvswitch_send_tnvl_prelock_cmd(device);
if (status != NVL_SUCCESS)
{
return status;
}
// Send lock-config command to FSP
status = nvswitch_tnvl_send_fsp_lock_config(device);


@ -155,24 +155,24 @@ typedef struct NV0000_CTRL_VGPU_DELETE_DEVICE_PARAMS {
} NV0000_CTRL_VGPU_DELETE_DEVICE_PARAMS;
/*
 * NV0000_CTRL_CMD_VGPU_VFIO_UNREGISTER_STATUS
 * NV0000_CTRL_CMD_VGPU_VFIO_NOTIFY_RM_STATUS
 *
 * This command informs RM the status vgpu-vfio unregister for a GPU.
 * This command informs RM the status of vgpu-vfio GPU operations such as probe and unregister.
 *
 * returnStatus [IN]
 * This parameter provides the status vgpu-vfio unregister operation.
 * This parameter provides the status of vgpu-vfio GPU operation.
 *
 * gpuPciId [IN]
 * This parameter provides the gpu id of the GPU
 */
#define NV0000_CTRL_CMD_VGPU_VFIO_UNREGISTER_STATUS (0xc05) /* finn: Evaluated from "(FINN_NV01_ROOT_VGPU_INTERFACE_ID << 8) | NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS_MESSAGE_ID" */
#define NV0000_CTRL_CMD_VGPU_VFIO_NOTIFY_RM_STATUS (0xc05) /* finn: Evaluated from "(FINN_NV01_ROOT_VGPU_INTERFACE_ID << 8) | NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS_MESSAGE_ID" */
#define NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS_MESSAGE_ID (0x5U)
#define NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS_MESSAGE_ID (0x5U)
typedef struct NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS {
typedef struct NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS {
NvU32 returnStatus;
NvU32 gpuId;
} NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS;
} NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS;
/* _ctrl0000vgpu_h_ */
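Purely as an illustration (not part of this header change), a caller reporting a vgpu-vfio operation result might populate the renamed parameter structure as sketched below; the RM API object, the client handles, and the gpuId variable are assumptions used only to show the shape of the call.
// Hypothetical usage sketch: report a probe/unregister result for one GPU.
NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS params = { 0 };
params.returnStatus = NV_OK;   // status of the vgpu-vfio GPU operation
params.gpuId        = gpuId;   // gpu id of the GPU (assumed to be in scope)
status = pRmApi->Control(pRmApi, hClient, hClient,
                         NV0000_CTRL_CMD_VGPU_VFIO_NOTIFY_RM_STATUS,
                         &params, sizeof(params));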


@ -108,6 +108,8 @@
#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_NONE 0
#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SEV 1
#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_INTEL_TDX 2
#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SEV_SNP 3
#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SNP_VTOM 4
#define NV_CONF_COMPUTE_SYSTEM_GPUS_CAPABILITY_NONE 0
#define NV_CONF_COMPUTE_SYSTEM_GPUS_CAPABILITY_APM 1


@ -361,7 +361,8 @@ ARMCSALLOWLISTINFO armChipsetAllowListInfo[] =
{PCI_VENDOR_ID_MELLANOX, 0xA2D0, CS_MELLANOX_BLUEFIELD}, // Mellanox BlueField
{PCI_VENDOR_ID_MELLANOX, 0xA2D4, CS_MELLANOX_BLUEFIELD2},// Mellanox BlueField 2
{PCI_VENDOR_ID_MELLANOX, 0xA2D5, CS_MELLANOX_BLUEFIELD2},// Mellanox BlueField 2 Crypto disabled
{PCI_VENDOR_ID_MELLANOX, 0xA2DB, CS_MELLANOX_BLUEFIELD3},// Mellanox BlueField 3
{PCI_VENDOR_ID_MELLANOX, 0xA2DB, CS_MELLANOX_BLUEFIELD3},// Mellanox BlueField 3 Crypto disabled
{PCI_VENDOR_ID_MELLANOX, 0xA2DA, CS_MELLANOX_BLUEFIELD3},// Mellanox BlueField 3 Crypto enabled
{PCI_VENDOR_ID_AMAZON, 0x0200, CS_AMAZON_GRAVITRON2}, // Amazon Gravitron2
{PCI_VENDOR_ID_FUJITSU, 0x1952, CS_FUJITSU_A64FX}, // Fujitsu A64FX
{PCI_VENDOR_ID_CADENCE, 0xDC01, CS_PHYTIUM_S2500}, // Phytium S2500


@ -1045,7 +1045,7 @@ NV_STATUS NV_API_CALL nv_vgpu_get_bar_info(nvidia_stack_t *, nv_state_t *, con
NvU64 *, NvU64 *, NvU32 *, NvBool *, NvU8 *);
NV_STATUS NV_API_CALL nv_vgpu_get_hbm_info(nvidia_stack_t *, nv_state_t *, const NvU8 *, NvU64 *, NvU64 *);
NV_STATUS NV_API_CALL nv_vgpu_process_vf_info(nvidia_stack_t *, nv_state_t *, NvU8, NvU32, NvU8, NvU8, NvU8, NvBool, void *);
NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *);
NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *, NvU32, NvBool *);
NV_STATUS NV_API_CALL nv_gpu_unbind_event(nvidia_stack_t *, NvU32, NvBool *);
NV_STATUS NV_API_CALL nv_get_usermap_access_params(nv_state_t*, nv_usermap_access_params_t*);


@ -218,6 +218,8 @@ extern NvU32 os_page_size;
extern NvU64 os_page_mask;
extern NvU8 os_page_shift;
extern NvBool os_cc_enabled;
extern NvBool os_cc_sev_snp_enabled;
extern NvBool os_cc_snp_vtom_enabled;
extern NvBool os_cc_tdx_enabled;
extern NvBool os_dma_buf_enabled;
extern NvBool os_imex_channel_is_supported;


@ -799,7 +799,9 @@ NV_STATUS NV_API_CALL nv_gpu_unbind_event
}
NV_STATUS NV_API_CALL nv_gpu_bind_event(
nvidia_stack_t *sp
nvidia_stack_t *sp,
NvU32 gpuId,
NvBool *isEventNotified
)
{
THREAD_STATE_NODE threadState;
@ -812,7 +814,7 @@ NV_STATUS NV_API_CALL nv_gpu_bind_event(
// LOCK: acquire API lock
if ((rmStatus = rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_HYPERVISOR)) == NV_OK)
{
CliAddSystemEvent(NV0000_NOTIFIERS_GPU_BIND_EVENT, 0, NULL);
CliAddSystemEvent(NV0000_NOTIFIERS_GPU_BIND_EVENT, gpuId, isEventNotified);
// UNLOCK: release API lock
rmapiLockRelease();


@ -2675,6 +2675,8 @@ void osInitSystemStaticConfig(SYS_STATIC_CONFIG *pConfig)
pConfig->bIsNotebook = rm_is_system_notebook();
pConfig->osType = nv_get_os_type();
pConfig->bOsCCEnabled = os_cc_enabled;
pConfig->bOsCCSevSnpEnabled = os_cc_sev_snp_enabled;
pConfig->bOsCCSnpVtomEnabled = os_cc_snp_vtom_enabled;
pConfig->bOsCCTdxEnabled = os_cc_tdx_enabled;
}


@ -1559,24 +1559,6 @@ failed:
return status;
}
static void
RmHandleNvpcfEvents(
nv_state_t *pNv
)
{
OBJGPU *pGpu = NV_GET_NV_PRIV_PGPU(pNv);
THREAD_STATE_NODE threadState;
if (RmUnixRmApiPrologue(pNv, &threadState, RM_LOCK_MODULES_ACPI) == NULL)
{
return;
}
gpuNotifySubDeviceEvent(pGpu, NV2080_NOTIFIERS_NVPCF_EVENTS, NULL, 0, 0, 0);
RmUnixRmApiEpilogue(pNv, &threadState);
}
/*
 * ---------------------------------------------------------------------------
 *
@ -4312,7 +4294,6 @@ void NV_API_CALL rm_power_source_change_event(
THREAD_STATE_NODE threadState;
void *fp;
nv_state_t *nv;
OBJGPU *pGpu = gpumgrGetGpu(0);
NV_STATUS rmStatus = NV_OK;
NV_ENTER_RM_RUNTIME(sp,fp);
@ -4321,6 +4302,7 @@ void NV_API_CALL rm_power_source_change_event(
// LOCK: acquire API lock
if ((rmStatus = rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_EVENT)) == NV_OK)
{
OBJGPU *pGpu = gpumgrGetGpu(0);
if (pGpu != NULL)
{
nv = NV_GET_NV_STATE(pGpu);
@ -5941,15 +5923,31 @@ void NV_API_CALL rm_acpi_nvpcf_notify(
)
{
void *fp;
OBJGPU *pGpu = gpumgrGetGpu(0);
THREAD_STATE_NODE threadState;
NV_STATUS rmStatus = NV_OK;
NV_ENTER_RM_RUNTIME(sp,fp);
threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);
// LOCK: acquire API lock
if ((rmStatus = rmapiLockAcquire(API_LOCK_FLAGS_NONE,
RM_LOCK_MODULES_EVENT)) == NV_OK)
{
OBJGPU *pGpu = gpumgrGetGpu(0);
if (pGpu != NULL)
{
nv_state_t *nv = NV_GET_NV_STATE(pGpu);
RmHandleNvpcfEvents(nv);
if ((rmStatus = os_ref_dynamic_power(nv, NV_DYNAMIC_PM_FINE)) ==
NV_OK)
{
gpuNotifySubDeviceEvent(pGpu, NV2080_NOTIFIERS_NVPCF_EVENTS,
NULL, 0, 0, 0);
}
os_unref_dynamic_power(nv, NV_DYNAMIC_PM_FINE);
}
rmapiLockRelease();
} }
threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
NV_EXIT_RM_RUNTIME(sp,fp);
}


@ -1551,6 +1551,21 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
#endif
},
{ /* [90] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x4u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
/*pFunc=*/ (void (*)(void)) cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL,
#endif // NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x4u)
/*flags=*/ 0x4u,
/*accessRight=*/0x0u,
/*methodId=*/ 0xc05u,
/*paramSize=*/ sizeof(NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS),
/*pClassInfo=*/ &(__nvoc_class_def_RmClientResource.classInfo),
#if NV_PRINTF_STRINGS_ALLOWED
/*func=*/ "cliresCtrlCmdVgpuVfioNotifyRMStatus"
#endif
},
{ /* [91] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1565,7 +1580,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientGetAddrSpaceType"
#endif
},
{ /* [91] */
{ /* [92] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1580,7 +1595,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientGetHandleInfo"
#endif
},
{ /* [92] */
{ /* [93] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1595,7 +1610,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientGetAccessRights"
#endif
},
{ /* [93] */
{ /* [94] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1610,7 +1625,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientSetInheritedSharePolicy"
#endif
},
{ /* [94] */
{ /* [95] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1625,7 +1640,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientGetChildHandle"
#endif
},
{ /* [95] */
{ /* [96] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1640,7 +1655,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientShareObject"
#endif
},
{ /* [96] */
{ /* [97] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1655,7 +1670,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdObjectsAreDuplicates"
#endif
},
{ /* [97] */
{ /* [98] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1670,7 +1685,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdClientSubscribeToImexChannel"
#endif
},
{ /* [98] */
{ /* [99] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x10u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1685,7 +1700,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdOsUnixFlushUserCache"
#endif
},
{ /* [99] */
{ /* [100] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1700,7 +1715,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdOsUnixExportObjectToFd"
#endif
},
{ /* [100] */
{ /* [101] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1715,7 +1730,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdOsUnixImportObjectFromFd"
#endif
},
{ /* [101] */
{ /* [102] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x813u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1730,7 +1745,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdOsUnixGetExportObjectInfo"
#endif
},
{ /* [102] */
{ /* [103] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1745,7 +1760,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdOsUnixCreateExportObjectFd"
#endif
},
{ /* [103] */
{ /* [104] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1760,7 +1775,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
/*func=*/ "cliresCtrlCmdOsUnixExportObjectsToFd"
#endif
},
{ /* [104] */
{ /* [105] */
#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
/*pFunc=*/ (void (*)(void)) NULL,
#else
@ -1780,7 +1795,7 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
const struct NVOC_EXPORT_INFO __nvoc_export_info_RmClientResource =
{
/*numEntries=*/ 105,
/*numEntries=*/ 106,
/*pExportEntries=*/ __nvoc_exported_method_def_RmClientResource
};
@ -2219,6 +2234,10 @@ static void __nvoc_init_funcTable_RmClientResource_1(RmClientResource *pThis) {
pThis->__cliresCtrlCmdVgpuSetVgpuVersion__ = &cliresCtrlCmdVgpuSetVgpuVersion_IMPL;
#endif
#if !NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x4u)
pThis->__cliresCtrlCmdVgpuVfioNotifyRMStatus__ = &cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL;
#endif
#if !NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x10u)
pThis->__cliresCtrlCmdSystemNVPCFGetPowerModeInfo__ = &cliresCtrlCmdSystemNVPCFGetPowerModeInfo_IMPL;
#endif


@ -178,6 +178,7 @@ struct RmClientResource {
NV_STATUS (*__cliresCtrlCmdSyncGpuBoostGroupInfo__)(struct RmClientResource *, NV0000_SYNC_GPU_BOOST_GROUP_INFO_PARAMS *);
NV_STATUS (*__cliresCtrlCmdVgpuGetVgpuVersion__)(struct RmClientResource *, NV0000_CTRL_VGPU_GET_VGPU_VERSION_PARAMS *);
NV_STATUS (*__cliresCtrlCmdVgpuSetVgpuVersion__)(struct RmClientResource *, NV0000_CTRL_VGPU_SET_VGPU_VERSION_PARAMS *);
NV_STATUS (*__cliresCtrlCmdVgpuVfioNotifyRMStatus__)(struct RmClientResource *, NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *);
NV_STATUS (*__cliresCtrlCmdSystemNVPCFGetPowerModeInfo__)(struct RmClientResource *, NV0000_CTRL_CMD_SYSTEM_NVPCF_GET_POWER_MODE_INFO_PARAMS *);
NV_STATUS (*__cliresCtrlCmdSystemSyncExternalFabricMgmt__)(struct RmClientResource *, NV0000_CTRL_CMD_SYSTEM_SYNC_EXTERNAL_FABRIC_MGMT_PARAMS *);
NV_STATUS (*__cliresCtrlCmdSystemPfmreqhndlrCtrl__)(struct RmClientResource *, NV0000_CTRL_SYSTEM_PFM_REQ_HNDLR_CTRL_PARAMS *);
@ -336,6 +337,7 @@ NV_STATUS __nvoc_objCreate_RmClientResource(RmClientResource**, Dynamic*, NvU32,
#define cliresCtrlCmdSyncGpuBoostGroupInfo(pRmCliRes, pParams) cliresCtrlCmdSyncGpuBoostGroupInfo_DISPATCH(pRmCliRes, pParams)
#define cliresCtrlCmdVgpuGetVgpuVersion(pRmCliRes, vgpuVersionInfo) cliresCtrlCmdVgpuGetVgpuVersion_DISPATCH(pRmCliRes, vgpuVersionInfo)
#define cliresCtrlCmdVgpuSetVgpuVersion(pRmCliRes, vgpuVersionInfo) cliresCtrlCmdVgpuSetVgpuVersion_DISPATCH(pRmCliRes, vgpuVersionInfo)
#define cliresCtrlCmdVgpuVfioNotifyRMStatus(pRmCliRes, pVgpuDeleteParams) cliresCtrlCmdVgpuVfioNotifyRMStatus_DISPATCH(pRmCliRes, pVgpuDeleteParams)
#define cliresCtrlCmdSystemNVPCFGetPowerModeInfo(pRmCliRes, pParams) cliresCtrlCmdSystemNVPCFGetPowerModeInfo_DISPATCH(pRmCliRes, pParams)
#define cliresCtrlCmdSystemSyncExternalFabricMgmt(pRmCliRes, pExtFabricMgmtParams) cliresCtrlCmdSystemSyncExternalFabricMgmt_DISPATCH(pRmCliRes, pExtFabricMgmtParams)
#define cliresCtrlCmdSystemPfmreqhndlrCtrl(pRmCliRes, pParams) cliresCtrlCmdSystemPfmreqhndlrCtrl_DISPATCH(pRmCliRes, pParams)
@ -959,6 +961,12 @@ static inline NV_STATUS cliresCtrlCmdVgpuSetVgpuVersion_DISPATCH(struct RmClient
return pRmCliRes->__cliresCtrlCmdVgpuSetVgpuVersion__(pRmCliRes, vgpuVersionInfo);
}
NV_STATUS cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL(struct RmClientResource *pRmCliRes, NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *pVgpuDeleteParams);
static inline NV_STATUS cliresCtrlCmdVgpuVfioNotifyRMStatus_DISPATCH(struct RmClientResource *pRmCliRes, NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *pVgpuDeleteParams) {
return pRmCliRes->__cliresCtrlCmdVgpuVfioNotifyRMStatus__(pRmCliRes, pVgpuDeleteParams);
}
NV_STATUS cliresCtrlCmdSystemNVPCFGetPowerModeInfo_IMPL(struct RmClientResource *pRmCliRes, NV0000_CTRL_CMD_SYSTEM_NVPCF_GET_POWER_MODE_INFO_PARAMS *pParams);
static inline NV_STATUS cliresCtrlCmdSystemNVPCFGetPowerModeInfo_DISPATCH(struct RmClientResource *pRmCliRes, NV0000_CTRL_CMD_SYSTEM_NVPCF_GET_POWER_MODE_INFO_PARAMS *pParams) {


@ -1098,6 +1098,12 @@ static void __nvoc_init_funcTable_OBJGPU_1(OBJGPU *pThis) {
}
// Hal function -- gpuIsDevModeEnabledInHw
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
{
pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_491d52;
}
else
{
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
{
pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_GH100;
@ -1107,8 +1113,15 @@ static void __nvoc_init_funcTable_OBJGPU_1(OBJGPU *pThis) {
{
pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_491d52;
}
}
// Hal function -- gpuIsProtectedPcieEnabledInHw
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
{
pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_491d52;
}
else
{
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
{
pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_GH100;
@ -1118,6 +1131,7 @@ static void __nvoc_init_funcTable_OBJGPU_1(OBJGPU *pThis) {
{
pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_491d52;
}
}
// Hal function -- gpuIsCtxBufAllocInPmaSupported
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x11f0fc00UL) )) /* ChipHal: GA100 | GA102 | GA103 | GA104 | GA106 | GA107 | AD102 | AD103 | AD104 | AD106 | AD107 | GH100 */


@ -3145,22 +3145,22 @@ static inline NvBool gpuIsCCEnabledInHw_DISPATCH(struct OBJGPU *pGpu) {
return pGpu->__gpuIsCCEnabledInHw__(pGpu);
}
NvBool gpuIsDevModeEnabledInHw_GH100(struct OBJGPU *pGpu);
static inline NvBool gpuIsDevModeEnabledInHw_491d52(struct OBJGPU *pGpu) {
return ((NvBool)(0 != 0));
}
NvBool gpuIsDevModeEnabledInHw_GH100(struct OBJGPU *pGpu);
static inline NvBool gpuIsDevModeEnabledInHw_DISPATCH(struct OBJGPU *pGpu) {
return pGpu->__gpuIsDevModeEnabledInHw__(pGpu);
}
NvBool gpuIsProtectedPcieEnabledInHw_GH100(struct OBJGPU *pGpu);
static inline NvBool gpuIsProtectedPcieEnabledInHw_491d52(struct OBJGPU *pGpu) {
return ((NvBool)(0 != 0));
}
NvBool gpuIsProtectedPcieEnabledInHw_GH100(struct OBJGPU *pGpu);
static inline NvBool gpuIsProtectedPcieEnabledInHw_DISPATCH(struct OBJGPU *pGpu) {
return pGpu->__gpuIsProtectedPcieEnabledInHw__(pGpu);
}


@ -1118,6 +1118,23 @@ static void __nvoc_init_funcTable_KernelGsp_1(KernelGsp *pThis, RmHalspecOwner *
}
}
// Hal function -- kgspPreserveVgpuPartitionLogging
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
{
pThis->__kgspPreserveVgpuPartitionLogging__ = &kgspPreserveVgpuPartitionLogging_395e98;
}
else
{
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x000007e0UL) )) /* ChipHal: TU102 | TU104 | TU106 | TU116 | TU117 | GA100 */
{
pThis->__kgspPreserveVgpuPartitionLogging__ = &kgspPreserveVgpuPartitionLogging_395e98;
}
else
{
pThis->__kgspPreserveVgpuPartitionLogging__ = &kgspPreserveVgpuPartitionLogging_IMPL;
}
}
// Hal function -- kgspFreeVgpuPartitionLogging
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
{


@ -361,6 +361,7 @@ struct KernelGsp {
NvU64 (*__kgspGetMaxWprHeapSizeMB__)(struct OBJGPU *, struct KernelGsp *);
NvU32 (*__kgspGetFwHeapParamOsCarveoutSize__)(struct OBJGPU *, struct KernelGsp *);
NV_STATUS (*__kgspInitVgpuPartitionLogging__)(struct OBJGPU *, struct KernelGsp *, NvU32, NvU64, NvU64, NvU64, NvU64);
NV_STATUS (*__kgspPreserveVgpuPartitionLogging__)(struct OBJGPU *, struct KernelGsp *, NvU32);
NV_STATUS (*__kgspFreeVgpuPartitionLogging__)(struct OBJGPU *, struct KernelGsp *, NvU32);
const char *(*__kgspGetSignatureSectionNamePrefix__)(struct OBJGPU *, struct KernelGsp *);
NV_STATUS (*__kgspSetupGspFmcArgs__)(struct OBJGPU *, struct KernelGsp *, GSP_FIRMWARE *);
@ -580,6 +581,8 @@ NV_STATUS __nvoc_objCreate_KernelGsp(KernelGsp**, Dynamic*, NvU32);
#define kgspGetFwHeapParamOsCarveoutSize_HAL(pGpu, pKernelGsp) kgspGetFwHeapParamOsCarveoutSize_DISPATCH(pGpu, pKernelGsp)
#define kgspInitVgpuPartitionLogging(pGpu, pKernelGsp, gfid, initTaskLogBUffOffset, initTaskLogBUffSize, vgpuTaskLogBUffOffset, vgpuTaskLogBuffSize) kgspInitVgpuPartitionLogging_DISPATCH(pGpu, pKernelGsp, gfid, initTaskLogBUffOffset, initTaskLogBUffSize, vgpuTaskLogBUffOffset, vgpuTaskLogBuffSize)
#define kgspInitVgpuPartitionLogging_HAL(pGpu, pKernelGsp, gfid, initTaskLogBUffOffset, initTaskLogBUffSize, vgpuTaskLogBUffOffset, vgpuTaskLogBuffSize) kgspInitVgpuPartitionLogging_DISPATCH(pGpu, pKernelGsp, gfid, initTaskLogBUffOffset, initTaskLogBUffSize, vgpuTaskLogBUffOffset, vgpuTaskLogBuffSize)
#define kgspPreserveVgpuPartitionLogging(pGpu, pKernelGsp, gfid) kgspPreserveVgpuPartitionLogging_DISPATCH(pGpu, pKernelGsp, gfid)
#define kgspPreserveVgpuPartitionLogging_HAL(pGpu, pKernelGsp, gfid) kgspPreserveVgpuPartitionLogging_DISPATCH(pGpu, pKernelGsp, gfid)
#define kgspFreeVgpuPartitionLogging(pGpu, pKernelGsp, gfid) kgspFreeVgpuPartitionLogging_DISPATCH(pGpu, pKernelGsp, gfid)
#define kgspFreeVgpuPartitionLogging_HAL(pGpu, pKernelGsp, gfid) kgspFreeVgpuPartitionLogging_DISPATCH(pGpu, pKernelGsp, gfid)
#define kgspGetSignatureSectionNamePrefix(pGpu, pKernelGsp) kgspGetSignatureSectionNamePrefix_DISPATCH(pGpu, pKernelGsp)
@ -1177,6 +1180,16 @@ static inline NV_STATUS kgspInitVgpuPartitionLogging_DISPATCH(struct OBJGPU *pGp
return pKernelGsp->__kgspInitVgpuPartitionLogging__(pGpu, pKernelGsp, gfid, initTaskLogBUffOffset, initTaskLogBUffSize, vgpuTaskLogBUffOffset, vgpuTaskLogBuffSize);
}
static inline NV_STATUS kgspPreserveVgpuPartitionLogging_395e98(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, NvU32 gfid) {
return NV_ERR_NOT_SUPPORTED;
}
NV_STATUS kgspPreserveVgpuPartitionLogging_IMPL(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, NvU32 gfid);
static inline NV_STATUS kgspPreserveVgpuPartitionLogging_DISPATCH(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, NvU32 gfid) {
return pKernelGsp->__kgspPreserveVgpuPartitionLogging__(pGpu, pKernelGsp, gfid);
}
static inline NV_STATUS kgspFreeVgpuPartitionLogging_395e98(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, NvU32 gfid) {
return NV_ERR_NOT_SUPPORTED;
}


@ -125,7 +125,6 @@ typedef struct KERNEL_HOST_VGPU_DEVICE
NvU32 chidOffset[RM_ENGINE_TYPE_LAST];
NvU32 channelCount[RM_ENGINE_TYPE_LAST]; /*Number of channels available to the VF*/
NvU8 vgpuUuid[RM_SHA1_GID_SIZE];
void *pVgpuVfioRef;
struct REQUEST_VGPU_INFO_NODE *pRequestVgpuInfoNode;
struct PhysMemSubAlloc *pPhysMemSubAlloc;
struct HOST_VGPU_DEVICE *pHostVgpuDevice;


@ -119,7 +119,6 @@ typedef struct _def_client_vgpu_ns_intr
NvU64 guestDomainId; // guest ID that we need to use to inject interrupt
NvU64 guestMSIAddr; // MSI address allocated by guest OS
NvU32 guestMSIData; // MSI data value set by guest OS
void *pVgpuVfioRef; // Reference to vgpu device in nvidia-vgpu-vfio module
void *pEventDpc; // DPC event to pass the interrupt
} VGPU_NS_INTR;


@ -981,10 +981,12 @@ static const CHIPS_RELEASED sChipsReleased[] = {
{ 0x25AD, 0x0000, 0x0000, "NVIDIA GeForce RTX 2050" },
{ 0x25B0, 0x1878, 0x1028, "NVIDIA RTX A1000" },
{ 0x25B0, 0x1878, 0x103c, "NVIDIA RTX A1000" },
{ 0x25B0, 0x8d96, 0x103c, "NVIDIA RTX A1000" },
{ 0x25B0, 0x1878, 0x10de, "NVIDIA RTX A1000" },
{ 0x25B0, 0x1878, 0x17aa, "NVIDIA RTX A1000" },
{ 0x25B2, 0x1879, 0x1028, "NVIDIA RTX A400" },
{ 0x25B2, 0x1879, 0x103c, "NVIDIA RTX A400" },
{ 0x25B2, 0x8d95, 0x103c, "NVIDIA RTX A400" },
{ 0x25B2, 0x1879, 0x10de, "NVIDIA RTX A400" },
{ 0x25B2, 0x1879, 0x17aa, "NVIDIA RTX A400" },
{ 0x25B6, 0x14a9, 0x10de, "NVIDIA A16" },
@ -1059,6 +1061,7 @@ static const CHIPS_RELEASED sChipsReleased[] = {
{ 0x2805, 0x0000, 0x0000, "NVIDIA GeForce RTX 4060 Ti" },
{ 0x2808, 0x0000, 0x0000, "NVIDIA GeForce RTX 4060" },
{ 0x2820, 0x0000, 0x0000, "NVIDIA GeForce RTX 4070 Laptop GPU" },
{ 0x2822, 0x0000, 0x0000, "NVIDIA GeForce RTX 3050 A Laptop GPU" },
{ 0x2838, 0x0000, 0x0000, "NVIDIA RTX 3000 Ada Generation Laptop GPU" },
{ 0x2860, 0x0000, 0x0000, "NVIDIA GeForce RTX 4070 Laptop GPU" },
{ 0x2882, 0x0000, 0x0000, "NVIDIA GeForce RTX 4060" },


@ -308,6 +308,12 @@ typedef struct SYS_STATIC_CONFIG
/*! Indicates confidentail compute OS support is enabled or not */
NvBool bOsCCEnabled;
/*! Indicates SEV-SNP confidential compute OS support is enabled or not */
NvBool bOsCCSevSnpEnabled;
/*! Indicates SEV-SNP vTOM confidential compute OS support is enabled or not */
NvBool bOsCCSnpVtomEnabled;
/*! Indicates Intel TDX confidentail compute OS support is enabled or not */
NvBool bOsCCTdxEnabled;
} SYS_STATIC_CONFIG;


@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2021-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@ -63,4 +63,12 @@
 */
#define FLCN_RESET_PROPAGATION_DELAY_COUNT 10
/*!
* Used by FALCON_DMATRFCMD polling functions to wait for _FULL==FALSE or _IDLE==TRUE
*/
typedef enum {
FLCN_DMA_POLL_QUEUE_NOT_FULL = 0,
FLCN_DMA_POLL_ENGINE_IDLE = 1
} FlcnDmaPollMode;
#endif // FALCON_COMMON_H


@ -80,7 +80,15 @@ confComputeApiCtrlCmdSystemGetCapabilities_IMPL
if ((sysGetStaticConfig(pSys))->bOsCCEnabled)
{
pParams->cpuCapability = NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SEV;
if ((sysGetStaticConfig(pSys))->bOsCCTdxEnabled)
if ((sysGetStaticConfig(pSys))->bOsCCSevSnpEnabled)
{
pParams->cpuCapability = NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SEV_SNP;
}
else if ((sysGetStaticConfig(pSys))->bOsCCSnpVtomEnabled)
{
pParams->cpuCapability = NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SNP_VTOM;
}
else if ((sysGetStaticConfig(pSys))->bOsCCTdxEnabled)
{
pParams->cpuCapability = NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_INTEL_TDX;
}


@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2021-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2021-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@ -35,6 +35,67 @@
#include "published/ampere/ga102/dev_falcon_second_pri.h"
#include "published/ampere/ga102/dev_fbif_v4.h"
static GpuWaitConditionFunc s_dmaPollCondFunc;
typedef struct {
KernelFalcon *pKernelFlcn;
NvU32 pollMask;
NvU32 pollValue;
} DmaPollCondData;
static NvBool
s_dmaPollCondFunc
(
OBJGPU *pGpu,
void *pVoid
)
{
DmaPollCondData *pData = (DmaPollCondData *)pVoid;
return ((kflcnRegRead_HAL(pGpu, pData->pKernelFlcn, NV_PFALCON_FALCON_DMATRFCMD) & pData->pollMask) == pData->pollValue);
}
/*!
* Poll on either _FULL or _IDLE field of NV_PFALCON_FALCON_DMATRFCMD
*
* @param[in] pGpu GPU object pointer
* @param[in] pKernelFlcn pKernelFlcn object pointer
* @param[in] mode FLCN_DMA_POLL_QUEUE_NOT_FULL for poll on _FULL; return when _FULL is false
* FLCN_DMA_POLL_ENGINE_IDLE for poll on _IDLE; return when _IDLE is true
*/
static NV_STATUS
s_dmaPoll_GA102
(
OBJGPU *pGpu,
KernelFalcon *pKernelFlcn,
FlcnDmaPollMode mode
)
{
NV_STATUS status;
DmaPollCondData data;
data.pKernelFlcn = pKernelFlcn;
if (mode == FLCN_DMA_POLL_QUEUE_NOT_FULL)
{
data.pollMask = DRF_SHIFTMASK(NV_PFALCON_FALCON_DMATRFCMD_FULL);
data.pollValue = DRF_DEF(_PFALCON, _FALCON_DMATRFCMD, _FULL, _FALSE);
}
else
{
data.pollMask = DRF_SHIFTMASK(NV_PFALCON_FALCON_DMATRFCMD_IDLE);
data.pollValue = DRF_DEF(_PFALCON, _FALCON_DMATRFCMD, _IDLE, _TRUE);
}
status = gpuTimeoutCondWait(pGpu, s_dmaPollCondFunc, &data, NULL);
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "Error while waiting for Falcon DMA; mode: %d, status: 0x%08x\n", mode, status);
DBG_BREAKPOINT();
return status;
}
return NV_OK;
}
static NV_STATUS
s_dmaTransfer_GA102
(
@ -48,15 +109,20 @@ s_dmaTransfer_GA102
)
{
NV_STATUS status = NV_OK;
RMTIMEOUT timeout;
NvU32 data;
NvU32 bytesXfered = 0;
// Ensure request queue initially has space or writing base registers will corrupt DMA transfer.
NV_CHECK_OK_OR_RETURN(LEVEL_SILENT, s_dmaPoll_GA102(pGpu, pKernelFlcn, FLCN_DMA_POLL_QUEUE_NOT_FULL));
kflcnRegWrite_HAL(pGpu, pKernelFlcn, NV_PFALCON_FALCON_DMATRFBASE, NvU64_LO32(srcPhysAddr >> 8));
kflcnRegWrite_HAL(pGpu, pKernelFlcn, NV_PFALCON_FALCON_DMATRFBASE1, NvU64_HI32(srcPhysAddr >> 8) & 0x1FF);
while (bytesXfered < sizeInBytes)
{
// Poll for non-full request queue as writing control registers when full will corrupt DMA transfer.
NV_CHECK_OK_OR_RETURN(LEVEL_SILENT, s_dmaPoll_GA102(pGpu, pKernelFlcn, FLCN_DMA_POLL_QUEUE_NOT_FULL));
data = FLD_SET_DRF_NUM(_PFALCON, _FALCON_DMATRFMOFFS, _OFFS, dest, 0);
kflcnRegWrite_HAL(pGpu, pKernelFlcn, NV_PFALCON_FALCON_DMATRFMOFFS, data);
@@ -66,28 +132,17 @@ s_dmaTransfer_GA102
        // Write the command
        kflcnRegWrite_HAL(pGpu, pKernelFlcn, NV_PFALCON_FALCON_DMATRFCMD, dmaCmd);
-        // Poll for completion
-        data = kflcnRegRead_HAL(pGpu, pKernelFlcn, NV_PFALCON_FALCON_DMATRFCMD);
-        gpuSetTimeout(pGpu, GPU_TIMEOUT_DEFAULT, &timeout, 0);
-        while(FLD_TEST_DRF(_PFALCON_FALCON, _DMATRFCMD, _IDLE, _FALSE, data))
-        {
-            status = gpuCheckTimeout(pGpu, &timeout);
-            if (status == NV_ERR_TIMEOUT)
-            {
-                NV_PRINTF(LEVEL_ERROR, "Timeout waiting for Falcon DMA to finish\n");
-                DBG_BREAKPOINT();
-                return status;
-            }
-            osSpinLoop();
-            data = kflcnRegRead_HAL(pGpu, pKernelFlcn, NV_PFALCON_FALCON_DMATRFCMD);
-        }
        bytesXfered += FLCN_BLK_ALIGNMENT;
        dest += FLCN_BLK_ALIGNMENT;
        memOff += FLCN_BLK_ALIGNMENT;
    }
+    //
+    // Poll for completion. GA10x+ does not have TCM tagging so DMA operations to/from TCM should
+    // wait for DMA to complete before launching another operation to avoid memory ordering problems.
+    //
+    NV_CHECK_OK_OR_RETURN(LEVEL_SILENT, s_dmaPoll_GA102(pGpu, pKernelFlcn, FLCN_DMA_POLL_ENGINE_IDLE));
    return status;
}
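
The change above replaces the open-coded DMATRFCMD poll loop (gpuSetTimeout/gpuCheckTimeout/osSpinLoop) with a single condition callback driven by gpuTimeoutCondWait, and it polls for a non-full request queue before every register write plus a final idle check after the loop. The sketch below is a minimal, self-contained model of that condition-wait pattern; timeoutCondWait, regFieldMatches, PollCondData and the clock helpers are illustrative stand-ins invented for this sketch, not the RM APIs themselves.

```c
/*
 * Minimal model of the poll-with-timeout pattern: a caller packs the register,
 * mask and expected value into a context struct and hands a predicate to a
 * generic wait routine, instead of open-coding the timeout loop at every call
 * site. All names here are stand-ins for gpuTimeoutCondWait(),
 * s_dmaPollCondFunc() and the DMATRFCMD register access.
 */
#include <stdbool.h>
#include <stdint.h>
#include <time.h>

typedef struct {
    volatile uint32_t *reg;   /* stands in for NV_PFALCON_FALCON_DMATRFCMD */
    uint32_t mask;            /* e.g. the _FULL or _IDLE field mask        */
    uint32_t value;           /* expected value of the masked field        */
} PollCondData;

static bool regFieldMatches(void *ctx)
{
    PollCondData *d = ctx;
    return ((*d->reg) & d->mask) == d->value;
}

static uint64_t nowNs(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000000ull + (uint64_t)ts.tv_nsec;
}

/* Poll cond(ctx) until it holds or timeoutNs elapses; 0 on success, -1 on timeout. */
static int timeoutCondWait(bool (*cond)(void *), void *ctx, uint64_t timeoutNs)
{
    uint64_t deadline = nowNs() + timeoutNs;

    while (!cond(ctx))
    {
        if (nowNs() > deadline)
            return -1;        /* plays the role of NV_ERR_TIMEOUT */
        /* a real implementation would spin-loop or yield here    */
    }
    return 0;                 /* plays the role of NV_OK */
}
```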

View File

@@ -2390,6 +2390,31 @@ error_cleanup:
    return nvStatus;
}
+
+/*!
+ * Preserve vGPU Partition log buffers between VM reboots
+ */
+NV_STATUS
+kgspPreserveVgpuPartitionLogging_IMPL
+(
+    OBJGPU *pGpu,
+    KernelGsp *pKernelGsp,
+    NvU32 gfid
+)
+{
+    if ((gfid == 0) || (gfid > MAX_PARTITIONS_WITH_GFID))
+    {
+        return NV_ERR_INVALID_ARGUMENT;
+    }
+
+    // Make sure this NvLog buffer is pushed
+    kgspDumpGspLogsUnlocked(pKernelGsp, NV_FALSE);
+
+    // Preserve any captured vGPU Partition logs
+    libosPreserveLogs(&pKernelGsp->logDecodeVgpuPartition[gfid - 1]);
+
+    return NV_OK;
+}
+
void kgspNvlogFlushCb(void *pKernelGsp)
{
    if (pKernelGsp != NULL)
@@ -3449,7 +3474,9 @@ kgspDumpGspLogsUnlocked_IMPL
    NvBool bSyncNvLog
)
{
-    if (pKernelGsp->bInInit || pKernelGsp->pLogElf || bSyncNvLog)
+    if (pKernelGsp->bInInit || pKernelGsp->pLogElf || bSyncNvLog
+        || pKernelGsp->bHasVgpuLogs
+       )
    {
        libosExtractLogs(&pKernelGsp->logDecode, bSyncNvLog);
@@ -3479,7 +3506,9 @@ kgspDumpGspLogs_IMPL
    NvBool bSyncNvLog
)
{
-    if (pKernelGsp->bInInit || pKernelGsp->pLogElf || bSyncNvLog)
+    if (pKernelGsp->bInInit || pKernelGsp->pLogElf || bSyncNvLog
+        || pKernelGsp->bHasVgpuLogs
+       )
    {
        if (pKernelGsp->pNvlogFlushMtx != NULL)
            portSyncMutexAcquire(pKernelGsp->pNvlogFlushMtx);
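
The new kgspPreserveVgpuPartitionLogging_IMPL entry point above rejects GFID 0 (which does not name a guest partition), accepts partition IDs 1..MAX_PARTITIONS_WITH_GFID, and maps them onto the zero-based logDecodeVgpuPartition array. A small sketch of that 1-based-ID validation and index mapping; the value of MAX_PARTITIONS_WITH_GFID and the LogDecode type below are placeholders, not the RM definitions.

```c
/* Illustrative model of the GFID guard in kgspPreserveVgpuPartitionLogging_IMPL. */
#include <stdio.h>

#define MAX_PARTITIONS_WITH_GFID 64        /* placeholder; not the RM value */

typedef struct { int dummy; } LogDecode;   /* stand-in for the libos log decoder */

static LogDecode logDecodeVgpuPartition[MAX_PARTITIONS_WITH_GFID];

/* Returns 0 (NV_OK) or -1 (NV_ERR_INVALID_ARGUMENT). */
static int preservePartitionLogs(unsigned int gfid)
{
    /* GFID 0 does not refer to a guest partition, so there is nothing to preserve. */
    if ((gfid == 0) || (gfid > MAX_PARTITIONS_WITH_GFID))
        return -1;

    /* gfid is 1-based; the per-partition log buffers are 0-based. */
    LogDecode *pLogs = &logDecodeVgpuPartition[gfid - 1];
    printf("would preserve log buffer %p for partition %u\n", (void *)pLogs, gfid);
    return 0;
}

int main(void)
{
    return preservePartitionLogs(1);       /* first guest partition -> slot 0 */
}
```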

View File

@@ -241,7 +241,6 @@ subdeviceCtrlCmdEventSetSemaphoreMemory_IMPL
    pMemory->vgpuNsIntr.guestMSIAddr = 0;
    pMemory->vgpuNsIntr.guestMSIData = 0;
    pMemory->vgpuNsIntr.guestDomainId = 0;
-    pMemory->vgpuNsIntr.pVgpuVfioRef = NULL;
    pMemory->vgpuNsIntr.isSemaMemValidationEnabled = NV_TRUE;
    return NV_OK;

View File

@@ -436,21 +436,22 @@ tmrEventTimeUntilNextCallback_IMPL
    NvU64 currentTime;
    NvU64 nextAlarmTime;
-    NV_ASSERT_OK_OR_RETURN(tmrGetCurrentTime(pTmr, &currentTime));
    TMR_EVENT_PVT *pEvent = (TMR_EVENT_PVT*)pEventPublic;
    if (tmrIsOSTimer(pTmr, pEventPublic))
    {
+        osGetCurrentTick(&currentTime);
        // timens corresponds to relative time for OS timer
        NV_CHECK_OR_RETURN(LEVEL_ERROR, portSafeAddU64(pEvent->timens, pEvent->startTimeNs, &nextAlarmTime),
                           NV_ERR_INVALID_ARGUMENT);
    }
    else
    {
+        NV_ASSERT_OK_OR_RETURN(tmrGetCurrentTime(pTmr, &currentTime));
        // timens corresponds to abs time in case of ptimer
        nextAlarmTime = pEvent->timens;
    }
-    if (currentTime >= nextAlarmTime)
+    if (currentTime > nextAlarmTime)
        return NV_ERR_INVALID_STATE;
    *pTimeUntilCallbackNs = nextAlarmTime - currentTime;
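
Two things change in this hunk: the OS-timer path now reads the tick clock its deadlines are scheduled against (osGetCurrentTick) and adds the stored relative offset to the event's start time, while the PTIMER path keeps using tmrGetCurrentTime with an absolute timens; and the comparison becomes strict, so an alarm that is due exactly now reports zero time remaining instead of failing. Below is a self-contained model of that calculation; plain uint64_t arithmetic stands in for the RM helpers and overflow checking is omitted.

```c
/*
 * Illustrative model of tmrEventTimeUntilNextCallback's two cases. Plain
 * integers replace osGetCurrentTick()/tmrGetCurrentTime() and portSafeAddU64().
 */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Returns false when the alarm already expired (NV_ERR_INVALID_STATE). */
static bool timeUntilNextCallback(bool isOsTimer,
                                  uint64_t currentTimeNs,
                                  uint64_t startTimeNs,
                                  uint64_t timens,
                                  uint64_t *pRemainingNs)
{
    uint64_t nextAlarmTime;

    if (isOsTimer)
        nextAlarmTime = startTimeNs + timens;  /* timens is relative to start   */
    else
        nextAlarmTime = timens;                /* timens is already absolute    */

    if (currentTimeNs > nextAlarmTime)         /* strict: "due now" is still OK */
        return false;

    *pRemainingNs = nextAlarmTime - currentTimeNs;
    return true;
}

int main(void)
{
    uint64_t remaining;

    /* OS timer armed at t=100 for +50ns, queried at t=120 -> 30ns remain. */
    if (timeUntilNextCallback(true, 120, 100, 50, &remaining))
        printf("remaining: %llu ns\n", (unsigned long long)remaining);
    return 0;
}
```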

View File

@@ -4477,6 +4477,23 @@ cliresCtrlCmdSyncGpuBoostGroupInfo_IMPL
    return status;
}
+
+NV_STATUS
+cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL
+(
+    RmClientResource *pRmCliRes,
+    NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *pVgpuStatusParams
+)
+{
+    if (osIsVgpuVfioPresent() != NV_OK)
+        return NV_ERR_NOT_SUPPORTED;
+
+    osWakeRemoveVgpu(pVgpuStatusParams->gpuId, pVgpuStatusParams->returnStatus);
+
+    return NV_OK;
+}
+
NV_STATUS
cliresCtrlCmdVgpuGetVgpuVersion_IMPL
(
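
The new NV0000 control above is a thin gate-and-forward handler: if the vGPU VFIO stack is not present it returns NV_ERR_NOT_SUPPORTED, otherwise it passes the GPU ID and return status through to the OS layer so a pending VFIO remove can be woken up. The sketch below only models that shape; notifyRmStatus, vfioPresent and wakeRemoveVgpu are placeholders, not the RM or OS-interface functions themselves.

```c
/* Shape of the gate-and-forward control handler, modeled with placeholders. */
#include <stdbool.h>
#include <stdint.h>

typedef struct {
    uint32_t gpuId;
    uint32_t returnStatus;
} VgpuVfioNotifyRmStatusParams;           /* mirrors the two fields used above */

static bool vfioPresent(void)                        { return true; }
static void wakeRemoveVgpu(uint32_t id, uint32_t st) { (void)id; (void)st; }

static int notifyRmStatus(const VgpuVfioNotifyRmStatusParams *p)
{
    if (!vfioPresent())
        return -1;                        /* NV_ERR_NOT_SUPPORTED */

    wakeRemoveVgpu(p->gpuId, p->returnStatus);
    return 0;                             /* NV_OK */
}
```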

View File

@@ -273,6 +273,10 @@ kernelhostvgpudeviceapiConstruct_IMPL
    status = pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice,
                             NV2080_CTRL_CMD_VGPU_MGR_INTERNAL_BOOTLOAD_GSP_VGPU_PLUGIN_TASK,
                             pBootloadParams, sizeof(*pBootloadParams));
+
+    // Preserve any captured vGPU Partition logs
+    NV_ASSERT_OK(kgspPreserveVgpuPartitionLogging(pGpu, pKernelGsp, pAllocParams->gfid));
+
    if (status != NV_OK)
    {
        NV_PRINTF(LEVEL_ERROR, "Failed to call NV2080_CTRL_CMD_VGPU_MGR_INTERNAL_BOOTLOAD_GSP_VGPU_PLUGIN_TASK\n");

View File

@@ -1,4 +1,4 @@
-NVIDIA_VERSION = 550.100
+NVIDIA_VERSION = 550.107.02

# This file.
VERSION_MK_FILE := $(lastword $(MAKEFILE_LIST))