Compare commits
3 Commits

| Author | SHA1 | Date |
|---|---|---|
|  | 9940d2229a |  |
|  | 5e52edb203 |  |
|  | 2cca8b3fd5 |  |

222  CHANGELOG.md

@@ -1,222 +0,0 @@
-# Changelog
-
-## Release 550 Entries
-
-### [550.100] 2024-07-09
-
-### [550.90.07] 2024-06-04
-
-### [550.78] 2024-04-25
-
-### [550.76] 2024-04-17
-
-### [550.67] 2024-03-19
-
-### [550.54.15] 2024-03-18
-
-### [550.54.14] 2024-02-23
-
-#### Added
-
-- Added vGPU Host and vGPU Guest support. For vGPU Host, please refer to the README.vgpu packaged in the vGPU Host Package for more details.
-
-### [550.40.07] 2024-01-24
-
-#### Fixed
-
-- Set INSTALL_MOD_DIR only if it's not defined, [#570](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/570) by @keelung-yang
-
-## Release 545 Entries
-
-### [545.29.06] 2023-11-22
-
-#### Fixed
-
-- The brightness control of NVIDIA seems to be broken, [#573](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/573)
-
-### [545.29.02] 2023-10-31
-
-### [545.23.06] 2023-10-17
-
-#### Fixed
-
-- Fix always-false conditional, [#493](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/493) by @meme8383
-
-#### Added
-
-- Added beta-quality support for GeForce and Workstation GPUs. Please see the "Open Linux Kernel Modules" chapter in the NVIDIA GPU driver end user README for details.
-
-## Release 535 Entries
-
-### [535.129.03] 2023-10-31
-
-### [535.113.01] 2023-09-21
-
-#### Fixed
-
-- Fixed building main against current centos stream 8 fails, [#550](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/550) by @airlied
-
-### [535.104.05] 2023-08-22
-
-### [535.98] 2023-08-08
-
-### [535.86.10] 2023-07-31
-
-### [535.86.05] 2023-07-18
-
-### [535.54.03] 2023-06-14
-
-### [535.43.02] 2023-05-30
-
-#### Fixed
-
-- Fixed console restore with traditional VGA consoles.
-
-#### Added
-
-- Added support for Run Time D3 (RTD3) on Ampere and later GPUs.
-- Added support for G-Sync on desktop GPUs.
-
-## Release 530 Entries
-
-### [530.41.03] 2023-03-23
-
-### [530.30.02] 2023-02-28
-
-#### Changed
-
-- GSP firmware is now distributed as `gsp_tu10x.bin` and `gsp_ga10x.bin` to better reflect the GPU architectures supported by each firmware file in this release.
-- The .run installer will continue to install firmware to /lib/firmware/nvidia/<version> and the nvidia.ko kernel module will load the appropriate firmware for each GPU at runtime.
-
-#### Fixed
-
-- Add support for resizable BAR on Linux when NVreg_EnableResizableBar=1 module param is set. [#3](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/3) by @sjkelly
-
-#### Added
-
-- Support for power management features like Suspend, Hibernate and Resume.
-
-## Release 525 Entries
-
-### [525.147.05] 2023-10-31
-
-#### Fixed
-
-- Fix nvidia_p2p_get_pages(): Fix double-free in register-callback error path, [#557](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/557) by @BrendanCunningham
-
-### [525.125.06] 2023-06-26
-
-### [525.116.04] 2023-05-09
-
-### [525.116.03] 2023-04-25
-
-### [525.105.17] 2023-03-30
-
-### [525.89.02] 2023-02-08
-
-### [525.85.12] 2023-01-30
-
-### [525.85.05] 2023-01-19
-
-#### Fixed
-
-- Fix build problems with Clang 15.0, [#377](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/377) by @ptr1337
-
-### [525.78.01] 2023-01-05
-
-### [525.60.13] 2022-12-05
-
-### [525.60.11] 2022-11-28
-
-#### Fixed
-
-- Fixed nvenc compatibility with usermode clients [#104](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/104)
-
-### [525.53] 2022-11-10
-
-#### Changed
-
-- GSP firmware is now distributed as multiple firmware files: this release has `gsp_tu10x.bin` and `gsp_ad10x.bin` replacing `gsp.bin` from previous releases.
-- Each file is named after a GPU architecture and supports GPUs from one or more architectures. This allows GSP firmware to better leverage each architecture's capabilities.
-- The .run installer will continue to install firmware to `/lib/firmware/nvidia/<version>` and the `nvidia.ko` kernel module will load the appropriate firmware for each GPU at runtime.
-
-#### Fixed
-
-- Add support for IBT (indirect branch tracking) on supported platforms, [#256](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/256) by @rnd-ash
-- Return EINVAL when [failing to] allocating memory, [#280](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/280) by @YusufKhan-gamedev
-- Fix various typos in nvidia/src/kernel, [#16](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/16) by @alexisgeoffrey
-- Added support for rotation in X11, Quadro Sync, Stereo, and YUV 4:2:0 on Turing.
-
-## Release 520 Entries
-
-### [520.61.07] 2022-10-20
-
-### [520.56.06] 2022-10-12
-
-#### Added
-
-- Introduce support for GeForce RTX 4090 GPUs.
-
-### [520.61.05] 2022-10-10
-
-#### Added
-
-- Introduce support for NVIDIA H100 GPUs.
-
-#### Fixed
-
-- Fix/Improve Makefile, [#308](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/308/) by @izenynn
-- Make nvLogBase2 more efficient, [#177](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/177/) by @DMaroo
-- nv-pci: fixed always true expression, [#195](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/195/) by @ValZapod
-
-## Release 515 Entries
-
-### [515.76] 2022-09-20
-
-#### Fixed
-
-- Improved compatibility with new Linux kernel releases
-- Fixed possible excessive GPU power draw on an idle X11 or Wayland desktop when driving high resolutions or refresh rates
-
-### [515.65.07] 2022-10-19
-
-### [515.65.01] 2022-08-02
-
-#### Fixed
-
-- Collection of minor fixes to issues, [#61](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/61) by @Joshua-Ashton
-- Remove unnecessary use of acpi_bus_get_device().
-
-### [515.57] 2022-06-28
-
-#### Fixed
-
-- Backtick is deprecated, [#273](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/273) by @arch-user-france1
-
-### [515.48.07] 2022-05-31
-
-#### Added
-
-- List of compatible GPUs in README.md.
-
-#### Fixed
-
-- Fix various README capitalizations, [#8](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/8) by @27lx
-- Automatically tag bug report issues, [#15](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/15) by @thebeanogamer
-- Improve conftest.sh Script, [#37](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/37) by @Nitepone
-- Update HTTP link to HTTPS, [#101](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/101) by @alcaparra
-- moved array sanity check to before the array access, [#117](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/117) by @RealAstolfo
-- Fixed some typos, [#122](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/122) by @FEDOyt
-- Fixed capitalization, [#123](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/123) by @keroeslux
-- Fix typos in NVDEC Engine Descriptor, [#126](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/126) from @TrickyDmitriy
-- Extranous apostrohpes in a makefile script [sic], [#14](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/14) by @kiroma
-- HDMI no audio @ 4K above 60Hz, [#75](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/75) by @adolfotregosa
-- dp_configcaps.cpp:405: array index sanity check in wrong place?, [#110](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/110) by @dcb314
-- NVRM kgspInitRm_IMPL: missing NVDEC0 engine, cannot initialize GSP-RM, [#116](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/116) by @kfazz
-- ERROR: modpost: "backlight_device_register" [...nvidia-modeset.ko] undefined, [#135](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/135) by @sndirsch
-- aarch64 build fails, [#151](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/151) by @frezbo
-
-### [515.43.04] 2022-05-11
-
-- Initial release.
@@ -1,7 +1,7 @@
 # NVIDIA Linux Open GPU Kernel Module Source

 This is the source release of the NVIDIA Linux open GPU kernel modules,
-version 550.100.
+version 550.127.05.


 ## How to Build

@@ -17,7 +17,7 @@ as root:

 Note that the kernel modules built here must be used with GSP
 firmware and user-space NVIDIA GPU driver components from a corresponding
-550.100 driver release. This can be achieved by installing
+550.127.05 driver release. This can be achieved by installing
 the NVIDIA GPU driver from the .run file using the `--no-kernel-modules`
 option. E.g.,

@@ -188,7 +188,7 @@ encountered specific to them.
 For details on feature support and limitations, see the NVIDIA GPU driver
 end user README here:

-https://us.download.nvidia.com/XFree86/Linux-x86_64/550.100/README/kernel_open.html
+https://us.download.nvidia.com/XFree86/Linux-x86_64/550.127.05/README/kernel_open.html

 For vGPU support, please refer to the README.vgpu packaged in the vGPU Host
 Package for more details.
@@ -912,6 +912,7 @@ Subsystem Device ID.
 | NVIDIA GeForce RTX 4060 Ti | 2805 |
 | NVIDIA GeForce RTX 4060 | 2808 |
 | NVIDIA GeForce RTX 4070 Laptop GPU | 2820 |
+| NVIDIA GeForce RTX 3050 A Laptop GPU | 2822 |
 | NVIDIA RTX 3000 Ada Generation Laptop GPU | 2838 |
 | NVIDIA GeForce RTX 4070 Laptop GPU | 2860 |
 | NVIDIA GeForce RTX 4060 | 2882 |
@@ -72,7 +72,7 @@ EXTRA_CFLAGS += -I$(src)/common/inc
 EXTRA_CFLAGS += -I$(src)
 EXTRA_CFLAGS += -Wall $(DEFINES) $(INCLUDES) -Wno-cast-qual -Wno-format-extra-args
 EXTRA_CFLAGS += -D__KERNEL__ -DMODULE -DNVRM
-EXTRA_CFLAGS += -DNV_VERSION_STRING=\"550.100\"
+EXTRA_CFLAGS += -DNV_VERSION_STRING=\"550.127.05\"

 ifneq ($(SYSSRCHOST1X),)
 EXTRA_CFLAGS += -I$(SYSSRCHOST1X)
@@ -28,7 +28,7 @@ else
 else
 KERNEL_UNAME ?= $(shell uname -r)
 KERNEL_MODLIB := /lib/modules/$(KERNEL_UNAME)
-KERNEL_SOURCES := $(shell test -d $(KERNEL_MODLIB)/source && echo $(KERNEL_MODLIB)/source || echo $(KERNEL_MODLIB)/build)
+KERNEL_SOURCES := $(shell ((test -d $(KERNEL_MODLIB)/source && echo $(KERNEL_MODLIB)/source) || (test -d $(KERNEL_MODLIB)/build/source && echo $(KERNEL_MODLIB)/build/source)) || echo $(KERNEL_MODLIB)/build)
 endif

 KERNEL_OUTPUT := $(KERNEL_SOURCES)
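The new `KERNEL_SOURCES` assignment falls back through three candidate directories: `source`, then `build/source`, then `build`. The resolution order can be sketched in plain shell against a throwaway directory tree (paths here are illustrative, not the real `/lib/modules` layout):

```shell
# Simulate a kernel tree where headers live under build/source
# (as on distributions that split the source and output trees).
MODLIB=$(mktemp -d)
mkdir -p "$MODLIB/build/source"

# Same fallback chain as the KERNEL_SOURCES assignment above:
# prefer source/, then build/source/, then build/.
SRC=$( (test -d "$MODLIB/source" && echo "$MODLIB/source") \
    || (test -d "$MODLIB/build/source" && echo "$MODLIB/build/source") \
    || echo "$MODLIB/build")

echo "${SRC#"$MODLIB"}"   # -> /build/source
rm -rf "$MODLIB"
```

Because `source/` is absent in this simulated tree, the second `test -d` wins and `build/source` is selected, which is exactly the case the old one-level fallback missed.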
@@ -42,7 +42,11 @@ else
 else
 KERNEL_UNAME ?= $(shell uname -r)
 KERNEL_MODLIB := /lib/modules/$(KERNEL_UNAME)
-ifeq ($(KERNEL_SOURCES), $(KERNEL_MODLIB)/source)
+# $(filter pattern...,text) - Returns all whitespace-separated words in text that
+# do match any of the pattern words, removing any words that do not match.
+# Set the KERNEL_OUTPUT only if either $(KERNEL_MODLIB)/source or
+# $(KERNEL_MODLIB)/build/source path matches the KERNEL_SOURCES.
+ifneq ($(filter $(KERNEL_SOURCES),$(KERNEL_MODLIB)/source $(KERNEL_MODLIB)/build/source),)
 KERNEL_OUTPUT := $(KERNEL_MODLIB)/build
 KBUILD_PARAMS := KBUILD_OUTPUT=$(KERNEL_OUTPUT)
 endif
@@ -474,7 +474,9 @@ static inline void *nv_vmalloc(unsigned long size)
     void *ptr = __vmalloc(size, GFP_KERNEL);
 #endif
     if (ptr)
+    {
         NV_MEMDBG_ADD(ptr, size);
+    }
     return ptr;
 }

@@ -492,7 +494,9 @@ static inline void *nv_ioremap(NvU64 phys, NvU64 size)
     void *ptr = ioremap(phys, size);
 #endif
     if (ptr)
+    {
         NV_MEMDBG_ADD(ptr, size);
+    }
     return ptr;
 }

@@ -528,8 +532,9 @@ static inline void *nv_ioremap_cache(NvU64 phys, NvU64 size)
 #endif

     if (ptr)
     {
         NV_MEMDBG_ADD(ptr, size);
+
     }
     return ptr;
 }

@@ -545,8 +550,9 @@ static inline void *nv_ioremap_wc(NvU64 phys, NvU64 size)
 #endif

     if (ptr)
     {
         NV_MEMDBG_ADD(ptr, size);
+
     }
     return ptr;
 }

@@ -675,7 +681,9 @@ static inline NvUPtr nv_vmap(struct page **pages, NvU32 page_count,
     /* All memory cached in PPC64LE; can't honor 'cached' input. */
     ptr = vmap(pages, page_count, VM_MAP, prot);
     if (ptr)
+    {
         NV_MEMDBG_ADD(ptr, page_count * PAGE_SIZE);
+    }
     return (NvUPtr)ptr;
 }
|
@ -1045,7 +1045,7 @@ NV_STATUS NV_API_CALL nv_vgpu_get_bar_info(nvidia_stack_t *, nv_state_t *, con
|
||||
NvU64 *, NvU64 *, NvU32 *, NvBool *, NvU8 *);
|
||||
NV_STATUS NV_API_CALL nv_vgpu_get_hbm_info(nvidia_stack_t *, nv_state_t *, const NvU8 *, NvU64 *, NvU64 *);
|
||||
NV_STATUS NV_API_CALL nv_vgpu_process_vf_info(nvidia_stack_t *, nv_state_t *, NvU8, NvU32, NvU8, NvU8, NvU8, NvBool, void *);
|
||||
NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *);
|
||||
NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *, NvU32, NvBool *);
|
||||
NV_STATUS NV_API_CALL nv_gpu_unbind_event(nvidia_stack_t *, NvU32, NvBool *);
|
||||
|
||||
NV_STATUS NV_API_CALL nv_get_usermap_access_params(nv_state_t*, nv_usermap_access_params_t*);
|
||||
|
@@ -592,6 +592,13 @@ void nvUvmInterfaceChannelDestroy(uvmGpuChannelHandle channel);
    Error codes:
      NV_ERR_GENERIC
      NV_ERR_NO_MEMORY
+     NV_ERR_INVALID_STATE
+     NV_ERR_NOT_SUPPORTED
+     NV_ERR_NOT_READY
+     NV_ERR_INVALID_LOCK_STATE
+     NV_ERR_INVALID_STATE
+     NV_ERR_NVSWITCH_FABRIC_NOT_READY
+     NV_ERR_NVSWITCH_FABRIC_FAILURE
 */
 NV_STATUS nvUvmInterfaceQueryCaps(uvmGpuDeviceHandle device,
                                  UvmGpuCaps *caps);
@@ -595,10 +595,8 @@ typedef struct UvmGpuClientInfo_tag

 typedef enum
 {
-    UVM_GPU_CONF_COMPUTE_MODE_NONE,
-    UVM_GPU_CONF_COMPUTE_MODE_APM,
-    UVM_GPU_CONF_COMPUTE_MODE_HCC,
-    UVM_GPU_CONF_COMPUTE_MODE_COUNT
+    UVM_GPU_CONF_COMPUTE_MODE_NONE = 0,
+    UVM_GPU_CONF_COMPUTE_MODE_HCC = 2
 } UvmGpuConfComputeMode;

 typedef struct UvmGpuConfComputeCaps_tag
@@ -152,6 +152,8 @@ NV_STATUS_CODE(NV_ERR_FABRIC_MANAGER_NOT_PRESENT, 0x0000007A, "Fabric Manag
 NV_STATUS_CODE(NV_ERR_ALREADY_SIGNALLED, 0x0000007B, "Semaphore Surface value already >= requested wait value")
 NV_STATUS_CODE(NV_ERR_QUEUE_TASK_SLOT_NOT_AVAILABLE, 0x0000007C, "PMU RPC error due to no queue slot available for this event")
 NV_STATUS_CODE(NV_ERR_KEY_ROTATION_IN_PROGRESS, 0x0000007D, "Operation not allowed as key rotation is in progress")
+NV_STATUS_CODE(NV_ERR_NVSWITCH_FABRIC_NOT_READY, 0x00000081, "Nvswitch Fabric Status or Fabric Probe is not yet complete, caller needs to retry")
+NV_STATUS_CODE(NV_ERR_NVSWITCH_FABRIC_FAILURE, 0x00000082, "Nvswitch Fabric Probe failed")

 // Warnings:
 NV_STATUS_CODE(NV_WARN_HOT_SWITCH, 0x00010001, "WARNING Hot switch")
@@ -218,6 +218,8 @@ extern NvU32 os_page_size;
 extern NvU64 os_page_mask;
 extern NvU8 os_page_shift;
 extern NvBool os_cc_enabled;
 extern NvBool os_cc_sev_snp_enabled;
+extern NvBool os_cc_snp_vtom_enabled;
+extern NvBool os_cc_tdx_enabled;
 extern NvBool os_dma_buf_enabled;
 extern NvBool os_imex_channel_is_supported;
@@ -5102,6 +5102,42 @@ compile_test() {
             compile_check_conftest "$CODE" "NV_CC_PLATFORM_PRESENT" "" "functions"
         ;;

+        cc_attr_guest_sev_snp)
+            #
+            # Determine if 'CC_ATTR_GUEST_SEV_SNP' is present.
+            #
+            # Added by commit aa5a461171f9 ("x86/mm: Extend cc_attr to
+            # include AMD SEV-SNP") in v5.19.
+            #
+            CODE="
+            #if defined(NV_LINUX_CC_PLATFORM_H_PRESENT)
+            #include <linux/cc_platform.h>
+            #endif
+
+            enum cc_attr cc_attributes = CC_ATTR_GUEST_SEV_SNP;
+            "
+
+            compile_check_conftest "$CODE" "NV_CC_ATTR_SEV_SNP" "" "types"
+        ;;
+
+        hv_get_isolation_type)
+            #
+            # Determine if 'hv_get_isolation_type()' is present.
+            # Added by commit faff44069ff5 ("x86/hyperv: Add Write/Read MSR
+            # registers via ghcb page") in v5.16.
+            #
+            CODE="
+            #if defined(NV_ASM_MSHYPERV_H_PRESENT)
+            #include <asm/mshyperv.h>
+            #endif
+            void conftest_hv_get_isolation_type(void) {
+                int i;
+                hv_get_isolation_type(i);
+            }"
+
+            compile_check_conftest "$CODE" "NV_HV_GET_ISOLATION_TYPE" "" "functions"
+        ;;
+
         drm_prime_pages_to_sg_has_drm_device_arg)
             #
             # Determine if drm_prime_pages_to_sg() has 'dev' argument.
@@ -6543,7 +6579,9 @@ compile_test() {
             # Determine whether drm_fbdev_generic_setup is present.
             #
             # Added by commit 9060d7f49376 ("drm/fb-helper: Finish the
-            # generic fbdev emulation") in v4.19.
+            # generic fbdev emulation") in v4.19. Removed by commit
+            # aae4682e5d66 ("drm/fbdev-generic: Convert to fbdev-ttm")
+            # in v6.11.
             #
             CODE="
             #include <drm/drm_fb_helper.h>
@@ -6555,6 +6593,48 @@ compile_test() {
             }"

             compile_check_conftest "$CODE" "NV_DRM_FBDEV_GENERIC_SETUP_PRESENT" "" "functions"
         ;;

+        drm_fbdev_ttm_setup)
+            #
+            # Determine whether drm_fbdev_ttm_setup is present.
+            #
+            # Added by commit aae4682e5d66 ("drm/fbdev-generic:
+            # Convert to fbdev-ttm") in v6.11.
+            #
+            CODE="
+            #include <drm/drm_fb_helper.h>
+            #if defined(NV_DRM_DRM_FBDEV_TTM_H_PRESENT)
+            #include <drm/drm_fbdev_ttm.h>
+            #endif
+            void conftest_drm_fbdev_ttm_setup(void) {
+                drm_fbdev_ttm_setup();
+            }"
+
+            compile_check_conftest "$CODE" "NV_DRM_FBDEV_TTM_SETUP_PRESENT" "" "functions"
+        ;;
+
+        drm_output_poll_changed)
+            #
+            # Determine whether drm_mode_config_funcs.output_poll_changed
+            # callback is present
+            #
+            # Removed by commit 446d0f4849b1 ("drm: Remove struct
+            # drm_mode_config_funcs.output_poll_changed") in v6.12. Hotplug
+            # event support is handled through the fbdev emulation interface
+            # going forward.
+            #
+            CODE="
+            #if defined(NV_DRM_DRM_MODE_CONFIG_H_PRESENT)
+            #include <drm/drm_mode_config.h>
+            #else
+            #include <drm/drm_crtc.h>
+            #endif
+            int conftest_drm_output_poll_changed_available(void) {
+                return offsetof(struct drm_mode_config_funcs, output_poll_changed);
+            }"
+
+            compile_check_conftest "$CODE" "NV_DRM_OUTPUT_POLL_CHANGED_PRESENT" "" "types"
+        ;;
+
         drm_aperture_remove_conflicting_pci_framebuffers)
@@ -15,6 +15,7 @@ NV_HEADER_PRESENCE_TESTS = \
   drm/drm_atomic_uapi.h \
   drm/drm_drv.h \
   drm/drm_fbdev_generic.h \
+  drm/drm_fbdev_ttm.h \
   drm/drm_framebuffer.h \
   drm/drm_connector.h \
   drm/drm_probe_helper.h \

@@ -97,5 +98,6 @@ NV_HEADER_PRESENCE_TESTS = \
   linux/sync_file.h \
   linux/cc_platform.h \
   asm/cpufeature.h \
-  linux/mpi.h
+  linux/mpi.h \
+  asm/mshyperv.h
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2016 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2016-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -176,7 +176,7 @@ static struct task_struct *thread_create_on_node(int (*threadfn)(void *data),
 {

     unsigned i, j;
-    const static unsigned attempts = 3;
+    static const unsigned attempts = 3;
     struct task_struct *thread[3];

     for (i = 0;; i++) {
@@ -1689,7 +1689,7 @@ int nv_drm_get_crtc_crc32_v2_ioctl(struct drm_device *dev,
     struct NvKmsKapiCrcs crc32;

     if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
-        return -ENOENT;
+        return -EOPNOTSUPP;
     }

     crtc = nv_drm_crtc_find(dev, filep, params->crtc_id);

@@ -1717,7 +1717,7 @@ int nv_drm_get_crtc_crc32_ioctl(struct drm_device *dev,
     struct NvKmsKapiCrcs crc32;

     if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
-        return -ENOENT;
+        return -EOPNOTSUPP;
     }

     crtc = nv_drm_crtc_find(dev, filep, params->crtc_id);
@@ -64,12 +64,14 @@
 #include <drm/drm_ioctl.h>
 #endif

-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
+#if defined(NV_DRM_FBDEV_AVAILABLE)
 #include <drm/drm_aperture.h>
 #include <drm/drm_fb_helper.h>
 #endif

-#if defined(NV_DRM_DRM_FBDEV_GENERIC_H_PRESENT)
+#if defined(NV_DRM_DRM_FBDEV_TTM_H_PRESENT)
+#include <drm/drm_fbdev_ttm.h>
+#elif defined(NV_DRM_DRM_FBDEV_GENERIC_H_PRESENT)
 #include <drm/drm_fbdev_generic.h>
 #endif

@@ -124,6 +126,7 @@ static const char* nv_get_input_colorspace_name(

 #if defined(NV_DRM_ATOMIC_MODESET_AVAILABLE)

+#if defined(NV_DRM_OUTPUT_POLL_CHANGED_PRESENT)
 static void nv_drm_output_poll_changed(struct drm_device *dev)
 {
     struct drm_connector *connector = NULL;

@@ -167,6 +170,7 @@ static void nv_drm_output_poll_changed(struct drm_device *dev)
     nv_drm_connector_list_iter_end(&conn_iter);
 #endif
 }
+#endif /* NV_DRM_OUTPUT_POLL_CHANGED_PRESENT */

 static struct drm_framebuffer *nv_drm_framebuffer_create(
     struct drm_device *dev,
@@ -204,7 +208,9 @@ static const struct drm_mode_config_funcs nv_mode_config_funcs = {
     .atomic_check = nv_drm_atomic_check,
     .atomic_commit = nv_drm_atomic_commit,

+#if defined(NV_DRM_OUTPUT_POLL_CHANGED_PRESENT)
     .output_poll_changed = nv_drm_output_poll_changed,
+#endif
 };

 static void nv_drm_event_callback(const struct NvKmsKapiEvent *event)
@@ -480,6 +486,22 @@ static int nv_drm_load(struct drm_device *dev, unsigned long flags)
         return -ENODEV;
     }

+#if defined(NV_DRM_FBDEV_AVAILABLE)
+    /*
+     * If fbdev is enabled, take modeset ownership now before other DRM clients
+     * can take master (and thus NVKMS ownership).
+     */
+    if (nv_drm_fbdev_module_param) {
+        if (!nvKms->grabOwnership(pDevice)) {
+            nvKms->freeDevice(pDevice);
+            NV_DRM_DEV_LOG_ERR(nv_dev, "Failed to grab NVKMS modeset ownership");
+            return -EBUSY;
+        }
+
+        nv_dev->hasFramebufferConsole = NV_TRUE;
+    }
+#endif
+
     mutex_lock(&nv_dev->lock);

     /* Set NvKmsKapiDevice */
@@ -590,6 +612,15 @@ static void __nv_drm_unload(struct drm_device *dev)
         return;
     }

+    /* Release modeset ownership if fbdev is enabled */
+
+#if defined(NV_DRM_FBDEV_AVAILABLE)
+    if (nv_dev->hasFramebufferConsole) {
+        drm_atomic_helper_shutdown(dev);
+        nvKms->releaseOwnership(nv_dev->pDevice);
+    }
+#endif
+
     cancel_delayed_work_sync(&nv_dev->hotplug_event_work);
     mutex_lock(&nv_dev->lock);
@@ -834,13 +865,18 @@ static int nv_drm_get_dpy_id_for_connector_id_ioctl(struct drm_device *dev,
                                                     struct drm_file *filep)
 {
     struct drm_nvidia_get_dpy_id_for_connector_id_params *params = data;
+    struct drm_connector *connector;
+    struct nv_drm_connector *nv_connector;
+    int ret = 0;
+
+    if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
+        return -EOPNOTSUPP;
+    }
+
     // Importantly, drm_connector_lookup (with filep) will only return the
     // connector if we are master, a lessee with the connector, or not master at
     // all. It will return NULL if we are a lessee with other connectors.
-    struct drm_connector *connector =
-        nv_drm_connector_lookup(dev, filep, params->connectorId);
-    struct nv_drm_connector *nv_connector;
-    int ret = 0;
+    connector = nv_drm_connector_lookup(dev, filep, params->connectorId);

     if (!connector) {
         return -EINVAL;
@@ -873,6 +909,11 @@ static int nv_drm_get_connector_id_for_dpy_id_ioctl(struct drm_device *dev,
     int ret = -EINVAL;
 #if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
     struct drm_connector_list_iter conn_iter;
 #endif
+
+    if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
+        return -EOPNOTSUPP;
+    }
+
 #if defined(NV_DRM_CONNECTOR_LIST_ITER_PRESENT)
     nv_drm_connector_list_iter_begin(dev, &conn_iter);
 #endif
||||
@ -1085,6 +1126,10 @@ static int nv_drm_grant_permission_ioctl(struct drm_device *dev, void *data,
|
||||
{
|
||||
struct drm_nvidia_grant_permissions_params *params = data;
|
||||
|
||||
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
if (params->type == NV_DRM_PERMISSIONS_TYPE_MODESET) {
|
||||
return nv_drm_grant_modeset_permission(dev, params, filep);
|
||||
} else if (params->type == NV_DRM_PERMISSIONS_TYPE_SUB_OWNER) {
|
||||
@ -1250,6 +1295,10 @@ static int nv_drm_revoke_permission_ioctl(struct drm_device *dev, void *data,
|
||||
{
|
||||
struct drm_nvidia_revoke_permissions_params *params = data;
|
||||
|
||||
if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
|
||||
return -EOPNOTSUPP;
|
||||
}
|
||||
|
||||
if (params->type == NV_DRM_PERMISSIONS_TYPE_MODESET) {
|
||||
if (!params->dpyId) {
|
||||
return -EINVAL;
|
||||
@@ -1767,15 +1816,10 @@ void nv_drm_register_drm_device(const nv_gpu_info_t *gpu_info)
         goto failed_drm_register;
     }

-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
+#if defined(NV_DRM_FBDEV_AVAILABLE)
     if (nv_drm_fbdev_module_param &&
         drm_core_check_feature(dev, DRIVER_MODESET)) {

-        if (!nvKms->grabOwnership(nv_dev->pDevice)) {
-            NV_DRM_DEV_LOG_ERR(nv_dev, "Failed to grab NVKMS modeset ownership");
-            goto failed_grab_ownership;
-        }
-
         if (bus_is_pci) {
             struct pci_dev *pdev = to_pci_dev(device);

@@ -1785,11 +1829,13 @@ void nv_drm_register_drm_device(const nv_gpu_info_t *gpu_info)
             drm_aperture_remove_conflicting_pci_framebuffers(pdev, nv_drm_driver.name);
 #endif
         }
+#if defined(NV_DRM_FBDEV_TTM_AVAILABLE)
+        drm_fbdev_ttm_setup(dev, 32);
+#elif defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
         drm_fbdev_generic_setup(dev, 32);
-
-        nv_dev->hasFramebufferConsole = NV_TRUE;
+#endif
     }
-#endif /* defined(NV_DRM_FBDEV_GENERIC_AVAILABLE) */
+#endif /* defined(NV_DRM_FBDEV_AVAILABLE) */

     /* Add NVIDIA-DRM device into list */

@@ -1798,12 +1844,6 @@ void nv_drm_register_drm_device(const nv_gpu_info_t *gpu_info)

     return; /* Success */

-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
-failed_grab_ownership:
-
-    drm_dev_unregister(dev);
-#endif
-
 failed_drm_register:

     nv_drm_dev_free(dev);
@@ -1870,12 +1910,6 @@ void nv_drm_remove_devices(void)
         struct nv_drm_device *next = dev_list->next;
         struct drm_device *dev = dev_list->dev;

-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
-        if (dev_list->hasFramebufferConsole) {
-            drm_atomic_helper_shutdown(dev);
-            nvKms->releaseOwnership(dev_list->pDevice);
-        }
-#endif
         drm_dev_unregister(dev);
         nv_drm_dev_free(dev);

@@ -1943,12 +1977,12 @@ void nv_drm_suspend_resume(NvBool suspend)

         if (suspend) {
             drm_kms_helper_poll_disable(dev);
-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
+#if defined(NV_DRM_FBDEV_AVAILABLE)
             drm_fb_helper_set_suspend_unlocked(dev->fb_helper, 1);
 #endif
             drm_mode_config_reset(dev);
         } else {
-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
+#if defined(NV_DRM_FBDEV_AVAILABLE)
             drm_fb_helper_set_suspend_unlocked(dev->fb_helper, 0);
 #endif
             drm_kms_helper_poll_enable(dev);
@@ -465,10 +465,15 @@ int nv_drm_prime_fence_context_create_ioctl(struct drm_device *dev,
 {
     struct nv_drm_device *nv_dev = to_nv_device(dev);
     struct drm_nvidia_prime_fence_context_create_params *p = data;
-    struct nv_drm_prime_fence_context *nv_prime_fence_context =
-        __nv_drm_prime_fence_context_new(nv_dev, p);
+    struct nv_drm_prime_fence_context *nv_prime_fence_context;
     int err;

+    if (nv_dev->pDevice == NULL) {
+        return -EOPNOTSUPP;
+    }
+
+    nv_prime_fence_context = __nv_drm_prime_fence_context_new(nv_dev, p);
+
     if (!nv_prime_fence_context) {
         goto done;
     }

@@ -523,6 +528,11 @@ int nv_drm_gem_prime_fence_attach_ioctl(struct drm_device *dev,
     struct nv_drm_fence_context *nv_fence_context;
     nv_dma_fence_t *fence;

+    if (nv_dev->pDevice == NULL) {
+        ret = -EOPNOTSUPP;
+        goto done;
+    }
+
     if (p->__pad != 0) {
         NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
         goto done;

@@ -1312,6 +1322,10 @@ int nv_drm_semsurf_fence_ctx_create_ioctl(struct drm_device *dev,
     struct nv_drm_semsurf_fence_ctx *ctx;
     int err;

+    if (nv_dev->pDevice == NULL) {
+        return -EOPNOTSUPP;
+    }
+
     if (p->__pad != 0) {
         NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
         return -EINVAL;

@@ -1473,6 +1487,11 @@ int nv_drm_semsurf_fence_create_ioctl(struct drm_device *dev,
     int ret = -EINVAL;
     int fd;

+    if (nv_dev->pDevice == NULL) {
+        ret = -EOPNOTSUPP;
+        goto done;
+    }
+
     if (p->__pad != 0) {
         NV_DRM_DEV_LOG_ERR(nv_dev, "Padding fields must be zeroed");
         goto done;

@@ -1635,6 +1654,10 @@ int nv_drm_semsurf_fence_wait_ioctl(struct drm_device *dev,
     unsigned long flags;
     int ret = -EINVAL;

+    if (nv_dev->pDevice == NULL) {
+        return -EOPNOTSUPP;
+    }
+
     if (p->pre_wait_value >= p->post_wait_value) {
         NV_DRM_DEV_LOG_ERR(
             nv_dev,

@@ -1743,6 +1766,11 @@ int nv_drm_semsurf_fence_attach_ioctl(struct drm_device *dev,
     nv_dma_fence_t *fence;
     int ret = -EINVAL;

+    if (nv_dev->pDevice == NULL) {
+        ret = -EOPNOTSUPP;
+        goto done;
+    }
+
     nv_gem = nv_drm_gem_object_lookup(nv_dev->dev, filep, p->handle);

     if (!nv_gem) {
@@ -380,7 +380,7 @@ int nv_drm_gem_import_nvkms_memory_ioctl(struct drm_device *dev,
     int ret;

     if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
-        ret = -EINVAL;
+        ret = -EOPNOTSUPP;
         goto failed;
     }

@@ -430,7 +430,7 @@ int nv_drm_gem_export_nvkms_memory_ioctl(struct drm_device *dev,
     int ret = 0;

     if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
-        ret = -EINVAL;
+        ret = -EOPNOTSUPP;
         goto done;
     }

@@ -483,7 +483,7 @@ int nv_drm_gem_alloc_nvkms_memory_ioctl(struct drm_device *dev,
     int ret = 0;

     if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
-        ret = -EINVAL;
+        ret = -EOPNOTSUPP;
         goto failed;
     }
@@ -319,7 +319,7 @@ int nv_drm_gem_identify_object_ioctl(struct drm_device *dev,
     struct nv_drm_gem_object *nv_gem = NULL;

     if (!drm_core_check_feature(dev, DRIVER_MODESET)) {
-        return -EINVAL;
+        return -EOPNOTSUPP;
     }

     nv_dma_buf = nv_drm_gem_object_dma_buf_lookup(dev, filep, p->handle);
@@ -34,7 +34,7 @@ MODULE_PARM_DESC(
     "Enable atomic kernel modesetting (1 = enable, 0 = disable (default))");
 module_param_named(modeset, nv_drm_modeset_module_param, bool, 0400);

-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
+#if defined(NV_DRM_FBDEV_AVAILABLE)
 MODULE_PARM_DESC(
     fbdev,
     "Create a framebuffer device (1 = enable, 0 = disable (default)) (EXPERIMENTAL)");
@@ -59,14 +59,20 @@ typedef struct nv_timer nv_drm_timer;
 #endif

 #if defined(NV_DRM_FBDEV_GENERIC_SETUP_PRESENT) && defined(NV_DRM_APERTURE_REMOVE_CONFLICTING_PCI_FRAMEBUFFERS_PRESENT)
+#define NV_DRM_FBDEV_AVAILABLE
 #define NV_DRM_FBDEV_GENERIC_AVAILABLE
 #endif

+#if defined(NV_DRM_FBDEV_TTM_SETUP_PRESENT) && defined(NV_DRM_APERTURE_REMOVE_CONFLICTING_PCI_FRAMEBUFFERS_PRESENT)
+#define NV_DRM_FBDEV_AVAILABLE
+#define NV_DRM_FBDEV_TTM_AVAILABLE
+#endif
+
 struct page;

 /* Set to true when the atomic modeset feature is enabled. */
 extern bool nv_drm_modeset_module_param;
-#if defined(NV_DRM_FBDEV_GENERIC_AVAILABLE)
+#if defined(NV_DRM_FBDEV_AVAILABLE)
 /* Set to true when the nvidia-drm driver should install a framebuffer device */
 extern bool nv_drm_fbdev_module_param;
 #endif
@@ -67,6 +67,7 @@ NV_CONFTEST_FUNCTION_COMPILE_TESTS += fence_set_error
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += sync_file_get_fence
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_aperture_remove_conflicting_pci_framebuffers
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_fbdev_generic_setup
+NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_fbdev_ttm_setup
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_connector_attach_hdr_output_metadata_property
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_helper_crtc_enable_color_mgmt
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_crtc_enable_color_mgmt

@@ -129,3 +130,4 @@ NV_CONFTEST_TYPE_COMPILE_TESTS += fence_ops_use_64bit_seqno
 NV_CONFTEST_TYPE_COMPILE_TESTS += drm_aperture_remove_conflicting_pci_framebuffers_has_driver_arg
 NV_CONFTEST_TYPE_COMPILE_TESTS += drm_mode_create_dp_colorspace_property_has_supported_colorspaces_arg
 NV_CONFTEST_TYPE_COMPILE_TESTS += drm_unlocked_ioctl_flag_present
+NV_CONFTEST_TYPE_COMPILE_TESTS += drm_output_poll_changed
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2016 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2016-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a

@@ -176,7 +176,7 @@ static struct task_struct *thread_create_on_node(int (*threadfn)(void *data),
 {
     unsigned i, j;
-    const static unsigned attempts = 3;
+    static const unsigned attempts = 3;
     struct task_struct *thread[3];

     for (i = 0;; i++) {
@@ -1070,7 +1070,7 @@ static void nvkms_kapi_event_kthread_q_callback(void *arg)
     nvKmsKapiHandleEventQueueChange(device);
 }

-struct nvkms_per_open *nvkms_open_common(enum NvKmsClientType type,
+static struct nvkms_per_open *nvkms_open_common(enum NvKmsClientType type,
                                          struct NvKmsKapiDevice *device,
                                          int *status)
 {

@@ -1122,7 +1122,7 @@ failed:
     return NULL;
 }

-void nvkms_close_pm_locked(struct nvkms_per_open *popen)
+static void nvkms_close_pm_locked(struct nvkms_per_open *popen)
 {
     /*
      * Don't use down_interruptible(): we need to free resources

@@ -1185,7 +1185,7 @@ static void nvkms_close_popen(struct nvkms_per_open *popen)
     }
 }

-int nvkms_ioctl_common
+static int nvkms_ioctl_common
 (
     struct nvkms_per_open *popen,
     NvU32 cmd, NvU64 address, const size_t size
@@ -1,5 +1,5 @@
 /*******************************************************************************
-    Copyright (c) 2016 NVIDIA Corporation
+    Copyright (c) 2016-2024 NVIDIA Corporation

     Permission is hereby granted, free of charge, to any person obtaining a copy
     of this software and associated documentation files (the "Software"), to

@@ -81,7 +81,7 @@
 #define NUM_Q_ITEMS_IN_MULTITHREAD_TEST (NUM_TEST_Q_ITEMS * NUM_TEST_KTHREADS)

 // This exists in order to have a function to place a breakpoint on:
-void on_nvq_assert(void)
+static void on_nvq_assert(void)
 {
     (void)NULL;
 }
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2016 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2016-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a

@@ -176,7 +176,7 @@ static struct task_struct *thread_create_on_node(int (*threadfn)(void *data),
 {
     unsigned i, j;
-    const static unsigned attempts = 3;
+    static const unsigned attempts = 3;
     struct task_struct *thread[3];

     for (i = 0;; i++) {
@@ -379,6 +379,17 @@ NV_STATUS UvmIsPageableMemoryAccessSupportedOnGpu(const NvProcessorUuid *gpuUuid
 //     OS state required to register the GPU is malformed, or the partition
 //     identified by the user handles or its configuration changed.
 //
+// NV_ERR_NVSWITCH_FABRIC_NOT_READY:
+//     (On NvSwitch-connected systems) Indicates that the fabric has not been
+//     configured yet. Caller must retry GPU registration.
+//
+// NV_ERR_NVSWITCH_FABRIC_FAILURE:
+//     (On NvSwitch-connected systems) Indicates that the NvLink fabric
+//     failed to be configured.
+//
+// NV_ERR_GPU_MEMORY_ONLINING_FAULURE:
+//     (On coherent systems) The GPU's memory onlining failed.
+//
 // NV_ERR_GENERIC:
 //     Unexpected error. We try hard to avoid returning this error code,
 //     because it is not very informative.
@@ -158,6 +158,12 @@ static NvU32 uvm_channel_update_progress_with_max(uvm_channel_t *channel,

     NvU64 completed_value = uvm_channel_update_completed_value(channel);

+    // LCIC channels don't use gpfifo entries after the static schedule is up.
+    // They can only have one entry active at a time, so use the state of the
+    // tracking semaphore to represent progress.
+    if (uvm_channel_is_lcic(channel) && uvm_channel_manager_is_wlc_ready(channel->pool->manager))
+        return uvm_gpu_tracking_semaphore_is_completed(&channel->tracking_sem) ? 0 : 1;
+
     channel_pool_lock(channel->pool);

     // Completed value should never exceed the queued value
@@ -397,18 +403,15 @@ static NV_STATUS channel_pool_rotate_key_locked(uvm_channel_pool_t *pool)
     uvm_assert_mutex_locked(&pool->conf_computing.key_rotation.mutex);

     uvm_for_each_channel_in_pool(channel, pool) {
-        NV_STATUS status = uvm_channel_wait(channel);
+        // WLC channels share CE with LCIC pushes, and LCIC waits for
+        // WLC work to complete using WFI, so it's enough to wait
+        // for the latter one.
+        uvm_channel_t *wait_channel = uvm_channel_is_wlc(channel) ? uvm_channel_wlc_get_paired_lcic(channel) : channel;
+
+        NV_STATUS status = uvm_channel_wait(wait_channel);
         if (status != NV_OK)
             return status;
-
-        if (uvm_channel_pool_is_wlc(pool)) {
-            uvm_spin_loop_t spin;
-            uvm_channel_t *lcic_channel = uvm_channel_wlc_get_paired_lcic(channel);
-
-            // LCIC pushes don't exist as such. Rely on the tracking semaphore
-            // to determine completion, instead of uvm_channel_wait
-            UVM_SPIN_WHILE(!uvm_gpu_tracking_semaphore_is_completed(&lcic_channel->tracking_sem), &spin);
-        }
     }

     return uvm_conf_computing_rotate_pool_key(pool);
@@ -1051,13 +1054,21 @@ static void internal_channel_submit_work_wlc(uvm_push_t *push)
     UvmCslIv *iv_cpu_addr = lcic_semaphore->conf_computing.ivs;
     uvm_gpu_semaphore_notifier_t *last_pushed_notifier;
     NvU32 iv_index;
-    uvm_spin_loop_t spin;
+    NV_STATUS status;
     void* auth_tag_cpu = get_channel_unprotected_sysmem_cpu(wlc_channel) + WLC_SYSMEM_PUSHBUFFER_AUTH_TAG_OFFSET;

     // Wait for the WLC/LCIC to be primed. This means that PUT == GET + 2
     // and a WLC doorbell ring is enough to start work.
-    UVM_SPIN_WHILE(!uvm_gpu_tracking_semaphore_is_completed(&lcic_channel->tracking_sem), &spin);
+    status = uvm_channel_wait(lcic_channel);
+    if (status != NV_OK) {
+        UVM_ASSERT(uvm_global_get_status() != NV_OK);
+
+        // If there's a global fatal error we can't communicate with the GPU
+        // and the below launch sequence doesn't work.
+        UVM_ERR_PRINT_NV_STATUS("Failed to wait for LCIC channel (%s) completion.", status, lcic_channel->name);
+        return;
+    }

     // Executing WLC adds an extra job to LCIC
     ++lcic_channel->tracking_sem.queued_value;
@@ -1852,14 +1863,14 @@ static uvm_gpfifo_entry_t *uvm_channel_get_first_pending_entry(uvm_channel_t *ch
 NV_STATUS uvm_channel_get_status(uvm_channel_t *channel)
 {
     uvm_gpu_t *gpu;
-    NvNotification *errorNotifier;
+    NvNotification *error_notifier;

     if (uvm_channel_is_proxy(channel))
-        errorNotifier = channel->proxy.channel_info.shadowErrorNotifier;
+        error_notifier = channel->proxy.channel_info.shadowErrorNotifier;
     else
-        errorNotifier = channel->channel_info.errorNotifier;
+        error_notifier = channel->channel_info.errorNotifier;

-    if (errorNotifier->status == 0)
+    if (error_notifier->status == 0)
         return NV_OK;

     // In case we hit a channel error, check the ECC error notifier as well so
@@ -2986,16 +2997,18 @@ out:

 // Return the pool corresponding to the given CE index
 //
-// This function cannot be used to access the proxy pool in SR-IOV heavy.
+// Used to retrieve pools of type UVM_CHANNEL_POOL_TYPE_CE only.
 static uvm_channel_pool_t *channel_manager_ce_pool(uvm_channel_manager_t *manager, NvU32 ce)
 {
-    uvm_channel_pool_t *pool;
+    uvm_channel_pool_t *pool = uvm_channel_pool_first(manager, UVM_CHANNEL_POOL_TYPE_CE);

+    UVM_ASSERT(pool != NULL);
     UVM_ASSERT(test_bit(ce, manager->ce_mask));

-    // The index of the pool associated with 'ce' is the number of usable CEs
-    // in [0, ce)
-    pool = manager->channel_pools + bitmap_weight(manager->ce_mask, ce);
+    // Pools of type UVM_CHANNEL_POOL_TYPE_CE are stored contiguously. The
+    // offset of the pool associated with 'ce' is the number of usable CEs in
+    // [0, ce).
+    pool += bitmap_weight(manager->ce_mask, ce);

     UVM_ASSERT(pool->pool_type == UVM_CHANNEL_POOL_TYPE_CE);
     UVM_ASSERT(pool->engine_index == ce);

@@ -3009,6 +3022,8 @@ void uvm_channel_manager_set_p2p_ce(uvm_channel_manager_t *manager, uvm_gpu_t *p

     UVM_ASSERT(manager->gpu != peer);
     UVM_ASSERT(optimal_ce < UVM_COPY_ENGINE_COUNT_MAX);
+    UVM_ASSERT(manager->gpu->parent->peer_copy_mode != UVM_GPU_PEER_COPY_MODE_UNSUPPORTED);
+    UVM_ASSERT(peer->parent->peer_copy_mode != UVM_GPU_PEER_COPY_MODE_UNSUPPORTED);

     manager->pool_to_use.gpu_to_gpu[peer_gpu_index] = channel_manager_ce_pool(manager, optimal_ce);
 }
@@ -3213,6 +3228,7 @@ static unsigned channel_manager_get_max_pools(uvm_channel_manager_t *manager)
 static NV_STATUS channel_manager_create_ce_pools(uvm_channel_manager_t *manager, unsigned *preferred_ce)
 {
     unsigned ce;
+    unsigned type;

     // A pool is created for each usable CE, even if it has not been selected as
     // the preferred CE for any type, because as more information is discovered

@@ -3222,18 +3238,20 @@ static NV_STATUS channel_manager_create_ce_pools(uvm_channel_manager_t *manager,
     // usable.
     for_each_set_bit(ce, manager->ce_mask, UVM_COPY_ENGINE_COUNT_MAX) {
         NV_STATUS status;
-        unsigned type;
         uvm_channel_pool_t *pool = NULL;

         status = channel_pool_add(manager, UVM_CHANNEL_POOL_TYPE_CE, ce, &pool);
         if (status != NV_OK)
             return status;
+    }

-        for (type = 0; type < UVM_CHANNEL_TYPE_CE_COUNT; type++) {
-            // Set pool type if it hasn't been set before.
-            if (preferred_ce[type] == ce && manager->pool_to_use.default_for_type[type] == NULL)
-                manager->pool_to_use.default_for_type[type] = pool;
-        }
-    }
+    for (type = 0; type < UVM_CHANNEL_TYPE_CE_COUNT; type++) {
+        // Avoid overwriting previously set defaults.
+        if (manager->pool_to_use.default_for_type[type] != NULL)
+            continue;
+
+        ce = preferred_ce[type];
+        manager->pool_to_use.default_for_type[type] = channel_manager_ce_pool(manager, ce);
+    }

     return NV_OK;
@@ -3739,11 +3757,15 @@ static void channel_manager_stop_wlc(uvm_channel_manager_t *manager)
     NV_STATUS status;

     uvm_for_each_channel_in_pool(channel, lcic_pool) {
-        uvm_spin_loop_t spin;
-
         // Wait for the WLC/LCIC to be primed. This means that PUT == GET + 2
         // and a WLC doorbell ring is enough to start work.
-        UVM_SPIN_WHILE(!uvm_gpu_tracking_semaphore_is_completed(&channel->tracking_sem), &spin);
+        status = uvm_channel_wait(channel);
+        if (status != NV_OK)
+            UVM_ERR_PRINT_NV_STATUS("Failed to wait for LCIC channel (%s) completion", status, channel->name);
+
+        // Continue on error and attempt to stop WLC below. This can lead to
+        // channel destruction with mismatched GET and PUT pointers. RM will
+        // print errors if that's the case, but channel destruction succeeds.
     }

     status = uvm_push_begin(manager, UVM_CHANNEL_TYPE_SEC2, &push, "Stop WLC channels");
@@ -1,5 +1,5 @@
 /*******************************************************************************
-    Copyright (c) 2013-2023 NVIDIA Corporation
+    Copyright (c) 2013-2024 NVIDIA Corporation

     Permission is hereby granted, free of charge, to any person obtaining a copy
     of this software and associated documentation files (the "Software"), to

@@ -423,7 +423,9 @@ static void uvm_get_unaddressable_range(NvU32 num_va_bits, NvU64 *first, NvU64 *
     UVM_ASSERT(first);
     UVM_ASSERT(outer);

-    if (uvm_platform_uses_canonical_form_address()) {
+    // Maxwell GPUs (num_va_bits == 40b) do not support canonical form address
+    // even when plugged into platforms using it.
+    if (uvm_platform_uses_canonical_form_address() && num_va_bits > 40) {
         *first = 1ULL << (num_va_bits - 1);
         *outer = (NvU64)((NvS64)(1ULL << 63) >> (64 - num_va_bits));
     }
@@ -138,6 +138,7 @@ static NV_STATUS get_gpu_caps(uvm_gpu_t *gpu)

     if (gpu_caps.numaEnabled) {
+        UVM_ASSERT(uvm_parent_gpu_is_coherent(gpu->parent));
+
         gpu->mem_info.numa.enabled = true;
         gpu->mem_info.numa.node_id = gpu_caps.numaNodeId;
     }

@@ -1280,7 +1281,8 @@ static NV_STATUS init_gpu(uvm_gpu_t *gpu, const UvmGpuInfo *gpu_info)

     status = get_gpu_caps(gpu);
     if (status != NV_OK) {
-        UVM_ERR_PRINT("Failed to get GPU caps: %s, GPU %s\n", nvstatusToString(status), uvm_gpu_name(gpu));
+        if (status != NV_ERR_NVSWITCH_FABRIC_NOT_READY)
+            UVM_ERR_PRINT("Failed to get GPU caps: %s, GPU %s\n", nvstatusToString(status), uvm_gpu_name(gpu));
         return status;
     }
@@ -2256,7 +2258,10 @@ static void set_optimal_p2p_write_ces(const UvmGpuP2PCapsParams *p2p_caps_params
     bool sorted;
     NvU32 ce0, ce1;

-    if (peer_caps->link_type < UVM_GPU_LINK_NVLINK_1)
+    UVM_ASSERT(peer_caps->ref_count);
+    UVM_ASSERT(gpu0->parent->peer_copy_mode == gpu1->parent->peer_copy_mode);
+
+    if (gpu0->parent->peer_copy_mode == UVM_GPU_PEER_COPY_MODE_UNSUPPORTED)
         return;

     sorted = uvm_id_value(gpu0->id) < uvm_id_value(gpu1->id);

@@ -2282,7 +2287,7 @@ static void set_optimal_p2p_write_ces(const UvmGpuP2PCapsParams *p2p_caps_params
 static int nv_procfs_read_gpu_peer_caps(struct seq_file *s, void *v)
 {
     if (!uvm_down_read_trylock(&g_uvm_global.pm.lock))
-    return -EAGAIN;
+        return -EAGAIN;

     gpu_peer_caps_print((uvm_gpu_t **)s->private, s);
@@ -962,6 +962,8 @@ struct uvm_parent_gpu_struct
     // Whether CE supports physical addressing mode for writes to vidmem
     bool ce_phys_vidmem_write_supported;

+    // Addressing mode(s) supported for CE transfers between this GPU and its
+    // peers: none, physical only, physical and virtual, etc.
+    uvm_gpu_peer_copy_mode_t peer_copy_mode;
+
     // Virtualization mode of the GPU.
@@ -684,7 +684,10 @@ static void access_counter_buffer_flush_locked(uvm_parent_gpu_t *parent_gpu,

     while (get != put) {
         // Wait until valid bit is set
-        UVM_SPIN_WHILE(!parent_gpu->access_counter_buffer_hal->entry_is_valid(parent_gpu, get), &spin);
+        UVM_SPIN_WHILE(!parent_gpu->access_counter_buffer_hal->entry_is_valid(parent_gpu, get), &spin) {
+            if (uvm_global_get_status() != NV_OK)
+                goto done;
+        }

         parent_gpu->access_counter_buffer_hal->entry_clear_valid(parent_gpu, get);
         ++get;

@@ -692,6 +695,7 @@ static void access_counter_buffer_flush_locked(uvm_parent_gpu_t *parent_gpu,
             get = 0;
     }

+done:
     write_get(parent_gpu, get);
 }
@@ -817,12 +821,18 @@ static NvU32 fetch_access_counter_buffer_entries(uvm_gpu_t *gpu,
            (fetch_mode == NOTIFICATION_FETCH_MODE_ALL || notification_index < access_counters->max_batch_size)) {
         uvm_access_counter_buffer_entry_t *current_entry = &notification_cache[notification_index];

-        // We cannot just wait for the last entry (the one pointed by put) to become valid, we have to do it
-        // individually since entries can be written out of order
+        // We cannot just wait for the last entry (the one pointed by put) to
+        // become valid, we have to do it individually since entries can be
+        // written out of order
         UVM_SPIN_WHILE(!gpu->parent->access_counter_buffer_hal->entry_is_valid(gpu->parent, get), &spin) {
             // We have some entry to work on. Let's do the rest later.
             if (fetch_mode != NOTIFICATION_FETCH_MODE_ALL && notification_index > 0)
                 goto done;
+
+            // There's no entry to work on and something has gone wrong. Ignore
+            // the rest.
+            if (uvm_global_get_status() != NV_OK)
+                goto done;
         }

         // Prevent later accesses being moved above the read of the valid bit
@@ -631,7 +631,15 @@ static NV_STATUS fault_buffer_flush_locked(uvm_gpu_t *gpu,

     while (get != put) {
         // Wait until valid bit is set
-        UVM_SPIN_WHILE(!parent_gpu->fault_buffer_hal->entry_is_valid(parent_gpu, get), &spin);
+        UVM_SPIN_WHILE(!parent_gpu->fault_buffer_hal->entry_is_valid(parent_gpu, get), &spin) {
+            // Channels might be idle (e.g. in teardown) so check for errors
+            // actively.
+            status = uvm_channel_manager_check_errors(gpu->channel_manager);
+            if (status != NV_OK) {
+                write_get(parent_gpu, get);
+                return status;
+            }
+        }

         fault_buffer_skip_replayable_entry(parent_gpu, get);
         ++get;

@@ -864,6 +872,10 @@ static NV_STATUS fetch_fault_buffer_entries(uvm_gpu_t *gpu,
             // We have some entry to work on. Let's do the rest later.
             if (fetch_mode == FAULT_FETCH_MODE_BATCH_READY && fault_index > 0)
                 goto done;
+
+            status = uvm_global_get_status();
+            if (status != NV_OK)
+                goto done;
         }

         // Prevent later accesses being moved above the read of the valid bit
@@ -50,18 +50,18 @@
 // because that type is normally associated with the LCE mapped to the most
 // PCEs. The higher bandwidth is beneficial when doing bulk operations such as
 // clearing PTEs, or initializing a page directory/table.
 #define page_tree_begin_acquire(tree, tracker, push, format, ...) ({ \
-    NV_STATUS status; \
-    uvm_channel_manager_t *manager = (tree)->gpu->channel_manager; \
+    NV_STATUS __status; \
+    uvm_channel_manager_t *__manager = (tree)->gpu->channel_manager; \
 \
-    if (manager == NULL) \
-        status = uvm_push_begin_fake((tree)->gpu, (push)); \
-    else if (uvm_parent_gpu_is_virt_mode_sriov_heavy((tree)->gpu->parent)) \
-        status = uvm_push_begin_acquire(manager, UVM_CHANNEL_TYPE_MEMOPS, (tracker), (push), (format), ##__VA_ARGS__); \
-    else \
-        status = uvm_push_begin_acquire(manager, UVM_CHANNEL_TYPE_GPU_INTERNAL, (tracker), (push), (format), ##__VA_ARGS__);\
+    if (__manager == NULL) \
+        __status = uvm_push_begin_fake((tree)->gpu, (push)); \
+    else if (uvm_parent_gpu_is_virt_mode_sriov_heavy((tree)->gpu->parent)) \
+        __status = uvm_push_begin_acquire(__manager, UVM_CHANNEL_TYPE_MEMOPS, (tracker), (push), (format), ##__VA_ARGS__); \
+    else \
+        __status = uvm_push_begin_acquire(__manager, UVM_CHANNEL_TYPE_GPU_INTERNAL, (tracker), (push), (format), ##__VA_ARGS__);\
 \
-    status; \
+    __status; \
 })

 // Default location of page table allocations
@@ -1127,7 +1127,6 @@ static NV_STATUS test_pmm_reverse_map_many_blocks(uvm_gpu_t *gpu, uvm_va_space_t
     // incrementally. Therefore, the reverse translations will show them in
     // order.
     uvm_for_each_va_range_in(va_range, va_space, addr, addr + size - 1) {
-        uvm_va_block_t *va_block;

         for_each_va_block_in_va_range(va_range, va_block) {
             NvU32 num_va_block_pages = 0;
@@ -149,7 +149,7 @@ done:
 static NV_STATUS test_tracker_basic(uvm_va_space_t *va_space)
 {
     uvm_gpu_t *gpu;
-    uvm_channel_t *channel;
+    uvm_channel_t *any_channel;
     uvm_tracker_t tracker;
     uvm_tracker_entry_t entry;
     NvU32 count = 0;

@@ -159,15 +159,15 @@ static NV_STATUS test_tracker_basic(uvm_va_space_t *va_space)
     if (gpu == NULL)
         return NV_ERR_INVALID_STATE;

-    channel = uvm_channel_any(gpu->channel_manager);
-    if (channel == NULL)
+    any_channel = uvm_channel_any(gpu->channel_manager);
+    if (any_channel == NULL)
         return NV_ERR_INVALID_STATE;

     uvm_tracker_init(&tracker);
     TEST_CHECK_GOTO(assert_tracker_is_completed(&tracker) == NV_OK, done);

     // Some channel
-    entry.channel = channel;
+    entry.channel = any_channel;
     entry.value = 1;

     status = uvm_tracker_add_entry(&tracker, &entry);

@@ -258,7 +258,7 @@ done:
 static NV_STATUS test_tracker_overwrite(uvm_va_space_t *va_space)
 {
     uvm_gpu_t *gpu;
-    uvm_channel_t *channel;
+    uvm_channel_t *any_channel;
     uvm_tracker_t tracker, dup_tracker;
     uvm_tracker_entry_t entry;
     uvm_tracker_entry_t *entry_iter, *dup_entry_iter;

@@ -270,15 +270,15 @@ static NV_STATUS test_tracker_overwrite(uvm_va_space_t *va_space)
     if (gpu == NULL)
         return NV_ERR_INVALID_STATE;

-    channel = uvm_channel_any(gpu->channel_manager);
-    if (channel == NULL)
+    any_channel = uvm_channel_any(gpu->channel_manager);
+    if (any_channel == NULL)
         return NV_ERR_INVALID_STATE;

     uvm_tracker_init(&tracker);
     TEST_CHECK_GOTO(assert_tracker_is_completed(&tracker) == NV_OK, done);

     // Some channel
-    entry.channel = channel;
+    entry.channel = any_channel;
     entry.value = 1;

     status = uvm_tracker_add_entry(&tracker, &entry);

@@ -351,7 +351,7 @@ done:
 static NV_STATUS test_tracker_add_tracker(uvm_va_space_t *va_space)
 {
     uvm_gpu_t *gpu;
-    uvm_channel_t *channel;
+    uvm_channel_t *any_channel;
     uvm_tracker_t tracker, dup_tracker;
     uvm_tracker_entry_t entry;
     uvm_tracker_entry_t *entry_iter, *dup_entry_iter;

@@ -362,8 +362,8 @@ static NV_STATUS test_tracker_add_tracker(uvm_va_space_t *va_space)
     if (gpu == NULL)
         return NV_ERR_INVALID_STATE;

-    channel = uvm_channel_any(gpu->channel_manager);
-    if (channel == NULL)
+    any_channel = uvm_channel_any(gpu->channel_manager);
+    if (any_channel == NULL)
         return NV_ERR_INVALID_STATE;

     uvm_tracker_init(&tracker);

@@ -371,7 +371,7 @@ static NV_STATUS test_tracker_add_tracker(uvm_va_space_t *va_space)
     TEST_CHECK_GOTO(assert_tracker_is_completed(&tracker) == NV_OK, done);

     // Some channel
-    entry.channel = channel;
+    entry.channel = any_channel;
     entry.value = 1;

     status = uvm_tracker_add_entry(&tracker, &entry);
@@ -3493,8 +3493,6 @@ static NV_STATUS block_copy_begin_push(uvm_va_block_t *va_block,
     }

     if (UVM_ID_IS_CPU(src_id) && UVM_ID_IS_CPU(dst_id)) {
-        uvm_va_space_t *va_space = uvm_va_block_get_va_space(va_block);
-
         gpu = uvm_va_space_find_first_gpu_attached_to_cpu_node(va_space, copy_state->src.nid);
         if (!gpu)
             gpu = uvm_va_space_find_first_gpu(va_space);

@@ -4486,8 +4484,6 @@ static NV_STATUS block_copy_resident_pages_mask(uvm_va_block_t *block,
     uvm_processor_mask_copy(search_mask, src_processor_mask);

     for_each_closest_id(src_id, search_mask, dst_id, va_space) {
-        NV_STATUS status;
-
         if (UVM_ID_IS_CPU(src_id)) {
             int nid;

@@ -8939,13 +8935,13 @@ NV_STATUS uvm_va_block_revoke_prot(uvm_va_block_t *va_block,
     uvm_processor_mask_copy(resident_procs, &va_block->resident);

     for_each_closest_id(resident_id, resident_procs, gpu->id, va_space) {
-        NV_STATUS status = block_revoke_prot_gpu_to(va_block,
-                                                    va_block_context,
-                                                    gpu,
-                                                    resident_id,
-                                                    running_page_mask,
-                                                    prot_to_revoke,
-                                                    out_tracker);
+        status = block_revoke_prot_gpu_to(va_block,
+                                          va_block_context,
+                                          gpu,
+                                          resident_id,
+                                          running_page_mask,
+                                          prot_to_revoke,
+                                          out_tracker);
         if (status != NV_OK)
             break;
@@ -12208,16 +12204,16 @@ NV_STATUS uvm_va_block_service_finish(uvm_processor_id_t processor_id,

     // Map pages that are thrashing
     if (service_context->thrashing_pin_count > 0) {
-        uvm_page_index_t page_index;
+        uvm_page_index_t pinned_page_index;

-        for_each_va_block_page_in_region_mask(page_index,
+        for_each_va_block_page_in_region_mask(pinned_page_index,
                                               &service_context->thrashing_pin_mask,
                                               service_context->region) {
             uvm_processor_mask_t *map_thrashing_processors = NULL;
-            NvU64 page_addr = uvm_va_block_cpu_page_address(va_block, page_index);
+            NvU64 page_addr = uvm_va_block_cpu_page_address(va_block, pinned_page_index);

             // Check protection type
-            if (!uvm_page_mask_test(caller_page_mask, page_index))
+            if (!uvm_page_mask_test(caller_page_mask, pinned_page_index))
                 continue;

             map_thrashing_processors = uvm_perf_thrashing_get_thrashing_processors(va_block, page_addr);

@@ -12226,7 +12222,7 @@ NV_STATUS uvm_va_block_service_finish(uvm_processor_id_t processor_id,
                                           service_context->block_context,
                                           new_residency,
                                           processor_id,
-                                          uvm_va_block_region_for_page(page_index),
+                                          uvm_va_block_region_for_page(pinned_page_index),
                                           caller_page_mask,
                                           new_prot,
                                           map_thrashing_processors);
@@ -2274,7 +2274,7 @@ NV_STATUS uvm_va_block_populate_page_cpu(uvm_va_block_t *va_block,
 // returns NV_ERR_MORE_PROCESSING_REQUIRED and this makes it clear that the
 // block's state is not locked across these calls.
 #define UVM_VA_BLOCK_LOCK_RETRY(va_block, block_retry, call) ({ \
-    NV_STATUS status; \
+    NV_STATUS __status; \
     uvm_va_block_t *__block = (va_block); \
     uvm_va_block_retry_t *__retry = (block_retry); \
 \

@@ -2283,14 +2283,14 @@ NV_STATUS uvm_va_block_populate_page_cpu(uvm_va_block_t *va_block,
     uvm_mutex_lock(&__block->lock); \
 \
     do { \
-        status = (call); \
-    } while (status == NV_ERR_MORE_PROCESSING_REQUIRED); \
+        __status = (call); \
+    } while (__status == NV_ERR_MORE_PROCESSING_REQUIRED); \
 \
     uvm_mutex_unlock(&__block->lock); \
 \
     uvm_va_block_retry_deinit(__retry, __block); \
 \
-    status; \
+    __status; \
 })

 // A helper macro for handling allocation-retry

@@ -2305,7 +2305,7 @@ NV_STATUS uvm_va_block_populate_page_cpu(uvm_va_block_t *va_block,
 // to be already taken. Notably the block's lock might be unlocked and relocked
 // as part of the call.
 #define UVM_VA_BLOCK_RETRY_LOCKED(va_block, block_retry, call) ({ \
-    NV_STATUS status; \
+    NV_STATUS __status; \
     uvm_va_block_t *__block = (va_block); \
     uvm_va_block_retry_t *__retry = (block_retry); \
 \

@@ -2314,12 +2314,12 @@ NV_STATUS uvm_va_block_populate_page_cpu(uvm_va_block_t *va_block,
     uvm_assert_mutex_locked(&__block->lock); \
 \
     do { \
-        status = (call); \
-    } while (status == NV_ERR_MORE_PROCESSING_REQUIRED); \
+        __status = (call); \
+    } while (__status == NV_ERR_MORE_PROCESSING_REQUIRED); \
 \
     uvm_va_block_retry_deinit(__retry, __block); \
 \
-    status; \
+    __status; \
 })

 #endif // __UVM_VA_BLOCK_H__
@@ -31,6 +31,7 @@
 #include "nvCpuUuid.h"
 #include "nv-time.h"
+#include "nvlink_caps.h"
 #include "nvlink_proto.h"

 #include <linux/module.h>
 #include <linux/interrupt.h>

@@ -49,7 +50,7 @@

 #include "ioctl_nvswitch.h"

-const static struct
+static const struct
 {
     NvlStatus status;
     int err;
@@ -22,6 +22,7 @@
 */

 #include "nv-linux.h"
+#include "nv-caps-imex.h"

 extern int NVreg_ImexChannelCount;

@@ -267,7 +267,7 @@ static void nv_cap_procfs_exit(void)
     nv_cap_procfs_dir = NULL;
 }

-int nv_cap_procfs_init(void)
+static int nv_cap_procfs_init(void)
 {
     static struct proc_dir_entry *file_entry;
@@ -290,7 +290,7 @@ void nv_destroy_dma_map_scatterlist(nv_dma_map_t *dma_map)
     os_free_mem(dma_map->mapping.discontig.submaps);
 }

-void nv_load_dma_map_scatterlist(
+static void nv_load_dma_map_scatterlist(
     nv_dma_map_t *dma_map,
     NvU64 *va_array
 )

@@ -486,7 +486,7 @@ NV_STATUS NV_API_CALL nv_dma_map_sgt(
     return status;
 }

-NV_STATUS NV_API_CALL nv_dma_unmap_sgt(
+static NV_STATUS NV_API_CALL nv_dma_unmap_sgt(
     nv_dma_device_t *dma_dev,
     void **priv
 )
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2017-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2017-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -25,9 +25,9 @@
  * nv-ibmnpu.c - interface with the ibmnpu (IBM NVLink Processing Unit) "module"
  */
 #include "nv-linux.h"
+#include "nv-ibmnpu.h"

 #if defined(NVCPU_PPC64LE)
-#include "nv-ibmnpu.h"
 #include "nv-rsync.h"

 /*
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2016 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2016-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -176,7 +176,7 @@ static struct task_struct *thread_create_on_node(int (*threadfn)(void *data),
 {
     unsigned i, j;
-    const static unsigned attempts = 3;
+    static const unsigned attempts = 3;
     struct task_struct *thread[3];

     for (i = 0;; i++) {
@@ -368,7 +368,7 @@ int nv_encode_caching(
     return 0;
 }

-int static nvidia_mmap_peer_io(
+static int nvidia_mmap_peer_io(
     struct vm_area_struct *vma,
     nv_alloc_t *at,
     NvU64 page_index,
@@ -389,7 +389,7 @@ int static nvidia_mmap_peer_io(
     return ret;
 }

-int static nvidia_mmap_sysmem(
+static int nvidia_mmap_sysmem(
     struct vm_area_struct *vma,
     nv_alloc_t *at,
     NvU64 page_index,
@@ -40,6 +40,9 @@
 #if !defined(NV_BUS_TYPE_HAS_IOMMU_OPS)
 #include <linux/iommu.h>
 #endif
+#if NV_IS_EXPORT_SYMBOL_GPL_pci_ats_supported
+#include <linux/pci-ats.h>
+#endif

 static void
 nv_check_and_exclude_gpu(
@@ -781,10 +784,15 @@ next_bar:
     // PPC64LE platform where ATS is currently supported (IBM P9).
     nv_ats_supported &= nv_platform_supports_numa(nvl);
 #else
-#if defined(NV_PCI_DEV_HAS_ATS_ENABLED)
+#if NV_IS_EXPORT_SYMBOL_GPL_pci_ats_supported
+    nv_ats_supported &= pci_ats_supported(pci_dev);
+#elif defined(NV_PCI_DEV_HAS_ATS_ENABLED)
     nv_ats_supported &= pci_dev->ats_enabled;
+#else
+    nv_ats_supported = NV_FALSE;
+#endif
 #endif

     if (nv_ats_supported)
     {
         NV_DEV_PRINTF(NV_DBG_INFO, nv, "ATS supported by this GPU!\n");
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 1999-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 1999-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -694,7 +694,7 @@ static nv_proc_ops_t nv_procfs_suspend_fops = {
 /*
  * Forwards error to nv_log_error which exposes data to vendor callback
  */
-void
+static void
 exercise_error_forwarding_va(
     nv_state_t *nv,
     NvU32 err,
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2017-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -29,7 +29,7 @@

 nv_report_error_cb_t nv_error_cb_handle = NULL;

-int nv_register_error_cb(nv_report_error_cb_t report_error_cb)
+int nvidia_register_error_cb(nv_report_error_cb_t report_error_cb)
 {
     if (report_error_cb == NULL)
         return -EINVAL;
@@ -41,9 +41,9 @@ int nv_register_error_cb(nv_report_error_cb_t report_error_cb)
     return 0;
 }

-EXPORT_SYMBOL(nv_register_error_cb);
+EXPORT_SYMBOL(nvidia_register_error_cb);

-int nv_unregister_error_cb(void)
+int nvidia_unregister_error_cb(void)
 {
     if (nv_error_cb_handle == NULL)
         return -EPERM;
@@ -52,9 +52,7 @@ int nv_unregister_error_cb(void)
     return 0;
 }

-EXPORT_SYMBOL(nv_unregister_error_cb);
-
-struct pci_dev;
+EXPORT_SYMBOL(nvidia_unregister_error_cb);

 void nv_report_error(
     struct pci_dev *dev,
@@ -63,27 +61,17 @@ void nv_report_error(
     va_list ap
 )
 {
-    va_list ap_copy;
     char *buffer;
-    int length = 0;
-    int status = NV_OK;
+    gfp_t gfp = NV_MAY_SLEEP() ? NV_GFP_NO_OOM : NV_GFP_ATOMIC;

-    if (nv_error_cb_handle != NULL)
-    {
-        va_copy(ap_copy, ap);
-        length = vsnprintf(NULL, 0, format, ap);
-        va_end(ap_copy);
+    if (nv_error_cb_handle == NULL)
+        return;

-        if (length > 0)
-        {
-            status = os_alloc_mem((void *)&buffer, (length + 1)*sizeof(char));
+    buffer = kvasprintf(gfp, format, ap);

-            if (status == NV_OK)
-            {
-                vsnprintf(buffer, length, format, ap);
-                nv_error_cb_handle(dev, error_number, buffer, length + 1);
-                os_free_mem(buffer);
-            }
-        }
-    }
-}
+    if (buffer == NULL)
+        return;
+
+    nv_error_cb_handle(dev, error_number, buffer, strlen(buffer) + 1);
+    kfree(buffer);
+}
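The rewrite above is more than a cleanup: the old two-pass code appears to consume `ap` in the measuring `vsnprintf` call (the `va_copy` was made but never actually used for it) and passes `length` rather than `length + 1` as the buffer size, which would drop the final character. `kvasprintf` folds measure, allocate, and format into one call and sidesteps both issues. A userspace sketch of the same allocate-and-format pattern (`vasprintf_sketch` and `report_sketch` are illustrative names, not driver APIs):

```c
#include <assert.h>
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Userspace analogue of kvasprintf(): measure, allocate, format.
 * Returns a heap string the caller must free(), or NULL on failure. */
static char *vasprintf_sketch(const char *format, va_list ap)
{
    va_list ap_copy;
    int length;
    char *buffer;

    va_copy(ap_copy, ap);
    length = vsnprintf(NULL, 0, format, ap_copy);  /* measure only */
    va_end(ap_copy);

    if (length < 0)
        return NULL;

    buffer = malloc((size_t)length + 1);
    if (buffer == NULL)
        return NULL;

    /* Size is length + 1 so the last character is not truncated. */
    vsnprintf(buffer, (size_t)length + 1, format, ap);
    return buffer;
}

static char *report_sketch(const char *format, ...)
{
    va_list ap;
    char *s;

    va_start(ap, format);
    s = vasprintf_sketch(format, ap);
    va_end(ap);
    return s;
}
```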
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2017 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2017-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -37,7 +37,7 @@
  * @param[in] int
  *        Length of error string.
  */
-typedef void (*nv_report_error_cb_t)(struct pci_dev *, uint32_t, char *, int);
+typedef void (*nv_report_error_cb_t)(struct pci_dev *, uint32_t, char *, size_t);

 /*
  * @brief
@@ -51,7 +51,7 @@ typedef void (*nv_report_error_cb_t)(struct pci_dev *, uint32_t, char *, int);
  * -EINVAL callback handle is NULL.
  * -EBUSY callback handle is already registered.
  */
-int nv_register_error_cb(nv_report_error_cb_t report_error_cb);
+int nvidia_register_error_cb(nv_report_error_cb_t report_error_cb);

 /*
  * @brief
@@ -61,6 +61,6 @@ int nv_register_error_cb(nv_report_error_cb_t report_error_cb);
  * 0 upon successful completion.
  * -EPERM unregister not permitted on NULL callback handle.
  */
-int nv_unregister_error_cb(void);
+int nvidia_unregister_error_cb(void);

 #endif /* _NV_REPORT_ERR_H_ */
@@ -96,6 +96,10 @@
 #include <linux/cc_platform.h>
 #endif

+#if defined(NV_ASM_MSHYPERV_H_PRESENT) && defined(NVCPU_X86_64)
+#include <asm/mshyperv.h>
+#endif
+
 #if defined(NV_ASM_CPUFEATURE_H_PRESENT)
 #include <asm/cpufeature.h>
 #endif
@@ -184,11 +188,7 @@ struct semaphore nv_linux_devices_lock;

 // True if all the successfully probed devices support ATS
 // Assigned at device probe (module init) time
-NvBool nv_ats_supported = NVCPU_IS_PPC64LE
-#if defined(NV_PCI_DEV_HAS_ATS_ENABLED)
-                          || NV_TRUE
-#endif
-;
+NvBool nv_ats_supported = NV_TRUE;

 // allow an easy way to convert all debug printfs related to events
 // back and forth between 'info' and 'errors'
@@ -285,6 +285,17 @@ void nv_detect_conf_compute_platform(
 #if defined(NV_CC_PLATFORM_PRESENT)
     os_cc_enabled = cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT);

+#if defined(NV_CC_ATTR_SEV_SNP)
+    os_cc_sev_snp_enabled = cc_platform_has(CC_ATTR_GUEST_SEV_SNP);
+#endif
+
+#if defined(NV_HV_GET_ISOLATION_TYPE) && IS_ENABLED(CONFIG_HYPERV) && defined(NVCPU_X86_64)
+    if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP)
+    {
+        os_cc_snp_vtom_enabled = NV_TRUE;
+    }
+#endif
+
 #if defined(X86_FEATURE_TDX_GUEST)
     if (cpu_feature_enabled(X86_FEATURE_TDX_GUEST))
     {
@@ -293,8 +304,10 @@ void nv_detect_conf_compute_platform(
 #endif
 #else
     os_cc_enabled = NV_FALSE;
+    os_cc_sev_snp_enabled = NV_FALSE;
+    os_cc_snp_vtom_enabled = NV_FALSE;
     os_cc_tdx_enabled = NV_FALSE;
 #endif
 #endif //NV_CC_PLATFORM_PRESENT
 }

 static
@@ -1251,6 +1264,6 @@ static int validate_numa_start_state(nv_linux_state_t *nvl)
     return rc;
 }

-NV_STATUS NV_API_CALL nv_get_num_dpaux_instances(nv_state_t *nv, NvU32 *num_instances)
-{
-    *num_instances = nv->num_dpaux_instance;
-    return NV_OK;
-}
-
 void NV_API_CALL
 nv_schedule_uvm_isr(nv_state_t *nv)
 {
@@ -160,6 +160,8 @@ NV_CONFTEST_FUNCTION_COMPILE_TESTS += full_name_hash
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += pci_enable_atomic_ops_to_root
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += vga_tryget
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += cc_platform_has
+NV_CONFTEST_FUNCTION_COMPILE_TESTS += cc_attr_guest_sev_snp
+NV_CONFTEST_FUNCTION_COMPILE_TESTS += hv_get_isolation_type
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += seq_read_iter
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += follow_pfn
 NV_CONFTEST_FUNCTION_COMPILE_TESTS += drm_gem_object_get
@@ -229,6 +231,7 @@ NV_CONFTEST_SYMBOL_COMPILE_TESTS += is_export_symbol_present_tsec_comms_free_gsc
 NV_CONFTEST_SYMBOL_COMPILE_TESTS += is_export_symbol_present_memory_block_size_bytes
 NV_CONFTEST_SYMBOL_COMPILE_TESTS += crypto
 NV_CONFTEST_SYMBOL_COMPILE_TESTS += is_export_symbol_present_follow_pte
+NV_CONFTEST_SYMBOL_COMPILE_TESTS += is_export_symbol_gpl_pci_ats_supported

 NV_CONFTEST_TYPE_COMPILE_TESTS += dma_ops
 NV_CONFTEST_TYPE_COMPILE_TESTS += swiotlb_dma_ops
@@ -27,6 +27,7 @@
 #include "nvlink_linux.h"
 #include "nvlink_errors.h"
 #include "nvlink_export.h"
+#include "nvlink_proto.h"
 #include "nv-linux.h"
 #include "nv-procfs.h"
 #include "nv-time.h"
@@ -52,6 +52,8 @@ NvU32 os_page_size = PAGE_SIZE;
 NvU64 os_page_mask = NV_PAGE_MASK;
 NvU8 os_page_shift = PAGE_SHIFT;
 NvBool os_cc_enabled = 0;
+NvBool os_cc_sev_snp_enabled = 0;
+NvBool os_cc_snp_vtom_enabled = 0;
 NvBool os_cc_tdx_enabled = 0;

 #if defined(CONFIG_DMA_SHARED_BUFFER)
@@ -400,7 +402,7 @@ NvS32 NV_API_CALL os_string_compare(const char *str1, const char *str2)
     return strcmp(str1, str2);
 }

-void *os_mem_copy_custom(
+static void *os_mem_copy_custom(
     void *dstPtr,
     const void *srcPtr,
     NvU32 length
@@ -282,6 +282,7 @@ namespace DisplayPort
         virtual void markDeviceForDeletion() = 0;

         virtual bool getRawDscCaps(NvU8 *buffer, NvU32 bufferSize) = 0;
+        virtual bool setRawDscCaps(NvU8 *buffer, NvU32 bufferSize) = 0;

         // This interface is still nascent. Please don't use it. Read size limit is 16 bytes.
         virtual AuxBus::status getDpcdData(unsigned offset, NvU8 * buffer,
@@ -44,6 +44,7 @@ namespace DisplayPort
 #define HDCP_BCAPS_DDC_EN_BIT 0x80
 #define HDCP_BCAPS_DP_EN_BIT 0x01
 #define HDCP_I2C_CLIENT_ADDR 0x74
+#define DSC_CAPS_SIZE 16

     struct GroupImpl;
     struct ConnectorImpl;
@@ -421,6 +422,7 @@ namespace DisplayPort
         virtual void markDeviceForDeletion() {bisMarkedForDeletion = true;};
         virtual bool isMarkedForDeletion() {return bisMarkedForDeletion;};
         virtual bool getRawDscCaps(NvU8 *buffer, NvU32 bufferSize);
+        virtual bool setRawDscCaps(NvU8 *buffer, NvU32 bufferSize);

         virtual AuxBus::status dscCrcControl(NvBool bEnable, gpuDscCrc *dataGpu, sinkDscCrc *dataSink);
@@ -472,6 +472,15 @@ bool DeviceImpl::getRawDscCaps(NvU8 *buffer, NvU32 bufferSize)
     return true;
 }

+bool DeviceImpl::setRawDscCaps(NvU8 *buffer, NvU32 bufferSize)
+{
+    if (bufferSize < sizeof(rawDscCaps))
+        return false;
+
+    dpMemCopy(&rawDscCaps, buffer, sizeof(rawDscCaps));
+    return parseDscCaps(&rawDscCaps[0], sizeof(rawDscCaps));
+}
+
 AuxBus::status DeviceImpl::transaction(Action action, Type type, int address,
                                        NvU8 * buffer, unsigned sizeRequested,
                                        unsigned * sizeCompleted,
@@ -36,25 +36,25 @@
 // and then checked back in. You cannot make changes to these sections without
 // corresponding changes to the buildmeister script
 #ifndef NV_BUILD_BRANCH
-#define NV_BUILD_BRANCH r550_00
+#define NV_BUILD_BRANCH r553_17
 #endif
 #ifndef NV_PUBLIC_BRANCH
-#define NV_PUBLIC_BRANCH r550_00
+#define NV_PUBLIC_BRANCH r553_17
 #endif

 #if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS)
-#define NV_BUILD_BRANCH_VERSION "rel/gpu_drv/r550/r550_00-326"
-#define NV_BUILD_CHANGELIST_NUM (34471492)
+#define NV_BUILD_BRANCH_VERSION "rel/gpu_drv/r550/r553_17-429"
+#define NV_BUILD_CHANGELIST_NUM (34957518)
 #define NV_BUILD_TYPE "Official"
-#define NV_BUILD_NAME "rel/gpu_drv/r550/r550_00-326"
-#define NV_LAST_OFFICIAL_CHANGELIST_NUM (34471492)
+#define NV_BUILD_NAME "rel/gpu_drv/r550/r553_17-429"
+#define NV_LAST_OFFICIAL_CHANGELIST_NUM (34957518)

 #else /* Windows builds */
-#define NV_BUILD_BRANCH_VERSION "r550_00-324"
-#define NV_BUILD_CHANGELIST_NUM (34468048)
-#define NV_BUILD_TYPE "Nightly"
-#define NV_BUILD_NAME "r550_00-240627"
-#define NV_LAST_OFFICIAL_CHANGELIST_NUM (34454921)
+#define NV_BUILD_BRANCH_VERSION "r553_17-2"
+#define NV_BUILD_CHANGELIST_NUM (34902203)
+#define NV_BUILD_TYPE "Official"
+#define NV_BUILD_NAME "553.20"
+#define NV_LAST_OFFICIAL_CHANGELIST_NUM (34902203)
 #define NV_BUILD_BRANCH_BASE_VERSION R550
 #endif
 // End buildmeister python edited section
@@ -4,7 +4,7 @@
 #if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS) || defined(NV_VMWARE) || defined(NV_QNX) || defined(NV_INTEGRITY) || \
     (defined(RMCFG_FEATURE_PLATFORM_GSP) && RMCFG_FEATURE_PLATFORM_GSP == 1)

-#define NV_VERSION_STRING "550.100"
+#define NV_VERSION_STRING "550.127.05"

 #else
@@ -57,7 +57,9 @@
 #define NV_PFALCON_FALCON_DMATRFCMD 0x00000118 /* RW-4R */
 #define NV_PFALCON_FALCON_DMATRFCMD_FULL 0:0 /* R-XVF */
+#define NV_PFALCON_FALCON_DMATRFCMD_FULL_TRUE 0x00000001 /* R---V */
+#define NV_PFALCON_FALCON_DMATRFCMD_FULL_FALSE 0x00000000 /* R---V */
 #define NV_PFALCON_FALCON_DMATRFCMD_IDLE 1:1 /* R-XVF */
 #define NV_PFALCON_FALCON_DMATRFCMD_IDLE_TRUE 0x00000001 /* R---V */
 #define NV_PFALCON_FALCON_DMATRFCMD_IDLE_FALSE 0x00000000 /* R---V */
 #define NV_PFALCON_FALCON_DMATRFCMD_SEC 3:2 /* RWXVF */
 #define NV_PFALCON_FALCON_DMATRFCMD_IMEM 4:4 /* RWXVF */
@@ -62,4 +62,14 @@
 #define NV_CTRL_CPU_INTR_UNITS_PRIV_RING 15:15
 #define NV_CTRL_CPU_INTR_UNITS_FSP 16:16

+#define NV_CTRL_CPU_INTR_TOP_LEAF_BIT(i) (i/2):(i/2)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_UNITS NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_UNITS_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NPG_FATAL NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NPG_FATAL_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NPG_NON_FATAL NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NPG_NON_FATAL_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NPG_CORRECTABLE NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NPG_CORRECTABLE_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NVLW_FATAL NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NVLW_FATAL_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NVLW_NON_FATAL NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NVLW_NON_FATAL_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NVLW_CORRECTABLE NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NVLW_CORRECTABLE_IDX)
+#define NV_CTRL_CPU_INTR_TOP_LEAF_INTR_NXBAR_FATAL NV_CTRL_CPU_INTR_TOP_LEAF_BIT(NV_CTRL_CPU_INTR_NXBAR_FATAL_IDX)
+
 #endif // __ls10_dev_ctrl_ip_addendum_h__
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2003-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2003-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -25,4 +25,5 @@
 #define __ls10_ptop_discovery_ip_h__
 /* This file is autogenerated. Do not edit */
 #define NV_PTOP_UNICAST_SW_DEVICE_BASE_SAW_0 0x00028000 /* */
+#define NV_PTOP_UNICAST_SW_DEVICE_BASE_SOE_0 0x00840000 /* */
 #endif // __ls10_ptop_discovery_ip_h__
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2018-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2018-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -49,6 +49,7 @@
 #include "soe/soeifsmbpbi.h"
 #include "soe/soeifcore.h"
 #include "soe/soeifchnmgmt.h"
+#include "soe/soeiftnvl.h"
 #include "soe/soeifcci.h"
 #include "soe/soeifheartbeat.h"

@@ -71,6 +72,7 @@ typedef struct
         RM_SOE_BIF_CMD bif;
         RM_SOE_CORE_CMD core;
         RM_SOE_CHNMGMT_CMD chnmgmt;
+        RM_SOE_TNVL_CMD tnvl;
         RM_SOE_CCI_CMD cci;
     } cmd;
 } RM_FLCN_CMD_SOE,
@@ -126,8 +128,9 @@ typedef struct
 #define RM_SOE_TASK_ID_CCI       0x0D
 #define RM_SOE_TASK_ID_FSPMGMT   0x0E
 #define RM_SOE_TASK_ID_HEARTBEAT 0x0F
+#define RM_SOE_TASK_ID_TNVL      0x10
 // Add new task ID here...
-#define RM_SOE_TASK_ID__END      0x10
+#define RM_SOE_TASK_ID__END      0x11

 /*!
  * Unit-identifiers:
@@ -151,8 +154,9 @@ typedef struct
 #define RM_SOE_UNIT_CHNMGMT   (0x0D)
 #define RM_SOE_UNIT_CCI       (0x0E)
 #define RM_SOE_UNIT_HEARTBEAT (0x0F)
+#define RM_SOE_UNIT_TNVL      (0x10)
 // Add new unit ID here...
-#define RM_SOE_UNIT_END       (0x10)
+#define RM_SOE_UNIT_END       (0x11)

 #endif // _RMSOECMDIF_H_
src/common/nvswitch/common/inc/soe/soeiftnvl.h (new file, 164 lines)
@@ -0,0 +1,164 @@
+/*
+ * SPDX-FileCopyrightText: Copyright (c) 2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-License-Identifier: MIT
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef _SOEIFTNVL_H_
+#define _SOEIFTNVL_H_
+
+/*!
+ * @file   soeiftnvl.h
+ * @brief  SOE TNVL Command Queue
+ *
+ *         The TNVL unit ID will be used for sending and recieving
+ *         Command Messages between driver and TNVL unit of SOE
+ */
+
+#define RM_SOE_LIST_LS10_ONLY_ENGINES(_op) \
+    _op(GIN)              \
+    _op(XAL)              \
+    _op(XAL_FUNC)         \
+    _op(XPL)              \
+    _op(XTL)              \
+    _op(XTL_CONFIG)       \
+    _op(UXL)              \
+    _op(GPU_PTOP)         \
+    _op(PMC)              \
+    _op(PBUS)             \
+    _op(ROM2)             \
+    _op(GPIO)             \
+    _op(FSP)              \
+    _op(SYSCTRL)          \
+    _op(CLKS_SYS)         \
+    _op(CLKS_SYSB)        \
+    _op(CLKS_P0)          \
+    _op(SAW_PM)           \
+    _op(PCIE_PM)          \
+    _op(PRT_PRI_HUB)      \
+    _op(PRT_PRI_RS_CTRL)  \
+    _op(SYS_PRI_HUB)      \
+    _op(SYS_PRI_RS_CTRL)  \
+    _op(SYSB_PRI_HUB)     \
+    _op(SYSB_PRI_RS_CTRL) \
+    _op(PRI_MASTER_RS)    \
+    _op(PTIMER)           \
+    _op(CPR)              \
+    _op(TILEOUT)          \
+
+#define RM_SOE_LIST_ALL_ENGINES(_op) \
+    _op(XVE)            \
+    _op(SAW)            \
+    _op(SOE)            \
+    _op(SMR)            \
+                        \
+    _op(NPG)            \
+    _op(NPORT)          \
+                        \
+    _op(NVLW)           \
+    _op(MINION)         \
+    _op(NVLIPT)         \
+    _op(NVLIPT_LNK)     \
+    _op(NVLTLC)         \
+    _op(NVLDL)          \
+                        \
+    _op(NXBAR)          \
+    _op(TILE)           \
+                        \
+    _op(NPG_PERFMON)    \
+    _op(NPORT_PERFMON)  \
+                        \
+    _op(NVLW_PERFMON)   \
+
+#define RM_SOE_ENGINE_ID_LIST(_eng) \
+    RM_SOE_ENGINE_ID_##_eng,
+
+//
+// ENGINE_IDs are the complete list of all engines that are supported on
+// LS10 architecture(s) that may support them. Any one architecture may or
+// may not understand how to operate on any one specific engine.
+// Architectures that share a common ENGINE_ID are not guaranteed to have
+// compatible manuals.
+//
+typedef enum rm_soe_engine_id
+{
+    RM_SOE_LIST_ALL_ENGINES(RM_SOE_ENGINE_ID_LIST)
+    RM_SOE_LIST_LS10_ONLY_ENGINES(RM_SOE_ENGINE_ID_LIST)
+    RM_SOE_ENGINE_ID_SIZE,
+} RM_SOE_ENGINE_ID;
+
+/*!
+ * Commands offered by the SOE Tnvl Interface.
+ */
+enum
+{
+    /*
+     * Issue register write command
+     */
+    RM_SOE_TNVL_CMD_ISSUE_REGISTER_WRITE = 0x0,
+    /*
+     * Issue pre-lock sequence
+     */
+    RM_SOE_TNVL_CMD_ISSUE_PRE_LOCK_SEQUENCE = 0x1,
+    /*
+     * Issue engine write command
+     */
+    RM_SOE_TNVL_CMD_ISSUE_ENGINE_WRITE = 0x2,
+};
+
+/*!
+ * TNVL queue command payload
+ */
+
+typedef struct
+{
+    NvU8 cmdType;
+    NvU32 offset;
+    NvU32 data;
+} RM_SOE_TNVL_CMD_REGISTER_WRITE;
+
+typedef struct
+{
+    NvU8 cmdType;
+    RM_SOE_ENGINE_ID eng_id;
+    NvU32 eng_bcast;
+    NvU32 eng_instance;
+    NvU32 base;
+    NvU32 offset;
+    NvU32 data;
+} RM_SOE_TNVL_CMD_ENGINE_WRITE;
+
+typedef struct
+{
+    NvU8 cmdType;
+} RM_SOE_TNVL_CMD_PRE_LOCK_SEQUENCE;
+
+typedef union
+{
+    NvU8 cmdType;
+    RM_SOE_TNVL_CMD_REGISTER_WRITE registerWrite;
+    RM_SOE_TNVL_CMD_ENGINE_WRITE engineWrite;
+    RM_SOE_TNVL_CMD_PRE_LOCK_SEQUENCE preLockSequence;
+} RM_SOE_TNVL_CMD;
+
+#endif // _SOETNVL_H_
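`RM_SOE_TNVL_CMD` works because every payload struct begins with the same `NvU8 cmdType` member, so a dispatcher can read the tag through the union before selecting a branch: C's common-initial-sequence guarantee for unions of structs. A minimal sketch with hypothetical command names:

```c
#include <assert.h>

typedef unsigned char u8;
typedef unsigned int  u32;

enum { CMD_REGISTER_WRITE = 0x0, CMD_PRE_LOCK = 0x1 };

/* Every payload starts with the same cmdType member. */
typedef struct { u8 cmdType; u32 offset; u32 data; } cmd_register_write_t;
typedef struct { u8 cmdType; }                       cmd_pre_lock_t;

/* Because all members share that initial member, reading cmd->cmdType
 * is valid no matter which member was last written. */
typedef union {
    u8 cmdType;
    cmd_register_write_t registerWrite;
    cmd_pre_lock_t       preLock;
} cmd_t;

static cmd_t make_reg_write(u32 offset, u32 data)
{
    cmd_t c;
    c.registerWrite.cmdType = CMD_REGISTER_WRITE;
    c.registerWrite.offset  = offset;
    c.registerWrite.data    = data;
    return c;
}

static int dispatch(const cmd_t *cmd)
{
    switch (cmd->cmdType) {          /* tag read through the union */
    case CMD_REGISTER_WRITE: return (int)cmd->registerWrite.data;
    case CMD_PRE_LOCK:       return -1;
    default:                 return -2;
    }
}
```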
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2018-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2018-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -831,6 +831,7 @@ typedef enum nvswitch_err_type
     NVSWITCH_ERR_HW_HOST_IO_FAILURE = 10007,
     NVSWITCH_ERR_HW_HOST_FIRMWARE_INITIALIZATION_FAILURE = 10008,
     NVSWITCH_ERR_HW_HOST_FIRMWARE_RECOVERY_MODE = 10009,
+    NVSWITCH_ERR_HW_HOST_TNVL_ERROR = 10010,
     NVSWITCH_ERR_HW_HOST_LAST,
@@ -325,9 +325,17 @@ cciInit
     NvU32 pci_device_id
 )
 {
-    nvswitch_task_create(device, _nvswitch_cci_poll_callback,
-                         NVSWITCH_INTERVAL_1SEC_IN_NS / NVSWITCH_CCI_POLLING_RATE_HZ,
-                         0);
+    if (!nvswitch_is_tnvl_mode_enabled(device))
+    {
+        nvswitch_task_create(device, _nvswitch_cci_poll_callback,
+                             NVSWITCH_INTERVAL_1SEC_IN_NS / NVSWITCH_CCI_POLLING_RATE_HZ,
+                             0);
+    }
+    else
+    {
+        NVSWITCH_PRINT(device, INFO, "Skipping CCI background task when TNVL is enabled\n");
+    }

     return NVL_SUCCESS;
 }
@@ -213,6 +213,7 @@
     _op(NvU32, nvswitch_get_eng_count, (nvswitch_device *device, NVSWITCH_ENGINE_ID eng_id, NvU32 eng_bcast), _arch) \
     _op(NvU32, nvswitch_eng_rd, (nvswitch_device *device, NVSWITCH_ENGINE_ID eng_id, NvU32 eng_bcast, NvU32 eng_instance, NvU32 offset), _arch) \
     _op(void, nvswitch_eng_wr, (nvswitch_device *device, NVSWITCH_ENGINE_ID eng_id, NvU32 eng_bcast, NvU32 eng_instance, NvU32 offset, NvU32 data), _arch) \
+    _op(void, nvswitch_reg_write_32, (nvswitch_device *device, NvU32 offset, NvU32 data), _arch) \
     _op(NvU32, nvswitch_get_link_eng_inst, (nvswitch_device *device, NvU32 link_id, NVSWITCH_ENGINE_ID eng_id), _arch) \
     _op(void *, nvswitch_alloc_chipdevice, (nvswitch_device *device), _arch) \
    _op(NvlStatus, nvswitch_init_thermal, (nvswitch_device *device), _arch) \
@@ -295,6 +296,8 @@
     _op(NvlStatus, nvswitch_tnvl_get_attestation_report, (nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params), _arch) \
     _op(NvlStatus, nvswitch_tnvl_send_fsp_lock_config, (nvswitch_device *device), _arch) \
     _op(NvlStatus, nvswitch_tnvl_get_status, (nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params), _arch) \
+    _op(NvlStatus, nvswitch_send_tnvl_prelock_cmd, (nvswitch_device *device), _arch) \
+    _op(void, nvswitch_tnvl_disable_interrupts, (nvswitch_device *device), _arch) \
     NVSWITCH_HAL_FUNCTION_LIST_FEATURE_0(_op, _arch) \

 #define NVSWITCH_HAL_FUNCTION_LIST_LS10(_op, _arch) \
@@ -710,4 +710,5 @@ NvlStatus nvswitch_fsp_error_code_to_nvlstatus_map_lr10(nvswitch_device *device,
 NvlStatus nvswitch_tnvl_get_attestation_certificate_chain_lr10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_CERTIFICATE_CHAIN_PARAMS *params);
 NvlStatus nvswitch_tnvl_get_attestation_report_lr10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params);
 NvlStatus nvswitch_tnvl_get_status_lr10(nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params);
+void nvswitch_tnvl_disable_interrupts_lr10(nvswitch_device *device);
 #endif //_LR10_H_
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -188,7 +188,9 @@

 #define SOE_VBIOS_VERSION_MASK 0xFF0000
 #define SOE_VBIOS_REVLOCK_DISABLE_NPORT_FATAL_INTR 0x370000
-#define SOE_VBIOS_REVLOCK_ISSUE_INGRESS_STOP 0x440000
+#define SOE_VBIOS_REVLOCK_ISSUE_INGRESS_STOP 0x4C0000
+#define SOE_VBIOS_REVLOCK_TNVL_PRELOCK_COMMAND 0x590000
+#define SOE_VBIOS_REVLOCK_SOE_PRI_CHECKS 0x610000

 // LS10 Saved LED state
 #define ACCESS_LINK_LED_STATE CPLD_MACHXO3_ACCESS_LINK_LED_CTL_NVL_CABLE_LED
@@ -1058,7 +1060,10 @@ NvlStatus nvswitch_tnvl_get_attestation_certificate_chain_ls10(nvswitch_device *
 NvlStatus nvswitch_tnvl_get_attestation_report_ls10(nvswitch_device *device, NVSWITCH_GET_ATTESTATION_REPORT_PARAMS *params);
 NvlStatus nvswitch_tnvl_send_fsp_lock_config_ls10(nvswitch_device *device);
 NvlStatus nvswitch_tnvl_get_status_ls10(nvswitch_device *device, NVSWITCH_GET_TNVL_STATUS_PARAMS *params);
+void nvswitch_tnvl_eng_wr_32_ls10(nvswitch_device *device, NVSWITCH_ENGINE_ID eng_id, NvU32 eng_bcast, NvU32 eng_instance, NvU32 base_addr, NvU32 offset, NvU32 data);
+NvlStatus nvswitch_send_tnvl_prelock_cmd_ls10(nvswitch_device *device);
+void nvswitch_tnvl_disable_interrupts_ls10(nvswitch_device *device);
 void nvswitch_tnvl_reg_wr_32_ls10(nvswitch_device *device, NvU32 offset, NvU32 data);
 NvlStatus nvswitch_ctrl_get_soe_heartbeat_ls10(nvswitch_device *device, NVSWITCH_GET_SOE_HEARTBEAT_PARAMS *p);
 NvlStatus nvswitch_cci_enable_iobist_ls10(nvswitch_device *device, NvU32 linkNumber, NvBool bEnable);
 NvlStatus nvswitch_cci_initialization_sequence_ls10(nvswitch_device *device, NvU32 linkNumber);
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2020-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -50,5 +50,7 @@ void nvswitch_heartbeat_soe_callback_ls10(nvswitch_device *device, RM_FLCN_
 NvlStatus nvswitch_soe_set_nport_interrupts_ls10(nvswitch_device *device, NvU32 nport, NvBool bEnable);
 void nvswitch_soe_disable_nport_fatal_interrupts_ls10(nvswitch_device *device, NvU32 nport,
                                                       NvU32 nportIntrEnable, NvU8 nportIntrType);
 NvlStatus nvswitch_soe_issue_ingress_stop_ls10(nvswitch_device *device, NvU32 nport, NvBool bStop);
+NvlStatus nvswitch_soe_reg_wr_32_ls10(nvswitch_device *device, NvU32 offset, NvU32 data);
+NvlStatus nvswitch_soe_eng_wr_32_ls10(nvswitch_device *device, NVSWITCH_ENGINE_ID eng_id, NvU32 eng_bcast, NvU32 eng_instance, NvU32 base_addr, NvU32 offset, NvU32 data);
 #endif //_SOE_LS10_H_
@@ -212,8 +212,15 @@ _inforom_nvlink_start_correctable_error_recording

     pNvlinkState->bCallbackPending = NV_FALSE;

-    nvswitch_task_create(device, &_nvswitch_nvlink_1hz_callback,
-                         NVSWITCH_INTERVAL_1SEC_IN_NS, 0);
+    if (!nvswitch_is_tnvl_mode_enabled(device))
+    {
+        nvswitch_task_create(device, &_nvswitch_nvlink_1hz_callback,
+                             NVSWITCH_INTERVAL_1SEC_IN_NS, 0);
+    }
+    else
+    {
+        NVSWITCH_PRINT(device, INFO, "Skipping NVLINK heartbeat task when TNVL is enabled\n");
+    }
 }

 NvlStatus
@@ -1329,6 +1329,13 @@ nvswitch_corelib_set_tl_link_mode_lr10
    nvswitch_device *device = link->dev->pDevInfo;
    NvlStatus status = NVL_SUCCESS;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if (!NVSWITCH_IS_LINK_ENG_VALID_LR10(device, NVLDL, link->linkNumber))
    {
        NVSWITCH_PRINT(device, ERROR,
@@ -1728,6 +1735,13 @@ nvswitch_corelib_set_rx_mode_lr10
    NvlStatus status = NVL_SUCCESS;
    NvU32 delay_ns;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if (!NVSWITCH_IS_LINK_ENG_VALID_LR10(device, NVLDL, link->linkNumber))
    {
        NVSWITCH_PRINT(device, ERROR,
@@ -1955,6 +1969,13 @@ nvswitch_corelib_set_rx_detect_lr10
    NvlStatus status;
    nvswitch_device *device = link->dev->pDevInfo;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if (nvswitch_does_link_need_termination_enabled(device, link))
    {
        NVSWITCH_PRINT(device, INFO,
@@ -2094,6 +2115,13 @@ nvswitch_request_tl_link_state_lr10
    NvlStatus status = NVL_SUCCESS;
    NvU32 linkStatus;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if (!NVSWITCH_IS_LINK_ENG_VALID_LR10(device, NVLIPT_LNK, link->linkNumber))
    {
        NVSWITCH_PRINT(device, ERROR,
@@ -8186,6 +8186,44 @@ nvswitch_tnvl_get_status_lr10
    return -NVL_ERR_NOT_SUPPORTED;
}

NvlStatus
nvswitch_send_tnvl_prelock_cmd_lr10
(
    nvswitch_device *device
)
{
    return -NVL_ERR_NOT_SUPPORTED;
}

void
nvswitch_tnvl_disable_interrupts_lr10
(
    nvswitch_device *device
)
{
    return;
}

void
nvswitch_reg_write_32_lr10
(
    nvswitch_device *device,
    NvU32 offset,
    NvU32 data
)
{
    if (device->nvlink_device->pciInfo.bars[0].pBar == NULL)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: register write failed at offset 0x%x\n",
            __FUNCTION__, offset);
        return;
    }

    // Write the register
    nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + offset, data);
}

//
// This function auto creates the lr10 HAL connectivity from the NVSWITCH_INIT_HAL
// macro in haldef_nvswitch.h
@@ -386,6 +386,13 @@ nvswitch_is_cci_supported_ls10
    nvswitch_device *device
)
{
    // Skip CCI on TNVL mode
    if (nvswitch_is_tnvl_mode_enabled(device))
    {
        NVSWITCH_PRINT(device, INFO, "CCI is not supported on TNVL mode\n");
        return NV_FALSE;
    }

    if (FLD_TEST_DRF(_SWITCH_REGKEY, _CCI_CONTROL, _ENABLE, _FALSE,
                     device->regkeys.cci_control))
    {

[File diff suppressed because it is too large]
@@ -160,6 +160,13 @@ nvswitch_corelib_training_complete_ls10
{
    nvswitch_device *device = link->dev->pDevInfo;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return; // NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    nvswitch_init_dlpl_interrupts(link);
    _nvswitch_configure_reserved_throughput_counters(link);

@@ -265,6 +272,13 @@ nvswitch_corelib_set_tx_mode_ls10
    NvU32 val;
    NvlStatus status = NVL_SUCCESS;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if (!NVSWITCH_IS_LINK_ENG_VALID_LS10(device, NVLDL, link->linkNumber))
    {
        NVSWITCH_PRINT(device, ERROR,
@@ -352,6 +366,13 @@ nvswitch_corelib_set_dl_link_mode_ls10
    NvBool keepPolling;
    NVSWITCH_TIMEOUT timeout;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if (!NVSWITCH_IS_LINK_ENG_VALID_LS10(device, NVLDL, link->linkNumber))
    {
        NVSWITCH_PRINT(device, ERROR,
@@ -494,6 +515,13 @@ nvswitch_corelib_get_rx_detect_ls10
    NvlStatus status;
    nvswitch_device *device = link->dev->pDevInfo;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    status = nvswitch_minion_get_rxdet_status_ls10(device, link->linkNumber);

    if (status != NVL_SUCCESS)
@@ -590,13 +618,22 @@ nvswitch_corelib_get_tl_link_mode_ls10
    {
        case NV_NVLIPT_LNK_CTRL_LINK_STATE_STATUS_CURRENTLINKSTATE_ACTIVE:

            // If using ALI, ensure that the request to active completed
            if (link->dev->enableALI)
            if (nvswitch_is_tnvl_mode_locked(device))
            {
                status = nvswitch_wait_for_tl_request_ready_ls10(link);
                NVSWITCH_PRINT(device, ERROR,
                    "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
                *mode = NVLINK_LINKSTATE_HS;
            }
            else
            {
                // If using ALI, ensure that the request to active completed
                if (link->dev->enableALI)
                {
                    status = nvswitch_wait_for_tl_request_ready_ls10(link);
                }

                *mode = (status == NVL_SUCCESS) ? NVLINK_LINKSTATE_HS : NVLINK_LINKSTATE_OFF;
                *mode = (status == NVL_SUCCESS) ? NVLINK_LINKSTATE_HS : NVLINK_LINKSTATE_OFF;
            }
            break;

        case NV_NVLIPT_LNK_CTRL_LINK_STATE_STATUS_CURRENTLINKSTATE_L2:
@@ -995,6 +1032,13 @@ nvswitch_launch_ALI_link_training_ls10
{
    NvlStatus status = NVL_SUCCESS;

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s(%d): Security locked\n", __FUNCTION__, __LINE__);
        return NVL_ERR_INSUFFICIENT_PERMISSIONS;
    }

    if ((link == NULL) ||
        !NVSWITCH_IS_LINK_ENG_VALID_LS10(device, NVLIPT_LNK, link->linkNumber) ||
        (link->linkNumber >= NVSWITCH_NVLINK_MAX_LINKS))
@@ -2979,13 +2979,6 @@ nvswitch_is_soe_supported_ls10
        NVSWITCH_PRINT(device, WARN, "SOE can not be disabled via regkey.\n");
    }

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT(device, INFO,
            "SOE is not supported when TNVL mode is locked\n");
        return NV_FALSE;
    }

    return NV_TRUE;
}

@@ -3033,13 +3026,6 @@ nvswitch_is_inforom_supported_ls10
        return NV_FALSE;
    }

    if (nvswitch_is_tnvl_mode_enabled(device))
    {
        NVSWITCH_PRINT(device, INFO,
            "INFOROM is not supported when TNVL mode is enabled\n");
        return NV_FALSE;
    }

    if (!nvswitch_is_soe_supported(device))
    {
        NVSWITCH_PRINT(device, INFO,
@@ -4421,7 +4407,14 @@ nvswitch_eng_wr_ls10
        return;
    }

    nvswitch_reg_write_32(device, base_addr + offset, data);
    if (nvswitch_is_tnvl_mode_enabled(device))
    {
        nvswitch_tnvl_eng_wr_32_ls10(device, eng_id, eng_bcast, eng_instance, base_addr, offset, data);
    }
    else
    {
        nvswitch_reg_write_32(device, base_addr + offset, data);
    }

#if defined(DEVELOP) || defined(DEBUG) || defined(NV_MODS)
    {
@@ -4438,6 +4431,33 @@ nvswitch_eng_wr_ls10
#endif //defined(DEVELOP) || defined(DEBUG) || defined(NV_MODS)
}

void
nvswitch_reg_write_32_ls10
(
    nvswitch_device *device,
    NvU32 offset,
    NvU32 data
)
{
    if (device->nvlink_device->pciInfo.bars[0].pBar == NULL)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: register write failed at offset 0x%x\n",
            __FUNCTION__, offset);
        return;
    }

    if (nvswitch_is_tnvl_mode_enabled(device))
    {
        nvswitch_tnvl_reg_wr_32_ls10(device, offset, data);
    }
    else
    {
        // Write the register
        nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + offset, data);
    }
}

NvU32
nvswitch_get_link_eng_inst_ls10
(
@@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2020-2022 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2020-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@@ -559,6 +559,174 @@ nvswitch_soe_disable_nport_fatal_interrupts_ls10
    }
}

/*
 * @Brief : Perform register writes in SOE during TNVL
 *
 * @param[in] device
 * @param[in] offset
 * @param[in] data
 */
NvlStatus
nvswitch_soe_reg_wr_32_ls10
(
    nvswitch_device *device,
    NvU32 offset,
    NvU32 data
)
{
    FLCN *pFlcn;
    NvU32 cmdSeqDesc = 0;
    NV_STATUS status;
    RM_FLCN_CMD_SOE cmd;
    NVSWITCH_TIMEOUT timeout;
    RM_SOE_TNVL_CMD_REGISTER_WRITE *pRegisterWrite;
    NVSWITCH_GET_BIOS_INFO_PARAMS params = { 0 };

    if (!nvswitch_is_soe_supported(device))
    {
        NVSWITCH_PRINT(device, INFO,
            "%s: SOE is not supported\n",
            __FUNCTION__);
        return NVL_SUCCESS; // -NVL_ERR_NOT_SUPPORTED
    }

    if (device->nvlink_device->pciInfo.bars[0].pBar == NULL)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: register write failed at offset 0x%x\n",
            __FUNCTION__, offset);
        return -NVL_IO_ERROR;
    }

    status = device->hal.nvswitch_ctrl_get_bios_info(device, &params);
    if ((status != NVL_SUCCESS) || ((params.version & SOE_VBIOS_VERSION_MASK) <
            SOE_VBIOS_REVLOCK_SOE_PRI_CHECKS))
    {
        nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + offset, data);
        return NVL_SUCCESS;
    }

    pFlcn = device->pSoe->pFlcn;

    nvswitch_os_memset(&cmd, 0, sizeof(cmd));

    cmd.hdr.unitId = RM_SOE_UNIT_TNVL;
    cmd.hdr.size = RM_SOE_CMD_SIZE(TNVL, REGISTER_WRITE);

    pRegisterWrite = &cmd.cmd.tnvl.registerWrite;
    pRegisterWrite->cmdType = RM_SOE_TNVL_CMD_ISSUE_REGISTER_WRITE;
    pRegisterWrite->offset = offset;
    pRegisterWrite->data = data;

    nvswitch_timeout_create(NVSWITCH_INTERVAL_5MSEC_IN_NS, &timeout);
    status = flcnQueueCmdPostBlocking(device, pFlcn,
                                      (PRM_FLCN_CMD)&cmd,
                                      NULL, // pMsg
                                      NULL, // pPayload
                                      SOE_RM_CMDQ_LOG_ID,
                                      &cmdSeqDesc,
                                      &timeout);
    if (status != NV_OK)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: Failed to send REGISTER_WRITE command to SOE, offset = 0x%x, data = 0x%x\n",
            __FUNCTION__, offset, data);
        return -NVL_ERR_GENERIC;
    }

    return NVL_SUCCESS;
}

/*
 * @Brief : Perform engine writes in SOE during TNVL
 *
 * @param[in] device
 * @param[in] eng_id        NVSWITCH_ENGINE_ID*
 * @param[in] eng_bcast     NVSWITCH_GET_ENG_DESC_TYPE*
 * @param[in] eng_instance
 * @param[in] base_addr
 * @param[in] offset
 * @param[in] data
 */
NvlStatus
nvswitch_soe_eng_wr_32_ls10
(
    nvswitch_device *device,
    NVSWITCH_ENGINE_ID eng_id,
    NvU32 eng_bcast,
    NvU32 eng_instance,
    NvU32 base_addr,
    NvU32 offset,
    NvU32 data
)
{
    FLCN *pFlcn;
    NvU32 cmdSeqDesc = 0;
    NV_STATUS status;
    RM_FLCN_CMD_SOE cmd;
    NVSWITCH_TIMEOUT timeout;
    RM_SOE_TNVL_CMD_ENGINE_WRITE *pEngineWrite;
    NVSWITCH_GET_BIOS_INFO_PARAMS params = { 0 };

    if (!nvswitch_is_soe_supported(device))
    {
        NVSWITCH_PRINT(device, INFO,
            "%s: SOE is not supported\n",
            __FUNCTION__);
        return NVL_SUCCESS; // -NVL_ERR_NOT_SUPPORTED
    }

    if (device->nvlink_device->pciInfo.bars[0].pBar == NULL)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: register write failed at offset 0x%x\n",
            __FUNCTION__, offset);
        return -NVL_IO_ERROR;
    }

    status = device->hal.nvswitch_ctrl_get_bios_info(device, &params);
    if ((status != NVL_SUCCESS) || ((params.version & SOE_VBIOS_VERSION_MASK) <
            SOE_VBIOS_REVLOCK_SOE_PRI_CHECKS))
    {
        nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + base_addr + offset, data);
        return NVL_SUCCESS;
    }

    pFlcn = device->pSoe->pFlcn;

    nvswitch_os_memset(&cmd, 0, sizeof(cmd));

    cmd.hdr.unitId = RM_SOE_UNIT_TNVL;
    cmd.hdr.size = RM_SOE_CMD_SIZE(TNVL, ENGINE_WRITE);

    pEngineWrite = &cmd.cmd.tnvl.engineWrite;
    pEngineWrite->cmdType = RM_SOE_TNVL_CMD_ISSUE_ENGINE_WRITE;
    pEngineWrite->eng_id = eng_id;
    pEngineWrite->eng_bcast = eng_bcast;
    pEngineWrite->eng_instance = eng_instance;
    pEngineWrite->base = base_addr;
    pEngineWrite->offset = offset;
    pEngineWrite->data = data;

    nvswitch_timeout_create(NVSWITCH_INTERVAL_5MSEC_IN_NS, &timeout);
    status = flcnQueueCmdPostBlocking(device, pFlcn,
                                      (PRM_FLCN_CMD)&cmd,
                                      NULL, // pMsg
                                      NULL, // pPayload
                                      SOE_RM_CMDQ_LOG_ID,
                                      &cmdSeqDesc,
                                      &timeout);
    if (status != NV_OK)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: Failed to send ENGINE_WRITE command to SOE, offset = 0x%x, data = 0x%x\n",
            __FUNCTION__, offset, data);
        return -NVL_ERR_GENERIC;
    }

    return NVL_SUCCESS;
}

/*
 * @Brief : Init sequence for SOE FSP RISCV image
 *
@@ -609,14 +777,21 @@ nvswitch_init_soe_ls10
    }

    // Register SOE callbacks
    status = nvswitch_soe_register_event_callbacks(device);
    if (status != NVL_SUCCESS)
    if (!nvswitch_is_tnvl_mode_enabled(device))
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_SOE_COMMAND_QUEUE,
            "Failed to register SOE events\n");
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_SOE_BOOTSTRAP,
            "SOE init failed(2)\n");
        return status;
        status = nvswitch_soe_register_event_callbacks(device);
        if (status != NVL_SUCCESS)
        {
            NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_SOE_COMMAND_QUEUE,
                "Failed to register SOE events\n");
            NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_SOE_BOOTSTRAP,
                "SOE init failed(2)\n");
            return status;
        }
    }
    else
    {
        NVSWITCH_PRINT(device, INFO, "Skipping registering SOE callbacks since TNVL is enabled\n");
    }

    // Sanity the command and message queues as a final check
@@ -825,7 +1000,6 @@ _soeService_LS10
)
{
    NvBool bRecheckMsgQ = NV_FALSE;
    NvBool bRecheckPrintQ = NV_FALSE;
    NvU32 clearBits = 0;
    NvU32 intrStatus;
    PFLCN pFlcn = ENG_GET_FLCN(pSoe);
@@ -891,8 +1065,6 @@ _soeService_LS10
        NVSWITCH_PRINT(device, INFO,
            "%s: Received a SWGEN1 interrupt\n",
            __FUNCTION__);
        flcnDebugBufferDisplay_HAL(device, pFlcn);
        bRecheckPrintQ = NV_TRUE;
    }

    // Clear any sources that were serviced and get the new status.
@@ -928,22 +1100,6 @@ _soeService_LS10
        }
    }

    //
    // If we just processed a SWGEN1 interrupt (Debug Buffer interrupt), peek
    // into the Debug Buffer and see if any text was missed the last time
    // the buffer was displayed (above). If it is not empty, re-generate SWGEN1
    // (since it is now cleared) and exit. As long as an interrupt is pending,
    // this function will be re-entered and the message(s) will be processed.
    //
    if (bRecheckPrintQ)
    {
        if (!flcnDebugBufferIsEmpty_HAL(device, pFlcn))
        {
            flcnRegWrite_HAL(device, pFlcn, NV_PFALCON_FALCON_IRQSSET,
                DRF_DEF(_PFALCON, _FALCON_IRQSSET, _SWGEN1, _SET));
        }
    }

    flcnIntrRetrigger_HAL(device, pFlcn);

    return intrStatus;
@@ -1363,6 +1519,71 @@ _soeI2CAccess_LS10
    return ret;
}

/*
 * @Brief : Send TNVL Pre Lock command to SOE
 *
 * @param[in] device
 */
NvlStatus
nvswitch_send_tnvl_prelock_cmd_ls10
(
    nvswitch_device *device
)
{
    FLCN *pFlcn;
    NvU32 cmdSeqDesc = 0;
    NV_STATUS status;
    RM_FLCN_CMD_SOE cmd;
    NVSWITCH_TIMEOUT timeout;
    RM_SOE_TNVL_CMD_PRE_LOCK_SEQUENCE *pTnvlPreLock;
    NVSWITCH_GET_BIOS_INFO_PARAMS params = { 0 };

    if (!nvswitch_is_soe_supported(device))
    {
        NVSWITCH_PRINT(device, INFO, "%s: SOE is not supported\n",
            __FUNCTION__);
        return -NVL_ERR_NOT_SUPPORTED;
    }

    status = device->hal.nvswitch_ctrl_get_bios_info(device, &params);
    if ((status != NVL_SUCCESS) || ((params.version & SOE_VBIOS_VERSION_MASK) <
            SOE_VBIOS_REVLOCK_TNVL_PRELOCK_COMMAND))
    {
        NVSWITCH_PRINT(device, INFO,
            "%s: Skipping TNVL_CMD_PRE_LOCK_SEQUENCE command to SOE. Update firmware "
            "from .%02X to .%02X\n",
            __FUNCTION__, (NvU32)((params.version & SOE_VBIOS_VERSION_MASK) >> 16),
            SOE_VBIOS_REVLOCK_TNVL_PRELOCK_COMMAND);
        return -NVL_ERR_NOT_SUPPORTED;
    }

    pFlcn = device->pSoe->pFlcn;

    nvswitch_os_memset(&cmd, 0, sizeof(cmd));
    cmd.hdr.unitId = RM_SOE_UNIT_TNVL;
    cmd.hdr.size = RM_SOE_CMD_SIZE(TNVL, PRE_LOCK_SEQUENCE);

    pTnvlPreLock = &cmd.cmd.tnvl.preLockSequence;
    pTnvlPreLock->cmdType = RM_SOE_TNVL_CMD_ISSUE_PRE_LOCK_SEQUENCE;

    nvswitch_timeout_create(NVSWITCH_INTERVAL_5MSEC_IN_NS, &timeout);
    status = flcnQueueCmdPostBlocking(device, pFlcn,
                                      (PRM_FLCN_CMD)&cmd,
                                      NULL, // pMsg
                                      NULL, // pPayload
                                      SOE_RM_CMDQ_LOG_ID,
                                      &cmdSeqDesc,
                                      &timeout);
    if (status != NV_OK)
    {
        NVSWITCH_PRINT(device, ERROR, "%s: Failed to send PRE_LOCK_SEQUENCE command to SOE, status 0x%x\n",
            __FUNCTION__, status);
        return -NVL_ERR_GENERIC;
    }

    return NVL_SUCCESS;
}

/**
 * @brief set hal function pointers for functions defined in LR10 (i.e. this file)
 *
@@ -936,6 +936,7 @@ nvswitch_nvs_top_prod_ls10
    NVSWITCH_ENG_WR32(device, SYS_PRI_RS_CTRL, , 0, _PPRIV_RS_CTRL_SYS, _CG1,
        DRF_DEF(_PPRIV_RS_CTRL_SYS, _CG1, _SLCG, __PROD));

#if 0
    NVSWITCH_ENG_WR32(device, XAL, , 0, _XAL_EP, _CG,
        DRF_DEF(_XAL_EP, _CG, _IDLE_CG_DLY_CNT, __PROD) |
        DRF_DEF(_XAL_EP, _CG, _IDLE_CG_EN, __PROD) |
@@ -961,7 +962,8 @@ nvswitch_nvs_top_prod_ls10
        DRF_DEF(_XAL_EP, _CG1, _SLCG_TXMAP, __PROD) |
        DRF_DEF(_XAL_EP, _CG1, _SLCG_UNROLL_MEM, __PROD) |
        DRF_DEF(_XAL_EP, _CG1, _SLCG_UPARB, __PROD));

#endif //0

    NVSWITCH_ENG_WR32(device, XPL, , 0, _XPL, _PL_PAD_CTL_PRI_XPL_RXCLK_CG,
        DRF_DEF(_XPL, _PL_PAD_CTL_PRI_XPL_RXCLK_CG, _IDLE_CG_DLY_CNT, __PROD) |
        DRF_DEF(_XPL, _PL_PAD_CTL_PRI_XPL_RXCLK_CG, _IDLE_CG_EN, __PROD) |
@@ -1,5 +1,5 @@
/*
 * SPDX-FileCopyrightText: Copyright (c) 2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-FileCopyrightText: Copyright (c) 2023-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
 * SPDX-License-Identifier: MIT
 *
 * Permission is hereby granted, free of charge, to any person obtaining a
@@ -26,9 +26,18 @@
#include "common_nvswitch.h"
#include "haldef_nvswitch.h"
#include "ls10/ls10.h"
#include "ls10/soe_ls10.h"

#include "nvswitch/ls10/dev_nvlsaw_ip.h"
#include "nvswitch/ls10/dev_nvlsaw_ip_addendum.h"
#include "nvswitch/ls10/dev_ctrl_ip.h"
#include "nvswitch/ls10/dev_ctrl_ip_addendum.h"
#include "nvswitch/ls10/dev_cpr_ip.h"
#include "nvswitch/ls10/dev_npg_ip.h"
#include "nvswitch/ls10/dev_fsp_pri.h"
#include "nvswitch/ls10/dev_soe_ip.h"
#include "nvswitch/ls10/ptop_discovery_ip.h"
#include "nvswitch/ls10/dev_minion_ip.h"

#include <stddef.h>

@@ -947,6 +956,9 @@ nvswitch_detect_tnvl_mode_ls10
    val = NVSWITCH_SAW_RD32_LS10(device, _NVLSAW, _TNVL_MODE);
    if (FLD_TEST_DRF(_NVLSAW, _TNVL_MODE, _STATUS, _ENABLED, val))
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: TNVL Mode Detected\n",
            __FUNCTION__);
        device->tnvl_mode = NVSWITCH_DEVICE_TNVL_MODE_ENABLED;
    }

@@ -1048,3 +1060,207 @@ nvswitch_tnvl_get_status_ls10
    params->status = device->tnvl_mode;
    return NVL_SUCCESS;
}

static NvBool
_nvswitch_tnvl_eng_wr_cpu_allow_list_ls10
(
    nvswitch_device *device,
    NVSWITCH_ENGINE_ID eng_id,
    NvU32 offset
)
{
    switch (eng_id)
    {
        case NVSWITCH_ENGINE_ID_SOE:
        case NVSWITCH_ENGINE_ID_GIN:
        case NVSWITCH_ENGINE_ID_FSP:
            return NV_TRUE;
        case NVSWITCH_ENGINE_ID_SAW:
        {
            if (offset == NV_NVLSAW_DRIVER_ATTACH_DETACH)
                return NV_TRUE;
            break;
        }
        case NVSWITCH_ENGINE_ID_NPG:
        {
            if ((offset == NV_NPG_INTR_RETRIGGER(0)) ||
                (offset == NV_NPG_INTR_RETRIGGER(1)))
                return NV_TRUE;
            break;
        }
        case NVSWITCH_ENGINE_ID_CPR:
        {
            if ((offset == NV_CPR_SYS_INTR_RETRIGGER(0)) ||
                (offset == NV_CPR_SYS_INTR_RETRIGGER(1)))
                return NV_TRUE;
            break;
        }
        case NVSWITCH_ENGINE_ID_MINION:
        {
            if ((offset == NV_MINION_NVLINK_DL_STAT(0)) ||
                (offset == NV_MINION_NVLINK_DL_STAT(1)) ||
                (offset == NV_MINION_NVLINK_DL_STAT(2)) ||
                (offset == NV_MINION_NVLINK_DL_STAT(3)))
                return NV_TRUE;
            break;
        }
        default:
            return NV_FALSE;
    }

    return NV_FALSE;
}

void
nvswitch_tnvl_eng_wr_32_ls10
(
    nvswitch_device *device,
    NVSWITCH_ENGINE_ID eng_id,
    NvU32 eng_bcast,
    NvU32 eng_instance,
    NvU32 base_addr,
    NvU32 offset,
    NvU32 data
)
{
    if (device->nvlink_device->pciInfo.bars[0].pBar == NULL)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: register write failed at offset 0x%x\n",
            __FUNCTION__, offset);
        return;
    }

    if (!nvswitch_is_tnvl_mode_enabled(device))
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "ENG reg-write failed. TNVL mode is not enabled\n");
        return;
    }

    if (_nvswitch_tnvl_eng_wr_cpu_allow_list_ls10(device, eng_id, offset))
    {
        nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + base_addr + offset, data);
        return;
    }

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "TNVL ENG_WR failure - 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
            eng_id, eng_instance, eng_bcast, base_addr, offset, data);

        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "TNVL mode is locked\n");
        return;
    }

    if (nvswitch_soe_eng_wr_32_ls10(device, eng_id, eng_bcast, eng_instance, base_addr, offset, data) != NVL_SUCCESS)
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "TNVL ENG_WR failure - 0x%x 0x%x 0x%x 0x%x 0x%x 0x%x\n",
            eng_id, eng_instance, eng_bcast, base_addr, offset, data);

        NVSWITCH_PRINT(device, ERROR,
            "%s: SOE ENG_WR failed for 0x%x[%d] %s @0x%08x+0x%06x = 0x%08x\n",
            __FUNCTION__,
            eng_id, eng_instance,
            (
                (eng_bcast == NVSWITCH_GET_ENG_DESC_TYPE_UNICAST) ? "UC" :
                (eng_bcast == NVSWITCH_GET_ENG_DESC_TYPE_BCAST) ? "BC" :
                (eng_bcast == NVSWITCH_GET_ENG_DESC_TYPE_MULTICAST) ? "MC" :
                "??"
            ),
            base_addr, offset, data);
    }
}

static NvBool
_nvswitch_tnvl_reg_wr_cpu_allow_list_ls10
(
    nvswitch_device *device,
    NvU32 offset
)
{
    if ((offset >= DRF_BASE(NV_PFSP)) &&
        (offset <= DRF_EXTENT(NV_PFSP)))
    {
        return NV_TRUE;
    }

    if ((offset >= NV_PTOP_UNICAST_SW_DEVICE_BASE_SOE_0 + DRF_BASE(NV_SOE)) &&
        (offset <= NV_PTOP_UNICAST_SW_DEVICE_BASE_SOE_0 + DRF_EXTENT(NV_SOE)))
    {
        return NV_TRUE;
    }

    return NV_FALSE;
}

void
nvswitch_tnvl_reg_wr_32_ls10
(
    nvswitch_device *device,
    NvU32 offset,
    NvU32 data
)
{
    if (device->nvlink_device->pciInfo.bars[0].pBar == NULL)
    {
        NVSWITCH_PRINT(device, ERROR,
            "%s: register write failed at offset 0x%x\n",
            __FUNCTION__, offset);
        NVSWITCH_ASSERT(0);
        return;
    }

    if (!nvswitch_is_tnvl_mode_enabled(device))
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "Reg-write failed. TNVL mode is not enabled\n");
        return;
    }

    if (_nvswitch_tnvl_reg_wr_cpu_allow_list_ls10(device, offset))
    {
        nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + offset, data);
        return;
    }

    if (nvswitch_is_tnvl_mode_locked(device))
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "TNVL REG_WR failure - 0x%08x, 0x%08x\n", offset, data);

        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "TNVL mode is locked\n");
        return;
    }

    if (nvswitch_soe_reg_wr_32_ls10(device, offset, data) != NVL_SUCCESS)
    {
        NVSWITCH_PRINT_SXID(device, NVSWITCH_ERR_HW_HOST_TNVL_ERROR,
            "TNVL REG_WR failure - 0x%08x, 0x%08x\n", offset, data);
    }
}

void
nvswitch_tnvl_disable_interrupts_ls10
(
    nvswitch_device *device
)
{
    //
    // In TNVL locked disable non-fatal NVLW, NPG, and legacy interrupt,
    // disable additional non-fatals on those partitions.
    //
    NVSWITCH_ENG_WR32(device, GIN, , 0, _CTRL, _CPU_INTR_LEAF_EN_CLEAR(NV_CTRL_CPU_INTR_NVLW_NON_FATAL_IDX),
        0xFFFF);

    NVSWITCH_ENG_WR32(device, GIN, , 0, _CTRL, _CPU_INTR_LEAF_EN_CLEAR(NV_CTRL_CPU_INTR_NPG_NON_FATAL_IDX),
        0xFFFF);

    NVSWITCH_ENG_WR32(device, GIN, , 0, _CTRL, _CPU_INTR_LEAF_EN_CLEAR(NV_CTRL_CPU_INTR_UNITS_IDX),
        0xFFFFFFFF);
}

@ -1,5 +1,5 @@
|
||||
/*
|
||||
* SPDX-FileCopyrightText: Copyright (c) 2017-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
|
||||
* SPDX-FileCopyrightText: Copyright (c) 2017-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
|
||||
* SPDX-License-Identifier: MIT
|
||||
*
|
||||
* Permission is hereby granted, free of charge, to any person obtaining a
|
||||
@ -1021,6 +1021,15 @@ _nvswitch_ctrl_get_tnvl_status
|
||||
return device->hal.nvswitch_tnvl_get_status(device, params);
|
||||
}
|
||||
|
||||
void
|
||||
nvswitch_tnvl_disable_interrupts
|
||||
(
|
||||
nvswitch_device *device
|
||||
)
|
||||
{
|
||||
device->hal.nvswitch_tnvl_disable_interrupts(device);
|
||||
}
|
||||
|
||||
static NvlStatus
|
||||
_nvswitch_construct_soe
|
||||
(
|
||||
@ -1860,9 +1869,16 @@ nvswitch_lib_initialize_device
|
||||
(void)device->hal.nvswitch_read_oob_blacklist_state(device);
|
||||
(void)device->hal.nvswitch_write_fabric_state(device);
|
||||
|
||||
nvswitch_task_create(device, &nvswitch_fabric_state_heartbeat,
|
||||
NVSWITCH_HEARTBEAT_INTERVAL_NS,
|
||||
NVSWITCH_TASK_TYPE_FLAGS_RUN_EVEN_IF_DEVICE_NOT_INITIALIZED);
|
||||
if (!nvswitch_is_tnvl_mode_enabled(device))
|
||||
{
|
||||
nvswitch_task_create(device, &nvswitch_fabric_state_heartbeat,
|
||||
NVSWITCH_HEARTBEAT_INTERVAL_NS,
|
||||
NVSWITCH_TASK_TYPE_FLAGS_RUN_EVEN_IF_DEVICE_NOT_INITIALIZED);
|
||||
}
|
||||
else
|
||||
{
|
||||
NVSWITCH_PRINT(device, INFO, "Skipping Fabric state heartbeat background task when TNVL is enabled\n");
|
||||
}
|
||||
|
||||
//
|
||||
// Blacklisted devices return successfully in order to preserve the fabric state heartbeat
|
||||
@ -1966,12 +1982,26 @@ nvswitch_lib_initialize_device
|
||||
|
||||
if (device->regkeys.latency_counter == NV_SWITCH_REGKEY_LATENCY_COUNTER_LOGGING_ENABLE)
|
||||
{
|
||||
nvswitch_task_create(device, &nvswitch_internal_latency_bin_log,
|
||||
nvswitch_get_latency_sample_interval_msec(device) * NVSWITCH_INTERVAL_1MSEC_IN_NS * 9/10, 0);
|
||||
if (!nvswitch_is_tnvl_mode_enabled(device))
|
||||
{
|
||||
nvswitch_task_create(device, &nvswitch_internal_latency_bin_log,
|
||||
nvswitch_get_latency_sample_interval_msec(device) * NVSWITCH_INTERVAL_1MSEC_IN_NS * 9/10, 0);
|
||||
}
|
||||
else
|
||||
{
|
||||
NVSWITCH_PRINT(device, INFO, "Skipping Internal latency background task when TNVL is enabled\n");
|
||||
}
|
||||
}
|
||||
|
||||
nvswitch_task_create(device, &nvswitch_ecc_writeback_task,
|
||||
(60 * NVSWITCH_INTERVAL_1SEC_IN_NS), 0);
|
||||
if (!nvswitch_is_tnvl_mode_enabled(device))
|
||||
{
|
||||
nvswitch_task_create(device, &nvswitch_ecc_writeback_task,
|
||||
(60 * NVSWITCH_INTERVAL_1SEC_IN_NS), 0);
|
||||
}
|
||||
else
|
||||
{
|
||||
NVSWITCH_PRINT(device, INFO, "Skipping ECC writeback background task when TNVL is enabled\n");
|
||||
}
|
||||
|
||||
if (IS_RTLSIM(device) || IS_EMULATION(device) || IS_FMODEL(device))
|
||||
{
|
||||
@ -1981,8 +2011,15 @@ nvswitch_lib_initialize_device
|
||||
}
|
||||
else
|
||||
{
|
||||
nvswitch_task_create(device, &nvswitch_monitor_thermal_alert,
|
||||
100*NVSWITCH_INTERVAL_1MSEC_IN_NS, 0);
|
||||
if (!nvswitch_is_tnvl_mode_enabled(device))
|
||||
{
|
||||
nvswitch_task_create(device, &nvswitch_monitor_thermal_alert,
|
||||
100*NVSWITCH_INTERVAL_1MSEC_IN_NS, 0);
|
||||
}
|
||||
else
|
||||
{
|
||||
NVSWITCH_PRINT(device, INFO, "Skipping Thermal alert background task when TNVL is enabled\n");
|
||||
}
|
||||
}
|
||||
|
||||
device->nvlink_device->initialized = 1;
|
||||
@ -4927,10 +4964,7 @@ nvswitch_reg_write_32
|
||||
device->nvlink_device->pciInfo.bars[0].baseAddr, offset, data);
|
||||
#endif
|
||||
|
||||
// Write the register
|
||||
nvswitch_os_mem_write32((NvU8 *)device->nvlink_device->pciInfo.bars[0].pBar + offset, data);
|
||||
|
||||
return;
|
||||
device->hal.nvswitch_reg_write_32(device, offset, data);
|
||||
}
|
||||
|
||||
NvU64
|
||||
@@ -5968,6 +6002,15 @@ nvswitch_tnvl_send_fsp_lock_config
     return device->hal.nvswitch_tnvl_send_fsp_lock_config(device);
 }

+NvlStatus
+nvswitch_send_tnvl_prelock_cmd
+(
+    nvswitch_device *device
+)
+{
+    return device->hal.nvswitch_send_tnvl_prelock_cmd(device);
+}
+
 static NvlStatus
 _nvswitch_ctrl_set_device_tnvl_lock
 (
@@ -6001,8 +6044,18 @@ _nvswitch_ctrl_set_device_tnvl_lock

     //
     // Disable non-fatal and legacy interrupts
     // Disable commands to SOE
     //
     nvswitch_tnvl_disable_interrupts(device);

+    //
+    // Send Pre-Lock sequence command to SOE
+    //
+    status = nvswitch_send_tnvl_prelock_cmd(device);
+    if (status != NVL_SUCCESS)
+    {
+        return status;
+    }
+
     // Send lock-config command to FSP
     status = nvswitch_tnvl_send_fsp_lock_config(device);
@@ -6018,6 +6071,141 @@ _nvswitch_ctrl_set_device_tnvl_lock
     return status;
 }

+/*
+ * Service ioctls supported when TNVL mode is locked
+ */
+NvlStatus
+nvswitch_lib_ctrl_tnvl_lock_only
+(
+    nvswitch_device *device,
+    NvU32 cmd,
+    void *params,
+    NvU64 size,
+    void *osPrivate
+)
+{
+    NvlStatus retval;
+    NvU64 flags = 0;
+
+    if (!NVSWITCH_IS_DEVICE_ACCESSIBLE(device) || params == NULL)
+    {
+        return -NVL_BAD_ARGS;
+    }
+
+    flags = NVSWITCH_DEV_CMD_CHECK_ADMIN | NVSWITCH_DEV_CMD_CHECK_FM;
+    switch (cmd)
+    {
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_INFOROM_VERSION,
+                                  _nvswitch_ctrl_get_inforom_version,
+                                  NVSWITCH_GET_INFOROM_VERSION_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_NVLINK_MAX_ERROR_RATES,
+                _nvswitch_ctrl_get_inforom_nvlink_max_correctable_error_rate,
+                NVSWITCH_GET_NVLINK_MAX_CORRECTABLE_ERROR_RATES_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_NVLINK_ERROR_COUNTS,
+                _nvswitch_ctrl_get_inforom_nvlink_errors,
+                NVSWITCH_GET_NVLINK_ERROR_COUNTS_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_ECC_ERROR_COUNTS,
+                _nvswitch_ctrl_get_inforom_ecc_errors,
+                NVSWITCH_GET_ECC_ERROR_COUNTS_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_SXIDS,
+                _nvswitch_ctrl_get_inforom_bbx_sxid,
+                NVSWITCH_GET_SXIDS_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_SYS_INFO,
+                _nvswitch_ctrl_get_inforom_bbx_sys_info,
+                NVSWITCH_GET_SYS_INFO_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_TIME_INFO,
+                _nvswitch_ctrl_get_inforom_bbx_time_info,
+                NVSWITCH_GET_TIME_INFO_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_TEMP_DATA,
+                _nvswitch_ctrl_get_inforom_bbx_temp_data,
+                NVSWITCH_GET_TEMP_DATA_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_TEMP_SAMPLES,
+                _nvswitch_ctrl_get_inforom_bbx_temp_samples,
+                NVSWITCH_GET_TEMP_SAMPLES_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH(
+                CTRL_NVSWITCH_GET_ATTESTATION_CERTIFICATE_CHAIN,
+                _nvswitch_ctrl_get_attestation_certificate_chain,
+                NVSWITCH_GET_ATTESTATION_CERTIFICATE_CHAIN_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(
+                CTRL_NVSWITCH_GET_ATTESTATION_REPORT,
+                _nvswitch_ctrl_get_attestation_report,
+                NVSWITCH_GET_ATTESTATION_REPORT_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(
+                CTRL_NVSWITCH_GET_TNVL_STATUS,
+                _nvswitch_ctrl_get_tnvl_status,
+                NVSWITCH_GET_TNVL_STATUS_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_SET_FM_DRIVER_STATE,
+                nvswitch_ctrl_set_fm_driver_state,
+                NVSWITCH_SET_FM_DRIVER_STATE_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_ERRORS,
+                                  nvswitch_ctrl_get_errors,
+                                  NVSWITCH_GET_ERRORS_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_BIOS_INFO,
+                                  _nvswitch_ctrl_get_bios_info,
+                                  NVSWITCH_GET_BIOS_INFO_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_TEMPERATURE,
+                                  _nvswitch_ctrl_therm_read_temperature,
+                                  NVSWITCH_CTRL_GET_TEMPERATURE_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(
+                CTRL_NVSWITCH_GET_TEMPERATURE_LIMIT,
+                _nvswitch_ctrl_therm_get_temperature_limit,
+                NVSWITCH_CTRL_GET_TEMPERATURE_LIMIT_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_FATAL_ERROR_SCOPE,
+                                  _nvswitch_ctrl_get_fatal_error_scope,
+                                  NVSWITCH_GET_FATAL_ERROR_SCOPE_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_INFO,
+                                  _nvswitch_ctrl_get_info,
+                                  NVSWITCH_GET_INFO);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_VOLTAGE,
+                                  _nvswitch_ctrl_therm_read_voltage,
+                                  NVSWITCH_CTRL_GET_VOLTAGE_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_POWER,
+                                  _nvswitch_ctrl_therm_read_power,
+                                  NVSWITCH_GET_POWER_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_NVLINK_STATUS,
+                                  _nvswitch_ctrl_get_nvlink_status,
+                                  NVSWITCH_GET_NVLINK_STATUS_PARAMS);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(
+                CTRL_NVSWITCH_GET_NVLINK_ECC_ERRORS,
+                _nvswitch_ctrl_get_nvlink_ecc_errors,
+                NVSWITCH_GET_NVLINK_ECC_ERRORS_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_INTERNAL_LATENCY,
+                                  _nvswitch_ctrl_get_internal_latency,
+                                  NVSWITCH_GET_INTERNAL_LATENCY);
+        NVSWITCH_DEV_CMD_DISPATCH_PRIVILEGED(CTRL_NVSWITCH_SET_NVLINK_ERROR_THRESHOLD,
+                _nvswitch_ctrl_set_nvlink_error_threshold,
+                NVSWITCH_SET_NVLINK_ERROR_THRESHOLD_PARAMS,
+                osPrivate, flags);
+        NVSWITCH_DEV_CMD_DISPATCH(CTRL_NVSWITCH_GET_NVLINK_ERROR_THRESHOLD,
+                                  _nvswitch_ctrl_get_nvlink_error_threshold,
+                                  NVSWITCH_GET_NVLINK_ERROR_THRESHOLD_PARAMS);
+        default:
+            nvswitch_os_print(NVSWITCH_DBG_LEVEL_INFO, "ioctl %x is not permitted when TNVL is locked\n", cmd);
+            return -NVL_ERR_INSUFFICIENT_PERMISSIONS;
+    }
+
+    return retval;
+}
+
 NvlStatus
 nvswitch_lib_ctrl
 (
@@ -6031,6 +6219,11 @@ nvswitch_lib_ctrl
     NvlStatus retval;
     NvU64 flags = 0;

+    if (nvswitch_is_tnvl_mode_locked(device))
+    {
+        return nvswitch_lib_ctrl_tnvl_lock_only(device, cmd, params, size, osPrivate);
+    }
+
     if (!NVSWITCH_IS_DEVICE_ACCESSIBLE(device) || params == NULL)
     {
         return -NVL_BAD_ARGS;
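The two hunks above implement a lock-down path: while TNVL mode is locked, every ioctl is routed through `nvswitch_lib_ctrl_tnvl_lock_only`, whose `switch` serves only an explicit allow-list of (mostly read-only) commands and refuses the rest with a permission error. A minimal standalone sketch of that allow-list dispatch shape, with invented command codes and an invented status value standing in for the `CTRL_NVSWITCH_*` codes and `-NVL_ERR_INSUFFICIENT_PERMISSIONS`:

```c
/* Hypothetical command codes; the driver dispatches real CTRL_NVSWITCH_*
 * codes through the NVSWITCH_DEV_CMD_DISPATCH* macros instead. */
enum { CMD_GET_TNVL_STATUS = 1, CMD_GET_TEMPERATURE = 2, CMD_REGISTER_EVENTS = 3 };
#define ERR_INSUFFICIENT_PERMISSIONS (-13)

/* While the device is locked, only allow-listed commands are serviced;
 * everything else falls through to a permission error. */
static int dispatch_while_locked(int cmd)
{
    switch (cmd)
    {
        case CMD_GET_TNVL_STATUS:   /* read-only status/telemetry queries */
        case CMD_GET_TEMPERATURE:
            return 0;
        default:                    /* anything mutating is refused while locked */
            return ERR_INSUFFICIENT_PERMISSIONS;
    }
}
```

The design choice worth noting is default-deny: a command missing from the `switch` is rejected, so newly added ioctls are locked out until explicitly reviewed and allow-listed.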
@@ -37,6 +37,15 @@
 #include "class/cl0000.h"
 #include "nv_vgpu_types.h"

+/* DRF macros for OBJGPU::gpuId */
+#define NV0000_BUSDEVICE_DOMAIN            31:16
+#define NV0000_BUSDEVICE_BUS               15:8
+#define NV0000_BUSDEVICE_DEVICE            7:0
+
+#define GPU_32_BIT_ID_DECODE_DOMAIN(gpuId) (NvU16)DRF_VAL(0000, _BUSDEVICE, _DOMAIN, gpuId);
+#define GPU_32_BIT_ID_DECODE_BUS(gpuId)    (NvU8) DRF_VAL(0000, _BUSDEVICE, _BUS, gpuId);
+#define GPU_32_BIT_ID_DECODE_DEVICE(gpuId) (NvU8) DRF_VAL(0000, _BUSDEVICE, _DEVICE, gpuId);
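The DRF field definitions added above pack a PCI location into the 32-bit `gpuId`: domain in bits 31:16, bus in 15:8, device in 7:0. A self-contained sketch of the same bit layout, with plain shifts and masks standing in for the `DRF_VAL` machinery (function names here are illustrative, not from the driver):

```c
#include <stdint.h>

/* Same layout as the NV0000_BUSDEVICE fields: domain 31:16, bus 15:8,
 * device 7:0, decoded with explicit shifts/masks instead of DRF_VAL. */
static uint16_t gpu_id_decode_domain(uint32_t gpuId) { return (uint16_t)((gpuId >> 16) & 0xFFFFu); }
static uint8_t  gpu_id_decode_bus(uint32_t gpuId)    { return (uint8_t)((gpuId >> 8) & 0xFFu); }
static uint8_t  gpu_id_decode_device(uint32_t gpuId) { return (uint8_t)(gpuId & 0xFFu); }
```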
 /*
  * NV0000_CTRL_CMD_VGPU_CREATE_DEVICE
  *
@@ -155,24 +164,24 @@ typedef struct NV0000_CTRL_VGPU_DELETE_DEVICE_PARAMS {
 } NV0000_CTRL_VGPU_DELETE_DEVICE_PARAMS;

 /*
- * NV0000_CTRL_CMD_VGPU_VFIO_UNREGISTER_STATUS
+ * NV0000_CTRL_CMD_VGPU_VFIO_NOTIFY_RM_STATUS
  *
- * This command informs RM the status vgpu-vfio unregister for a GPU.
+ * This command informs RM the status of vgpu-vfio GPU operations such as probe and unregister.
  *
  * returnStatus [IN]
- *   This parameter provides the status vgpu-vfio unregister operation.
+ *   This parameter provides the status of vgpu-vfio GPU operation.
  *
  * gpuPciId [IN]
  *   This parameter provides the gpu id of the GPU
  */

-#define NV0000_CTRL_CMD_VGPU_VFIO_UNREGISTER_STATUS (0xc05) /* finn: Evaluated from "(FINN_NV01_ROOT_VGPU_INTERFACE_ID << 8) | NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS_MESSAGE_ID" */
+#define NV0000_CTRL_CMD_VGPU_VFIO_NOTIFY_RM_STATUS (0xc05) /* finn: Evaluated from "(FINN_NV01_ROOT_VGPU_INTERFACE_ID << 8) | NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS_MESSAGE_ID" */

-#define NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS_MESSAGE_ID (0x5U)
+#define NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS_MESSAGE_ID (0x5U)

-typedef struct NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS {
+typedef struct NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS {
     NvU32 returnStatus;
     NvU32 gpuId;
-} NV0000_CTRL_VGPU_VFIO_UNREGISTER_STATUS_PARAMS;
+} NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS;

 /* _ctrl0000vgpu_h_ */
@@ -108,6 +108,8 @@
 #define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_NONE          0
 #define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SEV       1
 #define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_INTEL_TDX     2
+#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SEV_SNP   3
+#define NV_CONF_COMPUTE_SYSTEM_CPU_CAPABILITY_AMD_SNP_VTOM  4

 #define NV_CONF_COMPUTE_SYSTEM_GPUS_CAPABILITY_NONE         0
 #define NV_CONF_COMPUTE_SYSTEM_GPUS_CAPABILITY_APM          1
@@ -152,6 +152,8 @@ NV_STATUS_CODE(NV_ERR_FABRIC_MANAGER_NOT_PRESENT,       0x0000007A, "Fabric Manag
 NV_STATUS_CODE(NV_ERR_ALREADY_SIGNALLED,                0x0000007B, "Semaphore Surface value already >= requested wait value")
 NV_STATUS_CODE(NV_ERR_QUEUE_TASK_SLOT_NOT_AVAILABLE,    0x0000007C, "PMU RPC error due to no queue slot available for this event")
 NV_STATUS_CODE(NV_ERR_KEY_ROTATION_IN_PROGRESS,         0x0000007D, "Operation not allowed as key rotation is in progress")
+NV_STATUS_CODE(NV_ERR_NVSWITCH_FABRIC_NOT_READY,        0x00000081, "Nvswitch Fabric Status or Fabric Probe is not yet complete, caller needs to retry")
+NV_STATUS_CODE(NV_ERR_NVSWITCH_FABRIC_FAILURE,          0x00000082, "Nvswitch Fabric Probe failed")

 // Warnings:
 NV_STATUS_CODE(NV_WARN_HOT_SWITCH,                      0x00010001, "WARNING Hot switch")
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 2019-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 2019-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -187,6 +187,7 @@ void libosLogDestroy(LIBOS_LOG_DECODE *logDecode);
 void libosExtractLogs(LIBOS_LOG_DECODE *logDecode, NvBool bSyncNvLog);

 void libosPreserveLogs(LIBOS_LOG_DECODE *pLogDecode);
+NvBool isLibosPreserveLogBufferFull(LIBOS_LOG_DECODE *pLogDecode, NvU32 gpuInstance);

 #ifdef __cplusplus
 }
@@ -1438,6 +1438,34 @@ void libosPreserveLogs(LIBOS_LOG_DECODE *pLogDecode)
     }
 }

+NvBool isLibosPreserveLogBufferFull(LIBOS_LOG_DECODE *pLogDecode, NvU32 gpuInstance)
+{
+    NvU64 i = (NvU32)(pLogDecode->numLogBuffers);
+    NvU32 tag = LIBOS_LOG_NVLOG_BUFFER_TAG(pLogDecode->sourceName, i * 2);
+    NVLOG_BUFFER_HANDLE handle = 0;
+    NV_STATUS status = nvlogGetBufferHandleFromTag(tag, &handle);
+
+    if (status != NV_OK)
+    {
+        return NV_FALSE;
+    }
+
+    NVLOG_BUFFER *pNvLogBuffer = NvLogLogger.pBuffers[handle];
+    if (pNvLogBuffer == NULL)
+    {
+        return NV_FALSE;
+    }
+
+    if (FLD_TEST_DRF(LOG_BUFFER, _FLAGS, _PRESERVE, _YES, pNvLogBuffer->flags) &&
+        DRF_VAL(LOG, _BUFFER_FLAGS, _GPU_INSTANCE, pNvLogBuffer->flags) == gpuInstance &&
+        (pNvLogBuffer->pos >= pNvLogBuffer->size - NV_OFFSETOF(LIBOS_LOG_NVLOG_BUFFER, data) - sizeof(NvU64)))
+    {
+        return NV_TRUE;
+    }
+
+    return NV_FALSE;
+}
+
 static NvBool findPreservedNvlogBuffer(NvU32 tag, NvU32 gpuInstance, NVLOG_BUFFER_HANDLE *pHandle)
 {
     NVLOG_BUFFER_HANDLE handle = 0;
@@ -110,7 +110,7 @@ NvBool nvHsIoctlMoveCursor(
 {
     NVHsChannelEvoRec *pHsChannel;

-    if (apiHead > ARRAY_LEN(pDispEvo->pHsChannel)) {
+    if (apiHead >= ARRAY_LEN(pDispEvo->pHsChannel)) {
         return FALSE;
     }

@@ -206,7 +206,7 @@ NvBool nvHsIoctlSetCursorImage(
     NVHsChannelEvoRec *pHsChannel;
     NVSurfaceEvoRec *pSurfaceEvo = NULL;

-    if (apiHead > ARRAY_LEN(pDispEvo->pHsChannel)) {
+    if (apiHead >= ARRAY_LEN(pDispEvo->pHsChannel)) {
         return FALSE;
     }

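Both hunks above tighten the same off-by-one bounds check: valid indices into `pHsChannel` run from 0 to `ARRAY_LEN - 1`, so the reject test must be `>=`; with `>`, `apiHead == ARRAY_LEN` slipped through and indexed one element past the end of the array. A minimal sketch of the corrected check (names here are illustrative, not the driver's):

```c
#include <stddef.h>
#include <stdbool.h>

#define ARRAY_LEN(arr) (sizeof(arr) / sizeof((arr)[0]))

/* Valid indices for an N-element array are 0..N-1, so the check must
 * reject idx >= N; `idx > N` wrongly admits idx == N. */
static bool api_head_is_valid(size_t apiHead, size_t len)
{
    return apiHead < len;
}
```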
@@ -4186,6 +4186,7 @@ static NvBool SwitchMux(
 {
     struct NvKmsSwitchMuxParams *pParams = pParamsVoid;
     const struct NvKmsSwitchMuxRequest *r = &pParams->request;
+    struct NvKmsPerOpenDev *pOpenDev;
     NVDpyEvoPtr pDpyEvo;

     pDpyEvo = GetPerOpenDpy(pOpen, r->deviceHandle, r->dispHandle, r->dpyId);
@@ -4193,7 +4194,12 @@ static NvBool SwitchMux(
         return FALSE;
     }

-    if (!nvKmsOpenDevHasSubOwnerPermissionOrBetter(GetPerOpenDev(pOpen, r->deviceHandle))) {
+    pOpenDev = GetPerOpenDev(pOpen, r->deviceHandle);
+    if (pOpenDev == NULL) {
+        return FALSE;
+    }
+
+    if (!nvKmsOpenDevHasSubOwnerPermissionOrBetter(pOpenDev)) {
         return FALSE;
     }

@@ -361,7 +361,8 @@ ARMCSALLOWLISTINFO armChipsetAllowListInfo[] =
     {PCI_VENDOR_ID_MELLANOX, 0xA2D0, CS_MELLANOX_BLUEFIELD}, // Mellanox BlueField
     {PCI_VENDOR_ID_MELLANOX, 0xA2D4, CS_MELLANOX_BLUEFIELD2},// Mellanox BlueField 2
     {PCI_VENDOR_ID_MELLANOX, 0xA2D5, CS_MELLANOX_BLUEFIELD2},// Mellanox BlueField 2 Crypto disabled
-    {PCI_VENDOR_ID_MELLANOX, 0xA2DB, CS_MELLANOX_BLUEFIELD3},// Mellanox BlueField 3
+    {PCI_VENDOR_ID_MELLANOX, 0xA2DB, CS_MELLANOX_BLUEFIELD3},// Mellanox BlueField 3 Crypto disabled
+    {PCI_VENDOR_ID_MELLANOX, 0xA2DA, CS_MELLANOX_BLUEFIELD3},// Mellanox BlueField 3 Crypto enabled
     {PCI_VENDOR_ID_AMAZON,   0x0200, CS_AMAZON_GRAVITRON2},  // Amazon Gravitron2
     {PCI_VENDOR_ID_FUJITSU,  0x1952, CS_FUJITSU_A64FX},      // Fujitsu A64FX
     {PCI_VENDOR_ID_CADENCE,  0xDC01, CS_PHYTIUM_S2500},      // Phytium S2500
@@ -1045,7 +1045,7 @@ NV_STATUS NV_API_CALL nv_vgpu_get_bar_info(nvidia_stack_t *, nv_state_t *, con
                                            NvU64 *, NvU64 *, NvU32 *, NvBool *, NvU8 *);
 NV_STATUS NV_API_CALL nv_vgpu_get_hbm_info(nvidia_stack_t *, nv_state_t *, const NvU8 *, NvU64 *, NvU64 *);
 NV_STATUS NV_API_CALL nv_vgpu_process_vf_info(nvidia_stack_t *, nv_state_t *, NvU8, NvU32, NvU8, NvU8, NvU8, NvBool, void *);
-NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *);
+NV_STATUS NV_API_CALL nv_gpu_bind_event(nvidia_stack_t *, NvU32, NvBool *);
+NV_STATUS NV_API_CALL nv_gpu_unbind_event(nvidia_stack_t *, NvU32, NvBool *);

 NV_STATUS NV_API_CALL nv_get_usermap_access_params(nv_state_t*, nv_usermap_access_params_t*);
@@ -218,6 +218,8 @@ extern NvU32 os_page_size;
 extern NvU64 os_page_mask;
 extern NvU8  os_page_shift;
 extern NvBool os_cc_enabled;
+extern NvBool os_cc_sev_snp_enabled;
+extern NvBool os_cc_snp_vtom_enabled;
 extern NvBool os_cc_tdx_enabled;
 extern NvBool os_dma_buf_enabled;
 extern NvBool os_imex_channel_is_supported;
@@ -45,6 +45,7 @@
 #include "gpu/bus/kern_bus.h"
 #include <nv_ref.h> // NV_PMC_BOOT_1_VGPU
 #include "nvdevid.h"
+#include "ctrl/ctrl0000/ctrl0000vgpu.h"

 #include "g_vgpu_chip_flags.h" // vGPU device names

@@ -799,7 +800,9 @@ NV_STATUS NV_API_CALL nv_gpu_unbind_event
 }

 NV_STATUS NV_API_CALL nv_gpu_bind_event(
-    nvidia_stack_t *sp
+    nvidia_stack_t *sp,
+    NvU32 gpuId,
+    NvBool *isEventNotified
 )
 {
     THREAD_STATE_NODE threadState;
@@ -812,7 +815,7 @@ NV_STATUS NV_API_CALL nv_gpu_bind_event(
     // LOCK: acquire API lock
     if ((rmStatus = rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_HYPERVISOR)) == NV_OK)
     {
-        CliAddSystemEvent(NV0000_NOTIFIERS_GPU_BIND_EVENT, 0, NULL);
+        CliAddSystemEvent(NV0000_NOTIFIERS_GPU_BIND_EVENT, gpuId, isEventNotified);

         // UNLOCK: release API lock
         rmapiLockRelease();
@@ -843,9 +846,9 @@ void osWakeRemoveVgpu(NvU32 gpuId, NvU32 returnStatus)
     vgpu_vfio_info vgpu_info;

     vgpu_info.return_status = returnStatus;
-    vgpu_info.domain = gpuDecodeDomain(gpuId);
-    vgpu_info.bus = gpuDecodeBus(gpuId);
-    vgpu_info.device = gpuDecodeDevice(gpuId);
+    vgpu_info.domain = GPU_32_BIT_ID_DECODE_DOMAIN(gpuId);
+    vgpu_info.bus = GPU_32_BIT_ID_DECODE_BUS(gpuId);
+    vgpu_info.device = GPU_32_BIT_ID_DECODE_DEVICE(gpuId);

     os_call_vgpu_vfio((void *)&vgpu_info, CMD_VFIO_WAKE_REMOVE_GPU);
 }
@@ -2675,6 +2675,8 @@ void osInitSystemStaticConfig(SYS_STATIC_CONFIG *pConfig)
     pConfig->bIsNotebook = rm_is_system_notebook();
     pConfig->osType = nv_get_os_type();
     pConfig->bOsCCEnabled = os_cc_enabled;
+    pConfig->bOsCCSevSnpEnabled = os_cc_sev_snp_enabled;
+    pConfig->bOsCCSnpVtomEnabled = os_cc_snp_vtom_enabled;
     pConfig->bOsCCTdxEnabled = os_cc_tdx_enabled;
 }

@@ -481,6 +481,11 @@ static NV_STATUS allocate_os_event(
         status = NV_ERR_NO_MEMORY;
         goto done;
     }
+    new_event->hParent = hParent;
+    new_event->nvfp = nvfp;
+    new_event->fd = fd;
+    new_event->active = NV_TRUE;
+    new_event->refcount = 0;

     portSyncSpinlockAcquire(nv->event_spinlock);
     for (event = nv->event_list; event; event = event->next)
@@ -501,12 +506,6 @@ static NV_STATUS allocate_os_event(
 done:
     if (status == NV_OK)
     {
-        new_event->hParent = hParent;
-        new_event->nvfp = nvfp;
-        new_event->fd = fd;
-        new_event->active = NV_TRUE;
-        new_event->refcount = 0;
-
         nvfp->bCleanupRmapi = NV_TRUE;

         NV_PRINTF(LEVEL_INFO, "allocated OS event:\n");
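The pair of hunks above moves the initialization of `new_event`'s fields from after the event was already linked into `nv->event_list` (under `nv->event_spinlock`) to before the link step, so the event is never reachable by other threads in a half-initialized state. A minimal sketch of that initialize-before-publish pattern, with invented types (the real code additionally holds a spinlock around the link):

```c
#include <stddef.h>

typedef struct os_event {
    int fd;
    int active;
    struct os_event *next;
} os_event_t;

/* Fill in every field of the node first, and only then make it
 * reachable from the shared list head. */
static void publish_event(os_event_t **list, os_event_t *e, int fd)
{
    e->fd = fd;       /* init first... */
    e->active = 1;
    e->next = *list;  /* ...then publish */
    *list = e;
}
```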
@@ -1559,24 +1558,6 @@ failed:
     return status;
 }

-static void
-RmHandleNvpcfEvents(
-    nv_state_t *pNv
-)
-{
-    OBJGPU *pGpu = NV_GET_NV_PRIV_PGPU(pNv);
-    THREAD_STATE_NODE threadState;
-
-    if (RmUnixRmApiPrologue(pNv, &threadState, RM_LOCK_MODULES_ACPI) == NULL)
-    {
-        return;
-    }
-
-    gpuNotifySubDeviceEvent(pGpu, NV2080_NOTIFIERS_NVPCF_EVENTS, NULL, 0, 0, 0);
-
-    RmUnixRmApiEpilogue(pNv, &threadState);
-}
-
 /*
  * ---------------------------------------------------------------------------
  *
@@ -4312,7 +4293,6 @@ void NV_API_CALL rm_power_source_change_event(
     THREAD_STATE_NODE threadState;
     void *fp;
     nv_state_t *nv;
-    OBJGPU *pGpu = gpumgrGetGpu(0);
     NV_STATUS rmStatus = NV_OK;

     NV_ENTER_RM_RUNTIME(sp,fp);
@@ -4321,6 +4301,7 @@
     // LOCK: acquire API lock
     if ((rmStatus = rmapiLockAcquire(API_LOCK_FLAGS_NONE, RM_LOCK_MODULES_EVENT)) == NV_OK)
     {
+        OBJGPU *pGpu = gpumgrGetGpu(0);
         if (pGpu != NULL)
         {
             nv = NV_GET_NV_STATE(pGpu);
@@ -5940,16 +5921,32 @@ void NV_API_CALL rm_acpi_nvpcf_notify(
     nvidia_stack_t *sp
 )
 {
     void *fp;
-    OBJGPU *pGpu = gpumgrGetGpu(0);
+    THREAD_STATE_NODE threadState;
+    NV_STATUS rmStatus = NV_OK;

     NV_ENTER_RM_RUNTIME(sp,fp);

-    if (pGpu != NULL)
+    threadStateInit(&threadState, THREAD_STATE_FLAGS_NONE);
+
+    // LOCK: acquire API lock
+    if ((rmStatus = rmapiLockAcquire(API_LOCK_FLAGS_NONE,
+                                     RM_LOCK_MODULES_EVENT)) == NV_OK)
     {
-        nv_state_t *nv = NV_GET_NV_STATE(pGpu);
-        RmHandleNvpcfEvents(nv);
+        OBJGPU *pGpu = gpumgrGetGpu(0);
+        if (pGpu != NULL)
+        {
+            nv_state_t *nv = NV_GET_NV_STATE(pGpu);
+            if ((rmStatus = os_ref_dynamic_power(nv, NV_DYNAMIC_PM_FINE)) ==
+                NV_OK)
+            {
+                gpuNotifySubDeviceEvent(pGpu, NV2080_NOTIFIERS_NVPCF_EVENTS,
+                                        NULL, 0, 0, 0);
+            }
+            os_unref_dynamic_power(nv, NV_DYNAMIC_PM_FINE);
+        }
+        rmapiLockRelease();
     }

+    threadStateFree(&threadState, THREAD_STATE_FLAGS_NONE);
     NV_EXIT_RM_RUNTIME(sp,fp);
 }
@@ -1,5 +1,5 @@
 /*
- * SPDX-FileCopyrightText: Copyright (c) 1999-2023 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
+ * SPDX-FileCopyrightText: Copyright (c) 1999-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  * SPDX-License-Identifier: MIT
  *
  * Permission is hereby granted, free of charge, to any person obtaining a
@@ -59,6 +59,7 @@
 #include <gpu/gsp/kernel_gsp.h>
 #include "liblogdecode.h"
 #include <gpu/fsp/kern_fsp.h>
+#include <gpu/gsp/kernel_gsp.h>

 #include <mem_mgr/virt_mem_mgr.h>
 #include <virtualization/kernel_vgpu_mgr.h>
@@ -378,6 +379,13 @@ osHandleGpuLost

     gpuSetDisconnectedProperties(pGpu);

+    if (IS_GSP_CLIENT(pGpu))
+    {
+        // Notify all channels of the error so that UVM can fail gracefully
+        KernelGsp *pKernelGsp = GPU_GET_KERNEL_GSP(pGpu);
+        kgspRcAndNotifyAllChannels(pGpu, pKernelGsp, ROBUST_CHANNEL_GPU_HAS_FALLEN_OFF_THE_BUS, NV_FALSE);
+    }
+
     // Trigger the OS's PCI recovery mechanism
     if (nv_pci_trigger_recovery(nv) != NV_OK)
     {
(File diff suppressed because it is too large)
(File diff suppressed because it is too large)
@@ -1551,6 +1551,21 @@ static const struct NVOC_EXPORTED_METHOD_DEF __nvoc_exported_method_def_RmClient
 #endif
     },
+    { /* [90] */
+#if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x4u)
+        /*pFunc=*/      (void (*)(void)) NULL,
+#else
+        /*pFunc=*/      (void (*)(void)) cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL,
+#endif // NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x4u)
+        /*flags=*/      0x4u,
+        /*accessRight=*/0x0u,
+        /*methodId=*/   0xc05u,
+        /*paramSize=*/  sizeof(NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS),
+        /*pClassInfo=*/ &(__nvoc_class_def_RmClientResource.classInfo),
+#if NV_PRINTF_STRINGS_ALLOWED
+        /*func=*/       "cliresCtrlCmdVgpuVfioNotifyRMStatus"
+#endif
+    },
     { /* [91] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1565,7 +1580,7 @@
         /*func=*/       "cliresCtrlCmdClientGetAddrSpaceType"
 #endif
     },
-    { /* [91] */
+    { /* [92] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1580,7 +1595,7 @@
         /*func=*/       "cliresCtrlCmdClientGetHandleInfo"
 #endif
     },
-    { /* [92] */
+    { /* [93] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1595,7 +1610,7 @@
         /*func=*/       "cliresCtrlCmdClientGetAccessRights"
 #endif
     },
-    { /* [93] */
+    { /* [94] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1610,7 +1625,7 @@
         /*func=*/       "cliresCtrlCmdClientSetInheritedSharePolicy"
 #endif
     },
-    { /* [94] */
+    { /* [95] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1625,7 +1640,7 @@
         /*func=*/       "cliresCtrlCmdClientGetChildHandle"
 #endif
     },
-    { /* [95] */
+    { /* [96] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1640,7 +1655,7 @@
         /*func=*/       "cliresCtrlCmdClientShareObject"
 #endif
     },
-    { /* [96] */
+    { /* [97] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1655,7 +1670,7 @@
         /*func=*/       "cliresCtrlCmdObjectsAreDuplicates"
 #endif
     },
-    { /* [97] */
+    { /* [98] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x811u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1670,7 +1685,7 @@
         /*func=*/       "cliresCtrlCmdClientSubscribeToImexChannel"
 #endif
     },
-    { /* [98] */
+    { /* [99] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x10u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1685,7 +1700,7 @@
         /*func=*/       "cliresCtrlCmdOsUnixFlushUserCache"
 #endif
     },
-    { /* [99] */
+    { /* [100] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1700,7 +1715,7 @@
         /*func=*/       "cliresCtrlCmdOsUnixExportObjectToFd"
 #endif
     },
-    { /* [100] */
+    { /* [101] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1715,7 +1730,7 @@
         /*func=*/       "cliresCtrlCmdOsUnixImportObjectFromFd"
 #endif
     },
-    { /* [101] */
+    { /* [102] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x813u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1730,7 +1745,7 @@
         /*func=*/       "cliresCtrlCmdOsUnixGetExportObjectInfo"
 #endif
     },
-    { /* [102] */
+    { /* [103] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1745,7 +1760,7 @@
         /*func=*/       "cliresCtrlCmdOsUnixCreateExportObjectFd"
 #endif
     },
-    { /* [103] */
+    { /* [104] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1760,7 +1775,7 @@
         /*func=*/       "cliresCtrlCmdOsUnixExportObjectsToFd"
 #endif
     },
-    { /* [104] */
+    { /* [105] */
 #if NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x11u)
         /*pFunc=*/      (void (*)(void)) NULL,
 #else
@@ -1780,7 +1795,7 @@

 const struct NVOC_EXPORT_INFO __nvoc_export_info_RmClientResource =
 {
-    /*numEntries=*/ 105,
+    /*numEntries=*/ 106,
     /*pExportEntries=*/ __nvoc_exported_method_def_RmClientResource
 };

@@ -2219,6 +2234,10 @@ static void __nvoc_init_funcTable_RmClientResource_1(RmClientResource *pThis) {
     pThis->__cliresCtrlCmdVgpuSetVgpuVersion__ = &cliresCtrlCmdVgpuSetVgpuVersion_IMPL;
 #endif

+#if !NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x4u)
+    pThis->__cliresCtrlCmdVgpuVfioNotifyRMStatus__ = &cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL;
+#endif
+
 #if !NVOC_EXPORTED_METHOD_DISABLED_BY_FLAG(0x10u)
     pThis->__cliresCtrlCmdSystemNVPCFGetPowerModeInfo__ = &cliresCtrlCmdSystemNVPCFGetPowerModeInfo_IMPL;
 #endif
@@ -178,6 +178,7 @@ struct RmClientResource {
     NV_STATUS (*__cliresCtrlCmdSyncGpuBoostGroupInfo__)(struct RmClientResource *, NV0000_SYNC_GPU_BOOST_GROUP_INFO_PARAMS *);
     NV_STATUS (*__cliresCtrlCmdVgpuGetVgpuVersion__)(struct RmClientResource *, NV0000_CTRL_VGPU_GET_VGPU_VERSION_PARAMS *);
     NV_STATUS (*__cliresCtrlCmdVgpuSetVgpuVersion__)(struct RmClientResource *, NV0000_CTRL_VGPU_SET_VGPU_VERSION_PARAMS *);
+    NV_STATUS (*__cliresCtrlCmdVgpuVfioNotifyRMStatus__)(struct RmClientResource *, NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *);
     NV_STATUS (*__cliresCtrlCmdSystemNVPCFGetPowerModeInfo__)(struct RmClientResource *, NV0000_CTRL_CMD_SYSTEM_NVPCF_GET_POWER_MODE_INFO_PARAMS *);
     NV_STATUS (*__cliresCtrlCmdSystemSyncExternalFabricMgmt__)(struct RmClientResource *, NV0000_CTRL_CMD_SYSTEM_SYNC_EXTERNAL_FABRIC_MGMT_PARAMS *);
     NV_STATUS (*__cliresCtrlCmdSystemPfmreqhndlrCtrl__)(struct RmClientResource *, NV0000_CTRL_SYSTEM_PFM_REQ_HNDLR_CTRL_PARAMS *);
@@ -336,6 +337,7 @@ NV_STATUS __nvoc_objCreate_RmClientResource(RmClientResource**, Dynamic*, NvU32,
 #define cliresCtrlCmdSyncGpuBoostGroupInfo(pRmCliRes, pParams) cliresCtrlCmdSyncGpuBoostGroupInfo_DISPATCH(pRmCliRes, pParams)
 #define cliresCtrlCmdVgpuGetVgpuVersion(pRmCliRes, vgpuVersionInfo) cliresCtrlCmdVgpuGetVgpuVersion_DISPATCH(pRmCliRes, vgpuVersionInfo)
 #define cliresCtrlCmdVgpuSetVgpuVersion(pRmCliRes, vgpuVersionInfo) cliresCtrlCmdVgpuSetVgpuVersion_DISPATCH(pRmCliRes, vgpuVersionInfo)
+#define cliresCtrlCmdVgpuVfioNotifyRMStatus(pRmCliRes, pVgpuDeleteParams) cliresCtrlCmdVgpuVfioNotifyRMStatus_DISPATCH(pRmCliRes, pVgpuDeleteParams)
 #define cliresCtrlCmdSystemNVPCFGetPowerModeInfo(pRmCliRes, pParams) cliresCtrlCmdSystemNVPCFGetPowerModeInfo_DISPATCH(pRmCliRes, pParams)
 #define cliresCtrlCmdSystemSyncExternalFabricMgmt(pRmCliRes, pExtFabricMgmtParams) cliresCtrlCmdSystemSyncExternalFabricMgmt_DISPATCH(pRmCliRes, pExtFabricMgmtParams)
 #define cliresCtrlCmdSystemPfmreqhndlrCtrl(pRmCliRes, pParams) cliresCtrlCmdSystemPfmreqhndlrCtrl_DISPATCH(pRmCliRes, pParams)
@@ -959,6 +961,12 @@
     return pRmCliRes->__cliresCtrlCmdVgpuSetVgpuVersion__(pRmCliRes, vgpuVersionInfo);
 }

+NV_STATUS cliresCtrlCmdVgpuVfioNotifyRMStatus_IMPL(struct RmClientResource *pRmCliRes, NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *pVgpuDeleteParams);
+
+static inline NV_STATUS cliresCtrlCmdVgpuVfioNotifyRMStatus_DISPATCH(struct RmClientResource *pRmCliRes, NV0000_CTRL_VGPU_VFIO_NOTIFY_RM_STATUS_PARAMS *pVgpuDeleteParams) {
+    return pRmCliRes->__cliresCtrlCmdVgpuVfioNotifyRMStatus__(pRmCliRes, pVgpuDeleteParams);
+}
+
 NV_STATUS cliresCtrlCmdSystemNVPCFGetPowerModeInfo_IMPL(struct RmClientResource *pRmCliRes, NV0000_CTRL_CMD_SYSTEM_NVPCF_GET_POWER_MODE_INFO_PARAMS *pParams);

 static inline NV_STATUS cliresCtrlCmdSystemNVPCFGetPowerModeInfo_DISPATCH(struct RmClientResource *pRmCliRes, NV0000_CTRL_CMD_SYSTEM_NVPCF_GET_POWER_MODE_INFO_PARAMS *pParams) {
@@ -436,42 +436,6 @@ static void __nvoc_init_funcTable_ConfidentialCompute_1(ConfidentialCompute *pTh
         }
     }

-    // Hal function -- confComputeEnableKeyRotationSupport
-    if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
-    {
-        pThis->__confComputeEnableKeyRotationSupport__ = &confComputeEnableKeyRotationSupport_56cd7a;
-    }
-    else
-    {
-        if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
-        {
-            pThis->__confComputeEnableKeyRotationSupport__ = &confComputeEnableKeyRotationSupport_GH100;
-        }
-        // default
-        else
-        {
-            pThis->__confComputeEnableKeyRotationSupport__ = &confComputeEnableKeyRotationSupport_56cd7a;
-        }
-    }
-
-    // Hal function -- confComputeEnableInternalKeyRotationSupport
-    if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
-    {
-        pThis->__confComputeEnableInternalKeyRotationSupport__ = &confComputeEnableInternalKeyRotationSupport_56cd7a;
-    }
-    else
-    {
-        if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
-        {
-            pThis->__confComputeEnableInternalKeyRotationSupport__ = &confComputeEnableInternalKeyRotationSupport_GH100;
-        }
-        // default
-        else
-        {
-            pThis->__confComputeEnableInternalKeyRotationSupport__ = &confComputeEnableInternalKeyRotationSupport_56cd7a;
-        }
-    }
-
     // Hal function -- confComputeIsDebugModeEnabled
     if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
     {
@@ -117,8 +117,6 @@ struct ConfidentialCompute {
    NV_STATUS (*__confComputeTriggerKeyRotation__)(struct OBJGPU *, struct ConfidentialCompute *);
    void (*__confComputeGetKeyPairForKeySpace__)(struct OBJGPU *, struct ConfidentialCompute *, NvU32, NvBool, NvU32 *, NvU32 *);
    NV_STATUS (*__confComputeEnableKeyRotationCallback__)(struct OBJGPU *, struct ConfidentialCompute *, NvBool);
    NV_STATUS (*__confComputeEnableKeyRotationSupport__)(struct OBJGPU *, struct ConfidentialCompute *);
    NV_STATUS (*__confComputeEnableInternalKeyRotationSupport__)(struct OBJGPU *, struct ConfidentialCompute *);
    NvBool (*__confComputeIsDebugModeEnabled__)(struct OBJGPU *, struct ConfidentialCompute *);
    NvBool (*__confComputeIsGpuCcCapable__)(struct OBJGPU *, struct ConfidentialCompute *);
    NV_STATUS (*__confComputeEstablishSpdmSessionAndKeys__)(struct OBJGPU *, struct ConfidentialCompute *);
@@ -272,10 +270,6 @@ NV_STATUS __nvoc_objCreate_ConfidentialCompute(ConfidentialCompute**, Dynamic*,
#define confComputeGetKeyPairForKeySpace_HAL(pGpu, pConfCompute, arg0, arg1, arg2, arg3) confComputeGetKeyPairForKeySpace_DISPATCH(pGpu, pConfCompute, arg0, arg1, arg2, arg3)
#define confComputeEnableKeyRotationCallback(pGpu, pConfCompute, bEnable) confComputeEnableKeyRotationCallback_DISPATCH(pGpu, pConfCompute, bEnable)
#define confComputeEnableKeyRotationCallback_HAL(pGpu, pConfCompute, bEnable) confComputeEnableKeyRotationCallback_DISPATCH(pGpu, pConfCompute, bEnable)
#define confComputeEnableKeyRotationSupport(pGpu, pConfCompute) confComputeEnableKeyRotationSupport_DISPATCH(pGpu, pConfCompute)
#define confComputeEnableKeyRotationSupport_HAL(pGpu, pConfCompute) confComputeEnableKeyRotationSupport_DISPATCH(pGpu, pConfCompute)
#define confComputeEnableInternalKeyRotationSupport(pGpu, pConfCompute) confComputeEnableInternalKeyRotationSupport_DISPATCH(pGpu, pConfCompute)
#define confComputeEnableInternalKeyRotationSupport_HAL(pGpu, pConfCompute) confComputeEnableInternalKeyRotationSupport_DISPATCH(pGpu, pConfCompute)
#define confComputeIsDebugModeEnabled(pGpu, pConfCompute) confComputeIsDebugModeEnabled_DISPATCH(pGpu, pConfCompute)
#define confComputeIsDebugModeEnabled_HAL(pGpu, pConfCompute) confComputeIsDebugModeEnabled_DISPATCH(pGpu, pConfCompute)
#define confComputeIsGpuCcCapable(pGpu, pConfCompute) confComputeIsGpuCcCapable_DISPATCH(pGpu, pConfCompute)
@@ -551,26 +545,6 @@ static inline NV_STATUS confComputeEnableKeyRotationCallback_DISPATCH(struct OBJ
    return pConfCompute->__confComputeEnableKeyRotationCallback__(pGpu, pConfCompute, bEnable);
}

NV_STATUS confComputeEnableKeyRotationSupport_GH100(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute);

static inline NV_STATUS confComputeEnableKeyRotationSupport_56cd7a(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute) {
    return NV_OK;
}

static inline NV_STATUS confComputeEnableKeyRotationSupport_DISPATCH(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute) {
    return pConfCompute->__confComputeEnableKeyRotationSupport__(pGpu, pConfCompute);
}

NV_STATUS confComputeEnableInternalKeyRotationSupport_GH100(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute);

static inline NV_STATUS confComputeEnableInternalKeyRotationSupport_56cd7a(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute) {
    return NV_OK;
}

static inline NV_STATUS confComputeEnableInternalKeyRotationSupport_DISPATCH(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute) {
    return pConfCompute->__confComputeEnableInternalKeyRotationSupport__(pGpu, pConfCompute);
}

NvBool confComputeIsDebugModeEnabled_GH100(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute);

static inline NvBool confComputeIsDebugModeEnabled_491d52(struct OBJGPU *pGpu, struct ConfidentialCompute *pConfCompute) {
@@ -1098,26 +1098,40 @@ static void __nvoc_init_funcTable_OBJGPU_1(OBJGPU *pThis) {
}

// Hal function -- gpuIsDevModeEnabledInHw
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
{
    pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_GH100;
}
// default
else
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
{
    pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_491d52;
}

// Hal function -- gpuIsProtectedPcieEnabledInHw
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
{
    pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_GH100;
}
// default
else
{
    if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
    {
        pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_GH100;
    }
    // default
    else
    {
        pThis->__gpuIsDevModeEnabledInHw__ = &gpuIsDevModeEnabledInHw_491d52;
    }
}

// Hal function -- gpuIsProtectedPcieEnabledInHw
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000001UL) )) /* RmVariantHal: VF */
{
    pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_491d52;
}
else
{
    if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x10000000UL) )) /* ChipHal: GH100 */
    {
        pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_GH100;
    }
    // default
    else
    {
        pThis->__gpuIsProtectedPcieEnabledInHw__ = &gpuIsProtectedPcieEnabledInHw_491d52;
    }
}

// Hal function -- gpuIsCtxBufAllocInPmaSupported
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x11f0fc00UL) )) /* ChipHal: GA100 | GA102 | GA103 | GA104 | GA106 | GA107 | AD102 | AD103 | AD104 | AD106 | AD107 | GH100 */