515.48.07

Commit 965db98552 (parent af26e1ea89) by Andy Ritger, 2022-05-27 16:40:24 -07:00
Signed with GPG Key ID 6D466BB75E006CFC (no known key found for this signature in database)
114 changed files with 18493 additions and 22785 deletions

CHANGELOG.md (new file, 31 lines)

@@ -0,0 +1,31 @@
# Changelog
## Release 515 Entries
### [515.48.07] 2022-05-31
#### Added
- List of compatible GPUs in README.md.
#### Fixed
- Fix various README capitalizations, [#8 by @lx-is](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/8)
- Automatically tag bug report issues, [#15 by @thebeanogamer](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/15)
- Improve conftest.sh Script, [#37 by @Nitepone](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/37)
- Update HTTP link to HTTPS, [#101 by @alcaparra](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/101)
- moved array sanity check to before the array access, [#117 by @RealAstolfo](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/117)
- Fixed some typos, [#122 by @FEDOyt](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/122)
- Fixed capitalization, [#123 by @keroeslux](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/123)
- Fix typos in NVDEC Engine Descriptor, [#126 by @TrickyDmitriy](https://github.com/NVIDIA/open-gpu-kernel-modules/pull/126)
- Extranous apostrohpes in a makefile script [sic], [#14 by @kiroma](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/14)
- HDMI no audio @4K above 60Hz, [#75 by @adolfotregosa](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/75)
- dp_configcaps.cpp:405: array index sanity check in wrong place?, [#110 by @dcb314](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/110)
- NVRM kgspInitRm_IMPL: missing NVDEC0 engine, cannot initialize GSP-RM, [#116 by @kfazz](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/116)
- ERROR: modpost: "backlight_device_register" [...nvidia-modeset.ko] undefined, [#135 by @sndirsch](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/135)
- aarch64 build fails, [#151 by @frezbo](https://github.com/NVIDIA/open-gpu-kernel-modules/issues/151)
### [515.43.04] 2022-05-11
- Initial release.

README.md (618 lines changed)

@@ -1,7 +1,7 @@
# NVIDIA Linux Open GPU Kernel Module Source
This is the source release of the NVIDIA Linux open GPU kernel modules,
-version 515.43.04.
+version 515.48.07.
## How to Build
@@ -17,7 +17,7 @@ as root:
Note that the kernel modules built here must be used with gsp.bin
firmware and user-space NVIDIA GPU driver components from a corresponding
-515.43.04 driver release. This can be achieved by installing
+515.48.07 driver release. This can be achieved by installing
the NVIDIA GPU driver from the .run file using the `--no-kernel-modules`
option. E.g.,
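As a sketch of that invocation — the `.run` filename below is an assumption based on the 515.48.07 release version and may differ for your download:

```shell
# Hypothetical installer filename; substitute the actual .run file
# downloaded from NVIDIA, then pass --no-kernel-modules so the packaged
# (closed) kernel modules are skipped in favor of the ones built here.
sh ./NVIDIA-Linux-x86_64-515.48.07.run --no-kernel-modules
```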
@@ -162,3 +162,617 @@ for the target kernel.
- `src/nvidia/` The OS-agnostic code for nvidia.ko
- `src/nvidia-modeset/` The OS-agnostic code for nvidia-modeset.ko
- `src/common/` Utility code used by one or more of nvidia.ko and nvidia-modeset.ko
## Compatible GPUs
The open-gpu-kernel-modules can be used on any Turing or later GPU
(see the table below). However, in the 515.48.07 release,
GeForce and Workstation support is still considered alpha-quality.
To enable use of the open kernel modules on GeForce and Workstation GPUs,
set the "NVreg_OpenRmEnableUnsupportedGpus" nvidia.ko kernel module
parameter to 1. For more details, see the NVIDIA GPU driver end user
README here:
https://us.download.nvidia.com/XFree86/Linux-x86_64/515.48.07/README/kernel_open.html
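The parameter can be set either at load time or persistently; the sketch below shows both, and the `.conf` filename is arbitrary (any name under `/etc/modprobe.d/` works):

```shell
# Set the parameter for the current boot (assumes nvidia.ko is not
# already loaded):
modprobe nvidia NVreg_OpenRmEnableUnsupportedGpus=1

# Or persist it across boots via a modprobe configuration file
# (hypothetical filename):
echo 'options nvidia NVreg_OpenRmEnableUnsupportedGpus=1' \
    | sudo tee /etc/modprobe.d/nvidia-open.conf
```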
In the table below, if three IDs are listed, the first is the PCI Device
ID, the second is the PCI Subsystem Vendor ID, and the third is the PCI
Subsystem Device ID.
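To compare a system's GPU against the table, the IDs can be read with `lspci` (vendor ID 10DE is NVIDIA). The helper function below is a hypothetical sketch for pulling the Device ID out of `lspci -nn` output; the example `lspci` line is illustrative.

```shell
# On a real system, list NVIDIA devices with numeric IDs first:
#   lspci -nn -d 10de:
# Each line ends with a bracketed [vendor:device] pair, e.g. [10de:1e04].

# Hypothetical helper: extract the uppercase PCI Device ID (the table's
# first column) from `lspci -nn` output on stdin.
pci_device_id() {
  sed -n 's/.*\[10de:\([0-9a-f]\{4\}\)\].*/\1/p' | tr 'a-f' 'A-F'
}

# Example with a captured lspci line; prints 1E04:
echo '01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU102 [10de:1e04] (rev a1)' \
    | pci_device_id
```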
| Product Name | PCI ID |
| ----------------------------------------------- | -------------- |
| NVIDIA TITAN RTX | 1E02 |
| NVIDIA GeForce RTX 2080 Ti | 1E04 |
| NVIDIA GeForce RTX 2080 Ti | 1E07 |
| Quadro RTX 6000 | 1E30 |
| Quadro RTX 8000 | 1E30 1028 129E |
| Quadro RTX 8000 | 1E30 103C 129E |
| Quadro RTX 8000 | 1E30 10DE 129E |
| Quadro RTX 6000 | 1E36 |
| Quadro RTX 8000 | 1E78 10DE 13D8 |
| Quadro RTX 6000 | 1E78 10DE 13D9 |
| NVIDIA GeForce RTX 2080 SUPER | 1E81 |
| NVIDIA GeForce RTX 2080 | 1E82 |
| NVIDIA GeForce RTX 2070 SUPER | 1E84 |
| NVIDIA GeForce RTX 2080 | 1E87 |
| NVIDIA GeForce RTX 2060 | 1E89 |
| NVIDIA GeForce RTX 2080 | 1E90 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1025 1375 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08A1 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08A2 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08EA |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08EB |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08EC |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08ED |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08EE |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 08EF |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 093B |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1028 093C |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 8572 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 8573 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 8602 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 8606 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 86C6 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 86C7 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 87A6 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 103C 87A7 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1043 131F |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1043 137F |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1043 141F |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1043 1751 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 1660 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 1661 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 1662 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 75A6 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 75A7 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 86A6 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1458 86A7 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1462 1274 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1462 1277 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 152D 1220 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1558 95E1 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1558 97E1 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1A58 2002 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1A58 2005 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1A58 2007 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1A58 3000 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1A58 3001 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1E90 1D05 1069 |
| NVIDIA GeForce RTX 2070 Super | 1E91 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 103C 8607 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 103C 8736 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 103C 8738 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 103C 8772 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 103C 878A |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 103C 878B |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1043 1E61 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 1511 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 75B3 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 75B4 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 76B2 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 76B3 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 78A2 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 78A3 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 86B2 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1458 86B3 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1462 12AE |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1462 12B0 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1462 12C6 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 17AA 22C3 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 17AA 22C5 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1A58 2009 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1A58 200A |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 1A58 3002 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1E91 8086 3012 |
| NVIDIA GeForce RTX 2080 Super | 1E93 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1025 1401 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1025 149C |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1028 09D2 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 8607 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 86C7 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 8736 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 8738 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 8772 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 87A6 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 103C 87A7 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 75B1 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 75B2 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 76B0 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 76B1 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 78A0 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 78A1 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 86B0 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1458 86B1 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1462 12AE |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1462 12B0 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1462 12B4 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1462 12C6 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1558 50D3 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1558 70D1 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 17AA 22C3 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 17AA 22C5 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1A58 2009 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1A58 200A |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1A58 3002 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1E93 1D05 1089 |
| Quadro RTX 5000 | 1EB0 |
| Quadro RTX 4000 | 1EB1 |
| Quadro RTX 5000 | 1EB5 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1025 1375 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1025 1401 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1025 149C |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1028 09C3 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8736 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8738 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8772 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8780 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8782 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8783 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 103C 8785 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1043 1DD1 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1462 1274 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1462 12B0 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1462 12C6 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 17AA 22B8 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 17AA 22BA |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1A58 2005 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1A58 2007 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1A58 2008 |
| Quadro RTX 5000 with Max-Q Design | 1EB5 1A58 200A |
| Quadro RTX 4000 | 1EB6 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 1028 09C3 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8736 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8738 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8772 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8780 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8782 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8783 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 103C 8785 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 1462 1274 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 1462 1277 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 1462 12B0 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 1462 12C6 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 17AA 22B8 |
| Quadro RTX 4000 with Max-Q Design | 1EB6 17AA 22BA |
| NVIDIA GeForce RTX 2070 SUPER | 1EC2 |
| NVIDIA GeForce RTX 2070 SUPER | 1EC7 |
| NVIDIA GeForce RTX 2080 | 1ED0 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 1025 132D |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 1028 08ED |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 1028 08EE |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 1028 08EF |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 103C 8572 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 103C 8573 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 103C 8600 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 103C 8605 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 1043 138F |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 1043 15C1 |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 17AA 3FEE |
| NVIDIA GeForce RTX 2080 with Max-Q Design | 1ED0 17AA 3FFE |
| NVIDIA GeForce RTX 2070 Super | 1ED1 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 1025 1432 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 103C 8746 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 103C 878A |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 1043 165F |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 144D C192 |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 17AA 3FCE |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 17AA 3FCF |
| NVIDIA GeForce RTX 2070 Super with Max-Q Design | 1ED1 17AA 3FD0 |
| NVIDIA GeForce RTX 2080 Super | 1ED3 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 1025 1432 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 1028 09D1 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 103C 8746 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 103C 878A |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 1043 1D61 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 1043 1E51 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 1043 1F01 |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 17AA 3FCE |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 17AA 3FCF |
| NVIDIA GeForce RTX 2080 Super with Max-Q Design | 1ED3 17AA 3FD0 |
| Quadro RTX 5000 | 1EF5 |
| NVIDIA GeForce RTX 2070 | 1F02 |
| NVIDIA GeForce RTX 2060 | 1F03 |
| NVIDIA GeForce RTX 2060 SUPER | 1F06 |
| NVIDIA GeForce RTX 2070 | 1F07 |
| NVIDIA GeForce RTX 2060 | 1F08 |
| NVIDIA GeForce GTX 1650 | 1F0A |
| NVIDIA GeForce RTX 2070 | 1F10 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1025 132D |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1025 1342 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08A1 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08A2 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08EA |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08EB |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08EC |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08ED |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08EE |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 08EF |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 093B |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1028 093C |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 103C 8572 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 103C 8573 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 103C 8602 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 103C 8606 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1043 132F |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1043 136F |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1043 1881 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1043 1E6E |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 1658 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 1663 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 1664 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 75A4 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 75A5 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 86A4 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1458 86A5 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1462 1274 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1462 1277 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1558 95E1 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1558 97E1 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1A58 2002 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1A58 2005 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1A58 2007 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1A58 3000 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1A58 3001 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1D05 105E |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1D05 1070 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 1D05 2087 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F10 8086 2087 |
| NVIDIA GeForce RTX 2060 | 1F11 |
| NVIDIA GeForce RTX 2060 | 1F12 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 1028 098F |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 103C 8741 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 103C 8744 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 103C 878E |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 103C 880E |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 1043 1E11 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 1043 1F11 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 1462 12D9 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 17AA 3801 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 17AA 3802 |
| NVIDIA GeForce RTX 2060 with Max-Q Design | 1F12 17AA 3803 |
| NVIDIA GeForce RTX 2070 | 1F14 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1025 1401 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1025 1432 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1025 1442 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1025 1446 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1025 147D |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1028 09E2 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1028 09F3 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 8607 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 86C6 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 86C7 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 8736 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 8738 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 8746 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 8772 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 878A |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 878B |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 87A6 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 103C 87A7 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1043 174F |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 1512 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 75B5 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 75B6 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 76B4 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 76B5 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 78A4 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 78A5 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 86B4 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1458 86B5 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1462 12AE |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1462 12B0 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1462 12C6 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1558 50D3 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1558 70D1 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1A58 200C |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1A58 2011 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F14 1A58 3002 |
| NVIDIA GeForce RTX 2060 | 1F15 |
| Quadro RTX 3000 | 1F36 |
| Quadro RTX 3000 with Max-Q Design | 1F36 1028 0990 |
| Quadro RTX 3000 with Max-Q Design | 1F36 103C 8736 |
| Quadro RTX 3000 with Max-Q Design | 1F36 103C 8738 |
| Quadro RTX 3000 with Max-Q Design | 1F36 103C 8772 |
| Quadro RTX 3000 with Max-Q Design | 1F36 1043 13CF |
| Quadro RTX 3000 with Max-Q Design | 1F36 1414 0032 |
| NVIDIA GeForce RTX 2060 SUPER | 1F42 |
| NVIDIA GeForce RTX 2060 SUPER | 1F47 |
| NVIDIA GeForce RTX 2070 | 1F50 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 1028 08ED |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 1028 08EE |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 1028 08EF |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 103C 8572 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 103C 8573 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 103C 8574 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 103C 8600 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 103C 8605 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 17AA 3FEE |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F50 17AA 3FFE |
| NVIDIA GeForce RTX 2060 | 1F51 |
| NVIDIA GeForce RTX 2070 | 1F54 |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F54 103C 878A |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F54 17AA 3FCE |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F54 17AA 3FCF |
| NVIDIA GeForce RTX 2070 with Max-Q Design | 1F54 17AA 3FD0 |
| NVIDIA GeForce RTX 2060 | 1F55 |
| Quadro RTX 3000 | 1F76 |
| Matrox D-Series D2450 | 1F76 102B 2800 |
| Matrox D-Series D2480 | 1F76 102B 2900 |
| NVIDIA GeForce GTX 1650 | 1F82 |
| NVIDIA GeForce GTX 1650 | 1F91 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 103C 863E |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 103C 86E7 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 103C 86E8 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1043 12CF |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1043 156F |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1414 0032 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 144D C822 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1462 127E |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1462 1281 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1462 1284 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1462 1285 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1462 129C |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 17AA 229F |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 17AA 3802 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 17AA 3806 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 17AA 3F1A |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F91 1A58 1001 |
| NVIDIA GeForce GTX 1650 Ti | 1F95 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1025 1479 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1025 147A |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1025 147B |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1025 147C |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 103C 86E7 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 103C 86E8 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 103C 8815 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1043 1DFF |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1043 1E1F |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 144D C838 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1462 12BD |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1462 12C5 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1462 12D2 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 17AA 22C0 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 17AA 22C1 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 17AA 3837 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 17AA 3F95 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1A58 1003 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1A58 1006 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1A58 1007 |
| NVIDIA GeForce GTX 1650 Ti with Max-Q Design | 1F95 1E83 3E30 |
| NVIDIA GeForce GTX 1650 | 1F96 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F96 1462 1297 |
| NVIDIA GeForce MX450 | 1F97 |
| NVIDIA GeForce MX450 | 1F98 |
| NVIDIA GeForce GTX 1650 | 1F99 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1025 1479 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1025 147A |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1025 147B |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1025 147C |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 103C 8815 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1043 13B2 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1043 1402 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1043 1902 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1462 12BD |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1462 12C5 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1462 12D2 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 17AA 22DA |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 17AA 3F93 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F99 1E83 3E30 |
| NVIDIA GeForce MX450 | 1F9C |
| NVIDIA GeForce GTX 1650 | 1F9D |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1043 128D |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1043 130D |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1043 149C |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1043 185C |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1043 189C |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1462 12F4 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1462 1302 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1462 131B |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1462 1326 |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1462 132A |
| NVIDIA GeForce GTX 1650 with Max-Q Design | 1F9D 1462 132E |
| NVIDIA GeForce MX550 | 1F9F |
| NVIDIA GeForce MX550 | 1FA0 |
| NVIDIA T1000 | 1FB0 1028 12DB |
| NVIDIA T1000 | 1FB0 103C 12DB |
| NVIDIA T1000 | 1FB0 103C 8A80 |
| NVIDIA T1000 | 1FB0 10DE 12DB |
| NVIDIA DGX Display | 1FB0 10DE 1485 |
| NVIDIA T1000 | 1FB0 17AA 12DB |
| NVIDIA T600 | 1FB1 1028 1488 |
| NVIDIA T600 | 1FB1 103C 1488 |
| NVIDIA T600 | 1FB1 103C 8A80 |
| NVIDIA T600 | 1FB1 10DE 1488 |
| NVIDIA T600 | 1FB1 17AA 1488 |
| NVIDIA T400 | 1FB2 1028 1489 |
| NVIDIA T400 | 1FB2 103C 1489 |
| NVIDIA T400 | 1FB2 103C 8A80 |
| NVIDIA T400 | 1FB2 10DE 1489 |
| NVIDIA T400 | 1FB2 17AA 1489 |
| NVIDIA T600 Laptop GPU | 1FB6 |
| NVIDIA T550 Laptop GPU | 1FB7 |
| Quadro T2000 | 1FB8 |
| Quadro T2000 with Max-Q Design | 1FB8 1028 097E |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8736 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8738 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8772 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8780 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8782 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8783 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 8785 |
| Quadro T2000 with Max-Q Design | 1FB8 103C 87F0 |
| Quadro T2000 with Max-Q Design | 1FB8 1462 1281 |
| Quadro T2000 with Max-Q Design | 1FB8 1462 12BD |
| Quadro T2000 with Max-Q Design | 1FB8 17AA 22C0 |
| Quadro T2000 with Max-Q Design | 1FB8 17AA 22C1 |
| Quadro T1000 | 1FB9 |
| Quadro T1000 with Max-Q Design | 1FB9 1025 1479 |
| Quadro T1000 with Max-Q Design | 1FB9 1025 147A |
| Quadro T1000 with Max-Q Design | 1FB9 1025 147B |
| Quadro T1000 with Max-Q Design | 1FB9 1025 147C |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8736 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8738 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8772 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8780 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8782 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8783 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 8785 |
| Quadro T1000 with Max-Q Design | 1FB9 103C 87F0 |
| Quadro T1000 with Max-Q Design | 1FB9 1462 12BD |
| Quadro T1000 with Max-Q Design | 1FB9 17AA 22C0 |
| Quadro T1000 with Max-Q Design | 1FB9 17AA 22C1 |
| NVIDIA T600 Laptop GPU | 1FBA |
| NVIDIA T500 | 1FBB |
| NVIDIA T1200 Laptop GPU | 1FBC |
| NVIDIA GeForce GTX 1650 | 1FDD |
| NVIDIA T1000 8GB | 1FF0 1028 1612 |
| NVIDIA T1000 8GB | 1FF0 103C 1612 |
| NVIDIA T1000 8GB | 1FF0 103C 8A80 |
| NVIDIA T1000 8GB | 1FF0 10DE 1612 |
| NVIDIA T1000 8GB | 1FF0 17AA 1612 |
| NVIDIA T400 4GB | 1FF2 1028 1613 |
| NVIDIA T400 4GB | 1FF2 103C 1613 |
| NVIDIA T400 4GB | 1FF2 103C 8A80 |
| NVIDIA T400 4GB | 1FF2 10DE 1613 |
| NVIDIA T400 4GB | 1FF2 17AA 1613 |
| Quadro T1000 | 1FF9 |
| NVIDIA A100-SXM4-40GB | 20B0 |
| NVIDIA A100-PG509-200 | 20B0 10DE 1450 |
| NVIDIA A100-SXM4-80GB | 20B2 10DE 1463 |
| NVIDIA A100-SXM4-80GB | 20B2 10DE 147F |
| NVIDIA PG506-242 | 20B3 10DE 14A7 |
| NVIDIA PG506-243 | 20B3 10DE 14A8 |
| NVIDIA A100 80GB PCIe | 20B5 10DE 1533 |
| NVIDIA A100 80GB PCIe | 20B5 10DE 1642 |
| NVIDIA PG506-232 | 20B6 10DE 1492 |
| NVIDIA A30 | 20B7 10DE 1532 |
| NVIDIA A100-PCIE-40GB | 20F1 10DE 145F |
| NVIDIA GeForce GTX 1660 Ti | 2182 |
| NVIDIA GeForce GTX 1660 | 2184 |
| NVIDIA GeForce GTX 1650 SUPER | 2187 |
| NVIDIA GeForce GTX 1650 | 2188 |
| NVIDIA GeForce GTX 1660 Ti | 2191 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1028 0949 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 85FB |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 85FE |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 86D6 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 8741 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 8744 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 878D |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 87AF |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 103C 87B3 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1043 171F |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1043 17EF |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1043 18D1 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1414 0032 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1462 128A |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1462 128B |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1462 12C6 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1462 12CB |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1462 12CC |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 1462 12D9 |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 17AA 380C |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 17AA 381D |
| NVIDIA GeForce GTX 1660 Ti with Max-Q Design | 2191 17AA 381E |
| NVIDIA GeForce GTX 1650 Ti | 2192 |
| NVIDIA GeForce GTX 1660 SUPER | 21C4 |
| NVIDIA GeForce GTX 1660 Ti | 21D1 |
| NVIDIA GeForce RTX 3090 Ti | 2203 |
| NVIDIA GeForce RTX 3090 | 2204 |
| NVIDIA GeForce RTX 3080 | 2206 |
| NVIDIA GeForce RTX 3080 Ti | 2208 |
| NVIDIA GeForce RTX 3080 | 220A |
| NVIDIA CMP 90HX | 220D |
| NVIDIA GeForce RTX 3080 | 2216 |
| NVIDIA RTX A6000 | 2230 1028 1459 |
| NVIDIA RTX A6000 | 2230 103C 1459 |
| NVIDIA RTX A6000 | 2230 10DE 1459 |
| NVIDIA RTX A6000 | 2230 17AA 1459 |
| NVIDIA RTX A5000 | 2231 1028 147E |
| NVIDIA RTX A5000 | 2231 103C 147E |
| NVIDIA RTX A5000 | 2231 10DE 147E |
| NVIDIA RTX A5000 | 2231 17AA 147E |
| NVIDIA RTX A4500 | 2232 1028 163C |
| NVIDIA RTX A4500 | 2232 103C 163C |
| NVIDIA RTX A4500 | 2232 10DE 163C |
| NVIDIA RTX A4500 | 2232 17AA 163C |
| NVIDIA RTX A5500 | 2233 1028 165A |
| NVIDIA RTX A5500 | 2233 103C 165A |
| NVIDIA RTX A5500 | 2233 10DE 165A |
| NVIDIA RTX A5500 | 2233 17AA 165A |
| NVIDIA A40 | 2235 10DE 145A |
| NVIDIA A10 | 2236 10DE 1482 |
| NVIDIA A10G | 2237 10DE 152F |
| NVIDIA A10M | 2238 10DE 1677 |
| NVIDIA GeForce RTX 3060 Ti | 2414 |
| NVIDIA GeForce RTX 3080 Ti Laptop GPU | 2420 |
| NVIDIA RTX A5500 Laptop GPU | 2438 |
| NVIDIA GeForce RTX 3080 Ti Laptop GPU | 2460 |
| NVIDIA GeForce RTX 3070 Ti | 2482 |
| NVIDIA GeForce RTX 3070 | 2484 |
| NVIDIA GeForce RTX 3060 Ti | 2486 |
| NVIDIA GeForce RTX 3060 | 2487 |
| NVIDIA GeForce RTX 3070 | 2488 |
| NVIDIA GeForce RTX 3060 Ti | 2489 |
| NVIDIA CMP 70HX | 248A |
| NVIDIA GeForce RTX 3080 Laptop GPU | 249C |
| NVIDIA GeForce RTX 3060 Laptop GPU | 249C 1D05 1194 |
| NVIDIA GeForce RTX 3070 Laptop GPU | 249D |
| NVIDIA GeForce RTX 3070 Ti Laptop GPU | 24A0 |
| NVIDIA GeForce RTX 3060 Laptop GPU | 24A0 1D05 1192 |
| NVIDIA RTX A4000 | 24B0 1028 14AD |
| NVIDIA RTX A4000 | 24B0 103C 14AD |
| NVIDIA RTX A4000 | 24B0 10DE 14AD |
| NVIDIA RTX A4000 | 24B0 17AA 14AD |
| NVIDIA RTX A4000H | 24B1 10DE 1658 |
| NVIDIA RTX A5000 Laptop GPU | 24B6 |
| NVIDIA RTX A4000 Laptop GPU | 24B7 |
| NVIDIA RTX A3000 Laptop GPU | 24B8 |
| NVIDIA RTX A3000 12GB Laptop GPU | 24B9 |
| NVIDIA RTX A4500 Laptop GPU | 24BA |
| NVIDIA RTX A3000 12GB Laptop GPU | 24BB |
| NVIDIA GeForce RTX 3080 Laptop GPU | 24DC |
| NVIDIA GeForce RTX 3070 Laptop GPU | 24DD |
| NVIDIA GeForce RTX 3070 Ti Laptop GPU | 24E0 |
| NVIDIA RTX A4500 Embedded GPU | 24FA |
| NVIDIA GeForce RTX 3060 | 2503 |
| NVIDIA GeForce RTX 3060 | 2504 |
| NVIDIA GeForce RTX 3050 | 2507 |
| NVIDIA GeForce RTX 3050 OEM | 2508 |
| NVIDIA GeForce RTX 3060 Laptop GPU | 2520 |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 2523 |
| NVIDIA RTX A2000 | 2531 1028 151D |
| NVIDIA RTX A2000 | 2531 103C 151D |
| NVIDIA RTX A2000 | 2531 10DE 151D |
| NVIDIA RTX A2000 | 2531 17AA 151D |
| NVIDIA GeForce RTX 3060 Laptop GPU | 2560 |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 2563 |
| NVIDIA RTX A2000 12GB | 2571 1028 1611 |
| NVIDIA RTX A2000 12GB | 2571 103C 1611 |
| NVIDIA RTX A2000 12GB | 2571 10DE 1611 |
| NVIDIA RTX A2000 12GB | 2571 17AA 1611 |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 25A0 |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 25A0 103C 8928 |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 25A0 103C 89F9 |
| NVIDIA GeForce RTX 3060 Laptop GPU | 25A0 1D05 1196 |
| NVIDIA GeForce RTX 3050 Laptop GPU | 25A2 |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 25A2 1028 0BAF |
| NVIDIA GeForce RTX 3060 Laptop GPU | 25A2 1D05 1195 |
| NVIDIA GeForce RTX 3050 Laptop GPU | 25A5 |
| NVIDIA GeForce MX570 | 25A6 |
| NVIDIA GeForce RTX 2050 | 25A7 |
| NVIDIA GeForce RTX 2050 | 25A9 |
| NVIDIA GeForce MX570 A | 25AA |
| NVIDIA A16 | 25B6 10DE 14A9 |
| NVIDIA A2 | 25B6 10DE 157E |
| NVIDIA RTX A2000 Laptop GPU | 25B8 |
| NVIDIA RTX A1000 Laptop GPU | 25B9 |
| NVIDIA RTX A2000 8GB Laptop GPU | 25BA |
| NVIDIA RTX A500 Laptop GPU | 25BB |
| NVIDIA GeForce RTX 3050 Ti Laptop GPU | 25E0 |
| NVIDIA GeForce RTX 3050 Laptop GPU | 25E2 |
| NVIDIA GeForce RTX 3050 Laptop GPU | 25E5 |
| NVIDIA RTX A1000 Embedded GPU | 25F9 |
| NVIDIA RTX A2000 Embedded GPU | 25FA |


@@ -72,7 +72,7 @@ EXTRA_CFLAGS += -I$(src)/common/inc
EXTRA_CFLAGS += -I$(src)
EXTRA_CFLAGS += -Wall -MD $(DEFINES) $(INCLUDES) -Wno-cast-qual -Wno-error -Wno-format-extra-args
EXTRA_CFLAGS += -D__KERNEL__ -DMODULE -DNVRM
EXTRA_CFLAGS += -DNV_VERSION_STRING=\"515.43.04\"
EXTRA_CFLAGS += -DNV_VERSION_STRING=\"515.48.07\"
EXTRA_CFLAGS += -Wno-unused-function
@@ -94,6 +94,7 @@ EXTRA_CFLAGS += -ffreestanding
ifeq ($(ARCH),arm64)
EXTRA_CFLAGS += -mgeneral-regs-only -march=armv8-a
EXTRA_CFLAGS += $(call cc-option,-mno-outline-atomics,)
endif
ifeq ($(ARCH),x86_64)


@@ -1647,23 +1647,12 @@ extern NvBool nv_ats_supported;
* and any other baggage we want to carry along
*
*/
#define NV_MAXNUM_DISPLAY_DEVICES 8
typedef struct
{
acpi_handle dev_handle;
int dev_id;
} nv_video_t;
typedef struct
{
nvidia_stack_t *sp;
struct acpi_device *device;
nv_video_t pNvVideo[NV_MAXNUM_DISPLAY_DEVICES];
struct acpi_handle *handle;
int notify_handler_installed;
int default_display_mask;
} nv_acpi_t;
#endif


@@ -35,8 +35,6 @@ extern nvidia_module_t nv_fops;
void nv_acpi_register_notifier (nv_linux_state_t *);
void nv_acpi_unregister_notifier (nv_linux_state_t *);
int nv_acpi_init (void);
int nv_acpi_uninit (void);
NvU8 nv_find_pci_capability (struct pci_dev *, NvU8);


@@ -576,11 +576,9 @@ typedef enum
((nv)->iso_iommu_present)
/*
* NVIDIA ACPI event IDs to be passed into the core NVIDIA
* driver for various events like display switch events,
* AC/battery events, etc..
* NVIDIA ACPI event ID to be passed into the core NVIDIA driver for
* AC/DC event.
*/
#define NV_SYSTEM_ACPI_DISPLAY_SWITCH_EVENT 0x8001
#define NV_SYSTEM_ACPI_BATTERY_POWER_EVENT 0x8002
/*
@@ -589,14 +587,6 @@ typedef enum
#define NV_SYSTEM_GPU_ADD_EVENT 0x9001
#define NV_SYSTEM_GPU_REMOVE_EVENT 0x9002
/*
* Status bit definitions for display switch hotkey events.
*/
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_LCD 0x01
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_CRT 0x02
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_TV 0x04
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_DFP 0x08
/*
* NVIDIA ACPI sub-event IDs (event types) to be passed into
* to core NVIDIA driver for ACPI events.


@@ -1120,6 +1120,23 @@ compile_test() {
compile_check_conftest "$CODE" "NV_MDEV_SET_IOMMU_DEVICE_PRESENT" "" "functions"
;;
mdev_parent_ops_has_open_device)
# Determine if 'mdev_parent_ops' structure has a 'open_device'
# field.
#
# Added by commit 2fd585f4ed9d ("vfio: Provide better generic support
# for open/release vfio_device_ops") in 5.15 (2021-08-05)
#
CODE="
#include <linux/pci.h>
#include <linux/mdev.h>
int conftest_mdev_parent_ops_has_open_device(void) {
return offsetof(struct mdev_parent_ops, open_device);
}"
compile_check_conftest "$CODE" "NV_MDEV_PARENT_OPS_HAS_OPEN_DEVICE" "" "types"
;;
pci_irq_vector_helpers)
#
# Determine if pci_alloc_irq_vectors(), pci_free_irq_vectors()
@@ -1154,23 +1171,6 @@ compile_test() {
compile_check_conftest "$CODE" "NV_VFIO_DEVICE_GFX_PLANE_INFO_PRESENT" "" "types"
;;
vfio_device_migration_info)
#
# determine if the 'struct vfio_device_migration_info' type is present.
#
# Proposed interface for vGPU Migration
# ("[PATCH v3 0/5] Add migration support for VFIO device ")
# https://lists.gnu.org/archive/html/qemu-devel/2019-02/msg05176.html
# Upstreamed commit a8a24f3f6e38 (vfio: UAPI for migration interface
# for device state) in v5.8 (2020-05-29)
#
CODE="
#include <linux/vfio.h>
struct vfio_device_migration_info info;"
compile_check_conftest "$CODE" "NV_VFIO_DEVICE_MIGRATION_INFO_PRESENT" "" "types"
;;
vfio_device_migration_has_start_pfn)
#
# Determine if the 'vfio_device_migration_info' structure has
@@ -5304,6 +5304,67 @@ compile_test() {
compile_check_conftest "$CODE" "NV_ACPI_BUS_GET_DEVICE_PRESENT" "" "functions"
;;
dma_resv_add_fence)
#
# Determine if the dma_resv_add_fence() function is present.
#
# dma_resv_add_excl_fence() and dma_resv_add_shared_fence() were
# removed and replaced with dma_resv_add_fence() by commit
# 73511edf8b19 ("dma-buf: specify usage while adding fences to
# dma_resv obj v7") in linux-next, expected in v5.19-rc1.
#
CODE="
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
#include <linux/dma-resv.h>
#endif
void conftest_dma_resv_add_fence(void) {
dma_resv_add_fence();
}"
compile_check_conftest "$CODE" "NV_DMA_RESV_ADD_FENCE_PRESENT" "" "functions"
;;
dma_resv_reserve_fences)
#
# Determine if the dma_resv_reserve_fences() function is present.
#
# dma_resv_reserve_shared() was removed and replaced with
# dma_resv_reserve_fences() by commit c8d4c18bfbc4
# ("dma-buf/drivers: make reserving a shared slot mandatory v4") in
# linux-next, expected in v5.19-rc1.
#
CODE="
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
#include <linux/dma-resv.h>
#endif
void conftest_dma_resv_reserve_fences(void) {
dma_resv_reserve_fences();
}"
compile_check_conftest "$CODE" "NV_DMA_RESV_RESERVE_FENCES_PRESENT" "" "functions"
;;
reservation_object_reserve_shared_has_num_fences_arg)
#
# Determine if reservation_object_reserve_shared() has 'num_fences'
# argument.
#
# reservation_object_reserve_shared() function prototype was updated
# to take 'num_fences' argument by commit ca05359f1e64 ("dma-buf:
allow reserving more than one shared fence slot") in v5.0-rc1
# (2018-12-14).
#
CODE="
#include <linux/reservation.h>
void conftest_reservation_object_reserve_shared_has_num_fences_arg(
struct reservation_object *obj,
unsigned int num_fences) {
(void) reservation_object_reserve_shared(obj, num_fences);
}"
compile_check_conftest "$CODE" "NV_RESERVATION_OBJECT_RESERVE_SHARED_HAS_NUM_FENCES_ARG" "" "types"
;;
# When adding a new conftest entry, please use the correct format for
# specifying the relevant upstream Linux kernel commit.
#


@@ -65,11 +65,57 @@ static inline void nv_dma_resv_fini(nv_dma_resv_t *obj)
#endif
}
static inline void nv_dma_resv_lock(nv_dma_resv_t *obj,
struct ww_acquire_ctx *ctx)
{
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
dma_resv_lock(obj, ctx);
#else
ww_mutex_lock(&obj->lock, ctx);
#endif
}
static inline void nv_dma_resv_unlock(nv_dma_resv_t *obj)
{
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
dma_resv_unlock(obj);
#else
ww_mutex_unlock(&obj->lock);
#endif
}
static inline int nv_dma_resv_reserve_fences(nv_dma_resv_t *obj,
unsigned int num_fences,
NvBool shared)
{
#if defined(NV_DMA_RESV_RESERVE_FENCES_PRESENT)
return dma_resv_reserve_fences(obj, num_fences);
#else
if (shared) {
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
return dma_resv_reserve_shared(obj, num_fences);
#elif defined(NV_RESERVATION_OBJECT_RESERVE_SHARED_HAS_NUM_FENCES_ARG)
return reservation_object_reserve_shared(obj, num_fences);
#else
unsigned int i;
for (i = 0; i < num_fences; i++) {
reservation_object_reserve_shared(obj);
}
#endif
}
return 0;
#endif
}
static inline void nv_dma_resv_add_excl_fence(nv_dma_resv_t *obj,
nv_dma_fence_t *fence)
{
#if defined(NV_LINUX_DMA_RESV_H_PRESENT)
#if defined(NV_DMA_RESV_ADD_FENCE_PRESENT)
dma_resv_add_fence(obj, fence, DMA_RESV_USAGE_WRITE);
#else
dma_resv_add_excl_fence(obj, fence);
#endif
#else
reservation_object_add_excl_fence(obj, fence);
#endif
}
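Taken together, these wrappers let a single call sequence span every kernel branch: lock, reserve a fence slot (a no-op for the exclusive case before dma_resv_reserve_fences() existed), add the fence, unlock. A sketch of the caller-side pattern, using only the helpers defined above (kernel-only code, not runnable standalone; error handling abbreviated):

```c
/* Sketch: attach an exclusive fence to a reservation object via the
 * nv_dma_resv_* compatibility helpers above. On >= 5.19 kernels the add
 * lands as dma_resv_add_fence(..., DMA_RESV_USAGE_WRITE); older kernels
 * take the dma_resv_add_excl_fence()/reservation_object path. */
static int nv_attach_excl_fence(nv_dma_resv_t *resv, nv_dma_fence_t *fence)
{
    int ret;

    nv_dma_resv_lock(resv, NULL);
    ret = nv_dma_resv_reserve_fences(resv, 1, NV_FALSE /* exclusive */);
    if (ret == 0)
        nv_dma_resv_add_excl_fence(resv, fence);
    nv_dma_resv_unlock(resv);
    return ret;
}
```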


@@ -499,9 +499,18 @@ int nv_drm_gem_fence_attach_ioctl(struct drm_device *dev,
goto fence_context_create_fence_failed;
}
nv_dma_resv_add_excl_fence(&nv_gem->resv, fence);
nv_dma_resv_lock(&nv_gem->resv, NULL);
ret = 0;
ret = nv_dma_resv_reserve_fences(&nv_gem->resv, 1, false);
if (ret == 0) {
nv_dma_resv_add_excl_fence(&nv_gem->resv, fence);
} else {
NV_DRM_DEV_LOG_ERR(
nv_dev,
"Failed to reserve fence. Error code: %d", ret);
}
nv_dma_resv_unlock(&nv_gem->resv);
fence_context_create_fence_failed:
nv_drm_gem_object_unreference_unlocked(&nv_gem_fence_context->base);


@@ -115,3 +115,6 @@ NV_CONFTEST_TYPE_COMPILE_TESTS += drm_plane_atomic_check_has_atomic_state_arg
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_device_has_pdev
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_crtc_state_has_no_vblank
NV_CONFTEST_TYPE_COMPILE_TESTS += drm_mode_config_has_allow_fb_modifiers
NV_CONFTEST_TYPE_COMPILE_TESTS += dma_resv_add_fence
NV_CONFTEST_TYPE_COMPILE_TESTS += dma_resv_reserve_fences
NV_CONFTEST_TYPE_COMPILE_TESTS += reservation_object_reserve_shared_has_num_fences_arg


@@ -976,6 +976,7 @@ NvBool nvkms_allow_write_combining(void)
return __rm_ops.system_info.allow_write_combining;
}
#if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
/*************************************************************************
* Implementation of sysfs interface to control backlight
*************************************************************************/
@@ -1034,11 +1035,13 @@ static const struct backlight_ops nvkms_backlight_ops = {
.update_status = nvkms_update_backlight_status,
.get_brightness = nvkms_get_backlight_brightness,
};
#endif /* IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE) */
struct nvkms_backlight_device*
nvkms_register_backlight(NvU32 gpu_id, NvU32 display_id, void *drv_priv,
NvU32 current_brightness)
{
#if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
char name[18];
struct backlight_properties props = {
.brightness = current_brightness,
@@ -1093,15 +1096,20 @@ done:
nvkms_free(gpu_info, NV_MAX_GPUS * sizeof(*gpu_info));
return nvkms_bd;
#else
return NULL;
#endif /* IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE) */
}
void nvkms_unregister_backlight(struct nvkms_backlight_device *nvkms_bd)
{
#if IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE)
if (nvkms_bd->dev) {
backlight_device_unregister(nvkms_bd->dev);
}
nvkms_free(nvkms_bd, sizeof(*nvkms_bd));
#endif /* IS_ENABLED(CONFIG_BACKLIGHT_CLASS_DEVICE) */
}
/*************************************************************************


@@ -232,7 +232,7 @@ static inline const struct cpumask *uvm_cpumask_of_node(int node)
#define __GFP_NORETRY 0
#endif
#define NV_UVM_GFP_FLAGS (GFP_KERNEL | __GFP_NORETRY)
#define NV_UVM_GFP_FLAGS (GFP_KERNEL)
#if !defined(NV_ADDRESS_SPACE_INIT_ONCE_PRESENT)
void address_space_init_once(struct address_space *mapping);


@@ -151,7 +151,7 @@ static void maxwell_membar_after_transfer(uvm_push_t *push)
// Flush on transfers only works when paired with a semaphore release. Use a
// host WFI + MEMBAR.
// http://nvbugs/1709888
// Bug 1709888
gpu->parent->host_hal->wait_for_idle(push);
if (uvm_push_get_and_reset_flag(push, UVM_PUSH_FLAG_NEXT_MEMBAR_GPU))


@@ -1805,12 +1805,12 @@ nvswitch_exit
return;
}
nvswitch_procfs_exit();
nvswitch_ctl_exit();
pci_unregister_driver(&nvswitch_pci_driver);
nvswitch_procfs_exit();
cdev_del(&nvswitch.cdev);
unregister_chrdev_region(nvswitch.devno, NVSWITCH_MINOR_COUNT);


@@ -35,29 +35,13 @@ static NV_STATUS nv_acpi_extract_buffer (const union acpi_object *, void *, N
static NV_STATUS nv_acpi_extract_package (const union acpi_object *, void *, NvU32, NvU32 *);
static NV_STATUS nv_acpi_extract_object (const union acpi_object *, void *, NvU32, NvU32 *);
static int nv_acpi_add (struct acpi_device *);
static int nv_acpi_remove (struct acpi_device *device);
static void nv_acpi_event (acpi_handle, u32, void *);
static void nv_acpi_powersource_hotplug_event(acpi_handle, u32, void *);
static acpi_status nv_acpi_find_methods (acpi_handle, u32, void *, void **);
static NV_STATUS nv_acpi_nvif_method (NvU32, NvU32, void *, NvU16, NvU32 *, void *, NvU16 *);
static NV_STATUS nv_acpi_wmmx_method (NvU32, NvU8 *, NvU16 *);
static const struct acpi_device_id nv_video_device_ids[] = {
{
.id = ACPI_VIDEO_HID,
.driver_data = 0,
},
{
.id = "",
.driver_data = 0,
},
};
static struct acpi_driver *nv_acpi_driver;
static acpi_handle nvif_handle = NULL;
static acpi_handle nvif_parent_gpu_handle = NULL;
static acpi_handle wmmx_handle = NULL;
// Used for AC Power Source Hotplug Handling
@@ -81,16 +65,6 @@ static NvBool battery_present = NV_FALSE;
#define ACPI_VIDEO_CLASS "video"
#endif
static const struct acpi_driver nv_acpi_driver_template = {
.name = "NVIDIA ACPI Video Driver",
.class = ACPI_VIDEO_CLASS,
.ids = nv_video_device_ids,
.ops = {
.add = nv_acpi_add,
.remove = nv_acpi_remove,
},
};
static int nv_acpi_get_device_handle(nv_state_t *nv, acpi_handle *dev_handle)
{
nv_linux_state_t *nvl = NV_GET_NVL_FROM_NV_STATE(nv);
@@ -151,351 +125,6 @@ void nv_acpi_unregister_notifier(nv_linux_state_t *nvl)
unregister_acpi_notifier(&nvl->acpi_nb);
}
int nv_acpi_init(void)
{
/*
* This function will register the RM with the Linux
* ACPI subsystem.
*/
int status;
nvidia_stack_t *sp = NULL;
NvU32 acpi_event_config = 0;
NV_STATUS rmStatus;
status = nv_kmem_cache_alloc_stack(&sp);
if (status != 0)
{
return status;
}
rmStatus = rm_read_registry_dword(sp, NULL,
NV_REG_REGISTER_FOR_ACPI_EVENTS, &acpi_event_config);
nv_kmem_cache_free_stack(sp);
if ((rmStatus == NV_OK) && (acpi_event_config == 0))
return 0;
if (nv_acpi_driver != NULL)
return -EBUSY;
rmStatus = os_alloc_mem((void **)&nv_acpi_driver,
sizeof(struct acpi_driver));
if (rmStatus != NV_OK)
return -ENOMEM;
memcpy((void *)nv_acpi_driver, (void *)&nv_acpi_driver_template,
sizeof(struct acpi_driver));
status = acpi_bus_register_driver(nv_acpi_driver);
if (status < 0)
{
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_init: acpi_bus_register_driver() failed (%d)!\n", status);
os_free_mem(nv_acpi_driver);
nv_acpi_driver = NULL;
}
return status;
}
int nv_acpi_uninit(void)
{
nvidia_stack_t *sp = NULL;
NvU32 acpi_event_config = 0;
NV_STATUS rmStatus;
int rc;
rc = nv_kmem_cache_alloc_stack(&sp);
if (rc != 0)
{
return rc;
}
rmStatus = rm_read_registry_dword(sp, NULL,
NV_REG_REGISTER_FOR_ACPI_EVENTS, &acpi_event_config);
nv_kmem_cache_free_stack(sp);
if ((rmStatus == NV_OK) && (acpi_event_config == 0))
return 0;
if (nv_acpi_driver == NULL)
return -ENXIO;
acpi_bus_unregister_driver(nv_acpi_driver);
os_free_mem(nv_acpi_driver);
nv_acpi_driver = NULL;
return 0;
}
static int nv_acpi_add(struct acpi_device *device)
{
/*
* This function will cause RM to initialize the things it needs for ACPI interaction
* on the display device.
*/
int status = -1;
NV_STATUS rmStatus = NV_ERR_GENERIC;
nv_acpi_t *pNvAcpiObject = NULL;
union acpi_object control_argument_0 = { ACPI_TYPE_INTEGER };
struct acpi_object_list control_argument_list = { 0, NULL };
nvidia_stack_t *sp = NULL;
struct list_head *node, *next;
unsigned long long device_id = 0;
int device_counter = 0;
status = nv_kmem_cache_alloc_stack(&sp);
if (status != 0)
{
return status;
}
// allocate data structure we need
rmStatus = os_alloc_mem((void **) &pNvAcpiObject, sizeof(nv_acpi_t));
if (rmStatus != NV_OK)
{
nv_kmem_cache_free_stack(sp);
nv_printf(NV_DBG_ERRORS,
"NVRM: nv_acpi_add: failed to allocate ACPI device management data!\n");
return -ENOMEM;
}
os_mem_set((void *)pNvAcpiObject, 0, sizeof(nv_acpi_t));
device->driver_data = pNvAcpiObject;
pNvAcpiObject->device = device;
pNvAcpiObject->sp = sp;
// grab handles to all the important nodes representing devices
list_for_each_safe(node, next, &device->children)
{
struct acpi_device *dev =
list_entry(node, struct acpi_device, node);
if (!dev)
continue;
if (device_counter == NV_MAXNUM_DISPLAY_DEVICES)
{
nv_printf(NV_DBG_ERRORS,
"NVRM: nv_acpi_add: Total number of devices cannot exceed %d\n",
NV_MAXNUM_DISPLAY_DEVICES);
break;
}
status =
acpi_evaluate_integer(dev->handle, "_ADR", NULL, &device_id);
if (ACPI_FAILURE(status))
/* Couldn't query device_id for this device */
continue;
device_id = (device_id & 0xffff);
if ((device_id != 0x100) && /* Not a known CRT device-id */
(device_id != 0x200) && /* Not a known TV device-id */
(device_id != 0x0110) && (device_id != 0x0118) && (device_id != 0x0400) && /* Not an LCD*/
(device_id != 0x0111) && (device_id != 0x0120) && (device_id != 0x0300)) /* Not a known DVI device-id */
{
/* This isn't a known device ID.
Do default switching on this system. */
pNvAcpiObject->default_display_mask = 1;
break;
}
pNvAcpiObject->pNvVideo[device_counter].dev_id = device_id;
pNvAcpiObject->pNvVideo[device_counter].dev_handle = dev->handle;
device_counter++;
}
// arg 0, bits 1:0, 0 = enable events
control_argument_0.integer.type = ACPI_TYPE_INTEGER;
control_argument_0.integer.value = 0x0;
// listify it
control_argument_list.count = 1;
control_argument_list.pointer = &control_argument_0;
// _DOS method takes 1 argument and returns nothing
status = acpi_evaluate_object(device->handle, "_DOS", &control_argument_list, NULL);
if (ACPI_FAILURE(status))
{
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_add: failed to enable display switch events (%d)!\n", status);
}
status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
nv_acpi_event, pNvAcpiObject);
if (ACPI_FAILURE(status))
{
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_add: failed to install event notification handler (%d)!\n", status);
}
else
{
try_module_get(THIS_MODULE);
pNvAcpiObject->notify_handler_installed = 1;
}
return 0;
}
static int nv_acpi_remove(struct acpi_device *device)
{
/*
* This function will cause RM to relinquish control of the VGA ACPI device.
*/
acpi_status status;
union acpi_object control_argument_0 = { ACPI_TYPE_INTEGER };
struct acpi_object_list control_argument_list = { 0, NULL };
nv_acpi_t *pNvAcpiObject = device->driver_data;
pNvAcpiObject->default_display_mask = 0;
// arg 0, bits 1:0, 1 = disable events
control_argument_0.integer.type = ACPI_TYPE_INTEGER;
control_argument_0.integer.value = 0x1;
// listify it
control_argument_list.count = 1;
control_argument_list.pointer = &control_argument_0;
// _DOS method takes 1 argument and returns nothing
status = acpi_evaluate_object(device->handle, "_DOS", &control_argument_list, NULL);
if (ACPI_FAILURE(status))
{
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_remove: failed to disable display switch events (%d)!\n", status);
}
if (pNvAcpiObject->notify_handler_installed)
{
// remove event notifier
status = acpi_remove_notify_handler(device->handle, ACPI_DEVICE_NOTIFY, nv_acpi_event);
}
if (pNvAcpiObject->notify_handler_installed &&
ACPI_FAILURE(status))
{
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_remove: failed to remove event notification handler (%d)!\n", status);
}
else
{
nv_kmem_cache_free_stack(pNvAcpiObject->sp);
os_free_mem((void *)pNvAcpiObject);
module_put(THIS_MODULE);
device->driver_data = NULL;
}
return status;
}
/*
* The ACPI specification defines IDs for various ACPI video
* extension events like display switch events, AC/battery
* events, docking events, etc..
* Whenever an ACPI event is received by the corresponding
* event handler installed within the core NVIDIA driver, the
* code can verify the event ID before processing it.
*/
#define ACPI_DISPLAY_DEVICE_CHANGE_EVENT 0x80
#define NVIF_NOTIFY_DISPLAY_DETECT 0xCB
#define NVIF_DISPLAY_DEVICE_CHANGE_EVENT NVIF_NOTIFY_DISPLAY_DETECT
static void nv_acpi_event(acpi_handle handle, u32 event_type, void *data)
{
/*
* This function will handle ACPI events from the Linux kernel, used
* to detect notifications from the VGA device.
*/
nv_acpi_t *pNvAcpiObject = data;
u32 event_val = 0;
unsigned long long state;
int status = 0;
int device_counter = 0;
if (event_type == NVIF_DISPLAY_DEVICE_CHANGE_EVENT)
{
/* We are getting NVIF events on this machine. We aren't putting a very
extensive handling in-place to communicate back with SBIOS, know
the next enabled devices, and then do the switch. We just
pass a default display switch event, so that X-driver decides
the switching policy itself. */
rm_system_event(pNvAcpiObject->sp, NV_SYSTEM_ACPI_DISPLAY_SWITCH_EVENT, 0);
}
if (event_type == ACPI_DISPLAY_DEVICE_CHANGE_EVENT)
{
if (pNvAcpiObject->default_display_mask != 1)
{
while ((device_counter < NV_MAXNUM_DISPLAY_DEVICES) &&
(pNvAcpiObject->pNvVideo[device_counter].dev_handle))
{
acpi_handle dev_handle = pNvAcpiObject->pNvVideo[device_counter].dev_handle;
int dev_id = pNvAcpiObject->pNvVideo[device_counter].dev_id;
status = acpi_evaluate_integer(dev_handle,
"_DGS",
NULL,
&state);
if (ACPI_FAILURE(status))
{
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_event: failed to query _DGS method for display device 0x%x\n",
dev_id);
}
else if (state)
{
/* Check if the device is a CRT ...*/
if (dev_id == 0x0100)
{
event_val |= NV_HOTKEY_STATUS_DISPLAY_ENABLE_CRT;
}
/* device-id for a TV */
else if (dev_id == 0x0200)
{
event_val |= NV_HOTKEY_STATUS_DISPLAY_ENABLE_TV;
}
else if ((dev_id == 0x0110) || /* device id for internal LCD */
(dev_id == 0x0118) || /* alternate ACPI ID for the
internal LCD */
(dev_id == 0x0400)) /* ACPI spec 3.0 specified
device id for an internal LCD */
{
event_val |= NV_HOTKEY_STATUS_DISPLAY_ENABLE_LCD;
}
else if ((dev_id == 0x0111) || /* the set
of possible device-ids for a DFP */
(dev_id == 0x0120) ||
(dev_id == 0x0300)) /* ACPI spec 3.0 specified
device id for non-LVDS DFP */
{
event_val |= NV_HOTKEY_STATUS_DISPLAY_ENABLE_DFP;
}
}
device_counter++;
}
}
nv_printf(NV_DBG_INFO,
"NVRM: nv_acpi_event: Event-type 0x%x, Event-val 0x%x\n",
event_type, event_val);
rm_system_event(pNvAcpiObject->sp, NV_SYSTEM_ACPI_DISPLAY_SWITCH_EVENT, event_val);
}
// No unsubscription or re-enable is necessary. Once DOD has been set and we
// are subscribed to ACPI events, we stay subscribed until we explicitly
// unsubscribe.
}
NV_STATUS NV_API_CALL nv_acpi_get_powersource(NvU32 *ac_plugged)
{
unsigned long long val;
@@ -543,14 +172,14 @@ static void nv_acpi_powersource_hotplug_event(acpi_handle handle, u32 event_type
*/
/* Do the necessary allocations and install notifier "handler" on the device-node "device" */
static nv_acpi_t* nv_install_notifier(struct acpi_device *device, acpi_notify_handler handler)
static nv_acpi_t* nv_install_notifier(struct acpi_handle *handle, acpi_notify_handler handler)
{
nvidia_stack_t *sp = NULL;
nv_acpi_t *pNvAcpiObject = NULL;
NV_STATUS rmStatus = NV_ERR_GENERIC;
acpi_status status = -1;
if (!device)
if (!handle)
return NULL;
if (nv_kmem_cache_alloc_stack(&sp) != 0)
@@ -564,11 +193,11 @@ static nv_acpi_t* nv_install_notifier(struct acpi_device *device, acpi_notify_ha
os_mem_set((void *)pNvAcpiObject, 0, sizeof(nv_acpi_t));
// store a device reference in our object
pNvAcpiObject->device = device;
// store a handle reference in our object
pNvAcpiObject->handle = handle;
pNvAcpiObject->sp = sp;
status = acpi_install_notify_handler(device->handle, ACPI_DEVICE_NOTIFY,
status = acpi_install_notify_handler(handle, ACPI_DEVICE_NOTIFY,
handler, pNvAcpiObject);
if (!ACPI_FAILURE(status))
{
@@ -592,7 +221,7 @@ static void nv_uninstall_notifier(nv_acpi_t *pNvAcpiObject, acpi_notify_handler
if (pNvAcpiObject && pNvAcpiObject->notify_handler_installed)
{
status = acpi_remove_notify_handler(pNvAcpiObject->device->handle, ACPI_DEVICE_NOTIFY, handler);
status = acpi_remove_notify_handler(pNvAcpiObject->handle, ACPI_DEVICE_NOTIFY, handler);
if (ACPI_FAILURE(status))
{
nv_printf(NV_DBG_INFO,
@@ -616,56 +245,22 @@ static void nv_uninstall_notifier(nv_acpi_t *pNvAcpiObject, acpi_notify_handler
void NV_API_CALL nv_acpi_methods_init(NvU32 *handlesPresent)
{
#if defined(NV_ACPI_BUS_GET_DEVICE_PRESENT)
struct acpi_device *device = NULL;
int retVal = -1;
#endif
if (!handlesPresent) // Caller passed us invalid pointer.
return;
*handlesPresent = 0;
NV_ACPI_WALK_NAMESPACE(ACPI_TYPE_DEVICE, ACPI_ROOT_OBJECT,
ACPI_UINT32_MAX, nv_acpi_find_methods, NULL, NULL);
#if defined(NV_ACPI_BUS_GET_DEVICE_PRESENT)
if (nvif_handle)
{
*handlesPresent = NV_ACPI_NVIF_HANDLE_PRESENT;
do
{
if (!nvif_parent_gpu_handle) /* unknown error */
break;
retVal = acpi_bus_get_device(nvif_parent_gpu_handle, &device);
if (ACPI_FAILURE(retVal) || !device)
break;
if (device->driver_data)
{
nvif_parent_gpu_handle = NULL;
break; /* Someone else has already populated this device
node's structures, so nothing more to be done */
}
device->driver_data = nv_install_notifier(device, nv_acpi_event);
if (!device->driver_data)
nvif_parent_gpu_handle = NULL;
} while (0);
}
#endif
if (wmmx_handle)
*handlesPresent = *handlesPresent | NV_ACPI_WMMX_HANDLE_PRESENT;
#if defined(NV_ACPI_BUS_GET_DEVICE_PRESENT)
if (psr_handle)
{
// Since _PSR is not a per-GPU construct we only need to register a
@@ -673,15 +268,9 @@ void NV_API_CALL nv_acpi_methods_init(NvU32 *handlesPresent)
// devices
if (psr_nv_acpi_object == NULL)
{
retVal = acpi_bus_get_device(psr_device_handle, &device);
if (!(ACPI_FAILURE(retVal) || !device))
{
psr_nv_acpi_object = nv_install_notifier(device, nv_acpi_powersource_hotplug_event);
}
psr_nv_acpi_object = nv_install_notifier(psr_device_handle, nv_acpi_powersource_hotplug_event);
}
}
#endif
return;
}
@@ -698,7 +287,6 @@ acpi_status nv_acpi_find_methods(
if (!acpi_get_handle(handle, "NVIF", &method_handle))
{
nvif_handle = method_handle;
nvif_parent_gpu_handle = handle;
}
if (!acpi_get_handle(handle, "WMMX", &method_handle))
@@ -717,8 +305,6 @@ acpi_status nv_acpi_find_methods(
void NV_API_CALL nv_acpi_methods_uninit(void)
{
struct acpi_device *device = NULL;
nvif_handle = NULL;
wmmx_handle = NULL;
@@ -730,20 +316,6 @@ void NV_API_CALL nv_acpi_methods_uninit(void)
psr_device_handle = NULL;
psr_nv_acpi_object = NULL;
}
if (nvif_parent_gpu_handle == NULL)
return;
#if defined(NV_ACPI_BUS_GET_DEVICE_PRESENT)
acpi_bus_get_device(nvif_parent_gpu_handle, &device);
nv_uninstall_notifier(device->driver_data, nv_acpi_event);
#endif
device->driver_data = NULL;
nvif_parent_gpu_handle = NULL;
return;
}
static NV_STATUS nv_acpi_extract_integer(
@@ -1763,16 +1335,6 @@ NvBool NV_API_CALL nv_acpi_is_battery_present(void)
#else // NV_LINUX_ACPI_EVENTS_SUPPORTED
int nv_acpi_init(void)
{
return 0;
}
int nv_acpi_uninit(void)
{
return 0;
}
void NV_API_CALL nv_acpi_methods_init(NvU32 *handlePresent)
{
*handlePresent = 0;


@@ -25,6 +25,7 @@
#include "os-interface.h"
#include "nv-linux.h"
#include "nv-reg.h"
#define NV_DMA_DEV_PRINTF(debuglevel, dma_dev, format, ... ) \
nv_printf(debuglevel, "NVRM: %s: " format, \
@@ -32,6 +33,8 @@
NULL), \
## __VA_ARGS__)
NvU32 nv_dma_remap_peer_mmio = NV_DMA_REMAP_PEER_MMIO_ENABLE;
NV_STATUS nv_create_dma_map_scatterlist (nv_dma_map_t *dma_map);
void nv_destroy_dma_map_scatterlist(nv_dma_map_t *dma_map);
NV_STATUS nv_map_dma_map_scatterlist (nv_dma_map_t *dma_map);
@@ -766,11 +769,16 @@ NV_STATUS NV_API_CALL nv_dma_unmap_alloc
return status;
}
static NvBool nv_dma_is_map_resource_implemented
static NvBool nv_dma_use_map_resource
(
nv_dma_device_t *dma_dev
)
{
if (nv_dma_remap_peer_mmio == NV_DMA_REMAP_PEER_MMIO_DISABLE)
{
return NV_FALSE;
}
#if defined(NV_DMA_MAP_RESOURCE_PRESENT)
const struct dma_map_ops *ops = get_dma_ops(dma_dev->dev);
@@ -833,7 +841,7 @@ NV_STATUS NV_API_CALL nv_dma_map_peer
return NV_ERR_INVALID_REQUEST;
}
if (nv_dma_is_map_resource_implemented(dma_dev))
if (nv_dma_use_map_resource(dma_dev))
{
status = nv_dma_map_mmio(dma_dev, page_count, va);
}
@@ -858,7 +866,7 @@ void NV_API_CALL nv_dma_unmap_peer
NvU64 va
)
{
if (nv_dma_is_map_resource_implemented(dma_dev))
if (nv_dma_use_map_resource(dma_dev))
{
nv_dma_unmap_mmio(dma_dev, page_count, va);
}
@@ -873,29 +881,28 @@ NV_STATUS NV_API_CALL nv_dma_map_mmio
)
{
#if defined(NV_DMA_MAP_RESOURCE_PRESENT)
NvU64 mmio_addr;
BUG_ON(!va);
mmio_addr = *va;
*va = dma_map_resource(dma_dev->dev, mmio_addr, page_count * PAGE_SIZE,
DMA_BIDIRECTIONAL, 0);
if (dma_mapping_error(dma_dev->dev, *va))
if (nv_dma_use_map_resource(dma_dev))
{
NV_DMA_DEV_PRINTF(NV_DBG_ERRORS, dma_dev,
"Failed to DMA map MMIO range [0x%llx-0x%llx]\n",
mmio_addr, mmio_addr + page_count * PAGE_SIZE - 1);
return NV_ERR_OPERATING_SYSTEM;
NvU64 mmio_addr = *va;
*va = dma_map_resource(dma_dev->dev, mmio_addr, page_count * PAGE_SIZE,
DMA_BIDIRECTIONAL, 0);
if (dma_mapping_error(dma_dev->dev, *va))
{
NV_DMA_DEV_PRINTF(NV_DBG_ERRORS, dma_dev,
"Failed to DMA map MMIO range [0x%llx-0x%llx]\n",
mmio_addr, mmio_addr + page_count * PAGE_SIZE - 1);
return NV_ERR_OPERATING_SYSTEM;
}
}
/*
* The default implementation passes through the source address
* without failing. Adjust it using the DMA start address to keep RM's
* validation schemes happy.
*/
if (!nv_dma_is_map_resource_implemented(dma_dev))
else
{
/*
* If dma_map_resource is not available, pass through the source address
* without failing. Further, adjust it using the DMA start address to
* keep RM's validation schemes happy.
*/
*va = *va + dma_dev->addressable_range.start;
}
@@ -915,15 +922,13 @@ void NV_API_CALL nv_dma_unmap_mmio
)
{
#if defined(NV_DMA_MAP_RESOURCE_PRESENT)
if (!nv_dma_is_map_resource_implemented(dma_dev))
{
va = va - dma_dev->addressable_range.start;
}
nv_dma_nvlink_addr_decompress(dma_dev, &va, page_count, NV_TRUE);
dma_unmap_resource(dma_dev->dev, va, page_count * PAGE_SIZE,
DMA_BIDIRECTIONAL, 0);
if (nv_dma_use_map_resource(dma_dev))
{
dma_unmap_resource(dma_dev->dev, va, page_count * PAGE_SIZE,
DMA_BIDIRECTIONAL, 0);
}
#endif
}

View File

@ -48,6 +48,7 @@ typedef struct nv_dma_buf_file_private
nv_dma_buf_mem_handle_t *handles;
NvU64 bar1_va_ref_count;
void *mig_info;
NvBool can_mmap;
} nv_dma_buf_file_private_t;
static void
@ -562,6 +563,8 @@ nv_dma_buf_mmap(
struct vm_area_struct *vma
)
{
// TODO: Check can_mmap flag
return -ENOTSUPP;
}
@ -674,6 +677,7 @@ nv_dma_buf_create(
priv->total_objects = params->totalObjects;
priv->total_size = params->totalSize;
priv->nv = nv;
priv->can_mmap = NV_FALSE;
rc = nv_kmem_cache_alloc_stack(&sp);
if (rc != 0)
@ -792,6 +796,15 @@ nv_dma_buf_reuse(
return NV_ERR_OPERATING_SYSTEM;
}
priv = buf->priv;
if (priv == NULL)

View File

@ -301,23 +301,6 @@
#define __NV_ENABLE_MSI EnableMSI
#define NV_REG_ENABLE_MSI NV_REG_STRING(__NV_ENABLE_MSI)
/*
* Option: RegisterForACPIEvents
*
* Description:
*
* When this option is enabled, the NVIDIA driver will register with the
* ACPI subsystem to receive notification of ACPI events.
*
* Possible values:
*
* 1 - register for ACPI events (default)
* 0 - do not register for ACPI events
*/
#define __NV_REGISTER_FOR_ACPI_EVENTS RegisterForACPIEvents
#define NV_REG_REGISTER_FOR_ACPI_EVENTS NV_REG_STRING(__NV_REGISTER_FOR_ACPI_EVENTS)
/*
* Option: EnablePCIeGen3
*
@ -819,6 +802,30 @@
#define NV_REG_OPENRM_ENABLE_UNSUPPORTED_GPUS_ENABLE 0x00000001
#define NV_REG_OPENRM_ENABLE_UNSUPPORTED_GPUS_DEFAULT NV_REG_OPENRM_ENABLE_UNSUPPORTED_GPUS_DISABLE
/*
* Option: NVreg_DmaRemapPeerMmio
*
* Description:
*
* When this option is enabled, the NVIDIA driver will use device driver
* APIs provided by the Linux kernel for DMA-remapping part of a device's
 * MMIO region to another device, creating, for example, IOMMU mappings as necessary.
* When this option is disabled, the NVIDIA driver will instead only apply a
* fixed offset, which may be zero, to CPU physical addresses to produce the
* DMA address for the peer's MMIO region, and no IOMMU mappings will be
* created.
*
* This option only affects peer MMIO DMA mappings, and not system memory
* mappings.
*
* Possible Values:
* 0 = disable dynamic DMA remapping of peer MMIO regions
* 1 = enable dynamic DMA remapping of peer MMIO regions (default)
*/
#define __NV_DMA_REMAP_PEER_MMIO DmaRemapPeerMmio
#define NV_DMA_REMAP_PEER_MMIO NV_REG_STRING(__NV_DMA_REMAP_PEER_MMIO)
#define NV_DMA_REMAP_PEER_MMIO_DISABLE 0x00000000
#define NV_DMA_REMAP_PEER_MMIO_ENABLE 0x00000001
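The fixed-offset fallback described above (the NVreg_DmaRemapPeerMmio=0 path) can be sketched as a one-line address transform. This is a minimal illustration, not driver code: the helper name and plain `uint64_t` types are hypothetical stand-ins for the driver's `NvU64` and `addressable_range` handling.

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the non-remapped peer MMIO path: with dynamic DMA remapping
 * disabled, the driver applies only a fixed offset (possibly zero) to the
 * CPU physical address to form the peer's DMA address, and no IOMMU
 * mapping is created. The helper name is hypothetical. */
static uint64_t peer_dma_addr_fixed_offset(uint64_t cpu_phys, uint64_t dma_start)
{
    return cpu_phys + dma_start;
}
```

On platforms whose DMA window starts at zero, the DMA address equals the CPU physical address, which is why the passthrough is described above as "a fixed offset, which may be zero".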
#if defined(NV_DEFINE_REGISTRY_KEY_TABLE)
@ -834,7 +841,6 @@ NV_DEFINE_REG_ENTRY(__NV_DEVICE_FILE_GID, 0);
NV_DEFINE_REG_ENTRY(__NV_DEVICE_FILE_MODE, 0666);
NV_DEFINE_REG_ENTRY(__NV_INITIALIZE_SYSTEM_MEMORY_ALLOCATIONS, 1);
NV_DEFINE_REG_ENTRY(__NV_USE_PAGE_ATTRIBUTE_TABLE, ~0);
NV_DEFINE_REG_ENTRY(__NV_REGISTER_FOR_ACPI_EVENTS, 1);
NV_DEFINE_REG_ENTRY(__NV_ENABLE_PCIE_GEN3, 0);
NV_DEFINE_REG_ENTRY(__NV_ENABLE_MSI, 1);
NV_DEFINE_REG_ENTRY(__NV_TCE_BYPASS_MODE, NV_TCE_BYPASS_MODE_DEFAULT);
@ -871,6 +877,7 @@ NV_DEFINE_REG_STRING_ENTRY(__NV_RM_MSG, NULL);
NV_DEFINE_REG_STRING_ENTRY(__NV_GPU_BLACKLIST, NULL);
NV_DEFINE_REG_STRING_ENTRY(__NV_TEMPORARY_FILE_PATH, NULL);
NV_DEFINE_REG_STRING_ENTRY(__NV_EXCLUDED_GPUS, NULL);
NV_DEFINE_REG_ENTRY(__NV_DMA_REMAP_PEER_MMIO, NV_DMA_REMAP_PEER_MMIO_ENABLE);
/*
*----------------registry database definition----------------------
@ -893,7 +900,6 @@ nv_parm_t nv_parms[] = {
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_INITIALIZE_SYSTEM_MEMORY_ALLOCATIONS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_USE_PAGE_ATTRIBUTE_TABLE),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_MSI),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_REGISTER_FOR_ACPI_EVENTS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_PCIE_GEN3),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_MEMORY_POOL_SIZE),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_KMALLOC_HEAP_MAX_SIZE),
@ -918,6 +924,7 @@ nv_parm_t nv_parms[] = {
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_GPU_FIRMWARE_LOGS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_DBG_BREAKPOINT),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_OPENRM_ENABLE_UNSUPPORTED_GPUS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_DMA_REMAP_PEER_MMIO),
{NULL, NULL}
};

View File

@ -114,6 +114,7 @@ nv_linux_state_t *nv_linux_devices;
* And one for the control device
*/
nv_linux_state_t nv_ctl_device = { { 0 } };
extern NvU32 nv_dma_remap_peer_mmio;
nv_kthread_q_t nv_kthread_q;
nv_kthread_q_t nv_deferred_close_kthread_q;
@ -571,6 +572,12 @@ nv_registry_keys_init(nv_stack_t *sp)
WARN_ON(status != NV_OK);
}
}
status = rm_read_registry_dword(sp, nv, NV_DMA_REMAP_PEER_MMIO, &data);
if (status == NV_OK)
{
nv_dma_remap_peer_mmio = data;
}
}
static void __init
@ -2660,7 +2667,6 @@ nvidia_ctl_open(
nv_linux_state_t *nvl = &nv_ctl_device;
nv_state_t *nv = NV_STATE_PTR(nvl);
nv_linux_file_private_t *nvlfp = NV_GET_LINUX_FILE_PRIVATE(file);
static int count = 0;
nv_printf(NV_DBG_INFO, "NVRM: nvidia_ctl_open\n");
@ -2672,13 +2678,6 @@ nvidia_ctl_open(
if (NV_ATOMIC_READ(nvl->usage_count) == 0)
{
nv->flags |= (NV_FLAG_OPEN | NV_FLAG_CONTROL);
if ((nv_acpi_init() < 0) &&
(count++ < NV_MAX_RECURRING_WARNING_MESSAGES))
{
nv_printf(NV_DBG_ERRORS,
"NVRM: failed to register with the ACPI subsystem!\n");
}
}
NV_ATOMIC_INC(nvl->usage_count);
@ -2702,7 +2701,6 @@ nvidia_ctl_close(
nv_state_t *nv = NV_STATE_PTR(nvl);
nv_linux_file_private_t *nvlfp = NV_GET_LINUX_FILE_PRIVATE(file);
nvidia_stack_t *sp = nvlfp->sp;
static int count = 0;
unsigned int i;
nv_printf(NV_DBG_INFO, "NVRM: nvidia_ctl_close\n");
@ -2711,13 +2709,6 @@ nvidia_ctl_close(
if (NV_ATOMIC_DEC_AND_TEST(nvl->usage_count))
{
nv->flags &= ~NV_FLAG_OPEN;
if ((nv_acpi_uninit() < 0) &&
(count++ < NV_MAX_RECURRING_WARNING_MESSAGES))
{
nv_printf(NV_DBG_ERRORS,
"NVRM: failed to unregister from the ACPI subsystem!\n");
}
}
up(&nvl->ldata_lock);

View File

@ -320,6 +320,9 @@ namespace DisplayPort
//
bool bDscMstEnablePassThrough;
// Reduce number of 2H1OR LTs which fixes bug 3534707
bool bDscOptimizeLTBug3534707;
//
// Synaptics branch device doesn't support Virtual Peer Devices so DSC
// capability of downstream device should be decided based on device's own
@ -505,6 +508,7 @@ namespace DisplayPort
void populateDscGpuCaps(DSC_INFO* dscInfo);
void populateForcedDscParams(DSC_INFO* dscInfo, DSC_INFO::FORCED_DSC_PARAMS* forcedParams);
void populateDscSinkCaps(DSC_INFO* dscInfo, DeviceImpl * dev);
void populateDscBranchCaps(DSC_INFO* dscInfo, DeviceImpl * dev);
void populateDscModesetInfo(MODESET_INFO * pModesetInfo, const DpModesetParams * pModesetParams);
bool train(const LinkConfiguration & lConfig, bool force, LinkTrainingType trainType = NORMAL_LINK_TRAINING);

View File

@ -425,7 +425,9 @@ namespace DisplayPort
NvBool isDSCPossible();
bool isFECSupported();
bool readAndParseDSCCaps();
bool readAndParseBranchSpecificDSCCaps();
bool parseDscCaps(const NvU8 *buffer, NvU32 bufferSize);
bool parseBranchSpecificDscCaps(const NvU8 *buffer, NvU32 bufferSize);
bool setDscEnable(bool enable);
bool getDscEnable(bool *pEnable);
unsigned getDscVersionMajor();

View File

@ -53,6 +53,8 @@ namespace DisplayPort
bool bWaitForDeAllocACT;
bool bDeferredPayloadAlloc;
ModesetInfo lastModesetInfo;
DSC_MODE dscModeRequest; // DSC mode requested during NAB
DSC_MODE dscModeActive; // DSC mode currently active, set in NAE
DP_SINGLE_HEAD_MULTI_STREAM_PIPELINE_ID singleHeadMultiStreamID;
DP_SINGLE_HEAD_MULTI_STREAM_MODE singleHeadMultiStreamMode;
DP_COLORFORMAT colorFormat;
@ -76,6 +78,8 @@ namespace DisplayPort
hdcpEnabled(false),
hdcpPreviousStatus(false),
bWaitForDeAllocACT(false),
dscModeRequest(DSC_MODE_NONE),
dscModeActive(DSC_MODE_NONE),
singleHeadMultiStreamID(DP_SINGLE_HEAD_MULTI_STREAM_PIPELINE_ID_PRIMARY),
singleHeadMultiStreamMode(DP_SINGLE_HEAD_MULTI_STREAM_MODE_NONE),
bIsCurrentModesetGroup(false),

View File

@ -116,6 +116,8 @@ namespace DisplayPort
bool isBeingDestroyed;
bool isPaused;
bool bNoReplyTimerForBusyWaiting;
List messageReceivers;
List notYetSentDownRequest; // Down Messages yet to be processed
List notYetSentUpReply; // Up Reply Messages yet to be processed
@ -153,6 +155,13 @@ namespace DisplayPort
mergerDownReply.mailboxInterrupt();
}
void applyRegkeyOverrides(const DP_REGKEY_DATABASE& dpRegkeyDatabase)
{
DP_ASSERT(dpRegkeyDatabase.bInitialized &&
"All regkeys are invalid because dpRegkeyDatabase is not initialized!");
bNoReplyTimerForBusyWaiting = dpRegkeyDatabase.bNoReplyTimerForBusyWaiting;
}
MessageManager(DPCDHAL * hal, Timer * timer)
: timer(timer), hal(hal),
splitterDownRequest(hal, timer),
@ -236,6 +245,7 @@ namespace DisplayPort
MessageManager * parent;
bool transmitReply;
bool bTransmitted;
bool bBusyWaiting;
unsigned requestIdentifier;
unsigned messagePriority;
unsigned sinkPort;
@ -261,6 +271,7 @@ namespace DisplayPort
parent(0),
transmitReply(false),
bTransmitted(false),
bBusyWaiting(false),
requestIdentifier(requestIdentifier),
messagePriority(messagePriority),
sinkPort(0xFF)

View File

@ -65,11 +65,13 @@
//
#define NV_DP_DSC_MST_CAP_BUG_3143315 "DP_DSC_MST_CAP_BUG_3143315"
//
// Enable DSC Pass through support in MST mode.
//
#define NV_DP_DSC_MST_ENABLE_PASS_THROUGH "DP_DSC_MST_ENABLE_PASS_THROUGH"
// Regkey to reduce number of 2H1OR LTs which fixes bug 3534707
#define NV_DP_DSC_OPTIMIZE_LT_BUG_3534707 "DP_DSC_OPTIMIZE_LT_BUG_3534707"
#define NV_DP_REGKEY_NO_REPLY_TIMER_FOR_BUSY_WAITING "NO_REPLY_TIMER_FOR_BUSY_WAITING"
//
// Data Base used to store all the regkey values.
// The actual data base is declared statically in dp_evoadapter.cpp.
@ -102,6 +104,8 @@ struct DP_REGKEY_DATABASE
bool bBypassEDPRevCheck;
bool bDscMstCapBug3143315;
bool bDscMstEnablePassThrough;
bool bDscOptimizeLTBug3534707;
bool bNoReplyTimerForBusyWaiting;
};
#endif //INCLUDED_DP_REGKEYDATABASE_H

View File

@ -189,6 +189,7 @@ void ConnectorImpl::applyRegkeyOverrides(const DP_REGKEY_DATABASE& dpRegkeyDatab
this->bEnableFastLT = dpRegkeyDatabase.bFastLinkTrainingEnabled;
this->bDscMstCapBug3143315 = dpRegkeyDatabase.bDscMstCapBug3143315;
this->bDscMstEnablePassThrough = dpRegkeyDatabase.bDscMstEnablePassThrough;
this->bDscOptimizeLTBug3534707 = dpRegkeyDatabase.bDscOptimizeLTBug3534707;
}
void ConnectorImpl::setPolicyModesetOrderMitigation(bool enabled)
@ -630,6 +631,12 @@ create:
{
// Read and parse DSC caps only if panel supports DSC
newDev->readAndParseDSCCaps();
// Read and Parse Branch Specific DSC Caps
if (!newDev->isVideoSink() && !newDev->isAudioSink())
{
newDev->readAndParseBranchSpecificDSCCaps();
}
}
// Decide if DSC stream can be sent to new device
@ -655,13 +662,13 @@ create:
if (this->bDscMstEnablePassThrough)
{
//
// Check the device's own and its parent's DSC capability.
// - Sink device will do DSC decompression when
// Check the device's own and its parent's DSC capability.
// - Sink device will do DSC decompression when
// 1. Sink device is capable of DSC decompression and parent
// supports DSC pass through.
//
// - Sink device's parent will do DSC decompression
// 1. If sink device supports DSC decompression but its parent does not support
// - Sink device's parent will do DSC decompression
// 1. If sink device supports DSC decompression but its parent does not support
// DSC Pass through, but supports DSC decompression.
// 2. If the device does not support DSC decompression, but parent supports it.
//
@ -672,7 +679,7 @@ create:
if (newDev->parent->isDSCPassThroughSupported())
{
//
// This condition takes care of DSC capable sink devices
// This condition takes care of DSC capable sink devices
// connected behind a DSC Pass through capable branch
//
newDev->devDoingDscDecompression = newDev;
@ -681,12 +688,12 @@ create:
else if (newDev->parent->isDSCSupported())
{
//
// This condition takes care of DSC capable sink devices
// This condition takes care of DSC capable sink devices
// connected behind a branch device that is not capable
// of DSC pass through but can do DSC decompression.
//
newDev->bDSCPossible = true;
newDev->devDoingDscDecompression = newDev->parent;
newDev->devDoingDscDecompression = newDev->parent;
}
}
else
@ -695,11 +702,11 @@ create:
newDev->devDoingDscDecompression = newDev;
newDev->bDSCPossible = true;
}
}
}
else if (newDev->parent && newDev->parent->isDSCSupported())
{
//
// This condition takes care of sink devices not capable of DSC
// This condition takes care of sink devices not capable of DSC
// but parent is capable of DSC decompression.
//
newDev->bDSCPossible = true;
@ -709,7 +716,7 @@ create:
else
{
//
// Revert to old code if DSC Pass through support is not requested.
// Revert to old code if DSC Pass through support is not requested.
// This code will be deleted once DSC Pass through support will be enabled
// by default which will be done when 2Head1OR MST (GR-133) will be in production.
//
@ -1726,6 +1733,15 @@ void ConnectorImpl::populateDscGpuCaps(DSC_INFO* dscInfo)
dscInfo->gpuCaps.lineBufferBitDepth = lineBufferBitDepth;
}
void ConnectorImpl::populateDscBranchCaps(DSC_INFO* dscInfo, DeviceImpl * dev)
{
dscInfo->branchCaps.overallThroughputMode0 = dev->dscCaps.branchDSCOverallThroughputMode0;
dscInfo->branchCaps.overallThroughputMode1 = dev->dscCaps.branchDSCOverallThroughputMode1;
dscInfo->branchCaps.maxLineBufferWidth = dev->dscCaps.branchDSCMaximumLineBufferWidth;
return;
}
void ConnectorImpl::populateDscSinkCaps(DSC_INFO* dscInfo, DeviceImpl * dev)
{
// Early return if dscInfo or dev is NULL
@ -1846,6 +1862,12 @@ void ConnectorImpl::populateDscCaps(DSC_INFO* dscInfo, DeviceImpl * dev, DSC_INF
// Sink DSC capabilities
populateDscSinkCaps(dscInfo, dev);
// Branch Specific DSC Capabilities
if (!dev->isVideoSink() && !dev->isAudioSink())
{
populateDscBranchCaps(dscInfo, dev);
}
// GPU DSC capabilities
populateDscGpuCaps(dscInfo);
@ -2621,11 +2643,6 @@ bool ConnectorImpl::notifyAttachBegin(Group * target, // Gr
}
}
if (bEnableDsc)
{
DP_LOG(("DPCONN> DSC Mode = %s", (modesetParams.modesetInfo.mode == DSC_SINGLE) ? "SINGLE" : "DUAL"));
}
for (Device * dev = target->enumDevices(0); dev; dev = target->enumDevices(dev))
{
Address::StringBuffer buffer;
@ -2641,6 +2658,12 @@ bool ConnectorImpl::notifyAttachBegin(Group * target, // Gr
GroupImpl* targetImpl = (GroupImpl*)target;
targetImpl->bIsCurrentModesetGroup = true;
if (bEnableDsc)
{
DP_LOG(("DPCONN> DSC Mode = %s", (modesetParams.modesetInfo.mode == DSC_SINGLE) ? "SINGLE" : "DUAL"));
targetImpl->dscModeRequest = modesetParams.modesetInfo.mode;
}
DP_ASSERT(!(targetImpl->isHeadAttached() && targetImpl->bIsHeadShutdownNeeded) && "Head should have been shut down but it is still active!");
targetImpl->headInFirmware = false;
@ -2788,6 +2811,10 @@ void ConnectorImpl::notifyAttachEnd(bool modesetCancelled)
currentModesetDeviceGroup->setHeadAttached(false);
}
// set dscModeActive to what was requested in NAB and clear dscModeRequest
currentModesetDeviceGroup->dscModeActive = currentModesetDeviceGroup->dscModeRequest;
currentModesetDeviceGroup->dscModeRequest = DSC_MODE_NONE;
currentModesetDeviceGroup->setHeadAttached(true);
RmDfpCache dfpCache = {0};
dfpCache.updMask = 0;
@ -2934,6 +2961,7 @@ void ConnectorImpl::notifyDetachEnd(bool bKeepOdAlive)
dpMemZero(&currentModesetDeviceGroup->lastModesetInfo, sizeof(ModesetInfo));
currentModesetDeviceGroup->setHeadAttached(false);
currentModesetDeviceGroup->headInFirmware = false;
currentModesetDeviceGroup->dscModeActive = DSC_MODE_NONE;
// Mark head as disconnected
bNoLtDoneAfterHeadDetach = true;
@ -3980,18 +4008,36 @@ bool ConnectorImpl::trainLinkOptimized(LinkConfiguration lConfig)
GroupImpl * groupAttached = 0;
for (ListElement * e = activeGroups.begin(); e != activeGroups.end(); e = e->next)
{
DP_ASSERT(bIsUefiSystem || (!groupAttached && "Multiple attached heads"));
DP_ASSERT(bIsUefiSystem);
groupAttached = (GroupImpl * )e;
if ((groupAttached->lastModesetInfo.mode == DSC_DUAL) && groupAttached->bIsCurrentModesetGroup)
if (bDscOptimizeLTBug3534707)
{
//
// If current modeset group requires 2Head1OR mode, we should retrain link.
// For SST, there will be only one group per connector.
// For MST, we need to re-run LT in case the current modeset group requires DSC_DUAL.
//
bTwoHeadOneOrLinkRetrain = true;
break;
if ((groupAttached->dscModeRequest == DSC_DUAL) && (groupAttached->dscModeActive != DSC_DUAL))
{
//
// If current modeset group requires 2Head1OR and
// - group is not active yet (first modeset on the group)
// - group is active but not in 2Head1OR mode (last modeset on the group did not require 2Head1OR)
// then re-train the link
// This is because for 2Head1OR mode, we need to set some LT parameters for slave SOR after
// successful LT on primary SOR without which 2Head1OR modeset will lead to HW hang.
//
bTwoHeadOneOrLinkRetrain = true;
break;
}
}
else
{
if (groupAttached->lastModesetInfo.mode == DSC_DUAL && groupAttached->bIsCurrentModesetGroup)
{
//
// If current modeset group requires 2Head1OR mode, we should retrain link.
// For SST, there will be only one group per connector.
// For MST, we need to re-run LT in case the current modeset group requires DSC_DUAL.
//
bTwoHeadOneOrLinkRetrain = true;
break;
}
}
}
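The two retrain policies contrasted in the hunk above can be condensed into a single predicate. This is a hedged sketch with invented `sk_`-prefixed names; the real logic lives inline in `ConnectorImpl::trainLinkOptimized` and uses the driver's `DSC_MODE` enum and `GroupImpl` fields rather than these stand-ins.

```c
#include <stdbool.h>

/* Hypothetical mirror of the DSC mode enum; names are assumptions for
 * illustration, not the driver's actual definitions. */
typedef enum { SK_DSC_MODE_NONE, SK_DSC_MODE_SINGLE, SK_DSC_MODE_DUAL } sk_dsc_mode;

/* Sketch of the 2Head1OR retrain decision described in the comments above:
 * with the DP_DSC_OPTIMIZE_LT_BUG_3534707 regkey enabled, retrain when
 * DSC_DUAL is requested but the group is not already active in DSC_DUAL;
 * with it disabled, fall back to the old "current modeset group last ran
 * in DSC_DUAL" check. */
static bool sk_needs_2h1or_retrain(bool optimize_lt,
                                   sk_dsc_mode requested,
                                   sk_dsc_mode active,
                                   sk_dsc_mode last_modeset,
                                   bool is_current_group)
{
    if (optimize_lt)
        return (requested == SK_DSC_MODE_DUAL) && (active != SK_DSC_MODE_DUAL);
    return (last_modeset == SK_DSC_MODE_DUAL) && is_current_group;
}
```

The new-path predicate skips the redundant retrain when the group is already active in 2Head1OR mode, which is the optimization the regkey gates.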
@ -4077,10 +4123,10 @@ bool ConnectorImpl::trainLinkOptimized(LinkConfiguration lConfig)
{
//
// Check if we are already trained to the desired link config?
// Even if we are, we need to redo LT if FEC is enabled or DSC mode is DSC_DUAL
// since if current modeset requires 2H1OR, LT done during assessLink will not
// have 2H1Or flag set or if last modeset required DSC but not 2H1OR, still 2H1Or
// flag will not be set and modeset will lead to HW hang.
// Make sure requested FEC state matches with the current FEC state of link.
// If 2Head1OR mode is requested, retrain if group is not active or
// last modeset on active group was not in 2Head1OR mode.
// bTwoHeadOneOrLinkRetrain tracks this requirement.
//
//
@ -4093,7 +4139,8 @@ bool ConnectorImpl::trainLinkOptimized(LinkConfiguration lConfig)
if ((activeLinkConfig == lowestSelected) &&
(!isLinkInD3()) &&
(!isLinkLost()) &&
!(this->bFECEnable) &&
((!bDscOptimizeLTBug3534707 && !this->bFECEnable) ||
(bDscOptimizeLTBug3534707 && (this->bFECEnable == activeLinkConfig.bEnableFEC))) &&
!bTwoHeadOneOrLinkRetrain)
{
if (bSkipRedundantLt || main->isInternalPanelDynamicMuxCapable())
@ -4209,11 +4256,9 @@ bool ConnectorImpl::trainLinkOptimized(LinkConfiguration lConfig)
//
// Make sure link is physically active and healthy, otherwise re-train.
// We need to retrain if the link is in 2Head1OR MST mode. For example,
// if we plug in a 2Head1OR panel to an active link that is already driving
// a MST panel in DSC mode, RM will assign a secondary OR to the 2Head1OR panel.
// But since there is no change required in linkConfig DPlib will skip
// LT, resulting in not adding secondary OR to LT; this will lead to HW hang.
// Make sure requested FEC state matches with the current FEC state of link.
// If 2Head1OR mode is requested, retrain if group is not active or last modeset on active group
// was not in 2Head1OR mode. bTwoHeadOneOrLinkRetrain tracks this requirement.
//
bRetrainToEnsureLinkStatus = (isLinkActive() && isLinkInD3()) ||
isLinkLost() ||
@ -5660,7 +5705,7 @@ void ConnectorImpl::notifyLongPulseInternal(bool statusConnected)
if (hal->getSupportsMultistream() && main->hasMultistream())
{
bool bDeleteFirmwareVC = false;
const DP_REGKEY_DATABASE& dpRegkeyDatabase = main->getRegkeyDatabase();
DP_LOG(("DP> Multistream panel detected, building message manager"));
//
@ -5669,6 +5714,7 @@ void ConnectorImpl::notifyLongPulseInternal(bool statusConnected)
//
messageManager = new MessageManager(hal, timer);
messageManager->registerReceiver(&ResStatus);
messageManager->applyRegkeyOverrides(dpRegkeyDatabase);
//
// Create a discovery manager to initiate detection

View File

@ -1687,6 +1687,50 @@ bool DeviceImpl::parseDscCaps(const NvU8 *buffer, NvU32 bufferSize)
return true;
}
bool DeviceImpl::parseBranchSpecificDscCaps(const NvU8 *buffer, NvU32 bufferSize)
{
if (bufferSize < 3)
{
DP_LOG((" Branch DSC caps buffer size must be at least 3 bytes"));
return false;
}
dscCaps.branchDSCOverallThroughputMode0 = DRF_VAL(_DPCD20, _BRANCH_DSC_OVERALL_THROUGHPUT_MODE_0, _VALUE, buffer[0x0]);
if (dscCaps.branchDSCOverallThroughputMode0 == 1)
{
dscCaps.branchDSCOverallThroughputMode0 = 680;
}
else if (dscCaps.branchDSCOverallThroughputMode0 >= 2)
{
dscCaps.branchDSCOverallThroughputMode0 = 600 + dscCaps.branchDSCOverallThroughputMode0 * 50;
}
dscCaps.branchDSCOverallThroughputMode1 = DRF_VAL(_DPCD20, _BRANCH_DSC_OVERALL_THROUGHPUT_MODE_1, _VALUE, buffer[0x1]);
if (dscCaps.branchDSCOverallThroughputMode1 == 1)
{
dscCaps.branchDSCOverallThroughputMode1 = 680;
}
else if (dscCaps.branchDSCOverallThroughputMode1 >= 2)
{
dscCaps.branchDSCOverallThroughputMode1 = 600 + dscCaps.branchDSCOverallThroughputMode1 * 50;
}
dscCaps.branchDSCMaximumLineBufferWidth = DRF_VAL(_DPCD20, _BRANCH_DSC_MAXIMUM_LINE_BUFFER_WIDTH, _VALUE, buffer[0x2]);
if (dscCaps.branchDSCMaximumLineBufferWidth != 0)
{
if (dscCaps.branchDSCMaximumLineBufferWidth >= 16)
{
dscCaps.branchDSCMaximumLineBufferWidth = dscCaps.branchDSCMaximumLineBufferWidth * 320;
}
else
{
dscCaps.branchDSCMaximumLineBufferWidth = 0;
DP_LOG(("Value of branch DSC maximum line buffer width is invalid, so setting it to 0."));
}
}
return true;
}
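The raw-to-effective conversions performed by parseBranchSpecificDscCaps above can be factored into two small helpers. This is an illustrative sketch (helper names are invented); the encodings mirror the code exactly: a raw throughput value of 1 becomes 680, raw >= 2 becomes 600 + raw*50, and a line-buffer value below 16 is treated as invalid while raw >= 16 scales by 320.

```c
/* Sketch of the branch DSC overall throughput decode applied above to the
 * DPCD bytes at NV_DPCD20_BRANCH_DSC_OVERALL_THROUGHPUT_MODE_0/_1.
 * A raw value of 0 means "not reported". Helper names are hypothetical. */
static unsigned decode_branch_throughput(unsigned raw)
{
    if (raw == 1)
        return 680;
    if (raw >= 2)
        return 600 + raw * 50;
    return 0; /* not reported */
}

/* Sketch of the maximum line buffer width decode: values below 16 are
 * invalid per the parsing above and are forced to 0. */
static unsigned decode_line_buffer_width(unsigned raw)
{
    return (raw >= 16) ? raw * 320 : 0;
}
```

These effective values are what `_validateInput` later compares against the modeset's pixel clock and active width.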
bool DeviceImpl::readAndParseDSCCaps()
{
// Allocate a buffer of 16 bytes to read DSC caps
@ -1703,6 +1747,21 @@ bool DeviceImpl::readAndParseDSCCaps()
return parseDscCaps(&rawDscCaps[0], sizeof(rawDscCaps));
}
bool DeviceImpl::readAndParseBranchSpecificDSCCaps()
{
unsigned sizeCompleted = 0;
unsigned nakReason = NakUndefined;
NvU8 rawBranchSpecificDscCaps[3];
if(AuxBus::success != this->getDpcdData(NV_DPCD20_BRANCH_DSC_OVERALL_THROUGHPUT_MODE_0,
&rawBranchSpecificDscCaps[0], sizeof(rawBranchSpecificDscCaps), &sizeCompleted, &nakReason))
{
return false;
}
return parseBranchSpecificDscCaps(&rawBranchSpecificDscCaps[0], sizeof(rawBranchSpecificDscCaps));
}
bool DeviceImpl::getDscEnable(bool *pEnable)
{
AuxBus::status status = AuxBus::success;

View File

@ -94,7 +94,9 @@ const struct
{NV_DP_REGKEY_KEEP_OPT_LINK_ALIVE_SST, &dpRegkeyDatabase.bOptLinkKeptAliveSst, DP_REG_VAL_BOOL},
{NV_DP_REGKEY_FORCE_EDP_ILR, &dpRegkeyDatabase.bBypassEDPRevCheck, DP_REG_VAL_BOOL},
{NV_DP_DSC_MST_CAP_BUG_3143315, &dpRegkeyDatabase.bDscMstCapBug3143315, DP_REG_VAL_BOOL},
{NV_DP_DSC_MST_ENABLE_PASS_THROUGH, &dpRegkeyDatabase.bDscMstEnablePassThrough, DP_REG_VAL_BOOL}
{NV_DP_DSC_MST_ENABLE_PASS_THROUGH, &dpRegkeyDatabase.bDscMstEnablePassThrough, DP_REG_VAL_BOOL},
{NV_DP_DSC_OPTIMIZE_LT_BUG_3534707, &dpRegkeyDatabase.bDscOptimizeLTBug3534707, DP_REG_VAL_BOOL},
{NV_DP_REGKEY_NO_REPLY_TIMER_FOR_BUSY_WAITING, &dpRegkeyDatabase.bNoReplyTimerForBusyWaiting, DP_REG_VAL_BOOL}
};
EvoMainLink::EvoMainLink(EvoInterface * provider, Timer * timer) :

View File

@ -63,6 +63,11 @@ bool MessageManager::send(MessageManager::Message * message, NakData & nakData)
DP_USED(sb);
NvU64 startTime, elapsedTime;
if (bNoReplyTimerForBusyWaiting)
{
message->bBusyWaiting = true;
}
post(message, &completion);
startTime = timer->getTimeUs();
do
@ -152,14 +157,13 @@ void MessageManager::Message::splitterTransmitted(OutgoingTransactionManager * f
if (from == &parent->splitterDownRequest)
{
//
// Start the countdown timer for the reply
//
parent->timer->queueCallback(this, "SPLI", DPCD_MESSAGE_REPLY_TIMEOUT);
//
// Client will busy-wait for the message to complete, so we don't need the countdown timer.
if (!bBusyWaiting)
{
// Start the countdown timer for the reply
parent->timer->queueCallback(this, "SPLI", DPCD_MESSAGE_REPLY_TIMEOUT);
}
// Tell the message manager it may begin sending the next message
//
parent->transmitAwaitingDownRequests();
}
else // UpReply

View File

@ -262,6 +262,10 @@ typedef struct DscCaps
unsigned dscPeakThroughputMode1;
unsigned dscMaxSliceWidth;
unsigned branchDSCOverallThroughputMode0;
unsigned branchDSCOverallThroughputMode1;
unsigned branchDSCMaximumLineBufferWidth;
BITS_PER_PIXEL_INCREMENT dscBitsPerPixelIncrement;
} DscCaps;

View File

@ -1083,6 +1083,7 @@ number of Downstream ports will be limited to 32.
#define NV_DPCD_EDP_DISPLAY_CTL_COLOR_ENGINE_EN_DISABLED (0x00000000) /* RWXUV */
#define NV_DPCD_EDP_DISPLAY_CTL_OVERDRIVE_CTL 5:4 /* RWXUF */
#define NV_DPCD_EDP_DISPLAY_CTL_OVERDRIVE_CTL_AUTONOMOUS (0x00000000) /* RWXUV */
#define NV_DPCD_EDP_DISPLAY_CTL_OVERDRIVE_CTL_AUTONOMOUS_1 (0x00000001) /* RWXUV */
#define NV_DPCD_EDP_DISPLAY_CTL_OVERDRIVE_CTL_DISABLE (0x00000002) /* RWXUV */
#define NV_DPCD_EDP_DISPLAY_CTL_OVERDRIVE_CTL_ENABLE (0x00000003) /* RWXUV */
#define NV_DPCD_EDP_DISPLAY_CTL_VBLANK_BKLGHT_UPDATE_EN 7:7 /* RWXUF */

View File

@ -44,3 +44,14 @@
#define NV_DPCD20_PANEL_REPLAY_CONFIGURATION_ENABLE_PR_MODE 0:0
#define NV_DPCD20_PANEL_REPLAY_CONFIGURATION_ENABLE_PR_MODE_NO (0x00000000)
#define NV_DPCD20_PANEL_REPLAY_CONFIGURATION_ENABLE_PR_MODE_YES (0x00000001)
/// BRANCH SPECIFIC DSC CAPS
#define NV_DPCD20_BRANCH_DSC_OVERALL_THROUGHPUT_MODE_0 (0x000000A0)
#define NV_DPCD20_BRANCH_DSC_OVERALL_THROUGHPUT_MODE_0_VALUE 7:0
#define NV_DPCD20_BRANCH_DSC_OVERALL_THROUGHPUT_MODE_1 (0x000000A1)
#define NV_DPCD20_BRANCH_DSC_OVERALL_THROUGHPUT_MODE_1_VALUE 7:0
#define NV_DPCD20_BRANCH_DSC_MAXIMUM_LINE_BUFFER_WIDTH (0x000000A2)
#define NV_DPCD20_BRANCH_DSC_MAXIMUM_LINE_BUFFER_WIDTH_VALUE 7:0

View File

@ -36,25 +36,26 @@
// and then checked back in. You cannot make changes to these sections without
// corresponding changes to the buildmeister script
#ifndef NV_BUILD_BRANCH
#define NV_BUILD_BRANCH r515_95
#define NV_BUILD_BRANCH r516_10
#endif
#ifndef NV_PUBLIC_BRANCH
#define NV_PUBLIC_BRANCH r515_95
#define NV_PUBLIC_BRANCH r516_10
#endif
#if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS)
#define NV_BUILD_BRANCH_VERSION "rel/gpu_drv/r515/r515_95-155"
#define NV_BUILD_CHANGELIST_NUM (31261195)
#define NV_BUILD_BRANCH_VERSION "rel/gpu_drv/r515/r516_10-205"
#define NV_BUILD_CHANGELIST_NUM (31396299)
#define NV_BUILD_TYPE "Official"
#define NV_BUILD_NAME "rel/gpu_drv/r515/r515_95-155"
#define NV_LAST_OFFICIAL_CHANGELIST_NUM (31261195)
#define NV_BUILD_NAME "rel/gpu_drv/r515/r516_10-205"
#define NV_LAST_OFFICIAL_CHANGELIST_NUM (31396299)
#else /* Windows builds */
#define NV_BUILD_BRANCH_VERSION "r515_95-3"
#define NV_BUILD_CHANGELIST_NUM (31249857)
#define NV_BUILD_BRANCH_VERSION "r516_10-10"
#define NV_BUILD_CHANGELIST_NUM (31385161)
#define NV_BUILD_TYPE "Official"
#define NV_BUILD_NAME "516.01"
#define NV_LAST_OFFICIAL_CHANGELIST_NUM (31249857)
#define NV_BUILD_NAME "516.26"
#define NV_LAST_OFFICIAL_CHANGELIST_NUM (31385161)
#define NV_BUILD_BRANCH_BASE_VERSION R515
#endif
// End buildmeister python edited section

View File

@ -29,7 +29,7 @@
* type const char*.
*
* References:
* http://www.uefi.org/pnp_id_list
* https://uefi.org/pnp_id_list
*
*/

View File

@ -26,7 +26,7 @@
* byte array, using the Secure Hashing Algorithm 1 (SHA-1) as defined
* in FIPS PUB 180-1 published April 17, 1995:
*
* http://www.itl.nist.gov/fipspubs/fip180-1.htm
* https://www.itl.nist.gov/fipspubs/fip180-1.htm
*
* Some common test cases (see Appendices A and B of the above document):
*

View File

@ -4,7 +4,7 @@
#if defined(NV_LINUX) || defined(NV_BSD) || defined(NV_SUNOS) || defined(NV_VMWARE) || defined(NV_QNX) || defined(NV_INTEGRITY) || \
(defined(RMCFG_FEATURE_PLATFORM_GSP) && RMCFG_FEATURE_PLATFORM_GSP == 1)
#define NV_VERSION_STRING "515.43.04"
#define NV_VERSION_STRING "515.48.07"
#else

View File

@ -130,4 +130,8 @@
#define NV_FUSE_OPT_FPF_GSP_UCODE16_VERSION 0x008241FC /* RW-4R */
#define NV_FUSE_OPT_FPF_GSP_UCODE16_VERSION_DATA 15:0 /* RWIVF */
#define NV_FUSE_STATUS_OPT_DISPLAY 0x00820C04 /* R-I4R */
#define NV_FUSE_STATUS_OPT_DISPLAY_DATA 0:0 /* R-IVF */
#define NV_FUSE_STATUS_OPT_DISPLAY_DATA_ENABLE 0x00000000 /* R---V */
#endif // __ga100_dev_fuse_h__

View File

@ -1,5 +1,5 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2003-2021 NVIDIA CORPORATION & AFFILIATES
* SPDX-FileCopyrightText: Copyright (c) 2003-2022 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
@ -42,4 +42,11 @@
//
#define NV_XVE_PASSTHROUGH_EMULATED_CONFIG_ROOT_PORT_SPEED 3:0
//
// On some platforms it's beneficial to enable relaxed ordering after vetting
// it's safe to do so. To automate this process on virtualized platforms, allow
// RO to be requested through this emulated config space bit.
//
#define NV_XVE_PASSTHROUGH_EMULATED_CONFIG_RELAXED_ORDERING_ENABLE 4:4
#endif

View File

@ -29,8 +29,9 @@
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK_READ_PROTECTION_LEVEL0 0:0 /* */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK_READ_PROTECTION_LEVEL0_ENABLE 0x00000001 /* */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK_READ_PROTECTION_LEVEL0_DISABLE 0x00000000 /* */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_03(i) (0x00118214+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05(i) (0x00118234+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_42 0x001183a4 /* RW-4R */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_03(i) (0x00118214+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05(i) (0x00118234+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_42 0x001183a4 /* RW-4R */
#define NV_PGC6_BSI_SECURE_SCRATCH_14 0x001180f8 /* RW-4R */
#endif // __ga102_dev_gc6_island_h__

View File

@ -33,6 +33,9 @@
#define NV_USABLE_FB_SIZE_IN_MB NV_PGC6_AON_SECURE_SCRATCH_GROUP_42
#define NV_USABLE_FB_SIZE_IN_MB_VALUE 31:0
#define NV_USABLE_FB_SIZE_IN_MB_VALUE_INIT 0
#define NV_PGC6_BSI_SECURE_SCRATCH_14_BOOT_STAGE_3_HANDOFF 26:26
#define NV_PGC6_BSI_SECURE_SCRATCH_14_BOOT_STAGE_3_HANDOFF_VALUE_INIT 0x0
#define NV_PGC6_BSI_SECURE_SCRATCH_14_BOOT_STAGE_3_HANDOFF_VALUE_DONE 0x1
#endif // __ga102_dev_gc6_island_addendum_h__

View File

@ -0,0 +1,31 @@
/*
* SPDX-FileCopyrightText: Copyright (c) 2003-2022 NVIDIA CORPORATION & AFFILIATES
* SPDX-License-Identifier: MIT
*
* Permission is hereby granted, free of charge, to any person obtaining a
* copy of this software and associated documentation files (the "Software"),
* to deal in the Software without restriction, including without limitation
* the rights to use, copy, modify, merge, publish, distribute, sublicense,
* and/or sell copies of the Software, and to permit persons to whom the
* Software is furnished to do so, subject to the following conditions:
*
* The above copyright notice and this permission notice shall be included in
* all copies or substantial portions of the Software.
*
* THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
* IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
* FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
* THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
* LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
* FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
* DEALINGS IN THE SOFTWARE.
*/
#ifndef __gm107_dev_fuse_h__
#define __gm107_dev_fuse_h__
#define NV_FUSE_STATUS_OPT_DISPLAY 0x00021C04 /* R-I4R */
#define NV_FUSE_STATUS_OPT_DISPLAY_DATA 0:0 /* R-IVF */
#define NV_FUSE_STATUS_OPT_DISPLAY_DATA_ENABLE 0x00000000 /* R---V */
#endif // __gm107_dev_fuse_h__

View File

@ -29,7 +29,8 @@
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK_READ_PROTECTION_LEVEL0 0:0 /* */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK_READ_PROTECTION_LEVEL0_ENABLE 0x00000001 /* */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK_READ_PROTECTION_LEVEL0_DISABLE 0x00000000 /* */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_03(i) (0x00118214+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05(i) (0x00118234+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_03(i) (0x00118214+(i)*4) /* RW-4A */
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05(i) (0x00118234+(i)*4) /* RW-4A */
#define NV_PGC6_BSI_SECURE_SCRATCH_14 0x001180f8 /* RW-4R */
#endif // __tu102_dev_gc6_island_h__

View File

@ -30,6 +30,9 @@
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_0_GFW_BOOT NV_PGC6_AON_SECURE_SCRATCH_GROUP_05(0)
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_0_GFW_BOOT_PROGRESS 7:0
#define NV_PGC6_AON_SECURE_SCRATCH_GROUP_05_0_GFW_BOOT_PROGRESS_COMPLETED 0x000000FF
#define NV_PGC6_BSI_SECURE_SCRATCH_14_BOOT_STAGE_3_HANDOFF 26:26
#define NV_PGC6_BSI_SECURE_SCRATCH_14_BOOT_STAGE_3_HANDOFF_VALUE_INIT 0x0
#define NV_PGC6_BSI_SECURE_SCRATCH_14_BOOT_STAGE_3_HANDOFF_VALUE_DONE 0x1
#endif // __tu102_dev_gc6_island_addendum_h__
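In these manuals, `7:0` and `26:26` name a field's high:low bit range within the register. A minimal sketch of extracting such a field from a raw scratch value — the driver itself uses its DRF macros for this; `field_get` is a hypothetical stand-in:

```c
#include <assert.h>
#include <stdint.h>

/* Extract bits [high:low] from a 32-bit register value, matching the
 * manuals' "high:low" field notation (e.g. GFW_BOOT_PROGRESS is 7:0). */
static uint32_t field_get(uint32_t reg, unsigned high, unsigned low)
{
    uint32_t width = high - low + 1u;
    uint32_t mask = (width >= 32u) ? 0xFFFFFFFFu : ((1u << width) - 1u);
    return (reg >> low) & mask;
}
```

With this, a scratch value of `0x000000FF` reads back `0xFF` in field `7:0` (boot progress completed), and `0x04000000` reads back `1` in field `26:26` (stage-3 handoff done).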

View File

@ -84,6 +84,7 @@
#define MAX_BITS_PER_PIXEL 32
// Max HBlank pixel count
#define MAX_HBLANK_PIXELS 7680
#define MHZ_TO_HZ 1000000
/* ------------------------ Datatypes -------------------------------------- */
@ -1570,12 +1571,26 @@ _validateInput
return NVT_STATUS_INVALID_PARAMETER;
}
if ((pDscInfo->branchCaps.overallThroughputMode0 != 0) &&
(pModesetInfo->pixelClockHz > pDscInfo->branchCaps.overallThroughputMode0 * MHZ_TO_HZ))
{
DSC_Print("ERROR - Pixel clock cannot be greater than Branch DSC Overall Throughput Mode 0");
return NVT_STATUS_INVALID_PARAMETER;
}
if (pModesetInfo->activeWidth == 0)
{
DSC_Print("ERROR - Invalid active width for mode.");
return NVT_STATUS_INVALID_PARAMETER;
}
if (pDscInfo->branchCaps.maxLineBufferWidth != 0 &&
pModesetInfo->activeWidth > pDscInfo->branchCaps.maxLineBufferWidth)
{
DSC_Print("ERROR - Active width cannot be greater than DSC Decompressor max line buffer width");
return NVT_STATUS_INVALID_PARAMETER;
}
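The throughput check above guards a unit conversion: the branch device reports its cap in MHz while the mode's pixel clock is in Hz, and a zero cap means the field was not reported. A standalone sketch of that comparison, with hypothetical names:

```c
#include <assert.h>
#include <stdint.h>

#define MHZ_TO_HZ 1000000ULL

/* Mirror of the branch-throughput validation: a zero cap means the
 * branch device did not report the field, so the check is skipped.
 * Returns 0 when the mode passes, -1 when it exceeds the cap. */
static int check_branch_throughput(uint64_t pixel_clock_hz,
                                   uint32_t throughput_mode0_mhz)
{
    if (throughput_mode0_mhz != 0 &&
        pixel_clock_hz > (uint64_t)throughput_mode0_mhz * MHZ_TO_HZ)
        return -1;
    return 0;
}
```

A 600 MHz cap admits a 600,000,000 Hz pixel clock exactly and rejects anything above it.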
if (pModesetInfo->activeHeight == 0)
{
DSC_Print("ERROR - Invalid active height for mode.");
@ -1919,7 +1934,13 @@ DSC_GeneratePPS
if (*pBitsPerPixelX16 != 0)
{
*pBitsPerPixelX16 = DSC_AlignDownForBppPrecision(*pBitsPerPixelX16, pDscInfo->sinkCaps.bitsPerPixelPrecision);
if (*pBitsPerPixelX16 > in->bits_per_pixel)
// in->bits_per_pixel, as calculated here in the PPS lib, is the maximum bpp
// allowed by the available bandwidth; that limit applies to DP alone, not to HDMI FRL.
// Before calling the PPS lib to generate PPS data, the HDMI library has already
// determined, per the HDMI 2.1 spec, whether the FRL rate is sufficient for the
// requested bpp, so this condition is restricted to DP alone.
if ((pWARData && (pWARData->connectorType == DSC_DP)) &&
(*pBitsPerPixelX16 > in->bits_per_pixel))
{
DSC_Print("ERROR - Invalid bits per pixel value specified.");
ret = NVT_STATUS_INVALID_PARAMETER;

View File

@ -196,6 +196,13 @@ typedef struct
NvU32 maxBitsPerPixelX16;
}sinkCaps;
struct BRANCH_DSC_CAPS
{
NvU32 overallThroughputMode0;
NvU32 overallThroughputMode1;
NvU32 maxLineBufferWidth;
}branchCaps;
struct GPU_DSC_CAPS
{
// Mask of all color formats for which encoding supported by GPU

View File

@ -2410,7 +2410,7 @@ NvU32 NvTiming_CalculateCommonEDIDCRC32(NvU8* pEDIDBuffer, NvU32 edidVersion)
CommonEDIDBuffer[0x7F] = 0;
CommonEDIDBuffer[0xFF] = 0;
// We also need to zero out any "EDID Other Monitor Descriptors" (http://en.wikipedia.org/wiki/Extended_display_identification_data)
// We also need to zero out any "EDID Other Monitor Descriptors" (https://en.wikipedia.org/wiki/Extended_display_identification_data)
for (edidBufferIndex = 54; edidBufferIndex <= 108; edidBufferIndex += 18)
{
if (CommonEDIDBuffer[edidBufferIndex] == 0 && CommonEDIDBuffer[edidBufferIndex+1] == 0)

View File

@ -64,7 +64,7 @@
//
// AT24CM02 EEPROM
// http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-8828-SEEPROM-AT24CM02-Datasheet.pdf
// https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-8828-SEEPROM-AT24CM02-Datasheet.pdf
//
#define AT24CM02_INDEX_SIZE 18 // Addressing bits
@ -72,7 +72,7 @@
//
// AT24C02C EEPROM
// http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-8700-SEEPROM-AT24C01C-02C-Datasheet.pdf
// https://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-8700-SEEPROM-AT24C01C-02C-Datasheet.pdf
//
#define AT24C02C_INDEX_SIZE 8 // Addressing bits
@ -80,7 +80,7 @@
//
// AT24C02D EEPROM
// http://ww1.microchip.com/downloads/en/devicedoc/atmel-8871f-seeprom-at24c01d-02d-datasheet.pdf
// https://ww1.microchip.com/downloads/en/devicedoc/atmel-8871f-seeprom-at24c01d-02d-datasheet.pdf
// 2kb EEPROM used on LR10 P4790 B00 platform
//

View File

@ -2208,4 +2208,19 @@ typedef struct NV2080_CTRL_INTERNAL_GET_PCIE_P2P_CAPS_PARAMS {
NvU8 p2pWriteCapsStatus;
} NV2080_CTRL_INTERNAL_GET_PCIE_P2P_CAPS_PARAMS;
/*!
* NV2080_CTRL_CMD_INTERNAL_BIF_SET_PCIE_RO
*
* Enable/disable PCIe Relaxed Ordering.
*
*/
#define NV2080_CTRL_CMD_INTERNAL_BIF_SET_PCIE_RO (0x20800ab9) /* finn: Evaluated from "(FINN_NV20_SUBDEVICE_0_INTERNAL_INTERFACE_ID << 8) | NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS_MESSAGE_ID" */
#define NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS_MESSAGE_ID (0xb9U)
typedef struct NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS {
// Enable/disable PCIe relaxed ordering
NvBool enableRo;
} NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS;
/* ctrl2080internal_h */

View File

@ -99,7 +99,11 @@
#define GSP_RPC_TIMEOUT (119)
#define GSP_ERROR (120)
#define C2C_ERROR (121)
#define ROBUST_CHANNEL_LAST_ERROR (C2C_ERROR)
#define SPI_PMU_RPC_READ_FAIL (122)
#define SPI_PMU_RPC_WRITE_FAIL (123)
#define SPI_PMU_RPC_ERASE_FAIL (124)
#define INFOROM_FS_ERROR (125)
#define ROBUST_CHANNEL_LAST_ERROR (INFOROM_FS_ERROR)
// Indexed CE reference

View File

@ -306,7 +306,7 @@ typedef NvUFXP64 NvUFXP52_12;
* 2^(_EXPONENT - _EXPONENT_BIAS) *
* (1 + _MANTISSA / (1 << 23))
*/
// [1] : http://en.wikipedia.org/wiki/Single_precision_floating-point_format
// [1] : https://en.wikipedia.org/wiki/Single_precision_floating-point_format
#define NV_TYPES_SINGLE_SIGN 31:31
#define NV_TYPES_SINGLE_SIGN_POSITIVE 0x00000000
#define NV_TYPES_SINGLE_SIGN_NEGATIVE 0x00000001
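The formula in the comment can be checked directly by packing the three fields — SIGN (31:31), EXPONENT (30:23), MANTISSA (22:0) — back into a raw word; a small sketch using union-based type punning (valid in C):

```c
#include <assert.h>
#include <stdint.h>

/* Assemble an IEEE-754 single from the SIGN (31:31), EXPONENT (30:23)
 * and MANTISSA (22:0) fields described above, i.e.
 * (-1)^sign * 2^(exponent - 127) * (1 + mantissa / (1 << 23)). */
static float single_from_fields(uint32_t sign, uint32_t exponent,
                                uint32_t mantissa)
{
    union { uint32_t u; float f; } v;
    v.u = (sign << 31) | ((exponent & 0xFFu) << 23) | (mantissa & 0x7FFFFFu);
    return v.f;
}
```

For instance, sign 0 with the bias exponent 127 and zero mantissa yields 1.0, and sign 1 with exponent 128 yields -2.0.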

View File

@ -54,7 +54,7 @@ typedef union {
/*
* Unused. For alignment purposes only. Guarantee alignment to
* twice pointer size. That is the alignment guaranteed by glibc:
* http://www.gnu.org/software/libc/manual/html_node/Aligned-Memory-Blocks.html
* https://www.gnu.org/software/libc/manual/html_node/Aligned-Memory-Blocks.html
* which seems reasonable to match here.
*/
NvU8 align __attribute__((aligned(sizeof(void*) * 2)));
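The same twice-pointer-size guarantee can be expressed and checked in a few lines; a sketch using C11 `_Alignof` and the GCC-style attribute from the original (glibc's malloc alignment is assumed, as the comment states):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* A union padded, like the original, to twice the pointer size --
 * the alignment glibc guarantees for malloc results. */
typedef union {
    uint8_t payload;
    uint8_t align __attribute__((aligned(sizeof(void *) * 2)));
} twice_ptr_aligned;
```

On a glibc system, any `malloc` result is suitably aligned for this type.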

View File

@ -56,7 +56,7 @@ static void libosDwarfBuildTables(libosDebugResolver *pThis);
static void dwarfBuildARangeTable(libosDebugResolver *pThis);
static void dwarfSetARangeTableLineUnit(libosDebugResolver *pThis, DwarfStream unit, NvU64 address);
// http://www.dwarfstd.org/doc/dwarf-2.0.0.pdf
// https://www.dwarfstd.org/doc/dwarf-2.0.0.pdf
// Debug Line information related structures
// (for branch history and call stacks)

View File

@ -85,6 +85,7 @@ endif
ifeq ($(TARGET_ARCH),aarch64)
CFLAGS += -mgeneral-regs-only
CFLAGS += -march=armv8-a
CONDITIONAL_CFLAGS += $(call TEST_CC_ARG, -mno-outline-atomics)
endif
CFLAGS += -fno-pic

View File

@ -1492,7 +1492,7 @@ static void LogEdidCea861Info(NVEvoInfoStringPtr pInfoString,
/*
* IEEE vendor registration IDs are tracked here:
* http://standards.ieee.org/develop/regauth/oui/oui.txt
* https://standards.ieee.org/develop/regauth/oui/oui.txt
*/
for (vsdbIndex = 0; vsdbIndex < pExt861->total_vsdb; vsdbIndex++) {
const NvU32 ieeeId = pExt861->vsdb[vsdbIndex].ieee_id;

View File

@ -67,6 +67,7 @@ ifeq ($(TARGET_ARCH),aarch64)
CFLAGS += -mgeneral-regs-only
CFLAGS += -march=armv8-a
CFLAGS += -mstrict-align
CONDITIONAL_CFLAGS += $(call TEST_CC_ARG, -mno-outline-atomics)
endif
CFLAGS += -fno-pic

View File

@ -276,4 +276,11 @@ typedef NvU8 FLCN_STATUS;
// Lane Margining errors
#define FLCN_ERR_LM_INVALID_RECEIVER_NUMBER (0xF5U)
// APM errors
#define FLCN_ERR_APM_NOT_FUSED_FOR_EK (0xF6U)
#define FLCN_ERR_APM_BROM_SIGN_FAIL (0xF7U)
// Booter Reload on SEC2-RTOS errors
#define FLCN_ERR_AUTH_GSP_RM_HANDOFF_FAILED (0xF8U)
#define FLCN_ERR_INVALID_WPRMETA_MAGIC_OR_REVISION (0xF9U)
#endif // FLCNRETVAL_H

View File

@ -301,23 +301,6 @@
#define __NV_ENABLE_MSI EnableMSI
#define NV_REG_ENABLE_MSI NV_REG_STRING(__NV_ENABLE_MSI)
/*
* Option: RegisterForACPIEvents
*
* Description:
*
* When this option is enabled, the NVIDIA driver will register with the
* ACPI subsystem to receive notification of ACPI events.
*
* Possible values:
*
* 1 - register for ACPI events (default)
* 0 - do not register for ACPI events
*/
#define __NV_REGISTER_FOR_ACPI_EVENTS RegisterForACPIEvents
#define NV_REG_REGISTER_FOR_ACPI_EVENTS NV_REG_STRING(__NV_REGISTER_FOR_ACPI_EVENTS)
/*
* Option: EnablePCIeGen3
*
@ -817,6 +800,30 @@
#define NV_REG_OPENRM_ENABLE_UNSUPPORTED_GPUS_ENABLE 0x00000001
#define NV_REG_OPENRM_ENABLE_UNSUPPORTED_GPUS_DEFAULT NV_REG_OPENRM_ENABLE_UNSUPPORTED_GPUS_DISABLE
/*
* Option: NVreg_DmaRemapPeerMmio
*
* Description:
*
* When this option is enabled, the NVIDIA driver will use device driver
* APIs provided by the Linux kernel for DMA-remapping part of a device's
* MMIO region to another device, creating, e.g., IOMMU mappings as necessary.
* When this option is disabled, the NVIDIA driver will instead only apply a
* fixed offset, which may be zero, to CPU physical addresses to produce the
* DMA address for the peer's MMIO region, and no IOMMU mappings will be
* created.
*
* This option only affects peer MMIO DMA mappings, and not system memory
* mappings.
*
* Possible Values:
* 0 = disable dynamic DMA remapping of peer MMIO regions
* 1 = enable dynamic DMA remapping of peer MMIO regions (default)
*/
#define __NV_DMA_REMAP_PEER_MMIO DmaRemapPeerMmio
#define NV_DMA_REMAP_PEER_MMIO NV_REG_STRING(__NV_DMA_REMAP_PEER_MMIO)
#define NV_DMA_REMAP_PEER_MMIO_DISABLE 0x00000000
#define NV_DMA_REMAP_PEER_MMIO_ENABLE 0x00000001
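Registry options in this table surface as kernel module parameters named `NVreg_<Option>`; assuming that convention holds for the new key, disabling it at load time would look like (a hedged usage sketch, not to be run verbatim):

```shell
# Disable dynamic DMA remapping of peer MMIO regions for this boot:
modprobe nvidia NVreg_DmaRemapPeerMmio=0

# Or persistently, via modprobe configuration:
echo "options nvidia NVreg_DmaRemapPeerMmio=0" | \
    sudo tee /etc/modprobe.d/nvidia-peer-mmio.conf
```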
#if defined(NV_DEFINE_REGISTRY_KEY_TABLE)
@ -832,7 +839,6 @@ NV_DEFINE_REG_ENTRY(__NV_DEVICE_FILE_GID, 0);
NV_DEFINE_REG_ENTRY(__NV_DEVICE_FILE_MODE, 0666);
NV_DEFINE_REG_ENTRY(__NV_INITIALIZE_SYSTEM_MEMORY_ALLOCATIONS, 1);
NV_DEFINE_REG_ENTRY(__NV_USE_PAGE_ATTRIBUTE_TABLE, ~0);
NV_DEFINE_REG_ENTRY(__NV_REGISTER_FOR_ACPI_EVENTS, 1);
NV_DEFINE_REG_ENTRY(__NV_ENABLE_PCIE_GEN3, 0);
NV_DEFINE_REG_ENTRY(__NV_ENABLE_MSI, 1);
NV_DEFINE_REG_ENTRY(__NV_TCE_BYPASS_MODE, NV_TCE_BYPASS_MODE_DEFAULT);
@ -863,6 +869,7 @@ NV_DEFINE_REG_STRING_ENTRY(__NV_RM_MSG, NULL);
NV_DEFINE_REG_STRING_ENTRY(__NV_GPU_BLACKLIST, NULL);
NV_DEFINE_REG_STRING_ENTRY(__NV_TEMPORARY_FILE_PATH, NULL);
NV_DEFINE_REG_STRING_ENTRY(__NV_EXCLUDED_GPUS, NULL);
NV_DEFINE_REG_ENTRY(__NV_DMA_REMAP_PEER_MMIO, NV_DMA_REMAP_PEER_MMIO_ENABLE);
/*
*----------------registry database definition----------------------
@ -885,7 +892,6 @@ nv_parm_t nv_parms[] = {
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_INITIALIZE_SYSTEM_MEMORY_ALLOCATIONS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_USE_PAGE_ATTRIBUTE_TABLE),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_MSI),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_REGISTER_FOR_ACPI_EVENTS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_PCIE_GEN3),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_MEMORY_POOL_SIZE),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_KMALLOC_HEAP_MAX_SIZE),
@ -908,6 +914,7 @@ nv_parm_t nv_parms[] = {
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_GPU_FIRMWARE_LOGS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_ENABLE_DBG_BREAKPOINT),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_OPENRM_ENABLE_UNSUPPORTED_GPUS),
NV_DEFINE_PARAMS_TABLE_ENTRY(__NV_DMA_REMAP_PEER_MMIO),
{NULL, NULL}
};

View File

@ -571,11 +571,9 @@ typedef enum
((nv)->iso_iommu_present)
/*
* NVIDIA ACPI event IDs to be passed into the core NVIDIA
* driver for various events like display switch events,
* AC/battery events, etc..
* NVIDIA ACPI event ID to be passed into the core NVIDIA driver for
* AC/DC event.
*/
#define NV_SYSTEM_ACPI_DISPLAY_SWITCH_EVENT 0x8001
#define NV_SYSTEM_ACPI_BATTERY_POWER_EVENT 0x8002
/*
@ -584,14 +582,6 @@ typedef enum
#define NV_SYSTEM_GPU_ADD_EVENT 0x9001
#define NV_SYSTEM_GPU_REMOVE_EVENT 0x9002
/*
* Status bit definitions for display switch hotkey events.
*/
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_LCD 0x01
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_CRT 0x02
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_TV 0x04
#define NV_HOTKEY_STATUS_DISPLAY_ENABLE_DFP 0x08
/*
* NVIDIA ACPI sub-event IDs (event types) to be passed into
* to core NVIDIA driver for ACPI events.

View File

@ -1112,10 +1112,6 @@ NV_STATUS RmSystemEvent(
switch (event_type)
{
case NV_SYSTEM_ACPI_DISPLAY_SWITCH_EVENT:
// Legacy kepler case, do nothing.
break;
case NV_SYSTEM_ACPI_BATTERY_POWER_EVENT:
{
Nv2080PowerEventNotification powerParams;
@ -3052,6 +3048,38 @@ NV_STATUS rm_update_device_mapping_info(
return RmStatus;
}
static void rm_is_device_rm_firmware_capable(
nv_state_t *pNv,
NvU32 pmcBoot42,
NvBool *pbIsFirmwareCapable,
NvBool *pbEnableByDefault
)
{
NvBool bIsFirmwareCapable = NV_FALSE;
NvBool bEnableByDefault = NV_FALSE;
NvU16 pciDeviceId = pNv->pci_info.device_id;
if (NV_IS_SOC_DISPLAY_DEVICE(pNv))
{
bIsFirmwareCapable = NV_TRUE;
}
else
{
bIsFirmwareCapable = gpumgrIsDeviceRmFirmwareCapable(pciDeviceId,
pmcBoot42,
&bEnableByDefault);
}
if (pbIsFirmwareCapable != NULL)
{
*pbIsFirmwareCapable = bIsFirmwareCapable;
}
if (pbEnableByDefault != NULL)
{
*pbEnableByDefault = bEnableByDefault;
}
}
static NvBool NV_API_CALL rm_is_legacy_device(
NvU16 device_id,
NvU16 subsystem_vendor,
@ -3138,7 +3166,31 @@ NV_STATUS NV_API_CALL rm_is_supported_device(
rmStatus = halmgrGetHalForGpu(pHalMgr, pmc_boot_0, pmc_boot_42, &myHalPublicID);
if (rmStatus != NV_OK)
{
NvBool bIsFirmwareCapable;
rm_is_device_rm_firmware_capable(pNv,
pmc_boot_42,
&bIsFirmwareCapable,
NULL);
if (!bIsFirmwareCapable)
{
nv_printf(NV_DBG_ERRORS,
"NVRM: The NVIDIA GPU %04x:%02x:%02x.%x (PCI ID: %04x:%04x)\n"
"NVRM: installed in this system is not supported by open\n"
"NVRM: nvidia.ko because it does not include the required GPU\n"
"NVRM: System Processor (GSP).\n"
"NVRM: Please see the 'Open Linux Kernel Modules' and 'GSP\n"
"NVRM: Firmware' sections in the driver README, available on\n"
"NVRM: the Linux graphics driver download page at\n"
"NVRM: www.nvidia.com.\n",
pNv->pci_info.domain, pNv->pci_info.bus, pNv->pci_info.slot,
pNv->pci_info.function, pNv->pci_info.vendor_id,
pNv->pci_info.device_id, NV_VERSION_STRING);
goto threadfree;
}
goto print_unsupported;
}
goto threadfree;

View File

@ -58,10 +58,6 @@ struct BINDATA_STORAGE_PVT_ALL
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_GA100.c"
@ -84,10 +80,6 @@ struct BINDATA_STORAGE_PVT_ALL
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_GA100.c"
@ -108,10 +100,6 @@ BINDATA_CONST struct BINDATA_STORAGE_PVT_ALL g_bindata_pvt =
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_GA100.c"
@ -135,10 +123,6 @@ const NvU32 g_bindata_pvt_count = sizeof(g_bindata_pvt) / sizeof(BINDATA_STORAGE
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterLoadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA100.c"
#include "g_bindata_kgspGetBinArchiveBooterReloadUcode_GA102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU102.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_TU116.c"
#include "g_bindata_kgspGetBinArchiveBooterUnloadUcode_GA100.c"

View File

@ -233,6 +233,18 @@ typedef struct GPUMGR_SAVE_MIG_INSTANCE_TOPOLOGY
GPUMGR_SAVE_GPU_INSTANCE saveGI[GPUMGR_MAX_GPU_INSTANCES];
} GPUMGR_SAVE_MIG_INSTANCE_TOPOLOGY;
#include "containers/list.h"
typedef struct PCIEP2PCAPSINFO
{
NvU32 gpuId[GPUMGR_MAX_GPU_INSTANCES]; // Group of GPUs
NvU32 gpuCount; // GPU count in gpuId[]
NvU8 p2pWriteCapsStatus; // PCIE P2P CAPS status for this group of GPUs
NvU8 p2pReadCapsStatus;
ListNode node; // For intrusive lists
} PCIEP2PCAPSINFO;
MAKE_INTRUSIVE_LIST(pcieP2PCapsInfoList, PCIEP2PCAPSINFO, node);
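The `ListNode node` member makes the cache list intrusive: the node is embedded in the entry, and pointer arithmetic recovers the entry from the node. A minimal illustration of that pattern in plain C — the driver's `containers/list.h` macros are assumed to do the equivalent:

```c
#include <assert.h>
#include <stddef.h>

struct list_node { struct list_node *next; };

/* Payload with an embedded node, as PCIEP2PCAPSINFO embeds ListNode. */
struct caps_entry {
    unsigned gpu_count;
    struct list_node node;
};

/* Recover the containing struct from a pointer to its embedded node. */
#define CONTAINER_OF(ptr, type, member) \
    ((type *)((char *)(ptr) - offsetof(type, member)))
```

The benefit is that list membership costs no separate allocation: linking an entry is just wiring up its embedded node.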
#ifdef NVOC_GPU_MGR_H_PRIVATE_ACCESS_ALLOWED
#define PRIVATE_FIELD(x) x
#else
@ -260,6 +272,8 @@ struct OBJGPUMGR {
GPUMGR_SAVE_MIG_INSTANCE_TOPOLOGY MIGTopologyInfo[32];
GPU_HANDLE_ID gpuHandleIDList[32];
NvU32 numGpuHandles;
pcieP2PCapsInfoList pcieP2PCapsInfoCache;
void *pcieP2PCapsInfoLock;
};
#ifndef __NVOC_CLASS_OBJGPUMGR_TYPEDEF__
@ -290,6 +304,31 @@ NV_STATUS __nvoc_objCreate_OBJGPUMGR(OBJGPUMGR**, Dynamic*, NvU32);
#define __objCreate_OBJGPUMGR(ppNewObj, pParent, createFlags) \
__nvoc_objCreate_OBJGPUMGR((ppNewObj), staticCast((pParent), Dynamic), (createFlags))
NV_STATUS gpumgrInitPcieP2PCapsCache_IMPL(struct OBJGPUMGR *pGpuMgr);
#define gpumgrInitPcieP2PCapsCache(pGpuMgr) gpumgrInitPcieP2PCapsCache_IMPL(pGpuMgr)
#define gpumgrInitPcieP2PCapsCache_HAL(pGpuMgr) gpumgrInitPcieP2PCapsCache(pGpuMgr)
void gpumgrDestroyPcieP2PCapsCache_IMPL(struct OBJGPUMGR *pGpuMgr);
#define gpumgrDestroyPcieP2PCapsCache(pGpuMgr) gpumgrDestroyPcieP2PCapsCache_IMPL(pGpuMgr)
#define gpumgrDestroyPcieP2PCapsCache_HAL(pGpuMgr) gpumgrDestroyPcieP2PCapsCache(pGpuMgr)
NV_STATUS gpumgrStorePcieP2PCapsCache_IMPL(NvU32 gpuMask, NvU8 p2pWriteCapStatus, NvU8 p2pReadCapStatus);
#define gpumgrStorePcieP2PCapsCache(gpuMask, p2pWriteCapStatus, p2pReadCapStatus) gpumgrStorePcieP2PCapsCache_IMPL(gpuMask, p2pWriteCapStatus, p2pReadCapStatus)
#define gpumgrStorePcieP2PCapsCache_HAL(gpuMask, p2pWriteCapStatus, p2pReadCapStatus) gpumgrStorePcieP2PCapsCache(gpuMask, p2pWriteCapStatus, p2pReadCapStatus)
void gpumgrRemovePcieP2PCapsFromCache_IMPL(NvU32 gpuId);
#define gpumgrRemovePcieP2PCapsFromCache(gpuId) gpumgrRemovePcieP2PCapsFromCache_IMPL(gpuId)
#define gpumgrRemovePcieP2PCapsFromCache_HAL(gpuId) gpumgrRemovePcieP2PCapsFromCache(gpuId)
NvBool gpumgrGetPcieP2PCapsFromCache_IMPL(NvU32 gpuMask, NvU8 *pP2PWriteCapStatus, NvU8 *pP2PReadCapStatus);
#define gpumgrGetPcieP2PCapsFromCache(gpuMask, pP2PWriteCapStatus, pP2PReadCapStatus) gpumgrGetPcieP2PCapsFromCache_IMPL(gpuMask, pP2PWriteCapStatus, pP2PReadCapStatus)
#define gpumgrGetPcieP2PCapsFromCache_HAL(gpuMask, pP2PWriteCapStatus, pP2PReadCapStatus) gpumgrGetPcieP2PCapsFromCache(gpuMask, pP2PWriteCapStatus, pP2PReadCapStatus)
NV_STATUS gpumgrConstruct_IMPL(struct OBJGPUMGR *arg_);
#define __nvoc_gpumgrConstruct(arg_) gpumgrConstruct_IMPL(arg_)
void gpumgrDestruct_IMPL(struct OBJGPUMGR *arg0);

View File

@ -508,6 +508,22 @@ static void __nvoc_init_funcTable_OBJGPU_1(OBJGPU *pThis) {
{
}
// Hal function -- gpuFuseSupportsDisplay
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x000003e0UL) )) /* ChipHal: TU102 | TU104 | TU106 | TU116 | TU117 */
{
pThis->__gpuFuseSupportsDisplay__ = &gpuFuseSupportsDisplay_GM107;
}
else if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x0000fc00UL) )) /* ChipHal: GA100 | GA102 | GA103 | GA104 | GA106 | GA107 */
{
pThis->__gpuFuseSupportsDisplay__ = &gpuFuseSupportsDisplay_GA100;
}
else if (0)
{
}
else if (0)
{
}
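The generated conditions above encode chip-group membership as a word index (the variant index's bits above 5) plus a 32-bit mask with one bit per variant inside that word; a sketch of the test, with hypothetical variant numbers:

```c
#include <assert.h>

/* Word-and-bit membership test as used by the generated HAL dispatch:
 * the high bits of the variant index pick a 32-entry word, the low
 * five bits pick a bit inside that word's mask. */
static int hal_in_group(unsigned var_idx, unsigned word, unsigned mask)
{
    return ((var_idx >> 5) == word) &&
           (((1u << (var_idx & 0x1fu)) & mask) != 0u);
}
```

With word 1 and mask `0x3e0`, variant indices 37 through 41 are members — matching the five-chip Turing mask in the first branch above.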
// Hal function -- gpuClearFbhubPoisonIntrForBug2924523
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000002UL) )) /* RmVariantHal: PF_KERNEL_ONLY */
{

View File

@ -829,6 +829,7 @@ struct OBJGPU {
struct OBJGPU *__nvoc_pbase_OBJGPU;
const GPUCHILDPRESENT *(*__gpuGetChildrenPresent__)(struct OBJGPU *, NvU32 *);
const CLASSDESCRIPTOR *(*__gpuGetClassDescriptorList__)(struct OBJGPU *, NvU32 *);
NvBool (*__gpuFuseSupportsDisplay__)(struct OBJGPU *);
NV_STATUS (*__gpuClearFbhubPoisonIntrForBug2924523__)(struct OBJGPU *);
NV_STATUS (*__gpuConstructDeviceInfoTable__)(struct OBJGPU *);
NvU64 (*__gpuGetFlaVasSize__)(struct OBJGPU *, NvBool);
@ -1019,6 +1020,9 @@ struct OBJGPU {
struct Subdevice *pCachedSubdevice;
struct RsClient *pCachedRsClient;
RM_API physicalRmApi;
struct Subdevice **pSubdeviceBackReferences;
NvU32 numSubdeviceBackReferences;
NvU32 maxSubdeviceBackReferences;
NV2080_CTRL_INTERNAL_GPU_GET_CHIP_INFO_PARAMS *pChipInfo;
NV2080_CTRL_GPU_GET_OEM_BOARD_INFO_PARAMS *boardInfo;
NvBool bBar2MovedByVtd;
@ -1049,6 +1053,7 @@ struct OBJGPU {
NvU32 instLocOverrides4;
NvBool bInstLoc47bitPaWar;
NvU32 instVprOverrides;
NvBool bdisableTconOd;
NvU32 optimizeUseCaseOverride;
NvS16 fecsCtxswLogConsumerCount;
NvS16 videoCtxswLogConsumerCount;
@ -1245,6 +1250,8 @@ NV_STATUS __nvoc_objCreate_OBJGPU(OBJGPU**, Dynamic*, NvU32, NvU32, NvU32, NvU32
#define gpuGetChildrenPresent_HAL(pGpu, pNumEntries) gpuGetChildrenPresent_DISPATCH(pGpu, pNumEntries)
#define gpuGetClassDescriptorList(pGpu, arg0) gpuGetClassDescriptorList_DISPATCH(pGpu, arg0)
#define gpuGetClassDescriptorList_HAL(pGpu, arg0) gpuGetClassDescriptorList_DISPATCH(pGpu, arg0)
#define gpuFuseSupportsDisplay(pGpu) gpuFuseSupportsDisplay_DISPATCH(pGpu)
#define gpuFuseSupportsDisplay_HAL(pGpu) gpuFuseSupportsDisplay_DISPATCH(pGpu)
#define gpuClearFbhubPoisonIntrForBug2924523(pGpu) gpuClearFbhubPoisonIntrForBug2924523_DISPATCH(pGpu)
#define gpuClearFbhubPoisonIntrForBug2924523_HAL(pGpu) gpuClearFbhubPoisonIntrForBug2924523_DISPATCH(pGpu)
#define gpuConstructDeviceInfoTable(pGpu) gpuConstructDeviceInfoTable_DISPATCH(pGpu)
@ -2255,6 +2262,18 @@ static inline const CLASSDESCRIPTOR *gpuGetClassDescriptorList_DISPATCH(struct O
return pGpu->__gpuGetClassDescriptorList__(pGpu, arg0);
}
NvBool gpuFuseSupportsDisplay_GM107(struct OBJGPU *pGpu);
NvBool gpuFuseSupportsDisplay_GA100(struct OBJGPU *pGpu);
static inline NvBool gpuFuseSupportsDisplay_491d52(struct OBJGPU *pGpu) {
return ((NvBool)(0 != 0));
}
static inline NvBool gpuFuseSupportsDisplay_DISPATCH(struct OBJGPU *pGpu) {
return pGpu->__gpuFuseSupportsDisplay__(pGpu);
}
NV_STATUS gpuClearFbhubPoisonIntrForBug2924523_GA100_KERNEL(struct OBJGPU *pGpu);
static inline NV_STATUS gpuClearFbhubPoisonIntrForBug2924523_56cd7a(struct OBJGPU *pGpu) {
@ -3198,6 +3217,25 @@ static inline void gpuNotifySubDeviceEvent(struct OBJGPU *pGpu, NvU32 notifyInde
#define gpuNotifySubDeviceEvent(pGpu, notifyIndex, pNotifyParams, notifyParamsSize, info32, info16) gpuNotifySubDeviceEvent_IMPL(pGpu, notifyIndex, pNotifyParams, notifyParamsSize, info32, info16)
#endif //__nvoc_gpu_h_disabled
NV_STATUS gpuRegisterSubdevice_IMPL(struct OBJGPU *pGpu, struct Subdevice *pSubdevice);
#ifdef __nvoc_gpu_h_disabled
static inline NV_STATUS gpuRegisterSubdevice(struct OBJGPU *pGpu, struct Subdevice *pSubdevice) {
NV_ASSERT_FAILED_PRECOMP("OBJGPU was disabled!");
return NV_ERR_NOT_SUPPORTED;
}
#else //__nvoc_gpu_h_disabled
#define gpuRegisterSubdevice(pGpu, pSubdevice) gpuRegisterSubdevice_IMPL(pGpu, pSubdevice)
#endif //__nvoc_gpu_h_disabled
void gpuUnregisterSubdevice_IMPL(struct OBJGPU *pGpu, struct Subdevice *pSubdevice);
#ifdef __nvoc_gpu_h_disabled
static inline void gpuUnregisterSubdevice(struct OBJGPU *pGpu, struct Subdevice *pSubdevice) {
NV_ASSERT_FAILED_PRECOMP("OBJGPU was disabled!");
}
#else //__nvoc_gpu_h_disabled
#define gpuUnregisterSubdevice(pGpu, pSubdevice) gpuUnregisterSubdevice_IMPL(pGpu, pSubdevice)
#endif //__nvoc_gpu_h_disabled
NV_STATUS gpuGetProcWithObject_IMPL(struct OBJGPU *pGpu, NvU32 elementID, NvU32 internalClassId, NvU32 *pPidArray, NvU32 *pPidArrayCount, MIG_INSTANCE_REF *pRef);
#ifdef __nvoc_gpu_h_disabled
static inline NV_STATUS gpuGetProcWithObject(struct OBJGPU *pGpu, NvU32 elementID, NvU32 internalClassId, NvU32 *pPidArray, NvU32 *pPidArrayCount, MIG_INSTANCE_REF *pRef) {

View File

@ -78,6 +78,10 @@ static NV_STATUS __nvoc_thunk_KernelBif_engstateStateLoad(struct OBJGPU *pGpu, s
return kbifStateLoad(pGpu, (struct KernelBif *)(((unsigned char *)pKernelBif) - __nvoc_rtti_KernelBif_OBJENGSTATE.offset), arg0);
}
static NV_STATUS __nvoc_thunk_KernelBif_engstateStatePostLoad(struct OBJGPU *pGpu, struct OBJENGSTATE *pKernelBif, NvU32 arg0) {
return kbifStatePostLoad(pGpu, (struct KernelBif *)(((unsigned char *)pKernelBif) - __nvoc_rtti_KernelBif_OBJENGSTATE.offset), arg0);
}
static NV_STATUS __nvoc_thunk_KernelBif_engstateStateUnload(struct OBJGPU *pGpu, struct OBJENGSTATE *pKernelBif, NvU32 arg0) {
return kbifStateUnload(pGpu, (struct KernelBif *)(((unsigned char *)pKernelBif) - __nvoc_rtti_KernelBif_OBJENGSTATE.offset), arg0);
}
@ -130,10 +134,6 @@ static void __nvoc_thunk_OBJENGSTATE_kbifFreeTunableState(POBJGPU pGpu, struct K
engstateFreeTunableState(pGpu, (struct OBJENGSTATE *)(((unsigned char *)pEngstate) + __nvoc_rtti_KernelBif_OBJENGSTATE.offset), pTunableState);
}
static NV_STATUS __nvoc_thunk_OBJENGSTATE_kbifStatePostLoad(POBJGPU pGpu, struct KernelBif *pEngstate, NvU32 arg0) {
return engstateStatePostLoad(pGpu, (struct OBJENGSTATE *)(((unsigned char *)pEngstate) + __nvoc_rtti_KernelBif_OBJENGSTATE.offset), arg0);
}
static NV_STATUS __nvoc_thunk_OBJENGSTATE_kbifAllocTunableState(POBJGPU pGpu, struct KernelBif *pEngstate, void **ppTunableState) {
return engstateAllocTunableState(pGpu, (struct OBJENGSTATE *)(((unsigned char *)pEngstate) + __nvoc_rtti_KernelBif_OBJENGSTATE.offset), ppTunableState);
}
@ -274,6 +274,15 @@ static void __nvoc_init_funcTable_KernelBif_1(KernelBif *pThis, RmHalspecOwner *
pThis->__kbifStateLoad__ = &kbifStateLoad_IMPL;
}
// Hal function -- kbifStatePostLoad
if (0)
{
}
else if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000002UL) )) /* RmVariantHal: PF_KERNEL_ONLY */
{
pThis->__kbifStatePostLoad__ = &kbifStatePostLoad_IMPL;
}
// Hal function -- kbifStateUnload
if (0)
{
@ -296,6 +305,23 @@ static void __nvoc_init_funcTable_KernelBif_1(KernelBif *pThis, RmHalspecOwner *
pThis->__kbifIsPciIoAccessEnabled__ = &kbifIsPciIoAccessEnabled_491d52;
}
// Hal function -- kbifInitRelaxedOrderingFromEmulatedConfigSpace
if (0)
{
}
else if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000002UL) )) /* RmVariantHal: PF_KERNEL_ONLY */
{
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x0000fc00UL) )) /* ChipHal: GA100 | GA102 | GA103 | GA104 | GA106 | GA107 */
{
pThis->__kbifInitRelaxedOrderingFromEmulatedConfigSpace__ = &kbifInitRelaxedOrderingFromEmulatedConfigSpace_GA100;
}
// default
else
{
pThis->__kbifInitRelaxedOrderingFromEmulatedConfigSpace__ = &kbifInitRelaxedOrderingFromEmulatedConfigSpace_b3696a;
}
}
// Hal function -- kbifApplyWARBug3208922
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x0000fc00UL) )) /* ChipHal: GA100 | GA102 | GA103 | GA104 | GA106 | GA107 */
{
@ -312,6 +338,8 @@ static void __nvoc_init_funcTable_KernelBif_1(KernelBif *pThis, RmHalspecOwner *
pThis->__nvoc_base_OBJENGSTATE.__engstateStateLoad__ = &__nvoc_thunk_KernelBif_engstateStateLoad;
pThis->__nvoc_base_OBJENGSTATE.__engstateStatePostLoad__ = &__nvoc_thunk_KernelBif_engstateStatePostLoad;
pThis->__nvoc_base_OBJENGSTATE.__engstateStateUnload__ = &__nvoc_thunk_KernelBif_engstateStateUnload;
pThis->__kbifReconcileTunableState__ = &__nvoc_thunk_OBJENGSTATE_kbifReconcileTunableState;
@ -338,8 +366,6 @@ static void __nvoc_init_funcTable_KernelBif_1(KernelBif *pThis, RmHalspecOwner *
pThis->__kbifFreeTunableState__ = &__nvoc_thunk_OBJENGSTATE_kbifFreeTunableState;
pThis->__kbifStatePostLoad__ = &__nvoc_thunk_OBJENGSTATE_kbifStatePostLoad;
pThis->__kbifAllocTunableState__ = &__nvoc_thunk_OBJENGSTATE_kbifAllocTunableState;
pThis->__kbifSetTunableState__ = &__nvoc_thunk_OBJENGSTATE_kbifSetTunableState;

View File

@ -91,8 +91,10 @@ struct KernelBif {
NV_STATUS (*__kbifConstructEngine__)(struct OBJGPU *, struct KernelBif *, ENGDESCRIPTOR);
NV_STATUS (*__kbifStateInitLocked__)(struct OBJGPU *, struct KernelBif *);
NV_STATUS (*__kbifStateLoad__)(struct OBJGPU *, struct KernelBif *, NvU32);
NV_STATUS (*__kbifStatePostLoad__)(struct OBJGPU *, struct KernelBif *, NvU32);
NV_STATUS (*__kbifStateUnload__)(struct OBJGPU *, struct KernelBif *, NvU32);
NvBool (*__kbifIsPciIoAccessEnabled__)(struct OBJGPU *, struct KernelBif *);
void (*__kbifInitRelaxedOrderingFromEmulatedConfigSpace__)(struct OBJGPU *, struct KernelBif *);
void (*__kbifApplyWARBug3208922__)(struct OBJGPU *, struct KernelBif *);
NV_STATUS (*__kbifReconcileTunableState__)(POBJGPU, struct KernelBif *, void *);
NV_STATUS (*__kbifStatePreLoad__)(POBJGPU, struct KernelBif *, NvU32);
@ -106,7 +108,6 @@ struct KernelBif {
NV_STATUS (*__kbifGetTunableState__)(POBJGPU, struct KernelBif *, void *);
NV_STATUS (*__kbifCompareTunableState__)(POBJGPU, struct KernelBif *, void *, void *);
void (*__kbifFreeTunableState__)(POBJGPU, struct KernelBif *, void *);
NV_STATUS (*__kbifStatePostLoad__)(POBJGPU, struct KernelBif *, NvU32);
NV_STATUS (*__kbifAllocTunableState__)(POBJGPU, struct KernelBif *, void **);
NV_STATUS (*__kbifSetTunableState__)(POBJGPU, struct KernelBif *, void *);
NvBool (*__kbifIsPresent__)(POBJGPU, struct KernelBif *);
@ -123,6 +124,7 @@ struct KernelBif {
NvBool PDB_PROP_KBIF_UPSTREAM_LTR_SUPPORT_WAR_BUG_200634944;
NvBool PDB_PROP_KBIF_SUPPORT_NONCOHERENT;
NvBool PDB_PROP_KBIF_PCIE_GEN4_CAPABLE;
NvBool PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE;
NvU32 dmaCaps;
RmPhysAddr dmaWindowStartAddress;
NvU32 p2pOverride;
@ -164,6 +166,8 @@ extern const struct NVOC_CLASS_DEF __nvoc_class_def_KernelBif;
#define PDB_PROP_KBIF_USE_CONFIG_SPACE_TO_REARM_MSI_BASE_NAME PDB_PROP_KBIF_USE_CONFIG_SPACE_TO_REARM_MSI
#define PDB_PROP_KBIF_IS_MSI_ENABLED_BASE_CAST
#define PDB_PROP_KBIF_IS_MSI_ENABLED_BASE_NAME PDB_PROP_KBIF_IS_MSI_ENABLED
#define PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE_BASE_CAST
#define PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE_BASE_NAME PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE
#define PDB_PROP_KBIF_UPSTREAM_LTR_SUPPORT_WAR_BUG_200634944_BASE_CAST
#define PDB_PROP_KBIF_UPSTREAM_LTR_SUPPORT_WAR_BUG_200634944_BASE_NAME PDB_PROP_KBIF_UPSTREAM_LTR_SUPPORT_WAR_BUG_200634944
#define PDB_PROP_KBIF_IS_MSIX_CACHED_BASE_CAST
@ -191,10 +195,14 @@ NV_STATUS __nvoc_objCreate_KernelBif(KernelBif**, Dynamic*, NvU32);
#define kbifStateInitLocked(pGpu, pKernelBif) kbifStateInitLocked_DISPATCH(pGpu, pKernelBif)
#define kbifStateLoad(pGpu, pKernelBif, arg0) kbifStateLoad_DISPATCH(pGpu, pKernelBif, arg0)
#define kbifStateLoad_HAL(pGpu, pKernelBif, arg0) kbifStateLoad_DISPATCH(pGpu, pKernelBif, arg0)
#define kbifStatePostLoad(pGpu, pKernelBif, arg0) kbifStatePostLoad_DISPATCH(pGpu, pKernelBif, arg0)
#define kbifStatePostLoad_HAL(pGpu, pKernelBif, arg0) kbifStatePostLoad_DISPATCH(pGpu, pKernelBif, arg0)
#define kbifStateUnload(pGpu, pKernelBif, arg0) kbifStateUnload_DISPATCH(pGpu, pKernelBif, arg0)
#define kbifStateUnload_HAL(pGpu, pKernelBif, arg0) kbifStateUnload_DISPATCH(pGpu, pKernelBif, arg0)
#define kbifIsPciIoAccessEnabled(pGpu, pKernelBif) kbifIsPciIoAccessEnabled_DISPATCH(pGpu, pKernelBif)
#define kbifIsPciIoAccessEnabled_HAL(pGpu, pKernelBif) kbifIsPciIoAccessEnabled_DISPATCH(pGpu, pKernelBif)
#define kbifInitRelaxedOrderingFromEmulatedConfigSpace(pGpu, pBif) kbifInitRelaxedOrderingFromEmulatedConfigSpace_DISPATCH(pGpu, pBif)
#define kbifInitRelaxedOrderingFromEmulatedConfigSpace_HAL(pGpu, pBif) kbifInitRelaxedOrderingFromEmulatedConfigSpace_DISPATCH(pGpu, pBif)
#define kbifApplyWARBug3208922(pGpu, pKernelBif) kbifApplyWARBug3208922_DISPATCH(pGpu, pKernelBif)
#define kbifApplyWARBug3208922_HAL(pGpu, pKernelBif) kbifApplyWARBug3208922_DISPATCH(pGpu, pKernelBif)
#define kbifReconcileTunableState(pGpu, pEngstate, pTunableState) kbifReconcileTunableState_DISPATCH(pGpu, pEngstate, pTunableState)
@ -209,7 +217,6 @@ NV_STATUS __nvoc_objCreate_KernelBif(KernelBif**, Dynamic*, NvU32);
#define kbifGetTunableState(pGpu, pEngstate, pTunableState) kbifGetTunableState_DISPATCH(pGpu, pEngstate, pTunableState)
#define kbifCompareTunableState(pGpu, pEngstate, pTunables1, pTunables2) kbifCompareTunableState_DISPATCH(pGpu, pEngstate, pTunables1, pTunables2)
#define kbifFreeTunableState(pGpu, pEngstate, pTunableState) kbifFreeTunableState_DISPATCH(pGpu, pEngstate, pTunableState)
#define kbifStatePostLoad(pGpu, pEngstate, arg0) kbifStatePostLoad_DISPATCH(pGpu, pEngstate, arg0)
#define kbifAllocTunableState(pGpu, pEngstate, ppTunableState) kbifAllocTunableState_DISPATCH(pGpu, pEngstate, ppTunableState)
#define kbifSetTunableState(pGpu, pEngstate, pTunableState) kbifSetTunableState_DISPATCH(pGpu, pEngstate, pTunableState)
#define kbifIsPresent(pGpu, pEngstate) kbifIsPresent_DISPATCH(pGpu, pEngstate)
@ -503,6 +510,16 @@ static inline NV_STATUS kbifStateLoad_DISPATCH(struct OBJGPU *pGpu, struct Kerne
return pKernelBif->__kbifStateLoad__(pGpu, pKernelBif, arg0);
}
static inline NV_STATUS kbifStatePostLoad_56cd7a(struct OBJGPU *pGpu, struct KernelBif *pKernelBif, NvU32 arg0) {
return NV_OK;
}
NV_STATUS kbifStatePostLoad_IMPL(struct OBJGPU *pGpu, struct KernelBif *pKernelBif, NvU32 arg0);
static inline NV_STATUS kbifStatePostLoad_DISPATCH(struct OBJGPU *pGpu, struct KernelBif *pKernelBif, NvU32 arg0) {
return pKernelBif->__kbifStatePostLoad__(pGpu, pKernelBif, arg0);
}
static inline NV_STATUS kbifStateUnload_56cd7a(struct OBJGPU *pGpu, struct KernelBif *pKernelBif, NvU32 arg0) {
return NV_OK;
}
@ -523,6 +540,16 @@ static inline NvBool kbifIsPciIoAccessEnabled_DISPATCH(struct OBJGPU *pGpu, stru
return pKernelBif->__kbifIsPciIoAccessEnabled__(pGpu, pKernelBif);
}
static inline void kbifInitRelaxedOrderingFromEmulatedConfigSpace_b3696a(struct OBJGPU *pGpu, struct KernelBif *pBif) {
return;
}
void kbifInitRelaxedOrderingFromEmulatedConfigSpace_GA100(struct OBJGPU *pGpu, struct KernelBif *pBif);
static inline void kbifInitRelaxedOrderingFromEmulatedConfigSpace_DISPATCH(struct OBJGPU *pGpu, struct KernelBif *pBif) {
pBif->__kbifInitRelaxedOrderingFromEmulatedConfigSpace__(pGpu, pBif);
}
void kbifApplyWARBug3208922_GA100(struct OBJGPU *pGpu, struct KernelBif *pKernelBif);
static inline void kbifApplyWARBug3208922_b3696a(struct OBJGPU *pGpu, struct KernelBif *pKernelBif) {
@ -581,10 +608,6 @@ static inline void kbifFreeTunableState_DISPATCH(POBJGPU pGpu, struct KernelBif
pEngstate->__kbifFreeTunableState__(pGpu, pEngstate, pTunableState);
}
static inline NV_STATUS kbifStatePostLoad_DISPATCH(POBJGPU pGpu, struct KernelBif *pEngstate, NvU32 arg0) {
return pEngstate->__kbifStatePostLoad__(pGpu, pEngstate, arg0);
}
static inline NV_STATUS kbifAllocTunableState_DISPATCH(POBJGPU pGpu, struct KernelBif *pEngstate, void **ppTunableState) {
return pEngstate->__kbifAllocTunableState__(pGpu, pEngstate, ppTunableState);
}

View File

@ -511,33 +511,6 @@ static void __nvoc_init_funcTable_KernelGsp_1(KernelGsp *pThis, RmHalspecOwner *
{
}
// Hal function -- kgspGetBinArchiveBooterReloadUcode
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000002UL) )) /* RmVariantHal: PF_KERNEL_ONLY */
{
if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x000000e0UL) )) /* ChipHal: TU102 | TU104 | TU106 */
{
pThis->__kgspGetBinArchiveBooterReloadUcode__ = &kgspGetBinArchiveBooterReloadUcode_TU102;
}
else if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x00000300UL) )) /* ChipHal: TU116 | TU117 */
{
pThis->__kgspGetBinArchiveBooterReloadUcode__ = &kgspGetBinArchiveBooterReloadUcode_TU116;
}
else if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x00000400UL) )) /* ChipHal: GA100 */
{
pThis->__kgspGetBinArchiveBooterReloadUcode__ = &kgspGetBinArchiveBooterReloadUcode_GA100;
}
else if (( ((chipHal_HalVarIdx >> 5) == 1UL) && ((1UL << (chipHal_HalVarIdx & 0x1f)) & 0x0000f800UL) )) /* ChipHal: GA102 | GA103 | GA104 | GA106 | GA107 */
{
pThis->__kgspGetBinArchiveBooterReloadUcode__ = &kgspGetBinArchiveBooterReloadUcode_GA102;
}
else if (0)
{
}
}
else if (0)
{
}
// Hal function -- kgspGetBinArchiveBooterUnloadUcode
if (( ((rmVariantHal_HalVarIdx >> 5) == 0UL) && ((1UL << (rmVariantHal_HalVarIdx & 0x1f)) & 0x00000002UL) )) /* RmVariantHal: PF_KERNEL_ONLY */
{

View File

@ -255,7 +255,6 @@ struct KernelGsp {
NV_STATUS (*__kgspExecuteFwsecFrts__)(struct OBJGPU *, struct KernelGsp *, KernelGspFlcnUcode *, const NvU64);
NV_STATUS (*__kgspExecuteHsFalcon__)(struct OBJGPU *, struct KernelGsp *, KernelGspFlcnUcode *, struct KernelFalcon *, NvU32 *, NvU32 *);
const BINDATA_ARCHIVE *(*__kgspGetBinArchiveBooterLoadUcode__)(struct KernelGsp *);
const BINDATA_ARCHIVE *(*__kgspGetBinArchiveBooterReloadUcode__)(struct KernelGsp *);
const BINDATA_ARCHIVE *(*__kgspGetBinArchiveBooterUnloadUcode__)(struct KernelGsp *);
const char *(*__kgspGetSignatureSectionName__)(struct OBJGPU *, struct KernelGsp *);
void (*__kgspStateDestroy__)(POBJGPU, struct KernelGsp *);
@ -282,7 +281,6 @@ struct KernelGsp {
struct OBJRPC *pRpc;
KernelGspFlcnUcode *pFwsecUcode;
KernelGspFlcnUcode *pBooterLoadUcode;
KernelGspFlcnUcode *pBooterReloadUcode;
KernelGspFlcnUcode *pBooterUnloadUcode;
MEMORY_DESCRIPTOR *pWprMetaDescriptor;
GspFwWprMeta *pWprMeta;
@ -368,8 +366,6 @@ NV_STATUS __nvoc_objCreate_KernelGsp(KernelGsp**, Dynamic*, NvU32);
#define kgspExecuteHsFalcon_HAL(pGpu, pKernelGsp, pFlcnUcode, pKernelFlcn, pMailbox0, pMailbox1) kgspExecuteHsFalcon_DISPATCH(pGpu, pKernelGsp, pFlcnUcode, pKernelFlcn, pMailbox0, pMailbox1)
#define kgspGetBinArchiveBooterLoadUcode(pKernelGsp) kgspGetBinArchiveBooterLoadUcode_DISPATCH(pKernelGsp)
#define kgspGetBinArchiveBooterLoadUcode_HAL(pKernelGsp) kgspGetBinArchiveBooterLoadUcode_DISPATCH(pKernelGsp)
#define kgspGetBinArchiveBooterReloadUcode(pKernelGsp) kgspGetBinArchiveBooterReloadUcode_DISPATCH(pKernelGsp)
#define kgspGetBinArchiveBooterReloadUcode_HAL(pKernelGsp) kgspGetBinArchiveBooterReloadUcode_DISPATCH(pKernelGsp)
#define kgspGetBinArchiveBooterUnloadUcode(pKernelGsp) kgspGetBinArchiveBooterUnloadUcode_DISPATCH(pKernelGsp)
#define kgspGetBinArchiveBooterUnloadUcode_HAL(pKernelGsp) kgspGetBinArchiveBooterUnloadUcode_DISPATCH(pKernelGsp)
#define kgspGetSignatureSectionName(pGpu, pKernelGsp) kgspGetSignatureSectionName_DISPATCH(pGpu, pKernelGsp)
@ -511,6 +507,19 @@ static inline NV_STATUS kgspExtractVbiosFromRom(struct OBJGPU *pGpu, struct Kern
#define kgspExtractVbiosFromRom_HAL(pGpu, pKernelGsp, ppVbiosImg) kgspExtractVbiosFromRom(pGpu, pKernelGsp, ppVbiosImg)
NV_STATUS kgspExecuteFwsecSb_TU102(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, KernelGspFlcnUcode *pFwsecUcode);
#ifdef __nvoc_kernel_gsp_h_disabled
static inline NV_STATUS kgspExecuteFwsecSb(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, KernelGspFlcnUcode *pFwsecUcode) {
NV_ASSERT_FAILED_PRECOMP("KernelGsp was disabled!");
return NV_ERR_NOT_SUPPORTED;
}
#else //__nvoc_kernel_gsp_h_disabled
#define kgspExecuteFwsecSb(pGpu, pKernelGsp, pFwsecUcode) kgspExecuteFwsecSb_TU102(pGpu, pKernelGsp, pFwsecUcode)
#endif //__nvoc_kernel_gsp_h_disabled
#define kgspExecuteFwsecSb_HAL(pGpu, pKernelGsp, pFwsecUcode) kgspExecuteFwsecSb(pGpu, pKernelGsp, pFwsecUcode)
NV_STATUS kgspExecuteBooterLoad_TU102(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, const NvU64 gspFwWprMetaOffset);
#ifdef __nvoc_kernel_gsp_h_disabled
@ -524,19 +533,6 @@ static inline NV_STATUS kgspExecuteBooterLoad(struct OBJGPU *pGpu, struct Kernel
#define kgspExecuteBooterLoad_HAL(pGpu, pKernelGsp, gspFwWprMetaOffset) kgspExecuteBooterLoad(pGpu, pKernelGsp, gspFwWprMetaOffset)
NV_STATUS kgspExecuteBooterReload_TU102(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp);
#ifdef __nvoc_kernel_gsp_h_disabled
static inline NV_STATUS kgspExecuteBooterReload(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp) {
NV_ASSERT_FAILED_PRECOMP("KernelGsp was disabled!");
return NV_ERR_NOT_SUPPORTED;
}
#else //__nvoc_kernel_gsp_h_disabled
#define kgspExecuteBooterReload(pGpu, pKernelGsp) kgspExecuteBooterReload_TU102(pGpu, pKernelGsp)
#endif //__nvoc_kernel_gsp_h_disabled
#define kgspExecuteBooterReload_HAL(pGpu, pKernelGsp) kgspExecuteBooterReload(pGpu, pKernelGsp)
NV_STATUS kgspExecuteBooterUnloadIfNeeded_TU102(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp);
#ifdef __nvoc_kernel_gsp_h_disabled
@ -563,6 +559,19 @@ static inline NV_STATUS kgspWaitForGfwBootOk(struct OBJGPU *pGpu, struct KernelG
#define kgspWaitForGfwBootOk_HAL(pGpu, pKernelGsp) kgspWaitForGfwBootOk(pGpu, pKernelGsp)
NV_STATUS kgspWaitForProcessorSuspend_TU102(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp);
#ifdef __nvoc_kernel_gsp_h_disabled
static inline NV_STATUS kgspWaitForProcessorSuspend(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp) {
NV_ASSERT_FAILED_PRECOMP("KernelGsp was disabled!");
return NV_ERR_NOT_SUPPORTED;
}
#else //__nvoc_kernel_gsp_h_disabled
#define kgspWaitForProcessorSuspend(pGpu, pKernelGsp) kgspWaitForProcessorSuspend_TU102(pGpu, pKernelGsp)
#endif //__nvoc_kernel_gsp_h_disabled
#define kgspWaitForProcessorSuspend_HAL(pGpu, pKernelGsp) kgspWaitForProcessorSuspend(pGpu, pKernelGsp)
NV_STATUS kgspConstructEngine_IMPL(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, ENGDESCRIPTOR arg0);
static inline NV_STATUS kgspConstructEngine_DISPATCH(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, ENGDESCRIPTOR arg0) {
@ -741,22 +750,6 @@ static inline const BINDATA_ARCHIVE *kgspGetBinArchiveBooterLoadUcode_DISPATCH(s
return pKernelGsp->__kgspGetBinArchiveBooterLoadUcode__(pKernelGsp);
}
const BINDATA_ARCHIVE *kgspGetBinArchiveBooterReloadUcode_TU102(struct KernelGsp *pKernelGsp);
const BINDATA_ARCHIVE *kgspGetBinArchiveBooterReloadUcode_TU116(struct KernelGsp *pKernelGsp);
const BINDATA_ARCHIVE *kgspGetBinArchiveBooterReloadUcode_GA100(struct KernelGsp *pKernelGsp);
const BINDATA_ARCHIVE *kgspGetBinArchiveBooterReloadUcode_GA102(struct KernelGsp *pKernelGsp);
static inline const BINDATA_ARCHIVE *kgspGetBinArchiveBooterReloadUcode_80f438(struct KernelGsp *pKernelGsp) {
NV_ASSERT_OR_RETURN_PRECOMP(0, ((void *)0));
}
static inline const BINDATA_ARCHIVE *kgspGetBinArchiveBooterReloadUcode_DISPATCH(struct KernelGsp *pKernelGsp) {
return pKernelGsp->__kgspGetBinArchiveBooterReloadUcode__(pKernelGsp);
}
const BINDATA_ARCHIVE *kgspGetBinArchiveBooterUnloadUcode_TU102(struct KernelGsp *pKernelGsp);
const BINDATA_ARCHIVE *kgspGetBinArchiveBooterUnloadUcode_TU116(struct KernelGsp *pKernelGsp);
@ -1013,16 +1006,6 @@ static inline NV_STATUS kgspAllocateBooterLoadUcodeImage(struct OBJGPU *pGpu, st
#define kgspAllocateBooterLoadUcodeImage(pGpu, pKernelGsp, ppBooterLoadUcode) kgspAllocateBooterLoadUcodeImage_IMPL(pGpu, pKernelGsp, ppBooterLoadUcode)
#endif //__nvoc_kernel_gsp_h_disabled
NV_STATUS kgspAllocateBooterReloadUcodeImage_IMPL(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, KernelGspFlcnUcode **ppBooterReloadUcode);
#ifdef __nvoc_kernel_gsp_h_disabled
static inline NV_STATUS kgspAllocateBooterReloadUcodeImage(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, KernelGspFlcnUcode **ppBooterReloadUcode) {
NV_ASSERT_FAILED_PRECOMP("KernelGsp was disabled!");
return NV_ERR_NOT_SUPPORTED;
}
#else //__nvoc_kernel_gsp_h_disabled
#define kgspAllocateBooterReloadUcodeImage(pGpu, pKernelGsp, ppBooterReloadUcode) kgspAllocateBooterReloadUcodeImage_IMPL(pGpu, pKernelGsp, ppBooterReloadUcode)
#endif //__nvoc_kernel_gsp_h_disabled
NV_STATUS kgspAllocateBooterUnloadUcodeImage_IMPL(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, KernelGspFlcnUcode **ppBooterUnloadUcode);
#ifdef __nvoc_kernel_gsp_h_disabled
static inline NV_STATUS kgspAllocateBooterUnloadUcodeImage(struct OBJGPU *pGpu, struct KernelGsp *pKernelGsp, KernelGspFlcnUcode **ppBooterUnloadUcode) {

View File

@ -808,6 +808,7 @@ static const CHIPS_RELEASED sChipsReleased[] = {
{ 0x20B3, 0x14a7, 0x10de, "NVIDIA PG506-242" },
{ 0x20B3, 0x14a8, 0x10de, "NVIDIA PG506-243" },
{ 0x20B5, 0x1533, 0x10de, "NVIDIA A100 80GB PCIe" },
{ 0x20B5, 0x1642, 0x10de, "NVIDIA A100 80GB PCIe" },
{ 0x20B6, 0x1492, 0x10de, "NVIDIA PG506-232" },
{ 0x20B7, 0x1532, 0x10de, "NVIDIA A30" },
{ 0x20F1, 0x145f, 0x10de, "NVIDIA A100-PCIE-40GB" },

File diff suppressed because it is too large

View File

@ -519,6 +519,7 @@ struct Subdevice {
NV_STATUS (*__subdeviceCtrlCmdInternalPerfCfControllerSetMaxVGpuVMCount__)(struct Subdevice *, NV2080_CTRL_INTERNAL_PERF_CF_CONTROLLERS_SET_MAX_VGPU_VM_COUNT_PARAMS *);
NV_STATUS (*__subdeviceCtrlCmdBifGetStaticInfo__)(struct Subdevice *, NV2080_CTRL_INTERNAL_BIF_GET_STATIC_INFO_PARAMS *);
NV_STATUS (*__subdeviceCtrlCmdBifGetAspmL1Flags__)(struct Subdevice *, NV2080_CTRL_INTERNAL_BIF_GET_ASPM_L1_FLAGS_PARAMS *);
NV_STATUS (*__subdeviceCtrlCmdBifSetPcieRo__)(struct Subdevice *, NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS *);
NV_STATUS (*__subdeviceCtrlCmdHshubPeerConnConfig__)(struct Subdevice *, NV2080_CTRL_INTERNAL_HSHUB_PEER_CONN_CONFIG_PARAMS *);
NV_STATUS (*__subdeviceCtrlCmdHshubFirstLinkPeerId__)(struct Subdevice *, NV2080_CTRL_INTERNAL_HSHUB_FIRST_LINK_PEER_ID_PARAMS *);
NV_STATUS (*__subdeviceCtrlCmdHshubGetHshubIdForLinks__)(struct Subdevice *, NV2080_CTRL_INTERNAL_HSHUB_GET_HSHUB_ID_FOR_LINKS_PARAMS *);
@ -998,6 +999,7 @@ NV_STATUS __nvoc_objCreate_Subdevice(Subdevice**, Dynamic*, NvU32, struct CALL_C
#define subdeviceCtrlCmdInternalPerfCfControllerSetMaxVGpuVMCount(pSubdevice, pParams) subdeviceCtrlCmdInternalPerfCfControllerSetMaxVGpuVMCount_DISPATCH(pSubdevice, pParams)
#define subdeviceCtrlCmdBifGetStaticInfo(pSubdevice, pParams) subdeviceCtrlCmdBifGetStaticInfo_DISPATCH(pSubdevice, pParams)
#define subdeviceCtrlCmdBifGetAspmL1Flags(pSubdevice, pParams) subdeviceCtrlCmdBifGetAspmL1Flags_DISPATCH(pSubdevice, pParams)
#define subdeviceCtrlCmdBifSetPcieRo(pSubdevice, pParams) subdeviceCtrlCmdBifSetPcieRo_DISPATCH(pSubdevice, pParams)
#define subdeviceCtrlCmdHshubPeerConnConfig(pSubdevice, pParams) subdeviceCtrlCmdHshubPeerConnConfig_DISPATCH(pSubdevice, pParams)
#define subdeviceCtrlCmdHshubFirstLinkPeerId(pSubdevice, pParams) subdeviceCtrlCmdHshubFirstLinkPeerId_DISPATCH(pSubdevice, pParams)
#define subdeviceCtrlCmdHshubGetHshubIdForLinks(pSubdevice, pParams) subdeviceCtrlCmdHshubGetHshubIdForLinks_DISPATCH(pSubdevice, pParams)
@ -3391,6 +3393,12 @@ static inline NV_STATUS subdeviceCtrlCmdBifGetAspmL1Flags_DISPATCH(struct Subdev
return pSubdevice->__subdeviceCtrlCmdBifGetAspmL1Flags__(pSubdevice, pParams);
}
NV_STATUS subdeviceCtrlCmdBifSetPcieRo_IMPL(struct Subdevice *pSubdevice, NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS *pParams);
static inline NV_STATUS subdeviceCtrlCmdBifSetPcieRo_DISPATCH(struct Subdevice *pSubdevice, NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS *pParams) {
return pSubdevice->__subdeviceCtrlCmdBifSetPcieRo__(pSubdevice, pParams);
}
NV_STATUS subdeviceCtrlCmdHshubPeerConnConfig_IMPL(struct Subdevice *pSubdevice, NV2080_CTRL_INTERNAL_HSHUB_PEER_CONN_CONFIG_PARAMS *pParams);
static inline NV_STATUS subdeviceCtrlCmdHshubPeerConnConfig_DISPATCH(struct Subdevice *pSubdevice, NV2080_CTRL_INTERNAL_HSHUB_PEER_CONN_CONFIG_PARAMS *pParams) {

View File

@ -41,7 +41,7 @@ extern "C" {
*
* @details Order of values is not necessarily increasing or sorted, but order is
* preserved across mutation. Please see
* http://en.wikipedia.org/wiki/Sequence for a formal definition.
* https://en.wikipedia.org/wiki/Sequence for a formal definition.
*
* The provided interface is abstract, decoupling the user from the underlying
* list implementation. Two options are available with regard to memory

View File

@ -70,9 +70,9 @@
* - No HW random support (older CPUs)
*
* For additional information, see these links:
* - http://www.2uo.de/myths-about-urandom/
* - https://www.2uo.de/myths-about-urandom/
* - https://bugs.ruby-lang.org/issues/9569
* - http://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
* - https://security.stackexchange.com/questions/3936/is-a-rand-from-dev-urandom-secure-for-a-login-key
*
* @{
*/

View File

@ -198,7 +198,7 @@ NvBool nvDbgBreakpointEnabled(void);
* coverage.
*
* - See @ref PORT_ASSERT for usage example.
* - See http://www.bullseye.com/help/build-exclude.html for more details.
* - See https://www.bullseye.com/help/build-exclude.html for more details.
*/
/**
* @def PORT_COVERAGE_PUSH_ON()

View File

@ -110,7 +110,7 @@ void portSyncShutdown(void);
* @brief A spinlock data type.
*
* For documentation on what a spinlock is and how it behaves see
* http://en.wikipedia.org/wiki/Spinlock
* https://en.wikipedia.org/wiki/Spinlock
*
* - A valid spinlock is any which is non-NULL
* - Spinlocks are not recursive.

View File

@ -26,6 +26,20 @@
#include "gpu/mem_mgr/mem_mgr.h"
#include "published/ampere/ga100/dev_fb.h"
#include "published/ampere/ga100/dev_vm.h"
#include "published/ampere/ga100/dev_fuse.h"
/*!
* @brief Read fuse for display supported status.
* Some chips that are not marked displayless still do not support display.
*/
NvBool
gpuFuseSupportsDisplay_GA100
(
OBJGPU *pGpu
)
{
return GPU_FLD_TEST_DRF_DEF(pGpu, _FUSE, _STATUS_OPT_DISPLAY, _DATA, _ENABLE);
}
/*!
* @brief Clear FBHUB POISON Interrupt state for Bug 2924523.

View File

@ -27,7 +27,20 @@
#include "published/maxwell/gm107/dev_bus.h"
#include "published/maxwell/gm107/dev_nv_xve.h"
#include "published/maxwell/gm107/dev_nv_xve1.h"
#include "published/maxwell/gm107/dev_fuse.h"
/*!
* @brief Read fuse for display supported status.
* Some chips that are not marked displayless still do not support display.
*/
NvBool
gpuFuseSupportsDisplay_GM107
(
OBJGPU *pGpu
)
{
return GPU_FLD_TEST_DRF_DEF(pGpu, _FUSE, _STATUS_OPT_DISPLAY, _DATA, _ENABLE);
}
/*!
* @brief gpuReadBusConfigRegEx_GM107

View File

@ -24,6 +24,7 @@
/* ------------------------ Includes ---------------------------------------- */
#include "gpu/bif/kernel_bif.h"
#include "ampere/ga100/dev_nv_xve_addendum.h"
/* ------------------------ Public Functions -------------------------------- */
@ -47,3 +48,26 @@ kbifApplyWARBug3208922_GA100
pKernelBif->setProperty(pKernelBif, PDB_PROP_KBIF_P2P_WRITES_DISABLED, NV_TRUE);
}
}
/*!
* @brief Check for RO enablement request in emulated config space.
*
* @param[in] pGpu GPU object pointer
* @param[in] pKernelBif BIF object pointer
*/
void
kbifInitRelaxedOrderingFromEmulatedConfigSpace_GA100
(
OBJGPU *pGpu,
KernelBif *pKernelBif
)
{
NvU32 passthroughEmulatedConfig = osPciReadDword(osPciInitHandle(gpuGetDomain(pGpu),
gpuGetBus(pGpu),
gpuGetDevice(pGpu),
0, NULL, NULL),
NV_XVE_PASSTHROUGH_EMULATED_CONFIG);
NvBool roEnabled = DRF_VAL(_XVE, _PASSTHROUGH_EMULATED_CONFIG, _RELAXED_ORDERING_ENABLE, passthroughEmulatedConfig);
pKernelBif->setProperty(pKernelBif, PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE, roEnabled);
}

View File

@ -45,7 +45,7 @@
/* ------------------------ Static Function Prototypes ---------------------- */
static void _kbifInitRegistryOverrides(OBJGPU *, KernelBif *);
static void _kbifCheckIfGpuExists(OBJGPU *, void*);
static NV_STATUS _kbifSetPcieRelaxedOrdering(OBJGPU *, KernelBif *, NvBool);
/* ------------------------ Public Functions -------------------------------- */
@ -160,6 +160,70 @@ kbifStateLoad_IMPL
return NV_OK;
}
/*!
* @brief Configure PCIe Relaxed Ordering in BIF
*
* @param[in] pGpu GPU object pointer
* @param[in] pKernelBif KBIF object pointer
* @param[in] enableRo Enable/disable RO
*/
static NV_STATUS
_kbifSetPcieRelaxedOrdering
(
OBJGPU *pGpu,
KernelBif *pKernelBif,
NvBool enableRo
)
{
NV2080_CTRL_INTERNAL_BIF_SET_PCIE_RO_PARAMS pcieRo;
RM_API *pRmApi = GPU_GET_PHYSICAL_RMAPI(pGpu);
NV_STATUS status;
pcieRo.enableRo = enableRo;
status = pRmApi->Control(pRmApi, pGpu->hInternalClient, pGpu->hInternalSubdevice,
NV2080_CTRL_CMD_INTERNAL_BIF_SET_PCIE_RO,
&pcieRo, sizeof(pcieRo));
if (status != NV_OK) {
NV_PRINTF(LEVEL_ERROR, "NV2080_CTRL_CMD_INTERNAL_BIF_SET_PCIE_RO failed %s (0x%x)\n",
nvstatusToString(status), status);
return status;
}
return NV_OK;
}
/*!
* @brief KernelBif state post-load
*
* @param[in] pGpu GPU object pointer
* @param[in] pKernelBif KBIF object pointer
* @param[in] flags GPU state flag
*/
NV_STATUS
kbifStatePostLoad_IMPL
(
OBJGPU *pGpu,
KernelBif *pKernelBif,
NvU32 flags
)
{
NV_STATUS status;
kbifInitRelaxedOrderingFromEmulatedConfigSpace(pGpu, pKernelBif);
if (pKernelBif->getProperty(pKernelBif, PDB_PROP_KBIF_PCIE_RELAXED_ORDERING_SET_IN_EMULATED_CONFIG_SPACE)) {
//
// This is done from StatePostLoad() to guarantee that BIF's StateLoad()
// is already completed for both monolithic RM and GSP RM.
//
status = _kbifSetPcieRelaxedOrdering(pGpu, pKernelBif, NV_TRUE);
if (status != NV_OK)
return NV_OK;
}
return NV_OK;
}
/*!
* @brief KernelBif state unload
*

View File

@ -218,6 +218,9 @@ kdispStatePreInitLocked_IMPL(OBJGPU *pGpu,
NvU32 hSubdevice = pGpu->hInternalSubdevice;
NV2080_CTRL_INTERNAL_DISPLAY_GET_IP_VERSION_PARAMS ctrlParams;
if (!gpuFuseSupportsDisplay_HAL(pGpu))
return NV_ERR_NOT_SUPPORTED;
status = pRmApi->Control(pRmApi, hClient, hSubdevice,
NV2080_CTRL_CMD_INTERNAL_DISPLAY_GET_IP_VERSION,
&ctrlParams, sizeof(ctrlParams));

View File

@ -1339,6 +1339,12 @@ gpuDestruct_IMPL
pGpu->regopScratchBufferMaxOffsets = 0;
NV_ASSERT(pGpu->numSubdeviceBackReferences == 0);
portMemFree(pGpu->pSubdeviceBackReferences);
pGpu->pSubdeviceBackReferences = NULL;
pGpu->numSubdeviceBackReferences = 0;
pGpu->maxSubdeviceBackReferences = 0;
gpuDestructPhysical(pGpu);
}

View File

@ -226,6 +226,53 @@ gpuGetByHandle
return gpuGetByRef(pResourceRef, pbBroadcast, ppGpu);
}
NV_STATUS gpuRegisterSubdevice_IMPL(OBJGPU *pGpu, Subdevice *pSubdevice)
{
const NvU32 initialSize = 32;
const NvU32 expansionFactor = 2;
if (pGpu->numSubdeviceBackReferences == pGpu->maxSubdeviceBackReferences)
{
if (pGpu->pSubdeviceBackReferences == NULL)
{
pGpu->pSubdeviceBackReferences = portMemAllocNonPaged(initialSize * sizeof(Subdevice*));
if (pGpu->pSubdeviceBackReferences == NULL)
return NV_ERR_NO_MEMORY;
pGpu->maxSubdeviceBackReferences = initialSize;
}
else
{
const NvU32 newSize = expansionFactor * pGpu->maxSubdeviceBackReferences * sizeof(Subdevice*);
Subdevice **newArray = portMemAllocNonPaged(newSize);
if (newArray == NULL)
return NV_ERR_NO_MEMORY;
portMemCopy(newArray, newSize, pGpu->pSubdeviceBackReferences, pGpu->maxSubdeviceBackReferences * sizeof(Subdevice*));
portMemFree(pGpu->pSubdeviceBackReferences);
pGpu->pSubdeviceBackReferences = newArray;
pGpu->maxSubdeviceBackReferences *= expansionFactor;
}
}
pGpu->pSubdeviceBackReferences[pGpu->numSubdeviceBackReferences++] = pSubdevice;
return NV_OK;
}
void gpuUnregisterSubdevice_IMPL(OBJGPU *pGpu, Subdevice *pSubdevice)
{
NvU32 i;
for (i = 0; i < pGpu->numSubdeviceBackReferences; i++)
{
if (pGpu->pSubdeviceBackReferences[i] == pSubdevice)
{
pGpu->numSubdeviceBackReferences--;
pGpu->pSubdeviceBackReferences[i] = pGpu->pSubdeviceBackReferences[pGpu->numSubdeviceBackReferences];
pGpu->pSubdeviceBackReferences[pGpu->numSubdeviceBackReferences] = NULL;
return;
}
}
NV_ASSERT_FAILED("Subdevice not found!");
}
/*!
* @brief Determine whether the given event should be triggered on the given
* subdevice based upon MIG attribution, and translate encoded global IDs into
@ -437,9 +484,9 @@ gpuNotifySubDeviceEvent_IMPL
{
PEVENTNOTIFICATION pEventNotification;
THREAD_STATE_NODE *pCurThread;
RS_SHARE_ITERATOR it = serverutilShareIter(classId(NotifShare));
NvU32 localNotifyType;
NvU32 localInfo32;
NvU32 i;
if (NV_OK == threadStateGetCurrent(&pCurThread, pGpu))
{
@ -451,22 +498,11 @@ gpuNotifySubDeviceEvent_IMPL
NV_ASSERT(notifyIndex < NV2080_NOTIFIERS_MAXCOUNT);
// search notifiers with events hooked up for this gpu
while (serverutilShareIterNext(&it))
for (i = 0; i < pGpu->numSubdeviceBackReferences; i++)
{
RsShared *pShared = it.pShared;
Subdevice *pSubdevice;
INotifier *pNotifier;
NotifShare *pNotifierShare = dynamicCast(pShared, NotifShare);
Subdevice *pSubdevice = pGpu->pSubdeviceBackReferences[i];
INotifier *pNotifier = staticCast(pSubdevice, INotifier);
if ((pNotifierShare == NULL) || (pNotifierShare->pNotifier == NULL))
continue;
pNotifier = pNotifierShare->pNotifier;
pSubdevice = dynamicCast(pNotifier, Subdevice);
// Only notify matching GPUs
if ((pSubdevice == NULL) || (GPU_RES_GET_GPU(pSubdevice) != pGpu))
continue;
GPU_RES_SET_THREAD_BC_STATE(pSubdevice);
//

View File

@ -36,7 +36,9 @@
#include "published/ampere/ga102/dev_falcon_second_pri.h"
#include "published/ampere/ga102/dev_gsp.h"
#include "published/ampere/ga102/dev_gsp_addendum.h"
#include "published/ampere/ga102/dev_gc6_island.h"
#include "published/ampere/ga102/dev_gc6_island_addendum.h"
#include "gpu/sec2/kernel_sec2.h"
#define RISCV_BR_ADDR_ALIGNMENT (8)
@ -87,6 +89,23 @@ _kgspResetIntoRiscv
return NV_OK;
}
/*!
* Determine if GSP reload via SEC2 is completed.
*/
static NvBool
_kgspIsReloadCompleted
(
OBJGPU *pGpu,
void *pVoid
)
{
NvU32 reg;
reg = GPU_REG_RD32(pGpu, NV_PGC6_BSI_SECURE_SCRATCH_14);
return FLD_TEST_DRF(_PGC6, _BSI_SECURE_SCRATCH_14, _BOOT_STAGE_3_HANDOFF, _VALUE_DONE, reg);
}
/*!
* Boot GSP-RM.
*
@ -265,6 +284,7 @@ kgspExecuteSequencerCommand_GA102
{
NV_STATUS status = NV_OK;
KernelFalcon *pKernelFalcon = staticCast(pKernelGsp, KernelFalcon);
NvU32 secMailbox0 = 0;
switch (opCode)
{
@ -301,14 +321,26 @@ kgspExecuteSequencerCommand_GA102
NV_ASSERT_OR_RETURN(payloadSize == 0, NV_ERR_INVALID_ARGUMENT);
{
KernelFalcon *pKernelSec2Falcon = staticCast(GPU_GET_KERNEL_SEC2(pGpu), KernelFalcon);
NV_ASSERT_OK_OR_RETURN(_kgspResetIntoRiscv(pGpu, pKernelGsp));
kgspProgramLibosBootArgsAddr_HAL(pGpu, pKernelGsp);
status = kgspExecuteBooterReload_HAL(pGpu, pKernelGsp);
if (status != NV_OK)
NV_PRINTF(LEVEL_INFO, "---------------Starting SEC2 to resume GSP-RM------------\n");
// Start SEC2 in order to resume GSP-RM
kflcnStartCpu_HAL(pGpu, pKernelSec2Falcon);
// Wait for reload to be completed.
status = gpuTimeoutCondWait(pGpu, _kgspIsReloadCompleted, NULL, NULL);
// Check SEC mailbox.
secMailbox0 = kflcnRegRead_HAL(pGpu, pKernelSec2Falcon, NV_PFALCON_FALCON_MAILBOX0);
if ((status != NV_OK) || (secMailbox0 != NV_OK))
{
NV_PRINTF(LEVEL_ERROR, "failed to execute Booter Reload (ucode for resume from sequencer): 0x%x\n", status);
break;
NV_PRINTF(LEVEL_ERROR, "Timeout waiting for SEC2-RTOS to resume GSP-RM. SEC2 Mailbox0 is : 0x%x\n", secMailbox0);
DBG_BREAKPOINT();
return NV_ERR_TIMEOUT;
}
}

View File

@ -25,7 +25,6 @@
#include "gpu/gpu.h"
#include "gpu/falcon/kernel_falcon.h"
#include "gpu/nvdec/kernel_nvdec.h"
#include "gpu/sec2/kernel_sec2.h"
#include "published/turing/tu102/dev_fb.h" // for NV_PFB_PRI_MMU_WPR2_ADDR_HI
@ -117,34 +116,6 @@ kgspExecuteBooterLoad_TU102
return status;
}
NV_STATUS
kgspExecuteBooterReload_TU102
(
OBJGPU *pGpu,
KernelGsp *pKernelGsp
)
{
NV_STATUS status;
KernelNvdec *pKernelNvdec = GPU_GET_KERNEL_NVDEC(pGpu);
NV_PRINTF(LEVEL_INFO, "executing Booter Reload\n");
NV_ASSERT_OR_RETURN(pKernelGsp->pBooterReloadUcode != NULL, NV_ERR_INVALID_STATE);
kflcnReset_HAL(pGpu, staticCast(pKernelNvdec, KernelFalcon));
status = s_executeBooterUcode_TU102(pGpu, pKernelGsp,
pKernelGsp->pBooterReloadUcode,
staticCast(pKernelNvdec, KernelFalcon),
0xFF, 0xFF);
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to execute Booter Reload: 0x%x\n", status);
return status;
}
return status;
}
NV_STATUS
kgspExecuteBooterUnloadIfNeeded_TU102
(

View File

@ -36,7 +36,8 @@
#include "published/turing/tu102/dev_bus.h" // for NV_PBUS_VBIOS_SCRATCH
#include "published/turing/tu102/dev_fb.h" // for NV_PFB_PRI_MMU_WPR2_ADDR_HI
#include "published/turing/tu102/dev_gc6_island_addendum.h" // for NV_PGC6_AON_FRTS_INPUT_WPR_SIZE_SECURE_SCRATCH_GROUP_03_0_WPR_SIZE_1MB_IN_4K
#include "published/turing/tu102/dev_gc6_island.h"
#include "published/turing/tu102/dev_gc6_island_addendum.h"
/*!
* Get size of FRTS data.
@ -98,6 +99,7 @@ typedef struct
} FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3;
#define FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS (0x15)
#define FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_SB (0x19)
typedef struct
{
@ -132,25 +134,33 @@ typedef struct
#define NV_VBIOS_FWSECLIC_FRTS_ERR_CODE 31:16
#define NV_VBIOS_FWSECLIC_FRTS_ERR_CODE_NONE 0x00000000
#define NV_VBIOS_FWSECLIC_SCRATCH_INDEX_15 0x15
#define NV_VBIOS_FWSECLIC_SB_ERR_CODE 15:0
#define NV_VBIOS_FWSECLIC_SB_ERR_CODE_NONE 0x00000000
// ---------------------------------------------------------------------------
// Functions for preparing and executing FWSEC commands
// ---------------------------------------------------------------------------
/*!
* Patch DMEM of FWSEC for FRTS command
* Patch DMEM of FWSEC for a given command
*
* @param[inout] pMappedData Pointer to mapped DMEM of FWSEC
* @param[in] mappedDataSize Number of bytes valid under pMappedData
* @param[in] pFrtsCmd FRTS command to patch in
* @param[in] cmd FWSEC command to invoke
* @param[in] pCmdBuffer Buffer containing command arguments to patch in
* @param[in] cmdBufferSize Size of buffer pointed by pCmdBuffer
* @param[in] interfaceOffset Interface offset given by VBIOS for FWSEC
*/
NV_STATUS
s_vbiosPatchFrtsInterfaceData
static NV_STATUS
s_vbiosPatchInterfaceData
(
NvU8 *pMappedData, // inout
const NvU32 mappedDataSize,
const FWSECLIC_FRTS_CMD *pFrtsCmd,
const NvU32 cmd,
const void *pCmdBuffer,
const NvU32 cmdBufferSize,
const NvU32 interfaceOffset
)
{
@ -178,7 +188,7 @@ s_vbiosPatchFrtsInterfaceData
pIntFaceHdr = (FALCON_APPLICATION_INTERFACE_HEADER_V1 *) (pMappedData + interfaceOffset);
if (pIntFaceHdr->entryCount < 2)
{
NV_PRINTF(LEVEL_ERROR, "too few interface entries found for FRTS\n");
NV_PRINTF(LEVEL_ERROR, "too few interface entries found for FWSEC cmd 0x%x\n", cmd);
return NV_ERR_INVALID_DATA;
}
@ -222,15 +232,15 @@ s_vbiosPatchFrtsInterfaceData
if (!pDmemMapper)
{
NV_PRINTF(LEVEL_ERROR, "failed to find required interface entry for FRTS\n");
NV_PRINTF(LEVEL_ERROR, "failed to find required interface entry for FWSEC cmd 0x%x\n", cmd);
return NV_ERR_INVALID_DATA;
}
pDmemMapper->init_cmd = FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS;
pDmemMapper->init_cmd = cmd;
if (pDmemMapper->cmd_in_buffer_size < sizeof(FWSECLIC_FRTS_CMD))
if (pDmemMapper->cmd_in_buffer_size < cmdBufferSize)
{
NV_PRINTF(LEVEL_ERROR, "insufficient cmd buffer for FRTS interface\n");
NV_PRINTF(LEVEL_ERROR, "insufficient cmd buffer for FWSEC interface cmd 0x%x\n", cmd);
}
if (pDmemMapper->cmd_in_buffer_offset >= mappedDataSize)
@ -238,60 +248,88 @@ s_vbiosPatchFrtsInterfaceData
return NV_ERR_INVALID_OFFSET;
}
bSafe = portSafeAddU32(pIntFaceEntry->dmemOffset, sizeof(*pFrtsCmd), &nextOffset);
bSafe = portSafeAddU32(pIntFaceEntry->dmemOffset, cmdBufferSize, &nextOffset);
if (!bSafe || nextOffset > mappedDataSize)
{
return NV_ERR_INVALID_OFFSET;
}
portMemCopy(pMappedData + pDmemMapper->cmd_in_buffer_offset, sizeof(*pFrtsCmd),
pFrtsCmd, sizeof(*pFrtsCmd));
portMemCopy(pMappedData + pDmemMapper->cmd_in_buffer_offset, cmdBufferSize,
pCmdBuffer, cmdBufferSize);
return NV_OK;
}
/*!
* Execute FWSEC for FRTS and wait for completion.
* Execute a given FWSEC cmd and wait for completion.
*
* @param[in] pGpu OBJGPU pointer
* @param[in] pKernelGsp KernelGsp pointer
* @param[in] pFwsecUcode KernelGspFlcnUcode structure of FWSEC ucode
* @param[in] frtsOffset Desired offset in FB of FRTS data and WPR2
* @param[in] cmd FWSEC cmd (FRTS or SB)
* @param[in] frtsOffset (if cmd is FRTS) desired FB offset of FRTS data
*/
NV_STATUS
kgspExecuteFwsecFrts_TU102
static NV_STATUS
s_executeFwsec_TU102
(
OBJGPU *pGpu,
KernelGsp *pKernelGsp,
KernelGspFlcnUcode *pFwsecUcode,
const NvU32 cmd,
const NvU64 frtsOffset
)
{
NV_STATUS status;
NvU32 blockSizeIn4K;
FWSECLIC_READ_VBIOS_DESC readVbiosDesc;
FWSECLIC_FRTS_CMD frtsCmd;
void *pCmdBuffer;
NvU32 cmdBufferSize;
NV_ASSERT_OR_RETURN(!IS_VIRTUAL(pGpu), NV_ERR_NOT_SUPPORTED);
NV_ASSERT_OR_RETURN(IS_GSP_CLIENT(pGpu), NV_ERR_NOT_SUPPORTED);
NV_ASSERT_OR_RETURN(pFwsecUcode != NULL, NV_ERR_INVALID_ARGUMENT);
NV_ASSERT_OR_RETURN(frtsOffset > 0, NV_ERR_INVALID_ARGUMENT);
NV_ASSERT_OR_RETURN((cmd != FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS) ||
(frtsOffset > 0), NV_ERR_INVALID_ARGUMENT);
// Build up FRTS args
blockSizeIn4K = NV_PGC6_AON_FRTS_INPUT_WPR_SIZE_SECURE_SCRATCH_GROUP_03_0_WPR_SIZE_1MB_IN_4K;
if ((cmd != FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS) &&
(cmd != FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_SB))
{
NV_ASSERT(0);
return NV_ERR_INVALID_ARGUMENT;
}
frtsCmd.frtsRegionDesc.version = 1;
frtsCmd.frtsRegionDesc.size = sizeof(frtsCmd.frtsRegionDesc);
frtsCmd.frtsRegionDesc.frtsRegionOffset4K = (NvU32) (frtsOffset >> 12);
frtsCmd.frtsRegionDesc.frtsRegionSize = blockSizeIn4K;
frtsCmd.frtsRegionDesc.frtsRegionMediaType = FWSECLIC_FRTS_REGION_MEDIA_FB;
readVbiosDesc.version = 1;
readVbiosDesc.size = sizeof(readVbiosDesc);
readVbiosDesc.gfwImageOffset = 0;
readVbiosDesc.gfwImageSize = 0;
readVbiosDesc.flags = FWSECLIC_READ_VBIOS_STRUCT_FLAGS;
frtsCmd.readVbiosDesc.version = 1;
frtsCmd.readVbiosDesc.size = sizeof(frtsCmd.readVbiosDesc);
frtsCmd.readVbiosDesc.gfwImageOffset = 0;
frtsCmd.readVbiosDesc.gfwImageSize = 0;
frtsCmd.readVbiosDesc.flags = FWSECLIC_READ_VBIOS_STRUCT_FLAGS;
if (cmd == FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS)
{
// FRTS takes an FRTS_CMD, here we build that up
NvU32 blockSizeIn4K = NV_PGC6_AON_FRTS_INPUT_WPR_SIZE_SECURE_SCRATCH_GROUP_03_0_WPR_SIZE_1MB_IN_4K;
frtsCmd.frtsRegionDesc.version = 1;
frtsCmd.frtsRegionDesc.size = sizeof(frtsCmd.frtsRegionDesc);
frtsCmd.frtsRegionDesc.frtsRegionOffset4K = (NvU32) (frtsOffset >> 12);
frtsCmd.frtsRegionDesc.frtsRegionSize = blockSizeIn4K;
frtsCmd.frtsRegionDesc.frtsRegionMediaType = FWSECLIC_FRTS_REGION_MEDIA_FB;
frtsCmd.readVbiosDesc = readVbiosDesc;
pCmdBuffer = &frtsCmd;
cmdBufferSize = sizeof(frtsCmd);
}
else // i.e. FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_SB
{
// SB takes READ_VBIOS_DESC directly
pCmdBuffer = &readVbiosDesc;
cmdBufferSize = sizeof(readVbiosDesc);
}
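The FRTS region descriptor expresses its offset in 4 KiB units, which is why the code above shifts `frtsOffset` right by 12. A trivial sketch of that conversion, assuming the offset is 4 KiB aligned:

```c
#include <assert.h>
#include <stdint.h>

/* Convert a byte offset in FB to the 4 KiB units the FRTS region
 * descriptor expects (mirrors the frtsOffset >> 12 above). */
static uint32_t fb_offset_to_4k(uint64_t frtsOffset)
{
    return (uint32_t)(frtsOffset >> 12);
}
```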
if (pFwsecUcode->bootType == KGSP_FLCN_UCODE_BOOT_FROM_HS)
{
@@ -345,8 +383,8 @@ kgspExecuteFwsecFrts_TU102
}
pMappedData = pMappedImage + pUcode->dataOffset;
status = s_vbiosPatchFrtsInterfaceData(pMappedData, pUcode->dmemSize,
&frtsCmd, pUcode->interfaceOffset);
status = s_vbiosPatchInterfaceData(pMappedData, pUcode->dmemSize, cmd,
pCmdBuffer, cmdBufferSize, pUcode->interfaceOffset);
portMemCopy(pMappedData + pUcode->hsSigDmemAddr, pUcode->sigSize,
((NvU8 *) pUcode->pSignatures) + sigOffset, pUcode->sigSize);
@@ -358,7 +396,8 @@ kgspExecuteFwsecFrts_TU102
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to prepare interface data for FRTS: 0x%x\n", status);
NV_PRINTF(LEVEL_ERROR, "failed to prepare interface data for FWSEC cmd 0x%x: 0x%x\n",
cmd, status);
return status;
}
}
@@ -376,8 +415,8 @@ kgspExecuteFwsecFrts_TU102
return NV_ERR_INSUFFICIENT_RESOURCES;
}
status = s_vbiosPatchFrtsInterfaceData(pMappedData, pUcode->dmemSize,
&frtsCmd, pUcode->interfaceOffset);
status = s_vbiosPatchInterfaceData(pMappedData, pUcode->dmemSize, cmd,
pCmdBuffer, cmdBufferSize, pUcode->interfaceOffset);
memdescUnmapInternal(pGpu, pUcode->pDataMemDesc,
TRANSFER_FLAGS_DESTROY_MAPPING);
@@ -385,7 +424,8 @@ kgspExecuteFwsecFrts_TU102
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to prepare interface data for FRTS: 0x%x\n", status);
NV_PRINTF(LEVEL_ERROR, "failed to prepare interface data for FWSEC cmd 0x%x: 0x%x\n",
cmd, status);
return status;
}
}
@@ -399,10 +439,11 @@ kgspExecuteFwsecFrts_TU102
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to execute FWSEC for FRTS: status 0x%x\n", status);
NV_PRINTF(LEVEL_ERROR, "failed to execute FWSEC cmd 0x%x: status 0x%x\n", cmd, status);
return status;
}
if (cmd == FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS)
{
NvU32 data;
NvU32 frtsErrCode;
@@ -437,6 +478,73 @@ kgspExecuteFwsecFrts_TU102
return NV_ERR_GENERIC;
}
}
else // i.e. FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_SB
{
NvU32 data;
NvU32 sbErrCode;
if (!GPU_FLD_TEST_DRF_DEF(pGpu, _PGC6, _AON_SECURE_SCRATCH_GROUP_05_PRIV_LEVEL_MASK,
_READ_PROTECTION_LEVEL0, _ENABLE))
{
NV_PRINTF(LEVEL_ERROR, "failed to execute FWSEC for SB: GFW PLM not lowered\n");
return NV_ERR_GENERIC;
}
if (!GPU_FLD_TEST_DRF_DEF(pGpu, _PGC6, _AON_SECURE_SCRATCH_GROUP_05_0_GFW_BOOT,
_PROGRESS, _COMPLETED))
{
NV_PRINTF(LEVEL_ERROR, "failed to execute FWSEC for SB: GFW progress not completed\n");
return NV_ERR_GENERIC;
}
data = GPU_REG_RD32(pGpu, NV_PBUS_VBIOS_SCRATCH(NV_VBIOS_FWSECLIC_SCRATCH_INDEX_15));
sbErrCode = DRF_VAL(_VBIOS, _FWSECLIC, _SB_ERR_CODE, data);
if (sbErrCode != NV_VBIOS_FWSECLIC_SB_ERR_CODE_NONE)
{
NV_PRINTF(LEVEL_ERROR, "failed to execute FWSEC for SB: SB error code 0x%x\n", sbErrCode);
return NV_ERR_GENERIC;
}
}
return status;
}
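FWSEC reports its FRTS and SB completion status through `NV_PBUS_VBIOS_SCRATCH` registers, and the code above pulls the error code out of the 32-bit read with `DRF_VAL`. A generic sketch of that bit-field extraction (the bounds here are made up for illustration; the real field layout comes from the hardware headers):

```c
#include <assert.h>
#include <stdint.h>

/* Extract bits [hi:lo] from a 32-bit register value, i.e. the
 * operation DRF_VAL performs with field bounds from hardware headers. */
static uint32_t field_val(uint32_t reg, unsigned hi, unsigned lo)
{
    uint32_t width = hi - lo + 1u;
    /* Avoid the undefined 1u << 32 for a full-width field. */
    uint32_t mask = (width == 32u) ? 0xFFFFFFFFu : ((1u << width) - 1u);
    return (reg >> lo) & mask;
}
```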
/*!
 * Execute FWSEC for FRTS and wait for completion.
*
* @param[in] pGpu OBJGPU pointer
* @param[in] pKernelGsp KernelGsp pointer
* @param[in] pFwsecUcode KernelGspFlcnUcode structure of FWSEC ucode
* @param[in] frtsOffset Desired offset in FB of FRTS data and WPR2
*/
NV_STATUS
kgspExecuteFwsecFrts_TU102
(
OBJGPU *pGpu,
KernelGsp *pKernelGsp,
KernelGspFlcnUcode *pFwsecUcode,
const NvU64 frtsOffset
)
{
return s_executeFwsec_TU102(pGpu, pKernelGsp, pFwsecUcode,
FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_FRTS, frtsOffset);
}
/*!
 * Execute FWSEC's SB command and wait for completion.
*
* @param[in] pGpu OBJGPU pointer
* @param[in] pKernelGsp KernelGsp pointer
* @param[in] pFwsecUcode KernelGspFlcnUcode structure of FWSEC ucode
*/
NV_STATUS
kgspExecuteFwsecSb_TU102
(
OBJGPU *pGpu,
KernelGsp *pKernelGsp,
KernelGspFlcnUcode *pFwsecUcode
)
{
return s_executeFwsec_TU102(pGpu, pKernelGsp, pFwsecUcode,
FALCON_APPLICATION_INTERFACE_DMEM_MAPPER_V3_CMD_SB, 0);
}


@@ -48,6 +48,8 @@
#include "published/turing/tu102/dev_gc6_island.h"
#include "published/turing/tu102/dev_gc6_island_addendum.h"
#include "gpu/sec2/kernel_sec2.h"
#define RPC_STRUCTURES
#define RPC_GENERIC_UNION
#include "g_rpc-structures.h"
@@ -261,6 +263,23 @@ kgspFreeBootArgs_TU102
}
}
/*!
* Determine if GSP reload via SEC2 is completed.
*/
static NvBool
_kgspIsReloadCompleted
(
OBJGPU *pGpu,
void *pVoid
)
{
NvU32 reg;
reg = GPU_REG_RD32(pGpu, NV_PGC6_BSI_SECURE_SCRATCH_14);
return FLD_TEST_DRF(_PGC6, _BSI_SECURE_SCRATCH_14, _BOOT_STAGE_3_HANDOFF, _VALUE_DONE, reg);
}
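`_kgspIsReloadCompleted` is a predicate handed to `gpuTimeoutCondWait`, which re-evaluates it until it holds or the GPU timeout expires. A generic sketch of that poll-until-condition pattern (the NVIDIA helper's real signature and timekeeping differ; this version just bounds the iteration count):

```c
#include <assert.h>
#include <stdbool.h>

typedef bool (*cond_fn)(void *ctx);

/* Poll 'cond' up to 'maxIters' times; returns true if it ever holds.
 * Real code would sleep between iterations and track elapsed time. */
static bool poll_until(cond_fn cond, void *ctx, int maxIters)
{
    for (int i = 0; i < maxIters; i++)
    {
        if (cond(ctx))
            return true;
    }
    return false;
}

/* Example predicate: "done" once a counter reaches a threshold,
 * standing in for a register read like _kgspIsReloadCompleted. */
static bool counter_done(void *ctx)
{
    int *p = (int *)ctx;
    return ++(*p) >= 3;
}

/* Small driver so the pattern can be exercised directly. */
static bool demo_poll(int start, int iters)
{
    int c = start;
    return poll_until(counter_done, &c, iters);
}
```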
/*!
* Set command queue head for CPU to GSP message queue
*
@@ -464,6 +483,7 @@ kgspCalculateFbLayout_TU102
)
{
KernelMemorySystem *pKernelMemorySystem = GPU_GET_KERNEL_MEMORY_SYSTEM(pGpu);
KernelDisplay *pKernelDisplay = GPU_GET_KERNEL_DISPLAY(pGpu);
MemoryManager *pMemoryManager = GPU_GET_MEMORY_MANAGER(pGpu);
GspFwWprMeta *pWprMeta = pKernelGsp->pWprMeta;
RM_RISCV_UCODE_DESC *pRiscvDesc = pKernelGsp->pGspRmBootUcodeDesc;
@@ -488,9 +508,8 @@ kgspCalculateFbLayout_TU102
// Figure out where VGA workspace is located. We do not have to adjust
// it ourselves (see vgaRelocateWorkspaceBase_HAL()).
//
KernelDisplay *pKernelDisplay = GPU_GET_KERNEL_DISPLAY(pGpu);
if (kdispGetVgaWorkspaceBase(pGpu, pKernelDisplay, &pWprMeta->vgaWorkspaceOffset))
if (gpuFuseSupportsDisplay_HAL(pGpu) &&
kdispGetVgaWorkspaceBase(pGpu, pKernelDisplay, &pWprMeta->vgaWorkspaceOffset))
{
if (pWprMeta->vgaWorkspaceOffset < (pWprMeta->fbSize - DRF_SIZE(NV_PRAMIN)))
{
@@ -629,22 +648,46 @@ kgspExecuteSequencerCommand_TU102
{
NV_STATUS status = NV_OK;
KernelFalcon *pKernelFalcon = staticCast(pKernelGsp, KernelFalcon);
NvU32 secMailbox0 = 0;
switch (opCode)
{
case GSP_SEQ_BUF_OPCODE_CORE_RESUME:
{
{
KernelFalcon *pKernelSec2Falcon = staticCast(GPU_GET_KERNEL_SEC2(pGpu), KernelFalcon);
kflcnSecureReset_HAL(pGpu, pKernelFalcon);
kgspProgramLibosBootArgsAddr_HAL(pGpu, pKernelGsp);
status = kgspExecuteBooterReload_HAL(pGpu, pKernelGsp);
if (status != NV_OK)
NV_PRINTF(LEVEL_INFO, "---------------Starting SEC2 to resume GSP-RM------------\n");
// Start SEC2 in order to resume GSP-RM
kflcnStartCpu_HAL(pGpu, pKernelSec2Falcon);
// Wait for reload to be completed.
status = gpuTimeoutCondWait(pGpu, _kgspIsReloadCompleted, NULL, NULL);
// Check SEC mailbox.
secMailbox0 = kflcnRegRead_HAL(pGpu, pKernelSec2Falcon, NV_PFALCON_FALCON_MAILBOX0);
if ((status != NV_OK) || (secMailbox0 != NV_OK))
{
NV_PRINTF(LEVEL_ERROR, "failed to execute Booter Reload (ucode for resume from sequencer): 0x%x\n", status);
break;
NV_PRINTF(LEVEL_ERROR, "Timeout waiting for SEC2-RTOS to resume GSP-RM. SEC2 Mailbox0 is : 0x%x\n", secMailbox0);
DBG_BREAKPOINT();
return NV_ERR_TIMEOUT;
}
}
// Ensure the CPU is started
if (kflcnIsRiscvActive_HAL(pGpu, pKernelFalcon))
{
NV_PRINTF(LEVEL_INFO, "GSP ucode loaded and RISCV started.\n");
}
else
{
NV_ASSERT_FAILED("Failed to boot GSP");
status = NV_ERR_NOT_READY;
}
break;
}
@@ -790,6 +833,33 @@ kgspService_TU102
return intrStatus;
}
static NvBool
_kgspIsProcessorSuspended
(
OBJGPU *pGpu,
void *pVoid
)
{
KernelGsp *pKernelGsp = reinterpretCast(pVoid, KernelGsp *);
NvU32 mailbox;
// Check for LIBOS_INTERRUPT_PROCESSOR_SUSPENDED in mailbox
mailbox = kflcnRegRead_HAL(pGpu, staticCast(pKernelGsp, KernelFalcon),
NV_PFALCON_FALCON_MAILBOX0);
return (mailbox & 0x80000000) == 0x80000000;
}
NV_STATUS
kgspWaitForProcessorSuspend_TU102
(
OBJGPU *pGpu,
KernelGsp *pKernelGsp
)
{
return gpuTimeoutCondWait(pGpu, _kgspIsProcessorSuspended, pKernelGsp, NULL);
}
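`_kgspIsProcessorSuspended` tests a single flag bit (0x80000000, the `LIBOS_INTERRUPT_PROCESSOR_SUSPENDED` value named in the comment) in MAILBOX0. A sketch of that flag test, with the constant copied from the diff:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* The suspended flag polled in _kgspIsProcessorSuspended (bit 31). */
#define LIBOS_SUSPENDED_FLAG 0x80000000u

/* True when the suspended bit is set in the mailbox value, regardless
 * of whatever else the lower bits carry. */
static bool processor_suspended(uint32_t mailbox)
{
    return (mailbox & LIBOS_SUSPENDED_FLAG) == LIBOS_SUSPENDED_FLAG;
}
```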
#define FWSECLIC_PROG_START_TIMEOUT 50000 // 50ms
#define FWSECLIC_PROG_COMPLETE_TIMEOUT 2000000 // 2s


@@ -233,7 +233,7 @@ s_romImgFindPciHeader_TU102
default:
NV_PRINTF(LEVEL_ERROR, "Error: IFR version not supported = 0x%08x.\n",
ifrVersion);
return NV_ERR_NOT_SUPPORTED;
return NV_ERR_INVALID_DATA;
}
}


@@ -1256,43 +1256,52 @@ kgspInitRm_IMPL
*
* Here, we extract a VBIOS image from ROM, and parse it for FWSEC.
*/
if (kgspGetFrtsSize_HAL(pGpu, pKernelGsp) > 0)
if (pKernelGsp->pFwsecUcode == NULL)
{
if (pKernelGsp->pFwsecUcode == NULL)
KernelGspVbiosImg *pVbiosImg = NULL;
// Try to extract a VBIOS image.
status = kgspExtractVbiosFromRom_HAL(pGpu, pKernelGsp, &pVbiosImg);
if (status == NV_OK)
{
KernelGspVbiosImg *pVbiosImg = NULL;
status = kgspExtractVbiosFromRom_HAL(pGpu, pKernelGsp, &pVbiosImg);
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to extract VBIOS image from ROM: 0x%x\n",
status);
goto done;
}
// Got a VBIOS image, now parse it for FWSEC.
status = kgspParseFwsecUcodeFromVbiosImg(pGpu, pKernelGsp, pVbiosImg,
&pKernelGsp->pFwsecUcode);
&pKernelGsp->pFwsecUcode);
kgspFreeVbiosImg(pVbiosImg);
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to parse FWSEC ucode from VBIOS image: 0x%x\n",
status);
status);
goto done;
}
}
else if (status == NV_ERR_NOT_SUPPORTED)
{
//
// Extracting VBIOS image from ROM is not supported.
// Sanity check we don't depend on it for FRTS, and proceed without FWSEC.
//
NV_ASSERT_OR_GOTO(kgspGetFrtsSize(pGpu, pKernelGsp) == 0, done);
status = NV_OK;
}
else
{
NV_PRINTF(LEVEL_ERROR, "failed to extract VBIOS image from ROM: 0x%x\n",
status);
goto done;
}
}
/*
* We use a set of Booter ucodes to boot GSP-RM as well as manage its lifecycle.
*
* Booter Load loads, verifies, and boots GSP-RM in WPR2.
* Booter Reload resumes GSP-RM after it has suspended for running GSP sequencer.
* Booter Unload tears down WPR2 for driver unload.
*
* Here we prepare the Booter ucode images in SYSMEM so they may be loaded onto
* SEC2 (Load / Unload) and NVDEC0 (Unload).
*
* GSPRM-TODO: remove Reload (and Reload comment) once reload is handled by SEC2-RTOS
*/
{
if (pKernelGsp->pBooterLoadUcode == NULL)
@@ -1306,25 +1315,6 @@ kgspInitRm_IMPL
}
}
if (pKernelGsp->pBooterReloadUcode == NULL)
{
KernelNvdec *pKernelNvdec = GPU_GET_KERNEL_NVDEC(pGpu);
if (pKernelNvdec == NULL)
{
NV_PRINTF(LEVEL_ERROR, "missing NVDEC0 engine, cannot initialize GSP-RM\n");
status = NV_ERR_NOT_SUPPORTED;
goto done;
}
status = kgspAllocateBooterReloadUcodeImage(pGpu, pKernelGsp,
&pKernelGsp->pBooterReloadUcode);
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to allocate Booter Reload ucode: 0x%x\n", status);
goto done;
}
}
if (pKernelGsp->pBooterUnloadUcode == NULL)
{
status = kgspAllocateBooterUnloadUcodeImage(pGpu, pKernelGsp,
@@ -1438,6 +1428,21 @@ kgspUnloadRm_IMPL
NV_PRINTF(LEVEL_INFO, "unloading GSP-RM\n");
NV_RM_RPC_UNLOADING_GUEST_DRIVER(pGpu, rpcStatus, NV_FALSE, NV_FALSE, 0);
// Wait for GSP-RM processor to suspend
kgspWaitForProcessorSuspend_HAL(pGpu, pKernelGsp);
// Dump GSP-RM logs and reset before invoking FWSEC-SB
kgspDumpGspLogs(pGpu, pKernelGsp, NV_FALSE);
kflcnReset_HAL(pGpu, staticCast(pKernelGsp, KernelFalcon));
// Invoke FWSEC-SB to put back PreOsApps during driver unload
status = kgspExecuteFwsecSb_HAL(pGpu, pKernelGsp, pKernelGsp->pFwsecUcode);
if (status != NV_OK)
{
NV_PRINTF(LEVEL_ERROR, "failed to execute FWSEC-SB for PreOsApps during driver unload: 0x%x\n", status);
NV_ASSERT(0);
}
{
// After instructing GSP-RM to unload itself, run Booter Unload to teardown WPR2
status = kgspExecuteBooterUnloadIfNeeded_HAL(pGpu, pKernelGsp);
@@ -1471,9 +1476,6 @@ kgspDestruct_IMPL
kgspFreeFlcnUcode(pKernelGsp->pBooterLoadUcode);
pKernelGsp->pBooterLoadUcode = NULL;
kgspFreeFlcnUcode(pKernelGsp->pBooterReloadUcode);
pKernelGsp->pBooterReloadUcode = NULL;
kgspFreeFlcnUcode(pKernelGsp->pBooterUnloadUcode);
pKernelGsp->pBooterUnloadUcode = NULL;


@@ -24,7 +24,6 @@
#include "gpu/gsp/kernel_gsp.h"
#include "gpu/mem_mgr/mem_mgr.h"
#include "gpu/nvdec/kernel_nvdec.h"
#include "gpu/sec2/kernel_sec2.h"
#include "core/bin_data.h"
@@ -133,7 +132,6 @@ static NV_STATUS
s_patchBooterUcodeSignature
(
OBJGPU *pGpu,
NvBool bIsForNvdec,
NvU32 ucodeId,
NvU8 *pImage,
NvU32 sigDestOffset,
@@ -153,19 +151,9 @@ s_patchBooterUcodeSignature
NV_ASSERT_OR_RETURN(pSignatures != NULL, NV_ERR_INVALID_STATE);
NV_ASSERT_OR_RETURN(numSigs > 0, NV_ERR_INVALID_DATA);
// Booter Reload is on NVDEC0, all other Booters are on SEC2
if (bIsForNvdec)
{
KernelNvdec *pKernelNvdec = GPU_GET_KERNEL_NVDEC(pGpu);
NV_ASSERT_OR_RETURN(pKernelNvdec != NULL, NV_ERR_INVALID_STATE);
fuseVer = knvdecReadUcodeFuseVersion_HAL(pGpu, pKernelNvdec, ucodeId);
}
else
{
KernelSec2 *pKernelSec2 = GPU_GET_KERNEL_SEC2(pGpu);
NV_ASSERT_OR_RETURN(pKernelSec2 != NULL, NV_ERR_INVALID_STATE);
fuseVer = ksec2ReadUcodeFuseVersion_HAL(pGpu, pKernelSec2, ucodeId);
}
KernelSec2 *pKernelSec2 = GPU_GET_KERNEL_SEC2(pGpu);
NV_ASSERT_OR_RETURN(pKernelSec2 != NULL, NV_ERR_INVALID_STATE);
fuseVer = ksec2ReadUcodeFuseVersion_HAL(pGpu, pKernelSec2, ucodeId);
if (numSigs > 1)
{
@@ -187,7 +175,6 @@ s_allocateUcodeFromBinArchive
OBJGPU *pGpu,
KernelGsp *pKernelGsp,
const BINDATA_ARCHIVE *pBinArchive,
const NvBool bIsForNvdec,
KernelGspFlcnUcode **ppFlcnUcode // out
)
{
@@ -375,7 +362,7 @@ s_allocateUcodeFromBinArchive
if (status == NV_OK)
{
status = s_patchBooterUcodeSignature(pGpu,
bIsForNvdec, patchMeta.ucodeId,
patchMeta.ucodeId,
pMappedUcodeMem, patchLoc, pUcode->size,
pSignatures, signaturesTotalSize, numSigs);
NV_ASSERT(status == NV_OK);
@@ -421,7 +408,7 @@ s_allocateUcodeFromBinArchive
// Patch signatures
NV_ASSERT_OK_OR_GOTO(status,
s_patchBooterUcodeSignature(pGpu,
bIsForNvdec, patchMeta.ucodeId,
patchMeta.ucodeId,
pUcode->pImage, patchLoc, pUcode->size,
pSignatures, signaturesTotalSize, numSigs),
out);
@@ -459,27 +446,7 @@ kgspAllocateBooterLoadUcodeImage_IMPL
pBinArchive = kgspGetBinArchiveBooterLoadUcode_HAL(pKernelGsp);
NV_ASSERT_OR_RETURN(pBinArchive != NULL, NV_ERR_NOT_SUPPORTED);
return s_allocateUcodeFromBinArchive(pGpu, pKernelGsp, pBinArchive,
NV_FALSE /* i.e. not NVDEC */, ppBooterLoadUcode);
}
NV_STATUS
kgspAllocateBooterReloadUcodeImage_IMPL
(
OBJGPU *pGpu,
KernelGsp *pKernelGsp,
KernelGspFlcnUcode **ppBooterReloadUcode // out
)
{
const BINDATA_ARCHIVE *pBinArchive;
NV_ASSERT_OR_RETURN(ppBooterReloadUcode != NULL, NV_ERR_INVALID_ARGUMENT);
pBinArchive = kgspGetBinArchiveBooterReloadUcode_HAL(pKernelGsp);
NV_ASSERT_OR_RETURN(pBinArchive != NULL, NV_ERR_NOT_SUPPORTED);
return s_allocateUcodeFromBinArchive(pGpu, pKernelGsp, pBinArchive,
NV_TRUE /* i.e. NVDEC */, ppBooterReloadUcode);
return s_allocateUcodeFromBinArchive(pGpu, pKernelGsp, pBinArchive, ppBooterLoadUcode);
}
NV_STATUS
@@ -497,6 +464,5 @@ kgspAllocateBooterUnloadUcodeImage_IMPL
pBinArchive = kgspGetBinArchiveBooterUnloadUcode_HAL(pKernelGsp);
NV_ASSERT_OR_RETURN(pBinArchive != NULL, NV_ERR_NOT_SUPPORTED);
return s_allocateUcodeFromBinArchive(pGpu, pKernelGsp, pBinArchive,
NV_FALSE /* i.e. not NVDEC */, ppBooterUnloadUcode);
return s_allocateUcodeFromBinArchive(pGpu, pKernelGsp, pBinArchive, ppBooterUnloadUcode);
}


@@ -514,11 +514,13 @@ NV_STATUS GspMsgQueueReceiveStatus(MESSAGE_QUEUE_INFO *pMQI)
int nRet;
int i;
int nRetries;
int nMaxRetries = 3;
int nElements = 1; // Assume record fits in one 256-byte queue element for now.
NvU32 uElementSize = 0;
NvU32 seqMismatchDiff = NV_U32_MAX;
NV_STATUS nvStatus = NV_OK;
for (nRetries = 0; nRetries < 3; nRetries++)
for (nRetries = 0; nRetries < nMaxRetries; nRetries++)
{
pTgt = (NvU8 *)pMQI->pCmdQueueElement;
nvStatus = NV_OK;
@@ -587,10 +589,29 @@ NV_STATUS GspMsgQueueReceiveStatus(MESSAGE_QUEUE_INFO *pMQI)
// Retry if sequence number is wrong.
if (pMQI->pCmdQueueElement->seqNum != pMQI->rxSeqNum)
{
NV_PRINTF(LEVEL_ERROR, "Bad sequence number. Expected %u got %u.\n",
NV_PRINTF(LEVEL_ERROR, "Bad sequence number. Expected %u got %u. Possible memory corruption.\n",
pMQI->rxSeqNum, pMQI->pCmdQueueElement->seqNum);
// If we read an old piece of data, try to ignore it and move on.
if (pMQI->pCmdQueueElement->seqNum < pMQI->rxSeqNum)
{
// Make sure we're converging to the desired pMQI->rxSeqNum
if ((pMQI->rxSeqNum - pMQI->pCmdQueueElement->seqNum) < seqMismatchDiff)
{
NV_PRINTF(LEVEL_ERROR, "Attempting recovery: ignoring old package with seqNum=%u of %u elements.\n",
pMQI->pCmdQueueElement->seqNum, nElements);
seqMismatchDiff = pMQI->rxSeqNum - pMQI->pCmdQueueElement->seqNum;
nRet = msgqRxMarkConsumed(pMQI->hQueue, nElements);
if (nRet < 0)
{
NV_PRINTF(LEVEL_ERROR, "msgqRxMarkConsumed failed: %d\n", nRet);
}
nMaxRetries++;
}
}
nvStatus = NV_ERR_INVALID_DATA;
continue;
}
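The recovery path above skips a stale element only while the gap between the received and expected sequence numbers shrinks on each retry, and it grants one extra retry (`nMaxRetries++`) per skipped element, so the loop cannot spin forever on a queue that never converges. A standalone sketch of just the convergence rule (names hypothetical, detached from the queue machinery):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Decide whether a stale element (got < expected) may be skipped:
 * only if the gap is strictly smaller than on the previous retry,
 * i.e. we are converging toward the expected sequence number. */
static bool may_skip_stale(uint32_t expected, uint32_t got, uint32_t *prevDiff)
{
    if (got >= expected)
        return false;          /* not stale, not this recovery path */
    uint32_t diff = expected - got;
    if (diff >= *prevDiff)
        return false;          /* not converging: give up */
    *prevDiff = diff;
    return true;
}

/* Driver: feed three stale reads (7, 8, 8) against expected 10 and
 * count how many the rule lets us skip. The repeated 8 is rejected. */
static int demo_skips(void)
{
    uint32_t prev = UINT32_MAX;  /* mirrors seqMismatchDiff = NV_U32_MAX */
    int n = 0;
    if (may_skip_stale(10, 7, &prev)) n++;
    if (may_skip_stale(10, 8, &prev)) n++;
    if (may_skip_stale(10, 8, &prev)) n++;
    return n;
}
```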
