Compare commits


No commits in common. "releases/amd-18.20" and "dev" have entirely different histories.

9 changed files with 255 additions and 200 deletions

Jenkinsfile

@@ -1,34 +0,0 @@
pipeline {
    agent none
    stages {
        stage('Greeting') {
            steps {
                echo 'Hello Vulkan Open Source'
            }
        }
        stage('Builds') {
            parallel {
                stage('Build32') {
                    steps {
                        sh 'sfsdfsd'
                    }
                }
                stage('Build64') {
                    steps {
                        sh 'Build64'
                    }
                }
            }
        }
        stage('Tests') {
            steps {
                echo 'Testing starts'
            }
        }
        stage('Deploy') {
            steps {
                echo 'Deployment starts...'
            }
        }
    }
}

LICENSE.txt

@@ -1,5 +1,5 @@
The MIT License (MIT)
Copyright (c) 2017 Advanced Micro Devices, Inc.
Permission is hereby granted, free of charge, to any person obtaining a copy
@@ -18,4 +18,4 @@ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
SOFTWARE.

README.md

@@ -10,29 +10,31 @@ Shaders that compose a particular `VkPipeline` object are compiled as a single e
### Product Support
The AMD Open Source Driver for Vulkan is designed to support the following AMD GPUs:
* Radeon™ HD 7000 Series
* Radeon™ HD 8000M Series
* Radeon™ R5/R7/R9 200/300 Series
* Radeon™ RX 400/500 Series
* Radeon™ M200/M300/M400 Series
* Radeon™ RX Vega Series
* AMD FirePro™ Workstation Wx000/Wx100/Wx300 Series
* Radeon™ Pro WX x100 Series
* Radeon™ Pro 400/500 Series
* Radeon™ RX 7900/7800/7700/7600 Series
* Radeon™ RX 6900/6800/6700/6600/6500 Series
* Radeon™ RX 5700/5600/5500 Series
* Radeon™ Pro W5700/W5500 Series
> **Note**
> For pre-GFX10 GPUs, please use release v-2023.Q3.3 or older.
### Operating System Support
The AMD Open Source Driver for Vulkan is designed to support following distros on both the AMDGPU upstream driver stack and the [AMDGPU Pro driver stack](http://support.amd.com/en-us/kb-articles/Pages/Radeon-Software-for-Linux-Release-Notes.aspx):
* Ubuntu 16.04.3 (64-bit version)
* RedHat 7.4 (64-bit version)
The AMD Open Source Driver for Vulkan is designed to support the following distros and versions on both the AMDGPU upstream driver stack and the [AMDGPU Pro driver stack](https://www.amd.com/en/support/linux-drivers):
* Ubuntu 22.04 (amd64 version)
* Ubuntu 20.04 (amd64 version)
* RedHat 8.6 (x86-64 version)
* RedHat 9.0 (x86-64 version)
The driver has not been tested on other distros. You may try it out on other distros of your choice.
The driver has not been well tested on other distros and versions. You may try it out on other distros and versions of your choice.
> **Note**
> To run the Vulkan driver with the AMDGPU upstream driver stack on SI and CI generation GPUs, amdgpu.si_support and amdgpu.cik_support need to be enabled in the kernel (see the sketch below)
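A minimal sketch of how those module parameters could be set on the kernel command line via GRUB; the bootloader file path and the extra radeon.* parameters are assumptions, so adjust for your distro:
```
# Assumed GRUB-based setup: append the parameters to the kernel command line in
# /etc/default/grub, e.g.
#   GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amdgpu.si_support=1 amdgpu.cik_support=1 radeon.si_support=0 radeon.cik_support=0"
sudo update-grub    # on RedHat: sudo grub2-mkconfig -o /boot/grub2/grub.cfg
sudo reboot
```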
### Feature Support and Performance
The AMD Open Source Driver for Vulkan is designed to support the following features:
* Vulkan 1.1
* More than 30 extensions
* Vulkan 1.3
* More than 170 extensions
* [Radeon™ GPUProfiler](https://github.com/GPUOpen-Tools/Radeon-GPUProfiler) tracing
* Built-in debug and profiling tools
* Mid-command buffer preemption and SR-IOV virtualization
@@ -45,56 +47,118 @@ The following features and improvements are planned in future releases (Please r
### Known Issues
* CTS may hang in VK.synchronization.internally_synchronized_objects.pipeline_cache_compute with Linux kernel versions lower than 4.13
* If you are using the upstream stack, you may need to upgrade the kernel to version 5.3 or later and the firmware (under /lib/firmware/amdgpu/) to the matching version from https://git.kernel.org/pub/scm/linux/kernel/git/firmware/linux-firmware.git/tree/amdgpu, and then update the initramfs (sudo mkinitramfs -o /boot/initrd.img-`uname -r` `uname -r` or sudo mkinitcpio --generate /boot/initrd.img-`uname -r` `uname -r`)
* Timeline semaphores are not fully supported by the Linux kernel until version 5.5. You can install the [Vulkan timeline semaphore layer](https://github.com/KhronosGroup/Vulkan-ExtensionLayer) to enable the extension if you are using an earlier kernel version, as sketched below
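If you go the extension-layer route, enabling the layer for a single run might look like this; the layer name VK_LAYER_KHRONOS_timeline_semaphore and the build path are assumptions based on the Vulkan-ExtensionLayer project:
```
# Assumes Vulkan-ExtensionLayer has been built and its manifest directory is known
export VK_LAYER_PATH=<Vulkan-ExtensionLayer>/build/layers
export VK_INSTANCE_LAYERS=VK_LAYER_KHRONOS_timeline_semaphore
<your-vulkan-application>
```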
### How to Contribute
You are welcome to submit contributions of code to the AMD Open Source Driver for Vulkan.
The driver is built from source code in three repositories: [LLVM](https://github.com/GPUOpen-Drivers/llvm), [XGL](https://github.com/GPUOpen-Drivers/xgl) (including both Vulkan API translation and LLPC) and [PAL](https://github.com/GPUOpen-Drivers/pal).
The driver is built from source code in five repositories: [LLVM](https://github.com/GPUOpen-Drivers/llvm-project), [XGL](https://github.com/GPUOpen-Drivers/xgl), [LLPC](https://github.com/GPUOpen-Drivers/llpc), [GPURT](https://github.com/GPUOpen-Drivers/gpurt) and [PAL](https://github.com/GPUOpen-Drivers/pal).
For changes to LLVM, you should submit contribution to the [LLVM trunk](http://llvm.org/svn/llvm-project/llvm/trunk/). Commits there will be evaluated to merge into the amd-vulkan-master branch periodically.
For changes to LLVM, you should submit your contribution to the [LLVM trunk](https://reviews.llvm.org/). Commits there will be evaluated to merge into the amd-gfx-gpuopen-master branch periodically.
For changes to XGL or PAL, please [create a pull request](https://help.github.com/articles/creating-a-pull-request/) against the dev branch. After your change is reviewed and if it is accepted, it will be evaluated to merge into the master branch in a subsequent regular promotion.
For changes to XGL, LLPC, GPURT and PAL, please [create a pull request](https://help.github.com/articles/creating-a-pull-request/) against the **dev branch**. After your change is reviewed and if it is accepted, it will be evaluated to merge into the master branch in a subsequent regular promotion.
**IMPORTANT**: By creating a pull request, you agree to allow your contribution to be licensed by the project owners under the terms of the [MIT License](LICENSE.txt).
When contributing to XGL and PAL, your code should:
When contributing to XGL, LLPC, GPURT and PAL, your code should:
* Match the style of nearby existing code. Your code may be edited to comply with our coding standards when it is merged into the master branch.
* Avoid adding new dependencies, including dependencies on STL.
Please make each contribution reasonably small. If you would like to make a big contribution, like a new feature or extension, please raise an issue first so the work can be planned, evaluated and reviewed.
> **Note:** Since PAL is a shared component that must support other APIs, other operating systems, and pre-production hardware, you might be asked to revise your PAL change for reasons that may not be obvious from a pure Linux Vulkan driver perspective.
> **Note**
> Since PAL is a shared component that must support other APIs, other operating systems, and pre-production hardware, you might be asked to revise your PAL change for reasons that may not be obvious from a pure Linux Vulkan driver perspective.
## Build Instructions
### System Requirements
It is recommended to install 16GB RAM in your build system.
It is recommended to install at least 16GB RAM in your build system.
### Build System
* CMake 3.21 or newer is required. [Download](https://cmake.org/download/) and install a newer version if your CMake is older than 3.21.
* A compiler with C++20 support is required, such as gcc 9 or clang 11.
* Ninja is required.
### Install Dev and Tools Packages
#### Ubuntu
```
sudo apt-get install build-essential python3 cmake curl g++-multilib gcc-multilib
sudo apt-get install libx11-dev libxcb1-dev x11proto-dri2-dev libxcb-dri3-dev libxcb-dri2-0-dev libxcb-present-dev libxshmfence-dev libx11-dev:i386 libxcb1-dev:i386 x11proto-dri2-dev:i386 libxcb-dri3-dev:i386 libxcb-dri2-0-dev:i386 libxcb-present-dev:i386 libxshmfence-dev:i386 libwayland-dev libwayland-dev:i386
sudo apt-get install build-essential cmake curl g++-multilib gcc-multilib git ninja-build pkg-config python3 python3-jinja2 python3-ruamel.yaml
```
##### 64-bit
```
sudo apt-get install libssl-dev libx11-dev libxcb1-dev x11proto-dri2-dev libxcb-dri3-dev libxcb-dri2-0-dev libxcb-present-dev libxshmfence-dev libxrandr-dev libwayland-dev
```
##### 32-bit
```
sudo dpkg --add-architecture i386
sudo apt-get install libssl-dev:i386 libx11-dev:i386 libxcb1-dev:i386 libxcb-dri3-dev:i386 libxcb-dri2-0-dev:i386 libxcb-present-dev:i386 libxshmfence-dev:i386 libwayland-dev libwayland-dev:i386 libxrandr-dev:i386
```
#### RedHat
##### 64-bit
```
wget https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
sudo yum localinstall epel-release-latest-7.noarch.rpm
sudo yum update
sudo yum -y install gcc-c++ cmake3 python34 curl glibc-devel glibc-devel.i686 libstdc++-devel libstdc++-devel.i686 libxcb-devel libxcb-devel.i686 libX11-devel libX11-devel.i686 libxshmfence-devel libxshmfence-devel.i686
sudo yum -y install openssl-devel gcc-c++ python3 python3-pip curl glibc-devel libstdc++-devel libxcb-devel libX11-devel libxshmfence-devel libXrandr-devel wayland-devel
pip3 install jinja2 ruamel.yaml
```
##### 32-bit
```
sudo yum -y install openssl-devel.i686 gcc-c++ python3 python3-pip curl glibc-devel.i686 libstdc++-devel.i686 libxcb-devel.i686 libX11-devel.i686 libxshmfence-devel.i686 libXrandr-devel.i686 wayland-devel.i686
pip3 install jinja2 ruamel.yaml
```
### Install shader compiler tools
Shader compiler tools such as [DirectXShaderCompiler](https://github.com/microsoft/DirectXShaderCompiler) and [glslang](https://github.com/KhronosGroup/glslang) need to be installed to build raytracing support.
#### Ubuntu 20.04
It is recommended to install them from [VulkanSDK](https://packages.lunarg.com/) 1.3.280 or higher.
Ubuntu 20.04 (Focal Fossa)
```
wget -qO - https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo apt-key add -
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-1.3.280-focal.list https://packages.lunarg.com/vulkan/1.3.280/lunarg-vulkan-1.3.280-focal.list
sudo apt update
sudo apt install dxc glslang-tools
```
#### Others
Get the [DirectXShaderCompiler](https://github.com/microsoft/DirectXShaderCompiler) and [glslang](https://github.com/KhronosGroup/glslang) source code and build the tools locally.
```
#!/bin/bash
if [ ! -d DirectXShaderCompiler ]; then
    git clone --depth=1 -b release-1.7.2308 https://github.com/microsoft/DirectXShaderCompiler.git
fi
if [ ! -d glslang ]; then
    git clone --depth=1 -b sdk-1.3.280 https://github.com/KhronosGroup/glslang.git
fi
cd DirectXShaderCompiler
git submodule init
git submodule update
cmake -H. -Bbuilds -GNinja -DCMAKE_BUILD_TYPE=Release -C ./cmake/caches/PredefinedParams.cmake
cmake --build builds
cd ..
cd glslang
cmake -H. -Bbuilds -DCMAKE_BUILD_TYPE=Release -DCMAKE_INSTALL_PREFIX='builds/install'
cd builds
make -j8 install
cd ../../
```
Set the PATH and LD_LIBRARY_PATH environment variables before building the amdvlk driver.
```
export PATH=<DirectXShaderCompiler>/builds/bin:<glslang>/install/bin:$PATH
export LD_LIBRARY_PATH=<DirectXShaderCompiler>/builds/lib:$LD_LIBRARY_PATH
```
### Get Repo Tools
```
mkdir ~/bin
curl https://storage.googleapis.com/git-repo-downloads/repo > ~/bin/repo
# Replacing python with python3 is only needed on Ubuntu 20.04 if the 'python' executable is not available
sed -i s/python/python3/ ~/bin/repo
chmod a+x ~/bin/repo
export PATH=~/bin:"$PATH"
```
### Get Source Code
@@ -102,87 +166,49 @@ chmod a+x ~/bin/repo
```
mkdir vulkandriver
cd vulkandriver
~/bin/repo init -u https://github.com/GPUOpen-Drivers/AMDVLK.git -b master
~/bin/repo sync
repo init -u https://github.com/GPUOpen-Drivers/AMDVLK.git -b master
repo sync
```
> **Note:** Source code in dev branch can be gotten by using "-b dev" in the "repo init" command
> **Note**
> The source code of the dev branch can be fetched by using "-b dev" in the "repo init" command, as shown below.
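For example, to fetch the dev branch instead of master:
```
repo init -u https://github.com/GPUOpen-Drivers/AMDVLK.git -b dev
repo sync
```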
### 64-bit Build
#### Ubuntu
### Build Driver and Generate JSON Files
```
cd <root of vulkandriver>/drivers/xgl
cmake -G Ninja -S xgl -B builds/Release64
cmake --build builds/Release64
cmake -H. -Bbuilds/Release64
cd builds/Release64
make -j$(nproc)
cmake -G Ninja -S xgl -B builds/Release32 -DCMAKE_C_FLAGS=-m32 -DCMAKE_CXX_FLAGS=-m32
cmake --build builds/Release32
```
#### RedHat
```
cd <root of vulkandriver>/drivers/xgl
cmake3 -H. -Bbuilds/Release64
cd builds/Release64
make -j$(nproc)
```
### 32-bit Build
#### Ubuntu
```
cd <root of vulkandriver>/drivers/xgl
cmake -H. -Bbuilds/Release -DCMAKE_C_FLAGS=-m32 -DCMAKE_CXX_FLAGS=-m32
cd builds/Release
make -j$(nproc)
```
#### RedHat
```
cd <root of vulkandriver>/drivers/xgl
cmake3 -H. -Bbuilds/Release -DCMAKE_C_FLAGS=-m32 -DCMAKE_CXX_FLAGS=-m32
cd builds/Release
make -j$(nproc)
```
> **Note:**
* If the build runs into errors like "collect2: fatal error: ld terminated with signal 9 [Killed]" due to out of memory, you could try with reducing the number of threads in "make" command.
* Debug build can be done by using -DCMAKE_BUILD_TYPE=Debug.
* To enable Wayland support, you need to build the driver by using -DBUILD_WAYLAND_SUPPORT=ON and install the Wayland [WSA library](https://github.com/GPUOpen-Drivers/wsa).
> **Note**
> * For debug build, use `-DCMAKE_BUILD_TYPE=Debug -DLLVM_PARALLEL_LINK_JOBS=2` (Linking a debug build of llvm is very memory intensive, so we use only two parallel jobs).
> * If you want to build tools (such as [amdllpc](https://github.com/GPUOpen-Drivers/llpc/blob/dev/llpc/docs/amdllpc.md)) together with the driver, add `-m build_with_tools.xml` to the repo init command and add the build option `-DXGL_BUILD_TOOLS=ON` (see the combined example below).
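A possible way to combine these options; the build directory name is illustrative, and the cmake commands assume they are run from the directory containing the xgl checkout:
```
# Fetch the tree with the tools manifest so the tool sources are also synced
repo init -u https://github.com/GPUOpen-Drivers/AMDVLK.git -b master -m build_with_tools.xml
repo sync

# Debug build with tools enabled; limit LLVM link jobs to reduce memory pressure
cmake -G Ninja -S xgl -B builds/Debug64 -DCMAKE_BUILD_TYPE=Debug -DLLVM_PARALLEL_LINK_JOBS=2 -DXGL_BUILD_TOOLS=ON
cmake --build builds/Debug64
```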
## Installation Instructions
### Install Vulkan SDK
Refer to installation instructions [here](http://support.amd.com/en-us/kb-articles/Pages/Install-LunarG-Vulkan-SDK.aspx).
You can download and install the SDK package [here](https://vulkan.lunarg.com/sdk/home).
### Uninstall Previously Installed JSON Files
Please make sure all JSON files for AMD GPUs under the folders below are uninstalled:
```
/etc/vulkan/icd.d
/usr/share/vulkan/icd.d
```
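For instance, you could list those directories and remove any AMD ICD manifests found there; the amd_icd*.json naming pattern is an assumption, so inspect the files before deleting:
```
ls /etc/vulkan/icd.d /usr/share/vulkan/icd.d
sudo rm -f /etc/vulkan/icd.d/amd_icd*.json /usr/share/vulkan/icd.d/amd_icd*.json
```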
### Copy Driver and JSON Files
### Install dependencies
#### Ubuntu
```
sudo cp <root of vulkandriver>/drivers/xgl/builds/Release64/icd/amdvlk64.so /usr/lib/x86_64-linux-gnu/
sudo cp <root of vulkandriver>/drivers/xgl/builds/Release/icd/amdvlk32.so /usr/lib/i386-linux-gnu/
sudo cp <root of vulkandriver>/drivers/AMDVLK/json/Ubuntu/* /etc/vulkan/icd.d/
sudo apt install libssl1.1
```
#### RedHat
```
sudo cp <root of vulkandriver>/drivers/xgl/builds/Release64/icd/amdvlk64.so /usr/lib64/
sudo cp <root of vulkandriver>/drivers/xgl/builds/Release/icd/amdvlk32.so /usr/lib/
sudo cp <root of vulkandriver>/drivers/AMDVLK/json/Redhat/* /etc/vulkan/icd.d/
sudo yum install openssl-libs
```
### Install Driver and JSON Files
```
sudo cmake --install builds/Release64 --component icd
sudo cmake --install builds/Release32 --component icd
```
> If you want to install the driver to a customized directory, you can add "-DCMAKE_INSTALL_PREFIX={installation directory}" to the cmake command. JSON files will be installed to /etc/vulkan/icd.d while other files will be installed to the installation directory you specified.
> If RADV is also installed on the system, the AMDVLK driver will be enabled by default after installation. You can switch between AMDVLK and RADV with the environment variable AMD_VULKAN_ICD=AMDVLK or AMD_VULKAN_ICD=RADV, as sketched below.
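A hedged sketch combining both notes; the install prefix and the vkcube test application are illustrative:
```
# Install to a custom prefix; JSON files still go to /etc/vulkan/icd.d
cmake -G Ninja -S xgl -B builds/Release64 -DCMAKE_INSTALL_PREFIX=/opt/amdvlk
cmake --build builds/Release64
sudo cmake --install builds/Release64 --component icd

# Pick the driver per run when RADV is also installed
AMD_VULKAN_ICD=RADV vkcube
AMD_VULKAN_ICD=AMDVLK vkcube
```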
> **Note:** The remaining steps are only required when running the AMDGPU upstream driver stack.
### Turn on DRI3 and disable modesetting X driver
@@ -203,60 +229,138 @@ Driver "modesetting"
```
### Required Settings
On the AMDGPU upstream driver stack, the max number of command streams per submission **MUST** be limited to 4 (the default setting in AMD Open Source driver for Vulkan is 16). This can be accomplished via the [Runtime Settings](#runtime-settings) mechanism by adding the following line to /etc/amd/amdPalSettings.cfg:
On the AMDGPU upstream driver stack with libdrm version lower than 2.4.92, the max number of IB per submission **MUST** be limited to 4 (the default setting in AMD Open Source driver for Vulkan is 16). This can be accomplished via the [Runtime Settings](#runtime-settings) mechanism by adding the following line to amdPalSettings.cfg:
```
MaxNumCmdStreamsPerSubmit,4
CommandBufferCombineDePreambles,1
```
### Install with pre-built driver
You can generate the installation package with the command below while building the driver:
#### Ubuntu
```
cmake -G Ninja -S xgl -B builds/Release64 [-DPACKAGE_VERSION=package version]
cmake --build builds/Release64 --target makePackage
```
#### RedHat
```
cmake -G Ninja -S xgl -B builds/Release64 [-DPACKAGE_VERSION=package version]
cmake --build builds/Release64 --target makePackage
```
You can also download a pre-built package from https://github.com/GPUOpen-Drivers/AMDVLK/releases for each code promotion in the master branch.
The installation instructions are below:
#### Ubuntu 20.04, 22.04
```
sudo dpkg -r amdvlk # If old version is installed on the machine, remove it first
sudo dpkg -i amdvlk_x.x.x_amd64.deb
sudo apt-get -f install
```
#### RedHat 8.6, 9.0
```
sudo rpm -e amdvlk # If old version is installed on the machine, remove it first
sudo rpm -i amdvlk-x.x.x.x86_64.rpm
```
For Ubuntu, you could also install the latest driver build from https://repo.radeon.com:
```
sudo wget -qO - http://repo.radeon.com/amdvlk/apt/debian/amdvlk.gpg.key | sudo apt-key add -
sudo sh -c 'echo deb [arch=amd64,i386] http://repo.radeon.com/amdvlk/apt/debian/ bionic main > /etc/apt/sources.list.d/amdvlk.list'
sudo apt-get remove amdvlk # If old version is installed on the machine, remove it first
sudo apt update
sudo apt-get install amdvlk
```
## Runtime Settings
The driver exposes many settings that can customize the driver's behavior and facilitate debugging. Add/edit settings in /etc/amd/amdPalSettings.cfg, formatted with one `name,value` pair per line. Some example settings are listed below:
The driver exposes many settings that can customize the driver's behavior and facilitate debugging. You can add/edit settings in an amdVulkanSettings.cfg or amdPalSettings.cfg file under one of the paths below, formatted with one `name,value` pair per line:
* /etc/amd
* $AMD_CONFIG_DIR
| Setting Name | Valid Values | Comment |
| ------------------------ | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `ShaderCacheMode` | 0: disable cache<br/>1: runtime cache<br/>2: cache to disk | Runtime cache is the default mode. |
| `IFH` | 0: default<br/>1: drop all submits<br/> | Infinitely Fast Hardware. Submit calls are dropped before being sent to hardware. Useful for measuring CPU-limited performance. |
| `EnableVmAlwaysValid` | 0: disable<br/>1: default<br/>2: force enable<br/> | 1 is the default setting which enables the VM-always-valid feature for kernel 4.16 and above. The feature can reduce command buffer submission overhead related to virtual memory management. |
| `IdleAfterSubmitGpuMask` | Bitmask of GPUs (i.e., bit 0 is GPU0, etc.) | Forces the CPU to immediately wait for each GPU submission to complete on the specified set of GPUs. |
Some example settings are listed below:
*All* available settings can be determined by examining the .cfg source files that define them.
| Setting Name | Valid Values | Comment |
| ------------------------------ | ----------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------- |
| `AllowVkPipelineCachingToDisk` | 0: disallow<br/>1: default<br/> | 1 is default value which enables Pal's archive-file based caching.<br/>The archive-file is stored under ~/.cache/AMD/VkCache. |
| `ShaderCacheMode` | 0: disable cache<br/>1: runtime cache<br/>2: cache to disk | Runtime cache is the default mode. For "cache to disk", the cache file is generated under $AMD_SHADER_DISK_CACHE_PATH/AMD/LlpcCache or $XDG_CACHE_HOME/AMD/LlpcCache or $HOME/.cache/AMD/LlpcCache |
| `IFH` | 0: default<br/>1: drop all submits<br/> | Infinitely Fast Hardware. Submit calls are dropped before being sent to hardware. Useful for measuring CPU-limited performance. |
| `EnableVmAlwaysValid` | 0: disable<br/>1: default<br/>2: force enable<br/> | 1 is the default setting which enables the VM-always-valid feature for kernel 4.16 and above. The feature can reduce command buffer submission overhead related to virtual memory management. |
| `IdleAfterSubmitGpuMask` | Bitmask of GPUs (i.e., bit 0 is GPU0, etc.) | Forces the CPU to immediately wait for each GPU submission to complete on the specified set of GPUs. |
*All* available settings can be determined by examining the source files below that define them.
* .../xgl/icd/settings/settings.cfg (API layer settings)
* .../pal/src/core/settings.cfg (PAL hardware-independent settings)
* .../pal/src/core/hw/gfxip/gfx6/gfx6PalSettings.cfg (PAL GFX6-8 settings)
* .../pal/src/core/hw/gfxip/gfx9/gfx9PalSettings.cfg (PAL GFX9+ settings)
* .../pal/src/core/settings_core.json (PAL hardware-independent settings)
* .../pal/src/core/hw/gfxip/gfx6/settings_gfx6.json (PAL GFX6-8 settings)
* .../pal/src/core/hw/gfxip/gfx9/settings_gfx9.json (PAL GFX9+ settings)
Runtime settings are only read at device initialization, and cannot be changed without restarting the application. If running on a system with multiple GPUs, the same settings will apply to all of them. Lines in the settings file that start with `;` will be treated as comments.
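For illustration, a small settings file using entries from the table above might look like this (values are examples only):
```
; /etc/amd/amdPalSettings.cfg -- lines starting with ';' are comments
AllowVkPipelineCachingToDisk,1
ShaderCacheMode,1
IFH,0
```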
## Enable extensions under development
The extensions under development are not enabled by default in the driver. You can enable them through an environment variable:
```
export AMDVLK_ENABLE_DEVELOPING_EXT="<extension1-name> [<extension2-name>...]"
```
or
```
export AMDVLK_ENABLE_DEVELOPING_EXT="all"
```
The extension name is case-insensitive.
## PAL GpuProfiler Layer
The GpuProfiler is an optional layer that is designed to intercept the PAL interface to provide basic GPU profiling support. Currently, this layer is controlled exclusively through runtime settings and outputs its results to file.
You can use the following [Runtime Settings](#runtime-settings) to generate a .csv file with GPU timings of work performed during the designated frames:
You can use the following [Runtime Settings](#runtime-settings) to generate .csv files with GPU timings of work performed during the designated frames of an application (one file for each frame):
| Setting Name | Value | Comment |
| -------------------------------- | -------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `GpuProfilerMode` | 0: disable<br/>1: enable with sqtt off<br/>2: enable with sqtt for thread trace<br/>3: enable with sqtt for RGP | Enables and sets the SQTT mode for the GPU performance profiler layer. Actual capture of performance data must be specified via frame number with GpuProfilerStartFrame or by holding shift-F11. |
| `GpuProfilerLogDirectory` | <nobr>&lt;directory-path></nobr> | Must be a directory that your application has write permissions for. |
| `GpuProfilerGranularity` | 0: per-draw<br/>1: per-cmdbuf | Defines what is measured/profiled. *Per-draw* times individual commands (such as draw, dispatch, etc.) inside command buffers, while *per-cmdbuf* only profiles entire command buffers in aggregate. |
| `GpuProfilerStartFrame` | Positive integer | First frame to capture data for. If StartFrame and FrameCount are not set, all frames will be profiled. |
| `GpuProfilerFrameCount` | Positive integer | Number of frames to capture data for. |
| `GpuProfilerRecordPipelineStats` | 0, 1 | Gathers pipeline statistic query data per entry if enabled. |
| `GpuProfilerMode` | 0: disable<br/>1: enable with sqtt off<br/>2: enable with sqtt for thread trace<br/>3: enable with sqtt for RGP | Enables and sets the SQTT mode for the GPU performance profiler layer. Actual capture of performance data must be specified via frame number with GpuProfilerConfig_StartFrame or by pressing shift-F11. |
| `GpuProfilerConfig.LogDirectory` | <nobr>&lt;directory-path></nobr> | The directory path is relative to $AMD_DEBUG_DIR or $TMPDIR or /var/tmp/, default value is "amdpal/". Your application must have write permissions to the directory. The profiling logs are output to a subdirectory that is named in the format like <nobr>&lt;AppName></nobr>_<nobr>&lt;yyyy-MM-dd></nobr>_<nobr>&lt;HH:mm:ss></nobr>. |
| `GpuProfilerConfig.Granularity` | 0: per-draw<br/>1: per-cmdbuf | Defines what is measured/profiled. *Per-draw* times individual commands (such as draw, dispatch, etc.) inside command buffers, while *per-cmdbuf* only profiles entire command buffers in aggregate. |
| `GpuProfilerConfig.StartFrame` | Positive integer | First frame to capture data for. If StartFrame and FrameCount are not set, all frames will be profiled. |
| `GpuProfilerConfig.FrameCount` | Positive integer | Number of frames to capture data for. |
| `GpuProfilerConfig.RecordPipelineStats` | 0, 1 | Gathers pipeline statistic query data per entry if enabled. |
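As an example, capturing four frames of RGP-style data starting at frame 100 could be configured like this; the frame numbers and directory name are illustrative:
```
GpuProfilerMode,3
GpuProfilerConfig.LogDirectory,profiling
GpuProfilerConfig.Granularity,0
GpuProfilerConfig.StartFrame,100
GpuProfilerConfig.FrameCount,4
```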
You can use the script [timingReport.py](https://github.com/GPUOpen-Drivers/pal/tree/master/tools/gpuProfilerTools/timingReport.py) to analyze the profiling log:
```
python timingReport.py <profiling_log_subdirectory>
```
## Dump Pipelines and Shaders
The output of timingReport.py includes information about the top pipelines, like below:
```
Top Pipelines (>= 1%)
Compiler Hash | Type | Avg. Call Count | Avg. GPU Time [us] | Avg. Frame %
1. 0xd91d15e42d62dcbb | VsPs | 43 | 11,203.15 | 10.20 %
2. 0x724e9af55f2adf1b | Cs | 1 | 9,347.50 | 8.51 %
3. 0x396e5ad6f7a789f7 | VsHsDsPs | 468 | 8,401.35 | 7.65 %
```
You can add the following settings to amdPalSettings.cfg to dump the information of each pipeline:
```
EnablePipelineDump,1
PipelineDumpDir,<dump_dir_path>
```
PipelineDumpDir is a sub-path relative to $AMD_DEBUG_DIR or $TMPDIR or /var/tmp/; the default value is "spvPipeline/". The pipeline dump file is named in the format Pipeline<nobr>&lt;Type></nobr>_<nobr>&lt;Compiler_Hash></nobr>.pipe. For example, the top pipeline listed above is dumped to PipelineVsFs_0xD91D15E42D62DCBB.pipe. The shaders referenced by each pipeline are also dumped to .spv files.
## PAL Debug Overlay
PAL's debug overlay can be enabled to display real time statistics and information on top of a running application. This includes a rolling FPS average, CPU and GPU frame times, and a ledger tracking how much video memory has been allocated from each available heap. Benchmarking (i.e., "Benchmark (F11)") is currently unsupported.
| Setting Name | Value | Comment |
| ------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| `DebugOverlayEnabled` | 0, 1 | Enables the debug overlay. |
| `DebugOverlayLocation` | <nobr>0: top-left</nobr><br/><nobr>1: top-right</nobr><br/><nobr>2: bottom-left</nobr><br/><nobr>3: bottom-right</nobr> | Determines where the overlay text should be displayed. Can be used to avoid collision with important rendering by the application. |
| `PrintFrameNumber` | 0, 1 | Reports the current frame number. Useful when determining a good frame range for profiling with the GpuProfiler layer. |
| `TimeGraphEnable` | 0, 1 | Enables rendering of a graph of recent CPU and GPU frame times. |
| Setting Name | Value | Comment |
| ------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------- |
| `DebugOverlayEnabled` | 0, 1 | Enables the debug overlay. |
| `DebugOverlayConfig.DebugOverlayLocation` | <nobr>0: top-left</nobr><br/><nobr>1: top-right</nobr><br/><nobr>2: bottom-left</nobr><br/><nobr>3: bottom-right</nobr> | Determines where the overlay text should be displayed. Can be used to avoid collision with important rendering by the application. |
| `DebugOverlayConfig.PrintFrameNumber` | 0, 1 | Reports the current frame number. Useful when determining a good frame range for profiling with the GpuProfiler layer. |
| `DebugOverlayConfig.TimeGraphEnable` | 0, 1 | Enables rendering of a graph of recent CPU and GPU frame times. |
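For example, to enable the overlay in the bottom-right corner together with the frame number and time graph (values taken from the table above):
```
DebugOverlayEnabled,1
DebugOverlayConfig.DebugOverlayLocation,3
DebugOverlayConfig.PrintFrameNumber,1
DebugOverlayConfig.TimeGraphEnable,1
```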
## Third Party Software
The AMD Open Source Driver for Vulkan contains code written by third parties.
* LLVM is distributed under the terms of the University of Illinois/NCSA Open Source License. See LICENSE.TXT file in the top directory of the LLVM repository.
* Please see the README.md file in the [PAL](https://github.com/GPUOpen-Drivers/pal) and [XGL](https://github.com/GPUOpen-Drivers/xgl) repositories for information on third party software used by those libraries.
* [LLVM](https://github.com/GPUOpen-Drivers/llvm-project) is distributed under the Apache License v2.0 with LLVM Exceptions. See LICENSE.TXT file in the top directory of the LLVM repository.
* [MetroHash](https://github.com/GPUOpen-Drivers/MetroHash) is distributed under the terms of Apache License 2.0. See LICENSE file in the top directory of the MetroHash repository.
* [CWPack](https://github.com/GPUOpen-Drivers/CWPack) is distributed under the terms of the MIT License. See LICENSE file in the top directory of the CWPack repository.
* Please see the README.md file in the [PAL](https://github.com/GPUOpen-Drivers/pal), [LLPC](https://github.com/GPUOpen-Drivers/llpc), [GPURT](https://github.com/GPUOpen-Drivers/gpurt) and [XGL](https://github.com/GPUOpen-Drivers/xgl) repositories for information on third party software used by those libraries.
#### DISCLAIMER
@@ -264,9 +368,8 @@ The information contained herein is for informational purposes only, and is subj
AMD, the AMD Arrow logo, Radeon, FirePro, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective companies.
Vega is a codename for AMD architecture, and is not a product name.
Vega is a codename for AMD architecture, and is not a product name.
Linux is the registered trademark of Linus Torvalds in the U.S. and other countries.
Vulkan and the Vulkan logo are registered trademarks of the Khronos Group, Inc.

build_with_tools.xml

@@ -0,0 +1,14 @@
<?xml version="1.0" encoding="UTF-8"?>
<manifest>
  <include name="default.xml" />
  <remote name="khronos-github"
          fetch="https://github.com/KhronosGroup/" />
  <project path="drivers/spvgen" name="spvgen" revision="dev"/>
  <project path="drivers/third_party/glslang" name="glslang.git" remote="khronos-github" revision="3225778615fd9d7e23fd11b71a05097d59ba0247"/>
  <project path="drivers/third_party/SPIRV-tools" name="SPIRV-Tools.git" remote="khronos-github" revision="fe7bae090629f64115eb41aa8c41df419cef9159"/>
  <project path="drivers/third_party/SPIRV-tools/external/spirv-headers" name="SPIRV-Headers.git" remote="khronos-github" revision="7d500c4d75ae3fbd37e1d5a20008ca9c8ee3c860"/>
  <project path="drivers/third_party/SPIRV-cross" name="SPIRV-Cross.git" remote="khronos-github" revision="7d92d7d8794b102f550ad33dbedbd82203b755a9"/>
</manifest>

default.xml

@@ -2,16 +2,20 @@
<manifest>
<remote name="vulkan-github"
fetch="https://github.com/GPUOpen-Drivers" />
fetch="." />
<default revision="releases/amd-18.20"
<default revision="master"
remote="vulkan-github"
sync-j="8"
sync-c="true" />
sync-c="true"
sync-s="true" />
<project path="drivers/xgl" name="xgl" revision="releases/amd-18.20"/>
<project path="drivers/pal" name="pal" revision="releases/amd-18.20"/>
<project path="drivers/AMDVLK" name="AMDVLK" revision="releases/amd-18.20"/>
<project path="drivers/llvm" name="llvm" revision="releases/amd-18.20-vulkan"/>
<project path="drivers/xgl" name="xgl" revision="dev"/>
<project path="drivers/pal" name="pal" revision="dev"/>
<project path="drivers/llpc" name="llpc" revision="dev"/>
<project path="drivers/gpurt" name="gpurt" revision="dev"/>
<project path="drivers/llvm-project" name="llvm-project" revision="amd-gfx-gpuopen-dev"/>
<project path="drivers/third_party/metrohash" name="MetroHash" revision="amd-master"/>
<project path="drivers/third_party/cwpack" name="CWPack" revision="amd-master"/>
</manifest>


@@ -1,8 +0,0 @@
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "/usr/lib/amdvlk32.so",
        "api_version": "1.1.70"
    }
}


@@ -1,8 +0,0 @@
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "/usr/lib64/amdvlk64.so",
        "api_version": "1.1.70"
    }
}


@@ -1,8 +0,0 @@
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "/usr/lib/i386-linux-gnu/amdvlk32.so",
        "api_version": "1.1.70"
    }
}


@@ -1,8 +0,0 @@
{
    "file_format_version": "1.0.0",
    "ICD": {
        "library_path": "/usr/lib/x86_64-linux-gnu/amdvlk64.so",
        "api_version": "1.1.70"
    }
}